Atiyah–Singer index theorem

In differential geometry, the Atiyah–Singer index theorem, proved by Michael Atiyah and Isadore Singer (1963),[1] states that for an elliptic differential operator on a compact manifold, the analytical index (related to the dimension of the space of solutions) is equal to the topological index (defined in terms of some topological data). It includes many other theorems, such as the Chern–Gauss–Bonnet theorem and the Riemann–Roch theorem, as special cases, and has applications to theoretical physics.[2][3] Among its consequences are the Chern–Gauss–Bonnet theorem, the Grothendieck–Riemann–Roch theorem, the Hirzebruch signature theorem, and Rokhlin's theorem.

History

The index problem for elliptic differential operators was posed by Israel Gel'fand.[4] He noticed the homotopy invariance of the index, and asked for a formula for it by means of topological invariants. Some of the motivating examples included the Riemann–Roch theorem and its generalization the Hirzebruch–Riemann–Roch theorem, and the Hirzebruch signature theorem. Friedrich Hirzebruch and Armand Borel had proved the integrality of the Â genus of a spin manifold, and Atiyah suggested that this integrality could be explained if it were the index of the Dirac operator (which was rediscovered by Atiyah and Singer in 1961). The Atiyah–Singer theorem was announced in 1963.[1] The proof sketched in this announcement was never published by them, though it appears in Palais's book.[5] It also appears in the "Séminaire Cartan-Schwartz 1963/64",[6] which was held in Paris simultaneously with the seminar led by Richard Palais at Princeton University. The last talk in Paris was by Atiyah on manifolds with boundary.
Their first published proof[7] replaced the cobordism theory of the first proof with K-theory, and they used this to give proofs of various generalizations in another sequence of papers.[8]

• 1965: Sergey P. Novikov published his results on the topological invariance of the rational Pontryagin classes on smooth manifolds.[9]
• Robion Kirby and Laurent C. Siebenmann's results,[10] combined with René Thom's paper,[11] proved the existence of rational Pontryagin classes on topological manifolds. The rational Pontryagin classes are essential ingredients of the index theorem on smooth and topological manifolds.
• 1969: Michael Atiyah defines abstract elliptic operators on arbitrary metric spaces. Abstract elliptic operators became protagonists in Kasparov's theory and Connes's noncommutative differential geometry.[12]
• 1971: Isadore Singer proposes a comprehensive program for future extensions of index theory.[13]
• 1972: Gennadi G. Kasparov publishes his work on the realization of K-homology by abstract elliptic operators.[14]
• 1973: Atiyah, Raoul Bott, and Vijay Patodi gave a new proof of the index theorem[15] using the heat equation, described in a paper by Melrose.[16]
• 1977: Dennis Sullivan establishes his theorem on the existence and uniqueness of Lipschitz and quasiconformal structures on topological manifolds of dimension different from 4.[17]
• 1983: Ezra Getzler,[18] motivated by ideas of Edward Witten[19] and Luis Alvarez-Gaumé, gave a short proof of the local index theorem for operators that are locally Dirac operators; this covers many of the useful cases.
• 1983: Nicolae Teleman proves that the analytical indices of signature operators with values in vector bundles are topological invariants.[20]
• 1984: Teleman establishes the index theorem on topological manifolds.[21]
• 1986: Alain Connes publishes his fundamental paper on noncommutative geometry.[22]
• 1989: Simon K. Donaldson and Sullivan study Yang–Mills theory on quasiconformal manifolds of dimension 4. They introduce the signature operator S defined on differential forms of degree two.[23]
• 1990: Connes and Henri Moscovici prove the local index formula in the context of non-commutative geometry.[24]
• 1994: Connes, Sullivan, and Teleman prove the index theorem for signature operators on quasiconformal manifolds.[25]

Notation

• X is a compact smooth manifold (without boundary).
• E and F are smooth vector bundles over X.
• D is an elliptic differential operator from E to F. So in local coordinates it acts as a differential operator, taking smooth sections of E to smooth sections of F.

Symbol of a differential operator

If D is a differential operator on a Euclidean space of order n in k variables $x_{1},\dots ,x_{k}$, then its symbol is the function of 2k variables $x_{1},\dots ,x_{k},y_{1},\dots ,y_{k}$, given by dropping all terms of order less than n and replacing $\partial /\partial x_{i}$ by $y_{i}$. So the symbol is homogeneous in the variables y, of degree n. The symbol is well defined even though $\partial /\partial x_{i}$ does not commute with $x_{i}$, because we keep only the highest order terms and differential operators commute "up to lower-order terms". The operator is called elliptic if the symbol is nonzero whenever at least one y is nonzero.

Example: The Laplace operator in k variables has symbol $y_{1}^{2}+\cdots +y_{k}^{2}$, and so is elliptic, as this is nonzero whenever any of the $y_{i}$ are nonzero. The wave operator has symbol $-y_{1}^{2}+\cdots +y_{k}^{2}$, which is not elliptic if $k\geq 2$, as the symbol vanishes for some non-zero values of the $y_{i}$.

The symbol of a differential operator of order n on a smooth manifold X is defined in much the same way using local coordinate charts, and is a function on the cotangent bundle of X, homogeneous of degree n on each cotangent space.
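This recipe (drop the lower-order terms, substitute $y_i$ for $\partial /\partial x_i$) is mechanical, and a small sketch can make it concrete. The snippet below is an illustration of ours, not notation from the text: it encodes a constant-coefficient operator as a map from multi-indices to coefficients, evaluates its principal symbol, and reproduces the Laplace and wave operator examples.

```python
# Illustrative sketch: a constant-coefficient operator is encoded as a dict
# mapping multi-indices (powers of d/dx_1, ..., d/dx_k) to coefficients.
def principal_symbol(op):
    """Return y -> symbol(y): keep only top-order terms, replace d/dx_i by y_i."""
    order = max(sum(alpha) for alpha in op)            # the order n of the operator
    top = {a: c for a, c in op.items() if sum(a) == order}
    def symbol(y):
        return sum(c * _monomial(y, a) for a, c in top.items())
    return symbol

def _monomial(y, alpha):
    value = 1
    for yi, power in zip(y, alpha):
        value *= yi ** power
    return value

# Laplace operator d^2/dx1^2 + d^2/dx2^2: symbol y1^2 + y2^2, elliptic.
laplace = {(2, 0): 1, (0, 2): 1}
# Wave operator -d^2/dx1^2 + d^2/dx2^2: symbol -y1^2 + y2^2, not elliptic.
wave = {(2, 0): -1, (0, 2): 1}

print(principal_symbol(laplace)((1, 2)))   # 5: nonzero for this nonzero covector
print(principal_symbol(wave)((1, 1)))      # 0: vanishes on a nonzero covector
```

The ellipticity test in the text is exactly the statement that the first symbol is nonzero for every nonzero y, while the second is not.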
(In general, differential operators transform in a rather complicated way under coordinate transforms (see jet bundle); however, the highest order terms transform like tensors, so we get well defined homogeneous functions on the cotangent spaces that are independent of the choice of local charts.) More generally, the symbol of a differential operator between two vector bundles E and F is a section of the pullback of the bundle Hom(E, F) to the cotangent space of X. The differential operator is called elliptic if its symbol, as an element of Hom(Ex, Fx), is invertible for all non-zero cotangent vectors at any point x of X.

A key property of elliptic operators is that they are almost invertible; this is closely related to the fact that their symbols are almost invertible. More precisely, an elliptic operator D on a compact manifold has a (non-unique) parametrix (or pseudoinverse) D′ such that DD′ − 1 and D′D − 1 are both compact operators. An important consequence is that the kernel of D is finite-dimensional, because all eigenspaces of compact operators, other than the kernel, are finite-dimensional. (The pseudoinverse of an elliptic differential operator is almost never a differential operator. However, it is an elliptic pseudodifferential operator.)

Analytical index

As the elliptic differential operator D has a pseudoinverse, it is a Fredholm operator. Any Fredholm operator has an index, defined as the difference between the (finite) dimension of the kernel of D (solutions of Df = 0) and the (finite) dimension of the cokernel of D (the constraints on the right-hand side of an inhomogeneous equation like Df = g, or equivalently the kernel of the adjoint operator). In other words,

Index(D) = dim Ker(D) − dim Coker(D) = dim Ker(D) − dim Ker(D*).

This is sometimes called the analytical index of D.

Example: Suppose that the manifold is the circle (thought of as R/Z), and D is the operator d/dx − λ for some complex constant λ. (This is the simplest example of an elliptic operator.) Then the kernel is the space of multiples of exp(λx) if λ is an integral multiple of 2πi and is 0 otherwise, and the kernel of the adjoint is a similar space with λ replaced by its complex conjugate. So D has index 0. This example shows that the kernel and cokernel of elliptic operators can jump discontinuously as the elliptic operator varies, so there is no nice formula for their dimensions in terms of continuous topological data. However, the jumps in the dimensions of the kernel and cokernel are the same, so the index, given by the difference of their dimensions, does indeed vary continuously, and can be given in terms of topological data by the index theorem.

Topological index

The topological index of an elliptic differential operator $D$ between smooth vector bundles $E$ and $F$ on an $n$-dimensional compact manifold $X$ is given by

$(-1)^{n}\operatorname {ch} (D)\operatorname {Td} (X)[X]=(-1)^{n}\int _{X}\operatorname {ch} (D)\operatorname {Td} (X)$

in other words, the value of the top dimensional component of the mixed cohomology class $\operatorname {ch} (D)\operatorname {Td} (X)$ on the fundamental homology class of the manifold $X$, up to a difference of sign. Here,

• $\operatorname {Td} (X)$ is the Todd class of the complexified tangent bundle of $X$.
• $\operatorname {ch} (D)$ is equal to $\varphi ^{-1}(\operatorname {ch} (d(p^{*}E,p^{*}F,\sigma (D))))$, where
• $\varphi :H^{k}(X;\mathbb {Q} )\to H^{n+k}(B(X)/S(X);\mathbb {Q} )$ is the Thom isomorphism for the sphere bundle $p:B(X)/S(X)\to X$
• $\operatorname {ch} :K(X)\otimes \mathbb {Q} \to H^{*}(X;\mathbb {Q} )$ is the Chern character
• $d(p^{*}E,p^{*}F,\sigma (D))$ is the "difference element" in $K(B(X)/S(X))$ associated to two vector bundles $p^{*}E$ and $p^{*}F$ on $B(X)$ and an isomorphism $\sigma (D)$ between them on the subspace $S(X)$.
• $\sigma (D)$ is the symbol of $D$.

In some situations, it is possible to simplify the above formula for computational purposes.
In particular, if $X$ is a $2m$-dimensional orientable (compact) manifold with non-zero Euler class $e(TX)$, then, applying the Thom isomorphism and dividing by the Euler class,[26][27] the topological index may be expressed as

$(-1)^{m}\int _{X}{\frac {\operatorname {ch} (E)-\operatorname {ch} (F)}{e(TX)}}\operatorname {Td} (X)$

where the division makes sense by pulling $e(TX)^{-1}$ back from the cohomology ring of the classifying space $BSO$.

One can also define the topological index using only K-theory (and this alternative definition is compatible in a certain sense with the Chern-character construction above). If X is a compact submanifold of a manifold Y, then there is a pushforward (or "shriek") map from K(TX) to K(TY). The topological index of an element of K(TX) is defined to be the image of this operation with Y some Euclidean space, for which K(TY) can be naturally identified with the integers Z (as a consequence of Bott periodicity). This map is independent of the embedding of X in Euclidean space. Now a differential operator as above naturally defines an element of K(TX), and the image in Z under this map "is" the topological index.

As usual, D is an elliptic differential operator between vector bundles E and F over a compact manifold X. The index problem is the following: compute the (analytical) index of D using only the symbol of D and topological data derived from the manifold and the vector bundles. The Atiyah–Singer index theorem solves this problem, and states:

The analytical index of D is equal to its topological index.

In spite of its formidable definition, the topological index is usually straightforward to evaluate explicitly. So this makes it possible to evaluate the analytical index. (The cokernel and kernel of an elliptic operator are in general extremely hard to evaluate individually; the index theorem shows that we can usually at least evaluate their difference.)
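The contrast between jumping kernels and a stable index has a finite-dimensional shadow, which the following sketch illustrates (matrices here merely stand in for Fredholm operators; the analogy is ours, not the article's). For an m×n matrix, dim ker − dim coker = (n − rank) − (m − rank) = n − m, so the two dimensions can jump under a perturbation while their difference cannot.

```python
import numpy as np

def index(A, tol=1e-9):
    """dim ker(A) - dim coker(A); equals n - m for any m x n matrix."""
    m, n = A.shape
    rank = np.linalg.matrix_rank(A, tol=tol)
    return (n - rank) - (m - rank)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # generic: rank 3, so dim ker = 2, dim coker = 0
B = A.copy()
B[0] = 0.0                        # degenerate perturbation: rank drops to 2, so
                                  # dim ker jumps to 3 and dim coker jumps to 1
print(index(A), index(B))         # 2 2 -- the index is unchanged
```

The rank (hence kernel and cokernel) is discontinuous in the entries of the matrix, but the index depends only on the shape, mirroring the homotopy invariance noticed by Gel'fand.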
Many important invariants of a manifold (such as the signature) can be given as the index of suitable differential operators, so the index theorem allows us to evaluate these invariants in terms of topological data. Although the analytical index is usually hard to evaluate directly, it is at least obviously an integer. The topological index is by definition a rational number, but it is usually not at all obvious from the definition that it is also integral. So the Atiyah–Singer index theorem implies some deep integrality properties, as it implies that the topological index is integral.

The index of an elliptic differential operator obviously vanishes if the operator is self-adjoint. It also vanishes if the manifold X has odd dimension, though there are pseudodifferential elliptic operators whose index does not vanish in odd dimensions.

Relation to Grothendieck–Riemann–Roch

The Grothendieck–Riemann–Roch theorem was one of the main motivations behind the index theorem, because the index theorem is the counterpart of this theorem in the setting of real manifolds. Now, if there is a map $f:X\to Y$ of compact stably almost complex manifolds, then there is a commutative diagram;[28] if $Y=*$ is a point, then we recover the statement above. Here $K(X)$ is the Grothendieck group of complex vector bundles. This commutative diagram is formally very similar to the GRR theorem, in which the cohomology groups on the right are replaced by the Chow ring of a smooth variety, and the Grothendieck group on the left is given by the Grothendieck group of algebraic vector bundles.

Extensions of the Atiyah–Singer index theorem

Teleman index theorem

Due to (Teleman 1983), (Teleman 1984):

For any abstract elliptic operator (Atiyah 1970) on a closed, oriented, topological manifold, the analytical index equals the topological index.
The proof of this result goes through specific considerations, including the extension of Hodge theory on combinatorial and Lipschitz manifolds (Teleman 1980), (Teleman 1983), the extension of Atiyah–Singer's signature operator to Lipschitz manifolds (Teleman 1983), Kasparov's K-homology (Kasparov 1972), and topological cobordism (Kirby & Siebenmann 1977). This result shows that the index theorem is not merely a differentiability statement, but rather a topological statement.

Connes–Donaldson–Sullivan–Teleman index theorem

Due to (Donaldson & Sullivan 1989), (Connes, Sullivan & Teleman 1994):

For any quasiconformal manifold there exists a local construction of the Hirzebruch–Thom characteristic classes.

This theory is based on a signature operator S, defined on middle degree differential forms on even-dimensional quasiconformal manifolds (compare (Donaldson & Sullivan 1989)). Using topological cobordism and K-homology one may provide a full statement of an index theorem on quasiconformal manifolds (see page 678 of (Connes, Sullivan & Teleman 1994)). The work (Connes, Sullivan & Teleman 1994) "provides local constructions for characteristic classes based on higher dimensional relatives of the measurable Riemann mapping in dimension two and the Yang–Mills theory in dimension four." These results constitute significant advances along the lines of Singer's program Prospects in Mathematics (Singer 1971). At the same time, they also provide an effective construction of the rational Pontrjagin classes on topological manifolds. The paper (Teleman 1985) provides a link between Thom's original construction of the rational Pontrjagin classes (Thom 1956) and index theory.

It is important to mention that the index formula is a topological statement. The obstruction theories due to Milnor, Kervaire, Kirby, Siebenmann, Sullivan, and Donaldson show that only a minority of topological manifolds possess differentiable structures, and these are not necessarily unique. Sullivan's result on Lipschitz and quasiconformal structures (Sullivan 1979) shows that any topological manifold in dimension different from 4 possesses such a structure, which is unique (up to isotopy close to the identity).

The quasiconformal structures (Connes, Sullivan & Teleman 1994) and, more generally, the Lp-structures, p > n(n+1)/2, introduced by M. Hilsum (Hilsum 1999), are the weakest analytical structures on topological manifolds of dimension n for which the index theorem is known to hold.

Other extensions

• The Atiyah–Singer theorem applies to elliptic pseudodifferential operators in much the same way as to elliptic differential operators. In fact, for technical reasons most of the early proofs worked with pseudodifferential rather than differential operators: their extra flexibility made some steps of the proofs easier.
• Instead of working with an elliptic operator between two vector bundles, it is sometimes more convenient to work with an elliptic complex $0\rightarrow E_{0}\rightarrow E_{1}\rightarrow E_{2}\rightarrow \dotsm \rightarrow E_{m}\rightarrow 0$ of vector bundles. The difference is that the symbols now form an exact sequence (off the zero section). In the case when there are just two non-zero bundles in the complex this implies that the symbol is an isomorphism off the zero section, so an elliptic complex with 2 terms is essentially the same as an elliptic operator between two vector bundles. Conversely, the index theorem for an elliptic complex can easily be reduced to the case of an elliptic operator: the two vector bundles are given by the sums of the even or odd terms of the complex, and the elliptic operator is the sum of the operators of the elliptic complex and their adjoints, restricted to the sum of the even bundles.
• If the manifold is allowed to have boundary, then some restrictions must be put on the domain of the elliptic operator in order to ensure a finite index.
These conditions can be local (like demanding that the sections in the domain vanish at the boundary) or more complicated global conditions (like requiring that the sections in the domain solve some differential equation). The local case was worked out by Atiyah and Bott, but they showed that many interesting operators (e.g., the signature operator) do not admit local boundary conditions. To handle these operators, Atiyah, Patodi and Singer introduced global boundary conditions equivalent to attaching a cylinder to the manifold along the boundary and then restricting the domain to those sections that are square integrable along the cylinder. This point of view is adopted in the proof of Melrose (1993) of the Atiyah–Patodi–Singer index theorem.
• Instead of just one elliptic operator, one can consider a family of elliptic operators parameterized by some space Y. In this case the index is an element of the K-theory of Y, rather than an integer. If the operators in the family are real, then the index lies in the real K-theory of Y. This gives a little extra information, as the map from the real K-theory of Y to the complex K-theory is not always injective.
• If there is a group action of a group G on the compact manifold X, commuting with the elliptic operator, then one replaces ordinary K-theory with equivariant K-theory. Moreover, one gets generalizations of the Lefschetz fixed-point theorem, with terms coming from fixed-point submanifolds of the group G. See also: equivariant index theorem.
• Atiyah (1976) showed how to extend the index theorem to some non-compact manifolds, acted on by a discrete group with compact quotient. The kernel of the elliptic operator is in general infinite-dimensional in this case, but it is possible to get a finite index using the dimension of a module over a von Neumann algebra; this index is in general real rather than integer valued. This version is called the L2 index theorem, and was used by Atiyah & Schmid (1977) to rederive properties of the discrete series representations of semisimple Lie groups.
• The Callias index theorem is an index theorem for a Dirac operator on a noncompact odd-dimensional space. The Atiyah–Singer index is only defined on compact spaces, and vanishes when their dimension is odd. In 1978 Constantine Callias, at the suggestion of his Ph.D. advisor Roman Jackiw, used the axial anomaly to derive this index theorem on spaces equipped with a Hermitian matrix called the Higgs field.[29] The index of the Dirac operator is a topological invariant which measures the winding of the Higgs field on a sphere at infinity. If U is the unit matrix in the direction of the Higgs field, then the index is proportional to the integral of U(dU)n−1 over the (n−1)-sphere at infinity. If n is even, it is always zero.
• The topological interpretation of this invariant and its relation to the Hörmander index proposed by Boris Fedosov, as generalized by Lars Hörmander, was published by Raoul Bott and Robert Thomas Seeley.[30]

Examples

Chern–Gauss–Bonnet theorem

Suppose that $M$ is a compact oriented manifold of dimension $n=2r$. If we take $\Lambda ^{\text{even}}$ to be the sum of the even exterior powers of the cotangent bundle, and $\Lambda ^{\text{odd}}$ to be the sum of the odd powers, define $D=d+d^{*}$, considered as a map from $\Lambda ^{\text{even}}$ to $\Lambda ^{\text{odd}}$. Then the analytical index of $D$ is the Euler characteristic $\chi (M)$ of the Hodge cohomology of $M$, and the topological index is the integral of the Euler class over the manifold. The index formula for this operator yields the Chern–Gauss–Bonnet theorem.
The concrete computation goes as follows: according to one variation of the splitting principle, if $E$ is a real vector bundle of dimension $n=2r$, then in order to prove assertions involving characteristic classes we may suppose that there are complex line bundles $l_{1},\,\ldots ,\,l_{r}$ such that $E\otimes \mathbb {C} =l_{1}\oplus {\overline {l_{1}}}\oplus \dotsm \oplus l_{r}\oplus {\overline {l_{r}}}$. Therefore, we can consider the Chern roots $x_{i}(E\otimes \mathbb {C} )=c_{1}(l_{i})$, $x_{r+i}(E\otimes \mathbb {C} )=c_{1}{\mathord {\left({\overline {l_{i}}}\right)}}=-x_{i}(E\otimes \mathbb {C} )$, $i=1,\,\ldots ,\,r$.

Using the Chern roots as above and the standard properties of the Euler class, we have that $ e(TM)=\prod _{i}^{r}x_{i}(TM\otimes \mathbb {C} )$. As for the Chern character and the Todd class,[31]

${\begin{aligned}\operatorname {ch} {\mathord {\left(\Lambda ^{\text{even}}-\Lambda ^{\text{odd}}\right)}}&=1-\operatorname {ch} (T^{*}M\otimes \mathbb {C} )+\operatorname {ch} {\mathord {\left(\Lambda ^{2}T^{*}M\otimes \mathbb {C} \right)}}-\ldots +(-1)^{n}\operatorname {ch} {\mathord {\left(\Lambda ^{n}T^{*}M\otimes \mathbb {C} \right)}}\\&=1-\sum _{i}^{n}e^{-x_{i}}(TM\otimes \mathbb {C} )+\sum _{i<j}e^{-x_{i}}e^{-x_{j}}(TM\otimes \mathbb {C} )+\ldots +(-1)^{n}e^{-x_{1}}\dotsm e^{-x_{n}}(TM\otimes \mathbb {C} )\\&=\prod _{i}^{n}\left(1-e^{-x_{i}}\right)(TM\otimes \mathbb {C} )\\[3pt]\operatorname {Td} (TM\otimes \mathbb {C} )&=\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}(TM\otimes \mathbb {C} )\end{aligned}}$

Applying the index theorem,

$\chi (M)=(-1)^{r}\int _{M}{\frac {\prod _{i}^{n}\left(1-e^{-x_{i}}\right)}{\prod _{i}^{r}x_{i}}}\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}(TM\otimes \mathbb {C} )=(-1)^{r}\int _{M}(-1)^{r}\prod _{i}^{r}x_{i}(TM\otimes \mathbb {C} )=\int _{M}e(TM)$

which is the "topological" version of the Chern–Gauss–Bonnet theorem (the geometric one being obtained by applying the Chern–Weil homomorphism).
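The cancellation in the last display can be checked symbolically in the simplest case $r=1$, where the Chern roots of $TM\otimes \mathbb {C}$ are $x$ and $-x$. The following is a sketch of ours using sympy (the variable names are illustrative):

```python
from sympy import symbols, exp, simplify

x = symbols('x')

# Chern character factor prod(1 - e^{-x_i}) over both roots {x, -x}:
ch_factor = (1 - exp(-x)) * (1 - exp(x))
# Euler class: product of the r = 1 positive Chern roots.
euler = x
# Todd class prod x_i / (1 - e^{-x_i}) over both roots:
todd = (x / (1 - exp(-x))) * (-x / (1 - exp(x)))

integrand = simplify(ch_factor / euler * todd)
print(integrand)                       # -x
print(simplify((-1)**1 * integrand))   # x, i.e. (-1)^r prod x_i = e(TM)
```

All the transcendental factors cancel exactly, leaving $(-1)^{r}\prod x_i$, just as in the computation above.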
Hirzebruch–Riemann–Roch theorem

Take X to be a complex manifold of (complex) dimension n with a holomorphic vector bundle V. We let the vector bundles E and F be the sums of the bundles of differential forms with coefficients in V of type (0, i) with i even or odd, and we let the differential operator D be the sum ${\overline {\partial }}+{\overline {\partial }}^{*}$ restricted to E. This derivation of the Hirzebruch–Riemann–Roch theorem is more natural if we use the index theorem for elliptic complexes rather than elliptic operators. We can take the complex to be

$0\rightarrow V\rightarrow V\otimes \Lambda ^{0,1}T^{*}(X)\rightarrow V\otimes \Lambda ^{0,2}T^{*}(X)\rightarrow \dotsm $

with the differential given by ${\overline {\partial }}$. Then the ith cohomology group is just the coherent cohomology group Hi(X, V), so the analytical index of this complex is the holomorphic Euler characteristic of V:

$\operatorname {index} (D)=\sum _{p}(-1)^{p}\dim H^{p}(X,V)=\chi (X,V)$

Since we are dealing with complex bundles, the computation of the topological index is simpler. Using Chern roots and doing similar computations as in the previous example, the Euler class is given by $ e(TX)=\prod _{i}^{n}x_{i}(TX)$ and

${\begin{aligned}\operatorname {ch} \left(\sum _{j}^{n}(-1)^{j}V\otimes \Lambda ^{j}{\overline {T^{*}X}}\right)&=\operatorname {ch} (V)\prod _{j}^{n}\left(1-e^{x_{j}}\right)(TX)\\\operatorname {Td} (TX\otimes \mathbb {C} )=\operatorname {Td} (TX)\operatorname {Td} \left({\overline {TX}}\right)&=\prod _{i}^{n}{\frac {x_{i}}{1-e^{-x_{i}}}}\prod _{j}^{n}{\frac {-x_{j}}{1-e^{x_{j}}}}(TX)\end{aligned}}$

Applying the index theorem, we obtain the Hirzebruch–Riemann–Roch theorem:

$\chi (X,V)=\int _{X}\operatorname {ch} (V)\operatorname {Td} (TX)$

In fact we get a generalization of it to all complex manifolds: Hirzebruch's proof only worked for projective complex manifolds X.
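For a curve (complex dimension 1) the right-hand side can be expanded by hand: with $L$ a line bundle of degree $d$ on a compact Riemann surface of genus $g$, one has $\operatorname {ch} (L)=1+d\,[\mathrm{pt}]$ and $\operatorname {Td} (TX)=1+{\tfrac {1}{2}}c_{1}(TX)=1+(1-g)[\mathrm{pt}]$, recovering the classical Riemann–Roch theorem $\chi (X,L)=d+1-g$. A symbolic sketch of ours (the marker t stands for the point class, with $t^{2}=0$):

```python
from sympy import symbols, expand

d, g, t = symbols('d g t')

ch_L = 1 + d*t        # Chern character of a degree-d line bundle on a curve
todd = 1 + (1 - g)*t  # Td(TX) = 1 + c1(TX)/2, with c1(TX) = (2 - 2g)[pt]

product = expand(ch_L * todd).subs(t**2, 0)   # truncate: [pt]^2 = 0 on a curve
chi = product.coeff(t)  # integrating over X picks out the [pt] coefficient
print(chi)              # d - g + 1: the classical Riemann-Roch formula
```

The top-degree coefficient is exactly $\deg L+1-g$, matching the index formula above in the lowest-dimensional case.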
Hirzebruch signature theorem

The Hirzebruch signature theorem states that the signature of a compact oriented manifold X of dimension 4k is given by the L genus of the manifold. This follows from the Atiyah–Singer index theorem applied to the following signature operator. The bundles E and F are given by the +1 and −1 eigenspaces of the operator on the bundle of differential forms of X that acts on k-forms as $i^{k(k-1)}$ times the Hodge star operator. The operator D is the Hodge Laplacian

$D\equiv \Delta \mathrel {:=} \left(\mathbf {d} +\mathbf {d^{*}} \right)^{2}$

restricted to E, where d is the Cartan exterior derivative and d* is its adjoint. The analytic index of D is the signature of the manifold X, and its topological index is the L genus of X, so these are equal.

Â genus and Rochlin's theorem

The Â genus is a rational number defined for any manifold, but it is in general not an integer. Borel and Hirzebruch showed that it is integral for spin manifolds, and an even integer if in addition the dimension is 4 mod 8. This can be deduced from the index theorem, which implies that the Â genus for spin manifolds is the index of a Dirac operator. The extra factor of 2 in dimensions 4 mod 8 comes from the fact that in this case the kernel and cokernel of the Dirac operator have a quaternionic structure, so as complex vector spaces they have even dimensions, so the index is even. In dimension 4 this result implies Rochlin's theorem that the signature of a 4-dimensional spin manifold is divisible by 16: this follows because in dimension 4 the Â genus is minus one eighth of the signature.

Proof techniques

Pseudodifferential operators

Main article: pseudodifferential operator

Pseudodifferential operators can be explained easily in the case of constant coefficient operators on Euclidean space. In this case, constant coefficient differential operators are just the Fourier transforms of multiplication by polynomials, and constant coefficient pseudodifferential operators are just the Fourier transforms of multiplication by more general functions. Many proofs of the index theorem use pseudodifferential operators rather than differential operators. The reason for this is that for many purposes there are not enough differential operators. For example, a pseudoinverse of an elliptic differential operator of positive order is not a differential operator, but is a pseudodifferential operator. Also, there is a direct correspondence between data representing elements of K(B(X), S(X)) (clutching functions) and symbols of elliptic pseudodifferential operators. Pseudodifferential operators have an order, which can be any real number or even −∞, and have symbols (which are no longer polynomials on the cotangent space), and elliptic pseudodifferential operators are those whose symbols are invertible for sufficiently large cotangent vectors. Most versions of the index theorem can be extended from elliptic differential operators to elliptic pseudodifferential operators.

Cobordism

The initial proof was based on that of the Hirzebruch–Riemann–Roch theorem (1954), and involved cobordism theory and pseudodifferential operators. The idea of this first proof is roughly as follows. Consider the ring generated by pairs (X, V), where V is a smooth vector bundle on the compact smooth oriented manifold X, with relations that the sum and product of the ring on these generators are given by disjoint union and product of manifolds (with the obvious operations on the vector bundles), and any boundary of a manifold with vector bundle is 0. This is similar to the cobordism ring of oriented manifolds, except that the manifolds also have a vector bundle. The topological and analytical indices are both reinterpreted as functions from this ring to the integers.
Then one checks that these two functions are in fact both ring homomorphisms. In order to prove they are the same, it is then only necessary to check they are the same on a set of generators of this ring. Thom's cobordism theory gives a set of generators; for example, complex projective spaces with the trivial bundle together with certain bundles over even dimensional spheres. So the index theorem can be proved by checking it on these particularly simple cases.

K-theory

Atiyah and Singer's first published proof used K-theory rather than cobordism. If i is any inclusion of compact manifolds from X to Y, they defined a "pushforward" operation i! on elliptic operators of X to elliptic operators of Y that preserves the index. By taking Y to be some sphere that X embeds in, this reduces the index theorem to the case of spheres. If Y is a sphere and X is some point embedded in Y, then any elliptic operator on Y is the image under i! of some elliptic operator on the point. This reduces the index theorem to the case of a point, where it is trivial.

Heat equation

Atiyah, Bott, and Patodi (1973) gave a new proof of the index theorem using the heat equation; see e.g. Berline, Getzler & Vergne (1992). The proof is also published in (Melrose 1993) and (Gilkey 1994). If D is a differential operator with adjoint D*, then D*D and DD* are self-adjoint operators whose non-zero eigenvalues have the same multiplicities. However, their zero eigenspaces may have different multiplicities, as these multiplicities are the dimensions of the kernels of D and D*. Therefore, the index of D is given by

$\operatorname {index} (D)=\dim \operatorname {Ker} (D)-\dim \operatorname {Ker} (D^{*})=\operatorname {Tr} \left(e^{-tD^{*}D}\right)-\operatorname {Tr} \left(e^{-tDD^{*}}\right)$

for any positive t. The right hand side is given by the trace of the difference of the kernels of two heat operators.
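The t-independence of this difference of heat traces (the McKean–Singer formula) can be illustrated in a finite-dimensional model, with a rectangular matrix standing in for D (an illustration of ours, not the article's proof): the nonzero spectra of D*D and DD* coincide, so everything cancels except the zero modes.

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((3, 5))   # stand-in for D: 5-dim domain, 3-dim target

def heat_supertrace(D, t):
    """Tr e^{-t D*D} - Tr e^{-t D D*}, computed from the symmetric spectra."""
    lam_star_d = np.linalg.eigvalsh(D.T @ D)   # 5 eigenvalues, two of them 0
    lam_d_star = np.linalg.eigvalsh(D @ D.T)   # the 3 nonzero ones, shared
    return np.exp(-t * lam_star_d).sum() - np.exp(-t * lam_d_star).sum()

# The shared nonzero eigenvalues cancel, leaving the count of zero modes:
# dim ker D - dim ker D* = (5 - 3) - 0 = 2, for every t > 0.
for t in (0.1, 1.0, 10.0):
    print(round(heat_supertrace(D, t), 8))     # 2.0 each time
```

As in the infinite-dimensional setting, the individual traces depend on t, but their difference is the (constant, integer) index.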
These heat traces have an asymptotic expansion for small positive t, which can be used to evaluate the limit as t tends to 0, giving a proof of the Atiyah–Singer index theorem. The asymptotic expansions for small t appear very complicated, but invariant theory shows that there are huge cancellations between the terms, which makes it possible to find the leading terms explicitly. These cancellations were later explained using supersymmetry.

Citations

1. Atiyah & Singer 1963.
2. Kayani 2020.
3. Hamilton 2020, p. 11.
4. Gel'fand 1960.
5. Palais 1965.
6. Cartan-Schwartz 1965.
7. Atiyah & Singer 1968a.
8. Atiyah & Singer (1968a); Atiyah & Singer (1968b); Atiyah & Singer (1971a); Atiyah & Singer (1971b).
9. Novikov 1965.
10. Kirby & Siebenmann 1969.
11. Thom 1956.
12. Atiyah 1970.
13. Singer 1971.
14. Kasparov 1972.
15. Atiyah, Bott & Patodi 1973.
16. Melrose 1993.
17. Sullivan 1979.
18. Getzler 1983.
19. Witten 1982.
20. Teleman 1983.
21. Teleman 1984.
22. Connes 1986.
23. Donaldson & Sullivan 1989.
24. Connes & Moscovici 1990.
25. Connes, Sullivan & Teleman 1994.
26. Shanahan, P. (1978), The Atiyah–Singer index theorem: an introduction, Lecture Notes in Mathematics, vol. 638, Springer, CiteSeerX 10.1.1.193.9222, doi:10.1007/BFb0068264, ISBN 978-0-387-08660-6.
27. Lawson, H. Blaine; Michelsohn, Marie-Louise (1989), Spin Geometry, Princeton University Press, ISBN 0-691-08542-0.
28. "algebraic topology - How to understand the Todd class?". Mathematics Stack Exchange. Retrieved 2021-02-05.
29. Index Theorems on Open Spaces.
30. Some Remarks on the Paper of Callias.
31. Nakahara, Mikio (2003), Geometry, topology and physics, Institute of Physics Publishing, ISBN 0-7503-0606-8.

References

The papers by Atiyah are reprinted in volumes 3 and 4 of his collected works (Atiyah 1988a, 1988b).

• Atiyah, M. F. (1970), "Global Theory of Elliptic Operators", Proc. Int. Conf. on Functional Analysis and Related Topics (Tokyo, 1969), University of Tokyo, Zbl 0193.43601
• Atiyah, M. F.
(1976), "Elliptic operators, discrete groups and von Neumann algebras", Colloque "Analyse et Topologie" en l'Honneur de Henri Cartan (Orsay, 1974), Asterisque, vol. 32–33, Soc. Math. France, Paris, pp. 43–72, MR 0420729
• Atiyah, M. F.; Segal, G. B. (1968), "The Index of Elliptic Operators: II", Annals of Mathematics, Second Series, 87 (3): 531–545, doi:10.2307/1970716, JSTOR 1970716 This reformulates the result as a sort of Lefschetz fixed-point theorem, using equivariant K-theory.
• Atiyah, Michael F.; Singer, Isadore M. (1963), "The Index of Elliptic Operators on Compact Manifolds", Bull. Amer. Math. Soc., 69 (3): 422–433, doi:10.1090/S0002-9904-1963-10957-X An announcement of the index theorem.
• Atiyah, Michael F.; Singer, Isadore M. (1968a), "The Index of Elliptic Operators I", Annals of Mathematics, 87 (3): 484–530, doi:10.2307/1970715, JSTOR 1970715 This gives a proof using K-theory instead of cohomology.
• Atiyah, Michael F.; Singer, Isadore M. (1968b), "The Index of Elliptic Operators III", Annals of Mathematics, Second Series, 87 (3): 546–604, doi:10.2307/1970717, JSTOR 1970717 This paper shows how to convert from the K-theory version to a version using cohomology.
• Atiyah, Michael F.; Singer, Isadore M. (1971a), "The Index of Elliptic Operators IV", Annals of Mathematics, Second Series, 93 (1): 119–138, doi:10.2307/1970756, JSTOR 1970756 This paper studies families of elliptic operators, where the index is now an element of the K-theory of the space parametrizing the family.
• Atiyah, Michael F.; Singer, Isadore M. (1971b), "The Index of Elliptic Operators V", Annals of Mathematics, Second Series, 93 (1): 139–149, doi:10.2307/1970757, JSTOR 1970757. This studies families of real (rather than complex) elliptic operators, when one can sometimes squeeze out a little extra information.
• Atiyah, M. F.; Bott, R. (1966), "A Lefschetz Fixed Point Formula for Elliptic Differential Operators", Bull. Am. Math.
Soc., 72 (2): 245–50, doi:10.1090/S0002-9904-1966-11483-0. This states a theorem calculating the Lefschetz number of an endomorphism of an elliptic complex.
• Atiyah, M. F.; Bott, R. (1967), "A Lefschetz Fixed Point Formula for Elliptic Complexes: I", Annals of Mathematics, Second series, 86 (2): 374–407, doi:10.2307/1970694, JSTOR 1970694 and Atiyah, M. F.; Bott, R. (1968), "A Lefschetz Fixed Point Formula for Elliptic Complexes: II. Applications", Annals of Mathematics, Second Series, 88 (3): 451–491, doi:10.2307/1970721, JSTOR 1970721 These give the proofs and some applications of the results announced in the previous paper.
• Atiyah, M.; Bott, R.; Patodi, V. K. (1973), "On the heat equation and the index theorem", Invent. Math., 19 (4): 279–330, Bibcode:1973InMat..19..279A, doi:10.1007/BF01425417, MR 0650828, S2CID 115700319. Atiyah, M.; Bott, R.; Patodi, V. K. (1975), "Errata", Invent. Math., 28 (3): 277–280, Bibcode:1975InMat..28..277A, doi:10.1007/BF01425562, MR 0650829
• Atiyah, Michael; Schmid, Wilfried (1977), "A geometric construction of the discrete series for semisimple Lie groups", Invent. Math., 42: 1–62, Bibcode:1977InMat..42....1A, doi:10.1007/BF01389783, MR 0463358, S2CID 189831012, Atiyah, Michael; Schmid, Wilfried (1979), "Erratum", Invent. Math., 54 (2): 189–192, Bibcode:1979InMat..54..189A, doi:10.1007/BF01408936, MR 0550183
• Atiyah, Michael (1988a), Collected works. Vol. 3. Index theory: 1, Oxford Science Publications, New York: The Clarendon Press, Oxford University Press, ISBN 978-0-19-853277-4, MR 0951894
• Atiyah, Michael (1988b), Collected works. Vol. 4. Index theory: 2, Oxford Science Publications, New York: The Clarendon Press, Oxford University Press, ISBN 978-0-19-853278-1, MR 0951895
• Baum, P.; Fulton, W.; Macpherson, R.
(1979), "Riemann-Roch for singular varieties", Acta Mathematica, 143: 155–191, doi:10.1007/BF02684299, S2CID 83458307, Zbl 0332.14003
• Berline, Nicole; Getzler, Ezra; Vergne, Michèle (1992), Heat Kernels and Dirac Operators, Berlin: Springer, ISBN 978-3-540-53340-5 This gives an elementary proof of the index theorem for the Dirac operator, using the heat equation and supersymmetry.
• Bismut, Jean-Michel (1984), "The Atiyah–Singer Theorems: A Probabilistic Approach. I. The index theorem", J. Funct. Analysis, 57: 56–99, doi:10.1016/0022-1236(84)90101-0 Bismut proves the theorem for elliptic complexes using probabilistic methods, rather than heat equation methods.
• Cartan-Schwartz (1965), Séminaire Henri Cartan. Théoreme d'Atiyah-Singer sur l'indice d'un opérateur différentiel elliptique. 16 annee: 1963/64 dirigee par Henri Cartan et Laurent Schwartz. Fasc. 1; Fasc. 2. (French), École Normale Supérieure, Secrétariat mathématique, Paris, Zbl 0149.41102
• Connes, A. (1986), "Non-commutative differential geometry", Publications Mathématiques de l'Institut des Hautes Études Scientifiques, 62: 257–360, doi:10.1007/BF02698807, S2CID 122740195, Zbl 0592.46056
• Connes, A. (1994), Noncommutative Geometry, San Diego: Academic Press, ISBN 978-0-12-185860-5, Zbl 0818.46076
• Connes, A.; Moscovici, H. (1990), "Cyclic cohomology, the Novikov conjecture and hyperbolic groups" (PDF), Topology, 29 (3): 345–388, doi:10.1016/0040-9383(90)90003-3, Zbl 0759.58047
• Connes, A.; Sullivan, D.; Teleman, N. (1994), "Quasiconformal mappings, operators on Hilbert space and local formulae for characteristic classes", Topology, 33 (4): 663–681, doi:10.1016/0040-9383(94)90003-5, Zbl 0840.57013
• Donaldson, S.K.; Sullivan, D. (1989), "Quasiconformal 4-manifolds", Acta Mathematica, 163: 181–252, doi:10.1007/BF02392736, Zbl 0704.57008
• Gel'fand, I. M. (1960), "On elliptic equations", Russ. Math.
Surv., 15 (3): 113–123, Bibcode:1960RuMaS..15..113G, doi:10.1070/rm1960v015n03ABEH004094 reprinted in volume 1 of his collected works, p. 65–75, ISBN 0-387-13619-3. On page 120 Gel'fand suggests that the index of an elliptic operator should be expressible in terms of topological data.
• Getzler, E. (1983), "Pseudodifferential operators on supermanifolds and the Atiyah–Singer index theorem", Commun. Math. Phys., 92 (2): 163–178, Bibcode:1983CMaPh..92..163G, doi:10.1007/BF01210843, S2CID 55438589
• Getzler, E. (1988), "A short proof of the local Atiyah–Singer index theorem", Topology, 25: 111–117, doi:10.1016/0040-9383(86)90008-X
• Gilkey, Peter B. (1994), Invariance Theory, the Heat Equation, and the Atiyah–Singer Theorem, CRC Press, ISBN 978-0-8493-7874-4 Free online textbook that proves the Atiyah–Singer theorem with a heat equation approach
• Hamilton, M. J. D. (2020). "The Higgs boson for mathematicians. Lecture notes on gauge theory and symmetry breaking". arXiv:1512.02632 [math.DG].
• Kayani, U. (2020). "Dynamical supersymmetry enhancement of black hole horizons". arXiv:1910.01080 [hep-th].
• Higson, Nigel; Roe, John (2000), Analytic K-homology, Oxford University Press, ISBN 9780191589201
• Hilsum, M. (1999), "Structures riemaniennes Lp et K-homologie", Annals of Mathematics, 149 (3): 1007–1022, arXiv:math/9905210, doi:10.2307/121079, JSTOR 121079, S2CID 119708566
• Kasparov, G.G. (1972), "Topological invariance of elliptic operators, I: K-homology", Math. USSR Izvestija (Engl. Transl.), 9 (4): 751–792, Bibcode:1975IzMat...9..751K, doi:10.1070/IM1975v009n04ABEH001497
• Kirby, R.; Siebenmann, L.C. (1969), "On the triangulation of manifolds and the Hauptvermutung", Bull. Amer. Math. Soc., 75 (4): 742–749, doi:10.1090/S0002-9904-1969-12271-8
• Kirby, R.; Siebenmann, L.C. (1977), Foundational Essays on Topological Manifolds, Smoothings and Triangulations, Annals of Mathematics Studies in Mathematics, vol.
88, Princeton: Princeton University Press and Tokyo University Press
• Lawson, H. Blaine; Michelsohn, Marie-Louise (1989), Spin Geometry, Princeton University Press, ISBN 0-691-08542-0
• Melrose, Richard B. (1993), The Atiyah–Patodi–Singer Index Theorem, Wellesley, Mass.: Peters, ISBN 978-1-56881-002-7 Free online textbook.
• Novikov, S.P. (1965), "Topological invariance of the rational Pontrjagin classes" (PDF), Doklady Akademii Nauk SSSR, 163: 298–300
• Palais, Richard S. (1965), Seminar on the Atiyah–Singer Index Theorem, Annals of Mathematics Studies, vol. 57, S.l.: Princeton Univ Press, ISBN 978-0-691-08031-4 This describes the original proof of the theorem (Atiyah and Singer never published their original proof themselves, but only improved versions of it.)
• Shanahan, P. (1978), The Atiyah–Singer index theorem: an introduction, Lecture Notes in Mathematics, vol. 638, Springer, CiteSeerX 10.1.1.193.9222, doi:10.1007/BFb0068264, ISBN 978-0-387-08660-6
• Singer, I.M. (1971), "Future extensions of index theory and elliptic operators", Prospects in Mathematics, Annals of Mathematics Studies in Mathematics, vol. 70, pp. 171–185
• Sullivan, D. (1979), "Hyperbolic geometry and homeomorphisms", in J.C. Cantrell (ed.), Geometric Topology, Proc. Georgia Topology Conf. Athens, Georgia, 1977, New York: Academic Press, pp. 543–595, ISBN 978-0-12-158860-1, Zbl 0478.57007
• Sullivan, D.; Teleman, N. (1983), "An analytic proof of Novikov's theorem on rational Pontrjagin classes", Publications Mathématiques de l'Institut des Hautes Études Scientifiques, Paris, 58: 291–293, doi:10.1007/BF02953773, S2CID 8348213, Zbl 0531.58045
• Teleman, N. (1980), "Combinatorial Hodge theory and signature operator", Inventiones Mathematicae, 61 (3): 227–249, Bibcode:1980InMat..61..227T, doi:10.1007/BF01390066, S2CID 122247909
• Teleman, N.
(1983), "The index of signature operators on Lipschitz manifolds", Publications Mathématiques de l'Institut des Hautes Études Scientifiques, 58: 251–290, doi:10.1007/BF02953772, S2CID 121497293, Zbl 0531.58044
• Teleman, N. (1984), "The index theorem on topological manifolds", Acta Mathematica, 153: 117–152, doi:10.1007/BF02392376, Zbl 0547.58036
• Teleman, N. (1985), "Transversality and the index theorem", Integral Equations and Operator Theory, 8 (5): 693–719, doi:10.1007/BF01201710, S2CID 121137053
• Thom, R. (1956), "Les classes caractéristiques de Pontrjagin de variétés triangulées", Symp. Int. Top. Alg. Mexico, pp. 54–67
• Witten, Edward (1982), "Supersymmetry and Morse theory", J. Diff. Geom., 17 (4): 661–692, doi:10.4310/jdg/1214437492, MR 0683171
• Shing-Tung Yau, ed. (2009) [First published in 2005], The Founders of Index Theory (2nd ed.), Somerville, Mass.: International Press of Boston, ISBN 978-1571461377 - Personal accounts on Atiyah, Bott, Hirzebruch and Singer.

External links

Links on the theory

• Mazzeo, Rafe. "The Atiyah–Singer Index Theorem: What it is and why you should care" (PDF). Archived from the original on June 24, 2006. Retrieved January 3, 2006. Pdf presentation.
• Voitsekhovskii, M.I.; Shubin, M.A. (2001) [1994], "Index formulas", Encyclopedia of Mathematics, EMS Press
• Wassermann, Antony. "Lecture notes on the Atiyah–Singer Index Theorem". Archived from the original on March 29, 2017.

Links of interviews

• Raussen, Martin; Skau, Christian (2005), "Interview with Michael Atiyah and Isadore Singer" (PDF), Notices of AMS, pp. 223–231
• R. R. Seeley and others (1999) Recollections from the early days of index theory and pseudo-differential operators - A partial transcript of informal post-dinner conversation during a symposium held in Roskilde, Denmark, in September 1998.
Wikipedia
Kosmann lift

In differential geometry, the Kosmann lift,[1][2] named after Yvette Kosmann-Schwarzbach, of a vector field $X\,$ on a Riemannian manifold $(M,g)\,$ is the canonical projection $X_{K}\,$ on the orthonormal frame bundle of its natural lift ${\hat {X}}\,$ defined on the bundle of linear frames.[3] Generalisations exist for any given reductive G-structure.

Introduction

In general, given a subbundle $Q\subset E\,$ of a fiber bundle $\pi _{E}\colon E\to M\,$ over $M$ and a vector field $Z\,$ on $E$, its restriction $Z\vert _{Q}\,$ to $Q$ is a vector field "along" $Q$, not on (i.e., tangent to) $Q$. If one denotes by $i_{Q}\colon Q\hookrightarrow E$ the canonical embedding, then $Z\vert _{Q}\,$ is a section of the pullback bundle $i_{Q}^{\ast }(TE)\to Q\,$, where $i_{Q}^{\ast }(TE)=\{(q,v)\in Q\times TE\mid i(q)=\tau _{E}(v)\}\subset Q\times TE,\,$ and $\tau _{E}\colon TE\to E\,$ is the tangent bundle of the fiber bundle $E$. Let us assume that we are given a Kosmann decomposition of the pullback bundle $i_{Q}^{\ast }(TE)\to Q\,$, such that $i_{Q}^{\ast }(TE)=TQ\oplus {\mathcal {M}}(Q),\,$ i.e., at each $q\in Q$ one has $T_{q}E=T_{q}Q\oplus {\mathcal {M}}_{q}\,,$ where ${\mathcal {M}}_{q}$ is a vector subspace of $T_{q}E\,$ and we assume ${\mathcal {M}}(Q)\to Q\,$ to be a vector bundle over $Q$, called the transversal bundle of the Kosmann decomposition. It follows that the restriction $Z\vert _{Q}\,$ to $Q$ splits into a tangent vector field $Z_{K}\,$ on $Q$ and a transverse vector field $Z_{G},\,$ being a section of the vector bundle ${\mathcal {M}}(Q)\to Q.\,$

Definition

Let $\mathrm {F} _{SO}(M)\to M$ be the oriented orthonormal frame bundle of an oriented $n$-dimensional Riemannian manifold $M$ with given metric $g\,$. This is a principal ${\mathrm {S} \mathrm {O} }(n)\,$-subbundle of $\mathrm {F} M\,$, the tangent frame bundle of linear frames over $M$ with structure group ${\mathrm {G} \mathrm {L} }(n,\mathbb {R} )\,$.
By definition, one may say that we are given a classical reductive ${\mathrm {S} \mathrm {O} }(n)\,$-structure. The special orthogonal group ${\mathrm {S} \mathrm {O} }(n)\,$ is a reductive Lie subgroup of ${\mathrm {G} \mathrm {L} }(n,\mathbb {R} )\,$. In fact, there exists a direct sum decomposition ${\mathfrak {gl}}(n)={\mathfrak {so}}(n)\oplus {\mathfrak {m}}\,$, where ${\mathfrak {gl}}(n)\,$ is the Lie algebra of ${\mathrm {G} \mathrm {L} }(n,\mathbb {R} )\,$, ${\mathfrak {so}}(n)\,$ is the Lie algebra of ${\mathrm {S} \mathrm {O} }(n)\,$, and ${\mathfrak {m}}\,$ is the $\mathrm {Ad} _{\mathrm {S} \mathrm {O} }\,$-invariant vector subspace of symmetric matrices, i.e. $\mathrm {Ad} _{a}{\mathfrak {m}}\subset {\mathfrak {m}}\,$ for all $a\in {\mathrm {S} \mathrm {O} }(n)\,.$ Let $i_{\mathrm {F} _{SO}(M)}\colon \mathrm {F} _{SO}(M)\hookrightarrow \mathrm {F} M$ be the canonical embedding. One then can prove that there exists a canonical Kosmann decomposition of the pullback bundle $i_{\mathrm {F} _{SO}(M)}^{\ast }(T\mathrm {F} M)\to \mathrm {F} _{SO}(M)$ such that $i_{\mathrm {F} _{SO}(M)}^{\ast }(T\mathrm {F} M)=T\mathrm {F} _{SO}(M)\oplus {\mathcal {M}}(\mathrm {F} _{SO}(M))\,,$ i.e., at each $u\in \mathrm {F} _{SO}(M)$ one has $T_{u}\mathrm {F} M=T_{u}\mathrm {F} _{SO}(M)\oplus {\mathcal {M}}_{u}\,,$ ${\mathcal {M}}_{u}$ being the fiber over $u$ of the subbundle ${\mathcal {M}}(\mathrm {F} _{SO}(M))\to \mathrm {F} _{SO}(M)$ of $i_{\mathrm {F} _{SO}(M)}^{\ast }(V\mathrm {F} M)\to \mathrm {F} _{SO}(M)$. Here, $V\mathrm {F} M\,$ is the vertical subbundle of $T\mathrm {F} M\,$ and at each $u\in \mathrm {F} _{SO}(M)$ the fiber ${\mathcal {M}}_{u}$ is isomorphic to the vector space of symmetric matrices ${\mathfrak {m}}$.
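Concretely, the reductive decomposition ${\mathfrak {gl}}(n)={\mathfrak {so}}(n)\oplus {\mathfrak {m}}$ is the familiar unique split of a real matrix into its antisymmetric and symmetric parts, and the ${\mathrm {Ad}}$-invariance of ${\mathfrak {m}}$ is the statement that conjugating a symmetric matrix by a rotation keeps it symmetric. A minimal numerical sketch (the sample matrix and rotation are arbitrary illustrative choices):

```python
import numpy as np

# gl(n) = so(n) ⊕ m: every real n×n matrix A splits uniquely into an
# antisymmetric part (in so(n)) and a symmetric part (in m).
A = np.array([[1.0, 2.0, 0.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

anti = (A - A.T) / 2   # so(3) component
sym = (A + A.T) / 2    # m component

assert np.allclose(A, anti + sym)
assert np.allclose(anti, -anti.T)   # antisymmetric
assert np.allclose(sym, sym.T)      # symmetric

# Ad-invariance of m: for R in SO(n), Ad_R(S) = R S R^T stays symmetric.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
ad_sym = R @ sym @ R.T
assert np.allclose(ad_sym, ad_sym.T)
```

The direct-sum decomposition of the pullback bundle in the definition above is the bundle-level analogue of this pointwise matrix split.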
From the above canonical and equivariant decomposition, it follows that the restriction $Z\vert _{\mathrm {F} _{SO}(M)}$ of a ${\mathrm {G} \mathrm {L} }(n,\mathbb {R} )$-invariant vector field $Z\,$ on $\mathrm {F} M$ to $\mathrm {F} _{SO}(M)$ splits into an ${\mathrm {S} \mathrm {O} }(n)$-invariant vector field $Z_{K}\,$ on $\mathrm {F} _{SO}(M)$, called the Kosmann vector field associated with $Z\,$, and a transverse vector field $Z_{G}\,$. In particular, for a generic vector field $X\,$ on the base manifold $(M,g)\,$, it follows that the restriction ${\hat {X}}\vert _{\mathrm {F} _{SO}(M)}\,$ to $\mathrm {F} _{SO}(M)\to M$ of its natural lift ${\hat {X}}\,$ onto $\mathrm {F} M\to M$ splits into an ${\mathrm {S} \mathrm {O} }(n)$-invariant vector field $X_{K}\,$ on $\mathrm {F} _{SO}(M)$, called the Kosmann lift of $X\,$, and a transverse vector field $X_{G}\,$.

See also

• Frame bundle
• Orthonormal frame bundle
• Principal bundle
• Spin bundle
• Connection (mathematics)
• G-structure
• Spin manifold
• Spin structure

Notes

1. Fatibene, L.; Ferraris, M.; Francaviglia, M.; Godina, M. (1996). "A geometric definition of Lie derivative for Spinor Fields". In Janyska, J.; Kolář, I.; Slovák, J. (eds.). Proceedings of the 6th International Conference on Differential Geometry and Applications, August 28th–September 1st 1995 (Brno, Czech Republic). Brno: Masaryk University. pp. 549–558. arXiv:gr-qc/9608003v1. Bibcode:1996gr.qc.....8003F. ISBN 80-210-1369-9.
2. Godina, M.; Matteucci, P. (2003). "Reductive G-structures and Lie derivatives". Journal of Geometry and Physics. 47: 66–86. arXiv:math/0201235. Bibcode:2003JGP....47...66G. doi:10.1016/S0393-0440(02)00174-2.
3. Kobayashi, Shoshichi; Nomizu, Katsumi (1996), Foundations of Differential Geometry, vol. 1, Wiley-Interscience, ISBN 0-470-49647-9 (Example 5.2) pp. 55-56

References

• Kobayashi, Shoshichi; Nomizu, Katsumi (1996), Foundations of Differential Geometry, vol.
1 (New ed.), Wiley-Interscience, ISBN 0-471-15733-3
• Kolář, Ivan; Michor, Peter; Slovák, Jan (1993), Natural operators in differential geometry (PDF), Springer-Verlag, archived from the original (PDF) on 30 March 2017, retrieved 4 June 2011
• Sternberg, S. (1983), Lectures on Differential Geometry (2nd ed.), New York: Chelsea Publishing Co., ISBN 0-8218-1385-4
• Fatibene, Lorenzo; Francaviglia, Mauro (2003), Natural and Gauge Natural Formalism for Classical Field Theories, Kluwer Academic Publishers, ISBN 978-1-4020-1703-2
Tariff elimination versus tax avoidance: free trade agreements and transfer pricing

Hiroshi Mukunoki & Hirofumi Okoshi

International Tax and Public Finance, volume 28, pages 1188–1210 (2021)

We explore the new roles of rules of origin (ROO) when multinational enterprises (MNEs) manipulate their transfer prices to avoid a high corporate tax. The ROO under a free trade agreement (FTA) require exporters to identify the origin of exports to be eligible for a preferential tariff rate. We find that a value-added criterion of ROO restricts abusive transfer pricing by MNEs. Interestingly, an FTA with ROO can induce MNEs to shift profits from a low- to high-tax country. Because the ROO augment tax revenues inside FTA countries, they can transform a welfare-reducing FTA into a welfare-improving one.

Tax avoidance by multinational enterprises (MNEs) has become controversial in the last two decades of rapid globalization. The Organisation for Economic Co-operation and Development (OECD) estimates that countries lose 4–10\(\%\) of corporate income tax revenue annually because of profit shifting.Footnote 1 One way to shift profits across countries is to manipulate the price of intra-firm trade (transfer price), which is known as abusive transfer pricing. Because MNEs determine the prices of transactions among related companies, they manipulate these prices to decrease profits in high-tax countries and conversely increase profits in low-tax countries. Some empirical research has provided evidence of transfer pricing being used to save tax payments.Footnote 2 Because the taxes paid by firms are one of the main sources of government revenues, tax avoidance by MNEs has become a serious issue, as trade liberalization and the creation of global value chains increase intra-firm trade and provide MNEs with greater opportunities to redistribute their tax base to low-tax countries.
Our primary focus is on how such losses of tax revenues are linked to trade liberalization driven by trade agreements. Trade agreements among countries facilitate firms' exports and imports. They also influence firm behaviors in other respects including transfer pricing and generate more complicated welfare effects. In particular, the specific rules needed to implement trade agreements complicate the effects of trade liberalization. We focus on the rules of origin (ROO) of a free trade agreement (FTA), which require exporters in member countries making tariff-free exports to other member countries to prove that the exported products originated within the FTA.Footnote 3 To meet the ROO, firms may change their strategies such as their input procurement. Conconi et al. (2018) concludes that the ROO of the North American Free Trade Agreement (NAFTA) reduce imports of inputs from non-member countries, suggesting that such rules cause inefficiency in input procurement. Considering tax avoidance, this also implies that the ROO can hinder MNEs from shifting profits within the firm because they may need to consider whether their intra-firm transactions satisfy the requirements of the ROO. One way to prove the origin is to satisfy the value-added (VA) criterion, which is closely related to transfer price manipulation.Footnote 4 The VA criterion requires firms to add a sufficient value inside FTA member countries. Specifically, let p denote the export price of the product and r denote the value of the input materials, which are used per unit of final good production and do not originate in the FTA. The VA criterion typically requires that the VA ratio \((p-r)/p\) is above the specified level. This method of calculating the VA content is called the "transaction value method." 
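A toy calculation of the transaction value method (all numbers are hypothetical) makes the mechanism concrete: with export price p and non-originating input value r, the good qualifies when (p − r)/p meets the required threshold, so inflating the transfer price r can push a product out of compliance.

```python
def va_ratio(p, r):
    """Value-added ratio (p - r)/p under the transaction value method."""
    return (p - r) / p

def qualifies(p, r, threshold):
    """True if the product meets the VA criterion of the ROO."""
    return va_ratio(p, r) >= threshold

p = 100.0          # export price of the final good (hypothetical)
threshold = 0.6    # hypothetical 60% value-added requirement

# A low transfer price on imported inputs satisfies the criterion...
assert qualifies(p, r=30.0, threshold=threshold)      # VA ratio 0.7

# ...but an inflated transfer price (profit shifting) breaks it.
assert not qualifies(p, r=55.0, threshold=threshold)  # VA ratio 0.45

# The highest compliant transfer price is r = (1 - threshold) * p.
r_max = (1 - threshold) * p
assert qualifies(p, r_max, threshold)
print(r_max)  # 40.0
```

This cap on r is exactly why the VA criterion can constrain abusive transfer pricing, as discussed next.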
The value of the input materials depends on the transfer price if MNEs procure inputs from related companies outside FTA countries, and, hence, a VA criterion can constrain MNEs from engaging in tax avoidance through abusive transfer pricing. However, a VA criterion allows the MNE an option to meet the ROO without changing its input sources. This violates the principal purpose of the ROO to increase local procurement of inputs. Although this possibility has been overlooked in the economic literature on transfer pricing and FTA, it has been noted by some policy researchers. LaNasa (1996) stated that "[v]alue-added rules of origin may be circumvented by the use of transfer pricing, ... to increase the amount of local value added to ensure that the good qualifies as originating in the country of assembly, related parties could reduce the price of the imported materials used in the finished product." Eden (1998) examined the ROO of NAFTA and suggested that "...underinvoicing parts coming outside North America and overinvoicing locally made parts would increase the North American content." Falvey and Reed (1998) indicated that the VA criterion "...allows room for manipulation of prices as well as quantities, and may generate additional incentives for transfer pricing by multinationals." 
Reuter (2012) similarly pointed out that "most rules of origin are on a percent-of-value basis ...By overinvoicing the value added, the MNE can more easily meet a rule-of-origin test and qualify for duty-free entry for its products into another country in the free trade area."Footnote 5 The World Customs Organization notes that a disadvantage of the VA criterion of ROO is the possible exposure to transfer pricing.Footnote 6 A recent report by Deloitte Touche Tohmatsu Limited notes that exporters should consider an adjustment of transfer prices when making use of FTAs.Footnote 7 As Estevadeordal and Suominen (2003) reported that 68 of the 87 FTAs they analyzed employ a VA criterion at least in a particular product category, the aforementioned statements suggest that the effect of FTAs on tax avoidance and welfare can be understood clearly if we investigate the role of the ROO in restricting the abusive use of transfer pricing. Given countries' experience with tax avoidance, analyzing the anti-tax-avoidance aspect of the ROO has crucial policy implications. In reality, different groups of policymakers, namely, customs and tax authorities, are responsible for designing trade policies and regulating transfer pricing. The interaction between these two authorities has been rare. According to WCO (2018), "...the WCO is working with the OECD and World Bank Group to encourage Customs and tax administrations to establish bilateral lines of communication in order to exchange knowledge, skills and data, where possible, which will help ensure that each authority has the broadest picture of an MNE's business, its compliance record and can make informed decisions on the collect revenue liability." Thus, the increasing number of FTAs and the growing volume of intra-firm trade make it necessary to explore the relationship between transfer prices and ROO.
Against this backdrop, this study builds an international monopoly model to investigate an MNE's response to an FTA formation with two new elements: transfer pricing and ROO.Footnote 8 We consider a situation wherein an MNE produces final goods within an FTA member country and exports the goods to other FTA member countries. The MNE procures inputs from either an FTA member country or a low-tax country outside the FTA. In the absence of ROO, the MNE always prefers to produce inputs outside the FTA by itself and avoids tax by setting a high transfer price. However, the presence of ROO restricts the manipulation of the transfer price because a high transfer price reduces the VA ratio of the final product inside the FTA. Thus, the MNE chooses one of three options: (i) fully manipulating its transfer price to avoid tax payments at the expense of the preferential tariff of the FTA, (ii) procuring inputs inside an FTA to comply with the ROO and eliminate tariffs, or (iii) adjusting its transfer price to comply with the ROO to eliminate tariffs and pursue partial tax avoidance.Footnote 9 This model exhibits the MNE's choice of the "tariff elimination versus tax avoidance" via its location choice of producing inputs and/or transfer price manipulation.Footnote 10 When the MNE chooses the second option, it no longer avoids higher taxes. When it chooses the third option, its transfer price deviates from the optimal abusive transfer price, which retains part of the MNE's tax base in the high-tax country. As the ROO restrict the abusive use of transfer pricing through either a change in input procurement or an adjustment of the transfer price, tax revenues in a high-tax country increase. Thus, the VA criterion works as an anti-tax avoidance policy. Interestingly, the direction of shifted profits can be from a low-tax country to a high-tax country when the MNE adjusts the transfer price to meet the VA criterion. 
Empirical analyses on transfer pricing should consider the possibility that the VA criterion of ROO affects transfer pricing. The ROO can also increase the welfare gains of FTA countries because members can collect tax revenues from the MNE. Although consumers' gains from the FTA are smaller than those without ROO, tax revenues from the MNE can cover the smaller consumers' gains and the loss of tariff revenues. Our results present a new role of the ROO in preventing abusive transfer pricing and increasing the welfare gains of FTA formation for member countries. Our model contributes to the literature on transfer pricing policies since MNEs have been accused of tax avoidance activities. How to regulate transfer prices has been a central issue in policy debates. Several studies examine the effects of policies on transfer price manipulation. Elitzur and Mintz (1996) investigated the determinants of transfer prices when tax authorities use the cost-plus method to infer the appropriate transfer price. Nielsen et al. (2003) compared the use of transfer prices under two international tax systems, namely, separating account and formula apportionment.Footnote 11 Bond and Gresik (2020) examined a high-tax country's unilateral adoption of border adjusted taxes and cash flow taxes when heterogeneous firms choose either arm's length transactions or to establish their own subsidiaries in their input sourcing. Choi et al. (2020) examined the effect of the arm's length principle on a monopolistic MNE's transfer pricing and tax competition.Footnote 12 As their focus was on direct regulation on transfer pricing, the role of ROO in preventing abusive transfer pricing is overlooked in the literature. Our second contribution is to the literature on FTAs with ROO. Krishna and Krueger (1995) showed that the ROO may work as hidden protection against input suppliers outside the FTA. 
Ju and Krishna (2005) showed that ROO can either increase or decrease the price of FTA-made inputs, depending on the number of firms complying with the rules. However, their focus was on intermediate goods markets, and they did not consider how ROO affect consumers. Demidova and Krishna (2008) extended the work of Ju and Krishna (2005) by including the heterogeneity of productivity of final good producers. They showed that productivity sorting ensures the negative relationship between the stringency of the rules and the demand for FTA-made inputs (i.e., wages). Ishikawa et al. (2007) focused on final good markets, showing that the ROO have a role to segment markets within the FTA, and that both inside and outside firms producing final goods may benefit from the ROO at the cost of consumers. Mukunoki (2017) showed that an FTA with ROO may harm consumers if it changes outside firms' location decisions. Mukunoki and Okoshi (2021) investigated a firm's export price manipulation to comply with the ROO, particularly how an MNE's transfer price manipulation affects the inputs imported from outside the FTA. None of these studies, however, consider transfer price manipulation to meet ROO. Felbermayr et al. (2019) suggested that there is little rationale for ROO because tariff circumvention is not profitable for 86% of bilateral trade owing to the small differences in external tariffs and non-negligible transport costs. This study thus provides a new rationale for the ROO from the viewpoint of tax avoidance by an MNE. This study also examines the connection between transfer pricing and trade policy. In this regard, Horst (1971) showed that the optimal transfer price is influenced by not only tax differentials but also tariffs. Schjelderup and Sorgard (1997) showed that if the importing country imposes an ad valorem tariff on inputs, an MNE can save tariff payments by reducing its export price. 
Subsequently, the optimal transfer price is influenced by both corporate tax avoidance and tariff avoidance.Footnote 13 Kant (1988) regarded the transfer price as a tool to repatriate profits when a foreign subsidiary is not fully owned by the parent firm. The study found that even when the tax rate in the home country is higher than that in the host country, an MNE has an incentive to remit all the profits earned in the low-tax host country. These studies, however, did not explicitly consider trade liberalization by forming an FTA, let alone the effects of ROO on transfer prices. The rest of the paper is organized as follows. Section 2 presents the model and derives the equilibrium. Section 3 investigates the effects of FTA formation on profit shifting, consumers, and the MNE's profit. Section 4 discusses total welfare of member countries and the effect of input tariff. The last section concludes. An Online Appendix shows the results are robust by relaxing some key assumptions. There are three countries, H, F, and O; countries H and F are potential FTA members. Figure 1 illustrates the model. A single firm, an MNE, produces a final good using inputs and sells it in country F. For simplicity, the benchmark model ignores the output market in country H and focuses only on the consumers in country F. This assumption does not qualitatively change our main results as long as the two markets are segmented in the sense that the MNE can make a separate decision in each market. The representative consumer's utility in country F is given by \(U=ax-\frac{x^{2}}{2}\), where x is the consumption of the final good. By utility maximization, the demand function becomes \(x=a-p\). One of the two member countries, country H, has a location advantage for final good production because of low factor prices, a large pool of skilled labor and so on. Therefore, country H hosts a downstream affiliate of the MNE (firm \(M_{H}\)). 
The MNE's upstream affiliate (firm \(M_{O}\)) is located in country O. Firm \(M_{O}\) may also produce an input for final good production, as explained below. Firm \(M_O\) already operates in country O and generates positive profits, \(\overline{\pi }\), which are exogenously given. To produce the final product, firm \(M_H\) needs to procure one unit of the input for each unit of the final product.Footnote 14 Firm \(M_H\) can procure the input from a perfectly competitive input market inside the FTA countries, which supplies the input at the price of w. Alternatively, firm \(M_O\) located in country O can produce the input at the cost of \(w-\Delta \). We assume \(\Delta \in (0,w]\); therefore, input production in country O is more efficient than that in country H. This implies that the self-production of the input in country O gives the MNE not only a lower input cost but also a tax-saving opportunity via the manipulation of the transfer price, which is denoted by r. We rule out transfer prices that yield negative reported profits because tax authorities can audit such tax avoidance. Without the FTA, country F imposes a specific tariff, \(\tau \), on imports of the final good. We consider the case in which \(\tau <a-w+\Delta \) holds to rule out zero output in equilibrium. The governments of countries O and H levy corporate taxes t and T, respectively, on reported profits.Footnote 15 To focus on the effect of FTA formation on the final good market, tariffs on inputs are assumed away.Footnote 16 Hereafter, we focus on the case in which \(T\ge t\) holds. Without loss of generality, we set \(t=0\). Our assumptions are consistent with observed tax policies. For instance, Mexico and Belgium have higher corporate taxes than other countries, and these countries are major hosts of export platform foreign direct investment (FDI), where the FDI firm exports from the host country to other countries.
For example, see Tekin-Koru and Waldkirch (2010) for Mexican evidence of its increasing role as a host of export platform FDI. Tintelnot (2017) reported the share of output exported to countries outside the host country by United States MNEs; in Belgium, for instance, this share was 63% in 2004.

The equilibrium without ROO

Let us first derive the market equilibrium without the ROO under each scheme of the MNE's choice. In the inshoring scheme, denoted as scheme I, the MNE purchases the input from local producers. Firm \(M_H\) earns profits under the input cost w and tax rate T. In the offshoring scheme, denoted as scheme O, the MNE's upstream affiliate in country O, firm \(M_O\), produces the input at the production cost of \(w-\Delta \). Firm \(M_O\) sells the input to firm \(M_H\) at the input price, denoted by r. Thus, r is the transfer price of the MNE. In the inshoring scheme, the MNE maximizes the post-tax profit, \(\Pi =(1-T)(p-w-\lambda \tau )x+\overline{\pi }\), with respect to p. The equilibrium price and sales are, respectively, \(p^{I}=\frac{a+w+\lambda \tau }{2}\) and \(x^{I}=\frac{a-w-\lambda \tau }{2}\), where \(\lambda \) is an indicator variable that equals zero if the MNE qualifies for an FTA tariff rate, and unity otherwise. Substituting these, the equilibrium post-tax profits under the inshoring scheme become

$$\begin{aligned} \Pi ^{I}=(1-T)\underbrace{(x^{I})^{2}}_{\pi _{H}^{I}}+\underbrace{\overline{\pi }}_{\pi _{O}^{I}}. \end{aligned}$$

\(\pi _{i}^{s}\) represents the reported profits of firm \(M_{i}\) under scheme \(s\in \{I,O\}\). In the offshoring scheme, the MNE maximizes

$$\begin{aligned} \Pi ^{O}=(1-T)\underbrace{(p-r-\lambda \tau )x}_{\pi _{H}^{O}}+\underbrace{[\{r-(w-\Delta )\}x+\overline{\pi }]}_{\pi _{O}^{O}} \end{aligned}$$

with respect to r and p, subject to \(\pi _{H}^{O}\ge 0\) and \(\pi _{O}^{O}\ge 0\).
Since \(\frac{\partial \Pi ^{O}}{\partial r}=Tx>0\) always holds, the MNE is willing to set the transfer price as high as possible. Therefore, the optimal abusive transfer price is set at the level that transfers all the profits earned in the high-tax country to the low-tax country, \(r=p-\lambda \tau \).Footnote 17 The post-tax profits are then rewritten as \(\Pi ^{O}=[\{p-\lambda \tau -(w-\Delta )\}x+\overline{\pi }]\). By maximizing them with respect to p, the price, sales, and transfer price in equilibrium are given by \(p^{O}=\frac{a+w-\Delta +\lambda \tau }{2}\), \(x^{O}=\frac{a-w+\Delta -\lambda \tau }{2}\), and \(r^{O}=\frac{a+w-\Delta -\lambda \tau }{2}\), respectively. Thus, the post-tax profits under the offshoring scheme are given by

$$\begin{aligned} \Pi ^{O}=( x^{O}) ^{2}+\overline{\pi }. \end{aligned}$$

Irrespective of the formation of an FTA, the MNE always prefers the offshoring scheme to the inshoring scheme, as

$$\begin{aligned} \Pi ^{O}-\Pi ^{I}=(x^{O})^{2}-(1-T)(x^{I})^{2}\ge 0 \end{aligned}$$

holds for a given \(\lambda \), because \(x^{O}>x^{I}\). Intuitively, offshoring generates more profits because procurement from the upstream affiliate provides the MNE with both efficient input production and the opportunity to shift profits. For notational convenience, we hereafter use the superscript "\(*\)" for the variables in the pre-FTA case and "\(\widehat{\quad }\)" for the post-FTA variables without ROO.

The equilibrium with ROO

Let us next consider an FTA formation with the ROO. As stated in the Introduction, our focus is on the VA criterion of ROO. Specifically, a VA criterion is applied to exports of the final good within the FTA. For notational convenience, we use "\(\widetilde{\quad }\)" to mark the variables in the presence of the ROO. After an FTA is formed, firm \(M_{H}\) needs to meet the VA criterion to be eligible for the elimination of \(\tau \).
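As a quick numerical check of the comparison above, the closed-form profits can be evaluated directly. The sketch below uses purely illustrative parameter values (they are assumptions, chosen to satisfy \(\tau <a-w+\Delta \) and \(\Delta \in (0,w]\), and are not taken from the paper):

```python
# Illustrative check that offshoring dominates inshoring, with or without the FTA.
# All parameter values are assumptions for illustration only.
a, w, delta, tau, T = 10.0, 2.0, 0.5, 2.0, 0.3
pi_bar = 5.0  # exogenous profit of the upstream affiliate (assumed)

def profit_inshoring(lam):
    """Pi^I = (1 - T)(x^I)^2 + pi_bar with x^I = (a - w - lam*tau)/2."""
    x = (a - w - lam * tau) / 2
    return (1 - T) * x**2 + pi_bar

def profit_offshoring(lam):
    """Pi^O = (x^O)^2 + pi_bar with x^O = (a - w + delta - lam*tau)/2,
    reflecting full profit shifting via r = p - lam*tau."""
    x = (a - w + delta - lam * tau) / 2
    return x**2 + pi_bar

# Offshoring dominates both pre-FTA (lam = 1) and post-FTA without ROO (lam = 0).
for lam in (1, 0):
    assert profit_offshoring(lam) > profit_inshoring(lam)
```

The dominance holds for any admissible parameters because \(x^{O}>x^{I}\) and \(T\ge 0\); the numbers here only make the gap concrete.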
Specifically, the ROO require firm \(M_{H}\) to add a proportion of at least \(\underline{\alpha }\) \((\in (0,1])\) of the value of the exported good within the FTA. There are three cases, which we explain sequentially below.

First, if firm \(M_{H}\) chooses the offshoring of input production and sets an abusive transfer price, \(r=p\), the VA ratio is always zero, which fails to meet the requirements of the ROO. Hence, the final good exports of the MNE incur tariff \(\tau \), even after the formation of the FTA. We call this case scheme N (non-compliance).Footnote 18 The equilibrium outcomes of this scheme are obtained by setting \(\lambda =1\) in \(p^O\), \(x^O\), and \(r^O\), as well as in the other corresponding welfare components.

Second, if firm \(M_{H}\) chooses the inshoring of input production (scheme I), the VA ratio is 1, which satisfies the requirement of the ROO. The equilibrium outcomes are obtained by setting \(\lambda =0\) in \(p^I\) and \(x^I\).

Third, if firm \(M_{H}\) chooses the offshoring of input production and sets p and r such that they satisfy

$$\begin{aligned} \alpha \equiv \frac{p-r}{p}\ge \underline{\alpha }, \end{aligned}$$

it complies with the ROO, and the tariff is thus eliminated. A combination of p and r that satisfies (5) with strict inequality cannot be an equilibrium. In Sect. 2.1, we showed that the MNE's post-tax profit after FTA formation is maximized by setting \(p=r\) without the ROO. This implies that, as long as \(\frac{p-r}{p}>\underline{\alpha }\) holds, the MNE always has an incentive to reduce \(p-r\) by adjusting p and r.Footnote 19 Therefore, the MNE optimally sets p and r such that (5) is satisfied with equality, which yields:

$$\begin{aligned} r=(1-\underline{\alpha })p. \end{aligned}$$

We call this case scheme B (binding ROO).
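The three cases differ only in how the VA ratio of (5) responds to the transfer price. A minimal sketch (the price level and required threshold are arbitrary illustrations, not values from the paper):

```python
# VA ratio alpha = (p - r)/p under the three post-FTA cases (illustrative values).
alpha_req = 0.4   # required VA threshold, assumed

def va_ratio(p, r):
    """Value-added share generated inside the FTA, eq. (5)."""
    return (p - r) / p

p = 6.0                                  # an arbitrary output price
assert va_ratio(p, p) == 0.0             # scheme N: abusive r = p never complies
assert va_ratio(p, 0.0) == 1.0           # scheme I: all value added inside the FTA
r_B = (1 - alpha_req) * p                # scheme B: eq. (6) makes (5) bind
assert abs(va_ratio(p, r_B) - alpha_req) < 1e-12
```
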
By substituting (6) and \(\lambda =0\) into (2), the post-tax profits under scheme B are given by

$$\begin{aligned} \Pi ^{B} =(1-T)\underbrace{\underline{\alpha }px}_{\pi _{H}^{B}}+\underbrace{[\{(1-\underline{\alpha })p-(w-\Delta )\}x+\overline{\pi }]}_{\pi _{O}^{B}}=\{1-\underline{\alpha }T\}(p-c_{M})x+\overline{\pi }, \end{aligned}$$

where \(c_{M}=\frac{w-\Delta }{1-\underline{\alpha }T}(>w-\Delta )\) represents the "perceived marginal cost" of producing the final good.Footnote 20 The perceived marginal cost is higher than the physical marginal cost, \(w-\Delta \), as long as \(\underline{\alpha }\) is positive and \(T>0\). Both an increase in the stringency of the ROO (i.e., \(\underline{\alpha }\)) and an increase in the tax in country H (i.e., T) raise the perceived marginal cost. We can interpret the perceived marginal cost as follows. Without any ROO, the MNE shifts all the profits to the low-tax country by setting \(r=p\). Starting from \(r=p\), the introduction of the ROO decreases the transfer price by \(\underline{\alpha }p\) and increases the per-unit tax payments of the MNE by \(\underline{\alpha }pT>0\). This means that the ROO decrease the MNE's marginal gains from selling the final good in terms of post-tax profits. Therefore, the MNE becomes less aggressive in the product market under scheme B. This lower incentive to sell the final good is reflected in the perceived marginal cost. The MNE sets its price based on \(c_{M}\) instead of \(w-\Delta \); that is, the binding ROO increase the equilibrium price, since \(c_{M}>w-\Delta \) holds. By maximizing (7) with respect to p, we obtain the equilibrium price, sales, and transfer price as \(\widetilde{p}^{B}=\frac{a+c_{M}}{2}\), \(\widetilde{x}^{B}=\frac{a-c_{M}}{2}\), and \(\widetilde{r}^{B}=(1-\underline{\alpha })\left( \frac{a+c_{M}}{2}\right) \), respectively. We thus have the following proposition:

Suppose that an FTA is formed and the MNE chooses offshoring.
The ROO induce the MNE to set a lower transfer price and a higher output price if it complies with the ROO. (See Appendix A.1 for the proof.)

The MNE adjusts both the transfer price and the output price to satisfy (5). If the MNE kept \(p=\widehat{p}^{O}\) and only lowered r to satisfy the VA ratio, the increase in the tax burden would hurt the MNE more. If the MNE only raised p while keeping \(r=\widehat{r}^{O}\), it would lose even more profit in the product market. The rise in the output price implies that the tariff pass-through is smaller with the ROO than without them, because a part of the tariff reduction is offset by the adjustment of the output price to meet the VA criterion. A smaller tariff pass-through reduces the consumers' gains from an FTA formation, as shown in Sect. 3.2. By substituting the equilibrium price and sales, the equilibrium post-tax profit of the MNE under scheme B becomes

$$\begin{aligned} \widetilde{\Pi }^{B}=\frac{\{(1-\underline{\alpha } T)a-w+\Delta \}^{2}}{4(1-\underline{\alpha }T)}+\overline{\pi }. \end{aligned}$$

\(\widetilde{\Pi }^{B}\) is a decreasing function of \(\underline{\alpha }\) because an increase in \(\underline{\alpha }\) forces the MNE to set a transfer price and an output price that deviate further from the levels at which it avoids tax payments in the high-tax country.

The MNE's choice of scheme

In Sect. 2.1, we argued that the MNE always chooses scheme O before an FTA formation, and also after an FTA formation without the ROO. Under an FTA formation with the ROO, schemes I, N, and B are possible equilibrium outcomes. Among the three possible schemes (I, N, and B), the MNE chooses the one that maximizes its profits. Let us first compare \(\widetilde{\Pi }^{I}\) with \(\widetilde{\Pi }^{N}\). Since both profits are independent of the VA threshold, \(\underline{\alpha }\), the tariff level and the tax differential determine which profit is larger. The MNE faces a trade-off between tax avoidance and tariff avoidance.
If the tax differential is large, the MNE prefers scheme N to scheme I because of the stronger incentive to avoid tax payments in country H. If the tax differential is small, scheme I is preferable for the MNE. Thus, there exists a unique threshold of T, \(\widetilde{T}\), such that \(\widetilde{\Pi }^{I}=\widetilde{\Pi }^{N}\) holds. As a higher tariff discourages the MNE from choosing scheme N, \(\frac{\partial \widetilde{T}}{\partial \tau }>0\) holds.Footnote 21

Next, we compare the profits in scheme B with those in schemes N and I. \(\widetilde{\Pi }^{B}\) is decreasing in \(\underline{\alpha }\), and \(\widetilde{\Pi }^{B}=\widehat{\Pi }^{O}\) holds at \(\underline{\alpha }=0\), which is larger than \(\widetilde{\Pi }^{N}\) and \(\widetilde{\Pi }^{I}\). Therefore, we can derive a unique threshold, \(\underline{\alpha }^{N}\) (resp. \(\underline{\alpha }^{I}\)), above which the MNE prefers scheme N (resp. scheme I) to scheme B. Intuitively, under a less strict ROO, the MNE prefers scheme B to schemes N and I because adjusting the transfer price to comply with the ROO becomes less costly as the VA criterion becomes less stringent. In other words, the MNE's gains from tariff elimination through adjusting the transfer price become smaller as the FTA is attached to more stringent ROO. Putting the above comparisons together, we characterize the equilibrium outcomes as follows:

The MNE chooses offshoring (scheme O) before an FTA formation or after an FTA formation without the ROO. After an FTA formation with the ROO, the MNE chooses (i) inshoring (scheme I) if \(T\le \widetilde{T}\) and \(\underline{\alpha }>\underline{\alpha }^{I}\) hold, (ii) offshoring, with its exports incurring the tariff (scheme N), if \(\widetilde{T}<T\) and \(\underline{\alpha }>\underline{\alpha }^{N}\) hold, and (iii) offshoring with the transfer price adjusted to meet the ROO (scheme B) if \(\underline{\alpha }\le \min \{\underline{\alpha }^{I},\underline{\alpha }^{N}\}\) holds.
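The characterization can be illustrated by evaluating the three closed-form profits and picking the maximum. The parameter values below are assumptions chosen so that each scheme arises for some \((T,\underline{\alpha })\) pair, mirroring the regions of Fig. 2:

```python
# Illustrative scheme choice among I, N, and B (all parameter values assumed).
a, w, delta, tau, pi_bar = 10.0, 2.0, 0.5, 2.0, 5.0

def profit_I(T):                 # inshoring, FTA tariff used (lam = 0)
    return (1 - T) * ((a - w) / 2) ** 2 + pi_bar

def profit_N(T):                 # offshoring without compliance (lam = 1)
    return ((a - w + delta - tau) / 2) ** 2 + pi_bar

def profit_B(T, alpha_req):      # offshoring with binding ROO, eq. (8)
    k = 1 - alpha_req * T
    return (k * a - w + delta) ** 2 / (4 * k) + pi_bar

def best_scheme(T, alpha_req):
    profits = {"I": profit_I(T), "N": profit_N(T), "B": profit_B(T, alpha_req)}
    return max(profits, key=profits.get)

assert best_scheme(T=0.3, alpha_req=0.1) == "B"   # lax ROO: comply by adjusting r
assert best_scheme(T=0.3, alpha_req=1.0) == "I"   # strict ROO, moderate tax gap
assert best_scheme(T=0.8, alpha_req=1.0) == "N"   # strict ROO, large tax gap
```
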
Fig. 2 The equilibrium MNE's choice

The equilibrium outcomes with the ROO are illustrated in Fig. 2. The MNE always chooses the self-production of inputs before an FTA is formed. After an FTA is formed, the proposition shows that the MNE may change its input procurement from self-production to the purchase of local inputs, despite the higher production cost. As Conconi et al. (2018) showed, the ROO lower the likelihood of input procurement from non-FTA countries. This "input trade diversion" corresponds to the area of scheme I in Fig. 2. Further, as Takahashi and Urata (2010) and Hayakawa et al. (2013) noted, some firms may not use FTA tariffs because of the burden of the ROO. This possibility corresponds to the area of scheme N in Fig. 2. A standard explanation for the non-use of an FTA is that export firms must incur additional costs to meet the ROO. Our model suggests another burden of meeting the ROO: it increases tax payments by restricting the MNE's freedom to adjust its transfer price.

Effects of FTA formation

We have explored the equilibrium outcomes of an FTA with the ROO. This section analyzes how an FTA formation prevents the MNE's profit shifting and how it affects the consumer surplus and the MNE's profit.

FTA as an anti-tax avoidance policy

Let us explore how an FTA formation affects the MNE's tax avoidance. When the MNE engages in transfer pricing, an FTA with the ROO enables member countries to recover some of the MNE's tax bases. When the MNE procures the input from the local input market (scheme I), there are no opportunities to shift profits, and all the tax bases are retained in country H. When it adjusts its transfer price to meet the VA criterion of the ROO, a part of the tax base is retained in country H because of the limited use of abusive transfer pricing. Notably, we can confirm that the ROO can reverse the direction of profit shifting across countries.
To see this point more clearly, it is useful to decompose the optimal transfer price into a "tax avoidance motive" and a "tariff elimination motive." In the pre-FTA equilibrium, the optimal transfer price is always above the marginal cost of input production:

$$\begin{aligned} r=w-\Delta +\underbrace{\frac{a-w+\Delta -\lambda \tau }{2}}_{\text {Tax avoidance motive}}. \end{aligned}$$

The second term of (9) represents the tax avoidance motive, which raises the transfer price to the level at which the reported profit of the MNE's downstream affiliate is zero. In scheme B of the post-FTA equilibrium, the tariff elimination motive counters the tax avoidance motive. The optimal transfer price is expressed as

$$\begin{aligned} \widetilde{r}^B=w-\Delta +\underbrace{\frac{a-w+\Delta }{2}}_{\text {Tax avoidance motive}}-\underbrace{\frac{\underline{\alpha }\{(1-\underline{\alpha }T)a+(1-T)(w-\Delta )\}}{2(1-\underline{\alpha }T)}}_{\text {Tariff elimination motive}}. \end{aligned}$$

The third term of (10) captures the tariff elimination motive, which is zero at \(\underline{\alpha }=0\) and increasing in \(\underline{\alpha }\). If the tariff elimination motive is sufficiently large, such that \(\widetilde{r}^B\) is lower than \(w-\Delta \), then the profits of the MNE shift from the low-tax country to the high-tax country, which is in sharp contrast to the conventional effect of transfer pricing. Therefore, the direction of profit shifting relies on the relative size of the two motives. Indeed, we can derive a unique threshold of \(\underline{\alpha }\), \(\underline{\alpha }^r\), such that \(\widetilde{r}^B<w-\Delta \) holds and profits shift from the low-tax country to the high-tax country if \(\underline{\alpha }>\underline{\alpha }^r\) holds. Figure 3 illustrates the reversal of profit shifting.Footnote 22 The dotted curve represents \(\underline{\alpha }^r\), and the dotted area in the figure represents the case in which profits flow from the low-tax country to the high-tax country.
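The reversal can be seen numerically from \(\widetilde{r}^{B}=(1-\underline{\alpha })\frac{a+c_{M}}{2}\): with the assumed values below, a lax criterion keeps \(\widetilde{r}^{B}\) above \(w-\Delta \) (the conventional direction of shifting), while a strict one pushes it below. The parameters are illustrative assumptions, not calibrated values:

```python
# Reversal of profit shifting under scheme B (assumed parameter values).
a, w, delta, T = 10.0, 2.0, 0.5, 0.3
mc = w - delta                            # true marginal cost of the input in country O

def transfer_price_B(alpha_req):
    c_M = mc / (1 - alpha_req * T)        # perceived marginal cost
    p = (a + c_M) / 2                     # equilibrium output price under scheme B
    return (1 - alpha_req) * p            # r~^B = (1 - alpha_req) * p

assert transfer_price_B(0.1) > mc   # lax ROO: profits still flow to low-tax country O
assert transfer_price_B(0.9) < mc   # strict ROO: reversal, tax base flows to country H
```

The threshold \(\underline{\alpha }^r\) lies where `transfer_price_B` crosses `mc`; its exact location depends on all the parameters.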
Fig. 3 The direction of the MNE's shifted profits

The following proposition summarizes the effect on tax revenue.

An FTA formation with the ROO reduces the profits of the MNE that are shifted from the high-tax country to the low-tax country if the MNE uses an FTA tariff. The MNE shifts its profits from the low-tax country to the high-tax country if the tariff elimination motive of transfer pricing is sufficiently large.

This proposition sheds new light on a role of the ROO that has been overlooked in policy debates. As Proposition 2 shows, an FTA formation with the ROO can induce the MNE to abandon the self-production of inputs, which serves as an opportunity to avoid tax.Footnote 23 This result suggests that a VA criterion provides another channel to keep MNEs away from tax avoidance by restricting the extent of abusive transfer pricing. Although the main purpose of imposing the ROO is to prevent trade circumvention, the ROO also play a role in preventing tax avoidance. Furthermore, Proposition 3 provides a new empirical implication for estimating transfer pricing. As our model shows, the optimal transfer price depends on the stringency of the VA criterion, suggesting that observed transfer prices can reflect not only the tax avoidance motive but also the tariff elimination motive.Footnote 24

Effects on consumer surplus and the MNE's overall profit

Let us next explore the effect of an FTA formation on consumers and the MNE's overall profit. Under scheme I, FTA formation increases the marginal cost of production from \(w-\Delta \) to w because the MNE changes the location of its input procurement. However, the FTA formation also eliminates the tariff, \(\tau \), faced by the MNE. We can easily confirm the following: The MNE chooses scheme I only if \(\Delta <\tau \) holds (see footnote 21).
Therefore, an FTA always decreases the MNE's marginal cost of exports whenever scheme I becomes the equilibrium outcome, and it always increases the exports of the MNE.Footnote 25 Tariff elimination increases the free-on-board (f.o.b.) price, \(p-\tau \), but by less than the tariff level, and it decreases the consumer price. Thus, an FTA formation benefits consumers and the MNE under scheme I. Under scheme B, the MNE also faces a higher marginal cost because the perceived marginal cost is higher than \(w-\Delta \). As in scheme I, however, the MNE chooses scheme B only if the cost reduction from tariff elimination dominates the increase in the marginal cost of production (see Appendix A.2 for details). Therefore, the FTA always increases the exports of the MNE whenever scheme B becomes the equilibrium outcome. As before, the tariff elimination increases the f.o.b. price and decreases the consumer price. Thus, an FTA formation benefits consumers and the MNE under scheme B. Under scheme N, the situation is the same as in the pre-FTA equilibrium, and the FTA has no effect on the equilibrium outcomes. Putting these cases together, we have the following proposition:

An FTA formation with the ROO always benefits consumers and the MNE if the MNE uses an FTA tariff, whereas it has no effect on consumers and the MNE otherwise. However, the presence of the ROO decreases both the consumers' and the MNE's gains, compared to an FTA without ROO.

Although FTA formation is beneficial for consumers and the MNE, the ROO decrease their gains because of the increase in the production cost owing to the inefficient procurement of inputs (scheme I) or the increase in the perceived marginal cost (scheme B). In scheme B, the MNE gives up full tax avoidance, and the adjustment of the transfer price to meet the ROO increases the MNE's perceived marginal cost. We should recognize this export-decreasing effect of the ROO driven by the change in the MNE's transfer pricing.
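The price comparisons behind this result can be checked directly from the closed-form prices. The values below are assumptions (with \(\Delta <\tau \), so that scheme I can arise):

```python
# Consumer prices before and after the FTA under each compliance scheme.
# All parameter values are illustrative assumptions.
a, w, delta, tau, T, alpha_req = 10.0, 2.0, 0.5, 2.0, 0.3, 0.9

p_pre = (a + w - delta + tau) / 2        # pre-FTA: offshoring, tariff paid (lam = 1)
p_I = (a + w) / 2                        # post-FTA, scheme I (inshoring)
c_M = (w - delta) / (1 - alpha_req * T)  # perceived marginal cost under scheme B
p_B = (a + c_M) / 2                      # post-FTA, scheme B (binding ROO)
p_noROO = (a + w - delta) / 2            # post-FTA without ROO (scheme O, lam = 0)

assert p_I < p_pre and p_B < p_pre       # the FTA lowers the consumer price...
assert p_noROO < min(p_I, p_B)           # ...but less than an FTA without ROO would
```
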
We have shown that an FTA with ROO can prevent the MNE's tax avoidance. In this section, we explore how FTA formation affects the total welfare of member countries. This study employs a partial-equilibrium model focusing on one industry, so we should be careful in evaluating the welfare impacts of an FTA because there should also be gains and losses in other industries. Nevertheless, our analysis provides a new element for considering the desirability of an FTA. In addition, the benchmark model assumes that country H does not impose a tariff on inputs imported from country O. The member countries may have an incentive to prevent tax avoidance by the MNE by setting a high tariff on inputs, which reduces the gains from producing inputs outside the FTA. This diminishes the role of the ROO in preventing abusive transfer pricing. We show that the main results of the benchmark model remain unchanged even if country H optimally sets the level of the input tariff.

Total welfare of member countries

To explore the welfare effects of FTAs, we focus on the total welfare of FTA countries. The total welfare of FTA countries under scheme \(s\in \{I,O,B,N\}\) is the sum of the consumer surplus in country F (\(CS_{F}^{s}\)), the tax revenue of country H paid by the MNE (\(TR_{H}^{s}\)), and the tariff revenue of country F (\(TR_{F}^{s}\)):

$$\begin{aligned} W^{s}=CS_{F}^{s}+TR_{H}^{s}+TR_{F}^{s}=\frac{\left( x^{s}\right) ^{2}}{2}+T\pi _{H}^{s}+\lambda \tau x^{s}. \end{aligned}$$

Total welfare does not include the post-tax profits of the MNE because it is owned by residents outside the FTA. We have the following proposition (see Online Appendix B.3 for the proof).

An FTA formation without the ROO benefits member countries when the initial tariff rate is high (\(\tau >\tau ^{W}\)) and hurts them when it is low (\(\tau <\tau ^{W}\)).
An FTA formation with the ROO benefits member countries if (i) the post-FTA scheme is scheme I and \(T>\widetilde{T}^{W}\) holds, or (ii) the post-FTA scheme is scheme B and \(\underline{\alpha }>\underline{\alpha }^{W}\) holds. It has no effect on member countries if the post-FTA scheme is scheme N. Otherwise, an FTA with the ROO hurts member countries.

Let us first explain the welfare effect of an FTA formation without ROO. The post-FTA equilibrium scheme is always scheme O. The member countries cannot collect tax revenues either before or after the FTA formation. The FTA generates a trade-off between an increase in the consumer surplus and the disappearance of tariff revenues. When the initial tariff rate is high (\(\tau >\tau ^{W}\)), the consumers' gains exceed the lost tariff revenues, and the FTA formation increases the total welfare of member countries.

The presence of the ROO changes the welfare effect of FTA formation. The FTA has no effect if the post-FTA equilibrium scheme is scheme N, but it affects the total welfare in scheme I or scheme B. As discussed in Sect. 3.2, the ROO reduce consumer gains from FTA formation in country F in these schemes. However, the ROO also help generate tax revenues in country H if the MNE changes its input procurement from country O to country H, or if it adjusts its transfer price to comply with the ROO. Thus, the ROO can either increase or decrease the welfare gains from FTA formation. If the post-FTA scheme is scheme I, an FTA formation improves the total welfare of the FTA members when the tax gap is relatively high, so that the positive effects from generating tax revenue are large enough. Similarly, if the post-FTA scheme is scheme B and \(\underline{\alpha }\) is high, the MNE needs to adjust its transfer price substantially to comply with the ROO. In this case, the positive effect from gaining the tax revenue is large enough to improve total welfare.
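The components of (11) can be tabulated numerically. With the assumed parameter values below (under which scheme B is indeed the MNE's post-FTA choice in our earlier illustration), the ROO turn a welfare-reducing FTA into a welfare-improving one; this is one possible configuration, not a general result:

```python
# Total FTA-member welfare W = CS + tax revenue in H + tariff revenue in F (eq. (11)).
# All parameter values are illustrative assumptions.
a, w, delta, tau, T, alpha_req = 10.0, 2.0, 0.5, 2.0, 0.3, 0.9

x_pre = (a - w + delta - tau) / 2
W_pre = x_pre**2 / 2 + tau * x_pre       # pre-FTA: CS + tariff revenue, no tax base in H

x_noROO = (a - w + delta) / 2
W_noROO = x_noROO**2 / 2                 # FTA without ROO: CS only (full profit shifting)

c_M = (w - delta) / (1 - alpha_req * T)
p_B, x_B = (a + c_M) / 2, (a - c_M) / 2
W_B = x_B**2 / 2 + T * alpha_req * p_B * x_B   # scheme B: CS + recovered tax revenue

assert W_noROO < W_pre < W_B   # here the ROO turn a welfare-reducing FTA into a gain
```
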
We find that there exists a case wherein an FTA without the ROO worsens total welfare, but an FTA with the ROO improves it, as well as the converse case, wherein an FTA without the ROO improves total welfare, but an FTA with the ROO worsens it (see Online Appendix B.3 for details). However, note that this study employs a partial equilibrium model that focuses on one specific sector. In reality, an FTA affects many sectors, and the welfare effects discussed here explain only a part of the overall effects of the FTA. Nevertheless, the analysis of this study is distinct in that it suggests a new mechanism through which the ROO change the welfare effects of an FTA formation.

A tariff on inputs

In the benchmark model, we have assumed that country H imposes no tariff on imported inputs. Given the zero external tariff, only the ROO can hinder the MNE's transfer pricing. However, country H may have an incentive to set an input tariff to prevent the loss of tax revenue. Even if we consider an input tariff whose level is determined endogenously by country H, the main results of the baseline model hold as long as the corporate tax in country H is not very high (see Online Appendix B.4 for a detailed discussion). This is because, with a low corporate tax in country H, the main source of country H's welfare is not the tax revenue from the MNE but the tariff revenue from the input tariff. Let \(\xi \) be a tariff on inputs in country H. Then, country H's welfare is either the tax revenue from the MNE (\(TR_{H}^{I}=T\pi _{H}^{I}\)) if inputs are produced in country H, or the tariff revenue (\(TR_{H}^{O}=\xi x^{O}\)) if inputs are produced in country O. If \(\xi \) exceeds the threshold level, \(\xi _{M}\), then the MNE chooses inshoring over offshoring. If \(T<\frac{\xi x^{O}}{\pi _{H}^{I}}\) is satisfied for some \(\xi <\xi _{M}\) such that \(TR_{H}^{O}>TR_{H}^{I}\) holds, the optimal external tariff to maximize tariff revenue becomes \(\xi _{T}=\frac{a-w+\Delta -\tau }{2}\).
In this case, country H does not impose a tariff that induces inshoring and prevents profit shifting. If T is large enough, \(TR_{H}^{O}\le TR_{H}^{I}\) always holds, and country H sets a tariff that induces inshoring, \(\xi \ge \xi _{M}\). As our main results are obtained when T is small, they are not qualitatively changed by introducing an input tariff in country H.

Although introducing an input tariff does not change the main results, it is worth discussing which member of the FTA, country H or country F, benefits more from imposing an import tariff that hinders transfer pricing (i.e., \(\xi \ge \xi _{M}\)). If \(a-w-\tau <\Delta \) holds, then \(\xi _{T}<\Delta \), and the MNE's marginal cost is lower under offshoring. In this case, country F prefers offshoring because the MNE's exports to country F are larger. If \(a-w-\tau >\Delta \) holds, we have \(\xi _{T}>\Delta \), and the MNE's marginal cost is lower under inshoring. In this case, country F prefers inshoring. Therefore, when \(TR_{H}^{O}\le TR_{H}^{I}\) and \(a-w-\tau <\Delta \) hold, an input tariff that prevents transfer pricing hurts country F but benefits country H. When \(TR_{H}^{O}>TR_{H}^{I}\) and \(a-w-\tau >\Delta \) hold, the prohibitive input tariff benefits country F but hurts country H. In other cases, the prohibitive input tariff either benefits or hurts the two countries at the same time.

The recent proliferation of FTAs has been advancing trade liberalization among countries, and the cross-border economic activities of MNEs prevail globally. This study investigated a vertically integrated MNE's input production and pricing strategies to analyze the welfare effects of FTA formation when the MNE can manipulate the transfer price of its intra-firm trade. As in previous studies, the MNE uses its transfer price to avoid a high corporate tax. After the formation of an FTA, however, there emerges another reason for transfer price manipulation in the presence of the ROO.
Specifically, if the ROO of the FTA employ a VA criterion, the FTA induces the MNE to manipulate the transfer price to comply with the ROO and become eligible for tariff elimination. When the VA criterion of the ROO is low, the MNE prefers transfer price manipulation, since only a small adjustment of the transfer price is required. However, when the required VA level is high, the transfer price adjustment decreases the efficiency of tax avoidance, such that manipulating the transfer price for the ROO is suboptimal. If the tax gap between the country outside the FTA and a member country is large, the MNE produces the necessary input in the outside country at the expense of the FTA tariff rate because the gain from tax avoidance is large. If it is small, the MNE procures the input inside the FTA to qualify for the FTA tariff. This result is in line with empirical and anecdotal evidence that (i) FTAs sometimes induce input relocation to inside FTA countries, (ii) not all firms export using the preferential tariffs of FTAs, and (iii) transfer price manipulation is influenced by both the difference in corporate tax rates and the required VA criterion of ROO.

Our model also showed that ROO can prevent profit shifting by an MNE, either through a change in its procurement strategy or through a different use of transfer prices. The formation of FTAs with the ROO is thus expected to work as an effective policy not only to advance trade liberalization but also to keep MNEs away from tax avoidance. A remarkable result is that the MNE's profits can be shifted from a low-tax country to a high-tax country when the MNE manipulates the transfer price for the ROO, contrary to the case where it uses the transfer price for tax avoidance. Although the ROO reduce consumers' gains from the FTA formation, member countries can benefit more with the ROO owing to the emergence of the MNE's tax base. There is a case where the ROO can transform a welfare-reducing FTA into a welfare-improving one.
There remains room for further research. We assumed that tax rates and tariff rates are exogenously given. It would be intriguing to investigate how the formation of an FTA affects the outcomes of tax competition among countries, as well as the optimal tariffs set by FTA members. Another direction in which to extend the model is to examine the effects of regulations on transfer pricing, such as the arm's length principle, in the presence of ROO. Finally, further empirical investigation of the relationship between ROO and transfer pricing is essential.

Notes

See http://www.oecd.org/tax/beps/, accessed on March 11, 2020.

For instance, Swenson (2001), Clausing (2003), Cristea and Nguyen (2016), and Davies et al. (2018) provided empirical evidence of transfer price manipulation. Blouin et al. (2018) found that MNEs have conflicting motives for using transfer pricing to lower corporate tax and tariff payments.

Regional trade agreements in goods are classified into FTAs and customs unions. Unlike customs unions, member countries of an FTA can set their own tariff schedules against non-member countries. This offers an opportunity for firms producing outside the FTA to save tariff payments by choosing as a transit country the member country whose tariff against non-member countries is low, and then re-exporting from that country to other FTA member countries whose tariffs against non-member countries are higher. Stoyanov (2012) presents evidence of firms' incentive to transship a good through FTA members. To forestall such tariff avoidance by firms, FTA members stipulate ROO.

Other ways to prove the origin of products include the change-in-tariff-classification criterion and the specific-process criterion. Although the effects of these criteria are also important, this study focuses only on the VA criterion.
Some practitioners see the link as one factor to be considered, noting that "if transfer pricing changes the value of local content, then the ROO as applied may remove any FTA benefit that was previously available" (see https://www.expertguides.com/articles/oecd-beps-project-and-trade-new-perspectives/AREXIEUO, accessed on May 3, 2018). See http://www.wcoomd.org/-/media/wco/public/global/pdf/topics/origin/overview/origin-handbook/rules-of-origin-handbook.pdf accessed on May 3, 2018. The report states that "...in cases where the preferential calculation is based on the Value Added Rule and the required threshold is barely reached, an adjustment of transfer prices might lead to the loss of the preferential status of an article." See https://www2.deloitte.com/content/dam/Deloitte/ch/Documents/tax/deloitte-ch-en-making-use-of-free-trade-agreements.pdf, accessed on May 1, 2021. If we consider local firms among FTA members and oligopoly in the final goods market, the fundamental properties of our results remain unchanged, although the analysis becomes more complicated. See Mukunoki and Okoshi (2019) for the oligopoly version of the model. Although the MNE uses its transfer price for complying with the ROO, it can still shift profits from one country to another to save tax payments when the VA requirement is less stringent and the tax gap is large. Nevertheless, the overall tax payments become larger because the transfer price is suboptimal from the viewpoint of tax savings. We use the terms "tax rate" and "tax revenue" to represent the corporate tax rate and corporate tax revenue, respectively, which we distinguish from the tariff rate and tariff revenue. The traditional international corporate tax system is the separating account system that computes MNEs' national tax base by regarding intra-firm transactions as inter-firm transactions. 
Conversely, under the formula apportionment system, the tax payments of MNEs to one country depend on their consolidated tax base and the proportion of activity operated in the country. See more details in Chapter XVI and Article 86 of European Commission (2011). Bauer and Langenmayr (2013), Choe and Matsushima (2013), and Kato and Okoshi (2019) also investigated the effect of the arm's length principle on the input procurement decision, tacit collusion, and input production location, respectively. Given the multiple roles of transfer prices, the recent literature examines MNEs' optimal strategies (Hyde and Choe 2005; Nielsen et al. 2008; Dürr and Göx 2011). None of them, however, link transfer pricing and ROO. We can consider a more general situation, wherein the MNE uses a continuum of inputs, and determines the extent to which it uses intra-firm inputs for final good production. As explained in Online Appendix B.1, this modification does not change the qualitative results of the benchmark model. In this model, we postulate that the governments in countries O and H adopt a territorial tax system instead of a worldwide one. Most OECD countries, except Chile, Israel, Mexico, and South Korea, have adopted a territorial tax system. The USA moved from a worldwide tax system to a territorial tax system in December 2017. This assumption is relaxed in Sect. 4.2. We assume there is no cost of shifting profits across countries. This is a conventional way of determining the optimal transfer price in the literature, when the cost of profit shifting is absent. We relax this assumption by introducing a standard convex concealment cost in Online Appendix B.2. Some empirical evidence shows that not all firms use FTA tariffs, because of the existence of the ROO, that is, the effects of FTA formation are heterogeneous across firms. See, for example, Takahashi and Urata (2010) and Hayakawa et al. (2013). 
For instance, the MNE can always reduce its tax payments and increase its post-tax profits by increasing r and setting p such that it does not affect its sales in the product market. The terminology "perceived marginal cost" is often used in the analysis of vertically related industries in the context of industrial organization. See Choi et al. (2020) for an application of this terminology in the tax avoidance literature. Formally, the threshold is calculated as \(\widetilde{T}=1-\left( \frac{a-w+\Delta -\tau }{a-w}\right) ^{2}<1\). We can confirm that \(\widetilde{T}\) is positive if and only if \(\Delta <\tau \) holds. To secure the existence of the equilibrium with scheme I, we additionally assume \(\Delta <\tau \) hereafter. We use the following parameters for the figure: \(a=1\), \(w=\frac{1}{2}\), \(\Delta =\frac{1}{32}\), and \(\tau =\frac{1}{4}\). This effect is observed under the other two criteria of ROO, that is, the change in tariff classification (CTC) criterion and the specific process (SP) criterion. Specifically, the CTC criterion requires that the Harmonized System (HS) code of exported goods be different, at a specified level, from the HS codes of all non-originating materials. If the CTC criterion requires changes in the high digits of the HS codes, such as the first two digits (i.e., changes in tariff chapter) or the first four digits (i.e., changes in tariff heading), the exported goods and the imported materials may share the same HS codes. In this case, the MNE needs to procure inputs inside the FTA to meet the CTC criterion. The MNE also needs to procure inputs inside the FTA if the SP criterion requires that input materials, which the MNE initially produces outside the FTA, be produced inside the FTA. Although there are other ways of shifting profits, such as using internal debt and making royalty payments, the tariff elimination motive behind the transfer pricing of tangible assets remains. 
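As a quick arithmetic check, the threshold \(\widetilde{T}\) defined in the note above can be evaluated at the parameter values the paper uses for its figure (an illustrative sketch, not code from the paper):

```python
# Threshold T~ = 1 - ((a - w + Delta - tau) / (a - w))^2 from the note above,
# evaluated at the figure's parameters a = 1, w = 1/2, Delta = 1/32, tau = 1/4.
a, w, Delta, tau = 1.0, 1 / 2, 1 / 32, 1 / 4

T_tilde = 1 - ((a - w + Delta - tau) / (a - w)) ** 2
print(T_tilde)  # 0.68359375

# T~ is positive exactly when Delta < tau, consistent with the text.
assert (T_tilde > 0) == (Delta < tau)
```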
Specifically, the change in exports becomes \(\Delta x^{I*}=\widetilde{x}^{I}-x^{O*}=\frac{\tau -\Delta }{2}\), which is positive if \(\Delta <\tau \) holds.

References

Bauer, C. J., Langenmayr, D., 2013. Sorting into outsourcing: Are profits taxed at a gorilla's arm's length? Journal of International Economics 90 (2), 326–336.
Blouin, J. L., Robinson, L. A., Seidman, J. K., 2018. Conflicting transfer pricing incentives and the role of coordination. Contemporary Accounting Research 35 (1), 87–116.
Bond, E. W., Gresik, T. A., 2020. Unilateral tax reform: Border adjusted taxes, cash flow taxes, and transfer pricing. Journal of Public Economics 184, 104160.
Choe, C., Matsushima, N., 2013. The arm's length principle and tacit collusion. International Journal of Industrial Organization 31 (1), 119–130.
Choi, J. P., Furusawa, T., Ishikawa, J., 2020. Transfer pricing regulation and tax competition. Journal of International Economics 127, 103367.
Clausing, K. A., 2003. Tax-motivated transfer pricing and US intrafirm trade prices. Journal of Public Economics 87 (9–10), 2207–2223.
Conconi, P., García-Santana, M., Puccio, L., Venturini, R., 2018. From final goods to inputs: The protectionist effect of rules of origin. American Economic Review 108 (8), 2335–2365.
Cristea, A. D., Nguyen, D. X., 2016. Transfer pricing by multinational firms: New evidence from foreign firm ownerships. American Economic Journal: Economic Policy 8 (3), 170–202.
Davies, R. B., Martin, J., Parenti, M., Toubal, F., 2018. Knocking on tax haven's door: Multinational firms and transfer pricing. Review of Economics and Statistics 100 (1), 120–134.
Demidova, S., Krishna, K., 2008. Firm heterogeneity and firm behavior with conditional policies. Economics Letters 98 (2), 122–128.
Dürr, O. M., Göx, R. F., 2011. Strategic incentives for keeping one set of books in international transfer pricing. Journal of Economics & Management Strategy 20 (1), 269–298.
Eden, L., 1998. Taxing multinationals: Transfer pricing and corporate income taxation in North America. University of Toronto Press, NY.
Elitzur, R., Mintz, J., 1996. Transfer pricing rules and corporate tax competition. Journal of Public Economics 60 (3), 401–422.
Estevadeordal, A., Suominen, K., 2003. Rules of origin in the world trading system. Document presented at the Seminar on Regional Trade Agreements of WTO (Geneva, 14 November).
European Commission, 2011. Proposal for a council directive on a common consolidated corporate tax base (CCCTB). COM (2011) 121 final.
Falvey, R., Reed, G., 1998. Economic effects of rules of origin. Weltwirtschaftliches Archiv 134 (2), 209–229.
Felbermayr, G., Teti, F., Yalcin, E., 2019. Rules of origin and the profitability of trade deflection. Journal of International Economics 121, 103248.
Hayakawa, K., Hiratsuka, D., Shiino, K., Sukegawa, S., 2013. Who uses free trade agreements? Asian Economic Journal 27 (3), 245–264.
Horst, T., 1971. The theory of the multinational firm: Optimal behavior under different tariff and tax rates. Journal of Political Economy 79 (5), 1059–1072.
Hyde, C. E., Choe, C., 2005. Keeping two sets of books: The relationship between tax and incentive transfer prices. Journal of Economics & Management Strategy 14 (1), 165–186.
Ishikawa, J., Mukunoki, H., Mizoguchi, Y., 2007. Economic integration and rules of origin under international oligopoly. International Economic Review 48 (1), 185–210.
Ju, J., Krishna, K., 2005. Firm behaviour and market access in a free trade area with rules of origin. Canadian Journal of Economics 38 (1), 290–308.
Kant, C., 1988. Foreign subsidiary, transfer pricing and tariffs. Southern Economic Journal 55 (1), 162–170.
Kato, H., Okoshi, H., 2019. Production location of multinational firms under transfer pricing: The impact of the arm's length principle. International Tax and Public Finance 26 (4), 835–871.
Krishna, K., Krueger, A., 1995. Implementing free trade areas: Rules of origin and hidden protection. Technical report, National Bureau of Economic Research.
LaNasa III, J. A., 1996. Rules of origin and the Uruguay Round's effectiveness in harmonizing and regulating them. American Journal of International Law 90 (4), 625–640.
Mukunoki, H., 2017. The welfare effect of a free trade agreement in the presence of foreign direct investment and rules of origin. Review of International Economics 25 (4), 733–759.
Mukunoki, H., Okoshi, H., 2019. Tariff elimination versus tax avoidance: Free trade agreements and transfer pricing. RIETI Discussion Paper Series 19-E-099.
Mukunoki, H., Okoshi, H., 2021. Rules of origin and consumer-hurting free trade agreements. The World Economy 44 (8), 2303–2318.
Nielsen, S. B., Raimondos-Møller, P., Schjelderup, G., 2003. Formula apportionment and transfer pricing under oligopolistic competition. Journal of Public Economic Theory 5 (2), 419–437.
Nielsen, S. B., Raimondos-Møller, P., Schjelderup, G., 2008. Taxes and decision rights in multinationals. Journal of Public Economic Theory 10 (2), 245–258.
Reuter, P., 2012. Draining development?: Controlling flows of illicit funds from developing countries. World Bank Publications, NY.
Schjelderup, G., Sorgard, L., 1997. Transfer pricing as a strategic device for decentralized multinationals. International Tax and Public Finance 4 (3), 277–290.
Stoyanov, A., 2012. Tariff evasion and rules of origin violations under the Canada-US free trade agreement. Canadian Journal of Economics 45 (3), 879–902.
Swenson, D. L., 2001. Tax reforms and evidence of transfer pricing. National Tax Journal 54 (1), 7–25.
Takahashi, K., Urata, S., 2010. On the use of FTAs by Japanese firms: Further evidence. Business and Politics 12 (1), 1–15.
Tekin-Koru, A., Waldkirch, A., 2010. North-south integration and the location of foreign direct investment. Review of International Economics 18 (4), 696–713.
Tintelnot, F., 2017. Global production with export platforms. The Quarterly Journal of Economics 132 (1), 157–209.
WCO, 2018. WCO guide to customs valuation and transfer pricing. World Customs Organization.

Faculty of Economics, Gakushuin University, Mejiro 1-5-1, Toshima-ku, Tokyo, 171-8588, Japan
Hiroshi Mukunoki
Faculty of Economics, Okayama University, 3-1-1 Tsushima-naka, Kita-ku, Okayama-shi, Okayama, 700-8530, Japan
Hirofumi Okoshi
Correspondence to Hirofumi Okoshi.

This study was conducted as a part of the Project "Analyses of Offshoring" undertaken at the Research Institute of Economy, Trade, and Industry (RIETI). We wish to thank two anonymous referees, Jay Pil Choi, Ruud Aloysius de Mooij, Carsten Eckel, Clemens Fuest, Taiji Furusawa, Andreas Haufler, Jung Hur, Jota Ishikawa, Hiro Kasahara, Yoshimasa Komoriya, Ngo Van Long, Kiyoshi Matsubara, Kaz Miyagiwa, Monika Mrazova, Martin Richardson, Kensuke Teshima, and the participants of the Canadian Economic Association meeting, RIETI, 58th congress of ERSA, Microeconomics Workshop at the University of Tokyo, Summer Workshop on Economic Theory at Otaru University of Commerce, 21st annual conference of ETSG, Workshop on International Economics at Osaka University, and 76th Annual Congress of IIPF. Hiroshi Mukunoki acknowledges financial support from JSPS KAKENHI (Grant Numbers JP19H00594 and JP20K01659). Hirofumi Okoshi acknowledges financial support from the Deutsche Forschungsgemeinschaft (German Science Foundation, GRK1928). The usual disclaimer applies.

Conflict of interest: The authors declare that they have no conflict of interest.

A.1 Proof of Proposition 1

By comparing the equilibrium output prices, we have \(\widetilde{p}^{B}-\widehat{p}^{O}=\frac{c_{M}-(w-\Delta )}{2}>0\). 
By comparing the equilibrium transfer prices, we obtain \(\widetilde{r}^{B}-\widehat{r}^{O}=-\frac{\alpha [\{1-\alpha T\}a-\left( 1-T\right) (w-\Delta )]}{2(1-\underline{\alpha }T)}<-\frac{\left( 1-\alpha \right) T a}{2(1-\underline{\alpha }T)}<0\), where the first inequality is due to \(a>(w-\Delta )\). Without the ROO, (4) indicates that the MNE always chooses scheme O before an FTA formation. With the ROO, the post-tax profits of the MNE under schemes N and I are given by $$\begin{aligned} \widetilde{\Pi }^{N}&=\frac{(a-w+\Delta -\tau )^{2}}{4}+\overline{\pi }, \end{aligned}$$ (A.1) $$\begin{aligned} \widetilde{\Pi }^{I}&=\frac{(1-T)(a-w)^{2}}{4}+\overline{\pi }. \end{aligned}$$ The condition under which the MNE prefers scheme I to scheme N is given by $$\begin{aligned} \widetilde{\Pi }^{I}-\widetilde{\Pi }^{N}>0\iff T<1-\left( \frac{a-w+\Delta -\tau }{a-w}\right) ^{2}\equiv \widetilde{T}. \end{aligned}$$ From (8), we can easily confirm that the following inequality holds: $$\begin{aligned} \widetilde{\Pi }^{B}|_{\underline{\alpha }=0}=\frac{(a-w+\Delta )^{2}}{4}+\overline{\pi }>\max \{\widetilde{\Pi }^{N},\widetilde{\Pi }^{I}\}. \end{aligned}$$ Further, the first derivative of \(\widetilde{\Pi }^{B}\) with respect to \(\underline{\alpha }\) is $$\begin{aligned} \frac{\partial \widetilde{\Pi }^{B}}{\partial \underline{\alpha }}=-\frac{T\{(1-\underline{\alpha }T)a-w+\Delta \}(1-\underline{\alpha }T+w-\Delta )}{4(1- \underline{\alpha }T)}<0. \end{aligned}$$ Let \(\underline{\alpha }^{x}\) denote the cutoff level of \(\underline{\alpha }\) such that \(\widetilde{x}^{B}=x^{O*}(=\widetilde{x}^{N})\) holds. Specifically, we have $$\begin{aligned} \widetilde{x}^{B}\gtreqless x^{O*}\iff \underline{\alpha }\lesseqgtr \frac{\tau }{(w-\Delta +\tau )T}\equiv \underline{\alpha }^{x}. 
\end{aligned}$$ If evaluated at \(\underline{\alpha }=\underline{\alpha }^{x}\), (8) becomes $$\begin{aligned} \widetilde{\Pi }^{B}|_{\underline{\alpha }=\underline{\alpha }^{x}}=\frac{(w-\Delta )(a-w+\Delta -\tau )^{2}}{4(w-\Delta +\tau )}+\overline{\pi }\left( <\widetilde{\Pi }^{N}\right) . \end{aligned}$$ This implies that there exists a unique cutoff level of \(\underline{\alpha }\), \(\underline{\alpha }^{N}\in (0,\underline{\alpha }^{x})\), such that \(\widetilde{\Pi }^{B}\ge \widetilde{\Pi }^{N}\) holds with \(\underline{\alpha }\le \underline{\alpha }^{N}\) and \(T\ge \widetilde{T}\). Moreover, remember that \(\frac{\partial \widetilde{\Pi }^{I}}{\partial T}<0\) and \(\widetilde{\Pi }^{I}=\widetilde{\Pi }^{N}\) holds at \(T=\widetilde{T}\). Then, $$\begin{aligned} \widetilde{\Pi }^{I}>\widetilde{\Pi }^{I}|_{T=\widetilde{T}}=\widetilde{\Pi }^{N}>\widetilde{\Pi }^{B}|_{\underline{\alpha }=\underline{\alpha }^{x}} \end{aligned}$$ holds for any \(T\in [0,\widetilde{T}]\). Note that \(\widetilde{\Pi }^{B}>\widetilde{\Pi }^{I}\) holds if the following condition is satisfied: $$\begin{aligned} \widetilde{\Pi }^{B}|_{\underline{\alpha }=1}>\widetilde{\Pi }^{I}\iff T<1-\left( \frac{w-\Delta }{w}\right) . \end{aligned}$$ This implies that there exists a unique cutoff level of \(\underline{\alpha }\), \(\underline{\alpha }^{I}\in (0,\underline{\alpha }^{x})\), such that \(\widetilde{\Pi }^{B}\ge \widetilde{\Pi }^{I}\) holds with \(\underline{\alpha }\le \underline{\alpha }^{I}\) and \(1-\left( \frac{w-\Delta }{w}\right) \le T<\widetilde{T}\). From (10), we obtain $$\begin{aligned} \frac{\partial \widetilde{r}^{B}}{\partial \underline{\alpha }}=-\frac{(1-\underline{\alpha }T)^{2}+(1-T)(w-\Delta )}{2(1-\underline{\alpha }T)^{2}}<0. 
\end{aligned}$$ (A.10) Therefore, \(\widetilde{r}^{B}=w-\Delta +\frac{a-w+\Delta }{2}>w-\Delta \) holds at \(\underline{\alpha }=0\) and \(\widetilde{r}^{B}\) takes the minimum value at \(\underline{\alpha }=1\), which is given by $$\begin{aligned} \widetilde{r}^{B}|_{\underline{\alpha }=1}=0<w-\Delta . \end{aligned}$$ Scheme B is the equilibrium at any \(\underline{\alpha }\) if \(T<\widetilde{T}\) holds. Therefore, there exists a unique \(\underline{\alpha }^{r}\) such that \(\widetilde{r}^{B}<w-\Delta \) holds when \(\underline{\alpha }>\underline{\alpha }^{r}\) holds. Under scheme I, the changes in the amount of supplies from the pre-FTA equilibrium to the post-FTA equilibrium without the ROO are $$\begin{aligned} \widetilde{x}^{I}-x^{O*}&=\frac{\tau -\Delta }{2}>0, \end{aligned}$$ $$\begin{aligned} \widetilde{x}^{I}-\widehat{x}^{O}&=-\frac{\Delta }{2}<0, \end{aligned}$$ because \(\tau >\Delta \) holds. Under scheme B, the FTA formation increases the amount of exports to country F when \(\underline{\alpha }<\underline{\alpha }^{x}\) holds. From Proposition 2, we know that \(\underline{\alpha }<\underline{\alpha }^{x}\) holds under scheme B and we always have \(\widetilde{x}^{B}>x^{O*}\). In addition, we can easily confirm that $$\begin{aligned} \widetilde{x}^{B}-\widehat{x}^{O}=-\frac{(w-\Delta )\underline{\alpha }T}{2(1-\underline{\alpha }T)}<0 \end{aligned}$$ holds. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. 
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Mukunoki, H., Okoshi, H. Tariff elimination versus tax avoidance: free trade agreements and transfer pricing. Int Tax Public Finance 28, 1188–1210 (2021). https://doi.org/10.1007/s10797-021-09689-8. Issue Date: October 2021. Keywords: Profit shifting.
Simplification of weakly nonlinear systems and analysis of cardiac activity using them

Irada Dzhalladova and Miroslava Růžičková

V. Hetman Kyiv National Economic University, Department of Computer Mathematics and Information Security, Kyiv 03068, Peremogy 54/1, Ukraine
University of Białystok, Faculty of Mathematics, K. Ciołkowskiego 1M, 15-245 Białystok, Poland
* Corresponding author: Miroslava Růžičková

Received November 2020. Revised June 2021. Early access July 2021.

The paper deals with the transformation of a weakly nonlinear system of differential equations in a special form into a simplified form and its relation to the normal form and averaging. An original method of simplification is proposed, that is, a way to determine the coefficients of a given nonlinear system in order to simplify it. We call this method the degree equalization method; it does not require integration and is simpler and more efficient than the classical Krylov–Bogolyubov normalization method. The method is illustrated with several examples and applied to the analysis of cardiac activity modelled by the van der Pol equation.

Keywords: Averaging, normal form, weakly nonlinear system, qualitative properties, essential and non-essential coefficients, degree equalization, van der Pol equation, cardiac activity.
Mathematics Subject Classification: Primary: 34C29, 34C20, 34C60; Secondary: 34B30, 34C15, 34A34.
Citation: Irada Dzhalladova, Miroslava Růžičková. Simplification of weakly nonlinear systems and analysis of cardiac activity using them. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021191
Figure 1. The amplitude of any solution to the van der Pol equation increases if its initial value is in the interval $ (0, 2) $, and decreases if the initial value is greater than two. In both cases it converges to the value $ 2 $.
Figure 2. The limit cycle $ x^2(t) +\frac{1}{\omega} \dot x^2(t) = a^2 $ and some trajectories of the van der Pol equation if $ a_0<2 $.
Figure 3. If the initial amplitude value is close to zero, the amplitude exponentially increases to $ 2 $ with increasing $ t $.
Figure 4. The area of the viability of the heart. The intensity of energy replenishment depends on $ \mu $.
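The behaviour described in the Figure 1 caption can be reproduced numerically. The sketch below is illustrative only (not code from the paper); the damping parameter $\mu = 0.1$, the initial condition, and the fixed RK4 step size are assumptions. It integrates the van der Pol equation and checks that the amplitude settles near 2:

```python
# Integrate the van der Pol equation x'' - mu*(1 - x^2)*x' + x = 0 with a
# basic fixed-step RK4 scheme and watch the amplitude settle near 2.
def vdp(state, mu):
    x, v = state
    return (v, mu * (1 - x * x) * v - x)

def rk4_step(f, s, h, mu):
    k1 = f(s, mu)
    k2 = f((s[0] + h / 2 * k1[0], s[1] + h / 2 * k1[1]), mu)
    k3 = f((s[0] + h / 2 * k2[0], s[1] + h / 2 * k2[1]), mu)
    k4 = f((s[0] + h * k3[0], s[1] + h * k3[1]), mu)
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

mu, h = 0.1, 0.01
state = (0.5, 0.0)          # initial amplitude well inside (0, 2)
xs = []
for _ in range(40000):      # integrate up to t = 400
    state = rk4_step(vdp, state, h, mu)
    xs.append(state[0])

# Peak over the last 20 time units (several full periods of the cycle).
amplitude = max(abs(x) for x in xs[-2000:])
print(round(amplitude, 2))  # close to 2
```

Starting instead from an amplitude above 2 gives a decaying transient toward the same limit cycle, matching the caption.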
CommonCrawl
How do I compute/plot the open loop transfer function of a system given its state-space description?

Suppose I have a dynamical system with a continuous-time state-space description given by the matrices A, B, C, and D, under full-state feedback u = Kx (where x is the state and u is the control input). How do I compute the Bode plot of the open-loop system, as I would in traditional frequency-domain analysis?

As a simple, concrete example, consider the problem where the plant is a mass on a spring, on which I can exert a force $F_{ext}$. Suppose I use a PID controller to control the position of the mass. A state-space description of the plant (with an adjoined integrator state to be used by the controller) is:

A = [0, 1, 0; -k/m, -g/m, 0; 1, 0, 0]; B = [0; 1/m; 0];

And the PID controller can be realized through full-state feedback with the matrix:

K = [kp, kd, ki];

Meanwhile, in the frequency domain, the plant has transfer function $G(s)$, and the controller has transfer function $H(s) = k_p + \frac{1}{s} k_i + s k_d$. In frequency-domain analysis, the open-loop transfer function $GH$ is of interest. It is what we examine, for example, to determine gain margin and phase margin. How do I obtain $GH$ given $A, B, C, D$ and $K$?

control-engineering

nibot

In order for the open loop to be meaningful, the input to the controller has to have the same dimensions and units as the output of the plant, since normally when doing control in the frequency domain using unity negative feedback the input to the controller is defined as the error signal (the reference signal minus the output of the plant). But if the full state is not measured (in the output of the plant), then you cannot recreate the 'state' from the error signal directly.
Instead you would have to use an observer, which normally is defined as

$$ \dot{\hat{x}} = A\,\hat{x} + B\,u + L\,(y - C\,\hat{x} - D\,u) $$

where $\hat{x}$ is the estimate of the state, $L$ is a matrix such that $A-L\,C$ is Hurwitz, and $y$ is the output of the actual plant. When using full-state feedback then $u = -K\,\hat{x}$. However, in the case of the open loop, $y$ will be replaced by minus the error signal $-e = y-r$, such that when $r=0$ this is equivalent to the equation above. Combining this with the dynamics of the plant yields

$$ \begin{align} \begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} &= \underbrace{ \begin{bmatrix} A & -B\,K \\ 0 & A - B\,K - L\,C + L\,D\,K \end{bmatrix}}_{A_{ol}} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \underbrace{ \begin{bmatrix} 0 \\ -L \end{bmatrix}}_{B_{ol}} e \\ y &= \underbrace{ \begin{bmatrix} C & -D\,K \end{bmatrix}}_{C_{ol}} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} \end{align} $$

Due to the way the observer is defined, the 'controller' (the effective transfer function which gets multiplied by the plant) will always have a strictly proper transfer function. So even if a constant gain might be sufficient for control, this general method will always add some sort of low-pass filter to the controller as well. This state-space model can be converted to an equivalent transfer function using

$$ G_{ol}(s) = C_{ol}\,(s\,I - A_{ol})^{-1}\,B_{ol}. $$

You could also calculate the points of the Bode plot directly by substituting $s = j\,\omega$, with $\omega$ the desired frequencies (in radians per second) for your Bode plot. It can be noted that in the case of your example system the integral state will not be observable. But it can be an option to first filter the error signal with $\begin{bmatrix}1 & 1/s\end{bmatrix}^\top$ before passing it through $G_{ol}(s)$, and use for the observer in $G_{ol}(s)$ that both the position and the integral are measured.
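Numerically, that last step is a few lines of NumPy. A minimal sketch — the pure-integrator matrices here are an illustrative stand-in for whichever $A_{ol}, B_{ol}, C_{ol}$ you assembled above:

```python
import numpy as np

def freq_response(A, B, C, omegas):
    """Evaluate G(jw) = C (jw I - A)^{-1} B at each frequency in omegas.

    D is omitted because G_ol as derived above is strictly proper (D_ol = 0).
    """
    n = A.shape[0]
    I = np.eye(n)
    return np.array([(C @ np.linalg.solve(1j * w * I - A, B)).item()
                     for w in omegas])

# Illustrative stand-in system: a pure integrator, G(s) = 1/s.
A = np.array([[0.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])

omegas = np.logspace(-1, 2, 200)       # rad/s grid for the Bode plot
G = freq_response(A, B, C, omegas)

gain_db = 20 * np.log10(np.abs(G))     # magnitude curve
phase_deg = np.rad2deg(np.angle(G))    # phase curve
```

If SciPy or the python-control package is available, `scipy.signal.StateSpace(A, B, C, D).bode()` or `control.bode` do the same for SISO systems in one call.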
fibonatic

$\begingroup$ Hmm, I think that's part of the story, but we still have to incorporate the feedback law u=Kx. $\endgroup$ – nibot

$\begingroup$ @nibot In your question you asked about the Bode plot of the open-loop system. By definition, when you use a feedback law ($u = K\,x$) you are not dealing with the open-loop system. Also, if you did mean the system with feedback, how would you define the input of the system, since the normal input is already defined as $u = K\,x$? $\endgroup$ – fibonatic

$\begingroup$ I added some additional material to the question that should make it more clear. $\endgroup$

$\begingroup$ @nibot I have updated my answer. $\endgroup$ – fibonatic
\begin{document}

\title{IMAP: Intrinsically Motivated Adversarial Policy}

\author{Xiang Zheng}
\affiliation{
 \institution{City University of Hong Kong}
 \city{Hong Kong}
 \country{China}}
\email{[email protected]}

\author{Xingjun Ma}
\affiliation{
 \institution{Fudan University}
 \city{Shanghai}
 \country{China}}
\email{[email protected]}

\author{Shengjie Wang}
\affiliation{
 \institution{Tsinghua University}
 \city{Beijing}
 \country{China}}
\email{[email protected]}

\author{Xinyu Wang}
\affiliation{
 \institution{Tencent}
 \city{Shenzhen}
 \country{China}}
\email{[email protected]}

\author{Chao Shen}
\affiliation{
 \institution{Xi'an Jiaotong University}
 \city{Xi'an}
 \country{China}}
\email{[email protected]}

\author{Cong Wang}
\affiliation{
 \institution{City University of Hong Kong}
 \city{Hong Kong}
 \country{China}}
\email{[email protected]}

\renewcommand{\shortauthors}{Zheng et al.}

\begin{abstract}
Reinforcement learning (RL) agents are known to be vulnerable to evasion attacks during deployment. In single-agent environments, attackers can inject imperceptible perturbations on the policy or value network's inputs or outputs; in multi-agent environments, attackers can control an adversarial opponent to indirectly influence the victim's observation. Adversarial policies offer a promising solution to craft such attacks. Still, current approaches either require perfect or partial knowledge of the victim policy or suffer from sample inefficiency due to the sparsity of task-related rewards. To overcome these limitations, we propose the Intrinsically Motivated Adversarial Policy (IMAP) for efficient black-box evasion attacks in single- and multi-agent environments without any knowledge of the victim policy. IMAP uses four intrinsic objectives based on state coverage, policy coverage, risk, and policy divergence to encourage exploration and discover stronger attacking skills. We also design a novel Bias-Reduction (BR) method to boost IMAP further.
Our experiments demonstrate the effectiveness of these intrinsic objectives and BR in improving adversarial policy learning in the black-box setting against multiple types of victim agents in various single- and multi-agent MuJoCo environments. Notably, our IMAP reduces the performance of the state-of-the-art robust WocaR-PPO agents by 34\%-54\% and achieves a SOTA attacking success rate of 83.91\% in the two-player zero-sum game YouShallNotPass.
\end{abstract}

\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10010147.10010257.10010258.10010261.10010276</concept_id>
<concept_desc>Computing methodologies~Adversarial learning</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10003752.10010070.10010071.10010261.10010276</concept_id>
<concept_desc>Theory of computation~Adversarial learning</concept_desc>
<concept_significance>300</concept_significance>
</concept>
<concept>
<concept_id>10010147.10010178.10010205.10010208</concept_id>
<concept_desc>Computing methodologies~Continuous space search</concept_desc>
<concept_significance>100</concept_significance>
</concept>
<concept>
<concept_id>10010520.10010553.10010554.10010556</concept_id>
<concept_desc>Computer systems organization~Robotic control</concept_desc>
<concept_significance>100</concept_significance>
</concept>
</ccs2012>
\end{CCSXML}

\ccsdesc[500]{Computing methodologies~Adversarial learning}
\ccsdesc[300]{Theory of computation~Adversarial learning}
\ccsdesc[100]{Computing methodologies~Continuous space search}
\ccsdesc[100]{Computer systems organization~Robotic control}

\keywords{adversarial policy, intrinsic motivation, black-box attack}

\maketitle

\section{Introduction}
Reinforcement Learning (RL) agents are vulnerable to various types of attacks~\cite{tessler2019action,lee2020spatiotemporally}, attributed to either the weakness of the function approximators or the
inherent weakness of the policies themselves~\cite{zhang2021robust}. The growing application of RL agents in safety-critical systems, such as autonomous vehicles~\cite{huang2018adversarial,aradi2020survey,kiran2021deep,buddareddygari2022targeted,qu2021attacking}, healthcare~\cite{esteva2019guide,yu2021reinforcement}, and aerospace~\cite{li2021constrained,wang2021multi}, highlights the need for developing both certification methods~\cite{lutjens2020certified,everett2021neural,zhang2020robust,wu2021crop} and empirical evaluation methods~\cite{lin2017tactics,gleave2019adversarial,zhang2020robust,pinto2017robust,sun2020stealthy} to verify the robustness of deployed agents. Adversarial policies (AP), as a type of test-time evasion attack, have emerged as a crucial technique for evaluating the robustness of the deployed RL agents~\cite{zhang2021robust,sun2021strongest,yu2022natural,gleave2019adversarial,wu2021adversarial,guo2021adversarial}. Adversarial policies play an essential role in understanding the vulnerability of RL agents in both single- and multi-agent environments. In single-agent environments, although gradient-based evasion attacks like Fast Gradient Sign Method (FGSM) can craft adversarial perturbations for value or policy networks~\cite{pattanaik2017robust,lin2017tactics}, they have been proven to be suboptimal due to their myopic nature~\cite{sun2021strongest}. To find the strongest adversary, Sun~\cite{sun2021strongest} proposed Policy Adversarial Actor Director (PA-AD), which involves an adversarial policy trained by RL to find the perturbation direction in the action space of the victim agent and an FGSM-style adversarial actor to craft the corresponding perturbation on the state space of the victim agent. Mo~\cite{mo2022attacking} proposed Decoupled Adversarial Policy (DAP) consisting of two sub-policies that select the attacking step and determine the worst-case victim action respectively.
However, these white-box methods are unsuitable for attacking deployed RL agents with unknown structures and parameters. To address this, Zhang~\cite{zhang2021robust} proposed SA-RL to train a state adversary to directly generate adversarial perturbation on the input of the victim policy. Yu~\cite{yu2022natural} proposed an advRL-GAN framework to generate semantically natural adversarial examples against RL agents with pixel inputs. However, these black-box methods in single-agent environments still require knowledge of the immediate rewards and actions of the victim agent, making it less practical in real scenarios. Unlike state adversaries that directly disturb the input of the victim policy in single-agent environments, the adversary in multi-agent environments can control an opponent agent to indirectly influence the observation of the victim. Gleave~\cite{gleave2019adversarial} first found this kind of adversarial policy, denoted as AP-MRL. Wu~\cite{wu2021adversarial} proposed training a surrogate victim model by imitation learning and using an explainable Artificial Intelligent technique to identify the time most critical for the adversarial policy to influence the victim's behavior. Guo~\cite{guo2021adversarial} developed adversarial policies in non-zero-sum games by simultaneously maximizing the adversary's value function and minimizing the victim's value function. However, existing methods for adversarial policy learning in multi-agent environments are sample-inefficient due to a lack of efficient exploration strategies since task reward functions are usually sparse. To address the abovementioned issues, we propose Intrinsically Motivated Adversarial Policy (IMAP) to efficiently learn optimal black-box adversarial policy in single- and multi-agent environments without knowledge of the victim policy. We design four intrinsic objectives for IMAP to encourage the adversarial policy to explore novel states. 
Specifically, the two coverage-driven intrinsic objectives encourage the adversary to maximize the entropy of either the state coverage or the policy coverage, the risk-driven intrinsic objective incites the adversary to minimize the task-agnostic risk function, and the divergence-driven intrinsic objective stimulates the adversary to deviate from its latest policy to force exploration. All intrinsic objectives are designed in the black-box setting without knowledge of the victim policy, including model parameters, immediate rewards, and policy outputs. What's more, we identify that the bias introduced by the intrinsic objectives may distract the adversary in sparse-reward tasks and thus design a bias-reduction method to boost the performance of IMAP further. Our contributions are summarized as follows:
\begin{itemize}
\item We propose IMAP, which uses four novel intrinsic objectives (state-coverage-driven, policy-coverage-driven, risk-driven, and divergence-driven) to learn black-box adversarial policies efficiently in both single- and multi-agent environments.
\item In single-agent environments, our IMAP outperforms the baseline SA-RL~\cite{zhang2021robust} in four dense-reward locomotion tasks when attacking the vanilla PPO and five types of robust RL agents, including two adversarial training methods ATLA~\cite{zhang2021robust} and ATLA-SA~\cite{zhang2021robust}, and three robust regularizer methods SA~\cite{zhang2020robust}, RADIAL~\cite{oikarinen2021robust}, and WocaR~\cite{liang2022efficient}. Additionally, it achieves the best results in six sparse-reward locomotion tasks and two sparse-reward navigation tasks compared to SA-RL. We also empirically show that a victim agent that is robust to one type of IMAP might still be vulnerable to another, raising a new challenge for developing robust RL algorithms and stronger evasion attacks.
\item In multi-agent environments, our IMAP achieves a SOTA attacking success rate of 83.91\% in the two-player zero-sum competitive game YouShallNotPass, outperforming the baseline AP-MRL~\cite{gleave2019adversarial}. The adversary learns a natural blocking skill using the policy-coverage-driven intrinsic objective, shown in \Cref{fig: you-IMAP}. In another game, KickAndDefend, our IMAP also outperforms AP-MRL.
\item We develop a novel bias-reduction (BR) method for adversarial policy learning with an approximate extrinsic optimality constraint and empirically demonstrate that BR effectively boosts the performance of IMAP in sparse-reward tasks.
\end{itemize}
\begin{figure}
\caption{Visualization of the adversarial behavior learned by IMAP in Walker2d. IMAP can make the state-of-the-art robust model trained by WocaR-PPO~\cite{liang2022efficient} fall down.}
\label{fig: walker2d-IMAP}
\end{figure}
\begin{figure}
\caption{Visualization of the adversarial behavior learned by IMAP in YouShallNotPass. Instead of sticking to the ground, IMAP encourages the agent to find more effective adversarial behavior like ``aggressively'' blocking the victim.}
\label{fig: you-IMAP}
\end{figure}
\section{Related Work}
Our work mainly concerns evasion attacks against RL and exploration strategies for sparse-reward RL. In this section, we summarize the state-of-the-art evasion attack and defense methods in the context of RL and intrinsic motivation exploration strategies for sparse-reward RL.
\subsection{Evasion Attacks Against RL}
Existing evasion attacks against RL can be divided into two standard classes: gradient-based adversarial attacks and adversarial policy.
Gradient-based adversarial attacks against RL, analogous to FGSM-style adversarial attacks against Deep Neural Network (DNN), craft adversarial examples for the target policy or value networks to deviate the agent from its original trajectories~\cite{lin2017tactics,bai2018adversarial,lee2020spatiotemporally,sun2020stealthy,korkmaz2021investigating,qu2021attacking}. Adversarial policy instead learns a policy network to generate adversarial perturbations in the state or action space of the victim agent or determine the timing of the attack in single-agent environments~\cite{pinto2017robust,gleave2019adversarial,sun2021strongest,zhang2021robust,sharif2021adversarial,qu2021attacking,mo2022attacking}, or control an opponent to maliciously create 'natural' observations to attack the victim policy. \subsubsection{Gradient-Based Evasion Attacks} Gradient-based evasion attacks are designed to reduce the probability of selecting the optimal action or increase the likelihood of choosing the worst action via FGSM-style attacks. Following the convention of adversarial attacks on DNNs in supervised learning tasks, Lin~\cite{lin2017tactics} first investigated adversarial attacks in the context of DRL and showed that existing adversarial example crafting techniques like FGSM could be utilized to significantly degrade the test-time performance of DRL agent in Atari games with pixels-based inputs and discrete actions. Sun~\cite{sun2020stealthy} promoted the efficiency of such attacks by carefully manipulating the observation of a victim agent at heuristically selected optimal time steps rather than the entire training trajectories. Weng~\cite{weng2020toward} proposed a sample-efficient model-based adversarial attack on DRL agents in continuous control tasks, where the adversary can manipulate either the victim's observations or actions with small perturbations. Lee~\cite{lee2020spatiotemporally} showed the vulnerability of the DRL agents under the action space adversarial attacks. 
Zhang~\cite{zhang2020robust} proposed two heuristic attacks, Robust Sarsa and Maximal Action Difference, which can be utilized when value functions are unknown.
\subsubsection{Adversarial Policies}
To investigate the robustness of RL agents on state observations under optimal adversarial attack, Zhang introduced an optimal adversary optimized by RL under the SA-MDP framework, which was shown to be stronger than existing heuristic evasion attacks~\cite{zhang2021robust}. Sun unified the state space and action space perturbations and proposed to first train an adversarial policy to generate the perturbation direction in the low-dimensional action space and then craft the corresponding perturbation in the high-dimensional state space by gradient-based evasion attacks~\cite{sun2021strongest}. Apart from works on optimal adversaries in single-agent environments, adversarial policies are also investigated in multi-agent competition games~\cite{gleave2019adversarial,wu2021adversarial,guo2021adversarial,fujimoto2021reward,wang2022adversarial}. Gleave~\cite{gleave2019adversarial} leveraged original Proximal Policy Optimization (PPO) to train the adversarial policy with sparse task rewards and showed that the adversarial policy could successfully induce off-distribution activations in the victim policy network. Wu~\cite{wu2021adversarial} modified the original PPO loss to encourage the adversary to perturb the critical action of the victim at strategically selected steps. Fujimoto~\cite{fujimoto2021reward} proposed a reward-free adversarial policy by only maximizing the victim policy entropy. Apart from adversarial policies against RL agents in continuous control tasks, Wang~\cite{wang2022adversarial} recently demonstrated the existence of adversarial policies against the state-of-the-art Go AI system, KataGo.
\subsection{Defense Against Evasion Attacks}
Defense methods for RL agents against evasion attacks can be mainly divided into four categories: adversarial training~\cite{pinto2017robust,tan2020robustifying,zhang2021robust,sun2021strongest,behzadan2017whatever,vinitsky2020robust}, robust regularizer~\cite{zhang2020robust,everett2021neural,oikarinen2021robust}, randomized smoothing~\cite{behzadan2018mitigation,kumar2021policy,wu2021crop,lutter2021robust,anderson2022certified}, and active detection~\cite{lin2017detecting,guo2021edge}. Adversarial training has been demonstrated as one of the most popular and empirically successful techniques in robustifying DNNs in supervised learning tasks~\cite{madry2017towards}. The adversarial training procedure for RL is similar to the one for DNNs, that is, optimizing the policy under attacks via heuristic gradient-based adversaries or optimal adversarial policies. The adversary in adversarial training can have various access rights to the environment to robustify the victim agent against different types of uncertainties, e.g., directly injecting perturbations to the state or action or reward~\cite{behzadan2017whatever,vinitsky2020robust,tan2020robustifying,zhang2021robust,sun2021strongest,wu2022robust}, adding disturbance forces or torques~\cite{pinto2017robust}, or even changing the layout or dynamic property of the environment~\cite{chen2018gradient}. Apart from adversarial training, a regularizer can be applied to robustify the policy. The regularizer can enhance the smoothness of the learned policy by upper-bounding the divergence of the action distributions under state perturbations~\cite{zhang2020robust,shen2020deep}. Oikarinen~\cite{oikarinen2021robust} proposed a robust deep RL framework with adversarial loss by designing a regularizer to minimize overlap between bounds of actions to avoid choosing a significantly worse action under small state perturbation.
Another defense strategy against evasion attacks is to use randomized smoothing techniques and analyze the robustness of RL from a probabilistic view~\cite{anderson2022certified,kumar2021policy,wu2021crop,lutter2021robust}. Active detection methods focus on detecting malicious samples by either comparing the KL-divergence of the nominal action distribution and the predicted one~\cite{lin2017detecting} or using explainable AI techniques to identify critical time steps contributing to the victim agent's performance~\cite{guo2021edge}.
\subsection{Intrinsic Motivation}
Intrinsic motivation is a critical and promising exploration technique for sparse-reward and reward-free RL. It encourages the agent to visit novel states by formulating the agent's familiarity with the environment as the intrinsic objective and measuring the agent's uncertainty as the intrinsic bonus. Intrinsic motivation is mainly developed in two large branches: provable and practical exploration strategies. Provable exploration strategies can guarantee sublinear regret bounds for several Markov Decision Process (MDP) settings like tabular MDP~\cite{osband2017posterior,he2021nearly} and linear MDP~\cite{jin2020provably,wang2021provably,neu2021online,papini2021reinforcement}. These provable methods usually utilize the Upper Confidence Bound (UCB) bonus based on the \textit{optimism in the face of uncertainty} principle~\cite{zhang2021model} or posterior sampling techniques~\cite{osband2013more} to balance the exploration-exploitation tradeoff. However, it is challenging for these methods to efficiently estimate the UCB bonus or the posterior distribution of the value function. Practical exploration methods instead design approximate intrinsic bonuses to address this challenge. Practical methods are usually classified into three categories: knowledge-based, data-based, and competence-based.
Knowledge-based intrinsic motivation methods approximate the novelty via various techniques, including pseudo-count of the state visit frequency~\cite{bellemare2016unifying,fu2017ex2}, prediction errors~\cite{pathak2017curiosity,burda2018exploration}, and variances of outputs of an ensemble of neural networks~\cite{pathak2019self,lee2021sunrise,bai2021principled}. Data-based intrinsic motivation is a simple yet promising technique for sparse-reward RL tasks. It formulates the intrinsic objective as state coverage and encourages the agent to cover novel states by maximizing the state entropy~\cite{hazan2019provably,mutti2021task,liu2021behavior,liu2021aps}. Competence-based methods demand the agent to learn usable and differentiable low-level skills when exploring, which is shown to be too challenging~\cite{sharma2019dynamics,laskin2021urlb}.
\section{Preliminaries}
In this section, we introduce the formulations of single- and multi-agent RL tasks and the basic policy gradient method.
\subsection{Single-Agent RL}
\label{sec: single-agent RL preliminaries}
In single-agent RL tasks, the target agent interacts with the environment by taking sequential actions according to the observed state at each step, which is usually modeled as an MDP $M=(\mathcal{S}, \mathcal{A}, P, R_e, \gamma, \mu)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state space and action space, $P: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is a transition function mapping state $s$ and action $a$ to the next state distribution $P(s'|s, a)$, $R_e: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is the bounded instant extrinsic reward function, $\gamma \in [0,1)$ is the discount factor determining the horizon of the process, and $\mu \in \Delta(\mathcal{S})$ is the initial state distribution. The goal of the target agent is to maximize the expected cumulative rewards.
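The "expected cumulative reward" objective above is easy to make concrete with a few lines of Monte-Carlo estimation. A minimal sketch — the one-state toy MDP below is an illustrative assumption, not one of the paper's environments:

```python
import numpy as np

def discounted_return(rewards, gamma):
    """R(tau) = sum_t gamma^t * r_t along one trajectory."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def mc_value(sample_trajectory, gamma, n_rollouts=4000, seed=0):
    """Monte-Carlo estimate of the expected discounted return E_tau[R(tau)]."""
    rng = np.random.default_rng(seed)
    return float(np.mean(
        [discounted_return(sample_trajectory(rng), gamma)
         for _ in range(n_rollouts)]))

# Toy MDP: reward 1 every step; the episode terminates with probability 0.5.
def rollout(rng):
    rewards = []
    while True:
        rewards.append(1.0)
        if rng.random() < 0.5:
            return rewards

gamma = 0.9
v_hat = mc_value(rollout, gamma)
# Analytically, V = sum_{k>=0} (0.5 * gamma)^k = 1 / (1 - 0.45) ~= 1.818
```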
\subsection{Multi-Agent RL}
\label{sec: multi-agent RL preliminaries}
For multi-agent RL tasks, we focus on two-player zero-sum competition games. A two-player zero-sum competition game can be formulated as a Markov Game $M=((\mathcal{S}_t,\mathcal{S}_o), (\mathcal{A}_t, \mathcal{A}_o), P, (R_e, -R_e), \gamma, \mu)$, where $\mathcal{S}_t$ and $\mathcal{S}_o$ are the state spaces of the target agent and the opponent agent respectively, $\mathcal{A}_t$ and $\mathcal{A}_o$ are the target agent's action space and the opponent agent's action space respectively, $P: \mathcal{S}_t \times \mathcal{S}_o \times \mathcal{A}_t \times \mathcal{A}_o \to \Delta(\mathcal{S}_t, \mathcal{S}_o)$ is the transition function where $\Delta(\mathcal{S}_t, \mathcal{S}_o)$ is the space of the probability distribution over both $\mathcal{S}_t$ and $\mathcal{S}_o$, $R_e: \mathcal{S}_t \times \mathcal{S}_o \times \mathcal{A}_t \times \mathcal{A}_o \times \mathcal{S}_t \times \mathcal{S}_o \to \mathbb{R}$ is the bounded instant extrinsic reward function for the target agent, $-R_e$ is the extrinsic reward function for the opponent agent according to the zero-sum assumption, $\gamma \in [0,1)$ is the common discount factor determining the horizon of the game, and $\mu \in \Delta(\mathcal{S})$ is the initial state distribution. When one agent's policy is fixed, the state transition of the Markov Game will depend only on the other agent's policy instead of the joint policy.
\subsection{Policy Optimization}
As stated in \Cref{sec: single-agent RL preliminaries}, the target agent tries to maximize the expected total rewards.
For a policy $\pi$, we can use the value function $V^\pi: \mathcal{S} \to \mathbb{R}$ to represent the discounted sum of future extrinsic rewards starting from the state $s$
\begin{equation}
V^\pi(s)=\underset{\tau\sim P(\cdot|\mu,\pi)}{\mathbb{E}}\left[R(\tau) | s_0=s\right],
\end{equation}
where $\tau=(s_0,a_0,s_1,a_1,\dots)$ is the trajectory, $P(\tau|\mu,\pi)$ is the distribution of $\tau$ induced by the policy $\pi$ with the initial state distribution $\mu$,
\begin{equation}
P(\tau|\mu,\pi)=\mu(s_0) \prod_{t=0}^{\infty} P(s_{t+1}|s_t,a_t)\pi(a_t|s_t),
\end{equation}
and $R(\tau)$ is the discounted cumulative extrinsic reward along a trajectory $\tau$
\begin{equation}
R(\tau) = \sum_{t=0}^{\infty} \gamma^t r^e_t,
\end{equation}
where $r^e_t:=R_e(s_t, a_t, s_{t+1})$ is the extrinsic reward at step $t$. Similarly, the action-value function $Q^\pi: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is defined as
\begin{equation}
Q^\pi(s,a) = \underset{\tau\sim P(\cdot|\mu,\pi)}{\mathbb{E}}\left[R(\tau) | s_0=s, a_0=a\right].
\end{equation}
The goal of the agent is to find a policy $\pi_\theta$ that maximizes the value, and the optimization problem can be represented as
\begin{equation} \label{eqn: value function}
\max_\theta V^{\pi_\theta}(\mu),
\end{equation}
where $V^{\pi_\theta}(\mu):=\underset{s\sim\mu}{\mathbb{E}} V^{\pi_\theta}(s)$. According to the performance difference lemma, we can rewrite $V^{\pi_\theta}(\mu)$ as
\begin{equation} \label{eqn: advantage}
V^{\pi_\theta}(\mu) = V^{(k)}(\mu) + \underset{s\sim d^{\smash{\pi_\theta}},a\sim\pi_\theta}{\mathbb{E}}\left[\frac{1}{1-\gamma}A^{(k)}(s,a)\right],
\end{equation}
where $d^{\pi_\theta}(s) := (1-\gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t=s|\mu, \pi_\theta)$ is the state distribution induced by $\pi_\theta$ with the initial state distribution $\mu$, $V^{(k)}:=V^{\pi_{\theta_k}}$ is the value function at the $k$-th iteration, and $A^{(k)}(s,a) := Q^{\pi_{\theta_k}}(s,a) - V^{(k)}(s)$ is the advantage function.
Note that according to the definition of $d^{\pi_\theta}$, \Cref{eqn: value function} can also be represented as
\begin{equation}
\max_{d^\pi} J_e(d^\pi),
\end{equation}
where $J_e(d^\pi) := \sum_s d^\pi(s) \hat{r}^e(s)$ is also called the extrinsic objective and $\hat{r}^e(s) = \mathbb{E}_{a\sim\pi,s'\sim P}R_e(s,a,s')$ is the instant extrinsic reward at state $s$. Trust-Region Policy Optimization (TRPO), introduced by Schulman~\cite{schulman2015trust}, guarantees monotonic improvement of the policy by constraining the KL-divergence between the new policy and the old policy as follows
\begin{equation} \label{eqn: TRPO}
\begin{aligned}
&\!\max_\theta \underset{s\sim d_\mu^{\smash{(k)}},a\sim\pi_\theta}{\mathbb{E}} \left[A^{(k)}(s,a)\right] \\
&\text{s.t.}\ D_{\text{KL}}\left(\operatorname{Pr}_\mu^{(k)} \| \operatorname{Pr}_\mu^{\pi_\theta}\right) \leq \delta,
\end{aligned}
\end{equation}
where $d^{(k)}_\mu:=d^{\pi_{\theta_k}}_\mu$ is the state visitation distribution induced by $\pi_{\theta_k}$ with the initial state distribution $\mu$, $\mathrm{Pr}_\mu^{(k)}:=\mathrm{Pr}_\mu^{\pi_{\theta_k}}$ is the trajectory distribution induced by the policy $\pi_{\theta_k}$, and $D_{\text{KL}}(P_1||P_2)$ is the KL-divergence between two distributions $P_1$ and $P_2$. Proximal Policy Optimization (PPO)~\cite{schulman2017proximal} avoids the complex second-order optimization involved in \Cref{eqn: TRPO} by constructing a new objective function to maximize
\begin{equation} \label{eqn: vanilla PPO}
\begin{aligned}
L_k^{\text{PPO}}(\theta) = & \underset{s_t\sim d_\mu^{\smash{(k)}},a_t\sim\pi_{\theta_k}}{\mathbb{E}} \min \left\{ \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_k}(a_t|s_t)} \hat{A}_t, \right. \\
& \left.
\operatorname{clip}\left( \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_k}(a_t|s_t)}; 1 - \epsilon, 1 + \epsilon \right) \hat{A}_t \right\},
\end{aligned}
\end{equation}
where
\begin{equation}
\operatorname{clip}(x;1-\epsilon, 1+\epsilon) =
\begin{cases}
1-\epsilon, &x\le 1 - \epsilon \\
1 + \epsilon, &x\ge 1 + \epsilon \\
x, & \text{otherwise}
\end{cases}
\end{equation}
is the clipping function and
\begin{equation}
\hat{A}_t = \sum_{l=0}^{\infty}(\gamma \lambda)^l (r^e_{t+l} + \gamma V^{(k)}(s_{t+l+1}) - V^{(k)}(s_{t+l}))
\end{equation}
is the Generalized Advantage Estimation (GAE)~\cite{schulman2015high}. The clipping function makes sure that the policy gradient vanishes when the probability ratio $\pi_\theta(a_t|s_t)/\pi_{\theta_k}(a_t|s_t)$ leaves $[1-\epsilon, 1+\epsilon]$ and the clipped term attains the minimum; GAE is able to reduce the variance of policy gradient estimates; the outer minimization operator ensures that the objective function $L_k^{\text{PPO}}(\theta)$ is a lower bound of the original objective. PPO then utilizes multiple steps of mini-batch stochastic gradient ascent on $L_k^{\text{PPO}}(\theta)$ with a dataset $\mathcal{D}:=\{(s_t,a_t,r^e_t,s_{t+1})\}_{t=0}^{N}$ collected by the old stochastic policy $\pi_{\theta_k}$.
\section{Threat Model}
\label{sec: threat model}
In this section, we define the threat model for black-box adversarial policy learning in single- and multi-agent RL tasks.
\subsection{Threat Model for Single-Agent RL}
In single-agent RL tasks, the goal of the evasion attacker is to reduce the expected cumulative extrinsic reward of the target agent $V^{\pi^v}(\mu)$, where $\pi^v$ is the policy of the target (victim) agent parameterized by fixed deterministic parameters. Note that when the extrinsic reward signal is the indicator of task completion, $V^{\pi^v}(\mu)$ is equal to the success rate of the task under the policy $\pi^v$. The attacker does not know $\pi^v$, including the target policy network's architecture, hyperparameters, parameters, activations, and outputs.
The attacker also has no knowledge of the environment dynamics or permission to change the environment directly. Moreover, the attacker is assumed to be unaware of the shaping or intrinsic reward function utilized by the victim agent for training $\pi^v$ before deployment. This assumption is reasonable since the reward signal and the corresponding value network used in the training phase are usually unnecessary for using $\pi^v$ in the deployment phase. Thus, the adversary can only leverage the sparse extrinsic signal to determine whether the evasion attack is successful. The adversary can only access the input $s$ of the victim policy $\pi^v$ and is restricted to adding small bounded perturbations to $s$ to make the attack imperceptible. The adversary can query the victim model, e.g., by collecting rollouts of the victim policy. Formally, we model the state adversary as $ a^a \sim \pi^a(\cdot|s)$, which generates an adversarial perturbation $a^a$ based on the victim's current state. The perturbation $a^a$ is usually bounded in an $\ell_p$ norm ball with a small constant radius $\epsilon$, that is, $\|a^a\|_p\le\epsilon$~\cite{zhang2021robust}. The transition function for the adversary under this threat model becomes \begin{equation} P^a(s_{t+1}|s_t,a^a_t) = P(s_{t+1}|s_t, \pi^v(s_t+a^a_t)). \end{equation} \subsection{Threat Model for Multi-Agent RL} As stated in \Cref{sec: multi-agent RL preliminaries}, we focus on two-player zero-sum competition games in this paper, where the sum of the two agents' rewards equals zero for any state transition. We assume that the evasion attacker can control the opponent of the target agent to indirectly degrade the victim's performance. Similar to the single-agent black-box evasion attack, the adversary's goal is to minimize the expected cumulative extrinsic reward of the victim.
The adversary is also ignorant of the victim's policy model, including the network structure, parameters, activations, and outputs. The target agent follows a fixed deterministic policy $\pi^v$, reflecting the common case where the parameters of a deployed safety-critical policy network are static or infrequently updated. When the victim policy is held fixed, the two-player Markov game $M$ reduces to a single-player MDP $M^a = ((\mathcal{S}_v,\mathcal{S}_a), \mathcal{A}_a, P^a, R^a)$ for the evasion attacker to solve, where $P^a: \mathcal{S}_v \times \mathcal{S}_a \times \mathcal{A}_a \to \Delta(\mathcal{S}_v, \mathcal{S}_a)$ is the transition function with the fixed victim policy $\pi^v$ embedded~\cite{guo2021adversarial}. In each interaction step, the victim agent takes its action $a^v_t = \pi^v(s^v_t, s^a_t)$ according to the fixed deterministic policy and the current state, while the adversary samples its action from the stochastic policy $a^a_t \sim \pi^a(\cdot|s^v_t, s^a_t)$. The transition function for the adversary under this threat model is then \begin{equation} P^a(s^v_{t+1},s^a_{t+1}|s^v_{t},s^a_{t},a^a_t) = P(s^v_{t+1}, s^a_{t+1}|s^v_t, s^a_t, a^v_t, a^a_t). \end{equation} \section{Intrinsic Objectives for IMAP} In this section, we design appropriate intrinsic objectives for black-box adversarial policy learning. According to our threat models, the adversary can fetch neither the target policy's gradients nor its outputs. Thus, it cannot craft adversarial examples for the target policy via FGSM or train a surrogate target model to exploit the transferability of adversarial examples. Moreover, the adversary does not know the value network and the reward function utilized by the target agent in the training phase, making the problem even more challenging for the adversary to solve. To facilitate adversarial policy learning in the black-box setting, we leverage intrinsic motivation to encourage the adversary to discover novel attacking strategies.
Intrinsic motivation is a promising exploration technique for sparse-reward RL. Formally, we use the following general objective function for black-box adversarial policy learning with the intrinsic objective as a regularizer: \begin{equation} \label{eqn: regularized objective} L_k(d^{\pi^a}) = J_e^a(d^{\pi^a}) + \tau_k J_i\left(d^{\pi^a};\{d^{\pi_i^a}\}_{i=1}^k\right), \end{equation} where $J_e^a(d^{\pi^a}) := -\sum_s d^{\pi^a}(s) \hat{r}^e(s)$ is the extrinsic objective of the adversary, $r^e$ is the victim agent's extrinsic reward, $J_i\left(d^{\pi^a};\{d^{\pi_i^a}\}_{i=1}^k\right)$ is the intrinsic objective, which is a function of the state distribution induced by the adversarial policy $\pi^a$ and the state distributions induced by all prior adversarial policies $\pi^a_1, ..., \pi^a_k$, and $\tau_k$ is the temperature parameter determining the strength of the regularizer. Note that here we use a surrogate extrinsic reward function $\hat{r}^e(s)$ instead of the true extrinsic reward function $r^e(s)$ utilized by the victim agent, since the adversary is assumed to be unaware of $r^e(s)$ and has to design a surrogate extrinsic reward function according to the task type. We further discuss the choice of the surrogate extrinsic reward function in \Cref{sec: experiment setup}. We design four appropriate intrinsic objectives $J_i(d^{\pi^a})$ for black-box adversarial policy learning to encourage the adversary to explore novel states: two coverage-driven intrinsic objectives, one diversity-driven objective, and one risk-driven intrinsic objective. We first introduce the state-coverage-driven intrinsic objective, which encourages the adversary to lure the victim agent into covering a specific induced state distribution. We then present a policy-coverage-driven intrinsic objective that incites the adversary to maximize the deviation of the occupancy of the victim policy from its optimal trajectories.
Inspired by constrained reinforcement learning, we also propose a risk-driven intrinsic objective by designing a heuristic task-agnostic risk function for the adversary to restrict the victim's dynamic behavior, which is expected to reduce the performance of the victim policy. Last, the diversity-driven intrinsic objective stimulates the adversary to keep the new adversarial policy deviating from the old ones by maximizing their KL-divergence. Though these intrinsic objectives can encourage the adversary to explore novel states, the adversary might be distracted by them in sparse-reward tasks. To decrease the bias introduced by these intrinsic objectives, we also propose an approximate extrinsic optimality constraint for \Cref{eqn: regularized objective} to ensure that the adversarial policy is approximately optimal with respect to the expected extrinsic reward, and we discuss the choice of the temperature parameter $\tau_k$. \subsection{Coverage-Driven Intrinsic Objective} Current adversarial policy learning in single- and multi-agent RL tasks uses heuristic dithering exploration methods that randomly perturb the optimal actions regardless of the agent's learning process, which have been shown to be inefficient when the extrinsic reward is sparse. To address this issue, we introduce two coverage-driven intrinsic objectives for black-box sparse-reward adversarial policy learning. Note that the design of the intrinsic objectives for the adversary can be slightly different in single- and multi-agent RL tasks due to differences in their MDP modeling. \subsubsection{State Coverage for Single-Agent RL} State coverage (SC), also known as state distribution matching, is a natural choice for the intrinsic objective when the extrinsic reward is sparse. SC encourages the adversary to lure the victim into covering a certain state distribution, analogous to targeted evasion attacks in image classification tasks.
Since we focus on untargeted evasion attacks, we choose the uniform distribution $\mathcal{U}$ as a natural target distribution, that is, \begin{equation} J_i^{\text{SC}}(d^{\pi^a}) = - D_{\text{KL}}\left(d^{\pi^a} \| \mathcal{U} \right), \end{equation} which is equivalent (up to an additive constant) to the state entropy \begin{equation} J_i^{\text{SC}}(d^{\pi^a}) = -\sum_s d^{\pi^a}(s) \ln d^{\pi^a}(s). \end{equation} By maximizing $J_i^{\text{SC}}$, the adversary lures the victim agent into covering the state space as uniformly as possible. For a non-robust victim policy, it will be hard to recover from an unseen state back to the optimal trajectory. \subsubsection{State Coverage for Multi-Agent RL} For multi-agent RL tasks, the adversary can not only lure the victim into covering a certain state distribution but also enforce itself to match a certain state distribution. When some prior knowledge exists, the adversary can leverage the prior to facilitate the learning process, similar to imitation learning. When no prior is available, the adversary can maximize its own state entropy to visit novel states more efficiently when the extrinsic reward is sparse. We first define the marginal state distribution as \begin{equation} d^{\pi}_{\mathcal{Z}}(z) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t P(\Pi_\mathcal{Z} s_t=z|\mu, \pi), \end{equation} where $\Pi_\mathcal{Z}$ is an operator mapping the full state into a low-dimensional projection space $\mathcal{Z}$; $\mathcal{Z}$ can be a subspace of $\mathcal{S}$ or a latent space generated by dimension-reduction methods such as Principal Component Analysis (PCA) or an autoencoder.
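The equivalence between the KL form and the entropy form of $J_i^{\text{SC}}$ can be checked numerically for a small discrete state space. A minimal sketch, assuming tabular visitation distributions (the function names are ours, not part of the method):

```python
import numpy as np

def state_entropy(d):
    """Entropy of a discrete state-visitation distribution d."""
    d = np.asarray(d, dtype=float)
    nz = d > 0                      # convention: 0 * log 0 = 0
    return -np.sum(d[nz] * np.log(d[nz]))

def neg_kl_to_uniform(d):
    """J_i^SC = -KL(d || U) = H(d) - log|S| for a uniform target U."""
    d = np.asarray(d, dtype=float)
    return state_entropy(d) - np.log(len(d))

uniform = np.ones(4) / 4
peaked = np.array([0.85, 0.05, 0.05, 0.05])
# The uniform distribution maximizes the SC objective.
assert neg_kl_to_uniform(uniform) > neg_kl_to_uniform(peaked)
```

Maximizing either form therefore drives the induced visitation toward uniform coverage of the state space.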
The SC-driven intrinsic objective for black-box adversarial policy learning in multi-agent RL tasks is then formulated as follows: \begin{equation} J_i^{\text{SC-M}}(d^{\pi^a}) = (1-\alpha^v)J_i^{\text{SC}}(d^{\pi^a}_{\mathcal{S}_a}) + \alpha^v J_i^{\text{SC}}(d^{\pi^a}_{\mathcal{S}_v}), \end{equation} where $\alpha^v$ is a balancing constant for the two sub-objectives. \subsubsection{Policy Coverage for Single-Agent RL} Instead of luring the victim policy to a target state distribution, policy coverage (PC) aims to encourage the adversary to derail the victim policy from its optimal trajectories by maximizing the deviation of the state distribution $d^{\pi^a}$ induced by the next adversarial policy from the policy coverage $\rho^k$ induced by all prior policies. The policy coverage $\rho^k$ is defined as the sum of past visitation densities~\cite{zhang2021made}, that is, $\rho^k = \sum_{i=1}^k d^{\pi^a_i}$. Similar to SC, we define the PC-driven intrinsic objective as the entropy of the policy coverage $\rho^k$, that is, \begin{equation} J_i^{\text{PC}}(d^{\pi^a}) = -\sum_s \rho^k(s) \ln \rho^k(s). \end{equation} By maximizing the entropy of the policy coverage, the victim policy is gradually lured away from its optimal trajectories. \subsubsection{Policy Coverage for Multi-Agent RL} Similar to the design of the state-coverage-based intrinsic objective for black-box adversarial policy learning in single-agent tasks, we propose a policy-coverage-based intrinsic objective for multi-agent scenarios as \begin{equation} \label{eqn: PC-MA} J_i^{\text{PC-M}}(d^{\pi^a}) = (1-\alpha^v)J_i^{\text{PC}}(d^{\pi^a}_{\mathcal{S}_a}) + \alpha^v J_i^{\text{PC}}(d^{\pi^a}_{\mathcal{S}_v}), \end{equation} where $\rho^k_{\mathcal{S}} := \sum_{i=1}^k d^{\pi^a_i}_{\mathcal{S}}$ is the marginal policy coverage. The first term encourages the adversary to control the opponent agent to cover more states instead of struggling in place.
The second term rewards the adversary for inducing the victim to cover novel states so as to exploit the victim policy's potential weaknesses. \subsubsection{State Density Approximation} Since all the coverage-based intrinsic objectives involve state density, selecting an appropriate state density approximation method is crucial. In current related works, there are two main types of methods to approximate state density, i.e., prediction-error-based estimation and $\kappa$-nearest-neighbour ($\kappa$-NN) estimation. Prediction-error-based estimation such as ICM~\cite{pathak2017curiosity} or RND~\cite{burda2018exploration} utilizes the prediction error of a neural network at a specific state $s$ to represent its sparsity (the inverse of state density). However, it may suffer from forgetting problems~\cite{zhang2021noveld,zhang2021made}. We thus turn to $\kappa$-NN estimation, which is more efficient and stable~\cite{liu2021behavior,zheng2022cim}. \paragraph{$\kappa$-NN estimation} $\kappa$-NN estimation is a nonparametric estimation method. It estimates the density of a state via the inverse of the distance between the state and its $\kappa$-nearest neighbor, that is, \begin{equation} \hat{\rho}^k(s) = \frac{1}{\|s-s^\kappa_\mathcal{B}\|}, \end{equation} where $s^\kappa_\mathcal{B}\in\mathcal{B}$ is the $\kappa$-nearest state of state $s$ in the replay buffer $\mathcal{B}$. Note that $\mathcal{B}$ includes all history trajectories sampled by $\pi^a_1,...,\pi^a_k$. A more stable version of $\kappa$-NN estimation is \begin{equation} \hat{\rho}^k(s) = \frac{\kappa}{\sum_{j=1}^{\kappa}\|s-s^j_\mathcal{B}\|}, \end{equation} which uses the average distance to the $\kappa$ nearest neighbors instead of the distance to the single $\kappa$-th neighbor.
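The averaged $\kappa$-NN estimator above can be sketched in a few lines of numpy. This is a minimal illustration; the function name, the small regularizing constant, and the assumption that the buffer excludes the query state itself are ours:

```python
import numpy as np

def knn_density(s, buffer, kappa=5):
    """Averaged kappa-NN density estimate: kappa over the summed distances
    to the kappa nearest states in the replay buffer."""
    dists = np.linalg.norm(buffer - s, axis=1)   # distance to every buffered state
    nearest = np.sort(dists)[:kappa]             # the kappa smallest distances
    return kappa / (np.sum(nearest) + 1e-8)      # small constant avoids division by zero

# States in a well-visited region receive a higher density estimate.
rng = np.random.default_rng(0)
buffer = rng.normal(0.0, 0.1, size=(200, 2))     # trajectories clustered near the origin
assert knn_density(np.zeros(2), buffer) > knn_density(np.array([5.0, 5.0]), buffer)
```

In practice the buffer would hold (projected) states from all past trajectories, and the estimate is recomputed for each batch of sampled states.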
For the state density $d^{\pi^a}$, since we cannot use the next policy $\pi^a$ to directly sample trajectories, we approximate it using the trajectories sampled by the latest policy $\pi^a_k$ under the assumption that the two policies are similar, that is, \begin{equation} \hat{d}^{\pi^a}(s) = \frac{\kappa}{\sum_{j=1}^{\kappa}\|s-s^j_\mathcal{D}\|}, \end{equation} where $\mathcal{D}$ is a replay buffer containing only trajectories sampled by the latest policy $\pi^a_k$. \subsection{Risk-Driven Intrinsic Objective} Apart from the coverage-driven intrinsic objectives, we also propose a novel risk-driven intrinsic objective for black-box adversarial policy learning. Inspired by constrained RL, where a cost function is defined to penalize the agent's inappropriate behavior, we design a heuristic intrinsic objective for the adversary based on a task-agnostic risk function, that is, \begin{equation} J_i^{\text{R}}(d^{\pi^a}) = -\sum_s d^{\pi^a}(s) \|s-s_0\|, \end{equation} where $s_0\sim\mu$ is the initial state of the agent. Intuitively, this risk-based intrinsic objective encourages the adversary to keep the target agent stuck near the initial state instead of following the optimal trajectories. It differs from reward shaping since it is task-agnostic and requires no task-domain knowledge. Despite its simplicity, we show its effectiveness in certain tasks, especially those with termination mechanisms, that is, tasks where the episode is terminated when the agent steps into dangerous states predefined by the environment. For multi-agent tasks, we can define a similar risk-driven intrinsic objective as \begin{equation} J_i^{\text{R-M}}(d^{\pi^a}) = -\sum_s d^{\pi^a}(s) \|\Pi_{\mathcal{S}_v} s - \Pi_{\mathcal{S}_v} s_0\|. \end{equation} Here we use the projected state of the victim agent to calculate the risk instead of the joint state, since we only expect the victim agent to be stuck near the initial state.
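The per-state risk term inside both objectives is just a negative distance to the initial state. A minimal sketch (the function name and the index-based projection onto the victim's coordinates are our assumptions):

```python
import numpy as np

def risk_reward(s, s0, victim_dims=None):
    """Per-state risk-driven bonus -||Pi s - Pi s0||; victim_dims selects the
    victim's coordinates in the joint state for the multi-agent case."""
    s, s0 = np.asarray(s, dtype=float), np.asarray(s0, dtype=float)
    if victim_dims is not None:          # project onto the victim's state subspace
        s, s0 = s[victim_dims], s0[victim_dims]
    return -np.linalg.norm(s - s0)

# The farther the (victim's) state drifts from s0, the lower the bonus.
assert risk_reward([3.0, 4.0], [0.0, 0.0]) == -5.0
assert risk_reward([3.0, 4.0, 9.9], [0.0, 0.0, 0.0], victim_dims=[0, 1]) == -5.0
```

Summing this bonus weighted by the visitation density recovers $J_i^{\text{R}}$ and $J_i^{\text{R-M}}$ respectively.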
\subsection{Diversity-Driven Intrinsic Objective} The diversity-driven intrinsic objective encourages the adversary to explore different behaviors by continuously enforcing the next adversarial policy to deviate from its prior ones. This way, the adversary is expected to explore novel attacking strategies instead of falling into a local optimum. To achieve this goal, we first introduce a mimic policy $\pi^m$ to learn the behavior of the adversarial policy. The mimic policy is updated by solving the following optimization problem, \begin{equation} \label{eqn: loss of mimic policy} \min_{\pi^m} \sum_s d^{\pi^a_k}(s) D_{\text{KL}}\left(\pi^a_k(s) \| \pi^m(s)\right), \end{equation} that is, minimizing the KL-divergence between the latest policy $\pi^a_k$ and the mimic $\pi^m$. \Cref{eqn: loss of mimic policy} can be solved by Stochastic Gradient Descent (SGD). Under the assumption that the next policy $\pi^a$ is similar to the latest policy $\pi^a_k$, we then design the diversity-driven intrinsic objective for the black-box adversarial policy as follows: \begin{equation} \label{eqn: div-driven objective} J_i^{\text{D}}(d^{\pi^a}) = \sum_s d^{\pi^a}(s) D_{\text{KL}}\left(\pi^a_k(s) \| \pi^m_{k}(s)\right). \end{equation} Note that $D_{\text{KL}}\left(\pi^a_k(s) \| \pi^m_{k}(s)\right)$ does not depend on $d^{\pi^a}$, so \Cref{eqn: div-driven objective} is a linear function of $d^{\pi^a}$. The objective $J_i^{\text{D}}$ encourages the adversary to visit states where the KL-divergence between the latest adversarial policy and the mimic policy is large. In practice, we usually select a smaller learning rate for the mimic to stabilize learning. Intuitively, the delayed update of the mimic keeps the diversity signal from changing too abruptly. \subsection{Solving the Regularized Objective} We now present how to maximize the regularized objective $L_k(d^{\pi^a})$ defined in \Cref{eqn: regularized objective}.
It is easy to verify that $L_k(d^{\pi^a})$ is a concave function of $d^{\pi^a}$ when using any of the previously defined intrinsic objectives. We leverage the Frank-Wolfe algorithm (also known as the conditional gradient method) to solve $\max_{d^{\pi^a}}L_k(d^{\pi^a})$. The Frank-Wolfe algorithm iteratively solves the following problem \begin{equation} d^{\pi^a_{k+1}} \in \arg\max_d \left\langle d, \nabla_{d^{\pi^a}} \left.L_k(d^{\pi^a})\right|_{d^{\pi^a}=d^{\pi^a_k}} \right\rangle \end{equation} to construct a sequence of estimates $d^{\pi_0^a},d^{\pi_1^a},...$ that converges to a solution of the regularized objective. The objective on the R.H.S. is closely related to the Frank-Wolfe gap. Note that solving this linear subproblem is equivalent to finding a policy $\pi_{k+1}^a$ that maximizes an expected cumulative reward whose per-state reward is proportional to the derivative of $L_k(d^{\pi^a})$. We thus define the intrinsic bonus as \begin{equation} \label{eqn: intrinsic reward} r^i_k = \nabla_{d^{\pi^a}} \left.L_k(d^{\pi^a})\right|_{d^{\pi^a}=d^{\pi^a_k}}. \end{equation} We propose a modified PPO objective based on the intrinsic bonus as \begin{equation} \label{eqn: modified PPO objective} \begin{aligned} L_k^{\text{PPO}}(\theta) = & \underset{s_t\sim d_\mu^{\smash{(k)}},a_t\sim\pi_{\theta_k}}{\mathbb{E}} \min \left\{ \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_k}(a_t|s_t)} \tilde{A}_t, \right. \\ & \left. \operatorname{clip}\left( \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_k}(a_t|s_t)}; 1 - \epsilon, 1 + \epsilon \right) \tilde{A}_t \right\}, \end{aligned} \end{equation} where \begin{equation} \label{eqn: modified advantage} \begin{aligned} \tilde{A}_t & = \hat{A}_t + \tau_k \hat{A}^i_t \\ \hat{A}_t^i & = \sum_{l=0}^{\infty}(\gamma \lambda)^l (r^i_{t+l} + \gamma V^{(k)}_i(s_{t+l+1}) - V^{(k)}_i(s_{t+l})). \end{aligned} \end{equation} We use $\tilde{A}_t$ to denote the weighted advantage function.
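Computing $\tilde{A}_t = \hat{A}_t + \tau_k \hat{A}^i_t$ amounts to running two GAE recursions, one on extrinsic and one on intrinsic rewards, and mixing them. A minimal numpy sketch, assuming finite trajectories (the function names are ours; `values` carries one bootstrap entry more than `rewards`):

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one trajectory.
    values must contain len(rewards) + 1 entries (bootstrap value at the end)."""
    adv = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # one-step TD error
        running = delta + gamma * lam * running                 # backward recursion
        adv[t] = running
    return adv

def weighted_advantage(r_e, v_e, r_i, v_i, tau, gamma=0.99, lam=0.95):
    """tilde A_t = hat A_t + tau * hat A^i_t, mixing the two advantage streams."""
    return gae(r_e, v_e, gamma, lam) + tau * gae(r_i, v_i, gamma, lam)

# With gamma = lam = 1 and zero values, advantages reduce to reward-to-go sums.
assert np.allclose(gae([1.0, 1.0], [0.0, 0.0, 0.0], gamma=1.0, lam=1.0), [2.0, 1.0])
```

Each stream uses its own value estimates, matching the separate extrinsic and intrinsic value functions in the objective above.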
$V^{(k)}_i(s)=\mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r^i_t | s_0=s\right]$ is the intrinsic value function under the policy $\pi^a_k$. $V^{(k)}_i$ is usually approximated by a neural network with parameters $\phi_k^i$ and denoted as $V_{\phi_k^i}$. $\hat{A}^i_t$ is the GAE calculated with intrinsic rewards and intrinsic value functions. The extrinsic value function $V_{\phi^e}$ and the intrinsic value function $V_{\phi^i}$ are updated by solving the following regressions via SGD: \begin{equation} \label{eqn: loss of value functions} \begin{aligned} \phi^e_{k+1} &= \arg\min_{\phi^e} \underset{s_t\sim d_\mu^{\smash{(k)}}}{\mathbb{E}} \| V_{\phi^e}(s_t)-(V_{\phi^e_k}(s_t) + \hat{A}_t)\|,\\ \phi^i_{k+1} &= \arg\min_{\phi^i} \underset{s_t\sim d_\mu^{\smash{(k)}}}{\mathbb{E}} \| V_{\phi^i}(s_t)-(V_{\phi^i_k}(s_t) + \hat{A}^i_t)\|. \end{aligned} \end{equation} \Cref{alg: IMAP} summarizes the complete procedure for intrinsically motivated black-box adversarial policy learning. \begin{algorithm}[t] \caption{IMAP} \label{alg: IMAP} \begin{algorithmic} \STATE Initialize the adversarial policy $\pi_\theta^a$ and its value functions $V_{\phi^e}$ and $V_{\phi^i}$ \STATE Initialize replay buffers $\mathcal{B}$ and $\mathcal{D}$ \STATE Initialize the Lagrangian multiplier $\lambda_0=0$, the step counter $t=0$, and the batch counter $k=0$ \WHILE {$t<T$} \STATE Collect samples $\mathcal{D}=\{(s_t,a_t,r_t^e,s_{t+1})\}$ using $\pi_{\theta_k}^a$ \STATE Update the replay buffer $\mathcal{B} = \mathcal{B} \cup \mathcal{D}$ \STATE $t = t + \operatorname{len}(\mathcal{D})$ \STATE Compute intrinsic rewards $r^i_k$ via \Cref{eqn: intrinsic reward} \STATE Compute the advantage $\tilde{A}_t$ via \Cref{eqn: modified advantage} \STATE Update $\theta$ via \Cref{eqn: modified PPO objective} \STATE Update $\phi^e$ and $\phi^i$ via \Cref{eqn: loss of value functions} \STATE Update $\lambda_k$ via \Cref{eqn: update lambda} \STATE $k=k+1$ \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Reducing Bias of
the Intrinsic Objective} The regularizer in \Cref{eqn: regularized objective} may introduce bias; that is, the policy that maximizes the regularized objective $L_k(d^{\pi^a})$ is not guaranteed to maximize the extrinsic objective $J_e^a(d^{\pi^a})$. A common practice to reduce the bias is to carry out a sufficient hyperparameter search to find the best sequence of temperature parameters $\tau_1, ..., \tau_k$ for different tasks. A more adaptive way is to introduce an extrinsic optimality constraint to prevent the agent from being distracted by the intrinsic rewards. The constrained optimization problem for black-box adversarial policy learning is \begin{equation} \label{eqn: constrained RL} \begin{aligned} &\!\max_{d^{\pi^a}} J_e^a(d^{\pi^a}) + J_i(d^{\pi^a}) \\ &\text{s.t.}\ J_e^a(d^{\pi^a}) = \max_{d} J_e^a(d). \end{aligned} \end{equation} It can be viewed as first finding a distribution $d^{\pi^a}$ that maximizes $L_k(d^{\pi^a})$ and then checking whether this distribution satisfies the constraint. For instance, $\max_{d^\pi} J_e^a(d^{\pi}) = 1$ when the extrinsic reward signal is the task success indicator, which makes the constraint too harsh. To solve \Cref{eqn: constrained RL} efficiently without introducing complex optimization mechanisms, we propose the approximate extrinsic optimality constraint as a soft adaptive constraint \begin{equation} \label{eqn: soft-constrained RL} \begin{aligned} &\!\max_{d^{\pi^a}} J_e^a(d^{\pi^a}) + J_i(d^{\pi^a}) \\ &\text{s.t.}\ J_e^a(d^{\pi^a}) \ge \beta J_e^a(d^{\pi_{k-1}^a}), \end{aligned} \end{equation} where the hyperparameter $\beta\ge1$ adjusts the strength of the constraint. Instead of training another policy to evaluate $J_e^a(d^{\pi})$, we assume that the performance of the policy with distribution $d^{\pi_{k-1}^a}$ maximizing $L_{k-1}(d^{\pi^a})$ is similar to that of the policy maximizing $J_e^a(d^{\pi})$ at iteration $k$.
To solve the soft-constrained optimization problem, we leverage the Lagrangian method to convert it into an unconstrained min-max optimization problem. Defining $b_{k-1}:=\beta J_e^a(d^{\pi_{k-1}^a})$, the Lagrangian of \Cref{eqn: soft-constrained RL} is then \begin{equation} \label{eqn: Lagrangian} \begin{aligned} \mathcal{L}(d^{\pi^a}, \lambda) = & J_e^a(d^{\pi^a}) + J_i(d^{\pi^a}) \\ & + \lambda( J_e^a(d^{\pi^a}) - b_{k-1} ) \\ = & (1 + \lambda)J_e^a(d^{\pi^a}) + J_i(d^{\pi^a}) - \lambda b_{k-1}, \end{aligned} \end{equation} where $\lambda$ is the Lagrangian multiplier. The corresponding dual problem is \begin{equation} \min_{\lambda\ge 0}\max_{d^{\pi^a}}\mathcal{L}(d^{\pi^a}, \lambda). \end{equation} The Lagrangian multiplier $\lambda$ can be updated by projected stochastic gradient descent \begin{equation} \label{eqn: update lambda} \lambda_k = \max\left\{\lambda_{k-1} - \eta (J_e^a(d^{\pi_k^a}) - b_{k-1}),\, 0\right\}, \end{equation} where $\eta$ is the update step size. The Lagrangian suggests an interpretation of $\lambda$: as $\lambda$ increases, the agent is encouraged to pay more attention to the extrinsic objective. To make the learning process more stable, we use the following ``normalized'' bias-reduction (BR) objective \begin{equation} \begin{aligned} \hat{\mathcal{L}}(d^{\pi^a}, \lambda_{k-1}) = & \frac{1}{1 + \lambda_{k-1}} (\mathcal{L}(d^{\pi^a}, \lambda_{k-1}) + \lambda_{k-1} b_{k-1}) \\ = & J_e^a(d^{\pi^a}) + \tau_k J_i(d^{\pi^a}), \end{aligned} \end{equation} where \begin{equation} \label{eqn: tau} \tau_k = \frac{1}{1 + \lambda_{k-1}} \end{equation} is the temperature parameter. \section{Experiments} In this section, we conduct comprehensive experiments on various types of single- and multi-agent RL tasks to evaluate the attack capability of our IMAP equipped with four different intrinsic objectives. \subsection{Experiment Setup} \label{sec: experiment setup} \begin{figure} \caption{Rendered pictures of typical MuJoCo environments we used to evaluate IMAP.
\protect\subref{fig1-a} the dense-reward single-agent locomotion task Ant and the sparse-reward single-agent locomotion task SparseAnt; \protect\subref{fig1-b} the sparse-reward single-agent navigation task AntUMaze; \protect\subref{fig1-c} \& \protect\subref{fig1-d} two sparse-reward multi-agent competition tasks YouShallNotPass and KickAndDefend, where the blue humanoid is controlled by the victim policy and the red humanoid is controlled by the adversarial policy.} \label{fig: env-a} \label{fig: env-b} \label{fig: env-c} \label{fig: env-d} \label{fig: env} \end{figure} We evaluate our IMAP on both single-agent and multi-agent RL tasks. All environments are implemented based on the OpenAI Gym library. In single-agent environments, we select 1) four dense-reward locomotion tasks, including Hopper, Walker2d, HalfCheetah, and Ant; 2) six sparse-reward locomotion tasks, including SparseHopper, SparseWalker2d, SparseHalfCheetah, SparseAnt, SparseHumanoidStandup, and SparseHumanoid; 3) one sparse-reward navigation task, AntUMaze. For multi-agent RL tasks, we select two two-player zero-sum competition games, YouShallNotPass and KickAndDefend. \subsubsection{Dense-Reward Single-Agent Locomotion Tasks} \paragraph{Task Description} In the four dense-reward locomotion tasks, the victim agent is expected to run as fast as possible and live as long as possible. The maximum length of one episode is set to 1000 timesteps. The victim agent is trained to maximize the average episode cumulative reward. The dense instant extrinsic reward function in these tasks is defined as follows: \begin{equation} \label{eqn: true reward function} r^{e1} = v_x - \omega_a\|a\|^2 - \omega_f\|f\|^2 + b_1, \end{equation} where $v_x$ is the forward velocity of the robot, $a$ is the action vector output by the target policy, $f$ is the contact force vector clipped elementwise to $[-1, 1]$, $\omega_a$ and $\omega_f$ are two task-dependent constant coefficients, and $b_1$ is the constant living bonus.
According to the threat model defined in \Cref{sec: threat model}, the adversary is assumed to have no authority to obtain the actions $a^v$ of the victim agent and thus cannot utilize the true reward function defined by \Cref{eqn: true reward function}. Instead, the adversary should define a surrogate extrinsic reward function inferred from the task. To reduce the bias introduced by manually designing a surrogate extrinsic reward, we use the following simple surrogate extrinsic reward: \begin{equation} \label{eqn: surrogate extrinsic reward} \hat{r}^{e1} = \omega_v v_x + 1, \end{equation} where $\omega_v$ is a constant coefficient to balance the forward reward and the living bonus of 1. \paragraph{Evaluation Metrics} We select vanilla PPO, which uses \Cref{eqn: vanilla PPO} as the objective, and five robust training methods for victim policy learning, and we report the average episodic rewards of these models under no attack and against various black-box attacks. Our selected robust training methods include (1) SA~\cite{zhang2020robust}, which improves the robustness of PPO via a smooth policy regularization (denoted as the SA-regularizer for concision) on the policy network solved by convex relaxations; (2) ATLA~\cite{zhang2021robust}, which alternately trains the agent and an RL attacker with independent value and policy networks; (3) ATLA-SA~\cite{zhang2021robust}, which combines the ATLA training framework and the SA-regularizer and uses an LSTM as the policy network; (4) RADIAL~\cite{oikarinen2021robust}, which leverages an adversarial loss function based on bounds of the policy network under bounded $\ell_\infty$ attacks; (5) WocaR~\cite{liang2022efficient}, which directly estimates and optimizes the worst-case cumulative episode reward based on bounds of the policy network under bounded $\ell_\infty$ attacks. In sum, SA, RADIAL, and WocaR belong to certified robust regularizer-based defense methods against evasion attacks, while ATLA and ATLA-SA belong to adversarial training defense methods.
We use the publicly released robust models as the victims; among them, WocaR is the state-of-the-art robust RL method. \subsubsection{Sparse-Reward Single-Agent Locomotion Tasks} \paragraph{Task Description} In the six sparse-reward locomotion tasks, the victim agent starts from the initial position and needs to move forward across a distant line to complete the task and obtain an extrinsic reward signal. The episode is terminated once the victim agent receives the extrinsic reward or steps into unhealthy states defined by the task. The episode is truncated when its length exceeds 500 timesteps. The sparse reward function is defined as \begin{equation} r^{e2} = \mathbb{1}[x\ge x_g] - b_2, \end{equation} where $\mathbb{1}[\cdot]$ is the indicator function and $b_2$ is a living cost that forces the victim agent to move as fast as possible. Still, the adversary does not know the training procedure of the victim agent and should infer a surrogate extrinsic reward. Since $r^{e2}$ is already sparse and easy to infer from the task description, we set the surrogate extrinsic reward to be the same as the true sparse reward, that is, $\hat{r}^{e2} = r^{e2}$. \paragraph{Evaluation Metrics} We train the victim agent with an auxiliary objective and report the average episode true rewards under various attacks. Since the extrinsic reward signal $r^{e2}$ is sparse, vanilla PPO cannot directly solve these tasks. To successfully solve the task, we utilize $J_{e1}^v(d^{\pi^v}) = \sum_s d^{\pi^v}(s) r^{e1}(s)$ as an auxiliary objective for the victim agent to encourage it to move forward in the early stage of training, and we gradually decay the strength of the regularizer to reduce the bias introduced by this auxiliary objective, that is, \begin{equation} \label{eqn: victim auxiliary objective} \max_{d^{\pi^v}} J_{e2}^v(d^{\pi^v}) + \omega^{e1}J_{e1}^v(d^{\pi^v}), \end{equation} where $J_{e2}^v(d^{\pi^v}) = \sum_s d^{\pi^v}(s) r^{e2}(s)$ is the victim's original objective.
In experiments, we found that linearly or exponentially decaying $\omega^{e1}$ results in low success rates; we thus leverage the Lagrangian method, similar to \Cref{eqn: update lambda}, to adaptively update $\omega^{e1}$ as follows: \begin{equation} \omega^{e1}_k = \omega^{e1}_{k-1} - \eta (J_{e2}^v(d^{\pi_k^v}) - \beta J_{e2}^v(d^{\pi_{k-1}^v})). \end{equation} \subsubsection{Sparse-Reward Single-Agent Navigation Task} \paragraph{Task Description} To further validate the attack performance of our IMAP, we also select a sparse-reward single-agent navigation task named AntUMaze. The environment of AntUMaze is shown in \Cref{fig: env-b}. The Ant in the AntUMaze task is required to navigate the U-shaped maze to reach the target region instead of just running forward as fast as possible, and the task is thus more complex than the Ant and SparseAnt tasks. The sparse reward function for this task is defined as \begin{equation} r^{e3} = \mathbb{1}[\|p-p_g\|\le\epsilon] - b_3, \end{equation} where $p$ and $p_g$ are the position vector of the Ant and the target position vector, respectively, and $b_3$ is a constant cost to encourage the victim policy to search for the shortest trajectory. Similar to the sparse-reward locomotion tasks, the surrogate extrinsic reward for the adversary is set to $\hat{r}^{e3}=r^{e3}$ for concision. \paragraph{Evaluation Metrics} To avoid complex reward shaping, we train the victim agent for AntUMaze with data-based intrinsic motivation and report the average episode true rewards under various attacks. The objective of the victim agent is \begin{equation} \label{eqn: victim intrinsic objective} \max_{d^{\pi^v}} J_{e3}^v(d^{\pi^v}) + \tau_k J_{i}^v(d^{\pi^v}), \end{equation} where $J_{i}^v(d^{\pi^v})$ is the intrinsic motivation encouraging the victim agent to explore the maze, and $\tau_k$ is updated according to \Cref{eqn: update lambda} and \Cref{eqn: tau}.
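This adaptive decay of the auxiliary weight can be sketched in a few lines (a minimal illustration; the function name and the clipping of the coefficient at zero are our assumptions):

```python
def update_coeff(omega, J_curr, J_prev, beta=1.0, eta=0.1):
    """Lagrangian-style update of the auxiliary weight omega: shrink omega when
    the current extrinsic objective already reaches beta times the previous one,
    grow it otherwise."""
    omega = omega - eta * (J_curr - beta * J_prev)
    return max(omega, 0.0)  # keep the auxiliary weight non-negative

# The victim improved (J_curr > beta * J_prev), so the auxiliary weight decays.
assert abs(update_coeff(1.0, J_curr=2.0, J_prev=1.0) - 0.9) < 1e-9
```

In this way the auxiliary signal fades as soon as the victim starts to solve the sparse task on its own, rather than on a fixed schedule.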
\subsubsection{Sparse-Reward Multi-Agent Competition Tasks} \paragraph{Task Description} For multi-agent competition tasks, we select two two-player zero-sum competition games, YouShallNotPass and KickAndDefend, which have been widely adopted in previous adversarial policy research. The environments are visualized in \Cref{fig: env-c} and \Cref{fig: env-d}. In YouShallNotPass, two humanoid robots are initialized facing each other. The victim policy controls the runner (blue), and the adversarial policy controls the blocker (red). The runner wins if it reaches the finish line within 500 timesteps; the blocker wins if the runner does not win. KickAndDefend is a soccer penalty shootout between two humanoid robots. The victim policy controls the kicker (blue), and the adversarial policy controls the goalie (red). The kicker wins if it shoots the ball into the red gate within 500 timesteps; otherwise, the goalie wins. The sparse reward in these two tasks is defined as \begin{equation} r^{e4} = \mathbb{1}[\text{the victim wins}]. \end{equation} Since these two games are zero-sum, the surrogate extrinsic reward for the adversary is the same as $r^{e4}$, that is, $\hat{r}^{e4} = r^{e4}$. \paragraph{Evaluation Metrics} Instead of reporting the true average episode rewards of victim policies under various attacks, we report the success rates of different adversarial policies when playing against victim policies. The adversary's success rate $ASR$ is defined as \begin{equation} ASR = \frac{\text{the number of episodes where the adversary wins}}{\text{the number of total episodes}}, \end{equation} where the episodes are collected with the adversary's latest policy. The victim policies were trained via self-play against randomly sampled old versions of their opponents. Following previous work, we use the pre-trained victim policy weights released by~\cite{bansal2017emergent}. \subsection{Baselines and Implementation} We now introduce the baselines used in our experiments.
\subsubsection{Single-Agent Tasks} We select SA-RL~\cite{zhang2021robust}, the state-of-the-art black-box adversarial policy for single-agent evasion attacks on the state space, as the baseline. Since our threat model assumes that the adversary cannot obtain the true reward used in training the victim, we use SA-RL-s to denote SA-RL with the surrogate victim reward function. Since the true and surrogate victim rewards are similar or identical in sparse-reward tasks, we do not distinguish between SA-RL and SA-RL-s there. All attacking methods use the same attacking budget $\epsilon$ in each task. \subsubsection{Multi-Agent Tasks} We choose AP-MARL~\cite{gleave2019adversarial}, the state-of-the-art black-box adversarial policy for multi-agent tasks, as the baseline. Although several adversarial policy learning methods for multi-agent tasks have been developed after AP-MARL, they either need to train a surrogate victim policy~\cite{wu2021adversarial} or value network~\cite{gong2022curiosity}, or target non-zero-sum games~\cite{guo2021adversarial} or cooperative games~\cite{li2023attacking}. In contrast, we do not train any surrogate victim model, and we focus on zero-sum competitive games. \subsection{IMAP in Single-Agent Tasks} \begin{table*}[t] \centering \caption{Average episode rewards $\pm$ standard deviation of six types of models, including PPO (vanilla), ATLA, SA, ATLA-SA, RADIAL, and WocaR, over 300 episodes under three baseline attacks, including Random, SA-RL, and SA-RL-s (SA-RL with a surrogate victim reward), and four IMAP variants, including state-coverage-driven IMAP-SC, policy-coverage-driven IMAP-PC, risk-driven IMAP-R, and divergence-driven IMAP-D, on four dense-reward MuJoCo locomotion tasks, including Hopper, Walker2d, HalfCheetah, and Ant. Natural rewards of all models are also reported. To compare the overall performance, we also report the average reward of the six types of models under the same attack in each environment.
We bold the best attack result (the lowest value) under each row. IMAP-PC outperforms other black-box attacks on most models and shows the best average performance on each task.} \label{tab: results in dense-reward tasks} \begin{tabular}{m{0.8cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}} \toprule \textbf{Env.} & \textbf{Model} & \textbf{Natural}\hphantom{00} \textbf{Reward} & \textbf{Random} & \textbf{SA-RL} & \textbf{SA-RL-s} & \textbf{IMAP-SC (ours)} & \textbf{IMAP-PC (ours)} & \textbf{IMAP-R (ours)} & \textbf{IMAP-D (ours)}\\ \midrule \multirow{7}{0.4cm}{\textbf{Hop.} 11D 0.075} & PPO (va.) & 3167\p542 & 2101\p793 & \hphantom{0}636\p9 & \hphantom{00}\TB{80}$\,\pm\,$\TB{2} & \hphantom{00}\TB{80}$\,\pm\,$\TB{2} & \hphantom{00}\TB{80}$\,\pm\,$\TB{2} & \hphantom{00}\TB{80}$\,\pm\,$\TB{2} & \hphantom{00}\TB{80}$\,\pm\,$\TB{2}\\ & ATLA & 2559\p958 & 2153\p882 & \hphantom{0}976\p40 & \hphantom{0}875\p145 & \hphantom{0}689\p132 & \hphantom{0}\TB{639}$\,\pm\,$\TB{48} & \hphantom{0}672\p120 & \hphantom{0}808\p170\\ & SA & 3705\p2 & 2710\p801 & \TB{1076}$\,\pm\,$\TB{791} & 1826\p897 & 1282\p68 & 1346\p85 & 1714\p1176 & 2278\p1144\\ & ATLA-SA & 3291\p600 & 3165\p576 & 1772\p802 & 1585\p469 & 1685\p512 & \TB{1536}$\,\pm\,$\TB{392} & 1807\p642 & 1823\p527\\ & RADIAL & 3740\p44 & 3729\p100 & 1722\p186 & \TB{1622}$\,\pm\,$\TB{408} & 2194\p672 & 1647\p398 & 1871\p498 & 1895\p551\\ & WocaR & 3616\p99 & 3633\p30 & 2390\p145 & 1850\p530 & 2140\p612 & \TB{1646}$\,\pm\,$\TB{337} & 2917\p495 & 1832\p493\\ & Avg. Rew. & 3346 & 2915 & 1429 & 1306 & 1345 & \TB{1149} & 1510 & 1452\\ \midrule \multirow{7}{0.4cm}{\textbf{Wal.} 17D 0.05} & PPO (va.) 
& 4472\p635 & 3007\p1200 & 1086\p516 & 1253\p468 & 1002\p391 & \hphantom{0}\TB{895}$\,\pm\,$\TB{450} & 2966\p956 & \hphantom{0}947\p160\\ & ATLA & 3138\p1061 & 3384\p1056 & 2213\p915 & 1163\p464 & 1035\p614 & \hphantom{0}\TB{991}$\,\pm\,$\TB{500} & 1599\p742 & 1385\p590\\ & SA & 4487\p61 & 4465\p39 & \TB{2908}$\,\pm\,$\TB{336} & 3927\p162 & 4196\p231 & 3072\p1304 & 4083\p155 & 3820\p39\\ & ATLA-SA & 3842\p475 & 3927\p368 & 3663\p707 & 3508\p66 & 3144\p995 & \TB{2868}$\,\pm\,$\TB{1145} & 3620\p143 & 3469+650\\ & RADIAL & 5251\p12 & 5184\p42 & \TB{3320}$\,\pm\,$\TB{245} & 4376\p1229 & 4562\p941 & 4377\p1147 & 4584\p1021 & 4474\p1187\\ & WocaR & 4156\p495 & 4244\p157 & 3770\p196 & 2871\p1153 & 3178\p1168 & 2874\p1085 & \TB{2740}$\,\pm\,$\TB{1162} & 2859\p1078\\ & Avg. Rew. & 4224 & 4035 & 2827 & 2850 & 2853 & \TB{2513} & 3265 & 2826\\ \midrule \multirow{7}{0.4cm}{\textbf{Half.} 17D 0.15} & PPO (va.) & 7117\p98 & 5486\p1378 & \hphantom{000}\TB{0}$\,\pm\,$\TB{0} & \hphantom{000}\TB{0}$\,\pm\,$\TB{0} & \hphantom{000}\TB{0}$\,\pm\,$\TB{0} & \hphantom{000}\TB{0}$\,\pm\,$\TB{0} & \hphantom{00}56\p147 & \hphantom{000}\TB{0}$\,\pm\,$\TB{0}\\ & ATLA & 5417\p49 & 5388\p34 & 2709\p80 & \TB{1696}$\,\pm\,$\TB{1352} & 2451\p1352 & 1711\p1357 & 1996\p965 & 1765\p1357\\ & SA & 3632\p20 & 3619\p18 & 3028\p23 & 2997\p22 & 2996\p24 & \TB{2984}$\,\pm\,$\TB{20} & 3390\p62 & 3000\p27\\ & ATLA-SA & 6157\p852 & 6164\p603 & 5058\p418 & \TB{4170}$\,\pm\,$\TB{664} & 4311\p412 & 4202\p726 & 4395\p728 & 4231\p681\\ & RADIAL & 4724\p14 & 4731\p42 & 3253\p131 & 1654\p1312 & 1669\p1326 & \TB{1641}$\,\pm\,$\TB{1298} & 1791\p1278 & 2563\p1496\\ & WocaR & 6032\p68 & 5969\p149 & 5365\p54 & 4257\p1254 & \TB{3734}$\,\pm\,$\TB{1512} & 4026\p1374 & 4782\p105 & 4759\p487 \\ & Avg. Rew. & 5513 & 5226 & 3236 & 2462 & 2433 & \TB{2427} & 2730\\ \midrule \multirow{5}{0.5cm}{\textbf{Ant} 111D 0.15} & PPO (va.) 
& 5687\p758 & 5261\p1005 & \hphantom{000}\TB{0}$\,\pm\,$\TB{0} & \hphantom{0}351\p110 & \hphantom{0}310\p184 & \hphantom{0}212\p244 & \hphantom{0}188\p135 & \hphantom{0}284\p195\\ & ATLA & 4894\p123 & 4541\p691 & \hphantom{00}33\p327 & \hphantom{000}\TB{0}$\,\pm\,$\TB{0} & \hphantom{0}428\p63 & \hphantom{00}70\p128 & \hphantom{0}696\p24 & \hphantom{000}\TB{0}$\,\pm\,$\TB{0}\\ & SA & 4292\p384 & 4986\p452 & \TB{2511}$\,\pm\,$\TB{1117} & 2698\p822 & 2720\p879 & 2643\p851 & 2722\p994 & 2746\p831\\ & ATLA-SA & 5359\p153 & 5366\p104 & 3765\p101 & 3125\p207 & 3228\p190 & 3156\p302 & \TB{2611}$\,\pm\,$\TB{213} & 3125\p182\\ & Avg. Rew. & 5058 & 5039 & 1577 & 1544 & 1672 & \TB{1520} & 1554 & 1539\\ \bottomrule \end{tabular} \end{table*} \begin{figure} \caption{Curve of test-time attacking results of SA-RL and four IMAP variants on six sparse-reward locomotion tasks.} \label{fig: results in sparse-reward tasks} \end{figure} \begin{table*}[t] \centering \caption{Average episode rewards $\pm$ standard deviation of six locomotion victim agents in SparseHopper, SparseWalker, SparseHalfCheetah, SparseWalker, SparseAnt, SparseHumanoidStandup, and SparseHumanoid, and two navigation agents in AntUMaze and Ant4Rooms under nine attacks including SA, four IMAP variants and four IMAP variants with the BR method. We bold the best attack result (the lowest value) under each row and underline the results that BR improves IMAP variants. The natural rewards of all victim agents are near one, so we do not include them in the table. IMAP performs better than SA-RL in all tasks, and BR can further improve the performance of IMAP in half of the tasks.} \label{tab: results of IMAP and IMAP+BR} \begin{tabular}{m{0.8cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}m{1.45cm}} \toprule \textbf{Env.} & SA-RL & IMAP-SC & IMAP-PC & IMAP-R & IMAP-D & IMAP-SC\hphantom{0} + BR & IMAP-PC\hphantom{0} + BR & IMAP-R\hphantom{00} + BR & IMAP-D\hphantom{00} + BR\\ \midrule S.Hop. 
& \hphantom{-}0.01\p0.32 & \hphantom{-}0.00\p0.30 & \hphantom{-}0.16\p0.45 & -0.03\p0.00 & -0.02\p0.28 & \underline{-0.01\p0.28} & \underline{-\TB{0.05}$\,\pm\,$\TB{0.22}} & -0.02\p0.27 & \hphantom{-}0.01\p0.32\\ S.Wal. & \hphantom{-}0.85\p0.23 & \hphantom{-}0.66\p0.44 & \hphantom{-}0.63\p0.45 & -\TB{0.04}$\,\pm\,$\TB{0.01} & \hphantom{-}0.91\p0.06 & \hphantom{-}0.91\p0.06 & \hphantom{-}0.84\p0.26 & \hphantom{-}0.80\p0.32 & \hphantom{-}\underline{0.90\p0.12}\\ S.Half. & \hphantom{-}0.30\p0.51 & \hphantom{-}0.17\p0.45 & \hphantom{-}\TB{0.04}$\,\pm\,$\TB{0.35} & \hphantom{-}0.98\p0.00 & \hphantom{-}0.33\p0.51 & \hphantom{-}\underline{0.06\p0.37} & \hphantom{-}0.07\p0.38 & \hphantom{-}0.98\p0.00 & \hphantom{-}\underline{0.12\p0.43}\\ S.Ant & \hphantom{-}0.12\p0.42 & \hphantom{-}0.23\p0.48 & \hphantom{-}0.27\p0.49 & \hphantom{-}0.43\p0.49 & \hphantom{-}0.12\p0.42 & \hphantom{-}\underline{0.11\p0.42} & \hphantom{-}\underline{0.13\p0.43} & \hphantom{-}0.96\p0.10 & \hphantom{-}\underline{\TB{0.10}$\,\pm\,$\TB{0.40}}\\ S.Hu.St. & \hphantom{-}0.88\p0.32 & \hphantom{-}0.99\p0.05 & \hphantom{-}\TB{0.23}$\,\pm\,$\TB{0.50} & \hphantom{-}0.99\p0.00 & \hphantom{-}0.80\p0.42 & \hphantom{-}0.99\p0.05 & \hphantom{-}0.36\p0.54 & \hphantom{-}0.99\p0.00 & \hphantom{-}0.87\p0.35\\ S.Hu. & \hphantom{-}0.49\p0.50 & \hphantom{-}0.46\p0.50 & \hphantom{-}0.40\p0.49 & \hphantom{-}\TB{0.24}$\,\pm\,$\TB{0.44} & \hphantom{-}0.45\p0.5 & \hphantom{-}0.47\p0.50 & \hphantom{-}\underline{0.35\p0.48} & \hphantom{-}0.43\p0.5 & \hphantom{-}0.53\p0.49\\ \midrule A.UM. & \hphantom{-}0.32\p0.52 & \hphantom{-}0.30\p0.51 & \hphantom{-}0.37\p0.52 & \hphantom{-}0.97\p0.10 & \hphantom{-}0.28\p0.51 & \hphantom{-}0.36\p0.52 & \hphantom{-}\underline{\TB{0.19}$\,\pm\,$\TB{0.47}} & \hphantom{-}0.97\p0.07 & \hphantom{-}0.34\p0.52\\ A.4R. 
& \hphantom{-}0.34\p0.51 & \hphantom{-}0.32\p0.51 & \hphantom{-}0.40\p0.52 & \hphantom{-}0.74\p0.43 & \hphantom{-}0.24\p0.48 & \hphantom{-}0.43\p0.52 & \hphantom{-}\underline{0.33\p0.51} & \hphantom{-}\underline{\TB{0.22}$\,\pm\,$\TB{0.48}} & \hphantom{-}0.24\p0.49\\ \bottomrule \end{tabular} \end{table*} \subsubsection{Dense-Reward Locomotion Tasks} \Cref{tab: results in dense-reward tasks} presents the results of three baseline attacks (Random, SA-RL, SA-RL-s) and four IMAP variants (IMAP-SC, IMAP-PC, IMAP-R, IMAP-D) on attacking vanilla PPO and the robustly trained ATLA, SA, ATLA-SA, RADIAL, and WocaR. IMAP-SC, IMAP-PC, IMAP-R, and IMAP-D use the state-coverage-driven, policy-coverage-driven, risk-driven, and divergence-driven intrinsic objectives respectively. RADIAL and WocaR do not release their models for Ant, so we omit them. From \Cref{tab: results in dense-reward tasks}, we can see that IMAP performs best against most models compared with the other adversarial policies and shows the best average performance. IMAP variants reduce the average episode rewards of 13 out of 22 models to the lowest values, while SA-RL does so for only 6. Among all IMAP variants, IMAP-PC shows the best average performance (bold in the Avg. Rew. rows), suggesting the advantage of coverage-driven intrinsic objectives over risk-driven and divergence-driven ones. The average performance of all models in Hopper, Walker2d, HalfCheetah, and Ant under IMAP-PC is reduced by 65.66\%, 40.52\%, 55.97\%, and 69.94\% respectively. Notably, for the state-of-the-art WocaR robust models, our IMAP reduces the average episode rewards by a significant margin, that is, 54.58\%, 34.07\%, and 38.10\% in Hopper, Walker2d, and HalfCheetah respectively. Surprisingly, in Walker2d and HalfCheetah, although the WocaR model is the most robust model under SA-RL, its performance can still be decreased by our IMAP variants and falls below that of RADIAL and ATLA-SA. This indicates that a weak adversarial policy may give a false sense of robustness.
Moreover, IMAP-SC, IMAP-PC, and IMAP-R achieve the best performance when attacking WocaR in HalfCheetah, Hopper, and Walker2d respectively, suggesting that we should try multiple types of intrinsic objectives when attacking robust RL models. Comparing SA-RL and SA-RL-s in \Cref{tab: results in dense-reward tasks}, we can see the advantage of utilizing the simple surrogate victim reward $\hat{r}^{e1}$. SA-RL-s performs better than SA-RL when attacking 16 out of 22 models. In particular, SA-RL-s dominates SA-RL in HalfCheetah. Intuitively, the true victim reward includes various terms such as control-input and contact-force costs, which might be unstable and mislead the adversary into a suboptimal attack strategy. For the two coverage-driven intrinsic objectives, IMAP-PC performs better than IMAP-SC, suggesting that the adversary should be aware of past state distributions instead of only the current state distribution in these dense-reward locomotion tasks. Intuitively, although IMAP-SC encourages the adversary to lure the victim into covering states uniformly, the induced states might still vary only near the optimal trajectory; IMAP-PC instead stimulates the adversary to lure the victim into deviating from all past optimal trajectories by maximizing the entropy of the policy coverage. In experiments, we found that the bias-reduction method BR does not substantially boost the performance of IMAP variants in dense-reward locomotion tasks, so we omit the results of IMAP variants with BR from \Cref{tab: results in dense-reward tasks}. This might be caused by the difficulty of approximating $\max_{d} J_e^a(d)$ in dense-reward tasks. However, we observe significant improvements brought about by BR in sparse-reward single-agent and multi-agent tasks. We discuss the effect of BR in \Cref{sec: sparse-reward locomotion} and \Cref{sec: sparse-reward competition}.
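To make the coverage intuition concrete, a common nonparametric way to turn state coverage into an intrinsic bonus is a particle-based k-nearest-neighbor entropy estimate over a batch of visited states: a state far from its neighbors lies in a sparsely covered region and receives a larger bonus. The sketch below is our illustration of the state-coverage idea only; the estimator actually used by the IMAP variants may differ.

```python
import numpy as np

def knn_state_entropy_bonus(states, k=3):
    """Particle-based state-coverage bonus: for each state, the log of the
    distance to its k-th nearest neighbor within the batch. Larger distances
    mean the state lies in a sparsely covered region, so the bonus is higher.
    This k-NN proxy is our illustration; the paper's estimator may differ.
    """
    states = np.asarray(states, dtype=float)        # (N, d) batch of states
    diffs = states[:, None, :] - states[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)          # (N, N) pairwise distances
    dists.sort(axis=1)                              # column 0 is the self-distance (0)
    kth = dists[:, k]                               # distance to k-th nearest neighbor
    return np.log(kth + 1.0)                        # +1 keeps the bonus finite and >= 0
```

Under this reading, a state-coverage objective (IMAP-SC) would estimate the bonus from the current batch only, while a policy-coverage objective (IMAP-PC) would estimate it against states accumulated across all past policies, which matches the distinction drawn above.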
\subsubsection{Sparse-Reward Locomotion and Navigation Tasks} \label{sec: sparse-reward locomotion} \Cref{fig: results in sparse-reward tasks} shows the results of the baseline SA-RL and four IMAP variants in six sparse-reward locomotion tasks. From \Cref{fig: results in sparse-reward tasks}, we can see that intrinsic objectives help improve the performance of the adversarial policy. IMAP-R performs best in SparseHopper and SparseWalker2d and learns 5--10$\times$ faster than the others. This demonstrates that agents that are `robust' to one type of adversarial policy may be vulnerable to another type of intrinsically motivated adversarial policy. We can also see that IMAP-PC performs best in SparseHumanoidStandup, while IMAP-R performs even worse than SA-RL there, indicating that the Humanoid is robust to the risk-driven intrinsic objective but still fragile to the PC-driven intrinsic objective. In other tasks such as SparseHalfCheetah and SparseHumanoid, IMAP variants also improve on the performance of SA-RL by a large margin. Moreover, IMAP performs better than SA-RL in the navigation tasks AntUMaze and Ant4Rooms, as shown in \Cref{tab: results of IMAP and IMAP+BR}. Specifically, when BR is not applied, IMAP-D achieves the best results, demonstrating the effectiveness of the divergence-driven intrinsic objective. \paragraph{Ablation on Bias-Reduction} To investigate the effectiveness of BR, we report the results of IMAP variants with BR in \Cref{tab: results of IMAP and IMAP+BR}. BR can further improve the performance of IMAP in half of the 8 sparse-reward tasks by reducing the bias introduced by the intrinsic objective. For instance, a clear distraction phenomenon exists for IMAP-PC in SparseHopper, as shown in \Cref{fig: results in sparse-reward tasks}. By applying BR to IMAP-PC, the average episode reward can be reduced from 0.16 to -0.05. We underline the results where BR has a positive effect.
Note that BR cannot always improve the performance of IMAP variants, especially when the task is challenging, as in SparseWalker2d or SparseHumanoidStandup. This is reasonable since, in these challenging tasks, the extrinsic reward might provide a wrong optimization direction. For instance, in SparseWalker2d, SA-RL tries to make the episode as long as possible to increase the cumulative cost $\sum-b_2$. However, this strategy is ineffective in reducing the cumulative reward signal $\sum\mathbb{1}[x\ge x_g]$. Thus, when the strength of intrinsic motivation decreases, the adversary might be trapped again in this kind of local optimum and cannot escape anymore. \subsection{IMAP in Multi-Agent Tasks} \label{sec: sparse-reward competition} \begin{figure} \caption{Learning curves of AP-MARL and IMAP in two two-player zero-sum games. IMAP improves the adversary's success rate by a large margin.} \label{fig: results in multi-agent tasks} \end{figure} \Cref{fig: results in multi-agent tasks} shows the results of IMAP in multi-agent tasks. IMAP-PC+BR performs better than the other IMAP variants. IMAP-PC improves $ASR$ from AP-MARL's 59.64\% to 83.91\% by learning a more natural attacking behavior in YouShallNotPass, as shown in \Cref{fig: you-IMAP}, without knowledge of the victim. This demonstrates the effectiveness of the maximum PC entropy designed in \Cref{eqn: PC-MA}. In KickAndDefend, the goalie is restricted to a square region before the gate according to the game rules, and thus the adversary cannot control the goalie to `aggressively' attack the victim kicker. Even with this restriction, IMAP still improves $ASR$ from 47.02\% to 56.96\% in KickAndDefend, again showing the benefit of intrinsic motivation in searching for the optimal adversarial policy. \section{Conclusion} In this paper, we proposed the Intrinsically Motivated Adversarial Policy (IMAP) to launch test-time black-box evasion attacks against RL agents in single- and multi-agent environments.
We developed four IMAP variants, namely IMAP-SC, IMAP-PC, IMAP-R, and IMAP-D, based on state-coverage-driven, policy-coverage-driven, risk-driven, and divergence-driven intrinsic objectives respectively. We evaluated the effectiveness of the IMAP variants in various MuJoCo environments. The results showed that IMAP learns stronger adversarial policies. To reduce the bias introduced by the intrinsic objective, we also developed a bias-reduction method, BR, and empirically showed that BR can effectively boost the performance of IMAP in sparse-reward tasks. We found that IMAP can defeat state-of-the-art robust RL agents, posing a new challenge of defending RL agents against IMAP. \end{document}
Journal of the American Mathematical Society Published by the American Mathematical Society, the Journal of the American Mathematical Society (JAMS) is devoted to research articles of the highest quality in all areas of pure and applied mathematics. On a correspondence between cuspidal representations of $\operatorname {GL}_{2n}$ and $\tilde {\operatorname {Sp}}_{2n}$ by David Ginzburg, Stephen Rallis and David Soudry J. Amer. Math. Soc. 12 (1999), 849-907 Let $\eta$ be an irreducible, automorphic, self-dual, cuspidal representation of $\operatorname {GL}_{2n}(\mathbb A)$, where $\mathbb A$ is the adele ring of a number field $K$. Assume that $L^S(\eta ,\Lambda ^2,s)$ has a pole at $s=1$ and that $L(\eta , \frac 12)\neq 0$. Given a nontrivial character $\psi$ of $K\backslash \mathbb A$, we construct a nontrivial space of genuine and globally $\psi ^{-1}$-generic cusp forms $V_{\sigma _{\psi }(\eta )}$ on $\widetilde {\operatorname {Sp}}_{2n}(\mathbb A)$—the metaplectic cover of ${\operatorname {Sp}}_{2n}(\mathbb A)$. $V_{\sigma _{\psi }(\eta )}$ is invariant under right translations, and it contains all irreducible, automorphic, cuspidal (genuine) and $\psi ^{-1}$-generic representations of $\widetilde {\operatorname {Sp}}_{2n}(\mathbb A)$, which lift ("functorially, with respect to $\psi$") to $\eta$. We also present a local counterpart. Let $\tau$ be an irreducible, self-dual, supercuspidal representation of $\operatorname {GL}_{2n}(F)$, where $F$ is a $p$-adic field. Assume that $L(\tau ,\Lambda ^2,s)$ has a pole at $s=0$.
Given a nontrivial character $\psi$ of $F$, we construct an irreducible, supercuspidal (genuine) $\psi ^{-1}$-generic representation $\sigma _\psi (\tau )$ of $\widetilde {\operatorname {Sp}}_{2n}(F)$, such that $\gamma (\sigma _\psi (\tau )\otimes \tau ,s,\psi )$ has a pole at $s=1$, and we prove that $\sigma _\psi (\tau )$ is the unique representation of $\widetilde {\operatorname {Sp}}_{2n}(F)$ satisfying these properties.
David Ginzburg Affiliation: School of Mathematical Sciences, Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 69978, Israel Stephen Rallis Affiliation: Department of Mathematics, The Ohio State University, Columbus, Ohio 43210 David Soudry MR Author ID: 205346 Received by editor(s): July 22, 1998 Published electronically: April 26, 1999 Additional Notes: The first and third authors' research was supported by The Israel Science Foundation founded by the Israel Academy of Sciences and Humanities. Journal: J. Amer. Math. Soc. 12 (1999), 849-907 MSC (1991): Primary 11F27, 11F70, 11F85
Invertible sheaf In mathematics, an invertible sheaf is a sheaf on a ringed space which has an inverse with respect to tensor product of sheaves of modules. It is the equivalent in algebraic geometry of the topological notion of a line bundle. Due to their interactions with Cartier divisors, they play a central role in the study of algebraic varieties. Definition Let (X, OX) be a ringed space. Isomorphism classes of sheaves of OX-modules form a monoid under the operation of tensor product of OX-modules. The identity element for this operation is OX itself. Invertible sheaves are the invertible elements of this monoid. Specifically, if L is a sheaf of OX-modules, then L is called invertible if it satisfies any of the following equivalent conditions:[1][2] • There exists a sheaf M such that $L\otimes _{{\mathcal {O}}_{X}}M\cong {\mathcal {O}}_{X}$. • The natural homomorphism $L\otimes _{{\mathcal {O}}_{X}}L^{\vee }\to {\mathcal {O}}_{X}$ is an isomorphism, where $L^{\vee }$ denotes the dual sheaf ${\underline {\operatorname {Hom} }}(L,{\mathcal {O}}_{X})$. • The functor from OX-modules to OX-modules defined by $F\mapsto F\otimes _{{\mathcal {O}}_{X}}L$ is an equivalence of categories. Every locally free sheaf of rank one is invertible. If X is a locally ringed space, then L is invertible if and only if it is locally free of rank one. Because of this fact, invertible sheaves are closely related to line bundles, to the point where the two are sometimes conflated. Examples Let X be an affine scheme Spec R. Then an invertible sheaf on X is the sheaf associated to a rank one projective module over R. For example, this includes fractional ideals of algebraic number fields, since these are rank one projective modules over the rings of integers of the number field. The Picard group Main article: Picard group Quite generally, the isomorphism classes of invertible sheaves on X themselves form an abelian group under tensor product. This group generalises the ideal class group. 
In general it is written $\mathrm{Pic}(X)$, with Pic the Picard functor. Since it also includes the theory of the Jacobian variety of an algebraic curve, the study of this functor is a major issue in algebraic geometry. The direct construction of invertible sheaves by means of data on X leads to the concept of Cartier divisor. See also • Vector bundles in algebraic geometry • Line bundle • First Chern class • Picard group • Birkhoff-Grothendieck theorem References 1. EGA 0I, 5.4. 2. Stacks Project, tag 01CR. • Grothendieck, Alexandre; Dieudonné, Jean (1960). "Éléments de géométrie algébrique: I. Le langage des schémas". Publications Mathématiques de l'IHÉS. 4. doi:10.1007/bf02684778. MR 0217083.
Dipendra Prasad Professor, Department of Mathematics, IIT Bombay. PhD from Harvard University, 1989. Mathematical Interest Algebraic number theory, Automorphic forms, Representation theory. Jan 01, 2016 to June 30, 2016: Chaire Morlet at CIRM and Aix-Marseille Université. Annotated List of publications (March, 2015). [PDF file] Editorial work Managing Editor: International Journal of Number Theory 2005-2010. Editor: Journal of Number Theory. Editor: Math. Zeitschrift. Editor: Journal of Ramanujan Mathematical Society. Editor: Proceedings of Indian Academy. Some recent invitations to lecture 1. Conference on `Number Theory and Representation theory' at Harvard University on Dick Gross's 60th birthday, in June 2010. 2. Conference in Berlin on Wilhelm Zink's 65th birthday in July 2010. 3. Conference at RIMS, Kyoto, Sept 2010. 4. NISER Foundation day conference in Bhubaneswar in Dec. 2010, `Doing mathematics by asking questions: examples from Number Theory'. 5. Invited to speak on the occasion of the SASTRA award to Wei Zhang in Kumbakonam in Dec. 2010. 6. Platinum Jubilee Award Lecture in the Indian Science Congress in Chennai in Jan 2011. 7. Lectured at IIT Kanpur on the `Fundamental Lemma', the work of the Fields Medallist B-C. Ngo, in April 2011. 8. Special activity on Automorphic representations at Morning Side Center in Beijing in May-June 2011. 9. An FRG meeting at the University of Colorado in June 2011. 10. Conference on $L$-packets in Banff on the `Relative Local Langlands conjecture', Canada, in June 2011. 11.
Conference in Max-Planck Institute, Germany in August 2011, on `Representations of Lie groups'. 12. Lectures in the workshop on Deligne-Lusztig theory at TIFR, Mumbai in December 2011. 13. Conference at the National University of Singapore, on branching laws, in particular on the `Gross-Prasad' conjectures, in March 2012. 14. Symposium at Panjab University, Chandigarh; lectured on `Modelling representation theory' on Feb 7, 2012. 15. Madan Mohan Malaviya's one hundred fiftieth anniversary lecture on `Groups as Unifying themes' at BHU, Varanasi on Feb. 11, 2012. 16. Workshop `Representations des groupes reductifs p-adiques' in Porquerolles island, near Toulon, FRANCE, from 17 to 23 June 2012. Lectured on `Ext-analogue of branching laws for Classical groups'. 17. ARCC workshop "Hypergeometric motives" at ICTP, Trieste from June 21st to June 30th. Lectured on `Automorphic representations, Motives, and L-functions'. 18. Gave INSPIRE lectures at Shivaji University, Kolhapur in May 2012, on `A perspective on Mathematics through examples'. 19. Gave lectures in the AIS program at IISER, Mohali on `Representation theory of finite groups of Lie type: Deligne-Lusztig theory'. 20. Plenary speaker at the Ramanujan Mathematical Society meeting in Delhi in Oct. 2012, on `Fourier coefficients of Automorphic forms'. 21. Plenary speaker at the Legacy of Ramanujan conference in Delhi in Dec. 2012 on `Fourier coefficients of Automorphic forms'. 22. Gave an INSPIRE lecture at Kumaun University, Nainital in Dec. 2012 on `A perspective on Mathematics through examples'. 23. Gave an INSPIRE lecture at Guru Nanak Khalsa College, Matunga in Oct 2012, on `An overview of mathematics through examples'. 24. Gave a series of lectures at Pune University on `Ramanujan Graphs and Number theory' in March, 2013. 25. Oberwolfach Workshop: Spherical Varieties and Automorphic Representations, May 12th to 18th, 2013. 26. Tsinghua University, Beijing in June 2013. 27.
Lectured in the DST-JSPS conference in Tokyo in November 2013 on `Branching laws and the local Langlands correspondence'.
28. Gave a colloquium lecture at Tokyo University in November 2013 on `Ext Analogues of Branching laws'.
29. Gave an invited talk in the conference at IMSc on the occasion of Ram Murty's 60th birthday in December 2013 on `Counting integral points in a polytope: a problem in invariant theory'.
30. Lectured in the Inaugural Conference in Sanya, China in December 2013 `On distinguished representations'.
31. Lectured in an Advanced Instructional School at IISER, Pune in December 2013 on `Maximal subgroups of classical groups'.
32. Lectured in the Conference in Oberwolfach in January 2014 on `Ext-Analogues of branching laws'.
33. Lectured in a workshop at HRI, Allahabad in March 2014 on `Schur Multiplier for finite, real and p-adic groups'.
34. Invitation to the conference in Banff, June 01-07, 2014, on `Future of trace formula'.
35. Summer school in Jussieu in June 2014 on `Gan-Gross-Prasad conjectures'.
36. Conference at Univ. Paris 13 in June 2014.
37. Research Professor at MSRI, August to Dec. 2014.

Papers in Journals
Trilinear forms for representations of GL(2) and local epsilon factors, Compositio Math, vol. 75, 1-46 (1990). [PDF file]
(With B.H. Gross) Test Vectors for linear forms, Maths Annalen, vol. 291, 343-355 (1991). [PDF file]
Invariant linear forms for representations of GL(2) over a local field, American J. of Maths, vol. 114, 1317-1363 (1992). [PDF file]
(With B.H. Gross) On the decomposition of a representation of SO(n) when restricted to SO(n-1), Canadian J. of Maths, vol. 44, 974-1002 (1992). [PDF file]
On the decomposition of a representation of GL(3) restricted to GL(2), Duke J. of Maths, vol. 69, 167-177 (1993). [PDF file]
Bezout's theorem for simple abelian varieties, Expositiones Math, vol. 11, 465-467 (1993). [PDF file]
On the local Howe duality correspondence, IMRN, No. 11, 279-287 (1993). [PDF file]
(With B.H.
Gross) On irreducible representations of SO(2n+1)xSO(2m), Canadian J. of Maths, vol. 46(5), 930-950 (1994). [PDF file]
On an extension of a theorem of Tunnell, Compositio Math., vol. 94, 19-28 (1994). [PDF file]
(With D. Ramakrishnan) Lifting orthogonal representations to spin groups and local root numbers, Proc. Indian Acad. of Science, vol. 105, 259-267 (1995). [PDF file]
Some applications of seesaw duality to branching laws, Maths Annalen, vol. 304, 1-20 (1996). [PDF file]
(With C. Khare) Extending local representations to global representations, Kyoto J. of Maths, vol. 36, 471-480 (1996). [PDF file]
On the self-dual representations of finite groups of Lie type, J. of Algebra, vol. 210, 298-310 (1998). [PDF file]
Some remarks on representations of a division algebra and of Galois groups of local fields, J. of Number Theory, vol. 74, 73-97 (1999). [PDF file]
Distinguished representations for quadratic extensions, Compositio Math., vol. 119(3), 343-354 (1999). [PDF file]
(with D. Ramakrishnan) On the global root numbers of $GL(n) \times GL(m)$, Proceedings of Symposia in Pure Maths of the AMS, vol. 66, 311-330 (1999). [PDF file]
On the self-dual representations of $p$-adic groups, IMRN, vol. 8, 443-452 (1999). [PDF file]
(With Kumar Murty) Tate cycles on a product of two Hilbert Modular Surfaces, J. of Number Theory, vol. 80, 25-43 (2000). [PDF file]
Theta correspondence for Unitary groups, Pacific J. of Maths, vol. 194, no. 2, 427-438 (2000). [PDF file]
(with A. Raghuram) Kirillov theory of $GL_2(D)$ where $D$ is a division algebra over a non-Archimedean local field, Duke J. of Math, vol. 104, no. 1, 19-44 (2000). [PDF file]
Comparison of germ expansion for inner forms of $GL_n$, Manuscripta Mathematica, vol. 102, 263-268 (2000). [PDF file]
The space of degenerate Whittaker models for general linear groups over finite fields, IMRN, vol. 11, 579-595 (2000). [PDF file]
(With C. Khare) On the Steinitz module and capitulation of ideals, Nagoya Math. J., vol.
160, 1-15 (2000). [PDF file]
(with C.S. Yogananda) Bounding the torsion in CM elliptic curves, Comptes Rendus Mathematiques, Mathematical Reports of the Academy of Sciences, Canada, vol. 23, 1-5 (2001). [PDF file]
On a conjecture of Jacquet about distinguished representations of $GL_n$, Duke J. of Math, vol. 109, 67-78 (2001). [PDF file]
Locally algebraic representations of p-adic groups, appendix to the paper by P. Schneider and J. Teitelbaum, $U({\frak g})$-finite locally analytic representations, (Electronic Journal) Representation Theory, vol. 5, 111-128 (2001). [PDF file]
(with Nilabh Sanat) On the restriction of cuspidal representations to unipotent elements, Math. Proceedings of Cambridge Phil. Society, vol. 132(1), 35-56 (2002). [PDF file]
(with CS Rajan) On an Archimedean analogue of Tate's conjecture, J. of Number Theory, vol. 99 (2003), 180-184. [PDF file]
(with UK Anandavardhanan) Distinguished representations for SL(2), Math. Res. Letters 10, 867-878 (2003). [PDF file]
On an analogue of a conjecture of Mazur: A question in Diophantine approximation, Contributions to automorphic forms, geometry, and number theory, 699-709, Johns Hopkins Univ. Press, Baltimore, MD, 2004.
(with C. Khare) Reduction of homomorphisms mod p, and algebraicity, J. of Number Theory, vol. 105, 322-332 (2004). [PDF file]
(with SO Juriaans and IBS Passi) Hyperbolic Unit Groups, Proc. of the AMS, vol. 133 (2005), no. 2, 415-423. [PDF file]
(with Jeffrey D. Adler) On certain multiplicity one theorems, Israel J. of Mathematics, vol. 153, 221-245 (2006). [PDF file]
(with UK Anandavardhanan) On the SL(2) period integral, American J. of Mathematics, vol. 128, 1429-1453 (2006). [PDF file]
Relating invariant linear form and local epsilon factors via global methods, with an appendix by H. Saito; Duke J. of Math, vol. 138, no. 2, 233-261 (2007). [PDF file]
(with Rainer Schulze-Pillot) Generalised form of a conjecture of Jacquet, and a local consequence; Crelle Journal 616, 219-236 (2008).
[PDF file]
(with Dinakar Ramakrishnan) On the self-dual representations of division algebras over local fields; American J. of Math., vol. 134, no. 3, 749-772 (2012). [PDF file]
(with Shrawan Kumar and George Lusztig) Characters of simplylaced nonconnected groups versus characters of nonsimplylaced connected groups; Contemporary Math., vol. 478, AMS, pp. 99-101. [PDF file]
(with Ramin Takloo-Bighash) Bessel models for GSp(4); Crelle Journal, vol. 655, 189-243 (2011). [PDF file]
Some remarks on representations of quaternion division algebras. [PDF file]
(with Wee Teck Gan and Benedict H. Gross) Symplectic local root numbers, central critical $L$-values, and restriction problems in the representation theory of classical groups; vol. 346, pp. 1-109, Asterisque (2012).
(with Wee Teck Gan and Benedict H. Gross) Restriction of representations of classical groups: Examples; vol. 346, pp. 111-170, Asterisque (2012). [PDF file]
(with U.K. Anandavardhanan) A local-global question in Automorphic forms; Compositio Math 149 (2013), no. 6, 959-995. [PDF file]
(with Jeff Adler) Extensions of representations of $p$-adic groups, special volume of Nagoya J. of Math dedicated to the memory of Prof. Hiroshi Saito, vol. 208, pp. 171-199 (2012).
(with Dinakar Ramakrishnan) On the cuspidality criterion for the Asai transfer to ${\rm GL}(4)$; an appendix to ``Determination of cusp forms on $GL(2)$ by coefficients restricted to quadratic subfields'' by M. Krishnamurthy; Journal of Number Theory, Volume 132, Issue 6, Pages 1359-1384 (June 2012).
(with Shrawan Kumar) Dimension of zero weight space: an algebro-geometric approach; Journal of Algebra, volume 403 (2014), 324-344. [PDF file]
A `relative' local Langlands conjecture. [PDF file]
(with B. Gross and W.T. Gan) Branching laws: The non-tempered case. [PDF file]
Half the sum of positive roots, the Coxeter element, and a theorem of Kostant; Forum Math, vol. 28, 203-208 (2016).
[PDF file]
A character relationship on $GL_n$; Israel Journal, vol. 211 (2016), 257-270. [PDF file]
(with Shiv Prakash Patel) Multiplicity formula for restriction of representations of $\widetilde{{\rm G}L_{2}}(E)$ to $\widetilde{{\rm S}L_{2}}(E)$; Proceedings of the AMS, vol. 144, 903-908 (2016). [PDF file]
A refined notion of arithmetically equivalent number fields, and curves with isomorphic Jacobians; Advances in Mathematics 312 (2017), 198-208. [PDF file]
(with Jeffrey Adams and Gordan Savin) Euler Poincare Characteristic for the Oscillator Representation; ``Representation theory, Number theory, and Invariant theory,'' Progress in Mathematics, volume 323 (2017), pages 1-22.
Ext versions of branching laws. To appear in the ICM proceedings (2018). [PDF file]
(with Sarah Dijols) Symplectic models for Unitary groups; arXiv:1611.01621; Transactions of the AMS, DOI: https://doi.org/10.1090/tran/7651. [PDF file]
Generalizing the MVW involution, and the contragredient. Transactions of the AMS, DOI: https://doi.org/10.1090/tran/7602. [PDF file]
(with U.K. Anandavardhanan) Distinguished representations for SL(n), arXiv:1612.01076; to appear in MRL. [PDF file]
(with Shiv Prakash Patel) Restriction of representations of metaplectic $GL_2(F)$ to tori, Israel Journal of Mathematics 225 (2018), no. 2, 525-551. [PDF file]
(with J. Adler) Multiplicity upon restriction to the derived subgroup, to appear in Pacific Journal of Mathematics (2018). [PDF file]
A mod p Artin-Tate conjecture and generalized Herbrand-Ribet; submitted. [PDF file]
(With M. Nori) On a duality theorem of Schneider-Stuhler. To appear in Crelle Journal (2018). [PDF file]
Generic representations for symmetric spaces. Submitted. [PDF file]

Papers in Conference Proceedings
Weil representation, Howe duality, and the theta correspondence, AMS and CRM Proceedings and Lecture Notes, 105-126 (1993).
[PDF file]
Ribet's Theorem: Shimura-Taniyama-Weil implies Fermat, Proceedings of the seminar on Fermat's Last Theorem at the Fields Institute, edited by V. Kumar Murty, CMS Conference Proceedings, vol. 17, 155-177 (1995). [PDF file]
A brief survey on the Theta correspondence, Proceedings of the Trichy Conference edited by K. Murty and M. Waldschmidt, Contemporary Maths, AMS, vol. 210, 171-193 (1997). [PDF file]
(with C.S. Yogananda) A report on Artin's holomorphy conjecture, in the volume on Number Theory, edited by R.P. Bambah, V.C. Dumir, and R.J. Hans-Gill, Hindustan Book Agency (1999), 301-314. [PDF file]
The space of degenerate Whittaker Models for $GL_4$ over $p$-adic fields, Proceedings of the TIFR conference on Automorphic Forms, AMS (2001). [PDF file]
The main theorem of Complex Multiplication, Proceedings of the Advanced Instructional School on Algebraic Number Theory, entitled ``Elliptic Curves, Modular Forms, and Cryptography'', HRI, Allahabad (2000). [PDF file]
Distinguished representations for quadratic extension of a finite field. [PDF file]
Contributions to Algebraic number theory from India since Independence, unpublished. [PDF file]
A Cauchy-Schwarz inequality for representations of $SU(2)$. [PDF file]
A proposal for non-abelian Herbrand-Ribet; still in preliminary form. [PDF file]

Unpublished Lecture Notes
Lectures on Algebraic number theory (2001), notes by Anupam Kumar Singh. [PDF file]
Lectures on Algebraic Groups (2002), notes by Shripad Garge. [PDF file]
(with A. Raghuram) Representation theory of $GL(n)$ over non-Archimedean local fields, lecture notes for a workshop at ICTP, Italy (2001). [PDF file]
Lectures on Tate's thesis, lecture notes for a workshop at ICTP, Italy (2007). [PDF file]
Some questions on representations of Algebraic Groups (2011). [PDF file]
Notes on representations of finite groups of Lie type (2014). [PDF file]
Notes on modular representations of $p$-adic groups, and the Langlands correspondence (2014).
[PDF file]
Notes on Central Extensions (2015). [PDF file]
Introduction to modular forms; lectures in a workshop on `Modular forms and Black holes', at NISER Bhubaneswar, in Jan 2017. [PDF file]
Homework exercises given by Prof. Tate in a course on Algebra at Harvard in 1985. [PDF file]

A Brief Description of Work so far

Branching Laws for representations of Real and p-adic groups: Many problems in representation theory involve understanding how a representation of a group decomposes when restricted to a subgroup. Situations exhibiting a multiplicity one phenomenon, in which the trivial representation or some other representation of the subgroup appears with multiplicity at most one, are especially useful. To cite a few examples, the theory of spherical functions and Whittaker models depends on such a multiplicity one phenomenon. The Clebsch-Gordan theorem about tensor products of representations of SU(2) has been very useful both in Physics and Mathematics. Many of my initial papers have been about finding such multiplicity one situations for infinite dimensional representations of real and $p$-adic groups. The results are expressed in terms of the arithmetic information which goes into parameterising representations, the so-called Langlands parameters. In particular, the Clebsch-Gordan theorem was generalised by me for infinite dimensional representations of real and $p$-adic GL(2). Several papers, some written in collaboration with B.H. Gross, point to the importance of the so-called epsilon factors in these branching laws. The papers [1], [2], [3], [4], [5], [9], [11], [14] belong to this theme. These works have implications for the global theory of automorphic forms. There are many parallels between global period integrals, expressed in many situations as special values of $L$-functions, and local branching laws expressed in terms of epsilon factors.
In paper [35] this theme has been carried out, giving a global proof of the decomposition of the tensor product of two representations of GL(2) in terms of epsilon factors. In paper [36], written with Schulze-Pillot, we generalise Jacquet's conjecture to general cubic algebras, and deduce the local analogue. This paper also proves a very general globalisation theorem for local representations. The paper [15] studies the question of when a representation of $G(K)$ has a $G(k)$-invariant vector, for $K$ a quadratic extension of $k$, where $k$ is either a finite or a $p$-adic field. In the $p$-adic case, this was done only for division algebras in [15]. I have used the methods of this paper to prove a conjecture of Jacquet about distinguished representations of $GL_n$ and $U_n$ in the case when $K$ is an unramified quadratic extension of $k$ in [25]. Lusztig followed up the theme of [15] in his paper in Representation Theory, vol. 4 (2000). I have written a paper [33] with Jeff Adler in which we prove several multiplicity 1 theorems; in particular we show that an irreducible representation of $GSp(2n)$, when restricted to $Sp(2n)$, decomposes with multiplicity 1 for $p$-adic fields.

Representations of division algebras and of Galois groups of local fields: Generalising local class field theory, Langlands has conjectured a correspondence between irreducible representations of $GL(n)$, or of a division algebra of index $n$, and $n$-dimensional representations of the Galois group of the local field. This correspondence has recently been established by Harris, Taylor and Henniart. The correspondence preserves self-dual representations. Self-dual representations are of two kinds: symplectic and orthogonal. The question is: how does the Langlands correspondence behave on these two kinds of self-dual representations?
Based on considerations of Poincare duality on the middle dimensional cohomology of a certain rigid analytic space, Dinakar Ramakrishnan and I conjecture that a representation of a division algebra is orthogonal if and only if the associated representation of the Galois group is symplectic. The conjecture was made in [10]. The paper [14] was also motivated by its consideration. In the paper [37] with Ramakrishnan we show how this conjecture is a consequence of `functoriality', and since the functorial lift between classical groups and $GL(n)$ is now known in many cases, we are able to prove the conjecture in [37] for those cases when the parameter is symplectic.

Self-dual representations of finite and $p$-adic groups: For a compact connected Lie group it is a theorem due to Malcev that an irreducible, self-dual representation carries an invariant symmetric or skew-symmetric bilinear form depending on the action of a certain element in the center of the group. We have generalised this result to finite groups of Lie type in [13] and to $p$-adic groups in [17], providing an answer to a question raised by Serre. These results are, however, proved only for generic representations and under a condition on the group: that the group contains an element which operates by $-1$ on all simple roots. The group $SL(n)$ for $n \equiv 2 \bmod 4$ does not have such an element over a finite field ${\Bbb F}_q$ with $q \equiv 3 \bmod 4$, and for such a group there are generic self-dual representations on which the central element acts trivially, although the representation is symplectic, belying a belief held at that point. A. Turull later gave much more complete results about the Schur index in general for $SL(n)$.

Kirillov/Whittaker models: In the work [20] done with A. Raghuram, we develop Kirillov theory for irreducible admissible representations of $GL_2(D)$ where $D$ is a division algebra over a non-Archimedean local field.
This work is in close analogy with the work of Jacquet-Langlands done in the case when $D$ is a field, and realises any irreducible admissible representation of $GL_2(D)$ on a space of functions on $D^*$ with values in what may be called the space of degenerate Whittaker models: the largest quotient of the representation on which the unipotent radical of the minimal parabolic, which is isomorphic to $D$, acts via a non-trivial character of $D$. Paper [22] studies this space of degenerate Whittaker models over finite fields, obtaining a rather pretty result about the space of degenerate Whittaker models for a cuspidal representation of $GL_{2n}({\Bbb F})$ with respect to the $(n,n)$ parabolic with unipotent radical $M_n({\Bbb F})$. In paper [40], in the proceedings of a conference at the Tata Institute on Automorphic forms, I elaborate on a conjecture with B. Gross which gives a very precise structure for the space of degenerate Whittaker models on $GL_2(D)$ when $D$ is a quaternion division algebra. There is also a proposal in this paper to interpret triple product epsilon factors (for $GL(2)$) in terms of intertwining operators.

Weil Representations: Generalising the classical construction of theta functions, Weil representations provide one of the few general methods of constructing representations of groups over real and $p$-adic fields, as well as automorphic forms. The relation of this construction of representations to the Langlands parametrisation is still not fully understood. I have written two papers dealing with this question in which I refine some conjectures of Jeff Adams on the Langlands parameters of representations obtained via the Weil construction, thus making rather precise conjectures about the behaviour of the theta correspondence for groups of similar size. I have also done some work on the $K$-type of the Weil representation, and also on the character formula for the Weil representation.
Papers [7], [19], as well as the expository paper [40] containing some new results too, belong to this theme.

Modular forms: There is a well known theorem of Deligne about estimates on the Fourier coefficients of modular forms. In the paper [12] with C. Khare, we study whether the converse is true, i.e. whether, given finitely many algebraic integers satisfying the Deligne bounds, there exists an eigenform of Hecke operators with these algebraic integers as Fourier coefficients. One simple case of this problem is solved by an application of Wiles's theorem on the Shimura-Taniyama conjecture.

Representations of finite groups of Lie type: I have worked on some aspects of the representation theory of finite groups of Lie type with my student Nilabh Sanat, and we have written a paper [27] together. This paper decomposes an irreducible cuspidal representation of a classical group restricted to its maximal unipotent subgroup as an alternating sum of certain explicit unipotent representations.

Other works: I have a short note [6] in which I give a proof of the analogue of Bezout's theorem for abelian varieties: any two subvarieties of complementary dimensions in a simple abelian variety intersect. When the paper was written, I did not know that the theorem was due to W. Barth, but the proof presented in [6] was different anyway. The short note [26] appended to the paper of Schneider and Teitelbaum introduces the concept of locally algebraic representations, and suggests an analogue of the Harish-Chandra sub-quotient theorem for $p$-adic representations of $p$-adic groups. In paper [18] with Kumar Murty, we parametrise Tate cycles on products of two Hilbert modular surfaces in terms of Hilbert modular forms, including the precise information about the field of rationality. L. Merel has proved an important theorem stating that the order of torsion on elliptic curves over a number field is bounded independently of the elliptic curve and the field, and depends only on the degree of the field.
However, there are still no good bounds. In an attempt to see what might be the best bound, in a note with Yogananda [24], we estimate the bounds on torsion on CM elliptic curves. I have made an analogue, for certain tori (isomorphic to $({\Bbb S}^1)^n$ but non-algebraic!), of a conjecture of Mazur on the density of rational points in the Euclidean topology on an Abelian variety, and proved it using the Schanuel conjecture in [29]. In a paper with C. Khare [30] we prove that an abstract homomorphism between the Mordell-Weil groups of abelian varieties over a number field which respects reduction mod $p$ in fact arises from a homomorphism of abelian varieties. The paper [28] written with CS Rajan is a re-look at Sunada's theorem about isospectral Riemannian manifolds, where we deduce it as a consequence of a simple lemma in group theory. In this paper we also conjecture, and verify in several cases, that the Jacobians of two Riemann surfaces with the same spectrum for the Laplacian are isogenous (after an extension of the base field), and propose this as an Archimedean analogue of Tate's conjecture. I have written some survey papers, of which [39], [40] might have some results which may not be found elsewhere.

Professional Recognition, Awards, Fellowships received:
1. Sloan Fellowship at Harvard University 1988-89.
2. NSERC fellowship of the Canadian Government, 1993.
3. BM Birla Prize in Mathematics for the year 1994.
4. Elected fellow of the Indian Academy of Science in 1995.
5. Elected fellow of the National Academy of Science, India in 1997.
6. Swarna Jayanti Fellowship for Mathematics awarded in the year 98-99 for 5 years.
7. Shanti-Swarup Bhatnagar Award for Mathematical Sciences for the year 2002.
8. Ramanujan Award of the Indian Science Congress for the year 2005.
9. J.C. Bose fellowship 2010-2015.
Professional Experience
Research Scholar, TIFR, Bombay, 1980-1985.
Graduate student, Harvard University, 1985-1989.
Research Assistant, TIFR, Bombay, 1989-1990.
Fellow, TIFR, Bombay, 1990-1993.
Reader, TIFR, Bombay, 1993-1997.
Associate Professor, Mehta Research Institute, 1994-1997.
Professor, Mehta Research Institute, 1997-2004.
Member, Institute for Advanced Study, Princeton, 1992-93.
Visitor, University of Toronto, 1993.
Visitor, MSRI, Berkeley, Spring 1995.
Visitor, Harvard University, Spring 1997.
Visiting Associate Professor, University of Chicago, Spring 1998.
Visiting Professor, University of Chicago, Spring 2000.
Visiting Professor, Cal. Tech., Spring 2003.
Visitor, University of California at San Diego, 2007-08.
Research Professor, MSRI, Fall semester 2014.
Hybrid-Lambda: simulation of multiple merger and Kingman gene genealogies in species networks and species trees
Sha Zhu, James H. Degnan, Sharyn J. Goldstien & Bjarki Eldon
BMC Bioinformatics volume 16, Article number: 292 (2015)

There has been increasing interest in coalescent models which admit multiple mergers of ancestral lineages, and in modelling hybridization and coalescence simultaneously. Hybrid-Lambda is a software package that simulates gene genealogies under multiple merger and Kingman's coalescent processes within species networks or species trees. Hybrid-Lambda allows different coalescent processes to be specified for different populations, and allows for time to be converted between generations and coalescent units, by specifying a population size for each population. In addition, Hybrid-Lambda can generate simulated datasets, assuming the infinitely many sites mutation model, and compute the F ST statistic. As an illustration, we apply Hybrid-Lambda to infer the time of subdivision of certain marine invertebrates under different coalescent processes. Hybrid-Lambda makes it possible to investigate biogeographic concordance among high fecundity species exhibiting skewed offspring distribution.

Species trees describe ancestral relations among species. Gene genealogies describe the random ancestral relations of alleles sampled within species. Species trees are often assumed to be bifurcating [6], and gene genealogies to follow the Kingman coalescent [23, 27] in allowing at most two lineages to coalesce at a time. Recently, there has been increasing interest in coalescent models which admit multiple mergers of ancestral lineages [1, 2, 9, 12, 36, 38, 39] and in modelling hybridization and coalescence simultaneously [3, 25, 26, 28, 46].
For high fecundity species exhibiting sweepstake-like reproduction, such as oysters and other marine organisms [1, 4, 9, 11, 17, 18, 38], the Kingman coalescent may not be appropriate, as it is based on low offspring number population models (see recent reviews by [19] and [42]). Thus, we consider Λ coalescents [8, 35, 36] derived from sweepstake-like reproduction models, and allow more than two lineages to coalesce at a time. We introduce the software Hybrid-Lambda for simulating gene trees under two models of Λ-coalescents within rooted species trees and rooted species networks. Our program differs from existing software which also allows multiple mergers, such as SIMCOAL 2.0 [29] — which allows multiple mergers in gene trees due to small population sizes under the Wright-Fisher model — in that we apply coalescent processes that are obtained from population models explicitly modelling skewed offspring distributions, as opposed to bottlenecks. Species trees may also fail to be bifurcating due to either polytomies or hybridization events. The simulation of gene genealogies within a species network which admits hybridization is another application of Hybrid-Lambda. The package ms [24] can also simulate gene genealogies within species networks under Kingman's coalescent. However the input of ms is difficult to automate when the network is sophisticated or generated from other software. Other simulation studies using species networks have either used a small number of network topologies coded individually (for example, in phylonet [43, 45, 46]) or have assumed that gene trees have evolved on species trees embedded within the species network [22, 28, 31]. Hybrid-Lambda will help to automate simulation studies of hybridization by allowing for a large number of species network topologies and allowing gene trees to evolve directly within the network. Hybrid-Lambda can simulate both Kingman and Λ-coalescent processes within species networks. 
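When a gene lineage reaches a hybrid node of a species network (going back in time), it must be assigned to one of the two parental populations. As a minimal sketch of that step, in our own Python (not Hybrid-Lambda's code), and assuming each lineage chooses its parental population independently, with γ playing the role of the inheritance probability H1 shown in Fig. 1:

```python
import random

def split_at_hybrid_node(lineages, gamma, rng):
    """Assign each lineage entering a hybrid node to the left or right
    parental population, independently with probability gamma for left.
    (Our sketch of the model; gamma plays the role of H1 in Fig. 1.)"""
    left, right = [], []
    for lin in lineages:
        (left if rng.random() < gamma else right).append(lin)
    return left, right

rng = random.Random(7)
left, right = split_at_hybrid_node(["a1", "a2", "b1", "c1"], 0.5, rng)
# with gamma = 0.5, every one of the 2^4 assignments of the 4 lineages
# is equally likely; gamma = 1.0 sends all lineages to the left parent
```

After this assignment, coalescence proceeds separately within each parental population, so different draws can change which lineages are able to merge with each other deeper in the network.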
A comparison of features of several software packages that output gene genealogies under coalescent models is given in Table 1.

Table 1: Comparison of software programs simulating gene trees in species trees and networks. Migration refers to modeling post-speciation gene flow.

The program input file for Hybrid-Lambda is a character string that describes relationships between species. Standard Newick format [33] is used for the input of species trees and the output of gene trees, whose interior nodes are not labelled. An extended Newick formatted string [5, 25] labels all internal nodes, and is used for the input of species networks (see Fig. 1).

Fig. 1: Demonstration of a multiple merger genealogy within a species network. A multiple merger gene genealogy with topology (((a1, a2, a3), c1), (b1, c2, d1)), of which the coalescence events pointed to by arrows labelled "multiple merger" indicate coalescence of 3 lines, simulated in a species network with topology ((((B,C)s1)h1#H1,A)s2,(h1#H1,D)s3)r, where H1 is the probability that a lineage has its ancestry from its left parental population.

Hybrid-Lambda can use multiple lineages sampled from each species and simulate Kingman or multiple merger (Λ)-coalescent processes within a given species network. In addition, separate coalescent processes can be specified on different branches of the species network. The coalescent is a continuous-time Markov process, in which times between coalescent events are independent exponential random variables with different rates. The rates are determined by a so-called coalescent parameter that can be input via the command line, or via a(n) (extended) Newick formatted string with specific coalescent parameters as branch lengths. By default, the Kingman coalescent is used, for which two of b active lineages coalesce at rate \(\lambda _{b,2} = \binom {b}{2}\). One can choose between two different examples of a Λ-coalescent, whose parameters have clear biological interpretation.
While we cannot hope to cover the huge class of Λ-coalescents, our two examples are the ones that have been most studied in the literature [2, 7, 13]. If the coalescent parameter is between 0 and 1, then we use ψ for the coalescent parameter, and the rate λ_{b,k} at which k out of b (2≤k≤b) active ancestral lineages merge is

$$ \lambda_{b,k}=\binom{b}{k}\psi^{k-2}(1-\psi)^{b-k},\quad \psi \in [0,1], \qquad (1) $$

following Eldon and Wakeley [9]. If the coalescent parameter is between 1 and 2, then we use α for the coalescent parameter, and the rate of k-mergers (2≤k≤b) is

$$ \lambda_{b,k}=\binom{b}{k}\frac{B(k-\alpha,b-k+\alpha)}{B(2-\alpha,\alpha)}, \quad \alpha \in (1,2), $$

where B(·,·) is the beta function [39].

Hybrid-Lambda assumes by default that the input network (tree) branch lengths are in coalescent units. However, this is not essential. Coalescent units can be converted through an alternative input file with numbers of generations as branch lengths, which are then divided by their corresponding effective population sizes. By default, effective population sizes on all branches are assumed to be equal and unchanged. Users can change this parameter using the command line, or use a(n) (extended) Newick formatted string to specify population sizes on all branches through another input file. The simulation requires ultrametric species networks, i.e. equal lengths of all paths from tip to root. Hybrid-Lambda checks the distances in coalescent units between the root and all tip nodes and prints out warning messages if the ultrametric assumption is violated. Hybrid-Lambda outputs simulated gene trees in three different files: one contains gene trees with branch lengths in coalescent units, another uses the number of generations as branch lengths, and the third uses the number of expected mutations as branch lengths.
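The two merger-rate formulas above are straightforward to evaluate. The following Python sketch (our illustration, not Hybrid-Lambda's C++ internals; function names are ours) computes λ_{b,k} for both the ψ-coalescent and the Beta-coalescent, and uses the rates in a minimal Gillespie-style simulation of the merger process, with exponential waiting times as described above:

```python
import random
from math import comb, lgamma, exp

def rate_psi(b, k, psi):
    """k-merger rate among b lineages for the psi-coalescent (Eq. 1)."""
    return comb(b, k) * psi ** (k - 2) * (1 - psi) ** (b - k)

def rate_beta(b, k, alpha):
    """k-merger rate among b lineages for the Beta(2-alpha, alpha)-coalescent."""
    def log_beta(x, y):                      # log of Euler's beta function
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return comb(b, k) * exp(log_beta(k - alpha, b - k + alpha)
                            - log_beta(2 - alpha, alpha))

def simulate_mergers(b, rate, param, seed=1):
    """Gillespie-style simulation: returns (time, k) merger events
    until a single ancestral lineage remains."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while b > 1:
        rates = [(k, rate(b, k, param)) for k in range(2, b + 1)]
        total = sum(r for _, r in rates)
        t += rng.expovariate(total)          # exponential waiting time
        u, acc, k = rng.random() * total, 0.0, 2
        for k, r in rates:                   # pick k proportional to its rate
            acc += r
            if u <= acc:
                break
        events.append((t, k))
        b -= k - 1                           # a k-merger removes k-1 lineages
    return events

print(rate_psi(10, 2, 0.0))                  # prints 45.0, the Kingman rate C(10, 2)
```

Note that taking ψ → 0 in Eq. (1) recovers the Kingman rates: only pairwise mergers retain positive rate, which is why the last line prints the binomial coefficient.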
Besides outputting gene tree files, Hybrid-Lambda also provides several functions for analysis purposes:

- a user-defined random seed for the simulation,
- output of the simulated data in 0/1 format, assuming the infinitely many sites mutation model,
- a frequency table of gene tree topologies,
- a figure of the species network or tree (this function only works when LaTeX or dot is installed) (Fig. 2),
- the expected F ST value for a split model between two populations,
- when gene trees are simulated from two populations, a table of relative frequencies of reciprocal monophyly, paraphyly, and polyphyly.

Fig. 2: Demonstration of a network figure generated by Hybrid-Lambda. The network is automatically generated by Hybrid-Lambda as dot and .pdf files from the extended Newick string "(((((((6:.1,7:.1) s_6:.4,2:.5) s_1:1.1,3:1.6) s_2:3.3, 4:4.9) s_3:2) h_2#.5:1.41,5:8.31) s_4:0.1,(1:7.2,h_2#.5:.3) s_5:1.21)r;".

Simulation example

We give a simulation example showing the impact of the particular coalescent model on estimating the divergence time for two populations. Results can be confirmed using analytic approximations to F ST; this is shown in the Appendix, along with example code for using Hybrid-Lambda for this example. Eldon and Wakeley [10] showed that population subdivision can be observed in genetic data despite high migration between populations. One of the most widely used measures of population differentiation is the F ST statistic. The relationship between F ST and biogeography depends on the underlying coalescent process, which might be especially important for the interpretation of divergence and demographic history of many marine species. Here we used Hybrid-Lambda to simulate divergence between two populations based on different Λ-coalescents, as well as the standard Kingman coalescent. Mutations were simulated in Hybrid-Lambda under the infinite-sites model.
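Hybrid-Lambda's 0/1 output assumes the infinitely-many-sites model. The toy Python sketch below (our illustration, with a made-up three-tip genealogy; not the program's own code) shows the idea: each branch receives a Poisson number of mutations with mean equal to its length in expected-mutation units, and each mutation becomes a new 0/1 column marking exactly the tips descended from the mutated branch.

```python
import math
import random

# Hypothetical toy gene tree: node -> (children, branch length in
# expected-mutation units). Tips are a1, a2, b1; the root branch has length 0.
TREE = {
    "root": (["x", "b1"], 0.0),
    "x":    (["a1", "a2"], 0.5),
    "a1":   ([], 0.3),
    "a2":   ([], 0.3),
    "b1":   ([], 0.8),
}
TIPS = ["a1", "a2", "b1"]

def tips_below(tree, node):
    """All tip labels descended from (or equal to) `node`."""
    children = tree[node][0]
    if not children:
        return [node]
    return [t for c in children for t in tips_below(tree, c)]

def poisson(rng, lam):
    """Sample Poisson(lam) by Knuth's method (fine for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def infinite_sites(tree, tips, seed=2):
    """Return a list of 0/1 columns, one per mutation; a tip carries 1
    iff it descends from the branch the mutation fell on."""
    rng = random.Random(seed)
    columns = []
    for node, (_children, length) in tree.items():
        carriers = set(tips_below(tree, node))
        for _ in range(poisson(rng, length)):
            columns.append([1 if t in carriers else 0 for t in tips])
    return columns

sites = infinite_sites(TREE, TIPS)
# e.g. a mutation on the internal branch "x" would yield the column [1, 1, 0]
```

Because every site comes from a distinct mutation, columns never conflict, which is what makes summary statistics such as F ST straightforward to compute from this 0/1 matrix.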
The summary statistic F ST was estimated for these data and compared with F ST values estimated from mtDNA of five species of marine invertebrates. These species were used in previous studies to test the hypothesis that contemporary oceanic conditions are creating subdivisions between the North Island and South Island reef populations of New Zealand [16, 34, 44]. These studies represent some of the earliest mitochondrial studies on the marine disjunction between the North and South Islands of New Zealand. The F ST statistic between North Island and South Island populations reported for these species ranges from approximately 0.07 to 0.8 (Fig. 3). Cellana ornata displays a very strong split, which was estimated to have occurred around 0.2–0.3 million years ago based on published estimates of divergence rates and the reciprocal monophyly displayed in the data set. This result may be supported by our simulations using the Kingman coalescent. However, when multiple mergers and a higher fraction of replacement by a single parent are allowed to occur, our simulations support much younger splits between the populations, ∼ 9,000 or ∼ 48,000 generations ago (Fig. 3). Similarly, the strong split observed for Coscinasterias muricata could be placed anywhere from ∼ 9,000 to 45,000 generations ago depending on the degree to which multiple mergers are allowed to occur. While the range for Patiriella regularis, Cellana radians and C. flava is much smaller, it is still not clear-cut whether divergence would be observed under different coalescent models. Here we used ψ=0.01 and ψ=0.23, and α=1.5 and α=1.9, with larger values of ψ and smaller values of α corresponding to higher probabilities of multiple mergers. Our choice of parameter values corresponds to the estimated values obtained for mtDNA of oysters and Atlantic cod. An estimate of ψ for Pacific oysters based on mitochondrial DNA was 0.075 [9].
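The divergence times above can be related to F ST through the low-mutation approximation derived in the Appendix. A small sketch (ours, not part of Hybrid-Lambda; the coalescence rates λ_A and λ_AB are taken as free parameters, with λ_A = ψ² for the point-mass coalescent):

```python
from math import exp

def fst0(tau, lam_A=1.0, lam_AB=1.0):
    """Low-mutation limit of F_ST for two populations that diverged tau
    coalescent time units ago (Appendix, Slatkin-style approximation):
    F_ST^(0) = (1 - exp(-lam_A*tau)) * (1 - 1/((tau + 1/lam_AB)*lam_A))."""
    return (1.0 - exp(-lam_A * tau)) * (1.0 - 1.0 / ((tau + 1.0 / lam_AB) * lam_A))

# Sanity checks: no differentiation at tau = 0, approaching 1 as tau grows.
print(fst0(0.0))              # 0.0
print(round(fst0(50.0), 3))   # 0.98
# Point-mass coalescent with psi = 0.23, i.e. lam_A = lam_AB = psi**2:
print(fst0(1.0, lam_A=0.23**2, lam_AB=0.23**2))
```

With λ_A = λ_AB = 1 this reduces to (1 − e^(−τ))·τ/(1 + τ), the Kingman case quoted in the Appendix.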
The results for our choice of parameter values suggest that our conclusions about a much earlier split of the populations than previously estimated are robust with regard to parameter choice. A recent study of Atlantic cod [2] estimated ψ between 0.07 and 0.23 for nuclear genes and near 0.01 for mitochondrial genes. The same study estimated α to be 1.0 and 1.28 for nuclear genes and between 1.53 and 2.0 for mitochondrial genes. Estimated F ST from simulation. The estimated F ST from two populations simulated to have diverged over 0, 10, 20, and 50 thousand generations, as a function of the underlying coalescent process. Dashed lines show the relationship between the F ST value estimated from mtDNA data and the estimated number of generations since divergence, for the different coalescent processes for the five marine invertebrate species, Cellana ornata, C. radians, C. flava (Goldstien et al. [16]), Coscinasterias muricata (Perrin et al. [34]), and Patiriella regularis (Waters and Roy [44]) The implications of using alternative coalescent models are far reaching. Many marine organisms reproduce through broadcast spawning of thousands to millions of gametes, and while the expected survival of these offspring is low, there is the potential for a small subset of the adults to have a greater contribution to the next generation than assumed by the Kingman coalescent. Hybrid-Lambda makes it possible to investigate the effect of high fecundity on biogeographic concordance among species that exhibit high fecundity and high offspring mortality, including in complex demographic scenarios that allow hybridization. Hybrid-Lambda can be downloaded from http://hybridlambda.github.io/ . The program is written in C++ (requires compilers that support the C++11 standard to build), and released under the GNU General Public License (GPL) version 3 or later. Users can modify and make new distributions under the terms of this license.
For full details of this license, visit http://www.gnu.org/licenses/. Hybrid-Lambda works on Unix-like operating systems. We have used Travis continuous integration to test compiling the program on Linux and Mac OS. An API in R [37] is currently under development. Appendix: F ST calculations Here we show analytic calculations that can be used to obtain expressions for F ST when mutation rates are low. The effect of α on F ST for fixed generation times is shown in Fig. 4. Comparison of estimated F ST values from simulation and analytical predictions. Values of F ST as a function of the parameter α for 1<α<2 for different numbers of generations of separation for two populations. Simulations (dotted lines) are based on 1 individual from each of two populations separated by t generations with 10^3 replicates and α∈{1.1,1.2,…,1.9}. Analytical predictions (solid lines) of F ST were calculated using (6) Assume two populations A and B have been isolated until time τ in the past as measured from the present. Assume also that the same coalescent process is operating in populations A and B. Let T w denote the time until coalescence for two lines when drawn from the same population, and T b when drawn from different populations. Let λ A denote the coalescence rate for two lines in population A, and λ AB for the common ancestral population AB. For the Beta (2−α,α)-coalescent, λ A =1; for the point-mass process, λ A =ψ^2. One now obtains $$ \begin{aligned} E[\!T_{w}] & = (1 - e^{-\lambda_{A}\tau})\lambda_{A}^{-1} + e^{-\lambda_{A}\tau}\left(\tau + \lambda_{AB}^{-1}\right), \\ E[\!T_{b}] & = \tau + \lambda_{AB}^{-1}.
\\ \end{aligned} $$ Slatkin [40] obtained the approximation, where μ is the per generation mutation rate, $$ F_{ST}^{(0)} := {\lim}_{\mu \to 0}F_{ST} = 1 - \frac{E[\!T_{w}]}{E[\!T_{b}] } $$ Thus, using (3) gives $$ F_{ST}^{(0)} = \left(1 - e^{-\lambda_{A}\tau}\right)\left(1 - \frac{1 }{(\tau + \lambda_{AB}^{-1})\lambda_{A} }\right) $$ The result (5) seems to make sense, since \({\lim }_{\tau \to 0}F_{\textit {ST}}^{(0)} = 0\) and \({\lim }_{\tau \to \infty }F_{\textit {ST}}^{(0)} = 1\). By way of example, if all populations exhibit a Beta (2−α,α)-coalescent, λ A =λ AB =1, and $$ F_{ST}^{(0)} = \left(1 - e^{-\tau} \right)\frac{\tau}{1 + \tau}. $$ However, deciding the time unit of τ now becomes important, since the timescale of a Beta (2−α,α)-coalescent is proportional to N^{α−1}, 1<α<2 [39], where N is the population size. One can obtain a more accurate expression of the timescale given knowledge about the mean of the potential offspring distribution (see [39]). However, since the mean is unknown in most cases, we apply the approximation N^{α−1}. Assuming n≥2 sequences from each population, the 'observed' F ST \((\hat {F}_{\textit {ST}})\) was computed as \(\hat {F}_{\textit {ST}} = 1 - \tfrac {n}{n-1}\tfrac {H_{w}}{ H_{b}}\), where H w is the average of the pairwise differences within populations, \(H_{w} =\tfrac {1}{2}(H_{w,1} + H_{w,2})\), and H b is the average of the n^2 pairwise differences between populations. The following command-line argument for Hybrid-Lambda simulates 1,000 genealogies with 10 lineages sampled from each of two populations separated by one coalescent unit with mutation rate μ=0.00001, using a β-coalescent with parameter α=1.5: hybrid-Lambda -spng '(A:10000,B:10000);' -num 1000 -seed 45 -mu 0.00001 -S 10 10 \ -mm 1.5 -sim_num_mut -seg -fst -spng '(A:10000,B:10000);' denotes the population structure of a split model in which one population splits into two at 10,000 generations in the past. -num 1000 simulates 1,000 genealogies from this model.
-seed 45 initializes the random seed for the simulation. -mu 0.00001 specifies the mutation rate of 0.00001 per generation. -S 10 10 samples 10 individuals from each population. -mm 1.5 specifies the Λ-coalescent parameter. -sim_num_mut outputs simulated genealogies as Newick strings in which the numbers of mutations on internal branches are labelled. -seg generates a haplotype data set. -fst computes F ST of the generated haplotype data set. One can use this example to generate the data for Fig. 4 by setting the -S flag to -S 1 1. Árnason E. Mitochondrial cytochrome b variation in the high-fecundity Atlantic cod: trans-Atlantic clines and shallow gene genealogy. Genetics. 2004; 166:1871–85. Árnason E, Halldórsdóttir K. Nucleotide variation and balancing selection at the Ckma gene in Atlantic cod: analysis with multiple merger coalescent models. PeerJ. 2015; 3:e786. doi:10.7717/peerj.786. Bartoszek K, Jones G, Oxelman B, Sagitov S. Time to a single hybridization event in a group of species with unknown ancestral history. J Theor Biol. 2013; 322:1–6. Beckenbach AT. Mitochondrial haplotype frequencies in oysters: neutral alternatives to selection models. In: Golding B, editor. Non-neutral Evolution. New York: Chapman & Hall; 1994. pp. 188–98. Cardona G, Rosselló F, Valiente G. Extended Newick: it is time for a standard representation of phylogenetic networks. BMC Bioinform. 2008; 9:532. Degnan JH, Salter LA. Gene tree distributions under the coalescent process. Evolution. 2005; 59:24–37. Delmas JF, Dhersin JS, Siri-Jegousse A. Asymptotic results on the length of coalescent trees. Ann Appl Prob. 2008; 18:997–1025. Donnelly P, Kurtz TG. Particle representations for measure-valued population models. Ann Probab. 1999; 27:166–205. Eldon B, Wakeley J. Coalescent processes when the distribution of offspring number among individuals is highly skewed. Genetics. 2006; 172:2621–33. Eldon B, Wakeley J.
Coalescence times and F ST under a skewed offspring distribution among individuals in a population. Genetics. 2009; 181:615–29. Eldon B. Estimation of parameters in large offspring number models and ratios of coalescence times. Theor Popul Biol. 2011; 80:16–28. Eldon B, Degnan JH. Multiple merger gene genealogies in two species: monophyly, paraphyly, and polyphyly for two examples of Lambda coalescents. Theor Popul Biol. 2012; 82:117–30. Eldon B, Birkner M, Blath J, Freund F. Can the site-frequency spectrum distinguish exponential population growth from multiple-merger coalescents? Genetics. 2015; 199:841–56. Ewing G, Hermisson J. MSMS: a coalescent simulation program including recombination, demographic structure and selection at a single locus. Bioinformatics. 2010; 26:2064–65. Excoffier L, Foll M. Fastsimcoal: a continuous-time coalescent simulator of genomic diversity under arbitrarily complex evolutionary scenarios. Bioinformatics. 2011; 27:9. Goldstien SJ, Schiel DR, Gemmell NJ. Comparative phylogeography of coastal limpets across a marine disjunction in New Zealand. Mol Ecol. 2009; 15:3259–68. Hedgecock D. Does variance in reproductive success limit effective population sizes of marine organisms? In: Beaumont A, editor. Genetics and Evolution of Aquatic Organisms. London: Chapman and Hall; 1994. pp. 1222–344. Hedgecock D, Tracey M, Nelson K. Genetics. In: Abele LG, editor. The Biology of Crustacea, vol. 2. New York: Academic Press; 1982. pp. 297–403. Hedgecock D, Pudovkin AI. Sweepstakes reproductive success in highly fecund marine fish and shellfish: a review and commentary. Bull Mar Sci. 2011; 87:971–1002. Heled J, Bryant D, Drummond AJ. BMC Evolut Biol. 2013; 13:44. Hellenthal G, Stephens M. msHOT: modifying Hudson's ms simulator to incorporate crossover and gene conversion hotspots. Bioinformatics. 2007; 23:520–21. Holland BR, Benthin S, Lockhart PJ, Moulton V, Huber KT. BMC Evol Biol. 2008; 8:202. Hudson RR. Gene genealogies and the coalescent process.
Oxford Surv Evol Biol. 1990; 7:1–44. Hudson RR. Generating samples under a Wright-Fisher neutral model. Bioinformatics. 2002; 18:337–38. Huson D, Rupp R, Scornavacca C. Phylogenetic Networks: Concepts, Algorithms and Applications. Cambridge: Cambridge University Press; 2010. Jones G, Sagitov S, Oxelman B. Statistical inference of allopolyploid species networks in the presence of incomplete lineage sorting. Syst Biol. 2013; 62:467–78. Kingman JFC. On the genealogy of large populations. J Appl Probab. 1982; 19A:27–43. Kubatko LS. Identifying hybridization events in the presence of coalescence via model selection. Syst Biol. 2009; 58:478–88. Laval G, Excoffier L. SIMCOAL 2.0: a program to simulate genomic diversity over large recombining regions in a subdivided population with a complex history. Bioinformatics. 2004; 20:2485–87. Liang L, Zöllner S, Abecasis GR. GENOME: a rapid coalescent-based whole genome simulator. Bioinformatics. 2007; 23:1565–67. Meng C, Kubatko LS. Detecting hybrid speciation in the presence of incomplete lineage sorting using gene tree incongruence: a model. Theor Popul Biol. 2009; 75:35–45. Mailund T, Schierup H, Pedersen CNS, Mechlenborg PJM, Madsen JN, Schauser L, et al. CoaSim: a flexible environment for simulating genetic data under coalescent models. BMC Bioinforma. 2005; 6:252. Olsen G. Gary Olsen's interpretation of the "Newick's 8:45" tree format standard. 1990. http://evolution.genetics.washington.edu/phylip/newick_doc.html. Accessed 2 Sep 2015. Perrin C, Wing SR, Roy MS. Effects of hydrographic barriers on population genetic structure of the sea star Coscinasterias muricata (Echinodermata, Asteroidea) in the New Zealand fiords. Mol Ecol. 2004; 13:2183–95. Pitman J. Coalescents with multiple collisions. Ann Probab. 1999; 27:1870–902. Sagitov S. The general coalescent with asynchronous mergers of ancestral lines. J Appl Probab. 1999; 36:1116–1125. R Core Team. R: a language and environment for statistical computing.
R Foundation for Statistical Computing, Vienna, Austria. 2015. http://www.R-project.org/. Sargsyan O, Wakeley J. A coalescent process with simultaneous multiple mergers for approximating the gene genealogies of many marine organisms. Theor Popul Biol. 2008; 74:104–114. Schweinsberg J. Coalescent processes obtained from supercritical Galton-Watson processes. Stoch Proc Appl. 2003; 106:107–39. Slatkin M. Inbreeding coefficients and coalescence times. Genet Res. 1991; 58:167–175. Staab PR, Zhu S, Metzler D, Lunter G. Scrm: efficiently simulating long sequences using the approximated coalescent with recombination. Bioinformatics. 2015; 31(10):1680–82. Tellier A, Lemaire C. Coalescence 2.0: a multiple branching of recent theoretical developments and their applications. Mol Ecol. 2014; 23:2637–52. Than C, Ruths D, Nakhleh L. PhyloNet: a software package for analyzing and reconstructing reticulate evolutionary relationships. BMC Bioinforma. 2008; 9:322. doi:10.1186/1471-2105-9-322. Waters JM, Roy MS. Phylogeography of a high-dispersal New Zealand sea-star: does upwelling block gene-flow? Mol Ecol. 2004; 13:2797–806. Yu Y, Than C, Degnan JH, Nakhleh L. Coalescent histories on phylogenetic networks and detection of hybridization despite incomplete lineage sorting. Syst Biol. 2011; 60:138–49. Yu Y, Degnan JH, Nakhleh L. The probability of a gene tree topology within a phylogenetic network with applications to hybridization detection. PLoS Genet. 2012; 8:e1002660. doi:10.1371/journal.pgen.1002660. This work was supported by the New Zealand Marsden Fund (SZ and JD), EPSRC grant EP/G052026/1 and DFG grant BL 1105/3-1 through the SPP Priority Programme 1590 "Probabilistic Structures in Evolution" (BE). This work was partly conducted while JD was a Sabbatical Fellow at the National Institute for Mathematical and Biological Synthesis, an Institute sponsored by the National Science Foundation, the U.S. Department of Homeland Security, and the U.S.
Department of Agriculture through NSF Award #EF-0832858, with additional support from The University of Tennessee, Knoxville. Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford, UK: Sha Zhu. Department of Mathematics and Statistics, University of New Mexico, Albuquerque, New Mexico, USA: James H. Degnan. Department of Biology, University of Canterbury, Christchurch, New Zealand: Sharyn J. Goldstien. Institut für Mathematik, Technische Universität Berlin, Berlin, Germany: Bjarki Eldon. Correspondence to Sha Zhu. SZ was responsible for the software development. JD and BE supervised the project. BE derived all the F ST calculations in the Appendix. SG provided the simulation results and time estimates in Fig. 3. All the authors have contributed to the manuscript writing. All authors read and approved the final manuscript. Zhu, S., Degnan, J.H., Goldstien, S.J. et al. Hybrid-Lambda: simulation of multiple merger and Kingman gene genealogies in species networks and species trees. BMC Bioinformatics 16, 292 (2015). doi:10.1186/s12859-015-0721-y Keywords: Multiple merger, F ST, Infinite sites model, Hybrid-Lambda, Skewed offspring distribution
Performance enhancement of overlapping BSSs via dynamic transmit power control. Xiaoying Lei & Seung Hyong Rhee. EURASIP Journal on Wireless Communications and Networking 2015, Article number 8 (2015). In densely deployed wireless local area networks (WLANs), overlapping basic service sets (BSSs) may suffer from severe performance degradation. Mobile stations in a BSS may compete for channel access with stations that belong to another BSS in such an environment, which reduces overall throughput due to the increased collision probability. In this paper, we propose a new scheme for transmit power control, which enables mobile stations to dynamically adjust their transmit powers. Using our mechanism, stations in different BSSs have more chances for simultaneous transmissions and thus improve their performance by enhancing spatial reuse. We develop a Markov chain model to analyze the performance of the proposed scheme and also perform extensive simulations. Both the analytical and simulation results show that our mechanism effectively improves the network performance of WLANs. As IEEE 802.11 wireless local area networks (WLANs) have been widely deployed in homes, offices, and public places [1], the high density of WLANs has posed a great concern about the problem of co-channel interference. Thus, the overall network performance of WLANs may be severely degraded unless an efficient scheme is provided to reduce the interference. A WLAN basic service set (BSS) is typically formed by an access point (AP) and a number of stations associated with the AP [2], and in that case, data transmissions are allowed only between the stations and the AP. When the coverage of nearby co-channel BSSs overlaps with each other, they are called overlapping BSSs (OBSSs) [3]. In case a station located in the overlapping area transmits frames, other stations of the neighbor BSS may sense the transmission and refrain from transmitting.
Also, if they cannot sense the transmission, they will become hidden terminals to the transmitter. Therefore, the chance of simultaneous transmissions among OBSSs is reduced, and thus the whole network may suffer from the poor spatial reuse of OBSSs. Many solutions have been suggested so far to dynamically control the transmit power of WLAN stations and thus to improve the overall throughput of the network [1,4-8]. By adopting those schemes, stations are able to reduce their transmission ranges by using only the proper amount of transmit power, such that more stations can transmit simultaneously and thus the overall throughput is increased. The previous works, however, may not be adoptable in a practical WLAN system: for example, the problem of how to determine the proper power level is not fully investigated in [1,4]. Also, [6] is based on assumptions that may not hold in the real world, [7] requires the real-time adaptation of a measurement algorithm, and the power control scheme in [8] is limited to ad hoc mode. In this paper, we propose a method for dynamic transmit power control to enhance the throughput of OBSSs. First, we study the four different radio ranges in 802.11 systems and how OBSSs interfere with each other in a dense WLAN. Based on these observations, we propose a new power control scheme in which every station keeps a table recording the path loss between itself and the neighbor-BSS stations from which request to send/clear to send (RTS/CTS) frames can be overheard. Utilizing this information, those stations adjust their transmit powers and data frames are delivered using only the proper powers. We develop a discrete-time Markov chain model in order to verify that our proposed method provides the OBSSs with more opportunities for simultaneous transmissions and thus increases spatial reuse. In addition, simulation results are presented to validate our proposed scheme and its analytical model.
The remaining part of this paper is organized as follows. We discuss the related previous works and study the interference that occurs in OBSSs in Section 2. The details of our proposed power control method are addressed in Section 3, and the Markov chain model is investigated in Section 4. The extensive simulation results are reported in Section 5, and finally, concluding remarks are drawn in Section 6. Problem definition and related works Problem definition There are four different radio ranges in 802.11 systems, as illustrated in Figure 1 [9]: transmission range, network allocation vector (NAV) set range, clear channel assessment (CCA) busy range, and interference range. Transmission range is the range from a transmitter (T) and represents the area within which the receiver station (R) can receive a frame successfully. The NAV set range is the area within which the wireless stations (A, B) can set their NAVs correctly, based on the duration/ID information carried in the RTS/CTS frames. CCA busy range is the area within which the wireless stations (C, D) can physically sense the busy channel during the data transmission. Interference range is the range from a receiver and represents the area within which the wireless stations (E) are able to interfere with the reception of data frames at the receiver. A sketch of the radio ranges during a four-way frame exchange [9]. Currently most stations are configured to transmit at their maximum powers, and such a default deployment may result in high interference among OBSSs [1]. A scenario of interference among OBSSs is illustrated in Figure 2, where two BSSs, BSS 1 and BSS 2, overlap with each other. When station A, which belongs to BSS 1 and is located in the overlapping area, begins a transmission, the neighbor AP (i.e., AP 2) and other stations (such as B) will sense the transmission and set their NAVs. Also, other neighbor stations which are hidden terminals to the sender A, e.g., D and E, may try to access the channel.
In case D successfully transmits a data frame to AP 2, the AP cannot respond with an ACK in time since it has set its NAV. Due to the unsuccessful transmission, D increases its contention window and contends for retransmission. In this example, we can see that a transmission from one BSS can hamper the operation of neighbor BSSs. This problem of the 802.11 WLANs comes from the fact that each station must rely on its direct experience in estimating congestion, which often leads to asymmetric views [10]. A simple scenario of OBSSs. (a) Topology of OBSSs and (b) transmission process for OBSSs. Several attempts have been made to improve the performance of the 802.11 MAC by utilizing transmit power control schemes. Since the transmit power control (TPC) method standardized in IEEE 802.11 suffers from inaccuracies, Oteri et al. [4] propose a fractional CSMA/CA scheme by combining TPC with user grouping and inter-BSS coordination to improve the performance of overlapping BSSs. However, their approach lacks a mechanism for determining the proper transmit power. In [6], an iterative power control algorithm is proposed to increase the number of concurrent transmissions in dense wireless networks. This proposal is based on the assumption that every node has complete knowledge of the network topology and current configuration, which may not be possible in the real world. In [7], a run-time self-adaptation algorithm is proposed based on packet loss differentiation, which can jointly adapt both the transmit power and the physical carrier sensing (PCS) threshold. The problem of this scheme is that it requires metrics such as PER and the interference level to be measured in real time, which can increase the burden on the system. Also, Cesana et al. [8] present an interference-aware MAC for ad hoc mode networks, in which each station transmits using the RTS/CTS procedure, and information about the reception powers of RTS frames and interference levels is inserted into CTS packets.
Utilizing this information, stations which overhear a CTS can tune their transmit powers such that they can transmit simultaneously without interfering with each other. For performance enhancement of OBSSs, many recent works have provided different approaches. Li et al. [11] propose an interference avoidance algorithm to mitigate the interference from a neighbor BSS operating on the same channel. This scheme enables the AP to drop its defer threshold to the energy detect threshold when transmitting to stations located in the overlapping area. Thus a hidden terminal to the AP can sense the transmission from the AP and the collision probability is reduced. Fang et al. [12] propose a PCF-based two-level carrier sensing mechanism which adopts two NAVs in stations, namely the self-BSS network NAV (SBNAV) and the OBSS network NAV (OBNAV). When a transmission proceeds in one of the BSSs, a station which senses it sets the value of its NAV to either SBNAV or OBNAV, whichever is bigger. If there are no OBSSs, the OBNAV is set to 0. In [13], an interference packet detection scheme in the link layer is proposed, in which a receiving station that detects interference packets reports the existence of another BSS to its AP. Then the AP announces channel switching to all stations in its BSS for avoidance of interference. There is no guarantee that the chosen channel is free from interference, though. Dynamic transmit power control Our proposed dynamic TPC (DTPC) scheme is presented in this section. In the DTPC, the stations located in the overlapping area are referred to as interference prone (IP) stations, adopting the notion in [11]. As all the stations continually monitor the ongoing transmissions, a station can combine its observations with the information recorded in the path loss table to determine whether it can start a concurrent transmission. Then all the stations which try to start concurrent transmissions adjust their transmit powers to proper levels and compete for channel access.
If a station succeeds in accessing the channel, then since its transmission uses a low power, more stations may become hidden terminals to the transmitter. Thus we propose that all stations use the RTS/CTS procedure, where the RTS/CTS frames are exchanged using their maximum powers. Our DTPC scheme enables performance enhancement in two aspects: First, when a transmission from an IP station is ongoing, another station which belongs to a neighbor BSS and is not a hidden terminal to the IP station can start a simultaneous transmission after tuning its transmit power. Second, if a hidden terminal starts a transmission in parallel with the IP station, the neighbor AP can adjust its transmit power for a timely ACK response, which means a successful transmission. NAV reset timer modification A timer named RESET_NAV is defined in the IEEE 802.11 MAC for NAV update [14]. The stations overhearing an RTS set their NAVs and also set the timer RESET_NAV with a duration of CTS_Time+2SIFS_Time+2Slot_Time. Here, CTS_Time is calculated from the length of the CTS frame and the rate at which the CTS frame is transmitted. After setting the timer, the stations will reset their NAVs when the timer expires if they do not overhear a DATA frame from the RTS sender. We modify the NAV reset timer as follows: a new timer D_RESET_NAV is added, and the duration of this timer is the same as the duration field of the RTS. Thus if a station overhears an RTS of a station that belongs to its BSS, it sets D_RESET_NAV; otherwise it sets RESET_NAV. This makes sense because in an 802.11 WLAN, a station is supposed to receive all the incoming frames and at least decode the MAC header part unless it is in sleeping mode.
Moreover, in the infrastructure architecture, direct transmission is only possible between the AP and stations. Thus, a station can check the address fields of a received packet to confirm whether the sender belongs to the domestic BSS. This modification of the NAV reset timer guarantees that the domestic stations which set the D_RESET_NAV timer will not experience a time-out until the ongoing transmission terminates. Our DTPC proposes that after the RESET_NAV timer expires, stations enter into the back-off (BO) process directly. The station whose BO counter decreases to 0 will access the channel. Path loss recording In our proposal, an AP broadcasts the value of an allowable maximum transmit power via beacon frames, and other stations transmit RTS/CTS frames using the maximum power. Also it is assumed that all BSSs adopt the same value of maximum transmit power. Two more fields are added to the RTS frame: reception power and signal to interference and noise ratio (SINR). When a station transmits an RTS frame at maximum power, it piggybacks the reception power of the beacon it received most recently and the SINR of the beacon. Note that $$ {SINR}_{j}=\frac{P_{tra} \times G_{ij}}{N_{j}}, $$ where P tra is the transmit power of sender i, G ij is the path loss between sender i and receiver j, and N j is the noise and interference experienced at j. Thus when the AP receives the RTS, it can calculate the path loss from the sender. Also a neighbor-BSS station which overhears the RTS packet can calculate the path loss between the sender and itself, by adopting the allowable maximum transmit power. As the RTS/CTS frames are exchanged at maximum power, this prevents hidden terminals and exposed terminals over a wide range. After the RTS/CTS exchange, the sender adjusts its transmit power to a low level and delivers a data frame a SIFS later.
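Because the RTS is known to be sent at the advertised maximum power, an overhearing station can infer the channel attenuation from its measured reception power. A minimal sketch of this bookkeeping (ours, not the paper's code; the field names and dB-scale API are hypothetical, matching the table entries <node_id, path_loss_ij, p_rev> described above):

```python
def path_gain(p_rx_dbm, p_max_dbm):
    """Channel gain (inverse path loss) inferred from a frame known to be
    sent at the advertised maximum power: G = P_rx / P_max in linear scale,
    i.e. G_dB = P_rx_dBm - P_max_dBm."""
    return 10 ** ((p_rx_dbm - p_max_dbm) / 10.0)

# One record per overheard RTS/CTS sender, keyed by node id.
loss_table = {}

def on_overheard_rts(node_id, p_rx_dbm, p_max_dbm, beacon_rx_dbm):
    """Update (or create) the table entry for an overheard RTS sender.
    beacon_rx_dbm is the piggybacked beacon reception power (p_rev)."""
    loss_table[node_id] = {
        "gain": path_gain(p_rx_dbm, p_max_dbm),
        "p_rev": beacon_rx_dbm,
    }

# Station A's RTS received at -70 dBm; advertised max power is 20 dBm:
on_overheard_rts("A", p_rx_dbm=-70.0, p_max_dbm=20.0, beacon_rx_dbm=-60.0)
print(loss_table["A"]["gain"])   # ~1e-9, i.e. 90 dB of path loss
```

The same computation serves both the AP (on receiving the RTS) and any neighbor-BSS station that overhears it.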
In DTPC, each station keeps a table recording the reception power of the beacon frame and the path loss between itself and the neighbor-BSS stations from which it can overhear an RTS/CTS frame, i.e. <node_id, path_loss_ij, p_rev>. The AP keeps a table for its own BSS stations and the neighbor-BSS stations located in the overlapping area. When a station overhears an RTS/CTS frame, it updates the record related to the sender. If there is no record for the sender, it adds a new record to the table. Tuning transmit power In this section, the method for tuning the transmit power is explained. We assume the SINR thresholds (γ) for stations in all BSSs are the same. A BSS that first starts a transmission is called the primary BSS, and the other BSS overlapping with the primary BSS is referred to as a secondary BSS. Let \(P_{i\_re}\) be the power at which station i, which belongs to the primary BSS, received a beacon from its AP, and \(P_{j\_tr}\) be the transmit power of station j in the secondary BSS. Also let I i be the noise and interference that station i experiences, G ij be the path loss between station i and station j assuming a symmetric channel, and let SINR i be the SINR that i experienced when it received a frame from its AP. In order to guarantee that a transmission from j does not disturb the ongoing transmission at i, the following condition is required: $$ \frac{P_{i\_re}}{I_{i}+\frac{P_{j\_tr}}{G_{ij}}}> \gamma. $$ As \(I_{i} = \frac {P_{i\_re}}{{SINR}_{i}}\), after rearrangement of (1), we get $$ P_{j\_tr}< \frac{P_{i\_re}\cdot G_{ij}({SINR}_{i}-\gamma)}{{SINR}_{i}\cdot \gamma}. $$ In order to guarantee that the transmission from station j can be received by its AP successfully, the condition below should be satisfied: $$ \frac{P_{j\_tr}\times G_{AP\_j}}{I_{AP}}> \gamma. $$ Rearranging (3) gives $$ P_{j\_tr}> \frac{I_{AP}\times \gamma }{G_{AP\_j}}, $$ where \(G_{AP\_j}\) is the path loss between station j and its AP.
Combining (2) and (4), the transmit power of station j can be adjusted as follows: $$ \frac{I_{AP}\times \gamma }{G_{AP\_j}}< P_{j\_tr}< \frac{P_{i\_re}\cdot G_{ij}({SINR}_{i}-\gamma)}{{SINR}_{i}\cdot \gamma}. $$ Transmissions in a non-hidden terminal environment We consider the proposed DTPC in a non-hidden terminal environment. The network topology is given in Figure 2a; Figure 3a illustrates the radio ranges of the stations and Figure 3b presents the transmission process. As shown in Figure 3b, the RTS frame of station A contains the reception power of the beacon recently received from AP1 and the SINR of that beacon. The stations which belong to BSS2 and overhear this transmission (e.g., station B and AP2) will set their NAVs and RESET_NAV timers. After A receives a CTS from AP1, it tunes its transmit power and transmits a data frame. The stations which set their NAVs according to the RTS frame but cannot sense the following data frame will experience a RESET_NAV timer time-out and enter the BO process. Station B, whose BO counter reaches 0 first, accesses the channel and delivers a data frame after adjusting its transmit power. Proposed scheme works in a non-hidden terminal environment. (a) Radio range and (b) transmission process. Transmissions in a hidden terminal environment Now we use Figure 4a and Figure 4b to consider the transmission process of DTPC in a hidden terminal scenario. The network topology is the same as depicted in Figure 2a. When station A transmits an RTS frame, AP2 overhears it and sets its NAV as shown in Figure 4b. Then, after receiving a CTS frame from AP1, station A adjusts its power level and transmits its data frame at a suitably low power. As AP2 cannot overhear this data frame, its RESET_NAV timer expires and it will not set its NAV while station A is exchanging data frames with AP1. 
Station D, which is a hidden terminal to station A, transmits a data frame to AP2 during A's ongoing transmission after sensing that the channel is idle. AP2 adjusts its power level based on (5) and then responds with an ACK one SIFS later. Proposed scheme works in a hidden terminal environment. (a) Radio range and (b) transmission process. In order to analyze the performance of the proposed scheme compared to the 802.11 MAC, we develop an analytical model using a discrete-time Markov chain in this section. Markov chain model While an ongoing transmission in a BSS prevents transmissions in a neighbor OBSS in the legacy 802.11 MAC, in our proposed scheme the OBSSs are allowed to transmit simultaneously. Thus, in order to compare the channel utilization of the proposed scheme (DTPC) and the legacy MAC, we assume that the co-channel is divided into two sub-channels, and each BSS may occupy one of them. Adopting slotted time, in order to make the model Markovian, we suppose that the packet lengths, which are integer multiples of the slot duration, are independent and geometrically distributed with parameter q (i.e., the packet duration has a mean of 1/q slots) [15]. We also assume that devices always have packets to send to the AP in each time slot, and each device attempts to transmit with probability p. In addition, it is assumed that there are no hidden or exposed terminals in the domestic BSS. Let X_n be the number of transmissions ongoing in the two sub-channels at time n. Since each BSS can process one transmission during a time slot, the state space for the model is given by S={0,1,2}. Note that the value of X_n can be 2 only when both BSSs process a transmission in the same slot. The relationship between X_n+1 and X_n can be written as follows: $$\begin{array}{@{}rcl@{}} X_{n+1} = X_{n}+S_{n}-T_{n}, \quad n \geq 0 , \end{array} $$ where S_n is the number of new transmissions successfully started at time n, and T_n is the number of terminations at time n. 
Note that S_n = 1 if a new transmission starts successfully in time slot n and S_n = 0 otherwise. If X_n = 2, which means that both BSSs are processing transmissions, then S_n = 0 with probability 1. The number of terminations T_n at time n ranges from 0 to X_n. If X_n = 0, then T_n = 0 with probability 1. When a station has a frame to transmit, it attempts to transmit with probability p. If k stations are transmitting in the current slot, then the success probability D_k in the next time slot is $$\begin{array}{@{}rcl@{}} D_{k} = Lp(1-p)^{L-1}, \end{array} $$ where L is the number of stations. Also, the probability \(R_{k}^{(j)}\) that j transmissions are finished when the system is in state k is given by $$\begin{array}{@{}rcl@{}} R_{k}^{(j)} & = & Pr[j\ \text{transmissions terminate at time}\ t \,|\, X_{t-1} = k]\\ & = & {k \choose j}q^{j}(1-q)^{k-j} . \end{array} $$ Now the transition probability matrix for the model can be computed as follows: $$ \mathbf{P} = \left[ \begin{array}{ccc} 1-D_{0}(1-D_{0})-D_{0}D_{0} & D_{0}(1-D_{0}) & D_{0}D_{0}\\ 1-D_{0}D_{1}R_{1}^{(1)}-D_{1}R_{1}^{(0)} & D_{1}R_{1}^{(0)}(1-D_{0})+D_{0}D_{1}R_{1}^{(0)} & D_{0}D_{1}R_{1}^{(0)}\\ R_{2}^{(2)} & R_{2}^{(1)} & R_{2}^{(0)} \end{array} \right]. $$ The state transition diagram of the Markov chain model for OBSS operations is depicted in Figure 5. In the developed Markov model, the legacy 802.11 MAC and our proposed DTPC differ only in the transition probabilities. Detailed transition probabilities for both cases are omitted here due to space limitations. State transition diagram. Capacity analysis The average utilization ρ per sub-channel can be obtained as $$\begin{array}{@{}rcl@{}} \rho = \frac{\sum_{i\in{S}}{i\pi_{i}}}{N}, \end{array} $$ where π_i is the limiting probability that the system is in state i, N is the number of sub-channels, and S is the state space of the Markov chain. 
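As a numerical sanity check, the utilization ρ defined above can be computed from the limiting distribution of the three-state chain. The sketch below uses an assumed, illustrative transition matrix rather than the one derived in the text:

```python
# Sketch of the capacity computation: find the limiting distribution pi
# of a 3-state chain and evaluate rho = sum(i * pi_i) / N.  The
# transition matrix below is an assumed example, not the one in the text.

P = [
    [0.90, 0.09, 0.01],
    [0.20, 0.70, 0.10],
    [0.05, 0.30, 0.65],
]
N = 2  # two sub-channels

# Power iteration: pi_{n+1} = pi_n * P converges for this chain.
pi = [1.0 / 3.0] * 3
for _ in range(5000):
    pi = [sum(pi[j] * P[j][i] for j in range(3)) for i in range(3)]

rho = sum(i * pi[i] for i in range(3)) / N
```

Substituting the actual DTPC and legacy transition matrices in place of the example yields the utilizations compared in the analysis.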
Then the overall system throughput is given by $$\begin{array}{@{}rcl@{}} TH &=& N\times C\times\rho, \end{array} $$ where C is the channel bit rate. We study the performance of our proposed scheme compared to that of the legacy MAC using the parameters shown in Table 1. Table 1 System parameters Figure 6a shows the variation of the whole network throughput according to the number of stations in each BSS. The throughputs of both the proposed DTPC and the legacy 802.11 MAC decrease as the number of stations increases. The figure demonstrates the effectiveness of our proposed scheme in enhancing the throughput. One can see that the throughput can be increased by around 40 Mbps. Figure 6b shows the dependency of the throughput on the transmission probability. The throughputs of both schemes reach their peaks when the transmission probability is around 0.06. The figure confirms once again that our proposed scheme improves the network throughput. Analysis results. (a) Throughput vs. number of stations; (b) throughput vs. transmission probability. In the Markov chain model, we have assumed that there are no hidden or exposed stations in the domestic BSS and that every transmission completes successfully. In a practical network, however, hidden and/or exposed stations may exist and they will introduce collisions. In order to make the analysis more accurate, our future work will include a study on how to model the probability that a transmission is completed successfully in a time slot. Also, we have analyzed two overlapping BSSs. Modeling the performance of multiple OBSSs, however, becomes more challenging, as the transition probabilities in both the legacy scheme and the proposed solution depend on the network topology. In particular, whether a neighbor BSS can process a concurrent transmission in a time slot depends on the location of the transmitting station, which makes the model more complex. We plan to investigate this issue in our future work. 
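The peak near a transmission probability of 0.06 observed above matches the standard slotted-access calculation: if exactly one of L contending stations must transmit for a success, the success probability L·p·(1−p)^(L−1) is maximised at p = 1/L. A quick check, with L = 16 assumed purely for illustration (since 1/16 ≈ 0.06):

```python
# The per-slot success probability D(p) = L * p * (1-p)**(L-1)
# (exactly one of L stations transmitting) is maximised at p = 1/L.
# L = 16 is an assumed illustrative value, not a parameter from the paper.

L = 16

def success_prob(p):
    return L * p * (1.0 - p) ** (L - 1)

# Scan p on a fine grid and locate the maximiser.
grid = [i / 10000.0 for i in range(1, 10000)]
p_best = max(grid, key=success_prob)
print(p_best)  # 0.0625, i.e. 1/L
```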
We conduct extensive simulations of the proposed DTPC using OPNET. The network size is set to 300 × 300 m² and two overlapping BSSs process their transmissions on the same channel. The network topology is given in Figure 7; station mobility is not considered. The numbers of member stations in the two BSSs are the same, and all stations periodically transmit constant bit rate (CBR) UDP packets of 1,024 bytes to the AP. IEEE 802.11a is adopted as the WLAN protocol, and other parameters are presented in Table 2. All the results reported here are average values over 20 runs. Network topology. Table 2 Simulation parameters Figure 8a shows that the overall network throughput decreases as the network size increases, where the number of member stations in each BSS varies from 5 to 20. We can see that the simulation results closely follow the analytical data and that our proposed scheme enhances the overall network performance. Note that since the stations in the legacy 802.11 MAC contend for channel access with the stations in the other BSS, only one BSS is allowed to transmit at a time. The proposed DTPC, however, enables the stations to dynamically adjust their transmit powers such that stations in different BSSs have more opportunities for simultaneous transmission. Simulation results. (a) Network throughput and (b) retransmission attempts. Figure 8b presents the retransmission attempts versus the network size. We find that the retransmission attempts in our proposed scheme are lower than those of the legacy one. This is because our scheme transmits RTS/CTS frames at the maximum power, which prevents hidden and exposed terminals over a wide range. In addition, unlike the legacy scheme, the neighbor AP can adjust its power to a proper level such that it can respond with immediate ACKs. 
In this paper, we have presented a dynamic transmit power control scheme, namely DTPC, for enhancing the performance of OBSSs. Stations can dynamically adjust their transmit powers using the proposed DTPC, which enables the overlapping BSSs to transmit simultaneously and thus enhances spatial reuse. We have developed a Markov chain model in order to analyze the performance of DTPC, and the simulation results show that the analytical model is properly built. Both analytical and simulation results show that the proposed DTPC significantly improves the performance of OBSSs. As future work, we plan to investigate the performance of DTPC operating in multiple OBSSs rather than two OBSSs. The effects of hidden and/or exposed terminals within a BSS, as well as various network topologies, should also be studied. W Li, Y Cui, X Cheng, MA Al-Rodhaan, A Al-Dhelaan, Achieving proportional fairness via AP power control in multi-rate WLANs. IEEE Trans. Wireless Commun. 10(11), 3784–3792 (2011). A Jow, C Schurgers, Borrowed channel relaying: a novel method to improve infrastructure network throughput. EURASIP J. Wireless Commun. Netw. 2009, 174730 (2010). B Han, L Ji, S Lee, RR Miller, B Bhattacharjee, in IEEE International Conference on Communications. Channel access throttling for overlapping BSS management (Dresden, 2009), pp. 1–6. O Oteri, P Xia, F LaSita, R Olesen, in 16th International Symposium on Wireless Personal Multimedia Communications, WPMC. Advanced power control techniques for interference mitigation in dense 802.11 networks (Atlantic, 2013), pp. 1–7. X Wang, H Lou, M Ghosh, G Zhang, P Xia, O Oteri, F La Sita, R Olesen, N Shah, in Systems, Applications and Technology Conference, LISAT, 2014 IEEE Long Island. Carrier grade Wi-Fi: air interface requirements and technologies (Farmingdale, 2014), pp. 1–6. X Liu, S Seshan, P Steenkiste, in proceedings of the annual conference of ITA. 
Interference-aware transmission power control for dense wireless networks, (2007), pp. 1–7. H Ma, J Zhu, S Roy, SY Shin, Joint transmit power and physical carrier sensing adaptation based on loss differentiation for high density IEEE 802.11 WLAN. Comput. Netw. 52, 1703–1720 (2008). M Cesana, D Maniezzo, P Bergamo, M Gerla, in Vehicular Technology Conference 2003. Interference aware (IA) MAC: an enhancement to IEEE802.11b DCF (Orlando, Florida, USA, 2003), pp. 2799–2803. D Qiao, S Choi, A Jain, KG Shin, in Proceedings of the 9th annual international conference on mobile computing and networking. MiSer: an optimal low energy transmission strategy for IEEE 802.11a/h (San Diego, California, USA, 2003), pp. 161–175. X Wang, GB Giannakis, CSMA/CCA: a modified CSMA/CA protocol mitigating the fairness problem for IEEE 802.11 DCF. EURASIP J. Wireless Commun. Netw. 2006, 039604 (2006). Y Li, X Wang, SA Mujtaba, in Vehicular Technology Conference, 2003. Co-channel interference avoidance algorithm in 802.11 wireless LANs (Orlando, Florida, USA, 2003), pp. 2610–2614. Y Fang, D Gu, AB McDonald, J Zhang, in The 14th IEEE Workshop on Local and Metropolitan Area Networks. Two-level carrier sensing in overlapping basic service sets (BSSs) (Island of Crete, Greece, 2005), p. 6. T Tandai, K Toshimitsu, T Sakamoto, in Personal, Indoor and Mobile Radio Communications, 2006. Interferential packet detection scheme for a solution to overlapping BSS issues in IEEE 802.11 WLANs (Finland, 2006), pp. 1–5. IEEE 802.11-2012. Part 11, Wireless LAN medium access control (MAC) and physical layer (PHY) specifications. IEEE (2012). J Mo, H-S So, J Walrand, Comparison of multichannel MAC protocols. IEEE Trans. Mobile Comput. 7(1), 60–65 (2008). This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2013008855), and in part by the Research Grant of Kwangwoon University in 2013. 
Department of Electronics Convergence Engineering, Kwangwoon University, Wolgye Dong, 447-1, Nowon-GU, Seoul, Korea Xiaoying Lei & Seung Hyong Rhee Correspondence to Xiaoying Lei. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Lei, X., Rhee, S.H. Performance enhancement of overlapping BSSs via dynamic transmit power control. J Wireless Com Network 2015, 8 (2015) doi:10.1186/s13638-014-0232-y IEEE 802.11 MAC Overlapping BSSs Transmit power
Simon P. Norton Simon Phillips Norton (28 February 1952 – 14 February 2019)[1] was a mathematician in Cambridge, England, who worked on finite simple groups. Born: 28 February 1952. Died: 14 February 2019 (aged 66). Nationality: British. Alma mater: University of Cambridge. Fields: Mathematics. Thesis: F and Other Simple Groups (1976). Doctoral advisor: John Horton Conway. Education Simon Norton was born into a Sephardi family of Iraqi descent, the youngest of three brothers.[2] From 1964 he was a King's Scholar at Eton College, where he earned a reputation as an eccentric mathematical genius and was taught by Norman Routledge. He obtained an external first-class degree in Pure Mathematics at the University of London while still at the school, commuting to Royal Holloway College. He also represented the United Kingdom at the International Mathematical Olympiad three times consecutively starting in 1967, winning a gold medal each time and two special prizes in 1967 and 1969.[3] He then went up to Trinity College, Cambridge, and achieved a first in the final examinations. Career and life He stayed at Cambridge, working on finite groups. Norton was one of the authors of the ATLAS of Finite Groups. He constructed the Harada–Norton group and in 1979, together with John Conway, proved there is a connection between the Monster group and the j-function in number theory. They dubbed this "monstrous moonshine", and made some conjectures later proved by Richard Borcherds. Norton also made several early discoveries in Conway's Game of Life,[4] and invented the game Snort. In 1985, Cambridge University did not renew his contract. Norton is the subject of the biography The Genius in My Basement, written by his Cambridge tenant, Alexander Masters,[5] which describes his eccentric lifestyle and his life-long obsession with buses. He was also an occasional contributor to Word Ways: The Journal of Recreational Linguistics. 
Norton was very interested in transport issues and was a member of Subterranea Britannica. He coordinated the local group of the Campaign for Better Transport (United Kingdom), and had done so since the organisation was known as Transport 2000, writing most of the newsletter for the local Cambridge group[6] and tirelessly campaigning for efficient, inclusive and environmentally friendly public transport in the region and across the United Kingdom. He collapsed and died in north London, aged 66, of a heart condition on 14 February 2019.[1] Selected publications • 1995: (with C. J. Cummins) Cummins, C. J.; Norton, S. P. (1995). "Rational Hauptmoduls are replicable". Canadian Journal of Mathematics. 47 (6): 1201–1218. doi:10.4153/cjm-1995-061-1. S2CID 123645483. • 1996: Arasu, K. T.; Dillon, J. F.; Harada, K.; Sehgal, S.; Solomon, R. (1996). "Non-monstrous moonshine". Groups, Difference Sets, and the Monster: Proceedings of a Special Research Quarter at The Ohio State University, Spring 1993. pp. 433–441. ISBN 9783110147919. • 1996: Norton, S.P. (1996). "Free transposition groups". Communications in Algebra. 24 (2): 425–432. doi:10.1080/00927879608825578. • 1998: Curtis, Robert (11 June 1998). "Anatomy of the Monster: I". The Atlas of Finite Groups: Ten Years On. London Mathematical Society Lecture Note Series, 249. pp. 198–214. ISBN 9780521575874. • 2001: Norton, Simon (2001). "Computing in the Monster". Journal of Symbolic Computation. 31 (1–2): 193–201. doi:10.1006/jsco.1999.1008. • 2002: (with Robert A. Wilson) Norton, Simon P.; Wilson, Robert A. (2002). "Anatomy of the Monster: II". Proceedings of the London Mathematical Society. 84 (3): 581–598. doi:10.1112/S0024611502013357. S2CID 2279725. References 1. Obituary: Daily Telegraph 2. Tessler, Gloria (28 March 2019). "Obituary: Simon Norton". The Jewish Chronicle. 3. https://www.imo-official.org/participant_r.aspx?id=10021 4. 
Poundstone, William (1985), The recursive universe: cosmic complexity and the limits of scientific knowledge, Contemporary Books, p. 7, ISBN 978-0-8092-5202-2 5. Masters, Alexander (2012), The Genius in My Basement, London: HarperCollins (published 1 September 2011), ISBN 978-0-00-724338-9, LCCN 2011535364, OCLC 739420610 6. "Cambridgeshire Campaign for Better Transport Homepage". Archive of the Cambridgeshire Campaign for Better Transport. Cambridgeshire Campaign for Better Transport. 2019. Retrieved 6 April 2022. External links • Simon Phillips Norton at the Mathematics Genealogy Project • Simon P. Norton's results at International Mathematical Olympiad • Simon Norton at the Cambridge mathematics department • Turner, Jenny (24 August 2011). "The Genius in My Basement by Alexander Masters – review". The Guardian. Retrieved 31 August 2015. • Feature profile on National Public Radio's Weekend Edition Sunday, 02/26/12 The Genius In My Basement • Cambridgeshire Campaign for Better Transport (Archive) coordinated by Simon Norton, who authored the bulk of the newsletters and reports.
If action equals reaction, how is it ever possible to win in martial arts? In kick-boxing, when a fighter's leg hits an opponent's leg, the outcome, based on Newton's 3rd law, should be the same for each fighter. It is not even important who kicked whom, as at the moment of contact the attacker should feel more or less the same as the defender. Here is a catch: in most situations different parts of the fighters' bodies collide: the attacker typically contacts the front of his leg with the defender's side. The front is harder. Is it hardness that makes the difference? Some web pages inform me that, because of the 3rd law, a fighter should make powerful but very brief hits, retracting the kicking leg before it receives a reaction. But from what I know, if you don't feel a reaction, there was no action in the first place. How is it ever possible to gain an advantage from a hit in martial arts and win? newtonian-mechanics forces Qmechanic♦ lokson $\begingroup$ youtube.com/watch?v=397lM2aZZt4 $\endgroup$ $\begingroup$ If I slam my fist into your face, both my fist and your face will experience the same force from the impact. (An equal and opposite reaction.) Now what do you suppose will be hurt more by that impact? My fist? Or your face? ;-) $\endgroup$ – Ajedi32 You are correct. As you noted, Newton's third law does indeed say that the force on each fighter's body is the same (but in opposite direction) at each instant in time. This guarantees not only that the forces are equal, but also the impulse delivered to each of the colliding objects. 
Denoting the impulse imparted to objects 1 and 2 as $J_1$ and $J_2$ we therefore have \begin{align} F_1(t) &= -F_2(t) \\ \text{and} \qquad J_1 \equiv \int dt \, F_1(t) &= -\int dt \, F_2(t) \equiv -J_2 \end{align} Impulse has the same dimensions as momentum, so really what we're saying is that in a collision both objects experience the same change in momentum over the same period of time. Again, you are absolutely right. Retracting the arm or leg does not reduce the force or impulse delivered to that arm or leg during the blow. You actually already partially got it when you mentioned hardness of the objects involved in the collision. The technical words for describing this are stress and strain. Stress is essentially the inter-molecular or inter-atomic forces within a solid. Strain is the deformation of the solid from its usual shape. When an arm or leg hits a nose, both experience the same impulse, but because the nose is softer, it deforms more. Once the nose tissues move too much relative to one another, the nose breaks. The elbow, on the other hand, is made largely of calcium and can support much larger internal stress while maintaining low enough strain that the tissues don't move too much relative to one another; e.g. the elbow doesn't break. Once the collision is over the bone molecules move back to where they were before. Of course, if the stress in the bone is too large, and consequently the strain exceeds a certain amount, the bone fractures.$^{[a]}$ You can think of the difference between pushing on a nose or an elbow in terms of pushing on springs with different spring constants. Supposing we have $F = k x$, then for a given force (stress) the displacement (strain) is $x = F/k$. A low $k$ means a large strain (like the nose) while a large $k$ means less strain (like the elbow). Of course, there are also biological factors (which are fundamentally physical, of course). Certain parts of the body are simply more important than others. 
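To put toy numbers on the spring picture (all values made up, chosen only to contrast a soft "nose" and a stiff "elbow"):

```python
# Toy version of the spring analogy above: the same force F produces a
# deflection x = F / k that depends on the stiffness k.

F = 200.0        # impact force in newtons (assumed)
k_nose = 2.0e3   # low spring constant: soft tissue (assumed)
k_elbow = 2.0e5  # high spring constant: bone (assumed)

x_nose = F / k_nose    # large deflection (strain)
x_elbow = F / k_elbow  # 100x smaller deflection for the same force
```

Same force, deflections differing by the ratio of the spring constants: that is the whole stress/strain story in one line of arithmetic.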
An elbow-skull collision does not have equal damaging effects to the owners of said elbow and skull. The impulse imparted to the elbow causes compression of the bone which transduces the force to articulating structures such as the shoulder. The skull, on the other hand, transduces the impulse to e.g. the brain. Rattling a shoulder around may hurt, but rattling a brain around leads to unconsciousness. $[a]$: There is fascinating data regarding the stress/strain curves for bone. At low stress the bone is essentially elastic. Past a threshold, the strain is a much steeper function of stress. Then at a critical point the bone fractures. P.S. I've left out a discussion of exactly how/why body tissues are destroyed by excessive strain. In other words, why doesn't your nose just always spring back to its original shape after being deflected by an elbow? A careful description of this process on the microscopic level would make an interesting subject for another question. DanielSankDanielSank $\begingroup$ The "funny bone" is a good example of this most people have experienced first hand. You can barely bump something that's moderately hard. If you bump it an inch in either direction, you barely notice the hit. But right on that nerve, and your whole arm lights up like you've been electrocuted. $\endgroup$ – MichaelS $\begingroup$ There's also the matter of how the impact is felt. When you kick or punch someone, you're not only expecting it, but the force is being delivered in a controlled way, and the inertia remains within the leg. But when you are kicked, say in the knee, the sudden jolt of inertia can cause things to pop out of alignment, and not being prepared for it, the sudden motion and reaction to it can cause even more damage. $\endgroup$ – Zibbobz $\begingroup$ It also matters where the momentum ends up. 
Many strikes are not intended to hurt, but to throw the opponent out of balance by transferring more momentum than the opponent can pass on to the ground in that direction. $\endgroup$ – Jan Hudec $\begingroup$ I'm definitely no expert on martial arts, but it seems to me that at least part of the answer should involve balance — if you knock me sufficiently off balance, not only will I feel the force of your blow on me, but shortly afterwards I might be feeling the force of the ground rushing up to greet me. Even if I manage to refrain from falling, knocking me off balance could then feed back into the ability to hit strategic target points. $\endgroup$ – Ben Hocking This is a standard question on how Newton's third law applies when two bodies attached to the ground collide. It is true that whenever you hit your opponent they apply, by reaction, the same force on your body, but do not forget that the total resultant of the forces also takes into account the reaction of the muscles and of the ground. Namely, given two bodies, $1$ and $2$, we have $$ m_1\mathbf{a}_1 = \mathbf{R}_1 = \mathbf{F}_{2\to 1} + \mathbf{F}_{\textrm{ground}\to 1}+\mathbf{F}_{\textrm{muscles internal reactions}} $$ and likewise for $2$. Although it is true that $\mathbf{F}_{2\to 1} = - \mathbf{F}_{1\to 2}$, there are still other components coming into play in calculating the overall acceleration, and those other ones depend on your interaction with the ground and your internal muscle reactions. The muscle reactions depend very much on the individual parts of the body that collide and on some other factors like the timeframe of the impulse and so on. All in all, the bottom line is that, besides the action-reaction force (which is equal and opposite in sign), there are other contributions that pertain to the individual bodies only and their interaction with the ground and internal structure, and those play a role as well in the complete equation. 
gented $\begingroup$ Good point about transduction into the ground, but doesn't that just mean that the fighter who's more grounded has higher internal stress than the player who is less grounded (the less grounded one will be accelerated by the blow more than the grounded one)? $\endgroup$ – DanielSank $\begingroup$ Yes, but not so easily. The interaction with the ground takes into account how your muscles react to the external solicitations and how they translate this into overall acceleration of each part of your body (and this depends on how they handle the stress). $\endgroup$ – gented
\begin{definition}[Definition:Meaningful Product] Let $\left({S, \circ}\right)$ be a semigroup. Let $a_1, \ldots, a_n$ be a sequence of elements of $S$. Then we define a '''meaningful product''' of $a_1, \ldots, a_n$ inductively as follows: If $n = 1$ then the only meaningful product is $a_1$. If $n > 1$ then a meaningful product is defined to be any product of the form: :$\left({a_1 \ldots a_m}\right)\left({a_{m+1} \ldots a_n}\right)$ where $m < n$ and $\left({a_1 \ldots a_m}\right)$ and $\left({a_{m+1} \ldots a_n}\right)$ are meaningful products of $m$ and $n - m$ elements respectively. \end{definition}
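The recursion in this definition can be carried out mechanically. A short sketch in Python (representing elements as strings) enumerates all meaningful products of $a_1, \ldots, a_n$; the counts $1, 1, 2, 5, 14, \ldots$ are the Catalan numbers $C_{n - 1}$, a standard consequence of this recursion:

```python
# Enumerate all meaningful products per the inductive definition above:
# for n > 1, split at every m < n and combine a meaningful product of
# a_1 .. a_m with one of a_{m+1} .. a_n.

def meaningful_products(elems):
    if len(elems) == 1:
        return [elems[0]]
    out = []
    for m in range(1, len(elems)):
        for left in meaningful_products(elems[:m]):
            for right in meaningful_products(elems[m:]):
                out.append("(" + left + ")(" + right + ")")
    return out

# Counts for n = 1..5: the Catalan numbers C_{n-1}.
counts = [len(meaningful_products([chr(97 + i) for i in range(n)]))
          for n in range(1, 6)]
print(counts)  # [1, 1, 2, 5, 14]
```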
Modelling and Optimal Siting of Static VAR Compensator to Enhance Voltage Stability of Power System with Uncertain Load Suman Machavarapu* | Mannam Venu Gopala Rao | Pulipaka Venkata Ramana Rao Vignan's Lara Institute of Technology & Science, Vadlamudi 522213, India Prasad V. Potluri Siddhartha Institute of Technology, Kanuru 520007, India ANU college of Engineering & Technology, Nagarjuna Nagar 522510, India [email protected] Voltage stability is one of the most vital phenomena in power systems; it is disturbed mainly by a mismatch between reactive power generation and load. Besides reactive power imbalance, internal equipment faults and short circuit faults may also cause voltage collapse at the buses. Voltage stability can be enhanced using shunt devices such as the Static VAR Compensator (SVC), which can generate or absorb reactive power in a controlled manner and is therefore able to enhance voltage stability. The Voltage Stability Index method is used to determine the voltage sensitivity at each bus; the bus having the highest voltage stability index value can be considered the weak bus, which is the optimal location for the FACTS controller. In this paper, an investigation is made into how the susceptance model and the firing angle model of the SVC enhance the voltage at each bus under a chaotic load case. IEEE 5-bus and 30-bus systems are considered as test systems and simulations are carried out in the Matlab environment. voltage stability, voltage collapse, voltage sensitivity, voltage stability index, static VAR compensator Voltage instability is one of the major problems in a modern power system and has been a challenging issue for power system engineers for many decades. Voltage instability leads to voltage collapse, which may in turn lead to collapse of the power system. 
Voltage collapse is highly undesirable in power systems; it occurs when the system is overloaded. The primary reason for variation in voltage is an imbalance between reactive power generation and consumption. FACTS devices provide an effective solution to prevent voltage instability and voltage collapse due to their fast and flexible control. FACTS controllers are power electronic devices which are mainly used to improve the power handling capability of the lines by controlling the reactive power. The SVC is the combination of a Thyristor Controlled Reactor (TCR) and a Thyristor Switched Capacitor (TSC); it can effectively generate or absorb reactive power in a controlled manner. Different problems of voltage stability and how to counteract them were described in [1]. [2] deals with the different FACTS controllers and the improvement of the loadability limits of transmission lines. [3] presents the voltage stability index and how to determine the weak bus using it. [4] discusses power flow analysis and the Newton Raphson power flow algorithm. [5] presents different models of the SVC and their incorporation in the power system. [6-7] describe the voltage stability index and the simplified voltage stability index. Different voltage sensitivity indices are discussed in [8]. [9-10] deal with the susceptance model and the firing angle model of the SVC FACTS controller and how these models are incorporated in the power system. [11] clearly describes the optimal siting of FACTS controllers and also shows how different FACTS controllers improve the voltage profile using reactive power control. If the right location is selected, a single SVC can control the voltage stability of all buses. In this paper, the index method is used to find the critical or weak buses. If the load at the critical bus is increased beyond the rated level, there will be a voltage drop at all the buses, which should be compensated by the SVC FACTS controller. 
The effectiveness of the SVC FACTS controller is observed using the susceptance and firing angle models under an increased load condition. Section 2 describes the load flow solution using the Newton-Raphson method. Section 3 presents the mathematical modelling of the SVC, and the critical or weak bus is identified using the voltage stability index approach in Section 4.

2. Newton Raphson Power Flow

Load flow [12] equations are used to determine the best operation of the existing power system, and also the extension of the existing power system in a more economical way. Continuous monitoring of the power system is possible by knowing the status of the system from time to time. The unknown parameters at the buses, and thereby the power flows in the lines and the losses, can be determined. The Newton-Raphson (NR) load flow amounts to solving a set of nonlinear equations by the Newton-Raphson method. The NR load flow method has quadratic convergence characteristics, so it is superior to other load flow methods, and it is more efficient for large and complex power systems. It needs fewer iterations to reach convergence, and the number of iterations is largely independent of the size of the system.

3. Mathematical Modelling of SVC

The SVC [13] is the combination of a Thyristor Controlled Reactor (TCR) and a Thyristor Switched Capacitor (TSC). The TCR provides controlled reactive power absorption, and the TSC provides controlled reactive power generation, so the SVC can absorb or generate reactive power in a controlled manner. The SVC can be modelled in the two ways given in Sections 3.1 and 3.2.

3.1 Susceptance model

In practice the SVC can be seen as an adjustable reactance with either firing angle limits or reactance limits. The SVC equivalent circuit is used to derive the SVC nonlinear power equations and the linearized equations required by Newton's method. From the equivalent circuit, the current drawn by the SVC is given by
$I_{SVC}=j B_{SVC} V_{k}$ (1)

and the reactive power drawn by the SVC, which is also the reactive power injected at bus k, is

$Q_{SVC}=Q_{k}=-V_{k}^{2} B_{SVC}$ (2)

The linearised equation is given by

$\left[\begin{array}{c}\Delta P_{k} \\ \Delta Q_{k}\end{array}\right]^{(i)}=\left[\begin{array}{cc}0 & 0 \\ 0 & Q_{k}\end{array}\right]^{(i)}\left[\begin{array}{c}\Delta \theta_{k} \\ \Delta B_{SVC} / B_{SVC}\end{array}\right]^{(i)}$ (3)

The changing susceptance represents the total SVC susceptance necessary to maintain the nodal voltage magnitude at the specified value.

3.2 Firing angle model

An alternative SVC model, which circumvents the additional iterative process, consists in handling the thyristor-controlled reactor (TCR) firing angle α directly. The SVC current is again

$I_{SVC}=j B_{SVC} V_{k}$ (4)

and in the firing angle method $B_{SVC}$ is given by

$B_{SVC}=B_{C}-B_{TCR}=-\frac{1}{X_{C} X_{L}}\left\{X_{L}-\frac{X_{C}}{\pi}\left[2(\pi-\alpha)+\sin 2 \alpha\right]\right\}, \quad X_{L}=\omega L, \quad X_{C}=\frac{1}{\omega C}$ (5)

so that

$Q_{k}=-\frac{V_{k}^{2}}{X_{C} X_{L}}\left\{X_{L}-\frac{X_{C}}{\pi}\left[2(\pi-\alpha)+\sin 2 \alpha\right]\right\}$ (6)

From equation (6), the linearised SVC equation can be written as

$\left[\begin{array}{c}\Delta P_{k} \\ \Delta Q_{k}\end{array}\right]=\left[\begin{array}{cc}0 & 0 \\ 0 & \frac{2 V_{k}^{2}}{\pi X_{L}}\left[\cos (2 \alpha)-1\right]\end{array}\right]\left[\begin{array}{c}\Delta \theta_{k} \\ \Delta \alpha\end{array}\right]$ (7)

4. Determination of L-Index

The L-index is used to determine the weak or critical bus. The bus having the highest L-index [14-15] value is considered the weak bus, i.e. the bus that is affected most severely whenever a disturbance occurs.
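As a numerical sketch of the firing angle model, the snippet below evaluates Eqs. (5) and (6) exactly as written; the per-unit reactance values used in the example are illustrative assumptions, not taken from the paper, and sign conventions for the SVC susceptance vary between references.

```python
import math

def bsvc_firing_angle(alpha, x_l, x_c):
    """SVC equivalent susceptance for TCR firing angle alpha (radians),
    evaluated per Eq. (5); x_l and x_c are the TCR and TSC branch
    reactances (per unit)."""
    bracket = x_l - (x_c / math.pi) * (2.0 * (math.pi - alpha) + math.sin(2.0 * alpha))
    return -bracket / (x_c * x_l)

def q_injected(v_k, alpha, x_l, x_c):
    """Reactive power at bus k per Eq. (6), i.e. V_k^2 times Eq. (5)."""
    return v_k * v_k * bsvc_firing_angle(alpha, x_l, x_c)

# With the TCR blocked (alpha = pi), the braced term reduces to X_L and
# the susceptance collapses to -1/X_C, as Eq. (5) predicts.
b_blocked = bsvc_firing_angle(math.pi, 0.2, 1.1)
```

A load flow routine would re-evaluate this susceptance at each iteration when α is the state variable, which is exactly what the Jacobian entry in Eq. (7) linearises.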
At a load bus the VSI can be determined as follows:

$L_{j}=\left| 1-\frac{\sum\limits_{i=1}^{\alpha_{G}}{C_{ij} V_{i}}}{V_{j}} \right|$ (8)

where:
$\alpha_{G}$ = number of generator buses;
$V_j$ = complex voltage at load bus j;
$V_i$ = complex voltage at generator bus i;
$C_{ij}$ = elements of the matrix C, which can be determined using

$\left[ C \right]=-{\left[ Y_{LL} \right]}^{-1}\left[ Y_{LG} \right]$ (9)

$[Y_{LL}]$ and $[Y_{LG}]$ are sub-matrices of the $Y_{BUS}$ matrix and can be found from

$\left[ \begin{matrix} I_{L} \\ I_{G} \end{matrix} \right]=\left[ \begin{matrix} Y_{LL} & Y_{LG} \\ Y_{GL} & Y_{GG} \end{matrix} \right]\left[ \begin{matrix} V_{L} \\ V_{G} \end{matrix} \right]$ (10)

Two different test systems are considered, as given in Sections 5.1 and 5.2.

5.1 Test case 1: Standard IEEE 5-bus system

The standard IEEE 5-bus system, shown in Figure 1, has one slack bus, one voltage-controlled bus and three load buses.

Figure 1. Standard IEEE 5-bus system

The load at bus 5 is increased from 10% to 200% above the base value, and the voltage variations at all buses are tabulated in Table 1. From Table 1 it is clear that with increasing loading the bus voltages keep decreasing. As the loading of bus 5 increases, the voltage at this bus is affected more than at the other buses, as given in Figure 2.

Table 1. Variation of bus voltages with respect to increment of load at bus 5

Figure 2. Variation in bus-5 voltage with increment in loading

The L-index is calculated at all buses and tabulated in Table 2.

Table 2. L-index values with normal and heavy loading

From Table 2 it can be observed that the L-index value at bus 5 is 0.1033, the highest of all, so bus 5 is the weak bus. By varying the SVC susceptance, the variations of the voltages at all buses are tabulated in Table 3.
Table 3. Improvement in bus voltages with SVC susceptance, heavily loaded case (200%)

From Table 3 it is clearly observed that, by varying the susceptance, the voltage improves not only at bus 5 but also at the other buses. At a susceptance of 1.0106 p.u. the bus-5 voltage is exactly 1 p.u. By varying the SVC firing angle, the variations of the bus voltages at all buses are tabulated in Table 4. From Table 4 it is observed that increasing the firing angle raises the voltages of all buses, not only bus 5. The bus-5 voltage reaches exactly 1 p.u. at a firing angle of 208.4°.

Figure 3. Variation in bus-5 voltage with SVC susceptance

Table 4. Improvement in bus voltages with SVC firing angle

Figure 4. Variation of bus-5 voltage with SVC firing angle

5.2 Test case 2: Standard IEEE 30-bus system

The IEEE 30-bus system, with one slack bus, 5 generator buses and 24 load buses, is given in Figure 5. The L-index values of all the load buses are determined and tabulated in Table 5. From the L-index table it is observed that bus 30 has the highest L-index value, so it is the weak or critical bus; both at normal loading and at a 200% increment in loading, bus 30 remains the weak bus. Increasing the active and reactive load demands at bus 30 by 10% to 200% from the normal values, all bus voltages are tabulated in Table 6. It is observed that increasing the load at bus 30 changes the voltage not only at that bus but also at the remaining buses. At a 200% increment of load from the base load, the voltage at bus 30 is 0.8707 p.u., which is undesirable. Since bus 30 is the weak bus, the SVC FACTS controller is located at bus 30, as given in Figure 5. When the system is in the overloaded condition the SVC FACTS controller is connected; by varying the susceptance of the SVC, the variation of the voltages at all buses is tabulated in Table 7. It is observed that not only the voltage of bus 30 but the voltages of all the buses increase. At a susceptance of exactly 1.1719 p.u. the bus-30 voltage reaches 1 p.u. The variation of the voltages at all buses with the firing angle is tabulated in Table 8.

Figure 5. IEEE 30-bus system

Table 5. L-index values of the IEEE 30-bus system (normal loading)

Figure 6. Voltage variation with increased load demand

Table 6. Voltage variations with increment in load demand at load buses

Table 7. Variation of voltages with SVC susceptance at load buses (heavily loaded, 200%)

Table 8. Variation of voltages with firing angle at load buses (firing angles including 131.9795°)

Figure 7. Variation of voltages with SVC susceptance

Figure 8. Variation of voltages with SVC firing angle

Figure 9. Voltage variation at all buses under the normal and heavily loaded cases

Figure 10. Variation of all bus voltages under the heavily loaded case with the SVC susceptance model

Figure 11. Variation of all bus voltages under the heavily loaded case with the SVC firing angle model

6. Conclusion

It was observed that when the system is in an overloaded condition there is a voltage drop from the reference level, which is undesirable. The SVC is a shunt FACTS controller that supports the voltage profile when there is a disturbance. The L-index method is used to determine the optimal location of the FACTS controller; by this method, bus 5 in the standard IEEE 5-bus system and bus 30 in the IEEE 30-bus system are identified as the weak buses, and the SVC FACTS controller is placed at these buses. Both the susceptance and firing angle models are considered. The SVC provides good control over the bus voltages under disturbances such as overloading.

References

[1] Bujal, N.R., Hasan, A.E., Sulaiman, M. (2014). Analysis of voltage stability problems in power system. 4th International Conference on Engineering Technology and Technopreneurship (ICE2T), 27-29. https://doi.org/10.1109/ICE2T.2014.7006262

[2] Gupta, V.K., Kumar, S., Bhattacharyya, B.
(2014). Enhancement of power system loadability with FACTS devices. The Institution of Engineers (India), 95(2): 113-120. https://doi.org/10.1007/s40031-014-0085-0

[3] Dike, D.O., Mahajan, S.M. (2015). Voltage stability index based reactive power compensation scheme. Electrical Power and Energy Systems, 73: 734-742. https://doi.org/10.1016/j.ijepes.2015.04.016

[4] Saadat, H. (1999). Power System Analysis. McGraw-Hill Series in Electrical and Computer Engineering.

[5] Perez, H.A., Acha, E., Esquivel, C.R.F. (2000). Advanced SVC models for the Newton-Raphson load flow and Newton optimal power flow studies. IEEE Transactions on Power Systems, 15(1): 129-136. https://doi.org/10.1109/59.852111

[6] Huang, H.L., Kong, Y. (2008). The analysis on the L-index based optimal power flow considering voltage stability constraints. WSEAS Transactions on Systems, 7(11): 1300-1309.

[7] London, S.P., Rodriguez, L.F., Oliver, G. (2014). A simplified voltage stability index (SVSI). Electrical Power and Energy Systems, 63: 806-813. http://dx.doi.org/10.1016/j.ijepes.2014.06.044

[8] Abedelatti, A.E., Hashim, H., Abidin, I.Z., Sie, A.W.H., Nasional, U.T., Mara, T. (2015). Weakest bus based on voltage indices and loadability, 8-9. http://dspace.uniten.edu.my/jspui/handle/123456789/10203

[9] Acha, E., Agelidis, V.G. (2006). Power Electronic Control in Electrical Systems. Newnes.

[10] Lahacani, N.A., Mendil, B. (2008). Modeling and simulation of the SVC for power system flow studies. Leonardo Journal of Sciences, 153-170.

[11] Hernandez, A., Rodriguez, M.A., Torres, E., Eguia, P. (2013). A review and comparison of FACTS optimal placement for solving transmission system issues. Renewable Energy and Power Quality Journal (RE&PQJ), (11). https://doi.org/10.24084/repqj11.435

[12] Thukaram, D., Lomi, A. (2000). Selection of static VAR compensator location and size for system voltage stability improvements. Electric Power Systems Research, 54: 139-150. https://doi.org/10.1016/S0378-7796(99)00082-6

[13] Suman, M., Rao, M.V.G., Rao, P.V.R. (2018). Enhancement of voltage stability using optimally sited static VAR compensator with three phase fault. Journal of Research in Dynamical & Control Systems, 10(3): 95-106.

[14] Hassen, M.O., Cheng, S.J., Zakaria, Z.A. (2009). Steady state modelling of SVC and TCSC for power flow analysis. Proceedings of the International MultiConference of Engineers and Computer Scientists, 2: 1443-1448.

[15] Kowsalya, M., Ray, K.K., Kothari, D.P. (2009). Positioning of SVC and STATCOM in a long transmission line. International Journal of Recent Trends in Engineering, 2(5).
Geoenvironmental Disasters

Climate change adaptive capacity and smallholder farming in Trans-Mara East sub-County, Kenya

Harrison K. Simotwo (ORCID: orcid.org/0000-0002-1295-6814), Stella M. Mikalitsa & Boniface N. Wambua

Geoenvironmental Disasters, volume 5, Article number: 5 (2018)

Abstract

At the centre of smallholders' adaptation is a need to understand their perceptions of key climatic scenarios, so as to glean helpful information for key decision-making processes. In Kenya at the moment, downstream information regarding these circumstances remains scanty, with many smallholders being 'on their own', in spite of the imminent threats from shifting precipitation patterns, rising temperatures, and intensifying droughts. At the sub-national levels, the potential impacts of these situations are likely to deepen due to extensive cases of land use transformations, habitat degradation, plummeting water resources capacity and common inter-ethnic conflicts, among other negative externalities. The study examined current climatic situations in Trans-Mara East sub-County, in the south-western part of Kenya, as well as the smallholders' perceptions about these situations, their adaptation levels and the constraints thereof. The Pearson correlation coefficient indicated a weak positive association between smallholders' perceptions and their age, marital status, level of education, or livelihood streams (r ≤ 0.1; p ≥ 0.05, for all), unlike their climatic perceptions and farm sizes, which showed a strong positive association (r = 0.430; p ≤ 0.01). Improved crop varieties, better livestock feeding techniques and crop diversification topped the desired adaptation options, with destocking being least desired.
Education levels (r = 0.229; p ≤ 0.05) and farm sizes (r = 0.534; p ≤ 0.01) had a positively significant association with adaptive capacity, in addition to a weak, non-significant association between adaptive capacity and both the individual's marital status (r = 0.154; p ≥ 0.05) and the diversity of livelihood streams (r = 0.034; p ≥ 0.05). The analysis also showed a weak negative association between adaptive capacity and age (r = −0.026; p ≥ 0.05). Among the key constraints that emerged were the high cost of farm inputs, limited access to credit and market uncertainties, among others. The Pearson correlation coefficient showed a significantly strong negative association between smallholders' constraints and both their level of education and their diversity of livelihood streams (r ≥ −0.3; p ≤ 0.01 for both). A significantly strong positive association (r = 0.280; p ≤ 0.01) was identified between smallholders' age and the constraints, while marital status and farm sizes both revealed a weak, non-significant negative association with the constraints (r ≤ −0.01; p ≥ 0.05). Trans-Mara East sub-County has been grappling with a number of climate-related challenges, manifested through increased rainfall uncertainties, intensifying droughts, and rising temperatures, with effects on crop and livestock performance in the area, accompanied by plummeting household food security and income positions. Besides, smallholders' perceptions intersected with various intervening subtleties. Smallholders' adaptive capacity in the area was largely not associated with their socioeconomic characteristics, as most of the respective components, such as education and livelihood streams, were barely fully fledged. Moreover, the constraints on their adaptive capacity were mainly related to the existing policies and their implementation at the downstream levels, with limited attribution to farm-level interventions.
It is thus incumbent upon the decision-makers and other key stakeholders to explore avenues for amplifying the smallholders' desired adaptation schemes while downsizing the existing adaptation bottlenecks in the area.

Global climate change, manifested through rising temperatures, changing patterns of precipitation, and rising atmospheric carbon dioxide, is poised to become a key driver of smallholder performance across many parts of the developing world in the current century (Campbell et al. 2016; Raworth 2007). Among the key socio-economic impacts of these scenarios are shifts in the productivity of major cereal and horticultural crops (Singh et al. 2015; Tittonell and Giller 2013), with net adverse effects on the food security situations and income levels of many agriculture-dependent economies (Mertz et al. 2009b). For instance, in sub-Saharan Africa, this situation is likely to disrupt huge proportions of economies whose main contribution emanates from agriculture dominated by smallholder output (Moyo et al. 2012; Mutunga et al. 2017). Specifically, in Kenya, rainfall-dependent smallholders are responsible for up to 70% of the agricultural output (Raworth 2007; Silvestri et al. 2015), which is essential to household food security and income flows in nearly all the rural areas (Mikalitsa 2015; Oluoko-Odingo 2011). Thus, the climatic shifts will affect not only the smallholders but also the country's economy. Nonetheless, adaptation (Field 2012) has been floated as the only immediate option for cushioning smallholders, among other vulnerable groups (Labbé et al. 2016; Opiyo et al. 2015), and ecosystems against the imminent impacts of climate change. As a result, various models (Gornott and Wechsung 2016; Hoetker 2007) have been put forward on smallholder responses to these situations, and these are likely to drive the approaches to climate change adaptation and specific decisions regarding the mitigation plans (Le Dang et al. 2014; Labbé et al. 2016).
However, significant actions by decision-makers and other key stakeholders may not be easily effected until there is a unified approach to the available knowledge and information regarding the actual state and trends at the downstream levels (Mertz et al. 2009a); hence the need for more empirical studies such as this one. The study therefore set out to operationalize a number of objectives. The first objective entailed examining the existing meteorological data on rainfall and temperatures for the area from 1980 to 2015; data resulting from such a process can easily help in understanding the existence and magnitude of any shift in the climatic situations. Secondly, the study sought to assess smallholders' perceptions about the area's climatic situations. The third objective was to evaluate the current, and desired, adaptation options, ranking the resulting data using a Weighted Average Index (Ndamani and Watanabe 2015). Finally, the study sought to assess smallholder adaptation constraints using the Problem Confrontational Index (Deressa et al. 2011). The approach taken by this study, i.e. looking at the situation at the downstream levels as opposed to the national level, was deemed appropriate owing to the current systems of governance in Kenya (Thugge et al. 2011; Wiesmann et al. 2014). In particular, the implementation of environmental conservation and agricultural policies, among other key directives, has been largely decentralized to the sub-national levels of administration. Thus, adaptation constraints such as poor road networks and limited value addition options for agricultural output can be dealt with more easily at the sub-national levels (Wiesmann et al. 2014).
Knowledge of, and access to information on, the status and trends in smallholder adaptation to climate change are critical to improving household food security and nutrition as well as income flows, and to reducing poverty and inequality across many parts of the developing world in a bottom-up approach (Oluoko-Odingo 2011; Wambua and Omoke 2014). In the process of unpacking data to support such dialogues, it is imperative to understand the smallholders' perceptions about the climatic situations and associated risks at specific geographical locations, as this is fundamental to their preparedness and subsequent adaptation strategies, which have been shown to vary subtly from one region to the other (Raworth 2007; Silvestri et al. 2015). These perceptions can be assessed against the existing meteorological data in order to close any gaps thereto (Field 2012). Besides, it is also vital to examine the factors impeding smallholders in their quest to adopt various adaptation options, so as to properly inform the requisite priorities. Broadening such adaptation discourses through empirical research helps in making information more accessible, with a high likelihood of enhancing the quality of key decision-making processes. In the end, the outcome of such actions will be of great benefit not only to the targeted smallholders but also to other vulnerable groups (Opiyo et al. 2015). The present study identified Trans-Mara East sub-County in the south-western part of Kenya as an ideal case for making a contribution to current discourses about smallholders in the face of shifting climatic scenarios. This sub-County is one of the areas whose farming activities are likely to be affected by the overarching threats from climate change and variability, due to reported cases of land use transformations, habitat degradation, dwindling groundwater resources and common inter-ethnic conflicts (Kipsisei 2011; Nyamwaro et al. 2006).
Such anthropogenic disorders have been shown to exacerbate the impacts of climate change on people and ecosystems (Mertz et al. 2009a; Raworth 2007). Trans-Mara East sub-County, Kenya, is one of the five sub-Counties in Narok County, carved out of Trans-Mara District in 2012. It borders Bomet County to the north, Nyamira and Kisii counties to the north-west, Trans-Mara West to the west, and Narok West to both the east and the south (Wiesmann et al. 2014). It lies within latitudes 0° 50′ and 6° 50′ south and longitudes 34° 35′ and 35° 14′ east, with a mean altitude of 1450 m above sea level and an area coverage of 320.5 km2, and is divided into four administrative wards, i.e. Ilkerin, Kapsasian, Mogondo, and Ololmasani (Fig. 1). In Kenya's 2009 national population census, the sub-County had a rural population of 94,115, with both the KNBS and CRA 2015 estimates being 105,879, out of whom 22,488 were smallholders, distributed per Ward as 6297 (Ilkerin), 5599 (Kapsasian), 4205 (Mogondo), and 6387 (Ololmasani). Besides, the sub-County falls within a transitional zone of agro-ecological zones III and IV, and is characterised by a bimodal rainfall of 450–900 mm per annum, which peaks in March to May and November to December, with mean annual temperatures of 17.8 °C. It is also characterized by gently undulating landscapes which generally slope from west to east, with largely black cotton soils (Kipsisei 2011; Nyamwaro et al. 2006). The main crops grown in the area include cereals, pulses, fruits and vegetables, among others, while the main livestock rearing activities involve cattle, goats, sheep, donkeys, and chicken, with nearly all smallholders practicing mixed-cropping systems (Wiesmann et al. 2014).

Fig. 1. A map showing the location of the study area in Kenya (Source: Adapted from Survey of Kenya, 2017)
This study employed a cross-sectional survey (Kothari 2004) which captured data from farming households across the four administrative Wards in Trans-Mara East sub-County between February and October 2016. From a sample population of 22,488 smallholders, a sample size of 100 households was drawn using the Nassiuma (2000) model. Questionnaire surveys were then distributed according to the household densities per Ward, as 28 (Ilkerin), 19 (Mogondo), 25 (Kapsasian), and 28 (Ololmasani). This process was augmented with four focus group discussions – one per Ward – and 13 key informant interviews (Field 2009; Kothari 2004). Field data collation and analysis were performed using the Statistical Package for Social Sciences (version 21), described in Field (2009), and frequency summaries were generated for the smallholders' demographics, livelihood streams, and adaptation status. The current adaptation measures in the area, together with the perceived level of importance of each strategy, were analysed from data in which respondents allocated scores on a Likert scale of 0–3, as done by Ndamani and Watanabe (2015). In this process, the values 0 and 3 denote the lowest and highest levels of importance, respectively. An evaluation of the practices was then performed using a weighted average index (Devkota et al. 2017; Ndamani and Watanabe 2015), where each practice got a specific rank denoting its level of importance, as follows:

$$ \mathrm{Weighted\ Average\ Index\ (WAI)}=\sum \left({\mathrm{F}}_{\mathrm{i}}{\mathrm{W}}_{\mathrm{i}}\right)/\sum {\mathrm{F}}_{\mathrm{i}} $$ (1)

(F = frequency of a score's occurrence; W = weight of each score; i = score). To assess the magnitude of the bottlenecks limiting smallholders from adopting robust strategies against climate variability shocks in the area, a Problem Confrontational Index (PCI) was applied (Uddin et al. 2014).
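A minimal sketch of the sampling and scoring computations described above is given below. It assumes the commonly quoted form of the Nassiuma (2000) model, n = NC²/(C² + (N−1)e²), with an assumed coefficient of variation C = 0.5 and precision e = 0.05 (neither value is stated in the text); the response counts in the WAI example are hypothetical.

```python
from collections import Counter

def nassiuma_sample_size(population, c=0.5, e=0.05):
    """Sample size n = N*C^2 / (C^2 + (N-1)*e^2); C and e are assumed here."""
    return population * c * c / (c * c + (population - 1) * e * e)

def weighted_average_index(scores):
    """WAI = sum(F_i * W_i) / sum(F_i), where the weight of each score is
    the score itself on the 0-3 Likert scale."""
    freq = Counter(scores)
    return sum(w * f for w, f in freq.items()) / sum(freq.values())

n = nassiuma_sample_size(22488)  # roughly 100 households, as in the study
# Hypothetical responses for one practice: 40 x "3", 30 x "2", 20 x "1", 10 x "0"
wai = weighted_average_index([3] * 40 + [2] * 30 + [1] * 20 + [0] * 10)  # 2.0
```

Ranking the practices then reduces to sorting them by their WAI values.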
This index entailed a process of evaluating perceived constraints on a Likert scale from the least to the most impactful elements. In the study, the PCI value was obtained as follows:

$$ \mathrm{PCI}=\left[\left({\mathrm{P}}_{\mathrm{n}}\times 0\right)+\left({\mathrm{P}}_{\mathrm{l}}\times 1\right)+\left({\mathrm{P}}_{\mathrm{m}}\times 2\right)+\left({\mathrm{P}}_{\mathrm{h}}\times 3\right)\right]/100 $$ (2)

($P_n$ = responses grading an element as a non-issue; $P_l$ = responses grading an element as low; $P_m$ = responses grading the element as moderate; $P_h$ = responses grading the element as high). Besides, Pearson correlation analysis (Field 2009) was used to ascertain the magnitude and direction of the relationships between key socio-economic variables and the farmers' perceptions, adaptation status, and constraints thereto. This entailed computing the correlation coefficient (r) using Eq. 3 below:

$$ \mathrm{r}=\frac{\sum \left({\mathrm{x}}_{\mathrm{i}}-\overline{\mathrm{x}}\right)\left({\mathrm{y}}_{\mathrm{i}}-\overline{\mathrm{y}}\right)}{\left(\mathrm{N}-1\right){\mathrm{S}}_{\mathrm{x}}{\mathrm{S}}_{\mathrm{y}}} $$ (3)

where x = first variable; y = second variable; Sx = standard deviation of the first variable; Sy = standard deviation of the second variable.

Results and discussions

Features of the sample

The majority of the smallholders encountered in the area were aged 35 and above (76%), with the youth, despite being the majority in the general population, constituting only a small portion (24%) of smallholders in the area. Further, the mean and median ages of smallholders in the area were found to be 42 and 40 years, respectively. These observations agree with other studies across sub-Saharan Africa and the rest of the world, which have shown an "ageing farmer population". Such a situation harbours potentially adverse ramifications for future food security and overall agricultural productivity, amidst the burgeoning populations in Kenya, as is the case in other developing countries.
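The PCI of Eq. (2) and the correlation coefficient of Eq. (3) can be sketched as follows; the percentage split in the PCI example is hypothetical, and the two short series merely demonstrate the extreme values r = 1 and r = −1.

```python
import math

def problem_confrontation_index(p_none, p_low, p_mod, p_high):
    """PCI = (Pn*0 + Pl*1 + Pm*2 + Ph*3) / 100, the P's being the
    percentages of respondents grading a constraint at each level."""
    return (p_none * 0 + p_low * 1 + p_mod * 2 + p_high * 3) / 100

def pearson_r(x, y):
    """Eq. (3): r = sum((xi - xbar)(yi - ybar)) / ((N - 1) * Sx * Sy),
    with Sx, Sy the sample standard deviations."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - xbar) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - ybar) ** 2 for v in y) / (n - 1))
    s = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    return s / ((n - 1) * sx * sy)

pci = problem_confrontation_index(10, 20, 30, 40)  # hypothetical split -> 2.0
r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])      # perfectly linear -> 1.0
```

Constraints are then ranked by their PCI values, the highest PCI marking the most confronting bottleneck.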
Bulging populations, especially in urban areas, demand a commensurate upturn in food resources. Further, of all the respondents, males constituted 47% while females were 53%. All the males were found to be de jure household heads. On the contrary, female respondents were either sharing responsibilities with their male counterparts as married couples (69.8%), or entirely responsible for all household farming decisions as single mothers (30.2%). Many studies (Khisa et al. 2014; Mikalitsa 2010; Oluoko-Odingo 2011) have pointed out a possibly high vulnerability of single-headed households to weather-related challenges in farming. The current study, however, did not establish the plausibility of this claim. The situation in the study area could have differed from those of other studies because most of the single-headed households had additional off-farm livelihood streams. The majority of the respondents had attained only primary school level (55%), while the smallest part of the population had tertiary education (9%). Besides, access to education also varied with the respondents' age, with the younger members of the population being more educated than their older counterparts. Such scenarios are likely to affect the adoption of new farming technologies. Various studies (Kassie et al. 2014; Oluoko-Odingo 2009; Pérez et al. 2015) have revealed the existence of an association between the level of education and adaptation to adverse environmental challenges.

Livelihood streams

The majority of the smallholders (83%) relied entirely on farming as a source of livelihood, with only 17% of them having additional livelihood options from off-farm income streams across the area. Farmers' capacity to respond to the impacts of climate variability and change has been shown to associate closely with the diversity of their livelihood streams (Nielsen and Reenberg 2010).
Thus, from the aforementioned observation, it is most likely that most of the farmers in the study area would have limited capacity to cope with the resultant impacts (Field 2012). Such a situation can be remedied by widening the diffusion of innovative strategies that will expose the smallholders to opportunities with which they can diversify their livelihood systems (Oluoko-Odingo 2011).

Farm sizes

The largest group of respondents (33%) owned farm sizes ranging between 2.1 and 2.5 ha. Of this, huge portions of the farm were set aside for maize cultivation and cattle rearing, with a smaller segment allocated to cattle rearing than to maize cultivation. According to the farmers, this was a result of the availability of crop residues which could be fed to animals. Other responses also indicated that the smallholders could still meet animal feed demands by hiring grazing fields from neighbours who had larger land sizes. The third largest portion of land went to the cultivation of beans and other pulses, though mostly in a mix with other 'friendly' crops. However, most of the drought-tolerant crops, like sorghum, finger millet and sweet potatoes, among others, were allocated a smaller share of farm size. This is despite their being regarded as 'saviour' crops, considering the current challenges facing maize cultivation in the area. Such a situation demonstrates that adaptable crops (Tittonell and Giller 2013) are yet to be accorded proper attention, thus underlining the need for targeted awareness-raising schemes. Furthermore, it was found that most of the farmers in the area were largely depending on 'free-ranging' livestock – a situation which exposes them to greater risks from climate uncertainties, through the resultant impacts on the availability of animal feed resources.
Climate variability situations in trans-Mara east sub-county Meteorological situations An analysis of rainfall data for the area indicated no major shifts in the mean annual amounts during the period 1980 to 2015, though the overall trend between 2000 and 2015 indicates a slight decline (Fig. 2). However, a detailed scrutiny of the monthly rainfall data for the area showed huge deviations from the long-standing regimes for the area. For instance, the area's established bimodal rainfall patterns, with the first peak in March to May and the second peak in November to December, have become highly irregular from a given year to the other. Considering the situations from 2000 to 2015, depicted that the first and second rainfall peaks were missed on six and four occasions, respectively. But in 1980 to 1999, the first and second rainfall peaks were missed on only three occasions for each peak. Mean annual measurements for rainfall and temperature for Trans-Mara East. (Data source: Kenya Meteorological Department, 2017) Details of the mean annual temperature situations in the area between 1980 and 2015 also showed a generally rising trend, with an overall increase on 1.2 °C during the period (Fig. 2). An examination of the mean monthly maximum temperatures for the area showed a progressive increase between 2000 and 2015, as compared to between 1980 and 1999. The period 2000–2015 registered highest monthly maximum temperatures of more than 28.3 °C on six occasions (2000, 2005, 2006, 2009, 2011, 2015) whereas the 1980–1989 monthly maximum records only exceeded 28.3 °C on just four occasions (1981, 1982, 1983, 1997). These climatic situations mirrored the area's farm-level experiences, covered in the next section of this study, which also showed decreasing rainfall amounts and increasingly rising temperatures in the area. The findings are also in tandem with other studies which have indicated increasing climate variability in Kenya (Khisa et al. 2014; Mutunga et al. 
2017; Oluoko-Odingo 2011) and other parts of the world (Asseng et al. 2015; Labbé et al. 2016; Thornton et al. 2014). Such increasing climatic shifts have been shown to have potentially adverse implications for food security through their impact on crop output in Kenya and other parts of the developing world (Campbell et al. 2016; Cohn et al. 2016; Oluoko-Odingo 2011; Rigolot et al. 2017; Silvestri et al. 2015; Wambua and Omoke 2014). Extensive crop and livestock performances Considering crop performances in the area between 2010 and 2015, maize emerged as the most negatively affected crop in the period, compared to the other crops, based on the Weighted Average Index (Fig. 3). Among the livestock enterprises in the area, donkeys and cattle were found to have been the most affected by climate-related feed shortages, compared to goats and sheep (Table 1). These findings corroborate other studies which have reported the resilience of goats to varying climatic situations, owing to their more diverse feeding habits compared to other farm animals (Opiyo et al. 2015). Crop performances under different climatic situations in the study area between 2000 and 2015 (Source: Field Data, 2017) Table 1 Climate variability-related feed shortages among livestock in the area (Source: Field Data, 2017) Smallholder responses to challenges related to their output indicated climate variability as the top possible cause of depressed crop and livestock performances in the area (Fig. 4), mostly manifested through prolonged droughts and rainfall uncertainties. Key factors attributed to depressed crop and livestock output in the area (Source: Field Data, 2017) Delayed rainfall with irregular patterns affects planting seasons. Crops also become increasingly susceptible to the effects of destructive pests and disease outbreaks (Mutunga et al. 2017; Okumu 2013).
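The crop and livestock rankings above rest on a Weighted Average Index (WAI). The paper does not spell out its formula, so the sketch below uses a common formulation – response counts per performance category weighted by assigned scores – with hypothetical crop names and counts rather than the study's data:

```python
# Minimal WAI sketch (assumed formulation, not the paper's own):
# WAI = sum(f_i * w_i) / sum(f_i), where f_i is the number of respondents
# choosing category i and w_i its assigned weight.

def weighted_average_index(counts, weights):
    """Weighted average of response categories over all respondents."""
    total = sum(counts)
    return sum(f * w for f, w in zip(counts, weights)) / total

# Hypothetical respondent counts per performance category,
# weighted 1 = severely affected ... 4 = unaffected.
weights = [1, 2, 3, 4]
responses = {
    "maize":   [55, 30, 10, 5],
    "sorghum": [10, 20, 40, 30],
}
for crop, counts in responses.items():
    print(crop, round(weighted_average_index(counts, weights), 2))
```

Under this illustrative weighting, a lower index flags the more affected enterprise, which is how a crop such as maize could rank below the others.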
Such delayed rains and pest pressures affect the availability of livestock feed resources, thus undermining overall productivity (Field 2012). Smallholders' perceptions of climate variability and change Perceptions about temperature and rainfall situations Any preparedness towards a potentially adverse situation, including climate change, has been shown to correspond to perceptions and awareness levels among the affected individuals and/or groups (Le Dang et al. 2014; Raworth 2007). Thus, smallholders in Trans-Mara East sub-County were surveyed for their perceptions of climatic situations, particularly their experiences with rainfall, drought, and temperature in the area between 2000 and 2015. Temperature experiences, surveyed in terms of the length and frequency of the warmest seasons and the associated actual feel, indicated that a majority of the respondents (> 80%) perceived an upward trend. Less than 10% of them reported either decreased or unchanged temperature situations in the area (Fig. 5). Smallholders' perceptions of the temperature and rainfall situations in Trans-Mara East between the years 2000 and 2015 (Source: Field Data, 2017) Their experiences with rainfall were also surveyed, in terms of amounts, duration, and frequency. Responses showed that a majority of them (> 65%) had observed a downward trend, with less than 20% reporting either increasing or unchanging precipitation during the period. Smallholder experiences of drought were also surveyed for the same period. The responses indicated that the area had experienced moderate to severe incidences of drought, at increasing rates (Table 2). Table 2 Drought occurrences in Trans-Mara East sub-County from 2000 to 2015 Drought situations undermine the smallholders' capacity to fight poverty and make progress (Barrett and Carter 2013; Kithia 2014; Oluoko-Odingo 2009).
This is likely to jeopardise the journey towards realising the much desired sustainable development objectives at the downstream levels (Kassie et al. 2015; Labbé et al. 2016; Wambua and Omoke 2014). Diminishing household livelihood situations in rural areas are largely attributed to climate uncertainties which constrict crop and livestock performance (Mertz et al. 2009a; Oluoko-Odingo 2011). The resulting destitution and scarcity of essential life elements, such as food, often trigger other socio-economic concerns including household-level conflicts, deteriorating health situations, and environmental degradation (Le Dang et al. 2014; Field 2012; Mutunga et al. 2017; Tittonell and Giller 2013; Wiesmann et al. 2014). These circumstances are already commonplace in the area. The observations clearly demonstrate a congruence between smallholders' perceptions and meteorological indicators of the climatic situations in the area, both of which point to increasingly unpredictable rainfall patterns in the study area. The findings thus reinforce other reports on the climatic situation in Kenya (Khisa et al. 2014; Marenya and Barrett 2007; Mutunga et al. 2017; Okumu 2013; Wambua and Omoke 2014) and other parts of the world (Deressa et al. 2011; Kassie et al. 2013; Labbé et al. 2016; Muzamhindo 2015; Ndamani and Watanabe 2015; Uddin et al. 2014). Reports on Kenya's situation have shown rising temperatures across the country, with rainfall patterns becoming more irregular and unpredictable (Klisch et al. 2015; Mutunga et al. 2017). For instance, national meteorological reports indicate a warming trend in temperatures between 1961 and 2009, with overall rises in minimum and maximum temperatures of 0.7–2.0 °C and 0.2–1.3 °C respectively, and the warmest records of the period occurring between 2000 and 2009. The dwindling rainfall in Trans-Mara East corroborates other studies from the rest of Kenya (Barrett and Carter 2013; Kithia 2014; Oluoko-Odingo 2009).
Such circumstances can destabilise smallholders' capacity to fight poverty and progress towards local-level sustainability objectives (Kassie et al. 2015; Labbé et al. 2016; Wambua and Omoke 2014). The resulting adverse impacts will in the long run affect both human well-being, at the household level, and the overall health and productivity of natural ecosystems (Mikalitsa 2015; Nyamwaro et al. 2006; Oluoko-Odingo 2011). These circumstances are reportedly common in many parts of rural Kenya. Perceptions and smallholders' socio-economic strata A Pearson correlation analysis of the association between smallholders' perceptions and their socio-economic strata indicated a weak positive association (r < 0.3, p ≥ 0.05) between smallholders' perceptions and either their age, marital status, level of education, or livelihood streams. However, there was a strong positive association (r = 0.430, p ≤ 0.01) between their perceptions of climate variability and farm sizes. Rapidly evolving information technology (Musingi and Ayiemba 2012; Thugge et al. 2011) plays a vital role in availing a wide range of information-access platforms to people, with a greater likelihood of influencing their subsequent perceptions and decisions. In the study area, smallholders had a wide range of options at their disposal through which they could easily access climate- and agriculture-related content. These scenarios could have influenced their perceptions across their demographic and socio-economic strata, as indicated by the weak association between their perceptions and their ages, levels of education, marital status, and livelihood streams. Access to information in Kenya has been bolstered by the rapid penetration of mobile phone technology, which currently stands at over 30 million handsets within a population of 45 million people (Klisch et al. 2015; Thugge et al. 2011; Wiesmann et al. 2014).
Given that these mobile phones can receive signals from FM radio stations and the internet, information access across the country has been greatly enhanced, thus levelling out key socio-economic and demographic differences. On the other hand, smallholders' farm sizes had a strong positive relationship with their perceptions. Smallholders with much smaller farms (≤ 1.0 ha) were found to be practising intensive mixed-cropping systems with easy options for manual watering in case of prolonged droughts. The strong association could thus have been driven by the greater worries and "bad" experiences of the smallholders with larger farms, who were found mostly to practise mono-cropping of maize, which is prone to weather-related challenges (Okumu 2013). Smallholders' adaptive capacity Smallholders' desired adaptation strategies As a precautionary measure against the impending effects of climate variability, farmers often employ various strategies. These measures are largely dependent on their perception, level of awareness, education, and affordability – tied to their levels of income (Ndamani and Watanabe 2015; Oluoko-Odingo 2011). In Trans-Mara East, the current smallholders' adaptation strategies include those related to cushioning, and enhancing the productivity of, their cropping and livestock rearing practices. Notwithstanding the presence of these measures, smallholders in the area indicated a desire for additional adaptation schemes (Labbé et al. 2016; Silvestri et al. 2015), ranked according to their perceived level of importance (Table 3). For example, improved crop varieties appeared to be the most desired adaptation strategy, possibly owing to farmers' aspiration for a maize variety which is free from the effects of climatic, and other external, variances.
These include Maize Lethal Necrosis disease, reported in the area and its environs in 2011, for which a tangible remedy is yet to be found (personal communication with experts from KARLOFootnote 4 and CIMMYTFootnote 5). Table 3 Smallholders' ranking of the adaptation strategies employed in Trans-Mara East sub-County, as per their perceived importance levels Crop diversification, among other crop management strategies, was also highly ranked as a potentially viable adaptation strategy. This is a sound strategy, since having different types of crops on one's farm can act as security against the failure or poor yield performance of any single crop. Many studies have also highlighted crop diversification options as suitable adaptation measures (Le Dang et al. 2014; Mutunga et al. 2017; Ndamani and Watanabe 2015; Uddin et al. 2014). Improving livestock feeding techniques also appeared among the highly desired strategies, preferred over destocking. The farmers regard their animals as a major source of livelihood and a form of 'security' against emergencies such as healthcare and children's education; they would therefore prefer to keep more stock under improved feeding strategies which can sustain, and even enhance, productivity and market value. These findings agree with other studies (McKune et al. 2015; Rigolot et al. 2017) on the options available to smallholders against climate-related challenges. Adjusting planting dates, irrigation, and enhanced post-harvest management techniques also featured among the highly desired crop management practices in the area, compared to agroforestry. This is probably a result of the farmers' perceived changes in rainfall patterns, which directly affect the availability of crop moisture requirements (Mertz et al. 2009b). The farmers would prefer to reconfigure their planting dates to avoid being disadvantaged by the shifting rainfall regimes.
This was in addition to having irrigation as an alternative avenue for meeting crop water demands during long dry spells. However, options such as irrigation are capital intensive, despite being the ideal strategy against rainfall uncertainties. Most of the smallholders therefore cannot afford to adopt them unless they are collectively supported by the government and other key stakeholders in the agricultural sector. This can be addressed through a public-private-community partnership approach, as has been shown to work elsewhere (Raworth 2007). Moreover, and in line with the smallholders' highly ranked adaptation measures in Trans-Mara East, enhanced post-harvest management practices constitute one of the key measures which can be harnessed at the farm level to curtail losses of farm produce. For example, annual grain losses in sub-Saharan Africa exceed 20% of the harvested yields (Field 2012; Iizumi and Ramankutty 2016), yet these adverse scenarios can easily be contained through suitable transportation and storage methods. Smallholders' adaptive capacity and their socioeconomic levels Results from the Pearson correlation analysis of the association between smallholders' adaptive capacity and their socio-economic circumstances were also revealing. Education levels (r = 0.229, p ≤ 0.05) and farm sizes (r = 0.534, p ≤ 0.01) were found to have significant positive associations with adaptive capacity. There was also a weak, non-significant positive association between their adaptive capacity and marital status (r = 0.154, p ≥ 0.05) and diversity of livelihood streams (r = 0.034, p ≥ 0.05). The association between adaptive capacity and age, though negative, was also weak and not significant (r = − 0.026, p ≥ 0.05).
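The coefficients quoted throughout this section are standard Pearson product-moment correlations. As a minimal illustration of how they are computed (the data below are hypothetical stand-ins, not the study's survey records):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: years of education vs. an adaptive-capacity index.
education = [4, 8, 8, 12, 12, 14, 16]
capacity = [2, 3, 4, 4, 6, 5, 7]
print(round(pearson_r(education, capacity), 3))
```

In practice the study also reports significance levels; a library routine such as `scipy.stats.pearsonr` returns both the coefficient and its p-value in one call.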
Successful implementation of desired adaptation has been shown to be largely associated with a number of socio-economic dynamics, including the smallholder's age, marital status, well-being, educational level, farm size, and diversity of livelihood streams (Le Dang et al. 2014; Silvestri et al. 2015). These studies are in agreement with the strong positive association between adaptive capacity and educational levels, as well as farm size, in the study area. Education, for instance, enhances skill acquisition among individuals and, in the process, their prospects of occupying societal positions which expose them to a wide range of information on adaptation and to more meaningful income streams. Larger farm sizes also allow smallholders to allocate different portions of their land to various adaptable crop and livestock enterprises, thus raising their adaptive capacity (Fisher et al. 2015). This possibly underpins the significantly strong positive correlation between the smallholders' farm sizes and their adaptive capacity in Trans-Mara East. The weak positive association between smallholders' marital status and adaptive capacity, as well as their livelihood systems, could be a result of other intervening dynamics in the area (Kipsisei 2011; Nyamwaro et al. 2006). For example, most of the single women in the area had additional off-farm income streams, including monthly stipends from a national social safety net programme. Such policy-driven programmes have been shown to reduce societal inequality gaps. These scenarios possibly explain the contrasting observations on marital status between this study and other studies (Moyo et al. 2012; Oluoko-Odingo 2011; Uddin et al. 2014) which show a negative association between marital status and adaptive capacity.
However, those who largely relied on off-farm livelihood streams in Trans-Mara East were receiving low wages, as most of them were engaged in poorly paying ventures, including casual labour. Such meagre income streams are mostly exhausted by competing household demands, leaving limited surplus to inject into more productive and sustainable initiatives that would boost their adaptive capacity. This explains the weak positive association between adaptive capacity and diversity of income streams in the area. Farm productivity has been shown to deteriorate with the farmer's age, especially among rural smallholders who largely rely on their own physical labour to execute many farming responsibilities (Deressa et al. 2011; Labbé et al. 2016; Uddin et al. 2014). These observations corroborate the negative, though weak, correlation between age and adaptive capacity in the study area. Owing to such observations in other parts of the world, a number of studies and key global-level stakeholders have sounded alarm bells over the future peril of food security. This is depicted by documentation indicating that the median age of farmers has been rising continually, contrary to the dropping median age of the overall populations in many developing countries. Constraints to smallholders' adaptation Key adaptation constraints In spite of the smallholders' desire to put in place workable safeguards against the potentially adverse impacts of climate variability and change, a number of challenges stand in their way. These hurdles include those emanating from both the downstream and upstream (policies and programmes) levels (Iizumi and Ramankutty 2016; Raworth 2007) (Table 4).
Table 4 Key challenges constraining smallholders from taking up adaptation measures against climate variability in Trans-Mara East (N = 100) High costs of farm inputs, limited access to micro-credit facilities, uncertain commodity prices, and poor road networks featured among the topmost concerns for smallholders in their quest for robust adaptation options. Specifically, they decried the "very expensive" planting materials for various crops, such as beans and maize seeds in the retail stores, as well as the "skewed circulation" of the GOKFootnote 6's subsidised fertilisers; as a result, they often opt for "cheaper" alternatives such as establishing new crops using yields from previous harvests and planting without fertilisers. However, such practices have been shown to increase crops' vulnerability to pests and diseases, in addition to reducing their overall vigour and eventual yields (Mertz et al. 2009b; Tittonell and Giller 2013). Besides, their constrained livelihood systems, compounded by limited access to credit facilities, diminish their ability to raise funds for the adoption of meaningful adaptation strategies, owing to cost implications. Inadequate access to client-friendly credit facilities inhibits people's abilities to venture into more rewarding enterprises (Barrett and Carter 2013; Oluoko-Odingo 2011), and this is the case for smallholders in Trans-Mara East. Such a state of affairs inhibits their abilities to broaden livelihood streams through more rewarding on-farm and off-farm schemes – a situation which directly impacts their ability to instil climate-smart practices on their farms (Raworth 2007; Silvestri et al. 2015). Roads are the only transportation networks available in Trans-Mara East, relied upon by smallholders to reach the markets.
However, the sorry state of these roads, as per the farmers' experiences and the researcher's own notes in the area, constitutes a key impairment to obtaining the actual market value for the smallholders' produce. This affects the farmers' morale, especially in relation to the need to venture into potentially adaptable farming options. The challenge is exacerbated by the absence of value-addition opportunities for yields in the area (source: FGDs and key informants). Disjointed access to weather-related information and agricultural extension services exposes the smallholders to the perils of adverse climatic situations in the area (Le Dang et al. 2014). For instance, most of them rely on vernacular radio stations for farming-related information, yet these channels offer limited deliberations on climate change matters. Besides, the farmers also indicated that they "rarely see" the taxpayer-funded agricultural extension officers, who are supposed to be the first-line promoters of sustainability practices at the farm level. Such situations impede the penetration, and subsequent adoption, of tangible adaptation measures among smallholders (Mutunga et al. 2017; Oluoko-Odingo 2011; Silvestri et al. 2015). Limited farm sizes, land tenure issues, and common inter-ethnic conflicts in the area also contribute to the slow pace of adopting sustainable farming practices (key informant interviews and Kipsisei 2011). Further, with the burgeoning human populations in the area, demand for more land under cultivation is continually rising. This challenge is perpetuated by long-held traditions of father-to-son land inheritance – the outcome being a continued subdivision of land into uneconomical units (Wiesmann et al. 2014). Consequently, meagre output from these units drives many of them into destitution and despondency. These conditions have been associated with the rampant cattle rustling and inter-ethnic conflicts in the area (Kipsisei 2011; Nyamwaro et al. 2006).
Combined, these factors undermine any incentive to invest in long-term adaptation measures in the area (Le Dang et al. 2014). The current observations agree with Mutunga et al. (2017) and Opiyo et al. (2015), who recorded similar concerns in other parts of Kenya, as well as Deressa et al. (2011) and Ndamani and Watanabe (2015) in Ethiopia and Ghana, respectively. Adaptation constraints and socioeconomic strata Pearson correlation analysis (Field 2009) of the association between the smallholders' adaptation constraints and their socioeconomic strata indicated a significantly strong negative correlation (p ≤ 0.01, r < 0) with level of education and diversity of livelihood streams. However, there was also a significantly strong positive association (p ≤ 0.01, r > 0) between age and the adaptation constraints, while both marital status and farm size exhibited weak, non-significant negative associations with the constraints (p ≥ 0.05, r < 0). With enhanced access to higher levels of education, one is likely to acquire more skills useful in solving life-related challenges, at both individual and societal levels, thus broadening one's social and technical capital (Godfray et al. 2010; Musingi and Ayiemba 2012). Besides, acquiring more skills is likely to enable individuals to access various livelihood streams. This then enables them to build stronger financial and technical capital, resulting in lowered socioeconomic constraints on their undertakings (Thornton et al. 2014; Wambua and Omoke 2014; Wilkinson 2015). These observations thus explain the strong negative relationship between smallholders' adaptation constraints and their level of education, as well as their livelihood streams, in Trans-Mara East. Moreover, younger- to mid-age segments of populations have been shown to be better endowed with social, technical, and financial capital than the older segments (Nielsen and Reenberg 2010; Wilkinson 2015).
These forms of capital, as elucidated in Brand (2009) and Raworth (2007), accord individuals, including smallholders, an upper hand in confronting any abrupt or projected socio-economic challenges, from household to societal levels. This applies to smallholders across many parts of the world (Deressa et al. 2011; Moyo et al. 2012; Ndamani and Watanabe 2015; Uddin et al. 2014), as their access to the aforementioned forms of capital enables them to leapfrog farm-related constraints. These records support the strong positive association between smallholders' age (25–64 years) and adaptation constraints in the study area. Smallholder family labour and decision-making input constitute key social capital which, when harnessed constructively, enhances farm enterprise performance while limiting potential constraints (Mertz et al. 2009a; Oluoko-Odingo 2011). This is especially so among female-headed households compared to male-headed ones (McKune et al. 2015; Mikalitsa 2010). For instance, single mothers' societal roles in sub-Saharan Africa are often constrained by prevailing cultural practices such as those pertaining to land tenure and labour utilisation. Such situations are likely to create gender-based capital discrepancies, with corresponding effects on farm-level responses and constraints in the face of key challenges led by climate variability. These observations support the current study's findings of a generally negative association between the smallholders' constraints and their marital status, as well as their farm sizes, in the study area. The above findings indicate that Trans-Mara East sub-County has undoubtedly been experiencing climate variability challenges between 1980 and 2015. The key manifestations of these situations include increased rainfall uncertainties, intensifying droughts, and rising temperatures.
These scenarios have largely affected crop and livestock performances in the area, with corresponding negative effects on household food security and income positions. Moreover, smallholders' perceptions of the climatic situations in the area were in tandem with the meteorological records and existing literature. Their perceptions were largely not associated with their socioeconomic characteristics, including marital status, age, level of education, and livelihood streams. This was due to other intervening subtleties, such as the rapidly increasing penetration of information technology systems into Kenyan rural areas, which possibly shaped the association. Farm size appeared only to magnify the losses associated with climatic uncertainties, hence the strength and direction of its relationship with smallholders' perceptions. Smallholders' adaptive capacity indicated a dynamic community with a high degree of readiness to make the requisite adjustments against climate variability and its associated impacts, given financial, technical and social support. Cues for this readiness include current crop diversification options, adjusted livestock feeding techniques, and attuned key household diets, with their most desired adaptation options being improved crop varieties, livestock feeding techniques and crop diversification. Further, smallholders' education levels and farm sizes had a positive association with their adaptive capacity. However, the associations between their adaptive capacity and their marital status, diversity of livelihood streams, and age were weak and not significant.
Key constraints on smallholders' adaptive capacity in the area included high costs of farm inputs, limited access to credit, market uncertainties, poor road networks, limited livelihood streams, and disjointed agricultural- and climate-related information systems, as well as farm sizes, land tenure issues and inter-ethnic conflicts in the area. Moreover, Pearson correlation analysis showed a significantly strong negative correlation between the constraints and both level of education and diversity of livelihood streams, and a notably strong positive association between age and the constraints, unlike either marital status or farm size. Education and livelihood diversification, for instance, can enhance people's capacity to combat various environmental challenges, including climate change.

Footnotes
1. Kenya National Bureau of Statistics.
2. Commission for Revenue Allocation.
3. Maize farming in the area is also "at cross-roads" due to losses attributed to Maize Lethal Necrosis disease.
4. Kenya Agricultural Research and Livestock Organization.
5. International Maize and Wheat Improvement Centre.
6. Government of Kenya.

Abbreviations
CIMMYT: International Maize and Wheat Improvement Centre
CRA: Commission for Revenue Allocation
DWIS: Disjointed weather information streams
FGDs: Focus group discussions
GoK: Government of Kenya
KARLO: Kenya Agricultural Research and Livestock Organization
KNBS: Kenya National Bureau of Statistics
PCI: Problem confrontational index
WAI: Weighted average index

References
Asseng, S., F. Ewert, P. Martre, and R. Rötter. 2015. Rising temperatures reduce global wheat production. Nature Climate Change 5: 143. Retrieved from https://www.nature.com/articles/nclimate2470.
Barrett, C.B., and M.R. Carter. 2013. The economics of poverty traps and persistent poverty: Empirical and policy implications. Journal of Development Studies 49 (7): 976–990. https://doi.org/10.1080/00220388.2013.785527.
Brand, F. 2009. Critical natural capital revisited: Ecological resilience and sustainable development.
Ecological Economics 68: 605–612.
Campbell, B., S. Vermeulen, and P. Aggarwal. 2016. Reducing risks to food security from climate change. Global Food Security. Retrieved from http://www.sciencedirect.com/science/article/pii/S2211912415300262.
Cohn, A., L. VanWey, and S. Spera. 2016. Cropping frequency and area response to climate variability can exceed yield response. Nature Climate Change. Retrieved from http://agri.ckcest.cn/ass/NK005-20160321002.pdf.
Deressa, T., R. Hassan, and C. Ringler. 2011. Perception of and adaptation to climate change by farmers in the Nile basin of Ethiopia. The Journal of Agricultural Science. Retrieved from https://www.cambridge.org/core/journals/journal-of-agricultural-science/article/perception-of-and-adaptation-to-climate-change-by-farmers-in-the-nile-basin-of-ethiopia/98FC44BF50B3E78DC8205A464097CDB8.
Devkota, R.P., V.P. Pandey, U. Bhattarai, H. Shrestha, S. Adhikari, and K.N. Dulal. 2017. Climate change and adaptation strategies in Budhi Gandaki River basin, Nepal: A perception-based analysis. Climatic Change 140 (2): 195–208. https://doi.org/10.1007/s10584-016-1836-5.
Field, A. 2009. Discovering statistics using SPSS. 3rd ed. SAGE Publications Ltd. Retrieved from https://books.google.co.ke/books?hl=en&lr=&id=srb0a9fmMEoC&oi=fnd&pg=PP2&ots=u2sYDbFXLF&sig=P4JKapbwHnbCzj6lCfeezOmGcz8&redir_esc=y#v=onepage&q&f=false.
Field, C. 2012. Managing the risks of extreme events and disasters to advance climate change adaptation: Special report of the intergovernmental panel on climate change.
Retrieved from https://books.google.com/books?hl=en&lr=&id=nQg3SJtkOGwC&oi=fnd&pg=PR4&dq=Managing+the+risks+of+extreme+events+and+disasters+to+advance+climate+change+adaptation.+A+special+report+of+Working+Groups+I+and+II+of+the+Intergovernmental+Panel+on+Climate+Change.+Cambridge:+Cambridge+University+Press.&ots=13CfrssDWM&sig=xmKettobUmF5AFtaWL7uvBpptlc.
Fisher, M., T. Abate, R.W. Lunduka, W. Asnake, Y. Alemayehu, and R.B. Madulu. 2015. Drought tolerant maize for farmer adaptation to drought in sub-Saharan Africa: Determinants of adoption in eastern and southern Africa. Climatic Change 133 (2): 283–299. https://doi.org/10.1007/s10584-015-1459-2.
Godfray, H., J. Beddington, I. Crute, and L. Haddad. 2010. Food security: The challenge of feeding 9 billion people. Retrieved from http://science.sciencemag.org/content/327/5967/812.short.
Gornott, C., and F. Wechsung. 2016. Statistical regression models for assessing climate impacts on crop yields: A validation study for winter wheat and silage maize in Germany. Agricultural and Forest Meteorology. Retrieved from https://www.infona.pl/resource/bwmeta1.element.elsevier-17a3ef0a-dd52-3eaa-a59b-8c0ab0a836be.
Hoetker, G. 2007. The use of logit and probit models in strategic management research: Critical issues. Strategic Management Journal 28 (4): 331–343. https://doi.org/10.1002/smj.582.
Iizumi, T., and N. Ramankutty. 2016. Changes in yield variability of major crops for 1981–2010 explained by climate change. Environmental Research Letters. Retrieved from http://iopscience.iop.org/article/10.1088/1748-9326/11/3/034003/meta.
Kassie, M., M. Jaleta, B. Shiferaw, and F. Mmbando. 2013. Adoption of interrelated sustainable agricultural practices in smallholder systems: Evidence from rural Tanzania. Forecasting and Social …. Retrieved from http://www.sciencedirect.com/science/article/pii/S0040162512001898.
Kassie, M., S. Ndiritu, and J. Stage. 2014. What determines gender inequality in household food security in Kenya?
Application of exogenous switching treatment regression. World Development. Retrieved from http://www.sciencedirect.com/science/article/pii/S0305750X13002374.
Kassie, M., H. Teklewold, P. Marenya, M. Jaleta, and O. Erenstein. 2015. Production risks and food security under alternative technology choices in Malawi: Application of a multinomial endogenous switching regression. Journal of Agricultural Economics 66 (3): 640–659. https://doi.org/10.1111/1477-9552.12099.
Khisa, G., S. Oteng'i, and S. Mikalitsa. 2014. Effect of climate change on small scale agricultural production and food security in Kitui District, Kenya.
Kipsisei, G. 2011. Environmental degradation and social conflict in Trans Mara district, south Rift Valley of Kenya. Retrieved from http://erepository.uonbi.ac.ke/bitstream/handle/11295/4483/Kipsisei_Environmental degradation and social conflict.pdf?sequence=1.
Kithia, S. 2014. Effects of soil erosion on sediment dynamics, food security and rural poverty in Makueni District, eastern Kenya. International Journal of Applied. Retrieved from http://erepository.uonbi.ac.ke/bitstream/handle/11295/78461/Wambua_Effects of soil erosion on sediment dynamics, food security and rural poverty in Makueni District.pdf?sequence=1.
Klisch, A., C. Atzberger, and L. Luminari. 2015. Satellite-based drought monitoring in Kenya in an operational setting. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 40 (7): 433–439. Retrieved from https://scholar.google.com/scholar?cluster=7211563320755842275&hl=en&as_sdt=0,5&as_vis=1.
Kothari, C. 2004. Research methodology: Methods and techniques. Retrieved from https://books.google.com/books?hl=en&lr=&id=hZ9wSHysQDYC&oi=fnd&pg=PA2&ots=1sTbpE92G7&sig=ymkzuN6naUisIflbyhSid6wzZCY.
Labbé, J., J.D. Ford, L. Berrang-Ford, B. Donnelly, S. Lwasa, D.B. Namanya, et al. 2016. Vulnerability to the health effects of climate variability in rural southwestern Uganda.
Mitigation and Adaptation Strategies for Global Change 21 (6): 931–953. https://doi.org/10.1007/s11027-015-9635-2. Le Dang, H., E. Li, J. Bruwer, and I. Nuberg. 2014. Farmers' perceptions of climate variability and barriers to adaptation: Lessons learned from an exploratory study in Vietnam. Mitigation and Adaptation Strategies Retrieved from http://link.springer.com/article/10.1007/s11027-012-9447-6. Marenya, P., and C. Barrett. 2007. Household-level determinants of adoption of improved natural resources management practices among smallholder farmers in western Kenya. Food Policy Retrieved from http://www.sciencedirect.com/science/article/pii/S0306919206001011. McKune, S., E. Borresen, A. Young, and T. Ryley. 2015. Climate change through a gendered lens: Examining livestock holder food security. Global Food Retrieved from http://www.sciencedirect.com/science/article/pii/S221191241500022X. Mertz, O., K. Halsnæs, J.E. Olesen, and K. Rasmussen. 2009a. Adaptation to climate change in developing countries. Environmental Management 43 (5): 743–752. https://doi.org/10.1007/s00267-008-9259-3. Mertz, O., C. Mbow, A. Reenberg, and A. Diouf. 2009b. Farmers' perceptions of climate change and agricultural adaptation strategies in rural Sahel. Environmental Management 43 (5): 804–816. https://doi.org/10.1007/s00267-008-9197-0. Mikalitsa, S. 2010. Gender-specific constraints affecting technology use and household food security in western province of Kenya. African Journal of Food, Agriculture, Nutrition and Development Retrieved from https://www.ajol.info/index.php/ajfand/article/view/55327. Mikalitsa, S. 2015. Intrahousehold allocation, household headship and nutrition of under-fives: A study of western Kenya. African Journal of Food, Agriculture, Nutrition and Development Retrieved from https://www.ajol.info/index.php/ajfand/article/download/113414/103133. Moyo, M., B.M. Mvumi, M. Kunzekweguta, K. Mazvimavi, P. Craufurd, and P. Dorward. 2012. 
Farmer perceptions on climate change and variability in semi-arid Zimbabwe in relation to climatology evidence. African Crop Science Journal 20: 317–335. Musingi, J.K., and E.H. Ayiemba. 2012. Effects of technological development on rural livelihoods in developing world: A case study of effects of a large scale multipurpose dam on malaria prevalence in a rural community around Kenya's largest dam. European Scientific Journal 8 (14): 132–143. Mutunga, E., Charles, K., & Patricia, M. (2017). Smallholder farmers perceptions and adaptations to climate change and variability in Kitui County, Kenya. Retrieved from http://repository.seku.ac.ke/handle/123456789/3447. Muzamhindo, N. 2015. Factors influencing smallholder farmers ' adaptation to climate change and variability in Chiredzi District of Zimbabwe. 6 (9): 1–9. Nassiuma, D.K. 2000. Survey sampling: Theory and methods. Nairobi: University of Nairobi press. Ndamani, F., and T. Watanabe. 2015. Farmers' perceptions about adaptation practices to climate change and barriers to adaptation: A micro-level study in Ghana. Water Retrieved from http://www.mdpi.com/2073-4441/7/9/4593/htm. Nielsen, J.Ø., and A. Reenberg. 2010. Temporality and the problem with singling out climate as a current driver of change in a small west African village. Journal of Arid Environments 74 (4): 464–474. https://doi.org/10.1016/j.jaridenv.2009.09.019. Nyamwaro, S., Murilla, G., Mochabo, M., Wanjala, K. 2006. Conflict Minimizing Strategies on Natural Resource Management and Use: The Case for Managing and Coping with Conflicts Between Wildlife and Agro-pastoral Production Resources in Transmara District, Kenya. A Draft Paper Presented to the Pastoralism and Poverty Reduction in East Africa: A Policy Research Conference, June 27-28. Okumu, O.F. 2013. Small-scale farmers' perceptions and adaptation measures to climate change in Kitui County, Kenya. Kenya: University of Nairobi. Oluoko-Odingo, A.A. 2009. Determinants of poverty: Lessons from Kenya. 
GeoJournal 74 (4): 311–331. https://doi.org/10.1007/s10708-008-9238-5. Oluoko-Odingo, A.A. 2011. Vulnerability and adaptation to food insecurity and poverty in Kenya. Annals of the Association of American Geographers 101 (1): 1–20. https://doi.org/10.1080/00045608.2010.532739. Opiyo, F., O. Wasonga, M. Nyangito, J. Schilling, and R. Munang. 2015. Drought adaptation and coping strategies among the Turkana pastoralists of northern Kenya. International Journal of Disaster Risk Science 6 (3): 295–309. https://doi.org/10.1007/s13753-015-0063-4. Pérez, C., E. Jones, P. Kristjanson, and L. Cramer. 2015. How resilient are farming households and communities to a changing climate in Africa? A gender-based perspective. Global Environmental Retrieved from http://www.sciencedirect.com/science/article/pii/S0959378015000825. Raworth, K. 2007. Adapting to climate change: What's needed in poor countries, and who should pay. Oxfam Policy and Practice: Climate Change and Resilience 3 (1): 42–88. Rigolot, C., Voil, P. De, Douxchamps, S., & Prestwidge, D. 2017. INteractions between intervention packages, climatic risk, climate change and food security in mixed crop–livestock systems in Burkina Faso. Agricultural. Retrieved from http://www.sciencedirect.com/science/article/pii/S0308521X15300755. Silvestri, S., D. Sabine, K. Patti, F. Wiebke, R. Maren, M. Ianetta, et al. 2015. Households and food security: Lessons from food secure households in East Africa. Agriculture & Food Security 4 (1): 23. https://doi.org/10.1186/s40066-015-0042-4. Singh, B., A. Bohra, S. Mishra, R. Joshi, and S. Pandey. 2015. Embracing new-generation "omics" tools to improve drought tolerance in cereal and food-legume crops. Biologia Plantarum 59 (3): 413–428. https://doi.org/10.1007/s10535-015-0515-0. Thornton, P., P. Ericksen, and M. Herrero. 2014. Climate variability and vulnerability to climate change: A review. Global Change Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/gcb.12581/full. 
Thugge, K., Ndung'u, N., & Otieno, R. O. 2011. Unlocking the future potential for Kenya – The vision 2030. Tittonell, P., and K.E. Giller. 2013. When yield gaps are poverty traps: The paradigm of ecological intensification in African smallholder agriculture. Field Crops Research 143: 76–90. https://doi.org/10.1016/j.fcr.2012.10.007. Uddin, M., W. Bokelmann, and J. Entsminger. 2014. Factors affecting farmers' adaptation strategies to environmental degradation and climate change effects: A farm level study in Bangladesh. Climate Retrieved from http://www.mdpi.com/2225-1154/2/4/223/htm. Wambua, B. N., Omoke, K. J., & Mutua, T. M. 2014. Effects of Socio-Economic Factors on Food Security Situation in Kenyan Dry lands Ecosystem. Asian Journal of Agriculture and Food Science (ISSN: 2321–1571), 2(01). Retrieved from http://erepository.uonbi.ac.ke/bitstream/handle/11295/78454/Wambua_Effects of socio - economic factors on food security situation in Kenyan dry lands ecosystem.pdf?sequence=1. Wiesmann, U., Kiteme, B., Mwangi, Z. 2016. Socio-Economic Atlas of Kenya: Depicting the National Population Census by County and Sub-Location. Second, revised edition. KNBS, Nairobi. CETRAD, Nanyuki. CDE, Bern. Retrieved from https://www.kenya-atlas.org/pdf/Socio-Economic_Atlas_of_Kenya_2nd_edition_hires.pdf. Wilkinson, J. 2015. Food security and the global agrifood system: Ethical issues in historical and sociological perspective. Global Food Security. Retrieved from http://www.sciencedirect.com/science/article/pii/S2211912415300201. Huge appreciation goes to the Association of African Universities for their financial support. Kuresok Youth Empowerment through their leaders –Eng. Weldon Mutai, and Mr. Richard Rotich, are also thanked for assisting with field logistics. Equally thanked are the smallholders and all sub-County officers of both the national and county governments in Trans-Mara East. Funding for data collection was provided by the Association of African Universities (AAU). 
Even so, the AAU had no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript. The datasets supporting the conclusions of this article have been included within the article, with additional data available in the Open Science Framework repository at https://osf.io/5bwfg/. Department of Geography and Environmental Studies, University of Nairobi, P.O. Box 30197-00100, Nairobi, Kenya Harrison K. Simotwo, Stella M. Mikalitsa & Boniface N. Wambua HKS conceived the study and worked on the study design, data collection, analysis, interpretation and drafting of the manuscript. SMM reviewed and contributed to the study design, data collection, analysis, interpretation and drafting of the manuscript. BNW reviewed and contributed to the study design, data collection, analysis, interpretation and drafting of the manuscript. All authors read and approved the final manuscript. Correspondence to Harrison K. Simotwo. The authors declare that they have no competing interests. Harrison K. Simotwo is the main contributor of this research article; Stella M. Mikalitsa and Boniface N. Wambua contributed equally. Simotwo, H.K., Mikalitsa, S.M. & Wambua, B.N. Climate change adaptive capacity and smallholder farming in Trans-Mara East sub-County, Kenya. Geoenviron Disasters 5, 5 (2018). https://doi.org/10.1186/s40677-018-0096-2 Keywords: climate variability and change; Trans-Mara
Lie product formula In mathematics, the Lie product formula, named for Sophus Lie (1875), but also widely called the Trotter product formula,[1] named after Hale Trotter, states that for arbitrary m × m real or complex matrices A and B,[2] $e^{A+B}=\lim _{n\rightarrow \infty }(e^{A/n}e^{B/n})^{n},$ where eA denotes the matrix exponential of A. The Lie–Trotter product formula (Trotter 1959) and the Trotter–Kato theorem (Kato 1978) extend this to certain unbounded linear operators A and B.[3] This formula is an analogue of the classical exponential law $e^{x+y}=e^{x}e^{y}\,$ which holds for all real or complex numbers x and y. If x and y are replaced with matrices A and B, and the exponential replaced with a matrix exponential, it is usually necessary for A and B to commute for the law to still hold. However, the Lie product formula holds for all matrices A and B, even ones which do not commute. The Lie product formula is conceptually related to the Baker–Campbell–Hausdorff formula, in that both are replacements, in the context of noncommuting operators, for the classical exponential law. The formula has applications, for example, in the path integral formulation of quantum mechanics. It allows one to separate the Schrödinger evolution operator (propagator) into alternating increments of kinetic and potential operators (the Suzuki–Trotter decomposition, after Trotter and Masuo Suzuki). The same idea is used in the construction of splitting methods for the numerical solution of differential equations. Moreover, the Lie product theorem is sufficient to prove the Feynman–Kac formula.[4] The Trotter–Kato theorem can be used for approximation of linear C0-semigroups.[5] See also • Time-evolving block decimation References 1. Joel E. Cohen; Shmuel Friedland; Tosio Kato; F. P. Kelly (1982). "Eigenvalue inequalities for products of matrix exponentials" (PDF). Linear Algebra and Its Applications. 45: 55–95. doi:10.1016/0024-3795(82)90211-7. 2. Hall 2015 Theorem 2.11 3. 
Hall 2013 Theorem 20.1 4. Appelbaum, David (2019). "The Feynman-Kac Formula via the Lie-Kato-Trotter Product Formula". Semigroups of Linear Operators : With Applications to Analysis, Probability and Physics. Cambridge University Press. pp. 123–125. ISBN 978-1-108-71637-6. 5. Ito, Kazufumi; Kappel, Franz (1998). "The Trotter-Kato Theorem and Approximation of PDEs". Mathematics of Computation. 67 (221): 21–44. doi:10.1090/S0025-5718-98-00915-6. JSTOR 2584971. • Sophus Lie and Friedrich Engel (1888, 1890, 1893). Theorie der Transformationsgruppen (1st edition, Leipzig; 2nd edition, AMS Chelsea Publishing, 1970) ISBN 0828402329 • Albeverio, Sergio A.; Høegh-Krohn, Raphael J. (1976), Mathematical Theory of Feynman Path Integrals: An Introduction, Lecture Notes in Mathematics, vol. 423 (1st ed.), Berlin, New York: Springer-Verlag, doi:10.1007/BFb0079827, hdl:10852/44049, ISBN 978-3-540-07785-5. • Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, ISBN 978-1461471158 • Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-0-387-40122-5 • "Trotter product formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Kato, Tosio (1978), "Trotter's product formula for an arbitrary pair of self-adjoint contraction semigroups", Topics in functional analysis (essays dedicated to M. G. Kreĭn on the occasion of his 70th birthday), Adv. in Math. Suppl. Stud., vol. 3, Boston, MA: Academic Press, pp. 185–195, MR 0538020 • Trotter, H. F. (1959), "On the product of semi-groups of operators", Proceedings of the American Mathematical Society, 10 (4): 545–551, doi:10.2307/2033649, ISSN 0002-9939, JSTOR 2033649, MR 0108732 • Joel E. Cohen; Shmuel Friedland; Tosio Kato; F. P. 
Kelly (1982), "Eigenvalue inequalities for products of matrix exponentials" (PDF), Linear Algebra and Its Applications, 45: 55–95, doi:10.1016/0024-3795(82)90211-7 • Varadarajan, V.S. (1984), Lie Groups, Lie Algebras, and Their Representations, Springer-Verlag, ISBN 978-0-387-90969-1, pp. 99. • Suzuki, Masuo (1976). "Generalized Trotter's formula and systematic approximants of exponential operators and inner derivations with applications to many-body problems". Comm. Math. Phys. 51 (2): 183–190. doi:10.1007/bf01609348. S2CID 121900332.
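The convergence asserted by the Lie product formula is easy to check numerically for a pair of non-commuting matrices. The following sketch assumes NumPy is available; for self-containment the matrix exponential is computed by a truncated Taylor series (adequate for the small matrices used here), not by a library routine.

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via a truncated Taylor series.

    Fine for small matrices with modest norms; production code would
    use a scaling-and-squaring routine instead.
    """
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        result = result + term
    return result

# A pair of non-commuting 2x2 matrices (AB != BA).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

exact = expm(A + B)

def trotter(A, B, n):
    """n-th Lie-Trotter approximant (e^{A/n} e^{B/n})^n."""
    step = expm(A / n) @ expm(B / n)
    return np.linalg.matrix_power(step, n)

# The error shrinks roughly like O(1/n) as n grows.
errors = [np.linalg.norm(trotter(A, B, n) - exact) for n in (1, 10, 100)]
print(errors)
```

For these matrices $e^A e^B \neq e^{A+B}$, so the $n=1$ error is far from zero, while the $n=100$ product already agrees with $e^{A+B}$ to about two decimal places in each entry.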
\begin{document} \title{Decidability of Intuitionistic Sentential Logic with Identity via Sequent Calculus\thanks{The results published in this paper were obtained by the authors as part of the project granted by National Science Centre, grant no 2017/26/E/HS1/00127.}} \begin{abstract} The aim of our paper is twofold: firstly, we present a sequent calculus for an intuitionistic non-Fregean logic \textsf{ISCI}, which is based on the calculus presented in \cite{isci}; secondly, we discuss the problem of decidability of \textsf{ISCI} \textit{via} the obtained system. The original calculus from \cite{isci} did not provide the decidability result for \textsf{ISCI}. There are two problems to be solved in order to obtain this result: the so-called loops characteristic of intuitionistic logic and the lack of the subformula property due to the form of the identity-dedicated rules. We discuss possible routes to overcome these problems: we consider a weaker version of the subformula property, guarded by the complexity of formulas which can be included within it; we also present a proof-search procedure such that whenever it fails, there exists a countermodel (in Kripke semantics for \textsf{ISCI}). \end{abstract} \section{Introduction} The motivation for introducing a number of non-Fregean logics (\textsf{NFL}) is the willingness to formalize the ontology of situations found in Wittgenstein's \textit{Tractatus}. Wittgenstein stood in opposition to Frege's denotational theory: whereas Frege held that sentences denote either \textit{Truth} or \textit{Falsity}, Wittgenstein underlines that a comparison of two sentences should be based on their \textit{logical form} rather than their \textit{logical value}. It is the logical form which contains the information about \textit{the configuration of the objects in the state of affairs} \cite[p.~15]{witt}. 
Roman Suszko, who developed non-Fregean theories \cite{Suszko:1968a,Suszko:1968b,Suszko1971,suszko-abolition,bloom1972investigations}, followed Wittgenstein's ontology and rejected the so-called Fregean Axiom: the idea that the truth values of sentences are sufficient to judge their identity. It is worth highlighting that other aspects of Frege's theory were not negated. Suszko would underline that building a logical system without Frege's Axiom is like \textit{realising Euclid's program without his fifth postulate} \cite{bloom1972investigations}. Ergo, Suszko's formalization of the \textit{Tractatus} can be seen as an extension of Frege's theory rather than its alternative. Moreover, Suszko's approach makes the language more expressive and better depicts the colloquial intuition behind the use of natural language \cite{Omyla2016}. Suszko obtained a series of non-Fregean logics by adding a new connective, identity, together with axioms characterizing it. In contrast to classical equivalence, two sentences are identical only when they denote the same situation. The weakest \textsf{NFL} introduced by Suszko is \textit{Sentential Calculus with Identity} (\textsf{SCI}). It is built upon the classical propositional calculus by the addition of the identity connective `$\equiv$'. Suszko added four axioms capturing the properties of identity: identity is reflexive, it entails equivalence, and it is a congruence relation. Moreover, Suszko noted that any theory can be built upon the non-Fregean framework. We follow this idea and, similarly to \cite{isci, calculus19901,Lukowski1990b,Lukowski1993,Lukowski1992,Lewitzka09,lewitzka11}, we study \textsf{SCI} in an intuitionistic setting. \section{Intuitionistic Sentential Calculus with Identity} Despite Suszko's claim that other non-classical theories can be modelled within \textsf{NFL}, extensions of \textsf{SCI} have been relatively rarely analyzed in the literature. 
\textsf{ISCI}'s name and semantics were originally introduced in \cite{calculus19901}, and later on appeared in \cite{isci} (\cite{Lukowski1990b,Lukowski1993,Lukowski1992} are basically extensions of \cite{calculus19901}). Given its intuitionistic setting, the identity connective requires an appropriate constructive interpretation. We follow the interpretation of identity proposed by Chlebowski in \cite{isci}, where the author extends the well-known Brouwer-Heyting-Kolmogorov interpretation (henceforth: BHK-interpretation) of intuitionistic connectives as follows: \begin{center} \begin{tabular}{c|l} there is no proof of $\bot$ & \\ $a$ is a proof of $\phi\supset \psi$ & $a$ is a construction that converts \\ & each proof $a_{1}$ of $\phi$ into a proof $a(a_{1})$ of $\psi$ \\ $a$ is a proof of $\phi\equiv \psi$ & $a$ is a construction which shows that\\ & the classes of proofs of $\phi$ and $\psi$ are equal \end{tabular}\\ \end{center} \noindent As we mentioned above, Suszko's identity is stronger than equivalence. As far as the latter is concerned, in accordance with the BHK-interpretation $a$ would be a proof of a formula $\phi\leftrightarrow \psi$ provided it is a construction converting each proof of $\phi$ into a proof of $\psi$ and \textit{vice versa}. In light of the above interpretation we can, naturally, wonder what construction would fall under the identity connective. The simplest and most adequate example encapsulating the intuitionistic interpretation of Suszko's identity would be\ldots the simple identity function $\lambda x.x$. It is perhaps not the only possible option, but the identity function, by matching proofs of $\phi$ with proofs of $\psi$, can certainly be used to show that the classes of proofs of the two formulas are equal. This also strongly suggests that the only case of formulas which are identical in the above sense should be that of syntactical identity of the formulas. 
This is in line with the fact that \textsf{SCI} is the weakest non-Fregean logic: any formula $\phi$ is identical only to itself. Of course, if we were to analyze other axiomatic extensions of \textsc{SCI} and/or \textsc{ISCI}, which would change the properties of the identity connective, the interpretation would change as well in order to complement such properties. Suszko's identity connective in its classical version is characterized by four axioms, from which the following three are added to an axiomatic basis of intuitionistic sentential logic: \begin{enumerate} \item[($\equiv_{1}$)] $\phi \equiv \phi$ \item[($\equiv_{2}$)] $(\phi \equiv \delta)\supset (\phi\supset \delta)$ \item[($\equiv_{4}$)] $((\phi \equiv \delta)\wedge(\chi \equiv \psi))\supset((\phi\otimes \chi) \equiv (\delta\otimes \psi))$ \end{enumerate} where $\otimes \in \{\supset, \equiv \}$. \textsf{SCI} is characterized axiomatically by adding ($\equiv_{1}$), ($\equiv_2$), ($\equiv_{4}$) together with ($\equiv_{3}$): $(\phi \equiv \delta)\supset ((\lnot\phi) \equiv (\lnot\delta))$ to an axiomatic basis of classical sentential logic. \textit{Modus ponens} is the only rule of inference in both cases: \textsf{SCI} and \textsf{ISCI}. What can be noticed is the fact that the third axiom scheme expressed in intuitionistic language without negation: $(\phi \equiv \delta)\supset ((\phi\supset\bot) \equiv (\delta \supset\bot))$ is redundant, as it can be obtained on the basis of the axiom scheme $(\equiv_{4})$ and $\bot \equiv \bot$ as an instance of $(\equiv_{1})$. Naturally, due to the different interpretations of logical connectives, the substance of the said axioms in \textsf{SCI} and \textsf{ISCI} will differ, too. We omit the discussion of the intuitionistic interpretations of axioms ($\equiv_{1}$)--($\equiv_{4}$) since this can be found in \cite{isci}. \subsection{Language} We now turn to the fragment of \textsf{ISCI} expressed in the language containing only $\bot, \supset, \equiv$. 
The intuitionistic negation $\neg \phi$ is omitted due to its definitional equivalence to $\phi \supset \bot$. The other connectives are not definable by $\supset, \bot$ in intuitionistic logic, but we omit them for simplicity. The language will be called $\mathcal{L}_\mathsf{ISCI}$. By \textsf{Prop} we mean a denumerable set of propositional variables. These are denoted by lower-case indexed letters \textbf{$p_1, p_2,p_3,\ldots$}. Formulas with the main connective being the identity operator will be referred to as \textit{equations}. Formulas are denoted by lower-case Greek letters, with subscripts, if necessary. The grammar of $\mathcal{L}_\mathsf{ISCI}$ is as follows: \[ \phi ::= p_i \;|\; \bot \;|\; \phi \supset \phi \;|\; \phi \equiv \phi \] \noindent where $p_i \in \mathsf{Prop}$. \textsf{Form} will be used for the set of formulas of $\mathcal{L}_\mathsf{ISCI}$. Later we will use $\mathsf{Eq}$ for the set of all equations and $\mathsf{Form}_0$ for the sum $\mathsf{Prop} \cup \mathsf{Eq}$. \begin{defi}[Complexity of a formula] By {\em complexity of a formula} we mean the following value: \begin{itemize} \item $c(\phi)=0$, if $\phi \in \mathsf{Prop}$ or $\phi = \bot$; \item when $\phi$ is of the form $ \chi \otimes \psi $, with $\otimes \in \{\supset, \equiv\}$, then $c(\phi)=c(\chi) + c(\psi) +1$. \end{itemize} \end{defi} Sequent calculi for \textsf{SCI} and \textsf{ISCI} presented in \cite{Chlebowski2018,isci} do not have the subformula property understood in the usual sense. In \cite{isci} this issue is discussed but no solution is presented. Here we analyse a property called \textit{extended subformula property}. The idea behind it is that when constructing a derivation of a formula $\phi$ in a logic with non-Fregean identity we can use a formula $\psi$ built from subformulas of $\phi$, though $\psi$ is not itself a subformula of $\phi$. 
To warrant that the set of extended subformulas of $\phi$ is finite, we put a complexity constraint on the elements of the set. Formally: \begin{defi}[Subformula, extended subformula] Let $\phi$ be a formula of $\mathcal{L}_\mathsf{ISCI}$. $sub(\phi)$ is the smallest set of formulas closed under the rules: \begin{enumerate} \item $\phi \in sub(\phi)$; \item if $\chi \otimes \psi \in sub(\phi)$ for $\otimes \in \{\supset, \equiv\}$, then $\{\chi, \psi\} \subseteq sub(\phi)$. \end{enumerate} Each element of $sub(\phi)$ is called a {\em subformula of $\phi$}. Further, $ex.sub(\phi)$ is the smallest set closed under the rules: \begin{enumerate} \item[3.] $sub(\phi) \subseteq ex.sub(\phi)$; \item[4.] if $\chi \in ex.sub(\phi)$ and $c(\chi \equiv \chi) \leqslant c(\phi)$, then $\chi\equiv\chi \in ex.sub(\phi)$; \item[5.] if $\chi\equiv\psi \in ex.sub(\phi)$, then $\{\chi\supset\psi,\psi\supset\chi\} \subseteq ex.sub(\phi)$; \item[6.] if $\chi_1\equiv \psi_1, \chi_2\equiv \psi_2 \in ex.sub(\phi)$ and $c((\chi_1\otimes\chi_2)\equiv(\psi_1\otimes\psi_2)) \leqslant c(\phi)$, then $(\chi_1\otimes\chi_2)\equiv(\psi_1\otimes\psi_2) \in ex.sub(\phi)$. \end{enumerate} Each element of $ex.sub(\phi)$ is called {\em an extended subformula of $\phi$}. \end{defi} \subsection{Kripke semantics} We recall the Kripke semantics for \textsf{ISCI} proposed in \cite{isci}. An \emph{$\mathsf{ISCI}$ frame} is simply an ordered pair $\mathbf{F} = \langle W, \leq \rangle$, where $W$ is a non-empty set and $\leq$ is a reflexive and transitive binary relation on $W$. 
If $\mathbf{F}=\langle W, \leq \rangle$ is an $\mathsf{ISCI}$ frame, then by \emph{assignment in} $\mathbf{F}$ we mean a function: $$v: \mathsf{Form}_0\times W\longrightarrow \{0, 1\}.$$ \begin{defi} \label{isci-assignment} An assignment is called {\em $\mathsf{ISCI}$-admissible}, provided that for each $w \in W$, and for arbitrary formulas $\phi$, $\chi$, $\psi$, $\delta$: \begin{enumerate} \item[$(1)$] $v(\psi\equiv \psi,w)=1$, \item[$(2)$] if $v(\psi\equiv \phi,w)=1$ and $v(\chi\equiv \delta,w)=1$, then $v((\psi\otimes \chi)\equiv(\phi\otimes \delta),w)=1$. \end{enumerate} \end{defi} Let us note that by (1), $v(\bot\equiv \bot,w)=1$, hence a special case of (2) is: if $v(\psi\equiv \phi, w)=1$, then $v((\psi\supset \bot)\equiv(\phi\supset \bot),w)=1$. Hence we can see that the notion of \textsf{ISCI}-admissible assignment captures axioms $(\equiv_1)$, $(\equiv_3)$ and $(\equiv_4)$. Axiom $(\equiv_2)$ will be incorporated into the notion of forcing. The definition of forcing presented in \cite{isci} contains a mistake, which is the lack of clause (2) below; here we introduce the corrected version. For simplicity, we also generalize the monotonicity condition to formulas of arbitrary shape. (This is a negligible difference, however.) \begin{defi}[forcing]\label{forcing} Let $v$ be an $\mathsf{ISCI}$-admissible assignment in a given frame $\mathbf{F}$. 
A \emph{forcing relation $\Vdash$ determined by $v$ in} $\mathbf{F}$ is a relation between elements of $W$ and elements of $\mathsf{Form}$ which satisfies, for arbitrary $w\in W$, the following conditions: \begin{itemize} \item[$(1)$] $w\Vdash p_{i}$ iff $v(p_{i}, w) = 1$; \item[$(2)$] $w\Vdash \phi\equiv \psi$ iff $v(\phi\equiv \psi, w)=1$; \item[$(3)$] $w\nVdash\bot$; \item[$(4)$] if $w \Vdash \phi \equiv \psi$, then $w \Vdash \phi \supset \psi$ and $w \Vdash \psi \supset \phi$; \item[$(5)$] $w\Vdash \psi\supset \phi$ iff for each $w'$ such that $w\leq w'$, if $w'\Vdash \psi$ then $w'\Vdash \phi$; \item[$(mon)$] for any formula $\phi$: if $w\Vdash \phi$ and $w\leq w'$, then $w'\Vdash \phi$. \end{itemize} \end{defi} \begin{defi}\label{ISCI-model} An \emph{$\mathsf{ISCI}$ model} is a triple $\mathbf{M} = \langle W, \leq,\Vdash\rangle$, where $\mathbf{F} = \langle W, \leq\rangle$ is an $\mathsf{ISCI}$ frame and $\Vdash$ is a forcing relation determined by some $\mathsf{ISCI}$-admissible assignment in $\mathbf{F}$. A formula $\psi$ which is forced by every world of an $\mathsf{ISCI}$ model, that is, such that $w \Vdash \psi$ for each $w\in W$, is called \emph{true in the model}. A formula true in every $\mathsf{ISCI}$ model is called $\mathsf{ISCI}$-\emph{valid}. \end{defi} In \cite{isci} it was proved that the axiomatic account of \textsf{ISCI} is both sound and complete with respect to the presented Kripke semantics. \section{Sequent Calculus} In this paper we shall use sequents built of sets of formulas instead of multisets. This decision is motivated by the greater simplicity in the completeness proof. In all the remaining conventions pertaining to sequent calculi we follow \cite{negri-structural,basicproof}. 
Hence a \textit{sequent} here is a structure $\Gamma \Rightarrow \phi$, where $\Gamma$ (the \textit{antecedent} of a sequent) is a set of formulas of $\mathcal{L}_\mathsf{ISCI}$ and $\phi$ (the \textit{succedent} of a sequent) is a single formula of $\mathcal{L}_\mathsf{ISCI}$. The antecedent of a sequent can be empty, contrary to the succedent. We shall use $S, S^*, S_1, \ldots$ for sequents. We present a restricted and slightly modified variant of the sequent calculus $\mathbf{G3}_\mathsf{ISCI}$ for \textsf{ISCI} proposed by Chlebowski and Leszczy\'{n}ska-Jasion in \cite{isci}. It must be stressed that when the notion of a sequent is altered (multisets \textit{vs} sets) the rules inherit different meaning as well, which heavily influences the structural rules of the calculus (see our comment below Definition \ref{def6}). As far as the logical side of the calculus is concerned, the rules considered in this paper capture only $\supset,\bot,\equiv$, whereas calculus $\mathbf{G3}_\mathsf{ISCI}$ from \cite{isci} pertains to a richer language. Taking into account only the rules for the three connectives, there are still some major differences between the two calculi: first of all, we assume a generalized form of axioms; second, we strengthen premises of rules $L^2_\equiv$ and $L_\supset$; and finally, we resign from the rule called $L^{3*}_\equiv$ which has the following shape. $$\infer[L^{3*}_\equiv]{\phi\equiv\chi, \Gamma \Rightarrow \gamma}{(\phi\otimes\phi)\equiv (\chi\otimes\chi), \phi\equiv\chi,\Gamma\Rightarrow \gamma}$$ The calculus presented here will be called $\mathsf{SC}_\mathsf{ISCI}$. In $\mathbf{G3}_\mathsf{ISCI}$ the formula $\phi$ that occurs on both sides of an axiom must be a propositional variable or an equation. In $\mathsf{SC}_\mathsf{ISCI}$ $\phi$ is arbitrary. Rule $L^2_\equiv$ in $\mathbf{G3}_\mathsf{ISCI}$ has a weaker premise, as only one of the implications is considered. 
In $\mathsf{SC}_\mathsf{ISCI}$ the premise is strengthened (and so the rule is weakened) to simplify the description of the countermodel construction. Rule $L_\supset$ does not need, in general, the presence of the principal implication formula in the right premise of the rule. A practical motivation for all these modifications is to simplify the reasoning concerning the countermodel construction: this is not about simplifying \textit{proving-in}, but \textit{proving-about}. \begin{table}[h] \caption{Rules of $\mathsf{SC_{ISCI}}$} \label{left-rules1} \centering \begin{tabular}{cc} \hline\noalign{ } $\phi, \Gamma \Rightarrow \phi$ & $\bot, \Gamma \Rightarrow \chi$ \\ & \\ $$\infer[L_{\supset}]{\phi\supset \chi,\Gamma\Rightarrow \psi}{\phi\supset \chi,\Gamma\Rightarrow \phi & \chi,\phi\supset \chi,\Gamma\Rightarrow \psi}$$ & $$\infer[R_{\supset}]{\Gamma\Rightarrow \psi\supset \delta}{\psi,\Gamma\Rightarrow \delta}$$ \\ & \\ {$\infer[L_\equiv^1]{\Gamma\Rightarrow \gamma}{\psi\equiv \psi,\Gamma\Rightarrow \gamma}$} & {$$\infer[L_\equiv^2]{\phi\equiv \chi,\Gamma\Rightarrow \psi} {\phi \equiv \chi, \phi \supset \chi, \chi \supset \phi, \Gamma \Rightarrow \psi}$$} \\ &\\ \multicolumn{2}{c}{$$\infer[L_\equiv^3]{\psi\equiv \delta, \phi\equiv \chi,\Gamma\Rightarrow \gamma}{(\psi\otimes \phi)\equiv(\delta\otimes \chi),\psi\equiv\delta,\phi\equiv\chi,\Gamma\Rightarrow \gamma}$$} \\ \noalign{ }\hline \end{tabular} \end{table} The rules of $\mathsf{SC_{ISCI}}$ are presented in Table \ref{left-rules1}. As we can see, they are divided into two groups: rules for intuitionistic implication (the second line in the table) and rules for equations. We consider two schemes of axioms. We do not include the axiom scheme of the form $\Gamma \Rightarrow \phi \equiv \phi$ (the classical equivalent of which can be found for example in \cite{michaels}), however it is obtainable in the present calculus---see Fact \ref{fact 1} below. 
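To illustrate, a single bottom-up application of rule $L^1_\equiv$ reduces any sequent of the form $\Gamma \Rightarrow \phi\equiv\phi$ to an instance of the first axiom scheme:
$$\infer[L^1_\equiv]{\Gamma\Rightarrow \phi\equiv \phi}{\phi\equiv \phi,\Gamma\Rightarrow \phi\equiv \phi}$$
\noindent where the premise is an axiom of the form $\phi, \Gamma \Rightarrow \phi$.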
\begin{defi}[derivation and proof in $\mathsf{SC_{ISCI}}$]\label{def6} A {\em derivation of a sequent $S$ in} $\mathsf{SC_{ISCI}}$ is a tree labelled with sequents, with $S$ in the root, and regulated by the rules specified in Table \ref{left-rules1}. If all the leaves of a finite derivation of $S$ are labelled with axioms of $\mathsf{SC_{ISCI}}$, then the derivation is a {\em proof of $S$ in $\mathsf{SC_{ISCI}}$}; we then say that $S$ is {\em provable in $\mathsf{SC_{ISCI}}$}. \end{defi} Strengthening the right premise of $L_\supset$ by requiring the presence of the implication formula has the effect expressed below---in the root-first perspective, nothing ever disappears from the antecedents of sequents. \begin{fact}\label{internet} In each derivation of a sequent in $\mathsf{SC_{ISCI}}$ the antecedents of sequents are bottom-up inherited. \end{fact} Calculus $\mathbf{G3}_\mathsf{ISCI}$ also contains the structural rules of cut, contraction and weakening. Completeness of $\mathbf{G3}_\mathsf{ISCI}$ was established in \cite{isci} indirectly, by an interpretation of $\mathbf{G3}_\mathsf{ISCI}$ in the axiomatic system for \textsf{ISCI}---the rule of cut is used to simulate \textit{modus ponens} and then its admissibility is demonstrated. However, the proof of admissibility of cut requires the use of contraction. The rule of contraction is then also shown admissible, but the use of rule $L^{3*}_\equiv$, the one omitted here, seems necessary in that proof. (The admissibility of weakening in $\mathbf{G3}_\mathsf{ISCI}$ is not controversial.) Although $\mathbf{G3}_\mathsf{ISCI}$ and $\mathsf{SC}_\mathsf{ISCI}$ differ, it seems that the result presented here can be used to prove completeness of $\mathbf{G3}_\mathsf{ISCI}$ without the structural rules and $L^{3*}_\equiv$. However, we shall not elaborate on this issue here. The rules for $\supset$ satisfy the \textit{subformula property}.
In contrast, the premises of the identity-based rules can contain formulas which are not subformulas of those appearing in the conclusion of the rule. However, \begin{fact} Let $\mathcal{D}$ be a derivation of sequent $\Rightarrow \alpha$, and let $c(\alpha)=n$. If $\mathcal{D}$ satisfies the following criteria: \begin{itemize} \item if rule $L^1_\equiv$ is applied, then $\psi \equiv \psi \in ex.sub(\alpha)$; \item if rule $L^3_\equiv$ is applied in $\mathcal{D}$, then $c((\psi\otimes\phi) \equiv (\delta\otimes\chi)) \leqslant n$, \end{itemize} then each formula occurring in $\mathcal{D}$ is an extended subformula of $\alpha$. \end{fact} Calculus $\mathsf{SC}_\mathsf{ISCI}$ allows for derivations that do not have the extended subformula property, but, as we shall see, the calculus is complete if we restrict ourselves to derivations with this property. Despite the congruence character of $\equiv$, classical \textsf{SCI} admits the finite model property, as was already shown in \cite{bloom1972investigations}. The model built there is algebraic and is basically constructed from the set of all subformulas of a given formula. Hence it is not surprising that within \textsf{ISCI} we can expect a similar effect. Correctness of $\mathbf{G3}_\mathsf{ISCI}$ was analysed in \cite{isci} with respect to a semantics whose definition of forcing contains a small mistake (which, however, does not affect the basic construction). \subsection*{Correctness of $\mathsf{SC_{ISCI}}$} Let $\mathcal{M} = \langle W, \leq, \Vdash\rangle$ be an arbitrary model. A sequent $\Gamma \Rightarrow \phi$ will be called {\em true in $\mathcal{M}$}, provided that: if all formulas in $\Gamma$ are true in $\mathcal{M}$, then so is $\phi$. A sequent is called \textsf{ISCI}-\textit{valid}, or simply \textit{valid}, if it is true in every model.
It is pretty clear that the intuitionistic component of $\mathsf{SC_{ISCI}}$ is correct with respect to the semantics of \textsf{ISCI}, hence we do not analyse the correctness of rules $L_\supset$ and $R_\supset$. We only briefly sketch the arguments that the identity rules of $\mathsf{SC_{ISCI}}$ preserve the validity of sequents. \begin{enumerate} \item[$L_\equiv^1$] Suppose that $\Gamma \Rightarrow \gamma$ is not true in a model $\mathcal{M}$. Then there is a world $w$ in $\mathcal{M}$ forcing each formula in $\Gamma$ but not $\gamma$. Since the relation of forcing is determined by some \textsf{ISCI}-admissible assignment $v$, and $v(\psi \equiv \psi)=1$ for each such assignment, also each world of $\mathcal{M}$ forces $\psi\equiv \psi$. Therefore $w\Vdash \psi\equiv \psi$, which shows that $\psi\equiv \psi, \Gamma \Rightarrow \gamma$ is not true in $\mathcal{M}$. \item[$L_\equiv^2$] Suppose $\phi \equiv \chi, \Gamma \Rightarrow \gamma$ is not true in some $\mathcal{M}$. Then there is a world $w$ in $\mathcal{M}$ forcing formulas from $\Gamma$ and formula $\phi \equiv \chi$ and at the same time $w \not\Vdash \gamma$. By (4) in the definition of forcing $w \Vdash \phi \supset \chi$ and $w \Vdash \chi\supset \phi$. Hence it follows that $\phi\equiv\chi,\phi \supset \chi, \chi\supset \phi, \Gamma \Rightarrow \gamma$ is not true in $\mathcal{M}$. \end{enumerate} The case of $L_\equiv^3$ is proved similarly, with reference to property (2) of an \textsf{ISCI}-admissible assignment. Correctness of the rules of $\mathsf{SC_{ISCI}}$, together with the fact that the axioms of $\mathsf{SC_{ISCI}}$ are valid, yields \begin{theo} If there is a proof of $\Rightarrow \phi$ in $\mathsf{SC_{ISCI}}$, then $\phi$ is $\mathsf{ISCI}$-valid. \end{theo} As far as invertibility of the rules is concerned, it can be easily seen that the invertibility of the identity rules is warranted by the fact that the antecedents of the conclusions are subsets of the antecedents of the premises. 
Things are not that simple with the rules for implication. More specifically, \begin{fact} Rule $L_\supset$ is not semantically invertible, that is, if the conclusion is a valid sequent, then the right premise is valid as well, but the left premise need not be valid. \end{fact} \noindent The problem is caused by the fact that the left premise and the conclusion of $L_\supset$ do not share the succedent. On the other hand, \begin{fact} Rule $R_\supset$ is semantically invertible, that is, if the conclusion of the rule is a valid sequent, then so is the premise. \end{fact} \begin{proof} Indeed, suppose that the premise, $\psi, \Gamma \Rightarrow \delta$, of rule $R_\supset$ is not a valid sequent. Then in some model $\mathcal{M}$ there is a world $w$ such that $w \Vdash \psi$, each formula from $\Gamma$ is forced by $w$ and $w \not\Vdash \delta$. Since $w \leq w$, $w \not\Vdash \psi \supset \delta$ and this yields immediately that $\Gamma \Rightarrow \psi \supset \delta$ is not valid. \end{proof} \section{Completeness, procedure, decidability} In this section we prove completeness directly with respect to Kripke semantics, by a countermodel construction. However, detours are to be expected. By and large, the presence of a noninvertible rule like $L_\supset$ means that we construct a derivation differently when looking for a countermodel than when we expect to find a proof. The difference is in the treatment of implications. In the case of proof-search, invertible rules take priority over $L_\supset$. In the case of the countermodel construction, however, each implication in the antecedent must sooner or later be treated with $L_\supset$ \textit{before} we move on to another possible world, which relates to the application of $R_\supset$. For this reason, the proof-search procedure sketched in the following subsection is not a basis for the construction of a countermodel.
Let us start with a simple fact established by the following derivation: $$ \infer[L^1_\equiv]{\Gamma \Rightarrow \phi \equiv \phi}{\Gamma, \phi \equiv \phi \Rightarrow \phi \equiv \phi} $$ \begin{fact}\label{fact 1} For each formula $\phi$, sequent $\Gamma \Rightarrow \phi \equiv \phi$ is provable in $\mathsf{SC_{ISCI}}$ for arbitrary antecedent $\Gamma$. \end{fact} \subsection{Proof search} We assume that we start with a sequent of the form $\Rightarrow \phi$ and that $c(\phi)=n$. We consider derivations constructed bottom-up, starting with the root. Rules are `applied' in this direction to conclusions to obtain premises; we write `b-applied' for `backwards-applied', so as to avoid abuse of language. Our derivations are constructed under the following conditions: \begin{enumerate} \item[(C1)] \underline{axioms}: no rules are b-applied to axioms, \item[(C2)] \underline{repetitions-check and intuitionistic loop-check}: no rules are b-applied to $\Gamma \Rightarrow \psi$, if the premise (at least one of the premises in the case of $L_\supset$) would be $\Gamma \Rightarrow \psi$ or any other sequent already present on the branch under construction, \item[(C3)] \underline{saturation wrt equations}: rules for identity are b-applied first, until all possible equations from $ex.sub(\phi)$ are constructed; only then are the rules for implication b-applied, \item[(C4)] \underline{extended subformula}: if a rule for identity is b-applied, then the active equation in the premise of the rule is an element of $ex.sub(\phi)$. \end{enumerate} It also goes without saying that the rules are b-applied to sequents as long as it is possible to do so without violating one of the mentioned conditions. A derivation constructed in line with (C1)--(C4) will be called \textit{restricted}. Clause (C2) warrants that, e.g., if $L_\supset$ is b-applied, then none of the premises is a copy of the conclusion.
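To convey the flavour of conditions (C1) and (C2), here is a minimal executable sketch of bottom-up proof search, for the $\supset,\bot$ fragment only; the identity rules and conditions (C3)--(C4) are omitted, and all names and encodings are our own illustrative assumptions, not part of the calculus.

```python
# Bottom-up ('b-applied') proof search for the implicational fragment.
# Formulas: strings for propositional variables, BOT for falsum,
# ('imp', a, b) for a 'implies' b.  Antecedents are frozensets, as the
# calculus works with sets of formulas.

BOT = 'bot'

def imp(a, b):
    return ('imp', a, b)

def is_imp(f):
    return isinstance(f, tuple) and f[0] == 'imp'

def provable(gamma, phi, visited=frozenset()):
    """True iff gamma => phi has a proof; 'visited' implements a
    loop-check on the branch in the spirit of condition (C2)."""
    gamma = frozenset(gamma)
    seq = (gamma, phi)
    if phi in gamma or BOT in gamma:      # axioms -- condition (C1)
        return True
    if seq in visited:                    # loop-check -- condition (C2)
        return False
    visited = visited | {seq}
    # R_imp is invertible, so it is b-applied with priority
    if is_imp(phi):
        _, a, b = phi
        return provable(gamma | {a}, b, visited)
    # L_imp is noninvertible: backtrack over every implication in the
    # antecedent; the principal formula persists in both premises
    for f in gamma:
        if is_imp(f):
            _, a, b = f
            if provable(gamma, a, visited) and provable(gamma | {b}, phi, visited):
                return True
    return False
```

For instance, `provable(set(), imp('p', imp('q', 'p')))` succeeds, while Peirce's law `((p ⊃ q) ⊃ p) ⊃ p` fails, as expected intuitionistically; the backtracking over the choice of implication in the antecedent mirrors point (iii) of the procedure sketched below.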
Clause (C4) yields that restricted derivations satisfy the extended subformula property (recall Fact 1). Before we move on, let us establish the following: for each formula $\phi$, the set $ex.sub(\phi)$ is finite; hence the number of different sequents that may occur in any restricted derivation of $\Rightarrow \phi$ is finite. Since by clause (C2) there are no repetitions of sequents in a restricted derivation of $\Rightarrow \phi$, it follows that each such derivation is finite. But there is more to it. The same clause warrants that \begin{fact}\label{finite} The number of all restricted derivations of $\Rightarrow \phi$ is finite. \end{fact} \underline{Sketch of the proof-search procedure}: starting with the root, the rules are b-applied maintaining the following priorities of rules: (i) identity rules, (ii) $R_\supset$, (iii) exactly one application of $L_\supset$. After (iii) we go back to (i). In point (iii), if there is more than one implication formula in the antecedent (which is usually the case), then there is a choice, and, obviously, one can make a wrong one. This is to be expected in a calculus for intuitionistic logic with a noninvertible rule. At this point backtracking must be employed: if a proof is not constructed, we must go back to the last choice of implication formula in (iii) and choose another one. An alternative way of employing backtracking would be to switch to a hypersequent format, where b-applications of $L_\supset$ would result in extending a hypersequent with all possible sequents that correspond to the possible choices of implication formula. We leave this issue, however, for future research. \subsection{Countermodel} Suppose that the proof-search procedure fails for $\Rightarrow \phi$. In order to build a countermodel for $\phi$ we will need \textit{a set of derivations}, sometimes a large one. However, taking into account Fact \ref{finite}, we may claim that our approach is constructive.
Worlds of the countermodel will be built from the occurrences of sequents, and the values of formulas in the worlds will depend on their presence in antecedents/succedents of sequents. As is usually the case in such constructions, in order to obtain the desired effect the worlds need to be saturated, and this can be assured only by suitable applications of the rules. Here is an auxiliary notion. Suppose that a sequent $\Gamma \Rightarrow \gamma$ occurs on a branch of a restricted derivation. Let $\psi \supset \chi$ be an implication formula, not necessarily an element of $\Gamma$. If (i) $\chi \in \Gamma$, or (ii) $\psi = \gamma$, then we say that the sequent is \textit{saturated with respect to implication} $\psi \supset \chi$. Now we will give rule $L_\supset$ priority over rule $R_\supset$. More specifically, until the end of this section we expect that restricted derivations satisfy, in addition, the following \begin{itemize} \item[(C5)] if rule $R_\supset$ is b-applied to a sequent $\Gamma \Rightarrow \chi$ and there is $\gamma \supset \delta \in \Gamma$ (an implication formula in the antecedent), then either the sequent is saturated with respect to this implication, or there is a successor $S$ of this sequent on the given branch such that (i) there is no application of $R_\supset$ between $\Gamma \Rightarrow \chi$ and $S$, and (ii) $S$ is saturated with respect to $\gamma \supset \delta$. \end{itemize} By giving rule $L_\supset$ priority over $R_\supset$ we mean that if a sequent does not satisfy (C5), then before rule $R_\supset$ is applied, $L_\supset$ needs to be applied in order to saturate the sequent. By assumption, a restricted derivation constructed in line with (C5) is still not a proof of $\Rightarrow \phi$ in $\mathsf{SC_{ISCI}}$, hence it has an open branch. We will be interested in the leftmost open one.
Let $\mathcal{D}_{\Rightarrow \phi}$ stand for a restricted derivation of sequent $\Rightarrow \phi$ satisfying (C5); by $\mathcal{B}_{\Rightarrow\phi}$ we shall refer to the leftmost open branch of $\mathcal{D}_{\Rightarrow\phi}$. Each such branch determines a structure as follows. Let $W_0$ stand for the set of all occurrences of sequents on $\mathcal{B}_{\Rightarrow\phi}$ (below we often say `sequent' instead of `occurrence of a sequent', but this should not lead to confusion). The accessibility relation will be defined via the applications of $R_\supset$ and $L_\supset$; we start with some auxiliary notions. For all $S,S^*\in W_0$: we say that $S$ and $S^*$ are in relation $r$ iff (i) $S=S^*$, or (ii) $S$ is an immediate predecessor or immediate successor of $S^*$ on $\mathcal{B}_{\Rightarrow\phi}$, but $S/S^*$, respectively $S^*/S$, is not an instance of $R_\supset$. Let $\overline{r}$ stand for the transitive closure of $r$. Equivalence classes of $\overline{r}$ will constitute the points of our countermodel. Less formally, an equivalence class of $\overline{r}$ contains all occurrences of sequents on $\mathcal{B}_{\Rightarrow\phi}$ between two applications of $R_\supset$. We take $W = \{[S]_{\overline{r}}: S \in W_0\}$. For $w,y \in W$ we set: \begin{itemize} \item $w \leq_0 y$ iff for some $S\in w, S^*\in y$, $S^*/S$ is an instance of $R_\supset$ in $\mathcal{B}_{\Rightarrow\phi}$. \end{itemize} Let $\leq_W$ stand for the reflexive and transitive closure of $\leq_0$. We say that the structure $\langle W,\leq_W \rangle$ \textit{is determined by $\mathcal{B}_{\Rightarrow\phi}$ of $\mathcal{D}_{\Rightarrow \phi}$}. Since there is no risk of confusion, later on we will omit the relativisation to relation $\overline{r}$, and we will write $[S]$ instead of $[S]_{\overline{r}}$.
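The grouping of sequent occurrences into classes can be sketched in code as follows; this is a toy illustration under our own encoding assumptions: a branch is a list of (sequent, rule) steps read from the root upwards, where `rule` names the rule whose b-application produced the step (with `None` for the root), and only applications of $R_\supset$ separate worlds.

```python
# Worlds of the countermodel from an open branch: consecutive sequent
# occurrences not separated by an application of R_imp land in the same
# class; <=_0 then holds between consecutive classes (its reflexive and
# transitive closure gives the accessibility relation).

def worlds_of_branch(branch):
    classes, current = [], []
    for seq, rule in branch:
        if rule == 'R_imp' and current:
            classes.append(current)   # an application of R_imp opens a new world
            current = []
        current.append(seq)
    classes.append(current)
    le0 = [(i, i + 1) for i in range(len(classes) - 1)]
    return classes, le0
```

For example, a three-step branch whose second step is an application of $R_\supset$ yields two worlds, with the root world below the other.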
It can be easily seen that the above construction warrants that sequents considered within one equivalence class of $\overline{r}$ `get saturated' with respect to implication formulas in the antecedent. However, implications in the succedents of sequents cause a problem now, as we can have such sequents on the branch and no guarantee that $R_\supset$ was applied: each time rule $L_\supset$ is applied and the leftmost open branch goes through its left premise, the succedent of a sequent is altered. On the other hand, we still know that the sequents that occur on an open branch---in particular, those with implication formulas on the right side---are not provable in the calculus. In what follows we shall pick an appropriate open branch of a derivation of each such troublesome sequent. The branch will serve in the construction of additional points of our countermodel; ones that do not force the implications that occur in the succedents of sequents. Let $\mathbb{S}$ stand for the set of all restricted derivations of $\Rightarrow \phi$ satisfying (C5) and all subderivations of these derivations. By Fact \ref{finite}, $\mathbb{S}$ is finite. It follows that for each sequent, $S$, that occurs in a derivation in $\mathbb{S}$, $\mathbb{S}$ contains a restricted derivation, satisfying (C5), of this very sequent. We will denote such a derivation by $\mathcal{D}_S$ (this does not mean that the derivation is unique; if there is a choice, just pick one). By $\mathcal{B}_S$ we shall refer to the leftmost open branch of $\mathcal{D}_S$. What is more, for each sequent $S$ we define the structure $\langle W_S,\leq_S \rangle$ determined by $\mathcal{B}_S$ of $\mathcal{D}_S$, just as above. Instead of $\langle W_S,\leq_S\rangle$ we may also use $\langle W_{\mathcal{B}_S},\leq_{\mathcal{B}_S}\rangle$, or $\langle W_{\mathcal{B}},\leq_{\mathcal{B}}\rangle$ when the reference to a sequent $S$ is not important (the structure depends on the content of the whole branch, anyway).
We are almost in a position to supplement the initial structure $\langle W,\leq_W \rangle$ (determined by $\mathcal{B}_{\Rightarrow\phi}$) with worlds that do not force the troublesome implication formulas occurring in succedents of sequents. Before we continue, however, we need the following. \begin{fact}\label{suma} Suppose that $\Gamma \Rightarrow \gamma$ is not provable in $\mathsf{SC}_\mathsf{ISCI}$ and that $\Gamma \Rightarrow \gamma$ occurs in the leftmost open branch of a derivation in $\mathbb{S}$. Suppose also that there is $\Gamma^* \Rightarrow \gamma^*$ preceding $\Gamma \Rightarrow \gamma$ on the branch, and that rule $R_\supset$ is not applied between them. Then sequent $\Gamma \cup \Gamma^* \Rightarrow \gamma$ is not provable in $\mathsf{SC}_\mathsf{ISCI}$. \end{fact} \begin{proof} By inspection of the rules we know that $\Gamma \cup \Gamma^* = \Gamma^*$. If $\gamma \neq \gamma^*$, then rule $L_\supset$ is applied between the two sequents and the branch goes through the left premise. The point is that while the b-application of rule $L_\supset$ changes the succedent on the branch, it does not affect the applicability of the rules to the resulting left premise, as all the rules except for $R_\supset$, which is not applied between the two sequents, are based on the left side of a sequent. In other words, the b-application of $L_\supset$ that changes $\gamma$ to $\gamma^*$ can be `skipped' and the result will be a derivation going through sequent $\Gamma \cup \Gamma^* \Rightarrow \gamma$. A formal argument would go along the lines of the argument presented in the proof of Lemma \ref{c2}; we omit it. \end{proof} Now we go back to the initial structure $\langle W,\leq_W \rangle$ determined by $\mathcal{B}_{\Rightarrow\phi}$.
For each sequent of the form $\Gamma \Rightarrow \psi \supset \chi$ on $\mathcal{B}_{\Rightarrow\phi}$ we first define a maximum $\Gamma^M$ for the sequent in $W$: $$\Gamma^M = \bigcup \{ \Gamma_i : \Gamma_i \Rightarrow \delta \in [\Gamma \Rightarrow \psi \supset \chi] \text{ for some } \delta \}$$ which is simply the union of all antecedents of sequents that were obtained on the branch going through $\Gamma \Rightarrow \psi \supset \chi$ between two applications of rule $R_\supset$. Due to Fact \ref{internet}, we could have defined $\Gamma^M$ just as the maximal (wrt inclusion) antecedent in $[\Gamma \Rightarrow \psi \supset \chi]$, or as the least element wrt the predecessor-successor relation induced by the rules. All accounts lead to the same effect. By Fact \ref{suma}, sequent $\Gamma^M \Rightarrow \psi \supset \chi$ is not provable in $\mathsf{SC}_\mathsf{ISCI}$. It follows that neither is sequent $\Gamma^M,\psi \Rightarrow \chi$ (for if it were, one application of $R_\supset$ would make the previous sentence false). Now we need a restricted derivation of $\Gamma^M,\psi \Rightarrow \chi$ and the structure determined by its leftmost branch to supplement the constructed countermodel. There is still one more subtle impediment to this construction: as we have established, there is no guarantee that $R_\supset$ was b-applied here, which means that there is no guarantee that sequent $\Gamma^M,\psi \Rightarrow \chi$ is in $\mathbb{S}$. For this reason, in the final definitions below we refer to $\mathbb{B}$ instead of $\mathbb{S}$. $\mathbb{B}$ is a set of branches constructed as follows. Take $\{\mathcal{B}_{\Rightarrow\phi}\}$ and close this singleton set with the following rule: whenever the conclusion $\Gamma \Rightarrow \psi \supset \chi$ of rule $R_\supset$ occurs on a branch in $\mathbb{B}$, add to $\mathbb{B}$ the leftmost branch of a restricted derivation of sequent $\Gamma^M,\psi \Rightarrow \chi$.
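As a toy illustration (with sequents encoded, by our own assumption, as pairs of an antecedent set and a succedent), $\Gamma^M$ for a class of sequent occurrences is simply computed as follows; by Fact \ref{internet} the union coincides with the maximal antecedent in the class.

```python
# Gamma^M for a world: the union of all antecedents of the sequents
# occurring in the class.

def gamma_max(world):
    # world: iterable of sequents, each a (frozenset_antecedent, succedent) pair
    result = frozenset()
    for antecedent, _ in world:
        result |= antecedent
    return result
```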
Finally, let $$\overline{W} = \bigcup_{\mathcal{B}\in\mathbb{B}} W_\mathcal{B}, \quad \overline{\leq_0} = \bigcup_{\mathcal{B}\in\mathbb{B}} \leq_{\mathcal{B}}, \quad \overline{\leq} \text{ is the transitive closure of } \overline{\leq_0} \cup \overline{\leq_1}, \text{ where}$$ $$\overline{\leq_1} = \left\lbrace\langle w,y \rangle: w = [S], \text{ for some } S \text{ of the form } \Gamma \Rightarrow \psi \supset \chi, \text{ and } y = [\Gamma^M, \psi \Rightarrow \chi]\right\rbrace$$ Let $i, n \in \mathds{N}$. We set $\mathsf{Eq}^i = \{\psi \equiv \chi: c(\psi \equiv \chi)=i\}$ and $\mathsf{Form}^n_0 = \mathsf{Prop} \cup \bigcup^{n}_{i=1} \mathsf{Eq}^i$. We define the assignment $$v_0: \mathsf{Form}^n_0 \times \overline{W} \longrightarrow \{0,1\}$$ by requiring that for each $w \in \overline{W}$, $v_0(\psi,w)=1$ iff (i) $\psi$ is of the form $\chi \equiv \chi$ or (ii) there is a sequent of the form `$\Gamma,\psi \Rightarrow\delta$' in $w$. Next, we extend $v_0$ to a valuation $v: \mathsf{Form}_0 \times \overline{W} \longrightarrow \{0,1\}$, as follows: $v(\psi \equiv \chi,w)=1$ iff (i) $\psi=\chi$ or (ii) $v_0(\psi \equiv \chi,w)=1$ or (iii) $\psi$ is of the form $\psi_1\otimes \psi_2$, $\chi$ is of the form $\chi_1\otimes \chi_2$ and $v(\psi_1 \equiv \chi_1,w)=v(\psi_2 \equiv \chi_2,w)=1$. It is easy to verify: \begin{coro} Structure $\langle \overline{W}, \overline{\leq} \rangle$ is an $\mathsf{ISCI}$-frame and $v$ is an $\mathsf{ISCI}$-admissible assignment on $\langle \overline{W},\overline{\leq} \rangle$. \end{coro} \begin{proof} Clause (1) of Definition 4 is warranted by (i) and clause (2) by (iii). \end{proof} \begin{coro}\label{equations} Let $\langle \overline{W},\overline{\leq} \rangle$ and $v$ be as defined above. For equations $\psi \equiv \chi$ of complexity up to $n$ and such that $\psi \neq \chi$, if $v(\psi \equiv \chi,w)=1$, then there is a sequent of the form $\psi \equiv \chi, \Gamma^* \Rightarrow \delta$ in $w$.
\end{coro} \begin{proof} By induction on the complexity of $\psi \equiv \chi$. Let $c(\psi \equiv \chi)=1$ (base case) and assume that $v(\psi \equiv \chi,w)=1$. Case (i) is excluded and (iii) cannot hold, hence (ii) holds, which means that there is $\psi \equiv \chi, \Gamma^* \Rightarrow \delta$ in $w$. For $c(\psi \equiv \chi)=k+1\leqslant n$ (where $k \geqslant 1$), if $v(\psi \equiv \chi,w)=1$, then (i) is excluded and (ii) proves our thesis, hence suppose that (iii) holds: $\psi$ is of the form $\psi_1\otimes \psi_2$, $\chi$ is of the form $\chi_1\otimes \chi_2$. Then, since $v(\psi_1 \equiv \chi_1,w)=v(\psi_2 \equiv \chi_2,w)=1$, IH warrants that there are $\psi_1 \equiv \chi_1, \Gamma_1 \Rightarrow \delta_1 \in w$ and $\psi_2 \equiv \chi_2, \Gamma_2 \Rightarrow \delta_2 \in w$. (Actually, for the case with $\psi_i = \chi_i$ we need to refer also to applications of $L^1_\equiv$, which is straightforward.) Since the two sequents are on the same branch, there is also one with both formulas $\psi_1 \equiv \chi_1$ and $\psi_2 \equiv \chi_2$ in the antecedent. Rule $L^3_\equiv$ is b-applied with respect to these formulas before $R_\supset$. Hence there is $\psi \equiv \chi, \Gamma^* \Rightarrow \delta$ in $w$. \end{proof} Let $\mathcal{M} = \langle \overline{W},\overline{\leq}, \Vdash \rangle$ be an $\mathsf{ISCI}$-model with the forcing relation $\Vdash$ determined by assignment $v$. \begin{lemma}\label{c2} Let $\chi \in \mathsf{Form}_0$. If for some $w \in \overline{W}$ there is $\Gamma \Rightarrow \chi \in w$, then $v_0(\chi,[\Gamma \Rightarrow \chi])=0$, and hence also $v(\chi,[\Gamma \Rightarrow \chi])=0$. \end{lemma} \noindent This is an important lemma showing that if a propositional variable or an equation occurs in a succedent of a sequent in $w$, then the same formula does not occur in the antecedent of any sequent in $w$. The proof shows that this situation can only happen when the considered sequent is provable.
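The recursive extension of $v_0$ to $v$ on equations, clauses (i)--(iii) above, can be mirrored in a short executable sketch; the encoding of formulas as tagged tuples and all names are our own illustrative assumptions.

```python
# Extending v0 to v on equations, following clauses (i)-(iii):
# an equation is ('eq', lhs, rhs); a compound formula psi_1 (x) psi_2
# is a tuple (connective, psi_1, psi_2).  v0 is modelled as the set of
# (equation, world) pairs on which v0 returns 1.

def v(equation, w, v0_true):
    _, lhs, rhs = equation
    if lhs == rhs:                       # (i) syntactic identity
        return 1
    if (equation, w) in v0_true:         # (ii) recorded by v0
        return 1
    if (isinstance(lhs, tuple) and isinstance(rhs, tuple)
            and lhs[0] == rhs[0]):       # (iii) same binary connective
        return int(v(('eq', lhs[1], rhs[1]), w, v0_true) == 1
                   and v(('eq', lhs[2], rhs[2]), w, v0_true) == 1)
    return 0
```

The recursion terminates because each call strictly decreases the complexity of the equation, which also mirrors the inductive proof of Corollary \ref{equations}.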
\begin{proof}[Proof of Lemma \ref{c2}] First we consider in detail the case of $\chi \in \mathsf{Prop}$. Assume that $w \in \overline{W}$; then $w$ is a set of occurrences of sequents from an open branch $\mathcal{B}$ constructed as described above. Let $S = \Gamma \Rightarrow \chi$. Suppose also that $S$ occurs on $\mathcal{B}$, but nevertheless $v_0(\chi,[S])=1$. By the definition of $v_0$, for $\chi \in \mathsf{Prop}$, $v_0(\chi,w)=1$ iff $\chi$ occurs in the antecedent of some sequent in $w$. Let $S^* = \Gamma_0, \chi \Rightarrow \delta$ stand for such a sequent: $$\Gamma_0, \chi \Rightarrow \delta \in [\Gamma \Rightarrow \chi].$$ If $\chi=\delta$, then the indicated sequent $S^*$ is an axiom (a contradiction), hence $\chi \neq \delta$. Sequents $S$ and $S^*$ are in relation $\overline{r}$, which means that they are linked with the derivability relation, but there is no application of $R_\supset$ between them. If sequent $S$ precedes sequent $S^*$ in $\mathcal{B}$ (in the sense of the predecessor-successor relation that goes top-down): $$\infer*{S^* = \Gamma_0,\chi \Rightarrow \delta}{S = \Gamma \Rightarrow \chi},$$ then every formula from the antecedent of $S^*$, $\chi$ in particular, occurs in the antecedent of $S$, hence $S$ is an axiom (antecedents are `bottom-up inherited'). It follows that $S^*$ precedes $S$ in $\mathcal{B}$; the path between the two sequents is displayed in (\ref{pathC}) below. \begin{equation}\label{pathC} \infer*{S = \Gamma \Rightarrow \chi}{S^* = \Gamma_0, \chi \Rightarrow \delta} \end{equation} Now we `cut off' the part $\mathcal{P}$ of (\ref{pathC}) which is the shortest path between a sequent with $\chi$ in the antecedent and a sequent with $\chi$ in the succedent. It means that if $S^*$ is followed by a sequent with $\chi$ in the antecedent, then we drop $S^*$ and consider the path leading from its immediate successor to $\Gamma \Rightarrow \chi$.
If the successor also has $\chi$ in the antecedent, then we also drop this sequent, and so on, until we arrive at some $\Gamma^*_0, \chi \Rightarrow \delta^*$ such that its immediate successor does not contain $\chi$ in the antecedent. We do the same with the bottom sequent: if $\Gamma \Rightarrow \chi$ is preceded by a sequent with $\chi$ in the succedent, then we drop $\Gamma \Rightarrow \chi$, and so on, until we arrive at a sequent $\Gamma^* \Rightarrow \chi$ such that its immediate predecessor does not have $\chi$ in the succedent. $$\infer*{\Gamma^* \Rightarrow \chi}{\Gamma^*_0,\chi \Rightarrow \delta^*}$$ The rest of the argument consists in deriving a contradiction from these assumptions together with the assumption that the branch is open (that is, with the assumption that the sequents on branch $\mathcal{B}$ are not provable). By inspection of the rules we can see that, as $\chi$ is not present in the antecedent of the successor of $\Gamma^*_0,\chi \Rightarrow \delta^*$, the sequent must be the right premise of $L_\supset$ (recall that there are no applications of $R_\supset$ in this path). Similarly, the bottom sequent must result from $L_\supset$, but this time $\mathcal{B}$ goes through the left premise. $$ \infer[L_\supset]{\Gamma^*_1, \gamma \supset \theta \Rightarrow \chi}{\infer*{\Gamma^*_1, \gamma \supset \theta \Rightarrow \gamma}{\infer[L_\supset]{\Gamma_1, \psi \supset \chi \Rightarrow \delta^*}{\infer*{\Gamma_1,\psi \supset \chi \Rightarrow \psi}{closed\ subtree} \ \ & \ \ \infer*{\psi\supset\chi,\chi,\Gamma_1 \Rightarrow \delta^*}{\mathcal{B}}}} & \theta, \Gamma^*_1 \Rightarrow \chi} $$ \noindent where $\Gamma^*_0 = \Gamma_1,\psi\supset \chi$, $\Gamma^* = \Gamma^*_1, \gamma \supset \theta$. The leftmost subtree must be closed, as $\mathcal{B}$ is the leftmost open branch of $\mathcal{D}$. The b-application of $L_\supset$ to the root changes only the succedent of the sequent.
Since $R_\supset$ is not applied on the considered path, this change has no effect on the applicability (b-applicability) of rules above. We consider a modification of $\mathcal{D}$: $$ \infer*{\Gamma^*_1, \gamma \supset \theta \Rightarrow \chi}{\infer[L_\supset]{\Gamma_1, \psi \supset \chi \Rightarrow \delta^*}{\infer*{\Gamma_1,\psi \supset \chi \Rightarrow \psi}{closed\ subtree} \ \ & \ \ \infer*{\chi,\Gamma_1,\psi \supset \chi \Rightarrow \delta^*}{\mathcal{B}^*}}} $$ Going upwards we apply the same argument: if our open branch $\mathcal{B}^*$ goes through the left premise of $L_\supset$, then we reject this application of the rule: we skip its left premise and the whole right subtree, and thus leave $\chi$ in the succedent of a sequent, while not violating the applicability of other rules on the branch. But this means, finally, that sequent $\Gamma \Rightarrow \chi$ is provable, contrary to the assumption (the only applications of $L_\supset$ that are left are such that the left premise is a provable sequent and the considered branch goes through the right premise, ending as follows): $$ \infer*{\Gamma \Rightarrow \chi}{\infer*{\Gamma^* \Rightarrow \chi}{\infer[L_\supset]{\Gamma_1, \psi \supset \chi \Rightarrow \chi}{\infer*{\Gamma_1,\psi \supset \chi \Rightarrow \psi}{closed\ subtree} \ \ & \ \ \chi,\Gamma_1,\psi \supset \chi \Rightarrow \chi}}} $$ Hence it follows that $v_0(\chi,[\Gamma \Rightarrow \chi])=0$. The argument for $\chi = \chi_1 \equiv \chi_2$ is almost exactly the same; we only start with the observation that, as the sequents considered are not provable, it must be that $\chi_1 \neq \chi_2$. There is an additional case to consider: when $\chi$ shows up in $\Gamma^*_0,\chi \Rightarrow \delta^*$ by a b-application of a rule for identity. As in the above argument, we go up the derivation and eliminate the applications of $L_\supset$, obtaining a sequent with $\chi$ both in the antecedent and the succedent.
\end{proof} \begin{lemma} If sequent $\Rightarrow \phi$ is not provable in $\mathsf{SC_{ISCI}}$, then $[\Rightarrow \phi] \not\Vdash \phi$, where $\langle \overline{W},\overline{\leq},\Vdash \rangle$ is constructed as described above. \end{lemma} \begin{proof} We shall prove a stronger thesis from which our lemma follows. The thesis is a conjunction of the two statements: \begin{enumerate} \item for each sequent $S$ s.t. $[S] \in \overline{W}$ and each formula $\psi$ that occurs in the antecedent of $S$: $[S] \Vdash \psi$, and \item for each $[\Gamma \Rightarrow \chi] \in \overline{W}$: $[\Gamma \Rightarrow \chi] \not\Vdash \chi$. \end{enumerate} We reason by induction on the complexity of the formulas $\psi$ and $\chi$. \textbf{Base step}. Suppose that $c(\psi) = 0$; then $\psi\in \mathsf{Prop}$ or $\psi = \bot$. The latter is impossible, as the branch is open. For propositional variables in the antecedent: $\psi \in \mathsf{Form}^n_0$, hence by the definition of $v_0$, $v_0(\psi,[\psi,\Gamma \Rightarrow \chi])=1$. Hence also $v(\psi,[\psi,\Gamma \Rightarrow \chi])=1$ and $[\psi,\Gamma \Rightarrow \chi] \Vdash \psi$ by the definition of forcing determined by $v$. Suppose that $c(\chi)=0$ and $\chi$ occurs in the succedent. If $\chi=\bot$, then, clearly, $[\Gamma \Rightarrow \chi] \not\Vdash \chi$. If $\chi \in \mathsf{Prop}$, then, by Lemma \ref{c2}, $v(\chi,[\Gamma \Rightarrow \chi])=0$ and hence $[\Gamma \Rightarrow \chi] \not\Vdash \chi$. Let us observe at this point that the same holds for equations. For this reason equations will not be considered in the inductive part. \textbf{Induction hypothesis}: the thesis holds for $\psi$, $\chi$ of complexity up to $k$, where $0 \leqslant k < n$. A formula of complexity $k+1$ to be considered is thus of the form $\delta \supset \gamma$. Assume that $c(\psi)=k+1$ and $\psi$ is of the form $\delta \supset \gamma$.
Let $w^* \in W$ be such that $[\psi,\Gamma \Rightarrow \chi]\: \overline{\leq}\: w^*$; the aim is to show that $w^* \not\Vdash \delta$ or $w^* \Vdash \gamma$. Assume that $[\psi,\Gamma \Rightarrow \chi]\: \overline{\leq_0}\: w^*$. Then for some branch $\mathcal{B}$, $[\psi,\Gamma \Rightarrow \chi]\: {\leq_\mathcal{B}}\: w^*$. Since $\leq_\mathcal{B}$ is the transitive closure of $\leq_0$, this part of the proof is by (sub)induction on the length of the chain: $$[\psi,\Gamma \Rightarrow \chi] \leq_0 w_1 \leq_0 \ldots \leq_0 w_{m-1} \leq_0 w^* .$$ The argument is essentially the same in the base and inductive case, and it relies on the fact that the implications in antecedents are carried bottom-up. The base case is $m=1$ and there are further two possibilities: (c) and (d) below. For (c) and (d) the reasoning pertains to classes from one set $W_1$ associated with one sequent. \begin{itemize} \item[(c)] $[\psi,\Gamma \Rightarrow \chi] = w^*$. Rule $L_\supset$ was applied to a sequent from $[\psi,\Gamma \Rightarrow \chi]$ with respect to formula $\delta \supset \gamma$, hence set $[\psi,\Gamma \Rightarrow \chi]$ contains the left premise of $L_\supset$ with $\delta$ in the succedent, or the right premise with $\gamma$ in the antecedent. In both cases the main induction hypothesis applies, hence $w^* \not\Vdash \delta$ or $w^* \Vdash \gamma$, as required. \item[(d)] $[\Gamma \Rightarrow \chi] \leq_0 w^*$, that is, there is an application of $R_\supset$ between a sequent from $w^*$ which is the premise of $R_\supset$ and a sequent from $[\Gamma \Rightarrow \chi]$---a conclusion of $R_\supset$. Implication $\delta \supset \gamma$ is carried to the sequent-premise, hence the argument is exactly the same as for (c). \end{itemize} Subinduction hypothesis: the argument would be a repetition of the base case, hence we skip this part. Now suppose that $[\psi,\Gamma \Rightarrow \chi]\: \overline{\leq_1}\: w^*$. 
Let us recall that the derivation that is the origin for the branch determining a structure $w^*$ starts with a sequent defined by a maximum $\Gamma^M$. It means that all formulas from $\psi,\Gamma$ are transferred to the antecedents of sequents in $w^*$, hence this case reduces to (d). Finally, when the transitive closure is considered, the inductive argument is just as in the base case. We proceed to 2. Assume that $c(\chi)=k+1$ and suppose that (b) $\chi = \delta \supset \gamma$. Then we have $[\Gamma \Rightarrow \chi] \:\overline{\leq_1}\: [\Gamma^M, \delta \Rightarrow \gamma]$, $c(\delta),c(\gamma) < k+1$, hence by the inductive hypothesis $[\Gamma^M, \delta \Rightarrow \gamma] \Vdash \delta$ and $[\Gamma^M, \delta \Rightarrow \gamma] \not\Vdash \gamma$, and thus $[\Gamma \Rightarrow \chi] \not\Vdash \delta \supset \gamma$. \end{proof} It follows that \begin{theo} If a sequent $\Rightarrow \phi$ is $\mathsf{ISCI}$-valid, then it is provable in $\mathsf{SC}_\mathsf{ISCI}$. \end{theo} \section{Final remarks} In the paper we presented a sequent calculus for \textsf{ISCI} and showed that \textsf{ISCI} is decidable. Since restricted derivations are finite objects, the described procedures can be deemed constructive. We leave, however, both complexity bounds and implementation issues for future work. \end{document}
\begin{document} \title{Motif Graph Neural Network} \author{Xuexin Chen, Ruichu Cai$^\star$, Yuan Fang$^\star$, Min Wu$^\star$, Zijian Li, Zhifeng Hao \thanks{This research was supported in part by National Key R$\&$D Program of China (2021ZD0111501), National Science Fund for Excellent Young Scholars (62122022), Natural Science Foundation of China (61876043, 61976052), the major key project of PCL (PCL2021A12), Guangdong Provincial Science and Technology Innovation Strategy Fund (2019B121203012).} \IEEEcompsocitemizethanks{ \IEEEcompsocthanksitem Xuexin Chen is with the School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China. E-mail: [email protected] \IEEEcompsocthanksitem Ruichu Cai is with the School of Computer Science, Guangdong University of Technology, Guangdong Provincial Key Laboratory of Public Finance and Taxation with Big Data Application, Guangzhou, China and also with Peng Cheng Laboratory, Shenzhen 518066, China. E-mail: [email protected] \IEEEcompsocthanksitem Yuan Fang is with the School of Computing and Information Systems, Singapore Management University, 178902, Singapore. E-mail: [email protected] \IEEEcompsocthanksitem Min Wu is with the Institute for Infocomm Research (I$^{2}$R), A*STAR, 138632, Singapore. E-mail: [email protected] \IEEEcompsocthanksitem Zijian Li is with the School of Computer Science, Guangdong University of Technology, Guangzhou 510006, China. E-mail: [email protected] \IEEEcompsocthanksitem Zhifeng Hao is with the College of Science, Shantou University, Shantou 515063, China. Email: [email protected] } } \markboth{Journal of \LaTeX\ Class Files,~Vol.~14, No.~8, August~2015} {Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for IEEE Journals} \maketitle \begin{abstract} Graphs can model complicated interactions between entities, which naturally emerge in many important applications. 
These applications can often be cast into standard graph learning tasks, in which a crucial step is to learn low-dimensional graph representations. Graph neural networks (GNNs) are currently the most popular model among graph embedding approaches. However, standard GNNs in the neighborhood aggregation paradigm suffer from limited discriminative power in distinguishing \emph{high-order} graph structures as opposed to \emph{low-order} structures. To capture high-order structures, researchers have resorted to motifs and developed motif-based GNNs. However, existing motif-based GNNs still often lack sufficient discriminative power on high-order structures. To overcome the above limitations, we propose Motif Graph Neural Network (MGNN), a novel framework to better capture high-order structures, hinging on our proposed motif redundancy minimization operator and injective motif combination. First, MGNN produces a set of node representations w.r.t. each motif. The next phase is our proposed redundancy minimization among motifs, which compares the motifs with each other and distills the features unique to each motif. Finally, MGNN updates node representations by combining multiple representations from different motifs. In particular, to enhance the discriminative power, MGNN utilizes an injective function to combine the representations w.r.t. different motifs. We further show, through a theoretical analysis, that our proposed architecture increases the expressive power of GNNs. We demonstrate that MGNN outperforms state-of-the-art methods on seven public benchmarks on both node classification and graph classification tasks. \end{abstract} \begin{IEEEkeywords} Graph Neural Network, Motif, High-order Structure, Graph Representation \end{IEEEkeywords} \section{Introduction}\label{sec:intro} Graphs are capable of modeling complex interactions between entities, which naturally emerge in many real-world scenarios.
Social networks, protein-protein interaction networks, and knowledge graphs are just a few examples, with many important applications in areas like social recommendation \cite{DBLP:conf/www/Fan0LHZTY19}, drug discovery \cite{sun2020graph}, fraud detection \cite{xu2021towards}, and particle physics \cite{shlomi2020graph}. These applications can often be cast into standard graph learning tasks such as node classification, link prediction, and graph classification, in which a crucial step is to learn low-dimensional graph representations. \begin{figure} \caption{Toy example of the discriminative power of GNNs on two nodes A and B with non-isomorphic neighborhoods. } \label{fig:example} \end{figure} Graph embedding approaches can be broadly categorized into graph neural networks (GNNs) \cite{DBLP:conf/iclr/KipfW17, DBLP:journals/corr/abs-1710-10903, hamilton2017inductive}, matrix factorization \cite{DBLP:conf/kdd/OuCPZ016, DBLP:conf/cikm/CaoLX15} and skip-gram models \cite{DBLP:conf/kdd/GroverL16,DBLP:conf/kdd/PerozziAS14}. Among these, GNNs are currently the most popular model, largely owing to their ability of integrating both structure and content information through a message passing mechanism. To be more specific, in the standard GNN architecture, the representation vector of a node is computed by aggregating and updating messages (i.e., features or representation vectors) from the node’s neighbors. The aggregation can be performed recursively by stacking multiple layers, to capture long-range node dependencies. However, standard GNNs in the neighborhood aggregation paradigm suffer from limited discriminative power in distinguishing \emph{high-order} graph structures consisting of the connections between neighbors of a node, as opposed to \emph{low-order} structures consisting of the connections between the node and its neighbors. 
For example, standard GNNs cannot distinguish between nodes A and B with non-isomorphic neighborhoods in Fig.~\ref{fig:example}(a), as their neighborhoods differ only in their high-order structure. To capture high-order structures, researchers have resorted to motifs \cite{milo2002network, benson2016higher} and developed motif-based GNNs \cite{DBLP:conf/cikm/LeeRKKKR19,DBLP:conf/cikm/ZhaoZSL19,DBLP:conf/dsw/MontiOB18,DBLP:conf/icdm/SankarWKS20}. These approaches usually employ a motif-based adjacency matrix for each motif, which is constructed from the number of times two nodes are connected via an instance of the motif. Such motif-based adjacency matrices can better grasp the high-order structures. For example, given the open and closed motifs illustrated in Fig.~\ref{fig:example}(b), nodes A and B in Fig.~\ref{fig:example}(a) can be naturally distinguished by motif-based GNNs since node A is only associated with an open motif, whereas node B is associated with both open and closed motifs. However, existing motif-based GNNs often suffer from two problems. First, they overlook the \emph{redundancy} among motifs, which is defined as common edges shared by different motif instances. For example, in Fig.~\ref{fig:example}(c), the two motif instances of node B share two edges. When the redundancy is high enough, different motifs may become similar and lack specificity. Second, they often combine multiple motifs in a \emph{non-injective} manner, potentially resulting in less discriminative power on high-order structures. That is, a non-injective function, such as sum or mean, is used to combine different motifs, as shown in Fig.~\ref{fig:example}(d). In our example, node A has only an open motif with a feature value of 6, and node B has both open and closed motifs, each with a feature value of 3. However, when the motifs are combined by summing up their features, both nodes A and B would obtain the same feature representation of 6 and thus cannot be distinguished.
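The toy example above can be sketched in a few lines of Python (illustrative feature values only, mirroring Fig.~\ref{fig:example}(d); not part of the paper's implementation):

```python
# Motif features of the toy example: node A has one open motif with feature 6;
# node B has an open and a closed motif, each with feature 3.
motif_features = {
    "A": {"open": 6.0, "closed": 0.0},
    "B": {"open": 3.0, "closed": 3.0},
}

def combine_sum(feats):
    # Non-injective combination: distinct motif profiles can collapse.
    return sum(feats.values())

def combine_concat(feats):
    # Concatenation in a fixed motif order is injective: profiles stay apart.
    return (feats["open"], feats["closed"])

print(combine_sum(motif_features["A"]), combine_sum(motif_features["B"]))  # 6.0 6.0
print(combine_concat(motif_features["A"]))                                 # (6.0, 0.0)
print(combine_concat(motif_features["B"]))                                 # (3.0, 3.0)
```

Summation maps both motif profiles to 6, while concatenation preserves them as distinguishable tuples.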
Thus, the resulting node representations may also converge and further decrease the discriminative power. To overcome the above limitations, we propose Motif Graph Neural Network (MGNN), a new class of GNN capable of distinguishing high-order structures with provably better discriminative power. From a model perspective, our MGNN follows the message passing mechanism, and its procedure is broken down into the following four phases. The first phase is motif instance counting. To form a motif-based adjacency matrix, we count the number of times two nodes co-occur in an instance of each motif. To capture comprehensive high-order graph structures, MGNN employs the motif-based adjacency matrices for all the possible motifs of size three, as opposed to some previous work with only one \cite{zhao2018ranking} or some motifs \cite{DBLP:conf/icdm/SankarWKS20}. The second phase is message aggregation. MGNN, like other motif-based GNNs, aggregates node features (i.e., messages) on each motif-based adjacency matrix to produce different representations of the motifs. The first two phases are largely based on previous studies, except that we have employed all the motifs of size three to thoroughly capture high-order structures in an efficient manner. The third phase is the redundancy minimization among motifs. We address the challenge of redundancy among motifs by a proposed redundancy minimization operator, which compares the motifs with each other in terms of their representations, to distill the features specific to each motif. The fourth phase is the updating of node representations by combining multiple motifs. To improve the limited discriminative power of non-injective combinations, MGNN utilizes an injective function to combine motifs and update node representations. 
For example, to distinguish nodes A and B in Fig.~\ref{fig:example}(d), MGNN uses the injective concatenation to combine motif-based representations, so that the representation of node A is (6,0) and that of B is (3, 3), which can be differentiated apart. From a theoretical perspective, we show that MGNN is provably more expressive than standard GNN, and standard GNN is in fact a special case of MGNN. We summarize our key contributions in the following. \begin{itemize} \item We propose Motif Graph Neural Network (MGNN), a novel framework to better capture high-order structures, hinging on the motif redundancy minimization operator and injective motif combination. \item We further show that our proposed architecture increases the expressive power of GNNs with a theoretical analysis. \item We demonstrate that MGNN outperforms existing standard or motif-based GNNs on seven public benchmarks on both node classification and graph classification tasks. \end{itemize} \section{Related Work} Standard Graph Neural Networks follow the message passing paradigm to leverage node dependence and learn node representations. Different GNN models resort to different aggregation functions to aggregate the messages (i.e., features) for each node from its neighbors, and update its representation \cite{DBLP:conf/iclr/KipfW17, DBLP:journals/corr/abs-1710-10903, hamilton2017inductive, DBLP:conf/icml/GilmerSRVD17, DBLP:journals/corr/abs-1806-01261}\chen{, \cite{gan2022multigraph, he2022optimizing, gong2022self, guo2021bilinear}}. For example, Graph Convolutional Networks \cite{DBLP:conf/iclr/KipfW17} use mean aggregation to pool neighborhood information. Graph Attention Networks \cite{DBLP:journals/corr/abs-1710-10903} aggregate neighborhood information with trainable attention weights. GraphSAGE \cite{hamilton2017inductive} uses mean, max or other learnable pooling function. 
Moreover, during aggregation, Message Passing Neural Networks \cite{DBLP:conf/icml/GilmerSRVD17} also incorporate edge information, while Graph Networks \cite{DBLP:journals/corr/abs-1806-01261} \chen{and multi-graph fusion-based dynamic GCN~\cite{ gan2022multigraph}} further consider global graph information. \chen{In \cite{he2022optimizing} and \cite{gong2022self}, GNNs are developed to use an aggregation strategy based on the Hilbert-Schmidt independence criterion and a self-paced label augmentation strategy, respectively.} Some graph-level downstream tasks, such as graph classification, further employ a readout function to aggregate individual node representations into a whole-graph representation. The readout can be a simple permutation-invariant function such as averaging or summation, while more sophisticated graph pooling methods have also been proposed, including global pooling \cite{DBLP:journals/corr/VinyalsBK15,DBLP:journals/corr/LiTBZ15,zhang2018end} and hierarchical pooling \cite{DBLP:journals/pami/DhillonGK07,DBLP:conf/aaai/RanjanST20,diehl2019towards,DBLP:conf/icml/GaoJ19,DBLP:conf/nips/YingY0RHL18}. \chen{Besides graph-level downstream tasks, GNNs can also be applied to other tasks such as visual question answering~\cite{guo2021bilinear}.} However, all these models are limited to capturing only low-order graph structures around every node. In fact, standard GNNs are at most as powerful as the $1$-dimensional Weisfeiler-Leman ($1$-WL) graph isomorphism test \cite{DBLP:conf/iclr/XuHLJ19}, which implies that they cannot distinguish nodes with isomorphic low-order graph structures but different high-order structures. In other words, standard GNNs will always associate such nodes with the same representation. To improve the discriminative power of GNNs, it is a common practice to leverage high-order graph structures such as motifs \cite{milo2002network, benson2016higher}.
In particular, motif-based GNN models use one \cite{zhao2018ranking, DBLP:conf/cikm/ZhaoZSL19}\chen{, \cite{DBLP:journals/corr/abs-2205-00867}} or more \cite{DBLP:conf/dsw/MontiOB18,DBLP:conf/icdm/SankarWKS20,DBLP:conf/bigdataconf/DareddyDY19,DBLP:journals/corr/abs-1711-05697,DBLP:conf/cikm/LeeRKKKR19}\chen{, \cite{wang2022graph}, \cite{yang2022graph}} motif-based adjacency matrices to perform message passing. When multiple motif-based adjacency matrices are used, the combination functions w.r.t.~multiple motifs include summation \cite{DBLP:conf/dsw/MontiOB18}\chen{, \cite{wang2022graph}}, averaging \cite{DBLP:conf/icdm/SankarWKS20}, neighborhood aggregation \cite{DBLP:conf/bigdataconf/DareddyDY19}, fusion by a fully connected layer \cite{DBLP:journals/corr/abs-1711-05697}, selection by reinforcement learning \cite{DBLP:conf/cikm/LeeRKKKR19}, \chen{combination by a variant of a recurrent neural network \cite{yang2022graph}}, and so on. However, none of these combination functions is injective, so they cannot sufficiently differentiate high-order structures. \chen{Note that although the model in \cite{yang2022graph} does not employ an injective function, it still effectively captures the high-order structure of the nodes, through a strategy of encoding neighbors' features sequentially and a variant of the recurrent neural network to learn the node representations.} Moreover, none of these models takes into account the redundancy among motif instances. In another line, several studies \cite{maron2019provably, DBLP:conf/aaai/0001RFHLRG19, morris2020weisfeiler} attempt to extend the discriminative power of GNNs from 1-WL to $k$-WL, given that the higher the dimension of WL, the stronger the discriminative power. Like standard GNNs, these $k$-WL approaches also employ a message propagation mechanism; the difference is that messages are propagated not between nodes but between $k$-tuples (or subgraphs with $k$ nodes).
Since their message propagation is not between nodes, they \cite{maron2019provably,DBLP:conf/aaai/0001RFHLRG19,morris2020weisfeiler} have the following shortcomings compared with our MGNN. First, unlike MGNN, they cannot generate node embeddings, which limits their application to node-level tasks such as node classification. Second, their time complexity is higher than that of MGNN. The time complexity of MGNN is $\mathcal{O}(|\mathcal{V}|^2)$ (see Section~\ref{sec:model_train}), while their time complexity is $\mathcal{O}(|\mathcal{V}|^3)$ \cite{maron2019provably} or even $\mathcal{O}(|\mathcal{V}|^4)$ in the worst case \cite{DBLP:conf/aaai/0001RFHLRG19,morris2020weisfeiler}. Third, \cite{DBLP:conf/aaai/0001RFHLRG19,morris2020weisfeiler} have a space complexity of $\mathcal{O}(|\mathcal{V}|^3)$, which is also higher than the $\mathcal{O}(|\mathcal{V}|^2)$ needed by MGNN. There are also several approaches employing high-order structures, in which each node receives messages from its multi-hop neighbors, such as MixHop~\cite{abu2019mixhop}, GDC~\cite{klicpera2019diffusion}, CADNet~\cite{lim2021class}, \chen{PathGCN~\cite{flam2021neural}, SE-aggregation~\cite{zhang2021learning} and MBRec~\cite{xia2022multi}}. However, like standard GNNs, they are \chen{typically} at most as powerful as the $1$-WL test in distinguishing graph structures~\chen{\cite{zhang2021learning}}. \section{Preliminaries} In this section, we introduce major notations and definitions of related concepts. \subsection{Notations and Problem Formulation} A graph is denoted by $G=(\mathcal{V}, \mathcal{E})$, with the set of nodes $\mathcal{V}$ and the set of edges $\mathcal{E}$. Let $\mathbf{A} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ be the adjacency matrix of $G$, and $\mathbf{X} \in \mathbb{R}^{|\mathcal{V}| \times d_0}$ be the node feature matrix of $G$, whose $i$-th row contains the features of node $i$, denoted by $\mathbf{x}_i$.
We use $(\mathbf{T})_{ij}$ to represent the element in the $i$-th row and the $j$-th column of a matrix $\mathbf{T}$, and $(\mathbf{T})_{i*}$ to represent the entire $i$-th row. In this paper, we investigate the problem of graph representation learning, which aims to embed nodes into a low-dimensional space. The node embeddings can be used for downstream tasks such as node classification, potentially in an end-to-end fashion. Formally, a node embedding model is denoted by a function $\psi: \mathcal{V} \to \mathcal{H}$ that maps the nodes in $\mathcal{V}$ to $d$-dimensional vectors in $\mathcal{H} = \{\mathbf{h}_i \in \mathbb{R}^d|1\le i\le |\mathcal{V}| \}$, where $i$ denotes the node index. \begin{figure} \caption{An example of a 3-node ($k=3$) network motif, along with its adjacency matrix $\mathbf{B}_{M_1}$. } \label{fig:motif_def} \end{figure} \begin{figure} \caption{All 3-node motifs in a directed and unweighted graph.} \label{fig:motif} \end{figure} \subsection{Motif and Motif-based Adjacency Matrix} We work with directed motifs because they allow us to describe more complex structures. Specifically, we first introduce the definition of motif~\cite{milo2002network, zhao2018ranking, benson2016higher} as follows. \begin{definition}\label{def:motif} (Network motif). A motif $M$ is a connected graph of $n$ nodes ($n>1$), with an $n \times n$ adjacency matrix $\mathbf{B}_M$ containing binary elements $\{0,1\}$. \end{definition} An example of a 3-node motif is given in Fig.~\ref{fig:motif_def}. In particular, a motif with three or more nodes (i.e., $n \ge 3$) can capture high-order graph structures. Moreover, for a given node, the high-order structure captured by its motifs with $n > 3$ nodes (i.e., not only the edges incident to its neighboring nodes but also the edges between its neighboring nodes) can be similarly captured by multiple $3$-node motifs. Thus, a node's 3-node motifs have sufficient capacity to represent high-order structures.
As shown in Fig.~\ref{fig:motif}, we enumerate a total of thirteen 3-node motifs. Therefore, we only utilize motifs with $n=3$ nodes in this work. Given the above motif definition, we can further define the set of motif instances as follows. \begin{definition}\label{def:motif_instance} (Motif instance). Consider an edge set $\mathcal{E}'$ and the subgraph $G[\mathcal{E}']$ induced from $\mathcal{E}'$ in $G$. If $G[\mathcal{E}']$ and a motif $M_k$ are isomorphic \cite{babai2018group}, written as $M_k \simeq G[\mathcal{E}']$, then \begin{equation*} m(\mathcal{E}') = \{(\mathbf{x}_{u}, \mathbf{x}_{v}) \big| (u, v) \in \mathcal{E}' \} \end{equation*} is an \emph{instance} of the motif $M_k$, where $u, v$ are two adjacent nodes that form an edge in $\mathcal{E}'$, and $\mathbf{x}_{u}$ denotes the $u$-th row of $\mathbf{X}$ (i.e., the feature vector of node $u$). \end{definition} For example, a motif instance of $M_1$ in Fig.~\ref{fig:motif_def} is $\{(\mathbf{x}_1, \mathbf{x}_3), (\mathbf{x}_2, \mathbf{x}_1), (\mathbf{x}_3, \mathbf{x}_2) \}$. \begin{definition}\label{def:motif_instance_set} (Motif instance set). On a graph $G=(\mathcal{V}, \mathcal{E})$, the set of instances of motif $M_k$, denoted as $\mathcal{M}_k$, is defined by \begin{equation*} \mathcal{M}_k = \{ m(\mathcal{E}') | \mathcal{E}' \subseteq \mathcal{E}, |\mathcal{E}'|=r, M_k \simeq G[\mathcal{E}'] \}, \end{equation*} where $r$ is the number of edges in the motif $M_k$, and $\mathcal{E}'$ ranges over all $r$-combinations of the edge set $\mathcal{E}$. \end{definition} Based on the motif instances, the definition of the motif-based adjacency matrix is given as follows. \begin{definition}\label{def:motif_adj} (Motif-based adjacency matrix).
Given a motif $M_k$ and its set of instances $\mathcal{M}_k$, the corresponding motif-based adjacency matrix $\mathbf{A}_k$ is defined by \begin{equation}\label{equ:motif_adj} (\mathbf{A}_k)_{ij} = \sum_{m \in \mathcal{M}_k} \mathbb{I}((\mathbf{x}_i, \mathbf{x}_j) \in m), \end{equation} where $\mathbb{I}(\cdot)$ is an indicator function, i.e., $\mathbb{I}(x)=1$ if the statement $x$ is true and $0$ otherwise. \end{definition} Intuitively, $(\mathbf{A}_k)_{ij}$ is the number of times two nodes $i$ and $j$ are connected via an instance of the motif $M_k$. \begin{figure*} \caption{Overview of an MGNN layer. The MGNN layer takes the graph and its node feature matrix as inputs, proceeds through phases (a)--(d), and finally outputs the node representation matrix that captures the high-order graph structure. The first two phases generate the $M_k$-based representations of nodes. The key idea of phase (c) is to compare the motifs with each other and distill the features specific to each motif. In phase (d), updated $M_k$-based representations are combined by an injective concatenation operation. } \label{fig:model} \end{figure*} \section{Proposed Approach}\label{sec:approach} In this section, we introduce the proposed approach. We first present an overall framework of our approach, followed by its four phases in detail. Finally, we discuss the overall objective function for model training. \subsection{Overall Framework} We propose Motif Graph Neural Network (MGNN) that can model high-order structures with provably better discriminative power. Specifically, our MGNN follows a message passing mechanism, and its procedure is broken down into the following four phases. The first phase involves the construction of a motif-based adjacency matrix, as shown in Fig.~\ref{fig:model}(a). Given a motif, its motif-based adjacency matrix captures the number of times each pair of nodes is connected via an instance of the motif.
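As a sanity check of Definition~\ref{def:motif_adj}, $\mathbf{A}_k$ can be computed by brute-force enumeration of motif instances. The sketch below uses hypothetical toy data and the directed 3-cycle motif $M_1$; enumerating ordered node triples like this is what incurs the worst-case $O(|\mathcal{V}|^3)$ cost:

```python
import itertools
import numpy as np

# Tiny directed graph as an adjacency matrix (hypothetical example data):
# edges 0->2, 1->0, 2->1, forming a single directed 3-cycle.
A = np.array([
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
])

n = A.shape[0]
A_M1 = np.zeros_like(A)  # motif-based adjacency matrix for the 3-cycle M1

seen = set()
for a, b, c in itertools.permutations(range(n), 3):
    if A[a, b] and A[b, c] and A[c, a]:          # candidate instance of M1
        inst = frozenset([(a, b), (b, c), (c, a)])
        if inst in seen:                          # same cycle, rotated
            continue
        seen.add(inst)
        for (u, v) in inst:                       # Eq. (1): count co-occurrences
            A_M1[u, v] += 1

print(A_M1)
```

On this toy graph the single instance of $M_1$ contributes one count to each of its three edges, matching Definition~\ref{def:motif_adj}.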
Thus, we need an efficient counting algorithm for motif instances. In MGNN, we consider all 13 motifs of size three, namely $M_1,M_2,\ldots,M_{13}$ given by Fig.~\ref{fig:motif}, and correspondingly construct 13 motif-based adjacency matrices $\mathbf{A}_1,\mathbf{A}_2,\ldots,\mathbf{A}_{13}$. The second phase is message aggregation, as shown in Fig.~\ref{fig:model}(b). MGNN aggregates node features (i.e., messages) on each motif-based adjacency matrix to produce a set of node representations w.r.t.~each motif. The first two phases of motif instance counting \cite{zhao2018ranking} and message aggregation \cite{DBLP:conf/dsw/MontiOB18, DBLP:journals/corr/abs-1711-05697} are largely similar to previous works, except that we have employed all the motifs of size three to comprehensively capture high-order structures in an efficient manner. In the second phase, we follow previous work completely. The third phase is the redundancy minimization among motifs, as shown in Fig.~\ref{fig:model}(c). We propose a redundancy minimization operator, which compares the motifs with each other and distills the features unique to each motif. The fourth phase updates node representations by combining multiple representations from different motifs, as shown in Fig.~\ref{fig:model}(d). In particular, to enhance the discriminative power, MGNN utilizes an injective function to combine the representations w.r.t.~different motifs. \subsection{Motif-based Adjacency Matrix Construction}\label{sec:adj} \begin{figure} \caption{Overview for constructing the $M_9$-based adjacency matrix by the enumeration and non-enumeration methods.} \label{fig:motif_adj_construct} \end{figure} The key step to constructing a motif-based adjacency matrix is to efficiently count the number of motif instances. Depending on whether the motif is open ($M_8$--$M_{13}$) or closed ($M_1$--$M_7$), different counting algorithms apply.
For open motifs ($M_8$--$M_{13}$), existing methods \cite{zhang2019local,ribeiro2021survey} are often implemented by enumerating motif instances. For example, given the graph in Fig.~\ref{fig:motif_adj_construct}(a), to construct the $M_9$-based adjacency matrix, a traditional technique is to enumerate the instances of $M_9$ as shown in Fig.~\ref{fig:motif_adj_construct}(b). However, such enumeration suffers from high computational complexity, with a worst-case complexity of $O(|\mathcal{V}|^3)$ in both space and time. To reduce the complexity, we propose an adjacency matrix construction method for open motifs without enumerating any motif instance, which has a time and space complexity of $O(|\mathcal{V}|^2)$ and $O(|\mathcal{V}|)$, respectively. Consider a node $v$. Let $u_\text{in}$, $u_\text{out}$, and $u_\text{bi}$ denote an incoming, outgoing, and bi-directional neighbor of node $v$, respectively. Correspondingly, let $d_\text{in}$, $d_\text{out}$ and $d_\text{bi}$ denote the number of each type of neighbor of $v$, respectively, as illustrated by the examples in Fig.~\ref{fig:motif_adj_construct}(a). As shown in Fig.~\ref{fig:motif}, the center node of each open motif has at most two types of neighbors; for example, $M_9$ has $u_\text{out}$ and $u_\text{in}$, and $M_{13}$ has only $u_\text{bi}$. Our key observation is that $(\mathbf{A}_k)_{vu}$, the number of times two nodes ($v$ and $u$) are connected via an instance of an open motif $M_k$, can be computed as follows. On one hand, when the motif has two types of neighbors, $(\mathbf{A}_k)_{vu}$ will be equal to the number of the other type of neighbors, e.g., $(\mathbf{A}_9)_{vu_\text{in}} = d_\text{out}$, $(\mathbf{A}_9)_{vu_\text{out}} = d_\text{in}$, $(\mathbf{A}_{11})_{vu_\text{out}} = d_\text{bi}$, $(\mathbf{A}_{12})_{vu_\text{in}}=d_\text{bi}$, $(\mathbf{A}_{11})_{vu_\text{bi}} = d_\text{out} - 1$, $(\mathbf{A}_{12})_{vu_\text{bi}} = d_\text{in} - 1$. 
On the other hand, when the motif has only one type of neighbors, $(\mathbf{A}_k)_{vu}$ will be equal to the number of neighbors of that type other than $u$ itself, e.g., $(\mathbf{A}_8)_{vu_\text{out}} = d_\text{out} - 1$, $(\mathbf{A}_{10})_{v u_\text{in}} = d_\text{in} - 1$, $(\mathbf{A}_{13})_{vu_\text{bi}} = d_\text{bi} - 1$. Still using Fig.~\ref{fig:motif_adj_construct} as an example, node B is an incoming neighbor ($u_{\text{in}}$) of node A, while C and D are the outgoing neighbors ($u_{\text{out}}$) of node A. Furthermore, $(\mathbf{A}_9)_{AB}$ satisfies $(\mathbf{A}_9)_{vu_\text{in}}=d_\text{out}$ (denoting node A as $v$), and $(\mathbf{A}_9)_{AC}$ or $(\mathbf{A}_9)_{AD}$ satisfies $(\mathbf{A}_9)_{vu_\text{out}} = d_\text{in}$. The motif-based adjacency matrix for a closed motif ($M_1$--$M_7$) can be constructed by an existing method \cite{zhao2018ranking} with a time and space complexity of $O(|\mathcal{V}|^3)$ and $O(|\mathcal{V}|^2)$, respectively. This method counts $(\mathbf{A}_k)_{vu}$ through two matrix multiplication operations, and the matrices it uses can be stored in the HDF5 format \cite{hdf1997hierarchical}. \subsection{Motif-wise Message Aggregation} To produce the motif-wise node representations, on each motif-based adjacency matrix, node features (i.e., messages) are incorporated into a multi-layer message aggregation mechanism, as shown in Fig.~\ref{fig:model}(b).
Specifically, the motif $M_k$-based representation of node $v$ in the $l$-th layer is given by \begin{equation}\label{equ:maf} \mathbf{h}_{v, k}^{(l)} = \operatorname{AGG}\left( \left\{ \alpha^{(l)}_{k,vi}\cdot (\tilde{\mathbf{A}}_k)_{vi} (\mathbf{Z}^{(l)})_{i*}| i \in \mathcal{N}(v) \right\} \right), \end{equation} \begin{equation}\label{equ:gcn} \mathbf{Z}^{(l)} = \tilde{\mathbf{A}}\mathbf{H}^{(l-1)}\mathbf{W}^{(l)}, \end{equation} where $\mathbf{H}^{(l-1)} \in \mathbb{R}^{|\mathcal{V}| \times d_{l-1}}$ denotes the node messages from the previous $(l-1)$-th layer with $\mathbf{H}^{(0)} = \mathbf{X}$, and $\mathbf{W}^{(l)} \in \mathbb{R}^{d_{l-1} \times d_l}$ is the trainable weight matrix in the $l$-th layer. $\tilde{\mathbf{A}}$ is the normalized adjacency matrix given by $\tilde{\mathbf{A}} = \hat{\mathbf{A}} - \frac{\hat{\lambda}_{\max}}{2}\mathbf{I}$, where $\hat{\mathbf{A}} = \mathbf{D}^{-0.5}\mathbf{A}\mathbf{D}^{-0.5}$, $\mathbf{D}$ is a diagonal matrix whose diagonal elements are defined as $(\mathbf{D})_{ii} = \sum_{j=1}^{|\mathcal{V}|} (\mathbf{A})_{ij}$, and $\hat{\lambda}_{\max}$ is the largest eigenvalue of $\hat{\mathbf{A}}$. This normalization technique helps centralize the Laplacian's eigenvalues and reduce the bound on the spectral radius~\cite{DBLP:conf/nips/WijesingheW19}. The motif $M_k$-based adjacency matrix $\mathbf{A}_k$ is normalized in the same way into $\tilde{\mathbf{A}}_k$. AGG is a message aggregation function (e.g., sum, mean or max) of $\mathbf{H}^{(l-1)}$ and $\tilde{\mathbf{A}}_k$, as illustrated in Fig.~\ref{fig:model}(b). The coefficient $\alpha^{(l)}_{k,vi}$ is the attention weight that indicates the importance of node $i$'s messages to node $v$; it can be assigned a constant value according to prior knowledge or computed by an attention mechanism \cite{DBLP:journals/corr/abs-1710-10903}. $\mathcal{N}(v)$ represents the set of neighboring nodes of $v$.
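A minimal dense NumPy sketch of Eqs.~\eqref{equ:maf}--\eqref{equ:gcn}, assuming sum aggregation and a constant attention weight, and applying $\tilde{\mathbf{A}}_k$ as a full matrix product rather than restricting to $\mathcal{N}(v)$ (a simplification for illustration, not the authors' implementation):

```python
import numpy as np

def normalize(A):
    # A_hat = D^{-1/2} A D^{-1/2}, then A_tilde = A_hat - (lambda_max / 2) I
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5   # guard isolated nodes
    A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lam_max = np.abs(np.linalg.eigvals(A_hat)).max()
    return A_hat - 0.5 * lam_max * np.eye(A.shape[0])

def motif_layer(A, A_k, H, W, alpha=1.0):
    # Eq. (3): Z = A_tilde H W (a stacked GCN-style update), then Eq. (2)
    # with AGG = sum and constant attention alpha: H_k = alpha * A_tilde_k Z.
    Z = normalize(A) @ H @ W
    return alpha * (normalize(A_k) @ Z)

# Toy usage with random data (hypothetical sizes).
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)    # base adjacency matrix
A_k = (rng.random((5, 5)) < 0.2).astype(float)  # one motif-based matrix
H = rng.standard_normal((5, 6))                 # previous-layer messages
W = rng.standard_normal((6, 3))                 # layer weights
print(motif_layer(A, A_k, H, W).shape)          # (5, 3)
```

Stacking one such call per motif yields the 13 motif-wise representation matrices that the subsequent phases consume.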
Note that not all nodes will have 13 motifs, and MGNN can still accommodate such nodes. In particular, if node $v$ lacks a motif $M_k$, the entries $(\mathbf{A}_k)_{vj}$ and $(\mathbf{A}_k)_{jv}$ in Eq.~\eqref{equ:motif_adj} are all set to zero, and subsequently, $\mathbf{h}^{(l)}_{v,k}$ in Eq.~\eqref{equ:maf} will also be a zero vector. Intuitively, in Eq.~\eqref{equ:maf}, before performing motif-wise aggregation for the motif $M_k$, we first stack a GCN layer \cite{DBLP:conf/iclr/KipfW17}, i.e., $\mathbf{Z}^{(l)} = \tilde{\mathbf{A}}\mathbf{H}^{(l-1)}\mathbf{W}^{(l)}$ in Eq.~\eqref{equ:gcn}, to update the overall node messages by aggregating from the previous layer. The GCN layer can also be replaced by other GNN layers. \subsection{Motif Redundancy Minimization} As different motifs often share certain substructures, their corresponding motif-wise representations may become similar and lack specificity. Inspired by the idea of redundancy minimization between features \cite{DBLP:journals/pami/PengLD05}, we propose a redundancy minimization operator at the motif level, denoted by $\Delta$. The key idea of $\Delta$ is to compare the motifs with each other and adaptively distill the features specific to each motif. We formally define $\Delta$ as follows. Given a node $v$, for simplicity, let $\mathbf{h}_k$ and $\mathbf{z}_v$ denote $\mathbf{h}^{(l)}_{v, k}$ and $(\mathbf{Z}^{(l)})_{v*}$, respectively. We collectively refer to the motif- and GCN-based representations as the intermediate representations of the node. \begin{definition}\label{def:op} (Motif redundancy minimization operator). For any node $v$, given its intermediate representations $\mathbf{h}_1$, ..., $\mathbf{h}_{13}, \mathbf{z}_v$, let $\bar{\mathcal{H}}_k=\Big(\big\|_{i=1,i\ne k}^{13} \mathbf{h}_i\Big) \big\| \mathbf{z}_v$, where $\|$ is the concatenation operator. In other words, $\bar{\mathcal{H}}_k$ concatenates all the intermediate representations except the one based on motif $M_k$.
Then, for motif $M_k$, its redundancy minimized representation of the node $v$ is given by \begin{equation}\label{equ:op} \begin{aligned} \tilde{\mathbf{h}}_k&=\Delta(k, \mathbf{h}_1, ..., \mathbf{h}_{13}, \mathbf{z}_v)\\ &= \sigma \Big(\beta_k \cdot \big( f(\mathbf{h}_k) - f_k(\bar{\mathcal{H}}_k) \big) \Big). \end{aligned} \end{equation} $\tilde{\mathbf{h}}_k$ is the updated representation of $\mathbf{h}_k$ after redundancy minimization. $f: \mathbb{R}^d \to \mathbb{R}^{d'}$ is a learnable projection function to map the intermediate motif-based representations to the same space as its redundant features. And $f_k: \mathbb{R}^{13d} \to \mathbb{R}^{d'}$ is a learnable feature selection function, which selects the redundant features w.r.t.~motif $M_k$. $\beta_k$ is the similarity between $f(\mathbf{h}_k)$ and $f_k(\bar{\mathcal{H}}_k)$, which acts as a regularizer to prevent extremely small or large differences between the two terms. $\sigma$ is an activation function (e.g., ReLU). \end{definition} Intuitively, in Eq.~\eqref{equ:op}, the motif redundancy minimization operator subtracts or removes redundant features w.r.t.~each motif from the intermediate representations of a given node. Apart from minimizing the redundancy, the operator also performs an adaptive selection of motifs in general. That is, for an unimportant motif $M_k$, this operator will make $\tilde{\mathbf{h}}_{k}$ in Eq.~\eqref{equ:op} close to a zero vector through functions $f$ and $f_k$. In particular, when $\tilde{\mathbf{h}}_{k}$ is a zero vector, it is equivalent to removing the instance of $M_k$ containing node $v$ in Eq.~\eqref{equ:motif_adj}. In Section~\ref{sec:case}, we will use a heatmap to demonstrate this adaptive selection mechanism, which improves the robustness of MGNN. To realize the motif redundancy minimization operator, we need to instantiate $f$, $f_k$ and $\beta_k$ in Eq.~\eqref{equ:op}. 
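Under one concrete instantiation, namely linear maps for $f$ and $f_k$, an inner-product similarity for $\beta_k$, and ReLU as $\sigma$ (consistent with Eqs.~\eqref{equ:f1}--\eqref{equ:att}), a minimal sketch of $\Delta$ for a single node could look as follows; all parameter names and shapes are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def delta(k, H_list, z_v, W_f, b_f, W_fk, b_fk):
    """Sketch of the operator Delta in Eq. (op) for one motif M_k.
    H_list holds the 13 motif-based representations h_1..h_13 of a node,
    z_v is its GCN-based representation; W_f/b_f and W_fk/b_fk are
    hypothetical parameters for f and f_k."""
    h_k = H_list[k]
    # \bar{H}_k: concatenate all intermediate representations except h_k
    H_bar = np.concatenate([h for i, h in enumerate(H_list) if i != k] + [z_v])
    p = W_f @ h_k + b_f            # f(h_k)
    q = W_fk @ H_bar + b_fk        # f_k(\bar{H}_k), motif-specific selector
    beta = sigmoid(p @ q)          # similarity beta_k (inner product + sigmoid)
    return np.maximum(beta * (p - q), 0.0)   # sigma(beta_k * (f - f_k)), ReLU
```

Note how the redundant features $f_k(\bar{\mathcal{H}}_k)$ are subtracted before the activation, so a motif whose projection matches its redundant counterpart yields a near-zero vector.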
In particular, we use a fully connected layer to fit $f$ and $f_k$, namely, \begin{equation}\label{equ:f1} f(\mathbf{h}_k) = \mathbf{W}_f^{(l)} \mathbf{h}_{k} + \mathbf{b}_f^{(l)}, \end{equation} \begin{equation}\label{equ:f2} f_k(\bar{\mathcal{H}}_k) = \mathbf{W}^{(l)}_{f_k} \bar{\mathcal{H}}_k + \mathbf{b}_{f_k}^{(l)}, \end{equation} where $\mathbf{W}_f^{(l)} \in \mathbb{R}^{d'_{l} \times d_{l}}$ is a trainable matrix in the $l$-th layer shared by all motifs, and $\mathbf{W}^{(l)}_{f_k} \in \mathbb{R}^{d'_{l} \times (13 d_{l})}$ is a trainable matrix specific to motif $M_k$ in the $l$-th layer, and $\mathbf{b}_f^{(l)} \in \mathbb{R}^{d'_l}$ and $\mathbf{b}_{f_k}^{(l)} \in \mathbb{R}^{d'_l}$ are the corresponding bias vectors. Furthermore, to measure the similarity $\beta_k$, we use the inner product with a non-linear activation (e.g., sigmoid or tanh), that is, \begin{equation}\label{equ:att} \beta_k= \sigma \big(f(\mathbf{h}_k)^\intercal f_k(\bar{\mathcal{H}}_k)\big). \end{equation} $\sigma$ in Eq.\eqref{equ:op} and Eq.\eqref{equ:att} might be the same or distinct. Through the above instantiations, we can minimize the redundancy among motifs as shown in Fig.~\ref{fig:model}(c), for every node in every layer. That is, \begin{equation}\label{equ:instance} \tilde{\mathbf{h}}_{v, k}^{(l)} =\Delta\big(k, \mathbf{h}_{v, 1}^{(l)}, ..., \mathbf{h}_{v, 13}^{(l)}, (\mathbf{Z}^{(l)})_{v*}\big), \end{equation} where $\tilde{\mathbf{h}}_{v, k}^{(l)} \in \mathbb{R}^{d'_l}$ is the updated representation of node $v$ based on motif $M_k$ in the $l$-th layer. \subsection{Node Representation Update via Injective Function}\label{sec:update} As shown in Fig.~\ref{fig:model}(d), MGNN updates the node representation by combining their intermediate, motif-based representations. 
To improve the discriminative power on high-order structures, MGNN utilizes an injective function to combine the different motif-based representations of each node and update the output node representations in each layer. Specifically, we use the injective vector concatenation function to generate the output node representation in the $l$-th layer as follows. \begin{equation}\label{equ:concat} \mathbf{h}^{(l)}_v = \big\|_{k=1}^{13} \tilde{\mathbf{h}}_{v, k}^{(l)}, \end{equation} where $\mathbf{h}^{(l)}_v \in \mathbb{R}^{13 d_l}$ is the output representation of node $v$ in the $l$-th layer. The following two properties of the concatenation function are essential to increasing the expressive power of MGNN. First, the output node representation $\mathbf{h}_v^{(l)}$ will not change if the order of concatenation and aggregation is interchanged. Second, $\mathbf{h}_v^{(l)}$ can always explicitly preserve each motif-based feature embedding via the injective combination. Using these two properties, we can theoretically show that MGNN has a larger representation capacity than the standard GNN, as we will further discuss in Section~\ref{sec:theorem}. \subsection{Model Training}\label{sec:model_train} The node representations generated by MGNN can be used for various downstream learning tasks, including supervised and unsupervised learning. For supervised learning, the node representations can be directly used as features for a specific downstream task, optimized with a supervised loss that can be abstracted as \begin{equation}\label{equ:suploss} \mathcal{L}(\mathbf{Y}, \hat{\mathbf{Y}}), \end{equation} \begin{equation}\label{equ:predict_func} \hat{\mathbf{Y}} = \Phi(\mathbf{H}^{(L)}), \end{equation} where $\hat{\mathbf{Y}}$ is the predicted matrix. $\mathbf{H}^{(L)}$ is the node representation matrix generated by the last or $L$-th MGNN layer, such that its $i$-th row is the embedding vector $\mathbf{h}^{(L)}_i$ of node $i$ in Eq.~\eqref{equ:concat}.
The loss function $\mathcal{L}$, prediction function $\Phi$ and the ground truth $\mathbf{Y}$ depend on the specific downstream task. Taking node classification as an example, the loss can be the cross-entropy loss over the training samples, as follows. \begin{equation}\label{equ:nodeclsloss} \sum_{i \in \mathcal{Y}} \sum^{n_c}_{j=1} - (\mathbf{Y})_{ij} \log (\mathbf{\tilde{H}}^{(L)})_{ij}, \end{equation} where $\mathcal{Y}$ is the set of training node indices, $n_c$ denotes the number of classes, $\mathbf{Y}$ is the ground truth matrix such that its $i$-th row is the one-hot label vector of node $i$, and $\mathbf{\tilde{H}}^{(L)}$ is the predicted matrix such that its $i$-th row is the predicted class distribution of node $i$, which can be obtained by taking a softmax activation or additional neural network layers as the prediction function $\Phi$ and passing $\mathbf{H}^{(L)}$ through $\Phi$. Another common supervised task is graph classification, which can use a similar cross-entropy loss and prediction function, but the node representations must first undergo a readout operation \cite{DBLP:conf/iclr/XuHLJ19} to generate the graph-level representations. For unsupervised learning, the node representations can be trained through the graph auto-encoder \cite{DBLP:journals/corr/KipfW16a} or other self-supervised frameworks \cite{you2020graph} without any task-specific supervision. Algorithm \ref{alg:mgnn} summarizes the framework of MGNN. To be more specific, MGNN takes the node features, the normalized adjacency matrix, and the motif-based adjacency matrices for all possible motifs as inputs. The construction of the motif-based adjacency matrices is based on our proposed method and another method in the literature, as described in Section~\ref{sec:adj}. MGNN then propagates the node representations (or node input features) layer by layer.
In each layer, first, from line 4 to line 7, MGNN produces the $M_k$-based representation $\mathbf{h}^{(l)}_{v,k}$ of node $v$ by performing message aggregation. Second, from line 8 to line 14, MGNN compares the $M_k$-based representations with one another using the motif redundancy minimization operator, to distill the features specific to each motif. Third, in line 15, MGNN utilizes the injective concatenation to combine the $M_k$-based representations and update the representation of the node. Finally, in line 18, the set of output representations of each node is returned. The computational complexity of one MGNN layer is $\mathcal{O}(|\mathcal{V}|^2)$, as follows. Firstly, the complexity is dominated by the computations in Eqs.~\eqref{equ:maf} and \eqref{equ:gcn}, whose time complexities are $\mathcal{O}(|\mathcal{V}|d)$ per node and $\mathcal{O}(|\mathcal{V}|^2d)$ in total, respectively ($d$ denotes the dimension of an MGNN layer). Hence, when computing Eq.~\eqref{equ:maf} over all the nodes, the complexity is $\mathcal{O}(|\mathcal{V}|^2d)$. Secondly, in our implementation, Eq.~\eqref{equ:gcn} can be pre-calculated before Eq.~\eqref{equ:maf}. Therefore, the overall time complexity of MGNN is $\mathcal{O}(|\mathcal{V}|^2d)$, which can be further simplified to $\mathcal{O}(|\mathcal{V}|^2)$ as $d$ is typically a small constant.
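The per-layer computation summarized in Algorithm~\ref{alg:mgnn} can be sketched in a dense, vectorized form as follows. The toy setup, namely two motif matrices instead of 13, constant attention weights $\alpha^{(l)}_{k,vi}=1$, sum aggregation, and a pluggable \texttt{delta\_fn} standing in for the operator $\Delta$, is our simplification for illustration, not the released implementation:

```python
import numpy as np

def mgnn_layer(A_tilde, A_k_list, H, W, delta_fn):
    """Sketch of one MGNN layer: GCN-style update (Eq. gcn, precomputed
    once), per-motif sum aggregation (Eq. maf, alpha = 1), redundancy
    minimization via delta_fn, and injective concatenation (Eq. concat)."""
    Z = A_tilde @ H @ W                        # Z^{(l)}, computed before Eq. (maf)
    H_motif = [A_k @ Z for A_k in A_k_list]    # sum aggregation per motif
    out = []
    for v in range(H.shape[0]):
        reps = [h[v] for h in H_motif]         # h_{v,k} for every motif k
        tilde = [delta_fn(k, reps, Z[v]) for k in range(len(A_k_list))]
        out.append(np.concatenate(tilde))      # || over motifs (injective)
    return np.stack(out)
```

Because $\mathbf{Z}^{(l)}$ is computed once and reused across all motifs, the layer cost is dominated by the dense matrix products, matching the $\mathcal{O}(|\mathcal{V}|^2 d)$ analysis above.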
\begin{algorithm}[t] \LinesNumbered \caption{The framework of MGNN.}\label{alg:mgnn} \KwIn{Node input feature matrix $\mathbf{X} \in \mathbb{R}^{|\mathcal{V}| \times d_0}$, normalized adjacency matrix $\tilde{\mathbf{A}} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$, and normalized motif-based adjacency matrices $\tilde{\mathbf{A}}_{k} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$, $k \in \{1, ..., 13\}$.} \KwOut{Node embedding $\mathbf{h}^{(L)}_v \in \mathbb{R}^{13d_L}$ for each node.} Randomly initialize all parameters\; $\mathbf{H}^{(0)} \leftarrow \mathbf{X}$\; \For{$l = 1, ..., L$}{ \For{$v \in \mathcal{V}$}{ \For{$k = 1, ..., 13$}{ $\mathbf{h}^{(l)}_{v,k} \leftarrow$ Compute the $M_k$-based representation of node $v$ by Eq.~\eqref{equ:maf}\; } \For{$k = 1, ..., 13$}{ $f(\mathbf{h}^{(l)}_{v,k}) \leftarrow$ Map $\mathbf{h}^{(l)}_{v,k}$ to the same space by Eq.~\eqref{equ:f1}\; $\bar{\mathcal{H}}_k \leftarrow$ Concatenate each $\mathbf{h}^{(l)}_{v,i} (i \ne k)$ and $\mathbf{z}_v$ according to Definition~\ref{def:op}\; $f_k(\bar{\mathcal{H}}_k) \leftarrow$ Select the redundant features w.r.t. motif $M_k$ by Eq.~\eqref{equ:f2}\; $\beta_k \leftarrow$ Compute the similarity between $f(\mathbf{h}^{(l)}_{v,k})$ and $f_k(\bar{\mathcal{H}}_k)$ by Eq.~\eqref{equ:att}\; $\tilde{\mathbf{h}}_{v, k}^{(l)} \leftarrow$ Update $\mathbf{h}^{(l)}_{v,k}$ based on $f(\mathbf{h}^{(l)}_{v,k})$, $f_k(\bar{\mathcal{H}}_k)$ and $\beta_k$ by Eq.~\eqref{equ:instance}\; } $\mathbf{h}^{(l)}_v \leftarrow$ Concatenate each $\tilde{\mathbf{h}}^{(l)}_{v,k}$ by Eq.~\eqref{equ:concat}\; } } \Return{$\{\mathbf{h}^{(L)}_v \big| v \in \mathcal{V}\}$} \end{algorithm} \section{Theoretical Analysis}\label{sec:theorem} In this section, we aim to analyze the representation capacity of MGNN in comparison with the standard GNN.
In order to facilitate the analysis, we first introduce a simplified version of MGNN, and then show that even the simplified MGNN still has stronger discriminative power than the standard GNN. \subsection{Simplified Version of MGNN} A simplified version of the $l$-th MGNN layer is as follows: \begin{equation}\label{equ:mgnn_abs1} \mathbf{h}^{(l)}_{v, k} = \omega \Big( \Big\{ \alpha^{(l)}_{k,vi} \cdot (\mathbf{A}_k)_{vi} \mathbf{W}_m^{(l)} \mathbf{h}^{(l-1)}_i \big| i \in \mathcal{N}(v) \Big\} \Big), \end{equation} \begin{equation}\label{equ:mgnn_abs2} \mathbf{h}^{(l)}_v = \big\|_{k=1}^{13} \sigma(\mathbf{h}^{(l)}_{v, k}), \end{equation} where $\omega$ represents the aggregate function. We now explain why Eqs.~\eqref{equ:mgnn_abs1}--\eqref{equ:mgnn_abs2} constitute a simplified version of an MGNN layer. Specifically, in Eq.~\eqref{equ:maf}, $\mathbf{A}_k$ and $\mathbf{W}_m^{(l)}\mathbf{h}^{(l-1)}_i$ are substituted for the normalized $\tilde{\mathbf{A}}_k$ and $(\mathbf{Z}^{(l)})_{i*}$, respectively, where $\mathbf{W}_m^{(l)}$ is the trainable weight matrix in the $l$-th simplified MGNN layer and $\mathbf{h}^{(l-1)}_i$ is the node message from the previous ($l-1$)-th simplified MGNN layer ($\mathbf{h}^{(0)}_i = \mathbf{x}_i$). This yields Eq.~\eqref{equ:mgnn_abs1}. Then, the output $\mathbf{h}^{(l)}_{v, k}$ of Eq.~\eqref{equ:mgnn_abs1} is used in place of $\tilde{\mathbf{h}}_{v, k}^{(l)}$ in Eq.~\eqref{equ:concat}, which yields Eq.~\eqref{equ:mgnn_abs2}. Thus, Eqs.~\eqref{equ:mgnn_abs1}--\eqref{equ:mgnn_abs2} form a simplified version of an MGNN layer.
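A minimal sketch of this simplified layer, taking sum as the aggregate function $\omega$, constant attention weights $\alpha^{(l)}_{k,vi}=1$, and ReLU as $\sigma$ (all choices are assumptions for illustration, with two toy motif matrices instead of 13):

```python
import numpy as np

def simplified_mgnn_layer(A_k_list, H, W_m):
    """Sketch of the simplified MGNN layer in Eqs. (mgnn_abs1)-(mgnn_abs2):
    per-motif sum aggregation over neighbors with a shared weight matrix,
    then injective concatenation of the activated motif representations."""
    msg = H @ W_m                                  # W_m h_i for every node i
    h_k = [A_k @ msg for A_k in A_k_list]          # Eq. (mgnn_abs1), omega = sum
    return np.concatenate([np.maximum(h, 0) for h in h_k], axis=1)  # Eq. (mgnn_abs2)
```

Each motif contributes a distinct slice of the concatenated output, which is what the injectivity argument in the next subsection relies on.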
\begin{table}[htbp] \centering \caption{The layer of the abstract model of the standard GNN and of the simplified version of MGNN.} \begin{tabular}{lc} \toprule Step 1 & message aggregation \\ \midrule Standard GNN & $\bar{\mathbf{h}}^{(l)}_v = \omega \left(\left\{ (\mathbf{A})_{vi} \mathbf{W}_s^{(l)} \tilde{\mathbf{h}}^{(l-1)}_i \big| i \in \mathcal{N}(v) \right\}\right)$\\ MGNN & $\mathbf{h}^{(l)}_{v, k} = \omega \Big( \Big\{\alpha^{(l)}_{k,vi} \cdot (\mathbf{A}_k)_{vi} \mathbf{W}_m^{(l)}\mathbf{h}^{(l-1)}_i \big| i \in \mathcal{N}(v) \Big\} \Big)$\\ \midrule Step 2 & node representation update \\ \midrule Standard GNN & $\tilde{\mathbf{h}}^{(l)}_v = \sigma(\bar{\mathbf{h}}^{(l)}_v) $ \\ MGNN & $\mathbf{h}^{(l)}_v = \big\|_{k=1}^{13} \sigma(\mathbf{h}^{(l)}_{v, k})$\\ \bottomrule \end{tabular} \label{tab:architecture_comparison} \end{table} \subsection{Representational Capacity Study} In order to compare the representational capacity of the simplified version of MGNN with that of the standard GNN, we begin with the layers of the abstract models of the standard GNN and the simplified version of MGNN in Table \ref{tab:architecture_comparison}, where $\mathbf{W}_s^{(l)}$ is the trainable weight matrix in the $l$-th standard GNN layer, and $\tilde{\mathbf{h}}^{(l-1)}_i$ is the node message from the previous ($l-1$)-th standard GNN layer ($\tilde{\mathbf{h}}^{(0)}_i = \mathbf{x}_i$). The mainstream GNN models, including GCN, GAT, GraphSAGE and GIN, can be viewed as instances of the standard GNN. We then show that MGNN has a larger representational capacity than the standard GNN in Lemmas~\ref{lma:special_case}--\ref{lma: example} and Theorem~\ref{thm:powerful}. Based on the above abstractions, we first show that even a special case of MGNN has at least the same representational capacity as the standard GNN in Lemma~\ref{lma:special_case}.
\begin{lemma}\label{lma:special_case} Given any instance of the standard GNN, if the aggregate functions of the standard GNN and MGNN are the same and the input to $\omega$ only consists of values in the same dimension from different feature vectors, its representations of the graphs can also be generated by a special case of MGNN. \end{lemma} The proof of Lemma~\ref{lma:special_case} hinges on two important properties of the injective concatenation function, i.e., the interchangeability of concatenation and aggregation, and the explicit preservation of motif-based representations, as first mentioned in Section~\ref{sec:update}. The detailed proof can be found in Section I of our supplementary materials. In short, Lemma~\ref{lma:special_case} shows that a standard GNN can be subsumed by MGNN. Taking one step further, we show that there exist two graphs that can be distinguished by MGNN but are indistinguishable by the standard GNN. \begin{lemma}\label{lma: example} There exist two non-isomorphic graphs $G$ and $G'$ with self-loops, which can be distinguished by MGNN, but not by the standard GNN. \end{lemma} \begin{proof} \begin{figure} \caption{Two graphs with self-loops that cannot be distinguished by the standard GNN. Inside these two graphs, the features of the nodes are the same, and the self-loops are not depicted for brevity.} \label{fig:lemma2} \end{figure} As Fig.~\ref{fig:lemma2} illustrates, consider the two non-isomorphic graphs $G$ and $G'$ with self-loops, in which all nodes have the same features. First, $G$ and $G'$ cannot be distinguished by the standard GNN, because the multi-set of neighboring features of each node is the same. Second, $G$ and $G'$ can be naturally distinguished by MGNN, since each node in $G$ is only associated with an open motif, whereas each node in $G'$ is associated with both open and closed motifs.
\end{proof} \begin{table*}[htbp] \centering \caption{Statistics of the datasets.} \begin{tabular}{r|lrlccc} \toprule \multicolumn{1}{l}{Category} & Dataset & \multicolumn{1}{c}{\# Graphs} & \multicolumn{1}{c}{\# Nodes (Avg.)} & \# Edges (Avg.) & \# Features & \# Classes \\ \midrule \midrule \multicolumn{1}{l|}{\multirow{3}[2]{*}{Citation Graphs}} & Cora & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2,708} & 5,429 & 1,433 & 7 \\ & Citeseer & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{3,327} & 4,732 & 3,703 & 6 \\ & Pubmed & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{19,717} & 44,338 & 500 & 3 \\ \midrule \multicolumn{1}{l|}{Knowledge Graphs} & Chem2Bio2RDF & \multicolumn{1}{c}{1} & \multicolumn{1}{c}{295,911} & 727,997 & 5 & 10 \\ \midrule \multicolumn{1}{l|}{\multirow{3}[2]{*}{Biochemical Graphs}} & MUTAG & \multicolumn{1}{c}{188} & \multicolumn{1}{c}{17.90} & 19.79 & 7 & 2 \\ & ENZYMES & \multicolumn{1}{c}{600} & \multicolumn{1}{c}{32.63} & 62.14 & 21 & 6 \\ & AIDS & \multicolumn{1}{c}{2,000} & \multicolumn{1}{c}{15.69} & 16.20 & 42 & 2 \\ \bottomrule \end{tabular} \label{tab:dataset} \end{table*} Based on the results of Lemmas~\ref{lma:special_case} and \ref{lma: example}, we immediately arrive at the conclusion about the representation capacity in Theorem~\ref{thm:powerful}. \begin{theorem}\label{thm:powerful} MGNN has a larger representation capacity than the standard GNN. \end{theorem} \section{Experiments}\label{sec:experiments} In this section, we introduce the details of the experimental setup and the comparison results. \subsection{Experimental Setup} \subsubsection{Datasets} To evaluate the effectiveness of our proposed MGNN, we utilize seven public datasets on two benchmark tasks: (1) classifying nodes on three citation network datasets (Cora, Citeseer, and Pubmed) and a knowledge graph (Chem2Bio2RDF), and (2) classifying graphs on three biochemical graph datasets (MUTAG, ENZYMES, and AIDS).
Table \ref{tab:dataset} summarizes the statistics of seven datasets. \begin{itemize} \item \textbf{Cora, Citeseer} and \textbf{Pubmed}~\cite{DBLP:journals/aim/SenNBGGE08} contain documents represented by nodes and citation links represented by edges. \item \textbf{Chem2Bio2RDF}~\cite{chen2010chem2bio2rdf} integrates data from multiple public sources. Because the node feature is not provided in Chem2Bio2RDF and the discriminative power of GNN-based methods often depends on the properties of nodes, we use the degree statistical information of each node and its $1$-hop neighborhood (5 dimensions in total) \cite{DBLP:journals/corr/abs-1811-03508} as its node features. \item \textbf{MUTAG}~\cite{DBLP:conf/icml/KriegeM12} contains 188 chemical compounds divided into two classes according to their mutagenic effect on a bacterium. \item \textbf{ENZYMES}~\cite{DBLP:conf/ismb/BorgwardtOSVSK05} contains 100 proteins from each of the 6 Enzyme Commission top level enzyme classes. \item \textbf{AIDS}\footnote{https://wiki.nci.nih.gov/display/NCIDTPdata/AIDS+Antiviral+Screen+Data/}~\cite{DBLP:conf/sspr/RiesenB08} contains 2 classes (active, inactive), which represent molecules with activity against HIV or not. \end{itemize} \subsubsection{Baselines} We consider three categories of methods, namely low- and high-order GNN-based methods, network embedding-based methods, as well as graph pooling-based methods. Low-order GNN-based methods include: \begin{itemize} \item \textbf{GCN} \cite{DBLP:conf/iclr/KipfW17} aggregates the feature information from a node’s neighborhood. \item \textbf{GraphSAGE}~\cite{hamilton2017inductive} generates embeddings by sampling and aggregating features from a node’s local neighborhood. \item \textbf{GAT}~\cite{DBLP:journals/corr/abs-1710-10903} incorporates the attention mechanism into the propagation step, following a self-attention strategy. 
\item \textbf{GIN}~\cite{DBLP:conf/iclr/XuHLJ19} performs the feature aggregation in an injective manner based on the theory of the $1$-WL graph isomorphism test. \item \textbf{BGNN} \cite{DBLP:conf/iclr/0004P21} combines gradient boosted decision trees (GBDT) with GNN. \end{itemize} High-order GNN-based methods include: \begin{itemize} \item \textbf{MotifNet} \cite{DBLP:conf/dsw/MontiOB18} utilizes a Laplacian matrix based on multiple motif-based adjacency matrices as the convolution kernel of the graph, and uses an attention mechanism to select node features. \item \textbf{MixHop} \cite{abu2019mixhop} concatenates the aggregated node features from neighbors at different hops in each layer. \item \textbf{GDC} \cite{klicpera2019diffusion} utilizes generalized graph diffusion (e.g., Personalized PageRank) to generate a new graph, and then performs prediction on the new graph rather than the original one. \item \textbf{CADNet} \cite{lim2021class} obtains neighborhood representations by random walks with attention, and incorporates the neighborhood representations via trainable coefficients. \end{itemize} Network embedding-based methods include: \begin{itemize} \item \textbf{DeepWalk} \cite{DBLP:conf/kdd/PerozziAS14} combines truncated random walks with the skip-gram model to learn node embeddings. \item \textbf{GraRep}~\cite{DBLP:conf/cikm/CaoLX15} leverages various powers of the adjacency matrix to capture higher-order node similarity. \item \textbf{HOPE}~\cite{DBLP:conf/kdd/OuCPZ016} preserves higher-order proximity in node representations. \item \textbf{Node2Vec} \cite{DBLP:conf/kdd/GroverL16} employs biased random walks, which provide a trade-off between breadth-first (BFS) and depth-first (DFS) graph searches, to learn node embeddings. \item \textbf{Graph2Vec}~\cite{DBLP:journals/corr/NarayananCVCLJ17} creates WL trees for nodes as features in graphs to decompose the graph-feature co-occurrence matrix.
\item \textbf{NetLSD}~\cite{DBLP:conf/kdd/TsitsulinMKBM18} calculates the heat kernel trace of the normalized Laplacian matrix over a vector of time scales. \item \textbf{GL2Vec}~\cite{DBLP:conf/iconip/ChenK19} extends Graph2Vec with edge features by utilizing the line graph. \item \textbf{Feather}~\cite{DBLP:conf/cikm/RozemberczkiS20} describes node neighborhoods with random walk weights. \end{itemize} Graph pooling-based methods include: \begin{itemize} \item \textbf{Graclus}~\cite{DBLP:journals/pami/DhillonGK07} is an alternative of eigen-decomposition to calculate a clustering version of the original graph. \item \textbf{GlobalATT}~\cite{DBLP:journals/corr/LiTBZ15} employs gate recurrent unit architectures with global attention to update node latent representations. \item \textbf{EdgePool} \cite{diehl2019towards} extracts graph features by contracting edges and merging the connected nodes uniformly. \item \textbf{TopKPool}~\cite{DBLP:conf/icml/GaoJ19} learns a scalar projection score for each node and selects the top $k$ nodes. \item \textbf{ASAP}~\cite{DBLP:conf/aaai/RanjanST20} utilizes a self-attention network along with a modified GNN formulation to capture the importance of each node in a given graph. \end{itemize} We use the low- and high-order GNN-based approaches for both node classification and graph classification tasks, except that BGNN is used for the node classification task only since its GNN module is designed to provide the gradients generated by the node classification loss during training \cite{DBLP:conf/iclr/0004P21}. We use the following node-level network embedding approaches for the node classification task only: DeepWalk \cite{DBLP:conf/kdd/PerozziAS14}, GraRep~\cite{DBLP:conf/cikm/CaoLX15}, HOPE~\cite{DBLP:conf/kdd/OuCPZ016}, and Node2Vec \cite{DBLP:conf/kdd/GroverL16}. 
We use all the graph pooling approaches and the following graph-level network embedding approaches for the graph classification task only: Graph2Vec~\cite{DBLP:journals/corr/NarayananCVCLJ17}, NetLSD~\cite{DBLP:conf/kdd/TsitsulinMKBM18}, GL2Vec~\cite{DBLP:conf/iconip/ChenK19}, and Feather~\cite{DBLP:conf/cikm/RozemberczkiS20}. \subsubsection{Implementation details} The configurations of our MGNN as well as GNN-based baselines on the node classification task are as follows. We use 1 GNN layer for Cora and Citeseer datasets, while 2 GNN layers for the other two larger datasets namely Pubmed and Chem2Bio2RDF. In addition, a fully connected layer (FCL) is added after the last GNN layer to further process the node representation matrix. For the graph classification task, we use 3 GNN layers on 3 biochemical graph datasets for MGNN, GNN-based baselines as well as graph pooling-based baselines. Similarly, the node representation matrix after the last GNN layer would be passed through three fully connected layers. We use sum aggregation as the readout operation to derive the embedding for the graph. For our MGNN, aggregate function AGG in Eq.~\eqref{equ:maf} was sum. The activation function $\sigma$ in Eq.~\eqref{equ:att} was set as sigmoid for Cora and Citeseer, while it was set as tanh for other datasets \cite{DBLP:journals/corr/abs-1710-10903}. We further set $d_1, d_2, d_3$ in Eq.~\eqref{equ:maf}, the output dimensionality of the GCN layer which is stacked in the first, second, and third MGNN layers, to $16, n_c, n_c$, respectively, where $n_c$ is the number of classes in the corresponding dataset. Next, the dimensionality $d'_l$ in Eq.~\eqref{equ:f1} was set to 6 on each dataset. We used the Adam optimizer and the learning rate $\eta$ in the optimization algorithm was set as 0.011. The maximum number of training epochs $t$ was set as 3000. 
In practice, we made use of PyTorch for an efficient GPU-based implementation of Algorithm \ref{alg:mgnn} using sparse-dense matrix multiplications.\footnote{Our source codes and pre-processed datasets are publicly available via https://github.com/DMIRLAB-Group/MGNN} \begin{table*}[htbp] \centering \caption{Performance on the node classification task, measured in accuracy. Standard deviation errors are given. The best performance is marked in bold, and the second best is underlined.} \begin{tabular}{lcccc} \toprule & Cora & Citeseer & Pubmed & Chem2Bio2RDF\\ \midrule DeepWalk & 0.4313 $\pm$ 0.0221 & 0.2732 $\pm$ 0.0216 & 0.4440 $\pm$ 0.0208 & 0.9253 $\pm$ 0.0023\\ GraRep & 0.5957 $\pm$ 0.0062 & 0.4220 $\pm$ 0.0022 & 0.6147 $\pm$ 0.0073 & 0.9313 $\pm$ 0.0018\\ HOPE & 0.4510 $\pm$ 0.0010 & 0.3180 $\pm$ 0.0021 & 0.4880 $\pm$ 0.0011 & 0.9030 $\pm$ 0.0001\\ Node2Vec & 0.7150 $\pm$ 0.0042 & 0.4670 $\pm$ 0.0145 & 0.6788 $\pm$ 0.0063 & 0.9029 $\pm$ 0.0012\\ \midrule GCN & 0.8595 $\pm$ 0.0207 & 0.7764 $\pm$ 0.0045 & 0.8865 $\pm$ 0.0048 & 0.9371 $\pm$ 0.0017 \\ GraphSAGE & 0.8610 $\pm$ 0.0101 & 0.7744 $\pm$ 0.0061 & \underline{0.8980} $\pm$ 0.0049 & \underline{0.9630} $\pm$ 0.0010\\ GAT & 0.8775 $\pm$ 0.0127 & 0.7852 $\pm$ 0.0052 & 0.8840 $\pm$ 0.0079 & 0.9628 $\pm$ 0.0017\\ GIN & 0.8107 $\pm$ 0.0188 & 0.7255 $\pm$ 0.0160 & 0.8810 $\pm$ 0.0156 & 0.9205 $\pm$ 0.0129\\ BGNN & 0.8470 $\pm$ 0.0143 & 0.7750 $\pm$ 0.0112 & 0.8380 $\pm$ 0.0119 & 0.8746 $\pm$ 0.0115 \\ \midrule MotifNet & 0.8580 $\pm$ 0.0075 & 0.7750 $\pm$ 0.0071 & 0.8895 $\pm$ 0.0102 & 0.8863 $\pm$ 0.0114\\ MixHop & \underline{0.8803} $\pm$ 0.0120 & 0.7796 $\pm$ 0.0053 & 0.8628 $\pm$ 0.0150 & \underline{0.9630} $\pm$ 0.0004 \\ GDC & 0.8660 $\pm$ 0.0100 & \underline{0.7854} $\pm$ 0.0061 & 0.8768 $\pm$ 0.0059 & 0.8838 $\pm$ 0.0036 \\ CADNet & 0.8612 $\pm$ 0.0131 & 0.7652 $\pm$ 0.0148 & 0.8772 $\pm$ 0.0085 & 0.8287 $\pm$ 0.0258 \\ \midrule MGNN & \textbf{0.9060} $\pm$ 0.0049 & \textbf{0.7948} $\pm$ 0.0050 & 
\textbf{0.9232} $\pm$ 0.0084 & \textbf{0.9870} $\pm$ 0.0021\\ \bottomrule \end{tabular} \label{tab:baselines_ncls} \end{table*} For the baselines, we tuned their settings empirically. First, for GNN-based methods and graph pooling-based methods, the embedding dimension and dropout \cite{DBLP:journals/corr/abs-1207-0580} rate of these models were set to 16 and 0.5, respectively. GCN, GAT and MotifNet used their default aggregation functions, and GraphSAGE used max aggregation empirically. The degree of the multivariate polynomial filters in MotifNet was set to 1, and it utilizes 13 motif-based adjacency matrices. Considering that GAT concatenates different head outputs, which is similar to MGNN, GAT was set to use 13 heads with an embedding dimension of 8. Second, for network embedding-based methods, the embedding dimension of these models was set to 128, and we used the logistic regression model \cite{cox1958regression} as a classifier to evaluate the quality of the embeddings generated by these unsupervised models. The other settings for these models largely align with the literature. Note that, in our experiments, all the methods make use of the same directed/undirected edge information on each dataset. Specifically, Chem2Bio2RDF is a directed graph. The baseline implementations used here are able to deal with directed graphs, in which message propagation follows the given edge directions. Meanwhile, ENZYMES, MUTAG, and AIDS are all undirected graphs, and the original Cora, Citeseer and Pubmed are directed citation graphs. Following standard benchmarking practice, the three citation graphs are treated as undirected \cite{wu2020comprehensive,yang2016revisiting}, where a preprocessing step is applied to ignore edge directions for all the methods. Note that when an undirected graph is fed into MGNN, MGNN treats each undirected edge as two directed edges in opposite directions. We adopt the widely-used \emph{accuracy} metric for performance evaluation.
For the node classification task, similar to the experimental setup in \cite{DBLP:conf/iclr/ChenMX18}, we split the dataset into 500 nodes for validation, 500 nodes for testing, and the remaining nodes were used for training, to simulate labeled and unlabeled information. Note that Chem2Bio2RDF is an exception, and we split it into 5000 nodes for validation and 5000 nodes for testing due to its large size. Then we report the average and standard deviation of accuracy scores across the 5 runs with different random seeds. For the graph classification task, similar to the experimental setup in \cite{DBLP:conf/iclr/XuHLJ19}, we perform 5-fold cross validation. For other experiments, we present the average accuracy scores over the 5 runs with various random seeds. \subsection{Performance Evaluation} We evaluate the empirical performance of MGNN against the state-of-the-art baselines in Tables ~\ref{tab:baselines_ncls} and \ref{tab:baselines_gcls}. \subsubsection{\textbf{Comparison to baselines}} As shown in Table~\ref{tab:baselines_ncls}, MGNN significantly and consistently outperforms all the baselines on different datasets. In particular, GraphSAGE achieves the second best performance on Pubmed and Chem2Bio2RDF, while MixHop achieves the second best performance on Cora and Chem2Bio2RDF, and GDC achieves the second best performance on Citeseer. Our MGNN is capable of achieving further improvements against GraphSAGE by 2.81\% on Pubmed, against MixHop by 2.92\% on Cora, as well as against GraphSAGE and MixHop by 2.49\% on Chem2Bio2RDF. On Citeseer, MGNN outperforms GDC by 1.20\% in terms of accuracy. Note that the number of edges in Citeseer is small and the occurrences of motifs are limited. Therefore, our MGNN cannot collect as much high-order information as it can on other datasets, and MGNN achieves less improvement on Citeseer than on other datasets. Similarly, in Table~\ref{tab:baselines_gcls}, MGNN regularly surpasses all the baselines. 
In particular, GCN achieves the second best performance on AIDS, while GDC achieves the second best performance on MUTAG, and MixHop achieves the second best performance on ENZYMES. MGNN is able to achieve further improvements against GCN by 0.76\% on AIDS, against GDC by 3.18\% on MUTAG, and against MixHop by 10.95\% on ENZYMES, as shown in Table~\ref{tab:baselines_gcls}. In particular, a graph represents a compound's molecular structure in these three biochemical graph datasets. Any chemical structure can be represented by 13 motifs, which allows our MGNN to identify similar structures among various compounds and boost classification accuracy. For example, both carbon dioxide $CO_2$ and methane $CH_4$ contain the motif $M_8$. Moreover, $CH_4$ has six instances of $M_8$ while $CO_2$ has only one, and such a difference is useful for graph classification. \begin{table}[htbp] \centering \caption{Performance on the graph classification task in terms of accuracy. Standard deviation errors are given.} \begin{tabular}{lccc} \toprule & MUTAG & ENZYMES & AIDS \\ \midrule Graph2Vec & 0.6650 $\pm$ 0.0087 & 0.2033 $\pm$ 0.0239 & 0.8045 $\pm$ 0.0033\\ GL2Vec & 0.6703 $\pm$ 0.0106 & 0.1967 $\pm$ 0.0461 & 0.8225 $\pm$ 0.0065 \\ NetLSD & 0.7450 $\pm$ 0.0611 & 0.2136 $\pm$ 0.0461 & 0.9575 $\pm$ 0.0082 \\ Feather & 0.7716 $\pm$ 0.0341 & 0.2483 $\pm$ 0.0226 & 0.7930 $\pm$ 0.0019 \\ \midrule Graclus & 0.7504 $\pm$ 0.0750 & 0.2567 $\pm$ 0.0253 & 0.8640 $\pm$ 0.0398 \\ ASAP & 0.7562 $\pm$ 0.0799 & 0.2600 $\pm$ 0.0320 & 0.8960 $\pm$ 0.0279 \\ EdgePool & 0.7508 $\pm$ 0.0687 & 0.2500 $\pm$ 0.0449 & 0.8615 $\pm$ 0.0581 \\ TopKPool & 0.7238 $\pm$ 0.0527 & 0.2417 $\pm$ 0.0349 & 0.8530 $\pm$ 0.0492\\ GlobalATT & 0.7346 $\pm$ 0.0736 & 0.2383 $\pm$ 0.0427 & 0.8390 $\pm$ 0.0248 \\ \midrule GCN & 0.7555 $\pm$ 0.0651 & 0.2100 $\pm$ 0.0285 & \underline{0.9895} $\pm$ 0.0091 \\ GAT & 0.7391 $\pm$ 0.0315 & 0.1667 $\pm$ 0.0000 & 0.8740 $\pm$ 0.1013 \\ GraphSAGE & 0.7984 $\pm$ 0.0526 & 0.2333 $\pm$ 0.0586 & 0.9855
$\pm$ 0.0091 \\ GIN & 0.7780 $\pm$ 0.0940 & 0.2630 $\pm$ 0.0330 & 0.9870 $\pm$ 0.0090\\ \midrule MotifNet & 0.8040 $\pm$ 0.0330 & 0.1770 $\pm$ 0.0140 & 0.9880 $\pm$ 0.0060 \\ MixHop & 0.7663 $\pm$ 0.0897 & \underline{0.2767} $\pm$ 0.0494 & 0.9265 $\pm$ 0.0157\\ GDC & \underline{0.8199} $\pm$ 0.0849 & 0.2633 $\pm$ 0.0126 & 0.8705 $\pm$ 0.0165 \\ CADNet & 0.7450 $\pm$ 0.0531 & 0.2267 $\pm$ 0.0273 & 0.7995 $\pm$ 0.0011 \\ \midrule MGNN & \textbf{0.8460} $\pm$ 0.0230 & \textbf{0.3070} $\pm$ 0.0300 & \textbf{0.9970} $\pm$ 0.0030 \\ \bottomrule \end{tabular} \label{tab:baselines_gcls} \end{table} Next, we further compare the robustness of MGNN and the baseline approaches by introducing noise. Specifically, we first modified Cora and Pubmed by replacing the original input node features with 16-dimensional random vectors, and denote the modified datasets as Cora-RandomX and Pubmed-RandomX. Then we compared the performance of MGNN and the baselines on these two modified datasets. As shown in Fig.~\ref{fig:ablation_x}(a)(b), the performance of MGNN and the other GNN baselines on Cora-RandomX and Pubmed-RandomX deteriorates to varying degrees compared with that on Cora and Pubmed, which is intuitive since additional noise is introduced through the random features. Importantly, not only does MGNN considerably and consistently exceed all GNN baselines on the accuracy metric, but its rate of decrease is also the lowest. This is because MGNN can capture more high-order structural information with higher discriminative power, which makes MGNN more robust than other standard or motif-based GNNs. \begin{figure} \caption{Performance on robustness comparison in two modified datasets using random vectors as node features.
Compared to Cora and Pubmed, the performance degradation rate of each model is denoted by the inverted black triangle with the percentage on top of each bar.} \label{fig:ablation_x} \end{figure} \subsubsection{Model ablation study} \begin{figure} \caption{Ablation study on all motifs on seven datasets, minimal-redundancy operator $\Delta$ and injective vector concatenation function on three datasets: (a) the ablation study result related to all motifs on seven datasets and (b)(c): the ablation study results related to $\Delta$ and concatenation function on three datasets, respectively.} \label{fig:ablation_motif} \label{fig:ablation_delta} \label{fig:ablation_concat} \label{fig:ablation_mgnn} \end{figure} As Fig.~\ref{fig:model} illustrates, the network motif, motif redundancy minimization, and injective $M_k$-based representation concatenation are key components of our proposed MGNN. We can thus derive the following variants of MGNN: (1) MGNN without any motif information, denoted as MGNNw/oM (this variant is actually a GCN); (2) MGNN without the motif redundancy minimization operator $\Delta$, denoted as MGNNw/o$\Delta$; (3) MGNN with other functions for feature vector combination, including summation, max and mean. To show the impact of motifs, $\Delta$ and injective concatenation in MGNN, we compare MGNN with the above variants. In Fig.~\ref{fig:ablation_mgnn}, we observe that MGNN achieves better performance than the variants in terms of accuracy, demonstrating the effectiveness of motifs, $\Delta$ and injective concatenation. Firstly, in order to demonstrate the impact of motifs on MGNN's performance, we compare MGNN with the variant MGNNw/oM. As Fig.~\ref{fig:ablation_motif} shows, the performance of MGNNw/oM is lower than that of MGNN on all 7 datasets, demonstrating the importance of incorporating motifs into GNNs; that is, the high-order structures captured by motifs are important for GNN performance.
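Variant (3) above replaces injective concatenation with non-injective combination functions. A toy Python illustration (not the authors' implementation) of why this loses information: two nodes whose motif-wise feature vectors differ only in which motif produced them are collapsed to the same vector by sum/max/mean, while fixed-order concatenation keeps them distinct.

```python
def combine(vectors, mode):
    """Combine a fixed-order tuple of motif-wise feature vectors into one vector."""
    if mode == "concat":                 # injective for a fixed number of fixed-size inputs
        return tuple(x for v in vectors for x in v)
    dims = list(zip(*vectors))           # element-wise over the tuple
    if mode == "sum":
        return tuple(sum(d) for d in dims)
    if mode == "max":
        return tuple(max(d) for d in dims)
    if mode == "mean":
        return tuple(sum(d) / len(vectors) for d in dims)

a = [(1.0, 0.0), (0.0, 1.0)]   # motif-wise representations of node u
b = [(0.0, 1.0), (1.0, 0.0)]   # node v: same multiset of vectors, assigned to different motifs
print(combine(a, "sum") == combine(b, "sum"))        # True  — collapsed
print(combine(a, "concat") == combine(b, "concat"))  # False — still distinguishable
```

The same collapse happens for max and mean, which is the intuition behind the ablation result above.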
Secondly, in order to investigate the impact of $\Delta$, we removed the minimal-redundancy operator $\Delta$ from MGNN, and the comparison between MGNN and MGNNw/o$\Delta$ is shown in Fig.~\ref{fig:ablation_delta}. As can be seen, MGNN significantly outperforms MGNNw/o$\Delta$ in terms of accuracy on all three datasets. Note that the number of edges in Pubmed is large and the redundancy of motifs is probably higher than on the other datasets (i.e., different motifs in Pubmed share more common substructures). Therefore, it is more difficult for MGNNw/o$\Delta$ to distinguish between different motif-wise representations on Pubmed than on the other datasets, and the performance gap between MGNN and MGNNw/o$\Delta$ is more pronounced on Pubmed. Thirdly, to demonstrate the impact of injective concatenation, we used other, non-injective vector combination functions, including summation, max, and mean, to replace injective concatenation. Fig.~\ref{fig:ablation_concat} illustrates that MGNN with concatenation performs significantly better than MGNN with the other functions on these three datasets. Moreover, we show the results of the different combination functions (namely, concatenation, max, sum and mean) on three datasets in Table \ref{tab:ablation_mgnn}. We can observe that the performance decline is larger on Citeseer and Cora than on PubMed. A potential reason is that Cora and Citeseer are very sparse and the occurrences of motifs are limited. Thus, the limited number of motifs on Cora and Citeseer would make it more difficult to distinguish among different node representations using non-injective functions. \begin{table}[htbp] \centering \addtolength{\tabcolsep}{-1mm} \caption{Performance of MGNN using different combination functions on three datasets.
The rates of decline in performance w.r.t.~concatenation are given in parentheses.} \label{tab:ablation_mgnn} \begin{tabular}{lcccc} \toprule & Concat & Max & Sum & Mean \\ \midrule Cora & 0.906 & 0.878 (3.09\% $\downarrow$) & 0.884 (2.43\% $\downarrow$) & 0.894 (1.32\% $\downarrow$) \\ Citeseer & 0.795 & 0.776 (2.37\% $\downarrow$) & 0.766 (3.62\% $\downarrow$) & 0.788 (0.86\% $\downarrow$) \\ Pubmed & 0.923 & 0.914 (1.00\% $\downarrow$) & 0.918 (0.56\% $\downarrow$) & 0.918 (0.56\% $\downarrow$) \\ \bottomrule \end{tabular} \end{table} \subsection{Case Study}\label{sec:case} In this section, we investigate the importance of different motifs for prediction and demonstrate the necessity of using high-order structure for prediction. We conducted the following two studies on the Chem2Bio2RDF dataset. First, MGNN makes predictions across the 5 runs using only 1 of the 13 motifs at a time, and we then compare the results to determine the significance of the various motifs. Second, we take protein-disease association prediction as our case study. In particular, we rank the protein-disease pairs based on their predicted scores, and then identify the top pairs supported by existing publications. Meanwhile, we also evaluate the performance of MGNN versus the baseline approaches for protein-disease association prediction. Next, we show the details of these two studies. \subsubsection{The importance of different motifs for prediction} To demonstrate the importance of different motifs, MGNN utilizes just one motif at a time to conduct node classification across the 5 runs on the Chem2Bio2RDF network, and the resulting performance of MGNN is used to assess the importance of each motif in Table \ref{tab:important_motif}. \begin{table}[htbp] \centering \caption{Importance ranking of 13 motifs on Chem2Bio2RDF. The importance score of each motif is the performance of MGNN when using only that motif for prediction.
The symbol `ALL' indicates that all motifs are used by MGNN.} \begin{tabular}{ccccccc} \toprule Rank & Motif & ACC & & Rank & Motif & ACC\\ \midrule 1 & M3 & 0.9809 & & 8 & M1 & 0.9686\\ 2 & M13 & 0.9802 & & 9 & M8 & 0.9477\\ 3 & M7 & 0.9791 & & 10 & M10 & 0.9408\\ 4 & M2 & 0.9789 & & 11 & M12 & 0.9377\\ 5 & M11 & 0.9781 & & 12 & M9 & 0.9317\\ 6 & M5 & 0.9762 & & 13 & M6 & 0.9271\\ 7 & M4 & 0.9717 & & \verb|-| & ALL & 0.9870\\ \bottomrule \end{tabular} \label{tab:important_motif} \end{table} As shown in Table \ref{tab:important_motif}, we can draw two conclusions. First, in the Chem2Bio2RDF network, the importance of different motifs varies. This is because important motifs often serve as building blocks within a network, and can even be used to define universal classes of networks \cite{milo2002network}. For example, $M_{13}$ is a building block of the protein-protein interaction (PPI) network on the Chem2Bio2RDF graph (protein$\leftrightarrow$protein$\leftrightarrow$protein), and the PPI network is key for protein-disease association prediction; $M_{13}$ is thus ranked as one of the top 3 most significant motifs on this dataset, as shown in Table \ref{tab:important_motif}. Another example is the triangular motifs ($M_1$-$M_7$), which are essential in social networks due to their triadic closure nature \cite{benson2016higher, granovetter1973strength}. Second, the performance of the top three motifs is similar to the performance of all motifs combined (last row on the right). This is because a single motif may effectively encapsulate all of the network's essential information. From these two conclusions, we can see that one of the advantages of MGNN is its generality. That is, even if the importance of motifs is unknown, we can still use MGNN with all the motifs to achieve a final performance similar to that of using only the important motifs.
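The ranking protocol above — score each motif by the accuracy MGNN attains when using that motif alone, then sort — can be sketched as follows (a minimal illustration using a subset of the single-motif accuracies from the importance table; the helper name is ours):

```python
def rank_motifs(acc_by_motif):
    """Rank motifs by the accuracy obtained when MGNN uses that motif alone."""
    return sorted(acc_by_motif.items(), key=lambda kv: kv[1], reverse=True)

# Single-motif accuracies on Chem2Bio2RDF (subset of the table above).
scores = {"M3": 0.9809, "M13": 0.9802, "M7": 0.9791, "M9": 0.9317, "M6": 0.9271}
print([motif for motif, _ in rank_motifs(scores)])
# → ['M3', 'M13', 'M7', 'M9', 'M6']
```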
To further demonstrate the adaptive selection results of our motif redundancy minimization operator, we conduct the following experiment on Chem2Bio2RDF. We randomly sample 15 nodes and present the representations of the top 3 and bottom 3 most significant motifs in Table \ref{tab:important_motif} as a heatmap. As shown in Fig.~\ref{fig:heatmap}, the representations w.r.t.~unimportant motifs ($M_9$, $M_6$) are sparser than those of the other motifs. In addition, we observe that there are often no more than three non-zero dimensions for a motif, which shows that MGNN needs only a very low dimension to capture high-order structures. \begin{figure} \caption{Heatmap of the top 3 ($M_3, M_{13}, M_7$) and bottom 3 ($M_{12}, M_9, M_6$) most important motif-based representations on Chem2Bio2RDF.} \label{fig:heatmap} \end{figure} \begin{figure} \caption{Performance on protein-disease association prediction in Chem2Bio2RDF dataset, measured in AUROC. Standard deviation errors are given.} \label{fig:link_pred} \end{figure} \begin{table}[htbp] \centering \caption{Top predicted protein-disease associations with literature support.} \begin{tabular}{cccc} \toprule Rank & Gene & Disease & PubMed ID \\ \midrule 1 & COX2 & Colorectal Carcinoma & 26159723 \\ 6 & CTNNB1 & Colorectal Carcinoma & 24947187 \\ 7 & P2RX7 & Colorectal Carcinoma & 28412208 \\ 12 & SMAD3 & Colorectal Carcinoma & 30510241 \\ 16 & HRH1 & Colorectal Carcinoma & 30462522 \\ 37 & ABCB1 & Colorectal Carcinoma & 28302530 \\ 44 & AKT1 & Malignant neoplasm of breast & 29482551 \\ 64 & TP53 & Malignant neoplasm of breast & 31391192 \\ 82 & EP300 & Colorectal Carcinoma & 23759652 \\ 92 & ADORA1 & Colorectal Carcinoma & 27814614 \\ 107 & REN & Renal Tubular Dysgenesis & 21903317 \\ 141 & FGFR2 & Autosomal Dominant & 16141466 \\ 157 & BCL2 & Non-Hodgkin Lymphoma & 29666304 \\ 169 & NOS2 & Malignant neoplasm of breast & 20978357 \\ 263 & CTNNB1 & Mental retardation & 24614104 \\ \bottomrule
\end{tabular} \label{tab:ranking} \end{table} \subsubsection{The necessity of incorporating high-order structure information for prediction} We take protein-disease association prediction on the Chem2Bio2RDF dataset as an example to demonstrate the necessity of incorporating high-order information from two aspects, i.e., an illustration of validity and an illustration of practicality. For the illustration of validity, we train MGNN and compare it to the baseline methods for protein-disease association prediction in terms of the area under the ROC curve (AUROC). For the illustration of practicality, we rank all the protein-disease pairs based on their predicted scores and then identify the top pairs supported by existing publications. MGNN is first trained to predict the association score of each protein-disease pair, and we compare it to the baseline approaches. Specifically, we describe this step in detail by stating the background of the task as well as the specific experimental setup. As for the background of the task, protein-disease association prediction is a significant problem with the potential to give clinically actionable insights for disease diagnosis, prognosis, and treatment \cite{agrawal2018large}. The problem can be defined as predicting which proteins are associated with a given disease. Experimental methods and computational methods are the two primary kinds of current attempts to solve this challenge. Experimental methods for gene–disease association, such as genome-wide association studies (GWAS) and RNA interference (RNAi) screens, are costly and time-consuming to conduct. Therefore, a variety of computational methods have been developed to discover or predict gene–disease associations, including text mining, network-based methods \cite{ata2021recent}, and so on. Among them, network-based methods often need to use the structure information of the PPI network (constructed by the $M_{13}$ motif).
However, high-order PPI network structure is largely ignored in protein-disease discovery nowadays \cite{agrawal2018large}. Our MGNN can thus overcome this limitation. We next describe the experimental setup. We mapped each protein to the gene that produces it, and viewed protein-disease association prediction as a link prediction task on the graph \cite{agrawal2018large}. We split the edges of the Chem2Bio2RDF dataset with the ratio of 85\%/5\%/10\% for training, validation and testing, respectively. We adopted an inner product decoder for link prediction. The parameters of the model were optimized using negative sampling and cross-entropy loss, and we used AUROC as the metric. The number of epochs was set to 1000, and the other hyperparameter settings are consistent with the node/graph classification tasks. Note that the Chem2Bio2RDF dataset lacks a semantic mapping for disease IDs. To alleviate this problem, we search for genes associated with each disease (2929 known gene(protein)-disease links in Chem2Bio2RDF) and perform gene-disease association queries in the public database DisGeNET\footnote{https://www.disgenet.org/}, so as to infer the actual semantics of the disease IDs. Fig.~\ref{fig:link_pred} compares the performance of MGNN and the other GNN-based methods under five random seeds. As can be seen, MGNN exceeds all GNN baselines on the AUROC metric. Importantly, the AUROC of MGNN is close to 100\%, and the standard deviation is very small, which means that MGNN has strong practicability in protein-disease association prediction. For the illustration of practicality, we further ranked all the unknown protein-disease pairs (over 28 million unknown pairs) based on their predicted scores, and identified 103 out of the top 1000 pairs that are supported by existing publications. Table \ref{tab:ranking} displays the first 15 of these 103 pairs, and the last column provides the PubMed ID of the publication that supports each prediction.
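The AUROC metric used above has a simple rank-based definition: the probability that a randomly chosen positive pair is scored above a randomly chosen negative pair (ties counting one half). A minimal pairwise implementation — fine for toy examples, though $O(PN)$ and not what a library such as scikit-learn would use at scale:

```python
def auroc(scores, labels):
    """AUROC as P(score of random positive > score of random negative), ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy link-prediction scores: 1 = true protein-disease edge, 0 = negative sample.
print(auroc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 1]))  # 2 of 3 positives outrank the negative
```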
As shown in Table \ref{tab:ranking}, all pairs have been validated by wet labs and can be found in the DisGeNET database, e.g., row 2 (CTNNB1, Colorectal Carcinoma) is validated by RNAi screening \cite{tiong2014csnk1e}, and row 4 (SMAD3, Colorectal Carcinoma) is validated by GWAS \cite{huyghe2019discovery}. These methods predict protein-disease associations from angles orthogonal to MGNN's. Therefore, we consider that they provide reasonable support for our predictions. Taking the first protein-disease pair as an example, COX2 and Colorectal Carcinoma are reported in \cite{ahmed2015co} (i.e., PubMed ID: 26159723). In fact, COX2 is preferentially expressed in cancer cells, and its expression is enhanced by proinflammatory cytokines and carcinogens \cite{ahmed2015co}. It is thus reasonable to predict a protein-disease association between them, because there is evidence that the over-expression of COX2 is related to the infiltrating growth of Colorectal Carcinoma and other pathological characteristics \cite{tsunozaki2002cyclooxygenase}. \subsection{Parameter Sensitivity} \begin{figure} \caption{Parameter sensitivity analysis for MGNN: (a) dimensionality $d^\prime_l$ in Eqs.~\eqref{equ:f1}--\eqref{equ:f2}; (b) output dimensionality $d_1$ in Eq.~\eqref{equ:gcn}.} \label{fig:dim_sensitivity} \end{figure} We present the sensitivity analysis for the dimensionality parameters $d^\prime_l$ and $d_1$ in our MGNN. As shown in Fig.~\ref{fig:dim_sensitivity}(a), the performance of MGNN is not sensitive to changes in the dimensionality $d_l^\prime$ in Eqs.~\eqref{equ:f1}--\eqref{equ:f2}. In particular, values of $d_l^\prime$ in the range $[2^2, 2^5]$ typically give robust and reasonably good performance, e.g., $d^\prime_l = 6$ is a desirable choice in most cases. For the output dimensionality $d_1$ in Eq.~\eqref{equ:gcn}, as shown in Fig.~\ref{fig:dim_sensitivity}(b), the performance gradually improves and becomes stable around $2^4$, which is the preferred choice in most cases.
\subsection{Model Size and Efficiency} \begin{table}[htbp] \centering \addtolength{\tabcolsep}{-1mm} \caption{Model size and efficiency analysis on the Pubmed dataset.}\label{tab:complexity} \begin{tabular}{lrrrrr} \toprule & \multirow{2}{*}{\# Params} & \multicolumn{2}{c}{Training} & Inference & \multirow{2}{*}{Accuracy} \\ & & per epoch/s & overall/min & /ms & \\ \midrule GAT & 104,624 & \textbf{0.01} & \textbf{1.13} & \underline{4.00} & 0.8840 \\ MixHop & \textbf{24,144} & \underline{0.02} & 15.01 & 19.48 & 0.8628\\\midrule BGNN & 9,866,596 & 1.97 & 608.64 & 4.57 & 0.8380\\ EGAT & 108,107 & 3.72 & 320.29 & 20.63 & 0.8970\\ ESAGE & 212,693 & 3.11 & 14.01 & 5.77 & \underline{0.9040}\\ EGAT+SAGE & 164,443 & 3.27 & 150.07 & 12.34 & 0.8992\\ \midrule MGNN & \underline{26,084} & 0.04 & \underline{10.00} & \textbf{1.25} & \textbf{0.9232}\\ \bottomrule \end{tabular} \end{table} We evaluate the model size and efficiency of MGNN in terms of the number of trainable parameters, training time (per epoch and overall), and inference time. We select a representative baseline from standard GNNs (i.e., GAT) and from high-order GNNs (i.e., MixHop) for comparison to MGNN. Moreover, since MGNN can be viewed as a model that integrates several motif-based modules, we also compare against an ensemble GNN (i.e., BGNN). For a more comprehensive comparison, we also develop a simple ensemble framework over 13 GNN modules (corresponding to our 13 motifs). First, we separately apply thirteen GNN modules that employ different initializations but share the same input, and fuse their outputs with a fully connected layer. All hidden dimensions are set to 16. For the above framework, we develop three variants, in which the modules use only 13 GATs, only 13 GraphSAGEs, or 7 GraphSAGEs and 6 GATs, denoted as EGAT, ESAGE, and EGAT+SAGE, respectively.
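The fusion step of the ensemble framework above — concatenate the outputs of the parallel modules, then apply one fully connected layer — can be sketched in plain Python with toy stand-in module outputs (the real variants use GAT/GraphSAGE modules; the sizes and names here are illustrative):

```python
def fuse(module_outputs, weights, bias):
    """Concatenate parallel module outputs, then apply one linear (FC) layer: y = Wx + b."""
    x = [v for out in module_outputs for v in out]          # concatenation
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# Two toy "modules", each emitting a 2-dim output; the FC layer maps the 4-dim concat to 1 dim.
outs = [[1.0, 2.0], [3.0, 4.0]]
W, b = [[0.5, 0.5, 0.5, 0.5]], [0.0]
print(fuse(outs, W, b))  # → [5.0]
```

With 13 modules of hidden size 16, the same function would map a 208-dim concatenation to the class dimension.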
As shown in Table \ref{tab:complexity}, MGNN is competitive in terms of model size and efficiency, while achieving the best accuracy. In particular, although several ensemble methods, including EGAT, ESAGE and EGAT+SAGE, achieve better accuracies among the baselines, their model sizes or efficiency are all worse than MGNN's. Note that the per-epoch and overall training times are often inconsistent across methods, as a method may train faster per epoch but converge more slowly, or vice versa. Further experiments involving ensemble GNNs on all datasets are presented in Section II of our supplementary materials. \section{Conclusion} We propose Motif Graph Neural Networks, a novel framework to better capture high-order structures. Different from previous work, we propose the motif redundancy minimization operator and injective motif combination to improve the discriminative power of GNNs on high-order structures. We also propose an efficient method to construct motif-based adjacency matrices. Further, we theoretically show that MGNN is provably more expressive than standard GNNs, and that a standard GNN is in fact a special case of MGNN. Finally, we demonstrate that MGNN outperforms all baselines on seven public benchmarks. \appendix \begin{table*}[htbp] \centering \caption{Performance comparison on the node classification task, measured in accuracy.
Standard deviation errors are given.} \label{tab:ensemble_nc} \begin{tabular}{lcccl} \toprule & Cora & Citeseer & Pubmed & \multicolumn{1}{c}{Chem2Bio2RDF} \\ \midrule GCN & 0.8595 $\pm$ 0.0207 & 0.7764 $\pm$ 0.0045 & 0.8865 $\pm$ 0.0048 & 0.9371 $\pm$ 0.0017 \\ GraphSAGE & 0.8610 $\pm$ 0.0101 & 0.7744 $\pm$ 0.0061 & 0.8980 $\pm$ 0.0049 & 0.9630 $\pm$ 0.0010\\ GAT & 0.8775 $\pm$ 0.0127 & \underline{0.7852} $\pm$ 0.0052 & 0.8840 $\pm$ 0.0079 & 0.9628 $\pm$ 0.0017\\ GIN & 0.8107 $\pm$ 0.0188 & 0.7255 $\pm$ 0.0160 & 0.8810 $\pm$ 0.0156 & 0.9205 $\pm$ 0.0129\\ \midrule BGNN & 0.8470 $\pm$ 0.0143 & 0.7750 $\pm$ 0.0112 & 0.8380 $\pm$ 0.0119 & 0.8746 $\pm$ 0.0115 \\ EGAT & 0.8720 $\pm$ 0.0040 & 0.7220 $\pm$ 0.0060 & 0.8970 $\pm$ 0.0010 & 0.9658 $\pm$ 0.0040 \\ ESAGE & 0.8612 $\pm$ 0.0135 & 0.7604 $\pm$ 0.0171 & \underline{0.9040} $\pm$ 0.0102 & 0.9633 $\pm$ 0.0019\\ EGAT+SAGE & \underline{0.8792} $\pm$ 0.0102 & 0.7632 $\pm$ 0.0141 & 0.8992 $\pm$ 0.0109 & \underline{0.9663} $\pm$ 0.0010\\ \midrule MGNN & \textbf{0.9060} $\pm$ 0.0049 & \textbf{0.7948} $\pm$ 0.0050 & \textbf{0.9232} $\pm$ 0.0084 & \textbf{0.9870} $\pm$ 0.0021 \\ \bottomrule \end{tabular} \end{table*} \subsection{Proof for Lemma 1}\label{sec:proof} \begin{proof} First, we show the relationship between the graph's adjacency matrix and the motif-based adjacency matrices. Then, using this relationship, we finish the proof of the lemma. On a directed graph $G$ with self-loops, the subgraph composed of any node linked to any two of its neighbors is always an instance of an open motif ($M_8$--$M_{13}$). That is, in the adjacency matrix $\mathbf{A}$ of the graph with self-loops, if $(\mathbf{A})_{ij} > 0$ and $(\mathbf{A})_{uv} > 0$, where $(i,j)$ and $(u,v)$ are adjacent edges in $G$, then there always exists a $k' \in \{8, 9, ..., 13\}$ such that $\mathbf{A}_{k'}$ satisfies $(\mathbf{A}_{k'})_{ij} > 0$ and $(\mathbf{A}_{k'})_{uv} > 0$.
It immediately follows that, on a graph with self-loops, if $(\mathbf{A})_{ij} > 0$, then we also have $(\mathbf{A}_{k'})_{ij} > 0$. Without loss of generality, we assume $k'=13$ for ease of discussion later. That is, $\forall (i,j) \in \mathcal{E}$, $(\mathbf{A}_{13})_{ij} > 0$. Next, we use a constructive method to complete the proof of this lemma. Based on Table~I, an instance of a standard GNN has the form $\tilde{\mathbf{h}}^{(l)}_v = \sigma (\omega \left(\left\{ (\mathbf{A})_{vi} \mathbf{W}_s^{(l)}\tilde{\mathbf{h}}^{(l-1)}_i \big| i \in \mathcal{N}(v) \right\}\right))$. We will use the following steps to find a special case of MGNN which has the same representational capacity as the standard GNN. First, this special case of MGNN must satisfy the following equation: \begin{equation}\label{equ:raw_proof_target} \begin{aligned} &\big\|_{k=1}^{13} \sigma(\omega(\{ \alpha^{(l)}_{k,vi}\cdot(\mathbf{A}_k)_{vi} \mathbf{W}_m^{(l)}\mathbf{h}_i^{(l-1)} \big| i \in \mathcal{N}(v) \})) \\ &= \sigma(\omega( \{\big\|_{k=1}^{12} \mathbf{0}_k \big\|(\mathbf{A})_{vi}\mathbf{W}_s^{(l)}\tilde{\mathbf{h}}^{(l-1)}_i \big| i \in \mathcal{N}(v)\} )), \end{aligned} \end{equation} where $\mathbf{0}_k$ is a $d_l$-dimensional zero vector, $\mathbf{W}_m^{(l)}$ and $\mathbf{W}_s^{(l)} \in \mathbb{R}^{d_l \times d_{l-1}}$, and $\mathbf{h}^{(l-1)}_i$ and $\tilde{\mathbf{h}}^{(l-1)}_i \in \mathbb{R}^{d_{l-1}}$, so that the dimensions on both sides of Eq.~\eqref{equ:raw_proof_target} are the same. That is, the output dimensions of the special case of MGNN and the standard GNN are the same, both being $13 d_l$. Next, with $\mathbf{W}_m^{(l)}$ and $\alpha^{(l)}_{k,vi}$ as variables, our goal is to prove that there will always be solutions for $\mathbf{W}_m^{(l)}$ and $\alpha^{(l)}_{k,vi}$ such that Eq.~\eqref{equ:raw_proof_target} holds.
For simplicity, in Eq.~\eqref{equ:raw_proof_target}, we use the symbol $\varphi$, an aggregation function with activation, to represent $\sigma \circ \omega$, that is, \begin{equation}\label{equ:proof_target} \begin{aligned} \big\|_{k=1}^{13} &\varphi(\{ \alpha^{(l)}_{k, vi}\cdot(\mathbf{A}_k)_{vi} \mathbf{W}_{m}^{(l)}\mathbf{h}^{(l-1)}_i \big| i \in \mathcal{N}(v) \})\\ =& \varphi( \{\big\|_{k=1}^{12} \mathbf{0}_k \big\|(\mathbf{A})_{vi}\mathbf{W}_{s}^{(l)}\tilde{\mathbf{h}}^{(l-1)}_i \big| i \in \mathcal{N}(v)\} ). \end{aligned} \end{equation} On the left-hand side (LHS) of Eq.~\eqref{equ:proof_target}, the result will not change if the order of the concatenation operation and the aggregation $\varphi$ is exchanged. This is because each dimension of the result on the LHS is aggregated only from the values of the same dimension in different feature vectors, and each feature vector is completely preserved after concatenation is performed. Thus, the LHS of Eq.~\eqref{equ:proof_target} becomes \begin{equation}\label{equ:proof_target2} \varphi(\{ \big\|_{k=1}^{13} \alpha^{(l)}_{k,vi}\cdot(\mathbf{A}_k)_{vi}\mathbf{W}_{m}^{(l)}\mathbf{h}^{(l-1)}_i \big| i \in \mathcal{N}(v)\}). \end{equation} By combining Eqs.~\eqref{equ:proof_target}--\eqref{equ:proof_target2}, we get the equivalent form of Eq.~\eqref{equ:raw_proof_target}: \begin{equation}\label{equ:proof_target4} \begin{aligned} &\varphi(\{ \big\|_{k=1}^{13} \alpha^{(l)}_{k, vi}\cdot(\mathbf{A}_k)_{vi}\mathbf{W}_{m}^{(l)}\mathbf{h}^{(l-1)}_i \big| i \in \mathcal{N}(v)\}) \\ =& \varphi(\{\big\|_{k=1}^{12} \mathbf{0}_k \big\|(\mathbf{A})_{vi} \mathbf{W}_{s}^{(l)}\tilde{\mathbf{h}}^{(l-1)}_i \big| i \in \mathcal{N}(v)\}). \end{aligned} \end{equation} Therefore, our goal now is to prove that there will always be solutions such that Eq.~\eqref{equ:proof_target4} holds. We can solve the following Eqs.~\eqref{equ:final_target1}--\eqref{equ:final_target2} to ensure that Eq.~\eqref{equ:proof_target4} holds.
For $k \in \{1, ..., 12\}$, \begin{equation}\label{equ:final_target1} \alpha^{(l)}_{k,vi} \cdot (\mathbf{A}_k)_{vi} \mathbf{W}_m^{(l)}\mathbf{h}^{(l-1)}_i = \mathbf{0}_k, \end{equation} and for $k=13$, \begin{equation}\label{equ:final_target2} \alpha^{(l)}_{13,vi} \cdot (\mathbf{A}_{13})_{vi} \mathbf{W}_m^{(l)}\mathbf{h}^{(l-1)}_i =(\mathbf{A})_{vi} \mathbf{W}^{(l)}_s \tilde{\mathbf{h}}^{(l-1)}_i. \end{equation} We then demonstrate that $\forall l \ge 1$, there will always be solutions for $\mathbf{W}_m^{(l)}$ and $\alpha^{(l)}_{k,vi}$ such that Eqs.~\eqref{equ:final_target1}--\eqref{equ:final_target2} hold. Specifically, when $l=1$, $\mathbf{h}^{(0)}_i = \tilde{\mathbf{h}}^{(0)}_i = \mathbf{x}_i$, allowing Eqs.~\eqref{equ:final_target1}--\eqref{equ:final_target2} to hold for $\mathbf{W}_m^{(l)} = \frac{(\mathbf{A})_{vi}}{\alpha^{(l)}_{13,vi} \cdot (\mathbf{A}_{13})_{vi}} \mathbf{W}_s^{(l)}$, $\alpha^{(l)}_{13,vi} \ne 0$ and $\alpha^{(l)}_{k,vi} = 0$ ($k \in \{1, ..., 12\}$); that is, the first layer of this special case of MGNN can generate the same vector representation as the first standard GNN layer, since both models have the same output in the previous layer (i.e., $\mathbf{h}^{(0)}_i = \tilde{\mathbf{h}}^{(0)}_i$). Similarly, when $l > 1$, Eqs.~\eqref{equ:final_target1}--\eqref{equ:final_target2} hold. This finishes the proof of the lemma. \end{proof} \subsection{Performance Evaluation of Ensemble GNNs}\label{sec:ensemble} \begin{table}[htbp] \centering \caption{Performance comparison on the graph classification task, measured in accuracy. Standard deviation errors are given.
\label{tab:ensemble_gc}} \begin{tabular}{lccc} \toprule & MUTAG & ENZYMES & AIDS \\ \midrule GCN & 0.7555 $\pm$ 0.0651 & 0.2100 $\pm$ 0.0285 & \underline{0.9895} $\pm$ 0.0091 \\ GAT & 0.7391 $\pm$ 0.0315 & 0.1667 $\pm$ 0.0000 & 0.8740 $\pm$ 0.1013 \\ GraphSAGE & \underline{0.7984} $\pm$ 0.0526 & 0.2333 $\pm$ 0.0586 & 0.9855 $\pm$ 0.0091 \\ GIN & 0.7780 $\pm$ 0.0940 & 0.2630 $\pm$ 0.0330 & 0.9870 $\pm$ 0.0090\\ \midrule EGAT & 0.7820 $\pm$ 0.0610 & 0.2420 $\pm$ 0.0450 & 0.9850 $\pm$ 0.0050 \\ ESAGE & 0.7350 $\pm$ 0.0650 & \underline{0.2670} $\pm$ 0.0560 & 0.9850 $\pm$ 0.0060 \\ EGAT+SAGE & 0.7340 $\pm$ 0.0320 & 0.2500 $\pm$ 0.0480 & 0.9840 $\pm$ 0.0070 \\ \midrule MGNN & \textbf{0.8460} $\pm$ 0.0230 & \textbf{0.3070} $\pm$ 0.0300 & \textbf{0.9970} $\pm$ 0.0030 \\ \bottomrule \end{tabular} \end{table} We evaluate the empirical performance of MGNN against ensemble GNNs and standard GNNs in Table~\ref{tab:ensemble_nc} and Table~\ref{tab:ensemble_gc}. As shown in Table~\ref{tab:ensemble_nc}, MGNN significantly and consistently outperforms all the baselines on different datasets. In particular, ESAGE achieves the second best performance on Pubmed, while EGAT+SAGE achieves the second best performance on Cora and Chem2Bio2RDF. On Citeseer, GAT achieves the second best performance. MGNN is able to achieve further improvements against ESAGE by 2.12\% on Pubmed, against GAT by 1.22\% on Citeseer, as well as against EGAT+SAGE by 3.05\% and 2.14\% on Cora and Chem2Bio2RDF respectively. In Table \ref{tab:ensemble_gc}, similarly, MGNN regularly surpasses all baselines. In particular, ESAGE achieves the second best performance on ENZYMES, while GraphSAGE achieves the second best performance on MUTAG and GCN achieves the second best performance on AIDS. Our MGNN is capable of achieving further improvements against ESAGE by 14.98\% on ENZYMES, as well as against GraphSAGE and GCN by 5.96\% on MUTAG and by 0.76\% on AIDS, respectively. 
\subsection{Efficiency Analysis of Motif-based Adjacency Matrix Construction}\label{sec:enumerate} \begin{table}[htbp] \centering \chen{\caption{The efficiency analysis of three methods for constructing motif-based adjacency matrices, in terms of the running time (seconds). `MatMul' denotes the matrix multiplication method.}\label{tab:enumerate}} \begin{tabular}{lrcrrr} \toprule & \multicolumn{1}{c}{\multirow{2}[4]{*}{\# Nodes}} & \multicolumn{1}{c}{Closed Motif: M1} & & \multicolumn{2}{c}{Open Motif: M13} \\ \cmidrule{3-3}\cmidrule{5-6} & & \multicolumn{1}{c}{MatMul [10]} & & \multicolumn{1}{c}{Enumerate} & \multicolumn{1}{c}{\makecell{Non-\\enumerate}} \\ \midrule Cora & 2,708 & 0.003 & & 73.322 & 1.534 \\ Pubmed & 19,717 & 0.027 & & 4249.435 & 18.852 \\ \makecell{Chem2-\\Bio2RDF} & 295,911 & 0.228 & & 1226K & 69.353 \\ \bottomrule \end{tabular} \end{table} \chen{We evaluate the efficiency of MatMul \cite{zhao2018ranking} for closed motifs and our proposed non-enumeration method for open motifs, in terms of the running time, in Table \ref{tab:enumerate}. For open motifs, we compare the running times of both the enumeration and non-enumeration methods.} \chen{As shown in Table \ref{tab:enumerate}, it can be observed that MatMul runs very fast for closed motifs even on large-scale graphs, such as Chem2Bio2RDF. Meanwhile, compared to the standard enumeration method, our proposed non-enumeration method performs much better for open motifs. Even for the Chem2Bio2RDF dataset, our non-enumeration method can still run quite fast, taking about 69 seconds to construct the adjacency matrix for the open motif $M_{13}$. These results demonstrate that our preprocessing for both closed and open motifs is efficient.} \subsection{Performance and efficiency analysis of MGNN using all motifs} \begin{table}[htbp] \centering \chen{\caption{Performance and efficiency analysis of MGNN using all motifs or not, measured in accuracy and overall training time (minutes).
`(M7, M8, M9)' denotes that MGNN uses only the $M_7$, $M_8$ and $M_9$ motifs.}\label{tab:part_motif}} \begin{tabular}{lrrrrrr} \toprule & & \multicolumn{2}{c}{ACC} & & \multicolumn{2}{c}{Overall/min} \\ \cmidrule{3-4}\cmidrule{6-7} & \multicolumn{1}{l}{\# Nodes} & (M7, M8, M9) & ALL & & (M7, M8, M9) & ALL \\ \midrule Cora & 2,708 & 0.8732 & 0.9060 & & 0.87 & 1.37 \\ CiteSeer & 3,327 & 0.7224 & 0.7948 & & 0.80 & 1.29 \\ PubMed & 19,717 & 0.4220 & 0.9232 & & 5.73 & 10.00 \\ \makecell{Chem2-\\Bio2RDF} & 295,911 & 0.9741 & 0.9870 & & 14.26 & 27.26 \\ \bottomrule \end{tabular} \end{table} \chen{We compare the performance and efficiency of MGNN with and without all motifs, in terms of accuracy and overall training time. Specifically, we select motifs $M_7$, $M_8$ and $M_9$, which are important across Cora, CiteSeer, PubMed and Chem2Bio2RDF, and restrict MGNN to these three motifs when conducting node classification on the four datasets. For simplicity, we denote this variant of MGNN as (M7, M8, M9).} \chen{As shown in Table~\ref{tab:part_motif}, the (M7, M8, M9) variant clearly saves training time, but it achieves lower accuracy than MGNN using all motifs on all four datasets, showing that these three motifs are not sufficient to capture all the important high-order structures in these datasets. Moreover, the efficiency of using all motifs is still satisfactory: even on the largest dataset (i.e., Chem2Bio2RDF), the overall training time of MGNN with all motifs is just 13 minutes longer than that of the (M7, M8, M9) variant, and on the other datasets the differences are much smaller.} \end{document}
# Understanding vectors and their operations

Vectors are the fundamental objects in linear algebra. They are simply ordered lists of numbers, and they can be represented as columns or rows of a matrix. Vectors can be added together and multiplied by scalars (i.e., numbers).

Consider two vectors:

$$ \mathbf{v} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \quad \mathbf{w} = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} $$

The sum of these vectors is:

$$ \mathbf{v} + \mathbf{w} = \begin{bmatrix} 1 + 4 \\ 2 + 5 \\ 3 + 6 \end{bmatrix} = \begin{bmatrix} 5 \\ 7 \\ 9 \end{bmatrix} $$

The scalar multiplication of $\mathbf{v}$ by 2 is:

$$ 2 \mathbf{v} = \begin{bmatrix} 2 \cdot 1 \\ 2 \cdot 2 \\ 2 \cdot 3 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} $$

## Exercise

Calculate the sum of the vectors $\mathbf{u} = \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix}$ and $\mathbf{x} = \begin{bmatrix} 7 \\ 8 \\ 9 \end{bmatrix}$.

# Introduction to matrices and their operations

Matrices are rectangular arrays of numbers. They can be used to represent linear transformations, and they can be combined using matrix multiplication.

Consider two matrices:

$$ \mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 7 & 8 \\ 9 & 10 \end{bmatrix} $$

The matrix product of $\mathbf{A}$ and $\mathbf{B}$ is:

$$ \mathbf{A} \mathbf{B} = \begin{bmatrix} 1 \cdot 7 + 2 \cdot 9 & 1 \cdot 8 + 2 \cdot 10 \\ 3 \cdot 7 + 4 \cdot 9 & 3 \cdot 8 + 4 \cdot 10 \\ 5 \cdot 7 + 6 \cdot 9 & 5 \cdot 8 + 6 \cdot 10 \end{bmatrix} = \begin{bmatrix} 25 & 28 \\ 57 & 64 \\ 89 & 100 \end{bmatrix} $$

## Exercise

Calculate the matrix multiplication of $\mathbf{C} = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$ and $\mathbf{D} = \begin{bmatrix} 6 & 7 \\ 8 & 9 \end{bmatrix}$.

# Linear transformations and their properties

A linear transformation is a function that preserves the operations of addition and scalar multiplication.
It can be represented as a matrix, and the matrix's columns are the images of the basis vectors.

Consider the linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^2$ given by the matrix:

$$ \mathbf{T} = \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix} $$

The image of the vector $\mathbf{v} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ under $T$ is:

$$ \mathbf{T} \mathbf{v} = \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 \cdot 1 + 1 \cdot 2 \\ 1 \cdot 1 + 0 \cdot 2 \end{bmatrix} = \begin{bmatrix} 4 \\ 1 \end{bmatrix} $$

## Exercise

Calculate the image of the vector $\mathbf{w} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$ under the linear transformation represented by the matrix $\mathbf{S} = \begin{bmatrix} 3 & 1 \\ 2 & 0 \end{bmatrix}$.

# Eigenvalues and eigenvectors: definition and properties

An eigenvector of a linear transformation is a non-zero vector that is mapped to a scalar multiple of itself. The corresponding scalar is called the eigenvalue.

Consider the linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^2$ given by the matrix:

$$ \mathbf{T} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} $$

An eigenvector of $T$ corresponding to the eigenvalue 3 is $\mathbf{v} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, because $T(\mathbf{v}) = \begin{bmatrix} 3 \\ 3 \end{bmatrix} = 3 \mathbf{v}$.

## Exercise

Find the eigenvectors and eigenvalues of the linear transformation represented by the matrix $\mathbf{S} = \begin{bmatrix} 3 & 1 \\ 2 & 0 \end{bmatrix}$.

# Diagonalization of matrices and its applications

A matrix $\mathbf{A}$ is diagonalizable if it can be written as $\mathbf{A} = \mathbf{P} \mathbf{D} \mathbf{P}^{-1}$, where $\mathbf{D}$ is a diagonal matrix and $\mathbf{P}$ is an invertible matrix whose columns are eigenvectors of $\mathbf{A}$. This is useful for solving linear systems and computing powers of a matrix.
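To see why diagonalization makes powers cheap: if $\mathbf{A} = \mathbf{P} \mathbf{D} \mathbf{P}^{-1}$, then $\mathbf{A}^k = \mathbf{P} \mathbf{D}^k \mathbf{P}^{-1}$, and $\mathbf{D}^k$ only requires raising the diagonal entries to the $k$-th power. A minimal NumPy sketch (the matrix here is chosen purely for illustration):

```python
import numpy as np

# A matrix with distinct eigenvalues, hence diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns eigenvalues and a matrix whose columns are eigenvectors.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Sanity check: A = P D P^{-1}.
assert np.allclose(P @ D @ np.linalg.inv(P), A)

# Compute A^5 via the decomposition: only the diagonal entries are powered.
k = 5
A_pow = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
assert np.allclose(A_pow, np.linalg.matrix_power(A, k))
```

The same pattern extends to many matrix functions: $f(\mathbf{A}) = \mathbf{P} f(\mathbf{D}) \mathbf{P}^{-1}$ with $f$ applied entrywise to the diagonal.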
Consider the matrix:

$$ \mathbf{A} = \begin{bmatrix} 4 & 1 \\ 2 & 3 \end{bmatrix} $$

Its eigenvalues are 5 and 2, with eigenvectors $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -2 \end{bmatrix}$, so $\mathbf{A}$ is diagonalizable with:

$$ \mathbf{P} = \begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}, \quad \mathbf{D} = \begin{bmatrix} 5 & 0 \\ 0 & 2 \end{bmatrix} $$

so that $\mathbf{A} = \mathbf{P} \mathbf{D} \mathbf{P}^{-1}$.

## Exercise

Diagonalize the matrix $\mathbf{B} = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}$.

# Orthogonal and orthonormal bases

An orthogonal basis is a set of vectors that are orthogonal to each other (i.e., their dot product is zero). An orthonormal basis is an orthogonal basis in which all vectors have unit length.

Consider the vectors:

$$ \mathbf{u} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} $$

These vectors are orthogonal and have unit length, so they form an orthonormal basis for $\mathbb{R}^2$.

## Exercise

Find an orthonormal basis for the subspace of $\mathbb{R}^3$ defined by the equation $x + y + z = 0$.

# Applications of computational linear algebra in machine learning and data science

Computational linear algebra is widely used in machine learning and data science. For example, it is used to compute the principal components of a dataset, which are the directions of maximum variance.

Consider the dataset:

$$ \mathbf{X} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} $$

The principal components of this dataset can be computed (after mean-centering) using the singular value decomposition:

$$ \mathbf{X} = \mathbf{U} \mathbf{S} \mathbf{V}^{\mathrm{T}} $$

## Exercise

Compute the principal components of the dataset $\mathbf{Y} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$.

# NumPy functions for working with vectors and matrices

NumPy is a powerful library in Python that provides functions for working with vectors and matrices.
For example, it provides functions for creating vectors and matrices, performing operations on them, and solving linear systems.

```python
import numpy as np

# Create a vector
v = np.array([1, 2, 3])

# Create a matrix
A = np.array([[1, 2], [3, 4]])

# Perform matrix multiplication
B = np.array([[5, 6], [7, 8]])
C = np.dot(A, B)

# Solve the linear system A x = b (b must match the number of rows of A)
b = np.array([5, 6])
x = np.linalg.solve(A, b)
```

## Exercise

Use NumPy to compute the eigenvectors and eigenvalues of the matrix $\mathbf{S} = \begin{bmatrix} 3 & 1 \\ 2 & 0 \end{bmatrix}$.

# Solving linear systems and least squares problems

Linear systems and least squares problems are common problems in computational linear algebra. They can be solved using various methods, such as Gaussian elimination, LU decomposition, and the QR factorization.

Consider the linear system:

$$ \mathbf{A} \mathbf{x} = \mathbf{b} $$

where:

$$ \mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 5 \\ 6 \end{bmatrix} $$

Gaussian elimination subtracts 3 times the first row from the second, giving:

$$ \begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix} \mathbf{x} = \begin{bmatrix} 5 \\ -9 \end{bmatrix} $$

Back-substitution then yields $x_2 = 9/2$ and $x_1 = 5 - 2 \cdot 9/2 = -4$, so:

$$ \mathbf{x} = \begin{bmatrix} -4 \\ 9/2 \end{bmatrix} $$

## Exercise

Solve the linear system $\mathbf{A} \mathbf{x} = \mathbf{b}$ where $\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 4 \\ 3 \end{bmatrix}$.

# Eigendecomposition and singular value decomposition

Eigendecomposition and singular value decomposition are powerful techniques in computational linear algebra. They are used to compute the eigenvalues and eigenvectors of a matrix, and the singular values and left and right singular vectors of a rectangular matrix, respectively.
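Both decompositions are exposed directly in NumPy; a short sketch (the matrices here are chosen purely for illustration):

```python
import numpy as np

# Eigendecomposition of a square matrix: the columns of `vecs` are
# eigenvectors, satisfying A v = lambda v.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eig(A)
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)

# Singular value decomposition of a rectangular matrix: X = U S V^T.
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
U, s, Vt = np.linalg.svd(X, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, X)
```

Note that `np.linalg.svd` returns the singular values as a 1-D array `s`, not as a diagonal matrix, so the factorization is reassembled with `np.diag(s)`.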
Consider the matrix:

$$ \mathbf{A} = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} $$

The eigendecomposition of $\mathbf{A}$ is:

$$ \mathbf{A} = \mathbf{P} \mathbf{D} \mathbf{P}^{-1} $$

where $\mathbf{P}$ is the matrix of eigenvectors and $\mathbf{D}$ is the diagonal matrix of eigenvalues. (For a symmetric matrix, $\mathbf{P}$ can be chosen orthogonal, in which case $\mathbf{P}^{-1} = \mathbf{P}^{\mathrm{T}}$.)

## Exercise

Compute the eigendecomposition and singular value decomposition of the matrix $\mathbf{B} = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}$.

# Applications of computational linear algebra in computer graphics

Computational linear algebra is also used in computer graphics for tasks such as transforming objects, rendering images, and solving optimization problems.

In computer graphics, a 3D object is represented by a set of vertices. These vertices can be transformed using a transformation matrix. For example, to rotate a vertex $\mathbf{v}$ by an angle $\theta$ around the z-axis, the transformation matrix is:

$$ \mathbf{R} = \begin{bmatrix} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{bmatrix} $$

## Exercise

Write a Python function that uses NumPy to transform a set of vertices by a rotation matrix.
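One possible sketch for the rotation exercise above, using the matrix $\mathbf{R}$ just defined (the function name `rotate_z` is our own choice, not a standard API):

```python
import numpy as np

def rotate_z(vertices, theta):
    """Rotate an (n, 3) array of vertices by angle theta around the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    # R acts on column vectors; for row-vector vertices we multiply by R^T.
    return vertices @ R.T

# A quarter turn sends (1, 0, 0) to (0, 1, 0) and leaves z-coordinates fixed.
verts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 2.0]])
rotated = rotate_z(verts, np.pi / 2)
assert np.allclose(rotated, [[0.0, 1.0, 0.0],
                             [-1.0, 0.0, 2.0]])
```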
Stability analysis of an enteropathogen population growing within a heterogeneous group of animals

Carles Barril and Àngel Calsina

Faculty of Sciences, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain

* Corresponding author: [email protected]

Received October 2015 Revised November 2016 Published February 2017

Fund Project: The first author is supported by Spanish Ministry of Education grant FPU13/04333

An autonomous semi-linear hyperbolic PDE system for the proliferation of bacteria within a heterogeneous population of animals is presented and analysed. It is assumed that bacteria grow inside the intestines and that they can be either attached to the epithelial wall or free particles in the lumen. A condition involving ecological parameters is given, which can be used to decide the existence of endemic equilibria as well as local stability properties of the non-endemic one. Some implications for phage therapy are addressed.

Keywords: Mathematical epidemiology, phage therapy, steady state stability, spatially structured population, semilinear formulation, characteristic equation.

Mathematics Subject Classification: Primary: 92D25, 35B40; Secondary: 35L50.

Citation: Carles Barril, Àngel Calsina. Stability analysis of an enteropathogen population growing within a heterogeneous group of animals. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4): 1231-1252. doi: 10.3934/dcdsb.2017060

Figure 1. Bifurcation diagram showing epidemic progression (dark regions) or eradication (white regions) in a system with two hosts. The changing parameters are the fraction of bacteriophage given to the first host ($q_0^1/(q_0^1+q_0^2)$, ranging from 0 to 1) and its detachment rate ($\delta_1$, ranging from 0 to 1.5). The bacterial and bacteriophage distributions along the intestine once the system has converged to the equilibria are shown for two different sets of parameters A and B. The dashed line refers to attached bacteria, while gray is used for host one and black for host two. The other fixed parameters used in the numerical simulations are the total bacteriophage dose per time unit $q_0^1+q_0^2=11$ and: $c_h=l_h=1$, $\gamma_1^h(u)=1-u$, $\gamma_2^h(v)=1-v$, $\alpha_h=4$, $b=4$, $\kappa_1^h=0.06$, $\kappa_2^h=0.1$, $\lambda_1^h=\lambda_2^h=0.1$, $\mu_1=0.4$ and $\mu_2=0.1$ for all $h\in\{1,2\}$, and $\delta_2=0.5$.
Notice that the two hosts only differ in their detachment rate ($\delta_1$ and $\delta_2$) and the treatment received ($q_0^1$ and $q_0^2$).
[1] Simple Matrix – A Multivariate Public Key Cryptosystem (MPKC) for Encryption
Chengdong Tao South China University of Technology, China; Hong Xiang ChongQing University, China; Albrecht Petzoldt Technische Universität Darmstadt, Germany; Jintai Ding ChongQing University, China; University of Cincinnati, OH, USA
Group Theory and Lie Theory mathscidoc:2207.17001
Finite Fields and Their Applications, 35, 352-368, 2015.9
Multivariate cryptography is one of the main candidates to guarantee the security of communication in the presence of quantum computers. While there exist a large number of secure and efficient multivariate signature schemes, the number of practical multivariate encryption schemes is somewhat limited. In this paper we present our results on creating a new multivariate encryption scheme, which is an extension of the original SimpleMatrix encryption scheme of PQCrypto 2013. Our scheme allows fast en- and decryption and resists all known attacks against multivariate cryptosystems. Furthermore, we present a new idea to solve the decryption failure problem of the original SimpleMatrix encryption scheme.

[2] A colimit of traces of reflection groups
Penghui Li Institute of Science and Technology Austria
Proceedings of the AMS, 147, 2019.6
Li-Nadler proposed a conjecture about traces of Hecke categories, which implies the semistable part of the Betti geometric Langlands conjecture of Ben-Zvi-Nadler in genus 1. We prove a Weyl group analogue of this conjecture. Our theorem holds in the natural generality of reflection groups in Euclidean or hyperbolic space. As a corollary, we give an expression of the centralizer of a finite order element in a reflection group using homotopy theory.
[3] Ultraproducts of quasirandom groups with small cosocles
Yilong Yang
Journal of Group Theory, 19, (6), 2016.3

[4] A Diameter Bound for Finite Simple Groups of Large Rank
Arindam Biswas, Yilong Yang
Journal of the London Mathematical Society, 95, (2), 2017.1

[5] Speed of random walks, isoperimetry and compression of finitely generated groups
Jérémie Brieussel Institut/Laboratoire Montpelliérain Alexander Grothendieck (IMAG) (UMR 5149), Université de Montpellier, 34090 Montpellier, France; Tianyi Zheng Department of Mathematics, Stanford University, Stanford (Palo Alto) CA 94305
Group Theory and Lie Theory Metric Geometry Probability mathscidoc:2203.17002
Annals of Mathematics, 193, (1), 1-105, 2021.1
We give a solution to the inverse problem (given a prescribed function, find a corresponding group) for large classes of speed, entropy, isoperimetric profile, return probability and $L^p$-compression functions of finitely generated groups. For smaller classes, we give solutions among solvable groups of exponential volume growth. As corollaries, we prove a recent conjecture of Amir on joint evaluation of speed and entropy exponents and we obtain a new proof of the existence of uncountably many pairwise non-quasi-isometric solvable groups, originally due to Cornulier and Tessera. We also obtain a formula relating the $L^p$-compression exponent of a group and its wreath product with the cyclic group for $p$ in $[1,2]$.

[6] Martin boundary covers Floyd boundary
Ilya Gekhtman Department of Mathematics, Technion-Israeli Institute of Technology, 32000 Haifa, Israel; Victor Gerasimov Departamento de Matemática, Universidade Federal de Minas Gerais, Av.
Antônio Carlos 6627, Caixa Postal 702, 30161-970 Brasil; Leonid Potyagailo UFR de Mathématiques, Université de Lille, 59655 Villeneuve d'Ascq, France; Wenyuan Yang Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China
Dynamical Systems Geometric Analysis and Geometric Topology Group Theory and Lie Theory Probability mathscidoc:2203.11005
Inventiones Mathematicae, 223, 759-809, 2021.1
For a random walk on a finitely generated group G we obtain a generalization of a classical inequality of Ancona. We deduce as a corollary that the identity map on G extends to a continuous equivariant surjection from the Martin boundary to the Floyd boundary, with preimages of conical points being singletons. This provides new results for Martin compactifications of relatively hyperbolic groups.

[7] Quotients of higher-dimensional Cremona groups
Jérémy Blanc Universität Basel, Switzerland; Stéphane Lamy Université de Toulouse, France; Susanna Zimmermann Université d'Angers, France
Group Theory and Lie Theory Algebraic Geometry mathscidoc:2203.17001
Acta Mathematica, 226, (2), 211-318, 2021.7
We study large groups of birational transformations Bir(X), where X is a variety of dimension at least 3, defined over C or a subfield of C. Two prominent cases are when X is the projective space P^n, in which case Bir(X) is the Cremona group of rank n, or when X⊂P^{n+1} is a smooth cubic hypersurface. In both cases, and more generally when X is birational to a conic bundle, we produce infinitely many distinct group homomorphisms from Bir(X) to Z/2, showing in particular that the group Bir(X) is not perfect, and thus not simple. As a consequence, we also obtain that the Cremona group of rank n⩾3 is not generated by linear and Jonquières elements.
[8] Small cancellation labellings of some infinite graphs and applications
Damian Osajda Instytut Matematyczny, Uniwersytet Wrocławski, Wrocław, Poland; and Fakultät für Mathematik, Universität Wien, Austria
Combinatorics Functional Analysis Group Theory and Lie Theory mathscidoc:2203.06001
Acta Mathematica, 225, (1), 159-191, 2020.11
We construct small cancellation labellings for some infinite sequences of finite graphs of bounded degree. We use them to define infinite graphical small cancellation presentations of groups. This technique allows us to provide examples of groups with exotic properties: • We construct the first examples of finitely generated coarsely non-amenable groups (that is, groups without Guoliang Yu's Property A) that are coarsely embeddable into a Hilbert space. Moreover, our groups act properly on CAT(0) cubical complexes. • We construct the first examples of finitely generated groups, with expanders embedded isometrically into their Cayley graphs—in contrast, in the case of the Gromov monster expanders are not even coarsely embedded. We present further applications.

[9] On the realization of Riemannian symmetric spaces in Lie groups II
Jinpeng An, Zhengdong Wang
Group Theory and Lie Theory mathscidoc:1912.431045
Topology and its Applications, 153, (15), 2943-2947
In this paper we generalize a result in [J. An, Z. Wang, On the realization of Riemannian symmetric spaces in Lie groups, Topology Appl. 153 (7) (2005) 1008–1015], showing that an arbitrary Riemannian symmetric space can be realized as a closed submanifold of a covering group of the Lie group defining the symmetric space. Some properties of the subgroups of fixed points of involutions are also proved.
[10] Corrections to compact group actions and the topology of manifolds with non-positive curvature
Richard Schoen, Shing-Tung Yau
Topology, 21, (4), 483, 1982.1
(1) Ker$(f_*)$ is a characteristic subgroup of $\pi_1(M)$, (2) the group $G$ leaves Ker$(f_*)$ invariant. This assumption is needed in the last two statements of Corollary 6, (vi) of Theorem 8, Theorem 11 and Theorem 13. In Theorem 12, the corresponding assumption can be stated as follows: Let $H$ be the subspace of $H^1(M,\mathbb{R})$ defined by $\{\beta \mid \beta \cup \alpha_{i_1} \cup \cdots \cup \alpha_{i_{n-1}} = 0 \text{ for all } i_j\}$. Then $G$ leaves $H$ invariant.

[11] A subalgebra of 0-Hecke algebra
Xuhua He
Journal of Algebra, 322, (11), 4030-4039
Let (W, I) be a finite Coxeter group. In the case where W is a Weyl group, Berenstein and Kazhdan in [A. Berenstein, D. Kazhdan, Geometric and unipotent crystals. II. From unipotent bicrystals to crystal bases, in: Quantum Groups, in: Contemp. Math., vol. 433, Amer. Math. Soc., Providence, RI, 2007, pp. 13–88] constructed a monoid structure on the set of all subsets of I using unipotent $\chi$-linear bicrystals. In this paper, we will generalize this result to all types of finite Coxeter groups (including non-crystallographic types). Our approach is more elementary, based on some combinatorics of Coxeter groups. Moreover, we will calculate this monoid structure explicitly for each type.

[12] The flabby class group of a finite cyclic group
Kefeng Liu
Proceedings of the 4th International Congress of Chinese Mathematicians, 1, 241-251, 2008
The aim of this paper is to give a proof of the calculation of the flabby class group of a finite cyclic group due to Endo and Miyata [2]. In the next section I will recall the definition and some basic facts about this group.
In the final section I will give some examples to show that the invertibility conditions used by Endo and Miyata cannot be removed. I would like to thank M.C. Kang for useful comments and for showing me his results on some related problems.

[13] Lipschitz structure and minimal metrics on topological groups
Christian Rosendal University of Illinois & University of Maryland
Arkiv for Matematik, 56, (1), 185-206, 2018
We discuss the problem of deciding when a metrisable topological group G has a canonically defined local Lipschitz geometry. This naturally leads to the concept of minimal metrics on G, that we characterise intrinsically in terms of a linear growth condition on powers of group elements. Combining this with work on the large scale geometry of topological groups, we also identify the class of metrisable groups admitting a canonical global Lipschitz geometry. In turn, minimal metrics connect with Hilbert's fifth problem for completely metrisable groups and we show, assuming that the set of squares is sufficiently rich, that every element of some identity neighbourhood belongs to a 1-parameter subgroup.

[14] On the center of quiver Hecke algebras
Peng Shan Tsinghua University; Michela Varagnolo University of Cergy-Pontoise; Eric Vasserot University of Paris Diderot
Group Theory and Lie Theory Quantum Algebra Representation Theory mathscidoc:1806.17001
Best Paper Award in 2018
Duke Math J., 166, (6), 1005–1101, 2017

[15] A Hopf algebra associated with a Lie pair
Zhuo Chen Tsinghua University; Mathieu Stienon Pennsylvania State University; Ping Xu Pennsylvania State University
Differential Geometry Group Theory and Lie Theory mathscidoc:1803.10004
C.R. Acad. Sci.
Paris, Ser.I, 352, 929-933, 2014.6 [ Download ] [ 2018-03-30 21:18:32 uploaded by zhuo_chen ] [ 668 downloads ] [ 0 comments ] [16] Nilpotent $p\mspace{1mu}$ -local finite groups José Cantarero Centro de Investigación en Matemáticas, A.C., Guanajuato, Mexico Jérôme Scherer Mathematics Institute for Geometry and Applications, École Polytechnique Fédérale de Lausanne Antonio Viruel Departamento de Álgebra, Geometría y Topología, Universidad de Málaga Arkiv for Matematik, 52, (2), 203-225, 2012.5 We provide characterizations of $p\mspace {1mu}$ -nilpotency for fusion systems and $p\mspace {1mu}$ -local finite groups that are inspired by known result for finite groups. In particular, we generalize criteria by Atiyah, Brunetti, Frobenius, Quillen, Stammbach and Tate. [17] Contracting automorphisms and$L$^{$p$}-cohomology in degree one Yves Cornulier Institut de recherche mathématique de Rennes, Université de Rennes 1 Romain Tessera Département de mathématiques (UMPA) École normale supérieure de Lyon, 46 allée d'Italie, Lyon Cedex 07, France [ Cited by 5 ] We characterize those Lie groups, and algebraic groups over a local field of characteristic zero, whose first reduced$L$^{$p$}-cohomology is zero for all$p$>1, extending a result of Pansu. As an application, we obtain a description of Gromov-hyperbolic groups among those groups. In particular we prove that any non-elementary Gromov-hyperbolic algebraic group over a non-Archimedean local field of zero characteristic is quasi-isometric to a 3-regular tree. We also extend the study to general semidirect products of a locally compact group by a cyclic group acting by contracting automorphisms. [18] Modules of systems of measures on polarizable Carnot groups M. Brakalova Department of Mathematics, Fordham University I. Markina Department of Mathematics, University of Bergen A. 
Vasil'ev Department of Mathematics, University of Bergen Arkiv for Matematik, 1-31, 2015.8 The paper presents a study of Fuglede's $p$ -module of systems of measures in condensers in polarizable Carnot groups. In particular, we calculate the $p$ -module of measures in spherical ring domains, find the extremal measures, and finally, extend a theorem by Rodin to these groups. [19] A geometric interpretation of the Schützenberger group of a minimal subshift Jorge Almeida CMUP, Departamento de Matemática, Faculdade de Ciências, Universidade do Porto Alfredo Costa CMUC, Department of Mathematics, University of Coimbra Geometric Analysis and Geometric Topology Group Theory and Lie Theory mathscidoc:1701.01011 The first author has associated in a natural way a profinite group to each irreducible subshift. The group in question was initially obtained as a maximal subgroup of a free profinite semigroup. In the case of minimal subshifts, the same group is shown in the present paper to also arise from geometric considerations involving the Rauzy graphs of the subshift. Indeed, the group is shown to be isomorphic to the inverse limit of the profinite completions of the fundamental groups of the Rauzy graphs of the subshift. A further result involving geometric arguments on Rauzy graphs is a criterion for freeness of the profinite group of a minimal subshift based on the Return Theorem of Berthé et al. 
[20] Unique Cartan decomposition for II_{1}factors arising from arbitrary actions of free groups Sorin Popa Mathematics Department, University of California, Los Angeles Stefaan Vaes Department of Mathematics, KU Leuven Functional Analysis Group Theory and Lie Theory Spectral Theory and Operator Algebra mathscidoc:1701.12005 [ Cited by 14 ] We prove that for any free ergodic probability measure-preserving action $${\mathbb{F}_n \curvearrowright (X, \mu)}$$ of a free group on$n$generators $${\mathbb{F}_n, 2\leq n \leq \infty}$$ , the associated group measure space II_{1}factor $${L^\infty (X)\rtimes \mathbb{F}_n}$$ has$L$^{∞}($X$) as its unique Cartan subalgebra, up to unitary conjugacy. We deduce that group measure space II_{1}factors arising from actions of free groups with different number of generators are never isomorphic. We actually prove unique Cartan decomposition results for II_{1}factors arising from arbitrary actions of a much larger family of groups, including all free products of amenable groups and their direct products. [21] Fusion systems and localities Andrew Chermak Mathematics Department, Kansas State University Acta Mathematica, 211, (1), 47-139, 2011.8 We introduce$objective partial groups$, of which the linking systems and$p$-local finite groups of Broto, Levi, and Oliver, the transporter systems of Oliver and Ventura, and the $${\mathcal{F}}$$ -localities of Puig are examples, as are groups in the ordinary sense. As an application we show that if $${\mathcal{F}}$$ is a saturated fusion system over a finite$p$-group then there exists a centric linking system $${\mathcal{L}}$$ having $${\mathcal{F}}$$ as its fusion system, and that $${\mathcal{L}}$$ is unique up to isomorphism. The proof relies on the classification of the finite simple groups in an indirect and—for that reason—perhaps ultimately removable way. 
[22] Normal subgroups in the Cremona group Serge Cantat Université de Rennes I, Campus de Beaulieu, Bâtiment 22-23, Rennes Cedex, France Stéphane Lamy Mathematics Institute, University of Warwick Yves de Cornulier Laboratoire de Mathématiques d'Orsay, CNRS & Université Paris-Sud 11 Acta Mathematica, 210, (1), 31-94, 2010.7 Let$k$be an algebraically closed field. We show that the Cremona group of all birational transformations of the projective plane $$ \mathbb{P}_{\mathbf{k}}^2 $$ is not a simple group. The strategy makes use of hyperbolic geometry, geometric group theory and algebraic geometry to produce elements in the Cremona group that generate non-trivial normal subgroups.
CommonCrawl
Impact of Economic Structure on the Environmental Kuznets Curve (EKC) hypothesis in India

Muhammed Ashiq Villanthenkodath, Mohini Gupta, Seema Saini & Malayaranjan Sahoo

Abstract

This study evaluates the impact of economic structure on the Environmental Kuznets Curve (EKC) in India. It departs from the bulk of the literature by incorporating both aggregated and disaggregated measures of economic development into the environmental degradation function. For the empirical analysis, the study employs the Auto-Regressive Distributed Lag (ARDL) bounds testing approach to cointegration to analyse the long-run and short-run relationships over 1971–2014. Further, the direction of causality is investigated through the Wald test approach. The results reveal that the conventional EKC hypothesis does not hold in India in either the aggregated or the disaggregated model, since economic growth and its components have a U-shaped impact on environmental quality. The effect of population on environmental quality is positive but not significant in the aggregated model, whereas in the disaggregated model it significantly affects environmental quality. Hence, it is possible to infer that as the country's population increases, the demand for energy rises tremendously, particularly the consumption of fossil fuels like coal, oil, and natural gas, as is also evident from the energy structure coefficient in both models. This increase is due to the scarcity of renewable energy for meeting people's needs. On the contrary, urbanization reduces environmental degradation, which may be due to improved living conditions in terms of efficient infrastructure and energy efficiency in urban areas, leading to a negative relation between urbanization and environmental degradation.
Introduction

The changes in human activity during the COVID-19 pandemic pushed the Indian economy into a contraction phase resembling a recession. For instance, India's GDP contracted by 23.9 per cent in the first quarter compared with the same quarter of the previous year. The pandemic also had a vivid effect on energy consumption and carbon dioxide emissions, as reduced human mobility and the shutdown of industries led to a decline in coal and oil consumption. This has steepened the continuing downturn in industrial growth and the overall economic performance of India. Hence, the slowdown in the industrial sector during COVID-19 may reduce atmospheric emissions. However, the present slowdown in carbon emissions is not sustainable over the long run, as carbon dioxide keeps accumulating in the atmosphere through economic activity over time. Therefore, the study raises the question of whether the COVID-19 epoch could reduce carbon dioxide emissions over a longer period. To answer this question, the present study analyses the impact on carbon dioxide of gross domestic product (GDP), the industrial sector, energy structure, population, and urbanization in India. Moreover, environmental degradation is a global concern, and carbon dioxide is the prime emission affecting the worldwide natural environment (MK 2020; Villanthenkodath et al. 2021). Therefore, many nations agreed to the Kyoto Protocol in 1997 so as to shield nature from exploitation. Nonetheless, carbon dioxide emissions have risen rapidly in developing countries like India. The COVID-19 scenario in 2020 adversely affected the entire globe, but for a developing country such as India, the path of recovery is a matter of concern.
In the theoretical literature, the Environmental Kuznets Curve (EKC) lays out an inverted relationship between environmental degradation and economic development. The EKC approach to pollution and growth was introduced by Grossman and Krueger (1991); likewise, Stern and Common (2001) explain that low industrialization will contribute to less environmental damage. However, there is no consensus regarding the effect of carbon dioxide and its complex relation with economic growth. Hasanov et al. (2019) provide a cubic functional form of the EKC, describing a monotonic rise of GDP along with carbon dioxide in Kazakhstan, and find that the EKC does not hold. Unlike existing studies in the literature, the prime focus of the present study is to examine the impact of economic structure on the environmental quality of India through the Environmental Kuznets Curve (EKC) framework. Besides that, the proponents of economic growth encourage the reduction of environmental degradation where economic growth is decoupled from its environmental effect. According to the report of the Centre for Research on Energy and Clean Air (CREA 2020), India's carbon dioxide emissions fell drastically, by 15%, in the first quarter of the pandemic period of 2020. The reduction in demand for coal, oil, and gas made carbon dioxide emissions fall by 30%, witnessed for the first time in the last four decades. This fall in carbon dioxide emissions is mainly due to the shutdown of the industrial sector; India has also targeted a 40% reduction in emissions by shifting to non-fossil-fuel consumption. However, India is undergoing rapid industrial development; hence, understanding these changes and their impact on carbon dioxide emissions is required for relevant policymaking.
Moreover, the industrial sector is a core part of the economic system, transforming in scale and structure as an economy grows, especially in a developing country like India (Fan et al. 2003). Meanwhile, industrial sectors are eminent emitters of carbon dioxide, and consumers also contribute by utilizing carbon-intensive products. The intensity of carbon dioxide may differ across sectors of the industrial structure in a specific region (Tian et al. 2014). Hence, industrial structure is one of the important determinants associated with economic growth and carbon dioxide emissions. Thus, understanding the association between CO2 emissions, economic structure in terms of industrial sector value-added, and economic growth, keeping urbanization, energy structure, and population as control variables, provides the particulars for implementing policy. Against this background, and to the best of our knowledge, this is the first study that builds a model of structural transformation in the context of environmental degradation to foster industrial diversity and environmental sustainability. Further, several prevailing studies consider only the aggregate component of the economy when estimating the EKC hypothesis, but this study contributes to the literature by considering both aggregate and disaggregate components of the economy in the estimation of the EKC. The time-series analysis thus examines the impact of economic structure and economic growth on carbon dioxide under the EKC hypothesis for India. The study uses time-series data spanning 1971 to 2014; this is an updated series compared with other studies and has relatively more data points, producing more reliable outcomes. The findings cover both the aggregate and disaggregate models, wherein the aggregate model represents the long-run relation between CO2 emissions and economic growth.
In contrast, the disaggregate model shows a long-run relationship between industrial value-added and CO2 emissions in the presence of other control variables. However, neither model supports the conventional EKC hypothesis for India. Thus, government authorities can establish policies targeting renewable energy over and above the non-renewable energy structure. The paper proceeds as follows: Sect. 2 reviews the related literature; Sect. 3 presents the theoretical model, data description, and econometric methodology; Sect. 4 delineates the empirical analysis; and Sect. 5 concludes with policy implications.

Literature review

In the existing literature, the relationship between economic growth and environmental quality has been amply studied. In the book "The Limits to Growth", Meadows et al. (1972) argue that economic growth degrades environmental sustainability; hence, to protect environmental quality, there should be a limit to growth. In a seminal paper, Grossman and Krueger (1991) explored the environmental impact of the North American Free Trade Agreement (NAFTA) and observed that economic growth affects the environment through a scale effect, a composition effect, and a technical effect. They also find that two pollutants, smoke and SO2, increase with GDP at low levels of national income but decrease with GDP at higher levels of income. Similarly, Wang et al. (2016) assessed the relationship between economic growth and sulfur dioxide emissions and found that the income-sulfur dioxide relationship follows the conventional environmental Kuznets curve path. Similar results were found by Panayotou (1993), Shafik (1994), Apergis and Ozturk (2015), Bilgili et al. (2016), Shahbaz et al. (2017), and El Montasser et al. (2018) while estimating the EKC hypothesis.
Likewise, Stern and Common (2001) investigated the relationship between economic growth and sulfur dioxide for 74 countries globally from 1960 to 1990 but did not find evidence for the conventional EKC hypothesis. Hence, they concluded that the EKC model is fundamentally misspecified and suffers from omitted variable bias. The same outcome was reached by Harbaugh et al. (2002) using a similar proxy measure for environmental quality. However, Dasgupta et al. (2002) doubt the universal acceptability of the EKC hypothesis. Pal and Mitra (2017) argue that there is still another turning point, even where there is evidence for the conventional EKC relationship. Incorporating additional variables in the CO2 emissions model, Wang et al. (2013) examine the impact of economic growth, population, technology level, urbanization, service level, industrialization, energy consumption structure, and foreign trade on energy-related CO2 emissions in Guangdong Province, China, from 1980 to 2010 using an extended STIRPAT model. Results indicate that technology level, foreign trade, and energy consumption structure lead to a decline in CO2 emissions. In a different study, Wang et al. (2017) investigate the driving factors of CO2 emissions from a regional perspective in China by employing the extended STIRPAT model from 1952 to 2012. The results show that the impacts of various factors on carbon emissions differ across development stages. Likewise, Ghazali and Ali (2019) studied the impact of various factors on CO2 in Newly Industrialized Countries (NICs) by utilizing the extended STIRPAT model from 1991 to 2013. Their empirical results suggest that GDP per capita, population, and CO2 emission intensity, along with energy intensity, are the main contributors to CO2 emissions in NICs, while population carrying capacity has no significant impact on the CO2 emission level.
There seems to be mixed evidence on the EKC hypothesis. Grossman and Krueger (1991) suggest that the environment cannot be improved by economic growth alone unless supported by institutions and policies. Therefore, validation of the EKC hypothesis depends intuitively on other factors such as access to technology or technological progress, quality of institutions, and availability of natural resources (Dogan and Inglesi-Lotz 2020). Recent studies have also included other variables like energy consumption, foreign aid, corruption, foreign investment, urbanization, technology, energy intensity, and financial development (Mahalik et al. 2021; Villanthenkodath and Mahalik 2020). Hence, Carson (2010) points out that results are sensitive to the model specification, dataset, variables added, and environmental proxy. In the Indian context, the literature also shows mixed evidence on the EKC hypothesis. For instance, Boutabba (2014) examines the existence and direction of the causal relationship in a multivariate framework for the Indian economy from 1970 to 2008. The results suggest a long-run relationship between per capita income and per capita carbon emissions and further lend support to the EKC hypothesis. Similarly, Sehrawat and Giri (2015), using urbanization as an additional contributor to emissions, studied the EKC hypothesis during 1971–2011 for the Indian economy and confirmed its existence. Besides that, Kanjilal and Ghosh (2013), using a threshold cointegration test, found the presence of the EKC hypothesis for India. Likewise, Jayanthakumaran et al. (2012) concluded in favour of the EKC hypothesis in India. Recently, Shahbaz and Sinha (2019) estimated the EKC for emissions using the ARDL technique from 1971 to 2015 for the Indian economy. The study includes renewable energy, measured by electric power consumption, and its effect on environmental quality. The results suggest that the EKC does exist for India.
A study by Dar and Asif (2017) explored energy use, financial development, and economic growth in relation to emissions using the ARDL model for the Indian economy but failed to establish the presence of the EKC hypothesis. A similar outcome was reached by Alam and Adil (2019), who conclude that there is no significant relationship between economic growth and carbon emissions. A study by Roy et al. (2017) analyzed the environmental impact of energy demand, energy mix, and fossil fuel intensity in a fast-growing economy like India from 1990 to 2016. They find that population, energy structure, and energy intensity are statistically significant factors for CO2 emissions in India. Studies that consider the various sources of economic growth to test the EKC hypothesis do not largely exist in the literature (Dogan and Inglesi-Lotz 2020; Lin et al. 2016). Our research bridges this gap by studying the presence of the EKC hypothesis while considering different sources of economic growth for the Indian economy.

Theoretical model, data description, and econometric methodology

Theoretical model and data description

The IPAT identity is considered a framework for determining what constitutes the patterns of the environment (Chertow 2000). It demonstrates how climate change (generally measured in terms of CO2 or other air pollutants) responds to factors such as population, affluence, and technology.

$$I = PAT$$

In Eq. 1, \(I\) stands for environmental degradation, proxied by emissions, \(P\) measures population growth, \(A\) is the affluence of society, measured in terms of GDP, and \(T\) is a technology proxy. Dietz and Rosa (1997) introduced the STIRPAT model in response to criticism of the IPAT model's assumptions, such as its simplicity and the restriction that the elasticities of all parameters are equal (Tursun et al. 2015; Wang and Zhao 2015).
$$I_{t} = \alpha P_{t}^{\beta } A_{t}^{\gamma } T_{t}^{\delta } \mu_{t}$$

In Eq. 2, \(\alpha\) indicates the intercept, and \(P\), \(A\), and \(T\) follow the same meaning as in Eq. 1. \(\beta\), \(\gamma\), and \(\delta\) indicate the elasticities of the impact of \(P\), \(A\), and \(T\) on the environment. The subscript \(t\) denotes the year, and \(\mu_{t}\) is the stochastic error term in the model. The underpinning theoretical framework of this study was proposed by Dogan and Inglesi-Lotz (2020) and Lin et al. (2016). For evaluating the determinants of CO2 emissions, these studies extended the STIRPAT model. Lin et al. (2016) modified the STIRPAT equation by incorporating the square of GDP, the energy structure, and the urbanization of the countries. Similarly, Dogan and Inglesi-Lotz (2020) extended STIRPAT by introducing the square term of industrial value-added in the context of European countries. Hence, affluence in the STIRPAT model is conceptualized here as both the industrial value-added and the total GDP of India, in order to analyse their impacts on CO2 emissions. Moreover, in any economy, the structure of energy consumption, i.e., the share of fossil fuels in total energy consumption, is an important element influencing the level of emissions, which in turn affects the environment (You 2011). In the Indian context, earlier studies neglect the composition and pattern of GDP and their subsequent effects on the environment; instead, they focus on aggregate GDP as a measure of economic growth. On this line, the study sets up two models for the empirical analysis, following Dogan and Inglesi-Lotz (2020).
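Taking natural logarithms of Eq. 2 yields the standard log-linear STIRPAT form (a routine intermediate step, made explicit here), of which Eqs. 3 and 4 are extended versions:

```latex
\ln I_{t} = \ln \alpha + \beta \ln P_{t} + \gamma \ln A_{t} + \delta \ln T_{t} + \ln \mu_{t}
```

Each coefficient is then read directly as an elasticity: the percentage change in emissions associated with a one per cent change in the corresponding driver.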
Model 1: Aggregate model

$$\ln {\text{CO}}_{2t} = \alpha_{0} + \alpha_{1} \ln {\text{CO}}_{2t - i} + \alpha_{2} \ln {\text{GDP}}_{t} + \alpha_{3} \ln {\text{GDPSQ}}_{t} + \alpha_{4} \ln {\text{POP}}_{t} + \alpha_{5} \ln {\text{URB}}_{t} + \alpha_{6} \ln {\text{ES}}_{t} + \mu_{t}$$

Model 2: Disaggregate model

$$\ln {\text{CO}}_{2t} = \alpha_{0} + \alpha_{1} \ln {\text{CO}}_{2t - i} + \alpha_{2} \ln {\text{IND}}_{t} + \alpha_{3} \ln {\text{INDSQ}}_{t} + \alpha_{4} \ln {\text{POP}}_{t} + \alpha_{5} \ln {\text{URB}}_{t} + \alpha_{6} \ln {\text{ES}}_{t} + \mu_{t}$$

In Eqs. 3 and 4, \({\text{CO}}_{2}\) is carbon dioxide emissions, \(\ln {\text{CO}}_{2t - i}\) is the lagged form of carbon dioxide emissions, GDP stands for economic growth, GDPSQ is the square term of GDP, POP represents the population, URB is urbanization, ES stands for the energy structure, IND is industrial value-added, and INDSQ is the square term of industrial value-added. The intercept is represented by \(\alpha_{0}\), while \(\alpha_{1} , \ldots ,\alpha_{6}\) are the coefficients of the explanatory variables. The variables of the study are presented in Table 1, which offers the definition, measurement, and source of each variable for the period 1971–2014. The selection of years was dictated by the availability of data for all the variables, particularly the energy structure data, which is available only up to 2014 in the World Development Indicators. The data were converted into natural logarithms for the empirical analysis, following earlier studies (Pal et al. 2021; Sahoo et al. 2021; Villanthenkodath and Arakkal 2020; Villanthenkodath and Mushtaq 2021; Ansari and Villanthenkodath 2021; Villanthenkodath and Mahalik 2021).
Table 1 Definition of variables

Econometric methodology

Stationarity test

The first phase of the empirical analysis is to determine the order of integration of the variables in order to choose the appropriate econometric models. To attain this objective, we employ the augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests. The null hypothesis of non-stationarity is examined against the alternative hypothesis of stationarity. A first-difference stationary, or I(1), series is non-stationary in levels but becomes stationary at its first difference; an I(0) series is stationary in levels.

Cointegration analysis

The Autoregressive Distributed Lag (ARDL) bounds testing approach to cointegration proposed by Pesaran and Shin (1995) and Pesaran et al. (2001) has been employed for establishing the long-run relationship between the variables. The ARDL bounds testing approach is superior to other cointegration methods for the following reasons. Firstly, it can be applied with a small sample size. Secondly, it can be employed irrespective of the order of integration of the variables, i.e., with I(0), I(1), or a mixed order of integration. Thirdly, the problem of endogeneity can be mitigated by using the optimal lag in the model specification. Fourthly, it offers superior results over other conventional cointegration methods. Model 1, i.e., the aggregate model, is estimated using the ARDL bounds testing approach via the following unrestricted error correction model.
$$\Delta \ln {\text{CO}}_{2t} = \lambda_{0} + \mathop \sum \limits_{i = 1}^{p} \lambda_{1i} \Delta \ln {\text{CO}}_{2t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{2i} \Delta \ln {\text{GDP}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{3i} \Delta \ln {\text{GDPSQ}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{4i} \Delta \ln {\text{POP}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{5i} \Delta \ln {\text{URB}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{6i} \Delta \ln {\text{ES}}_{t - i} + \varphi_{1} \ln {\text{CO}}_{2t - 1} + \varphi_{2} \ln {\text{GDP}}_{t - 1} + \varphi_{3} \ln {\text{GDPSQ}}_{t - 1} + \varphi_{4} \ln {\text{POP}}_{t - 1} + \varphi_{5} \ln {\text{URB}}_{t - 1} + \varphi_{6} \ln {\text{ES}}_{t - 1} + \mu_{t}$$

In Eq. 5, ∆ stands for the first difference operator, \(\lambda_{0}\) represents the constant, and \(\mu_{t}\) is the stochastic error term. The bounds test for the long-run relationship in the ARDL framework is based on the Wald or F test. The null hypothesis of no cointegration, i.e., \(H_{0} : \varphi_{1} = \varphi_{2} = \varphi_{3} = \varphi_{4} = \varphi_{5} = \varphi_{6} = 0\), is tested against the alternative hypothesis of cointegration in the long run, i.e., \(H_{1} : \varphi_{1} \ne \varphi_{2} \ne \varphi_{3} \ne \varphi_{4} \ne \varphi_{5} \ne \varphi_{6} \ne 0\). The decision on the long-run relationship is based on the F-statistic: if it exceeds the upper critical value, we conclude that a long-run relationship exists, and if it falls below the lower critical value, we conclude that it does not. If the estimated value falls between the critical values, no precise conclusion about cointegration can be drawn. The long-run elasticities can also be estimated using Eq. 5. The error correction model is represented in the following equation.
$$\Delta \ln {\text{CO}}_{2t} = \lambda_{0} + \mathop \sum \limits_{i = 1}^{p} \lambda_{1i} \Delta \ln {\text{CO}}_{2t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{2i} \Delta \ln {\text{GDP}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{3i} \Delta \ln {\text{GDPSQ}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{4i} \Delta \ln {\text{POP}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{5i} \Delta \ln {\text{URB}}_{t - i} + \mathop \sum \limits_{i = 1}^{p} \lambda_{6i} \Delta \ln {\text{ES}}_{t - i} + \varphi {\text{ECT}}_{t - 1} + \mu_{t}$$

In Eq. 6, \({\text{ECT}}\) stands for the error correction term; its coefficient \(\varphi\) has to be negative and less than one in absolute value, and it shows the speed of adjustment towards the long-run equilibrium. In Model 2, the empirical analysis is carried out similarly by replacing GDP with IND and GDPSQ with INDSQ in Eqs. 5 and 6.

Empirical results and discussion

This section focuses on the empirical simulations carried out in this study. A preliminary analysis in terms of summary statistics is followed by a correlation matrix analysis and then a visual plot of all variables under consideration. Table 2 reports the descriptive statistics: industrial sector value added has the highest average, with the highest minimum and maximum, while industrial value-added, economic growth, and CO2 emissions are positively skewed. However, population, urbanization, and energy structure are negatively skewed over the study period. Table 3 presents the Pearson correlation matrix of the studied variables. The outcome shows the linear association between the variables. Moreover, there is a positive and significant relationship between CO2 emissions and industrial value-added; a similar conclusion is reached for economic growth.
This result indicates that industrial value-added, economic growth, urbanization, and energy structure drive environmental degradation in India. However, population delineates a negative association with environmental degradation. Hence, to substantiate the outcomes of the correlation analysis, further analysis is needed. Figure 1 depicts the trend and pattern of the studied variables; a positive trend is evident for all the variables except population.

Table 2 Summary statistics

Table 3 Correlation matrix

Fig. 1 Visual plot of variables

In time-series modelling, stationarity analysis is important for circumventing spurious results. The current study implements the traditional ADF and PP unit root tests to analyse the stationarity properties of the variables, as seen in Table 4. The outcomes of the unit root tests reveal a mixed order of integration among the variables under review.

Table 4 ADF and PP tests of unit root

Subsequently, the study establishes the long-run relationship between the variables with the help of Pesaran's ARDL bounds test. The result shows the clear existence of a long-run relationship among the series explored in the study. The optimum parsimonious lag has been chosen by the Akaike Information Criterion (AIC) (Table 5).

Table 5 ARDL bounds test

The long-run and short-run results obtained from Model 1 and Model 2 are reported in Tables 6 and 7. Model 1 displays the outcomes when total GDP is used to reflect economic growth, whereas Model 2 employs the growth of the industrial sector as the affluence proxy. The results show that the conventional EKC hypothesis does not hold in either model; rather, there is a U-shaped relation between the affluence proxies, i.e., GDP and IND, and emissions, since the coefficients on GDP and IND are negative while those on GDPSQ and INDSQ are positive. Our findings are in line with Alam and Adil (2019) and Dar and Asif (2017).
However, they differ from Jayanthakumaran et al. (2012) and Shahbaz and Sinha (2019).
Table 6 ARDL results Model 1
In line with expectations, other things remaining constant, the population coefficient has a positive effect on emission levels in both the short run and the long run across the models. In model 1, the long-run coefficient is not significant when aggregate GDP is used, although the short-run coefficient is positive and significant. In model 2, population has a positive and significant impact on pollution both in the short run and the long run when disaggregated GDP is employed. This may be because an increase in population contributes to a rising need for energy consumption. Similarly, population growth spurs the demand for goods and services; hence the energy required to produce these consumption goods also increases, which in turn enhances CO2 emissions. In the literature, Song et al. (2015) and Gertler et al. (2013) observed that population growth can be accompanied by improvements in general economic conditions, living standards, and household income levels; as a result, energy consumption and CO2 emissions rise. In both models, the coefficient on urbanization is negative and statistically significant in the short run and the long run. Although urbanization has historically had a positive effect on environmental degradation, especially at its early stages, the improved living conditions in terms of efficient infrastructure and energy in urban areas lead to a negative relation between urbanization and environmental degradation. The driving force behind this shift may be the replacement of inefficient energy sources with more efficient ones.
This finding is consistent with studies that found a negative relationship between urbanization and CO2 emissions (Pachauri 2004; Poumanyvong and Kaneko 2010; Burton 2000; Pachauri and Jiang 2008). In both models, the energy structure coefficient is positive and significant in the long run and the short run. Therefore, the study concludes that the share of fossil fuels in the energy mix is a driver of CO2 emissions. These findings support the view that fossil fuel use is the major contributor to the increase in emissions, in agreement with previous studies such as MK (2020) for India and Canadell et al. (2009) for Africa. The error correction term incorporated in both models indicates a high speed of convergence to the long-run equilibrium. The diagnostic tests show that both models are free from heteroscedasticity, serial correlation, and ARCH problems. The ARDL models are well specified, since the Ramsey RESET test offers the desired result. The cumulative sum of recursive residuals (CUSUM) and the cumulative sum of squares of recursive residuals (CUSUMsq), as proposed by Brown et al. (1975), have been employed for both models; the plots are shown in Figs. 2 and 3.
CUSUM and CUSUMsq for Model 1
Table 8 delineates the causality results based on the modified Wald test and corroborates the fossil fuel-induced growth hypothesis, since there is one-way causality running from energy structure (fossil fuel composition) to economic growth in India. The finding suggests that, in the case of India, a fossil fuel conservation policy has to be enforced with caution; otherwise, it may damage economic growth.
Table 8 Granger causality analysis
Conclusion and policy implications
In this study, we examine aggregate and disaggregate measures of economic growth and their effect on environmental quality in India from 1971 to 2014.
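The CUSUM stability check mentioned above can be illustrated with a toy computation (plain illustrative residuals rather than the recursive residuals of Brown et al. (1975)):

```python
# Toy CUSUM-style stability statistic.  Brown et al. (1975) use recursive
# residuals; here a fixed list of made-up regression residuals is used
# only to show the mechanics of the cumulative sum.
residuals = [0.3, -0.1, 0.2, -0.4, 0.1, 0.0, -0.2, 0.1]
n = len(residuals)
sigma = (sum(e * e for e in residuals) / n) ** 0.5  # residual std. dev.

# Cumulative sum of standardized residuals; under parameter stability the
# path should wander around zero and stay inside the critical bounds.
cusum = []
running = 0.0
for e in residuals:
    running += e / sigma
    cusum.append(running)

print([round(c, 3) for c in cusum])
```

In the published figures, the CUSUM and CUSUMsq paths staying within the 5% critical bounds is what indicates parameter stability of the estimated models.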
We run two models to analyze the EKC hypothesis: an aggregated model and a disaggregated model. For analyzing the long run and the short run, we apply the Auto-Regressive Distributed Lag (ARDL) bounds testing approach, and the direction of causality among the variables is assessed through the modified Wald test. The results reveal that the EKC hypothesis does not hold in India in either the aggregated or the disaggregated model. In the aggregated model, economic growth shows a U-shaped relation with environmental degradation in India; in the disaggregated model, employing industrial sector value added instead of economic growth produces a similar outcome. However, the effect of population on environmental quality is positive but not significant in model 1 (the aggregated model), whereas in model 2 it significantly affects environmental quality. As per the World Bank (2019), India is the second most populous country in the world after China, and it is forecast that it may surpass China's population by 2035. As the country's population increases, the demand for energy rises tremendously, particularly the consumption of fossil fuels such as coal, oil, and natural gas. This increase is due to the scarcity of renewable energy for meeting people's needs. Hence, the government should increase investment in the renewable energy sector (solar and wind energy, etc.) to improve environmental quality in India. In this regard, foreign direct investment needs to be attracted to boost the performance of renewable energy in India. Moreover, renewable energy investment can be promoted by charging a higher price for fossil fuels or removing fossil-fuel subsidies; as a result, the demand for renewable energy would likely rise and attract new investment. By contrast, urbanization in both models 1 and 2 shows a negative impact on environmental quality, i.e., on CO2 emissions.
Although urbanization historically had a positive impact on environmental degradation, especially at its early stages, "urbanization helps more residents to gain connections at competitive rates to environment-friendly infrastructure and services". Innovation, such as renewable technology, is driven by urbanization, and in the long run the future of the green economy can be shaped by environmentally friendly facilities, machinery, cars, and services. The above findings show that, as primary drivers of CO2 emissions, the chemical and heavy industries play a crucial role in India. The country ought, however, to intensify structural transformation through administrative regulation of these sectors and the promotion of low- and light-emission industries by fostering industrial diversity. Besides, people should improve their eco-friendly knowledge and waste recycling to reduce emissions. Better health conditions for urban residents can be achieved through stringent environmental policy and environmental awareness among the urban as well as the general population of the country. The results indicate that a policy of fossil fuel conservation must be pursued with precaution in the case of India; otherwise, it may hurt economic development, as also reported by the industry body FICCI's Economic Outlook Survey (http://www.ficci.in/ficci-surveys.asp).
References
Alam R, Adil MH (2019) Validating the environmental Kuznets curve in India: ARDL bounds testing framework. OPEC Energy Review 43(3):277–300
Ansari MA, Villanthenkodath MA (2021) Does tourism development promote ecological footprint? A nonlinear ARDL approach. Anatolia. https://doi.org/10.1080/13032917.2021.1985542
Apergis N, Ozturk I (2015) Testing environmental Kuznets curve hypothesis in Asian countries. Ecol Ind 52:16–22
Bilgili F, Koçak E, Bulut Ü (2016) The dynamic impact of renewable energy consumption on CO2 emissions: a revisited Environmental Kuznets Curve approach. Renew Sustain Energy Rev 54:838–845
Boutabba MA (2014) The impact of financial development, income, energy and trade on carbon emissions: evidence from the Indian economy. Econ Model 40:33–41
Brown RL, Durbin J, Evans JM (1975) Techniques for testing the constancy of regression relationships over time. J R Stat Soc Ser B (Methodological) 37(2):149–192
Burton E (2000) The compact city: just or just compact? A preliminary analysis. Urban Stud 37(11):1969–2006
Canadell JG, Raupach MR, Houghton RA (2009) Anthropogenic CO2 emissions in Africa. Biogeosciences 6(3):463–468. https://doi.org/10.5194/bg-6-463-2009
Carson RT (2010) The environmental Kuznets curve: seeking empirical regularity and theoretical structure. Rev Environ Econ Policy 4(1):3–23
Chertow MR (2000) The IPAT equation and its variants. J Ind Ecol 4(4):13–29. https://doi.org/10.1162/10881980052541927
CREA (2020) How air pollution worsens the COVID-19 pandemic. https://energyandcleanair.org/wp/wpcontent/uploads/2020/04/How_air_pollution_worsens_the_COVID-19_pandemic.pdf
Dar JA, Asif M (2017) Is financial development good for carbon mitigation in India? A regime shift-based cointegration analysis. Carbon Manage 8(5–6):435–443
Dasgupta S, Laplante B, Wang H, Wheeler D (2002) Confronting the environmental Kuznets curve. J Econ Perspect 16(1):147–168
Dietz T, Rosa EA (1997) Effects of population and affluence on CO2 emissions. Proc Natl Acad Sci 94(1):175–179. https://doi.org/10.1073/pnas.94.1.175
Dogan E, Inglesi-Lotz R (2020) The impact of economic structure to the environmental Kuznets curve (EKC) hypothesis: evidence from European countries. Environ Sci Pollut Res 27(11):12717–12724. https://doi.org/10.1007/s11356-020-07878-2
El Montasser G, Ajmi AN, Nguyen DK (2018) Carbon emissions–income relationships with structural breaks: the case of the Middle Eastern and North African countries. Environ Sci Pollut Res 25(3):2869–2878
Fan S, Zhang X, Robinson S (2003) Structural change and economic growth in China. Rev Dev Econ 7(3):360–377
Gertler P, Shelef O, Wolfram C, Fuchs A (2013) How pro-poor growth affects the demand for energy (No. w19092). National Bureau of Economic Research. https://doi.org/10.3386/w19092
Ghazali A, Ali G (2019) Investigation of key contributors of CO2 emissions in extended STIRPAT model for newly industrialized countries: a dynamic common correlated estimator (DCCE) approach. Energy Rep 5:242–252
Grossman GM, Krueger AB (1991) Environmental impacts of a North American free trade agreement (No. w3914). National Bureau of Economic Research
Harbaugh WT, Levinson A, Wilson DM (2002) Reexamining the empirical evidence for an environmental Kuznets curve. Rev Econ Stat 84(3):541–551
Hasanov FJ, Mikayilov JI, Mukhtarov S, Suleymanov E (2019) Does CO2 emissions–economic growth relationship reveal EKC in developing countries? Evidence from Kazakhstan. Environ Sci Pollut Res 26(29):30229–30241
Jayanthakumaran K, Verma R, Liu Y (2012) CO2 emissions, energy consumption, trade and income: a comparative analysis of China and India. Energy Policy 42:450–460
Kanjilal K, Ghosh S (2013) Environmental Kuznet's curve for India: evidence from tests for cointegration with unknown structural breaks. Energy Policy 56:509–515
Lin B, Omoju OE, Nwakeze NM, Okonkwo JU, Megbowon ET (2016) Is the environmental Kuznets curve hypothesis a sound basis for environmental policy in Africa? J Clean Prod 133:712–724. https://doi.org/10.1016/j.jclepro.2016.05.173
Mahalik MK, Villanthenkodath MA, Mallick H, Gupta M (2021) Assessing the effectiveness of total foreign aid and foreign energy aid inflows on environmental quality in India. Energy Policy 149:112015
Meadows DH, Meadows DL, Randers J, Behrens WW (1972) The limits to growth. N Y 102(1972):27
MK Ashin Nishan (2020) Role of energy use in the prediction of CO2 emissions and economic growth in India: evidence from artificial neural networks (ANN). Environ Sci Pollut Res 27(19):23631–23642
Narayan PK (2005) The saving and investment nexus for China: evidence from cointegration tests. Appl Econ 37(17):1979–1990
Pachauri S (2004) An analysis of cross-sectional variations in total household energy requirements in India using micro survey data. Energy Policy 32(15):1723–1735
Pachauri S, Jiang L (2008) The household energy transition in India and China. Energy Policy 36(11):4022–4035. https://doi.org/10.1016/j.enpol.2008.06.016
Pal D, Mitra SK (2017) The environmental Kuznets curve for carbon dioxide in India and China: growth and pollution at crossroad. J Policy Model 39(2):371–385
Pal S, Villanthenkodath MA, Patel G, Mahalik MK (2021) The impact of remittance inflows on economic growth, unemployment and income inequality: an international evidence. Int J Econ Policy Stud. https://doi.org/10.1007/s42495-021-00074-1
Panayotou T (1993) Empirical tests and policy analysis of environmental degradation at different stages of economic development (No. 992927783402676). International Labour Organization
Pesaran MH, Shin Y (1995) An autoregressive distributed lag modelling approach to cointegration analysis (No. 9514). Faculty of Economics, University of Cambridge
Pesaran MH, Shin Y, Smith RJ (2001) Bounds testing approaches to the analysis of level relationships. J Appl Economet 16(3):289–326
Poumanyvong P, Kaneko S (2010) Does urbanization lead to less energy use and lower CO2 emissions? A cross-country analysis. Ecol Econ 70(2):434–444
Rees W, Wackernagel M, Testemale P (1996) Our ecological footprint: reducing human impact on the Earth. New Society Publishers, Gabriola Island, BC, pp 3–12
Roy M, Basu S, Pal P (2017) Examining the driving forces in moving toward a low carbon society: an extended STIRPAT analysis for a fast growing vast economy. Clean Technol Environ Policy 19(9):2265–2276
Sahoo M, Saini S, Villanthenkodath MA (2021) Determinants of material footprint in BRICS countries: an empirical analysis. Environ Sci Pollut Res. https://doi.org/10.1007/s11356-021-13309-7
Sehrawat M, Giri AK (2015) Financial development and income inequality in India: an application of ARDL approach. Int J Soc Econ 42:64–81
Shafik N (1994) Economic development and environmental quality: an econometric analysis. Oxford Econ Papers 46:757–773
Shahbaz M, Sinha A (2019) Environmental Kuznets curve for CO2 emissions: a literature survey. J Econ Stud 46:106–168
Shahbaz M, Solarin SA, Hammoudeh S, Shahzad SJH (2017) Bounds testing approach to analyzing the environment Kuznets curve hypothesis with structural breaks: the role of biomass energy consumption in the United States. Energy Econ 68:548–565
Song M, Guo X, Wu K, Wang G (2015) Driving effect analysis of energy-consumption carbon emissions in the Yangtze River Delta region. J Clean Prod 103:620–628. https://doi.org/10.1016/j.jclepro.2014.05.095
Stern DI, Common MS (2001) Is there an environmental Kuznets curve for sulfur? J Environ Econ Manag 41(2):162–178
Tian X, Chang M, Shi F, Tanikawa H (2014) How does industrial structure change impact carbon dioxide emissions? A comparative analysis focusing on nine provincial regions in China. Environ Sci Policy 37:243–254
Tursun H, Li Z, Liu R, Li Y, Wang X (2015) Contribution weight of engineering technology on pollutant emission reduction based on IPAT and LMDI methods. Clean Technol Environ Policy 17(1):225–235. https://doi.org/10.1007/s10098-014-0780-1
Villanthenkodath MA, Arakkal MF (2020) Exploring the existence of environmental Kuznets curve in the midst of financial development, openness, and foreign direct investment in New Zealand: insights from ARDL bound test. Environ Sci Pollut Res 27(29):36511–36527
Villanthenkodath MA, Mahalik MK (2020) Technological innovation and environmental quality nexus in India: does inward remittance matter? J Public Aff. https://doi.org/10.1002/pa.2291
Villanthenkodath MA, Mushtaq U (2021) Modelling the nexus between foreign aid and economic growth: a case of Afghanistan and Egypt. Stud Appl Econ. https://doi.org/10.25115/eea.v39i2.3802
Villanthenkodath MA, Mahalik MK (2021) Does economic growth respond to electricity consumption asymmetrically in Bangladesh? The implication for environmental sustainability. Energy 233:121142. https://doi.org/10.1016/j.energy.2021.121142
Villanthenkodath MA, Ansari MA, Shahbaz M, Vo XV (2021) Do tourism development and structural change promote environmental quality? Evidence from India. Environ Dev Sustain. https://doi.org/10.1007/s10668-021-01654-z
Wang Y, Zhao T (2015) Impacts of energy-related CO2 emissions: evidence from under developed, developing and highly developed regions in China. Ecol Ind 50:186–195. https://doi.org/10.1016/j.ecolind.2014.11.010
Wang P, Wu W, Zhu B, Wei Y (2013) Examining the impact factors of energy-related CO2 emissions using the STIRPAT model in Guangdong Province, China. Appl Energy 106:65–71
Wang Y, Han R, Kubota J (2016) Is there an environmental Kuznets curve for SO2 emissions? A semi-parametric panel data analysis for China. Renew Sustain Energy Rev 54:1182–1188
Wang C, Wang F, Zhang X, Yang Y, Su Y, Ye Y, Zhang H (2017) Examining the driving factors of energy related carbon emissions using the extended STIRPAT model based on IPAT identity in Xinjiang. Renew Sustain Energy Rev 67:51–61
World Bank (2019) Ending poverty, investing in opportunity. The World Bank. https://doi.org/10.1596/978-1-4648-1470-9
You J (2011) China's energy consumption and sustainable development: comparative evidence from GDP and genuine savings. Renew Sustain Energy Rev 15(6):2984–2989. https://doi.org/10.1016/j.rser.2011.03.026
The authors involved in this research communication do not have any financial and personal relationships with other people or organizations that could inappropriately influence (bias) their work.
Department of Humanities and Social Sciences, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal, India
Muhammed Ashiq Villanthenkodath
Department of Humanities and Social Sciences, Jaypee Institute of Information Technology, A-10 Sector-62, Noida, UP, 201309, India
Mohini Gupta
Department of Economic Sciences, Indian Institute of Technology Kanpur, Kanpur, India
Seema Saini
Department of Humanities and Social Sciences, National Institute of Technology (NIT) Rourkela, Rourkela, Odisha, India
Malayaranjan Sahoo
VMA: idea proposed, data curation, investigation, writing—original draft, revision, and estimation. GM: writing—original draft, introduction. SS: writing—original draft, literature review. SM: writing—original draft, restructuring. All authors read and approved the final manuscript.
Correspondence to Muhammed Ashiq Villanthenkodath.
Ethical approval and consent to participate: The authors of the paper do not have any conflict of interest.
Villanthenkodath, M.A., Gupta, M., Saini, S. et al. Impact of Economic Structure on the Environmental Kuznets Curve (EKC) hypothesis in India. Economic Structures 10, 28 (2021). https://doi.org/10.1186/s40008-021-00259-z
Revised: 03 December 2021
Keywords: Economic structure, Energy structure
Epispiral
The epispiral is a plane curve with polar equation $r = a\sec(n\theta)$. There are n sections if n is odd and 2n if n is even. It is the polar or circle inversion of the rose curve. In astronomy the epispiral is related to the equations that explain planets' orbits.
See also
• Logarithmic spiral
• Rose (mathematics)
References
• J. Dennis Lawrence (1972). A catalog of special plane curves. Dover Publications. p. 192. ISBN 0-486-60288-5.
• https://www.mathcurve.com/courbes2d.gb/epi/epi.shtml
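As a quick numerical illustration (a minimal sketch, not from the article), points on the epispiral can be generated directly from the polar equation and converted to Cartesian coordinates; the helper function below is hypothetical:

```python
import math

def epispiral_point(a, n, theta):
    """Cartesian (x, y) point of the epispiral r = a*sec(n*theta).

    Undefined where cos(n*theta) == 0 (the asymptotes of the curve).
    """
    r = a / math.cos(n * theta)
    return (r * math.cos(theta), r * math.sin(theta))

# Sample one branch of the n = 2 epispiral with a = 1, staying clear of
# the asymptotes at n*theta = +/- pi/2 (i.e. theta = +/- pi/4).
points = [epispiral_point(1.0, 2, t / 100.0) for t in range(-70, 71)]
print(points[70])  # theta = 0 gives r = a, i.e. the point (1.0, 0.0)
```

As theta approaches pi/(2n), r grows without bound, which produces the curve's characteristic asymptotic branches.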
\begin{document}
\title{Borel Isomorphism of SPR Markov Shifts}
\author{Mike Boyle} \address{Department of Mathematics - University of Maryland} \email{[email protected]}
\author{J\'er\^ome Buzzi} \address{Laboratoire de Math\'ematiques d'Orsay (CNRS, UMR 8628) - Universit\'e Paris-Sud} \email{[email protected]}
\author{Ricardo G\'omez} \address{Instituto de Matem\'aticas - Universidad Nacional Aut\'onoma de M\'exico} \email{[email protected]}
\begin{abstract} We show that strongly positively recurrent Markov shifts (including shifts of finite type) are classified up to Borel conjugacy by their entropy, period and their numbers of periodic points. \end{abstract}
\maketitle
\section{Introduction} Theorem \ref{Hochiso} below is one of the results in the \lq\lq full sets\rq\rq\ paper of Hochman \cite{Hochman}. In the statement, \lq Markov shift\rq\ means countable state Markov shift. The free part of a Borel system is the subsystem obtained by restriction to the nonperiodic points, and a full subset is an invariant subset of measure one for every invariant Borel probability measure. Two Borel systems are {\it almost-Borel isomorphic} if they are Borel isomorphic after restriction to full subsets of their free parts. Detailed definitions for the Introduction are given in the next section. \begin{theorem} \cite{Hochman} \label{Hochiso} Two mixing Markov shifts are almost-Borel isomorphic if and only if (1) they have equal entropy and (2) one has a measure of maximum entropy if and only if the other does. \end{theorem} An important observation \cite{Hochman} in this setting is that two Borel systems that embed each into the other are Borel isomorphic, by a Borel variant of the Cantor-Bernstein Theorem (a.k.a.\ the measurable Schr\"oder-Bernstein Theorem). Consequently Theorem \ref{Hochiso} was an immediate corollary of the following embedding theorem.
\begin{theorem}\cite{Hochman}\label{Hochmanembed} Suppose $(Y,T)$ is a mixing Markov shift and $(X,S)$ is a Borel system such that $h(S,\mu )< h(T)$ for every ergodic invariant Borel probability $\mu$ on $X$. Then there is an almost-Borel embedding of $(X,S)$ into $(Y,T)$. \end{theorem} This theorem easily leads to a decisive almost-Borel classification of Markov shifts, and has implications for other systems \cite{Hochman, BB2014}. The study of Borel dynamics, adopting weakly wandering sets as the relevant notion of negligible sets, was initiated by Shelah and Weiss \cite{ShelahWeiss,Weiss1, Weiss2}. Here that notion of isomorphism preserves additionally the infinite and quasi-invariant measures (and again it is natural to restrict to free parts). Whether there is a theorem for Borel dynamics like Theorem \ref{Hochmanembed} is a difficult open problem, discussed in \cite{Hochman}. Our purpose in this paper is to show that a generalization of Theorem \ref{Hochiso} to this richer category holds in at least one meaningful case. \begin{theorem}\label{spriso} The free parts of mixing SPR Markov shifts are Borel isomorphic if and only if they have equal entropy. \end{theorem} We note that Hochman \cite{Hochman} has asked if those free parts are in fact topologically conjugate, at least in the case of subshifts of finite type. As in the almost-Borel case, Theorem \ref{spriso} is an immediate corollary of an embedding result, stated next. \begin{theorem}\label{MSEmbed2} Suppose $(Y,T)$ is a mixing SPR Markov shift and $(X,S)$ is a Markov shift such that $h(X)=h(Y)$ and $X$ has a unique irreducible component of full entropy and this component is a mixing SPR Markov shift. Then there is a Borel embedding of $(X,S)$ into $(Y,T)$. \end{theorem} The proof is independent of Hochman's result and techniques. Roughly speaking, Hochman builds almost-Borel embeddings from the bottom up with a uniform version of the Krieger Generator Theorem \cite{KriegerGenerator}.
In our much more special situation, we can build Borel embeddings with the following offshoot of the Krieger Embedding Theorem. \begin{theorem}\label{MSEmbed1} Suppose $(Y,T)$ is a mixing Markov shift and $(X,S)$ is a Markov shift such that $h(X)<h(Y)$. Then there is a Borel embedding of the free part of $(X,S)$ into $(Y,T)$. \end{theorem} Theorem \ref{MSEmbed1}, though not completely trivial, is completely unsurprising. (The question of when a Markov shift embeds {\it continuously} into a mixing Markov shift is much harder \cite{Fiebigs1997,Fiebigs2005}.) The novel feature in the proof of Theorem \ref{MSEmbed2} is the use of a ``top-down'' embedding given by the almost isomorphism theorem of \cite{BBG2006} to reduce the problem to embeddings of lower entropy systems. At the end of the paper we state the Borel classification of the free parts of irreducible SPR Markov shifts, which follows from the mixing case. \subsection*{Acknowledgments} We thank Mike Hochman for the stimulating discussions out of which this paper emerged. M. Boyle gratefully acknowledges the financial support of ANR project DynNonHyp BLAN08-2\_313375 and the hospitality of the Mathematics Department of Universit\'e Paris-Sud in Orsay. \section{Definitions and background}\label{sec:def} A {\bf Borel system} $(X,\mathcal X,T)$ is a standard Borel space\footnote{$\mathcal X$ is a $\sigma$-algebra of subsets of $X$ such that there is distance on $X$ which turns it into a complete separable space whose collection of Borel subsets is $\mathcal X$.} $(X,\mathcal X)$ together with a Borel automorphism\footnote{A bijection such that $T^{-1}\mathcal X:=\{T^{-1}E:E\in\mathcal X\}=T\mathcal X=\mathcal X$.} $T:X\to X$. We often abbreviate $(X,\mathcal X,T)$ to $(X,T)$ or $X$ or $T$ if it does not create confusion. A {\bf Borel factor map} is a homomorphism of Borel systems: a (not necessarily onto) Borel measurable map intertwining the actions. 
An isomorphism or conjugacy of Borel systems is a bijective Borel factor map; an embedding of Borel systems is an injective Borel factor map. By an easy exercise in descriptive set theory (see \cite[p.399]{Weiss1}), there is a Borel conjugacy of two systems if and only if there is a Borel conjugacy between their free parts and for each $n$ the cardinalities of their sets of periodic orbits of size $n$ is the same. Given a Borel system $(X,T)$, we use $\Prob(X)\supset \pe(X) \supset \pen(X)$ respectively to denote the sets of all measures\footnote{Unless specified otherwise, the word measure will denote an invariant Borel probability.}, all ergodic measures, and all ergodic nonatomic measures. Recall from \cite{Weiss1} that a set $W$ is {\bf wandering} if it is Borel and if $\bigcup_{k\in\ZZ} T^kW$ is a disjoint union (which we denote $\bigsqcup_{k\in\ZZ} T^kW$). A set is {\bf weakly wandering} if it is a Borel subset of a countable union of wandering sets. Such a set has measure zero for all quasi-invariant measures \cite{ShelahWeiss,Weiss1}, not only for measures in $ \Prob(X)$. To avoid any mystery, we record a simple remark. \begin{remark} \label{lem:rug} Suppose $(X,S)$ and $(Y,T)$ are Borel systems and each contains an uncountable Borel set which is wandering. Then the systems are Borel isomorphic if and only if they are Borel isomorphic modulo wandering sets. \end{remark} The basis of the remark is the following. Any weakly wandering set is contained in the orbit of a wandering set. Under the assumption, such wandering sets in $X$ and $Y$ can be enlarged to uncountable Borel subsets of the ambient Polish space. Any two such sets are Borel isomorphic. A {\bf Markov shift} $(X,S)$ is a topological system $\Sigma (G)$ defined by the action of the left shift $\sigma:(x_n)_{n\in\ZZ} \mapsto (x_{n+1})_{n\in\ZZ}$ on the set $\Sigma(G)$ of paths on some oriented graph $G$ with countably (possibly finitely) many vertices and edges. 
We will use the edge shift (rather than the vertex shift) presentation. The domain $X$ is the set of $x=(x_n)_{n\in \ZZ} \in \mathcal E^\ZZ$ (where $\mathcal E$ is the set of oriented edges) such that for all $n$, the terminal vertex of $x_n$ equals the initial vertex of $x_{n+1}$. The (Polish) topology on $X$ is the relative topology of the product of the discrete topologies. When $G$ is finite, $\Sigma (G)$ is a shift of finite type (SFT). $\Sigma (G)$ is {\bf irreducible} if $G$ contains a unique strongly connected component, i.e., a maximal set of the vertices such that for any pair, there is a loop containing both. An arbitrary Markov shift is the disjoint union of a wandering set and countably many disjoint irreducible Markov shifts. An irreducible Markov shift is mixing if and only if the g.c.d. of the periods of its periodic points is 1. The Borel entropy of a system $(X,S)$ is the supremum of the Kolmogorov-Sinai entropies $h(S, \mu )$, $\mu \in \Prob(X)$. Markov shifts of positive entropy contain uncountable wandering sets; so, by the Remark \ref{lem:rug}, for simplicity we can neglect weakly wandering sets in both statements and proofs. An irreducible Markov shift $(X,S)$ (more generally, an irreducible component) has at most one measure of maximum (necessarily finite) entropy \cite{Gurevic1970}; if this measure $\mu$ exists, then $(S, \mu )$ is measure-preservingly isomorphic to the product of a finite entropy Bernoulli shift and a finite cyclic rotation (see \cite{BB2014} for comment and references). An irreducible Markov shift $\Sigma$ is {\bf strongly positively recurrent} (or {\bf stably positive recurrent} or just {\bf SPR}) if it admits a measure $\mu$ of maximal entropy which is {\it exponentially recurrent}: for every non-empty open subset $U\subset\Sigma$, $$ \limsup_{n\to\infty} \frac1n\log\mu \Big( \Sigma\setminus\bigcup_{k=0}^{n-1}\sigma^{-k}U \Big) < 0\ . $$ We refer to \cite{BBG2006, Gurevich1996, GurevichSavchenko} for more on SPR shifts. 
In the language of \cite{Gurevich1996, GurevichSavchenko}, the SPR Markov shifts are the positively recurrent symbolic Markov chains defined by stably recurrent matrices (further developed in \cite{GurevichSavchenko} as the fundamental class of \lq\lq stably positive\rq\rq\ matrices). The SPR Markov shifts are a natural subclass preserving some of the significant properties of finite state shifts \cite[Sec.2]{BBG2006}. \section{Embedding a Markov shift with smaller entropy} In this section we will prove Theorem \ref{MSEmbed1}. First we recall and adapt some standard finite-state symbolic dynamics (for more detail on this, see \cite{Boyle1983} or \cite{LindMarcus1995}). \begin{lemma}\label{lem:disjSFT} Suppose $\epsilon > 0$ and $X$ is a mixing Markov shift with entropy $h(X)>0$. Then $X$ contains infinitely many mixing SFTs $S_n$, pairwise disjoint, such that $h(S_n)>h(X)-\epsilon$ for all $n$. \end{lemma} \begin{proof} $X$ contains an SFT $S$ with entropy greater than $h(X)-\epsilon$ \cite{Gurevic1970}; $S$ is easily enlarged to a mixing SFT $S'$ in $X$. The complement of a given proper subshift of $S'$ contains a mixing SFT with entropy arbitrarily close to $h(S')$ \cite[Lemma 26.17]{DGS}. Thus one can construct the required family inductively. \end{proof} \begin{definition} For a system $(X,S)$, $|P^o_n(X)|$ denotes the cardinality of the set of points in $S$-orbits of length $n$. \end{definition} \begin{theorem}[Krieger Embedding Theorem \cite{KriegerEmbedding}] \label{KriegerEmbed} Let $X$ be a subshift on a finite alphabet and $Y$ a mixing SFT such that $h(X)<h(Y)$ and $|P^o_n(X)| \leq |P^o_n(Y)|$ for all $n$. Then there is a continuous embedding of $X$ into $Y$. \end{theorem} \begin{proposition}\cite[Lemma 2.1 and p.546]{Boyle1983} \label{Boyle1983} Suppose $X$ is a mixing SFT and $M$ is a positive integer. Let $\mathcal O_1, \dots , \mathcal O_r$ be distinct finite orbits in $X$. 
Let $W_i$ be the set of points whose positive iterates are positively asymptotic to $\mathcal O_i$, and let $W=\cup_i W_i$. Then there exist a mixing SFT $Z$ and a continuous surjection $p:Z\to X$ such that: \begin{enumerate} \item $|p^{-1}(x)|=1$ for all $x$ outside $W$ \item The preimage of $\mathcal O_i$ is an orbit $\widetilde{\mathcal O_i}$ of length $M|\mathcal O_i|$. \item $p^{-1}(W_i)$ is the set of points positively asymptotic to $\widetilde{\mathcal O_i}$. \end{enumerate} \end{proposition} \begin{corollary}\label{coro:BoyleEmbed} Let $X$ and $Y$ be SFTs such that $h(X)<h(Y)$ and $Y$ is mixing. Then there is a continuous embedding of $X\setminus X_0$ into $Y$ where $X_0$ is the union of a weakly wandering set and a finite set of periodic points. \end{corollary} \begin{proof} We have that $\lim_n (\, |P^o_n(Y)|-|P^o_n(X)|\, ) = \infty$. Thus we may choose $M$ to build $Z$ as in Proposition \ref{Boyle1983} such that $Z$, by Theorem \ref{KriegerEmbed}, embeds into $Y$. The map $Z\to X$ is a Borel isomorphism on the complement of a set $X_0$ of points positively asymptotic to finitely many periodic points. \end{proof} To reduce Theorem \ref{MSEmbed1} to this corollary, we use reductions stated as three lemmas. A {\bf loop system} is a Markov shift defined by a {\bf loop graph}: a graph made of simple loops which are based at a common vertex and otherwise do not intersect. Given a power series $f = \sum_{n=1}^{\infty} f_nz^n$ with coefficients in $\ZZ_+$, we let $\Sigma_f$ denote the loop system with exactly $f_n$ simple loops of length $n$ in the loop graph. If $h(\Sigma_f) = \log \lambda < \infty$, then \begin{enumerate} \item $0< f(1/\lambda ) \leq 1$, \item $\alpha < \lambda \implies f(1/\alpha ) = \infty$ and \item $f(1/\lambda )=1 $ if $\Sigma_f$ has a measure of maximum entropy (i.e. is positive recurrent). \end{enumerate} For more on loop systems and Markov shifts, see \cite{BBG2006,GurevichSavchenko,Kitchens1998} and their references. 
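For a concrete illustration of the facts above, take $f(z) = z + z^{2}$, so that the loop graph has one simple loop of length $1$ and one of length $2$. Then $h(\Sigma_f) = \log \lambda$ is determined by
$$ f(1/\lambda ) \ = \ \frac{1}{\lambda} + \frac{1}{\lambda^{2}} \ = \ 1, \qquad \text{i.e.} \qquad \lambda^{2} - \lambda - 1 = 0 \ , $$
so $\lambda = (1+\sqrt{5})/2$ and $h(\Sigma_f) = \log \frac{1+\sqrt{5}}{2}$. Here $f(1/\lambda ) = 1$, consistent with the fact that this loop system (a presentation of the golden mean shift) is positive recurrent.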
\begin{lemma}\label{lem:RedLoop} Any Markov shift $X$ is Borel isomorphic to a Borel system $$ W \ \sqcup\ \bigsqcup_{n\in\NN} \Sigma(L_n) $$ where $W$ is weakly wandering and for each $n$, $L_n$ is a loop graph. \end{lemma} \begin{lemma}\label{lem:RedSFT} Let $\Sigma$ be a loop system and $h>h(\Sigma)$. Then there is an SFT $S$ with $h(S)<h$ such that $\Sigma$ has a continuous embedding into $S$. \end{lemma} Before proving the lemmas, we deduce the lower-entropy embedding theorem from them. \begin{proof}[Proof of Theorem \ref{MSEmbed1}] According to Remark \ref{lem:rug} and Lemma \ref{lem:RedLoop}, we may assume that $X$ is a disjoint union of loop systems $\Sigma (L_n)$. Let $h=(h(Y) +h(X))/2>h(X)$. By Lemma \ref{lem:RedSFT}, each loop system $\Sigma (L_n)$ can be (continuously) embedded into some SFT $W_n$ with entropy less than $h$. By Lemma \ref{lem:disjSFT} (with $\epsilon = (h(Y)-h)/2$), there are pairwise disjoint mixing SFTs $Y_n$ in $Y'$ with $h(Y_n)> h$. Finally, Corollary \ref{coro:BoyleEmbed} shows that each $W_n$ (apart from finitely many periodic points) can be Borel embedded into $Y_n\subset Y$. Altogether, apart from a countable set of periodic points, $X$ has been Borel embedded into $Y$. \end{proof} We now prove the lemmas. \begin{proof}[Proof of Lemma \ref{lem:RedLoop}] Let $G$ be some graph presenting $X$. For convenience, we identify its vertices with $1,2,\dots$. Observe that each $W^{\eps}_n:=\{x\in X:x_0=n$ and $\forall i>0\; x_{\eps i}\ne n\}$ ($n\in\NN^*,\eps\in\{-1,+1\}$) is wandering. Consider the loop graphs $L_n$ defined by the first return loops of $G$ at vertex $n$ which avoid the vertices $k<n$. For each $x\in X$, let $N:=\inf\{n\geq1:\exists a_k,b_k\to\infty\; x_{-a_k}=x_{b_k}=n\}$ and consider the following three cases. \begin{enumerate} \item $N=\infty$. Then there exists $\eps\in\{-1,+1\}$ such that $x\in\sigma^{-j}W_{x_0}^\eps$, where $j:=\eps\sup\{\eps i\in\ZZ:x_i=x_0\}\in\ZZ$. 
\item $N<\infty$ and $\{x_m : m \in \ZZ \}\cap [1,N) \neq \varnothing$. Then there exist $k\in [1,N)$ and $\eps\in\{-1,+1\}$ such that $j:=\eps\sup\{\eps i\in\ZZ:x_i=k\}\in\ZZ$, so $x\in\sigma^{-j}W_{k}^\eps$. \item Otherwise, $x\in\Sigma(L_N)$. \end{enumerate} To conclude, observe that $\bigcup_{k\in\NN^*,j\in\ZZ,\eps\in\{-1,+1\}} \sigma^{-j}W_k^{\eps}$ is a weakly wandering set. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:RedSFT}] Let $\Sigma = \Sigma_f$, a loop system described by a power series $f = \sum_{n=1}^{\infty} f_nz^n$. If $f$ is a polynomial, then $\Sigma_f$ is itself an SFT. From now on, we assume $f$ to have infinitely many non-zero terms. We are going to build the SFT as a finite loop system $\Sigma_p$, with a polynomial $p$ obtained by truncating the power series $f$ and then adding some monomials to ensure enough space for the embedding while keeping the entropy below $h$. Let $\beta\in(h(\Sigma),h)$. Given a positive integer $N$, let $f^{(N)}$ denote the truncation of $f$ to the polynomial $f_1z + f_2z^2 + \cdots + f_Nz^N$. As $f(e^{-h(\Sigma)})\leq1$ and $h(\Sigma) < \beta$, we have $f_n <e^{n\beta}$ for all $n\geq1$. Let $g^{<N>}$ denote the polynomial $g_{N+1}z^{N+1}+ g_{N+2}z^{N+2}+ \cdots + g_{2N}z^{2N}$, where $g_n=\lceil e^{n\beta}\rceil$ (the integer ceiling). Then \begin{align*} |g^{<N>}(z)| &\leq \Big[(e^{(N+1)\beta} +1) +\cdots + (e^{2N\beta} +1)|z|^{N-1}\Big]|z|^{N+1} \\ &= e^{(N+1)\beta}|z|^{N+1}\Bigg[\frac{1-(e^\beta |z|)^N}{1-e^\beta |z|}\Bigg] + |z|^{N+1} \Bigg[\frac{1-| z|^N}{1- |z|}\Bigg] \ . \end{align*} As $\beta>0$, we see that $\lim_{N\to\infty} g^{<N>}(z)=0$ uniformly on $\{z:|z|\leq r\}$ for each fixed $r< e^{-\beta}$. Recall that $f(r)<1$ for $r<e^{-h(\Sigma)}$. Also if $r>0$ and $|z|=r$ and $f^{(N)}(r)<f(r)< 1$, then $|1-f^{(N)}(z)| \geq 1-f^{(N)}(r)> 1-f(r)>0$. 
Fix some $\gamma\in(\beta,h)$ and then $N$ sufficiently large that the following hold for $|z|=e^{-\gamma}$: \begin{enumerate} \item $|2g^{<N>}(z)| <1 -f(e^{-\gamma}) < 1-f^{(N)}(e^{-\gamma}) \leq |1-f^{(N)}(z)|$; \item both $1-f^{(N)}(z)$ and $1-f^{(N)}(z)-2g^{<N>}(z)$ are non-zero. \end{enumerate} It follows from Rouch\'e's Theorem that $1-f^{(N)}$ and $1-f^{(N)}-2g^{<N>}$ have the same number of zeros inside the circle $|z|=e^{-\gamma}$, i.e.\ no zeros. Thus, setting $p:=f^{(N)}+2g^{<N>}$, we get $h(\Sigma_p) < \gamma < h$. Now, set $k:=g^{<N>}$ and $u:=f^{(N)}+ g^{<N>}$, so that $p=u+k$, and let $q := u(1+ k+ k^2 + \cdots )$. Then $\Sigma_q$ is the loop system defined from $\Sigma_{u+k}$ by replacing the loops from $k$ by all the loops made by concatenating a copy of a loop from $u$ with an arbitrary positive number of copies of loops from $k$ (see \cite[Lemma 5.1]{BBG2006} for detail). It follows that $\Sigma_q$ can be identified with the subset of $\Sigma_p$ obtained by removing a copy of $\Sigma_k$ together with the points asymptotic to it. Hence, there is a continuous embedding of $\Sigma_q$ into $\Sigma_p$. Note that for $n\leq N$ we have $f_n=p_n=q_n$. Also, for $n> N$, $f_n <e^{n\beta}\leq (1+k+k^2+\cdots )_n \leq q_n$. This yields an embedding $\Sigma_f \to \Sigma_q$ and concludes the proof. \end{proof} \section{The SPR case} We now give the proof of Theorem \ref{MSEmbed2}. Let $X'$ be the mixing SPR component of $X$ with $h(X)= h(Y)$. Equal-entropy mixing SPR Markov shifts are {\it almost isomorphic}, as defined and proved in \cite{BBG2006}. Consequently there are a word $w$, a subsystem $X_0=\Sigma^w$ of $X'$ (consisting of the points which see $w$ infinitely often in the past and in the future), a continuous embedding $\psi_0$ from $X_0$ onto a subsystem $Y_0$ of $Y$, and an $\epsilon >0$ such that the complements $X'\setminus X_0$ and $Y\setminus Y_0$ have Borel entropy less than $h(Y)-\epsilon$. 
The Borel subsystem $ X\setminus X_0$ is (after passing to a higher block presentation) the union of a Markov shift $X_1$ (the subsystem of $X$ avoiding the word $w$) and a weakly wandering set $W$ (defined by the occurrence of $w$, with a failure of infinite recurrence in the past or future). By Remark \ref{lem:rug}, we can forget about $W$. We cannot expect $X_1$ to have entropy less than $h(Y\setminus Y_0)$, and therefore we cannot apply Theorem \ref{MSEmbed1} to embed $X_1$ into a subsystem of $Y\setminus Y_0$. Instead, we will push $X_1$ into the image of $X_0$, and adjust the definition on $X_0$ to keep injectivity. For $L$ large enough, \[\Sigma^{w,L}:=\{x\in\Sigma:\forall n\in\ZZ\ \exists k\in\{0,\dots,L\}\; x_{n+k}\dots x_{n+k+|w|-1}=w\}\] is a mixing Markov subshift with $h(\Sigma^{w,L}) >h(X_1)$. We apply Lemma \ref{lem:disjSFT} to get pairwise disjoint mixing SFTs $Y_1,Y_2, \dots $ in $\Sigma^{w,L}$ satisfying $h(Y_i)>h(X_1)$ for all $i\in\NN$. Let $C$ denote the complement in $X_1$ of the periodic points. Theorem \ref{MSEmbed1} gives Borel embeddings $\gamma_i: C\to Y_{i}$. Let $Z_i:=\gamma_i(C)\subset Y_i$ and let $\phi_i$ be the conjugacy $\gamma_{i+1}\circ \gamma_i^{-1}:Z_{i}\to Z_{i+1}$. We define $\psi: X_0\cup C \to\Sigma'$ by \begin{align*} \psi : x \ &\mapsto\ \gamma_1(x) \in Z_1 &&\text{if } x\in C, \\ &\mapsto \ \phi_i (\psi_0(x) ) \in Z_{i+1} &&\text{if } \psi_0( x)\in Z_i, \\ &\mapsto \ \psi_0 (x) &&\text{otherwise.} \end{align*} This $\psi$ is a Borel embedding. This finishes the proof of Theorem \ref{MSEmbed2}. \qed Lastly we record the obvious corollary of Theorem \ref{spriso}. \begin{theorem} The free parts of two irreducible SPR Markov shifts are Borel isomorphic if and only if they have the same entropy and period. \end{theorem} \end{document}
\begin{document} \begin{frontmatter} \title{Simultaneous Feature and Expert Selection \\ within Mixture of Experts} \author[rvt]{Billy Peralta\corref{cor1}} \ead{[email protected]} \cortext[cor1]{Corresponding author, Telephone: (56 45) 255 3948} \address[rvt]{Department of Informatics, Universidad Cat\'olica de Temuco, Chile.} \begin{abstract} A useful strategy to deal with complex classification scenarios is the ``divide and conquer'' approach. The mixture of experts (MoE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weights their relevance in different parts of the input space. Local feature selection appears as an attractive alternative to improve the specialization of experts and gate function, particularly for the case of high dimensional data. Our main intuition is that particular subsets of dimensions, or subspaces, are usually more appropriate to classify instances located in different regions of the input space. Accordingly, this work contributes a regularized variant of MoE that incorporates an embedded process for local feature selection using $L1$ regularization, with a simultaneous expert selection. The experiments are still pending. \end{abstract} \begin{keyword} Mixture of experts, local feature selection, embedded feature selection, regularization. \end{keyword} \end{frontmatter} \section{Mixture of Experts with embedded variable selection} \label{Sec:OurApproach} Our main idea is to incorporate a local feature selection scheme inside each expert and gate function of a MoE formulation. Our main intuition is that, in the context of classification, different partitions of the input data can be best represented by specific subsets of features. 
This is particularly relevant in the case of high dimensional spaces, where the common presence of noisy or irrelevant features might obscure the detection of particular class patterns. Specifically, our approach takes advantage of the linear nature of each local expert and gate function in the classical MoE formulation \cite{1351018}, meaning that $L1$ regularization can be directly applied. Below, we first briefly describe the classical MoE formulation for classification. Afterwards, we discuss the proposed modification to the MoE model that provides embedded feature selection. \subsection{Mixture of Experts} In the context of supervised classification, a set of $N$ training examples, or instance-label pairs $(x_n,y_n)$, is available, representative of the domain data $(x,y)$, where $x_n \in \Re^D$ and $y_n \in C$. Here $C$ is a discrete set of $Q$ class labels $\left\{c_1,...,c_Q\right\}$. The goal is to use the training data to find a function $f$ that minimizes a loss function scoring how well $f$ predicts the true underlying relation between $x$ and $y$. From a probabilistic point of view \cite{Bishop:2007}, a useful approach to find $f$ is using a conditional formulation: \begin{eqnarray} f(x) & = & \arg\max_{c_i \in C } \: p(y=c_i | x) \label{eq:1}. \nonumber \end{eqnarray} In the general case of complex relations between $x$ and $y$, a useful strategy consists of approximating $f$ through a mixture of local functions. This is similar to the case of modeling a mixture distribution \cite{citeulike:2235458} and it leads to the MoE model. We decompose the conditional likelihood $p(y|x)$ as: \begin{eqnarray}\label{Eq:MoE} p(y|x) & = & \sum^{K}_{i=1} p(y,m_i | x) \;=\; \sum^{K}_{i=1} p(y | m_i,x)\:p(m_i | x), \end{eqnarray} \noindent where Equation (\ref{Eq:MoE}) represents a MoE model with $K$ experts $m_i$. Figure (\ref{fig:Fig_14}) shows a schematic diagram of the MoE approach. 
The main idea is to obtain local models in such a way that they are specialized in a particular region of the data. In Figure (\ref{fig:Fig_14}), $x$ corresponds to the input instance, $p(y | m_i,x)$ is the \textbf{expert function}, $p(m_i | x)$ is the \textbf{gating function}, and $p(y | x)$ is the weighted sum of the experts. Note that the output of each expert model is weighted by the gating function. This weight can be interpreted as the \textit{relevance} of expert $m_i$ for the classification of input instance $x$. Also note that the gate function has $K$ outputs, one for each expert. There are $K$ expert functions that have $Q$ components, one for each class. \begin{figure} \caption{Mixture of experts scheme.} \label{fig:Fig_14} \end{figure} The traditional MoE technique uses multinomial logit models, also known as soft-max functions \cite{Bishop:2007}, to represent the gate and expert functions. An important characteristic of this model is that it forces competition among its components. In MoE, such components are expert functions for the gates and class-conditional functions for the experts. The competition in soft-max functions enforces the specialization of experts in different areas of the input space \cite{Yuille:1998:WM:303568.304791}. Using multinomial logit models, a gate function is defined as: \begin{eqnarray} \label{Eq:MoE-Params-1} p(m_i|x) & = & \frac{exp{(\nu^{T}_{i}x)}}{\sum^{K}_{j=1} exp{(\nu^{T}_{j}x})} \end{eqnarray} \noindent where $i \in \{1, \dots, K\}$ refers to the set of experts and $\nu_i \in \Re^D$ is a vector of model parameters. Component $\nu_{ij}$ of vector $\nu_i$ models the relation between the gate and dimension $j$ of input instance $x$. Similarly, an expert function is defined as: \begin{eqnarray}\label{Eq:MoE-Params-2} p(y=c_l|x,m_i) & = & \frac{exp{(\omega^{T}_{li}x)}}{\sum^{Q}_{j=1} exp{(\omega^{T}_{ji}x)}} \end{eqnarray} \noindent where $\omega_{li}$ depends on class label $c_l$ and expert $i$. 
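As a concrete illustration, the multinomial logit gate and expert functions above, and the resulting mixture $p(y|x)$, can be sketched as follows (the parameter values are random placeholders, not learned ones):

```python
import numpy as np

def softmax(logits):
    # subtract the max for numerical stability
    e = np.exp(logits - logits.max())
    return e / e.sum()

K, Q, D = 3, 2, 4                     # experts, classes, dimensions
rng = np.random.default_rng(0)
nu = rng.normal(size=(K, D))          # gate parameters nu_i
omega = rng.normal(size=(Q, K, D))    # expert parameters omega_li
x = rng.normal(size=D)                # one input instance

gate = softmax(nu @ x)                # p(m_i | x): K weights, sum to 1
# experts[l, i] = p(y = c_l | x, m_i): a soft-max over classes per expert
experts = np.stack([softmax(omega[:, i, :] @ x) for i in range(K)], axis=1)
p_y = experts @ gate                  # mixture p(y | x), a Q-vector
```

Both `gate` and `p_y` are valid probability vectors, since each expert's output sums to one and the gate weights sum to one.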
In this way, there are a total of $Q \times K$ vectors $\omega_{li}$. Component $\omega_{lij}$ of vector $\omega_{li}$ models the relation between expert function $i$ and dimension $j$ of input instance $x$. There are several methods to find the value of the hidden parameters $\nu_{ij}$ and $\omega_{lij}$ \cite{Moerland97somemethods}. An attractive alternative is to use the EM algorithm. In the case of MoE, the EM formulation augments the model by introducing a set of latent variables, or \textit{responsibilities}, indicating the expert that generates each instance. Accordingly, the EM iterations consider an expectation step that estimates expected values for \textit{responsibilities}, and a maximization step that updates the values of parameters $\nu_{ij}$ and $\omega_{lij}$. Specifically, the posterior probability of the \textit{responsibility} $R_{in}$ assigned by the gate function to expert $m_i$ for an instance $x_n$ is given by \cite{Moerland97somemethods}: \begin{eqnarray} \label{Eq:resp} R_{in} & = & p(m_i|x_n,y_n) \\ & = & \frac{p(y_n|x_n,m_i)\:p(m_i|x_n)}{\sum^{K}_{j=1} p(y_n | x_n,m_j)\:p(m_j | x_n)} \nonumber \end{eqnarray} Considering these responsibilities and Equation (\ref{Eq:MoE}), the expected complete log-likelihood $\left\langle \textsl{L}_c \right\rangle$ used in the EM iterations is \cite{Moerland97somemethods}: \begin{eqnarray}\label{Eq:ExpLogL} \left\langle \textsl{L}_c \right\rangle & = & \sum^{N}_{n=1}{ \sum^{K}_{i=1}{ R_{in}\: \left[log \; p(y_n|x_n,m_i)\: + log \; p(m_i|x_n) \right]}} \end{eqnarray} \subsection{Regularized Mixture of Experts (RMoE)} To embed a feature selection process in the MoE approach, we use the fact that in Equations (\ref{Eq:MoE-Params-1}) and (\ref{Eq:MoE-Params-2}) the multinomial logit models for gate and experts functions contain linear relations for the relevant parameters. 
This linearity can be straightforwardly used in feature selection by considering that a parameter component $\nu_{ij}$ or $\omega_{lij}$ with zero value implies that dimension ${j}$ is irrelevant for gate function $p(m_i|x)$ or expert model $p(y | m_i,x)$, respectively. Consequently, we propose to penalize complex models using $L_1$ regularization. A similar consideration is used in \cite{1248698}, but in the context of unsupervised learning. The idea is to maximize the likelihood of the data while simultaneously minimizing the number of parameter components $\nu_{ij}$ and $\omega_{lij}$ different from zero. Considering that there are $Q$ classes, $K$ experts, and $D$ dimensions, the expected $L1$ regularized log-likelihood $\left\langle \textsl{L}^R_c \right\rangle$ is given by: \begin{eqnarray}\label{Eq:ExpLogL-Reg} \left\langle \textsl{L}^R_c \right\rangle & = & \left\langle \textsl{L}_c \right\rangle - \lambda_\nu \sum^{K}_{i=1}{ \sum^{D}_{j=1}{ \left| \nu_{ij} \right| }} - \lambda_\omega \sum^{Q}_{l=1}{ \sum^{K}_{i=1}{ \sum^{D}_{j=1}{ \left| \omega_{lij} \right| }}} \; . \end{eqnarray} To maximize Equation (\ref{Eq:ExpLogL-Reg}) with respect to model parameters, we first use the standard fact that the likelihood function in Equation (\ref{Eq:ExpLogL}) can be decomposed in terms of independent expressions for gate and expert models \cite{Moerland97somemethods}. In this way, the maximization step of the EM-based solution can be performed independently with respect to gate and expert parameters \cite{Moerland97somemethods}. In our problem, each of these optimizations has an extra term given by the respective regularization term in Equation (\ref{Eq:ExpLogL-Reg}). To handle this case, we observe that each of these optimizations is equivalent to solving a regularized logistic regression \cite{Lee+etal06:L1logreg}. 
As shown in \cite{Lee+etal06:L1logreg}, this problem can be solved by using a coordinate ascent optimization strategy \cite{Tseng:01} given by a sequential two-step approach that first models the problem as an unregularized logistic regression and afterwards incorporates the regularization constraints. In summary, we handle Equation (\ref{Eq:ExpLogL-Reg}) by using an EM-based strategy that at each step solves the maximization with respect to model parameters by decomposing the problem in terms of gate and expert parameters. Each of these problems is in turn solved using the strategy proposed in \cite{Lee+etal06:L1logreg}. Next, we provide details of this procedure. \textbf{Optimization of the unregularized log-likelihood} In this case, we solve the unconstrained log-likelihood given by Equation (\ref{Eq:ExpLogL}). First, we optimize the log-likelihood with respect to vector $\omega_{li}$. The maximization of the expected log-likelihood $\left\langle \textsl{L}_c \right\rangle$ requires differentiating Equation (\ref{Eq:ExpLogL}) with respect to $\omega_{li}$: \begin{eqnarray}\label{Eq:ExpLogL-Der-1} \pd{ \sum^{N}_{n=1}{ \sum^{K}_{i=1}{ R_{in}\: \left[log \; p(y_n|x_n,m_i)\: \right]}}}{\omega_{li}} & = & 0, \end{eqnarray} \noindent and computing the derivative, we have: \begin{eqnarray}\label{Eq:ExpLogL-Der-2} - \sum^{N}_{n=1}{R_{in} \left( p(y_n|x_n,m_i) - y_n \right)x_n } & = & 0. \end{eqnarray} In this case, the classical technique of least squares cannot be directly applied because of the soft-max function in $p(y_n|x_n,m_i)$. Fortunately, as described in \cite{journals/neco/JordanJ94} and later in \cite{Moerland97somemethods}, Equation (\ref{Eq:ExpLogL-Der-2}) can be approximated by using a transformation that implies inverting the soft-max function. 
Using this transformation, Equation (\ref{Eq:ExpLogL-Der-2}) is equivalent to an optimization problem that can be solved using a weighted least squares technique \cite{Bishop:2007}: \begin{eqnarray} \min_{\omega_{li}} & \sum^{N}_{n=1}{R_{in} \left( \omega^{T}_{li}x_n - log \: y_{n} \right )^2 } \label{eq:14} \end{eqnarray} A similar derivation can be performed with respect to vectors $\nu_{i}$. Again differentiating Equation (\ref{Eq:ExpLogL}), in this case with respect to parameters $\nu_{ij}$, and applying the transformation suggested in \cite{journals/neco/JordanJ94}, we obtain: \begin{eqnarray} \min_{\nu_{i}} & \sum^{N}_{n=1}{\left(\nu^{T}_{i} x_n - log R_{in} \right)^2 } \label{eq:14-b} \end{eqnarray} \textbf{Optimization of the regularized likelihood} Following the procedure of \cite{Lee+etal06:L1logreg}, we add the regularization term to the optimization problem given by Equation (\ref{eq:14}), obtaining an expression that can be solved using quadratic programming \cite{tibshirani96regression}: \begin{eqnarray} \min_{\omega_{li}} & \sum^{N}_{n=1}{R_{in} \left( log \: y_{n} - \omega^{T}_{li}x_n \right)^2 } \nonumber \\ \mbox{subject to:} & ||\omega_{li}||_1 \leq \lambda_\omega \label{eq:15} \end{eqnarray} Similarly, we can also obtain a standard quadratic optimization problem to find parameters $\nu_{ij}$ : \begin{eqnarray} \min_{\nu_{i}} & \sum^{N}_{n=1}{\left( log R_{in} - \nu^{T}_{i} x_n \right)^2 } \nonumber\\ \mbox{subject to:} & ||\nu_{i}||_1 \leq \lambda_\nu \label{eq:16} \end{eqnarray} A practical advantage of using quadratic programming is that most available optimization packages can be utilized to solve it \cite{Boyd&Vandenberghe:2004}. Specifically, in the case of $T$ iterations, there are a total of $T*K*(Q+1)$ convex quadratic problems related to the maximization step of the EM algorithm. 
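In our formulation a QP package solves the constrained problems directly; purely as an illustration, the equivalent penalized (Lagrangian) form of a weighted problem such as Equation (\ref{eq:15}) can also be solved with cyclic coordinate descent and soft-thresholding. The helper below is a hypothetical sketch under that assumption, not the implementation used here:

```python
import numpy as np

def weighted_lasso(X, y, r, lam, sweeps=500):
    """Minimize sum_n r_n (y_n - w.x_n)^2 + lam * ||w||_1
    by cyclic coordinate descent with soft-thresholding."""
    N, D = X.shape
    w = np.zeros(D)
    for _ in range(sweeps):
        for j in range(D):
            # residual with coordinate j excluded
            e = y - X @ w + X[:, j] * w[j]
            rho = np.sum(r * X[:, j] * e)
            z = np.sum(r * X[:, j] ** 2)
            w[j] = np.sign(rho) * max(abs(rho) - lam / 2.0, 0.0) / z
    return w
```

With `lam = 0` this reduces to weighted least squares; increasing `lam` drives more coefficients exactly to zero, which is the embedded feature selection effect described above.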
To further reduce this computational load, we slightly modify this maximization by applying the following two-step scheme: \begin{itemize} \item Step-1: Solve $K$ quadratic problems to find gate parameters $\nu_{ij}$ assuming that each expert uses all the available dimensions. In this case, there are $T-1$ iterations. \item Step-2: Solve $K*(Q+1)$ quadratic problems to find expert parameters $\omega_{lij}$ applying the feature selection process. In this case, there is a single iteration. \end{itemize} Using the previous scheme we reduce from $T*K*(Q+1)$ to $K*(T+1)+K*(Q+1)$ the number of quadratic problems that we need to solve in the maximization step of the EM algorithm. In our experiments, we do not notice a drop in performance by using this simplification, but we are able to increase processing speed by one order of magnitude. In summary, starting by assigning random values to the relevant parameters $\nu_{ij}$ and $\omega_{lij}$, our EM implementation consists of iterating the following two steps: \begin{itemize} \item Expectation: estimating responsibilities for each expert using Equation (\ref{Eq:resp}), and then estimating the outputs of gate and experts using Equations (\ref{Eq:MoE-Params-1}) and (\ref{Eq:MoE-Params-2}). \item Maximization: updating the values of parameters $\nu_{ij}$ and $\omega_{lij}$ in Equations (\ref{eq:15}) and (\ref{eq:16}) by solving $K*(T+1)+K*(Q+1)$ quadratic problems according to the approximation described above in Step-1 and Step-2. \end{itemize} \section{Expert Selection} \label{Sec:ExpertSelection} The MoE or RMoE model assumes that every gate function affects every data point. But, for example, in object detection we can assume that there are groups of objects (e.g., vehicles, animals, kitchen items), where each group is assigned to a gate function. We believe that considering all groups of objects can confuse the classifiers. Therefore we propose to select, for each data point, a subset of gate functions. 
We call this idea ``expert selection''. Recall that the likelihood in the regular mixture of experts is: \begin{eqnarray}\label{Eq:LikES} \textsl{L} & = & \prod^{N}_{n=1}{\sum^{K}_{i=1}{ p(y_n|x_n,m_i)p(m_i|x_n)}} \end{eqnarray} Now, in order to select a gate, we change the multinomial logit representation of the gate function (Equation \ref{Eq:MoE-Params-1}) in this way: \begin{eqnarray} \label{Eq:CMoE-Params-1} p(m_i|x_n) & = & \frac{ exp{\mu_{in}(\nu^{T}_{i}x)}}{\sum^{K}_{j=1} exp{\mu_{jn}(\nu^{T}_{j}x)}} \end{eqnarray} \noindent where all the components of Equation \ref{Eq:MoE-Params-1} remain the same, except $\mu$. The vector $(\mu_{1n},\dots,\mu_{Kn}) \in \left\{0,1\right\}^K$ collects the parameters of the ``expert selector'' for data point $x_n$; component $\mu_{in}$ depends on data $x_n$ and expert $i$, where $i \in \{1, \dots, K\}$. When $\mu_{in}=1$ (respectively $\mu_{in}=0$), the gate $i$ is relevant (respectively irrelevant) for data $n$. In the case of $\mu_{in}=0$, the corresponding logit is constant, so data point $n$ is agnostic about expert $i$. In this way the expert selection is performed. In order to use the EM algorithm, we state the expected log-likelihood considering the \textit{responsibilities} (the posterior probabilities of the experts) and the respective regularization terms, with the addition of the term corresponding to the expert selector: \begin{eqnarray}\label{Eq:ExpLogLES} \left\langle \textsl{L}_c \right\rangle & = & \sum^{N}_{n=1}{ \sum^{K}_{i=1}{ R_{in}\: \left[log \; p(y_n|x_n,m_i)\: + log \; p(m_i|x_n)\right]}} \nonumber \\ && - \lambda_\nu \sum^{K}_{i=1}{ \sum^{D}_{j=1}{ \left| \nu_{ij} \right| }} - \lambda_\omega \sum^{Q}_{l=1}{ \sum^{K}_{i=1}{ \sum^{D}_{j=1}{ \left| \omega_{lij} \right| }}} - P(\mu) \end{eqnarray} The penalization $P$ depends on the regularization norm, mainly the $0$-norm or the $1$-norm. 
Now, we define the posterior probability of expert $m_i$ as: \begin{eqnarray} \label{Eq:respselexp} R_{in} & = & \frac{p(y_n|x_n,m_i)p(m_i|x_n)}{\sum^{K}_{j=1} p(y_n |x_n,m_j)\:p(m_j | x_n)} \end{eqnarray} Next, we repeat the strategy of Lee et al.\ by first optimizing the unregularized expected log-likelihood and then adding the restriction. In order to facilitate the calculations, we define some auxiliary variables. As the derivative is linear in the sum, we calculate the contribution of a single data point and call it $E'$: \begin{eqnarray}\label{Eq:E_ind} E'&=&-log\sum^{K}_{k=1}{ p(y_n|x_n,m_k)\: p(m_k|x_n)} \end{eqnarray} We solve this problem using an EM algorithm, where in the E-step we calculate the responsibilities using Equation (\ref{Eq:respselexp}). In the M-step, we take the responsibilities as known and find the optimal parameters $\nu$, $\omega$ and $\mu$. Given the responsibility values, the term $p(y_n|x_n,m_k)$ can be evaluated separately, and the parameter $\omega$ can then be optimized using the same equation as in RMoE. In the case of $p(m_k|x_n)$, by fixing the parameter $\mu$, we can optimize the parameter $\nu$. 
To facilitate the calculations we introduce some notation: write $p(y_n|x_n,m_k)$ as $g_k$, $p(m_k|x_n)$ as $h_{k}$, and $exp(\mu_{kn}\nu^{T}_{k}x_n)$ as $z_{k}$. Differentiating with respect to $\nu_{i}$, we have: \begin{eqnarray}\label{Eq:Total} \pd{E'}{\nu_{i}} &=& {\pd{E'}{z_{i}}} {\pd{z_{i}}{\nu_{i}}} \nonumber \\ \pd{E'}{\nu_{i}} &=& {\left[ \sum^{K}_{k=1} {{\pd{E'}{h_{k}}}{\pd{h_{k}}{z_{i}}}} \right]} {\pd{z_{i}}{\nu_{i}}} \end{eqnarray} Now we have three terms and we evaluate the derivative of each one: \begin{eqnarray}\label{Eq:Comp1} \pd{E'}{h_{k}}&=&\pd{-log \sum^{K}_{j=1}{g_j h_j}}{h_k} \nonumber \\ \pd{E'}{h_{k}}&=&\frac{-g_k}{\sum^{K}_{j=1}{g_j h_j}} \nonumber \\ \pd{E'}{h_{k}}&=&-\frac{R_{kn}}{h_k} \end{eqnarray} \begin{eqnarray}\label{Eq:Comp2} \pd{h_{k}}{z_{i}}&=&\pd{\frac{exp(z_{k})}{\sum^{K}_{j=1}{exp(z_{j})}}}{z_{i}} \nonumber \\ \pd{h_{k}}{z_{i}}&=&\delta_{ki}h_{i}-h_{i}h_{k} \end{eqnarray} \begin{eqnarray}\label{Eq:Comp3} \pd{z_{i}}{\nu_{i}}&=&\pd{\mu_{in}\nu^{T}_{i}x_n}{\nu_{i}} \nonumber \\ \pd{z_{i}}{\nu_{i}}&=&\mu_{in}x_n \end{eqnarray} Combining these elements, we obtain: \begin{eqnarray}\label{Eq:newnu} \pd{E'}{\nu_{i}} &=& -\left( \sum^{K}_{k=1} {\frac{R_{kn}}{h_k} (\delta_{ki}h_{i}-h_{i}h_{k})}\right)\mu_{in}x_n\nonumber \\ \pd{E'}{\nu_{i}} &=& \left( h_{i} - R_{in} \right)\mu_{in}x_n \end{eqnarray} By considering all the data together with the regularization term, and applying the trick of \cite{Bishop:2007} of taking logarithms of the outputs and setting the result to zero, we arrive at: \begin{eqnarray} \min_{\nu_{i}} & \sum^{N}_{n=1}{\left( log(R_{in}) - \nu^{T}_{i} \mu_{in} x_n \right)^2 } \nonumber\\ \mbox{subject to:} & ||\nu_{i}||_1 \leq \lambda_\nu \label{eq:newnu} \end{eqnarray} This is a modified version of Equation \ref{eq:16} and we can apply a QP package to solve it. Finally, we fix the parameters $\nu$ and $\omega$ in order to optimize the parameter $\mu$. 
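The derivative above can be sanity-checked numerically. Since $E'$ is minus the log of the mixture term, we check the gradient of $\log\sum_k g_k h_k$, which is $(R_{in}-h_i)\,\mu_{in}x_n$ (the negative of $\partial E'/\partial\nu_i$), against finite differences. All values below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
K, D = 3, 4
x = rng.normal(size=D)
nu = rng.normal(size=(K, D))
mu = np.array([1.0, 1.0, 0.0])        # third expert deselected
g = rng.uniform(0.1, 1.0, size=K)     # p(y_n | x_n, m_k), held fixed

def loglik(nu):
    z = mu * (nu @ x)                 # selector-modified gate logits
    h = np.exp(z - z.max())
    h /= h.sum()                      # h_k = p(m_k | x_n)
    return np.log(np.sum(g * h))

z = mu * (nu @ x)
h = np.exp(z - z.max()); h /= h.sum()
R = g * h / np.sum(g * h)             # responsibilities R_kn

i, eps = 0, 1e-6
analytic = (R[i] - h[i]) * mu[i] * x
numeric = np.zeros(D)
for j in range(D):
    d = np.zeros((K, D)); d[i, j] = eps
    numeric[j] = (loglik(nu + d) - loglik(nu - d)) / (2 * eps)
```

The two gradients agree to finite-difference precision.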
The regularization over the parameters of the expert selector originally uses the $0$-norm; on the other hand, it can be relaxed by considering the $1$-norm. We state both approaches: \newline \textbf{A. Optimization of $\mu$ considering norm $0$} As the parameter $\mu$ depends on data $x_n$, we need to solve the optimization problem: \begin{eqnarray} \min_{\mu_{*n}}& {-log\sum^{K}_{k=1}{ p(y_n|x_n,m_k)\: p(m_k|x_n)} } \nonumber \\ \mbox{subject to:} & ||\mu_{*n}||_0 \leq \lambda_\mu \label{eq:reg0} \end{eqnarray} where $\mu_{*n}$ denotes the vector of selector variables for data $n$. The minimization of Equation \ref{eq:reg0} requires exploring $\binom{K}{\lambda_\mu}$ combinations; however, assuming a low number of experts ($K<50$) and a lower number of active experts ($\lambda_\mu<10$), this numerical optimization is feasible in practice. \newline \textbf{B. Optimization of $\mu$ considering norm $1$} A more scalable approach is to relax the $0$-norm constraint by replacing it with the $1$-norm, also known as LASSO regularization. Given that $\mu$ appears in the same term as $\nu$, its solution follows many of the same steps. In particular, we find almost the same equations. Using the same notation as in Equation \ref{Eq:Total}, we have for the individual log-likelihood: \begin{eqnarray}\label{Eq:Totalmu} \pd{E'}{\mu_{in}} &=& {\pd{E'}{z_{i}}} {\pd{z_{i}}{\mu_{in}}} \nonumber \\ \pd{E'}{\mu_{in}} &=& {\left[ \sum^{K}_{k=1} {{\pd{E'}{h_{k}}}{\pd{h_{k}}{z_{i}}}} \right]} {\pd{z_{i}}{\mu_{in}}} \end{eqnarray} We get the same Equations \ref{Eq:Comp1} and \ref{Eq:Comp2}. 
In the case of the last component we have: \begin{eqnarray}\label{Eq:Comp3mu} \pd{z_{i}}{\mu_{in}}&=&\pd{\mu_{in}\nu^{T}_{i}x_n}{\mu_{in}} \nonumber \\ \pd{z_{i}}{\mu_{in}}&=&\nu^{T}_{i}x_n \end{eqnarray} Assembling all the components, we have: \begin{eqnarray}\label{Eq:004} \pd{E'}{\mu_{in}} &=& -\left( \sum^{K}_{k=1} {\frac{R_{kn}}{h_k} (\delta_{ki}h_{i}-h_{i}h_{k})}\right)\nu^{T}_{i}x_n\nonumber \\ \pd{E'}{\mu_{in}} &=& \left( h_{i} - R_{in} \right) \nu^{T}_{i}x_n \nonumber \end{eqnarray} In order to find the optimal parameters $\mu_{in}$, we fix $n$ and consider $i=1$ to $K$. Setting each derivative to zero, we have: \begin{eqnarray}\label{Eq:mu} \left( R_{in} - h_{i} \right) \nu^{T}_{i}x_n &= &0 \end{eqnarray} Next, we approximate the previous equation by taking logarithms of the outputs \cite{Bishop:2007}: \begin{eqnarray}\label{Eq:mulog} \left( log(R_{in}) - \mu_{in}\nu^{T}_{i}x_n \right) \nu^{T}_{i}x_n & = &0 \end{eqnarray} Now, we fix $n$ in order to find jointly the parameters $\mu_{in}$ for a fixed data point $n$. Adding the $K$ equations, we obtain the system: \begin{eqnarray}\label{Eq:musys} \sum^{K}_{i=1} {\left( log(R_{in}) - \mu_{in}\nu^{T}_{i}x_n \right) \nu^{T}_{i}x_n} &=&0 \end{eqnarray} This system can be represented as a minimization problem over the sum of squared residuals between $log(R_{in})$ and $\mu_{in}\nu^{T}_{i}x_n$, where we add a norm-$1$ restriction over $\mu_{*n}$, the vector of selected experts for data $n$. In this case, we have: \begin{eqnarray} \min_{\mu_{*n}}& {\left\| log(R_{n}) - \mu_{*n} \nu x_n \right\|_2^2} \nonumber \\ \mbox{subject to:} & ||\mu_{*n}||_1 \leq \lambda_\mu \label{eq:003} \end{eqnarray} This problem can be solved with a quadratic programming optimization package where the variable is $\mu_{*n}$. During the training phase, we need to solve this optimization $N$ times; in the test phase, it must be solved once for each test instance. 
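For the $0$-norm variant, the brute-force search over subsets described in Equation (\ref{eq:reg0}) can be sketched directly; deselected experts receive a constant (zero) logit, as in the modified gate function. All names and values here are illustrative:

```python
import itertools
import math

def select_experts(g, gate_logits, k_active):
    """Brute-force 0-norm expert selection for one instance.
    g[i]            -- expert likelihood p(y_n | x_n, m_i)
    gate_logits[i]  -- nu_i . x_n
    Deselected experts get logit 0 (exp(0) = 1), as when mu_in = 0."""
    K = len(g)
    best, best_ll = None, -math.inf
    for S in itertools.combinations(range(K), k_active):
        zs = [gate_logits[i] if i in S else 0.0 for i in range(K)]
        m = max(zs)
        w = [math.exp(v - m) for v in zs]       # unnormalized gates
        tot = sum(w)
        mix = sum(g[i] * w[i] / tot for i in range(K))
        if math.log(mix) > best_ll:
            best, best_ll = S, math.log(mix)
    return best, best_ll
```

With a strongly gated, accurate first expert, the search keeps that expert active and discards the rest.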
By using norm $0$ or $1$, we can find the parameters of the expert selector. The whole process is summarized as an EM algorithm: in the M-step, we first freeze $\nu$ and $\omega$ and find $\mu$; then we freeze $\mu$ and iterate to find the locally optimal $\nu$ and $\omega$. In the E-step, we compute the responsibilities $R_{in}$ using the new parameters $\nu$, $\omega$ and $\mu$. At the beginning, we initialize all parameters randomly. In the following section, we will detail the results of our experiments. \begin{thebibliography}{41} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \providecommand{\bibinfo}[2]{#2} \ifx\xfnm\relax \def\xfnm[#1]{\unskip,\space#1}\fi \bibitem[{Aguilar(2008)}]{aguilardb} \bibinfo{author}{J.~Aguilar}, \bibinfo{title}{Dataset repository in arff}, \bibinfo{howpublished}{http://www.upo.es/eps/aguilar/datasets.html}, \bibinfo{year}{2008}. \bibitem[{Asuncion and Newman(2007)}]{UCI:2007} \bibinfo{author}{A.~Asuncion}, \bibinfo{author}{D.~Newman}, \bibinfo{title}{{UCI} machine learning repository}, \bibinfo{howpublished}{http://www.ics.uci.edu/$\sim$mlearn/{MLR}epository.html}, \bibinfo{year}{2007}. \bibitem[{Battiti(1994)}]{Battiti94usingmutual} \bibinfo{author}{R.~Battiti}, \bibinfo{title}{Using mutual information for selecting features in supervised neural net learning}, \bibinfo{journal}{IEEE Transactions on Neural Networks} \bibinfo{volume}{5} (\bibinfo{year}{1994}) \bibinfo{pages}{537--550}. \bibitem[{Bishop(2007)}]{Bishop:2007} \bibinfo{author}{C.~Bishop}, \bibinfo{title}{Pattern Recognition and Machine Learning (Information Science and Statistics)}, \bibinfo{publisher}{Springer}, \bibinfo{address}{New York, USA}, \bibinfo{edition}{2nd} edition, \bibinfo{year}{2007}. 
\bibitem[{Bishop and Svens\'en(2003)}]{conf/uai/BishopS03} \bibinfo{author}{C.~Bishop}, \bibinfo{author}{M.~Svens\'en}, \bibinfo{title}{Bayesian hierarchical mixtures of experts}, in: \bibinfo{booktitle}{Conference on Uncertainty in Artificial Intelligence}, pp. \bibinfo{pages}{57--64}. \bibitem[{Boyd and Vandenberghe(2004)}]{Boyd&Vandenberghe:2004} \bibinfo{author}{S.~Boyd}, \bibinfo{author}{L.~Vandenberghe}, \bibinfo{title}{Convex Optimization}, \bibinfo{publisher}{Cambridge University Press}, \bibinfo{address}{Cambridge, United Kingdom}, \bibinfo{year}{2004}. \bibitem[{Breiman(2001)}]{Breiman:2001:RF:570181.570182} \bibinfo{author}{L.~Breiman}, \bibinfo{title}{Random forests}, \bibinfo{journal}{Machine Learning} \bibinfo{volume}{45} (\bibinfo{year}{2001}) \bibinfo{pages}{5--32}. \bibitem[{Dempster et~al.(1977)Dempster, Laird and Rubin}]{Dempster:1977} \bibinfo{author}{A.~Dempster}, \bibinfo{author}{N.~Laird}, \bibinfo{author}{D.~Rubin}, \bibinfo{title}{Maximum likelihood from incomplete data via the em algorithm}, \bibinfo{journal}{Journal of the Royal Statistical Society. Series B (Methodological)} \bibinfo{volume}{39} (\bibinfo{year}{1977}) \bibinfo{pages}{1--38}. \bibitem[{Duda et~al.(2001)Duda, Hart and Stork}]{Duda:Hart:2001} \bibinfo{author}{R.~Duda}, \bibinfo{author}{P.~Hart}, \bibinfo{author}{D.~Stork}, \bibinfo{title}{Pattern Classification}, \bibinfo{publisher}{Wiley-Interscience}, \bibinfo{address}{USA}, \bibinfo{edition}{second} edition, \bibinfo{year}{2001}. \bibitem[{Ebrahimpour and Jafarlou(2010)}]{Ebrahimpourvif10} \bibinfo{author}{R.~Ebrahimpour}, \bibinfo{author}{F.M. Jafarlou}, \bibinfo{title}{View-independent face recognition with hierarchical mixture of experts using global eigenspaces}, \bibinfo{journal}{Journal of Communication and Computer} \bibinfo{volume}{7} (\bibinfo{year}{2010}) \bibinfo{pages}{1103--1107}. 
\bibitem[{Freund and Schapire(1995)}]{Freund:1995:DGO:646943.712093} \bibinfo{author}{Y.~Freund}, \bibinfo{author}{R.~Schapire}, \bibinfo{title}{A decision-theoretic generalization of on-line learning and an application to boosting}, in: \bibinfo{booktitle}{Proceedings of the European Conference on Computational Learning Theory}, \bibinfo{publisher}{Springer-Verlag}, \bibinfo{address}{London, UK}, \bibinfo{year}{1995}, pp. \bibinfo{pages}{23--37}. \bibitem[{Guyon and Elisseeff(2003)}]{Guyon:Elisseeff:2003} \bibinfo{author}{I.~Guyon}, \bibinfo{author}{A.~Elisseeff}, \bibinfo{title}{An introduction to variable and feature selection}, \bibinfo{journal}{Journal of Machine Learning Research} \bibinfo{volume}{3} (\bibinfo{year}{2003}) \bibinfo{pages}{1157--1182}. \bibitem[{Guyon et~al.(2002)Guyon, Weston, Barnhill and Vapnik}]{Guyon:2002:GSC:599613.599671} \bibinfo{author}{I.~Guyon}, \bibinfo{author}{J.~Weston}, \bibinfo{author}{S.~Barnhill}, \bibinfo{author}{V.~Vapnik}, \bibinfo{title}{Gene selection for cancer classification using support vector machines}, \bibinfo{journal}{Journal of Machine Learning} \bibinfo{volume}{46} (\bibinfo{year}{2002}) \bibinfo{pages}{389--422}. \bibitem[{Hall(1999)}]{citeulike:530837} \bibinfo{author}{M.~Hall}, \bibinfo{title}{{Correlation-based Feature Selection for Machine Learning}}, Ph.D. thesis, University of Waikato, \bibinfo{year}{1999}. \bibitem[{Hampshire and Waibel(1992)}]{journals/pami/HampshireW92} \bibinfo{author}{J.~Hampshire}, \bibinfo{author}{A.~Waibel}, \bibinfo{title}{The meta-pi network: Building distributed knowledge representations for robust multisource pattern recognition.}, \bibinfo{journal}{IEEE Transactions Pattern Analysis and Machine Intelligence} \bibinfo{volume}{14} (\bibinfo{year}{1992}) \bibinfo{pages}{751--769}. \bibitem[{Ho(1998)}]{709601} \bibinfo{author}{T.K. 
Ho}, \bibinfo{title}{The random subspace method for constructing decision forests}, \bibinfo{journal}{Pattern Analysis and Machine Intelligence, IEEE Transactions on} \bibinfo{volume}{20} (\bibinfo{year}{1998}) \bibinfo{pages}{832--844}. \bibitem[{Jacobs et~al.(1991)Jacobs, Jordan, Nowlan and Hinton}]{1351018} \bibinfo{author}{R.~Jacobs}, \bibinfo{author}{M.~Jordan}, \bibinfo{author}{S.~Nowlan}, \bibinfo{author}{G.~Hinton}, \bibinfo{title}{Adaptive mixtures of local experts}, \bibinfo{journal}{Neural Computation} \bibinfo{volume}{3} (\bibinfo{year}{1991}) \bibinfo{pages}{79--87}. \bibitem[{Jordan and Jacobs(1994)}]{journals/neco/JordanJ94} \bibinfo{author}{M.~Jordan}, \bibinfo{author}{R.~Jacobs}, \bibinfo{title}{Hierarchical mixtures of experts and the {EM} algorithm}, \bibinfo{journal}{Neural Computation} \bibinfo{volume}{6} (\bibinfo{year}{1994}) \bibinfo{pages}{181--214}. \bibitem[{Kohavi and John(1997)}]{Kohavi:John:1997} \bibinfo{author}{R.~Kohavi}, \bibinfo{author}{G.~John}, \bibinfo{title}{Wrappers for feature subset selection}, \bibinfo{journal}{Artificial Intelligence} \bibinfo{volume}{97} (\bibinfo{year}{1997}) \bibinfo{pages}{273--324}. \bibitem[{Lee et~al.(2006)Lee, Lee, Abbeel and Ng}]{Lee+etal06:L1logreg} \bibinfo{author}{S.I. Lee}, \bibinfo{author}{H.~Lee}, \bibinfo{author}{P.~Abbeel}, \bibinfo{author}{A.Y. Ng}, \bibinfo{title}{Efficient {L1} regularized logistic regression}, in: \bibinfo{booktitle}{Proceedings of the 21st National Conference on Artificial Intelligence (AAAI)}. \bibitem[{Liu(2012)}]{Liu2012} \bibinfo{author}{H.~Liu}, \bibinfo{title}{Arizona state university: Feature selection datasets}, \bibinfo{howpublished}{http://featureselection.asu.edu/datasets.php}, \bibinfo{year}{2012}. 
\bibitem[{Liu and Setiono(1995)}]{Liu-Seti95} \bibinfo{author}{H.~Liu}, \bibinfo{author}{R.~Setiono}, \bibinfo{title}{Chi2: Feature selection and discretization of numeric attributes}, in: \bibinfo{editor}{J.~Vassilopoulos} (Ed.), \bibinfo{booktitle}{Proceedings of the International Conference on Tools with Artificial Intelligence}, \bibinfo{publisher}{IEEE Computer Society}, \bibinfo{address}{Herndon, Virginia}, \bibinfo{year}{1995}, pp. \bibinfo{pages}{388--391}. \bibitem[{MacKay(1995)}]{MacKay95:network} \bibinfo{author}{D.~MacKay}, \bibinfo{title}{Probable networks and plausible predictions -- a review of practical {B}ayesian methods for supervised neural networks}, \bibinfo{journal}{Network: Computation in Neural Systems} \bibinfo{volume}{6} (\bibinfo{year}{1995}) \bibinfo{pages}{469--505}. \bibitem[{MATLAB(2008)}]{Matlab:2008} \bibinfo{author}{MATLAB}, \bibinfo{title}{version 7.6.0.324 (R2008a)}, \bibinfo{publisher}{The MathWorks Inc.}, \bibinfo{address}{Massachusetts, USA}, \bibinfo{year}{2008}. \bibitem[{Meeds and Osindero(2005)}]{conf/nips/MeedsO05} \bibinfo{author}{E.~Meeds}, \bibinfo{author}{S.~Osindero}, \bibinfo{title}{An alternative infinite mixture of gaussian process experts}, in: \bibinfo{booktitle}{Advances In Neural Information Processing Systems}, pp. \bibinfo{pages}{883--890}. \bibitem[{Moerland(1997)}]{Moerland97somemethods} \bibinfo{author}{P.~Moerland}, \bibinfo{title}{Some Methods for Training Mixtures of Experts}, \bibinfo{type}{Technical Report}, IDIAP Research Institute, \bibinfo{year}{1997}. \bibitem[{Murthy et~al.(1994)Murthy, Kasif and Salzberg}]{Murthy:1994:SIO:1622826.1622827} \bibinfo{author}{S.K. Murthy}, \bibinfo{author}{S.~Kasif}, \bibinfo{author}{S.~Salzberg}, \bibinfo{title}{A system for induction of oblique decision trees}, \bibinfo{journal}{Journal of Artificial Intelligence Research} \bibinfo{volume}{2} (\bibinfo{year}{1994}) \bibinfo{pages}{1--32}. 
\bibitem[{Nguyen et~al.(2006)Nguyen, Abbass and McKay}]{journals/ijon/NguyenAM06} \bibinfo{author}{M.~Nguyen}, \bibinfo{author}{H.~Abbass}, \bibinfo{author}{R.~McKay}, \bibinfo{title}{A novel mixture of experts model based on cooperative coevolution}, \bibinfo{journal}{Neurocomputing} \bibinfo{volume}{70} (\bibinfo{year}{2006}) \bibinfo{pages}{155--163}. \bibitem[{Pan and Shen(2007)}]{1248698} \bibinfo{author}{W.~Pan}, \bibinfo{author}{X.~Shen}, \bibinfo{title}{Penalized model-based clustering with application to variable selection}, \bibinfo{journal}{Journal of Machine Learning Research} \bibinfo{volume}{8} (\bibinfo{year}{2007}) \bibinfo{pages}{1145--1164}. \bibitem[{Pinto et~al.(2008)Pinto, Cox and DiCarlo}]{10.1371/journal.pcbi.0040027} \bibinfo{author}{N.~Pinto}, \bibinfo{author}{D.D. Cox}, \bibinfo{author}{J.~DiCarlo}, \bibinfo{title}{Why is real-world visual object recognition hard?}, \bibinfo{journal}{PLoS Computational Biology} \bibinfo{volume}{4} (\bibinfo{year}{2008}) \bibinfo{pages}{151--156}. \bibitem[{Quinlan(1993)}]{quinlan45} \bibinfo{author}{J.~Quinlan}, \bibinfo{title}{C4.5: programs for machine learning}, \bibinfo{publisher}{Morgan Kaufmann Publishers Inc.}, \bibinfo{address}{California, USA}, \bibinfo{year}{1993}. \bibitem[{Rasmussen and Ghahramani(2001)}]{conf/nips/RasmussenG01} \bibinfo{author}{C.~Rasmussen}, \bibinfo{author}{Z.~Ghahramani}, \bibinfo{title}{Infinite mixtures of gaussian process experts.}, in: \bibinfo{booktitle}{Advances in Neural Information Processing Systems}, pp. \bibinfo{pages}{881--888}. \bibitem[{Saragih et~al.(2009)Saragih, Lucey and Cohn}]{Saragihdefor09} \bibinfo{author}{J.~Saragih}, \bibinfo{author}{S.~Lucey}, \bibinfo{author}{J.~Cohn}, \bibinfo{title}{Deformable model fitting with a mixture of local experts}, \bibinfo{journal}{International Conference on Computer Vision} (\bibinfo{year}{2009}) \bibinfo{pages}{2248--2255}. 
\bibitem[{Scott and Sain(2004)}]{citeulike:2235458} \bibinfo{author}{D.~Scott}, \bibinfo{author}{S.~Sain}, \bibinfo{title}{Multi-dimensional density estimation}, \bibinfo{publisher}{Elsevier}, \bibinfo{address}{Amsterdam}, \bibinfo{year}{2004}, pp. \bibinfo{pages}{229--263}. \bibitem[{Tibshirani(1996)}]{tibshirani96regression} \bibinfo{author}{R.~Tibshirani}, \bibinfo{title}{Regression shrinkage and selection via the {L}asso}, \bibinfo{journal}{Journal of the Royal Statistical Society (Series B)} \bibinfo{volume}{58} (\bibinfo{year}{1996}) \bibinfo{pages}{267--288}. \bibitem[{Titsias and Likas(2002)}]{journals/neco/TitsiasL02} \bibinfo{author}{M.~Titsias}, \bibinfo{author}{A.~Likas}, \bibinfo{title}{Mixture of experts classification using a hierarchical mixture model.}, \bibinfo{journal}{Neural Computation} \bibinfo{volume}{14} (\bibinfo{year}{2002}) \bibinfo{pages}{2221--2244}. \bibitem[{Tseng(2001)}]{Tseng:01} \bibinfo{author}{P.~Tseng}, \bibinfo{title}{Convergence of block coordinate descent method for nondifferentiable maximization}, \bibinfo{journal}{Journal of Optimization Theory and Applications} \bibinfo{volume}{109} (\bibinfo{year}{2001}) \bibinfo{pages}{475--494}. \bibitem[{Van-Rijsbergen(1979)}]{Rijsbergen79informationretrieval} \bibinfo{author}{C.~Van-Rijsbergen}, \bibinfo{title}{Information Retrieval}, \bibinfo{publisher}{Butterworth-Heinemann}, \bibinfo{address}{London, UK}, \bibinfo{edition}{2nd} edition, \bibinfo{year}{1979}. \bibitem[{Wang and Zhu(2008)}]{wzvs_08} \bibinfo{author}{S.~Wang}, \bibinfo{author}{J.~Zhu}, \bibinfo{title}{Variable selection for model-based high dimensional clustering and its application to microarray data}, \bibinfo{journal}{Biometrics} \bibinfo{volume}{64} (\bibinfo{year}{2008}) \bibinfo{pages}{440--448}.
\bibitem[{Xu et~al.(1994)Xu, Jordan and Hinton}]{conf/nips/XuJH94} \bibinfo{author}{L.~Xu}, \bibinfo{author}{M.~Jordan}, \bibinfo{author}{G.~Hinton}, \bibinfo{title}{An alternative model for mixtures of experts}, in: \bibinfo{booktitle}{Advances in Neural Information Processing Systems}, pp. \bibinfo{pages}{633--640}. \bibitem[{Yuille and Geiger(1998)}]{Yuille:1998:WM:303568.304791} \bibinfo{author}{A.~Yuille}, \bibinfo{author}{D.~Geiger}, \bibinfo{title}{Winner-take-all mechanisms}, in: \bibinfo{editor}{M.A. Arbib} (Ed.), \bibinfo{booktitle}{The handbook of brain theory and neural networks}, \bibinfo{publisher}{MIT Press}, \bibinfo{address}{Cambridge, MA, USA}, \bibinfo{year}{1998}, p. \bibinfo{pages}{1056}. \end{thebibliography} \end{document}
Per Lindström

Per "Pelle" Lindström (9 April 1936 – 21 August 2009, Gothenburg)[1] was a Swedish logician, after whom Lindström's theorem and the Lindström quantifier are named.[2] (He also independently discovered Ehrenfeucht–Fraïssé games.[1]) He was one of the key followers of Lars Svenonius.[3] Lindström was awarded a PhD from the University of Gothenburg in 1966.[4] His thesis was titled Some Results in the Theory of Models of First Order Languages. A festschrift for Lindström was published in 1986.[5]

Selected publications

• Per Lindström, First Order Predicate Logic with Generalized Quantifiers, Theoria 32, 1966, 186–195.
• Per Lindström, On Extensions of Elementary Logic, Theoria 35, 1969, 1–11.
• Per Lindström (1997). Aspects of incompleteness. Springer-Verlag. ISBN 978-3-540-63213-9; 2nd ed. published by ASL in 2003, ISBN 978-1-56881-173-4.

References

1. ASL Newsletter, September 2009
2. Jacquette, Dale (2005). A companion to philosophical logic. p. 329. ISBN 1-4051-4575-7.
3. Burr, John Roy (1980). Handbook of world philosophy. p. 186. ISBN 0-313-22381-5.
4. Per Lindström at the Mathematics Genealogy Project
5. Lindström, Per; Furberg, Mats; Wetterström, Thomas; Åberg, Claes (1986). Logic and abstraction: essays dedicated to Per Lindström on his fiftieth birthday. ISBN 91-7346-168-7.

Further reading

• Väänänen, J.; Westerståhl, D. (2010). "In Memoriam: Per Lindström" (PDF). Theoria. 76 (2): 100–107. doi:10.1111/j.1755-2567.2010.01069.x.
\begin{document} \title{Deterministic Secure Quantum Communication Without Maximally Entangled States} \author{ Xi-Han Li, Fu-Guo Deng\footnote{Email address: [email protected]}, Chun-Yan Li, Yu-Jie Liang, Ping Zhou and Hong-Yu Zhou } \address{ The Key Laboratory of Beam Technology and Material Modification of Ministry of Education, Beijing Normal University, Beijing 100875, People's Republic of China, and\\ Institute of Low Energy Nuclear Physics, and Department of Material Science and Engineering, Beijing Normal University, Beijing 100875, People's Republic of China, and\\ Beijing Radiation Center, Beijing 100875, People's Republic of China} \date{\today } \begin{abstract} Two deterministic secure quantum communication schemes are proposed, one based on pure entangled states and the other on $d$-dimensional single-photon states. In these two schemes, only single-photon measurements are required for the two authorized users, which makes the schemes more convenient than others in practical applications. Although each qubit can be read out only after the transmission of an additional classical bit, it is unnecessary for the users to transmit qubits over double the distance between the sender and the receiver, which will increase their bit rate and their security. The parties use decoy photons to check for eavesdropping efficiently. The obvious advantage of the first scheme is that the pure entangled source is feasible with present techniques. \textbf{Keywords:} Deterministic secure quantum communication, Pure entangled states, Decoy photons, Single photons \end{abstract} \pacs{ 03.67.Hk, 03.65.Ud} \maketitle \section{Introduction} In the last decade, scientists have made dramatic progress in the field of quantum communication \cite{book,gisin}. Quantum key distribution (QKD), whose task is to create a private key between two remote authorized users, is one of the most remarkable applications of quantum mechanics.
So far, much attention has been focused on QKD \cite{bb84,ekert91,bbm92,gisin,longqkd,CORE,BidQKD,ABC,guoQKD,delay} since Bennett and Brassard (BB84) \cite{bb84} proposed the original protocol in 1984. In recent years, a novel concept, quantum secure direct communication (QSDC), was put forward and studied by several groups \cite{two-step,QOTP,Wangc,bf,cai,caiA}. It allows two remote parties to communicate directly without first creating a private key and then using it to encrypt the secret message \cite{two-step,QOTP,Wangc,bf,cai,caiA}. Thus, the sender should confirm that the channel is secure before he encodes his message on the quantum states, because the message cannot be discarded, unlike that in QKD protocols \cite{two-step,QOTP,Wangc}. In 2002, following some ideas in quantum dense coding \cite{bw}, Bostr\"{o}m and Felbinger \cite{bf} proposed a ping-pong QSDC scheme by using Einstein-Podolsky-Rosen (EPR) pairs as quantum information carriers, but it has been proven to be insecure in a noisy channel \cite{attack}. Recently, Deng \emph{et al.} \cite{two-step} proposed a two-step QSDC scheme with an EPR pair block and another scheme with a sequence of single photons \cite{QOTP}. Wang \emph{et al.} \cite{Wangc} introduced a high-dimensional QSDC protocol by following some ideas in quantum superdense coding \cite{superdense}. Now, QSDC has also been studied in the case of a network \cite{Linetwork,dengnetwork,dengepl}. Another class of quantum communication protocols \cite{imo1,beige,zhangzj,yan,Gao,zhangs2006,wangj,song,leepra} used to transmit secret messages is called deterministic secure quantum communication (DSQC). Certainly, the receiver can read out the secret message only after he exchanges at least one bit of classical information for each qubit with the sender in a DSQC protocol, which is different from QSDC.
DSQC is similar to QKD, but it can be used to obtain deterministic information, not a random binary string, which is different from the QKD protocols \cite{bb84,ekert91,bbm92,gisin} in which the users cannot predict whether an instance is useful or not. For transmitting a secret message, those protocols \cite{imo1,beige,zhangzj,yan,Gao,zhangs2006,wangj,song,leepra} can be replaced with an efficient QKD protocol, such as those in Refs. 6-11, because the users can retain or flip the bit value in the key according to the secret message after they obtain the private key \cite{QOTP}. Schimizu and Imoto \cite{imo1} and Beige \emph{et al.} \cite{beige} presented some novel DSQC protocols with entanglement or a single photon. More recently, Gao and Yan \cite{yan, Gao} and Man \emph{et al.} \cite{zhangzj} proposed several DSQC schemes based on quantum teleportation \cite{teleportation} and entanglement swapping \cite{entanglementswapping}. The users should complete the eavesdropping check before they perform a swapping or teleportation. Although the secret message can be read out only after an additional classical bit is transmitted for each qubit, the users do not have to transmit the qubits that carry the secret message. Therefore, these schemes may be more secure than others in a noisy channel, and they are more convenient for quantum error correction. On the other hand, a Bell-basis measurement is inevitably required for the parties in both entanglement swapping \cite{entanglementswapping} and quantum teleportation \cite{teleportation}, which will increase the difficulty of implementing these schemes in the laboratory. In Ref. 35, Yan and Gao introduced an interesting DSQC protocol following some ideas in Ref. 11 with EPR pairs. After sharing a sequence of EPR pairs securely, the two parties of a quantum communication need only perform single-photon measurements on their photons and can communicate directly by exchanging a bit of classical information for each qubit.
Obviously, their DSQC protocol is more convenient than other quantum communication protocols \cite{bf,two-step,Wangc,zhangzj,yan,Gao,zhangs2006,caiA,song} from the aspect of measurement even though it requires the two parties to exchange a classical bit and each EPR pair can carry only one bit of the message. In this paper, we will first propose a new DSQC scheme with pure entangled states, i.e., nonmaximally entangled two-photon states. The quantum signal source is in a more general form of entanglement, which makes this scheme more suitable for applications than the Yan-Gao protocol \cite{yandelay}. Then, we will discuss it with a sequence of $d$-dimensional single photons. We use some decoy photons to ensure the security of the whole quantum communication. In both schemes, single-photon measurements are enough. Moreover, we redefine the total efficiency of quantum communication. Compared with the old one presented in Ref. 36, our definition is more reasonable. \section{DSQC with pure entangled states} \subsection{DSQC with Two-dimensional Quantum Systems} In the DSQC schemes with entanglement swapping and teleportation \cite{yan,Gao,zhangzj}, the parties usually use EPR pairs as the quantum information carriers. An EPR pair is in one of the four Bell states, the four maximally entangled two-qubit states, as follows: \begin{eqnarray} \vert \psi^{\pm} \rangle_{AB}=\frac{1}{\sqrt{2}}(\vert 0 \rangle_A \vert 1 \rangle_B \pm \vert 1 \rangle_A \vert 0 \rangle_B),\\ \vert \phi^{\pm} \rangle_{AB}=\frac{1}{\sqrt{2}}(\vert 0 \rangle_A \vert 0 \rangle_B \pm \vert 1 \rangle_A \vert 1 \rangle_B), \end{eqnarray} where $\vert 0 \rangle$ and $\vert 1 \rangle$ are the eigenvectors of the measuring basis (MB) $Z$. The subscripts $A$ and $B$ indicate the two correlated photons in each EPR pair. For the Bell state $\vert \psi^{\pm} \rangle$ ($\vert \phi^{\pm} \rangle$), if the two photons are measured with the same MB $Z$, the outcomes will always be anti-correlated (correlated).
The correlation of an entangled quantum system plays an important role in quantum communication \cite{ekert91,bbm92,longqkd,CORE} as it provides a tool for checking eavesdropping. For example, the two photons $A$ and $B$ are anti-correlated in the Bennett-Brassard-Mermin 1992 QKD protocol \cite{bbm92} whether the users measure them with the MB $Z$ or $X$, as \begin{eqnarray} \vert \psi^{-} \rangle_{AB} &=& \frac{1}{\sqrt{2}}(\vert 0 \rangle_A \vert 1 \rangle_B - \vert 1 \rangle_A \vert 0 \rangle_B)\nonumber\\ &=& \frac{1}{\sqrt{2}}(\vert +x \rangle_A \vert -x \rangle_B - \vert -x \rangle_A \vert +x \rangle_B). \end{eqnarray} Here, $\vert \pm x\rangle=\frac{1}{\sqrt{2}}(\vert 0\rangle \pm \vert 1\rangle)$ are the two eigenvectors of the basis $X$. This property prevents an eavesdropper from attacking the quantum communication freely with an intercept-resend strategy. In experiment, however, the two photons are usually not in the maximally entangled state $\vert \psi^-\rangle_{AB}$. That is, a practical quantum signal source usually produces a pure entangled state, such as $\vert \Psi\rangle_{AB}=a \vert 0\rangle_{A}\vert 1\rangle_{B} +b\vert 1\rangle_{A} \vert 0\rangle_{B}$ (here $\vert a\vert ^2 + \vert b\vert ^2 =1$). In this case, the two photons are always anti-correlated with the basis $Z$, but not with the basis $X$, as \begin{eqnarray} \vert \Psi\rangle_{AB} &=& a \vert 0\rangle_{A}\vert 1\rangle_{B} +b\vert 1\rangle_{A} \vert 0\rangle_{B}\nonumber\\ &=& \frac{1}{2}[(a+b)(\vert +x\rangle_A\vert +x\rangle_B - \vert -x\rangle_A\vert -x\rangle_B) \nonumber\\ &-& (a-b)(\vert +x\rangle_A\vert -x\rangle_B - \vert -x\rangle_A\vert +x\rangle_B)]. \end{eqnarray} That is, the security of quantum communication with pure entangled states is lower than that with Bell states if the users measure them directly with the two bases $Z$ and $X$ for the eavesdropping check. On the other hand, such a quantum source is more convenient than a maximally entangled one.
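The basis dependence above is easy to verify numerically. A small sketch (illustrative only, not part of the original paper) expands $\vert \Psi\rangle_{AB}=a\vert 0\rangle_A\vert 1\rangle_B+b\vert 1\rangle_A\vert 0\rangle_B$ in the $X$ basis:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: |0> -> |+x>, |1> -> |-x>

def x_basis_coeffs(a, b):
    """Coefficients of a|01> + b|10> in the ordered X basis (++, +-, -+, --)."""
    psi = np.array([0.0, a, b, 0.0])           # Z-basis amplitudes of |00>,|01>,|10>,|11>
    return np.kron(H, H) @ psi                 # rows of H (x) H are the <+-| bras

# Maximally entangled case a = b: the cross terms |+-> and |-+> vanish,
# so X-basis outcomes keep a deterministic relation.
c = x_basis_coeffs(1 / np.sqrt(2), 1 / np.sqrt(2))

# Nonmaximally entangled case a != b: all four X outcomes occur,
# so the X-basis relation is no longer deterministic.
cp = x_basis_coeffs(0.8, 0.6)
```

The computed coefficients reproduce the expansion $\frac{1}{2}[(a+b)(\vert{+}{+}\rangle-\vert{-}{-}\rangle)-(a-b)(\vert{+}{-}\rangle-\vert{-}{+}\rangle)]$ term by term.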
In the present DSQC scheme, we will use pure entangled states as the quantum information carriers for DSQC. This scheme has the advantage of a practical entangled source and of high security with decoy photons, compared with those in Refs. 26-28 and 35. For completeness, we give all the steps of our point-to-point DSQC scheme as follows: (1) The sender Alice prepares $N$ two-photon ordered pairs in which each is randomly in one of the two pure entangled states $\{\vert \Psi \rangle_{AB},\vert \Psi' \rangle_{AB} \}$. Here, $\vert \Psi' \rangle_{AB}=a\vert 1 \rangle_A \vert 0\rangle_B + b\vert 0 \rangle_A \vert 1 \rangle_B$, which can be prepared by flipping the bit value of the two photons in the state $\vert \Psi \rangle_{AB}$, i.e., $(\sigma^{A}_x\otimes \sigma^{B}_x)\vert \Psi \rangle_{AB}=\vert \Psi' \rangle_{AB}$, similar to Ref. 10. Alice picks out photon $A$ from each pair to form an ordered sequence $S_A$, say [$A_1,A_2,...A_N$], and the other partner photons compose the sequence $S_B$ =[$B_1,B_2,...B_N$], similar to Refs. 6, 13, 37, 38. For checking eavesdropping efficiently, Alice replaces some photons in the sequence $S_B$ with her decoy photons $S_{de}$, which are randomly in one of the states $\{\vert 0\rangle, \vert 1\rangle, \vert +x\rangle, \vert -x\rangle\}$. They can be prepared with an ideal single-photon source. Also, Alice can get a decoy photon by measuring photon $A$ in a photon pair $\vert \Psi \rangle_{AB}$ in the sequence $S_A$ with the MB $Z$ and then operating on photon $B$ with the local unitary operation $\sigma_x$ or a Hadamard (H) operation: \begin{eqnarray} H\vert 0\rangle =\vert +x\rangle, \;\;\;\; H\vert 1\rangle =\vert -x\rangle. \end{eqnarray} We will discuss in detail below the reason that Alice inserts the decoy photons in the sequence $S_B$. (2) Alice encodes her secret message $M_A$ on the photons in the sequence $S_B$ with the two unitary operations $I$ and $U=\sigma_x$, which represent bits 0 and 1, respectively.
Obviously, she can choose all the decoy photons, $S_{de}$, as samples for checking eavesdropping. (3) Alice sends sequence $S_B$ to Bob and always keeps the sequence $S_A$ at home. (4) After Bob confirms receipt of sequence $S_B$, Alice tells Bob the positions and the states of the decoy photons $S_{de}$. Bob performs a suitable measurement on each photon in $S_{de}$ with the same basis as Alice chose for preparing it, and completes the error rate analysis of the samples. If the error rate is very low, Alice and Bob continue their communication to the next step; otherwise, they abandon the result of the transmission and repeat the quantum communication from the beginning. (5) Alice and Bob measure the photons remaining in the sequences $S_A$ and $S_B$ with the same basis $Z$, and they get the results $R_A$ and $R_B$, respectively. (6) Alice publicly broadcasts her results $R_A$. (7) Bob reads out the secret message $M_A$ with his outcomes $R_B$ directly; i.e., $M_A=R_A\oplus R_B\oplus 1$. It is interesting to point out that it is unnecessary for the receiver Bob and the sender Alice to perform Bell-basis measurements on their photons; they need only perform single-photon measurements, which makes this scheme more convenient than others \cite{yan,Gao,zhangzj} with entanglement swapping and quantum teleportation. Moreover, the quantum sources are just practical pure entangled states, not maximally entangled states, which makes this DSQC scheme easier than those with Bell states \cite{two-step,yandelay}. As each photon travels just the distance between the sender and the receiver, the bit rate is higher than those of two-way quantum communication schemes because the attenuation of a signal in a practical channel is exponential; i.e., $N_s(L)=N_s(0)e^{-\lambda L}$. Here, $N_s(L)$ is the photon number after transmission over the distance $L$, and $\lambda$ is the attenuation parameter.
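The classical bookkeeping in steps (5)-(7) reduces to XOR arithmetic. A toy sketch (assuming an ideal, noiseless channel and an already shared pair) illustrates why $M_A=R_A\oplus R_B\oplus 1$ recovers the message bit:

```python
import random

def transmit_bit(m):
    """One run of steps (5)-(7) for message bit m, ideal channel assumed."""
    r_a = random.randint(0, 1)   # Alice's Z-basis outcome on photon A
    r_b0 = 1 - r_a               # photon B is anti-correlated before encoding
    r_b = r_b0 ^ m               # I keeps the bit value (m = 0); sigma_x flips it (m = 1)
    return r_a ^ r_b ^ 1         # Bob's decoding rule M_A = R_A xor R_B xor 1
```

Because the pair is anti-correlated in the basis $Z$ for any $a$ and $b$, the rule works for every pure entangled state of the form $a\vert 01\rangle+b\vert 10\rangle$, not only for Bell states.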
As the security of a quantum communication scheme depends on the error rate analysis of samples chosen randomly, the present DSQC scheme can be made secure because the decoy photons are prepared randomly in one of the four states $\{\vert 0\rangle, \vert 1\rangle, \vert +x\rangle, \vert -x\rangle\}$ and are distributed in the sequence $S_B$ randomly. An eavesdropper, say Eve, knows neither the states of the decoy photons nor their positions in the sequence $S_B$, so her action will inevitably perturb the decoy photons and be detected by the users. As the receiver measures each decoy photon only after the sender has announced its preparing basis, all of the decoy photons can be used for checking eavesdropping, not just a fraction of them, as in Ref. 9. Without the decoy photons, the security of the present DSQC scheme would decrease because the two photons in a pure entangled state $\vert \Psi\rangle$ or $\vert \Psi'\rangle$ have no deterministic relation when they are measured with the MB $X$. That is, the parties cannot determine whether the errors in their outcomes come from eavesdropping by Eve or from the nondeterministic relation obtained with the MB $X$ if they only transmit a sequence of pure entangled states. In this way, Eve could obtain a fraction of the secret message without being detected. \subsection{DSQC with d-dimensional Quantum Systems} It is straightforward to generalize our DSQC scheme to the case with $d$-dimensional quantum systems (such as the orbital angular momentum of a photon \cite{OAM}). A pure symmetric $d$-dimensional two-photon entangled state can be described as \begin{equation} \vert \Psi_{p}\rangle_{AB} =\sum_{j} a_j\vert j\rangle_A \otimes \vert j \rangle_B, \end{equation} where \begin{equation} \sum_{j} |a_j|^2=1.
\end{equation} Defining \begin{equation} U_{m} =\sum_{j} \vert j+m\;{\rm mod} \; d \rangle \langle j\vert, \end{equation} which is used to transfer the state $\vert j\rangle$ into the state $\vert j+m\rangle$; i.e., \begin{equation} (U^A_{m}\otimes U^B_{m}) \vert \Psi_{p}\rangle_{AB} =\sum_{j} a_j\vert j+m\;{\rm mod} \; d \rangle_A \otimes \vert j+m\;{\rm mod} \; d \rangle_B,\nonumber\\ \end{equation} where $m=1,2,\cdots, d-1$. As in Ref. 23, the MB $Z_{d}$ is made up of the $d$ eigenvectors as \begin{eqnarray} \left\vert 0 \right\rangle, \;\;\;\left\vert 1 \right\rangle, \;\;\;\;\left\vert 2 \right\rangle, \;\; \cdots, \;\;\;\;\left\vert {d - 1} \right\rangle. \end{eqnarray} The $d$ eigenvectors of the MB $X_{d}$ can be described as \begin{eqnarray} \vert 0\rangle_x&=&\frac{1}{{\sqrt d }}\left( {\left\vert 0 \right\rangle + \vert 1\rangle \;\; + \cdots \;\; + \left\vert {d-1}\right\rangle }\right),\;\nonumber \\ \vert 1\rangle_x&=&\frac{1}{{\sqrt d }}\left({\left\vert 0 \right\rangle + e^{{\textstyle{{2\pi i} \over d}}} \left\vert 1 \right\rangle + \cdots + e^{{\textstyle{{(d-1)2\pi i} \over d}}} \left\vert {d-1} \right\rangle} \right),\; \nonumber\\ \vert 2\rangle_x&=&\frac{1}{{\sqrt d }}\left({\left\vert 0 \right\rangle + e ^{{\textstyle{{4\pi i} \over d}}} \left\vert 1 \right\rangle + \cdots + e^{{\textstyle{{(d-1)4\pi i} \over d}}} \left\vert {d-1} \right\rangle }\right),\nonumber\\ &&\cdots \cdots \cdots \cdots \cdots \cdots \nonumber\\ \vert d-1\rangle_x&=&\frac{1}{{\sqrt d }}(\left\vert 0 \right\rangle + e ^{{\textstyle{{2(d-1)\pi i} \over d}}} \left\vert 1 \right\rangle + e ^{{\textstyle{{2\times 2(d-1)\pi i} \over d}}} \left\vert 2 \right\rangle + \cdots \nonumber\\ && + e^{{\textstyle{{(d-1)\times 2(d-1)\pi i} \over d}}} \left\vert {d-1} \right\rangle ). \end{eqnarray} The two vectors $\vert k\rangle$ and $\vert l\rangle_x$ coming from two MBs satisfy the relation $\vert \langle k|l\rangle_x \vert ^2=\frac{1}{d}$. As in Ref. 
23, we can construct the $d$-dimensional Hadamard ($H_d$) operation as follows: \begin{eqnarray} H_d =\frac{1}{\sqrt{d}} \left( {\begin{array}{*{20}c} 1 & 1 & \cdots & 1 \\ 1 & {e^{2\pi i/d} } & \cdots & {e^{(d-1)2\pi i/d} } \\ 1 & {e^{4\pi i/d} } & \cdots & {e^{(d-1)4\pi i/d} }\\ \vdots & \vdots & \cdots & \vdots \\ 1 & {e^{2(d-1)\pi i/d} } & \cdots & {e^{(d-1)2(d-1)\pi i/d} } \\ \end{array}} \right)\label{HD}. \end{eqnarray} That is, $H_d\vert j\rangle=\vert j\rangle_x$. For quantum communication, Alice prepares $N$ ordered $d$-dimensional two-photon pure entangled states. Each pair is randomly in one of the states $\{ (U^A_{m}\otimes U^B_{m}) \vert \Psi_{p}\rangle_{AB} \}$ ($m=0,1,2,\cdots, d-1$), similar to the case with two-dimensional two-photon pure entangled states. The uniform distribution of the pure entangled states will make the users obtain outcomes $0$, $1$, $2$, $\cdots$, $d-1$ with the same probability. Before the communication, Alice divides the photon pairs into two sequences, $S_A'$ and $S_B'$. That is, the sequence $S_A'$ is composed of photons $A$ in the $N$ ordered photon pairs, and the sequence $S_B'$ is made up of photons $B$. The sender Alice can also prepare decoy photons, similar to the case with two-dimensional photons. In detail, Alice measures some of the photons $A$ in the sequence $S_A'$ with the basis $Z_d$ and then operates on them with $I$ or $H_d$. She inserts the decoy photons in the sequence $S_B'$ and keeps their positions secret. For the other photons in the sequence $S_B'$, Alice encodes her secret message on the sequence $S_B'$ with the unitary operations $\{U_{m}^B\}$. After Bob receives the sequence $S_B'$, Alice requires Bob to measure the decoy photons with the suitable bases $\{Z_d, X_d\}$, the same as those used for preparing them. If the transmission is secure, Alice and Bob can measure the photons remaining in the sequences $S_A'$ and $S_B'$ with the basis $Z_d$, respectively. 
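Both the shift operation $U_m$ and the $d$-dimensional Hadamard $H_d$ have direct matrix realizations. The sketch below (illustrative, not from the paper) builds them and checks the properties stated above:

```python
import numpy as np

def shift_op(d, m):
    """U_m: |j> -> |j + m mod d>, a cyclic permutation matrix."""
    return np.roll(np.eye(d), m, axis=0)

def hadamard_d(d):
    """H_d with entries exp(2*pi*i*j*k/d)/sqrt(d); column j is the vector |j>_x."""
    jj, kk = np.meshgrid(np.arange(d), np.arange(d), indexing="ij")
    return np.exp(2j * np.pi * jj * kk / d) / np.sqrt(d)

d = 5
Hd = hadamard_d(d)
```

The matrix $H_d$ is the discrete Fourier transform up to the sign convention of the exponent, which is why its columns form the mutually unbiased basis $X_d$.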
After Alice publishes her outcomes $R_A'$, Bob can obtain the secret message $M_A$ directly with his own outcomes $R_B'$. \subsection{The Capacity and Efficiency} In each pure entangled state $\rho$, such as $\vert \Psi_{p}\rangle_{AB} =\sum_{j} a_j\vert j\rangle_A \otimes \vert j \rangle_B$, the von Neumann entropy for each photon is \begin{eqnarray} S(\rho)=-\sum_i \lambda_i \log_2 \lambda_i = -\sum_{i=0}^{d-1} |a_i|^2 \log_2 |a_i|^2, \end{eqnarray} where $\lambda_i=|a_i|^2$ is the probability of obtaining the result $\vert i\rangle$ when one measures photon $A$ or $B$ of the state $\vert \Psi_{p}\rangle_{AB} =\sum_{j} a_j\vert j\rangle_A \otimes \vert j \rangle_B$ with the basis $Z_d$. When $|a_i|^2=\frac{1}{d}$ for each $i=0,1,\cdots, d-1$, the von Neumann entropy takes its maximal value $S(\rho)_{max}=\log_2 d$; otherwise, $S(\rho)<\log_2 d$. For each photon pair, the von Neumann entropy is $2S(\rho)$. In fact, each pure entangled state $\rho$ in the present DSQC scheme can carry $\log_2 d$ bits of classical information. It is obvious that photon $B$ is randomly in the state $\vert i\rangle$ with the probability $P(i)=\frac{1}{d}$ when Bob measures it with the basis $Z_d$. The reason is that $P(i)=\frac{1}{d}\sum_m \vert a_m \vert^2=\frac{1}{d}$, as the photon pair is randomly in one of the states $\{(U^A_{m}\otimes U^B_{m}) \vert \Psi_{p}\rangle_{AB} \}$ ($m=0,1,2,\cdots, d-1$). That is, the uniform distribution of the pure entangled states $\{(U^A_{m}\otimes U^B_{m}) \vert \Psi_{p}\rangle_{AB} \}$ provides a way to carry information efficiently. As almost all of the quantum source (except for the decoy photons used for the eavesdropping check) can be used to carry the secret message, the intrinsic efficiency for qubits, $\eta_q$, in our schemes approaches 100\%.
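The entropy formula above can be checked numerically. This is a small illustration (the function name and the biased amplitudes are our own hypothetical choices): it computes $S(\rho)$ from the Schmidt coefficients $a_i$ and confirms that the maximum $\log_2 d$ is reached only for uniform $|a_i|^2=\frac{1}{d}$:

```python
import math

def schmidt_entropy(amps):
    """S(rho) = -sum_i |a_i|^2 log2 |a_i|^2 for |Psi> = sum_i a_i |i>_A |i>_B."""
    probs = [abs(a) ** 2 for a in amps]
    assert abs(sum(probs) - 1) < 1e-9, "the state must be normalized"
    return -sum(p * math.log2(p) for p in probs if p > 0)

d = 4
uniform = [1 / d ** 0.5] * d        # maximally entangled state
biased = [0.9, 0.3, 0.3, 0.1]       # hypothetical partially entangled amplitudes
assert abs(schmidt_entropy(uniform) - math.log2(d)) < 1e-9
assert schmidt_entropy(biased) < math.log2(d)
```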
Here, \begin{eqnarray} \eta_q=\frac{q_u}{q_t}, \end{eqnarray} where $q_u$ is the number of useful qubits in the quantum communication and $q_t$ is the number of total qubits used (not only the ones transmitted; this differs from the definition proposed by Cabello \cite{Cabello}). We define the total efficiency of a quantum communication scheme as \begin{eqnarray} \eta_t=\frac{m_u}{q_t+b_t}, \end{eqnarray} where $m_u$ and $b_t$ are the numbers of secret message bits transmitted and classical bits exchanged, respectively. In the present DSQC scheme, $m_u=\log_2 d$, $q_t=2S(\rho)$, and $b_t=\log_2 d$, as the users pay $\log_2 d$ bits of classical information and $q_t=2S(\rho)$ bits of quantum information (a photon pair) for $m_u=\log_2 d$ bits of the secret message. Thus, its total efficiency is $\eta_t=\frac{\log_2 d}{\log_2 d + 2S(\rho)}\geq \frac{1}{3}$ in theory, since $S(\rho)\leq \log_2 d$. It is of interest to point out that our definition of the total efficiency of a quantum communication scheme, $\eta_t$, is more reasonable than the old one \cite{Cabello}. Even though Alice only transmits a sequence of photons to Bob, the source is an entangled one, which is different from the single photons discussed below. Obviously, the new definition can distinguish a scheme with single photons from one with entangled photons even when the efficiency for qubits and the classical information exchanged are both the same. Moreover, the total efficiency of dense coding according to this definition is no more than 100\%, as the traveling photon in an EPR pair carries two bits of information while the quantum system used for the quantum channel is a two-qubit one. \begin{center} \section{efficient one-way DSQC with $d$-dimensional single photons } \end{center} In our DSQC scheme above, the parties only exploit the correlation of the two photons in a pure entangled state along the direction $z$ for transmitting the secret message.
We can also simplify some procedures with single photons following some ideas in Ref. 14. Certainly, an ideal single-photon source is not available for a practical application at present, different from the pure entanglement source. With the development of technology, we believe that a practical ideal single-photon source can be produced without difficulty \cite{singlephotonsource}, so in theory, it is interesting to study the model for DSQC with single photons. Similar to the case with pure entangled states, we can describe the principle of our DSQC scheme with single photons as follows: (S1) Alice prepares a sequence of $d$-dimensional single photons $S$. She prepares them by choosing the MB $Z_d$ or the MB $X_d$ randomly, the same as in Ref. 14. She chooses some photons as the decoys and encodes her secret message on the other photons with the unitary operations $\{U_m, U_m^x\}$, where \begin{equation} U_{m}^x =\sum_{j}e^{\frac{2\pi i}{d}jm} \vert j+m\;{\rm mod} \; d \rangle \langle j\vert. \end{equation} That is, Alice encodes her secret message with the operations $U_m$ if a single photon is in one of the eigenstates of the MB $Z_d$. Otherwise, she will encode the message with the operations $U_m^x$. (S2) Alice sends the sequence $S$ to Bob. (S3) Bob completes the error rate analysis on the decoy photons. In detail, Alice tells Bob the positions and the states of the decoy photons. Bob measures them with the suitable MBs and analyzes their error rates. (S4) If the transmission of the sequence $S$ is secure, Alice tells Bob the original states of the photons retained. Bob measures them with the same MBs as those chosen by Alice for preparing them. Otherwise, they discard their transmission and repeat the quantum communication from the beginning. (S5) Bob reads out the secret message $M_A$ with his own outcomes. In essence, this DSQC scheme is a revision of the QSDC protocol in Ref. 
14, and is modified for transmitting the secret message in one-way quantum communication. Compared with the schemes based on entanglement \cite{bf,two-step,Wangc,zhangzj, yan,Gao,caiA,zhangs2006,song}, this DSQC scheme only requires the parties to prepare and measure single photons, which makes it more convenient for practical applications, especially with the development of techniques for storing quantum states \cite{storage}. Compared with the quantum one-time pad QSDC scheme \cite{QOTP}, the photons in the present DSQC scheme need only be transmitted from the sender to the receiver, rather than over double the distance between the parties, which will increase the bit rate in a practical channel, as the channel attenuates the signal exponentially with the distance $L$. Certainly, the parties should exchange a classical bit for each qubit to read out the secret message. As exchanging a classical bit is far easier than exchanging a qubit, the present scheme remains attractive for applications. \section{discussion and summary} Similar to the DSQC protocols \cite{yan,Gao,zhangzj} with entanglement swapping and teleportation, Alice can also encode her secret message on the sequence $S_A$ after she sends the sequence $S_B$ to Bob and confirms the security of the transmission in our first DSQC scheme. Moreover, she can accomplish this task in a simple way. That is, Alice first measures her photons remaining in the sequence $S_A$ and then publishes the difference between her outcomes and her secret message. In the DSQC scheme with single photons, Alice need only modify the process for publishing her information to encode her secret message after the transmission is confirmed to be secure. At this time, she tells Bob only the combined information about the original states of the single photons and the secret message, not the states themselves. In summary, we have proposed two DSQC protocols.
One is based on a sequence of pure entangled states, not maximally entangled ones. The obvious advantage is that a pure-entanglement quantum signal source is feasible at present. In the other scheme, the parties exploit only a sequence of $d$-dimensional single photons. In the present two DSQC protocols, only single-photon measurements are required for the authorized users, which makes them more convenient than those \cite{yan,Gao,zhangzj} based on quantum teleportation and entanglement swapping. Even though it is necessary for the users to exchange one bit of classical information for each bit of the secret message, the qubits do not run through the quantum line twice, which will increase their bit rate and security in practical conditions, as the qubits do not suffer from the noise and the loss introduced by the quantum line after they are transmitted from one party to the other. Also, the protocols can easily be modified to encode the secret message after confirming the security of the quantum channel, the same as in Refs. 26-28. \section*{ACKNOWLEDGMENTS} This work is supported by the National Natural Science Foundation of China under grant Nos. 10447106, 10435020, 10254002, and A0325401 and by the Beijing Education Committee under grant No. XK100270454. \end{document}
A novel centralized algorithm for constructing virtual backbones in wireless sensor networks Chuanwen Luo1, Wenping Chen1, Jiguo Yu2, Yongcai Wang1 & Deying Li1 EURASIP Journal on Wireless Communications and Networking volume 2018, Article number: 55 (2018) Finding the minimum connected dominating set (MCDS) is a key problem in wireless sensor networks, which is crucial for efficient routing and broadcasting. However, the MCDS problem is NP-hard. In this paper, a new approximation algorithm with approximation ratio H(Δ)+3 and running time O(n²) is proposed to approach the MCDS problem. The key idea is to divide the sensors in the CDS into core sensors and supporting sensors. The core sensors dominate the supporting sensors in the CDS, while the supporting sensors dominate the other sensors that are not in the CDS. To minimize the number of both the cores and the supporters, a three-phased algorithm is proposed. (1) Finding the base-core sensors by constructing an independent set (denoted as S1), in which the sensors that have the largest \(\frac {|N^{2}(v)|}{|N(v)|}\) (number of two-hop neighbors over the number of one-hop neighbors) are selected greedily into S1; (2) connecting all base-core sensors in S1 to form a connected subgraph, whose sensors are called cores; (3) adding the one-hop neighbors of the core sensors to the supporter set S2. This guarantees that only a small number of sensors are added into the CDS, which is a novel scheme for MCDS construction. Extensive simulation results are shown to validate the performance of our algorithm. Wireless sensor networks (WSNs) play a critical role in many areas, such as environmental monitoring, disaster forecast, etc. [1]. A key problem in WSNs is multi-hop communication, because the communication range of an individual sensor is generally limited. In multi-hop communication, any two sensors that are within the communication range of each other are called neighbors and can communicate with each other.
Other sensors that are not within the communication range of each other and want to communicate need intermediate sensors between them to forward their packets (for instance, sensory data [2, 3] and image data [4, 5]). However, due to the broadcast nature of wireless communication, if there is no specific routing path for packet forwarding, all neighbors may become intermediaries for forwarding messages, which causes a message-flooding problem. The key way to avoid flooding is to find a communication backbone, so that the packets are relayed by the backbone sensors to save energy for the other sensors. If the WSN is modeled as an undirected graph, the connected dominating set (CDS) [6–8] is a good choice for constructing the virtual backbone of the network, because the sensor nodes in the CDS form a connected subgraph to forward messages from other sensors. However, message forwarding may run into collisions, which introduce retransmissions and increase end-to-end delays. As the number of sensors in the CDS grows, the negative effect of retransmissions increases greatly. Hence, a CDS with a smaller number of sensors is highly desired, which leads to the problem of finding the CDS with the minimum number of sensors, i.e., the minimum connected dominating set (MCDS) problem. However, it has been proved that the MCDS problem is NP-hard [9]. Therefore, approximation algorithms have become the focus of work on the MCDS problem. The majority of algorithms proposed in the literature follow a general two-phased approach [10–14]. In the first phase, a dominating set is constructed, and the sensors in the dominating set are called dominators. In the second phase, additional sensors, called connectors, are selected. Together with the dominators, they induce a connected CDS topology. In this paper, we design a three-phased approximation algorithm for the MCDS problem in WSNs.
Firstly, we propose a novel method to construct an independent set S1 for the graph G such that any pair of complementary sensor subsets of S1 is separated by exactly three hops. Secondly, the sensors in S1 are connected by other sensors that are added into C to form a subtree. The number of sensors in C is even, since any pair of complementary sensor subsets in S1 is separated by two sensors. A supporter set S2 is then constructed by adding the neighbors of S1∪C into S2. S1∪C∪S2 is a connected dominating set. The performance of the proposed algorithm is thoroughly analyzed. Our contributions are presented as follows: We propose a novel algorithm to generate the CDS and construct virtual backbones in WSNs. We analyze the performance ratio and time complexity of our algorithm. We conduct extensive simulations to demonstrate the performance of the algorithm. Simulation results show that the algorithm generates a CDS of smaller size than the state-of-the-art algorithms in [15]. The rest of the paper is organized as follows. Related work is reviewed in Section 2. Our novel centralized algorithm for constructing a CDS is presented in Section 3. The performance of the proposed algorithm is thoroughly analyzed in Section 4. Section 5 gives the results of simulations, which show the performance of the algorithm. Finally, we conclude this paper in Section 6.
Related work
In this section, we review the classical algorithms for constructing CDSs. For more comprehensive coverage of approximation algorithms for CDS construction, one can refer to Du and Wan and Yu et al. [16, 17]. Since the MCDS problem in unit disk graphs is NP-hard, many algorithms have been proposed to compute approximate solutions. CDS construction algorithms can be divided into distributed algorithms and centralized algorithms.
Distributed algorithms
In the case of distributed algorithms, each node in the network only knows local information and communicates with its neighbors.
Recently, the popular methods for constructing a CDS first construct a maximal independent set (MIS) and then form a CDS by connecting the nodes in the MIS, as in [7, 8, 11–13]. In [7], Wan et al. proposed an ID-based distributed algorithm to construct a CDS with the performance ratio 8|opt|−2, where opt represents the minimum connected dominating set of the unit disk graph. In [8, 11, 12], some MIS-based algorithms are proposed, and the first phase of these algorithms is to construct an MIS as shown in [7]. In the second phase of the algorithm in [8], Li et al. constructed a Steiner tree to connect all nodes in the MIS. The performance ratio of their algorithm is (4.8+ln5)|opt|+1.2. In [11], Min et al. improved the construction of the Steiner tree to decrease the number of connectors. Consequently, they proved that the approximation ratio of the proposed algorithm is 6.8. In [12], Wan et al. proved that the approximation ratio of [7] is 7.333 and proposed a new approximation algorithm with ratio 6.389. In [13], Misra et al. proposed a heuristic algorithm, called collaborative cover, to obtain an MIS. After that, they constructed a Steiner tree with the minimum number of Steiner nodes to obtain a small CDS. The size of the CDS they obtained is at most (4.8+ln5)|opt|+1.2.
Centralized algorithms
In the literature, Guha and Khuller [6] proposed the first approximation algorithms to construct an MCDS as a virtual backbone in a wireless network. They presented two centralized greedy algorithms for CDS construction with approximation factors 2H(Δ)+2 and H(Δ)+2, respectively, where Δ is the maximum degree of the graph. In [18], Ruan et al. proposed another centralized algorithm with the approximation factor lnΔ+2. In [19], Fu et al. proposed a centralized algorithm for CDS construction with time complexity O(nΔ²). Note that Δ can be as large as O(n); thus, the time complexity of the algorithm in [19] is O(n³). In [15], Al-Nabhan et al.
proposed three similar centralized algorithms to construct CDSs in wireless networks with an approximation factor of 5. These approximation algorithms outperform the existing state-of-the-art methods. Their algorithm contains four phases. The first phase is to construct a special independent set S1 such that any pair of complementary subsets of S1 is separated by exactly three hops. The second phase is to compute an MDS for each disconnected component; all nodes in the MDS form the set S2. The third phase is to connect S2 nodes and S1 nodes. The fourth phase is to connect all nodes in S1. Some other centralized CDS construction algorithms also exist in the literature [20–23]. The MCDS has many applications in special network models, such as ad hoc networks [24, 25], energy harvesting networks [26], battery-free networks [27], cognitive radio networks [28], and others [29–31]. In this paper, we propose a three-phased approximation algorithm for CDS construction with approximation ratio H(Δ)+3 and running time O(n²). To compare with the three algorithms proposed in [15], extensive simulations are conducted, and the results show the effectiveness of our algorithm. A preliminary version [32] was published in WASA 2017.
MCDS construction
For simplicity, all sensors in the WSN are randomly deployed in the two-dimensional plane. Assume that all sensors have the same transmission range in the network. The WSN is modeled as a unit disk graph G(V,E), where V is the set of all sensors and E represents the set of links in the network. If the Euclidean distance between any two sensors u and v is less than or equal to 1, then there is an undirected edge e_{uv} between these two sensors. Each sensor v∈V has a unique ID. Let N(v) be the set of all neighbors of v and d_v=|N(v)| be the degree of v. Denote Δ=max{d_v | v∈V}, and let N^i(v) be the i-hop neighbor set of v.
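The model and notation above are straightforward to compute. The following sketch (our own helper names, hypothetical coordinates, unit transmission range) builds the unit disk graph from sensor positions and evaluates N(v), the maximum degree Δ, and the exact k-hop neighbor sets N^k(v):

```python
from math import dist  # Python 3.8+

def unit_disk_graph(points):
    """Adjacency of the unit disk graph: u ~ v iff Euclidean distance <= 1."""
    n = len(points)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if dist(points[u], points[v]) <= 1:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def k_hop(adj, v, k):
    """N^k(v): nodes at hop distance exactly k from v."""
    seen, frontier = {v}, {v}
    for _ in range(k):
        frontier = {w for u in frontier for w in adj[u]} - seen
        seen |= frontier
    return frontier

# Hypothetical coordinates; sensors 0-1-2-3 form a path in the unit disk graph.
pts = [(0.0, 0.0), (0.9, 0.0), (1.8, 0.0), (2.7, 0.0)]
adj = unit_disk_graph(pts)
delta = max(len(adj[v]) for v in adj)   # maximum degree
assert delta == 2
assert k_hop(adj, 0, 2) == {2}          # N^2(0)
```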
Connected dominating set (CDS)
A dominating set (DS) of a graph G=(V,E) is a subset V′⊆V such that each node in V∖V′ is adjacent to at least one node in V′, and a connected dominating set (CDS) is a dominating set that also induces a connected subgraph.
Minimum connected dominating set (MCDS) problem
Given a graph G=(V,E), the minimum connected dominating set problem is to find a CDS in G such that the size of the CDS is minimized. In this paper, we propose a novel approximation algorithm for solving the MCDS problem.
Algorithm overview
In this section, we give an overview of the proposed approximation algorithm for the MCDS problem. The algorithm consists of three phases. In the first phase, we construct an independent set S1 (the sensors in S1 are called base-cores) for the graph G such that any pair of complementary sensor subsets in S1 is separated by exactly three hops, which differs from the construction process of the first phase in [15]. In the second phase, we select connectors from V∖S1 to connect the base-core sensors in S1, obtaining a subtree; all sensors on the subtree are called cores. In the third phase, we construct a supporter set S2 by adding the neighbors of S1∪C into S2. S1∪C∪S2 forms a CDS. For illustrative purposes, we employ different colors to differentiate sensor states during the construction process of our algorithm. Figure 1 shows the state transition process of sensors in the WSN.
Fig. 1 Sensor state transition of our algorithm.
The transition conditions are as follows: a has the largest value of \(\frac {|N^{2}(v)|}{|N(v)|}\); b has a black neighbor; c has a red neighbor but no black neighbor; d has a yellow neighbor but no black or red neighbor; e has the largest value of \(\frac {|N^{2}(v)|}{|N(v)|}\) among all blue nodes, where N^2(v) and N(v) only contain white nodes; f becomes a connector; g has a black neighbor; h has the maximum number of yellow neighbors among all red nodes; i has a red neighbor but no black neighbor; j becomes a connector. We illustrate the CDS construction process of our algorithm by Fig. 2, which uses the same network example G(V,E) as in [7]. Initially, all sensors are marked white and each sensor has a unique ID, as shown in Fig. 2a. In the first phase, node 8 has the largest value of \(\frac {|N^{2}(v)|}{|N(v)|}\) among all sensors in the graph. Hence, sensor 8 is colored black, all neighbors in N(8) are colored red, and all sensors in N^2(8) are colored yellow. As shown in Fig. 2b, sensors 3, 4, 5, and 6 are colored red and sensors 0, 1, 2, 7, 9, 10, 11, and 12 are colored yellow. No sensor becomes a connector in the second phase, since only the single black sensor 8 is added into the independent set S1. In the third phase, we need to select supporters (added into S2) from the red sensors to dominate all yellow sensors. Among all red sensors, sensor 5 has the maximum number of yellow neighbors, so sensor 5 is marked green and its yellow neighbors 9, 10, 11, and 12 are colored red. After that, sensors 6 and 4 have the same number of yellow neighbors and the ID of sensor 6 is larger than that of sensor 4; therefore, sensor 6 is marked green and its yellow neighbors 1 and 7 are colored red. Then sensor 4 is marked green and sensors 0 and 2 are colored red. Finally, the black and green sensors form a CDS that contains sensors 4, 5, 6, and 8, as shown in Fig. 2c.
Figure 2d shows a CDS (blue and black sensors) obtained by the algorithm in [12].
Fig. 2 The process of CDS construction by our algorithm in a–c. d A CDS constructed by the algorithm in [7]
Independent set S1 construction
In this section, we construct the set S1 such that the hop distance between any two complementary sensor subsets of S1 is exactly three hops. The details of the S1 construction process are as follows. First, a sensor v∈V with the largest value of \(\frac {|N^{2}(v)|}{|N(v)|}\) initiates the S1 construction by coloring itself black. Then, the black sensor v dominates its neighbors in N(v), and all sensors in N(v) are marked red. After that, we color all sensors in N^2(v) yellow and all sensors in N^3(v) blue. Last, each blue sensor u deletes red sensors from the set N^2(u) and deletes yellow sensors from the set N(u). Black sensors are then selected from the current blue sensors; for this purpose, the algorithm repeats the following steps until no blue or white sensor is left in the graph. We select a blue sensor v and color it black when the value of \(\frac {|N^{2}(v)|}{|N(v)|}\) is the largest among all blue sensors. If more than one sensor has the same value of \(\frac {|N^{2}(v)|}{|N(v)|}\), then the algorithm selects the blue sensor with the maximum number of sensors in N(v). If more than one blue sensor has the same value of |N(v)|, then the algorithm selects the blue sensor with the highest ID among these blue sensors. After that, the algorithm executes the following operations: All sensors in N(v) are colored red. All sensors in N^2(v) are colored yellow. All sensors in N^3(v) are colored blue. Each blue sensor u deletes red sensors from the set N^2(u) and deletes yellow sensors from the set N(u). The details are shown in Algorithm 1. After Algorithm 1 terminates, the sensors in V are either black, red, or yellow.
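The steps above can be sketched in Python. This is an illustrative, simplified reading of Algorithm 1 (helper names are ours, the ratio is computed over white sensors only to mimic the deletion step, and a connected graph is assumed), not the authors' implementation:

```python
from collections import deque

def khop(adj, v, k):
    """Set of nodes at hop distance exactly k from v (BFS)."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        if dist[u] >= k:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return {u for u, du in dist.items() if du == k}

def base_cores(adj):
    """Greedy base-core (black sensor) selection in the spirit of Algorithm 1."""
    color = {v: 'white' for v in adj}

    def ratio(v):
        # |N^2(v)| / |N(v)| restricted to white sensors (the deletion step).
        n1 = [u for u in khop(adj, v, 1) if color[u] == 'white']
        n2 = [u for u in khop(adj, v, 2) if color[u] == 'white']
        return len(n2) / max(1, len(n1))

    def paint(v):
        color[v] = 'black'
        for u in khop(adj, v, 1):
            if color[u] != 'black':
                color[u] = 'red'
        for u in khop(adj, v, 2):
            if color[u] in ('white', 'blue'):
                color[u] = 'yellow'
        for u in khop(adj, v, 3):
            if color[u] == 'white':
                color[u] = 'blue'

    s1 = []
    # ties broken by degree, then by highest ID, as in the text
    v = max(adj, key=lambda u: (ratio(u), len(adj[u]), u))
    while True:
        s1.append(v)
        paint(v)
        blues = [u for u in adj if color[u] == 'blue']
        if not blues:
            return s1
        v = max(blues, key=lambda u: (ratio(u), len(adj[u]), u))

# Demo on a 7-node path 0-1-2-3-4-5-6.
adj = {i: set() for i in range(7)}
for i in range(6):
    adj[i].add(i + 1)
    adj[i + 1].add(i)
s1 = base_cores(adj)
assert s1 and all(b not in adj[a] for a in s1 for b in s1)  # independent set
```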
We obtain an independent set S1 that is composed of black sensors; any red sensor is dominated by a black sensor, and any yellow sensor is two hops away from a black sensor. We can prove that any pair of complementary sensor subsets of S1 is separated by exactly three hops. The sensors in the set S1 are called base-cores.
Connector set C construction
In this section, we propose a novel algorithm to find a set of connectors C such that S1∪C forms a subtree. Before we describe the algorithm, we introduce some terms and notation. For any subset U⊆V, let q(U) be the number of connected components in G(U). The set U is initially equal to S1, and the initial value of q(U) is |S1|. Let M={e | e∈E and the endpoints of e are red and yellow}, and let C be the set of connectors. Let W be the subset of S1 such that any pair of sensors of W is connected by other sensors in C. To begin our algorithm, we first select an arbitrary black sensor s_1∈S1 to start the selection of connectors and set W={s_1}. The algorithm repeats the following steps until the condition q(U)=1 is satisfied: Select a sensor s_i∈W such that there exists a sensor s_j∈N^3(s_i)∩(S1−W). Select an edge e_{xy}∈M such that x∈N(s_j) and y∈N(s_i). Delete the edge e_{xy} from M; then sensors x and y are marked blue and added into C. For each yellow sensor w, if w∈N(x) or w∈N(y), then it is marked red. Execute the operations U=U∪{s_j}, U=U∪C, and q(U)=q(U)−1. After Algorithm 2 terminates, any two black sensors are connected by a path that consists of black sensors and blue sensors. That is, we obtain a subtree, and all sensors on the subtree are called cores.
Supporter set S2 construction
After executing Algorithm 2, we have obtained a subtree on S1∪C. However, there are still some yellow sensors that are not dominated, since they are two hops away from any black or blue sensor.
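The connector and supporter phases (Algorithms 2 and 3) can be sketched together. This is a simplified illustration under our own naming, not the authors' implementation: it joins each remaining base-core through the two interior sensors of a three-hop shortest path, then runs a greedy cover over the red neighbors of the backbone; a connected graph is assumed:

```python
from collections import deque

def connect_cores(adj, s1):
    """Algorithm 2 (sketch): join the base-cores through the two interior
    sensors of a three-hop shortest path, so |C| = 2|S1| - 2."""
    def shortest_path(a, b):
        prev = {a: None}
        q = deque([a])
        while q:
            u = q.popleft()
            if u == b:
                break
            for w in adj[u]:
                if w not in prev:
                    prev[w] = u
                    q.append(w)
        path, u = [], b
        while u is not None:
            path.append(u)
            u = prev[u]
        return path[::-1]

    connected, connectors, remaining = {s1[0]}, set(), set(s1[1:])
    while remaining:
        pick = None
        for s in remaining:
            for t in connected:
                p = shortest_path(t, s)
                if len(p) == 4:            # s is exactly three hops from the tree
                    pick = p
                    break
            if pick:
                break
        assert pick is not None, "guaranteed by the three-hop property of S1"
        connectors.update(pick[1:3])       # the two interior sensors become connectors
        connected.add(pick[3])
        remaining.discard(pick[3])
    return connectors

def supporters(adj, backbone):
    """Algorithm 3 (sketch): greedy set cover; red neighbors of the backbone
    that dominate the most still-yellow sensors are picked first."""
    dominated, candidates = set(backbone), set()
    for v in backbone:
        dominated |= adj[v]
        candidates |= adj[v]
    candidates -= set(backbone)            # red sensors
    uncovered = set(adj) - dominated       # yellow sensors
    s2 = set()
    while uncovered:
        best = max(candidates, key=lambda v: (len(adj[v] & uncovered), v))
        s2.add(best)
        newly = adj[best] & uncovered
        uncovered -= newly
        candidates |= newly                # freshly dominated sensors turn red
    return s2

# Demo: path 0-1-2-3-4-5-6 with base-cores {4, 1}.
adj = {i: set() for i in range(7)}
for i in range(6):
    adj[i].add(i + 1)
    adj[i + 1].add(i)
C = connect_cores(adj, [4, 1])
assert C == {2, 3}
S2 = supporters(adj, {4, 1} | C)
cds = {4, 1} | C | S2
assert all(v in cds or adj[v] & cds for v in adj)   # the CDS dominates every sensor
```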
In this section, we propose a novel greedy algorithm for acquiring a supporter set S2, whose sensors are used to dominate the remaining yellow sensors. Sensors in the set S2 are called supporters. Let RD be the set {s | s∈V, Color_s=red} and YL be the set {s | s∈V, Color_s=yellow}. In each iteration, we select a red sensor s∈RD with the maximum number of yellow sensors in N(s). If more than one red sensor has the same number of yellow neighbors, then the algorithm selects the red sensor with the highest ID. The algorithm repeats the following steps until the condition YL=∅ is satisfied: Select a red sensor s∈RD with the maximum number of yellow neighbors. Sensor s is marked green, and its yellow neighbors in N(s)∩YL are marked red. Delete the sensors of N(s)∩YL from YL.
CDS construction
In this section, we present our approximation algorithm for solving the MCDS problem. The algorithm consists of four steps, and the first three steps correspond to Algorithms 1–3, respectively. The last step is to compute the union of S1, C, and S2. The details are shown in Algorithm 4. After this algorithm terminates, we obtain a CDS that is the union of S1 (black sensors), C (blue sensors), and S2 (green sensors). For a given graph G(V,E), we give the execution process of Algorithm 4, as shown in Fig. 3a–d. Let the transmission range R be 250 m, and deploy 100 sensors in a 1000×1000 m² detection area. The execution process of Algorithm 4 is as follows: a Select a sensor s to start the S1 construction; sensor s is marked black. b An independent set S1 that contains four black sensors is constructed in step 1. c The connector set C that contains six blue sensors is constructed after executing step 2, and we obtain a subtree that contains all cores.
d The supporter set S2 that consists of four green sensors is constructed in step 3; then we obtain a CDS that consists of all black, blue, and green sensors.
Performance analysis
In this section, we analyze the performance ratio and time complexity of our algorithm. Let \(H(n)=\sum _{i=1}^{n} \frac {1}{i}\) be the harmonic function and MCDS be an optimal CDS.
Lemma 1 The set S1 found by Algorithm 1 is an independent set, and any pair of complementary sensor subsets of S1 is separated by exactly three hops.
We use {s_1, s_2, ···, s_k} to denote the set S1. Any two sensors s_i, s_j∈S1 are not adjacent to each other according to the process of S1 construction by Algorithm 1. Therefore, the set S1 is an independent set of G. Let T_j={s_1, s_2, ···, s_j} and H_j=(T_j, E_j) for any 1≤j≤k. For any two sensors s_i, s_l∈T_j, there is an edge (s_i, s_l)∈E_j if and only if their distance in G is three. We prove by induction on j that H_j is connected. Since H_1 contains a single sensor, it is obviously connected. Assume that H_j is connected for some 1≤j≤k−1. When the sensor s_{j+1} is marked black, according to Algorithm 1 there exists s_i∈T_j (1≤i≤j) such that the distance between s_{j+1} and s_i in G is three, which means there is an edge between s_i and s_{j+1} in H_{j+1}. Since H_j is connected, H_{j+1} is also connected. Therefore, H_j is connected for any 1≤j≤k. This implies that any pair of complementary subsets of S1 is separated by exactly three hops. □
Lemma 2 The set CDS=S1∪C∪S2 obtained by Algorithm 4 is a connected dominating set.
According to Lemma 1, we know that S1 is an independent set and S1∪C is connected. According to Algorithm 3, each sensor in S2 is adjacent to at least one sensor in S1∪C. Therefore, the set CDS is connected. Since the distance between any sensor not in S1∪C and the set S1∪C in G is at most 2, all other sensors not in CDS are dominated by sensors in CDS according to the selection process of S2.
Therefore, for any sensor v∈V, it either belongs to the set CDS or has at least one neighbor in CDS, which means CDS is a connected dominating set. □
Lemma 3 The size of S1 is less than or equal to |MCDS|.
This lemma has been proved by Lemma 2 in [15].
Lemma 4 The size of the set C obtained by Algorithm 2 is at most 2|MCDS|−2.
Let S1 be the set {s_1, s_2, ···, s_k}. According to Lemma 1, the auxiliary graph H_k over S1 is connected, so it contains a spanning tree with k−1 edges. According to Algorithm 2, the two endpoints of an edge in H_k are two sensors in S1, and two connectors are added into C to connect them. Therefore, the size of the set C is at most 2|S1|−2. By Lemma 3, we get that |C| is at most 2|MCDS|−2. □
Lemma 5 The size of the set S2 obtained by Algorithm 3 is less than H(Δ)|MCDS|.
For a sensor v∈MCDS, let P_v be the set of sensors dominated by v (including v itself). According to Algorithm 3, when a red sensor is marked green, all of its yellow neighbors become dominated. We will prove that the total cost charged to the sensors in P_v is at most H(Δ) for any v. Assume that when we pick a sensor from RD to add to S2, y yellow sensors turn red; then each of these y yellow sensors is charged a cost of at most \(\frac{1}{y}\). Assume that the number of yellow sensors in P_v is initially y_0<Δ and finally drops to 0, and let y_j denote the number of yellow sensors in P_v after step j. Here, we assume that some yellow sensors in P_v are marked red at each step, so the number of yellow sensors in P_v decreases at each step. After the first step, the number of sensors that changed color is y_0−y_1. In the jth step, the number of sensors that change color in P_v is y_{j−1}−y_j, and the cost charged to each such sensor is at most \(\frac{1}{y_{j-1}}\), since the greedy choice dominates at least y_{j−1} yellow sensors. Let y_h=0.
We can get that the total cost charged to the sensors in P_v is $$\begin{array}{*{20}l} \sum_{j=1}^{h} \frac{1}{y_{j-1}} (y_{j-1}-y_{j})&=\sum_{j=1}^{h} \sum_{i=y_{j}+1}^{y_{j-1}} \frac{1}{y_{j-1}}\\ &\leq\sum_{j=1}^{h} \sum_{i=y_{j}+1}^{y_{j-1}} \frac{1}{i}\\ &=\sum_{j=1}^{h} \left(\sum_{i=1}^{y_{j-1}} \frac{1}{i}-\sum_{i=1}^{y_{j}} \frac{1}{i}\right)\\ &= H(y_{0})< H(\Delta). \end{array} $$ Summing over all sensors of the optimal solution, $$|S_{2}|\leq \sum_{v\in MCDS}\mathrm{cost}(P_{v})<H(\Delta)|MCDS|. $$ This lemma is proved. □ We know that CDS=S1∪C∪S2. According to Lemmas 3–5, we obtain the following theorem.
Theorem 1 The number of sensors in CDS found by Algorithm 4 is less than (H(Δ)+3)|MCDS|−2.
Lemma 6 The time complexity of Algorithm 1 is O(n²).
According to Algorithm 1, we need |S1| iterations to obtain the set S1. In the first iteration, we need at most n steps to choose a sensor v with the largest value of \(\frac {|N^{2}(v)|}{|N(v)|}\) from V. Since any black sensor comes from the blue sensors, we need at most n steps to select a black sensor from the blue sensors in the ith iteration. Therefore, the total cost of black sensor selection over all iterations is O(n|S1|)=O(n²), since |S1|<n, and we obtain that the time complexity of Algorithm 1 is O(n²)+O(n)=O(n²). □
Lemma 7 The time complexity of Algorithm 2 is O(n²).
Firstly, we pick out all edges with a red endpoint and a yellow endpoint from the set E; this operation runs in O(|E|) time. Secondly, since the initial value of q(U) is |S1|, the number of iterations is less than |S1|. In the interior of the loop, first, we need |W| steps to select a sensor s_i∈W such that there exists a sensor s_j∈N^3(s_i)∩(S1−W); the maximum value of |W| is |S1|. Second, we select an edge e_{xy}∈M for connecting s_i and s_j such that e_{xy} has an endpoint x∈N(s_i) and an endpoint y∈N(s_j); this needs at most 2Δ steps. Last, 2Δ steps are needed for coloring all sensors in N(x)∪N(y). Therefore, the time complexity of Algorithm 2 is O(|E|+(|W|+2Δ+2Δ)×|S1|)=O(n²).
□ We need n steps to pick out the red sensors (added into RD) and the yellow sensors (added into YL) from V. Algorithm 3 executes at most |YL| iterations. In a single iteration, since the size of RD is less than n, we need at most n steps to select a red sensor v∈RD with the maximum number of yellow neighbors among all sensors in RD, and at most Δ steps are needed to mark all yellow neighbors of v. Therefore, the time complexity of Algorithm 3 is O((n+Δ+n)×|YL|)=O(n²), since |YL|<n. □ We know that Algorithm 4 consists of four steps, and the first three steps correspond to Algorithms 1, 2, and 3, respectively. The last step needs constant time to compute the union of S1, C, and S2. According to lemmas 6–8, we obtain the following theorem. In this section, we evaluate the performance of our algorithm through simulations. In the simulations, N sensors are randomly deployed in the two-dimensional plane. All sensors are assumed to have the same transmission range R. Each experimental result is the average of 100 runs. We first evaluate how the network configuration, such as the number of sensors, the transmission range, and the area of deployment, impacts the size of the CDS, as shown in Section 5.1. After that, we compare the performance of our algorithm with that of the three algorithms (Approach I, Approach II, and Approach III) in [15], as shown in Section 5.2. We used MATLAB R2013a for all simulations. Impact of network configuration In this section, we evaluate the impact of different parameter settings on the size of the CDS. Firstly, Fig. 4a illustrates the impact of the transmission range R on the size of the CDS for different numbers of sensors. We randomly deploy N sensors in a 1000×1000 m² area, and measure the size of the CDS as the transmission range R varies from 200 to 500 m in increments of 50 m. As shown in Fig. 4a, we can observe that the size of the CDS decreases as the transmission range R increases.
This is because when the transmission range becomes longer, the number of neighbors of each sensor increases. That is to say, a backbone sensor is able to dominate more non-backbone sensors. When the transmission range R is large enough and the number of sensors reaches a certain value, the CDS size is almost the same no matter how large the number of sensors N is, because a few sensors can then cover the whole detection area. From Fig. 4a, when R = 500 m, the CDS sizes are almost the same. We can also find that the ratio of the CDS size to the total number of sensors in the network decreases as the density of the network deployment increases. For example, if we fix R at 300 m, then for N = 100 and N = 500 the CDS sizes are 11.2 and 14.5, respectively; the ratio is 11.2% for the former and 2.9% for the latter. The performance of our algorithm. a Size of CDS with different values of N when R varies between 200 and 500 m. b Size of CDS with different values of R when N changes from 200 to 1200 sensors. c Size of CDS with fixed R = 80 m when N varies between 200 and 1000 sensors. d Size of CDS with fixed N = 1000 sensors when R changes from 50 to 120 m Secondly, we evaluate the impact of the number of sensors N on the size of the CDS for different transmission ranges R. In the 1000×1000 m² monitored area, as the number of sensors N changes from 200 to 1200, we can find that the size of the CDS increases with the number of sensors when R = 100 m, and that the size of the CDS levels off once R exceeds 250 m, as shown in Fig. 4b. We also observe that, when N is fixed, the size of the CDS decreases more and more slowly with increasing transmission range once the transmission range reaches a certain value. Thirdly, we measure the effect of the size of the deployment area on the size of the CDS. We deploy network sensors in detection areas of 300×300 m², 400×400 m², 500×500 m², and 600×600 m², respectively.
First, we evaluate the impact of the number of sensors N on the size of the CDS in different detection areas, as shown in Fig. 4c. When we fix the transmission range at 80 m and vary the number of sensors N from 200 to 1000, we can notice that the CDS size increases as the deployment area grows. Afterwards, we evaluate the impact of the transmission range on the size of the CDS in different detection areas, as shown in Fig. 4d. When we fix the number of sensors N at 1000 and vary R from 50 to 120 m, we can notice that the CDS size increases as the deployment area grows. In this section, we compare the performance of our algorithm with that of the three algorithms (Approach I, Approach II, and Approach III) in [15]. To make the comparison fair, we use the same experimental parameter values for our algorithm as for the other three algorithms in [15]. Firstly, we give the comparison of the algorithms when the sensors are randomly deployed in the 300×300 m² area, as shown in Fig. 5. When the number of sensors is N = 1000 and the transmission range R is increased by 10 m from 50 to 120 m, we give the comparative results of the four algorithms in Fig. 5a. The results show that the size of the CDS obtained by our algorithm is always better than that of the other three algorithms as the transmission range becomes longer. The CDS sizes decrease as the transmission range increases, because a larger transmission range yields a larger coverage area while the size of the network area is finite. Similarly, we fix the transmission range R at 50 m and change N from 100 to 1000 sensors in increments of 100. The comparative results in Fig. 5b illustrate that our algorithm outperforms the other three algorithms. Comparing results in the 300×300 m² detection area. a The average performance of four algorithms when N = 1000 sensors and R changes from 50 to 120 m.
b The average performance of four algorithms when R = 50 m and N varies between 100 and 1000 sensors Secondly, for the 600×600 m² monitored area, Fig. 6 shows the performance of the compared algorithms. Setting the number of sensors N = 1000 and varying the transmission range R between 50 and 100 m, our algorithm is better than the other three algorithms, and the gap among the four results narrows as the transmission range increases. Setting R = 100 m, Fig. 6b gives the comparison in terms of CDS size as the number of sensors increases from 100 to 1000. We can observe that our algorithm is still better than the other three algorithms. Comparing results in the 600×600 m² detection area. a The average performance of four algorithms when N = 1000 sensors and R changes from 50 to 120 m. b The average performance of four algorithms when R = 100 m and N changes from 100 to 1000 sensors Finally, to better illustrate the superiority of our algorithm, we randomly deploy the sensors in a 1000×1000 m² area, as shown in Fig. 7. In Fig. 7a, when the number of sensors N is fixed at 1000 and R varies from 150 to 500 m, we can observe that our algorithm also outperforms the other three algorithms in the larger detection area, and the CDS sizes of the four algorithms tend to stabilize when the transmission range is large enough. According to Fig. 7b, if we set R = 200 m and vary N from 1000 to 10,000, our algorithm still outperforms the other three algorithms, and the CDS sizes of the algorithms level off as the number of sensors increases, which means that our algorithm is also suitable for dense networks. Comparing results in the 1000×1000 m² detection area. a The average performance of four algorithms when N is fixed at 1000 sensors and R changes from 150 to 500 m.
b The average performance of four algorithms when R = 200 m and N varies between 1000 and 10,000 sensors This paper proposes an approximation algorithm for the MCDS problem in wireless sensor networks. The key idea is to separate the sensors in the CDS into core sensors and supporting sensors. The core sensors dominate the supporting sensors in the CDS as well as some sensors outside the CDS, while the supporting sensors dominate the remaining sensors that are not in the CDS. Simulation results show that the algorithm generates a CDS of smaller size than the state-of-the-art algorithms. CDS: Connected dominating set DS: Dominating set MCDS: Minimum connected dominating set MIS: Maximal independent set J Blum, M Ding, X Cheng, Applications of connected dominating sets in wireless networks. Handb. Comb. Optim. 42, 329–369 (2004). S Cheng, Z Cai, J Li, H Gao, Dataset from big sensory data in wireless sensor networks. IEEE Trans. Knowl. Data Eng. 29(4), 813–827 (2017). J Li, S Cheng, Z Cai, J Yu, C Wang, Y Li, Approximate holistic aggregation in wireless sensor networks. ACM Trans. Sens. Netw. 13(2), 1–24 (2017). H Wu, Z Miao, Y Wang, M Lin, Optimized recognition with few instances based on semantic distance. Vis. Comput. 31(4), 367–375 (2015). H Wu, Z Miao, Y Wang, J Chen, C Ma, T Zhou, Image completion with multi-image based on entropy reduction. Neurocomputing. 159(7), 157–171 (2015). S Guha, S Khuller, Approximation algorithms for connected dominating sets. Algorithmica. 20(4), 374–387 (1998). P-J Wan, KM Alzoubi, O Frieder, in Proceedings IEEE, INFOCOM 2002. Twenty-First Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 3. Distributed construction of connected dominating set in wireless ad hoc networks (IEEE, New York, 2002), pp. 1597–1604. Y Li, MT Thai, F Wang, C-W Yi, P-J Wan, D-Z Du, On greedy construction of connected dominating sets in wireless networks. Wirel. Commun. Mob. Comput. 5(8), 927–932 (2005). BN Clark, CJ Colbourn, DS Johnson, Unit disk graphs.
Discret. Math. 86(1-3), 165–177 (1990). S Funke, A Kesselman, U Meyer, M Segal, A simple improved distributed algorithm for minimum CDS in unit disk graphs. ACM Trans. Sens. Netw. (TOSN). 2(3), 444–453 (2006). M Min, H Du, X Jia, CX Huang, SC-H Huang, W Wu, Improving construction for connected dominating set with Steiner tree in wireless sensor networks. J. Glob. Optim. 35(1), 111–119 (2006). P-J Wan, L Wang, F Yao, in Distributed Computing Systems, 2008. ICDCS'08. The 28th International Conference On. Two-phased approximation algorithms for minimum CDS in wireless ad hoc networks (IEEE, Hangzhou, 2008), pp. 337–344. R Misra, C Mandal, Minimum connected dominating set using a collaborative cover heuristic for ad hoc sensor networks. IEEE Trans. Parallel Distrib. Syst. 21(3), 292–302 (2010). Q Tang, Y-S Luo, M-Z Xie, P Li, Connected dominating set construction algorithm for wireless networks based on connected subset. J. Commun. 11(1), 50–57 (2016). N Al-Nabhan, B Zhang, X Cheng, M Al-Rodhaan, A Al-Dhelaan, Three connected dominating set algorithms for wireless sensor networks. Int. J. Sens. Netw. 21(1), 53–66 (2016). D-Z Du, P-J Wan, Connected dominating set: theory and applications. Springer Sci. Bus. Media. 77 (2012). J Yu, N Wang, G Wang, D Yu, Connected dominating sets in wireless ad hoc and sensor networks–a comprehensive survey. Comput. Commun. 36(2), 121–134 (2013). L Ruan, H Du, X Jia, W Wu, Y Li, K-I Ko, A greedy approximation for minimum connected dominating sets. Theor. Comput. Sci. 329(1-3), 325–330 (2004). D Fu, L Han, L Liu, Q Gao, Z Feng, An efficient centralized algorithm for connected dominating set on wireless networks. Procedia Comput. Sci. 56, 162–167 (2015). X Cheng, M Ding, D Chen, in Proc. of International Workshop on Theoretical Aspects of Wireless Ad Hoc, Sensor, and Peer-to-Peer Networks (TAWN), vol. 2. An approximation algorithm for connected dominating set in ad hoc networks (Washington, 2004).
H Du, W Wu, Q Ye, D Li, W Lee, X Xu, CDS-based virtual backbone construction with guaranteed routing cost in wireless sensor networks. IEEE Trans. Parallel Distrib. Syst. 24(4), 652–661 (2013). W Wang, B Liu, D Kim, D Li, J Wang, W Gao, A new constant factor approximation to construct highly fault-tolerant connected dominating set in unit disk graph. IEEE/ACM Trans. Netw. (TON). 25(1), 18–28 (2017). Y Shi, Z Zhang, Y Mo, D-Z Du, Approximation algorithm for minimum weight fault-tolerant virtual backbone in unit disk graphs. IEEE/ACM Trans. Netw. 25(2), 925–933 (2017). D Li, X Jia, H Liu, Energy efficient broadcast routing in static ad hoc wireless networks. IEEE Trans. Mob. Comput. 3(2), 144–151 (2004). Y Hong, D Bradley, D Kim, D Li, AO Tokuta, Z Ding, Construction of higher spectral efficiency virtual backbone in wireless networks. Ad. Hoc. Netw. 25, 228–236 (2015). T Shi, S Cheng, Z Cai, Y Li, J Li, Exploring connected dominating sets in energy harvest networks. IEEE/ACM Trans. Netw. (TON). 25(3), 1803–1817 (2017). T Shi, S Cheng, J Li, Z Cai, in The 36th Annual IEEE International Conference on Computer Communications INFOCOM. Constructing connected dominating sets in battery-free networks (IEEE, Atlanta, 2017), pp. 1–9. J Yu, W Li, X Cheng, M Atiquzzaman, H Wang, L Feng, Connected dominating set construction in cognitive radio networks. Pers. Ubiquit. Comput. 20(5), 757–769 (2016). J Yu, N Wang, G Wang, Constructing minimum extended weakly-connected dominating sets for clustering in ad hoc networks. J. Parallel Distrib. Comput. 72(1), 35–47 (2012). S Cheng, Z Cai, J Li, Curve query processing in wireless sensor networks. IEEE Trans. Veh. Technol. 64(11), 5198–5209 (2015). Z He, Z Cai, S Cheng, X Wang, Approximate aggregation for tracking quantiles and range countings in wireless sensor networks. Theor. Comput. Sci. 607, 381–390 (2015).
C Luo, Y Wang, J Yu, W Chen, D Li, in International Conference on Wireless Algorithms, Systems, and Applications (WASA), Guilin, China. A new greedy algorithm for constructing the minimum size connected dominating sets in wireless networks (Springer, Cham, 2017), pp. 109–114. This work was supported in part by the National Natural Science Foundation of China under Grants 11671400 and 61672524; the Fundamental Research Funds for the Central Universities; and the Research Funds of Renmin University of China, 2015030273. School of Information, Renmin University of China, Zhongguancun Road, Beijing, 100872, People's Republic of China Chuanwen Luo, Wenping Chen, Yongcai Wang & Deying Li School of Information Science and Engineering, Qufu Normal University, Rizhao, Shandong, 276826, People's Republic of China Jiguo Yu Chuanwen Luo Wenping Chen Yongcai Wang Deying Li CWL has contributed towards the algorithms, the analysis, and the simulations, and has written the paper. DYL has contributed towards the algorithms and the analysis. As the supervisor of CWL, she has proofread the paper several times and provided guidance throughout the preparation of the manuscript. WPC, JGY, and YCW have revised the equations, helped write the introduction and the related works, and critically revised the paper. All authors read and approved the final manuscript. Correspondence to Deying Li. Luo, C., Chen, W., Yu, J. et al. A novel centralized algorithm for constructing virtual backbones in wireless sensor networks. J Wireless Com Network 2018, 55 (2018). https://doi.org/10.1186/s13638-018-1068-7 Virtual backbone Unit disk graph
Interpretation of directional derivative without unit vector I understand that computing the directional derivative using a unit vector (of a vector, say $\vec{a}$) gives the slope (or rate of change of the function) in the direction of the vector. I have three questions: If I use the vector itself rather than its unit vector, what will I get when I compute its dot product with the gradient of the function? It wouldn't give the slope of the curve (formed by slicing the function with the plane containing the vector $\vec{a}$), would it? Note: The function is scalar. Also, going by its formal definition: $\displaystyle \nabla _{\mathbf {v}}{f}({\mathbf {x}})=\lim _{h\rightarrow 0}{\frac {f({\mathbf {x}}+h{\mathbf {v}})-f({\mathbf {x}})}{h}}$ where $\mathbf {v}$ is a vector. Quoting from Wikipedia: This definition is valid in a broad range of contexts, for example where the norm of a vector (and hence a unit vector) is undefined. Also quoting from Wikipedia: If the function f is differentiable at x, then the directional derivative exists along any vector v, and one has $\displaystyle \nabla _{\mathbf {v} }{f}({\mathbf {x} })=\nabla f({\mathbf {x} })\cdot {\mathbf {v} }$ Intuitively, the directional derivative of f at a point x represents the rate of change of f, in the direction of v with respect to time, when moving past x. Why is it mentioned with respect to time? Isn't it with respect to the change in x (or/and y) in the direction of the vector? multivariable-calculus – paulplusx If you define $\nabla_x f(x_0)=\lim_{h \to 0^+} \frac{f(x_0+hx)-f(x_0)}{h}$, then you have the identity $\nabla_x f(x_0)=\| x \| \nabla_{x/\| x \|} f(x_0)$. (I will remark that this notation clashes with notation elsewhere in math, but I will stick with it here.) That is, the derivative "along $x$" is the directional derivative multiplied by the norm of $x$.
In effect, instead of just moving in a direction and measuring the change in $f$ relative to the distance you traveled in that direction, you are moving in a direction at a particular rate in time and measuring the change in $f$ relative to that change in time. The speed is the conversion factor between these measurements. This definition of $\nabla_x$ doesn't depend on there being such a thing as the norm of $x$, whereas the directional derivative does. But for your purposes you can ignore this remark for now. I said this in the first paragraph, but just to directly address your third question, let me add one more thing. The directional derivative does not really have a notion of time; it is really a change in $f$ with respect to distance traveled in the specified direction. Your generalized notion $\nabla_x$ effectively involves time after you identify $\| x \|$ as a speed and $h$ as a time, so that $hx$ is a displacement and $h \| x \|$ is a length. Thanks. Can you help me with my first question? – paulplusx Aug 2 '18 at 11:44 @paulplusx Read the first sentence. – Ian Aug 2 '18 at 12:49 I am sorry but I am not able to understand (with the given explanation) what the exact output of the dot product of the gradient and the vector is. Could you provide a more intuitive/geometrical answer supported by a bit of maths? (As simple as possible.) – paulplusx Aug 2 '18 at 12:59 @paulplusx It is the directional derivative in the direction of $x$, multiplied by $\| x \|$. The point is that you go $\| x\|$ times further from the start point than you would to estimate the directional derivative for a given $h$, but then you divide by the same $h$, so the ratio is about $\| x\|$ times bigger (which becomes exact for $h \to 0$). – Ian Aug 2 '18 at 13:04 @john Yes, I was lazy about the notation. – Ian Aug 11 '20 at 17:19 "This definition is valid in a broad range of contexts, for example where the norm of a vector (and hence a unit vector) is undefined. What does that mean?" Not all vector spaces have a defined inner product or norm. But, returning to first principles, we can still define a directional derivative. "Intuitively, the directional derivative of f at a point x represents the rate of change of f, in the direction of v with respect to time, when moving past x. Why is it mentioned with respect to time? Isn't it with respect to the change in x (or/and y) in the direction of the vector?" If we are looking at the changes in $f(\bf{x})$ as $\bf x$ traverses some path, we may parameterize that path, and we might like to think of that parameter as "time." – Doug M Understood. Can you help me with my first question (which is the subject of the question) about the non-unit vector dot product with the gradient? – paulplusx Aug 2 '18 at 11:45 It is always better to understand vectors and their derivatives in some context taken from physics. The directional derivative of distance w.r.t. time gives you velocity in the respective direction (like the x or y axis/direction). It is a differentiation w.r.t. time. Also, the vector remains a vector after this operation (both distance and velocity have components on the axes in space). The gradient of voltage (where we differentiate w.r.t. distance) gives you the electric field in a particular direction. Here, the operation converts a scalar to a vector. Though voltage depends on the position in space, it has a value but no direction. (Just as it is hotter closer to a furnace, there is higher voltage closer to a positive charge.) So, the two derivatives are not the same and are used for different reasons, best understood in a given context (because most often math is a means to an end, and to what end?... right?).
Answers to your questions: 1) The dot product is of (vector, gradient of the function)... please note that you can't compute the gradient of a vector. 2) The 'h' mentioned must be an infinitesimal time being multiplied with 'v' (velocity in the x direction). Hence, it is indeed differentiation w.r.t. time. 3) This is answered in 2. – Jayanth I meant a scalar function. I have added a note for it now. – paulplusx Aug 2 '18 at 17:00 The following definition should explain the directional derivative of a non-unit vector; we can call it the general directional derivative. It is similar to the 'unit vector' definition of the directional derivative given in most intro-calculus textbooks, but scaled by the magnitude of the vector. $$\displaystyle \nabla _{\mathbf {v} }{f}({\mathbf {x} })= \nabla f({\mathbf {x} })\cdot {\mathbf {v} } = \nabla f({\mathbf {x} })\cdot \left( {\frac{\mathbf {v}}{\| \mathbf{v} \| } \|\mathbf{ v}\| }\right) = \|\mathbf{ v}\| ~\nabla f({\mathbf {x} })\cdot \left( {\frac{\mathbf {v}}{\| \mathbf{v} \| } } \right)$$ In the case that $\mathbf {v}$ is already a unit vector, this reduces to the usual directional derivative, since $\|\mathbf{v}\|=1$.
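The scaling identity discussed in the answers, $\nabla_{\mathbf v}f = \|\mathbf v\|\,\nabla f\cdot(\mathbf v/\|\mathbf v\|)$, can be checked numerically. Below is a small sketch (Python with NumPy; the function and points are arbitrary examples, not from the question) comparing the finite-difference derivative along a non-unit vector with $\|\mathbf v\|$ times the unit directional derivative:

```python
import numpy as np

def derivative_along(f, x0, v, h=1e-6):
    # finite-difference approximation of the limit (f(x0 + h v) - f(x0)) / h
    return (f(x0 + h * v) - f(x0)) / h

f = lambda x: x[0] ** 2 + 3 * x[0] * x[1]   # an arbitrary scalar function
x0 = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])                    # non-unit vector, ||v|| = 5
u = v / np.linalg.norm(v)                   # unit vector in the same direction

dv = derivative_along(f, x0, v)             # derivative "along v"
du = derivative_along(f, x0, u)             # usual (unit) directional derivative
# derivative along v = ||v|| * (unit directional derivative)
assert abs(dv - np.linalg.norm(v) * du) < 1e-3
```

For these particular values the gradient is $(2x+3y,\,3x)=(8,3)$ at $(1,2)$, so $\nabla f\cdot \mathbf v = 36$ while $\nabla f\cdot \mathbf u = 7.2$.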
Flat (geometry) In geometry, a flat or Euclidean subspace is a subset of a Euclidean space that is itself a Euclidean space (of lower dimension). The flats in two-dimensional space are points and lines, and the flats in three-dimensional space are points, lines, and planes. "Euclidean subspace" redirects here. For a subspace that contains the zero vector or a fixed origin, see Linear subspace. In an n-dimensional space, there are flats of every dimension from 0 to n − 1;[1] flats of dimension n − 1 are called hyperplanes. Flats are the affine subspaces of Euclidean spaces, which means that they are similar to linear subspaces, except that they need not pass through the origin. Flats occur in linear algebra, as geometric realizations of solution sets of systems of linear equations. A flat is a manifold and an algebraic variety, and is sometimes called a linear manifold or linear variety to distinguish it from other manifolds or varieties. Descriptions By equations A flat can be described by a system of linear equations. For example, a line in two-dimensional space can be described by a single linear equation involving x and y: $3x+5y=8.$ In three-dimensional space, a single linear equation involving x, y, and z defines a plane, while a pair of linear equations can be used to describe a line. In general, a linear equation in n variables describes a hyperplane, and a system of linear equations describes the intersection of those hyperplanes. Assuming the equations are consistent and linearly independent, a system of k equations describes a flat of dimension n − k. Parametric A flat can also be described by a system of linear parametric equations.
A line can be described by equations involving one parameter: $x=2+3t,\;\;\;\;y=-1+t,\;\;\;\;z={\frac {3}{2}}-4t$ while the description of a plane would require two parameters: $x=5+2t_{1}-3t_{2},\;\;\;\;y=-4+t_{1}+2t_{2},\;\;\;\;z=5t_{1}-3t_{2}.\,\!$ In general, a parameterization of a flat of dimension k would require parameters t1, …, tk. Operations and relations on flats Intersecting, parallel, and skew flats An intersection of flats is either a flat or the empty set.[2] If each line from one flat is parallel to some line from another flat, then these two flats are parallel. Two parallel flats of the same dimension either coincide or do not intersect; they can be described by two systems of linear equations which differ only in their right-hand sides. If flats do not intersect, and no line from the first flat is parallel to a line from the second flat, then these are skew flats. This is possible only if the sum of their dimensions is less than the dimension of the ambient space. Join For two flats of dimensions k1 and k2 there exists the minimal flat which contains them, of dimension at most k1 + k2 + 1. If two flats intersect, then the dimension of the containing flat equals k1 + k2 minus the dimension of the intersection. Properties of operations These two operations (referred to as meet and join) make the set of all flats in the Euclidean n-space a lattice and can build systematic coordinates for flats in any dimension, leading to Grassmann coordinates or dual Grassmann coordinates. For example, a line in three-dimensional space is determined by two distinct points or by two distinct planes. However, the lattice of all flats is not a distributive lattice. If two lines ℓ1 and ℓ2 intersect, then ℓ1 ∩ ℓ2 is a point. If p is a point not lying on the same plane, then (ℓ1 ∩ ℓ2) + p = (ℓ1 + p) ∩ (ℓ2 + p), both representing a line. But when ℓ1 and ℓ2 are parallel, this distributivity fails, giving p on the left-hand side and a third parallel line on the right-hand side.
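As an illustration of the "n − k" count above, the following sketch (Python with NumPy; the particular equations are made up for the example) computes the dimension of a flat from the rank of the coefficient matrix of a consistent linear system:

```python
import numpy as np

# Two linearly independent equations in R^3:
#   3x + 5y      = 8
#    x -  y + z  = 2
# Their common solution set is a flat of dimension n - k = 3 - 2 = 1 (a line).
A = np.array([[3.0, 5.0, 0.0],
              [1.0, -1.0, 1.0]])

n = A.shape[1]                    # number of variables
k = np.linalg.matrix_rank(A)      # number of independent equations
flat_dim = n - k
assert flat_dim == 1
```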
Euclidean geometry The aforementioned facts do not depend on the structure being that of Euclidean space (namely, involving Euclidean distance) and are correct in any affine space. In a Euclidean space: • There is the distance between a flat and a point. (See for example Distance from a point to a plane and Distance from a point to a line.) • There is the distance between two flats, equal to 0 if they intersect. (See for example Distance between two lines (in the same plane) and Skew lines § Distance.) • There is the angle between two flats, which belongs to the interval [0, π/2] between 0 and the right angle. (See for example Dihedral angle (between two planes). See also Angles between flats.) See also • N-dimensional space • Matroid • Coplanarity • Isometry Notes 1. In addition, a whole n-dimensional space, being a subset of itself, may also be considered as an n-dimensional flat. 2. The empty set can be considered as a −1-flat. References • Heinrich Guggenheimer (1977) Applicable Geometry, page 7, Krieger, New York. • Stolfi, Jorge (1991), Oriented Projective Geometry, Academic Press, ISBN 978-0-12-672025-9. From the original Stanford Ph.D. dissertation, Primitives for Computational Geometry, available as DEC SRC Research Report 36 Archived 2021-10-17 at the Wayback Machine. External links • Weisstein, Eric W. "Hyperplane". MathWorld. • Weisstein, Eric W. "Flat". MathWorld.
\begin{document} \title[Hypersurfaces of prescribed mean curvature]{Hypersurfaces of prescribed mean curvature in Lorentzian manifolds} \author{Claus Gerhardt} \address{Ruprecht-Karls-Universit\"at, Institut f\"ur Angewandte Mathematik, Im Neuenheimer Feld 294, 69120 Heidelberg, Germany} \email{[email protected]} \subjclass{} \keywords{Prescribed mean curvature, Lorentz manifold} \date{April 12, 1999} \begin{abstract} We give a new existence proof for closed hypersurfaces of prescribed mean curvature in Lorentzian manifolds. \end{abstract} \maketitle \tableofcontents \setcounter{section}{-1} \section{Introduction} Hypersurfaces of prescribed mean curvature especially those with constant mean curvature play an important role in general relativity. In \cite{cg83} the existence of closed hypersurfaces of prescribed mean curvature in a globally hyperbolic Lorentz manifold with a compact Cauchy hypersurface was proved provided there were barriers. The proof consisted of two parts, the a priori estimates for the gradient and the application of a fixed point theorem. That latter part of the proof was rather complicated, and certainly nobody would have qualified it as elegant. Ecker and Huisken, therefore, gave another existence proof using an evolutionary approach, but they had to assume that the time-like convergence condition is satisfied, and, even more important, that the prescribed mean curvature satisfies a structural monotonicity condition, cf. \cite{eh91}. These are serious restrictions which had to be assumed because the authors relied on the gradient estimate of Bartnik \cite{rb84}, who had proved another a priori estimate in the elliptic case. We shall show in the following that the evolutionary method can be used in the existence proof without any unnecessary restrictions on the curvature of the ambient space or the right-hand side. 
The only difference in the assumptions---relative to our former paper---is that the right-hand side is now supposed to be of class $C^1$, though merely bounded would actually suffice. But this drawback can easily be overcome by approximation. This paper is organized as follows: In \rs{1} we introduce the notations and definitions we rely on. In \rs 2 we look at the curvature flow associated with our problem, and the corresponding evolution equations for the basic geometric quantities of the flow hypersurfaces. In \rs 3 lower order estimates for the evolution problem are proved, while a priori estimates in the $C^2$\nobreakdash-\hspace{0pt}{norm} are derived in \rs 4. Finally, in \rs 5, we demonstrate that the evolutionary solution converges to a stationary solution. \section{Notations and definitions}\las{1} The main objective of this section is to state the equations of Gau{\ss}, Codazzi, and Weingarten for hypersurfaces $M$ in a \di{(n+1)} Lorentzian space $N$. Geometric quantities in $N$ will be denoted by $(\bar g_{\alpha\beta}),(\riema \alpha\beta\gamma\delta )$, etc., and those in $M$ by $(g_{ij}), (\riem ijkl)$, etc. Greek indices range from $0$ to $n$ and Latin from $1$ to $n$; the summation convention is always used. Generic coordinate systems in $N$ resp. $M$ will be denoted by $(x^\alpha)$ resp. $(\xi^i)$. Covariant differentiation will simply be indicated by indices; only in case of possible ambiguity they will be preceded by a semicolon, i.e. for a function $u$ in $N$, $(u_\alpha)$ will be the gradient and $(u_{\alpha\beta})$ the Hessian, but e.g., the covariant derivative of the curvature tensor will be abbreviated by $\riema \alpha\beta\gamma{\delta ;\epsilon}$. We also point out that \begin{equation} \riema \alpha\beta\gamma{\delta ;i}=\riema \alpha\beta\gamma{\delta ;\epsilon}x_i^\epsilon \end{equation} with obvious generalizations to other quantities. Let $M$ be a \textit{space-like} hypersurface, i.e.
the induced metric is Riemannian, with a differentiable normal $\nu$ that is time-like. In local coordinates, $(x^\alpha)$ and $(\xi^i)$, the geometric quantities of the space-like hypersurface $M$ are connected through the following equations \begin{equation}\lae{1.3} x_{ij}^\alpha= h_{ij}\nu^\alpha \end{equation} the so-called \textit{Gau{\ss} formula}. Here, and also in the sequel, a covariant derivative is always a \textit{full} tensor, i.e. \begin{equation} x_{ij}^\alpha=x_{,ij}^\alpha-\ch ijk x_k^\alpha+\cha \beta\gamma\alpha x_i^\beta x_j^\gamma. \end{equation} The comma indicates ordinary partial derivatives. In this implicit definition the \textit{second fundamental form} $(h_{ij})$ is taken with respect to $\nu$. The second equation is the \textit{Weingarten equation} \begin{equation} \nu_i^\alpha=h_i^k x_k^\alpha, \end{equation} where we remember that $\nu_i^\alpha$ is a full tensor. Finally, we have the \textit{Codazzi equation} \begin{equation} h_{ij;k}-h_{ik;j}=\riema\alpha\beta\gamma\delta \nu^\alpha x_i^\beta x_j^\gamma x_k^\delta \end{equation} and the \textit{Gau{\ss} equation} \begin{equation} \riem ijkl=- \{h_{ik}h_{jl}-h_{il}h_{jk}\} + \riema \alpha\beta\gamma\delta x_i^\alpha x_j^\beta x_k^\gamma x_l^\delta . \end{equation} Now, let us assume that $N$ is a globally hyperbolic Lorentzian manifold with a \textit{compact} Cauchy surface. $N$ is then a topological product $\R[]\times \protect\mathcal S_0$, where $\protect\mathcal S_0$ is a compact Riemannian manifold, and there exists a Gaussian coordinate system $(x^\alpha)$, such that the metric in $N$ has the form \begin{equation}\lae{1.7} d\bar s_N^2=e^{2\psi}\{-{dx^0}^2+\sigma_{ij}(x^0,x)dx^idx^j\}, \end{equation} where $\sigma_{ij}$ is a Riemannian metric, $\psi$ a function on $N$, and $x$ an abbreviation for the space-like components $(x^i)$, see \cite{GR}, \cite[p.~212]{HE}, \cite[p.~252]{GRH}, and \cite[Section~6]{cg83}. 
We also assume that the coordinate system is \textit{future oriented}, i.e. the time coordinate $x^0$ increases on future directed curves. Hence, the \textit{contravariant} time-like vector $(\xi^\alpha)=(1,0,\dotsc,0)$ is future directed as is its \textit{covariant} version $(\xi_\alpha)=e^{2\psi}(-1,0,\dotsc,0)$. Let $M=\graph \fv u{\protect\mathcal S_0}$ be a space-like hypersurface \begin{equation} M=\set{(x^0,x)}{x^0=u(x),\,x\in\protect\mathcal S_0}, \end{equation} then the induced metric has the form \begin{equation} g_{ij}=e^{2\psi}\{-u_iu_j+\sigma_{ij}\} \end{equation} where $\sigma_{ij}$ is evaluated at $(u,x)$, and its inverse $(g^{ij})=(g_{ij})^{-1}$ can be expressed as \begin{equation}\lae{1.10} g^{ij}=e^{-2\psi}\{\sigma^{ij}+\frac{u^i}{v}\frac{u^j}{v}\}, \end{equation} where $(\sigma^{ij})=(\sigma_{ij})^{-1}$ and \begin{equation}\lae{1.11} \begin{aligned} u^i&=\sigma^{ij}u_j\\ v^2&=1-\sigma^{ij}u_iu_j\equiv 1-\abs{Du}^2. \end{aligned} \end{equation} Hence, $\graph u$ is space-like if and only if $\abs{Du}<1$. The covariant form of a normal vector of a graph looks like \begin{equation} (\nu_\alpha)=\pm v^{-1}e^{\psi}(1, -u_i), \end{equation} and the contravariant version is \begin{equation} (\nu^\alpha)=\mp v^{-1}e^{-\psi}(1, u^i). \end{equation} Thus, we have \begin{rem} Let $M$ be a space-like graph in a future oriented coordinate system. Then, the contravariant future directed normal vector has the form \begin{equation} (\nu^\alpha)=v^{-1}e^{-\psi}(1, u^i) \end{equation} and the past directed \begin{equation}\lae{1.15} (\nu^\alpha)=-v^{-1}e^{-\psi}(1, u^i). \end{equation} \end{rem} In the Gau{\ss} formula \re{1.3} we are free to choose the future or past directed normal, but we stipulate that we always use the past directed normal for reasons that we have explained in \cite{cg5}. 
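As a consistency check, which we add for the reader's convenience, one can verify \re{1.10} directly:

```latex
% Added verification of \re{1.10}: the conformal factors e^{\pm 2\psi} cancel, and
\begin{equation}
g^{ik}g_{kj}=\{\sigma^{ik}+\tfrac{u^iu^k}{v^2}\}\{-u_ku_j+\sigma_{kj}\}
=\delta ^i_j+u^iu_j\,\tfrac{1-\abs{Du}^2-v^2}{v^2}=\delta ^i_j,
\end{equation}
% since v^2 = 1 - \abs{Du}^2 by \re{1.11}.
```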
Look at the component $\alpha=0$ in \re{1.3} and obtain in view of \re{1.15} \begin{equation}\lae{1.16} e^{-\psi}v^{-1}h_{ij}=-u_{ij}-\cha 000\mspace{1mu}u_iu_j-\cha 0j0 \mspace{1mu}u_i-\cha 0i0\mspace{1mu}u_j-\cha ij0. \end{equation} Here, the covariant derivatives are taken with respect to the induced metric of $M$, and \begin{equation} -\cha ij0=e^{-\psi}\bar h_{ij}, \end{equation} where $(\bar h_{ij})$ is the second fundamental form of the hypersurfaces $\{x^0=\textup{const}\}$. An easy calculation shows \begin{equation} \bar h_{ij}e^{-\psi}=-\tfrac{1}{2}\dot\sigma_{ij} -\dot\psi\sigma_{ij}, \end{equation} where the dot indicates differentiation with respect to $x^0$. Next, let us analyze under which condition a space-like hypersurface $M$ can be written as a graph over the Cauchy hypersurface $\protect\mathcal S_0$. We first need \begin{definition} Let $M$ be a closed, space-like hypersurface in $N$. Then, \bi[(i)] \item $M$ is said to be \textit{achronal}, if no two points in $M$ can be connected by a future directed time-like curve. \item $M$ is said to \textit{separate} $N$, if $N\raise 0.28ex\hbox{$\scriptstyle\setminus$} M$ is disconnected. \end{enumerate} \end{definition} In \cite[Proposition 2.5]{cg5} we proved \begin{prop}\lap{1.5} Let $N$ be connected and globally hyperbolic, $\protect\mathcal S_0\nobreak\ \su\nobreak\ N$ a compact Cauchy hypersurface, and $M\su N$ a compact, connected space-like hypersurface of class $C^m, m\ge 1$. Then, $M=\graph \fu u{\protect\mathcal S}0$ with $u\in C^m(\protect\mathcal S_0)$ iff $M$ is achronal. \end{prop} Sometimes, we need a Riemannian reference metric, e.g. if we want to estimate tensors. 
Since the Lorentzian metric can be expressed as \begin{equation} \bar g_{\alpha\beta}dx^\alpha dx^\beta=e^{2\psi}\{-{dx^0}^2+\sigma_{ij}dx^i dx^j\}, \end{equation} we define a Riemannian reference metric $(\tilde g_{\alpha\beta})$ by \begin{equation} \tilde g_{\alpha\beta}dx^\alpha dx^\beta=e^{2\psi}\{{dx^0}^2+\sigma_{ij}dx^i dx^j\} \end{equation} and we abbreviate the corresponding norm of a vector field $\eta$ by \begin{equation} \nnorm \eta=(\tilde g_{\alpha\beta}\eta^\alpha\eta^\beta)^{1/2}, \end{equation} with similar notations for higher order tensors. \section{The evolution problem}\las{2} Let $N$ be a globally hyperbolic Lorentzian manifold with a compact Cauchy hypersurface $\protect\mathcal S_0$. Consider the problem of finding a closed hypersurface of prescribed mean curvature $H$ in $N$, or more precisely, let $\varOmega$ be a connected open subset of $N$, $f\in C^{0,\alpha}(\bar \varOmega)$, then we look for a hypersurface $M\su \varOmega$ such that \begin{equation} \fv HM=f(x)\qquad \forall \,x\in M, \end{equation} where $\fv HM$ means that $H$ is evaluated at the vector $(\kappa_i(x))$ the components of which are the principal curvatures of $M$. We assume that $\pa \varOmega$ consists of two achronal, compact, connected, space-like hypersurfaces $M_1$ and $M_2$, where $M_1$ is supposed to lie in the \textit{past} of $M_2$. The $M_i$ should act as barriers for $(H,f)$. \begin{definition} $M_2$ is an \textit{upper barrier} for $(H,f)$, if $M_2$ is of class $C^{2,\alpha}$ and \begin{equation} \fv H{M_2}\ge f, \end{equation} and $M_1$ is a \textit{lower barrier} for $(H,f)$, if $M_1$ is of class $C^{2,\alpha}$ satisfying \begin{equation} \fv H{M_1}\le f. \end{equation} \end{definition} In \cite[Section 6]{cg83} we proved the following theorem \begin{thm}\lat{2.2} Let $M_1$ be a lower and $M_2$ be an upper barrier for $(H,f)$, $f\in C^{0,\alpha}(\bar \varOmega)$. 
Then, the problem \begin{equation}\lae{2.4} \fv HM=f \end{equation} has a solution $M\su \bar \varOmega$ of class $C^{2,\alpha}$ that can be written as a graph over the Cauchy hypersurface $\protect\mathcal S_0$. \end{thm} The crucial point in the proof is an a priori estimate in the $C^1$-norm, and for this estimate only the boundedness of $f$ is needed, i.e. even for merely bounded $f$, $H^{2,p}$ solutions exist. We want to give a new proof of \rt{2.2} that is based on the evolution method, and to make this method work, we have to assume temporarily slightly higher degrees of regularity for the barriers and right-hand side, i.e. we assume the barriers to be of class $C^{4,\alpha}$ and $f$ to be of class $C^{2,\alpha}$. We can achieve these assumptions by approximation without sacrificing the barrier conditions, cf. \cite[p. 179]{cg97}. To solve \re{2.4} we look at the evolution problem \begin{equation}\lae{2.5} \begin{aligned} \dot x&=(H-f)\nu,\\ x(0)&=x_0, \end{aligned} \end{equation} where $x_0$ is an embedding of an initial hypersurface $M_0$, for which we choose $M_0=M_2$, $H$ is the mean curvature of the flow hypersurfaces $M(t)$ with respect to the past directed normal $\nu$, and $x(t)$ is an embedding of $M(t)$. In \cite{cg5} we have considered problems of the form \re{2.5} for general curvature operators in a pseudo-Riemannian setting, so that the present situation can be retrieved as a special case of the general results in \cite[Section 3]{cg5}. The evolution exists on a maximal time interval $[0,T^*)$, $0<T^*\le \infty$, cf. \cite[Section 2]{cg96}, where we apologize for the ambiguity of also calling the evolution parameter \textit{time}. Next, we want to show how the metric, the second fundamental form, and the normal vector of the hypersurfaces $M(t)$ evolve. All time derivatives are \textit{total} derivatives. 
We refer to \cite{cg5} for more general results and to \cite[Section 3]{cg96}, where proofs are given in a Riemannian setting, but these proofs are also valid in a Lorentzian environment. \begin{lem} The metric, the normal vector, and the second fundamental form of $M(t)$ satisfy the evolution equations \begin{equation} \dot g_{ij}=2(H- f)h_{ij}, \end{equation} \begin{equation}\lae{2.7} \dot \nu=\nabla_M(H- f)=g^{ij}(H- f)_i x_j, \end{equation} and \begin{equation}\lae{2.8} \dot h_i^j=(H- f)_i^j- (H- f) h_i^k h_k^j-(H- f) \riema \alpha\beta\gamma\delta \nu^\alpha x_i^\beta \nu^\gamma x_k^\delta g^{kj} \end{equation} \begin{equation} \dot h_{ij}=(H- f)_{ij}+ (H- f) h_i^k h_{kj}-(H- f) \riema \alpha\beta\gamma\delta \nu^\alpha x_i^\beta \nu^\gamma x_j^\delta . \end{equation} \end{lem} \begin{lem}[Evolution of $(H- f)$] The term $(H- f)$ evolves according to the equation \begin{equation}\lae{2.10} \begin{aligned} {(H- f)}^\prime- \varDelta (H- f)=&\msp[0]- \norm A^2(H- f)-f_\alpha\nu^\alpha (H- f)\\ &-\bar R_{\alpha\beta}\nu^\alpha\nu^\beta (H- f), \end{aligned} \end{equation} where \begin{equation} (H- f)^{\prime}=\frac{d}{dt}(H- f) \end{equation} and \begin{equation} \norm A^2=h_{ij}h^{ij}. 
\end{equation} \end{lem} From \re{2.8} we deduce with the help of the Ricci identities a parabolic equation for the second fundamental form \begin{lem}\lal{2.7} The mixed tensor $h_i^j$ satisfies the parabolic equation \begin{equation}\raisetag{-58pt}\lae{2.13} \begin{aligned} \dot h_i^j-\varDelta h_i^j&=-\norm A^2h_i^j+f h_i^kh_k^j - f_{\alpha\beta} x_i^\alpha x_k^\beta g^{kj}- f_\alpha\nu^\alpha h_i^j\\ &\quad\,+2\riema \alpha\beta\gamma\delta x_m^\alpha x_i ^\beta x_k^\gamma x_r^\delta h^{km} g^{rj}\\ &\quad\,-g^{kl}\riema \alpha\beta\gamma\delta x_m^\alpha x_k ^\beta x_r^\gamma x_l^\delta h_i^m g^{rj}- g^{kl}\riema \alpha\beta\gamma\delta x_m^\alpha x_k ^\beta x_i^\gamma x_l^\delta h^{mj} \\ &\quad\,-\bar R_{\alpha\beta}\nu^\alpha\nu^\beta h_i^j+f\riema \alpha\beta\gamma\delta \nu^\alpha x_i^\beta\nu^\gamma x_m^\delta g^{mj}\\ &\quad\,+ g^{kl}\bar R_{\alpha\beta\gamma\delta ;\epsilon}\{\nu^\alpha x_k^\beta x_l^\gamma x_i^\delta x_m^\epsilon g^{mj}+\nu^\alpha x_i^\beta x_k^\gamma x_m^\delta x_l^\epsilon g^{mj}\}. \end{aligned} \end{equation} \end{lem} \begin{rem}\lar{2.6} In view of the maximum principle, we immediately deduce from \re{2.10} that the term $(H-f)$ has a sign during the evolution if it has one at the beginning. Thus, we have \begin{equation}\lae{2.14} H\ge f. \end{equation} \end{rem} \section{Lower order estimates}\las 3 We recall our assumption that the ambient space is globally hyperbolic with a compact Cauchy hypersurface ${\protect\mathcal S_0}$. The barriers $M_i$ are then graphs over ${\protect\mathcal S_0}, M_i=\graph u_i$, because they are achronal, cf. \rp{1.5}, and we have \begin{equation}\lae{3.1} u_1\le u_2, \end{equation} for $M_1$ should lie in the past of $M_2$, and the enclosed domain is supposed to be connected. Moreover, in view of the Harnack inequality, the strict inequality is valid in \re{3.1} unless the barriers coincide and are a solution to our problem. 
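Before deriving the lower order estimates, let us record why the sign in \re{2.14} persists along the flow; the following sketch is added for the reader's convenience and uses only \re{2.10} and the choice of the initial hypersurface:

```latex
% Added sketch: set w = H - f and
% a = -\norm A^2 - f_\alpha\nu^\alpha - \bar R_{\alpha\beta}\nu^\alpha\nu^\beta;
% then \re{2.10} is the linear equation
\begin{equation}
\dot w-\varDelta w=a\,w,\qquad w(0)=\fv H{M_2}-f\ge 0,
\end{equation}
% where the initial inequality holds because M_0 = M_2 is an upper barrier; the
% parabolic maximum principle preserves the sign of w, i.e. H\ge f for all t.
```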
Let us look at the evolution equation \re{2.5} with initial hypersurface $M_0$ equal to $M_2$. Then, because of the short-time existence, the evolution will exist on a maximal time interval $I=[0,T^*), \,T^*\le \infty$, as long as the evolving hypersurfaces are space-like and smooth. Furthermore, since the initial hypersurface is a graph over ${\protect\mathcal S_0}$, we can write \begin{equation} M(t)=\graph\fu{u(t)}S0\quad \forall\,t\in I, \end{equation} where $u$ is defined in the cylinder $Q_{T^*}=I\times {\protect\mathcal S_0}$. We then deduce from \re{2.5}, looking at the component $\alpha=0$, that $u$ satisfies a parabolic equation of the form \begin{equation}\lae{3.3} \dot u=-e^{-\psi}v^{-1}(H-f), \end{equation} where we use the notations in \rs{1}, and where we emphasize that the time derivative is a total derivative, i.e. \begin{equation}\lae{3.4} \dot u=\pde ut+u_i\dot x^i. \end{equation} Since the past directed normal can be expressed as \begin{equation} (\nu^\alpha)=-e^{-\psi}v^{-1}(1,u^i), \end{equation} we conclude from \re{2.5}, \re{3.3}, and \re{3.4} \begin{equation} \lae{3.6} \pde ut=-e^{-\psi}v(H- f). \end{equation} Thus, $\pde ut$ is non-positive in view of \rr{2.6}. Next, let us state our first a priori estimate \begin{lem}\lal{3.1} During the evolution the flow hypersurfaces stay in $\bar \varOmega$. \end{lem} This is an immediate consequence of the Harnack inequality, cf. \cite[Lemma 5.1]{cg96} for details. As a consequence of \rl{3.1} we obtain \begin{equation} \inf_{{\protect\mathcal S_0}} u_1\le u\le \sup_{\protect\mathcal S_0} u_2\quad \forall\,t\in I. \end{equation} We are now able to derive the $C^1$-estimates, i.e. we shall show that the hypersurfaces remain uniformly space-like, or equivalently, that the term \begin{equation} \tilde v=v^{-1}=\frac{1}{\sql} \end{equation} is uniformly bounded. Let us first derive an evolution equation for $\tilde v$. 
\begin{lem}[Evolution of $\tilde v$]\lal{3.2} Consider the flow \re{2.5} in the distinguished coordinate system associated with ${\protect\mathcal S_0}$. Then, $\tilde v$ satisfies the evolution equation \begin{equation}\lae{3.20} \begin{aligned} \dot{\tilde v}-\varDelta\tilde v=&-\norm A^2\tilde v -f\eta_{\alpha\beta}\nu^\alpha\nu^\beta-f_\beta x_i^\beta \eta_\alpha x_k^\alpha g^{ik}\\ &-2h^{ij} x_i^\alpha x_j^\beta \eta_{\alpha\beta}-g^{ij}\eta_{\alpha\beta\gamma}x_i^\beta x_j^\gamma\nu^\alpha\\ &-\bar R_{\alpha\beta}\nu^\alpha x_k^\beta\eta_\gamma x_l^\gamma g^{kl}, \end{aligned} \end{equation} where $\eta$ is the covariant vector field $(\eta_\alpha)=e^{\psi}(-1,0,\dotsc,0)$. \end{lem} \begin{proof} We have $\tilde v=\spd \eta\nu$. Let $(\xi^i)$ be local coordinates for $M(t)$. Differentiating $\tilde v$ covariantly we deduce \begin{equation}\lae{3.21} \tilde v_i=\eta_{\alpha\beta}x_i^\beta\nu^\alpha+\eta_\alpha\nu_i^\alpha, \end{equation} \begin{equation}\lae{3.22} \begin{aligned} \tilde v_{ij}= &\msp[5]\eta_{\alpha\beta\gamma}x_i^\beta x_j^\gamma\nu^\alpha+\eta_{\alpha\beta}x_{ij}^\beta\nu^\alpha\\ &+\eta_{\alpha\beta}x_i^\beta\nu_j^\alpha+\eta_{\alpha\beta}x_j^\beta\nu_i^\alpha+\eta_\alpha\nu_{ij}^\alpha \end{aligned} \end{equation} The time derivative of $\tilde v$ can be expressed as \begin{equation}\lae{3.23} \begin{aligned} \dot{\tilde v}&=\eta_{\alpha\beta}\msp\dot x^\beta\nu^\alpha+\eta_\alpha\dot\nu^\alpha\\ &=\eta_{\alpha\beta}\nu^\alpha\nu^\beta(H-f)+(H-f)^k x_k^\alpha\eta_\alpha\\ &=\eta_{\alpha\beta}\nu^\alpha\nu^\beta(H-f)+H^k x_k^\alpha\eta_\alpha-{ f}_\beta x_i^\beta x_k^\alpha g^{ik}\eta_\alpha, \end{aligned} \end{equation} where we have used \re{2.7}. Substituting \re{3.22} and \re{3.23} in \re{3.20}, and simplifying the resulting equation with the help of the Weingarten and Codazzi equations, we arrive at the desired conclusion. 
\end{proof} \begin{lem}\lal{3.3} There is a constant $c=c(\varOmega)$ such that for any positive function $0<\epsilon=\epsilon(x)$ on ${\protect\mathcal S_0}$ and any hypersurface $M(t)$ of the flow we have \begin{align} \nnorm \nu&\le c\tilde v,\\\lae{3.14} g^{ij}&\le c\tilde v^2\sigma^{ij},\\ \intertext{and}\lae{3.15} \abs{h^{ij}\eta_{\alpha\beta}x_i^\alpha x_j^\beta}&\le \frac{\epsilon}{2}\norm A^2\tilde v+\frac{c}{2\epsilon}\tilde v^3 \end{align} where $(\eta_\alpha)$ is the vector field in \rl{3.2}. \end{lem} \begin{proof} The first two estimates can be immediately verified. To prove \re{3.15} we choose local coordinates $(\xi^i)$ such that \begin{equation} h_{ij}=\kappa_i\delta _{ij},\qquad g_{ij}=\delta _{ij} \end{equation} and deduce \begin{equation} \begin{aligned} \abs{h^{ij}\eta_{\alpha\beta}x_i^\alpha x_j^\beta}&\le \sum_i\abs{\kappa_i}\abs{\eta_{\alpha\beta}x_i^\alpha x_i^\beta}\\ &\le \frac{\epsilon}{2}\norm A^2\tilde v+\frac{1}{2\epsilon}\tilde v^{-1}\sum_i\abs{\eta_{\alpha\beta} x_i^\alpha x_i^\beta}^2, \end{aligned} \end{equation} and \begin{equation} \sum_i\abs{\eta_{\alpha\beta} x_i^\alpha x_i^\beta}^2\le g^{ik}\eta_{\alpha\beta}x_i^\alpha x_j^\beta \msp[3]g^{jl} \eta_{\gamma\delta } x_k^\gamma x_l^\delta . \end{equation} Hence, the result in view of \re{3.14}. \end{proof} Combining the preceding lemmata we infer \begin{lem}\lal{3.4} There is a constant $c=c(\varOmega)$ such that for any positive function $\epsilon=\epsilon(x)$ on ${\protect\mathcal S_0}$ the term $\tilde v$ satisfies a parabolic inequality of the form \begin{equation} \dot{\tilde v}-\varDelta\tilde v\le -(1-\epsilon)\norm A^2\tilde v+c[\abs f+\nnorm{Df}]\tilde v^2+c[1+\epsilon^{-1}]\tilde v^3. \end{equation} \end{lem} We note that the statement \textit{$c$ depends on $\varOmega$} also implies that $c$ depends on geometric quantities of the ambient space restricted to $\varOmega$. 
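The first estimate in \rl{3.3} can also be checked by hand; the following computation is added for the reader's convenience:

```latex
% Added computation: with the past directed normal (\nu^\alpha)=-v^{-1}e^{-\psi}(1,u^i)
% and the Riemannian reference metric \tilde g_{\alpha\beta},
\begin{equation}
\nnorm\nu^2=\tilde g_{\alpha\beta}\nu^\alpha\nu^\beta
=\tilde v^2\{1+\sigma_{ij}u^iu^j\}
=\tilde v^2\{1+\abs{Du}^2\}\le 2\tilde v^2,
\end{equation}
% since \abs{Du}^2=1-v^2<1; hence \nnorm\nu\le\sqrt 2\,\tilde v.
```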
We further need the following two lemmata \begin{lem}\lal{3.5} Let $M(t)=\graph u(t)$ be the flow hypersurfaces, then we have \begin{equation} \dot u-\varDelta u=e^{-\psi}v^{-1}f-e^{-\psi}g^{ij}\bar h_{ij}+\cha 000\norm{Du}^2+2\cha 0i0 u^i, \end{equation} where the time derivative is a total derivative. \end{lem} \begin{proof} We use the relation \begin{equation} \dot u=-e^{-\psi}v^{-1}(H-f) \end{equation} together with \re{1.16}. \end{proof} \begin{lem}\lal{3.6} Let $M\su \bar \varOmega$ be a graph over ${\protect\mathcal S_0}$, $M=\graph u$, then \begin{equation} \abs{\tilde v_i u^i}\le c\tilde v^3+\norm A e^\psi\norm {Du}^2, \end{equation} where $c=c(\varOmega)$. \end{lem} \begin{proof} First, we use that \begin{equation} \tilde v^2=1+e^{2\psi}\norm{Du}^2, \end{equation} and thus, \begin{equation} 2\tilde v\tilde v_i=2\psi_\alpha x_i^\alpha e^{2\psi} \norm{Du}^2+2e^{2\psi} u_{ij}u^j, \end{equation} from which we infer \begin{equation} \abs{\tilde v_iu^i}\le c\tilde v^3+\tilde v^{-1}e^{2\psi}\abs{u_{ij}u^i u^j}, \end{equation} which gives the result because of \re{1.16}. \end{proof} We are now ready to prove the uniform boundedness of $\tilde v$. \begin{prop} During the evolution the term $\tilde v$ remains uniformly bounded \begin{equation} \tilde v\le c=c(\varOmega,\abs f,\nnorm{Df}). \end{equation} \end{prop} \begin{proof} Let $\mu,\lambda$ be positive constants, where $\mu$ is supposed to be small and $\lambda$ large, and define \begin{equation}\lae{3.27} \varphi=e^{\mu e^{\lambda u}}, \end{equation} where we assume without loss of generality that $1\le u$, otherwise replace in \re{3.27} $u$ by $(u+c)$, $c$ large enough. We shall show that \begin{equation} w=\tilde v \varphi \end{equation} is uniformly bounded if $\mu,\lambda$ are chosen appropriately. 
In view of \rl{3.3} and \rl{3.5} we have \begin{equation} \dot\varphi-\varDelta\varphi\le c\mu\lambda e^{\lambda u}[\tilde v\abs f +\tilde v^2] \varphi-\mu\lambda^2 e^{\lambda u} [1+\mu e^{\lambda u}]\norm{Du}^2\varphi, \end{equation} from which we further deduce taking \rl{3.4} and \rl{3.6} into account \begin{equation} \begin{aligned} \dot w-\varDelta w&\le -(1-\epsilon) \norm A^2\tilde v\varphi +c[\abs f+\nnorm{Df}]\tilde v^2\varphi\\ &\quad\,+c[1+\epsilon^{-1}]\tilde v^3\varphi-\mu\lambda^2 e^{\lambda u} [1+\mu e^{\lambda u}] \tilde v \norm{Du}^2\varphi\\ &\quad\,+c[1+\abs f]\mu\lambda e^{\lambda u}\tilde v^3\varphi+2\mu\lambda e^{\lambda u} \norm A e^\psi \norm{Du}^2\varphi. \end{aligned} \end{equation} We estimate the last term on the right-hand side by \begin{equation} \begin{aligned} 2\mu\lambda e^{\lambda u}\norm A e^\psi\norm{Du}^2\varphi&\le (1-\epsilon)\norm A^2\tilde v\varphi\\ &\quad\,+\frac{1}{1-\epsilon}\mu^2\lambda^2e^{2\lambda u}\tilde v^{-1}e^{2\psi}\norm{Du}^4\varphi, \end{aligned} \end{equation} and conclude \begin{equation} \begin{aligned} \dot w-\varDelta w&\le c[\abs f+\nnorm{Df}]\tilde v^2\varphi+ c[1+\abs f]\mu\lambda e^{\lambda u} \tilde v^3\varphi\\ &\quad\,+c[1+\epsilon^{-1}]\tilde v^3\varphi +[\frac{1}{1-\epsilon}-1]\mu^2\lambda^2 e^{2\lambda u}\norm{Du}^2\tilde v\varphi\\ &\quad\,-\mu\lambda^2 e^{\lambda u}\norm{Du}^2\tilde v\varphi, \end{aligned} \end{equation} where we have used that \begin{equation} e^{2\psi}\norm{Du}^2\le \tilde v^2. \end{equation} Setting $\epsilon=e^{-\lambda u}$, we then obtain \begin{equation}\lae{3.34} \begin{aligned} \dot w-\varDelta w&\le c[\abs f+\nnorm{Df}]\tilde v^2\varphi+c e^{\lambda u} \tilde v^3\varphi\\ &\quad\,+c[1+\abs f]\mu\lambda e^{\lambda u}\tilde v^3\varphi\\ &\quad\,+[\frac{\mu}{1-\epsilon}-1]\mu\lambda^2 e^{\lambda u}\norm{Du}^2\tilde v\varphi. 
\end{aligned} \end{equation} Now, we choose $\mu=\frac{1}{2}$ and $\lambda_0$ so large that \begin{equation} \frac{\mu}{1-e^{-\lambda u}}\le \frac{3}{4}\qquad\forall\,\lambda\ge \lambda_0, \end{equation} and infer that the last term on the right-hand side of \re{3.34} is less than \begin{equation} -\frac{1}{8}\lambda^2e^{\lambda u}\norm{Du}^2\tilde v\varphi \end{equation} which in turn can be estimated from above by \begin{equation} -c\lambda^2e^{\lambda u}\tilde v^3\varphi \end{equation} at points where $\tilde v\ge 2$. Thus, we conclude that for \begin{equation} \lambda\ge \max (\lambda_0, 4[1+\abs f_{_\varOmega}]) \end{equation} the parabolic maximum principle, applied to $w$, yields \begin{equation} w\le \textup{const} (\abs{w(0)}_{_{\protect\mathcal S_0}},\lambda_0,\abs f, \nnorm{Df},\varOmega). \end{equation} \end{proof} \section{$C^2$-estimates}\las{4} Since the mean curvature operator is a quasilinear operator, the uniform $C^1$-estimates we have established in the last section also yield uniform $C^2$-estimates during the evolution, but nevertheless, we would like to give an independent proof of the $C^2$-estimates. \begin{lem}\lal{4.1} During the evolution the principal curvatures of the evolution hypersurfaces $M(t)$ are uniformly bounded. \end{lem} \begin{proof} As already mentioned in \rr{2.6}, we know that $f\le H$, thus, it is sufficient to estimate the principal curvatures from above. Let $\varphi$ be defined by \begin{equation}\lae{4.1} \varphi=\sup\set{{h_{ij}\eta^i\eta^j}}{{\norm\eta=1}}. \end{equation} We claim that $\varphi$ is uniformly bounded. Let $0<T<T^*$, and $x_0=x_0(t_0)$, with $ 0<t_0\le T$, be a point in $M(t_0)$ such that \begin{equation} \sup_{M_0}\varphi<\sup\set {\sup_{M(t)} \varphi}{0<t\le T}=\varphi(x_0). \end{equation} We then introduce a Riemannian normal coordinate system $(\xi^i)$ at $x_0\in M(t_0)$ such that at $x_0=x(t_0,\xi_0)$ we have \begin{equation} g_{ij}=\delta _{ij}\quad \textup{and}\quad \varphi=h_n^n. 
\end{equation} Let $\tilde \eta=(\tilde \eta^i)$ be the contravariant vector field defined by \begin{equation} \tilde \eta=(0,\dotsc,0,1), \end{equation} and set \begin{equation} \tilde \varphi=\frac{h_{ij}\tilde \eta^i\tilde \eta^j}{g_{ij}\tilde \eta^i\tilde \eta^j}\raise 2pt \hbox{.} \end{equation} $\tilde \varphi$ is well defined in a neighbourhood of $(t_0,\xi_0)$, and $\tilde \varphi$ assumes its maximum at $(t_0,\xi_0)$. Moreover, at $(t_0,\xi_0)$ we have \begin{equation} \dot{\tilde \varphi}=\dot h_n^n, \end{equation} and the spatial derivatives also coincide; in short, at $(t_0,\xi_0)$ $\tilde \varphi$ satisfies the same differential equation \re{2.13} as $h_n^n$. For the sake of greater clarity, let us therefore treat $h_n^n$ like a scalar and pretend that $\varphi=h_n^n$. At $(t_0,\xi_0)$ we have $\dot\varphi\ge 0$, and, in view of the maximum principle, we deduce from \rl{2.7} \begin{equation} 0\le -\norm A^2h_n^n+f\abs{h_n^n}^2+c[\abs f+\nnorm{Df}+\nnorm{D^2f}][1+\abs{h_n^n}]. \end{equation} Thus, $\varphi$ is uniformly bounded. \end{proof} \section{Convergence to a stationary solution}\las 5 We are now ready to give a new proof of \rt{2.2}. Let us look at the scalar version of the flow as in \re{3.6} \begin{equation}\lae{5.1} \pde ut=-e^{-\psi}v(H- f). \end{equation} This is a scalar parabolic differential equation defined on the cylinder \begin{equation} Q_{T^*}=[0,T^*)\times {\protect\mathcal S_0} \end{equation} with initial value $u(0)=u_2\in C^{4,\alpha}({\protect\mathcal S_0})$. In view of the a priori estimates, which we have established in the preceding sections, we know that \begin{equation} {\abs u}_\low{2,0,{\protect\mathcal S_0}}\le c \end{equation} and \begin{equation} H\,\textup{is uniformly elliptic in}\,u \end{equation} independent of $t$. Thus, we can apply the known regularity results, see e.g. 
\cite[Chapter 5.5]{nk}, where even more general operators are considered, to conclude that uniform $C^{2,\alpha}$-estimates are valid, leading further to uniform $C^{4,\alpha}$-estimates due to the regularity results for linear operators. Therefore, the maximal time interval is unbounded, i.e. $T^*=\infty$. Now, integrating \re{5.1} with respect to $t$, and observing that the right-hand side is non-positive, yields \begin{equation} u(0,x)-u(t,x)=\int_0^te^{-\psi}v(H- f)\ge c\int_0^t(H- f), \end{equation} i.e., \begin{equation} \int_0^\infty \abs{H- f}<\infty\qquad\forall\msp x\in {\protect\mathcal S_0} \end{equation} Hence, for any $x\in{\protect\mathcal S_0}$ there is a sequence $t_k\rightarrow \infty$ such that $(H- f)\rightarrow 0$. On the other hand, $u(\cdot,x)$ is monotone decreasing and therefore \begin{equation} \lim_{t\rightarrow \infty}u(t,x)=\tilde u(x) \end{equation} exists and is of class $C^{4,\alpha}({\protect\mathcal S_0})$ in view of the a priori estimates. We, finally, conclude that $\tilde u$ is a stationary solution of our problem, and that \begin{equation} \lim_{t\rightarrow \infty}(H- f)=0. \end{equation} To prove existence under the weaker assumptions of \rt{2.2}, we use approximation and the a priori estimate in \cite[Theorem 4.1]{cg83}. \end{document}
Physics Physics is the natural science of matter, involving the study of matter, its fundamental constituents, its motion and behavior through space and time, and the related entities of energy and force.[2] Physics is one of the most fundamental scientific disciplines, with its main goal being to understand how the universe behaves.[3][4][5] A scientist who specializes in the field of physics is called a physicist. Physics is one of the oldest academic disciplines and, through its inclusion of astronomy, perhaps the oldest.[6] Over much of the past two millennia, physics, chemistry, biology, and certain branches of mathematics were a part of natural philosophy, but during the Scientific Revolution in the 17th century these natural sciences emerged as unique research endeavors in their own right. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences[3] and suggest new avenues of research in these and other academic disciplines such as mathematics and philosophy. Advances in physics often enable new technologies. 
For example, advances in the understanding of electromagnetism, solid-state physics, and nuclear physics led directly to the development of new products that have dramatically transformed modern-day society, such as television, computers, domestic appliances, and nuclear weapons;[3] advances in thermodynamics led to the development of industrialization; and advances in mechanics inspired the development of calculus. History The word "physics" originates from Ancient Greek: φυσική (ἐπιστήμη), romanized: physikḗ (epistḗmē), meaning "knowledge of nature".[8][9][10] Ancient astronomy Astronomy is one of the oldest natural sciences. Early civilizations dating back before 3000 BCE, such as the Sumerians, ancient Egyptians, and the Indus Valley Civilisation, had a predictive knowledge and a basic awareness of the motions of the Sun, Moon, and stars. The stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy, as the stars were found to traverse great circles across the sky,[6] though this could not explain the positions of the planets. 
According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy.[11] Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies,[12] while Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey; later Greek astronomers provided names, which are still used today, for most constellations visible from the Northern Hemisphere.[13] Natural philosophy Natural philosophy has its origins in Greece during the Archaic period (650 BCE – 480 BCE), when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause.[14] They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment;[15] for example, atomism was found to be correct approximately 2000 years after it was proposed by Leucippus and his pupil Democritus.[16] Medieval European and Islamic The Western Roman Empire fell in the fifth century, and this resulted in a decline in intellectual pursuits in the western part of Europe. By contrast, the Eastern Roman Empire (also known as the Byzantine Empire) resisted the attacks from the barbarians, and continued to advance various fields of learning, including physics.[17] In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest. In sixth-century Europe John Philoponus, a Byzantine scholar, questioned Aristotle's teaching of physics and noted its flaws. He introduced the theory of impetus. Aristotle's physics was not scrutinized until Philoponus appeared; unlike Aristotle, who based his physics on verbal argument, Philoponus relied on observation. 
On Aristotle's physics Philoponus wrote: But this is completely erroneous, and our view may be corroborated by actual observation more effectively than by any sort of verbal argument. For if you let fall from the same height two weights of which one is many times as heavy as the other, you will see that the ratio of the times required for the motion does not depend on the ratio of the weights, but that the difference in time is a very small one. And so, if the difference in the weights is not considerable, that is, if one is, let us say, double the other, there will be no difference, or else an imperceptible difference, in time, though the difference in weight is by no means negligible, with one body weighing twice as much as the other.[19] Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei ten centuries later,[20] during the Scientific Revolution. Galileo cited Philoponus substantially in his works when arguing that Aristotelian physics was flawed.[21][22] In the 1300s Jean Buridan, a teacher in the faculty of arts at the University of Paris, developed the concept of impetus. It was a step toward the modern ideas of inertia and momentum.[23] Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, especially placing emphasis on observation and a priori reasoning, developing early forms of the scientific method. Although Aristotle's principles of physics were criticized, it is important to identify the evidence on which he based his views. In the history of science and mathematics, the contributions of earlier scientists deserve acknowledgement, and Aristotle's science was the backbone of the science taught in schools today. Aristotle published many biological works, including The Parts of Animals, in which he discusses both biological and natural science. 
Aristotle also played an important role in the progression of physics and metaphysics, and his beliefs and findings are still taught in science classes to this day. The explanations that Aristotle gives for his findings are simple. Aristotle believed that each element (earth, fire, water, air) had its own natural place: because of their densities, the elements revert to their own specific places.[24] Because of their weights, fire would be at the very top, air right underneath fire, then water, and lastly earth. He also stated that when a small amount of one element enters the natural place of another, the less abundant element automatically returns to its own natural place; for example, the flames of a fire on the ground go straight up into the air, back toward their natural place. Aristotle called his metaphysics "first philosophy" and characterized it as the study of "being as being".[25] Aristotle defined the paradigm of motion as a being or entity encompassing different areas in the same body,[25] meaning that a person at a certain location (A) can move to a new location (B) and still take up the same amount of space. This reflects Aristotle's belief that motion is a continuum. In terms of matter, Aristotle believed that a change in the category (e.g., place) or quality (e.g., color) of an object is an "alteration", whereas a change in substance is a change in matter; this is close to our idea of matter today. He also devised his own laws of motion, which include 1) heavier objects fall faster, the speed being proportional to the weight, and 2) the speed of a falling object depends inversely on the density of the medium it is falling through (ex. 
density of air).[26] He also stated that, in violent motion (motion of an object when a force is applied to it by a second object), the object moves only as fast as the measure of force applied to it.[26] Similar relations between force and speed appear in the rules taught in physics classes today. These rules are not exactly what we find in modern physics, but they are similar, and it is evident that they formed a backbone that later scientists would revise and correct. The most notable innovations under Islamic scholarship were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics (also known as Kitāb al-Manāẓir), written by Ibn al-Haytham, in which he presented an alternative to the ancient Greek idea about vision. In his Treatise on Light as well as in his Kitāb al-Manāẓir, he presented a study of the phenomenon of the camera obscura (his thousand-year-old version of the pinhole camera) and delved further into the way the eye itself works. Using the knowledge of previous scholars, he began to explain how light enters the eye. He asserted that the light ray is focused, but the actual explanation of how light is projected to the back of the eye had to wait until 1604. His Treatise on Light explained the camera obscura hundreds of years before the modern development of photography.[27] The seven-volume Book of Optics (Kitab al-Manathir) hugely influenced thinking across disciplines, from the theory of visual perception to the nature of perspective in medieval art, in both the East and the West, for more than 600 years. It influenced many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to Johannes Kepler. The translation of The Book of Optics had a huge impact on Europe.
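In modern notation (a reconstruction for illustration only; Aristotle wrote no formulas), his two rules of falling motion amount to a proportionality:

```latex
% Aristotle, schematically: speed v proportional to the weight W and
% inversely proportional to the density \rho of the resisting medium
v \;\propto\; \frac{W}{\rho}
```

Galileo's observation, echoing Philoponus above, was that in the absence of a resisting medium all bodies fall with the same acceleration, independent of W.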
From the translated text, later European scholars were able to build devices that replicated those Ibn al-Haytham had built and understand the way vision works. Classical physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics.[28] Major developments in this period include the replacement of the geocentric model of the Solar System with the heliocentric Copernican model, the laws governing the motion of planetary bodies (determined by Kepler between 1609 and 1619), Galileo's pioneering work on telescopes and observational astronomy in the 16th and 17th centuries, and Isaac Newton's discovery and unification of the laws of motion and universal gravitation (which would come to bear his name).[29] Newton also developed calculus,[lower-alpha 4] the mathematical study of continuous change, which provided new mathematical methods for solving physical problems.[30] The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from research efforts during the Industrial Revolution as energy needs increased.[31] The laws comprising classical physics remain very widely used for objects on everyday scales travelling at non-relativistic speeds, since they provide a very close approximation in such situations, and theories such as quantum mechanics and the theory of relativity simplify to their classical equivalents at such scales. Inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century.

Modern

Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity. Both of these theories came about due to inaccuracies in classical mechanics in certain situations.
Classical mechanics predicted that the speed of light depends on the motion of the observer, which could not be resolved with the constant speed predicted by Maxwell's equations of electromagnetism. This discrepancy was corrected by Einstein's theory of special relativity, which replaced classical mechanics for fast-moving bodies and allowed for a constant speed of light.[32] Black-body radiation provided another problem for classical physics, which was corrected when Planck proposed that the excitation of material oscillators is possible only in discrete steps proportional to their frequency. This, along with the photoelectric effect and a complete theory predicting discrete energy levels of electron orbitals, led to the theory of quantum mechanics improving on classical physics at very small scales.[33] Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac.[33] From this early work, and work in related fields, the Standard Model of particle physics was derived.[34] Following the discovery of a particle with properties consistent with the Higgs boson at CERN in 2012,[35] all fundamental particles predicted by the standard model, and no others, appear to exist; however, physics beyond the Standard Model, with theories such as supersymmetry, is an active area of research.[36] Areas of mathematics in general are important to this field, such as the study of probabilities and groups.

Philosophy

In many ways, physics stems from ancient Greek philosophy. From Thales' first attempt to characterize matter, to Democritus' deduction that matter ought to reduce to an invariant state, the Ptolemaic astronomy of a crystalline firmament, and Aristotle's book Physics (an early book on physics, which attempted to analyze and define motion from a philosophical point of view), various Greek philosophers advanced their own theories of nature.
Physics was known as natural philosophy until the late 18th century.[lower-alpha 5] By the 19th century, physics was realized as a discipline distinct from philosophy and the other sciences. Physics, as with the rest of science, relies on philosophy of science and its "scientific method" to advance our knowledge of the physical world.[38] The scientific method employs a priori reasoning as well as a posteriori reasoning and the use of Bayesian inference to measure the validity of a given theory.[39] The development of physics has answered many questions of early philosophers but has also raised new questions. Study of the philosophical issues surrounding physics, the philosophy of physics, involves issues such as the nature of space and time, determinism, and metaphysical outlooks such as empiricism, naturalism and realism.[40] Many physicists have written about the philosophical implications of their work, for instance Laplace, who championed causal determinism,[41] and Erwin Schrödinger, who wrote on quantum mechanics.[42][43] The mathematical physicist Roger Penrose has been called a Platonist by Stephen Hawking,[44] a view Penrose discusses in his book, The Road to Reality.[45] Hawking referred to himself as an "unashamed reductionist" and took issue with Penrose's views.[46] Core theories Physics deals with a wide variety of systems, although certain theories are used by all physicists. Each of these theories was experimentally tested numerous times and found to be an adequate approximation of nature. For instance, the theory of classical mechanics accurately describes the motion of objects, provided they are much larger than atoms and moving at a speed much less than the speed of light. These theories continue to be areas of active research today. Chaos theory, a remarkable aspect of classical mechanics, was discovered in the 20th century, three centuries after the original formulation of classical mechanics by Newton (1642–1727). 
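The hallmark of chaos, sensitive dependence on initial conditions, can be illustrated with a minimal sketch. The logistic map used here is a standard textbook example of a chaotic system, not one discussed in this article:

```python
# Sensitive dependence on initial conditions: the logistic map
# x_{n+1} = r * x_n * (1 - x_n) is chaotic at r = 4, so two starting
# points differing by only 1e-10 soon yield completely different orbits.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0; return the list of iterates."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # tiny perturbation of the start

# The gap grows roughly exponentially until it saturates at order 1.
max_gap = max(abs(x - y) for x, y in zip(a, b))
```

The equations are fully deterministic, yet long-term prediction fails once the initial condition is known only approximately; this is the sense in which chaos was a surprise hiding inside classical mechanics.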
These central theories are important tools for research into more specialized topics, and any physicist, regardless of their specialization, is expected to be literate in them. These include classical mechanics, quantum mechanics, thermodynamics and statistical mechanics, electromagnetism, and special relativity.

Classical

Classical physics includes the traditional branches and topics that were recognized and well-developed before the beginning of the 20th century—classical mechanics, acoustics, optics, thermodynamics, and electromagnetism. Classical mechanics is concerned with bodies acted on by forces and bodies in motion and may be divided into statics (study of the forces on a body or bodies not subject to an acceleration), kinematics (study of motion without regard to its causes), and dynamics (study of motion and the forces that affect it); mechanics may also be divided into solid mechanics and fluid mechanics (known together as continuum mechanics), the latter of which includes such branches as hydrostatics, hydrodynamics, aerodynamics, and pneumatics. Acoustics is the study of how sound is produced, controlled, transmitted and received.[47] Important modern branches of acoustics include ultrasonics, the study of sound waves of very high frequency beyond the range of human hearing; bioacoustics, the physics of animal calls and hearing;[48] and electroacoustics, the manipulation of audible sound waves using electronics.[49] Optics, the study of light, is concerned not only with visible light but also with infrared and ultraviolet radiation, which exhibit all of the phenomena of visible light except visibility, e.g., reflection, refraction, interference, diffraction, dispersion, and polarization of light. Heat is a form of energy, the internal energy possessed by the particles of which a substance is composed; thermodynamics deals with the relationships between heat and other forms of energy.
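For the simplest case of constant acceleration, the kinematics described above reduces to two closed-form equations, x(t) = x0 + v0*t + a*t^2/2 and v(t) = v0 + a*t. A minimal sketch in SI units (the helper functions are illustrative, not from any particular library):

```python
def position(x0, v0, a, t):
    """Position under constant acceleration: x0 + v0*t + (1/2)*a*t^2."""
    return x0 + v0 * t + 0.5 * a * t * t

def velocity(v0, a, t):
    """Velocity under constant acceleration: v0 + a*t."""
    return v0 + a * t

# A dropped object (v0 = 0) near Earth's surface, with g ~ 9.81 m/s^2,
# falls about 19.6 m in 2 s and reaches a speed of about 19.6 m/s.
g = 9.81
fall = position(0.0, 0.0, g, 2.0)
speed = velocity(0.0, g, 2.0)
```

Note that, contrary to Aristotle's rule discussed earlier, the mass of the object does not appear: in the absence of air resistance, all bodies share the same trajectory.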
Electricity and magnetism have been studied as a single branch of physics since the intimate connection between them was discovered in the early 19th century; an electric current gives rise to a magnetic field, and a changing magnetic field induces an electric current. Electrostatics deals with electric charges at rest, electrodynamics with moving charges, and magnetostatics with magnetic poles at rest.

Modern

${\hat {H}}|\psi _{n}(t)\rangle =i\hbar {\frac {\partial }{\partial t}}|\psi _{n}(t)\rangle $

$G_{\mu \nu }+\Lambda g_{\mu \nu }={\kappa }T_{\mu \nu }$

The Schrödinger equation and the Einstein field equations
Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid.[50] The two chief theories of modern physics present a different picture of the concepts of space, time, and matter from that presented by classical physics. Classical mechanics approximates nature as continuous, while quantum theory is concerned with the discrete nature of many phenomena at the atomic and subatomic level and with the complementary aspects of particles and waves in the description of such phenomena. The theory of relativity is concerned with the description of phenomena that take place in a frame of reference that is in motion with respect to an observer; the special theory of relativity is concerned with motion in the absence of gravitational fields and the general theory of relativity with motion and its connection with gravitation.
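Special relativity's departure from absolute time can be made quantitative with the Lorentz factor γ = 1/√(1 − v²/c²), by which a moving clock appears slowed to a stationary observer. A minimal numerical sketch (the helper function is illustrative):

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact by definition)

def lorentz_factor(v):
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    if abs(v) >= C:
        raise ValueError("speed must be below c")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds gamma is immeasurably close to 1; at 0.6c a moving
# clock ticks 1.25 times slower as measured by a stationary observer.
gamma_car = lorentz_factor(30.0)      # ~1 + 5e-15
gamma_fast = lorentz_factor(0.6 * C)  # 1.25
```

The smallness of gamma − 1 at ordinary speeds is why classical mechanics works so well in the everyday domain, as the "Difference" discussion below notes.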
Both quantum theory and the theory of relativity find applications in many areas of modern physics.[51]

Difference

While physics itself aims to discover universal laws, its theories lie in explicit domains of applicability. Loosely speaking, the laws of classical physics accurately describe systems whose important length scales are greater than the atomic scale and whose motions are much slower than the speed of light. Outside of this domain, observations do not match predictions provided by classical mechanics. Einstein contributed the framework of special relativity, which replaced notions of absolute time and space with spacetime and allowed an accurate description of systems whose components have speeds approaching the speed of light. Planck, Schrödinger, and others introduced quantum mechanics, a probabilistic notion of particles and interactions that allowed an accurate description of atomic and subatomic scales. Later, quantum field theory unified quantum mechanics and special relativity. General relativity allowed for a dynamical, curved spacetime, with which highly massive systems and the large-scale structure of the universe can be well-described. General relativity has not yet been unified with the other fundamental descriptions; several candidate theories of quantum gravity are being developed.

Relation to other fields

Prerequisites

Mathematics provides a compact and exact language used to describe the order in nature. This was noted and advocated by Pythagoras,[52] Plato,[53] Galileo,[54] and Newton. Some theorists, like Hilary Putnam and Penelope Maddy, hold that logical truths, and therefore mathematical reasoning, depend on the empirical world.
This is usually combined with the claim that the laws of logic express universal regularities found in the structural features of the world, which may explain the peculiar relation between these fields. Physics uses mathematics[55] to organise and formulate experimental results. From those results, precise or estimated solutions and quantitative results are obtained, from which new predictions can be made and experimentally confirmed or refuted. The results from physics experiments are numerical data, with their units of measure and estimates of the errors in the measurements. Technologies based on mathematics, like computation, have made computational physics an active area of research. Ontology is a prerequisite for physics, but not for mathematics. This means that physics is ultimately concerned with descriptions of the real world, while mathematics is concerned with abstract patterns, even beyond the real world. Thus physics statements are synthetic, while mathematical statements are analytic. Mathematics contains hypotheses, while physics contains theories. Mathematical statements have to be only logically true, while predictions of physics statements must match observed and experimental data. The distinction is clear-cut, but not always obvious. For example, mathematical physics is the application of mathematics in physics. Its methods are mathematical, but its subject is physical.[56] The problems in this field start with a "mathematical model of a physical situation" (system) and a "mathematical description of a physical law" that will be applied to that system. Every mathematical statement used for solving has a hard-to-find physical meaning; the final mathematical solution, however, has an easier-to-find meaning, because it is what the solver is looking for. Pure physics is a branch of fundamental science (also called basic science).
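Since, as noted above, experimental results are numerical data with error estimates, independent measurements of the same quantity are routinely combined. The inverse-variance weighted mean shown here is a generic statistical recipe, not a method described in this article, and the sample numbers are hypothetical:

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of measurements and its 1-sigma error.

    Standard recipe for combining independent measurements x_i +/- sigma_i:
    weights w_i = 1 / sigma_i^2, combined error 1 / sqrt(sum of weights).
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, 1.0 / math.sqrt(total)

# Three hypothetical measurements of g in m/s^2, with their errors:
g_mean, g_err = weighted_mean([9.80, 9.82, 9.79], [0.02, 0.04, 0.02])
# g_mean ~ 9.798, g_err ~ 0.013: more precise than any single measurement.
```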
Physics is also called "the fundamental science" because all branches of natural science, like chemistry, astronomy, geology, and biology, are constrained by the laws of physics.[57] Similarly, chemistry is often called the central science because of its role in linking the physical sciences. For example, chemistry studies properties, structures, and reactions of matter (chemistry's focus on the molecular and atomic scale distinguishes it from physics). Structures are formed because particles exert electrical forces on each other, properties include physical characteristics of given substances, and reactions are bound by laws of physics, like conservation of energy, mass, and charge. Physics is applied in industries like engineering and medicine.

Application and influence

Applied physics is a general term for physics research that is intended for a particular use. An applied physics curriculum usually contains a few classes in an applied discipline, like geology or electrical engineering. It usually differs from engineering in that an applied physicist may not be designing something in particular, but rather is using physics or conducting physics research with the aim of developing new technologies or solving a problem. The approach is similar to that of applied mathematics. Applied physicists use physics in scientific research. For instance, people working on accelerator physics might seek to build better particle detectors for research in theoretical physics. Physics is used heavily in engineering. For example, statics, a subfield of mechanics, is used in the building of bridges and other static structures. The understanding and use of acoustics results in sound control and better concert halls; similarly, the use of optics creates better optical devices. An understanding of physics makes for more realistic flight simulators, video games, and movies, and is often critical in forensic investigations.
With the standard consensus that the laws of physics are universal and do not change with time, physics can be used to study things that would ordinarily be mired in uncertainty. For example, in the study of the origin of the Earth, one can reasonably model the Earth's mass, temperature, and rate of rotation as functions of time, allowing one to extrapolate forward or backward in time and so predict future or prior events. It also allows for simulations in engineering that drastically speed up the development of a new technology. But there is also considerable interdisciplinarity, so many other important fields are influenced by physics (e.g., the fields of econophysics and sociophysics).

Research

Scientific method

Physicists use the scientific method to test the validity of a physical theory. By using a methodical approach to compare the implications of a theory with the conclusions drawn from its related experiments and observations, physicists are better able to test the validity of a theory in a logical, unbiased, and repeatable way. To that end, experiments are performed and observations are made in order to determine the validity or invalidity of the theory.[58] A scientific law is a concise verbal or mathematical statement of a relation that expresses a fundamental principle of some theory, such as Newton's law of universal gravitation.[59]

Theory and experiment

Main articles: Theoretical physics and Experimental physics

Theorists seek to develop mathematical models that both agree with existing experiments and successfully predict future experimental results, while experimentalists devise and perform experiments to test theoretical predictions and explore new phenomena. Although theory and experiment are developed separately, they strongly affect and depend upon each other.
Progress in physics frequently comes about when experimental results defy explanation by existing theories, prompting intense focus on applicable modelling, and when new theories generate experimentally testable predictions, which inspire the development of new experiments (and often related equipment).[60] Physicists who work at the interplay of theory and experiment are called phenomenologists, who study complex phenomena observed in experiment and work to relate them to a fundamental theory.[61] Theoretical physics has historically taken inspiration from philosophy; electromagnetism was unified this way.[lower-alpha 6] Beyond the known universe, the field of theoretical physics also deals with hypothetical issues,[lower-alpha 7] such as parallel universes, a multiverse, and higher dimensions. Theorists invoke these ideas in hopes of solving particular problems with existing theories; they then explore the consequences of these ideas and work toward making testable predictions. Experimental physics expands, and is expanded by, engineering and technology. Experimental physicists who are involved in basic research design and perform experiments with equipment such as particle accelerators and lasers, whereas those involved in applied research often work in industry, developing technologies such as magnetic resonance imaging (MRI) and transistors. Feynman has noted that experimentalists may seek areas that have not been explored well by theorists.[62] Scope and aims Physics covers a wide range of phenomena, from elementary particles (such as quarks, neutrinos, and electrons) to the largest superclusters of galaxies. Included in these phenomena are the most basic objects composing all other things. Therefore, physics is sometimes called the "fundamental science".[57] Physics aims to describe the various phenomena that occur in nature in terms of simpler phenomena. 
Thus, physics aims to both connect the things observable to humans to root causes, and then connect these causes together. For example, the ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two.[63] This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. This process of "unifying" forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction. Physics hopes to find an ultimate reason (theory of everything) for why nature is as it is (see section Current research below for more information).[64] Research fields Contemporary research in physics can be broadly divided into nuclear and particle physics; condensed matter physics; atomic, molecular, and optical physics; astrophysics; and applied physics. Some physics departments also support physics education research and physics outreach.[65] Since the 20th century, the individual fields of physics have become increasingly specialised, and today most physicists work in a single field for their entire careers. "Universalists" such as Einstein (1879–1955) and Lev Landau (1908–1968), who worked in multiple fields of physics, are now very rare.[lower-alpha 8] The major fields of physics, along with their subfields and the theories and concepts they employ, are shown in the following table. 
Nuclear and particle physics
Subfields: Nuclear physics, Nuclear astrophysics, Particle physics, Astroparticle physics, Particle physics phenomenology
Major theories: Standard Model, Quantum field theory, Quantum electrodynamics, Quantum chromodynamics, Electroweak theory, Effective field theory, Lattice field theory, Gauge theory, Supersymmetry, Grand Unified Theory, Superstring theory, M-theory, AdS/CFT correspondence
Concepts: Fundamental interaction (gravitational, electromagnetic, weak, strong), Elementary particle, Spin, Antimatter, Spontaneous symmetry breaking, Neutrino oscillation, Seesaw mechanism, Brane, String, Quantum gravity, Theory of everything, Vacuum energy

Atomic, molecular, and optical physics
Subfields: Atomic physics, Molecular physics, Atomic and molecular astrophysics, Chemical physics, Optics, Photonics
Major theories: Quantum optics, Quantum chemistry, Quantum information science
Concepts: Photon, Atom, Molecule, Diffraction, Electromagnetic radiation, Laser, Polarization (waves), Spectral line, Casimir effect

Condensed matter physics
Subfields: Solid-state physics, High-pressure physics, Low-temperature physics, Surface physics, Nanoscale and mesoscopic physics, Polymer physics
Major theories: BCS theory, Bloch's theorem, Density functional theory, Fermi gas, Fermi liquid theory, Many-body theory, Statistical mechanics
Concepts: Phases (gas, liquid, solid), Bose–Einstein condensate, Electrical conduction, Phonon, Magnetism, Self-organization, Semiconductor, Superconductor, Superfluidity, Spin

Astrophysics
Subfields: Astronomy, Astrometry, Cosmology, Gravitation physics, High-energy astrophysics, Planetary astrophysics, Plasma physics, Solar physics, Space physics, Stellar astrophysics
Major theories: Big Bang, Cosmic inflation, General relativity, Newton's law of universal gravitation, Lambda-CDM model, Magnetohydrodynamics
Concepts: Black hole, Cosmic background radiation, Cosmic string, Cosmos, Dark energy, Dark matter, Galaxy, Gravity, Gravitational radiation, Gravitational singularity, Planet, Solar System, Star, Supernova, Universe

Applied physics
Subfields: Accelerator physics, Acoustics, Agrophysics, Atmospheric physics, Biophysics, Chemical physics, Communication physics, Econophysics, Engineering physics, Fluid dynamics, Geophysics, Laser physics, Materials physics, Medical physics, Nanotechnology, Optics, Optoelectronics, Photonics, Photovoltaics, Physical chemistry, Physical oceanography, Physics of computation, Plasma physics, Solid-state devices, Quantum chemistry, Quantum electronics, Quantum information science, Vehicle dynamics

Nuclear and particle

Particle physics is the study of the elementary constituents of matter and energy and the interactions between them.[66] In addition, particle physicists design and develop the high-energy accelerators,[67] detectors,[68] and computer programs[69] necessary for this research. The field is also called "high-energy physics" because many elementary particles do not occur naturally but are created only during high-energy collisions of other particles.[70] Currently, the interactions of elementary particles and fields are described by the Standard Model.[71] The model accounts for the 12 known particles of matter (quarks and leptons) that interact via the strong, weak, and electromagnetic fundamental forces.[71] Dynamics are described in terms of matter particles exchanging gauge bosons (gluons, W and Z bosons, and photons, respectively).[72] The Standard Model also predicts a particle known as the Higgs boson.[71] In July 2012 CERN, the European laboratory for particle physics, announced the detection of a particle consistent with the Higgs boson,[73] an integral part of the Higgs mechanism. Nuclear physics is the field of physics that studies the constituents and interactions of atomic nuclei.
The most commonly known applications of nuclear physics are nuclear power generation and nuclear weapons technology, but the research has provided application in many fields, including those in nuclear medicine and magnetic resonance imaging, ion implantation in materials engineering, and radiocarbon dating in geology and archaeology.

Atomic, molecular, and optical

Atomic, molecular, and optical physics (AMO) is the study of matter–matter and light–matter interactions on the scale of single atoms and molecules. The three areas are grouped together because of their interrelationships, the similarity of methods used, and the commonality of their relevant energy scales. All three areas include both classical, semi-classical and quantum treatments; they can treat their subject from a microscopic view (in contrast to a macroscopic view). Atomic physics studies the electron shells of atoms. Current research focuses on activities in quantum control, cooling and trapping of atoms and ions,[74][75][76] low-temperature collision dynamics and the effects of electron correlation on structure and dynamics. Atomic physics is influenced by the nucleus (see hyperfine splitting), but intra-nuclear phenomena such as fission and fusion are considered part of nuclear physics. Molecular physics focuses on multi-atomic structures and their internal and external interactions with matter and light. Optical physics is distinct from optics in that it tends to focus not on the control of classical light fields by macroscopic objects but on the fundamental properties of optical fields and their interactions with matter in the microscopic realm.
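A recurring quantity across AMO physics is the photon energy E = hν = hc/λ, which sets the energy scale for spectral lines and light–matter interactions. A minimal sketch using the exact SI values of the constants (the helper function is illustrative):

```python
H = 6.62607015e-34    # Planck constant in J*s (exact by SI definition)
C = 299_792_458.0     # speed of light in m/s (exact)
EV = 1.602176634e-19  # joules per electronvolt (exact)

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c/lambda, returned in electronvolts."""
    return H * C / wavelength_m / EV

# Green light at 532 nm carries about 2.33 eV per photon; hydrogen's
# Lyman-alpha line at 121.6 nm corresponds to about 10.2 eV.
green = photon_energy_ev(532e-9)
lyman_alpha = photon_energy_ev(121.6e-9)
```

Electronvolts are the natural unit here because typical atomic transition energies are of order 1–10 eV.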
Condensed matter Condensed matter physics is the field of physics that deals with the macroscopic physical properties of matter.[77][78] In particular, it is concerned with the "condensed" phases that appear whenever the number of particles in a system is extremely large and the interactions between them are strong.[79] The most familiar examples of condensed phases are solids and liquids, which arise from the bonding by way of the electromagnetic force between atoms.[80] More exotic condensed phases include the superfluid[81] and the Bose–Einstein condensate[82] found in certain atomic systems at very low temperature, the superconducting phase exhibited by conduction electrons in certain materials,[83] and the ferromagnetic and antiferromagnetic phases of spins on atomic lattices.[84] Condensed matter physics is the largest field of contemporary physics. Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields.[85] The term condensed matter physics was apparently coined by Philip Anderson when he renamed his research group—previously solid-state theory—in 1967.[86] In 1978, the Division of Solid State Physics of the American Physical Society was renamed as the Division of Condensed Matter Physics.[85] Condensed matter physics has a large overlap with chemistry, materials science, nanotechnology and engineering.[79] Astrophysics Astrophysics and astronomy are the application of the theories and methods of physics to the study of stellar structure, stellar evolution, the origin of the Solar System, and related problems of cosmology. 
Because astrophysics is a broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.[87] The discovery by Karl Jansky in 1931 that radio signals were emitted by celestial bodies initiated the science of radio astronomy. Most recently, the frontiers of astronomy have been expanded by space exploration. Perturbations and interference from the earth's atmosphere make space-based observations necessary for infrared, ultraviolet, gamma-ray, and X-ray astronomy. Physical cosmology is the study of the formation and evolution of the universe on its largest scales. Albert Einstein's theory of relativity plays a central role in all modern cosmological theories. In the early 20th century, Hubble's discovery that the universe is expanding, as shown by the Hubble diagram, prompted rival explanations known as the steady state universe and the Big Bang. The Big Bang was confirmed by the success of Big Bang nucleosynthesis and the discovery of the cosmic microwave background in 1964. The Big Bang model rests on two theoretical pillars: Albert Einstein's general relativity and the cosmological principle. Cosmologists have recently established the ΛCDM model of the evolution of the universe, which includes cosmic inflation, dark energy, and dark matter. Numerous possibilities and discoveries are anticipated to emerge from new data from the Fermi Gamma-ray Space Telescope over the upcoming decade and vastly revise or clarify existing models of the universe.[88][89] In particular, the potential for a tremendous discovery surrounding dark matter is possible over the next several years.[90] Fermi will search for evidence that dark matter is composed of weakly interacting massive particles, complementing similar experiments with the Large Hadron Collider and other underground detectors. 
IBEX is already yielding new astrophysical discoveries: "No one knows what is creating the ENA (energetic neutral atoms) ribbon" along the termination shock of the solar wind, "but everyone agrees that it means the textbook picture of the heliosphere—in which the Solar System's enveloping pocket filled with the solar wind's charged particles is plowing through the onrushing 'galactic wind' of the interstellar medium in the shape of a comet—is wrong."[91]

Current research

Research in physics is continually progressing on a large number of fronts. In condensed matter physics, an important unsolved theoretical problem is that of high-temperature superconductivity.[92] Many condensed matter experiments are aiming to fabricate workable spintronics and quantum computers.[79][93] In particle physics, the first pieces of experimental evidence for physics beyond the Standard Model have begun to appear. Foremost among these are indications that neutrinos have non-zero mass. These experimental results appear to have solved the long-standing solar neutrino problem, and the physics of massive neutrinos remains an area of active theoretical and experimental research. The Large Hadron Collider has already found the Higgs boson, but future research aims to prove or disprove supersymmetry, which extends the Standard Model of particle physics. Research on the nature of the major mysteries of dark matter and dark energy is also currently ongoing.[94] Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity,[95] chaos,[96] or turbulence[97] are still poorly understood.
Complex problems that seem like they could be solved by a clever application of dynamics and mechanics remain unsolved; examples include the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, and self-sorting in shaken heterogeneous collections.[note 9][98] These complex phenomena have received growing attention since the 1970s for several reasons, including the availability of modern mathematical methods and computers, which enabled complex systems to be modeled in new ways. Complex physics has become part of increasingly interdisciplinary research, as exemplified by the study of turbulence in aerodynamics and the observation of pattern formation in biological systems. In 1932, Horace Lamb said:[99] I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic.

Education

This section is an excerpt from Physics education. Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied by demonstrations, hands-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning, for example with hands-on experiments, learn through self-discovery.
By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education.

Career

This section is an excerpt from Physicist. A physicist is a scientist who specializes in the field of physics, which encompasses the interactions of matter and energy at all length and time scales in the physical universe.[100][101] Physicists generally are interested in the root or ultimate causes of phenomena, and usually frame their understanding in mathematical terms. They work across a wide range of research fields, spanning all length scales: from sub-atomic and particle physics, through biological physics, to cosmological length scales encompassing the universe as a whole. The field generally includes two types of physicists: experimental physicists who specialize in the observation of natural phenomena and the development and analysis of experiments, and theoretical physicists who specialize in mathematical modeling of physical systems to rationalize, explain and predict natural phenomena.[100] Physicists can apply their knowledge towards solving practical problems or to developing new technologies (also known as applied physics or engineering physics).[102][103][104]

See also

• Earth science – Fields of natural science related to Earth • Neurophysics – Branch of biophysics dealing with the development and use of physical methods to gain information about the nervous system • Psychophysics – Branch of knowledge relating physical stimuli and psychological perception • Quantum physics – Description of physical properties at the atomic and subatomic scale • Relationship between mathematics and physics – Study of how mathematics and physics relate to each other • Science tourism – Travel to notable science locations

Lists

• List of important
publications in physics • List of physicists • Lists of physics equations Notes 1. At the start of The Feynman Lectures on Physics, Richard Feynman offers the atomic hypothesis as the single most prolific scientific concept.[1] 2. The term "universe" is defined as everything that physically exists: the entirety of space and time, all forms of matter, energy and momentum, and the physical laws and constants that govern them. However, the term "universe" may also be used in slightly different contextual senses, denoting concepts such as the cosmos or the philosophical world. 3. Francis Bacon's 1620 Novum Organum was critical in the development of scientific method.[7] 4. Calculus was independently developed at around the same time by Gottfried Wilhelm Leibniz; while Leibniz was the first to publish his work and develop much of the notation used for calculus today, Newton was the first to develop calculus and apply it to physical problems. See also Leibniz–Newton calculus controversy 5. Noll notes that some universities still use this title.[37] 6. See, for example, the influence of Kant and Ritter on Ørsted. 7. Concepts which are denoted hypothetical can change with time. For example, the atom of nineteenth-century physics was denigrated by some, including Ernst Mach's critique of Ludwig Boltzmann's formulation of statistical mechanics. By the end of World War II, the atom was no longer deemed hypothetical. 8. Yet, universalism is encouraged in the culture of physics. For example, the World Wide Web, which was innovated at CERN by Tim Berners-Lee, was created in service to the computer infrastructure of CERN, and was/is intended for use by physicists worldwide. The same might be said for arXiv.org 9. See the work of Ilya Prigogine, on 'systems far from equilibrium', and others. References 1. Feynman, Leighton & Sands 1963, p. I-2 "If, in some cataclysm, all [] scientific knowledge were to be destroyed [save] one sentence [...] 
what statement would contain the most information in the fewest words? I believe it is [...] that all things are made up of atoms – little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another ..." 2. Maxwell 1878, p. 9 "Physical science is that department of knowledge which relates to the order of nature, or, in other words, to the regular succession of events." 3. Young & Freedman 2014, p. 1 "Physics is one of the most fundamental of the sciences. Scientists of all disciplines use the ideas of physics, including chemists who study the structure of molecules, paleontologists who try to reconstruct how dinosaurs walked, and climatologists who study how human activities affect the atmosphere and oceans. Physics is also the foundation of all engineering and technology. No engineer could design a flat-screen TV, an interplanetary spacecraft, or even a better mousetrap without first understanding the basic laws of physics. (...) You will come to see physics as a towering achievement of the human intellect in its quest to understand our world and ourselves." 4. Young & Freedman 2014, p. 2 "Physics is an experimental science. Physicists observe the phenomena of nature and try to find patterns that relate these phenomena." 5. Holzner 2006, p. 7 "Physics is the study of your world and the world and universe around you." 6. Krupp 2003 7. Cajori 1917, pp. 48–49 8. "physics". Online Etymology Dictionary. Archived from the original on 24 December 2016. Retrieved 1 November 2016. 9. "physic". Online Etymology Dictionary. Archived from the original on 24 December 2016. Retrieved 1 November 2016. 10. φύσις, φυσική, ἐπιστήμη. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project 11. Aaboe 1991 12. Clagett 1995 13. Thurston 1994 14. Singer 2008, p. 35 15. Lloyd 1970, pp. 108–109 16. Gill, N.S. "Atomism – Pre-Socratic Philosophy of Atomism". 
About Education. Archived from the original on 10 July 2014. Retrieved 1 April 2014. 17. Lindberg 1992, p. 363. 18. Smith 2001, Book I [6.85], [6.86], p. 379; Book II, [3.80], p. 453. 19. "John Philoponus, Commentary on Aristotle's Physics". Archived from the original on 11 January 2016. Retrieved 15 April 2018. 20. Galileo (1638). Two New Sciences. in order to better understand just how conclusive Aristotle's demonstration is, we may, in my opinion, deny both of his assumptions. And as to the first, I greatly doubt that Aristotle ever tested by experiment whether it be true that two stones, one weighing ten times as much as the other, if allowed to fall, at the same instant, from a height of, say, 100 cubits, would so differ in speed that when the heavier had reached the ground, the other would not have fallen more than 10 cubits. Simp. – His language would seem to indicate that he had tried the experiment, because he says: We see the heavier; now the word see shows that he had made the experiment. Sagr. – But I, Simplicio, who have made the test can assure[107] you that a cannon ball weighing one or two hundred pounds, or even more, will not reach the ground by as much as a span ahead of a musket ball weighing only half a pound, provided both are dropped from a height of 200 cubits. 21. Lindberg 1992, p. 162. 22. "John Philoponus". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 2018. Archived from the original on 22 April 2018. Retrieved 11 April 2018. 23. "John Buridan". The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 2018. Archived from the original on 22 April 2018. Retrieved 11 April 2018. 24. tbcaldwe (14 October 2012). "Natural Philosophy: Aristotle | Physics 139". Retrieved 17 December 2022. 25. "Aristotle – Physics and metaphysics". www.britannica.com. Retrieved 17 December 2022. 26. "Aristotle". galileoandeinstein.phys.virginia.edu. Retrieved 17 December 2022. 27. 
Howard & Rogers 1995, pp. 6–7 28. Ben-Chaim 2004 29. Guicciardini 1999 30. Allen 1997 31. "The Industrial Revolution". Schoolscience.org, Institute of Physics. Archived from the original on 7 April 2014. Retrieved 1 April 2014. 32. O'Connor & Robertson 1996a 33. O'Connor & Robertson 1996b 34. "The Standard Model". DONUT. Fermilab. 29 June 2001. Archived from the original on 31 May 2014. Retrieved 1 April 2014. 35. Cho 2012 36. Womersley, J. (February 2005). "Beyond the Standard Model" (PDF). Symmetry. Vol. 2, no. 1. pp. 22–25. Archived (PDF) from the original on 24 September 2015. 37. Noll, Walter (23 June 2006). "On the Past and Future of Natural Philosophy" (PDF). Journal of Elasticity. 84 (1): 1–11. doi:10.1007/s10659-006-9068-y. S2CID 121957320. Archived (PDF) from the original on 18 April 2016. 38. Rosenberg 2006, Chapter 1 39. Godfrey-Smith 2003, Chapter 14: "Bayesianism and Modern Theories of Evidence" 40. Godfrey-Smith 2003, Chapter 15: "Empiricism, Naturalism, and Scientific Realism?" 41. Laplace 1951 42. Schrödinger 1983 43. Schrödinger 1995 44. Hawking & Penrose 1996, p. 4 "I think that Roger is a Platonist at heart but he must answer for himself." 45. Penrose 2004 46. Penrose et al. 1997 47. "acoustics". Encyclopædia Britannica. Archived from the original on 18 June 2013. Retrieved 14 June 2013. 48. "Bioacoustics – the International Journal of Animal Sound and its Recording". Taylor & Francis. Archived from the original on 5 September 2012. Retrieved 31 July 2012. 49. "Acoustics and You (A Career in Acoustics?)". Acoustical Society of America. Archived from the original on 4 September 2015. Retrieved 21 May 2013. 50. Tipler & Llewellyn 2003, pp. 269, 477, 561 51. Tipler & Llewellyn 2003, pp. 1–4, 115, 185–187 52. Dijksterhuis 1986 53. Mastin 2010 "Although usually remembered today as a philosopher, Plato was also one of ancient Greece's most important patrons of mathematics.
Inspired by Pythagoras, he founded his Academy in Athens in 387 BC, where he stressed mathematics as a way of understanding more about reality. In particular, he was convinced that geometry was the key to unlocking the secrets of the universe. The sign above the Academy entrance read: 'Let no-one ignorant of geometry enter here.'" 54. Toraldo Di Francia 1976, p. 10 'Philosophy is written in that great book which ever lies before our eyes. I mean the universe, but we cannot understand it if we do not first learn the language and grasp the symbols in which it is written. This book is written in the mathematical language, and the symbols are triangles, circles, and other geometrical figures, without whose help it is humanly impossible to comprehend a single word of it, and without which one wanders in vain through a dark labyrinth.' – Galileo (1623), The Assayer" 55. "Applications of Mathematics to the Sciences". 25 January 2000. Archived from the original on 10 May 2015. Retrieved 30 January 2012. 56. "Journal of Mathematical Physics". Archived from the original on 18 August 2014. Retrieved 31 March 2014. [Journal of Mathematical Physics] purpose is the publication of papers in mathematical physics—that is, the application of mathematics to problems in physics and the development of mathematical methods suitable for such applications and for the formulation of physical theories. 57. The Feynman Lectures on Physics Vol. I Ch. 3: The Relation of Physics to Other Sciences; see also reductionism and special sciences 58. Ellis, G.; Silk, J. (16 December 2014). "Scientific method: Defend the integrity of physics". Nature. 516 (7531): 321–323. Bibcode:2014Natur.516..321E. doi:10.1038/516321a. PMID 25519115. 59. Honderich 1995, pp. 474–476 60. "Has theoretical physics moved too far away from experiments? Is the field entering a crisis and, if so, what should we do about it?". Perimeter Institute for Theoretical Physics. June 2015. Archived from the original on 21 April 2016. 
61. "Phenomenology". Max Planck Institute for Physics. Archived from the original on 7 March 2016. Retrieved 22 October 2016. 62. Feynman 1965, p. 157 "In fact experimenters have a certain individual character. They ... very often do their experiments in a region in which people know the theorist has not made any guesses." 63. Stewart, J. (2001). Intermediate Electromagnetic Theory. World Scientific. p. 50. ISBN 978-981-02-4471-2. 64. Weinberg, S. (1993). Dreams of a Final Theory: The Search for the Fundamental Laws of Nature. Hutchinson Radius. ISBN 978-0-09-177395-3. 65. Redish, E. "Science and Physics Education Homepages". University of Maryland Physics Education Research Group. Archived from the original on 28 July 2016. 66. "Division of Particles & Fields". American Physical Society. Archived from the original on 29 August 2016. Retrieved 18 October 2012. 67. Halpern 2010 68. Grupen 1999 69. Walsh 2012 70. "High Energy Particle Physics Group". Institute of Physics. Archived from the original on 29 May 2019. Retrieved 18 October 2012. 71. Oerter 2006 72. Gribbin, Gribbin & Gribbin 1998 73. "CERN experiments observe particle consistent with long-sought Higgs boson". CERN. 4 July 2012. Archived from the original on 14 November 2012. Retrieved 18 October 2012. 74. "Atomic, Molecular, and Optical Physics". MIT Department of Physics. Archived from the original on 27 February 2014. Retrieved 21 February 2014. 75. "Korea University, Physics AMO Group". Archived from the original on 1 March 2014. Retrieved 21 February 2014. 76. "Aarhus Universitet, AMO Group". Archived from the original on 7 March 2014. Retrieved 21 February 2014. 77. Taylor & Heinonen 2002 78. Girvin, Steven M.; Yang, Kun (28 February 2019). Modern Condensed Matter Physics. Cambridge University Press. ISBN 978-1-108-57347-4. Archived from the original on 25 February 2021. Retrieved 23 August 2020. 79. Cohen 2008 80. Moore 2011, pp. 255–258 81. Leggett 1999 82. Levy 2001 83. 
Stajic, Coontz & Osborne 2011 84. Mattis 2006 85. "History of Condensed Matter Physics". American Physical Society. Archived from the original on 12 September 2011. Retrieved 31 March 2014. 86. "Philip Anderson". Princeton University, Department of Physics. Archived from the original on 8 October 2011. Retrieved 15 October 2012. 87. "BS in Astrophysics". University of Hawaii at Manoa. Archived from the original on 4 April 2016. Retrieved 14 October 2016. 88. "NASA – Q&A on the GLAST Mission". Nasa: Fermi Gamma-ray Space Telescope. NASA. 28 August 2008. Archived from the original on 25 April 2009. Retrieved 29 April 2009. 89. See also Nasa – Fermi Science Archived 3 April 2010 at the Wayback Machine and NASA – Scientists Predict Major Discoveries for GLAST Archived 2 March 2009 at the Wayback Machine. 90. "Dark Matter". NASA. 28 August 2008. Archived from the original on 13 January 2012. Retrieved 30 January 2012. 91. Kerr 2009 92. Leggett, A.J. (2006). "What DO we know about high Tc?" (PDF). Nature Physics. 2 (3): 134–136. Bibcode:2006NatPh...2..134L. doi:10.1038/nphys254. S2CID 122055331. Archived from the original (PDF) on 10 June 2010. 93. Wolf, S.A.; Chtchelkanova, A.Y.; Treger, D.M. (2006). "Spintronics – A retrospective and perspective" (PDF). IBM Journal of Research and Development. 50: 101–110. doi:10.1147/rd.501.0101. S2CID 41178069. Archived from the original (PDF) on 24 September 2020. 94. Gibney, E. (2015). "LHC 2.0: A new view of the Universe". Nature. 519 (7542): 142–143. Bibcode:2015Natur.519..142G. doi:10.1038/519142a. PMID 25762263. 95. National Research Council & Committee on Technology for Future Naval Forces 1997, p. 161 96. Kellert 1993, p. 32 97. Eames, I.; Flor, J.B. (2011). "New developments in understanding interfacial processes in turbulent flows". Philosophical Transactions of the Royal Society A. 369 (1937): 702–705. Bibcode:2011RSPTA.369..702E. doi:10.1098/rsta.2010.0332. PMID 21242127. 
Richard Feynman said that 'Turbulence is the most important unsolved problem of classical physics' 98. National Research Council (2007). "What happens far from equilibrium and why". Condensed-Matter and Materials Physics: the science of the world around us. pp. 91–110. doi:10.17226/11967. ISBN 978-0-309-10969-7. Archived from the original on 4 November 2016. – Jaeger, Heinrich M.; Liu, Andrea J. (2010). "Far-From-Equilibrium Physics: An Overview". arXiv:1009.4874 [cond-mat.soft]. 99. Goldstein 1969 100. Rosen, Joe (2009). Encyclopedia of Physics. Infobase Publishing. p. 247. 101. "physicist". Merriam-Webster Dictionary. "a scientist who studies or is a specialist in physics" 102. "Industrial Physicists: Primarily specializing in Physics" (PDF). American Institute for Physics. October 2016. 103. "Industrial Physicists: Primarily specializing in Engineering" (PDF). American Institute for Physics. October 2016. 104. "Industrial Physicists: Primarily specializing outside of STEM sectors" (PDF). American Institute for Physics. October 2016. Sources • Aaboe, A. (1991). "Mesopotamian Mathematics, Astronomy, and Astrology". The Cambridge Ancient History. Vol. III (2nd ed.). Cambridge University Press. ISBN 978-0-521-22717-9. • Abazov, V.; et al. (DØ Collaboration) (12 June 2007). "Direct observation of the strange 'b' baryon $\Xi _{b}^{-}$". Physical Review Letters. 99 (5): 052001. arXiv:0706.1690v2. Bibcode:2007PhRvL..99e2001A. doi:10.1103/PhysRevLett.99.052001. PMID 17930744. S2CID 11568965. • Allen, D. (10 April 1997). "Calculus". Texas A&M University. Retrieved 1 April 2014. • Ben-Chaim, M. (2004). Experimental Philosophy and the Birth of Empirical Science: Boyle, Locke and Newton. Aldershot: Ashgate Publishing. ISBN 978-0-7546-4091-2. OCLC 53887772. • Cajori, Florian (1917). A History of Physics in Its Elementary Branches: Including the Evolution of Physical Laboratories. Macmillan. • Cho, A. (13 July 2012). "Higgs Boson Makes Its Debut After Decades-Long Search". 
Science. 337 (6091): 141–143. Bibcode:2012Sci...337..141C. doi:10.1126/science.337.6091.141. PMID 22798574. • Clagett, M. (1995). Ancient Egyptian Science. Vol. 2. Philadelphia: American Philosophical Society. • Cohen, M.L. (2008). "Fifty Years of Condensed Matter Physics". Physical Review Letters. 101 (5): 25001–25006. Bibcode:2008PhRvL.101y0001C. doi:10.1103/PhysRevLett.101.250001. PMID 19113681. • Dijksterhuis, E.J. (1986). The mechanization of the world picture: Pythagoras to Newton. Princeton, New Jersey: Princeton University Press. ISBN 978-0-691-08403-9. Archived from the original on 5 August 2011. • Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. Vol. 1. ISBN 978-0-201-02116-5. • Feynman, R.P. (1965). The Character of Physical Law. ISBN 978-0-262-56003-0. • Godfrey-Smith, P. (2003). Theory and Reality: An Introduction to the Philosophy of Science. ISBN 978-0-226-30063-4. • Goldstein, S. (1969). "Fluid Mechanics in the First Half of this Century". Annual Review of Fluid Mechanics. 1 (1): 1–28. Bibcode:1969AnRFM...1....1G. doi:10.1146/annurev.fl.01.010169.000245. • Gribbin, J.R.; Gribbin, M.; Gribbin, J. (1998). Q is for Quantum: An Encyclopedia of Particle Physics. Free Press. Bibcode:1999qqep.book.....G. ISBN 978-0-684-85578-3. • Grupen, Klaus (10 July 1999). "Instrumentation in Elementary Particle Physics: VIII ICFA School". AIP Conference Proceedings. 536: 3–34. arXiv:physics/9906063. Bibcode:2000AIPC..536....3G. doi:10.1063/1.1361756. S2CID 119476972. • Guicciardini, N. (1999). Reading the Principia: The Debate on Newton's Methods for Natural Philosophy from 1687 to 1736. New York: Cambridge University Press. ISBN 978-0521640664. • Halpern, P. (2010). Collider: The Search for the World's Smallest Particles. John Wiley & Sons. ISBN 978-0-470-64391-4. • Hawking, S.; Penrose, R. (1996). The Nature of Space and Time. ISBN 978-0-691-05084-3. • Holzner, S. (2006). Physics for Dummies. John Wiley & Sons. 
Bibcode:2005pfd..book.....H. ISBN 978-0-470-61841-7. Physics is the study of your world and the world and universe around you. • Honderich, T., ed. (1995). The Oxford Companion to Philosophy (1 ed.). Oxford: Oxford University Press. pp. 474–476. ISBN 978-0-19-866132-0. • Howard, Ian; Rogers, Brian (1995). Binocular Vision and Stereopsis. Oxford University Press. ISBN 978-0-19-508476-4. • Kellert, S.H. (1993). In the Wake of Chaos: Unpredictable Order in Dynamical Systems. University of Chicago Press. ISBN 978-0-226-42976-2. • Kerr, R.A. (16 October 2009). "Tying Up the Solar System With a Ribbon of Charged Particles". Science. 326 (5951): 350–351. doi:10.1126/science.326_350a. PMID 19833930. • Krupp, E.C. (2003). Echoes of the Ancient Skies: The Astronomy of Lost Civilizations. Dover Publications. ISBN 978-0-486-42882-6. Retrieved 31 March 2014. • Laplace, P.S. (1951). A Philosophical Essay on Probabilities. Translated from the 6th French edition by Truscott, F.W. and Emory, F.L. New York: Dover Publications. • Leggett, A.J. (1999). "Superfluidity". Reviews of Modern Physics. 71 (2): S318–S323. Bibcode:1999RvMPS..71..318L. doi:10.1103/RevModPhys.71.S318. • Levy, Barbara G. (December 2001). "Cornell, Ketterle, and Wieman Share Nobel Prize for Bose-Einstein Condensates". Physics Today. 54 (12): 14. Bibcode:2001PhT....54l..14L. doi:10.1063/1.1445529. • Lindberg, David (1992). The Beginnings of Western Science. University of Chicago Press. • Lloyd, G.E.R. (1970). Early Greek Science: Thales to Aristotle. London; New York: Chatto and Windus; W. W. Norton & Company. ISBN 978-0-393-00583-7. • Mattis, D.C. (2006). The Theory of Magnetism Made Simple. World Scientific. ISBN 978-981-238-579-6. • Maxwell, J.C. (1878). Matter and Motion. D. Van Nostrand. ISBN 978-0-486-66895-6. matter and motion. • Moore, J.T. (2011). Chemistry For Dummies (2 ed.). John Wiley & Sons. ISBN 978-1-118-00730-3. • National Research Council; Committee on Technology for Future Naval Forces (1997). 
Technology for the United States Navy and Marine Corps, 2000–2035 Becoming a 21st-Century Force: Volume 9: Modeling and Simulation. Washington, DC: The National Academies Press. ISBN 978-0-309-05928-2. • O'Connor, J.J.; Robertson, E.F. (February 1996a). "Special Relativity". MacTutor History of Mathematics archive. University of St Andrews. Retrieved 1 April 2014. • O'Connor, J.J.; Robertson, E.F. (May 1996b). "A History of Quantum Mechanics". MacTutor History of Mathematics archive. University of St Andrews. Retrieved 1 April 2014. • Oerter, R. (2006). The Theory of Almost Everything: The Standard Model, the Unsung Triumph of Modern Physics. Pi Press. ISBN 978-0-13-236678-6. • Penrose, R.; Shimony, A.; Cartwright, N.; Hawking, S. (1997). The Large, the Small and the Human Mind. Cambridge University Press. ISBN 978-0-521-78572-3. • Penrose, R. (2004). The Road to Reality. ISBN 978-0-679-45443-4. • Rosenberg, Alex (2006). Philosophy of Science. Routledge. ISBN 978-0-415-34317-6. • Schrödinger, E. (1983). My View of the World. Ox Bow Press. ISBN 978-0-918024-30-5. • Schrödinger, E. (1995). The Interpretation of Quantum Mechanics. Ox Bow Press. ISBN 978-1-881987-09-3. • Singer, C. (2008). A Short History of Science to the 19th Century. Streeter Press. • Smith, A. Mark (2001). Alhacen's Theory of Visual Perception: A Critical Edition, with English Translation and Commentary, of the First Three Books of Alhacen's De Aspectibus, the Medieval Latin Version of Ibn al-Haytham's Kitāb al-Manāẓir, 2 vols. Transactions of the American Philosophical Society. Vol. 91. Philadelphia: American Philosophical Society. ISBN 978-0-87169-914-5. OCLC 47168716. • Smith, A. Mark (2001a). "Alhacen's Theory of Visual Perception: A Critical Edition, with English Translation and Commentary, of the First Three Books of Alhacen's "De aspectibus", the Medieval Latin Version of Ibn al-Haytham's "Kitāb al-Manāẓir": Volume One". Transactions of the American Philosophical Society. 
91 (4): i–clxxxi, 1–337. doi:10.2307/3657358. JSTOR 3657358. • Smith, A. Mark (2001b). "Alhacen's Theory of Visual Perception: A Critical Edition, with English Translation and Commentary, of the First Three Books of Alhacen's "De aspectibus", the Medieval Latin Version of Ibn al-Haytham's "Kitāb al-Manāẓir": Volume Two". Transactions of the American Philosophical Society. 91 (5): 339–819. doi:10.2307/3657357. JSTOR 3657357. • Stajic, Jelena; Coontz, R.; Osborne, I. (8 April 2011). "Happy 100th, Superconductivity!". Science. 332 (6026): 189. Bibcode:2011Sci...332..189S. doi:10.1126/science.332.6026.189. PMID 21474747. • Taylor, P.L.; Heinonen, O. (2002). A Quantum Approach to Condensed Matter Physics. Cambridge University Press. ISBN 978-0-521-77827-5. • Thurston, H. (1994). Early Astronomy. Springer. • Tipler, Paul; Llewellyn, Ralph (2003). Modern Physics. W. H. Freeman. ISBN 978-0-7167-4345-3. • Toraldo Di Francia, G. (1976). The Investigation of the Physical World. ISBN 978-0-521-29925-1. • Walsh, K.M. (1 June 2012). "Plotting the Future for Computing in High-Energy and Nuclear Physics". Brookhaven National Laboratory. Archived from the original on 29 July 2016. Retrieved 18 October 2012. • Young, H.D.; Freedman, R.A. (2014). Sears and Zemansky's University Physics with Modern Physics Technology Update (13th ed.). Pearson Education. ISBN 978-1-292-02063-1. 
External links

• Physics at Quanta Magazine • Usenet Physics FAQ – FAQ compiled by sci.physics and other physics newsgroups • Website of the Nobel Prize in physics – Award for outstanding contributions to the subject • World of Physics – Online encyclopedic dictionary of physics • Nature Physics – Academic journal • Physics – Online magazine by the American Physical Society • Physics/Publications at Curlie – Directory of physics related media • The Vega Science Trust – Science videos, including physics • HyperPhysics website – Physics and astronomy mind-map from Georgia State University • Physics at MIT OpenCourseWare – Online course material from Massachusetts Institute of Technology • The Feynman Lectures on Physics
\begin{document} \urldef{\mailsa}\path|[email protected]| \title{Parallelization of continuous and discontinuous Galerkin dual-primal Isogeometric tearing and interconnecting methods} \author{Christoph Hofer$^1$} \institute{ $^1$ Johannes Kepler University (JKU), Altenbergerstr. 69, A-4040 Linz, Austria,\\ \mailsa } \noindent \maketitle \begin{abstract} In this paper we investigate the parallelization of dual-primal isogeometric tearing and interconnecting (IETI-DP) type methods for solving large-scale continuous and discontinuous Galerkin systems of equations arising from Isogeometric analysis of elliptic boundary value problems. These methods are extensions of the finite element tearing and interconnecting methods to isogeometric analysis. The algorithms are implemented by means of energy minimizing primal subspaces. We discuss how these methods can be parallelized efficiently in a distributed memory setting. Weak and strong scaling studies presented for two- and three-dimensional problems show excellent parallel efficiency. \end{abstract} \keywords{ Diffusion problems, Isogeometric analysis, discontinuous Galerkin, IETI-DP, parallelization, MPI } \pagestyle{myheadings} \thispagestyle{plain} \markboth{}{C. Hofer, Parallelization of cG and dG-IETI-DP methods} \section{Introduction} Isogeometric Analysis (IgA) is a novel methodology for the numerical solution of partial differential equations (PDEs). IgA was first introduced by Hughes, Cottrell and Bazilevs in \cite{HL:HughesCottrellBazilevs:2005a}; see also the monograph \cite{HL:CotrellHughesBazilevs:2009a} for a comprehensive presentation of the IgA framework and the recent survey article \cite{HL:BeiraodaVeigaBuffaSangalliVazquez:2014a}. The main principle is to use the same basis functions both for describing the geometry and for representing the discrete solution of the PDE problem under consideration.
The most common choices are B-Splines, Non-Uniform Rational B-Splines (NURBS), T-Splines, Truncated Hierarchical B-Splines (THB-Splines), etc., see, e.g., \cite{HL:GiannelliJuettlerSpeleers:2012a}, \cite{HL:GiannelliJuettlerSpeleers:2014a} and \cite{HL:BazilevsCaloCottrellEvans:2010a}. One of the strengths of IgA is the capability of creating high-order spline spaces, while keeping the number of degrees of freedom quite small. Moreover, having basis functions with high smoothness is useful when considering higher-order PDEs, e.g., the biharmonic equation. In many cases, the domain cannot be represented with a single mapping, referred to as \emph{geometrical mapping}. Complicated geometries are decomposed into simple domains, called \emph{patches}, which are topologically equivalent to a cube. The set of patches forming the computational domain is called a multipatch domain. The obtained patch parametrizations and the original geometry may not be identical. The result is small gaps and overlaps occurring at the interfaces of the patches, called \emph{segmentation crimes}, see \cite{HL:JuettlerKaplNguyenPanPauley:2014a}, \cite{HL:PauleyNguyenMayerSpehWeegerJuettler:2015a} and \cite{Hoschek_Lasser_CAD_book_1993} for a comprehensive analysis. Nevertheless, one still wants to solve PDEs on such domains. To do so, numerical schemes based on the discontinuous Galerkin (dG) method for elliptic PDEs were developed in \cite{HL:HoferLangerToulopoulos:2016a}, \cite{HL:HoferToulopoulos:2016a} and \cite{HL:HoferLangerToulopoulos:2016b}. There, the corresponding error analysis is also provided. In addition to domains with segmentation crimes, the dG formulation is very useful when considering different B-Spline spaces on each patch, e.g., non-matching grids at the interface and different spline degrees. An analysis of the dG-IgA formulation with extensions to low-regularity solutions can be found in \cite{HL:LangerToulopoulos:2015a}. 
For a detailed discussion of dG for finite element methods, we refer, e.g., to \cite{HL:Riviere:2008a} and \cite{HL:PietroErn:2012a}. In the present paper, we consider fast solution methods for linear systems arising from the discretization of elliptic PDEs by means of IgA. We investigate non-overlapping domain decomposition (DD) methods of the dual-primal tearing and interconnecting type. This type of method is equivalent to the so-called Balancing Domain Decomposition by Constraints (BDDC) methods, see \cite{HL:MandelDohrmannTezaur:2005a}, \cite{HL:ToselliWidlund:2005a}, \cite{HL:Pechstein:2013a} and references therein. The version based on a conforming Galerkin (cG) discretization, called the dual-primal isogeometric tearing and interconnecting (IETI-DP) method, was first introduced in \cite{HL:KleissPechsteinJuettlerTomar:2012a}, and the equivalent IgA BDDC method was analyzed in \cite{HL:VeigaChoPavarinoScacchi:2013a}. Further extensions of the analysis are presented in \cite{HL:HoferLanger:2016b}. The version based on the dG formulation, abbreviated by dG-IETI-DP, was introduced in \cite{HL:HoferLanger:2016a} and analyzed in \cite{HL:Hofer:2016a}; see \cite{HL:DryjaGalvisSarkis:2007a}, \cite{HL:DryjaGalvisSarkis:2013a} and \cite{HL:DryjaSarkis:2014a} for the corresponding finite element counterparts. We also want to mention developments in overlapping Schwarz methods, see, e.g., \cite{HL:VeigaChoPavarinoScacchi:2012a} and \cite{HL:VeigaChoPavarinoScacchi:2013b}. The aim of this paper is to present the parallel scalability of the cG and dG IETI-DP methods. We investigate weak and strong scaling in two- and three-dimensional domains for different B-Spline degrees. The implemented algorithms are based on energy minimizing primal subspaces, which simplifies the parallelization of the solver part at the cost of additional effort in the setup phase (assembling phase). 
We rephrase key parts of this algorithm and discuss how to realize the communication by means of the Message Passing Interface (MPI). In general, FETI-DP and the equivalent BDDC methods are by nature well suited for large-scale parallelization and have been widely studied for solving large-scale finite element equations, e.g., in \cite{HL:KlawonnRheinbach:2010a}, \cite{HL:Rheinbach:2009}, \cite{HL:KlawonnRheinbach:2006a} and \cite{HL:KlawonnLanserRheinbach:2015a}, see also \cite{HL:KlawonnLanserRheinbachStengelWellein:2015a} for a hybrid OpenMP/MPI version. Considering a domain decomposition with several tens of thousands of subdomains, the influence of the coarse grid problem becomes more and more significant. In particular, its LU factorization is the bottleneck of the algorithm. The remedy is to reformulate the FETI-DP system in such a way that the solution of the coarse grid problem is not required in the application of the system matrix, but only in the preconditioner. This enables the use of inexact methods like geometric or algebraic multigrid, see, e.g., \cite{HL:KlawonnLanserRheinbach:2016a}, \cite{HL:KlawonnLanserRheinbach:2015a}, \cite{HL:KlawonnRheinbach:2007a}, \cite{HL:KlawonnRheinbach:2010a} and \cite{HL:KlawonnRheinbachPavarino:2008a}. Moreover, inexact solvers can also be used in the scaled Dirichlet preconditioner and, if using the saddle point formulation, also for the local solvers, cf. \cite{HL:KlawonnRheinbach:2007a}, see also \cite{HL:KlawonnRheinbach:2010a}, \cite{HL:Rheinbach:2009} and references therein for alternative approaches by means of hybrid FETI. We also want to mention inexact versions of the BDDC method, see, e.g., \cite{HL:Tu:2007b}, \cite{HL:Tu:2007a}, \cite{HL:Dohrmann:2007a}, \cite{HL:LiWidlund:2007a} and \cite{HL:Zampini:2014}. FETI-DP methods have also been successfully applied to non-linear problems by means of a non-linear version of FETI-DP. 
We want to highlight recent advances presented, e.g., in \cite{HL:KlawonnLanserRheinbach:2014a}, \cite{HL:KlawonnLanserRheinbach:2016a} and \cite{HL:KlawonnLanserRheinbach:2015a}, showing excellent scalability on large-scale supercomputers. In the present paper, we consider the following second-order elliptic boundary value problem in a bounded Lipschitz domain $\Omega\subset \mathbb{R}^d,$ with $d\in\{2,3\}$: Find $u: \overline{\Omega} \rightarrow \mathbb{R}$ such that\\ \begin{equation} \label{equ:ModelStrong} - \mdiv(\alpha \grad u) = f \; \text{in } \Omega,\; u = 0 \; \text{on } \Gamma_D, \;\text{and}\; \alpha \frac{\partial u}{\partial n} = g_N \; \text{on } \Gamma_N, \end{equation} with given, sufficiently smooth data $f, g_N \text{ and } \alpha$, where the coefficient $\alpha$ is uniformly bounded from below and above by some positive constants $\alpha_{min}$ and $\alpha_{max}$, respectively. The boundary $\partial \Omega$ of the computational domain $\Omega$ consists of a Dirichlet part $\Gamma_D$ of positive boundary measure and a Neumann part $\Gamma_N$. Furthermore, we assume that the Dirichlet boundary $\Gamma_D$ is always a union of complete patch sides (edges/faces in 2D/3D), which are uniquely defined in IgA. Without loss of generality, we assume homogeneous Dirichlet conditions. This can always be achieved by homogenization. By means of integration by parts, we arrive at the weak formulation of \eqref{equ:ModelStrong}, which reads as follows: Find $u \in V_{D} = \{ u\in H^1(\Omega): \gamma_0 u = 0 \text{ on } \Gamma_D \}$ such that \begin{align} \label{equ:ModelVar} a(u,v) = \left\langle F, v \right\rangle \quad \forall v \in V_{D}, \end{align} where $\gamma_0$ denotes the trace operator. 
The bilinear form $a(\cdot,\cdot): V_{D} \times V_{D} \rightarrow \mathbb{R}$ and the linear form $\left\langle F, \cdot \right\rangle: V_{D} \rightarrow \mathbb{R}$ are given by the expressions \begin{equation*} a(u,v) := \int_\Omega \alpha \nabla u \cdot \nabla v \,dx \quad \mbox{and} \quad \left\langle F, v \right\rangle := \int_\Omega f v \,dx + \int_{\Gamma_N} g_N v \,ds. \end{equation*} The remainder of the paper is organized as follows: In Section~\ref{sec:iga}, we give a short introduction to isogeometric analysis, providing the basic definitions and notations. Section~\ref{sec:galerkin-IGA} describes the different discretizations of the model problem obtained by the continuous and discontinuous Galerkin methods. In Section~\ref{sec:IETI-method}, we formulate the IETI-DP method for both discretizations and provide implementation details. The way the algorithm is parallelized is explained in Section~\ref{sec:para}. Numerical examples are presented in Section~\ref{sec:num_ex}. Finally, we draw some conclusions in Section~\ref{sec:conclusion}. \section{Isogeometric Analysis} \label{sec:iga} In this section, we give a very short overview of IgA. For a more comprehensive study, we refer to, e.g., \cite{HL:CotrellHughesBazilevs:2009a} and \cite{HL:LangerToulopoulos:2015a}. Let $\p{\Omega}:=(0,1)^d,d\in\{2,3\}$, be the $d$-dimensional unit cube, which we refer to as the \emph{parameter domain}. Let $p_\iota$ and $M_\iota,\iota\in\{1,\ldots,d\}$, be the B-Spline degree and the number of basis functions in $x_\iota$-direction. Moreover, let $\Xi_\iota = \{\xi_1=0,\xi_2,\ldots,\xi_{n_\iota}=1\}$, $n_\iota=M_\iota+p_\iota+1$, be a partition of $[0,1]$, called \emph{knot vector}. With these ingredients, we can define the B-Spline basis $\p{N}_{i,p}$, $i\in\{1,\ldots,M_\iota\}$, on $[0,1]$ via Cox-De Boor's algorithm, cf. \cite{HL:CotrellHughesBazilevs:2009a}. 
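For illustration, the Cox-De Boor recursion can be sketched in a few lines of plain Python; the function name and the half-open interval convention at the right endpoint are our own choices, not taken from any IgA library:

```python
def bspline_basis(knots, p, i, x):
    """Evaluate the i-th B-Spline basis function of degree p at x
    via the Cox-De Boor recursion; 0/0 terms are taken as 0."""
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    val = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        val += (x - knots[i]) / denom * bspline_basis(knots, p - 1, i, x)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        val += (knots[i + p + 1] - x) / denom * bspline_basis(knots, p - 1, i + 1, x)
    return val
```

On an open knot vector, the resulting basis functions are non-negative and form a partition of unity in the interior of $[0,1]$.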
The generalization to $\p{\Omega}$ is realized by considering a tensor product, again denoted by $\p{N}_{i,p}$, where $i=(i_1,\ldots,i_d)$ and $p=(p_1,\ldots,p_d)$ are multi-indices. For notational simplicity, we define ${\mathcal{I}}:= \{(i_1,\ldots,i_d)\,|\,i_\iota \in \{1,\ldots,M_\iota\}\} $ as the set of multi-indices. Since the tensor product knot vector $\Xi$ provides a partition of $\p{\Omega}$, it introduces a mesh $\p{\mathcal{Q}}_h$, and we denote a mesh element by $\p{Q}$, called \emph{cell}. The B-Spline basis functions parametrize the computational domain $\Omega$, also called \emph{physical domain}. It is given as the image of the parameter domain under the \emph{geometrical mapping} $G :\; \p{\Omega} \rightarrow \mathbb{R}^{{d}}$, defined as \begin{align*} G(\xi) := \sum_{i\in \mathcal{I}} P_i \p{N}_{i,p}(\xi), \end{align*} with the control points $P_i \in \mathbb{R}^{{d}}$, $i\in \mathcal{I}$. The image of the mesh $\p{\mathcal{Q}}_h$ under $G$ defines the mesh on $\Omega$, denoted by $\mathcal{Q}_h$ with cells $Q$. Both meshes possess a characteristic mesh size $\p{h}$ and $h$, respectively. More complicated geometries $\Omega$ have to be represented with multiple non-overlapping domains $\Omega\sMP:=G\sMP(\p{\Omega})$, $k=1,\ldots,N$, called \emph{patches}, where each patch is associated with a different geometrical mapping $G\sMP$. We sometimes call $\overline{\Omega}:=\bigcup_{k=1}^N\overline{\Omega}\sMP$ a \emph{multipatch domain}. Furthermore, we denote the set of all indices $l$ such that $\Omega^{(k)}$ and $\Omega^{(l)}$ have a common interface $F^{(kl)}$ by ${\mathcal{I}}_{\mathcal{F}}^{(k)}$. We define the interface $\Gamma\sMP$ of $\Omega\sMP$ as $\Gamma\sMP := \bigcup_{l\in{\mathcal{I}}_{\mathcal{F}}\sMP} F^{(kl)}$. We use B-Splines not only for defining the geometry, but also for representing the approximate solution of our PDE. 
This motivates the definition of the basis functions in the physical space $\g{N}_{i,p}:=\p{N}_{i,p}~\circ~G^{-1}$ and the corresponding discrete space \begin{align} \label{equ:gVh} V_h:=\text{span}\{\g{N}_{i,p}\}_{i\in{\mathcal{I}}}. \end{align} Moreover, each function $u_h(x) = \sum_{i\in\mathcal{I}} u_i \g{N}_{i,p}(x)$ is associated with the coefficient vector $\boldsymbol{u} = (u_i)_{i\in\mathcal{I}}$. This map is known as the \emph{Ritz isomorphism} or, in connection with IgA, the \emph{IgA isomorphism}. One usually writes this relation as $u_h \leftrightarrow \boldsymbol{u}$. In the following, we will use the notation $u_h$ for both the function and its vector representation. If we consider a single patch $\Omega^{(k)}$ of a multipatch domain $\Omega$, we will use the notation $V_{h}^{(k)},\g{N}_{i,p}^{(k)},\p{N}_{i,p}^{(k)}, G^{(k)}, \ldots$ with the analogous definitions. To keep notation simple, we will use $h_k$ and $ \p{h}_k$ instead of $h\sMP$ and $\p{h}\sMP$, respectively. \section{Galerkin Methods for Isogeometric Analysis} \label{sec:galerkin-IGA} In this section, we rephrase the variational formulation of the continuous and discontinuous Galerkin methods for multipatch IgA systems. \subsection{Continuous Galerkin method} We consider the finite-dimensional subspace $V_{h}^{cG}$ of $V_D$, where $V_{h}^{cG}$ is given by \begin{align*} V_h^{cG}:= \{v\,|\, v|_{\Omega\sMP}\in V_h\sMP\} \cap H^1({\Omega}). \end{align*} Since we restrict ourselves to homogeneous Dirichlet conditions, we look for the Galerkin approximation $u_h$ in $V_{D,h}^{cG} \subset V_h^{cG}$, where $V_{D,h}^{cG}$ contains all functions which vanish on the Dirichlet boundary. The Galerkin IgA scheme reads as follows: Find $u_h \in V_{D,h}^{cG}$ such that \begin{align} \label{HL:equ:ModelDisc} a(u_h,v_h) = \left\langle F, v_h \right\rangle \quad \forall v_h \in V_{D,h}^{cG}. 
\end{align} There exists a unique IgA solution $u_h \in V_{D,h}^{cG}$ of (\ref{HL:equ:ModelDisc}) that converges to the solution $u \in V_{D}$ of (\ref{equ:ModelVar}) as $h$ tends to $0$. Due to Cea's lemma, the usual discretization error estimates in the $H^1$-norm follow from the corresponding approximation error estimates, see \cite{HL:BazilevsVeigaCottrellHughesSangalli:2006a} or \cite{HL:BeiraodaVeigaBuffaSangalliVazquez:2014a}. \subsection{Discontinuous Galerkin method} In the dG-IgA scheme, we again use the spaces $V_{h}^{(k)}$ of B-Splines defined in \eqref{equ:gVh}, whereas now discontinuities are allowed across the patch interfaces $F\sMP[kl]$. The continuity of the function values and of their normal fluxes is then enforced in a weak sense by adding additional terms to the bilinear form. We define the dG-IgA space \begin{align} \label{equ:gVh_glob} V_{h}^{dG}:= \{v\,| \,v|_{\Omega^{(k)}}\in V_{h}^{(k)}\}, \end{align} where $V_{h}^{(k)}$ is defined as in \eqref{equ:gVh}. A comprehensive study of dG schemes for FE can be found in \cite{HL:Riviere:2008a} and \cite{HL:PietroErn:2012a}. For an analysis of the dG-IgA scheme, we refer to \cite{HL:LangerToulopoulos:2015a}. We define $V_{D,h}^{dG}$ as the space of all functions from $V_{h}^{dG}$ that vanish on the Dirichlet boundary $\Gamma_D$. 
Having these definitions at hand, we can define the discrete problem based on the Symmetric Interior Penalty (SIP) dG formulation as follows: Find $u_h \in V_{D,h}^{dG}$ such that \begin{align} \label{equ:ModelDiscDG} a_h(u_h,v_h) = \left\langle F, v_h \right\rangle \quad \forall v_h \in V_{D,h}^{dG}, \end{align} where \begin{align*} a_h(u,v) &:= \sum_{k=1}^N a_e^{(k)}(u,v) \quad \text{and} \quad \left\langle F, v \right\rangle:=\sum_{k=1}^N \left( \int_{\Omega^{(k)}}f v^{(k)} dx+\int_{\Gamma_N\sMP} g_N v\sMP \,ds\right), \\ a^{(k)}_e(u,v) &:= a^{(k)}(u,v) + s^{(k)}(u,v) + p^{(k)}(u,v), \end{align*} and \begin{align*} a^{(k)}(u,v) &:= \int_{\Omega^{(k)}}\alpha^{(k)} \nabla u^{(k)} \cdot \nabla v^{(k)} dx,\\ s^{(k)}(u,v)&:= \sum_{l\in{\mathcal{I}}_{\mathcal{F}}^{(k)}} \int_{F^{(kl)}}\frac{\alpha^{(k)}}{2}\left(\frac{\partial u^{(k)}}{\partial n}(v^{(l)}-v^{(k)})+ \frac{\partial v^{(k)}}{\partial n}(u^{(l)}-u^{(k)})\right)ds,\\ p^{(k)}(u,v)&:= \sum_{l\in{\mathcal{I}}_{\mathcal{F}}^{(k)}} \int_{F^{(kl)}}\frac{\delta \alpha^{(k)}}{h_{kl}}(u^{(l)}-u^{(k)})(v^{(l)}-v^{(k)})\,ds. \end{align*} Here, $\frac{\partial}{\partial n}$ denotes the derivative in the direction of the outward normal vector, $\delta$ a sufficiently large positive penalty parameter, and $h_{kl}$ the harmonic average of the adjacent mesh sizes, i.e., $h_{kl}= 2h_k h_l/(h_k + h_l)$. We equip $V_{D,h}^{dG}$ with the dG-norm \begin{align} \label{HL:dgNorm} \left\|u\right\|_{dG}^2 = \sum_{k = 1}^N\left[\alpha^{(k)} \left\|\nabla u^{(k)}\right\|_{L^2(\Omega^{(k)})}^2 + \sum_{l\in{\mathcal{I}}_{\mathcal{F}}^{(k)}} \frac{\delta \alpha^{(k)}}{h_{kl}}\int_{F^{(kl)}} (u^{(k)} - u^{(l)})^2 ds\right]. \end{align} Furthermore, for later use, we define the bilinear forms \begin{align*} d_h(u,v) = \sum_{k =1}^N d^{(k)}(u,v) \quad \text{where} \quad d^{(k)}(u,v)= a^{(k)}(u,v) + p^{(k)}(u,v). \end{align*} We note that $\left\|u_h\right\|_{dG}^2 = d_h(u_h,u_h)$. 
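To make the interface terms concrete, the following Python sketch (the nodal quadrature layout, the function names, and the argument list are illustrative assumptions, not part of the actual implementation) evaluates the harmonic average $h_{kl}$ and a quadrature approximation of the penalty form $p^{(k)}(u,v)$ on a single interface:

```python
import numpy as np

def harmonic_average(h_k, h_l):
    """Harmonic average h_kl = 2 h_k h_l / (h_k + h_l) of adjacent mesh sizes."""
    return 2.0 * h_k * h_l / (h_k + h_l)

def penalty_term(u_k, u_l, v_k, v_l, weights, alpha_k, delta, h_kl):
    """Quadrature sketch of p^{(k)}(u,v) on one interface F^{(kl)}:
    u_k, u_l, v_k, v_l hold the traces at the quadrature nodes,
    weights the corresponding quadrature weights."""
    return delta * alpha_k / h_kl * np.sum(weights * (u_l - u_k) * (v_l - v_k))
```

The term vanishes for matching traces and penalizes the inter-patch jump otherwise.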
\begin{lemma} \label{lem:wellPosedDg} Let $\delta$ be sufficiently large. Then there exist two positive constants $\gamma_0$ and $\gamma_1$, which are independent of $h_k,H_k,\delta,\alpha^{(k)}$ and $u_h$, such that the inequalities \begin{align} \label{equ:equivPatchNormdG} \gamma_0 d^{(k)}(u_h,u_h)\leq a_e^{(k)}(u_h,u_h) \leq \gamma_1 d^{(k)}(u_h,u_h), \quad \forall u_h\in V_{D,h}^{dG} \end{align} are valid for all $k=1,2,\ldots,N$. Furthermore, we have the inequalities \begin{align} \label{equ:equivNormdG} \gamma_0 \left\|u_h\right\|_{dG}^2\leq a_h(u_h,u_h)\leq \gamma_1 \left\|u_h\right\|_{dG}^2, \quad \forall u_h\in V_{D,h}^{dG}. \end{align} \end{lemma} This lemma is the IgA counterpart of Lemma 2.1 in \cite{HL:DryjaGalvisSarkis:2013a}, and the proof can be found in \cite{HL:HoferLanger:2016a}. A direct implication of \eqref{equ:equivNormdG} is the well-posedness of the discrete problem \eqref{equ:ModelDiscDG} by the Lax-Milgram theorem. The consistency of the method together with the interpolation estimates of B-Splines leads to the a priori error estimate established in \cite{HL:LangerToulopoulos:2015a}. We note that, in \cite{HL:LangerToulopoulos:2015a}, the results were obtained for the Incomplete Interior Penalty (IIP) scheme. An extension to SIP-dG and the use of harmonic averages for $h$ and/or $\alpha$ are discussed in Remark~3.1 in \cite{HL:LangerToulopoulos:2015a}, see also \cite{HL:LangerMantzaflarisMooreToulopoulos:2015b}. For both the cG and the dG formulation, we choose the B-Spline functions $\{\g{N}_{i,p}\}_{i\in\mathcal{I}_0}$ as a basis for the space $V_{h}^X,X\in\{cG,dG\}$, where $\mathcal{I}_0$ contains all indices of $\mathcal{I}$ for which the corresponding basis functions do not have support on the Dirichlet boundary. In the cG case, the basis functions on the interface are identified accordingly to obtain a conforming subspace of $V_{D}$. 
For the remainder of this paper, we drop the superscript $X\in\{cG,dG\}$ and use the symbol $V_h$ for both formulations. Depending on the considered formulation, one needs to use the appropriate space $V_{h}^X,X\in\{cG,dG\}$. The IgA schemes \eqref{HL:equ:ModelDisc} and \eqref{equ:ModelDiscDG} are equivalent to the system of linear IgA equations \begin{align} \label{equ:Ku=f_DG} \boldsymbol{K} \boldsymbol{u} = \boldsymbol{f}, \end{align} where $\boldsymbol{K} = (\boldsymbol{K}_{i,j})_{i,j\in {\mathcal{I}}_0}$ and $\boldsymbol{f}= (\boldsymbol{f}_i)_{i\in {\mathcal{I}}_0}$ denote the stiffness matrix and the load vector, respectively, with $ \boldsymbol{K}_{i,j} = a(\g{N}_{j,p},\g{N}_{i,p})$ or $ \boldsymbol{K}_{i,j} = a_h(\g{N}_{j,p},\g{N}_{i,p})$ and $\boldsymbol{f}_i = \left\langle F, \g{N}_{i,p} \right\rangle$, and where $\boldsymbol{u}$ is the vector representation of $u_h$. \section{IETI-DP methods and their implementation} \label{sec:IETI-method} In this section, we recall the main ingredients of the cG-IETI-DP and dG-IETI-DP methods. We focus mainly on the implementation, since this is the relevant part for the parallelization. \subsection{Derivation of the method} \label{sec:derivation} A rigorous and formal definition of the cG-IETI-DP and dG-IETI-DP methods is quite technical and not necessary for describing the parallelization, which is the purpose of this paper. Therefore, we are not going to present the whole derivation of each method. We will give a general description, which is valid for both methods. For a detailed derivation, we refer to \cite{HL:HoferLanger:2016b} and \cite{HL:HoferLanger:2016a}. The first step is to introduce additional dofs on the interface to decouple the local problems and to incorporate their connection via Lagrange multipliers $\boldsymbol{\lambda}$. This is quite straightforward in the case of the cG formulation, but more involved in the dG case. 
In either case, we can equivalently rewrite \eqref{equ:Ku=f_DG} as: Find $(u,\boldsymbol{\lambda}) \in V_{h,e} \times U$ such that \begin{align} \label{equ:saddlePointSing} \MatTwo{K_e}{B^T}{B}{0} \VecTwo{u}{\boldsymbol{\lambda}} = \VecTwo{f}{0}, \end{align} where $V_{h,e}\supset V_{h}$ is the decoupled space with additional dofs and $U$ is the set of Lagrange multipliers. The jump operator $B$ enforces the ``continuity'' of the solution $u$ in the sense that $\ker(B) \equiv V_{h}$. The matrix $K_e$ is the block diagonal matrix of the patch local stiffness matrices $K\sMP$, i.e., $K_e = \text{diag}(K\sMP)$. Since $B$ only acts on the patch interface dofs, we can first reorder the stiffness matrix in the following way \begin{align*} K\sMP = \MatTwo{K_{BB}\sMP}{K_{BI}\sMP}{K_{IB}\sMP}{K_{II}\sMP}, \qquad f\sMP = \VecTwo{f_B\sMP}{f_I\sMP} \end{align*} and then consider only the Schur complement representation: Find $(u_B,\boldsymbol{\lambda}) \in W \times U$ such that \begin{align} \label{equ:saddlePointSingSchur} \MatTwo{S_e}{B_B^T}{B_B}{0} \VecTwo{u_B}{\boldsymbol{\lambda}} = \VecTwo{g}{0}, \end{align} where $S_e=\text{diag}(S_e\sMP)$, $S_e\sMP= K_{BB}\sMP- K_{BI}\sMP(K_{II}\sMP)^{-1}K_{IB}\sMP$ and $g\sMP= f_B\sMP - K_{BI}\sMP (K_{II}\sMP)^{-1}f_I\sMP$. The space $W$ is the restriction of $V_{h,e}$ to the interface. For completeness, we denote its ``continuous'' representation by $\widehat{W}$, i.e., $\ker(B_B) = \widehat{W}$. Equation \eqref{equ:saddlePointSingSchur} is valid for both the cG-IETI-DP and the dG-IETI-DP method, but the matrix $K_e$ has different entries and the number of boundary dofs (subscript $B$) is different. Fortunately, this does not change the way the algorithm is implemented and parallelized. In the following, we will drop the subscript $B$ in $u_B$ and $B_B$ for notational simplicity. The matrix $S_e$ is not invertible and, hence, we cannot build the Schur complement system of \eqref{equ:saddlePointSingSchur}. 
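The patch-wise static condensation above can be sketched with dense NumPy blocks; this is a toy stand-in, while the actual implementation works with sparse matrices and stored factorizations:

```python
import numpy as np

def patch_schur(K_BB, K_BI, K_IB, K_II, f_B, f_I):
    """Condense the interior dofs of one patch:
    S = K_BB - K_BI K_II^{-1} K_IB  and  g = f_B - K_BI K_II^{-1} f_I."""
    S = K_BB - K_BI @ np.linalg.solve(K_II, K_IB)
    g = f_B - K_BI @ np.linalg.solve(K_II, f_I)
    return S, g

def apply_schur(K_BB, K_BI, K_IB, K_II, w):
    """Apply S to a vector without forming S: one interior Dirichlet solve."""
    return K_BB @ w - K_BI @ np.linalg.solve(K_II, K_IB @ w)
```

Solving the condensed system $S u_B = g$ reproduces the interface part of the solution of the full patch system, and the matrix-free application needs only one interior solve per call.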
To overcome this, we introduce an intermediate space $\widetilde{W}$, such that $\widehat{W} \subset \widetilde{W} \subset W$, and $S_e$ restricted to $\widetilde{W}$, denoted by $\widetilde{S}$, is invertible. We introduce primal variables as a set $\Psi\subset \widehat{W}^*$ and define the spaces \begin{equation*} \widetilde{W} := \{w\in W: \psi(w^{(k)}) = \psi(w^{(l)}), \forall\psi \in \Psi, \forall k>l \} \end{equation*} and \begin{equation*} W_{\Delta} := \prod_{k=1}^N W_{\Delta}^{(k)},\text{ with} \quad W_{\Delta}^{(k)}:=\{w^{(k)}\in W^{(k)}:\, \psi(w^{(k)}) =0\; \forall\psi \in \Psi\}. \end{equation*} Moreover, we introduce the space $W_{\Pi} \subset \widehat{W}$ such that $\widetilde{W} = W_{\Pi} \oplus W_{\Delta}.$ We call $W_{\Pi}$ the \emph{primal space} and $W_{\Delta}$ the \emph{dual space}. Typically, the set $\Psi$ corresponds to ``continuous'' vertex values, edge averages and/or face averages. Since $\widetilde{W} \subset W$, there is a natural embedding $\widetilde{I}: \widetilde{W} \to W$. Let the jump operator restricted to $\widetilde{W}$ be $\widetilde{B} := B\widetilde{I} : \widetilde{W} \to U^*.$ Now we are in a position to reformulate problem \eqref{equ:saddlePointSingSchur} in the space $\widetilde{W}$ as follows: Find $(u,\boldsymbol{\lambda}) \in \widetilde{W} \times U:$ \begin{align} \label{equ:saddlePointReg} \MatTwo{\widetilde{S}}{\widetilde{B}^T}{\widetilde{B}}{0} \VecTwo{u}{\boldsymbol{\lambda}} = \VecTwo{\widetilde{g}}{0}, \end{align} where $\widetilde{g} := \widetilde{I}^T g$ and $\widetilde{B}^T= \widetilde{I}^T B^T$. Here, $\widetilde{I}^T: W^* \to \widetilde{W}^*$ denotes the adjoint of $\widetilde{I}$. By construction, $\widetilde{S}$ is SPD on $\widetilde{W}$. Hence, we can define the Schur complement $F$ and the corresponding right-hand side as follows: \begin{align*} F:= \widetilde{B} \widetilde{S}^{-1}\widetilde{B}^T, \quad d:= \widetilde{B}\widetilde{S}^{-1} \widetilde{g}. 
\end{align*} Hence, the saddle point system \eqref{equ:saddlePointReg} is equivalent to the Schur complement problem: \begin{align} \label{equ:SchurFinal} \text{Find } \boldsymbol{\lambda} \in U: \quad F\boldsymbol{\lambda} = d. \end{align} Equation \eqref{equ:SchurFinal} is solved by means of the PCG algorithm, but it requires an appropriate preconditioner in order to obtain an efficient solver. Recalling the definition of $S_e = \text{diag}(S_e^{(k)})_{k=1}^N$, we define the scaled Dirichlet preconditioner $M_{sD}^{-1} := B_D S_e B_D^T,$ where $B_D$ is a scaled version of the jump operator $B$. The scaled jump operator $B_D$ is defined such that the operator enforces the constraints \begin{align*} {\delta^\dagger}^{(l)}_j(\boldsymbol{u}^{(k)})^{(k)}_i - {\delta^\dagger}^{(k)}_i(\boldsymbol{u}^{(l)})^{(k)}_j = 0 \quad\forall (i,j)\in B_e(k,l), \;\forall l\in{\mathcal{I}}_{\mathcal{F}}^{(k)}, \end{align*} and \begin{align*} {\delta^\dagger}^{(l)}_j(\boldsymbol{u}^{(k)})^{(l)}_i - {\delta^\dagger}^{(k)}_i(\boldsymbol{u}^{(l)})^{(l)}_j = 0 \quad\forall (i,j)\in B_e(l,k), \;\forall l\in{\mathcal{I}}_{\mathcal{F}}^{(k)}, \end{align*} where, for $(i,j)\in B_e(k,l)$, $ {\delta^\dagger}^{(k)}_i:= \rho^{(k)}_i/\sum_{l\in{\mathcal{I}}_{\mathcal{F}}^{(k)}} \rho^{(l)}_j $ is an appropriate scaling. One can show that the preconditioned system has a quasi-optimal condition number bound with respect to $H/h:=\max_k(H_k/h_k)$, i.e., \begin{align} \label{equ:kappa} \kappa(M_{sD}^{-1}F_{|\text{ker}(\widetilde{B}^T)}) \leq C (1+\log(H/h))^2, \end{align} for both versions, see \cite{HL:HoferLanger:2016b}, \cite{HL:Hofer:2016a} and \cite{HL:VeigaChoPavarinoScacchi:2013a}. Moreover, numerical examples also show robustness with respect to jumps in the diffusion coefficient and only a weak dependence on the B-Spline degree $p$, see, e.g., \cite{HL:HoferLanger:2016a}, \cite{HL:HoferLanger:2016b} and \cite{HL:VeigaChoPavarinoScacchi:2013a}. 
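The PCG solver touches $F$ and $M_{sD}^{-1}$ only through their action on a vector. A minimal matrix-free sketch (NumPy; dense stand-ins replace the actual IETI-DP operators, and the function names are our own) reads:

```python
import numpy as np

def pcg(apply_F, apply_Minv, d, tol=1e-12, maxit=200):
    """Preconditioned conjugate gradients for F lam = d; the operators
    are passed as callables, so F and M_sD^{-1} are never formed."""
    lam = np.zeros_like(d)
    r = d.copy()                 # initial residual for lam = 0
    z = apply_Minv(r)            # preconditioned residual
    s = z.copy()                 # search direction
    rz = r @ z
    for _ in range(maxit):
        Fs = apply_F(s)
        alpha = rz / (s @ Fs)
        lam += alpha * s
        r -= alpha * Fs
        if np.linalg.norm(r) <= tol * np.linalg.norm(d):
            break
        z = apply_Minv(r)
        rz, rz_old = r @ z, rz
        s = z + (rz / rz_old) * s
    return lam
```

In the actual solver, `apply_F` chains the applications of $B^T$, $\widetilde{I}^T$, $\widetilde{S}^{-1}$, $\widetilde{I}$ and $B$, and `apply_Minv` those of $B_D^T$, $S_e$ and $B_D$.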
\subsection{Implementation of the algorithm} \label{sec:implementation} Since $F$ is symmetric and positive definite on $\widetilde{U}$, we can solve the linear system $F \mathbf{\boldsymbol{\lambda}} = d$ by means of the PCG algorithm, where we use $M_{sD}^{-1}$ as preconditioner. The PCG algorithm does not require an explicit representation of the matrices $F$ and $M_{sD}^{-1}$, since we just need their application to a vector. There are different ways to provide an efficient implementation. We will follow the concept of energy minimizing primal subspaces. The idea is to split the space $\widetilde{W}$ into $\widetilde{W}_{\Pi} \oplus \prod_{k=1}^N \widetilde{W}_{\Delta}^{(k)}$, such that $\widetilde{W}_{\Delta}^{(k)} {\perp_S} \widetilde{W}_{\Pi}$ for all $k$, i.e., we choose $\widetilde{W}_{\Pi}:=\widetilde{W}_{\Delta}^{\perp_S}$, see, e.g., \cite{HL:Pechstein:2013a} and \cite{HL:Dohrmann:2003a}. By means of this choice, the operators $\widetilde{S}$ and $\widetilde{S}^{-1}$ have the following forms \begin{align*} \widetilde{S} &= \MatTwo{S_{\Pi \Pi}}{0}{0}{S_{\Delta \Delta}} \text{ and } \widetilde{S}^{-1} = \MatTwo{S_{\Pi \Pi}^{-1} }{0}{0}{S_{\Delta \Delta}^{-1}}, \end{align*} where $S_{\Pi \Pi}$ and $S_{\Delta \Delta}$ are the restrictions of $\widetilde{S}$ to the corresponding subspaces. We note that $S_{\Delta \Delta}$ can be seen as a block diagonal operator, i.e., $S_{\Delta \Delta} = \text{diag}(S_{\Delta \Delta}^{(k)})$. The application of $F$ and $M_{sD}^{-1}$ is summarized in Algorithm~\ref{HL:alg:applyF}. \subsubsection{Constructing a basis for the primal subspace} First, we need to provide an appropriate local basis $\{\widetilde{\phi}_j\}_{j=1}^{n_{\Pi}}$ for $\widetilde{W}_{\Pi}$, where $n_{\Pi}$ is the number of primal variables. 
We require the basis to be nodal with respect to the primal variables, i.e., $\psi_i(\widetilde{\phi}_j) = \delta_{i,j},$ for $i,j \in\{1,\ldots,n_{\Pi}\}.$ In order to construct such a basis, we introduce the constraint matrix $C^{(k)}: W^{(k)}\to \mathbb{R}^{n_{\Pi}^{(k)}}$ for each patch $\Omega^{(k)}$, which realizes the primal variables, i.e., $ (C^{(k)} v)_j = \psi_{i(k,j)}(v)$ for $ v\in W$ and $j\in\{1,\ldots,n_{\Pi}^{(k)}\},$ where $n_{\Pi}^{(k)}$ is the number of primal variables associated with $\Omega^{(k)}$ and $i(k,j)$ is the global index of the $j$-th primal variable on $\Omega^{(k)}$. For each patch $k$, the basis functions $\{\widetilde{\phi}_j^{(k)}\}_{j=1}^{n_{\Pi}^{(k)}}$ of $\widetilde{W}_{\Pi}^{(k)}$ are the solution of the system \begin{align} \label{HL:equ:KC_basis} \MatThree{K_{BB}^{(k)}}{K_{BI}^{(k)}}{{C^{(k)}}^T}{K_{IB}^{(k)}}{K_{II}^{(k)}}{0}{C^{(k)}}{0}{0}\VecThree{\widetilde{\phi}_j^{(k)}}{\cdot}{\widetilde{\mathbf{\mu}}_j^{(k)}} = \VecThree{0}{0}{\mathbf{e}_j^{(k)}}, \end{align} where $\mathbf{e}_j^{(k)} \in \mathbb{R}^{n_{\Pi}^{(k)}}$ is the $j$-th unit vector. Here, we use an equivalent formulation with the system matrix $K\sMP$. For each patch $k$, the LU factorization of this matrix is computed and stored. \paragraph{Application of ${S_{\Delta \Delta}^{(k)}}^{-1}:$ } The application of ${S_{\Delta \Delta}^{(k)}}^{-1}$ corresponds to solving a local Neumann problem in the space $\widetilde{W}_{\Delta}$, i.e., $S^{(k)} w^{(k)}= f_{\Delta}^{(k)}$ with the constraint $C^{(k)} w^{(k)} = 0$. This problem can be rewritten as a saddle point problem of the form \begin{align*} \label{HL:equ:SC_solution} \MatThree{K_{BB}^{(k)}}{K_{BI}^{(k)}}{{C^{(k)}}^T}{K_{IB}^{(k)}}{K_{II}^{(k)}}{0}{C^{(k)}}{0}{0}\VecThree{w^{(k)}}{\cdot}{\cdot} = \VecThree{f_{\Delta}^{(k)}}{0}{0}. \end{align*} From (\ref{HL:equ:KC_basis}), the LU factorization of this matrix is already available. 
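A dense NumPy sketch of this construction (illustrative only: in the implementation, the matrix is sparse and its stored LU factorization replaces the direct solve) computes all basis functions and multipliers of one patch at once:

```python
import numpy as np

def primal_basis(K, C, n_B):
    """Solve the augmented saddle point system for all unit vectors e_j
    at once; C realizes the primal variables and acts on the first
    n_B (boundary) dofs of the patch stiffness matrix K."""
    n = K.shape[0]
    n_Pi = C.shape[0]
    C_full = np.hstack([C, np.zeros((n_Pi, n - n_B))])
    A = np.block([[K, C_full.T],
                  [C_full, np.zeros((n_Pi, n_Pi))]])
    rhs = np.vstack([np.zeros((n, n_Pi)), np.eye(n_Pi)])
    sol = np.linalg.solve(A, rhs)   # one factorization, n_Pi right-hand sides
    Phi = sol[:n, :]                # basis functions (boundary and interior part)
    Mu = sol[n:, :]                 # Lagrange multipliers, column j = mu_j
    return Phi, Mu
```

By construction, the boundary part of each $\widetilde{\phi}_j$ is nodal with respect to the primal variables, and the multipliers are stored together with the factorization.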
\paragraph{Application of ${\mathbf{S}_{\Pi\Pi}^{(k)}}^{-1}:$} The matrix $\mathbf{S}_{\Pi \Pi}$ can be assembled from the patch local matrices $\mathbf{S}_{\Pi\Pi}^{(k)}$. Let $\{\widetilde{\phi}_j^{(k)}\}_{j=1}^{n_{\Pi}^{(k)}}$ be the basis of $\widetilde{W}_{\Pi}^{(k)}$. The construction of $\{\widetilde{\phi}_j^{(k)}\}_{j=1}^{n_{\Pi}^{(k)}}$ in (\ref{HL:equ:KC_basis}) provides \begin{align*} \left(\mathbf{S}_{\Pi\Pi}^{(k)}\right)_{i,j} &= \left\langle S^{(k)} \widetilde{\phi}_i^{(k)}, \widetilde{\phi}_j^{(k)} \right\rangle = -\left\langle{C^{(k)}}^T \widetilde{\mathbf{\mu}}_i^{(k)},\widetilde{\phi}_j^{(k)}\right\rangle = -\left\langle \widetilde{\mathbf{\mu}}_i^{(k)}, C^{(k)} \widetilde{\phi}_j^{(k)} \right\rangle\\ & = -\left\langle \widetilde{\mathbf{\mu}}_i^{(k)}, \mathbf{e}_{j}^{(k)} \right\rangle = - \left(\widetilde{\mathbf{\mu}}_i^{(k)}\right)_j, \end{align*} where $i,j\in\{1,\ldots,n_{\Pi}^{(k)}\}$. Therefore, we can reuse the Lagrange multipliers $\widetilde{\mathbf{\mu}}_i^{(k)}$ obtained in (\ref{HL:equ:KC_basis}) and assemble $\mathbf{S}_{\Pi \Pi}^{(k)}$ from them. Once $\mathbf{S}_{\Pi \Pi}$ is assembled, its LU factorization can be calculated and stored. \subsubsection{Application of $\widetilde{I}$ and $\widetilde{I}^T$} The last building block is the embedding $\widetilde{I}: \widetilde{W}\to W$ and its adjoint $\widetilde{I}^T: W^*\to \widetilde{W}^*$. Recall the direct splitting $W\sMP = W_\Delta\sMP \oplus W_{\Pi}\sMP$. Let us denote by $\Phi\sMP=[\widetilde{\phi}_1\sMP,\ldots,\widetilde{\phi}_{n_\Pi\sMP}\sMP]$ the coefficient representation of the basis of $W_{\Pi}\sMP$. Given the primal part $\boldsymbol{w}_\Pi$ of a function in $\widetilde{W}$, we obtain its restriction to $\widetilde{W}_{\Pi}\sMP$ via an appropriately defined restriction matrix $\boldsymbol{R}\sMP$, i.e., $\boldsymbol{w}_\Pi\sMP = \boldsymbol{R}\sMP\boldsymbol{w}_\Pi$. 
The corresponding function is then given by $w_\Pi\sMP = \Phi\sMP\boldsymbol{w}_\Pi\sMP = \Phi\sMP\boldsymbol{R}\sMP\boldsymbol{w}_\Pi$. Following the lines of \cite{HL:Pechstein:2013a}, we can formulate the operator $\widetilde{I}: \widetilde{W}\to W$ as \begin{align*} \begin{bmatrix} \boldsymbol{w}_\Pi\\ w_\Delta \end{bmatrix} \mapsto w:= \Phi \boldsymbol{R} \boldsymbol{w}_\Pi + w_\Delta, \end{align*} where $\Phi$ and $\boldsymbol{R}$ are block versions of $\Phi\sMP$ and $\boldsymbol{R}\sMP$, respectively. The second operation is the adjoint $\widetilde{I}^T: W^*\to \widetilde{W}^*$. It can be realized in the following way \begin{align*} f \mapsto \begin{bmatrix} \boldsymbol{f}_\Pi\\ f_\Delta \end{bmatrix} = \begin{bmatrix} \boldsymbol{A} \Phi^T f\\ f - C^T \Phi^T f \end{bmatrix}, \end{align*} where $\boldsymbol{A}$ is the assembling operator corresponding to $\boldsymbol{R}$, i.e., $\boldsymbol{A} = \boldsymbol{R}^T$. A more extensive discussion and derivation can be found in \cite{HL:Pechstein:2013a}. 
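Both operations are cheap algebraic maps. A NumPy sketch for a single patch (so that the restriction matrix plays the role of $\boldsymbol{R}$; all names are illustrative) reads:

```python
import numpy as np

def embed(Phi, R, w_Pi, w_Delta):
    """I~ : (w_Pi, w_Delta) -> w = Phi R w_Pi + w_Delta."""
    return Phi @ (R @ w_Pi) + w_Delta

def embed_adjoint(Phi, R, C, f):
    """I~^T : f -> (f_Pi, f_Delta) = (R^T Phi^T f, f - C^T Phi^T f)."""
    fPhi = Phi.T @ f
    return R.T @ fPhi, f - C.T @ fPhi
```

For $w_\Delta$ with $C w_\Delta = 0$, these two maps satisfy the duality relation $\langle \widetilde{I}(w_\Pi, w_\Delta), f\rangle = \langle w_\Pi, f_\Pi\rangle + \langle w_\Delta, f_\Delta\rangle$.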
\begin{algorithm} \caption{Algorithm for calculating $\boldsymbol{\nu} = F\boldsymbol{\lambda}$ and $\boldsymbol{\nu} = M_{sD}^{-1}\boldsymbol{\lambda}$ for given $\boldsymbol{\lambda} \in U$} \label{HL:alg:applyF} \begin{algorithmic} \algblock{Begin}{End} \Procedure{$F$}{$\boldsymbol{\lambda}$} \State Application of $B^T:$ $\{f^{(k)}\}_{k=1}^N = B^T\boldsymbol{\lambda}$ \State Application of $\widetilde{I}^T:$ $\{\mathbf{f}_{\Pi},\{f_{\Delta}^{(k)}\}_{k=1}^N\} = \widetilde{I}^T\left(\{f^{(k)}\}_{k=1}^N\right)$ \State Application of $\widetilde S^{-1}:$ \Begin \State $\mathbf{w}_{\Pi} = \mathbf{S}_{\Pi \Pi}^{-1} \mathbf{f}_{\Pi}$ \State $w_{\Delta}^{(k)} = {S_{\Delta \Delta}^{(k)}}^{-1}f_{\Delta}^{(k)} \quad \forall k=1,\ldots, N$ \End \State Application of $\widetilde{I}:$ $\{w^{(k)}\}_{k=1}^N = \widetilde{I}\left(\{\mathbf{w}_{\Pi},\{w_{\Delta}^{(k)}\}_{k=1}^N\} \right)$ \State Application of $B:$ $\boldsymbol{\nu} = B\left( \{w^{(k)}\}_{k=1}^N \right)$ \EndProcedure \Procedure{$M_{sD}^{-1}$}{$\boldsymbol{\lambda}$} \State Application of $B_D^T:$ $\{w^{(k)}\}_{k=1}^N = B_D^T\boldsymbol{\lambda}$ \State Application of $S_e:$ \Begin \State Solve $K^{(k)}_{II} x^{(k)} = -K^{(k)}_{IB}w^{(k)} \quad \forall k=1,\ldots, N$ \State $v^{(k)} = K^{(k)}_{BB} w^{(k)} + K^{(k)}_{BI}x^{(k)} \quad \forall k=1,\ldots, N$ \End \State Application of $B_D:$ $\boldsymbol{\nu} = B_D\left( \{v^{(k)}\}_{k=1}^N \right)$ \EndProcedure \end{algorithmic} \end{algorithm} \section{Parallelization of the building blocks} \label{sec:para} Here we investigate how the individual operations can be executed in parallel in a distributed memory setting. The parallelization of the method is performed with respect to the patches, i.e., one or several patches are assigned to each processor. The required communication has to be understood as communication between patches that are assigned to different processors.
Most of the MPI operations are used in their non-blocking versions. We aim at overlapping computation with communication wherever possible. \subsection{Parallel version of PCG} We solve $F \boldsymbol{\lambda} = d$ with the preconditioned CG method. This requires a parallel implementation of CG, where we follow the approach presented in Section 2.2.5.5 in \cite{HL:Pechstein:2013a}, see also \cite{HL:DouglasHaaseLanger:2003a}. This approach is based on the concept of accumulated and distributed vectors. We say a vector $\boldsymbol{\lambda}_{acc}=[\boldsymbol{\lambda}_{acc}\sMP[q]]$ is an \emph{accumulated} representation of $\boldsymbol{\lambda}$ if $\boldsymbol{\lambda}_{acc}\sMP[q](k_q(i)) = \boldsymbol{\lambda}(i)$, where $i$ is the global index corresponding to the local index $k_q(i)$ on processor $q$. In contrast, $\boldsymbol{\lambda}_{dist}=[\boldsymbol{\lambda}_{dist}\sMP[q]]$ is a \emph{distributed} representation of $\boldsymbol{\lambda}$ if the sum of all processor-local contributions gives the global vector, i.e., $\boldsymbol{\lambda}_{dist}(i) = \sum_{q} \boldsymbol{\lambda}_{dist}\sMP[q](k_q(i))$. Hence, each processor holds only the part of $\boldsymbol{\lambda}$ that belongs to its patches, either in distributed or accumulated form. The Lagrange multipliers and the search direction of the CG are represented in the accumulated setting, whereas the residual is given in the distributed representation. In order to obtain the accumulated representation, information exchange between the neighbours of a patch is required. This is done after applying the matrix and the preconditioner, respectively, and is implemented via \verb+MPI_Send+ and \verb+MPI_Recv+ operations. The last aspect of the parallel CG implementation is the realization of scalar products.
Given a distributed representation $u_{dist}$ of $u$ and an accumulated representation $v_{acc}$ of $v$, the scalar product $(u,v)_{l^2}$ is given by $(u,v)_{l^2}= \sum_{q} (u_{dist}\sMP[q],v_{acc}\sMP[q])_{l^2}$, i.e., first the local scalar products are formed, then globally added and distributed with \verb+MPI_Allreduce+. \subsection{Assembling} The assembling routine of the IETI-DP algorithm consists of the following steps: \begin{enumerate} \item Assemble the patch-local stiffness matrices and right-hand sides, \item assemble the system matrix in \eqref{HL:equ:KC_basis} and calculate its LU-factorization, \item assemble $S_{\Pi\Pi}$ and calculate its LU-factorization, \item calculate the LU-factorization of $K_{II}\sMP$, \item calculate the right hand side $\{g_\Pi, g_\Delta\} =\widetilde{g}\in\widetilde{W}^*$, with $g\sMP=f_B\sMP - K_{BI}\sMP (K_{II}\sMP)^{-1}f_I\sMP$. \end{enumerate} Most of the tasks are completely independent of each other and, hence, can be performed in parallel. Only the calculation of $S_{\Pi\Pi}$ and $\widetilde{g} = \widetilde{I}^T g$ require communication, which will be handled in Section~\ref{sec:para_AccDist}. The LU-factorization of $S_{\Pi\Pi}$ is required on one processor only, since the corresponding system has to be solved only once per CG iteration. According to \cite{HL:KlawonnLanserRheinbachStengelWellein:2015a}, it is advantageous to distribute this matrix to all other processors in order to reduce communication in the solver part, see \cite{HL:KlawonnRheinbach:2010a} and references therein for improving scalability based on a different approach. In the current paper, we investigate cases where one, several, or all processors hold the LU-factorization of $S_{\Pi\Pi}$. Therefore, each processor is assigned to exactly one holder of $S_{\Pi\Pi}$. This relation is implemented by means of an additional MPI communicator.
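The accumulated/distributed vector calculus used in the parallel CG above can be illustrated with a small sequential sketch. Processors are simulated by index maps $k_q$; all names and the concrete index maps are hypothetical, and the summation over $q$ stands in for the \verb+MPI_Allreduce+ of the actual implementation.

```python
import numpy as np

# Hypothetical local-to-global index maps k_q for two simulated
# "processors"; the global dofs 1 and 2 are shared between them.
DOFMAP = [np.array([0, 1, 2]), np.array([1, 2, 3])]

def accumulated(lam):
    """Accumulated representation: each processor holds the full value
    of every entry belonging to its patches."""
    return [lam[idx] for idx in DOFMAP]

def distributed(lam):
    """One possible distributed representation: shared entries are split
    by their multiplicity, so the processor-local pieces sum to the
    global vector."""
    mult = np.zeros(len(lam))
    for idx in DOFMAP:
        mult[idx] += 1.0
    return [lam[idx] / mult[idx] for idx in DOFMAP]

def dot_dist_acc(u_dist, v_acc):
    """(u, v)_{l2} = sum_q (u_dist^q, v_acc^q)_{l2}; the sum over q is
    what MPI_Allreduce computes and redistributes in parallel."""
    return sum(float(np.dot(ud, va)) for ud, va in zip(u_dist, v_acc))
```

Note that pairing one distributed and one accumulated factor counts every shared entry exactly once, which is why the CG keeps the residual distributed and the search direction accumulated.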
We note that, for extremely large scale problems with $\geq 10^5$ subdomains, one has to consider different strategies for dealing with $S_{\Pi\Pi}^{-1}$. Most commonly, one uses AMG and solves $S_{\Pi\Pi} u_\Pi = f_\Pi$ in an inexact way, see, e.g., \cite{HL:KlawonnLanserRheinbach:2015a} and \cite{HL:KlawonnRheinbach:2007a}. When considering a moderate number of patches, i.e., $10^3$--$10^4$, the approach using the LU-factorization of $S_{\Pi\Pi}$ is the most efficient one. In this paper, we restrict ourselves to this case. The patch-local matrix $S_{\Pi\Pi}\sMP$ is obtained as a part of the solutions of \eqref{HL:equ:KC_basis}, and the assembling of the global matrix $S_{\Pi\Pi}$ is basically an \verb+MPI_Gatherv+ operation. In the case where all processors hold $S_{\Pi\Pi}$, we use \verb+MPI_Allgatherv+. If several processors hold the LU factorization, we just call \verb+MPI_Gatherv+ on each of these processors. A different possibility would be to first assemble $S_{\Pi\Pi}$ on one processor, distribute it to the other holders, and then calculate the LU-factorization on each of the processors. \subsection{Solver and Preconditioner} \label{sec:para_AccDist} More communication is involved in the solver part. According to Algorithm\,\ref{HL:alg:applyF}, we have to perform the following operations: \begin{enumerate} \item application of $B$ and $B^T$ and their scaled versions, \item application of $\widetilde{I}$ and $\widetilde{I}^T$, \item application of $\widetilde{S}^{-1}$, \item application of $S_e$. \end{enumerate} The only operations that require communication are $\widetilde{I}$ and $\widetilde{I}^T$. To be more precise, the communication is hidden in the operators $\boldsymbol{A}$ and $\boldsymbol{R}$, see Section~\ref{sec:implementation}; all other operations are block operations, where the corresponding matrices are stored locally on each processor. In principle, their implementation is given by accumulating and distributing values.
The actual implementation depends on how many processors hold the coarse grid problem. In order to implement $\widetilde{I}$, we need the distribution operation $\boldsymbol{R}$. If all processors hold $S_{\Pi\Pi}$, this operation reduces to just extracting the right entries. Hence, it is local and no communication is required. Otherwise, each holder of $S_{\Pi\Pi}$ reorders and duplicates the entries of $\boldsymbol{w}_\Pi$ in such a way that all entries corresponding to the patches of a single slave are in a contiguous block of memory. Then we utilize the \verb+MPI_Scatter+ method to distribute only the necessary data to all slave processors. See Figure~\ref{fig:DistAcc} for an illustration. We now turn to the implementation of $\widetilde{I}^T$. Each processor stores the values of $\boldsymbol{w}\sMP_{\Pi}$ in a vector $\widetilde{\boldsymbol{w}}\sMP_{\Pi}$ of length $n_\Pi$ in such a way that $\sum_{k=1}^N \widetilde{\boldsymbol{w}}\sMP_{\Pi} = \boldsymbol{w}_{\Pi}.$ Storing the entries in this way enables the use of MPI reduction operations to assemble the local contributions efficiently. If only one processor holds the coarse problem, we use the \verb+MPI_Reduce+ method to perform this operation. Similarly, if all processors hold $S_{\Pi\Pi}$, we utilize the \verb+MPI_Allreduce+ method. If several processors hold the coarse grid problem, we use a two level approach. First, each master processor collects the local contributions from its slaves using the \verb+MPI_Reduce+ operation. In the second step, all the master processors perform an \verb+MPI_Allreduce+ operation to accumulate the contributions from each group and simultaneously distribute the result among them. This procedure is visualized in Figure~\ref{fig:DistAcc}.
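The two-level accumulation described above (a group-local \verb+MPI_Reduce+ to each master, followed by an \verb+MPI_Allreduce+ among the masters) can be mimicked by a sequential sketch; the grouping and all names are invented for illustration and do not reflect the actual C++/MPI code.

```python
import numpy as np

def two_level_accumulate(contribs, groups):
    """Simulate the accumulation of w_Pi = sum_k w_Pi^(k).
    Step 1: each master reduces the contributions of its group
            (the MPI_Reduce on the group communicator).
    Step 2: the masters add their partial sums and share the result
            (the MPI_Allreduce among the masters).
    Returns the vector every master holds afterwards."""
    # step 1: group-local reduction to the master of each group
    partial = [np.sum([contribs[q] for q in grp], axis=0) for grp in groups]
    # step 2: all-reduce among the masters
    return np.sum(partial, axis=0)
```

The point of the two levels is that step 1 runs concurrently in all groups, so the global all-reduce in step 2 only involves the (few) masters.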
\begin{figure} \caption{Distribution operation} \caption{Assembling operation} \caption{Distribution and assembling operation, illustrated for four processors, partitioned into two groups corresponding to two $S_{\Pi\Pi}^{-1}$ holders.} \label{fig:DistAcc} \end{figure} \section{Numerical examples} \label{sec:num_ex} We consider the model problem \eqref{equ:ModelStrong} in the two dimensional computational domain $\Omega = (0,1)^2$ formed by $32\times32 = 1024$ patches. Each of them is a square arranged in a uniform grid. For the three dimensional case, we consider the domain $\Omega=(0,1)^2\times(0,2)$, partitioned into $8\times8\times16$ regular cubes. Note that, in the IgA framework, we cannot choose the number of subdomains as freely as in the finite element case, since it is fixed by the geometry. Therefore, the number of subdomains stays constant at 1024 throughout the tests. Since we are interested in the parallel scalability of the proposed algorithms, we assume for simplicity a homogeneous diffusion coefficient $\alpha \equiv 1$. In all tests we consider the smooth right hand side $f(x,y) = 20\pi^2\sin(4\pi(x+0.4))\sin(2\pi(y+0.3))$, corresponding to the exact solution $u(x,y) = \sin(4\pi(x+0.4))\sin(2\pi(y+0.3))+x+y$. For the discretization, we use tensor B-Spline spaces $V_h$ of different degree $p$. We increase the B-Spline degree in such a way that the number of knots stays the same, i.e., the smoothness of $V_h$ increases. We investigate the scaling behaviour of the cG-IETI-DP and dG-IETI-DP methods. Although we also consider the dG variant, we restrict ourselves to matching meshes. Otherwise, it would not be possible to compare the two methods. Moreover, some patches would have a significantly larger number of dofs, which leads to load imbalances and negatively affects the scaling. The domain is refined uniformly by inserting a single knot in each dimension on each knot span.
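As a quick consistency check of the test problem: since $\alpha \equiv 1$ and the linear part $x+y$ is harmonic, the stated right hand side indeed equals $-\Delta u$ for the given exact solution. A small finite-difference sketch (purely illustrative, not part of the solver):

```python
import numpy as np

def u(x, y):
    # exact solution from the test setting
    return np.sin(4*np.pi*(x + 0.4)) * np.sin(2*np.pi*(y + 0.3)) + x + y

def f(x, y):
    # stated right hand side: (16 + 4) * pi^2 * sin(...) * sin(...)
    return 20*np.pi**2 * np.sin(4*np.pi*(x + 0.4)) * np.sin(2*np.pi*(y + 0.3))

def neg_laplace(g, x, y, h=1e-3):
    """Second-order central finite differences for -Delta g at (x, y)."""
    return -((g(x + h, y) - 2*g(x, y) + g(x - h, y)) / h**2
             + (g(x, y + h) - 2*g(x, y) + g(x, y - h)) / h**2)
```

Evaluating `neg_laplace(u, x, y)` at interior points agrees with `f(x, y)` up to the $O(h^2)$ discretization error of the stencil.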
We denote by $H_k$ the patch diameter and by $h_k$ the characteristic mesh size on $\Omega\sMP$. The set of primal variables consists of continuous patch-vertex values and interface averages in the two dimensional setting. For the three dimensional examples, we choose only continuous edge averages in order to keep the number of primal variables small. The preconditioned conjugate gradient method is used to solve \eqref{equ:SchurFinal} with the scaled Dirichlet preconditioner $M_{sD}^{-1}$. We choose a zero initial guess and a relative residual reduction of $10^{-8}$. For solving the local systems and the coarse grid problem, a direct solver is used. The algorithm is realized in the isogeometric open source C++ library G+SMO \cite{gismoweb}, which is based on the Eigen library \cite{HL:eigenweb}. We utilize the PARDISO 5.0.0 Solver Project \cite{HL:PARDISO500} for performing the LU factorizations. The code is compiled with the \verb+gcc 4.8.3+ compiler with optimization flag \verb+-O3+. For the communication between the processors, we use the \verb+MPI 2+ standard with the \verb+OpenMPI 1.10.2+ implementation. The results are obtained on the RADON1 cluster in Linz. We use 64 out of 68 available nodes, each equipped with two Xeon E5-2630v3 ``Haswell'' CPUs (8 cores, 2.4 GHz, 20 MB cache) and 128 GB RAM. This gives a total of 1024 available cores. We investigate two quantities: the assembling phase and the solving phase. In the assembling phase, we account for the time used for \begin{itemize} \item assembling the local matrices and right hand sides, \item the LU-factorization of $K_{II}$, \item the LU-factorization of $\MatTwo{K}{C^T}{C}{0}$, \item the calculation of $\widetilde{\Phi}$ and $\widetilde{\mu}$, \item assembling the coarse grid matrix $S_{\Pi\Pi}$ and the calculation of its LU factorization. \end{itemize} As already indicated in Section~\ref{sec:para}, $S_{\Pi\Pi}$ is only assembled on certain processors.
The solving phase consists of the CG algorithm for \eqref{equ:SchurFinal} and the back-substitution to obtain the solution from the Lagrange multipliers. The main ingredients are \begin{itemize} \item the application of $F$, \item the application of $M_{sD}^{-1}$. \end{itemize} In Section~\ref{sec:weak} and Section~\ref{sec:strong}, we study the weak and strong scaling behaviour of the cG-IETI-DP and the dG-IETI-DP method. In these two sections, we assume that only one processor holds the coarse grid matrix $S_{\Pi\Pi}$. The comparison of different numbers of $S_{\Pi\Pi}$ holders is carried out in Section~\ref{sec:diffHolder}. \subsection{Weak scaling} \label{sec:weak} In this subsection, we investigate the weak scaling behaviour, i.e., the ratio of problem size to number of processors is kept constant. In each refinement step, we multiply the number of used cores by $2^{d}, d\in\{2,3\}$. The ideal behaviour would be a constant time for each refinement. First, we consider the two dimensional case. We apply three initial refinements, start with a single processor, and perform up to 5 additional refinements with a maximum of 1024 processors. We choose as primal variables continuous vertex values and edge averages. The results for degree $p\in\{2,3,4\}$ are illustrated in Figure~\ref{fig:Res_weak_2d}. The first row of figures corresponds to the cG-IETI-DP method, and the second one to the dG-IETI-DP method. The left column of Table~\ref{tab:weak_scaling1} summarizes timings and speedups for the cG-IETI-DP method, whereas the right column presents the results for the dG-IETI-DP method. For each method, we investigate the weak scaling of the assembling and solution phases. As in Figure~\ref{fig:Res_weak_2d}, we present the scaling and timings for $p\in\{2,3,4\}$.
\begin{figure} \caption{$p=2$} \caption{$p=3$} \caption{$p=4$} \caption{$p=2$} \caption{$p=3$} \caption{$p=4$} \caption{Weak scaling of the cG-IETI-DP (first row) and dG-IETI-DP (second row) method for B-Spline degrees $p\in\{2,3,4\}$ in two dimensions. Each degree corresponds to one column. } \label{fig:Res_weak_2d} \end{figure} \begin{table} [h] \begin{footnotesize} \begin{tabular}{|r|r|c|c|c|c||r|c|c|c|c|}\hline & \multicolumn{2}{c|}{cG-IETI-DP}&\multicolumn{3}{c||}{ $p=2$} & \multicolumn{2}{c|}{dG-IETI-DP} &\multicolumn{3}{c|}{ $p=2$} \\\hline $\#$ procs & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} \\\hline 1 & 99104 & 7 & 4.8 & 1.9 & 6.7 & 133824 & 8 & 6.7 & 2.8 & 9.5 \\\hline 4 & 328224 & 8 & 3.0 & 1.8 & 4.8 & 394688 & 9 & 4.1 & 2.4 & 6.5 \\\hline 16 & 1179680 & 9 & 2.9 & 1.9 & 4.8 & 1309632 & 10 & 3.5 & 2.3 & 5.8 \\\hline 64 & 4455456 & 10 & 3.0 & 2.4 & 5.4 & 4712384 & 11 & 3.4 & 2.7 & 6.1 \\\hline 256 & 17298464 & 11 & 3.5 & 4.2 & 7.7 & 17809344 & 11 & 3.8 & 4.2 & 8.0 \\\hline 1024 & 68150304 & 11 & 3.8 & 4.4 & 8.2 & 69169088 & 12 & 4.2 & 4.7 & 8.9 \\\hline \hline & \multicolumn{2}{c|}{cG-IETI-DP}&\multicolumn{3}{c||}{ $p=3$} &\multicolumn{2}{c|}{dG-IETI-DP}& \multicolumn{3}{c|}{ $p=3$} \\\hline $\#$ procs & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. 
\\ Time} & \makecell{Total \\ Time} \\\hline 1 & 120576 & 8 & 7.2 & 2.6 & 9.8 & 159264 & 8 & 10.3 & 3.5 & 13.8 \\\hline 4 & 366080 & 9 & 5.3 & 2.4 & 7.7 & 436512 & 9 & 7.2 & 3.0 & 10.2 \\\hline 16 & 1250304 & 10 & 5.5 & 2.8 & 8.3 & 1384224 & 10 & 6.3 & 3.0 & 9.3 \\\hline 64 & 4591616 & 10 & 5.6 & 3.4 & 9.0 & 4852512 & 11 & 6.4 & 4.5 & 10.9 \\\hline 256 & 17565696 & 11 & 6.6 & 6.3 & 12.9 & 18080544 & 12 & 7.2 & 6.9 & 14.1 \\\hline 1024 & 68679680 & 12 & 7.3 & 7.0 & 14.3 & 69702432 & 12 & 7.9 & 7.3 & 15.2 \\\hline \hline & \multicolumn{2}{c|}{cG-IETI-DP}&\multicolumn{3}{c||}{ $p=4$} & \multicolumn{2}{c|}{dG-IETI-DP}& \multicolumn{3}{c|}{ $p=4$} \\\hline $\#$ procs & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} \\\hline 1 & 144096 & 8 & 11.7 & 3.0& 14.7 & 186752 & 9 & 16.7 & 4.6 & 21.3 \\\hline 4 & 405984 & 9 & 10.0 & 3.3 & 13.3 & 480384 & 10 & 12.4 & 3.8 & 16.2 \\\hline 16 & 1322976 & 10 & 9.7 & 3.4 & 13.1 & 1460864 & 11 & 11.5 & 4.1 & 15.6 \\\hline 64 & 4729824 & 11 & 10.0 & 5.0 & 15.0 & 4994688 & 11 & 11.0 & 5.5 & 16.5 \\\hline 256 & 17834976 & 12 & 11.9 & 9.3 & 21.2 & 18353792 & 12 & 13.0 & 9.8 & 22.8 \\\hline 1024 & 69211104 & 13 & 13.0 & 11.3 & 24.3 & 70237824 & 13 & 13.5 & 11.4 & 24.9 \\\hline \end{tabular} \end{footnotesize} \caption{Weak scaling results for the two dimensional testcase for the cG and dG IETI-DP method. Left column contains results for the cG variant and the right column for the dG version. Each row corresponds to a fixed B-Spline degree $p\in\{2,3,4\}$} \label{tab:weak_scaling1} \end{table} We observe that the time used for the assembling stays almost constant, hence shows quite optimal behaviour. However, the time for solving the system increases when refining and increasing the number of used processors. 
In particular, for the largest number of processors, we see a clear increase of the solution time. One reason is that the number of iterations slightly increases with the system size. This is due to the quasi-optimal condition number bound of the IETI-DP type methods, cf. \eqref{equ:kappa}. Secondly, as already pointed out in Section~\ref{sec:para}, the solving phase involves more communication between processors, which cannot be completely overlapped with computations. Moreover, one also has to take into account the global synchronization points in the conjugate gradient method. Next, we consider the weak scaling for the three dimensional case. As already indicated in the introduction of this section, we choose only continuous edge averages as primal variables. We perform the tests in the same way as in the two dimensional case. However, we now start with two processors and perform two initial refinements. Multiplying the number of used processors by 8 with each refinement, we again end up with 1024 processors on the finest grid. The two algorithms behave similarly to the two dimensional case: the assembling phase gives quite good results, while the solver phase again shows an increasing time after each refinement. The results are visualized in Figure~\ref{fig:Res_weak_3d} and summarized in Table~\ref{tab:weak_scaling2}. Note that, for the dG-IETI-DP method with $p=4$ and $\sim 54$~Mio. dofs, we exceeded the memory capacity of the cluster. \begin{figure} \caption{$p=2$} \caption{$p=3$} \caption{$p=4$} \caption{$p=2$} \caption{$p=3$} \caption{$p=4$} \caption{Weak scaling of the cG-IETI-DP (first row) and dG-IETI-DP (second row) method for B-Spline degrees $p\in\{2,3,4\}$ in three dimensions. Each degree corresponds to one column.
No timings are obtained in the case of 1024 cores in (f) due to memory limitations } \label{fig:Res_weak_3d} \end{figure} \begin{table} [h] \begin{footnotesize} \begin{tabular}{|r|r|c|c|c|c||r|c|c|c|c|}\hline & \multicolumn{2}{c|}{cG-IETI-DP}&\multicolumn{3}{c||}{ $p=2$} & \multicolumn{2}{c|}{dG-IETI-DP} &\multicolumn{3}{c|}{ $p=2$} \\\hline $\#$ procs & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} \\\hline 2 & 220896 & 16 & 8.8 & 3.8 & 12.6 & 396932 & 26 & 23.3 & 14.0 & 37.3 \\\hline 16 & 1023200 & 17 & 8.0 & 4.7 & 12.7 & 1551400 & 27 & 16.9 & 12.3 & 29.2 \\\hline 128 & 5969376 & 17 & 9.0 & 7.5 & 16.5 & 7730288 & 28 & 17.5 & 17.0 & 34.5 \\\hline 1024 & 40238048 & 19 & 17.7 & 21.0 & 38.7 & 46577920 & 28 & 26.2 & 41.2 & 67.4 \\\hline \hline & \multicolumn{2}{c|}{cG-IETI-DP}&\multicolumn{3}{c||}{ $p=3$} &\multicolumn{2}{c|}{dG-IETI-DP}& \multicolumn{3}{c|}{ $p=3$} \\\hline $\#$ procs & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} \\\hline 2 & 350840 & 17 & 36.2 & 7.9 & 44.1 & 598405 & 29 & 85.3 & 28.1 & 113.4 \\\hline 16 & 1361976 & 18 & 34.5 & 8.4 & 42.9 & 2005737 & 30 & 64.2 & 26.5 & 90.7 \\\hline 128 & 7020728 & 18 & 40.0 & 15.0 & 55.0 & 8985265 & 30 & 65.4 & 31.3 & 96.7 \\\hline 1024 & 43894200 & 21 & 69.4 & 48.4 & 117.8 & 50613825 & 31 & 92.6 & 91.0 & 183.6 \\\hline \hline & \multicolumn{2}{c|}{cG-IETI-DP}&\multicolumn{3}{c||}{ $p=4$} &\multicolumn{2}{c|}{dG-IETI-DP}& \multicolumn{3}{c|}{ $p=4$} \\\hline $\#$ procs & $\#$dofs & Iter. & \makecell{Ass. \\ Time} & \makecell{Solv. \\ Time} & \makecell{Total \\ Time} & $\#$dofs & Iter. & \makecell{Ass. \\ Time}& \makecell{Solv. 
\\ Time}& \makecell{Total \\ Time} \\\hline 2 & 523776 & 18 & 146.6 & 14.8 & 161.4 & 853878 & 32 & 307.3 & 55.1 & 362.4 \\\hline 16 & 1768320 & 18 & 149.2 & 15.9 & 165.1 & 2538650 & 32 & 250.7 & 49.8 & 300.5 \\\hline 128 & 8188800 & 20 & 163.6 & 25.6 & 189.2 & 10367970 & 34 & 232.0 & 55.4 & 287.4 \\\hline 1024 & 47765376 & 22 & 259.7 & 96.9 & 356.6 & $\sim$54000000 & x & x & x & x \\\hline \end{tabular} \end{footnotesize} \caption{Weak scaling results for the three dimensional test case for the cG and dG IETI-DP method. The left column contains results for the cG variant and the right column for the dG version. Each row corresponds to a fixed B-Spline degree $p\in\{2,3,4\}$. No timings are available for the dG-IETI-DP method with $p=4$ on 1024 cores due to memory limitations. } \label{tab:weak_scaling2} \end{table} \subsection{Strong scaling} \label{sec:strong} Second, we investigate the strong scaling behaviour. Now we fix the problem size and increase the number of processors. In the optimal case, the time used by a certain quantity reduces by the same factor as the number of used processors increases. We use the same primal variables for the strong scaling studies as in the weak scaling studies in Section~\ref{sec:weak}. Again as in Section~\ref{sec:weak}, we begin with the two dimensional example. We perform $7$ initial refinements and end up with $17$~Mio. dofs on 1024 subdomains. We start with $4$ processors in the initial case and double the number of processors $8$ times until we reach 1024 cores. Similarly to Section~\ref{sec:weak}, the results for $p\in\{2,3,4\}$ are illustrated in Figure~\ref{fig:strong_2d} and summarized in Table~\ref{tab:strong_scaling1}. \begin{figure} \caption{cG-IETI-DP} \caption{dG-IETI-DP} \caption{Strong scaling of the cG-IETI-DP (left column) and dG-IETI-DP (right column) method for B-Spline degrees $p\in\{2,3,4\}$ in two dimensions.
The markers $\{\circ,*,\Diamond\}$ as well as different shades of red (assembling phase) and blue (solver phase) correspond to the degrees $\{2,3,4\}$.} \label{fig:strong_2d} \end{figure} \begin{table} [h] \begin{footnotesize} \begin{tabular}{|r|c|c|c|c||c|c|c|c||c|c|c|c|}\hline 2d & \multicolumn{4}{c||}{ $p=2$} & \multicolumn{4}{c||}{$p=3$}& \multicolumn{4}{c|}{$p=4$}\\ \hline \tiny{cG-IETI-DP} & \multicolumn{2}{c|}{\makecell{assembling \\ phase}} & \multicolumn{2}{c||}{\makecell{solving \\ phase}} & \multicolumn{2}{c|}{\makecell{assembling \\ phase}} & \multicolumn{2}{c||}{\makecell{solving \\ phase}}& \multicolumn{2}{c|}{\makecell{assembling \\ phase}}& \multicolumn{2}{c|}{\makecell{solving \\ phase}} \\ \hline $\#$ procs & Time & Sp. & Time & Sp. & Time & Sp. & Time & Sp. & Time & Sp. & Time & Sp. \\ \hline 4 & 190.8 & 4 & 138.2 & 4 & 364.5 & 4 & 202.6 & 4 & 653.7 & 4 & 279.1 & 4 \\ \hline 8 & 94.8 & 8 & 88.8 & 6 & 181.2 & 8 & 141.8 & 6 & 325.9 & 8 & 202.7 & 6 \\ \hline 16 & 47.1 & 16 & 45.9 & 12 & 89.9 & 16 & 71.9 & 11 & 162.3 & 16 & 102.6 & 11 \\ \hline 32 & 23.1 & 32 & 22.9 & 24 & 44.5 & 32 & 35.7 & 23 & 80.4 & 32 & 51.3 & 22 \\ \hline 64 & 11.6 & 65 & 11.8 & 46 & 22.4 & 65 & 18.5 & 44 & 40.2 & 64 & 26.3 & 42 \\ \hline 128 & 5.9 & 127 & 7.3 & 75 & 11.3 & 128 & 11.1 & 73 & 20.4 & 128 & 14.8 & 75 \\ \hline 256 & 3.0 & 247 & 4.1 & 133 & 5.7 & 251 & 6.1 & 131 & 10.4 & 250 & 8.7 & 128 \\ \hline 512 & 1.6 & 471 & 2.1 & 257 & 2.9 & 487 & 3.2 & 250 & 5.3 & 493 & 4.7 & 235 \\ \hline 1024 & 0.9 & 819 & 1.1 & 472 & 1.6 & 891 & 1.6 & 494 & 2.8 & 917 & 2.4 & 456 \\ \hline\hline \tiny{dG-IETI-DP} & \multicolumn{2}{c|}{\makecell{assembling \\ phase}} & \multicolumn{2}{c||}{\makecell{solving \\ phase}} & \multicolumn{2}{c|}{\makecell{assembling \\ phase}} & \multicolumn{2}{c||}{\makecell{solving \\ phase}}& \multicolumn{2}{c|}{\makecell{assembling \\ phase}}& \multicolumn{2}{c|}{\makecell{solving \\ phase}} \\ \hline $\#$ procs & Time & Sp. & Time & Sp. & Time & Sp. 
& Time & Sp. & Time & Sp. & Time & Sp. \\ \hline 4 & 216.6& 4& 144.0 & 4 & 402.2& 4& 225.5 & 4 & 711.8& 4& 294.2 & 4 \\ \hline 8 & 106.9& 8& 92.9 & 6 & 199.5& 8& 156.8 & 5 & 352.6& 8& 210.6 & 5 \\ \hline 16 & 52.5& 16& 47.9 & 12 & 98.1& 16& 80.4 & 11 & 174.3& 16& 106.4 & 11 \\ \hline 32 & 25.1& 34& 25.7 & 22 & 47.5& 33& 42.2 & 21 & 84.9& 33& 55.2 & 21 \\ \hline 64 & 12.7& 68& 12.2 & 47 & 23.9& 67& 20.4 & 44 & 42.9& 66& 27.0 & 43 \\ \hline 128 & 6.5& 132& 7.6 & 75 & 12.0& 134& 11.6 & 77 & 21.7& 131& 15.1 & 77 \\ \hline 256 & 3.4& 252& 4.1 & 140 & 6.2& 255& 6.6 & 135 & 11.4& 249& 9.0 & 129 \\ \hline 512 & 1.9& 455& 2.2 & 260 & 3.4& 472& 3.3 & 267 & 6.0& 474& 4.9 & 236 \\ \hline 1024 & 1.1& 777& 1.1 & 498 & 1.9& 846& 1.7 & 528 & 3.2& 885& 2.3 & 494 \\ \hline \end{tabular} \end{footnotesize} \caption{Strong scaling results: Time (s) and Speedup for $p\in\{2,3,4\}$ in two dimensions having approximately 17 Mio. dofs. The first row shows results for the cG variant of the IETI-DP method, whereas the second row contains results for the dG version. Each column corresponds to a degree $p$. } \label{tab:strong_scaling1} \end{table} We observe that the assembling phase has quite good scaling performance, as already observed for the weak scaling results in Section~\ref{sec:weak}. Moreover, the higher the B-Spline degree, the better the parallel performance. This is due to the increased computational cost of the parallel part. Similarly to the weak scaling results, the solver phase does not scale as well as the assembling phase. Still, we obtain a speedup of around 500 when using 1024 processors. We note that the B-Spline degree does not seem to have as significant an effect on the scaling of the solver phase as on that of the assembling phase. In the three dimensional example, we perform four initial refinements and obtain around 5 Mio. dofs.
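The speedup values in Tables~\ref{tab:strong_scaling1} and \ref{tab:strong_scaling2} are normalized so that the smallest run, with $p_0$ processors, has speedup $p_0$. A one-line sketch, using the (rounded to 0.1\,s) timings of the cG-IETI-DP, $p=2$, two dimensional assembling phase from Table~\ref{tab:strong_scaling1}; for large processor counts the rounding makes the recomputed values deviate slightly from the tabulated ones, which were presumably obtained from unrounded timings:

```python
def speedup(t_base, p_base, t_p):
    """Speedup relative to the smallest run, normalized so that the
    base run with p_base processors has speedup p_base."""
    return p_base * t_base / t_p
```

For example, `speedup(190.8, 4, 94.8)` reproduces the tabulated value 8, and `speedup(190.8, 4, 47.1)` the value 16.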
The presentation of the results is done in the same way as in the previous examples, see Figure~\ref{fig:strong_3d} and Table~\ref{tab:strong_scaling2}. Also in three dimensions, the cG-IETI-DP algorithm behaves very similarly to the two dimensional case, showing excellent scaling results. The dG version of the algorithm also shows good scaling, but not as good as the cG version. In particular, for $p=2$ we observe degraded scalability for the assembling phase. Having a closer look at the timings, we observe that this originates from small load imbalances in the interior domains, due to the additional layer of dofs and the larger number of primal variables. The latter leads to an increased time for solving \eqref{HL:equ:KC_basis}, due to the larger number of right hand sides on the interior subdomains. One can further optimize the three dimensional case by considering different strategies for choosing the primal variables, aiming for smaller and more equally distributed numbers of primal variables. \begin{figure} \caption{cG-IETI-DP} \caption{dG-IETI-DP} \caption{Strong scaling of the cG-IETI-DP (left column) and dG-IETI-DP (right column) method for B-Spline degrees $p\in\{2,3,4\}$ in three dimensions. The markers $\{\circ,*,\Diamond\}$ as well as different shades of red (assembling phase) and blue (solver phase) correspond to the degrees $\{2,3,4\}$.} \label{fig:strong_3d} \end{figure} \begin{table} [h] \begin{footnotesize} \begin{tabular}{|r|c|c|c|c||c|c|c|c||c|c|c|c|}\hline 3d& \multicolumn{4}{c||}{ $p=2$} & \multicolumn{4}{c||}{$p=3$}& \multicolumn{4}{c|}{$p=4$}\\ \hline \tiny{cG-IETI-DP} & \multicolumn{2}{c|}{\makecell{assembling \\ phase}} & \multicolumn{2}{c||}{\makecell{solving \\ phase}} & \multicolumn{2}{c|}{\makecell{assembling \\ phase}} & \multicolumn{2}{c||}{\makecell{solving \\ phase}}& \multicolumn{2}{c|}{\makecell{assembling \\ phase}}& \multicolumn{2}{c|}{\makecell{solving \\ phase}} \\ \hline $\#$ procs & Time & Sp. & Time & Sp.
& Time & Sp. & Time & Sp. & Time & Sp. & Time & Sp. \\ \hline 8 & 140.4 & 8 & 93.7 & 8 & 624.8 & 8 & 205.3 & 8 & 2565.7 & 8 & 382.6 & 8 \\ \hline 16 & 70.5 & 16 & 47.7 & 16 & 312.8 & 16 & 103.4 & 16 & 1285.6 & 16 & 182.5 & 17 \\ \hline 32 & 35.2 & 32 & 25.0 & 30 & 157.0 & 32 & 53.7 & 31 & 643.3 & 32 & 93.7 & 33 \\ \hline 64 & 17.5 & 64 & 11.9 & 63 & 78.6 & 64 & 27.2 & 60 & 322.5 & 64 & 46.6 & 66 \\ \hline 128 & 9.0 & 125 & 7.9 & 95 & 40.2 & 124 & 14.9 & 110 & 163.0 & 126 & 26.4 & 116 \\ \hline 256 & 4.8 & 236 & 4.2 & 178 & 20.4 & 245 & 8.8 & 187 & 82.1 & 250 & 13.9 & 221 \\ \hline 512 & 2.5 & 452 & 2.6 & 294 & 10.3 & 483 & 5.6 & 296 & 41.4 & 496 & 10.0 & 305 \\ \hline 1024 & 1.4 & 807 & 1.5 & 506 & 5.5 & 906 & 3.2 & 509 & 21.4 & 961 & 5.8 & 526 \\ \hline\hline \tiny{dG-IETI-DP} & \multicolumn{2}{c|}{\makecell{assembling \\ phase}} & \multicolumn{2}{c||}{\makecell{solving \\ phase}} & \multicolumn{2}{c|}{\makecell{assembling \\ phase}} & \multicolumn{2}{c||}{\makecell{solving \\ phase}}& \multicolumn{2}{c|}{\makecell{assembling \\ phase}}& \multicolumn{2}{c|}{\makecell{solving \\ phase}} \\ \hline $\#$ procs & Time & Sp. & Time & Sp. & Time & Sp. & Time & Sp. & Time & Sp. & Time & Sp. 
\\ \hline 8 & 249.6 & 8 & 210.8 & 8 & 985.8 & 8 & 433.2 & 8 & 3588.1 & 8 & 854.8 & 8 \\ \hline 16 & 126.2 & 16 & 106.6 & 16 & 498.6 & 16 & 217.1 & 16 & 1792.9 & 16 & 405.7 & 17 \\ \hline 32 & 65.0 & 31 & 56.6 & 30 & 255.4 & 31 & 110.3 & 31 & 913.5 & 31 & 205.0 & 33 \\ \hline 64 & 33.1 & 60 & 30.5 & 55 & 128.5 & 61 & 58.2 & 60 & 460.0 & 62 & 105.9 & 65 \\ \hline 128 & 17.4 & 115 & 17.0 & 99 & 65.5 & 120 & 30.7 & 113 & 234.3 & 123 & 56.4 & 121 \\ \hline 256 & 9.4 & 212 & 9.6 & 175 & 33.8 & 233 & 18.4 & 188 & 117.8 & 244 & 35.4 & 193 \\ \hline 512 & 5.1 & 391 & 6.1 & 277 & 17.4 & 453 & 11.5 & 302 & 59.9 & 479 & 21.4 & 320 \\ \hline 1024 & 3.1 & 653 & 3.5 & 481 & 9.6 & 822 & 7.1 & 491 & 31.3 & 917 & 13.2 & 517 \\ \hline \end{tabular} \end{footnotesize} \caption{Strong scaling results: Time (s) and Speedup for $p\in\{2,3,4\}$ in three dimensions having approximately 5 Mio. dofs. The first row shows results for the cG variant of the IETI-DP method, whereas the second row contains results for the dG version. Each column corresponds to a degree $p$. } \label{tab:strong_scaling2} \end{table} \subsection{Study on the number of $S_{\Pi\Pi}^{-1}$ holders} \label{sec:diffHolder} In this last section of the numerical experiments, we investigate the influence of the number of holders of $S_{\Pi\Pi}^{-1}$ on the scaling behaviour. As already indicated in Section~\ref{sec:para_AccDist}, if more processors hold the LU-factorization of the coarse grid matrix, it is possible to decrease the communication effort after applying $S_{\Pi\Pi}^{-1}$, at the cost of more communication before the application. The advantage of this strategy is a better overlap of communication with computation. However, one has to take into account that this also increases the communication in the assembling phase, since the local contribution $S\sMP_{\Pi\Pi}$ has to be sent to all the master processors.
We consider only the two-dimensional domain, where we perform $7$ initial refinements, but on a decomposition with $4096$ subdomains, and end up with around $70$~Mio. dofs. This gives a setting comparable to the most refined domain in Section~\ref{sec:weak}. In order to better observe the influence of the number of $S_{\Pi\Pi}^{-1}$ holders, we increase the number of subdomains, leading to a larger coarse grid problem. We investigate only the case of 1024 processors, with the number of $S_{\Pi\Pi}^{-1}$ holders given by $2^n,\,n\in\{0,1,\ldots,10\}$. Hence, the number of master processors ranges from 1 to 1024, such that each master has the same number of slaves. The results are summarized in Figure~\ref{fig:diffHolder} and Table~\ref{tab:diffHolder}. \begin{table} [h] \begin{footnotesize} \begin{tabular}{|r|c|c|c||c|c|c||c|c|c|}\hline \tiny{cG-IETI-DP} & \multicolumn{3}{c||}{ $p=2$} & \multicolumn{3}{c||}{$p=3$}& \multicolumn{3}{c|}{$p=4$}\\ \hline \makecell{$\#$ $S_{\Pi\Pi}^{-1}$ \\ Holders}& \makecell{Assemble \\ Time} & \makecell{Solving \\ Time} & \makecell{Total \\ Time} & \makecell{Assemble \\ Time} & \makecell{Solving \\ Time} & \makecell{Total \\ Time} & \makecell{Assemble \\ Time} & \makecell{Solving \\ Time} & \makecell{Total \\ Time} \\ \hline 1 & 3.61 & 3.66 & 7.27 & 6.50 & 5.52 &12.02 & 11.23 & 8.30 & 19.53 \\ \hline 2 & 4.49 & 3.58 & 8.07 & 7.97 & 5.57 &13.54 & 13.83 & 8.02 & 21.85 \\ \hline 4 & 4.53 & 3.82 & 8.35 & 7.65 & 5.40 &13.05 & 13.60 & 8.09 & 21.69 \\ \hline 8 & 4.46 & 3.63 & 8.09 & 7.72 & 5.76 &13.48 & 13.32 & 8.15 & 21.47 \\ \hline 16 & 4.34 & 3.49 & 7.83 & 7.64 & 5.61 &13.25 & 13.16 & 7.93 & 21.09 \\ \hline 32 & 4.33 & 3.73 & 8.06 & 7.74 & 5.39 &13.13 & 13.15 & 8.78 & 21.93 \\ \hline 64 & 4.34 & 3.59 & 7.93 & 7.62 & 5.45 &13.07 & 13.10 & 8.04 & 21.14 \\ \hline 128 & 4.49 & 4.06 & 8.55 & 7.60 & 6.05 &13.65 & 13.06 & 8.47 & 21.53 \\ \hline 256 & 4.31 & 4.64 & 8.95 & 7.63 & 6.43 &14.06 & 13.02 & 8.81 & 21.83 \\ \hline 512 &
4.34 & 3.61 & 7.95 & 7.55 & 5.71 &13.26 & 13.23 & 8.09 & 21.32 \\ \hline 1024 & 3.73 & 3.80 & 7.53 & 6.56 & 5.77 &12.33 & 11.19 & 8.26 & 19.45 \\ \hline\hline \tiny{dG-IETI-DP} & \multicolumn{3}{c||}{ $p=2$} & \multicolumn{3}{c||}{$p=3$}& \multicolumn{3}{c|}{$p=4$}\\ \hline \makecell{$\#$ $S_{\Pi\Pi}^{-1}$ \\ Holders} & \makecell{Assemble \\ Time} & \makecell{Solving \\ Time} & \makecell{Total \\ Time} & \makecell{Assemble \\ Time} & \makecell{Solving \\ Time} & \makecell{Total \\ Time} & \makecell{Assemble \\ Time} & \makecell{Solving \\ Time} & \makecell{Total \\ Time} \\ \hline 1 & 4.57& 5.09& 9.66& 7.28 & 7.16 & 14.44& 12.44 & 10.01 & 22.45 \\ \hline 2 & 5.23& 4.16& 9.39& 9.10 & 6.26 & 15.36& 15.02 & 9.01 & 24.03 \\ \hline 4 & 5.25& 4.18& 9.43& 9.12 & 6.58 & 15.70& 14.93 & 8.73 & 23.66 \\ \hline 8 & 5.19& 4.28& 9.47& 8.97 & 6.29 & 15.26& 14.95 & 9.30 & 24.25 \\ \hline 16 & 5.26& 4.20& 9.46& 8.78 & 6.41 & 15.19& 14.79 & 9.16 & 23.95 \\ \hline 32 & 5.11& 4.64& 9.75& 8.82 & 6.29 & 15.11& 14.96 & 9.05 & 24.01 \\ \hline 64 & 5.35& 4.75& 10.1& 9.06 & 6.87 & 15.93& 14.85 & 9.37 & 24.22 \\ \hline 128 & 5.07& 6.06& 11.13& 8.88 & 8.25 & 17.13& 14.61 & 10.65 & 25.26 \\ \hline 256 & 5.07& 5.89& 10.96& 8.66 & 7.77 & 16.43& 14.52 & 11.32 & 25.84 \\ \hline 512 & 5.03& 6.15& 11.18& 8.66 & 8.29 & 16.95& 14.43 & 11.16 & 25.59 \\ \hline 1024 & 4.70& 5.33& 10.03& 7.45 & 7.68 & 15.13& 12.89 & 10.60 & 23.49 \\ \hline \end{tabular} \end{footnotesize} \caption{Influence of the number of processors holding an LU-factorization of $S_{\Pi\Pi}$. Timings in seconds for 1024 processors on a domain with around $70$~Mio. dofs and 2048 subdomains.} \label{tab:diffHolder} \end{table} \begin{figure}
% Six panels (plots not reproduced here): (a) $p=2$, (b) $p=3$, (c) $p=4$ for cG-IETI-DP; (d) $p=2$, (e) $p=3$, (f) $p=4$ for dG-IETI-DP.
\caption{Influence of the number of $S_{\Pi\Pi}^{-1}$ holders on the scaling. The first row corresponds to cG-IETI-DP, the second row to dG-IETI-DP. Each column has a fixed degree $p\in\{2,3,4\}$.
Figures (a)-(c) summarize the cG version and Figures (d)-(f) the dG version, respectively.} \label{fig:diffHolder} \end{figure} We observe that choosing several holders of the coarse grid problem in the cG version does not have a significant effect. However, in the dG version, due to an increased number of primal variables, the use of several holders actually increases the performance of the solver by around $10\%$. Nevertheless, what is gained in the solving phase does not compensate for the additional effort in the assembling phase. Considering the total computation time in Table~\ref{tab:diffHolder}, the best option is still either using only a single coarse grid problem on one processor or performing a redundant factorization on each processor. \section{Conclusion} \label{sec:conclusion} We have investigated the parallel scalability of the cG-IETI-DP and dG-IETI-DP method, respectively. Numerical tests showed very good strong and weak scalability of the assembling phase for both methods. We reached a speedup of approximately 900 when using 1024 cores. Although the speedup of the solver phase is not as good as that of the assembling phase, we still reached a speedup of around 500 when using 1024 cores. One can even increase the parallel performance of the solver part by increasing the number of processors that hold the coarse grid problem. However, numerical examples have shown that this does not really pay off in the total time, due to an increased assembling time. To summarize, we saw that the proposed methods are well suited for large-scale parallelization of assembling and solving IgA equations in two and three dimensions. \section*{References} \end{document}
Saturday, February 28, 2015

How science celebrities often hurt science

A German blog responded to Lawrence Krauss' essay which argued that celebrity scientists such as Einstein, Feynman, Sagan, and Tyson are generally good for science and the society because they motivate young people, help to fight scientific nonsense, promote scientific literacy, and improve decision making. The German blogger says that the celebrity status is just very weakly correlated with one's being a great scientist, that she instinctively avoids fandoms, and that those celebrities do influence what scientists discuss and study, but she believes that they don't hurt, after all. In her perspective, the most serious related problem is that the vast majority of quality science goes unnoticed by the public; I agree with this comment. And she promotes science blogs as windows into the real science. Well, my reactions to this comment are mixed.

Posted by Luboš Motl at 7:22 PM

AMS opposes climate witch hunts

Left-wing media activists have been excited about finding out that Willie Soon, a climate skeptic at Harvard-Smithsonian, has earned over $1 million which included grants from the Koch Foundation. They could have asked me years ago – I would have told them. Willie is clearly one of the top earners and the impressive figure makes him a counterpart of James Hansen (and that man's Greenpeace money). On the other hand, it is in no way an obscene amount of money for research that Willie has participated in for at least 20 years. If a quick calculation helps you, note that $1,200,000 / 20 = $60,000 per year, which is not too much for an important guy. Of course, I consider the Koch Foundation to be a much more impartial and decent sponsor of scientific research than Greenpeace. Can the source of money affect the character of research and conclusions? You bet. But let me be more precise about my thoughts on this question.
Posted by Luboš Motl at 8:30 AM

Other texts on similar topics: climate, politics, science and society

Friday, February 27, 2015

Obamanet is harmful

The FCC has approved some proposal to establish "net neutrality", probably treating the packets on par with electricity, gas, or feces in the sewerage system – instead of information services, which is how the Internet data have been classified so far. Even though those 300+ pages should have been posted on the FCC website yesterday, I can't find them. The whole institution seems to be a complete mess. Despite the absence of the document, lots of clueless people celebrate this "achievement". Exceptions – sensible reactions – are rare. Matt Walsh of The Blaze, an information ecosystem founded by Glenn Beck, is one of these exceptions. The transition from the good old Internet as originally invented by Al Gore ;-) to the Obamanet is wrong for numerous reasons. In particular, it may be described as
- a cure for a non-existent disease
- a partial nationalization of the industry and the ISP companies
- forced egalitarianism
- a blow to innovation in technologies depending on prioritization
- forced price distortion
- contamination of the legal system by hundreds of pages of junk that may contain secret timebombs threatening every other person on the Internet
- a transition of power from somewhat ineffective large ISP companies to an even more inefficient organization, the government
- a risk of censorship by the government
The way these standards were recommended and adopted is also shocking, to reproduce the words of the former FCC chairman Michael Powell. A layman, crackpot, amateur, far-left political activist, and community organizer recorded a YouTube video and the FCC took this video seriously.

Other texts on similar topics: computers, politics

Thursday, February 26, 2015

Nature is subtle

Caltech has created its new Walter Burke Institute for Theoretical Physics.
It's named after Walter Burke – but it is neither the actor nor the purser nor the hurler; it's Walter Burke the trustee, so no one seems to give a damn about him.

Walter Burke, the actor

That's why John Preskill's speech [URL fixed, tx] focused on a different topic, namely his three principles of creating the environment for good physics.

Other texts on similar topics: philosophy of science, science and society, string vacua and phenomenology, stringy quantum gravity

Assyrian history destroyed

Many events are taking place every day and many events make me – and many of you – upset. But what made me extremely angry today was this ISIS video: The video that was embedded here violated YouTube's rules although I don't know what the exact rule is. Ask those who saw it on Thursday... To skip the babbling by the apparatchik-bigot and to get to the drastic "action", jump to 2:40. The animals have penetrated into Mosul, Northern Iraq, and they chose the local Nineveh Museum as their target. The museum contains lots of priceless (or at least multi-billion) statues from the neo-Assyrian empire. Well, it did contain them – up to yesterday.

Other texts on similar topics: arts, Middle East, politics, religion

Wednesday, February 25, 2015

Black hole microstates from gluing an exterior with its delayed twin

...and a proof of state-dependence of interior field operators...

Kyriakos Papadodimas (CERN) and Suvrat Raju (Tata) have released a five-page paper that is full of hot ambitious ideas as well as cool, almost rock-solid arguments about the "holographic code" describing the black hole interior: Local Operators in the Eternal Black Hole. They work with the eternal Schwarzschild black hole in the \(AdS_{d+1}\) space. They describe it using the tortoise coordinate, one that Andy Neitzke and I learned to love when we studied the quasinormal modes.
This coordinate makes the \(rt\)-plane look "conformal" and some world sheet methods may therefore become applicable; I would like to comment on this point of mine in more detail later. At any rate, the eternal \(AdS\) black hole may be holographically described using two conformal field theories, \(CFT_L\) and \(CFT_R\), and an eternal black hole state is a maximally entangled state\[ \ket{\Psi} = \frac{1}{\sqrt{Z(\beta)}} \sum_E \exp(-\beta E / 2) \ket{E,E} \] The first thing they appreciate is that one may evolve this state in time, by a Hamiltonian (i.e. one may wait), to obtain many inequivalent states \(\ket{\Psi_T}\) that seem to have indistinguishable local physics, however:\[ \ket{\Psi_T} = e^{iH_L T} \ket\Psi = e^{i H_R T} \ket \Psi \] One either asks the object to "wait" for time \(T\) in the left \(CFT\) only; or in the right \(CFT\) only. In both cases, one gets the same result – both exponentials act on each \(\ket{E,E}\) by the same phase \(e^{iET}\) – but the result depends on \(T\) nontrivially.

Other texts on similar topics: stringy quantum gravity

Tuesday, February 24, 2015

Ismail El Gizouli, new IPCC boss

Rajendra Pachauri has been the head of the Intergovernmental Panel on Climate Change for many years, having turned it into a cell of organized crime. This railway engineer and porn writer has had uncountable conflicts of interest and numerous conflicts with the law in the past but he was forced to resign because of a relative detail: his colleague who boasted voluptuous heaving breasts in New Delhi sued him because she didn't like the way in which the love guru raped her. Well, it was probably her mistake, too. Everyone must have known what a dirty pr*ck this guy is, so a decent woman would keep a distance of at least one mile from him. He was replaced by Ismail El Gizouli. This vice-chairman representing Africa is famous for this December 2013 YouTube video hit.
It seems to be the only publicly available web page about this gentleman and when I was embedding it into this blog post, it had 8 views (and 2 of them counted the electronic devices of your humble correspondent). Quite a rock star!

Other texts on similar topics: climate, Kyoto, politics, science and society

Monday, February 23, 2015

Giuliani vs Obama 2015

Almost eight years ago, it looked like Rudy Giuliani and Barack Obama might be going to compete for the White House. Let's go, Obama girls. In the end, America's mayor didn't make it through the primaries. You know, your humble correspondent probably isn't the most canonical guy who would have picked Giuliani but I would find him highly natural in the office, anyway. He's still a symbol of the mainstream American leader who has everything that seemed necessary in those old years when I couldn't think of a single major complaint against America – these days, I have way too many. Yes, I also think that Giuliani was the #1 person who showed his qualities as a leader after 9/11; history turned him into a hero. He may have lost the primaries because of his highly imperfect image as a family man (be sure that the Czech voters have much more tolerance in all these matters!) or due to something else, who knows. John McCain was a lousy candidate and he is still a lousy politician but he was what the GOP finally offered.

Other texts on similar topics: politics

Hawking wins the "best male actor" Oscar

Eddie Redmayne did a great job in "TOE"

The Theory of Everything (2014) is coming to the Czech movie theaters this week. Those of us who have mastered space and time have already seen the picture. The touching film is based on "Travelling to Infinity: My Life with Stephen" by Jane Wilde Hawking, the famous physicist's first wife, which is a reason to expect fewer path integrals and more of the social stuff.
Even Stephen Hawking himself has pointed out that Eddie Redmayne was almost as handsome as himself (Hawking), so you shouldn't be surprised that this actor added the Oscar for the "best actor" last night, next to his Golden Globe for the "best actor" as well as similar awards from SAG and BAFTA. In this way, Redmayne almost became more famous than Hawking himself for a while. Before allowing something like that, I would carefully test Redmayne's abilities to compute path integrals and emitted radiation within quantum field theory on curved backgrounds. Would he continue to be as great as Hawking himself? The $15 million budget movie has earned $100+ million so far, not bad, and I must say that this success is impressive for the writer of the book, Jane Hawking herself. In some sense, you could say that the first woman who marries a young Stephen Hawking is a "random educated woman" and the random educated woman's ability to write a book that produces a $100 million box office movie seems like a coincidence. Well, I admit that the fact that she has a famous man to write about may have helped, too.

Other texts on similar topics: arts, science and society

Many worlds: a Rozali-Carroll exchange

Sean Carroll wrote another tirade, The Wrong Objections to the Many-Worlds Interpretation of Quantum Mechanics, where he tries to defend some misconceptions about the "many worlds interpretation" of quantum mechanics while showing that he is totally unable and unwilling to think rationally and honestly. After some vacuous replies to vacuous complaints by vacuous critics who love to say that physics isn't testable, he claims that the following two assumptions,
1. The world is described by a quantum state, which is an element of a kind of vector space known as Hilbert space.
2. The quantum state evolves through time in accordance with the Schrödinger equation, with some particular Hamiltonian.
imply that the worlds where other outcomes of quantum measurements materialized must be as "real" as our branch of this network of parallel worlds. This claim is self-evidently untrue. Quantum mechanics – as understood for 90 years – says that no such worlds "really" exist (they can't even be well-defined in any way) even though the theory respects the postulates. So Carroll's claim is equivalent to saying that \(12\times 2 = 1917\) because \(1+1=2\) and \(2+3=5\). Sorry but this would-be "derivation" is completely wrong.

Other texts on similar topics: philosophy of science, quantum foundations, science and society

Barry Kripke wrote a paper on light-cone-quantized string theory

In the S08E15 episode of The Big Bang Theory, Ms Wolowitz died. The characters were sad and Sheldon was the first one who said something touching. I think it was a decent way to deal with the real-world death of Carol Ann Susi, who provided Ms Wolowitz with her voice. The departure of Ms Wolowitz abruptly solved a jealousy-ignited argument between Stuart and Howard revolving around the furniture from the Wolowitz house. Also, if you missed that, Penny learned that she's been getting tests from Amy, who was comparing her intelligence to the intelligence of the chimps. Penny did pretty well, probably more so than Leonard.

Posted by Luboš Motl at 10:32 AM

Other texts on similar topics: arts, science and society, string vacua and phenomenology, stringy quantum gravity, TBBT

Bridge loan to default or haircut is an oxymoron

No sane person deliberately builds bridges to nowhere

The new Greek Marxist prime minister didn't receive his bonuses for being at work on time, which is why he sent his letter, originally planned for Wednesday, only on Thursday. As I described in some detail in a successful comment on idnes.cz LOL, what we got was a variation of Spoiled Little Joe's Letter to Baby Jesus. ;-) Odds are 98% that you don't speak Czech.
In his letter, little Joe is increasingly upset and obscene because he is not satisfied with what he found under the Christmas tree. To summarize the letter, Tsipras wants to get a six-month "bridge loan", a kind of fellowship for himself and almost 11 million other Greeks dependent on the Greek state, before Greece, the ECB, and the European lenders (and the IMF?) figure out how to abolish or significantly reduce the debt and how to cut all the strings that were attached to the bailouts. During the six months, the Greek government will be allowed to continue its insane Marxist policies, revert all pro-growth and pro-austerity reforms, stop privatization, rehire all the useless government employees, increase pensions, salaries, and so on. That's a very amusing proposal to make the lenders happy, indeed.

Other texts on similar topics: Europe, markets, politics

A good story on proofs of inevitability of string theory

Natalie Wolchover is one of the best popular physics writers in the world, having written insightful stories especially for the Simons Foundation and the Quanta Magazine (her Bc degree in nonlinear optics from Tufts helps). Yesterday, she added "In Fake Universes, Evidence for String Theory". It is a wonderful article about the history of string theory (Veneziano-related history; thunderstorms by which God unsuccessfully tried to kill Green and Schwarz in Aspen, Colorado, which would have postponed the First Superstring Revolution by a century; dualities; AdS/CFT etc.) with a modern focus on the research attempting to prove the uniqueness of string theory. At least since the 1980s, we have been saying that "string theory is the only game in town". This slogan was almost universally understood as a statement about sociology or comparative literature. If you look at proposals for a quantum theory of gravity, aside from string theory, you won't find any that work.
Posted by Luboš Motl at 12:44 PM

Other texts on similar topics: string vacua and phenomenology, stringy quantum gravity

Players in Greece, Ukraine get tougher

It's been five days since the latest blog post about Greece and Ukraine. Since that time, people got increasingly used to negotiations but their positions toughened, too. The screenshot above comes from CNN. Just a few days earlier, an on-screen text on the same station talked about arms for "pro-U.S." troops in Ukraine. Now, we got a map where Crimea and Ukraine finally belong to the same country again and the country's name is Russia. Either quality control must be completely absent at CNN or these blunders are a new strategy to increase their visibility.

Other texts on similar topics: Europe, markets, politics, Russia

ATLAS, CMS: small SUSY deviations

Both ATLAS and CMS, the two main detectors at the Large Hadron Collider, published some preprints about the search for SUSY or new SUSY-like Higgs bosons. No formidable deviation from the Standard Model was found. However... ATLAS was looking for a CP-odd Higgs boson, \(A\), in decays to \(Zh\). It turned out that there is approximately a 2.5-sigma excess for \(m_A=220\GeV\): look in the conclusions. I won't seriously mention the below-2-sigma excess for \(m_A=260\GeV\) at all.

Other texts on similar topics: experiments, string vacua and phenomenology

BBC friendly towards gluinos at LHC

After Two Years' Vacation, the Large Hadron Collider will be restarted next month. At least since the discovery of the Higgs boson, most of the readers of mainstream media were overwhelmed by tirades against modern particle physics – especially supersymmetry and similar things. The writers of such stories have often emulated assorted Shwolins and Shmoits in effect, if not in intent.

Gluino vampire alchemist. The doll only costs $690, below the several billions needed for a chance to see the much smaller gluino at the LHC.
Well, ATLAS' new (deputy) spokeswoman Beate Heinemann of UC Berkeley (Gianotti is superseding Heuer as CERN's director general) made a difference today. Several stories that she inspired at visible places have conveyed the excitement in the particle physics community and the nonzero chance that a bigger discovery than the Higgs boson may be made in 2015 – and perhaps announced at the SUSY-related conferences in August and September.

Posted by Luboš Motl at 8:55 AM

Other texts on similar topics: experiments, LHC, string vacua and phenomenology

Sunday, February 15, 2015

Anti-Jewish attacks: another reason why a strong Israel is vital

Last night, I watched Spielberg's 1993 movie Schindler's List about Oskar Schindler, a real historical figure and a German-speaking industrialist born in Svitavy, Moravia, one of the Czech lands (then Austria-Hungary), who was wealthy, an NSDAP member, and well connected, but who used all these virtues to turn his factories into shields for Jews against the Holocaust. He saved about 1,000 lives.

Pilsen's Great Synagogue is the third largest synagogue in the world, after one in Jerusalem and another one in Budapest. Prague has had over 20 synagogues throughout the years.

It wasn't the only movie about these issues that I recently watched; The Pianist (2002) was another one. These stories about the treatment of the Jews by the Nazi society are heartbreaking. And every society making average and subpar members proud that they belong to the "right" 90% or 95% or 97% or 99% of the society annoys me, scares me, and disgusts me (yes, the numbers include the figures for the climate alarmists and the Occupy movement, too). Meanwhile, in the real world, two innocent people were killed in Copenhagen yesterday: one Danish film director and one Jewish guard of the local Jewish community.
The perpetrator, Omar El-Hussein, an Arab apparently inspired by the Charlie Hebdo attackers, was shot dead later.

Other texts on similar topics: Europe, Middle East, murders, politics

Deflation, austerity: good news for Germany

German GDP jumped as twin surpluses, deflation don't hurt at all

Without much ado, the German economy is behaving in a way that may serve as a role model for everyone else. Half a year ago, we were bombarded by stories about the bad shape of the German economy when its GDP growth happened to be negative in a quarter. However, the situation looks different these days even though none of the media publish any errata. In the fourth quarter, Germany's GDP added 0.7% (quarter-on-quarter, not annualized) which contributed to the 1.6% annual growth in 2014. Germany's unemployment rate at 4.9% is among the lowest in Europe and lower than in the U.S. The country announced a $247 billion trade surplus in 2014. I guess that the weak euro could have helped this result. But anyway, you should compare this result with the $505 billion trade deficit of the U.S. in 2014 (which is still better than in many previous years, partly thanks to the fracking revolution). Also, Germany probably continued with its "black zero", small budget surpluses.

Genetics: 150 years

One of the 3 greatest pure scientific results born in the Czech lands

What do I mean by these three results?
- Kepler's laws (Prague 1609)
- Mendel's laws of genetics (Brno 1865)
- First derivation of the gravitational red shift from the equivalence principle (Prague 1911)
Well, the three guys were ethnic German (let's ignore the Jewish blood of the last one) but some of my ancestors would work as maids for similar folks so I am eager to take the credit.

Pulihrášek [The Pintopea], one of the modern applications of Mendel's research. This species, created by Ms Natalie Chalcarzová and Mr Václav Kocián in Brno, is a pea crossed with lots of beer pints on the surface, with a cannonball, and an LED light bulb.
In 1843, the Augustinian convent in Brno, Moravia, got a nice boost: the 21-year-old Silesian German student Johann Mendel. "The applicant seems ready for exact research in the natural scientific direction," the recommendation from his physics instructor in Olomouc read. After four years of incubation as a newbie, he adopted the religious name Gregor.

Other texts on similar topics: biology, Czechoslovakia, science and society

Progress in Ukraine and perhaps Greece?

Merkel, Hollande, Poroshenko, and Putin signed some documents in Minsk, the Minsk 2.0 ceasefire, which should come into force on Saturday (mid)night. After some doubts, the Novorussian Armed Forces seem ready to respect it, and so should Yatsenyuk's government. However, Dmitry Yaroš just announced that the Right Sector will keep on fighting after Saturday night, anyway. I guess that the NAF will defend itself and if the Right Sector is the only foe fighting the NAF, I would bet that it might be neutralized rather easily. We will see what will happen. On Tuesday, I wrote a text on Greece and Ukraine. Things look somewhat rosier now, three days later. The world markets have jumped – thanks to the Minsk 2.0 agreement but also partly due to good economic news from Germany and the EU in general. (Sub-percent deflation in Germany really, really didn't hurt anyone or anything; it's a good thing.) Dow was back above 18,000 and close to the all-time record, DAX in Germany jumped above 11,000 for the first time in history, S&P set a new record above 2,000, and NASDAQ is back to levels approaching 5,000, last seen in 2000 before the dotcom bubble burst.

Other texts on similar topics: Europe, politics, Russia

Klaus reviews his and his family's life in communism

Two hours ago, the "Victims of Communism Memorial Foundation" posted a 20-minute interview with the Czech ex-president Václav Klaus. He offers his memories of communism and his life in that era.
His parents were grateful to the Soviet Union for the liberation. At the end of his high school studies, he began to understand the political aspects of the real world. And in the 1960s, he was literally supposed to become a scholarly expert in the non-Marxist systems.

Val Fitch, co-discoverer of CP-violation: 1923-2015

Val Logsdon Fitch was born on a cattle ranch in Nebraska, a mile from the ranch where Penny came from, in 1923. His dad was badly injured in a horse riding accident. Val himself went into the insurance business and then became a soldier in WW2 before he joined the Manhattan Project. That's where the young man was turned into an experimental physicist.

Other texts on similar topics: science and society, string vacua and phenomenology

Has the big bang theory been disproved?

It seems that most of the "science writers" have changed their job to the permanent promotion of low-quality and downright crackpot papers that are chosen not by their cleverness or according to the scientific evidence but by their "audacity to overthrow (and I really mean 'revert') all the paradigms of modern physics". As I was told later, Anthony Watts has become an inseparable component of this cesspool. Almost on a daily basis, the readers are served wonderful stories about loons who have found something wrong with string theory or inflationary cosmology, nutcases who don't believe in the Higgs boson, whackadoodles who have "disproved" the uncertainty principle or quantum mechanics or its fundamentally probabilistic character, nut jobs who have violated the rules of relativity and sent signals faster than light, and the persistent authors of a few other "widely expected paradigm shifts".
Sorry, ladies and gentlemen, but a scientific revolution that would "confirm" elementary laymen's misconceptions about contemporary science and that would simply return the picture of the world to the "previous iteration" has never occurred and most likely never will, so the probability is virtually 100% that all these "paradigm shift" stories will always be just junk. Just a week ago, the would-be science media were full of new stories, inspired by a "gravity's rainbow" preprint by Ahmed Farag Ali and two co-authors, claiming that black holes don't exist. You may want to remember the Egyptian name I just mentioned. Why? Because in the past 2 days, the news outlets have switched to a (not so) new fad: there has been no big bang!

Other texts on similar topics: alternative physics, astronomy, science and society, stringy quantum gravity

Pro-U.S. troops in Ukraine, U.S. bailout for Greece

The following two stories share one thing: the sheer stupidity of the political opinions that have become common in the U.S. Oleg told us about this terrific screenshot from a CNN show. Obama considers arming pro-U.S. troops. I have already discussed how crazy and counterproductive it is to think about weapon deliveries to Donbass. Russia would surely respond correspondingly and not necessarily symmetrically, and the escalation of the sad conflict would be the only possible outcome. But the new funny twist is that the pro-Kiev soldiers were called "pro-U.S." troops. Many of the Internet sources that noticed this detail at all called it a "Freudian slip". But after some years in America, I actually think that many Americans genuinely can't even understand why we find it so laughable and weird.
Much of this inability to laugh at this stupidity comes from the Americans' naive, Hollywood-like understanding of good and evil; some of it arises from a complete misunderstanding of world geography and history and, indeed, of the very existence of a world outside the U.S. Other texts on similar topics: markets, politics, Russia

If done right, temperature adjustments are great

Many skeptics' adjustment-phobia unmasks their anti-scientific credentials. Christopher Booker, whom I met in Nice a few years ago and whom I like, wrote the most read Earth-category article in the Telegraph over the last 3 days, The fiddling with temperature data is the biggest science scandal ever. The title summarizes the main point of this article, which presents many examples suggesting that the temperature data from the weather stations have been repeatedly retroactively adjusted. I tend to agree that these adjustments are likely to have made the warming trend look higher – and the case for global warming more robust – than the most accurate data would suggest. However, I am not quite certain about the size and relevance of this effect and I feel very uncomfortable about many climate skeptics' knee-jerk emotional reaction showing that they hate the very idea of an adjustment. Other texts on similar topics: climate, science and society, weather records

Greece, Ukraine, Norway, Jordan

There are lots of events related to these four countries, among others, and each of them would deserve long essays. But briefly: ECB and Greece. The European Central Bank and the Eurogroup – the set of the finance ministers of the eurozone member states – gave two ultimatums to Greece. By Wednesday, they have to present their own alternative solution of the debt problem if they want it to be considered.
By Monday, which is 8 days away, they have to apply for the extension of the usual Greek-Troika procedures that have worked for years. The new Greek government refuses to continue – it was like a cure for a drug addict, they say. Troika must evaporate and their bailout scheme must be stopped. The help used to come with lots of strings and, just like assorted Šhmoits and Horgans, the Syriza folks don't like strings. So they just want a lethal amount of drugs – or money – with no strings attached, it seems. You know, it's hard to cure a drug addict (or a subsidy-addicted nation) but I assure you that the slow and difficult gradual treatment is probably better than all the alternatives – at least it reduces the probability of death. The comrades also boast that they will face no financial problems in the short run. It really seems that Greece won't renew the troika bailout mechanisms, which means, among other things, that it should finally lose all funds from the European Central Bank in 8 days. The IMF will probably cut all the funding for Greece, too. Alan Greenspan has predicted that the Grexit is unavoidable and no financial player may be crazy enough to fund Greece today. I would normally agree with Greenspan except that I am afraid that he may be underestimating many people's insanity. Many other experts estimate that they may run out of cash sometime between Tuesday, February 17th, and March – or, if they are lucky, by the middle of the year 2015. That's why I found it sensible to embed the classic music video of "[Greece in] Europe: The Final Countdown". ;-) I expect the end of the insanely failed experiment that we've known as "Greek economics" to arrive within a month or so.
Other texts on similar topics: Europe, Middle East, politics, Russia

Cumrun Vafa: mathematical introduction to string theory

Placing dualities at the center. A few days ago, Cumrun Vafa of Harvard was invited to Brazil to speak about the mathematical aspects of string theory in an introductory way. You may guess that the place is in Brazil because the flag next to Cumrun resembles the 1-sigma and 2-sigma confidence bands in a colorful exclusion graph. The Brazilians must love particle physics to have chosen such colors. ;-) The task facing Cumrun is of course tough because string theory builds on much of the intuition that people acquire when they study physics, not just mathematics, and it's also hard because even though many quantities that appear in string theory are totally exact or well-defined, there is no known definition of string theory that would be both rigorous and universal – covering everything that string theorists investigate. But it's still rather natural for Cumrun to present string theory from the mathematical vantage point because he surely belongs among the 50% of the string theorists who are excellent mathematicians at heart. Other texts on similar topics: mathematics, string vacua and phenomenology, stringy quantum gravity, video

Sheldon and Leonard co-author a paper on superfluid vacuum theory

In the latest episode of The Big Bang Theory, Leonard Hofstadter had an interesting idea while he was talking to Penny: the spacetime is the surface of a superfluid. The surface tension could even explain the negative pressure of the positive cosmological constant, which is a constant positive contribution to the vacuum energy density (they incorrectly talk about negative energy). Sheldon completed the maths and wrote their joint paper quickly. It was a source of pride for much of the episode.
The Troll Manifestation: an excerpt

The sitcom referred to the Quantum Diaries blog – probably because the bloggers were drinking some wine with the filmmakers (Ken Bloom actually contributed the plot of the episode) – and Leonard quoted a flattering comment on that blog written by your humble correspondent. The Cooper-Hofstadter paper is a variation of the superfluid vacuum theory that has been around for quite some time. While the "surface of superfluid" and "surface tension" could be interesting twists, it seems a bit hard to understand what it could mean quantitatively – probably because it means nothing. These theories seem to be mostly about words that can't be elaborated upon. Wikipedia correctly introduces the superfluid vacuum theory as one that may be a "fringe theory". Other texts on similar topics: science and society, string vacua and phenomenology, stringy quantum gravity, TBBT

Obama administration wanted Disney to remake Frozen as an AGW agitprop

In 1844, Hans Christian Andersen wrote one of his most famous fairy tales, The Snow Queen. You may remember it – lots of romantic, touching, supernatural characters and events culminating in a happy end. Dozens of movies, including Czechoslovak and Soviet ones, have been shot over the course of 170 years. In 2013, the Walt Disney Studios created another (3D) remake called Frozen. Its budget was $150 million but the movie has earned $1.3 billion. A sister and a snowman are looking for the other sister, who has some magic freezing powers. The musical video above has 410 million views – it is not too far from being competitive with a crazy fat Korean rapper and horses.
Other texts on similar topics: arts, climate, politics, science and society

String theory cleverly escapes acausality traps

String theory's forefather Gabriele Veneziano and three equally Italian co-authors submitted an interesting preprint, Regge behavior saves String Theory from causality violations. They were intrigued by a paper by Maldacena and 3 pals from July 2014 (I encourage all serious readers to get familiar with the names of the co-authors in both papers). All these folks studied the violations of causality in quantum gravity. Causality is the obviously necessary principle saying that the cause must precede its effect. In special relativity, this principle is strengthened. Because it has to be valid in all reference frames – which are mutually related by Lorentz transformations – the cause must actually belong to the past light cone of the effect: the influence cannot spread faster than at the speed of light in the vacuum. Maldacena et al. have designed a scattering (thought) experiment that allows the delay between the cause and the effect to be negative. That's a problem, isn't it? If you can order the events so that one may say that "the past affects the future", things are OK. But once you may affect both the past and the future, it becomes unlikely that the collection of all these events can make any sense. Due to the lethal causal loops, it's unlikely that any laws of physics may be obeyed.

ECB finally treats Greek junk as junk

For more than four decades, the Greeks disrespected all the basic principles of common sense and sane economic policies. For many years, it has been clear that they have ruined their nation and devoured their future, and by helping them to sustain a life resembling what they knew when their living standards peaked, one is only delaying the inevitable, at increasing cost, while making the inevitably coming hard landing even harder. Ten days ago, Greece elected an über-Marxist bunch of loons.
For a formerly capitalist nation, this injury is incompatible with life. Tsipras and his comrades have promised to spit on everything that Samaras established (and it was far from sufficient), on their debt, on the present creditors, and on the future creditors, too. Instead, they would propose totally unacceptable "solutions". They rejected everything that made sense and everything they promised was and is insane. And they started to behave arrogantly towards all of those who allowed Greece to live as a wealthy nation for so many years and those who are and will be essential to avoid a humanitarian catastrophe in Greece: especially Germany but also the "troika" of the ECB, the IMF, and the officials from Brussels. Clearly, there is no genuine room for negotiations here. In the past, the rules have already been distorted too much and a new distortion is either unacceptable or so small that it makes no sense to spend time with it. Germany has apparently prepared a document for the weekend that demands that Syriza ditch every single promise it has made and obey all the agreed-upon rules. One should emphasize that by adopting the communist policies, Greece isn't just crippling the conditions for future aid. It is also violating the conditions for aid and bailouts that have already been decided and distributed in the past (especially in 2010 and 2012).

BICEP2+Keck: 4 new papers, improved sensitivity

Minutes ago, the BICEP2 Collaboration tweeted that it has made several new papers available via bicepkeck.org. No Planck is involved here. Keck Array (BICEP2.5) telescope. Other texts on similar topics: astronomy, experiments, string vacua and phenomenology, stringy quantum gravity

Varoufakis' new bonds, a new chapter of fraudulent Greek accounting

Greece's Yanis Varoufakis is just another dirty Marxist. He is also a "scholar" of a sort who has co-authored a "critical" introduction to game theory and was an economist at Valve Corporation, a PC game company.
By now, he has understood that there won't be a new haircut, at least not under this name, so he proposes a new plan he calls "smart debt engineering". It's only "smart" to the extent to which the lender is completely stupid, however.

Pilsen's Škoda RegioPanter wins train tender for Nuremberg area

How half a billion dollars is earned the "hard way". In recent years, I gave numerous physics talks in Northern Bohemia. One of the things that impressed me about that part of the country were the low-floor ultramodern trains with lots of LCD displays (where lots of things are beeping all the time) etc. named "RegioPanter". I was thinking: Wow, the Czech Railways (which have been famous for their mediocre trains) must have tons of money to buy these new Western European or Japanese trains, or wherever they came from. (None of such trains run between Pilsen and Prague.) All the technology is top-notch, WiFi and electric outlets are everywhere. It consumes about 1/2 of the energy that competing trains do. It's painful, but it was just one hour ago that I learned where these "RegioPanters" came from. They are produced in my hometown of Pilsen as Škoda 7Ev. Why did I learn about it? Pilsen's Škoda Transportation has just won the second huge tender in Germany. Needless to say, beating the competitors in the German market is a source of pride for folks in Pilsen and Czechia, too. Some ex-classmates are working in that almost purely Czech company. Other texts on similar topics: Czechoslovakia, everyday life, markets, science and society

It is both ethical and right for an experimenter to correct his mistakes

Interpretations of measurements are inevitably theory-dependent. ATLAS has measured some top-antitop asymmetry which was previously claimed to behave strangely by the Fermilab. ATLAS got zero – no anomalous effect – within the error margin.
Off-topic: an ex-co-author of mine, Robbert Dijkgraaf, kickstarted the 2015 International Year of Light with a fun 15-minute Amsterdam lecture. Hat tip: Clifford Johnson. Tommaso Dorigo of the competing CMS team didn't like the ATLAS estimates of the error margins: The ATLAS Top Production Asymmetry And One Thing I Do Not Like Of It. The three most important points he is making are that

1. it's a terrible sin for an experimenter to underestimate the error margin of his measurement;
2. to avoid this underestimate, he should actually try to estimate things as accurately as possible, because some seemingly "error enhancing" or "conservative" choices may actually lower the final error margin;
3. it's dishonest for an experimenter to modify his methodology after he sees the results.

I see the possible "ethical" justification of all these points but at the end, I am closer to disagreeing with two of them (the 1st and 3rd). The dear reader is surely asking: Could you tell us some details? Other texts on similar topics: experiments, philosophy of science, science and society
Model-based assessment of the safety of community interventions with primaquine in sub-Saharan Africa

Stijn W. van Beek ORCID: orcid.org/0000-0001-8020-19341, Elin M. Svensson1,2, Alfred B. Tiono3, Joseph Okebe4, Umberto D'Alessandro5, Bronner P. Gonçalves6, Teun Bousema7, Chris Drakeley6 & Rob ter Heine1

Single low-dose primaquine (SLD-PQ) is recommended in combination with artemisinin-based combination therapy to reduce Plasmodium falciparum transmission in areas threatened by artemisinin resistance or aiming for malaria elimination. SLD-PQ may be beneficial in mass drug administration (MDA) campaigns to prevent malaria transmission but uptake is limited by concerns of hemolysis in glucose-6-phosphate dehydrogenase (G6PD)-deficient individuals. The aim of this study was to improve the evidence on the safety of MDA with SLD-PQ in a sub-Saharan African setting.

A nonlinear mixed-effects model describing the pharmacokinetics and treatment-induced hemolysis of primaquine was developed using data from an adult (n = 16, G6PD deficient) and a pediatric study (n = 38, G6PD normal). The relationship between primaquine pharmacokinetics and hemolysis was modeled using an established erythrocyte lifespan model. The safety of MDA with SLD-PQ was explored through Monte Carlo simulations for SLD-PQ at 0.25 or 0.4 mg/kg using baseline data from a Tanzanian setting with detailed information on hemoglobin concentrations and G6PD status.

The predicted reduction in hemoglobin levels following SLD-PQ was small and returned to pre-treatment levels after 25 days. G6PD deficiency (African A- variant) was associated with a 2.5-fold (95% CI 1.2–8.2) larger reduction in hemoglobin levels.
In the Tanzanian setting, where 43% of the population had at least mild anemia (hemoglobin < 11–13 g/dl depending on age and sex) and 2.73% had severe anemia (hemoglobin < 7–8 g/dl depending on age and sex), an additional 3.7% and 6.0% of the population were predicted to develop at least mild anemia and 0.25% and 0.41% to develop severe anemia after 0.25 and 0.4 mg/kg SLD-PQ, respectively. Children < 5 years of age and women ≥ 15 years of age were found to have a higher chance of having low pre-treatment hemoglobin. This study supports the feasibility of MDA with SLD-PQ in a sub-Saharan African setting by predicting small and transient reductions in hemoglobin levels. In a setting where a substantial proportion of the population had low hemoglobin concentrations, our simulations suggest treatment with SLD-PQ would result in small increases in the prevalence of anemia which would most likely be transient.

The annual number of malaria cases is estimated at 228 million, most of them in sub-Saharan Africa [1]. Plasmodium falciparum is the main malaria species in this region. The transmission of malaria depends on the presence of sexual stage parasites, or gametocytes. Primaquine is the only currently available drug targeting mature Plasmodium falciparum gametocytes. To decrease transmission and limit the development of artemisinin resistance, a single low dose of primaquine (SLD-PQ) is recommended by the World Health Organization (WHO) in combination with an artemisinin-based combination therapy [1,2,3]. As artemisinin resistance is emerging in sub-Saharan Africa, strategies that could limit resistance are much needed [4]. However, concerns about using SLD-PQ exist because of the risk of hemolysis, especially in individuals with (severe forms of) glucose-6-phosphate dehydrogenase (G6PD) deficiency [2, 5, 6]. Primaquine-induced hemolysis may predominantly be driven by cytochrome P450 D6 (CYP2D6)-mediated metabolites [7,8,9].
The WHO recommends mass drug administration (MDA) for interruption of transmission in areas approaching elimination with good access to treatment and surveillance [1]. MDA consists of treating a defined population in a certain area at approximately the same time with therapeutic doses of an antimalarial. SLD-PQ at 0.25 mg per kg body weight is recommended for MDA targeting Plasmodium falciparum malaria [10]; this dose is considered safe even for G6PD-deficient individuals, and the WHO recommends it may be administered without testing for G6PD deficiency [2, 11,12,13]. However, most of the studies that assessed safety have been performed in small populations with relatively high pre-treatment hemoglobin levels. Moreover, pragmatic dosing strategies may result in some individuals receiving a higher dose than the recommended 0.25 mg/kg, which may achieve better gametocyte clearance but potentially increase the risk of hemolysis [14,15,16]. If co-administered with dihydroartemisinin-piperaquine (DP) instead of artemether-lumefantrine (AL), it may even be necessary to use a target dose of 0.4 mg/kg to achieve the same level of gametocyte clearance [16]. This leaves the question of whether some populations would still be at risk of clinically relevant hemolysis when SLD-PQ is used at the population level. The aim of this study was, therefore, to predict the safety of SLD-PQ when used in MDA campaigns in a sub-Saharan African setting using population pharmacokinetic/pharmacodynamic modeling.

Table 1 shows the characteristics of the participants included in the two studies that provided pharmacokinetic data [11, 12]. The first study was a randomized placebo-controlled trial in children from Balonghin, Burkina Faso [11]. The purpose of the study was to assess the effect of SLD-PQ (0.25 and 0.4 mg/kg) on the transmission of malaria. The study included Plasmodium falciparum-infected children aged 2–15 years without any malaria symptoms and with normal G6PD activity.
The children were treated with AL alone, AL and 0.25 mg/kg primaquine, or AL and 0.40 mg/kg primaquine. AL was given twice daily for 3 days, and primaquine or placebo was administered with the fifth dose of AL. A subset of 40 children was included in a pharmacokinetic sub-study. One blood sample was taken pre-dose, four in the first 12 h and two between 24 and 72 h after dosing. Hemoglobin concentrations were quantified using a HemoCue photometer (HemoCue AB, Angelholm, Sweden) on days 0 (pre-dosing), 2, 3, 7, 10 and 14. Two of the 40 children were excluded because of undeterminable primaquine concentrations. In total, 228 pharmacokinetic samples and 226 hemoglobin samples from 38 children were included in the analysis.

Table 1 Characteristics of the populations included in the analysis

The second study was an open-label, randomized, dose-escalation trial in G6PD-deficient (African A- variant) adult males from Burkina Faso and The Gambia [12]. The purpose of the study was to assess the safety of SLD-PQ (0.25 mg/kg and 0.4 mg/kg) in G6PD-deficient African males. All individuals from Burkina Faso and some from The Gambia were Plasmodium falciparum malaria infected and asymptomatic. The participants were treated either with AL (Burkina Faso) or DP (The Gambia) alone or in combination with primaquine. Six pharmacokinetic samples were taken up to 72 h post dose from 16 participants. Randomized sampling times were allocated so that there were four samples on day 0 (day of dosing) and one each on days 1 and 2 per individual. Hemoglobin concentrations were assessed on day 0 (pre-dosing), twice daily on days 1, 2 and 3, and once daily on days 4, 5, 7, 10, 14 and 28 using self-calibrating HemoCue 201+ photometers (HemoCue AB, Angelholm, Sweden). From this second study, 97 pharmacokinetic samples and 199 hemoglobin samples were included in the analysis.
Quantification of primaquine and genotyping of G6PD and CYP2D6

Primaquine plasma levels were quantified using liquid chromatography-mass spectrometry at two different laboratories as previously described, with lower limits of quantification of 4 ng/ml and 1.14 ng/ml for the first and second study, respectively [12, 17, 18]. For the first study, G6PD status was determined using the BinaxNow rapid diagnostic test (Alere Inc., Waltham, MA, USA) as described in the original publication [11]. For the second study, G6PD status was determined using Beutler's fluorescence spot test (R&D Diagnostics, Greece) [12]. For both studies, CYP2D6 genotype was determined with the Quantstudio 12K Flex OpenArray with TaqMan assays (Thermo Fisher Scientific, Waltham, MA, USA) [12, 18].

Pharmacokinetic/pharmacodynamic modeling

The analysis of the pharmacokinetic data and the relationship with hemoglobin concentrations over time was performed by means of nonlinear mixed-effects modeling. Model structure and estimates of previous work on the pediatric dataset were used as a starting point for the pharmacokinetic analysis [18]. Three transit compartments described the gradual absorption of primaquine. The model incorporated a well-stirred liver model [19]. Liver volume was calculated from total body weight and height [20]. A liver plasma flow of 49.5 l/h was assumed, derived from an adult total blood flow of 90 l/h and a plasma fraction of 55% in whole blood (hematocrit level 45%). Allometric scaling to a total body weight of 70 kg for volume, clearance and liver plasma flow parameters was included to account for differences in weight, with exponents of 1 and 0.75 for volume and clearance parameters, respectively [21]. The bioavailability of primaquine was assumed to be 100%, and all estimated parameters are apparent oral pharmacokinetic parameters. Primaquine pharmacokinetic data below the limit of quantification were handled using the M3 method as described by Beal et al. [22].
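As a minimal sketch, the allometric scaling described above can be written as a small helper. Only the exponents (1 for volumes, 0.75 for clearances) and the 70 kg reference come from the text; the typical parameter values below are hypothetical placeholders, not the fitted estimates reported in Table 2.

```python
def allometric(param_70kg: float, weight_kg: float, exponent: float) -> float:
    """Scale a typical-value (70 kg) PK parameter to an individual's body weight.

    Clearance-like parameters use exponent 0.75; volume-like parameters use 1.
    """
    return param_70kg * (weight_kg / 70.0) ** exponent


# Hypothetical typical values for a 70 kg adult (illustration only):
cl_pop_l_per_h = 24.0   # placeholder clearance, l/h
v_pop_l = 200.0         # placeholder central volume, l

# For a 20 kg child, clearance scales with exponent 0.75 and volume with 1:
cl_child = allometric(cl_pop_l_per_h, 20.0, 0.75)
v_child = allometric(v_pop_l, 20.0, 1.0)
```

At the reference weight the scaling is the identity, and a lighter individual gets a proportionally lower clearance and volume, which is the behavior the model relies on to pool pediatric and adult data.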
CYP2D6 activity score (AS), a quantitative measure of phenotype, was inferred from the CYP2D6 genotype [23, 24]. The AS value for an individual can be, going from no to high activity, 0, 0.5, 1, 1.5, 2 or 3. The CYP2D6 AS was included as a covariate on the CYP2D6-mediated clearance. Quantification of the CYP2D6-mediated metabolites is difficult, and no usable pharmacokinetic data were available [21]. The same relationship between AS and CYP2D6-mediated clearance as in the model built on the pediatric data was included in the current model as alternative relationships could not be explored because of the lack of data on these metabolites [18]. The individual CYP2D6-mediated clearance was calculated as follows: $${\text{CL}}_{\text{CYP2D6,individual}}= {\text{CL}}_{\text{CYP2D6,population}} \cdot \text{AS} \cdot {\left(\frac{\text{WT}}{70}\right)}^{0.75}$$ where the CLCYP2D6,population is the estimated population CYP2D6-mediated clearance, AS is the CYP2D6 activity score, and WT is the total body weight. Missing CYP2D6 AS (11% in the pediatric study and 7% in the adult study) was imputed with the most prevalent score of 1.5. As no pharmacokinetic data on the CYP2D6-mediated metabolites were used, it was decided a priori that a single compartment with volume and clearance parameters fixed to 1 would describe the pharmacokinetics of the CYP2D6-mediated metabolites. The metabolite compartment describes a virtual CYP2D6-dependent metabolite concentration, expressed as arbitrary unit per milliliter. Carboxyprimaquine and other metabolites of primaquine were not described by the model. The pharmacodynamic model was developed following the pharmacokinetic model, using individual pharmacokinetic parameter estimates as input. The relationship between primaquine metabolite concentrations and hemolysis was modeled using an established erythrocyte lifespan model [25]. The erythrocyte lifespan model, shown in the lower half of Fig. 
1, included four transit compartments and a concentration-slope effect describing the elimination of erythrocytes following primaquine-induced hemolysis caused by the CYP2D6-mediated metabolites. As hemoglobin is directly correlated with erythrocyte count, all lifespan compartments together make up the total hemoglobin value for an individual. The effect of G6PD deficiency on the primaquine-induced hemolysis was described by estimating a scaling factor on the concentration-slope effect.

Fig. 1 Schematic of the final pharmacokinetic/pharmacodynamic model. The pharmacokinetic model is on the upper half of the figure and the pharmacodynamic model on the lower half. CCYP2D6: concentration in the CYP2D6-mediated metabolite compartment; CLH1: hepatic clearance out of the system; CLH2: CYP2D6-mediated hepatic clearance; CLm: clearance of the metabolite; CYP2D6: cytochrome P450 D6; EH: hepatic extraction ratio; FG6PDd: factor by which the primaquine-induced elimination of erythrocytes is increased in G6PD-deficient individuals; Hb: hemoglobin; Kin: erythrocyte production; Kerythrocyte elimination: primaquine-induced elimination of erythrocytes; Ktr: first-order rate constant defined as 4/LS where LS is the erythrocyte lifespan in hours; MAT: mean absorption time; PD: pharmacodynamic; PK: pharmacokinetic; Slope: concentration-slope effect of primaquine-induced elimination of erythrocytes; VL: liver volume; Vm: volume of the metabolite compartment; VPQ: volume of the primaquine compartment

The primaquine-induced hemolysis (elimination of erythrocytes, Kerythrocyte elimination) was calculated as follows:

$${K}_{\text{erythrocyte elimination}}= {C}_{\text{CYP2D6 metabolites}} \cdot \text{Slope} \cdot {F}_{\text{G6PDd}}$$

where CCYP2D6 metabolites is the concentration in the CYP2D6-mediated metabolite compartment, Slope is the concentration-slope effect of primaquine-induced elimination of erythrocytes, and FG6PDd is the factor by which the primaquine-induced elimination of
erythrocytes is increased in G6PD-deficient individuals. Inter-individual variability in the pharmacokinetic and pharmacodynamic parameters was assumed to be log-normally distributed. Residual variability in primaquine pharmacokinetics was implemented using a proportional error model in addition to an additive residual error which was fixed to 50% of the largest of the two lower limit of quantification values, in line with the M3 method by Beal [22]. For the pharmacodynamic model, additive, proportional and combined models were tested to describe the residual variability. The relative standard errors of the pharmacokinetic and pharmacodynamic parameters were derived from a non-parametric bootstrap with 1000 samples.

Mass drug administration simulations

The developed model was used to explore the safety of MDA with SLD-PQ in a sub-Saharan African setting through Monte Carlo simulations. For this, we selected a large cross-sectional Tanzanian dataset that included data on age, sex, weight, hemoglobin level (g/dl) and G6PD status to create a simulation dataset [26, 27]. G6PD deficiency in this population was defined by having the hemizygous or homozygous G202A/A376G genotype, characterizing the G6PD A- variant. Children < 6 months old were removed from the dataset since we assumed they would not be included in MDA, as primaquine is contraindicated for children < 6 months old and pregnant women [10]. The total number of individuals was 7672 after exclusion of children < 6 months old and individuals with missing data (568 individuals were removed). Each individual was included four times in the simulation dataset to include more combinations of CYP2D6 status and to better assess the effect of inter-individual variability. CYP2D6 AS was randomly assigned to the sampled individuals according to distributions described in the literature, resulting in probabilities of 0.026, 0.101, 0.256, 0.386, 0.209 and 0.022 for an AS of 0, 0.5, 1, 1.5, 2 and 3, respectively [28, 29].
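As an illustrative sketch, the erythrocyte lifespan model described above (four transit compartments with Ktr = 4/LS, a production rate Kin holding baseline hemoglobin at steady state, and the primaquine-induced elimination term) can be integrated with a simple Euler scheme. The piecewise-constant elimination input and the fixed step size are simplifications for illustration; in the actual model the elimination rate follows the simulated CYP2D6-metabolite concentration.

```python
def simulate_hb(hb0, lifespan_h, elim_rate_fn, t_end_h, dt=0.1):
    """Euler sketch of the four-compartment erythrocyte lifespan model.

    hb0: pre-treatment hemoglobin (g/dl); lifespan_h: erythrocyte lifespan (h);
    elim_rate_fn(t): primaquine-induced elimination rate at time t (per hour).
    Returns total hemoglobin (sum of the four compartments) at t_end_h.
    """
    ktr = 4.0 / lifespan_h        # transit rate constant, Ktr = 4/LS
    kin = ktr * hb0 / 4.0         # production rate keeping steady state at hb0
    a = [hb0 / 4.0] * 4           # compartments initialised at steady state
    t = 0.0
    while t < t_end_h:
        k_pq = elim_rate_fn(t)    # treatment-induced elimination, all compartments
        da = [kin - (ktr + k_pq) * a[0]]
        da += [ktr * a[i - 1] - (ktr + k_pq) * a[i] for i in range(1, 4)]
        a = [a[i] + dt * da[i] for i in range(4)]
        t += dt
    return sum(a)


# Without treatment the model stays at baseline; a transient elimination term
# (scaled by the 2.46-fold G6PD A- factor from the fit) produces a dip.
hb_untreated = simulate_hb(13.0, 276.0, lambda t: 0.0, 240.0)
hb_g6pd_def = simulate_hb(13.0, 276.0,
                          lambda t: 2.46 * 1e-4 if t < 48 else 0.0, 240.0)
```

The untreated run confirms the steady-state construction (hemoglobin stays at 13 g/dl), while the treated run shows a small, slowly recovering drop, qualitatively matching the transient reductions reported in the Results.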
Hemoglobin nadir was simulated for a single dose of 0.25 mg/kg or 0.4 mg/kg primaquine. We simulated doses assuming that the smallest available tablet size of primaquine is 2.5 mg and that it can be split in two (administered doses are rounded to 1.25 mg increments). The predicted prevalence of different severity classes of anemia at the nadir was assessed as a measure of safety. The severity classes of anemia consisted of mild, moderate and severe anemia and their definitions were adapted from the WHO [30]. Depending on age, sex and pregnancy status, the WHO definitions described individuals with hemoglobin levels < 11–13 g/dl as mildly anemic, individuals with hemoglobin levels < 7–11 g/dl as moderately anemic and individuals with hemoglobin levels < 7–8 g/dl as severely anemic. The complete definitions by age, sex and pregnancy status can be found in the supplementary materials (Additional file 1: Table S1). We also explored whether the incidence of severe anemia could be limited by excluding individuals with pre-treatment hemoglobin levels below a certain threshold from dosing. Based on the hemoglobin distribution within the simulation dataset, we assessed scenarios where individuals with hemoglobin levels < 7, 7.5 and 8 g/dl were not treated with primaquine.

Software, parameter estimation and model selection

R version 3.4.3 was used for data management, statistics and plotting [31]. Model development was performed using the nonlinear mixed-effects modeling program NONMEM version 7.4 with Pirana as an interface [32, 33]. PsN version 4.7 was used as an aid for advanced functionalities [33]. The Xpose4 R package version 4.6.1 was used for graphical visualization of the visual predictive checks (VPCs) [33]. The VPCs were performed using 1000 simulations and were prediction and variability corrected [34]. In NONMEM, the Laplacian method with interaction was used for estimation of the pharmacokinetic parameters [35].
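The WHO anemia grading used in the simulations can be sketched as follows. The cut-offs shown are an illustrative subset (children under 5 years and non-pregnant adults) of the age- and sex-dependent definitions cited above; the complete table, including intermediate ages and pregnancy, is in the supplementary materials.

```python
def anemia_class(hb_g_dl: float, age_years: float, male: bool) -> str:
    """Grade anemia severity from hemoglobin (g/dl).

    Illustrative subset of the WHO cut-offs: children < 5 years and
    non-pregnant adults; intermediate age groups are omitted here.
    """
    if age_years < 5:
        severe, moderate, mild = 7.0, 10.0, 11.0
    elif male:
        severe, moderate, mild = 8.0, 11.0, 13.0
    else:  # non-pregnant adult women
        severe, moderate, mild = 8.0, 11.0, 12.0
    if hb_g_dl < severe:
        return "severe"
    if hb_g_dl < moderate:
        return "moderate"
    if hb_g_dl < mild:
        return "mild"
    return "none"
```

Applied to pre-treatment and simulated nadir hemoglobin, a classifier of this kind yields the prevalence shifts by severity class that are reported in the Results.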
For estimation of the pharmacodynamic parameters, the first-order conditional estimation method with interaction was used. Goodness-of-fit plots together with differences in objective function value were used to compare performances between different models. A change in objective function value of > 3.84 between two nested models was considered statistically significant (p < 0.05) for 1 degree of freedom. Pharmacokinetic modeling Figure 1 shows the model schematic of both the pharmacokinetic and pharmacodynamic models, with the pharmacokinetic model shown in the upper half. The pharmacokinetic model for primaquine in children was successfully extended to adults by means of allometric scaling of the pharmacokinetics on body weight and re-estimation of the parameters. The estimated pharmacokinetic parameters and their uncertainty are shown in Table 2. Inter-individual variability was included on the central volume of primaquine, mean absorption time and clearance. Both the CYP2D6-mediated hepatic clearance and hepatic clearance out of the system share the same inter-individual variability. The VPC, goodness-of-fit plots and model code are included within the supplemental information (Additional file 2: Figure S1, Additional file 3: Figure S2 and Additional file 5: Model code). Table 2 Final pharmacokinetic and pharmacodynamic model parameters Pharmacodynamic modeling The erythrocyte lifespan model schematic is shown in the lower half of Fig. 1. One concentration-slope effect describing the primaquine-induced elimination of erythrocytes was estimated for all four lifespan compartments, as we did not have data on erythrocyte populations with different ages. A lifespan of 276 h (95% CI 119–654 h) was estimated for both the G6PD-normal and -deficient individuals; separate lifespans for G6PD-normal and -deficient individuals could not be estimated reliably. 
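Allometric scaling of the pharmacokinetic parameters on body weight, as used to extend the pediatric model to adults, conventionally takes the following form. This is a generic sketch with the usual fixed exponents of 0.75 for clearances and 1 for volumes (cf. Anderson and Holford, cited in the references); the reference weight and parameter values here are placeholders, not the estimates of Table 2.

```python
REF_WEIGHT_KG = 70.0  # conventional reference body weight (assumption)

def scale_clearance(cl_typical, weight_kg):
    # Clearances scale with the 3/4 power of body weight.
    return cl_typical * (weight_kg / REF_WEIGHT_KG) ** 0.75

def scale_volume(v_typical, weight_kg):
    # Volumes of distribution scale linearly with body weight.
    return v_typical * (weight_kg / REF_WEIGHT_KG) ** 1.0
```

Under this convention a 35 kg child has roughly 59% of the typical adult clearance but only 50% of the typical volume, which is why weight-banded or mg/kg dosing does not translate into identical exposures across ages.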
The effect of G6PD deficiency on the primaquine-induced elimination of erythrocytes was estimated at a 2.46-fold increase (95% CI 1.16–8.17-fold). A proportional error model was most appropriate to describe the residual error in the pharmacodynamic model. The pharmacodynamic parameters and their uncertainty are shown in Table 2. The VPC, goodness-of-fit plots and model code are included within the supplemental materials (Additional file 2: Figure S1, Additional file 4: Figure S3 and Additional file 5: Model code). The predicted reduction in hemoglobin levels for a typical individual (weight 70 kg, length 170 cm, AS 1.5) with a baseline hemoglobin concentration of 13 g/dl, with and without G6PD deficiency following a single 0.25 mg/kg dose, is shown in Fig. 2. For a typical G6PD normal individual, the reduction in hemoglobin from baseline to the nadir is approximately 0.13 g/dl. For a typical G6PD-deficient individual, this reduction is 0.30 g/dl. It is predicted to take about 25 days for the hemoglobin concentration to return completely to pre-treatment values. Predicted reduction in hemoglobin levels after a single dose of 0.25 mg/kg primaquine for a typical G6PD-normal and -deficient individual. A typical individual was assumed to have a weight of 70 kg, length of 170 cm, CYP2D6 activity score of 1.5 and pre-treatment hemoglobin of 13 g/dl The median (range) of hemoglobin was 12.1 (2.0–18.8) g/dl and prevalence of G6PD deficiency was 5.51% in the simulation population. Following the linear kinetics of primaquine, the expected maximum concentration and total exposure following 0.4 mg/kg primaquine are 60% higher compared to 0.25 mg/kg primaquine. Table 3 shows the predicted median reduction in hemoglobin levels at the nadir after 0.25 mg/kg and 0.4 mg/kg primaquine for the whole population and by G6PD status. 
A table showing the predicted median reduction in hemoglobin levels by CYP2D6 AS group is included in the supplemental materials (Additional file 6: Table S2). Table 3 Predicted median reduction in hemoglobin after 0.25 and 0.4 mg/kg primaquine Table 4 shows the prevalence and grade of anemia after taking 0.25 mg/kg and 0.4 mg/kg primaquine. The prevalence of anemia without any intervention was already high (43.0%) in this population and even higher (49.6%) for the G6PD-deficient individuals. After SLD-PQ of 0.25 mg/kg, an additional 3.7% of the general population was predicted to develop anemia (8.6% relative increase from pre-treatment prevalence) and an additional 0.25% to develop severe anemia specifically (9.2% relative increase from pre-treatment prevalence). Following a dose of 0.4 mg/kg primaquine, an additional 6.0% of the general population was predicted to develop anemia (14% relative increase from pre-treatment prevalence) and an additional 0.41% to develop severe anemia (15% relative increase from pre-treatment prevalence). Table 4 Predicted prevalence of anemia and its severity after 0.25 and 0.4 mg/kg primaquine The simulated hemoglobin distribution before treatment and following 0.25 and 0.4 mg/kg primaquine for children < 5 years of age is depicted in Fig. 3. The hemoglobin distributions for the other subgroups used in the anemia definition of the WHO are shown in the supplementary information (Additional file 7: Figure S4). Changes in the distribution of hemoglobin levels after SLD-PQ are minimal. Children < 5 years of age and women ≥ 15 years of age have a relatively high proportion of individuals near to and below the cut-off defining severe anemia both before and after treatment. Violin plot of the simulated hemoglobin level distributions pre-treatment and following 0.25 and 0.4 mg/kg primaquine for children < 5 years of age. The dashed lines represent the cut-offs between the different groups of anemia severity. 
This subgroup included 26% of the total individuals in the simulation dataset of which 6.5% were G6PD deficient Excluding individuals with a pre-treatment hemoglobin value < 7, 7.5 or 8 g/dl translates into excluding 1.67%, 2.35% and 3.35% of the population, respectively (Additional file 8: Figure S5). Depending on the threshold, 41–48% of the excluded individuals are < 5 years of age compared to 26% in the general population. Table 5 shows the predicted prevalence of severe anemia after dosing with 0.25 mg/kg or 0.4 mg/kg primaquine by exclusion according to different pre-treatment hemoglobin values. For a primaquine dose of 0.25 mg/kg, the proportion of individuals transitioning from moderate to severe anemia decreased by 12%, 28% and 48% when excluding by pre-treatment hemoglobin < 7, 7.5 and 8 g/dl, respectively (i.e. from 0.25 to 0.13% of the total population for the 8 g/dl threshold). For 0.4 mg/kg primaquine, this proportion decreased by 10%, 22% and 41%, respectively. After administering 0.4 mg/kg primaquine excluding individuals with pre-treatment hemoglobin < 8 g/dl, the prevalence of severe anemia was similar to that after treating everybody with 0.25 mg/kg primaquine. Table 5 Predicted prevalence of severe anemia after 0.25 or 0.4 mg/kg primaquine per dosing scenario based on pre-treatment hemoglobin level Concerns about primaquine safety related to hemolysis have been an obstacle to its wide implementation as the risk is at the individual level while the benefit is only gained at the population level. Contemporary safety studies provide limited information at the population level as they are typically based on selected individuals with relatively high pre-treatment hemoglobin. The present analysis describes an assessment of the relationship between primaquine concentrations and primaquine-induced hemolysis in a semi-mechanistic model with the aim of exploring the safety of an MDA campaign with SLD-PQ in a sub-Saharan African setting. 
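The hemoglobin-based exclusion scenarios reduce to a simple threshold rule over the pre-treatment hemoglobin vector. The sketch below is illustrative only: the toy normal distribution stands in for the Tanzanian dataset (whose median was 12.1 g/dl), so the resulting fractions will not match the 1.67%, 2.35% and 3.35% reported above.

```python
import numpy as np

def excluded_fraction(hb_g_dl, threshold_g_dl):
    """Fraction of individuals withheld from primaquine dosing at a threshold."""
    hb = np.asarray(hb_g_dl, dtype=float)
    return float(np.mean(hb < threshold_g_dl))

rng = np.random.default_rng(0)
toy_hb = rng.normal(loc=12.1, scale=1.8, size=10_000)  # not the real dataset
shares = {t: excluded_fraction(toy_hb, t) for t in (7.0, 7.5, 8.0)}
```

By construction the excluded share is monotone in the threshold, which is why raising the cut-off from 7 to 8 g/dl trades a larger excluded group for a larger reduction in predicted severe anemia.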
The analysis shows that, post treatment, hemoglobin levels would drop in a minority of individuals to levels below the pre-defined threshold defining (severe) anemia but that this effect is transient. G6PD-deficient individuals were found to be more at risk because of the increased hemolytic effect of primaquine. The estimated pharmacokinetic parameters were similar between our model and the model built on the pediatric data [18]. The erythrocyte lifespan was estimated at 276 h, or 11.5 days, which is short compared to what is expected in healthy individuals (70–140 days) [36]. However, given that malaria infection itself drastically reduces erythrocyte lifespan (16–84 days) and that in individuals taking oxidative medication such as primaquine the lifespan may even be shortened to 2.5–5 days, we consider our findings to be in line with the literature [37, 38]. Although one may argue that our population predictions for safety in a healthy population may not necessarily be representative, as they are based on estimates from mostly malaria-infected patients, many of whom were G6PD deficient, we consider our predictions a "worst case scenario." The MDA simulations showed an increase in the prevalence of anemia (hemoglobin < 11–13 g/dl) of 3.7% (8.6% relative increase) and 6.0% (14% relative increase) after a single dose of 0.25 mg/kg and 0.4 mg/kg primaquine, respectively. The increase in the prevalence of anemia was determined at nadir hemoglobin levels following primaquine treatment and was shown to be transient. After reaching the nadir following primaquine administration, the hemoglobin levels quickly recovered, returning to baseline after about 25 days, so that the reported increases in prevalence of anemia are present for a limited time [12, 39, 40]. The time to hemoglobin recovery was predicted to be independent of G6PD status or hemoglobin levels.
The rapid recovery of hemoglobin levels also suggests that consecutive rounds of MDA with at least 1 month in between, as typically implemented [41, 42], would be unlikely to affect the prevalence of anemia. Severe anemia is of most concern clinically, and whilst the relative increase in the prevalence of severe anemia was similar to that of any anemia grade in our Tanzanian setting, the absolute increase was much lower (0.25% vs 3.7%). This suggests that post-SLD-PQ hemoglobin levels in a small but non-negligible number of individuals might drop below our severe anemia threshold of hemoglobin < 7–8 g/dl. Again, our predictions suggested this drop would be transient. Acknowledging that individuals with low hemoglobin are at higher risk of developing severe anemia, we investigated the exclusions of individuals with hemoglobin levels below specific thresholds. Children < 5 years of age and women ≥ 15 years of age were most likely to have low hemoglobin levels in our simulations and were most at risk to develop severe anemia. As expected, we predicted that the prevalence of individuals with severe anemia is reduced by not treating individuals with low pre-treatment hemoglobin levels (< 7, < 7.5 or < 8 g/dl) although measuring hemoglobin on a large scale may be logistically challenging. Similarly, we predicted that by excluding individuals with hemoglobin values < 8 g/dl, 0.4 mg/kg primaquine can be used instead of 0.25 mg/kg without increasing the prevalence of severe anemia. Higher doses of SLD-PQ may be useful in the absence of specific low-dose or pediatric formulations of the drug. It is important to acknowledge that we did not account for females who are heterozygous for G6PD deficiency genes as we did not have the data to do so. Heterozygous females have been described to present with a wide range of G6PD activity. 
In our analysis we decided to include the heterozygous females in the G6PD normal group as the majority will have a G6PD activity which is closer to normal than deficient activity [43, 44]. We performed a sensitivity analysis by simulating MDA in which heterozygous females were included in the G6PD-deficient group. In this simulation, 17.2% of the total population was defined to be G6PD deficient instead of 5.5% in the main analysis. The impact on the prevalence of anemia following SLD-PQ is generally negligible, e.g. the prevalence of severe anemia following 0.25 mg/kg primaquine increases from 2.98% to 3.02%. The full results of the sensitivity analysis can be found in the supplementary materials (Additional file 9: Table S3). The data used to develop the model did not include children < 2 years of age and our model does not include any enzymatic maturation factors on either the pharmacokinetics or pharmacodynamics. As MDA is recommended to include children from the age of 6 months, the uncertainty in the extrapolation from our model to these younger children should be carefully considered. However, maturation of CYP2D6-dependent metabolism is not thought to play a role at this age [45], and recent findings support the safety of primaquine in young children [46]. A further limitation is that this study included only individuals who were G6PD normal or G6PD deficient with the African A- variant, the dominant variant in sub-Saharan Africa [29]. As the African A- variant is not the most severe variant of G6PD deficiency, this warrants caution for extrapolating to regions where more severe variants are prevalent. For example, individuals with the Mediterranean G6PD-deficiency variant have much lower G6PD enzyme activity compared to the African A- variant and subsequently are more at risk of severe hemolytic events. It should also be acknowledged that the Tanzanian dataset we used for MDA simulations may differ from other sub-Saharan African populations. 
For example, the simulation data were collected from villages at different altitudes, and whilst this encompasses a range of malaria endemicities, population hemoglobin levels will differ from other areas. As with other similar chemotherapeutic interventions, the epidemiology of malaria and co-infections would need to be considered in designing and implementing an MDA with SLD-PQ. Furthermore, our simulations were based on the likely availability of lower strength tablets produced at good manufacturing practice standards in the near future. These formulations are not currently available, and until then it will be more difficult to dose at the same precision as in our simulations. Lastly, it is important to emphasize that we have been cautious by using quite conservative definitions of anemia compared to other commonly used definitions, which further supports the safety of MDA with SLD-PQ in this population [47]. We predict a small drug concentration-dependent increase in hemolysis following primaquine administration, which disappears completely after 25 days. G6PD deficiency was associated with a 2.5-fold larger reduction in hemoglobin levels. MDA with SLD-PQ is predicted to result in a small and transient relative increase in the prevalence of anemia. Children < 5 years of age and women ≥ 15 years of age were found to have a higher chance to have low pre-treatment hemoglobin. Individuals with low pre-treatment hemoglobin are at increased risk of severe anemia but this is also expected to be transitory. By exclusion from dosing of individuals with low pre-treatment hemoglobin, the incidence of severe anemia after SLD-PQ treatment could be limited. This study supports the feasibility of MDA with SLD-PQ in a sub-Saharan African setting where anemia may be common. 
The datasets used during the model development can be requested from the original authors through the Worldwide Antimalarial Resistance Network repository using their PubMed IDs (27010542 and 26952094), https://app-live.wwarn.org/DataInventoryExplorer/#1. Attribution of graphical abstract resources: Computer simulation icon designed by Srip from Flaticon—www.flaticon.com/authors/srip. Population icon designed by Smashicons from Flaticon—www.flaticon.com/authors/smashicons. Pill blister pack, map of Africa and erythrocytes icons designed by Servier Medical Art—smart.servier.com. Mosquito icon designed by Freepik: www.freepik.com. SLD-PQ: Single low dose of primaquine G6PD: Glucose-6-phosphate dehydrogenase CYP2D6: Cytochrome P450 2D6 MDA: Mass drug administration DP: Dihydroartemisinin-piperaquine AL: Artemether-lumefantrine AS: Activity score VPCs: Visual predictive checks World Health Organisation. World malaria report 2019. https://www.who.int/malaria/publications/world-malaria-report-2019/en/. Accessed 15 Sep 2021. White NJ, Qiao LG, Qi G, Luzzatto L. Rationale for recommending a lower dose of primaquine as a Plasmodium falciparum gametocytocide in populations where G6PD deficiency is common. Malar J. 2012;11:418. Rosenthal PJ. Has artemisinin resistance emerged in Africa? Lancet Infect Dis. 2021;21:1056–7. Ndwiga L, Kimenyi KM, Wamae K, Osoti V, Akinyi M, Omedo I, et al. A review of the frequencies of Plasmodium falciparum Kelch 13 artemisinin resistance mutations in Africa. Int J Parasitol Drugs Drug Resist. 2021;16:155–61. Beutler E. G6PD deficiency. Blood. 1994;84:3613–36. Beutler E, Duparc S. Glucose-6-phosphate dehydrogenase deficiency and antimalarial drug development. Am J Trop Med Hyg. 2007;77:779–89. Bowman ZS, Morrow JD, Jollow DJ, McMillan DC. Primaquine-induced hemolytic anemia: role of membrane lipid peroxidation and cytoskeletal protein alterations in the hemotoxicity of 5-hydroxyprimaquine. J Pharmacol Exp Ther. 2005;314:838–45.
Fletcher KA, Barton PF, Kelly JA. Studies on the mechanisms of oxidation in the erythrocyte by metabolites of primaquine. Biochem Pharmacol. 1988;37:2683–90. Pybus BS, Sousa JC, Jin X, Ferguson JA, Christian RE, Barnhart R, et al. CYP450 phenotyping and accurate mass identification of metabolites of the 8-aminoquinoline, anti-malarial drug primaquine. Malar J. 2012;11:259. World Health Organisation. Mass drug administration for falciparum malaria: a practical field manual. https://www.who.int/publications/i/item/9789241513104. Accessed 15 Sep 2021. Goncalves BP, Tiono AB, Ouedraogo A, Guelbeogo WM, Bradley J, Nebie I, et al. Single low dose primaquine to reduce gametocyte carriage and Plasmodium falciparum transmission after artemether-lumefantrine in children with asymptomatic infection: a randomised, double-blind, placebo-controlled trial. BMC Med. 2016;14:40. Bastiaens GJH, Tiono AB, Okebe J, Pett HE, Coulibaly SA, Goncalves BP, et al. Safety of single low-dose primaquine in glucose-6-phosphate dehydrogenase deficient falciparum-infected African males: two open-label, randomized, safety trials. PLoS ONE. 2018;13:e0190272. Eziefula AC, Bousema T, Yeung S, Kamya M, Owaraganise A, Gabagaya G, et al. Single dose primaquine for clearance of Plasmodium falciparum gametocytes in children with uncomplicated malaria in Uganda: a randomised, controlled, double-blind, dose-ranging trial. Lancet Infect Dis. 2014;14:130–9. Watson J, Taylor WR, Menard D, Kheng S, White NJ. Modelling primaquine-induced haemolysis in G6PD deficiency. Elife. 2017;6:e23061. Hayes DJ, Banda CG, Chipasula-Teleka A, Terlouw DJ. Modelling the therapeutic dose range of single low dose primaquine to reduce malaria transmission through age-based dosing. BMC Infect Dis. 2017;17:254. Stepniewska K, Humphreys GS, Gonçalves BP, Craig E, Gosling R, Guerin PJ, et al. Efficacy of single dose primaquine with artemisinin combination therapy on P. 
falciparum gametocytes and transmission: a WWARN individual patient meta-analysis. J Infect Dis. 2020. https://doi.org/10.1093/infdis/jiaa498. Page-Sharp M, Ilett KF, Betuela I, Davis TME, Batty KT. Simultaneous determination of primaquine and carboxyprimaquine in plasma using solid phase extraction and LC–MS assay. J Chromatogr B Analyt Technol Biomed Life Sci. 2012;902:142–6. Goncalves BP, Pett H, Tiono AB, Murry D, Sirima SB, Niemi M, et al. Age, weight, and CYP2D6 Genotype are major determinants of primaquine pharmacokinetics in African children. Antimicrob Agents Chemother. 2017;61:e02590-e2616. Pang KS, Rowland M. Hepatic clearance of drugs. I. Theoretical considerations of a "well-stirred" model and a "parallel tube" model. Influence of hepatic blood flow, plasma and blood cell binding, and the hepatocellular enzymatic activity on hepatic drug clearance. J Pharmacokinet Biopharm. 1977;5:625–53. Johnson TN, Tucker GT, Tanner MS, Rostami-Hodjegan A. Changes in liver volume from birth to adulthood: a meta-analysis. Liver Transpl. 2005;11:1481–93. Anderson BJ, Holford NH. Mechanism-based concepts of size and maturity in pharmacokinetics. Annu Rev Pharmacol Toxicol. 2008;48:303–32. Beal SL. Ways to fit a PK model with some data below the quantification limit. J Pharmacokinet Pharmacodyn. 2001;28:481–504. Gaedigk A, Simon SD, Pearce RE, Bradford LD, Kennedy MJ, Leeder JS. The CYP2D6 activity score: translating genotype information into a qualitative measure of phenotype. Clin Pharmacol Ther. 2008;83:234–42. Gaedigk A, Dinh JC, Jeong H, Prasad B, Leeder JS. Ten years' experience with the CYP2D6 activity score: a perspective on future investigations to improve clinical predictions for precision therapeutics. J Pers Med. 2018;8:15. Lledo-Garcia R, Kalicki RM, Uehlinger DE, Karlsson MO. Modeling of red blood cell life-spans in hematologically normal populations. J Pharmacokinet Pharmacodyn. 2012;39:453–62. 
Drakeley CJ, Carneiro I, Reyburn H, Malima R, Lusingu JP, Cox J, et al. Altitude-dependent and -independent variations in Plasmodium falciparum prevalence in northeastern Tanzania. J Infect Dis. 2005;191:1589–98. Sepúlveda N, Manjurano A, Campino SG, Lemnge M, Lusingu J, Olomi R, et al. Malaria host candidate genes validated by association with current, recent, and historical measures of transmission intensity. J Infect Dis. 2017;216:45–54. Pett H, Bradley J, Okebe J, Dicko A, Tiono AB, Goncalves BP, et al. CYP2D6 polymorphisms and the safety and gametocytocidal activity of single-dose primaquine for Plasmodium falciparum. Antimicrob Agents Chemother. 2019;63:e00538-e619. Howes RE, Piel FB, Patil AP, Nyangiri OA, Gething PW, Dewi M, et al. G6PD deficiency prevalence and estimates of affected populations in malaria endemic countries: a geostatistical model-based map. PLoS Med. 2012;9:e1001339. World Health Organisation. Haemoglobin concentrations for the diagnosis of anaemia and assessment of severity; 2011. https://www.who.int/vmnis/indicators/haemoglobin.pdf. Accessed 15 Sep 2021. R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2016. https://www.R-project.org/. Accessed 15 Sep 2021. Beal S, Sheiner LB, Boeckmann A, Bauer RJ. NONMEM user's guides (1989–2009). Ellicott City: Icon Development Solutions; 2009. Keizer RJ, Karlsson MO, Hooker A. Modeling and simulation workbench for NONMEM: tutorial on Pirana, PsN, and Xpose. CPT Pharmacometr Syst Pharmacol. 2013;2:e50. Bergstrand M, Hooker AC, Wallin JE, Karlsson MO. Prediction-corrected visual predictive checks for diagnosing nonlinear mixed-effects models. AAPS J. 2011;13:143–51. Bauer RJ. NONMEM tutorial part II: estimation methods and advanced examples. CPT Pharmacometr Syst Pharmacol. 2019;8:538–56. Franco RS. Measurement of red cell lifespan and aging. Transfus Med Hemother. 2012;39:302–7. 
Looareesuwan S, Davis TM, Pukrittayakamee S, Supanaranond W, Desakorn V, Silamut K, et al. Erythrocyte survival in severe falciparum malaria. Acta Trop. 1991;48:263–70. Karafin MS, Francis RO. Impact of G6PD status on red cell storage and transfusion outcomes. Blood Transfus. 2019;17:289–95. Eziefula AC, Pett H, Grignard L, Opus S, Kiggundu M, Kamya MR, et al. Glucose-6-phosphate dehydrogenase status and risk of hemolysis in Plasmodium falciparum-infected African children receiving single-dose primaquine. Antimicrob Agents Chemother. 2014;58:4971–3. Kheng S, Muth S, Taylor WR, Tops N, Kosal K, Sothea K, et al. Tolerability and safety of weekly primaquine against relapse of Plasmodium vivax in Cambodians with glucose-6-phosphate dehydrogenase deficiency. BMC Med. 2015;13:203. Landier J, Kajeechiwa L, Thwin MM, Parker DM, Chaumeau V, Wiladphaingern J, et al. Safety and effectiveness of mass drug administration to accelerate elimination of artemisinin-resistant falciparum malaria: a pilot trial in four villages of Eastern Myanmar. Wellcome Open Res. 2017;2:81. von Seidlein L, Peto TJ, Landier J, Nguyen TN, Tripura R, Phommasone K, et al. The impact of targeted malaria elimination with mass drug administrations on falciparum malaria in Southeast Asia: a cluster randomised trial. PLoS Med. 2019;16:e1002745. LaRue N, Kahn M, Murray M, Leader BT, Bansil P, McGray S, et al. Comparison of quantitative and qualitative tests for glucose-6-phosphate dehydrogenase deficiency. Am J Trop Med Hyg. 2014;91:854–61. Bancone G, Kalnoky M, Chu CS, Chowwiwat N, Kahn M, Malleret B, et al. The G6PD flow-cytometric assay is a reliable tool for diagnosis of G6PD deficiency in women and anaemic subjects. Sci Rep. 2017;7:9822. Stevens JC, Marsh SA, Zaya MJ, Regina KJ, Divakaran K, Le M, et al. Developmental changes in human liver CYP2D6 expression. Drug Metab Dispos. 2008;36:1587–93. Setyadi A, Arguni E, Kenangalem E, Hasanuddin A, Lampah DA, Thriemer K, et al. 
Safety of primaquine in infants with Plasmodium vivax malaria in Papua, Indonesia. Malar J. 2019;18:111. White NJ. Anaemia and malaria. Malar J. 2018;17:371. This work was supported in part by funding to CD and TB from the Bill & Melinda Gates Foundation for the Primaquine supplement to AFIRM (OPP1034789). Sample collection and analysis in Tanzania were funded by the Medical Research Council, UK, grant no. 9901439. Department of Pharmacy, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands Stijn W. van Beek, Elin M. Svensson & Rob ter Heine Department of Pharmacy, Uppsala University, Uppsala, Sweden Elin M. Svensson National Center for Research and Training on Malaria (CNRFP), Ouagadougou, Burkina Faso Alfred B. Tiono Department of International Public Health, Liverpool School of Tropical Medicine, Liverpool, UK Joseph Okebe Medical Research Council Unit The Gambia at the London School of Hygiene & Tropical Medicine, Fajara, The Gambia Umberto D'Alessandro London School of Hygiene & Tropical Medicine, London, UK Bronner P. Gonçalves & Chris Drakeley Department of Medical Microbiology, Radboud University Medical Center, Nijmegen, The Netherlands Teun Bousema Stijn W. van Beek Bronner P. Gonçalves Chris Drakeley Rob ter Heine Conceived and designed the analysis: SWB, EMS, ABT, JO, UD, BPC, CD, TB, RH. Performed the analysis: SWB, EMS, RH. Wrote the manuscript: SWB, EMS, ABT, JO, UD, BPC, CD, TB, RH. All authors read and approved the final manuscript. Correspondence to Stijn W. van Beek or Chris Drakeley. No data were collected as part of this study; ethical approval and consent to participate were obtained as part of the original studies from which the data we used originate. Additional file 1: Table S1. Definition of anemia according to the World Health Organization. Additional file 2: Figure S1. Visual predictive checks for the final pharmacokinetic and pharmacodynamic models. Additional file 3: Figure S2. Goodness-of-fit plots for the pharmacokinetic model.
Additional file 4: Figure S3. Goodness-of-fit plots for the pharmacodynamic model. Additional file 5: Model code. NONMEM control stream of the pharmacokinetic/pharmacodynamic model. Additional file 6: Table S2. Predicted median reduction in hemoglobin after 0.25 and 0.4 mg/kg primaquine stratified by CYP2D6 activity score group. Additional file 7: Figure S4. Simulated hemoglobin distributions before and after treatment. Additional file 8: Figure S5. Distribution of observed pre-treatment hemoglobin levels in the simulation dataset. Additional file 9: Table S3. Predicted prevalence of anemia and its severity after 0.25 and 0.4 mg/kg primaquine when heterozygous females are included in the G6PD-deficient group. van Beek, S.W., Svensson, E.M., Tiono, A.B. et al. Model-based assessment of the safety of community interventions with primaquine in sub-Saharan Africa. Parasites Vectors 14, 524 (2021). https://doi.org/10.1186/s13071-021-05034-4 Received: 13 July 2021 Primaquine Plasmodium falciparum G6PD Protozoa and protozoan diseases
\begin{document} \title{Slow manifolds for a nonlocal fast-slow stochastic evolutionary system with stable L\'evy noise} \begin{abstract} \indent This work aims at understanding the slow dynamics of a nonlocal fast-slow stochastic evolutionary system with stable L\'evy noise. Slow manifolds with an exponential tracking property are constructed for this system, and two examples with numerical simulations are presented to illustrate the results. \\ \textbf{Keywords:} Nonlocal Laplacian, fast-slow stochastic system, random slow manifold, non-Gaussian L\'evy motion. \end{abstract} \section{Introduction}\label{s:1} \begin{linenomath*} Over the last few years, the theory of nonlocal operators has attracted considerable attention, because many complex phenomena \cite{caffarelli2010drift,meerschaert2012stochastic,metzler2004restaurant} involve nonlocal operators, and substantial progress has been made on operators of different types. The usual Laplacian operator $\Delta$ is a local operator. It generates Brownian motion (the Wiener process), which is a Gaussian process, whereas the nonlocal Laplacian operator $(-\Delta)^{\frac{\alpha}{2}}$ generates a symmetric $\alpha$-stable L\'evy motion for $\alpha\in(0,2)$ \cite{applebaum2009levy, duan2015introduction}, which is a non-Gaussian process. \\ \indent The theory of invariant manifolds is very helpful for describing and understanding the dynamics of dynamical systems under stochastic forces.
It was introduced in \cite{hadamard1901iteration,caraballo2004existence,duan2004smooth,chow1988invariant}, and for deterministic systems it was developed further by numerous authors \cite{ruelle1982characteristic,bates1998existence,chicone1997center,chow1991smooth,henry2006lecture}.\\ The theory of invariant manifolds \cite{bates1998existence,henry2006lecture} has a rich history for finite- and infinite-dimensional deterministic systems. Furthermore, invariant manifolds provide a very helpful tool for investigating the dynamical behavior of stochastic systems \cite{chueshov2010master,chen2014slow,duan2004smooth}. For a fast-slow stochastic system in which the fast mode is slaved to the slow mode, the invariant manifold tends to a slow manifold as the scale parameter approaches zero. Moreover, the slow manifold of a fast-slow stochastic system tends to a critical manifold as the scale parameter approaches zero.\\ \indent The existence of slow manifolds for stochastic systems driven by Brownian motion has been widely established \cite{duan2015introduction,fu2012slow,schmalfuss2008invariant,wang2013slow}. Numerical simulations of slow manifolds and the corresponding parameter estimation are provided in \cite{ren2015approximation,ren2015parameter}. L\'evy motions appear in many systems as models for fluctuations; for instance, they appear in the turbulent motions of fluid flows \cite{weeks1995observation}. A few monographs are devoted to stochastic ordinary differential equations driven by L\'evy noise \cite{applebaum2009levy,cont2004option}. The existence of slow manifolds under non-Gaussian L\'evy noise is established in \cite{yuan2017slow}, while the study of the dynamics of nonlocal stochastic differential equations driven by non-Gaussian L\'evy noise is still under development.
\\ \indent The main objective of this article is to establish the existence of a slow manifold for a nonlocal stochastic dynamical system driven by $\alpha$-stable L\'evy noise with $\alpha \in (1,2)$, defined in a separable Hilbert space $\mathbb{H}=H_{1}\times H_{2}$ with norm \begin{align*}||\cdot||_{\mathbb{H}}=||\cdot||_{1}+||\cdot||_{2}.\end{align*} Namely, we consider the system \begin{align} &\dot{x}=-\frac{1}{\epsilon}(-\Delta)^{\frac{\alpha}{2}}x+\frac{1}{\epsilon}f(x,y)+\frac{\sigma_{1}}{\sqrt[\alpha_{1}]{\epsilon}}\dot{L}_{t}^{\alpha_{1}},\mbox{ in }H_{1}\\&\dot{y}=Jy+g(x,y)+\sigma_{2}\dot{L}_{t}^{\alpha_{2}},\mbox{ in }H_{2}\\ &x|(-1,1)^{c}=0,\indent \indent y|(-1,1)^{c}=0. \end{align}Here, for $u\in \mathbb{R}$ and $\alpha\in(0,2),$\\$$(-\Delta)^{\frac{\alpha}{2}}x(u,t)=\frac{2^{\alpha}\Gamma(\frac{1+\alpha}{2})}{\sqrt{\pi}|\Gamma(\frac{-\alpha}{2})|}P.V.\int_{\mathbb{R}}\frac{x(u,t)-x(v,t)}{|u-v|^{1+\alpha}}dv,$$ \\is the fractional Laplacian operator, where $P.V.$ denotes the Cauchy principal value. The Gamma function $\Gamma$ is defined by\\$$\Gamma(q)=\int_{0}^{\infty}t^{q-1}e^{-t}dt,\indent \forall \indent q>0.$$\indent We take $H_{1}=L^{2}(-1,1)$ and $H_{2}$ a separable Hilbert space, with norms $||\cdot||_{1}$ and $||\cdot||_{2}$, respectively. In the system $(1)-(3)$, $\epsilon$ is a parameter with the property $0<\epsilon\ll1$. This parameter represents the ratio of two time scales, so that $||\frac{dx}{dt}||_{1}\gg||\frac{dy}{dt}||_{2}.$ The operator $J$ is a linear operator satisfying the exponential dichotomy condition (S1) presented in the next section. The operators $f$ and $g$ are nonlinear and Lipschitz continuous with $f(0,0)=0=g(0,0)$. The noise processes $L_{t}^{\alpha_{1}}$ and $L_{t}^{\alpha_{2}}$ are two-sided symmetric $\alpha$-stable L\'evy processes taking values in the Hilbert space $\mathbb{H}$, where $\alpha_{1},\alpha_{2} \in (1,2)$ are the indices of stability \cite{applebaum2009levy,chow1991smooth}.
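For readers who want to experiment numerically, a finite-dimensional caricature of the fast-slow structure above can be simulated with an Euler scheme in which the fractional Laplacian is replaced by a single stable mode and the symmetric alpha-stable increments are generated by the Chambers-Mallows-Stuck formula. Everything below (nonlinearities, coefficients, step sizes) is our illustrative choice, not the infinite-dimensional system analyzed in this paper.

```python
import numpy as np

# Illustrative, finite-dimensional caricature of a fast-slow system: the
# fractional Laplacian is replaced by a single stable mode -lam*x, and the
# driving increments are symmetric alpha-stable, generated with the
# Chambers-Mallows-Stuck formula (valid for alpha != 1, beta = 0).
def sym_alpha_stable(alpha, size, rng):
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def simulate(eps=0.1, alpha1=1.5, alpha2=1.5, T=1.0, dt=1e-3, seed=1):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    lam, sigma1, sigma2 = 1.0, 0.1, 0.1
    d_l1 = sym_alpha_stable(alpha1, n, rng)  # increments scale as dt**(1/alpha)
    d_l2 = sym_alpha_stable(alpha2, n, rng)
    x, y = 1.0, 1.0
    xs, ys = np.empty(n), np.empty(n)
    for k in range(n):
        f = np.sin(x + y)   # bounded Lipschitz nonlinearities, f(0,0)=g(0,0)=0
        g = np.sin(x) - y
        x += dt * (-lam * x + f) / eps + sigma1 * (dt / eps) ** (1.0 / alpha1) * d_l1[k]
        y += dt * (-y + g) + sigma2 * dt ** (1.0 / alpha2) * d_l2[k]
        xs[k], ys[k] = x, y
    return xs, ys
```

Note the noise scaling: the term $\sigma_{1}\epsilon^{-1/\alpha_{1}}\dot{L}_{t}^{\alpha_{1}}$ translates into increments proportional to $(dt/\epsilon)^{1/\alpha_{1}}$, the stable analogue of the $\sqrt{dt}$ rule for Brownian motion.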
\end{linenomath*} \\ \indent We introduce a random transformation such that a solution of the stochastic dynamical system $(1)-(3)$ can be represented as a transformed solution of some random dynamical system. After that, we construct the slow manifold for the random dynamical system with the help of the Lyapunov-Perron method \cite{caraballo2004existence,duan2004smooth,chow1988invariant}.\\ \indent The setup of this article is as follows. In Section 2, some fundamental concepts about random dynamical systems, the nonlocal fractional Laplacian, and a detailed discussion of differential equations driven by L$\acute{e}$vy motion are given. In Section 3, we convert the stochastic dynamical system $(1)-(3)$ into a random dynamical system by introducing a random transformation. In Section 4, we review the concept of a random invariant manifold and establish the existence of an exponentially tracking slow manifold for the random dynamical system. In Section 5, an approximation to the slow manifold is established. Finally, in Section 6, two examples with numerical simulations are presented to illustrate the results. \section{Preliminaries} In this section we recall some ideas about the fractional Laplacian operator and random dynamical systems driven by L$\acute{e}$vy motion.\\ \indent The nonlocal fractional Laplacian operator is denoted by $A_{\alpha}$ and defined as $A_{\alpha}=-(-\Delta)^{\frac{\alpha}{2}}$.\\ \begin{lemma}\begin{linenomath*} (\cite{bai2017slow}) The semigroup generated by the fractional Laplacian operator $A_{\alpha}$ has the upper bound$$||e^{A_{\alpha}t}||_{1}\leqslant C e^{-\lambda_{1}t},\indent t\geq0,$$where the constant $C>0$ is independent of $t$. Moreover, the nonlocal fractional Laplacian operator is a sectorial operator.\\\end{linenomath*}\end{lemma} \begin{lemma} \begin{linenomath*}(\cite{kwasnicki2012eigenvalues}) The spectral problem$$(-\Delta)^{\frac{\alpha}{2}}\varphi(u)=\lambda\varphi(u),\indent \varphi|(-1,1)^{c}=0,$$where $\varphi(\cdot)\in H_{1}$ is defined as in (\cite{kwasnicki2012eigenvalues}), posed on the interval $(-1,1)$, has eigenvalues of the form $$\lambda_{l}=\left(\frac{l\pi}{2}-\frac{(2-\alpha)\pi}{8}\right)^{\alpha}+O\left(\frac{1}{l}\right),\indent (l\rightarrow\infty).$$Furthermore, the eigenvalues of the fractional Laplacian satisfy$$0<\lambda_{1}<\lambda_{2}\leqslant \lambda_{3}\leqslant\cdot\cdot\cdot\leqslant\lambda_{l}\leqslant \cdot\cdot\cdot, \mbox{ for } l=1,2,3,\cdot\cdot\cdot.$$\end{linenomath*}\end{lemma} \begin{definition} (\cite{yuan2017slow}) \begin{linenomath*}Let $(\Omega, \mathcal{F},\mathbb{P})$ be a probability space and $\theta=\{\theta_{l}\}_{l\in\mathbb{R}}$ be a flow on $\Omega$ such that\\ $\bullet$ $\theta_{0}=Id_{\Omega};$\\ $\bullet$ $\theta_{l_{1}}\theta_{l_{2}}=\theta_{l_{1}+l_{2}},$ where $l_{1},l_{2}\in\mathbb{R};$\\and it is defined by a mapping $$\theta:\mathbb{R}\times\Omega\rightarrow\Omega.$$ The mapping $(l,\omega)\mapsto \theta_{l}\omega$ is $(\mathcal{B}(\mathbb{R})\otimes\mathcal{F},\mathcal{F})$-measurable, and the probability measure $\mathbb{P}$ is invariant with respect to the flow $\{\theta_{l}\}_{l\in\mathbb{R}}$, that is, $\theta_{l}\mathbb{P}=\mathbb{P}$ for all $l\in\mathbb{R}$. Then $\Theta=(\Omega, \mathcal{F},\mathbb{P},\theta)$ is called a metric dynamical system.\end{linenomath*}\end{definition} \begin{linenomath*} In this work, let $L_{t}^{\alpha}$, $\alpha\in (1,2)$, be a two-sided symmetric $\alpha$-stable L$\acute{e}$vy process taking values in the Hilbert space $\mathbb{H}$. We take a canonical sample space for the two-sided symmetric $\alpha$-stable L$\acute{e}$vy process.
Let $\Omega=D(\tilde{K},\mathbb{H})$ be the space of c$\grave{a}$dl$\grave{a}$g functions taking the value zero at $t=0$, defined on a compact subset $\tilde{K}$ of $\mathbb{R}$ and taking values in the Hilbert space $\mathbb{H}$. Under the usual open-compact metric, the space $D(\tilde{K},\mathbb{H})$ may be neither separable nor complete. It can be made complete and separable by introducing another metric $d_{\tilde{K}}^{0}$, just as the space of real-valued c$\grave{a}$dl$\grave{a}$g functions on the unit interval or on $\mathbb{R}$ can be made complete and separable \cite{wei2016weak,chao2018stable}. To this end, let $D^{0}(\tilde{K},\mathbb{H})$ be the subset of $D(\tilde{K},\mathbb{H})$ defined in Definition 3.6 of \cite{wei2016weak}, and let $\Lambda_{\tilde{K}}^{0}$ be the class of time changes $$\Lambda_{\tilde{K}}^{0}=\Big\{\lambda:\tilde{K}\rightarrow\tilde{K} \mbox{ is a strictly increasing and continuous function}\Big\}.$$ The metric $d_{\tilde{K}}^{0}$ corresponding to the class $\Lambda_{\tilde{K}}^{0}$ is given by \begin{align*}d_{\tilde{K}}^{0}(f_{1},f_{2})=&\mathop {\mbox{inf} }\limits_{\lambda \in \Lambda_{\tilde{K}}^{0} }\mbox{ max }\bigg \{\mathop {\mbox{ sup } }\limits_{x>x^{*},\, x, x^{*} \in \tilde{K}}\left|\mbox{ log }\frac{\lambda(x)-\lambda(x^{*})}{x-x^{*}}\right|,\\&||\lambda-I||_{\mbox{sup}},||f_{1}-f_{2}\circ\lambda||_{\mbox{sup}}\bigg \},\end{align*} for $f_{1}, f_{2}$ in $D^{0}(\tilde{K},\mathbb{H})$. \\By Theorem 3.2 in \cite{wei2016weak}, the metric space $[D^{0}(\tilde{K},\mathbb{H}),d_{\tilde{K}}^{0}]$ is complete and separable. Hence $D^{0}(\tilde{K},\mathbb{H})$, equipped with the Skorokhod topology generated by the Skorokhod metric $d_{\tilde{K}}^{0}$, is a Polish space, i.e., a complete and separable metric space.
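Sample paths of the symmetric $\alpha$-stable L$\acute{e}$vy processes living in this c$\grave{a}$dl$\grave{a}$g space are easy to generate numerically. A minimal sketch (illustrative only; the Chambers-Mallows-Stuck sampler below, the step size and the horizon are our own choices, not part of the construction):

```python
import numpy as np

def symmetric_stable_increments(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables
    (skewness beta = 0, unit scale)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                  # standard exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(0)
alpha, dt, n = 1.5, 1e-3, 2000
# Increments of L_t^alpha over steps of length dt scale like dt^(1/alpha).
jumps = dt ** (1.0 / alpha) * symmetric_stable_increments(alpha, n, rng)
path = np.concatenate(([0.0], np.cumsum(jumps)))  # cadlag path with L_0 = 0
print(path.shape, bool(np.isfinite(path).all()))
```

The heavy-tailed increments produce the occasional large jumps characteristic of $\alpha$-stable paths for $\alpha\in(1,2)$.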
On this space, a measurable flow $\theta=\{\theta_{l}\}_{l\in\tilde{K}}$ is defined by the mapping \begin{align*} \theta:\tilde{K}\times D^{0}(\tilde{K},\mathbb{H})\rightarrow D^{0}(\tilde{K},\mathbb{H}),\mbox{ such that } \theta_{l}\omega(\cdot)=\omega(\cdot+l)-\omega(l),\end{align*} where $\omega\in D^{0}(\tilde{K},\mathbb{H})$ and $l \in \tilde{K}$.\\ \indent Let $\mathbb{P}$ be the probability measure on $\mathcal{F}$ given by the distribution of a two-sided symmetric $\alpha$-stable L$\acute{e}$vy motion, whose sample paths are in $D(\tilde{K},\mathbb{H})$. Note that $\mathbb{P}$ is ergodic with respect to $\{\theta_{l}\}_{l\in\tilde{K}}$. Thus $( D^{0}(\tilde{K},\mathbb{H}), d_{\tilde{K}}^{0}, \mathbb{P}, \{\theta_{l}\}_{l\in\tilde{K}})$ is a metric dynamical system. Instead of $D(\tilde{K},\mathbb{H})$, we consider the $\{\theta_{l}\}_{l\in\tilde{K}}$-invariant subset $\Omega_{1}=D^{0}(\tilde{K},\mathbb{H})\subset \Omega=D(\tilde{K},\mathbb{H})$ of $\mathbb{P}$-measure 1, where $\{\theta_{l}\}_{l\in\tilde{K}}$-invariant means that $\theta_{l}\Omega_{1}=\Omega_{1}$ for $l\in \tilde{K}$. We take the restriction of the measure $\mathbb{P}$ to $\mathcal{F}$ on this subset, and still denote it by $\mathbb{P}$. In this work, we consider scalar L$\acute{e}$vy motion.\end{linenomath*} \begin{definition} (\cite{arnold2013random}) \begin{linenomath*}A cocycle $\phi$ is a $(\mathcal{B}(\mathbb{R^{+}})\otimes\mathcal{F}\otimes\mathcal{B}(\mathbb{H}),\mathcal{F})$-measurable mapping$$\phi:\mathbb{R^{+}}\times\Omega\times\mathbb{H}\rightarrow\mathbb{H},$$ satisfying \begin{align*}&\phi(0,\omega,x)=x,\\&\phi(l_{1}+l_{2},\omega,x)=\phi(l_{2},\theta_{l_{1}}\omega,\phi(l_{1},\omega,x)),\end{align*} for $x\in\mathbb{H}$, $\omega\in\Omega$ and $l_{1},l_{2}\in\mathbb{R^{+}}$. A metric dynamical system $(\Omega,\mathcal{F},\mathbb{P},\theta)$, together with such a cocycle $\phi$, generates a random dynamical system.\end{linenomath*}\end{definition} \begin{linenomath*} If $x\mapsto\phi(l,\omega,x)$ is continuous (differentiable) for every $\omega\in\Omega$ and $l\geq0$, then the random dynamical system is called continuous (differentiable). A family of non-empty closed sets $\mathcal{M}=\{\mathcal{M}(\omega):\omega\in \Omega\}$ in the metric space $(\mathbb{H},||\cdot||_{\mathbb{H}})$ is called a random set if for all $x'\in\mathbb{H}$ the map$$\omega\mapsto\mathop {\inf }\limits_{x \in \mathcal{M}(\omega)}||x-x'||_{\mathbb{H}}$$ is a random variable.\end{linenomath*} \begin{definition} \begin{linenomath*}(\cite{duan2015introduction}) A random variable $x(\omega)$ taking values in $\mathbb{H}$ is called a stationary orbit (or random fixed point) of a random dynamical system $\phi$ if $$\phi(l,\omega,x(\omega))=x(\theta_{l}\omega),\indent \indent a.s.$$for every $l\geq0$.\end{linenomath*}\end{definition} \begin{definition} (\cite{fu2012slow}) \begin{linenomath*} For a random dynamical system $\phi$, a random set $\mathcal{M}=\{\mathcal{M}(\omega):\omega\in \Omega\}$ is called a random positively invariant set if$$\phi(l,\omega,\mathcal{M}(\omega))\subset \mathcal{M}(\theta_{l}\omega),$$ for every $\omega\in\Omega$ and $l\geq0$.\end{linenomath*}\end{definition} \begin{definition} \cite{yuan2017slow} \begin{linenomath*} Define a map $$h:H_{2}\times\Omega\rightarrow H_{1},$$ such that $y\mapsto h(y,\omega)$ is Lipschitz continuous for every $\omega\in\Omega$.
If $$\mathcal{M}(\omega)=\{(h(y,\omega),y):y\in H_{2}\},$$ that is, if the random positively invariant set $\mathcal{M}=\{\mathcal{M}(\omega):\omega\in \Omega\}$ can be represented as the graph of the Lipschitz continuous map $h$, then $\mathcal{M}$ is called a Lipschitz continuous invariant manifold.\end{linenomath*}\end{definition} \begin{linenomath*}Moreover, $\mathcal{M}(\omega)$ is said to have the exponential tracking property if for every $x\in \mathbb{H}$ there exists an $x'\in \mathcal{M}(\omega)$ such that $$||\phi(l,\omega,x)-\phi(l,\omega,x')||_{\mathbb{H}}\leqslant c_{1}(x,x',\omega)e^{c_{2}l}||x-x'||_{\mathbb{H}},\indent l\geq0,$$for every $\omega\in \Omega$, where $c_{1}$ is a positive random variable and $c_{2}$ is a negative constant. \end{linenomath*} \section{Stochastic System to Random Dynamical System} \begin{linenomath*} In the fast-slow system (1)-(2) driven by symmetric $\alpha$-stable L$\acute{e}$vy noise, the state space for the fast mode is $H_{1}=L^{2}(-1,1)$ and the state space for the slow mode is $H_{2}$.
In order to establish the slow manifold, we impose the following conditions on the nonlocal system (1)-(2).\\ \textbf{(S1)} With regard to the linear part of (2), there is a constant $\gamma_{J}>0$ such that $$||e^{Jt}y||_{2}\leqslant e^{\gamma_{J} t}||y||_{2}, \indent t\leq0, \mbox{ for all } y\in H_{2}.$$ \textbf{(S2)} With regard to the nonlinear parts of (1)-(2), there is a constant $K>0$ such that for all $(x_{i},y_{i})^{T}$ and $(x_{j},y_{j})^{T}$ in $H_{1}\times H_{2}$, $$||f(x_{i},y_{i})-f(x_{j},y_{j})||_{H_{1}}\leqslant K(||x_{i}-x_{j}||_{H_{1}}+||y_{i}-y_{j}||_{H_{2}}),$$ $$||g(x_{i},y_{i})-g(x_{j},y_{j})||_{H_{2}}\leqslant K(||x_{i}-x_{j}||_{H_{1}}+||y_{i}-y_{j}||_{H_{2}}),$$ where $T$ denotes the transpose, and the nonlinearities $f$ and $g$, $$f:L^{2} (-1,1)\times H_{2}\rightarrow L^{2} (-1,1),$$ $$g:L^{2} (-1,1)\times H_{2}\rightarrow H_{2},$$ with $f(0,0)=g(0,0)=0$, are $C^{1}$-smooth.\\ \textbf{(S3)} With regard to the nonlinear parts of (1)-(2), the Lipschitz constant $K$ satisfies $$K<\frac{\lambda_{1}\gamma_{J}}{\gamma_{J}+2\lambda_{1}}.$$ \indent Now let $\Theta_{1}=(\Omega_{1},\mathcal{F}_{1},\mathbb{P}_{1},\theta_{t}^{1})$ and $\Theta_{2}=(\Omega_{2},\mathcal{F}_{2},\mathbb{P}_{2},\theta_{t}^{2})$ be two independent driving (metric) dynamical systems as explained in Section 2.
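The quantities entering (S1)-(S3) can be made concrete numerically. The sketch below (illustrative only; it uses the leading term of the eigenvalue asymptotics in Lemma 2.2 and hypothetical sample values of $\gamma_{J}$ and $K$) checks the spectral-gap condition (S3):

```python
import math

def lambda_leading(l, alpha):
    """Leading term of the eigenvalue asymptotics for the fractional
    Laplacian on (-1, 1): (l*pi/2 - (2-alpha)*pi/8)^alpha."""
    return (l * math.pi / 2 - (2 - alpha) * math.pi / 8) ** alpha

alpha = 1.5
lam1 = lambda_leading(1, alpha)          # approximation of lambda_1
gamma_J = 1.0                            # hypothetical dichotomy rate in (S1)
K_bound = lam1 * gamma_J / (gamma_J + 2 * lam1)  # right-hand side of (S3)

# Any Lipschitz constant K below K_bound satisfies (S3); for example:
K = 0.9 * K_bound
print(lam1, K_bound, K < K_bound)

# The approximate eigenvalues are increasing in l, consistent with Lemma 2.2.
lams = [lambda_leading(l, alpha) for l in range(1, 6)]
assert all(a < b for a, b in zip(lams, lams[1:]))
```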
Define \begin{align*}\Theta=\Theta_{1}\times\Theta_{2}=(\Omega_{1}\times\Omega_{2},\mathcal{F}_{1}\otimes\mathcal{F}_{2},\mathbb{P}_{1}\times\mathbb{P}_{2},(\theta_{t}^{1},\theta_{t}^{2})^{T}),\end{align*} and $$\theta_{t}\omega:=(\theta_{t}^{1}\omega_{1},\theta_{t}^{2}\omega_{2})^{T}, \mbox{ for } \omega:=(\omega_{1},\omega_{2})^{T}\in \Omega_{1}\times\Omega_{2}:=\Omega.$$ Let $L_{t}^{\alpha_{1}}$ and $L_{t}^{\alpha_{2}}$, for $\alpha_{1}, \alpha_{2}$ in $(1,2)$, be two mutually independent symmetric $\alpha$-stable L$\acute{e}$vy processes in $H_{1}=L^{2}(-1,1)$ and a separable Hilbert space $H_{2}$, with generating triplets $(a_{1},\mathcal{Q}_{1},v_{1})$ and $(a_{2},\mathcal{Q}_{2},v_{2})$, respectively.\\ \indent In order to convert the stochastic evolutionary system (1)-(2) into a random system, we first prove the existence and uniqueness of solutions for the stochastic system (1)-(2) and the nonlocal Langevin-type equation$$d\eta(t)=A_{\alpha}\eta(t)dt+\sigma dL_{t}^{\alpha}.$$\end{linenomath*} \begin{lemma} Let $L_{t}^{\alpha}$ be a symmetric $\alpha$-stable L$\acute{e}$vy process. Then, under suppositions (S1)-(S3), the nonlocal system (1)-(2) has a unique solution.\end{lemma} \begin{proof} Rewrite the system (1)-(2) in the form \begin{align}\label{sde} \left( {\begin{array}{*{20}{c}} {\dot{x}}\\ \dot{y} \end{array}} \right)= \left( {\begin{array}{*{20}{c}} {\frac{1}{\epsilon }{A_\alpha }}&0\\ 0&J \end{array}} \right) \left( {\begin{array}{*{20}{c}} {x}\\ y \end{array}} \right) +\left( {\begin{array}{*{20}{c}} {\frac{1}{\epsilon}f(x,y)}\\ g(x,y) \end{array}} \right) +\left( {\begin{array}{*{20}{c}} {\frac{\sigma_{1}}{\sqrt[\alpha_{1}]{\epsilon}}\dot{L}_{t}^{\alpha_{1}}}\\ \sigma_{2}\dot{L}_{t}^{\alpha_{2}} \end{array}} \right) \end{align} From \cite{bai2017slow}, it is known that $\left( {\begin{array}{*{20}{c}} {\frac{1}{\epsilon }{A_\alpha }}&0\\ 0&J \end{array}} \right)$ is an infinitesimal generator of a $C_{0}$-semigroup.
Then, by (\cite{peszat2007stochastic}, p.170), the above stochastic evolutionary system has a unique solution.\end{proof} \begin{lemma} Let $L_{t}^{\alpha}$ be a symmetric $\alpha$-stable L$\acute{e}$vy process for $\alpha\in (1,2)$ with generating triplet $(a,\mathcal{Q},v)$. Then the nonlocal stochastic equation \begin{align} d\eta(t)=A_{\alpha}\eta(t)dt+\sigma dL_{t}^{\alpha},\mbox{ in } L^{2}(-1,1),\end{align} where $\eta(0)=\eta_{0}$ and $A_{\alpha}$ is the fractional Laplacian operator, possesses the solution \begin{align*} \eta(t)=e^{-\lambda_{n} t}\eta_{0}+\sigma\int_{0}^{t}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha}, \mbox{ for } t\geq0,\mbox{ and }n=1,2,3\cdot\cdot\cdot.\end{align*}\end{lemma} \begin{proof} From \cite{bostan2013map}, it is known that the fractional Laplacian is a linear self-adjoint operator. By \cite{kwasnicki2012eigenvalues}, there exists an infinite sequence of eigenvalues $\{\lambda_{n}\}$ such that $$0<\lambda_{1}<\lambda_{2}\leqslant \lambda_{3}\leqslant\cdot\cdot\cdot\leqslant\lambda_{n}\leqslant \cdot\cdot\cdot, \mbox{ for } n=1,2,3,\cdot\cdot\cdot,$$ and the corresponding eigenfunctions $\varphi_{n}$ form a complete orthonormal set in $L^{2}(-1,1)$ such that $$-(-\Delta)^{\alpha/2}\varphi_{n}=-\lambda_{n}\varphi_{n}.$$ The process $L_{t}^{\alpha}$, $\alpha\in (1,2)$, is a symmetric $\alpha$-stable L$\acute{e}$vy process with characteristic exponent $\mathbb{E}e^{i\eta L_{t}^{\alpha}}=e^{-t\psi_{t}(\eta)}$, where \begin{align*}\psi_{t}(\eta)=&-i\langle a,\eta\rangle_{L^{2}(-1,1)}+\frac{1}{2}\langle \mathcal{Q}\eta,\eta\rangle_{L^{2}(-1,1)}+\int_{L^{2}(-1,1)}(1-e^{i\langle \eta,y\rangle_{L^{2}(-1,1)}}\\&+i\langle\eta,y\rangle_{L^{2}(-1,1)}c(y))v(dy),\mbox{ with }c(y)=1.\end{align*} From (\cite{sato1999levy}, p.80), it follows that $\alpha\in(1,2)$ if and only if $\int_{|y|>1}|y|v(dy)<\infty$,\\ and by (\cite{sato1999levy}, p.163), $\int_{|y|>1}|y|v(dy)<\infty$ if and only if $L_{t}^{\alpha}$ has finite mean.
Finally, with the help of (\cite{sato1999levy}, p.39), we have that if $\int_{|y|>1}|y|v(dy)<\infty$, then the center and the mean are identical. Since a symmetric $\alpha$-stable L$\acute{e}$vy process with $1<\alpha<2$ has zero mean, its center $a$ is also zero. Hence \begin{align*}\psi_{t}(\eta)=&\frac{1}{2}\langle \mathcal{Q}\eta,\eta\rangle_{L^{2}(-1,1)}+\int_{L^{2}(-1,1)}(1-e^{i\langle \eta,y\rangle_{L^{2}(-1,1)}})v(dy).\end{align*} Then, by (\cite{peszat2007stochastic}, p.143), the above equation (5) has the following solution \begin{align*} \eta(t)=e^{-\lambda_{n} t}\eta_{0}+\sigma\int_{0}^{t}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha}, \mbox{ for } t\geq0,\mbox{ and }n=1,2,3\cdot\cdot\cdot.\end{align*}\end{proof} \begin{lemma} For a fixed $\epsilon>0$, the equations \begin{align} d\eta(t)=\frac{1}{\epsilon}A_{\alpha}\eta(t) dt+\frac{\sigma_{1}}{\sqrt[\alpha_{1}]{\epsilon}}dL_{t}^{\alpha_{1}},\indent \eta(0)=\eta_{0},\end{align} \begin{align} d\delta(t)=A_{\alpha}\delta(t) dt+\sigma_{1}dL_{t}^{\alpha_{1}},\indent \delta(0)=\delta_{0},\end{align} have c$\grave{a}$dl$\grave{a}$g stationary solutions $\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1})$ and $\sigma_{1}\delta(\theta_{t}^{1}\omega_{1})$ through the random variables $\sigma_{1}\eta^{\epsilon}(\omega_{1})=\frac{\sigma_{1}}{\sqrt[\alpha_{1}]{\epsilon}}\int_{-\infty}^{0}e^{\frac{\lambda_{n}s}{\epsilon}}dL_{s}^{\alpha_{1}}(\omega_{1})$ and $\sigma_{1}\delta(\omega_{1})=\sigma_{1}\int_{-\infty}^{0}e^{\lambda_{n}s}dL_{s}^{\alpha_{1}}(\omega_{1})$, respectively.\end{lemma} \begin{proof} The equation (7) has the unique c$\grave{a}$dl$\grave{a}$g solution \begin{align*} \phi(t,\omega_{1},\delta_{0})=e^{-\lambda_{n}t}\delta_{0}+\sigma_{1}\int_{0}^{t}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha_{1}}(\omega_{1}).\end{align*} It follows that \begin{align*} \phi(t,\omega_{1},\sigma_{1}\delta(\omega_{1}))&=\sigma_{1}e^{-\lambda_{n}t}\delta(\omega_{1})+\sigma_{1}\int_{0}^{t}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha_{1}}(\omega_{1})
\\&=\sigma_{1}e^{-\lambda_{n}t}\int_{-\infty}^{0}e^{\lambda_{n}s}dL_{s}^{\alpha_{1}}(\omega_{1})+\sigma_{1}\int_{0}^{t}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha_{1}}(\omega_{1}) \\&=\sigma_{1}\int_{-\infty}^{0}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha_{1}}(\omega_{1})+\sigma_{1}\int_{0}^{t}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha_{1}}(\omega_{1}) \\&=\sigma_{1}\int_{-\infty}^{t}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha_{1}}(\omega_{1}), \end{align*} and \begin{align*} \sigma_{1}\delta(\theta_{t}^{1}\omega_{1})&=\sigma_{1}\int_{-\infty}^{0}e^{\lambda_{n}s}dL_{s}^{\alpha_{1}}(\theta_{t}^{1}\omega_{1}) \\&=\sigma_{1}\int_{-\infty}^{0}e^{\lambda_{n}s}d(L_{t+s}^{\alpha_{1}}(\omega_{1})-L_{t}^{\alpha_{1}}(\omega_{1})) \\&=\sigma_{1}\int_{-\infty}^{0}e^{\lambda_{n}s}dL_{t+s}^{\alpha_{1}}(\omega_{1}) =\sigma_{1}\int_{-\infty}^{t}e^{-\lambda_{n}(t-s)}dL_{s}^{\alpha_{1}}(\omega_{1}). \end{align*} Hence $\phi(t,\omega_{1},\sigma_{1}\delta(\omega_{1}))=\sigma_{1}\delta(\theta_{t}^{1}\omega_{1})$ is the stationary solution of (7).\\ Similarly, (6) has the c$\grave{a}$dl$\grave{a}$g stationary solution \begin{align*} \sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1})=\frac{\sigma_{1}}{\sqrt[\alpha_{1}]{\epsilon}}\int_{-\infty}^{t}e^{\frac{-\lambda_{n}(t-s)}{\epsilon}}dL_{s}^{\alpha_{1}}(\omega_{1}).
\end{align*} \end{proof} \begin{lemma} \cite{yuan2017slow} Similarly, the stochastic equation \begin{align} d\xi(t)=J\xi(t) dt+\sigma_{2}dL_{t}^{\alpha_{2}},\indent \xi(0)=\xi_{0},\end{align} has a c$\grave{a}$dl$\grave{a}$g stationary solution $\sigma_{2}\xi(\theta_{t}^{2}\omega_{2})$ through the random variable $$\sigma_{2}\xi(\omega_{2})=\sigma_{2}\int_{-\infty}^{0}e^{Js}dL_{s}^{\alpha_{2}}(\omega_{2}).$$\end{lemma} \begin{remark} (\cite{duan2015introduction}, p.191) $L_{ct}^{\alpha}$ and $c^{\frac{1}{\alpha}}L_{t}^{\alpha}$ have the same distribution for every $c>0$, i.e., $$L_{ct}^{\alpha}\overset{d}{=} c^{\frac{1}{\alpha}}L_{t}^{\alpha}, \mbox{ for every }c>0.$$\end{remark} \begin{lemma} The process $\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1})$ has the same distribution as the process $\sigma_{1}\delta(\theta_{\frac{t}{\epsilon}}^{1}\omega_{1})$, where $\eta^{\epsilon}$ and $\delta$ are given in Lemma 3.3.\end{lemma} \begin{proof} From Lemma 3.3, \begin{align*} \eta^{\epsilon}(\theta_{t}^{1}\omega_{1})&=\frac{1}{\sqrt[\alpha_{1}]{\epsilon}} \int_{-\infty}^{t}e^{\frac{-\lambda_{n}(t-s)}{\epsilon}}dL_{s}^{\alpha_{1}}(\omega_{1})=\int_{-\infty}^{\frac{t}{\epsilon}}e^{-\lambda_{n}(\frac{t}{\epsilon}-r)}\left(\frac{1}{\sqrt[\alpha_{1}]{\epsilon}} dL_{\epsilon r}^{\alpha_{1}}(\omega_{1})\right)\\&\overset{d}{=}\int_{-\infty}^{\frac{t}{\epsilon}}e^{-\lambda_{n}(\frac{t}{\epsilon}-r)} dL_{r}^{\alpha_{1}}(\omega_{1})=\delta(\theta_{\frac{t}{\epsilon}}^{1}\omega_{1}).
\end{align*} Hence the process $\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1})$ and the process $\sigma_{1}\delta(\theta_{\frac{t}{\epsilon}}^{1}\omega_{1})$ have the same distribution.\end{proof} \indent Define a random transformation $$\binom{X}{Y}:=\nu(\omega,x,y)=\binom{x-\sigma_{1}\eta^{\epsilon}(\omega_{1})}{y-\sigma_{2}\xi(\omega_{2})},$$ then $(X(t),Y(t))=\nu(\theta_{t}\omega,x,y)$ satisfies the random system \begin{align} dX&=\frac{1}{\epsilon}A_{\alpha}Xdt+\frac{1}{\epsilon}f(X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))dt,\\ dY&=JYdt+g(X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))dt. \end{align} Here the additional terms $\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1})$ and $\sigma_{2}\xi(\theta_{t}^{2}\omega_{2})$ do not change the Lipschitz constants of the nonlinearities $f$ and $g$. So $f$ and $g$ in the random dynamical system (9)-(10) and in the stochastic dynamical system (1)-(2) have the same Lipschitz constant. The random system (9)-(10) can be solved for any $\omega\in \Omega$ and any initial value $(X(0),Y(0))^{T}=(X_{0},Y_{0})^{T}$, and the solution operator \begin{align*} (t,\omega,(X_{0},Y_{0})^{T})\mapsto\Phi(t,\omega,(X_{0},Y_{0})^{T})=(X(t,\omega,(X_{0},Y_{0})^{T}),Y(t,\omega,(X_{0},Y_{0})^{T}))^{T}\end{align*} defines the random dynamical system for (9)-(10). Furthermore, \begin{align*} \phi(t,\omega,(X_{0},Y_{0})^{T})=\Phi(t,\omega,(X_{0},Y_{0})^{T})+(\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))^{T} \end{align*} defines the random dynamical system for (1)-(2). \section{Random slow manifolds} We define Banach spaces consisting of functions suitable for exploring the random system (9)-(10).
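The stationary solution in Lemma 3.3 is amenable to direct simulation. A minimal Euler-type sketch for a single spectral mode of equation (6) (illustrative only: the mode eigenvalue $\lambda$, the scale parameters and the step size are hypothetical choices, and the symmetric $\alpha$-stable increments are generated with the Chambers-Mallows-Stuck method):

```python
import numpy as np

def stable_rvs(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler, symmetric case (beta = 0, unit scale).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(1)
alpha, eps, lam, sigma1 = 1.5, 0.01, 1.6, 0.5   # hypothetical parameters
dt, n = 1e-4, 5000
# Euler scheme for  d eta = -(lam/eps) eta dt + (sigma1/eps^(1/alpha)) dL_t,
# i.e. one spectral mode of equation (6).
dL = dt ** (1 / alpha) * stable_rvs(alpha, n, rng)
eta = np.empty(n + 1)
eta[0] = 0.0
for k in range(n):
    eta[k + 1] = eta[k] - (lam / eps) * eta[k] * dt \
                 + sigma1 / eps ** (1 / alpha) * dL[k]
print(bool(np.isfinite(eta).all()))
```

The strong damping rate $\lambda/\epsilon$ makes the fast mode relax quickly between jumps, which is the mechanism behind the scale separation exploited below.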
For any $\beta\in\mathbb{R}$: \begin{align*} C_{\beta}^{H_{1},-}&=\{\Phi:(-\infty,0]\rightarrow L^{2}(-1,1)\mbox{ is continuous and } \mathop {\mbox{sup} }\limits_{t\in(-\infty,0]}||e^{-\beta t}\Phi(t)||_{1}<\infty\}, \\C_{\beta}^{H_{1},+}&=\{\Phi:[0,\infty)\rightarrow L^{2}(-1,1)\mbox{ is continuous and } \mathop {\mbox{sup} }\limits_{t\in[0,\infty)}||e^{-\beta t}\Phi(t)||_{1}<\infty\}, \end{align*} having norms \begin{align*}||\Phi||_{C_{\beta}^{H_{1},-}}=\mathop {\mbox{sup} }\limits_{t\in(-\infty,0]}||e^{-\beta t}\Phi(t)||_{1}, \mbox{ and } ||\Phi||_{C_{\beta}^{H_{1},+}}=\mathop {\mbox{sup} }\limits_{t\in[0,\infty)}||e^{-\beta t}\Phi(t)||_{1}.\end{align*} Similarly, define \begin{align*} C_{\beta}^{H_{2},-}&=\{\Phi:(-\infty,0]\rightarrow H_{2}\mbox{ is continuous and } \mathop {\mbox{sup} }\limits_{t\in(-\infty,0]}||e^{-\beta t}\Phi(t)||_{2}<\infty\}, \\C_{\beta}^{H_{2},+}&=\{\Phi:[0,\infty)\rightarrow H_{2}\mbox{ is continuous and } \mathop {\mbox{sup} }\limits_{t\in[0,\infty)}||e^{-\beta t}\Phi(t)||_{2}<\infty\}, \end{align*} having norms \begin{align*}||\Phi||_{C_{\beta}^{H_{2},-}}=\mathop {\mbox{sup} }\limits_{t\in(-\infty,0]}||e^{-\beta t}\Phi(t)||_{2}, \mbox{ and } ||\Phi||_{C_{\beta}^{H_{2},+}}=\mathop {\mbox{sup} }\limits_{t\in[0,\infty)}||e^{-\beta t}\Phi(t)||_{2}.\end{align*} Let $C_{\beta}^{\pm}$ be the product of Banach spaces $C_{\beta}^{\pm}:=C_{\beta}^{H_{1},\pm}\times C_{\beta}^{H_{2},\pm}$, having norm \begin{align*}||Z||_{C_{\beta}^{\pm}}=||X||_{C_{\beta}^{H_{1},\pm}}+||Y||_{C_{\beta}^{H_{2},\pm}},\indent Z=(X,Y)^{T}\in C_{\beta}^{\pm}.\end{align*} \indent Let $0<\gamma<1$ be a number satisfying \begin{align}K<\gamma\lambda_{1}<\lambda_{1} \mbox{ and }-\gamma+\lambda_{1}>K.\end{align} For convenience, we may take $$\gamma=\frac{\gamma_{J}}{2\lambda_{1}+\gamma_{J}}.$$ Define $$\mathcal{M}^{\epsilon}(\omega)\triangleq\{Z_{0}\in \mathbb{H}:Z(t,\omega,Z_{0})\in C_{\beta}^{-}\},\mbox{ with }\beta=-\frac{\gamma}{\epsilon}.$$ Next, we will prove that $\mathcal{M}^{\epsilon}(\omega)$ is an invariant manifold by the Lyapunov-Perron method. \begin{lemma} Let $Z(\cdot,\omega)=(X(\cdot,\omega),Y(\cdot,\omega))^{T}$ be in $C_{\beta}^{-}$. Then $Z(t,\omega)$ is the solution of (9)-(10) with initial value $Z_{0}=(X_{0},Y_{0})^{T}$ if and only if $Z(t,\omega)$ satisfies \begin{align*}&\binom{X(t)}{Y(t)}=\binom{\frac{1}{\epsilon}\int_{-\infty}^{t}e^{A_{\alpha}(t-s)/\epsilon}f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds}{e^{Jt}Y_{0}+\int_{0}^{t}e^{J(t-s)}g(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds}.\end{align*}\end{lemma} \begin{proof} If $(X(\cdot,\omega),Y(\cdot,\omega))^{T}$ is in $C_{\beta}^{-}$, then by the variation of constants formula, the random system (9)-(10) in integral form reads \begin{align} X(t)&=e^{\frac{A_{\alpha}(t-r)}{\epsilon}}X(r)+\frac{1}{\epsilon}\int_{r}^{t}e^{A_{\alpha}(t-s)/\epsilon}f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds,\\ Y(t)&=e^{Jt}Y_{0}+\int_{0}^{t}e^{J(t-s)}g(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds. \end{align} Since $(X(\cdot,\omega),Y(\cdot,\omega))^{T}$ is in $C_{\beta}^{-}$, we have \begin{align*} ||e^{\frac{A_{\alpha}(t-r)}{\epsilon}}X(r)||_{1} &\leqslant e^{\frac{-\lambda_{1}(t-r)}{\epsilon}}||X(r)||_{1}\\&\leqslant e^{\frac{-\lambda_{1}(t-r)}{\epsilon}}e^{\beta r}\mathop {\mbox{sup} }\limits_{r\in(-\infty,0]}||e^{-\beta r}X(r)||_{1}\\&= e^{\frac{-\lambda_{1}t}{\epsilon}}e^{\frac{(\lambda_{1}+\epsilon\beta)r}{\epsilon}}||X||_{C_{\beta}^{H_{1},-}}\rightarrow0, \mbox{ as }r\rightarrow-\infty,\end{align*} since $\lambda_{1}+\epsilon\beta=\lambda_{1}-\gamma>0$. Hence, (12) leads to \begin{align} X(t)=\frac{1}{\epsilon}\int_{-\infty}^{t}e^{A_{\alpha}(t-s)/\epsilon}f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds.
\end{align} The result follows from (13)-(14).\end{proof} \begin{lemma} Suppose that $Z(t,\omega,Z_{0})=(X(t,\omega,(X_{0},Y_{0})^{T}),Y(t,\omega,(X_{0},Y_{0})^{T}))^{T}$ is a solution of \begin{align}&\binom{X(t)}{Y(t)}=\binom{\frac{1}{\epsilon}\int_{-\infty}^{t}e^{A_{\alpha}(t-s)/\epsilon}f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds}{e^{Jt}Y_{0}+\int_{0}^{t}e^{J(t-s)}g(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds},t\leq0.\end{align} Then $Z(t,\omega,Z_{0})$ is the unique such solution in $C_{\beta}^{-}$, where $Z_{0}=(X_{0},Y_{0})^{T}$ is the initial value.\end{lemma} \begin{proof} With the help of the Banach fixed point theorem, we prove that $Z(t,\omega,Z_{0})=(X(t,\omega,(X_{0},Y_{0})^{T}),Y(t,\omega,(X_{0},Y_{0})^{T}))^{T}$ is the unique solution of (15). To this end, we introduce two operators for $t\leq0$: $$\mathfrak{K}_{i}(Z)[t]=\frac{1}{\epsilon}\int_{-\infty}^{t}e^{A_{\alpha}(t-s)/\epsilon}f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds,$$ $$\mathfrak{K}_{j}(Z)[t]=e^{Jt}Y_{0}+\int_{0}^{t}e^{J(t-s)}g(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds.$$ Then the Lyapunov-Perron transform is defined to be $$\mathfrak{K}(Z)=\binom{\mathfrak{K}_{i}(Z)}{\mathfrak{K}_{j}(Z)}=(\mathfrak{K}_{i}(Z),\mathfrak{K}_{j}(Z))^{T}.$$ First we need to prove that the transform $\mathfrak{K}$ maps $C_{\beta}^{-}$ into itself.
For this, consider $Z=(X,Y)^{T}$ in $C_{\beta}^{-}$. Then \begin{align*}||\mathfrak{K}_{i}(Z)||_{C_{\beta}^{H_{1},-}}&=\mathop {\mbox{sup} }\limits_{t\in (-\infty,0]}e^{-\beta t}||\frac{1}{\epsilon}\int_{-\infty}^{t}e^{A_{\alpha}(t-s)/\epsilon}f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds||_{1}\\ &\leqslant\frac{1}{\epsilon}\mathop {\mbox{sup} }\limits_{t\in (-\infty,0]}\int_{-\infty}^{t}e^{-\beta(t-s)}e^{-\lambda_{1}(t-s)/\epsilon}e^{-\beta s}||f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)\\&\indent+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))||_{1}ds\\ &\leqslant\frac{K}{\epsilon}\mathop {\mbox{sup} }\limits_{t\in (-\infty,0]}\int_{-\infty}^{t}e^{(-\beta-\lambda_{1}/\epsilon)(t-s)}e^{-\beta s}(||X(s)||_{1}+||Y(s)||_{2})ds+ \mathcal{C}_{i}\\ &\leqslant\frac{K}{\epsilon}\mathop {\mbox{sup} }\limits_{t\in(-\infty,0]}\Big\{\int_{-\infty}^{t}e^{(-\beta-\lambda_{1}/\epsilon)(t-s)}ds\Big\}||Z||_{C_{\beta}^{-}}+ \mathcal{C}_{i}\\&=\frac{K}{\lambda_{1}+\epsilon\beta}||Z||_{C_{\beta}^{-}}+ \mathcal{C}_{i}.\end{align*} Similarly, we have \begin{align*}||\mathfrak{K}_{j}(Z)||_{C_{\beta}^{H_{2},-}}&=\mathop {\mbox{sup} }\limits_{t\in(-\infty,0]}e^{-\beta t}||e^{Jt}Y_{0}+\int_{0}^{t}e^{J(t-s)}g(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds||_{2}\\ &\leqslant \mathop {\mbox{sup} }\limits_{t\in(-\infty,0]}\int_{t}^{0}e^{(\gamma_{J}-\beta)(t-s)}e^{-\beta s}||g(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)\\&\indent+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))||_{2}ds+\mathop {\mbox{sup} }\limits_{t\in (-\infty,0]}e^{(\gamma_{J}-\beta) t}||Y_{0}||_{2}\\ &\leqslant K \mathop {\mbox{sup} }\limits_{t\in (-\infty,0]}\Big\{\int_{t}^{0}e^{(\gamma_{J}-\beta)(t-s)}ds\Big\}||Z||_{C_{\beta}^{-}}+\mathcal{C}_{j}+||Y_{0}||_{2}\\&\leqslant\frac{ K}{-\beta+\gamma_{J}}||Z||_{C_{\beta}^{-}}+\mathcal{C}_{j}+||Y_{0}||_{2}\\&=\frac{K}{ -\beta+\gamma_{J}}||Z||_{C_{\beta}^{-}}+\mathcal{C}_{k}.\end{align*} In combined form, \begin{align*}||\mathfrak{K}(Z)||_{C_{\beta}^{-}}\leqslant\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)||Z||_{C_{\beta}^{-}}+\mathcal{C},\end{align*} where $\mathcal{C}, \mathcal{C}_{i}, \mathcal{C}_{j}$ and $\mathcal{C}_{k}$ are constants, while \begin{align*}\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)= \frac{K}{\lambda_{1}+\epsilon\beta}+\frac{K}{-\beta+\gamma_{J}}.\end{align*} Hence $\mathfrak{K}$ maps $C_{\beta}^{-}$ into itself, that is, $\mathfrak{K}(Z)$ is in $C_{\beta}^{-}$ for every $Z$ in $C_{\beta}^{-}$.\\ \indent Next, we need to prove that the map $\mathfrak{K}$ is contractive. For this, consider $Z=(X,Y)^{T},\tilde{Z}=(\tilde{X},\tilde{Y})^{T}\in C_{\beta}^{-}$. Then \begin{align*}||\mathfrak{K}_{i}(Z)-\mathfrak{K}_{i}(\tilde{Z})||_{C_{\beta}^{H_{1},-}}&\leqslant\frac{1}{\epsilon}\mathop {\mbox{sup} }\limits_{t\in (-\infty,0]}\int_{-\infty}^{t}e^{(-\beta-\lambda_{1}/\epsilon)(t-s)}e^{-\beta s}||f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)\\& \indent +\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))-f(\tilde{X}(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),\tilde{Y}(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))||_{1}ds\\ &\leqslant\frac{K}{\epsilon}\mathop {\mbox{sup} }\limits_{t\in(-\infty,0]}\int_{-\infty}^{t}e^{(-\beta-\lambda_{1}/\epsilon)(t-s)}e^{-\beta s}(||X(s)-\tilde{X}(s)||_{1}\\&\indent+||Y(s)-\tilde{Y}(s)||_{2})ds\\ &\leqslant\frac{K}{\epsilon}\mathop {\mbox{sup} }\limits_{t\in(-\infty, 0]}\Big\{\int_{-\infty}^{t}e^{(-\frac{\lambda_{1}}{\epsilon}-\beta)(t-s)}ds\Big\}||Z-\tilde{Z}||_{C_{\beta}^{-}}\\ &=\frac{K}{\lambda_{1}+\epsilon\beta}||Z-\tilde{Z}||_{C_{\beta}^{-}}.\end{align*} In the same way, \begin{align*}||\mathfrak{K}_{j}(Z)-\mathfrak{K}_{j}(\tilde{Z})||_{C_{\beta}^{H_{2},-}}&\leqslant K \mathop {\mbox{sup} }\limits_{t\in(-\infty, 0]}\Big\{\int_{t}^{0}e^{(\gamma_{J}-\beta)(t-s)}ds\Big\}||Z-\tilde{Z}||_{C_{\beta}^{-}}\\&\leqslant\frac{ K}{-\beta+\gamma_{J}}||Z-\tilde{Z}||_{C_{\beta}^{-}}.\end{align*} In combined form, \begin{align*} &||\mathfrak{K}(Z)-\mathfrak{K}(\tilde{Z})||_{C_{\beta}^{-}}\leqslant\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)||Z-\tilde{Z}||_{C_{\beta}^{-}},\end{align*} where, with $\beta=-\frac{\gamma}{\epsilon}$,\begin{align*}&\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)=\frac{K}{\lambda_{1}+\epsilon\beta}+\frac{K}{-\beta+\gamma_{J}}=\frac{K}{\lambda_{1}-\gamma}+\frac{K\epsilon}{\gamma+\epsilon\gamma_{J}}\rightarrow\frac{K}{\lambda_{1}-\gamma} \mbox{ as } \epsilon\rightarrow0.\end{align*} By (11), $\frac{K}{\lambda_{1}-\gamma}<1$. So there is a sufficiently small $\epsilon_{0}>0$ such that \begin{align*}0<\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)< 1,\mbox{ for }\epsilon \mbox{ in } (0,\epsilon_{0}).\end{align*} Hence the map $\mathfrak{K}$ is contractive in $C_{-\frac{\gamma}{\epsilon}}^{-}$, and by the Banach fixed point theorem it has a unique fixed point there, which is the unique solution.
Hence (15) has the unique solution \begin{align*}Z(t,\omega,Z_{0})=(X(t,\omega,(X_{0},Y_{0})^{T}),Y(t,\omega,(X_{0},Y_{0})^{T}))^{T} \mbox{ in } C_{-\frac{\gamma}{\epsilon}}^{-}.\end{align*}\end{proof} From Lemma 4.2 we get the following remark.\\ \begin{remark} For any $(X_{0},Y_{0})^{T}$, $(X'_{0},Y'_{0})^{T}$ in $\mathbb{H}$ and all $\omega \in \Omega,$ there is an $\epsilon_{0}>0$ such that \begin{align}||Z(t,\omega,(X_{0},Y_{0})^{T})-Z(t,\omega,(X'_{0},Y'_{0})^{T})||_{C_{-\frac{\gamma}{\epsilon}}^{-}}\leqslant\frac{1}{1-\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)}||Y_{0}-Y_{0}'||_{2}.\end{align}\end{remark} \begin{proof} For brevity, we write $Z(t,\omega,Y_{0})$ and $Z(t,\omega,Y'_{0})$ instead of $Z(t,\omega,(X_{0},Y_{0})^{T})$ and $Z(t,\omega,(X'_{0},Y'_{0})^{T})$. For all $\omega \in \Omega$ and $Y_{0}, Y'_{0}$ in $H_{2}$, we have the upper bound \begin{align*}||Z(t,\omega,Y_{0})-Z(t,\omega,Y'_{0})||_{C_{-\frac{\gamma}{\epsilon}}^{-}} &=||X(t,\omega,Y_{0})-X(t,\omega,Y'_{0})||_{C_{-\frac{\gamma}{\epsilon}}^{H_{1},-}}+||Y(t,\omega,Y_{0})\\&\quad-Y(t,\omega,Y'_{0})||_{C_{-\frac{\gamma}{\epsilon}}^{H_{2},-}}\\ &\leqslant\frac{K}{\lambda_{1}+\epsilon\beta}||Z(t,\omega,Y_{0})-Z(t,\omega,Y'_{0})||_{C_{-\frac{\gamma}{\epsilon}}^{-}}+\frac{ K}{-\beta+\gamma_{J}}\\&\quad\times||Z(t,\omega,Y_{0})-Z(t,\omega,Y'_{0})||_{C_{-\frac{\gamma}{\epsilon}}^{-}} +||Y_{0}-Y'_{0}||_{2}\\ &=\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)||Z(t,\omega,Y_{0})-Z(t,\omega,Y'_{0})||_{C_{-\frac{\gamma}{\epsilon}}^{-}}\\&\quad+||Y_{0}-Y'_{0}||_{2}.\end{align*} Thus, \small{\begin{align}||Z(t,\omega,(X_{0},Y_{0})^{T})-Z(t,\omega,(X'_{0},Y'_{0})^{T})||_{C_{-\frac{\gamma}{\epsilon}}^{-}}\leqslant\frac{1}{1-\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)}||Y_{0}-Y_{0}'||_{2}.\end{align}} \end{proof} \begin{theorem} Assume that suppositions (S1)-(S3) are satisfied.
Then for sufficiently small $\epsilon>0$, the random system of equations (9)-(10) possesses a Lipschitz random slow manifold:$$\mathcal{M}^{\epsilon}(\omega)=\{(\mathcal{H}^{\epsilon}(\omega,Y_{0}),Y_{0})^{T}:Y_{0}\in H_{2}\},$$where $$\mathcal{H}^{\epsilon}(\cdot,\cdot):\Omega\times H_{2}\rightarrow L^{2}(-1,1)$$ is a Lipschitz continuous graph map with Lipschitz constant$$Lip \mathcal{H}^{\epsilon}(\omega,\cdot)\leqslant\frac{K}{(\lambda_{1}-\gamma)[1-K(\frac{1}{\lambda_{1}-\gamma}+\frac{\epsilon}{\gamma+\epsilon\gamma_{J}})]}.$$\end{theorem} \begin{proof} For any $Y_{0}\in H_{2}$, introduce the Lyapunov-Perron map $\mathcal{H}^{\epsilon}:$\begin{equation}\label{sde}\mathcal{H}^{\epsilon}(\omega, Y_{0})=\frac{1}{\epsilon}\int_{-\infty}^{0}e^{-A_{\alpha}s/\epsilon}f(X(s,\omega,Y_{0})+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s,\omega,Y_{0})+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds.\end{equation} Then by (17), the following upper bound is obtained: $$||\mathcal{H}^{\epsilon}(\omega,Y_{0})-\mathcal{H}^{\epsilon}(\omega,Y'_{0})||_{1}\leqslant\frac{K}{\lambda_{1}+\epsilon\beta}\frac{1}{[1-\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)]}||Y_{0}-Y'_{0}||_{2},$$ for all $Y_{0},Y'_{0} \in H_{2}$ and $\omega \in \Omega$. So $$||\mathcal{H}^{\epsilon}(\omega,Y_{0})-\mathcal{H}^{\epsilon}(\omega,Y'_{0})||_{1}\leqslant\frac{K}{-\gamma+\lambda_{1}}\frac{1}{[1-\varrho(\lambda_{1},\gamma_{J},K,\beta,\epsilon)]}||Y_{0}-Y'_{0}||_{2},$$ for every $Y_{0},Y'_{0} \in H_{2}$ and $\omega \in \Omega$.
Then by Lemma 4.1, $$\mathcal{M}^{\epsilon}(\omega)=\{(\mathcal{H}^{\epsilon}(\omega,Y_{0}),Y_{0})^{T}:Y_{0}\in H_{2}\}.$$ Next, by Theorem III.9 in Castaing and Valadier (\cite{castaing1977convex}, p.67), $\mathcal{M}^{\epsilon}(\omega)$ is a random set, i.e., for any $Z=(X,Y)^{T}$ in $\mathbb{H}=H_{1}\times H_{2}$, the map\begin{equation}\label{sde}\omega\mapsto \mathop {\mbox{inf} }\limits_{Z'\in \mathbb{H}}||(X,Y)^{T}-(\mathcal{H}^{\epsilon}(\omega,\mathfrak{K}Z'),\mathfrak{K}Z')^{T}||,\end{equation}is measurable. Since $\mathbb{H}$ is separable, let $\mathbb{H}_{c}$ be a countable dense subset of $\mathbb{H}$. Then the right-hand side of (19) equals \begin{equation}\label{sde}\mathop {\mbox{inf} }\limits_{Z'\in \mathbb{H}_{c}}||(X,Y)^{T}-(\mathcal{H}^{\epsilon}(\omega,\mathfrak{K}Z'),\mathfrak{K}Z')^{T}||.\end{equation} This yields the measurability of (19), since $\omega\mapsto \mathcal{H}^{\epsilon}(\omega,\mathfrak{K}Z')$ is measurable for all $Z'$ in $\mathbb{H}.$\\ \indent It remains to prove that $\mathcal{M}^{\epsilon}(\omega)$ is positively invariant in the following sense: for all $Z_{0} =(X_{0},Y_{0})^{T}$ in $\mathcal{M}^{\epsilon}(\omega),$ $ Z(s,\omega,Z_{0})$ is in $\mathcal{M}^{\epsilon}(\theta_{s}\omega)$ for each $s\geqslant 0.$ Observe that $Z(t+s,\omega,Z_{0})$ is a solution of \begin{align*} &dX=\frac{1}{\epsilon}A_{\alpha}Xdt+\frac{1}{\epsilon}f(X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}), Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))dt,\\&dY=JYdt+g(X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}), Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))dt,\end{align*} with initial value $Z(0)=(X(0),Y(0))^{T}=Z(s,\omega,Z_{0})$. So $Z(t+s,\omega,Z_{0})=Z(t,\theta_{s}\omega,Z(s,\omega,Z_{0}))$. Since $Z(t,\omega,Z_{0})$ is in $C_{-\frac{\gamma}{\epsilon}}^{-}$, so is $Z(t,\theta_{s}\omega,Z(s,\omega,Z_{0}))$. Hence $Z(s,\omega,Z_{0}) \in \mathcal{M}^{\epsilon}(\theta_{s}\omega)$.
This completes the proof.\end{proof} \begin{theorem} Assume that suppositions (S1)-(S3) are satisfied. Then for sufficiently small $\epsilon>0$, the random invariant manifold of the random system (9)-(10) possesses the exponential tracking property: for every $Z_{0}=(X_{0},Y_{0})^{T} \in \mathbb{H}$ there exists $\check{Z}_{0}=(\check{X}_{0},\check{Y}_{0})^{T}\in \mathcal{M}^{\epsilon}(\omega)$ such that $$||\Phi(t,\omega,Z_{0})-\check{\Phi}(t,\omega,\check{Z}_{0})||\leqslant\mathcal{C}_{i}e^{-\mathcal{C}_{j}t}||Z_{0}-\check{Z}_{0}||,\quad t\geq0,$$ where $\mathcal{C}_{i}$ and $\mathcal{C}_{j}$ are positive constants.\end{theorem} \begin{proof} Consider two orbits of the random system (9)-(10), \begin{align*}\Phi(t,\omega,Z_{0})=(X(t,\omega,Z_{0}), Y(t,\omega,{Z}_{0}))^{T}\end{align*} and \begin{align*}\check{\Phi}(t,\omega,\check{Z}_{0})=(X(t,\omega,\check{Z}_{0}), Y(t,\omega,\check{Z}_{0}))^{T}.\end{align*} Then the difference \begin{align*}\Psi(t)=\check{\Phi}(t,\omega,\check{Z}_{0})-\Phi(t,\omega,Z_{0}):=(U(t),V(t))^{T}\end{align*} satisfies the equations \begin{align}&dU=\frac{1}{\epsilon}A_{\alpha}Udt+\frac{1}{\epsilon}\mathrm{\tilde{F}}(U,V,\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))dt,\\ &dV=JVdt+\mathrm{\tilde{G}}(U,V,\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))dt,\end{align} where the nonlinearities $\mathrm{\tilde{F}}$ and $\mathrm{\tilde{G}}$ are \begin{align*}\mathrm{\tilde{F}}(U,V,\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{t}^{2}\omega_{2})) =&f(U+X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),V+Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))\\&-f(X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2})),\end{align*} \begin{align*}\mathrm{\tilde{G}}(U,V,\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))
=&g(U+X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),V+Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))\\&-g(X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1}),Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2})).\end{align*} \indent First, we claim that $\Psi(t)=(U(t),V(t))^{T}$ is a solution of (21)-(22) in $C_{\beta}^{+}$ for $\beta=-\frac{\gamma}{\epsilon}$ if \begin{equation}\label{sde}\binom{U(t)}{V(t)}=\binom{e^{A_{\alpha}t/\epsilon}U(0)+\frac{1}{\epsilon}\int_{0}^{t}e^{A_{\alpha}(t-s)/\epsilon}\mathrm{\tilde{F}}(U,V,\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds}{\int_{+\infty}^{t}e^{J(t-s)}\mathrm{\tilde{G}}(U,V,\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds}.\end{equation} This is proved with the help of the variation of constants formula; since the steps are similar to those in Lemma 4.1, we omit them. Next, we need to prove that $(U,V)^{T}$ is the unique solution of (23) in $C_{\beta}^{+}$ with initial value $(U(0),V(0))^{T}=(U_{0},V_{0})^{T}$ such that $$(\check{X}_{0},\check{Y}_{0})^{T}=(U_{0},V_{0})^{T}+(X_{0},Y_{0})^{T}\in \mathcal{M}^{\epsilon}(\omega).$$ It is clear that $$(\check{X}_{0},\check{Y}_{0})^{T}\in \mathcal{M}^{\epsilon}(\omega)$$ if and only if $$\check{X}_{0}=\frac{1}{\epsilon} \int_{ - \infty }^{0} e^{-A_{\alpha}s/\epsilon}f(X(s,\check{Y}_{0})+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s,\check{Y}_{0})+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds.$$ Since $(\check{X}_{0},\check{Y}_{0})^{T}=(U_{0},V_{0})^{T}+(X_{0},Y_{0})^{T}$, it follows that $$(\check{X}_{0},\check{Y}_{0})^{T}=(U_{0},V_{0})^{T}+(X_{0},Y_{0})^{T}\in \mathcal{M}^{\epsilon}(\omega)$$ if and only if
\small{\begin{align*}U_{0}+X_{0}=&\frac{1}{\epsilon}\int_{-\infty}^{0}e^{-A_{\alpha}s/\epsilon}f(X(s,{V}_{0}+Y_{0})+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s,{V}_{0}+Y_{0})+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds \\=&\mathcal{H}^{\epsilon}(\omega,V_{0}+Y_{0}).\end{align*}} In short, $$(\check{X}_{0},\check{Y}_{0})^{T}=(U_{0},V_{0})^{T}+(X_{0},Y_{0})^{T}\in \mathcal{M}^{\epsilon}(\omega)\triangleq\{Z_{0}\in \mathbb{H}:Z(t,\omega,Z_{0})\in C_{\beta}^{+}\},$$ if and only if \begin{align}U_{0}=-X_{0}+\mathcal{H}^{\epsilon}(\omega,V_{0}+Y_{0}).\end{align} For every $\Psi=(U,V)^{T}\in C_{\beta}^{+}$, take $\beta=-\frac{\gamma}{\epsilon}$, $t\geq0$ and define two operators \begin{align*} &\mathfrak{J}_{i}(\Psi)[t]:=e^{A_{\alpha}t/\epsilon}U_{0}+\frac{1}{\epsilon}\int_{0}^{t}e^{A_{\alpha}(t-s)/\epsilon}\mathrm{\tilde{F}}(U(s),V(s),\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds,\\ &\mathfrak{J}_{j}(\Psi)[t]:=\int_{+\infty}^{t}e^{J(t-s)}\mathrm{\tilde{G}}(U(s),V(s),\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds.\end{align*} Furthermore, the Lyapunov-Perron transform $\mathfrak{J}:C_{-\frac{\gamma}{\epsilon}}^{+}\rightarrow C_{-\frac{\gamma}{\epsilon}}^{+}$ is defined as \begin{align*}\mathfrak{J}(\Psi)[t]=\binom{\mathfrak{J}_{i}(\Psi)[t]}{\mathfrak{J}_{j}(\Psi)[t]}=(\mathfrak{J}_{i}(\Psi)[t],\mathfrak{J}_{j}(\Psi)[t])^{T}.\end{align*} For any $\Psi=(U,V)^{T},\check{\Psi}=(\check{U},\check{V})^{T}\in C_{-\frac{\gamma}{\epsilon}}^{+},$ we obtain from (24) the estimate \begin{align*}||e^{A_{\alpha}t/\epsilon}(U_{0}-\check{U}_{0})||_{1}&\leqslant e^{-\lambda_{1}t/\epsilon}Lip\mathcal{H}^{\epsilon}||V_{0}-\check{V}_{0}||_{2}\\&\leqslant
e^{-\lambda_{1}t/\epsilon}Lip\mathcal{H}^{\epsilon}||\int_{+\infty}^{0}e^{J(-s)}(\mathrm{\tilde{G}}(\Psi(s),\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))\\&\quad-\mathrm{\tilde{G}}(\check{\Psi}(s),\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),\sigma_{2}\xi(\theta_{s}^{2}\omega_{2})))ds||_{2} \\&\leqslant e^{-\lambda_{1}t/\epsilon}Lip\mathcal{H}^{\epsilon}K\int_{0}^{+\infty}e^{-\gamma_{J} s}||\Psi(s)-\check{\Psi}(s)||ds. \end{align*} So, \begin{align*}||\mathfrak{J}_{i}(\Psi)-\mathfrak{J}_{i}(\check{\Psi})||_{C_{\beta}^{1,+}}&\leqslant Lip\mathcal{H}^{\epsilon}\times K||\Psi-\check{\Psi}||_{C_{\beta}^{+}}\mathop {\mbox{sup} }\limits_{t\in[0,\infty)}\{e^{-(\beta+\frac{\lambda_{1}}{\epsilon})t}\int_{0}^{+\infty}e^{(-\gamma_{J}+\beta)s}ds\}\\&\quad+\frac{K}{\epsilon}||\Psi-\check{\Psi}||_{C_{\beta}^{+}} \mathop {\mbox{sup} }\limits_{t\in[0,\infty)}\{e^{-\beta t}\int_{0}^{t}e^{-\lambda_{1}(t-s)/\epsilon}ds\}.\end{align*} Hence \begin{align}\label{sde}||\mathfrak{J}_{i}(\Psi)-\mathfrak{J}_{i}(\check{\Psi})||_{C_{\beta}^{1,+}}\leqslant(\frac{Lip\mathcal{H}^{\epsilon}\times K}{-\beta+\gamma_{J}}+\frac{K}{\lambda_{1}+\epsilon\beta})||\Psi-\check{\Psi}||_{C_{\beta}^{+}}.\end{align} In the same way, \begin{align*}||\mathfrak{J}_{j}(\Psi)-\mathfrak{J}_{j}(\check{\Psi})||_{C_{\beta}^{2,+}}\leqslant K||\Psi-\check{\Psi}||_{C_{\beta}^{+}}\mathop {\mbox{sup} }\limits_{t\in[0,\infty)}\{e^{-\beta t}\int_{t}^{+\infty}e^{\gamma_{J}(t-s)}e^{\beta s}ds\}.\end{align*} This implies \begin{align}\label{sde} ||\mathfrak{J}_{j}(\Psi)-\mathfrak{J}_{j}(\check{\Psi})||_{C_{\beta}^{2,+}}\leqslant\frac{ K}{-\beta+\gamma_{J}}||\Psi-\check{\Psi}||_{C_{\beta}^{+}}.\end{align} From Theorem 4.4, it is known that \begin{align*}Lip\mathcal{H}^{\epsilon}(\omega,.)\leqslant\frac{K}{(\lambda_{1}-\gamma)[1-K(\frac{1}{\lambda_{1}-\gamma}+\frac{\epsilon}{\gamma+\epsilon\gamma_{J}})]}.\end{align*} Now, combining (25) and (26) gives
\begin{align*}||\mathfrak{J}(\Psi)-\mathfrak{J}(\check{\Psi})||_{C_{-\frac{\gamma}{\epsilon}}^{+}}\leqslant\rho(\lambda_{1},\gamma_{J},K,\gamma,\epsilon)||\Psi-\check{\Psi}||_{C_{-\frac{\gamma}{\epsilon}}^{+}},\end{align*} where \begin{align*}\rho(\lambda_{1},\gamma_{J},K,\gamma,\epsilon)&=\frac{K}{\lambda_{1}+\epsilon\beta}+\frac{ K} {-\beta+\gamma_{J}}+\frac{ K^{2}}{(\lambda_{1}-\gamma)(-\beta+\gamma_{J})[1-K(\frac{1}{\lambda_{1}-\gamma}+\frac{\epsilon}{\gamma+\epsilon\gamma_{J}})]},\\ &\rightarrow\frac{K}{\lambda_{1}}+\frac{K} {-\beta+\gamma_{J}}+\frac{ K^{2}}{(\lambda_{1}-\gamma)(-\beta+\gamma_{J})[1-K(\frac{1}{\lambda_{1}-\gamma})]},\mbox{ as } \epsilon\rightarrow0.\end{align*} Taking $\beta=-\frac{\gamma}{\epsilon}$, it follows that \begin{align}\rho(\lambda_{1},\gamma_{J},K,\gamma,\epsilon)\rightarrow\frac{K}{\lambda_{1}}\mbox{ as }\epsilon\rightarrow0.\end{align} By (11), there is a sufficiently small constant $\check{\epsilon}_{0}>0$ such that \begin{align*}\rho(\lambda_{1},\gamma_{J},K,\gamma,\epsilon)<1\mbox{ for all }0<\epsilon<\check{\epsilon}_{0}.\end{align*} So the operator $\mathfrak{J}$ is strictly contractive and has a unique fixed point $\Psi$ in $C_{-\frac{\gamma}{\epsilon}}^{+}$.
By the Banach fixed point theorem, this unique fixed point is the unique solution of (23), and it satisfies \begin{align*}(\check{X}_{0},\check{Y}_{0})^{T}=(U_{0},V_{0})^{T}+(X_{0},Y_{0})^{T}\in \mathcal{M}^{\epsilon}(\omega).\end{align*} Furthermore, we have \begin{align*}||\Psi||_{C_{-\frac{\gamma}{\epsilon}}^{+}}\leqslant\frac{1}{1-K(\frac{1}{\lambda_{1}-\gamma}+\frac{\epsilon}{\gamma+\epsilon\gamma_{J}})}||\Psi_{0}||_{C_{-\frac{\gamma}{\epsilon}}^{+}},\end{align*} which implies that \begin{align*}||\Phi(t,\omega,Z_{0})-\check{\Phi}(t,\omega,\check{Z}_{0})||_{C_{-\frac{\gamma}{\epsilon}}^{+}}\leqslant \frac{e^{-\frac{\gamma}{\epsilon}t}}{1-K(\frac{1}{\lambda_{1}-\gamma}+\frac{\epsilon}{\gamma+\epsilon\gamma_{J}})}||Z_{0}-\check{Z}_{0}||_{C_{-\frac{\gamma}{\epsilon}}^{+}},\quad t\geq0.\end{align*} This establishes the exponential tracking property of $\mathcal{M}^{\epsilon}(\omega)$.\end{proof} \begin{remark} From Theorem 4.4 and Theorem 4.5, it is concluded that the random dynamical system possesses an exponential tracking random slow manifold. Moreover, there is a relation between the solutions of the stochastic system (1)-(2) and the random system (9)-(10).
So if (1)-(2) satisfies the suppositions of Theorem 4.4 and Theorem 4.5, then it also possesses an exponential tracking random slow manifold, i.e., \begin{align*} \mathcal{\tilde{M}}^{\epsilon}(\omega)=\mathcal{M}^{\epsilon}(\omega)+(\sigma_{1}\eta^{\epsilon}(\omega_{1}),\sigma_{2}\xi(\omega_{2}))^{T}=\{(\mathcal{\tilde{H}}^{\epsilon}(\omega,Y_{0}),Y_{0})^{T}:Y_{0}\in H_{2}\},\end{align*} where \begin{align*} \mathcal{\tilde{H}}^{\epsilon}(\omega,Y_{0})=\mathcal{H}^{\epsilon}(\omega,Y_{0})+\sigma_{1}\eta^{\epsilon}(\omega_{1}).\end{align*} \end{remark} \section{Approximation of a random slow manifold} From the random system (9)-(10), we get the following equations by introducing the time scale $\tau=\frac{t}{\epsilon}$: \begin{align} \frac{dX(\tau\epsilon)}{d\tau}&=A_{\alpha}X(\tau\epsilon)+f(X(\tau\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y(\tau\epsilon)+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2})),\\ \frac{dY(\tau\epsilon)}{d\tau}&=\epsilon[JY(\tau\epsilon)+g(X(\tau\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y(\tau\epsilon)+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))].\end{align} In integral form, (28)-(29) can be written as \begin{align} X(\tau\epsilon)&=\int_{-\infty}^{\tau}e^{A_{\alpha}(\tau-s)}f(X(s\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y(s\epsilon)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))ds,\\ Y(\tau\epsilon)&=Y_{0}+\epsilon\int_{0}^{\tau}[JY(s\epsilon)+g(X(s\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y(s\epsilon)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))]ds.\end{align} For sufficiently small $\epsilon>0$, we approximate the slow manifold by expanding the solution of (28) as \begin{align} X(\tau\epsilon)=X_{0}(\tau)+\epsilon X_{1}(\tau)+\epsilon^{2}X_{2}(\tau)+\cdots,\end{align} with initial data \begin{align}
X(0)=\mathcal{H}^{\epsilon}(\omega,Y_{0})=\mathcal{H}^{(0)}(\omega,Y_{0})+\epsilon\mathcal{H}^{(1)}(\omega,Y_{0})+\epsilon^{2}\mathcal{H}^{(2)}(\omega,Y_{0})+\cdots.\end{align} We have the Taylor expansions \begin{align*} f&(X(\tau\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y(\tau\epsilon)+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2})) \\=&f(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))+f_{X}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))\\&(X(\tau\epsilon)-X_{0})+f_{Y}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))(Y(\tau\epsilon)-Y_{0}),\\=&f(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))+f_{X}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))\\&{\Big[}\epsilon X_{1}(\tau)+\epsilon^{2}X_{2}(\tau)+\cdots{\Big]}+f_{Y}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2})){\Big[}\epsilon\int_{0}^{\tau}[JY(s\epsilon)\\&+g(X(s\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y(s\epsilon)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))]ds{\Big]},\end{align*} and \begin{align*} g&(X(\tau\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y(\tau\epsilon)+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))
\\=&g(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))+g_{X}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))\\&(X(\tau\epsilon)-X_{0})+g_{Y}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))(Y(\tau\epsilon)-Y_{0}),\\=&g(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))+g_{X}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))\\&{\Big[}\epsilon X_{1}(\tau)+\epsilon^{2}X_{2}(\tau)+\cdots{\Big]}+g_{Y}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2})){\Big[}\epsilon\int_{0}^{\tau}[JY(s\epsilon)\\&+g(X(s\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y(s\epsilon)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))]ds{\Big]}.\end{align*} Substituting the Taylor expansion of $f$ and the expansion of $X(\tau\epsilon)$ into (28), \begin{align*} &\frac{d{\Big[}X_{0}(\tau)+\epsilon X_{1}(\tau)+\epsilon^{2}X_{2}(\tau)+\cdots{\Big]}}{d\tau}\\=&A_{\alpha}{\Big[}X_{0}(\tau)+\epsilon X_{1}(\tau)+\epsilon^{2}X_{2}(\tau)+\cdots{\Big]}+f(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))\\&+f_{X}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2})){\Big[}\epsilon
X_{1}(\tau)+\epsilon^{2}X_{2}(\tau)+\cdots{\Big]}+f_{Y}(X_{0}\\&+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2})){\Big[}\epsilon\int_{0}^{\tau}[JY(s\epsilon)+g(X(s\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y(s\epsilon)\\&+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))]ds{\Big]}. \end{align*} Comparing the terms with equal powers of $\epsilon$, we conclude that \begin{align*} \frac{dX_{0}(\tau)}{d\tau}=&A_{\alpha}X_{0}(\tau)+f(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2})),\\&\mbox{ with initial value }X_{0}(0)=\mathcal{H}^{(0)}(\omega,Y_{0}),\\ \frac{dX_{1}(\tau)}{d\tau}=&{\Big[}A_{\alpha}+f_{X}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2})){\Big]}X_{1}(\tau)+f_{Y}(X_{0}\\&+\sigma_{1}\eta^{\epsilon}(\theta_{\tau\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{\tau\epsilon}^{2}\omega_{2}))\int_{0}^{\tau}[JY(s\epsilon)+g(X(s\epsilon)\\&+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y(s\epsilon)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))]ds, \\&\mbox{ with initial value } X_{1}(0)=\mathcal{H}^{(1)}(\omega,Y_{0}).\end{align*} Solving the above two equations gives \begin{align*} X_{0}(\tau)=&e^{A_{\alpha}\tau}\mathcal{H}^{(0)}(\omega,Y_{0})+\int_{0}^{\tau}e^{A_{\alpha}(\tau-s)}f(X_{0}(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}(s)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))ds,\\
X_{1}(\tau)=&e^{A_{\alpha}\tau+\int_{0}^{\tau}f_{X}(X_{0}(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}(s)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))ds}\times\mathcal{H}^{(1)}(\omega,Y_{0})\\&+\int_{0}^{\tau}e^{A_{\alpha}(\tau-s)+\int_{s}^{\tau}f_{X}(X_{0}(r)+\sigma_{1}\eta^{\epsilon}(\theta_{r\epsilon}^{1}\omega_{1}),Y_{0}(r)+\sigma_{2}\xi(\theta_{r\epsilon}^{2}\omega_{2}))dr} \\&\times f_{Y}(X_{0}(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}(s)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2})){\Big[}\int_{0}^{s}[JY(r\epsilon)\\&+g(X(r\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{r\epsilon}^{1}\omega_{1}),Y(r\epsilon)+\sigma_{2}\xi(\theta_{r\epsilon}^{2}\omega_{2}))]dr{\Big]}ds. \end{align*} From (18), \begin{align*}\mathcal{H}^{\epsilon}(\omega, Y_{0})=&\frac{1}{\epsilon}\int_{-\infty}^{0}e^{-A_{\alpha}s/\epsilon}f(X(s)+\sigma_{1}\eta^{\epsilon}(\theta_{s}^{1}\omega_{1}),Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))ds, \\=&\int_{-\infty}^{0}e^{-A_{\alpha}s}f(X(s\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y(s\epsilon)+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))ds, \\=&\int_{-\infty}^{0}e^{-A_{\alpha}s}f(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))ds\\&+\epsilon\int_{-\infty}^{0}e^{-A_{\alpha}s}{\Big[}f_{X}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))X_{1}(s)\\&+f_{Y}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))\times\int_{0}^{s}[JY(r\epsilon)+g(X(r\epsilon)\\&+\sigma_{1}\eta^{\epsilon}(\theta_{r\epsilon}^{1}\omega_{1}),Y(r\epsilon)+\sigma_{2}\xi(\theta_{r\epsilon}^{2}\omega_{2}))]dr{\Big]}ds+\mathcal{O}(\epsilon^{2}).
\end{align*} Comparing the above expansion with (33), we find that \begin{align*} \mathcal{H}^{(0)}(\omega, Y_{0})=&\int_{-\infty}^{0}e^{-A_{\alpha}s}f(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))ds,\\ \mathcal{H}^{(1)}(\omega,Y_{0})=&\int_{-\infty}^{0}e^{-A_{\alpha}s}{\Big[}f_{X}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))X_{1}(s)\\&+f_{Y}(X_{0}+\sigma_{1}\eta^{\epsilon}(\theta_{s\epsilon}^{1}\omega_{1}),Y_{0}+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))\times\int_{0}^{s}[JY(r\epsilon)\\&+g(X(r\epsilon)+\sigma_{1}\eta^{\epsilon}(\theta_{r\epsilon}^{1}\omega_{1}),Y(r\epsilon)+\sigma_{2}\xi(\theta_{r\epsilon}^{2}\omega_{2}))]dr{\Big]}ds.\end{align*} So, the approximation of the random slow manifold $\mathcal{M}^{\epsilon}(\omega)=\{(\mathcal{H}^{\epsilon}(\omega,Y_{0}),Y_{0})^{T}:Y_{0}\in H_{2}\}$ for the random system (9)-(10) up to order $\mathcal{O}(\epsilon^{2})$ is given by \begin{align} \mathcal{H}^{\epsilon}(\omega, Y_{0})=\mathcal{H}^{(0)}(\omega, Y_{0})+\epsilon\mathcal{H}^{(1)}(\omega, Y_{0})+\mathcal{O}(\epsilon^{2}). \end{align} Hence, the original system (1)-(2) has the slow manifold $\mathcal{\tilde{M}}^{\epsilon}(\omega)=\{(\mathcal{\tilde{H}}^{\epsilon}(\omega,Y_{0}),Y_{0})^{T}:Y_{0}\in H_{2}\}$ up to order $\mathcal{O}(\epsilon^{2})$, where \begin{align} \mathcal{\tilde{H}}^{\epsilon}(\omega,Y_{0})=\mathcal{H}^{(0)}(\omega, Y_{0})+\epsilon\mathcal{H}^{(1)}(\omega, Y_{0})+\sigma_{1}\eta^{\epsilon}(\omega_{1})+\mathcal{O}(\epsilon^{2}).
\end{align} \section{Examples} \noindent \textbf{Example 1.} Consider the system \begin{align} &\dot{x}=\frac{1}{\epsilon}A_{\alpha}x+\frac{1}{6\epsilon}y^{2}+\frac{\sigma_{1}}{\sqrt[\alpha_{1}]{\epsilon}}\dot{L}_{t}^{\alpha_{1}}, \mbox{ in } H_{1}=L^{2}(-1,1),\\ &\dot{y}=y+\frac{1}{3}\sin\int_{-1}^{1}x(a)da+\sigma_{2}\dot{L}_{t}^{\alpha_{2}}, \mbox{ in } H_{2}=\mathbb{R},\end{align} where $x$ is the fast mode and $y$ is the slow mode, while $\dot{L}_{t}^{\alpha_{1}}$ and $\dot{L}_{t}^{\alpha_{2}}$ are derivatives of scalar symmetric $\alpha$-stable L\'{e}vy processes with $1<\alpha<2$. The nonlinearities $f=\frac{1}{6}y^{2}$ and $g=\frac{1}{3}\sin\int_{-1}^{1}x(a)da$ are Lipschitz continuous. The random system corresponding to the stochastic system (36)-(37) is \begin{align} &\dot{X}=\frac{1}{\epsilon}A_{\alpha}X+\frac{1}{6\epsilon}(Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))^{2},\\ &\dot{Y}=Y+\frac{1}{3}\sin\left(\int_{-1}^{1}[X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1})]da\right).\end{align} For sufficiently small $\epsilon>0$ and $Y_{0}\in \mathbb{R}$, the random evolutionary system $(38)-(39)$ possesses a random slow manifold $$\mathcal{M}^{\epsilon}(\omega)=\{(\mathcal{H}^{\epsilon}(\omega,Y_{0}),Y_{0})^{T}:Y_{0}\in \mathbb{R}\},$$ where $$\mathcal{H}^{\epsilon}(\omega,Y_{0})=\frac{1}{6\epsilon}\int_{-\infty}^{0}e^{-A_{\alpha}s/\epsilon}\left(Y(s)+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2})\right)^{2}ds.$$ The approximate slow manifold for the nonlocal system (36)-(37) up to order $\mathcal{O}(\epsilon)$ is $$\mathcal{\tilde{H}}^{\epsilon}(\omega,Y_{0})=\mathcal{H}^{(0)}(\omega,Y_{0})+\sigma_{1}\eta^{\epsilon}(\omega_{1})+\mathcal{O}(\epsilon),$$ where $$\mathcal{H}^{(0)}(\omega,Y_{0})=\frac{1}{6}\int_{-\infty}^{0}e^{-A_{\alpha}s}(Y_{0}+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))^{2}ds.$$ \noindent \textbf{Example 2.} Consider the nonlocal fast-slow stochastic system \begin{align}
&\dot{x}=\frac{1}{\epsilon}A_{\alpha}x+\frac{0.01}{\epsilon}(\sqrt{y^{2}+5}-\sqrt{5})+\frac{\sigma_{1}}{\sqrt[\alpha_{1}]{\epsilon}}\dot{L}_{t}^{\alpha_{1}}, \mbox{ in } H_{1}=L^{2}(-1,1),\\ &\dot{y}=-y+(0.01\times b)\sin\int_{-1}^{1}x(a)da+\sigma_{2}\dot{L}_{t}^{\alpha_{2}}, \mbox{ in } H_{2}=\mathbb{R},\end{align} where $x$ is the fast mode, $y$ is the slow mode, and $b$ is a positive real unknown parameter, while $\dot{L}_{t}^{\alpha_{1}}$ and $\dot{L}_{t}^{\alpha_{2}}$ are derivatives of scalar symmetric $\alpha$-stable L\'{e}vy processes with $1<\alpha<2$. The Lipschitz continuous nonlinearities are $f=0.01(\sqrt{y^{2}+5}-\sqrt{5})$ and $g=(0.01\times b)\sin\int_{-1}^{1}x(a)da$, with Lipschitz constants $L_{f}=0.01$ and $L_{g}=0.01\times b$, respectively. The random system corresponding to the stochastic system (40)-(41) is \begin{align} &\dot{X}=\frac{1}{\epsilon}A_{\alpha}X+\frac{0.01}{\epsilon}(\sqrt{(Y+\sigma_{2}\xi(\theta_{t}^{2}\omega_{2}))^{2}+5}-\sqrt{5}),\\ &\dot{Y}=-Y+(0.01\times b)\sin\left(\int_{-1}^{1}[X+\sigma_{1}\eta^{\epsilon}(\theta_{t}^{1}\omega_{1})]da\right).\end{align} For sufficiently small $\epsilon>0$, the random system $(42)-(43)$ possesses an exponential tracking slow manifold $$\mathcal{M}^{\epsilon}(\omega)=\{(\mathcal{H}^{\epsilon}(\omega,Y_{0}),Y_{0})^{T}:Y_{0}\in \mathbb{R}\},$$ where $$\mathcal{H}^{\epsilon}(\omega,Y_{0})=\frac{0.01}{\epsilon}\int_{-\infty}^{0}e^{-A_{\alpha}s/\epsilon}(\sqrt{(Y_{0}+\sigma_{2}\xi(\theta_{s}^{2}\omega_{2}))^{2}+5}-\sqrt{5})ds.$$ The approximate slow manifold for the nonlocal system (40)-(41) up to order $\mathcal{O}(\epsilon)$ is $$\mathcal{\tilde{H}}^{\epsilon}(\omega,Y_{0})=\mathcal{H}^{(0)}(\omega,Y_{0})+\sigma_{1}\eta^{\epsilon}(\omega_{1})+\mathcal{O}(\epsilon),$$ where, for fixed $Y_{0}\in \mathbb{R}$, $$\mathcal{H}^{(0)}(\omega,Y_{0})=0.01\int_{-\infty}^{0}e^{-A_{\alpha}s}(\sqrt{(Y_{0}+\sigma_{2}\xi(\theta_{s\epsilon}^{2}\omega_{2}))^{2}+5}-\sqrt{5})ds.$$ We have conducted a numerical simulation for Example 2.
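A minimal sketch of one possible Euler-type scheme for such a simulation (not necessarily the one used to produce the figures) can be given for a scalar caricature of (40)-(41): here the nonlocal operator $A_{\alpha}$ is replaced by its leading eigenvalue $-\lambda_{1}$, the integral term $\int_{-1}^{1}x(a)da$ is replaced by $2x$ (exact when $x$ is spatially constant), and all parameter values are hypothetical. The $\alpha$-stable increments are generated by the Chambers-Mallows-Stuck method.

```python
# Scalar caricature of the fast-slow system (40)-(41); A_alpha is replaced
# by -lam1, and the spatial integral by 2*x. All parameters are hypothetical.
import numpy as np

def alpha_stable_increments(alpha, dt, n, rng):
    """Symmetric alpha-stable increments via the Chambers-Mallows-Stuck method."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, n)
    W = rng.exponential(1.0, n)
    S = (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
         * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))
    return dt ** (1 / alpha) * S  # time-scaled increments over a step dt

def simulate(eps=0.01, lam1=1.0, b=1.0, sigma1=0.1, sigma2=0.1,
             alpha=1.2, T=1.0, dt=1e-4, x0=0.0, y0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    dL1 = alpha_stable_increments(alpha, dt, n, rng)
    dL2 = alpha_stable_increments(alpha, dt, n, rng)
    x, y = x0, y0
    xs, ys = np.empty(n), np.empty(n)
    for k in range(n):
        fx = 0.01 * (np.sqrt(y ** 2 + 5.0) - np.sqrt(5.0))
        gy = 0.01 * b * np.sin(2.0 * x)   # stand-in for sin of the integral term
        # fast variable: drift scaled by 1/eps, noise by eps^{-1/alpha}
        x += dt / eps * (-lam1 * x + fx) + sigma1 * eps ** (-1 / alpha) * dL1[k]
        # slow variable
        y += dt * (-y + gy) + sigma2 * dL2[k]
        xs[k], ys[k] = x, y
    return xs, ys
```

Plotting several sample paths of $(x,y)$, or the distance between a trajectory started off the manifold and one started on it, would illustrate the slow manifold and the exponential tracking property qualitatively.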
The simulation of Example 1 is similar, so we omit it. \begin{figure} \caption{ (left) Random slow manifold for one sample; (right) random slow manifold for different samples.} \label{fig:graph1} \end{figure} \begin{figure} \caption{ (left) Exponential tracking property in the system for $\alpha=1.2$ and $\epsilon=0.01$; (right) exponential tracking property in the system for $\alpha=1$ and $\epsilon=0.01$.} \label{fig:graph2} \end{figure} \end{document}
\begin{document} \begin{abstract} A nonempty closed convex bounded subset $C$ of a Banach space is said to have the weak approximate fixed point property if for every continuous map $f:C\to C$ there is a sequence $\{x_n\}$ in $C$ such that $x_n-f(x_n)$ converges weakly to $0$. We prove in particular that $C$ has this property whenever it contains no sequence equivalent to the standard basis of $\ell_1$. As a byproduct we obtain a characterization of Banach spaces not containing $\ell_1$ in terms of the weak topology. \end{abstract} \maketitle \section{Introduction and main results} Let $X$ be a real Banach space and $C$ a nonempty closed convex bounded subset of $X$. The set $C$ is said to have the {\it approximate fixed point property} (shortly {\it afp property}) if for every continuous mapping $f:C\to C$ there is a sequence $\{x_n\}$ in $C$ such that $x_n-f(x_n)\to0$. The set $C$ is said to have the {\it weak approximate fixed point property} (shortly {\it weak afp property}) if for every continuous mapping $f:C\to C$ there is a sequence $\{x_n\}$ in $C$ such that the sequence $\{x_n-f(x_n)\}$ weakly converges to $0$. The study of these notions was started by C. S. Barroso \cite{Ba} in topological vector spaces where, in particular, the weak afp property for weakly compact convex subsets of Banach spaces was proved, and later by C.S. Barroso and P.-K. Lin \cite{BL} in Banach spaces for general bounded, closed convex sets with emphasis on geometrical aspects. Our terminology follows \cite{BL}. Anyway, it is worth remarking that the notion of the afp property does not have a good meaning in this context. Indeed, if $C$ is compact, then any continuous selfmap of $C$ has even a fixed point by Schauder's theorem (see e.g. \cite[p. 151, Theorem 183]{HHZ}). If $C$ is not compact, then it does not have the afp property by a result of P.-K.\ Lin and Y.\ Sternfeld \cite[Theorem 1]{LS}. It may, however, make sense in the case of a non-complete $X$ or a non-closed $C$.
A Lipschitz version of this property is studied in \cite{LS}. For the weak afp property the situation is different: A Banach space $X$ is said to have the {\it weak approximate fixed point property} if each nonempty closed convex bounded subset of $X$ has the weak afp property. This notion was studied by C.S.\ Barroso and P.-K.\ Lin in \cite{BL}. They proved that Asplund spaces do have the weak afp property and asked in Problem 1.1 whether the same is true for spaces not containing $\ell_1$. In the present paper we answer this question affirmatively. This is the content of the following theorem. \begin{thm} Let $X$ be a Banach space. Then $X$ has the weak approximate fixed point property if and only if $X$ contains no isomorphic copy of $\ell_1$. \end{thm} This theorem is an immediate consequence of the following more general theorem: \begin{thm}\label{convex} Let $X$ be a Banach space and $C$ a nonempty closed convex bounded subset of $X$. Then the following assertions are equivalent. \begin{itemize} \item[(1)] Each nonempty closed convex subset of $C$ has the weak approximate fixed point property. \item[(2)] $C$ contains no sequence equivalent to the standard basis of $\ell_1$. \end{itemize} \end{thm} Let us recall that a bounded sequence $\{x_n\}$ is equivalent to the standard basis of $\ell_1$ if there is a constant $c>0$ such that for any $N\in\mathbb N$ and any choice of $a_1,\dots,a_N\in\mathbb R$ we have $$\left\|\sum_{n=1}^N a_n x_n\right\|\ge c\sum_{n=1}^N |a_n|.$$ It means that the mapping $T:\ell_1\to X$ defined by $T(\{a_n\})=\sum_{n=1}^\infty a_n x_n$ is an isomorphic embedding. Such sequences $\{x_n\}$ will be called {\it $\ell_1$-sequences}. The implication $(1)\Rightarrow(2)$ is known to be true. Indeed, suppose that $\{x_n\}$ is an $\ell_1$-sequence contained in $C$. Set $D$ to be the closed convex hull of the set $\{x_n:n\in\mathbb N\}$. Let $T$ be the mapping defined in the previous paragraph.
Then $Y=T(\ell_1)$ is a subspace of $X$ which is isomorphic to $\ell_1$ and contains $D$. So, by Schur's theorem (see e.g. \cite[p. 74, Theorem 99]{HHZ}), weakly convergent sequences in $Y$ are norm convergent. So, if $D$ had the weak afp property, it would have the afp property as well. But this is impossible by the already quoted \cite[Theorem 1]{LS}, as $D$ is not compact. We remark that Theorem~\ref{convex} immediately implies that weakly compact sets have the weak afp property, which also follows from a result of C.S.\ Barroso \cite[Theorem 3.1]{Ba}. We finish this section by recalling and commenting on two results from \cite{BL}. \begin{lemma}\label{L1} Let $X$ be any Banach space, $C$ any nonempty closed convex bounded subset of $X$ and $f:C\to C$ any continuous mapping. Then the point $0$ is in the weak closure of the set $\{x-f(x): x\in C\}$. \end{lemma} This lemma is proved in \cite[Lemma 2.1]{BL} using Brouwer's fixed point theorem and paracompactness of metric spaces. \begin{lemma}\label{L2} Let $X$ be any Banach space, $C$ any nonempty closed convex bounded subset of $X$ and $f:C\to C$ any continuous mapping. Then there is a nonempty closed convex separable subset $D\subset C$ with $f(D)\subset D$. \end{lemma} This is easy and is proved in the second part of the proof of Theorem 2.2 in \cite{BL}. In view of Lemma~\ref{L1}, to prove the weak afp property one needs to reach the point $0$ by a limit of a sequence, not just by a limit of a net. In \cite[Theorem 2.2]{BL} this is done by implicitly using the metrizability of the weak topology on bounded sets of a separable Asplund space. We show that this is also possible under the weaker assumption that the space does not contain a copy of $\ell_1$. Topological results which enable us to do so are contained in the following section.
\section{$\ell_1$-sequences and Fr\'echet-Urysohn property of the weak topology} Let us recall that a topological space $T$ is called {\it Fr\'echet-Urysohn} if the closures of subsets of $T$ are described using sequences, i.e. if whenever $A\subset T$ and $x\in T$ is such that $x\in\overline{A}$, there is a sequence $\{x_n\}$ in $A$ with $x_n\to x$. Metrizable spaces are Fr\'echet-Urysohn but there are many nonmetrizable Fr\'echet-Urysohn spaces (for examples see the results below). We will need the following deep result of J.\ Bourgain, D.H.\ Fremlin and M.\ Talagrand \cite[Theorem 3F]{BFT}: \begin{thm}\label{t-BFT} Let $P$ be a Polish space (i.e., a separable completely metrizable space). Denote by $B_1(P)$ the space of all real-valued functions on $P$ which are of the first Baire class and equip this space with the topology of pointwise convergence. Suppose that $A\subset B_1(P)$ is relatively countably compact in $B_1(P)$ (i.e., each sequence in $A$ has a cluster point in $B_1(P)$). Then the closure $\overline A$ of $A$ in $B_1(P)$ is compact and Fr\'echet-Urysohn. \end{thm} In fact, we need a slightly weaker version formulated in the following corollary. \begin{cor}\label{cor-BFT} Let $P$ be a Polish space and $A$ be a set of real-valued continuous functions on $P$. Suppose that each sequence in $A$ has a pointwise convergent subsequence. Then the closure of $A$ in $\mathbb R^P$ is a Fr\'echet-Urysohn compact space contained in $B_1(P)$. \end{cor} \begin{proof} $A$ is obviously contained in $B_1(P)$. Moreover, let $(f_n)$ be any sequence in $A$. By the assumption there is a subsequence $(f_{n_k})$ pointwise converging to some function $f$. As the functions $f_{n_k}$ are continuous, the limit function $f$ is of the first Baire class. Hence, it is a cluster point of $(f_n)$ in $B_1(P)$. So, $A$ is relatively countably compact in $B_1(P)$. The assertion now follows from Theorem~\ref{t-BFT}. 
\end{proof} Now we are ready to prove the following proposition, which can be viewed as an improvement of a result due to E.\ Odell and H.P.\ Rosenthal \cite{OR} on the characterization of separable spaces not containing $\ell_1$. We note that we use the results of \cite{BFT}, a paper published three years after \cite{OR}. \begin{prop}\label{FU} Let $X$ be a Banach space and $C$ be a bounded subset of $X$. If $C$ is norm-separable and contains no $\ell_1$-sequence, then the set $$\wscl{\kappa(C-C)}=\wscl{\{\kappa(x-y): x,y\in C\}}$$ is Fr\'echet-Urysohn when equipped with the weak* topology, where $\kappa$ denotes the canonical embedding of $X$ into $X^{**}$. In particular, $$\overline{C-C}^{w}=\overline{\{x-y: x,y\in C\}}^{w}$$ is Fr\'echet-Urysohn when equipped with the weak topology. \end{prop} \begin{proof} As the closed linear span of $C$ is separable, we can without loss of generality suppose that $X$ is separable. Further we have: $$\mbox{Each sequence in $C-C$ has a weakly Cauchy subsequence.}\eqno{(*)}$$ Indeed, let $\{z_n\}$ be a sequence in $C-C$. Then there are sequences $\{x_n\}$ and $\{y_n\}$ in $C$ such that $z_n=x_n-y_n$ for each $n\in\mathbb N$. As $C$ contains no $\ell_1$-sequence, by Rosenthal's theorem \cite{R} there is a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ which is weakly Cauchy. Using Rosenthal's theorem once more, we get a subsequence $\{y_{n_{k_l}}\}$ of $\{y_{n_k}\}$ which is weakly Cauchy. Then $\{z_{n_{k_l}}\}=\{x_{n_{k_l}}-y_{n_{k_l}}\}$ is a weakly Cauchy subsequence of $\{z_n\}$. This completes the proof of \thetag{*}. Further, denote by $K$ the dual unit ball $B_{X^*}$ equipped with the weak* topology. Then $K$ is a metrizable compact space. Denote by $r$ the mapping $r:X^{**}\to \mathbb R^K$ defined by $r(F)=F|_K$ for $F\in X^{**}$. Then we have: \begin{itemize} \item[(i)] $r$ is a homeomorphism of $(X^{**},w^*)$ onto $r(X^{**})$. \item[(ii)] $r\circ \kappa$ is a homeomorphism of $(X,w)$ onto $r(\kappa(X))$.
\item[(iii)] The functions from $r(\kappa(X))$ are continuous on $K$. \end{itemize} Set $M=r(\kappa(C-C))$. Then $M$ is a uniformly bounded set of continuous functions on $K$. Moreover, by \thetag{*} any sequence from $M$ has a pointwise convergent subsequence. By Corollary~\ref{cor-BFT} the closure of $M$ in $\mathbb R^K$ is a Fr\'echet-Urysohn compact subset of $B_1(K)$. But this closure is equal to $r\left(\wscl{\kappa(C-C)}\right)$. It follows that $\wscl{\kappa(C-C)}$ is Fr\'echet-Urysohn in the weak* topology. This completes the proof of the first statement. Further, to show the `in particular' statement it is enough to observe that the set $\wscl{\kappa(C-C)}$ contains $\kappa\left(\overline{C-C}^{w}\right)$, hence $\overline{C-C}^{w}$ is Fr\'echet-Urysohn in the weak topology. \end{proof} As a corollary we get the following characterization of spaces not containing $\ell_1$: \begin{thm}\label{ell1} Let $X$ be a Banach space. Then the following assertions are equivalent. \begin{itemize} \item[(1)] $X$ contains no isomorphic copy of $\ell_1$. \item[(2)] Each bounded separable subset of $X$ is Fr\'echet-Urysohn in the weak topology. \item[(3)] For each separable subset $A\subset X$ there are relatively weakly closed subsets $A_n$, $n\in\mathbb N$, of $A$ such that $A=\bigcup_{n\in\mathbb N}A_n$ and each $A_n$ is Fr\'echet-Urysohn in the weak topology. \end{itemize} \end{thm} Note that the assertion (3) is a topological property of the space $(X,w)$ (as norm separability coincides with weak separability). \begin{proof} The implication (1)$\Rightarrow$(2) follows from Proposition~\ref{FU}. The implication (2)$\Rightarrow$(1) follows from the fact that the unit ball of $\ell_1$ is not Fr\'echet-Urysohn (as $0$ is in the weak closure of the sphere and the sphere is weakly sequentially closed by Schur's theorem). The implication (2)$\Rightarrow$(3) is trivial if we use the fact that a closed ball is weakly closed. Let us prove (3)$\Rightarrow$(2).
To show (2) it is enough to prove that the unit ball of any closed separable subspace of $X$ is Fr\'echet-Urysohn in the weak topology. Let $Y$ be such a subspace. Let $Y_n$, $n\in\mathbb N$, be the cover of $Y$ provided by (3). As each $Y_n$ is weakly closed, it is also norm-closed. By the Baire category theorem some $Y_n$ has a nonempty interior in $Y$, so it contains a ball. We get that some ball in $Y$ is Fr\'echet-Urysohn, so the unit ball has this property as well. \end{proof} It is worthwhile to compare the previous theorem with a similar characterization of Asplund spaces: \begin{thm} Let $X$ be a Banach space. Then the following assertions are equivalent. \begin{itemize} \item[(1)] $X$ is Asplund. \item[(2)] Each bounded separable subset of $X$ is metrizable in the weak topology. \item[(3)] For each separable subset $A\subset X$ there are relatively weakly closed subsets $A_n$, $n\in\mathbb N$, of $A$ such that $A=\bigcup_{n\in\mathbb N}A_n$ and each $A_n$ is metrizable in the weak topology. \end{itemize} \end{thm} We recall that $X$ is Asplund if and only if $Y^*$ is separable for each separable subspace $Y\subset X$. The equivalence of (1) and (2) follows from the well-known fact that the unit ball of $Y$ is metrizable in the weak topology if and only if $Y^*$ is separable. The equivalence of (2) and (3) can be proved similarly to the corresponding equivalence in the previous theorem. \begin{remark} There is no analogue of Theorem~\ref{ell1} for convex sets. Indeed, let $X=\ell_1$ and $C$ be the closed convex hull of the standard basis. Then $C$ contains an $\ell_1$-sequence but is Fr\'echet-Urysohn in the weak topology. In fact, it is even metrizable, as it is easy to see that on the positive cone of $\ell_1$ the weak and norm topologies coincide. \end{remark} \section{Proof of Theorem~\ref{convex}} It remains to prove the implication (2)$\Rightarrow$(1).
Let $X$ be a Banach space, $C\subset X$ a nonempty closed convex bounded set containing no $\ell_1$-sequence and $f:C\to C$ a continuous mapping. Let $D$ be the set provided by Lemma~\ref{L2}. Then $D$ is separable and contains no $\ell_1$-sequence. By Lemma~\ref{L1} the point $0$ is in the weak closure of $\{x-f(x):x\in D\}$. By Proposition~\ref{FU} there is a sequence from this set weakly converging to $0$. This completes the proof.\qed \begin{remark} We stress the difference between approximation in the norm and in the weak topology. Suppose that $X$ is a Banach space, $C\subset X$ a nonempty closed convex bounded set and $f:C\to C$ a continuous mapping. For approximation in the norm, we have the equivalence of the following three conditions: \begin{itemize} \item There is a sequence $\{x_n\}$ in $C$ such that $x_n-f(x_n)\to0$. \item The point $0$ is in the norm-closure of the set $\{x-f(x):x\in C\}$. \item $\inf\{\|x-f(x)\|: x\in C\}=0$. \end{itemize} These three statements are trivially equivalent (by properties of metric spaces) and are rather strong. For the weak topology the situation is different. First, there is no analogue of the third condition. Secondly, the analogue of the second one is always satisfied, by Lemma~\ref{L1}. But the analogue of the first one is not always satisfied, as the weak topology is not in general described by sequences. \end{remark} \end{document}
Formalized Mathematics (ISSN 0777-4028), Volume 2, Number 2 (1991).

Stanislawa Kanas, Jan Stankiewicz. Metrics in Cartesian Product, Formalized Mathematics 2(2), pages 193-197, 1991. MML Identifier: METRIC_3. Summary: A continuation of the paper \cite{METRIC_1.ABS}. It deals with the method of creating the distance in the Cartesian product of metric spaces. The distance of two points belonging to the Cartesian product of metric spaces is defined as the sum of the distances of the appropriate coordinates (or projections) of these points. It is shown that the product of metric spaces with such a distance is a metric space.

Adam Lecko, Mariusz Startek. Submetric Spaces -- Part I, Formalized Mathematics 2(2), pages 199-203, 1991. MML Identifier: SUB_METR.

Adam Lecko, Mariusz Startek. On Pseudometric Spaces, Formalized Mathematics 2(2), pages 205-211, 1991. MML Identifier: METRIC_2.

Konrad Raczkowski, Andrzej Nedzusiak. Real Exponents and Logarithms, Formalized Mathematics 2(2), pages 213-216, 1991. MML Identifier: POWER. Summary: Definitions and properties of the following concepts: root, real exponent and logarithm. Also the number $e$ is defined.

Eugeniusz Kusak, Wojciech Leonczuk. Hessenberg Theorem, Formalized Mathematics 2(2), pages 217-219, 1991. MML Identifier: HESSENBE. Summary: We prove the Hessenberg theorem, which states that every Pappian projective space is Desarguesian.

Michal Muzalewski, Wojciech Skaba. Three-Argument Operations and Four-Argument Operations, Formalized Mathematics 2(2), pages 221-224, 1991. MML Identifier: MULTOP_1. Summary: The article contains the definition of three- and four-argument operations. The article also introduces a few operation-related schemes: {\it FuncEx3D}, {\it TriOpEx}, {\it Lambda3D}, {\it TriOpLambda}, {\it FuncEx4D}, {\it QuaOpEx}, {\it Lambda4D}, {\it QuaOpLambda}.

Wojciech Leonczuk, Krzysztof Prazmowski. Incidence Projective Spaces, Formalized Mathematics 2(2), pages 225-232, 1991.
MML Identifier: INCPROJ. Summary: A basis for investigations on incidence projective spaces. With every projective space defined in terms of the collinearity relation we associate the incidence structure consisting of the points and lines of the given space. We introduce a general notion of projective space defined in terms of incidence and define several properties of such structures (like satisfiability of the Desargues Axiom or conditions on the dimension).

Barbara Konstanta, Urszula Kowieska, Grzegorz Lewandowski, Krzysztof Prazmowski. One-Dimensional Congruence of Segments, Basic Facts and Midpoint Relation, Formalized Mathematics 2(2), pages 233-235, 1991. MML Identifier: AFVECT01. Summary: We study a theory of one-dimensional congruence of segments. The theory is characterized by a suitable formal axiom system; as a model of this system one can take the structure obtained from any weak directed geometrical bundle, with the congruence interpreted as in the case of ``classical'' vectors. Preliminary consequences of our axiom system are proved, the basic relations of maximal distance and of midpoint are defined, and several fundamental properties of them are established.

Andrzej Trybulec. Algebra of Normal Forms, Formalized Mathematics 2(2), pages 237-242, 1991. MML Identifier: NORMFORM. Summary: By a normal form we mean a finite set of ordered pairs of subsets of a fixed set that fulfils two conditions: its elements consist of disjoint sets, and its elements are incomparable w.r.t. inclusion. The underlying set corresponds to a set of propositional variables, but it is arbitrary. The correspondence to a normal form of a formula, e.g. a disjunctive normal form, is as follows: the normal form is the set of disjuncts, and a disjunct is an ordered pair consisting of the sets of propositional variables that occur in the disjunct non-negated and negated.
The requirement that the elements of a normal form consist of disjoint sets means that contradictory disjuncts have been removed, and the second condition means that the absorption law has been used to shorten the normal form. We construct a lattice $\langle {\Bbb N},\sqcup,\sqcap \rangle$, where $a \sqcup b = \mu (a \cup b)$ and $a \sqcap b = \mu c$, $c$ being the set of all pairs $\langle X_1 \cup Y_1, X_2 \cup Y_2 \rangle$, $\langle X_1, X_2 \rangle \in a$ and $\langle Y_1,Y_2 \rangle \in b$, which consist of disjoint sets. Here $\mu a$ denotes the set of all minimal, w.r.t. inclusion, elements of $a$. We prove that the lattice of normal forms over a set defined in this way is distributive and that $\emptyset$ is its minimal element.

Michal Muzalewski, Leslaw W. Szczerba. Ordered Rings -- Part I, Formalized Mathematics 2(2), pages 243-245, 1991. MML Identifier: O_RING_1. Summary: This series of papers is devoted to the notion of the ordered ring, and to one of its most important cases: the notion of the ordered field. It follows the results of \cite{SZMIELEW:1}. The idea of the notion of order in a ring is based on that of a positive cone, i.e. the set of positive elements. The positive cone has to contain at least the squares of all elements, and has to be closed under sum and product. Therefore the key notions of this theory are those of square, sum of squares, product of squares, etc., and finally of elements generated from squares by means of sums and products. Part I contains definitions of all those key notions and inclusions between them.

Michal Muzalewski, Leslaw W. Szczerba. Ordered Rings -- Part II, Formalized Mathematics 2(2), pages 247-249, 1991. MML Identifier: O_RING_2. Summary: This series of papers is devoted to the notion of the ordered ring, and to one of its most important cases: the notion of the ordered field. It follows the results of \cite{SZMIELEW:1}. The idea of the notion of order in a ring is based on that of a positive cone, i.e. the set of positive elements.
The positive cone has to contain at least the squares of all elements, and has to be closed under sum and product. Therefore the key notions of this theory are those of square, sum of squares, product of squares, etc., and finally of elements generated from squares by means of sums and products. Part II contains a classification of sums of such elements.

Michal Muzalewski, Leslaw W. Szczerba. Ordered Rings -- Part III, Formalized Mathematics 2(2), pages 251-253, 1991. MML Identifier: O_RING_3. Summary: This series of papers is devoted to the notion of the ordered ring, and to one of its most important cases: the notion of the ordered field. It follows the results of \cite{SZMIELEW:1}. The idea of the notion of order in a ring is based on that of a positive cone, i.e. the set of positive elements. The positive cone has to contain at least the squares of all elements, and has to be closed under sum and product. Therefore the key notions of this theory are those of square, sum of squares, product of squares, etc., and finally of elements generated from squares by means of sums and products. Part III contains a classification of products of such elements.

Michal Muzalewski, Wojciech Skaba. N-Tuples and Cartesian Products for n$=$5, Formalized Mathematics 2(2), pages 255-258, 1991. MML Identifier: MCART_2. Summary: This article defines ordered $n$-tuples, projections and Cartesian products for $n=5$. We prove many theorems concerning the basic properties of the $n$-tuples and Cartesian products that may be utilized in several further, more challenging applications. A few of these theorems are a straightforward consequence of the regularity axiom. The article originated as an upgrade of the article \cite{MCART_1.ABS}.

Michal Muzalewski, Wojciech Skaba. Ternary Fields, Formalized Mathematics 2(2), pages 259-261, 1991. MML Identifier: ALGSTR_3. Summary: This article contains part 3 of the set of papers concerning the theory of algebraic structures, based on the book \cite[pp.
13--15]{SZMIELEW:1} (pages 6--8 in the English edition).\par First the basic structure $\langle F, 0, 1, T\rangle$ is defined, where $T$ is a ternary operation on $F$ (three-argument operations were introduced in the article \cite{MULTOP_1.ABS}). Following this, the basic axioms of a ternary field are displayed, the mode is defined and its existence proved. The basic properties of a ternary field are also contemplated there.

Jozef Bialas. The $\sigma$-additive Measure Theory, Formalized Mathematics 2(2), pages 263-270, 1991. MML Identifier: MEASURE1. Summary: The article contains the definition and basic properties of a $\sigma$-additive, nonnegative measure, with values in $\overline{\Bbb R}$, the enlarged set of real numbers, where $\overline{\Bbb R}$ denotes the set $\overline{\Bbb R} = {\Bbb R} \cup \{-\infty,+\infty\}$ (see \cite{SIKORSKI:1}). We present definitions of a $\sigma$-field of sets, a $\sigma$-additive measure, measurable sets, measure zero sets and the basic theorems describing relationships between the notions mentioned above. The work is the third part of the series of articles concerning the Lebesgue measure theory.

Eugeniusz Kusak, Wojciech Leonczuk. Incidence Projective Space (a reduction theorem in a plane), Formalized Mathematics 2(2), pages 271-274, 1991. MML Identifier: PROJRED1. Summary: The article begins with basic facts concerning arbitrary projective spaces. Further we are concerned with Fano projective spaces (we prove that they have rank at least four). Finally we restrict ourselves to Desarguesian planes; we define the notion of perspectivity and we prove the reduction theorem for projectivities with concurrent axes.

Michal Muzalewski, Wojciech Skaba. Groups, Rings, Left- and Right-Modules, Formalized Mathematics 2(2), pages 275-278, 1991. MML Identifier: MOD_1. Summary: The notion of group was defined as a group structure introduced in the article \cite{VECTSP_1.ABS}.
The article contains the basic properties of groups, rings, and left- and right-modules over an associative ring.

Michal Muzalewski, Wojciech Skaba. Finite Sums of Vectors in Left Module over Associative Ring, Formalized Mathematics 2(2), pages 279-282, 1991. MML Identifier: LMOD_1.

Michal Muzalewski, Wojciech Skaba. Submodules and Cosets of Submodules in Left Module over Associative Ring, Formalized Mathematics 2(2), pages 283-287, 1991. MML Identifier: LMOD_2.

Michal Muzalewski, Wojciech Skaba. Operations on Submodules in Left Module over Associative Ring, Formalized Mathematics 2(2), pages 289-293, 1991. MML Identifier: LMOD_3.

Michal Muzalewski, Wojciech Skaba. Linear Combinations in Left Module over Associative Ring, Formalized Mathematics 2(2), pages 295-300, 1991. MML Identifier: LMOD_4.

Michal Muzalewski, Wojciech Skaba. Linear Independence in Left Module over Domain, Formalized Mathematics 2(2), pages 301-303, 1991. MML Identifier: LMOD_5. Summary: The notion of a submodule generated by a set of vectors and linear independence of a set of vectors. A few theorems originated as a generalization of the theorems from the article \cite{VECTSP_7.ABS}.

Jan Popiolek, Andrzej Trybulec. Calculus of Propositions, Formalized Mathematics 2(2), pages 305-307, 1991. MML Identifier: PROCAL_1. Summary: Continues the analysis of the classical language of first order (see \cite{QC_LANG1.ABS}, \cite{QC_LANG2.ABS}, \cite{CQC_LANG.ABS}, \cite{CQC_THE1.ABS}, \cite{LUKASI_1.ABS}). Three connectives: truth, negation and conjunction are primary (see \cite{QC_LANG1.ABS}). The others (alternative, implication and equivalence) are defined with respect to them (see \cite{QC_LANG2.ABS}). We prove some important tautologies of the calculus of propositions. Most of them are given as the axioms of the classical logical calculus (see \cite{GRZEG1}). In the last part of our article we give some basic rules of inference.

Agata Darmochwal. Calculus of Quantifiers.
Deduction Theorem, Formalized Mathematics 2(2), pages 309-312, 1991. MML Identifier: CQC_THE2. Summary: Some tautologies of the Classical Quantifier Calculus. The deduction theorem is also proved.
\begin{document} \title{Capture and release of a conditional state of a cavity QED system by quantum feedback.} \author{W. P. Smith${}^{1}$, J. E. Reiner${}^{1}$, L. A. Orozco${}^{1}$, S. Kuhr${}^{2}$, and H. M. Wiseman${}^{3}$} \address{${}^{1}$Dept. Physics and Astronomy, SUNY Stony Brook, Stony Brook NY 11794-3800, USA.\\ ${}^{2}$ Institut f{\"u}r Angewandte Physik, Universit{\"a}t Bonn, Wegelerstr. 8, D-53115 Bonn, Germany.\\ ${}^{3}$ Center for Quantum Dynamics, School of Science, Griffith University, Brisbane, Queensland 4111, Australia.} \date{\today} \maketitle \begin{abstract} Detection of a single photon escaping an optical cavity QED system prepares a non-classical state of the electromagnetic field. The evolution of the state can be modified by changing the drive of the cavity. For the appropriate feedback, the conditional state can be captured (stabilized) and then released. This is observed by a conditional intensity measurement that shows suppression of vacuum Rabi oscillations for the length of the feedback pulse and their subsequent return. \end{abstract} \pacs{42.50.Lc, 42.50.Dv,03.65.Ta,03.67.-a} \begin{multicols}{2} Feedback control of quantum systems was first studied about fifteen years ago \cite{YamImoMac86,HauYam86,Sha87}, in the field of quantum optics. In these approaches, the feedback could be understood in an essentially classical way, with quantum field theory entering only to dictate the magnitude of the fluctuations. This is possible if fluctuations are small compared to the mean fields being detected. More recently, a different approach to quantum optical feedback has been developed \cite{WisMil93b,Wis94a}, based on quantum trajectories \cite{Car93b,DumZolRit92,MolCasDal93}, which specify the stochastic evolution of a quantum state conditioned on continuous monitoring (such as by photodetection). This theory allows the treatment of feedback in the deep quantum regime, where quantum fluctuations are not small compared to the mean. 
It is also arguably the best way to approach feedback, as the conditioned state by definition comprises all of the knowledge of the experimenter on which feedback could be based \cite{DohJac99,Doh00}. So far, experiments in quantum feedback, such as Refs.~\cite{YamImoMac86,WalJak85b,TapRarSat88,MerHeiFab91,Tau95,Buc99}, have all been in the regime of small fluctuations \cite{mit}. Cavity QED is able to explore the opposite regime, where fluctuations in the conditional state are large. Furthermore, using the theory of quantum trajectories, Carmichael and coworkers \cite{carmichael00,foster00} showed that such conditional quantum fluctuations are intrinsically related to the production of squeezing and antibunching. In this letter we present experimental results for the application of feedback in this regime. Following a photodetection, the conditioned quantum state of the system is $|\psi_{\rm c}(\tau)\rangle$. Given our knowledge of this evolution, we can, for certain times $\tau$, change the parameters of the system dynamics so as to capture the system in that conditioned state. When the parameters are later restored to their usual values, the released system state resumes its interrupted evolution. This directly demonstrates both the reality of the conditioned state and its usefulness for quantum feedback. A cavity quantum electrodynamics (QED) system consists of a single mode of the electromagnetic field of a cavity interacting with a single two-level atom or a collection of $N$ such atoms \cite{berman94}. Microwave cavity QED systems have been used recently to prepare multiparticle entanglement \cite{rauschenbeutel00}, and to produce photon number states of the electromagnetic field \cite{varcoe00}. Operated at optical frequencies, cavity QED systems can now trap single atoms in the electric field of the cavity when its average occupation is about one photon \cite{hood00,pinkse00}.
The system size and dynamics are characterized by two dimensionless numbers: the saturation photon number $n_{0}$ and the single atom cooperativity $C_{1}$. They scale the influence of a single photon and of a single atom in cavity QED. These two numbers relate the reversible dipole coupling of a single atom with the cavity mode ($g$) to the irreversible coupling to the reservoirs through cavity ($\kappa$) and atomic radiative ($\gamma$) decays: $C_{1}=g^{2}/\kappa \gamma$ and $n_{0}=\gamma^{2}/3g^{2}$. The strong coupling regime of cavity QED requires $n_{0} \leq 1$ and $C_{1} \geq 1$. Strictly speaking, the coupling constant ($g$) is spatially dependent, and together with the $N$ moving atoms the system may be described by effective constants. With weak driving, the system can be accurately modelled as having either zero, one, or two excitations of the coupled normal modes of the field and the atoms. In this regime, photodetections are very infrequent and the state before a detection can be taken to be the steady state, which is almost pure: \begin{eqnarray} |\psi_{\rm ss}\rangle &=& |0,G\rangle\ + \lambda\left(|1,G\rangle - \frac{2g\sqrt{N}}{\gamma}|0,E\rangle\right)\nonumber\\ && +\, \lambda^2\left(\zeta_0\frac{1}{\sqrt{2}}|2,G\rangle - \theta_0\frac{2g\sqrt{N}}{\gamma}|1,E\rangle\right) + \cdots. \label{psiss} \end{eqnarray} \noindent Here $|n,G\rangle$ represents $n$ photons with all $(N)$ atoms in their ground state, and $|n,E\rangle$ represents $n$ photons with one atom in the excited state and the remaining $(N-1)$ in their ground state. The small parameter is $\lambda=\langle\hat a \rangle = \epsilon /[\kappa (1+C_1N)]$, which depends on the input driving field $\epsilon$, while $\zeta_0$ and $\theta_0$ are coefficients of order unity for the two-excitation component of the state that can give rise to a photon detection, and depend on $g$, $\kappa$, and $\gamma$ \cite{carmichael91,reiner01}.
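As a quick numerical illustration (ours, not from the paper), the two dimensionless scales can be evaluated from the experimental rates quoted below, $(g,\kappa,\gamma/2)/2\pi = (5.1,3.7,3.0)$ MHz, confirming that the system sits in the strong coupling regime:

```python
# Evaluate the cavity QED scale parameters C_1 = g^2/(kappa*gamma) and
# n_0 = gamma^2/(3 g^2) for the rates quoted in the paper:
# (g, kappa, gamma/2)/2pi = (5.1, 3.7, 3.0) MHz.
g, kappa, gamma = 5.1, 3.7, 6.0  # in units of 2*pi MHz; the 2*pi cancels

C1 = g**2 / (kappa * gamma)  # single-atom cooperativity
n0 = gamma**2 / (3 * g**2)   # saturation photon number

print(round(C1, 2), round(n0, 2))  # -> 1.17 0.46
```

Both strong-coupling conditions hold: $C_1 \approx 1.17 \geq 1$ and $n_0 \approx 0.46 \leq 1$.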
After the photodetection occurs, $|\psi_{\rm ss}\rangle$ collapses to $\hat a|\psi_{\rm ss}\rangle / \sqrt{{\langle \hat a^{\dagger} \hat a \rangle}_{\rm ss}}$. This evolves as the conditioned state: \begin{eqnarray} |\psi_{\rm c}(\tau)\rangle &=& |0,G\rangle\ + \lambda\left(\zeta(\tau)|1,G\rangle - \theta(\tau)\frac{2g\sqrt{N}}{\gamma}|0,E\rangle\right)\nonumber\\ && +\, O(\lambda^2) \label{psiconditioned} \end{eqnarray} This is different from the initial state because $\zeta$ (the ``field'' evolution) and $\theta$ (the ``atomic polarization'' evolution) oscillate coherently at the vacuum Rabi frequency ($g\sqrt{N}$) over time as the system re-equilibrates, exchanging energy between the atomic polarization and the cavity field \cite{carmichael91,reiner01}. If we choose a time $\tau=T$ for Eq.~(\ref{psiconditioned}) such that $\zeta(T)=\theta(T)$, then to order $\lambda$ we obtain \begin{equation} |\psi_{\rm c}(T)\rangle \simeq |0,G\rangle\ + \lambda'\left(|1,G\rangle - \frac{2g\sqrt{N}}{\gamma}|0,E\rangle \right) . \label{condition1} \end{equation} This is of the form of $|\psi_{\rm ss}\rangle$ in Eq.~(\ref{psiss}) but with a different mean field $\lambda'=\zeta(T)\lambda$. This conditional state can be stabilized if, at time $T$, we change the driving amplitude by a factor $\zeta(T)$. Given the almost $90^{\circ}$ out-of-phase oscillations between the field $(\zeta)$ and the atomic polarization $(\theta)$ \cite{reiner01}, the time $T$ is close to the time when the field is crossing zero. Conditional quantum states such as Eq.~(\ref{psiconditioned}) can be measured using high-order quantum optical correlations \cite{foster00,mandel95}. When the light transmitted through the cavity (with annihilation operator $\hat{a}$) is split, the photons enter two detectors.
The normalized correlation function of the two photocurrents is the time- and normally ordered average \begin{eqnarray} g^{(2)}(\tau) &=& \frac{\langle \hat a^{\dagger} (t) \hat a^{\dagger} (t+\tau) \hat a (t + \tau) \hat a (t) \rangle_{\rm ss}}{\langle \hat a^{\dagger} (t) \hat a (t) \rangle^{2}_{\rm ss}} \nonumber \\ &=& \frac{\langle\hat n (t+\tau) \rangle_{\rm c} } {\langle \hat n(t)\rangle_{\rm ss}}, \label{sec} \end{eqnarray} where $\hat{n}=\hat{a}^{\dagger}\hat{a}$, and c means ``conditioned on a detection at time $t$ in steady state''. If a detection at one detector is used to trigger a feedback pulse on the system, the correlation function will no longer be time symmetric. However, for $\tau > 0$ the expression (\ref{sec}) still measures the conditional state in the presence of feedback: \begin{equation} g^{(2)}(\tau) \simeq \frac{|\langle 1,G|\psi_{\rm c}(\tau) \rangle|^2}{|\langle 1,G|\psi_{\rm ss}\rangle|^2}=[\zeta(\tau)]^2 . \label{eqg2cond} \end{equation} Fig.~\ref{figure1} shows the conditional evolution of the state of the cavity QED system, as given by Eq.~(\ref{eqg2cond}). We start with the quantum theory valid for $N$ two-level atoms identically coupled to the cavity in the weak-field regime \cite{carmichael91}. We find $g_{\rm {eff}} < g$ and $N_{{\rm eff}}$ \cite{rempe91,carmichael99} using our experimentally determined values for $g^{(2)}(0)$ such that $g\sqrt{N}=g_{\rm {eff}}\sqrt{N_{{\rm eff}}}$. All broadening effects are incorporated by the modification of the atomic decay rate, $\gamma \rightarrow \gamma'$. We numerically solve the time evolution with the driving step incorporated. This simplified approach agrees with our previous work for $g^{(2)}(\tau)$ \cite{fosterpra00}. The dashed line is the free evolution of the system, and shows the time symmetry of the correlation function. The application of a feedback pulse at time $T$ alters the evolution of the system.
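The free conditioned evolution underlying the dashed curve can be sketched in a few lines of code. The following is a minimal model of ours, not the paper's full calculation: linearized weak-field equations in which the deviations of the field amplitude $\zeta$ and the polarization amplitude $\theta$ from their steady-state value of 1 are coupled at $g\sqrt{N}$ and damped at $\kappa$ and $\gamma'/2$ (our sign conventions); the post-detection initial values, which the paper derives from the two-excitation amplitudes, are set to illustrative numbers here.

```python
import math

# Rates from the paper, converted to angular frequencies (rad/us):
# g*sqrt(N)/2pi = 37.3 MHz, gamma'/2pi = 9.1 MHz, kappa/2pi = 3.7 MHz.
two_pi = 2 * math.pi
gN, gamma_p, kappa = 37.3 * two_pi, 9.1 * two_pi, 3.7 * two_pi

dt, n_steps = 1e-4, 20000     # Euler step (us) and step count (2 us total)
zeta, theta = 2.0, 1.0        # illustrative post-detection values (not the paper's)
g2 = []
for _ in range(n_steps):
    # damped, coupled oscillation of the deviations from steady state
    dzeta = -kappa * (zeta - 1.0) - gN * (theta - 1.0)
    dtheta = -(gamma_p / 2.0) * (theta - 1.0) + gN * (zeta - 1.0)
    zeta += dzeta * dt
    theta += dtheta * dt
    g2.append(zeta**2)        # g2(tau) ~ [zeta(tau)]^2, as in the text

# g2 rings at roughly g*sqrt(N) (vacuum Rabi oscillation), damps at
# about (kappa + gamma'/2)/2, and relaxes back to 1.
```

In this toy picture, a step change of the drive applied at a time $T$ where $\zeta(T)=\theta(T)$ would leave the rescaled system in a steady state, which is the capture effect discussed in the text.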
The continuous line shows the evolution when the step change in the driving intensity $\epsilon$ satisfies the conditions necessary to reach a new steady state described by Eq. (\ref{condition1}). The parameters of the calculation are those of our experiment: $g\sqrt{N}/2\pi=37.3$ MHz, $\gamma'/2\pi=9.1$ MHz and $\kappa/2\pi=3.7$ MHz. The change in the intensity is small (0.2 \%) and here we assume that its rise and fall are instantaneous. The new state reached by the system after the change of driving intensity no longer shows the vacuum Rabi oscillations and instead settles to a steady-state value slightly lower than the original one. The duration of the pulse that changes the steady state is finite and our model shows the reappearance of the oscillation delayed by the length of the pulse. \begin{figure}\label{figure1} \end{figure} Our cavity QED apparatus, described in detail in Ref. \cite{fosterpra00}, consists of the cavity, the atomic beam, an excitation laser, and the detector system. Two high-reflectivity curved mirrors (input transmission mirror 10 ppm, output transmission mirror 285 ppm, and separation $l=880\mu$m) form the optical cavity (waist of the TEM$_{00}$ mode $w_0=34\mu$m). A Pound-Drever-Hall stabilization technique keeps the cavity locked to the appropriate atomic resonance. An effusive oven produces a thermal (440 K) beam of Rb atoms with an angular spread of 2.8 mrad at the cavity mode. A laser beam intersects the atomic beam before the atoms enter the cavity in a region with a 2.5 Gauss uniform magnetic field. It optically pumps all the $^{85}$Rb atoms of the $F=3$ ground state to the magnetic sublevel $F=3, m_F=3$. The three rates that characterize our cavity QED system are $(g,\kappa,\gamma/2)/2\pi = (5.1,3.7,3.0)$ MHz. \begin{figure} \caption{\narrowtext Simplified diagram of the experimental setup.
The output of the cavity QED system passes through a beam splitter (entrance of an intensity correlator) such that the detection of a photon at the ``start'' avalanche photodiode (APD) also triggers a change in the driving intensity through a pulse that drives an electro-optical modulator (EOM) in front of a polarizer. A histogram of the delays between the ``start'' and ``stop'' gives the conditional evolution of the intensity.} \label{figure2} \end{figure} Fig. \ref{figure2} shows a schematic of our apparatus. Light from a Ti:sapphire laser, locked to the $5S_{1/2}, F=3 \rightarrow 5P_{3/2}, F=4$ transition of $^{85}$Rb at 780 nm, drives the cavity QED system. The signal escaping the cavity creates photodetections at the ``Start'' and ``Stop'' avalanche photodiodes (APD). The output pulse of the ``Start'' detector is split: one part is sent to the start channel of the correlator [time to digital converter (TDC) with 0.5 ns per bin, histogramming memory, and computer] while the other goes to a variable time delay, and after pulse shaping and lengthening, drives an electro-optical modulator (EOM) in front of a polarizer to produce a pulse of 8 ns risetime and 120 ns length in the driving intensity of the cavity. The delay between the detection of a photon at the ``Start'' APD and the arrival of the pulse at the cavity can be as short as 45 ns. The other APD sends its pulses to the correlator to stop the TDC that measures the time interval between the two events. We operate the cavity QED system in a non-classical regime where the size of the vacuum Rabi oscillations is large enough to permit their rapid identification during data taking. We begin by measuring the antibunched second-order correlation function of the intensity escaping our cavity QED system. We then apply the step change in the driving intensity at time $T$ to fulfill the conditions of Eq. (\ref{condition1}) and obtain a new steady state. Fig.
\ref{figure3} shows measurements of the correlation function in the absence (i) and presence (ii, iii) of feedback. Traces i and ii have the same oscillation frequency while for trace iii we have a smaller number of atoms. $\tau^*$ marks the position where the oscillation we want to suppress reaches a maximum. The steady state for $\tau$ large corresponds to an intracavity intensity of $n/n_0=0.07$. Fig. \ref{figure3} ii shows the correlation function with step-down feedback ($-2.6$ \%) for 120 ns, beginning at $\tau=T=57$ ns, when the oscillation crosses unity and is growing. The oscillation that has a maximum in trace i at the point marked by $\tau^*$ has disappeared, the steady state is lower than that marked by the dashed line, and the oscillation reappears after the pulse is turned off with approximately the same amplitude as the suppressed one. Trace iii shows step-up feedback ($+3.9$ \%) at $T=46$ ns when the phase is opposite from trace ii. \begin{figure} \caption{\narrowtext Measured intensity correlation function with the feedback step applied during the shaded region: i) no feedback ($g\sqrt{N}/2\pi=37$ MHz), ii) suppression with a step-down change of 2.6 \% ($g\sqrt{N}/2\pi=37$ MHz), iii) suppression with a step-up change of 3.9\% ($g\sqrt{N}/2\pi=31$ MHz). The oscillation of the system continues with the same phase and amplitude once the step is off. Note that the time $T$ for the beginning of the feedback in ii and iii is different, as indicated by the position of the shaded region. The data have been binned in 2.5 ns points joined with a line.} \label{figure3} \end{figure} Reversing the sign of the step produces an enhancement of the oscillations. If the time $T$ for the application of the pulse is not correct, it is not possible to achieve good suppression. There is qualitative agreement between traces i and ii and those of the theory in Fig. \ref{figure1}.
They show the suppression and the delayed return of the oscillation caused by the application of a feedback pulse to the driving intensity. Although the theoretical model does not include all the experimental details that give rise to broadening, the main features of the response are clearly explained. \begin{figure}\label{figure4} \end{figure} We have followed the amplitude of the oscillation immediately after we apply the feedback pulse, at the time $\tau^*$ defined in Fig. \ref{figure3}, to make a quantitative comparison with theory. Fig. \ref{figure4} shows the results for a series of measurements that include steps up (positive) and steps down (negative). The theory (dashed line) incorporates the measured shape of the pulse (at the point $-4.6\%$); all sources of dephasing present in the system are modelled by the polarization decay rate $\gamma'/2\pi= 9.1$ MHz. The plot shows both enhancement and suppression with quantitative agreement. The quantum feedback in this system is triggered by a fluctuation (detection of a photon) that, because of the strong coupling, is large enough to significantly modify the system. This detection prepares the system in an evolving conditional state. We then change the drive of the system and are able to freeze its evolution into a new time-independent steady state. The suppressed oscillations return once the pulse turns off, with the same phase and amplitude information. This sort of quantum feedback is a novel way to manipulate the fragile conditional states that come from strongly coupled systems. We would like to thank J. Gripp and J. Wang for their interest and help with this project. This work has been supported by the NSF, NIST, DAAD, and the Australian Research Council. \begin{references} \bibitem{YamImoMac86} Y. Yamamoto, N. Imoto and S. Machida, Phys. Rev. A {\bf 33}, 3243 (1986). \bibitem{HauYam86} H. A. Haus and Y. Yamamoto, Phys. Rev. A {\bf 34}, 270 (1986). \bibitem{Sha87} J. H. Shapiro, G. Saplakoglu, S.-T. Ho, P.
Kumar, B. E. A. Saleh, M. C. Teich, J. Opt. Soc. Am. B {\bf 4}, 1604 (1987). \bibitem{WisMil93b} H. M. Wiseman and G. J. Milburn, Phys. Rev. Lett. {\bf 70}, 548 (1993). \bibitem{Wis94a} H. M. Wiseman, Phys. Rev. A {\bf 49}, 2133 (1994). \bibitem{Car93b} H.J. Carmichael, {\em An Open Systems Approach to Quantum Optics} (Springer-Verlag, Berlin, 1993). \bibitem{DumZolRit92} R. Dum, P. Zoller and H. Ritsch, Phys. Rev. A {\bf 45}, 4879 (1992). \bibitem{MolCasDal93} K. M\o lmer, Y. Castin, and J. Dalibard, J. Opt. Soc. Am. B {\bf 10}, 524 (1993). \bibitem{DohJac99} A.C. Doherty and K. Jacobs, Phys. Rev. A {\bf 60}, 2700 (1999). \bibitem{Doh00} A. C. Doherty, S. Habib, K. Jacobs, H. Mabuchi, S. M. Tan, Phys. Rev. A {\bf 62}, 012105 (2000). \bibitem{WalJak85b} J. G.~Walker and E.~Jakeman, Optica-Acta. {\bf 32}, 1303 (1985) \bibitem{TapRarSat88} P. R. Tapster, J. G. Rarity and J. S. Satchell, Phys. Rev. A {\bf 37}, 2963 (1988). \bibitem{MerHeiFab91} J. Mertz, A. Heidmann and C. Fabre, Phys. Rev. A {\bf 44}, 3329 (1991). \bibitem{Tau95} M. S. Taubman, H. Wiseman, D. E. McClelland, H.-A. Bachor, J. Opt. Soc. Am. B {\bf 12}, 1792 (1995). \bibitem{Buc99} B. C. Buchler, M. B. Gray, D. A. Shaddock, T. C. Ralph, D. E. McClelland, Opt. Lett. {\bf 24} 259 (1999). \bibitem{mit} An exception is the following, which is an experiment of a quite different nature: R. J. Nelson, Y. Weinstein, D. Cory, and S. Lloyd, Phys. Rev. Lett. {\bf 85}, 3045 (2000). \bibitem{carmichael00} H. J. Carmichael, H. Castro Beltran, G. T. Foster, and L. A. Orozco, Phys. Rev. Lett. {\bf 85}, 1855 (2000). \bibitem{foster00} G. T. Foster, L. A. Orozco, H. M. Castro-Beltran, and H. J. Carmichael, Phys. Rev. Lett. {\bf 85}, 3149 (2000). \bibitem{berman94} P. Berman, ed., {\it Cavity Quantum Electrodynamics}, Supplement 2 of Advances in Atomic, Molecular and Optical Physics series (Academic Press, Boston, 1994). \bibitem{rauschenbeutel00} A. Rauschenbeutel, G. Nogues, S. Osnaghi, P. Bertet, M. Brune, J. M. 
Raimond, and S. Haroche, Science {\bf 288}, 2024 (2000). \bibitem{varcoe00} B. T. H. Varcoe, S. Brattke, M. Weidinger, and H. Walther, Nature {\bf 403}, 743 (2000). \bibitem{hood00} C. J. Hood, R. W. Lynn, A. C. Doherty, A. S. Parkins, and H. J. Kimble, Science {\bf 287}, 1447 (2000); A. C. Doherty, T. W. Lynn, C. J. Hood, and H. J. Kimble, Phys. Rev. A {\bf 63}, 013401 (2001). \bibitem{pinkse00} P. W. H. Pinkse, T. Fischer, P. Maunz, and G. Rempe, Nature {\bf 404}, 365 (2000). \bibitem {carmichael91} H. J. Carmichael, R. J. Brecha, and P. R. Rice, Opt. Comm. {\bf 82}, 73 (1991). \bibitem{reiner01} J. E. Reiner, W. P. Smith, L. A. Orozco, H. J. Carmichael, and P. R. Rice, J. Opt. Soc. Am. B {\bf 18}, 1911 (2001). \bibitem{mandel95} L. Mandel, E. Wolf, {\it Optical Coherence and Quantum Optics}, (Cambridge University Press, New York, 1995). \bibitem{fosterpra00} G. T. Foster, S. L. Mielke, and L. A. Orozco, Phys. Rev. A {\bf 61}, 53821 (2000). \bibitem{rempe91} G. Rempe, R. J. Thompson, R. J. Brecha, W. D. Lee, and H. J. Kimble, Phys. Rev. Lett. {\bf 67}, 1727 (1991). \bibitem{carmichael99}H. J. Carmichael, B. C. Sanders, Phys. Rev. A {\bf 60}, 2497 (1999). \end{references} \end{multicols} \end{document}
19th International Conference on Bioinformatics 2020 (InCoB2020) Extended mining of the oil biosynthesis pathway in biofuel plant Jatropha curcas by combined analysis of transcriptome and gene interactome data Xuan Zhang, Jing Li, Bang-Zhen Pan, Wen Chen, Maosheng Chen, Mingyong Tang, Zeng-Fu Xu & Changning Liu Jatropha curcas L. is an important non-edible oilseed crop with a promising future in biodiesel production. However, little is known about the molecular biology of oil biosynthesis in this plant when compared with other established oilseed crops, resulting in the absence of agronomically improved varieties of Jatropha. To extensively discover potentially novel genes and pathways associated with oil biosynthesis in J. curcas, a strategy beyond homology alignment is needed. In this study, we proposed a multi-step computational framework that integrates transcriptome and gene interactome data to predict functional pathways in non-model organisms in an extended process, and applied it to study the oil biosynthesis pathway in J. curcas. Using homologous mapping against Arabidopsis and transcriptome profile analysis, we first constructed protein–protein interaction (PPI) and co-expression networks in J. curcas. Then, using the homologs of Arabidopsis oil-biosynthesis-related genes as seeds, we respectively applied two algorithm models, random walk with restart (RWR) in the PPI network and negative binomial distribution (NBD) in the co-expression network, to further extend oil-biosynthesis-related pathways and genes in J. curcas. Finally, using the k-nearest neighbors (KNN) algorithm, the predicted genes were further classified into different sub-pathways according to their possible functional roles. Our method provides a highly efficient way of mining the extended oil biosynthesis pathway of J. curcas. Overall, 27 novel oil-biosynthesis-related gene candidates were predicted and further assigned to 5 sub-pathways.
These findings can improve our understanding of the oil biosynthesis pathway of J. curcas and pave the way for subsequent J. curcas breeding applications. Jatropha curcas L., also called "physic nut" (a member of the Euphorbiaceae family), is a small perennial tree or large shrub whose metabolites and medicinal components have long been used to manufacture soap and medicinal materials [1, 2]. Because of its extraordinary tolerance of environmental stresses, such as drought and infertility, J. curcas can grow well in poor conditions, and, being a non-edible crop, it poses no threat to food security. In recent years, J. curcas has attracted increasing attention for its high potential in biofuel plantations. The oil content of J. curcas is around 30–45% with a high percentage of monounsaturated oleic and polyunsaturated linoleic acid [3], so that J. curcas oil can be used directly as diesel without processing. In addition, the filter-press cake from seeds is rich in protein (60–63%) as compared with soybean (45%) [4], making it a viable resource of various amino acids. However, there are still many challenges that limit the commercial potential of J. curcas. First of all, the seeds of J. curcas contain high levels of polyunsaturated fatty acids, which negatively impact the biofuel quality. Therefore, optimizing oil composition would facilitate the improvement of the quality of jatropha biodiesel. For instance, the reduction of unsaturated fatty acids would increase oxidative stability, the decrease of free fatty acids could prevent soap formation and increase the yield of biodiesel, and reducing 18-carbon fatty acids could lower the viscosity for better atomization of biodiesel [5]. Meanwhile, effectively increasing oil accumulation is another critical issue in oil plant research, commonly linked to the mechanism of lipid metabolism.
However, little is known about the molecular biology of this plant as compared with other well-established oilseed crops. In addition, low seed production, uneven fruit maturation, and lack of high-yield genotypes limit the usefulness of this crop [6]. To make it commercially viable, new cultivars need to be developed. Genetic engineering methods could play a major role in J. curcas crop improvement, because the scope for classical breeding is limited by the long breeding cycle. For this purpose, functional genomics for understanding metabolic pathways and enabling genetic improvement is urgently needed in J. curcas. Driven by the development of sequencing technology, large-scale molecular biological data have been generated. They comprise the relatively static data on intermolecular physical interactions, such as PPI data, as well as the quite dynamic data collected for studying gene activation during development, such as gene expression profiles. Network science is gradually altering our view of cell biology by offering unforeseen possibilities to understand the internal organization of a cell [7]. Co-expression network analysis is a powerful method to extract functional modules from co-expressed genes, analyze their biological meanings, and identify important novel genes [8]. The PPI network likewise represents strong, direct interactions. Because proteins play primary roles in biological function, their interactions determine the molecular and cellular mechanisms that control healthy and diseased states in organisms. Combining transcriptome and gene interactome data has been successfully applied to the efficient mining of key pathways [9, 10]. Despite much progress in genomic and transcriptomic studies of J. curcas, especially gene expression profiles that can provide a fundamental molecular understanding of fatty acid biosynthesis, the regulatory mechanisms controlling seed development and oil biosynthesis in J. curcas remain unclear.
In general, the process of oil biosynthesis shares similar elements among oilseed plants; therefore, the identification of oil-biosynthesis-related genes is mostly based on BLAST hits or domain homology methods. However, J. curcas seeds differ greatly from those of other oilseed plants in terms of their oil content and fatty acid composition. Therefore, a systematic identification and analysis of the specific oil-biosynthesis-related genes of J. curcas is needed. In this study, we described a multi-step computational framework for extensively mining novel oil-biosynthesis-related genes and pathways in J. curcas using transcriptome and gene interactome data. First, PPI and co-expression networks in J. curcas were constructed using homologous mapping against Arabidopsis and transcriptome profile analysis, and further validated by network structure parameters and GO annotation consistency. We then applied the RWR algorithm on the PPI network and the NBD algorithm on the co-expression network, respectively, and predicted the oil-biosynthesis-related genes in J. curcas using the homologs of Arabidopsis genes as seeds. As a result, 27 novel oil-biosynthesis-related gene candidates were predicted. Consistent with other studies, most of the predictions exhibited high expression levels during seed development. Finally, using the KNN algorithm, these genes were assigned to 5 sub-pathways, such as fatty acid synthesis and triacylglycerol biosynthesis. These results show that our proposed multi-step computational framework is a highly efficient way to mine functional pathways in non-model organisms, and these findings can improve understanding of the oil biosynthesis pathway of J. curcas as well as pave the way for subsequent J. curcas breeding applications.
The workflow of the key pathway extended mining algorithm Here, we designed a multi-step computational framework that integrates transcriptome and gene interactome data to mine functional pathways in non-model organisms in an extended process. The framework mainly includes three parts: data collection, gene prediction, and sub-pathway assignment (Fig. 1). In the data collection part, the known oil-biosynthesis-related genes were collected from the experimentally verified oil metabolism pathways in the model species. Gene expression data were obtained from high-throughput gene expression profiling technologies such as RNA-seq or microarray. Another widely used source of functional linkage data is PPI, which can be collected from the STRING database [11]. In the gene prediction part, we first constructed PPI and co-expression networks in J. curcas. The reference PPI network was derived from highly reliable Arabidopsis thaliana data. We inferred the PPI of J. curcas based on a homologous-group-based method. The gene co-expression was measured by the Spearman or Pearson correlation coefficients based on RNA-seq or microarray expression profiles [12, 13]. As our expression profile was RNA-seq data, the Spearman correlation was selected to generate the association matrix. Then, according to the different properties of the networks, we respectively applied two algorithm models, RWR in the PPI network and NBD in the co-expression network, to predict oil-biosynthesis-related pathways and genes in J. curcas. In the sub-pathway assignment part, we further classified the predicted genes into different sub-pathways according to their possible functional roles. The Euclidean distance was used to measure the distances between a candidate and all known oil-biosynthesis-related genes. Then, the KNN voting method is used to assign each predicted gene to the corresponding sub-pathway. Data collection and network construction Oil-biosynthesis-related genes in J.
curcas To obtain the whole picture of the oil synthesis pathway, we downloaded 132 Arabidopsis oil synthesis genes from ARALIP (Additional file 1). According to ARALIP, Arabidopsis thaliana oil-biosynthesis-related genes were divided into 5 sub-pathways: 40 in Fatty Acid Synthesis, 7 in Fatty Acid Elongation & Desaturation & Export From Plastid, 6 in Lipid Trafficking, 66 in Triacylglycerol Biosynthesis, and 23 in Triacylglycerol & Fatty Acid Degradation. We observed that some pathways overlapped with others. Through a homology-based method, 105 oil-biosynthesis-related genes were identified as known oil metabolism genes in J. curcas (Additional file 2): 30 in Fatty Acid Synthesis, 10 in Fatty Acid Elongation & Desaturation & Export From Plastid, 6 in Lipid Trafficking, 45 in Triacylglycerol Biosynthesis, and 28 in Triacylglycerol & Fatty Acid Degradation. Figure 2a shows that the known oil metabolism genes of J. curcas account for 79.5% (105/132) of the Arabidopsis oil metabolism genes. Fatty Acid Synthesis and Triacylglycerol Biosynthesis related genes in J. curcas were fewer than in Arabidopsis (75% and 68.18%) while the opposite situation was observed in Fatty Acid Elongation & Desaturation & Export From Plastid and Triacylglycerol & Fatty Acid Degradation (142.86% and 121.74%). Moreover, for the Lipid Trafficking sub-pathway, the two species have the same gene number. Detailed statistics of the gene numbers in each sub-pathway of the two species can be found in Additional file 3. These results indicate that the core lipid metabolic pathways in the two species are carried out by a comparable number of orthologous proteins. However, the inconsistent gene numbers in some pathways also indicate differences in the oil synthesis pathways between J. curcas and Arabidopsis thaliana. Data collection and network construction. a Comparison of the numbers of oil-biosynthesis-related homologous genes between Arabidopsis thaliana and Jatropha curcas in different pathways. b The PPI network of J.
curcas conforms to a power-law distribution. c The change of the correlation coefficient threshold and the corresponding GO consistency and number of genes with GO annotation in the co-expression network. d GO consistency analysis of PPI, co-expression, and random networks. e Comparison of the numbers of connections between known oil-biosynthesis-related genes in the co-expression, PPI, and random networks Construction of the protein–protein network There are 22,446 coding genes in TAIR (version 10), of which 14,051 genes have 15,936 homologs in the J. curcas genome identified by InParanoid v4.1 (default parameters, see method). We have retrieved a very reliable Arabidopsis PPI network from the literature and databases, giving a total of 17,894 Arabidopsis genes and 252,401 interactions. Through the homology-group-based method, we finally produced the PPI network of J. curcas, which contains 9602 nodes and 118,839 edges. Of the 105 oil-biosynthesis-related genes in J. curcas, 86 are in the PPI network while 19 are not. We next analyzed the network topological characteristics of the J. curcas PPI network. The node degree distribution follows a power law (Fig. 2b). The scale-free R2 value is 0.89 and the scale-free gamma is 1.52. More detailed network topological statistics can be found in Additional file 4. Construction of the co-expression network There are 25,297 genes and 114 samples in the J. curcas expression profile. To construct a co-expression network, a suitable Spearman's correlation coefficient (SCC) cut-off value is needed. Figure 2c shows a negative correlation between the number of genes with GO annotation and the SCC cut-off. At about 0.6, the number of GO-annotated genes in the network begins to drop rapidly. We need to keep the functional genes in the network as much as possible. Our results show that 102 (97%), 91 (86%), 53 (50%), and 10 (9%) functional genes were retained using SCC cutoffs of 0.6, 0.7, 0.8 and 0.9 on the co-expression network, respectively (Additional file 5).
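The co-expression construction above (Spearman correlation with an SCC cut-off of 0.6) can be sketched as follows. The three-gene toy expression matrix is our own illustration, and ties are ignored in the rank computation (Spearman correlation is Pearson correlation of the ranks):

```python
import numpy as np

def spearman_matrix(expr):
    """Spearman correlation matrix for a genes x samples array.

    Computed as the Pearson correlation of the per-gene ranks
    (ties are ignored in this sketch)."""
    ranks = expr.argsort(axis=1).argsort(axis=1).astype(float)
    return np.corrcoef(ranks)

def coexpression_edges(expr, gene_ids, scc_cutoff=0.6):
    """Keep gene pairs with |SCC| >= cutoff (0.6, as chosen in the text)."""
    rho = spearman_matrix(expr)
    n = len(gene_ids)
    return [(gene_ids[i], gene_ids[j], rho[i, j])
            for i in range(n) for j in range(i + 1, n)
            if abs(rho[i, j]) >= scc_cutoff]

# Toy data: genes A and B rise together across 20 samples; C is noise.
rng = np.random.default_rng(1)
trend = np.linspace(0.0, 1.0, 20)
expr = np.vstack([trend + 0.01 * rng.normal(size=20),
                  trend + 0.01 * rng.normal(size=20),
                  rng.normal(size=20)])
edges = coexpression_edges(expr, ["A", "B", "C"])
```

At the real scale (25,297 genes) one would compute the rank matrix once and threshold the full correlation matrix rather than looping over pairs in Python.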
So, the SCC cut-off value of 0.6 was selected to screen significant co-expression correlations from the large-scale expression data sets. Our final co-expression network consists of 22,749 nodes and 19,739,995 edges. The scale-free R2 value is 0.59 and the scale-free gamma is 0.60. More detailed network topological statistics can be found in Additional file 6. From the above data, it is clear that the co-expression network includes more genes than the PPI network, but also more noise. Network validation To verify the reliability of our networks, we used the GO consistency test based on GO enrichment analysis [14, 15]. As can be seen in Fig. 2d, both the PPI and co-expression networks have much higher GO consistency values than random networks. The PPI network reached 0.65, followed by the co-expression network at 0.22 and the random network at 0.17 (Fig. 2d and Additional file 7). We note that the GO consistency value in the co-expression network is positively correlated with the correlation coefficient cutoff value. This indicates that GO consistency can be used as a standard to measure co-expression network reliability (Fig. 2c). We also checked whether the known oil-biosynthesis-related genes are more closely connected than randomly selected nodes in the PPI and co-expression networks. Figure 2e shows that the number of interactions among known oil-biosynthesis-related genes is much larger than in the random set in both the co-expression network and the PPI network (308 vs 275.58 and 58 vs 5.8, P value 0.02 and 0, respectively). The detailed data can be found in Additional file 8. Prediction of oil-biosynthesis-related genes and pathways of J. curcas in PPI and co-expression networks Because of the different topological characteristics of the co-expression network and the PPI network, two different algorithms, NBD and RWR, were applied. We used leave-one-out cross-validation to evaluate the accuracy of our methods.
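The RWR model used on the PPI network is the standard iteration p ← (1−r)·W·p + r·p0, where W is the column-normalized adjacency matrix and p0 is uniform over the seed genes; nodes are then ranked by their stationary probability. The five-node toy graph and the restart probability r = 0.7 below are our assumptions for illustration:

```python
import numpy as np

def random_walk_with_restart(adj, seeds, r=0.7, tol=1e-10):
    """Rank nodes by proximity to seed nodes via RWR.

    adj: symmetric adjacency matrix; seeds: indices of known genes;
    r: restart probability (0.7 is a common choice, our assumption).
    """
    W = adj / adj.sum(axis=0, keepdims=True)   # column-normalize
    p0 = np.zeros(adj.shape[0])
    p0[seeds] = 1.0 / len(seeds)
    p = p0.copy()
    while True:
        p_next = (1 - r) * W @ p + r * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy graph: nodes 0, 1, 2 form a triangle (0 and 1 are seeds);
# node 3 hangs off node 2, node 4 hangs off node 3.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
p = random_walk_with_restart(adj, seeds=[0, 1])
```

Nodes topologically closer to the seed set receive higher probability, which is the property exploited to rank candidate oil-biosynthesis-related genes.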
The average area under the ROC (receiver operating characteristic) curve (AUC) reached 0.83 for the RWR algorithm on the PPI network (Fig. 3a). On the other hand, a 0.69 AUC score was obtained by the NBD method on the co-expression network (Fig. 3b). As the value of the SCC cutoff was chosen more strictly, the AUC results were correspondingly higher (Additional file 5). Prediction of oil-biosynthesis-related genes in PPI and co-expression networks. a The ROC curve of the RWR algorithm on the PPI network by leave-one-out cross-validation. b The ROC curve of the negative binomial distribution method on the co-expression network by leave-one-out cross-validation. c Predicted oil-biosynthesis-related gene network; green node: known oil-biosynthesis-related genes; red node: oil-biosynthesis-related candidate genes predicted by the negative binomial distribution algorithm on the co-expression network; blue node: oil-biosynthesis-related candidate genes predicted by the RWR algorithm on the PPI network; brown edge: co-expression; pink edge: PPI; red edge: both co-expression and PPI Next, we predicted oil-biosynthesis-related genes by the RWR and NBD methods. Of the 9602 genes in the PPI network, 86 are known to be oil-biosynthesis-related and 9516 are unknown. Using the RWR probability P > 0.001 as the threshold, we selected the top 14 candidate genes that are most closely linked to the known oil-biosynthesis-related genes (Additional file 9). Among them, gene JCDBG19737 (mtACP2), which ranks first, is the most attractive. JCDBG19737 encodes a member of the mitochondrial acyl carrier protein (ACP) family. As part of the mitochondrial matrix, it is likely to be involved in fatty acid or lipoic acid biogenesis. Although JCDBG19737 shows low homology to known Arabidopsis oil-biosynthesis-related genes, the RWR algorithm shows that it is likely to interact directly with known oil-biosynthesis-related genes in the PPI network of J. curcas.
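The AUC values quoted for these leave-one-out cross-validations can be computed directly from how held-out known genes rank against background genes: the AUC equals the probability that a randomly chosen positive outscores a randomly chosen negative (the Mann–Whitney form). A self-contained sketch with made-up scores (not values from the study):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC = probability that a positive outranks a negative,
    counting ties as half a win (Mann-Whitney form)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative scores: held-out known genes (positives) vs
# background genes (negatives), e.g. RWR probabilities.
pos = [0.9, 0.8, 0.4]
neg = [0.7, 0.3, 0.2, 0.1]
auc = auc_from_scores(pos, neg)   # 11 of 12 pairs correctly ordered
```

In the actual evaluation each known gene is held out in turn, re-scored by RWR (or NBD), and its rank among all unknown genes contributes one point to the ROC curve.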
Another example is gene JCDBG21654 (TRX-M1, TRXm2), which encodes m-type thioredoxin (Trx-m1), a redox-activated co-chaperone localized in the chloroplast stroma. Since the important steps of oil synthesis take place in the plastid, this may suggest that JCDBG21654 is an important regulatory gene. In the co-expression network, we predicted the candidate genes related to oil biosynthesis by calculating the probability of each functionally unannotated gene connecting with the known oil-biosynthesis-related genes using the NBD method. As a result, 13 oil-biosynthesis-related candidate genes were predicted using p value < 0.01 as a cutoff (Additional file 9). The gene annotations indicate that they participate in different pathways; for example, JCDBG23541 is a cytochrome P450 78A7-like gene, and JCDBG13536 is a pseudogene. The known oil-biosynthesis-related genes and the genes predicted by the RWR and NBD methods together constitute an oil-biosynthesis-related gene network of 122 genes and 659 connections (Fig. 3c). The extended oil-biosynthesis-related pathway of J. curcas Next, we studied the extended oil pathway of J. curcas. The GO enrichment analysis shows that the most enriched GO terms are highly related to the oil pathway (the top 10 were collected; Fig. 4a). The most enriched biological processes are the metabolic process, fatty acid biosynthetic process, lipid metabolic process, and fatty acid metabolic process. The most enriched molecular functions are catalytic activity, transferase activity (transferring acyl groups), flavin adenine dinucleotide binding, oxidoreductase activity (acting on the CH-CH group of donors), O-acyltransferase activity and ligase activity (Additional file 10). GO enrichment analysis and gene expression clustering of predicted oil-biosynthesis-related genes. a GO enrichment analysis of J. curcas predicted oil-biosynthesis-related genes.
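The NBD step can be sketched as fitting a negative binomial distribution to the null connectivity (the number of links from a random gene to the seed set) and taking the upper tail as the p-value. Only the choice of distribution comes from the text; the method-of-moments fit and the toy counts below are our assumptions:

```python
from math import lgamma, log, exp

def nb_fit_moments(counts):
    """Method-of-moments negative binomial fit (requires var > mean)."""
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    p = m / v                  # success probability
    r = m * p / (1.0 - p)      # dispersion parameter
    return r, p

def nb_sf(k, r, p):
    """P(X >= k) for NB(r, p) with pmf C(x+r-1, x) * p^r * (1-p)^x."""
    def pmf(x):
        return exp(lgamma(x + r) - lgamma(x + 1) - lgamma(r)
                   + r * log(p) + x * log(1.0 - p))
    return 1.0 - sum(pmf(x) for x in range(k))

# Null model: how many links a random gene has to the seed set
# (toy counts). A candidate with 9 links is then very surprising.
null_counts = [0, 1, 1, 2, 0, 3, 1, 2, 0, 1, 4, 2, 1, 0, 2]
r, p = nb_fit_moments(null_counts)
pval = nb_sf(9, r, p)
```

Candidates whose tail probability falls below the cutoff (p < 0.01 in the text) are kept as oil-biosynthesis-related predictions.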
b Gene expression clustering of predicted oil-biosynthesis-related genes at different time points after pollination (the expression values were normalized by z-score) We also performed gene expression clustering analysis of the predicted oil-biosynthesis-related genes at different time points of developing J. curcas seeds (14, 19, 25, 29, 35, 41, and 45 days after pollination (DAP)). The expression matrix was downloaded from JCDB and normalized by the z-score method. Five clusters were obtained by hierarchical clustering (Fig. 4b and Additional file 11). In these five clusters, Cluster 3 exhibited the highest expression at 14 DAP and 19 DAP, suggesting that these genes may play an important role in lipid accumulation; Cluster 1 has a higher expression at 25 DAP while Cluster 2 has a higher expression at 41 DAP; Cluster 5 continues to be highly expressed in the later stages. Plant lipids are synthesized as triacylglycerols (TAGs) via a complex series of pathways in which many fatty acid (FA) biosynthetic enzymes are involved. The major FAs in plant oils are palmitic (16:0), stearic (18:0), oleic (18:1), linoleic (18:2) and linolenic acids (18:3). Among them, palmitic and stearic acids are saturated, oleic acid is monounsaturated, and linoleic and linolenic acids are polyunsaturated FAs. To further study the function of our predictions, we used the KNN method to assign them to different sub-pathways – ① Fatty Acid Synthesis, ② Fatty Acid Elongation, Desaturation & Export From Plastid, ③ Lipid Trafficking, ④ Triacylglycerol Biosynthesis, and ⑤ Triacylglycerol and Fatty Acid Degradation. The KNN results (Fig. 5, see Additional file 12 for detailed data) showed the oil-biosynthesis pathway with our newly predicted oil-biosynthesis-related genes, of which 7 are associated with Fatty Acid Synthesis, 15 with Triacylglycerol Biosynthesis, and 1 with Triacylglycerol & Fatty Acid Degradation.
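The KNN assignment above (Euclidean distance to known genes, majority vote among the k nearest) can be sketched as follows; the stage-wise expression vectors and the value k = 3 are illustrative, only the distance metric and voting scheme come from the text:

```python
import math
from collections import Counter

def knn_assign(candidate, known, k=3):
    """Assign a candidate gene to a sub-pathway by majority vote
    among its k nearest known genes (Euclidean distance on
    expression vectors)."""
    dists = sorted(
        (math.dist(candidate, vec), pathway) for vec, pathway in known
    )
    votes = Counter(pathway for _, pathway in dists[:k])
    return votes.most_common(1)[0][0]

# Toy profiles over 4 seed-development stages (illustrative values):
# one group peaks early, the other peaks late.
known = [
    ([9.0, 7.0, 2.0, 1.0], "Fatty Acid Synthesis"),
    ([8.5, 6.5, 2.5, 1.5], "Fatty Acid Synthesis"),
    ([8.0, 7.5, 1.5, 1.0], "Fatty Acid Synthesis"),
    ([1.0, 2.0, 8.0, 9.0], "Triacylglycerol Biosynthesis"),
    ([1.5, 2.5, 7.5, 8.5], "Triacylglycerol Biosynthesis"),
]
label = knn_assign([8.8, 7.2, 2.2, 1.2], known)  # early-stage profile
```

A candidate whose profile resembles the early-expressed seed genes is voted into Fatty Acid Synthesis, while a late-peaking candidate lands in Triacylglycerol Biosynthesis.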
The gene expression profiles of the novel Fatty Acid Synthesis and Triacylglycerol Biosynthesis related genes are also shown in Fig. 5, indicating that these genes are involved in the whole process of oil biosynthesis. The extended oil-biosynthesis-related pathway of J. curcas and the gene expression profiles and potential functional roles of predicted oil-biosynthesis-related genes. ACP, acyl carrier protein; G3P, glycerol-3-phosphate; LPA, lysophosphatidic acid; PA, phosphatidic acid; TAG, triacylglycerol; DAG, diacylglycerol.

Studies on the regulatory pathways of oil biosynthesis have great theoretical and practical value in J. curcas. These pathways usually involve many genes and intricate regulatory networks, and any abnormal change in these networks can affect the whole of oil synthesis, including oil content and component diversity. However, in J. curcas, the regulatory pathways of oil biosynthesis are still unclear due to data deficiency and technical limitations. To the best of our knowledge, only differential expression information from developing seeds has been provided by transcriptome analysis to date. Here, we provide a systematic approach to deeply mine the oil-synthesis-related genes and pathways in J. curcas. The present study is the first to combine transcriptome and gene interactome data analysis in J. curcas; it can provide insight into the biosynthesis of oil, including specific triglycerides, and will contribute to the genetic improvement of J. curcas in seed development and oil accumulation. In functional studies aimed at identifying key pathways, the lack of adequate analytical data is a common challenge for non-model species. As for J. curcas, although high-throughput measurement technology, which is becoming ever cheaper, has enriched the data available for functional research, it is still far from meeting the needs.
In contrast, model plants such as Arabidopsis have accumulated plenty of data for pathway research because of their well-established genomes, fast transformation, and various mutants. Therefore, they can act as a powerful reference and provide primary information for the study of other, non-model species. In this work, to compensate for the data shortage in J. curcas, we exploited Arabidopsis transcriptomic data and functional networks as reference and scaffold to spot the potential genes and pathways associated with oil biosynthesis in J. curcas. By sequence alignment, we found many oil-biosynthesis-related genes that are highly conserved between J. curcas and Arabidopsis. These highly conserved genes provide seeds for the further prediction of more J. curcas-specific oil-biosynthesis-related genes. Owing to the great differences between J. curcas and Arabidopsis in the process of oil biosynthesis, homology analysis alone is far from sufficient to find oil-biosynthesis-related genes and pathways in J. curcas. We need a method to systematically identify and analyze oil-biosynthesis-related genes and pathways in J. curcas, especially those that are species-specific. Because genes tend to be closely linked to genes with similar functions in gene interaction networks, we can look for more oil-biosynthesis-related genes by studying the gene interaction networks of J. curcas, where such genes are likely to be linked with known oil-biosynthesis-related genes. On the other hand, network data may contain quite a lot of noise, so they should be used carefully, especially when predicting new genes. In the co-expression network, we used the negative binomial distribution algorithm to calculate the probability of each candidate gene participating in the key pathway. The predictions in this part were considered specific to the Jatropha oil pathway. In addition, it is important to emphasize once more the limitations of the available PPI data.
Our current knowledge of the Jatropha protein interactome is neither complete nor precise. The PPI data of J. curcas were derived from homology analysis and prediction based on Arabidopsis data. That is, it is unclear how many of the inferred interactions are real; false positives and false negatives certainly exist. It is more difficult to obtain large-scale gene interactome data than large-scale genome and transcriptome data, which may be a critical problem for the functional genomics research of non-model organisms in the future. Our method, which combines transcriptome and gene interactome data, may be a feasible and effective way forward at present. For the predictions of this study, we will further use molecular biology experiments to verify their functions (related experiments are in progress).

Understanding the oil metabolism pathway is key to promoting the commercialization of J. curcas. In this paper, we presented a multi-step computational framework that integrates transcriptome and gene interactome data to mine oil-biosynthesis-related genes and assign them to sub-pathways, yielding an extended pathway. The major advantage over simple homology search methods is that we can predict function-related genes that are species-specific. Our method can be used widely in key pathway studies, especially for non-model organisms.

The gene expression profiles were downloaded in April 2019 from the J. curcas database (JCDB [16], http://jcdb.liu-lab.com), which contains 114 RNA-Seq samples. JCDB is a comprehensive database of J. curcas that we developed in previous studies. The expression profile was normalized by the upper-quartile method [17]. Other information, such as sequence and gene annotation retrieval details from JCDB, can be found in Additional file 13. Oil-biosynthesis-related genes in Arabidopsis thaliana were collected from the ARABIDOPSIS ACYL-LIPID METABOLISM PATHWAYS database (ARALIP, http://aralip.plantbiology.msu.edu/pathways/pathways) [18].
The PPIs in Arabidopsis thaliana were collected from the literature [19,20,21] and databases (AtPID 5.0 [22], AtPIN 9.0 [23], and PAIR 3.0 [24]). The protein sequences and gene annotations of Arabidopsis thaliana were downloaded from The Arabidopsis Information Resource (TAIR) version 10 [25].

Annotation and homologue search

We used InParanoid [26] version 4.1 to find the orthologous relationships between J. curcas and Arabidopsis thaliana genes with default parameters. The protein sequences of the two species were used as inputs, and genes were assigned to homolog groups according to relatedness measured by BLAST scores (cutoff = 40 bits). The confidence interval (cutoff = 0.05) was calculated by the bootstrap approach [27].

Co-expression network construction

The genes with high expression variation (top 75%) were retained to construct a co-expression network. We calculated Spearman's correlation coefficient and its corresponding P value between the expression profiles of each gene pair using our in-house Perl script (available upon request). Only gene pairs with a correlation value higher than 0.6 and an adjusted P value less than 0.01 were regarded as co-expressed in our network.

Protein–protein interaction network migration

If two genes are detected to interact at the protein level in one species, the genes homologous to them in another species can be inferred to interact as well. These inferred gene pairs are traditionally defined as interacting homologous genes. We used a homologous-group-based method to infer J. curcas PPIs: if an Arabidopsis gene in group A interacts with an Arabidopsis gene in group B, then all the J. curcas genes in group A are inferred to interact with all the J. curcas genes in group B.

Network topological characteristics

In network theory, a scale-free network is a complex network in which most nodes connect with only a few other nodes, while a few nodes connect with very many.
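The Spearman-based co-expression edge test described above can be sketched as follows. This is a minimal pure-Python illustration, not the paper's in-house Perl script: the gene names and expression values are invented toy data, and the adjusted P value < 0.01 filter (which would require a multiple-testing correction step) is omitted, leaving only the ρ > 0.6 threshold.

```python
from itertools import combinations

def ranks(xs):
    """Average ranks (ties share the mean rank), as used by Spearman's rho."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for t in range(i, j + 1):
            r[order[t]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy expression profiles (genes x samples); illustrative values only.
expr = {
    "gA": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    "gB": [2.1, 3.9, 6.2, 7.8, 10.5, 12.0],  # monotone with gA
    "gC": [6.0, 5.0, 4.0, 3.0, 2.0, 1.0],    # anti-correlated with gA
}

# Keep pairs with rho > 0.6 (the adjusted P < 0.01 filter would follow).
edges = [(a, b) for a, b in combinations(sorted(expr), 2)
         if spearman(expr[a], expr[b]) > 0.6]
```

Because Spearman's ρ depends only on ranks, gB correlates perfectly with gA here despite the non-linear scaling, which is why the paper's choice of Spearman over Pearson is robust to monotone distortions in RNA-Seq counts.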
The degree distribution of a scale-free network follows a power law, at least asymptotically. A line was fitted to the log–log plot of the degree distribution using Eq. 1:

$${\log}_{10} P(k) \sim -\gamma\, {\log}_{10} k,$$

where k is the degree of a node and P(k) is the fraction of nodes with degree k. In biological networks, nodes represent genes, and the edges between nodes reflect the degree of correlation of their expression. A subset of nodes that are closely connected to each other forms a module. Within a module, highly connected genes, also known as "hub genes," are likely to have important biological functions. Metabolic, protein and gene interaction networks have been reported to exhibit scale-free behavior based on the analysis of the distribution of the number of connections of the network nodes [28]. To construct a biologically meaningful network with a small-world and scale-free structure, several network topological criteria were previously designed for the J. curcas tender shoot system [29]. We also calculated a number of network properties to this end, such as the number of genes, number of edges, connected components, size of the giant component, network density, average node degree, degree centrality, network heterogeneity, clustering coefficient, scale-free R2, and scale-free Gamma, using our in-house Perl script (available upon request). The PPI network parameters can be found in Additional file 4, and the co-expression network parameters under different correlation coefficient thresholds in Additional file 6.

GO consistency

To confirm the reliability of our PPI and co-expression networks, we performed a GO consistency test [14, 15]. The basic idea of GO consistency is that, in a reliable gene interaction network, a gene should share functions (GO terms) with its neighbors. For each gene in the network, we performed GO enrichment analysis of its neighbor genes using GOATOOLS [30].
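The line fit of Eq. 1, which yields the "scale-free Gamma" and "scale-free R2" statistics listed above, can be sketched as follows. This is an assumed pure-Python reimplementation (the original in-house Perl script is not public); the degree sequence at the bottom is a synthetic example drawn exactly from P(k) ∝ k⁻².

```python
import math
from collections import Counter

def scale_free_fit(degrees):
    """Fit log10 P(k) ~ -gamma * log10 k by least squares (Eq. 1).

    Returns (gamma, r2): the power-law exponent and the R^2 of the
    log-log line fit (the 'scale-free R2' network statistic).
    """
    n = len(degrees)
    cnt = Counter(degrees)
    ks = sorted(k for k in cnt if k > 0)
    xs = [math.log10(k) for k in ks]          # log10 k
    ys = [math.log10(cnt[k] / n) for k in ks]  # log10 P(k)
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    slope = sxy / sxx
    r2 = sxy * sxy / (sxx * syy)
    return -slope, r2  # gamma is minus the fitted slope

# Synthetic degrees following P(k) proportional to k^-2 exactly:
# 16 nodes of degree 1, 4 of degree 2, 1 of degree 4.
degrees = [1] * 16 + [2] * 4 + [4] * 1
gamma, r2 = scale_free_fit(degrees)
```

An R² close to 1 together with a γ in the typical 2–3 range is the usual evidence that a network is approximately scale-free; for the perfect synthetic sequence above the fit recovers γ = 2 and R² = 1 exactly.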
If the enriched GO terms overlapped with the gene's own GO annotation, we counted it as a GO match. GO consistency was then defined as N/M, where N is the total number of GO matches and M is the total number of genes tested in the network. To simulate random networks for comparison, genes were randomly selected from the network and the above steps were repeated 5000 times.

Negative binomial distribution algorithm on weighted co-expression network

We assume that a novel oil-biosynthesis-related candidate gene has relatively more connections with known oil pathway genes than the random background. Connections between candidate and known oil-biosynthesis-related genes approximately follow a negative binomial distribution in networks. The probability P that a candidate gene is linked to k or more known oil-biosynthesis-related genes was calculated by Eq. 2:

$${\rm P}=1-\sum_{i=0}^{k-1} C_{n}^{i}\, p^{i} {(1-p)}^{n-i},$$

where p is the probability that a gene is linked to a known oil-biosynthesis-related gene by chance (p = number of known oil-biosynthesis-related genes / number of all genes), and n is the degree of the candidate gene in the network.

Random walk with restart algorithm on PPI network

RWR is a ranking algorithm [31]. It simulates a random walker that starts on a seed node or a set of seed nodes (here, the known oil-biosynthesis-related genes) and moves to a randomly chosen immediate neighbor at each step [32]. All the nodes in the graph are ranked by the probability of the random walker reaching them. Let \({P}^{0}\) be the initial probability vector and \({P}^{t}\) be a vector whose ith element holds the probability of finding the random walker at node i at step t. The probability vector at step t + 1 is given by Eq. 3:

$${P}^{t+1}=\left(1-{\rm r}\right){\rm W}{P}^{t}+{\rm r}{P}^{0},$$

where W is the transition matrix of the graph, whose entry W_{ij} is the transition probability from node i to node j.
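The "k or more" tail probability of Eq. 2 can be computed directly. The sketch below writes it as the upper tail of a binomial distribution, which matches the description in the text; the function name, the example degree, and the background fraction are ours, chosen only for illustration.

```python
from math import comb

def link_tail_prob(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), i.e. the chance that a candidate
    gene of network degree n touches k or more known oil-pathway genes
    purely by chance (Eq. 2).
    """
    return 1.0 - sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i)
                     for i in range(k))

# Example (hypothetical numbers): a degree-20 candidate where 5% of all
# genes are known pathway genes. Candidates whose tail probability falls
# below the paper's cutoff (P < 0.01) would be kept as predictions.
p_chance = link_tail_prob(20, 5, 0.05)
```

The cutoff is one-sided by construction: a small value means the candidate has far more known-pathway neighbours than the random background would produce.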
The parameter \({\rm r} \in (0, 1)\) is the restart probability: at each step, the random walker can return to the seed nodes with probability r. The connections between genes in the PPI network were transformed into the adjacency matrix. The restart probability was set to 0.8. The RWR function returns a matrix of values with a single column; these values represent the affinity score between each candidate gene and the known oil-biosynthesis-related genes. The MATLAB code of the RWR function was downloaded from http://www3.ntu.edu.sg/home/aspatra/research/Yongjin_BI2010.zip.

K-nearest neighbor algorithm on function assignment of candidate genes

The Penalized k-Nearest-Neighbor-Graph (PKNNG) metric was designed to evaluate distances in gene expression datasets [33]. We used a basic distance-voting strategy to determine which sub-pathway each candidate gene should belong to. A candidate gene was classified by a plurality vote of its neighbors. Given the k nearest neighbors of a gene A in a network (here we use k = 5), the naive KNN method selects the functional class that is voted for by the maximum number of neighbors and assigns it to gene A. Gene expression data from 7 different developmental stages of J. curcas seeds were used to calculate the distance between the candidate genes and the oil-biosynthesis-related genes. The expression data were obtained from JCDB [16] and Jiang's paper [34]. The distance was calculated as the Euclidean distance, Eq. 4:

$${\rm d}\left({\rm x, y}\right)=\sqrt{\sum_{i=1}^{n}{({x}_{i}-{y}_{i})}^{2}},$$

where n is the sample number of the expression data, x is the candidate gene, and y is the known oil-biosynthesis-related gene.

All datasets generated during this study are included in this published article and the sources are cited accordingly. Jatropha curcas gene expression profiles: http://jcdb.liu-lab.com/sdb/data/JCDB_JatCur_1.0/JCDB_1.0.gene.expression.counts.profile.zip.
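The RWR iteration of Eq. 3 can be sketched in a few lines. This is a minimal Python version iterating to convergence with a column-normalized transition matrix and r = 0.8 as in the text; it is not the MATLAB implementation linked above, and the three-node toy graph is illustrative, not the J. curcas PPI network.

```python
def rwr(adj, seeds, r=0.8, tol=1e-12):
    """Random walk with restart (Eq. 3): p_{t+1} = (1-r) * W p_t + r * p_0.

    adj   : adjacency list {node: [neighbours]} of an undirected graph
    seeds : seed nodes (the known pathway genes); p_0 is uniform over them
    Returns the converged affinity score of every node.
    """
    nodes = sorted(adj)
    p0 = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in nodes}
    p = dict(p0)
    while True:
        nxt = {v: r * p0[v] for v in nodes}      # restart term r * p_0
        for u in nodes:
            # walker at u spreads (1-r) * p[u] evenly over u's neighbours,
            # i.e. W is the column-normalized adjacency matrix
            share = (1.0 - r) * p[u] / len(adj[u])
            for v in adj[u]:
                nxt[v] += share
        if max(abs(nxt[v] - p[v]) for v in nodes) < tol:
            return nxt
        p = nxt

# Toy path graph a - b - c with seed a: scores decay with distance to the seed.
scores = rwr({"a": ["b"], "b": ["a", "c"], "c": ["b"]}, seeds={"a"})
```

Because the update is a contraction with factor (1 - r), the iteration converges quickly, and the scores always sum to 1; ranking nodes by these scores is exactly how candidates are prioritized against the seed set.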
Jatropha curcas gene ontology annotation: http://jcdb.liu-lab.com/sdb/data/JCDB_JatCur_1.0/JCDB_1.0.blast2go.GO.anno.xls.zip; Jatropha curcas protein sequences: http://jcdb.liu-lab.com/sdb/data/JCDB_JatCur_1.0/JCDB_1.0.protein.fa.zip. Arabidopsis thaliana protein sequences: https://www.arabidopsis.org/download_files/Proteins/TAIR10_protein_lists/TAIR10_pep_20101214. Arabidopsis thaliana oil proteins: http://aralip.plantbiology.msu.edu/data/aralip_data.xlsx. The MATLAB code of the RWR function is available at http://www3.ntu.edu.sg/home/aspatra/research/Yongjin_BI2010.zip.

Abbreviations: KNN, K-nearest neighbors; NBD, negative binomial distribution; PPI, protein–protein interaction; PKNNG, penalized k-nearest-neighbor-graph; RWR, random walk with restart; RNA-seq, RNA sequencing.

Openshaw K. A review of Jatropha curcas: an oil plant of unfulfilled promise. Biomass Bioenerg. 2000;19(1):1–15. Sabandar CW, Ahmat N, Jaafar FM, Sahidin I. Medicinal property, phytochemistry and pharmacology of several Jatropha species (Euphorbiaceae): a review. Phytochemistry. 2013;85:7–29. Fairless D. Biofuel: the little shrub that could–maybe. Nature. 2007;449(7163):652–5. Maghuly F, Laimer M. Jatropha curcas, a biofuel crop: functional genomics for understanding metabolic pathways and genetic improvement. Biotechnol J. 2013;8(10):1172–82. Natarajan P, Parani M. De novo assembly and transcriptome analysis of five major tissues of Jatropha curcas L. using GS FLX titanium platform of 454 pyrosequencing. BMC Genomics. 2011;12:191. Spinelli VM, Dias LAD, Rocha RB, Resende MDV. Yield performance of half-sib families of physic nut (Jatropha curcas L). Crop Breed Appl Biot. 2014;14(1):49–53. Barabasi AL, Oltvai ZN. Network biology: understanding the cell's functional organization. Nat Rev Genet. 2004;5(2):101–13. Liang YH, Cai B, Chen F, Wang G, Wang M, Zhong Y, Cheng ZM. Construction and validation of a gene co-expression network in grapevine (Vitis vinifera L.). Hortic Res. 2014;1:14040.
Alcaraz N, Friedrich T, Kotzing T, Krohmer A, Muller J, Pauling J, Baumbach J. Efficient key pathway mining: combining networks and OMICS data. Integr Biol (Camb). 2012;4(7):756–64. Hancock T, Takigawa I, Mamitsuka H. Mining metabolic pathways through gene expression. Bioinformatics. 2010;26(17):2128–35. Szklarczyk D, Gable AL, Lyon D, Junge A, Wyder S, Huerta-Cepas J, Simonovic M, Doncheva NT, Morris JH, Bork P, et al. STRING v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic Acids Res. 2019;47(D1):D607–13. Ballouz S, Verleyen W, Gillis J. Guidance for RNA-seq co-expression network construction and analysis: safety in numbers. Bioinformatics. 2015;31(13):2123–30. Song L, Langfelder P, Horvath S. Comparison of co-expression measures: mutual information, correlation, and model based indices. BMC Bioinformatics. 2012;13:328. Liao Q, Liu C, Yuan X, Kang S, Miao R, Xiao H, Zhao G, Luo H, Bu D, Zhao H, et al. Large-scale prediction of long non-coding RNA functions in a coding-non-coding gene co-expression network. Nucleic Acids Res. 2011;39(9):3864–78. Chen W, Zhang X, Li J, Huang S, Xiang S, Hu X, Liu C. Comprehensive analysis of coding-lncRNA gene co-expression network uncovers conserved functional lncRNAs in zebrafish. BMC Genomics. 2018;19(Suppl 2):112. Zhang X, Pan BZ, Chen M, Chen W, Li J, Xu ZF, Liu C. JCDB: a comprehensive knowledge base for Jatropha curcas, an emerging model for woody energy plants. BMC Genomics. 2019;20(Suppl 9):958. Bullard JH, Purdom E, Hansen KD, Dudoit S. Evaluation of statistical methods for normalization and differential expression in mRNA-Seq experiments. BMC Bioinformatics. 2010;11:94. Li-Beisson Y, Shorrosh B, Beisson F, Andersson MX, Arondel V, Bates PD, Baud S, Bird D, Debono A, Durrett TP, et al. Acyl-lipid metabolism. Arabidopsis Book. 2013;11:e0161. Arabidopsis Interactome Mapping C. 
Evidence for network evolution in an Arabidopsis interactome map. Science. 2011;333(6042):601–7. Mukhtar MS, Carvunis AR, Dreze M, Epple P, Steinbrenner J, Moore J, Tasan M, Galli M, Hao T, Nishimura MT, et al. Independently evolved virulence effectors converge onto hubs in a plant immune system network. Science. 2011;333(6042):596–601. Jones AM, Xuan Y, Xu M, Wang RS, Ho CH, Lalonde S, You CH, Sardi MI, Parsa SA, Smith-Valle E, et al. Border control–a membrane-linked interactome of Arabidopsis. Science. 2014;344(6185):711–6. Li P, Zang W, Li Y, Xu F, Wang J, Shi T. AtPID: the overall hierarchical functional protein interaction network interface and analytic platform for Arabidopsis. Nucleic Acids Res. 2011;39(Database issue):D1130–3. Brandao MM, Dantas LL, Silva-Filho MC. AtPIN: Arabidopsis thaliana protein interaction network. BMC Bioinform. 2009;10:454. Lin M, Shen X, Chen X. PAIR: the predicted Arabidopsis interactome resource. Nucleic Acids Res. 2011;39(Database issue):D1134-1140. Berardini TZ, Reiser L, Li D, Mezheritsky Y, Muller R, Strait E, Huala E. The Arabidopsis information resource: Making and mining the "gold standard" annotated reference plant genome. Genesis. 2015;53(8):474–85. Ostlund G, Schmitt T, Forslund K, Kostler T, Messina DN, Roopra S, Frings O, Sonnhammer EL. InParanoid 7: new algorithms and tools for eukaryotic orthology analysis. Nucleic Acids Res. 2010;38(Database issue):D196-203. Efron B, Tibshirani RJ. An Introduction to the Bootstrap. London: Taylor & Francis; 1994. Albert R. Scale-free networks in cell biology. J Cell Sci. 2005;118(Pt 21):4947–57. Govender N, Senan S, Mohamed-Hussein ZA, Wickneswari R. A gene co-expression network model identifies yield-related vicinity networks in Jatropha curcas shoot system. Sci Rep. 2018;8(1):9211. Klopfenstein DV, Zhang L, Pedersen BS, Ramirez F, Warwick Vesztrocy A, Naldi A, Mungall CJ, Yunes JM, Botvinnik O, Weigel M, et al. GOATOOLS: A Python library for Gene Ontology analyses. Sci Rep. 
2018;8(1):10872. Kohler S, Bauer S, Horn D, Robinson PN. Walking the interactome for prioritization of candidate disease genes. Am J Hum Genet. 2008;82(4):949–58. Li Y, Patra JC. Genome-wide inferring gene-phenotype relationship by walking on the heterogeneous network. Bioinformatics. 2010;26(9):1219–24. Baya AE, Granitto PM. Clustering gene expression data with a penalized graph-based metric. BMC Bioinform. 2011;12:2. Jiang H, Wu P, Zhang S, Song C, Chen Y, Li M, Jia Y, Fang X, Chen F, Wu G. Global analysis of gene expression profiles in developing physic nut (Jatropha curcas L.) seeds. PLoS ONE. 2012;7(5):e36522. Data analysis was supported by the HPC Platform, The Public Technology Service Center of Xishuangbanna Tropical Botanical Garden (XTBG), CAS, China. About this supplement This article has been published as part of BMC Bioinformatics Volume 22 Supplement 6, 2021: 19th International Conference on Bioinformatics 2020 (InCoB2020). The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-22-supplement-6. This work was supported by the National Natural Science Foundation of China (Nos. 31471220, 91440113), the Natural Science Foundation of Yunnan Province (No. 2018FB060), Start-up Fund from Xishuangbanna Tropical Botanical Garden, 'Top Talents Program in Science and Technology' from Yunnan Province. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Publication costs are funded by the Natural Science Foundation of Yunnan Province (No. 2018FB060). 
CAS Key Laboratory of Tropical Plant Resources and Sustainable Use, Xishuangbanna Tropical Botanical Garden, Chinese Academy of Sciences, Kunming, 650223, Yunnan, China Xuan Zhang, Jing Li, Bang-Zhen Pan, Wen Chen, Maosheng Chen, Mingyong Tang, Zeng-Fu Xu & Changning Liu Center of Economic Botany, Core Botanical Gardens, Chinese Academy of Sciences, Menglun, 666303, Yunnan, China Xuan Zhang, Jing Li, Bang-Zhen Pan, Maosheng Chen, Mingyong Tang, Zeng-Fu Xu & Changning Liu The Innovative Academy of Seed Design, Chinese Academy of Sciences, Kunming, 650223, Yunnan, China College of Life Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China Xuan Zhang Jing Li Bang-Zhen Pan Wen Chen Maosheng Chen Mingyong Tang Zeng-Fu Xu Changning Liu CL and ZX conceived and supervised this study. CL and XZ designed the algorithm. BP, MC, and MT collected and compiled data from literature and public databases. XZ, WC, and JL performed the data analysis. XZ, JL and CL compiled the draft of the manuscript. All authors read and approved the final manuscript. Correspondence to Zeng-Fu Xu or Changning Liu.

Additional file 1. Arabidopsis oil-biosynthesis-related genes. Additional file 2. Jatropha curcas oil-biosynthesis-related genes based on homology search. Additional file 3. Homologous correspondence of oil-biosynthesis-related genes between Arabidopsis and Jatropha curcas. Additional file 4. PPI network statistics of Jatropha curcas. Additional file 5. The results of the NBD algorithm on co-expression networks with different SCC cutoffs. Additional file 6. Co-expression network statistics of Jatropha curcas. Additional file 7. GO consistency. Additional file 8. Oil-biosynthesis-related genes are closely linked relative to random background. Additional file 9. Prediction of oil-biosynthesis-related genes. Additional file 10. BP and MF enrichment. Additional file 11. Expression clusters of predicted oil-biosynthesis-related genes. Additional file 12. Predicted oil-biosynthesis-related genes in different sub-pathways. Additional file 13. Retrieval details from JCDB.

Zhang, X., Li, J., Pan, BZ. et al.
Extended mining of the oil biosynthesis pathway in biofuel plant Jatropha curcas by combined analysis of transcriptome and gene interactome data. BMC Bioinformatics 22, 409 (2021). https://doi.org/10.1186/s12859-021-04319-w

Keywords: Extended mining; Oil biosynthesis; Jatropha curcas; Gene interactome
ICALP 2012 Accepted Papers with Abstracts for Track C: Foundations of Networked Computation Ran Gelles, Rafail Ostrovsky and Kina Winoto. Multi-User Equality Testing and Its Applications Abstract: Motivated by the recent widespread emergence of location-based services (LBS) over mobile devices, we explore highly-efficient protocols for proximity-testing. Such protocols allow a group of friends to discover if they are all close to each other in some physical location, without revealing their individual locations to each other even if one of them is missing. Since our focus is hand-held devices, we aim at protocols with very small communication complexity and a small constant number of rounds. The proximity-testing problem can be reduced to private equality testing (PET), in which parties find out whether or not they hold the same input without revealing any other information about their inputs to each other. While previous works analyze the 2-party PET special case (and its LBS application), in this work we consider highly-efficient schemes for the multiparty case with no honest majority. We provide schemes for both a direct setting and a setting with a trusted mediating server. Our most efficient scheme takes 2 rounds, where in each round the users send only a couple of ElGamal ciphertexts. Marcel Ochel, Klaus Radke and Berthold Voecking. Online Packing with Gradually Improving Capacity Estimations with Applications to Network Lifetime Maximization Abstract: We introduce a general model for online packing problems with applications to lifetime optimization of wireless sensor networks. Classical approaches for lifetime maximization make the crucial assumption that battery capacities of sensor nodes are known a priori.
For real-world batteries, however, the capacities are only vaguely known. To capture this aspect, we introduce an adversarial online model where estimates become more and more accurate over time, that is, when using the corresponding resources. Our model is based on general linear packing programs and we assume the remaining capacities to be always specified by lower and upper bounds that may deviate from each other by a fixed factor alpha. We analyze the algorithmic consequences of our model and provide a general ln(alpha)/alpha competitive framework. Furthermore, we show a complementary upper bound of O(1/sqrt(alpha)). Michael Goodrich and Michael Mitzenmacher. Anonymous Card Shuffling and its Applications to Parallel Mixnets Abstract: We study the question of how to shuffle $n$ cards when faced with an opponent who knows the initial position of all the cards {\em and} can track every card when permuted, {\em except} when one takes $K< n$ cards at a time and shuffles them in a private buffer ``behind your back,'' which we call {\em buffer shuffling}. The problem arises naturally in the context of parallel mixnet servers as well as other security applications. Our analysis is based on related analyses of load-balancing processes. We include extensions to variations that involve corrupted servers and adversarially injected messages, which correspond to an opponent who can peek at some shuffles in the buffer and who can mark some number of the cards. In addition, our analysis makes novel use of a sum-of-squares metric for anonymity, which leads to improved performance bounds for parallel mixnets and can also be used to bound well-known existing anonymity measures. Adrian Kosowski, Bi Li, Nicolas Nisse and Karol Suchan. k-Chordal Graphs: from Cops and Robber to Compact Routing via Treewidth Abstract: Cops and robber games concern a team of cops that must capture a robber moving in a graph. 
We consider the class of k-chordal graphs, i.e., graphs with no induced cycle of length greater than k, $k\geq 3$. We prove that k-1 cops are always sufficient to capture a robber in k-chordal graphs. This leads us to our main result, a new structural decomposition for a graph class including k-chordal graphs. We present a quadratic algorithm that, given a graph G and $k\geq 3$, either returns an induced cycle larger than k in G, or computes a tree-decomposition of G, each bag of which contains a dominating path with at most k-1 vertices. This allows us to prove that any k-chordal graph with maximum degree $\Delta$ has treewidth at most $(k-1)(\Delta-1)+2$, improving the $O(\Delta (\Delta-1)^{k-3})$ bound of Bodlaender and Thilikos (1997). Moreover, any graph admitting such a tree-decomposition has small hyperbolicity. As an application, for any n-node graph admitting such a tree-decomposition, we propose a compact routing scheme using routing tables, addresses and headers of size O(log n) bits and achieving an additive stretch of O(k log Delta). As far as we know, this is the first routing scheme with O(log n)-routing tables and small additive stretch for k-chordal graphs. Yoann Dieudonne and Andrzej Pelc. Deterministic network exploration by anonymous silent agents with local traffic reports Abstract: A team consisting of an unknown number of mobile agents, starting from different nodes of an unknown network, possibly at different times, have to explore the network: every node must be visited by at least one agent and all agents must eventually stop. Agents are anonymous (identical), execute the same deterministic algorithm and move in synchronous rounds along links of the network. They are silent: they cannot send any messages to other agents or mark visited nodes in any way. In the absence of any additional information, exploration with termination of an arbitrary network in this weak model is impossible.
Our aim is to solve the exploration problem by giving agents very restricted local traffic reports. Specifically, an agent that is at a node v in a given round is provided with three bits of information, answering the following questions: Am I alone at v? Did any agent enter v in this round? Did any agent exit v in this round? We show that this small amount of information permits us to solve the exploration problem in arbitrary networks. More precisely, we give a deterministic terminating exploration algorithm working in arbitrary networks for all initial configurations that are not perfectly symmetric, i.e., in which there are agents with different views of the network. The algorithm works in time polynomial in the (unknown) size of the network. A deterministic terminating exploration algorithm working for all initial configurations in arbitrary networks does not exist. Fedor Fomin, Petr Golovach, Jesper Nederlof and Michał Pilipczuk. Minimizing Rosenthal Potential in Multicast Games Abstract: A multicast game is a network design game modelling how selfish non-cooperative agents build and maintain one-to-many network communication. There is a special source node and a collection of agents located at corresponding terminals. Each agent is interested in selecting a route from the special source to its terminal minimizing the cost. The mutual influence of the agents is determined by a cost sharing mechanism, which evenly splits the cost of an edge among all the agents using it for routing. The existence of a Nash equilibrium for the game was previously established by means of the Rosenthal potential. Anshelevich et al. [FOCS 2004, SICOMP 2008] introduced a measure of quality of the best Nash equilibrium, the price of stability, as the ratio of its cost to the optimum network cost. While the Rosenthal potential is a reasonable measure of the quality of Nash equilibria, finding a Nash equilibrium minimizing this potential is NP-hard.
In this paper we provide several algorithmic and complexity results on finding a Nash equilibrium minimizing the value of Rosenthal potential. Let $n$ be the number of agents and $G$ be the communication network. We show that - For a given strategy profile $s$ and integer $k\geq 1$, there is a local search algorithm which in polynomial time $n^{O(k)} \cdot |G|^{O(1)}$ finds a better strategy profile, if there is any, in a $k$-exchange neighbourhood of $s$. In other words, the algorithm decides if Rosenthal potential can be decreased by changing strategies of at most $k$ agents; - The running time of our local search algorithm is essentially tight: unless $FPT= W[1]$, for any function $f(k)$, searching of the $k$-neighbourhood cannot be done in time $f(k)\cdot |G|^{O(1)}$. The key ingredient of our algorithmic result is a subroutine that finds an equilibrium with minimum potential in $O(3^n \cdot |G|^{O(1)})$ time. In other words, finding an equilibrium with minimum potential is fixed-parameter tractable when parameterized by the number of agents. Ilias Diakonikolas, Christos Papadimitriou, George Pierrakos and Yaron Singer. Efficiency-Revenue Trade-offs in Auctions Abstract: When agents with independent priors bid for a single item, Myerson's optimal auction maximizes expected revenue, whereas Vickrey's second-price auction optimizes social welfare. We address the natural question of {\em trade-offs} between the two criteria, that is, auctions that optimize, say, revenue under the constraint that the welfare is above a given level. If one allows for randomized mechanisms, it is easy to see that there are polynomial-time mechanisms that achieve any point in the trade-off (the {\em Pareto curve\/}) between revenue and welfare. We investigate whether one can achieve the same guarantees using {\em deterministic} mechanisms. We provide a negative answer to this question by showing that this is a (weakly) NP-hard problem. 
On the positive side, we provide polynomial-time deterministic mechanisms that approximate with arbitrary precision any point of the trade-off between these two fundamental objectives for the case of two bidders, even when the valuations are correlated arbitrarily. The major problem left open by our work is whether there is such an algorithm for three or more bidders with independent valuation distributions. Daniel M. Kane, Kurt Mehlhorn, Thomas Sauerwald and He Sun. Counting Arbitrary Subgraphs in Data Streams Abstract: We study the subgraph counting problem in data streams. We provide the first non-trivial estimator for approximately counting the number of occurrences of an arbitrary subgraph H of constant size in a (large) graph G. Our estimator works in the turnstile model, i.e., it can handle both edge-insertions and edge-deletions, and is applicable in a distributed setting. Prior to this work, estimators were known only for a few non-regular graphs, and only in the case of edge-insertions, leaving the problem of counting general subgraphs in the turnstile model wide open. We further demonstrate the applicability of our estimator by analyzing its concentration for several graphs H and the case where G is a power law graph. Piotr Krysta and Berthold Voecking. Online Mechanism Design (Randomized Rounding on the Fly) Abstract: We study incentive compatible mechanisms for combinatorial auctions (CAs) in an online model with sequentially arriving bidders. We distinguish two kinds of arrivals: bidders might appear in an order specified by a random permutation in analogy to the secretary problem, or in an order specified by an adversary as in a worst-case competitive analysis. Previously known online mechanisms for CAs assume that each item is available at a certain multiplicity $b > 1$. Typically, one assumes $b =\Omega(\log m)$, where $m$ is the number of different items.
We present the first online mechanisms for CAs guaranteeing competitiveness without assumptions about the minimum multiplicity. In particular, our analysis covers the standard CAs with $b=1$. We introduce an online variant of oblivious randomized rounding enabling us to prove competitive ratios that are close to, or even beat, the best known offline approximation factors for various CA settings. Our mechanisms are universally truthful. They are interesting not only for online but also for offline optimization, as they have a polynomially bounded running time for valuations given by demand oracles. For example, we achieve a competitive ratio of $O(\log m)$ with respect to the social welfare for CAs with submodular (or more general XOS) valuations when considering bidders in random order. This beats the best previously known offline approximation factor for this important class of valuations by a factor of $O(\log \log m)$. Ning Chen, Xiaotie Deng, Hongyang Zhang and Jie Zhang. Incentive Ratios of Fisher Markets Abstract: We consider a Fisher market where agents submit their own utility functions and money endowments to the market maker, who, upon receiving every agent's report, derives market equilibrium prices and allocations of the items. While agents may benefit by misreporting their private information, we show that the percentage of improvement by a unilateral strategic play, called the incentive ratio, is rather limited---it is less than 2 for linear markets and at most $e^{1/e}\approx 1.445$ for Cobb-Douglas markets. We further prove that both ratios are tight. Leonid Barenboim. On the Locality of NP-Complete Problems Abstract: We consider the distributed message-passing {LOCAL} model. In this model a communication network is represented by a graph where vertices host processors, and communication is performed over the edges. Computation proceeds in synchronous rounds.
The running time of an algorithm is the number of rounds from the beginning until all vertices terminate. An algorithm is called {\em local} if it terminates within a constant number of rounds. The question of what problems can be computed locally was raised by Naor and Stockmeyer \cite{NS93} in their seminal paper in STOC'93. Since then the quest for problems with local algorithms, and for problems that cannot be computed locally, has become a central research direction in the field of distributed algorithms \cite{KMW04,KMW10,LOW08,PR01}. The currently known problems that can be solved locally have simple sequential algorithms. On the other hand, many problems with simple sequential algorithms cannot be solved locally \cite{KMW04,KMW10}. We devise the first local algorithm for an {NP-complete problem}. Specifically, we show that O(n^{1/2 + \epsilon} \cdot \chi)-coloring can be computed within O(1) rounds, where \epsilon > 0 is an arbitrarily small constant, and \chi is the chromatic number of the input graph. (This problem was shown to be NP-complete in \cite{Z07}.) On the way to this result we devise a constant-time algorithm for computing (O(1), O(n^{1/2 + \epsilon}))-network-decompositions. Network-decompositions were introduced by Awerbuch et al. \cite{AGLP89}, and are very useful for computing various distributed problems. The best previously-known algorithm for network-decomposition has a polylogarithmic running time (but is applicable for a wider range of parameters) \cite{LS93}. We also devise a \Delta^{1 + \epsilon}-coloring algorithm for graphs with sufficiently large maximum degree \Delta that runs within O(1) rounds. It improves the best previously-known result for this family of graphs, which is O(\log^* n) \cite{SW10}. Khaled Elbassioni.
A QPTAS for $\eps$-Envy-Free Profit-Maximizing Pricing on Line Graphs Abstract: We consider the problem of pricing edges of a line graph so as to maximize the profit made from selling intervals to single-minded customers. An instance is given by a set $E$ of $n$ edges with a limited supply for each edge, and a set of $m$ clients, where each client $j$ specifies one interval of $E$ she is interested in and a budget $B_j$ which is the maximum price she is willing to pay for that interval. An envy-free pricing is one in which every customer is allocated a (possibly empty) interval maximizing her utility. Recently, Grandoni and Rothvoss (SODA 2011) gave a polynomial-time approximation scheme (PTAS) for the unlimited supply case with running time $(nm)^{O((\frac{1}{\eps})^{\frac{1}{\eps}})}$. By utilizing the known hierarchical decomposition of doubling metrics, we give a PTAS with running time $(nm)^{O(\frac{1}{\eps^2})}$. We then consider the limited supply case, and the notion of $\eps$-envy-free pricing in which a customer gets an allocation maximizing her utility within an additive error of $\eps$. For this case we develop an approximation scheme with running time $(nm)^{O(\frac{\log^4 \max_e H_e}{\eps^3})}$, where $H_e=\frac{B_{\max}(e)}{B_{\min}(e)}$ is the maximum ratio of the budgets of any two customers demanding edge $e$. This yields a PTAS in the uniform budget case, and a quasi-PTAS for the general case. Reuven Bar-Yehuda, Erez Kantor, Shay Kutten and Dror Rawitz. Growing Half-Balls: Minimizing Storage and Communication Costs in CDNs Abstract: The Dynamic Content Distribution problem addresses the trade-off between storage and delivery costs in modern virtual Content Delivery Networks (CDNs). That is, a video file can be stored in multiple places so that the request of each user is served from a location near the user. This minimizes the delivery costs, but is associated with a storage cost. This problem is NP-hard even in grid networks.
In this paper, we present a constant factor approximation algorithm for grid networks. We also present an $O(\log \diam)$-competitive algorithm, where $\diam$ is the normalized diameter of the network, for general networks with general metrics. We show a matching lower bound by using a reduction from online undirected \textsc{Steiner tree}. Our algorithms use a rather intuitive approach that has an elegant representation in geometric terms. Nishanth Chandran, Juan Garay and Rafail Ostrovsky. Edge Fault Tolerance on Sparse Networks Abstract: Byzantine agreement, which requires $n$ processors (nodes) in a completely connected network to agree on a value dependent on their initial values and despite the arbitrary, possibly malicious behavior of some of them, is perhaps the most popular paradigm in fault-tolerant distributed systems. However, partially connected networks are far more realistic than fully connected networks, which led Dwork, Peleg, Pippenger and Upfal [STOC'86] to formulate the notion of \emph{almost-everywhere (a.e.) agreement}, which shares the same aim as the original problem, except that now not all pairs of nodes are connected by reliable and authenticated channels. In such a setting, agreement amongst all correct nodes cannot be guaranteed due to possibly poor connectivity with other correct nodes, and some of them must be given up. The number of such nodes is a function of the underlying communication graph and the adversarial set of nodes. In this work we introduce the notion of \emph{almost-everywhere agreement with edge corruptions}, which is exactly the same problem as described above, except that we additionally allow the adversary to completely control some of the communication channels between two correct nodes---i.e., to ``corrupt'' edges in the network. While it is easy to see that an a.e. agreement protocol for the original node-corruption model is also an a.e.
agreement protocol tolerating edge corruptions (albeit for a reduced fraction of edge corruptions with respect to the bound for node corruptions), no polynomial-time protocol is known in the case where a constant fraction of the edges can be corrupted and the degree of the network is sub-linear. We make progress on this front by constructing graphs of degree $O(n^\epsilon)$ (for arbitrary constant $0<\epsilon<1$) on which we can run a.e. agreement protocols tolerating a constant fraction of adversarial edges. The number of given-up nodes in our construction is $\mu n$ (for some constant $0<\mu<1$ that depends on the fraction of corrupted edges), which is asymptotically optimal. We remark that allowing an adversary to corrupt edges in the network can be seen as taking a step closer towards guaranteeing a.e. agreement amongst honest nodes even on adversarially chosen communication networks, as opposed to earlier results where the communication graph is specially constructed. In addition, building upon the work of Garay and Ostrovsky~[Eurocrypt'08], we obtain a protocol for {\em a.e. secure computation} tolerating edge corruptions on the above graphs. Elias Koutsoupias and Katia Papakonstantinopoulou. Contention issues in congestion games Abstract: We study time-dependent strategies for playing congestion games. The players can time their participation in the game with the hope that fewer players will compete for the same resources. We study two models: the boat model, in which the latency of a player is influenced only by the players that start at the same time, and the conveyor belt model, in which the latency of a player is affected by the players that share the system, even if they started earlier or later; unlike standard congestion games, in these games the order of the edges in the paths affects the latency of the players.
We characterize the symmetric Nash equilibria of the games with affine latencies of networks of parallel links in the boat model and we bound their price of anarchy and stability. For the conveyor belt model, we characterize the symmetric Nash equilibria of two players on parallel links. We also show that the games of the boat model are themselves congestion games. The same is true for the games of two players for the conveyor belt model; however, for this model the games of three or more players are not in general congestion games and may not have pure equilibria. David Peleg, Liam Roditty and Elad Tal. Distributed Algorithms for Network Diameter and Girth Abstract: This paper considers the problem of computing the diameter $D$ and the girth $g$ of an $n$-node network in the CONGEST distributed model. In this model, in each synchronous round, each vertex can transmit a different short (say, $O(\log n)$ bits) message to each of its neighbors. We present a distributed algorithm that computes the diameter of the network in $O(n)$ rounds. We also present two distributed approximation algorithms. The first computes a $3/2$ multiplicative approximation of the diameter in $\Ot(D\sqrt n)$ rounds. The second computes a $2-1/g$ multiplicative approximation of the girth in $\Ot(D+\sqrt{gn})$ rounds. Recently, Frischknecht, Holzer and Wattenhofer~\cite{FHW12} considered these problems in the CONGEST model but from the perspective of lower bounds. They showed an $\OMt(n)$-round lower bound for exact diameter computation. For diameter approximation, they showed a lower bound of $\OMt(\sqrt n)$ rounds for getting a multiplicative approximation of $3/2-\eps$. Both lower bounds hold for networks with constant diameter. For girth approximation, they showed a lower bound of $\OMt(\sqrt n)$ rounds for getting a multiplicative approximation of $2-\eps$ on a network with constant girth. Our exact algorithm for computing the diameter matches their lower bound.
Our diameter and girth approximation algorithms almost match their lower bounds for constant diameter and for constant girth. Navendu Jain, Ishai Menache, Joseph Naor and F. Bruce Shepherd. Topology-Aware VM Migration in Bandwidth Oversubscribed Datacenter Networks Abstract: Virtualization can deliver significant benefits for cloud computing by enabling VM migration to improve utilization, balance load and alleviate hotspots. While several mechanisms exist to migrate VMs, few efforts have focused on developing policies to minimize migration costs in a multi-rooted tree data center network. The problem of VM migration in a data center is a variant of well-studied problems: (1) the maximum throughput tree routing problem to route VMs in a bandwidth oversubscribed network, and (2) variants of the matching problem---quadratic assignment and demand matching---for meeting load constraints at overloaded and underloaded servers. While these problems have been individually studied, a new fundamental challenge is to simultaneously handle packing constraints of server load and tree edge capacities. This paper proposes novel algorithms for the objective of ``relieving'' a maximal number of hot servers. Our first approach considers a related problem of migrating a maximal number of VMs, which in some scenarios can serve as a plausible solution for the original objective. We provide an $8$-approximation algorithm for the problem. Our second approach directly targets the original objective, while assuming that the network topology is a directed tree. The idea is to first formulate an LP with fractional migration and then round it to a second LP having a totally unimodular constraint system. The latter LP guarantees integral solutions with provable performance guarantees. Since the actual network topology is undirected, we construct an iterative heuristic which imposes edge directionality in a clever way.
We conclude our work by evaluating this heuristic on realistic models of data center workloads, demonstrating that the algorithm is time-responsive in computing placement and migration decisions while scaling to large systems. Andrew Berns, Sriram Pemmaraju and James Hegeman. Super-Fast Distributed Algorithms for Metric Facility Location Abstract: This paper presents an O(log log n (log^*n))-round, O(1)-approximation algorithm for the metric facility location problem on a clique network. Though metric facility location has been considered by several researchers in low-diameter settings, this is the first sub-logarithmic algorithm for the problem that yields an O(1)-approximation. We assume the standard CONGEST model, which is a synchronous message-passing model in which each node in a size-n network can send a message of size O(log n) along each incident communication link in each round. Since facility location is specified by Theta(n^2) pieces of information, any fast solution to the problem needs to be truly distributed. Our paper makes three main technical contributions. First, we show a new lower bound for metric facility location. Next, we demonstrate a reduction of the distributed facility location problem to the problem of computing an O(1)-ruling set on an appropriate spanning subgraph. Finally, we present a "super-fast" algorithm to compute a 2-ruling set by using a combination of randomized and deterministic sparsification. Luca Gugelmann, Konstantinos Panagiotou and Ueli Peter. Hyperbolic Random Graphs: Degree Sequence and Clustering Abstract: Recently, Papadopoulos, Krioukov, Boguñá and Vahdat [Infocom'10] introduced a random geometric graph model that is based on hyperbolic geometry. The authors argued empirically and by some preliminary mathematical analysis that the resulting graphs have many of the desired properties for models of large real-world graphs, such as high clustering and heavy tailed degree distributions. 
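For readers unfamiliar with the model just mentioned, the following toy sampler generates a random hyperbolic graph in the standard formulation (points in a hyperbolic disk of radius R with uniform angles and radial density proportional to sinh(alpha*r), an edge whenever two points are within hyperbolic distance R). The parameter choices here are illustrative assumptions, not values from the abstract:

```python
import math
import random

def sample_hyperbolic_graph(n, R, alpha=1.0, seed=0):
    """Sample n points in a hyperbolic disk of radius R and connect
    pairs whose hyperbolic distance is at most R."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        # Inverse-CDF sampling: CDF(r) = (cosh(alpha*r)-1)/(cosh(alpha*R)-1).
        u = rng.random()
        r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
        pts.append((r, theta))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            (r1, t1), (r2, t2) = pts[i], pts[j]
            dt = math.pi - abs(math.pi - abs(t1 - t2))  # angle gap in [0, pi]
            # Hyperbolic law of cosines for the distance between the points.
            ch = (math.cosh(r1) * math.cosh(r2)
                  - math.sinh(r1) * math.sinh(r2) * math.cos(dt))
            if math.acosh(max(ch, 1.0)) <= R:
                edges.append((i, j))
    return pts, edges
```

Vertices that land near the disk center are hyperbolically close to almost everyone, which is the geometric mechanism behind the heavy-tailed degree sequences the abstract refers to.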
By computing explicitly a maximum likelihood fit of the Internet graph, they demonstrated impressively that this model is adequate for reproducing the structure of such graphs with high accuracy. In this work we initiate the rigorous study of random hyperbolic graphs. We compute exact asymptotic expressions for the expected number of vertices of degree k for all k up to the maximum degree and provide small probabilities for large deviations. We also prove a constant lower bound for the clustering coefficient. In particular, our findings confirm rigorously that the degree sequence follows a power-law distribution with controllable exponent and that the clustering is nonvanishing. Marco Chiesa, Giuseppe Di Battista, Thomas Erlebach and Maurizio Patrignani. Computational Complexity of Traffic Hijacking under BGP and S-BGP Abstract: Harmful Internet hijacking incidents highlight the fragility of the Border Gateway Protocol (BGP), which is used to exchange routing information between Autonomous Systems (ASes). As proved by recent research contributions, even S-BGP, the secure variant of BGP that is being deployed, is not fully able to blunt traffic attraction attacks. Given a traffic flow between two ASes, we study how difficult it is for a malicious AS to devise a strategy for hijacking or intercepting that flow. We show that this problem marks a sharp difference between BGP and S-BGP. Namely, while it is solvable, under reasonable assumptions, in polynomial time for the type of attacks that are usually performed in BGP, it is NP-hard for S-BGP. Our study of traffic attraction strategies has several by-products. As an example, we solve a problem left open in the literature, determining when performing a hijacking in S-BGP is equivalent to performing an interception. Adam Groce, Jonathan Katz, Aishwarya Thiruvengadam and Vassilis Zikas.
Byzantine Agreement with a Rational Adversary Abstract: Researchers in cryptography have frequently found it beneficial to consider, instead of a traditional worst-case adversary, a rational adversary that behaves in a more predictable way. We apply this model to the previously unconsidered case of Byzantine agreement (BA). We define security for both flavours of BA, i.e., consensus and broadcast, for a natural class of utilities. Surprisingly, we show that many of the widely known results on BA, e.g., the equivalence of broadcast and consensus or the impossibility of consensus tolerating t\geq n/2 corruptions, do not transfer---per se---to the rational model. We then study the question of feasibility of information-theoretic (both perfect and statistical) BA for the cases where the parties have complete or partial knowledge of the adversary's utility function. For the first case we describe a BA protocol tolerating t<n corruptions in the plain model, i.e., without assuming a setup, that is even perfectly secure. For the second case, we prove tight bounds that depend on the adversary's preference of agreement over disagreement. All our suggested protocols are more efficient than corresponding protocols for traditional, i.e., non-rational, BA. Kshipra Bhawalkar, Jon Kleinberg, Kevin Lewi, Tim Roughgarden and Aneesh Sharma. Preventing Unraveling in Social Networks: The Anchored k-Core Problem Abstract: We consider a model of user engagement in social networks, where each player incurs a cost of remaining engaged but derives a benefit proportional to the number of its engaged neighbors. The natural equilibrium of this model corresponds to the k-core of the social network --- the maximal induced subgraph with minimum degree at least k. We study the problem of "anchoring" a small number of vertices to maximize the size of the corresponding anchored k-core --- the maximal induced subgraph in which every non-anchored vertex has degree at least k. 
This problem corresponds to preventing "unraveling" --- a cascade of iterated withdrawals. We provide polynomial-time algorithms for general graphs with k=2, and for bounded-treewidth graphs with arbitrary k. We prove strong non-approximability results for general graphs and k >= 3. Page contact: Yvonne Colmer Last revised: Thu 26 Apr 2012
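The unraveling cascade in the anchored k-core abstract above is easy to simulate: repeatedly drop any non-anchored vertex whose remaining degree is below k. A minimal sketch (the example graph and anchor set are invented for illustration):

```python
def anchored_k_core(adj, k, anchors=frozenset()):
    """Return the anchored k-core: iteratively remove non-anchored
    vertices of degree < k until every remaining non-anchored vertex
    has degree >= k. With no anchors this is the ordinary k-core."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if v in anchors:
                continue
            deg = sum(1 for u in adj[v] if u in alive)
            if deg < k:
                alive.discard(v)
                changed = True
    return alive

# A path a-b-c-d: its 2-core is empty, since the endpoints unravel
# one after another...
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(anchored_k_core(path, 2))                      # set()
# ...but anchoring the two endpoints keeps every vertex engaged.
print(anchored_k_core(path, 2, anchors={"a", "d"}))  # all four vertices
```

The path example is exactly the unraveling phenomenon: each withdrawal lowers a neighbor's degree below the threshold and triggers the next withdrawal, while a few anchored vertices stop the cascade.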
A theoretical approach to understanding rumor propagation dynamics in a spatially heterogeneous environment
Linhe Zhu 1,*, Wenshan Liu 1,2 and Zhengdi Zhang 1
1. School of Mathematical Sciences, Jiangsu University, Zhenjiang, 212013, China
2. School of Mathematical Sciences, Nanjing Normal University, Nanjing, 210023, China
* Corresponding author: Linhe Zhu
Received March 2020; Revised July 2020; Published September 2020
Fund Project: The first author is supported by National Natural Science Foundation of China (Grant No. 12002135), China Postdoctoral Science Foundation (Grant No. 2019M661732), Natural Science Foundation of Jiangsu Province (Grant No. BK20190836) and Natural Science Research of Jiangsu Higher Education Institutions of China (Grant No. 19KJB110001). The third author is supported by National Natural Science Foundation of China (Grant No. 11872189).
Abstract: Most previous work on rumor propagation focuses either on ordinary differential equations with only a temporal dimension or on partial differential equations (PDEs) with spatially independent parameters. Little attention has been given to rumor propagation models in a spatiotemporally heterogeneous environment. This paper investigates an SCIR reaction-diffusion rumor propagation model with a general nonlinear incidence rate in both heterogeneous and homogeneous environments. In the spatially heterogeneous case, the well-posedness of global solutions is established first. The basic reproduction number $ R_0 $ is introduced, which reveals the threshold-type dynamics of rumor propagation: if $ R_0 < 1 $, the rumor-free steady state is globally asymptotically stable, while if $ R_0 > 1 $, the rumor is uniformly persistent. In the spatially homogeneous case, after introducing a time delay, the stability properties are studied extensively.
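The threshold behavior described in the abstract can be explored numerically with a method-of-lines discretization. The sketch below integrates a generic two-compartment reaction-diffusion system as a stand-in (this is not the paper's SCIR model, and every parameter value here is an invented illustration):

```python
import numpy as np

def simulate(beta, D=0.01, T=200.0, nx=50, dt=0.01):
    """Explicit method-of-lines for the toy system on [0, 1]:
        S_t = D*S_xx + mu*(1 - S) - beta*S*I,
        I_t = D*I_xx + beta*S*I - gamma*I,
    with Neumann (no-flux) boundaries. Near S = 1 the threshold is
    roughly beta/gamma. Returns the final maximum rumor density."""
    mu, gamma = 0.1, 0.2
    dx = 1.0 / (nx - 1)
    S = np.ones(nx)
    I = 0.01 * np.ones(nx)
    for _ in range(int(T / dt)):
        lapS = np.zeros(nx)
        lapI = np.zeros(nx)
        lapS[1:-1] = (S[2:] - 2 * S[1:-1] + S[:-2]) / dx**2
        lapI[1:-1] = (I[2:] - 2 * I[1:-1] + I[:-2]) / dx**2
        # Ghost-node treatment of the no-flux boundary condition.
        lapS[0] = 2 * (S[1] - S[0]) / dx**2
        lapS[-1] = 2 * (S[-2] - S[-1]) / dx**2
        lapI[0] = 2 * (I[1] - I[0]) / dx**2
        lapI[-1] = 2 * (I[-2] - I[-1]) / dx**2
        S = S + dt * (D * lapS + mu * (1 - S) - beta * S * I)
        I = I + dt * (D * lapI + beta * S * I - gamma * I)
    return I.max()

# Below threshold (beta/gamma < 1) the rumor dies out; above, it persists.
print(simulate(beta=0.1) < 1e-3)   # True
print(simulate(beta=0.5) > 1e-2)   # True
```

With spatially constant coefficients the threshold collapses to a ratio of rates; letting beta vary with x is the kind of heterogeneity for which the basic reproduction number becomes a genuinely nonlocal quantity.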
Finally, numerical simulations are presented to illustrate the validity of the theoretical analysis, and the influence of spatial heterogeneity on rumor propagation is further demonstrated.
Keywords: Spatial heterogeneity, Reaction-diffusion model, Basic reproduction number, Stability, Uniform persistence.
Mathematics Subject Classification: Primary: 35K57; Secondary: 92D25.
Citation: Linhe Zhu, Wenshan Liu, Zhengdi Zhang. A theoretical approach to understanding rumor propagation dynamics in a spatially heterogeneous environment. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020274
Figure 1. The asymptotic behavior of the solution of system (4)
Figure 2. The uniform persistence of rumor propagation
Figure 3. Projection diagram in the $ tx $-plane
Figure 4. Distribution of rumor collectors and rumor-infective users at $ t = 0.5 $ for different diffusion coefficients $ D = 0.001, 1, 5 $
Figure 5. Two incidence functions
Figure 6. Contour surfaces of $ \mathcal{R}^0 $ with consideration of $ \beta,\theta,A\in[0,1] $
Figure 7.
(a) The density of susceptible users. (b) The density of collectors. (c) The density of infective users. (d) The rumor-free equilibrium point $ E_0 $ is globally asymptotically stable Figure 8. (a) The density of susceptible users. (b) The density of collectors. (c) The density of infective users. (d) The rumor-prevailing equilibrium point $ E^\star $ is locally asymptotically stable Klemens Fellner, Jeff Morgan, Bao Quoc Tang. Uniform-in-time bounds for quadratic reaction-diffusion systems with mass dissipation in higher dimensions. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 635-651. doi: 10.3934/dcdss.2020334 Weiwei Liu, Jinliang Wang, Yuming Chen. Threshold dynamics of a delayed nonlocal reaction-diffusion cholera model. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020316 Izumi Takagi, Conghui Zhang. Existence and stability of patterns in a reaction-diffusion-ODE system with hysteresis in non-uniform media. Discrete & Continuous Dynamical Systems - A, 2020 doi: 10.3934/dcds.2020400 Hideki Murakawa. Fast reaction limit of reaction-diffusion systems. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1047-1062. doi: 10.3934/dcdss.2020405 H. M. Srivastava, H. I. Abdel-Gawad, Khaled Mohammed Saad. Oscillatory states and patterns formation in a two-cell cubic autocatalytic reaction-diffusion model subjected to the Dirichlet conditions. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020433 Guillaume Cantin, M. A. Aziz-Alaoui. Dimension estimate of attractors for complex networks of reaction-diffusion systems applied to an ecological model. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020283 Abdelghafour Atlas, Mostafa Bendahmane, Fahd Karami, Driss Meskine, Omar Oubbih. A nonlinear fractional reaction-diffusion system applied to image denoising and decomposition. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020321 Masaharu Taniguchi. 
Axisymmetric traveling fronts in balanced bistable reaction-diffusion equations. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3981-3995. doi: 10.3934/dcds.2020126 Maho Endo, Yuki Kaneko, Yoshio Yamada. Free boundary problem for a reaction-diffusion equation with positive bistable nonlinearity. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3375-3394. doi: 10.3934/dcds.2020033 Shin-Ichiro Ei, Shyuh-Yaur Tzeng. Spike solutions for a mass conservation reaction-diffusion system. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3357-3374. doi: 10.3934/dcds.2020049 Chihiro Aida, Chao-Nien Chen, Kousuke Kuto, Hirokazu Ninomiya. Bifurcation from infinity with applications to reaction-diffusion systems. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3031-3055. doi: 10.3934/dcds.2020053 Leilei Wei, Yinnian He. A fully discrete local discontinuous Galerkin method with the generalized numerical flux to solve the tempered fractional reaction-diffusion equation. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020319 Lin Shi, Xuemin Wang, Dingshi Li. Limiting behavior of non-autonomous stochastic reaction-diffusion equations with colored noise on unbounded thin domains. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5367-5386. doi: 10.3934/cpaa.2020242 Shin-Ichiro Ei, Hiroshi Ishii. The motion of weakly interacting localized patterns for reaction-diffusion systems with nonlocal effect. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 173-190. doi: 10.3934/dcdsb.2020329 Nabahats Dib-Baghdadli, Rabah Labbas, Tewfik Mahdjoub, Ahmed Medeghri. On some reaction-diffusion equations generated by non-domiciliated triatominae, vectors of Chagas disease. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2021004 El Haj Laamri, Michel Pierre. Stationary reaction-diffusion systems in $ L^1 $ revisited. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 455-464. 
doi: 10.3934/dcdss.2020355 Chungang Shi, Wei Wang, Dafeng Chen. Weak time discretization for slow-fast stochastic reaction-diffusion equations. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021019 Gheorghe Craciun, Jiaxin Jin, Casian Pantea, Adrian Tudorascu. Convergence to the complex balanced equilibrium for some chemical reaction-diffusion systems with boundary equilibria. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1305-1335. doi: 10.3934/dcdsb.2020164 Mohammad Ghani, Jingyu Li, Kaijun Zhang. Asymptotic stability of traveling fronts to a chemotaxis model with nonlinear diffusion. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021017 HTML views (143) Linhe Zhu Wenshan Liu Zhengdi Zhang
CommonCrawl
Universality of Power-of-$d$ Load Balancing in Many-Server Systems

Debankur Mukherjee
Department of Mathematics and Computer Science, 5612 AZ Eindhoven

Wednesday, 21 December 2016, 16:00 to 17:00
AG-80
Sandeep K Juneja

We consider a system of $N$~parallel single-server queues with unit exponential service rates and a single dispatcher where tasks arrive as a Poisson process of rate $\lambda(N)$. When a task arrives, the dispatcher assigns it to a server with the shortest queue among $d(N)$ randomly selected servers ($1 \leq d(N) \leq N$). This load balancing strategy is referred to as a JSQ($d(N)$) scheme, marking that it subsumes the celebrated Join-the-Shortest Queue (JSQ) policy as a crucial special case for $d(N) = N$. We construct a stochastic coupling to bound the difference in the queue length processes between the JSQ policy and a scheme with an arbitrary value of $d(N)$. We use the coupling to derive the fluid limit in the regime where $\lambda(N) / N \to \lambda < 1$ as $N \to \infty$ with $d(N) \to\infty$, along with the associated fixed point. The fluid limit turns out not to depend on the exact growth rate of $d(N)$, and in particular coincides with that for the ordinary JSQ policy. We further leverage the coupling to establish that the diffusion limit in the critical regime where $(N - \lambda(N)) / \sqrt{N} \to \beta > 0$ as $N \to \infty$ with $d(N)/(\sqrt{N} \log (N))\to\infty$ corresponds to that for the JSQ policy. These results indicate that the optimality of the JSQ policy can be preserved at the fluid-level and diffusion-level while reducing the overhead by nearly a factor O($N$) and O($\sqrt{N}/\log(N)$), respectively.

https://www.tcs.tifr.res.in/events/universality-power-d-load-balancing-many-server-systems
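The JSQ($d$) dispatch rule described in the abstract is easy to simulate. The sketch below is illustrative only: the function names and parameters (`jsq_d_assign`, `simulate`, the no-departure simplification) are mine, not the speaker's; in particular, service completions are omitted, so queue lengths only grow, which is enough to see the balancing effect of the sampling rule.

```python
import random

def jsq_d_assign(queues, d, rng=random):
    """Power-of-d choice: sample d servers uniformly at random
    (without replacement) and return the index of the sampled
    server with the shortest queue."""
    sampled = rng.sample(range(len(queues)), d)
    return min(sampled, key=lambda i: queues[i])

def simulate(n_servers, n_tasks, d, seed=0):
    """Toy simulation (my simplification): tasks arrive one at a
    time and there are no departures, so queue lengths only grow."""
    rng = random.Random(seed)
    queues = [0] * n_servers
    for _ in range(n_tasks):
        queues[jsq_d_assign(queues, d, rng)] += 1
    return queues

# With d = N the rule reduces to plain Join-the-Shortest-Queue, so
# with no departures the final loads are perfectly balanced.
balanced = simulate(n_servers=10, n_tasks=100, d=10)
```

Taking smaller $d$ trades balance quality for sampling overhead, which is exactly the trade-off quantified in the talk.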
CommonCrawl
\begin{document} \begin{abstract} We study the problem of extending a state on an abelian $C^*$-subalgebra to a tracial state on the ambient $C^*$-algebra. We propose an approach that is well-suited to the case of regular inclusions, in which there is a large supply of normalizers of the subalgebra. Conditional expectations onto the subalgebra give natural extensions of a state to the ambient $C^*$-algebra; we prove that these extensions are tracial states if and only if certain invariance properties of both the state and conditional expectations are satisfied. In the example of a groupoid $C^*$-algebra, these invariance properties correspond to invariance of associated measures on the unit space under the action of bisections. Using our framework, we are able to completely describe the tracial state space of a Cuntz-Krieger graph algebra. Along the way we introduce certain operations called graph tightenings, which both streamline our description and provide connections to related finiteness questions in graph $C^*$-algebras. Our investigation has close connections with the so-called unique state extension property and its variants. \end{abstract} \title{Traces arising from regular inclusions} \maketitle \section*{Introduction} A {\em trace\/} on a complex algebra $A$ is a linear functional $\phi:A\to\mathbb{C}$ satisfying $\phi(xy)=\phi(yx)$ for all $x,y \in A$. If $A$ is a $C^*$-algebra, and the trace $\phi$ is also a state, it is simply called a {\em tracial state}. In this paper we study tracial states on $C^*$-algebras $A$ by reconstructing them from their restrictions to {\em abelian\/} subalgebras $B\subset A$. The material is organized as follows. In Section 1 our approach focuses on the case when a conditional expectation $\mathbb{E}: A \to B$ exists and the ``candidate'' tracial state on $A$ is $\phi\circ \mathbb{E}$, where $\phi \in S(B)$.
In other words, we focus on states on $A$ \textcolor{black}{that} factor through $\mathbb{E}$; equivalently, states that vanish on $\text{ker}\,\mathbb{E}$. \textcolor{black}{In order to characterize such states, we identify a certain {\em invariance\/} condition on $\phi$, coupled with a suitable {\em normalization\/} condition on $\mathbb{E}$ (both conditions employ {\em normalizers\/} of $B$).} Section 2 specializes our investigation to the case of \'{e}tale groupoid $C^*$-algebras, where the natural abelian $C^*$-algebra to consider is $C_0(G^{(0)})$ -- the $C^*$-algebra of continuous functions that vanish at $\infty$ on the unit space $G^{(0)}$. In this framework, the invariance conditions treated in Section 1 become measure-theoretical in nature. In Section 3 we explore the link between the invariance and normalization conditions from Section 1 and certain state extension properties. \textcolor{black}{When the so-called \emph{extension property} holds, the tracial state space of $A$ can be completely described by its restrictions to $B$.} The paper concludes with Section 4, where the case of {\em graph $C^*$-algebras\/} is fully investigated, using the results proved in the previous sections. Given some directed graph $E$, our main goal is the complete parametrization of the tracial state space of the associated $C^*$-algebra $C^*(E)$, solely in graph theoretical language. Earlier work in this direction (\cite{Tomforde1}, \cite{PaskRen1}) identified the notion of {\em graph traces\/} as a major ingredient. In many instances, graph traces are not sufficient for exhausting all tracial states, and our analysis shows exactly what additional structure is necessary: {\em cyclical tags on graph traces}. The usage of cyclical tags alone, although necessary, is still insufficient for describing all tracial states on $C^*(E)$; however, this deficiency can be fixed using graph operations called {\em tightenings}.
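Before entering the technical development, it may help to keep in mind the simplest instance of this reconstruction scheme; the following illustration is standard and is not one of the paper's own examples.

```latex
% Illustrative example (standard; not taken from the paper's text).
Consider $A=M_n(\mathbb{C})$ with $B=D_n$, the abelian subalgebra of
diagonal matrices. The inclusion $D_n\subset M_n(\mathbb{C})$ is
regular: every matrix unit $e_{ij}$ normalizes $D_n$. The map
\[
\mathbb{E}:M_n(\mathbb{C})\longrightarrow D_n,\qquad
\mathbb{E}(a)=\sum_{i=1}^n e_{ii}\,a\,e_{ii},
\]
which retains the diagonal of $a$ and discards all off-diagonal
entries, is a conditional expectation, and the normalized trace
satisfies $\operatorname{tr}=(\operatorname{tr}|_{D_n})\circ\mathbb{E}$.
Thus the (unique) tracial state on $M_n(\mathbb{C})$ is recovered from
a state on the abelian subalgebra $D_n$ by composing with $\mathbb{E}$.
```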
\section{Invariant states on abelian C*-subalgebras} Following \cite{Kumjian} \textcolor{black}{and} \cite{Renault3}, given a $C^*$-algebra inclusion $B \subset A$, an element $n \in A$ is said to \emph{normalize} $B$ if $nBn^* \cup n^* B n \subset B$. The collection of such normalizers is denoted by $N_A(B)$, or simply $N(B)$ when there is no danger of confusion. Clearly $N(B)$ is closed under products and adjoints, and contains $B$. A $C^*$-inclusion $B\subset A$ is said to be {\em regular}, if $N(B)$ generates $A$ as a $C^*$-algebra. (Equivalently, if the span of $N(B)$ is dense in $A$.) Most of the $C^*$-algebra inclusions $B\subset A$ we are going to deal with in this paper are {\em non-degenerate}, in the sense that $B$ contains an approximate unit for $A$. (Of course, if $A$ is unital, then non-degeneracy of $B$ is equivalent to the fact that $B$ contains the unit of $A$.) \textcolor{black}{Note that, }if $B \subset A$ is a non-degenerate $C^*$-subalgebra, then $n^* n$, $nn^* \in B$ for any $n \in N(B)$. \begin{definition}\label{inv-state-def} Assume the inclusion $B\subset A$ is non-degenerate and let $\phi$ be a state on $B$. \begin{enumerate} \item Given $n \in N(B)$, we say that $\phi$ is \emph{$n$-invariant} if \begin{equation} \forall\, b\in B:\,\,\,\phi(nbn^*)=\phi(n^*nb). \label{normalized-phi-def} \end{equation} \item Given $N_0 \subset N(B)$, we say that $\phi$ is \emph{$N_0$-invariant} if $\phi$ is $n$-invariant for all $n \in N_0$. \item Lastly, if $\phi$ is $N(B)$-invariant, then we simply say that $\phi$ is \emph{fully invariant}. \end{enumerate} The collection of fully invariant states on $B \subset A$ is denoted by $S^{\operatorname{inv}}(B)$. \end{definition} \begin{mycomment} The restriction $\tau|_B$ of any tracial state $\tau\in T(A)$ is clearly a fully invariant state on $B$, so we have an affine $w^*$-continuous map \begin{equation} T(A)\ni\tau\longmapsto \tau|_B\in S^{\text{inv}}(B).
\label{T-Sinv} \end{equation} This paper \textcolor{black}{aims at} understanding when the map \eqref{T-Sinv} is either surjective, or injective, or both. \end{mycomment} The most important features of normalizers and invariant states are collected in Proposition \ref{semigroup} below. Both in its proof and elsewhere in the paper, we are going to employ the following well-known technical results and notations. \begin{fact}\label{fxx} Assume $x$ is an element in some $C^*$-algebra $A$. \begin{itemize} \item[(i)] For any function $f\in C\left([0,\infty)\right)$, the elements $f(xx^*),\,f(x^*x)\in \tilde{A}$, given by continuous functional calculus, satisfy the equality \begin{equation} xf(x^*x)=f(xx^*)x. \label{fxx=} \end{equation} \item[(ii)] When specializing to the $k^{\text{th}}$ root functions $f(t)=t^{1/k}$, we also have the equalities \begin{equation} \lim_{k\to\infty}(xx^*)^{1/k}x= \lim_{k\to\infty}x(x^*x)^{1/k}=x. \end{equation} \item[(iii)] If we fix a double sequence $(f_k^\ell)_{k,\ell=1}^\infty$ of polynomials in one variable, such that \begin{equation} \forall\,k\in\mathbb{N}:\,\,\,\lim_{\ell\to\infty}tf_k^\ell(t)=t^{1/k}\text{\em, uniformly on compact $K\subset [0,\infty)$} \end{equation} (this is possible by the Stone-Weierstrass Theorem), then: \begin{align} \lim_{k\to\infty}\lim_{\ell\to\infty}xf^\ell_k(x^*x)x^*x&=\lim_{k\to\infty}\lim_{\ell\to\infty}xx^*xf^\ell_k(x^*x)=x, \label{fklx1}\\ \lim_{k\to\infty}\lim_{\ell\to\infty}f^\ell_k(xx^*)xx^*x&=\lim_{k\to\infty}\lim_{\ell\to\infty}xx^*f^\ell_k(xx^*)x=x. \label{fklx2} \end{align} \end{itemize} \end{fact} \begin{proposition} \label{semigroup} Let $B \subset A$ be \textcolor{black}{a} non-degenerate abelian $C^*$-subalgebra of a $C^*$-algebra $A$. \begin{itemize} \item[(i)] $\overline{n B} = \overline{ B n}$ for all $n \in N(B)$. \item[(ii)] All states $\phi\in S(B)$ are $B$-invariant. \item[(iii)] If $\phi\in S(B)$ is $n$-invariant for some $n\in N(B)$, then $\phi$ is also $n^*$-invariant.
\item[(iv)] If $\phi\in S(B)$ is both $n_1$-invariant and $n_2$-invariant, for some $n_1,n_2\in N(B)$, then $\phi$ is also $n_1 n_2$-invariant. \item[(v)] If $N_0 \subset N(B)$ is a sub-$*$-semigroup, generated as a $*$-semigroup by some subset $W\subset N(B)$, and $\phi\in S(B)$ is $W$-invariant, then $\phi$ is $N_0$-invariant. \item[(vi)] A state $\phi \in S(B)$ is fully invariant if and only if \begin{equation} \forall\, n\in N(B):\,\,\,\phi(nn^*)=\phi(n^*n). \label{phi-fully-inv-n} \end{equation} \end{itemize} \end{proposition} \begin{proof} (i) It suffices to show that for any $n\in N(B)$ and any $b\in B$, we have $nb\in \overline{Bn}$ and $bn\in\overline{nB}$. If we fix $n$ and $b$, then using the $f_k^\ell$'s from Fact \ref{fxx}, combined with the commutativity of $B$, we have \begin{equation} nb= \lim_{k\to\infty}\lim_{\ell\to\infty}f_k^\ell(nn^*)nn^*nb = \lim_{k\to\infty}\lim_{\ell\to\infty}f_k^\ell(nn^*)nbn^*n. \label{nB=Bn} \end{equation} Since $n$ normalizes $B$, we know that $nbn^*\in B$, so the elements $b_k^\ell=f_k^\ell(nn^*)nbn^*$ all belong to $B$, and then \eqref{nB=Bn}, which now simply states that $nb=\lim_{k\to\infty}\lim_{\ell\to\infty}b_k^\ell n$, clearly proves that $nb\in \overline{Bn}$. The fact that $bn\in \overline{nB}$ is proved exactly the same way. (ii) This is obvious, since $B$ is abelian. (iii) Fix $b \in B$, and using part (i) take a sequence $\{b_k\} \subset B$ such that $bn = \lim_k n b_k$. Then \[ \phi(n^* b n) = \lim_k \phi(n^* n b_k) = \lim_k \phi(n b_k n^*) = \phi(bnn^*) = \phi(nn^* b).\] (iv) Suppose that $b \in B$. Using part (i) again, take a sequence $\{c_k\} \subset B$ such that $(n_1^* n_1) n_2 = \lim_k n_2 c_k$. Then \begin{align*} \phi(n_1 n_2 b n_2^* n_1^*) &= \phi(n_1^* n_1 n_2 b n_2^*) = \lim_k \phi(n_2 c_k b n_2^*)= \lim_k \phi(n_2^* n_2 c_k b) = \\ &=\phi(n_2^* n_1^* n_1 n_2 b) , \end{align*} so that $\phi$ is $n_1 n_2$-invariant. Part (v) follows immediately from (iii) and (iv).
(vi) The ``if'' implication (for which it suffices to prove \eqref{normalized-phi-def} only for positive $b$) follows from the observation that for any $n\in N(B)$ and any $b\in B^+$, the element $x=nb^{1/2}$ is again in $N(B)$, so applying condition \eqref{phi-fully-inv-n} to $x$ will clearly imply $$\phi(nbn^*)=\phi(b^{1/2}n^*nb^{1/2})=\phi(n^*nb).$$ Conversely, if $\phi$ is fully invariant, then $$\forall\,n\in N(B):\,\,\phi(nn^*)=\lim_\lambda\phi(nu_\lambda n^*)=\lim_\lambda \phi(n^*n u_\lambda)=\phi(n^*n),$$ where $(u_\lambda) \subset B$ is an approximate unit for $A$. \end{proof} Besides the notion of invariance for states on a $C^*$-subalgebra, we will also use the following two additional variants. \begin{definition} Given a state $\psi \in S(A)$, we say that an element $x \in A$ \emph{centralizes} $\psi$ if $\psi(xa)=\psi(ax)$ for all $a \in A$. It is easy to see that the set \[ Z_\psi = \{x \in A: x \text{ centralizes } \psi \} \] is a $C^*$-subalgebra of $A$. (Obviously, $\psi$ is always tracial when restricted to $Z_\psi$. In particular, $\psi$ is tracial on $A$, if and only if its centralizer $Z_\psi$ contains a set that generates $A$ as a $C^*$-algebra.) \end{definition} \begin{definition} If $B \subset A$ is a $C^*$-subalgebra and $n \in N(B)$, we will say that a map $\Phi: A \to B$ is \emph{normalized by $n$} if $\Phi(nan^*)=n\Phi(a)n^*$ for all $a \in A$. \end{definition} \begin{lemma} \label{centralize} Let $B \subset A$ be a non-degenerate abelian C*-subalgebra with a conditional expectation $\mathbb{E}: A \to B$, which is normalized by some $n \in N(B)$. For a state $\phi \in S(B)$, the following are equivalent: \begin{itemize} \item[(i)] $\phi$ is an $n$-invariant state on $B$; \item[(ii)] $\phi \circ \mathbb{E}\in S(A)$ is a state on $A$, which is centralized by $n$. \end{itemize} \end{lemma} \begin{proof} The implication $(ii)\Rightarrow (i)$ is immediate, and holds even without the assumption that $\mathbb{E}$ is normalized by $n$.
Indeed, if $b\in B$, then $nbn^*=\mathbb{E}(nbn^*)$ and $bn^*n=\mathbb{E}(bn^*n)$, so if $\phi\circ \mathbb{E}$ is centralized by $n$, then: $$ \phi(nbn^*)=(\phi\circ \mathbb{E})\big(n(bn^*)\big)= (\phi\circ \mathbb{E})\big((bn^*)n\big)=\phi(bn^*n)=\phi(n^*nb).$$ For the proof of $(i)\Rightarrow (ii)$, we fix $a\in A$ and we show that $\phi\big(\mathbb{E}(an)\big)=\phi\big(\mathbb{E}(na)\big)$. Fix polynomials $(f_k^\ell)$ as in Fact \ref{fxx}(iii). Since $\mathbb{E}$ is a conditional expectation, it follows that \begin{equation} \mathbb{E}(an)=\lim_{k\to\infty}\lim_{\ell\to\infty}\mathbb{E}\left(anf_k^\ell(n^*n)n^*n\right)= \lim_{k\to\infty}\lim_{\ell\to\infty}\mathbb{E}\left(anf_k^\ell(n^*n)\right)n^*n. \end{equation} By the $n$-invariance of $\phi$, we have \begin{align} \phi\big(\mathbb{E}(an)\big)&= \lim_{k\to\infty}\lim_{\ell\to\infty}\phi\left(\mathbb{E}\left(anf_k^\ell(n^*n)\right)n^*n\right) = \notag \\ &=\lim_{k\to\infty}\lim_{\ell\to\infty}\phi\left(n\mathbb{E}\left(anf_k^\ell(n^*n)\right)n^*\right). \end{align} Because $\mathbb{E}$ is normalized by $n$, with the help of \eqref{fxx=} our computation continues as: \begin{align} \phi\big(\mathbb{E}(an)\big) &=\lim_{k\to\infty}\lim_{\ell\to\infty}\phi\left(\mathbb{E}\left(nanf_k^\ell(n^*n)n^*\right)\right)=\notag\\ &= \lim_{k\to\infty}\lim_{\ell\to\infty}\phi\left(\mathbb{E}\left(naf_k^\ell(nn^*)nn^*\right)\right).\label{P-phi-normal} \end{align} Since $\mathbb{E}$ is a conditional expectation onto an abelian C*-subalgebra, we have: \begin{align*} \mathbb{E}(naf_k^\ell(nn^*)nn^*) =\mathbb{E}(na)f_k^\ell(nn^*)nn^* =\\= f_k^\ell(nn^*)nn^*\mathbb{E}(na) = \mathbb{E}(f_k^\ell(nn^*)nn^*na), \end{align*} so when we return to \eqref{P-phi-normal} and we also use \eqref{fklx2}, we finally get: \begin{equation*} \phi\big(\mathbb{E}(an)\big) =\lim_{k\to\infty}\lim_{\ell\to\infty} \phi \left( \mathbb{E}\left(f_k^\ell(nn^*)nn^*na\right)\right)= \phi\big(\mathbb{E}(na)\big).
\qedhere \end{equation*} \end{proof} \begin{theorem}\label{phiP-trace-thm} Let $B \subset A$ be a non-degenerate abelian C*-subalgebra with a conditional expectation $\mathbb{E}: A \to B$, which is normalized by some set $N_0\subset N(B)$. For a state $\phi \in S(B)$, the following are equivalent: \begin{itemize} \item[(i)] $\phi$ is $N_0$-invariant; \item[(ii)] $\phi\circ \mathbb{E}$ is centralized by all elements of the $C^*$-subalgebra $C^*(B\cup N_0)\subset A$; \item[(iii)] the restriction $(\phi \circ \mathbb{E})|_{C^*(B\cup N_0)}$ is a tracial state on $C^*(B\cup N_0)$. \end{itemize} \end{theorem} \begin{proof} $(i)\Rightarrow (ii)$. Assume $\phi$ is $N_0$-invariant. By Lemma \ref{centralize}, we clearly have the inclusion $N_0\subset Z_{\phi\circ \mathbb{E}}$, so (using the fact that $Z_{\phi\circ \mathbb{E}}$ is a $C^*$-subalgebra of $A$) in order to prove statement (ii), it suffices to show that $\phi\circ \mathbb{E}$ is also centralized by $B$, which is pretty clear, since $B$ is abelian. The implication $(ii)\Rightarrow (iii)$ is trivial, since any state becomes tracial when restricted to its centralizer. $(iii)\Rightarrow (i)$. Assume $(\phi \circ \mathbb{E})|_{C^*(B\cup N_0)}$ is tracial. In particular, $N_0$ centralizes this restriction, so by Lemma \ref{centralize} (applied to $C^*(B\cup N_0)$ in place of $A$), it again follows that $\phi$ is $N_0$-invariant. \end{proof} \section{Invariant states in the \'{e}tale groupoid framework} The invariance \textcolor{black}{conditions from Section 1} can be neatly described in the context of {\em \'{e}tale groupoid $C^*$-algebras}, which we briefly recall here.
A \emph{groupoid} is a set $G$ along with a subset $G^{(2)} \subset G \times G$ of \emph{composable pairs} and two functions: composition $G^{(2)} \ni (\alpha,\beta)\longmapsto \alpha\beta\in G$ and an involution $G\ni \gamma\longmapsto \gamma^{-1}\in G$ (the inversion), such that the following hold: \begin{itemize} \item[(i)] $\gamma(\eta \zeta) = (\gamma \eta) \zeta$ whenever $(\gamma,\eta),(\eta,\zeta) \in G^{(2)}$; \item[(ii)] $(\gamma,\gamma^{-1}) \in G^{(2)}$ for all $\gamma \in G$, and $\gamma^{-1}(\gamma \eta) = \eta$ and $(\gamma \eta) \eta^{-1} = \gamma$ for $(\gamma,\eta) \in G^{(2)}$. \end{itemize} Elements satisfying $u = u^2 \in G$ are called \emph{units} of $G$ and the set of all such units is denoted $G^{(0)} \subset G$ and called the \emph{unit space} of $G$. There are maps $r,s: G \to G^{(0)}$ defined by \[ r(\gamma) = \gamma \gamma^{-1} \qquad \qquad s(\gamma) =\gamma^{-1} \gamma \] that are called, respectively, the \emph{range} and \emph{source} maps. If $A,B \subset G$, then $$AB = \{\gamma \in G: \exists \alpha \in A, \beta \in B\text{, such that }\alpha \beta = \gamma\}.$$ It is not difficult to show that $(\alpha,\beta) \in G^{(2)}$ if and only if $s(\alpha)=r(\beta)$. For a given unit $u \in G^{(0)}$ there is an associated group $G(u) = \{\gamma \in G: r(\gamma) = s(\gamma) = u \}$; this is called the \emph{isotropy} or \emph{stabilizer group} of $u$. The union of all isotropy groups in $G$ forms a subgroupoid of $G$ called $\operatorname{Iso}(G)$, the \emph{isotropy bundle} of $G$. A groupoid is called \emph{principal} (or an \emph{equivalence relation}) if $\operatorname{Iso}(G) = G^{(0)}$; that is, if no unit has non-trivial stabilizer group. 
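Two standard examples, recorded here only for orientation (they are not drawn from the paper itself), illustrate the definitions above.

```latex
% Standard examples of groupoids (for orientation only).
First, any group $\Gamma$ is a groupoid with
$G^{(2)}=\Gamma\times\Gamma$: there is a single unit $e$, so
$r(\gamma)=s(\gamma)=e$ for every $\gamma$, and
$\operatorname{Iso}(\Gamma)=\Gamma$. Second, for any set $X$ the
\emph{pair groupoid} $G=X\times X$ carries the operations
\[
(x,y)(y,z)=(x,z),\qquad (x,y)^{-1}=(y,x),
\]
so that $G^{(0)}=\{(x,x):x\in X\}\cong X$, $r(x,y)=x$, $s(x,y)=y$, and
$\operatorname{Iso}(G)=G^{(0)}$. The pair groupoid is therefore
principal: it is nothing but the full equivalence relation on $X$.
```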
Throughout this paper a groupoid $G$ will be called \emph{\'etale}, if it is endowed with a Hausdorff, locally compact and second countable topology so that \begin{itemize} \item[(a)] the composition and inversion operations are continuous (the domain of $\circ$ is equipped with the relative product topology), and furthermore, \item[(b)] the range and source maps are local homeomorphisms. \end{itemize} By condition (b), for each $\gamma\in G$, there exists \textcolor{black}{an} open set $\gamma\in X\subset G$, such that the maps $s(X)\xleftarrow{\,s|_X\,}X \xrightarrow{\,r|_X\,}r(X)$ are homeomorphisms onto open sets \textcolor{black}{in $G$}; such an $X$ is called a {\em bisection}. Note that in the \'etale case, the unit space $G^{(0)}$ is in fact {\em clopen\/} in $G$, and all range and source fibers $r^{-1}(u)$, $s^{-1}(u)$, $u\in G^{(0)}$, are discrete in the relative topology; hence compact subsets of $G$ intersect any given range (or source) fiber in at most finitely many points. In order to define a $C^*$-algebra from an \'etale groupoid $G$, it is necessary to specify a $*$-algebra structure on $C_c(G)$. This is given by \begin{align*} (f\times g)(\gamma) &= \sum_{(\alpha,\beta) \in G^{(2)}: \alpha \beta = \gamma} f(\alpha) g(\beta);\\ f^*(\gamma)&=\overline{f(\gamma^{-1})}. \end{align*} (Compactness of supports ensures that the sum involved in the definition of the product gives a well-defined element of $C_c(G)$.) As $G^{(0)}$ is open in $G$, we have an inclusion $C_c(G^{(0)})\subset C_c(G)$, which turns $C_c(G^{(0)})$ into a $*$-subalgebra. However, the $*$-algebra operations on $C_c(G^{(0)})$ inherited from $C_c(G)$ coincide with the usual (pointwise!) operations: $h^*=\bar{h}$ and $h\times k=hk$, $\forall\,h,k\in C_c(G^{(0)})$.
In fact, something similar can be said concerning the left and right $C_c(G^{(0)})$-module structure of $C_c(G)$: for all $f\in C_c(G)$, $h\in C_c(G^{(0)})$ we have \begin{align} (f\times h)(\gamma)&=f(\gamma)h\big(s(\gamma)\big);\\ (h\times f)(\gamma)&=h\big(r(\gamma)\big)f(\gamma). \end{align} Following Renault (\cite{Renault}), for an \'etale groupoid $G$, the full $C^*$-norm on $C_c(G)$ is given as $$\|f\|=\sup\left\{\big\|\pi(f)\big\|\,:\,\pi\text{ non-degenerate $*$-representation of $C_c(G)$}\right\},$$ and the {\em full groupoid $C^*$-algebra} $C^*(G)$ is defined to be the completion of $C_c(G)$ in the full $C^*$-norm. When restricted to $C_c(G^{(0)})$, the full $C^*$-norm agrees with the usual sup-norm $\|\cdot\|_\infty$, so by completion, the embedding $C_c(G^{(0)})\subset C_c(G)$ gives rise to a non-degenerate inclusion $C_0(G^{(0)})\subset C^*(G)$. At the same time, one can also consider the restriction map, which ends up being a contractive map $\left(C_c(G),\,\|\cdot\|\right)\ni f\longmapsto f|_{G^{(0)}}\in \left(C_c(G^{(0)}),\,\|\cdot\|_\infty\right)$, so by completion one obtains a contractive linear map $\mathbb{E}:C^*(G)\to C_0(G^{(0)})$, which is in fact a {\em conditional expectation}. We refer to $\mathbb{E}$ as the {\em natural expectation}. Using the KSGNS construction associated with $\mathbb{E}$ (\cite{Lance}) we obtain a $*$-representation $\pi_{\mathbb{E}}:C^*(G)\to\mathcal{L}\left(L^2\left(C^*(G),\mathbb{E}\right)\right)$, where $L^2\left(C^*(G),\mathbb{E}\right)$ is the Hilbert $C_0(G^{(0)})$-module obtained by completing $C^*(G)$ in the norm given by the inner product $\langle a|b\rangle_{C_0(G^{(0)})}=\mathbb{E}(a^*b)$. With this representation in mind, the quotient $C^*(G)/\text{ker}\,\pi_{\mathbb{E}}$ is the so-called {\em reduced\/} groupoid $C^*$-algebra, denoted by $C^*_{\text{red}}(G)$. 
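For orientation, the constructions above can be checked in the simplest case, that of a discrete group; this special case is standard and is only sketched here.

```latex
% Special case (standard): a discrete group viewed as a groupoid.
If $G=\Gamma$ is a discrete group, regarded as a groupoid with unit
space $G^{(0)}=\{e\}$, then $C_c(G)$ is the group algebra
$\mathbb{C}[\Gamma]$ with the usual convolution product, the natural
expectation is evaluation at the identity, $\mathbb{E}(f)=f(e)$, and
the Hilbert module $L^2\left(C^*(G),\mathbb{E}\right)$ is just
$\ell^2(\Gamma)$, with $\pi_{\mathbb{E}}$ the left regular
representation. Accordingly, $C^*(G)$ and $C^*_{\text{red}}(G)$ are
the full and reduced group $C^*$-algebras $C^*(\Gamma)$ and
$C^*_{\text{red}}(\Gamma)$, respectively.
```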
An alternative description of the ideal $\text{ker}\,\pi_{\mathbb{E}}$ is to employ the usual GNS-representations $\pi_{ev_u\circ \mathbb{E}}$, associated with the states $ev_u\circ \mathbb{E}\in S\big(C^*(G)\big)$ that are obtained by composing $\mathbb{E}$ with evaluation maps $ev_u:C_0(G^{(0)})\ni h\longmapsto h(u)\in \mathbb{C}$, $u\in G^{(0)}$. With these (honest) representations in mind, we have $\text{ker}\,\pi_{\mathbb{E}}=\bigcap_{u\in G^{(0)}}\text{ker}\,\pi_{ev_u\circ \mathbb{E}}$. As was the case with the full groupoid $C^*$-algebra, after composing with the quotient map $\pi_{\text{red}}:C^*(G)\to C^*_{\text{red}}(G)$, we still have an embedding $C_c(G)\subset C^*_{\text{red}}(G)$, so we can also view $C^*_{\text{red}}(G)$ as the completion of the convolution $*$-algebra $C_c(G)$ with respect to a (smaller) $C^*$-norm, denoted $\|\cdot\|_{\text{red}}$. As before, when restricted to $C_c(G^{(0)})$, the norm $\|\,\cdot\,\|_{\text{red}}$ agrees with $\|\,\cdot\,\|_\infty$, so $C_0(G^{(0)})$ still embeds in $C^*_{\text{red}}(G)$, and furthermore, since the natural expectation $\mathbb{E}$ vanishes on $\text{ker}\,\pi_{\mathbb{E}}$, we will have a reduced version of the natural expectation, denoted by $\mathbb{E}_{\text{red}}:C^*_{\text{red}}(G)\to C_0(G^{(0)})$, which satisfies $\mathbb{E}_{\text{red}}\circ\pi_{\text{red}}=\mathbb{E}$. As pointed out for instance in \cite{Renault3}, a large supply of normalizers for $C_0(G^{(0)})$ is provided by those elements of the groupoid $C^*$-algebra represented by functions $f \in C_c(G)$ supported in bisections. We shall refer to such elements as \emph{elementary normalizers} of $C_0(G^{(0)})$. Note that the collection $N_{\text{elem}}\big(C_0(G^{(0)})\big)$ of elementary normalizers, along with $0$, is a $*$-subsemigroup of $N\left(C_0(G^{(0)})\right)$, and furthermore $N_{\text{elem}}\big(C_0(G^{(0)})\big)$ generates the ambient algebra -- $C^*(G)$ or $C^*_{\text{red}}(G)$ -- as a $C^*$-algebra.
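The normalizer property of such elements can be seen by a direct convolution computation; we record it here as a sketch (the formulas are standard).

```latex
% Sketch: why elements supported in bisections normalize C_0(G^{(0)}).
Let $X\subset G$ be an open bisection, $n\in C_c(X)$ and
$h\in C_c(G^{(0)})$. The convolution formulas give
\[
(n\times h\times n^*)(u)=
\begin{cases}
|n(\gamma)|^2\,h\big(s(\gamma)\big), & \text{if some }\gamma\in X
\text{ has }r(\gamma)=u,\\[2pt]
0, & \text{otherwise,}
\end{cases}
\]
(such a $\gamma$, when it exists, is unique, since $r|_X$ is
injective), so $n\times h\times n^*\in C_c\big(r(X)\big)$, and
symmetrically $n^*\times h\times n\in C_c\big(s(X)\big)$. In
particular, $(n^*\times n)(u)=|n(\gamma)|^2$ for the unique
$\gamma\in X$ with $s(\gamma)=u$ (and $0$ if there is none), so
$n^*\times n$ and $n\times n^*$ both lie in $C_0(G^{(0)})$.
```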
Using the embedding of $C_c(G)$ in the groupoid (full or reduced) $C^*$-algebra, we interpret $N_{\text{elem}}\big(C_0(G^{(0)})\big)$ as a subset of $C_c(G)$, namely: \begin{equation} N_{\text{elem}}\big(C_0(G^{(0)})\big)=\bigcup_{X\text{ bisection}}C_c(X)\subset C_c(G). \label{Nelem-pres} \end{equation} \begin{mycomment} In order to avoid any unnecessary notational complications or duplications, the results and definitions in the remainder of this section are stated only using the reduced $C^*$-algebra $C^*_{\text{red}}(G)$ as the ambient $C^*$-algebra. However, with only a few explicitly noted exceptions, by composing with the quotient $*$-homomorphism $\pi_{\text{red}}:C^*(G)\to C^*_{\text{red}}(G)$, the same results will hold if we use the full $C^*$-algebra $C^*(G)$ instead; we leave it to the reader to write down the missing statements corresponding to the full case (by simply erasing the subscript ``red'' from the statements). \end{mycomment} The \'etale groupoid framework is particularly convenient because one of the hypotheses in Lemma \ref{centralize} above is automatically satisfied. \begin{proposition}\label{prop-elem-inv} The natural conditional expectation $\mathbb{E}_{\text{\rm red}}: C^*_{\text{\rm red}}(G) \to C_0(G^{(0)})$ is normalized by all elementary normalizers. In particular, for a state $\phi$ on $C_0(G^{(0)})$, the following are equivalent: \begin{itemize} \item[(i)] $\phi$ is an $N_{\text{\rm elem}}\big(C_0(G^{(0)})\big)$-invariant state on $C_0(G^{(0)})$; \item[(ii)] $\phi\circ \mathbb{E}_{\text{\rm red}}$ is a tracial state on $C^*_{\text{\rm red}}(G)$. \end{itemize} \end{proposition} \begin{proof} Assume $n \in C_c(X)$, for some bisection $X\subset G$. In order to prove the first assertion, we must show that $\mathbb{E}_{\text{\rm red}}(n\times f\times n^*)=n\times \mathbb{E}_{\text{\rm red}}(f)\times n^*$, for all $f \in C_c(G)$. Fix $f$, as well as $u \in G^{(0)}$.
Then \[ \mathbb{E}_{\text{\rm red}}(n\times f\times n^*)(u) = \begin{cases} |n(\gamma)|^2 f(s(\gamma)) & \text{if }X \cap r^{-1}(u) \cap s^{-1}(\operatorname{supp }f)=\{\gamma\} \\ 0 & \text{otherwise} \end{cases} .\] (Since $X$ is a bisection, the intersection above contains at most one element.) It is straightforward to verify that this is the same as $\left(n\times \mathbb{E}_{\text{\rm red}}(f)\times n^*\right)(u)$. The second statement is a direct consequence of Theorem \ref{phiP-trace-thm}, combined with the fact that $N_{\text{elem}}(C_0(G^{(0)}))$ generates $C^*_{\text{red}}(G)$ as a $C^*$-algebra. \end{proof} We want to characterize the $N_{\text{elem}}(C_0(G^{(0)}))$-invariant states on $C_0(G^{(0)})$ -- hereafter referred to as {\em elementary invariant\/} states -- completely in measure-theoretical terms on $G$. We introduce the following terminology in parallel with Definition \ref{inv-state-def}. \begin{definition} Let $G$ be an \'{e}tale topological groupoid with unit space $G^{(0)}$, and let $\mu$ be a positive Radon measure on $G^{(0)}$. \begin{enumerate} \item Given an open bisection $X\subset G$, we say that $\mu$ is \emph{$X$-balanced} if $\mu(XBX^{-1}) = \mu(s(X) \cap B)$ for any Borel set $B \subset G^{(0)}$. \item If $\mathcal{X}$ is a family of open bisections, then we say that $\mu$ is \emph{$\mathcal{X}$-balanced} if $\mu$ is $X$-balanced for all $X \in \mathcal{X}$. \item If $\mu$ is $X$-balanced for every open bisection $X$, then we say that $\mu$ is \emph{totally balanced}. \end{enumerate} \end{definition} \begin{notations} Given a proper continuous map $h: X \to Y$ between locally compact spaces, and a Radon measure $\mu$ on $X$, we denote its $h$-pushforward by $h_* \mu$. This is a Radon measure on $Y$, given by $(h_* \mu)(A) = \mu(h^{-1}(A))$, for any Borel set $A \subset Y$. Note that the pushforward construction is covariant: $(g \circ f)_* \mu = g_* (f_* \mu)$.
By Riesz's Theorem, we have a bijective correspondence \begin{equation} \text{Prob}(X)\ni \mu\longmapsto \phi_\mu \in S\big(C_0(X)\big) \label{riesz-cor} \end{equation} between the space of {\em \textcolor{black}{Radon probability} measures on $X$} and the {\em state space of $C_0(X)$}, defined as follows. For each $\mu\in\text{Prob}(X)$, the associated state $\phi_\mu\in S\big(C_0(X)\big)$ is: $$\phi_\mu(f)=\int _X f(x)\,d\mu(x),\,\,\,f\in C_0(X).$$ On the level of positive linear functionals, the pushforward construction corresponds to {\em composition\/}: $$(h_*\phi)(f)=\phi\big(f\circ h\big),\,\,\,f\in C_0(Y),\,h:X\to Y.$$ \end{notations} \begin{lemma}\label{Xbal-lemma} With $G$ as above, let $X\subset G$ be an open bisection. For a finite Radon measure $\mu$ on $G^{(0)}$, the following are equivalent: \begin{itemize} \item[(i)] $\mu|_{s(X)} = \big(s \circ (r|_X)^{-1}\big)_* (\mu|_{r(X)})$; \item[(ii)] $\mu\big(s(B)\big)=\mu\big(r(B)\big)$, for all Borel subsets $B\subset X$; \item[(iii)] $\mu\big(s(K)\big)=\mu\big(r(K)\big)$, for all compact subsets $K\subset X$; \item[(iv)] $\mu$ is $X$-balanced. \end{itemize} {\rm (In condition (i) we use the restriction notation for measures: if $\mu$ is a finite Radon measure on $G^{(0)}$ -- thought of as a function $\mu:\text{Bor}(G^{(0)})\to [0,\infty)$ -- and $D\subset G^{(0)}$ is some open subset, then $\mu|_D$ is the Radon measure on $D$ obtained by restricting $\mu$ to $\text{Bor}(D)$.)} \end{lemma} \begin{proof} The equivalence $(i)\Leftrightarrow (ii)$ is trivial, because the maps $s(X)\xleftarrow{\,s|_X\,}X \xrightarrow{\,r|_X\,}r(X)$ are homeomorphisms onto open sets. The equivalence $(ii)\Leftrightarrow (iv)$ follows from the observation that, for any Borel set $B\subset G^{(0)}$, the set $B'=X\cap s^{-1}(B)\subset X$ is Borel, and furthermore, the sets that appear in the definition of $X$-balancedness are precisely $ XBX^{-1}= r(B') $ and $s(X)\cap B = s(B')$.
Lastly, the equivalence $(ii)\Leftrightarrow (iii)$ follows from regularity and finiteness of $\mu$. \end{proof} \textcolor{black}{We are interested in balanced measures because they are tied up with elementary invariance.} \begin{lemma} \label{grpdmeasX} Let $G$ be an \'{e}tale groupoid with unit space $G^{(0)}$, let $\mu$ be a \textcolor{black}{Radon probability} measure on $G^{(0)}$, and let $\phi_\mu$ be the state on the $C^*$-subalgebra $C_0(G^{(0)})\subset C^*_{\text{\rm red}}(G)$ given by \eqref{riesz-cor}. For an open bisection $X\subset G$, the following conditions are equivalent: \begin{itemize} \item[(i)] $\mu$ is $X$-balanced; \item[(ii)] $\phi_\mu$ is $C_c(X)$-invariant. (As in \eqref{Nelem-pres}, $C_c(X)\subset N_{C^*_{\text{\rm red}}(G)}\left(C_0(G^{(0)})\right)$.) \end{itemize} \end{lemma} \begin{proof} The entire argument will be based on the following \begin{claim} For any $n\in C_c(X)$ and any $b\in C_c(G^{(0)})$, one has the equalities: \begin{align} &\phi_\mu(n^*\!\times\! n\!\times\! b)=\int_{s(X)}\!\left|\left(n\circ (s|_X)^{-1}\right)(u)\right|^2 b(u)\,d\left(\mu|_{s(X)}\right)(u); \label{grpdmeasXcl1}\\ &\phi_\mu(n\!\times\! b\!\times\! n^*)=\int_{r(X)}\!\left|\left(n\circ (r|_X)^{-1}\right)(u)\right|^2\left(b\circ s\circ (r|_X)^{-1}\right)(u)\,d\left(\mu|_{r(X)}\right)(u); \label{grpdmeasXcl2}\\ &\phi_\mu(n\!\times\! b\!\times\! 
n^*)=\int_{s(X)}\!\left|\left(n\circ (s|_X)^{-1}\right)(u)\right|^2 b(u)\,d\left(s\circ (r|_X)^{-1}\right)_*\left(\mu|_{r(X)}\right)(u).\label{grpdmeasXcl3} \end{align} \end{claim} The equality \eqref{grpdmeasXcl1} follows from the definition of the convolution multiplication and $*$-involution, which yields \[ (n^*\times n)(u) = \begin{cases} \left|n\left((s|_X)^{-1}(u)\right)\right|^2 & u \in s(X) \\ 0 & u \not\in s(X) \end{cases} \] so we can multiply the functions $n^*n$ and $b$ to obtain: \[ (n^*\times n\times b)(u) = \begin{cases} \left|n\left((s|_X)^{-1}(u)\right)\right|^2 b(u) & u \in s(X) \\ 0 & u \not\in s(X). \end{cases} \] Likewise, the equality in \eqref{grpdmeasXcl2} follows from \[ (n\times b\times n^*)(u) = \begin{cases} \left|n\left(( r|_X)^{-1}(u)\right)\right|^2 \cdot b\left(s\left((r|_X)^{-1}(u)\right)\right) & u \in r(X) \\ 0 & u \not\in r(X) \end{cases} \] which implies that the support of $n\times b\times n^*$ is contained in $X (\operatorname{supp } b) X^{-1}\subset r(X)$. Lastly, the equality between the right-hand sides of \eqref{grpdmeasXcl2} and \eqref{grpdmeasXcl3} follows immediately by applying the definition of the pushforward \begin{equation} \int_{s(X)}f\,d\left(s\circ (r|_X)^{-1}\right)_*\left(\mu|_{r(X)}\right)= \int_{r(X)}\left(f\circ s\circ (r|_X)^{-1}\right)\,d\left(\mu|_{r(X)}\right), \label{grpdmeasXpush1} \end{equation} to functions $f\in C_c\big(s(X)\big)$ of the form: $f(u)=\left|n\circ \left((s|_X)^{-1}\right)(u)\right|^2 b(u)$. Having proved the Claim, the implication $(i)\Rightarrow (ii)$ follows from Lemma \ref{Xbal-lemma}, which yields: \begin{equation} \forall\,n\in C_c(X),\,b\in C_c(G^{(0)}): \,\,\,\phi_\mu(n^*\!\times\!n\!\times\!b)=\phi_\mu(n\!\times\!b\!\times\!n^*). \label{grpdmeasX(i)->(ii)} \end{equation} By density, \eqref{grpdmeasX(i)->(ii)} holds for all $n\in C_c(X)$, $b\in C_0(G^{(0)})$, thus $\phi_\mu$ is $n$-invariant for all $n\in C_c(X)$. 
As for the implication $(ii)\Rightarrow(i)$, all we have to observe is that, if $\phi_\mu$ is $C_c(X)$-invariant, then \eqref{grpdmeasX(i)->(ii)} is valid, which, by the identities \eqref{grpdmeasXcl1} and \eqref{grpdmeasXcl3}, simply states that the equality \begin{equation} \int_{s(X)}f\,d\left(s\circ (r|_X)^{-1}\right)_*\left(\mu|_{r(X)}\right)= \int_{s(X)}f\,d\left(\mu|_{s(X)}\right), \label{grpdmeasXpush3} \end{equation} holds for all functions of the form: \begin{equation} f(u)=\left|\left(n\circ (s|_X)^{-1}\right)(u)\right|^2 b(u),\,\,n\in C_c(X),\,b\in C_c(G^{(0)}). \label{grpdmeasXpush2} \end{equation} Since (using a partition of unity argument) the functions of the above form linearly span all functions in $C_c\left(s(X)\right)$, the equality \eqref{grpdmeasXpush3} simply states that $$\left(s\circ (r|_X)^{-1}\right)_*\left(\mu|_{r(X)}\right)=\mu|_{s(X)},$$ so by Lemma \ref{Xbal-lemma}, it follows that $\mu$ is indeed $X$-balanced. \end{proof} Combining Proposition \ref{prop-elem-inv} with Lemma \ref{grpdmeasX}, we now reach the following conclusion. \begin{theorem}\label{grpdmeas} Let $G$ be an \'{e}tale groupoid with unit space $G^{(0)}$, let $\mu$ be a Radon probability measure on $G^{(0)}$, and let $\phi_\mu$ be the state on the $C^*$-subalgebra $C_0(G^{(0)})\subset C^*_{\text{\rm red}}(G)$ given by \eqref{riesz-cor}. The following conditions are equivalent: \begin{itemize} \item[(i)] $\mu$ is totally balanced; \item[(ii)] $\phi_\mu$ is elementary invariant; \item[(iii)] $\phi_\mu$ is fully invariant; \item[(iv)] $\phi_\mu\circ \mathbb{E}_{\text{\rm red}}$ is a tracial state on $C^*_{\text{\rm red}}(G)$. \qed \end{itemize} \end{theorem} In concrete situations, one would like to check condition (i) from the above Theorem in an ``economical'' way.
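It may also help to record what Theorem \ref{grpdmeas} amounts to in the simplest concrete case; the following sketch uses the transformation-groupoid conventions $r(x,g)=x$, $s(x,g)=g^{-1}x$, which are standard but are an assumption here, not fixed in the text above.

```latex
\begin{mycomment}
For the transformation groupoid $G=X\rtimes\Gamma$ of a discrete group $\Gamma$ acting on a locally compact space $X$, each slice $X_g=\{(x,g)\,:\,x\in X\}$ is an open bisection, and for a compact set $K\subset X_g$ with $C=r(K)$ we have $s(K)=g^{-1}C$, so condition (iii) of Lemma \ref{Xbal-lemma} for $X_g$ reads
$$\mu\big(g^{-1}C\big)=\mu(C),\,\,\,C\subset X\text{ compact}.$$
Since every open bisection of $G$ is a union of open subsets of the slices $X_g$, one checks that a measure $\mu\in\text{Prob}(X)$ is totally balanced if and only if it is $\Gamma$-invariant, in which case Theorem \ref{grpdmeas} says that $\phi_\mu\circ\mathbb{E}_{\text{red}}$ is a tracial state on $C^*_{\text{red}}(G)\cong C_0(X)\rtimes_{\text{red}}\Gamma$.
\end{mycomment}
```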
To be \textcolor{black}{more} precise, assuming that a given measure $\mu\in\text{Prob}(G^{(0)})$ is $\mathcal{X}$-balanced, for some collection of bisections $\mathcal{X}$, we seek a natural subalgebra on which $\phi_\mu\circ \mathbb{E}_{\text{red}}$ is tracial (as in Theorem \ref{phiP-trace-thm}), and furthermore find criteria on $\mathcal{X}$ which ensure that our subalgebra is in fact \textcolor{black}{all of} $C^*_{\text{red}}(G)$. Parts of the Proposition below mimic corresponding statements from Proposition \ref{semigroup}. (Each one of statements (i)--(iii) has an implicit statement built-in: the new sets, such as $X'$, $X^{-1}$ and $X_1X_2$, are always bisections.) \begin{proposition} \label{balance} Let $G$ be an \'{e}tale groupoid with unit space $G^{(0)}$ and let $\mu$ be a Radon probability measure on $G^{(0)}$. \begin{itemize} \item[(i)] If $\mu$ is $X$-balanced, for some bisection $X$, then $\mu$ is $X'$-balanced, for any open subset $X'\subset X$. \item[(ii)] If $\mu$ is $X$-balanced, for some bisection $X$, then $\mu$ is $X^{-1}$-balanced. \item[(iii)] If $\mu$ is both $X_1$- and $X_2$-balanced, for two bisections $X_1$, $X_2$, then $\mu$ is $X_1 X_2$-balanced. \item[(iv)] Assume $X$ is an open set, written as a union $X=\bigcup_{j\in J}X_j$ of bisections, such that $s|_X,r|_X:X\to G^{(0)}$ are injective. Then $X$ is a bisection, and if $\mu$ is $X_j$-balanced for all $j\in J$, then $\mu$ is $X$-balanced. \end{itemize} \end{proposition} \begin{proof} Statements (i) and (ii) are trivial from Lemma \ref{Xbal-lemma}. Before we prove (iii), we need some clarifications. First of all, the set $X_1X_2$ is obtained as the image of the open set $$X_1\circ X_2=\{(\alpha,\beta)\in X_1\times X_2\,:\, s(\alpha)=r(\beta)\}=(X_1\times X_2)\cap G^{(2)}\subset G^{(2)}$$ under the composition map $m:G^{(2)}\to G$.
Secondly, by the bisection property, the restrictions of the coordinate maps $X_1\xleftarrow{\,\,p_1\,\,}X_1\times X_2 \xrightarrow{\,\,p_2\,\,}X_2$ give rise to two homeomorphisms $p_1(X_1\circ X_2)\xleftarrow{\,\,p_1\,\,}X_1\circ X_2 \xrightarrow{\,\,p_2\,\,}p_2(X_1\circ X_2)$ onto open subsets of $X_1$ and $X_2$ respectively, and furthermore the compositions $s\circ p_1$ and $r\circ p_2$ agree on $X_1\circ X_2$, and the resulting map, denoted here by $t:X_1\circ X_2\to G^{(0)}$, is a homeomorphism onto an open subset $D\subset G^{(0)}$. (This open set is simply $D=t(X_1\circ X_2)=s(X_1)\cap r(X_2)$. By construction, $X_1X_2=\varnothing\,\Leftrightarrow\,s(X_1)\cap r(X_2)=\varnothing$.) Furthermore, again by the bisection property, $m|_{X_1\circ X_2}:X_1\circ X_2\to X_1X_2$ is also a homeomorphism onto an open set, so composing its inverse with the coordinate maps, we obtain two homeomorphisms $q_k=p_k\circ (m|_{X_1\circ X_2})^{-1}:X_1X_2\to X_k$, $k=1,2$, which satisfy $s|_{X_1X_2}=s\circ q_2$ and $r|_{X_1X_2}=r\circ q_1$. Using these three homeomorphisms, the fact that $X_1X_2$ is a bisection is obvious. Not only are the maps $s(X_1X_2)\xleftarrow{\,s|_{X_1X_2}\,}X_1X_2 \xrightarrow{\,r|_{X_1X_2}\,}r(X_1X_2)$ homeomorphisms, but so is the map $r\circ q_2=s\circ q_1=t\circ (m|_{X_1\circ X_2})^{-1}:X_1X_2\to D$. After all these preparations, statement (iii) follows from the observation that the $X_1$- and $X_2$-balancing features imply that, for any Borel set $B\subset X_1X_2$ we have \begin{align*} \mu\big(s(B)\big)&= \mu\big(s\big(q_2(B)\big)\big)= \mu\big(r\big(q_2(B)\big)\big)= \\ &= \mu\big(s\big(q_1(B)\big)\big)= \mu\big(r\big(q_1(B)\big)\big)= \mu\big(r(B)\big), \end{align*} so the desired conclusion follows from Lemma \ref{Xbal-lemma}. (iv). Since we have the equalities $s(X)=\bigcup_{j\in J}s(X_j)$ and $r(X)=\bigcup_{j\in J}r(X_j)$, it follows that $s(X)$ and $r(X)$ are open.
The fact that both $s(X)\xleftarrow{\,s|_X\,}X\xrightarrow{\,r|_X\,}r(X)$ are \textcolor{black}{homeomorphisms} follows because $s$ and $r$ are local homeomorphisms, so the injective maps $s|_X$ and $r|_X$ are continuous and open. Finally, to prove that $\mu$ is $X$-balanced, we apply criterion (iii) from Lemma \ref{Xbal-lemma}. Start with some compact set $K\subset X$, and using compactness write it as a finite disjoint union $K=\bigcup_{k=1}^nB_{j_k}$, where $B_{j_k}\subset X_{j_k}$, $k=1,\dots,n$ are Borel sets. Using the fact that $\mu$ is $X_j$-balanced for all $j$, we know that $\mu\big(s(B_{j_k})\big)= \mu\big(r(B_{j_k})\big)$, for all $k$, so using the injectivity of $s|_X$ and $r|_X$, we also have $s(K)=\bigcup_{k=1}^ns(B_{j_k})$ and $r(K)=\bigcup_{k=1}^nr(B_{j_k})$ (disjoint unions of Borel sets in $s(X)$ and $r(X)$ respectively), so we have \begin{align*} \mu\big(s(K)\big)&=\mu\big(\bigcup_{k=1}^ns(B_{j_k})\big)=\sum_{k=1}^n\mu\big(s(B_{j_k})\big) =\\ &=\sum_{k=1}^n\mu\big(r(B_{j_k})\big)= \mu\big(\bigcup_{k=1}^nr(B_{j_k})\big)= \mu\big(r(K)\big).\qedhere \end{align*} \end{proof} Using the above result, combined with Lemma \ref{grpdmeasX}, we immediately obtain the following \textcolor{black}{measure-theoretic} groupoid analogue of Theorem \ref{phiP-trace-thm}. \begin{theorem}\label{thm-C*Gtrace} Assume $\mathcal{W}$ is a collection of bisections in the \'{e}tale groupoid $G$, and let $\mathcal{X}$ be the inverse semigroup generated by $\mathcal{W}$.
For a measure $\mu\in\text{\rm Prob}(G^{(0)})$, the following are equivalent: \begin{itemize} \item[(i)] $\mu$ is $\mathcal{W}$-balanced; \item[(ii)] $\mu$ is $\mathcal{X}$-balanced; \item[(iii)] the state $\phi_\mu\circ \mathbb{E}_{\text{\rm red}}$ is tracial \textcolor{black}{when restricted to} the subalgebra $$C^*\bigg(C_0(G^{(0)})\cup\bigcup_{W\in\mathcal{W}}C_c(W)\bigg)= \overline{\text{\rm span}}\bigg(C_0(G^{(0)})\cup\bigcup_{X\in\mathcal{X}}C_c(X)\bigg).\qed$$ \end{itemize} \end{theorem} \begin{remark} A sufficient condition for a collection $\mathcal{X}$ of bisections of $G$ to satisfy the equality $$\overline{\text{\rm span}}\bigg(C_0(G^{(0)})\cup\bigcup_{X\in\mathcal{X}}C_c(X)\bigg)= C^*_{\text{\rm red}}(G)$$ is that $\mathcal{X}$ {\em covers $G\smallsetminus G^{(0)}$}. This follows using a standard partition of \textcolor{black}{unity} argument, which implies the equality $C_c(G)=\text{span}\bigg(C_0(G^{(0)})\cup\bigcup_{X\in\mathcal{X}}C_c(X)\bigg)$. As a consequence, the desired ``economical'' criterion for traciality of $\phi_\mu\circ \mathbb{E}_{\text{red}}$ is as follows. \end{remark} \begin{corollary}\label{tot-inv-cor} Assume $G$, $\mathcal{W}$ and $\mathcal{X}$ are as in Theorem \ref{thm-C*Gtrace}. If $\mu\in\text{\rm Prob}(G^{(0)})$ is $\mathcal{W}$-balanced, and $\mathcal{X}$ covers $G\smallsetminus G^{(0)}$, then $\phi_\mu\circ \mathbb{E}_{\text{\rm red}}$ is tracial on $C^*_{\text{\rm red}}(G)$.\qed \end{corollary} \section{Tracial states via extension properties} So far, assuming that a non-degenerate abelian $C^*$-subalgebra $B\subset A$ is the range of a conditional expectation $\mathbb{E}: A \to B$, we have examined certain conditions, both for a state $\phi\in S(B)$ and for $\mathbb{E}$, that ensure that $\phi\circ \mathbb{E}$ \textcolor{black}{is a trace}.
In the groupoid framework, the natural conditional expectation $\mathbb{E}$ \textcolor{black}{exhibited nice behavior} (elementary invariance), so the focus was solely placed on $\phi$. In this section we provide another framework, in which the conditional expectation in question is again normalized by all $n\in N(B)$. (As a side issue one should also be concerned with the {\em uniqueness\/} of the conditional expectation.) A natural class of subalgebras for which this analysis can be carried out nicely is that of Renault's Cartan subalgebras (\cite{Renault3}; see also the Comment following Corollary \ref{phiP-trace-ext1} below). As it turns out, very little from the Cartan subalgebra machinery is needed for our purposes: the \emph{almost extension property\/} (\cite{NR2}), which requires that the set \[ P_1(B\uparrow A) = \{\omega \in \hat{B}: \omega \text{ has a unique extension to a state on }A \}\] is weak-$*$ dense in $\hat{B}$ -- the Gelfand spectrum of $B$. (A slight strengthening of the above condition will be introduced \textcolor{black}{in} the Comment following Lemma \ref{lemma-almost} below.) The utility of the almost extension property is exhibited by Lemma \ref{lemma-almost} below, in preparation of which we need the following simple fact. \begin{fact}\label{fact-omega} Let $\omega$ be a state on $B \subset A$ with extension $\theta \in S(A)$, so that $\theta|_B = \omega$. If $x,y \in A$ satisfy either \begin{enumerate} \item $y^* y \in B$ and $\omega(y^*y)=0$, or \item $xx^* \in B$ and $\omega(xx^*)=0$, \end{enumerate} then $\theta(xy)=0$. In particular, if $b\in B$ satisfies $0\leq b\leq 1$ and $\omega(b)=1$, then $$\forall\,a\in A:\,\,\theta(a)=\theta(ab)=\theta(ba)=\theta(bab).$$ \end{fact} \begin{proof} Apply the Cauchy-Schwarz inequality for the sesquilinear form: $$ \langle a | a' \rangle = \theta(a^* a'). $$ The second statement follows from the first one applied with $y=1-b$.
\end{proof} \begin{lemma}[{compare to \cite[Lemma 6]{Kumjian}}]\label{lemma-almost} Let $B \subset A$ be a non-degenerate abelian $C^*$-subalgebra with the almost extension property, and let $\mathbb{E}: A \to B$ be a conditional expectation. Then $\mathbb{E}$ is normalized by all $n \in N(B)$. \end{lemma} \begin{mycomment} As noted in \cite{NR2}, the almost extension property implies that at most one conditional expectation $\mathbb{E}: A \to B$ can exist. In the case such an expectation does exist \textcolor{black}{and the almost extension property holds}, we say that the inclusion $B\subset A$ has the {\em conditional\/} almost extension property. \end{mycomment} \noindent{\em Proof of Lemma \ref{lemma-almost}.} Fix some normalizer $n\in N(B)$, and let us prove that \begin{equation} \mathbb{E}(nan^*)=n\mathbb{E}(a)n^*, \label{lemma-almost-norm} \end{equation} for all $a\in A$. Fix polynomials $(f_k^\ell)$ as in Fact \ref{fxx}(iii), so we have \begin{equation} \mathbb{E}(nan^*)=\lim_{k\to\infty}\lim_{\ell\to\infty}\mathbb{E}(nn^*nf_k^\ell(n^*n)a f_k^\ell(n^*n)n^*nn^*). 
\label{lemma-alomost1} \end{equation} Likewise, using also the fact that $\mathbb{E}$ is a conditional expectation, we have \begin{align} n\mathbb{E}(a)n^*&=\lim_{k\to\infty}\lim_{\ell\to\infty}nn^*nf_k^\ell(n^*n)\mathbb{E}(a) f_k^\ell(n^*n)n^*nn^*= \notag\\ &=\lim_{k\to\infty}\lim_{\ell\to\infty}n\mathbb{E}(n^*nf_k^\ell(n^*n)a f_k^\ell(n^*n)n^*n)n^*. \label{lemma-alomost2} \end{align} Inspecting \eqref{lemma-alomost1} and \eqref{lemma-alomost2}, we now see that it suffices to prove \eqref{lemma-almost-norm} for elements of the form $a=n^* a_1 n$; in other words, instead of \eqref{lemma-almost-norm}, it suffices to prove \begin{equation} \forall\,a\in A:\,\,\,\mathbb{E}(nn^*ann^*)=n\mathbb{E}(n^*an)n^*. \label{lemma-almost-norm2} \end{equation} As both sides of this equation belong to $B$, we need only show that \[ \omega(\mathbb{E}(nn^* a nn^*))=\omega(n\mathbb{E}(n^* a n) n^*) \tag{*}\] for all $\omega \in P_1(B\uparrow A)$. Suppose that $\omega(nn^*)=0$. In this case, we have by Fact \ref{fact-omega} that both sides of (*) are zero. Suppose that $\omega(nn^*) > 0$ and define two states $\psi_\omega$ and $\theta_\omega$ on $A$ by \[ \psi_\omega(a) = \frac{(\omega\circ \mathbb{E})(nn^*ann^*)}{\omega(nn^*)^2} \text{ and } \theta_\omega(a) = \frac{\omega(n\mathbb{E}(n^*an)n^*)}{\omega(nn^*)^2} ,\] so (*) is equivalent to the equality $\psi_\omega=\theta_\omega$ (of states on $A$). Note that, if $b \in B$, then $\psi_\omega(b) = \theta_\omega(b) = \omega(b)$, so that both states $\psi_\omega$ and $\theta_\omega$ are extensions of $\omega \in P_1(B\uparrow A)$, so by uniqueness we have $\psi_\omega=\theta_\omega$, and (*) is established. \qed In the context of the conditional almost extension property, Theorem \ref{phiP-trace-thm} has the following consequences.
\begin{corollary}\label{phiP-trace-ext1} Let $B \subset A$ be a non-degenerate abelian C*-subalgebra with the conditional almost extension property, let $\mathbb{E}: A \to B$ be its (unique) conditional expectation, and let $\phi$ be a state on $B$. \begin{itemize} \item[$(a)$] For a subset $N_0\subset N(B)$ the following are equivalent: \begin{itemize} \item[(i)] $\phi$ is $N_0$-invariant; \item[(ii)] $\phi\circ \mathbb{E}$ is centralized by all elements of $C^*(B\cup N_0)\subset A$; \item[(iii)] the restriction $(\phi \circ \mathbb{E})|_{C^*(B\cup N_0)}$ is a tracial state on $C^*(B\cup N_0)$. \end{itemize} \item[$(b)$] In particular, if $B$ is regular, then $\phi\circ \mathbb{E}$ is a trace on $A$ if and only if $\phi$ is fully invariant.\qed \end{itemize} {\em (Of course, statement $(b)$ can be slightly relaxed, by requiring that $\phi$ is only $N_0$-invariant for a subset $N_0\subset N(B)$ which together with $B$ generates $A$ as a $C^*$-algebra.)} \end{corollary} \begin{mycomment} A natural class exhibiting the conditional almost extension property are Cartan subalgebras, as defined by Renault in \cite{Renault3}. They are regular non-degenerate inclusions $B\subset A$, in which \begin{itemize} \item $B$ is {\em maximal abelian\/} (masa) in $A$, and \item there exists a {\em faithful\/} conditional expectation $\mathbb{E}:A\to B$ (which is necessarily unique). \end{itemize} As pointed out for instance in \cite{BNRSW}, Cartan subalgebras do have the conditional almost extension property, but there are many examples of regular non-degenerate abelian $C^*$-subalgebra inclusions $B\subset A$ with the conditional almost extension property which are non-Cartan. In fact, for \'etale groupoids, the condition equivalent to the almost extension property is {\em topological principalness\/}: the set of units $u \in G^{(0)}$ with trivial isotropy $G(u)$ is dense in $G^{(0)}$.
For topologically principal groupoids, both inclusions $C_0(G^{(0)})\subset C^*_{\text{red}}(G)$ and $C_0(G^{(0)})\subset C^*(G)$ have the conditional almost extension property. However, since the (full) conditional expectation $\mathbb{E}:C^*(G)\to C_0(G^{(0)})$ is not faithful in general, $C_0(G^{(0)})$ is generally not Cartan in $C^*(G)$. On the other hand, since the (reduced) expectation $\mathbb{E}_{\text{red}}:C^*_{\text{red}}(G)\to C_0(G^{(0)})$ is faithful, $C_0(G^{(0)})$ is Cartan in $C^*_{\text{red}}(G)$. \end{mycomment} Up to this point, we have seen that for regular non-degenerate abelian $C^*$-subalgebras $B\subset A$ with the conditional almost extension property, Corollary \ref{phiP-trace-ext1}(b) provides us with an injective $w^*$-continuous affine map \begin{equation} S^{\text{inv}}(B)\ni\phi\longmapsto \phi\circ \mathbb{E}\in T(A), \label{Sinv-T} \end{equation} which is a right inverse of the restriction map \eqref{T-Sinv}; in particular, it follows that for such inclusions, the map \eqref{T-Sinv} is surjective. \begin{question} If $B\subset A$ is a regular non-degenerate abelian $C^*$-subalgebra with the conditional almost extension property, under what additional circumstances is the map \eqref{Sinv-T} also surjective? (If this is the case, the restriction map \eqref{T-Sinv} is in fact an affine $w^*$-homeomorphism.) \end{question} As the Example below suggests, even in the case of Cartan inclusions, the map \eqref{Sinv-T} may fail to be surjective. \begin{example} Let $B = C(\overline{\mathbb{D}}) \subset A = C(\overline{\mathbb{D}}) \rtimes_\alpha \mathbb{Z} = C^*(C(\overline{\mathbb{D}}), u)$, where $\alpha$ is rotation of $\overline{\mathbb{D}}$ by an irrational multiple of $\pi$ and $u$ is the unitary that implements the automorphism in the crossed product. Then $B$ is a Cartan subalgebra, as can be directly verified.
The conditional expectation is given on the dense set of Laurent polynomials in $u$ by \[ \mathbb{E}( \sum f_n u^n) = f_0 .\] (It is obvious that $\mathbb{E}(u^n)=0$ for all $n \neq 0$.) As $0$ is a fixed point under the rotation $\alpha$, we have that $(\operatorname{ev}_0(\cdot)1, \operatorname{id})$ is a covariant representation of $(C(\overline{\mathbb{D}}),\alpha)$ in $C^*(\mathbb{Z})\cong C(\mathbb{T})$, thus it induces a $*$-homomorphism $\rho: A \to C(\mathbb{T})$. Any state $\psi$ on $C(\mathbb{T})$ defines a state $\psi \circ \rho$ on $A$, which is clearly tracial since $C(\mathbb{T})$ is abelian and $\rho$ is a $*$-homomorphism. A tracial state of this form factors through $\mathbb{E}$ if and only if it maps $\{u^n\}_{n \neq 0}$ to $0$, so if we take $\psi=\operatorname{ev}_z$ to be a point evaluation at some $z \in \mathbb{T}$, then clearly $(\operatorname{ev}_z\circ\rho)(u)=z \neq 0$, so the trace $\tau=\operatorname{ev}_z\circ \rho\in T(A)$ does not belong to the range of the map \eqref{Sinv-T}. \end{example} \begin{remark} In connection with the above example, the reason that the map $\phi \mapsto \phi \circ \mathbb{E}$ fails to be surjective is the fact that the state $\operatorname{ev}_0$ on $C(\overline{\mathbb{D}})$ does not have a unique extension to a state on $C(\overline{\mathbb{D}}) \rtimes \mathbb{Z}$. Such an obstruction can be avoided if we consider inclusions with the \emph{(honest) extension property}, which are those non-degenerate abelian $C^*$-subalgebra inclusions $B\subset A$ for which \emph{every} pure state on $B$ has a unique extension to a state on $A$. As shown in \cite{KS} and \cite{ABG}, the extension property implies the following: \begin{itemize} \item $B$ is maximal abelian; \item there exists a unique conditional expectation $\mathbb{E}: A \to B$; \item $\ker \mathbb{E} = [A,B]$ (the closed linear span of the set of elements of the form $ab-ba$, $a \in A, b \in B$).
\end{itemize} From the last two properties it follows immediately that any tracial state $\tau \in T(A)$ vanishes on $\ker \mathbb{E}$. Thus, any tracial state factors through $\mathbb{E}$, and is completely determined by its restriction to $B$. Since restrictions of the form $\tau\big|_B$, $\tau\in T(A)$ are always fully invariant, Corollary \ref{phiP-trace-ext1} has the following immediate consequence. \end{remark} \begin{corollary}\label{cor-Sinv-T-iso} If $B \subset A$ is a regular abelian $C^*$-subalgebra inclusion with the extension property, and $\mathbb{E}: A \to B$ is its associated conditional expectation, then the map $$S^{\text{\rm inv}}(B)\ni\phi\longmapsto \phi\circ \mathbb{E}\in T(A)$$ is an affine $w^*$-homeomorphism, with inverse $\tau \mapsto \tau|_B$. \qed \end{corollary} \begin{example} \label{free-group} For an \'etale groupoid $G$, the inclusions of $C_0(G^{(0)})$ into either the full or reduced $C^*$-algebra of $G$ have the extension property if and only if $G$ is \emph{principal}: all units in $G$ have trivial isotropy group. In the case when $G$ is a principal groupoid, the above combined with Theorem \ref{grpdmeas} (in both its reduced and full versions) establishes a bijection between the set of totally balanced Radon probability measures on $G^{(0)}$ and the tracial state spaces of both $C^*(G)$ and $C^*_{\text{red}}(G)$. In particular, if $\Gamma$ is a discrete group acting freely on $X$, then the tracial state spaces of both crossed-product $C^*$-algebras $C_0(X) \rtimes \Gamma$ and $C_0(X) \rtimes_{\text{red}} \Gamma$ are naturally identified with the $\Gamma$-invariant Radon probability measures on $X$. The condition that the groupoid be principal (or for crossed products, that the action be free) cannot be relaxed, especially in the non-amenable case, as the following example shows.
Let $\mathbf{F}_2$ -- the free group on two generators -- act by translation on its Alexandrov compactification $\mathbf{F}_2\cup\{\infty\}$ (by keeping $\infty$ fixed), so that the associated action of $\mathbf{F}_2$ on the unitization $c_0(\mathbf{F}_2)^\sim$ is given by $\alpha_g(f + c \mathbf{1})= \lambda_g(f) + c \mathbf{1}$, where $\lambda$ is the left-shift action on $c_0(\mathbf{F}_2)$. It is not hard to show that $c_0(\mathbf{F}_2)^\sim \rtimes_{\text{red}} \mathbf{F}_2$ has a unique tracial state. On the other hand, the full crossed product $c_0(\mathbf{F}_2)^\sim \rtimes \mathbf{F}_2$ has the full group $C^*$-algebra $C^*(\mathbf{F}_2)$ as a quotient, and so it must have infinitely many tracial states. \end{example} \section{Graph C*-algebras} In this section we provide a method for parametrizing tracial state spaces of graph $C^*$-algebras. \textcolor{black}{Our approach complements the treatment in \cite{Tomforde2} by giving an explicit parametrization of the tracial state space of a graph $C^*$-algebra.} We begin with a quick review of graph terminology and notation, most of which is borrowed from \cite{Raeburn}. A \emph{directed graph} $E = (E^0, E^1, r,s)$ consists of two countable sets $E^0,E^1$ as well as range and source maps $r,s: E^1 \to E^0$. A vertex $v$ is \emph{regular} if $r^{-1}(v)$ is finite and non-empty. A vertex which is not regular is called \emph{singular}; a singular vertex is either a source ($r^{-1}(v)=\varnothing$) or an infinite receiver ($r^{-1}(v)$ infinite). A \emph{finite path} in $E$ is a sequence $\lambda=e_1 \ldots e_n$ of edges satisfying $s(e_k)=r(e_{k+1})$ for $k=1,\ldots,n-1$. (Note that we are using the right-to-left convention.) The length of a path $\lambda=e_1 \ldots e_n$ is defined to be $|\lambda|=n$, and the set of paths of length $n$ in $E$ is denoted by $E^n$; the collection $\bigcup_{n=0}^\infty E^n$ of all finite paths in $E$ is denoted $E^*$. (The vertices $E^0$ are included in $E^*$ as the paths of length zero.)
An \emph{infinite path} in $E$ is an infinite sequence $e_1 e_2 \ldots$ of edges in $E$ satisfying $s(e_k)=r(e_{k+1})$ for all $k$; the set of these paths is denoted by $E^\infty$. If $\lambda=e_1 \ldots e_n$ is a finite path then we define its range $r(\lambda)$ to be $r(e_1)$, and its source $s(\lambda)$ to be $s(e_n)$. The range of an infinite path is defined the same way. In order to avoid any confusion, for any vertex $v\in E^0$, and any $n\in\mathbb{N}\cup\{\infty\}$, the set $\{\lambda\in E^n\,:\,r(\lambda)=v\}$ will be denoted by $r^{-n}(v)$. If $\lambda$ is a finite path and $\nu$ is a finite (or infinite) path with $s(\lambda)=r(\nu)$, then we can concatenate the paths to form $\lambda \nu$. Whenever a (finite or infinite) path $\sigma$ can be decomposed as $\sigma=\lambda\nu$, we write $\lambda\prec\sigma$ (or $\sigma\succ\lambda$) and we denote $\nu$ by $\sigma\ominus\lambda$. A \emph{cycle} is a finite path $\lambda$ of positive length with $r(\lambda)=s(\lambda)$. Given a cycle $\lambda=e_1 \ldots e_n\in E^*$, an \emph{entry} to $\lambda$ is a path $f_1 f_2 \ldots f_j$, $j > 0$, with $r(f_1)=r(e_k)$ and $f_1 \neq e_k$, for some $k$. If no entry to $\lambda$ exists, we say that $\lambda$ is {\em entry-less}. It is fairly easy to see that every entry-less cycle $\lambda$ can be written uniquely as a repeated concatenation $\lambda=\nu^m$ of a {\em simple\/} entry-less cycle $\nu$, i.e.\ one for which the number of vertices in $\nu$ equals $|\nu|$. \textcolor{black}{An infinite path $x$ is called \emph{periodic} if there exist $\alpha, \lambda \in E^*$, with $s(\alpha)=r(\lambda)=s(\lambda)$, such that $x= \alpha \lambda^\infty$ (that is, $x$ is obtained by following $\alpha$ and then repeating the cycle $\lambda$ forever). If $x = \alpha \lambda^\infty$, and $\lambda$ has minimal length among all cycles appearing in such a decomposition, then the period of $x$ is defined to be $|\lambda|$ and is denoted $\operatorname{per}(x)$.
} \begin{definition} If $B$ is a $C^*$-algebra then a \emph{Cuntz-Krieger $E$-family} in $B$ is a set $\{S_e, P_v\}_{e \in E^1,v \in E^0}$, where the $S_e$ are partial isometries with mutually orthogonal range projections and the $P_v$ are mutually orthogonal projections which also satisfy: \begin{enumerate}[(i)] \item $S_e^* S_e^{\phantom{*}} = P_{s(e)}$; \item $S_e^{\phantom{*}} S_e^* \leq P_{r(e)}$; \item if $v$ is regular, then $P_v = \sum_{r(e)=v} S_e^{\phantom{*}} S_e^*$. \end{enumerate} The $C^*$-subalgebra of $B$ generated by $\{S_e,P_v\}_{e \in E^1,v \in E^0}$ is denoted $C^*(S,P)$. The graph algebra $C^*(E)$ is the universal $C^*$-algebra generated by a Cuntz-Krieger $E$-family, $C^*(E)=C^*(s,p)$, where $\{s_e,p_v\}$ are the \emph{universal generators}. For any Cuntz-Krieger $E$-family $\{S_e,P_v\}_{e \in E^1,v \in E^0}$ there is a unique $*$-homomorphism $\pi_{S,P}: C^*(E) \to C^*(S,P)$ satisfying $\pi_{S,P}(s_e)=S_e$ and $\pi_{S,P}(p_v)=P_v$. For an $E$-family $\{S,P\}$ and a finite path $\lambda = e_1 \ldots e_n$ in $E^*$, there is an associated partial isometry $S_\lambda = S_{e_1} S_{e_2} \ldots S_{e_n}$ in $C^*(S,P)$. (If $|\lambda|=0$, so $\lambda$ reduces to a vertex $v\in E^0$, then $S_\lambda=P_v$.) When specializing to $C^*(E)$, we have partial isometries denoted $s^{}_\lambda$, $\lambda\in E^*$. By construction, all $s^{}_\lambda\in C^*(E)$, $\lambda\in E^*$ are partial isometries: the source projection of $s^{}_\lambda$ is $s^*_\lambda s^{}_\lambda=p_{s(\lambda)}$; the range projection $s^{}_\lambda s^*_\lambda$ will be denoted from now on by $p^{}_\lambda$. \end{definition} As it turns out, \textcolor{black}{one has the equality} \begin{equation} C^*(E)=\overline{\operatorname{span}} \{s_\alpha^{\phantom{*}} s_\beta^*: \alpha, \beta \in E^*, s(\alpha)=s(\beta) \}. 
\label{C*E=span} \end{equation} The products $s_\alpha^{\phantom{*}} s_\beta^*$ listed in the right-hand side of \eqref{C*E=span} are referred to as the \emph{spanning monomials}, and the set of all these elements is denoted by $G(E)$. The equality \eqref{C*E=span} is due to the fact that $G(E)\cup\{0\}$ is a $*$-semigroup, which is a consequence of the following product rule: \begin{equation} (s^{\phantom{*}}_\alpha s_\beta^*)(s_\lambda^{\phantom{*}}s_\nu^*)= \begin{cases} s^{\phantom{*}}_\alpha s_{\nu(\beta\ominus\lambda)}^*,&\text{if }\lambda\prec\beta\\ s^{\phantom{*}}_{\alpha(\lambda\ominus\beta)}s^*_\nu,&\text{if }\beta\prec\lambda\\ 0,&\text{otherwise} \end{cases} \label{G-prod} \end{equation} Since all projections $p^{}_v$, $v\in E^0$ are mutually orthogonal, for any finite set $V\subset E^0$, the sum $q^{}_V=\sum_{v\in V}p^{}_v$ will again be a projection, and furthermore, the net $(q_V^{})_{V\in \mathcal{P}_{\text{fin}}(E^0)}$ forms an approximate unit for $C^*(E)$, hereafter referred to as the {\em canonical\/} approximate unit. The $*$-subalgebra $\bigcup_{V\in\mathcal{P}_{\text{fin}}(E^0)}q^{}_V C^*(E)q^{}_V$ will be denoted by $C^*(E)_{\text{fin}}$. Passing from a graph to a sub-graph does not always produce a meaningful link between the associated $C^*$-algebras. The objects best suited for establishing such links are identified as follows: given a graph $E$, a subset $H \subset E^0$ is called \begin{itemize} \item \emph{hereditary}, if $r(e) \in H$ implies $s(e) \in H$ \item \emph{saturated}, if whenever $v \in E^0$ is regular and $\{s(e): e \in r^{-1}(v) \} \subset H$, it follows that $v \in H$. \end{itemize} Any subset $H \subset E^0$ is contained in a minimal saturated set $\overline{H}$ called its \emph{saturation}, which is the union $\overline{H}=\bigcup_{k=0}^\infty H_k$, where $H_0 = H$ and, for $k \geq 1$, \begin{equation} H_k = H_{k-1}\cup \{v \in E^0:\text{ $v$ regular and } s(r^{-1}(v)) \subset H_{k-1} \}.
\label{saturation}\end{equation} Clearly, the saturation of a hereditary set is again hereditary. The main point about considering such sets is the fact (see \cite{Raeburn}) that, whenever $H\subset E^0$ is saturated and hereditary, and we form the sub-graph $$E \setminus H = (E^0 \smallsetminus H, s^{-1}(E^0 \smallsetminus H), r,s),$$ then we have a natural surjective $*$-homomorphism $\rho_H:C^*(E)\to C^*(E\setminus H)$, defined on the generators as $$ \rho_H(p^{}_v)=\begin{cases} p^{}_v,&\text{if }v\in E^0\smallsetminus H;\\ 0,&\text{otherwise;}\end{cases} \quad \rho_H(s^{}_e)= \begin{cases} s^{}_e,&\text{if }s(e)\in E^0\smallsetminus H;\\ 0,&\text{otherwise.}\end{cases} $$ (A sub-graph of this form will be called {\em canonical}.) The ideal $\ker\rho_H$ is simply the closed two-sided ideal generated by $\{p^{}_v\}_{v\in H}$; alternatively, it is also described as: $$\ker\rho_H=\overline{\text{span}}\{s^{}_\alpha s^*_\beta\,:\,\alpha,\beta\in E^*,\,s(\alpha)=s(\beta)\in H\}.$$ The {\em gauge action\/} on $C^*(E)$ is the point-norm continuous group homomorphism $\gamma:\mathbb{T}\ni z\longmapsto \gamma_z\in\text{Aut}\big(C^*(E)\big)$, given on the generators by $\gamma_z(p_v)=p_v$, $v\in E^0$ and $\gamma_z(s_e)=z s_e$, $e\in E^1$. On the spanning monomials listed above, the automorphisms $\gamma_z$, $z\in\mathbb{T}$, act as $\gamma_z(s_\alpha^{\phantom{*}} s_\beta^*)=z^{|\alpha|-|\beta|}s_\alpha^{\phantom{*}} s_\beta^*$. 
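To illustrate these constructions on the simplest non-trivial example, let $E$ be the graph with a single vertex $v$ and a single loop $e$ (so $r(e)=s(e)=v$). Then $v$ is regular, and the Cuntz-Krieger relations force $s_e^*s_e^{\phantom{*}}=p_v=s_e^{\phantom{*}}s_e^*$, so $s_e$ is a unitary in the unital $C^*$-algebra $C^*(E)$ (with unit $p_v$). By universality, $C^*(E)$ is then the universal $C^*$-algebra generated by a unitary, that is, $C^*(E)\simeq C(\mathbb{T})$ via $s_e\longmapsto u$, where $u(w)=w$, $w\in\mathbb{T}$. Under this identification, the spanning monomials $s_e^{m}s_e^{*n}$ correspond to the characters $w\longmapsto w^{m-n}$, and the gauge action becomes the rotation action: $\gamma_z(\varphi)(w)=\varphi(zw)$, $\varphi\in C(\mathbb{T})$.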
The {\em gauge invariant uniqueness theorem\/} of an Huef and Raeburn (see \cite{HuefRaeburn}) states that, given some $C^*$-algebra $\mathcal{A}$ equipped with a group homomorphism $\theta:\mathbb{T}\ni z\longmapsto \theta_z\in\text{Aut}(\mathcal{A})$, and a gauge invariant $*$-homomorphism $\pi:C^*(E)\to \mathcal{A}$ (that is, such that $\theta_z\left(\pi(x)\right)= \pi\left(\gamma_z(x)\right)$, $\forall\,x\in C^*(E)$, $z\in\mathbb{T}$), the condition that $\pi$ is injective is equivalent to the condition that $\pi(p_v^{})\neq 0$, for all $v\in E^0$. There are two distinguished abelian $C^*$-subalgebras of $C^*(E)$ which we use to define states on $C^*(E)$, the first of which is defined as follows. \begin{definition} Let $E$ be a directed graph. Then the \emph{diagonal} $\mathcal{D} \subset C^*(E)$ is the $C^*$-subalgebra of $C^*(E)$ generated by the set $G_{\mathcal{D}}(E)=\{p^{}_\alpha\}_{\alpha \in E^*}$. (We sometimes use the notation $\mathcal{D}(E)$, when specifying the graph is necessary.) \end{definition} \begin{remark} \label{diag-desc} As it turns out, $G_{\mathcal{D}}(E)\cup \{0\}$ is an abelian semigroup of projections; more specifically, by \eqref{G-prod}, the product rule for $G_{\mathcal{D}}(E)$ is: \begin{equation} p^{}_\alpha p^{}_\beta= p^{}_\beta p^{}_\alpha = \begin{cases} p^{}_\alpha,&\text{if }\beta\prec\alpha\\ p^{}_\beta,&\text{if }\alpha\prec\beta\\ 0,&\text{otherwise} \end{cases} \label{GD-prod} \end{equation} Using the semigroup property, it follows that we can in fact present $\mathcal{D}(E) = \overline{\operatorname{span}}\,G_{\mathcal{D}}(E)$. 
We can also write $\mathcal{D}(E)=\left[\sum_{v\in E^0}\mathcal{D}(E)p^{}_v\right]^{-}$, with each summand presented as $$\mathcal{D}(E)p^{}_v= \overline{\text{span}}\{p^{}_\alpha\,:\,\alpha \in E^*,\,p^{}_\alpha\leq p^{}_v\} =\overline{\text{span}}\{p^{}_\alpha\,:\,\alpha \in E^*,\,\,r(\alpha)=v\}.$$ As it turns out, each corner $\mathcal{D}(E)p^{}_v$ is in fact a unital abelian AF-subalgebra, with unit $p^{}_v$, so $\mathcal{D}$ itself is an abelian AF-algebra, which contains the canonical approximate unit $(q_V^{})_{V\in \mathcal{P}_{\text{fin}}(E^0)}$. As explained for instance in \cite{NR}, the Gelfand spectrum $\widehat{\mathcal{D}(E)}$ of the diagonal $C^*$-subalgebra $\mathcal{D}(E)$ can be identified with the set \[ E^{\leq \infty} = E^\infty\cup \{ x \in E^*: s(x) \text{ is singular } \} \] with evaluation maps defined by $ev^{\mathcal{D}}_x(p^{}_\alpha) = 1$ if $\alpha\prec x$, and $0$ otherwise. In other words, for each $\alpha\in E^*$, when we view $p_\alpha^{}\in\mathcal{D}(E)$ as a continuous function on $\widehat{\mathcal{D}(E)}\simeq E^{\leq\infty}$, this function will be the indicator function of the compact-open set $Z(\alpha)=\{x\in E^{\leq\infty}\,:\,\alpha\prec x\}$. Furthermore, the sets $Z(\alpha)$, $\alpha\in E^*$ form a basis for the topology, so clearly $\widehat{\mathcal{D}(E)}$ is totally disconnected. When identifying $\mathcal{D}(E)\simeq C_0\big(\widehat{\mathcal{D}(E)}\big)$, the algebraic sum (without closure!) $\mathcal{D}(E)_{\text{fin}}=\sum_{v\in E^0}\mathcal{D}(E)p^{}_v$ gets naturally identified with $C_c\big(\widehat{\mathcal{D}(E)}\big)$, the algebra of continuous functions with compact support. \end{remark} \begin{remark}\label{orthogonality} Cylinder sets can be used to analyze path (in)comparability. To be more precise, given two paths, $\alpha,\beta\in E^*$, the following statements hold. \begin{itemize} \item[I.]
(Comparability Rule) The inequality $\alpha\prec\beta$ is equivalent to the reverse inclusion $Z(\alpha)\supset Z(\beta)$. \item[II.] (Orthogonality Rule) Conditions (i)--(iv) below are equivalent: \begin{itemize} \item[(i)] $s^*_\alpha s^{}_\beta=0$; \item[(ii)] the projections $p^{}_\alpha$ and $p^{}_\beta$ are orthogonal, i.e. $p^{}_\alpha p^{}_\beta=0$; \item[(iii)] $\alpha$ and $\beta$ are {\em incomparable}, i.e. $\alpha\not\prec\beta$ and $\beta\not\prec\alpha$; \item[(iv)] $Z(\alpha)\cap Z(\beta)=\varnothing$. \end{itemize} \end{itemize} \end{remark} \begin{remark} Among all paths $x\in E^{\leq\infty}$, the ones of interest to us will be those that represent isolated points in the spectrum $\widehat{\mathcal{D}(E)}$. On the one hand, if $E$ has {\em sources\/} (i.e. vertices $v\in E^0$ with $r^{-1}(v)=\varnothing$), then all finite paths that start at sources determine isolated points in $\widehat{\mathcal{D}(E)}$. On the other hand, the infinite paths $x=e_1e_2\dots\in E^\infty$ that produce isolated points in $\widehat{\mathcal{D}(E)}$ are precisely those with the property that there exists $k$ such that $r^{-1}(r(e_n))=\{e_n\}$, for all $n\geq k$. If this is the case and we form $\alpha=e_1e_2\dots e_{k-1}$, then $\{x\}=Z(\alpha)$. Among those paths, the periodic ones will play an important role in our discussion. \end{remark} \begin{definition} A finite path $\alpha=e_1e_2\dots e_n\in E^*$ (possibly of length zero) is called a \emph{ray} if there is a simple entry-less cycle $\nu$, such that $s(\alpha)=s(\nu)$, and furthermore, no edge $e_k$ from $\alpha$ appears in $\nu$. (Note: In \cite{NR}, rays were called {\em distinguished\/} paths.) In this case, the cycle $\nu$ (which is uniquely determined by $\alpha$) is referred to as the {\em seed\/} of $\alpha$. We caution the reader that zero-length rays are permitted: they are what we will call {\em cyclic vertices}.
For reasons explained in the second paragraph below, the (possibly empty) set of all rays in $E$ will be denoted by $E^*_{\text{\rm\sc ip}}$. By definition, any two distinct rays $\alpha_1\neq \alpha_2$ are incomparable, so by the Orthogonality Rule (Remark \ref{orthogonality}) they satisfy: $s^*_{\alpha_1}s^{}_{\alpha_2}=s^*_{\alpha_2}s^{}_{\alpha_1}=0$. Clearly, rays parametrize the set $E^\infty_{\text{\rm\sc ip}}$ of infinite periodic paths that yield isolated points in $\widehat{\mathcal{D}(E)}$: any such path can be uniquely presented as $x=\alpha\nu^\infty$, with $\alpha$ a ray and $\nu$ the seed of $\alpha$, and its period (as a function from $\mathbb{N}$ to $E^1$) is $\text{per}(x)=|\nu|$. When it is necessary to emphasize the sole dependence on $\alpha$, we also denote the infinite path $\alpha\nu^\infty$ simply by $\xi_\alpha$. When we collect the corresponding points in $\widehat{\mathcal{D}(E)}$, we obtain a countable open set $\Sigma_{\text{\rm\sc ip}}=\{ev^{\mathcal{D}}_x\,:\,x\in E^\infty_{\text{\rm\sc ip}}\}\subset\widehat{\mathcal{D}(E)}$. \end{definition} \begin{remark}\label{path-repn} Associated with the space $E^{\leq \infty}$ we have the \emph{path representation} $\pi_{\text{path}}: C^*(E) \to B(\ell^2(E^{\leq \infty}))$ given on generators by (see \cite{Raeburn} for details): $$ \pi_{\text{path}}(s_e)\delta_x = \begin{cases} \delta_{ex} & r(x)=s(e) \\ 0 & \text{ otherwise; } \end{cases} \qquad\qquad \pi_{\text{path}}(p_v)\delta_x = \begin{cases} \delta_x & r(x) =v \\ 0 & \text{ otherwise. } \end{cases} $$ In general, $\pi_{\text{path}}$ is not faithful; however, it is always faithful on the diagonal subalgebra $\mathcal{D}(E)$. This embedding gives us an explicit form of the identification $\widehat{\mathcal{D}(E)} = E^{\leq \infty}$ as follows: for $x \in E^{\leq \infty}$, the associated character on $\mathcal{D}(E)$ is simply $ev^{\mathcal{D}}_x(a)= \langle \delta_x | \pi_{\text{path}}(a) \delta_x \rangle$.
For future use, we denote the subalgebras $\pi_{\text{path}}(\mathcal{D}(E))$ and $\pi_{\text{path}}(C^*(E))$ of $B(\ell^2(E^{\leq \infty}))$ by $D_{\text{path}}(E)$ and $A_{\text{path}}(E)$, respectively. \end{remark} \begin{notation} As shown in \cite[Prop. 3.1]{NR}, a spanning monomial $b=s_\alpha^{\phantom{*}} s_\beta^* \in C^*(E)$ is normal if and only if one of the following holds: \begin{itemize} \item[$(a)$] $\alpha = \beta$, so $b=s_\alpha^{\phantom{*}}s_\alpha^*\in G_{\mathcal{D}}(E)$; \item[$(b)$] $\alpha\prec\beta$ and $\beta\ominus\alpha$ is an entry-less cycle; \item[$(c)$] $\beta\prec\alpha$ and $\alpha\ominus\beta$ is an entry-less cycle. \end{itemize} The set of normal spanning monomials in $C^*(E)$ is denoted by $G_{\mathcal{M}}(E)$. \end{notation} \begin{definition} The \emph{abelian core} $\mathcal{M}(E)$ is the $C^*$-subalgebra of $C^*(E)$ generated by the set $G_{\mathcal{M}}(E)$ of normal spanning monomials. \end{definition} \begin{notations} If $b\in G_{\mathcal{M}}(E)\smallsetminus G_{\mathcal{D}}(E)$ (i.e. $b$ is of either type $(b)$ or $(c)$ above), then $b$ is a normal partial isometry, so its adjoint $b^*$ also acts as its pseudo-inverse. For this reason, we will denote $b^*$ simply by $b^{-1}$. More generally, we will allow arbitrary negative integer exponents, by letting $b^{-m}$ be an alternative notation for $b^{*m}$. We will also allow zero exponents, by agreeing that $b^0=bb^*=b^*b$, a monomial which in fact belongs to $G_{\mathcal{D}}(E)$. (Equivalently, for any $b\in G_{\mathcal{M}}(E)\smallsetminus G_{\mathcal{D}}(E)$, the $C^*$-subalgebra $C^*(b)\subset C^*(E)$ generated by $b$ is a unital abelian $C^*$-algebra, and $b$ is a unitary element in $C^*(b)$.)
\end{notations} \begin{remark} In general, for a monomial $b\in G_{\mathcal{M}}(E)\smallsetminus G_{\mathcal{D}}(E)$, there might be multiple ways to present it as $s_\alpha^{\phantom{*}} s_\beta^*$, with $\alpha$ and $\beta$ as in $(b)$ or $(c)$ above, but after careful inspection, one can show that $b$ can be uniquely presented as $b=s_\alpha^{\phantom{*}} s_\nu^m s_\alpha^*=(s_\alpha ^{}s_\nu^{}s_\alpha^*)^m$, where $\alpha\in E^*$ is a ray with seed $\nu$ and $m$ is some non-zero integer, so if we let $b_\alpha=s_\alpha^{\phantom{*}} s_\nu^{} s_\alpha^*$ (recall that $\nu$ is uniquely determined by $\alpha$), then we can present $$ G_{\mathcal{M}}(E)\smallsetminus G_{\mathcal{D}}(E)=\{b_\alpha^m\,:\,\text{$\alpha$ ray, $m$ non-zero integer}\}. $$ Clearly, using our exponent conventions, $G_{\mathcal{M}}(E)\smallsetminus G_{\mathcal{D}}(E)$ is closed under taking adjoints, because $(b_\alpha^m)^*=b_\alpha^{-m}$. As it turns out, $G_{\mathcal{M}}(E)\cup \{0\}$ is an abelian $*$-semigroup; besides the product rules \eqref{GD-prod} for $G_{\mathcal{D}}(E)$, the remaining rules which involve the monomials in $G_{\mathcal{M}}(E)\smallsetminus G_{\mathcal{D}}(E)$ are: \begin{align} b_\alpha^0&= p_\alpha^{},\,\,\text{ for all rays }\alpha;\\ b_\alpha^m p_\beta^{}= p_\beta^{}b_\alpha^m &=\begin{cases} b_\alpha^m,&\text{if }\beta\prec\xi_\alpha\\ 0,&\text{otherwise} \end{cases} \label{GM-prod1}\\ b_{\alpha_1}^{m_1}b_{\alpha_2}^{m_2}= b_{\alpha_2}^{m_2}b_{\alpha_1}^{m_1} &=\begin{cases} b_{\alpha_1}^{m_1+m_2},&\text{if }\alpha_1=\alpha_2\\ 0,&\text{otherwise} \end{cases} \label{GM-prod2} \end{align} By the above $*$-semigroup property, $\mathcal{M}(E)\subset C^*(E)$ is an abelian $C^*$-subalgebra which contains $\mathcal{D}(E)$, and it can also be described as $\mathcal{M}(E) = \overline{\operatorname{span}}\,G_{\mathcal{M}}(E)$. 
Furthermore, the images of $\mathcal{D}(E)$ and $\mathcal{M}(E)$ under the path representation agree; that is, $\pi_{\text{path}}(\mathcal{M}(E)) = D_{\text{path}}(E)$. In general, $\mathcal{M}(E)$ is much larger than $\mathcal{D}(E)$; in fact, $\mathcal{M}(E)=\mathcal{D}(E)'$, the commutant of $\mathcal{D}(E)$ in $C^*(E)$. As was the case with the diagonal, we have $\mathcal{M}(E)=\left[\sum_{v\in E^0}\mathcal{M}(E)p^{}_v\right]^{-}$, with the summand $\mathcal{M}(E)p^{}_v$ now presented as $$\overline{\text{span}}\left(\{b^m_\alpha\,:\,\,m \in\mathbb{Z},\,\alpha\in E^*_{\text{\rm\sc ip}},\,r(\alpha)=v\}\cup\{p^{}_\alpha\,:\,\alpha\in E^*,\,r(\alpha)=v\}\right),$$ so, upon identifying $\mathcal{M}(E)\simeq C_0\big(\widehat{\mathcal{M}(E)}\big)$, the (non-norm-closed) algebraic sum $\mathcal{M}(E)_{\text{fin}}=\sum_{v\in E^0}\mathcal{M}(E)p^{}_v$ \textcolor{black}{is} naturally identified with $C_c\big(\widehat{\mathcal{M}(E)}\big)$, the algebra of continuous functions with compact support. \end{remark} \begin{definition}\label{twisted-path-rep}(Twisted path representation.) With the notation as above, define the \emph{twisted representation} $\Theta: C^*(E) \to C\big(\mathbb{T},A_{\text{path}}(E)\big)$ by \[ \Theta(a)(z)=\pi_{\text{path}}(\gamma_z(a)) \qquad z \in \mathbb{T}, a \in C^*(E) .\] For any pair $(z,x)\in\mathbb{T}\times E^{\leq\infty}$, we define the state $\omega_{z,x}$ on $C^*(E)$ by $$\omega_{z,x}(a)=\langle \delta_x | \Theta(a)(z) \delta_x\rangle.$$ \end{definition} \begin{remark} \label{core-embed} As $\pi_{\text{path}}$ is injective on $\mathcal{D}(E)$, the gauge-invariant uniqueness theorem implies that $\Theta$ is injective. (The gauge action on the codomain is by translation: $(\lambda_z(f))(w)=f(z^{-1}w)$.) In particular, $\Theta$ yields an injection of $\mathcal{M}(E)$ into $C(\mathbb{T},D_{\text{path}}(E))$. 
Therefore the spectrum of $\mathcal{M}(E)$ can be recovered as a quotient of the spectrum of $C(\mathbb{T},D_{\text{path}}(E))$ (that is, $\mathbb{T} \times E^{\leq \infty}$), by the natural equivalence relation implemented by $\Theta$. Specifically, if $(z,x) \in \mathbb{T} \times E^{\leq \infty}$, then the restriction $\omega_{z,x}|_{\mathcal{M}(E)}$ is a pure state on $\mathcal{M}(E)$. The equivalence relation $\sim$ on $\mathbb{T} \times E^{\leq \infty}$ is simply given by: \begin{equation} (z_1,x_1)\sim (z_2,x_2)\Leftrightarrow \omega_{z_1,x_1}|_{\mathcal{M}(E)}=\omega_{z_2,x_2}|_{\mathcal{M}(E)}. \label{equiv-core} \end{equation} Since the restrictions of these states to the diagonal act as $\omega_{z,x}|_{\mathcal{D}(E)}=ev^{\mathcal{D}}_x$, it is fairly obvious that $(z_1,x_1)\sim (z_2,x_2)$ implies $x_1=x_2$. The precise description of the equivalence classes $(z,x)_\sim=\{(z_1,x_1)\in \mathbb{T}\times E^{\leq\infty}\,:\,(z_1,x_1)\sim (z,x)\}$ goes as follows. \begin{equation} (z,x)_\sim=\begin{cases}z\mathbb{U}_{\text{per}(x)}\times \{x\},&\text{if $x\in E^\infty_{\text{\rm\sc ip}}$}\\ \mathbb{T}\times\{x\},&\text{if $x\in E^{\leq\infty}\smallsetminus E^\infty_{\text{\rm\sc ip}}$} \end{cases} \end{equation} (For any integer $n\geq 1$, the symbol $\mathbb{U}_n$ denotes the group of $n^{\text{th}}$ roots of unity.) \end{remark} \begin{lemma} \label{top-spec} Let $E$ be a directed graph. \begin{itemize} \item[(i)] When we equip the quotient space $\mathbb{T} \times E^{\leq \infty}\!/\!\sim$ with the quotient topology, the map $(z,x)_\sim \longmapsto \omega_{z,x}|_{\mathcal{M}(E)}$ is a homeomorphism onto the spectrum of $\mathcal{M}(E)$. \item[(ii)] For every ray $\alpha$, if we regard $p_\alpha^{}$ as a continuous function on $\widehat{\mathcal{M}(E)}$, then $p_\alpha^{}$ is the characteristic function of a compact-open subset $\mathbf{T}_\alpha$, which is homeomorphic to $\mathbb{T}$.
Specifically, if $\nu$ is the seed of $\alpha$, and $x=\alpha\nu^\infty\in E^\infty_{\text{\rm\sc ip}}$ is the associated periodic path, then $\mathbf{T}_\alpha=\{(z,x)_\sim\}_{z\in\mathbb{T}}$ and the map $\mathbb{T}/\mathbb{U}_{|\nu|}\ni z\mathbb{U}_{|\nu|}\longmapsto (z,x)_\sim\in \mathbf{T}_\alpha$ is a homeomorphism. Alternatively, $\mathbf{T}_\alpha$ is naturally identified with the spectrum -- computed in the unital $C^*$-algebra $C^*(b_\alpha)$ -- of the normal partial isometry $b^{}_\alpha=s_\alpha^{\phantom{*}} s_\nu^{\phantom{*}} s_\alpha^*$. \item[(iii)] The compact-open sets $(\mathbf{T}_\alpha)_{\alpha\in E^*_{\text{\rm\sc ip}}}$ are mutually disjoint. When we consider $\Omega_{\text{\rm\sc ip}} = \bigcup_{\alpha\in E^*_{\text{\rm\sc ip}}} \mathbf{T}_\alpha$, and fix a positive Radon measure $\mu$ on $\widehat{\mathcal{M}(E)}$ with corresponding positive linear functional $\phi_\mu$ on $\mathcal{M}(E)_{\text{\rm fin}}=C_c(\widehat{\mathcal{M}(E)})$, then \begin{equation} \int_{\Omega_{\text{\rm\sc ip}}} f d\mu = \sum_{\alpha\in E^*_{\text{\rm\sc ip}}} \phi_\mu(f p^{}_\alpha) \label{int-cycl} \end{equation} for all $f \in \mathcal{M}(E)_{\text{\rm fin}}=C_c(\widehat{\mathcal{M}(E)})$. \end{itemize} \end{lemma} \begin{proof} Parts (i) and (ii) are established in \cite{NR} and \cite{BNR}. For part (iii) we only need to justify the first statement, because the rest follows from the Lebesgue dominated convergence theorem. This follows immediately from the observation that any two distinct rays $\alpha_1$, $\alpha_2$ are incomparable, so by \eqref{G-prod} the projections $p_{\alpha_1}^{}$ and $p_{\alpha_2}^{}$ are orthogonal, thus the sets $\{\mathbf{T}_\alpha\}_{\alpha\text{ ray}}$ form a countable disjoint compact-open cover of $\Omega_{\text{\rm\sc ip}}$.
\end{proof} \begin{remark} Both $\mathcal{D}(E)$ and $\mathcal{M}(E)$ are abelian regular $C^*$-subalgebras in $C^*(E)$, since all generators $p_v^{}$, $v\in E^0$ and $s_e^{}$, $e\in E^1$, normalize both of them. It is shown in \cite{NR} that $\mathcal{M}(E)$ is in fact a Cartan subalgebra of $C^*(E)$, with its (unique) conditional expectation acting on generators as \begin{equation} \mathbb{E}_{\mathcal{M}}(s_\alpha^{\phantom{*}} s_\beta^*) = \begin{cases} s_\alpha^{\phantom{*}} s_\beta^*, & \text{if }s_\alpha^{\phantom{*}} s_\beta^* \in G_{\mathcal{M}}(E)\\ 0, & \text{otherwise} \\ \end{cases} \label{pm} \end{equation} \end{remark} Within this framework, Theorem \ref{phiP-trace-thm} has the following consequence. \begin{corollary}\label{cor-M-inv} For a state $\phi$ on $\mathcal{M}(E)$, the following conditions are equivalent: \begin{itemize} \item[(i)] The composition $\phi \circ \mathbb{E}_{\mathcal{M}}$ is a tracial state on $C^*(E)$. \item[(ii)] $\phi$ is $s_e^{}$-invariant for all $e \in E^1$. \item[(iii)] $\phi$ is fully invariant. \qed \end{itemize} \end{corollary} \begin{remark} In general, $\mathcal{D}(E)$ is not Cartan, and there may exist more than one conditional expectation onto it. One expectation -- hereafter referred to as the {\em Haar expectation\/} -- always exists, defined as $$\mathbb{E}_{\mathcal{D}}(a)=\int_\mathbb{T}\gamma_z\left(\mathbb{E}_{\mathcal{M}}(a)\right)\,dm(z)= \int_\mathbb{T}\mathbb{E}_{\mathcal{M}}\left(\gamma_z(a)\right)\,dm(z).$$ (Here $m$ denotes the \textcolor{black}{normalized} Lebesgue measure on $\mathbb{T}$; the second equality follows from \eqref{pm}, which clearly implies that $\mathbb{E}_{\mathcal{M}}$ is gauge invariant.) 
The Haar expectation acts on the spanning monomials as: \begin{equation} \mathbb{E}_{\mathcal{D}}(s_\alpha^{\phantom{*}} s_\beta^*) = \begin{cases} p_\alpha^{}, & \text{if }\alpha=\beta\\ 0, & \text{otherwise} \\ \end{cases} \label{pD} \end{equation} Since the integration map $\int_{\mathbb{T}}\gamma_z(a)\,dm(z)$ is always a faithful positive map, it follows that $\mathbb{E}_{\mathcal{D}}$ is faithful. Using formula \eqref{pD} it is easy to see that $\mathbb{E}_{\mathcal{D}}$ is also normalized by all $p^{}_v$, $v\in E^0$, and $s^{}_e$, $s^*_e$, $e\in E^1$, so we also have the following analogue of Corollary \ref{cor-M-inv}. \end{remark} \begin{corollary}\label{cor-D-inv} For a state $\psi$ on $\mathcal{D}(E)$, the following conditions are equivalent: \begin{itemize} \item[(i)] The composition $\psi \circ \mathbb{E}_{\mathcal{D}}$ is a tracial state on $C^*(E)$. \item[(ii)] $\psi$ is $s_e^{}$-invariant for all $e \in E^1$. \item[(iii)] $\psi$ is fully invariant.\qed \end{itemize} \end{corollary} \begin{remark}\label{rem-inv} Either using Corollary \ref{cor-D-inv} or directly \textcolor{black}{from} the definition, it follows that any fully invariant state $\psi$ on $\mathcal{D}(E)$ satisfies \begin{equation} \forall\,\alpha\in E^*:\,\,\,\psi(p^{}_{\alpha})=\psi(p^{}_{s(\alpha)}). \label{SinvD-proj} \end{equation} In particular, a fully invariant state on $\mathcal{D}(E)$ is completely determined by its values on the projections $p_v$, $v\in E^0$. \end{remark} \begin{definition}\label{def-gr-tr} Let $E$ be a directed graph. A \emph{graph trace} on $E$ is a function $g: E^0 \to [0,\infty)$ such that: \begin{itemize} \item[{\sc (a)}] for any $v \in E^0$, $g(v) \geq \sum_{e: r(e)=v} g(s(e))$; \item[{\sc (b)}] for any regular $v$, we have equality in {\sc (a)}. \end{itemize} \textcolor{black}{Note that}, for any graph trace $g$, its null space $N_g=\{v\in E^0\,:\,g(v)=0\}$ is a saturated hereditary set.
Depending on the quantity $\|g\|_1=\sum_{v\in E^0}g(v)$, a graph trace $g$ is declared {\em finite}, if $\|g\|_1<\infty$, or {\em infinite}, otherwise. We denote the set of all graph traces on $E$ by $T(E)$, and the set of finite graph traces on $E$ by $T_{\text{fin}}(E)$. Lastly, we define the set $T_1(E)=\{g\in T(E):\,\|g\|_1=1\}$, the elements of which are termed {\em normalized\/} graph traces. \end{definition} \begin{theorem}\label{gr-tr-char} A map $g:E^0\to [0,\infty)$ is a graph trace on $E$, if and only if, every finite tuple $\Xi=(\xi_i,\lambda_i)_{i\in I}\subset\mathbb{R}\times E^*$ satisfies \begin{equation} \textstyle{\sum_{i\in I}\xi_i p^{}_{\lambda_i}\geq 0} \,\Rightarrow \, \textstyle{\sum_{i\in I}\xi_ig(s(\lambda_i))\geq 0}. \label{gr-tr-char-thm} \end{equation} \end{theorem} \begin{proof} To prove the ``if'' implication, assume $g$ satisfies condition \eqref{gr-tr-char-thm} and let us verify conditions {\sc (a)} and {\sc (b)} from Definition \ref{def-gr-tr}. To check condition {\sc(a)}, start off by fixing some $v\in E^0$, and notice that, for every finite set $F\subset r^{-1}(v)$, we have $p^{}_v\geq\sum_{e\in F}p^{}_e$ (by the Cuntz-Krieger relations), so by \eqref{gr-tr-char-thm} it follows that $g(v)\geq\sum_{e\in F}g(s(e))$; this clearly implies the inequality $g(v)\geq\sum_{e\in r^{-1}(v)}g(s(e))$. In order to check {\sc (b)}, simply notice that, if $v$ is regular (so $r^{-1}(v)$ is both finite and non-empty), then by the Cuntz-Krieger relations, we have an equality $p^{}_v = \sum_{e\in r^{-1}(v)}p^{}_e$, so applying \eqref{gr-tr-char-thm} both ways (writing the equality as two inequalities), we clearly get $g(v)=\sum_{e\in r^{-1}(v)}g(s(e))$. To prove the ``only if'' implication, we fix a graph trace $g$ and we prove the implication \eqref{gr-tr-char-thm}.
As a matter of terminology, if a tuple $\Xi$ satisfies the inequality \begin{equation} \textstyle{\sum_{i\in I}\xi_ip^{}_{\lambda_i}\geq 0}, \label{lemma-ineq-tr} \end{equation} we will call $\Xi$ {\em admissible}. Our proof will use induction on the number $\langle \Xi \rangle=|I|+\sum_{i\in I}|\lambda_i|$. If $\langle \Xi\rangle =1$, then $|I|=1$, thus $I$ is a singleton $\{i_0\}$ and $\lambda_{i_0}$ is a path of length $0$, i.e. a vertex $v\in E^0$; in this case, \eqref{gr-tr-char-thm} is the same as the implication ``$\xi p^{}_v\geq 0\Rightarrow \xi g(v)\geq 0$,'' which is trivial, since $g$ takes non-negative values. Assume \eqref{gr-tr-char-thm} holds whenever $\langle \Xi\rangle <N$, for some $N>1$, and let us show that \eqref{gr-tr-char-thm} holds when $\langle\Xi\rangle =N$. Fix an admissible tuple $\Xi$ with $\langle\Xi\rangle =N$ (so \eqref{lemma-ineq-tr} is satisfied), and let us prove the inequality \begin{equation} \textstyle{\sum_{i\in I}\xi_ig(s(\lambda_i))\geq 0}. \label{lemma-ineq-tr-conc} \end{equation} If we consider the set $W=\{r(\lambda_i)\,:\,i\in I\}$, then we can split (disjointly) $I=\bigcup_{v\in W}I_v$, where $I_v=\{i\,:\,r(\lambda_i)=v\}$ and we will have $$\textstyle{\sum_{i\in I}\xi_i g(s(\lambda_i))= \sum_{v\in W}\sum_{i\in I_v}\xi_i g(s(\lambda_i))},$$ with each tuple $\Xi_v=(\xi_i,\lambda_i)_{i\in I_v}$ admissible. (This is obtained by multiplying the inequality \eqref{lemma-ineq-tr} by $p^{}_v$.) In the case when $W$ has at least two vertices, we have $\langle \Xi_v\rangle< \langle \Xi\rangle$, $\forall\,v\in W$, so the inductive hypothesis can be used, and the desired conclusion follows. Based on the above argument, for the remainder of the proof we can assume that $W$ is a singleton, so we have a vertex $v\in E^0$, such that $r(\lambda_i)=v$, $\forall\,i\in I$. Split $I=I^0\cup I^+$, where $I^0=\{i\in I\,:\,|\lambda_i|=0\}$ and $I^+=\{i\in I\,:\,|\lambda_i|>0\}$.
Since $W$ is a singleton, the set $I^0$ consists of all $i\in I$ for which $\lambda_i=v$. The case when $I^+=\varnothing$ is trivial, because that would mean that all $\lambda_i$ are equal to $v$, so for the remainder of the proof we are going to assume that $I^+\neq\varnothing$. With this set-up the hypothesis \eqref{lemma-ineq-tr} reads \begin{equation} \textstyle{\big(\sum_{i\in I^0}\xi_i\big)p_v+\sum_{i\in I^+}\xi_ip^{}_{\lambda_i}\geq 0}, \label{lemma-ineq-tr0} \end{equation} and the desired conclusion \eqref{lemma-ineq-tr-conc} reads: \begin{equation} \textstyle{\big(\sum_{i\in I^0}\xi_i\big)g(v)+\sum_{i\in I^+}\xi_i g(s(\lambda_i))\geq 0}. \label{lemma-ineq-tr-conc0} \end{equation} (In the case when $I^0=\varnothing$, we let $\sum_{i\in I^0}\xi_i=0$.) Since $I^+$ is non-empty (and finite), we can find a finite non-empty set $F\subset E^1$ which allows us to split $I^+$ as a disjoint union of non-empty sets $I^+=\bigcup_{e\in F}I_e$, where $I_e=\{i\in I^+\,:\,\lambda_i\succ e\}$. Using the Cuntz-Krieger relations, it follows that the element $q=\sum_{e\in F}s^{}_es^*_e\in\mathcal{D}$ is a projection satisfying $q\leq p^{}_v$, so the difference $q'=p^{}_v-q$ is also a (possibly zero) projection. In either case, it follows that $q's^{}_{\lambda_i}s^*_{\lambda_i}=0$, $\forall\,i\in I^+$, so when we multiply \eqref{lemma-ineq-tr0} by $q'$ we obtain: \begin{equation} \textstyle{\big(\sum_{i\in I^0}\xi_i\big) q'\geq 0}. \label{J0-q'} \end{equation} Likewise, multiplying \eqref{lemma-ineq-tr0} by each $s^{}_{e}s^*_{e}$ we obtain $$\textstyle{\big(\sum_{i\in I^0}\xi_i\big) s^{}_{e}s^*_{e}+ \sum_{i\in I_e}\xi_is^{}_{\lambda_i} s^*_{\lambda_i}\geq 0},$$ so if we multiply on the left by $s^*_{e}$ and on the right by $s^{}_{e}$, we obtain: \begin{equation} \textstyle{\big(\sum_{i\in I^0}\xi_i\big) p^{}_{s(e)} +\sum_{i\in I_e}\xi_is^{}_{\lambda_i\ominus e} s^*_{\lambda_i\ominus e}\geq 0}.
\label{Xie-ok} \end{equation} For each $e\in F$, we can form the tuple $\tilde\Xi_e=(\xi_i,\tilde{\lambda}_i)_{i\in I^0\cup I_e}$ by letting $$\tilde{\lambda}_i=\begin{cases} s(e),&\text{if }i\in I^0\\ \lambda_i\ominus e,&\text{if }i\in I_e \end{cases} $$ and then \eqref{Xie-ok} shows that all $\tilde{\Xi}_e$ are admissible. Since we obviously have $\langle \tilde{\Xi}_e\rangle <\langle\Xi\rangle$, by the inductive hypothesis we obtain $\big(\sum_{i\in I^0}\xi_i\big) g(s(e)) +\sum_{i\in I_e}\xi_i g(s({\lambda_i\ominus e}))\geq 0$, which combined with the obvious equality $s(\lambda_i\ominus e)=s(\lambda_i)$ yields: \begin{equation} \textstyle{\big(\sum_{i\in I^0}\xi_i\big) g(s(e)) +\sum_{i\in I_e}\xi_i g(s(\lambda_i))\geq 0}. \label{Xie-ok-ind} \end{equation} When we sum all these inequalities (over $e\in F$), we obtain: \begin{equation} \textstyle{\big(\sum_{i\in I^0}\xi_i\big) \big(\sum_{e\in F}^{}g(s(e))\big) +\sum_{i\in I^+}\xi_i g(s(\lambda_i))\geq 0}. \label{Xie-ok-ind-almost} \end{equation} Comparing this inequality with the desired conclusion \eqref{lemma-ineq-tr-conc0}, we see that it suffices to show that \begin{equation} \textstyle{\big(\sum_{i\in I^0}\xi_i\big)g(v) \geq \big(\sum_{i\in I^0}\xi_i\big) \big(\sum_{e\in F}^{}g(s(e))\big)}. \label{lemma-ineq-tr-conc1} \end{equation} The case when $I^0=\varnothing$ is trivial, since both sides will equal zero, so for the remainder, we can assume $I^0\neq \varnothing$. In the case when $q'=0$, that is, when $p^{}_v=\sum_{e\in F}s^{}_es^*_e$, it follows that $v$ is regular and $F=r^{-1}(v)$, so by condition {\sc (b)} in the graph trace definition, it follows that $g(v)=\sum_{e\in F}g(s(e))$ and again \eqref{lemma-ineq-tr-conc1} becomes an equality.
Lastly, in the case when $q'\neq 0$, we use condition (i) in the graph trace definition, which yields $g(v)\geq\sum_{e\in F}g(s(e))$; this means that the desired inequality would follow once we prove that $\sum_{i\in I^0}\xi_i\geq 0$, an inequality which is now (under the assumption that $q'$ is a non-zero projection) a consequence of \eqref{J0-q'}. \end{proof} In preparation for Proposition~\ref{tr-infinite} below, which contains two easy applications of Theorem~\ref{gr-tr-char}, we introduce the following terminology. \begin{definition} A vertex $v\in E^0$ is said to be {\em essentially left infinite}, if there exists an infinite set $X\subset E^*$ of mutually incomparable paths such that $s(\alpha)=v$ for all $\alpha \in X$. \end{definition} \begin{remark}\label{rem-entry-cycles} One particular class of essentially left infinite vertices consists of those that {\em emit entries into cycles}, i.e. vertices $v$ that have some path $\alpha=e_1e_2\dots e_m$ of positive length, with $s(\alpha)=v$, such that $e_1$ is an entry to a cycle. Indeed, if $e_1$ enters a cycle $\nu$, then all paths $\nu^n\alpha$, $n\in \mathbb{N}$, are mutually incomparable. Another class of essentially left infinite vertices consists of those that emit paths to infinitely many vertices. (In \cite{Tomforde4}, such vertices are called {\em left infinite}.) \end{remark} The following result generalizes \cite[Lemma 3.3(i)]{PaskRen1} and part of the proof of \cite[Theorem 3.2]{Tomforde4}. \begin{proposition}\label{tr-infinite} Let $E$ be a directed graph, $g$ be a graph trace on $E$, and $v\in E^0$ be some vertex. Assume at least one of the hypotheses below is satisfied: \begin{itemize} \item[$(a)$] $v$ emits an entry to a cycle; or \item[$(b)$] $g$ is finite and $v$ is essentially left infinite. \end{itemize} Then $g(v)=0$. 
\end{proposition} \begin{proof} The main ingredient in the proof is the observation that, for any finite set $F$ of mutually incomparable paths starting at $v$, one has the inequality \begin{equation} \sum_{w\in r(F)}g(w)\geq |F|\cdot g(v). \label{tr-inf-obs} \end{equation} Indeed, if we list $F$ as $\{\alpha_1,\dots,\alpha_n\}$ (with all $\alpha$'s distinct, i.e. $n=|F|$), then by mutual incomparability, we have the inequality $ \sum_{w\in r(F)}p^{}_w\geq\sum_{j=1}^np^{}_{\alpha_j}$, and then \eqref{tr-inf-obs} follows immediately from Theorem~\ref{gr-tr-char}. By assumption, in either case, we can find an infinite set $Y\subset E^*$ of mutually incomparable paths starting at $v$, such that the sum $M=\sum_{w\in r(Y)}g(w)$ is finite. (In case $(a)$, as seen in the preceding remark, we can ensure that $r(Y)$ is a singleton; case $(b)$ is trivial, by finiteness of $g$.) The desired conclusion now follows immediately from \eqref{tr-inf-obs}, applied to finite subsets of $Y$, which implies $M\geq n\cdot g(v)$ for arbitrarily large $n$. \end{proof} \begin{mycomment} As we will see shortly, graph traces on $E$ correspond to certain maps on the ``compactly supported'' diagonal subalgebra $\mathcal{D}(E)_{\text{fin}}$, which will eventually yield tracial positive functionals on the dense $*$-subalgebra $C^*(E)_{\text{fin}}\subset C^*(E)$. Although neither $\mathcal{D}(E)_{\text{fin}}$, nor $\mathcal{M}(E)_{\text{fin}}$, nor $C^*(E)_{\text{fin}}$, is a $C^*$-algebra, they are nevertheless unions of increasing nets of unital $C^*$-algebras: $\mathcal{D}(E)_{\text{fin}}=\bigcup_{V\in\mathcal{P}_{\text{fin}}(E^0)}\mathcal{D}(E)q^{}_V$, $\mathcal{M}(E)_{\text{fin}}=\bigcup_{V\in\mathcal{P}_{\text{fin}}(E^0)}\mathcal{M}(E)q^{}_V$, and $C^*(E)_{\text{fin}}=\bigcup_{V\in\mathcal{P}_{\text{fin}}(E^0)}q^{}_V C^*(E)q^{}_V$. (Recall that, for any finite subset $V\subset E^0$, the projection $q^{}_V$ is defined to be $\sum_{v\in V}p^{}_v$.) 
It is clear that the conditional expectations $\mathbb{E}_{\mathcal{M}}$ and $\mathbb{E}_{\mathcal{D}}$ map $C^*(E)_{\text{fin}}$ onto $\mathcal{M}(E)_{\text{fin}}$ and $\mathcal{D}(E)_{\text{fin}}$, respectively, so Corollaries \ref{cor-M-inv} and \ref{cor-D-inv} have suitable statements applicable to $C^*(E)_{\text{fin}}$, with the word ``state'' replaced by ``positive linear functional.'' By definition, positivity for linear functionals defined on each one of these $*$-algebras is equivalent to the positivity of their restrictions to each of the cut-off algebras corresponding to $V\in\mathcal{P}_{\text{fin}}(E^0)$. Upon identifying $\mathcal{D}(E)_{\text{fin}}=C_c(\widehat{\mathcal{D}(E)})$ and $\mathcal{M}(E)_{\text{fin}}=C_c(\widehat{\mathcal{M}(E)})$, the positive cones $\mathcal{D}(E)^+_{\text{fin}}$ and $\mathcal{M}(E)^+_{\text{fin}}$ correspond precisely to the non-negative continuous compactly supported functions. \end{mycomment} With this set-up in mind, Theorem \ref{gr-tr-char} has the following consequence. \begin{theorem}\label{gr-tr-lin} For any graph trace $g$ on $E$, there exists a unique positive linear functional $\eta=\eta_g:\mathcal{D}(E)_{\text{\rm fin}}\to\mathbb{C}$, such that \begin{equation} \eta_g(p^{}_\lambda)=g(s(\lambda)), \,\,\,\forall\,\lambda\in E^*. \label{etag=} \end{equation} When restricted to the unital $C^*$-algebras $\mathcal{D}(E)q^{}_V$, $V\in\mathcal{P}_{\text{\rm fin}}(E^0)$, the positive linear functionals $\eta_g$, $g\in T(E)$, have norms: $$\left\|\eta_g|_{\mathcal{D}(E)q^{}_V}\right\|=\sum_{v\in V}g(v).$$ In particular, for $g\in T(E)$, the functional $\eta_g$ is norm-continuous, if and only if $g$ is finite, and in this case, one has $\|\eta_g\|=\|g\|_1$. \end{theorem} \begin{proof} Let $\mathcal{A}$ be the complex span of $\{p_\lambda \}_{\lambda \in E^*}$, and let $\mathcal{A}_h$ be its Hermitean part, which is the same as the real span of $\{p_\lambda\}_{\lambda \in E^*}$. 
An application of Theorem \ref{gr-tr-char} shows that there is a unique $\mathbb{R}$-linear functional $\theta: \mathcal{A}_h \to \mathbb{R}$ with $\theta(p_\lambda) = g(s(\lambda))$ for all $\lambda \in E^*$. If we fix $V \in \mathcal{P}_{\operatorname{fin}}(E^0)$ and $x \in \mathcal{A}_h q_V$, another application of Theorem \ref{gr-tr-char} \textcolor{black}{to the inequality $-||x|| q_V \leq x \leq ||x|| q_V$} shows that $|\theta(x)| \leq \theta(q_V) ||x||$. Thus for each $V \in \mathcal{P}_{\operatorname{fin}}(E^0)$, there is a unique $\mathbb{C}$-linear Hermitean functional $\eta_V: \mathcal{D}(E) q_V \to \mathbb{C}$ extending $\theta|_{\mathcal{A}_h q_V}$; it satisfies $||\eta_V|| = \eta_V(q_V)$, so that $\eta_V$ is in fact positive with norm equal to $\sum_{v \in V} g(v)$. Clearly if $V \subset W$ are both finite subsets of $E^0$, then $\eta_W|_{\mathcal{D}(E)q_V} = \eta_V$; thus there exists a unique positive linear functional $\eta_g$ defined on all of $\mathcal{D}(E)_{\text{fin}}$ such that $\eta_g|_{\mathcal{D}(E)q_V} = \eta_V$ for every $V \in \mathcal{P}_{\operatorname{fin}}(E^0)$. \end{proof} \begin{mycomment} As $*$-subalgebras of $C^*(E)_{\text{fin}}$, both $\mathcal{D}(E)_{\text{fin}}$ and $\mathcal{M}(E)_{\text{fin}}$ are non-degenerate (since they both contain $\{q^{}_V\}_{V\in\mathcal{P}_{\text{fin}}(E^0)}$), as well as regular, because they are normalized by all $s^{}_e$, $e\in E^1$, and all $p^{}_v$, $v\in E^0$. Given a positive linear functional $\eta$ on either one of these algebras, it then makes sense to define what it means for it to be $s^{}_e$-invariant. \end{mycomment} \begin{remark} The map $g\longmapsto \eta_g$ establishes an affine bijective correspondence between $T(E)$ and the space of positive linear functionals on $\mathcal{D}(E)_{\text{\rm fin}}$ that are $s^{}_e$-invariant for all $e\in E^1$. The inverse of this correspondence is obtained as follows. 
Given a positive linear functional $\theta$ on $\mathcal{D}(E)_{\text{fin}}$ which is $s^{}_e$-invariant, for all $e\in E^1$, the associated graph trace is simply the map \begin{equation} g^\theta:E^0\ni v\longmapsto \theta(p^{}_v)\in [0,\infty). \label{g-theta-def} \end{equation} When we specialize to the case of interest to us, Theorem \ref{gr-tr-lin} yields the following statement. \end{remark} \begin{theorem}\label{gr-tr-thm0} For any normalized graph trace $g$, there exists a unique state $\psi_g\in S(\mathcal{D}(E))$ satisfying \begin{equation} \psi_g(p^{}_\lambda)=g(s(\lambda)), \,\,\,\forall\,\lambda\in E^*. \label{psig=} \end{equation} All states $\psi_g$, $g\in T_1(E)$, are fully invariant, and furthermore, the correspondence \begin{equation} T_1(E)\ni g\longmapsto \psi_g\in S^{\text{\rm inv}}(\mathcal{D}(E)) \label{tr-to-Sinv} \end{equation} is an affine bijection, which has as its inverse the correspondence \begin{equation} S^{\text{\rm inv}}(\mathcal{D}(E))\ni \theta\longmapsto g^\theta\in T_1(E) \label{Sinv-tr} \end{equation} defined as in \eqref{g-theta-def}. \qed \end{theorem} \begin{mycomment} Using Corollary \ref{cor-D-inv}, it follows that for any $g\in T_1(E)$, the composition $\chi_g=\psi_g\circ \mathbb{E}_{\mathcal{D}}$ defines a tracial state on $C^*(E)$; this way we obtain an injective correspondence \begin{equation} T_1(E)\ni g\longmapsto \chi_g\in T(C^*(E)). \label{gtr-to-tr} \end{equation} Of course, any tracial state $\tau\in T(C^*(E))$ becomes invariant when restricted to $\mathcal{D}(E)$, so using \eqref{Sinv-tr} we obtain a correspondence \begin{equation} T(C^*(E)) \ni \tau \longmapsto g^\tau \in T_1(E). \label{surj} \end{equation} Theorem \ref{gr-tr-thm0} shows that this map is surjective, because the correspondence \eqref{gtr-to-tr} is clearly an affine right inverse for \eqref{surj}. The surjectivity of \eqref{surj} is also proved in \cite{Tomforde1}, by completely different means. 
\end{mycomment} \begin{remark}\label{gauge-inv-tr=} Using formulas \eqref{pD}, given a normalized graph trace $g\in T_1(E)$, the associated tracial state $\chi_g=\psi_g\circ\mathbb{E}_{\mathcal{D}}$ -- hereafter referred to as the {\em Haar trace induced by $g$} -- acts on the spanning monomials as: \begin{equation} \chi_g(s^{}_\alpha s^*_\beta)= \begin{cases} g(s(\alpha)),&\text{if $\alpha=\beta$}\\ 0,&\text{otherwise} \end{cases} \label{chig=} \end{equation} Among other things, the above formulas prove that $\chi_g$ is in fact {\em gauge invariant}, i.e. $\chi_g\circ\gamma_z=\chi_g$, for all $z\in\mathbb{T}$. Conversely, every gauge invariant tracial state $\tau\in T(C^*(E))$ arises this way. Indeed, if $\tau$ is such a trace, then by gauge invariance it follows that, whenever $\alpha,\beta\in E^*$ are such that $|\alpha|\neq|\beta|$, we must have $\tau(s^{}_\alpha s^*_\beta)=0$; furthermore, if $|\alpha|=|\beta|$, then $$\tau(s^{}_\alpha s^*_\beta)=\tau(s^*_\beta s^{}_\alpha)= \begin{cases} \tau(0)=0,&\text{if $\alpha\neq\beta$}\\ \tau(s^*_\alpha s^{}_\alpha)=\tau(p^{}_{s(\alpha)}),&\text{otherwise} \end{cases} $$ so in all cases we get $\tau(s^{}_\alpha s^*_\beta)=\chi_{g^\tau}(s^{}_\alpha s^*_\beta)$. To summarize: \begin{itemize} \item the range of the injective correspondence \eqref{gtr-to-tr} is the set $T(C^*(E))^{\mathbb{T}}$ of gauge invariant tracial states; \item when restricting the correspondence \eqref{surj} to $T(C^*(E))^{\mathbb{T}}$, one obtains an affine {\em isomorphism} \begin{equation} T(C^*(E))^{\mathbb{T}} \ni \tau \longmapsto g^\tau \in T_1(E). \label{surj-gauge} \end{equation} \end{itemize} \end{remark} When searching for an analogue of Theorem \ref{gr-tr-thm0}, with $\mathcal{D}(E)$ replaced by $\mathcal{M}(E)$, it is obvious that the space $T(E)$ is not sufficient, so additional structure needs to be added to it. 
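To see formula \eqref{chig=} in the simplest case, take $E$ to be the single-loop graph (one vertex $v$, one loop $e$), so that $C^*(E)\cong C(\mathbb{T})$ with $s_e$ corresponding to the coordinate function $z$, and the unique normalized graph trace is $g(v)=1$. Then \eqref{chig=} says precisely that $\chi_g(s_e^n s_e^{*m})=\delta_{nm}$, i.e. $\chi_g$ is integration against Haar measure on $\mathbb{T}$ (whence the name ``Haar trace''). The following small Python sketch -- purely illustrative, not part of the formal development, and with a function name of our own choosing -- checks this by averaging $z^{\,n-m}$ over roots of unity:

```python
import cmath

def chi_haar(n, m, N=64):
    """Evaluate chi_g(s_e^n s_e*^m) for the single-loop graph, where s_e
    corresponds to z in C(T) and chi_g to integration against Haar measure:
    average z^(n-m) over the N-th roots of unity (exact whenever |n-m| < N)."""
    return sum(cmath.exp(2j * cmath.pi * k * (n - m) / N)
               for k in range(N)).real / N

# Formula (chig=) with g(v) = 1: value 1 when n == m, and 0 otherwise.
print(round(chi_haar(3, 3), 6))       # 1.0
print(round(abs(chi_haar(3, 1)), 6))  # 0.0
```

Replacing the Haar average above by integration against an arbitrary $\mu\in\operatorname{Prob}(\mathbb{T})$ is exactly the extra freedom that the tags introduced next will encode.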
\begin{definition} \label{tagging-defn} The {\em cyclic support\/} of a function $g:E^0\to \mathbb{C}$ is defined to \textcolor{black}{be} the set $$\text{supp}^cg=\{v\in E^0\,:\,\text{$v$ cyclic, $g(v)\neq 0$}\}.$$ (Recall that a cyclic vertex $v$ is one visited by a simple entry-less cycle. Equivalently, $v$ is a \textcolor{black}{ray of length zero}.) A \emph{cyclically tagged graph trace} consists of a pair $(g,\mu)$, where $g$ is a graph trace and $\mu$ is a map $\text{supp}^cg\ni v \longmapsto \mu_v\in \operatorname{Prob}(\mathbb{T})$ -- hereafter referred to as the {\em tag\/}. Note that our definition includes the possibility of an {\em empty\/} tag in the case when $\text{supp}^cg=\varnothing$. (More on this in Theorem~\ref{auto-gauge} below.) The space of all such pairs will be denoted by $T^{\text{\sc ct}}(E)$. The adjective ``finite,'' ``infinite,'' or ``normalized,'' is attached to $(g,\mu)$ precisely when it applies to $g$. \end{definition} Using this terminology, one has the following extension of Theorem \ref{gr-tr-lin}. 
\begin{theorem}\label{taged-gr-tr-lin} For any cyclically tagged graph trace $(g,\mu)$ on $E$, there exists a unique positive linear functional $\tilde{\eta}=\tilde{\eta}_{(g,\mu)}:\mathcal{M}(E)_{\text{\rm fin}}\to\mathbb{C}$, such that \begin{itemize} \item[(i)] $\tilde{\eta}_{(g,\mu)}(p^{}_\lambda)= g(s(\lambda))$, for every finite path $\lambda\in E^*$; \item[(ii)] for any ray $\alpha$ and any integer $m\neq 0$, $$\tilde{\eta}_{(g,\mu)}(b_\alpha ^m)= \begin{cases} g(s(\alpha))\int_{\mathbb{T}}z^m\,d\mu_{s(\alpha)}(z),&\text{ if }g(s(\alpha))\neq 0,\\ 0,&\text{ otherwise}\end{cases} $$ \end{itemize} When restricted to the unital $C^*$-algebras $\mathcal{M}(E)q^{}_V$, $V\in\mathcal{P}_{\text{\rm fin}}(E^0)$, the positive linear functionals $\tilde{\eta}_{(g,\mu)}$, $(g,\mu)\in T^{\text{\rm\sc ct}}(E)$, have norms: $$\left\|\tilde{\eta}_{(g,\mu)}|_{\mathcal{M}(E)q^{}_V}\right\|=\sum_{v\in V}g(v).$$ In particular, for any $(g,\mu)\in T^{\text{\rm\sc ct}}(E)$, the functional $\tilde{\eta}_{(g,\mu)}$ is norm-bounded if and only if $g$ is finite, and in this case, one has $\|\tilde{\eta}_{(g,\mu)}\|=\|g\|_1$. \end{theorem} \begin{proof} Assume $(g,\mu)\in T^{\text{\rm\sc ct}}(E)$ is fixed throughout the entire proof. Fix for the moment a ray $\alpha$ with $g(s(\alpha))\neq 0$, and consider the $C^*$-subalgebra $C^*(b_\alpha)\subset\mathcal{M}(E)$. (Recall that, if $\nu$ is the seed of the ray $\alpha$, then $b^{}_\alpha$ is the normal partial isometry $s^{}_\alpha s^{}_\nu s^*_\alpha$.) 
As pointed out in Lemma \ref{top-spec}, using the fact that the projection $b_\alpha^0=p^{}_\alpha$ is the characteristic function of the compact-open set $\mathbf{T}_\alpha\subset\widehat{\mathcal{M}(E)}$, we have of course the equality $\mathcal{M}(E)p^{}_\alpha=C^*(b^{}_\alpha)$, so using the surjective $*$-homomorphism $$\pi_\alpha:\mathcal{M}(E)\ni a\longmapsto ap^{}_\alpha \in C^*(b^{}_\alpha)\xrightarrow{\,\,\sim\,\,} C(\mathbb{T}),$$ we can define a state $\omega_\alpha$ on $\mathcal{M}(E)$ by $$\omega_\alpha(a)=\int_{\mathbb{T}}\pi_\alpha(a)\,d\mu_{s(\alpha)}.$$ Specifically, if we write the compression $ap^{}_\alpha $ as $f(b^{}_\alpha)$, for some $f\in C(\mathbb{T})$, then $\omega_\alpha(a)=\int_{\mathbb{T}}f(z)\,d\mu_{s(\alpha)}(z)$. Using the product rules \eqref{GD-prod}, \eqref{GM-prod1} and \eqref{GM-prod2}, it follows that on the generator set $G_{\mathcal{M}}(E)$, the state $\omega_\alpha$ acts as \begin{equation} \omega_\alpha(p^{}_{\lambda})= \begin{cases} 1,&\text{if }\lambda\prec\xi_\alpha;\\ 0,&\text{otherwise;}\end{cases} \quad \omega_\alpha(b^m_{\alpha_1})= \begin{cases} \int_{\mathbb{T}}z^m\,d\mu_{s(\alpha)}(z),&\text{if }\alpha_1=\alpha;\\ 0,&\text{otherwise.} \end{cases} \label{omega-alpha=} \end{equation} Define now the functional $\theta:\mathcal{M}(E)_{\text{fin}}\to\mathbb{C}$ by \begin{equation} \theta(a)=\sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ g(s(\alpha))\neq 0}} g(s(\alpha))\omega_\alpha(a),\,\,\,a\in\mathcal{M}(E)_{\text{fin}}. \label{theta=} \end{equation} Concerning the point-wise convergence of the sum in \eqref{theta=}, as well as its positivity, both are consequences of the following fact. \begin{claim} For any vertex $v\in E^0$, one has the inequality \begin{equation} \sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ r(\alpha)=v}}g(s(\alpha))\leq g(v). 
\label{theta-summable-claim} \end{equation} In particular, the sum \begin{equation} \theta_v=\sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ r(\alpha)=v}}g(s(\alpha))\omega_\alpha|_{\mathcal{M}(E)p^{}_v} \label{theta-summable-claim0} \end{equation} is a norm-convergent sum, thus $\theta_v$ is a positive linear functional on $\mathcal{M}(E)p^{}_v$ with norm \begin{equation} \left\|\theta_v\right\|=\sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ r(\alpha)=v}}g(s(\alpha)). \label{theta-summable-claim1} \end{equation} \end{claim} The inequality \eqref{theta-summable-claim} follows from the observation that, for any finite set $F$ of rays with range $v$, the projections $\{p^{}_\alpha\}_{\alpha\in F}$ satisfy the inequality $\sum_{\alpha\in F}p^{}_\alpha\leq p^{}_v$, which by Theorem~\ref{gr-tr-char} implies $\sum_{\alpha\in F}g(s(\alpha))\leq g(v)$. The equality \eqref{theta-summable-claim1} is now clear from the positivity of $\theta_v$, which combined with \eqref{omega-alpha=} yields: $$\left\|\theta_v\right\|=\theta_v(p^{}_v)= \sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ r(\alpha)=v}}g(s(\alpha))\omega_\alpha(p^{}_v)= \sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ r(\alpha)=v}}g(s(\alpha)).$$ Using the Claim, we see that $\theta$ given in \eqref{theta=} is indeed correctly defined, positive and it can alternatively be presented as $\theta(a)=\sum_{v\in E^0}\theta_v(a)$ (a sum which has only finitely many non-zero terms for each $a\in\mathcal{M}(E)_{\text{fin}}$). 
By construction, $\theta$ acts on the generator set $G_{\mathcal{M}}(E)$ as: \begin{align} \theta(p^{}_\lambda)&= \sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ \lambda\prec \xi_\alpha}}g(s(\alpha)),\,\,\,\lambda\in E^* \label{theta-p}\\ \theta(b^m_{\alpha})&= \begin{cases} g(s(\alpha))\int_{\mathbb{T}}z^m\,d\mu_{s(\alpha)}(z),&\text{if }\alpha\in E^*_{\text{\rm\sc ip}}\text{ and }g(s(\alpha))\neq 0\\ 0,&\text{otherwise} \end{cases} \label{theta-b} \end{align} Next we consider the positive linear functional $\eta_g:\mathcal{D}(E)_{\text{fin}}\to\mathbb{C}$ associated to $g$, as constructed in Theorem~\ref{gr-tr-lin}, and the linear positive functional $\eta_g\circ\mathbb{E}_{\mathcal{D}}:\mathcal{M}(E)_{\text{fin}}\to\mathbb{C}$. (Here we use the fact that $\mathbb{E}_{\mathcal{D}}$ maps $C^*(E)_{\text{fin}}$ onto $\mathcal{D}(E)_{\text{fin}}$.) Using Riesz' Theorem, there is a positive Radon measure $\upsilon$ on $\widehat{\mathcal{M}(E)}$, such that $ \eta_g\left(\mathbb{E}_{\mathcal{D}}(f)\right)= \int_{\widehat{\mathcal{M}(E)}}f\,d\upsilon$, for all $f\in C_c(\widehat{\mathcal{M}(E)})= \mathcal{M}(E)_{\text{fin}}$. Using this measure, we now define the desired positive linear functional $\tilde{\eta}$ on $C_c(\widehat{\mathcal{M}(E)})= \mathcal{M}(E)_{\text{fin}}$ by: \begin{align} \tilde{\eta}(f)&= \theta(f)+\int_{\widehat{\mathcal{M}(E)}\smallsetminus\Omega_{\text{\rm\sc ip}}}f\,d\upsilon =\notag\\ &=\theta(f)+\eta_g\left(\mathbb{E}_{\mathcal{D}}(f)\right)- \sum_{\alpha\in E^*_{\text{\rm\sc ip}}}\eta_g\left(\mathbb{E}_{\mathcal{D}}(fp^{}_\alpha)\right)= \label{eta-g-m}\\ &=\theta(f)+\eta_g\left(\mathbb{E}_{\mathcal{D}}(f)\right)- \sum_{\alpha\in E^*_{\text{\rm\sc ip}}}\eta_g\left(\mathbb{E}_{\mathcal{D}}(f)p^{}_\alpha\right). \label{eta-g-m0} \end{align} (The equality \eqref{eta-g-m} follows from Lemma \ref{top-spec}.) 
To check condition (i), start with some $\lambda\in E^*$ and observe that, for all rays $\alpha$, we have the equalities $$p^{}_\lambda p^{}_\alpha=\begin{cases} p^{}_\alpha,&\text{if }\lambda\prec \xi_\alpha\\ 0,&\text{otherwise} \end{cases} $$ which by \eqref{theta-p} imply that $$ \sum_{\alpha\in E^*_{\text{\rm\sc ip}}}\eta_g\left(\mathbb{E}_{\mathcal{D}}(p^{}_\lambda p^{}_\alpha)\right) = \sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ \lambda\prec\xi_\alpha}}\eta_g(p^{}_\alpha)= \sum_{\substack{\alpha\in E^*_{\text{\rm\sc ip}}\\ \lambda\prec\xi_\alpha}}g(s(\alpha))=\theta(p^{}_\lambda),$$ so by \eqref{eta-g-m} we obtain the desired property $$\tilde{\eta}(p^{}_\lambda)=\eta_g(p^{}_\lambda)=g(s(\lambda)).$$ In order to check condition (ii), we simply verify that, for any ray $\alpha$ and any integer $m$, we have the equality \begin{equation} \tilde{\eta}(b^m_\alpha)=\theta(b^m_\alpha). \label{cond(ii)} \end{equation} In the case $m=0$, we have $b^0_\alpha=p^{}_\alpha$, so by condition (i) and \eqref{theta-b}, we have $\tilde{\eta}(b^0_\alpha)=\tilde{\eta}(p^{}_\alpha)=g(s(\alpha))=\theta(b^0_\alpha)$. In the case when $m\neq 0$, we notice that since $\mathbb{E}_{\mathcal{D}}$ vanishes on $G(E)\smallsetminus G_{\mathcal{D}}(E)$ -- by \eqref{pD} -- we have $\mathbb{E}_{\mathcal{D}}(b^m_\alpha)=0$, and then \eqref{cond(ii)} is trivial using \eqref{eta-g-m0}. The remaining statements in the Theorem (including the uniqueness of $\tilde{\eta}$) are now clear, since any positive linear functional $\tilde{\eta}$ satisfying conditions (i) and (ii) must satisfy $\tilde{\eta}|_{\mathcal{D}(E)_{\text{fin}}}= \eta_g$, from which the continuity of the restrictions $\tilde{\eta}|_{\mathcal{M}(E)q^{}_V}$ follows immediately. \end{proof} One aspect not addressed so far is invariance of the states $\tilde{\eta}$. For this purpose, the following definition is well-suited. 
\begin{definition} Two cyclic vertices are said to be \emph{equivalent} if they are visited by the same entry-less cycle. A cyclically tagged graph trace $(g,\mu)$ is said to be {\em consistent} if $\mu_v = \mu_{v'}$ whenever $v$ and $v'$ are equivalent. (Note that if two cyclic vertices $v,v'$ are equivalent, then $g(v)=g(v')$.) The space of all consistent cyclically tagged traces on $E$ is denoted by $T^{\text{\rm\sc cct}}(E)$. As agreed earlier, the adjective ``finite,'' ``infinite,'' or ``normalized,'' is attached to an element $(g,\mu)\in T^{\text{\rm\sc cct}}(E)$, precisely when it applies to $g$. In particular, the space of normalized consistent cyclically tagged graph traces on $E$ is denoted by $T_1^{\text{\rm\sc cct}}(E)$. \end{definition} \begin{proposition}\label{cct-equiv} A cyclically tagged graph trace $(g,\mu)$ is consistent if and only if the associated positive functional $\tilde{\eta}_{(g,\mu)}:\mathcal{M}(E)_{\text{fin}} \to \mathbb{C}$ constructed in Theorem~\ref{taged-gr-tr-lin} is $s^{}_e$-invariant for all $e \in E^1$. \end{proposition} \begin{proof} Assume $(g,\mu)$ is consistent, and let us show the invariance of $\tilde{\eta}_{(g,\mu)}$, which amounts to checking that, for each $e\in E^1$, we have: \begin{itemize} \item[(i)] $\tilde{\eta}_{(g,\mu)}(s^{}_ep^{}_\lambda s^*_e)= \tilde{\eta}_{(g,\mu)}(p^{}_ep^{}_\lambda)$, $\forall\,\lambda\in E^*$; \item[(ii)] $\tilde{\eta}_{(g,\mu)}(s^{}_eb^m_\alpha s^*_e)= \tilde{\eta}_{(g,\mu)}(p^{}_eb^m_\alpha)$, $\forall\,\alpha\in E^*_{\text{\sc ip}}$, $m\in\mathbb{Z}$. \end{itemize} Property (i) is obvious, since $\tilde{\eta}_{(g,\mu)}$ agrees with the $s^{}_e$-invariant functional $\eta_g$ on $\mathcal{D}(E)_{\text{fin}}$. As for condition (ii), we only need to verify it if $s(e)=r(\alpha)$ (otherwise both sides are zero). 
Also notice that if $|\alpha|>0$, then $e\alpha$ is also a ray with $s(e\alpha)=s(\alpha)$, which satisfies $s^{}_eb^m_\alpha s^*_e=b^m_{e\alpha}$, so by condition (ii) in Theorem~\ref{taged-gr-tr-lin}, we have $\tilde{\eta}_{(g,\mu)}(s^{}_eb^m_\alpha s^*_e)= \tilde{\eta}_{(g,\mu)}(b^m_{e\alpha})=g(s(e\alpha))\int_{\mathbb{T}}z^m\,d\mu_{s(e\alpha)}(z)= g(s(\alpha))\int_{\mathbb{T}}z^m\,d\mu_{s(\alpha)}(z)= \tilde{\eta}_{(g,\mu)}(b^m_\alpha)$. In the remaining case, $|\alpha|=0$, so $\alpha$ reduces to a vertex $v=r(\nu)$, for some simple entry-less cycle $\nu$. If $e$ is not an edge in $\nu$, then $e$ itself is a ray, thus the preceding argument still applies (we will have $s^{}_eb^m_vs^*_e=b^m_e$). If $e$ is an edge on $\nu$, then $s^{}_eb^m_vs^*_e=b^m_{r(e)}$, with $r(e)$ obviously equivalent to $v$, and the desired equality -- which now reads $\tilde{\eta}_{(g,\mu)}(b^m_{r(e)})= \tilde{\eta}_{(g,\mu)}(b^m_v)$ -- follows from the equalities $g(v)=g(r(e))$ and $\mu_v=\mu_{r(e)}$. Conversely, notice first that, if $\tilde{\eta}_{(g,\mu)}$ is $s^{}_e$-invariant, for all $e\in E^1$, then it will also satisfy the identity \begin{equation} \tilde{\eta}_{(g,\mu)}(s^{}_\lambda a s^*_\lambda)= \tilde{\eta}_{(g,\mu)}(p^{}_\lambda a),\,\,\,\forall\,\lambda\in E^*, a\in \mathcal{M}(E)_{\text{fin}}. \label{tilde-eta-ainv} \end{equation} Secondly, observe that, if $v$, $v'$ are equivalent cyclic vertices, presented as $v=s(\nu)$ and $v'=s(\nu')$ for two simple entry-less cycles, then we can write $\nu=\alpha\beta$ and $\nu'=\beta\alpha$ for two suitably chosen paths $\alpha,\beta\in E^*$. This clearly implies that $b^{}_{v'}=s^{}_\beta b^{}_v s^*_\beta$, which also yields $b^m_{v'}=s^{}_\beta b^m_v s^*_\beta$, $\forall\,m\in\mathbb{Z}$. 
Combining these two observations with condition (ii) from Theorem~\ref{taged-gr-tr-lin}, it follows that, if $\tilde{\eta}_{(g,\mu)}$ is invariant, then for any two equivalent cyclic vertices $v$ and $v'$ in $\text{supp}^cg$ we have (with $\alpha$, $\beta$ as above): \begin{align*} g(v')\int_{\mathbb{T}}z^m\,d\mu_{v'}(z)&= \tilde{\eta}_{(g,\mu)}(b^m_{v'})= \tilde{\eta}_{(g,\mu)}(s^{}_\beta b^m_{v}s^*_\beta)= \tilde{\eta}_{(g,\mu)}(p^{}_\beta b^m_{v})=\\ &= \tilde{\eta}_{(g,\mu)}(b^m_{v})= g(v)\int_{\mathbb{T}}z^m\,d\mu_{v}(z), \,\,\,\forall\,m\in\mathbb{Z}, \end{align*} which, since $g(v')=g(v)\neq 0$, clearly implies $\mu_{v'}=\mu_v$. \end{proof} \begin{remark} The map $(g,\mu)\longmapsto \tilde{\eta}_{(g,\mu)}$ establishes an affine bijective correspondence between $T^{\text{\sc cct}}(E)$ and the space of positive linear functionals on $\mathcal{M}(E)_{\text{\rm fin}}$ that are $s^{}_e$-invariant for all $e\in E^1$. The inverse of this correspondence is the map $\theta\longmapsto (g^\theta,\mu^\theta)$ defined as follows. Given a positive linear functional $\theta$ on $\mathcal{M}(E)_{\text{fin}}$ which is $s^{}_e$-invariant, for all $e\in E^1$, the graph trace $g^\theta$ is given by \eqref{g-theta-def}, and the tag $\mu^\theta=(\mu^\theta_v)_{v\in\text{supp}^cg^\theta}$ is given (implicitly) by \begin{equation} \int_{\mathbb{T}}f(z)\,d\mu^\theta_v(z)= \frac{\theta(f(b^{}_v))}{g^\theta(v)},\,\,\,\forall\,v\in\text{supp}^cg^\theta,\,f\in C(\mathbb{T}). \label{im-def-muv} \end{equation} \end{remark} When we specialize to states, we now have the following extension of Theorem~\ref{gr-tr-thm0}. 
\begin{theorem}\label{gr-tr-thm1} For any normalized consistent cyclically tagged graph trace $(g,\mu)\in T^{\text{\rm\sc cct}}_1(E)$, there exists a unique state $\phi_{(g,\mu)}\in S(\mathcal{M}(E))$ satisfying \begin{itemize} \item[(i)] $\phi_{(g,\mu)}(p^{}_\lambda)= g(s(\lambda))$, for every finite path $\lambda\in E^*$; \item[(ii)] for any ray $\alpha$ and any integer $m$: $$\phi_{(g,\mu)}(b_\alpha ^m)= \begin{cases} g(s(\alpha))\int_{\mathbb{T}}z^m\,d\mu_{s(\alpha)}(z),&\text{ if }g(s(\alpha))\neq 0,\\ 0,&\text{ otherwise}\end{cases} $$ \end{itemize} All states $\phi_{(g,\mu)}$, $(g,\mu)\in T^{\text{\rm\sc cct}}_1(E)$ are fully invariant, and furthermore, the correspondence \begin{equation} T^{\text{\rm\sc cct}}_1(E)\ni (g,\mu)\longmapsto \phi_{(g,\mu)}\in S^{\text{\rm inv}}(\mathcal{M}(E)) \label{tr-to-SinvM} \end{equation} is an affine bijection, which has as its inverse the correspondence \begin{equation} S^{\text{\rm inv}}(\mathcal{M}(E))\ni \theta\longmapsto (g^\theta,\mu^\theta)\in T^{\text{\rm\sc cct}}_1(E) \label{Sinv-trM} \end{equation} defined as in \eqref{g-theta-def} and \eqref{im-def-muv}. \qed \end{theorem} \begin{mycomment} Using Corollary \ref{cor-M-inv}, it follows that for any $(g,\mu)\in T^{\text{\rm\sc cct}}_1(E)$, the composition $\tau_{(g,\mu)}=\phi_{(g,\mu)}\circ \mathbb{E}_{\mathcal{M}}$ defines a tracial state on $C^*(E)$; this way we obtain an injective correspondence \begin{equation} T^{\text{\rm\sc cct}}_1(E)\ni (g,\mu)\longmapsto \tau_{(g,\mu)}\in T(C^*(E)). \label{cctgtr-to-tr} \end{equation} Of course, any tracial state $\tau\in T(C^*(E))$ becomes invariant, when restricted to $\mathcal{M}(E)$, so using \eqref{Sinv-trM} we obtain a correspondence \begin{equation} T(C^*(E)) \ni \tau \longmapsto (g^\tau,\mu^\tau) \in T^{\text{\rm\sc cct}}_1(E). \label{surjM} \end{equation} Theorem \ref{gr-tr-thm1} shows that this map is surjective, because the correspondence \eqref{cctgtr-to-tr} is clearly an affine right inverse for \eqref{surjM}. 
\end{mycomment} \begin{remark} The range of \eqref{cctgtr-to-tr} clearly contains the range of \eqref{gtr-to-tr}, which equals $T(C^*(E))^{\mathbb{T}}$. After all, any normalized graph trace $g\in T_1(E)$ can be tagged using the constant map $\mu:\text{supp}^cg\to\text{Prob}(\mathbb{T})$ that takes $\mu_v$ to be the Haar measure for every $v$, and it is straightforward to verify that for this particular tagging one has $\tau_{(g,\mu)}=\chi_g$. \end{remark} Concerning the range of \eqref{cctgtr-to-tr}, one legitimate question is whether it equals the whole tracial state space $T(C^*(E))$. Using the bijection \eqref{tr-to-SinvM}, this question is equivalent to the surjectivity of the map \begin{equation} S^{\text{inv}}(\mathcal{M}(E))\ni \phi\longmapsto \phi\circ\mathbb{E}_{\mathcal{M}}\in T(C^*(E)). \label{SinvM-to-tr} \end{equation} As we have seen in Corollary~\ref{cor-Sinv-T-iso}, a sufficient condition for the surjectivity of \eqref{SinvM-to-tr} is the condition that the inclusion $\mathcal{M}(E)\subset C^*(E)$ has the (honest) extension property. As it turns out, this issue can be neatly described using the graph. \begin{theorem}\label{tight-thm} The inclusion $\mathcal{M}(E) \subset C^*(E)$ has the extension property, if and only if no cycle in $E$ has an entry. \end{theorem} \begin{proof} To prove the ``if'' implication, assume that no cycle in $E$ has an entry, fix a pure state $\omega$ on $\mathcal{M}(E)$, and let $\phi$ be an extension of $\omega$ to $C^*(E)$. In order to prove uniqueness of $\phi$, it suffices to show that the value of $\phi$ on a standard generator $s_\alpha^{} s_\beta^*$ is independent of the choice of $\phi$. By assumption, there is an $x \in E^{\leq \infty}$ and $z \in \mathbb{T}$ such that $\omega = \omega_{z,x}$ as in Lemma \ref{top-spec}. 
On the one hand, by Fact 3.1 and the observation that $\omega(p_\gamma)=1$ for all $\gamma \prec x$, it follows that \begin{equation} \label{tight-state} \forall \,\gamma \prec x:\quad\phi(s_\alpha^{} s_\beta^*)=\phi(p_\gamma^{} s_\alpha^{} s_\beta^* p_\gamma^{}). \end{equation} On the other hand, using the results from \cite[Section 3]{NR}, it follows that there is $\gamma \prec x$ such that $p_\gamma^{} s_\alpha^{} s_\beta^* p_\gamma^{}$ belongs to $\mathcal{M}(E)$. (In the language of \cite{NR}, $x$ must be essentially aperiodic by our assumption on $E$.) Using \eqref{tight-state} it follows that $\phi(s_\alpha^{} s_\beta^*) = \omega(p_\gamma^{} s_\alpha^{} s_\beta^* p_\gamma^{})$, and the desired conclusion follows. \begin{comment} The ``if'' implication can be either proved directly (by using Lemma \ref{top-spec}), or (as shown here) by showing that ${\mathcal{M}}(E)$ is a $C^*$-diagonal in the sense of Kumjian (\cite{Kumjian}), which amounts to proving that (since $\mathbb{E}_{\mathcal{M}}^2 = \mathbb{E}_{\mathcal{M}}$) \[ \text{Range} (\text{Id} - \mathbb{E}_{\mathcal{M}}) \subset \overline{\text{span}} N_{\text{free}}(\mathcal{M}(E)), \] where $N_{\text{free}}(\mathcal{M}(E)) = \{n \in N(\mathcal{M}(E)): n^2 = 0\}$ is the set of free normalizers. Using \eqref{pm}, it suffices to show that all spanning monomials $s_\alpha^{\phantom{*}} s_\beta^* \not \in G_{\mathcal{M}}(E)$ are free normalizers. Since all spanning monomials are normalizers, all that remains to be proved is that any spanning monomial $n=s_\alpha^{\phantom{*}} s_\beta^* \not\in G_{\mathcal{M}}(E)$ has $n^2=0$, for which it suffices to show that $s_\beta^* s_\alpha^{\phantom{*}} =0$. Argue by contradiction, assuming $s_\beta^* s_\alpha^{\phantom{*}} \neq 0$, and let us prove that $n$ belongs to $G_{\mathcal{M}}(E)$. By the Orthogonality Relations, if $s_\beta^* s_\alpha^{\phantom{*}} \neq 0$, then either $\alpha\prec\beta$, or $\beta\prec\alpha$. 
By symmetry (if $n^*$ belongs to $G_{\mathcal{M}}(E)$, then so does $n$), we can assume that $\beta\prec\alpha$, in which case we can also write $n=s^{}_\beta s^{}_{\alpha\ominus\beta}s^*_\beta$, with $\alpha\ominus\beta$ either a vertex or a cycle. Of course, since the hypothesis of the ``if'' implication is that {\em every cycle is entry-less}, this clearly implies that $n$ belongs to $G_{\mathcal{M}}(E)$. \end{comment} For the ``only if'' direction, we show that if there is a cycle $\nu\in E^*$ that has an entry, then we can construct a pure state on $\mathcal{M}(E)$ which has multiple extensions to states on $C^*(E)$. Consider the path $x=\nu^\infty\in E^\infty$ formed by following $\nu$ infinitely many times. For each $z\in\mathbb{T}$ consider the state $\omega_{z,x}\in S(C^*(E))$ introduced in Definition~\ref{twisted-path-rep}, given by $$\omega_{z,x}(a) = \langle \delta_x | \pi_{\text{path}}(\gamma_z(a)) \delta_x \rangle.$$ As explained in Remark~\ref{core-embed}, since $x\not\in E^\infty_{\text{\sc ip}}$, it follows that: $$(z,x)\sim (1,x),\,\,\,\forall\,z\in\mathbb{T},$$ which by Lemma~\ref{top-spec} means that all restrictions $\omega_{z,x}|_{\mathcal{M}(E)}$, $z\in\mathbb{T}$, coincide, so they are all equal to the pure state $\vartheta\in\widehat{\mathcal{M}(E)}$ corresponding to the equivalence class $(1,x)_{\sim}=\mathbb{T}\times\{x\}$. However, as states on $C^*(E)$, the functionals $\omega_{z,x}$, $z\in\mathbb{T}$, cannot all be equal, since for example we have $\omega_{z,x}(s^{}_\nu)=z^{|\nu|}$, $\forall\,z\in\mathbb{T}$. \end{proof} \begin{definition} A graph $E$ is {\em tight}, if every cycle is entry-less. \end{definition} Combining Theorem~\ref{tight-thm} with Corollary~\ref{cor-Sinv-T-iso} and Theorem~\ref{gr-tr-thm1} we now obtain the following statement. 
\begin{theorem}\label{traces-on-tight} If $E$ is tight, then the correspondence \eqref{cctgtr-to-tr} is an affine isomorphism between the space $T^{\text{\sc cct}}_1(E)$ and the tracial state space $T(C^*(E))$.\qed \end{theorem} \begin{remark} Tight graphs are interesting in other respects: they are the only graphs that yield finite, stably finite, quasi-diagonal, or AF-embeddable $C^*$-algebras (\cite{Schaf}), as well as the only graphs that yield graph algebras with stable rank one (\cite{JPS}). A graph which yields a $C^*$-algebra with Hausdorff spectrum must be tight, although this is not sufficient \cite[Ex. 10]{Goehle2}. \end{remark} In the remainder of this paper we aim to parametrize the entire tracial state space $T(C^*(E))$ for arbitrary graphs by employing Theorem~\ref{traces-on-tight} in conjunction with certain procedures that replace the graph $E$ with a tight sub-graph $E'$, in such a way that the tracial state spaces $T(C^*(E))$ and $T(C^*(E'))$ coincide. Since the sub-graphs that are best suited for analyzing how the trace spaces change are the {\em canonical\/} ones, the following terminology is all we need. \begin{definition} \label{tightening-defn} If $E$ is a directed graph, a {\em tightening\/} of $E$ is a canonical sub-graph, i.e. one that can be presented as $E\setminus H$, for some saturated hereditary subset $H\subset E^0$, in such a way that \begin{itemize} \item[{\sc (a)}] $E\setminus H$ is tight, and \item[{\sc (b)}] the canonical $*$-homomorphism $\rho_H:C^*(E)\to C^*(E\setminus H)$ implements a bijective correspondence: $T(C^*(E\setminus H))\ni \tau\longmapsto \tau\circ \rho_H\in T(C^*(E))$ \end{itemize} Since $\rho_H$ is always surjective, the correspondence from {\sc (b)} is always injective, so the only requirement in our definition is its {\em surjectivity}. \end{definition} When it comes to parametrizing tracial states on graph $C^*$-algebras, the most useful and natural tightening is as follows. 
\begin{example}\label{min-tight} Let $E$ be a graph, and let $C=C_E$ be the set of vertices which emit entrances into cycles. The set $C$ is obviously hereditary, but not saturated in general, so we need to take its saturation $\overline{C}$. As it turns out, $E\setminus \overline{C}$ constitutes a tightening of $E$. First of all, since passing from $E$ to $E\setminus \overline{C}$ clearly removes all entries into the cycles in $E$, it is clear that $E\setminus \overline{C}$ is tight. Secondly, in order to justify the surjectivity of \begin{equation} T(C^*(E\setminus \overline{C}))\ni \tau\longmapsto \tau\circ \rho_{\overline{C}}\in T(C^*(E)), \label{rho-traces-min} \end{equation} all we must show is the fact that {\em all tracial states on $C^*(E)$ vanish on} $\ker\rho_{\overline{C}}$, for which it suffices to prove the inclusion $C\subset N_g$, for all $g\in T_1(E)$, which in itself is a consequence of Proposition~\ref{tr-infinite}. \end{example} The sub-graph constructed in the above Example is called the {\em minimal tightening}, and is denoted by $E_{\text{tight}}$. The canonical $*$-homomorphism will be denoted by $\rho_{\text{tight}}:C^*(E)\to C^*(E_{\text{tight}})$. Combining this construction with Theorem~\ref{traces-on-tight} we now obtain the following. \begin{theorem} For any directed graph $E$, the map $$T_1^{\text{\rm\sc cct}}(E_{\text{\rm tight}})\ni (g,\mu)\longmapsto \tau_{(g,\mu)}\circ \rho_{\text{\rm tight}}\in T(C^*(E))$$ is an affine isomorphism.\qed \end{theorem} \begin{comment} Since they are defined using graph independent rules for removing vertices, the minimal and the weak tightening operations are {\em idempotent}. In other words, if we form the minimal tightening $E_{\text{tight}}=E\setminus \overline{C}_E$, then $C_{E_{\text{tight}}}=\varnothing$, so $(E_{\text{tight}})_{\text{tight}}=E_{\text{tight}}$. The same statement is true for the weak tightening: $L_{E_\text{w-tight}}=\varnothing$. 
\end{comment} The final result in this paper deals with a graph-theoretic characterization of automatic gauge invariance for tracial states, which as pointed out in Remark~\ref{gauge-inv-tr=} is equivalent to the surjectivity of the map \eqref{gtr-to-tr}. In \cite{Tomforde4}, it is shown that this feature is implied by condition (K). However, as Theorem~\ref{auto-gauge} below shows, this is not necessary. \begin{theorem}\label{auto-gauge} For a directed graph $E$, the following conditions are equivalent: \begin{itemize} \item[(i)] all tracial states on $C^*(E)$ are gauge invariant; \item[(ii)] the source of each cycle in $E$ is essentially left infinite. \end{itemize} \end{theorem} \begin{proof} (i) $\Rightarrow$ (ii): Suppose that $\lambda = e_1 \ldots e_m$ is a cycle such that $v=s(\lambda)=r(e_1)$ is not essentially left infinite; we show how to construct a tracial state on $C^*(E)$ which is not gauge-invariant. Note that as $v$ is not essentially left infinite, in particular it does not emit an entrance to any cycle; therefore, none of the edges in $\lambda$ will be removed when forming $E_{\operatorname{tight}}$, and so we can assume that $E$ is tight. (Since the canonical quotient $\pi: C^*(E) \to C^*(E_{\operatorname{tight}})$ is equivariant for the respective gauge actions, a non-gauge invariant tracial state on $C^*(E_{\operatorname{tight}})$ will give rise to a non-gauge invariant trace on $C^*(E)$.) Say that a path $\mu \in E^*$ is \emph{acyclic} if it cannot be written as $\mu = \alpha \nu \beta$ for $\alpha,\beta \in E^*$ and $\nu$ a cycle. Let $A$ denote the set of all acyclic paths with source $v$; note that any two paths in $A$ are incomparable, and so $A$ must be finite because $v$ is not essentially left infinite. For $w \in E^0$ let $g(w) = |A \cap r^{-1}(w)|$; it is straightforward to verify that $g$ is a finite graph trace with $g(v) = 1$ which we can normalize to obtain $g' \in T_1(E)$. 
Note that the cyclic support of $g'$ is precisely $r(\{e_1,\ldots,e_m\})$ (as $v$ is not essentially left infinite, it emits no entrances to cycles). Now we can take any $z \in \mathbb{T} \setminus \mathbb{U}_{| \lambda |}$ and let $\mu_{s(e_i)}=\delta_z$ for all $i=1,\ldots,m$. The affiliated tracial state $\tau_{(g',\mu)} \in T(C^*(E))$ will satisfy \[ \tau_{(g',\mu)}(b_\lambda) = g'(s(\lambda)) z^{|\lambda|} \neq 0 \]so that in particular $\tau_{(g',\mu)}$ is not gauge-invariant. (ii) $\Rightarrow$ (i): Suppose that the source of each cycle is essentially left infinite. Any finite graph trace must vanish on an essentially left infinite vertex by Proposition \ref{tr-infinite}; hence if every source of every cycle is essentially left infinite, then there are no vertices in the cyclic support of any graph trace, and so there are no taggings to consider. Thus every tracial state on $C^*(E_{\operatorname{tight}})$ is gauge-invariant, which shows that every tracial state on $C^*(E)$ is gauge-invariant. \end{proof} \begin{mycomment} Besides the minimal tightening $E_{\operatorname{tight}}$ introduced in this paper, other tightenings could naturally be considered. The same arguments as those used in Example~\ref{min-tight} can be used with $C$ replaced by another hereditary subset $H\subset E^0$, as long as: \begin{itemize} \item[{\sc (a)}] the canonical sub-graph $E\setminus \overline{H}$ is tight, and \item[{\sc (b)}] one has the inclusion $H\subset N_g$, for all $g\in T_1(E)$. \end{itemize} One way to ensure {\sc (a)} is to take $H$ to contain $C_E$. As far as condition {\sc (b)} is concerned, we could use Proposition~\ref{tr-infinite} as a guide. In particular, we can consider the set $L=L_E$ of \emph{all} essentially left infinite vertices. Since $L_E$ is potentially much larger than $C_E$, the resulting subgraph $E \setminus \overline{L}_E$ will potentially be considerably smaller than $E_{\operatorname{tight}}$ (and thus easier to analyze regarding graph traces). 
\end{mycomment} \end{document}
\begin{definition}[Definition:Proof System/Formal Proof] Let $\mathscr P$ be a proof system for a formal language $\LL$. Let $\phi$ be a WFF of $\LL$. A '''formal proof of $\phi$''' in $\mathscr P$ is a collection of axioms and rules of inference of $\mathscr P$ that leads to the conclusion that $\phi$ is a theorem of $\mathscr P$. The term '''formal proof''' is also used to refer to specific presentations of such collections. For example, the term applies to tableau proofs in natural deduction. \end{definition}
Information-theoretic approaches provide methods for model selection and (multi)model inference that differ quite a bit from more traditional methods based on null hypothesis testing (e.g., Anderson, 2007; Burnham & Anderson, 2002). These methods can also be used in the meta-analytic context when model fitting is based on likelihood methods. Below, I illustrate how to use the metafor package in combination with the glmulti package that provides the necessary functionality for model selection and multimodel inference using an information-theoretic approach. Variable yi contains the effect size estimates (standardized mean differences) and vi the corresponding sampling variances. There are 48 rows of data in this dataset. The dataset now includes 41 rows of data (nrow(dat)), so we have lost 7 data points for the analyses. One could consider methods for imputation to avoid this problem, but this would be the topic for another day. So, for now, we will proceed with the analysis of the 41 estimates. With level = 1, we stick to models with main effects only. This implies that there are $2^7 = 128$ possible models in the candidate set to consider. Since I want to keep the results for all these models (the default is to only keep up to 100 model fits), I set confsetsize=128 (or I could have set this to some very large value). With crit="aicc", we select the information criterion (in this example: the AICc or corrected AIC) that we would like to compute for each model and that should be used for model selection and multimodel inference. For more information about the AIC (and AICc), see, for example, the entry for the Akaike Information Criterion on Wikipedia. As the function runs, you should receive information about the progress of the model fitting. Fitting the 128 models should only take a few seconds. "yi ~ 1 + imag" 10 models within 2 IC units. 77 models to reach 95% of evidence weight. 
The horizontal red line differentiates between models whose AICc is less versus more than 2 units away from that of the "best" model (i.e., the model with the lowest AICc). The output above shows that there are 10 models whose AICc is less than 2 units away from that of the best model. Sometimes this is taken as a cutoff, so that models with values more than 2 units away are considered substantially less plausible than those with AICc values closer to that of the best model. However, we should not get too hung up about such (somewhat arbitrary) divisions (and there are critiques of this rule; e.g., Anderson, 2007). We see that the "best" model is the one that only includes imag as a moderator. The second best includes imag and meta. And so on. The values under weights are the model weights (also called "Akaike weights"). From an information-theoretic perspective, the Akaike weight for a particular model can be regarded as the probability that the model is the best model (in a Kullback-Leibler sense of minimizing the loss of information when approximating full reality by a fitted model). So, while the "best" model has the highest weight/probability, its weight in this example is not substantially larger than that of the second model (and also the third, fourth, and so on). So, we shouldn't be all too certain here that we have really found the best model. Several models are almost equally plausible (in other examples, one or two models may carry most of the weight, but not here). And here, we see that imag is indeed a significant predictor of the treatment effect (and since it is a dummy variable, it can change the treatment effect from $.1439$ to $.1439 + .4437 = .5876$), which is also practically relevant (for standardized mean differences, some would interpret this as changing a small effect into at least a medium-sized one). 
However, now I am starting to mix the information-theoretic approach with classical null hypothesis testing, and I will probably go to hell for all eternity if I do so. Also, other models in the candidate set have model probabilities that are almost as large as the one for this model, so why only focus on this one model? The importance value for a particular predictor is equal to the sum of the weights/probabilities for the models in which the variable appears. So, a variable that shows up in lots of models with large weights will receive a high importance value. In that sense, these values can be regarded as the overall support for each variable across all models in the candidate set. The vertical red line is drawn at .80, which is sometimes used as a cutoff to differentiate between important and not so important variables, but this is again a more or less arbitrary division. This method properly works with models that are fit with or without the Knapp and Hartung method (the default for rma() is test="z", but this could be set to test="knha", in which case standard errors are computed in a slightly different way, and tests and confidence intervals are based on the t-distribution). I rounded the results to 4 digits to make the results easier to interpret. Note that the table again includes the importance values. In addition, we get unconditional estimates of the model coefficients (first column). These are model-averaged parameter estimates, which are weighted averages of the model coefficients across the various models (with weights equal to the model probabilities). These values are called "unconditional" as they are not conditional on any one model (but they are still conditional on the 128 models that we have fitted to these data; but not as conditional as fitting a single model and then making all inferences conditional on that one single model). Moreover, we get estimates of the unconditional variances of these model-averaged values. 
These variance estimates take two sources of uncertainty into account: (1) uncertainty within a given model (i.e., the standard error of a particular model coefficient shown in the output when fitting a model; as an example, see the output from the "best" model shown earlier) and (2) uncertainty with respect to which model is actually the best approximation to reality (so this source of variability examines how much the size of a model coefficient varies across the set of candidate models). The model-averaged parameter estimates and the unconditional variances can be used for multimodel inference. For example, adding and subtracting the values in the last column from the model-averaged parameter estimates yields approximate 95% confidence intervals for each coefficient that are based not on any one model, but all models in the candidate set. We can also use multimodel methods for computing a predicted value and corresponding confidence interval. Again, we do not want to base our inference on a single model, but all models in the candidate set. Doing so requires a bit more manual work, as I have not (yet) found a way to use the predict() function from the glmulti package in combination with metafor for this purpose. So, we have to loop through all models, compute the predicted value based on each model, and then we can compute a weighted average (using the model weights) of the predicted values across all models. TASK: Diagnostic of candidate set. Your candidate set contains 268435456 models. So, over $2 \times 10^8$ possible models. Fitting all of these models would not only test our patience (and would be a waste of valuable CPU cycles), it would also be a pointless exercise (even fitting the 128 models above could be critiqued by some as a mindless hunting expedition – although if one does not get too fixated on the best model, but considers all the models in the set as part of a multimodel inference approach, this critique loses some of its force). 
So, I won't consider this any further in this example. The same principle can of course be applied when fitting other types of models, such as those that can be fitted with the rma.mv() or rma.glmm() functions. One just has to write an appropriate rma.glmulti function and, for multimodel inference, a corresponding getfit method. For multivariate/multilevel models fitted with the rma.mv() function, one can also consider model selection with respect to the random effects structure. Making this work would require a bit more work. Time permitting, I might write up an example illustrating this at some point in the future. Anderson, D. R. (2007). Model based inference in the life sciences: A primer on evidence. New York: Springer. Bangert-Drowns, R. L., Hurley, M. M., & Wilkinson, B. (2004). The effects of school-based writing-to-learn interventions on academic achievement: A meta-analysis. Review of Educational Research, 74(1), 29–58. Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach (2nd ed.). New York: Springer.
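As a postscript, the multimodel quantities used throughout this example (Akaike weights, model-averaged estimates, unconditional standard errors) come down to a few formulas from Burnham and Anderson (2002), which can be sketched independently of metafor/glmulti. The AICc values, coefficients, and standard errors below are made-up illustrative numbers, not results from the analysis above:

```python
import math

# Akaike weights, model-averaged estimate, and unconditional SE
# (Burnham & Anderson, 2002); all input numbers are illustrative.
aicc  = [100.0, 100.8, 103.1]   # AICc of each candidate model
coefs = [0.44, 0.39, 0.51]      # estimate of one coefficient in each model
ses   = [0.15, 0.16, 0.14]      # its (within-model) standard error

delta   = [a - min(aicc) for a in aicc]
rel_lik = [math.exp(-d / 2) for d in delta]
weights = [r / sum(rel_lik) for r in rel_lik]         # model probabilities

# model-averaged ("unconditional") estimate of the coefficient
est_avg = sum(w * b for w, b in zip(weights, coefs))

# unconditional SE: within-model uncertainty plus between-model spread
se_unc = sum(w * math.sqrt(se**2 + (b - est_avg)**2)
             for w, se, b in zip(weights, ses, coefs))
```

Adding and subtracting roughly 1.96 times se_unc from est_avg then gives the kind of approximate multimodel confidence interval described above.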
\begin{definition}[Definition:Contingent Statement] A '''contingent statement''' is a statement form which is neither a tautology, nor unsatisfiable, but whose truth value depends upon the truth value of its component substatements. <!-- == Formal Definition == Let $\mathfrak M$ be a collection of models for a particular formal language $\LL$. A well-formed word $\mathbf A$ of $\LL$ is said to be '''contingent (for $\mathfrak M$)''' {{iff}}: :$\exists \MM_1, \MM_2 \in \mathfrak M: \MM_1 \models \mathbf A \land \MM_2 \not \models \mathbf A$ that is, {{iff}} some models in $\mathfrak M$ approve, and others disapprove of $\mathbf A$. == Also known as == {{refactor|Seems to be replaceable by Also see entries. But, will need source review}} In the context of propositional formulas the term '''satisfiable''' is usually used: A propositional formula is '''satisfiable''' if its value is True in at least ''one'' boolean interpretation. A propositional formula is '''not-valid''' or '''falsifiable''' if its value is False in at least ''one'' boolean interpretation. \end{definition}
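Informally (and outside the definition above), whether a given propositional form is a tautology, unsatisfiable, or contingent can be decided by brute force over all boolean interpretations; the helper below and its example formulas are purely illustrative:

```python
from itertools import product

# Classify a propositional form by evaluating it under every boolean
# interpretation of its variables; `formula` is any callable on booleans.
def classify(formula, nvars):
    values = [formula(*vals) for vals in product([False, True], repeat=nvars)]
    if all(values):
        return "tautology"
    if not any(values):
        return "unsatisfiable"
    return "contingent"

# p AND (NOT q) is contingent: true under (T, F), false e.g. under (T, T).
verdict = classify(lambda p, q: p and not q, 2)
```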
Relation between attack and attack model for signatures What is the relationship between an attack and an attack model? For example, let $\Pi$ be the Lamport signature scheme. This signature has its security based on any one-way function. The Grover algorithm, an attack, inverts this function with complexity $\mathrm{O}(2^{n/2})$. Furthermore, there are algorithms that try to forge a signature, known as adversaries. Depending on the way they act, we choose an attack model such as the chosen-message-attack model. Is there any relationship between this model and the attack described above? signature provable-security attack adversarial-model juaninf $\begingroup$ It is not entirely clear what you mean. Could you please clarify it a bit? In the last sentence you say "this model," which refers to the chosen-message-attack model. You refer to an attack, which must be some party using Grover's algorithm. So your question must be: Is there any relationship between the chosen-message-attack model and some party using Grover's algorithm? Is that a correct characterization of your inquiry? Thank you. $\endgroup$ – Patriot I hope I got your point and try to answer your question. Actually, if I understand you right, then what you call an attack actually means an adversary acting in a specific attack model. To clarify this, we need to review the security models for digital signature schemes, and once we have discussed this we can clarify the issues. Basically, we have to discuss what an adversary tries to achieve and which environment is given to him. Goal of an adversary: We start by discussing the goals of an adversary, beginning with the strongest and ending up with the weakest attack goal. Total break: The adversary is able to obtain the secret signing key. Thus, he is then able to impersonate the signer by signing arbitrary messages in the name of the signer. 
Selective forgery: The adversary is able to produce valid signatures for some selected messages or a particular class of messages. (Weak) Existential forgery: The adversary is able to produce at least one valid signature for a message for which he has not been given a signature yet (the adversary typically has no control over the choice of this forged message). Strong existential forgery: The adversary is able to produce a valid signature different from any signature he has seen. In contrast to weak existential unforgeability, the message corresponding to the forged signature may already have been signed. Think of a second valid signature for a message that has already been signed (signatures that are publicly re-randomizable - such as Camenisch-Lysyanskaya signatures - can never achieve this level). Power of an adversary: After defining the goals, we will take a closer look at the adversary and define his ability or power. This basically defines the environment he is acting in. Thereby, we start with the weakest and end up with the strongest one. Key-only attack: The adversary solely knows the public key corresponding to the secret signing key of the signer. Known-message attack: The adversary additionally has access to a list of message-signature pairs from the signer, whereas he has no influence on the choice of the messages. Random-message attack: The adversary can obtain signatures for messages, whereas the adversary has no control over how the messages are chosen (they are randomly chosen by the signer). Chosen-message attack: The adversary has access to a list of message-signature pairs, whereas the messages were chosen by the adversary before attempting to break the signature scheme. Adaptively chosen-message attack: The adversary is able to adaptively choose the messages which are signed by the signer during the attack. 
Thus, he may choose messages depending on the public key of the signer and also on previous messages or signatures obtained during the attack. In other words, the adversary can use the signer as a signing black-box (oracle) throughout the entire attack. Security as a combination of goals and power: Now, the combination of the goal and the power of an adversary gives the type of security of a specific scheme. In general, a digital signature scheme is said to be secure if the most powerful adversary cannot even achieve the weakest goal. Secure Signatures: In the notation introduced above, a digital signature scheme is considered secure if it is existentially unforgeable under adaptively chosen-message attacks. Typically, here, existentially unforgeable refers to weak existential unforgeability (but there are also some signature schemes whose security is proved with respect to strong existential unforgeability). Coming back to your question: Actually, what you describe as an attack is a total break under a key-only attack. Or you may see it as an adversary mounting an attack (key-only) with this goal (total break). DrLecter $\begingroup$ Sometimes, the adversary's goal is, given a signature, to produce a different signature of the same message (say, also given). E.g. in RSA, perhaps $s+N$ is just as valid as $s$ (sometimes they have the same bit length). Or simply adding a leading 0 could do. Or changing uppercase to lowercase in hexadecimal. Or... Such mundane things become a real issue in some protocols, where signatures get re-signed, or hashed, see this. That seems to be missing in your interesting thesaurus. $\endgroup$ – fgrieu ♦ $\begingroup$ @fgrieu yes, of course, this is what is called strong existential unforgeability. Thanks, I will add this one and also take random message attacks into account :) $\endgroup$ – DrLecter
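The combination of goal and power described in this answer can be phrased as a security game. The sketch below is a toy stand-in: it uses HMAC as the "signature", so verify takes the secret key (a real public-key scheme would verify under the public key), and all names (keygen, sign, verify, euf_cma_experiment) are illustrative rather than from any library:

```python
import hmac, hashlib, os

def keygen():
    return os.urandom(32)          # secret "signing key"

def sign(sk, msg):
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(sk, msg, sig):
    return hmac.compare_digest(sign(sk, msg), sig)

# (Weak) EUF-CMA experiment: the adversary gets the signing oracle and wins
# by outputting a valid signature on a message it never queried.
def euf_cma_experiment(adversary):
    sk = keygen()
    queried = []
    def oracle(msg):               # the signing black-box from the answer
        queried.append(msg)
        return sign(sk, msg)
    msg, sig = adversary(oracle)
    return msg not in queried and verify(sk, msg, sig)

# A trivial adversary that merely replays an oracle answer must lose:
def replay_adversary(oracle):
    m = b"hello"
    return m, oracle(m)
```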
\begin{document} \begin{frontmatter} \title{On the Dynamic Consistency of a Discrete Predator-Prey Model} \author[cmbe]{Priyanka Saha\fnref{fn1}} \author[cmbe]{Nandadulal Bairagi\corref{cor1}} \ead{[email protected]} \cortext[cor1]{Corresponding author} \author[ajc]{Milan Biswas\fnref{fn2}} \address[cmbe]{Centre for Mathematical Biology and Ecology\\ Department of Mathematics, Jadavpur University\\ Kolkata-700032, India.} \address[ajc]{A. J. C. Bose College\\ A. J. C. Bose Road \\ Kolkata-700020, India.} \begin{abstract} We here discretize a predator-prey model by the standard Euler forward method and by a non-standard finite difference (NSFD) method, and then compare their dynamic properties with those of the corresponding continuous-time model. We show that the NSFD model preserves positivity of solutions and is completely consistent with the dynamics of the corresponding continuous-time model. On the other hand, the discrete model formulated by the forward Euler method does not show dynamic consistency with its continuous counterpart. Rather, it shows scheme-dependent instability when the step-size restriction is violated. \end{abstract} \end{frontmatter} \section{Introduction} Nonlinear systems of differential equations play a very important role in studying different physical, chemical and biological phenomena. However, in general, nonlinear differential equations cannot be solved analytically, and therefore discretization is inevitable for a good approximation of the solutions \cite{AL00}. Another reason for constructing discrete models, at least in the case of population models, is that they permit arbitrary time-step units \cite{M89,M84}. Unfortunately, conventional discretization schemes, such as the Euler and Runge-Kutta methods, show dynamic inconsistency \cite{M88}. They produce spurious solutions which are not observed in the parent model, and their dynamics depend on the step-size. 
For example, consider the simple logistic model in continuous time: \begin{eqnarray}\label{Continuous model0-int} \frac{dx}{dt}=rx(1-\frac{x}{K}), ~~x(0)=x_0>0, \end{eqnarray} where $r$ and $K$ are positive constants. The system (\ref{Continuous model0-int}) has two equilibrium points with the following dynamical properties: \begin{enumerate} \item the trivial equilibrium point $x=0$ is always unstable. \item the nontrivial equilibrium point $x=K$ is always stable. \end{enumerate} Fig. 1 shows that even if we start very close to zero ($x_0=0.4$) the solution goes to $x=K=50$, implying that the system is stable around the equilibrium point $x=K$ and unstable around $x=0$.\\ \begin{center} \includegraphics[width=3in, height=1.75in]{Fig_4a.eps} \end{center} {\bf Figure 1:} {\it Time series of the continuous system \eqref{Continuous model0-int}. It shows that the system (\ref{Continuous model0-int}) is stable around the interior equilibrium point $x=K$. Initial point and parameters are taken as $x(0)=0.4$, $r=3$ and $K=50$.}\\ The corresponding discrete model, formulated by a standard finite difference scheme (the Euler forward method; see Anguelov and Lubuma \cite{AL00}), is given by \begin{eqnarray}\label{Continuous model1-int} \frac{x_{n+1}-x_n}{h}=rx_n(1-\frac{x_n}{K}). \end{eqnarray} This equation can be transformed into the logistic difference equation \begin{eqnarray}\label{discrete model-int1} x_{n+1}=x_n+hrx_n(1-\frac{x_n}{K}), \end{eqnarray} where $h$ is the step-size. The system \eqref{discrete model-int1} also has the same equilibrium points, with the following dynamic properties: \begin{enumerate} \item the trivial equilibrium point $x=0$ is always unstable. \item the nontrivial equilibrium point $x=K$ is stable if $h<\frac{2}{r}$. \end{enumerate} The bifurcation diagram of the Euler model \eqref{discrete model-int1} (Fig. 
2) with $h$ as the bifurcating parameter shows that the fixed point $x=K$ changes its stability as the step-size $h$ crosses the value $\frac{2}{r}=0.666$. The fixed point is stable for $h<0.666$ and shows more complex behaviors (period doubling bifurcation) as the step-size is further increased. Thus, the dynamics of the Euler forward model \eqref{discrete model-int1} depend on the step-size and exhibit spurious dynamics which are not observed in the corresponding continuous system \eqref{Continuous model0-int}.\\ \begin{center} \includegraphics[width=3in, height=1.75in]{Fig_1.eps} \end{center} \noindent\textbf{Figure 2:} {\it Bifurcation diagram of the model \eqref{discrete model-int1} with $h$ as the bifurcating parameter. It shows that the system is stable till the step-size $h$ is less than $0.666$ and unstable for higher values of $h$. Parameters and initial point are as in Fig. 1.}\\ Let us consider another simple example (the decay equation) \begin{eqnarray}\label{decay model-int} \frac{dx}{dt}=-\lambda x, ~ \lambda>0, x(0)=x_0>0. \end{eqnarray} Its solution, given by $$x(t)=x_0e^{-\lambda t},$$ is always positive. The corresponding discrete model constructed by the Euler forward method is given by \begin{eqnarray}\label{discrete model-int} x_{n+1}=(1-\lambda h)x_n. \end{eqnarray} Note that its solution fails to remain positive whenever $\lambda h>1$, and the scheme is therefore expected to show numerical instability.\\ These examples demonstrate that discrete systems constructed by standard finite difference schemes are unable to preserve some properties of their corresponding continuous systems. Dynamic behaviors of the discrete model depend strongly on the step-size. However, in principle, the corresponding discrete system should have the same properties as the original continuous system. It is therefore of immense importance to construct discrete models which preserve the properties of their constituent continuous models. 
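The step-size threshold $h<2/r$ can also be checked by direct iteration of \eqref{discrete model-int1}. A minimal numerical sketch, using the parameter values of Figs.~1 and 2 ($r=3$, $K=50$, $x_0=0.4$); the function name is illustrative:

```python
# Euler-forward logistic iteration x_{n+1} = x_n + h*r*x_n*(1 - x_n/K);
# the fixed point x = K is stable iff h < 2/r (here 2/3).
def euler_logistic(x0, r, K, h, n):
    x = x0
    for _ in range(n):
        x = x + h * r * x * (1 - x / K)
    return x

# Below the threshold the iterates settle at the carrying capacity K = 50:
x_stable = euler_logistic(0.4, 3.0, 50.0, 0.5, 2000)
# Above it, x = K loses stability and a period-2 oscillation appears:
x_unstable = euler_logistic(0.4, 3.0, 50.0, 0.7, 2000)
```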
In the recent past, considerable effort has been given to the construction of discrete-time models that preserve the dynamic consistency of the corresponding continuous-time model without any limitation on the step-size. Mickens first proved that corresponding to any ODE there exists an exact difference equation which has zero local truncation error \cite{M84,M88}, and proposed a non-standard finite difference (NSFD) scheme in 1989 \cite{M89}. Later, in 1994, he introduced the concept of elementary stability, the property which brings correspondence between the local stability at equilibria of the differential equation and the numerical method \cite{M94}. Anguelov and Lubuma \cite{AL01} formalized some of the foundations of Mickens's rules, including convergence properties of non-standard finite difference schemes. They defined qualitative stability, which means that the constructed discrete system satisfies some properties, like positivity of solutions, conservation laws and equilibria, for any step-size. In 2005, Mickens coined the term dynamic consistency, which means that a numerical method is qualitatively stable with respect to all desired properties of the solutions to the differential equation \cite{M05}. The NSFD scheme has gained a lot of attention in the last few years because it generally does not show spurious behavior, in contrast with other standard finite difference methods. The NSFD scheme has been successfully used in different fields like economics \cite{L13}, physiology \cite{SM03}, epidemics \cite{MET03,BB17a,SI10}, ecology \cite{RL13,BET17,G12,BB17} and physics \cite{M02,MO14}. Here we shall discretize a nonlinear continuous-time predator-prey system following the dynamics-preserving non-standard finite difference (NSFD) method introduced by Mickens \cite{M89}.\\ \noindent The paper is arranged in the following sequence. In the next section we describe the considered continuous-time model. 
Section 3 contains some definitions and the general technique of constructing an NSFD model. Section 4 contains the analysis of the NSFD and Euler models. Extensive simulations are presented in Section 5. The paper ends with a summary in Section 6. \section{The model} Celik \cite{C15} has investigated the following dimensionless Holling-Tanner predator-prey system with ratio-dependent functional response: \begin{eqnarray}\label{model in continuous system} \frac{dN}{dt} & = & N(1-N)-\frac{NP}{N+\alpha P},\\ \frac{dP}{dt} & = & \beta P(\delta -\frac{P}{N}).\nonumber \end{eqnarray} The state variables $N$ and $P$ represent, respectively, the densities of the prey and predator populations at time $t$, and $N(t)>0,~P(t)\geq0$ for all $t$. Here $\alpha$, $\beta$ and $\delta$ are positive constants. For a fuller description of the model, readers are referred to the work of Celik \cite{C15}. \\ Celik \cite{C15} discussed the existence and stability of the coexistence (interior) equilibrium $E^*=(N^*,P^*)$, where \begin{eqnarray} N^*=\frac{1+\alpha \delta-\delta}{1+\alpha \delta}, \nonumber ~~~~~P^*=\delta N^*. \end{eqnarray} The following results are known.\\ \noindent\textbf{Theorem 1.1.} {\it The interior equilibrium point $E^*$ of the system \eqref{model in continuous system} exists and is stable if \begin{eqnarray}\label{stability condition} (i) \alpha \delta +1>\delta, ~(ii) \delta (2+\alpha \delta)<(1+\alpha \delta)^2 (1+\beta \delta).\nonumber \end{eqnarray}} Here we seek to construct a discrete model of the continuous model \eqref{model in continuous system} that preserves the qualitative properties of the continuous system and maintains dynamic consistency. We also construct the corresponding Euler discrete model and compare its results with those of the NSFD model.
\section{Some definitions} Consider the differential equation \begin{eqnarray}\label{Continuous model} \frac{dx}{dt}=f(x,t,\lambda), \end{eqnarray} where $\lambda$ represents the parameter defining the system \eqref{Continuous model}. Assume that a finite difference scheme corresponding to the continuous system \eqref{Continuous model} is described by \begin{eqnarray}\label{Discrete model} x_{k+1}=F(x_{k},t_{k},h,\lambda). \end{eqnarray} We assume that $F(., ., ., .)$ is such that the proper existence--uniqueness properties hold; the step size is $h=\Delta t$ with $t_k=hk$, $k$ an integer; and $x_k$ is an approximation to $x(t_k)$. \begin{definition} \cite{M05}\label{definition1} ~Let the differential equation \eqref{Continuous model} and/or its solutions have a property $P$. The discrete model \eqref{Discrete model} is said to be dynamically consistent with the equation \eqref{Continuous model} if it and/or its solutions also have the property $P$. \end{definition} \begin{definition} \cite{M05,DK05,AL03}\label{definition2} The NSFD procedures are based on just two fundamental rules: $~~~$(i) the discrete first--derivative has the representation $~~~~~~~~~~~~~~~$$\frac{dx}{dt} \rightarrow \frac{x_{k+1}-\psi(h)x_k}{\phi(h)}$, $h=\Delta t$,\\ where $\phi(h)$, $\psi(h)$ satisfy the conditions $\psi(h)=1+O(h^2)$,~ $\phi(h)=h+O(h^2)$; $~~~$(ii) both linear and nonlinear terms may require a nonlocal representation on the discrete computational lattice; for example, $~~~~~~~~~~~~~~~$$x\rightarrow 2x_k-x_{k+1}$,~~~~ $x^3\rightarrow (\frac{x_{k+1}+x_{k-1}}{2})x_k^2$, $~~~~~~~~~~~~~~~$$x^3\rightarrow 2x_k^3-x_k^2x_{k+1}$, ~~~~$x^2\rightarrow (\frac{x_{k+1}+x_k+x_{k-1}}{3})x_k$. \noindent While no general principles currently exist for selecting the functions $\psi(h)$ and $\phi(h)$, particular forms for a specific equation can easily be determined.
Functional forms commonly used for $\psi(h)$ and $\phi(h)$ are $$\phi(h)=\frac{1-e^{-\lambda h}}{\lambda}, ~\psi(h)=\cos(\lambda h),$$ where $\lambda$ is some parameter appearing in the differential equation. \end{definition} \begin{definition}\label{definition3} The finite difference method \eqref{Discrete model} is called positive if, for any value of the step size $h$, the solutions of the discrete system remain positive for all positive initial values. \end{definition} \begin{definition}\label{definition4} The finite difference method \eqref{Discrete model} is called elementary stable if, for any value of the step size $h$, the fixed points of the difference equation are exactly those of the differential system and the linear stability properties of each fixed point are the same for both the differential system and the discrete system. \end{definition} \begin{definition} \cite{DK06}\label{definition5} A method that follows the Mickens rules (given in Definition 3.2) and preserves the positivity of the solutions is called a positive and elementary stable nonstandard (PESN) method.
\end{definition} \section{Nonstandard finite difference (NSFD) model} For convenience, we first rewrite the continuous system \eqref{model in continuous system} as \begin{eqnarray}\label{model in continuous system 2} \frac{dN}{dt} & = & N-N^2-\frac{NP}{(N+\alpha P)}+(N-N)(N+\alpha P),\\ \frac{dP}{dt} & = & \beta \delta P-\frac{\beta P^2}{N}.\nonumber \end{eqnarray}\\ The added term $(N-N)(N+\alpha P)$ is identically zero in the continuous system; it is introduced solely so that a suitable non-local discretization can be applied. Now we express the above system as follows: \begin{eqnarray}\label{continuous form} \frac{dN}{dt} & = & N-N^2-NA(N,P)+(N-N)B(N,P),\\ \frac{dP}{dt} & = & \beta \delta P-\beta PC(N,P),\nonumber \end{eqnarray}\\ where $A(N,P)=\frac{P}{N+\alpha P}$, $B(N,P)=(N+\alpha P)$ and $C(N,P)=\frac{P}{N}$.\\ We employ the following non-local approximations termwise for the system \eqref{continuous form}: \begin{eqnarray}\label{Nonlocal approxmiation} \left\{ \begin{array}{ll} \frac{dN}{dt}\rightarrow\frac{N_{n+1}-N_n}{h},~~~~~~~~~~~~~~~~~~\frac{dP}{dt}\rightarrow\frac{P_{n+1}-P_n}{h}\\ N\rightarrow N_n,~~~~~~~~~~~~~~~~~~~~~~~~~~~~P\rightarrow P_n,\\ N^2\rightarrow N_nN_{n+1},\\ PC(N,P)\rightarrow P_{n+1}C(N_n,P_n),\\ NA(N,P)\rightarrow N_{n+1}A(N_n,P_n),\\ (N-N)B(N,P)\rightarrow (N_n-N_{n+1})B(N_n,P_n),\\ \end{array} \right. \end{eqnarray} where $h~(>0)$ is the step-size.\\ Applying these transformations, the continuous-time system \eqref{model in continuous system 2} is converted to \begin{eqnarray}\label{discrete system} \frac{N_{n+1}-N_n}{h}&=&N_n-N_n N_{n+1}-\frac{N_{n+1}P_n}{N_n+\alpha P_n}+(N_n-N_{n+1})(N_n+\alpha P_n),\nonumber \\ \frac{P_{n+1}-P_n}{h}&=&\beta \delta P_n-\frac{\beta P_{n+1}P_n }{N_n}.
\end{eqnarray} System \eqref{discrete system} can be simplified to \begin{eqnarray}\label{model in discrete system} N_{n+1} & = & \frac{N_n\{1+h+h(N_n+\alpha P_n)\}(N_n+\alpha P_n)}{(1+2hN_n+\alpha hP_n)(N_n+\alpha P_n)+hP_n},\\ P_{n+1} & = & \frac{P_nN_n(1+\beta \delta h)}{N_n+\beta hP_n}.\nonumber \end{eqnarray} Note that all solutions of the discrete-time system \eqref{model in discrete system} remain positive for any step-size if they start from positive initial values, since every term on the right-hand sides is positive. Therefore, the system \eqref{model in discrete system} is positive. \subsection{Existence of fixed points} Fixed points of the system \eqref{model in discrete system} are the solutions of the coupled algebraic equations obtained by putting $N_{n+1}=N_n=N$ and $P_{n+1}=P_n=P$ in \eqref{model in discrete system}. However, the fixed points can be obtained more easily from \eqref{discrete system} with the same substitutions. Thus, the fixed points are the solutions of the following nonlinear algebraic equations: \begin{eqnarray}\label{existence1} N-N^2-\frac{NP}{N+\alpha P}=0,\\ \beta \delta P-\frac{\beta P^2}{N}=0.\nonumber \end{eqnarray} It is easy to observe that $E_1=(1,0)$ is the predator-free fixed point. The interior fixed point $E^*=(N^*,P^*)$ satisfies \begin{eqnarray}\label{existence2} 1-N^*-\frac{P^*}{N^*+\alpha P^*}=0 ~and~ \delta-\frac{P^*}{N^*}=0. \end{eqnarray} From the second equation of \eqref{existence2}, we have $P^*=\delta N^*$. Substituting $P^*$ into the first equation of \eqref{existence2}, we find $N^*=\frac{1+\alpha \delta-\delta}{1+\alpha \delta}$, which is positive if $1+\alpha \delta >\delta$. Thus the positive fixed point $E^*$ exists if $1+\alpha \delta>\delta$.
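As an illustration (ours, not part of the original analysis), the map \eqref{model in discrete system} can be iterated directly. The following Python sketch checks numerically that the iterates remain positive for a deliberately large step-size, and that $E^*$ is a fixed point of the map for any $h$:

```python
def nsfd_step(N, P, h, alpha, beta, delta):
    # one step of the NSFD map derived above
    S = N + alpha * P
    N1 = N * (1 + h + h * S) * S / ((1 + 2 * h * N + alpha * h * P) * S + h * P)
    P1 = P * N * (1 + beta * delta * h) / (N + beta * h * P)
    return N1, P1

alpha, beta, delta = 0.7, 0.9, 0.6
Nstar = (1 + alpha * delta - delta) / (1 + alpha * delta)   # interior fixed point
Pstar = delta * Nstar

# positivity for a deliberately large step-size, starting from (0.2, 0.2)
traj = [(0.2, 0.2)]
for n in range(500):
    traj.append(nsfd_step(*traj[-1], h=10.0, alpha=alpha, beta=beta, delta=delta))
```

Every factor in the update is positive, so positivity is guaranteed algebraically; the fixed-point property of $E^*$ follows from the equilibrium relations $P^*=\delta N^*$ and $1-N^*-\frac{P^*}{N^*+\alpha P^*}=0$.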
\subsection{Stability analysis of fixed points} The variational matrix of system \eqref{model in discrete system} evaluated at an arbitrary point $(N_n,P_n)$ is given by\\ \begin{equation} \label{jacobian1}J(N,P)=\left( \begin{array}{cc} a_{11} & a_{12}\\ a_{21} & a_{22}\\ \end{array} \right),\end{equation}\\ where \begin{equation}\label{jacobian} \left\{ \begin{array}{ll} a_{11} = \frac{\{1+h+h(N_n+\alpha P_n)\}(N_n+\alpha P_n)}{(1+2hN_n+\alpha hP_n)(N_n+\alpha P_n)+hP_n}+\frac{hN_n(N_n+\alpha P_n)}{(1+2hN_n+\alpha hP_n)(N_n+\alpha P_n)+hP_n}\\ ~~~~~~~~+\frac{N_n\{1+h+h(N_n+\alpha P_n)\}}{(1+2hN_n+\alpha hP_n)(N_n+\alpha P_n)+hP_n}\\ ~~~~~~~~-\frac{N_n\{1+h+h(N_n+\alpha P_n)\}(N_n+\alpha P_n)\{2h(N_n+\alpha P_n)+(1+2hN_n+\alpha hP_n)\}}{\{(1+2hN_n+\alpha h P_n)(N_n+\alpha P_n)+hP_n\}^2},\\ \\ a_{12}=\frac{\alpha hN_n(N_n+\alpha P_n)}{(1+2hN_n+\alpha hP_n)(N_n+\alpha P_n)+hP_n}+\frac{\alpha N_n\{1+h+h(N_n+\alpha P_n)\}}{(1+2hN_n+\alpha hP_n)(N_n+\alpha P_n)+hP_n}\\ ~~~~~~~~ -\frac{N_n\{1+h+h(N_n+\alpha P_n)\}(N_n+\alpha P_n)\{\alpha h(N_n+\alpha P_n)+\alpha (1+2hN_n+\alpha hP_n)+h\}}{\{(1+2hN_n+\alpha hP_n)(N_n+\alpha P_n)+hP_n\}^2},\\ \\ a_{21}=\frac{P_n(1+\beta \delta h)}{N_n+\beta hP_n}-\frac{P_nN_n(1+\beta \delta h)}{(N_n+\beta hP_n)^2},\\ \\ a_{22}=\frac{(1+\beta \delta h)N_n}{N_n+\beta hP_n}-\frac{\beta hP_nN_n(1+\beta \delta h)}{(N_n+\beta hP_n)^2}.\nonumber \end{array} \right. \end{equation} Let $\lambda_{1}$ and $\lambda_{2}$ be the eigenvalues of the variational matrix \eqref{jacobian1}; then we give the following definition in relation to the stability of the system \eqref{model in discrete system}. \begin{definition}\label{definition6} A fixed point $(x,y)$ of the system \eqref{model in discrete system} is called stable if $\left|\lambda_{1}\right|<1$, $\left|\lambda_{2}\right|<1$ and a source if $\left|\lambda_{1}\right|>1$, $\left|\lambda_{2}\right|>1$.
It is called a saddle if $\left|\lambda_{1}\right|<1$, $\left|\lambda_{2}\right|>1$ or $\left|\lambda_{1}\right|>1$, $\left|\lambda_{2}\right|<1$ and a non--hyperbolic fixed point if either $\left|\lambda_{1}\right|=1$ or $\left|\lambda_{2}\right|=1$. \end{definition} \begin{lemma} \cite{LG13}\label{lemma} Let $\lambda_{1}$ and $\lambda_{2}$ be the eigenvalues of the variational matrix \eqref{jacobian1}. Then $\left|\lambda_{1}\right|<1$ and $\left|\lambda_{2}\right|<1$ iff $(i)~ 1-det(J)>0$, $(ii)~ 1-trace(J)+det(J)>0$ and $(iii)~ 1+trace(J)+det(J)>0$. \end{lemma} \begin{theorem} {\it Suppose that the conditions of Theorem 1.1 hold. Then the fixed point $E^*$ of the system \eqref{model in discrete system} is locally asymptotically stable.}\end{theorem} \noindent\textbf{Proof.} At the interior fixed point $E^*$, the variational matrix reads as\\ $$J(N^*,P^*)=\left( \begin{array}{cc} a_{11}^* & a_{12}^*\\ a_{21}^* & a_{22}^* \end{array} \right),$$\\ where \begin{eqnarray}\label{interior jacobian} \left\{ \begin{array}{ll} a_{11}^*=1+\frac{N^*h(1-2N^*-\alpha P^*)}{G},\\ a_{12}^*=\frac{N^*h(\alpha-\alpha N^*-1)}{G},\\ a_{21}^*=\frac{\beta \delta hP^*}{H},\\ a_{22}^*=1-\frac{\beta hP^*}{H} \end{array} \right. \end{eqnarray}\\ with $G=\{1+h+h(N^*+\alpha P^*)\}(N^*+\alpha P^*)$ and $H=(1+\beta \delta h)N^*$.\\ Using $P^*=\delta N^*$ in \eqref{interior jacobian}, we have \begin{eqnarray}\label{interior jacobian 1} \left\{ \begin{array}{ll} a_{11}^*=1+\frac{N^*h(1-2N^*-\alpha \delta N^*)}{G},\\ a_{12}^*=\frac{N^*h(\alpha-\alpha N^*-1)}{G},\\ a_{21}^*=\frac{\beta \delta^2 hN^*}{H},\\ a_{22}^*=1-\frac{\beta \delta h N^*}{H}. \end{array} \right. \end{eqnarray} One can compute that $1-det(J)=-\frac{(N^*)^2h\{(1-\beta \delta-\alpha \beta \delta^2)-(2+\alpha \delta)N^*\}}{GH}+\frac{\beta \delta h^2(N^*)^2\{N^*(1+\alpha \delta)+N^*(1+\alpha \delta)^2+\frac{\alpha \delta^2}{1+\alpha \delta}\}}{GH}>0$, provided $-(1-\beta \delta-\alpha \beta \delta^2)+(2+\alpha \delta)N^*>0$, i.e.
$\delta (2+\alpha \delta)<(1+\alpha \delta)^2(1+\beta \delta)$. Note that $trace(J)=\frac{(N^*)^2}{GH}[(1+\alpha \delta)\{2+h(2+\beta\delta+2N^*\alpha \delta)\}+h(1+N^*\alpha \delta)+h^2\beta \delta\{\frac{2\delta}{1+\alpha \delta}+\alpha \delta(1+N^*+N^*\alpha \delta)+N^*\}]>0$ and $1-trace(J)+det(J)=\frac{\beta \delta h^2 (N^*)^2(1+\alpha \delta-\delta)}{GH}>0$, following the existence condition of $E^*$. Therefore, the positive fixed point $E^*$ is locally asymptotically stable provided the conditions of Theorem 1.1 hold. Hence the theorem is proven. \subsection{The Euler forward method} By Euler's forward method, we transform the continuous model \eqref{model in continuous system} into the following discrete model: \begin{eqnarray} \label{model in Euler system} \frac{N_{n+1}-N_{n}}{h}&=&N_n[1-N_n-\frac{P_n}{N_n+\alpha P_n}],\\ \frac {P_{n+1}-P_{n}}{h}&=&\beta P_n[\delta-\frac{P_n}{N_n}],\nonumber \end{eqnarray} where $h>0$ is the step size. Rearranging the above equations, we have \begin{eqnarray} \label{model in Euler system1} N_{n+1}&=&N_{n}+h N_n[1-N_n-\frac{P_n}{N_n+\alpha P_n}],\\ P_{n+1}&=&P_{n}+h\beta P_n[\delta-\frac{P_n}{N_n}]. \nonumber \end{eqnarray} Notice that the system \eqref{model in Euler system1} with positive initial values is not unconditionally positive, due to the presence of negative terms. The system may therefore exhibit spurious behaviors and numerical instabilities \cite{M94}. \subsubsection{Existence and stability of fixed points} At the fixed points, we substitute $N_{n+1}=N_{n}=N$ and $P_{n+1}=P_{n}=P$. One can easily compute that \eqref{model in Euler system1} has the same fixed points as in the NSFD case. The fixed point $E_1=(1,0)$ always exists, and the fixed point $E^{*}=(N^{*},P^{*})$ exists if $1+\alpha \delta>\delta$, where $N^*=\frac{1+\alpha \delta-\delta}{1+\alpha \delta}$, $P^* = \delta N^*$.
We are interested in the interior equilibrium only.\\ The variational matrix of the system \eqref{model in Euler system1} at an arbitrary point $(N_n, P_n)$ is given by $$J(N,P)=\left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{array} \right),$$ \begin{eqnarray}\label{Jacobian Matrix Elements} \nonumber \mbox{where} \left\{ \begin{array}{ll} a_{11}=1+h[1-N_n-\frac{P_n}{N_n+\alpha P_n}]+h N_n[-1+\frac{P_n}{(N_n+\alpha P_n)^2}],\\ a_{12}=-h (\frac{N_n}{N_n+\alpha P_n})^2,\\ a_{21}=h \beta (\frac{P_n}{N_n})^2,\\ a_{22}=1+h[\beta \delta-\frac{\beta P_n}{N_n}-\beta \frac{P_n}{N_n}]. \end{array} \right. \end{eqnarray} \begin{theorem} Suppose that the conditions of Theorem 1.1 hold. The interior fixed point $E^*$ of the system \eqref{model in Euler system1} is then locally asymptotically stable if $h<min[\frac{G}{H},\frac{2(1+\alpha \delta)^2}{G}]$, where $G=(1+\alpha \delta)^2 (1+\beta \delta)-\delta (2+\alpha \delta)$, $H=\beta \delta (1+\alpha \delta-\delta)(1+\alpha \delta).$ \end{theorem} \begin{proof} At the interior equilibrium point $E^*$, the Jacobian matrix is evaluated as $$J(N^{*},P^{*})=\left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \\ \end{array} \right),$$ where $a_{11}=1-hN^*[1-\frac{P^*}{(N^*+\alpha P^*)^2}]$, $a_{12}=-h(\frac{N^*}{N^*+\alpha P^*})^2$, $a_{21}=h\beta (\frac{P^*}{N^*})^2$, $a_{22}=1-h\beta \frac{P^*}{N^*}$. Note that $1-trace(J)+det(J) =h^2 \beta P^*$ is always positive, following the existence conditions of $E^*$. Thus, condition (ii) of Lemma 4.1 is satisfied. One can compute that $det(J)=1-h N^*[\frac{G}{H}-h]$. Here $H$ is positive following the existence condition of $E^*$, and $G>0$ if $(1+\alpha \delta)^2 (1+\beta \delta)>\delta (2+\alpha \delta)$. Thus condition (i) of Lemma 4.1 is satisfied if $h<\frac{G}{H}$. Simple computations give $1+trace(J)+det(J)=2(2-h\frac{G}{(1+\alpha \delta)^2})+h^2H$. This expression will be positive if $0<h<\frac{2(1+\alpha \delta)^2}{G}$.
Therefore, the coexistence equilibrium point $E^*$ exists and is stable if $1+\alpha \delta>\delta$, $\delta (2+\alpha \delta)<(1+\alpha \delta)^2 (1+\beta \delta)$ and $h<min[\frac{G}{H},\frac{2(1+\alpha \delta)^2}{G}]$. Hence the theorem. \end{proof} \noindent {\bf Remark 4.1.} Note that if $h>\frac{G}{H}$ then $E^*$ is unstable even when the other two conditions are satisfied. \section{Numerical simulations} In this section, we present some numerical simulations to validate our analytical results for the NSFD discrete system \eqref{model in discrete system} and the Euler system \eqref{model in Euler system1} against their continuous counterpart \eqref{model in continuous system}. For this experiment, we consider the parameter set of Celik \cite{C15}: $\alpha=0.7, \beta=0.9, \delta=0.6$. The step size is kept fixed at $h=0.1$ in all simulations, unless stated otherwise. We consider the initial value $I_1=(0.2, 0.2)$, as in Celik \cite{C15}, for all simulations. For the above parameter set, the interior fixed point is evaluated as $E^*=(N^*, P^*)= (0.5775, 0.3465)$. We first reproduce the phase plane diagrams (Fig. 3) of the continuous system \eqref{model in continuous system}, the NSFD discrete system \eqref{model in discrete system} and the Euler discrete system \eqref{model in Euler system1} using the software Matlab 7.11 (ODE45 for the continuous system). Following the analytical results stated in Section 4, the phase plane diagrams show that the equilibrium $E^*$ is stable in all three cases. \begin{center} \includegraphics[width=2in, height=1.5in]{Fig_1a.eps} \includegraphics[width=2in, height=1.5in]{Fig_1b.eps} \includegraphics[width=2in, height=1.5in]{Fig_1c.eps} \end{center} {\bf Figure 3:} {\it Phase diagrams of the continuous system \eqref{model in continuous system} (Fig. a), the NSFD discrete system \eqref{model in discrete system} (Fig. b) and the Euler system \eqref{model in Euler system1} (Fig. c).
These figures show that the solution in each case converges to the stable coexistence equilibrium $E^*$ for the parameters $\alpha=0.7, \beta=0.9, \delta=0.6$. Here $G=(1+\alpha \delta)^2 (1+\beta \delta)-\delta (2+\alpha \delta)=1.6533$ and $h=0.1<min\{\frac{G}{H},\frac{2(1+\alpha \delta)^2}{G}\}=min\{2.6293, 2.4393\}$.} \begin{center} \includegraphics[width=2in, height=1.5in]{Fig_2a.eps} \includegraphics[width=2in, height=1.5in]{Fig_2b.eps} \end{center} {\bf Figure 4:} {\it Bifurcation diagrams of the prey population of the Euler--forward model \eqref{model in Euler system1} (Fig. a) and the NSFD model \eqref{model in discrete system} (Fig. b) with the step--size $h$ as the bifurcation parameter. All the parameters and the initial value are the same as in Fig. 3. The first figure shows that the prey population is stable for small step--size $h$ and unstable for higher values of $h$. The second figure shows that the prey population is stable for all step--sizes $h$.}\\ To compare the step--size dependency of the Euler model and the NSFD model, we have plotted the bifurcation diagrams of the prey population of the systems \eqref{model in Euler system1} and \eqref{model in discrete system}, considering the step--size $h$ as the bifurcation parameter (Fig. 4), for the same parameter values as in Fig. 3. Fig. 4a shows that the behavior of the Euler model depends on the step--size. If the step--size is small, the system population is stable and the dynamics resembles that of the continuous system \eqref{model in continuous system}. As the step--size is increased, the system population becomes unstable and the dynamics is therefore inconsistent with the continuous system. However, the second figure (Fig. 4b) shows that the NSFD model \eqref{model in discrete system} remains stable for all $h$, indicating that the dynamics is independent of the step--size.
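For reference, the equilibrium and threshold values quoted in the captions follow directly from the formulas above; a short Python check (illustrative) for the parameter set $\alpha=0.7$, $\beta=0.9$, $\delta=0.6$:

```python
alpha, beta, delta = 0.7, 0.9, 0.6

# interior fixed point E* = (N*, P*)
Nstar = (1 + alpha * delta - delta) / (1 + alpha * delta)
Pstar = delta * Nstar

# stability thresholds of the Euler scheme (theorem on the Euler model)
G = (1 + alpha * delta) ** 2 * (1 + beta * delta) - delta * (2 + alpha * delta)
H = beta * delta * (1 + alpha * delta - delta) * (1 + alpha * delta)
h_crit = min(G / H, 2 * (1 + alpha * delta) ** 2 / G)   # binding bound on h
```

With these values, $E^*\approx(0.5775, 0.3465)$, $G\approx 1.6533$, $G/H\approx 2.6293$ and $2(1+\alpha\delta)^2/G\approx 2.4393$, matching the figures.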
\begin{center} \includegraphics[width=2in, height=1.5in]{Fig_3a.eps} \includegraphics[width=2in, height=1.5in]{Fig_3b.eps} \includegraphics[width=2in, height=1.5in]{Fig_3c.eps} \includegraphics[width=2in, height=1.5in]{Fig_3d.eps} \end{center} {\bf Figure 5:} {\it Time series solutions of the NSFD system \eqref{model in discrete system} and the Euler system \eqref{model in Euler system1} for two particular values of the step-size ($h$). Here $h=2$ for Figs. (a) \& (b) and $h=2.67$ for Figs. (c) \& (d). Other parameters are as in Fig. 4.}\\ In particular, we plot (Fig. 5) the time series behavior of the NSFD system \eqref{model in discrete system} and the Euler discrete system \eqref{model in Euler system1} for $h=2 (<min\{\frac{G}{H},\frac{2(1+\alpha \delta)^2}{G}\}=min\{2.6293, 2.4393\})$ and for $h=2.67 (>min\{2.6293, 2.4393\})$. The first two figures ($5a \& 5b$) show that both populations are stable when the step-size is $h=2$. Fig. 5c shows that the populations of the NSFD system \eqref{model in discrete system} remain stable even for $h=2.67$, indicating its dynamic consistency with the continuous system, whereas Fig. 5d shows that the populations of the Euler system \eqref{model in Euler system1} oscillate for $h=2.67$, indicating dynamic inconsistency with its continuous counterpart. \section{Summary} The nonstandard finite difference (NSFD) scheme has attracted much attention in the last few years, mainly for two reasons. First, it generally does not show the spurious behavior exhibited by other standard finite difference methods; second, the dynamics of the NSFD model does not depend on the step-size. The NSFD scheme can also reduce the computational cost relative to traditional finite--difference schemes. In this work, we have studied two discrete systems, constructed by the NSFD scheme and the forward Euler scheme, for a well-studied two--dimensional Holling-Tanner type predator--prey system with ratio-dependent functional response.
We have shown that the dynamics of the discrete system formulated by the NSFD scheme are the same as those of the continuous system. It preserves the local stability of the fixed points and the positivity of the solutions of the continuous system for any step size. Simulation experiments show that the NSFD system always converges to the correct steady--state solutions for arbitrarily large values of the step size ($h$), in accordance with the theoretical results. However, the discrete model formulated by the forward Euler method does not show dynamic consistency with its continuous counterpart. Rather, it shows scheme--dependent instability when the step--size restriction is violated. \end{document}
Mechanisms and mediation in survival analysis: towards an integrated analytical framework

Jonathan Pratschke, Trutz Haase, Harry Comber, Linda Sharp, Marianna de Camargo Cancela & Howard Johnson

BMC Medical Research Methodology volume 16, Article number: 27 (2016)

A wide-ranging debate has taken place in recent years on mediation analysis and causal modelling, raising profound theoretical, philosophical and methodological questions. The authors build on the results of these discussions to work towards an integrated approach to the analysis of research questions that situate survival outcomes in relation to complex causal pathways with multiple mediators. The background to this contribution is the increasingly urgent need for policy-relevant research on the nature of inequalities in health and healthcare. The authors begin by summarising debates on causal inference, mediated effects and statistical models, showing that these three strands of research have powerful synergies. They review a range of approaches which seek to extend existing survival models to obtain valid estimates of mediation effects. They then argue for an alternative strategy, which involves integrating survival outcomes within Structural Equation Models via the discrete-time survival model. This approach can provide an integrated framework for studying mediation effects in relation to survival outcomes, an issue of great relevance in applied health research. The authors provide an example of how these techniques can be used to explore whether the social class position of patients has a significant indirect effect on the hazard of death from colon cancer. The results suggest that the indirect effects of social class on survival are substantial and negative (-0.23 overall). In addition to the substantial direct effect of this variable (-0.60), its indirect effects account for more than one quarter of the total effect.
The two main pathways for this indirect effect, via emergency admission (-0.12) and via hospital caseload (-0.10), are of similar size. The discrete-time survival model provides an attractive way of integrating time-to-event data within the field of Structural Equation Modelling. The authors demonstrate the efficacy of this approach in identifying complex causal pathways that mediate the effects of a socio-economic baseline covariate on the hazard of death from colon cancer. The results show that this approach has the potential to shed light on a class of research questions which is of particular relevance in health research.

A wide-ranging debate has taken place in recent years on mediation analysis and causal modelling [1–9]. This debate has involved many different fields and raised profound questions about the status of scientific explanations, statistical theory and research methodology when making causal inferences. In this paper, we build on this discussion to outline an integrated approach to the analysis of research questions that situate survival outcomes in relation to complex causal pathways. There are good reasons for pursuing this goal, as researchers are increasingly seeking to shed light on the "mechanisms" that generate survival outcomes by exploring mediated effects. As Aalen et al. [1] observe, "In other areas [outside Psychology and Social Science] mediation analysis has largely been ignored. This is especially so for situations where time plays a central role, as in survival analysis. In view of the importance of survival analysis in medicine and other areas, it is surprising that not more attention has gone into the issue of mediation."
The background to this contribution is the increasingly urgent need for policy-relevant research on the nature and form of social inequalities in relation to health and health care, as interventions to promote population health and to improve equity rest on causal interpretations of the determinants of health-related outcomes, however incomplete or flawed these may be [10]. At the same time, and despite the enormous progress that has been made in each of the aforementioned areas, health research still lacks an integrated framework for causal modelling that incorporates survival outcomes together with such desirable features as (a) latent variables, (b) time-varying covariates, (c) complex pathways and (d) support for causal inferences in relation to direct and indirect effects. We will begin by briefly summarising recent debates on causal inference, mediated effects and statistical models. We will show that these three strands of research have powerful synergies which can be exploited by bringing them together within an appropriate analytical framework. We will then present an illustrative example using survival data for a sample of patients diagnosed with colon cancer in the Republic of Ireland between 2004 and 2008. We will assess whether social class (measured by a proxy variable) exerts statistically-significant direct and indirect causal effects on survival prospects. We are particularly interested in assessing whether the influence of this socio-economic baseline covariate is mediated by the route of admission to, or by the caseload of, the hospital where the main treatment was received. These indirect pathways are of great relevance from a policy-making perspective, as they have the potential to shed light on the mechanisms that (re)produce social inequalities in health outcomes.

Mediation effects

The study of mediation raises complex issues, although the basic structure of such effects is simple.
By including mediators in a regression equation, the coefficients for other variables in the model may change or become statistically or substantively non-significant. In this way, mediation effects can mask the influence of certain variables and impede a full appreciation of their role in determining outcomes. Conversely, the appropriate specification of such effects can provide practitioners and policy-makers with richer information on disparities in access to health and health care. Mediation analysis has stimulated interest amongst health researchers due to its potential to provide answers to a series of important research questions and due to dissatisfaction with the methods and approaches which have tended to dominate health research [4]. The latter have recently been called into question, primarily due to their tendency to focus on empirical associations ("black-box epidemiology") and consequent failure to develop plausible explanations [3, 11]. As a possible solution to this problem, "mechanisms" have been contrasted with "black boxes". The aim of applied research, it is argued, should be to develop increasingly sophisticated accounts of the systemic relationships and processes that generate empirical regularities [12]. In this vein, mediation analysis can inform intervention strategies, identify "active ingredients" and suggest strategic sites for action. Where the mediator and outcome are single, continuous and observed, multiple-equation techniques for studying mediation are frequently used, building on Baron and Kenny's influential approach [13, 14]. As these have been widely discussed, we will merely note that this technique relies on a series of linear regression models and enables the researcher to assess whether a single variable may be said to mediate between a covariate and the outcome [15].
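The product-of-coefficients logic behind the Baron-Kenny approach can be made concrete with a minimal numerical sketch on simulated data (all names and values here are illustrative, not from the study): regress the mediator on the exposure, regress the outcome on both, and multiply the two relevant slopes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
a, b, c_dir = 0.5, 0.8, 0.3          # true paths: X -> M (a), M -> Y (b), direct X -> Y (c_dir)
X = rng.normal(size=n)
M = a * X + rng.normal(scale=0.1, size=n)
Y = c_dir * X + b * M + rng.normal(scale=0.1, size=n)

def ols(y, *cols):
    # least-squares slopes (intercept included, then dropped)
    Z = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(Z, y, rcond=None)[0][1:]

(a_hat,) = ols(M, X)            # step 1: mediator on exposure
c_hat, b_hat = ols(Y, X, M)     # step 2: outcome on exposure and mediator
(total,) = ols(Y, X)            # total effect from the reduced model
indirect = a_hat * b_hat        # product-of-coefficients indirect effect
```

In this all-linear setting the decomposition is exact: the total-effect slope equals the direct slope plus the product of the two mediation slopes, which is precisely the property that breaks down for non-linear models such as the Cox model, discussed below.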
Although these techniques have been applied countless times, they are of limited use if either the mediator or the outcome is categorical or ordinal (or represents the time to an event), or if more complex forms of mediation are involved [5, 8]. These limitations have discouraged health researchers from exploring mediation effects, partly due to the fact that non-linear models like the Cox model make it difficult to estimate indirect effects [16].

Causal inference

Causality has become a major issue in Statistics in recent years [1]. The "traditional" statistical approach to the analysis of direct effects involved conditioning on a mediating variable. Aware that this does not rest on a rigorous definition of causality, Robins and Greenland [17] and Pearl [18] developed alternative formulations. The "causal inference" literature which subsequently developed relies on the counterfactual theory of causality proposed by Rubin [19]. Judea Pearl, an influential scholar in this area, contributed to the new-found popularity of causal questions amongst statisticians by combining Rubin's approach with the theory of non-parametric Structural Equation Models. Other authors have used similar techniques to clarify the necessary and sufficient conditions for making causal inferences about mediation effects [6, 20]. Within this literature, causal inference focuses on four different kinds of effects: the total effect, the "controlled" direct effect (based on the idea of holding the mediating variables fixed by setting their values to a constant by some kind of intervention), the "natural" direct effect (where the treatment is set at a given level and we compare outcomes without fixing the mediators to a constant, but allowing them to assume the "natural" levels that they would have taken in the absence of the treatment) and the "natural" indirect effect (where the direct effect is disabled and we focus on the effect transmitted by the mediator).
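Written out in the standard counterfactual notation (our addition; the text describes these quantities only verbally), with $Y(x,m)$ the outcome under exposure level $x$ and mediator level $m$, and $M(x)$ the mediator under exposure $x$, the four effects are:

```latex
\begin{align*}
\mathrm{TE}     &= E[\,Y(1) - Y(0)\,]\\
\mathrm{CDE}(m) &= E[\,Y(1,m) - Y(0,m)\,]\\
\mathrm{NDE}    &= E[\,Y(1,M(0)) - Y(0,M(0))\,]\\
\mathrm{NIE}    &= E[\,Y(1,M(1)) - Y(1,M(0))\,]
\end{align*}
```

so that, in this (total-indirect) decomposition, $\mathrm{TE} = \mathrm{NDE} + \mathrm{NIE}$.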
Pearl, in a recent paper [10], clarified some of the issues at stake when making causal inferences about mediation effects using statistical models. Firstly, he argues that indirect effects should not be treated as artefacts or nuisance parameters, but as "an intrinsic property of reality that has tangible policy implications". Secondly, he argues that it is possible to define direct and indirect effects within a general, causal approach that does not require particular distributional assumptions. Thirdly, he shows that the assumptions required by causal mediation analysis are essentially analogous to those that apply to causal models more generally: no confounding due to unmeasured common causes. Fourthly, he demonstrates that the total effect, natural direct effect and natural indirect effect are identified for linear Structural Equation Models as long as the aforementioned assumptions are satisfied, and can be estimated in a straightforward way from the estimated coefficients. Finally, Pearl considers such models to be potentially useful despite their reliance on assumptions which cannot be tested explicitly. This raises interesting questions about the relationship between statistical models, generative mechanisms and causality, questions which hinge on a fundamental paradox. Although statistical models can permit valid inferences about causal mechanisms under certain conditions, the very nature of these models implies that these conditions will rarely, if ever, be (fully) satisfied. After all, reality is infinitely complex, whilst models provide relatively simple, stylised representations, and researchers can never be certain that they have included all relevant confounders. One way of tackling this paradox is to embed it within the process of scientific discovery.
The plausibility of models is assessed by the scientific community using prevailing criteria and techniques, which either reinforces or undermines the conviction that a model captures the essence of a really-existing mechanism. If a model omits an important confounder, the onus is on other researchers to demonstrate that alternative specifications yield different conclusions. In other words, it is not sufficient to appeal to the possibility of misspecification or omission (which applies to all models); this must be substantiated explicitly. The impact of model misspecification depends on the strength of the effects associated with the omitted variables or paths, which implies that once substantively-important covariates have been included in a model, the omission of less important effects will, ceteris paribus, have a weaker influence on the model. Rather than seeking a warrant for making absolute claims, we would suggest that the aim of causal models is to clarify important relationships and pathways and to contribute to the development of mechanism-based explanations.

Statistical models for mediation analysis

In an attempt to overcome the limitations of existing approaches to mediation analysis, researchers have sought to extend the Baron-Kenny approach to survival outcomes by applying it directly to Cox models [21, 22]. This technique is known to yield biased results, however, and has met with forceful criticism in the scientific literature, as summarised by Lange and Hansen [23]:

Most importantly, the observed changes in hazard ratios cannot be given a causal interpretation. In addition, the important assumption of proportional hazards can never be satisfied for both models with and without the mediator. In other words, it is not mathematically consistent to use a Cox model both with and without a potential mediator (mathematically, this is due to the fact that the class of proportional hazard models is not closed under marginalization).
As a result of these difficulties, researchers have concentrated their efforts on extending survival models in different ways. One such approach uses "marginal" models and focuses on obtaining causally-valid inferences for single mediation effects using standard survival models [24]. Another approach – known as "marginal structural modelling" – can be used to identify the causal effect of time-dependent exposures while controlling for time-dependent confounders which are also affected by the exposure [25]. These models use inverse probability of treatment weights and inverse probability of censoring weights to create a pseudo-population in which treatment is un-confounded by subject-specific characteristics or censoring [26]. The models are therefore designed to remove confounding due to a specific type of mediation effect, rather than to study mediation effects more generally. The independent variable of interest has to be dichotomous, and the integration of these models with survival outcomes is limited. The third approach uses Dynamic Path Analysis, developed by Fosen et al. [27] using Aalen's additive hazards model, as "an extension of classical path analysis to a time-continuous survival setting where path effects are estimated as a function of time" [16]. Lange and Hansen [23] suggest that this approach has weaknesses when used to study mediation, as it cannot sustain causal interpretations and cannot be implemented using standard software. Their recommendation is to adapt the additive hazards model in a different way to calculate the counterfactual rate difference, which represents the number of deaths that can be attributed to mediation through the mediator, compared with those that can be attributed to the direct path. Martinussen and Vansteelandt [28] also use the Aalen additive hazards model to adjust survival models for confounding in a similar way. These approaches seek to extend existing survival models to obtain valid estimates of causal effects.
As a consequence, they encounter constraints on the number and kinds of variables that can be analysed, and more complex causal mechanisms typically cannot be assessed. An alternative strategy is to integrate survival outcomes within Structural Equation Models, as the latter already include specifications such as growth curves, multilevel structures, latent variables, latent classes and multiple outcome variables [29]. Iacobucci [5] offers a general motivation for this strategy:

Mediation models have also been generalized to allow for nomological networks that are richer than just the three central constructs, X, M, and Y. If there are additional predictors or consequences of any of these, Structural Equation Models are superior (i.e., mathematically statistically optimal given their smaller standard errors), substantively to get a better sense of the bigger theoretical picture, and statistically because the focal associations will be estimated more purely, having other effects partialed out and statistically controlled…

We favour this strategy, which seeks to integrate survival outcomes within a Structural Equation Model, not least because the latter has come to be seen as the most appropriate methodological framework for carrying out mediation analysis more generally [10, 30–33]. The nature of survival models has, for a long time, appeared to exclude this possibility [5]. We will show in the next section how this challenge may be tackled, preparing the ground for an integrated framework.

Structural equation modelling

There is an intuitively appealing way of integrating time-to-event data within Structural Equation Models. The idea of using a linear specification of the hazard function based on discrete-time modelling techniques was proposed more than 20 years ago, and Singer and Willett [34] showed that this model could be estimated using the tools of traditional logistic regression analysis.
Muthén and colleagues subsequently integrated the discrete-time survival model within the MPlus program [35, 36]. This approach – which will be described in greater detail below – makes it possible to estimate complex discrete-time survival models using existing software. It is possible, for example, to relate survival outcomes to other kinds of data structures and to develop models which more accurately reflect real-world mechanisms:

Discrete-time models have the strength that they can easily accommodate time-varying covariates. They also do not require a hazard-related proportionality assumption that is commonly used in continuous-time survival analysis, for example, the Cox proportional hazards model. In addition, these models easily allow for unstructured as well as structured estimation of the hazard function at each discrete time point. [35]

This conceptual shift – from continuous to discrete time, and from a single equation to a Structural Equation Model – permits the kind of integration of methods that is required for mediation analysis to yield its full potential in health research. Amongst the benefits of this approach are that it encourages researchers to formulate and test more comprehensive hypotheses and to develop more ambitious theories regarding generative mechanisms. The notion of developing and testing mechanism-based accounts of the world involves a metaphorical mapping which is highly effective in this context. One way of understanding this concept is to situate it, once again, within the process of scientific discovery, whereby a little-understood association may be replaced, over time, by a more detailed explanation. This process gives rise to a constant revision of explanations, accompanied by new and more powerful accounts which articulate the relationship between processes situated at different levels. We argue that the central aim of scientific research is to provide an increasingly accurate or powerful account of these "mechanisms".
The mechanism-based approach can be applied effectively to the development of statistical models. Models offer a stylised representation of real-world mechanisms; by interpreting the results of statistical models, we can make substantiated claims about the ways in which these mechanisms work. In fact, "direct" and "indirect" effects always relate to a specific theory/model, as "typically, there are other (unmeasured) intermediate variables that would mediate the direct effect" [3]. Indeed, every direct effect in a statistical model may be treated as a "black box", and replaced (over time) by a more complex set of direct and indirect effects. It is the substantive focus of each research project that ultimately decides which black boxes should be opened (simultaneously creating new black boxes).

The discrete-time survival model

In discrete time, h_j denotes the probability that an individual experiences a non-repeatable event during time period j, given that he or she did not experience it during previous periods:

$$ h_j = P(T = j \mid T \ge j) $$

where T is a discrete random variable that indicates the time period in which the event occurs. The most important aspect of the model, which underwrites its elegance, is that by conditioning on successive periods the statistical theory is simplified [37]. As a consequence, the joint density function for the various time intervals (e.g. T_1, T_2, T_3) can always be written as the product of the marginal distribution of T_1, the conditional distribution of T_2 given T_1 and the conditional distribution of T_3 given T_1 and T_2. As in other survival models, the survival probability, which expresses the probability of not experiencing the event, can be expressed in terms of the hazard:

$$ S_j = \prod_{k=1}^{j} \left(1 - h_k\right) $$

where h_k indicates the hazard probability for each time period up to and including j, the period in which the event was observed.
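To make the hazard–survival relationship concrete, the product formula above can be sketched in a few lines of Python; the quarterly hazard values below are hypothetical, chosen purely for illustration:

```python
# Discrete-time survival: S_j = prod_{k=1}^{j} (1 - h_k).
# Hypothetical per-period hazard probabilities h_k (one value per quarter).
hazards = [0.05, 0.04, 0.04, 0.03, 0.03]

def survival_probabilities(h):
    """Cumulative survival S_j for each period j, from per-period hazards."""
    s, out = 1.0, []
    for hk in h:
        s *= (1.0 - hk)   # multiply in the conditional survival for period k
        out.append(s)
    return out

S = survival_probabilities(hazards)
# S[0] is 1 - h_1 = 0.95; each later S_j shrinks monotonically.
```

The conditioning on successive periods is what makes this a simple running product rather than an integral, which is the simplification referred to in the text.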
A log-odds relationship is often specified between the individual hazards and the covariates [34, 35]. If we assume that z_ij is a p × 1 vector of values for a set of time-varying covariates (z_1, …, z_p), measured for individual i in time period j, and that x_i is a q × 1 vector of values for a set of time-invariant covariates (x_1, …, x_q), then the hazard can be related to the covariates using the following logistic function:

$$ h_{ij} = \frac{1}{1 + e^{-\mathrm{logit}_{ij}}} $$

$$ \mathrm{logit}_{ij} = \beta_j + \boldsymbol{\kappa}'_{zj} \mathbf{z}_{ij} + \boldsymbol{\kappa}'_{xj} \mathbf{x}_i $$

where κ_zj is a logit parameter vector for the time-varying covariates and κ_xj is a logit parameter vector for the time-invariant covariates, both of which can vary across the J time periods [35]. The resulting coefficients can be antilogged and interpreted as odds ratios in the usual way. Both continuous and categorical covariates can be included. If we drop the j subscript from κ_zj and/or κ_xj, the effects of the covariates are assumed to be equal across time periods, yielding the proportional hazard odds model. The inverse logit of β_j is the hazard probability for time period j when z_j = 0 and x = 0, which gives the baseline hazard. A constant baseline hazard probability model can be obtained by setting β_j = β for all j = 1, …, J or, alternatively, a piecewise or parametric baseline hazard function can be specified. In general terms, therefore, the conditional log-odds that an event will occur in a given time period, given that it did not occur in previous periods, is modelled as a linear function of a constant term (which may or may not be specific to the period) and the values assumed by a set of explanatory variables (which may or may not vary over time), multiplied by a set of appropriate slopes (which may or may not vary across time periods).
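As a minimal sketch of the logistic link in the two equations above (all coefficient values here are made up for illustration, not estimates from the model):

```python
import math

def hazard(beta_j, kappa_z, z_ij, kappa_x, x_i):
    """Discrete-time hazard h_ij via the logit link:
    logit_ij = beta_j + kappa_z' z_ij + kappa_x' x_i."""
    logit = beta_j \
        + sum(k * v for k, v in zip(kappa_z, z_ij)) \
        + sum(k * v for k, v in zip(kappa_x, x_i))
    return 1.0 / (1.0 + math.exp(-logit))

# With all covariates at zero, the inverse logit of beta_j gives the
# baseline hazard for period j, exactly as described in the text.
baseline = hazard(-3.0, [], [], [], [])

# A positive coefficient on a covariate raises the hazard above baseline:
raised = hazard(-3.0, [0.5], [1.0], [], [])
```

Dropping the j subscript on the coefficient vectors (i.e. reusing the same `kappa_z` and `kappa_x` for every period) reproduces the proportional hazard odds model in code.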
To specify the model, we define a J × 1 vector u of binary variables, for which we imagine a set of underlying continuous latent response propensities, u_i* = (u_i1*, u_i2*, …, u_iJ*)', whereby the latent u_ij* are related to the observed u_ij via a threshold parameter τ_j. This is identical to the derivation of logistic regression via the "latent response" formulation. The higher the threshold τ, the higher u* needs to be to exceed it and the lower the probability of u = 1. The threshold parameter is related to the intercept by the equation β_j = −τ_j. The binary u_ij = 0 if individual i is observed to be at risk for the event of interest for the whole of time period j but does not experience it, u_ij = 1 if individual i experiences the event in time period j, and u_ij is missing if individual i has already experienced the event or is lost to follow-up (i.e. right-censored). The fact that an individual does not have observations on u after experiencing the event or dropping out is handled as missing data, and the conventional assumption of "non-informative censoring" must be made (as in other survival models). The Maximum Likelihood estimator is constructed as a product of terms which coincide with each period up to the last one for which data were recorded, assuming that the n individuals composing the sample are independent given the covariates [34, 35]. Expressing the hazard probabilities as a function of the observed covariates using the logit link function is equivalent to the logistic regression of the u_i on the observed covariates [34]. This dependence on the explanatory variables is what introduces heterogeneity and accounts for inter-individual differences in hazard probabilities, yielding a proportional shift in the baseline hazard profile if the coefficients are assumed to be equal (proportional hazards model).
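The coding rule for the binary indicators can be illustrated with a small helper that expands conventional (time, event) survival records into the person-period format; this is a sketch in our own notation, not MPlus syntax:

```python
def person_period(records, J):
    """Expand (time, event) pairs into u_i1 .. u_iJ indicators.
    u_ij = 0:    at risk throughout period j without the event;
    u_ij = 1:    the event occurs in period j;
    u_ij = None: missing -- after the event or after censoring."""
    rows = []
    for t, event in records:
        u = []
        for j in range(1, J + 1):
            if j < t:
                u.append(0)            # at risk, no event yet
            elif j == t and event:
                u.append(1)            # event occurs in period t
            elif j == t:
                u.append(0)            # censored: observed through period t
            else:
                u.append(None)         # no longer observed: missing data
        rows.append(u)
    return rows

# One patient who dies in quarter 3, one censored after quarter 2 (J = 5):
rows = person_period([(3, True), (2, False)], 5)
# rows[0] == [0, 0, 1, None, None]
# rows[1] == [0, 0, None, None, None]
```

The trailing `None` values are exactly the missing data referred to in the text, which the Maximum Likelihood estimator handles under the non-informative censoring assumption.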
The discrete-time survival model with unstructured hazard probabilities but without covariates is always saturated, and thus fits the u variables perfectly. In the MPlus program, the proportional hazards discrete time survival model may be specified either by placing equality constraints on the coefficients for the logistic regression of each u variable on each explanatory variable or by creating a latent variable with a variance of 0 and a unit path to each u. By relaxing the constraints on these paths, it is possible to test the proportionality assumption. Combinations of discrete-time survival models and other structural equation models (such as latent curve models, for example) can be used in a flexible way to address a wide range of research questions [38]. A final characteristic of the discrete-time survival model that is worth noting is that its estimates converge on those provided by the Cox continuous-time model as the definition of the time periods becomes increasingly fine-grained [39]. The discrete-time model can be justified not only when time of observation is inherently discrete, but also when it is measured continuously and subsequently transformed into discrete intervals. This provides a useful bridge between the two techniques for purposes of comparison. We will now provide an illustrative example of the approach outlined above. Our statistical model is based on discrete time-to-event data for death due to cancer of the colon, with time measured in quarters. All cases of adenocarcinoma of colon (ICD10 C18) registered by the Irish National Cancer Registry as incident during the years 2004–2008 are included. This implies a minimum of 13 and a maximum of 32 time intervals, which we truncate at 24, as the number of deaths per quarter is negligible after this point. An unstructured baseline hazard profile is adopted, for simplicity. 
Registry data were linked to public hospital discharge data from the Hospital Inpatient Enquiry (HIPE) for all patients admitted to public hospitals [40]. Active tumour-directed treatment is defined as excisional biopsy, surgery, chemotherapy or radiotherapy with a primary aim of removing or reducing the tumour. The type of initial admission (scheduled or emergency) was determined from the HIPE data, and information on patient age and tumour stage (AJCC) was derived from the Registry. Social class is measured by a proxy variable: the small-area affluence/deprivation score of the patient's neighbourhood of residence using the Haase-Pratschke index of relative affluence and deprivation [41]. Scores on this index are based on 2006 census data (using Small Areas with an average population ~230 persons) and matched to the individual-level data provided by the National Cancer Registry of Ireland by geo-coding patients' addresses. Treatment is classified as either sub-optimal (less intensive treatment, or fewer modalities than recommended) or optimal/more aggressive (treatment according to guidelines or using additional modalities) by comparison with the recommendations of the National Comprehensive Cancer Network [42]. High caseload for the main hospital was defined as more than 40 colon cancer patients per annum, on average, during the study period. Registry data were also linked to official death certificates from the Central Statistics Office. Deaths were classified as either due to colon cancer or other causes, based on an algorithm developed by the Scottish Cancer Registry [43]. All treatments were recorded for the first 12 months following diagnosis, and patients who received no treatment were excluded from the analysis, as these typically involve cases where cancer was diagnosed either post-mortem or immediately prior to death. 
Patients were followed until death or censoring at the end of the study (31 December 2011), and those who died from other causes were also treated as censored observations. All explanatory variables included in the model are time-invariant. Of 6347 colon carcinomas incident in 2004–2008 in patients who did not develop a second primary cancer prior to 31/12/2011, 5178 (81 %) had at least one episode of tumour-directed treatment and 4793 patients (93 %) received cancer-directed surgery. Just over half (55 %) of patients were male and 52 % were aged 70 or over. The majority (60 %) were married and most (63 %) attended hospital solely or predominantly as public patients. Almost half (46 %) of cancers were at stage I or II at diagnosis, and 85 % were of low or intermediate grade. More than three quarters (78 %) of patients had no recorded comorbid conditions and just over one fifth (22 %) were admitted as an emergency. Treatment was classified as optimal (or more aggressive) in 81 % of cases, but only 56 % of patients attending low-caseload hospitals fall into this category. We coded the survival outcome variables so that patients enter the study at the moment of diagnosis, with staging data providing a proxy for onset of illness (and therefore early/late diagnosis). We include only a small set of baseline covariates (see Table 1 below) to simplify the presentation and due to space considerations; a more fully-specified model will be presented in a separate paper.

Table 1 Variables included in the model (N = 5178)

Model specification

In the example analysis, we use a causal modelling approach to explore whether the social class position of patients has a significant direct and/or indirect effect on the hazard of (cause-specific) death from colon cancer, mediated by route of admission to hospital (elective or emergency) and/or the caseload of the hospital where the main treatment was received.
We hypothesise that age and social class influence the route of admission to hospital, as older and more disadvantaged patients are more likely to be admitted as emergency cases. We further hypothesise that access to a high-caseload hospital will depend on age, affluence and route of admission: not only do we expect that older and more disadvantaged patients have a lower probability of accessing high-caseload hospitals, but we also believe that this applies to those who enter hospital on an emergency basis. The causal order encoded by the model is based on logical/theoretical criteria as well as chronological order, and we assume no effect modification. The direct and indirect influences are shown in Fig. 1 below, using the typical conventions of path models, where observed variables are represented by rectangles, latent variables by circles, direct effects by straight arrows which point from cause to effect and residuals by straight arrows pointing at the dependent variable in a regression equation. All covariances are omitted from the diagram, but included for pairs of exogenous variables where direct effects were not specified. The direct and indirect pathways relating to social class are highlighted by thicker arrows in the figure. The upper part of the figure (including the latent hazard and survival indicators) coincides with the model defined in Equation 3, albeit with time-invariant covariates and constant effects (i.e. logit_ij = β_j + κ'_x x_i). The latent variable shown in the figure (labelled "Latent hazard") merely simplifies the presentation, as the direct effect of each explanatory variable on the survival outcome can be identified with a single path. This specification is exactly equivalent to one in which each explanatory variable has an effect on each of the 24 discrete-time survival indicators, with these 24 effects being constrained to be equal.
Fig. 1 Discrete-time survival model for colon cancer with mediation effects

The coefficients from the logit regression of the survival outcome on the covariates may be interpreted as linear regression coefficients using the threshold approach, as mentioned earlier [44, 45]. It would be attractive to adopt the same procedure for the two mediating variables – admission route and high caseload – which are both binary. Unfortunately, this is not possible in MPlus, which can only handle continuous mediators in models with discrete-time survival outcomes. We therefore use the linear probability model for the equations in which these two mediators are the dependent variables; this procedure is sub-optimal but nevertheless reasonable as the distributions of these variables are relatively balanced [46]. As the entire model is linear (because the discrete-time survival part of the model may be interpreted in terms of a linear regression using the latent response formulation), the indirect effects may be estimated using the product-of-coefficients approach and represent "natural indirect effects". The direct effects in the statistical model are equivalent to "natural direct effects", whilst the total effect is given by the sum of the direct and indirect effects [10]. The standard errors for the indirect effects are estimated using the delta method and the model is estimated using MPlus v5.21, with a Maximum Likelihood estimator and robust standard errors [36]. The code used to specify the model is included in Appendix A. The size of the mediation effects is reported below, both in absolute terms and as a mediation proportion, with standard errors and confidence intervals [32]. The latent response variables underlying the survival indicators have a mean of 0 and a standard deviation of 1 and thus the raw coefficients may be interpreted as capturing the effect, measured in standard deviations, of a unit change in the explanatory variables; the units of the latter are shown in Table 1.
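The product-of-coefficients calculation and its delta-method (Sobel-type) standard error can be sketched as follows; the coefficient and standard-error values below are hypothetical inputs for illustration, not the estimates produced by MPlus:

```python
import math

def indirect_effect(a, se_a, b, se_b):
    """Natural indirect effect a*b, where a is the exposure -> mediator
    path and b the mediator -> outcome path, with a first-order
    delta-method standard error for the product."""
    est = a * b
    se = math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    return est, se

# Hypothetical example: a = -0.30 (affluence -> emergency admission),
# b = 0.41 (emergency admission -> latent hazard); SEs are invented.
est, se = indirect_effect(-0.30, 0.04, 0.41, 0.05)
```

Because the discrete-time survival part of the model is linear under the latent response formulation, this simple product is a valid estimator of the natural indirect effect, and the total effect is recovered by adding the direct effect to the sum of such products.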
The results of the analysis are shown in Tables 2 and 3 below and the number of patients who were alive in each quarter, from diagnosis, is shown in Fig. 2. As noted above, all variables were assumed to have a constant effect over time, and this assumption is encoded in the unit paths described earlier (Fig. 1), specified between the discrete-time survival indicators and the latent hazard.

Table 2 Direct effects on hazard of death, admission route and caseload

Table 3 Effects of affluence/deprivation on hazard of death

Fig. 2 Patients who remain exposed to hazard of death, by quarter from diagnosis

Starting with the hazard of death due to colon cancer, the model indicates that an increase in age of 10 years leads to an increase in the hazard of 0.30 standard deviations, whilst moving along the spectrum of affluence and deprivation from the most deprived to the most affluent patient leads to a substantial reduction in the hazard (-0.60 standard deviations). Entering hospital for the first time via the emergency department leads to an increase in the hazard of 0.41 standard deviations, and tumour stage has an even greater impact (0.64 for Stage II compared to Stage I, 1.31 for Stage III and 3.19 for Stage IV). Optimal treatment reduces the hazard considerably (-0.77 standard deviations), as does attending a hospital with high caseload for the treatment of colon cancer (-0.17 standard deviations). There is no residual variance and the sample is assumed to be homogeneous (i.e. no "frailty", no latent classes), in line with standard practice in basic discrete-time survival modelling. All heterogeneity in hazard profiles thus derives from the effects of the explanatory variables, as noted above.
Turning to the admission route, each ten-year increase in age leads to an increase of 0.02 standard deviations in the probability of an emergency admission, whilst the most affluent patients have a lower risk of entering hospital in an emergency when compared with the most deprived (-0.30). As far as hospital caseload is concerned, affluence has a powerful impact (0.58) on the probability of receiving treatment in a high-caseload hospital. Being an emergency case at admission reduces this probability (-0.06), and all effects are statistically significant, with the exception of the effect of age on high caseload. As can be seen from Table 3, the indirect effects of social class (as measured by affluence/deprivation score) are substantial and negative (-0.23 overall). This implies that, in addition to the substantial direct effect of this variable (-0.60), there are indirect effects that account for more than one quarter of the total effect. Whilst the standard error of the direct effect is relatively large, those of the indirect effects are small. This is because the model has high power to detect indirect effects. The two main indirect effects, via emergency admission (-0.12) and via hospital caseload (-0.10) are of similar size. The empirical example presented in the previous section demonstrates the flexibility of the causal modelling framework set out earlier and shows its potential in relation to the study of mediation effects. As a result of the way in which causal models bring together theoretical knowledge and empirical evidence, they have the potential to sustain ongoing research programs which yield progressively more refined and powerful explanations. The indirect effects of social class were shown to be substantial in size and statistically significant, accounting for roughly one quarter of the total effect. In a more fully-specified model with a full set of covariates, this proportion is likely to increase. 
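The mediation proportion referred to above is simply the share of the total effect transmitted through the mediators; with the rounded estimates quoted in the text (direct effect -0.60, indirect effects -0.12 and -0.10) it can be recovered as:

```python
def mediation_proportion(direct, indirect_effects):
    """Indirect share of the total effect (direct + sum of indirects)."""
    indirect = sum(indirect_effects)
    return indirect / (direct + indirect)

# Direct effect of affluence/deprivation on the hazard: -0.60.
# Indirect effects via emergency admission (-0.12) and caseload (-0.10),
# which sum to roughly the -0.23 overall figure quoted (with rounding).
p = mediation_proportion(-0.60, [-0.12, -0.10])
# p is about 0.27, i.e. "more than one quarter" of the total effect
```

Because the direct and indirect effects here have the same sign, the proportion is well defined; when effects of opposing sign partially cancel, this ratio needs more careful interpretation.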
By improving measurement instruments, including new covariates and modifying the structure of a model such as this, it is possible to provide more appropriate and precise information to practitioners and policy-makers and to sustain an ongoing dialogue regarding mechanisms, possible interventions and monitoring strategies. Rather than merely replicating tests of association between specific variables in an endless series of samples, this approach encourages the progressive enrichment and extension of explanatory models. The model presented here shows that survival outcomes can be integrated within the framework of causal modelling using the linear specification of the discrete-time survival model. Although the model is simple, it provides valuable additional information compared to alternative approaches to modelling survival. It confirms that there is a risk of underestimating the overall impact of social class on health outcomes when attention is confined to direct effects. As noted earlier, this is because intermediate variables must be included in order to obtain accurate estimates and to assess the influence of treatments and interventions, but their inclusion tends to mask the effects of key baseline covariates. Given the large standard error associated with the direct effect, the influence of social class could easily be overlooked, particularly when working with small samples. Secondly, the analysis opens up interesting avenues for intervention strategies by providing a better understanding of how social class differentials in health outcomes are generated. The model suggests that differences in wealth, knowledge and influence (captured by social class) enable advantaged individuals to seek professional assistance before a problem becomes acute, whilst those who are more disadvantaged encounter greater difficulties in seeking and/or receiving assistance following initial symptoms. 
As a result, more affluent groups are able to obtain better information about their condition and to decide where to receive treatment, using their resources to choose experienced consultants and to reduce waiting times. What was previously a "black box" is now a potential mechanism, which can be refined and extended in different ways in the course of subsequent research. The approach set out above has a number of strengths, not least because survival data are themselves frequently based on a discrete conception of time (measured in weeks, months or years). From this perspective, the discrete-time approach offers an intuitively compelling framework that is appropriate to many research problems, although it has rarely been adopted by health researchers. For example, the path-breaking paper by Muthén and Masyn [35] has been cited 127 times (Google Scholar, January 2015), mostly by Psychologists, with only two citations in the broad field of medical research. Many studies suggest that socio-economic variables have a profound influence on health, although the precise pathways through which these effects operate remain unclear [4, 32]. When studying health outcomes, it is often necessary to control for variables such as these. However, once we control for the stage of illness, types of treatment and so on, we may find that these socio-economic measures no longer have significant effects. This is not a problem if we are merely concerned with predicting the outcome, but it could be misleading to base policies on these kinds of findings. It is quite possible, for example, that socio-economic covariates have an influence on intermediate health-related and treatment-related variables, implying that they have indirect effects on the outcome. This is a good example of a research problem that requires sophisticated techniques for conducting mediation analysis within an extended nomological network (set of variables and paths).
It is only appropriate to conclude by mentioning some limitations to this analytical framework. Firstly, as we noted above, the statistical theory and software tools for causal mediation analysis with survival outcomes are currently confined to continuous mediators. In our example, we used the linear probability model to regress the binary mediators on the baseline covariates. Secondly, the calculation of indirect effects by the product-of-coefficients method with a survival outcome relies on the latent response formulation (for the regression of the survival indicators on the explanatory variables). Although leading methodologists view this as a valid extension of mediation analysis (see, for example, the responses provided by Linda and Bengt Muthén on the MPlus Discussion Board on December 14 2005, February 09 2010, July 26 2006 and August 18 2008, http://www.statmodel.com/cgi-bin/discus/discus.cgi), a more rigorous statistical justification for this approach would be valuable. Thirdly, in a fully-specified model, the survival part of the model must be carefully assessed and the proportional hazards assumption tested. In our example, we merely assume proportionality in order to simplify the presentation. Fourthly, measurement error in the covariates and mediators can lead to biased estimates, which means that the inclusion of latent variables in this part of the model can improve the accuracy and reliability of inferences. Finally, it is important to be aware that the assumptions required in order to make causal claims based on the results of this kind of statistical model are challenging. As Judea Pearl has argued, the most important assumptions relate to the absence of confounding of each relationship that forms part of the mediation structure. 
In our (simple) example, we assume that there are no (significant) unmeasured common causes of affluence/deprivation, on the one hand, and (a) emergency admission to hospital, (b) hospital caseload and (c) the survival outcome, on the other. The importance of identifying and measuring important confounders implies that a major collective effort will often be needed in order to collect and integrate the data that are required in order to draw defensible causal claims from non-experimental data. Causal models require large amounts of high-quality data, and this can necessitate costly and time-consuming data collection and data-matching techniques. The kinds of research questions that health researchers are increasingly called upon to answer are encouraging them to reconsider central aspects of their approach to theory and research practice. Above all, questions relating to mediation are provoking a rethinking of established approaches to ontology, methodology and statistics. In ontological terms, this is leading to a greater willingness to consider generative mechanisms as the object of scientific explanation. In methodological terms, it is leading to growing interest in Structural Equation Modelling as an integrated modelling framework. In statistical terms, it is focusing attention on the assumptions and conditions necessary for making causal inferences. In this paper, we outlined the state-of-the-art in relation to mediation analysis and described the discrete-time survival model, which represents an attractive way of integrating time-to-event data and Structural Equation Modelling. We provided an example involving complex causal pathways that mediate the effects of a key socio-economic baseline covariate – social class – on the hazard of death from colon cancer following diagnosis. The results show that this approach has potential to shed light on a class of research questions which is of particular relevance in health research today. 
Statement on ethics approval

The database on which this analysis is based was provided by the National Cancer Registry Ireland. The data, once fully anonymised, are publicly available and can be requested by interested researchers. Specific ethical approval was not required for this study as the National Cancer Registry Ireland is authorised under the Health (Provision of Information) Act 1997 to collect and hold data on all persons diagnosed with cancer in the Republic of Ireland without requiring individual consent. The National Cancer Registry Ireland was established under the Health (Corporate Bodies) Act 1961 and is authorised to provide data to researchers – with due regard for anonymity – without requiring approval by an ethics committee.

References

Aalen OO, Røysland K, Gran JM, Ledergerber B. Causality, mediation and time: a dynamic viewpoint. J R Stat Soc Ser A Stat Soc. 2012;175:831–61.
Albert JM. Mediation analysis via potential outcomes models. Stat Med. 2008;27:1282–304.
Hafeman DM, Schwartz S. Opening the black box: a motivation for the assessment of mediation. Int J Epidemiol. 2009;38:838–45.
Huang B, Sivaganesan S, Succop P, Goodman E. Statistical assessment of mediational effects for logistic mediational models. Stat Med. 2004;23:2713–28.
Iacobucci D. Mediation analysis and categorical variables: the final frontier. J Consum Psychol. 2012;22:582–94.
Imai K, Keele L, Tingley D. A general approach to causal mediation analysis. Psychol Methods. 2010;15:309–34.
Pearl J. The causal mediation formula—a guide to the assessment of pathways and mechanisms. Prev Sci. 2012;13:426–36.
Shrout PE, Bolger N. Mediation in experimental and nonexperimental studies: new procedures and recommendations. Psychol Methods. 2002;7:422–45.
VanderWeele TJ. Causal mediation analysis with survival data. Epidemiology. 2011;22:582–5.
Pearl J. Interpretation and identification of causal mediation. Psychol Methods. 2014;19:459–81.
Davey Smith G. Reflections on the limitations to epidemiology. J Clin Epidemiol. 2001;54:325–31.
Kristensen P, Aalen OO. Understanding mechanisms: opening the "black box" in observational studies. Scand J Work Environ Health. 2013;39:121–4.
Baron RM, Kenny DA. The moderator–mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J Pers Soc Psychol. 1986;51:1173–82.
Sobel ME. Asymptotic confidence intervals for indirect effects in structural equation models. Sociol Methodol. 1982;13:290–312.
Li Y, Schneider JA, Bennett DA. Estimation of the mediation effect with a binary mediator. Stat Med. 2006;26:3398–414.
Aalen OO. Armitage lecture 2010: understanding treatment effects: the value of integrating longitudinal data and survival analysis. Stat Med. 2012;31:1903–17.
Robins JM, Greenland S. Identifiability and exchangeability for direct and indirect effects. Epidemiology. 1992;3:143–55.
Pearl J. Models, reasoning and inference. Cambridge: Cambridge University Press; 2000.
Holland PW. Statistics and causal inference. J Am Stat Assoc. 1986;81:945–60.
Petersen ML, Sinisi SE, van der Laan MJ. Estimation of direct causal effects. Epidemiology. 2006;17:276–84.
Jung SY, Rosenzweig M, Linkov F, Brufsky A, Weissfeld JL, Sereika SM. Comorbidity as a mediator of survival disparity between younger and older women diagnosed with metastatic breast cancer. Hypertension. 2011;59:205–11.
Lynch JW, Kaplan GA, Cohen RD, Tuomilehto J, Salonen J. Do cardiovascular risk factors explain the relation between socioeconomic status, risk of all-cause mortality, cardiovascular mortality, and acute myocardial infarction? Am J Epidemiol. 1996;144:934–42.
Lange T, Hansen JV. Direct and indirect effects in a survival context. Epidemiology. 2011;22:575–81.
Tchetgen EJ. On causal mediation analysis with a survival outcome. Int J Biostat. 2011;7:1–38.
Robins JM. Association, causation, and marginal structural models. Synthese. 1999;121:151–79.
Gerhard T, Delaney JA, Cooper-DeHoff RM, Shuster J, Brumback BA, Johnson JA, et al. Comparing marginal structural models to standard methods for estimating treatment effects of antihypertensive combination therapy. BMC Med Res Methodol. 2012;12:119.
Fosen J, Ferkingstad E, Borgan Ø, Aalen OO. Dynamic path analysis—a new approach to analyzing time-dependent covariates. Lifetime Data Anal. 2006;12:143–67.
Martinussen T, Vansteelandt S. On collapsibility and confounding bias in Cox and Aalen regression models. Lifetime Data Anal. 2013;19:279–96.
Hancock GR, Mueller RO, editors. Structural equation modeling: a second course. Greenwich: IAP; 2006.
Gunzler D, Chen T, Wu P, Zhang H. Introduction to mediation analysis with structural equation modeling. Shanghai Arch Psychiatry. 2013;25:390–4.
MacKinnon DP, Fairchild AJ. Current directions in mediation analysis. Curr Dir Psychol Sci. 2009;18:16–20.
Ditlevsen S, Christensen U, Lynch J, Damsgaard MT, Keiding N. The mediation proportion: a structural equation approach for estimating the proportion of exposure effect on outcome explained by an intermediate variable. Epidemiology. 2005;16:114–20.
Iacobucci D, Saldanha N, Deng X. A meditation on mediation: evidence that structural equations models perform better than regressions. J Consum Psychol. 2007;17:139–53.
Singer JD, Willett JB. It's about time: using discrete-time survival analysis to study duration and the timing of events. J Educ Stat. 1993;18:155–95.
Muthén B, Masyn K. Discrete-time survival mixture analysis. J Educ Behav Stat. 2005;30:27–58.
Muthén LK, Muthén BO. MPlus: statistical analysis with latent variables. User's guide. Los Angeles: Muthén & Muthén; 1998–2010.
Brown CC. On the use of indicator variables for studying the time-dependence of parameters in a response-time model. Biometrics. 1975;31:863–72.
Bollen KA, Curran PJ. Latent curve models: a structural equation perspective. Hoboken: Wiley-Interscience; 2006.
Asparouhov T, Masyn K, Muthén BO. Continuous time survival in latent variable models. Proceedings of the joint statistical meeting in Seattle (ASA Section on Biometrics). 2006. p. 180–7.
Economic and Social Research Institute. Activity in acute public hospitals in Ireland, annual report 2012. Dublin: Economic and Social Research Institute; 2013.
Haase T, Pratschke J. The Pobal-Haase deprivation index for small areas. Dublin: Pobal; 2010.
National Comprehensive Cancer Network (NCCN). Clinical practice guidelines in oncology: colon cancer. Fort Washington: NCCN; 2015.
Scottish Cancer Intelligence Unit. Trends in cancer survival in Scotland 1971-1995 [Internet]. Edinburgh: Information & Statistics Division; 2000. http://www.isdscotlandarchive.scot.nhs.uk/isd/files//trends_1971-95.pdf. Accessed 15 January 2016.
Muthén BO. Latent variable structural equation modeling with categorical data. J Econom. 1983;22:43–65.
Winship C, Mare RD. Structural equations and path analysis for discrete data. Am J Sociol. 1983;89:54–110.
Hellevik O. Linear versus logistic regression when the dependent variable is a dichotomy. Qual Quant. 2007;43:59–74.

The research on which this paper is based was funded by Irish Cancer Society research grant number HIC12COM. The anonymised data used in this study can be provided to interested researchers on written request to the National Cancer Registry Ireland ([email protected]).
Department of Economics and Statistics, University of Salerno, Via Giovanni Paolo II, 132, Fisciano, 84084, Italy
Jonathan Pratschke

Social & Economic Consultant, Templeogue Road, Terenure, Dublin, 6W, Ireland
Trutz Haase

National Cancer Registry Ireland, Building 6800, Cork Airport Business Park, Kinsale Road, Cork, Ireland
Harry Comber & Marianna de Camargo Cancela

Institute of Health & Society, Newcastle University, The Baddiley-Clark Building, Richardson Road, Newcastle upon Tyne, NE2 4AX, UK
Linda Sharp

Health Intelligence Unit, Health Service Executive, Red Brick House, Stewarts Hospital Campus, Palmerstown, Dublin, 20, Ireland
Harry Comber
Marianna de Camargo Cancela

Correspondence to Jonathan Pratschke.

HC designed the research, directed its implementation and contributed, in particular, to the section headed "Discussion". LS contributed primarily to the section headed "Background", while MdCC contributed to the "Methods" section. TH contributed to the section headed "Structural Equation Modelling" and HJ contributed to the section on "Data". JP took overall responsibility for drafting the article and integrating the contributions of other authors, as well as writing the remaining sections. All authors contributed to the literature review, critically reviewed the article and approved the final version.
Appendix A: MPlus v5.21 code for discrete-time survival model

TITLE: Discrete-time survival model for colon cancer with proportional hazards

DATA: FILE IS G:\filename.dat;

DEFINE:
  IF (stage EQ 2) THEN stage2 = 1;
  IF (stage NE 2) THEN stage2 = 0;
  IF (stage EQ 3) THEN stage3 = 1;

VARIABLE:
  NAMES ARE id q1-q24 age hp2006r stage2 stage3 stage4 emerg t_col_d cl_hi_c;
  USEVARIABLES = q1-q24 age hp2006r stage2 stage3 stage4 emerg t_col_d cl_hi_c;
  CATEGORICAL = q1-q24;
  MISSING = ALL (999);

ANALYSIS:
  ESTIMATOR = MLR;

MODEL:
  f BY q1-q24@1;
  f@0;
  f ON hp2006r (dirb) emerg (dirc) t_col_d cl_hi_c (dirf);
  emerg ON hp2006r (b1);
  cl_hi_c ON hp2006r (b2) emerg (c2);
  t_col_d WITH cl_hi_c;
  t_col_d WITH emerg;
  stage2 WITH cl_hi_c;
  stage2 WITH emerg;

MODEL CONSTRAINT:
  ! indirect effects of DEPRIVATION
  ! depriv -> emerg -> F
  new (indb01);
  indb01 = b1*dirc;
  ! depriv -> caseload -> F
  new (indb02);
  indb02 = b2*dirf;
  ! depriv -> emerg -> caseload -> F
  new (indb03);
  indb03 = b1*c2*dirf;
  ! all indirect effects of deprivation
  new (indb);
  indb = indb01 + indb02 + indb03;
  ! total effect
  new (totb);
  totb = indb + dirb;
  ! mediation proportion
  new (medb);
  medb = indb/totb;

OUTPUT:
  SAMPSTAT STANDARDIZED;

Pratschke, J., Haase, T., Comber, H. et al. Mechanisms and mediation in survival analysis: towards an integrated analytical framework. BMC Med Res Methodol 16, 27 (2016). https://doi.org/10.1186/s12874-016-0130-6

Keywords: Causal modelling; Mediation analysis; Discrete-time survival model
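For readers without MPlus, the MODEL CONSTRAINT block above is ordinary product-of-coefficients arithmetic: each indirect effect is the product of the path coefficients along one pathway. The Python sketch below mirrors that algebra with illustrative coefficient values (ours, not estimates from the paper):

```python
# Product-of-coefficients mediation algebra from the appendix.
# Coefficient values are illustrative placeholders, not study estimates.

b1, b2, c2 = 0.30, 0.20, 0.15        # depriv->emerg, depriv->caseload, emerg->caseload
dirb, dirc, dirf = 0.10, 0.25, 0.40  # direct paths into the hazard factor F

indb01 = b1 * dirc        # depriv -> emerg -> F
indb02 = b2 * dirf        # depriv -> caseload -> F
indb03 = b1 * c2 * dirf   # depriv -> emerg -> caseload -> F

indb = indb01 + indb02 + indb03  # total indirect effect
totb = indb + dirb               # total effect
medb = indb / totb               # mediation proportion

print(round(indb, 4), round(totb, 4), round(medb, 4))
```

In practice the standard errors of these derived quantities come from the delta method or bootstrapping, which is what the NEW() parameters in MPlus provide automatically.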
# Riemannian Geometry

Eckhard Meinrenken

Lecture Notes, University of Toronto, Spring 2002

## Manifolds

1.1. Motivation. One of Riemann's key ideas was to develop the notion of a "manifold" independently of an embedding into an ambient Euclidean space. Roughly, an $n$-dimensional manifold is a space that locally looks like $\mathbb{R}^{n}$. More precisely, a manifold is a space that can be covered by coordinate charts, in such a way that the change of coordinates between any two charts is a smooth map. The following examples should give an idea what we have in mind.

Example 1.1. Let $S^{2} \subset \mathbb{R}^{3}$ be the unit sphere defined by the equation $\left(x_{0}\right)^{2}+\left(x_{1}\right)^{2}+\left(x_{2}\right)^{2}=1$. For $j=0,1,2$ let $U_{j}^{+} \subset S^{2}$ be the subset defined by $x_{j}>0$ and $U_{j}^{-}$ the subset defined by $x_{j}<0$. Let $\phi_{j}^{ \pm}: U_{j}^{ \pm} \rightarrow \mathbb{R}^{2}$ be the maps omitting the $j$th coordinate. Then all transition maps $\phi_{j}^{ \pm} \circ\left(\phi_{k}^{ \pm}\right)^{-1}$ are smooth. For instance,

$$
\phi_{2}^{+} \circ\left(\phi_{1}^{-}\right)^{-1}: \phi_{1}^{-}\left(U_{1}^{-} \cap U_{2}^{+}\right) \rightarrow \phi_{2}^{+}\left(U_{1}^{-} \cap U_{2}^{+}\right)
$$

is the map $(u, v) \mapsto\left(u,-\sqrt{1-u^{2}-v^{2}}\right)$, and this is smooth since $u^{2}+v^{2}<1$ on the image of $\phi_{1}^{-}$.

Example 1.2. The real projective plane $\mathbb{R} P(2)$ is the set of all lines (= 1-dimensional subspaces) in $\mathbb{R}^{3}$. Any such line is determined by its two points of intersection $\{x,-x\}$ with $S^{2}$. Thus $\mathbb{R} P(2)$ may be identified with the quotient of $S^{2}$ by the equivalence relation $x \sim-x$. Let $\pi: S^{2} \rightarrow \mathbb{R} P(2)$ be the quotient map. To get a picture of $\mathbb{R} P(2)$, note that for $0<\epsilon<1$, the subset $\left\{\left(x_{0}, x_{1}, x_{2}\right) \in S^{2} \mid x_{2} \geq \epsilon\right\}$ is a 2-disk, containing at most one element of each equivalence class.
Hence its image under $\pi$ is again a 2-disk. On the other hand, the strip $\left\{\left(x_{0}, x_{1}, x_{2}\right) \in S^{2} \mid-\epsilon \leq x_{2} \leq \epsilon\right\}$ contains, with any $x$, also the point $-x$. Its image under $\pi$ looks like a Moebius strip. Thus $\mathbb{R} P(2)$ looks like a union of a Moebius strip and a disk, glued along their boundary circles. This is still somewhat hard to imagine, since we cannot perform this gluing in such a way that $\mathbb{R} P(2)$ would become a surface in $\mathbb{R}^{3}$. Nonetheless, it "should be" a surface: Using the coordinate charts from $S^{2}$, let $U_{j}=\pi\left(U_{j}^{+}\right)$, and let $\phi_{j}: U_{j} \rightarrow \mathbb{R}^{2}$ be the unique maps such that $\phi_{j} \circ \pi=\phi_{j}^{+}$ on $U_{j}^{+}$. Then the $U_{j}$ cover $\mathbb{R} P(2)$, and the "change of coordinate" maps are again smooth. It is indeed possible to embed $\mathbb{R} P(2)$ into $\mathbb{R}^{4}$: One possibility is the map,

$$
\left[\left(x_{0}, x_{1}, x_{2}\right)\right] \mapsto\left(x_{1} x_{2}, x_{0} x_{2}, x_{0} x_{1}, t_{0} x_{0}^{2}+t_{1} x_{1}^{2}+t_{2} x_{2}^{2}\right)
$$

where $t_{0}, t_{1}, t_{2} \in \mathbb{R}$ are distinct (e.g. $t_{0}=1, t_{1}=2, t_{2}=3$). However, these embeddings do not induce the "natural" metric on projective space, i.e. the metric induced from the 2-sphere.

1.2. Topological spaces. To develop the concept of a manifold as a "space that locally looks like $\mathbb{R}^{n}$", our space first of all has to come equipped with some topology (so that the word "local" makes sense). Recall that a topological space is a set $M$, together with a collection of subsets of $M$, called open subsets, satisfying the following three axioms: (i) the empty set $\emptyset$ and the space $M$ itself are both open, (ii) the intersection of any finite collection of open subsets is open, (iii) the union of any collection of open subsets is open. The collection of open subsets of $M$ is also called the topology of $M$.
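As a quick sanity check, the transition-map formula of Example 1.1 can be verified numerically. The following Python sketch (illustrative, not part of the original notes) inverts $\phi_{1}^{-}$ explicitly and compares with the claimed formula:

```python
import math

def phi1_minus_inv(u, v):
    # inverse of the chart omitting x1, on the hemisphere {x1 < 0}
    return (u, -math.sqrt(1 - u*u - v*v), v)

def phi2_plus(x):
    # chart omitting x2, on the hemisphere {x2 > 0}
    return (x[0], x[1])

def transition(u, v):
    # claimed formula for phi2+ o (phi1-)^{-1}
    return (u, -math.sqrt(1 - u*u - v*v))

for (u, v) in [(0.1, 0.5), (-0.3, 0.4), (0.0, 0.2)]:
    x = phi1_minus_inv(u, v)
    assert abs(sum(c*c for c in x) - 1) < 1e-12  # point lies on S^2
    assert x[1] < 0 and x[2] > 0                 # point lies in U_1^- and U_2^+
    got, want = phi2_plus(x), transition(u, v)
    assert max(abs(g - w) for g, w in zip(got, want)) < 1e-12
print("transition map verified")
```

The domain restriction $u^{2}+v^{2}<1$ is exactly what makes the square root (and hence the transition map) smooth.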
A map $f: M_{1} \rightarrow M_{2}$ between topological spaces is called continuous if the pre-image of any open subset in $M_{2}$ is open in $M_{1}$. A continuous map with a continuous inverse is called a homeomorphism. One basic ingredient in the definition of a manifold is that our topological space comes equipped with a covering by open sets which are homeomorphic to open subsets of $\mathbb{R}^{n}$.

Definition 1.3. Let $M$ be a topological space. An $n$-dimensional chart for $M$ is a pair $(U, \phi)$ consisting of an open subset $U \subset M$ and a continuous map $\phi: U \rightarrow \mathbb{R}^{n}$ such that $\phi$ is a homeomorphism onto its image $\phi(U)$. Two such charts $\left(U_{\alpha}, \phi_{\alpha}\right),\left(U_{\beta}, \phi_{\beta}\right)$ are $C^{\infty}$-compatible if the transition map

$$
\phi_{\beta} \circ \phi_{\alpha}^{-1}: \phi_{\alpha}\left(U_{\alpha} \cap U_{\beta}\right) \rightarrow \phi_{\beta}\left(U_{\alpha} \cap U_{\beta}\right)
$$

is a diffeomorphism (a smooth map with smooth inverse). A covering $\mathcal{A}=\left(U_{\alpha}, \phi_{\alpha}\right)_{\alpha \in A}$ of $M$ by pairwise $C^{\infty}$-compatible charts is called a $C^{\infty}$-atlas.

Example 1.4. Let $X \subset \mathbb{R}^{2}$ be the union of lines $\mathbb{R} \times\{1\} \cup \mathbb{R} \times\{-1\}$. Let $M=X / \sim$ be its quotient by the equivalence relation $(u, 1) \sim(u,-1)$ for $u<0$. Let $\pi: X \rightarrow M$ be the quotient map. Thus $M$ is obtained by gluing two copies of the real line along the negative axis. It is somewhat hard to picture this space, since $\pi(0, \pm 1)$ are distinct points in $M$. Nonetheless, $M$ admits a $C^{\infty}$-atlas: Let $U_{+}=\pi(\mathbb{R} \times\{1\})$ and $U_{-}=\pi(\mathbb{R} \times\{-1\})$, and define $\phi_{ \pm}: U_{ \pm} \rightarrow \mathbb{R}$ by $\phi_{ \pm}(\pi(u, \pm 1))=u$. Then $\left(U_{ \pm}, \phi_{ \pm}\right)$ defines an atlas with two charts (the transition map is just the identity map).
The example just given shows that existence of an atlas does not imply that our space looks "nice". The problem with the example is that the points $\pi(0, \pm 1)$ in $M$ do not admit disjoint open neighborhoods. Recall that a topological space is called Hausdorff if any two distinct points in the space admit disjoint open neighborhoods. Thus, we require manifolds to be Hausdorff.

We will impose another restriction on the topology. Recall that a basis for a topological space $M$ is a collection $\mathcal{B}$ of open subsets of $M$ such that every open subset of $M$ is a union of open subsets in the collection $\mathcal{B}$. For example, the collection of open balls $B_{\epsilon}(x)$ in $\mathbb{R}^{n}$ defines a basis. But one already has a basis if one takes only the balls $B_{\epsilon}(x)$ with $x \in \mathbb{Q}^{n}$ and $\epsilon \in \mathbb{Q}_{>0}$; this then defines a countable basis. A topological space with countable basis is also called second countable. We will require manifolds to admit a countable basis. This will imply, among other things, that the manifold admits a countable atlas, a fact that is useful for certain inductive arguments.

Two atlases on a topological space are called equivalent if their union is again an atlas. It is not hard to check that this is indeed an equivalence relation. An equivalence class of atlases is called a $C^{\infty}$-structure on $M$.

Definition 1.5 (Manifolds). A $C^{\infty}$-manifold is a Hausdorff topological space $M$, with countable basis, together with a $C^{\infty}$-structure.

It is perhaps somewhat surprising that the two topological restrictions (Hausdorff and countable basis) rule out any further "accidents": The topological properties of manifolds are just as nice as those of Euclidean $\mathbb{R}^{n}$.

Definition 1.6.
A map $F: N \rightarrow M$ between manifolds is called smooth (or $C^{\infty}$) if for all charts $(U, \phi)$ of $N$ and $(V, \psi)$ of $M$, with $F(U) \subset V$, the composite map

$$
\psi \circ F \circ \phi^{-1}: \phi(U) \rightarrow \psi(V)
$$

is smooth. The space of smooth maps from $N$ to $M$ is denoted $C^{\infty}(N, M)$. A smooth map $F: N \rightarrow M$ with smooth inverse $F^{-1}: M \rightarrow N$ is called a diffeomorphism.

Problems 1.7. 1. Review point set topology: Continuous maps, coverings, neighborhoods, Hausdorff property, compactness, ... 2. Show that equivalence of $C^{\infty}$-atlases is an equivalence relation. Warning: $C^{\infty}$-compatibility of charts on a topological space is not an equivalence relation. (Why?) 3. Given a manifold $M$ with $C^{\infty}$-atlas $\mathcal{A}$, let $\mathcal{A}^{\prime}$ be the collection of all $C^{\infty}$-charts $(U, \phi)$ on $M$ that are compatible with all charts in $\mathcal{A}$. Show that $\mathcal{A}^{\prime}$ is again an atlas, and that $\mathcal{A}^{\prime}$ contains any atlas equivalent to $\mathcal{A}$. 4. Verify that the map (1) is 1-1.

## Examples of manifolds

Spheres. The unit sphere $S^{n} \subset \mathbb{R}^{n+1}$ is a manifold of dimension $n$, with charts $U_{j}^{ \pm}$ constructed similarly to those for $S^{2}$. Another choice of atlas, with only two charts, is given by "stereographic projection" from the north and south poles.

Projective spaces. Let $\mathbb{R} P(n)$ be the quotient $S^{n} / \sim$ under the equivalence relation $x \sim-x$. It is easy to check that this is Hausdorff and has countable basis. Let $\pi: S^{n} \rightarrow \mathbb{R} P(n)$ be the quotient map. Just as for $n=2$, the charts $U_{j}=\pi\left(U_{j}^{+}\right)$, with maps $\phi_{j}$ induced from $U_{j}^{+}$, form an atlas.

Products. If $M_{j}$ are a finite collection of manifolds of dimensions $n_{j}$, their direct product is a manifold of dimension $\sum n_{j}$.
For instance, the $n$-torus is defined as the $n$-fold product of $S^{1}$'s.

Lens spaces. Identify $\mathbb{R}^{4}$ with $\mathbb{C}^{2}$, thus $S^{3}=\left\{(z, w):|z|^{2}+|w|^{2}=1\right\}$. Given natural numbers $q>p \geq 1$, introduce an equivalence relation by declaring that $(z, w) \sim\left(z^{\prime}, w^{\prime}\right)$ if

$$
\left(z^{\prime}, w^{\prime}\right)=\left(e^{2 \pi i \frac{k}{q}} z, e^{2 \pi i \frac{k p}{q}} w\right)
$$

for some $k \in\{0, \ldots, q-1\}$. Let $L(p, q)=S^{3} / \sim$ be the lens space. Note that $L(1,2)=\mathbb{R} P(3)$. If $p, q$ are relatively prime, $L(p, q)$ is a manifold. Indeed, if $p, q$ are relatively prime then for all $(z, w) \in S^{3}$, the only solution of

$$
(z, w)=\Phi_{k}(z, w):=\left(e^{2 \pi i \frac{k}{q}} z, e^{2 \pi i \frac{k p}{q}} w\right)
$$

is $k=0$. Let $f_{k}(z, w)=\left\|(z, w)-\Phi_{k}(z, w)\right\|$. Then $f_{k}>0$ for $k=1, \ldots, q-1$. Since $S^{3}$ is compact, each $f_{k}$ takes on its minimum on $S^{3}$. Let $\epsilon>0$ be sufficiently small so that $f_{k}>\epsilon$ for all $k=1, \ldots, q-1$. Then if $U$ is an open subset of $S^{3}$ that is contained in some open ball of radius $\epsilon$ in $\mathbb{R}^{4}$, then $U$ contains at most one element of each equivalence class. Let $(U, \phi)$ be a coordinate chart for $S^{3}$, with $U$ sufficiently small in this sense. Let $V=\pi(U)$, and $\psi: V \rightarrow \mathbb{R}^{3}$ the unique map such that $\psi \circ \pi=\phi$. Then $(V, \psi)$ is a coordinate chart for $L(p, q)$, and the collection of coordinate charts constructed this way defines an atlas.

Grassmannians. The set $\operatorname{Gr}(k, n)$ of all $k$-dimensional subspaces of $\mathbb{R}^{n}$ is called the Grassmannian of $k$-planes in $\mathbb{R}^{n}$. A $C^{\infty}$-atlas may be constructed as follows. For any subset $I \subset\{1, \ldots, n\}$ let $I^{\prime}=\{1, \ldots, n\} \backslash I$ be its complement.
Let $\mathbb{R}^{I} \subset \mathbb{R}^{n}$ be the subspace consisting of all $x \in \mathbb{R}^{n}$ with $x_{i}=0$ for $i \notin I$. If $I$ has cardinality $k$, then $\mathbb{R}^{I} \in \operatorname{Gr}(k, n)$. Note that $\mathbb{R}^{I^{\prime}}=\left(\mathbb{R}^{I}\right)^{\perp}$. Let $U_{I}=\left\{E \in \operatorname{Gr}(k, n) \mid E \cap \mathbb{R}^{I^{\prime}}=0\right\}$. Each $E \in U_{I}$ is described as the graph of a unique linear map $A: \mathbb{R}^{I} \rightarrow \mathbb{R}^{I^{\prime}}$, such that $E=\left\{x+A(x) \mid x \in \mathbb{R}^{I}\right\}$. This gives a bijection

$$
\phi_{I}: U_{I} \rightarrow L\left(\mathbb{R}^{I}, \mathbb{R}^{I^{\prime}}\right) \cong \mathbb{R}^{k(n-k)} .
$$

We can use this to define the topology on the Grassmannian: It is the smallest topology for which all maps $\phi_{I}$ are continuous. To check that the charts are compatible, suppose $E \in U_{I} \cap U_{\tilde{I}}$, and let $A_{I}$ and $A_{\tilde{I}}$ be the linear maps describing $E$ in the two charts. We have to show that the map taking $A_{I}$ to $A_{\tilde{I}}$ is smooth. Let $\Pi_{I}$ denote orthogonal projection $\mathbb{R}^{n} \rightarrow \mathbb{R}^{I}$. The map $A_{I}$ is determined by the equations

$$
A_{I}\left(x_{I}\right)=\left(1-\Pi_{I}\right) x, \quad x_{I}=\Pi_{I} x
$$

for $x \in E$, and $x=x_{I}+A_{I} x_{I}$. Thus

$$
A_{\tilde{I}}\left(x_{\tilde{I}}\right)=\left(1-\Pi_{\tilde{I}}\right)\left(A_{I}+1\right) x_{I}, \quad x_{\tilde{I}}=\Pi_{\tilde{I}}\left(A_{I}+1\right) x_{I} .
$$

The map $S\left(A_{I}\right):=\Pi_{\tilde{I}}\left(A_{I}+1\right): \mathbb{R}^{I} \rightarrow \mathbb{R}^{\tilde{I}}$ is an isomorphism, since it is the composition of two isomorphisms $\left(A_{I}+1\right): \mathbb{R}^{I} \rightarrow E$ and $\left.\Pi_{\tilde{I}}\right|_{E}: E \rightarrow \mathbb{R}^{\tilde{I}}$. The above equations show,

$$
A_{\tilde{I}}=\left(1-\Pi_{\tilde{I}}\right)\left(A_{I}+1\right) S\left(A_{I}\right)^{-1} .
$$

The dependence of $S$ on the matrix entries of $A_{I}$ is smooth, by Cramer's formula for the inverse matrix. It follows that the collection of all $\phi_{I}: U_{I} \rightarrow \mathbb{R}^{k(n-k)}$ defines on $\operatorname{Gr}(k, n)$ the structure of a manifold of dimension $k(n-k)$.

Rotation groups. Let $\operatorname{Mat}_{n} \cong \mathbb{R}^{n^{2}}$ be the set of $n \times n$-matrices. The subset $\mathrm{SO}(n)=\left\{A \in \operatorname{Mat}_{n} \mid A^{t} A=I, \operatorname{det}(A)=1\right\}$ is the group of rotations in $\mathbb{R}^{n}$. Let $\mathfrak{so}(n)=\left\{B \in \operatorname{Mat}_{n} \mid B^{t}+B=0\right\} \cong \mathbb{R}^{n(n-1) / 2}$. Then $\exp (B) \in \mathrm{SO}(n)$ for all $B \in \mathfrak{so}(n)$. For $\epsilon$ sufficiently small, $\exp$ restricts to a bijection from $V=\{B \in \mathfrak{so}(n) \mid\|B\|<\epsilon\}$ onto a neighborhood of the identity in $\mathrm{SO}(n)$. For any $A_{0} \in \mathrm{SO}(n)$, let $U=\left\{A_{0} \exp (B) \mid B \in V\right\}$. Let $\phi$ be the map taking $A$ to $B=\log \left(A_{0}^{-1} A\right)$. Then the set of all $(U, \phi)$ constructed this way defines an atlas, and gives $\mathrm{SO}(n)$ the structure of a manifold of dimension $n(n-1) / 2$.

## Submanifolds

Let $M$ be a manifold of dimension $m$.

Definition 3.1. A subset $S \subset M$ is called an embedded submanifold of dimension $k \leq m$, if $S$ can be covered by coordinate charts $(U, \phi)$ for $M$ with the property $\phi(U \cap S)=\phi(U) \cap \mathbb{R}^{k}$. Charts $(U, \phi)$ of $M$ with this property are called submanifold charts for $S$.

Thus $S$ becomes a $k$-dimensional manifold in its own right, with atlas consisting of charts $\left(U \cap S,\left.\phi\right|_{U \cap S}\right)$.

Example 3.2. $S^{n}$ is a submanifold of $\mathbb{R}^{n+1}$: A typical submanifold chart is

$$
V=\left\{x \in \mathbb{R}^{n+1} \mid x_{0}>0, \sum_{i>0} x_{i}^{2}<1\right\}, \quad \phi(x)=\left(x_{1}, \ldots, x_{n}, \sqrt{1-\sum_{i>0} x_{i}^{2}}-x_{0}\right) .
$$

Example 3.3.
Similarly, if $f: U \rightarrow \mathbb{R}^{n-k}$ is any smooth function on an open subset $U \subset \mathbb{R}^{k}$, the graph $\Gamma_{f}=\{(x, f(x)) \mid x \in U\}$ is an embedded submanifold of $U \times \mathbb{R}^{n-k}$, with submanifold chart

$$
\phi: U \times \mathbb{R}^{n-k} \rightarrow \mathbb{R}^{n}, \quad\left(y^{\prime}, y^{\prime \prime}\right) \mapsto\left(y^{\prime}, f\left(y^{\prime}\right)-y^{\prime \prime}\right)
$$

Recall that for any smooth function $F: V \rightarrow \mathbb{R}^{m}$ on an open subset $V \subset \mathbb{R}^{n}$, a point $a \in \mathbb{R}^{m}$ is a regular value if for all $x \in F^{-1}(a)$, the Jacobian $D F(x): \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ is onto. If $m=1$, this means that the gradient $\nabla F$ does not vanish on $F^{-1}(a)$. (We do not require that $a$ is in the image of $F$ - thus "regular value" is a bit misleading.)

Proposition 3.4. Let $V \subset \mathbb{R}^{n}$ be open, and $F: V \rightarrow \mathbb{R}^{m}$ smooth. For any regular value $a \in \mathbb{R}^{m}$ of $F$, the inverse image $F^{-1}(a)$ is an embedded submanifold of dimension $k=n-m$. In fact, there exists a submanifold chart $(U, \phi)$ around any given $x \in F^{-1}(a)$ such that

$$
F\left(\phi^{-1}\left(y^{\prime}, y^{\prime \prime}\right)\right)=a+y^{\prime \prime}
$$

for all $\left(y^{\prime}, y^{\prime \prime}\right) \in \phi(U) \subset \mathbb{R}^{k} \times \mathbb{R}^{m}$.

Proof. This is really just a version of the implicit function theorem from multivariable calculus. A more familiar version of this theorem states that for all $x \in F^{-1}(a)$, after possibly renumbering the coordinates in $\mathbb{R}^{n}$, the equation $F(y)=a$ can be "solved" for $y^{\prime \prime}=\left(y_{k+1}, \ldots, y_{n}\right)$ as a function of $y^{\prime}=\left(y_{1}, \ldots, y_{k}\right)$.
That is, there exists a unique function $g_{a}$ from a neighborhood of $x^{\prime} \in \mathbb{R}^{n-m}$ to $\mathbb{R}^{m}$, such that on a sufficiently small neighborhood $U$ of $x$

$$
F^{-1}(a) \cap U=\left\{\left(y^{\prime}, g_{a}\left(y^{\prime}\right)\right)\right\} \cap U .
$$

This means that on $U$, the level set $F^{-1}(a)$ is the graph of the function $g_{a}$, and therefore an embedded submanifold. But in fact, $g_{a}$ depends smoothly on the value $a=F(x)$. That is, taking $U$ sufficiently small, we have

$$
F^{-1}(F(y)) \cap U=\left\{\left(y^{\prime}, g_{F(y)}\left(y^{\prime}\right)\right)\right\} \cap U
$$

for all $y=\left(y^{\prime}, y^{\prime \prime}\right) \in U$. Then $\phi(y)=\left(y^{\prime}, g_{F(y)}\left(y^{\prime}\right)-y^{\prime \prime}\right)$ is a submanifold chart with the desired property.

Manifolds are often described as level sets for regular values:

Example 3.5. For $0<r<R$, the 2-torus can be identified with the embedded submanifold $F^{-1}\left(r^{2}\right)$ where

$$
F\left(x_{1}, x_{2}, x_{3}\right)=\left(\sqrt{x_{1}^{2}+x_{2}^{2}}-R\right)^{2}+x_{3}^{2}
$$

is a smooth function on the complement of the $x_{3}$-axis, $x_{1}^{2}+x_{2}^{2}>0$. One checks that indeed, $a=r^{2}$ is a regular value of this function.

The proposition generalizes to maps between manifolds: If $F \in C^{\infty}(M, N)$, a point $a \in N$ is called a regular value of $F$ if it is a regular value "in local coordinates": That is, for all $p \in F^{-1}(a)$, and all charts $(U, \phi)$ around $p$ and $(V, \psi)$ around $a$, with $F(U) \subset V$, the Jacobian $D\left(\psi \circ F \circ \phi^{-1}\right)$ at $\phi(p)$ is onto.

Theorem 3.6. If $a \in N$ is a regular value of $F \in C^{\infty}(M, N)$, then $F^{-1}(a)$ is an embedded submanifold of $M$.
In fact, given a coordinate chart $(V, \psi)$ around $a$, with $\psi(a)=0$, each $p \in F^{-1}(a)$ admits a submanifold chart $(U, \phi)$ with

$$
\psi \circ F \circ \phi^{-1}\left(y^{\prime}, y^{\prime \prime}\right)=y^{\prime \prime}
$$

for $y=\left(y^{\prime}, y^{\prime \prime}\right) \in \phi(U) \subset \mathbb{R}^{n}$.

Proof. Choose any coordinate chart $\left(U^{\prime}, \phi^{\prime}\right)$ around $p$. The Proposition, applied to the map $\psi \circ F \circ\left(\phi^{\prime}\right)^{-1}$, gives a change of coordinates with the desired properties.

Problems 3.7. 1. Show that conversely, every submanifold is locally the graph of a function. 2. Let $S \subset \mathbb{R}^{3}$ be the 2-torus $\left(\sqrt{x_{1}^{2}+x_{2}^{2}}-R\right)^{2}+x_{3}^{2}=r^{2}$, and $F: S \rightarrow \mathbb{R}$ the function $\left(x_{1}, x_{2}, x_{3}\right) \mapsto x_{2}$. What are the critical points for this function? What is the shape of the level sets $F^{-1}(a)$ for $a$ a singular value? 3. Let $\operatorname{Sym}_{n} \subset \operatorname{Mat}_{n}$ be the subspace of symmetric matrices, and $F: \operatorname{Mat}_{n} \rightarrow \operatorname{Sym}_{n}$ the map $A \mapsto A^{t} A$. Show that the identity matrix $I$ is a regular value of this map. This proves that the orthogonal group $\mathrm{O}(n)$ is an embedded submanifold of $\operatorname{Mat}_{n}$, of dimension $n(n-1) / 2$. (In fact, by a theorem of E. Cartan, every closed subset $G \subset \operatorname{Mat}_{n}$, with the property that $G$ is a group under matrix multiplication, is an embedded submanifold of $\operatorname{Mat}_{n}$.)

## Tangent spaces

For embedded submanifolds $M \subset \mathbb{R}^{n}$, the tangent space $T_{p} M$ at $p \in M$ can be defined as the set of all velocity vectors $v=\dot{\gamma}(0)$, where $\gamma: \mathbb{R} \rightarrow M$ is a smooth curve with $\gamma(0)=p$. Thus $T_{p} M$ becomes a vector subspace of $\mathbb{R}^{n}$.
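The velocity-vector description of $T_{p} M$ can be illustrated numerically for $M=S^{2}$. The Python sketch below (ours, for illustration only) differentiates a great-circle curve through $p$ and checks that the velocity vector is orthogonal to $p$, i.e. lies in $T_{p} S^{2}=p^{\perp}$:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c*c for c in v))
    return tuple(c / n for c in v)

def curve(p, q, t):
    # great-circle curve on S^2 with gamma(0) = p, gamma'(0) = q (q unit, q ⊥ p)
    return tuple(math.cos(t)*a + math.sin(t)*b for a, b in zip(p, q))

def velocity(p, q, h=1e-6):
    # central-difference approximation of gamma'(0)
    gp, gm = curve(p, q, h), curve(p, q, -h)
    return tuple((a - b) / (2*h) for a, b in zip(gp, gm))

p = normalize((1.0, 2.0, 2.0))
# build q orthogonal to p by Gram-Schmidt on a generic vector
w = (0.0, 1.0, 0.0)
d = sum(a*b for a, b in zip(w, p))
q = normalize(tuple(a - d*b for a, b in zip(w, p)))

v = velocity(p, q)
# the velocity vector lies in T_p S^2 = p^perp, and equals q
assert abs(sum(a*b for a, b in zip(v, p))) < 1e-6
assert max(abs(a - b) for a, b in zip(v, q)) < 1e-6
print("velocity vector at p is orthogonal to p")
```

Varying the orthogonal direction $q$ sweeps out the whole plane $p^{\perp}$, in line with $\dim T_{p} S^{2}=2$.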
To extend this idea to general manifolds, note that the vector $v=\dot{\gamma}(0)$ defines a "directional derivative" $C^{\infty}(M) \rightarrow \mathbb{R}$ : $$ v:\left.f \mapsto \frac{d}{d t}\right|_{t=0} f(\gamma(t)) $$ We will define $T_{p} M$ as a set of directional derivatives. Definition 4.1. Let $M$ be a manifold, $p \in M$. The tangent space $T_{p} M$ is the space of all linear maps $v: C^{\infty}(M) \rightarrow \mathbb{R}$ of the form $$ v(f)=\left.\frac{d}{d t}\right|_{t=0} f(\gamma(t)) $$ for some smooth curve $\gamma \in C^{\infty}(\mathbb{R}, M)$ with $\gamma(0)=p$. The following alternative description of $T_{p} M$ makes it clear that $T_{p} M$ is a vector subspace of the space of linear maps $C^{\infty}(M) \rightarrow \mathbb{R}$, of dimension $\operatorname{dim} T_{p} M=\operatorname{dim} M$. Proposition 4.2. Let $(U, \phi)$ be a coordinate chart around $p$, with $\phi(p)=0$. A linear map $v: C^{\infty}(M) \rightarrow \mathbb{R}$ is in $T_{p} M$ if and only if it has the form, $$ v(f)=\left.\sum_{i=1}^{m} a_{i} \frac{\partial\left(f \circ \phi^{-1}\right)}{\partial x_{i}}\right|_{x=0} $$ for some $a=\left(a_{1}, \ldots, a_{m}\right) \in \mathbb{R}^{m}$. Proof. Given a linear map $v$ of this form, let $\gamma(t)$ be any smooth curve with $\phi(\gamma(t))=t a$ for $|t|$ sufficiently small ${ }^{1}$. Then $$ \left.\frac{d}{d t}\right|_{t=0} f(\gamma(t))=\left.\frac{d}{d t}\right|_{t=0}\left(f \circ \phi^{-1}\right)(t a)=\left.\sum_{i=1}^{m} a_{i} \frac{\partial\left(f \circ \phi^{-1}\right)}{\partial x_{i}}\right|_{x=0}, $$ by the chain rule. Conversely, given any curve $\gamma$ with $\gamma(0)=p$, let $\tilde{\gamma}=\phi \circ \gamma$ be the corresponding curve in $\phi(U)$ (defined for small $|t|$ ). 
Then $$ \left.\frac{d}{d t}\right|_{t=0} f(\gamma(t))=\left.\frac{d}{d t}\right|_{t=0}\left(f \circ \phi^{-1}\right)(\tilde{\gamma}(t))=\left.\sum_{i=1}^{m} a_{i} \frac{\partial\left(f \circ \phi^{-1}\right)}{\partial x_{i}}\right|_{x=0} $$ where $a=\left.\frac{d \tilde{\gamma}}{d t}\right|_{t=0}$. Corollary 4.3. If $U \subset \mathbb{R}^{m}$ is an open subset, the tangent space $T_{p} U$ is canonically identified with $\mathbb{R}^{m}$. We now describe a third definition of $T_{p} M$ which characterizes "directional derivatives" in a coordinate-free way, without reference to curves $\gamma$. Note first that every tangent vector $v \in T_{p} M$ satisfies a product rule, $$ v\left(f_{1} f_{2}\right)=f_{1}(p) v\left(f_{2}\right)+v\left(f_{1}\right) f_{2}(p) $$ for all $f_{j} \in C^{\infty}(M)$. Indeed, in local coordinates $(U, \phi)$, this just follows from the product rule from calculus, $$ \frac{\partial}{\partial x_{i}}\left(\tilde{f}_{1} \tilde{f}_{2}\right)=\tilde{f}_{1}(x) \frac{\partial \tilde{f}_{2}}{\partial x_{i}}+\frac{\partial \tilde{f}_{1}}{\partial x_{i}} \tilde{f}_{2}(x) $$ where $\tilde{f}_{j}=f_{j} \circ \phi^{-1}$. It turns out that the product rule completely characterizes tangent vectors: Proposition 4.4. A linear map $v: C^{\infty}(M) \rightarrow \mathbb{R}$ is a tangent vector if and only if it satisfies the product rule (2). Proof. Let $v: C^{\infty}(M) \rightarrow \mathbb{R}$ be a linear map satisfying the product rule (2). To show that $v \in T_{p} M$, we use the second definition of $T_{p} M$ in terms of local coordinates. We first note that by the product rule applied to the constant function $1=1 \cdot 1$ we have $v(1)=0$. Thus $v$ vanishes on constants. Next we show that $v\left(f_{1}\right)=v\left(f_{2}\right)$ if $f_{1}=f_{2}$ near $p$. Equivalently, we show that $v(f)=0$ if $f=0$ near $p$. Choose $\chi \in C^{\infty}(M)$ with $\chi(p)=1$ and $\chi$ zero outside a small neighborhood of $p$, so that $f \chi=0$.
The product rule tells us that $$ 0=v(f \chi)=v(f) \chi(p)+v(\chi) f(p)=v(f) . $$ Thus $v(f)$ depends only on the behavior of $f$ in an arbitrarily small neighborhood of $p$. In particular, letting $(U, \phi)$ be a coordinate chart around $p$, with $\phi(p)=0$, we may assume that $\operatorname{supp}(f) \subset U$. ${ }^{2}$ Consider the Taylor expansion of $\tilde{f}=f \circ \phi^{-1}$ near $x=0$ : $$ \tilde{f}(x)=\tilde{f}(0)+\left.\sum_{i} x_{i} \frac{\partial}{\partial x_{i}}\right|_{x=0} \tilde{f}+r(x) $$ The remainder term $r$ is a smooth function that vanishes at $x=0$ together with its first derivatives. This means that it can be written (non-uniquely) in the form $r(x)=\sum_{i} x_{i} r_{i}(x)$, where the $r_{i}$ are smooth functions that vanish at $0$.${ }^{3}$ By the product rule, $v$ vanishes on $r \circ \phi$ (since it is a sum of products of functions that vanish at $p$). It also vanishes on the constant $\tilde{f}(0)=f(p)$. Thus $$ v(f)=v(\tilde{f} \circ \phi)=\left.\sum_{i} a_{i} \frac{\partial}{\partial x_{i}}\right|_{x=0} \tilde{f} $$ with $a_{i}=v\left(x_{i} \circ \phi\right)$. (Here the coordinates $x_{i}$ are viewed as functions on $\mathbb{R}^{m}, x \mapsto x_{i}$.)

${ }^{1}$ More precisely, choose any function $\chi: \mathbb{R} \rightarrow \mathbb{R}$ with $\chi(t)=t$ for $|t|<\epsilon / 2$ and $\dot{\chi}(t)=0$ for $|t| \geq \epsilon$. Choose $\epsilon$ sufficiently small, so that the ball of radius $\epsilon\|a\|$ is contained in $\phi(U)$. Then $\chi(t) a \in \phi(U)$ for all $t$, and $\gamma(t)=\phi^{-1}(\chi(t) a)$ is a well-defined curve with the desired properties.

${ }^{2}$ The support $\operatorname{supp}(f)$ of a function $f$ on $M$ is the closure of the set of all points where it is non-zero.

Remark 4.5. There is a fourth definition of $T_{p} M$, as follows.
Let $C_{p}^{\infty}(M)$ denote the subspace of functions vanishing at $p$, and let $C_{p}^{\infty}(M)^{2}$ consist of finite sums $\sum_{i} f_{i} g_{i}$ where $f_{i}, g_{i} \in C_{p}^{\infty}(M)$. Since any tangent vector $v: C^{\infty}(M) \rightarrow \mathbb{R}$ vanishes on constants, $v$ is effectively a map $v: C_{p}^{\infty}(M) \rightarrow \mathbb{R}$. Since tangent vectors vanish on products of functions vanishing at $p$, $v$ vanishes on the subspace $C_{p}^{\infty}(M)^{2} \subset C_{p}^{\infty}(M)$. Thus $v$ descends to a linear map $C_{p}^{\infty}(M) / C_{p}^{\infty}(M)^{2} \rightarrow \mathbb{R}$, i.e. an element of the dual space $\left(C_{p}^{\infty}(M) / C_{p}^{\infty}(M)^{2}\right)^{*}$. The map $$ T_{p} M \rightarrow\left(C_{p}^{\infty}(M) / C_{p}^{\infty}(M)^{2}\right)^{*} $$ just defined is an isomorphism, and can therefore be used as a definition of $T_{p} M$. This may appear very fancy at first sight, but really just says that a tangent vector is a linear functional on $C^{\infty}(M)$ that vanishes on constants and depends only on the first order Taylor expansion of the function at $p$.

## Tangent map

Definition 5.1. For any smooth map $F \in C^{\infty}(M, N)$ and any $p \in M$, the tangent map $T_{p} F: T_{p} M \rightarrow T_{F(p)} N$ is defined by the equation $$ T_{p} F(v)(f)=v(f \circ F) $$ It is easy to check (using any of the definitions of tangent space) that $T_{p} F(v)$ is indeed a tangent vector. For example, if $\gamma: \mathbb{R} \rightarrow M$ is a curve on $M$ representing $v$, we have $$ T_{p} F(v)(f)=v(f \circ F)=\left.\frac{d}{d t}\right|_{t=0} f(F(\gamma(t))) $$ which shows that $T_{p} F(v)$ is the tangent vector at $F(p)$ represented by the curve $F \circ \gamma: \mathbb{R} \rightarrow N$. Similarly, it is easily verified that under composition of maps, $$ T_{p}\left(F_{2} \circ F_{1}\right)=T_{F_{1}(p)} F_{2} \circ T_{p} F_{1} .
$$ In particular, if $F$ is a diffeomorphism, $T_{p} F$ is invertible and we have $$ T_{F(p)} F^{-1}=\left(T_{p} F\right)^{-1} \text {. } $$ It is instructive to work out the expression for $T_{p} F$ in local coordinates. We had seen that any chart $(U, \phi)$ around $p$ defines an isomorphism $T_{p} M \rightarrow \mathbb{R}^{m}$. This is the same as the isomorphism given by the tangent map, $$ T_{p} \phi: T_{p} U=T_{p} M \rightarrow T_{\phi(p)} \phi(U)=\mathbb{R}^{m} . $$ Similarly, a chart $(V, \psi)$ around $F(p)$ gives an identification $T_{F(p)} \psi: T_{F(p)} V \cong \mathbb{R}^{n}$. Suppose $F(U) \subset V$.

${ }^{3}$ Exercise: Show that if $h$ is any smooth function on $\mathbb{R}^{n}$ with $h(0)=0$, then $h$ can be written in the form $h=\sum x_{i} h_{i}$ where all $h_{i}$ are smooth. Show that if the first derivatives of $h$ vanish at 0, then $h_{i}(0)=0$.

Theorem 5.2. In local charts $(U, \phi)$ and $(V, \psi)$ as above, the map $$ T_{F(p)} \psi \circ T_{p} F \circ\left(T_{p} \phi\right)^{-1}: \mathbb{R}^{m} \rightarrow \mathbb{R}^{n} $$ is the Jacobian of the map $\tilde{F}=\psi \circ F \circ \phi^{-1}: \phi(U) \rightarrow \psi(V)$. Proof. Let $a \in \mathbb{R}^{m}$ represent $v \in T_{p} M$ in the chart $(U, \phi)$, and let $b \in \mathbb{R}^{n}$ represent its image under $T_{p} F$. We denote the coordinates on $\phi(U)$ by $x_{1}, \ldots, x_{m}$ and the coordinates on $\psi(V)$ by $y_{1}, \ldots, y_{n}$. Let $\tilde{f}=f \circ \psi^{-1} \in C^{\infty}(\psi(V))$.
Then $$ \begin{aligned} v(f \circ F) & =\left.\sum_{i=1}^{m} a_{i} \frac{\partial}{\partial x_{i}}\right|_{x=\phi(p)} f\left(F\left(\phi^{-1}(x)\right)\right) \\ & =\left.\sum_{i=1}^{m} a_{i} \frac{\partial}{\partial x_{i}}\right|_{x=\phi(p)} \tilde{f}(\tilde{F}(x)) \\ & =\left.\left.\sum_{i=1}^{m} \sum_{j=1}^{n} a_{i} \frac{\partial \tilde{F}_{j}}{\partial x_{i}}\right|_{x=\phi(p)} \frac{\partial \tilde{f}}{\partial y_{j}}\right|_{y=\psi(F(p))} \\ & =\left.\sum_{j=1}^{n} b_{j} \frac{\partial \tilde{f}}{\partial y_{j}}\right|_{y=\psi(F(p))} \end{aligned} $$ where $b_{j}=\left.\sum_{i=1}^{m} \frac{\partial \tilde{F}_{j}}{\partial x_{i}}\right|_{x=\phi(p)} a_{i}=\sum_{i=1}^{m}(D \tilde{F})_{j i} a_{i}$. Thus $T_{p} F$ is just the Jacobian expressed in a coordinate-free way. As an immediate application, we can characterize regular values in a coordinate-free way: Definition 5.3. A point $q \in N$ is a regular value of $F \in C^{\infty}(M, N)$ if and only if the tangent map $T_{p} F$ is onto for all $p \in F^{-1}(q)$. This is clearly equivalent to our earlier definition in local charts. Definition 5.4. Let $\gamma \in C^{\infty}(J, M)$ be a smooth curve $(J \subset \mathbb{R}$ an open interval $)$. The tangent (or velocity) vector to $\gamma$ at time $t_{0} \in J$ is the vector $$ \dot{\gamma}\left(t_{0}\right):=T_{t_{0}} \gamma\left(\left.\frac{d}{d t}\right|_{t=t_{0}}\right) \in T_{\gamma\left(t_{0}\right)} M $$ We will also use the notation $\frac{d \gamma}{d t}\left(t_{0}\right)$ to denote the velocity vector. Problems 5.5. 1. Show that if $F \in C^{\infty}(M, N)$ and $\gamma \in C^{\infty}(J, M)$, then $T_{\gamma(t)} F\left(\frac{d \gamma}{d t}\right)=\frac{d(F \circ \gamma)}{d t}$ for all $t \in J$. 2. Suppose that $S \subset M$ is an embedded submanifold, and let $\iota: S \rightarrow M, p \mapsto p$ be the inclusion map. Show that $\iota$ is smooth and that the tangent map $T_{p} \iota$ is $1-1$ for all $p \in S$.
Show that if $M$ is an open subset of $\mathbb{R}^{m}$, this becomes the identification of $T_{p} S$ as a subspace of $\mathbb{R}^{m}$, as described at the beginning of this section. 3. Suppose $F \in C^{\infty}(M, N)$ has $q \in N$ as a regular value. Let $S=F^{-1}(q) \hookrightarrow M$ be the level set. For $p \in S$, show that $T_{p} S$ is the kernel of the tangent map $T_{p} F$.

## Tangent bundle

Let $M$ be a manifold of dimension $m$. If $M$ is an embedded submanifold of $\mathbb{R}^{n}$, the tangent bundle $T M$ is the subset of $\mathbb{R}^{2 n}=\mathbb{R}^{n} \times \mathbb{R}^{n}$ given by $$ T M=\left\{(p, v) \in \mathbb{R}^{n} \times \mathbb{R}^{n} \mid p \in M, v \in T_{p} M\right\} $$ where each $T_{p} M$ is identified as a vector subspace of $\mathbb{R}^{n}$. It is not hard to see that $T M$ is, in fact, a smooth embedded submanifold of dimension $2 m$. Moreover, the natural map $\pi: T M \rightarrow M,(p, v) \mapsto p$ is smooth, and its "fibers" $\pi^{-1}(p)=T_{p} M$ carry the structure of vector spaces. Definition 6.1. A vector bundle of rank $k$ over a manifold $M$ is a manifold $E$, together with a smooth map $\pi: E \rightarrow M$, and a structure of a vector space on each fiber $E_{p}:=\pi^{-1}(p)$, satisfying the following local triviality condition: Each point in $M$ admits an open neighborhood $U$, and a smooth map $$ \psi: \pi^{-1}(U) \rightarrow U \times \mathbb{R}^{k}, $$ such that $\psi$ restricts to linear isomorphisms $E_{p} \rightarrow \mathbb{R}^{k}$ for all $p \in U$. The map $\psi: E_{U} \equiv \pi^{-1}(U) \rightarrow U \times \mathbb{R}^{k}$ is called a (local) trivialization of $E$ over $U$. In general, there need not be a trivialization over $U=M$. Definition 6.2.
A vector bundle chart for a vector bundle $\pi: E \rightarrow M$ is a chart $(U, \phi)$ for $M$, together with a chart $\left(\pi^{-1}(U), \hat{\phi}\right)$ for $E_{U}=\pi^{-1}(U)$, such that $\hat{\phi}: \pi^{-1}(U) \rightarrow \mathbb{R}^{m} \times \mathbb{R}^{k}$ restricts to linear isomorphisms from each fiber $E_{p}$ onto $\{\phi(p)\} \times \mathbb{R}^{k}$. Every vector bundle chart defines a local trivialization. Conversely, if $\psi:\left.E\right|_{U} \rightarrow U \times \mathbb{R}^{k}$ is a trivialization of $E_{U}$, where $U$ is the domain of a chart $(U, \phi)$, one obtains a vector bundle chart $\left(\pi^{-1}(U), \hat{\phi}\right)$ for $E$. Example 6.3. (Vector bundles over the Grassmannian) For any $p \in \operatorname{Gr}(k, n)$, let $E_{p} \subset \mathbb{R}^{n}$ be the $k$-plane it represents. Then $E=\cup_{p \in \operatorname{Gr}(k, n)} E_{p}$ is a vector bundle over $\operatorname{Gr}(k, n)$, called the tautological vector bundle. Recall the definition of charts $\phi_{I}: U_{I} \rightarrow L\left(\mathbb{R}^{I}, \mathbb{R}^{I^{\prime}}\right)$ for the Grassmannian, where any $p=\{E\} \in U_{I}$ is identified with the linear map $A$ having $E$ as its graph. Let $$ \hat{\phi}_{I}: \pi^{-1}\left(U_{I}\right) \rightarrow L\left(\mathbb{R}^{I}, \mathbb{R}^{I^{\prime}}\right) \times \mathbb{R}^{I} $$ be the map $\hat{\phi}_{I}(v)=\left(\phi_{I}(\pi(v)), \pi_{I}(v)\right)$ where $\pi_{I}: \mathbb{R}^{n} \rightarrow \mathbb{R}^{I}$ is orthogonal projection. The $\hat{\phi}_{I}$ serve as bundle charts for the tautological vector bundle. There is another natural vector bundle $E^{\prime}$ over $\operatorname{Gr}(k, n)$, with fiber $E_{p}^{\prime}:=E_{p}^{\perp}$ the orthogonal complement of $E_{p}$. A special case is $k=1$, where $\operatorname{Gr}(k, n)=\mathbb{R} P(n-1)$. In this case $E$ is called the tautological line bundle, and $E^{\prime}$ the hyperplane bundle. At this stage, we are mainly interested in tangent bundles of manifolds. Theorem 6.4.
For any manifold $M$, the disjoint union $T M=\cup_{p \in M} T_{p} M$ carries the structure of a vector bundle over $M$, where $\pi$ takes $v \in T_{p} M$ to the base point $p$. Proof. Recall that any chart $(U, \phi)$ for $M$ gives identifications $T_{p} \phi: T_{p} M \rightarrow \mathbb{R}^{m}$ for all $p \in U$. Taking all these maps together, we obtain a bijection, $$ T \phi: \pi^{-1}(U) \rightarrow U \times \mathbb{R}^{m} . $$ We take the collection of $\left(\pi^{-1}(U), T \phi\right)$ as vector bundle charts for $T M$. We need to check that the transition maps are smooth. If $(V, \psi)$ is another coordinate chart with $U \cap V \neq \emptyset$, the transition map for $\pi^{-1}(U \cap V)$ is given by, $$ T \psi \circ(T \phi)^{-1}:(U \cap V) \times \mathbb{R}^{m} \rightarrow(U \cap V) \times \mathbb{R}^{m} . $$ But $T_{p} \psi \circ\left(T_{p} \phi\right)^{-1}=T_{\phi(p)}\left(\psi \circ \phi^{-1}\right)$ is just the Jacobian for the change of coordinates $\psi \circ \phi^{-1}$, and as such depends smoothly on $x=\phi(p)$. Definition 6.5. A (smooth) section of a vector bundle $\pi: E \rightarrow M$ is a smooth map $\sigma: M \rightarrow E$ with the property $\pi \circ \sigma=\operatorname{id}_{M}$. The space of sections of $E$ is denoted $\Gamma^{\infty}(M, E)$. Thus, a section is a family of vectors $\sigma_{p} \in E_{p}$ depending smoothly on $p$. Examples 6.6. (a) Every vector bundle has a distinguished section, the zero section $p \mapsto \sigma_{p}=0$. (b) A section of the trivial bundle $M \times \mathbb{R}^{k}$ is the same thing as a smooth function from $M$ to $\mathbb{R}^{k}$. In particular, if $\psi: E_{U} \rightarrow U \times \mathbb{R}^{k}$ is a local trivialization of a vector bundle $E$, a section $\sigma$ (restricted to $U$) becomes a smooth function $U \rightarrow \mathbb{R}^{k}$, given by the second component of $\left.\psi \circ \sigma\right|_{U}$. (c) Let $\pi: E \rightarrow M$ be a rank $k$ vector bundle.
A frame for $E$ over $U \subset M$ is a collection of sections $\sigma_{1}, \ldots, \sigma_{k}$ of $E_{U}$, such that $\left(\sigma_{j}\right)_{p}$ are linearly independent at each point $p \in U$. Any frame over $U$ defines a local trivialization $\psi: E_{U} \rightarrow U \times \mathbb{R}^{k}$, given in terms of its inverse map $\psi^{-1}(p, a)=\sum_{j} a_{j}\left(\sigma_{j}\right)_{p}$. Conversely, each local trivialization gives rise to a frame. The space $\Gamma^{\infty}(M, E)$ is a vector space under pointwise addition: $\left(\sigma_{1}+\sigma_{2}\right)_{p}=\left(\sigma_{1}\right)_{p}+\left(\sigma_{2}\right)_{p}$. Moreover, it is a $C^{\infty}(M)$-module under multiplication${ }^{4}$: $(f \sigma)_{p}=f_{p} \sigma_{p}$. Definition 6.7. A section of the tangent bundle $T M$ is called a vector field on $M$. The space of vector fields is denoted $$ \mathfrak{X}(M)=\Gamma^{\infty}(M, T M) . $$ Thus, a vector field $X \in \mathfrak{X}(M)$ is a family of tangent vectors $X_{p} \in T_{p} M$ depending smoothly on the base point. In the next section, we will discuss the space of vector fields in more detail. Problems 6.8. 1. Let $S \subset M$ be an embedded submanifold. Show that for any vector bundle $\pi: E \rightarrow M$, the restriction $\left.E\right|_{S} \rightarrow S$ is a vector bundle over $S$. In particular, $\left.T M\right|_{S}$ is defined; its sections are called "vector fields along $S$". The bundle $\left.T M\right|_{S}$ contains the tangent bundle $T S$ as a sub-bundle: For all $p \in S, T_{p} S$ is a vector subspace of $T_{p} M$. The normal bundle of $S$ in $M$ is defined as a "quotient bundle" $\nu_{S}=\left.T M\right|_{S} / T S$ with fibers, $$ \left(\nu_{S}\right)_{p}=T_{p} M / T_{p} S $$ Show that this is again a vector bundle.

${ }^{4}$ Here and from now on, we will often write $f_{p}$ or $\left.f\right|_{p}$ for the value $f(p)$.

2. Let $F: M \rightarrow N$ be a smooth map, and $\pi: E \rightarrow N$ a vector bundle.
Show that $$ F^{*} E:=\cup_{p \in M} E_{F(p)} $$ is a vector bundle over $M$. It is called the pull-back bundle. Sections of $F^{*}(T N)$ are called vector fields along (the map) $F$. For instance, if $\gamma: J \rightarrow M$ is a smooth curve, the set of velocity vectors $\dot{\gamma}(t)$ becomes a vector field along $\gamma$. 3. Let $E, E^{\prime}$ be two vector bundles over $M$. Show that $$ E \oplus E^{\prime}:=\cup_{p \in M} E_{p} \oplus E_{p}^{\prime} $$ is again a vector bundle over $M$. It is called the direct sum (or Whitney sum) of $E$ and $E^{\prime}$. For instance, the direct sum of the two natural bundles $E, E^{\prime}$ over the Grassmannian has fibers $E_{p} \oplus E_{p}^{\prime}=\mathbb{R}^{n}$, hence $E \oplus E^{\prime}$ is the trivial bundle $\operatorname{Gr}(k, n) \times \mathbb{R}^{n}$. 4. Show that for any vector bundle $E \rightarrow M$, $$ E^{*}=\cup_{p \in M} E_{p}^{*} $$ (where $E_{p}^{*}=L\left(E_{p}, \mathbb{R}\right)$ is the dual space to $E_{p}$ ) is again a vector bundle. It is called the dual bundle to $E$. In particular, one defines $T^{*} M:=(T M)^{*}$, called the cotangent bundle. The sections of $T^{*} M$ are called covector fields or "1-forms".

## Vector fields as derivations

Let $X \in \mathfrak{X}(M)$ be a vector field on $M$. Each $X_{p} \in T_{p} M$ defines a linear map $X_{p}: C^{\infty}(M) \rightarrow \mathbb{R}$. Letting $p$ vary, this gives a linear map $$ X: C^{\infty}(M) \rightarrow C^{\infty}(M),(X(f))_{p}=X_{p}(f) . $$ Note that the right hand side really does define a smooth function on $M$. Indeed, this follows from the expression in local coordinates $(U, \phi)$. Let $a \in C^{\infty}\left(\phi(U), \mathbb{R}^{m}\right)$ be the expression of $X$ in the local trivialization, that is, $(T \phi)\left(X_{p}\right)=(\phi(p), a(\phi(p)))$.
Thus $$ a(\phi(p))=\left(a_{1}(\phi(p)), \ldots, a_{m}(\phi(p))\right) $$ are simply the components of $X_{p}$ in the coordinate chart $(U, \phi)$ : $$ X_{p}(f)=\left.\sum_{i=1}^{m} a_{i}(\phi(p)) \frac{\partial}{\partial x_{i}}\right|_{\phi(p)}\left(f \circ \phi^{-1}\right) $$ for $p \in U$. The formula shows that $$ \left.X(f)\right|_{U} \circ \phi^{-1}=\sum_{i=1}^{m} a_{i} \frac{\partial}{\partial x_{i}}\left(f \circ \phi^{-1}\right) . $$ That is, in local coordinates $X$ is represented by the vector field $$ \sum_{i=1}^{m} a_{i} \frac{\partial}{\partial x_{i}}: C^{\infty}(\phi(U)) \rightarrow C^{\infty}(\phi(U)) . $$ Theorem 7.1. A linear map $X: C^{\infty}(M) \rightarrow C^{\infty}(M)$ is a vector field if and only if it is a derivation of the algebra $C^{\infty}(M)$ : That is, $$ X\left(f_{1} f_{2}\right)=f_{2} X\left(f_{1}\right)+f_{1} X\left(f_{2}\right) $$ for all $f_{1}, f_{2} \in C^{\infty}(M)$. Proof. For all $p \in M$, $X$ defines a tangent vector $X_{p}$ by $X_{p}(f)=X(f)_{p}$. We have to show that $p \mapsto X_{p}$ defines a smooth section of $T M$. Choosing local coordinates $(U, \phi)$ around $p$, taking $p$ to $x=\phi(p)$, the tangent vector $X_{p}$ is represented by $a(x)=\left(a_{1}(x), \ldots, a_{m}(x)\right) \in \mathbb{R}^{m}$. That is, $$ X_{p}(f)=\left.\sum_{j=1}^{m} a_{j}(x) \frac{\partial}{\partial x_{j}}\right|_{x}\left(f \circ \phi^{-1}\right) . $$ Taking for $f$ any function $f_{j}$ with $f_{j} \circ \phi^{-1}(x)=x_{j}$ on some open neighborhood $V \subset \phi(U)$ of the given point $\phi(p)$, we see that $$ a_{j}=X\left(f_{j}\right) \circ \phi^{-1} $$ on $V$. Since $X\left(f_{j}\right) \in C^{\infty}(M)$, it follows that $a_{j}$ is smooth. This proves that $p \mapsto X_{p}$ is a smooth section of $T M$ over $U$. If $X, Y$ are vector fields (viewed as linear maps $C^{\infty}(M) \rightarrow C^{\infty}(M)$ ), the composition $X \circ Y$ is not a vector field.
However, the Lie bracket (commutator) $$ [X, Y]:=X \circ Y-Y \circ X $$ is a vector field. Indeed, it is easily checked that the right hand side defines a derivation. Alternatively, the calculation can be carried out in local coordinates $(U, \phi)$ : One finds that if $\left.X\right|_{U}$ is represented by $\sum_{i=1}^{m} a_{i} \frac{\partial}{\partial x_{i}}$ and $\left.Y\right|_{U}$ by $\sum_{i=1}^{m} b_{i} \frac{\partial}{\partial x_{i}}$, then $\left.[X, Y]\right|_{U}$ is represented by $$ \sum_{i=1}^{m} \sum_{j=1}^{m}\left(a_{j} \frac{\partial b_{i}}{\partial x_{j}}-b_{j} \frac{\partial a_{i}}{\partial x_{j}}\right) \frac{\partial}{\partial x_{i}} . $$ Let $F \in C^{\infty}(M, N)$ be a smooth map. Then the collection of tangent maps $T_{p} F: T_{p} M \rightarrow T_{F(p)} N$ defines a map $T F: T M \rightarrow T N$ which is easily seen to be smooth. The map $T F$ is an example of a vector bundle map: It takes fibers to fibers, and the restriction to each fiber is a linear map. For instance, local trivializations $\psi:\left.E\right|_{U} \rightarrow U \times \mathbb{R}^{k}$ are vector bundle maps. Definition 7.2. Let $F \in C^{\infty}(M, N)$. Two vector fields $X \in \mathfrak{X}(M)$ and $Y \in \mathfrak{X}(N)$ are called $F$-related if for all $p \in M, T_{p} F\left(X_{p}\right)=Y_{F(p)}$. One writes $X \sim_{F} Y$. For example, if $S \subset M$ is an embedded submanifold and $\iota: S \rightarrow M$ is the inclusion, vector fields $X$ on $S$ and $Y$ on $M$ are $\iota$-related if and only if $Y$ is tangent to $S$, and $X$ is its restriction. Theorem 7.3. a) One has $X \sim_{F} Y$ if and only if for all $f \in C^{\infty}(N)$, $X(f \circ F)=Y(f) \circ F$. b) If $X_{1} \sim_{F} Y_{1}$ and $X_{2} \sim_{F} Y_{2}$ then $\left[X_{1}, X_{2}\right] \sim_{F}\left[Y_{1}, Y_{2}\right]$. Proof. At any $p \in M$, the condition $X(f \circ F)=Y(f) \circ F$, evaluated at $p$, says that $$ \left(T_{p} F\left(X_{p}\right)\right)(f)=Y(f)_{F(p)}=Y_{F(p)}(f) . $$ This proves (a).
Part (b) follows from (a): $$ \begin{aligned} {\left[X_{1}, X_{2}\right](f \circ F) } & =X_{1}\left(X_{2}(f \circ F)\right)-X_{2}\left(X_{1}(f \circ F)\right) \\ & =X_{1}\left(Y_{2}(f) \circ F\right)-X_{2}\left(Y_{1}(f) \circ F\right) \\ & =Y_{1}\left(Y_{2}(f)\right) \circ F-Y_{2}\left(Y_{1}(f)\right) \circ F \\ & =\left[Y_{1}, Y_{2}\right](f) \circ F . \end{aligned} $$ Part (b) shows, for instance, that if two vector fields are tangent to a submanifold $S \subset M$ then their bracket is again tangent to $S$. (Alternatively, one can see this in coordinates, using submanifold charts for $S$.) Problems 7.4. 1. Give an example of vector fields $X, Y \in \mathfrak{X}\left(\mathbb{R}^{3}\right)$ such that $X, Y,[X, Y]$ are linearly independent at every point $p \in \mathbb{R}^{3}$. Thus, there is no 2-dimensional submanifold $S$ with the property that $X, Y$ are tangent to $S$ everywhere. 2. For any $n$, give an example of vector fields $X, Y$ on $\mathbb{R}^{n}$ such that $X, Y$, together with iterated Lie brackets $[X, Y],[[X, Y], Y], \ldots$, span $T_{p} \mathbb{R}^{n}=\mathbb{R}^{n}$ at every point.

## Flows of vector fields

Suppose $X$ is a vector field on a manifold $M$. A smooth curve $\gamma \in C^{\infty}(J, M)$, where $J$ is an open neighborhood of $0 \in \mathbb{R}$, is called a solution curve to $X$ if for all $t \in J$, $$ \dot{\gamma}(t)=X_{\gamma(t)} $$ In local coordinates $(U, \phi)$ around a given point $p=\gamma\left(t_{0}\right)$, write $$ \phi \circ \gamma(t)=x(t)=\left(x_{1}(t), \ldots, x_{n}(t)\right) $$ (defined for $t$ sufficiently close to $t_{0}$ ). Then $$ \dot{\gamma}(t)(f)=\frac{d}{d t} f(\gamma(t))=\frac{d}{d t}\left(f \circ \phi^{-1}\right)(x(t))=\left.\sum_{i} \frac{d x_{i}}{d t} \frac{\partial}{\partial x_{i}}\left(f \circ \phi^{-1}\right)\right|_{x(t)} .
$$ Let $\sum_{i} a_{i} \frac{\partial}{\partial x_{i}}$ represent $X$ in the chart, that is $$ X_{\gamma(t)}(f)=\left.\sum_{i} a_{i}(x(t)) \frac{\partial}{\partial x_{i}}\left(f \circ \phi^{-1}\right)\right|_{x(t)} . $$ Hence, the equation for a solution curve corresponds to the following equation in local coordinates: $$ \dot{x}_{i}=a_{i}(x(t)) $$ for $i=1, \ldots, n$. This is a first order system of ordinary differential equations (ODE's). One of the main results from the theory of ODE's reads: Theorem 8.1 (Existence and uniqueness theorem for ODE's). Let $V \subset \mathbb{R}^{m}$ be an open subset, and $a \in C^{\infty}\left(V, \mathbb{R}^{m}\right)$. For any given $x_{0} \in V$, there exists an open interval $J \subset \mathbb{R}$ around 0 , and a solution $x: J \rightarrow V$ of the ODE $$ \frac{d x_{i}}{d t}=a_{i}(x(t)) $$ with initial condition $x(0)=x_{0}$. In fact, there is a unique maximal solution of this initial value problem, defined on some interval $\mathcal{J}_{x_{0}}$, such that any other solution is obtained by restriction to a subinterval of $\mathcal{J}_{x_{0}}$. The solution depends smoothly on initial conditions, in the following sense: Theorem 8.2 (Dependence on initial conditions for ODE's). Let $a \in C^{\infty}\left(V, \mathbb{R}^{m}\right)$ be as above. For any $x_{0} \in V$, let $\gamma_{x_{0}}: \mathcal{J}_{x_{0}} \rightarrow V$ be the maximal solution with initial value $\gamma_{x_{0}}(0)=x_{0}$, and set $\Phi\left(t, x_{0}\right):=\gamma_{x_{0}}(t)$. Let $$ \mathcal{J}=\bigcup_{x_{0} \in V} \mathcal{J}_{x_{0}} \times\left\{x_{0}\right\} \subset \mathbb{R} \times V . $$ Then $\mathcal{J}$ is an open neighborhood of $\{0\} \times V$, and the map $\Phi: \mathcal{J} \rightarrow V$ is smooth. Example 8.3. If $V=(0,1) \subset \mathbb{R}$ and $a(x)=1$, the solution curves to $\dot{x}=a(x(t))=1$ with initial condition $x_{0} \in V$ are $x(t)=x_{0}+t$, defined for $-x_{0}<t<1-x_{0}$.
Thus $$ \mathcal{J}=\{(t, x) \mid 0<t+x<1\}, \quad \Phi(t, x)=t+x $$ in this case. One can construct a similar example with $V=\mathbb{R}$, where solution curves escape to infinity in finite time: For instance, $a(x)=x^{2}$ has solution curves $x(t)=-\frac{1}{t-c}$, which escape to infinity as $t \rightarrow c$. Similarly, $a(x)=1+x^{2}$ has solution curves $x(t)=\tan (t-c)$, which reach infinity as $t \rightarrow c \pm \frac{\pi}{2}$. If $a=\left(a_{1}, \ldots, a_{m}\right): \phi(U) \rightarrow \mathbb{R}^{m}$ corresponds to $X$ in a local chart $(U, \phi)$, then any solution curve $x: J \rightarrow \phi(U)$ for $a$ defines a solution curve $\gamma(t)=\phi^{-1}(x(t))$ for $X$. The existence and uniqueness theorem for ODE's extends to manifolds, as follows: Theorem 8.4. Let $X \in \mathfrak{X}(M)$ be a vector field on a manifold $M$. For any given $p \in M$, there exists a solution curve $\gamma: J \rightarrow M$ of $X$ with initial condition $\gamma(0)=p$. Any two solutions to the same initial value problem agree on their common domain of definition. Proof. Existence and uniqueness of solutions for small times $t$ follows from the existence and uniqueness theorem for ODE's, by considering the vector field in local charts. To prove uniqueness even for large times $t$, let $\gamma_{1}: J_{1} \rightarrow M$ and $\gamma_{2}: J_{2} \rightarrow M$ be two solutions to the IVP. We have to show that $\gamma_{1}=\gamma_{2}$ on $J_{1} \cap J_{2}$. Suppose not. Let $b>0$ be the infimum of all $t \in J_{1} \cap J_{2}$ with $\gamma_{1}(t) \neq \gamma_{2}(t)$. If $\gamma_{1}(b)=\gamma_{2}(b)$, the uniqueness part for solutions of ODE's, in a chart around $\gamma_{j}(b)$, would show that the $\gamma_{j}(t)$ coincide for $|t-b|$ sufficiently small, contradicting the definition of $b$. Hence $\gamma_{1}(b) \neq \gamma_{2}(b)$. But then we can choose disjoint open neighborhoods $U_{j}$ of $\gamma_{j}(b)$. For $|t-b|$ sufficiently small, $\gamma_{j}(t) \in U_{j}$.
In particular, $\gamma_{1}(t) \neq \gamma_{2}(t)$ for small $|t-b|$, again in contradiction to the definition of $b$. Note that the uniqueness part uses the Hausdorff property in the definition of manifolds; indeed, uniqueness may fail for non-Hausdorff manifolds. Example 8.5. A counter-example is the non-Hausdorff manifold $Y=(\mathbb{R} \times\{1\} \cup \mathbb{R} \times\{-1\}) / \sim$, where $\sim$ glues two copies of the real line along the strictly negative real axis. Let $U_{\pm}$ denote the charts obtained as images of $\mathbb{R} \times\{\pm 1\}$. Let $X$ be the vector field on $Y$, given by $\frac{\partial}{\partial x}$ in both charts. It is well-defined, since the transition map is just the identity map. Then $\gamma_{+}(t)=\pi(t, 1)$ and $\gamma_{-}(t)=\pi(t,-1)$ are both solution curves, and they agree for negative $t$ but not for positive $t$. Theorem 8.6. Let $X \in \mathfrak{X}(M)$ be a vector field on a manifold $M$. For each $p \in M$, let $\gamma_{p}: \mathcal{J}_{p} \rightarrow M$ be the maximal solution curve with initial value $\gamma_{p}(0)=p$. Let $\mathcal{J}=\bigcup_{p \in M} \mathcal{J}_{p} \times\{p\}$, and let $$ \Phi: \mathcal{J} \rightarrow M, \quad \Phi(t, p) \equiv \Phi_{t}(p):=\gamma_{p}(t) $$ Then $\mathcal{J}$ is an open neighborhood of $\{0\} \times M$ in $\mathbb{R} \times M$, and the map $\Phi$ is smooth. If $\left(t_{1}, \Phi_{t_{2}}(p)\right),\left(t_{2}, p\right) \in \mathcal{J}$ then also $\left(t_{1}+t_{2}, p\right) \in \mathcal{J}$, and one has $$ \Phi_{t_{1}}\left(\Phi_{t_{2}}(p)\right)=\Phi_{t_{1}+t_{2}}(p) $$ The map $\Phi$ is called the flow of the vector field $X$. Proof. Define $\mathcal{J}=\cup_{p \in M} \mathcal{J}_{p} \times\{p\}$ where $\mathcal{J}_{p}$ is the largest interval around 0 for which there is a solution curve $\gamma_{p}(t)$ with initial value $\gamma_{p}(0)=p$. Let $\Phi(t, p):=\gamma_{p}(t)$.
We first establish the property $\Phi_{t_{1}}\left(\Phi_{t_{2}}(p)\right)=\Phi_{t_{1}+t_{2}}(p)$. Given $\left(t_{2}, p\right) \in \mathcal{J}$, consider the two curves $$ \gamma(t)=\Phi_{t}\left(\Phi_{t_{2}}(p)\right), \quad \lambda(t)=\Phi_{t+t_{2}}(p) . $$ By definition of $\Phi$, the curve $\gamma$ is a solution curve with initial value $\gamma(0)=\Phi_{t_{2}}(p)$, defined for the set of all $t$ with $\left(t, \Phi_{t_{2}}(p)\right) \in \mathcal{J}$. We claim that $\lambda$ is also a solution curve, hence coincides with $\gamma$ by uniqueness of solution curves. We calculate $$ \begin{aligned} \frac{d}{d t} \lambda(t) & =\frac{d}{d t} \Phi_{t+t_{2}}(p) \\ & =\left.\frac{d}{d u}\right|_{u=t+t_{2}} \Phi_{u}(p) \\ & =\left.X_{\Phi_{u}(p)}\right|_{u=t+t_{2}}=X_{\lambda(t)} \end{aligned} $$ It remains to show that $\mathcal{J}$ is open and $\Phi$ is smooth. We will use the property $\Phi_{t_{1}+t_{2}}=\Phi_{t_{2}} \circ \Phi_{t_{1}}$ of the flow to write the flow for large times $t$ as a composition of flows for small times, where we can use the result for ODE's in local charts. Let $(t, p) \in \mathcal{J}$, say $t>0$. Since the interval $[0, t]$ is compact, we can choose $t_{i}>0$ with $t=t_{1}+t_{2}+\ldots+t_{N}$ such that the curve $\Phi_{s}(p)$ stays in a fixed coordinate chart $V_{0}$ for $0 \leq s \leq t_{1}$, the curve $\Phi_{s}\left(\Phi_{t_{1}}(p)\right)$ stays in a fixed coordinate chart $V_{1}$ for $0 \leq s \leq t_{2}$, and so on. Also, let $\epsilon>0$ be sufficiently small, such that $\Phi_{s}\left(\Phi_{t}(p)\right)$ is defined and stays in $V_{N}:=V_{N-1}$ for $-\epsilon \leq s \leq \epsilon$. Inductively define $p_{0}, \ldots, p_{N}$ by letting $p_{0}=p$ and $p_{k+1}=\Phi_{t_{k+1}}\left(p_{k}\right)$. Thus $p_{N}=\Phi_{t}(p)$.
Choose open neighborhoods $U_{k}$ of $p_{k}$, with the property that $$ \begin{array}{lll} \overline{\Phi_{s}\left(U_{0}\right)} \subset V_{0} & \text { for } & 0 \leq s \leq t_{1} \\ \overline{\Phi_{s}\left(U_{1}\right)} \subset V_{1} & \text { for } & 0 \leq s \leq t_{2} \end{array} $$ and so on, and $\Phi_{s}\left(U_{N}\right) \subset V_{N}$ for $-\epsilon<s<\epsilon$. Let $U$ be the set of all points $q \in M$ such that $$ q \in U_{0}, \Phi_{t_{1}}(q) \in U_{1}, \Phi_{t_{1}+t_{2}}(q) \in U_{2}, \ldots, \Phi_{t}(q) \in U_{N} $$ Then $U$ is an open neighborhood of $p$. The composition $\Phi_{s+t}=\Phi_{s} \circ \Phi_{t_{N}} \circ \cdots \circ \Phi_{t_{1}}$ is well-defined on $U$, for all $-\epsilon<s<\epsilon$. Thus $$ (t-\epsilon, t+\epsilon) \times U \subset \mathcal{J} . $$ The map $\Phi$, restricted to this set, is smooth, since it is written as a composition of smooth maps: $$ \Phi(t+s, \cdot)=\Phi\left(s, \Phi_{t_{N}} \circ \cdots \circ \Phi_{t_{1}}(\cdot)\right) $$ Let $X$ be a vector field, and $\mathcal{J}=\mathcal{J}^{X}$ be the domain of definition for the flow $\Phi=\Phi^{X}$. Definition 8.7. A vector field $X \in \mathfrak{X}(M)$ is called complete if $\mathcal{J}^{X}=\mathbb{R} \times M$. Thus $X$ is complete if and only if all solution curves exist for all time. Above, we had seen some examples of incomplete vector fields on $M=\mathbb{R}$. In these examples, the vector field increases "too fast towards infinity". Conversely, we expect that vector fields $X$ are complete if they vanish outside a compact set. This is indeed the case. The support $\operatorname{supp}(X)$ is defined to be the smallest closed subset outside of which $X$ is zero. That is, $$ \operatorname{supp}(X)=\overline{\left\{p \in M \mid X_{p} \neq 0\right\}} . $$ Proposition 8.8. Every vector field of compact support is complete. In particular, this is the case if $M$ is compact. Proof.
By compactness, there exists $\epsilon>0$ such that the flow for any point $p$ exists for times $|t| \leq \epsilon$. (For points $p$ outside the compact set $\operatorname{supp}(X)$, the constant curve $\gamma_{p}(t)=p$ is a solution for all time.) But this implies that any integral curve can be extended indefinitely: if $\gamma_{p}$ is defined for times less than $T$, choose $t<T$ with $T-t<\epsilon$; then $s \mapsto \Phi_{s}\left(\gamma_{p}(t)\right)$ extends $\gamma_{p}$ beyond $T$. Theorem 8.9. If $X$ is a complete vector field, the flow $\Phi_{t}$ defines a 1-parameter group of diffeomorphisms. That is, each $\Phi_{t}$ is a diffeomorphism and $$ \Phi_{0}=\operatorname{id}_{M}, \quad \Phi_{t_{1}} \circ \Phi_{t_{2}}=\Phi_{t_{1}+t_{2}} . $$ Conversely, if $\Phi_{t}$ is a 1-parameter group of diffeomorphisms such that the map $(t, p) \mapsto \Phi_{t}(p)$ is smooth, the equation $$ X_{p}(f)=\left.\frac{d}{d t}\right|_{t=0} f\left(\Phi_{t}(p)\right) $$ defines a smooth vector field on $M$, with flow $\Phi_{t}$. Proof. It remains to show the second statement. Clearly, $X_{p}$ is a tangent vector at $p \in M$. Using local coordinates, one can show that $X_{p}$ depends smoothly on $p$, hence it defines a vector field. Given $p \in M$ we have to show that $\gamma(t)=\Phi_{t}(p)$ is an integral curve of $X$. Indeed, $$ \frac{d}{d t} \Phi_{t}(p)=\left.\frac{d}{d s}\right|_{s=0} \Phi_{t+s}(p)=\left.\frac{d}{d s}\right|_{s=0} \Phi_{s}\left(\Phi_{t}(p)\right)=X_{\Phi_{t}(p)} . $$ By a similar argument, one establishes the identity $$ \frac{d}{d t} \Phi_{t}^{*}(f)=\Phi_{t}^{*}\left(\left.\frac{d}{d s}\right|_{s=0} \Phi_{s}^{*}(f)\right)=\Phi_{t}^{*} X(f) $$ which we will use later on. In fact, this identity may be viewed as a definition of the flow. Example 8.10. Let $X$ be a complete vector field, with flow $\Phi_{t}$.
For each $t \in \mathbb{R}$, the tangent map $T \Phi_{t}: T M \rightarrow T M$ has the flow property, $$ T \Phi_{t_{1}} \circ T \Phi_{t_{2}}=T\left(\Phi_{t_{1}} \circ \Phi_{t_{2}}\right)=T\left(\Phi_{t_{1}+t_{2}}\right), $$ and the map $\mathbb{R} \times T M \rightarrow T M$, $(t, v) \mapsto T \Phi_{t}(v)$ is smooth (since it is just the restriction of the map $T \Phi: T(\mathbb{R} \times M) \rightarrow T M$ to the submanifold $\mathbb{R} \times T M$). Hence, $T \Phi_{t}$ is a flow on $T M$, and therefore corresponds to a vector field $\widehat{X} \in \mathfrak{X}(T M)$. This is called the tangent lift of $X$. Example 8.11. Given $A \in \operatorname{Mat}_{m}(\mathbb{R})$ let $\Phi_{t}: \mathbb{R}^{m} \rightarrow \mathbb{R}^{m}$ be multiplication by the matrix $e^{t A}=\sum_{j} \frac{t^{j}}{j !} A^{j}$ (exponential map of matrices). Since $e^{\left(t_{1}+t_{2}\right) A}=e^{t_{1} A} e^{t_{2} A}$, and since $(t, x) \mapsto e^{t A} x$ is a smooth map, $\Phi_{t}$ defines a flow. What is the corresponding vector field $X$? For any function $f \in C^{\infty}\left(\mathbb{R}^{m}\right)$ we calculate, $$ \begin{aligned} X_{x}(f) & =\left.\frac{d}{d t}\right|_{t=0} f\left(e^{t A} x\right) \\ & =\sum_{j} \frac{\partial f}{\partial x_{j}}(A x)_{j} \\ & =\sum_{j k} A_{j k} x_{k} \frac{\partial f}{\partial x_{j}} \end{aligned} $$ showing that $X=\sum_{j k} A_{j k} x_{k} \frac{\partial}{\partial x_{j}}$. Problems 8.12. 1. Let $X \in \mathfrak{X}(N), Y \in \mathfrak{X}(M)$ be vector fields and $F \in C^{\infty}(N, M)$ a smooth map. Show that $X \sim_{F} Y$ if and only if $F$ intertwines the flows $\Phi_{t}^{X}, \Phi_{t}^{Y}$: That is, $$ F \circ \Phi_{t}^{X}=\Phi_{t}^{Y} \circ F $$ 2. Let $X$ be a vector field on $U \subset M$, given in local coordinates by $\sum_{i} a_{i} \frac{\partial}{\partial x_{i}}$. Let $\left(x_{1}, \ldots, x_{m}, v_{1}, \ldots, v_{m}\right)$ be the corresponding coordinates on $T U \subset T M$.
Show that the tangent lift $\hat{X}$ is given by $$ \sum_{i} a_{i} \frac{\partial}{\partial x_{i}}+\sum_{i j} \frac{\partial a_{i}}{\partial x_{j}} v_{j} \frac{\partial}{\partial v_{i}} $$ 3. Show that for any vector field $X \in \mathfrak{X}(M)$ and any $x \in M$ with $X_{x} \neq 0$, there exists a local chart around $x$ in which $X$ is given by the constant vector field $\frac{\partial}{\partial x_{1}}$. Hint: Show that if $S$ is an embedded codimension 1 submanifold, with $x \in S$ and $X_{x} \notin T_{x} S$, the map $U \times(-\epsilon, \epsilon) \rightarrow M$, $(q, t) \mapsto \Phi_{t}(q)$, is a diffeomorphism onto its image, for some open neighborhood $U$ of $x$ in $S$. Use the time parameter $t$ and a chart around $x \in U$ to define a chart near $x$. ## Geometric interpretation of the Lie bracket If $f \in C^{\infty}(N)$ and $F \in C^{\infty}(M, N)$ we define the pull-back $F^{*}(f)=f \circ F \in C^{\infty}(M)$. Thus pull-back is a linear map, $$ F^{*}: C^{\infty}(N) \rightarrow C^{\infty}(M) $$ Using pull-backs, the definition of a tangent map reads $$ T_{p} F(v)=v \circ F^{*}: C^{\infty}(N) \rightarrow \mathbb{R} . $$ For instance, the definition of $F$-related vector fields $X \sim_{F} Y$ can be re-phrased as $X \circ F^{*}=F^{*} \circ Y$. For any vector field $X \in \mathfrak{X}(N)$ and any diffeomorphism $F \in C^{\infty}(M, N)$, we define $F^{*} X \in \mathfrak{X}(M)$ by $$ F^{*} X\left(F^{*} f\right)=F^{*}(X(f)) $$ That is, $$ F^{*} X=F^{*} \circ X \circ\left(F^{*}\right)^{-1} . $$ Lemma 9.1. If $X, Y$ are vector fields on $N$, then $F^{*}[X, Y]=\left[F^{*} X, F^{*} Y\right]$. Any complete vector field $X \in \mathfrak{X}(M)$ with flow $\Phi_{t}$ gives rise to a family of maps $\Phi_{t}^{*}: \mathfrak{X}(M) \rightarrow \mathfrak{X}(M)$. One defines the Lie derivative $L_{X}$ of a vector field $Y \in \mathfrak{X}(M)$ by $$ L_{X}(Y)=\left.\frac{d}{d t}\right|_{t=0} \Phi_{t}^{*} Y \in \mathfrak{X}(M) .
$$ The definition of Lie derivative also works for incomplete vector fields, since the definition only involves derivatives at $t=0$. Theorem 9.2. For any $X, Y \in \mathfrak{X}(M)$, the Lie derivative $L_{X} Y$ is just the Lie bracket $[X, Y]$. One has the identity $$ \left[L_{X}, L_{Y}\right]=L_{[X, Y]} . $$ Proof. Let $\Phi_{t}=\Phi_{t}^{X}$ be the flow of $X$. For $f \in C^{\infty}(M)$ we calculate, $$ \begin{aligned} \left(L_{X} Y\right)(f) & =\left.\frac{d}{d t}\right|_{t=0}\left(\Phi_{t}^{*} Y\right)(f) \\ & =\left.\frac{d}{d t}\right|_{t=0} \Phi_{t}^{*}\left(Y\left(\Phi_{-t}^{*}(f)\right)\right) \\ & =\left.\frac{d}{d t}\right|_{t=0} \Phi_{t}^{*}(Y(f))-\left.\frac{d}{d t}\right|_{t=0} Y\left(\Phi_{t}^{*}(f)\right) \\ & =X(Y(f))-Y(X(f)) \\ & =[X, Y](f) . \end{aligned} $$ The identity $\left[L_{X}, L_{Y}\right]=L_{[X, Y]}$ just rephrases the Jacobi identity for the Lie bracket. Again, let $X$ be a complete vector field with flow $\Phi$. Let us work out the Taylor expansion of the map $\Phi_{t}^{*}$ at $t=0$. That is, for any function $f \in C^{\infty}(M)$, consider the Taylor expansion (pointwise, i.e. at any point of $M$) of the function $$ \Phi_{t}^{*} f=f \circ \Phi_{t} \in C^{\infty}(M) $$ around $t=0$. We have, $$ \frac{d}{d t} \Phi_{t}^{*} f=\left.\frac{d}{d s}\right|_{s=0} \Phi_{t+s}^{*} f=\left.\frac{d}{d s}\right|_{s=0} \Phi_{t}^{*} \Phi_{s}^{*} f=\Phi_{t}^{*} X(f) . $$ By induction, this shows $$ \frac{d^{k}}{d t^{k}} \Phi_{t}^{*} f=\Phi_{t}^{*} X^{k}(f) $$ where $X^{k}=X \circ \cdots \circ X$ ($k$ times). Hence, the Taylor expansion reads $$ \Phi_{t}^{*} f=\sum_{k=0}^{\infty} \frac{t^{k}}{k !} X^{k}(f) . $$ One often writes the right hand side as $\exp (t X)(f)$. Suppose now that $Y$ is another vector field, with flow $\Psi_{s}$. In general, $\Phi_{t} \circ \Psi_{s}$ need not equal $\Psi_{s} \circ \Phi_{t}$, that is, the flows need not commute. Let us compare the Taylor expansions of $\Phi_{t}^{*} \Psi_{s}^{*} f$ and $\Psi_{s}^{*} \Phi_{t}^{*} f$.
We have, in second order, $$ \begin{aligned} \Phi_{t}^{*} \Psi_{s}^{*} f & =\Phi_{t}^{*}\left(f+s Y(f)+\frac{s^{2}}{2} Y^{2}(f)+\cdots\right) \\ & =f+s Y(f)+\frac{s^{2}}{2} Y^{2}(f)+t X(f)+s t X(Y(f))+\frac{t^{2}}{2} X^{2}(f)+\cdots \end{aligned} $$ where the dots indicate cubic or higher terms in the expansion. Interchanging the roles of $X, Y$, and subtracting, we find, $$ \left(\Phi_{t}^{*} \Psi_{s}^{*}-\Psi_{s}^{*} \Phi_{t}^{*}\right) f=s t[X, Y](f)+\ldots $$ This shows that $[X, Y]$ measures the extent to which the flows fail to commute (up to second order in the Taylor expansion). In fact, Theorem 9.3. Let $X, Y$ be complete vector fields. Then $[X, Y]=0$ if and only if the flows of $X$ and $Y$ commute. Proof. Let $\Phi_{t}$ be the flow of $X$ and $\Psi_{s}$ the flow of $Y$. Suppose $[X, Y]=0$. Then $$ \frac{d}{d t}\left(\Phi_{t}\right)^{*} Y=\left(\Phi_{t}\right)^{*} L_{X} Y=\left(\Phi_{t}\right)^{*}[X, Y]=0 $$ for all $t$. Integrating from 0 to $t$, this shows $\left(\Phi_{t}\right)^{*} Y=Y$ for all $t$, which means that $Y$ is $\Phi_{t}$-related to itself. It follows that $\Phi_{t}$ takes the flow $\Psi_{s}$ of $Y$ to itself, which is just the desired equation $\Phi_{t} \circ \Psi_{s}=\Psi_{s} \circ \Phi_{t}$. Conversely, by differentiating the equation $\Phi_{t} \circ \Psi_{s}=\Psi_{s} \circ \Phi_{t}$ with respect to $s, t$, we find that $[X, Y]=0$. ## Lie groups and Lie algebras ### Definition of Lie groups. Definition 10.1. A Lie group is a group $G$, equipped with a manifold structure, such that group multiplication $\left(g_{1}, g_{2}\right) \mapsto g_{1} g_{2}$ is a smooth map $G \times G \rightarrow G$.
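Theorem 9.3 can be tested concretely for linear vector fields $X_{x}=A x$, $Y_{x}=B x$ on $\mathbb{R}^{n}$: by Example 8.11 their flows are the matrix exponentials $e^{t A}$, $e^{s B}$, and the flows commute exactly when the matrix commutator $A B-B A$ vanishes. A small numerical sketch (illustrative only; the truncated series for $e^{A}$ follows the definition in Example 8.11, and the specific matrices are chosen for the example):

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential e^A = sum_j A^j / j!, truncated series."""
    result, term = np.eye(len(A)), np.eye(len(A))
    for j in range(1, terms):
        term = term @ A / j
        result = result + term
    return result

t, s = 0.7, 1.3
A = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation generator
B = np.array([[2.0, 0.0], [0.0, 2.0]])   # scaling: [A, B] = 0
C = np.array([[0.0, 1.0], [0.0, 0.0]])   # shear generator: [A, C] != 0

# [A, B] = 0, and the flows e^{tA}, e^{sB} commute:
assert np.allclose(A @ B - B @ A, 0)
assert np.allclose(expm(t * A) @ expm(s * B), expm(s * B) @ expm(t * A))

# [A, C] != 0, and indeed the flows fail to commute:
assert not np.allclose(A @ C - C @ A, 0)
assert not np.allclose(expm(t * A) @ expm(s * C), expm(s * C) @ expm(t * A))
```

The same matrix exponential reappears below as the exponential map of matrix Lie groups.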
Examples of Lie groups include: The general linear group $\operatorname{GL}(n, \mathbb{R})$ (invertible matrices in $\operatorname{Mat}_{n}(\mathbb{R})$), the special linear group $\mathrm{SL}(n, \mathbb{R})$ (those with determinant 1), the orthogonal group $\mathrm{O}(n)$ and special orthogonal group $\mathrm{SO}(n)$, the unitary group $\mathrm{U}(n)$ and the special unitary group $\mathrm{SU}(n)$, and the complex general linear or special linear groups $\mathrm{GL}(n, \mathbb{C})$ and $\mathrm{SL}(n, \mathbb{C})$. An important (and not very easy) theorem of E. Cartan says that any subgroup $H$ of a Lie group $G$ that is closed as a subset of $G$ is in fact an embedded submanifold, and hence is a Lie group in its own right. Thanks to Cartan, we don't actually have to check in any of these examples of matrix groups that they are embedded submanifolds: It is automatic from the fact that they are groups and closed subsets. Most examples of Lie groups encountered in practice (for instance, all compact groups) are matrix Lie groups. (An example of a Lie group that is not isomorphic to a matrix Lie group is the double covering of $\mathrm{SL}(2, \mathbb{R})$.) Any $a \in G$ defines two maps $l_{a}, r_{a}: G \rightarrow G$ with $$ l_{a}(g)=a g, \quad r_{a}(g)=g a . $$ The maps $l_{a}, r_{a}$ are called left-translation and right-translation, respectively. They are diffeomorphisms of $G$, with inverse maps $l_{a^{-1}}$ and $r_{a^{-1}}$. Proposition 10.2. For any Lie group, inversion $g \mapsto g^{-1}$ is a smooth map (hence a diffeomorphism). Proof. Consider the map $F: G \times G \rightarrow G \times G, \quad(g, h) \mapsto(g, g h)$. We claim that $F$ is a diffeomorphism.
Once this is shown, smoothness of the inversion map follows since it can be written as a composition $$ G \longrightarrow G \times G \longrightarrow G \times G \longrightarrow G $$ where the first map is the inclusion $g \mapsto(g, e)$, the second map is $F^{-1}(g, h)=\left(g, g^{-1} h\right)$, and the last map is projection to the second factor. Clearly $F$ is a bijection, with inverse map $F^{-1}(a, b)=\left(a, a^{-1} b\right)$. To show that $F$ is a diffeomorphism, it suffices to show that all elements of $G \times G$ are regular values of $F$, i.e. that the tangent map is a bijection everywhere. ${ }^{5}$ Let us calculate the tangent map to $F$ at $(g, h) \in G \times G$. Suppose the path $\gamma(t)=\left(g_{t}, h_{t}\right)$ represents a vector $(v, w)$ in the tangent space, with $g_{0}=g$ and $h_{0}=h$. To calculate $$ T_{(g, h)} F(v, w)=T_{(g, h)} F\left(\left.\frac{d \gamma}{d t}\right|_{t=0}\right)=\left.\frac{d}{d t}\right|_{t=0} F(\gamma(t))=\left.\frac{d}{d t}\right|_{t=0}\left(g_{t}, g_{t} h_{t}\right) $$ we have to calculate the tangent vector to the curve $t \mapsto g_{t} h_{t} \in G$. We have $$ \begin{aligned} \left.\frac{d}{d t}\right|_{t=0}\left(g_{t} h_{t}\right) & =\left.\frac{d}{d t}\right|_{t=0}\left(g h_{t}\right)+\left.\frac{d}{d t}\right|_{t=0}\left(g_{t} h\right) \\ & =T_{h} l_{g}\left(\left.\frac{d}{d t}\right|_{t=0}\left(h_{t}\right)\right)+T_{g} r_{h}\left(\left.\frac{d}{d t}\right|_{t=0}\left(g_{t}\right)\right) \\ & =T_{h} l_{g}(w)+T_{g} r_{h}(v) . \end{aligned} $$ This shows $$ T_{(g, h)} F(v, w)=\left(v, T_{h} l_{g}(w)+T_{g} r_{h}(v)\right) $$ which is 1-1 (if $v=0$ and $T_{h} l_{g}(w)=0$ then $w=0$, since $l_{g}$ is a diffeomorphism), and therefore a bijection by dimension count. For matrix Lie groups, smoothness of the inversion map also follows from Cramer's rule for the inverse matrix. ### Definition of Lie algebras, the Lie algebra of a Lie group. Definition 10.3.
A Lie algebra is a vector space $\mathfrak{g}$, together with a bilinear map $[\cdot, \cdot]: \mathfrak{g} \times \mathfrak{g} \rightarrow \mathfrak{g}$ satisfying anti-symmetry $$ [\xi, \eta]=-[\eta, \xi] \text { for all } \xi, \eta \in \mathfrak{g} $$ and the Jacobi identity, $$ [\xi,[\eta, \zeta]]+[\eta,[\zeta, \xi]]+[\zeta,[\xi, \eta]]=0 \text { for all } \xi, \eta, \zeta \in \mathfrak{g} . $$ The map $[\cdot, \cdot]$ is called the Lie bracket. ${ }^{5}$ We are using the following corollary of the regular value theorem: If $F \in C^{\infty}(M, N)$ has bijective tangent map at every point $p \in M$, then $F$ restricts to a diffeomorphism from a neighborhood $U$ of $p$ onto $F(U)$. Thus, if $F$ is a bijection it must be a diffeomorphism. (Smooth bijections need not be diffeomorphisms in general; the map $F: \mathbb{R} \rightarrow \mathbb{R}, t \mapsto t^{3}$ is a counter-example.) Any associative algebra is a Lie algebra, with bracket the commutator. The space of vector fields $\mathfrak{X}(M)$ on a manifold is a Lie algebra, with bracket what we've already called the Lie bracket of vector fields. For any Lie group $G$, one defines a Lie algebra structure on the tangent space to the identity element, $\mathfrak{g}:=T_{e} G$, in the following way. Let $\mathfrak{X}(G)^{L}$ denote the space of left-invariant vector fields on $G$. Thus $X \in \mathfrak{X}(G)^{L}$ if and only if $l_{a}^{*}(X)=X$ for all $a \in G$. Evaluation at the identity element gives a linear map $$ \mathfrak{X}(G)^{L} \rightarrow \mathfrak{g}, X \mapsto \xi:=X_{e} $$ This map is an isomorphism: Given $\xi \in \mathfrak{g}$, one defines a left-invariant vector field $X$ by $X_{g}=T_{e} l_{g}(\xi)$. (Exercise: Check that $X$ is indeed smooth!) The Lie bracket of two left-invariant vector fields is again left-invariant: $$ l_{a}^{*}[X, Y]=\left[l_{a}^{*} X, l_{a}^{*} Y\right]=[X, Y] . $$ Thus $\mathfrak{X}(G)^{L}$ is a Lie subalgebra of the Lie algebra of all vector fields on $G$.
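As noted above, any associative algebra is a Lie algebra under the commutator. For matrices this is easy to test numerically; a quick sketch (illustrative only) checking the antisymmetry and Jacobi identity of Definition 10.3 for the commutator bracket on $3 \times 3$ matrices:

```python
import numpy as np

def bracket(X, Y):
    """Commutator bracket [X, Y] = XY - YX on square matrices."""
    return X @ Y - Y @ X

rng = np.random.default_rng(0)
xi, eta, zeta = (rng.standard_normal((3, 3)) for _ in range(3))

# Antisymmetry: [xi, eta] = -[eta, xi]
assert np.allclose(bracket(xi, eta), -bracket(eta, xi))

# Jacobi identity: [xi,[eta,zeta]] + [eta,[zeta,xi]] + [zeta,[xi,eta]] = 0
jacobi = (bracket(xi, bracket(eta, zeta))
          + bracket(eta, bracket(zeta, xi))
          + bracket(zeta, bracket(xi, eta)))
assert np.allclose(jacobi, 0)
```

Both assertions hold identically (not just numerically), since they follow from associativity of matrix multiplication by expanding the products.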
Using the isomorphism $\mathfrak{X}(G)^{L} \cong \mathfrak{g}$, this gives a Lie algebra structure on $\mathfrak{g}$. That is, if we denote by $X=\xi^{L}$ the left-invariant vector field on $G$ generated by $\xi$, we have, $$ \left[\xi^{L}, \eta^{L}\right]=[\xi, \eta]^{L} . $$ Problems 10.4. We defined the Lie bracket on $\mathfrak{g}=T_{e} G$ by its identification with left-invariant vector fields. A second Lie algebra structure on $\mathfrak{g}$ is defined by identifying $T_{e} G$ with the space of right-invariant vector fields. How are the two brackets related? (Answer: One has $\left[\xi^{R}, \eta^{R}\right]=-[\xi, \eta]^{R}$, so the two brackets differ by sign.) 10.3. Matrix Lie groups. Let $G=\mathrm{GL}(n, \mathbb{R})$. Since $\mathrm{GL}(n, \mathbb{R})$ is an open subset of the set $\operatorname{Mat}_{\mathbb{R}}(n)$ of $n \times n$-matrices, all tangent spaces are identified with $\operatorname{Mat}_{\mathbb{R}}(n)$ itself. In particular $\mathfrak{g}=\mathfrak{gl}(n, \mathbb{R}) \cong \operatorname{Mat}_{\mathbb{R}}(n)$. Let us confirm the obvious guess that the Lie bracket on $\mathfrak{g}$ is simply the commutator of matrices. The left-invariant vector field corresponding to $\xi \in \mathfrak{g}$ is $$ \xi_{g}^{L}=\left.\frac{d}{d t}\right|_{t=0}(g \exp (t \xi))=g \xi $$ (matrix multiplication). Its action on functions $f \in C^{\infty}(G)$ is, $$ \xi^{L}(f)_{g}=\left.\frac{d}{d t}\right|_{t=0} f(g \exp (t \xi))=\left.\sum_{i j}(g \xi)_{i j} \frac{\partial f}{\partial g_{i j}}\right|_{g} $$ Hence, $$ \begin{aligned} \xi^{L} \eta^{L}(f)_{g} & =\left.\left.\frac{d}{d t}\right|_{t=0} \sum_{i j}(g \exp (t \xi) \eta)_{i j} \frac{\partial f}{\partial g_{i j}}\right|_{g \exp (t \xi)} \\ & =\left.\sum_{i j}(g \xi \eta)_{i j} \frac{\partial f}{\partial g_{i j}}\right|_{g}+\ldots, \end{aligned} $$ where $\ldots$ involves second derivatives of the function $f$. (When we calculate Lie brackets, the second derivatives drop out, so we need not care about $\ldots$.)
We find, $$ \left(\xi^{L} \eta^{L}-\eta^{L} \xi^{L}\right)(f)_{g}=\left.\sum_{i j}(g(\xi \eta-\eta \xi))_{i j} \frac{\partial f}{\partial g_{i j}}\right|_{g} $$ Comparing to $$ [\xi, \eta]^{L}(f)_{g}=\left.\sum_{i j}(g([\xi, \eta]))_{i j} \frac{\partial f}{\partial g_{i j}}\right|_{g} $$ this confirms that the Lie bracket is indeed just the commutator. ${ }^{6}$ We obtain similar results for other matrix Lie groups: For instance, the Lie algebra of $\mathrm{O}(n)=\left\{A \mid A^{t} A=I\right\}$ is the space $$ \mathfrak{o}(n)=\left\{B \mid B+B^{t}=0\right\} $$ with bracket the commutator, while the Lie algebra of $\mathrm{SL}(n, \mathbb{R})$ is $$ \mathfrak{sl}(n, \mathbb{R})=\{B \mid \operatorname{tr}(B)=0\}, $$ with bracket the commutator. In all such cases, this follows from the result for the general linear group, once we observe that the exponential map for matrices takes $\mathfrak{g} \subset \mathfrak{gl}(n, \mathbb{R})$ to the corresponding subgroup $G \subset \mathrm{GL}(n, \mathbb{R})$. 10.4. The exponential map for Lie groups. There is an alternative characterization of the Lie algebra in terms of 1-parameter subgroups. A 1-parameter subgroup of a Lie group $G$ is a smooth group homomorphism $\phi: \mathbb{R} \rightarrow G$, that is, $\phi(0)=e$ and $\phi\left(t_{1}+t_{2}\right)=\phi\left(t_{1}\right) \phi\left(t_{2}\right)$. For any such $\phi$, the velocity vector at $t=0$ defines an element $\xi \in T_{e} G=\mathfrak{g}$. Let $\xi^{L}$ be the corresponding left-invariant vector field. Then $\phi(t)$ is an integral curve for $\xi^{L}$: $$ \frac{d}{d t} \phi(t)=\left.\frac{d}{d s}\right|_{s=0} \phi(t+s)=\left.\frac{d}{d s}\right|_{s=0} \phi(t) \phi(s)=T_{e} l_{\phi(t)}\left(\left.\frac{d}{d s}\right|_{s=0} \phi(s)\right)=T_{e} l_{\phi(t)} \xi=\xi_{\phi(t)}^{L} . $$ More generally, a similar calculation shows that for all $g \in G$, the curve $\gamma(t)=g \phi(t)$ is an integral curve through $g$.
That is, the flow of $\xi^{L}$ is $\Phi(t, g)=g \phi(t)$. Suppose conversely that $X$ is a left-invariant vector field. If $\gamma(t)$ is an integral curve, then so is its left translate $g \gamma(t)$ for any $g$. In particular, the maximal interval of existence is the same for all initial points; by the flow property this interval is closed under addition, hence equals all of $\mathbb{R}$. It follows that $X$ is complete and has a left-invariant flow. Let $\phi(t)=\Phi(t, e)$; then $\phi(t)$ is a 1-parameter subgroup, and $X=\xi^{L}$ for the corresponding $\xi \in \mathfrak{g}$. To summarize, elements of the Lie algebra are in 1-1 correspondence with 1-parameter subgroups. Let $\phi_{\xi}(t)$ denote the 1-parameter subgroup corresponding to $\xi \in \mathfrak{g}$. Definition 10.5. For any Lie group $G$, with Lie algebra $\mathfrak{g}$, one defines the exponential map $$ \exp : \mathfrak{g} \rightarrow G, \quad \exp (\xi):=\phi_{\xi}(1) . $$ Note that this generalizes the exponential map for matrices. Indeed, suppose $G \subseteq \operatorname{GL}(n, \mathbb{R})$ is a matrix Lie group, with Lie algebra $\mathfrak{g} \subseteq \mathfrak{gl}(n, \mathbb{R})$. Then the flow of the left-invariant vector field corresponding to $\xi \in \mathfrak{gl}(n, \mathbb{R})$ is just $\Phi_{t}(g)=g \exp (t \xi)$ (using the exponential map for matrices). Theorem 10.6. The exponential map is smooth, and defines a diffeomorphism from some open neighborhood $U$ of 0 to $\exp (U)$. Proof. We leave smoothness as an exercise. For the second part, it suffices to show that the tangent map at 0 is bijective. Since $\mathfrak{g}$ is a vector space, the tangent space at 0 is identified with $\mathfrak{g}$ itself. Note that $$ \phi_{t \xi}(1)=\phi_{\xi}(t), $$ since $s \mapsto \phi_{\xi}(t s)$ is the 1-parameter subgroup with velocity $t \xi$ at $s=0$. Hence $$ \left(T_{0} \exp \right)(\xi)=\left.\frac{d}{d t}\right|_{t=0} \exp (t \xi)=\left.\frac{d}{d t}\right|_{t=0} \phi_{\xi}(t)=\xi, $$ thus $T_{0} \exp$ is simply the identity map. ${ }^{6}$ This motivates why we used left-invariant vector fields in the definition of the Lie bracket: Otherwise we would have found minus the commutator at this point.
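The observation above, that the matrix exponential takes each Lie subalgebra $\mathfrak{g} \subset \mathfrak{gl}(n, \mathbb{R})$ into the corresponding subgroup $G \subset \mathrm{GL}(n, \mathbb{R})$, can be checked numerically for $\mathfrak{o}(3)$ and $\mathfrak{sl}(3, \mathbb{R})$. A sketch (illustrative only), using the truncated defining series for $e^{B}$ together with the standard identity $\operatorname{det} e^{B}=e^{\operatorname{tr} B}$:

```python
import numpy as np

def expm(B, terms=40):
    """Matrix exponential e^B = sum_j B^j / j!, truncated series."""
    result, term = np.eye(len(B)), np.eye(len(B))
    for j in range(1, terms):
        term = term @ B / j
        result = result + term
    return result

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))

# B in o(3), i.e. B + B^T = 0  =>  exp(B) lies in O(3):
B = M - M.T
g = expm(B)
assert np.allclose(g.T @ g, np.eye(3))

# B0 in sl(3,R), i.e. tr(B0) = 0  =>  det(exp(B0)) = e^{tr B0} = 1:
B0 = M - (np.trace(M) / 3) * np.eye(3)
assert np.isclose(np.linalg.det(expm(B0)), 1.0)
```

This is the numerical counterpart of the statement that the Lie algebras $\mathfrak{o}(n)$ and $\mathfrak{sl}(n, \mathbb{R})$ exponentiate into $\mathrm{O}(n)$ and $\mathrm{SL}(n, \mathbb{R})$, respectively.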
For matrix Lie groups, exp coincides with the exponential map for matrices (hence its name). 10.5. Group actions. Lie groups often arise as transformation groups, by some "action" on a manifold $M$. Definition 10.7. An action of a Lie group $G$ on a manifold $M$ is a group homomorphism $G \rightarrow \operatorname{Diff}(M), g \mapsto \Phi_{g}$ such that the action map $\Phi: G \times M \rightarrow M,(g, p) \mapsto \Phi_{g}(p)$ is smooth. Note that an action of $G=\mathbb{R}$ is the same thing as a flow. Every matrix Lie group $G \subset \operatorname{GL}(n, \mathbb{R})$ acts on $\mathbb{R}^{n}$ in the obvious way. Any Lie group $G$ acts on itself by multiplication from the left $g \mapsto l_{g}$, multiplication from the right $g \mapsto r_{g^{-1}}$, and also by the adjoint (=conjugation) action $g \mapsto l_{g} r_{g^{-1}}$. Definition 10.8. An action of a finite-dimensional Lie algebra $\mathfrak{g}$ on a manifold $M$ is a Lie algebra homomorphism $\mathfrak{g} \rightarrow \mathfrak{X}(M), \xi \mapsto \xi_{M}$ such that the action map $\mathfrak{g} \times M \rightarrow T M, \quad(\xi, p) \mapsto \xi_{M}(p)$ is smooth. Theorem 10.9. Given an action of a Lie group $G$ on a manifold $M$, one obtains an action of the corresponding Lie algebra $\mathfrak{g}$, by setting $$ \xi_{M}(p)=\left.\frac{d}{d t}\right|_{t=0} \Phi_{\exp (-t \xi)}(p) . $$ The vector field $\xi_{M}$ is called the generating vector field corresponding to $\xi$. Exercise 10.10. Prove this theorem. Hints: First verify the theorem for the left-action of a group on itself. (Show that $\xi_{M}$ equals $-\xi^{R}$ in this case.) Then, use that the action map $\Phi: G \times M \rightarrow M$ is equivariant, i.e. $\Phi \circ\left(l_{a} \times \mathrm{id}\right)=\Phi_{a} \circ \Phi$. Finally, show that $\left(-\xi^{R}, 0\right) \sim_{\Phi} \xi_{M}$. This implies $$ \left(-[\xi, \eta]^{R}, 0\right)=\left(\left[\xi^{R}, \eta^{R}\right], 0\right) \sim_{\Phi}\left[\xi_{M}, \eta_{M}\right] .
$$ Deduce $\left[\xi_{M}, \eta_{M}\right]=[\xi, \eta]_{M}$. Note: Many people omit the minus sign in the definition of the generating vector field $\xi_{M}$. But then $\xi \mapsto \xi_{M}$ is not a Lie algebra homomorphism but an "anti-homomorphism". We prefer to avoid "anti" whenever possible. ## Frobenius' theorem 11.1. Submanifolds. We defined embedded submanifolds as subsets of manifolds admitting submanifold charts. One often encounters more general submanifolds, in the following sense. Definition 11.1. Let $S, M$ be manifolds of dimensions $\operatorname{dim} S \leq \operatorname{dim} M$. A smooth map $F \in C^{\infty}(S, M)$ is an immersion if for all $p \in S$, the tangent map $T_{p} F$ is 1-1. It is called a submanifold if in addition $F$ is 1-1. Locally, the image of any immersion looks like an embedded submanifold, by the following result: Theorem 11.2. Let $F \in C^{\infty}(S, M)$ be an immersion. Then every point $p \in S$ has an open neighborhood $U \subset S$ such that $F(U)$ is an embedded submanifold. Proof. Using local coordinates, it suffices to prove this for the case that $S$ is an open subset of $\mathbb{R}^{s}$ and $M$ an open subset of $\mathbb{R}^{m}$. Given $p$, we may renumber the coordinates such that $T_{p} F\left(\mathbb{R}^{s}\right) \cap \mathbb{R}^{m-s}=\{0\}$, where we view $\mathbb{R}^{m-s}$ as the subspace where the first $s$ coordinates are 0. Define a map, $$ \widetilde{F}: S \times \mathbb{R}^{m-s} \rightarrow \mathbb{R}^{m}, \quad(q, y) \mapsto F(q)+y . $$ It is easily checked that $T_{(p, 0)} \widetilde{F}$ is a bijection. Hence, by the regular value theorem, there exists an open neighborhood $U$ of $p$ and an open ball $B_{\epsilon}(0)$ around $0 \in \mathbb{R}^{m-s}$ such that $\widetilde{F}$ restricts to a diffeomorphism from $U \times B_{\epsilon}(0)$ onto its image in $M \subset \mathbb{R}^{m}$. This gives the desired submanifold chart. Example 11.3.
A smooth immersion $\gamma: J \rightarrow M$ from an open interval $J \subset \mathbb{R}$ is the same thing as a regular curve: For all $t \in J, \dot{\gamma}(t) \neq 0$. In general, submanifolds need not be embedded submanifolds: For instance, the injective integral curves of a complete vector field define submanifolds $\mathbb{R} \rightarrow M$, but usually their images are not embedded. (Note that some authors use "submanifold" to denote embedded submanifolds, while others use the same terminology for immersions! We follow the conventions from F. Warner's book.) 11.2. Integral submanifolds. Let $X_{1}, \ldots, X_{k}$ be a collection of vector fields on a manifold $M$ such that the $X_{i}$ are pointwise linearly independent. That is, at every $p \in M$ the values $\left(X_{i}\right)_{p}$ of the vector fields span a $k$-dimensional subspace of the tangent space $T_{p} M$. A $k$-dimensional submanifold $\iota: S \hookrightarrow M$ is called an integral submanifold for $X_{1}, \ldots, X_{k}$, if each $X_{j}$ is tangent to $S$, that is $\left(X_{j}\right)_{\iota(p)} \in T_{p} \iota\left(T_{p} S\right) \subset T_{\iota(p)} M$ for all $p \in S$. We had seen above that the Lie bracket of any two vector fields tangent to $S$ is again tangent to $S$. Hence, a necessary condition for the existence of integral submanifolds through every given point $p \in M$ is that the $X_{j}$ are in involution: That is, $$ \left[X_{i}, X_{j}\right]=\sum_{l=1}^{k} c_{i j}^{l} X_{l} $$ for some functions $c_{i j}^{l}$. Frobenius' theorem (see below) asserts that this condition is also sufficient. Example 11.4. On $M=\mathbb{R}^{3} \backslash\left\{x_{2}=0\right\}$ consider the vector fields, $$ X=x_{3} \frac{\partial}{\partial x_{2}}-x_{2} \frac{\partial}{\partial x_{3}}, \quad Z=x_{1} \frac{\partial}{\partial x_{2}}-x_{2} \frac{\partial}{\partial x_{1}} $$ We have, $$ [X, Z]=x_{1} \frac{\partial}{\partial x_{3}}-x_{3} \frac{\partial}{\partial x_{1}}=: Y .
$$ Using $x_{1} X+x_{2} Y+x_{3} Z=0$, i.e. $Y=-\frac{x_{1}}{x_{2}} X-\frac{x_{3}}{x_{2}} Z$ on $M$, we see that $X, Z$ are in involution. As stated, the scope of Frobenius' theorem is limited since in general, manifolds need not admit pointwise linearly independent vector fields - often they don't even admit any vector field without zeroes. It is convenient to shift attention to the subbundle of $T M$ spanned by the vector fields, rather than the vector fields themselves: Definition 11.5. A $k$-dimensional distribution on a manifold $M$ is a rank $k$ vector subbundle $E$ of the tangent bundle $T M$. That is, $M$ can be covered by open subsets $U \subset M$ such that over each $U$, there are $k$ vector fields $X_{1}, \ldots, X_{k}$ spanning $E$. The distribution is called integrable if any such local basis is in involution. A submanifold $\iota: S \rightarrow M$ is called an integral submanifold for a (possibly non-integrable) distribution $E$ if $T_{p} \iota\left(T_{p} S\right)=E_{\iota(p)}$ for all $p \in S$. Exercise 11.6. Show that the condition of being in involution does not depend on the choice of $X_{i}$'s: If $X_{i}^{\prime}=\sum_{j} a_{i j} X_{j}$ and the $X_{j}$ are in involution, then so are the $X_{i}^{\prime}$. Example 11.7. On $M=\mathbb{R}^{3} \backslash\{0\}$ consider the three vector fields $X, Y, Z$ introduced above. They are pointwise linearly dependent: $x_{1} X+x_{2} Y+x_{3} Z=0$. It follows that the vector bundle $E$ spanned by $X, Y, Z$ has rank 2. The above local calculation shows that $E$ is integrable. The spheres $x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=r^{2}$ are integral submanifolds. ### Frobenius' theorem. Theorem 11.8 (Frobenius). A rank $k$ distribution $E$ on a manifold $M$ is integrable if and only if there exists an integral submanifold through every point $p \in M$. In this case, every point $p \in M$ admits a coordinate neighborhood $(U, \phi)$ in which $E$ is spanned by the first $k$ coordinate vector fields, $\frac{\partial}{\partial x_{1}}, \ldots, \frac{\partial}{\partial x_{k}}$. Proof.
We have seen that if there exists an integral submanifold through every point, then $E$ must be integrable. Suppose conversely that $E$ is integrable. It suffices to construct the coordinate charts $(U, \phi)$ described in the theorem: In such coordinates, it is clear that the integral submanifolds are given by setting the coordinates $x_{k+1}, \ldots, x_{m}$ equal to constants. Choose an arbitrary chart around $p$, with coordinates $y_{1}, \ldots, y_{m}$, where $p$ corresponds to $y=0$. Using the chart, we may assume that $M$ is an open subset of $\mathbb{R}^{m}$. Consider the $k$-dimensional subspace $E_{p}=E_{0}$. Renumbering the coordinates if necessary, we may assume that $E_{0} \cap \mathbb{R}^{m-k}=\{0\}$, where $\mathbb{R}^{m-k}$ is identified with the subspace of $\mathbb{R}^{m}$ where the first $k$ coordinates are 0. Passing to a small neighborhood of $p=0$ if necessary, we may assume that $E_{q} \cap \mathbb{R}^{m-k}=\{0\}$ for all $q$, or equivalently that orthogonal projection from $E_{q}$ to $\mathbb{R}^{k}$ is an isomorphism. That is, $E$ is spanned by vector fields of the form, $$ X_{i}=\frac{\partial}{\partial y_{i}}+\sum_{r=k+1}^{m} a_{i r} \frac{\partial}{\partial y_{r}} . $$ It turns out that we got very lucky: the $X_{i}$ commute! Indeed, by definition of the Lie bracket we have, $$ \left[X_{i}, X_{j}\right]=\sum_{r=k+1}^{m}\left(X_{i}\left(a_{j r}\right)-X_{j}\left(a_{i r}\right)\right) \frac{\partial}{\partial y_{r}}, $$ but since the $X_{i}$ are in involution, we also have $$ \left[X_{i}, X_{j}\right]=\sum_{l} c_{i j}^{l} X_{l}=\sum_{l=1}^{k} c_{i j}^{l} \frac{\partial}{\partial y_{l}}+\sum_{l=k+1}^{m}\left(\sum_{\nu=1}^{k} c_{i j}^{\nu} a_{\nu l}\right) \frac{\partial}{\partial y_{l}} . $$ Comparing the coefficients of $\frac{\partial}{\partial y_{l}}$ for $l \leq k$, we find that $c_{i j}^{l}=0$, showing that the $X_{i}$ commute. Hence their flows $\Phi_{t_{i}}^{i}$ commute (wherever defined).
Choose $\epsilon>0$ sufficiently small, and let $U^{\prime}$ be a small open neighborhood of $p$ such that for all $t=\left(t_{1}, \ldots, t_{k}\right) \in B_{\epsilon}(0)$ and all $q \in U^{\prime}$ the "joint flow" $$ \Phi\left(t_{1}, \ldots, t_{k}, q\right)=\Phi_{t_{1}}^{1} \circ \cdots \circ \Phi_{t_{k}}^{k}(q) $$ is defined. Since the flows commute, we have $$ \left.\frac{\partial}{\partial s}\right|_{s=0} \Phi\left(t_{1}, \ldots, t_{j}+s, \ldots, t_{k}, q\right)=\left(X_{j}\right)_{\Phi(t, q)} . $$ Define a map $$ F: B_{\epsilon}(0) \times\left(U^{\prime} \cap \mathbb{R}^{m-k}\right) \rightarrow M, \quad(t, q) \mapsto \Phi(t, q) . $$ By construction, $$ T F\left(\frac{\partial}{\partial t_{j}}\right)=X_{j}, \quad j=1, \ldots, k $$ and $T F\left(\frac{\partial}{\partial y_{j}}\right)=\frac{\partial}{\partial y_{j}}$ for $j>k$. In particular, $T_{(0,0)} F$ is invertible, hence $F$ restricts to a diffeomorphism from some open neighborhood of $(0,0)$ in $B_{\epsilon}(0) \times\left(U^{\prime} \cap \mathbb{R}^{m-k}\right)$ onto an open neighborhood of $p$ in $M$. The inverse map $F^{-1}$ gives the required change of coordinates. One has the following addendum to Frobenius' theorem: Theorem 11.9. Suppose $E \subset T M$ is an integrable distribution. For each $p \in M$, there is a unique maximal connected integral submanifold $\iota: S \rightarrow M$ passing through $p$. That is, if $\iota^{\prime}: S^{\prime} \rightarrow M$ is any other integral submanifold through $p$, then there exists a smooth map $F: S^{\prime} \rightarrow S$ such that $F$ is a diffeomorphism onto its image and $\iota^{\prime}=\iota \circ F$. This is analogous to the fact that every vector field has a unique maximal integral curve through every given point of $M$. The idea of proof is to "patch together" the local solutions. Again, the theorem fails for non-Hausdorff manifolds. The maximal integral submanifolds are called the leaves of the integrable distribution, and the decomposition of $M$ into leaves is called a foliation. 11.4.
Applications to Lie groups. A homomorphism of Lie groups is a smooth group homomorphism $F: H \rightarrow G$. The tangent map at the identity $T_{e} F: \mathfrak{h} \rightarrow \mathfrak{g}$ is then a homomorphism of Lie algebras, i.e. takes brackets to brackets. (To see this, one proves that the left-invariant vector fields corresponding to $\xi$ and to $T_{e} F(\xi)$ are $F$-related.) An injective Lie group homomorphism is called a Lie subgroup; in this case $T_{e} F$ identifies $\mathfrak{h}$ with a Lie subalgebra of $\mathfrak{g}$. Theorem 11.10. Let $G$ be a Lie group, with Lie algebra $\mathfrak{g}$, and $j: \mathfrak{h} \subset \mathfrak{g}$ a Lie subalgebra. Then there exists a unique Lie subgroup $F: H \rightarrow G$ having $\mathfrak{h}$ as its Lie algebra: That is, $j=T_{e} F$. Proof. Let $E \subset T G$ be the distribution spanned by the left-invariant vector fields $\xi^{L}$ with $\xi \in \mathfrak{h}$. Since $\left[\xi^{L}, \eta^{L}\right]=[\xi, \eta]^{L}$, this distribution is integrable. Let $\mathcal{L}_{g}$ denote the leaf through $g \in G$. The distribution is left-invariant: That is, for all $a \in G$ the tangent map to left translation, $T l_{a}: T G \rightarrow T G$, takes $E$ to itself. Hence, for any $a \in G$ the left translate $l_{a}\left(\mathcal{L}_{g}\right)=\mathcal{L}_{a g}$ is again a leaf. Let $H:=\mathcal{L}_{e}$. If $h_{1}, h_{2} \in H$ we have $\mathcal{L}_{h_{1}}=\mathcal{L}_{h_{2}}=H$, and acting by $a=h_{2}^{-1}$ we get $$ \mathcal{L}_{h_{2}^{-1} h_{1}}=\mathcal{L}_{e}=H, $$ proving that $h_{2}^{-1} h_{1} \in H$. This shows that $H$ is a subgroup. Smoothness of the group operations of $H$ follows from smoothness of those of $G$. We next describe an application of Frobenius' theorem to actions of Lie groups and Lie algebras on manifolds. Definition 11.11. An action of a Lie group $G$ on a manifold $M$ is a group homomorphism $G \rightarrow \operatorname{Diff}(M), g \mapsto \Phi_{g}$ such that the action map $$ \Phi: G \times M \rightarrow M, \quad(g,
p) \mapsto \Phi_{g}(p) $$ is smooth. An action of a finite dimensional Lie algebra $\mathfrak{g}$ on a manifold $M$ is a Lie algebra homomorphism $\mathfrak{g} \rightarrow \mathfrak{X}(M), \xi \mapsto \xi_{M}$ such that the action map $\mathfrak{g} \times M \rightarrow T M, \quad(\xi, p) \mapsto \xi_{M}(p)$ is smooth. Examples 11.12. 1) Note that an action of the (additive) Lie group $G=\mathbb{R}$ is the same thing as a global flow, while an action of the Lie algebra $\mathfrak{g}=\mathbb{R}$ (with zero bracket) is the same thing as a vector field. 2) Every matrix Lie group $G \subset \operatorname{GL}(n, \mathbb{R})$, and every matrix Lie algebra, acts on $\mathbb{R}^{n}$ by multiplication. 3) The rotation action of $\mathrm{SO}(n)$ on $\mathbb{R}^{n}$ restricts to an action on the sphere, $S^{n-1} \subset \mathbb{R}^{n}$. 4) Any Lie group $G$ acts on itself by multiplication from the left, $l_{a}(g)=a g$, by multiplication from the right, $r_{a^{-1}}(g)=g a^{-1}$, and also by the adjoint (=conjugation) action $$ \operatorname{Ad}_{a}(g):=l_{a} r_{a^{-1}}(g)=a g a^{-1} . $$ The maps $$ \xi \mapsto \xi^{L}, \quad \xi \mapsto-\xi^{R}, \quad \xi \mapsto \xi^{L}-\xi^{R} $$ are all Lie algebra actions of $\mathfrak{g}$ on $G$. Theorem 11.13. Given an action of a Lie group $G$ on a manifold $M$, one obtains an action of the corresponding Lie algebra $\mathfrak{g}$, by setting $$ \xi_{M}(p)=\left.\frac{d}{d t}\right|_{t=0} \Phi(\exp (-t \xi), p) . $$ The vector field $\xi_{M}$ is called the generating vector field corresponding to $\xi$. Proof. Let us first note that if $G$ acts on manifolds $M_{1}, M_{2}$, and if $F: M_{1} \rightarrow M_{2}$ is a $G$-equivariant map, i.e. $$ F(g \cdot p)=g \cdot F(p) \quad \forall g \in G, p \in M_{1}, $$ then $\xi_{M_{1}} \sim_{F} \xi_{M_{2}}$. This follows because $F$ takes integral curves for $\xi_{M_{1}}$ to integral curves for $\xi_{M_{2}}$. Thus, if we can show $[\xi, \eta]_{M_{1}}=\left[\xi_{M_{1}}, \eta_{M_{1}}\right]$,
then a similar property holds for $M_{2}$. We apply this to the special case $$ F: G \times M \rightarrow M, \quad(g, p) \mapsto \Phi\left(g^{-1}, p\right), $$ with $G$ acting on $G \times M$ by the right-action on $G$ and the trivial action on $M$, and acting on $M$ by the given action $\Phi$. The map $F$ is equivariant. This reduces the problem to the special case $M=G$ with action the right-action $a \mapsto r_{a^{-1}}$ of $G$ on itself. We claim that $$ \xi_{M}=\xi^{L} $$ in this case. Indeed, the flow $g \mapsto g \exp (-t \xi)^{-1}=g \exp (t \xi)$ commutes with left translations, hence it is the flow of a left invariant vector field. Taking the derivative at $g=e, t=0$ we see that this vector field is $\xi^{L}$, as claimed. But $\left[\xi^{L}, \eta^{L}\right]=[\xi, \eta]^{L}$. Exercise 11.14. Show that the generating vector field for the left action of $G$ on itself is $-\xi^{R}$, and the generating vector field for the adjoint action is $\xi^{L}-\xi^{R}$. Note: Many people omit the minus sign in the definition of the generating vector field $\xi_{M}$. But then $\xi \mapsto \xi_{M}$ is not a Lie algebra homomorphism but an "anti-homomorphism". We prefer to avoid "anti" whenever possible. Let us now consider the inverse problem: Try to integrate a given Lie algebra action to an action of the corresponding group! Suppose $G$ is a connected Lie group, with Lie algebra $\mathfrak{g}$. We assume that $G$ is also simply connected: That is, every loop in $G$ can be contracted to a point. For instance, $G=\mathrm{SU}(n)$ is simply connected. If $G$ is a compact Lie group with finite center, one also knows that some finite cover of $G$ is simply connected. Theorem 11.15. Every Lie algebra action $\xi \mapsto \xi_{M}$ of $\mathfrak{g}$ on a compact manifold $M$ "exponentiates" uniquely to a Lie group action of the simply connected Lie group $G$, that is, an action for which the $\xi_{M}$ are the generating vector fields. Sketch of Proof.
Every $G$-action on $M$ decomposes $G \times M$ into submanifolds $\mathcal{L}_{p}=\left\{\left(g^{-1}, g \cdot p\right) \mid g \in G\right\}$, and the action may be recovered from this decomposition. The idea of proof, given a $\mathfrak{g}$-action, is to construct the $\mathcal{L}_{p}$ as leaves of a foliation. Let $E \subset T(G \times M)$ be the distribution, of rank equal to $\operatorname{dim} G$, spanned by all vector fields $\left(\xi^{L}, \xi_{M}\right) \in \mathfrak{X}(G \times M)$ as $\xi$ ranges over the Lie algebra. Since $$ \left[\left(\xi^{L}, \xi_{M}\right),\left(\eta^{L}, \eta_{M}\right)\right]=\left([\xi, \eta]^{L},[\xi, \eta]_{M}\right), $$ the distribution is involutive. Hence it defines a foliation of $G \times M$ into submanifolds of dimension $\operatorname{dim} G$. Given $p \in M$, let $\mathcal{L}_{p} \hookrightarrow G \times M$ be the unique leaf containing the point $(e, p)$. Projection to the first factor induces a smooth map $\mathcal{L}_{p} \rightarrow G$, with tangent map taking $\left(\xi^{L}, \xi_{M}\right)$ to $\xi^{L}$. Since the tangent map is an isomorphism, the map $\mathcal{L}_{p} \rightarrow G$ is a local diffeomorphism (that is, every point in $\mathcal{L}_{p}$ has an open neighborhood over which the map is a diffeomorphism onto its image). We claim that this map is surjective. Proof: By the Lemma given below, and since the exponential map $\exp: \mathfrak{g} \rightarrow G$ is a diffeomorphism on some neighborhood of 0, every $g \in G$ can be written as a product $g_{1} \cdots g_{N}$ of elements $g_{j}=\exp \left(\xi_{j}\right)$ with $\xi_{j} \in \mathfrak{g}$. The curve $t \mapsto g_{1} \cdots g_{j-1} \exp \left(t \xi_{j}\right)$ is an integral curve of the left-invariant vector field $\xi_{j}^{L}$. Taking all these curves together defines a piecewise smooth curve $\gamma$ connecting $e$ to $g$.
This curve lifts to $\mathcal{L}_{p}$: Since $M$ is compact, each $\xi_{M}$ is complete, hence each smooth segment of $\gamma$ lifts to an integral curve of $\left(\xi_{j}^{L},\left(\xi_{j}\right)_{M}\right)$. We have thus shown that the map $\mathcal{L}_{p} \rightarrow G$ is a surjective local diffeomorphism. Since $G$ is simply connected by assumption, it follows that the map is in fact a diffeomorphism. Hence, for every $g$, $\mathcal{L}_{p}$ contains a unique point of the form $\left(g^{-1}, p^{\prime}\right)$. Define $g \cdot p=\Phi(g, p):=p^{\prime}$. We leave it as an exercise to check that this map defines a smooth $G$-action. Lemma 11.16. Let $G$ be a connected Lie group, and $U \subset G$ an open neighborhood of the group unit $e \in G$. Then every $g \in G$ can be written as a finite product $g=g_{1} \cdots g_{N}$ of elements $g_{j} \in U$. Proof. We may assume that $g^{-1} \in U$ whenever $g \in U$. For each $N$, let $U^{N}=\left\{g_{1} \cdots g_{N} \mid g_{j} \in U\right\}$. We have to show $\bigcup_{N=0}^{\infty} U^{N}=G$. Each $U^{N}$ is open, hence their union is open as well. If $g \in G \backslash \bigcup_{N=0}^{\infty} U^{N}$, then $g U \subset G \backslash \bigcup_{N=0}^{\infty} U^{N}$ (for if $g h \in \bigcup_{N=0}^{\infty} U^{N}$ with $h \in U$ we would have $g=(g h) h^{-1} \in \bigcup_{N=0}^{\infty} U^{N}$). This shows that $G \backslash \bigcup_{N=0}^{\infty} U^{N}$ is also open. Since $G$ is connected, it follows that the open and closed set $\bigcup_{N=0}^{\infty} U^{N}$ is all of $G$.
## Riemannian metrics
Let us quickly recall some linear algebra. A bilinear form on a vector space $V$ is a bilinear map $g: V \times V \rightarrow \mathbb{R}$. Such a bilinear form is called symmetric if $g(v, w)=g(w, v)$ for all $v, w$, and in this case it is completely determined by the associated quadratic form $q(v)=g(v, v)$. The form $g$ is called an inner product if it is positive definite, i.e. $g(v, v)>0$ for all $v \neq 0$.
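Positive definiteness can be tested concretely once $g$ is written in a basis: $g$ is an inner product precisely when the symmetric matrix of values $g\left(e_{i}, e_{j}\right)$ has only positive eigenvalues. A small sketch (the matrices are my own examples):

```python
import sympy as sp

def is_positive_definite(gram):
    """g(v, v) > 0 for all v != 0  <=>  all eigenvalues of the
    symmetric Gram matrix are positive."""
    return all(ev > 0 for ev in gram.eigenvals(multiple=True))

# The standard inner product on R^2: g_ij = delta_ij
assert is_positive_definite(sp.eye(2))

# g(v, w) = v1*w2 + v2*w1 is symmetric, but g(v, v) = 2*v1*v2
# changes sign, so it is not an inner product
h = sp.Matrix([[0, 1], [1, 0]])
assert not is_positive_definite(h)
assert sorted(h.eigenvals(multiple=True)) == [-1, 1]
```

The eigenvalue signs recover exactly the signature data discussed next.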
More generally, a symmetric form $g$ is called non-degenerate if $g(v, w)=0$ for all $w$ implies $v=0$. Non-degenerate symmetric bilinear forms are also called indefinite inner products. Given a basis $e_{1}, \ldots, e_{n}$ of $V$, one can describe any bilinear form in terms of the matrix $g_{i j}=g\left(e_{i}, e_{j}\right)$. The bilinear form $g$ is symmetric if and only if the matrix $g_{i j}$ is symmetric, and in this case one can always choose the basis such that $g_{i j}$ is diagonal. In fact, one can choose the basis in such a way that only $+1,0,-1$ arise as diagonal entries. Let $d_{+}, d_{0}, d_{-}$ denote the numbers of $+1,0,-1$ diagonal entries, respectively. Then $g$ is non-degenerate if and only if $d_{0}=0$, and is an inner product if and only if $d_{0}=d_{-}=0$, i.e. if there exists a basis such that $g_{i j}=\delta_{i j}$. Exercise 12.1. Show that one can split $V=V_{+} \oplus V_{-}$ where $\operatorname{dim} V_{ \pm}=d_{ \pm}$ and $g$ is positive definite on $V_{+}$, negative definite on $V_{-}$. However, looking at the case $\left(d_{+}, d_{-}\right)=(1,1)$, observe that this splitting is not unique. Definition 12.2. A Riemannian metric on a manifold $M$ is a family of inner products $g_{p}: T_{p} M \times T_{p} M \rightarrow \mathbb{R}$, depending smoothly on $p$ in the sense that the quadratic form $$ q: T M \rightarrow \mathbb{R}, \quad v \mapsto g_{p}(v, v) \text { for } v \in T_{p} M $$ is a smooth map $q \in C^{\infty}(T M)$. More generally, a pseudo-Riemannian metric of signature $\left(d_{+}, d_{-}\right)$ is defined by letting the $g_{p}$ be indefinite inner products of signature $\left(d_{+}, d_{-}\right)$. The case of signature $(3,1)$ is relevant to general relativity, with 3 space dimensions and 1 time dimension. Again, there is no distinguished splitting into "space" and "time" directions. Lemma 12.3.
Any pseudo-Riemannian metric defines a symmetric $C^{\infty}(M)$-bilinear map $$ g: \mathfrak{X}(M) \times \mathfrak{X}(M) \rightarrow C^{\infty}(M), \quad g(X, Y)_{p}=g_{p}\left(X_{p}, Y_{p}\right) . $$ Conversely, every symmetric $C^{\infty}(M)$-bilinear map $g: \mathfrak{X}(M) \times \mathfrak{X}(M) \rightarrow C^{\infty}(M)$, with the property that $g(X, Y)_{p}=0$ for all $Y$ implies $X_{p}=0$, defines a pseudo-Riemannian metric. Proof. Let $g$ be a pseudo-Riemannian metric, with quadratic form $q$. View vector fields as smooth sections, $X \in \Gamma^{\infty}(M, T M)$. Then $$ g(X, Y)=\frac{1}{2}(g(X+Y, X+Y)-g(X, X)-g(Y, Y))=\frac{1}{2}(q \circ(X+Y)-q \circ X-q \circ Y) $$ is smooth, while $C^{\infty}(M)$-bilinearity is obvious. Conversely, suppose we are given a $C^{\infty}(M)$-bilinear map $g: \mathfrak{X}(M) \times \mathfrak{X}(M) \rightarrow C^{\infty}(M)$ with the property that $g(X, Y)_{p}=0$ for all $Y$ implies $X_{p}=0$. The following Lemma shows that $g(X, Y)_{p}$ depends only on $X_{p}, Y_{p}$. Hence we can define $$ g_{p}\left(X_{p}, Y_{p}\right):=g(X, Y)_{p} . $$ If $g_{p}(v, w)=0$ for all $v$, choose $Y$ with $Y_{p}=w$. Then, by symmetry, $g(Y, X)_{p}=g_{p}\left(w, X_{p}\right)=0$ for all $X$, which by assumption implies $w=Y_{p}=0$. Hence $g_{p}$ is non-degenerate. Using the formula $q \circ X=g(X, X)$, and passing to local coordinates, one sees that $g_{p}$ depends smoothly on $p$, hence it defines a pseudo-Riemannian metric. Lemma 12.4. If $A: \mathfrak{X}(M) \times \cdots \times \mathfrak{X}(M) \rightarrow C^{\infty}(M)$ is a $C^{\infty}(M)$-multilinear map, then the value of $A\left(X_{1}, \ldots, X_{r}\right)$ at $p \in M$ depends only on $\left(X_{1}\right)_{p}, \ldots,\left(X_{r}\right)_{p}$. More generally, this Lemma holds true for any $C^{\infty}(M)$-multilinear map from $\mathfrak{X}(M) \times \cdots \times \mathfrak{X}(M)$ to a $C^{\infty}(M)$-module. Proof. It suffices to consider the case $r=1$.
We have to show that if $X$ vanishes at $p$, then $A(X)$ vanishes at $p$. But if $X_{p}=0$, we can write (using local coordinates, and the Taylor expansion) $X=\sum_{i} f_{i} X_{i}$ where $X_{i} \in \mathfrak{X}(M)$ and where $f_{i} \in C^{\infty}(M)$ vanish at $p$. Hence, $$ A(X)_{p}=A\left(\sum_{i} f_{i} X_{i}\right)_{p}=\sum_{i} f_{i}(p) A\left(X_{i}\right)_{p}=0 $$ by $C^{\infty}(M)$-linearity. Definition 12.5. A (pseudo)-Riemannian manifold $(M, g)$ is a manifold $M$ together with a (pseudo)-Riemannian metric. An isometry between (pseudo)-Riemannian manifolds $\left(M_{1}, g_{1}\right)$ and $\left(M_{2}, g_{2}\right)$ is a diffeomorphism $F: M_{1} \rightarrow M_{2}$ such that for all $p \in M_{1}$, the tangent map $T_{p} F: T_{p} M_{1} \rightarrow T_{F(p)} M_{2}$ is an isometry, i.e. preserves inner products. In local coordinates $x_{1}, \ldots, x_{m}$ on $U \subset M$, any pseudo-Riemannian metric is determined by the smooth functions $$ g_{i j}(x)=g\left(\frac{\partial}{\partial x_{i}}, \frac{\partial}{\partial x_{j}}\right) . $$ Indeed, one recovers $g$ from the $g_{i j}$ by $$ g\left(\sum_{i} a_{i} \frac{\partial}{\partial x_{i}}, \sum_{j} b_{j} \frac{\partial}{\partial x_{j}}\right)=\sum_{i j} g_{i j} a_{i} b_{j} . $$ Conversely, every collection of smooth functions $g_{i j}$, such that each $\left(g_{i j}(x)\right)$ is a non-degenerate symmetric bilinear form, defines a pseudo-Riemannian metric. In particular, $g_{i j}=\delta_{i j}$ defines the standard metric on $\mathbb{R}^{n}$. How does $g_{i j}$ depend on the choice of coordinates? Let $y=\phi(x)$ be a coordinate change, and let $\tilde{g}_{i j}(y)$ denote the matrix in $y$-coordinates. We have, $$ \frac{\partial}{\partial y_{i}}=\sum_{a} \frac{\partial x_{a}}{\partial y_{i}} \frac{\partial}{\partial x_{a}} .
$$ Hence, $$ \tilde{g}_{i j}(y)=g\left(\frac{\partial}{\partial y_{i}}, \frac{\partial}{\partial y_{j}}\right)=\sum_{a b} \frac{\partial x_{a}}{\partial y_{i}} \frac{\partial x_{b}}{\partial y_{j}} g_{a b}\left(\phi^{-1}(y)\right) . $$ Lemma 12.6. Let $S \subset M$ be an embedded submanifold, and $g$ a Riemannian metric on $M$. Then the restriction of $g$ to the tangent spaces $T_{p} S \subset T_{p} M$ defines a Riemannian metric on $S$. More generally, if $\iota: S \rightarrow M$ is an immersion, there is a unique Riemannian metric on $S$ such that each tangent map $T_{p} \iota: T_{p} S \rightarrow T_{\iota(p)} M$ is an isometry onto its image. In particular, every embedded submanifold of $\mathbb{R}^{m}$ inherits a Riemannian metric from the standard Riemannian metric on $\mathbb{R}^{m}$. Example 12.7. The 2-torus $T^{2}$ can be defined as a direct product of the circle $S^{1} \subset \mathbb{R}^{2}$ with itself. Correspondingly we have an embedding $T^{2} \rightarrow \mathbb{R}^{4}$ and the corresponding induced metric $g$ on $T^{2}$. The resulting metric on $T^{2}$ is simply the product of the metrics on the $S^{1}$ factors, and in particular is flat: $T^{2}$ is locally isometric to $\mathbb{R}^{2}$. It follows that there is no embedding of $T^{2}$ into $\mathbb{R}^{3}$ inducing the same metric $g$: We saw in the curves and surfaces course that every compact surface in $\mathbb{R}^{3}$ has a point where the Gauss curvature is positive, so a compact surface in $\mathbb{R}^{3}$ cannot be flat.
## Existence of Riemannian metrics
To show that every manifold admits a Riemannian metric, we need an important technical tool called partitions of unity. Theorem 13.1 (Partitions of unity). Let $M$ be a manifold.
a) Any open cover $\left\{U_{\alpha}\right\}$ of $M$ has a locally finite refinement $\left\{V_{\beta}\right\}$: That is, $\left\{V_{\beta}\right\}$ is an open cover, each $V_{\beta}$ is contained in some $U_{\alpha}$, and the cover is locally finite in the sense that each point in $M$ has an open neighborhood meeting only finitely many $V_{\beta}$'s. b) For any locally finite cover $\left\{U_{\alpha}\right\}$ of $M$, there exists a partition of unity, that is, a collection of functions $\chi_{\alpha} \in C^{\infty}(M)$ with $\operatorname{supp}\left(\chi_{\alpha}\right) \subset U_{\alpha}$, such that $0 \leq \chi_{\alpha} \leq 1$ and $$ \sum_{\alpha} \chi_{\alpha}=1 . $$ Note that the sum $\sum_{\alpha} \chi_{\alpha}$ is well-defined, since only finitely many $\chi_{\alpha}$'s are non-zero near any given point. We will omit the somewhat technical proof of this result. The proof is contained in most books on differential geometry (e.g. Helgason), and can also be found in the lecture notes from my "manifolds" course. The main steps for part (b) are as follows: (i) One constructs a "shrinking" of the open cover $U_{\alpha}$ to a new cover $V_{\alpha}$, such that $\bar{V}_{\alpha} \subset U_{\alpha}$. The new cover is still locally finite. (ii) One constructs functions $f_{\alpha} \in C^{\infty}(M)$ supported on $U_{\alpha}$, such that $f_{\alpha}>0$ on $V_{\alpha}$. (iii) One defines $f=\sum_{\alpha} f_{\alpha}>0$, and sets $\chi_{\alpha}=f_{\alpha} / f$. Theorem 13.2. Every manifold $M$ admits a Riemannian metric. Proof. Choose an atlas $\left\{\left(U_{\alpha}, \phi_{\alpha}\right)\right\}$ of $M$. Passing to a refinement, we may assume that the atlas is locally finite. Choose a partition of unity $\chi_{\alpha}$ for the cover $\left\{U_{\alpha}\right\}$. Since $\phi_{\alpha}$ identifies $U_{\alpha}$ with an open subset of $\mathbb{R}^{m}$, we obtain Riemannian metrics $g_{\alpha}$ on $U_{\alpha}$ from the standard Riemannian metrics on $\mathbb{R}^{m}$.
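Steps (i)-(iii) of the construction can be carried out completely explicitly in a toy case. The following sketch is my own one-dimensional illustration (the cover and cutoff functions are chosen for convenience): starting from the standard smooth function $e^{-1 / t}$, it builds a partition of unity for the cover of $(-2,2)$ by $U_{1}=(-2,1)$ and $U_{2}=(-1,2)$:

```python
import math

def f(t):
    # the standard smooth function vanishing to infinite order at 0
    return math.exp(-1.0/t) if t > 0 else 0.0

def bump(t, a, b):
    # smooth bump supported in [a, b], positive on the interior (step (ii))
    return f(t - a) * f(b - t)

def chi(t):
    # step (iii): normalize by the (positive) sum of the bumps
    f1, f2 = bump(t, -2, 1), bump(t, -1, 2)
    s = f1 + f2
    return (f1/s, f2/s)

for t in [-1.9, -0.5, 0.0, 0.7, 1.5]:
    c1, c2 = chi(t)
    assert 0.0 <= c1 <= 1.0 and 0.0 <= c2 <= 1.0
    assert abs(c1 + c2 - 1.0) < 1e-12
```

On the overlap $(-1,1)$ both functions are strictly between 0 and 1; outside it, one of them equals 1 and the other 0.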
For all $p \in M$, the sum $g_{p}=\sum_{\alpha} \chi_{\alpha}(p)\left(g_{\alpha}\right)_{p}$ is well-defined (the terms with $\chi_{\alpha}(p)=0$ are understood to be zero). Since all $\chi_{\alpha}(p) \geq 0$, with at least one strictly positive, $g_{p}$ is an inner product, and it clearly depends smoothly on $p$. Thus $g$ is a Riemannian metric on $M$. It is not true that every manifold admits a pseudo-Riemannian metric of given signature $\left(d_{+}, d_{-}\right)$, where both $d_{\pm} \neq 0$.
## Length of curves
Suppose $(M, g)$ is a Riemannian manifold (that is, a manifold with a Riemannian metric). ${ }^{7}$ For any tangent vector $v \in T_{p} M$, we define its length as $\|v\|=g_{p}(v, v)^{1 / 2}$. Definition 14.1. Let $\gamma:[a, b] \rightarrow M$ be a smooth curve in $M$. ${ }^{8}$ One defines the length of $\gamma$ to be the integral $$ L(\gamma)=\int_{a}^{b}\|\dot{\gamma}(t)\| \mathrm{d} t . $$ The length functional is invariant under reparametrizations of the curve $\gamma$. Somewhat more generally, we have: Proposition 14.2. Let $\sigma:[a, b] \rightarrow \mathbb{R}$ be a smooth function, with the property that $\sigma\left(t_{1}\right) \leq \sigma\left(t_{2}\right)$ for $t_{1} \leq t_{2}$. Let $\gamma:[a, b] \rightarrow M$ be a smooth curve of the form $\gamma=\tilde{\gamma} \circ \sigma$. Then $L(\gamma)=L(\tilde{\gamma})$. Proof.
By substitution of variables $\tilde{t}=\sigma(t)$, ${ }^{9}$ $$ \begin{aligned} L(\gamma) & =\int_{a}^{b}\left\|\frac{d}{d t}(\tilde{\gamma} \circ \sigma)\right\| \mathrm{d} t \\ & =\int_{a}^{b}\left\|\frac{d \tilde{\gamma}}{d \tilde{t}}(\sigma(t))\right\|\left|\frac{d \sigma}{d t}\right| \mathrm{d} t \\ & =\int_{\sigma(a)}^{\sigma(b)}\left\|\frac{d \tilde{\gamma}}{d \tilde{t}}\right\| \mathrm{d} \tilde{t} \\ & =L(\tilde{\gamma}) . \end{aligned} $$ The definition of $L(\gamma)$ applies to piecewise smooth curves: That is, continuous curves $\gamma:[a, b] \rightarrow M$ such that there exists a subdivision $a=t_{0} \leq \cdots \leq t_{N}=b$ of the interval, with each $\left.\gamma\right|_{\left[t_{i}, t_{i+1}\right]}$ a smooth curve.
${ }^{7}$ For the following discussion, see chapter 1.4 in Jost's book.
${ }^{8}$ Here smooth means that $\gamma$ extends to a smooth curve on an open interval $J$ containing $[a, b]$.
${ }^{9}$ If $\sigma:[a, b] \rightarrow \mathbb{R}$ is a continuous, piecewise smooth map, which is weakly increasing in the sense that $\sigma\left(t_{1}\right) \leq \sigma\left(t_{2}\right)$ for $t_{1} \leq t_{2}$, then $\int_{a}^{b} f(\sigma(t))\left|\frac{d \sigma}{d t}\right| \mathrm{d} t=\int_{\sigma(a)}^{\sigma(b)} f(\tilde{t}) \mathrm{d} \tilde{t}$.
If the curve $\gamma$ is contained in a fixed coordinate chart $(U, \phi)$, and $\left(x_{1}(t), \ldots, x_{m}(t)\right)$ describes the curve in local coordinates, we have $$ L(\gamma)=\int_{a}^{b} \sqrt{\sum_{i j} g_{i j}(x(t)) \dot{x}_{i} \dot{x}_{j}} \mathrm{~d} t . $$ Definition 14.3 (Distance function). Let $(M, g)$ be a connected Riemannian manifold. For $p, q \in M$, the distance $d(p, q)$ between any two points on $M$ is the infimum of $L(\gamma)$, as $\gamma$ varies over all piecewise smooth curves $\gamma:[0,1] \rightarrow M$ with $\gamma(0)=p$ and $\gamma(1)=q$. (If no such path exists, we set $d(p, q)=\infty$.) Problems 14.4. 1.
Show that for any manifold $M$, the following are equivalent: (i) $M$ is connected, (ii) any two points $p, q$ can be joined by a continuous path, (iii) any two points $p, q$ can be joined by a piecewise smooth path, (iv) any two points $p, q$ can be joined by a smooth path. Hence $d(p, q)<\infty$ for a connected manifold. 2. Show that in the definition of the distance function, one can replace piecewise smooth paths by smooth paths. In fact, any piecewise smooth path is of the form $\gamma=\lambda \circ \sigma$, where $\sigma$ is weakly increasing and piecewise smooth, and $\lambda$ is smooth. Lemma 14.5. Let $(U, \phi)$ be a coordinate chart in which $g$ is given by $g_{i j}(x)$, and $K \subset \phi(U)$ a compact subset. Then there exist $\mu \geq \lambda>0$ with $$ \mu \sqrt{\sum_{i} \xi_{i} \xi_{i}} \geq \sqrt{\sum_{i j} g_{i j}(x) \xi_{i} \xi_{j}} \geq \lambda \sqrt{\sum_{i} \xi_{i} \xi_{i}} $$ for $x \in K, \xi \in \mathbb{R}^{n}$. Proof. The set of all $(x, \xi) \in \mathbb{R}^{2 n}$ with $x \in K$ and $\sum_{i} \xi_{i} \xi_{i}=1$ is compact. Hence the function $\sum_{i j} g_{i j}(x) \xi_{i} \xi_{j}$ takes on its maximum $\mu^{2}$ and minimum $\lambda^{2}$ on this set. By definition of a Riemannian metric, $\lambda^{2}>0$. Theorem 14.6. For any connected Riemannian manifold $(M, g)$, the distance function $d$ defines a metric on $M$. That is, $d(p, q) \geq 0$ with equality if and only if $p=q$, $d(p, q)=d(q, p)$, and for any three points $p, q, r$, one has the triangle inequality $$ d(p, q)+d(q, r) \geq d(p, r) . $$ Proof. Symmetry and the triangle inequality are immediate from the definition. Suppose $p \neq q$. We have to show $d(p, q)>0$. Choose a chart $(U, \phi)$ around $p$, with $\phi(p)=0$, and let $\epsilon>0$ be sufficiently small, such that the closed ball $\overline{B_{\epsilon}}$ is contained in $\phi(U)$ and $\phi^{-1}\left(\overline{B_{\epsilon}}\right)$ does not contain $q$. Let $g_{i j}$ represent the metric in the chart $(U, \phi)$.
Given a curve $\gamma:[a, b] \rightarrow M$ from $p$ to $q$, let $t_{1}<b$ be such that $\gamma(t) \in \phi^{-1}\left(\overline{B_{\epsilon}}\right)$ for $a \leq t \leq t_{1}$ and $\gamma\left(t_{1}\right) \in \phi^{-1}\left(\bar{B}_{\epsilon} \backslash B_{\epsilon}\right)$. Write $\phi(\gamma(t))=x(t)$ for $a \leq t \leq t_{1}$. Using the Lemma, $$ L(\gamma) \geq \int_{a}^{t_{1}} \sqrt{\sum_{i j} g_{i j}(x(t)) \dot{x}_{i} \dot{x}_{j}} \mathrm{~d} t \geq \lambda \int_{a}^{t_{1}} \sqrt{\sum_{i} \dot{x}_{i} \dot{x}_{i}} \mathrm{~d} t \geq \lambda \epsilon, $$ since the length of the path from $\phi(p)=0$ to $x\left(t_{1}\right)$ must be at least the Euclidean distance $\epsilon$. Hence also $$ d(p, q)=\inf _{\gamma} L(\gamma) \geq \lambda \epsilon>0 . $$ Theorem 14.7. For any connected Riemannian manifold, the topology defined by the distance function coincides with the manifold topology. Proof. This follows from the Lemma: In local charts, $\epsilon$-balls for the metric $d$ contain sufficiently small Euclidean $\delta$-balls, and vice versa.
## Connections and parallel transport
In this section, we will define the parallel transport of tangent vectors on any Riemannian manifold $(M, g)$. If $M \subset \mathbb{R}^{m}$ is an embedded submanifold of $\mathbb{R}^{m}$, with metric induced from $\mathbb{R}^{m}$, we can follow the strategy from the "curves and surfaces" course: At any $p \in M$ we have an orthogonal projection $\Pi_{p}: \mathbb{R}^{m} \rightarrow T_{p} M$. If $\gamma(t)$ is a curve and $X(t) \in T_{\gamma(t)} M$ a vector field along $\gamma$, we say that $X$ is parallel along $\gamma$ if the covariant derivative $$ \frac{\nabla X}{\mathrm{~d} t}:=\Pi_{\gamma(t)} \frac{\mathrm{d} X}{\mathrm{~d} t} $$ vanishes for all $t$. Here we have used that $X(t)$ can be viewed as an $\mathbb{R}^{m}$-valued function of $t$. Using the existence and uniqueness theorem for ODEs, one finds that any parallel vector field along $\gamma$ is determined by its value $X\left(t_{0}\right)$ at any fixed time $t_{0}$.
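For an embedded submanifold, this definition of a parallel vector field is easy to check symbolically. A minimal sketch (my own example): along the curve $\gamma(t)=(\cos t, \sin t)$ on the unit circle $S^{1} \subset \mathbb{R}^{2}$, the unit tangent field $X(t)=(-\sin t, \cos t)$ has derivative normal to the circle, so $\Pi_{\gamma(t)} \frac{\mathrm{d} X}{\mathrm{~d} t}=0$ and $X$ is parallel, while the rescaled tangent field $\cos (t) X(t)$ is not:

```python
import sympy as sp

t = sp.symbols('t')

gamma = sp.Matrix([sp.cos(t), sp.sin(t)])   # curve on the unit circle
T = sp.Matrix([-sp.sin(t), sp.cos(t)])      # unit tangent, spans T_gamma(t) S^1

def covariant_derivative(X):
    """nabla X / dt = Pi_gamma(t) (dX/dt), where the orthogonal projection
    onto the tangent line is Pi(v) = (v . T) T."""
    return (sp.diff(X, t).dot(T)) * T

# The unit tangent field is parallel along gamma:
cov_deriv = covariant_derivative(T)
assert sp.simplify(cov_deriv) == sp.zeros(2, 1)

# A rescaled tangent field is not: nabla(cos(t) T)/dt = -sin(t) T
cov2 = covariant_derivative(sp.cos(t)*T)
assert sp.simplify(cov2 + sp.sin(t)*T) == sp.zeros(2, 1)
```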
For a general Riemannian manifold $(M, g)$, we don't have "orthogonal projection" at our disposal. It is remarkable that there exists, nevertheless, a well-defined concept of parallel transport on any Riemannian manifold $(M, g)$. That is, parallel transport is really an intrinsic property. Our starting point for defining parallel transport is to define generalized covariant derivatives, called affine connections. We will then show that any Riemannian manifold carries a distinguished affine connection. 15.1. Affine connections. Let $M$ be a manifold. Definition 15.1. An affine connection on $M$ is a bilinear map $$ \nabla: \mathfrak{X}(M) \times \mathfrak{X}(M) \rightarrow \mathfrak{X}(M), \quad(X, Y) \mapsto \nabla_{X}(Y) $$ such that $$ \begin{aligned} & \nabla_{X}(f Y)=f \nabla_{X}(Y)+X(f) Y, \\ & \nabla_{f X}(Y)=f \nabla_{X}(Y), \end{aligned} $$ for all $f \in C^{\infty}(M), X, Y \in \mathfrak{X}(M)$. The second condition says that $\nabla_{X}(Y)$ is $C^{\infty}(M)$-linear in the $X$ variable. One calls $\nabla_{X}(Y)$ the covariant derivative of $Y$ in the direction of $X$. $Y$ is called covariant constant in the direction of $X$ if $\nabla_{X}(Y)=0$. If $M=U$ is an open subset of $\mathbb{R}^{m}$, any affine connection $\nabla$ is determined by its values on coordinate vector fields. The functions $\Gamma_{j k}^{i} \in C^{\infty}(U)$ defined by $$ \nabla_{\frac{\partial}{\partial x_{j}}}\left(\frac{\partial}{\partial x_{k}}\right)=\sum_{i} \Gamma_{j k}^{i} \frac{\partial}{\partial x_{i}} $$ are called the Christoffel symbols of $\nabla$. The full connection is given in terms of the Christoffel symbols by the formula, $$ \nabla_{\sum_{j} a_{j} \frac{\partial}{\partial x_{j}}}\left(\sum_{k} b_{k} \frac{\partial}{\partial x_{k}}\right)=\sum_{j} a_{j}\left(\sum_{k} \frac{\partial b_{k}}{\partial x_{j}} \frac{\partial}{\partial x_{k}}+\sum_{k, i} \Gamma_{j k}^{i} b_{k} \frac{\partial}{\partial x_{i}}\right) .
$$ Conversely, it is easily checked that any collection of smooth functions $\Gamma_{j k}^{i}$ defines an affine connection by this formula. In particular, open subsets $U \subset \mathbb{R}^{m}$ have the standard affine connection, given by $\Gamma_{j k}^{i}=0$. More generally, for affine connections on manifolds one defines Christoffel symbols of a connection with respect to a given chart. First we note that if $U \subset M$ is an open subset, affine connections $\nabla$ have a unique restriction $\left.\nabla\right|_{U}$ with the property $$ \left(\left.\nabla\right|_{U}\right)_{\left.X\right|_{U}}\left(\left.Y\right|_{U}\right)=\left.\nabla_{X}(Y)\right|_{U} $$ Moreover, every connection is determined by its restrictions to elements of an open cover of $M$. Hence we may define: Definition 15.2. Let $\nabla$ be an affine connection on a manifold $M$. If $(U, \phi)$ is a chart, defining local coordinates $x_{1}, \ldots, x_{m}$, one defines the Christoffel symbols $\Gamma_{j k}^{i}$ of $\left.\nabla\right|_{U}$ in the given chart to be the functions defined by (5). Problems 15.3. 1. Calculate the Christoffel symbols of the standard connection on $\mathbb{R}^{2}$ in polar coordinates. The solution shows that Christoffel symbols may vanish in one coordinate system but be non-zero in another. 2. Work out the transformation property of Christoffel symbols under change of coordinates. Proposition 15.4. For any affine connection $\nabla$ on $M$, the map $T: \mathfrak{X}(M) \times \mathfrak{X}(M) \rightarrow$ $\mathfrak{X}(M)$ given by $$ T(X, Y)=\nabla_{X}(Y)-\nabla_{Y}(X)-[X, Y] $$ is $C^{\infty}(M)$-linear in both $X$ and $Y$. It is called the torsion of $\nabla$. Proof. For all $f \in C^{\infty}(M)$, $$ \begin{aligned} T(X, f Y)-f T(X, Y) & =\nabla_{X}(f Y)-f \nabla_{X}(Y)-[X, f Y]+f[X, Y] \\ & =X(f) Y-X(f) Y=0 . \end{aligned} $$ Similarly $T(f X, Y)-f T(X, Y)=0$. 
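The coordinate formula above can be implemented directly, and the two defining properties of an affine connection then become symbolic identities. In the following sketch the Christoffel symbols and the test data are arbitrary choices of mine; the point is that the Leibniz rule holds for any such choice:

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

# An arbitrary (non-standard) choice of Christoffel symbols Gamma[i][j][k]
Gamma = [[[x, 1], [0, y]], [[y**2, 0], [x*y, 2]]]

def nabla(X, Y):
    """Covariant derivative in coordinates:
    (nabla_X Y)^i = sum_j X^j (dY^i/dx_j + sum_k Gamma^i_{jk} Y^k)."""
    return sp.Matrix([
        sum(X[j]*(sp.diff(Y[i], coords[j])
                  + sum(Gamma[i][j][k]*Y[k] for k in range(2)))
            for j in range(2))
        for i in range(2)])

X = sp.Matrix([y, x**2])
Y = sp.Matrix([x + y, sp.sin(x)])
f = x*y**2
Xf = sum(X[j]*sp.diff(f, coords[j]) for j in range(2))   # the function X(f)

# Leibniz rule in the Y-slot, and C^infinity(M)-linearity in the X-slot:
assert sp.simplify(nabla(X, f*Y) - f*nabla(X, Y) - Xf*Y) == sp.zeros(2, 1)
assert sp.simplify(nabla(f*X, Y) - f*nabla(X, Y)) == sp.zeros(2, 1)
```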
In local coordinates we have, in terms of the Christoffel symbols, $$ T\left(\frac{\partial}{\partial x_{j}}, \frac{\partial}{\partial x_{k}}\right)=\sum_{i}\left(\Gamma_{j k}^{i}-\Gamma_{k j}^{i}\right) \frac{\partial}{\partial x_{i}} . $$ Hence, the connection is torsion-free if and only if the Christoffel symbols $\Gamma_{j k}^{i}$ are symmetric in $j, k$. In particular, if the Christoffel symbols have this symmetry property in one system of coordinates, then also in every other system.
### The Levi-Civita connection.
Proposition 15.5. Let $(M, g)$ be a (pseudo-)Riemannian manifold. For any affine connection $\nabla$ on $M$, and any $Z \in \mathfrak{X}(M)$ the map $\nabla_{Z} g: \mathfrak{X}(M) \times \mathfrak{X}(M) \rightarrow C^{\infty}(M)$ given by $$ \left(\nabla_{Z} g\right)(X, Y)=Z g(X, Y)-g\left(\nabla_{Z}(X), Y\right)-g\left(X, \nabla_{Z}(Y)\right) $$ is $C^{\infty}(M)$-linear in both $X, Y$. It is called the covariant derivative of $g$ in the direction of $Z$. The connection $\nabla$ is called a metric connection if $\nabla_{Z} g=0$ for all $Z$. The proof is straightforward. In local coordinates and the corresponding Christoffel symbols for $\nabla$, we have $$ (\nabla g)_{i j}^{k}:=\left(\nabla_{\frac{\partial}{\partial x_{k}}} g\right)\left(\frac{\partial}{\partial x_{i}}, \frac{\partial}{\partial x_{j}}\right)=\frac{\partial g_{i j}}{\partial x_{k}}-\sum_{l}\left(\Gamma_{k i}^{l} g_{l j}+\Gamma_{k j}^{l} g_{l i}\right), $$ and $\nabla$ is a metric connection if and only if the right hand side vanishes. Theorem 15.6 (Fundamental Theorem of Riemannian Geometry). Suppose $(M, g)$ is a pseudo-Riemannian manifold. There exists a unique torsion-free metric connection $\nabla$ on $M$. It is called the Levi-Civita connection. Proof. Suppose $\nabla$ is a torsion-free metric connection. Since $\nabla$ is metric, we have $$ Z g(X, Y)=g\left(\nabla_{Z}(X), Y\right)+g\left(X, \nabla_{Z}(Y)\right) .
$$ Using the torsion-free condition $\nabla_{Z}(X)=\nabla_{X}(Z)+[Z, X]$ this gives $$ Z g(X, Y)=g\left(Y, \nabla_{X}(Z)\right)+g\left(X, \nabla_{Z}(Y)\right)+g([Z, X], Y) . $$ Permuting letters we also have $$ \begin{aligned} X g(Y, Z) & =g\left(Z, \nabla_{Y}(X)\right)+g\left(Y, \nabla_{X}(Z)\right)+g([X, Y], Z), \\ Y g(Z, X) & =g\left(X, \nabla_{Z}(Y)\right)+g\left(Z, \nabla_{Y}(X)\right)+g([Y, Z], X) . \end{aligned} $$ Use these equations to eliminate $\nabla_{X}$ and $\nabla_{Y}$, and obtain $Z g(X, Y)-X g(Y, Z)+Y g(Z, X)=2 g\left(X, \nabla_{Z}(Y)\right)+g([Z, X], Y)-g([X, Y], Z)+g([Y, Z], X)$, that is, $$ \begin{aligned} & 2 g\left(X, \nabla_{Z}(Y)\right) \\ & \quad=Z g(X, Y)-X g(Y, Z)+Y g(Z, X)-g([Z, X], Y)+g([X, Y], Z)-g([Y, Z], X) . \end{aligned} $$ Since $g$ is non-degenerate, any vector field $W$ is completely determined by its pairings $g(X, W)$ with all vector fields $X$. In particular, (6) specifies the vector field $W=\nabla_{Z}(Y)$. This shows that a torsion-free metric connection $\nabla$ is determined by the metric $g$. Conversely, it is straightforward to check that formula (6) defines a torsion-free metric connection. For instance, if we replace $Y$ by $f Y$ for some function $f$, we find $$ \begin{aligned} & 2\left(g\left(X, \nabla_{Z}(f Y)\right)-f g\left(X, \nabla_{Z}(Y)\right)\right) \\ & \quad=Z(f) g(X, Y)-X(f) g(Y, Z)+g(X(f) Y, Z)+g(Z(f) Y, X) \\ & \quad=2 Z(f) g(X, Y), \end{aligned} $$ which shows that $\nabla_{Z}(f Y)-f \nabla_{Z}(Y)=Z(f) Y$. The other properties are checked similarly. Exercise 15.7. Try to re-derive the explicit formula (6) for the Levi-Civita connection without looking at the notes. Fill in the details of showing that this formula defines a torsion-free metric connection. Corollary 15.8. Every manifold admits a torsion-free affine connection $\nabla$. Proof. We have seen that every manifold admits a Riemannian metric $g$. Thus one can take the Levi-Civita connection with respect to $g$.
Taking $X=\frac{\partial}{\partial x_{l}}, Y=\frac{\partial}{\partial x_{k}}, Z=\frac{\partial}{\partial x_{j}}$ to be coordinate vector fields in (6), we obtain a formula for the Christoffel symbols $\Gamma_{j k}^{i}$ of the Levi-Civita connection: $$ 2 \sum_{i} \Gamma_{j k}^{i} g_{i l}=\frac{\partial g_{k l}}{\partial x_{j}}+\frac{\partial g_{j l}}{\partial x_{k}}-\frac{\partial g_{j k}}{\partial x_{l}} . $$ Letting $\left(g^{-1}\right)_{i j}$ denote the inverse matrix to $g_{i j}$, this gives: Theorem 15.9. In local coordinates, the Christoffel symbols for the Levi-Civita connection are given by $$ \Gamma_{j k}^{i}=\frac{1}{2} \sum_{l}\left(g^{-1}\right)_{i l}\left(\frac{\partial g_{k l}}{\partial x_{j}}+\frac{\partial g_{j l}}{\partial x_{k}}-\frac{\partial g_{j k}}{\partial x_{l}}\right) . $$ We had seen a similar formula in the curves and surfaces course. In fact, we could have used this formula to define a connection in local coordinates, and then checked that the local definitions patch together. However, the significance of this rather complicated formula would remain obscure from such an approach. It is immediate from this formula that $\nabla$ is torsion-free, since the Christoffel symbols are symmetric in $j, k$. 15.3. Parallel transport. Let $\nabla$ be an affine connection on a manifold $M$. Since $\nabla_{X}(Y)$ is $C^{\infty}$-linear in the $X$-variable, the value of $\nabla_{X}(Y)$ at $p$ depends only on $X_{p}$. Thus if $v \in T_{p} M$ one can define $\nabla_{v}(Y) \in T_{p} M$ by $\nabla_{v}(Y):=\nabla_{X}(Y)_{p}$ where $X$ is any vector field with $X_{p}=v$.
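The formula of Theorem 15.9 is easy to check by machine. Below is a minimal numerical sketch (all helper names are ours, not from the notes): the metric data is supplied by hand, and we evaluate the formula for polar coordinates $(r, \theta)$ on $\mathbb{R}^{2} \backslash\{0\}$, where $g=\operatorname{diag}\left(1, r^{2}\right)$. This also solves Problem 15.3.1: the only non-vanishing symbols are $\Gamma_{\theta \theta}^{r}=-r$ and $\Gamma_{r \theta}^{\theta}=\Gamma_{\theta r}^{\theta}=1 / r$.

```python
# Sketch (our own helper): evaluate the Christoffel-symbol formula of
# Theorem 15.9, given the inverse metric and the partial derivatives
# of the metric components at a point.

def christoffel(g_inv, dg, x):
    """Gamma[i][j][k] = 1/2 sum_l (g^{-1})_{il} (d_j g_{kl} + d_k g_{jl} - d_l g_{jk}),
    where dg(x)[a][b][c] = d g_{bc} / d x_a."""
    gi, d = g_inv(x), dg(x)
    m = len(gi)
    return [[[0.5 * sum(gi[i][l] * (d[j][k][l] + d[k][j][l] - d[l][j][k])
                        for l in range(m))
              for k in range(m)]
             for j in range(m)]
            for i in range(m)]

# Polar coordinates (x_0, x_1) = (r, theta) on R^2 \ {0}: g = diag(1, r^2).
def g_inv_polar(x):
    r, _ = x
    return [[1.0, 0.0], [0.0, 1.0 / r ** 2]]

def dg_polar(x):
    r, _ = x
    # Only d g_{theta theta} / d r = 2 r is non-zero.
    return [[[0.0, 0.0], [0.0, 2.0 * r]],
            [[0.0, 0.0], [0.0, 0.0]]]

G = christoffel(g_inv_polar, dg_polar, (2.0, 0.3))
# Expected: Gamma^r_{theta theta} = -r = -2.0, Gamma^theta_{r theta} = 1/r = 0.5.
```

Note how the symbols are non-zero even though the metric is flat: Christoffel symbols are not tensorial, so they can vanish in Cartesian coordinates yet not in polar ones.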
If $\gamma: J \rightarrow M$ is any curve, one can therefore define $$ \nabla_{\dot{\gamma}(t)} Y \in T_{\gamma(t)} M $$ If $x(t)$ is the description of the curve $\gamma$ in local coordinates $x_{1}, \ldots, x_{m}$, so that $\dot{\gamma}=\sum_{i} \dot{x}_{i} \frac{\partial}{\partial x_{i}}$, and $Y=\sum_{k} b_{k} \frac{\partial}{\partial x_{k}}$, $$ \begin{aligned} \nabla_{\dot{\gamma}(t)} Y & =\sum_{i j} \dot{x}_{j}\left(\frac{\partial b_{i}}{\partial x_{j}}+\sum_{k} \Gamma_{j k}^{i} b_{k}\right) \frac{\partial}{\partial x_{i}} \\ & =\sum_{i}\left(\frac{d b_{i}}{d t}+\sum_{j k} \Gamma_{j k}^{i} \dot{x}_{j} b_{k}\right) \frac{\partial}{\partial x_{i}} . \end{aligned} $$ Here $\frac{d b_{i}}{d t}=\frac{d}{d t} b_{i}(x(t))$. Note that this formula depends only on the "restriction" of $Y$ to $\gamma$, or more precisely on the section of the pull-back bundle $\gamma^{*}(T M) \rightarrow J$ defined by $Y$. In fact, the formula makes sense for any vector field along $\gamma$, that is, any section of $\gamma^{*}(T M) \rightarrow J$. In local coordinates, vector fields along $\gamma$ are given by expressions $Y(t)=\sum_{k} b_{k}(t) \frac{\partial}{\partial x_{k}} \in T_{\gamma(t)} M$ depending smoothly on $t$, and the above formula in local coordinates defines a new vector field along $\gamma,{ }^{10}$ $$ \frac{D Y}{d t} \equiv \nabla_{\dot{\gamma}(t)} Y . $$ Definition 15.10. A vector field $Y$ along a curve $\gamma: J \rightarrow M$ is called parallel along $\gamma$ if the covariant derivative $\frac{D Y}{d t}$ vanishes everywhere. Theorem 15.11. Let $\nabla$ be an affine connection on a manifold $M$. Let $\gamma: J \rightarrow M$ be a smooth curve, $X_{0} \in T_{\gamma\left(t_{0}\right)} M$ where $t_{0} \in J$. Then there is a unique parallel vector field $X(t) \in T_{\gamma(t)} M$ along $\gamma$, with the property $X\left(t_{0}\right)=X_{0}$.
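Before proving Theorem 15.11, here is a numerical illustration (our own code, not from the notes). Setting the displayed formula for $\nabla_{\dot{\gamma}(t)} Y$ to zero gives a linear ODE for the coefficients $b_{i}$, which we integrate around the circle of latitude $\theta=\theta_{0}$ on the unit sphere, using the standard round metric $d \theta^{2}+\sin ^{2} \theta\, d \varphi^{2}$ and its well-known Christoffel symbols $\Gamma_{\varphi \varphi}^{\theta}=-\sin \theta \cos \theta$, $\Gamma_{\theta \varphi}^{\varphi}=\Gamma_{\varphi \theta}^{\varphi}=\cot \theta$ (a standard example, assumed here). After one loop the vector returns rotated by the angle $2 \pi \cos \theta_{0}$, and its $g$-length is preserved, as Proposition 15.13 below predicts for a metric connection.

```python
import math

def transport_on_sphere(theta0, b0, steps=2000, T=2 * math.pi):
    """Parallel-transport b = (b_theta, b_phi) along the latitude circle
    theta = theta0, phi = t, 0 <= t <= T, on the unit sphere, by integrating
        db_theta/dt =  sin(theta0) cos(theta0) * b_phi
        db_phi/dt   = -cot(theta0) * b_theta
    with a classical RK4 scheme (constant coefficients, so f ignores t)."""
    s, c = math.sin(theta0), math.cos(theta0)

    def f(b):
        return (s * c * b[1], -(c / s) * b[0])

    h = T / steps
    b = tuple(b0)
    for _ in range(steps):
        k1 = f(b)
        k2 = f((b[0] + h / 2 * k1[0], b[1] + h / 2 * k1[1]))
        k3 = f((b[0] + h / 2 * k2[0], b[1] + h / 2 * k2[1]))
        k4 = f((b[0] + h * k3[0], b[1] + h * k3[1]))
        b = (b[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             b[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return b

b_final = transport_on_sphere(math.pi / 3, (1.0, 0.0))
# For theta0 = pi/3 the holonomy angle is 2*pi*cos(theta0) = pi, so the
# vector (1, 0) returns as approximately (-1, 0), with g-norm 1 preserved.
```

The non-trivial rotation after a closed loop is a first glimpse of curvature: on a flat manifold, parallel transport around a contractible loop is the identity.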
The linear map $$ T_{\gamma\left(t_{0}\right)} M \rightarrow T_{\gamma(t)} M, \quad X_{0} \mapsto X(t) $$ is called parallel transport along $\gamma$, with respect to the connection $\nabla$. Proof. In local coordinates as above, parallel vector fields are the solutions of the first order ordinary differential equations, $$ \frac{d b_{i}}{d t}+\sum_{j k} \Gamma_{j k}^{i} \dot{x}_{j} b_{k}=0 . $$ Hence, for "short times" the theorem follows from the existence and uniqueness theorem for ODE's, and for "long times" by patching together local solutions. (Note that the system is linear in the $b_{i}$, so solutions cannot escape to infinity in finite time.) Proposition 15.12. Let $(M, g)$ be a pseudo-Riemannian manifold, and $\nabla$ an affine connection on $M$. Then $$ \frac{d}{d t} g(X(t), Y(t))=\left(\nabla_{\dot{\gamma}} g\right)(X(t), Y(t))+g\left(\frac{D X}{d t}, Y\right)+g\left(X, \frac{D Y}{d t}\right) . $$ Proof. In local coordinates, write $X(t)=\sum_{i} a_{i}(t) \frac{\partial}{\partial x_{i}}$ and $Y(t)=\sum_{j} b_{j}(t) \frac{\partial}{\partial x_{j}}$, and let $x(t)=\left(x_{1}(t), \ldots, x_{m}(t)\right)$ be the coordinate expression for the curve $\gamma$. Then $$ \begin{aligned} \frac{d}{d t} g(X(t), Y(t)) & =\frac{d}{d t} \sum_{i j} g_{i j} a_{i} b_{j} \\ & =\sum_{i j k} \dot{x}_{k} \frac{\partial g_{i j}}{\partial x_{k}} a_{i} b_{j}+\sum_{i j} g_{i j} \dot{a}_{i} b_{j}+\sum_{i j} g_{i j} a_{i} \dot{b}_{j} \end{aligned} $$ and $$ \begin{aligned} g\left(\frac{D X}{d t}, Y\right) & =\sum_{i j} g_{i j}\left(\dot{a}_{i}+\sum_{l m} \Gamma_{l m}^{i} \dot{x}_{l} a_{m}\right) b_{j}, \\ g\left(X, \frac{D Y}{d t}\right) & =\sum_{i j} g_{i j} a_{i}\left(\dot{b}_{j}+\sum_{l m} \Gamma_{l m}^{j} \dot{x}_{l} b_{m}\right) . \end{aligned} $$ ${ }^{10}$ Of course, it would be better to give a coordinate free definition. For this, one has to generalize the notion of an affine connection, and introduce connections $\nabla$ on vector bundles $E \rightarrow M$. For any $X \in \mathfrak{X}(M), \nabla_{X}$ is an endomorphism of the space of sections $\Gamma^{\infty}(E)$.
For any smooth map $F: N \rightarrow M$ one then obtains a connection $F^{*} \nabla$ on the pull-back bundle $F^{*} E$. In our case, we obtain a connection $\gamma^{*} \nabla$ on $\gamma^{*}(T M)$. One then defines $$ \frac{D Y}{d t}:=\left(\gamma^{*} \nabla\right)_{\frac{\partial}{\partial t}} Y(t) . $$ Taking these three equations together, and using $$ (\nabla g)_{i j}^{k}=\left(\nabla_{\frac{\partial}{\partial x_{k}}} g\right)\left(\frac{\partial}{\partial x_{i}}, \frac{\partial}{\partial x_{j}}\right)=\frac{\partial g_{i j}}{\partial x_{k}}-\sum_{l}\left(\Gamma_{k i}^{l} g_{l j}+\Gamma_{k j}^{l} g_{l i}\right), $$ the Proposition follows. ${ }^{11}$ As an immediate consequence, we have: Proposition 15.13. An affine connection $\nabla$ on a pseudo-Riemannian manifold $(M, g)$ is a metric connection if and only if parallel transport along curves preserves inner products. ## Geodesics Let $\nabla$ be an affine connection on a manifold $M$. Definition 16.1. A smooth curve $\gamma: J \rightarrow M$ is called a geodesic for the connection $\nabla$, if and only if the velocity vector field $\dot{\gamma}$ is parallel along $\gamma$. Exercise 16.2. Show that if $\gamma: J \rightarrow M$ is a geodesic, and $\phi: \tilde{J} \rightarrow J$ is a diffeomorphism (change of parameters), then $$ \tilde{\gamma}(\tilde{t})=\gamma(\phi(\tilde{t})) $$ is a geodesic if and only if $\frac{d \phi}{d \tilde{t}}=$ const, i.e. if and only if $\phi(\tilde{t})=a \tilde{t}+b$ for some $a \neq 0, b$. As a special case of the differential equation for a parallel vector field $X(t)=\sum_{i} b_{i}(t) \frac{\partial}{\partial x_{i}}$, here $X(t)=\dot{\gamma}$ i.e. $b_{i}=\dot{x}_{i}$, we find: Theorem 16.3.
In local coordinates, geodesics are the solutions of the second order ordinary differential equation, $$ \frac{d^{2} x_{i}}{d t^{2}}+\sum_{j k} \Gamma_{j k}^{i} \dot{x}_{j} \dot{x}_{k}=0 $$ Notice that only the symmetric part $\Gamma_{j k}^{i}+\Gamma_{k j}^{i}$, that is the torsion-free part of $\nabla$, contributes to the geodesic equation. Thus, if one is interested in the geodesic flow of a metric connection $\nabla$, one might as well assume that $\nabla$ is the Levi-Civita connection. On $\mathbb{R}^{m}$ with the standard Riemannian metric, geodesics are straight lines with constant speed parametrization. It is a standard trick in ODE theory to reduce higher order ODE's to a system of first order ODE's, by introducing the derivatives as new variables. In our case, if we introduce $\dot{x}_{i}=: \xi_{i}$, the geodesic equation becomes a system, $$ \begin{aligned} \frac{d x_{i}}{d t} & =\xi_{i} \\ \frac{d \xi_{i}}{d t} & =-\sum_{j k} \Gamma_{j k}^{i} \xi_{j} \xi_{k} . \end{aligned} $$ ${ }^{11}$ We had to resort to this terrible proof since we defined the covariant derivative along curves in coordinates only. In the coordinate free definition, the Proposition is almost a triviality because it is essentially just the definition of $\nabla g$! Notice that $x_{i}, \xi_{i}$ are just the standard local coordinates on $T M$ induced by the local coordinates $x_{i}$ on $M$. Hence, the above first order system defines a vector field $\mathcal{S}$ on $T M$, given in local coordinates by $$ \mathcal{S}=\sum_{i} \xi_{i} \frac{\partial}{\partial x_{i}}-\sum_{i j k} \Gamma_{j k}^{i} \xi_{j} \xi_{k} \frac{\partial}{\partial \xi_{i}} $$ Definition 16.4. The vector field $\mathcal{S}$ is called the geodesic spray of $\nabla$, and its flow is called the geodesic flow. Theorem 16.5. For any $p \in M, v \in T_{p} M$ there exists a unique maximal geodesic $\gamma_{v}: J \rightarrow M$, where $\gamma_{v}(0)=p, \dot{\gamma}_{v}(0)=v$. Proof.
Let $\Phi_{t}$ denote the geodesic flow, and $\pi: T M \rightarrow M$ the base point projection. The geodesics on $M$ are just the projections of solution curves of the geodesic spray $\mathcal{S}$. In particular, $\gamma_{v}$ is given by $$ \gamma_{v}(t)=\pi\left(\Phi_{t}(v)\right) $$ Notice that the geodesic flow has the property $$ \frac{\mathrm{d}}{\mathrm{d} t}\left(\pi\left(\Phi_{t}(v)\right)\right)=\Phi_{t}(v) . $$ This is the coordinate free reformulation of $\dot{x}_{i}=\xi_{i}$. Furthermore, it has the property $$ \Phi_{t}(a v)=a \Phi_{a t}(v) $$ for $a \in \mathbb{R}$; this just says that if $\gamma(t)$ is a geodesic with $\dot{\gamma}(0)=v$, then $t \mapsto \gamma(a t)$ is also a geodesic, with initial velocity $a v$ and velocity $a \dot{\gamma}(a t)$ at time $t$. Exercise 16.6. Show that every non-constant geodesic is regular, i.e. $\dot{\gamma} \neq 0$ everywhere. Definition 16.7. The manifold $M$ with affine connection $\nabla$ is called geodesically complete if the geodesic spray is a complete vector field. A (pseudo-)Riemannian manifold $(M, g)$ is called geodesically complete if it is geodesically complete for the Levi-Civita connection. Thus geodesic completeness means that all geodesics exist for all time. The property $\gamma_{a v}(t)=\gamma_{v}(a t)$ for all $a \in \mathbb{R}$ is reminiscent of a property of 1-parameter subgroups of Lie groups. Similar to the Lie groups case we define: Definition 16.8. Suppose $(M, \nabla)$ is geodesically complete. The map $$ \operatorname{Exp}_{p}: T_{p} M \rightarrow M, \quad v \mapsto \gamma_{v}(1) $$ is called the exponential map based at $p$. Compare with the very similar definition of exponential maps for Lie groups - the curves $\gamma_{v}$ play the role of 1-parameter subgroups! In terms of the exponential map, we have $$ \gamma_{v}(t)=\operatorname{Exp}_{p}(t v) $$ Theorem 16.9. The exponential map $\operatorname{Exp}_{p}$ is smooth. It defines a diffeomorphism from a neighborhood of $0 \in T_{p} M$ onto a neighborhood of $p \in M$. Proof.
Let $\Phi: \mathbb{R} \times T M \rightarrow T M$ denote the flow of the geodesic spray $\mathcal{S}$ for the connection $\nabla$, and let $\pi: T M \rightarrow M$ be the base point projection. Then $\operatorname{Exp}_{p}$ is just the restriction of the map $\pi \circ \Phi$ to the submanifold $\{1\} \times T_{p} M$, and hence is smooth. We compute $T_{0} \operatorname{Exp}_{p}$: For $v \in T_{p} M$ we have, $$ T_{0} \operatorname{Exp}_{p}(v)=\left.\frac{d}{d t}\right|_{t=0} \operatorname{Exp}_{p}(t v)=\left.\frac{d}{d t}\right|_{t=0} \gamma_{v}(t)=v $$ so $T_{0} \operatorname{Exp}_{p}$ is just the identity map $T_{p} M \rightarrow T_{p} M$. From the inverse function theorem, it then follows that $\operatorname{Exp}_{p}$ is a diffeomorphism on some small neighborhood of $0 \in T_{p} M$. If one chooses a basis in $T_{p} M$, thus identifying $T_{p} M \cong \mathbb{R}^{m}$, the exponential map gives a system of local coordinates $x_{1}, \ldots, x_{m}$ on a neighborhood of $p$. These coordinates are called normal coordinates at $p$, and have very nice properties: Theorem 16.10. In normal coordinates $x_{1}, \ldots, x_{m}$ based at $p \in M$, the geodesics through $p$ are given by straight lines, $$ x_{i}(t)=t a_{i}, \quad a_{i} \in \mathbb{R} . $$ Moreover, all Christoffel symbols $\Gamma_{j k}^{i}$ vanish at $0$. Proof. By definition of the exponential map, $t \mapsto \operatorname{Exp}_{p}(t a)$ for $a \in \mathbb{R}^{m} \cong T_{p} M$ is the geodesic with initial velocity $v=a$. Inserting $x_{i}(t)=t a_{i}$ into the geodesic equation, we obtain $$ \sum_{j k} \Gamma_{j k}^{i}(t a) a_{j} a_{k}=0 $$ for all $a$. Setting $t=0$, it follows that the symmetric part of $\Gamma_{j k}^{i}(0)$ in $j, k$ vanishes; since the Christoffel symbols of a torsion-free connection (in particular, of the Levi-Civita connection) are symmetric in $j, k$, we conclude $\Gamma_{j k}^{i}(0)=0$. We now specialize to the case that $\nabla$ is the Levi-Civita connection corresponding to a (pseudo-)Riemannian metric $g$ on $M$.
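As a concrete instance, the first order system for the geodesic spray can be integrated numerically. The sketch below (our own code; the round sphere and its Christoffel symbols are the standard example, not taken from the notes) integrates a geodesic of the unit sphere in coordinates $(\theta, \varphi)$, metric $d \theta^{2}+\sin ^{2} \theta\, d \varphi^{2}$, and checks two invariants: the energy $\frac{1}{2} g(\dot{\gamma}, \dot{\gamma})$ is constant, and the curve stays on the great circle cut out by the plane spanned in $\mathbb{R}^{3}$ by the initial position and velocity.

```python
import math

# Geodesic spray of the unit sphere: the standard Christoffel symbols are
# Gamma^theta_{phi phi} = -sin(theta) cos(theta) and
# Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cot(theta).
def spray(state):
    th, ph, xth, xph = state
    return (xth,
            xph,
            math.sin(th) * math.cos(th) * xph ** 2,
            -2.0 * (math.cos(th) / math.sin(th)) * xth * xph)

def integrate(state, T, steps=4000):
    """Classical RK4 integration of the first order geodesic system."""
    h = T / steps
    for _ in range(steps):
        k1 = spray(state)
        k2 = spray(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
        k3 = spray(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
        k4 = spray(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

def energy(state):
    th, _, xth, xph = state
    return 0.5 * (xth ** 2 + math.sin(th) ** 2 * xph ** 2)

def embed(th, ph):  # the sphere in R^3
    return (math.sin(th) * math.cos(ph), math.sin(th) * math.sin(ph), math.cos(th))

start = (math.pi / 3, 0.0, 0.3, 1.0)   # theta, phi, d theta/dt, d phi/dt
end = integrate(start, 2 * math.pi)

# Great-circle check: n = p0 x v0 is normal to the plane of the geodesic,
# so the embedded trajectory should stay orthogonal to n.
th, ph, xth, xph = start
p0 = embed(th, ph)
v0 = tuple(xth * a + xph * b for a, b in zip(
    (math.cos(th) * math.cos(ph), math.cos(th) * math.sin(ph), -math.sin(th)),
    (-math.sin(th) * math.sin(ph), math.sin(th) * math.cos(ph), 0.0)))
n = (p0[1] * v0[2] - p0[2] * v0[1],
     p0[2] * v0[0] - p0[0] * v0[2],
     p0[0] * v0[1] - p0[1] * v0[0])
dev = abs(sum(a * b for a, b in zip(n, embed(end[0], end[1]))))
```

The conserved energy is exactly the statement $\mathcal{S}(E)=0$ made below for the Levi-Civita connection; numerically it holds up to integration error.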
Define the energy function $$ E \in C^{\infty}(T M), \quad E(v)=\frac{1}{2} g_{p}(v, v), \quad v \in T_{p} M $$ (Thus, the energy function is just the quadratic function associated to $g$, up to the factor $\frac{1}{2}$.) Since parallel transport for a metric connection preserves inner products, the geodesic flow preserves the energy: That is, $\mathcal{S}(E)=0$. It follows that $\mathcal{S}$ is tangent to the level surfaces of the energy functional. Geodesics for the Levi-Civita connection have an important alternative characterization, as critical points of the action functional. Definition 16.11. Let $\gamma:[a, b] \rightarrow M$ be a smooth curve in $M .{ }^{12}$ One defines the action of $\gamma$ by $$ A(\gamma)=\int_{a}^{b} E(\dot{\gamma}(t)) \mathrm{d} t=\frac{1}{2} \int_{a}^{b}\|\dot{\gamma}(t)\|^{2} \mathrm{~d} t $$ In local coordinates, $A(\gamma)=\frac{1}{2} \int_{a}^{b} \sum_{i j} g_{i j}(x(t)) \dot{x}_{i} \dot{x}_{j} \mathrm{~d} t$. The action functional is closely related to the length functional (assuming that $g$ is positive definite, so that $L(\gamma)$ is defined): ${ }^{12}$ Here smooth means that $\gamma$ extends to a smooth curve on an open interval $J$ containing $[a, b]$. Lemma 16.12. Let $\gamma:[a, b] \rightarrow M$ be a smooth curve. Then $$ L(\gamma)^{2} \leq 2(b-a) A(\gamma) . $$ Equality holds if and only if $\gamma$ has constant speed, that is $\|\dot{\gamma}\|$ is constant. Proof. The Cauchy-Schwarz inequality ${ }^{13}$ implies $$ \left(\int_{a}^{b} f(t) \mathrm{d} t\right)^{2} \leq(b-a) \int_{a}^{b} f(t)^{2} \mathrm{~d} t $$ with equality if and only if $f$ is constant. Applying this with $f(t)=\|\dot{\gamma}(t)\|$ gives the claim. Suppose $\gamma:[a, b] \rightarrow M$ is a smooth curve. A 1-parameter variation of $\gamma$ is a family of curves $\gamma_{s}:[a, b] \rightarrow M$ defined for $-\epsilon<s<\epsilon$, with $\gamma_{s}(a)=\gamma(a)$ and $\gamma_{s}(b)=\gamma(b)$ for all $s$, $\gamma_{0}=\gamma$, and such that the map $(s, t) \mapsto \gamma_{s}(t)$ is smooth. Theorem 16.13.
A smooth curve $\gamma:[a, b] \rightarrow M$ is a geodesic if and only if for all 1-parameter variations $\gamma_{s}$ of $\gamma$, $$ \left.\frac{d}{d s}\right|_{s=0} A\left(\gamma_{s}\right)=0 $$ Proof. Let $\gamma_{s}(t)$ be a 1-parameter variation. We can view $\gamma_{s}(t)$ as a curve with parameter $t$, depending on $s$ as a parameter, or vice versa. Let a prime ${ }^{\prime}$ indicate $s$-derivatives. Since the Levi-Civita connection $\nabla$ is torsion-free, $$ \frac{D \gamma^{\prime}}{d t}=\frac{D \dot{\gamma}}{d s} $$ (In local coordinates, the left hand side is given by $\frac{D \gamma^{\prime}}{d t}=\sum_{i}\left(\frac{d^{2} x_{i}}{d s d t}+\sum_{j k} \Gamma_{j k}^{i} \dot{x}_{j} x_{k}^{\prime}\right) \frac{\partial}{\partial x_{i}}$, while the right hand side is given by a similar expression with $s, t$-derivatives in opposite order. The two expressions are the same since the Christoffel symbols for a torsion-free connection are symmetric in $j, k$.) Since $\nabla$ is a metric connection, we can therefore compute, $$ \begin{aligned} \frac{d}{d s} A\left(\gamma_{s}\right) & =\frac{1}{2} \int_{a}^{b} \frac{\partial}{\partial s} g(\dot{\gamma}, \dot{\gamma}) d t \\ & =\int_{a}^{b} g\left(\frac{D \dot{\gamma}}{d s}, \dot{\gamma}\right) d t \\ & =\int_{a}^{b} g\left(\frac{D \gamma^{\prime}}{d t}, \dot{\gamma}\right) d t \\ & =\int_{a}^{b} \frac{d}{d t} g\left(\gamma^{\prime}, \dot{\gamma}\right) d t-\int_{a}^{b} g\left(\gamma^{\prime}, \frac{D \dot{\gamma}}{d t}\right) d t \\ & =-\int_{a}^{b} g\left(\gamma^{\prime}, \frac{D \dot{\gamma}}{d t}\right) d t . \end{aligned} $$ ${ }^{13}$ The Cauchy-Schwarz inequality for integrals says that $$ \left(\int_{a}^{b} f(t) h(t) \mathrm{d} t\right)^{2} \leq\left(\int_{a}^{b} f(t)^{2} \mathrm{~d} t\right)\left(\int_{a}^{b} h(t)^{2} \mathrm{~d} t\right) $$ with equality if and only if $f, h$ are linearly dependent (i.e. proportional). The desired inequality follows by setting $h=1$.
Here we have used that $\gamma^{\prime}(a)=\gamma^{\prime}(b)=0$. The resulting expression vanishes at $s=0$, for all variations, if and only if $\frac{D \dot{\gamma}}{d t}=0$, i.e. if and only if $\gamma$ is a geodesic. In particular, if $\gamma:[a, b] \rightarrow M$ minimizes the action, in the sense that $$ A(\gamma) \leq A(\tilde{\gamma}) $$ for all paths $\tilde{\gamma}:[a, b] \rightarrow M$ (defined on the same interval $[a, b]$) with $\tilde{\gamma}(a)=\gamma(a), \quad \tilde{\gamma}(b)=\gamma(b)$, then $\gamma$ is a geodesic. (However, it is not necessary for a geodesic to minimize the action.) Theorem 16.14. A curve $\gamma:[a, b] \rightarrow M$ with $\|\dot{\gamma}(t)\|=$ const is a geodesic if and only if, for all 1-parameter variations $\gamma_{s}$ of $\gamma$, $$ \left.\frac{\partial}{\partial s}\right|_{s=0} L\left(\gamma_{s}\right)=0 $$ We leave the proof as an exercise. We have to put in by hand the assumption that $\gamma$ has constant speed, since the length functional is invariant under reparametrizations. (The 1-parameter variations $\gamma_{s}$ need not have constant speed.) In particular, length minimizing, constant speed curves are always geodesics. ## The Hopf-Rinow Theorem The Hopf-Rinow Theorem says that a Riemannian manifold $(M, g)$ is geodesically complete (the geodesic flow is complete, i.e. all geodesics exist for all time) if and only if it is complete as a metric space (every Cauchy sequence converges). To prepare for the proof, we need some more facts on normal coordinates and the exponential map. Definition 17.1. Let $(M, g)$ be a Riemannian manifold. The injectivity radius $i_{p}(M)>0$ of $p \in M$ is the supremum of the set of all $r>0$ such that the exponential map $\operatorname{Exp}_{p}$ is defined on the open ball $B_{r}(0)$ and is injective there. The injectivity radius $i(M) \geq 0$ of $M$ is the infimum of all $i_{p}(M)$ with $p \in M$. Example 17.2.
For the unit circle $S^{1} \subset \mathbb{R}^{2}$ with the standard Riemannian metric, each point has injectivity radius $\pi$. Similarly, for the sphere $M=S^{m-1} \subset \mathbb{R}^{m}$, the injectivity radius of any point is $i_{p}(M)=\pi$. For $M=\mathbb{R}^{m}, i_{p}(M)=\infty$. Theorem 17.3. For all $0<r<i_{p}(M)$, the radial geodesics $\operatorname{Exp}_{p}(t v)$ intersect the spheres $\operatorname{Exp}_{p}\left(S_{r}(0)\right)$ orthogonally. For any $v \in S_{r}(0)$, the point $q=\operatorname{Exp}_{p}(v)$ has distance $d(p, q)=r$ from $p$, and the geodesic $\operatorname{Exp}_{p}(t v)$ is the unique (up to reparametrization) curve of length $d(p, q)$ connecting $p, q$. In particular, $$ \operatorname{Exp}_{p}\left(S_{r}(0)\right)=S_{r}(p) $$ for any $0<r<i_{p}(M)$. We will obtain this result as a consequence of the following Lemma on "geodesic polar coordinates" around $p$. Let $x_{1}, \ldots, x_{m}$ denote the normal coordinates on a neighborhood $U$ of $p$, obtained by choosing an orthonormal basis in $T_{p} M$. In these coordinates, $$ g_{i j}(0)=\delta_{i j}, \quad \Gamma_{j k}^{i}(0)=0 . $$ Introduce polar coordinates $\left(\rho, \phi_{1}, \ldots, \phi_{m-1}\right)$ on $T_{p} M$, thus $\rho^{2}=\sum x_{i}^{2}$ and $\phi_{1}, \ldots, \phi_{m-1}$ are local coordinates on the unit sphere $S^{m-1} \subset T_{p} M$. (The particular choice of coordinates on $S^{m-1}$ will be irrelevant.) Using $\operatorname{Exp}_{p}$, we can view these as coordinates on (suitable open subsets of) $\operatorname{Exp}_{p}\left(B_{r}(0)\right)$ for $r<i_{p}(M)$. In particular, the coordinate vector field $\frac{\partial}{\partial \rho}$, given as $$ \frac{\partial}{\partial \rho}=\frac{1}{\|x\|} \sum_{i=1}^{m} x_{i} \frac{\partial}{\partial x_{i}}, $$ is a well-defined vector field on $\operatorname{Exp}_{p}\left(B_{r}(0) \backslash\{0\}\right)$. Note that its integral curves are exactly the unit speed radial geodesics. Lemma 17.4 (Geodesic polar coordinates).
In geodesic polar coordinates around $p$, $$ g_{\rho \rho} \equiv g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \rho}\right)=1, \quad g_{\rho \phi_{j}} \equiv g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \phi_{j}}\right)=0 . $$ That is, the radial geodesics $\operatorname{Exp}_{p}(t v)$ are orthogonal to the spheres $\operatorname{Exp}_{p}\left(S_{r}(0)\right)$, for all $0<r<i_{p}(M)$. Proof. The integral curves of $\frac{\partial}{\partial \rho}$ are unit speed radial geodesics, thus $\nabla_{\frac{\partial}{\partial \rho}} \frac{\partial}{\partial \rho}=0$. In particular, the length of $\frac{\partial}{\partial \rho}$ is constant along radial geodesics. But $$ \left.\lim _{t \rightarrow 0} g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \rho}\right)\right|_{t x}=1 $$ since $g_{i j}(0)=\delta_{i j}$ and since $\frac{\partial}{\partial \rho}$ has length one in the Euclidean metric. It follows that $g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \rho}\right)=1$ everywhere. Furthermore, using that the connection is torsion-free and that coordinate vector fields commute, $$ \begin{aligned} \frac{\partial}{\partial \rho} g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \phi_{j}}\right) & =g\left(\nabla_{\frac{\partial}{\partial \rho}} \frac{\partial}{\partial \rho}, \frac{\partial}{\partial \phi_{j}}\right)+g\left(\frac{\partial}{\partial \rho}, \nabla_{\frac{\partial}{\partial \rho}} \frac{\partial}{\partial \phi_{j}}\right) \\ & =g\left(\frac{\partial}{\partial \rho}, \nabla_{\frac{\partial}{\partial \phi_{j}}} \frac{\partial}{\partial \rho}\right) \\ & =\frac{1}{2} \frac{\partial}{\partial \phi_{j}} g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \rho}\right) \\ & =\frac{1}{2} \frac{\partial}{\partial \phi_{j}} 1=0 . \end{aligned} $$ Thus $g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \phi_{j}}\right)$ is constant in radial directions.
But $\left.\lim _{t \rightarrow 0} g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \phi_{j}}\right)\right|_{t x}=0$, again since $g_{i j}(0)=\delta_{i j}$. Thus $g\left(\frac{\partial}{\partial \rho}, \frac{\partial}{\partial \phi_{j}}\right)=0$ everywhere. Proof of Theorem 17.3. Let $\gamma(t)(0 \leq t \leq 1)$ be any curve with $\gamma(0)=p$ and $\gamma(1)=q$. Suppose first that $\gamma(t) \in \operatorname{Exp}_{p}\left(B_{r}(0) \backslash\{0\}\right)$ for $0<t<1$. In geodesic polar coordinates $$ \dot{\gamma}=\dot{\rho} \frac{\partial}{\partial \rho}+\sum_{j} \dot{\phi}_{j} \frac{\partial}{\partial \phi_{j}} $$ thus $g(\dot{\gamma}, \dot{\gamma}) \geq|\dot{\rho}|^{2}$ with equality if and only if $\phi_{j}=$ const. It follows that $$ \begin{aligned} L(\gamma) & =\int_{0}^{1} g(\dot{\gamma}, \dot{\gamma})^{1 / 2} d t \\ & \geq \int_{0}^{1}|\dot{\rho}| d t \\ & \geq \int_{0}^{1} \dot{\rho} d t \\ & =\rho(1)=r, \end{aligned} $$ with equality if and only if $\phi_{j}=$ const and $\dot{\rho} \geq 0$ for all $t$. Clearly, curves leaving the set $\operatorname{Exp}_{p}\left(B_{r}(0)\right)$ for some time $t \in(0,1)$ will be even longer. Corollary 17.5. Let $p, q \in M$. Suppose there exists a piecewise smooth curve $\gamma:[0,1] \rightarrow$ $M$ of length $d(p, q)$ from $p$ to $q$. Then $\gamma$ is a reparametrization of a smooth (!) geodesic of length $d(p, q)$. Proof. Since $\gamma([0,1]) \subset M$ is compact, the infimum of the set of all injectivity radii $i_{\gamma(t)}(M)$ is strictly positive. Let $\epsilon>0$ be smaller than this infimum. Then for any two points on the curve, of distance less than $\epsilon$, the unique shortest curve connecting these points is the geodesic given by the exponential map. In particular, $\gamma$ must coincide with that geodesic up to reparametrization. We are now ready to prove the Hopf-Rinow theorem. 
We recall that a sequence $x_{n}, n=1, \ldots, \infty$ in a metric space $(X, d)$ (where $d$ is the metric = distance function) is a Cauchy sequence if for all $\epsilon>0$, there exists $N>0$ such that $d\left(x_{n}, x_{m}\right)<\epsilon$ for $n, m \geq N$. In particular, every convergent sequence is a Cauchy sequence. A metric space is called complete if every Cauchy sequence in $X$ converges. For instance, every compact metric space is complete, while e.g. bounded open subsets of $\mathbb{R}^{m}$ (with induced metric) are incomplete. Exercise 17.6. Show that every Cauchy sequence is bounded. That is, there exists $p \in X$ and $R>0$ such that $x_{n} \in B_{R}(p)$ for all $n$. Theorem 17.7 (Hopf-Rinow). A Riemannian manifold $(M, g)$ is geodesically complete, if and only if it is complete as a metric space. In this case, any two points $p, q$ may be joined by a smooth geodesic of length $d(p, q)$. Proof. We may assume that $M$ is connected. Suppose $M$ is geodesically incomplete. That is, there exists a maximal unit speed geodesic $\gamma:(a, b) \rightarrow M$ with $b<\infty$. Since $d\left(\gamma\left(t_{i}\right), \gamma\left(t_{j}\right)\right) \leq\left|t_{j}-t_{i}\right|$, it follows that for any sequence $t_{i} \rightarrow b$, the points $\gamma\left(t_{i}\right)$ form a Cauchy sequence. On the other hand, this sequence cannot converge, since $\gamma(t)$ leaves every given compact set ${ }^{14}$ for $t \rightarrow b$. Thus we have found a non-convergent Cauchy sequence, showing that $M$ is incomplete as a metric space. The other direction is a bit harder: Suppose $M$ is geodesically complete. Pick $p \in M$. We will show that every closed metric ball $\overline{B_{r}(p)}$ is compact, which implies that $M$ is metrically ${ }^{14}$ For any compact set $K$, there exists $\epsilon>0$ less than the injectivity radius of every point in $K$. Hence, unit speed geodesics starting in $K$ exist at least for time $\epsilon$.
complete (Any Cauchy sequence is bounded, hence is contained in $\overline{B_{r}(p)}$ for $r$ sufficiently large. Any Cauchy sequence in a compact set converges.) By geodesic completeness, the exponential map $$ \operatorname{Exp}_{p}: T_{p} M \rightarrow M $$ is defined. It suffices to show that for all $r>0$, $$ \operatorname{Exp}_{p}\left(\overline{B_{r}(0)}\right)=\overline{B_{r}(p)} $$ where $B_{r}(0) \subset T_{p} M$ is the ball of radius $r$ for the inner product $g_{p}$. Indeed, $\overline{B_{r}(0)}$ is compact, and images of compact sets under continuous maps are again compact. The inclusion $\subseteq$ is clear; the harder part is the opposite inclusion $\overline{B_{r}(p)} \subseteq \operatorname{Exp}_{p}\left(\overline{B_{r}(0)}\right)$. Let $$ H=\left\{r>0 \mid \operatorname{Exp}_{p}\left(\overline{B_{r}(0)}\right)=\overline{B_{r}(p)}\right\} . $$ We have to show that $H$ is all of $(0, \infty)$; note that $H$ contains all sufficiently small $r>0$, by Theorem 17.3. We first show that $H$ is closed. Let $r_{n} \in H$ with $\lim _{n \rightarrow \infty} r_{n}=r$. We have to show $r \in H$. Given $q \in \overline{B_{r}(p)}$, choose $q_{n} \in \overline{B_{r_{n}}(p)}$ with $q_{n} \rightarrow q$. Choose $v_{n} \in \overline{B_{r_{n}}(0)}$ with $\operatorname{Exp}_{p}\left(v_{n}\right)=q_{n}$. Since the $v_{n}$ all lie in the compact ball $\overline{B_{R}(0)}$, where $R=\sup _{n} r_{n}<\infty$, there exists a convergent subsequence, with some limit point $v$. Since $\left\|v_{n}\right\| \leq r_{n} \rightarrow r$, we have $v \in \overline{B_{r}(0)}$, and $\operatorname{Exp}_{p}(v)=q$ by continuity. Since $q$ was arbitrary, this shows $r \in H$. We next show that if $r \in H$ then $r+\epsilon \in H$ for $\epsilon>0$ sufficiently small. Since $H$ is closed and contains all small $r>0$, this will finish the proof that $H=(0, \infty)$. Choose $0<\epsilon<i_{K}(M):=\inf _{x \in K} i_{x}(M)$, where $K$ is the compact subset $K=\overline{B_{r}(p)}$, and let $q \in \overline{B_{r+\epsilon}(p)}$. Thus, for any $x \in \overline{B_{r}(p)}$ with $d(x, q) \leq \epsilon$, there exists a unique geodesic in $M$ joining $x, q$, of length $d(x, q)$.
We may assume $d(p, q)>r$; otherwise $q \in \operatorname{Exp}_{p}\left(\overline{B_{r}(0)}\right)$ already, since $r \in H$. To find such a point $x$, choose a sequence of curves $\gamma_{n}:[0,1] \rightarrow M$ connecting $p, q$, of length $\leq d(p, q)+\frac{1}{n}$. Let $t_{n} \in[0,1]$ be the smallest value such that $x_{n}:=\gamma_{n}\left(t_{n}\right) \in \partial \overline{B_{r}(p)}$. We have $$ d(p, q) \leq d\left(p, x_{n}\right)+d\left(x_{n}, q\right) \leq L\left(\gamma_{n}\right) \leq d(p, q)+\frac{1}{n} . $$ Since $\overline{B_{r}(p)}$ is compact, some subsequence of the sequence $x_{n}$ converges to a limit point $x \in \partial \overline{B_{r}(p)}$, with $$ d(p, x)+d(x, q)=d(p, q) . $$ Since $d(p, q) \leq r+\epsilon$ and $d(p, x)=r$, this implies $d(x, q) \leq \epsilon$. Choose $v \in \overline{B_{r}(0)}$ with $\operatorname{Exp}_{p}(v)=x$; then $\|v\|=r$. Since $d(x, q) \leq \epsilon$, there exists a unique unit speed geodesic of length $d(x, q)$ from $x$ to $q$. Together with the unit speed geodesic $\operatorname{Exp}_{p}(t v / r)$, we obtain a piecewise smooth curve of length $d(p, x)+d(x, q)=d(p, q)$ from $p$ to $q$. As observed above, it is automatic that this curve is smooth, hence a geodesic. It hence coincides with the unique continuation of the geodesic $\operatorname{Exp}_{p}(t v / r)$. It follows that $\operatorname{Exp}_{p}(\tilde{v})=q$ for $$ \tilde{v}=\frac{d(p, q)}{r} v \in \overline{B_{r+\epsilon}(0)}, $$ using $d(p, q) \leq r+\epsilon$ and $\|v\|=r$. Note that we didn't quite use geodesic completeness in the proof: We only used that $\operatorname{Exp}_{p}$ is defined on all of $T_{p} M$. One might call this geodesic completeness at $p$. What we've shown is that geodesic completeness at any point $p$ implies geodesic completeness everywhere. ## The curvature tensor An affine connection $\nabla$ on a manifold $M$ is called flat if, around any point, there exist local coordinates in which all Christoffel symbols of $\nabla$ vanish. A (pseudo-)Riemannian metric is called flat if the corresponding Levi-Civita connection is flat.
Flatness of a connection implies that parallel transport along a path does not change under 1-parameter variations of the path. In practice, the definition is not always easy to verify, mainly because Christoffel symbols may vanish in one coordinate system and be nonzero in another. One is therefore interested in invariants of a connection: That is, quantities constructed from the connection whose vanishing does not depend on a choice of coordinates. One such example is the torsion $T(X, Y)=\nabla_{X}(Y)-\nabla_{Y}(X)-[X, Y]$ of a connection: Recall that by $C^{\infty}$-linearity, it defines a bi-linear map $T: T_{p} M \times T_{p} M \rightarrow T_{p} M$, and clearly $T$ has to vanish for any flat connection. A second invariant is the curvature operator to be discussed now. Definition 18.1. For vector fields $X, Y$ one defines the curvature operator $R(X, Y)$ : $\mathfrak{X}(M) \rightarrow \mathfrak{X}(M)$ by $$ R(X, Y)(Z)=\nabla_{X} \nabla_{Y} Z-\nabla_{Y} \nabla_{X} Z-\nabla_{[X, Y]} Z . $$ In short, $R(X, Y)=\left[\nabla_{X}, \nabla_{Y}\right]-\nabla_{[X, Y]}$. Theorem 18.2. The map $(X, Y, Z) \mapsto R(X, Y)(Z)$ is $C^{\infty}(M)$-linear in $X, Y, Z$. It follows that for $u, v \in T_{p} M$, there is a well-defined linear map $R_{p}(u, v): T_{p} M \rightarrow T_{p} M$ such that $$ R_{p}(u, v)(w)=(R(X, Y)(Z))_{p} $$ whenever $X_{p}=u, Y_{p}=v, Z_{p}=w$. Proof. $R$ is $C^{\infty}(M)$-linear in $Z$ : For all $f$, $$ \begin{aligned} & \nabla_{X} \nabla_{Y}(f Z)=f \nabla_{X} \nabla_{Y} Z+X(f) \nabla_{Y} Z+Y(f) \nabla_{X} Z+X(Y(f)) Z, \\ & \nabla_{Y} \nabla_{X}(f Z)=f \nabla_{Y} \nabla_{X} Z+Y(f) \nabla_{X} Z+X(f) \nabla_{Y} Z+Y(X(f)) Z, \\ & \nabla_{[X, Y]}(f Z)=f \nabla_{[X, Y]} Z+[X, Y](f) Z . \end{aligned} $$ Subtracting the last two equations from the first, we find $R(X, Y)(f Z)=f R(X, Y)(Z)$ as desired. Similarly one checks $C^{\infty}(M)$-linearity in $X, Y$. 
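Theorem 18.2 can be spot-checked symbolically in a small example. The following sketch (a hypothetical computation assuming SymPy is available; the connection 1-form coefficients $a, b$ and the functions $f, s$ are made up for illustration) verifies $C^{\infty}$-linearity in the section argument for the connection $\nabla_{V}=V+A(V)$ on the trivial line bundle over $\mathbb{R}^{2}$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Vector fields on R^2, encoded by their coefficients: V = V[0] d/dx + V[1] d/dy.
def apply_vf(V, h):
    """V(h), the derivative of the function h along V."""
    return V[0]*sp.diff(h, x) + V[1]*sp.diff(h, y)

def bracket(V, W):
    """The Lie bracket [V, W], again as a coefficient pair."""
    return (apply_vf(V, W[0]) - apply_vf(W, V[0]),
            apply_vf(V, W[1]) - apply_vf(W, V[1]))

# A connection on the trivial line bundle R^2 x R: nabla_V s = V(s) + A(V) s,
# with connection 1-form A = a dx + b dy for arbitrary symbolic functions a, b.
a = sp.Function('a')(x, y)
b = sp.Function('b')(x, y)

def nabla(V, s):
    return apply_vf(V, s) + (V[0]*a + V[1]*b)*s

def R(V, W, s):
    """Curvature operator [nabla_V, nabla_W] - nabla_{[V,W]} applied to s."""
    return nabla(V, nabla(W, s)) - nabla(W, nabla(V, s)) - nabla(bracket(V, W), s)

X = (sp.Integer(1), sp.Integer(0))   # d/dx
Y = (x, sp.Integer(1))               # x d/dx + d/dy, so that [X, Y] = d/dx != 0
f = sp.Function('f')(x, y)
s = sp.Function('s')(x, y)

# C^infty(M)-linearity in the section argument (Theorem 18.2):
assert sp.expand(R(X, Y, f*s) - f*R(X, Y, s)) == 0
```

In this rank-one case the computation also shows what the curvature is: all second derivatives of $s$ cancel and $R(X,Y)$ acts as multiplication by $X(A(Y))-Y(A(X))-A([X,Y])$, here $\partial_x b-\partial_y a$.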
$C^{\infty}$-linearity of the curvature operator implies that in local charts, $R$ is determined by its values on coordinate vector fields. We can thus introduce components $R_{i j k}^{l}$ of the curvature tensor, defined by $$ R\left(\frac{\partial}{\partial x_{i}}, \frac{\partial}{\partial x_{j}}\right)\left(\frac{\partial}{\partial x_{k}}\right)=\sum_{l} R_{i j k}^{l} \frac{\partial}{\partial x_{l}} . $$ These can be expressed in terms of Christoffel symbols: We find, after a short calculation, $$ R_{i j k}^{l}=\frac{\partial \Gamma_{j k}^{l}}{\partial x_{i}}-\frac{\partial \Gamma_{i k}^{l}}{\partial x_{j}}+\sum_{r}\left(\Gamma_{j k}^{r} \Gamma_{i r}^{l}-\Gamma_{i k}^{r} \Gamma_{j r}^{l}\right) . $$ Recall that this complicated expression appeared in the proof of Gauss's theorema egregium in the curves and surfaces course, but it was somewhat unmotivated back then! Since $R$ has four indices, the curvature tensor seems to give $(\operatorname{dim} M)^{4}$ invariants of a connection. In reality, the number is much smaller, due to symmetry properties of the curvature tensor. First of all, it is of course anti-symmetric in $X, Y$. More interesting is: Theorem 18.3 (Bianchi identity). Suppose $\nabla$ has vanishing torsion. Then $$ R(X, Y) Z+R(Y, Z) X+R(Z, X) Y=0 . $$ That is, in local coordinates, $R_{i j k}^{l}+R_{j k i}^{l}+R_{k i j}^{l}=0$. Proof. We show that the left hand side vanishes at any given $p \in M$. Let $\operatorname{Exp}_{p}$ : $B_{r}(0) \rightarrow M$ be the exponential map, where $r<i_{p}(M)$. Introduce normal coordinates on $U=\operatorname{Exp}_{p}\left(B_{r}(0)\right)=B_{r}(p)$. 
Then all Christoffel symbols $\Gamma_{i j}^{k}$ vanish at $p$, and we have (at $p$) $$ R_{i j k}^{l}+R_{j k i}^{l}+R_{k i j}^{l}=\frac{\partial \Gamma_{j k}^{l}}{\partial x_{i}}-\frac{\partial \Gamma_{i k}^{l}}{\partial x_{j}}+\frac{\partial \Gamma_{k i}^{l}}{\partial x_{j}}-\frac{\partial \Gamma_{j i}^{l}}{\partial x_{k}}+\frac{\partial \Gamma_{i j}^{l}}{\partial x_{k}}-\frac{\partial \Gamma_{k j}^{l}}{\partial x_{i}} . $$ In the torsion-free case, this vanishes since the Christoffel symbols are symmetric in the lower indices. Exercise 18.4. Give a coordinate-free proof of the Bianchi identity. Suppose now that $g$ is a (pseudo-)Riemannian metric and $\nabla$ the corresponding Levi-Civita connection. For vector fields $X, Y, Z, W$ define the curvature tensor of $g$ by $$ R(X, Y, Z, W)=g(R(X, Y) Z, W) . $$ In components $R_{i j k l}=R\left(\frac{\partial}{\partial x_{i}}, \frac{\partial}{\partial x_{j}}, \frac{\partial}{\partial x_{k}}, \frac{\partial}{\partial x_{l}}\right)$ we have, $$ R_{i j k l}=\sum_{r} R_{i j k}^{r} g_{r l} $$ Theorem 18.5. The curvature tensor has the symmetry properties, $$ R(X, Y, Z, W)=-R(Y, X, Z, W)=-R(X, Y, W, Z)=R(Z, W, X, Y) $$ and $$ R(X, Y, Z, W)+R(Y, Z, X, W)+R(Z, X, Y, W)=0 . $$ Proof. The last identity is just re-stating the Bianchi identity. Anti-symmetry of $R(X, Y, Z, W)$ in the first two entries $X, Y$ is obvious from the definition. To prove anti-symmetry in the last two entries, it is enough to show that $R(X, Y, Z, Z)=0$ for all $X, Y, Z$. 
We have, $$ \begin{aligned} R(X, Y, Z, Z)= & g\left(\nabla_{X} \nabla_{Y}(Z), Z\right)-g\left(\nabla_{Y} \nabla_{X}(Z), Z\right)-g\left(\nabla_{[X, Y]} Z, Z\right) \\ = & X g\left(\nabla_{Y}(Z), Z\right)-g\left(\nabla_{Y}(Z), \nabla_{X}(Z)\right)-Y g\left(\nabla_{X}(Z), Z\right)+g\left(\nabla_{X}(Z), \nabla_{Y}(Z)\right) \\ & -\frac{1}{2}[X, Y] g(Z, Z) \\ = & X g\left(\nabla_{Y}(Z), Z\right)-Y g\left(\nabla_{X}(Z), Z\right)-\frac{1}{2}[X, Y] g(Z, Z) \\ = & \frac{1}{2}(X(Y g(Z, Z))-Y(X g(Z, Z))-[X, Y] g(Z, Z)) \\ = & 0 . \end{aligned} $$ It remains to prove $R(X, Y, Z, W)=R(Z, W, X, Y)$. In fact, this is a consequence of the other symmetry properties, although in a rather non-obvious way. First, one adds the four equations obtained from the Bianchi identity $R(X, Y, Z, W)+R(Y, Z, X, W)+R(Z, X, Y, W)=0$ by interchanging $W$ with $X, Y, Z$. This gives $$ \begin{aligned} & R(X, Y, Z, W)+R(Y, Z, X, W)+R(Z, X, Y, W) \\ + & R(W, Y, Z, X)+R(Y, Z, W, X)+R(Z, W, Y, X) \\ + & R(X, W, Z, Y)+R(W, Z, X, Y)+R(Z, X, W, Y) \\ + & R(X, Y, W, Z)+R(Y, W, X, Z)+R(W, X, Y, Z)=0 . \end{aligned} $$ Using anti-symmetry in the first two entries and the last two entries, we obtain some cancellations and find, $$ R(W, X, Y, Z)+R(W, Y, Z, X)+R(W, Z, X, Y)=0 . $$ Using the Bianchi identity again, we have $$ \begin{aligned} R(Z, X, W, Y) & =-R(W, Z, X, Y)-R(X, W, Z, Y) \\ & =-R(W, Z, X, Y)-R(W, X, Y, Z) \\ & =R(W, Y, Z, X) . \end{aligned} $$ Exercise 18.6. Prove anti-symmetry of $R(X, Y, Z, W)$ in $Z, W$ in local (normal) coordinates, similar to our proof of the Bianchi identity. ## Connections on vector bundles In this section we define connections on a vector bundle $E \rightarrow M$. (We are mainly interested in $E=T M$, but other bundles will appear as well.) Definition 19.1. 
A connection (covariant derivative) on $E$ is a bi-linear map $$ \nabla: \mathfrak{X}(M) \times \Gamma^{\infty}(E) \rightarrow \Gamma^{\infty}(E),(X, \sigma) \mapsto \nabla_{X} \sigma, $$ such that $\nabla$ is $C^{\infty}$-linear in the $X$ variable and $$ \nabla_{X}(f \sigma)=f \nabla_{X} \sigma+X(f) \sigma $$ for all $f \in C^{\infty}(M), X \in \mathfrak{X}(M), \sigma \in \Gamma^{\infty}(E)$. Definition 19.2. The curvature operator corresponding to the connection $\nabla$ is the linear map $R(X, Y): \Gamma^{\infty}(E) \rightarrow \Gamma^{\infty}(E)$, $$ R(X, Y)=\nabla_{X} \nabla_{Y}-\nabla_{Y} \nabla_{X}-\nabla_{[X, Y]} . $$ As for affine connections, the curvature operator $R(X, Y)$ is in fact a $C^{\infty}$-linear map, and moreover is $C^{\infty}$-linear in $X, Y$ also. 19.1. Connections on trivial bundles. Let us first consider the case of a trivial vector bundle, $E=M \times \mathbb{R}^{k}$. Let $e_{1}, \ldots, e_{k}$ be the standard basis of $\mathbb{R}^{k}$. These define "constant" sections $\epsilon_{1}, \ldots, \epsilon_{k}$ of $M \times \mathbb{R}^{k}$, and the most general section has the form, $$ \sigma=\sum_{a} \sigma_{a} \epsilon_{a} $$ where the $\sigma_{a}$ are functions. It is immediate that $$ \nabla_{X}^{0} \sigma:=\sum_{a} X\left(\sigma_{a}\right) \epsilon_{a} $$ defines a connection. This is called the trivial connection on the trivial bundle $E=M \times \mathbb{R}^{k}$. Now let $\nabla_{X}$ be any connection. Define a map $$ A: \mathfrak{X}(M) \rightarrow C^{\infty}\left(M, \operatorname{End}\left(\mathbb{R}^{k}\right)\right), X \mapsto A(X) $$ by $$ \nabla_{X} \sigma=\nabla_{X}^{0} \sigma+A(X) \sigma . $$ Thus $A(X)$ is a matrix-valued function on $M$, measuring the difference from the trivial connection. Letting $A_{a b}(X)$ be its components, we have $$ \nabla_{X} \sigma=\sum_{a} X\left(\sigma_{a}\right) \epsilon_{a}+\sum_{a b} A_{a b}(X) \sigma_{b} \epsilon_{a} . 
$$ That is, $$ \left(\nabla_{X} \sigma\right)_{a}=X\left(\sigma_{a}\right)+\sum_{b} A_{a b}(X) \sigma_{b} $$ Notice that the map $X \mapsto A(X)$ is $C^{\infty}(M)$-linear. Conversely, every $C^{\infty}(M)$-linear map of this form defines a connection. That is: Proposition 19.3. The space of connections on a trivial bundle $E=M \times \mathbb{R}^{k}$ is in 1-1 correspondence with the space of $C^{\infty}(M)$-linear maps, $\mathfrak{X}(M) \rightarrow C^{\infty}\left(M, \operatorname{End}\left(\mathbb{R}^{k}\right)\right), X \mapsto A(X)$. Under this correspondence, the map $A$ defines the connection $$ \nabla_{X}=\nabla_{X}^{0}+A(X) $$ One calls $A$ the connection 1-form of the connection $\nabla$. Suppose now that $\epsilon_{a}^{\prime} \in \Gamma^{\infty}\left(M, \mathbb{R}^{k}\right)$ is a new basis of the space of sections. That is, $$ \epsilon_{a}^{\prime}=\sum_{b}\left(g^{-1}\right)_{b a} \epsilon_{b} $$ where the matrix-valued function $g$ with coefficients $g_{a b} \in C^{\infty}(M)$ is invertible everywhere. Let $\sigma_{a}^{\prime}$ denote the components of $\sigma$ in the new basis, i.e. $$ \sigma_{a}^{\prime}=\sum_{b} g_{a b} \sigma_{b} $$ Define the connection 1-form $A^{\prime}$ of $\nabla$ in the new basis by $$ \nabla_{X} \sigma=\sum_{a}\left(X\left(\sigma_{a}^{\prime}\right)+\sum_{b} A^{\prime}(X)_{a b} \sigma_{b}^{\prime}\right) \epsilon_{a}^{\prime} $$ Expressing both sides in the basis $\epsilon_{a}$, we find $$ \begin{aligned} X\left(\sigma_{c}\right)+\sum_{b} A(X)_{c b} \sigma_{b} & =\sum_{a}\left(g^{-1}\right)_{c a}\left(X\left(\sigma_{a}^{\prime}\right)+\sum_{b} A^{\prime}(X)_{a b} \sigma_{b}^{\prime}\right) \\ & =\sum_{a}\left(g^{-1}\right)_{c a}\left(\sum_{b} g_{a b} X\left(\sigma_{b}\right)+\sum_{b} X\left(g_{a b}\right) \sigma_{b}+\sum_{b d} A^{\prime}(X)_{a b} g_{b d} \sigma_{d}\right) \\ & =X\left(\sigma_{c}\right)+\sum_{a b}\left(g^{-1}\right)_{c a} X\left(g_{a b}\right) \sigma_{b}+\sum_{a b d}\left(g^{-1}\right)_{c a} A^{\prime}(X)_{a b} g_{b d} \sigma_{d} . 
\end{aligned} $$ Comparing, we read off, using matrix notation, $$ A(X)=g^{-1} A^{\prime}(X) g+g^{-1} X(g), $$ or equivalently, $$ A^{\prime}(X)=g A(X) g^{-1}-X(g) g^{-1} . $$ In the theoretical physics literature, connections are called gauge fields, and sections of (possibly trivial) bundles $E$ are called particle fields. The change of basis using $g$ is called a gauge transformation, and the above formula is called the gauge group action of $C^{\infty}(M, \operatorname{Gl}(k, \mathbb{R}))$. Exercise 19.4. Show that the curvature operator $R(X, Y)$ on $M \times \mathbb{R}^{k}$ transforms according to $$ R^{\prime}(X, Y)=g R(X, Y) g^{-1} . $$ Give a formula for $R$ in terms of connection 1-forms. 19.2. Connections on non-trivial vector bundles. This discussion carries over to more general vector bundles, as follows. Let $\operatorname{End}(E) \rightarrow M$ be the endomorphism bundle of $E$, with fibers $\operatorname{End}(E)_{p}=\operatorname{End}\left(E_{p}\right)$ the vector space ${ }^{15}$ of endomorphisms $E_{p} \rightarrow E_{p}$. The space of sections of $\operatorname{End}(E)$ is isomorphic to the space of $C^{\infty}(M)$-linear endomorphisms of the vector space $\Gamma^{\infty}(E)$ : $$ \Gamma^{\infty}(\operatorname{End}(E))=\operatorname{End}_{C^{\infty}(M)}\left(\Gamma^{\infty}(E)\right) . $$ A connection gives a linear map $\nabla_{X} \in \operatorname{End}\left(\Gamma^{\infty}(E)\right)$, which is not $C^{\infty}(M)$-linear. However, the difference between any two connections is: $$ A(X)=\nabla_{X}^{\prime}-\nabla_{X} \in \operatorname{End}_{C^{\infty}(M)}\left(\Gamma^{\infty}(E)\right)=\Gamma^{\infty}(\operatorname{End}(E)) . $$ Conversely, if $\nabla_{X}$ is any connection, and $X \mapsto A(X) \in \Gamma^{\infty}(\operatorname{End}(E))$ is $C^{\infty}(M)$-linear, then $\nabla_{X}^{\prime}=\nabla_{X}+A(X)$ defines a new connection. This proves half of: Proposition 19.5. Every vector bundle $E$ admits a connection $\nabla$. 
The most general connection is $\nabla_{X}^{\prime}=\nabla_{X}+A(X)$ for some $C^{\infty}(M)$-linear map, $A: \mathfrak{X}(M) \rightarrow \Gamma^{\infty}(\operatorname{End}(E))$. Proof. Any local trivialization $\left.E\right|_{U} \cong U \times \mathbb{R}^{k}$ defines a connection $\nabla_{U}$ on $\left.E\right|_{U}$ coming from the trivial connection on $U \times \mathbb{R}^{k}$. Let $U_{\alpha}$ be a locally finite open cover of $M$, with local trivializations of $\left.E\right|_{U_{\alpha}}$, and let $\nabla^{\alpha}$ be the corresponding local connections. Let $\chi_{\alpha}$ be a partition of unity, and define $$ \nabla_{X}(\sigma)=\sum_{\alpha} \chi_{\alpha} \nabla_{X}^{\alpha}\left(\left.\sigma\right|_{U_{\alpha}}\right) $$ This has all the properties of a connection. Let $\left.E\right|_{U_{\alpha}} \cong U_{\alpha} \times \mathbb{R}^{k}$ be a local trivialization. Thus $\nabla$ becomes a connection on $U_{\alpha} \times \mathbb{R}^{k}$, described by some $A_{\alpha}(X) \in C^{\infty}\left(U_{\alpha}, \operatorname{End}\left(\mathbb{R}^{k}\right)\right)$. The maps $X \mapsto A_{\alpha}(X)$ are called the local connection 1-forms for $\nabla$. If $U_{\alpha}$ is a coordinate chart, with local coordinates $x_{1}, \ldots, x_{m}$, $A_{\alpha}$ is described by $m$ matrix-valued functions $$ A_{\alpha}\left(\frac{\partial}{\partial x_{i}}\right) \in C^{\infty}\left(U_{\alpha}, \operatorname{End}\left(\mathbb{R}^{k}\right)\right) $$ ${ }^{15}$ In fact, each fiber $\operatorname{End}\left(E_{p}\right)$ is an algebra, and accordingly $\operatorname{End}(E)$ is an example of an algebra bundle. The components $$ \Gamma_{i a}^{b}:=\left(A_{\alpha}\right)_{a b}\left(\frac{\partial}{\partial x_{i}}\right) $$ are also called the Christoffel symbols of the connection with respect to the given local coordinates. 19.3. Constructions with connections. Given a vector bundle $E$, let $E^{*}$ be its dual bundle. 
There is a natural pairing of the spaces of sections, $$ \langle\cdot, \cdot\rangle: \Gamma^{\infty}\left(E^{*}\right) \times \Gamma^{\infty}(E) \rightarrow C^{\infty}(M),\langle\tau, \sigma\rangle_{p}:=\left\langle\tau_{p}, \sigma_{p}\right\rangle \equiv \tau_{p}\left(\sigma_{p}\right) . $$ In other words, $\Gamma^{\infty}\left(E^{*}\right)$ is identified with the space of $C^{\infty}(M)$-linear maps $\Gamma^{\infty}(E) \rightarrow C^{\infty}(M)$. Proposition 19.6 (Duals). For any connection $\nabla$ on $E$, there is a unique connection $\nabla^{*}$ on $E^{*}$ with the property, $$ X\langle\tau, \sigma\rangle=\left\langle\nabla_{X}^{*} \tau, \sigma\right\rangle+\left\langle\tau, \nabla_{X} \sigma\right\rangle . $$ Proof. Try to define $\nabla^{*}$ by this equation: $$ \left\langle\nabla_{X}^{*} \tau, \sigma\right\rangle=X\langle\tau, \sigma\rangle-\left\langle\tau, \nabla_{X} \sigma\right\rangle . $$ For $f \in C^{\infty}(M)$ we have, $$ X\langle f \tau, \sigma\rangle-\left\langle f \tau, \nabla_{X} \sigma\right\rangle=\langle X(f) \tau, \sigma\rangle+f\left(X\langle\tau, \sigma\rangle-\left\langle\tau, \nabla_{X} \sigma\right\rangle\right) $$ showing that $\nabla_{X}^{*}(f \tau)=X(f) \tau+f \nabla_{X}^{*} \tau$ as desired. If $E, E^{\prime}$ are two vector bundles over $M$, we can form the direct sum $E \oplus E^{\prime}$, with $$ \Gamma^{\infty}\left(E \oplus E^{\prime}\right)=\Gamma^{\infty}(E) \oplus \Gamma^{\infty}\left(E^{\prime}\right) . $$ Proposition 19.7 (Direct sums). 
If $\nabla$ is a connection on $E$ and $\nabla^{\prime}$ a connection on $E^{\prime}$, there is a unique connection $\nabla \oplus \nabla^{\prime}$ on $E \oplus E^{\prime}$ such that $$ \left(\nabla \oplus \nabla^{\prime}\right)_{X}\left(\sigma \oplus \sigma^{\prime}\right)=\nabla_{X} \sigma \oplus \nabla_{X}^{\prime} \sigma^{\prime} $$ Finally, recall that if $E$ is a vector bundle over $M$, and $F \in C^{\infty}(N, M)$ a smooth map from a manifold $N$, we define a pull-back bundle $F^{*} E$ with fibers $\left(F^{*} E\right)_{q}=E_{F(q)}$. Its space of sections $\Gamma^{\infty}\left(F^{*} E\right)$ is generated (as a $C^{\infty}(N)$-module) by the subspace $F^{*} \Gamma^{\infty}(E)$. Proposition 19.8. Let $E \rightarrow M$ be a vector bundle with connection $\nabla$, and $F \in C^{\infty}(N, M)$. Then there is a unique connection $F^{*} \nabla$ such that for all $\sigma \in \Gamma^{\infty}(E), q \in N, w \in T_{q} N$ $$ \left(F^{*} \nabla\right)_{w}\left(F^{*} \sigma\right)=\nabla_{T_{q} F(w)} \sigma . $$ Proof. Exercise. The pull-back connection $F^{*} \nabla$ can be described in terms of connection 1-forms: If $\left.E\right|_{U} \cong U \times \mathbb{R}^{k}$ is a local trivialization of $E$, and $X \mapsto A_{a b}(X)$ is the connection 1-form of $\nabla$ in this local trivialization, then we obtain a local trivialization $\left.F^{*} E\right|_{F^{-1}(U)} \cong F^{-1}(U) \times \mathbb{R}^{k}$, with connection 1-forms given by the pull-back forms, $F^{*} A_{a b} .{ }^{16}$ ${ }^{16}$ Recall that $C^{\infty}(M)$-linear maps $\mathfrak{X}(M) \rightarrow C^{\infty}(M)$ are identified with sections of $T^{*} M$, i.e. 1-forms, and that there is a natural pull-back map $F^{*}: \Gamma^{\infty}\left(T^{*} M\right) \rightarrow \Gamma^{\infty}\left(T^{*} N\right)$ given by $\left(F^{*} \alpha\right)_{q}=\left(T_{q} F\right)^{*} \alpha_{F(q)}$. 19.4. Parallel transport. 
Suppose $E$ is a vector bundle over $M$ with connection $\nabla$, and $\gamma: J \rightarrow M$ is any smooth curve. Sections of the pull-back bundle $\gamma^{*} E$ are called sections of $E$ along $\gamma$. A connection $\nabla$ on $E$ induces a pull-back connection $\gamma^{*} \nabla$ on $\gamma^{*} E$, and one can define a covariant derivative along $\gamma$ by $$ \frac{D}{D t}: \Gamma^{\infty}\left(\gamma^{*} E\right) \rightarrow \Gamma^{\infty}\left(\gamma^{*} E\right), \quad \frac{D \sigma}{D t}:=\left(\gamma^{*} \nabla\right)_{\frac{\partial}{\partial t}} \sigma . $$ A section $\sigma$ along $\gamma$ is called parallel if $\frac{D \sigma}{D t}=0$. Suppose $\left.E\right|_{U}=U \times \mathbb{R}^{k}$ is a local trivialization of $E$ with $\gamma(t) \in U$, given by a basis $\epsilon_{1}, \ldots, \epsilon_{k} \in \Gamma^{\infty}\left(U,\left.E\right|_{U}\right)$ of the space of sections. Then we can write $$ \sigma(t)=\sum_{a} \sigma_{a}(t)\left(\epsilon_{a}\right)_{\gamma(t)} \in E_{\gamma(t)}, $$ and the components of the covariant derivative are given by the formula, $$ \left(\frac{D \sigma}{D t}\right)_{a}=\frac{d \sigma_{a}}{d t}+\sum_{b} A_{a b}(\dot{\gamma}) \sigma_{b}(t) . $$ Furthermore, if $U$ is the domain of a coordinate chart, defining local coordinates $x_{1}, \ldots, x_{m}$, and $$ \Gamma_{i a}^{b}=A_{a b}\left(\frac{\partial}{\partial x_{i}}\right) $$ are the corresponding Christoffel symbols, the formula becomes, $$ \left(\frac{D \sigma}{D t}\right)_{a}=\frac{d \sigma_{a}}{d t}+\sum_{i b} \Gamma_{i a}^{b} \dot{x}_{i} \sigma_{b}(t) . $$ As for affine connections, one shows that for any given $\sigma_{t_{0}} \in E_{\gamma\left(t_{0}\right)}$, there is a unique parallel section $\sigma(t)$ along $\gamma$ with initial value $\sigma\left(t_{0}\right)=\sigma_{t_{0}}$. In this way, connections $\nabla$ define parallel transport in vector bundles. 
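As a concrete illustration of the parallel transport ODE (a hypothetical numerical sketch in plain Python; the manifold, metric, and curve are my choices for the example, not part of the text): integrating the equation $\dot\sigma^{a}+\sum_{i b}\Gamma^{a}_{i b}\dot x^{i}\sigma^{b}=0$ (usual index convention for the Levi-Civita connection on $TM$) along the circle of latitude $\theta=\theta_{0}$ on the unit sphere reproduces the classical holonomy: the transported vector comes back rotated by the angle $2\pi\cos\theta_{0}$.

```python
import math

def transport_around_latitude(theta0, steps=5000):
    """Parallel-transport sigma = d/d theta around gamma(t) = (theta0, t),
    t in [0, 2*pi], on the unit sphere, by integrating
        d sigma^theta/dt = sin(theta0)*cos(theta0) * sigma^phi
        d sigma^phi/dt   = -cot(theta0) * sigma^theta
    (the parallel transport ODE; the only nonzero Christoffel symbols of the
    round metric are Gamma^theta_{phi phi} = -sin*cos, Gamma^phi_{theta phi} = cot)
    with classical fourth-order Runge-Kutta."""
    s, c = math.sin(theta0), math.cos(theta0)

    def rhs(v):
        return (s*c*v[1], -(c/s)*v[0])

    h = 2.0*math.pi/steps
    v = (1.0, 0.0)                      # start with sigma = d/d theta
    for _ in range(steps):
        k1 = rhs(v)
        k2 = rhs((v[0] + 0.5*h*k1[0], v[1] + 0.5*h*k1[1]))
        k3 = rhs((v[0] + 0.5*h*k2[0], v[1] + 0.5*h*k2[1]))
        k4 = rhs((v[0] + h*k3[0], v[1] + h*k3[1]))
        v = (v[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
             v[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)
    return v

theta0 = math.pi/3                      # cos(theta0) = 1/2
sth, sphi = transport_around_latitude(theta0)

# Exact solution: sigma^theta = cos(t*cos(theta0)),
# sigma^phi = -sin(t*cos(theta0))/sin(theta0); after a full loop the holonomy
# angle is 2*pi*cos(theta0) = pi, so the vector comes back flipped.
assert abs(sth - math.cos(2*math.pi*math.cos(theta0))) < 1e-8
assert abs(sphi + math.sin(2*math.pi*math.cos(theta0))/math.sin(theta0)) < 1e-8
```

The nonzero holonomy angle reflects the nonvanishing curvature of the round metric; for a flat connection the vector would return unchanged.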
There is a more geometric way of understanding parallel transport on a vector bundle $\pi: E \rightarrow M$. Consider the tangent map $T \pi: T E \rightarrow T M$. Its kernel at $u \in E_{p}$ is $$ \operatorname{ker}\left(T_{u} \pi\right)=T_{u}\left(E_{p}\right) $$ the tangent space at $u$ to the fiber $E_{p}$. It is called the vertical subspace $$ V_{u} E=T_{u}\left(E_{p}\right) \subset T_{u} E . $$ Note that since $E_{p}$ is a vector space, $T_{u}\left(E_{p}\right) \cong E_{p}$. This means that we have a natural isomorphism, $V E=\pi^{*} E$ (the pull-back of $E$ to a vector bundle over $E$). Since the map $T_{u} \pi: T_{u} E \rightarrow T_{p} M$ is clearly onto, we have $$ T_{u} E / V_{u} E=T_{p} M \text {. } $$ It turns out that every connection $\nabla$ defines a complementary horizontal subspace $H_{u} E \subset T_{u} E$, with $$ T_{u} E=V_{u} E \oplus H_{u} E . $$ In fact, $H E$ is a vector subbundle of $T E$, called the horizontal bundle, and $T E=V E \oplus H E$. In a coordinate-free way, one may define $H_{u} E$ as follows: Theorem 19.9. Let $\nabla$ be a connection on $E$. Given $u \in E_{p}$, there exists a section $\sigma \in \Gamma^{\infty}(E)$ with $\sigma_{p}=u$ and $(\nabla \sigma)_{p}=0$. The image $$ H_{u} E:=T_{p} \sigma\left(T_{p} M\right) $$ is independent of the choice of $\sigma$. Exercise 19.10. 1. Prove this theorem. 2. Give an alternative definition of $H_{u} E$ in local coordinates and using a local trivialization of $E$ : Show that the horizontal space in $E=U \times \mathbb{R}^{k}$ is spanned by all $$ \frac{\partial}{\partial x_{i}}-\sum_{a, b} \Gamma_{i a}^{b} u_{b} \epsilon_{a}(p) $$ Note that it is impossible, in general, to choose $\sigma$ with $\sigma_{p}=u$ and $\nabla \sigma=0$ everywhere: This is related to the problem that in general, distributions of rank $\geq 2$ need not be integrable. 
One can characterize parallel transport in terms of the horizontal bundle $H E \subset T E$ as follows: For any curve $\gamma(t)$ in $M$, and any given $u \in E_{\gamma\left(t_{0}\right)}$, there is a unique curve $\sigma(t)$ in $E$ such that $\sigma\left(t_{0}\right)=u$ and $$ \pi(\sigma(t))=\gamma(t), \quad \dot{\sigma} \in H_{\sigma(t)} E $$ for all $t$. The curve $\sigma(t)$ is called the horizontal lift of $\gamma$. Note that a curve $\sigma(t)$ projecting to $\gamma(t)$ is the same thing as a section of $E$ along $\gamma$. The splitting $T E=H E \oplus V E=\pi^{*} T M \oplus V E$ given by $\nabla$ defines a horizontal lift of vector fields: $$ \operatorname{Lift}_{\nabla}: \mathfrak{X}(M) \rightarrow \mathfrak{X}(E) . $$ Here $\operatorname{Lift}_{\nabla}(X)_{u}$ is the unique tangent vector in $H_{u} E$ projecting to $X_{p}$. The horizontal lifts of integral curves of $X$ are integral curves of its horizontal lift $\operatorname{Lift}_{\nabla}(X)$. Note that by construction, $$ \operatorname{Lift}_{\nabla}(X) \sim_{\pi} X . $$ Hence if $X, Y$ are two vector fields, $$ \left[\operatorname{Lift}_{\nabla}(X), \operatorname{Lift}_{\nabla}(Y)\right] \sim_{\pi}[X, Y] . $$ This shows that the vector field $\left[\operatorname{Lift}_{\nabla}(X), \operatorname{Lift}_{\nabla}(Y)\right]-\operatorname{Lift}_{\nabla}([X, Y])$ must be vertical. That is, it is a section of $V E=\pi^{*} E$. What is this section? Theorem 19.11. For any $u \in E_{p}$, and any $X, Y$, we have $$ \left[\operatorname{Lift}_{\nabla}(X), \operatorname{Lift}_{\nabla}(Y)\right]_{u}-\operatorname{Lift}_{\nabla}([X, Y])_{u}=R(X, Y)_{p} u $$ Theorem 19.12. The following are equivalent: (a) The curvature $R$ of $\nabla$ vanishes. (b) Horizontal lift $\mathfrak{X}(M) \rightarrow \mathfrak{X}(E)$ is a Lie algebra homomorphism. (c) The horizontal distribution $H E \subset T E$ is integrable. (d) Parallel transport along paths is invariant under homotopies leaving the end points fixed. 
We leave the proofs as exercises, or to be looked up in textbooks.
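The coordinate formula for $R^{l}_{ijk}$, the Bianchi identity of Theorem 18.3, and the symmetries of Theorem 18.5 can all be spot-checked symbolically in an example. A sketch assuming SymPy, for the round metric on the unit 2-sphere, whose Christoffel symbols (taken as known input here) are $\Gamma^{\theta}_{\varphi\varphi}=-\sin\theta\cos\theta$ and $\Gamma^{\varphi}_{\theta\varphi}=\Gamma^{\varphi}_{\varphi\theta}=\cot\theta$:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
coords, n = [th, ph], 2
g = sp.diag(1, sp.sin(th)**2)        # round metric d theta^2 + sin^2(theta) d phi^2

# Christoffel symbols, Gamma[l][i][j] = Gamma^l_{ij} (symmetric in i, j):
Gamma = [[[sp.Integer(0)]*n for _ in range(n)] for _ in range(n)]
Gamma[0][1][1] = -sp.sin(th)*sp.cos(th)
Gamma[1][0][1] = Gamma[1][1][0] = sp.cot(th)

def Rup(l, i, j, k):
    """R^l_{ijk} via the coordinate formula of Section 18."""
    val = sp.diff(Gamma[l][j][k], coords[i]) - sp.diff(Gamma[l][i][k], coords[j])
    for r in range(n):
        val += Gamma[r][j][k]*Gamma[l][i][r] - Gamma[r][i][k]*Gamma[l][j][r]
    return val

def Rdn(i, j, k, l):
    """R_{ijkl} = sum_r R^r_{ijk} g_{rl}."""
    return sp.simplify(sum(Rup(r, i, j, k)*g[r, l] for r in range(n)))

idx = range(n)
for i in idx:
    for j in idx:
        for k in idx:
            for l in idx:
                # First Bianchi identity (Theorem 18.3):
                assert sp.simplify(Rup(l, i, j, k) + Rup(l, j, k, i) + Rup(l, k, i, j)) == 0
                # Symmetries of Theorem 18.5:
                assert sp.simplify(Rdn(i, j, k, l) + Rdn(j, i, k, l)) == 0
                assert sp.simplify(Rdn(i, j, k, l) + Rdn(i, j, l, k)) == 0
                assert sp.simplify(Rdn(i, j, k, l) - Rdn(k, l, i, j)) == 0

# Spot check against the classical value for the unit sphere:
assert sp.simplify(Rdn(0, 1, 1, 0) - sp.sin(th)**2) == 0
```

With the sign convention of Definition 18.1, $R_{\theta\varphi\varphi\theta}=\sin^{2}\theta$, so the sectional curvature $R(u,v,v,u)/(|u|^{2}|v|^{2}-\langle u,v\rangle^{2})$ of the unit sphere comes out as the constant $1$, as it should.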
## Textbooks
\begin{document} \title{How to drive our families mad} \author{Saka\'e Fuchino \and Stefan Geschke \and Osvaldo Guzman \and Lajos Soukup } \maketitle \date \xthanks{The first author was partially supported by Chubu University grant 16IS55A, as well as Grant-in-Aid for Scientific Research (C) 19540152 and Grant-in-Aid for Exploratory Research No.\ 26610040 of Japan Society for the Promotion of Science. The second author was supported by Centers of Excellence grant from the European Union. The third author was supported by CONACyT scholarship 420090. The fourth author was supported by Bolyai Grant. The research of this paper began when the first and second authors visited Alfr\'ed R\'enyi Institute in Budapest in 2002. The research was then resumed when the first author visited Centre de Recerca Matem\`atica in Barcelona and the fourth author Barcelona University at the same time in 2006. The first and fourth authors would like to thank Joan Bagaria and Juan-Carlos Mart\'\i nez for the arrangement of the visit and their hospitality during the stay in Barcelona. The authors also would like to thank Kenneth Kunen for allowing them to include his unpublished results and Andreas Blass as well as the referee of the paper for reading carefully the manuscript and giving many valuable suggestions. } \begin{abstract} Given a family ${\mathcal F}$ of pairwise almost disjoint (ad) sets on a countable set $S$, we study maximal almost disjoint (mad) families $\tilde{{\mathcal F}}$ extending ${\mathcal F}$. We define $\mathfrak{a}^+({\mathcal F})$ to be the minimal possible cardinality of $\tilde{{\mathcal F}}\setminus \cal F$ for such $\tilde{{\mathcal F}}$ and $\mathfrak{a}^+(\kappa)=\max\setof{\mathfrak{a}^+({\mathcal F})}{\cardof{{\mathcal F}}\leq\kappa}$. 
We show that all infinite cardinals less than or equal to the continuum $\mathfrak{c}$ can be represented as $\mathfrak{a}^+({\mathcal F})$ for some ad ${\mathcal F}$ (\Thmof{aplus-all}) and that the inequalities $\aleph_1=\mathfrak{a}<\mathfrak{a}^+(\aleph_1)=\mathfrak{c}$ (\Corof{aplus-small-large}) and $\mathfrak{a}=\mathfrak{a}^+(\aleph_1)<\mathfrak{c}$ (\Thmof{aplus-small-small}) are both consistent. We also give several constructions of mad families with some additional properties. \keywords{cardinal invariants -- almost disjoint number -- Cohen model -- destructible maximal almost disjoint family} \end{abstract} \section{Introduction} \institute{Saka\'e Fuchino\at Graduate School of System Informatics, Kobe University, Kobe, Japan.\\ {\tt [email protected]} \and Stefan Geschke\at Department of Mathematics, University of Hamburg, Germany.\\ {\tt [email protected]} \and Osvaldo Guzman\at Centre of Mathematics Science, Universidad Nacional Aut\'onoma de M\'exico, Mexico City, Distrito Federal, Mexico. {\tt [email protected]} \and Lajos Soukup\at Alfr\'ed R\'enyi Institute of Mathematics, Hungarian Academy of Sciences, Budapest, Hungary. {\tt [email protected]}} Given a family ${\mathcal F}$ of pairwise almost disjoint countable sets, we can ask what the maximal almost disjoint (mad) families extending ${\mathcal F}$ look like. In this note and \cite{fuchino-geschke-soukup-2}, we address some instances of this question and other related problems. Let us begin with the definition of some notions and notation about almost disjointness we shall use here. Two countable sets $A$, $B$ are said to be {\it almost disjoint\/} ({\em ad} for short) if $A\cap B$ is finite. A family ${\mathcal F}$ of countable sets is said to be {\em pairwise almost disjoint\/} ({\em ad\/} for short) if any two distinct $A$, $B\in{\mathcal F}$ are ad. 
If ${\mathcal X}\subseteq[S]^{\aleph_0}$ and $S=\bigcup{\mathcal X}$, ${\mathcal F}\subseteq{\mathcal X}$ is said to be {\em mad in ${\mathcal X}$} if ${\mathcal F}$ is ad and there is no ad ${\mathcal F}'$ such that\ ${\mathcal F}\subsetneqq{\mathcal F}'\subseteq{\mathcal X}$. Thus an ad family ${\mathcal F}$ is mad in ${\mathcal X}$ if and only if there is no $X\in{\mathcal X}$ which is ad\ to every $Y\in{\mathcal F}$. If ${\mathcal F}$ is mad in $[S]^{\aleph_0}$ for $S=\bigcup{\mathcal F}$, we say simply that ${\mathcal F}$ is a mad family (on $S$). $S$ as above is called the {\em underlying set\/} of ${\mathcal F}$. Let \begin{xitemize} \xitem[] $\mathfrak{a}({\mathcal X})=\min\setof{\cardof{{\mathcal F}}}{\cardof{{\mathcal F}}\geq\aleph_0 \mbox{ and }{\mathcal F}\mbox{ is mad in }{\mathcal X}}$. \end{xitemize} Clearly, the cardinal invariant $\mathfrak{a}$ known as the almost disjoint number (\cite{blass}) can be characterized as: \begin{example} $\mathfrak{a}=\mathfrak{a}([S]^{\aleph_0})$ for any countable $S$. \end{example} In this paper we concentrate on the case where the underlying set $S=\bigcup{\mathcal X}$ (or $S=\bigcup{\mathcal F}$) is countable. In \cite{fuchino-geschke-soukup-2} and the forthcoming continuation of this paper, we will deal with the cases where $S$ may also be uncountable. As the countable underlying set $S=\bigcup{\mathcal X}$, we often use $\omega$ or $T=\fnsp{\omega>}{2}$ where $T$ is considered as a tree growing downwards. That is, for $b$, $b'\in T$, we write $b'\leq_T b$ if $b\subseteq b'$. Each $f\in\fnsp{\omega}{2}$ induces the (maximal) branch \begin{xitemize} \xitem[] $B(f)=\setof{f\restriction n}{n\in\omega}\subseteq T$ \end{xitemize} in $T$. In Section \ref{mad-families}, we consider several cardinal invariants of the form $\mathfrak{a}({\mathcal X})$ for some ${\mathcal X}\subseteq[T]^{\aleph_0}$. 
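As a concrete illustration of why distinct branches are almost disjoint (a hypothetical Python sketch; nodes of $T=\fnsp{\omega>}{2}$ are coded as binary strings, and the particular $f$, $g$ are made up for the demo): for $f\neq g$, $B(f)\cap B(g)$ consists exactly of the common restrictions $f\restriction k$ for $k$ up to the first place where $f$ and $g$ differ, and is therefore finite.

```python
def branch(f, n):
    """The finite approximation {f|k : k < n} of the branch B(f), with the
    restriction f|k coded as the string of the first k values of f."""
    return {''.join(str(f(i)) for i in range(k)) for k in range(n)}

# Two distinct elements of 2^omega (eventually periodic, so finitely coded):
f = lambda i: i % 2                      # 01010101...
g = lambda i: 1 if i >= 4 else i % 2     # 01011111...; first differs from f at i = 4

for n in (10, 100, 1000):
    common = branch(f, n) & branch(g, n)
    # The common nodes are exactly f|k for k = 0, ..., 4 -- independent of n:
    assert common == {''.join(str(f(i)) for i in range(k)) for k in range(5)}
```

So $\cardof{B(f)\cap B(g)}$ stays bounded (here by $5$) no matter how far down the tree we look, which is exactly the ad condition.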
For ${\mathcal X}\subseteq[S]^{\aleph_0}$ with $S=\bigcup{\mathcal X}$, let \begin{xitemize} \xitem[] ${\mathcal X}^\perp=\setof{Y\in[S]^{\aleph_0}}{\forall X\in{\mathcal X}\ \cardof{X\cap Y}<\aleph_0}$. \end{xitemize} If $Y\in{\mathcal X}^\perp$ we shall say that $Y$ is {\em almost disjoint} (ad) {\em to} ${\mathcal X}$. For an ad family ${\mathcal F}$, let \begin{xitemize} \xitem[] $\mathfrak{a}^+({\mathcal F})=\mathfrak{a}({\mathcal F}^\perp)$. \end{xitemize} For a cardinal $\kappa$, let \begin{xitemize} \xitem[] $ \mathfrak{a}^+(\kappa)=\sup\setof{\mathfrak{a}^+({\mathcal F})}{{} {\mathcal F}\mbox{ is an ad family on }\omega\mbox{ of cardinality }\leq\kappa} $. \end{xitemize} Clearly, $\mathfrak{a}^+(\omega)=\mathfrak{a}$ and $\mathfrak{a}^+(\kappa)\leq\mathfrak{a}^+(\lambda)\leq\mathfrak{c}$ for any $\kappa\leq\lambda\leq\mathfrak{c}$. In Section \ref{mad-over-pad} we give several constructions of ad families ${\mathcal F}$ for which ${\mathcal F}^\perp$ has some particular property. Using these constructions, we show in Section \ref{aplus} that $\mathfrak{a}^+(\mathfrak{c})=\mathfrak{c}$ (actually we have $\mathfrak{a}^+(\bar{\mathfrak{o}})=\mathfrak{c}$, see \Thmof{aplus-o}) and the consistency of the inequalities $\mathfrak{a}=\aleph_1<\mathfrak{a}^+(\aleph_1)=\mathfrak{c}$ (see \Corof{aplus-small-large}). We also show the consistency of $\mathfrak{a}^+(\aleph_1)<\mathfrak{c}$ (\Thmof{aplus-small-small}). For notions in the theory of forcing, the reader may consult \cite{millennium-book} or \cite{kunen-book}. We mostly follow the notation and conventions set in \cite{millennium-book} and/or \cite{kunen-book}. In particular, elements of posets $\bbd{P}$ are considered in such a way that stronger conditions are smaller. 
We assume that $\bbd{P}$-names are constructed just as in \cite{kunen-book} for a poset\ $\bbd{P}$ but we use alphabets with a tilde below them like $\utilde{a}$, $\utilde{b}$ etc.\ to denote the $\bbd{P}$-names corresponding to the sets $a$, $b$ etc.\ in the generic extension. $V$ denotes the ground model (in which we live). The canonical $\bbd{P}$-names of elements $a$, $b$ etc.\ of $V$ are denoted by the same symbols with hat like $\hat{a}$, $\hat{b}$ etc. For a poset\ $\bbd{P}$ (in $V$) we use $V^\bbd{P}$ to denote a ``generic'' generic extension $V[G]$ of $V$ by some $(V,\bbd{P})$-generic filter $G$. Thus $V^\bbd{P}\models\ \cdots$ is synonymous to $\forces{\bbd{P}}{\cdots}$ or $V\models\forces{\bbd{P}}{\cdots}$ and a phrase like: ``Let $W=V^\bbd{P}$\,'' is to be interpreted as saying: ``Let $W$ be a generic extension of $V$ by some/any $(V,\bbd{P})$-generic filter''. For the notation connected to the set theory of reals see \cite{tomek-book} and \cite{blass}. By $\mathfrak{c}$ we denote the size of the continuum $2^{\aleph_0}$. ${\mathcal M}$ and ${\mathcal N}$ are the ideals of meager sets and null sets (e.g.\ over the Cantor space $\fnsp{\omega}{2}$ or the Baire space $\fnsp{\omega}{\omega}$) respectively. For $I={\mathcal M}$, ${\mathcal N}$ etc., ${\sf cov}(I)$ and ${\sf non}(I)$ are {\em covering number} and {\em uniformity} of $I$. For an infinite cardinal $\kappa$ let $\Cohen{\kappa}={\rm Fn}(\kappa,2)$ or, more generally $\Cohen{X}={\rm Fn}(X,2)$ for any set $X$. $\Cohen{\kappa}$ is the Cohen forcing for adding $\kappa$ many Cohen reals. $\random{\kappa}$ denotes the random forcing for adding $\kappa$ many random reals. $\random{\kappa}$ is the poset\ consisting of Borel sets of positive measure in $\fnsp{\kappa}{2}$, which corresponds to the homogeneous measure algebra of Maharam type $\kappa$. 
For a poset\ $\bbd{P}=\pairof{\bbd{P},\leq_\bbd{P}}$, $X\subseteq\bbd{P}$ and $p\in\bbd{P}$, let \begin{xitemize} \item[] $X\downarrow p=\setof{q\in X}{q\leq_\bbd{P} p}$. \end{xitemize} \section{Mad families and almost disjoint numbers} \label{mad-families} One of the advantages of using $T=\fnsp{\omega>}{2}$ as the countable underlying set is that we can define some natural subfamilies of $[T]^{\aleph_0}$ such as ${\mathcal O}_T$, ${\mathcal A}_T$, ${\mathcal B}_T$ below. For $X\subseteq T$, let \begin{xitemize} \xitem[] $[X]=\setof{f\in\fnsp{\omega}{2}}{B(f)\subseteq X}$, and \xitem[] $\flr{X}=\setof{f\in\fnsp{\omega}{2}}{\cardof{B(f)\cap X}=\aleph_0}$. \end{xitemize} Clearly, we have $[X]\subseteq\flr{X}$. For $X\subseteq T$, let $X^{\uparrow}$ be the upward closure of $X$, that is: \begin{xitemize} \xitem[] $X^{\uparrow}=\setof{t\restriction n}{t\in X,\,n\leq\ell(t)}$. \end{xitemize} Then we have $\flr{X}\subseteq[X^{\uparrow}]$ for any $X\subseteq T$. \begin{definition}[Off-binary sets, \cite{leathrum}] Let \begin{xitemize} \item[] ${\mathcal O}_T=\setof{X\in[T]^{\aleph_0}}{ \flr{X}=\emptyset}$. \end{xitemize} \end{definition} T.\ Leathrum \cite{leathrum} called elements of ${\mathcal O}_T$ off-binary sets. Note that $\flr{X}=\emptyset$ if and only if there is no branch in $T$ with infinite intersection with $X$. \begin{definition}[Antichains] Let \begin{xitemize} \item[] ${\mathcal A}_T=\setof{X\in[T]^{\aleph_0}}{ X\mbox{ is an antichain in }T}$. \end{xitemize} \end{definition} \noindent Clearly, we have ${\mathcal A}_T\subseteq{\mathcal O}_T$. Using the notation above, the cardinal invariants $\mathfrak{o}$ and $\bar{\mathfrak{o}}$ introduced by Leathrum \cite{leathrum} can be characterized as: \begin{xitemize} \xitem[] $\mathfrak{o}=\mathfrak{a}({\mathcal O}_T)$, \xitem[] $\bar{\mathfrak{o}}=\mathfrak{a}({\mathcal A}_T)$ \end{xitemize} (see \cite{leathrum}). 
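The antichain notion behind ${\mathcal A}_T$ can be checked mechanically on finite pieces of $T=\fnsp{\omega>}{2}$. The following sketch (our notation, not the paper's) verifies that a full level of the tree is an antichain and that an antichain meets a branch in at most one node, which is the observation underlying ${\mathcal A}_T\subseteq{\mathcal O}_T$.

```python
# Nodes of T = 2^{<omega} modeled as tuples of 0/1 bits.
# An antichain is a set of pairwise incomparable nodes; it meets each
# branch in at most one point.  Helper names are ours.

def comparable(s, t):
    """Two nodes are comparable iff one is an initial segment of the other."""
    m = min(len(s), len(t))
    return s[:m] == t[:m]

def is_antichain(X):
    return all(not comparable(s, t) for s in X for t in X if s != t)

def branch_prefixes(f_bits):
    """All initial segments of a (finitely approximated) branch."""
    return {tuple(f_bits[:k]) for k in range(len(f_bits) + 1)}

level3 = {tuple(map(int, format(i, '03b'))) for i in range(8)}  # 2^3
branch = branch_prefixes([0, 1, 1, 0, 1])
print(is_antichain(level3), len(level3 & branch))
```

Since a branch meets each member of an antichain family at most once, an infinite antichain $X$ has $\flr{X}=\emptyset$, matching the inclusion noted in the text.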
\iftesting \mynote{$\mathfrak{a}({\mathcal A}_T)=\mathfrak{a}({\mathcal A}_{T_\omega})$} \fi Leathrum also showed $\mathfrak{a}\leq\mathfrak{o}\leq\bar{\mathfrak{o}}$. J.\ Brendle \cite{brendle0} proved ${\sf non}({\mathcal M})\leq\mathfrak{o}$. \begin{definition}[Sets without infinite antichains] Let \begin{xitemize} \item[] ${\mathcal B}_T=\setof{X\in [T]^{\aleph_0}}{X \mbox{ does not contain any infinite antichain}}$. \end{xitemize} \end{definition} Note that ${\mathcal B}_T={{\mathcal A}_T}^\perp$. Elements of ${\mathcal B}_T$ are exactly those infinite subsets of $T$ which can be covered by finitely many branches: \begin{lemma}[K.\ Kunen] \label{kunen's lemma} Let $X\in [T]^{\aleph_0}$. Then $X\in{\mathcal B}_T$ if and only if $X$ is covered by finitely many branches in $T$. \end{lemma} \begin{proof} If $X$ is covered by finitely many branches in $T$ then $X$ clearly does not contain any infinite antichain: otherwise one of the finitely many branches would contain infinitely many elements of the antichain, which is impossible since a branch is a chain. \iffalse Suppose now that $X$ does not contain any infinite antichain. Let \begin{xitemize} \xitem[] $Y=\setof{b\in X}{\mbox{there are incompatible }a,a'\in X \mbox{ such that\ }a,a'\leq_T b}$ \end{xitemize} For each $b\in Y$ let $a^b_0$, $a^b_1\in X$ be such that\ $a^b_0$ and $a^b_1$ are incompatible and $a^b_0$, $a^b_1\leq_T b$. We claim that $Y$ is finite. Otherwise, since $Y$ contains no infinite antichain by $Y\subseteq X$, there is an infinite branch $C\subseteq Y$ by K\"onig's lemma. For each $b\in C$ we can choose $i_b\in 2$ such that\ $a^b_{i_b}\not\in C$. Then $\setof{a^b_{i_b}}{b\in C}$ would be an infinite antichain in $X$. But this is a contradiction. Let \begin{xitemize} \item[] $D=\setof{d\in X\setminus Y}{d\mbox{ is maximal in }X\setminus Y \mbox{ with respect to\ }\leq_T}$. \end{xitemize} Then $D$ is pairwise incompatible and hence finite. 
By definition of $Y$, \begin{xitemize} \item[] $(X\setminus Y)\downarrow d=\setof{b\in X\setminus Y}{b\leq_T d}$ \end{xitemize} is linearly ordered by $\leq_T$ for each $d\in D$. Thus $X$ is the union of the finitely many linearly ordered subsets of $T$: \begin{xitemize} \item[] $(X\setminus Y)\downarrow d,\, d\in D$\ \ and\ \ $\ssetof{b},\, b\in Y$. \end{xitemize} This proves the lemma since each of these linearly ordered subsets of $T$ can be extended to a branch in $T$. Suppose for contradiction that $X$ does not contain any infinite antichain but neither covered by finitely many branches. For $n\in\omega$, let $X_n\in[X]^{\aleph_0}$, $f_n\in\fnsp{\omega}{2}$ and $t_n\in T$ be defined inductively as follows: \begin{xitemize} \xitem[e-0] $X_0=X$; \xitem[e-1] $X_n$ is not a union of finitely many branches; \xitem[e-2] $f_n\in\flr{X_n}$; \xitem[e-3] $t_n\in X_n$, $t_n\not\in B(f_n)$; \xitem[e-4] $X_{n+1}=X_n\downarrow t_n$. \end{xitemize} $X_0$ as in \xitemof{e-0} satisfies \xitemof{e-1} by the assumption that $X$ is not covered by finitely many branches. \xitemof{e-2} is possible by Fodor's lemma and since $X_n\subseteq X$ does not contain any infinite antichain. We can find $t_n$ satisfying \xitemof{e-3} by \xitemof{e-1}. $t_n$ can be chosen so that $X_{n+1}$ in \xitemof{e-4} also satisfies \xitemof{e-1} because the set \begin{xitemize} \item[] $\setof{k\in\omega}{\mbox{there is }u\in X_n\mbox{\ such that\ } u\restriction k\in B(f_n)\mbox{ but\ }u\restriction k+1\not\in B(f_n)}$ \end{xitemize} is finite: the latter assertion holds since $X_n\subseteq X$ contains no infinite antichain. Now, for $n\in\omega$, let $m_n\in\omega$ be such that\ $m_n\geq \ell(t_n)$ and $f_n\restriction m_n\in X_n$. Then $\setof{f_n\restriction m_n}{n\in\omega}$ is an infinite antichain in $X$. This is a contradiction. \else Suppose now that $X$ cannot be covered by finitely many branches. 
By induction on $n$, we choose $t_n\in 2^n$ such that\ $t_0=\emptyset$, $t_{n+1}=t_n\mathop{{}^{\frown}} i$ for some $i\in 2$ and \begin{xitemize} \xitem[e-5] $X_{n+1}=X\downarrow t_{n+1}$ cannot be covered by finitely many branches. \end{xitemize} This is possible since $X_0=X$ and $X_n\subseteq (X_n \downarrow (t_n\mathop{{}^{\frown}} 0)) \cup (X_n \downarrow (t_n\mathop{{}^{\frown}} 1)) \cup \ssetof{t_n}$. By \xitemof{e-5}, the branch $B=\setof{t_n}{n<\omega}$ does not cover $X_n$ for each $n\in\omega$. So we can pick $s_n\in X_n\setminus B$. Let $S=\setof{s_n}{n\in\omega}$. $S$ is an infinite subset of $X$ since $\ell(s_n)\geq n$ for all $n\in\omega$. If $C$ is a branch in $T$ different from $B$ then $t_n\notin C$ for some $n\in\omega$ and so $s_m\notin C$ for all $m\ge n$. Hence $S\cap C$ is finite. Moreover $S\cap B =\emptyset$. So we have $\flr{S}=\emptyset$. Thus $S$ must contain an infinite antichain by K\"onig's Lemma. \fi \mbox{} \usebox{\qedbox} \end{proof} \begin{theorem}[K.\ Kunen] \label{Kunen's thm A} $\mathfrak{a}({\mathcal B}_T)=\mathfrak{c}$. \end{theorem} \begin{proof} Suppose that ${\mathcal F}\subseteq{\mathcal B}_T$ is an ad family of cardinality $<\mathfrak{c}$. We show that ${\mathcal F}$ is not mad. For each $X\in{\mathcal F}$ there is $b_X\in[\fnsp{\omega}{2}]^{<\aleph_0}$ such that\ $X\subseteq\bigcup_{f\in b_X}B(f)$ by \Lemmaof{kunen's lemma}. Since ${\mathcal S}=\bigcup\setof{b_X}{X\in{\mathcal F}}$ has cardinality $\leq\cardof{{\mathcal F}}\cdot\aleph_0<\mathfrak{c}$, there is $f^*\in\fnsp{\omega}{2}\setminus{\mathcal S}$. We have $B(f^*)\in{\mathcal B}_T$ and $B(f^*)$ is ad\ to ${\mathcal F}$. \mbox{} \usebox{\qedbox} \end{proof} Let us say $X\subseteq T$ is {\em nowhere dense} if $\flr{X}$ is nowhere dense in the Cantor space $\fnsp{\omega}{2}$. It can easily be shown that $X$ is nowhere dense if and only if \begin{xitemize} \xitem[e-6] $\forall t\in T\ \exists t'\leq_T t\ \forall t''\leq_T t'\ (t''\not\in X)$. 
\end{xitemize} Note that, if $X\subseteq T$ is not nowhere dense, then $X$ is dense below some $t\in T$ (in terms of forcing). Also note that from \xitemof{e-6} it follows that the property of being nowhere dense is absolute. \begin{definition}[Nowhere dense sets] Let \begin{xitemize} \item[] ${\mathcal N\hspace{-0.2ex}\mathcal D}_T=\setof{X\in[T]^{\aleph_0}}{X\mbox{ is nowhere dense\,}}$. \end{xitemize} \end{definition} Note that, for $X\in[T]^{\aleph_0}$ with $X=\setof{t_n}{n\in\omega}$, we have \begin{xitemize} \item[] $\flr{X}=\bigcap_{n\in\omega}\bigcup_{m>n}[T\downarrow t_m]$. \end{xitemize} In particular $\flr{X}$ is a $G_\delta$ subset of $\fnsp{\omega}{2}$. Hence, by the Baire Category Theorem, we have \begin{xitemize} \item[] ${\mathcal N\hspace{-0.2ex}\mathcal D}_T=\setof{X\in[T]^{\aleph_0}}{\flr{X}\mbox{ is a meager subset of } \fnsp{\omega}{2}}$. \end{xitemize} \begin{lemma} \label{claim-0} If $X\in[T]^{\aleph_0}$ then there is $X'\in[X]^{\aleph_0}$ such that\ $X'\in{\mathcal N\hspace{-0.2ex}\mathcal D}_T$. \end{lemma} \begin{proof} If $\flr{X}=\emptyset$ then $X\in{\mathcal N\hspace{-0.2ex}\mathcal D}_T$. Thus we can put $X'=X$. Otherwise let $f\in\flr{X}$ and let $X'=X\cap B(f)$. \mbox{} \usebox{\qedbox} \end{proof} \begin{theorem} \label{meager-set} ${\sf cov}({\mathcal M})$, $\mathfrak{a}\leq \mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D}_T)$. \end{theorem} \begin{proof} For the inequality ${\sf cov}({\mathcal M})\leq \mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D}_T)$, suppose that ${\mathcal F}\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ is an ad family of cardinality $<{\sf cov}({\mathcal M})$. Then $\bigcup\setof{\flr{X}}{X\in{\mathcal F}}\not=\fnsp{\omega}{2}$. Let $f\in\fnsp{\omega}{2}\setminus\bigcup\setof{\flr{X}}{X\in{\mathcal F}}$. Then $B(f)\in{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ and $B(f)$ is ad\ to all $X\in{\mathcal F}$. 
To show $\mathfrak{a}\leq \mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D}_T)$ suppose that ${\mathcal F}\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ is an ad family of cardinality $<\mathfrak{a}$. Then ${\mathcal F}$ is not a mad family in $[T]^{\aleph_0}$. Hence there is some $X\in[T]^{\aleph_0}$ ad\ to ${\mathcal F}$. By \Lemmaof{claim-0}, there is $X'\subseteq X$ such that\ $X'\in{\mathcal N\hspace{-0.2ex}\mathcal D}_T$. Since $X'$ is also ad\ to ${\mathcal F}$, it follows that ${\mathcal F}$ is not mad in ${\mathcal N\hspace{-0.2ex}\mathcal D}_T$. \iffalse To show that $\mathfrak{a}\leq \mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D}_T)$, assume for contradiction that $\mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D}_T)<\mathfrak{a}$. Let ${\mathcal F}\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ be a mad family in ${\mathcal N\hspace{-0.2ex}\mathcal D}_T$ of cardinality $<\mathfrak{a}$. \begin{claim} There is a $B\subseteq T$ such that\ \begin{xitemize} \xitem[a-0] there are infinitely many $A\in{\mathcal F}$ such that\ $\cardof{A\cap B}=\aleph_0$, and \xitem[a-1] $B\in{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ ($\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T$). \end{xitemize} \end{claim} \prfofClaim Note first that we have $\bigcup\setof{\flr{A}}{A\in{\mathcal F}}=\fnsp{\omega}{2}$ since ${\mathcal F}$ is mad in ${\mathcal N\hspace{-0.2ex}\mathcal D}_T$. As $\flr{A}$ for all $A\in{\mathcal F}$ is meager it follows that there are uncountably many $A\in{\mathcal F}$ such that\ $\flr{A}\not=\emptyset$. 
Thus we can construct $a_n\in T$, $n\in\omega$ and $b_n\in T$, $n\in\omega\setminus 1$ such that\ \begin{xitemize} \xitem[] $a_{n+1}$, $b_{n+1}\lvertneqq_T a_n$ for all $n\in\omega$, \xitem[] $a_n$ and $b_n$ are incompatible for all $n\in\omega\setminus 1$, \xitem[] there are uncountably many $A\in{\mathcal F}$ such that\ $\exists f\in\flr{A}\ (a_n\subseteq f)$ for all $n\in\omega$, and \xitem[a-2] there are uncountably many $A\in{\mathcal F}$ such that\ $\exists f\in\flr{A}\ (b_n\subseteq f)$ for all $n\in\omega\setminus 1$. \end{xitemize} For $n\in\omega\setminus 1$, let $f_n\in\fnsp{\omega}{2}$ and $A_n\in{\mathcal F}$ be such that\ \begin{xitemize} \xitem[] $b_n\subseteq f_n$, \xitem[] $f_n\in A_n$ and \xitem[] $A_n\not\in\setof{A_k}{k\in n\setminus 1}$. \end{xitemize} The construction of such $f_n$ and $A_n$ is possible by \xitemof{a-2}. Let \begin{xitemize} \item[] $B=\setof{f_n\restriction k}{k\in\omega,\, n\in\omega\setminus 1}$. \end{xitemize} It is easy to see that this $B$ is as desired. \qedofClaim \\ Let $B$ be as in the claim above and \begin{xitemize} \item[] ${\mathcal B}=\setof{A\cap B}{A\in{\mathcal F}\mbox{ and }A\cap B\mbox{ is infinite}}$. \end{xitemize} By \xitemof{a-0}, ${\mathcal B}$ is an infinite ad family on $B$ of cardinality $<\mathfrak{a}$. Hence there is $C\in[B]^{\aleph_0}$ which is ad to ${\mathcal B}$. By \xitemof{a-1}, $C\in{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ and $C$ is ad\ to ${\mathcal F}$. This is a contradiction to the assumption that ${\mathcal F}$ is mad in ${\mathcal N\hspace{-0.2ex}\mathcal D}_T$. \fi \mbox{} \usebox{\qedbox} \end{proof} Let $\sigma$ be the measure on Borel sets of the Cantor space $\fnsp{\omega}{2}$ defined as the product measure of the probability measure on $2$. For $X\subseteq T$, let $\mu(X)=\sigma(\flr{X})$. \begin{definition}[Null sets] Let \begin{xitemize} \item[] ${\mathcal N}_T=\setof{X\in[T]^{\aleph_0}}{\mu(X)=0}$. 
\end{xitemize} \end{definition} \begin{theorem} \label{null-sets} ${\sf cov}({\mathcal N})$, $\mathfrak{a}\leq\mathfrak{a}({\mathcal N}_T)$. \end{theorem} \begin{proof} Similarly to the proof of \Thmof{meager-set}. \mbox{} \usebox{\qedbox} \end{proof} \begin{definition}[Nowhere dense null sets] Let \begin{xitemize} \item[] ${\mathcal N\hspace{-0.2ex}\mathcal D\hspace{-0.2ex}\mathcal N}_T={\mathcal N\hspace{-0.2ex}\mathcal D}_T\cap{\mathcal N}_T$. \end{xitemize} \end{definition} \begin{lemma} $\mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D}_T)\leq\mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D\hspace{-0.2ex}\mathcal N}_T)$\ \ and\ \ $\mathfrak{a}({\mathcal N}_T)\leq\mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D\hspace{-0.2ex}\mathcal N}_T)$. \end{lemma} \begin{proof} For the first inequality, suppose that ${\mathcal F}$ is a mad family in ${\mathcal N\hspace{-0.2ex}\mathcal D\hspace{-0.2ex}\mathcal N}_T$. Then ${\mathcal F}$ is an ad family in ${\mathcal N\hspace{-0.2ex}\mathcal D}_T$. It is also mad in ${\mathcal N\hspace{-0.2ex}\mathcal D}_T$: suppose not. Then there is an $X\in{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ ad to ${\mathcal F}$. Let $X'\in[X]^{\aleph_0}$ be as in the measure analog of \Lemmaof{claim-0}. Then $X'\in{\mathcal N\hspace{-0.2ex}\mathcal D\hspace{-0.2ex}\mathcal N}_T$. Hence ${\mathcal F}$ is not mad in ${\mathcal N\hspace{-0.2ex}\mathcal D\hspace{-0.2ex}\mathcal N}_T$. This is a contradiction. The second inequality can also be proved similarly. \mbox{} \usebox{\qedbox} \end{proof} The diagram Fig.\,\ref{fig:1} summarizes the inequalities obtained in this section integrated into the cardinal diagram given in Brendle \cite{brendle}. ``$\kappa\,\rightarrow\,\lambda$'' in the diagram means that ``$\kappa\leq\lambda$ is provable in {\sf ZFC}''. There are still some open questions concerning the (in)completeness of this diagram. 
In particular: \begin{figure}\label{fig:1} \end{figure} \begin{problem} \assert{a} Are the inequalities between $\mathfrak{a}({\mathcal N}_T)$, $\mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D}_T)$ and $\mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D\hspace{-0.2ex}\mathcal N}_T)$ consistently strict, and is the diagram complete with respect to these invariants? \assert{b} Are $\mathfrak{a}({\mathcal N\hspace{-0.2ex}\mathcal D}_T)$ etc.\ independent of $\mathfrak{o}$, $\bar{\mathfrak{o}}$, $\mathfrak{a}_\mathfrak{s}$ ? \end{problem} \section{Ad families ${\mathcal F}$ for which ${\mathcal F}^\perp$ is contained in a certain subfamily of $[T]^{\aleph_0}$} \label{mad-over-pad} In this section we give several constructions of ad families with the property that the sets ad to them in a given generic extension are necessarily in a certain subfamily of $[T]^{\aleph_0}$. The constructions in this section are used in the proofs of some results in the following sections. \begin{theorem} \label{osvaldo} There is an ad family ${\mathcal F}\subseteq{\mathcal A}_T$ of cardinality ${\sf non}({\mathcal M})$ such that, for any poset\ $\bbd{P}$ preserving the non-meagerness of ground-model non-meager sets, we have \begin{xitemize} \xitem[a-3] $\forces{\bbd{P}}{{\mathcal F}^\perp\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T}$. \end{xitemize} \end{theorem} The following assertion was originally proved under {\sf CH}: \begin{corollary} \label{osvaldo-0} There is an ad family ${\mathcal F}\subseteq{\mathcal A}_T$ of cardinality ${\sf non}({\mathcal M})$ such that, for any cardinal $\kappa$, we have \begin{xitemize} \xitem[cc-0] $V^{\Cohen{\kappa}}\models{\mathcal F}^\perp\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T$. \end{xitemize} \end{corollary} \prf The corollary follows from \Thmof{osvaldo} since the Cohen forcing $\Cohen{\kappa}$ preserves the non-meagerness of ground-model non-meager sets (see e.g.\ 11.3 in \cite{blass}).\mbox{} \usebox{\qedbox} For the proof of \Thmof{osvaldo}, we use the following lemma. 
Let \begin{xitemize} \xitem[] ${\mathcal P}=\setof{f}{\mapping{f}{X}{\omega}\mbox{ for some }X\in[\omega]^{\aleph_0}}$. \end{xitemize} \begin{lemma} \label{osvaldo-1} There is a mapping $\mapping{F}{\fnsp{\omega}{\omega}}{{\mathcal P}}$ such that\ \begin{xitemize} \xitem[a-4] If $f$, $g\in\fnsp{\omega}{\omega}$, $f\not=g$, then $\cardof{F(f)\cap F(g)}<\aleph_0$. \xitem[a-5] If $h\in\fnsp{\omega}{\omega}$ and $X\subseteq\fnsp{\omega}{\omega}$ is non-meager, then there is $f\in X$ such that\ $\cardof{h\cap F(f)}=\aleph_0$. \end{xitemize} Furthermore, $F$ as above can be chosen such that\ it is definable and absolute in the sense that \xitemof{a-4} and \xitemof{a-5} hold for the extension of $F$ with the same definition in any generic extension of the ground model. \end{lemma} \prf Let $\seqof{s_n}{n\in\omega}$ be a one to one recursive enumeration of $\fnsp{\omega>}{\omega}$. For $f\in\fnsp{\omega}{\omega}$, let $\mathop{\rm dom}(F(f))=\setof{n\in\omega}{s_n\subseteq f}$. Let $\mapping{F(f)}{\mathop{\rm dom}(F(f))}{\omega}$ be defined by \begin{xitemize} \xitem[a-6] $F(f)(n)=f(\cardof{s_n})$ \end{xitemize} for $n\in\mathop{\rm dom}(F(f))$. \begin{claim} This $F$ is as desired. \end{claim} \prfofClaim It is clear that $F$ satisfies \xitemof{a-4} --- note that it is crucial here that the enumeration $\seqof{s_n}{n\in\omega}$ is chosen to be one to one. To show that $F$ also satisfies \xitemof{a-5}, suppose $h\in\fnsp{\omega}{\omega}$. It is enough to show that \begin{xitemize} \xitem[a-7] $N(h)=\setof{g\in\fnsp{\omega}{\omega}}{\cardof{h\cap F(g)}<\aleph_0}$ is a meager subset of $\fnsp{\omega}{\omega}$. \end{xitemize} For $k\in\omega$, let $N_k(h)=\setof{g\in\fnsp{\omega}{\omega}}{\cardof{h\cap F(g)}<k}$. Since $N(h)=\bigcup_{k\in\omega}N_k(h)$, it is enough to show that $N_k(h)$ is a nowhere dense subset of $\fnsp{\omega}{\omega}$ for each $k\in\omega$. 
For this, we prove, by induction on $k$, \begin{xitemize} \xitem[a-8] For any $s\in\fnsp{\omega>}{\omega}$, there are $s'\in\fnsp{\omega>}{\omega}$ and $m'\in\omega$ such that\ $s\subseteq s'$ and $\cardof{(h\restriction m')\cap F(g)}\geq k$ for all $g\in[s']$. \end{xitemize} For $k=0$, \xitemof{a-8} holds trivially with $s'=s$ and $m'=0$. Suppose that \xitemof{a-8} holds for $k=\ell$ and let $s\in\fnsp{\omega>}{\omega}$. By the induction hypothesis we may assume without loss of generality\ that there is an $m\in\omega$ such that\ $\cardof{(h\restriction m)\cap F(g)}\geq\ell$ for all $g\in[s]$. Let $n\in\omega$ be such that\ $n\geq m$, $n\geq\cardof{s}$ and $s_n\supseteq s$. Let \begin{xitemize} \xitem[] $s'=s_n\cup\ssetof{\pairof{\cardof{s_n},h(n)}}$. \end{xitemize} For any $g\in[s']$, we have $n\in\mathop{\rm dom}(F(g))$ by $s_n\subseteq s'\subseteq g$, and $F(g)(n)=g(\cardof{s_n})=h(n)$. Letting $m'=n+1$, we have $\cardof{(h\restriction m')\cap F(g)}\geq\ell+1$. Thus, \xitemof{a-8} holds for $k=\ell+1$ with these $s'$ and $m'$. \qedofClaim The definability and the absoluteness of $F$ is clear from the construction given above. \mbox{} \usebox{\qedbox} \noindent {\bf Proof of \bfThmof{osvaldo}:}\ \ Let \begin{xitemize} \xitem[a-9] $Q=\setof{q\in\fnsp{\omega}{2}}{q(n)\mbox{ is eventually }0}$. \end{xitemize} That is, for $q\in\fnsp{\omega}{2}$, $q\in Q$ if and only if $\cardof{\setof{n\in\omega}{q(n)=1}}<\aleph_0$. For $q\in Q$, let \begin{xitemize} \xitem[a-10] $\ell_q=\min\setof{\ell\in\omega}{ \forall m\ (\ell\leq m\,\rightarrow\,q(m)=0)}$. \end{xitemize} Let $\seqof{q_n}{n\in\omega}$ be a one to one enumeration of $Q$. For $n$, $k\in\omega$ let \begin{xitemize} \xitem[a-11] $T_{n,k}=\setof{s\in T}{q_n\restriction(\ell_{q_n}+k)\cup\ssetof{\pairof{\ell_{q_n}+k,1}} \subseteq s}$ \end{xitemize} and let $\seqof{s_{n,k,i}}{i\in\omega}$ be a one to one enumeration of $T_{n,k}$. Let $F$ be as in \Lemmaof{osvaldo-1}. 
For $n\in\omega$ and $f\in\fnsp{\omega}{\omega}$, let \begin{xitemize} \xitem[a-12] $F_n(f)=\setof{s_{n,k,i}}{k\in\mathop{\rm dom}(F(f)),\,i=F(f)(k)}$. \end{xitemize} Let $N\subseteq\fnsp{\omega}{\omega}$ be a non-meager set with $\cardof{N}={\sf non}({\mathcal M})$. Let ${\mathcal F}_n=F_n\imageof N$ and ${\mathcal F}=\bigcup_{n\in\omega}{\mathcal F}_n$. We show that this ${\mathcal F}$ is as desired: \begin{claim} \assertof{1} ${\mathcal F}\subseteq{\mathcal A}_T$. \assertof{2} ${\mathcal F}$ is ad. \assertof{3} \xitemof{a-3} holds for every poset\ $\bbd{P}$ preserving the non-meagerness of ground-model non-meager sets. \end{claim} \prfofClaim \assertof{1}: Suppose that $A\in{\mathcal F}$ and $A=F_n(f)$ for some $n\in\omega$ and $f\in N$. If $s_0$, $s_1$ are two different elements of $A$, then there are $k_0$, $k_1\in\mathop{\rm dom}(F(f))$, $k_0\not=k_1$ and $i_0$, $i_1\in\omega$ such that\ $s_0=s_{n,k_0,i_0}$ and $s_1=s_{n,k_1,i_1}$. Since $s_0\in T_{n,k_0}$ and $s_1\in T_{n,k_1}$, it follows that $s_0$ and $s_1$ are incompatible. \assertof{2}: Suppose that $A_0$, $A_1\in{\mathcal F}$ with $A_0\not=A_1$. Let $A_0=F_{n_0}(f_0)$ and $A_1=F_{n_1}(f_1)$. If $n_0\not=n_1$ then we have $\cardof{A_0\cap A_1}\leq 1$. Otherwise $n_0=n_1$ and $f_0\not=f_1$. Thus, by \xitemof{a-4}, $\cardof{A_0\cap A_1}=\cardof{F(f_0)\cap F(f_1)}<\aleph_0$. \assertof{3}: Let $G$ be a $(V,\bbd{P})$-generic filter and work in $V[G]$. Note that, by our assumption, $N$ is still non-meager in $V[G]$. Suppose that $B\in [T]^{\aleph_0}\setminus{\mathcal N\hspace{-0.2ex}\mathcal D}_T$. We have to show that $\cardof{A\cap B}=\aleph_0$ for some $A\in{\mathcal F}$. Since $B\not\in{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ there is $n\in\omega$ such that\ $B\downarrow (q_n\restriction\ell_{q_n})$ is dense below $q_n\restriction\ell_{q_n}$. It follows that, for each $k\in\omega$, there is $h(k)\in\omega$ such that\ $s_{n,k,h(k)}\in B$. 
By \xitemof{a-5} (which still holds in the generic extension $V[G]$), there is $f\in N$ such that\ $\cardof{h\cap F(f)}=\aleph_0$. By the definition of $h$ and $F_n(f)$, it follows that $\cardof{B\cap F_n(f)}=\aleph_0$. \qedofClaim \mbox{} \usebox{\qedbox} We can also obtain a variation of \Thmof{osvaldo} if our ground model is a generic extension of some inner model by adding uncountably many Cohen reals. Note that ${\sf non}({\mathcal M})=\aleph_1$ holds in such a ground model. \begin{theorem} \label{cohen-nd-1} Suppose that $W=V^\Cohen{\omega_1}$. Then, in $W$, there is an ad family ${\mathcal F}\subseteq{\mathcal A}_T$ of cardinality $\aleph_1$ such that \begin{xitemize} \xitem[cohen-nd-a] for any c.c.c.\ poset\ $\bbd{P}$ with $\bbd{P}\in V$, we have $W^\bbd{P}\models{\mathcal F}^\perp\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T$. \end{xitemize} \end{theorem} \begin{proof} Let $A\in[T]^{\aleph_0}\cap V$ be an antichain and let $\seqof{t^*_n}{n\in\omega}$ be a one to one enumeration of $A$. Let $G$ be a $(V,\Cohen{\omega_1})$-generic filter and $W=V[G]$. For $p\in\Cohen{\omega_1}$, $\alpha<\omega_1$ and $k\in\omega$, let \begin{xitemize} \item[] $f^p_\alpha=\setof{\pairof{n,i}\in\omega\times\omega}{\pairof{\omega\alpha+3n,i}\in p}$; \item[] $n^p_{\alpha,k}=\left\{\, \begin{array}{@{}ll} n, &\mbox{if }[\omega\alpha,\omega\alpha+3n+1]\subseteq\mathop{\rm dom}(p),\\ &\phantom{\mbox{if }} p(\omega\alpha+3n+1)=1\mbox{ and}\\ &\phantom{\mbox{if }} \cardof{\setof{m<n}{p(\omega\alpha+3m+1)=1}}=k,\\[2\jot] \mbox{undefined}, &\mbox{if there is no such }n\mbox{ as above;} \end{array}\right. $ \item[] $t^p_\alpha=\left\{\, \begin{array}{@{}ll} \setof{\pairof{n,i}\in\omega\times\omega}{ n<n^p_{\alpha,0},\,\pairof{\omega\alpha+3n+2,i}\in p},\\[\jot] \phantom{\mbox{undefined},\qquad}\mbox{if }n^p_{\alpha,0}\mbox{ is defined,}\\[2\jot] \mbox{undefined},\qquad\mbox{otherwise} \end{array}\right. 
$ and \item[] $t^p_{\alpha,k}=\left\{\, \begin{array}{@{}ll} \setof{\pairof{n,i}\in\omega\times\omega}{ n<n^p_{\alpha,k+1},\,\pairof{\omega\alpha+3n+2,i}\in p},\\[\jot] \phantom{\mbox{undefined},\qquad}\mbox{if }n^p_{\alpha,k+1}\mbox{ is defined,}\\[2\jot] \mbox{undefined},\qquad\mbox{otherwise.} \end{array}\right. $ \end{xitemize} Let \begin{xitemize} \item[] $f^G_\alpha=\bigcup_{p\in G}f^p_\alpha$, \item[] $t^G_{\alpha}=t^p_{\alpha}$ for some $p\in G$ such that\ $t^p_{\alpha}$ is defined, and \item[] $t^G_{\alpha,k}=t^p_{\alpha,k}$ for some $p\in G$ such that\ $t^p_{\alpha,k}$ is defined. \end{xitemize} For $\alpha\in\omega_1$, let \begin{xitemize} \xitem[A-alpha] $A_\alpha=\setof{t^G_\alpha\mathop{{}^{\frown}} t^*_k\mathop{{}^{\frown}} t^G_{\alpha,k}}{k\in\omega}$. \end{xitemize} Clearly each $A_\alpha$ is an antichain in $T$. $A_\alpha$, $\alpha<\omega_1$ are pairwise almost disjoint: Suppose that $\alpha<\beta<\omega_1$. Then there is $k_0<\omega$ such that\ $t^G_{\alpha,k}\not=t^G_{\beta,k}$ for all $k\in\omega\setminus k_0$. It follows that $A_\alpha\cap A_\beta\subseteq\setof{t^G_\alpha\mathop{{}^{\frown}} t^*_k\mathop{{}^{\frown}} t^G_{\alpha,k}}{k<k_0}$. We show that ${\mathcal F}=\setof{A_\alpha}{\alpha<\omega_1}$ satisfies \xitemof{cohen-nd-a}. Suppose that $\bbd{P}$ is a c.c.c.\ poset (in $W$) and $\bbd{P}\in V$. Let $H$ be a $(W,\bbd{P})$-generic filter. It is enough to show that, in $W[H]$, if $X\in[T]^{\aleph_0}$ is not nowhere dense then $X$ is not ad to ${\mathcal F}$. By the c.c.c.\ of $\Cohen{\omega_1}*\hat{\bbd{P}} \sim \Cohen{\omega_1}\times\bbd{P}$, there is an $\alpha^*\in\omega_1$ such that\ $X\in V[G\restriction\Cohen{\omega\alpha^*}][H]$. Let $t\in T$ be such that\ $X$ is dense below $t$. Then \[D=\setof{p\in\Cohen{\omega_1\setminus\omega\alpha^*}}{t^p_\alpha\supseteq t \mbox{ for some }\alpha\in\omega_1\setminus\omega\alpha^*} \]\noindent is dense in $\Cohen{\omega_1\setminus\omega\alpha^*}$. 
For $p\in D$ and $\alpha\in\omega_1\setminus\omega\alpha^*$ such that\ $t^p_\alpha\supseteq t$, letting $\utilde{A}_\alpha$ be a $\Cohen{\omega_1\setminus\omega\alpha^*}$-name of $A_\alpha$, we have $p\forces{\Cohen{\omega_1\setminus\omega\alpha^*}}{ \cardof{\utilde{A}_\alpha\cap X\downarrow t}=\aleph_0}$ by \xitemof{A-alpha} and since $X$ is dense below $t$. By genericity, it follows that, in $W[H]$, there is $\alpha<\omega_1$ such that\ $\cardof{A_\alpha\cap X}=\aleph_0$. \mbox{}\mbox{} \usebox{\qedbox} \end{proof} A measure version of {\ifJapanese 定理\else Theorem\fi\ \number\theThm}\ also holds: \begin{theorem} \label{random-n} Let $W=V^\Cohen{\omega_1}$. Then, in $W$, there is an ad family ${\mathcal F}$ in ${\mathcal N}_T$ of cardinality $\aleph_1$ such that\, for any c.c.c.\ poset\ $\bbd{P}$ with $\bbd{P}\in V$, we have $W^{\bbd{P}}\models{\mathcal F}^\perp\subseteq{\mathcal O}_T$. \end{theorem} For the proof of {\ifJapanese 定理\else Theorem\fi\ \number\theThm}\ we note first the following: \begin{lemma} \label{null-set} Suppose that $X\subseteq T$ is such that\ $X=\setof{t_k}{k\in\omega}$ for some enumeration $t_k$, $k\in\omega$ of $X$ with $\ell(t_k)\geq k$ for all $k\in\omega$. Then $X\in{\mathcal N}_T$. \end{lemma} \begin{proof} For all $n\in\omega$, we have $\flr{X}\subseteq\bigcup_{k\in\omega\setminus n}\flr{T\downarrow t_k}$. Hence \begin{xitemize} \item[] $\mu(X) =\sigma(\flr{X})\leq\sum_{k\in\omega\setminus n}\sigma(\flr{T\downarrow t_k}) =\sum_{k\in\omega\setminus n}2^{-\ell(t_k)} \leq\sum_{k\in\omega\setminus n}2^{-k}=2^{-n+1}$. \end{xitemize} It follows that $\mu(X)=0$. \mbox{} \usebox{\qedbox} \end{proof} \begin{proof}[of \itThmof{random-n}] Let $G$ be a $(V,\Cohen{\omega_1})$-generic filter and $W=V[G]$. In $W$, let \begin{xitemize} \item[] $f^G_\alpha=\setof{\pairof{n,i}}{\pairof{\omega\alpha+n,i}\in p \mbox{ for some }p\in G}$ \end{xitemize} for $\alpha<\omega_1$ and let $g^G_\alpha\in\fnsp{\omega}{\omega}$ be the increasing enumeration of $\left(f^G_\alpha\right)^{-1}[\ssetof{1}]$. 
Further in $W$, we construct inductively $A_\alpha\in{\mathcal N}_T$, $\alpha<\omega_1$ as follows. For $n\in\omega$, let $A_n\in{\mathcal N}_T$ be such that\ $\seqof{A_n}{n\in\omega}$ is a partition of $T$ in $V$. We can easily find such $A_n$'s by \Lemmaof{null-set}. For $\omega\leq\alpha<\omega_1$, suppose that pairwise almost disjoint $A_\beta$, $\beta<\alpha$ have been constructed. Let $\seqof{B_\ell}{\ell\in\omega}$ be an enumeration of $\setof{A_\beta}{\beta<\alpha}$ and, for each $n\in\omega$, let $\seqof{b_{n,m}}{m\in\omega}$ be an enumeration of \begin{xitemize} \xitem[d-0] $C_n=T\setminus\left(\fnsp{n>}{2}\cup\bigcup\setof{B_\ell}{\ell<n}\right)$. \end{xitemize} Let \begin{xitemize} \xitem[d-1] $A_\alpha=\setof{b_{n,g^G_\alpha(n)}}{n\in\omega}$. \end{xitemize} $A_\alpha\in{\mathcal N}_T$ by \xitemof{d-0} and \Lemmaof{null-set}. $A_\alpha$ is ad to $\setof{A_\beta}{\beta<\alpha}$ by \xitemof{d-0} and \xitemof{d-1}. We show that ${\mathcal F}=\setof{A_\alpha}{\alpha<\omega_1}$ is as desired. Suppose that $\bbd{P}$ is c.c.c.\ (in $W$) and $\bbd{P}\in V$. Let $H$ be a $(W,\bbd{P})$-generic filter. It is enough to show that, in $W[H]$, if $X\in[T]^{\aleph_0}\setminus{\mathcal O}_T$ then $X$ is not ad to ${\mathcal F}$. So suppose that (in $W[H]$) $X\in [T]^{\aleph_0}\setminus{\mathcal O}_T$ and $f\in\flr{X}$. Let $B=X\cap B(f)$. By the c.c.c.\ of $\Cohen{\omega_1}\ast\hat{\bbd{P}}\sim\Cohen{\omega_1}\times\bbd{P}$, there is an $\alpha^*\in\omega_1\setminus\omega$ such that\ $B\in V[(G\restriction\Cohen{\omega\alpha^*})][H]$. If $B\cap A_\alpha$ is infinite for some $\alpha<\alpha^*$ then we are done. So assume that $B$ is ad to all $A_\alpha$, $\alpha<\alpha^*$. Then $B\cap C_n$ is infinite for all $n\in\omega$. Since $f^G_{\alpha^*}$ is a Cohen real generic over $V[(G\restriction\Cohen{\omega\alpha^*})][H]$, it follows that $B\cap A_{\alpha^*}$ is infinite. 
\mbox{} \usebox{\qedbox} \end{proof} \section{Almost disjoint numbers over ad families} In this section we turn to questions on the possible values of $\mathfrak{a}^+(\cdot)$. \label{aplus} \begin{theorem}{\rm (K.\ Kunen)} \label{aplus-o} $\mathfrak{a}^+(\bar{\mathfrak{o}})=\mathfrak{c}$. \end{theorem} \begin{proof} Let ${\mathcal F}$ be any mad family in ${\mathcal A}_T$ of cardinality $\bar{\mathfrak{o}}$. By maximality of ${\mathcal F}$ we have ${\mathcal F}^\perp={\mathcal B}_T$. If ${\mathcal G}\subseteq[T]^{\aleph_0}$ is disjoint from ${\mathcal F}$ and ${\mathcal F}\cup{\mathcal G}$ is mad then ${\mathcal G}$ is mad in ${\mathcal B}_T$ and hence $\cardof{{\mathcal G}}=\mathfrak{c}$ by \Thmof{Kunen's thm A}. \mbox{} \usebox{\qedbox} \end{proof} \begin{theorem} \label{aplus-small-Cohen} $V^{\Cohen{\kappa}}\models\mathfrak{a}^+(\aleph_1)\geq\kappa$ for all regular $\kappa$. \end{theorem} \begin{proof} If $\kappa=\omega_1$ this is trivial. So suppose that $\kappa>\omega_1$. Let $W=V^{\Cohen{\omega_1}}$. Then $V^{\Cohen{\kappa}}=W^{\Cohen{\kappa\setminus\omega_1}}$. Let ${\mathcal F}$ be as in the proof of \Thmof{cohen-nd-1}. Suppose that $\tilde{{\mathcal F}}\supseteq{\mathcal F}$ is mad on $T$ in $V^{\Cohen{\kappa}}$. Then $\tilde{{\mathcal F}}\setminus{\mathcal F}\subseteq{\mathcal F}^\perp\subseteq\left({\mathcal N\hspace{-0.2ex}\mathcal D}_T\right)^{V^{\Cohen{\kappa}}}$ by \Thmof{cohen-nd-1}, and hence $\tilde{{\mathcal F}}\subseteq\left({\mathcal N\hspace{-0.2ex}\mathcal D}_T\right)^{V^{\Cohen{\kappa}}}$ since ${\mathcal F}\subseteq{\mathcal A}_T\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T$. Since $V^\Cohen{\kappa}\models {\sf cov}({\mathcal M})\geq\kappa$, it follows that $\cardof{\tilde{{\mathcal F}}}\geq\kappa$ by \Thmof{meager-set}. \mbox{} \usebox{\qedbox} \end{proof} \begin{corollary} \label{aplus-small-large} The inequality $\mathfrak{a}=\aleph_1<\mathfrak{a}^+(\aleph_1)=\mathfrak{c}$ is consistent. \end{corollary} \begin{proof} Start from a model $V$ of {\sf CH}. Since there is a $\Cohen{\omega_2}$-indestructible mad family in $V$ it follows that $V^{\Cohen{\omega_2}}\models\mathfrak{a}=\aleph_1$ (see e.g.\ \cite{kunen-book}, Theorem 2.3). 
On the other hand we have $V^{\Cohen{\omega_2}}\models\mathfrak{a}^+(\aleph_1)=\aleph_2=\mathfrak{c}$ by \Thmof{aplus-small-Cohen}. \mbox{} \usebox{\qedbox} \end{proof} \begin{theorem} \label{aplus-small-small} The inequality $\mathfrak{a}^+(\aleph_1)<\mathfrak{c}$ is consistent. \end{theorem} \noindent For the proof of the theorem we use the following forcing notions: for a family ${\mathcal I}\subseteq\setof{A\in[\omega]^{\aleph_0}}{\cardof{\omega\setminus A}=\aleph_0}$ closed under union, let $\bbd{Q}_{\mathcal I}=\pairof{\bbd{Q}_{\mathcal I},\leq_{\bbd{Q}_{\mathcal I}}}$ be the poset\ defined by \begin{xitemize} \item[] $\bbd{Q}_{\mathcal I}=\Cohen{\omega}\times{\mathcal I}$\,; \end{xitemize} For all $\pairof{s,A}$, $\pairof{s',A'}\in\bbd{Q}_{\mathcal I}$ \begin{xitemize} \xitem[] $ \begin{array}[t]{r@{}l} \pairof{s',A'}\leq_{\bbd{Q}_{\mathcal I}}\pairof{s,A}\ \ \Leftrightarrow\ \ &s\subseteq s',\ A\subseteq A'\mbox{ and }\\ &\mbox{}\hspace{-12pt}\forall n\in\mathop{\rm dom}(s')\setminus\mathop{\rm dom}(s)\ (n\in A\ \rightarrow\ s'(n)=0). \end{array} $ \end{xitemize} Clearly $\bbd{Q}_{\mathcal I}$ is $\sigma$-centered. For a $(V,\bbd{Q}_{\mathcal I})$-generic $G$, let \begin{xitemize} \item[] $f_G=\bigcup\setof{s}{\pairof{s,A}\in G\mbox{ for some }A\in{\mathcal I}}$ and \item[] $A_G=f^{-1}_G\imageof\ssetof{1}$. \end{xitemize} Let $\tilde{{\mathcal I}}$ be the ideal in $[\omega]^{\aleph_0}$ generated from ${\mathcal I}$ (i.e.\ the downward closure of ${\mathcal I}$ with respect to\ $\subseteq$). By the genericity of $G$ and the definition of $\leq_{\bbd{Q}_{\mathcal I}}$ it is easy to see that $A_G$ is infinite and \begin{xitemize} \xitem[c-10] for every $B\in([\omega]^{\aleph_0})^V$, $A_G$ is almost disjoint from $B$\ \ $\Leftrightarrow$\ \ $B\in\tilde{{\mathcal I}}$. 
\end{xitemize} \noindent \begin{proof}[of \itThmof{aplus-small-small}] Working in a ground model $V$ of $2^{\aleph_0}=2^{\aleph_1}=\aleph_3$, let \begin{xitemize} \item[] $\seqof{\bbd{P}_\alpha,\utilde{\bbd{Q}}_\beta}{\alpha\leq\omega_2,\,\beta<\omega_2}$ \end{xitemize} be the finite support iteration of c.c.c.\ posets\ defined as follows: for $\beta<\omega_2$, let $\utilde{\bbd{Q}}_\beta$ be the $\bbd{P}_\beta$-name of the finite support (side-by-side) product of \begin{xitemize} \xitem[c-11] $\bbd{Q}_{\tilde{{\mathcal F}}}$, $\tilde{{\mathcal F}}\in\Phi$ \end{xitemize} where \begin{xitemize} \item[] $ \begin{array}[t]{r@{}l} \Phi=\setof{\tilde{{\mathcal F}}}{{}&\tilde{{\mathcal F}} \mbox{ is an ideal in }[\omega]^{\aleph_0}\\ & \mbox{ generated from an ad family in } [\omega]^{\aleph_0}\mbox{ of cardinality }\aleph_1} \end{array} $ \end{xitemize} in $V^{\bbd{P}_\beta}$. We have \begin{xitemize} \item[] $V^{\bbd{P}_\beta}\models\utilde{\bbd{Q}}_\beta\mbox{ satisfies the c.c.c.}$ \end{xitemize} since $V^{\bbd{P}_\beta}\models\bbd{Q}_{\tilde{{\mathcal F}}}\mbox{ is }\sigma \mbox{-centered for all }\tilde{{\mathcal F}}\in\Phi$. By induction on $\alpha\leq\omega_2$, we can show that $\bbd{P}_\alpha$ satisfies the c.c.c.\ and $\cardof{\bbd{P}_\alpha}\leq 2^{\aleph_1}=\aleph_3$ for all $\alpha\leq\omega_2$. It follows that \begin{xitemize} \xitem[] $V^{\bbd{P}_{\omega_2}}\models 2^{\aleph_0}=2^{\aleph_1}=\aleph_3$. \end{xitemize} Thus the following claim finishes the proof: \begin{claim} $V^{\bbd{P}_{\omega_2}}\models\mathfrak{a}=\mathfrak{a}^+(\aleph_1)=\aleph_2$. \end{claim} \prfofClaim Working in $V^{\bbd{P}_{\omega_2}}$, suppose that ${\mathcal F}$ is an ad family in $[\omega]^{\aleph_0}$ of cardinality $\aleph_1$. By the c.c.c.\ of $\bbd{P}_{\omega_2}$, there is some $\alpha^*<\omega_2$ such that\ ${\mathcal F}\in V^{\bbd{P}_{\alpha^*}}$. 
By \xitemof{c-11} and \xitemof{c-10}, there are $A_\alpha$, $\alpha\in\omega_2\setminus\alpha^*$ such that\ \begin{xitemize} \xitem[] for every $B\in([\omega]^{\aleph_0})^{V^{\bbd{P}_{\alpha}}}$, $A_\alpha$ is ad from $B$\ \ $\Leftrightarrow$\ \ $B\in$ the ideal generated from ${\mathcal F}\cup\setof{A_\beta}{\beta\in\alpha\setminus\alpha^*}$. \end{xitemize} Since $([\omega]^{\aleph_0})^{V^{\bbd{P}_{\omega_2}}} =\bigcup_{\alpha<\omega_2}([\omega]^{\aleph_0})^{V^{\bbd{P}_\alpha}}$, it follows that ${\mathcal F}\cup\setof{A_\alpha}{\alpha\in\omega_2\setminus\alpha^*}$ is a mad family in $V^{\bbd{P}_{\omega_2}}$. This shows that $V^{\bbd{P}_{\omega_2}}\models\mathfrak{a}^+(\aleph_1)\leq\aleph_2$. We also have $V^{\bbd{P}_{\omega_2}}\models\mathfrak{a}\geq\aleph_2$: for any ad family ${\mathcal G}\subseteq([\omega]^{\aleph_0})^{V^{\bbd{P}_{\omega_2}}}$ of cardinality $\leq\aleph_1$, there is some $\alpha^*<\omega_2$ such that\ ${\mathcal G}\in V^{\bbd{P}_{\alpha^*}}$. But $\smash{\utilde{\bbd{Q}}_{\alpha^*}}$ \ adds an infinite subset of $\omega$ almost disjoint to every element of ${\mathcal G}$. Hence ${\mathcal G}$ is not mad. \qedofClaim\mbox{} \usebox{\qedbox} \end{proof} Clearly, the method of the proof of {\ifJapanese 定理\else Theorem\fi\ \number\theThm}\ cannot produce a model of $\mathfrak{a}^+(\aleph_1)=\aleph_1<\mathfrak{c}$. \begin{problem} Is $\mathfrak{a}^+(\aleph_1)=\aleph_1<\mathfrak{c}$ consistent? \end{problem} All infinite cardinals less than or equal to the continuum $\mathfrak{c}$ can be represented as $\mathfrak{a}^+({\mathcal F})$ for some ${\mathcal F}$. \begin{theorem} \label{aplus-all} For any infinite $\kappa\leq\mathfrak{c}$, there is an ad family ${\mathcal F}\subseteq[T]^{\aleph_0}$ of cardinality $\mathfrak{c}$ such that\ $\mathfrak{a}^+({\mathcal F})=\kappa$. \end{theorem} \begin{proof} Let ${\mathcal F}'$ be a mad family in ${\mathcal A}_T$. 
Then by \Lemmaof{kunen's lemma}, we have \begin{xitemize} \xitem[d-2] ${\mathcal F}'^\perp={\mathcal B}_T$. \end{xitemize} Let $X$ and $X'$ be disjoint with $\fnsp{\omega}{2}=X\cup X'$, $\cardof{X}=\mathfrak{c}$ and $\cardof{X'}=\kappa$. Let \begin{xitemize} \item[] ${\mathcal F}={\mathcal F}'\cup\setof{B(f)}{f\in X}$. \end{xitemize} Clearly ${\mathcal F}$ is an ad family. By \xitemof{d-2} we have ${\mathcal F}^\perp\subseteq{\mathcal B}_T$. We claim $\mathfrak{a}^+({\mathcal F})=\kappa$: Since ${\mathcal F}\cup\setof{B(f)}{f\in X'}$ is a mad family by \Lemmaof{kunen's lemma}, we have $\mathfrak{a}^+({\mathcal F})\leq\kappa$. Again by \Lemmaof{kunen's lemma}, if ${\mathcal G}\subseteq{\mathcal F}^\perp$ is an ad family of cardinality $<\kappa$, then there is $f\in X'$ such that\ $B(f)$ is ad from every $B\in{\mathcal G}$. Thus $\mathfrak{a}^+({\mathcal F})\geq\kappa$. \mbox{} \usebox{\qedbox} \end{proof} \section{Destructibility of mad families} For a poset\ $\bbd{P}$, a mad family ${\mathcal F}$ in $[T]^{\aleph_0}$ is said to be {\em $\bbd{P}$-destructible} if \begin{xitemize} \item[] $V^\bbd{P}\models{\mathcal F}$ is not mad in $[T]^{\aleph_0}$. \end{xitemize} Otherwise it is {\em$\bbd{P}$-indestructible}. The results in Section \ref{mad-over-pad} can also be formulated in terms of destructibility of mad families. \begin{theorem} \label{abs-0} \assert{1} There is an ad family ${\mathcal F}\subseteq{\mathcal A}_T$ of size ${\sf non}({\mathcal M})$ which cannot be extended to a $\Cohen{\omega}$-indestructible mad family in any generic extension $V^{\bbd{P}}$ of the ground model, as long as non-meager sets in $V$ remain non-meager in $V^\bbd{P}$. \assert{2} Let $W=V^{\Cohen{\omega_1}}$. 
Then, in $W$, there is an ad family ${\mathcal F}\subseteq{\mathcal N\hspace{-0.2ex}\mathcal D}_T$ of cardinality $\aleph_1$ such that, in any generic extension of $W$ by a c.c.c.\ poset\ $\bbd{P}$ with $\bbd{P}\in V$, ${\mathcal F}$ cannot be extended to a $\Cohen{\omega}$-indestructible mad family. \assert{3} Let $W=V^{\Cohen{\omega_1}}$. Then, in $W$, there is an ad family ${\mathcal F}\subseteq{\mathcal N}_T$ of cardinality $\aleph_1$ such that, in any generic extension of $W$ by a c.c.c.\ poset\ $\bbd{P}$ with $\bbd{P}\in V$, ${\mathcal F}$ cannot be extended to a $\random{\omega}$-indestructible mad family. \end{theorem} \begin{proof} \assertof{1}: The family ${\mathcal F}$ as in \Thmof{osvaldo} will do. Since we have ${\mathcal F}'\subseteq {\mathcal N\hspace{-0.2ex}\mathcal D}_T$ for any mad ${\mathcal F}'$ extending ${\mathcal F}$ in $V^\bbd{P}$, a further Cohen real over $V^\bbd{P}$ introduces a branch almost avoiding all elements of ${\mathcal F}'$. Thus ${\mathcal F}'$ is no longer mad in $V^{\bbd{P}\ast\Cohen{\omega}}$. \assertof{2}: By \Thmof{cohen-nd-1} and by an argument similar to the proof of \assertof{1}. \assertof{3}: In $W$, let ${\mathcal F}$ be as in the proof of \Thmof{random-n}. Then any mad ${\mathcal F}'\supseteq{\mathcal F}$ on $T$ in any $W^\bbd{P}$ for $\bbd{P}$ as above is included in ${\mathcal N}_T$ by ${\mathcal O}_T\subseteq{\mathcal N}_T$. Hence, in $W^{\bbd{P}\ast\random{\omega}}$, the random real $f$ over $W^\bbd{P}$ introduces the branch $B(f)$ almost avoiding all elements of ${\mathcal F}'$. Thus ${\mathcal F}'$ is no longer mad in $W^{\bbd{P}\ast\random{\omega}}$. \mbox{} \usebox{\qedbox} \end{proof} \section{$\kappa$-almost decided and $\lambda$-minimal mad families} In this final section we collect several other constructions of mad families with some additional properties. 
\iffalse \fi \newcommand{\operatorname{\cal I}}{\operatorname{\cal I}} Given an ad family ${\mathcal F}$ on $T$ let $\operatorname{\cal I} ({\mathcal F})$ be the ideal on $T$ generated by ${\mathcal F}\cup [T]^{<{\omega}}$, i.e. for $S\subset T$ we have $S\in \operatorname{\cal I}({\mathcal F})$ if $S\subset^*\cup{\mathcal F}'$ for some finite subfamily ${\mathcal F}'$ of ${\mathcal F}$. Let ${\mathcal F}$ be a mad family on $T$ and ${\mathcal B}\subseteq{\mathcal F}$. Clearly ${\mathcal B}^\perp \supseteq \operatorname{\cal I}({\mathcal F}\setminus {\mathcal B})\setminus[T]^{<\aleph_0}$. We say that ${\mathcal B}$ {\em almost decides} ${\mathcal F}$ if ${\mathcal B}^\perp = \operatorname{\cal I}({\mathcal F}\setminus {\mathcal B})\setminus[T]^{<\aleph_0}$. A mad family ${\mathcal F}$ is said to be {\em$\kappa$-almost decided\/} if every ${\mathcal B}\in[{\mathcal F}]^{\kappa}$ almost decides ${\mathcal F}$. \begin{theorem} \label{c-almost decided} Assume that ${\sf MA}(\sigma\mbox{-centered\/{}})$ holds. Then there is a $\mathfrak{c}$-almost decided mad family ${\mathcal F}$ on $T$. \end{theorem} \begin{proof} Let $\seqof{B_\beta}{\beta<\mathfrak{c}}$ be an enumeration of $[T]^{\aleph_0}$. We define $A_\alpha$, $\alpha<\mathfrak{c}$ inductively such that\ \begin{xitemize} \xitem[f-0] $\setof{A_n}{n\in\omega}$ is a partition of $T$ into infinite subsets; \end{xitemize} For all $\alpha\in\mathfrak{c}\setminus\omega$ \begin{xitemize} \xitem[f-1] $A_\alpha$ is ad from $A_\beta$ for all $\beta<\alpha$; \xitem[f-2] For $\beta<\alpha$, if $B_{\beta}\notin \operatorname{\cal I}(\setof{A_\delta}{{\delta}<{\alpha}})$ then $\cardof{A_\alpha\cap B_\beta}=\aleph_0$; \end{xitemize} \begin{claim} The construction of $A_\alpha$, $\alpha<\mathfrak{c}$ as above is possible. \end{claim} \prfofClaim Suppose that $\alpha\in\mathfrak{c}\setminus\omega$ and $A_\beta$, $\beta<\alpha$ have been constructed according to \xitemof{f-0}, \xitemof{f-1} and \xitemof{f-2}. 
Let \begin{xitemize} \item[] $S_\alpha=\setof{\beta<\alpha}{ B_{\beta}\notin \operatorname{\cal I}(\setof{A_\delta}{{\delta}<{\alpha}})} $. \end{xitemize} Let $\bbd{P}_\alpha=\setof{\pairof{\varphi,s}}{\varphi\in{\rm Fn}(T,2),\,s\in[\alpha]^{<\aleph_0}}$ be the poset\ with the ordering defined by \begin{xitemize} \item[] $\pairof{\varphi',s'}\leq_{\bbd{P}_\alpha}\pairof{\varphi,s}$\ \ $\Leftrightarrow$\\[\jot] \phantom{$\pairof{\varphi',s'}\leq$} $\varphi\subseteq \varphi'$, $s\subseteq s'$ and\\ \phantom{$\pairof{\varphi',s'}\leq$} $\forall t\in\mathop{\rm dom}(\varphi')\setminus\mathop{\rm dom}(\varphi)\ (\varphi'(t)=1\ \rightarrow\ t\not\in A_\delta\mbox{ for all }\delta\in s)$ \end{xitemize} for $\pairof{\varphi,s}$, $\pairof{\varphi',s'}\in\bbd{P}_\alpha$. $\bbd{P}_\alpha$ is $\sigma$-centered since $\pairof{\varphi,s}$, $\pairof{\varphi',s'}\in\bbd{P}_\alpha$ are compatible if $\varphi=\varphi'$. For $\beta<\alpha$, let \begin{xitemize} \item[] $C_\beta=\setof{\pairof{\varphi,s}\in\bbd{P}_\alpha}{\beta\in s}$ \end{xitemize} and, for $\beta\in S_\alpha$ and $n\in\omega$, let \begin{xitemize} \item[] $D_{\beta,n}=\setof{\pairof{\varphi,s}\in\bbd{P}_\alpha}{ \exists t\in\mathop{\rm dom}(\varphi)\ (\ell(t)\geq n\ \land\ \varphi(t)=1\ \land\ t\in B_{\beta})}$. \end{xitemize} It is easy to see that $C_\beta$, $\beta<\alpha$ and $D_{\beta,n}$, $\beta\in S_\alpha$, $n\in\omega$ are dense in $\bbd{P}_\alpha$. Let \begin{xitemize} \item[] ${\mathcal D}=\setof{C_{\beta}}{\beta<\alpha} \cup\setof{D_{\beta,n}}{\beta\in S_\alpha,\,n\in\omega} $. \end{xitemize} Since $\cardof{{\mathcal D}}<\mathfrak{c}$, we can apply ${\sf MA}(\sigma\mbox{-centered})$ to obtain a $({\mathcal D},\bbd{P}_\alpha)$-generic filter $G$. Let \begin{xitemize} \item[] $A_\alpha=\setof{t\in T}{\varphi(t)=1\mbox{ for some }\pairof{\varphi,s}\in G}$. \end{xitemize} Then this $A_\alpha$ is as desired. \qedofClaim Let ${\mathcal F}=\setof{A_\alpha}{\alpha<\mathfrak{c}}$. 
${\mathcal F}$ is infinite by \xitemof{f-1} and mad by \xitemof{f-2}. We show that ${\mathcal F}$ is $\mathfrak{c}$-almost decided. First, note that we have $\mathfrak{a}=\mathfrak{c}$ by the assumptions of the theorem. By \xitemof{f-2}, we have: \begin{xitemize} \xitem[f-4] For any $B\in[T]^{\aleph_0}$, if $B\notin \operatorname{\cal I}(\setof{A_{\alpha}}{{\alpha}<\mathfrak{c}})$ then \\ $\cardof{\setof{{\alpha}<\mathfrak{c}}{\cardof{A_{\alpha}\cap B}<\aleph_0}}<\mathfrak{c}$. \end{xitemize} Suppose that ${\mathcal H}\in[{\mathcal F}]^\mathfrak{c}$ and $B\in{\mathcal H}^\perp$. Then $\cardof{\setof{{\alpha}<\mathfrak{c}}{\cardof{A_{\alpha}\cap B}<\aleph_0}}=\mathfrak{c}$ and so $B\in \operatorname{\cal I}({\mathcal F})$ by \xitemof{f-4}. Thus there is a finite ${\mathcal F}'\subset {\mathcal F}$ such that $B\subset^* \cup{\mathcal F}'$ and $F\cap B$ is infinite for each $F\in {\mathcal F}'$. But $B\in {\mathcal H}^\perp$ so ${\mathcal F}'\cap {\mathcal H}=\emptyset$. Thus ${\mathcal F}'$ witnesses that $B\in \operatorname{\cal I}({\mathcal F}\setminus {\mathcal H})$ which was to be proved. \mbox{} \usebox{\qedbox} \end{proof} For a mad family ${\mathcal F}$ on $T$, ${\mathcal C}\subseteq{\mathcal F}$ is said to be {\em minimal in ${\mathcal F}$} if $\mathfrak{a}^+({\mathcal F}\setminus{\mathcal C})=\cardof{{\mathcal C}}$. A mad family ${\mathcal F}$ is said to be {\em$\lambda$-minimal\/} if every ${\mathcal C}\in[{\mathcal F}]^{\lambda}$ is minimal in ${\mathcal F}$. \begin{lemma}\label{almost=decided-minimal} Suppose that ${\mathcal F}$ is a mad family on $T$. \assert{1} If ${\mathcal F}$ is $\cardof{{\mathcal F}}$-minimal then $\cardof{{\mathcal F}}=\mathfrak{a}$. \assert{2} If ${\mathcal B}\subseteq{\mathcal F}$ almost decides ${\mathcal F}$ and ${\mathcal F}\setminus{\mathcal B}$ is infinite then ${\mathcal F}\setminus{\mathcal B}$ is minimal in ${\mathcal F}$. 
\assert{3} If ${\mathcal F}$ is $\kappa$-almost decided for $\kappa=\cardof{{\mathcal F}}$ then ${\mathcal F}$ is $\lambda$-minimal for all $\omega\leq\lambda<\kappa$. \assert{4} If $\cardof{{\mathcal F}}=\mathfrak{a}$ and ${\mathcal F}$ is $\mathfrak{a}$-almost decided then ${\mathcal F}$ is $\mathfrak{a}$-minimal. \end{lemma} \begin{proof} \assertof{1}: If ${\mathcal F}$ is $\cardof{{\mathcal F}}$-minimal then ${\mathcal F}$ itself is minimal in ${\mathcal F}$. Thus $\mathfrak{a}=\mathfrak{a}^+(\emptyset)=\mathfrak{a}^+({\mathcal F}\setminus{\mathcal F})=\cardof{{\mathcal F}}$. \assertof{2}: First, note that, for any infinite ad ${\mathcal F}$, we have $\mathfrak{a}(\operatorname{\cal I}({\mathcal F}))=\cardof{{\mathcal F}}$. Suppose that ${\mathcal F}$ is a mad family on $T$ and ${\mathcal B}\subseteq{\mathcal F}$ almost decides ${\mathcal F}$, i.e.\ ${\mathcal B}^\perp=\operatorname{\cal I}({\mathcal F}\setminus{\mathcal B})$. Hence \begin{xitemize} \item[] $\mathfrak{a}^+({\mathcal F}\setminus({\mathcal F}\setminus{\mathcal B}))=\mathfrak{a}^+({\mathcal B}) =\mathfrak{a}({\mathcal B}^\perp)=\mathfrak{a}(\operatorname{\cal I}({\mathcal F}\setminus{\mathcal B}))=\cardof{{\mathcal F}\setminus{\mathcal B}}$. \end{xitemize} \assertof{3}: Suppose that $\kappa=\cardof{{\mathcal F}}$ and ${\mathcal F}$ is $\kappa$-almost decided. If ${\mathcal C}\in[{\mathcal F}]^{\lambda}$ for some $\omega\leq\lambda<\kappa$ then $\cardof{{\mathcal F}\setminus{\mathcal C}}=\kappa$ and hence ${\mathcal F}\setminus{\mathcal C}$ almost decides ${\mathcal F}$. By \assertof{2} it follows that ${\mathcal C}={\mathcal F}\setminus({\mathcal F}\setminus{\mathcal C})$ is minimal in ${\mathcal F}$. \assertof{4}: Suppose that $\cardof{{\mathcal F}}=\mathfrak{a}$ and ${\mathcal F}$ is $\mathfrak{a}$-almost decided. Suppose that ${\mathcal C}\in[{\mathcal F}]^\mathfrak{a}$. 
If $\cardof{{\mathcal F}\setminus{\mathcal C}}<\mathfrak{a}$, then clearly $\mathfrak{a}^+({\mathcal F}\setminus{\mathcal C})=\mathfrak{a}=\cardof{{\mathcal C}}$. Hence ${\mathcal C}$ is minimal in ${\mathcal F}$. If $\cardof{{\mathcal F}\setminus{\mathcal C}}=\mathfrak{a}$ then ${\mathcal F}\setminus{\mathcal C}$ almost decides ${\mathcal F}$. Thus, by \assertof{2}, ${\mathcal C}={\mathcal F}\setminus({\mathcal F}\setminus{\mathcal C})$ is again minimal in ${\mathcal F}$. \mbox{} \usebox{\qedbox} \end{proof} \begin{corollary} \label{c-minimal} Assume that ${\sf MA}(\sigma\mbox{-centered\/{}})$ holds. Then there is a mad family ${\mathcal F}$ on $T$ which is $\lambda$-minimal for all $\omega\leq\lambda\leq\mathfrak{c}$. \end{corollary} \begin{proof} By \Thmof{c-almost decided} and \Lemmaof{almost=decided-minimal},\,\assertof{3}, \assertof{4}. \mbox{} \usebox{\qedbox} \end{proof} \Thmof{c-almost decided} can be further improved to the following theorem: \begin{theorem} \label{c-almost=decided-x} Assume that ${\sf MA}(\sigma\mbox{-centered\/{}})$ holds. Let $\kappa=\mathfrak{c}$. Then there is a $\Cohen{\omega}$-indestructible mad family ${\mathcal F}$ (of size $\kappa$) such that\ \begin{xitemize} \xitem[k-a-d-0] $V^\Cohen{\omega}\models{\mathcal F}\mbox{ is }\kappa\mbox{-almost decided on }T$. \end{xitemize} \end{theorem} \begin{proof} Let $\seqof{\pairof{t_\beta,\utilde{B}_\beta}}{\beta<\kappa}$ be an enumeration of \begin{xitemize} \item[] $T\times\setof{\utilde{B}}{\utilde{B} \mbox{ is a nice }\Cohen{\omega}\mbox{-name of an element of } [T]^{\aleph_0}\mbox{ in }V^{\Cohen{\omega}}}$. 
\end{xitemize} Let $A_\alpha$, $\alpha<\kappa$ be then defined inductively just as in the proof of \Thmof{c-almost decided} with \begin{xitemize} \item[\xitemof{f-2}$'$] For $\beta<\alpha$, if $t\forces{\Cohen{\omega}}{ \utilde{B}_{\alpha}\notin \operatorname{\cal I}(\setof{A_\delta}{{\delta}<{\alpha}})}$ then $t\forces{\Cohen{\omega}}{\cardof{A_\alpha\cap \utilde{B}_\beta}=\aleph_0}$ \end{xitemize} in place of \xitemof{f-2}. \mbox{} \usebox{\qedbox} \end{proof} \begin{corollary} \label{c-almost=decided-minimal-x} For any cardinal $\kappa\geq\mathfrak{c}$ in the ground model $V$ there is a cardinal preserving generic extension $W$ of $V$ such that, in $W$, $\kappa<\mathfrak{c}$ and there is a $\kappa$-almost decided mad family ${\mathcal F}$ of size $\kappa$ (furthermore ${\mathcal F}$ is $\lambda$-minimal for all $\omega\leq\lambda\leq\kappa$). \end{corollary} \begin{proof} First extend $V$ to a model $V'$ of $\kappa=\mathfrak{c}$ and ${\sf MA}(\sigma\mbox{-centered})$. In $V'$, let ${\mathcal F}$ be as in \Thmof{c-almost=decided-x}. Then ${\mathcal F}$ is as desired in $V^\Cohen{\mu}$ for any $\mu>\kappa$. The claim in the parentheses follows from \Lemmaof{almost=decided-minimal},\,\assertof{3} and \xitemof{f-2}$'$. \mbox{} \usebox{\qedbox} \end{proof} \end{document}
T-Test Formula

The t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis. It can be used to determine if two sets of data are significantly different from each other, and is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. The t-test uses the means and standard deviations of two samples to make a comparison. The formula for the t-test is given below:

\(\large t=\frac{\overline{x}_{1}-\overline{x}_{2}}{\sqrt{\frac{S_{1}^{2}}{n_{1}}+\frac{S_{2}^{2}}{n_{2}}}}\)

$\overline{x}_{1}$ = Mean of first set of values
$\overline{x}_{2}$ = Mean of second set of values
$S_{1}$ = Standard deviation of first set of values
$S_{2}$ = Standard deviation of second set of values
$n_{1}$ = Total number of values in first set
$n_{2}$ = Total number of values in second set

The formula for standard deviation is given by:

\[\large S=\sqrt{\frac{\sum\left(x-\overline{x}\right)^{2}}{n-1}}\]

x = Values given
$\overline{x}$ = Mean
n = Total number of values

T Test Solved Examples

Question 1: Find the t-test value for the following two sets of values: 7, 2, 9, 8 and 1, 2, 3, 4?
Solution:

Formula for mean: $\overline{x}=\frac{\sum x}{n}$

Formula for standard deviation: $S=\sqrt{\frac{\sum\left(x-\overline{x}\right)^{2}}{n-1}}$

Number of terms in first set: $n_{1}$ = 4

Mean for first set of data: $\overline{x}_{1}$ = 6.5

Construct the following table for standard deviation:

$x_{1}$ | $x_{1}-\overline{x}_{1}$ | $\left(x_{1}-\overline{x}_{1}\right)^{2}$
7 | 0.5 | 0.25
2 | -4.5 | 20.25
9 | 2.5 | 6.25
8 | 1.5 | 2.25

$\sum \left(x_{1}-\overline{x}_{1}\right)^{2}=29$

Standard deviation for the first set of data: $S_{1}$ = 3.11

Number of terms in second set: $n_{2}$ = 4

Mean for second set of data: $\overline{x}_{2}=2.5$

$x_{2}$ | $x_{2}-\overline{x}_{2}$ | $\left(x_{2}-\overline{x}_{2}\right)^{2}$
1 | -1.5 | 2.25
2 | -0.5 | 0.25
3 | 0.5 | 0.25
4 | 1.5 | 2.25

$\sum \left(x_{2}-\overline{x}_{2}\right)^{2}=5$

Standard deviation for the second set of data: $S_{2}$ = 1.29

Formula for t-test value:

$\large t=\frac{6.5-2.5}{\sqrt{\frac{9.667}{4}+\frac{1.667}{4}}}$

t = 2.3764 = 2.38 (approx)
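The computation above can be reproduced with a few lines of Python. This is an illustrative sketch (the function name `t_statistic` is ours); it uses the sample standard deviation with $n-1$ in the denominator, exactly as in the formula above.

```python
import math

def t_statistic(xs, ys):
    """Two-sample t value with sample variances (n - 1 denominator),
    matching t = (m1 - m2) / sqrt(S1^2/n1 + S2^2/n2)."""
    n1, n2 = len(xs), len(ys)
    m1, m2 = sum(xs) / n1, sum(ys) / n2
    s1_sq = sum((x - m1) ** 2 for x in xs) / (n1 - 1)
    s2_sq = sum((y - m2) ** 2 for y in ys) / (n2 - 1)
    return (m1 - m2) / math.sqrt(s1_sq / n1 + s2_sq / n2)

t = t_statistic([7, 2, 9, 8], [1, 2, 3, 4])
print(round(t, 2))  # 2.38
```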
\begin{document} \title{Observation of Robust Quantum Resonance Peaks in an Atom Optics Kicked Rotor with Amplitude Noise} \author{Mark Sadgrove} \author{Andrew Hilliard} \author{Terry Mullins} \author{Scott Parkins} \author{Rainer Leonhardt} \affiliation{Department of Physics, The University of Auckland, Private Bag 92019, Auckland, New Zealand} \begin{abstract} The effect of pulse train noise on the quantum resonance peaks of the Atom Optics Kicked Rotor is investigated experimentally. Quantum resonance peaks in the late time mean energy of the atoms are found to be surprisingly robust against all levels of noise applied to the kicking amplitude, whilst even small levels of noise on the kicking period lead to their destruction. The robustness to amplitude noise of the resonance peak and of the fall--off in mean energy to either side of this peak is explained in terms of the occurrence of stable, $\epsilon$--classical dynamics [S. Wimberger, I. Guarneri, and S. Fishman, \textit{Nonlinearity} \textbf{16}, 1381 (2003)] around each quantum resonance. \end{abstract} \maketitle \section{Introduction} The sensitivity of coherent quantum phenomena to the introduction of extraneous degrees of freedom is well documented \cite{Bundle1}. In particular, the coupling of a quantum system to its environment or, equivalently, subjection of the system to measurement is known to result in decoherence, that is, a loss of quantum interference phenomena. The experimental study of decoherence ideally requires a system whose coupling to the environment may be completely controlled. The discipline of Atom Optics allows the realisation of this requirement in the form of atoms interacting with a far detuned optical field. The Atom Optics Kicked Rotor (AOKR), first implemented experimentally by the Raizen Group of Austin, Texas \cite{Moore1995,Raizen1996}, is a particular example of some interest as it is a quantum system which is chaotic in the classical limit. 
The AOKR is realised by subjecting cold atoms to short, periodic pulses of an optical standing wave detuned from atomic resonance. The atoms typically experience curtailed energy growth (dynamical localisation) \cite{Fishman1982} compared with the classical case, but may also experience enhanced growth for certain pulsing periods, an effect known as \emph{quantum resonance} \cite{Izrailev1979}. Previous AOKR experiments have shown that spontaneous emission events and noise applied to the amplitude of the kicking pulse train result in the destruction of quantum dynamical localisation \cite{NelsonPRL,Raizen1998a}. It might then be expected that the other well known signature of quantum dynamics in the AOKR, quantum resonance, should exhibit great sensitivity to spontaneous emission or noisy pulse trains. However, recent experiments by d'Arcy \emph{et al.} have shown that detection of quantum resonance behaviour is actually enhanced in the presence of spontaneous emission \cite{dArcy2001, dArcy2001b, dArcy2003pp}, in stark contrast to the accepted wisdom on the effects of spontaneous emission noise. Recent numerical work has also focussed on the susceptibility of quantum resonance behaviour to applied noise \cite{Brouard2003}. Here we present further experimental evidence of the robustness of the quantum resonance peak to certain types of noise. In this case, noise is added to the kicked rotor system by introducing random fluctuations in the amplitude or period of the optical pulses used to kick the atoms (collectively termed \emph{pulse train noise}). We find that even in the presence of maximal amplitude noise, the structure near to quantum resonance persists (including the low energy levels to either side of the peak). This resistance to amplitude fluctuations runs counter to the expectation that quantum phenomena are sensitive to noise. In contrast, a small amount of noise added to the period of the pulses is enough to completely wash out the resonance structure. 
The robustness of the near resonant behaviour to amplitude noise is reminiscent of recent observations of quantum stability in the quantum kicked accelerator by Schlunk \emph{et al.} \cite{Schlunk2003a,Schlunk2003b}. The remainder of this paper is arranged as follows: Section \ref{sec:AOKR} provides background on the formal AOKR system with amplitude and period noise. Section \ref{sec:Noise} reviews the study of quantum resonances in the kicked rotor. Our experimental procedure and results are found in Sections \ref{sec:ExpSet} and \ref{sec:ExpRes} respectively, and the results are explained in Section \ref{sec:EpsClass} in terms of the recently developed $\epsilon$--classical model for quantum resonance peaks. Section \ref{sec:Conclusion} offers concluding remarks. \section{Atom optics kicked rotor with amplitude and period noise} \label{sec:AOKR} The Hamiltonian for an AOKR kicked with period $T$ with fluctuations in the amplitude and/or pulse timing is given in scaled units by \begin{equation} \label{ham1} \hat{H}=\frac{\hat{\rho}^2}{2}-\kappa \cos(\hat{\phi})\sum^{N}_{n=0}R_{A,n}f(\tau-n R_{P,n}), \end{equation} where $\hat{\phi}$ and $\hat{\rho}$ are the quantum operators for the (scaled) atomic position and momentum, respectively, $\kappa$ is the kicking strength, $f$ is the pulse shape function, $\tau = t/T$ is the scaled time and the terms $R_{A,n}$ and $R_{P,n}$ introduce random fluctuations in the amplitude and kicking period respectively. We also note the scaled commutator relationship $[\hat{\phi}, \hat{\rho}] = i\mathchar'26\mkern-9muk$, where $\mathchar'26\mkern-9muk=8\omega_rT$ is a scaled Planck's constant and $\omega_r$ is the frequency associated with the energy change after a single photon recoil for Caesium. The scaled momentum $\hat{\rho}$ is related to the atomic momentum $\hat{p}$ by the equation $\hat{\rho}/\mathchar'26\mkern-9muk = \hat{p}/(2 \hbar k_L)$, where $k_L$ is the wave number of the laser light. In this paper, as in Refs. 
\cite{Auckland2003, dArcy2001}, momentum is presented in the ``experimental units'' of $\hat{p}/(2 \hbar k_L)$. Assuming, for simplicity, a rectangular pulse shape, the stochasticity parameter $\kappa$ is related to experimental parameters by the equation \begin{equation} \kappa = \Omega_{\rm eff}\omega_rT\tau_p, \label{Eq:Omega_eff} \end{equation} where $\Omega_{\rm eff}$ is the potential strength created by the laser field, and $\tau_p$ is the duration of the kicking pulse. $\Omega_{\rm eff}$ is given by \begin{equation} \Omega_{{\rm eff}}=\frac{\Omega^2}{\Delta}, \label{Eq:Omega_eff2} \end{equation} where $\Omega$ is the resonant, single beam Rabi frequency of the atoms and $\Delta$ (which is $\approx 2\times10^9$ rad ${\rm s}^{-1}$ for these experiments) takes into account the relative transition strengths between, and laser detunings from, the different hyperfine states of caesium, as discussed in our previous papers (see, for example, \cite{NelsonPRL,Auckland2003}). Noise is introduced by the terms $R_{i,n} = 1+\delta_{i,n}$, where $\delta_{i,n}$ is a random variable with probability distribution \begin{equation} P(\delta_{i,n}) = \left\{ \begin{array}{ll} 1/{\rm \mathcal{L}_i}, & |\delta_{i,n}| < {\mathcal{L}_i/2}\\ 0, & \rm{else} \end{array} \right. \end{equation} with $i=A$ denoting amplitude noise, and $i=P$ denoting period noise. The noise level is denoted $\mathcal{L}_i$. For amplitude noise, we have $0 \leq {\mathcal{L}_{A}} \leq 2$, where a noise level of $2$ corresponds to the case where the kicking strength can vary between $0$ and $2\overline{\kappa}$ for each pulse, with $\overline{\kappa}$ the mean value of the kicking strength in the experiment. 
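As a concrete illustration of the amplitude-noise prescription just defined, the following sketch (hypothetical code, not the experimental pulse-generation software; the function name and parameter values are our own) draws per-kick strengths $\kappa_n = \overline{\kappa}(1+\delta_{A,n})$ with $\delta_{A,n}$ uniform on $[-\mathcal{L}_A/2,\,\mathcal{L}_A/2]$:

```python
import numpy as np

def noisy_kick_strengths(kappa_bar, level, n_kicks, rng):
    """Per-kick strengths kappa_n = kappa_bar * (1 + delta_n), where each
    delta_n is drawn uniformly from [-level/2, level/2], i.e. the uniform
    amplitude-noise distribution P(delta) above."""
    delta = rng.uniform(-level / 2.0, level / 2.0, size=n_kicks)
    return kappa_bar * (1.0 + delta)

rng = np.random.default_rng(0)
# Maximal amplitude noise, L_A = 2: strengths range over [0, 2*kappa_bar].
kicks = noisy_kick_strengths(kappa_bar=5.0, level=2.0, n_kicks=1000, rng=rng)
```

At $\mathcal{L}_A = 2$ the individual strengths fluctuate over the full interval $[0, 2\overline{\kappa}]$ while their mean stays at $\overline{\kappa}$.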
For period noise, $0 \leq {\mathcal{L}_{P}} < {\mathcal{L}_{P, {\rm max}}}$ where ${\mathcal{L}_{P, {\rm max}}}$ is $1$ for the $\delta$-kicked rotor and $1-\alpha$ for the pulse kicked rotor in our experiments, with $\alpha$ the ratio of the pulse width to the pulse period (less than $1\%$ in our experiments). We note that our implementation of period noise differs from that used in \cite{RaizenPeriod} in that it shifts each pulse a random amount from its zero--noise position rather than randomising the timing between consecutive pulses. This means that the effect of the period noise fluctuations is not cumulative (as it is in the aforementioned reference), allowing a more instructive comparison of the effects of period noise with those of amplitude noise. \section{The quantum resonances of the AOKR} \label{sec:Noise} In a fully chaotic driven system no stable periodic orbits exist in phase space and thus no frequency of the driving force gives rise to resonant behaviour. Although the classical kicked rotor retains kick--to--kick correlations for any value of the stochasticity parameter, for sufficiently high $\kappa$ the phase space is essentially chaotic, and the dynamics are independent of the kicking period of the system. However, this is not true of the quantum system, even for large $\kappa$, as fundamental periodicities exist in the quantum dynamics. This may be seen by inspecting the one kick evolution operator for the quantum $\delta$--kicked rotor, which has the form \begin{equation} \hat{U} = \exp({\rm i}\kappa \cos\hat{\phi}/\mathchar'26\mkern-9muk)\exp(-{\rm i}\hat{\rho}^2/(2\mathchar'26\mkern-9muk)), \label{eq:evolrotor} \end{equation} For the analysis of quantum resonance, the second exponential term (or \emph{free evolution} term) of Eq. \ref{eq:evolrotor} is of primary importance. 
We see that if $\mathchar'26\mkern-9muk$ is an even multiple of $2\pi$, and the state undergoing evolution is a momentum eigenstate (or a quantum superposition of such eigenstates) $|n\rangle$ such that $\hat{\rho}|n\rangle = n\mathchar'26\mkern-9muk|n\rangle$, this term becomes unity. This is the quantum resonance condition, and it may be shown that atoms initially in momentum eigenstates undergo ballistic motion \cite{Izrailev1979} at resonant values of $\mathchar'26\mkern-9muk$. For $\mathchar'26\mkern-9muk=2\pi(2m-1)$, $m$ a positive integer, initial momentum eigenstates with even and odd $n$ acquire quantum phases after free evolution of $+1$ and $-1$ respectively. It is found that the additional possibility of $-1$ for the phase of odd momentum components of the wavefunction leads to oscillations in the mean energy of the kicked atomic ensemble \cite{Oskay2000, Deng1999}. Thus, $\mathchar'26\mkern-9muk=2\pi$ is termed a \textit{quantum antiresonance}. We note that whilst quantum resonances are predicted to exist for all rational multiples of $\mathchar'26\mkern-9muk=2\pi$, resonance peaks have only been observed in experiments and simulations at integral multiples. In this paper, we focus on the behaviour at $\mathchar'26\mkern-9muk=2\pi$ and $\mathchar'26\mkern-9muk=4\pi$ and will refer to the energy peaks at these values of the scaled Planck's constant as the first and second quantum resonances respectively. For a cloud of Caesium atoms at $5\mu$K, as used in our experiments, the atomic momentum distribution has a standard deviation of $\sim 5 \hbar k_L$, so only a small momentum subclass of the atoms may be considered to be in an initial momentum eigenstate. In general, each atom has a momentum of the form $\rho = n + \beta$ (in scaled units), where $n$ is an integer and $\beta \in [0,1)$ is known as a \emph{quasimomentum}. 
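These phase factors can be verified directly. A small numerical check in the dimensionless units above (a sketch, not experiment code; names are ours):

```python
import numpy as np

def free_evolution_phase(n, kbar):
    """Phase exp(-i rho^2/(2 kbar)) acquired by a momentum eigenstate
    rho = n * kbar under the free-evolution part of the kick operator."""
    return np.exp(-1j * n**2 * kbar / 2)

n = np.arange(-5, 6)
# Second resonance, kbar = 4*pi: every eigenstate returns phase +1.
resonant = free_evolution_phase(n, 4 * np.pi)
# Antiresonance, kbar = 2*pi: the phase is (-1)^n, alternating with the
# parity of n, which produces the energy oscillations described above.
antires = free_evolution_phase(n, 2 * np.pi)
print(np.allclose(resonant, 1.0), np.allclose(antires, (-1.0) ** n))
```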
The appropriate evolution operator when the quasimomentum of the atoms is included is \begin{equation} \hat{U}_\beta = \exp({\rm i}\kappa \cos\hat{\phi}/\mathchar'26\mkern-9muk)\exp(-{\rm i}(\hat{n}+\beta)^2/(2\mathchar'26\mkern-9muk)). \label{eq:evolbeta} \end{equation} For some values of quasimomenta, this one--kick evolution operator still exhibits the periodicity necessary for resonance \cite{Wimberger2003}. Specifically, ballistic energy growth occurs for $\mathchar'26\mkern-9muk=2\pi$, $\beta=0.5$ and for $\mathchar'26\mkern-9muk=4\pi$, $\beta=0$ or $0.5$. The quantum resonances of the AOKR were first studied experimentally by the group of Mark Raizen at Austin, Texas \cite{Moore1995, Raizen1996,Oskay2000}. In particular, ref. \cite{Moore1995} presented the results of experiments in which the momentum distribution of the atoms was recorded for various kicking periods. The momentum distributions corresponding to quantum resonance were found to be \textit{narrower} than those off resonance. The relatively small population of atoms undergoing ballistic energy growth at resonance was not detected experimentally and no difference was found between momentum distributions for odd and even multiples of $\mathchar'26\mkern-9muk=2\pi$. In ref. \cite{Oskay2000} a further study by the group detected the expected ballistic peaks at $\mathchar'26\mkern-9muk=2\pi$ and $\mathchar'26\mkern-9muk=4\pi$. Additionally, small oscillations in the widths of the atomic momentum distributions as a function of kick number were seen only at $\mathchar'26\mkern-9muk=2\pi$ -- a result of the anti-resonance behaviour described earlier. More recent experiments by d'Arcy \emph{et al.} \cite{dArcy2001,dArcy2001b, dArcy2003pp} have focussed on the effect of spontaneous emission on the quantum resonance peaks. They found experimentally that spontaneous emission makes these peaks more prominent -- a somewhat counter--intuitive result. 
Further theoretical investigations revealed that this phenomenon was due to the reshuffling of atomic quasimomenta caused by spontaneous emission, which allows more atoms to experience resonant behaviour at some time during their evolution. Additionally, reshuffling of quasimomenta results in fewer atoms gaining large momenta from multiple resonant kicks. Without spontaneous emission, resonant atoms soon travel outside the finite observation window of the experiment and thus do not contribute to the measured energy of the atomic ensemble. Our experiments measure the structure of the mean energy around the quantum resonance peak in a similar fashion to the experiments described above. The pulse period is scanned over the resonant value and the mean energy is extracted from the measured momentum distributions at each value of $T$. For the power and detuning of the kicking laser used in this experiment, there is a constant chance of spontaneous emission per pulse of $\sim 2.5\%$. As in \cite{dArcy2001}, this is found to increase the height and width of the resonance peaks and make them more amenable to investigation. Our numerical studies show that the non-zero spontaneous emission rate does not affect our study of the effects of amplitude and period noise on the quantum resonance peak. This is because the mechanisms by which pulse train noise and spontaneous emission noise influence the atomic dynamics are totally different: spontaneous emission events affect individual atoms by changing their quasimomenta, whereas amplitude and period noise change the kick--to--kick correlations over the entire atomic ensemble and do not change atomic quasimomenta. Thus the advantages of a relatively high spontaneous emission rate may be utilised without biasing the study of the effects of pulse train noise on the quantum resonance peaks.
\section{Experimental setup} \label{sec:ExpSet} Our experiments utilise a $5$ $\mu$K cloud of cold Caesium atoms, provided by a standard six--beam magneto--optical trap (MOT) \cite{Monroe1990}. The atoms interact with a pulsed, far-detuned optical standing wave which is created by retroreflecting the light from a $150$ mW (slave) diode laser which is injection locked to a lower-power (master) diode laser. The output of the master laser may be tuned over a range of about $4$ GHz relative to the $6S_{1/2}(F=4) \rightarrow 6P_{3/2}(F'=5)$ transition of the Caesium $\rm D_2$ line. The detuning of the laser from this transition is denoted $\delta$. The frequency of the kicking laser is monitored by observing the spectrum of its beat signal with the trapping laser. The standing wave has a horizontal orientation rather than the vertical orientation used in the quantum accelerator experiments of references \cite{Schlunk2003a, Schlunk2003b}. It is pulsed by optically switching the laser light using an acousto--optic modulator (AOM). The amplitude of the AOM's driving signal is controlled by a programmable pulse generator (PPG) to achieve the desired pulse train shape. For amplitude noise experiments, the AOM's response to the amplitude of its driving signal must first be calibrated, since the pulse heights need to be uniformly distributed. The PPG consists of a random access memory (RAM) chip which can store up to $2^{16}$ $12$--bit words representing samples of the pulse train. On receipt of a gate pulse, the samples in the RAM are read into a digital--to--analogue converter at $25$ MHz, corresponding to a $40$ ns temporal resolution for the pulse trains. A given realisation of a noisy pulse train (for amplitude or period noise) is created by using computer--generated pseudo--random numbers to give fluctuations about the mean amplitude or mean pulse position in a standard pulse train. The noisy pulse train is then uploaded to the PPG.
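The pulse-train construction can be sketched as follows. This is an illustrative reconstruction, not the actual PPG upload code: the function name, the half-scale zero-noise pulse height, and the edge-clipping behaviour are our assumptions.

```python
import numpy as np

def build_pulse_train(n_kicks, period_samples, width_samples,
                      amp_noise=0.0, per_noise=0.0, rng=None):
    """Fill a buffer of 12-bit samples (40 ns per sample at the 25 MHz
    output rate) with rectangular pulses.  Amplitude noise scales each
    pulse height; period noise shifts each pulse from its zero-noise
    position, so the timing fluctuations are not cumulative."""
    rng = rng or np.random.default_rng()
    buf = np.zeros(n_kicks * period_samples, dtype=np.uint16)
    full_scale = 2**12 - 1
    for i in range(n_kicks):
        # assumed half-scale mean height; delta uniform on [-L/2, L/2]
        height = int(full_scale * 0.5 * (1 + rng.uniform(-amp_noise / 2, amp_noise / 2)))
        shift = int(round(period_samples * rng.uniform(-per_noise / 2, per_noise / 2)))
        start = i * period_samples + shift
        buf[max(start, 0):start + width_samples] = height
    return buf
```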
In a typical experimental run, the cooled atoms were released from the MOT and subjected to $20$ standing wave pulses, then allowed to expand for an additional free drift time in order to resolve the atomic momenta. The momentum resolution of our experiments for a $12$ ms expansion time is $0.29$ $2$--photon recoils. After free expansion, the atoms were subjected to optical molasses, effectively freezing them in place, and a fluorescence image of their spatial distribution was taken. The timing of the experiment was controlled by sequencing software running on the $\rm RTLinux^{\tiny \texttrademark}$ operating system kernel, giving worst--case timing errors of $30$ $\mu$s \cite{RTLinuxFAQ}, or $0.25\%$ of the atomic time of flight. Some experimental imperfections have a systematic effect on our data and need to be taken into account in simulations in order for meaningful comparisons to be made. Firstly, when the standing wave is on, individual atoms experience differing potentials depending on their radial position in the beam, due to the Gaussian mode shape of the beam. This can affect the experimental resolution of the multi--peaked `diffusion resonance' structure in the mean energy which occurs between primary quantum resonances, as this structure is strongly dependent on the exact potential strength \cite{Auckland2003,Daley2001}. However, it is not so critical to the observation of quantum resonance peaks, due to the very robustness to amplitude variations that is discussed in this paper. Nonetheless, this spread in kicking strengths is taken into account in our simulations. Secondly, in order to achieve a spontaneous emission rate sufficiently high to make the quantum resonance peaks prominent and amenable to study, a detuning from resonance of about $500$ MHz was used in our experiments.
This value of the detuning is large enough to ensure the condition $\delta \gg \overline{\Omega}$ (where $\overline{\Omega}$ is the average atomic Rabi frequency taken over the different hyperfine transitions) which is assumed in the derivation of the AOKR Hamiltonian \cite{Graham1992}. However, the difference in detuning between the $F=4$ ground state and each of the hyperfine excited states $F' = 3,4,5$, as well as the difference in coupling strengths between magnetic substates, leads again to a spread in kicking strengths (as detailed in reference \cite{Auckland2003}). Once again, this effect is allowed for in our simulations. We also note that the application of amplitude and period noise to our pulse trains inherently creates random scatter in our data since each different noise realisation gives rise to a different mean energy. Thus, meaningful results may only be obtained by averaging the energy from a number of separate experiments with different noise realisations. For experiments where the noise is solely a result of spontaneous emission events, the statistics are already excellent, since the mean energy is calculated for a large number of individual atoms. This is not true for pulse train noise experiments, in which the noise affects correlations over the entire atomic ensemble. Each point on our curves represents an average of $12$ separate experiments (except in the zero noise case, where $3$ repetitions were found to be sufficient). This number of repetitions reduces the error to a size such that any quantum resonance structure may be confidently identified. \section{Experimental and simulation results} \label{sec:ExpRes} We now present experimental measurements of the quantum resonance peaks at $\mathchar'26\mkern-9muk=2\pi$ and $4\pi$, in the presence of noise applied to the amplitude or period of the kicking pulse train (Figs. \ref{fig:qramp} and \ref{fig:qrper}).
Simulations are performed using the Monte Carlo wavefunction method as previously discussed in refs. \cite{Daley2001,Auckland2003}. For comparison with simulations, the value of $T$ corresponding to quantum resonance (that is $\mathchar'26\mkern-9muk=2\pi$ or $4\pi$) is taken to be the experimental position of the resonance peak. This gives values of $T_{{\rm res},1} = 61 \mu$s and $T_{{\rm res},2} = 121.5 \mu$s, which are within $1\%$ of the theoretical values of $2\pi/8\omega_r = 60.5 \mu$s and $4\pi/8\omega_r=121 \mu$s respectively. The experimental resolution is limited by the spacing between consecutive values of $T$ (i.e. $0.5 \mu$s). However, the exact position of the quantum resonances is not important to the results presented here which are concerned with the overall shape of the resonance peaks. In this section, we measure the mean energy of the atomic ensemble, which is given by $E = \langle\hat{p}^2\rangle/[2(2\hbar k_L)^2]$. This quantity is referred to as the energy in $2$--photon--recoil units. The height of the quantum resonance for a given number of kicks $n$ was found in reference \cite{Wimberger2003} to be $E_{\rm res} = (1/4)(\kappa/\mathchar'26\mkern-9muk)^2n$. In the presence of amplitude noise, additional diffusive energy is gained which, for $\mathcal{L}_{A}=2$, is of size $(\kappa^2/12\mathchar'26\mkern-9muk^2)n$ \cite{Steck2000}. Thus, for maximal amplitude noise, the height of the quantum resonance energy peak is predicted to be \begin{equation} E_{\rm res} = \frac{1}{3} \left(\frac{\kappa}{\mathchar'26\mkern-9muk}\right)^2 n. \label{eq:resheightamp} \end{equation} We use this equation to determine the value of $\kappa$ to be used in our amplitude noise simulations.
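These predicted peak heights are easy to tabulate. A sketch (the function name is ours), writing $k_{\rm eff}=\kappa/\mathchar'26\mkern-9muk$:

```python
def resonance_energy(k_eff, n_kicks, max_amp_noise=False):
    """Predicted quantum-resonance peak energy after n kicks, in
    2-photon-recoil units, with k_eff = kappa/kbar.
    Without noise:           E = (1/4) k_eff^2 n.
    With maximal amplitude noise (L_A = 2): add the diffusive term
    k_eff^2 n / 12, giving E = (1/3) k_eff^2 n."""
    energy = 0.25 * k_eff**2 * n_kicks
    if max_amp_noise:
        energy += k_eff**2 * n_kicks / 12.0
    return energy
```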
Although this method systematically underestimates the true value of $\kappa$ (since small populations of resonant atoms with high momenta cannot be detected experimentally) it avoids the many systematic errors that arise when $\kappa$ is estimated from power measurements of the kicking beam outside the MOT chamber. The values of $\kappa$ gained from this equation are consistent with those estimated from experimental parameters. If period noise is being applied instead, simulations show that the energy around the second quantum resonance saturates at the quasilinear value for the highest noise level, which is given by multiplying the quasilinear energy growth $\kappa^2/4\mathchar'26\mkern-9muk^2$ \cite{Rechester1980} by the number of kicks to give \begin{equation} E_{\rm q.l.} = \frac{1}{4}\left(\frac{\kappa}{\mathchar'26\mkern-9muk}\right)^2 n. \label{eq:resheightper} \end{equation} Thus, having measured the height of the resonance for an amplitude noise level of $2$, we can solve Eq. (\ref{eq:resheightamp}) for $\kappa/\mathchar'26\mkern-9muk$ which gives $3.77 \pm 0.04$. Similarly, having calculated the experimental quasi--linear energy of $66 \pm 0.7$ from the line fitted in Fig. \ref{fig:qrper}(b), we can solve Eq. (\ref{eq:resheightper}) for $\kappa/\mathchar'26\mkern-9muk$ which gives $3.63 \pm 0.03$. Given the different systematic errors which arise for amplitude and period noise calculations of $\kappa$ and the possibility of laser power drift between experimental runs, we do not expect perfect agreement between the two values. Using the values of $\kappa$ gained from Eqs. (\ref{eq:resheightamp}) and (\ref{eq:resheightper}) in our simulations we find good quantitative agreement between experimental and simulation results. We note that period noise experiments allow $\kappa$ to be determined more accurately because the quantum resonance behaviour is destroyed and therefore the wings of the momentum distributions are not populated. 
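Inverting these relations gives the calibration just described. A sketch, assuming the $20$-kick pulse trains used in our experiments (function names ours):

```python
import math

def k_eff_from_amp_noise_peak(E_res, n_kicks):
    """Invert E = (1/3) k_eff^2 n, the peak height at maximal amplitude noise."""
    return math.sqrt(3.0 * E_res / n_kicks)

def k_eff_from_quasilinear(E_ql, n_kicks):
    """Invert E = (1/4) k_eff^2 n, the quasilinear saturation energy."""
    return math.sqrt(4.0 * E_ql / n_kicks)

# The measured quasilinear energy of 66 after 20 kicks reproduces the
# quoted kappa/kbar of about 3.63:
print(round(k_eff_from_quasilinear(66.0, 20), 2))  # prints 3.63
```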
This leads to more accurate values for the experimentally measured final energies. Once $\kappa$ has been calculated from the measured energies, the spontaneous emission rate per pulse may be deduced by calculating the Rabi frequency $\Omega$ from Eqs. (\ref{Eq:Omega_eff}) and (\ref{Eq:Omega_eff2}) and using the standard expression to find the probability of spontaneous emission \cite{Metcalf1999}. Measured and simulated energies are plotted against $\mathchar'26\mkern-9muk$, which may also be thought of as the scaled kicking period of the kicked rotor system (as in \cite{Wimberger2003}), where $\mathchar'26\mkern-9muk=2\pi$ corresponds to the kicking period at which the first quantum resonance peak occurs. \begin{figure}\label{fig:qramp} \end{figure} \subsection{Amplitude noise} In our experiments, we measured energies at pulsing periods close to quantum resonance for the first and second quantum resonances, which occur at $\mathchar'26\mkern-9muk=2\pi$ and $4\pi$ respectively. Amplitude noise was applied at levels of ${\mathcal{L}_A} = 0.5, 1, 1.5$ and $2$. Fig. \ref{fig:qramp} shows the results obtained. We see that the resonance peak increases in height and that the reduced energy level to either side of resonance rises with increasing noise level. However, somewhat surprisingly, the resonance peak still remains prominent compared to the surrounding energies, even for the highest possible level of amplitude noise, although it becomes less well defined. We note that there is essentially no difference between the behaviour seen at the first and second quantum resonances apart from the fact that the energies are systematically lower for the second quantum resonance in experiments. This is due, in part, to the fact that the atomic cloud expands to a larger size during kicking for the second quantum resonance as compared with the first. This leads to a lower average kicking strength being experienced by the atoms (a feature not included in our simulations).
Additionally, the total expansion time for the atoms, including kicking, is constant, which means that for the sweeps over the second quantum resonance the atoms have less free expansion time after kicking than at the first quantum resonance. This also leads to a systematic underestimation of the energy. That the dynamics at quantum resonance itself are robust against amplitude noise is not surprising. The resonance arises because the time between pulses matches the condition for unity quantum phase accumulation after free evolution. The introduction of amplitude noise does not affect this fundamental resonance criterion. Seen from the point of view of atom optics, the resonant period is the Talbot time (corresponding to $\mathchar'26\mkern-9muk=4\pi$) \cite{dArcy2001, dArcy2001b}. Whilst the amplitude of the pulses applied affects the number of atoms coupled into higher momentum classes, it does not affect the period--dependent Talbot effect which gives rise to the characteristic energy growth seen at resonance. The most surprising feature in these experiments is the survival of low energy levels to either side of the resonance. Persistence of quantum dynamical localisation is the most obvious explanation for the sharp decrease in energy to either side of quantum resonance. However, the experiments of Steck \emph{et al.} \cite{Steck2000} (which were performed far from quantum resonance at $\mathchar'26\mkern-9muk=2.08$) demonstrated that dynamical localisation is destroyed by high levels (corresponding to $\mathcal{L}_A = 2$) of amplitude noise. In Section \ref{sec:EpsClass}, we will employ the recently developed $\epsilon$--classical description of the quantum resonance peak to explain this persistence of localisation. We see that the experimentally measured resonance peaks are broader than those predicted by simulations.
The broadening may result from a higher--than--expected spontaneous emission rate, resulting from a small amount of leaked molasses light which is inevitably present during the kicking cycle. Additionally, phase jitter on the optical standing wave can be caused by frequency instability of the kicking laser and mechanical vibrations of the retroreflecting mirror. Such phase noise is equivalent to a constant level of period noise and would also lead to broadening of the resonance. It is hard to quantify the amount of phase noise present, although the clear visibility of the resonances when no extra period noise is applied (see dotted line, Fig. \ref{fig:qrper}) suggests that it is small in amplitude. However, these uncertainties do not affect the observation of the qualitative shape of the resonance structure under the application of amplitude noise and, in particular, the puzzling robustness of the low energy levels to either side of resonance. \begin{figure}\label{fig:qrper} \end{figure} \subsection{Period noise} For comparison, we also present results showing the effect of period noise on the first two primary quantum resonance peaks. It may be seen that even small amounts of this type of noise have a large effect on the near resonant dynamics. Fig.~\ref{fig:qrper} shows the results for noise levels of $0.01$, $0.02$, $0.05$ and $0.1$. The first primary quantum resonance peak is found to be very sensitive to small deviations from strict periodicity of the pulse train. Noise levels of $0.05$ and $0.1$ completely wash out the peak, regaining the flat energy vs. kicking--period curve that we expect in the case of zero kick--to--kick correlations. The effect of period noise on the second primary quantum resonance is similar, although it is even more sensitive, with a $0.02$ noise level completely destroying the peak. At higher noise levels, the mean energy tends towards the zero--correlation energy level.
The greater effect of period noise on the second quantum resonance is due to the greater absolute variation possible in the free evolution period between pulses, since the kicking period in this case is twice that of the first quantum resonance. This has been verified by our group in separate experiments where the absolute variation of the kicking period was held constant \cite{MarkThesis}. Such noise was found to have a more uniform effect on structures in the mean energy. Sensitivity of the dynamics near quantum resonance to noise applied to the kicking period is not surprising, given the precise dependence of the resonance phenomenon on the pulse timing. The quantum phase accumulated between kicks is randomised and the kick--to--kick correlations destroyed. However, the stark contrast between the sensitivity of the near--resonant dips in energy to amplitude and period noise requires further elucidation, which we now provide by looking at the correlations which lead to quantum resonance at early times and the $\epsilon$--classical dynamics of the kicked rotor near quantum resonance. \section{Reappearance of stable dynamics close to quantum resonance} \label{sec:EpsClass} \begin{figure}\label{fig:early&epsclass} \end{figure} We now seek to explain the surprising resilience of the structure near quantum resonance to the application of amplitude noise. Since the effect of amplitude noise is the same for the resonance peaks at $\mathchar'26\mkern-9muk = 2\pi$ and $4\pi$, we consider only the resonance peak about $\mathchar'26\mkern-9muk=2\pi$, although the arguments easily generalise to other quantum resonance peaks occurring at multiples of this value. We also limit our attention to the case where there is no spontaneous emission, as this form of decoherence, at the levels present in these experiments, merely broadens the resonance peak and does not affect its qualitative behaviour in the presence of amplitude noise.
The stability of the quantum resonance structure in the late time energy (as measured in our experiments) may be explained by appealing to the $\epsilon$--classical mechanics formulated by Wimberger \emph{et al.} \cite{Wimberger2003, Wimberger2003bpp, dArcy2003pp}. In this description of the kicked rotor dynamics, a fictitious Planck's constant is introduced which is referenced to zero exactly at quantum resonance. Thus, even though the quantum resonance peak is a purely quantum mechanical effect, its behaviour may be well described by a (fictitious) classical map near to resonance. Before considering this picture, however, we will look at the resonances found in the early time classical and quantum energy growth rates of the kicked rotor which provide similar insight over a wider range of values for $\mathchar'26\mkern-9muk$. The classical rates were first derived by Rechester and White \cite{Rechester1980} and their work was extended to the quantum kicked rotor by Shepelyansky \cite{Shepelyansky1987}. These expressions for the early time classical and quantum energy growth rate, $D$, have the advantage that they hold for any pulsing period and not just for those within an $\epsilon$ neighbourhood of the quantum resonance period. Fig. \ref{fig:early&epsclass}(a) plots the early--time energy growth rate $D$ for the classical and quantum dynamics against the effective Planck's constant $\mathchar'26\mkern-9muk$. For sufficiently large values of $\kappa/\mathchar'26\mkern-9muk$ the energy growth rate after 5 kicks obeys the approximate expression \cite{Shepelyansky1987} \begin{eqnarray} D & \approx & \frac{1}{2}\left(\frac{\kappa}{\mathchar'26\mkern-9muk}\right)^2\left(\frac{1}{2} - J_2(K) -J_1^2(K)\right. \nonumber \\ & & \left. 
\frac{}{} + J_2^2(K) + J_3^2(K) \right), \label{eq:earlyD} \end{eqnarray} where the $J_n$ are Bessel functions and $K=\kappa$ for the classical case and $K=\kappa_q=2\kappa\sin(\mathchar'26\mkern-9muk/2)/\mathchar'26\mkern-9muk$ for the quantum case. The energy growth rate is expressed in the same energy units used in reference \cite{Auckland2003}. This formula was generalised by Steck \emph{et al.} \cite{Steck2000} to the case where amplitude noise is present in the system, giving \begin{eqnarray} D & \approx & \frac{\kappa^2+\mathrm{Var}(\delta K)}{4\mathchar'26\mkern-9muk^2} + \frac{\kappa^2}{2\mathchar'26\mkern-9muk^2}\left(-\mathscr{J}_2(K)\right. \nonumber \\ & & \left. \frac{}{} - \mathscr{J}_1^2(K) +\mathscr{J}_2^2(K)+\mathscr{J}_3^2(K)\right), \label{eq:earlyD_amp} \end{eqnarray} where $K$ is defined as before, $\delta K$ is a random variable giving the fluctuation in $K$ at each kick, $\mathrm{Var}(\delta K)$ is the variance of the noise distribution $P(\delta K)$, and \begin{equation} \mathscr{J}_n(K) := \int_{-\infty}^{\infty}P(\delta K)J_n(K+\delta K){\rm{d}}(\delta K). \label{eq:Besmod} \end{equation} The new functions $\mathscr{J}_n$ are averages of the normal Bessel functions over the noise distribution. We note that references \cite{Steck2000} and \cite{Shepelyansky1987} deal with diffusion of the momentum $\rho$, whereas we present our results in terms of $\rho/\mathchar'26\mkern-9muk$. Hence, when comparing our results for energies or energy growth rates with the formulae in the aforementioned references, division by $\mathchar'26\mkern-9muk^2$ is necessary. Of particular interest is the behaviour near $\mathchar'26\mkern-9muk=0$.
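Eqs. (\ref{eq:earlyD_amp}) and (\ref{eq:Besmod}) can be evaluated numerically. A sketch assuming SciPy is available, reading $\delta K$ as a uniform fluctuation of the Bessel argument, $\delta K = K\delta$ with $|\delta| < \mathcal{L}_A/2$ (our reading of the definitions above):

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def J_avg(n, K, L):
    """Average of J_n(K + dK) over amplitude noise, with dK uniform on
    [-L|K|/2, L|K|/2], i.e. kick strength K -> K(1 + delta), |delta| < L/2."""
    half = L * abs(K) / 2
    if half == 0:
        return jv(n, K)
    val, _ = quad(lambda dK: jv(n, K + dK), -half, half)
    return val / (2 * half)

def growth_rate(kappa, kbar, L=0.0):
    """Early-time quantum energy growth rate in the form of Eq. (earlyD_amp),
    with K_q = 2 kappa sin(kbar/2)/kbar and Var(dK) = (L K_q)^2 / 12."""
    Kq = 2 * kappa * np.sin(kbar / 2) / kbar
    var = (L * Kq)**2 / 12
    J1, J2, J3 = (J_avg(m, Kq, L) for m in (1, 2, 3))
    return ((kappa**2 + var) / (4 * kbar**2)
            + (kappa**2 / (2 * kbar**2)) * (-J2 - J1**2 + J2**2 + J3**2))
```

At $\mathcal{L}_A = 0$ this reduces term by term to Eq. (\ref{eq:earlyD}), since $\mathscr{J}_n \rightarrow J_n$.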
We note that using Shepelyansky's formula in this regime can be problematic because in the fully scaled system, the width of the initial atomic momentum distribution scales with $\mathchar'26\mkern-9muk$ and may become small enough that Shepelyansky's assumption of a uniform initial momentum distribution is no longer valid \cite{Daley2002}. Assuming, however, that a broad initial momentum distribution may be maintained in the classical limit, we see that a peak exists at $\mathchar'26\mkern-9muk=0$ for both the classical and quantum dynamics and the classical and quantum curves match perfectly until $\mathchar'26\mkern-9muk \sim 0.5$. More importantly, a reduced energy region at $\mathchar'26\mkern-9muk \approx 0.5$ remains even for the highest level of amplitude noise, as seen in Fig. \ref{fig:early&epsclass}(a). At larger values of $\mathchar'26\mkern-9muk$, the oscillations in the classical growth rate are destroyed by noise. However, in the quantum case, the robust peak structure seen near $\mathchar'26\mkern-9muk=0$ repeats itself at multiples of $\mathchar'26\mkern-9muk = 2\pi$. The survival of the structure near $\mathchar'26\mkern-9muk=0$ is attributable to the near integrability of the dynamics (classical and quantum) for small values of $\mathchar'26\mkern-9muk$. We recall that in the scaling used for these experiments the ratio $\kappa/\mathchar'26\mkern-9muk$ is kept constant, where $\kappa$ is the classical stochasticity parameter of the system. Thus we have $\kappa \rightarrow 0$ as $\mathchar'26\mkern-9muk \rightarrow 0$. At small values of $\mathchar'26\mkern-9muk$ and thus $\kappa$, since the perturbation from an unkicked rotor is quite small, the system is near--integrable (i.e. the dynamics are stable) and the effect of fluctuations in the perturbation (amplitude noise) is far smaller than at higher $\kappa/\mathchar'26\mkern-9muk$, where the system is chaotic. Fig.
\ref{fig:early&epsclass}(a) shows that, in the quantum case, this stability reappears near quantum resonance, a fact that may be explained by inspection of Eqs. (\ref{eq:earlyD_amp}) and (\ref{eq:Besmod}). These equations show that the destruction of quantum correlations by amplitude noise occurs through the stochastic variation of the argument $\kappa_q$ of the Bessel functions. If $\kappa_q$ is small then so is the absolute variation of $\kappa_q$ inside the Bessel functions due to amplitude noise and, therefore, there is little damage to the quantum correlations themselves. Since $\kappa_q \rightarrow 0$ at quantum resonance, the same behaviour seen near $\mathchar'26\mkern-9muk=0$ reappears at integral multiples of $\mathchar'26\mkern-9muk=2\pi$. \begin{figure*} \caption{Phase space portraits for the $\epsilon$--classical standard map for $\mathchar'26\mkern-9muk=2\pi$ and $k=3.7$. The figures in the first row ((a), (b) and (c)) are for an amplitude noise level of $0$ and for values of $\epsilon$ of $0.001$, $0.02$ and $0.04$ respectively. The second row (figures (d), (e) and (f)) shows the $\epsilon$--classical phase space for an amplitude noise level of $2$ and the same values of $\epsilon$. In Fig. \ref{fig:early&epsclass}(b), the values of $\mathchar'26\mkern-9muk$ corresponding to $\epsilon = 0.001$, $0.02$ and $0.04$ are labelled A, B and C respectively.} \label{fig:phase} \end{figure*} The formula for the early time energy growth rate $D$ also provides us with predictions of the qualitative behaviour of the late time energy \cite{Auckland2003}. However, if we limit our attention to the energies for $\mathchar'26\mkern-9muk \approx 2\pi m$, where $m$ is a positive integer, the $\epsilon$--classical model of Wimberger \emph{et al.} may be employed to calculate the energies around the quantum resonance after larger numbers of kicks.
If $\epsilon=\mathchar'26\mkern-9muk-2\pi m$ is the (small) difference between $\mathchar'26\mkern-9muk$ and a resonant point, the dynamics of the AOKR is well approximated by the map \cite{Wimberger2003} \begin{subequations} \begin{eqnarray} \rho_{n+1} & = & \rho_n + \tilde{k}_n\sin \phi_{n+1},\\ \phi_{n+1} & = & \phi_n + {\rm{sign}}(\epsilon) \rho_n + \pi l + \mathchar'26\mkern-9muk\beta \mod 2\pi, \end{eqnarray} \label{eq:ecsm} \end{subequations} where $\tilde{k}_n = |\epsilon| k_n$, $k_n = (\kappa/\mathchar'26\mkern-9muk)R_{A,n}$ \cite{Schlunk2003a}, $\rho_n$ and $\phi_n$ are the momentum and position at kick $n$ respectively, $\rho_0 = |\epsilon|n_0$ for $n_0$ a positive integer and $|\epsilon| = |\mathchar'26\mkern-9muk-2\pi m| \ll 1$ for positive integer $m$. In this paper, $l$ is set to $1$ without loss of generality as in references \cite{Wimberger2003,Wimberger2003bpp}. In the reformulated dynamics, $\epsilon$ plays the part of Planck's constant and $\epsilon \rightarrow 0$ may be considered to be a quasi--classical limit. Fig. \ref{fig:early&epsclass}(b) shows the energy peak produced by the $\epsilon$--classical dynamics for various amplitude noise levels. We see that even the maximum noise level of $2$ does not destroy the peak, a finding that agrees with the experimental and simulation results presented in the previous section. Wimberger \emph{et al.} have derived a scaling law for the ratio of the mean energy at a certain value of $\epsilon$ to the on--resonant energy. This ratio is a function of $\epsilon$, the kicking strength $\kappa/\mathchar'26\mkern-9muk$ and the kick number \cite{Wimberger2003,Wimberger2003bpp}. The scaling law reproduces the quantum resonance peak, and its form is found to arise from the changes in the $\epsilon$--classical phase space as $\epsilon$ is varied and $\kappa/\mathchar'26\mkern-9muk$ is held constant. These changes may be seen in Fig. \ref{fig:phase}.
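The map of Eq. (\ref{eq:ecsm}) is easy to iterate numerically. A sketch for $\mathchar'26\mkern-9muk = 2\pi$ with $l = 1$ (so the phase increment is $\pi + 2\pi\beta$) and amplitude noise applied to $k_n$ as in the experiment; the function and parameter names are ours:

```python
import numpy as np

def eps_classical_energy(eps, k, n_kicks, n_traj=20_000, amp_noise=0.0, seed=0):
    """Iterate the epsilon-classical map for an ensemble with uniform phi_0
    and quasimomentum beta, for kbar = 2*pi, and return the mean energy
    <rho_n^2> / (2 eps^2) in the rescaled units of the text."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2 * np.pi, n_traj)
    beta = rng.uniform(0.0, 1.0, n_traj)
    rho = np.zeros(n_traj)
    for _ in range(n_kicks):
        # amplitude noise: k_n = k * (1 + delta), delta uniform on [-L/2, L/2]
        k_n = k * (1 + rng.uniform(-amp_noise / 2, amp_noise / 2, n_traj))
        phi = (phi + np.sign(eps) * rho + np.pi + 2 * np.pi * beta) % (2 * np.pi)
        rho = rho + abs(eps) * k_n * np.sin(phi)
    return float(np.mean(rho**2)) / (2 * eps**2)
```

Comparing a near-resonant value of $\epsilon$ with a larger one at maximal noise illustrates the survival of the peak-and-valley structure discussed in this section.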
The first row shows the $\epsilon$--classical phase space for increasing $\epsilon$ and no amplitude noise. For the relatively unperturbed system which exists at very low values of $\epsilon$ (say $\epsilon <0.001$), lines of constant momentum dominate the phase space (see Fig. \ref{fig:phase}(a)). We now consider the near resonant mechanics when zero noise is present, so that $k_n = k = {\rm constant}$ for all $n$. In this case, following Wimberger \emph{et al.}, we may calculate the kinetic energy $E_n = \epsilon^{-2}\langle \rho_n^2/2 \rangle$ by neglecting terms of order $\epsilon$ in Eq. (\ref{eq:ecsm}b) and iterating Eqs. (\ref{eq:ecsm}a) and (\ref{eq:ecsm}b) followed by averaging over $\phi_0$ and $\beta$. Iterating Eq. (\ref{eq:ecsm}) in the limit of vanishing $\epsilon$, and considering for simplicity only the case where $\epsilon > 0$ and $\rho_0=0$, we find the momentum after the $n$th kick to be \cite{Wimberger2003bpp} \begin{equation} \rho_n \approx \epsilon k \sum_{s=0}^{n-1}\sin(\phi_0 + \pi(1 + 2\beta)s), \label{eq:pn} \end{equation} whence the mean energy may be calculated as \begin{equation} E_n \approx \frac{k^2}{2} \left \langle \sum_{s,s' = 0}^{n-1} \sin(\phi_0 + \pi(1+2\beta)s)\sin(\phi_0 + \pi(1+2\beta)s') \right \rangle, \label{eq:En} \end{equation} where the average on the RHS is taken over all values of $\phi_0$. For $\mathchar'26\mkern-9muk=2\pi$ (as in Fig. \ref{fig:phase}) the resonant value of quasi--momentum is $\beta=0.5$ \cite{dArcy2003pp} (corresponding to the line $\rho = 2\pi$ in the phase space figures). Substitution of this value of $\beta$ into Eq. (\ref{eq:En}) followed by averaging over a uniform distribution for $\phi_0$ gives $E_n \approx (k^2/4)n^2$, since every term in the sum reduces to $\sin\phi_0$ and $\langle \sin^2\phi_0 \rangle = 1/2$ (this expression is exact when $\epsilon=0$) -- that is ballistic growth of energy occurs at exact quantum resonance for $\beta=0.5$ -- and the mean energy of the atomic ensemble (i.e.
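The collapse of Eq. (\ref{eq:En}) at the resonant quasimomentum can be checked numerically (a sketch; a uniform grid stands in for the $\phi_0$ average, and the variable names are ours):

```python
import numpy as np

k, n, beta = 3.7, 20, 0.5
phi0 = np.linspace(0.0, 2 * np.pi, 20_001)[:-1]   # uniform phi_0 grid
s = np.arange(n)[:, None]
# rho_n / eps from Eq. (pn): k * sum over s of sin(phi_0 + pi*(1 + 2*beta)*s).
# For beta = 0.5 the phase shift is 2*pi*s, so every term equals sin(phi_0).
rho_over_eps = k * np.sin(phi0[None, :] + np.pi * (1 + 2 * beta) * s).sum(axis=0)
E = np.mean(rho_over_eps**2) / 2
# Averaging sin^2(phi_0) gives 1/2, hence ballistic growth E = k^2 n^2 / 4.
print(np.isclose(E, k**2 * n**2 / 4, rtol=1e-6))
```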
averaging over $\beta$) is raised significantly as $\epsilon \rightarrow 0$ (in fact it grows linearly with kick number \cite{Wimberger2003bpp}). Thus, the uniquely quantum energy peak found at integer multiples of $\mathchar'26\mkern-9muk=2\pi$ may also be explained by a \emph{classical} resonance of the $\epsilon$--classical dynamics which is valid in this regime. For larger values of $\epsilon$, the phase space of the system is significantly distorted and the approximate expression in Eq. (\ref{eq:pn}) is no longer valid. However, two facts in particular give a qualitative explanation for the decline in mean energy away from exact resonance. Firstly, the most distorted area of phase space is that around $\rho=2\pi$ -- that is, the region responsible for ballistic growth for vanishing $\epsilon$ \cite{Wimberger2003}. Thus the number of trajectories giving ballistic growth is drastically reduced for $\epsilon>0$. Secondly, although the phase space region responsible for ballistic energy growth is warped, the structures which prevent stochastic energy growth (KAM tori) remain for $\epsilon>0$ and so the full quasi--linear rate of energy growth is not attained. These two facts taken together give a qualitative explanation for the fall off in mean energy as $\epsilon$ is increased, as seen in Fig. \ref{fig:early&epsclass}(b). This qualitative explanation of the structure near quantum resonance also holds in the case where maximal amplitude noise is applied to the system, as seen in the second row of Fig. \ref{fig:phase}. At exact quantum resonance ($\epsilon=0$) ballistic motion still occurs even in the presence of amplitude noise. For $\epsilon>0$, the phase space is distorted as before and some invariant curves are destroyed by the applied noise. However, even for $\epsilon=0.04$ (Fig. 
\ref{fig:phase}(f)), the phase space has not become completely stochastic and so we see the same quantum resonance structure as in the no--noise case, albeit with a lower peak--to--valley energy ratio. The persistence of the quantum resonance structure in the presence of amplitude noise may now be seen to be due to the reappearance of the quasi--classical dynamics which occurs at values of $\mathchar'26\mkern-9muk$ close to the resonant value, and far from the actual classical limit. The $\epsilon$--classical description which is valid in this regime is marked by a return to complete integrability exactly at quantum resonance. By contrast, the extreme sensitivity of the resonant peak to even small amounts of period noise is precisely due to the sensitivity of this approximation to the exact value of $\mathchar'26\mkern-9muk$ (and thus the pulse timing). Whilst similar arguments to those used for amplitude noise might suggest that the resonance peak should be robust to period noise too, it is the very reappearance of the stable dynamics which is actually ruined by this type of noise. If the mean deviation from periodicity is of the order of the width of the quantum resonance peak, the suppression of energy growth to either side of the peak is destroyed, and the final energy approaches the zero--correlation limit for any value of the kicking period. Comparison with the resonance seen in the early energy growth rates in the actual classical limit (Fig. \ref{fig:early&epsclass}(a)) shows that the behaviour of the quantum resonance in the presence of amplitude noise is qualitatively identical to that of the classical resonance. Thus, although the $\epsilon$--classical description of quantum resonance employs a ``fictitious'' classical dynamics in which the effective Planck's constant is still far from $0$, the quantum resonance peak may be said to mark a reappearance of classical stability in the kicked rotor dynamics far from the classical limit. 
The experimental observation of the robustness of the quantum resonance peak provides a new test of the validity of the $\epsilon$--classical model for the AOKR. \section{Conclusion} \label{sec:Conclusion} We have presented experimental results demonstrating that the quantum resonance peaks observed in Atom Optics Kicked Rotor experiments are surprisingly robust to noise applied to the kicking amplitude, and that quantum resonance peaks are still experimentally detectable even at the maximum possible noise level. By contrast, the application of even small amounts of noise to the kicking period is sufficient to completely destroy the resonant peak and return the behaviour of the system to the zero--correlation limit. We have shown that the stability of the resonant dynamics in the presence of amplitude noise is reproduced by the $\epsilon$--classical dynamics of Wimberger \emph{et al.} Viewed in light of this theoretical treatment, the resilience of the quantum resonance peak to amplitude noise is due to the reappearance of near--integrable $\epsilon$--classical dynamics near quantum resonance, the behaviour of which is analogous (although not identical) to that of the kicked rotor in the actual classical limit of $\mathchar'26\mkern-9muk \rightarrow 0$. \section*{Acknowledgments} The authors thank Maarten Hoogerland for his help regarding the experimental procedure. M.S. would like to thank Andrew Daley for insightful conversations regarding this research and for providing the original simulation programs. This work was supported by the Royal Society of New Zealand Marsden Fund, grant UOA016. \end{document}
\begin{document} \begin{center}{\bf ON WEAK ASSOCIATED REFLEXIVITY OF WEIGHTED SOBOLEV SPACES\\ OF THE FIRST ORDER ON REAL LINE}\end{center} \begin{center} {\bf V.D. Stepanov$^{1,3}$\footnote{Corresponding author: [email protected]} and E.P. Ushakova$^{2,3}$}\end{center} \noindent$^1$\textit{\small Computing Center of Far Eastern Branch of Russian Academy of Sciences, 65 Kim Yu Chena str., Khabarovsk 680000, Russia} \noindent$^2$\textit{\small V.A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, 65 Profsoyuznaya str., Moscow 117997, Russia} \noindent$^3$\textit{\small Steklov Mathematical Institute of Russian Academy of Sciences, 8 Gubkina str., 119991 Moscow, Russia} \noindent\textit{Key words}: Sobolev space, dual space, associate space, reflexivity. \\ \textit{MSC (2010)}: 46E30, 46E35 \vskip 0.2cm {{\bf Abstract.} We study associate and double associate spaces of two-weighted Sobolev spaces of the first order on the real half-line, and we show that, unlike the notion of duality, associativity splits into two cases, which we call the \textquotedblleft strong\textquotedblright\, and \textquotedblleft weak\textquotedblright\, ones, while the double associativity splits into four cases. Along the way we prove that the Sobolev space of compactly supported functions possesses weak associated reflexivity and that the double weak--strong associate space is trivial. The case of power weights was recently characterized by reduction to Ces\`{a}ro or Copson type spaces \cite{S1}.} \section{Introduction} Let $1<p<\infty, m\in\mathbb{N}$ and let $W^{p,m}, W_0^{p,m}$ and $H^{p,m}$ be the classical Sobolev spaces (see \cite[Chapter 3]{A}), where $W_0^{p,m}$ and $H^{p,m}$ are the completions of $C^\infty_0$ and $C^m$, respectively, with respect to the norm \begin{equation*} \|f\|_{m,p}:=\left(\sum_{0\leq |\alpha|\leq m}\|D^\alpha f\|_p^p\right)^{\frac{1}{p}}. \end{equation*} Moreover, $W^{p,m}=H^{p,m}$ \cite[Theorem 3.16]{A}. 
If $N=\sum_{0\leq |\alpha|\leq m} 1$ then the dual of $W^{p,m}$ is a closed subspace of the vector-valued Lebesgue space $L^{p'}_N,$ where $p'=\frac{p}{p-1}.$ This implies the reflexivity of $W^{p,m}$ as well as of $W^{p,m}_0$, on the basis of the general criterion of reflexivity of Banach spaces \cite[Theorem 1.17]{A} and the weak compactness of a ball in $W^{p,m}$, which follows from \cite[\textsection\, 4, Theorem 2]{S}. The general form of an arbitrary bounded linear functional $L\in (W^{p,m})^\prime$ is given by \cite[Theorem 3.8]{A}, with an implicit formula for the norm $\|L\|.$ Alternatively, $W^{-m,p'}=(W^{p,m}_0)^\prime$ is constructed as the completion of the set of functionals $V:=\{L_v; v\in L^{p'}\}\subset (W^{p,m}_0)^\prime,$ $L_v(u):=\langle u,v\rangle:=\int u(x)v(x)dx$, with respect to the norm \begin{equation}\label{nrm} \|v\|_{-m,p'}:=\sup_{0\not=u\in W^{p,m}_0}\frac{|\langle u,v\rangle|}{\|u\|_{m,p}}. \end{equation} Similar results are known for Sobolev--Orlicz spaces (see \cite{K} and the literature therein). Generally, elements of $(W^{p,m})^\prime$ and $(W^{p,m}_0)^\prime$ are distributions of positive order. We turn to the case when duality is replaced by associativity, and limit ourselves to the study of two-weight Sobolev spaces of the first order on the real line. The motivation to characterize associated spaces is that this gives a duality principle, which allows one to reduce a problem of boundedness of a linear operator, say from a Sobolev space to a Lebesgue space, to a more manageable problem for its conjugate operator (see, for example, \cite{Oin2, Oin3, Oin4, Oin5, Oin6}, \cite{S3}). Now we provide basic definitions. Let $I:=(a,b)\subseteq\mathbb{R}$ be an open interval of the real axis and let $\mathfrak{M}(I)$ be the set of all Lebesgue measurable functions on $I$. 
For $1\le p<\infty$ we denote by $L^p(I)\subset \mathfrak{M}(I)$ the usual Lebesgue space with the norm $\|f\|_{L^p(I)}:=\left(\int_I|f|^p\right)^{1/p}.$ Let $ {\mathscr V}_p(I):=\bigl\{v\in L^p_\text{\rm loc}(I): v\ge 0,\|v\|_{L^1(I)}\not=0 \bigr\} $ be the set of weight functions (weights) and $v_0,v_1\in {\mathscr V}_1(I)$. Denote by $W^1_{1,\text{\rm loc}}(I)$ the space of all functions $u\in L^1_\text{\rm loc}(I)$ whose distributional derivatives $Du$ belong to $L^1_\text{\rm loc}(I)$. We study the weighted Sobolev space \begin{equation*} W^1_p(I):=\bigl\{u\in W^1_{1,\text{\rm loc}}(I):\|u\|_{W^1_p(I)}<\infty\bigr\}, \end{equation*} where \begin{equation*} \|u\|_{W^1_p(I)}:=\|v_0 u\|_{L^p(I)}+\|v_1 Du\|_{L^p(I)}, \end{equation*} and the subspaces $\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p(I)\subset\mathop{\phantom{W}}\limits^{\circ}\mskip-23muW^1_p(I)\subset W^1_p(I),$ where the second is the closure in $W^1_p(I)$ of the subspace $\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p(I)$ of locally absolutely continuous functions of the form \begin{equation*} \mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p(I):=\bigl\{f\in AC_\text{\rm loc}(I): f(0)=0,~ \mathop{\rm supp}\nolimits f~\text{compact~in~}I,\|f\|_{W^1_p(I)}<\infty\bigr\}. \end{equation*} Let $(X,\|\cdot\|_X)$ be a normed space of measurable functions on $I.$ $X$ is called an ideal space provided it satisfies the property: if $|f|\leq|g|$ a.e. on $I$ and $g\in X,$ then $f\in X$ and $\|f\|_X\leq\|g\|_X.$ Put \begin{equation}\label{D-X} \mathfrak{D}_X:=\Bigl\{g\in \mathfrak{M}(I):\int_I |fg|<\infty\, ~\text{for~ all~}\,f\in X\Bigr\}. 
\end{equation} For any $g\in\mathfrak{D}_X$ we define the functionals $$ \mathbf{J}_{X}(g):=\sup_{0\not=f\in X}\frac{\int_I |fg|}{\|f\|_{X}} \,\, \text{and}\,\,{J}_{X}(g):=\sup_{0\not=f\in X}\frac{|\int_I fg|}{\|f\|_{X}} $$ and the associated spaces $$ X'_s:=\bigl\{g\in \mathfrak{M}(I):\|g\|_{X'_s}:=\mathbf{J}_X(g)<\infty\bigr\}, $$ $$ X'_w:=\bigl\{g\in \mathfrak{M}(I):\|g\|_{X'_w}:={J}_X(g)<\infty\bigr\}, $$ which we call \textquotedblleft strong\textquotedblright\, and \textquotedblleft weak\textquotedblright\, associated spaces, respectively. A standard problem for an ideal space $(X,\|\cdot\|_X)$ is the characterization of the \textquotedblleft strong\textquotedblright\, associated space (or the K\"{o}the dual) (see \cite[Chapter 1]{BS}). Observe that $J_X(g)=\mathbf{J}_X(g)$ for an ideal space $X.$ For a non-ideal space, $J_X(g)$ and $\mathbf{J}_X(g)$ may differ (see \cite{PSU1} for examples). In particular, any weighted Sobolev space $X\in\{\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p(I),\mathop{\phantom{W}}\limits^{\circ}\mskip-23muW^1_p(I),W^1_p(I)\}$ is an example for which $J_X(g)\not=\mathbf{J}_X(g)$ is possible \cite{PSU0}, \cite{PSU1}. Let $X\in\{\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p(I),\mathop{\phantom{W}}\limits^{\circ}\mskip-23muW^1_p(I),W^1_p(I)\}.$ A complete characterization of the associate spaces $X'_s$ and $X'_w$ is obtained in \cite[Sections 5, 6]{PSU1}. Besides, it was recently discovered that for power weight functions $v_0$ and $v_1$ the spaces $X'_s$ and $X'_w$ coincide with Ces\`{a}ro or Copson type spaces. It is then a natural problem to characterize the \textquotedblleft double associate\textquotedblright\, spaces of the form $[X'_s]'_s,$ $[X'_s]'_w,$ $[X'_w]'_s,$ $[X'_w]'_w.$ A complete analysis of the problem for the Sobolev spaces with power weights and the Ces\`{a}ro or Copson type spaces is given in \cite{P1, S1}. 
The main goal of the paper is to establish the \textquotedblleft weak\textquotedblright\, associated reflexivity of the Sobolev space $X=\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p(I)$ for $1<p<\infty$. For the reflexivity of the \textquotedblleft strong\textquotedblright\, and \textquotedblleft weak\textquotedblright\, weigh\-ted Ces\`{a}ro and Copson type spaces see \cite{S2} and \cite{P2}, respectively. In the next section we provide technical tools to deal with weighted Sobo\-lev spaces and their associate spaces. In particular, we recall the characterization of $X'_s$ and $X'_w$ from \cite{PSU0} for $X=\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p(I)$ and show that $[X'_w]'_s=\{0\}$ in this case (see Corollary \ref{corol}). The main result is contained in Section 3, where we establish the \textquotedblleft weak\textquotedblright\, as\-so\-ci\-a\-ted reflexivity, that is, $X=[X'_w]'_w$ for $X=\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p(I)$ (see Theorem \ref{theoremMain}). The characterization of $[X'_s]'_s=[X'_s]'_w$ is still open. However, for power weights all of $[X'_s]'_s,$ $[X'_s]'_w,$ $[X'_w]'_s,$ $[X'_w]'_w$ are described in \cite{S1}. We use the signs $:=$ and $=:$ for introducing new quantities. We write $A\lesssim B$ if $A\leq cB$ with some positive constant $c$ which depends only on $p$. $A\approx B$ is equivalent to $A\lesssim B \lesssim A$. The symbols $\mathbb{N}$ and $\mathbb{Z}$ are used for the sets of natural and integer numbers, respectively. The notation $\chi_E$ stands for the characteristic function (indicator) of a set $E.$ Indeterminate forms $0\cdot\infty, \frac{\infty}{\infty}$ and $\frac{0}{0}$ are taken to be zero. The symbol $\Box$ stands for the end of a proof. If $1<p<\infty,$ then $p':=\frac{p}{p-1}.$ \section{Sobolev spaces and their associate spaces} Let $1<p<\infty$. 
Suppose for simplicity that $I=(0,\infty)$ and there exists $c\in (0,\infty)$ for which \begin{equation}\label{S6} \|v_1^{-1}\|_{{L^{p'}(0,c)}}\|v_0\|_{{L^p}(0,c)} = \|v_1^{-1}\|_{L^{p'}(c,\infty)}\|v_0\|_{L^p(c,\infty)} = \infty. \end{equation} Then by \cite[Lemma 1.6]{Oin} $\mathop{\phantom{W}}\limits^{\circ}\mskip-23muW^1_p(0,\infty)= W^1_p(0,\infty)$ and by the Oinarov--Otelbaev construction \cite{Oin}, \cite{PSU0}, \cite{PSU1} there exist unique strictly increasing absolutely continuous functions $a(t)$ and $b(t)$ such that \begin{equation*} \lim_{t\to 0}a(t)=\lim_{t\to 0}b(t)=0,\qquad\lim_{t\to \infty}a(t)=\lim_{t\to \infty}b(t)=\infty, \qquad a(t)<t<b(t)\quad (t>0),\end{equation*} \begin{equation}\label{2} \int_{a(t)}^t v_1^{-p'}=\int_t^{b(t)}v_1^{-p'},\quad t>0, \end{equation} ({\sl equilibrium condition}) and \begin{equation} \label{3} \biggl(\int_{a(t)}^{b(t)}v_1^{-p'}\biggr)^{1/p'} \biggl(\int_{a(t)}^{b(t)}v_0^p\biggr)^{1/p}=1,\quad t>0. \end{equation} Put \begin{equation*} V_1(t):=\int_{\Delta(t)}v_1^{-p'},\qquad V_1^\pm(t):=\int_{\Delta^\pm(t)}v_1^{-p'}, \end{equation*} \begin{equation*} \Delta(t):=(a(t),b(t)), ~\Delta^-(t):=(a(t),t),~ \Delta^+(t):=(t,b(t)) \end{equation*} and let $a^{-1}(t)$ be the inverse function of $a(t)$. 
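To make the equilibrium construction concrete, here is a quick formal check (our illustration; note that constant weights fail condition \eqref{S6} near the endpoints, so this is only a heuristic computation): for $v_0=v_1\equiv 1$, conditions \eqref{2} and \eqref{3} can be solved explicitly.

```latex
% For v_0 = v_1 \equiv 1 the equilibrium condition (2) reads
%   t - a(t) = b(t) - t,
% and the normalization (3) becomes
%   (b(t)-a(t))^{1/p'} (b(t)-a(t))^{1/p} = b(t) - a(t) = 1,
% so that, formally, for t \ge 1/2,
\begin{equation*}
a(t)=t-\tfrac{1}{2},\qquad b(t)=t+\tfrac{1}{2},\qquad
a^{-1}(t)=t+\tfrac{1}{2},\qquad V_1(t)=1,\qquad V_1^{\pm}(t)=\tfrac{1}{2}.
\end{equation*}
```

These are exactly the integration windows of length $\tfrac{1}{2}$ that appear in the explicit formula of Remark \ref{rm} below.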
Define \begin{gather*} \mathbb{G}(g):= \biggl(\int_0^\infty v_1^{-p'}(t)\biggl|\int_t^{a^{-1}(t)}\frac{g(x)}{V_1(x)}\biggl(\int_{a(x)}^tv_1^{-p'} \biggr) dx \biggr|^{p'}\,dt\biggr)^{1/p'},\\ \mathcal{G}(g):= \biggl(\int_0^\infty v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_t^{a^{-1}(t)}\frac{g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt\biggr)^{1/p'},\\ \mathsf{G}(g):=\biggl(\int_0^\infty\biggl(\int_t^{a^{-1}(t)}|g(x)|\,dx\biggr)^{p'}v_1^{-p'}(t) \,dt\biggr)^{1/p'} \end{gather*} and write $W_p^1:=W_p^1(0,\infty)$, $\mathop{\phantom{W}}\limits^{\circ}\mskip-23muW^1_p:=\mathop{\phantom{W}}\limits^{\circ}\mskip-23muW^1_p(0,\infty)$, $\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1:=\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1(0,\infty)$. \begin{thm} \label{T1} {\rm\cite[Theorem 3.1]{ESU}, \cite[Theorem 4.1]{PSU0}, \cite[Theorem 4.5]{PSU0}} Let $1<p<\infty$ and $g\in L^1_{\rm loc}(0,\infty)$. Suppose that $v_0,v_1\in {\mathscr V}_p(0,\infty)$, $\frac{1}{v_1}\in L^{p'}_\text{\rm loc}(0,\infty)$ and the condition \eqref{S6} is satisfied. Then \begin{align*} {\mathbf J}_{W_p^1}(g)={\mathbf J}_{\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1}(g)\approx \mathsf{G}(g). \end{align*} If $X=W_p^1$ or $X=\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1$, then \begin{align*} {X}^\prime_s=\bigl\{g\in L^1_\text{\rm loc}(0,\infty): \mathsf{G}(g)<\infty, \|g\|_{{X}^\prime_s}\approx \mathsf{G}(g)\bigr\}. \end{align*} Secondly, \begin{equation}\label{UB} J_{\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1}(g)\approx \mathbb{G}(g)+\mathcal{G}(g), \end{equation} and if $X=\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1$, then \begin{align*}\label{ESU12} X^\prime_w=\bigl\{g\in L^1_\text{\rm loc}(0,\infty): \mathbb{G}(g)+\mathcal{G}(g)<\infty, \|g\|_{X^\prime_w}\approx \mathbb{G}(g)+\mathcal{G}(g)\bigr\}. \end{align*} Also, $J_{W_p^1}(g)<\infty$ if and only if $\mathsf{G}(g)<\infty$ and $J_{W_p^1}(g)\approx \mathbb{G}(g)+\mathcal{G}(g)$. 
\end{thm} \begin{rem}\label{rm} Let $v_0=v_1\equiv 1.$ Then we can write out the right-hand side of \eqref{nrm} explicitly for $W^{1,p}(0,\infty),$ using \cite[Example 7.2]{PSU1}. Namely, we have \begin{align*} \|v\|_{-1,p'}\approx& \Biggl(\int_0^\infty \biggl|\int_t^{t+\frac{1}{2}}v \biggr|^{p'}\, dt\Biggr)^{\frac{1}{p'}}\\ & + \Biggl(\int_0^{\frac{1}{2}} t^{-p'}\biggl|\int_0^t\left(\int_t^{y+\frac{1}{2}}v\right)\,dy \biggr|^{p'}\, dt +\int_{\frac{1}{2}}^\infty \biggl|\int_{t-\frac{1}{2}}^t\left(\int_t^{y+\frac{1}{2}}v\right)\,dy \biggr|^{p'}\, dt\Biggr)^{\frac{1}{p'}}. \end{align*} \end{rem} \begin{lem}\label{Norma} Let $1<p<\infty$ and $X=\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1$. Then the functional $\|g\|_{X^\prime_w}$ is a norm. \end{lem} \begin{proof} It is sufficient to show that $$ \|g\|_{X^\prime_w}=0\ \ \Rightarrow\ \ g=0\ \text{a.e. on}\ (0,\infty). $$ Let $\|g\|_{X^\prime_w}=0$. Then $\mathbb{G}(g)=\mathcal{G}(g)=0$. In particular, $$ G(t):=\int_t^{a^{-1}(t)}\frac{g(x)}{V_1(x)}\biggl(\int_{a(x)}^tv_1^{-p'} \biggr)\, dx=0\ \ \text{a.e. on}\ (0,\infty), $$ and $\mathcal{G}(g)=0$ gives $\int_t^{a^{-1}(t)}\frac{g(x)}{V_1(x)}\,dx=0$ a.e. on $(0,\infty)$. Hence, differentiating $G$ and using the equilibrium condition \eqref{2}, $$ 0=G^\prime(t)=-\frac{g(t)}{2}\ \ \text{a.e. on}\ (0,\infty). $$ \end{proof} Let $1<r<\infty,$ $u\in {\mathscr V}_r(0,\infty).$ Denote \begin{gather*} L^r_u(0,\infty):=\bigl\{h: \|h\|_{r,u}:=\|uh\|_{L^r(0,\infty)}<\infty\bigr\},\\ \mathbb{W}_{p',1/{v_1}}:=\bigl\{g\in L^1_\text{\rm loc}(0,\infty): \|g\|_{\mathbb{W}_{p',1/{v_1}}}:= \mathsf{G}(g)<\infty\bigr\},\\ \mathscr{W}_{p',1/{v_1}}:=\bigl\{g\in L^1_\text{\rm loc}(0,\infty): \|g\|_{\mathscr{W}_{p',1/{v_1}}}:=\mathbb{G}(g)+\mathcal{G}(g)<\infty\bigr\}. 
\end{gather*} \begin{rem}\label{remark} From \eqref{UB} we obtain a H\"{o}lder-type inequality (see \cite[Theorem 2.4]{BS}) in $\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1$ and $\mathscr{W}_{p',1/{v_1}}$: if $1<p<\infty$ then $$ \biggl|\int_0^\infty fg\biggr|\lesssim \|f\|_{\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1}\|g\|_{\mathscr{W}_{p',1/{v_1}}}\quad\textrm{for any }f\in \mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW_p^1\textrm{ and }g\in \mathscr{W}_{p',1/{v_1}}. $$ \end{rem} The norm in $\mathscr{W}_{p',1/{v_1}}$ admits an alternative formulation in terms of a sequence $\{\eta_k\}_{k\in\mathbb{Z}}$ of the form: $$\eta_0=1, \qquad \eta_{k}=a^{-1}(\eta_{k-1})\quad (k\in\mathbb{N}), \qquad \eta_{k}=a(\eta_{k+1})\quad (-k\in\mathbb{N}).$$ To state it in the next lemma we denote \begin{equation*} G^{(\delta)}(t):={[V_1(t)]^{\delta}}\int_t^{a^{-1}(t)} \frac{g(x)}{V_1(x)}\Bigl( \int_{a(x)}^tv_1^{-p'}\Bigr)^{1-\delta} dx,\ \ \ \delta=0,1, \end{equation*} and observe that for $t\in[\eta_{k-1},\eta_k]$ \begin{gather} G^{(\delta)}(t)=G_{1,k}^{(\delta)}(t)+G_{2,k}^{(\delta)}(t),\label{22_1}\\ G_{1,k}^{(\delta)}(t):={V^{\delta}_1(t)}\int_t^{\eta_k} \frac{g(x)}{V_1(x)}\Bigl( \int_{a(x)}^tv_1^{-p'}\Bigr)^{1-\delta} dx,\nonumber\\ G_{2,k}^{(\delta)}(t):={V^{\delta}_1(t)}\int_{\eta_k}^{a^{-1}(t)} \frac{g(x)}{V_1(x)}\Bigl( \int_{a(x)}^tv_1^{-p'}\Bigr)^{1-\delta} dx.\nonumber \end{gather} \begin{lem}\label{norm} Let $1<p<\infty,$ $v_0,v_1\in {\mathscr V}_p(0,\infty)$, $\frac{1}{v_1}\in L^{p'}_\text{\rm loc}(0,\infty)$ and the condition \eqref{S6} is satisfied. 
Then \begin{align} \|g\|_{\mathscr{W}_{p',1/{v_1}}}^{p'}\approx& \sum_{k\in\mathbb{Z}}\biggl\{\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{1,k}^{(0)}(t)\bigr|^{p'}\,dt+\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{2,k}^{(0)}(t) \bigr|^{p'}\,dt \nonumber \\ &+\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{1,k}^{(1)}(t)\bigr|^{p'}\,dt+\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{2,k}^{(1)}(t) \bigr|^{p'}\,dt\biggr\}.\label{qq1} \end{align} \end{lem} \begin{proof} The upper estimate follows from \eqref{22_1} and $$ \|g\|_{\mathscr{W}_{p',1/{v_1}}}^{p'}\lesssim\sum_{k\in\mathbb{Z}}\int_{\eta_{k-1}}^{\eta_k}v_1^{-p'}(t)\Bigr\{\bigl| G^{(0)}(t)\bigr|^{p'}+\bigl| G^{(1)}(t)\bigr|^{p'}\Bigr\}dt. $$ To establish the lower estimate we assume that the inequality $$ \biggl|\int_0^\infty fg\biggr|\le C\|f\|_{\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p}= C\Bigl\{\|fv_0\|_p+\|f'v_1\|_p\Bigr\} $$ holds with $C=\|g\|_{\mathscr{W}_{p',1/{v_1}}}$, and let for some $N\in\mathbb{N}$ \begin{align*} {F}_{1,N}^{(\delta)}(x):= \frac{\sum_{|k|\le N}\chi_{[\eta_{k-1},\eta_k]}(x)}{V_1^-(x)} \int_{\eta_{k-1}}^x v_1^{-p'}(t)\bigl[\mathop{\mathrm{sgn}} G^{(\delta)}_{1,k}(t)\bigr] \biggl(\int_{a(x)}^t v_1^{-p'}\biggr)^{1-\delta} [V_1(t)]^{\delta}\bigl|G^{(\delta)}_{1,k}(t)\bigr|^{p'-1}\,dt,\end{align*} \begin{align*} {F}_{2,N}^{(\delta)}(x):= \frac{\sum_{|k|\le N}\chi_{[\eta_{k},\eta_{k+1}]}(x)}{V_1^-(x)} \int_{a(x)}^{\eta_{k}} v_1^{-p'}(t)\bigl[\mathop{\mathrm{sgn}} G^{(\delta)}_{2,k}(t)\bigr]\biggl(\int_{a(x)}^t v_1^{-p'}\biggr)^{1-\delta} [V_1(t)]^{\delta}\bigl|G^{(\delta)}_{2,k}(t)\bigr|^{p'-1}\,dt. \end{align*} If $f={F}^{(\delta)}_{1,N}+{F}^{(\delta)}_{2,N}$ then \begin{equation}\label{0}\int_0^\infty g(x)f(x)\,dx= \sum_{|k|\le N}\biggl\{\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{1,k}^{(\delta)}(t)\bigr|^{p'}\,dt+\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{2,k}^{(\delta)}(t) \bigr|^{p'}\,dt\biggr\}. 
\end{equation} To evaluate \begin{align*} \|{F}^{(\delta)}_{1,N}v_0\|_p^p=&\sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_0^{p}(x) \biggl|\frac{1}{V_1^-(x)}\int_{\eta_{k-1}}^x v_1^{-p'}(t) \bigl[\mathop{\mathrm{sgn}} G^{(\delta)}_{1,k}(t)\bigr]\\&\times \biggl(\int_{a(x)}^t v_1^{-p'}\biggr)^{1-\delta} [V_1(t)]^{\delta}\bigl|G^{(\delta)}_{1,k}(t)\bigr|^{p'-1}\,dt\biggr|^p\,dx \end{align*} we apply the well-known characterization of the weighted Hardy inequality \cite[p. 6]{KPS} in order to obtain \begin{equation*} \int_{\eta_{k-1}}^{\eta_k} v_0^{p}(x) \biggl(\int_{\eta_{k-1}}^x v_1^{-p'}(t) \bigl|G^{(\delta)}_{1,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx \lesssim A_{1}^p\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{1,k}^{(\delta)}\bigr|^{p'}, \end{equation*} where (see \eqref{3}) \begin{align*} A_{1}:=\sup_{\eta_{k-1}<t<\eta_k}\biggl(\int_t^{\eta_k} v_0^{p}\biggr)^{1/p}\biggl(\int_{\eta_{k-1}}^t v_1^{-p'}\biggr)^{1/p'}\le \biggl(\int_{\eta_{k-1}}^{\eta_k} v_0^{p}\biggr)^{1/p}\biggl(\int_{\eta_{k-1}}^{\eta_{k}} v_1^{-p'}\biggr)^{1/p'}\le 1. \end{align*} Therefore, by using in the $\delta=1\,-$case the relation \begin{equation}\label{Gk1} V_1(t)=2V_1^+(t)\le 2\int_{\eta_{k-1}}^{b(t)}v_1^{-p'}\le 2\int_{\eta_{k-1}}^{b(x)}v_1^{-p'} \le 2V_1(x)=4V_1^-(x), \quad \eta_{k-1}\le t\le x,\end{equation} we have for both $\delta=0,1$: \begin{align}\label{1} \|{F}^{(\delta)}_{1,N}v_0\|_p^p\le& \sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_0^{p}(x) \biggl(\int_{\eta_{k-1}}^x v_1^{-p'}(t) \bigl|G^{(\delta)}_{1,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx\nonumber\\\lesssim& \sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{1,k}^{(\delta)}\bigr|^{p'} =:\bigl[\mathbf{G}^{(\delta)}_{1,N}(g)\bigr]^{p'}. 
\end{align} Analogously, making use of \begin{equation}\label{Gk2}V_1(t)=2V_1^+(t)\le 2\int_{a(x)}^{b(\eta_k)}v_1^{-p'}\le 2V_1(x)=4 V_1^-(x), \quad \eta_k\le x\le\eta_{k+1},\end{equation} we find that \begin{align*} \|{F}^{(\delta)}_{2,N}v_0\|_p^p=&\sum_{|k|\le N}\int_{\eta_{k}}^{\eta_{k+1}} v_0^{p}(x) \biggl|\frac{1}{V_1^-(x)}\int_{a(x)}^{\eta_{k}} v_1^{-p'}(t) \bigl[\mathop{\mathrm{sgn}} G^{(\delta)}_{2,k}(t)\bigr]\\&\times \biggl(\int_{a(x)}^t v_1^{-p'}\biggr)^{1-\delta} [V_1(t)]^{\delta}\bigl|G^{(\delta)}_{2,k}(t)\bigr|^{p'-1}\,dt\biggr|^p\,dx\\ \le& \sum_{|k|\le N}\int_{\eta_{k}}^{\eta_{k+1}} v_0^{p}(x) \biggl(\int_{a(x)}^{\eta_{k}} v_1^{-p'}(t) \bigl|G^{(\delta)}_{2,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx \lesssim \, A_{2}^p\sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{2,k}^{(\delta)}\bigr|^{p'}, \end{align*} where (see \eqref{3}) \begin{equation*} A_{2}:=\sup_{\eta_{k-1}<t<\eta_k}\biggl(\int_{\eta_k}^{a^{-1}(t)} v_0^{p}\biggr)^{1/p}\biggl(\int_t^{\eta_{k}} v_1^{-p'}\biggr)^{1/p'}\le 1. \end{equation*} Therefore, \begin{equation}\label{11} \|{F}^{(\delta)}_{2,N}v_0\|_p^p \lesssim \sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{2,k}^{(\delta)}\bigr|^{p'}=:\bigl[\mathbf{G}^{(\delta)}_{2,N}(g)\bigr]^{p'}. 
\end{equation} Further, since \begin{align*} [{F}_{1,N}^{(\delta)}(x)]'=&-\sum_{|k|\le N}\chi_{[\eta_{k-1},\eta_k]}(x) \frac{\bigl[V_1^-(x)\bigr]'}{\bigl[V_1^-(x)\bigr]^2}\int_{\eta_{k-1}}^x v_1^{-p'}(t)\bigl[\mathop{\mathrm{sgn}} G^{(\delta)}_{1,k}(t)\bigr]\\&\times \biggl(\int_{a(x)}^t v_1^{-p'}\biggr)^{1-\delta} [V_1(t)]^{\delta}\bigl|G^{(\delta)}_{1,k}(t)\bigr|^{p'-1}\,dt+\sum_{|k|\le N}\chi_{[\eta_{k-1},\eta_k]}(x)\\&\times\begin{cases} v_1^{-p'}(x)\bigl[\mathop{\mathrm{sgn}} G^{(0)}_{1,k}(x)\bigr] \bigl|G^{(0)}_{1,k}(x)\bigr|^{p'-1}\\ -\displaystyle\frac{v_1^{-p'}(a(x))\,a'(x)}{V_1^-(x)}\int_{\eta_{k-1}}^x v_1^{-p'}\,\bigl[\mathop{\mathrm{sgn}} G^{(0)}_{1,k}\bigr] \bigl|G^{(0)}_{1,k}\bigr|^{p'-1}, & \delta=0,\\ 2v_1^{-p'}(x)\bigl[\mathop{\mathrm{sgn}} G^{(1)}_{1,k}(x)\bigr] \bigl|G^{(1)}_{1,k}(x)\bigr|^{p'-1}, & \delta=1,\end{cases} \end{align*} \begin{align*} [{F}_{2,N}^{(\delta)}(x)]'=&-\sum_{|k|\le N}\chi_{[\eta_{k},\eta_{k+1}]}(x) \frac{\bigl[V_1^-(x)\bigr]'}{\bigl[V_1^-(x)\bigr]^2}\int_{a(x)}^{\eta_{k}} v_1^{-p'}(t)\bigl[\mathop{\mathrm{sgn}} G^{(\delta)}_{2,k}(t)\bigr] \\&\times\biggl(\int_{a(x)}^t v_1^{-p'}\biggr)^{1-\delta} [V_1(t)]^{\delta}\bigl|G^{(\delta)}_{2,k}(t)\bigr|^{p'-1}\,dt-\sum_{|k|\le N}\chi_{[\eta_{k},\eta_{k+1}]}(x)\\&\times\begin{cases} \displaystyle\frac{v_1^{-p'}(a(x))\,a'(x)}{V_1^-(x)}\int_{a(x)}^{\eta_{k}} v_1^{-p'}\,\bigl[\mathop{\mathrm{sgn}} G^{(0)}_{2,k}\bigr] \bigl|G^{(0)}_{2,k}\bigr|^{p'-1}, & \delta=0,\\ \displaystyle\frac{v_1^{-p'}(a(x))\,a'(x)}{V_1^-(x)}&\\ \times\bigl[\mathop{\mathrm{sgn}} G^{(1)}_{2,k}(a(x))\bigr]V_1^-(a(x)) \bigl|G^{(1)}_{2,k}(a(x))\bigr|^{p'-1}, & \delta=1,\end{cases} \end{align*} then \begin{equation*}\label{8} \|[{F}_{1,N}^{(\delta)}]'v_1\|_p \le\begin{cases} I_1+\bigl[\mathbf{G}^{(0)}_{1,N}(g)\bigr]^{p'-1}+II_1, &\delta=0,\\ I_1+\bigl[\mathbf{G}^{(1)}_{1,N}(g)\bigr]^{p'-1}, &\delta=1,\end{cases} \end{equation*} where \begin{align*} I_1^p:=\sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{p}(x) 
\frac{\Bigl|\bigl[V_1^-(x)\bigr]'\Bigr|^p}{\bigl[V_1^-(x)\bigr]^{2p}} \biggl(\int_{\eta_{k-1}}^x v_1^{-p'}(t) \biggl(\int_{a(x)}^t v_1^{-p'}\biggr)^{1-\delta} [V_1(t)]^{\delta}\bigl|G^{(\delta)}_{1,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx \end{align*} and \begin{align*} II_1^p:=\sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{p}(x)\bigl[V_1^-(x)\bigr]^{-p}\bigl[v_1^{-p'}(a(x))\,a'(x)\bigr]^p \biggl(\int_{\eta_{k-1}}^x v_1^{-p'}(t) \bigl|G^{(0)}_{1,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx. \end{align*} In view of $v_1^{-p'}(a(x))a'(x)\le 2v_1^{-p'}(x)$ (see \eqref{eq}), we obtain, by using \eqref{Gk1} in the $\delta=1\,-$case, that \begin{align*} I_1^p\le&\sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{p}(x) \frac{\bigl|v_1^{-p'}(x)-v_1^{-p'}(a(x))a'(x)\bigr|^p}{\bigl[V_1^-(x)\bigr]^{p}} \biggl(\int_{\eta_{k-1}}^xv_1^{-p'}\bigl|G^{(\delta)}_{1,k}\bigr|^{p'-1}\biggr)^p\,dx\\ \le& \sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{p}(x) \frac{\bigl[v_1^{-p'}(x)+v_1^{-p'}(a(x))a'(x)\bigr]^p}{\bigl[V_1^-(x)\bigr]^{p}} \biggl(\int_{\eta_{k-1}}^xv_1^{-p'}\bigl|G^{(\delta)}_{1,k}\bigr|^{p'-1}\biggr)^p\,dx\\ \le& 3^{p}\sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(x) \bigl[V_1^-(x)\bigr]^{-p} \biggl(\int_{\eta_{k-1}}^xv_1^{-p'}\bigl|G^{(\delta)}_{1,k}\bigr|^{p'-1}\biggr)^p\,dx. \end{align*} Analogously, \begin{equation*} II_2^p\le 2^p\sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(x)\bigl[V_1^-(x)\bigr]^{-p} \biggl(\int_{\eta_{k-1}}^xv_1^{-p'}\bigl|G^{(0)}_{1,k}\bigr|^{p'-1}\biggr)^p\,dx. \end{equation*} On the strength of the boundedness characteristics for the Hardy operator \cite[p. 
6]{KPS}, \begin{align*} \int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(x) \bigl[V_1^-(x)\bigr]^{-p} \biggl(\int_{\eta_{k-1}}^xv_1^{-p'}(t)\bigl|G^{(\delta)}_{1,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx\lesssim \mathbb{A}_1^p\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{1,k}^{(\delta)}\bigr|^{p'}, \end{align*} where \begin{equation*} \mathbb{A}_1:=\sup_{\eta_{k-1}<t<\eta_k} \biggl(\int_t^{\eta_{k}} v_1^{-p'}(x) \bigl[V_1^-(x)\bigr]^{-p}\,dx\biggr)^{1/p}\biggl(\int_{\eta_{k-1}}^t v_1^{-p'}\biggr)^{1/p'}. \end{equation*} We have \begin{align*}\label{A1} \mathbb{A}_1^p\le&\sup_{\eta_{k-1}<t<\eta_k} \biggl(\int_t^{\eta_{k}} v_1^{-p'}(x)\biggl(\int_{\eta_{k-1}}^x v_1^{-p'}\biggr)^{-p}\,dx\biggr)\biggl(\int_{\eta_{k-1}}^t v_1^{-p'}\biggr)^{p-1}\\ =&\frac{1}{p-1}\sup_{\eta_{k-1}<t<\eta_k} \biggl[\biggl(\int_{\eta_{k-1}}^t v_1^{-p'}\biggr)^{1-p}- \biggl(\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\biggr)^{1-p}\biggr] \biggl(\int_{\eta_{k-1}}^t v_1^{-p'}\biggr)^{p-1} \le\frac{1}{p-1}. \end{align*} Therefore, \begin{align*} \sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} \frac{v_1^{-p'}(x)}{ \bigl[V_1^-(x)\bigr]^{p}} \biggl(\int_{\eta_{k-1}}^xv_1^{-p'}(t)\bigl|G^{(\delta)}_{1,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx\lesssim\sum_{|k|\le N}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{1,k}^{(\delta)}\bigr|^{p'}= \bigl[\mathbf{G}^{(\delta)}_{1,N}(g)\bigr]^{p'}, \end{align*} that is \begin{equation*}\label{8''} \|[{F}_{1,N}^{(\delta)}]'v_1\|_p \lesssim \bigl[\mathbf{G}^{(\delta)}_{1,N}(g)\bigr]^{p'-1}, \end{equation*} and, by letting $N\to\infty$, the estimate \begin{align}\label{FH} \|g\|_{\mathscr{W}_{p',1/{v_1}}}^{p'}\gtrsim \sum_{k\in\mathbb{Z}}\biggl\{\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{1,k}^{(0)}(t)\bigr|^{p'}\,dt+\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{1,k}^{(1)}(t)\bigr|^{p'}\,dt\biggr\} \end{align} now follows from \eqref{1} and \eqref{0}. 
Similarly, in view of $V_1^-(x)=\frac{1}{2}V_1(x)\ge \frac{1}{2}V_1^+(a(x))=\frac{1}{4}V_1(a(x))$ (for $\delta=1$), \begin{equation*}\label{8'} \|[{F}_{2,N}^{(\delta)}]'v_1\|_p \le\begin{cases} I_2+II_2, &\delta=0,\\ I_2+\bigl[\mathbf{G}^{(1)}_{2,N}(g)\bigr]^{p'-1}, &\delta=1,\end{cases} \end{equation*} where \begin{align*} I_2^p:=\sum_{|k|\le N}\int_{\eta_{k}}^{\eta_{k+1}} v_1^{p}(x) \frac{\Bigl|\bigl[V_1^-(x)\bigr]'\Bigr|^p}{\bigl[V_1^-(x)\bigr]^{2p}} \biggl(\int_{a(x)}^{\eta_{k}} v_1^{-p'}(t)\biggl(\int_{a(x)}^t v_1^{-p'}\biggr)^{1-\delta} [V_1(t)]^{\delta}\bigl|G^{(\delta)}_{2,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx \end{align*} and \begin{equation*} II_2^p:=\sum_{|k|\le N}\int_{\eta_{k}}^{\eta_{k+1}} \frac{v_1^{p}(x)}{\bigl[V_1^-(x)\bigr]^{p}}\bigl[v_1^{-p'}(a(x))\,a'(x)\bigr]^p \biggl(\int_{a(x)}^{\eta_{k}} v_1^{-p'}\bigl|G^{(0)}_{2,k}\bigr|^{p'-1}\biggr)^p\,dx, \end{equation*} we obtain analogously to the previous case (see also \eqref{Gk2} for $\delta=1$): \begin{align*} I_2^p\le&\sum_{|k|\le N}\int_{\eta_{k}}^{\eta_{k+1}} v_1^{p}(x) \frac{\bigl|v_1^{-p'}(x)-v_1^{-p'}(a(x))a'(x)\bigr|^p}{\bigl[V_1^-(x)\bigr]^{p}} \biggl(\int_{a(x)}^{\eta_{k}} v_1^{-p'}\bigl|G^{(\delta)}_{2,k}\bigr|^{p'-1}\biggr)^p\,dx\\ \lesssim& \sum_{|k|\le N}\int_{\eta_{k}}^{\eta_{k+1}} v_1^{-p'}(x) \bigl[V_1^-(x)\bigr]^{-p} \biggl(\int_{a(x)}^{\eta_{k}} v_1^{-p'}\bigl|G^{(\delta)}_{2,k}\bigr|^{p'-1}\biggr)^p\,dx \end{align*} and \begin{equation*} II_2^p\lesssim\sum_{|k|\le N}\int_{\eta_{k}}^{\eta_{k+1}} v_1^{-p'}(x)\bigl[V_1^-(x)\bigr]^{-p} \biggl(\int_{a(x)}^{\eta_{k}} v_1^{-p'}(t)\bigl|G^{(0)}_{2,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx. \end{equation*} By the characterization of the Hardy inequality \cite[p. 
6]{KPS}, \begin{align*} \int_{\eta_{k}}^{\eta_{k+1}} v_1^{-p'}(x)\bigl[V_1^-(x)\bigr]^{-p} \biggl(\int_{a(x)}^{\eta_{k}} v_1^{-p'}(t)\bigl|G^{(0)}_{2,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx \lesssim \mathbb{A}_2^p\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{2,k}^{(\delta)}\bigr|^{p'}, \end{align*} where \begin{equation*} \mathbb{A}_2:=\sup_{\eta_{k-1}<t<\eta_k} \biggl(\int_{\eta_{k}}^{a^{-1}(t)} v_1^{-p'}(x) \bigl[V_1^-(x)\bigr]^{-p}\,dx\biggr)^{1/p}\biggl(\int_t^{\eta_{k}} v_1^{-p'}\biggr)^{1/p'}. \end{equation*} We have \begin{align*} \mathbb{A}_2^p\le&\sup_{\eta_{k-1}<t<\eta_k} \biggl(\int_{\eta_{k}}^{a^{-1}(t)} v_1^{-p'}(x)\biggl(\int_{t}^x v_1^{-p'}\biggr)^{-p}\,dx\biggr)\biggl(\int_t^{\eta_{k}} v_1^{-p'}\biggr)^{p-1}\\ =&\frac{1}{p-1}\sup_{\eta_{k-1}<t<\eta_k} \biggl[\biggl(\int_t^{\eta_{k}} v_1^{-p'}\biggr)^{1-p}- \biggl(\int_t^{a^{-1}(t)} v_1^{-p'}\biggr)^{1-p}\biggr] \biggl(\int_t^{\eta_{k}} v_1^{-p'}\biggr)^{p-1} \le\frac{1}{p-1}. \end{align*} Therefore, \begin{align*} \sum_{|k|\le N}\int_{\eta_{k}}^{\eta_{k+1}} \frac{v_1^{-p'}(x)}{ \bigl[V_1^-(x)\bigr]^{p}} \biggl(\int_{a(x)}^{\eta_{k}}v_1^{-p'}(t)\bigl|G^{(\delta)}_{2,k}(t)\bigr|^{p'-1}\,dt\biggr)^p\,dx\lesssim\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{2,k}^{(\delta)}\bigr|^{p'}= \bigl[\mathbf{G}^{(\delta)}_{2,N}(g)\bigr]^{p'-1}, \end{align*} which, in combination with \eqref{11} and \eqref{0}, yields the estimate $$ \|g\|_{\mathscr{W}_{p',1/{v_1}}}^{p'}\gtrsim \sum_{k\in\mathbb{Z}}\biggl\{\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{2,k}^{(0)}(t)\bigr|^{p'}\,dt+\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{2,k}^{(1)}(t)\bigr|^{p'}\,dt\biggr\}, $$ by letting $N\to\infty$. Thus (see also \eqref{FH}), the required lower bound is confirmed. \end{proof} Based on Lemma \ref{norm}, one can prove the following \begin{lem}\label{plot}Let $1<p<\infty,$ $v_0,v_1\in {\mathscr V}_p(0,\infty)$, $\frac{1}{v_1}\in L^{p'}_\text{\rm loc}(0,\infty)$, and let condition \eqref{S6} be satisfied. 
Then the space $\mathbb{W}_{p',1/{v_1}}$ is dense in $\mathscr{W}_{p',1/{v_1}}.$ \end{lem} \begin{proof} Let $g\in \mathscr{W}_{p',1/{v_1}}$. Then $\|g\|_{\mathscr{W}_{p',1/{v_1}}}<\infty$ by \eqref{qq1}. Therefore, \begin{align}\label{r1} \lim_{n\to\infty}\sum_{|k|\ge n}\biggl\{\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{1,k}^{(0)}(t)\bigr|^{p'}\,dt+\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{2,k}^{(0)}(t) \bigr|^{p'}\,dt\nonumber\\ +\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{1,k}^{(1)}(t)\bigr|^{p'}\,dt+\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{2,k}^{(1)}(t) \bigr|^{p'}\,dt\biggr\}=0. \end{align} Let $g_N:=\chi_{[\eta_{-N},\eta_N]}g$ for some $N\in\mathbb{N}$. Then $g_N\in \mathbb{W}_{p',1/{v_1}}$. Indeed, \begin{equation*} G(|g_N|)^{p'}=\biggl\{\int_0^{\eta_{-N-1}}+\int_{\eta_{-N-1}}^{\eta_{N}}+\int_{\eta_{N}}^{\infty}\biggr\}v_1^{-p'}(x)\biggl(\int_x^{a^{-1}(x)}|g_N|\biggr)^{p'}dx, \end{equation*} where \begin{align*} \int_0^{\eta_{-N-1}}v_1^{-p'}(x)\biggl(\int_x^{a^{-1}(x)}|\chi_{[\eta_{-N},\eta_N]}g|\biggr)^{p'}dx=0= \int_{\eta_{N}}^\infty v_1^{-p'}(x)\biggl(\int_x^{a^{-1}(x)}|\chi_{[\eta_{-N},\eta_N]}g|\biggr)^{p'}dx. \end{align*} The assertion follows from the fact that \begin{equation*} \int_{\eta_{-N-1}}^{\eta_{N}}v_1^{-p'}(x)\biggl(\int_x^{a^{-1}(x)}|\chi_{[\eta_{-N},\eta_N]}g|\biggr)^{p'}dx\le \int_{\eta_{-N-1}}^{\eta_{N}}v_1^{-p'} \biggl(\int_{\eta_{-N-1}}^{\eta_{N+1}}|g|\biggr)^{p'}<\infty. \end{equation*} Denote $G_{i,k}^{(\delta)}(t)=:H_{i,k}^{(\delta)}g(t)$, $i=1,2$. 
We can write \begin{align*} \|g-g_N\|_{\mathscr{W}_{p',1/{v_1}}}^{p'}=& \sum_{i=1,2}\sum_{\delta=0,1}\sum_{k\in\mathbb{Z}}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|H_{i,k}^{(\delta)}g(t)-H_{i,k}^{(\delta)}g_N(t)\bigr|^{p'}\,dt \\=&\sum_{i=1,2}\sum_{\delta=0,1}\sum_{k\in\mathbb{Z}}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|H_{i,k}^{(\delta)}(\chi_{(0,\eta_{-N})}g)(t)+H_{i,k}^{(\delta)}(\chi_{(\eta_{N},\infty)}g)(t)\bigr|^{p'}\,dt \\=&\sum_{i=1,2}\sum_{\delta=0,1}\sum_{k\le -N-1}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{i,k}^{(\delta)}\bigr|^{p'}+\sum_{\delta=0,1}\int_{\eta_{-N-1}}^{\eta_{-N}} v_1^{-p'}\bigl|G_{1,-N}^{(\delta)}\bigr|^{p'}\\&+ \sum_{\delta=0,1}\int_{\eta_{N-1}}^{\eta_{N}} v_1^{-p'}\bigl|G_{2,N}^{(\delta)}\bigr|^{p'}+\sum_{i=1,2}\sum_{\delta=0,1}\sum_{k\ge N+1}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}\bigl|G_{i,k}^{(\delta)}\bigr|^{p'}\\ \le&\sum_{i=1,2}\sum_{\delta=0,1}\sum_{|k|\ge N}\int_{\eta_{k-1}}^{\eta_k} v_1^{-p'}(t)\bigl|G_{i,k}^{(\delta)}(t)\bigr|^{p'}\,dt. \end{align*} This proves the statement of the Lemma, in view of \eqref{r1}. \end{proof} Now we can make an addition to the last assertion of Theorem \ref{T1}. \begin{rem} Let $X=W_p^1$. Then, by Theorem \ref{T1}, $$X^\prime_w=\Bigl\{g\in \mathbb{W}_{p',1/{v_1}}: \|g\|_{X'_w}\approx\|g\|_{\mathscr{W}_{p',1/{v_1}}}<\infty\Bigr\}.$$ It follows that $X'_w\subseteq \mathscr{W}_{p',1/{v_1}}$, and the inclusion can be strict, since there are examples of $g_0\in\mathscr{W}_{p',1/{v_1}}$ for which $g_0\not\in\mathbb{W}_{p',1/{v_1}}$ (see \cite[Remark 5.5]{PSU0}). Indeed, if $g_0\in X'_w$ then, by \cite[Theorem 2.5]{PSU0}, $$\|g_0\|_{X'_w}=J_X(g_0)<\infty\quad \Longleftrightarrow \quad \infty>\mathbf{J}_X(g_0)=\|g_0\|_{\mathbb{W}_{p',1/{v_1}}}=\infty,$$ which is a contradiction. 
Let \begin{multline*}X'_{\textrm{ext}}:=\bigl\{g\in\mathscr{W}_{p',1/{v_1}}\colon \textrm{ there exists } \{g_k\}\subset X'_w \textrm{ such that } \\ \lim_{k\to\infty}\|g-g_k\|_{\mathscr{W}_{p',1/{v_1}}}=0 \textrm{ and } \|g\|_{X'_{\textrm{ext}}}:=\lim_{k\to\infty}\|g_k\|_{X'_w}\bigr\}.\end{multline*} Notice that the definition of $X'_{\textrm{ext}}$ is independent of the choice of $\{g_k\}$. Then $$ X'_{\textrm{ext}}\hookrightarrow\mathscr{W}_{p',1/{v_1}} \textrm{ and } \|g\|_{\mathscr{W}_{p',1/{v_1}}}\le \|g\|_{X'_{\textrm{ext}}}.$$ Conversely, let $g\in\mathscr{W}_{p',1/{v_1}}$. Then, by Lemma \ref{plot}, there exists $\{g_k\}\subset \mathbb{W}_{p',1/{v_1}} \subset X'_w$ such that $\|g\|_{\mathscr{W}_{p',1/{v_1}}}=\lim_{k\to\infty}\|g_k\|_{\mathscr{W}_{p',1/{v_1}}}=\|g\|_{X'_{\textrm{ext}}}$. Hence, $g\in X'_{\textrm{ext}}$ and we have $\mathscr{W}_{p',1/{v_1}}\subset X'_{\textrm{ext}}$ and $\|g\|_{X'_{\textrm{ext}}}=\|g\|_{\mathscr{W}_{p',1/{v_1}}}$. Thus, $$ X'_{\textrm{ext}}=\mathscr{W}_{p',1/{v_1}} $$ with equality of the norms. \end{rem} The next technical statement is used in Corollary \ref{corol} to prove $[X'_w]'_s=\{0\}$. \begin{lem}\label{lemSW} Let $1<p<\infty$, $[c,d]\subset (0,\infty)$ and $h\in L^1([c,d])$. Then for any $\varepsilon>0$ there exists $g\in \mathbb{W}_{p',1/{v_1}}$ such that $|g|=|h|$ on $[c,d]$ and $\|g\|_{\mathscr{W}_{p',1/{v_1}}}<\varepsilon$. \end{lem} \begin{proof} Firstly, we show that for $g$ with ${\rm supp}\,g\subset[c,d]$ it holds that \begin{equation}\label{comp} \|g\|_{\mathscr{W}_{p',1/{v_1}}}^{p'}\lesssim [V_1(c)]^{p'+1}\biggl|\int_c^d \frac{g}{V_1}\biggr|^{p'}+ \int_{c}^{d}v_1^{-p'}(t)V_1^{p'}(t)\biggl|\int_t^d \frac{g}{V_1}\biggr|^{p'}\,dt. 
\end{equation} We start from the functional $\mathcal{G}(g)$, for which it holds, by the triangle inequality, that \begin{align*} \mathcal{G}(g\chi_{[c,d]})\le&\biggl(\int_{a(c)}^d v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_t^{a^{-1}(t)}\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt\biggr)^{1/p'} \\\le& \biggl(\int_{a(c)}^d v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_t^d\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt\biggr)^{1/p'} \\&+\biggl(\int_{a(c)}^{d} v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_{a^{-1}(t)}^d\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt\biggr)^{1/p'}. \end{align*} Since for any $\alpha>0$ \begin{equation}\label{cococo} \int_{a(t)}^t v_1^{-p'}[V_1^+]^{\alpha}\le \int_{a(t)}^t v_1^{-p'}(x)\Bigl[\int_{a(t)}^{b(x)} v_1^{-p'}\Bigr]^{\alpha}\,dx\le [V_1(t)]^{\alpha+1} ,\end{equation} we have \begin{align}\label{coco}& \int_{a(c)}^d v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_t^d\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt\nonumber\\&= \int_{a(c)}^c v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_c^d\frac{g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt+ \int_{c}^d v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_t^d\frac{g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt\nonumber\\ &\lesssim [V_1(c)]^{p'+1}\biggl|\int_c^d \frac{g}{V_1}\biggr|^{p'}+ \int_{c}^{d}v_1^{-p'}(t)V_1^{p'}(t)\biggl|\int_t^d \frac{g}{V_1}\biggr|^{p'}\,dt. 
\end{align} By the substitution $y=a^{-1}(t)$ and in view of \eqref{eq} and $V_1^+(a(y))\le V_1(y)$, \begin{multline*} \int_{a(c)}^{d} v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_{a^{-1}(t)}^d\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt=\int_{a(c)}^{a(d)} v_1^{-p'}(t)\,V_1^{p'}(t)\, \biggl|\int_{a^{-1}(t)}^d\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dt\\\le \int_{c}^{d} v_1^{-p'}(a(y))\,V_1^{p'}(a(y))a'(y)\, \biggl|\int_{y}^d\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dy\\\lesssim \int_{c}^{d} v_1^{-p'}(y)\,V_1^{p'}(y)\, \biggl|\int_{y}^d\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dy= \int_{c}^{d} v_1^{-p'}(y)\,V_1^{p'}(y)\, \biggl|\int_{y}^d\frac{g(x)}{V_1(x)} \,dx\biggr|^{p'}\,dy. \end{multline*} Therefore, the estimate \eqref{comp} for the component $\mathcal{G}(g\chi_{[c,d]})$ of $\|g\|_{\mathscr{W}_{p',1/{v_1}}}^{p'}$ now follows. To prove the same for $\mathbb{G}(g\chi_{[c,d]})$ we write \begin{align*}& \int_0^\infty v_1^{-p'}(t)\biggl|\int_t^{a^{-1}(t)}\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)}\biggl(\int_{a(x)}^tv_1^{-p'}\biggr)dx \biggr|^{p'}\,dt\\&= \int_{a(c)} ^dv_1^{-p'}(t)\biggl|\int_t^{a^{-1}(t)}\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)}\biggl(\int_{a(x)}^tv_1^{-p'} \biggr)dx \biggr|^{p'}\,dt \\&= \int_{a(c)} ^d v_1^{-p'}(t)\biggl|\int_{a(t)}^tv_1^{-p'} (y)\biggl(\int_t^{a^{-1}(y)}\frac{\chi_{[c,d]}(x)g(x)}{V_1(x)}\,dx\biggr)dy \biggr|^{p'}\,dt. 
\end{align*} By the triangle and H\"{o}lder's inequalities, \begin{align*} \mathbb{G}(g\chi_{[c,d]})=& \biggl(\int_{a(c)} ^d v_1^{-p'}(t)\biggl(\int_{a(t)}^tv_1^{-p'} (y)\biggl|\int_t^{a^{-1}(y)}\frac{\chi_{[c,d]}g}{V_1}\biggr|dy \biggr)^{p'}\,dt\biggr)^{1/p'}\\\le& \biggl(\int_{a(c)} ^d v_1^{-p'}(t)\biggl(\int_{a(t)}^tv_1^{-p'} (y)\biggl|\int_t^d\frac{\chi_{[c,d]}g}{V_1}\biggr| dy \biggr)^{p'}\,dt\biggr)^{1/p'}\\&+ \biggl(\int_{a(c)} ^{a(d)} v_1^{-p'}(t)\biggl(\int_{a(t)}^tv_1^{-p'} (y)\biggl|\int_{a^{-1}(y)}^d\frac{\chi_{[c,d]}g}{V_1}\biggr| dy \biggr)^{p'}\,dt\biggr)^{1/p'}\\\le& \biggl(\int_{a(c)} ^d v_1^{-p'}(t)\,V_1^{p'}(t)\biggl|\int_t^d\frac{\chi_{[c,d]}g}{V_1}\biggr| ^{p'}\,dt\biggr)^{1/p'}\\&+ \biggl(\int_{a(c)} ^{a(d)} v_1^{-p'}(t)\,V_1^{p'-1}(t)\biggl(\int_{a(t)}^tv_1^{-p'} (y)\biggl|\int_{a^{-1}(y)}^d \frac{\chi_{[c,d]}g}{V_1}\biggr|^{p'} dy \biggr)\,dt\biggr)^{1/p'}. \end{align*} Since (see \eqref{cococo} and \eqref{eq}) \begin{align*}& \int_{a(c)} ^{a(d)} v_1^{-p'}(t)\,V_1^{p'-1}(t)\biggl(\int_{a(t)}^tv_1^{-p'} (y)\biggl|\int_{a^{-1}(y)}^d \frac{\chi_{[c,d]}(x)g(x)}{V_1(x)}\,dx\biggr|^{p'} dy \biggr)\,dt\\&= \int_{a(a(c))} ^{a(d)} v_1^{-p'} (y)\biggl|\int_{a^{-1}(y)}^d \frac{\chi_{[c,d]}(x)g(x)}{V_1(x)}\,dx\biggr|^{p'} \biggl(\int_y^{a^{-1}(y)} v_1^{-p'}(t)\,V_1^{p'-1}(t) dt\biggr)dy\\&\lesssim \int_{a(c)} ^{a(d)} v_1^{-p'} (y)V_1^{p'}(a^{-1}(y))\biggl|\int_{a^{-1}(y)}^d \frac{\chi_{[c,d]}(x)g(x)}{V_1(x)}\,dx\biggr|^{p'}\,dy\lesssim \int_{c} ^{d} v_1^{-p'} (t)V_1^{p'}(t)\biggl|\int_{t}^d \frac{g(x)}{V_1(x)}\,dx\biggr|^{p'}\,dt \end{align*} the estimate \eqref{comp} for $\mathbb{G}(g\chi_{[c,d]})$ follows by taking into account \eqref{coco}. Secondly, we fix $\varepsilon>0$ and take $$ n>\varepsilon^{-1}\Bigl(\int_c^d v_1^{-p'}\,V_1^{p'}\Bigr)^{1/p'}\int_c^d|h|. 
$$ Let $\{\alpha_i\}_{i=0}^n$ be a partition of $[c,d]$ such that $\int_{\alpha_i}^{\alpha_{i+1}}|h|=n^{-1}\int_c^d|h|$ and suppose $\beta_i\in[\alpha_i,\alpha_{i+1}]$ are such that $\int_{\alpha_i}^{\beta_{i}}|h|=\int_{\beta_i}^{\alpha_{i+1}}|h|$, $i\in\{0,\ldots,n-1\}$. Put $$ \tilde{g}:=V_1|h|\sum_{i=0}^{n-1}\bigl(\chi_{[\alpha_i,\beta_i]}- \chi_{(\beta_i,\alpha_{i+1})}\bigr). $$ Then $\tilde{g}\in \mathbb{W}_{p',1/{v_1}}$, $|\tilde{g}|=|h|$ on $[c,d]$, $\int_{\alpha_i}^{\alpha_{i+1}}\frac{\tilde{g}}{V_1}=0$ for $i=0,\ldots,n-1$ and (see \eqref{comp}) \begin{align*} \|\tilde{g}\|_{\mathscr{W}_{p',1/{v_1}}}^{p'}\lesssim& \int_{c}^{d}v_1^{-p'}(x)V_1^{p'}(x)\biggl|\int_x^d \frac{\tilde{g}}{V_1}\biggr|^{p'}\,dx =\sum_{i=0}^{n-1}\int_{\alpha_i}^{\alpha_{i+1}}v_1^{-p'}(x)V_1^{p'}(x)\biggl|\int_x^{\alpha_{i+1}} \frac{\tilde{g}}{V_1}\biggr|^{p'}\,dx\\ =& \sum_{i=0}^{n-1}\int_{\alpha_i}^{\alpha_{i+1}}v_1^{-p'}(x)V_1^{p'}(x)\biggl|\int_{\alpha_{i}}^x \frac{\tilde{g}}{V_1}\biggr|^{p'}\,dx \le \sum_{i=0}^{n-1}\biggl(\int_{\alpha_{i}}^{\alpha_{i+1}} |h|\biggr)^{p'}\int_{\alpha_i}^{\alpha_{i+1}}v_1^{-p'}V_1^{p'}\\=& n^{-p'}\biggl(\int_{c}^{d} |h|\biggr)^{p'}\sum_{i=0}^{n-1}\int_{\alpha_i}^{\alpha_{i+1}}v_1^{-p'}V_1^{p'}= n^{-p'}\biggl(\int_{c}^{d} |h|\biggr)^{p'}\int_{c}^{d}v_1^{-p'}V_1^{p'}<\varepsilon^{p'}. \end{align*} \end{proof} \begin{cor}\label{corol} Let $f\in\mathfrak{M}(0,\infty)$. If ${\rm meas}\,\{x\in (0,\infty)\colon f(x)\not=0\}>0$ then $\mathbf{J}_{\mathscr{W}_{p',1/{v_1}}}(f)=\infty$. \end{cor} \begin{proof} Let $f\not\equiv 0$. There is a segment $[c,d]\subset(0,\infty)$ such that $c<d$ and ${\rm meas}\,\bigl((c,d)\cap \{x\in (0,\infty)\colon f(x)\not=0\}\bigr)>0$. Fix an arbitrary $\varepsilon>0$. By Lemma \ref{lemSW} (applied with $h\equiv 1$) there exists $\tilde{g}\in\mathbb{W}_{p',1/{v_1}}$ with ${\rm supp}\,\tilde{g}\subset[c,d]$ such that $\|\tilde{g}\|_{\mathscr{W}_{p',1/{v_1}}}<\varepsilon$ and $|\tilde{g}|=1$ on $(c,d)$. 
Then $$ \mathbf{J}_{\mathscr{W}_{p',1/{v_1}}}(f)\ge\frac{\int_0^\infty|f\tilde{g}|}{\|\tilde{g}\|_{\mathscr{W}_{p',1/{v_1}}}}\ge \varepsilon^{-1}\int_c^d|f|. $$ Since $\varepsilon>0$ is arbitrary and $\int_c^d|f|>0$, it follows that $\mathbf{J}_{\mathscr{W}_{p',1/{v_1}}}(f)=\infty$. \end{proof} \section{Main result} We start with auxiliary assertions needed to prove the main result. \begin{lem}\label{Em1} Let $1<p<\infty,$ $v_0,v_1\in {\mathscr V}_p(0,\infty)$, $\frac{1}{v_1}\in L^{p'}_\text{\rm loc}(0,\infty),$ and let condition \eqref{S6} be satisfied. Then \begin{eqnarray}\label{N1} L_{1/{v_0}}^{p'}(0,\infty)\subset \mathscr{W}_{p',1/{v_1}} \end{eqnarray} and \begin{eqnarray}\label{Norm1} \|g\|_{\mathscr{W}_{p',1/{v_1}}}\lesssim \|g\|_{p',1/{v_0}} \end{eqnarray} for any $g\in L_{1/{v_0}}^{p'}(0,\infty).$ \end{lem} \begin{proof} On the strength of \begin{eqnarray}\label{N2} V_1^+(t)\le \int_{t}^{b(x)}v_1^{-p'}\le V_1(x)=2V_1^-(x), \qquad t\leq x\leq a^{-1}(t) \end{eqnarray} it holds that $$ \|g\|_{\mathscr{W}_{p',1/{v_1}}}\lesssim \biggl(\int_0^\infty v_1^{-p'}(t)\biggl(\int_t^{a^{-1}(t)}|g(x)|\,dx \biggr)^{p'}\,dt\biggr)^{1/p'}. $$ Then \eqref{Norm1} will follow from \begin{eqnarray}\label{N3} \biggl(\int_0^\infty v_1^{-p'}(t)\biggl(\int_t^{a^{-1}(t)}|g(x)|\,dx \biggr)^{p'}\,dt\biggr)^{1/p'}\le C\|g\|_{p',1/{v_0}}. \end{eqnarray} Consider the inequality dual to \eqref{N3}, $$ \biggl(\int_0^\infty v_0^{p}(y)\biggl(\int_{a(y)}^y|f| \biggr)^{p}\,dy\biggr)^{1/p}\le C\|f\|_{p,v_1}, $$ which is a consequence of $$ \biggl(\int_0^\infty v_0^{p}(y)\biggl(\int_{a(y)}^{b(y)}|f| \biggr)^{p}\,dy\biggr)^{1/p}\le C_1\|f\|_{p,v_1}. $$ It is known \cite[Theorem 3.1]{PSU0} that $$ C_1\approx \mathcal{A}:=\sup_t\biggl( \int_{a(t)}^{b(t)}v_1^{-p'}\biggr)^{1/p'} \biggl( \int_{b^{-1}(t)}^{a^{-1}(t)}v_0^{p}\biggr)^{1/p}. $$ Put \begin{equation*} V_0(t):=\int_{a(t)}^{b(t)}v_0^p,\qquad V_0^\pm(t):=\int_{\Delta^\pm(t)}v_0^p. \end{equation*} We have by \eqref{N2} $$ V_1^+(t)\le V_1(a^{-1}(t)),\qquad \int_t^{a^{-1}(t)}v_0^p\le \int_t^{b(a^{-1}(t))}v_0^p=:V_0^+(a^{-1}(t)). 
$$ Therefore, by \eqref{3}, $$ \mathcal{A}_a(t):= \biggl( \int_{a(t)}^{b(t)}v_1^{-p'}\biggr)^{1/p'} \biggl( \int_{t}^{a^{-1}(t)}v_0^{p}\biggr)^{1/p}\le V_1(a^{-1}(t))^{1/p'}V_0(a^{-1}(t))^{1/p}=1. $$ Analogously, $$ \mathcal{A}_b(t):= \biggl( \int_{a(t)}^{b(t)}v_1^{-p'}\biggr)^{1/p'} \biggl( \int_{b^{-1}(t)}^tv_0^{p}\biggr)^{1/p}\le V_1(b^{-1}(t))^{1/p'}V_0(b^{-1}(t))^{1/p}=1. $$ Thus, $$ \mathcal{A}\approx\sup_{t>0}[\mathcal{A}_a(t)+\mathcal{A}_b(t)]\lesssim 1 $$ and \eqref{Norm1} follows. \end{proof} \begin{cor}\label{Cor1} Let $1<p<\infty$ and $f\in \mathfrak{D}_{\mathscr{W}_{p',1/{v_1}}}$ {\rm(}see \eqref{D-X}{\rm)}. Under the conditions of Lemma \ref{Em1}, the embedding \eqref{N1} entails $f\in L_{v_0}^p(0,\infty)$ and \begin{equation}\label{v0} \infty>J_{\mathscr{W}_{p',1/{v_1}}}(f)\gtrsim \|f\|_{p,v_0}. \end{equation} \end{cor} \begin{proof} By Lemma \ref{Em1}, \begin{align*} J_{\mathscr{W}_{p',1/{v_1}}}(f)=\sup_{0\not=g\in \mathscr{W}_{p',1/{v_1}}} \frac{\Bigl|\int_0^\infty gf\Bigr|}{\|g\|_{\mathscr{W}_{p',1/{v_1}}}} \gtrsim \sup_{0\not=g\in L_{1/{v_0}}^{p'}(0,\infty)} \frac{\Bigl|\int_0^\infty gf\Bigr|}{\|g\|_{p',1/{v_0}}}=\|f\|_{p,v_0}. \end{align*} \end{proof} \begin{lem}\label{Em2} Let $1<p<\infty$. Under the conditions of Lemma \ref{Em1}, if $J_{\mathscr{W}_{p',1/{v_1}}}(f)<\infty$ then $f=\tilde{f}$ a.e., where $\tilde{f}\in AC_\text{\rm loc}(0,\infty)$ and \begin{equation}\label{dd} \infty>J_{\mathscr{W}_{p',1/{v_1}}}(f)\gtrsim\|\tilde{f}'\|_{p,v_1}. \end{equation} \end{lem} \begin{proof} Let $$ g_\phi(x):=\frac{d\phi}{dx},\qquad \phi\in C_0^\infty(0,\infty). $$ We show that $g_\phi\in \mathscr{W}_{p',1/{v_1}}$. It suffices to prove the inequalities $\mathbb{G}(g_\phi)\lesssim \|\phi\|_{p',1/{v_1}}$ and $\mathcal{G}(g_\phi)\lesssim \|\phi\|_{p',1/{v_1}}$. 
By taking into account the equality \begin{equation}\label{eq} v_1^{-p'}(a(x))a'(x)+v_1^{-p'}(b(x))b'(x)= 2v_1^{-p'}(x), \end{equation} which follows from the equilibrium condition \eqref{2}, we write \begin{align} \int_t^{a^{-1}(t)}\frac{g_\phi(x)}{V_1(x)}\biggl(\int_{a(x)}^tv_1^{-p'} \biggr)\ dx=-\phi(t) +\int_t^{a^{-1}(t)}\phi(x)\biggl\{ \frac{v_1^{-p'}(a(x))a'(x)}{V_1^-(x)}\nonumber\\+ \frac{v_1^{-p'}(x)\int_{a(x)}^tv_1^{-p'}}{[V_1^-(x)]^2}-\frac{v_1^{-p'}(a(x))a'(x)\int_{a(x)}^tv_1^{-p'}}{[V_1^-(x)]^2} \biggr\}\,dx \le |\phi(t)|+5 \int_t^{a^{-1}(t)}\frac{v_1^{-p'}(x)|\phi(x)|}{V_1^-(x)}\,dx. \end{align} Thus, $$ \mathbb{G}(g_\phi)\lesssim \|\phi\|_{p',v_1^{-1}} + \biggl(\int_0^\infty v_1^{-p'}(t)\biggl(\int_t^{a^{-1}(t)}\frac{v_1^{-p'}(x)|\phi(x)|}{V_1^-(x)}\,dx \biggr)^{p'}\,dt\biggr)^{1/p'}. $$ Put $h=v_1^{-1}|\phi|$ and consider the inequality \begin{equation}\label{02} \biggl(\int_0^\infty v_1^{-p'}(t)\biggl(\int_t^{a^{-1}(t)}\frac{v_1^{1-p'}(x)h(x)}{V_1^-(x)}\,dx \biggr)^{p'}\,dt\biggr)^{1/p'}\le C\|h\|_{p'} \end{equation} together with its dual $$ \biggl(\int_0^\infty \frac{v_1^{-p'}(x)}{[V_1^-(x)]^p}\biggl(\int_{a(x)}^x{v_1^{-1}(t)|\psi(t)|}\,dt \biggr)^{p}\,dx\biggr)^{1/p}\le C\|\psi\|_{p}, $$ which follows from $$ \biggl(\int_0^\infty \frac{v_1^{-p'}(x)}{[V_1^-(x)]^p}\biggl(\int_{a(x)}^{b(x)}{v_1^{-1}(t)|\psi(t)|}\,dt \biggr)^{p}\,dx\biggr)^{1/p}\le C_2\|\psi\|_{p}. $$ By the criterion for the boundedness of Hardy--Steklov operators \cite[Theorem 1]{SU}, $$ C_2\approx\mathscr{A}:=\sup_t\biggl(\int_{a(t)}^{b(t)}v^{-p'}_1\biggr)^{1/p'}\biggl(\int_{b^{-1}(t)}^{a^{-1}(t)} \frac{v^{-p'}_1}{[V_1^-]^p}\biggr)^{1/p}. $$ Since $\int_{b^{-1}(t)}^{a^{-1}(t)} {v^{-p'}_1}{[V_1^-]^{-p}}\lesssim V_1^{1-p}(t)$ (see \cite[(5.18)]{PSU1}), we have $\mathscr{A}\lesssim 1$. This yields $\mathbb{G}(g_\phi)\lesssim \|\phi\|_{p',v_1^{-1}}<\infty$. 
Similarly, \begin{multline*} \int_t^{a^{-1}(t)}\frac{g_\phi(x)}{V_1(x)} dx=\frac{\phi(a^{-1}(t))} {V_1^-(a^{-1}(t))}-\frac{\phi(t)}{V_1^-(t)}+ \int_t^{a^{-1}(t)}\phi(x) \frac{v_1^{-p'}(x)-v_1^{-p'}(a(x))a'(x)}{[V_1^-(x)]^2}\,dx\\\le \frac{|\phi(a^{-1}(t))|} {V_1^-(a^{-1}(t))}+\frac{|\phi(t)|}{V_1^-(t)}+ \int_t^{a^{-1}(t)}|\phi(x)| \frac{v_1^{-p'}(x)+ v_1^{-p'}(a(x))a'(x)}{[V_1^-(x)]^2}\,dx. \end{multline*} Since $2v_1^{-p'}(a^{-1}(t))[a^{-1}(t)]'\ge v_1^{-p'}(t)$ (see \eqref{eq} with $x=a^{-1}(t)$) and $V_1(a^{-1}(t))\ge V_1^+(t)=\frac{1}{2}V_1(t)$, we have \begin{align*}& \int_0^\infty v_1^{-p'}(t)\,V_1^{p'}(t)\biggl[\frac{|\phi(a^{-1}(t))|} {V_1^-(a^{-1}(t))}+\frac{|\phi(t)|}{V_1^-(t)}\biggr]^{p'}\,dt\\ \lesssim &\int_0^\infty [a^{-1}(t)]'|\phi(a^{-1}(t))v_1^{-1}(a^{-1}(t))|^{p'}\,dt +\int_0^\infty |\phi(t)v_1^{-1}(t)|^{p'}\,dt\simeq \|\phi\|_{p',v_1^{-1}}^{p'}. \end{align*} Further, \begin{align*}& \int_0^\infty v_1^{-p'}(t)\,V_1^{p'}(t)\biggl(\int_t^{a^{-1}(t)}|\phi(x)| \frac{v_1^{-p'}(x)+ v_1^{-p'}(a(x))a'(x)}{[V_1^-(x)]^2}\,dx\biggr)^{p'}\,dt\\ &\le \int_0^\infty v_1^{-p'}(t)\biggl(\int_t^{a^{-1}(t)}|\phi(x)| \frac{v_1^{-p'}(x)+ v_1^{-p'}(a(x))a'(x)}{V_1^-(x)}\,dx\biggr)^{p'}\,dt\\ &\le 3\int_0^\infty v_1^{-p'}(t)\biggl(\int_t^{a^{-1}(t)}\frac{v_1^{-p'}(x)|\phi(x)|}{V_1^-(x)}\, dx\biggr)^{p'}\,dt \simeq\int_0^\infty v_1^{-p'}(t)\biggl(\int_t^{a^{-1}(t)}\frac{v_1^{1-p'}(x)h(x)}{V_1^-(x)}\,dx\biggr)^{p'}\,dt \end{align*} (see \eqref{02}). Therefore, $\mathcal{G}(g_\phi)\lesssim \|\phi\|_{p',1/{v_1}}<\infty$ and \begin{equation}\label{D1} \|g_\phi\|_{\mathscr{W}_{p',1/{v_1}}}\lesssim \|\phi\|_{p',1/{v_1}}. 
\end{equation} It follows from \eqref{D1} that \begin{align}\label{dds} \sup_{0\not=\phi\in C^\infty_0(0,\infty)}\frac{\Bigl|\int_0^\infty f\phi'\Bigr|}{\|\phi\|_{p',1/{v_1}}}&\lesssim \sup_{0\not=\phi\in C^\infty_0(0,\infty)} \frac{\Bigl|\int_0^\infty fg_\phi\Bigr|}{\|g_\phi\|_{\mathscr{W}_{p',1/{v_1}}}}\nonumber\\ &\le\sup_{g\in \mathscr{W}_{p',1/{v_1}}}\frac{\Bigl|\int_0^\infty fg\Bigr|}{\|g\|_{\mathscr{W}_{p',1/{v_1}}}}=J_{\mathscr{W}_{p',1/{v_1}}}(f)<\infty. \end{align} Put $\Lambda\phi:=\int_0^\infty f\phi'$, $\phi\in C^\infty_0(0,\infty)$. On the strength of \eqref{dds}, $|\Lambda\phi|\lesssim\|\phi\|_{p',1/{v_1}}$. By the Hahn--Banach theorem, there exists an extension $\tilde{\Lambda}\in \bigl(L^{p'}_{1/{v_1}}(0,\infty)\bigr)^\ast$ of $\Lambda$. By the Riesz representation theorem, there exists $u\in L^p_{v_1}(0,\infty)$ such that $\tilde{\Lambda}h=-\int_0^\infty uh$, $h\in L^{p'}_{1/{v_1}}(0,\infty)$. It implies \begin{equation}\label{d1} -\int_0^\infty u\phi=\int_0^\infty f\phi',\qquad\phi\in C_0^\infty(0,\infty), \end{equation} which means that $u$ is the distributional derivative of $f$. Then by \cite[Theorem 7.13]{Leoni} the function $f$ coincides a.e. with a function $\tilde f\in AC_\text{\rm loc}(0,\infty)$ and $u=\tilde{f}^\prime.$ It follows from \eqref{d1} that \begin{align*} J_{\mathscr{W}_{p',1/{v_1}}}(f)\ge\sup_{0\not=\phi\in C^\infty_0(0,\infty)}\frac{\Bigl|\int_0^\infty fg_\phi\Bigr|}{\|g_\phi\|_{\mathscr{W}_{p',1/{v_1}}}} \gtrsim\sup_{0\not=\phi\in C^\infty_0(0,\infty)}\frac{\Bigl|\int_0^\infty \tilde{f}'\phi\Bigr|}{\|\phi\|_{p',1/{v_1}}}= \|\tilde{f}'\|_{p,v_1}. \end{align*} \end{proof} The main result of the paper reads as follows. \begin{thm}\label{theoremMain} Let $1<p<\infty$ and $f\in\mathfrak{D}_{\mathscr{W}_{p',1/{v_1}}}$. 
Then $J_{\mathscr{W}_{p',1/{v_1}}}(f)<\infty$ if and only if $f=\tilde{f}$ a.e., $\tilde{f}\in \mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p$ and $\|f\|_{W^1_p}\approx J_{\mathscr{W}_{p',1/{v_1}}}(f).$ Thus, $$ \mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p=[\mathscr{W}_{p',1/{v_1}}]_w^\prime=[[\mathop{\phantom{W}}\limits^{\circ\circ}\mskip-23muW^1_p]^\prime_w]_w^\prime. $$ \end{thm} \begin{proof} The {\it sufficiency} part of the Theorem follows from Remark \ref{remark}. {\it Necessity}. It is already proved that $\tilde{f}\in W^1_p$ (see \eqref{v0}, \eqref{dd}). Let $E:=\{x\in (0,\infty):|\tilde f(x)|>0\}$. Since $|\tilde f|$ is continuous on $(0,\infty)$, the set $E$ is open. Suppose that ${\rm meas}\,((b,\infty)\cap E)>0$ for any $b\in (0,\infty)$. Then there exists a sequence of segments $\{[a_k,b_k]\}_1^\infty\subset (0,\infty)$ such that $b_k<a_{k+1}$ and $m_k:=\min_{x\in [a_k,b_k]}|\tilde f(x)|>0$. Put $\theta_k:=\frac{1}{k m_k(b_k-a_k)}$. By Lemma \ref{lemSW} there is $g_k\in \mathscr{W}_{p',1/{v_1}}$ such that $\|g_k\|_{\mathscr{W}_{p',1/{v_1}}}<2^{-k}$ and $|g_k|=\theta_k$ on $(a_k,b_k)$. Put $g:=\sum_{k=1}^\infty g_k$. Then $\|g\|_{\mathscr{W}_{p',1/{v_1}}}\le 1$ and $$ \int_0^\infty|\tilde{f}g|\ge \sum_{k=1}^\infty \theta_k m_k(b_k-a_k)=\sum_{k=1}^\infty\frac{1}{k}=\infty, $$ which contradicts $J_{\mathscr{W}_{p',1/{v_1}}}(f)<\infty.$ Similarly, we show that $\tilde{f}(0)=0.$ Thus, $\mathop{\rm supp}\nolimits\tilde{f}\subset [0,\infty)$ is compact. \end{proof} \end{document}
\begin{document} \date{\today} \title{A finite subdivision rule for the n-dimensional torus.} \author{Brian Rushton} \maketitle \pdfbookmark[1]{A FINITE SUBDIVISION RULE FOR THE N-DIMENSIONAL TORUS}{user-title-page} \begin{center} AMS Subject Classification Numbers: 52C26, 52B11 Keywords: Subdivision rules, hypercubes, simplicial, torus \end{center} \begin{abstract} Cannon, Floyd, and Parry have studied subdivisions of the 2-sphere extensively, especially those corresponding to 3-manifolds, in an attempt to prove Cannon's conjecture. There has been a recent interest in generalizing some of their tools, such as extremal length, to higher dimensions. We define finite subdivision rules of dimension $n$, and find an $(n-1)$-dimensional finite subdivision rule for the $n$-dimensional torus, using a well-known simplicial decomposition of the hypercube. \end{abstract} \section{Introduction}\label{Introduction} In its most general form, a subdivision rule is an algorithm for taking a surface with a labeled finite covering by compact sets and recursively refining the elements of the covering into smaller labeled compact sets. Cannon and Swenson have shown \cite{hyperbolic} that Gromov hyperbolic groups with a 2-sphere at infinity give rise in a natural way to a subdivision rule on the sphere. In this setting, the subdivision rules are allowed to act on coverings of the 2-sphere by overlapping compact sets. A \emph{finite} subdivision rule is a simpler form of subdivision rule that other subdivision rules can often be reduced to, in which the coverings have to be tilings. Such subdivision rules have been studied extensively by Cannon, Floyd, and Parry in an attempt to solve the following conjecture \cite{combinatorial}: \textbf{Conjecture:} All Gromov hyperbolic groups with a 2-sphere at infinity act co-compactly and properly discontinuously on $\mathbb{H}^3$ by isometries. Finite subdivision rules are only one tool in studying this conjecture. 
Recently, several researchers have been expanding some of the other tools of Cannon, Floyd and Parry to three dimensions, such as extremal length and sphere-packings (see \cite{Spherepackings}, \cite{Spherelackings}, and \cite{saar}). In this same spirit, the author has investigated subdivision rules of the 3-sphere arising from boundaries of 4-manifolds. The least complicated non-simply connected 4-manifold is the 4-torus $S^1 \times S^1 \times S^1 \times S^1$. With some effort, the method of Chapter 2 of \cite{myself2} for finding a finite subdivision rule for the 3-torus (pictured in part 3. of Figure \ref{SmallToriSubs1}) generalizes to give a subdivision rule for the 4-torus (shown in Figure \ref{FourTorusSubs1}). Notice that the tile types in Figure \ref{FourTorusSubs1} are (as CW-complexes) a cube, another cube, a triangular prism, and a tetrahedron. We can state this more simply by letting $\Delta^n$ represent a simplex of dimension $n$, and $I^m$ a hypercube of dimension $m$. With this notation, these tile types are $I^3, I^2 \times \Delta, I \times \Delta^2,$ and $\Delta^3$. A quick glance at the finite subdivision rule for the 3-torus in Figure \ref{SmallToriSubs1} shows that the tile types here are $I^2, I \times \Delta,$ and $\Delta^2$. These two examples follow a definite pattern. \begin{figure}\label{SmallToriSubs1} \end{figure} \begin{figure}\label{FourTorusSubs1} \end{figure} The main purpose of this paper is to show that this pattern continues for all $n$. In Theorem \ref{CubeTheorem} on page \pageref{CubeTheorem}, we construct an explicit $n$-dimensional finite subdivision rule (which we define below) for the $(n+1)$-torus in which the tile types have the form $I^k \times \Delta^{n-k}$, with one tile type for each $k$ from $0$ to $n$. Parts 1. and 2. of Figure \ref{SmallToriSubs1} show that this pattern holds for $n=1$ and $n=2$, as well. 
\section{Formal Definition of a Subdivision Rule} At this point, it will be helpful to give a concrete definition of subdivision rule. We first recall Cannon, Floyd and Parry's definition of a finite subdivision rule, taken from \cite{subdivision}. \begin{defi} A \textbf{finite subdivision rule} $R$ consists of the following. \begin{enumerate} \item A finite 2-dimensional CW complex $S_R$, called the \textbf{subdivision complex}, with a fixed cell structure such that $S_R$ is the union of its closed 2-cells. We assume that for each closed 2-cell $\tilde{s}$ of $S_R$ there is a CW structure $s$ on a closed 2-disk such that $s$ has at least three vertices, the vertices and edges of $s$ are contained in $\partial s$, and the characteristic map $\psi_s:s\rightarrow S_R$ which maps onto $\tilde{s}$ restricts to a homeomorphism onto each open cell. \item A finite two dimensional CW complex $R(S_R)$, which is a subdivision of $S_R$. \item A continuous cellular map $\phi_R:R(S_R)\rightarrow S_R$ called the \textbf{subdivision map}, whose restriction to every open cell is a homeomorphism. \end{enumerate} \end{defi} Each CW complex $s$ in the definition above (with its given characteristic map $\psi_s$) is called a \textbf{tile type}. As the final part of the definition, they show how finite subdivision rules can act on surfaces (and 2-complexes in general). An $R$-complex for a subdivision rule $R$ is a 2-dimensional CW complex $X$ which is the union of its closed 2-cells, together with a continuous cellular map $f:X\rightarrow S_R$ whose restriction to each open cell is a homeomorphism. We can subdivide $X$ into a complex $R(X)$ by requiring that the induced map $f:R(X)\rightarrow R(S_R)$ restricts to a homeomorphism onto each open cell. $R(X)$ is again an $R$-complex with map $\phi_R \circ f:R(X)\rightarrow S_R$. By repeating this process, we obtain a sequence of subdivided $R$-complexes $R^n(X)$ with maps $\phi_R^n\circ f:R^n(X)\rightarrow S_R$. 
All of the preceding definitions were adapted from \cite{subdivision}, which contains several examples. While in theory, a subdivision rule is represented by a CW-complex, most rules in practice are described by diagrams of the sort shown in Figure \ref{SmallToriSubs1}, part 3. In this paper, we find subdivision rules for the $(n+1)$-dimensional torus which subdivide the $n$-sphere. We define a subdivision rule in higher dimensions in a way analogous to subdivision rules in dimension 2. A \textbf{finite subdivision rule $R$ of dimension $n$} consists of: \begin{enumerate} \item A finite $n$-dimensional CW complex $S_R$, called the \textbf{subdivision complex}, with a fixed cell structure such that $S_R$ is the union of its closed $n$-cells. We assume that for every closed $n$-cell $\tilde{s}$ of $S_R$ there is a CW structure $s$ on a closed $n$-disk such that any two subcells that intersect do so in a single cell of lower dimension, the subcells of $s$ are contained in $\partial s$, and the characteristic map $\psi_s:s\rightarrow S_R$ which maps onto $\tilde{s}$ restricts to a homeomorphism onto each open cell. \item A finite $n$-dimensional subdivision $R(S_R)$ of $S_R$. \item A \textbf{subdivision map} $\phi_R: R(S_R)\rightarrow S_R$, which is a continuous cellular map that restricts to a homeomorphism on each open cell. \end{enumerate} Each CW complex $s$ in the definition above (with its appropriate characteristic map) is called a \textbf{tile type} of $S$. All other portions of the definition (such as $R$-complexes) generalize in the natural way. As for traditional finite subdivision rules, we will often describe an $n$-dimensional finite subdivision rule by the subdivision of every tile type, instead of by constructing an explicit complex. 
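Combinatorially, subdividing a complex amounts to replacing each tile by the tiles appearing in the subdivision of its tile type. The following toy Python sketch (the rule below is made up purely for illustration and is not one of the rules constructed in this paper) shows the mechanism on tile labels:

```python
# A toy subdivision rule on labeled tiles: each tile type is sent to the
# list of tile types appearing in its subdivision (hypothetical rule).
RULE = {"A": ["A", "B", "B"], "B": ["B"]}

def subdivide(tiling, rule=RULE):
    """One subdivision step: replace every tile by its subdivision."""
    return [t for tile in tiling for t in rule[tile]]

def iterate(tiling, steps, rule=RULE):
    """Iterated subdivisions of an initial tiling."""
    for _ in range(steps):
        tiling = subdivide(tiling, rule)
    return tiling

print(iterate(["A"], 2))  # ['A', 'B', 'B', 'B', 'B']
```

The diagrams describing a rule by the subdivision of every tile type encode exactly such a replacement system, together with the gluing data that the labels suppress.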
Given a finite subdivision rule $R$ of dimension $n$, an $R$-complex consists of an $n$-dimensional CW complex $X$ which is the union of its closed $n$-cells together with a continuous cellular map $f:X\rightarrow S_R$ whose restriction to each open cell is a homeomorphism. All tile types are $R$-complexes. We now describe how to subdivide an $R$-complex $X$ with map $f:X\rightarrow S_R$, as described above. Recall that $R(S_R)$ is a subdivision of $S_R$. We simply pull back the cell structure on $R(S_R)$ to the cells of $X$ to create $R(X)$, a subdivision of $X$. This gives an induced map $f:R(X)\rightarrow R(S_R)$ that restricts to a homeomorphism on each open cell. This means that $R(X)$ is an $R$-complex with map $\phi_R \circ f:R(X)\rightarrow S_R$. We can iterate this process to define $R^n(X)$ by setting $R^0 (X) =X$ (with map $f:X\rightarrow S_R$) and $R^n(X)=R(R^{n-1}(X))$ (with map $\phi^n_R \circ f:R^n(X)\rightarrow S_R$) if $n\geq 1$. We will use the term `subdivision rule' throughout to mean a finite subdivision rule of dimension $n$ for some $n$. As for traditional finite subdivision rules, we will describe an $n$-dimensional finite subdivision rule by a diagram giving the subdivision of every tile type, instead of by constructing an explicit complex. Our approach to finding subdivision rules in this paper and in others (see \cite{myself}, \cite{myself2}) is to take the boundary of balls in the universal cover. The universal cover of any manifold can be constructed recursively by taking a copy of the fundamental domain, gluing on fundamental domains to every exposed face of the original, and repeating. More specifically, let $B(0)$ be a single copy of the fundamental domain of an $n$-manifold $M$. Let $B(k)$ be the set of all fundamental domains that are distance $\leq k$ from $B(0)$ (in the word metric). 
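For the $n$-torus itself, with fundamental group $\mathbb{Z}^n$ and the standard generators, a fundamental domain belongs to $B(k)$ exactly when its lattice coordinates have $\ell^1$-norm at most $k$. A brute-force Python count of $|B(k)|$ (our own illustration, not part of the construction):

```python
from itertools import product

def ball_size(n, k):
    """Number of fundamental domains of the n-torus within word-metric
    distance k of B(0): lattice points of Z^n with l1-norm <= k."""
    return sum(1 for v in product(range(-k, k + 1), repeat=n)
               if sum(abs(c) for c in v) <= k)

print(ball_size(2, 1))  # 5: the central square plus its four neighbours
print(ball_size(3, 1))  # 7: the central cube plus its six neighbours
```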
Then for many groups and choices of generating sets, $S(k)=\partial B(k)$ will be a topological $(n-1)$-sphere for all $k$ or for all $k$ sufficiently large. The cell structure from the fundamental domain gives a cell structure to $B(k)$ and thus to $S(k)$. This cell structure is a tiling. Thus, we get a sequence of tilings in which every tile or every group of tiles corresponds to an element of the fundamental group, and the entire group is represented at some point. We have drawn $S(1),S(2)$ and $S(3)$ for the 3-dimensional torus with the standard choice of generators in Figures \ref{SOne} to \ref{SThree}, shown in 3-space and also as a combinatorial tiling. However, this sequence of tilings for a manifold is not necessarily created by a subdivision rule, because faces and edges are created and later covered up. To get a recursive structure, similar to that of hyperbolic 3-manifolds, we need to find a way to represent $S(k)$ (or a slightly modified version of it) as a subdivision of $S(k+1)$ (or a modified version of it). \begin{figure} \caption{S(1)} \label{SOne} \end{figure} \begin{figure} \caption{S(2)} \label{STwo} \end{figure} \begin{figure} \caption{S(3)} \label{SThree} \end{figure} \section{The $n$-torus} We now show how to obtain a subdivision rule for the $n$-dimensional torus. We will make the informal language of the introduction more rigorous. In the discussion that follows, let $I=[0,1]$, the unit interval. A $q$-cube is $I^q$, and a $p$-simplex is the convex hull of $p+1$ points in general position. Thus, a 1-cube is a line segment, a 2-cube is a square, and a 3-cube is a (standard) cube; a 1-simplex is a line segment, a 2-simplex is a triangle, a 3-simplex is a tetrahedron, etc. \begin{thm}\label{CubeTheorem} The $n$-torus has a subdivision rule with $n$ tile types. The tile types are $(p-1)$-simplices cross $q$-cubes, where $1\leq p\leq n$ and $q=n-p$. Each such tile is subdivided into one $(p-1)$-simplex cross a $q$-cube and $2q$ $p$-simplices cross $(q-1)$-cubes.
\end{thm} Before we begin the proof, recall Figures \ref{SmallToriSubs1} and \ref{FourTorusSubs1} to see the tile types for $n=1,2,3$ and $4$. \begin{proof} The fundamental domain of the $n$-torus $\mathbb{T}^n=(S^1)^n$ is a hypercube of dimension $n$. If the generators of the fundamental group are $y_1,...,y_n$, then every element of the fundamental group can be written uniquely as $y_1^{a_1} y_2^{a_2} ... y_n^{a_n}$. Because our group is free abelian, the Cayley graph of the subgroup generated by any subset of the generators is contained in the Cayley graph of the fundamental group. Thus we can build the universal cover of these manifolds inductively from the universal covers of manifolds corresponding to subgroups. We now describe how to explicitly construct the subdivision rule. It may help to follow along with the examples $n=1,2,3$ and 4 starting on page \pageref{Example}. To construct the universal cover, we start with a single $n$-cube (i.e. $I^n$) and begin gluing on other cubes. Faces (or cells of codimension one) correspond to generators and inverses of generators. Assume an element represented by a cube is being glued on in some stage of creating the universal cover. Assume the element can be written as $y_{k_1}^{a_1}y_{k_2}^{a_2} ... y_{k_p}^{a_p}$, where this is a representation of minimal word length (so $1\leq k_1 < k_2 <...< k_p \leq n$ and $a_i \neq 0$). Then this element is contained in a subgroup of rank $p$. Let $q=n-p$. Then gluing on the cube corresponding to this element is accomplished by identifying some of its boundary to the previous stage of the universal cover. If we write the cube $I^n$ as $I^p \times I^q$, the boundary will be $\partial I^p \times I^q \cup I^p \times \partial I^q$. Now, because the group element has $p$ geodesic paths into it (for instance, if $a_1>0$, going to $y_{k_1}^{a_1-1} y_{k_2}^{a_2} ... y_{k_p}^{a_p}$ and then going through the $y_{k_1}$-face to our element), our cube representing this element is glued onto $p$ faces in the previous stage at once. Each of the $p$ faces represents a generator, and if one generator is represented, its inverse is not, meaning that no pair of opposite faces is in the set of faces glued onto the universal cover. The structure of the $n$-cube is such that every set of $p$ faces containing no opposing pairs determines a unique $q$-cell which is common to all of them (so, for instance, in a 3-cube, three non-opposing faces intersect in a vertex, two in a line, and one in a square). If we project $I^n \subseteq \mathbb{R}^n$ down onto the subspace orthogonal to this cell, we see that this set of faces projects to the star of a vertex in $\partial I^p$. Call this star $S$. Note also that every vertex in the $p$-cube has an opposite vertex, and the star of a vertex and the star of its opposite have disjoint interiors and cover $\partial I^p$. Call the star of the opposite vertex $S^*$. Thus, in gluing on $I^n$ via $\partial I^n$, we glue the boundary onto $A=S\times I^q$. The faces of $\partial I^n$ that are not glued to anything can be written as $B=B_1\cup B_2= S^* \times I^q \cup I^p \times \partial I^q$. Recall that, to find a subdivision rule, we look at $S(k)$ (i.e. all exposed faces at stage $k$ of constructing the universal cover) and $S(k+1)$ (all exposed faces at stage $k+1$), and try to find the first as a subset of the second. Therefore, our goal is to find a cell structure for $A$ and $B$ such that $B$ is a refinement or subdivision of $A$. We use the standard simplicial decomposition of the $p$-cube (found, for instance, in \cite{Rudin}, Exercise 10.18), which we now describe. $I^p$ is covered by the $p!$ simplices $\{[0,e_{\sigma(1)},e_{\sigma(1)}+e_{\sigma (2)},e_{\sigma(1)}+e_{\sigma (2)}+...+e_{\sigma(p)}] \mid \sigma\in \Sigma_p \}$, any two of which have disjoint interiors.
Here, $e_i$ is the unit vector in the $i$-th direction. The symbol $[p_0,p_1,...,p_k]$ is defined to be $\tau(Q^k)$, where $\tau$ is the affine map $\tau(x_1,...,x_k)=p_0+\Sigma x_i(p_i-p_0)$, and $Q^k$ is the standard simplex $\{(x_1,...,x_k) \mid 0\leq x_i \text{ for all } i,\ x_1+...+x_k \leq 1\}$. Each of these simplices has sub-simplices defined by deleting intermediate terms (so $[0]\subseteq [0,e_1] \subseteq [0,e_1,e_2]$, for instance). Recall that switching two terms in the simplex (i.e. changing $[p_0, p_1, p_2]$ to $[p_2,p_1,p_0]$) gives a different map from $Q^k$ with opposite orientation but the same image as the original map. If $\tau_1$ and $\tau_2$ are the maps corresponding to the original simplex and the `flipped' simplex, then $\tau_2^{-1}\tau_1$ is an orientation-reversing simplicial map. We use this to define an involution on our simplicialized cube. Define this map by switching $0$ and $e_{\sigma(1)}+e_{\sigma (2)}+...+e_{\sigma(p)}$ in every simplex. This is a simplicial map that is the identity on all subsimplices containing neither 0 nor $e_{\sigma(1)}+e_{\sigma (2)}+...+e_{\sigma(p)}$. Any subsimplex that contains one of those points is sent to an opposing subsimplex that contains the other point. The existence of this map shows, in particular, that the set of all closed simplices in $\partial I^p$ containing 0 is simplicially isomorphic to the set of all closed simplices in $\partial I^p$ containing $e_{\sigma(1)}+e_{\sigma(2)}+...+e_{\sigma(p)}$. Thus, if $S$ and $S^*$ are given the simplicial structure they inherit from $I^p$, they are isomorphic, and so $A$ and $B_1$ have the same cell structure. If $q=0$, then $B_1=B$, so $A$ and $B$ have the same cell structure, and our subdivision rule can be the identity. If $q\neq 0$, it is slightly more difficult. We still give $A$ and $B_1$ the simplicialized structure explained above, and give $B_2$ the structure of $I^p \times \partial I^q$, where $I^p$ is given the simplicial structure described earlier.
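The simplicial decomposition of $I^p$ and its involution can be verified computationally. The sketch below (our own check) lists the $p!$ simplices by their vertex sets and confirms that the coordinatewise map $x \mapsto 1-x$, which swaps $0$ and $e_{\sigma(1)}+\cdots+e_{\sigma(p)}$, permutes the decomposition.

```python
from itertools import permutations
from math import factorial

def kuhn_simplices(p):
    """Vertex sets of the p! simplices [0, e_s(1), e_s(1)+e_s(2), ...]."""
    simplices = []
    for sigma in permutations(range(p)):
        v = [0] * p
        verts = [tuple(v)]          # start at the origin
        for i in sigma:
            v[i] = 1                # add e_{sigma(j)} at each step
            verts.append(tuple(v))
        simplices.append(frozenset(verts))
    return simplices

p = 3
simps = kuhn_simplices(p)
assert len(set(simps)) == factorial(p)   # p! distinct simplices

flip = lambda vert: tuple(1 - c for c in vert)   # swaps 0 and (1,...,1)
flipped = {frozenset(map(flip, s)) for s in simps}
assert flipped == set(simps)   # the involution permutes the decomposition
print("decomposition verified for p =", p)
```

Flipping a vertex chain for $\sigma$ produces the chain for the reversed permutation, which is why the set of simplices is preserved while individual simplices containing $0$ are exchanged with those containing the far vertex.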
We show that $B$ contains $A$ as a subcomplex, with $\partial A \subseteq \partial B$. In the discussion that follows, it will be helpful to follow along with Figures \ref{PreCone}-\ref{FinalCone} for the case $p=2$, $q=1$. Figure \ref{FourTorusSubs} gives more examples with less explanation. So, pick a $(p-1)$-simplex $\Delta^{p-1}$ in $S \subseteq \partial I^p$. If we consider the center vertex of $S$ as $0$ and the center vertex of $S^*$ as $e_{\sigma(1)}+e_{\sigma (2)}+...+e_{\sigma(p)}$, then there is a unique $p$-simplex $\Delta^p$ defined by adjoining $e_{\sigma(1)}+e_{\sigma (2)}+...+e_{\sigma(p)}$ to $\Delta^{p-1}$. So, our goal is to show that $\Delta^{p-1}\times I^q\subseteq \Delta^{p-1}\times I^q \cup \Delta^p \times \partial I^q$. Patching together these simplices will show that $A\subseteq B_1\cup B_2=B$. To do so, note that $I^q$ is just the cone over the boundary of $I^q$ (as a set, not as a complex). Thus, we look at $\Delta^{p-1} \times \partial I^q \times I$, which we will eventually collapse. Each face in $\partial I^q$ is a $(q-1)$-cube. Given a specific face, we can embed the product $\Delta^{p-1}\times I^{q-1} \times I$ in $\mathbb{R}^{p+q-1}$ as $\{(x_1,...,x_{p-1},y_1,...,y_{q-1},z) \mid 0\leq x_i$ for $1\leq i \leq p-1$, $0\leq y_j \leq 1$ for $1\leq j \leq q-1$, $0 \leq z \leq 1$, $x_1+x_2+...+x_{p-1}\leq 1\}$. Call this set $C$. Define a family of maps $f_t:C \rightarrow C$ by $$f_t(x_1,...,x_{p-1},y_1,...,y_{q-1},z)=(x_1,...,x_{p-1},y_1,...,y_{q-1},z(1+(x_1+...+x_{p-1}-1)\frac{t}{2})).$$ This defines an invertible homotopy (basically dragging down the corner of the top copy of the simplex along the $z$-axis). Note that $$f_1(C)=\{(x_1,...,x_{p-1},y_1,...,y_{q-1},z) \mid z \leq \frac{1}{2}+\frac{x_1+...+x_{p-1}}{2}\}$$ and this is the same as $$\{(x_1,...,x_{p-1},y_1,...,y_{q-1},z) \mid x_1+...+x_{p-1}+(2-2z)\geq 1\}.$$ The closure of its complement in $C$ is $I^{q-1}$ cross a $p$-simplex defined by $x_1+...+x_{p-1}+2(1-z) \leq 1$, where $0 \leq x_i, z \leq 1$.
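The identities for $f_1(C)$ can be spot-checked numerically. The sketch below is our own (with $p=3$, $q=2$; the $y$-coordinates are omitted since $f_t$ fixes them): it samples points of $C$ and confirms that $f_1$ lands in $\{z \leq \frac{1}{2}+\frac{x_1+\cdots+x_{p-1}}{2}\}$, with equality reached on the top face $z=1$.

```python
import random

random.seed(0)
p = 3  # so the x-part lies in the (p-1)-simplex {x_i >= 0, x_1 + x_2 <= 1}

def f(t, x, z):
    """The homotopy f_t, acting only on the z-coordinate."""
    return z * (1 + (sum(x) - 1) * t / 2)

for _ in range(10000):
    while True:  # rejection-sample x in the (p-1)-simplex
        x = [random.random() for _ in range(p - 1)]
        if sum(x) <= 1:
            break
    z = random.random()
    # image of f_1 satisfies the stated bound ...
    assert f(1, x, z) <= 0.5 + sum(x) / 2 + 1e-12
    # ... with equality on the top face z = 1
    assert abs(f(1, x, 1.0) - (0.5 + sum(x) / 2)) < 1e-12
print("f_1(C) lies in {z <= 1/2 + (x_1+...+x_{p-1})/2}")
```

Since the scaling factor $1+(\Sigma x_i-1)t/2$ is positive for $0\leq t\leq 1$, each $f_t$ is invertible, matching the claim that the family is an invertible homotopy.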
Thus we can write $C$ as the union of $f_1(C)\cong C$ and $C\setminus f_1(C)\cong \Delta^p \times I^{q-1}$. The boundary of $f_1(C)\cup(C\setminus f_1(C))$ is clearly the same as that of $C$, just with a more complex cell structure, i.e. a subdivision. If we now collapse to get the cone structure mentioned earlier, the simplex we just obtained is not affected, and we still have a subdivision. Patching together all faces in $\partial I^q$ shows that $\Delta^{p-1}\times I^q\subseteq \Delta^{p-1}\times I^q \cup \Delta^p \times \partial I^q$, as desired; since the homotopy fixed all $y$-coordinates, the subdivisions on each face of $\partial I^q$ match up. Finally, we glue together all the simplices to show that $A\subseteq B_1 \cup B_2$. Note that the center vertex of $S$ was the corner of each simplex sent to the origin in our embedding above, so when we glue our simplices together, all of those vertices are identified, and we have a well-defined subdivision rule. One issue remains. We are assuming that $A$ is formed of $(p-1)$-simplices crossed with $q$-cubes. We also have a structure for $B$; but in the next stage of subdivision (or of constructing the universal cover), the new $A$'s are formed from the old $B$'s. Do they have the right structure? Note that $B_1$ was given the same structure as $A$, and the subtiles of $B_1$ correspond to those elements of $\mathbb{Z}^n$ that stay in the same subgroup of rank $p$. $B_2$ represents those elements that land in a subgroup of rank $p+1$, and these are given the structure of $n$-cubes split into $p$-simplices cross $(q-1)$-cubes, with the $p$-simplices grouped about the correct vertex. So, the cell structure is consistent. \end{proof} We now find the subdivision rules explicitly for $n=1,2,3$ and $4$. \label{Example} For $n=1$, the universal cover is a line, $B(k)$ is $2k+1$ line segments, and $S(k)$ is two points. The subdivision rule is shown in Figure \ref{SmallToriSubs}.
\begin{figure}\label{SmallToriSubs} \end{figure} Note that the only tile type is a point (i.e. a 0-simplex cross a 0-cube), which is subdivided into one 0-simplex cross a 0-cube. For $n=2$, the fundamental domain is a square, $B(k)$ is a topological disk, and $S(k)$ is a topological circle. The subdivision rule is shown in Figure \ref{SmallToriSubs}. Type A is a line (i.e. a 0-simplex cross a 1-cube). It is subdivided into one line (a 0-simplex cross a 1-cube) and 2 more lines (1-simplices cross 0-cubes), just as the formula predicts. Type B is a line (a 1-simplex cross a 0-cube), and represents half of a group element. Two B tiles form the star of a vertex in the boundary of the 2-cube (a square), and the subdivision rule for type B is the identity, just as the formula predicts. As you can see in Figure \ref{TorusUniversal}, A tiles correspond to the four `ends' of $B(k)$, or, the group elements contained in a subgroup generated by exactly one of the standard generators, while B tiles correspond to elements that must be written using both generators. Notice how two neighboring $B$ tiles form a corner that is covered up by one square fundamental domain. \begin{figure}\label{TorusUniversal} \end{figure} For $n=3$, we have the 3-torus, whose universal cover is shown being constructed in Figures \ref{SOne} to \ref{SThree}. Notice in these figures that new cubes are glued onto a single face, two neighboring faces, or three faces forming a corner. These correspond to elements whose minimal word-length representations use one, two, or three generators, respectively. Type A (corresponding to a single face) is a square (or 0-simplex cross a 2-cube), and is subdivided into one square (a 0-simplex cross a 2-cube) and 4 other squares (or 1-simplices cross 1-cubes). Type B is a square (thought of as a 1-simplex cross a 1-cube). It is subdivided into a square (a 1-simplex cross a 1-cube) and 2 triangles (or 2-simplices cross 0-cubes).
Two type B tiles correspond to the star of a vertex in the boundary of the 2-cube, which is then crossed with $I$. Note that this tile shows us what happens with the homotopy portion of Theorem \ref{CubeTheorem}. We start with $S$, the star of a vertex in the boundary of $I^p=I^1$, with a simplicial structure (namely, two edges of a square sharing a vertex, each edge considered as a 1-simplex). We cross this with $I^q=I^1$ to get two squares sharing a face. This is $A$. Then, we write $A$ as a quotient of $S\times \partial I^1\times I$, so we get Figure \ref{PreCone}. \begin{figure} \caption{The set $S\times \partial I^1\times I$, where $S$ is the union of two 1-simplices.} \label{PreCone} \end{figure} On each component (one corresponding to $S\times\{0\}\times I$, one corresponding to $S\times\{1\}\times I$), we pull down the center line by our homotopy to get Figure \ref{Homotopy}. Collapsing $\partial I^1\times \{0\}=\{0,1\}\times\{0\}$ to a point to get a cone, we have the image in Figure \ref{FinalCone}. \begin{figure}\label{Homotopy} \end{figure} \begin{figure}\label{FinalCone} \end{figure} Type C is a triangle (a 2-simplex cross a 0-cube), which is subdivided by the identity. This corresponds to a cube glued onto three adjoining faces. Each tile of type C represents one sixth of a group element, and six together form the star of a vertex in $\partial I^3$ when it is given a simplicial structure. Several subdivisions of an A tile are shown in Figure \ref{CircleTorus}. This picture was created with Ken Stephenson's CirclePack \cite{Circlepak}. The pictures are only combinatorial subdivisions of each other; they cannot be overlaid with vertices matching up. This is because the subdivision rule is not conformal. For more on the connection between conformality and circle packings, see \cite{French}.
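The tile-type counts of Theorem \ref{CubeTheorem} for the 3-torus can be iterated mechanically: each step sends A to one A and four B's, B to one B and two C's, and fixes C. The sketch below (our own check) iterates these counts from a single A tile and verifies the closed form $(1,\,4k,\,4k(k-1))$ after $k$ subdivisions; in particular, the total number of tiles grows quadratically, matching the quadratic growth of spheres in $\mathbb{Z}^3$.

```python
def subdivide_3torus(a, b, c):
    """One subdivision step for 3-torus tile counts.
    A -> 1 A + 4 B;  B -> 1 B + 2 C;  C -> 1 C   (the 2q counts for q = 2, 1, 0)."""
    return a, b + 4 * a, c + 2 * b

counts = (1, 0, 0)  # one type A tile
history = [counts]
for _ in range(5):
    counts = subdivide_3torus(*counts)
    history.append(counts)
print(history)

# closed form: after k steps the counts are (1, 4k, 4k(k-1))
for k, (a, b, c) in enumerate(history):
    assert (a, b, c) == (1, 4 * k, 4 * k * (k - 1))
```

The matrix of this linear recursion is unipotent (ones on the diagonal), which is why the growth is polynomial rather than exponential, in contrast to what one expects for hyperbolic manifolds.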
\begin{figure} \caption{Several subdivisions of a type A tile for the 3-torus.} \label{CircleTorus} \end{figure} Finally, the tile types for the four-torus are shown in Figure \ref{FourTorusSubs}. \begin{figure} \caption{The tile types for the four-torus.} \label{FourTorusSubs} \end{figure} Note the homotopies in each of these tiles. In the first tile type, the homotopy is shrinking the cube in the center and dragging the cell structure with it. In the second, we only shrink down a 2-dimensional face, again dragging everything along with it. In the third tile type, we shrink an edge, dragging along the cell structure with it. Finally, in the fourth tile type, there is nothing to drag. \section{Future Work} We hope to find many more subdivision rules for higher-dimensional manifolds, especially hyperbolic manifolds. Cannon and Swenson have shown \cite{hyperbolic} that hyperbolic $n$-manifolds have some sort of subdivision rule. We hope to find more explicit examples. Also, as mentioned in the introduction, Hersonsky and others have studied extremal length for three dimensional tilings (see \cite{saar}, \cite{Spherepackings}, and \cite{Spherelackings}). Do 3-dimensional subdivision rules for hyperbolic 4-manifolds satisfy a condition on extremal length similar to conformal 2-dimensional subdivision rules? \end{document}
Cellular and Molecular Bioengineering, March 2010, Volume 3, Issue 1, pp 3–19
Application of Population Dynamics to Study Heterotypic Cell Aggregations in the Near-Wall Region of a Shear Flow
Yanping Ma, Jiakou Wang, Shile Liang, Cheng Dong, Qiang Du
First Online: 09 March 2010
Our research focused on polymorphonuclear neutrophil (PMN) tethering to vascular endothelial cells (EC) and the subsequent melanoma cell emboli formation in a shear flow, an important process in tumor cell extravasation from the circulation during metastasis. We applied a population balance model based on the Smoluchowski coagulation equation to study the heterotypic aggregation between PMNs and melanoma cells in the near-wall region of an in vitro parallel-plate flow chamber, which simulates in vivo cell–substrate adhesion in the vasculature, combining mathematical modeling and numerical simulation with experimental observations. To the best of our knowledge, this is the first multiscale near-wall aggregation model that incorporates the effects of both cell deformation and general ratios of heterotypic cells on the cell aggregation process. Quantitative agreement was found between numerical predictions and in vitro experiments. The effects of intrinsic binding molecule properties, near-wall heterotypic cell concentrations, and cell deformation on the coagulation process are discussed. Several parameter identification approaches are proposed and validated; these, in turn, demonstrate the importance of the reaction coefficient and the critical bond number for the aggregation process.
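As a cartoon of the population-balance idea (our own illustration, not the authors' full model; all parameter values below are invented for the sketch): if tethered PMN monomers capture tumor-cell monomers at constant near-wall concentration C_T with an effective coagulation kernel beta_PT, each tethered PMN converts to a doublet at rate beta_PT * C_T, so the monomer count decays exponentially and the doublet count follows in closed form. A forward-Euler integration reproduces this:

```python
import math

beta_PT = 2.0e-7   # hypothetical kernel, mL/s (illustrative value only)
C_T     = 1.0e6    # tumor-cell concentration, cells/mL (illustrative)
N_P0    = 100.0    # tethered PMN monomers initially in the field of view

rate = beta_PT * C_T   # per-PMN capture rate, 1/s

def simulate(T, dt):
    """Forward Euler for dN_P/dt = -rate * N_P; doublets N_PT = N_P0 - N_P."""
    n_p, t = N_P0, 0.0
    while t < T - 1e-12:
        n_p += dt * (-rate * n_p)
        t += dt
    return N_P0 - n_p   # doublets formed by time T

exact = N_P0 * (1 - math.exp(-rate * 5.0))   # closed-form doublet count
approx = simulate(5.0, 1e-3)
print(approx, exact)   # both ~ 63.2
```

The actual model in the paper couples such balance equations to a near-wall collision kernel and an adhesion efficiency; this sketch only shows the bookkeeping that a single effective rate implies.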
Keywords: Melanoma cells; Leukocytes; Endothelium; Collisions; Aggregates; Probabilities; Cell adhesion; Shear conditions
Nomenclature:
A — The reaction constant, a combination of Ac, Nr, Nl, Kon and Koff (s−1)
Ac — The contact area between a PMN and a tumor cell (μm2)
Ca, We, Re — The Capillary number, the Weber number and the Reynolds number
Colln — The collision number between the two kinds of cells (events)
CP, CT, C — The concentration of PMNs, tumor cells and the bulk cell concentration, respectively, in the near-wall region (cells mL−1)
The diameter of an undeformed PMN cell (μm)
Acceleration due to gravity (m s−2)
H, Hdep, h — The height of a deformed PMN cell attached to the substrate; the height defined by the depth of field of the microscope objective used in the experimental observations; and the distance from the substrate to the center of the cell, which represents the position of the cell (μm)
Iini, Itf — The increasing multiple of the initial condition or tethering frequency
The incoming flux to the collision region near a PMN, per concentration (kg−1 m3)
Kon, Kon(n) — The forward reaction rate per unit density for bond formation (s−1 μm−2)
Koff, Koff(n) — The backward reaction rate per cell for bond breakage (s−1)
Outward unit normal (vector) of the collision region
The approximation of the smallest number of bonds required for firm adhesion (bonds)
Nr, Nl — The concentration of receptors and ligands on cells (μm−2)
NPT, NP — The number of tethered PMN–tumor cell doublets and tethered PMN monomers on the substrate (cells)
Pn(t), Pn — The probability of having n bonds
rp, rt — The radii of an undeformed PMN cell and of a tumor cell (μm)
The tethering frequency of cells, including firmly adhered cells and rolling cells (events s−1 per view)
vavg, vrel — The average settling velocity for cells above height Hdep and the relative velocity (difference between velocities) of the two kinds of cells (μm s−1)
vs0, vs, vc — The free settling velocity, settling velocity and convection velocity of cells in the parabolic flow profile (μm s−1)
The aggregation percentage
βPT, β(i,j;i′,j′) — The coagulation kernel of tumor cells in the near-wall region to the tethered PMNs (μm3 s−1)
\( \hat{\beta }_{\text{PT}} ,\hat{\beta } \) — The collision rate between tethered PMNs and tumor cell monomers (μm3 s−1)
\( \dot{\gamma } \) — The shear rate in the near-wall region (s−1)
εPT — The adhesion efficiency between tethered PMNs and tumor cell monomers
μ — The viscosity of the fluid flow (Poise)
ρm, ρt — The fluid density and the tumor cell density (kg m−3)
Membrane tension (N m−1)
The concentration of cells within the region of height Hdep
Ω — The collision region
Acknowledgments: The authors thank Dr. Meghan Hoskins for providing simulation data on hydrodynamic force. This work was supported by National Institutes of Health grant CA-125707 and National Science Foundation grant CBET-0729091.
Computing the Collision Rate \( \hat{\beta }_{\text{PT}} \)
We use spherical coordinates to express all the related parameters so as to evaluate the integral in Eq. (8) with a given velocity profile. In our rectangular coordinates, we suppose the chamber cross section (Fig. 3) is in the yz-plane and the x-axis points toward the reader, so that $$ v = \left( {v_{x} ,v_{y} ,v_{z} } \right)\quad {\text{and}}\quad v_{x} = 0,\;v_{y} = v_{c} ,\;v_{z} = - v_{s} .
$$ We take the standard spherical coordinate system with the origin at the center of the sphere on which the arc-shaped PMN lies, the two bottom points of the PMN having spherical coordinates \( \left( {r,\varphi_{0} ,{\frac{\pi }{2}}} \right) \) and \( \left( {r,\varphi_{0} ,{\frac{3\pi }{2}}} \right). \) Assuming that the deformable PMN preserves its volume \( V = {\frac{4}{3}}\pi r_{\text{p}}^{3} , \) where r p is the radius of an undeformed PMN, we have $$ V = {\frac{2}{3}}\pi \left( {1 - { \cos }\,\varphi_{0} } \right)r^{3} $$ which gives a closed form relating H, r and \( \varphi_{0} \) via the following explicit formulae: $$ r = {\frac{V}{{\pi H^{2} }}} + {\frac{H}{3}}\quad {\text{and}}\quad \varphi_{0} = { \arccos }\left( {1 - {\frac{H}{r}}} \right). $$ Now consider a tumor cell which is colliding with this PMN. Assume that the contact point has spherical coordinates (r + r t, φ, θ), where r t is the radius of a tumor cell. Thus h, the distance between the center of the tumor cell and the substrate, is $$ h = \left( {r + r_{\rm t} } \right){ \cos }\,\varphi - r\,{ \cos }\,\varphi_{0} . $$ Our task is to compute the coagulation kernel under different flow conditions. For any given parameters \( \dot{\gamma } \) and μ, we first compute H by the fitted function f in Eq. (16), and then compute φ0 and r. We may then plug all these parameters and the flow velocity profile into Eq. (8) to compute the kernel by estimating the integral over the collision region Ω. That is, we estimate $$ \hat{\beta }_{\text{PT}} = - \int\limits_{0}^{2\pi } {\int\limits_{0}^{\pi } {F(\varphi ,\theta )|_{{F \le 0,h \ge r_{\text{t}} }} d\varphi d\theta } } $$ where the integrand is given by $$ F\left( {\varphi ,\theta } \right) = { \sin }\,\varphi (r + r_{\text{t}} )^{2} \left( {{ \sin }\,\varphi \,{ \sin }\,\theta v_{y} + { \cos }\,\varphi v_{z} } \right). $$ This integral is evaluated numerically via quadrature rules.
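A minimal version of this quadrature can be sketched as follows, taking constant velocities v_y = v_c and v_z = −v_s in place of the actual near-wall profile (a simplification we introduce for the sketch). As a consistency check: with v_c = 0, φ0 = π/2 and r = r_t = 1, the masked integral has the closed form π·v_s·(r+r_t)²·(1 − (r_t/(r+r_t))²) = 3π, which the midpoint rule reproduces.

```python
import math

def beta_hat(r, r_t, phi0, v_c, v_s, n=2000):
    """Midpoint-rule estimate of
       beta = -∫∫ F(phi, theta) over the region {F <= 0, h >= r_t},
    with F = sin(phi) (r+r_t)^2 (sin(phi) sin(theta) v_y + cos(phi) v_z),
    v_y = v_c, v_z = -v_s, and h = (r+r_t) cos(phi) - r cos(phi0)."""
    total = 0.0
    dphi, dth = math.pi / n, 2 * math.pi / n
    for i in range(n):
        phi = (i + 0.5) * dphi
        h = (r + r_t) * math.cos(phi) - r * math.cos(phi0)
        if h < r_t:          # tumor-cell center below the substrate clearance
            continue
        s, c = math.sin(phi), math.cos(phi)
        for j in range(n):
            theta = (j + 0.5) * dth
            F = s * (r + r_t) ** 2 * (s * math.sin(theta) * v_c - c * v_s)
            if F <= 0:       # incoming flux only
                total -= F * dphi * dth
    return total

val = beta_hat(r=1.0, r_t=1.0, phi0=math.pi / 2, v_c=0.0, v_s=1.0)
print(val, 3 * math.pi)  # both close to 9.42
```

With v_c = 0 the mask is a spherical cap, so the kernel reduces to the settling velocity times the area of the capture cross-section, which is what the closed form expresses.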
The parameters that we need in the calculation are listed in Table 6.
Table 6. Values for specific parameters:
ρm — Fluid density: 1000 kg m−3
ρt — Tumor cell density
rp — PMN radius: 4.0 × 10−6 m
rt — Tumor cell radius
Acceleration due to gravity: 9.8 m s−2
Sensitivity to the Maximum Number of Bonds N
The algebraic system $$ \begin{aligned} - AP_{0} + P_{1} = 0 \hfill \\ AP_{n - 1} - (A + n)P_{n} + (n + 1)P_{n + 1} = 0 \hfill \\ AP_{N - 1} - NP_{N} = 0 \hfill \\ \end{aligned} $$ has the general solution \( P_{n} = {\frac{{A^{n} P_{0} }}{n!}} \) for n = 1, 2, 3,…, with A and P_0 satisfying $$ \sum\limits_{n = 0}^{N} {P_{n} } = P_{0} \sum\limits_{n = 0}^{N} {{\frac{{A^{n} }}{n!}} = 1} . $$ Under a given flow condition, the reaction constant A is fixed; taking \( N \to \infty , \) $$ \sum\limits_{n = 0}^{\infty } {P_{n} = P_{0} \sum\limits_{n = 0}^{\infty } {{\frac{{A^{n} }}{n!}}} = P_{0} e^{A} = 1} . $$ Therefore the solution \( P_{n} \) converges as \( N \to \infty \) to \( {\frac{{e^{ - A} A^{n} }}{n!}} \). By numerical simulation, we find that when N ≥ 300 the solution and the limit are almost identical (Fig. 13), so we can truncate the original system at N = 300.
Fig. 13. The steady state solution distribution of adhesion efficiency with the system size taken to be 300, 500, and 1000.
\begin{definition}[Definition:Existential Quantifier/Unique/Definition 3] There exists a unique object $x$ such that $\map P x$, denoted $\exists ! x: \map P x$, {{iff}} both: :$\exists x : \map P x$ and: :$\forall y : \forall z : \paren {\paren {\map P y \land \map P z} \implies y = z }$ \end{definition}
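Over a finite domain, Definition 3 can be checked mechanically by testing both conjuncts directly. A minimal Python sketch (the helper name `exists_unique` is illustrative, not part of the definition):

```python
def exists_unique(domain, P):
    """Definition 3: (exists x: P(x)) and (forall y, z: P(y) and P(z) => y = z)."""
    existence = any(P(x) for x in domain)
    uniqueness = all((not (P(y) and P(z))) or y == z
                     for y in domain for z in domain)
    return existence and uniqueness

print(exists_unique(range(10), lambda x: x * x == 49))  # True: only x = 7
print(exists_unique(range(10), lambda x: x % 2 == 0))   # False: existence holds, uniqueness fails
print(exists_unique(range(10), lambda x: x > 20))       # False: uniqueness holds vacuously, existence fails
```

Note that the third case illustrates why both conjuncts are needed: the uniqueness clause is vacuously true when no witness exists.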
Broken space diagonal In a magic cube, a broken space diagonal is a sequence of cells of the cube that follows a line parallel to a space diagonal of the cube, and continues on the corresponding point of an opposite face whenever it reaches a face of the cube.[1][2] The corresponding concept in two-dimensional magic squares is a broken diagonal. References 1. Narins, Brigham (2001), World of mathematics, Volume 2, Gale Group, p. 391, ISBN 9780787650650. 2. Pickover, Clifford A. (2003), The Zen of Magic Squares, Circles, and Stars: An Exhibition of Surprising Structures across Dimensions, Princeton University Press, p. 178, ISBN 1400841518.
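For a concrete picture, the cells of one broken space diagonal can be enumerated with modular arithmetic. The Python sketch below is illustrative only (it is not drawn from the cited references): it steps along a vector parallel to a space diagonal and wraps modulo the cube's side length whenever a face is reached:

```python
def broken_space_diagonal(n, start, direction=(1, 1, 1)):
    """Cells of a broken space diagonal in an n x n x n cube: step along a
    vector parallel to a space diagonal (each component +1 or -1), wrapping
    to the opposite face (modulo n) whenever a face of the cube is reached."""
    x, y, z = start
    dx, dy, dz = direction
    cells = []
    for _ in range(n):
        cells.append((x % n, y % n, z % n))
        x, y, z = x + dx, y + dy, z + dz
    return cells

print(broken_space_diagonal(4, (0, 2, 1)))
# [(0, 2, 1), (1, 3, 2), (2, 0, 3), (3, 1, 0)]
```

Starting from a corner, the same function traces an ordinary (unbroken) space diagonal.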
Mathematical Geosciences

High-Order Block Support Spatial Simulation Method and Its Application at a Gold Deposit

Joao Pedro de Carvalho, Roussos Dimitrakopoulos, Ilnur Minniakhmetov

First Online: 20 February 2019

Abstract High-order sequential simulation methods have been developed as an alternative to existing frameworks to facilitate the modeling of the spatial complexity of non-Gaussian spatially distributed variables of interest. These high-order simulation approaches address the modeling of the curvilinear features and spatial connectivity of extreme values that are common in mineral deposits, petroleum reservoirs, water aquifers, and other geological phenomena. This paper presents a new high-order simulation method that generates realizations directly at the block support scale conditioned to the available data at point support scale. In the context of sequential high-order simulation, the method estimates, at each block location, the cross-support joint probability density function using Legendre-like splines as the set of basis functions needed. The proposed method adds previously simulated blocks to the set of conditioning data, which initially contains the available data at point support scale. A spatial template is defined by the configuration of the block to be simulated and related conditioning values at both support scales, and is used to infer additional high-order statistics from a training image. Testing of the proposed method with an exhaustive dataset shows that simulated realizations reproduce major structures and high-order relations of data. The practical intricacies of the proposed method are demonstrated in an application at a gold deposit.

Keywords Sequential high-order simulation · Block support · Cross-support joint probability density function · High-order spatial statistics

1 Introduction

Stochastic simulation methods are used to quantify the spatial uncertainty and variability of pertinent attributes of natural phenomena in geosciences and geoengineering.
Initial simulation methods were based on Gaussian assumptions and second-order statistics of corresponding random field models (Journel and Huijbregts 1978; David 1988; Goovaerts 1997). To address the limits of such Gaussian approaches, multiple point statistics (MPS)-based simulation methods were introduced (Guardiano and Srivastava 1993; Strebelle 2002; Zhang et al. 2006; Arpat and Caers 2007; Remy et al. 2009; Mariethoz et al. 2010; Mariethoz and Caers 2014; Mustapha et al. 2014; Chatterjee et al. 2016; Li et al. 2016; Zhang et al. 2017) to remove distributional assumptions, as well as to enable the reproduction of complex curvilinear and other geologic features by replacing the random field model with a framework built upon extraction of multiple point patterns from a training image (TI) or geological analogue. The main limitations of MPS methods are that they do not explicitly account for high-order statistics, nor do they provide consistent mathematical models as they generate TI-driven realizations. Previous studies have shown resulting realizations that comply with the TI used but do not necessarily reproduce the spatial statistics inferred from the data (Osterholt and Dimitrakopoulos 2007; Goodfellow et al. 2012). As an alternative, to address the above limitations, a high-order simulation (HOSIM) framework has been proposed as a natural generalization of the second-order-based random field paradigm (Dimitrakopoulos et al. 2010; Mustapha and Dimitrakopoulos 2010a, b, 2011; Minniakhmetov and Dimitrakopoulos 2017a, b; Minniakhmetov et al. 2018; Yao et al. 2018). The HOSIM framework does not make any assumptions about the data distribution, and the resulting realizations reproduce the high-order spatial statistics of the data. Similar to the MPS and most Gaussian simulation approaches, HOSIM methods generate realizations at the point support scale, whereas in most major areas of application, simulated realizations must be at the block support scale. 
Typically, the change of support scale needed is addressed by generating simulated realizations on a very dense grid of nodes that is then postprocessed to generate realizations at the block support size needed. This is a computationally demanding process, as related configurations may require extremely dense grids with on the order of many millions to billions of nodes. Thus, there is a need for computationally efficient methods that simulate directly at the block support scale. In the context of conventional second-order geostatistics, direct block support simulation has been proposed. An approach termed "direct block simulation" was presented by Godoy (2003), which discretizes each block into several internal nodes, but only stores a single block value in memory for the next group simulation. This mechanism drastically reduces the amount of data stored in memory and saves considerable computational effort. The sequential direct block simulation method was expanded by Boucher and Dimitrakopoulos (2009) to incorporate multiple correlated variables by applying min/max autocorrelation factors. An explicit change of the support model and direct simulation at block support scale were used by Emery (2009). Although efficient, these methods carry all the limitations of a Gaussian simulation framework, and the related spatial connectivity is limited to two-point spatial statistics, thus they remain unable to characterize non-Gaussian variables, complex nonlinear geological geometries, and the critically important connectivity of extreme values (Journel 2018). Alternatives are, therefore, needed. High-order sequential simulation methods use high-order spatial cumulants to describe complex geologic configurations and high-order connectivity. At the same time, simulated realizations remain consistent with respect to the statistics of the available data, while capitalizing on the additional information that TIs can provide. 
These high-order spatial cumulants are described by Dimitrakopoulos et al. (2010) as combinations of moment statistical parameters. A high-order simulation algorithm was proposed by Mustapha and Dimitrakopoulos (2010a), where the conditional probability density functions (cpdf) are approximated by Legendre polynomials and high-order spatial cumulants. A template is defined based on the central node to be simulated and the nearest conditioning data. The replicates of this configuration are obtained from both the data and TI, and are used as input for the calculation of the Legendre coefficients in the cpdf approximation. Advantages of this method lie in the absence of assumptions on the distribution of the data and in being a data-driven approach. The Legendre polynomial was replaced by Legendre-like splines as the basis function for the estimation of conditional probabilities by Minniakhmetov et al. (2018). Results show a more stable approximation of the related cpdf. Improving upon the computational performance, Yao et al. (2018) proposed a new approach, where the calculation of the cpdf is simplified and no explicit calculation of cumulants is required. Although effective, the methods described above are performed at point support scale. This paper presents a new method that generates high-order stochastic simulations directly at the block support scale. The technique considers overlapping grids representing a study area at two support scales, viz. point and block, where the simulation process is implemented at the latter. In the sequential simulation process followed, only the initial point support data and previously simulated blocks are added to the set of conditioning values, thus drastically reducing the number of elements stored in memory. The block to be simulated and the nearest conditioning data, at the point or block support scale, define the spatial configuration of the template used. 
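The replicate-gathering step that these methods share can be illustrated in two dimensions. The sketch below is a simplification (exact template geometry, support scales, and value matching rules are abstracted away; all names are illustrative): it scans a training image for locations whose neighborhood, at given offsets, matches the conditioning values within a tolerance:

```python
import numpy as np

def scan_replicates(ti, offsets, conditioning, tol=0.1):
    """Scan a 2-D training image for replicates of a template: locations where
    the values at the given (dy, dx) offsets match the conditioning values
    within tol. Returns the TI value at each matching central location."""
    ny, nx = ti.shape
    centers = []
    for y in range(ny):
        for x in range(nx):
            ok = True
            for (dy, dx), val in zip(offsets, conditioning):
                yy, xx = y + dy, x + dx
                # skip templates that fall outside the image or mismatch
                if not (0 <= yy < ny and 0 <= xx < nx) or abs(ti[yy, xx] - val) > tol:
                    ok = False
                    break
            if ok:
                centers.append(ti[y, x])
    return centers

rng = np.random.default_rng(1)
ti = rng.random((40, 40))                       # toy training image
reps = scan_replicates(ti, [(0, 1), (1, 0)], [0.5, 0.5], tol=0.2)
print(len(reps) > 0)                            # replicates found
```

The central values collected this way are the raw material from which local conditional distributions are inferred.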
Similarly, the TI is represented at both support scales to provide replicates of related spatial template configurations. The conditional cross-support joint density function estimated at each block is approximated by Legendre-like splines. The remainder of the paper is organized as follows: First, the proposed model for high-order block support simulation is presented. Subsequently, a case study in a controlled environment assesses the performance of the current approach. Next, the method is applied to an actual gold deposit to demonstrate its practical aspects. Conclusions follow. 2 High-Order Block Support Simulation 2.1 Sequential Simulation In the following description, the index \( V \) relates to elements at the block support, while \( P \) represents point support. Consider a stationary and ergodic non-Gaussian random field (RF) \( Z_{P} \left( {u_{j} } \right) \) in \( R^{n} \), where \( u_{j} \) defines the location of nodes j in the domain \( D \subseteq R^{n} . \) Now, consider a transformation function that takes the above point support RF to the block support RF. Any upscaling function can be applied, but assume Eq. (1) for simplicity $$ Z_{V} \left( v \right) = \frac{1}{\left| V \right|}\int\limits_{{u_{j} \in \, v}} {Z_{P} \left( {u_{j} } \right){\text{d}}u_{j} } . $$ \( Z_{V} \left( {v_{i} } \right) \) is also a RF, indexed as \( v_{i} \in D \subseteq R^{n} ,i = 1, \ldots ,N_{V}, \) where \( N_{V} \) represents the total number of blocks to be simulated within the domain \( D \subseteq R^{n} \). \( Z_{V} \left( {v_{i} } \right) \) is the upscaled RF from \( Z_{P} (u_{j} ) \) considering all nodes \( u_{j} \) that are discretized within the block centered in \( v_{i} \), where V is the volume. 
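In discrete form, the upscaling in Eq. (1) is simply the average of the point-support nodes discretizing each block. A minimal NumPy sketch, assuming a regular 2-D grid that tiles exactly into b × b blocks (the function name is illustrative):

```python
import numpy as np

def upscale_to_blocks(z_point, b):
    """Discrete analogue of Eq. (1): average non-overlapping b x b patches of
    a point-support grid to obtain block-support values."""
    ny, nx = z_point.shape
    assert ny % b == 0 and nx % b == 0, "grid must tile exactly into blocks"
    return z_point.reshape(ny // b, b, nx // b, b).mean(axis=(1, 3))

z = np.arange(100, dtype=float).reshape(10, 10)  # toy point-support field
zb = upscale_to_blocks(z, 5)                     # 2 x 2 block-support grid
print(zb.shape)    # (2, 2)
print(zb[0, 0])    # 22.0, the mean of the upper-left 5 x 5 patch
```

Any other upscaling function (e.g., a weighted average) could be substituted here, as noted above.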
The outcomes from the above RFs are denoted as \( z_{j}^{P} = z_{P} \left( {u_{j} } \right) \) and \( z_{i}^{V} = z_{V} \left( {v_{i} } \right) \), respectively, for the point and block support RF \( Z_{P} \left( {u_{j} } \right) = Z_{j}^{P} \) and \( Z_{V} \left( {v_{i} } \right) = Z_{i}^{V} \). Herein, the objective is to simulate a realization of the RF \( Z_{i}^{V} \) given the set of initial conditioning values at point support scale denoted as \( d_{p} = \left\{ {z_{1}^{P} , \ldots ,z_{{N_{P} }}^{P} } \right\} \), \( N_{P} \) being the total of the conditioning point support values. According to the sequential simulation theory in the geostatistical field, the joint probability density function (jpdf) \( f_{{Z_{1}^{V} , \ldots ,Z_{k}^{V} }} \) can be decomposed into the products of the respective univariate distributions (Johnson 1987; Journel and Alabert 1989; Journel 1994; Goovaerts 1997; Dimitrakopoulos and Luo 2004) $$ \begin{aligned} & f_{{_{{Z_{1}^{V} , \ldots ,Z_{{N_{V} }}^{V} }} }} \left( {z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{{N_{V} }}^{V} \left| {d_{P} } \right.} \right) = f_{{Z_{1}^{V} }} \left( {z_{1}^{V} \left| {d_{P} } \right.} \right)f_{{Z_{2}^{V} , \ldots ,Z_{{N_{V} }}^{V} }} \left( {z_{2}^{V} , \ldots ,z_{{N_{V} }}^{V} \left| {d_{P} ,z_{1}^{V} } \right.} \right) \\ & \qquad = f_{{Z_{1}^{V} }} \left( {z_{1}^{V} \left| {d_{P} } \right.} \right)f_{{Z_{2}^{V} }} \left( {z_{2}^{V} \left| {d_{P} ,z_{1}^{V} } \right.} \right) \ldots f_{{Z_{{N_{V} }}^{V} }} \left( {z_{{N_{V} }}^{V} \left| {d_{P} ,z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{{N_{V} - 1}}^{V} } \right.} \right). \\ \end{aligned} $$ According to Eq. 
(2), each block \( v^{k} \) is simulated based on the estimation of the conditional cross-support probability density function \( f_{{Z_{k}^{V} }} \left( {z_{k}^{{_{V} }} \left| {d_{P} ,z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{k - 1}^{V} } \right.} \right) \), which according to Bayes' rule (Stuart and Ord 1987) is $$ f_{{Z_{k}^{V} }} \left( {z_{k}^{{_{V} }} \left| {d_{P} ,z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{k - 1}^{V} } \right.} \right) = \frac{{f_{\text{Z}} \left( {d_{P} ,z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{k - 1}^{V} ,z_{k}^{{_{V} }} } \right)}}{{\int\limits_{D} {f_{\text{Z}} \left( {d_{P} ,z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{k - 1}^{V} ,z_{k}^{{_{V} }} } \right)dv_{k} } }}, $$ where \( {\text{Z}} = Z_{1}^{P} , \ldots ,Z_{{N_{P} }}^{P} ,Z_{1}^{V} , \ldots ,Z_{k}^{V} \). It is sufficient to approximate only the cross-support joint probability density function \( f_{\text{Z}} \left( {d_{P} ,z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{k - 1}^{V} ,z_{k}^{{_{V} }} } \right) \). In this paper, this cross-support joint probability density function is approximated using Legendre-like orthogonal splines (Wei et al. 2013; Minniakhmetov et al. 2018). 2.2 Joint Probability Density Function Approximation For simplicity, let \( f\left( z \right) \) be the pdf of a random variable \( Z \) defined in \( \varOmega = \left[ {a,b} \right] \) and let \( \varphi_{1} \left( z \right),\varphi_{2} \left( z \right), \ldots \) be a set of orthogonal functions defined in the same space \( \varOmega \). Then, a fixed number \( \omega \) of those orthogonal functions can approximate \( f\left( z \right) \) (Lebedev 1965; Mustapha and Dimitrakopoulos 2010a; Minniakhmetov et al. 2018; Yao et al. 2018), when multiplied by the coefficients \( L_{i} \) $$ f\left( z \right) \approx \sum\limits_{i = 0}^{\omega } {L_{i} \varphi_{i} \left( z \right)} . 
$$ Since the sets of functions are orthogonal $$ \int\limits_{a}^{b} {\varphi_{i} \left( z \right)\varphi_{j} \left( z \right)} {\text{d}}z = \delta_{ij} , $$ where \( \delta_{ij} \) is the Kronecker delta indexed by \( i \) and \( j \), which takes the value 1 if \( i = j \) and 0 otherwise. Using the definition of the expected value of a basis function $$ E\left[ {\varphi_{i} \left( z \right)} \right] = \int\limits_{a}^{b} {\varphi_{i} \left( z \right)f\left( z \right)} {\text{d}}z. $$ Replacing \( f\left( z \right) \) as in Eq. (4) yields $$ \begin{aligned} & E\left[ {\varphi_{i} \left( z \right)} \right] \approx \int\limits_{a}^{b} {\varphi_{i} \left( z \right)\sum\limits_{j = 0}^{\omega } {L_{j} \varphi_{j} \left( z \right)} {\text{d}}z} = \sum\limits_{j = 0}^{\omega } {L_{j} \int\limits_{a}^{b} {\varphi_{j} \left( z \right)\varphi_{i} \left( z \right){\text{d}}z} } \\ & \quad = \sum\limits_{j = 0}^{\omega } {L_{j} \delta_{ij} } = L_{i} . \\ \end{aligned} $$ The coefficient \( L_{i} \) can thus be obtained experimentally from an available sample, and \( f\left( z \right) \) is approximated by Eq. (4). Moving to the multivariate cross-support case, at every block location \( v^{k} \) the cross-support jpdf \( f_{\text{Z}} \left( {d_{P} ,z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{k - 1}^{V} ,z_{k}^{V} } \right) \) can be defined in an analogous manner. Since, in practice, not all the samples are included as conditioning, \( n_{V} \) and \( n_{P} \) denote the maximum number of elements at block support and point support scale, respectively, used in the calculation. Hereinafter, the above cross-support jpdf is referred to as \( f\left( {z_{0}^{V} , \ldots ,z_{{n_{V} }}^{V} ,z_{1}^{P} , \ldots ,z_{{n_{P} }}^{P} } \right) \) to simplify the notation and make clear which variables belong to the block and point support layers. Also note that, without loss of generality, \( z_{0}^{V} \) is the value to be simulated at location \( v_{0} \).
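The identity \( L_{i} = E\left[ {\varphi_{i} \left( z \right)} \right] \) suggests a direct estimator: replace the expectation with a sample mean. The NumPy sketch below does this for the univariate case of Eq. (4), using orthonormalized Legendre polynomials on [−1, 1] rather than the Legendre-like splines used in the paper; all names are illustrative:

```python
import numpy as np

def legendre_pdf_estimate(samples, z_grid, omega=8):
    """Approximate a pdf on [-1, 1] via Eq. (4): f(z) ~ sum_i L_i phi_i(z),
    with each coefficient L_i = E[phi_i(Z)] estimated as a sample mean.
    phi_i = sqrt((2i+1)/2) * P_i is the orthonormal Legendre polynomial."""
    f = np.zeros_like(z_grid, dtype=float)
    for i in range(omega + 1):
        coeffs = np.zeros(i + 1)
        coeffs[i] = 1.0                            # select P_i
        norm = np.sqrt((2 * i + 1) / 2.0)          # orthonormalization factor
        L_i = (norm * np.polynomial.legendre.legval(samples, coeffs)).mean()
        f += L_i * norm * np.polynomial.legendre.legval(z_grid, coeffs)
    return f

rng = np.random.default_rng(0)
samples = rng.uniform(-1, 1, 50_000)               # true pdf is 1/2 on [-1, 1]
z = np.linspace(-0.9, 0.9, 5)
print(legendre_pdf_estimate(samples, z))           # values near 0.5
```

With more samples the estimate tightens around the true density; truncating at a fixed ω controls both smoothing and cost, as in the approximations above.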
The cross-support jpdf is defined in the domain \( \left[ {a,b} \right]^{{n_{V} + 1}} \times \left[ {a,b} \right]^{{n_{P} }} \). Note that the interval for the block support is not necessarily the same as for the point support. This also applies to the basis functions \( \varphi_{j} \left( z \right) \), which could be discretized differently for both supports. Similarly to the univariate case, the cross-support jpdf can be approximated as $$\begin{aligned}& f\left( {z_{0}^{V} , \ldots ,z_{{n_{V} }}^{V} ,z_{1}^{P} , \ldots ,z_{{n_{P} }}^{P} } \right) \approx \sum\limits_{{k_{0}^{V} }}^{\omega } \ldots \sum\limits_{{k_{V}^{{n_{V} }} }}^{\omega } \sum\limits_{{k_{P}^{1} }}^{\omega } \ldots \sum\limits_{{k_{P}^{{n_{P} }} }}^{\omega } \left[ L_{{k_{0}^{V} \ldots k_{{n_{V} }}^{V} k_{1}^{P} \ldots k_{{n_{P} }}^{P} }} \varphi_{{k_{0}^{V} }} \left( {z_{0}^{V} } \right) \right.\\ &\left. \quad \ldots \varphi_{{k_{{n_{V} }}^{V} }} \left( {z_{{n_{V} }}^{V} } \right)\varphi_{{k_{1}^{P} }} \left( {z_{1}^{P} } \right) \ldots \varphi_{{k_{{n_{P} }}^{P} }} \left( {z_{{n_{P} }}^{P} } \right) \right] .\end{aligned} $$ The coefficients \( L_{i \ldots jk \ldots l} \) can be calculated experimentally, since they can be obtained from the orthogonality property of the basis functions. 
Following the definition of the expected value of a basis function, this is expressed as $$ \begin{aligned} & E\left[ {\varphi_{i} \left( {z_{0}^{V} } \right) \ldots \varphi_{j} \left( {z_{{n_{V} }}^{V} } \right)\varphi_{k} \left( {z_{1}^{P} } \right) \ldots \varphi_{l} \left( {z_{{n_{P} }}^{P} } \right)} \right] \\ & \quad = \int\limits_{a}^{b} { \ldots \int\limits_{a}^{b} {\int\limits_{a}^{b} { \ldots \int\limits_{a}^{b} {\varphi_{i} \left( {z_{0}^{V} } \right) \ldots \varphi_{j} \left( {z_{{n_{V} }}^{V} } \right)\varphi_{k} \left( {z_{1}^{P} } \right)} } } } \\ & \qquad \ldots \varphi_{l} \left( {z_{{n_{P} }}^{P} } \right)f\left( {z_{0}^{V} , \ldots ,z_{{n_{V} }}^{V} ,z_{1}^{P} , \ldots ,z_{{n_{P} }}^{P} } \right){\text{d}}z_{0}^{V} \ldots {\text{d}}z_{{n_{V} }}^{V} {\text{d}}z_{1}^{P} \ldots {\text{d}}z_{{n_{P} }}^{P} . \\ \end{aligned} $$ Replacing \( f\left( {z_{V}^{0} , \ldots ,z_{V}^{{n_{V} }} ,z_{P}^{1} , \ldots ,z_{P}^{{n_{P} }} } \right) \) as in Eq. (8) yields $$ \begin{aligned} & E\left[ {\varphi_{i} \left( {z_{0}^{V} } \right) \ldots \varphi_{j} \left( {z_{{n_{V} }}^{V} } \right)\varphi_{k} \left( {z_{1}^{P} } \right) \ldots \varphi_{l} \left( {z_{{n_{P} }}^{P} } \right)} \right] \\ & \quad \approx \int\limits_{a}^{b} { \ldots \int\limits_{a}^{b} {\int\limits_{a}^{b} { \ldots \int\limits_{a}^{b} {\varphi_{i} \left( {z_{0}^{V} } \right) \ldots \varphi_{j} \left( {z_{{n_{V} }}^{V} } \right)\varphi_{k} \left( {z_{1}^{P} } \right) \ldots \varphi_{l} \left( {z_{{n_{P} }}^{P} } \right)} } } } \\ & \qquad \times\sum\limits_{{k_{0}^{V} }}^{\omega } { \ldots \sum\limits_{{k_{{n_{V} }}^{V} }}^{\omega } {\sum\limits_{{k_{1}^{P} }}^{\omega } {} } }\ldots \sum\limits_{{k_{{n_{P} }}^{P} }}^{\omega } \left[ L_{{k_{0}^{V} \ldots k_{{n_{V} }}^{V} k_{1}^{P} \ldots k_{{n_{P} }}^{P} }} \varphi_{{k_{0}^{V} }} \left( {z_{0}^{V} } \right) \right.\\ &\qquad\left.\ldots \varphi_{{k_{{n_{V} }}^{V} }} \left( {z_{{n_{V} }}^{V} } \right)\varphi_{{k_{1}^{P} }} \left( {z_{1}^{P} } 
\right) \ldots \varphi_{{k_{{n_{P} }}^{P} }} \left( {z_{{n_{P} }}^{P} } \right)\right] {\text{d}}z_{0}^{V} \ldots {\text{d}}z_{{n_{V} }}^{V} {\text{d}}z_{1}^{P} \ldots {\text{d}}z_{{n_{P} }}^{P} \\ &\quad = \sum\limits_{{k_{0}^{V} }}^{\omega } { \ldots \sum\limits_{{k_{{n_{V} }}^{V} }}^{\omega } {\sum\limits_{{k_{1}^{P} }}^{\omega } { \ldots \sum\limits_{{k_{{n_{P} }}^{P} }}^{\omega } {\left[ {L_{{k_{0}^{V} \ldots k_{{n_{V} }}^{V} k_{1}^{P} \ldots k_{{n_{P} }}^{P} }} \int\limits_{a}^{b} { \ldots \int\limits_{a}^{b} {\int\limits_{a}^{b} { \ldots \int\limits_{a}^{b} {\varphi_{i} \left( {z_{0}^{V} } \right)\varphi_{{k_{0}^{V} }} \left( {z_{0}^{V} } \right)} } } } } \right.} } } } \\ & \left. {\qquad \ldots \varphi_{j} \left( {z_{{n_{V} }}^{V} } \right)\varphi_{{k_{V}^{{n_{V} }} }} \left( {z_{{n_{V} }}^{V} } \right)\varphi_{k} \left( {z_{1}^{P} } \right)\varphi_{{k_{P}^{1} }} \left( {z_{1}^{P} } \right) \ldots \varphi_{l} \left( {z_{{n_{P} }}^{P} } \right)\varphi_{{k_{{n_{P} }}^{P} }} \left( {z_{{n_{P} }}^{P} } \right)} \right]\\ &\qquad {\text{d}}z_{0}^{V} \ldots {\text{d}}z_{{n_{V} }}^{V} {\text{d}}z_{1}^{P} \ldots {\text{d}}z_{{n_{P} }}^{P} \\ & \quad = \sum\limits_{{k_{0}^{V} }}^{\omega } { \ldots \sum\limits_{{k_{{n_{V} }}^{V} }}^{\omega } {\sum\limits_{{k_{1}^{P} }}^{\omega } { \ldots \sum\limits_{{k_{{n_{P} }}^{P} }}^{\omega } {\left[ {L_{{k_{0}^{V} \ldots k_{{n_{V} }}^{V} k_{1}^{P} \ldots k_{{n_{P} }}^{P} }} \delta_{{i_{{k_{0}^{V} }} }} \ldots \delta_{{j_{{k_{{n_{V} }}^{V} }} }} \delta_{{k_{{k_{1}^{P} }} }} \delta_{{l_{{k_{{n_{P} }}^{P} }} }} } \right]} } } } = L_{i \ldots jk \ldots l} . \\ \end{aligned} $$ Now, to determine \( L_{i \ldots jk \ldots l} \), the expected value from Eq. (10) is calculated from replicates of the training image according to a template defined from the simulation grid and sampling data. Let \( \tau = \left[ {v_{0} , \ldots ,v_{{n_{V} }} ,u_{1} , \ldots ,u_{{n_{P} }} } \right] \) be a template as in Fig. 
1, where \( v_{0} \) and \( v_{1} \) represent locations at block support, and \( u_{1} \), \( u_{2} \), and \( u_{3} \) represent point support locations. \( v_{0} \) is the location of the block to be simulated, and \( n_{P} \) and \( n_{V} \) are, respectively, the total number of points and blocks used as conditioning. In the figure, the grids at point and block support scale appear separated, but this is for visualization purposes only; in reality, they overlap, and the distance between the layers is zero. \( \tau \) is defined from a limited number of conditioning values, chosen in order of Euclidean proximity to the central block to be simulated. Given the specified template \( \tau \), the TI is scanned and the replicates of this template are retrieved. Note that \( \tau \) has elements that belong to both the point and the block support scales; accordingly, the TI must be available at both scales. Therefore, assuming a TI input at the point support scale, it is rescaled to block support scale, and both representations are used during the simulation process, each in its respective layer.

Fig. 1 Example template \( \tau \) with conditioning data capturing values at both point and block support scales

The algorithm for the block support high-order simulation method is as follows:

1. Upscale the TI from point to block support scale.
2. Define a random path to visit all the unsampled block locations on the simulation grid.
3. At each block location \( v^{0} \):
   (a) Find the nearest conditioning point and block support values.
   (b) Obtain the template \( \tau \) according to the configuration of the central block and related conditioning values at both support scales.
   (c) Scan the training images, searching for replicates of the template \( \tau \) and corresponding values.
   (d) Calculate all the spatial cross-support coefficients \( L_{i \ldots jk \ldots l} \) using Eq. (10).
   (e) Derive the conditional cross-support jpdf \( f_{{Z_{0}^{V} }} \left( {z_{0}^{V} \left| {d_{P} ,z_{1}^{V} ,z_{2}^{V} , \ldots ,z_{k}^{V} } \right.} \right) \) according to Eqs. (8) and (3).
   (f) Draw a uniform value from \( \left[ {0,1} \right] \) to sample \( z_{0}^{V} \) from the conditional cumulative distribution derived from the above.
   (g) Add \( z_{0}^{V} \) to the simulation grid at block support scale so that it can serve as a conditioning value for the next block.
4. Repeat steps 2 and 3 for additional realizations.

2.3 Approximation of a Joint Probability Density Using Legendre-Like Orthogonal Splines

The current paper uses Legendre-like splines (Wei et al. 2013; Minniakhmetov et al. 2018) as the basis functions mentioned above. In short, these splines combine Legendre polynomials (Lebedev 1965) up to order \( r \) with linear combinations of B-splines (de Boor 1978). B-splines are a particular class of piecewise polynomials (splines) connected by continuity conditions, and by themselves do not form an orthogonal basis. Thus, as introduced in Wei et al. (2013), the first \( r + 1 \) splines are the Legendre polynomials, which can be defined as (Lebedev 1965) $$ \varphi_{r} = \frac{1}{{2^{r} r!}}\left( {\frac{{{\text{d}}^{r} }}{{{\text{d}}z^{r} }}} \right)\left[ {\left( {z^{2} - 1} \right)^{r} } \right],\quad - 1 \le z \le 1. $$ The additional functions are constructed given the domain $$ T = \{ \underbrace {{a,a, \ldots ,t_{0} = a}}_{r + 1} < t_{1} \le t_{2} \le \ldots \le t_{{m_{\rm{max} } }} < \underbrace {{t_{{m_{\rm{max} } + 1}} = b,b, \ldots ,b}}_{r + 1}\} , $$ where the elements \( t_{i} \) are referred to as knots and \( m_{\rm{max} } \) represents the maximum number of knots; note that Minniakhmetov et al. (2018) present a study on how to choose \( m_{\rm{max} } \) to obtain computationally stable polynomial approximations.
The final Legendre-like splines are defined as $$ \varphi_{r + m} (t) = \frac{{{\text{d}}^{r + 1} }}{{{\text{d}}t^{r + 1} }}f_{m} (t),\quad m = 1 \ldots m_{\rm{max} } . $$ \( f_{m} (t) \) is the determinant of the following matrix: $$ f_{m} (t) = \det \left( {\begin{array}{*{20}c} {B_{ - r,2r + 1,m} (t)} & {B_{ - r + 1,2r + 1,m} (t)} & \cdots & {B_{ - r + m - 1,2r + 1,m} (t)} \\ {B_{ - r,2r + 1,m} (t_{1} )} & {B_{ - r + 1,2r + 1,m} (t_{1} )} & \vdots & {B_{ - r + m - 1,2r + 1,m} (t_{1} )} \\ \vdots & \vdots & \ddots & \vdots \\ {B_{ - r,2r + 1,m} (t_{m - 1} )} & {B_{ - r + 1,2r + 1,m} (t_{m - 1} )} & \cdots & {B_{ - r + m - 1,2r + 1,m} (t_{m - 1} )} \\ \end{array} } \right), $$ which is constructed from the auxiliary splines \( B_{i,r,m} \left( t \right) \) of order \( r \), obtained according to the recursive rule $$ \begin{aligned} & B_{i,0,m} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & \quad{t_{i,m} \le t \le t_{i + 1,m} } \hfill \\ {0,} \hfill & \quad{\text{otherwise}} \hfill \\ \end{array} } \right., \\ & B_{i,r,m} \left( t \right) = \frac{{t - t_{i,m} }}{{t_{i + r - 1,m} - t_{i,m} }}B_{i,r - 1,m} \left( t \right) + \frac{{t_{i + r,m} - t}}{{t_{i + r,m} - t_{i + 1,m} }}B_{i + 1,r - 1,m} \left( t \right). \\ \end{aligned} $$ These auxiliary functions are defined on the knot sequence \( T_{m} = \left\{ {t_{i,m} } \right\}_{i = - r}^{r + m + 1} \), \( m = 1 \ldots \)\( m_{\rm{max} } - 1 \), and the term \( t_{i,m} \) is defined as $$ t_{i,m} = \left\{ {\begin{array}{*{20}l} {a,} \hfill &\quad { - r \le i \le 0} \hfill \\ {t_{i} ,} \hfill &\quad { 1\le i \le m} \hfill \\ {b,} \hfill &\quad {m + 1 \le i \le m + r + 1} \hfill \\ \end{array} } \right.. $$ 3 Testing with an Exhaustive Dataset The method outlined above is tested using the two-dimensional image of the Walker Lake dataset (Isaaks and Srivastava 1989). This exhaustive dataset comprises two correlated variables U and V with sizes of 260 × 300 pixels. 
Random stratified sampling is used to retrieve 234 values, or 0.3 %, of the exhaustive image V, to be used as the dataset in the direct block simulation of V and thus test the proposed method. The full image V is converted from the point to a block support representation by averaging over 5 × 5 pixels. This block support version is referred to here as the fully known reference image and is used for comparisons. Figure 2 shows V at the point and block support scale, as well as the dataset to be used. The image U is chosen as the training image in the simulation process. Figure 3 presents the TI at both point and block support (5 × 5 unit size) scales. To help the method find more meaningful spatial patterns for the potential conditioning templates, the histogram of the TI is matched to that of the dataset. Histograms of the exhaustively known image, TI, and dataset are displayed in Fig. 4, and basic statistics are presented in Table 1.

Fig. 2 Exhaustive image V: a at point support scale, b at block support scale, and c 234 samples from the image in a

Fig. 3 Training image U at a point support scale and b block support scale

Fig. 4 Histogram of data, reference, and training image at point support scale

Table 1 Basic statistics of dataset, training image, and fully known image at point support scale

The test conducted consists of generating 15 simulated realizations of the V dataset at block support scale, using the data and the training image mentioned above. Note that the maximum number of knots used (Eq. 12) is 50, which provides computationally efficient and stable polynomial approximations. Figure 5 shows three of the simulated realizations generated, and Table 2 presents the statistics related to the average of the 15 simulations, training image, and reference image at block scale. Comparison of Figs. 2b and 5 suggests that the simulations reproduce the main structures of low and high values of the fully known reference image V.
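The sampling design used for the test can be mimicked in a few lines; the Python sketch below draws one pixel uniformly at random from each stratum of a regular partition (the stratum size and seed here are toy choices, not the design behind the 234-sample dataset):

```python
import numpy as np

def stratified_sample(image, cell, seed=42):
    """Random stratified sampling: draw one pixel uniformly at random from
    each full cell x cell stratum of a 2-D image.
    Returns (row, col, value) triples."""
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    out = []
    for y0 in range(0, ny - cell + 1, cell):
        for x0 in range(0, nx - cell + 1, cell):
            y = y0 + rng.integers(cell)   # random offset inside the stratum
            x = x0 + rng.integers(cell)
            out.append((y, x, image[y, x]))
    return out

img = np.arange(260 * 300, dtype=float).reshape(260, 300)  # toy 260 x 300 image
samples = stratified_sample(img, 17)
print(len(samples))  # 255: one sample per full 17 x 17 stratum
```

Compared with simple random sampling, stratification guarantees spatial coverage of the image, which matters when the samples condition a spatial simulation.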
The histograms and variograms presented in Figs. 6 and 7 reasonably follow the behavior exhibited by the data and the training image. Note that the variograms of the data are computed at point scale and rescaled to represent the corresponding volume–variance relation (Journel and Huijbregts 1978).

Fig. 5 Three example simulated realizations of the Walker Lake reference image V

Table 2 Basic statistics of the average of the simulations, training image, and reference image at block support scale

Fig. 6 Histograms of the simulations at block support scale, and comparison with reference and training image also at block support scale

Fig. 7 Variograms of simulated realizations, exhaustive image, TI, and variogram from data rescaled to block support variance: a WE direction, and b NS direction

Spatial cumulants (Dimitrakopoulos et al. 2010) can quantify the spatial relationships between three and more points and are used herein to assess high-order spatial patterns. The third-order cumulant maps are presented along with the template used for their calculation in Fig. 8. Figure 9 shows the fourth-order cumulant map, where three slices of the complete cumulant map and the related template are displayed. In both figures, the color ranges from blue to red, representing lower to higher spatial intercorrelation between values. Note that the reference and training image high-order maps are calculated at block support scale, while the cumulant maps related to each simulation are averaged into a single map using the 15 stochastic simulated realizations at block support scale. During the calculation of the high-order spatial statistics from the data, only a few replicates are obtained, so Fig. 8a presents a smooth interpolation using B-splines. Regarding the third-order maps, the average of the simulations matches the spatial features observed in the data and the fully known dataset.
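As a concrete reference point, one entry of such a map — the experimental third-order statistic for a single pair of lag vectors — can be computed as below. This is a zero-mean simplification (for centered variables, the third-order cumulant equals the third moment); the function name and lag choices are illustrative:

```python
import numpy as np

def third_order_cumulant(field, h1, h2):
    """Experimental third-order spatial cumulant E[Z(u) Z(u+h1) Z(u+h2)] of a
    centered 2-D field, for lag vectors h1 = (dy1, dx1), h2 = (dy2, dx2)."""
    z = field - field.mean()
    ny, nx = z.shape
    (dy1, dx1), (dy2, dx2) = h1, h2
    # restrict to locations u where u, u+h1 and u+h2 all fall inside the grid
    y0, y1 = max(0, -dy1, -dy2), min(ny, ny - dy1, ny - dy2)
    x0, x1 = max(0, -dx1, -dx2), min(nx, nx - dx1, nx - dx2)
    a = z[y0:y1, x0:x1]
    b = z[y0 + dy1:y1 + dy1, x0 + dx1:x1 + dx1]
    c = z[y0 + dy2:y1 + dy2, x0 + dx2:x1 + dx2]
    return float((a * b * c).mean())

rng = np.random.default_rng(3)
z = rng.random((64, 64))
print(third_order_cumulant(z, (0, 1), (1, 0)))  # near 0 for an i.i.d. field
```

Sweeping h1 and h2 over a grid of lags produces a cumulant map of the kind shown in Fig. 8.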
It also shares similarities with the third-order cumulant map of the TI; this is somewhat expected, as the process captures high-order relations from the TI at block support scale as well. These spatial relations present in the TI thus end up being present in the realizations.

Fig. 8 Third-order cumulant maps for a the point support data used, b the fully known block support image V, c the training image, and d the average map of the 15 simulated realizations
Fig. 9 Slices of the fourth-order cumulant maps for a the fully known image V, b the training image, and c the average map of the 15 simulations, all at block support scale

The fourth-order cumulant map reproduces characteristics that are closer to those of the TI than to the fully known image, as expected. Note that, by explicitly calculating the spatial high-order cumulants, the information received from the training image to infer local cross-support distributions is conditioned to the data.

4 Application at a Gold Deposit

This section applies the proposed method at a gold deposit. The dataset comprises 2300 drillholes spaced approximately in a 35 × 35 m2 configuration, covering an area of 4.5 km2. The training image is defined on 405 × 445 × 43 grid blocks of size 5 × 5 × 10 m3 and is based on blasthole samples. Both inputs are composited on a 10 m bench and are considered to be at point support scale. Figure 10 presents the available drillholes and the training image at block scale. The deposit to be simulated is represented by 510,800 blocks, each measuring 10 × 10 × 10 m3.

Fig. 10 a Cross-section of the available drillhole locations, and b training image at block support scale

Fifteen simulated realizations are generated; cross-sections from two of them are presented in Fig. 11 to show similarities with the data and TI in the corresponding cross-sections in Fig. 10. Notable is the reproduction of a sharp transition from high to low grades. Figure 12 shows the histograms of the simulations and the TI at block support scale.
Table 3 presents the related statistics. Variograms at block support scale are displayed in Fig. 13, where the data variogram is regularized to reflect the corresponding volume–variance relation. The second-order spatial statistics of the simulations reasonably match the pattern followed by the data and are close to those of the TI. Results for third- and fourth-order cumulants and the related maps for the data, TI, and simulated realizations are shown in Figs. 14 and 15, respectively. Note that the high-order statistics of the simulated realizations match those of the data and the TI.

Fig. 11 Cross-sections of two simulated realizations
Fig. 12 Histograms of simulated realizations and training image
Table 3 Basic statistics of the average of the simulations and training image at block support scale, and of the dataset at point scale
Fig. 13 Variograms of simulated realizations and training image, and data variograms rescaled to represent block variance: WE direction (left) and NS direction (right)
Fig. 14 Third-order cumulant maps, obtained with the template on the left, for the a dataset, b training image at block support, and c average map of the 15 simulations
Fig. 15 Three slices of the fourth-order cumulant maps, obtained with the template at the bottom, for the a dataset, b training image at block support, and c average map of the 15 simulations

Further highlighting the advantages of the proposed direct block high-order simulation method, note that, for this case study, the runtime of the related algorithm was approximately 5.5 h, while point-support high-order simulation requires approximately 24 h. Both approaches were tested with the same specifications and computing equipment: an Intel® Core™ i7-7700 CPU at 3.60 GHz with 16 GB of RAM, running under Windows 7.

This paper presents a new high-order simulation method that simulates directly at block support scale by estimating, at every block location, the cross-support joint probability density function.
Legendre-like splines are the basis functions used to approximate the above density function. The related coefficients are calculated from replicates of the spatial template employed. The latter template is generated from the configuration of the block to be simulated and the associated conditioning values, whose support can be at both point and block scale. The high-order character of the proposed direct block simulation method ensures that the generated realizations reflect the complex, nonlinear spatial characteristics of the variables being simulated and reproduce the connectivity of extreme values. The proposed algorithm is tested using an exhaustive image, showing that the different realizations generated can reasonably reproduce the spatial architectures observed in the exhaustive image. An application at a gold deposit shows the practical aspects of the method. In addition, it documents that the method works well, with simulated realizations shown to reproduce the spatial statistics of the available data up to the fourth-order cumulants that were calculated. Further work will focus on improving the computational efficiency, generating training images that are consistent with the high-order relations in the available data, and extending the proposed method to jointly simulate multiple variables.

This work is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) CRD Grant CRDPJ 500414-16, the COSMO mining industry consortium (AngloGold Ashanti, Barrick Gold, BHP, De Beers, IAMGOLD, Kinross, Newmont Mining and Vale), NSERC Discovery Grant 239019, and the IAMG through the 2017 Mathematical Geosciences Student Award.

References

Arpat GB, Caers J (2007) Conditional simulation with patterns. Math Geol 39(2):177–203. https://doi.org/10.1007/s11004-006-9075-3
Boucher A, Dimitrakopoulos R (2009) Block simulation of multiple correlated variables.
Math Geosci 41(2):215–237. https://doi.org/10.1007/s11004-008-9178-0
Chatterjee S, Mustapha H, Dimitrakopoulos R (2016) Fast wavelet-based stochastic simulation using training images. Comput Geosci 20(3):399–420. https://doi.org/10.1007/s10596-015-9482-y
David M (1988) Handbook of applied advanced geostatistical ore reserve estimation. Elsevier, Amsterdam
de Boor C (1978) A practical guide to splines. Springer, Berlin
Dimitrakopoulos R, Luo X (2004) Generalized sequential Gaussian simulation on group size v and screen-effect approximations for large field simulations. Math Geol 36(5):567–590. https://doi.org/10.1023/B:MATG.0000037737.11615.df
Dimitrakopoulos R, Mustapha H, Gloaguen E (2010) High-order statistics of spatial random fields: exploring spatial cumulants for modeling complex non-Gaussian and non-linear phenomena. Math Geosci 42(1):65–99. https://doi.org/10.1007/s11004-009-9258-9
Emery X (2009) Change-of-support models and computer programs for direct block-support simulation. Comput Geosci 35(10):2047–2056. https://doi.org/10.1016/j.cageo.2008.12.010
Godoy M (2003) The effective management of geological risk in long-term production scheduling of open pit mines. Ph.D. Thesis, University of Queensland, Brisbane, QLD, Australia
Goodfellow R, Albor Consuegra F, Dimitrakopoulos R, Lloyd T (2012) Quantifying multi-element and volumetric uncertainty, Coleman McCreedy deposit, Ontario, Canada. Comput Geosci 42:71–78. https://doi.org/10.1016/j.cageo.2012.02.018
Goovaerts P (1997) Geostatistics for natural resources evaluation. Oxford University Press, New York
Guardiano FB, Srivastava RM (1993) Multivariate geostatistics: beyond bivariate moments. In: Soares A (ed) Geostatistics Tróia'92, vol 1.
Springer, Dordrecht, pp 133–144
Isaaks EH, Srivastava RM (1989) Applied geostatistics. Oxford University Press, Oxford
Johnson ME (1987) Multivariate statistical simulation. Wiley, Hoboken
Journel AG (1994) Modeling uncertainty: some conceptual thoughts. In: Dimitrakopoulos R (ed) Geostatistics for the next century. Springer, Dordrecht, pp 30–43
Journel AG (2018) Roadblocks to the evaluation of ore reserves - the simulation overpass and putting more geology into numerical models of deposits. In: Dimitrakopoulos R (ed) Advances in applied strategic mine planning. Springer, Heidelberg, pp 47–55
Journel AG, Alabert F (1989) Non-Gaussian data expansion in the earth sciences. Terra Nova 1(2):123–134. https://doi.org/10.1111/j.1365-3121.1989.tb00344.x
Journel AG, Huijbregts CJ (1978) Mining geostatistics. Blackburn, New York
Lebedev NN (1965) Special functions and their applications. Prentice-Hall, New York
Li X, Mariethoz G, Lu DT, Linde N (2016) Patch-based iterative conditional geostatistical simulation using graph cuts. Water Resour Res 52(8):6297–6320. https://doi.org/10.1002/2015WR018378
Mariethoz G, Caers J (2014) Multiple-point geostatistics: stochastic modeling with training images. Wiley, Hoboken
Mariethoz G, Renard P, Straubhaar J (2010) The direct sampling method to perform multiple-point geostatistical simulations. Water Resour Res 46(11):1–14. https://doi.org/10.1029/2008WR007621
Minniakhmetov I, Dimitrakopoulos R (2017a) Joint high-order simulation of spatially correlated variables using high-order spatial statistics. Math Geosci 49(1):39–66. https://doi.org/10.1007/s11004-016-9662-x
Minniakhmetov I, Dimitrakopoulos R (2017b) A high-order, data-driven framework for joint simulation of categorical variables.
In: Gómez-Hernández JJ, Rodrigo-Ilarri J, Rodrigo-Clavero ME et al (eds) Geostatistics Valencia 2016. Springer, Cham, pp 287–301
Minniakhmetov I, Dimitrakopoulos R, Godoy M (2018) High-order spatial simulation using Legendre-like orthogonal splines. Math Geosci 50(7):753–780. https://doi.org/10.1007/s11004-018-9741-2
Mustapha H, Dimitrakopoulos R (2010a) High-order stochastic simulation of complex spatially distributed natural phenomena. Math Geosci 42(5):457–485. https://doi.org/10.1007/s11004-010-9291-8
Mustapha H, Dimitrakopoulos R (2010b) A new approach for geological pattern recognition using high-order spatial cumulants. Comput Geosci 36(3):313–334. https://doi.org/10.1016/j.cageo.2009.04.015
Mustapha H, Dimitrakopoulos R (2011) HOSIM: a high-order stochastic simulation algorithm for generating three-dimensional complex geological patterns. Comput Geosci 37(9):1242–1253. https://doi.org/10.1016/j.cageo.2010.09.007
Mustapha H, Chatterjee S, Dimitrakopoulos R (2014) CDFSIM: efficient stochastic simulation through decomposition of cumulative distribution functions of transformed spatial patterns. Math Geosci 46(1):95–123. https://doi.org/10.1007/s11004-013-9490-1
Osterholt V, Dimitrakopoulos R (2007) Simulation of orebody geology with multiple-point geostatistics—application at Yandi channel iron ore deposit, WA, and implications for resource uncertainty. In: Dimitrakopoulos R (ed) Orebody modelling and strategic mine planning. AusIMM Spectrum series 14, pp 51–60
Remy N, Boucher A, Wu J (2009) Applied geostatistics with SGeMS: a user's guide. Cambridge University Press, Cambridge
Strebelle S (2002) Conditional simulation of complex geological structures using multiple-point statistics. Math Geol 34(1):1–21.
https://doi.org/10.1023/A:1014009426274
Stuart A, Ord JK (1987) Kendall's advanced theory of statistics, 5th edn. Oxford University Press, New York
Wei Y, Wang G, Yang P (2013) Legendre-like orthogonal basis for spline space. CAD Comput Aided Des 45(2):85–92. https://doi.org/10.1016/j.cad.2012.07.011
Yao L, Dimitrakopoulos R, Gamache M (2018) A new computational model of high-order stochastic simulation based on spatial Legendre moments. Math Geosci 50(8):929–960. https://doi.org/10.1007/s11004-018-9744-z
Zhang T, Switzer P, Journel A (2006) Filter-based classification of training image patterns for spatial simulation. Math Geol 38(1):63–80. https://doi.org/10.1007/s11004-005-9004-x
Zhang T, Gelman A, Laronga R (2017) Structure- and texture-based fullbore image reconstruction. Math Geosci 49(2):195–215. https://doi.org/10.1007/s11004-016-9649-7

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

COSMO – Stochastic Mine Planning Laboratory, Department of Mining and Materials Engineering, McGill University, Montreal, Canada

de Carvalho JP, Dimitrakopoulos R, Minniakhmetov I (2019) Math Geosci. https://doi.org/10.1007/s11004-019-09784-x. First Online 20 February 2019
\begin{document} \title{Gevrey genericity of Arnold diffusion in \emph{a priori} unstable Hamiltonian systems} \author{Qinbo Chen $^{\dag, \ddag}$} \address{$^\dag$ Department of Mathematics, Nanjing University, Nanjing 210093, China} \address{$^\ddag$ Morningside Center of Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China} \email{[email protected]} \author{Chong-Qing Cheng *} \address{* Department of Mathematics, Nanjing University, Nanjing 210093, China} \email{[email protected]} \subjclass[2010]{37J40, 37J50.} \begin{abstract} It is well known that under generic $C^r$ smooth perturbations, the phenomenon of global instability, known as Arnold diffusion, exists in \emph{a priori} unstable Hamiltonian systems. In this paper, by using variational methods, we will prove that under generic Gevrey smooth perturbations, Arnold diffusion still exists in the \emph{a priori} unstable Hamiltonian systems of two and a half degrees of freedom. \end{abstract} \maketitle \setcounter{tocdepth}{1} \section{Introduction}\label{introduction} Throughout this paper, we denote by $\mathbb T^n\times\mathbb R^n$ the cotangent bundle $T^*\mathbb T^n$ of the torus $\mathbb T^n$ with $\mathbb T=\mathbb R/\mathbb Z$, and endow $\mathbb T^n\times\mathbb R^n$ with its usual coordinates $(q,p)$ where $q=(q_1,\cdots,q_n)$ and $p=(p_1,\cdots,p_n)$. We also endow the phase space with its canonical symplectic form $\Omega=\sum_{i=1}^ndq_i\wedge dp_i$. A Hamiltonian system is usually a dynamical system governed by the following Hamilton's equations $$\dot{q}=\frac{\partial H}{\partial p}, \quad \dot{p}=-\frac{\partial H}{\partial q},\quad (q,p)\in\mathbb T^n\times\mathbb R^n$$ where $H(q, p, t)$ is a Hamiltonian function and the dependence on the time $t$ is $1$-periodic, so $t\in\mathbb T$. The goal of this paper is to present global instability in a class of Hamiltonians. 
The problem of the influence of small perturbations on an integrable Hamiltonian system was considered by Poincar\'e to be \emph{the fundamental problem of Hamiltonian dynamics}. It is customary to consider a nearly-integrable Hamiltonian system of the form $H=H_0(p)+\varepsilon H_1(q, p, t)$. Notice that for $\varepsilon=0$ such systems do not admit any instability phenomenon. For $0<\varepsilon\ll 1$, the celebrated KAM theory asserts that a set of nearly full measure in the phase space consists of invariant tori carrying quasi-periodic motions, and the oscillation of the action variables $p$ on each KAM torus is at most of order $\sqrt{\varepsilon}$. For $n\geq2$, the complement of the union of the KAM tori is connected, so a natural question that arises is whether it is possible to find large evolution of the action variables, of order $1$. In the celebrated paper \cite{Ar1964}, Arnold first proposed an example of a nearly-integrable Hamiltonian system with two and a half degrees of freedom, which admits trajectories whose action variables have large oscillation. Moreover, he also conjectured that such an instability phenomenon occurs in generic nearly-integrable systems. This is known as the Arnold diffusion conjecture, and it has been investigated extensively since then. The mechanism of Arnold's original example is based on the existence of a normally hyperbolic invariant cylinder (NHIC) foliated by a family of hyperbolic invariant tori, or whiskered tori. The unstable manifold of one torus intersects transversally the stable manifold of another nearby torus. These tori constitute a transition chain along which diffusion takes place. By Nekhoroshev theory, such diffusion must be extremely slow. This mechanism has inspired a large number of studies of Hamiltonians possessing certain hyperbolic geometric structures. In the literature, such a system is referred to as ``\emph{a priori} unstable", to be distinguished from a nearly-integrable system (i.e. ``\emph{a priori} stable").
There have been many works devoted to the \emph{a priori} unstable systems based on Arnold's geometric mechanism, and most of them have tried to find transition chains in more general cases \cite{BBB2003,BB2002,Cr2003, FP2001,KL2008,LM2005,Zh2011}, \emph{etc}. However, for a general \emph{a priori} unstable system the transition chain cannot be formed by a continuous family of tori but only by a Cantorian family, and the size of the gaps between the persisting tori could be larger than the size of the intersections of the stable and unstable manifolds. This is known as the \emph{large gap problem}. In the last two decades there have been several methods to overcome the large gap problem. Among them there are mainly two methods concerning the genericity of instability: variational methods and geometric methods. The first attempt to study Arnold's original example from a variational viewpoint was made by U. Bessi \cite{Bessi1996}. Essential progress has also been made by J. N. Mather. In the celebrated paper \cite{Ma1993}, he developed a powerful variational tool to study the global instability in the framework of convex Lagrangian systems. In an unpublished manuscript \cite{Ma1995}, he further showed the existence of orbits with unbounded energy in perturbations of a geodesic flow on $\mathbb T^2$ by a generic time-periodic potential. Based on Mather's variational mechanism, the authors of \cite{CY2004} constructed diffusing orbits and proved the $C^r$-genericity ($r$ finite and suitably large) of Arnold diffusion for the \emph{a priori} unstable systems with two and a half degrees of freedom. On the other hand, several authors have used geometric methods, which also apply to Hamiltonians that are not necessarily convex, to obtain Arnold diffusion.
More precisely, the authors in \cite{DLS2000,DLS2003,DLS2006} defined the so-called scattering map which accounts for the outer dynamics along homoclinic orbits, and overcame the large gap problem by incorporating in the transition chain new invariant objects, like secondary tori and the stable and unstable manifolds of lower dimensional tori; In \cite{Tr2002}, the author geometrically defined the so-called separatrix map near the normally hyperbolic invariant cylinder, then he showed in \cite{Tr2004} the existence of diffusion by making full use of the dynamics of this map, and even estimated the optimal diffusion speed of order $\varepsilon/|\log \varepsilon|$ (see also \cite{BBB2003}). Moreover, for the case of \emph{a priori} unstable Hamiltonians with higher degrees of freedom, similar results have also been obtained by variational or geometric methods in \cite{Be2008,CY2009,DLS2016,DT2018,GLS2019,LC2010,Tr2012}. The \emph{a priori} stable case poses a new difficulty: the presence of multiple resonances. In the paper \cite{Ma2004} (see also \cite{Ma2012}), Mather first made an announcement for systems with two degrees of freedom in the time-periodic case or with three degrees of freedom in the autonomous case, under a series of cusp-residual conditions. Hence the diffusion problem in this situation was thought to possess only cusp-residual genericity. The complete proof for the autonomous systems with three degrees of freedom appeared in the preprint \cite{Ch2012}, and the main ingredients have been published in the recent works \cite{CZh2016,Ch2017Uniform,Ch2017,Ch2018}. Indeed, the main difficulty in this case arises from the dynamics around strong double resonances. It is because away from double resonances, one could apply normal form theory to construct NHICs with a length independent of $\varepsilon$, along which the local instability can be obtained as in the \emph{a priori} unstable case \cite{Be2010Large,BKZ2016}. 
To solve the problem of double resonance, the paper \cite{Ch2017} presented a new variational mechanism to switch from one resonance to another, which eventually proved the cusp-residual genericity of diffusion in the $C^r$ smooth category \cite{Ch2018}. Moreover, we mention that similar results on diffusion have also been obtained, by using variational methods, in the paper \cite{KZ2015} and the preprint \cite{KZ2013} for systems with 2.5 degrees of freedom. Also, we refer the reader to the preprints \cite{Marco2016Arnold,Marco2016chains,GM2017} for systems with 3 degrees of freedom by using the geometric tools. As for the case of arbitrarily higher degrees of freedom, we refer the reader to the preprint \cite{CX2015} and the announcement \cite{KZ2014Announ}. Anyway, there have been many other works related to the problem of Arnold diffusion but we cannot list all of them, see \cite{BCV2001,BT1999,DH2011,GKZ2016,GT2008,GR2007,KZ2014,ZC2014}, \emph{etc}. To the author's knowledge, the genericity of Arnold diffusion is by now quite well understood in the $C^r$ smooth category, not yet in the analytic category, or the Gevrey smooth category \cite{Gev1918}. The present paper is interested in whether the phenomenon of large evolution exists generically in the Gevrey smooth Hamiltonians. Given $\alpha\geq 1$, a Gevrey-$\alpha$ function is an ultra-differentiable function whose $k$-th order partial derivatives are bounded by $O(M^{-|k|}k!^{\alpha})$. For the case $\alpha=1$, it is exactly a real analytic function. Hence, the Gevrey class is intermediate between the $C^\infty$ class and the real analytic class. Besides, a key point for the Gevrey class is that it allows the existence of a function with compact support (i.e. bump function). But no analytic function has compact support. 
To consider the Arnold diffusion problem in the Gevrey topology, we adopt the Gevrey norm introduced by Marco and Sauzin in \cite{MS2003} during a collaboration with Herman (see Definition \ref{def of gev}). Apart from the theory of PDEs, where it has been widely used, the Gevrey class is also studied in the field of Dynamical Systems. For example, we refer to \cite{Bo2011,Bo2013,BF2017,BM2011,LDG2017,Po2004}, \emph{etc} for the stability theory, such as KAM theory and Nekhoroshev theory. We also refer the reader to \cite{BK2005,LMS2015,MS2004,Wa2015,FaSa2018}, \emph{etc} for some relevant results on instability. All these studies make us believe that one can also consider the genericity problem of diffusion in the Gevrey case. Therefore, in this paper we start by considering the \emph{a priori} unstable, Gevrey-$\alpha$ ($\alpha>1$) Hamiltonian systems of two and a half degrees of freedom. The case $\alpha=1$ (i.e. the analytic genericity) is more complicated and has not been fully studied. Here we only mention a recent work \cite{GLS2019} which proposes a general geometric mechanism that might be useful for analytic genericity. In the same spirit as in \cite{GLS2019}, the paper \cite{GT2017} gives models where the analytic genericity can be achieved for \emph{a priori} chaotic symplectic maps, provided that the scattering map has no monodromy and is globally defined on the NHIC. Before stating our main results, we review the concept of Gevrey function and some standard facts. \begin{Def}[\emph{Gevrey function} \cite{MS2003}]\label{def of gev} Let $\alpha\geq 1, \mathbf L>0$ and $K$ be an $n$-dimensional compact domain.
A real-valued $C^\infty$ function $f(x)$ defined on $K$ is said to be Gevrey-($\alpha,\mathbf L$) if $$ \| f\|_{\alpha,\mathbf L}:=\sum_{k\in\mathbb N^n}\frac{\mathbf L^{|k|\alpha}}{(k!)^\alpha}\|\partial^kf\|_{C^0(K)}<+\infty,$$ with the standard multi-index notation $k=(k_1,\cdots,k_n)\in\mathbb N^n$, $|k|=k_1+\cdots+k_n$, $k!=k_1!\cdots k_n!$ and $\partial^k=\partial^{k_1}_{x_1}\cdots\partial^{k_n}_{x_n}$. \end{Def} Let $\mathbf G^{\alpha,\mathbf L}(K):=\{ f\in C^\infty(K)~:~\|f\|_{\alpha,\mathbf L}<+\infty\}$. The space $\mathbf G^{\alpha,\mathbf L}(K)$ endowed with the norm $\|\cdot\|_{\alpha,\mathbf L}$ is a Banach space. Sometimes we also write $\mathbf G^{\alpha}(K):=\bigcup_{\mathbf L>0}\mathbf G^{\alpha,\mathbf L}(K)$. In particular, for $K\subset \mathbb R^n$ and $\alpha=1$, $\mathbf G^{1}(K)$ is exactly the space of real analytic functions on $K$: any function $f\in \mathbf G^{1,\mathbf L}(K)$ is real analytic in $K$ and admits an analytic extension in the complex domain $\{z\in\mathbb C^n:\textup{dist}(z,K)< \mathbf L\}$. Conversely, for any real analytic function $f$ in $K$, there exists $\mathbf L>0$ such that $f\in \mathbf G^{1,\mathbf L}(K)$. However, for $\alpha>1$, $\mathbf G^{\alpha,\mathbf L}(K)$ admits non-analytic functions. Therefore, the Gevrey-smooth category is intermediate between the $C^\infty$ category and the analytic category. Gevrey class has the following useful properties which have been already proved in \cite{MS2003}: \begin{enumerate}[\rm(G1)] \item\label{algebra norm} The norm $\|\cdot\|_{\alpha,\mathbf L}$ is an algebra norm, namely $\|fg\|_{\alpha,\mathbf L}\leq\|f\|_{\alpha,\mathbf L}\|g\|_{\alpha,\mathbf L}$. 
\item\label{derivative Gevrey}If $0<\lambda<\mathbf L$ and $f\in\mathbf G^{\alpha,\mathbf L}(K)$, then all partial derivatives of $f$ belong to $\mathbf G^{\alpha,\mathbf L-\lambda}(K)$ and $\sum\limits_{k\in\mathbb N^n,|k|=l} \|\partial^kf\|_{\alpha,\mathbf L-\lambda}\leq l!^\alpha\lambda^{-l\alpha}\|f\|_{\alpha,\mathbf L}.$ \item\label{composition}Let $f\in\mathbf G^{\alpha,\mathbf L}(K_m)$ where $K_m$ is an $m$-dimensional domain and let $g=(g_1,\cdots,g_m)$ be a mapping whose components $g_i\in\mathbf G^{\alpha,\mathbf L_1}(K_n)$. If $g(K_n)\subset K_m$ and $\|g_i\|_{\alpha,\mathbf L_1}-\|g_i\|_{C^0(K_n)}\leq\mathbf L^\alpha/n^{\alpha-1}$ for all $1\leq i\leq m,$ then $f\circ g\in\mathbf G^{\alpha,\mathbf L_1}(K_n)$ and $\|f\circ g\|_{\alpha,\mathbf L_1}\leq\|f\|_{\alpha,\mathbf L}$. \end{enumerate} \subsection{Setup and main result} The current paper will mainly focus on convex Hamiltonians of two and a half degrees of freedom. As we will see later, all discussions will be restricted to a compact domain in $\mathbb T^2\times\mathbb R^2\times\mathbb T$, so we fix, once and for all, a constant $R>1$ and a compact set $$\mathscr D_R=\mathbb T^2\times\bar{B}_R(0)\times\mathbb T,$$ where $B_R(0)\subset\mathbb R^2$ is an open ball of radius $R$ centered at 0 and $\bar{B}_R(0)$ is its closure. By Definition \ref{def of gev}, the space $\mathbf G^{\alpha,\mathbf L}(\mathscr D_R)$ consists of all real-valued smooth functions $f(q,p,t)$ satisfying \begin{equation}\label{gevrey norm} \| f\|_{\alpha,\mathbf L}=\sum_{k\in\mathbb N^{5}}\frac{\mathbf L^{|k|\alpha}}{k!^\alpha}\|\partial^kf\|_{C^0(\mathscr D_R)}<+\infty. \end{equation} Let $C^\omega_d(\mathscr D_R)$ be the space of all real-valued analytic functions on $\mathscr D_R$, admitting an analytic extension in the complex domain $\{(q,p,t)\in(\mathbb C/\mathbb Z)^2\times\mathbb C^2\times(\mathbb C/\mathbb Z):\|\textup{Im}q\|_{\infty}< d,~\textup{dist}(p,\bar{B}_R(0)) < d, |\textup{Im}t|<d\}$.
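As a concrete illustration of Definition \ref{def of gev} (not taken from the paper): for $f(x)=\sin x$ on the compact interval $K=[0,2\pi]$, every derivative is $\pm\sin$ or $\pm\cos$, so $\|f^{(k)}\|_{C^0(K)}=1$ for all $k$, and the Gevrey-$(\alpha,\mathbf L)$ norm reduces to the convergent series $\sum_{k\geq0}\mathbf L^{k\alpha}/(k!)^\alpha$. For $\alpha=1$ this sums to $e^{\mathbf L}$, consistent with $\mathbf G^{1}$ being the analytic class. A numerical check:

```python
import math

def gevrey_norm_sin(alpha, L, kmax=80):
    """Truncated Gevrey-(alpha, L) norm of f(x) = sin(x) on K = [0, 2*pi],
    where every derivative has C0 norm 1 (derivatives are +-sin, +-cos)."""
    return sum(L ** (k * alpha) / math.factorial(k) ** alpha
               for k in range(kmax + 1))

# alpha = 1 recovers the analytic case: the series sums to exp(L).
print(gevrey_norm_sin(1.0, 0.5), math.exp(0.5))
```

For $\mathbf L<1$, increasing $\alpha$ only shrinks each term, reflecting that the same function lies in every Gevrey class $\alpha\geq1$ with a smaller norm.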
Set $C^\omega(\mathscr D_R)=\bigcup_{d>0} C^\omega_d(\mathscr D_R)$; it is well known that \begin{enumerate}[(i)] \item For $\alpha\geq 1$, $\mathbf L>0$ and any $d>\mathbf L^\alpha$, $$C^\omega_d(\mathscr D_R)\subset \mathbf G^{\alpha,\mathbf L}(\mathscr D_R)\subset C^\infty(\mathscr D_R),\quad C^\omega(\mathscr D_R)\subset \mathbf G^{\alpha}(\mathscr D_R)\subset C^\infty(\mathscr D_R).$$ \item $C^\omega(\mathscr D_R)=\mathbf G^{1}(\mathscr D_R).$ \end{enumerate} Now, we introduce the \emph{a priori} unstable Hamiltonian model considered in this paper and state the main assumptions. Let $q=(q_1, q_2)\in\mathbb T^2$ and $p=(p_1,p_2)\in\mathbb R^2$. We consider a time-periodic and $C^r (r>2)$ smooth Hamiltonian of the form: \begin{equation}\label{hamiltonian} \begin{aligned} H(q, p, t)=H_0(q, p)+H_1(q, p, t),\quad \text{where}\quad H_0(q, p)=h_1(p_1)+h_2(q_2,p_2). \end{aligned} \end{equation} Here, the term $H_1$ is a small perturbation which is periodic of period 1 in $t$. Our main assumptions on $H_0$ are the following: \begin{enumerate}[\bf(H1)] \item \textbf{Convexity and superlinearity}: for each $q\in\mathbb T^2$, the Hessian $\partial_{pp}H_0(q,p)$ is positive definite, and $\lim_{\|p\|\rightarrow +\infty} H_0(q,p)/\|p\|=+\infty.$ \item \textbf{A priori hyperbolicity}: the Hamiltonian flow $\Phi^t_{h_2}$, determined by $h_2$, has a hyperbolic fixed point $(q_2,p_2)=(x^*,y^*)$. Moreover, the function $h_2(q_2,y^*): \mathbb T \rightarrow \mathbb R$ attains its unique maximum at $q_2=x^*$. Without loss of generality, we can assume $(x^*,y^*)=(0,0)$. \end{enumerate} A prototype example of such a system is the coupling of a rotator and a pendulum $$H=\frac{p_1^2}{2}+\frac{p_2^2}{2}+(\cos2\pi q_2-1)+H_1(q, p, t),$$ which has been considered many times in the literature. Keeping this example in mind will help the reader better understand our result and method.
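For the pendulum part $h_2(q_2,p_2)=p_2^2/2+\cos2\pi q_2-1$ of this prototype, hypothesis \textbf{(H2)} can be checked directly: linearizing Hamilton's equations at $(q_2,p_2)=(0,0)$ yields real eigenvalues $\pm2\pi$, so the fixed point is hyperbolic. A small numerical confirmation (illustrative only, not part of the proof):

```python
import numpy as np

# Hamilton's equations for h2: q2' = p2, p2' = 2*pi*sin(2*pi*q2).
# Jacobian of this vector field at the fixed point (q2, p2) = (0, 0):
J = np.array([[0.0, 1.0],
              [4.0 * np.pi ** 2, 0.0]])
eigs = np.sort(np.linalg.eigvals(J).real)
print(eigs)  # approximately [-2*pi, 2*pi]: a saddle, hence hyperbolic
```

The maximum condition in \textbf{(H2)} also holds here, since $h_2(q_2,0)=\cos2\pi q_2-1$ attains its unique maximum $0$ at $q_2=0$.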
As we will see later, the above assumptions {\bf(H1)--(H2)} are in the same spirit as in \cite{CY2004} while our main result and approach have some differences. Let $\mathfrak B^{\mathbf L}_{\varepsilon,R}=\{ H_1\in C^\infty(\mathscr D_R) : \|H_1\|_{\alpha,\mathbf L}<\varepsilon\}$ $\subset\mathbf G^{\alpha,\mathbf L}(\mathscr D_R)$ denote the open ball of radius $\varepsilon$ centered at the origin with respect to the norm $\|\cdot\|_{\alpha,\mathbf L}$. \begin{The}\label{main theorem} Let $\alpha>1, R>1$ and assume that $H_0$ in \eqref{hamiltonian} is of class $C^r$ $( r>2)$, then there exists a positive constant $\mathbf L_0=\mathbf L_0(H_0,\alpha,R)$ such that, for each $\mathbf L\in(0,\mathbf L_0]$ and a sequence of open balls $B_s(y_1),\cdots, B_s(y_k)\subset\mathbb R^2$, of radius $s$ centered at $y_\ell\in[-R+1,R-1]\times\{0\}\subset\mathbb R^2$, $\ell=1,\cdots, k$, we have: there exist a positive number $\varepsilon_0=\varepsilon_0(H_0,\alpha,R,s,\mathbf L)$ and an open and dense subset $\mathfrak S^{\mathbf L}_{\varepsilon_0,R}\subset\mathfrak B^{\mathbf L}_{\varepsilon_0,R}$ such that for each perturbation $H_1\in\mathfrak S^{\mathbf L}_{\varepsilon_0,R}$, the system $H=H_0+H_1$ has a trajectory $(q(t),p(t))$ whose action variables $p(t)$ pass through the ball $B_s(y_\ell)$ at the time $t=t_\ell$, where $t_1<t_2<\cdots<t_k$. \end{The} \begin{Rem} Just as J. N. Mather did in \cite{Ma2004,Ma2012}, the smoothness of the unperturbed Hamiltonian $H_0$ could differ from that of the perturbation term $H_1$. Notice that $\mathbf L_0$ can not be an arbitrary constant, the reason is that our approach needs to adopt the Gevrey approximation (see Theorem \ref{Gevrey approx}). \end{Rem} \begin{Rem}[Autonomous case] Recall that Mather's cohomology equivalence is trivial for an autonomous system (cf. \cite{Be2002}). The problem is that, unlike the time-periodic case, there is no canonical global transverse section of the flow in an autonomous system. 
In \cite{LC2010}, this difficulty was overcome by taking local transverse sections, which generalizes Mather's cohomology equivalence. Thus we believe that the Gevrey genericity is still valid for the \emph{a priori} unstable autonomous Hamiltonians. However, in this paper, we only consider the non-autonomous case. \end{Rem} The perturbation technique used in the current paper can also prove the genericity in the sense of Ma\~n\'e, which means that the diffusion is still a typical phenomenon when $H_0$ is perturbed by potential functions. More precisely, let $\mathbf{B}^{\mathbf L}_{\varepsilon}$ $\subset\mathbf G^{\alpha,\mathbf L}(\mathbb T^2\times\mathbb T)$ denote the open ball of radius $\varepsilon$ centered at the origin with respect to the norm $\|\cdot\|_{\alpha,\mathbf L}$. Then we have \begin{The}\label{main thm2} Under the same assumptions as in Theorem \ref{main theorem}, there exists $\mathbf L_0=\mathbf L_0(H_0,\alpha,R)>0$ such that, for each $\mathbf L\in(0,\mathbf L_0]$ and any sequence of open balls $B_s(y_1),\cdots, B_s(y_k)\subset\mathbb R^2$, of radius $s$ centered at $y_\ell\in[-R+1,R-1]\times\{0\}\subset\mathbb R^2$, $\ell=1,\cdots, k$, we have: there exist a positive number $\varepsilon_0=\varepsilon_0(H_0,\alpha,R,s,\mathbf L)$ and an open and dense subset $\mathbf{S}^{\mathbf L}_{\varepsilon_0}\subset\mathbf{B}^{\mathbf L}_{\varepsilon_0}$ such that for each potential perturbation $H_1\in\mathbf{S}^{\mathbf L}_{\varepsilon_0}$, the system $H=H_0+H_1$ has a trajectory $(q(t),p(t))$ whose action variables $p(t)$ pass through the ball $B_s(y_\ell)$ at time $t=t_\ell$, where $t_1<t_2<\cdots<t_k$. \end{The} \subsection{Outline of this paper} This paper mainly adopts variational methods to construct diffusing orbits, so we need to pass to the Lagrangian formalism. We still denote by $\mathbb T^2\times\mathbb R^2$ the tangent bundle $T\mathbb T^2$, and endow $\mathbb T^2\times\mathbb R^2$ with its usual coordinates $(q, v)$.
The Lagrangian $L:\mathbb T^2\times\mathbb R^2\times\mathbb T \rightarrow \mathbb R$ associated to $H$ is defined as follows: \begin{equation}\label{lagrangian} L(q,v,t):=\max_{p}\{\langle p, v\rangle-H(q,p,t)\}=L_0(q,v)+L_1(q,v,t),\quad L_0=l_1(v_1)+l_2(q_2,v_2). \end{equation} In our proofs, we will apply results in Mather theory, where the Lagrangian is required to satisfy the Tonelli conditions (see Section \ref{sec_Preliminaries}): the fiberwise Hessian is positive definite, the Lagrangian is fiberwise superlinear, and the Euler-Lagrange flow is complete. In fact, without affecting our analysis, we can always reduce to the Tonelli case. For our Lagrangian $L=L_0+L_1$ in \eqref{lagrangian}, it is clear that the unperturbed part $L_0$ is a Tonelli Lagrangian as a result of hypothesis \textbf{(H1)}. Now, we turn to the small perturbation term $L_1$. As we will see later, only the information on a compact region is needed in our proofs, so the study of Arnold diffusion is not affected if one modifies the perturbation function $L_1$ outside that compact set. For example, one can introduce a new function $\widetilde{L}_1$ which has compact support and is identically equal to $L_1$ on a compact set $\{\|v\|_q\leq K\}$. With this modification, we then introduce a new Lagrangian $\widetilde{L}:=L_0+\widetilde{L}_1$. Observe that the modified Lagrangian $\widetilde{L}$ satisfies the Tonelli conditions since the perturbation term $\widetilde{L}_1$ is small enough and has compact support. Also, it is quite clear that $\widetilde{L}$ and $L$ generate the same Euler-Lagrange flow when restricted to the compact region $\{\|v\|_q\leq K\}$. Such a modification is elementary, see for instance \cite{Ma2004}. Therefore, in what follows, we can always assume, without loss of generality, that our Lagrangian \eqref{lagrangian} satisfies the Tonelli conditions.
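One concrete choice of such a cutoff (a sketch; the bump function $\chi$ and the width $1$ of the transition region are our notation, not fixed in the text) is the following.

```latex
% Choose a smooth bump \chi\in C^\infty(\mathbb R,[0,1]) with
% \chi\equiv 1 on (-\infty,K] and \chi\equiv 0 on [K+1,+\infty), and set
\widetilde{L}_1(q,v,t):=\chi\big(\|v\|_q\big)\,L_1(q,v,t),\qquad
\widetilde{L}:=L_0+\widetilde{L}_1.
```

Then $\widetilde{L}_1=L_1$ on $\{\|v\|_q\leq K\}$ and $\widetilde{L}_1$ vanishes for $\|v\|_q\geq K+1$; since $\widetilde{L}=L_0$ outside a fiberwise compact set, one expects the convexity, superlinearity and completeness of the Tonelli Lagrangian $L_0$ to survive the perturbation once $L_1$ is small enough.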
Then, through the Legendre transformation \begin{equation}\label{Legendre_tran} \begin{aligned} \mathscr L: T^*\mathbb T^2\times\mathbb T &\rightarrow T\mathbb T^2\times\mathbb T,\\ (q,p,t) &\mapsto(q,\frac{\partial H}{\partial p}(q,p,t),t), \end{aligned} \end{equation} we can write \begin{equation*} L(q,v,t)=\langle \pi_p\circ\mathscr L^{-1}(q,v,t),v\rangle-H\circ\mathscr L^{-1}(q,v,t), \end{equation*} where $\pi_p$ denotes the projection $(q,p,t)\mapsto p$. Then Hamilton's equations $\dot{q}=\frac{\partial H}{\partial p}$, $\dot{p}=-\frac{\partial H}{\partial q}$ are equivalent to the Euler-Lagrange equation \begin{equation*} \frac{d}{dt}\bigg(\frac{\partial L}{\partial v}\bigg)-\frac{\partial L}{\partial q}=0. \end{equation*} Throughout this paper, we use $\phi^t_L$ to denote the Euler-Lagrange flow determined by $L$ and $\Phi^t_H$ to denote the Hamiltonian flow determined by $H$. The Fenchel inequality and hypothesis (\textbf{H2}) together give rise to \begin{equation*} h_2(q_2,0)+l_2(q_2,v_2)\geq 0,\quad h_2(0,0)+l_2(0,0)=0,\quad (q_2,v_2)\in T\mathbb T. \end{equation*} Since $q_2=0$ (mod 1) is the unique maximum point of the function $h_2(\cdot,0):\mathbb T\to\mathbb R$, one gets \begin{equation}\label{weiyimin} l_2(0,0)=-h_2(0,0)\leq-h_2(q_2,0)\leq l_2(q_2,v_2),\quad (q_2,v_2)\in T\mathbb T. \end{equation} Then the point $(q_2,v_2)=(0,0)$ is the unique minimum point of the function $l_2$ as a consequence of the strict convexity. Also, $(0,0)$ is a hyperbolic fixed point for the Euler-Lagrange flow $\phi^t_{l_2}$. Compared with the variational proofs of $C^r$-genericity in \cite{CY2004,CY2009}, the method in this paper contains some new techniques. Indeed, the strategy used in \cite{CY2004,CY2009}, which perturbs the generating functions to create genericity, does not seem applicable to the Gevrey genericity.
The main difficulty arises from the fact that, when we estimate the Gevrey smoothness of a Hamiltonian flow, we cannot avoid the decrease of the Gevrey coefficient $\mathbf L$ during the switch from a generating function to its corresponding Hamiltonian, or the switch from a Lagrangian to its associated Hamiltonian (see property (G\ref{derivative Gevrey}) above). Thus in this paper, inspired by the ideas in \cite{Ch2017}, we choose to directly perturb a Hamiltonian by potential functions; one advantage of this approach is that the Lagrangian associated to the perturbed Hamiltonian $H+V(q,t)$ is exactly $L-V(q,t)$. To this end, some quantitative estimates are required, such as the Gevrey approximation and the corresponding inverse function theorem. It is also worth mentioning that one can establish the genericity not only in the usual sense but also in the sense of Ma\~n\'e. We also believe that our results could be obtained by geometric tools, such as the scattering maps developed in \cite{DLS2006,DH2009,DLS2008,DLS2016}, or the separatrix maps in \cite{Tr2004,Tr2012,DT2018}. In our variational proof of genericity, the modulus of continuity of barrier functions is crucial. To implement this argument, the work \cite{CY2004} introduced the following parameterization technique: fixing an invariant curve $\Gamma_0$ on the NHIC, for any other invariant curve $\Gamma_\sigma$ on the NHIC, we parameterize it by $\sigma$, the area between the two curves. Then it can be shown that $\Gamma_\sigma$ is H\"older continuous with respect to $\sigma$ in the $C^0$ topology. However, by taking advantage of the tools in weak KAM theory, we can now show that this ``area'' parameter $\sigma$ is exactly the cohomology class (see Section \ref{sub holder}). This will help us simplify the proof. The structure of this paper is as follows. In Section \ref{Mathertheory}, we review some standard results in Mather theory related to the Arnold diffusion problem.
Section \ref{sec EWS} discusses the elementary weak KAM solutions, and a special ``barrier function'' whose minimal points correspond to heteroclinic orbits. In Section \ref{sec local and global}, we introduce the concept of generalized transition chain and then give the variational mechanism of constructing diffusing orbits along this chain. In Section \ref{some properties Gev}, we present some properties of Gevrey functions which are necessary for our proofs. Section \ref{sec proof main} is the main part of this paper, and applies the tools developed in the previous sections to study the Gevrey smooth systems. First, we generalize the genericity of uniquely minimal measure to the Gevrey or analytic topology. Second, we obtain certain regularity of the elementary weak KAM solutions and show how to choose a suitable Gevrey space. Finally, by proving total disconnectedness for the minimal sets of barrier functions, we establish the genericity of the generalized transition chain along which the global instability occurs; this completes the proofs of Theorem \ref{main theorem} and Theorem \ref{main thm2}. We would like to thank the anonymous referees for their insightful comments and valuable suggestions on improving our results. \section{Preliminaries: Mather theory}\label{Mathertheory}\label{sec_Preliminaries} In this section, we recall some standard results in Mather theory which are necessary for the purpose of our study; the main references are Mather's original papers \cite{Ma1991, Ma1993}. Let $M$ be a connected and compact smooth manifold without boundary, equipped with a smooth Riemannian metric $g$. Let $TM$ denote the tangent bundle; a point of $TM$ will be denoted by $(q,v)$ with $q\in M$ and $v\in T_qM$. We shall denote by $\|\cdot\|_q$ the norm induced by $g$ on the fiber $T_qM$.
A time-periodic $C^2$ function $L=L(q, v, t):TM\times\mathbb T\rightarrow \mathbb R$ is called a \emph{Tonelli Lagrangian} if it satisfies: \begin{enumerate}[\rm(1)] \item \emph{Convexity}: $L$ is strictly convex in each fiber, i.e., the second partial derivative $\partial^2 L/\partial v^2(q, v, t)$ is positive definite, as a quadratic form, for each $(q, t) \in M\times\mathbb T$; \item \emph{Superlinear growth}: $L$ is superlinear in each fiber, i.e., for each $(q,t)\in M\times\mathbb T$, $$\lim_{\|v\|_q\rightarrow +\infty}\frac{L(q,v,t)}{\|v\|_q}=+\infty.$$ \item \emph{Completeness}: all solutions of the Euler-Lagrange equation are well defined for all $t\in\mathbb R$. \end{enumerate} Let $I=[a,b]$ be an interval and $\gamma:I\rightarrow M$ be any absolutely continuous curve. Given a cohomology class $c\in H^1(M,\mathbb R)$, we choose and fix a closed 1-form $\eta_c$ with $[\eta_c]=c$. Denote by $$A_c(\gamma):=\int_a^b L(d\gamma(t),t)-\eta_c(d\gamma(t))\, dt$$ the action of $L-\eta_c$ along $\gamma$, where $d\gamma(t)=(\gamma(t),\dot\gamma(t))$. A curve $\gamma: I\rightarrow M$ is called \emph{$c$-minimal} if $$A_c(\gamma)=\min_{\substack{\xi(a)=\gamma(a),\xi(b)=\gamma(b)\\ \xi\in C^{ac}(I,M)}}\int_a^b L(d\xi(t),t)-\eta_c(d\xi(t))\, dt,$$ where $C^{ac}(I,M)$ denotes the set of absolutely continuous curves. It is well known that each minimal curve satisfies the Euler-Lagrange equation. A curve $\gamma:\mathbb R\rightarrow M$ is called \emph{globally $c$-minimal} if for any $a<b$, the curve $\gamma:[a,b]\to M$ is $c$-minimal. Accordingly, we introduce the \emph{globally minimal set} $$\widetilde{\mathcal G}(c):=\bigcup_{\gamma}\{ (d\gamma(t),t)~:~ \gamma:\mathbb R\rightarrow M~ \text{is globally~} c\text{-minimal} \}.$$ Let $\phi^t_L$ be the Euler-Lagrange flow on $TM\times\mathbb T$, and $\mathfrak M$ be the space of all $\phi^t_L$-invariant probability measures on $TM\times\mathbb T$.
For each $\mu\in\mathfrak M$, Mather has proved that $\int_{TM\times\mathbb T} \lambda \,d\mu=0$ holds for any exact 1-form $\lambda$, which yields that $\int_{TM\times\mathbb T} L-\eta_c \,d\mu=\int_{TM\times\mathbb T} L-\eta^\prime_c \, d\mu$ if $\eta_c-\eta_c^\prime$ is exact. This leads us to define \emph{Mather's $\alpha$ function}, $$\alpha(c):=-\inf_{\mu\in\mathfrak M}\int_{TM\times\mathbb T} L-\eta_c \,d\mu.$$ To some extent, the value $\alpha(c)$ is a minimal average action for $L-\eta_c$. Mather has proved that $\alpha:H^1(M,\mathbb R)\rightarrow\mathbb R$ is finite everywhere, convex and superlinear. For each $\mu\in\mathfrak M$, the \emph{rotation vector} $\rho(\mu)$ associated with $\mu$ is the unique element in $H_1(M,\mathbb R)$ that satisfies $$\langle\rho(\mu),[\eta_c]\rangle=\int_{TM\times\mathbb T}\eta_c\, d\mu,\quad \text{~ for every closed 1-form~} \eta_c,$$ where $\langle\cdot,\cdot\rangle$ denotes the dual pairing between homology and cohomology classes. Then, we can define \emph{Mather's $\beta$ function} as follows: $$\beta(h):=\inf_{\mu\in\mathfrak M, \rho(\mu)=h}\int_{TM\times\mathbb T} L\,d\mu.$$ This function $\beta:H_1(M,\mathbb R)\rightarrow\mathbb R$ is also finite everywhere, convex and superlinear. In fact, $\beta$ is the Legendre-Fenchel dual of the function $\alpha$, i.e., $\beta(h)=\max_{c}\{\langle h,c\rangle-\alpha(c)\}$. We define $$\mathfrak M^c(L):=\left\{ \mu: \int_{TM\times\mathbb T}L-\eta_c \,d\mu=-\alpha(c) \right\},~ \mathfrak M_h(L):=\left\{ \mu : \rho(\mu)=h, \int_{TM\times\mathbb T}L \,d\mu=\beta(h) \right\}.$$ By duality, it can be easily checked that $$\mathfrak M^c(L)=\bigcup_{h\in\partial \alpha(c)} \mathfrak M_h(L),$$ where $\partial \alpha(c)$ is the subdifferential. For systems with one and a half degrees of freedom, including twist maps, Mather's $\alpha$ function is of class $C^1$, so that \begin{equation}\label{ssff} \mathfrak M^c(L)= \mathfrak M_h(L),\quad d\alpha(c)=h.
\end{equation} We call each element $\mu\in\mathfrak M^c(L)$ a \emph{$c$-minimal measure}. The \emph{Mather set} of cohomology class $c$ is then defined by $$\widetilde{\mathcal M}(c):=\overline{\bigcup_{\mu\in\mathfrak M^c(L)} \text{supp}\mu}.$$ To study more dynamical properties, we need to find some ``larger'' minimal invariant sets and discuss their topological structures. For $t^\prime> t$, the action function $h^{t,t^\prime}_c:M\times M\rightarrow \mathbb R$ is defined by $$h^{t,t^\prime}_c(x,x^\prime):=\min_{\substack{\gamma(t)=x, \gamma(t^\prime)=x^\prime\\ \gamma\in C^{ac}([t,t'],M)}}\int_t^{t^\prime} (L-\eta_c)(d\gamma(s),s)\,ds+\alpha(c)\cdot(t^\prime-t).$$ Then we define a real-valued function $\Phi_c:(M\times\mathbb T)\times(M\times\mathbb T)\rightarrow \mathbb R$ by $$\Phi_c((x,\tau),(x^\prime, \tau^\prime)):=\inf_{\substack{t^\prime>t,~t\equiv\tau~\text{mod}~1 \\ t^\prime\equiv \tau^\prime~\text{mod}~1}}h^{t,t^\prime}_c(x,x^\prime),$$ and a real-valued function $h_c^\infty:(M\times\mathbb T)\times(M\times\mathbb T)\rightarrow \mathbb R $ by \begin{equation} h_c^\infty((x,\tau),(x^\prime, \tau^\prime))=\liminf\limits_{\substack{t\equiv\tau~\text{mod}~1\\ t^\prime\equiv \tau^\prime~\text{mod}~1,~t^\prime-t\rightarrow+\infty}}h^{t,t^\prime}_c(x,x^\prime). \end{equation} In the literature, $h_c^\infty$ and $\Phi_c$ are called the \emph{Peierls barrier function} and \emph{Ma\~n\'e's potential}, respectively. A minimal curve $\gamma:\mathbb R\rightarrow M$ is called \emph{$c$-semi static} if for any $t<t'$, \begin{equation}\label{semi static} A_c(\gamma|_{[t,t^\prime]})+\alpha(c)\cdot(t^\prime-t)=\Phi_c\big(~(\gamma(t),t\text{~mod~} 1),~(\gamma(t^\prime),t^\prime\text{~mod~} 1)~\big). \end{equation} A minimal curve $\gamma:\mathbb R\rightarrow M$ is called \emph{$c$-static} if for any $t<t'$, \begin{equation}\label{static} A_c(\gamma|_{[t,t^\prime]})+\alpha(c)\cdot(t^\prime-t)=-\Phi_c\big(~(\gamma(t^\prime),t^\prime\text{~mod~} 1),~(\gamma(t),t\text{~mod~} 1)~\big).
\end{equation} This gives the so-called \emph{Aubry set} $\widetilde{\mathcal A}(c)$ and \emph{Ma\~{n}\'{e} set} $\widetilde{\mathcal N}(c)$ in $TM\times\mathbb T$: \begin{equation*} \begin{split} \widetilde{\mathcal A}(c)=\bigcup \big\{(d\gamma(t),t\text{~mod~}1):~\gamma ~\text{is}~ c\text{-static}\big\}, ~~ \widetilde{\mathcal N}(c)=\bigcup \big\{(d\gamma(t),t\text{~mod~}1):~\gamma ~\text{is}~ c\text{-semi static}\big\}. \end{split} \end{equation*} The $\alpha$-limit and $\omega$-limit sets of a $c$-minimal curve $(d\gamma(t),t)$ belong to $\widetilde{\mathcal A}(c)$, see for instance \cite{Be2002}. In addition, with the canonical projection $\pi:TM\times\mathbb T\rightarrow M\times\mathbb T$, one can define the \emph{projected Aubry set} $\mathcal A(c)=\pi\widetilde{\mathcal A}(c)$, the \emph{projected Mather set} $\mathcal M(c)=\pi\widetilde{\mathcal M}(c)$, the \emph{projected Ma\~n\'e set} $\mathcal N(c)=\pi\widetilde{\mathcal N}(c)$ and the \emph{projected globally minimal set} $\mathcal G(c)=\pi\widetilde{\mathcal G}(c)$. Then the following inclusion relations hold (see \cite{Be2002}): \begin{equation*} \widetilde{\mathcal M}(c)\subset\widetilde{\mathcal A}(c)\subset\widetilde{\mathcal N}(c)\subset\widetilde{\mathcal G}(c),\quad \mathcal M(c)\subset\mathcal A(c)\subset \mathcal N(c)\subset\mathcal G(c). \end{equation*} Next, we present some key properties of the minimal sets above, which will be fully exploited in the construction of diffusing orbits. Property (1) below is a classical result which was proved by J. N. Mather in \cite{Ma1991}, and the proof of property (2) can be found in \cite{Be2002,CY2004}. \begin{Pro}\label{upper semi} For the Tonelli Lagrangian $L$, we have: \begin{enumerate}[\rm(1)] \item \textup{(Graph property)} Let $\pi:TM\times\mathbb T\rightarrow M\times\mathbb T$ be the canonical projection. Then the restriction of $\pi$ to $\widetilde{\mathcal{A}}(c)$ is a bi-Lipschitz homeomorphism.
\item \textup{(Upper semi-continuity)} The set-valued map $(c,L)\mapsto\widetilde{\mathcal G}(c,L)$ and the set-valued map $(c,L)\mapsto\widetilde{\mathcal N}(c,L)$ are both upper semi-continuous. \end{enumerate} \end{Pro} For $(x,\tau)$, $(x^\prime, \tau^\prime)$ $\in M\times\mathbb T$, we set $$d_c((x,\tau),(x^\prime, \tau^\prime)):=h_c^\infty((x,\tau),(x^\prime, \tau^\prime))+h_c^\infty((x^\prime, \tau^\prime),(x,\tau)).$$ By definition \eqref{static}, it follows that $$h^\infty_c((x,\tau),(x, \tau))=0\Longleftrightarrow(x,\tau)\in\mathcal A(c),$$ and hence $d_c$ is a pseudo-metric on the projected Aubry set $\mathcal A(c)$. Two points $(x,\tau), (x^\prime, \tau^\prime)\in\mathcal A(c)$ are said to be in the same \emph{Aubry class} if $d_c((x,\tau),(x^\prime, \tau^\prime))=0$. Clearly, each Aubry class is a closed set. If only one $c$-minimal measure exists, then the Aubry class is unique and $$\widetilde{\mathcal A}(c)=\widetilde{\mathcal N}(c).$$ To characterize the Ma\~{n}\'{e} set from another point of view, we define the following function \begin{equation*} B_c^*(x,\tau):=\min\limits_{\substack{(x_\ell,\tau_\ell)\in\mathcal A(c)\\\ell=1,2}}\{h_c^\infty((x_1,\tau_1),(x, \tau))+h_c^\infty((x,\tau),(x_2, \tau_2))-h_c^\infty((x_1,\tau_1),(x_2, \tau_2)) \}. \end{equation*} Mather has proved in \cite{Ma2004} that $\min B_c^*=0$, and the set of all minimal points is exactly $\mathcal N(c)$, i.e., \begin{equation}\label{manebarrier} B_c^*(x,\tau)=0 \Longleftrightarrow (x,\tau)\in\mathcal N(c). \end{equation} To prove Theorem \ref{generic G1} in Section \ref{sec proof main}, it is convenient to adopt the equivalent definition of minimal measures originating from Ma{\~{n}}{\'{e}} \cite{Mane1996}. In his setting, the minimal measures are obtained through a variational principle that does not require invariance a priori. Let $\rm C$ be the set of all continuous functions $f:TM\times\mathbb T\to\mathbb R$ with at most linear growth, i.e.,
$$\|f\|_l:=\sup_{(v,t)}\frac{|f(q,v,t)|}{1+\|v\|_q}<+\infty,$$ and endow $\rm C$ with the norm $\|\cdot\|_l$. Let $\rm C^*$ be the vector space of all continuous linear functionals $\nu: \rm C\to\mathbb R$ provided with the weak-$*$ topology, namely, $$\lim\limits_{k\to+\infty}\nu_k=\nu \Longleftrightarrow \lim\limits_{k\to+\infty}\int_{TM\times\mathbb T}f\, d\nu_k=\int_{TM\times\mathbb T}f \,d\nu,\quad \forall f\in \rm C.$$ For each $N\in\mathbb Z^+$ and each $N$-periodic absolutely continuous curve $\gamma:\mathbb R\rightarrow M$, one can define a probability measure $\mu_\gamma$ associated to $\gamma$ as follows: \begin{equation}\label{holonomic} \int_{TM\times\mathbb T}f \,d \mu_\gamma :=\frac{1}{N}\int_0^N f(d\gamma(t),t)\,dt,\quad \forall f\in \rm C. \end{equation} Let $$\Gamma:=\bigcup_{N\in\mathbb Z^+}\big\{~\mu_\gamma~:~\gamma\in C^{ac}(\mathbb R,M) \textup{~is~} N \textup{-periodic} ~\big\}\subset \rm C^*,$$ and let $\mathcal{H}$ be the closure of $\Gamma$ in $\rm C^*$. It is easily seen that the set $\mathcal{H}$ is convex. Each measure $\mu_\gamma\in\Gamma$ has a naturally associated homology class $\rho(\mu_\gamma)=\frac{1}{N}[\gamma]\in H_1(M,\mathbb R),$ where $[\gamma]$ denotes the homology class of $\gamma$. The map $\rho:\Gamma\rightarrow H_1(M,\mathbb R)$ extends continuously to a surjective map $\rho:\mathcal{H}\rightarrow H_1(M,\mathbb R)$. Then, Ma\~n\'e introduced the following minimal measures: \begin{equation}\label{definition of mane} \begin{split} \mathfrak{H}^c(L)&:=\bigg\{\mu\in\mathcal{H} ~:~ \int L-\eta_c \,d\mu=\min\limits_{\nu\in\mathcal{H}}\int L-\eta_c\, d\nu \bigg\}, \\ \mathfrak{H}_h(L)&:=\bigg\{\mu\in\mathcal{H} ~:~\rho(\mu)=h, \int L\,d\mu=\min\limits_{\nu\in\mathcal{H},\rho(\nu)=h}\int L\,d\nu \bigg\}. \end{split} \end{equation} We end this section with the following equivalence property: \begin{Pro}\rm{(\cite{Mane1996})}\label{Mather and Mane} We have $\mathfrak M^c(L)=\mathfrak{H}^c(L)$ and $\mathfrak M_h(L)=\mathfrak{H}_h(L)$.
\end{Pro} \section{Elementary weak KAM solutions and heteroclinic orbits}\label{sec EWS} \subsection{Weak KAM solutions}\label{sub weakkam} Weak KAM solutions are the basic elements of weak KAM theory, which builds a link between Mather theory and the theory of viscosity solutions of Hamilton-Jacobi equations. Here we only recall some basic concepts and properties which help us better understand Mather theory. For more details, we refer the reader to Fathi's book \cite{Fa2008} for time-independent systems, and to \cite{Be2008,CIS2013,WY2012} for time-periodic systems. \begin{Def}\label{weakKAM} A continuous function $u_c^-:M\times\mathbb T\rightarrow\mathbb R$ is called a \emph{backward weak KAM solution} if \begin{enumerate}[\rm(1)] \item For any absolutely continuous curve $\gamma:[a,b]\to M$, $$u_c^-(\gamma(b),b)-u_c^-(\gamma(a),a)\leq \int_{a}^{b} (L-\eta_c)(d\gamma(s),s)+\alpha(c)\,ds.$$ \item For each $(x,t)\in M\times\mathbb R$, there exists a \emph{backward calibrated curve} $\gamma^-:(-\infty,t]\rightarrow M$ with $\gamma^-(t)=x$ such that for all $a<b\leq t$, $$u_c^-(\gamma^-(b),b)-u_c^-(\gamma^-(a),a)=\int_{a}^{b}(L-\eta_c)(d\gamma^-(s),s)+\alpha(c)\,ds.$$ \end{enumerate} Similarly, a continuous function $u_c^+:M\times\mathbb T\rightarrow\mathbb R$ is called a \emph{forward weak KAM solution} if \begin{enumerate}[\rm(1)] \item For any absolutely continuous curve $\gamma:[a,b]\to M$, $$u_c^+(\gamma(b),b)-u_c^+(\gamma(a),a)\leq \int_{a}^{b} (L-\eta_c)(d\gamma(s),s)+\alpha(c)\,ds.$$ \item For each $(x,t)\in M\times\mathbb R$, there exists a \emph{forward calibrated curve} $\gamma^+:[t,+\infty)\rightarrow M$ with $\gamma^+(t)=x$ such that for all $t\leq a<b$, $$u_c^+(\gamma^+(b),b)-u_c^+(\gamma^+(a),a)=\int_{a}^{b}(L-\eta_c)(d\gamma^+(s),s)+\alpha(c)\,ds.$$ \end{enumerate} \end{Def} For example, it is well known in weak KAM theory that, for each $(x_0,t_0)\in M\times\mathbb T$, the barrier function $ h_c^\infty((x_0,t_0),\cdot): M\times\mathbb T\to\mathbb R$ is a
backward weak KAM solution and $-h_c^\infty(\cdot,(x_0, t_0)):M\times\mathbb T\to\mathbb R$ is a forward weak KAM solution. If there is only one Aubry class, then $ h_c^\infty((x_0,t_0),\cdot)$ is the unique backward weak KAM solution up to an additive constant, and $-h_c^\infty(\cdot,(x_0,t_0))$ is also the unique forward weak KAM solution up to an additive constant. It is easily seen that backward (forward) calibrated curves are semi static. The following properties of weak KAM solutions are well known, and their proofs can be found in \cite{Fa2008} or \cite{CIS2013}: \begin{Pro}\label{properties weak KAM} \textup{(1)} $u^-_c$ is Lipschitz continuous, and is differentiable on $\mathcal A(c)$. If $u^-_c$ is differentiable at $(x_0,t_0)\in M\times\mathbb T$, then \begin{equation*} \partial_tu^-_c(x_0,t_0)+H(x_0,c+\partial_xu^-_c(x_0,t_0),t_0)=\alpha(c). \end{equation*} It also determines a unique $c$-semi static curve $\gamma^-_c:(-\infty,t_0]\rightarrow M$ with $\gamma^-_c(t_0)=x_0$, such that $u_c^-$ is differentiable at each point $(\gamma^-_c(t),t)$ with $t\leq t_0$, namely $c+\partial_xu^-_c(\gamma^-_c(t),t)=\frac{\partial L}{\partial v}(d\gamma^-_c(t),t)$.\\ \textup{(2)} $u^+_c$ is Lipschitz continuous, and is differentiable on $\mathcal A(c)$. If $u^+_c$ is differentiable at $(x_0,t_0)\in M\times\mathbb T$, then \begin{equation*} \partial_tu^+_c(x_0,t_0)+H(x_0,c+\partial_xu^+_c(x_0,t_0),t_0)=\alpha(c). \end{equation*} It also determines a unique $c$-semi static curve $\gamma^+_c:[t_0,+\infty)\rightarrow M$ with $\gamma^+_c(t_0)=x_0$, such that $u_c^+$ is differentiable at each point $(\gamma^+_c(t),t)$ with $t\geq t_0$, namely $c+\partial_xu^+_c(\gamma^+_c(t),t)=\frac{\partial L}{\partial v}(d\gamma^+_c(t),t)$. \end{Pro} \subsection{Elementary weak KAM solutions} For each cohomology class, it is a generic property that a Lagrangian has only finitely many Aubry classes \cite{BC2008}.
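To illustrate the situation of several Aubry classes (an example we add here, in the same spirit as the pendulum example appearing later in this section), a pendulum-type Lagrangian whose potential has two maxima on $\mathbb T$ already exhibits two Aubry classes at $c=0$:

```latex
% Autonomous pendulum with a potential having two maxima on \mathbb T:
L(x,v)=\frac{v^2}{2}-(\cos 4\pi x-1),\qquad c=0\in H^1(\mathbb T,\mathbb R),
% each hyperbolic fixed point forms its own Aubry class:
\widetilde{\mathcal A}_1=\{(0,0)\},\qquad \widetilde{\mathcal A}_2=\{(\tfrac{1}{2},0)\}.
```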
Recall that the weak KAM solution is unique (up to an additive constant) if the Aubry class is unique. If two or more Aubry classes exist, there are infinitely many weak KAM solutions, among which we are only interested in the elementary weak KAM solutions. In what follows, we assume that for a certain cohomology class $c$ the Aubry classes are $\{\mathcal A_{c,i}: i=1,2,\cdots,k\}$, and hence the projected Aubry set $\mathcal A(c)=\bigcup_i\mathcal A_{c,i}.$ The concept of elementary weak KAM solution appeared in the work \cite{Be2008}. However, for the purpose of our applications, we adopt an analogous concept defined in \cite{Ch2012}. \begin{Def}\label{def of EWS} We fix an $i\in\{1,\cdots,k\}$ and perturb the Lagrangian $L\rightarrow L+\varepsilon V(x,t)$, where $\varepsilon>0$ and $V$ is a non-negative $C^\infty$ function satisfying $\textup{supp}V\cap\mathcal A_{c,i}=\emptyset$ and $V\big|_{\mathcal A_{c,j}}>0$ for each $j\neq i$. Then for the cohomology class $c$, the perturbed Lagrangian has only one Aubry class $\mathcal A_{c,i}$, and its backward weak KAM solution, denoted by $u^-_{c,i,\varepsilon}$, is unique up to an additive constant. If for a subsequence $\{u^-_{c,i,\varepsilon_k}\}$, the limit \begin{equation}\label{def ofews} u^-_{c,i}:=\lim\limits_{\varepsilon_k\rightarrow 0^+}u^-_{c,i,\varepsilon_k} \end{equation} exists, then we call $u^-_{c,i}$ a \emph{backward elementary weak KAM solution}. Analogously, one can define a \emph{forward elementary weak KAM solution} $u^+_{c,i}$. \end{Def} In the following theorem, we will prove the existence of elementary weak KAM solutions and give explicit representation formulas as well. \begin{The}\label{representation of EWS} For each $i$, the backward (resp. forward) elementary weak KAM solution $u^-_{c,i}$ (resp. $u^+_{c,i}$) always exists and is unique up to an additive constant. More precisely, let $(x_i, \tau_i)$ be any point in $\mathcal A_{c,i}$; then there exists a constant $C$ (resp.
$C'$) depending on $(x_i, \tau_i)$, such that $$u^-_{c,i}(x,\tau)=h^\infty_c((x_i,\tau_i),(x,\tau))+C \qquad (\textup{resp.}~ u^+_{c,i}(x,\tau)=-h^\infty_c((x,\tau),(x_i,\tau_i))+C').$$ \end{The} \begin{proof} We only give the proof for $u^-_{c,i}$ since the case of $u^+_{c,i}$ is similar. Denote by $\alpha(c)$ and $\alpha_\varepsilon(c)$ the values of Mather's $\alpha$-function at the cohomology class $c$ for the Lagrangians $L-\eta_c$ and $L-\eta_c+\varepsilon V$, respectively, and denote by $h^\infty_c((x_i,\tau_i),(x,\tau))$ and $h^\infty_{c,\varepsilon}((x_i,\tau_i),(x,\tau))$ the corresponding Peierls barrier functions. We first claim that \begin{equation}\label{limit of barrierfun} h^\infty_c((x_i,\tau_i),(x,\tau))=\lim\limits_{\varepsilon\to0}h^\infty_{c,\varepsilon}((x_i,\tau_i),(x,\tau)). \end{equation} Indeed, as $V\geq 0$ and its support does not intersect $\mathcal A_{c,i}$, we have $\alpha_\varepsilon(c)=\alpha(c)$ and $$h^\infty_c((x_i,\tau_i),(x,\tau))\leq \liminf\limits_{\varepsilon\to0} h^\infty_{c,\varepsilon}((x_i,\tau_i),(x,\tau)).$$ Now we turn to the opposite inequality $ \limsup_{\varepsilon\to0} h^\infty_{c,\varepsilon}((x_i,\tau_i),(x,\tau))\leq h^\infty_c((x_i,\tau_i),(x,\tau))$. Assume by contradiction that there exist a subsequence $\{h^\infty_{c,\varepsilon_k}\}_{k\in\mathbb N}$ and a point $(x',\tau')$ such that \begin{equation}\label{contradiction_i_k} \lim_{k\to\infty} h^\infty_{c,\varepsilon_k}((x_i,\tau_i),(x',\tau'))> h^\infty_c((x_i,\tau_i),(x',\tau')). \end{equation} For abbreviation, we denote \begin{equation}\label{phi_i_k} \phi^-_{c,\varepsilon_k}(x,\tau):= h^\infty_{c,\varepsilon_k}((x_i,\tau_i),(x,\tau)). \end{equation} By Definition \ref{def of EWS}, $\mathcal A_{c,i}$ is the only Aubry class for the Lagrangian $L-\eta_c+\varepsilon_kV$, which gives $\phi^-_{c,\varepsilon_k}(x_i,\tau_i)=0$. Moreover, it is not hard to verify that the sequence $\{\phi^-_{c,\varepsilon_k}\}_k$ is uniformly Lipschitz.
Hence this sequence is also uniformly bounded. Thus, it follows from the Arzel\`{a}-Ascoli theorem that, by taking a subsequence if necessary, $\phi^-_{c,\varepsilon_k}$ converges uniformly to a Lipschitz function $\phi^-_{c}$. It is well known in weak KAM theory that $\phi^-_{c}$ is a backward weak KAM solution for $L-\eta_c$; then \[\phi^-_{c}(x,\tau)=\phi^-_{c}(x,\tau)-\phi^-_{c}(x_i,\tau_i)\leq h^\infty_{c}((x_i,\tau_i),(x,\tau)).\] Further, by letting $k\to\infty$ on both sides of \eqref{phi_i_k} and evaluating at $(x,\tau)=(x',\tau')$, we get \[\lim_{k} h^\infty_{c,\varepsilon_k}((x_i,\tau_i),(x',\tau'))=\lim_{k}\phi^-_{c,\varepsilon_k}(x',\tau')=\phi^-_c(x',\tau')\leq h^\infty_{c}((x_i,\tau_i),(x',\tau')),\] which contradicts \eqref{contradiction_i_k}. This therefore proves equality \eqref{limit of barrierfun}. Finally, we recall that $\mathcal A_{c,i}$ is the unique Aubry class for $L-\eta_c+\varepsilon V$ ($\varepsilon>0$). Then for any backward weak KAM solution $u^-_{c,i,\varepsilon}$, one has $u^-_{c,i,\varepsilon}(\cdot)=h^\infty_{c,\varepsilon}((x_i,\tau_i),\cdot)+C$ with $C$ a constant. Now the theorem is evident from what we have proved. \end{proof} \begin{Rem} By fixing a point $(x_i, \tau_i)\in\mathcal A_{c,i}$ for each index $i\in\{1,\cdots,k\}$, we conclude from Theorem \ref{representation of EWS} that the set of all backward elementary weak KAM solutions is exactly $\big\{h^\infty_c((x_i,\tau_i),\cdot)+C~:~C\in\mathbb R, ~ i=1,\cdots,k\big\}$, and the set of all forward elementary weak KAM solutions is exactly $\big\{-h^\infty_c(\cdot,(x_i,\tau_i))+C ~: ~C\in\mathbb R, ~ i=1,\cdots,k\big\}$. \end{Rem} \subsection{Heteroclinic orbits between Aubry classes} To study the heteroclinic trajectories from a variational viewpoint, we will use a special type of barrier function. Indeed, let $u^-_{c,i}(x,\tau)$ and $u^+_{c,j}(x,\tau)$ be a backward and a forward elementary weak KAM solution, respectively.
Now we define a function \begin{equation}\label{anotherkind barr} B_{c,i,j}(x,\tau):=u^-_{c,i}(x,\tau)-u^+_{c,j}(x,\tau), ~\textup{for each~} (x,\tau)\in M\times\mathbb T. \end{equation} Roughly speaking, it measures the action along curves joining the Aubry class $\mathcal A_{c,i}$ to $\mathcal A_{c,j}$; we refer the reader to \cite{Be2008,CY2009,Ch2012} for more discussions. In the sequel, the notation $\arg\min f$ denotes the minimal set $\{a ~|~f(a)=\min f\}$. Then we have \begin{Pro}\label{manejifenlei} Suppose that the projected Aubry set $\mathcal A(c)=\bigcup_{i=1}^k\mathcal A_{c,i}$ consists of $k$ $(k\geq 2)$ Aubry classes. Then the projected Ma\~n\'e set \begin{equation*} \mathcal N(c)=\bigcup_{i,j=1}^k \arg\min B_{c,i,j}. \end{equation*} \end{Pro} \begin{proof} We first prove $ \mathcal N(c)\supseteq\arg\min B_{c,i,j}$ for each $i, j$. Taking two points $(x_i,\tau_i)\in\mathcal A_{c,i}$ and $(x_j,\tau_j)\in\mathcal A_{c,j}$, Theorem \ref{representation of EWS} implies that there exist two constants $C_i$ and $C_j$ such that $$u^-_{c,i}(x,\tau)=h^\infty_c((x_i,\tau_i),(x,\tau))+C_i,\quad u^+_{c,j}(x,\tau)=-h^\infty_c((x,\tau),(x_j,\tau_j))+C_j.$$ Thus it is easy to compute that $$\min B_{c,i,j}=h^\infty_c((x_i,\tau_i),(x_j,\tau_j))+C_i-C_j.$$ If $(\tilde{x},\tilde{\tau})\in\arg\min B_{c,i,j}$, then $$h^\infty_c((x_i,\tau_i),(\tilde{x},\tilde{\tau}))+C_i+h^\infty_c((\tilde{x},\tilde{\tau}),(x_j,\tau_j))-C_j=h^\infty_c((x_i,\tau_i),(x_j,\tau_j))+C_i-C_j,$$ namely $h^\infty_c((x_i,\tau_i),(\tilde{x},\tilde{\tau}))+h^\infty_c((\tilde{x},\tilde{\tau}),(x_j,\tau_j))-h^\infty_c((x_i,\tau_i),(x_j,\tau_j))=0$. By \eqref{manebarrier} one obtains $(\tilde{x},\tilde{\tau})\in\mathcal N(c)$. Now it remains to show $\mathcal N(c)\subset\bigcup_{i,j=1}^k \arg\min B_{c,i,j}$.
For each $(\bar{x},\bar{\tau})\in\mathcal N(c)$, one deduces from \eqref{manebarrier} that there always exist $m, n\in\{1,2,\cdots,k\}$, and two points $(x_m,\tau_m)\in\mathcal A_{c,m}$, $(x_n,\tau_n)\in\mathcal A_{c,n}$ such that $$h_c^\infty((x_m,\tau_m),(\bar{x},\bar{\tau}))+h_c^\infty((\bar{x},\bar{\tau}),(x_n, \tau_n))=h_c^\infty((x_m,\tau_m),(x_n, \tau_n)).$$ Combining with Theorem \ref{representation of EWS}, one gets that for each $(x,\tau)\in M\times\mathbb T$, \begin{equation*} \begin{split} &u_{c,m}^-(\bar{x},\bar{\tau})-u_{c,n}^+(\bar{x},\bar{\tau})-\Big(u^-_{c,m}(x,\tau)-u^+_{c,n}(x,\tau)\Big)\\ =&h_c^\infty((x_m,\tau_m),(\bar{x},\bar{\tau}))+h_c^\infty((\bar{x},\bar{\tau}),(x_n, \tau_n))-\Big(h_c^\infty((x_m,\tau_m),(x,\tau))+h_c^\infty((x,\tau),(x_n, \tau_n))\Big)\\ =&h_c^\infty((x_m,\tau_m),(x_n, \tau_n))-\Big(h_c^\infty((x_m,\tau_m),(x,\tau))+h_c^\infty((x,\tau),(x_n, \tau_n))\Big)\leq 0, \end{split} \end{equation*} hence $(\bar{x},\bar{\tau})\in \arg\min B_{c,m,n}$. This completes the proof. \end{proof} From now on, we denote by $\mathcal N_{i,j}(c)$ the set of $c$-semi static curves which are negatively asymptotic to $\mathcal A_{c,i}$ and positively asymptotic to $\mathcal A_{c,j}$, i.e., \begin{equation}\label{ij maneorbit} \mathcal N_{i,j}(c)=\{(x,\tau): \exists \textup{~a~} c\textup{-semi static curve~} \gamma, \gamma(\tau)=x, \textup{~and~} \alpha(\gamma(t),t)\subset\mathcal A_{c,i},\omega(\gamma(t),t)\subseteq\mathcal A_{c,j} \}. \end{equation} Obviously, $\mathcal N_{i,j}(c)\subset\mathcal N(c)$, and each point $(x,\tau)\in\mathcal N_{i,j}(c)$ satisfies $$h^\infty_c((x_i,\tau_i),(x,\tau))+h^\infty_c((x,\tau),(x_j,\tau_j))=h^\infty_c((x_i,\tau_i),(x_j,\tau_j)),$$ then $\mathcal N_{i,j}(c)\subset\arg\min B_{c,i,j}$ thanks to Theorem \ref{representation of EWS}. Moreover, $\mathcal A_{c,i}\cup\mathcal A_{c,j}\cup\mathcal N_{i,j}(c)\subseteq\arg\min B_{c,i,j}$. 
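For mechanical systems these barrier quantities can be evaluated explicitly, using the standard weak KAM fact that, at the critical cohomology class, the Peierls barrier of a Lagrangian $L=\frac{v^2}{2}-V(x)$ with $\max V=0$ reduces to the Maupertuis distance with density $\sqrt{-2V}$. The following numerical sketch is only an illustration and not part of the proofs (all function names are ours); it evaluates $B_{c,i,j}$ in this way for the pendulum-type Lagrangian $L=\frac{v^2}{2}-(\cos 8\pi x-1)$ on $\mathbb T$, which is discussed as a counterexample below:

```python
import math

def rho(t):
    # Maupertuis density sqrt(-2 V) for V(x) = cos(8 pi x) - 1; equals 2|sin(4 pi t)|
    return math.sqrt(2.0 * (1.0 - math.cos(8.0 * math.pi * t)))

def arc(a, b, n=4000):
    # composite midpoint quadrature of rho over [a, b], a <= b
    h = (b - a) / n
    return sum(rho(a + (i + 0.5) * h) for i in range(n)) * h

TOTAL = arc(0.0, 1.0)  # Maupertuis length of the full circle (analytically 4/pi)

def dist(x, y):
    # Maupertuis distance on T = R/Z: the shorter of the two arcs
    s = arc(min(x, y), max(x, y))
    return min(s, TOTAL - s)

def B(xi, xj, x):
    # barrier B_{c,i,j}(x) up to an additive constant: h(A_i, x) + h(x, A_j)
    return dist(xi, x) + dist(x, xj)

# Between A_1 = {0} and A_3 = {1/2}: B is constant (= 2/pi), so argmin B = T.
vals13 = [B(0.0, 0.5, k / 200.0) for k in range(200)]
print(max(vals13) - min(vals13))                  # ~ 0: the barrier is flat

# Between the adjacent classes A_1 = {0} and A_2 = {1/4}: the minimal value 1/pi
# is attained exactly on the connecting arc [0, 1/4], not on all of T.
print(abs(B(0.0, 0.25, 0.125) - 1.0 / math.pi))   # ~ 0: on the arc
print(B(0.0, 0.25, 0.625) - 1.0 / math.pi)        # > 0: off the arc
```

The flatness of $B_{c,1,3}$ reflects that every point of $\mathbb T$ lies on a chain of minimizing arcs passing through the intermediate classes, which is exactly the situation where $\arg\min B_{c,1,3}=\mathbb T$ although $\mathcal N_{1,3}(c)=\emptyset$.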
On the other hand, the equality $\arg\min B_{c,i,j}\setminus\mathcal A(c)=\mathcal N_{i,j}(c)$ may fail in general. For instance, the pendulum Lagrangian $L=\frac{v^2}{2}-(\cos8\pi x-1)$ has four Aubry classes for the cohomology class $c=0\in H^1(\mathbb T,\mathbb R)$: $$\widetilde{\mathcal A}_1=(0,0), \widetilde{\mathcal A}_2=(\frac{1}{4},0), \widetilde{\mathcal A}_3=(\frac{1}{2},0), \widetilde{\mathcal A}_4=(\frac{3}{4},0).$$ They are all hyperbolic fixed points. By symmetry, it's easy to compute that $\arg\min B_{c,1,3}=\mathbb T$ but $\mathcal N_{1,3}(c)=\emptyset.$ However, in the case of only two Aubry classes, we can give a precise description. \begin{Pro}\label{double description} Suppose that the projected Aubry set $\mathcal A(c)=\mathcal A_{c,1}\cup\mathcal A_{c,2}$ has only two Aubry classes, then $$ \arg\min B_{c,1,2}= \mathcal A_{c,1}\cup\mathcal A_{c,2}\cup\mathcal N_{1,2}(c)\quad\textup{and}\quad \arg\min B_{c,2,1}=\mathcal A_{c,1}\cup\mathcal A_{c,2}\cup\mathcal N_{2,1}(c).$$ \end{Pro} \begin{proof} We only prove $\arg\min B_{c,1,2}= \mathcal A_{c,1}\cup\mathcal A_{c,2}\cup\mathcal N_{1,2}(c)$; the other case is similar. By the analysis above, it only remains for us to verify $\arg\min B_{c,1,2}\subset\mathcal A_{c,1}\cup\mathcal A_{c,2}\cup\mathcal N_{1,2}(c)$. Indeed, for each point $(x,\tau)\in\arg\min B_{c,1,2}$, we take $\tau=0$ for simplicity, then \begin{equation}\label{Cor1} B_{c,1,2}(x,0)=u^-_{c,1}(x,0)-u^+_{c,2}(x,0)=\min B_{c,1,2}, \end{equation} and Proposition \ref{manejifenlei} implies that there exists a $c$-semi static curve $\gamma:\mathbb R\to M$, $\gamma(0)=x$, which is calibrated by $u^-_{c,1}$ on $(-\infty, 0]$ and calibrated by $u^+_{c,2}$ on $[0, +\infty)$.
Next, there exist two points $(\alpha,0)$, $(\omega, 0)\in\mathcal A(c)$ and two sequences of positive integers $\{m_k\}_{k}, \{n_k\}_{k}\subset \mathbb Z^+$ such that $$\lim_{k\to\infty}\gamma(-m_k)=\alpha\textup{~and~}\lim_{k\to\infty}\gamma(n_k)=\omega.$$ By the calibration property, \begin{align*} &u^-_{c,1}(\gamma(0),0)-u^-_{c,1}(\gamma(-m_k),0)+u^+_{c,2}(\gamma(n_k),0)-u^+_{c,2}(\gamma(0),0)\\ =&\int_{-m_k}^{n_k}L(d\gamma(t),t)- \eta_c(d\gamma(t))+\alpha(c)\, dt. \end{align*} Taking the limit inferior as $k\to\infty$, we obtain \begin{equation}\label{Cor2} B_{c,1,2}(x,0)=u^-_{c,1}(\alpha,0)-u^+_{c,2}(\omega,0)+h_c^\infty\big((\alpha,0),(\omega,0)\big). \end{equation} On the other hand, without loss of generality (see Theorem \ref{representation of EWS}), we may assume $u^-_{c,1}(x,0)=h_c^\infty\big((x_1,0),(x,0)\big)$ with $(x_1,0)\in\mathcal A_{c,1}$ and $u^+_{c,2}(x,0)=-h_c^\infty\big((x,0),(x_2,0)\big)$ with $(x_2,0)\in\mathcal A_{c,2}$. Then equalities \eqref{Cor1} and \eqref{Cor2} together give rise to \begin{equation*} h_c^\infty\big((x_1,0),(x_2,0)\big)=h_c^\infty\big((x_1,0),(\alpha,0)\big)+h_c^\infty\big((\omega,0),(x_2,0)\big)+h_c^\infty\big((\alpha,0),(\omega,0)\big). \end{equation*} This can happen only if either $(\alpha,0), (\omega,0)$ belong to the same Aubry class or $(\alpha,0)\in\mathcal A_{c,1}$, $(\omega,0)\in\mathcal A_{c,2}$. This therefore completes the proof. \end{proof} Proposition \ref{double description} will be fully exploited in Section \ref{sec proof main}, where we extend the Lagrangian to a double covering space such that the lift of the Aubry set contains two Aubry classes. \section{Variational mechanism of diffusing orbits}\label{sec local and global} In this section, we aim to give a master theorem which guarantees the existence of diffusion for a Tonelli Lagrangian $L: TM\times\mathbb T\to\mathbb R$ with $M=\mathbb T^n$. Our construction of diffusion is variational, which requires less information about the geometric structure.
The orbits are constructed by shadowing a sequence of local connecting orbits, along each of which the Lagrangian action attains a ``local minimum''. Basically, there are two types of local connecting orbits: one is based on Mather's variational mechanism constructing orbits with respect to the cohomology equivalence \cite{Ma1993,Ma1995}, and the other is based on Arnold's geometric mechanism \cite{Ar1964}, whose variational version was first achieved by Bessi \cite{Bessi1996} for Arnold's original example and was later generalized to more general systems \cite{CY2004,CY2009,Be2008}. Given a cohomology class $c\in H^1(M,\mathbb R)$, following Mather, we define \begin{equation*} \mathbb{V}_{c}=\bigcap_U\{i_{U*}H_1(U,\mathbb R): U\, \text{is a neighborhood of}\ \mathcal N_0(c) \}. \end{equation*} Here, $i_{U*}:H_1(U,\mathbb R)\to H_1(M,\mathbb R)$ is the mapping induced by the inclusion map $i_U$: $U\to M$, and $\mathcal N_0(c)$ denotes the time-0 section of the projected Ma\~n\'e set $\mathcal N(c)$. Let $\mathbb{V}_{c}^{\bot}\subset H^1(M,\mathbb{R})$ denote the annihilator of $\mathbb{V}_{c}$, i.e. $c'\in \mathbb{V}_{c}^{\bot}$ if and only if $\langle c',h \rangle =0$ for all $h\in \mathbb{V}_c$. Clearly, \begin{equation*} \mathbb{V}_{c}^{\bot}=\bigcup_U\{\ker i_{U}^*: U\, \text{is a neighborhood of}\, \mathcal N_0(c)\}. \end{equation*} In fact, Mather has proved that there exists a neighborhood $U$ of $\mathcal N_0(c)$ in $M$ such that $\mathbb{V}_{c}=i_{U*}H_1(U,\mathbb R)$ and $\mathbb{V}^\bot_{c}=\ker i^*_U$ (see \cite{Ma1993}). Then we can introduce the cohomology equivalence (also known as $c$-equivalence).
\begin{Def}[\emph{Mather's $c$-equivalence}]\label{def_c_equivalenve} We say that $c,c'\in H^1(M,\mathbb R)$ are $c$-equivalent if there exists a continuous curve $\Gamma$: $[0,1]\to H^1(M,\mathbb R)$ such that $\Gamma(0)=c$, $\Gamma(1)=c'$ and for each $s_0\in [0,1]$, $\exists$ $\varepsilon>0$ such that $\Gamma(s)-\Gamma(s_0)\in \mathbb{V}_{{\Gamma}(s_0)}^{\bot}$ whenever $|s-s_0|<\varepsilon$ and $s\in [0,1]$. \end{Def} By making full use of the cohomology equivalence, Mather obtained a remarkable result on connecting orbits: if $c$ is $c$-equivalent to $c'$, then the system has an orbit which in the infinite past tends to the Aubry set $\widetilde{\mathcal A}(c)$ and in the infinite future tends to the Aubry set $\widetilde{\mathcal A}(c')$ \cite{Ma1993}. Next, we recall Arnold's famous example in \cite{Ar1964}: if the stable and unstable manifolds of an invariant circle intersect each other transversally, then the unstable manifold of this circle also intersects the stable manifold of another invariant circle nearby. To understand this mechanism from a variational viewpoint, we let $\check{\pi}:\check{M}\rightarrow \mathbb T^n$ be a finite covering of $\mathbb T^n$. Denote by $\widetilde{\mathcal N}(c,\check{M}), \widetilde{\mathcal A}(c,\check{M})$ the corresponding Ma\~{n}\'{e} set and Aubry set with respect to $\check{M}$. Note that $\widetilde{\mathcal A}(c,\check{M})$ may have several Aubry classes even if $\widetilde{\mathcal A}(c)$ consists of a single class. Here, we would like to emphasize that $\check{\pi}\widetilde{\mathcal A}(c,\check{M})=\widetilde{\mathcal A}(c)$. Also, it is not necessary to always work in a nontrivial finite covering space; one can choose $\check{M}=M$ if the Aubry set already contains more than one class. Hence, in Arnold's example, the intersection of the stable and unstable manifolds implies that the set $\check{\pi}\mathcal N(c,\check{M})\big|_{t=0}\setminus\big(\mathcal A(c)\big|_{t=0}+\delta\big)$ is discrete.
Here, $\mathcal A(c)\big|_{t=0}+\delta$ stands for a $\delta$-neighborhood of the set $\mathcal A(c)\big|_{t=0}$. This leads us to introduce the concept of a \emph{generalized transition chain}. This notion can be found in \cite[Definition 5.1]{CY2009} as a generalization of Arnold's transition chain \cite{Ar1964}. In this paper, we adopt the definition as in \cite[Definition 4.1]{Ch2017} (see also \cite[Definition 2.2]{Ch2018}). \begin{Def}[\emph{Generalized transition chain}]\label{transition chain} Two cohomology classes $c, c'\in H^1(M,\mathbb R)$ are joined by a generalized transition chain if a continuous path $\Gamma: [0,1]\to H^1(M,\mathbb R)$ exists such that $\Gamma(0)=c, \Gamma(1)=c'$, and for each $s\in[0,1]$ at least one of the following cases takes place: \begin{enumerate}[(1)] \item There is $\delta_s>0$ such that for each $s'\in (s-\delta_s, s+\delta_s)\bigcap [0,1]$, $\Gamma(s')$ is $c$-equivalent to $\Gamma(s)$. \item There exist a finite covering $\check{\pi}:\check{M}\to M$ and a small $\delta_s>0$ such that the set $\check{\pi}\mathcal N(\Gamma(s),\check{M})\big|_{t=0}$ $\setminus$ $\big(\mathcal A(\Gamma(s))\big|_{t=0}+\delta_s\big)$ is non-empty and totally disconnected. Moreover, $\mathcal A(\Gamma(s'))$ lies in a neighborhood of $\mathcal A(\Gamma(s))$ provided $|s'-s|$ is small. \end{enumerate} \end{Def} We would like to emphasize that the statement ``$\mathcal A(\Gamma(s'))$ lies in a neighborhood of $\mathcal A(\Gamma(s))$ provided $|s'-s|$ is small'' in condition (2) can be guaranteed by the upper semi-continuity of Aubry sets. In fact, this upper semi-continuity always holds in our model since the number of Aubry classes is finite (in fact, at most two); see \cite{Be2010On}. Also, condition (2) appears weaker than the condition of transversal intersection of stable and unstable manifolds because it still works when the intersection is only topologically transversal.
Our condition (2) is usually applied to the case where the Aubry set $\mathcal A(\Gamma(s))$ is contained in a neighborhood of a lower dimensional torus, while condition (1) is usually applied to the case where the Ma\~n\'e set $\mathcal N(\Gamma(s))$ is homologically trivial. Along a generalized transition chain, one can construct an orbit along which there is a substantial variation: \begin{The}\label{generalized transition thm} If $c$, $c'\in H^1(M,\mathbb R)$ are connected by a generalized transition chain $\Gamma$, then \begin{enumerate}[\rm(1)] \item there exists an orbit $(d\gamma(t),t)$ of the Euler-Lagrange flow connecting the Aubry set $\widetilde{\mathcal A}(c)$ to $\widetilde{\mathcal A}(c')$, which means the $\alpha$-limit set $\alpha(d\gamma(t),t)\subset\widetilde{\mathcal A}(c)$ and the $\omega$-limit set $\omega(d\gamma(t),t)\subset\widetilde{\mathcal A}(c')$. \item for any $c_1,\cdots, c_k\in \Gamma$ and small $\varepsilon>0$, there exist an orbit $(d\gamma(t), t)$ of the Euler-Lagrange flow and times $t_1<\cdots<t_k$, such that the orbit $(d\gamma(t),t)$ passes through the $\varepsilon$-neighborhood of $\widetilde{\mathcal A}(c_\ell)$ at the time $t=t_\ell$. \end{enumerate} \end{The} The proof of Theorem \ref{generalized transition thm} is similar to that of \cite[Section 5]{CY2009} and can also be found in \cite[Section 7]{Ch2012}. This variational mechanism of connecting orbits has already been used in \cite{Ch2017,Ch2018}. However, for the reader's convenience, we provide a proof of the theorem in Appendix \ref{sec_proof_of_connectingthm}. We end this section with a simple geometric illustration of the diffusing orbits: the orbits constructed in Theorem \ref{main theorem} and Theorem \ref{main thm2} drift near the normally hyperbolic cylinder (see Figure \ref{picture1}).
\begin{figure} \caption{A global connecting orbit shadowing the generalized transition chain} \label{picture1} \end{figure} \section{Technical estimates on Gevrey functions}\label{some properties Gev} In this part, we provide some necessary results for Gevrey functions defined on the torus $\mathbb T^n=\mathbb R^n/\mathbb Z^n$, which will be useful for our choice of Gevrey space in Section \ref{determin of coeff}. We present this section in a self-contained way for the reader's convenience. The variational proof of the genericity of Arnold diffusion usually depends on the existence of functions with compact support, i.e. bump functions. This technique cannot be applied to the problem of analytic genericity since no analytic function has compact support. However, bump functions do exist in the Gevrey-$\alpha$ category with $\alpha>1$. Here we give a modified Gevrey bump function which is based on the one constructed in \cite{MS2004}. \begin{Lem}[Gevrey bump function]\label{Gevrey bumpfunction} Let $\alpha>1$, $\mathbf L>0$, let $D=[a_1,b_1]\times\cdots\times[a_n,b_n]\varsubsetneq\mathbb T^n$ be an $n$-dimensional cube, and let $U$ be an open neighborhood of $D$. Then there exists $f\in\mathbf G^{\alpha,\mathbf L}(\mathbb T^n)$ such that $0\leq f\leq 1$, $\textup{supp}f\subset U$, and $$f(x)=1 \Longleftrightarrow x\in D.$$ \end{Lem} \begin{proof} We first claim that for $0<d<d'<\frac{1}{2}$, there exists a function $g\in\mathbf G^{\alpha,\mathbf L}(\mathbb T)$ such that $0\leq g\leq 1$ and $$g(x)=1 \Longleftrightarrow x\in [-d,d],\quad\textup{supp}g\subset [-d',d'].$$ Indeed, let $\alpha=1+\frac{1}{\sigma}$ ($\sigma>0$) and define a non-negative function $h\in C^\infty(\mathbb R)$ as follows: $h(x)=0$ for $x\leq 0$, $h(x)=\exp(-\frac{\lambda}{x^\sigma})$ for $x>0$. Then $h\in\mathbf G^{\alpha,\mathbf L}(\mathbb R)$ if the constant $\lambda>(2\mathbf L^\alpha/\sin a)^\sigma/\sigma$ with $a=\frac{\pi}{4}\min\{1,\frac{1}{\sigma}\}$ (cf. \cite[Lemma A.3]{MS2003}).
Next, we define $\psi(x)=\int_{-\infty}^xh\big(t+\frac{d'-d}{2}\big)h\big(-t+\frac{d'-d}{2}\big)~dt.$ It's easy to compute that $\psi\geq 0$ is non-decreasing and \begin{equation*} \psi(x)=\left\{ \begin{array}{ll} 0, & x\leq-\frac{d'-d}{2} \\ K, & x\geq \frac{d'-d}{2} \end{array} \right. \end{equation*} where $$K=\int^{\frac{d'-d}{2}}_{-\frac{d'-d}{2}}h\big(t+\frac{d'-d}{2}\big)h\big(-t+\frac{d'-d}{2}\big)~dt>0.$$ Then we define the function $$g(x)=\frac{1}{K^2}\psi\big(x+\frac{d'+d}{2}\big)\psi\big(-x+\frac{d'+d}{2}\big).$$ Obviously, $0\leq g\leq 1$, $\textup{supp}g\subset[-d',d']$, and $g(x)=1$ $\Longleftrightarrow$ $x\in[-d,d]$. It can be viewed as a function defined on $\mathbb T$. Hence by property (G\ref{algebra norm}) in Section \ref{introduction}, $g\in\mathbf G^{\alpha,\mathbf L}(\mathbb T)$, which proves our claim. Next, without loss of generality we assume $D=[-d_1,d_1]\times\cdots\times[-d_n,d_n]$ with $0<d_i<\frac{1}{2}$. By assumption, we can find another cube $D'=[-d'_1,d'_1]\times\cdots\times[-d'_n,d'_n]$ such that $D\subset D'\subset U\subset\mathbb T^n$. By the claim above, for each $i\in\{1,\cdots,n\}$ there exists a function $f_i\in\mathbf G^{\alpha,\mathbf L}(\mathbb T)$ such that $0\leq f_i\leq 1$, $\textup{supp}f_i\subset[-d_i',d_i']$, and $f_i(x)=1$ $\Longleftrightarrow$ $x\in[-d_i,d_i]$. Thus we define $$f(x_1,\cdots,x_n):=\prod_{i=1}^nf_i(x_i),$$ which meets our requirements. \end{proof} Next, we prove that the inverse of a Gevrey map is still Gevrey smooth. For a map $\varphi=(\varphi_1,\cdots,\varphi_n): V\to\mathbb R^n$ with components $\varphi_i\in\mathbf G^{\alpha,\mathbf L}(V)$, its norm is defined by $$\|\varphi\|_{\alpha,\mathbf L}:=\sum\limits_{i=1}^n\|\varphi_i\|_{\alpha,\mathbf L}.$$ In what follows, $(0,1)^n$ denotes the unit domain $(0,1)\times\cdots\times(0,1)$ in $\mathbb R^n$. We also refer the reader to \cite{Kom1979} for the inverse function theorem of a general ultra-differentiable mapping.
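To see the one-dimensional claim concretely, the following sketch (our own illustration, not part of the proof) evaluates $h$, $\psi$ and the bump $g$ numerically for $d=0.1$, $d'=0.2$, with $\sigma=\lambda=1$, i.e. $\alpha=2$; the lower bound on $\lambda$ in the text only controls the Gevrey constant $\mathbf L$, not smoothness, so $\lambda=1$ suffices here:

```python
import math

sigma, lam = 1.0, 1.0   # Gevrey exponent alpha = 1 + 1/sigma = 2; lambda = 1 (our choice)
d, dp = 0.1, 0.2        # plateau [-d, d], support inside [-d', d']

def h(x):
    # flat at the origin: h = 0 for x <= 0, exp(-lam / x^sigma) for x > 0
    return 0.0 if x <= 0.0 else math.exp(-lam / x ** sigma)

def psi(x, n=2000):
    # psi(x) = int_{-inf}^x h(t + (d'-d)/2) h(-t + (d'-d)/2) dt; the integrand
    # is supported in [-(d'-d)/2, (d'-d)/2], so we integrate only there
    a, b = -(dp - d) / 2.0, min(x, (dp - d) / 2.0)
    if b <= a:
        return 0.0
    step = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * step
        total += h(t + (dp - d) / 2.0) * h(-t + (dp - d) / 2.0)
    return total * step

K = psi((dp - d) / 2.0)   # saturation value of psi

def g(x):
    # the bump: equals 1 exactly on [-d, d], vanishes outside (-d', d')
    return psi(x + (dp + d) / 2.0) * psi(-x + (dp + d) / 2.0) / K ** 2

print(g(0.0), g(0.05), g(-0.1))   # plateau: all (numerically) 1
print(0.0 < g(0.15) < 1.0)        # transition region: strictly between 0 and 1
print(g(0.25), g(-0.3))           # outside the support: exactly 0.0
```

On the plateau both factors saturate at $K$, so $g=K^2/K^2=1$ identically there; outside $[-d',d']$ one of the factors vanishes, which is the mechanism behind the equivalence $g(x)=1\Longleftrightarrow x\in[-d,d]$.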
\begin{The}[Inverse Function Theorem of Gevrey class]\label{inverse thm} Let $X,Y$ be two open sets in $(0,1)^n$ and let $f: X\to Y$ be a Gevrey-$(\alpha,\mathbf L)$ map with $\alpha\geq 1$. If the Jacobian matrix $Jf$ is non-degenerate at $x_0\in X$, then there exist an open set $U$ containing $x_0$, an open set $V$ containing $f(x_0)$, a constant $\mathbf L_1<\mathbf L$, and a unique inverse map $f^{-1}:V\to U$ such that $f^{-1}\in\mathbf G^{\alpha,\mathbf L_1}(V)$. \end{The} \begin{proof} For simplicity we suppose the Jacobian matrix $J_{x_0}f=I_n$ where $I_n=\textup{diag}(1,1,\cdots,1)$, otherwise we can replace $f$ by $f\circ(J_{x_0}f)^{-1}$. We also suppose $f(x_0)=x_0$, otherwise we can replace $f$ by $f+x_0-f(x_0)$. If we write $f=id+h$ in a neighborhood of $x_0$, then $h(x_0)=0$, $J_{x_0}h=0$. For $0<\varepsilon\ll 1$ there exist $d>0$ and an open ball $B_{d}(x_0)=\{x\in X:\|x-x_0\|<d\}$ such that \begin{equation}\label{derofh} \|h\|_{C^1(B_d(x_0))}\leq\varepsilon. \end{equation} By the classical Inverse Function Theorem, there exist two small open sets $U, V\subset B_{d/2}(x_0)$ containing $x_0$ and a unique $C^\infty$ inverse map $f^{-1}:V\to U$ with $f^{-1}(x_0)=x_0$. Let $\mathbf L_1=\varepsilon^{\frac{2}{3\alpha}}$. Next, we will prove $f^{-1}\in\mathbf G^{\alpha,\mathbf L_1}(V)$ by the contraction mapping principle. We can write $f^{-1}=id+g$, so $g\in C^\infty(V)$ and the equality $$g(y)=-h(y+g(y)),~\forall y\in V$$ holds. Define the set $E=\{\varphi=(\varphi_1,\cdots,\varphi_n): \varphi(x_0)=0,~\varphi\in\mathbf G^{\alpha,\mathbf L_1}(V),~\|\varphi\|_{\alpha,\mathbf L_1}\leq \varepsilon^{\frac{3}{4}}\}$ with the norm $\|\cdot\|_{\alpha,\mathbf L_1}$; it is a non-empty, closed and convex set in the space $\mathbf G^{\alpha,\mathbf L_1}(V)$.
Define the operator $$(T\varphi)(y):=-h(y+\varphi(y)), \forall y\in V.$$ \noindent$\bullet$ We first claim that the mapping $T\varphi\in E,$ $\forall \varphi\in E.$ In fact, for each $\varphi\in E$, $(T\varphi)(x_0)=0$. For $y\in V\subset B_{d/2}(x_0)$, we have $\|y+\varphi(y)-x_0\|\leq\|y-x_0\|+\|\varphi(y)-\varphi(x_0)\|\leq\frac{d}{2}+\|J\varphi\|\|y-x_0\|<d$, and hence $(id+\varphi)(V)$$\subset B_d(x_0)$. Moreover, let $\mathbf L_2:=\mathbf L\varepsilon^{\frac{1}{2\alpha}}$ and $\varepsilon$ be suitably small. For each $i\in\{1,\cdots,n\}$, \begin{equation*} \begin{split} \|x_i+\varphi_i\|_{\alpha,\mathbf L_1}-\|x_i+\varphi_i\|_{C^0}=&\sum\limits_{j=1}^n\mathbf L_1^{\alpha}\|\delta_{ij}+\partial_{x_j}\varphi_i\|_{C^0}+\sum\limits_{k\in\mathbb N^n,|k|\geq2}\frac{\mathbf L_1^{|k|\alpha}}{(k!)^\alpha}\|\partial^k\varphi_i\|_{C^0}\\ \leq&n\mathbf L_1^\alpha(1+\frac{\varepsilon^\frac{3}{4}}{\mathbf L_1^\alpha})+\|\varphi_i\|_{\alpha,\mathbf L_1}\leq2n\varepsilon^{\frac{2}{3}}+\varepsilon^{\frac{3}{4}}\leq\frac{\mathbf L_2^\alpha}{n^{\alpha-1}}, \end{split} \end{equation*} where $\delta_{ij}=1$ for $i=j$ and $\delta_{ij}=0$ for $i\neq j$. Hence by property (G\ref{composition}) in Section \ref{introduction}, $\|T\varphi\|_{\alpha,\mathbf L_1}=\|h\circ(id+\varphi)\|_{\alpha,\mathbf L_1}\leq\|h\|_{\alpha,\mathbf L_2,B_d(x_0)}$ since $(id+\varphi)(V)$$\subset B_d(x_0)$. Now it only remains to verify that $$\|h\|_{\alpha,\mathbf L_2,B_d(x_0)}\leq\varepsilon^{\frac{3}{4}}.$$ Recall that for $|k|\geq 2$ and $x\in B_d(x_0)$, $\partial^k f_i(x)=\partial^k h_i(x)$. 
By using \eqref{derofh}, we have \begin{equation}\label{hi esti} \begin{split} \|h_i\|_{\alpha,\mathbf L_2,B_d(x_0)}&=\|h_i\|_{C^0(B_d(x_0))}+\sum\limits_{k\in\mathbb N^n,|k|=1}\mathbf L_2^\alpha\|\partial^kh_i\|_{C^0(B_d(x_0))}+\sum\limits_{k\in\mathbb N^n,|k|\geq2}\frac{\mathbf L_2^{|k|\alpha}}{k!^\alpha}\|\partial^kf_i\|_{C^0(B_d(x_0))}\\ &\leq (1+n\mathbf L_2^\alpha)\varepsilon+\sum\limits_{k\in\mathbb N^n,|k|\geq2}\frac{\mathbf L^{|k|\alpha}\varepsilon^{\frac{|k|}{2}}}{k!^\alpha}\|\partial^kf_i\|_{C^0(B_d(x_0))}\\ &\leq (1+n\mathbf L^\alpha\varepsilon^\frac{1}{2})\varepsilon+\varepsilon\|f\|_{\alpha,\mathbf L}\leq\frac{\varepsilon^\frac{3}{4}}{n}, \end{split} \end{equation} which proves the claim. \noindent$\bullet$ On the other hand, for $\varphi,\tilde\varphi\in E$ and $i\in\{1,\cdots,n\}$, by the Newton-Leibniz formula we have \begin{equation*} \begin{split} h_i(x+\varphi(x))-h_i(x+\tilde{\varphi}(x))=&\bigg(\int_0^1Jh_i\big(x+s\varphi(x)+(1-s)\tilde{\varphi}(x)\big)ds\bigg)\bigg(\varphi(x)-\tilde{\varphi}(x)\bigg)\\ =&F(x)\big(\varphi(x)-\tilde{\varphi}(x)\big) \end{split} \end{equation*} where $Jh_i$ is the Jacobian matrix. It follows from property (G\ref{derivative Gevrey}) in Section \ref{introduction} and \eqref{hi esti} that $$\|Jh_i\|_{\alpha,\frac{\mathbf L_2}{2},B_d(x_0)}\leq \frac{\|h_i\|_{\alpha,\mathbf L_2,B_d(x_0)}}{(\mathbf L_2-\mathbf L_2/2)^\alpha}\sim O(\varepsilon^\frac{1}{4})<\frac{1}{2n}$$ provided $\varepsilon$ is suitably small. By property (G\ref{composition}), $\|F\|_{{\alpha,\mathbf L_1,V}}\leq \|Jh_i\|_{\alpha,\frac{\mathbf L_2}{2},B_d(x_0)}\leq\frac{1}{2n}$. 
Finally, we deduce from (G\ref{algebra norm}) that $$\|h_i\circ(id+\varphi)-h_i\circ(id+\tilde{\varphi})\|_{\alpha,\mathbf L_1}\leq\|F\|_{\alpha,\mathbf L_1}\|\varphi-\tilde{\varphi}\|_{\alpha,\mathbf L_1}\leq\frac{1}{2n}\|\varphi-\tilde{\varphi}\|_{\alpha,\mathbf L_1}.$$ Hence $\|h\circ(id+\varphi)-h\circ(id+\tilde{\varphi})\|_{\alpha,\mathbf L_1}\leq\frac{1}{2}\|\varphi-\tilde{\varphi}\|_{\alpha,\mathbf L_1}$, namely $$\|T\varphi-T\tilde\varphi\|_{\alpha,\mathbf L_1}\leq\frac{1}{2}\|\varphi-\tilde\varphi\|_{\alpha,\mathbf L_1}.$$ In conclusion, $T: E\to E$ is a contraction mapping. By the contraction mapping principle, $T$ has a unique fixed point, and hence the fixed point must be $g$. Therefore, $f^{-1}=id+g\in\mathbf G^{\alpha,\mathbf L_1}(V)$. \end{proof} Sometimes we need to approximate a continuous function by Gevrey smooth ones. Convolution provides us with a systematic technique. More specifically, for any $\alpha>1, \mathbf L>0$, by Lemma \ref{Gevrey bumpfunction} there exists a non-negative function $\eta\in\mathbf G^{\alpha,\mathbf L}(\mathbb R^n)$ such that $\textup{supp}\eta$ $\subset[\frac{1}{4},\frac{3}{4}]^n$ and $\int_{\mathbb R^n}\eta(x) dx=1$. Next we set $\eta_\varepsilon(x)=\frac{1}{\varepsilon^n}\eta(\frac{x}{\varepsilon})$ $(0<\varepsilon<1, x\in\mathbb R^n)$ which is called the mollifier. Then we define the convolution of $\eta_\varepsilon$ and $f\in C^0(\mathbb T^n)$ by \begin{equation}\label{mollifier} \eta_\varepsilon*f(x)=\int_{\mathbb T^n}\eta_\varepsilon(x-y)f(y)dy,~\forall~x\in\mathbb T^n. \end{equation} \begin{The}[Gevrey approximation] \begin{enumerate}[\rm(1)] \item Let $\alpha>1$, and $U\subset\mathbb T^n$, $V\subsetneq(0,1)^n$ be two open sets. If $f:U\to V$ is a continuous map, then there exists a sequence of maps $f^\varepsilon:U\to(0,1)^n$ such that $f^\varepsilon\in\mathbf G^{\alpha,\mathbf L_\varepsilon}(U)$. Furthermore, $\mathbf L_\varepsilon\to 0$ and $\|f^\varepsilon-f\|_{C^0}\to 0$ as $\varepsilon$ tends to 0. 
\item Let $\alpha>1$, $U, V$ be connected open sets satisfying $\bar{U},\bar{V}\varsubsetneq\mathbb T^n$ and $f: U\to V$ be a continuous map. Then there exists a sequence of maps $f^\varepsilon:U\to \mathbb T^n$ such that $f^\varepsilon\in\mathbf G^{\alpha,\mathbf L_\varepsilon}(U)$, $\mathbf L_\varepsilon\to 0$ and $\|f^\varepsilon-f\|_{C^0}\to 0$ as $\varepsilon$ tends to 0. Specifically, if $f$ is a diffeomorphism and the determinant $\det(Jf)$ ($Jf$ is the Jacobian matrix) is uniformly bounded away from zero, then the Gevrey map $f^\varepsilon:U\to V^\varepsilon$ with $V^\varepsilon=f^\varepsilon(U)$ will also be a diffeomorphism provided that $\varepsilon$ is small enough. \end{enumerate}\label{Gevrey approx} \end{The} \begin{proof} (1): Write $f=(f_1,\cdots,f_n)$ with each $f_i$ ($1\leq i\leq n$) continuous. We only need to prove that each $f_i$ can be approximated by a Gevrey smooth function. Indeed, let $f_i^\varepsilon=\eta_\varepsilon*f_i$ ($0<\varepsilon<1$), where $\eta\in\mathbf G^{\alpha,\mathbf L}$. It's easy to check that $f_i^\varepsilon:U\to(0,1)$ since $\int_{\mathbb R^n}\eta_\varepsilon(x) dx=1$ and $\textup{supp}\eta_\varepsilon\subset[\frac{\varepsilon}{4},\frac{3\varepsilon}{4}]^n$. By the classical properties of convolutions, one obtains $f^\varepsilon_i\in C^\infty$ and \begin{equation*} \|f_i^\varepsilon-f_i\|_{C^0}\to 0, \textup{~as~} \varepsilon\to 0. \end{equation*} Moreover, \begin{equation*} \partial^kf^\varepsilon_i=\partial^k\eta_\varepsilon*f_i=\int_{\mathbb T^n}\partial^k\eta_\varepsilon(x-y)f_i(y)dy,~\forall~k=(k_1,\cdots,k_n)\in\mathbb N^n. \end{equation*} It only remains to prove $f_i^\varepsilon$ is Gevrey smooth.
In fact, if one sets $\mathbf L_\varepsilon=\mathbf L\varepsilon^{\frac{1}{\alpha}}$, then \begin{equation*} \begin{split} \|f^\varepsilon_i\|_{\alpha,\mathbf L_\varepsilon} &\leq\sum\limits_{k}\frac{\mathbf L_\varepsilon^{|k|\alpha}}{k!^\alpha}\|\partial^k\eta_\varepsilon\|_{C^0}\|f_i\|_{C^0}\\ &\leq \frac{\|f_i\|_{C^0}}{\varepsilon^n}\sum\limits_{k}\frac{\mathbf L_\varepsilon^{|k|\alpha}\varepsilon^{-|k|}}{k!^\alpha}\|\partial^k\eta\|_{C^0}\\ &=\frac{\|f_i\|_{C^0}}{\varepsilon^n}\sum\limits_{k}\frac{\mathbf L^{|k|\alpha}}{k!^\alpha}\|\partial^k\eta\|_{C^0}=\frac{\|f_i\|_{C^0}}{\varepsilon^n}\|\eta\|_{\alpha,\mathbf L}. \end{split} \end{equation*} Obviously, $\mathbf L_\varepsilon\to 0$ as $\varepsilon\to 0$. This completes the proof of (1). (2): The first part is not hard to prove by the technique in (1). Furthermore, if $f$ is a diffeomorphism from $U$ to $V$, then by using $\partial^kf^\varepsilon=\eta_\varepsilon*\partial^kf$ with $|k|=1$, one gets \begin{equation}\label{gevapp} \|f^\varepsilon-f\|_{C^1}\to 0,\quad \varepsilon\to 0. \end{equation} Since $\det(Jf)$ is uniformly bounded away from zero, it follows from \eqref{gevapp} and Theorem \ref{inverse thm} that $f^\varepsilon: U\to f^\varepsilon(U)$ is also a diffeomorphism for $\varepsilon$ small enough. \end{proof} \section{Proof of the main results}\label{sec proof main} This section is the main part of the present paper, which aims to prove Theorem \ref{main theorem} and Theorem \ref{main thm2}. We will explain how to apply the tools presented in the previous sections to \emph{a priori} unstable and Gevrey smooth systems. Before that, we need to do some preparations. \subsection{Genericity of uniquely minimal measure in Gevrey or analytic topology} Let $M=\mathbb T^n$.
Fix $h\in H_1(M,\mathbb R)$. It is well known that in the $C^r$ ($2\leq r \leq\infty$) topology, a generic Lagrangian has only one minimal measure $\mu$ with the rotation vector $\rho(\mu)=h$ (see \cite{Mane1996}). Next, we will show that such a property still holds in the Gevrey topology. For this purpose, we shall consider it in a Gevrey space $\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T)$ with $\alpha\geq 1, \mathbf L>0$. A property is called \emph{generic} in the sense of Ma\~n\'e if, for each Lagrangian $L: TM\times\mathbb T\to\mathbb R$, there exists a residual\footnote{A residual subset $X$ of a Baire space is one whose complement is the union of countably many nowhere dense subsets. In a Baire space, every residual set is dense.} subset $\mathcal{O}\subset\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T)$ such that the property holds for each Lagrangian $L+\phi$ with $\phi\in\mathcal{O}$. \begin{The}\label{generic G1} Let $h\in H_1(M,\mathbb R)$, $\alpha\geq 1, \mathbf L>0$ and $L:TM\times\mathbb T\to\mathbb R$ be a Tonelli Lagrangian. Then there exists a residual subset $\mathcal{O}(h)\subset\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T)$ such that, for each $\phi\in\mathcal{O}(h)$, the Lagrangian $L+\phi$ has only one minimal measure with the rotation vector $h$. \end{The} \begin{Rem} We shall note that the residual set $\mathcal{O}(h)$ depends on the homology class $h$. \end{Rem} \begin{proof} Recalling Ma\~n\'e's equivalent definition of minimal measure in Section \ref{Mathertheory}, we are going to prove this theorem in the following setting based on Ma\~{n}\'{e}'s approach. \begin{enumerate}[(a)] \item Set $E:=\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T).$ Obviously, it is a Banach space. \item Denote by $F\subset \mathrm{C}^*$ the vector space spanned by the set of probability measures $\mu\in\mathcal{H}$ with $\int_{TM\times\mathbb T}L\,d\mu<\infty$; the definitions of the sets $\mathcal{H}$ and $\mathrm{C}^*$ are given in Section \ref{Mathertheory}.
Recall that for $\mu_k, \mu\in F$, $$\lim\limits_{k\to+\infty}\,\mu_k=\mu \Longleftrightarrow \lim\limits_{k\to+\infty}\int_{TM\times\mathbb T}f\, d\mu_k=\int_{TM\times\mathbb T}f d\mu,\quad\forall f\in \mathrm{C}.$$ \item Let $\mathcal{L}: F\to \mathbb R$ be a linear map satisfying $\mathcal{L}(\mu)=\int L\, d\mu$ for every $\mu\in F$. \item Let $\varphi: E\to F^*$ be a linear map such that for each $\phi\in E$, $\varphi(\phi)\in F^*$ is defined as follows: $$\langle\varphi(\phi),\mu\rangle:=\int \phi\, d\mu,~ \mu\in F.$$ \item $K:=\{\mu\in F~|~\rho(\mu)=h\}$. It's easy to check that $K$ is a separable metrizable convex subset. \end{enumerate} For $\phi\in E$, we denote $$\arg\min(\phi):=\{~\mu\in K~|~\mathcal{L}(\mu)+\langle \varphi(\phi),\mu \rangle=\min\limits_{\nu\in K}(\mathcal{L}(\nu)+\langle \varphi(\phi),\nu \rangle )~\}.$$ It's easy to verify that our setting above satisfies all the conditions of \cite[Proposition 3.1]{Mane1996}, so there exists a residual subset $\mathcal{O}(h)\subset E$ such that each $\phi\in\mathcal{O}(h)$ has the following property: $$\#\arg\min(\phi)=1.$$ Since $\arg\min(\phi)=\mathfrak{H}_h(L+\phi)$ (see \eqref{definition of mane}), it follows from Proposition \ref{Mather and Mane} that the Lagrangian $L+\phi$ admits only one minimal measure with the rotation vector $h$. \end{proof} \begin{Rem} For $\alpha=1$, $\mathbf G^{1,\mathbf L}$ is the space of real analytic functions. This therefore means that the uniqueness of the minimal measure is also a generic property in the analytic topology. \end{Rem} \begin{Cor}\label{corgeneric G1} Let $\mathbf L>0$, $\alpha\geq 1$ and $L: T\mathbb T^n\times\mathbb T\to\mathbb R$ be a Tonelli Lagrangian.
Then there exists a residual set $\mathcal{O}_1\subset\mathbf G^{\alpha,\mathbf L}(\mathbb T^n\times\mathbb T)$ such that for any $V\in\mathcal{O}_1$, the Lagrangian $L+V$ has the following property: for each rational $h=(h_1,\cdots,h_n)\in H_1(\mathbb T^n,\mathbb R)$ with $h_i\in\mathbb Q$, $L+V$ has one and only one minimal measure with the rotation vector $h$. \end{Cor} \begin{proof} For each $h\in H_1(\mathbb T^n,\mathbb R)$, thanks to Theorem \ref{generic G1}, we obtain a residual subset $\mathcal{O}(h)\subset\mathbf G^{\alpha,\mathbf L}(\mathbb T^n\times\mathbb T)$ such that for each $\phi\in\mathcal{O}(h)$, the Lagrangian $L+\phi$ has only one minimal measure with the rotation vector $h$. Then we set \[\mathcal{O}_1=\bigcap_{h\in \mathbb Q^n} \mathcal{O}(h),\] which is the intersection of countably many residual sets. By the definition of a residual set, $\mathcal{O}_1$ is still residual, hence dense and non-empty, in the Banach space $\mathbf G^{\alpha,\mathbf L}(\mathbb T^n\times\mathbb T)$. The corollary is now evident from what we have proved. \end{proof} \subsection{H\"older regularity of elementary weak KAM solutions}\label{sub holder} In this part, we will choose a family of elementary weak KAM solutions which can be parameterized so that they are H\"older continuous in the $C^0$ topology. Such a property is crucial for our proof of Theorem \ref{generic G2}. To this end, we need to study the normally hyperbolic cylinders (refer to Appendix \ref{appendix_NHIC}). Let us go back to our Hamiltonian model \eqref{hamiltonian} with two and a half degrees of freedom. Let $$\Sigma(0):=\{(q_1,0,p_1,0): q_1\in\mathbb T, |p_1|\leq R\}\subset\mathbb T^2\times\mathbb R^2.$$ It is a cylinder restricted to the time-$0$ section, where $R$ is the constant fixed in Section \ref{introduction}. By condition {\bf(H2)}, $\Sigma(0)$ is a normally hyperbolic invariant cylinder (NHIC) for the time-1 map of the Hamiltonian flow $\Phi_{H_0}^t$.
Since the Hamiltonian $H_0$ is integrable when restricted to the cylinder $\Sigma(0)$, the rate $\mu$ in \eqref{hyp splitting} is 1 and $\log\mu=0$, so it follows from Theorem \ref{persistence} that there exists \begin{equation}\label{diyigeepsilon} \varepsilon_1=\varepsilon_1(H_0,R)>0 \end{equation} such that if $\|H_1\|_{C^3(\mathscr D_R)}\leq\varepsilon_1$, the time-1 map $\Phi_H^1$ of the Hamiltonian $H$ still admits a $C^{r-1}$ normally hyperbolic invariant cylinder $\Sigma_H(0)$, which is a small deformation of $\Sigma(0)$ and can be considered as the image of the following diffeomorphism (see figure \ref{crumpled cylin}) \begin{equation}\label{graph of cylinder} \begin{split} \psi:\Sigma(0)&\to \Sigma_H(0)\subset \mathbb T^2\times\mathbb R^2, \\ (q_1,0,p_1,0)&\mapsto(q_1,\mathbf q_2(q_1,p_1),p_1, \mathbf p_2(q_1,p_1)). \end{split} \end{equation} Here, $\mathbf q_2$ and $\mathbf p_2$ are two $C^{r-1}$ functions taking values close to zero. Then $\psi$ induces a 2-form $\psi^*\Omega$ on the standard cylinder $\Sigma(0)$, where $\Omega=\sum_{i=1}^2 dq_i\wedge dp_i$: $$\psi^*\Omega=\bigg(1+\frac{\partial(\mathbf q_2,\mathbf p_2)}{\partial(q_1,p_1)}\bigg)dq_1\wedge dp_1.$$ Since the second de Rham cohomology group $H^2(\Sigma(0),\mathbb R)=\{0\}$, by using Moser's trick on the isotopy of symplectic forms, one can find a diffeomorphism $\psi_1:\Sigma(0)\to\Sigma(0)$ such that \begin{equation*} \psi_1^*\psi^*\Omega=dq_1\wedge dp_1. \end{equation*} \begin{figure} \caption{$\Sigma_H(0)$ is a small deformation of $\Sigma(0)$ } \label{crumpled cylin} \end{figure} Since $\Sigma_H(0)$ is invariant under $\Phi_H^1$ and $(\Phi_H^1)^*\Omega=\Omega$, one obtains \begin{equation*} \big( (\psi\circ\psi_1)^{-1}\circ\Phi_H^1\circ (\psi\circ\psi_1) \big)^*dq_1\wedge dp_1=dq_1\wedge dp_1.
\end{equation*} Combined with the fact that $(\psi\circ\psi_1)^{-1}\circ\Phi_H^1\circ (\psi\circ\psi_1)$ is a small perturbation of $\Phi_{H_0}^1$, this shows that $(\psi\circ\psi_1)^{-1}\circ\Phi_H^1\circ (\psi\circ\psi_1)$ is an exact twist map, and hence one can apply the classical Aubry-Mather theory to characterize the minimal orbits on $\Sigma(0)$: given $\rho\in\mathbb R$, there exists an Aubry-Mather set with rotation number $\rho$ satisfying \begin{enumerate} \item if $\rho\in\mathbb Q$, the set consists of periodic orbits. \item if $\rho\in\mathbb R\setminus \mathbb Q$, the set is either an invariant circle or a Denjoy set. \end{enumerate} For simplicity, we denote by $$\Sigma_H(s)=\Phi^s_H(\Sigma_H(0),0), \quad \Sigma(s)=\Phi^s_{H_0}(\Sigma(0),0)$$ the 2-dimensional manifolds, and denote by \begin{equation}\label{all_t} \widetilde{\Sigma}_H=\bigcup_{s\in\mathbb T}\Sigma_H(s),\quad\widetilde{\Sigma}=\bigcup_{s\in\mathbb T}\Sigma(s) \end{equation} the 3-dimensional manifolds in $T^*\mathbb T^2\times\mathbb T$. By using the Legendre transformation $\mathscr L$ (see \eqref{Legendre_tran}), the set $\mathscr L\widetilde{\Sigma}_H$ is $\phi_L^t$-invariant in $T\mathbb T^2\times\mathbb T$. Given a cohomology class $c=(c_1,0)\in H^1(\mathbb T^2,\mathbb R)$ with $|c_1|\leq R-1$, the following lemma shows that the Aubry set $\widetilde{\mathcal A}(c)$ lies inside the cylinder $\mathscr L\widetilde{\Sigma}_H$. \begin{Lem}[Location of the minimal sets]\label{minimal set on cylinder} Let $H$ be the Hamiltonian \eqref{hamiltonian} and $L$ be the associated Lagrangian \eqref{lagrangian}. There exists $\varepsilon_1=\varepsilon_1(H_0,R)>0$ such that if $\|H_1\|_{C^3(\mathscr D_R)}\leq \varepsilon_1$, then for each $c=(c_1, 0)$ with $|c_1|\leq R-1$, the globally minimal set satisfies $\widetilde{\mathcal G}_L(c)\subset\mathscr L\widetilde{\Sigma}_H$. \end{Lem} \begin{proof} We first consider the autonomous Lagrangian $l_2(q_2,v_2)$.
It follows from \eqref{weiyimin} that $(0,0)$ is the unique minimal point of $l_2$, so the globally minimal set of the Lagrangian $l_2$ is $$\widetilde{\mathcal G}_{l_2}=\{(0,0)\}\times\mathbb T\subset T\mathbb T\times\mathbb T.$$ Then for all $c=(c_1,0)$ with $|c_1|\leq R$, the globally minimal set of $L_0=l_1(v_1)+l_2(q_2,v_2)$ is $$\widetilde{\mathcal G}_{L_0}(c)=\{(q_1,0,D h_1(c_1),0,t)~:~ q_1\in\mathbb T,t\in\mathbb T\} \textup{~and~} \widetilde{\mathcal G}_{L_0}(c)\subset\mathscr L\widetilde{\Sigma}.$$ Here, the function $h_1$ is given in \eqref{hamiltonian}. Next, we take a small neighborhood $U$ of $\mathscr L\widetilde{\Sigma}$ in the space $T\mathbb T^2\times\mathbb T$ and let $\varepsilon_1=\varepsilon_1(H_0,R)$ be the constant defined in \eqref{diyigeepsilon}. Since $\|H_1\|_{C^3(\mathscr D_R)}\leq\varepsilon_1$, taking $\varepsilon_1$ suitably small ensures that $\|L_1\|_{C^2(\mathscr D_R)}$ is also sufficiently small. Thus, by the upper semi-continuity in Proposition \ref{upper semi}, $\widetilde{\mathcal G}_{L}(c)\subset U$ for all $c\in[-R+1,R-1]\times\{0\}$ where $L=L_0+L_1$. Equivalently, $$\mathscr L^{-1}\widetilde{\mathcal G}_{L}(c)\subset \mathscr L^{-1} U.$$ On the other hand, due to normal hyperbolicity and Theorem \ref{persistence}, $$\widetilde{\Sigma}_H\subset \mathscr L^{-1} U,$$ provided that $\varepsilon_1$ is small enough. Moreover, $\widetilde{\Sigma}_H$ is the maximal $\phi^t_L$-invariant set in the neighborhood $\mathscr L^{-1} U$. This therefore implies $\mathscr L^{-1}\widetilde{\mathcal G}_{L}(c)\subset\widetilde{\Sigma}_H$ since $\mathscr L^{-1}\widetilde{\mathcal G}_{L}(c)$ is $\phi^t_L$-invariant. \end{proof} In the remainder of this section, we will use the following notation for simplicity. \noindent\textbf{Notation:} \begin{enumerate}[\rm(1)] \item In what follows, we use $M$ to denote the manifold $\mathbb T^2=\mathbb R^2/\mathbb Z^2$.
Also, we denote by $$\check{M}=\mathbb T\times2\mathbb T=\mathbb R/\mathbb Z\times\mathbb R/2\mathbb Z,\quad \check{\pi}:\check{M}\to M$$ the double covering of $M$. We use such a double covering to distinguish between $0$ and $1$ in the $q_2$-coordinate, and identify 0 with 2 in the $q_2$-coordinate. The Hamiltonian $H: T^*M\times\mathbb T\to\mathbb R$ and the Lagrangian $L:TM\times\mathbb T\to\mathbb R$ extend naturally to $T^*\check{M}\times\mathbb T$ and $T\check{M}\times\mathbb T$ respectively. By abuse of notation, we continue to write $H: T^*\check{M}\times\mathbb T\to\mathbb R$ and $L: T\check{M}\times\mathbb T\to\mathbb R$ for the new Hamiltonian and Lagrangian respectively. In this setting, the lift of the NHIC $\Sigma_H(0)$ has two copies $$\check{\pi}^{-1}\Sigma_H(0)=\Sigma_{H,l}(0)\cup\Sigma_{H,u}(0), $$ where the subscripts $l, u$ are introduced to indicate ``lower" and ``upper" respectively. Then $\check{\pi}^{-1}\widetilde\Sigma_H=\widetilde\Sigma_{H,l}\cup\widetilde\Sigma_{H,u}$. \item For simplicity, we always use $\pi_q$ to denote the natural projection from $T\check{M}$ (resp. $TM$) to $\check{M}$ (resp. $M$) or from $T^*\check{M}$ (resp. $T^*M$) to $\check{M}$ (resp. $M$). \item Let $\kappa>0$ be small; we denote by $\mathrm{U}_\kappa=\mathrm{U}_{\kappa,l}\cup\mathrm{U}_{\kappa,u}$ the disconnected subset of $\check{M}$ where $$ \mathrm{U}_{\kappa,l}=\mathbb T\times[\kappa, 1-\kappa],\quad\mathrm{U}_{\kappa,u}=\mathbb T\times[1+\kappa, 2-\kappa].$$ Let $\mathrm N_{\kappa}=\check{M}\setminus\mathrm{U}_\kappa=\mathrm{N}_{\kappa,l}\cup\mathrm{N}_{\kappa,u}$ where $$\mathrm N_{\kappa,l}=\mathbb T\times(-\kappa, \kappa),\quad\mathrm N_{\kappa,u}=\mathbb T\times(1-\kappa, 1+\kappa).$$ The subscripts $l, u$ are also introduced to indicate the ``lower" and the ``upper" respectively (See figure \ref{picture2}).
The number $\kappa$ should be chosen such that $$\pi_q\circ\Sigma_{H,l}(0)\subset\mathrm N_{\kappa/2,l},\qquad\pi_q\circ\Sigma_{H,u}(0)\subset\mathrm N_{\kappa/2,u}.$$ Namely, the perturbed cylinder is contained in a $\kappa/2$-neighborhood of the unperturbed one. Accordingly, we also have \begin{equation}\label{c_t1} \widetilde{\Sigma}_{H,l}\subset\mathrm N_{\kappa/2,l}\times\mathbb R^2\times\mathbb T,\quad\widetilde{\Sigma}_{H,u}\subset\mathrm N_{\kappa/2,u}\times\mathbb R^2\times\mathbb T. \end{equation} \item For $c=(c_1,0)\in H^1(M,\mathbb R)$, if the Aubry set $\widetilde{\mathcal A}_{L}(c,M)|_{t=0}$ is an invariant circle, we denote by $$\Upsilon_c=\mathscr L^{-1}\widetilde{\mathcal A}_{L}(c,M)|_{t=0}\subset T^*M\times\{t=0\}$$ the invariant circle in the cotangent space. This leads us to introduce an index set \begin{equation}\label{buianquandeshangtongdiao} \mathbb{S}:=\{ (c_1,0)~:~ |c_1|\leq R-1, ~\Upsilon_c \textup{~is an invariant circle lying in~} \Sigma_H(0)\}. \end{equation} \item Let $\mathbf{r_0}>0$ be small satisfying $\mathbf{r_0}>\kappa$. Since $\Sigma_{H}(0)$ is a NHIM for the time-1 map $\Phi_H^1$, we have the associated local stable and unstable manifolds, denoted by $W_{\Sigma_{H}(0)}^{s,loc}$ and $W_{\Sigma_{H}(0)}^{u,loc}$ respectively, in the $\mathbf{r_0}$-tubular neighborhood of $\Sigma_H(0)$. In addition, $W^{s,loc}_{\Sigma_{H}(0)}=\bigcup_{q\in\Sigma_{H}(0)}W^{s,loc}_q$ and $ W^{u,loc}_{\Sigma_{H}(0)}=\bigcup_{q\in\Sigma_{H}(0)}W^{u,loc}_q$. \end{enumerate} Now, let us focus on $c=(c_1, 0)\in\mathbb{S}$. The $\Phi^1_H$-invariant circle $\Upsilon_c$ has local stable manifold $W^{s,loc}_{\Upsilon_c}=\bigcup_{q\in\Upsilon_c}W^{s,loc}_q$ and local unstable manifold $ W^{u,loc}_{\Upsilon_c}=\bigcup_{q\in\Upsilon_c}W^{u,loc}_q $. Theorem \ref{property of NHIM} tells us that the leaf $W^{s,loc}_q$ (resp. $W^{u,loc}_q$) depends smoothly on the base point $q\in\Sigma_H(0)$. Consequently, $W^{s,loc}_{\Upsilon_c}$ (resp.
$W^{u,loc}_{\Upsilon_c}$) is a Lipschitz manifold since $\Upsilon_c$ is only Lipschitz in general. Besides, the local stable (unstable) manifold can be viewed as a Lipschitz graph over $\check\pi\circ\mathrm N_\mathbf{r_0}$, namely \begin{equation*} \begin{split} W^{s,loc}_{\Upsilon_c}&=\{ \big(q_1,q_2, \mathbf p_1^s(q_1,q_2), \mathbf p_2^s(q_1,q_2)\big)\in T^*M\times\{t=0\}: (q_1,q_2)\in\check\pi\circ\mathrm N_\mathbf{r_0} \}\\ W^{u,loc}_{\Upsilon_c}&=\{ \big(q_1,q_2, \mathbf p_1^u(q_1,q_2), \mathbf p_2^u(q_1,q_2)\big)\in T^*M\times\{t=0\}: (q_1,q_2)\in\check\pi\circ\mathrm N_\mathbf{r_0} \} \end{split} \end{equation*} Here, $\mathbf p_1^{s,u}, \mathbf p_2^{s,u}$ are Lipschitz functions on $\check\pi\circ\mathrm N_\mathbf{r_0}\subset M$, and the domain $\mathrm N_{\mathbf{r_0}}=\mathrm{N}_{\mathbf{r_0},l}\cup\mathrm{N}_{\mathbf{r_0},u}$ with \[\mathrm N_{\mathbf{r_0},l}=\mathbb T\times(-\mathbf{r_0}, \mathbf{r_0}),\quad\mathrm N_{\mathbf{r_0},u}=\mathbb T\times(1-\mathbf{r_0}, 1+\mathbf{r_0}).\] Next, in the covering space $\check{M}$, the Aubry set $\widetilde{\mathcal A}_{L}(c,\check{M})$ is the union of two disjoint copies of $\widetilde{\mathcal A}_{L}(c,M)$ satisfying $\check{\pi}\widetilde{\mathcal A}_{L}(c,\check{M})=\widetilde{\mathcal A}_{L}(c,M)$. More precisely, $\mathscr L^{-1}\widetilde{\mathcal A}_{L}(c,\check{M})\big|_{t=0}=\Upsilon_{c,l}\cup\Upsilon_{c,u}$, where $\Upsilon_{c,\imath}$ lies in $\Sigma_{H,\imath}(0)$ and its stable and unstable manifolds are \begin{equation*} \begin{split} W^{s,loc}_{\Upsilon_{c,\imath}}&=\{ \big(q_1, q_2, \mathbf p_1^s(q_1, q_2), \mathbf p_2^s(q_1, q_2)\big)\in T^*\check{M}\times\{t=0\}: (q_1, q_2)\in\mathrm N_{\mathbf{r_0},\imath} \}\\ W^{u,loc}_{\Upsilon_{c,\imath}}&=\{ \big(q_1, q_2, \mathbf p_1^u(q_1, q_2), \mathbf p_2^u(q_1, q_2)\big)\in T^*\check{M}\times\{t=0\}: (q_1, q_2)\in\mathrm N_{\mathbf{r_0},\imath} \} \end{split} \end{equation*} with $\imath=l, u$. 
Here, by abuse of notation, we have continued to use $\mathbf p_1^{s,u}, \mathbf p_2^{s,u}$ to denote the corresponding Lipschitz functions defined on the lift of $\check\pi\circ\mathrm N_\mathbf{r_0}$. The lemma below gives the relation between the elementary weak KAM solutions and the local stable and unstable manifolds. \begin{Lem}\label{local manifolds representation} There exists $\mathbf{r_0}>0$ such that for each $c=(c_1, 0)\in\mathbb{S}$, we have \begin{enumerate}[\rm(1)] \item for each backward elementary weak KAM solution $u^-_{c, \imath}(q,t)$ with $\imath=l, u$, the function $u^-_{c, \imath}(q,0)$ is $C^{1,1}$ in the domain $\mathrm N_{\mathbf{r_0},\imath}$ and generates the local unstable manifold of $\Upsilon_{c,\imath}$, i.e. $$ W^{u,loc}_{\Upsilon_{c,\imath}}=\{ \big(q, c+\partial_qu^-_{c, \imath}(q,0)\big): q\in\mathrm N_{\mathbf{r_0},\imath} \},\qquad \imath=l, u.$$ \item for each forward elementary weak KAM solution $u^+_{c, \imath}(q,t)$ with $\imath=l, u$, the function $u^+_{c, \imath}(q,0)$ is $C^{1,1}$ in the domain $\mathrm N_{\mathbf{r_0},\imath}$ and generates the local stable manifold of $\Upsilon_{c,\imath}$, i.e. $$ W^{s,loc}_{\Upsilon_{c,\imath}}=\{ \big(q, c+\partial_qu^+_{c, \imath}(q,0)\big): q\in\mathrm N_{\mathbf{r_0},\imath} \},\qquad \imath=l, u.$$ \end{enumerate} \end{Lem} \begin{proof} We only prove the case of $u^-_{c, l}$ since the other cases are similar.\\ \textbf{Step 1:} We first claim that there exists a neighborhood $V$ of $\pi_q\circ\Upsilon_{c,l}$ in $\check{M}$ such that for each $\xi^-: (-\infty, 0]\to \check{M}$ calibrated by $u^-_{c,l}$ with $\xi^-(0)\in V$, the $\alpha$-limit set of the backward minimal configuration $\{\xi^-(-i)\}_{i\in\mathbb Z^+}$ must be contained in $\pi_q\circ\Upsilon_{c,l}$.
Assume by contradiction that there exist a sequence of backward calibrated curves $\xi^-_k:(-\infty, 0]$ $\to\check{M}$ with $\xi_k^-(0)=x_k$, and a sequence $\alpha_k$ which belongs to the $\alpha$-limit set of the backward minimal configuration $\{\xi_k^-(-i)\}_{i\in\mathbb Z^+}$ satisfying \begin{equation}\label{class1} \lim\limits_{k\to\infty}x_k=x^*\in\pi_q\circ\Upsilon_{c,l}\quad\textup{and}\quad \lim\limits_{k\to\infty}\alpha_k=\alpha^*\notin\pi_q\circ\Upsilon_{c,l}. \end{equation} This implies $\alpha^*\in\pi_q\circ\Upsilon_{c,u}$ since the $\alpha$-limit set of each minimal curve must be contained in the Aubry set. By Theorem \ref{representation of EWS}, each $\xi^-_k: (-\infty, 0]\to\check{M}$ is $c$-semi static and calibrated by $h^\infty_c((x^*,0),\cdot)$: $$h^\infty_c\big((x^*,0),(\xi^-_k(0),0)\big)-h^\infty_c\big((x^*,0),(\xi^-_k(-t),-t)\big)=h_c^{-t,0}\big(\xi^-_k(-t),\xi^-_k(0)\big),\quad\forall t\in\mathbb Z^+.$$ This further gives $h^\infty_c\big((x^*,0),(x_k,0)\big)-h^\infty_c\big((x^*,0),(\alpha_k,0)\big)\geq h^\infty_c\big((\alpha_k,0),(x_k,0)\big).$ The opposite inequality is obvious; therefore $h^\infty_c((x^*,0),(x_k,0))-h^\infty_c((x^*,0),(\alpha_k,0))=$ $ h^\infty_c((\alpha_k,0),(x_k,0))$. Sending $k\to\infty$, it follows that $$0=h^\infty_c\big((x^*,0),(x^*,0)\big)=h^\infty_c\big((x^*,0),(\alpha^*,0)\big)+ h^\infty_c\big((\alpha^*,0),(x^*,0)\big).$$ Hence, $(x^*,0)$ and $(\alpha^*,0)$ belong to the same Aubry class, which contradicts \eqref{class1}.\\ \textbf{Step 2:} By shrinking the above domain $V$ if necessary so that $W^{u}_{\Upsilon_{c,l}}$ is a Lipschitz graph over $V$, we will show that there exists a small number $\mathbf{r_0}>0$ such that $\mathrm N_{\mathbf{r_0},l}\subset V$, and each $u^-_{c,l}$-calibrated curve $\gamma^-: (-\infty, 0]\to \check{M}$ with $\gamma^-(0)\in\mathrm N_{\mathbf{r_0},l}$ satisfies $\gamma^-(-m)\in V$, $\forall~m\in\mathbb N$.
To prove this, assume by contradiction that there exist a sequence of $u^-_{c,l}$-calibrated curves $\gamma^-_j: (-\infty, 0]$ $\to$ $\check{M}$ and a sequence of positive integers $T_j$ such that $\gamma_j^-(-T_j)\notin V,$ $\gamma_j^-(-m)\in V,$ $m\in\{0, 1, \cdots$, $T_j-1\}$ and $\lim_{j\to\infty}\textup{dist}(\gamma^-_j(0), \pi_q\circ\Upsilon_{c,l})=0$. We set $\eta^-_j(t):=\gamma_j^-(t-T_j)$; then $\eta^-_j: (-\infty, T_j]\to \check{M}$ is still a calibrated curve and \begin{equation}\label{contain1} \eta^-_j(0)\notin V,~ \eta_j^-(m)\in V, ~ m\in\{1,2,\cdots,T_j\} \end{equation} and \begin{equation}\label{a sequence of curves} \lim\limits_{j\to\infty}\textup{dist}(\eta^-_j(T_j), \pi_q\circ\Upsilon_{c,l})=0. \end{equation} Extracting a subsequence if necessary, we suppose that $(\eta^-_j(t), \dot{\eta}^-_j(t))$ converges uniformly on compact intervals to a limit curve $(\eta^-(t), \dot{\eta}^-(t)): I\to T\check{M}$. Here, the interval $I$ is either $(-\infty, T]$ or $\mathbb R$ ($T$ is a positive integer). Obviously, $\eta^-(t)$ is still calibrated by $u^-_{c,l}$ and \begin{equation}\label{initial value of limit curve} \eta^-(0)\notin V. \end{equation} In the case $I=(-\infty, T]$, one obtains $\eta^-(T)\in\pi_q\circ\Upsilon_{c,l}$ as a consequence of \eqref{a sequence of curves}, and hence $\{\eta^-(m)\}_{m\in\mathbb Z, m\leq T}\subset\pi_q\circ\Upsilon_{c,l}$, which contradicts \eqref{initial value of limit curve}. In the case $I=\mathbb R$, it follows from \eqref{contain1} that the $\omega$-limit set of $\{\eta^-(m)\}_{m\in\mathbb Z}$ lies in $\pi_q\circ\Upsilon_{c,l}$, then $\{\eta^-(m)\}_{m\in\mathbb Z}\subset\pi_q\circ\Upsilon_{c,l}$ since the $\omega$-limit set and $\alpha$-limit set belong to the same Aubry class, which contradicts \eqref{initial value of limit curve}.
\\ \textbf{Step 3:} By what we have proved above, for each $u^-_{c,l}$-calibrated curve $\gamma^-$ with $\gamma^-(0)\in\mathrm N_{\mathbf{r_0},l}$, $\{(\gamma^-(-m)$, $\dot{\gamma}^-(-m))\}_{m\in\mathbb Z^+}$ would always stay in a small neighborhood of the cylinder $\mathscr L\Sigma_{H,l}(0)$. This also means that the $\alpha$-limit set of $\{\mathscr L^{-1}\big(\gamma^-(-m)$, $\dot{\gamma}^-(-m)\big)\}_{m\in\mathbb Z^+}$ lies in $\Upsilon_{c,l}$. By normal hyperbolicity, $\{\mathscr L^{-1}\big(\gamma^-(-m)$, $\dot{\gamma}^-(-m)\big)\}_{m\in\mathbb Z^+}$ $\subset$ $W^{u,loc}_{\Upsilon_{c,l}}$. Thus, for each $q\in \mathrm N_{\mathbf{r_0},l}$, there is a unique $u^-_{c,l}$-calibrated curve $\gamma^-: (-\infty, 0]\to\check{M}$ with $\gamma^-(0)=q$ since $W^{u,loc}_{\Upsilon_{c,l}}$ is a Lipschitz graph over $\mathrm N_{\mathbf{r_0},l}\subset V$. By weak KAM theory, $u^-_{c,l}$ is therefore $C^{1,1}$ in $\mathrm N_{\mathbf{r_0},l}$. Moreover, Proposition \ref{properties weak KAM} implies that $$(q, c+\partial_qu^-_{c,l}(q,0))=\mathscr L^{-1}\big(\gamma^-(0),\dot{\gamma}^-(0)\big)\in W^{u,loc}_{\Upsilon_{c,l}}.$$ This completes the proof. \end{proof} In \cite{CY2004}, the authors introduced an ``area" parameter $\sigma$ to parameterize an invariant circle lying on the NHIC so that the invariant circle $\Gamma_\sigma$ is $\frac{1}{2}$-H\"older continuous with respect to $\sigma$, namely $$\|\Gamma_{\sigma_1}-\Gamma_{\sigma_2}\|_{C^0}\leq C|\sigma_1-\sigma_2|^{\frac{1}{2}}.$$ However, this result can be improved by taking advantage of the tools in weak KAM theory. Roughly speaking, the ``area" parameter $\sigma$ is, to some extent, the cohomology class $c$ (see Lemma \ref{local11} and Theorem \ref{global regularity of elementary solutions} below). Similar results can be found in \cite{BKZ2016}. Recall that the invariant circle $\Upsilon_{c,\imath}$, where $c\in\mathbb{S}$ and $\imath=l,u$, can be viewed as a Lipschitz graph over $q_1$.
More precisely, by abuse of notation, we continue to write $\Upsilon_{c,\imath}$ for this Lipschitz function $$ \Upsilon_{c,\imath}: \mathbb T\longrightarrow\Sigma_{H,\imath}(0)\subset \mathbb T^2\times\mathbb R^2,$$ $$q_1\longmapsto(q_1, ~\pi_{q_2}\circ\Upsilon_{c,\imath}(q_1), ~\pi_{p_1}\circ\Upsilon_{c,\imath}(q_1),~ \pi_{p_2}\circ\Upsilon_{c,\imath}(q_1))$$ with $\pi_{q_1}\circ\Upsilon_{c,\imath}(q_1)=q_1$ and $\imath=l, u$. Then, we have: \begin{Lem}[$\frac{1}{2}$-H\"older regularity]\label{local11} There exists a positive constant $C$ such that for any $c, c'\in\mathbb{S}$, \begin{enumerate}[\rm(1)] \item $\max_{q_1}\|\Upsilon_{c,l}(q_1)-\Upsilon_{c',l}(q_1)\|\leq C\|c-c'\|^{\frac{1}{2}},$ \item $\max_{q_1}\|\Upsilon_{c,u}(q_1)-\Upsilon_{c',u}(q_1)\|\leq C\|c-c'\|^{\frac{1}{2}}.$ \end{enumerate} \end{Lem} \begin{proof} We only prove item (1); the other one is similar. Recall that Lemma \ref{local manifolds representation} tells us that the elementary weak KAM solution $u^-_{c,l}$ is $C^{1,1}$ in $\mathrm N_{\mathbf{r_0},l}$. Now, in the 4-dimensional space $T^*\mathrm N_{\mathbf{r_0},l}=\mathrm N_{\mathbf{r_0},l}\times\mathbb R^2$, we define two 1-forms $\omega_1=\big(c_1+\partial_{q_1}u^-_{c,l}(q,0)\big)dq_1+\partial_{q_2}u^-_{c,l}(q,0)dq_2$ and $\omega_2=p_1dq_1+p_2 dq_2$. Note that $\omega_1|_{\Upsilon_{c,l}}=\omega_2|_{\Upsilon_{c,l}}$ as a consequence of Proposition \ref{properties weak KAM} and Lemma \ref{local manifolds representation}. Then, \begin{equation*} \int_{\Upsilon_{c,l}}\omega_2=\int_{\Upsilon_{c,l}}\omega_1=\int_{\Upsilon_{c,l}}[c_1dq_1+du^-_{c,l}(q,0)]=\int_{\Upsilon_{c,l}}c_1dq_1=c_1. \end{equation*} For $c, c'\in\mathbb{S}$, we may assume $c'_1>c_1$. Let $D$ be the region on the cylinder $\Sigma_{H,l}(0)$ between $\Upsilon_{c,l}$ and $\Upsilon_{c',l}$ (see figure \ref{regionD}).
By Stokes' theorem, \begin{equation}\label{stoke formula} \int_{D}\sum\limits_{i=1}^2 dp_i\wedge dq_i=\int_{\Upsilon_{c,l}}\omega_2-\int_{\Upsilon_{c',l}}\omega_2=c_1-c'_1. \end{equation} \begin{figure} \caption{The region $D$ is bounded by two invariant circles} \label{regionD} \end{figure} Then \eqref{graph of cylinder} and \eqref{stoke formula} together imply (using that the Jacobian term below is small in absolute value) \begin{equation}\label{area form estimate} \begin{split} |c_1-c_1'|&=\left|\int_{D}\sum\limits_{i=1}^2 dp_i\wedge dq_i\right|=\left|\int_{D}\left(1+\frac{\partial(\mathbf p_{2},\mathbf q_{2})}{\partial(p_1,q_1)}\right)dp_1\wedge dq_1\right|\\ &\geq\frac{1}{4}\left|\int_{D}dp_1\wedge dq_1\right|=\frac{1}{4}\left|\int_{\Upsilon_{c,l}}p_1dq_1-\int_{\Upsilon_{c',l}}p_1dq_1\right|\\ &=\frac{1}{4}\left|\int_{\mathbb T}\pi_{p_1}\circ\Upsilon_{c,l}(q_1)-\pi_{p_1}\circ\Upsilon_{c',l}(q_1)~dq_1\right|. \end{split} \end{equation} As the Lipschitz functions $\pi_{p_1}\circ\Upsilon_{c,l}$ and $\pi_{p_1}\circ\Upsilon_{c',l}:\mathbb T\rightarrow\mathbb R$ satisfy $\pi_{p_1}\circ\Upsilon_{c',l}> \pi_{p_1}\circ\Upsilon_{c,l}$, we have \begin{equation}\label{holder regularity of Upsilon} \int_{\mathbb T}\pi_{p_1}\circ\Upsilon_{c',l}(q_1)-\pi_{p_1}\circ\Upsilon_{c,l}(q_1)~dq_1\geq\frac{1}{4C_L}\left(\max\limits_{q_1}|\pi_{p_1}\circ\Upsilon_{c',l}(q_1)-\pi_{p_1}\circ\Upsilon_{c,l}(q_1)|\right)^2, \end{equation} where $C_L$ is the Lipschitz bound of the functions $\pi_{p_1}\circ\Upsilon_{c,l}$ and $\pi_{p_1}\circ\Upsilon_{c',l}$.
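For the reader's convenience, we sketch the elementary ``tent" estimate behind \eqref{holder regularity of Upsilon}; the notation $g$, $m$, $q_1^*$ below is introduced only for this explanation. Set $g:=\pi_{p_1}\circ\Upsilon_{c',l}-\pi_{p_1}\circ\Upsilon_{c,l}>0$ and let $m:=\max_{q_1}g$ be attained at $q_1^*$. Since $g$ has Lipschitz constant at most $2C_L$, $$g(q_1)\geq\max\big\{\,m-2C_L\,\textup{dist}(q_1,q_1^*),\,0\,\big\},\qquad q_1\in\mathbb T.$$ If $m\leq 2C_L$, integrating this tent over the arc $\{\textup{dist}(q_1,q_1^*)\leq \frac{m}{2C_L}\}$ yields $\int_{\mathbb T}g\,dq_1\geq\frac{m^2}{2C_L}\geq\frac{m^2}{4C_L}$; if instead $m>2C_L$, the same computation gives the linear bound $\int_{\mathbb T}g\,dq_1\geq m-\frac{C_L}{2}\geq\frac{m}{2}$, which, combined with \eqref{area form estimate} and the boundedness of $\mathbb{S}$, also leads to the $\frac{1}{2}$-H\"older estimate below.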
Recall that the function $\mathbf p_2(q_1,p_1)$ is at least $C^1$; then there exists a constant $K>0$ such that \begin{equation}\label{norm estimate} \begin{split} &\|\pi_{p}\circ\Upsilon_{c,l}(q_1)-\pi_{p}\circ\Upsilon_{c',l}(q_1)\|\\ =& |\pi_{p_1}\circ\Upsilon_{c,l}(q_1)-\pi_{p_1}\circ\Upsilon_{c',l}(q_1)|+|\pi_{p_2}\circ\Upsilon_{c,l}(q_1)-\pi_{p_2}\circ\Upsilon_{c',l}(q_1)|\\ =&|\pi_{p_1}\circ\Upsilon_{c,l}(q_1)-\pi_{p_1}\circ\Upsilon_{c',l}(q_1)|+|\mathbf p_2(q_1,\pi_{p_1}\circ\Upsilon_{c,l}(q_1))-\mathbf p_2(q_1,\pi_{p_1}\circ\Upsilon_{c',l}(q_1))|\\ \leq&(1+K)|\pi_{p_1}\circ\Upsilon_{c,l}(q_1)-\pi_{p_1}\circ\Upsilon_{c',l}(q_1)|. \end{split} \end{equation} Thus, combining \eqref{area form estimate}, \eqref{holder regularity of Upsilon} with \eqref{norm estimate}, one obtains \begin{equation*} \begin{split} \|c-c'\|&\geq|c_1-c_1'|\geq\frac{1}{16C_L}\big(\max\limits_{q_1}|\pi_{p_1}\circ\Upsilon_{c,l}(q_1)-\pi_{p_1}\circ\Upsilon_{c',l}(q_1)|\big)^2\\ &\geq\frac{1}{16C_L(1+K)^{2}}\big(\max\limits_{q_1}\|\pi_{p}\circ\Upsilon_{c,l}(q_1)-\pi_{p}\circ\Upsilon_{c',l}(q_1)\|\big)^2, \end{split} \end{equation*} which implies $$\max\limits_{q_1}\|\pi_{p}\circ\Upsilon_{c,l}(q_1)-\pi_{p}\circ\Upsilon_{c',l}(q_1)\|\leq4\sqrt{C_L}(1+K)\|c-c'\|^\frac{1}{2}. $$ Next, since the function $\mathbf q_2(q_1, p_1)$ is at least $C^1$, there exists a constant $\widetilde{C}>0$ such that \begin{align*} \max\limits_{q_1}|\pi_{q_2}\circ\Upsilon_{c,l}(q_1)-\pi_{q_2}\circ\Upsilon_{c',l}(q_1)|& =\max\limits_{q_1}|\mathbf q_2(q_1,\pi_{p_1}\circ\Upsilon_{c,l}(q_1))-\mathbf q_2(q_1,\pi_{p_1}\circ\Upsilon_{c',l}(q_1))|\\ & \leq \widetilde{C}\max\limits_{q_1}|\pi_{p_1}\circ\Upsilon_{c,l}(q_1)-\pi_{p_1}\circ\Upsilon_{c',l}(q_1)|\\ &\leq 4\sqrt{C_L}\widetilde{C}(1+K)\|c-c'\|^\frac{1}{2}. \end{align*} Consequently, item (1) follows immediately by setting $C=4\sqrt{C_L}(1+\widetilde{C})(1+K)$.
\end{proof} We also mention that, for the Peierls barriers restricted on the NHIC, one can even obtain the H\"older continuity with respect to perturbations \cite{CC2017}. The result below is analogous to \cite[Lemma 6.4]{CY2009} and will be crucial for the proof of genericity. \begin{The}\label{global regularity of elementary solutions} Let $\mathbf{r_0}$ be the constant given in Lemma \ref{local manifolds representation}, and fix two points $z_l\in\mathrm N_{\mathbf{r_0},l}, z_u\in\mathrm N_{\mathbf{r_0},u}$. Let $u^\pm_{c,l}(q,t)$, $u^\pm_{c,u}(q,t)$ be the elementary weak KAM solutions satisfying $u^{\pm}_{c,l}(z_l,0)=u^{\pm}_{c,u}(z_u,0)$ $\equiv \textup{constant}$, for all $c\in\mathbb{S}$. Then there exists $C_h>0$ such that for any $c, c'\in\mathbb{S}$ $$|u^{\pm}_{c,l}(q,0)-u^\pm_{c',l}(q,0)|\leq C_h(\|c'-c\|^{\frac{1}{2}}+\|c'-c\|),\quad\forall q\in\check{M}\setminus \mathrm N_{\mathbf{r_0},u}$$ and $$|u^{\pm}_{c,u}(q,0)-u^\pm_{c',u}(q,0)|\leq C_h(\|c'-c\|^{\frac{1}{2}}+\|c'-c\|), \quad\forall q\in\check{M}\setminus \mathrm N_{\mathbf{r_0},l}.$$ \end{The} \begin{Rem} By adding suitable constants, we can take $u^{\pm}_{c,l}(z_l,0)=u^{\pm}_{c,u}(z_u,0)=0$ for all $c\in\mathbb{S}$, since any elementary weak KAM solution plus a constant is still an elementary weak KAM solution. \end{Rem} \begin{proof} We only prove the case of $u^-_{c,l}$; the others are similar. The normal hyperbolicity guarantees the smooth dependence of the unstable leaf $W_q^{u,loc}$ with respect to the base point $q\in\Sigma_{H,l}(0)$. By Lemma \ref{local11}, the local unstable manifold $W^{u, loc}_{\Upsilon_{c,l}}$ of $\Upsilon_{c,l}$ is also $\frac{1}{2}$-H\"older continuous in $c\in\mathbb{S}$.
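More precisely, the following quantitative consequence will be used (the constant $C_0$ and the notation $(\mathbf p_1^u,\mathbf p_2^u)(\,\cdot\,;c)$ for the graphing functions of $W^{u,loc}_{\Upsilon_{c,l}}$ are introduced only for this explanation): since the leaf map $q\mapsto W^{u,loc}_q$ is $C^1$ with uniformly bounded derivatives, comparing the two graphs over the common domain $\mathrm N_{\mathbf{r_0},l}$ and applying Lemma \ref{local11} yields $$\sup_{q\in\mathrm N_{\mathbf{r_0},l}}\big\|(\mathbf p_1^u,\mathbf p_2^u)(q;c)-(\mathbf p_1^u,\mathbf p_2^u)(q;c')\big\|\leq C_0\max_{q_1}\|\Upsilon_{c,l}(q_1)-\Upsilon_{c',l}(q_1)\|\leq C_0C\|c-c'\|^{\frac{1}{2}}.$$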
Then, Lemma \ref{local manifolds representation} implies that there exists a constant $C_1>0$ such that \begin{equation*} \| \big(c+\partial_qu_{c,l}^-(q,0)\big)-\big(c'+\partial_qu_{c',l}^-(q,0)\big) \|\leq C_1\|c-c'\|^{\frac{1}{2}}, ~\forall q\in\mathrm N_{\mathbf{r_0},l}, ~\forall c, c'\in\mathbb{S}. \end{equation*} Further, integrating this estimate along paths from $z_l$, we obtain that for all $c, c'\in\mathbb{S}$ and all $ q\in\mathrm N_{\mathbf{r_0}, l}$, \begin{equation*} \begin{split} \big| \big( u^{-}_{c,l}(q,0)- u^{-}_{c,l}(z_l,0)+\langle c, q-z_l\rangle \big)-\big( u^-_{c',l}(q,0)-u^-_{c',l}(z_l,0)+\langle c', q-z_l\rangle \big) \big|\leq C_1\|c-c'\|^{\frac{1}{2}}. \end{split} \end{equation*} Since we have chosen $u^{-}_{c,l}(z_l,0)\equiv \textup{constant}$ for all $c\in\mathbb{S}$, we get that for all $c, c'\in\mathbb{S}$ and all $q\in\mathrm N_{\mathbf{r_0}, l}$, \begin{equation}\label{local regularity of sol} \begin{split} \big| u^{-}_{c,l}(q,0)- u^-_{c',l}(q,0) \big|\leq C_1\|c-c'\|^{\frac{1}{2}}+\|c-c'\|. \end{split} \end{equation} Next, for each $z\in\check{M}\setminus \mathrm N_{\mathbf{r_0},u}$, there exists a backward calibrated curve $\gamma^-_{c,l}$ with $\gamma_{c,l}^-(0)=z$, which is negatively asymptotic to $\pi_q\circ\Upsilon_{c,l}$. Since the time that $\gamma^-_{c,l}$ spends outside $\mathrm N_{\mathbf{r_0},l}$ is uniformly bounded by some $T_l\in\mathbb Z^+$, we have $\gamma^-_{c,l}(-k)\in \mathrm N_{\mathbf{r_0},l}$ for every integer $k\geq T_{l}$.
Then, $$u^-_{c,l}(\gamma^-_{c,l}(0),0)-u^-_{c,l}(\gamma^-_{c,l}(-T_l),-T_l)=\int^0_{-T_l}L(\gamma^-_{c,l}(s),\dot{\gamma}^-_{c,l}(s),s)-\langle c, \dot{\gamma}^-_{c,l}(s)\rangle+\alpha(c)\, ds,$$ $$u^-_{c',l}(\gamma^-_{c,l}(0),0)-u^-_{c',l}(\gamma^-_{c,l}(-T_l),-T_l)\leq\int^0_{-T_l}L(\gamma^-_{c,l}(s),\dot{\gamma}^-_{c,l}(s),s)-\langle c', \dot{\gamma}^-_{c,l}(s)\rangle+\alpha(c')\,ds.$$ Subtracting the first formula from the second one and using the $1$-periodicity of $u^-$ in $t$ (note that $T_l\in\mathbb Z^+$), one deduces from inequality \eqref{local regularity of sol} that \begin{equation*} \begin{split} u^-_{c',l}(z,0)-u^-_{c,l}(z,0)\leq & u^-_{c',l}(\gamma^-_{c,l}(-T_l),-T_l)-u^-_{c,l}(\gamma^-_{c,l}(-T_l),-T_l)\\ &+\int^0_{-T_l}\langle c-c', \dot{\gamma}^-_{c,l}(s)\rangle+\alpha(c')-\alpha(c)\, ds\\ \leq & u^-_{c',l}(\gamma^-_{c,l}(-T_l),0)-u^-_{c,l}(\gamma^-_{c,l}(-T_l),0)+C_2\|c'-c\|\\ \leq &C_1\|c'-c\|^{\frac 12}+\|c'-c\|+C_2\|c'-c\|. \end{split} \end{equation*} Here, the second inequality follows from the fact that $\|\dot{\gamma}^-_{c,l}\|$ is uniformly bounded and Mather's $\alpha$-function is Lipschitz continuous. So we conclude that there exists $C_h>0$ such that $$u^-_{c',l}(z,0)-u^-_{c,l}(z,0)\leq C_h(\|c'-c\|^{\frac{1}{2}}+\|c'-c\|), ~\forall z\in\check{M}\setminus \mathrm N_{\mathbf{r_0},u}.$$ In a similar way, we can prove that $u^-_{c,l}(z,0)-u^-_{c',l}(z,0)\leq C_h(\|c'-c\|^{\frac{1}{2}}+\|c'-c\|)$ for all $z\in\check{M}\setminus \mathrm N_{\mathbf{r_0},u}$, which completes the proof. \end{proof} \subsection{Choice of the Gevrey space }\label{determin of coeff} In what follows, we assume $\alpha>1$ and $M=\mathbb T^2$. As we will see later, our proof of genericity is not valid for every Gevrey space $\mathbf G^{\alpha,\mathbf L}$ ($\mathbf L>0$), but only for $\mathbf G^{\alpha,\mathbf L}$ with $\mathbf L$ bounded by a positive constant $\mathbf L_0$. This restriction is caused by the Gevrey approximation, and we will explain it and show how to choose $\mathbf L_0$ below.
Let us first look at the unperturbed Lagrangian $L_0=l_1(v_1)+l_2(q_2,v_2)$ in \eqref{lagrangian}. For each $c=(c_1,0)$, $|c_1|\leq R-1$, the Aubry set and Ma\~n\'e set are $$\widetilde{\mathcal A}_{L_0}(c,M)=\widetilde{\mathcal N}_{L_0}(c,M)=\{(q_1, 0, D h_1(c_1), 0, t)\in TM\times\mathbb T: q_1\in\mathbb T, t\in\mathbb T\},$$ whose time-$0$ section is a $\phi_{L_0}^1$-invariant circle. Next, we work in the covering space $\check{M}$ and consider $L_0: T\check{M}\to\mathbb R$. Restricted to the time section $\{t=0\}$, the lift of the Aubry set has two copies: \begin{equation*} \begin{split} \widetilde{\mathcal A}_{L_0,l}(c,\check{M})|_{t=0}&=\{ (q_1, 0,D h_1(c_1),0)\in T\check{M}: q_1\in\mathbb T \},\\ \widetilde{\mathcal A}_{L_0,u}(c,\check{M})|_{t=0}&=\{ (q_1, 1,D h_1(c_1),0)\in T\check{M}: q_1\in\mathbb T \}, \end{split} \end{equation*} and they lie on the following two invariant cylinders respectively \begin{equation*} \begin{split} \mathscr L\Sigma_l(0)=&\{(q_1, 0, D h_1(p_1), 0)\in T\check{M}: q_1\in\mathbb T, |p_1|\leq R-1 \}, \\ \mathscr L\Sigma_u(0)=&\{(q_1, 1, D h_1(p_1), 0)\in T\check{M}: q_1\in\mathbb T, |p_1|\leq R-1 \}. \end{split} \end{equation*} Notice that $\pi_q\circ\mathscr L\Sigma_l(0)=\mathbb T\times\{0\}$ and $\pi_q\circ\mathscr L\Sigma_u(0)=\mathbb T\times\{1\}$. \begin{figure} \caption{$V_{c,l}$ (blue) in a fundamental domain of $\check{M}\times\mathbb T$} \label{tubular} \end{figure} Denote by $u^\pm_{c,l,L_0}, u^\pm_{c,u,L_0}$ the elementary weak KAM solutions of $L_0$ with respect to the cohomology class $c$. Recall that $\kappa<\mathbf{r_0}$. For each $x\in \mathrm{U}_{\kappa,l}$, there exists a unique $u^-_{c,l,L_0}$-calibrated curve $\xi^-_{x,c}(t): (-\infty, 0]\to\check{M}$ such that $\xi^-_{x,c}(0)=x$, and it is negatively asymptotic to $\mathcal A_{L_0,l}(c)$.
We pick and fix a constant $T_c=T_c(\kappa,L_0)>0$ small enough; then we obtain a local neighborhood $$V_{c,l}=\{(\xi_{x,c}^-(t),t)\in\check{M}\times\mathbb T: x\in \mathrm{U}_{\kappa,l}, -T_c\leq t\leq 0\}$$ which is diffeomorphic to $\mathrm{U}_{\kappa,l}\times [-T_c, 0]$ (see figure \ref{tubular}), namely there is a diffeomorphism $$f: \mathrm{U}_{\kappa,l}\times [-T_c, 0]\to V_{c,l}$$ such that $f(x,t)=(\xi^-_{x,c}(t),t)$ and $V_{c,l}\cap(\mathrm N_{3\kappa/4,l}\times\mathbb T)=\emptyset$; the latter property is guaranteed by $T_c\ll 1$. Notice that $V_{c,l}$ varies with $c$. Recall that $\check{M}=\mathbb T\times[0,2]/\sim$, where the equivalence relation $\sim$ is defined by identifying 0 with 2 in the $q_2$-coordinate. In the sequel, we will fix, once and for all, a sufficiently small constant $\delta>0$, which is smaller than $\kappa/4$. Thanks to Theorem \ref{Gevrey approx}, there exists a Gevrey-$(\alpha, \lambda_c)$ diffeomorphism $$\Psi_{c,l}: \mathrm{U}_{\kappa,l}\times [-T_c,0]\to \text{\uj V}_{c,l}$$ such that $\|\Psi_{c,l}-f\|_{C^0(\mathrm{U}_{\kappa,l}\times [-T_c,0])}\leq \delta/2$, where $\text{\uj V}_{c,l}\subsetneq\mathbb T\times(0,1)\times\mathbb T$, $\text{\uj V}_{c,l}\cap(\mathrm N_{\kappa/2,l}\times\mathbb T)=\emptyset$ and $\lambda_c=\lambda_c(\kappa,L_0)\ll 1$.
This means that $\Psi_{c,l}(x,\cdot)$ remains $\delta/2$-close to $\xi^-_{x,c}(\cdot)$ in the following sense: $$\textup{dist}(\Psi_{c,l}(x,t), \xi^-_{x,c}(t))\leq\delta/2, \quad \forall~(x,t)\in\mathrm{U}_{\kappa,l}\times[-T_c, 0].$$ Since the number $\varepsilon_1$ given in Lemma \ref{minimal set on cylinder} is small enough, one can find a small interval $I_c=\{(c'_1,0): c'_1\in (c_1-\tau, c_1+\tau) \}$ depending on $\kappa, L_0$, such that if the perturbation term satisfies $\|L_1\|_{C^2}<2\varepsilon_1$, then the Lagrangian $L=L_0+L_1$ satisfies: for each $c'\in I_c$, $x\in\mathrm U_{\kappa,l}$, $\bullet$ the $u^-_{c',l,L}$-calibrated curve $\gamma^-_{x,c',L}(t):(-\infty,0]\to\check{M}$ with $\gamma^-_{x,c',L}(0)=x$ is negatively asymptotic to $\mathcal A_{L,l}(c',\check{M})$. $\bullet$ $\gamma^-_{x,c',L}(\cdot)$ is still $\delta$-close to $\Psi_{c,l}(x,\cdot)$ in the sense that \begin{equation}\label{tubular approximation} \textup{dist}(\Psi_{c,l}(x,t),\gamma^-_{x,c',L}(t) )\leq \delta,\quad \forall -T_c\leq t\leq 0. \end{equation} These properties are guaranteed by upper semi-continuity. By the finite covering theorem, there exist finitely many intervals $\{I_{c^i}\}_{i=0}^m$ such that \begin{equation}\label{interval decomp} \bigcup_{0\leq i\leq m} I_{c^i}\supset [-R+1,R-1]\times\{0\}, \end{equation} the corresponding diffeomorphism $\Psi_{c^i,l}: \mathrm{U}_{\kappa,l}\times[-T_{c^i}, 0]\to \text{\uj V}_{c^i,l}$ is Gevrey-$(\alpha, \lambda_{c^i})$, and the positive number $T_{c^i}\ll 1$. According to Theorem \ref{inverse thm}, there exists a constant $\lambda'_{c^i}<\lambda_{c^i}$ such that $\Psi_{c^i,l}^{-1}$ is Gevrey-$(\alpha, \lambda'_{c^i})$ smooth. In what follows, we set \begin{equation}\label{bold_L_0} \mathbf L_0:=\min\{\lambda'_{c^i}: i=0,\cdots,m\}, \end{equation} and hence $$\Psi_{c^i,l}^{-1}: \text{\uj V}_{c^i,l}\to\mathrm{U}_{\kappa,l}\times[-T_{c^i}, 0]$$ is Gevrey-$(\alpha, \mathbf L)$ smooth, for all $\mathbf L\leq\mathbf L_0$.
We also point out that $\mathbf L_0$ is independent of the perturbation Hamiltonian $H_1$; it depends only on $H_0$, $R$ and $\alpha$, since the choice of $\kappa$ depends only on $H_0$ and $R$, and $L_0$ depends only on $H_0$. Similarly, these procedures can be carried out for the region $\mathrm{U}_{\kappa,u}$, and one can get the corresponding Gevrey diffeomorphism $\Psi_{c^i,u}:\mathrm{U}_{\kappa,u}\times[-T_{c^i},0]\to\text{\uj V}_{c^i,u}$. For simplicity, we still assume the same interval covering $\bigcup_{i=0}^m I_{c^i}$ as \eqref{interval decomp} and that $\Psi_{c^i,u}^{-1}:$$\text{\uj V}_{c^i,u}\to\mathrm{U}_{\kappa,u}\times[ -T_{c^i},0]$ is Gevrey-$(\alpha, \mathbf L)$ smooth for all $\mathbf L\leq\mathbf L_0$, where each $\Psi_{c^i,u}$ ($i=0,\cdots,m$) possesses the property analogous to \eqref{tubular approximation}. \subsection{Total disconnectedness}\label{total discon} Let $\alpha>1$. We will study the topological structure of the set of minimal points for \[B_{c,l,u}(x,\tau)=u^{-}_{c,l}(x,\tau)-u^{+}_{c,u}(x,\tau),\quad\text{and} \quad B_{c,u,l}(x,\tau)=u^{-}_{c,u}(x,\tau)-u^{+}_{c,l}(x,\tau)\] defined in \eqref{anotherkind barr}, where $u^{\pm}_{c,\imath}$ ($\imath=l, u$) are the elementary weak KAM solutions. Actually, we will show that the minimal set is totally disconnected for generic Lagrangian systems. Inspired by the technique in \cite[Section 4.2]{Ch2017}, we will perturb a Lagrangian directly by small potential functions. Compared with the perturbative techniques used in \cite{CY2004, CY2009}, which perturb the generating functions to create genericity, our technique in the current paper provides more information: we can prove the genericity not only in the usual sense but also in the sense of Ma\~n\'e. Let $L=L_0+L_1$ be our Lagrangian given in \eqref{lagrangian}, where $\|L_1\|_{C^2}<\varepsilon_1$.
Recall the interval covering $\bigcup_{0\leq i\leq m} I_{c^i}$ given in section \ref{determin of coeff}; one can always assume that the length of each interval $I_{c^i}$ is less than 1. Then Theorem \ref{global regularity of elementary solutions} implies that for any $c, c'\in I_{c^i}\cap\mathbb{S}$ and $q\in\mathrm{U}_{\kappa}=\mathrm{U}_{\kappa,l}\cup\mathrm{U}_{\kappa,u},$ \begin{equation}\label{simplicity of global regularity} \begin{split} |u^{\pm}_{c,l}(q,0)-u^\pm_{c',l}(q,0)|\leq 2C_h\|c'-c\|^{\frac{1}{2}}, \qquad |u^{\pm}_{c,u}(q,0)-u^\pm_{c',u}(q,0)|\leq 2C_h\|c'-c\|^{\frac{1}{2}}. \end{split} \end{equation} Fixing $\mathbf L\in(0, \mathbf L_0]$ and $\varepsilon_0\in (0,\varepsilon_1)$, we consider the following set in $\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T)$ with $M=\mathbb T^2$: \begin{equation}\label{space1} \mathfrak{P}:=\{P\in\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T): \|P\|_{\alpha,\mathbf L}<\varepsilon_0,~\text{supp}P\cap(\check{\pi}\mathrm N_{\kappa/2}\times\mathbb T) =\emptyset\}. \end{equation} Then it is easily seen that a potential perturbation $P\in\mathfrak{P}$ to the Hamiltonian $H$ does not affect the NHIC, since $\widetilde\Sigma_H\subset \check{\pi}\mathrm N_{\kappa/2}\times\mathbb R^2\times\mathbb T$, see \eqref{c_t1}. We also point out that, by a natural extension, any function in $\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T)$ can be viewed as a function defined on $\check{M}\times\mathbb T$. \begin{The}\label{generic G2} Let $\alpha>1$, $\mathbf L\leq\mathbf L_0$. There exists a residual set $\mathcal{W}\subset\mathfrak{P}$ such that for each Gevrey potential function $P\in\mathcal{W}$, the Lagrangian $L+P: T\check{M}\times\mathbb T\to\mathbb R$ satisfies: for each $c\in\mathbb{S}$, the sets \[\arg\min B_{c,l,u}\big|_{\mathrm U_{\kappa,l}\cup\mathrm U_{\kappa,u}},\quad\arg\min B_{c,u,l}\big|_{\mathrm U_{\kappa,l}\cup\mathrm U_{\kappa,u}}\] are both totally disconnected.
Here, $\arg\min B_{c,l,u}|_{\mathrm U_{\kappa,l}\cup\mathrm U_{\kappa,u}}$ stands for $\arg\min B_{c,l,u}\bigcap(\mathrm U_{\kappa,l}\cup\mathrm U_{\kappa,u})$, and $\arg\min B_{c,u,l}|_{\mathrm U_{\kappa,l}\cup\mathrm U_{\kappa,u}}$ stands for $\arg\min B_{c,u,l}\bigcap(\mathrm U_{\kappa,l}\cup\mathrm U_{\kappa,u})$. \end{The} \begin{proof} For $c\in\mathbb{S}$, we first study the set $\arg\min B_{c,l,u}$ restricted to the region $\mathrm U_{\kappa,l}\subset\check{M}\times\{t=0\}$. Let $\mathbf{r_0}>0$ be the constant given in Lemma \ref{local manifolds representation}. Recall that we have $\mathbf{r_0}>\kappa$. Then, to prove the total disconnectedness of $\arg\min B_{c,l,u}\big|_{\mathrm U_{\kappa,l}}$, it is enough to verify that \begin{equation}\label{Tot_Disc1} \textsf{the set~} \arg\min B_{c,l,u}\big|_{\bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l}} \textsf{~is totally disconnected}, \end{equation} where ${\bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l}}$ $\subset$ $\mathrm U_{\kappa,l}$ and $\bar{\mathrm N}_{\mathbf{r_0},l}$ is the closure of $\mathrm N_{\mathbf{r_0},l}$. To explain this, we recall that, according to Proposition \ref{double description}, a point $(x, 0)$ $\in$ $\arg\min B_{c,l,u}|_{\mathrm U_{\kappa,l}}$ if and only if there exists a $c$-semi static curve $\gamma_{x, c}: \mathbb R\to\check{M}$ with $\gamma_{x, c}(0)=x$, such that the orbit $\{\gamma_{x, c}(n): n\in\mathbb Z\}$ is negatively asymptotic to $\pi_q\circ \Upsilon_{c,l}$ and positively asymptotic to $\pi_q\circ \Upsilon_{c,u}$. Then, by letting $\kappa$ be suitably small if necessary, the orbit $\{\gamma_{x, c}(n): n\in\mathbb Z\}$ has to pass through the region ${\bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l}}$ when it approaches $\pi_q\circ \Upsilon_{c,l}$, as a result of normal hyperbolicity. Consequently, in what follows, we only need to check \eqref{Tot_Disc1}.
We first focus on the subinterval $I_{c^0}$ in the interval covering $\bigcup_{0\leq i\leq m} I_{c^i}$. Let us pick a $2$-dimensional disk $$D=\{(x_1, x_2, t)\in\check{M}\times\mathbb T:~t=0, ~ |x_1-x_{1,0}|\leq d, ~ |x_2-x_{2,0}|\leq d \}\subset\bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l} $$ which is centered at the point $(x_{1,0}, x_{2,0})$, where $d$ is small. We also set $$D+d_1:=\{(x_1, x_2,t)\in\check{M}\times\mathbb T:~t=0, ~|x_1-x_{1,0}|\leq d+d_1, ~ |x_2-x_{2,0}|\leq d+d_1\}\subset \bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l} $$ with $0<d_1\ll 1$ (see figure \ref{picture2}). \begin{figure} \caption{A fundamental domain of $\check{M}\times\{t=0\}$} \label{picture2} \end{figure} Let $\mu$ be suitably small. For the index $i=1$ or $2$, we consider the following space \begin{align}\label{Per_spaces} \mathfrak{V}_{i}:=\bigg\{\mu\Big(\sum\limits_{\ell=1,2}a_{i,\ell}\cos2\ell\pi(x_i-x_{i,0}) +b_{i,\ell}\sin2\ell\pi(x_i-x_{i,0})\Big)~:~a_{i,\ell}, b_{i,\ell}\in[1,2]\bigg\}. \end{align} Obviously, $\mathfrak{V}_1, \mathfrak{V}_2\subset C^\omega(M)$. Next, we will construct perturbations based on potential functions of the form in $\mathfrak{V}_{i}$. To this end, we use the notation given in section \ref{determin of coeff}. Fixing a sufficiently large constant $\mathfrak{L}\gg\mathbf L$, by Lemma \ref{Gevrey bumpfunction} one can construct a function $\rho(x,t)=g(x)\chi(t):\check{M}\times\mathbb T\to\mathbb R$ such that $\chi:\mathbb T\to\mathbb R$ and $g(x):\check{M}\to\mathbb R$ are both non-negative Gevrey-$(\alpha, \mathfrak{L})$ functions. We choose \begin{equation*} \chi(t)=\left\{ \begin{array}{ll} >0, &~t\in (-T_{c^0}, 0)\\ =0, &~t\in\mathbb T\setminus(-T_{c^0}, 0) \end{array} \right. \end{equation*} where $ T_{c^0}\ll 1$ is given in section \ref{determin of coeff}, and require that $g|_D\equiv1$ and supp$g\subset D+d_1$$\subset\bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l} $.
We set $$\text{\uj C}:=\{ \Psi_{c^0,l}(x,t)~|~(x,t)\in (D+d_1)\times[-T_{c^0},0] \},$$ then $\text{\uj C}\subset\text{\uj V}_{c^0,l}\subsetneq \mathbb T\times(0, 1)\times\mathbb T$, and therefore $\text{\uj C}\cap(\mathrm N_{\kappa/2,l}\times\mathbb T)=\emptyset$. $\bullet$ With each $V\in\mathfrak{V}_{1}$ or $\mathfrak{V}_{2}$, which can also be viewed as a function on $\check{M}$, one can define $\widetilde{V}\in C^\infty(\check{M}\times\mathbb T)$ as follows: on the ``lower" domain $\mathbb T\times[0, 1]\times\mathbb T\subset\check{M}\times\mathbb T$, \begin{equation*} \widetilde{V}(z)=\begin{cases} (\rho V)\circ\Psi_{c^0,l}^{-1}(z)=\rho(x,t)V(x), & ~\Psi_{c^0,l}(x,t)=z\in\text{\uj C},\\ 0, & z\in(\mathbb T\times[0, 1]\times\mathbb T)\setminus\text{\uj C}. \end{cases} \end{equation*} Then we extend symmetrically the function to the ``upper" domain $\mathbb T\times[1, 2]\times\mathbb T$ such that $$\widetilde{V}(y,t)=\widetilde{V}(y-\mathbf{e}_2,t)$$ with $\mathbf{e}_2=(0, 1).$ The support of $\widetilde{V}$ satisfies \begin{equation}\label{supp_2copies} \textup{supp}\widetilde{V}\subset \text{\uj C}\cup(\text{\uj C}+\mathbf{e}_2). \end{equation} Since $\mathfrak{L}\gg\mathbf L$, according to the properties (G\ref{algebra norm}), (G\ref{composition}) in Section \ref{introduction}, we have \begin{equation}\label{construction of V} \widetilde{V}\in\mathbf G^{\alpha, \mathbf L}(\check{M}\times\mathbb T). \end{equation} $\bullet$ We also remark that, by the symmetry of $\widetilde{V}\in\mathbf G^{\alpha,\mathbf L}(\check{M}\times\mathbb T)$, $\widetilde{V}$ can also be viewed as a function on $M\times\mathbb T$. By abuse of notation, we continue to write $\widetilde{V}\in\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T)$. 
Thus, $\widetilde{V}\in \mathfrak{P}.$ As a result of the construction above, some constant $C_1>0$ exists such that \begin{equation}\label{integration of V} \int_{-T_{c^0}}^{0}\widetilde{V}(\Psi_{c^0,l}(x,t))dt=V(x)\int^0_{-T_{c^0}}g(x)\chi(t) dt =V(x)\int^0_{-T_{c^0}}\chi(t) dt=C_1 V(x),\quad \text{for~} x\in D. \end{equation} Here, we have used the fact $g|_D\equiv1$. Let $\Pi_i$, $i=1,2$, be the standard projection to the $i$-th coordinate of $\check{M}$. For the Lagrangian $L:T\check{M}\times\mathbb T\to\mathbb R$, we consider two elementary weak KAM solutions $u^-_{c,l}(q,t)$, $u^+_{c,u}(q,t)$, and denote by $u^-_{c,l,\widetilde{V}}(q,t)$, $u^+_{c,u,\widetilde V}(q,t)$ the elementary weak KAM solutions of the perturbed Lagrangian $ L+\widetilde V$. Then the following result holds: \begin{Lem}\label{diameter compare} There exists an open and dense set $\mathcal{U}_{D}\subset\mathfrak{P}$ (see \eqref{space1}) such that for each $\widetilde{V}\in\mathcal{U}_{D}$, \begin{equation}\label{banjingxiao} \Pi_i\left(\arg\min\big(u^-_{c,l,\widetilde V}(x,0)-u^+_{c,u,\widetilde V}(x,0)\big)\big|_{D}\right)\subsetneq [x_{i,0}-d,x_{i,0}+d],\quad \text{for all~} c\in I_{c^0}\cap\mathbb{S}, \end{equation} where $ i=1, 2$. \end{Lem} \begin{proof} We start with the perturbation $\widetilde{V}$ of the form \eqref{construction of V}, where $V\in\mathfrak{V}_{1}\cup \mathfrak{V}_{2}$. Note that under such a potential perturbation, the cylinders $\Sigma_{H,l}(0)$ and $\Sigma_{H,u}(0)$ remain unchanged, and hence the Aubry set $\widetilde{\mathcal A}_{L+\widetilde{V}}(c,\check{M})$ $=$ $\widetilde{\mathcal A}_{L}(c,\check{M})$. \textbf{Step 1:} For $c\in I_{c^0}\cap\mathbb{S}$, the projected Aubry set $\mathcal A_{L}(c, \check{M})\subset \check{M}\times\mathbb T$ has two copies, denoted by $\mathcal A_{L,l}(c, \check{M})$ and $\mathcal A_{L, u}(c, \check{M})$. Each set $\mathcal A_{L,\imath}(c, \check{M})$, $\imath=l, u$ is diffeomorphic to $\mathbb T^2$ since $c\in \mathbb{S}$. 
For each $x\in D$ and each $u^+_{c,u}$-calibrated curve $\gamma^+_{x,c}(t):[0,+\infty)\to\check{M}$ with $\gamma^+_{x,c}(0)=x$, the minimizing curve $(\gamma^+_{x,c}(t), t) :\mathbb R^+\longrightarrow\check{M}\times\mathbb T$ is positively asymptotic to $\mathcal A_{L, u}(c, \check{M})$. Now, we claim that \begin{equation}\label{supp_nonintersection} \textup{supp}\widetilde{V}~\bigcap~ \big(\bigcup_{t>0}(\gamma^+_{x,c}(t),t)\big)=\emptyset, \end{equation} as long as $D$ is small enough. In fact, according to \eqref{supp_2copies}, the support of $\widetilde{V}$ has two copies, in the lower and upper region respectively, and $\textup{supp}\widetilde{V}\subset \text{\uj C}\cup(\text{\uj C}+\mathbf{e}_2). $ It is clear that the minimizing orbit $(\gamma^+_{x,c}(t),t)$ never intersects itself, so $ \text{\uj C}\bigcap \big(\bigcup_{t>0}(\gamma^+_{x,c}(t),t)\big)=\emptyset$ since $D$ is a small neighborhood of $x$. Moreover, observe that the sets $\mathcal A_{L,l}(c, \check{M})$ and $\mathcal A_{L, u}(c, \check{M})$, which are diffeomorphic to $\mathbb T^2$, divide the $3$-dimensional configuration space $\check{M}\times\mathbb T$ into two connected components, so the minimizing curve $(\gamma^+_{x,c}(t), t) :\mathbb R^+\longrightarrow\check{M}\times\mathbb T$ always stays in the lower region, which means $ (\text{\uj C}+\mathbf{e}_2)$ $\bigcap$ $ \big(\bigcup_{t>0}(\gamma^+_{x,c}(t),t)\big)$ $=\emptyset$. This proves our claim \eqref{supp_nonintersection}. Consequently, \begin{equation}\label{u_plus_equal} u^+_{c,u,\widetilde V}(x)=u^+_{c,u}(x),\quad \text{for all}~x\in D. \end{equation} By contrast, the function $u^-_{c,l,\widetilde V}$ does undergo a small perturbation.
Indeed, for $x\in D$, we can take a $u^-_{c,l,\widetilde{V}}$-calibrated curve $\gamma^-_{x,c,\widetilde{V}}:(-\infty,0]\to\check{M}$ with $\gamma^-_{x,c,\widetilde{V}}(0)=x$; then for $ m\in\mathbb Z^+$, \begin{equation}\label{dafd1} u^-_{c,l,\widetilde{V}}(\gamma^-_{x,c,\widetilde{V}}(0),0)-u^-_{c,l,\widetilde{V}}(\gamma^-_{x,c,\widetilde{V}}(-m),-m)= \int_{-m}^0 (L-\eta_c+\widetilde{V})(d\gamma^-_{x,c,\widetilde{V}}(t),t)+\alpha(c)\,dt. \end{equation} For another perturbation $\widetilde{V}'$, we have \begin{equation}\label{dafd2} u^-_{c,l,\widetilde V'}(\gamma^-_{x,c,\widetilde{V}}(0),0)-u^-_{c,l,\widetilde V'}(\gamma^-_{x,c,\widetilde{V}}(-m),-m)\leq \int_{-m}^0(L-\eta_c+\widetilde V')(d\gamma^-_{x,c,\widetilde{V}}(t),t)+\alpha(c)\,dt. \end{equation} By normal hyperbolicity, there exists a uniform upper bound $T\in\mathbb Z^+$, $T>T_{c^0}$, such that the orbit $\{(\gamma^-_{x,c,\widetilde{V}}(-t), t)\}_{t\geq T}$ retreats into the small neighborhood $\mathrm N_{\kappa/2,l}\times\mathbb T$. As the supports of $\widetilde{V}$ and $\widetilde{V}'$ have empty intersection with $\mathrm N_{\kappa/2,l}\times\mathbb T$, we have $u^-_{c,l,\widetilde{V}}= u^-_{c,l,\widetilde V'}$ on $\mathrm N_{\kappa/2,l}\times\mathbb T$. Then \eqref{dafd1}-\eqref{dafd2} imply that $$u^-_{c,l,\widetilde V'}(x,0)-u^-_{c,l,\widetilde{V}}(x,0)\leq \int_{-T}^0 (\widetilde V'-\widetilde{V})(\gamma^-_{x,c,\widetilde{V}}(t),t)\,dt.$$ Conversely, we can prove similarly that $$u^-_{c,l,\widetilde V'}(x,0)-u^-_{c,l,\widetilde{V}}(x,0)\geq \int_{-T}^0 (\widetilde V'-\widetilde{V})(\gamma^-_{x,c,\widetilde{V}'}(t),t)\,dt,$$ where $\gamma^-_{x,c,\widetilde V'}$ denotes the backward $u^-_{c,l,\widetilde V'}$-calibrated curve with $\gamma^-_{x,c,\widetilde V'}(0)=x$.
Since $x$ lies in the region $D$ where $u^-_{c,l,\widetilde V}$ is differentiable (see Lemma \ref{local manifolds representation}), one has $\|\gamma^-_{x,c,\widetilde V'}(t)-\gamma^-_{x,c,\widetilde V}(t)\|\to 0$ as $\|\widetilde{V}' -\widetilde V\|\to 0$, which is guaranteed by upper semi-continuity. Therefore, for $c\in I_{c^0}\cap\mathbb{S}$, \begin{equation}\label{potential perturbation} u^-_{c,l,\widetilde V'}(x,0)-u^-_{c,l,\widetilde V}(x,0)=\mathscr K_{\widetilde{V},c}(\widetilde V'-\widetilde{V})(x)+\mathscr R_c(\widetilde V'-\widetilde{V})(x),\quad x\in D, \end{equation} where the operator \begin{equation}\label{linear operator} \mathscr K_{\widetilde{V},c}(\widetilde V'-\widetilde{V})(x)=\int_{-T}^0(\widetilde V'-\widetilde{V})(\gamma_{x,c,\widetilde{V}}^-(t), t)\,dt, \end{equation} and the remainder $$\mathscr R_c(\widetilde V'-\widetilde V)=o(\| V'-V\|_{C^0})$$ since $V, V'\in\mathfrak{V}_{1}\cup \mathfrak{V}_{2}$ are linear combinations of trigonometric functions. \textbf{Step 2:} Now, we claim that there exists an arbitrarily small perturbation $\widetilde{V}\in\mathfrak{P}$ of the form \eqref{construction of V}, such that \begin{equation}\label{touying1} \Pi_1\left(\arg\min\big(u^-_{c,l,\widetilde V}(x,0)-u^+_{c,u,\widetilde V}(x,0)\big)\big|_{D}\right)\subsetneq [x_{1,0}-d,x_{1,0}+d],\quad \text{for all~} c\in I_{c^0}\cap\mathbb{S}. \end{equation} To prove this claim, we construct a grid for the parameters $(a_{1,1},b_{1,1}, a_{1,2}, b_{1, 2})$ in $\mathfrak{V}_1$ by splitting the domain $[1,2]^4$ equally into a family of 4-dimensional cubes whose side length is $\mu^2$, namely $$\Delta a_{1,\ell}=\Delta b_{1,\ell}=\mu^{2},\quad \ell=1,2.$$ There are as many as $[\mu^{-8}]$ cubes. In the sequel, we use the symbol $\textrm{Osc}_{x\in D}f$ to denote the oscillation of $f$ on $D$, i.e. the difference between the supremum and infimum of $f$ on $D$: $\textrm{Osc}_{x\in D}f=\sup_{x\in D}f-\inf_{x\in D}f$.
According to \eqref{tubular approximation}, for each $c\in I_{c^0}\cap\mathbb{S}$ and $x\in D$, the backward $c$-semi static curve $\gamma^-_{x,c,\widetilde{V}}(t)$ will stay in the $\delta$-neighborhood of the curve $\Psi_{c^0,l}(x,t)$ for $t\in[-T_{c^0},0]$ provided that $\mu$ is small enough. Besides, since minimizing orbits have no self intersections, by letting $D$ be suitably small if necessary, the minimizing curve $\big(\gamma^-_{x,c,\widetilde{V}}(t), t\big):$ $[-T, -T_{c^0})$ $\longrightarrow$ $\check{M}\times\mathbb T$ does not intersect the support of $\widetilde{V}'-\widetilde{V}$. This is guaranteed by using arguments similar to the proof of \eqref{supp_nonintersection}. Therefore, \eqref{integration of V} and the above estimate imply that \begin{align}\label{approxi_del} \mathscr K_{\widetilde{V},c}(\widetilde{V}'-\widetilde{V})(x) =\int_{-T_{c^0}}^0 (\widetilde{V}'-\widetilde{V})(\Psi_{c^0,l}(x,t))\,dt+O(\mu\delta)= C_1 (V'(x)-V(x))+O(\mu\delta). \end{align} Then some constant $C_2>0$ exists such that \begin{align}\label{zhendang1} \text{Osc}_{x\in D}(\mathscr K_{\widetilde{V}, c}(\widetilde V'-\widetilde V))>\frac{1}{4}C_1\text{Osc}_{x\in D}(V-V')>C_2\mu\Delta \end{align} where $\Delta=\max\{|a_{1,\ell}-a'_{1,\ell}|,|b_{1,\ell}-b'_{1,\ell}|:~\ell=1,2\}$. This is guaranteed by \eqref{approxi_del} and the fact that $V$ is a finite linear combination of $\{\sin2\pi x_1,\cos2\pi x_1, \sin 4\pi x_1,\cos 4\pi x_1\}$. Next, we split the interval $I_{c^0}$ equally into $[K_s\mu^{-6}]$ subintervals, where $K_s=L_s(\frac{24C_h}{C_2})^2$, $L_s$ is the length of $I_{c^0}$, $C_h$ is the constant given in \eqref{simplicity of global regularity} and $C_2$ is given in \eqref{zhendang1}. We pick the subintervals that have non-empty intersection with $\mathbb{S}$, and denote them by $\{\text{\uj J}_i\}_{i\in\mathbb{J}}$. Clearly, the cardinality of $\mathbb{J}$ is less than $[K_s\mu^{-6}]$.
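For the reader's convenience, let us record why this particular choice of $K_s$ produces the bound $C_2\mu^3/6$ used below. Disregarding the integer part $[\,\cdot\,]$, which changes the constant only by a factor arbitrarily close to $1$ as $\mu\to0$, each subinterval has length $L_s/(K_s\mu^{-6})$, and a direct computation gives
\begin{equation*}
4C_h\Big(\frac{L_s}{K_s\mu^{-6}}\Big)^{\frac{1}{2}}
=4C_h\Big(\frac{L_s}{L_s(24C_h/C_2)^{2}}\Big)^{\frac{1}{2}}\mu^{3}
=4C_h\cdot\frac{C_2}{24C_h}\,\mu^{3}
=\frac{C_2\mu^{3}}{6}.
\end{equation*}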
Let us fix a $c^*\in\text{\uj J}_i\cap\mathbb{S}$. If for some parameter $(a^*_{1,\ell}, b^*_{1,\ell})$, $\ell=1,2$, and its corresponding perturbation $V^*\in\mathfrak{V}_1$, formula \eqref{touying1} does not hold, then $\min_{x_2}\big( u^-_{c^*,l,\widetilde V^*}(x,0)-u^+_{c^*,u,\widetilde V^*}(x,0)\big)$ is identically constant on $D$, and hence \begin{equation}\label{zhendang2} \textup{Osc}_{x\in D}\min\limits_{x_2}\big( u^-_{c^*,l,\widetilde V^*}(x,0)-u^+_{c^*,u,\widetilde V^*}(x,0)\big)=0. \end{equation} Next, for another $V'=\mu\Big(\sum_{\ell=1,2}a'_{1,\ell}\cos2\ell\pi(x_1-x_{1,0}) +b'_{1,\ell}\sin2\ell\pi(x_1-x_{1,0})\Big)\in\mathfrak{V}_1$ and the corresponding perturbation $\widetilde {V}'$, it follows from \eqref{u_plus_equal} and \eqref{potential perturbation} that for all $c\in \text{\uj J}_i\cap\mathbb{S}$ and $x\in D$, \begin{equation} \begin{split} & \big(u^-_{c,l,\widetilde V'}(x,0)-u^+_{c,u,\widetilde V'}(x,0)\big)-\big( u^-_{c^*,l, \widetilde V^*}(x,0)-u^+_{c^*,u,\widetilde V^*}(x,0)\big)\\ =&\big(u^-_{c,l,\widetilde V'}(x,0)-u^-_{c^*,l,\widetilde V'}(x,0)\big)-\big(u^+_{c,u,\widetilde V'}(x,0)-u^+_{c^*,u,\widetilde V'}(x,0)\big) +\big(\mathscr K_{\widetilde{V}^*, c^*}+\mathscr R_{c^*}\big)(\widetilde V'-\widetilde V^*)(x). \end{split} \end{equation} As the length of $\text{\uj J}_i$ is $\frac{L_s}{[K_s\mu^{-6}]}$ and $c,c^*\in\text{\uj J}_i\cap\mathbb{S}$, formula \eqref{simplicity of global regularity} then implies that \begin{equation} \begin{split} &\Big|\big(u^-_{c,l,\widetilde V'}(x,0)-u^-_{c^*,l,\widetilde V'}(x,0)\big)-\big(u^+_{c,u,\widetilde V'}(x,0)-u^+_{c^*,u,\widetilde V'}(x,0)\big)\Big|\\ \leq & 4C_h\|c-c^*\|^{\frac{1}{2}} \leq 4C_h\Big(\frac{L_s}{[K_s\mu^{-6}]}\Big)^{\frac{1}{2}}\leq\frac{C_2\mu^3}{6}.
\end{split} \end{equation} Since $\mu\ll 1$, one has $\|\widetilde V'-\widetilde V^*\|\ll 1$ and \begin{equation}\label{frac13} \|\mathscr R_{c^*}(\widetilde V'-\widetilde V^*)\|\leq\frac{1}{6}\|\mathscr K_{\widetilde{V}^*,c^*}(\widetilde V'-\widetilde V^*)\|. \end{equation} Regarding the potential function $V'$, if its parameter $(a'_{1,\ell}, b'_{1,\ell})$, $\ell=1, 2$, satisfies \begin{equation}\label{relation of coefficients} \max\{|a^*_{1,\ell}-a'_{1,\ell}|, |b^*_{1,\ell}-b'_{1,\ell}| : \ell=1,2\}\geq \mu^2, \end{equation} then inequalities \eqref{zhendang1}--\eqref{frac13} together give rise to $$\textup{Osc}_{x\in D}\min\limits_{x_2}\big(u^-_{c,l,\widetilde V'}(x,0)-u^+_{c,u,\widetilde V'}(x,0)\big)\geq\frac{C_2}{3}\mu^3>0.$$ Thus we can conclude that for each $c\in\text{\uj J}_i\cap\mathbb{S}$ and $V'\in\mathfrak{V}_1$ satisfying \eqref{relation of coefficients}, we have \begin{equation}\label{reduce} \begin{split} \textup{Osc}_{x\in D}\min\limits_{x_2}\big(u^-_{c,l,\widetilde V'}(x,0)-u^+_{c,u,\widetilde V'}(x,0)\big)>0. \end{split} \end{equation} Consequently, for each $\text{\uj J}_i$, we only need to remove at most $2^4$ cubes from the grid $\{\Delta a_{1,\ell}, \Delta b_{1,\ell}: \ell=1, 2\}$ so that formula \eqref{reduce} holds for all other cubes. Letting the index $i$ range over $\mathbb{J}$, we therefore obtain a set $\text{\uj P}_1\subseteq\{(a_{1,1}, a_{1,2}, b_{1,1}, b_{1,2}): a_{1,\ell}, b_{1,\ell}\in[1, 2], \ell=1, 2\}$ with Lebesgue measure $$\textup{meas}\text{\uj P}_1\geq1-2^4(\mu^2)^4|\mathbb{J}|\geq1-2^4K_s\mu^2>0,$$ such that formula \eqref{reduce} holds for any $V'$ with parameter in $\text{\uj P}_1$ and any $c\in I_{c^0}\cap\mathbb{S}$. As $\mu$ is small enough, the claim \eqref{touying1} is now evident from what we have proved. \textbf{Step 3:} Actually, the arguments above in Step 2 also show that formula \eqref{touying1} holds on a dense subset of $\mathfrak{P}$.
The openness is obvious, so there is an open-dense set $\mathcal{U}_{D,1}$ in $\mathfrak{P}$ such that formula \eqref{touying1} holds for each perturbed Lagrangian $L+\widetilde{V}$ with $\widetilde{V}\in\mathcal{U}_{D,1}$. Analogously, we can consider a potential function $V\in\mathfrak{V}_2$ and its associated perturbation $\widetilde{V}$. By repeating similar arguments as in Step 2, we also obtain an open-dense set $\mathcal{U}_{D,2}\subset\mathfrak{P}$, such that for each perturbed Lagrangian $L+\widetilde{V}$ where $\widetilde{V}\in\mathcal{U}_{D,2}$, \begin{equation*} \Pi_2\left(\arg\min\big(u^-_{c,l,\widetilde V}(x,0)-u^+_{c,u,\widetilde V}(x,0)\big)\big|_{D}\right)\subsetneq [x_{2,0}-d,x_{2,0}+d],\quad \text{for all~} c\in I_{c^0}\cap\mathbb{S}. \end{equation*} Thus, the proof of Lemma \ref{diameter compare} is now completed by taking the set $\mathcal{U}_D=\mathcal{U}_{D,1}\cap \mathcal{U}_{D,2}$. \end{proof} Now we continue to prove Theorem \ref{generic G2}. $\bullet$ From Lemma \ref{diameter compare} we see that for each small disk $D\subseteq \bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l}$, there exists an open-dense set $\mathcal{U}_D\subset\mathfrak{P}$ such that formula \eqref{banjingxiao} holds for each Lagrangian $L+\widetilde V$ with $\widetilde V\in\mathcal{U}_D$. Next, we take a countable topological basis $\bigcup_j D_j$ of $\bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l}$, where the diameter of $D_j$ approaches $0$ as $j\to\infty$, and therefore obtain an open-dense set $\mathcal{U}_{D_j}$ for each $j$. Clearly, $\mathcal{U}_{I_{c^0}}=\bigcap_j\mathcal{U}_{D_j}$ is a residual set in $\mathfrak{P}$, and the set $\arg\min\big(u^-_{c,l,P}(x,0)-u^+_{c,u,P}(x,0)\big)\big|_{\bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l}}$ is totally disconnected for each $P\in\mathcal{U}_{I_{c^0}}$ and $c\in\mathbb{S}\cap I_{c^0}$.
The technique above also works for the other subintervals $I_{c^i}$, $i=1,\cdots,m$; we can then obtain the corresponding residual sets $\mathcal{U}_{I_{c^i}}$, $i=1,\cdots,m$. Hence the intersection $\mathcal{U}_l=\bigcap_{i=0}^m\mathcal{U}_{I_{c^i}}$ is residual, and $\arg\min\big(u^-_{c,l,P}(x,0)-u^+_{c,u,P}(x,0)\big)\big|_{\bar{\mathrm N}_{\mathbf{r_0},l}\setminus \mathrm N_{\kappa,l}}$ is totally disconnected for each $P\in\mathcal{U}_l$ and $c\in\mathbb{S}$. By what we have shown at the beginning of the proof, this is equivalent to saying that $\arg\min\big(u^-_{c,l,P}(x,0)-u^+_{c,u,P}(x,0)\big)\big|_{\mathrm U_{\kappa, l}}$ is totally disconnected for each $P\in\mathcal{U}_l$ and $c\in\mathbb{S}$. $\bullet$ Similarly, one can prove that there exists a residual set $\mathcal{U}_u\subset\mathfrak{P}$, such that the set $$\arg\min\big(u^-_{c,l,P}(x,0)-u^+_{c,u,P}(x,0)\big)\big|_{\mathrm U_{\kappa,u}}$$ is totally disconnected for each $P\in\mathcal{U}_u$ and $c\in\mathbb{S}$. $\bullet$ Conversely, by applying the technique above to $u^-_{c,u,P}(x,0)-u^+_{c,l,P}(x,0)$, we can also obtain two residual sets $\mathcal{V}_l$ and $\mathcal{V}_u$ in $\mathfrak{P}$, such that the set $\arg\min\big(u^-_{c,u,P}(x,0)-u^+_{c,l,P}(x,0)\big)\big|_{\mathrm U_{\kappa,l}}$ is totally disconnected for each $c\in\mathbb{S}$ and $P\in\mathcal{V}_l$, and the set $\arg\min\big(u^-_{c,u,P}(x,0)-u^+_{c,l,P}(x,0)\big)\big|_{\mathrm U_{\kappa,u}}$ is totally disconnected for each $ c\in\mathbb{S}$ and $P\in\mathcal{V}_u$. Therefore, the proof of Theorem \ref{generic G2} is now completed by taking $\mathcal{W}=\mathcal{U}_l\cap\mathcal{U}_u\cap\mathcal{V}_l\cap\mathcal{V}_u$. \end{proof} \subsection{Proof of Theorems \ref{main theorem} and \ref{main thm2}} Now, we are able to prove our main results. Let $R>1$, $\alpha>1$ and $0<\mathbf L\leq\mathbf L_0=\mathbf L_0(H_0,\alpha,R)$, where the constant $\mathbf L_0$ is given in \eqref{bold_L_0} and is independent of $H_1$.
\begin{proof} In our problem, $M=\mathbb T^2$, $s>0$, $y_\ell\in[-R+1,R-1]\times\{0\}$, $\ell\in\{1,\cdots,k\}$. Let $\varepsilon_0$ be a small positive number satisfying $$\varepsilon_0<\min\{1,\mathbf L^{\alpha},\frac{\mathbf L^{2\alpha}}{2!^\alpha},\frac{\mathbf L^{3\alpha}}{3!^\alpha}\}\varepsilon_1,$$ where $\varepsilon_1$ is chosen as in Lemma \ref{minimal set on cylinder}. Now, $\|H_1\|_{\alpha,\mathbf L}<\varepsilon_0$ implies $\|H_1\|_{C^3}<\varepsilon_1$, and hence the Hamiltonian $H=H_0+H_1$ has a persistent normally hyperbolic invariant cylinder (NHIC), and the globally minimal set $\widetilde{\mathcal G}_L(c)$ lies in this NHIC for each $c=(c_1,0)$ with $|c_1|\leq R-1$. Here, $L=L_0+L_1$ is the Lagrangian associated to $H=H_0+H_1$. Also, observe that for the Ma\~n\'e set $\widetilde{\mathcal N}_{L_0}(c)$, \begin{equation}\label{proj_N_L0} \pi_p\circ\mathscr L^{-1}\widetilde{\mathcal N}_{L_0}(c)=c, \quad\text{~for each~} c=(c_1,0), |c_1|\leq R-1. \end{equation} By letting $\varepsilon_0$ be small enough, the perturbation term $L_1$ is also sufficiently small, so the upper semi-continuity (see Proposition \ref{upper semi}) implies that for each $c=(c_1,0)$, $|c_1|\leq R-1$, \begin{equation}\label{dist_N_L_and_N_L0} d(\pi_p\circ\mathscr L^{-1}\widetilde{\mathcal N}_{L}(c),\pi_p\circ\mathscr L^{-1}\widetilde{\mathcal N}_{L_0}(c))< s/2, \quad\text{whenever~} \|H_1\|_{\alpha,\mathbf L}<\varepsilon_0. \end{equation} With this fact, one finds that $\varepsilon_0=\varepsilon_0(H_0,\alpha,R,s,\mathbf L)>0$ depends only on the Hamiltonian $H_0$ and the constants $\alpha$, $R$, $\mathbf L$ and $s$.
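Let us also indicate, for the reader's convenience, why the smallness condition on $\varepsilon_0$ at the beginning of the proof yields the implication $\|H_1\|_{\alpha,\mathbf L}<\varepsilon_0\Rightarrow\|H_1\|_{C^3}<\varepsilon_1$; here we assume the standard convention that the Gevrey norm controls each derivative via $\|\partial^{\beta}H_1\|_{C^0}\leq \frac{|\beta|!^{\alpha}}{\mathbf L^{|\beta|\alpha}}\|H_1\|_{\alpha,\mathbf L}$ for every multi-index $\beta$. Indeed, for $|\beta|=k\leq 3$,
\begin{equation*}
\|\partial^{\beta}H_1\|_{C^0}
\leq \frac{k!^{\alpha}}{\mathbf L^{k\alpha}}\,\|H_1\|_{\alpha,\mathbf L}
< \frac{k!^{\alpha}}{\mathbf L^{k\alpha}}\cdot\min\Big\{1,\mathbf L^{\alpha},\frac{\mathbf L^{2\alpha}}{2!^{\alpha}},\frac{\mathbf L^{3\alpha}}{3!^{\alpha}}\Big\}\,\varepsilon_1
\leq \varepsilon_1,
\end{equation*}
since the $k$-th factor in the minimum exactly cancels $\frac{k!^{\alpha}}{\mathbf L^{k\alpha}}$.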
\textbf{Density:} For each Hamiltonian $H_0+H_1$ with $\|H_1\|_{\alpha,\mathbf L}<\varepsilon_0,$ we will prove that there exists an arbitrarily small perturbation $V\in\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T)$ such that $\|H_1+V\|_{\alpha,\mathbf L}<\varepsilon_0$, and the perturbed Hamiltonian $H_0+H_1+V$ has an orbit $(q(t),p(t))$ and times $t_1<\cdots<t_k$ such that the action variables $p(t)$ pass through the ball $B_s(y_\ell)$ at the time $t=t_\ell$. To this end, we will establish a generalized transition chain along which one is able to apply Theorem \ref{generalized transition thm}. Let $d\in(0, \varepsilon_0-\|H_1\|_{\alpha,\mathbf L})$ be arbitrarily small. $\bullet$ First, by applying the genericity property in Corollary \ref{corgeneric G1} to the Tonelli Lagrangian $L_0+L_1$, one can always choose a small perturbation \[\phi\in\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T), \quad \|\phi\|_{\alpha,\mathbf L}<\frac{d}{2}\] such that for each rational homology class $h=(\frac{p}{q},0)$, the perturbed Lagrangian $L_0+L_1+\phi$ has only one minimal measure with the rotation vector $h$. Next, for the irrational case, it is well known in Aubry-Mather theory that for homology $h=(h_1,0)$ with $h_1\in\mathbb R\setminus\mathbb Q$, only one minimal measure with the rotation vector $h$ exists. Thus it is easily seen that the Aubry class is unique for each $c=(c_1,0)$ with $|c_1|\leq R-1$, as a result of property \eqref{ssff}. Then the uniqueness of Aubry class implies \begin{equation}\label{faf} \widetilde{\mathcal A}_{L_0+L_1+\phi}(c)=\widetilde{\mathcal N}_{L_0+L_1+\phi}(c). 
\end{equation} By the Legendre transformation, the associated Hamiltonian is exactly $H_0+H_1-\phi$ with $\|H_1-\phi\|_{\alpha,\mathbf L}<\varepsilon_0$, so \eqref{dist_N_L_and_N_L0} and \eqref{faf} imply that \begin{equation}\label{proj_A_L0} \pi_p\circ\mathscr L^{-1}\widetilde{\mathcal A}_{L_0}(c)=c, \quad d\left(\pi_p\circ\mathscr L^{-1}\widetilde{\mathcal A}_{L_0+L_1+\phi}(c),~\pi_p\circ\mathscr L^{-1}\widetilde{\mathcal A}_{L_0}(c)\right)< s/2, \end{equation} for each $c=(c_1,0)$, $|c_1|\leq R-1$. Recall that the set $\widetilde{\mathcal N}_{L_0+L_1+\phi}(c)\big|_{t=0}$ lies in the NHIC, and there are two cases: it is homologically trivial, or it is not. In the homologically trivial case, it is well known that the $c$-equivalence holds inside $(c_1-\delta_c,c_1+\delta_c)\times\{0\}$ for some small $\delta_c>0$, which satisfies condition (1) in Definition \ref{transition chain}. In the latter case, $\widetilde{\mathcal N}_{L_0+L_1+\phi}(c)\big|_{t=0}$ must be an invariant curve as a result of \eqref{faf}. Then, as in \eqref{buianquandeshangtongdiao}, we define the set $$\mathbb{S}:=\{ (c_1,0) ~:~|c_1|\leq R-1, \Upsilon_c\textup{~is an invariant circle on the NHIC}\}.$$ Applying Theorem \ref{generic G2} to $L_0+L_1+\phi$, we can find a small potential perturbation with compact support \[P\in\mathbf G^{\alpha,\mathbf L}(M\times\mathbb T),\quad \|P\|_{\alpha,\mathbf L}<\frac{d}{2},\] such that the Lagrangian $L_0+L_1+\phi+P: T\check{M}\times\mathbb T\to\mathbb R$, defined on the double covering space, satisfies: for all $c\in\mathbb{S}$, the sets $$\arg\min B_{c,l,u}\big|_{\mathrm U_{\kappa,l}\cup\mathrm U_{\kappa,u}},\quad\arg\min B_{c,u,l}\big|_{\mathrm U_{\kappa,l}\cup\mathrm U_{\kappa,u}}$$ are both totally disconnected.
Then Propositions \ref{manejifenlei} and \ref{double description} together yield that for each $c\in\mathbb{S}$, there exists a small $\delta_c>0$ such that the set $$\check{\pi}\mathcal N(c,\check{M})\big|_{t=0}\setminus(\mathcal A(c,M)\big|_{t=0}+\delta_c)$$ is totally disconnected, which satisfies condition (2) in Definition \ref{transition chain}. The corresponding Hamiltonian is exactly $H_0+H_1-\phi-P$, where $\|\phi+P\|_{\alpha,\mathbf L}<d$ and $\|H_1-\phi-P\|_{\alpha,\mathbf L}<\varepsilon_0$. $\bullet$ Next, we take \[V=-\phi-P.\] Then the arguments above imply that there exists a generalized transition chain inside $[-R+1,R-1]\times\{0\}\subset H^1(M,\mathbb R)$ for the Lagrangian $L_0+L_1-V$. Thus, we conclude from Theorem \ref{generalized transition thm} and \eqref{proj_A_L0} that the perturbed Hamiltonian $H_0+H_1+V$ has an orbit $(q(t),p(t))$ whose action variables $p(t)$ pass through the ball $B_s(y_\ell)$ at time $t=t_\ell$, where $t_1<t_2<\cdots<t_k$. Finally, thanks to $\|V\|_{\alpha,\mathbf L}<d$ and the arbitrariness of $d$, we complete the proof of density in $\mathfrak B^{\mathbf L}_{\varepsilon_0,R}$. \textbf{Openness:} It only remains to verify the openness. Since the aforementioned trajectory $(q(t),p(t))$ passes through the balls $B_s(y_1)$, $\cdots$, $B_s(y_k)$ in finite time, the smooth dependence of solutions of ODEs on parameters guarantees the openness in $\mathfrak B^{\mathbf L}_{\varepsilon_0,R}$. Theorem \ref{main theorem} is now evident from what we have proved. Notice that in the proof of the density part above, the perturbation we constructed is a Gevrey potential function. Combined with the obvious openness property, this proves Theorem \ref{main thm2} as well. \end{proof} \appendix \section{Normally hyperbolic theory}\label{appendix_NHIC} In this appendix, we review some classical results in the theory of normally hyperbolic manifolds.
We give only a restricted introduction adapted to our problem, and refer the reader to \cite{Fen1971,Fen1977,HPS1977,Pesin04} for the proofs and more detailed accounts. \begin{Def}\label{Def NHIM} Let $M$ be a smooth Riemannian manifold and $f: M\to M$ be a $C^r (r\geq 1)$ diffeomorphism. Let $N\subset M$ be a submanifold (possibly with boundary) which is invariant under $f$. Then $N$ is called a normally hyperbolic invariant manifold (NHIM) if there is an $f$-invariant tangent bundle splitting such that, for every $x\in N$ $$T_xM=T_xN\oplus E_x^s\oplus E_x^u,$$ and there exist a constant $C>0$, rates $0<\lambda<1<\mu$ with $\lambda\mu<1$ such that \begin{equation}\label{hyp splitting} \begin{split} v\in T_xN & \Longleftrightarrow \|Df^k(x)v\|\leq C\mu^{|k|}\|v\| , \quad k\in\mathbb Z,\\ v\in E^s_x & \Longleftrightarrow \|Df^k(x)v\|\leq C\lambda^k\|v\|, \quad k\geq0,\\ v\in E^u_x & \Longleftrightarrow \|Df^k(x)v\|\leq C\lambda^{|k|}\|v\|, \quad k\leq0.\\ \end{split} \end{equation} \end{Def} In what follows, $N$ is assumed to be compact and connected. Let $U$ be a tubular neighborhood of the NHIM $N$. In both \cite{Fen1971} and \cite{HPS1977} the existence of local stable and unstable manifolds in $U$, denoted by $W_N^{s,loc}$ and $W_N^{u,loc}$ respectively, is obtained by using the method of Hadamard's graph transform. Moreover, the local stable and unstable manifolds can be characterized as follows: \begin{equation}\label{definition of local stable} \begin{split} W_N^{s,loc} &=\{y\in U ~|~ \textup{dist}(f^k(y),N)\leq \tilde{C}_{y}(\lambda+\tilde{\varepsilon})^k, \textup{~for all~} k\geq0 \}, \\ W_N^{u,loc} &=\{y\in U ~|~ \textup{dist}(f^k(y),N)\leq \tilde{C}_{y}(\lambda+\tilde{\varepsilon})^{|k|}, \textup{~for all~} k\leq0 \}, \end{split} \end{equation} where the constant $\tilde{C}_{y}>0$, and $\tilde{\varepsilon}>0$ is a small constant satisfying $\lambda+\tilde{\varepsilon}<1/\mu$.
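A simple model illustrating Definition \ref{Def NHIM} is the product map
\[f(\theta,x,y)=(\theta+\omega,\;\lambda x,\;\lambda^{-1}y)\]
on $M=\mathbb T\times\mathbb R^2$, where $0<\lambda<1$ and $\omega\in\mathbb R$. The invariant circle $N=\mathbb T\times\{(0,0)\}$ is a NHIM: the splitting is given by the coordinate directions $\theta$, $x$ and $y$, and \eqref{hyp splitting} holds with $C=1$ and any rate $\mu\in(1,\lambda^{-1})$, so that $\lambda\mu<1$. In this case $W_N^{s,loc}=\{y=0\}$ and $W_N^{u,loc}=\{x=0\}$ in a tubular neighborhood of $N$, in agreement with the characterization \eqref{definition of local stable}.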
For each $x\in N$, the corresponding local stable and unstable leaves are defined as follows: \begin{equation}\label{definition of local stable leaf} \begin{split} W_x^{s,loc} &=\{y\in U ~|~ \textup{dist}(f^k(x),f^k(y))\leq \tilde{C}_{x,y}(\lambda+\tilde{\varepsilon})^k, \textup{~for all~} k\geq0 \}, \\ W_x^{u,loc} &=\{y\in U ~|~ \textup{dist}(f^k(x),f^k(y))\leq \tilde{C}_{x,y}(\lambda+\tilde{\varepsilon})^{|k|}, \textup{~for all~} k\leq0 \}, \end{split} \end{equation} where $\tilde{C}_{x,y}>0$ is a constant. Then we have the following properties (see \cite{HPS1977}): \begin{The}\label{property of NHIM} Let $N$ be a NHIM as in Definition \ref{Def NHIM}, and define the integer $l:=\max\{k\in\mathbb Z~|~ 1\leq k\leq r \text{~and~} k<\frac{|\log\lambda|}{\log\mu}\}$. Then \begin{enumerate}[\rm(1)] \item $N$, $W_N^{s,loc}$ and $W_N^{u,loc}$ are $C^{l}$ manifolds. For each $x\in N$, the manifolds $W_x^{s,loc}$ and $W_x^{u,loc}$ are $C^{r}$ and $T_xW_x^{s,loc}=E_x^s$, $T_xW_x^{u,loc}=E_x^u$. \item $W_N^{s,loc}$ and $W_N^{u,loc}$ are foliated by the stable and unstable leaves respectively, i.e. \[W_N^{s,loc}=\bigcup_{x\in N} W_x^{s,loc}, \quad W_N^{u,loc}=\bigcup_{x\in N} W_x^{u,loc}.\] Moreover, if $x\neq x'$, then $W_x^{s,loc}\bigcap W_{x'}^{s,loc}=\emptyset$ and $W_x^{u,loc}\bigcap W_{x'}^{u,loc}=\emptyset.$ \item The unstable foliation $\{W^{u, loc}_x : x\in N\}$ is $C^l$ in the sense that $\bigcup_{x\in N}T^k_x W^{u, loc}_x$ is a continuous bundle for each $1\leq k\leq l$, where $T^k$ denotes the $k$-th order tangent. An analogous result holds for the stable foliation.
\end{enumerate} \end{The} \begin{Rem} \rm{(1)}: We point out that for the models studied in the current paper, the dynamics on $N$ is close to integrable, so the rate $\mu$ can be taken close to $1$ and the smoothness obtained is as high as that of the time-$1$ map.\\ \rm{(2)}: We can also define the global stable (unstable) sets $W_N^{s,u}$ and $W_x^{s,u}$, just by replacing $U$ with $M$ in \eqref{definition of local stable}, \eqref{definition of local stable leaf}. But $W_N^{s,u}, W_x^{s,u}$ may fail to be embedded manifolds. \end{Rem} Normal hyperbolicity is stable under perturbations: roughly speaking, the normally hyperbolic invariant manifold persists under small perturbations. \begin{The}[Persistence of normally hyperbolic invariant manifolds]\label{persistence} Suppose that $N\subset M$ is a NHIM for the $C^r$ $(r\geq 1)$ diffeomorphism $f$ and $\varepsilon>0$ is sufficiently small. Then for any $C^r$ diffeomorphism $f_\varepsilon: M\to M$ satisfying $\|f_\varepsilon-f\|_{C^1}<\varepsilon$, there exists a NHIM $N_\varepsilon$ that is $C^l$-diffeomorphic and close to $N$, where $l=\max\{k : 1\leq k\leq r \text{~and~} k<\frac{|\log\lambda|}{\log\mu}\}$. Moreover, the local stable manifold $W^{s,loc}_{N_\varepsilon}$ and local unstable manifold $W^{u,loc}_{N_\varepsilon}$ are $C^l$ close to those of $N$. \end{The} \section{Variational construction of global connecting orbits}\label{sec_proof_of_connectingthm} The goal of this section is to prove Theorem \ref{generalized transition thm}, which can be achieved by modifying the arguments and techniques in \cite{CY2009}. We also refer the reader to \cite{Ch2018} or \cite{Ch2012} for more details. Throughout this section, we assume $M=\mathbb T^n$. Our diffusing orbits are constructed by shadowing a sequence of local connecting orbits, along each of which the Lagrangian action attains a ``local minimum''. \subsection{Local connecting orbits}\label{sec localconnect} Let $d\gamma(t)=(\gamma(t), \dot{\gamma}(t))$.
An orbit $(d\gamma(t),t):\mathbb R\to TM\times\mathbb T$ is said to connect one Aubry set $\widetilde{\mathcal A}(c)$ to another one $\widetilde{\mathcal A}(c')$ if the $\alpha$-limit set of the orbit is contained in $\widetilde{\mathcal A}(c)$ and the $\omega$-limit set is contained in $\widetilde{\mathcal A}(c')$. We will introduce two types of local connecting orbits, type-$c$ and type-$h$: the former corresponds to Mather's cohomology equivalence, while the latter corresponds to the variational interpretation of Arnold's mechanism. Before that, we need some preparations. \subsubsection{Time-step Lagrangian and upper semi-continuity} Both types of local connecting orbits depend on the upper semi-continuity of minimal curves of a modified $C^r$ Lagrangian $L^*:T\mathbb T^n\times\mathbb R\rightarrow\mathbb R$ which is defined as follows: let $L^+,L^-$ be two time-1 periodic Tonelli Lagrangians, \begin{equation*} L^*(\cdot,t):=\begin{cases} L^-(\cdot,t), & ~t\in(-\infty,0]\\ L^+(\cdot,t), & ~t\in[1,+\infty), \end{cases} \end{equation*} and $L^*$ is superlinear and positive definite in the fibers. Notice that $L^*$ is not periodic in time $t$; instead, it is periodic when restricted to either $(-\infty,0]$ or $[1,+\infty)$. We call such a modified Lagrangian $L^*$ a \emph{time-step} Lagrangian. For a \emph{time-step} Lagrangian $L^*$, a curve $\gamma:\mathbb R\rightarrow\mathbb T^n$ is called minimal if for any $t<t^\prime\in\mathbb R$, \begin{equation*} \int_t^{t^\prime}L^*(\gamma(s),\dot{\gamma}(s),s)\,ds=\min\limits_{\substack{\zeta(t)=\gamma(t),\zeta(t^\prime)=\gamma(t^\prime)\\ \zeta\in C^{ac}([t,t^\prime],\mathbb T^n)}}\int_t^{t^\prime}L^*(\zeta(s),\dot{\zeta}(s),s)\,ds. \end{equation*} We denote by $\mathscr G(L^*)$ the set of all minimal curves and $\widetilde{\mathscr G}(L^*)=\bigcup_{\gamma\in\mathscr G(L^*)}(\gamma(t),\dot{\gamma}(t),t)$. Let $\alpha^\pm$ denote Mather's minimal average action of $L^\pm$.
For $m_0, m_1\in \mathbb T^n$ and $T_0,T_1\in\mathbb Z_+$, we define \begin{equation*} h^{T_0,T_1}_{L^*}(m_0,m_1):=\inf\limits_{\substack{\gamma(-T_0)=m_0,\gamma(T_1)=m_1\\ \gamma\in C^{ac}([-T_0,T_1],\mathbb T^n)}}\int_{-T_0}^{T_1}L^*(\gamma(t),\dot{\gamma}(t),t)\,dt+T_0\alpha^-+T_1\alpha^+, \end{equation*} and \begin{equation*} h_{L^*}^{\infty}(m_0,m_1):=\liminf_{T_0,T_1\to+\infty}h_{L^*}^{T_0,T_1}(m_0,m_1), \end{equation*} which are bounded. We take any two sequences of positive integers $\{T_0^i\}_{i\in\mathbb Z_+}$ and $\{T_1^i\}_{i\in\mathbb Z_+}$ with $T_\ell^i\rightarrow+\infty$ ($\ell=0,1$) as $i\to+\infty$, and the associated minimal curves $\gamma_i(t): [-T^i_0,T^i_1]\to \mathbb T^n$ connecting $m_0$ to $m_1$ such that $$ h_{L^*}^{\infty}(m_0,m_1)=\lim_{i\to\infty}h_{L^*}^{T_0^i,T_1^i}(m_0,m_1)=\lim_{i\to\infty}\int_{-T_0^i}^{T_1^i}L^*(\gamma_i(t),\dot{\gamma}_i(t),t)\,dt +T^i_0\alpha^-+T^i_1\alpha^+. $$ The following lemma shows that any accumulation point $\gamma$ of $\{\gamma_i\}_i$ is a pseudo curve playing a role analogous to that of a semi-static curve. For the proof, see \cite{CY2004} or \cite{CY2009}. \begin{Lem}\label{pseudo curve} Let $\gamma$: $\mathbb R\to \mathbb T^n$ be an accumulation point of $\{\gamma_i\}_i$ as shown above. Then for any $s\geq 0$, $t\geq 1$, \begin{equation}\label{pseudo curve formula} \begin{split} \int_{-s}^{t} L^*(\gamma(\tau),\dot{\gamma}(\tau),\tau)\,d\tau+s\alpha^-+t\alpha^+=\inf\limits_{\substack{\xi(-s_1)=\gamma(-s)\\\xi(t_1) =\gamma(t) \\ s_1-s\in\mathbb Z, ~t_1-t\in\mathbb Z\\ s_1\geq 0,~t_1\geq 1}}\int_{-s_1}^{t_1} L^*(\xi(\tau),\dot{\xi}(\tau),\tau)\,d\tau+s_1\alpha^-+t_1\alpha^+, \end{split} \end{equation} where the infimum is taken over all absolutely continuous curves. \end{Lem} This leads us to define the set of \emph{pseudo connecting curves} \begin{equation*} \mathscr C(L^*):=\{\gamma |~\gamma\in\mathscr G(L^*) \text{~and~} \eqref{pseudo curve formula}\text{~holds}\}.
\end{equation*} Clearly, for each $\gamma\in\mathscr C(L^*)$ the orbit $(\gamma(t),\dot\gamma(t),t)$ approaches the Aubry set $\widetilde{\mathcal A}(L^-)$ of the Lagrangian $L^-$ in negative time and approaches the Aubry set $\widetilde{\mathcal A}(L^+)$ of $L^+$ in positive time. This is why we call it a pseudo connecting curve. Define the following sets $$ \widetilde{\mathcal C}(L^*):=\bigcup_{\gamma\in\mathscr C(L^*)}(\gamma(t),\dot\gamma(t),t),\qquad \mathcal C(L^*):=\bigcup_{\gamma\in\mathscr C(L^*)}(\gamma(t),t). $$ Notice that if $L^*$ is time-1 periodic, then $\widetilde{\mathcal C}(L^*)$ is exactly the Ma\~n\'e set and $\mathcal C(L^*)$ is exactly the projected Ma\~n\'e set. Then we can prove the following property: \begin{Pro}\label{uppersemi of pseudo curves} The set-valued map $L^*\mapsto\mathscr C(L^*)$ is upper semi-continuous, namely if $L^*_i\to L^*$ in the $C^2$ topology, then we have the set inclusion $$ \limsup_i \mathscr C(L^*_i)\subset \mathscr C(L^*).$$ Consequently, the map $L^*\mapsto\widetilde{\mathcal C}(L^*)$ is also upper semi-continuous. \end{Pro} \begin{proof} Let $L^*_i\to L^*$ in the $C^2$ topology, and suppose that $\gamma_i\in\mathscr C(L^*_i)$ converges $C^0$-uniformly to a curve $\gamma$ on each compact interval of $\mathbb R$. We claim that $\gamma\in\mathscr C(L^*)$. Indeed, there exists $K>0$ such that $\|\dot{\gamma}_i(t)\|\leq K$ for all $t\in\mathbb R$, so the set $\{\gamma_i\}_{i}$ is compact in the $C^1$ topology. Since each $\gamma_i$ satisfies the Euler-Lagrange equation of $L^*_i$, by using the positive definiteness of $L^*_i$, one can write the Euler-Lagrange equation in the form of $\ddot{x}=f_i(x,\dot{x},t)$ for some $f_i$, which implies that $\{\gamma_i\}_{i}$ is compact in the $C^2$ topology. By the Arzel\`{a}-Ascoli theorem, extracting a subsequence if necessary, we can assume that $\gamma_i$ converges $C^1$-uniformly to a $C^1$ curve $\eta$ on each compact interval of $\mathbb R$.
Obviously, $\eta=\gamma.$ Next, if $\gamma\notin\mathscr C(L^*)$, there would exist some $s\geq 0, t\geq 1$, a curve $\widetilde{\gamma}:[-s-n_1, t+n_2]\to M$ and $\delta>0$ such that the action $$\int_{-s-n_1}^{t+n_2}L^*(\widetilde{\gamma}(\tau),\dot{\widetilde{\gamma}}(\tau),\tau)\,d\tau+n_1\alpha^-+n_2\alpha^+\leq \int_{-s}^{t}L^*(\gamma(\tau),\dot{\gamma}(\tau),\tau)\,d\tau-\delta$$ where $s, s+n_1\geq 0$, $t, t+n_2\geq 1$ and $\widetilde{\gamma}(-s-n_1)=\gamma(-s)$, $\widetilde{\gamma}(t+n_2)=\gamma(t)$. Since we have shown that $\gamma$ is an accumulation point of $\gamma_i$ in the $C^1$ topology, for any small $\varepsilon>0$, there would be a sufficiently large $i$ such that $\|\gamma-\gamma_i\|_{C^1([-s,t])}\leq\varepsilon$ and a curve $\widetilde{\gamma}_i: [-s-n_1, t+n_2]\to M$ with $\widetilde{\gamma}_i(-s-n_1)=\gamma_i(-s)$, $\widetilde{\gamma}_i(t+n_2)=\gamma_i(t)$ such that $$\int_{-s-n_1}^{t+n_2}L_i^*(\widetilde{\gamma}_i(\tau),\dot{\widetilde{\gamma}}_i(\tau),\tau)\,d\tau+n_1\alpha^-+n_2\alpha^+\leq \int_{-s}^{t}L_i^*(\gamma_i(\tau),\dot{\gamma}_i(\tau),\tau)\,d\tau-\frac{\delta}{2}.$$ By \eqref{pseudo curve formula}, $\gamma_i\notin\mathscr C(L^*_i)$, which is a contradiction. This proves $\gamma\in\mathscr C(L^*)$. Finally, the upper semi-continuity of $L^*\mapsto\widetilde{\mathcal C}(L^*)$ is a consequence of what we have shown above. \end{proof} \subsubsection{Local connecting orbits of type-{\it c}}\label{sub localc} Under the condition of cohomology equivalence (see Definition \ref{def_c_equivalenve}), we will show how to construct local connecting orbits based on Mather's variational mechanism. This idea of construction was first proposed by J. N. Mather in \cite{Ma1993}. \begin{The}\label{clemma connect} Let $L:T\mathbb T^n\times\mathbb T\to\mathbb R$ be a Tonelli Lagrangian and let $c, c'\in H^1(\mathbb T^n,\mathbb R)$ be cohomologically equivalent through a path $\Gamma:[0,1]\to H^1(\mathbb T^n,\mathbb R)$.
Then there exist $c=c_0, c_1, \dots, c_k=c'$ on the path $\Gamma$, closed 1-forms $\eta_i$ and $\bar{\mu}_i$ on $M$ with $[\eta_i]=c_i$, $[\bar{\mu}_i]=c_{i+1}-c_i$, and a smooth function $\rho_i(t):\mathbb R\to[0,1]$ for each $i=0,1,\cdots,k-1$, such that the time-step Lagrangian \begin{equation*} L_{\eta_i,\mu_i}=L-\eta_i-\mu_i,\quad\text{with}\quad \mu_i=\rho_i(t)\bar{\mu}_i \end{equation*} possesses the following property: each curve $\gamma\in \mathscr C(L_{\eta_i,\mu_i})$ determines a trajectory $(d\gamma(t),t)$ of the Euler-Lagrange flow $\phi^t_L$ connecting $\widetilde{\mathcal A}(c_i)$ to $\widetilde{\mathcal A}(c_{i+1})$. \end{The} \begin{proof} By Definition \ref{def_c_equivalenve}, there exist $c=c_0, c_1, \dots, c_k=c'$ on the path $\Gamma$, closed 1-forms $\eta_i$ and $\bar{\mu}_i$ on $M$ with $[\eta_i]=c_i$, $[\bar{\mu}_i]=c_{i+1}-c_i\in \mathbb{V}^{\bot}_{c_i}$ for each $i=0,1,\cdots,k-1$. By the arguments in Section \ref{sec local and global}, there is also a neighborhood $U_i$ of the projected Ma\~n\'e set $\mathcal N_0(c_i)$ such that $\mathbb{V}_{c_i}=i_{*U_i}H_1(U_i,\mathbb R)$. In particular, we can suppose $\bar{\mu}_i=0$ on $U_i$. Indeed, as $[\bar{\mu}_i]\in \mathbb{V}^{\bot}_{c_i}$, $\bar{\mu}_i$ is exact when restricted to $U_i$ and there is a smooth function $f:M\to\mathbb R$ satisfying $df=\bar{\mu}_i$ on $U_i$, and hence we can replace $\bar{\mu}_i$ by $\bar{\mu}_i-df$. As $\mathcal N_0(c_i)\subset U_i$, there exists $\delta_i\ll 1$ such that $\mathcal N_t(c_i)\subset U_i$ for all $t\in[0, \delta_i]$. Let $\rho_i:\mathbb R\to [0,1]$ be a smooth function such that $\rho_i(t)=0$ for $t\in(-\infty, 0]$ and $\rho_i(t)=1$ for $t\in[\delta_i,+\infty)$.
We set $\mu_i=\rho_i(t)\bar{\mu}_i$ and introduce a \emph{time-step} Lagrangian $$L_{\eta_i,\mu_i}=L-\eta_i-\mu_i: T\mathbb T^n\times\mathbb R\to\mathbb R.$$ For each curve $\gamma\in\mathscr C(L_{\eta_i,\mu_i})$, by the upper semi-continuity in Proposition \ref{uppersemi of pseudo curves}, \begin{equation}\label{belong typec} \gamma(t)\in U_i,~\forall~t\in[0,\delta_i] \end{equation} holds provided that $|\bar{\mu}_i|$ is small enough. Clearly, $(\gamma,\dot{\gamma})$ solves the Euler-Lagrange equation of $L_{\eta_i,\mu_i}$. To verify that it also solves the Euler-Lagrange equation of $L$, note that $\gamma\big|_{[0,\delta_i]}\subset U_i$ and $L_{\eta_i,\mu_i}=L-\eta_i$ on $U_i$ where $\eta_i$ is a closed 1-form, so $\gamma(t)$ solves the Euler-Lagrange equation of $L$ for $t\in[0,\delta_i]$. On the other hand, for $t\in(-\infty, \delta_i]$ we have $L_{\eta_i,\mu_i}=L-\eta_i$, so $\gamma(t)$ is a $c_{i}$-semi static curve of $L$ on the interval $(-\infty, \delta_i]$. Similarly, $\gamma(t)$ is a $c_{i+1}$-semi static curve of $L$ for $t\in[\delta_i,+\infty)$. Thus, $(\gamma(t),\dot{\gamma}(t)): \mathbb R\to T\mathbb T^n$ solves the Euler-Lagrange equation of $L$, and by Section \ref{sec_Preliminaries}, this orbit connects $\widetilde{\mathcal A}(c_i)$ to $\widetilde{\mathcal A}(c_{i+1})$. \end{proof} \subsubsection{Local connecting orbits of type-{\it h}}\label{sub localh} Next, we discuss the so-called local connecting orbits of type-$h$, which can be thought of as a variational version of Arnold's mechanism: the condition of geometric transversality is replaced by the total disconnectedness of the set of minimal points of the barrier function. It is used to handle the situation where the cohomology equivalence does not always exist. Usually, it is applied to the case where the Aubry set lies in a neighborhood of a lower-dimensional torus; in that case, we let $\check{\pi}:\check{M}\rightarrow \mathbb T^n$ be a finite covering of $\mathbb T^n$.
Denote by $\widetilde{\mathcal N}(c,\check{M}), \widetilde{\mathcal A}(c,\check{M})$ the Ma\~{n}\'{e} set and Aubry set with respect to $\check{M}$; then $\widetilde{\mathcal A}(c,\check{M})$ may have more than one Aubry class. In fact, the construction of type-$h$ local connecting orbits in our proofs of Theorems \ref{main theorem} and \ref{main thm2} only involves two Aubry classes (see Section \ref{sec proof main}). Thus, we only need to deal with the situation where the Tonelli Lagrangian $L: T\mathbb T^n\times\mathbb T\rightarrow \mathbb R$ admits more than one Aubry class. Let $\mathcal A(c)\big|_{t=0} $ denote the time-0 section of the projected Aubry set $\mathcal A(c)$, i.e. $\mathcal A(c)\bigcap(\mathbb T^n\times\{t=0\})$; then we obtain the local connecting orbits of type-$h$ as follows: \begin{The}\label{connecting type h} Let the projected Aubry set $\mathcal A(c)=\mathcal A_{c,1}\cup\cdots \cup\mathcal A_{c,k}$ consist of $k$ $(k\geq 2)$ Aubry classes. Let $U:=\mathbb T^n\setminus\big(\mathcal A(c)|_{_{t=0}}+\kappa\big)$ be an open set where $\kappa>0$ is small, and suppose that $U\bigcap (\mathcal N(c)|_{t=0})$ is non-empty and totally disconnected. Then for any $c'$ sufficiently close to $c$, there exists an orbit of the Euler-Lagrange flow $\phi^t_L$ whose $\alpha$-limit set lies in $\widetilde{\mathcal A}(c)$ and $\omega$-limit set lies in $\widetilde{\mathcal A}(c')$. \end{The} \begin{proof} As the number of Aubry classes of $\mathcal A(c)$ is finite, it is well known that if $c'$ is sufficiently close to $c$, the projected Aubry set $\mathcal A(c')$ will be contained in a small neighborhood of $\mathcal A(c)$, see e.g. \cite{Be2010On}.
Since the Aubry classes are compact and pairwise disjoint, we have $\text{\rm dist}(\mathcal A_{c,i},\mathcal A_{c,i'})>0$ for any $i\neq i'$, and there exist open neighborhoods $N_1,\cdots,N_k \subset M$ such that $\mathcal A_{c,i}\big|_{t=0}\subset N_i$ for each $1\leq i\leq k$ and $\textup{dist}(N_i,N_{i'})>0$ for $i\neq i'$. Thus, $\mathcal A(c')|_{t=0}\subset \bigcup_i N_i$. From Proposition \ref{manejifenlei} and definition \eqref{ij maneorbit} we know $\mathcal N(c)=\bigcup_{i,i'}\mathcal N_{i,i'}(c)$, and hence there is a pair $(j, j')$ such that $\mathcal A(c')|_{t=0}\cap N_{j'}\neq\emptyset$ and $U\cap \mathcal N_{j,j'}(c)\neq\emptyset$. By the total disconnectedness assumption, we can find simply connected open sets $F$ and $O$ such that $F\subset O\subset U$, $\textup{dist}(O,\bigcup_{i=1}^k\bar{N}_i)>0$ and $\emptyset\neq O\bigcap\big(\mathcal N_{j,j'}(c)\big|_{t=0}\big)\subset F$. Then some $\delta>0$ exists such that \begin{equation}\label{belongrelation} O\bigcap\big(\mathcal N_{j,j'}(c)\big|_{0\leq t\leq\delta}\big)\subset F. \end{equation} Let $\eta$ and $\bar{\mu}$ be closed 1-forms such that $[\eta]=c$, $[\bar{\mu}]=c'-c$, and let $\rho:\mathbb R\to [0, 1]$ be a smooth function such that $\rho(t)=0$ for $t\leq 0$ and $\rho(t)=1$ for $t\geq\delta$. Note that by the simple connectedness of $O$, we are able to choose $\bar{\mu}$ such that $\textup{supp}\bar{\mu}\cap \bar{O}=\emptyset$. Next, we construct a smooth function $\psi(x,t)=\varepsilon\psi_1(x)\psi_2(t):M\times\mathbb T\to [-1, 1]$, where $\varepsilon>0$, such that \begin{equation*} \psi_1(x)\left\{ \begin{array}{ll} =1, & x\in \bar{F}, \\ <1, & x\in O\setminus F,\\ <0, & x\in\bigcup\limits_{i\neq j,j'}N_i, \\ =0, & \textup{elsewhere,} \end{array} \right. \end{equation*} and \begin{equation*} \psi_2(t)\left\{ \begin{array}{ll} >0, & t\in (0,\delta), \\ =0, & t\in(-\infty,0]\cup[\delta,+\infty). \end{array} \right.
\end{equation*} Then we set $\mu=\rho(t)\bar{\mu}$ and introduce a time-step Lagrangian $$L_{\eta,\mu,\psi}=L-\eta-\mu-\psi: T\mathbb T^n\times\mathbb R\to\mathbb R.$$ Let us first suppose $\mu=0$. Since $\psi(x,t)=0$ for $(x,t)\in\mathcal A_{c,j}\cup\mathcal A_{c,j'}$, and $\psi(x,t)<0$ for $(x,t)\in\bigcup\limits_{i\neq j,j'}N_i$ with $t\in(0,\delta)$, the Lagrangian $L_{\eta,0,\psi}$ contains only two Aubry classes which are exactly $\mathcal A_{c,j}$ and $\mathcal A_{c,j'}$ provided $\varepsilon>0$ is small enough. The set $\mathscr C(L_{\eta,0,\psi})$ then satisfies: \begin{enumerate}[(a)] \item $\mathcal A_{c,j}\cup\mathcal A_{c,j'}\subset\mathscr C(L_{\eta,0,\psi})$. \item $\mathscr C(L_{\eta,0,\psi})\setminus\big(\mathcal A_{c,j}\cup\mathcal A_{c,j'}\big)$ is non-empty. For each pseudo connecting curve $\xi\in$ $\mathscr C(L_{\eta,0,\psi})\setminus$ $\big(\mathcal A_{c,j}\cup\mathcal A_{c,j'}\big)$, we have $\xi(t)\in F$ for $0\leq t\leq\delta$, but its integer translation $K^*\xi(t):=\xi(t-K)$ with $K\in\mathbb Z\setminus\{0\}$ does not belong to $\mathscr C(L_{\eta,0,\psi})$ since $L_{\eta,0,\psi}$ is not periodic in $t$. \item $\mathscr C(L_{\eta,0,\psi})$ does not contain any other curves. \end{enumerate} These properties follow directly from \eqref{belongrelation}, the fact that $\psi_1(x)$ attains its maximum if and only if $x\in\bar{F}$, and the upper semi-continuity of $(\eta,\mu,\psi)\mapsto \mathscr C(L_{\eta,\mu,\psi})$. Now suppose $\mu\neq 0$. For $m_0\in\mathcal A_{c,j}\big|_{t=0}, m_1\in \mathcal A_{c,j'}\big|_{t=0}$, let $T_0^k, T_1^k\to+\infty$ be two sequences of positive integers such that $$\lim\limits_{k\to\infty}h_{L_{\eta,\mu,\psi}}^{T_0^k,T_1^k}(m_0,m_1)=h_{L_{\eta,\mu,\psi}}^{\infty}(m_0,m_1).$$ Let $\gamma_k(t): [-T^k_0, T^k_1]\to M$ be a minimizer associated with $h_{L_{\eta,\mu,\psi}}^{T_0^k,T_1^k}(m_0,m_1)$ and $\gamma$ be any accumulation point of $\{\gamma_k\}_k$; then $\gamma\in\mathscr C(L_{\eta,\mu,\psi})$.
If $\mu$ and $\varepsilon$ are small enough, we deduce from the properties (a)--(c) and the upper semi-continuity of $(\eta,\mu,\psi)\mapsto \mathscr C(L_{\eta,\mu,\psi})$ that \begin{equation}\label{constant region} \gamma(t)\in F,\quad\forall t\in[0, \delta]. \end{equation} Obviously, $(\gamma,\dot{\gamma})$ satisfies the Euler-Lagrange equation of $L_{\eta,\mu,\psi}$, but we still need to verify that it solves the Euler-Lagrange equation of $L$. In fact, $L_{\eta,\mu,\psi}=L-\eta$ for $t\leq 0$ and $L_{\eta,\mu,\psi}=L-\eta-\bar{\mu}$ for $t\geq\delta$ where $\eta$, $\bar{\mu}$ are closed 1-forms, so $\gamma(t)$ solves the Euler-Lagrange equation of $L$ for $t\in(-\infty, 0]\cup[\delta, +\infty)$. This also implies that $\gamma:(-\infty,0]\to \mathbb T^n$ is a $c$-semi static curve of $L$ and $\gamma:[\delta,+\infty)\to \mathbb T^n$ is a $c'$-semi static curve of $L$, so $$\alpha(d\gamma(t),t)\subset\widetilde{\mathcal A}(c),\quad \omega(d\gamma(t),t)\subset\widetilde{\mathcal A}(c').$$ Besides, for $t\in[0, \delta]$, we deduce from \eqref{constant region} that the Euler-Lagrange equation $(\frac{d}{dt}\partial_v-\partial_x)L_{\eta,\mu,\psi}=0$ is equivalent to $(\frac{d}{dt}\partial_v-\partial_x)L=0$ along the curve $\gamma(t)$ within $0\leq t\leq\delta$, which therefore shows that $(\gamma,\dot{\gamma})$ solves the Euler-Lagrange equation of $L$ for $t\in[0,\delta]$. This completes our proof.
\end{proof} From the proof of Theorem \ref{connecting type h} we see that the connecting orbit $(\gamma,\dot{\gamma})$ obtained in this theorem is locally minimal in the following sense: \noindent{\bf Local minimum}: {\it There are two open balls $V^-,V^+\subset \mathbb T^n$ and $k^-,k^+\in\mathbb Z_+$ such that $\bar{V}^-\subset N_j\setminus \mathcal A(c)\big|_{t=0}$ and $\bar{V}^+\subset N_{j'}\setminus \mathcal A(c')\big|_{t=0}$, $\gamma(-k^-)\in V^-$, $\gamma(k^+)\in V^+$ and \begin{equation}\label{local minimal property} \begin{aligned} &h_c^{\infty}(x^-,m_0)+h_{L_{\eta,\mu,\psi}}^{k^-,k^+}(m_0,m_1)+h_{c'}^{\infty}(m_1,x^+)\\ >&\liminf_{k^-_i, k_i^+\to\infty}\int_{-k^-_i}^{k^+_i} L_{\eta,\mu,\psi}(\gamma(t),\dot{\gamma}(t),t)\,dt+k^-_i\alpha(c)+k^+_i\alpha(c') \end{aligned} \end{equation} holds for all $(m_0,m_1)\in \partial(V^-\times V^+)$, $x^-\in N_j\cap \alpha(\gamma)|_{t=0}$, $x^+\in N_{j'}\cap\omega(\gamma)|_{t=0}$, where $k_i^-, k_i^+$ are the sequences such that $\gamma(-k_i^-)\to x^-$ and $\gamma(k_i^+)\to x^+$.} The set of curves starting from $V^-$ and reaching $V^+$ within time $k^-+k^+$ makes up a neighborhood of the curve $\gamma$ in the space of curves. If a curve $\xi$ touches the boundary of this neighborhood, the action of $L_{\eta,\mu,\psi}$ along $\xi$ will be larger than the action along $\gamma$. Besides, the connecting orbit of type-$c$ also has the local minimality property; in this case, the modified Lagrangian has the form $L_{\eta,\mu}$. The local minimality is crucial in the variational construction of global connecting orbits. \subsection{Global connecting orbits}\label{sec variationconst} Now, we are ready to prove Theorem \ref{generalized transition thm} from a variational viewpoint. In essence, we construct a global connecting orbit by shadowing a sequence of local connecting orbits.
\begin{proof}[Sketch of the proof of Theorem \ref{generalized transition thm}] The proof parallels that of \cite{CY2009} with small modifications. Here we only give a sketch of the basic idea, and the reader can refer to \cite[Section 5]{CY2009}, \cite{Ch2018}, \cite{Ch2012} for more details. For the generalized transition chain $\Gamma:[0,1]\to H^1(\mathbb T^n,\mathbb R)$ with $\Gamma(0)=c$ and $\Gamma(1)=c'$, by definition there exists a sequence $0=s_0<s_1<\cdots<s_m=1$ such that $s_i$ is sufficiently close to $s_{i+1}$ for each $0\leq i\leq m-1$, and $\mathcal A(\Gamma(s_i))$ can be connected to $\mathcal A(\Gamma(s_{i+1}))$ by a local minimal orbit of either type-$c$ (as in Theorem \ref{clemma connect}) or type-$h$ (as in Theorem \ref{connecting type h}). Then the global connecting orbits are constructed by shadowing these local ones. For simplicity, we set $c_i=\Gamma(s_i)$. For each $i\in \{0,1,\cdots,m-1\}$, we take $\eta_i,\mu_i$, $\psi_i$ and $\delta_i>0$ as in the proofs of Theorems \ref{clemma connect} and \ref{connecting type h}, where $\psi_i=0$ in the case of type-$c$. Then we choose $k_i\in\mathbb Z_+$ with $k_0=0$ and $k_{i+1}-k_i$ suitably large for each $i\in \{0,1,\cdots,m-1\}$, and introduce a modified Lagrangian \begin{equation*} L^*:=L-\eta_0-\sum\limits_{i=0}^{m-1}k_i^*(\mu_i+\psi_i). \end{equation*} Here, $k_i^*$ denotes the time translation operator such that $k^*_if(x,t)=f(x, t-k_i)$. By this definition, we see that $L^*=L-\eta_0$ for $t\leq k_0=0$, $L^*=L-\eta_m$ for $t\geq k_{m-1}+\delta_{m-1}$, and for each $i\in\{0, 1,\cdots,m-2\}$, $L^*=L-\eta_i-k_i^*(\mu_i+\psi_i)$ on $t\in [k_i, k_i+\delta_i]$ and $L^*=L-\eta_{i+1}$ for $t\in [k_i+\delta_i, k_{i+1}]$.
For integers $T_0, T_m\in\mathbb Z_+$ and $x_0,x_m\in \mathbb T^n$, we define \begin{equation*} h^{T_0,T_m}(x_0,x_m)=\inf_{\xi}\int_{-T_0}^{T_m+k_{m-1}}L^*(\xi(s),\dot{\xi}(s),s)\,ds+\sum\limits_{i=1}^{m-1}(k_i-k_{i-1})\alpha(c_i)+T_0\alpha(c_0)+T_m\alpha(c_m), \end{equation*} where the infimum is taken over all absolutely continuous curves $\xi$ defined on the interval $[-T_0,T_m+k_{m-1}]$ under some boundary conditions. By carefully setting boundary conditions and using standard arguments in variational methods, one finds that the minimizer $\gamma(t; T_0, T_m,x_0,x_m)$ of the action $h^{T_0,T_m}(x_0,x_m)$ is smooth everywhere, along which the term $k_i^*(\mu_i+\psi_i)$ does not contribute to the Euler-Lagrange equation. Hence the minimizer produces an orbit of the flow $\phi^t_L$, which passes through the $\varepsilon$-neighborhood of $\widetilde{\mathcal A}(c_i)$ at some time $t=t_i$. Letting $T_0, T_m\to+\infty$, we then obtain an accumulation curve $\gamma(t):\mathbb R\to \mathbb T^n$ of the sequence $\{\gamma(t; T_0, T_m,x_0,x_m)\}$ such that the $\alpha$-limit set of $(d\gamma(t),t)$ lies in $\widetilde{\mathcal A}(c)$ and the $\omega$-limit set of $(d\gamma(t),t)$ lies in $\widetilde{\mathcal A}(c')$. This completes the proof. \end{proof} \noindent{\bf Acknowledgments} We sincerely thank the anonymous referees for their insightful comments and valuable suggestions on improving our results. The first author was partially supported by the China Postdoctoral Science Foundation (Grant No. 2018M641500). The second author was partially supported by the National Natural Science Foundation of China (Grant No. 11631006, No. 11790272) and a program PAPD of Jiangsu Province, China. \def$'${$'$} \end{document}
\begin{document} \title{Logarithmic Gradient Transformation and Chaos Expansion of It\^{o} Process} \maketitle \begin{abstract} \noindent Since the seminal work of Wiener \cite{wiener}, the chaos expansion has evolved into a powerful methodology for studying a broad range of stochastic differential equations. Yet its complexity for systems subject to the white noise remains significant. The issue arises from the fact that the random increments generated by the Brownian motion result in a growing set of random variables with respect to which the process has to remain measurable. In order to cope with this high dimensionality, we present a novel transformation of stochastic processes driven by the white noise. In particular, we show that under suitable assumptions, the diffusion arising from the white noise can be cast into a logarithmic gradient induced by the measure of the process. Through this transformation, the resulting equation describes a stochastic process whose randomness depends only upon the initial condition. Therefore the stochasticity of the transformed system lives in the initial condition, and thereby it can be treated conveniently with the chaos expansion tools. \end{abstract} \begin{keywords} It\^{o} Process, Chaos Expansion, Fokker-Planck Equation. \end{keywords} \begin{AMS} 60H10, 35Q84, 60J60 \end{AMS} \section{Introduction} \noindent Often stochastic descriptions of natural or social phenomena lead to more realistic mathematical models. The introduced stochasticity may arise either from the uncertainty in the model inputs, or from the underlying governing law. In particular, the white noise manifests itself in both circumstances, e.g. as a random force acting on a deterministic system in the Landau-Lifschitz fluctuating hydrodynamics \cite{landau}, or as a Markovian process describing rarefied gases \cite{gorji} or polymers \cite{ottinger}. \\ \ \\ The Monte--Carlo methods are typically a natural choice for computational studies of systems driven by the white noise.
Yet the slow convergence rate of the brute-force Monte--Carlo motivates a quest for improved approaches. There exists an immense list of advanced Monte--Carlo techniques, each of which may yield a substantial improvement over the conventional Monte--Carlo, provided certain regularities. One of the promising examples belongs to the Multi-Level Monte-Carlo (MLMC) approach \cite{giles} (and its variants \cite{tempone}). In short, MLMC makes use of abundant samples on a coarse scale discretization in order to improve the convergence rate of the fine scale one. This can be achieved by enforcing correlations between successive approximations; usually through employing common random numbers among them. \\ \ \\ Instead of producing numerical samples of a random variable however, one can expand the solution with respect to a set of (orthogonal) random functions which possess a known distribution \cite{karniadakis1}. The polynomial chaos and stochastic collocation schemes are among the main approaches built around this idea \cite{karniadakis2,hesthaven}. In particular, the polynomial chaos schemes transform the random differential equations into a set of deterministic equations, which govern the evolution of the coefficients introduced in the polynomial expansion of the random solution. Therefore, by knowing the distribution of the resulting orthogonal functions, different statistics of the solution can be computed deterministically. While this approach may lead to efficient computations for equations involving a finite set of random variables, its application to the Brownian motion poses a significant computational challenge. The problem arises due to the fact that the dimension of the expansion should grow in time in order to keep the solution measurable with respect to the Brownian motion \cite{hou}.
Hence, the cost of the chaos expansion schemes grows significantly here, in comparison to the counterpart scenario where the solution remains measurable with respect to a fixed set of random variables. \\ \ \\ This paper addresses the problem of deterministic solution algorithms for systems subject to the white noise, in an idealized It\^{o} process setting. Here we introduce a novel transformation, where the randomness of the Brownian motion is described as a propagation of an (artificial) uncertainty of the initial condition. We show that the measure induced by the transformed system is consistent with the one resulting from the It\^{o} process, in the moment sense. The key ingredient is the fact that both the transformed and the original process lead to an identical Fokker--Planck equation for their probability densities. Afterwards, since the transformed system describes an Ordinary Differential Equation (ODE) with an uncertain initial condition, a chaos expansion can be applied in a straightforward manner. \\ \ \\ The paper is structured as follows. First, in the next section, we present our setting for the It\^{o} process together with a short review of its corresponding Wiener chaos expansion. In \cref{sec:main}, the gradient transformation of the white noise is motivated and introduced. In the follow-up \cref{sec:theory}, some theoretical aspects of the transformation are justified. In particular, the solution existence and uniqueness of the transformed process is discussed. Then in \cref{sec:chaos}, the Hermite chaos expansion of the transformed process is devised. The paper concludes with final remarks and future outlooks. \section{Review of the It\^{o} Process} \label{sec:rev} To start, a set of assumptions on the coefficients of the It\^{o} process, necessary for our analysis, is provided in \cref{sec:gen}. Next, the conventional chaos expansion of the It\^{o} process is reviewed in \cref{sec:conv}.
\subsection{General Setting} \label{sec:gen} \noindent We focus on a simple prototype of stochastic processes driven by the white noise. Let $(\Omega,\mathcal{F}^{U_0}_{t},\mathcal{P})$ be a complete probability space, where $\mathcal{F}^{U_0}_t=\mathcal{F}_t \otimes \mathcal{F}^{U_0}$ denotes the $\sigma$-algebra on the subsets of $\Omega=\Omega_1 \times \Omega_2$. Here $\{ \mathcal{F}_t\}_{t\ge 0}$ is an increasing family of $\sigma$-algebras induced by the $n$-dimensional standard Brownian path $W(.,.): \mathbb{R}^{+} \times \Omega_1\to \mathbb{R}^n$, and $\mathcal{F}^{U_0}$ the $\sigma$-algebra generated by the initial condition $U_0(.): \Omega_2 \to \mathbb{R}^n$. \\ \ \\
%The sample space is , and $\mathcal{P}$ denotes the corresponding probability law for subsets of $\Omega$. \\ \ \\
We consider an It\^{o} diffusion process \begin{equation} dU_i (t,\omega)=b_i(U) dt+\beta dW_i (t,\omega) , \label{eq:ito-main} \end{equation} governing the evolution of the $\mathcal{F}^{U_0}_{t}$-measurable random variable $U(.,.): \mathbb{R}^+\times \Omega \to \mathbb{R}^{n}$, with the initial value $U_0$ and the law $\mathcal{P}$. \\ \ \\
%Here $W (\omega,t)$ denotes the n-dimensional canonical Brownian path.
Throughout this manuscript, we need certain regularity assumptions on the drift $b(.) : \mathbb{R}^n \to \mathbb{R}^n$, the diffusion coefficient $\beta\in \mathbb{R}$ and the initial condition $U_0$. \\ \ \\ We require $\beta\neq 0$ and that the drift $b(x)=-\nabla \Psi(x)$ with $\Psi(.) \in{{C}^\infty_{b}}(\mathbb{R}^n)$, where $C_{b}^{\infty}$ denotes the space of bounded functions with bounded derivatives of all orders. Finally, we assume that the initial condition is deterministic; hence its probability density is $f_{U_0}(u)=\delta (u-U_0)$, where $\delta(.)$ is the $n$-dimensional Dirac delta and $U_0\in \mathbb{R}^n$. \\ \ \\ For the above-described setting, many interesting properties can be shown for the It\^{o} process, including the following.
\begin{remark} \label{rem:sde} It is a classic result in the theory of Stochastic Differential Equations (SDEs) that since $\Psi(.) \in{{C}^\infty_{b}}(\mathbb{R}^n)$ and $\beta$ is assumed to be a constant, Eq.~\cref{eq:ito-main} has a solution with a bounded variance for all $t \ge 0$, which is unique in the mean square sense. Furthermore, the process is Feller continuous, so expectations of the solution vary smoothly with respect to the initial condition \cite{oksendal}. \end{remark} \begin{remark} \label{rem:cont} Based on different results in the Malliavin calculus, since the coefficients $b$ and $\beta$ fulfill the H\"{o}rmander criterion and furthermore $b$ has bounded derivatives, the Borel measure generated by the process $\mu_{U}=\mathcal{P}(U^{-1})$ is infinitely differentiable. Therefore the probability density $f_U(u;t)du=d\mu_U(u;t)$ is well-defined and $\mu_U(.;t),f_U(.;t)\in C^\infty(\mathbb{R}^n)$, provided $t>0$; see e.g. Theorem 2.7 in \cite{watanabe}. \end{remark} \begin{remark} \label{rem:fisher} Due to Corollary 4.2.2 of \cite{bogachev}, since $\mu_U$ is three times differentiable, the Fisher information \begin{eqnarray} I(f)&:=&\int_{\mathbb{R}^n}\frac{1}{f}\nabla_x f \cdot \nabla_x f dx \end{eqnarray} associated with the density $f_U$ is bounded at $t>0$. \end{remark} \begin{remark} \label{rem:fp} The density $f_U$ evolves according to the Fokker-Planck equation (forward-Kolmogorov equation) \begin{eqnarray} \label{eq:fp} \frac{\partial f_U(u;t)}{\partial t}&=&-\frac{\partial}{\partial u_i}\left(b_i (u) f_U(u;t)\right)+\frac{\beta^2}{2}\frac{\partial^2}{\partial u_i \partial u_i }f_U(u;t) \end{eqnarray} and the measure $\mu_U$ is governed by the transport equation \begin{eqnarray} \label{eq:trans} \frac{\partial \mu_U(u;t)}{\partial t}&=&-b_i (u) \frac{\partial}{\partial u_i}\mu_U(u;t)+\frac{\beta^2}{2}\frac{\partial^2}{\partial u_i\partial u_i }\mu_U(u;t). \end{eqnarray} Since $\Psi(.)
\in{{C}^\infty_{b}}(\mathbb{R}^n)$ and $\beta \neq 0$, both above-mentioned equations have unique solutions (for uniqueness results see \cite{manita,diperna,bogachev2}). Notice that the Einstein index convention is employed here and henceforth, to economize the notation. \end{remark} \noindent In comparison to the natural setting of It\^{o} processes, we have introduced strong assumptions on $\Psi$ and $\beta$. Though not straightforward, the generalization of our analysis may become possible as long as the corresponding It\^{o} process has a unique solution with bounded variance and its corresponding Fisher information is bounded (e.g. by using Lyapunov functionals \cite{khasminskii}). But to keep the study focused on the main idea, we postpone the generalization to follow-up studies. \\ \ \\ In typical applications in scientific computing, one is interested in some moments of the solution $U$, which are in the form of an expectation $\mathbb{E}[g (U(t,\omega))]$ of some smooth function $g(.)\in C^{\infty}(\mathbb{R}^n)$. \subsection{Wiener Chaos Expansion} \label{sec:conv} \noindent Due to the slow convergence rates of Monte-Carlo methods, deterministic solution algorithms for stochastic processes can be attractive. Besides stochastic collocation methods \cite{zhang}, a Wiener chaos expansion of Eq.~\eqref{eq:ito-main} is possible due to the Cameron-Martin theorem \cite{cameron}, as carried out e.g. by Rozovskii, Hou and others \cite{hou, rozovskii,karniadakis1}. It is useful for our subsequent analysis to provide an overview of this expansion. To simplify the notation we explain the chaos expansion of $U$ in a one dimensional setting $n=1$. For a multi-dimensional case, the following can be applied for each component of the solution.
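For later comparison, note that such expectations can always be estimated by plain Monte--Carlo sampling of Eq.~\eqref{eq:ito-main}. The sketch below is a minimal Euler--Maruyama discretization, assuming a hypothetical Ornstein--Uhlenbeck drift $b(x)=-x$ (i.e. $\Psi(x)=x^2/2$, chosen only because its moments are known in closed form; this $\Psi$ is not in $C_b^\infty$, so it sits outside the assumptions above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D example: Psi(x) = x^2/2, so b(x) = -x (an OU process).
# This Psi violates the boundedness assumption; it is used only because
# the exact moments E[U_T] = U0 e^{-T}, Var[U_T] = (beta^2/2)(1 - e^{-2T})
# are available for a sanity check.
def b(x):
    return -x

beta, U0 = 1.0, 1.0
T, n_steps, n_paths = 1.0, 1000, 100_000
dt = T / n_steps

U = np.full(n_paths, U0)
for _ in range(n_steps):
    # Euler-Maruyama step: dU = b(U) dt + beta dW
    U += b(U) * dt + beta * np.sqrt(dt) * rng.standard_normal(n_paths)

# Monte-Carlo estimates of E[g(U_T)] for g(u) = u and g(u) = (u - E[U_T])^2
print(U.mean(), U.var())
```

The slow $\mathcal{O}(1/\sqrt{n_{\textrm{paths}}})$ statistical error of such estimates is precisely what the deterministic expansions below aim to avoid.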
\\ \ \\ The random events with respect to which the solution $U$ is measurable are due to the initial condition $U_0$ and the corresponding Brownian integral $\beta\int_0^t dW(s,\omega)$. Therefore, for a deterministic $U_0$, $U$ can be expressed as \begin{eqnarray} U(t,\omega)&=&M\left(U_0, \int_0^t dW(s,\omega),t\right). \end{eqnarray} The integral of the Brownian path $\mathcal{I}(\omega):= \int_{s=0}^{t} dW(s,\omega)$ can be expanded as \begin{eqnarray} \mathcal{I}(\omega)&=&\sum_{j=1}^\infty\xi_j(\omega) \int_{0}^t\phi_j(s)ds, \label{eq:exact-wiener} \end{eqnarray} where $\{\phi_j(s)\}$ is a sequence of orthogonal functions in $L^2([0,t])$ and $\xi_j$ are independent normally distributed random variables. \\ \ \\ Suppose $P^{(l)}=\left\{t^{(l)}_j=jt/{m_l} \ \big\vert \ j\in\{1,...,m_l\} \right\}$ is a partition of the time interval $(0,t]$. Intuitively, the Brownian motion generates an independent normally distributed random variable at each $t^{(l)}_j\in P^{(l)}$. In this picture, let \begin{eqnarray} \hat{\mathcal{I}}^{(l)}&=&\sum_{j=1}^{m_l}\xi_j\int_{0}^t\phi_j(s)ds \label{eq:approx-wiener} \end{eqnarray} be an approximation of the integral \eqref{eq:exact-wiener} corresponding to the partition $P^{(l)}$. It can be shown that \begin{eqnarray} \mathbb{E}\left[\left(\mathcal{I}-\hat{\mathcal{I}}^{(l)}\right)^2\right]< C \frac{t}{m_l}, \end{eqnarray} where $C<\infty$ is some constant \cite{luo}.\\ \ \\ Analogously, let $\hat{U}^{(l)}$ be an approximation of $M$, computed on the partition $P^{(l)}$. Therefore, due to Eq.~\eqref{eq:approx-wiener}, the solution at time $t$ can be approximated as a function $\hat{U}^{(l)}(t,\xi_1,...,\xi_{m_l})$ with a mean square error of $\mathcal{O}(t/m_l)$ (due to the truncation introduced in Eq.~\eqref{eq:approx-wiener}). At this point the Wiener chaos expansion can be applied to $\hat{U}^{(l)}$, as explained in the following.
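As a quick numerical illustration of this $\mathcal{O}(t/m_l)$ behavior, the sketch below evaluates, in closed form, the worst-case mean square error of the truncated path reconstruction $W(s)\approx\sum_{j\le m}\xi_j\int_0^s\phi_j\,du$ on $[0,t]$. The cosine basis and the error functional here are illustrative assumptions, not the specific construction of \cite{luo}:

```python
import numpy as np

# Orthonormal cosine basis on [0, t]:
#   phi_1(s) = 1/sqrt(t),  phi_j(s) = sqrt(2/t) cos((j-1) pi s / t), j >= 2.
# With xi_j = int_0^t phi_j dW (i.i.d. N(0,1)), the truncated path
# W_hat(s) = sum_{j<=m} xi_j a_j(s), a_j(s) = int_0^s phi_j du, satisfies
#   E[(W(s) - W_hat(s))^2] = s - sum_{j<=m} a_j(s)^2,
# so the MSE can be evaluated without any sampling.
t = 1.0
s = np.linspace(0.0, t, 1001)

def max_mse(m):
    acc = (s / np.sqrt(t)) ** 2                      # j = 1 term
    for j in range(2, m + 1):
        k = (j - 1) * np.pi / t
        acc += (np.sqrt(2.0 / t) * np.sin(k * s) / k) ** 2
    return float((s - acc).max())

for m in (4, 8, 16, 32):
    print(m, max_mse(m))    # decays roughly like t/m
```

Doubling $m$ roughly halves the worst-case error, consistent with the $\mathcal{O}(t/m_l)$ bound quoted above.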
\\ \ \\ In order to expand $\hat{U}^{(l)}$ with respect to the Hermite basis, suppose $\xi=(\xi_1,...,\xi_{m_l})$ is an $m_l$-dimensional normally distributed random variable and let $\alpha=(\alpha_1,...,\alpha_{m_l})\in \mathcal{J}^p_{m_l}$ denote an index from the set of multi-indices \begin{eqnarray} \label{eq:multi} \mathcal{J}^p_{m_l}&=&\left\{\alpha=(\alpha_i,1\le i\le {m_l}) \bigg \vert \alpha_i\in\{0,1,2,...,p\},|\alpha|=\sum_{i=1}^{m_l}\alpha _i\right\}. \end{eqnarray} Let the $|\alpha|$-order multi-variate Hermite polynomial \begin{eqnarray} {H}_\alpha(\xi)&=&\prod_{i=1}^{m_l}\hat{H}_{\alpha_i}(\xi_i) \label{eq:def-wick} \end{eqnarray} be a tensor product of the normalized $\alpha_i$-order Hermite polynomials $\hat{H}_{\alpha_i}(\xi_i)$. According to the Cameron-Martin theorem, $\hat{U}^{(l)}(t,\xi)$ admits the following Hermite expansion \begin{eqnarray} \hat{U}^{(l)}(t,\xi)=\lim_{p \to \infty}\sum_{\alpha \in \mathcal{J}^p_{m_l}} {\hat{u}^{(l)}}_{\alpha}(t) H_\alpha(\xi), \label{eq:strong-Hermite} \end{eqnarray} where ${\hat{u}^{(l)}}_\alpha(t)=\mathbb{E}[\hat{U}^{(l)}(t,\xi) H_\alpha(\xi)]$. \\ \ \\ In fact the expansion \eqref{eq:strong-Hermite} provides a means to project the randomness of the solution $U(t,\omega)$ onto the Hermite basis. As a result, the It\^{o} process is transformed to a set of deterministic ODEs for the coefficients $\hat{u}^{(l)}_{\alpha}(t)$ and thus the expectations $\mathbb{E}[g(U(t,\omega))]\approx\mathbb{E}[g(\hat{U}^{(l)}(t,\xi))]$ can be computed deterministically. However, in order to keep the order of the approximation introduced in the expansion \eqref{eq:approx-wiener} constant, $m_l$ should grow linearly with respect to $t$. So does the dimension of the expansion \eqref{eq:strong-Hermite}, since $m_l$ sets the number of random variables entering the Hermite polynomials.
Thus, unless the short time behavior of the solution is of interest, the complexity of the Wiener chaos expansion of the It\^{o} process may become prohibitive, even though the number of Hermite polynomials can be reduced significantly through sparse tensor compressions \cite{schwab}. \\ \ \\ A more general insight into the problem can be gained by considering the fact that a smooth function $f(W(t,\omega))$ of an $n$-dimensional Brownian path at time $t=T$ is measurable with respect to the Borel $\sigma$-algebra on $\Omega={\left(\mathbb{R}^{n}\right)}^{[0,T]}$ \cite{oksendal}. Therefore, in order to devise a chaos expansion of $f$, the orthogonal functions should span the rather high dimensional space $L^2(\Omega)$. \section{Main Result} \label{sec:main} \noindent The main idea of this work is to find an alternative SDE with the same probability density as the one generated by the It\^{o} process, which yet remains measurable with respect to the $\sigma$-algebra induced by its initial condition. \\ \ \\
% In other words, we look for a random variable with the same (marginal) probability density as the one induced by $\eqref{eq:ito-main}$, yet measurable with respect to its initial condition. \\ \ \\
%Note that under modest assumptions of the initial measure generated by $U_0$, such a process can be constructed.\\ \ \\
More precisely, consider again the partition $P^l=\{0=t^l_1<t_2^l<...<t^l_{m_l}=t\}$ for the time interval $[0,t]$ with $|P^l|\to 0$ as $l\to \infty$. Obviously the solution of the It\^{o} process $U(t,\omega)$ is measurable with respect to the family of $\sigma$-algebras \begin{eqnarray} \{\mathcal{F}^{U_0}_{t^l_1}, \mathcal{F}^{U_0}_{t^l_2},...,\mathcal{F}^{U_0}_{t^l_{m_l}} \} \ \ \ \textrm{as} \ \ \ l\to \infty.
\nonumber \end{eqnarray} However, if we are only interested in some expectation $\mathbb{E}[g(U(t,\omega))]$ at time $t$, the knowledge of the Borel measure $\mu_U(B;t)=\mathcal{P}\{U^{-1} (t,B)\}$, where $B\in \mathcal{B}^n$, is sufficient. Note that $\mathcal{B}^n$ is the Borel $\sigma$-algebra on $\mathbb{R}^n$. Let $f_U(u;t)$ be the corresponding probability density, i.e. $f_U(u;t)du=d\mu_U(u;t)$; therefore \begin{eqnarray} \mathbb{E}[g(U(t,\omega))]&=&\int_{\mathbb{R}^n}f_U(u;t)g(u)du. \nonumber \end{eqnarray} \noindent Suppose the random variable $X(t,\omega): \mathbb{R}^+ \times \Omega \to \mathbb{R}^n$ belongs to a complete probability space $(\Omega,\mathcal{G},\mathcal{Q})$, and generates a Borel measure $\mu_X=\mathcal{Q}(X^{-1})$. Let the probability density be $f_X(x;t)dx=d\mu_X$. We propose that under suitable assumptions on $f_{X}(x;0)$ (as explained in the following section), the solution of the transformed It\^{o} process \begin{eqnarray} \frac{d}{dt}X_i(t,\omega)&=&b_i(X)-\frac{1}{2} \beta^2 \left [\nabla_{x_i} \log f_X(x;t) \right ]_{x=X(t,\omega)} \label{eq:weak-ito} \end{eqnarray} with the initial condition $X_0(\omega):\Omega \to \mathbb{R}^n$, exists uniquely for all $t$. Furthermore, the solution is consistent with the It\^{o} process in the sense that for an arbitrary smooth $g\in C^\infty (\mathbb{R}^n)$ we have \begin{eqnarray} \mathbb{E}[g(X(\omega,t))]&=&\mathbb{E}[g(U(\omega,t))], \end{eqnarray} where $U$ is the solution of the It\^{o} process with the initial condition $U_0=X_0$. \\ \ \\ Let us first review the motivation behind this transformation. Due to It\^{o}'s lemma, the probability density generated by the It\^{o} process follows the Fokker-Planck equation (see \cref{rem:fp}) \begin{eqnarray} \frac{\partial f_U(u;t)}{\partial t}+\frac{\partial}{\partial u_i}\left(b_i (u) f_U(u;t)\right)&=&\frac{1}{2}\frac{\partial^2}{\partial u_i \partial u_i }\left(\beta^2f_U(u;t)\right).
\end{eqnarray} By rearranging the diffusion term one can see that \begin{eqnarray} \frac{\partial f_U(u;t)}{\partial t}+\frac{\partial}{\partial u_i}\left\{\bigg(b_i(u)-\frac{1}{2}\beta^2 \frac{\partial}{\partial u_i} \log(f_U(u;t))\bigg)f_U(u;t)\right\}&=&0, \nonumber \end{eqnarray} resulting in a stochastic process similar to Eq.~\eqref{eq:weak-ito}. Intuitively, we observe that the effect of the diffusion on the probability density is equivalent to an advection induced by the gradient $\nabla_u \log f_U$. We refer to this transformation as the {\it logarithmic gradient transformation}. Obviously this transformation needs to be justified. However, before proceeding to the technical discussion in \cref{sec:theory}, let us provide some physical motivation behind the logarithmic gradient transformation. \\ \noindent Suppose $\exp\left({-{2\Psi(x)}/{\beta^2}}\right)\in L^1(\mathbb{R}^n)$ and hence the stationary density \begin{eqnarray} f_{st}(x)&=&\mathcal{Z}\exp\left({-\frac{2\Psi(x)}{\beta^2}}\right) \end{eqnarray} is well-defined, with $\mathcal{Z}$ the normalization constant. Therefore the introduced process generates the paths $(t,X(t,\omega))$ according to \begin{eqnarray} \frac{d}{dt}X_i(\omega,t)&=&-\frac{\beta^2}{2}\nabla_{x_i} \log \left(\frac{f_X(x;t)}{f_{st}(x)}\right) \bigg\vert_{x=X(\omega,t)} \nonumber \end{eqnarray} which is a gradient flow induced by the potential $\phi=\log ({f_X}/{f_{st}}) $. This potential is connected to the Kullback-Leibler distance (entropy distance) \begin{eqnarray} d_{KL}(t)&=&\int_{\mathbb{R}^n}f_X(x;t) \log \left(\frac{f_X(x;t)}{f_{st}(x)}\right) dx=\mathbb{E}[\phi(X)] \nonumber \end{eqnarray} between the two densities $f_X$ and $f_{st}$ \cite{kullback,otto}. Therefore, from the physical point of view, the logarithmic gradient transformation generates a gradient flow which minimizes the entropy distance $d_{KL}$ between the current state $f_X$ and $f_{st}$.
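Before turning to the rigorous statements, the claimed moment consistency can be probed numerically in a case where $\nabla_x \log f_X$ is available in closed form. The sketch below again assumes the hypothetical linear drift $b(x)=-x$ (outside the $C_b^\infty$ setting, but exactly solvable): the density then stays Gaussian, so the logarithmic gradient reduces to $-(x-m(t))/s^2(t)$ with $m$ and $s^2$ estimated from a particle ensemble. It is an illustration of Eq.~\eqref{eq:weak-ito}, not a general-purpose solver:

```python
import numpy as np

rng = np.random.default_rng(1)
beta, U0, eps = 1.0, 1.0, 0.05
T, n_steps, n_particles = 1.0, 2000, 100_000
dt = T / n_steps

# Regularized initial condition X_0 = U_0 + eps*Z with Z ~ N(0, 1).
X = U0 + eps * rng.standard_normal(n_particles)

for _ in range(n_steps):
    m, s2 = X.mean(), X.var()
    grad_log_f = -(X - m) / s2          # exact for a Gaussian density
    # deterministic dynamics: dX/dt = b(X) - (beta^2/2) * grad log f_X
    X += (-X - 0.5 * beta**2 * grad_log_f) * dt

# For the OU process, E[U_T] = U0 e^{-T} and
# Var[U_T] = (beta^2/2)(1 - e^{-2T}) up to an O(eps^2) regularization error.
print(X.mean(), X.var())
```

Although each particle follows a deterministic ODE, the ensemble reproduces the diffusive spreading of the It\^{o} process, as claimed.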
\\ \ \\ \section{Theoretical Justifications} \label{sec:theory} \noindent The following arguments establish a connection between solutions of the main It\^{o} process, i.e. Eq.~\eqref{eq:ito-main}, and the transformed one, Eq.~\eqref{eq:weak-ito}. \subsection{Regularity of the It\^{o} Process} \noindent To start, note that in order to make sense of Eq.~\eqref{eq:weak-ito}, $f_U$ should admit certain regularities. Let us introduce a class of admissible probability densities \begin{eqnarray} K_1&:=&\bigg\{ f(x) : \mathbb{R}^n\to (0,\infty)\ \bigg \vert \ \nabla \log{f}\in C_l^{\infty}(\mathbb{R}^n),\ M(f)<\infty,I(f)<\infty \bigg\}, \end{eqnarray} where \begin{eqnarray} M(f)&=&\int_{\mathbb{R}^n}f|x|^2dx \nonumber \end{eqnarray} and $C_l^{\infty}$ is the space of infinitely differentiable functions with at most linear growth. \noindent The next lemma provides a link between $f_U$ and $K_1$. \begin{lemma} \label{lemma:space} Let $U^\epsilon(t,\omega)$ be the solution of the It\^{o} process \eqref{eq:ito-main} in the probability space $(\Omega,\mathcal{F}_t^{U^\epsilon_0},\mathcal{P}^\epsilon)$ with a drift $b=-\nabla \Psi$, $\Psi(.) \in C_b^\infty(\mathbb{R}^n)$ and a diffusion coefficient $\beta \neq 0$. Suppose the initial condition reads $U_0^\epsilon=U_0+\epsilon Z$, where $U_0\in \mathbb{R}^n$ is deterministic, $Z(\omega)\in \mathbb{R}^n$ is a normally distributed random variable and $\epsilon \in \mathbb{R}$ is a small, arbitrarily chosen non-zero constant. \\ \ \\ Let $f_{U^\epsilon}(u;t)=d\mathcal{P}^\epsilon\left({U^\epsilon}^{-1}\right)$ be the probability density of the process; therefore \\ \begin{eqnarray} \label{in:reg-ito} f_{U^\epsilon}(.;t)\in K_1, \end{eqnarray} for $t\in [0, \infty)$.
\end{lemma} \begin{proof} Note that the initial condition $U_0^\epsilon$ has a Gaussian probability density of the form \begin{eqnarray} f_{U_0^\epsilon}(u)=\mathcal{M}_{\epsilon}\left(|u-U_0|\right), \end{eqnarray} where \begin{eqnarray} \label{eq:gauss} \mathcal{M}_\epsilon (h)&:=&\frac{1}{(\sqrt{2\pi}|\epsilon| )^{n}}\exp\left(-\frac{h^2}{2\epsilon^2}\right). \end{eqnarray} It is straightforward to see that $\mathcal{M}_{\epsilon}\left(|u-U_0|\right) \in K_1$ and thus we only need to prove the claim \eqref{in:reg-ito} for $t>0$. Notice that here and afterwards, $|\ . \ |$ denotes the Euclidean norm. \\ \ \\ First let us show that $\log f_{U^\epsilon}(.;t)\in C^{\infty}(\mathbb{R}^n)$ for $t>0$. According to \cref{rem:sde}-\cref{rem:fisher}, at each $t>0$ we have $f_{U^\epsilon}(.;t)\in C^{\infty}(\mathbb{R}^n)$, $I(f_{U^\epsilon})<\infty$ and $M(f_{U^\epsilon})<\infty$. Hence it is sufficient to prove $f_{U^\epsilon}(.; t)>0$, for $t>0$. For that, we make use of the Girsanov transformation. But before proceeding, to prevent unnecessary notational complications we set $\beta =1$ in what follows. \\ \ \\ Let $W^\epsilon(t,\omega)$ be a standard $n$-dimensional Brownian process with the initial condition $U^\epsilon_0$ and the law $\mathcal{W}^\epsilon$. Then since $b(.)\in C_{b}^\infty(\mathbb{R}^n)$, we have \begin{eqnarray} \mathbb{E}\left[\exp\left(\frac{1}{2}\int_0^{T} b_i (W^\epsilon(t,\omega))b_i(W^\epsilon(t,\omega))dt \right)\right]&<&\infty, \end{eqnarray} for any finite $T$. Therefore the process \begin{eqnarray} Z(t,\omega)&:=&\exp\left(-\int_0^t b_i(W^\epsilon(s,\omega))dW^\epsilon_i(s,\omega)-\frac{1}{2}\int_{0}^t b^2(W^\epsilon(s,\omega))ds \right) \nonumber \\ \end{eqnarray} is a martingale for $t\in [0,T)$ \cite{oksendal}. It follows from the Girsanov theorem that \begin{eqnarray} d\mathcal{P}^\epsilon(t,\omega)&=&Z(t,\omega)d\mathcal {W}^\epsilon(t,\omega).
\end{eqnarray} Since $d\mathcal{W}^\epsilon$ is a Gaussian measure, it is strictly positive for $t>0$, and hence $d\mathcal{P}^\epsilon>0$. It is then straightforward to check that $f_{U^\epsilon}(u;t)>0$, for any $u\in \mathbb{R}^n$, provided $t>0$. \\ \ \\ Now the final piece is to prove \begin{eqnarray} |\nabla_u \log f_{U^\epsilon}(u;t)| &\leq& C(t,U_0)\left(|u|+1\right) \end{eqnarray} for every $u\in \mathbb{R}^n$, $t>0$ and some constant $C(t,U_0)<\infty$ which depends on $t$ and the initial condition $U_0$. Consider the partition \begin{eqnarray} P^{(l)}&=&\left\{t^{(l)}_j=jt/{m_l} \ \big\vert \ j\in\{1,...,m_l\} \right\} \end{eqnarray} for the interval $(0,t]$ and $\Delta t^{(l)}=t/{m_l}$. Suppose ${Z}^{(l)}$ is the projection of the martingale $Z(t,\omega)$ on the partition $P^{(l)}$. Using It\^{o}'s lemma, we get \begin{eqnarray} Z^{(l)}(t,\omega)&=&\exp\bigg(\Psi(W^\epsilon(0,\omega))-\Psi(W^\epsilon(t,\omega))\bigg) \nonumber \\ &&\exp\left(\frac{1}{2}\sum_{j=1}^{m_l}\left({b^\prime}(W^\epsilon(t_j^{(l)},\omega))-b^2(W^\epsilon(t_j^{(l)},\omega))\right)\Delta t^{(l)}\right), \nonumber \\ \end{eqnarray} where ${b^\prime}=\textrm{div}\{b\}$. In terms of the density $f_{U^\epsilon}$, the Girsanov transformation yields \begin{eqnarray} \label{eq:girsanov-f} f_{U^\epsilon}(u_{m_l};t)&=&e^{-\Psi(u_{m_l})}\underbrace{\int_{\mathbb{R}^n}...\int_{\mathbb{R}^n}}_{m_l \ \textrm{times}}\bigg(e^{\Psi(u_0)+\frac{1}{2}\Delta t^{(l)}\sum_{j=0}^{m_l-1}\left(b^\prime(u_j)-b^2(u_j)\right)}\nonumber \\ &&\mathcal{M}_\epsilon (|u_0-U_0|)\prod_{i=0}^{m_l-1}\mathcal{M}_{\sqrt{\Delta t^{(l)}}} (|u_{i+1}-u_i|)\bigg)du_0du_1...du_{m_l-1}, \end{eqnarray} as $m_l\to \infty$, where $\mathcal{M}$ is the Gaussian density defined in Eq.~\eqref{eq:gauss}. Since $\Psi \in C^\infty_b$, the factor $\exp\left(\Psi(u_0)+\frac{1}{2}\Delta t^{(l)}\sum_{j=0}^{m_l-1}\left(b^\prime(u_j)-b^2(u_j)\right)\right)$ is bounded above and below by some $S(t) < \infty$ and $I(t)>0$, respectively.
Therefore we have \begin{eqnarray} \bigg \vert \nabla_{u_{m_l}}&&\log f_{U^\epsilon}(u_{m_l};t) \bigg \vert\leq|b(u_{m_l})|\nonumber \\ +\frac{S(t)}{I(t)}&&\left\vert \frac{\int_{\mathbb{R}^n}...\int_{\mathbb{R}^n}\mathcal{M}_{\epsilon} (|u_0-U_0|)\prod_{i=0}^{m_l-1}\nabla_{u_{m_l}}\mathcal{M}_{\sqrt{\Delta t^{(l)}}} (|u_{i+1}-u_i|)du_0...du_{m_l-1}}{\int_{\mathbb{R}^n}...\int_{\mathbb{R}^n}\mathcal{M}_{\epsilon} (|u_0-U_0|)\prod_{i=0}^{m_l-1}\mathcal{M}_{\sqrt{\Delta t^{(l)}}} (|u_{i+1}-u_i|)du_0...du_{m_l-1}}\right \vert, \nonumber \\ \end{eqnarray} as $m_l\to \infty$. However, the integral terms can be computed explicitly. In fact in the limit of $m_l\to \infty$, we get \begin{eqnarray} \int_{\mathbb{R}^n}...\int_{\mathbb{R}^n}\mathcal{M}_{\epsilon} (|u_0-U_0|)\prod_{i=0}^{m_l-1}\mathcal{M}_{\sqrt{\Delta t^{(l)}}} (|u_{i+1}-u_i|)du_0...du_{m_l-1}&=&\mathcal{M}_{\sqrt{\epsilon^2+t}}(|u_{m_l}-U_0|). \nonumber \\ \end{eqnarray} Therefore the upper bound reads \begin{eqnarray} \left \vert \nabla_{u_{m_l}} \log f_{U^\epsilon}(u_{m_l};t) \right \vert&\leq&|b(u_{m_l})| +\frac{S(t)}{I(t)}\left\vert \frac{\nabla_{u_{m_l}}\mathcal{M}_{\sqrt{\epsilon^2+t}}(|u_{m_l}-U_0|)}{\mathcal{M}_{\sqrt{\epsilon^2+t}}(|u_{m_l}-U_0|)}\right \vert \nonumber \\ &&\leq C(t,U_0)\left(|u_{m_l}|+1\right), \end{eqnarray} for $t>0$. \end{proof} \begin{corollary}\label{cor:trans} The measure of the process $\mu_{U^\epsilon}$ is the solution of the following transport equation \begin{eqnarray} \frac{\partial \mu_{U^\epsilon}(u;t)}{\partial t}&=&\left(-b_i (u)+\frac{\beta^2}{2}\frac{\partial }{\partial u_i}\log f_{U^\epsilon}(u;t)\right)\frac{\partial \mu_{U^\epsilon}(u;t)}{\partial u_i}. \end{eqnarray} \end{corollary} \begin{proof} The proof is straightforward, using \cref{rem:fp} and the result of \cref{lemma:space}, that $f_{U^\epsilon}(.,t) \in K_1$.
\end{proof} \subsection{Solution Existence-Uniqueness and Consistency} \begin{theorem} Let $U(t,\omega)$, $U^\epsilon(t,\omega)\in \mathbb{R}^n$ be solutions of the It\^{o} process \eqref{eq:ito-main} for the initial conditions $U_0$ and $U_0^\epsilon$, respectively, where the drift $b=-\nabla \Psi$ fulfills $\Psi \in C_b^\infty$ and $\beta \neq 0$. Here $U_0\in \mathbb{R}^n$ is deterministic, whereas $U^\epsilon_0=U_0+\epsilon Z$, where $Z(\omega)\in \mathbb{R}^n$ is a normally distributed random variable and $\epsilon \in \mathbb{R}$ is an arbitrarily chosen non-zero parameter. \\ \ \\ Suppose $X^\epsilon(t,\omega)\in \mathbb{R}^n$ is a random variable in a space $(\Omega,\mathcal{G}^\epsilon,\mathcal{Q}^\epsilon)$, and evolves according to \begin{eqnarray} \label{eq:ito-weakp} \frac{d}{dt}X^\epsilon_i(t,\omega)&=&b_i(X^\epsilon)-\frac{1}{2} \beta^2 \left [\nabla_{x_i} \log f_{X^\epsilon}(x;t) \right ]_{x=X^\epsilon(t,\omega)}, \end{eqnarray} subject to the initial condition $U_0^\epsilon$. Here $f_{X^\epsilon}(x;t)=d\mathcal{Q}^\epsilon\left({X^\epsilon}^{-1}\right)$ is the probability density of the process \eqref{eq:ito-weakp}. Therefore \begin{enumerate} \item The process \eqref{eq:ito-weakp} has a unique solution with $\mathbb{E}[{X^\epsilon}^2(t,\omega)]<\infty$ for $t\in[0,\infty)$. \item For an arbitrary $g(.)\in C^2(\mathbb{R}^n)$, we have \begin{eqnarray} \mathbb{E}\left[g(X^\epsilon(t,\omega))\right]&=&\mathbb{E}\left[g(U^\epsilon(t,\omega))\right] \\ \textrm{and} \ \ \ \ \ \lim_{\epsilon\to 0}\mathbb{E}\left[g(X^\epsilon(t,\omega))\right]&=&\mathbb{E}\left[g(U(t,\omega))\right].
\end{eqnarray} \end{enumerate} \end{theorem} \begin{proof} First let us show that the process \begin{eqnarray} \frac{d}{dt}Y^\epsilon_i(t,\omega)&=&b_i(Y^\epsilon)-\frac{1}{2}\beta^2\left [\nabla_{y_i} \log f_{U^\epsilon}(y;t) \right ]_{y=Y^\epsilon(t,\omega)} \label{eq:ode-weak} \end{eqnarray} with the initial condition $U_0^\epsilon$ has a unique solution with bounded variance for all $t>0$. Let $F(t,Y^\epsilon)$ denote the right hand side of Eq.~\eqref{eq:ode-weak}. For the existence-uniqueness proof of a bounded variance solution, since $f_{U^\epsilon}(.;t)\in K_1$ according to \cref{lemma:space} and $b(.)\in C^\infty_b(\mathbb{R}^n)$, we get $F(t,.)\in C_l^\infty(\mathbb{R}^n)$. Therefore the existence-uniqueness follows directly from the Picard iterations and Gronwall's inequality (see \cite{agarwal} for details). Furthermore, the boundedness of the variance comes from the Chebyshev lemma (see Theorem 1.8 in \cite{khasminskii}). \\ \ \\ Now let us turn to the measure induced by $Y^\epsilon$, i.e. $\mu_{Y^\epsilon}$. Let us define the map $\sigma_t(U_0^\epsilon(\omega))=Y^\epsilon (t,\omega)$ and hence $\mu_{Y^\epsilon}(\sigma_t(u);t)=\mu_{U^\epsilon_0}(u)$. Therefore $\mu_{Y^\epsilon}$ fulfills the following transport equation \begin{eqnarray} \label{eq:trans2} \frac{\partial }{\partial t}\mu_{Y^\epsilon}(y;t)&=&-{F_i(t,y)}\frac{\partial }{\partial y_i} \mu_{Y^\epsilon}(y;t). \end{eqnarray} Note that since Eq.~\eqref{eq:ode-weak} has a unique solution, so does Eq.~\eqref{eq:trans2}. However, due to \cref{cor:trans}, the measure induced by $U^\epsilon$ also fulfills Eq.~\eqref{eq:trans2}. Therefore $\mu_{Y^\epsilon}(y;t)=\mu_{U^\epsilon}(y;t)$, resulting in the equivalence of Eqs.~\eqref{eq:ode-weak} and \eqref{eq:ito-weakp}. Furthermore \begin{eqnarray} \mathbb{E}[g(X^\epsilon(\omega,t))]&=&\mathbb{E}[g(U^\epsilon(\omega,t))].
\end{eqnarray} But since the It\^{o} process is Feller continuous \cite{oksendal}, we have \begin{eqnarray} \lim_{\epsilon\to 0}\mathbb{E}[g(U^\epsilon(\omega,t))]=\mathbb{E}[g(U(\omega,t))], \end{eqnarray} and hence \begin{eqnarray} \lim_{\epsilon \to 0}\mathbb{E}[g(X^\epsilon(\omega,t))]&=&\mathbb{E}[g(U(\omega,t))]. \end{eqnarray} \end{proof} \noindent To summarize, let $U^\epsilon$ and $U$ be solutions of the It\^{o} process subject to the initial conditions $U^\epsilon_0$ and $U_0$, respectively. As a consequence of the regularization and the introduced transformation, we can approximate the statistics of the true solution $U$ by statistics of $U^\epsilon$ through $\mathbb{E} [g(U^\epsilon(\omega,t))]=\mathbb{E}[g(X^\epsilon(\omega,t))]$. Moreover, due to the well-posedness of Eq.~\eqref{eq:ito-main}, we obtain a mean square error \begin{eqnarray} \mathbb{E}\left[(U(\omega,t)-U^\epsilon(\omega,t))^2\right] < C(t) \epsilon^2 \end{eqnarray} bounded by $C(t)\epsilon^2$, with the constant $C(t)$ independent of $\epsilon$. Therefore the regularization costs us an error of $\mathcal{O}(\epsilon^2)$ in the mean square sense. \\ \ \\ \section{Chaos Expansion} \label{sec:chaos} \noindent The computational advantage of the gradient formulation Eq.~\eqref{eq:weak-ito} over the original It\^{o} process Eq.~\eqref{eq:ito-main} can be exploited through its chaos expansion. While the dimension of the space with respect to which the Brownian path is measurable increases in time, the gradient transformation only propagates the randomness originating from the initial condition. Therefore the resulting logarithmic gradient transformation behaves like an ODE with an uncertain initial condition. \\ \ \\
%As a result, the dimension of its chaos expansion remains constant in time. \\ \ \\
Let us consider an initial condition $X_0(\omega):\Omega \to \mathbb{R}^n$ with a probability density $f_{X_0}(x)=\mathcal{M}_{\epsilon}(|x-U_0|)$, where $|\epsilon|>0$ and $U_0\in \mathbb{R}^n$.
In the following, we present the corresponding Hermite chaos expansion of the process \eqref{eq:weak-ito} for $X(\omega,t):\Omega \times \mathbb{R}^+\to \mathbb{R}^n$ subject to $X_0$. For more details on the Hermite chaos, and in general on polynomial chaos expansions, see \cite{karniadakis2}. The expansion is performed on the map $M(\xi(\omega),t)=X(\omega,t)$, where $\xi\in \mathbb{R}^{n}$ is a normally distributed random variable; hence \begin{eqnarray} | \det \nabla_{q} M | f_{X}(M;t)&=&f_\Xi (q), \label{eq:consist} \end{eqnarray} where $f_\Xi(q)=\mathcal{M}_1(|q|)$ and $q \in \mathbb{R}^{n}$. In practice, Eq.~\eqref{eq:consist} is only employed to find the initial condition of $M$ (in our case, with $X_0$ being Gaussian distributed, the initial map is trivial); afterwards simply the coefficients of the expanded $M$ are propagated.\\ \ \\ The map evolves according to $X$ and thus \begin{eqnarray} \frac{d}{dt}{M}_i(\xi(\omega),t)&=&\overbrace{b_i(M)-\frac{1}{2}\beta^2 \left[\nabla_{x_i}\log f_{{X}}(x;t) \right]_{x=M}}^{F_i(t,M)}. \label{eq:weak3} \end{eqnarray} Since $\mathbb{E}[M^2]<\infty$, we conclude $M\in L^2(d\mu_\Xi)$, where $L^2(d\mu_\Xi)$ is the space of square integrable functions with the weight $d\mu_\Xi(q)=f_\Xi(q) dq$. Furthermore, note that since $b(.)$ and the Fisher information are bounded, we have $F(t,.)\in L^2(d\mu_\Xi)$. Therefore $M$ admits a Hermite expansion \cite{sansoe} \begin{eqnarray} {M}_{i}(\xi,t)&=&\lim_{p\to \infty}\sum_{\alpha \in \mathcal{J}_{n}^p} m_{i,\alpha}(t) {H}_{\alpha}(\xi) \label{eq:exp-init} \end{eqnarray} for each component $i\in \{1,...,n\}$, where ${H}_\alpha$ and $\mathcal{J}$ are defined in \eqref{eq:def-wick} and \eqref{eq:multi}, respectively.
The coefficients follow
\begin{eqnarray}
m_{i,\alpha}(t)&=&\left \langle M_{i} ,{H}_\alpha \right \rangle_{\mu_{\Xi}},
\label{eq:proj}
\end{eqnarray}
with the inner product defined based on the Gaussian weight
\begin{eqnarray}
\langle h,g \rangle_{\mu_{\Xi}}&=&\int_{\mathbb{R}^{n}}h(q)g(q)f_{\Xi}(q)d q.
\end{eqnarray}
Therefore
\begin{eqnarray}
\frac{d m_{i,\alpha}}{dt}&=&\langle b_i,{H}_\alpha\rangle_{\mu_\Xi}-\frac{1}{2}\beta^2\int_{\mathbb{R}^{n}}{{H}_\alpha}(\xi)\left(\nabla_{x_i} \log f_{X}(x;t)\right)_{x=M}d\mu_\Xi \nonumber \\
&=&\langle b_i,{H}_\alpha \rangle _{\mu_\Xi}+\frac{1}{2}\beta^2\bigg \langle \left(\frac{\partial {M}_l}{\partial \xi_k}\right)^{-1},\frac{\partial {H}_\alpha}{\partial \xi_l}\bigg \rangle_{\mu_\Xi} ,\label{eq:ODE}
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial {{M}}_i}{\partial \xi_k}\left(\frac{\partial {{M}}_j}{\partial \xi_k}\right)^{-1}&=&\delta_{ij},
\end{eqnarray}
with $\delta$ being the Kronecker delta. Note that in deriving the last step of Eq.~\eqref{eq:ODE}, the fact that $f_{\Xi}$ vanishes at the boundaries together with Eq.~\eqref{eq:consist} has been used. Moreover, since $f_{X}, f_{\Xi} \in K_1$, the inverse of $\nabla_\xi {M} $ exists, which can be seen again from Eq.~\eqref{eq:consist}. It is important to emphasize that the evolution of the coefficients $m_{i,\alpha}$ does not directly depend on $f_X$. By taking advantage of the measure transform \eqref{eq:consist}, no explicit knowledge of the density $f_X$ is required. \\ \ \\
In practice, besides the error associated with the regularization of the initial condition, three types of numerical errors should be controlled in order to compute the evolution of the coefficients $m_{i,\alpha}$. The first type comes from the truncation of the Hermite expansion \eqref{eq:exp-init}. The second is due to the inner products $\langle .,. \rangle_{\mu_\Xi} $, where the Hermite--Gauss quadrature can be employed.
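As a one-dimensional sketch of how the coefficient ODE above can be propagated numerically, the following Python fragment (not part of the manuscript) discretizes the two inner products with Gauss-Hermite quadrature, rescaled to the standard-normal weight, and advances the coefficients with a plain Euler step. The Ornstein-Uhlenbeck drift b(x) = -x, the truncation order, and all function names are our own illustrative assumptions.

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial.hermite import hermgauss
from numpy.polynomial.hermite_e import hermeval, hermeder

def propagate_chaos(m, b, beta, dt, nsteps, npts=40):
    """Euler-step the Hermite-chaos coefficients m_k of the 1-D map M(xi, t).

    Implements dm_k/dt = <b(M), He_k> + (beta^2/2) <1/M', He_k'>,
    with all inner products taken against the standard-normal weight.
    """
    # Gauss-Hermite nodes/weights rescaled from exp(-x^2) to the N(0,1) measure
    x, w = hermgauss(npts)
    q, w = sqrt(2.0) * x, w / sqrt(pi)
    p = len(m) - 1
    I = np.eye(p + 1)
    H = np.stack([hermeval(q, I[k]) for k in range(p + 1)])             # He_k(q)
    Hd = np.stack([hermeval(q, hermeder(I[k])) for k in range(p + 1)])  # He_k'(q)
    norms = np.array([factorial(k) for k in range(p + 1)], float)       # <He_k^2>
    m = np.asarray(m, float).copy()
    for _ in range(nsteps):
        M = hermeval(q, m)             # map evaluated at the quadrature nodes
        dM = hermeval(q, hermeder(m))  # dM/dq, assumed bounded away from zero
        dm = ((b(M) * H + 0.5 * beta**2 * Hd / dM) * w).sum(axis=1) / norms
        m += dt * dm
    return m

# Ornstein-Uhlenbeck drift b(x) = -x: the density stays Gaussian, so the map
# stays linear and the first coefficient tends to the stationary std beta/sqrt(2).
m = propagate_chaos([2.0, 0.05, 0.0, 0.0], b=lambda x: -x,
                    beta=1.0, dt=1e-3, nsteps=10000)
```

For this linear drift the higher-order coefficients remain (numerically) zero, consistent with the claim that the transformed process behaves like an ODE with an uncertain initial condition.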
The third arises from the time integration, which can be performed, e.g., by a Runge--Kutta method, and should likewise be curbed.
\section{Conclusion}
This study proposed a transformation of the diffusion arising from the white noise into a transport induced by the logarithmic gradient of the probability density. The well-posedness of such a transformation for an It\^{o} process with strong regularity assumptions was shown. As a result, the transformed It\^{o} process behaves similarly to an ODE with an uncertain initial condition. Therefore the process remains measurable with respect to its initial condition, resulting in interesting computational advantages. The relevance of the transformation was discussed by employing the chaos expansion technique. In follow-up studies, besides analyzing the computational performance of the resulting chaos expansion, the author will investigate possible generalizations of the transformation for a broader class of stochastic processes driven by the white noise.
\end{document}
Journal of Petroleum Exploration and Production Technology, December 2018, Volume 8, Issue 4, pp 1009–1015

A proposed solution to the determination of water saturation: using a modelled equation

Jethro Sam-Marcus, Efeoghene Enaworu, Oluwatosin J. Rotimi, Ifeanyi Seteyeobot

Review Paper - Exploration Geology. First Online: 13 March 2018

Reservoir characterization is an important phase in oil and gas field development, which takes place during the appraisal phase of either a green field or a brown field, upon which further development options are considered. Water saturation is a very important parameter in the general description of the reservoir as well as in equity determination and dynamic modelling. Numerous equations have been developed to determine water saturation, but the calculated water saturation values have been inconsistent with the saturation values determined from core analysis. This is generally due to their inability to account for the varying distribution of shale in the reservoir and the frequent incorrectness of their underlying assumptions. The major aim of this research is to develop a model which can be used to determine water saturation values using data from well logs; also, to compare the developed model with other existing models used in the oil and gas industry, using data from core analysis and well logs as the input data; and finally, to discuss the results of the comparison, using the core-derived saturation values as the benchmark. The model is based on a parallel resistivity model, which rests on the assumption that the conductivity of the sandstone term and that of the shale term exist in parallel in the shaly-sand reservoir. The shale term of the model is based on the assumption that the clay-bound electrons do not move in the same conductivity path as the sandstone electrons. The shale conductivity term is based on the bound water saturation and the bound water resistivity.
The modelled equation was compared in two scenarios using well log data and core data from two different reservoirs, and the model showed consistency in predicting the average water saturation in both reservoirs. The results of the comparison were positive for the modelled equation, as it gave coherent results in both comparison scenarios and matched reasonably the average water saturation of the selected reservoirs. This developed model can serve as an accurate means of determining water saturation in reservoirs, especially for reservoirs with characteristics similar to those of the reservoirs selected in this research.

Keywords: Reservoir characterization · Water saturation · Bound water saturation · Bound water resistivity · Volume of shale · Core analysis

[Nomenclature (symbols largely lost in extraction): resistivity of the reservoir rock fully saturated with water; resistivity of the formation water; tortuosity factor; cementation exponent; ϕ; formation factor; formation factor for shaly sands; effective concentration of clay counterions; equivalent conductance of clay counterions; volume of shale; shale cementation exponent; Φt, total porosity; Φtsh, shale porosity; shale formation resistivity factor; bound water resistivity; bound water saturation]

Reservoir characterization has been a very important tool in hydrocarbon exploration. It has enabled petroleum engineers to have a better understanding of the reservoir and its properties. Due to this fact, various models have been built to represent the reservoir and predict how it will behave under various conditions. Water saturation is an important parameter used in reservoir modelling, as it gives an idea of the percentage of the pore spaces occupied by water and oil or gas, and hence the total amount of hydrocarbon present in the pore spaces of the reservoir. The values of water saturation calculated for a particular reservoir are used as inputs to static and dynamic models, and this in turn is used to determine the initial oil in place of a reservoir.
The calculated values of oil in place form the basis of future production forecasts and the determination of the economic viability of the discovered reservoir. Therefore, high accuracy is needed in the determination of water saturation, as it determines the oil in place and the estimated reserves. Resistivity logs have been consistently used to determine the saturation of water in reservoirs by making use of Archie's equation (Archie 1942), which relates water saturation to the true formation resistivity, the formation porosity and the formation water resistivity. The challenge arises due to the presence of shale in the reservoir, which is a conductive medium and hence violates the original assumptions of Archie's equation, which was derived for a clean sandstone reservoir (Archie 1942). The presence of shale causes a disparity in the reading of the total resistivity of the reservoir and brings about an overshoot in the water saturation predicted by Archie's equation (Archie 1942). This disparity is caused by the additional conductance path provided by shale, owing to its conductive nature. Because of the economic importance of developing a model which determines water saturation effectively with the highest accuracy, various models have been developed to take cognizance of the effect of shale on the overall reservoir resistivity as well as on the water saturation value determined for the reservoir. Models such as the Simandoux equation (Simandoux 1963), the dual-water model (Clavier et al. 1977), the Waxman–Smits equation (Waxman and Smits 1968), the Schlumberger equation (Schlumberger 1989) and the Indonesia model (Poupon and Leveaux 1971) were all built on the foundational idea presented by Archie (1942) in his original paper, by including a shale factor in the original Archie equation and hence presenting a simpler way to determine water saturation.
The simple equations run the risk of becoming too simplistic, yet it has been noted that these equations are comprehensive and can perform very well when correctly applied, depending on the previously determined properties of the reservoir. There are also more complex equations which are better functional representations but contain values which are difficult to estimate accurately, and this introduces considerable error into the calculated values of water saturation (Doveton 1986). Core analysis is regarded as the gold standard of reservoir characterization, as it brings a representative sample of the reservoir to the laboratory, where various properties can be determined. Core analysis is, however, very expensive and often inaccurate in representing the entire reservoir, as only sections of the reservoir are taken to the laboratory and analysed (Odizu-Abangwu et al. 2010). Hence, more accurate equations need to be developed for the various petroleum-producing regions, as the geology of different regions is not the same. This paper seeks to proffer a solution to the above-stated challenges by proposing a model as a possible solution to the shaly-sand problem. Since the advent of well logging, resistivity logs have been constantly used to determine the value of water saturation, with the Archie equation as the primary equation. Due to the expensive nature of core analysis, the "log-only" option of determining water saturation has been seen as economical and truly desired (Doveton 1986). The presence of shale, which is made up predominantly of clay minerals and silts, exposes a major flaw in the Archie equation (Eq. 1), since Archie assumed the reservoir was made up of purely sand and the only conductive medium was the reservoir water that saturated the reservoir rock (Archie 1942).
Based on this problem, shaly-sand equations have been developed to account for the extra conductivity added to the total reservoir conductivity, thereby accounting for the shale effect and accurately determining the value of water saturation in the reservoir.
$$S_{\text{w}} = \sqrt[n]{{\frac{{aR_{\text{w}} }}{{\emptyset^{m} R_{\text{t}} }}}}.$$
Some shaly-sand models and their limitations
Shaly-sand models are considered and reviewed in this research as the premise from which Archie's model was modified. The shaly-sand models considered are:
- The Simandoux equation
- The Schlumberger modification of the Simandoux equation
- The Indonesia equation
- The Waxman–Smits equation
- The dual-water model
Simandoux equation
Prior to the development of the Simandoux equation, the relationship between the true resistivity of the reservoir and the value of water saturation was represented by Eq. 2:
$$\frac{1}{{R_{\text{t}} }} = \alpha S_{\text{w}} + \frac{\beta }{{R_{\text{w}} }}S_{\text{w}}^{2}$$
where Rt is the true resistivity of the formation; Rw, formation water resistivity; Sw, water saturation; α, shale term; β, sandstone term. Simandoux, in his experiment in 1963, studied "homogeneous mixtures of sorted sand and natural clay in various proportions", in order to study the volumetric effects of reducing clay volumes on the conductivity of the rock matrix and the overall saturation of water in the reservoir. Hence, the Simandoux equation was presented as
$$\frac{1}{{R_{\text{t}} }} = \frac{{S_{\text{w}}^{2} }}{{FR_{\text{w}} }} + \frac{{V_{\text{sh}} \varepsilon }}{{R_{\text{sh}} }} .$$
with its shale term dependent on Vsh (volume of shale) and Rsh (resistivity of shale). The Simandoux equation was later modified by Bardon and Pied (1969) by including water saturation in the shale term of the original Simandoux equation, which turned Eq. 3 into Eq. 4.
$$\frac{1}{{R_{\text{t}} }} = \frac{{S_{\text{w}}^{2} }}{{FR_{\text{w}} }} + \frac{{V_{\text{sh}} *S_{\text{w}} }}{{R_{\text{sh}} }}$$
Some of the notable shortcomings of the Simandoux equation were as follows (Herrick and Kennedy 2009):
- Simandoux made measurements on only four synthetic samples using one type of clay (montmorillonite), and the samples used had a constant value of porosity.
- Other researchers have demonstrated that the shale effect \((\alpha = \frac{{V_{\text{sh}} }}{{R_{\text{sh}} }})\) does not apply to disseminated shale conditions.
- The Simandoux model leads to optimistic results when the porosity is less than 20%, and because of this it cannot be relied on in low-porosity situations.
- The first term of the Simandoux equation does not show a volumetric balance between the sandstone volume and the clay volume, and the lack of a shale formation factor in the clay term makes the correction for the clay effect by the Simandoux equation too large, which reduces the calculated water saturation. This problem alone could lead to an overestimation of the quantity of hydrocarbons in place.

Schlumberger modified the general Simandoux equation by adding 1 − Vsh to the denominator to account for the shaly nature inherent in the clean sands. Hence, Eq. 5 is
$$\frac{1}{{R_{\text{t}} }} = \frac{{S_{\text{w}}^{2} }}{{F\left( {1 - V_{\text{sh}} } \right)R_{\text{w}} }} + \frac{{V_{\text{sh}} }}{{R_{\text{sh}} }}S_{\text{w}}$$
The argument was that the original Simandoux equation completely discarded the possibility of having shale within the clean sandstone layers. This modification was made by Schlumberger in their paper as a crude way to calibrate their equipment, without any actual physical basis for the addition. It was their means of correcting the errors in the Simandoux equation when the formation resistivity factor of clay was not accounted for in the shale term (Schlumberger 1989).
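For the common saturation exponent n = 2, both the modified Simandoux equation (Eq. 4) and the Schlumberger variant (Eq. 5) are quadratic in Sw and can be solved in closed form. The sketch below is illustrative only; the parameter values and function names are ours, not the paper's.

```python
import math

def sw_quadratic(A, B, r_t):
    """Positive root of A*Sw^2 + B*Sw - 1/Rt = 0, clipped to [0, 1]."""
    sw = (-B + math.sqrt(B * B + 4.0 * A / r_t)) / (2.0 * A)
    return min(max(sw, 0.0), 1.0)

def sw_simandoux(r_t, r_w, r_sh, phi, v_sh, a=1.0, m=2.0):
    """Modified Simandoux (Eq. 4): 1/Rt = Sw^2/(F*Rw) + Vsh*Sw/Rsh."""
    F = a / phi ** m                          # Archie formation factor
    return sw_quadratic(1.0 / (F * r_w), v_sh / r_sh, r_t)

def sw_schlumberger(r_t, r_w, r_sh, phi, v_sh, a=1.0, m=2.0):
    """Schlumberger variant (Eq. 5): sand term scaled by (1 - Vsh)."""
    F = a / phi ** m
    return sw_quadratic(1.0 / (F * (1.0 - v_sh) * r_w), v_sh / r_sh, r_t)
```

With Vsh = 0 both expressions collapse to the Archie solution Sw = sqrt(F·Rw/Rt), which is a convenient sanity check.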
The first use of the 1 − Vsh term was by Poupon et al. (1954); it was based on a volumetric balance between the volume of shale and the volume of clay present in the reservoir. It was used to determine the water saturation in thin-bedded sands and shale. It basically assumes that the conductivity of a particular medium is based on its size and the conductive material within its pore spaces (Eq. 6). It is represented by this equation:
$$\frac{1}{{R_{\text{t}} }} = \frac{{\left( {1 - V_{\text{sh}} } \right)S_{\text{w}}^{n} }}{{FR_{\text{w}} }} + \frac{{V_{\text{sh}} }}{{R_{\text{sh}} }}$$
Indonesia equation
Poupon and Leveaux (1971) derived the Indonesia equation to account for the high amount of shale and the fresh-water formations common in Indonesian reservoirs. The equation was developed by using computer-made cross-plots to determine the relationship between the value of water saturation and the true resistivity of the formation. The range of shale volumes recorded for such formations was 30%–70% shale content:
$$\frac{1}{{\sqrt {R_{\text{t}} } }} = \left[ {\frac{{V_{\text{clay}}^{d} }}{{\sqrt {R_{\text{clay}} } }} + \frac{{\emptyset^{m/2} }}{{\sqrt {aR_{\text{w}} } }}} \right]S_{\text{w}}^{n/2}$$
where Vclay is the volume of shale; Rt, formation true resistivity; Rw, formation water resistivity; a, tortuosity; ϕ, porosity; Sw, water saturation, and
$$d = 1 - \frac{{V_{\text{sh}} }}{2}$$
Waxman–Smits equation
Waxman and Smits proposed a model based on the understanding that "one" water (the saturating brine) was present in the reservoir (Waxman and Smits 1968). The Waxman–Smits model is based on laboratory measurements of resistivity, porosity and saturation of real rocks, and because the model is backed up heavily by laboratory data, it was generally accepted (Eq. 8).
$$\frac{1}{{R_{\text{t}} }} = \frac{{S_{\text{w}}^{2} }}{{F^{*} R_{\text{w}} }} + \frac{{BQ_{\text{v}} S_{\text{w}} }}{{F^{*} }}$$
where \(F^{*}\) is the formation factor for shaly sands; Qv, effective concentration of clay counterions; B, equivalent conductance of clay counterions. The major assumptions of the Waxman–Smits model about clay formation and its properties are as follows:
- Clay surface conductivity is assumed to share a directly proportional relationship with the factor Qv (defined as the milli-equivalents of exchangeable clay counterions per unit volume of pore space). The constant of proportionality in this relationship was referred to as B, defined as the equivalent conductance of the clay counterions.
- The Waxman–Smits equation assumes that "the electric current is transported by the clay counterions that travels along the same tortuous paths as the current attributed to ions in the pore water" (Waxman and Smits 1968).
The second assumption of the Waxman–Smits model is the major reason for the F* term appearing in both the sandstone resistivity term and the shale resistivity term. Hence, the shale term and the sandstone term are seen to have the same formation resistivity factor (Herrick and Kennedy 2009). This model served as the premise of the widely used dual-water model. The Waxman–Smits equation is often used as a standard against which other methods are compared, due to its strong experimental backing, but the determination of CEC (cation exchange capacity) is a time-consuming experiment, and this is the major limitation of the Waxman–Smits model.
Dual-water model
In the dual-water model, it is proposed that the impact of clay minerals on the resistivity of reservoir rock is caused by the presence of two waters in the reservoir: the free water within the pore spaces of the reservoir rock and the bound water within the clay matrix (Clavier et al. 1977).
The dual-water model was developed with the basic aim of accounting for the conduction that occurs within the volume at the surface of the clay mineral. The idea was to account for the conductivity that occurs near and within the double layer and the conductivity that occurs in the layer free from the effects of clay. Though the dual-water model was developed with the aim of modifying the Waxman–Smits model for water saturation, it contains within itself the premise that the conduction geometry of the free water and the clay counterions is the same (Herrick and Kennedy 2009). The dual-water model is represented by Eq. (9):
$$\frac{1}{{R_{\text{t}} }} = \frac{{S_{{{\text{w}}_{\text{t}} }}^{n} }}{{F_{\text{o}} }}\left[ {\frac{1}{{R_{\text{w}} }} + \frac{{V_{\text{Q}} Q_{\text{V}} }}{{S_{{{\text{w}}_{\text{T}} }} }}\left( {\frac{1}{{R_{\text{cw}} }} - \frac{1}{{R_{\text{w}} }}} \right)} \right]$$
where Rcw is the resistivity of the bound water and Rw the resistivity of the free water. The model developed in this paper is based on a parallel conductivity model, which states that the total conductivity of the formation is a combination of the conductivity of the formation water in parallel with the conductivity contribution of the clay term. Hence, the formation water conducts in series with the clay itself and the clay-bound water. This model also refutes the assumption of the Waxman–Smits and dual-water models, which hold that the formation water and the clay counterions all flow through the same tortuous path and hence have the same formation resistivity factor.
The modelled equation is represented in terms of resistivity mathematically as:
$$\frac{1}{{R_{\text{t}} }} = \frac{1}{{R_{\text{ss}} }} + \frac{1}{{R_{\text{sh}} }}$$
where Rt is the true resistivity; Rss, the sandstone resistivity contribution; Rsh, the shale resistivity contribution, with
$$\frac{1}{{R_{\text{ss}} }} = \frac{{S_{\text{w}}^{n} }}{{F*R_{\text{w}} }}$$
The resistivity in the sandstone term is the same as in Archie's resistivity equation, with the major contributor to the conductivity of the sandstone reservoir being the formation water resistivity. The shale bound water saturation itself is modelled as a part of the total water saturation and is seen as the linking term between the "Vsh models" and the "CEC" (cation exchange capacity) models. The bound water saturation is given by:
$$S_{\text{b}} = \frac{{V_{\text{sh}} \emptyset_{\text{tsh}} }}{{\emptyset_{\text{t}} }}.$$
The bound water resistivity is given by:
$$R_{\text{b}} = R_{\text{sh}} \emptyset_{\text{tsh}}^{\text{msh}}$$
where msh is the shale cementation exponent; Φt, total porosity; Φtsh, shale porosity. The shale formation factor then becomes:
$${\text{F}}_{\text{sh}} = \frac{1}{{\emptyset_{\text{tsh}}^{\text{msh}} }}$$
Replacing the Vsh term with Sb and the Rsh term with Rb, and including the Fsh term (shale formation factor), Eq. (3) then becomes Eq. (15):
$$\frac{1}{{R_{\text{t}} }} = \frac{{S_{\text{w}}^{n} }}{{FR_{\text{w}} }} + \frac{{S_{\text{b}} S_{\text{w}}^{{\left( {n - 1} \right)}} }}{{F_{\text{sh}} R_{\text{b}} }}.$$
This model was developed based on the following assumptions:
- Parallel conductivity exists between the clean sand and the shale present in the clean sand.
- The sum of all parallel conductivities is equal to the total reservoir conductivity.
- The shale term and the sandstone term do not have the same formation resistivity factor.
In the cases of thin-bedded shale and sandstone reservoirs, a volumetric balance exists in the reservoir such that the volumetric concentration of the sands summed with the volumetric concentration of the clay is equal to unity. The total shale resistivity is a function of the clay-bound water saturation, its resistivity and the shale formation factor. The model was tested using a reservoir that was divided into two zones. The first reservoir zone had high shale content, while the second zone mimicked the Archie clean sand, with very low volumes of shale. The selected reservoir properties used for the analysis are recorded in Table 1. Figure 1 shows the overall reservoir trend for the various values of water saturation calculated from each of the selected models. The trend between the core data water saturation and the results of the developed model is shown in Fig. 2.
[Table 1: Reservoir properties — Rw, Rsh, Φtsh]
[Fig. 1: Water saturation trend with varying values of volume of shale in the reservoir]
[Fig. 2: Trend comparison between modelled equations and the core water saturation]
To further analyse the results, a comparison between the values of average water saturation derived from each model was done, and the equation with a value of average water saturation that best matched the cored values of water saturation was selected based on the analysis. The analysis was broken down into two parts, the first being a comparison of the average water saturation values from each equation for the entire reservoir, as shown in Table 2, and the second being the calculation of the values of average water saturation from each equation for each reservoir zone, as presented in Tables 3 and 4.
[Table 2: Overall reservoir water saturation values — overall Sw, Simandoux, DWM]
[Table 3: Water saturation analysis for Zone 1]
It was observed that the average water saturation values from the modelled equation were very close to those of the already established equations.
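For completeness, the proposed model (Eq. 15 with the bound-water terms of Eqs. 12-14) can also be evaluated in closed form for n = 2. The sketch below is our illustration with assumed parameter values and function names, not the authors' actual workflow.

```python
import math

def sw_parallel_model(r_t, r_w, phi_t, v_sh, phi_tsh, r_sh,
                      a=1.0, m=2.0, m_sh=2.0):
    """Water saturation from the proposed model (Eq. 15) with n = 2.

    Bound-water terms per Eqs. 12-14; Eq. 15 then reduces to the
    quadratic A*Sw^2 + B*Sw - 1/Rt = 0.
    """
    F = a / phi_t ** m                 # Archie formation factor
    s_b = v_sh * phi_tsh / phi_t       # Eq. 12: bound-water saturation
    r_b = r_sh * phi_tsh ** m_sh       # Eq. 13: bound-water resistivity
    f_sh = 1.0 / phi_tsh ** m_sh       # Eq. 14: shale formation factor
    A = 1.0 / (F * r_w)                # sandstone term coefficient
    B = s_b / (f_sh * r_b)             # shale term coefficient
    sw = (-B + math.sqrt(B * B + 4.0 * A / r_t)) / (2.0 * A)
    return min(max(sw, 0.0), 1.0)
```

Note that algebraically Fsh·Rb simplifies to Rsh, so for n = 2 the model differs from the modified Simandoux (Eq. 4) through the bound-water saturation Sb taking the place of Vsh; with Vsh = 0 it again collapses to the Archie solution.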
The Archie equation had the least accuracy in both comparisons due to its lack of a shale term. The average water saturation of the modelled equation was close to the average water saturation from the cored data. A second comparison was done in order to test the consistency of the derived model. The reservoir volume of shale values ranged from 10% to 70%. Table 5 shows the reservoir properties used in the second comparison. On average, the values of water saturation calculated using the thickness-weighted average technique were consistent for the derived model, as presented in Table 6. The derived model is perceived as the most promising model, and this is due to its ability to account for the bound water saturation, the bound water resistivity and the conductive path of the bound water electrons; all of these were taken into consideration during the development of this model.
[Table 6: Average reservoir water saturation — Sw "core data", Sw "Schlumberger", Sw "Simandoux", Sw "dual-water model", Sw "Model"]
The major aim of this research was to propose a solution to the shaly-sand problem by developing a model which could accurately mirror the water saturation results from core analysis and yield average water saturation values close to those obtained from core analysis. A solution has been proposed: the model is a physical model whose mathematical relation is based on the relationship between the formation conductivity, the bound water saturation and the bound water resistivity. Its major assumption is that the clay-bound electrons and the sandstone electrons do not move through the same conductive path. The results from the model were coherent with the results of average water saturation in each comparison case. The conclusions can be summarized in three points:
- The derived model was consistent in both comparison cases.
- The derived model was the most consistent equation in both cases of water saturation comparison, with average water saturation values close to those from core analysis.
- Accounting for the bound water saturation and the bound water resistivity, together with the assumption that the clay-bound electrons and the sandstone electrons do not flow through the same conductive path, is one of the main reasons why the modelled equation was able to effectively mirror the average reservoir water saturation.
- The modelled equation was consistent in all the cases of comparison. It shows promise and should be further tested and applied to more oil fields.

The author is grateful to the management of Covenant University for permission to publish this paper.

References
Archie GE (1942) The electrical resistivity log as an aid in determining some reservoir characteristics. Trans AIME 146:54–62
Bardon Ch, Pied B (1969) Formation water saturation in shaly sands. In: SPWLA 10th annual logging symposium, 25–28 May, Houston, Texas
Clavier C, Coates G, Dumanoir J (1977) The theoretical and experimental bases for the "dual water" model for the interpretation of shaly sands. Society of Petroleum Engineers Paper No. 6859, p 10
Doveton JH (1986) Log analysis of subsurface geology: concepts and computer methods, 1st edn. Wiley, New York, p 273
Herrick D, Kennedy W (2009) On the quagmire of "shaly sand" saturation equations. In: Society of Petrophysicists and Well Log Analysts, The Woodlands, Texas, United States, pp 1–16
Odizu-Abangwu I, Suleman A, Nwosu C (2010) The impact of different shaly sand models on in place volumes and reservoir producibility in Niger Delta reservoirs. The dual water and the normalized Waxman–Smits saturation models. SPE 140627. Society of Petroleum Engineers, Tinapa-Calabar, Nigeria, p 6
Poupon A, Leveaux J (1971) Evaluation of water saturation in shaly formations.
In: SPWLA 12th annual logging symposium, pp 1–2
Poupon A, Loy ME, Tixier MP (1954) A contribution to electric log interpretation in shaly sands. Trans AIME 6(06):138–145
Schlumberger (1989) Log interpretation principles/applications. Schlumberger Educational Services, pp 8–14
Simandoux P (1963) Mesures diélectriques en milieu poreux, application à la mesure des saturations en eau, étude du comportement des massifs argileux [Dielectric measurements in porous media, applied to the measurement of water saturations and the behaviour of clayey formations]. Revue de l'Institut Français du Pétrole 18(Supplementary Issue):193
Waxman MH, Smits LJ (1968) Electrical conductivities in oil-bearing shaly sands. In: Society of Petroleum Engineers 42nd annual fall meeting, Houston, Texas, pp 107–122

Petroleum Engineering Department, Covenant University, Ota, Nigeria

Sam-Marcus, J., Enaworu, E., Rotimi, O.J. et al. J Petrol Explor Prod Technol (2018) 8: 1009. https://doi.org/10.1007/s13202-018-0453-4
Received 23 June 2017. Accepted 24 February 2018. First Online 13 March 2018.
Physical layer security transmission scheme based on artificial noise in cooperative SWIPT NOMA system
Yong Jin, Zhentao Hu, Dongdong Xie, Guodong Wu & Lin Zhou
Aiming at the high energy consumption and information security problems in the simultaneous wireless information and power transfer (SWIPT) multi-user wiretap network, we propose a user-aided cooperative non-orthogonal multiple access (NOMA) physical layer security transmission scheme to minimize the base station (BS) transmitted power in this paper. In this scheme, the user near the BS is adopted as a friendly relay to improve the performance of the user far from the BS. Energy harvesting (EH) technology based on SWIPT is employed at the near user to collect energy, which can be used at the cooperative stage. Since an eavesdropper in the downlink of a NOMA system may use successive interference cancellation (SIC) technology to obtain the secrecy information of the receiver, artificial noise (AN) is used at the BS to enhance the security performance of the secrecy information. Moreover, the semidefinite relaxation (SDR) method and the successive convex approximation (SCA) technique are combined to solve the resulting non-convex problem. Simulation results show that, in comparison with other methods, our method can effectively reduce the transmitted power of the BS under the constraint of a certain level of secrecy rate for the two users.
High-speed transmission rates of smart terminals in fifth-generation (5G) wireless communication systems require vast spectrum resources [1, 2]. Traditional orthogonal multiple access (OMA) approaches can hardly meet the needs of high speed, real-time operation and wide bandwidth in 5G. Researchers have proposed the NOMA technique, one of the key technologies of 5G, to provide higher spectral efficiency (SE) for multiple users. The key point of NOMA is non-orthogonal multiple access in the power domain.
Thus, multiple users can be served by the same resource block (time domain, frequency domain and code domain). However, due to the mutual interference of non-orthogonal signals, the receiver has to use the SIC technique to extract the desired information from the received signal [3,4,5]. For the purpose of improving the performance of NOMA, the idea of effectively combining NOMA with cooperative techniques has attracted great attention. Existing cooperative NOMA schemes can mainly be divided into two types: user-assisted and relay-assisted. For example, J. Men and J. Ge explored the outage probability of the cooperative NOMA relay network by deriving a closed-form expression [6]. A two-stage relay selection scheme was presented to study the outage performance of the cooperative NOMA system in [7,8,9]. Z. Ding proposed a half-duplex user-aided cooperative NOMA scheme to achieve the maximum diversity gain, and the simulation results indicate that the proposed cooperative NOMA scheme could improve the outage probability [10]. S. L. Talbot and B. Farhang-Boroujeny presented a cooperative beamforming NOMA scheme which performed intrabeam superposition coding of a multi-user signal at the transmitter and spatial filtering of interbeam interference followed by intrabeam SIC at the destination [11]. In addition, energy efficiency (EE) is another concern of 5G. Since radio frequency (RF) signals can carry information and transmit energy simultaneously, they can be used not only as an information vehicle but also for energy collection in the system. At present, a technology which can effectively extend the service life of energy-limited devices, called SWIPT, has been proposed [12]. Different from traditional EH technologies, such as solar and wind, SWIPT provides stable and controllable energy for wireless applications while transmitting the necessary information contained in the RF signal.
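The power-domain superposition and SIC decoding described above can be sketched for a two-user downlink as follows. The Shannon-rate formulas, the perfect-SIC assumption and all parameter names are standard textbook simplifications of ours, not taken from the cited works.

```python
import math

def noma_rates(p_tx, a_near, h_near, h_far, noise):
    """Achievable rates (bit/s/Hz) for a 2-user power-domain NOMA downlink.

    The far (weak) user gets power fraction (1 - a_near) and decodes its own
    signal treating the near user's superposed signal as noise; the near
    (strong) user first decodes and cancels the far user's signal (SIC),
    then decodes its own interference-free.
    """
    a_far = 1.0 - a_near
    g_near, g_far = abs(h_near) ** 2, abs(h_far) ** 2
    # far user: interference-limited by the near user's superposed signal
    r_far = math.log2(1.0 + a_far * p_tx * g_far /
                      (a_near * p_tx * g_far + noise))
    # near user: after perfect SIC only thermal noise remains
    r_near = math.log2(1.0 + a_near * p_tx * g_near / noise)
    return r_near, r_far
```

Shifting power toward the near user raises its rate but lowers the far user's, which is exactly the trade-off the cooperative schemes discussed here try to relax.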
For this reason, many studies on SWIPT have appeared in recent years. Early studies on SWIPT assumed that the entire signal can transmit both information and energy, exposing a fundamental trade-off between power and information transfer [13]. But this simultaneous transfer is unrealistic, since the EH operation performed in the RF domain destroys the information content. To practically achieve SWIPT, the received signal has to be split into two distinct parts, one for information decoding and the other for EH [14]. Specifically, Zhang proposed two practical receiver architectures to handle the technical limitations of existing circuit designs, namely the time-switching (TS) receiver and the power-splitting (PS) receiver. If TS is performed, the receiver switches in time between EH and information decoding. In this case, signal splitting is performed in the time domain, so the entire signal received in one time slot is used either for EH or for information decoding. The TS scheme allows a simple hardware implementation at the receiver but requires accurate time synchronization. The PS scheme has higher receiver complexity than TS and requires optimization of the PS factor [15]. However, Krikidis and Timotheou showed that PS achieves a lower outage probability and higher gain than TS for applications with delay constraints. This is intuitive, since the signal received in one time slot is used for both power transfer and information decoding when the PS protocol is performed [14]. To better match practical scenarios, in contrast to most existing works that apply an ideal linear EH model, Zhou and Chu adopted a practical nonlinear EH model to capture the nonlinear characteristics of EH circuits and designed resource allocation schemes for SWIPT networks [16].
Motivated by the requirements of 5G and the advantages of NOMA and SWIPT, using SWIPT to enhance the SE and EE of NOMA systems containing energy-limited devices is a current research hotspot [17]. Liu and Ding introduced SWIPT into the NOMA system to extend its self-sustaining time. This strategy employs the near users as EH relays to improve the communication quality of the far users. Outage rates for three relay selection schemes are evaluated in the single-input single-output (SISO) scenario. The analytical results demonstrate that SWIPT can effectively enhance the EE of conventional NOMA systems without jeopardizing their diversity gain [18]. Xu and Ding proposed a SWIPT NOMA cooperative transmission strategy that jointly designs the beamformer and power distribution in the multi-input single-output (MISO) scenario under a perfect channel state information (CSI) model [19]. Furthermore, a joint design of the PS ratio and beamformer was studied under an imperfect CSI model [20]. However, the security of cooperative transmission is not addressed in these works; thus, private information is easily intercepted and eavesdropped over the open wireless channel [21, 22]. To overcome this issue, the cooperative jamming (CJ) technique can be introduced, transmitting AN to degrade the quality of the wiretap link. Existing CJ techniques fall mainly into three types: source-aided, relay-aided and destination-aided CJ. Source-aided CJ combines the information-bearing signal with AN injected by the source [23]. To improve network security, Moradikia et al. employed AN injected by the idle transmitter in scenarios where untrusted relays and passive eavesdroppers exist [24]. In relay-aided CJ, relay nodes of the system are chosen to act as jammers [25]. Liu combined cooperative relaying with the CJ technique to interfere with the eavesdropper.
The research shows that their method can effectively improve the secrecy rate [26]. Zhou and Chu used the AN technique to improve the information security of users in a SWIPT NOMA system with an untrusted energy harvesting receiver [27]. Destination-aided CJ considers AN transmitted by the destination [28]. A novel cooperative secure unmanned-aerial-vehicle-assisted propagation protocol was presented by M. Tatar Mamaghani. To enhance physical-layer security and propagation reliability, they employed destination-assisted CJ as well as SWIPT at the UAV-mounted relay [29]. Furthermore, Hu explored the physical layer security of a SWIPT relay network under an imperfect CSI model. An algorithm is presented to optimize the secrecy rate of the SWIPT network under constraints on the relay forwarding power and the eavesdropper's SINR [30]. Cao and Wang studied the secure transmission of uplink NOMA with the aid of EH receivers. One of the EH receivers is selected as a friendly jammer that uses the energy harvested from RF signals to transmit AN to interfere with the eavesdropper [31]. However, a collaborative secrecy scheme combining SWIPT and NOMA is not involved in their work. This motivates us to use an AN-aided cooperative strategy to enhance the physical-layer security of the MISO SWIPT NOMA system. The main contributions of this paper are summarized as follows. We propose a source-aided CJ SWIPT NOMA strategy, where the near user serves as an EH relay to help the far user improve its secrecy rate. By applying the PS protocol, the near user can simultaneously receive information and harvest energy for the forwarding stage. In addition, multiple antennas and AN-aided techniques are exploited to protect the private information. The above scheme can be formulated as the problem of minimizing the transmitted power of the BS subject to the secrecy-rate requirement of each user.
To tackle this problem, we need to jointly optimize the beamforming vectors, the AN covariance matrix and the PS ratio. Unfortunately, this problem is NP-hard, so a variable-slack technique is combined with the SCA method to obtain a suboptimal solution of the original problem. The simulation results verify that the proposed scheme not only guarantees information security but also reduces the BS's energy consumption. The paper is organized as follows. In Sect. 2, we first introduce the system model and the problem formulation of the cooperative SWIPT NOMA in the MISO system. Next, an SCA-based iterative algorithm is proposed to solve the joint AN-aided beamforming design and power splitting control problem. Then, we present numerical results on the performance of different schemes in Sect. 3. Finally, we conclude the paper in Sect. 4. Notations: Boldface capital letters and boldface lowercase letters denote matrices and vectors, respectively. \(\mathbb {C}\) represents the complex domain. \(\mathbb {E}|\cdot |\) denotes the expectation operator. The superscripts \((\cdot )^{T}\) and \((\cdot )^{H}\) denote the transpose and (Hermitian) conjugate transpose, respectively. \({{\mathrm{Tr}}(\cdot )}\) represents the trace of a matrix. \(||\cdot ||\) denotes the magnitude of a complex number. \(\mathcal {CN}(\mathbf {0},\mathbf {X})\) denotes the circularly symmetric complex Gaussian distribution with mean vector \(\mathbf {0}\) and covariance matrix \(\mathbf {X}\). Methods/experimental System model and problem formulation Consider the downlink of a MISO system as depicted in Fig. 1. The BS is equipped with \({N}_t\) antennas. Each of the two users and the passive eavesdropper is equipped with a single antenna. For simplicity, suppose the eavesdropper and user 2 have no direct link, and that the eavesdropper lies outside the security area centered on user 2, the user near the BS (an eavesdropper inside the security area would be detected by user 2).
Similar to reference [32], we assume that handshaking mechanisms are introduced in the medium access control (MAC) protocol of user 1 and user 2 to inhibit interception within the security area centered on user 2. Specifically, user 1 and user 2 can employ a particular MAC protocol to exchange data, while the eavesdropper, lacking this MAC protocol, can only intercept data transmitted by the BS. Assume that the channel qualities of user 2 and the eavesdropper are both better than that of user 1. For example, consider an indoor sensor communication scenario where user 2 and the eavesdropper are closer to the BS than user 1. The cooperative SWIPT NOMA transmission scheme is divided into two phases. In the first phase, the eavesdropper intercepts the transmitted signal from the BS, and user 1 receives the transmitted signal from the BS while user 2 performs SWIPT based on the PS protocol. The PS protocol achieves SWIPT by splitting the radio frequency signal received at user 2 into two streams of different power levels using a PS factor: one stream is converted to baseband for information decoding, and the other is sent to the rectenna circuit for EH. Assume that the energy harvested at user 2 is used only for information forwarding in the second stage, while the energy for maintaining circuitry, signal processing, etc., is neglected. When the harvested energy exceeds the energy needed for information forwarding at user 2, an energy buffer at the secondary transmitter stores the excess energy. In the second phase, user 2 uses the energy harvested in phase 1 to forward the message received in phase 1 to user 1, while user 1 employs the maximal-ratio combining (MRC) criterion to accumulate and decode the messages received in the two phases. Since the eavesdropper lies outside the security area centered on user 2, it cannot receive the signal transmitted by user 2.
Details of the process are presented next. In the first transmission phase, to improve the security of the BS's transmitted signal and reduce the risk of information leakage, an AN vector \(\mathbf {v}\in \mathbb {C}^{{N}_{t}}\) is added to the transmitted signal. Therefore, the transmitted signal from the BS is \(\mathbf {s}=\mathbf {w}_{1}{s}_{1}+\mathbf {w}_{2}{s}_{2}+\mathbf {v}\), where \({s}_{1},{s}_{2}\in \mathbb {C}\) are the information-bearing messages for user 1 and user 2, respectively, and \(\mathbf {w}_{1},\mathbf {w}_{2}\in \mathbb {C}^{{N}_{t}}\) are the corresponding transmit beamformers. We assume that the power of each transmitted symbol is normalized, i.e., \(\mathbb {E}|{s}_{1}|^{2}=\mathbb {E}|{s}_{2}|^{2}=1\), and that the AN vector \(\mathbf {v}\sim \mathcal {CN}(\mathbf {0},\mathbf {S})\), where \(\mathbf {S}\) is the covariance matrix of the AN to be designed. Then, the signal received at user 1 is given by $$\begin{aligned} y_{1}^{(1)} = \widetilde{\mathbf {h}}_{1}^{H} \left( \mathbf {w}_{1}{s}_{1}+\mathbf {w}_{2}{s}_{2}+\mathbf {v}\right) + {n}_{1}^{(1)}, \end{aligned}$$ where \(\widetilde{\mathbf {h}}_{1}^{H}\in \mathbb {C}^{{N}_{t}}\) is the channel impulse response vector between the BS and user 1, and \({n}_{1}^{(1)}\sim \mathcal {CN}(0,\sigma _{1}^{2})\) represents the additive white Gaussian noise (AWGN) at user 1.
Then, the signal-to-interference-plus-noise ratio (SINR) received by user 1 for \({s}_{1}\) can be expressed as $$\begin{aligned} \mathrm{SIN}\mathrm{R}_{1}^{(1)} = \frac{|{{\widetilde{\mathbf {h}}_1^H}{\mathbf {w}_1}}|^2}{{|{\widetilde{\mathbf {h}}_1^H{\mathbf {w}_2}}|^2} + {|{\widetilde{\mathbf {h}}_1^H{\mathbf {v}}}|^2} + \sigma _1^2}. \end{aligned}$$ By defining \(\mathbf {h}_1=\widetilde{\mathbf {h}}_1/\sigma _1\), (2) can be rewritten as $$\begin{aligned} \mathrm{SIN}\mathrm{R}_{1}^{(1)} = \frac{{\mathbf {h}_1^H}{\mathbf {w}_1}{\mathbf {w}_1^H}{\mathbf {h}_1}}{{\mathbf {h}_1^H}\left( {\mathbf {w}_2}{\mathbf {w}_2^H} + {\mathbf {S}} \right) {\mathbf {h}_1} + 1}. \end{aligned}$$

The system model

The power splitting architecture at user 2

As shown in Fig. 2, a power-splitting architecture is introduced at user 2 to perform SWIPT. Hence, the received signal for information decoding at user 2 can be described as $$\begin{aligned} y_{2}^{(1)} = \sqrt{1-\rho } \widetilde{\mathbf {h}}_{2}^{H} \left( \mathbf {w}_{1}{s}_{1}+\mathbf {w}_{2}{s}_{2}+\mathbf {v}\right) + {n}_{2}^{(1)}, \end{aligned}$$ where \(\rho \in [0,1]\) is the PS ratio for energy harvesting to be optimized later, \(\widetilde{\mathbf {h}}_{2}^{H}\in \mathbb {C}^{{N}_{t}}\) is the channel impulse response vector between the BS and user 2, and \({n}_{2}^{(1)}\sim \mathcal {CN}(0,\sigma _{2}^{2})\) represents the AWGN. According to the NOMA principle, SIC is performed at user 2. Specifically, user 2 first decodes user 1's message (i.e., \({s}_{1}\)) and then subtracts this message from the received signal to decode its own message [33].
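As a concrete check of the SINR expressions (3), (5) and (6), the sketch below evaluates them for randomly drawn channels and beamformers. All numerical values (the antenna count, the PS ratio, the isotropic AN covariance) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt = 4                          # number of BS antennas (illustrative)
rho = 0.3                       # PS ratio at user 2 (illustrative)

def cn_vector(n):
    """Draw a circularly symmetric complex Gaussian vector, CN(0, I)."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

h1, h2 = cn_vector(Nt), cn_vector(Nt)   # noise-normalized channels of (3) and (5)
w1, w2 = cn_vector(Nt), cn_vector(Nt)   # transmit beamformers (not optimized here)
S = 0.1 * np.eye(Nt)                    # AN covariance, isotropic for illustration

def quad(h, M):
    """Real-valued quadratic form h^H M h."""
    return float(np.real(h.conj() @ M @ h))

W1, W2 = np.outer(w1, w1.conj()), np.outer(w2, w2.conj())

# Eq. (3): SINR at user 1 for s1 in the first phase.
sinr1 = quad(h1, W1) / (quad(h1, W2 + S) + 1.0)
# Eq. (5): SINR at user 2 when decoding s1 under the PS ratio rho.
sinr2_s1 = (1 - rho) * quad(h2, W1) / ((1 - rho) * quad(h2, W2 + S) + 1.0)
# Eq. (6): SINR at user 2 for its own message s2 after SIC removes s1.
sinr2_s2 = (1 - rho) * quad(h2, W2) / ((1 - rho) * quad(h2, S) + 1.0)
```

Note that SIC appears only through the absence of the \(\mathbf{W}_1\) term in the denominator of (6): user 2 has already cancelled \(s_1\) before decoding \(s_2\).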
Therefore, the SINR received at user 2 to decode \({s}_{1}\) can be expressed as $$\begin{aligned} \mathrm{SIN}\mathrm{R}_{2,{s}_1}^{(1)} = \frac{(1-\rho ){\mathbf {h}_2^H}{\mathbf {w}_1}{\mathbf {w}_1^H}{\mathbf {h}_2}}{(1-\rho ){\mathbf {h}_2^H}\left( {\mathbf {w}_2}{\mathbf {w}_2^H} + {\mathbf {S}} \right) {\mathbf {h}_2} + 1}, \end{aligned}$$ where \(\mathbf {h}_2=\widetilde{\mathbf {h}}_2/\sigma _2\). Then, user 2 subtracts \({s}_{1}\) from \(y_{2}^{(1)}\) to further decode its own message \({s}_{2}\). The corresponding SINR can be described as $$\begin{aligned} \mathrm{SIN}\mathrm{R}_{2,{s}_2}^{(1)} = \frac{(1-\rho ){\mathbf {h}_2^H}{\mathbf {w}_2}{\mathbf {w}_2^H}{\mathbf {h}_2}}{(1-\rho ){\mathbf {h}_2^H}{\mathbf {S}} {\mathbf {h}_2} + 1}. \end{aligned}$$ On the other hand, the energy harvested by user 2 is modeled as [34] $$\begin{aligned} {E}=\rho \left( {|{{\widetilde{\mathbf {h}}_2^H}{\mathbf {w}_1}}|^2}+{|{{\widetilde{\mathbf {h}}_2^H}{\mathbf {w}_2}}|^2}+{|{{\widetilde{\mathbf {h}}_2^H}{\mathbf {v}}}|^2}\right) \eta , \end{aligned}$$ where \(\eta\) is the ratio of the first phase in a transmission time slot, and we assume that the two phases have the same transmission duration, then \(\eta =0.5\). Thus, in the second phase, the transmitted power of user 2 related to forwarding messages is $$\begin{aligned} {P}_t=\frac{{E}}{1-\eta }=\rho \left( {|{{\widetilde{\mathbf {h}}_2^H}{\mathbf {w}_1}}|^2}+{|{{\widetilde{\mathbf {h}}_2^H}{\mathbf {w}_2}}|^2}+{|{{\widetilde{\mathbf {h}}_2^H}{\mathbf {v}}}|^2}\right) . 
\end{aligned}$$ In addition, the signal received at the eavesdropper can be expressed as $$\begin{aligned} y_{e}^{(1)} = \widetilde{\mathbf {f}}^{H} \left( \mathbf {w}_{1}{s}_{1}+\mathbf {w}_{2}{s}_{2}+\mathbf {v}\right) + {n}_{e}^{(1)}, \end{aligned}$$ where \(\widetilde{\mathbf {f}}^{H}\in \mathbb {C}^{{N}_{t}}\) is the channel impulse response vector between the BS and the eavesdropper, and \({n}_{e}^{(1)}\sim \mathcal {CN}(0,\sigma _{e}^{2})\) represents the AWGN at the eavesdropper. Then, the SINR received by the eavesdropper for the message \({s}_1\) can be expressed as $$\begin{aligned} \mathrm{SIN}\mathrm{R}_{e,{s}_1}^{(1)} = \frac{{\mathbf {f}^H}{\mathbf {w}_1}{\mathbf {w}_1^H}{\mathbf {f}}}{{\mathbf {f}^H}\left( {\mathbf {w}_2}{\mathbf {w}_2^H} + {\mathbf {S}} \right) {\mathbf {f}} + 1}, \end{aligned}$$ where \(\mathbf {f}=\widetilde{\mathbf {f}}/\sigma _{e}\). The eavesdropper then performs SIC, subtracting \({s}_1\) from its received signal to decode the other message \({s}_2\). The corresponding SINR can be described as $$\begin{aligned} \mathrm{SIN}\mathrm{R}_{e,{s}_2}^{(1)} = \frac{{\mathbf {f}^H}{\mathbf {w}_2}{\mathbf {w}_2^H}{\mathbf {f}}}{{\mathbf {f}^H}{\mathbf {S}} {\mathbf {f}} + 1}. \end{aligned}$$ In the second phase, user 2 forwards the message to user 1 with the harvested energy. At this point, the signal received at user 1 is $$\begin{aligned} y_{1}^{(2)} = \sqrt{{P}_t}{g}_{1}{s}_{1} + {n}_{1}^{(2)}, \end{aligned}$$ where \({g}_{1}\in \mathbb {C}\) is the channel coefficient from user 2 to user 1, and \({n}_{1}^{(2)}\sim \mathcal {CN}(0,\sigma _{1}^{2})\) is the AWGN at user 1. Note that we consider the case of \(\sigma _{1}^{2}=\sigma _{2}^{2}\) for simplicity of exposition.
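Under the PS protocol, the harvested energy (7) and the resulting relay power (8) follow directly from the split of the received signal. The sketch below, with assumed channel, beamformer and AN values, also confirms that with \(\eta = 0.5\) the two duration factors cancel, leaving \(P_t = \rho(\,\cdot\,)\):

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, rho, eta = 4, 0.3, 0.5      # eta = 0.5: both phases last half the slot, as in (7)

def cn_vector(n):
    """Draw a circularly symmetric complex Gaussian vector, CN(0, I)."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

h2t = cn_vector(Nt)             # raw (un-normalized) channel of user 2
w1, w2 = cn_vector(Nt), cn_vector(Nt)
v = 0.2 * cn_vector(Nt)        # one AN realization (illustrative)

# Eq. (7): energy harvested by user 2 during the first phase.
received_power = sum(abs(h2t.conj() @ x) ** 2 for x in (w1, w2, v))
E = rho * received_power * eta

# Eq. (8): transmit power available to user 2 in the second phase.
P_t = E / (1 - eta)
```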
Thus, the SINR received by user 1 for \(s_{1}\) can be given by $$\begin{aligned} \mathrm{SIN}\mathrm{R}_{1,{s}_1}^{(2)} = \rho {g}\left( {\mathbf {h}_2^H}({\mathbf {w}_1}{\mathbf {w}_1^H}+{\mathbf {w}_2}{\mathbf {w}_2^H} + {\mathbf {S}} ){\mathbf {h}_2}\right) , \end{aligned}$$ where \({g}=|{g}_1|^2\). In communication systems, MRC is usually performed after synchronization, channel estimation and channel equalization (in the sampling domain). Since synchronization, channel estimation and channel equalization are not the main concerns of this paper, similar to references [19] and [35], we assume each channel has been completely synchronized before performing MRC. Hence, user 1 can combine the signals received from the BS and user 2 to jointly decode the message. The equivalent SINR at user 1 can be written as $$\begin{aligned} \mathrm{SIN}\mathrm{R}_{1,{s}_1}&{}={}&\mathrm{SIN}\mathrm{R}_{1}^{(1)}+ \mathrm{SIN}\mathrm{R}_{1,{s}_1}^{(2)}\nonumber \\= & {} \frac{{\mathbf {h}_1^H}{\mathbf {w}_1}{\mathbf {w}_1^H}{\mathbf {h}_1}}{{\mathbf {h}_1^H}\left( {\mathbf {w}_2}{\mathbf {w}_2^H} + {\mathbf {S}} \right) {\mathbf {h}_1} + 1}+\rho {g}\left( {\mathbf {h}_2^H}\left( {\mathbf {w}_1}{\mathbf {w}_1^H}+{\mathbf {w}_2}{\mathbf {w}_2^H} + {\mathbf {S}} \right) {\mathbf {h}_2}\right) . \end{aligned}$$ Therefore, according to the SINRs of the legitimate users and the eavesdropper, the secrecy rates \({R}_1\) and \({R}_2\) of user 1 and user 2 are defined as follows $$\begin{aligned} {R}_1= & {} {\min }\left\{ 0.5\mathrm{log}_2\left( 1+\mathrm{SIN}\mathrm{R}_{1,{s}_1}\right) ,0.5\mathrm{log}_2\left( 1+\mathrm{SIN}\mathrm{R}_{2,{s}_1}^{(1)}\right) \right\} \nonumber \\&-0.5\mathrm{log}_2\left( 1+\mathrm{SIN}\mathrm{R}_{e,{s}_1}^{(1)}\right) \end{aligned}$$ (15a) $$\begin{aligned} {R}_2= & {} 0.5\mathrm{log}_2\left( 1+\mathrm{SIN}\mathrm{R}_{2,{s}_2}^{(1)}\right) -0.5\mathrm{log}_2\left( 1+\mathrm{SIN}\mathrm{R}_{e,{s}_2}^{(1)}\right) . 
\end{aligned}$$ (15b) We aim to minimize the transmitted power of the BS while guaranteeing the required secrecy rates of the users. The optimization problem can be expressed as follows $$\begin{aligned}&\mathbf {P}_1:{}\min \limits _{\rho ,\mathbf {w}_1,\mathbf {w}_2,\mathbf {S}}{} {{\mathrm{Tr}}}\left( {\mathbf {w}_1}{\mathbf {w}_1^H}+{\mathbf {w}_2}{\mathbf {w}_2^H} + {\mathbf {S}} \right) \end{aligned}$$ $$\begin{aligned}&\quad {\mathrm {s.t.}}\,C0:{R}_1\ge \gamma _1,\end{aligned}$$ $$\begin{aligned}&\quad C1:{R}_2\ge \gamma _2,\end{aligned}$$ (16c) $$\begin{aligned}&\quad C2:0\le \rho <1,\end{aligned}$$ (16d) $$\begin{aligned}&\quad C3:\mathbf {S}\succeq 0, \end{aligned}$$ (16e) where \(\gamma _1\) and \(\gamma _2\) represent the minimum required secrecy rate thresholds of user 1 and user 2, respectively. Constraint C0 ensures that \({s}_1\) can be successfully decoded at user 2 while guaranteeing the SINR requirement of user 1, and constraint C1 ensures the secure transmission of message \({s}_2\) at user 2. Observing (15a) and (15b), we find that \(\rho\), \(\mathbf {w}_1\) and \(\mathbf {w}_2\) are coupled in \({R}_1\) and \({R}_2\). Thus, P1 is a non-convex problem and is difficult to solve. In the following, we first employ the SDR technique to reformulate P1 and then approximately solve the reformulated problem with an SCA-based iterative algorithm.
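The secrecy-rate definitions (15a) and (15b) are straightforward to evaluate once the five SINRs are known. The helper below uses illustrative SINR values (assumed, not computed from the paper's channels); the factor 0.5 reflects the two-phase transmission:

```python
import numpy as np

def secrecy_rates(sinr1_s1, sinr2_s1, sinr2_s2, sinr_e_s1, sinr_e_s2):
    """Secrecy rates R1, R2 of Eqs. (15a)-(15b)."""
    c = lambda x: 0.5 * np.log2(1.0 + x)   # half-rate: two transmission phases
    # R1: the weaker of the two legitimate decoders of s1, minus the eavesdropper.
    R1 = min(c(sinr1_s1), c(sinr2_s1)) - c(sinr_e_s1)
    # R2: user 2's own-message rate minus the eavesdropper's rate for s2.
    R2 = c(sinr2_s2) - c(sinr_e_s2)
    return R1, R2

# Illustrative SINR values:
R1, R2 = secrecy_rates(sinr1_s1=3.0, sinr2_s1=4.0, sinr2_s2=2.0,
                       sinr_e_s1=0.5, sinr_e_s2=0.3)
```

The `min` in (15a) is what couples constraint C0 to both users: \(s_1\) must be decodable at user 2 (for SIC) as well as at its intended receiver, user 1.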
$$\begin{aligned} \begin{aligned} {R}_{1,s_1}&=\frac{{\left[ {{{{\mathrm{Tr}}}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {F}}}} \right) + 1} \right] }}{{\left\{ {\frac{{\left[ {{{{\mathrm{Tr}}}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_1}} \right) + 1} \right] }}{{\left[ {{{{\mathrm{Tr}}}}\left( {\left( {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_2}} \right) + 1} \right] }} + \rho g{{{{\mathrm{Tr}}}}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_2}} \right) } \right\} \left[ {{{{\mathrm{Tr}}}}}\left( {\left( {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {F}}}} \right) +1\right] }}\\ {R}_{2,s_1}&=\frac{{\left[ {(1 - \rho ){{{\mathrm{Tr}}}}\left( {({{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {H}}_2}} \right) + 1} \right] \left[ {{{{\mathrm{Tr}}}}\left( {({{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {F}}}} \right) + 1} \right] }}{{\left[ {(1 - \rho ){{{\mathrm{Tr}}}}\left( {({{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {H}}_2}} \right) + 1} \right] \left[ {{{{\mathrm{Tr}}}}\left( {({{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {F}}}} \right) + 1} \right] }}\\ {R}_{2,s_2}&=\frac{{\left[ {(1 - \rho ){{{\mathrm{Tr}}}}\left( {{\mathbf {S}}{{\mathbf {H}}_2}} \right) + 1} \right] \left[ {{{{\mathrm{Tr}}}}\left( {\left( {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {F}}}} \right) + 1} \right] }}{{\left[ {(1 - \rho ){{{\mathrm{Tr}}}}\left( {\left( {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_2}} \right) + 1} \right] \left[ {{{{\mathrm{Tr}}}}\left( {{\mathbf {S}}{{\mathbf {F}}}} \right) + 1} \right] }} \end{aligned} \end{aligned}$$ AN-aided beamforming design and power splitting control Let \({\mathbf {W}_1}={\mathbf {w}_1}{\mathbf {w}_1^H}\), \({\mathbf {W}_2}={\mathbf {w}_2}{\mathbf {w}_2^H}\) and drop rank-one constraints \(\mathrm rank({\mathbf {W}_1})=1\), \(\mathrm 
rank({\mathbf {W}_2})=1\). P1 can then be relaxed to P2, given as $$\begin{aligned}&\mathbf {P}_2: \min \limits _{\rho ,\mathbf {W}_1,\mathbf {W}_2,\mathbf {S}}{} {{\mathrm{Tr}}\left( {\mathbf {W}_1}+{\mathbf {W}_2}+ {\mathbf {S}} \right) }\end{aligned}$$ $$\begin{aligned}&\quad {\mathrm {s.t.}}\quad C4:{R}_{1,s_1}\leqslant {2^{ - {\gamma _1}}},\end{aligned}$$ $$\begin{aligned}&\quad C5:{R}_{2,s_1}\leqslant {2^{ - {\gamma _1}}},\end{aligned}$$ $$\begin{aligned}&\quad C6:{R}_{2,s_2}\leqslant {2^{ - {\gamma _2}}},\end{aligned}$$ $$\begin{aligned}&\quad C7:0\le \rho <1,\end{aligned}$$ (18e) $$\begin{aligned}&\quad C8:\mathbf {S}\succeq 0,\mathbf {W}_1\succeq 0,\mathbf {W}_2\succeq 0, \end{aligned}$$ (18f) where \({{\mathbf {H}}_1} \triangleq {{\mathbf {h}}_1}{\mathbf {h}}_1^H\), \({{\mathbf {H}}_2} \triangleq {{\mathbf {h}}_2}{\mathbf {h}}_2^H\) and \({{\mathbf {F}}} \triangleq {{\mathbf {f}}}{\mathbf {f}}^H\). Moreover, \({R}_{1,s_1}\) and \({R}_{2,s_1}\) represent the secrecy-rate terms of message \(s_1\) at user 1 and user 2, respectively, and \({R}_{2,s_2}\) represents that of message \(s_2\) at user 2 (cf. (17)). Since constraints C4, C5 and C6 are still non-convex, they are converted into the following equivalents by introducing exponential auxiliary variables [36].
The constraint C4 is equivalently expressed as $$\exp \left( {{x_1} - {y_1} - {y_2}} \right) \leqslant {2^{ - {\gamma _1}}},$$ $${\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {F}}}} \right) + 1 \leqslant \exp \left( { x_1}\right) ,$$ $$\frac{{\left[ {{\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_1}} \right) + 1} \right] }}{{\left[ {{\mathrm{Tr}}\left( {({{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {H}}_1}} \right) + 1} \right] }} +\rho g{\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_2}} \right) \geqslant \exp \left( {{y}_1}\right),$$ $${\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {F}}}} \right) + 1 \geqslant \exp \left( { y_2}\right) ,$$ where \(x_1\), \(y_1\), \(y_2\) are exponential auxiliary variables. Here, only (19b) and (19c) remain as non-convex constraints. By using the first-order Taylor expansion approximation, the non-convex constraint (19b) can be approximated as $$\begin{aligned} {{\mathrm{Tr}}}\left( {({{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {F}}}} \right) + 1 \leqslant \exp \left( {\widetilde{ x_1}}\right) \left( { x_1} - {\widetilde{ x_1}} + 1\right) , \end{aligned}$$ where \(\widetilde{x}_1\) is an approximate value, and it is equal to \({x}_1\), when the corresponding constraints are tight. 
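The linearization in (20) works because exp is convex: its first-order Taylor expansion at \(\widetilde{x}_1\) is a global under-estimator, so replacing \(\exp(x_1)\) by the linearization only shrinks the feasible set, and any point satisfying (20) also satisfies (19b). A quick numerical check (the expansion point 0.5 below is arbitrary):

```python
import numpy as np

def exp_taylor(x, x_tilde):
    """First-order Taylor expansion of exp(x) around x_tilde, as used in Eq. (20)."""
    return np.exp(x_tilde) * (x - x_tilde + 1.0)

xs = np.linspace(-2.0, 2.0, 201)
# Convexity of exp: the tangent line lies below the function everywhere ...
under = np.all(exp_taylor(xs, 0.5) <= np.exp(xs) + 1e-12)
# ... and touches it at the expansion point, which is why the bound becomes
# tight when x_tilde coincides with the converged x_1.
tight = np.isclose(exp_taylor(0.5, 0.5), np.exp(0.5))
```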
Furthermore, by introducing an auxiliary variable \({x_2} \geqslant 0\), constraint (19c) can be equivalently expressed as $$\begin{aligned}&{{\mathrm{Tr}}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_1}} \right) \geqslant {x_2}{\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_1}} \right) + x_2 - 1, \end{aligned}$$ $$\rho g{{{\mathrm{Tr}}}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_2}} \right) \geqslant \exp \left( { y_1}\right) - { x_2}.$$ For constraint (21a), an approximate convex constraint is produced by using the arithmetic-geometric mean (AGM) inequality to enforce \(xy \leqslant z\). That is, for any nonnegative variables x, y, z and any \(a>0\), $$\begin{aligned} 2xy \leqslant {(ax)^2} + {(y/a)^2} \leqslant 2z, \end{aligned}$$ where the first inequality holds with equality if and only if \(a = \sqrt{y/x}\), and the second inequality is the convex surrogate that guarantees \(xy \leqslant z\). Therefore, constraint (21a) is approximated by a convex constraint as follows $$\begin{aligned}&{\left( {{{\widetilde{a}}_1}{x_2}} \right) ^2} + {\left( {{\mathrm{Tr}}\left( {({{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {H}}_1}} \right) /{{\widetilde{a}}_1}} \right) ^2}\leqslant 2{\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_1}} \right) + 2 - 2{ x_2}, \end{aligned}$$ where \(\widetilde{a}_1\) is an approximate value, which is updated after each iteration by the following formula $$\begin{aligned} {\widetilde{a}_1} = \sqrt{{\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_1}} \right) /{ x_2}}. 
\end{aligned}$$ Using epigraph reformulation [37], the constraint (21b) can be transformed into a non-convex quadratic constraint and a convex LMI constraint as below $${u^2} \geqslant \exp ({y_1}) - {x_2},$$ $$\begin{aligned}&\left[ {\begin{array}{*{20}{c}} {g\rho }&{}u \\ u&{}{{\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_2}} \right) } \end{array}} \right] \succeq 0. \end{aligned}$$ And then, the non-convex quadratic inequality constraint (25a) is still approximated by Taylor expansion as $$\begin{aligned} 2\widetilde{u}u - {\widetilde{u}^2} \geqslant \exp ({y_1}) - {x_2}, \end{aligned}$$ where \(\widetilde{u}\) is an approximate value. Similar to C4, constraint C5 is approximated as $$\exp \left( {{x_3} + {x_4} - {y_3} - {y_4}} \right) \leqslant {2^{ - {\gamma _1}}},$$ $${\left( {{{\widetilde{a}}_2}(1 - \rho )} \right) ^2} + {\left( {{\mathrm{Tr}}\left( {\left( {{\mathbf {W}}_2} + {\mathbf {S}}\right) {{\mathbf {H}}_2}} \right) /{{\widetilde{a}}_2}} \right) ^2}\leqslant 2\exp \left( {\widetilde{ x_3}}\right) \left( { x_3} - {\widetilde{ x_3}} + 1\right) - 2,$$ $${\mathrm{Tr}}\left( {({{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {F}}}} \right) + 1\leqslant \exp \left( {\widetilde{ x_4}}\right) \left( { x_4} - {\widetilde{ x_4}} + 1\right) ,$$ $$2\widetilde{t}t - {\widetilde{t}^2} + 1 \geqslant \exp \left( {y_3}\right) ,$$ $$\begin{aligned}&\left[ {\begin{array}{*{20}{c}} {1 - \rho }&{}t \\ t&{}{{\mathrm{Tr}}\left( {({{\mathbf {W}}_1} + {{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {H}}_2}} \right) } \end{array}} \right] \succeq 0, \end{aligned}$$ $${\mathrm{Tr}}\left( {({{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {F}}}} \right) + 1 \geqslant \exp ({ y_4}).$$ where \({\widetilde{x}_3}\), \({\widetilde{x}_4}\), \({\widetilde{t}}\) and \({\widetilde{a}_2}\) are approximate values. 
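The AGM bound of (22) and the tightness of the update rules (24) and (28) can be verified numerically; x and y below are arbitrary nonnegative test values:

```python
import numpy as np

def agm_upper(x, y, a):
    """Upper bound on 2*x*y from the AGM inequality: (a*x)^2 + (y/a)^2, valid
    for any a > 0."""
    return (a * x) ** 2 + (y / a) ** 2

x, y = 1.7, 4.2
a_opt = np.sqrt(y / x)          # the update rule of (24)/(28): makes the bound tight
tight_val = agm_upper(x, y, a_opt)
loose = agm_upper(x, y, 2.0)    # any other a gives a looser (larger) upper bound
```

This is why the \(\widetilde{a}\) parameters are refreshed after every iteration: with the previous iterate plugged into the square-root formula, the surrogate constraint coincides with the original bilinear one at that point, so the approximation never cuts off the current solution.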
In addition, \({\widetilde{a}_2}\) is updated after each iteration by the following formula $$\begin{aligned} {\widetilde{a}_2} = \sqrt{{\mathrm{Tr}}\left( {({{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {H}}_2}} \right) /(1 - \rho )}. \end{aligned}$$ Similarly, constraint C6 is approximated as $${\left( {{{\widetilde{a}}_3}(1 - \rho )} \right) ^2} + {\left( {{\mathrm{Tr}}\left( {{\mathbf {S}}{{\mathbf {H}}_2}} \right) /{{\widetilde{a}}_3}} \right) ^2}\leqslant 2\exp ({\widetilde{x}_5})({x_5} - {\widetilde{x}_5} + 1) - 2,$$ $${\mathrm{Tr}}\left( {({{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {F}}}} \right) + 1 \leqslant \exp ({\widetilde{ x}_6})({{ x}_6} - {\widetilde{ x}_6} + 1),$$ $$2\widetilde{q}q - {\widetilde{q}^2} + 1 \geqslant \exp ({y_5}),$$ $$\begin{aligned}&\left[ {\begin{array}{*{20}{c}} {1 - \rho }&{}q \\ q&{}{{\mathrm{Tr}}\left( {({{\mathbf {W}}_2} + {\mathbf {S}}){{\mathbf {H}}_2}} \right) } \end{array}} \right] \succeq 0, \end{aligned}$$ $${\mathrm{Tr}}\left( {{\mathbf {S}}{{\mathbf {F}}}} \right) + 1 \geqslant \exp ({y_6}),$$ where \({\widetilde{x}_5}\), \({\widetilde{x}_6}\), \({\widetilde{q}}\) and \({\widetilde{a}_3}\) are approximate values. Moreover, \({\widetilde{a}_3}\) is updated after each iteration by the following formula $$\begin{aligned} {\widetilde{a}_3} = \sqrt{{\mathrm{Tr}}\left( {{\mathbf {S}}{{\mathbf {H}}_2}} \right) /(1 - \rho )}. \end{aligned}$$ Therefore, P2 can be approximated as P3 as follows $$\begin{aligned}&\mathbf {P}_3:{}\min \limits _{\rho ,\mathbf {W}_1,\mathbf {W}_2,\mathbf {S},{{x_i},{y_i},u,t,q}}{} {{\mathrm{Tr}}\left( {\mathbf {W}_1}+{\mathbf {W}_2} + {\mathbf {S}} \right) } \nonumber \\&\quad {\mathrm {s.t.}} \mathrm {(18e), (18f), (19a), (19d), (20), (23), (25b), (26), (27), (29)} \end{aligned}$$ where \(i \in \left\{ {1,2,3,4,5,6} \right\}\). We can see that P3 is a standard convex optimization problem. Based on the solution of P3, an iterative algorithm using SCA can be developed to solve P1; the specific solution process is shown in Table 1.
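The procedure of Table 1 (solve the convex approximation, refresh the expansion points, repeat until the iterates stop changing) can be illustrated on a scalar analogue. The toy problem below, minimizing a transmit power p subject to a secrecy-rate-like constraint \(\log_2(1+g_1 p) - \log_2(1+g_e p) \ge \gamma\), is NOT the paper's P3; the gains \(g_1\), \(g_e\) and target \(\gamma\) are assumed values chosen so that a closed-form solution exists for comparison:

```python
import numpy as np

g1, ge, gamma = 2.0, 0.4, 1.0     # assumed legitimate/eavesdropper gains, target rate

def solve_approx(p_tilde):
    """One SCA step: upper-bound the concave eavesdropper term log2(1 + ge*p)
    by its tangent at p_tilde, then find the smallest feasible p by bisection
    (the surrogate f is negative at p = 0 and positive at the bracket's end)."""
    def f(p):
        lin = np.log2(1 + ge * p_tilde) \
            + ge * (p - p_tilde) / ((1 + ge * p_tilde) * np.log(2))
        return np.log2(1 + g1 * p) - lin - gamma
    lo, hi = 0.0, 3.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid              # still infeasible: smallest root lies above mid
        else:
            hi = mid
    return hi

p = 1.0                           # initial expansion point
for _ in range(30):               # Table-1-style loop: solve, refresh, repeat
    p_new = solve_approx(p)
    if abs(p_new - p) < 1e-9:     # stop once the iterates have converged
        break
    p = p_new

# Closed-form solution of log2(1+g1*p) - log2(1+ge*p) = gamma for comparison:
p_exact = (2 ** gamma - 1) / (g1 - 2 ** gamma * ge)
```

As in the paper's algorithm, each surrogate is tight at the current expansion point, so the fixed point of the loop satisfies the original non-convex constraint with equality.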
In addition, if the solution \((\mathbf {W}_1^*,\mathbf {W}_2^*)\) yielded by SDR is rank-one, then the optimal beamforming vectors \(\mathbf {w}_1^*\) and \(\mathbf {w}_2^*\) are obtained by eigenvalue decomposition of \(\mathbf {W}_1^*\) and \(\mathbf {W}_2^*\), respectively. Otherwise, a suboptimal solution can be obtained using the Gaussian randomization procedure [38]. Table 1 The SCA-based algorithm In this section, simulation results are presented to demonstrate the performance of the proposed AN-aided cooperative SWIPT NOMA transmission scheme. Two legitimate users and a passive eavesdropper (located outside the "security zone") are randomly deployed in a \(5m \times 6m\) area, and the BS is fixed at the edge at coordinate \(\left( {0m,2.5m} \right)\); this setup is suitable for an indoor sensor communication scenario. The channel \(\tilde{h_1}\) follows standard Rayleigh fading. The distance-dependent path loss is modeled by \({P_L} = {10^{ - 3}}{d^{ - \alpha }}\), in which d and \(\alpha\) denote the Euclidean distance and the path loss exponent, respectively. Using the Rician fading channel model, the downlink channels are modeled as $$\begin{aligned} {\widetilde{\mathbf {h}}_2}= & {} \sqrt{\frac{K}{{1 + K}}} {\mathbf {h}}_2^{{\text {LOS}}} + \sqrt{\frac{1}{{1 + K}}} {\mathbf {h}}_2^{{\text {NLOS}}}\\ {\widetilde{\mathbf {f}}}= & {} \sqrt{\frac{K}{{1 + K}}} {\mathbf {f}}^{{\text {LOS}}} + \sqrt{\frac{1}{{1 + K}}} {\mathbf {f}}^{{\text {NLOS}}}\\ { g_1}= & {} \sqrt{\frac{K}{{1 + K}}} g_1^{{\text {LOS}}} + \sqrt{\frac{1}{{1 + K}}} g_1^{{\text {NLOS}}} \end{aligned}$$ where K denotes the Rician factor, \({\mathbf {h}}_2^{{\text {LOS}}}\), \({\mathbf {f}}^{{\text {LOS}}}\) and \(g_1^{{\text {LOS}}}\) are the line-of-sight (LOS) deterministic components, and \({\mathbf {h}}_2^{{\text {NLOS}}}\), \({\mathbf {f}}^{{\text {NLOS}}}\) and \(g_1^{{\text {NLOS}}}\) are the Rayleigh fading components.
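A channel generator matching this setup can be sketched as below. The LOS component is modeled here as unit-modulus random phases, an assumption on our part since the paper does not specify its LOS model; the distance and path-loss values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def rician_channel(Nt, K, d, alpha):
    """Rician-fading channel combined with the distance-dependent path loss
    P_L = 1e-3 * d**(-alpha) of the simulation setup. The LOS part is
    modeled as unit-modulus random phases (an assumption)."""
    los = np.exp(1j * 2 * np.pi * rng.random(Nt))
    nlos = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)
    h = np.sqrt(K / (1 + K)) * los + np.sqrt(1 / (1 + K)) * nlos
    return np.sqrt(1e-3 * d ** (-alpha)) * h

h2 = rician_channel(Nt=4, K=3.0, d=2.0, alpha=2.5)
# As K grows, the channel hardens toward the deterministic LOS component:
h_hard = rician_channel(Nt=4, K=1e6, d=2.0, alpha=2.5)
```

With K = 0 the model reduces to pure Rayleigh fading, matching the assumption made for \(\tilde{h_1}\).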
The detailed simulation parameters are given in Table 2. Table 2 Simulation parameters For comparison, we introduce three other transmission strategies: the AN-aided NOMA strategy, the Non-AN SWIPT NOMA strategy and the AN-aided time division multiple access (TDMA) strategy. In the AN-aided NOMA strategy, the system still performs NOMA, but SWIPT is not executed at user 2, i.e., the cooperative transmission stage is removed; hence power splitting need not be considered. In the Non-AN SWIPT NOMA strategy, no AN is added to the signal transmitted by the BS, so the AN covariance matrix need not be optimized. In the AN-aided TDMA strategy, the system operates in time division multiple access (TDMA) mode, i.e., the BS transmits information to user 1 or user 2 in different dynamically allocated time intervals. Figure 3 describes the minimum transmission power of the BS versus the number of iterations of the proposed algorithm. The minimum required secrecy rate of user 1 is set to 0.5 bits/s/Hz, and that of user 2 is set to 0.3, 0.5 or 0.7 bits/s/Hz. It can be found that only a few iterations are required to reach the minimum transmission power of the BS in all cases, which indicates that the proposed algorithm is effective. The minimum transmission power of the BS versus the number of iterations Figure 4 shows the minimum transmission power of the BS versus the secrecy rates of user 1 and user 2, respectively. In all cases, the NOMA transmission strategies are superior to the TDMA transmission strategy, indicating the advantage of NOMA in improving the system SE. Moreover, our strategy achieves a lower BS transmission power than the other NOMA strategies.
The minimum transmission power of the BS versus the secrecy rates under different algorithms Since user 1 is far from the BS while user 2 is near it, their secrecy rates have different impacts on the transmission power. We first analyze the influence of user 1's secrecy rate on the transmission power with the secrecy rate of user 2 fixed, and then the influence of user 2's secrecy rate with the secrecy rate of user 1 fixed. Figure 5 depicts the minimum transmission power of the BS versus the secrecy rate of user 1, where the secrecy rate of user 2 is fixed at 0.5 Bits/s/Hz. Compared with the other transmission strategies, the transmission power of the proposed AN-aided SWIPT NOMA strategy is the lowest, which demonstrates that the collaboration between SWIPT and NOMA is effective. It is also found that the power of the proposed scheme is slightly lower than that of the Non-AN SWIPT NOMA transmission strategy. The minimum transmission power of the BS versus the secrecy rate of user 1 Figure 6 shows the minimum transmission power of the BS versus the secrecy rate of user 2, where the secrecy rate of user 1 is fixed at 0.5 Bits/s/Hz. We observe that the curves of the three NOMA transmission strategies are smoother than that of the TDMA transmission strategy. This indicates that the NOMA transmission strategies are robust with respect to the near-user rate, i.e., under the same transmit power constraint and far-user secrecy rate, NOMA can significantly increase the secrecy rate of the near user compared to TDMA. On the other hand, we find that the gap in transmission power between the AN-aided SWIPT NOMA strategy and the AN-aided NOMA strategy decreases gradually as the secrecy rate requirement increases. Moreover, the gap in transmission power between the AN-aided and Non-AN strategies also decreases gradually as the secrecy rate increases.
This clearly demonstrates that the BS power gain brought by the SWIPT-cooperative and AN-aided techniques decreases gradually as the secrecy rate increases. The minimum transmission power of the BS versus the secrecy rate of user 2 Figure 7 depicts the minimum transmission power of the BS with respect to the number of BS antennas, where the secrecy rates of user 1 and user 2 are both fixed at 0.5 Bits/s/Hz. We can see that the transmission power of all methods decreases as the number of BS antennas grows. This is intuitive, since the diversity gain of the system can be enhanced by increasing the number of BS antennas. The three NOMA schemes require lower transmission power than the TDMA scheme, which shows that the NOMA transmission schemes are robust. Similar to Fig. 5, it is also found that the power of the proposed scheme is slightly lower than that of the Non-AN SWIPT NOMA transmission strategy. The minimum transmission power of the BS versus the number of BS antennas Table 3 Relationship among PS ratio, secrecy rate of user 1, secrecy rate of user 2 Table 3 shows the relationship among the PS ratio, the secrecy rate of user 1 and the secrecy rate of user 2. It can be seen that the PS ratio decreases sharply as user 2's secrecy rate increases, but increases slowly as user 1's secrecy rate increases. This is because the power available for information decoding at user 2 is completely determined by \(1-\rho\): the higher the secrecy rate requirement of user 2, the more power must be allocated to decoding. Different from user 2, the secrecy rate of user 1 is determined by two parts (the transmitted signal of the BS and the signal relayed by user 2), and is thus less sensitive to the PS ratio. To promote the physical-layer security of the downlink in the MISO system, we propose an AN-aided cooperative SWIPT NOMA strategy.
In the scenario with one SIC-capable eavesdropper, we explore a cooperative transmission scheme based on multi-antenna, AN-aided, and PS techniques to minimize the transmission power of the BS while satisfying the secrecy rate requirements of all legitimate users. This NP-hard problem requires jointly optimizing the multi-antenna beamforming vector, the AN covariance matrix and the PS ratio. Thus, we design a suboptimal algorithm using SDR and SCA techniques to tackle it. Simulation studies verify the effectiveness of the proposed transmission scheme. Data sharing is not applicable to this paper since no datasets were analyzed during the current study. NOMA: Non-orthogonal multiple access SWIPT: Simultaneous wireless information and power transfer CSI: Channel state information SIC: Successive interference cancellation SDR: Semidefinite relaxation SCA: Successive convex approximation PS: Power splitting AN: Artificial noise 5G: Fifth-generation OMA: Orthogonal multiple access SE: Spectral efficiency EE: Energy efficiency EH: Energy harvesting TS: Time-switching CJ: Cooperative jamming SISO: Single-input single-output MISO: Multiple-input single-output MRC: Maximal-ratio combining AWGN: Additive white Gaussian noise SINR: Signal-to-interference-plus-noise ratio LOS: Line-of-sight TDMA: Time division multiple access Z. Wei, D. N, J. Yuan, H. Wang, Optimal resource allocation for power-efficient MC-NOMA with imperfect channel state information. IEEE Trans. Commun. 65(9), 3944–3961 (2017) F. Zhou, N.C. Beaulieu, Z. Li, J. Si, P. Qi, Energy-efficient optimal power allocation for fading cognitive radio channels: ergodic capacity, outage capacity, and minimum-rate capacity. IEEE Trans. Wirel. Commun. 15(4), 2741–2755 (2016) D. Tse, P. Viswanath, Fundamentals of Wireless Communication (2005) Z. Ding, Z. Yang, P. Fan, H.V. Poor, On the performance of non-orthogonal multiple access in 5G systems with randomly deployed users. IEEE Signal Process. Lett. 21(12), 1501–1505 (2014) Z. Ding, P. Fan, H.V.
Poor, Impact of user pairing on 5G nonorthogonal multiple-access downlink transmissions. IEEE Trans. Veh. Technol. 65(8), 6010–6023 (2016) J. Men, J. Ge, Non-orthogonal multiple access for multiple-antenna relaying networks. IEEE Commun. Lett. 19(10), 1686–1689 (2015) Z. Yang, Z. Ding, Y. Wu, P. Fan, Novel relay selection strategies for cooperative NOMA. IEEE Trans. Veh. Technol. 66(11), 10114–10123 (2017). https://doi.org/10.1109/TVT.2017.2752264 P. Xu, Z. Yang, Z. Ding, Z. Zhang, Optimal relay selection schemes for cooperative NOMA. IEEE Trans. Veh. Technol. 6, 16 (2018) Z. Ding, H. Dai, H.V. Poor, Relay selection for cooperative NOMA. IEEE Wirel. Commun. Lett. 5(4), 416–419 (2016) J. Li, Y. Zhao, Radio environment map-based cognitive Doppler spread compensation algorithms for high-speed rail broadband mobile communications. EURASIP J. Wirel. Commun. Netw. 263, 2012 (2012) S.L. Talbot, B. Farhang-Boroujeny, Time-varying carrier offsets in mobile OFDM. IEEE Trans. Commun. 57(9), 2790–2798 (2009) Z. Zhu, Z. Chu, Z. Wang, L. Lee, Outage constrained robust beamforming for secure broadcasting systems with energy harvesting. IEEE Trans. Wirel. Commun. 15(11), 7610–7620 (2016) P. Grover, A. Sahai, Shannon meets Tesla: Wireless information and power transfer, in IEEE International Symposium on Information Theory (2010) I. Krikidis, S. Timotheou, S. Nikolaou, G. Zheng, D.W.K. Ng, R. Schober, Simultaneous wireless information and power transfer in modern communication systems. IEEE Commun. Mag. 52(11), 104–110 (2014) R. Zhang, C.K. Ho, MIMO broadcasting for simultaneous wireless information and power transfer. IEEE Trans. Wirel. Commun. 12(5), 1989–2001 (2013) F. Zhou, Z. Chu, H. Sun, R.Q. Hu, L. Hanzo, Artificial noise aided secure cognitive beamforming for cooperative MISO-NOMA using SWIPT. IEEE J. Sel. Areas Commun. 36(4), 918–931 (2018) J. Xu, L. Liu, R. Zhang, Multiuser MISO beamforming for simultaneous wireless information and power transfer. IEEE Trans.
Signal Process. 62(18), 4798–4810 (2014) Y. Liu, Z. Ding, M. Elkashlan, H.V. Poor, Cooperative non-orthogonal multiple access with simultaneous wireless information and power transfer. IEEE J. Sel. Areas Commun. 34(4), 938–953 (2016) Y. Xu, C. Shen, Z. Ding, X. Sun, S. Yan, G. Zhu, Z. Zhong, Joint beamforming and power-splitting control in downlink cooperative SWIPT NOMA systems. IEEE Trans. Signal Process. 65(18), 4874–4886 (2017) Y. Yuan, P. Xu, Z. Yang, Z. Ding, Q. Chen, Joint robust beamforming and power-splitting ratio design in SWIPT-based cooperative NOMA systems with CSI uncertainty. IEEE Trans. Veh. Technol. 68(3), 2386–2400 (2019) F. Zhou, Z. Li, J. Cheng, Q. Li, J. Si, Robust AN-aided beamforming and power splitting design for secure MISO cognitive radio with SWIPT. IEEE Trans. Wirel. Commun. 16(4), 2450–2464 (2017) H. Wang, X. Xia, Enhancing wireless secrecy via cooperation: signal design and optimization. IEEE Commun. Mag. 53(12), 47–53 (2015) Y. Ju, H. Wang, T. Zheng, Q. Yin, Secure transmissions in millimeter wave systems. IEEE Trans. Commun. 65(5), 2114–2127 (2017) M. Moradikia, H. Bastami, A. Kuhestani, H. Behroozi, L. Hanzo, Cooperative secure transmission relying on the optimal power allocation in the presence of untrusted relays, a passive eavesdropper and hardware impairments. IEEE Access 7, 116942–116964 (2019) C. Wang, H.W. Ming, X. Xia, Hybrid opportunistic relaying and jamming with power allocation for secure cooperative networks. IEEE Trans. Wirel. Commun. 14(2), 589–605 (2015) Y. Liu, H. Chen, L. Wang, Physical layer security for next generation wireless networks: theories, technologies, and challenges. IEEE Commun. Surv. Tutor. 19(1), 347–376 (2017) F. Zhou, Z. Chu, Y. Wu, N. Al-Dhahir, P. Xiao, Enhancing PHY security of MISO NOMA SWIPT systems with a practical non-linear EH model, in 2018 IEEE International Conference on Communications Workshops (ICC Workshops), pp. 1–6 (IEEE, 2018) D. Chen, Y. Cheng, W. Yang, J. Hu, Y.
Cai, Physical layer security in cognitive untrusted relay networks. IEEE Access 6, 7055–7065 (2018) M.T. Mamaghani, Y. Hong, On the performance of low-altitude UAV-enabled secure AF relaying with cooperative jamming and SWIPT. IEEE Access 7, 153060–153073 (2019) Z. Hu, D. Xie, M. Jin, L. Zhou, J. Li, Relay cooperative beamforming algorithm based on probabilistic constraint in SWIPT secrecy networks. IEEE Access 8, 173999–174008 (2020) K. Cao, B. Wang, H. Ding, L. Lv, R. Dong, T. Cheng, F. Gong, Improving physical layer security of uplink NOMA via energy harvesting jammers. IEEE Trans. Inf. Forensics Secur. 16, 786–799 (2021) A. Hasan, J.G. Andrews, The guard zone in wireless ad hoc networks. IEEE Trans. Wirel. Commun. 6(3), 897–906 (2007) Y. Zhang, H. Wang, Q. Yang, Z. Ding, Secrecy sum rate maximization in non-orthogonal multiple access. IEEE Commun. Lett. 20(5), 930–933 (2016) Q. Shi, C. Peng, W. Xu, M. Hong, Y. Cai, Energy efficiency optimization for MISO SWIPT systems with zero-forcing beamforming. IEEE Trans. Signal Process. 64(4), 842–854 (2016) Z. Ding, I. Krikidis, B. Sharif, H.V. Poor, Wireless information and power transfer in cooperative networks with spatially random relays. IEEE Trans. Wirel. Commun. 13(8), 4440–4453 (2014) Z. Chu, Z. Zhu, M. Johnston, S.Y. LeGoff, Simultaneous wireless information power transfer for MISO secrecy channel. IEEE Trans. Veh. Technol. 65(9), 6913–6925 (2016) S. Boyd, L. Vandenberghe, Convex Optimization (2004) Z. Luo, W. Ma, M. So, Y. Ye, S. Zhang, Semidefinite relaxation of quadratic optimization problems. IEEE Signal Process. Mag.
27(3), 20–34 (2010) This work was supported by the National Science Foundation Council of China (61771006, 61976080), Key research projects of university in Henan Province of China (19A413006, 20B510001), First-class Discipline Training Foundation of Henan University (2018YLTD04), the Programs for Science and Technology Development of Henan Province (192102210254), the Talent Program of Henan University (SYL19060110). School of Computer and Information Engineering, Henan University, Kaifeng, China Yong Jin, Zhentao Hu, Dongdong Xie, Guodong Wu & Lin Zhou Y.J. was responsible for investigating the AN-aided beamforming design and power splitting control method that are suitable to be implemented. Z.H. conceived and designed the study. D.X. drafted the manuscript and revised it critically. All authors read and approved the final manuscript. Correspondence to Zhentao Hu. Jin, Y., Hu, Z., Xie, D. et al. Physical layer security transmission scheme based on artificial noise in cooperative SWIPT NOMA system. J Wireless Com Network 2021, 144 (2021). https://doi.org/10.1186/s13638-021-02020-3 SWIPT Power splitting (PS) Secrecy communication Non-Orthogonal Multiple Access Techniques in Emerging Wireless Systems
1. In August 2003, a car dealer is trying to determine how many 2004 models should be ordered. Each car costs the dealer $10,000. The demand for the dealer's 2004 models has the probability distribution shown in Table 4. Each car is sold for $15,000. If the demand for 2004 cars exceeds the number of cars ordered in August, the dealer must reorder at a cost of $12,000 per car. If the demand for 2004 cars falls short, the dealer may dispose of excess cars in an end-of-model-year sale for $9,000 per car. How many 2004 models should be ordered in August?

TABLE 4
No. of Cars Demanded   Probability
20                     .30
25                     .15
30                     .15
35                     .20
40                     .20

Answer: Let q = number of 2004-model cars ordered in August and d = number of 2004-model cars demanded after August. We determine the smallest value of q for which E(q+1) - E(q) >= 0. To calculate E(q+1) - E(q), consider two possibilities:

Case 1: If d <= q, then ordering q+1 units instead of q units causes the dealer to be overstocked by one more unit. The probability that Case 1 occurs is simply P(D <= q), where D is the random variable representing demand.

Case 2: If d >= q+1, then ordering q+1 units instead of q units leaves the dealer short by one less unit. The probability that Case 2 occurs is P(D >= q+1) = 1 - P(D <= q).

Now apply these conditions to the given problem. If d <= q, the costs shown below are incurred.

Computation of Total Cost if d <= q:
Buy q cars at $10,000/car: 10000q
Sell d cars at $15,000/car: -15000d
Dispose of excess q - d cars at $9,000/car: -9000(q - d)
Total cost: 10000q - 15000d - 9000(q - d)

Hence the total cost for the case d <= q is

10000q - 15000d - 9000(q - d) = 10000q - 15000d - 9000q + 9000d = 1000q - 6000d    (1)

Comparing equation (1) with the form c(d, q) = c_0 q + (terms not involving q), we see that c_0, the per-unit cost of being overstocked, is c_0 = 1000. If d >= q+1, the costs shown below are incurred.

Computation of Total Cost if d >= q+1: ...
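As a quick numerical cross-check of the marginal analysis above, the short script below (an illustrative sketch with hypothetical helper names) enumerates the candidate order quantities from Table 4 and picks the one minimizing expected total cost, where a negative cost is a profit:

```python
# Demand distribution from Table 4.
demand_pmf = {20: 0.30, 25: 0.15, 30: 0.15, 35: 0.20, 40: 0.20}

def total_cost(q, d):
    if d <= q:
        # Overstocked: buy q at $10,000, sell d at $15,000,
        # clear the q - d leftover cars at $9,000 each.
        return 10_000 * q - 15_000 * d - 9_000 * (q - d)
    # Understocked: reorder the d - q extra cars at $12,000 each.
    return 10_000 * q + 12_000 * (d - q) - 15_000 * d

def expected_cost(q):
    return sum(p * total_cost(q, d) for d, p in demand_pmf.items())

best_q = min(demand_pmf, key=expected_cost)
print(best_q, round(expected_cost(best_q)))   # → 35 -137500
```

Ordering 35 cars minimizes the expected cost (an expected profit of $137,500), consistent with the marginal analysis: with c_0 = 1000 against the $2,000 per-car reorder premium, the smallest q with P(D <= q) >= 2000/3000 is q = 35.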
In this problem, y = 1/(x^2 + c) is a one-parameter family of solutions of the first-order DE y' + 2xy^2 = 0. Find a solution of the first-order IVP consisting of this differential equation and the given initial condition y(3) = 1/5. Give the largest interval I over which the solution is defined. (Enter your answer using interval notation.) Air at 80 degrees Fahrenheit and 14.7 psia with a mass flow rate of 0.6 lbm/sec enters a compressor and leaves at 70 psia and 430 degrees Fahrenheit through an exit area of 0.007 ft^2. The measured heat interaction rate with the 80 degree Fahrenheit factory room is 3 Btu/sec. Use constant Cp = 0.24 Btu/(lbm·°F). a) Calculate the compressor work rate. b) Compute the entropy generation rate for the overall process. Required information: A 20-mm-diameter rod made of the same material as rods AC and AD in the truss shown was tested to failure and an ultimate load of 105 kN was recorded. Use a factor of safety of 3.0. [Figure: truss with members of 1.5 m and 3 m, loaded by two 48 kN forces.] Determine the required diameter of rod AC. Required information: A 20-mm-diameter rod made of the same material as rods AC and AD in the truss shown was tested to failure and an ultimate load of 125 kN was recorded. Use a factor of safety of 3.0. [Figure: truss with members of 1.5 m and 3 m, loaded by two 48 kN forces.] Determine the required diameter of rod AC. Required information Problem 15-4A (Algo) Preparing job cost sheets, recording costs, preparing inventory ledger accounts LO P1, P2, P3 [The following information applies to the questions displayed below.] Watercraft's predetermined overhead rate is 200% of direct labor. Information on the company's production activities during May follows. a. Purchased raw materials on credit, $220,000. b. Materials requisitions record use of the following materials for the month: Job 136 Job 137 Job 138 Job 139 Job 140 Total direct materials Indirect materials Total materials requisitions $ 49,500 32,500 19,400. c. Time tickets record use of the following labor for the month; these wages were paid in cash: Job 140 Total direct labor Indirect labor Total labor cost 23,200 6,400 131,000 20,500 $ 151,500 $ 12,300 10,500 37,900 39,200 4,000 101,900 14,500 $ 128,400. Journal entry — record raw material purchases on credit (debits before credits): Raw materials inventory (Debit) 220,000; Accounts payable (Credit) 220,000.
Surzhikov, Sergei Timofeevich Statistics Math-Net.Ru Total publications: 74 Scientific articles: 70 Presentations: 2 Member of the Russian Academy of Sciences Doctor of physico-mathematical sciences Birth date: 6.04.1952 Website: https://goo.gl/mHBGKn http://www.mathnet.ru/eng/person26166 List of publications on Google Scholar List of publications on ZentralBlatt https://mathscinet.ams.org/mathscinet/MRAuthorID/628595 http://elibrary.ru/author_items.asp?authorid=251 http://www.scopus.com/authid/detail.url?authorId=7006525458 Publications in Math-Net.Ru 1. S. T. Surzhikov, "Comparative analysis of the role of atom and ion spectral lines in radiative heating of four types of space capsules", TVT, 54:2 (2016), 249–266 ; High Temperature, 54:2 (2016), 235–251 2. D. A. Storozhev, S. T. Surzhikov, "Numerical simulation of the two-dimensional structure of glow discharge in molecular nitrogen with an account for oscillatory kinetics", TVT, 53:3 (2015), 325–336 ; High Temperature, 53:3 (2015), 307–318 3. A. L. Zheleznyakova, S. T. Surzhikov, "Calculation of a Hypersonic Flow over Bodies of Complex Configuration on Unstructured Tetrahedral Meshes Using the AUSM Scheme", TVT, 52:2 (2014), 283–293 ; High Temperature, 52:2 (2014), 271–281 4. A. S. Dikalyuk, S. T. Surzhikov, "Equilibrium spectral radiation behind the shock wave front in a $\mathrm{CO_2}$–$\mathrm{N_2}$ gas mixture", TVT, 52:1 (2014), 39–44 ; High Temperature, 52:1 (2014), 35–40 5. A. L. Zheleznyakova, S. T. Surzhikov, "Application of the method of splitting by physical processes for the computation of a hypersonic flow over an aircraft model of complex configuration", TVT, 51:6 (2013), 897–911 ; High Temperature, 51:6 (2013), 816–829 6. S. T. Surzhikov, M. P. Shuvalov, "Checking computation data on radiative and convectional heating of next generation spacecraft", TVT, 51:3 (2013), 456–470 ; High Temperature, 51:3 (2013), 408–420 7. S. T.
Surzhikov, "Convective heating of small-radius spherical blunting for relatively low hypersonic velocities", TVT, 51:2 (2013), 261–276 ; High Temperature, 51:2 (2013), 231–245 8. A. S. Dikalyuk, S. T. Surzhikov, "Numerical simulation of rarefied dusty plasma in a normal glow discharge", TVT, 50:5 (2012), 611–619 ; High Temperature, 50:5 (2012), 571–578 9. D. A. Andrienko, S. T. Surzhikov, "The unstructured two-dimensional grid-based computation of selective thermal radiation in $\mathrm{CO}_2$–$\mathrm{N}_2$ mixture flows", TVT, 50:4 (2012), 585–595 ; High Temperature, 50:4 (2012), 545–555 10. D. V. Kotov, S. T. Surzhikov, "Computation of hypersonic flow and radiation of viscous chemically reacting gas in a channel modeling a section of a scramjet", TVT, 50:1 (2012), 126–136 ; High Temperature, 50:1 (2012), 120–130 11. S. T. Surzhikov, "Radiative-Convective Heat Transfer of a Spherically Shaped Space Vehicle in Carbon Dioxide", TVT, 49:1 (2011), 92–107 ; High Temperature, 49:1 (2011), 92–107 12. S. T. Surzhikov, "Quasi-stationary high-frequency capacitive glow discharge in a transverse magnetic field", TVT, 48:supplementary issue (2010), 102–112 13. S. T. Surzhikov, "Radiative gas dynamics of large landing spacecraft", TVT, 48:6 (2010), 956–964 ; High Temperature, 48:6 (2010), 910–917 14. S. T. Surzhikov, "Interaction of plasma plume of a plasma pulsed thruster with incident flow of rarefied magnetized plasma", Matem. Mod., 21:1 (2009), 12–24 ; Math. Models Comput. Simul., 1:6 (2009), 712–723 15. S. T. Surzhikov, "Glow discharge in external magnetic field in hypersonic flow of rarefied gas", TVT, 47:4 (2009), 485–497 ; High Temperature, 47:4 (2009), 459–471 16. S. T. Surzhikov, "Laser-supported combustion wave in the field of gravity", TVT, 47:3 (2009), 324–337 ; High Temperature, 47:3 (2009), 307–319 17. D. V. Kotov, S. T.
Surzhikov, "Molecular dynamics simulation of the rate of dissociation and of the time of vibrational relaxation of diatomic molecules", TVT, 46:5 (2008), 664–673 ; High Temperature, 46:5 (2008), 604–613 18. D. V. Kotov, S. T. Surzhikov, "Local estimation of directional emissivity of light-scattering volumes using the Monte-Carlo method", TVT, 45:6 (2007), 885–895 ; High Temperature, 45:6 (2007), 807–817 19. I. V. Sharikov, D. M. Khrupov, S. T. Surzhikov, "Practical use of parallel computing for numerical simulation of interaction between air laser plasma and a surface", Matem. Mod., 18:8 (2006), 12–24 20. A. S. Petrusëv, S. T. Surzhikov, J. S. Shang, "A two-dimensional model of glow discharge in view of vibrational excitation of molecular nitrogen", TVT, 44:6 (2006), 814–822 ; High Temperature, 44:6 (2006), 804–813 21. D. S. Alekhin, D. M. Klimov, S. T. Surzhikov, "Potentials of internuclear interaction of diatomic molecules in planetary atmosphere", TVT, 44:3 (2006), 378–392 ; High Temperature, 44:3 (2006), 373–388 22. S. T. Surzhikov, "Numerical Simulation of Two-Dimensional Structure of Glow Discharge in View of the Heating of Neutral Gas", TVT, 43:6 (2005), 828–844 ; High Temperature, 43:6 (2005), 825–842 23. S. T. Surzhikov, J. S. Shang, "Viscous interaction on a flat plate with a surface discharge in magnetic field", TVT, 43:1 (2005), 21–31 ; High Temperature, 43:1 (2005), 19–30 24. S. T. Surzhikov, "Three-dimensional model of the spectral emissivity of light-scattering exhaust plumes", TVT, 42:5 (2004), 760–771 ; High Temperature, 42:5 (2004), 763–775 25. S. T. Surzhikov, "The use of Monte Carlo simulation methods to calculate the radiation of jets of combustion products in view of rotational spectral structure", TVT, 41:5 (2003), 785–799 ; High Temperature, 41:5 (2003), 694–707 26. S. T. Surzhikov, H. Krier, "Computational models of combustion of nonmetallized heterogeneous propellant", TVT, 41:1 (2003), 106–142 ; High Temperature, 41:1 (2003), 95–128 27. 
S. T. Surzhikov, "The Bifurcation of Subsonic Gas Flow past a Localized Volume of Low-Temperature Plasma", TVT, 40:4 (2002), 591–602 ; High Temperature, 40:4 (2002), 546–556 28. S. T. Surzhikov, H. Krier, "Quasi-one-dimensional model of combustion of sandwich heterogeneous solid propellant", TVT, 39:4 (2001), 629–639 ; High Temperature, 39:4 (2001), 586–595 29. V. V. Levenets, S. T. Surzhikov, "A self-consistent computational model of electrodynamic and thermogasdynamic processes in electric-discharge lasers", TVT, 39:1 (2001), 5–12 ; High Temperature, 39:1 (2001), 1–8 30. S. T. Surzhikov, V. M. Tenishev, L. A. Chudov, "On the problem of diatomic molecules wave functions determination", Matem. Mod., 12:2 (2000), 118–127 31. S. T. Surzhikov, "Numerical analysis of subsonic laser-supported combustion waves", Kvantovaya Elektronika, 30:5 (2000), 416–420 [Quantum Electron., 30:5 (2000), 416–420 ] 32. L. A. Kuznetsova, S. T. Surzhikov, "Absorption cross sections of diatomic molecules for problems of radiative heat transfer in low-temperature plasma", TVT, 37:3 (1999), 374–385 ; High Temperature, 37:3 (1999), 348–358 33. L. A. Kuznetsova, S. T. Surzhikov, "The information-computational complex "MSRT-RADEN". Database of absorption coefficients of diatomic electronic spectra", Matem. Mod., 10:5 (1998), 21–34 34. L. A. Kuznetsova, S. T. Surzhikov, "The information-computational complex "MSRT-RADEN". Models of absorption coefficients of diatomic molecules electronic spectra", Matem. Mod., 10:4 (1998), 30–40 35. L. A. Kuznetsova, S. T. Surzhikov, "The information-computational complex "MSRT-RADEN"", Matem. Mod., 10:3 (1998), 15–28 36. S. T. Surzhikov, "Macrostatistical model describing heat transfer by radiation with due regard for the vibrational-band spectrum: Calculation of radiation transfer", TVT, 36:3 (1998), 475–481 ; High Temperature, 36:3 (1998), 451–457 37. S. T. 
Surzhikov, "Macrostatistical model describing heat transfer by radiation with due regard for the vibrational-band spectrum. Formulation of the model", TVT, 36:2 (1998), 285–290 ; High Temperature, 36:2 (1998), 269–274 38. S. T. Surzhikov, "Radiative-gasdynamical model of a nozzle with local heating", Matem. Mod., 9:9 (1997), 54–74 39. S. T. Surzhikov, "Semiempirical model of dynamics and radiation of large-scale fireballs formed as a result of rocket accidents", TVT, 35:6 (1997), 932–939 ; High Temperature, 35:6 (1997), 919–926 40. S. T. Surzhikov, "Radiative heat fluxes in the vicinity of oxygen-hydrogen fireballs", TVT, 35:5 (1997), 778–782 ; High Temperature, 35:5 (1997), 766–770 41. S. T. Surzhikov, "Heat radiation of large-scale oxygen-hydrogen fireballs. Investigation of calculation models", TVT, 35:4 (1997), 584–593 ; High Temperature, 35:4 (1997), 572–581 42. S. T. Surzhikov, "Heat radiation of large-scale oxygen-hydrogen fireballs: Analysis of the problem and main results", TVT, 35:3 (1997), 416–423 ; High Temperature, 35:3 (1997), 410–416 43. A. P. Budnic, A. S. Vakulovsky, A. G. Popov, S. T. Surzhikov, "Mathematical modeling of optical discharge subsonic propagation in $\mathrm{CO}_2$-laser's beam with the refraction of laser radiation", Matem. Mod., 8:5 (1996), 3–25 44. S. T. Surzhikov, "An emitting cloud numerical model with nonstationary dynamical variables", Matem. Mod., 7:8 (1995), 3–24 45. S. T. Surzhikov, "Radiative buoyant thermal numerical model with variables "velocity-pressure"", Matem. Mod., 7:6 (1995), 3–31 46. S. T. Surzhikov, "Three-dimensional numerical simulation of MHD-interaction between a laser plasma and a moving ionized medium in magnetic field", TVT, 33:4 (1995), 519–531 ; High Temperature, 33:4 (1995), 514–526 47. S. T. Surzhikov, "Mathematical models of subsonic Laval nozzles of laser-plasma accelerators", TVT, 33:3 (1995), 437–451 ; High Temperature, 33:3 (1995), 435–448 48. L. Mirabo, Yu. P. Raizer, S. T. 
Surzhikov, "Laser combustion waves in Laval nozzles", TVT, 33:1 (1995), 13–23 ; High Temperature, 33:1 (1995), 11–20 49. S. T. Surzhikov, "Burning of a continuous optical discharge in an optical plasmatron at elevated pressure", TVT, 32:5 (1994), 714–717 ; High Temperature, 32:5 (1994), 667–670 50. S. T. Surzhikov, "Origination of return flows in an optical plasma generator under conditions of radiative combustion of discharge", TVT, 32:2 (1994), 292–298 ; High Temperature, 32:2 (1994), 275–281 51. S. T. Surzhikov, "Radiation heat transfer subject to atomic lines in a low-temperature laser's plasma layers", Matem. Mod., 5:10 (1993), 11–31 52. Y. P. Raizer, S. T. Surzhikov, "Mathematical model of gasdischarge and heat processes in technological lasers chamber", Matem. Mod., 5:3 (1993), 32–58 53. S. T. Surzhikov, "Simulation of line emission propagation in light-scattering volumes", TVT, 31:4 (1993), 680–682 ; High Temperature, 31:4 (1993), 628–630 54. S. T. Surzhikov, "The calculation of selective radiative heat transfer in arbitrary geometry volumes", TVT, 31:3 (1993), 434–438 ; High Temperature, 31:3 (1993), 391–395 55. Yu. P. Raizer, S. T. Surzhikov, "The rate of current spot expansion on a glow discharge cathode upon abrupt voltage rise", TVT, 31:1 (1993), 22–28 ; High Temperature, 31:1 (1993), 19–25 56. A. V. Rakhmanov, S. T. Surzhikov, "Expanding of a plasma cloud of complicated form into rarefied plasma in the magnetic field", Matem. Mod., 4:7 (1992), 67–78 57. K. G. Guskov, Y. P. Raizer, S. T. Surzhikov, "Three-dimensional computational mhd-model of plasma expansion into non-uniform medium with magnetic field", Matem. Mod., 4:7 (1992), 49–66 58. L. A. Dombrovskii, A. V. Kolpakov, S. T. Surzhikov, "Transport approximation in calculating the directed-radiation transfer in an anisotropically scattering erosional flare", TVT, 29:6 (1991), 1171–1177 ; High Temperature, 29:6 (1991), 954–959 59. S. T. 
Surzhikov, "Numerical modelling of a slow combustion wave in $\mathrm{CO}_2$-laser's beam", Matem. Mod., 2:7 (1990), 85–95 60. L. A. Daladova, A. I. Makienko, S. N. Pavlova, S. T. Surzhikov, B. A. Khmelinin, "The thermal radiation computer model of axisymmetrical scattering two-phase volumes", Matem. Mod., 2:4 (1990), 54–66 61. K. G. Guskov, Yu. P. Raizer, S. T. Surzhikov, "Observed velocity of slow motion of an optical discharge", Kvantovaya Elektronika, 17:7 (1990), 937–942 [Sov J Quantum Electron, 20:7 (1990), 860–864 ] 62. S. T. Surzhikov, "Radiative–convective heat transfer in an optical plasmotron chamber", TVT, 28:6 (1990), 1205–1213 ; High Temperature, 28:6 (1990), 926–932 63. A. V. Kolpakov, L. A. Dombrovskii, S. T. Surzhikov, "Transfer of directed radiation in an absorbing and anisotropically scattering medium", TVT, 28:5 (1990), 983–987 ; High Temperature, 28:5 (1990), 753–756 64. Yu. P. Raizer, S. T. Surzhikov, "Charge diffusion along a current and an effective method of eliminating computational for glow discharges", TVT, 28:3 (1990), 439–443 ; High Temperature, 28:3 (1990), 324–327 65. Yu. P. Raizer, S. T. Surzhikov, "Continuous optical discharge burning at elevated pressures", Kvantovaya Elektronika, 15:3 (1988), 551–553 [Sov J Quantum Electron, 18:3 (1988), 349–351 ] 66. Yu. P. Raizer, S. T. Surzhikov, "Two-dimensional structure in a normal glow-discharge and diffusion effects in cathode and anode spot formation", TVT, 26:3 (1988), 428–435 ; High Temperature, 26:3 (1988), 304–311 67. S. T. Surzhikov, "On the calculation of directed thermal radiation of light-scattering volumes by the Monte Carlo method", TVT, 25:4 (1987), 820–823 68. Yu. P. Raizer, A. Yu. Silant'ev, S. T. Surzhikov, "Two-dimensional calculations of a continuous optical discharge in atmospheric-air flow (optical plasmatron)", TVT, 25:3 (1987), 454–461 ; High Temperature, 25:3 (1987), 331–337 69. Yu. P. Raizer, S. T.
Surzhikov, "Numerical study of a continuous optical discharge in atmospheric air in the framework of a one-dimensional model", TVT, 23:1 (1985), 29–35 ; High Temperature, 23:1 (1985), 28–34 70. Yu. P. Raizer, S. T. Surzhikov, "Investigation of the processes occurring in an optical plasmatron by numerical calculation", Kvantovaya Elektronika, 11:11 (1984), 2301–2310 [Sov J Quantum Electron, 14:11 (1984), 1526–1532 ] 71. S. T. Surzhikov, "Метод расчета теплообмена излучением с учетом атомных линий применительно к численной модели оптического разряда (№ 3509-В-87 Деп. от 19.V.1987)", TVT, 25:5 (1987), 1036 72. Yu. P. Raizer, A. Yu. Silant'ev, S. T. Surzhikov, "Методы численного расчета двумерного течения в оптическом плазмотроне (№ 7510-86 Деп. от 31.Х.1986)", TVT, 25:2 (1987), 412 73. Yu. P. Raizer, S. T. Surzhikov, "Одномерная численная модель оптического плазмотрона (№ 4705-84 от 4.VII.1984)", TVT, 22:6 (1984), 1233 74. V. V. Gorskii, S. T. Surzhikov, "Метод решения сопряженной задачи тепло- и массообмена при аэротермохимическом разрушении тел (№ 2252-81 Деп. от 14.V.81)", TVT, 19:5 (1981), 1117 Presentations in Math-Net.Ru 1. Механика ионизированных сред: компьютерные модели и междисциплинарные исследования S. T. Surzhikov All-Russian conference "Modern Problems of Continuum Mechanics" devoted to 110 anniversary of L. I. Sedov 2. Computer models of radiation-convective heat transfer in ramjet combustion chambers International Conference on Mathematical Control Theory and Mechanics Ishlinsky Institute for Problems in Mechanics of the Russian Academy of Sciences, Moscow
William Wales (astronomer) William Wales (1734? – 29 December 1798) was a British mathematician and astronomer who sailed on Captain Cook's second voyage of discovery, then became Master of the Royal Mathematical School at Christ's Hospital and a Fellow of the Royal Society. Early life Wales was born around 1734 to John and Sarah Wales and was baptised in Warmfield (near the West Yorkshire town of Wakefield) that year. As a youth, according to the historian John Cawte Beaglehole, Wales travelled south in the company of a Mr Holroyd, who became a plumber in the service of George III.[1] By the mid-1760s, Wales was contributing to The Ladies' Diary. In 1765 he married Mary Green, sister of the astronomer Charles Green.[1] In 1765, Wales was employed by the Astronomer Royal Nevil Maskelyne as a computer, calculating ephemerides that could be used to establish the longitude of a ship, for Maskelyne's Nautical Almanac.[2] 1769 transit of Venus and wintering at Hudson Bay As part of the plans of the Royal Society to make observations of the June 1769 transit of Venus, which would lead to an accurate determination of the astronomical unit (the distance between the Earth and the Sun), Wales and an assistant, Joseph Dymond, were sent to Prince of Wales Fort on Hudson Bay to observe the transit,[3] with the pair being offered a reward of £200 for a successful conclusion to their expedition.[1] Other Royal Society expeditions associated with the 1769 transit were Cook's first voyage to the Pacific, with observations of the transit being made at Tahiti, and the expedition of Jeremiah Dixon and William Bayly to Norway. Due to winter pack ice making the journey impossible during the winter months, Wales and Dymond were obliged to begin their journey in the summer of 1768, setting sail on 23 June. 
Ironically, when volunteering to make a journey to observe the transit, Wales had requested that he be sent to a more hospitable location.[4] The party arrived at Prince of Wales Fort in August 1768.[5] Due to the scarcity of building materials at the chosen site, the party had to bring not only astronomical instruments, but the materials required for the construction of living quarters.[5] On their arrival, the pair constructed two "Portable Observatories", which had been designed by the engineer John Smeaton.[6] Construction work occupied the pair for a month and then they settled in for the long winter season. When the day of the transit, 3 June 1769, finally arrived, the pair were lucky to have a reasonably clear day and they were able to observe the transit at around local midday. However, the two astronomers' results for the time of first contact, when Venus first appeared to cross the disc of the Sun, differed by 11 seconds; the discrepancy was to prove a cause of upset for Wales.[4] They were to stay in Canada for another three months before making the return voyage to England, thus becoming the first scientists to spend the winter at Hudson Bay.[7] On his return, Wales was still upset by the difference in the observations and refused to present his findings to the Royal Society until March 1770; however, his report of the expedition, including the astronomical results as well as other climatic and botanical observations, met with approval and he was invited by James Cook to join his next expedition.[4]
Captain Cook's second circumnavigation voyage
Wales and William Bayly were appointed by the Board of Longitude to accompany James Cook on his second voyage of 1772–75,[3] with Wales accompanying Cook aboard the Resolution.
Wales' brother-in-law, Charles Green, had been the astronomer appointed by the Royal Society to observe the 1769 transit of Venus but had died during the return leg of Cook's first voyage.[8] The primary objective of Wales and Bayly was to test Larcum Kendall's K1 chronometer, based on the H4 of John Harrison.[8] Wales compiled a log book of the voyage, recording locations and conditions, the use and testing of the instruments entrusted to him, as well as making many observations of the people and places encountered on the voyage.[9]
Later life
Following his return, Wales was commissioned in 1778 to write the official astronomical account of Cook's first voyage.[10] Wales became Master of the Royal Mathematical School at Christ's Hospital and was elected a Fellow of the Royal Society in 1776.[2][7] Amongst Wales' pupils at Christ's Hospital were Samuel Taylor Coleridge and Charles Lamb.[5] It has been suggested that Wales' accounts of his journeys might have influenced Coleridge when writing his poem The Rime of the Ancient Mariner.[11] The writer Leigh Hunt, another of Wales' pupils, remembered him as "a good man, of plain simple manners, with a heavy large person and a benign countenance".[12] He was appointed as Secretary of the Board of Longitude in 1795, serving in that position until his death in 1798.[10][13] He was nominated by the First Lord of the Admiralty, Earl Spencer, and his appointment was confirmed on 5 December 1795.[14]
Recognition of his work
During his voyage of 1791–95, George Vancouver, who had studied astronomy under Wales as a midshipman on HMS Resolution during Cook's second circumnavigation, named Wales Point, a cape at the entrance to Portland Inlet on the coast of British Columbia, in honour of his tutor; the name was later applied to the nearby Wales Island by an official at the British Hydrographic Office.[15] In his journal, Vancouver recorded his gratitude and indebtedness to Wales's tutelage "for that information which has enabled me to
traverse and delineate these lonely regions."[16] Wales featured on a New Hebrides (now Vanuatu) postage stamp of 1974 commemorating the 200th anniversary of Cook's discovery of the islands.[8] The asteroid 15045 Walesdymond, discovered in 1998, was named after Wales and Dymond.[17]
Works by William Wales
• "Journal of a voyage, made by order of the Royal Society, to Churchill River, on the North-west Coast of Hudson's Bay". Philosophical Transactions of the Royal Society of London. 60: 109–136. 1771.
• The Method of Finding the Longitude by timekeepers London: 1794.
See also
• European and American voyages of scientific exploration
• Wales, Wendy (2015). Captain Cook's Computer: the life of William Wales, F.R.S. (1734-1798). Hame House. ISBN 978-09933758-0-4.
Notes
1. Wendy Wales. "William Wales' First Voyage". Cook's Log. Captain Cook Society. Retrieved 10 September 2009. 2. Mary Croarken (September 2002). "Providing longitude for all – The eighteenth-century computers of the Nautical Almanac". Journal for Maritime Research. Retrieved 6 August 2009. 3. "William Wales". State Library of New South Wales. Retrieved 6 August 2009. 4. Hudon, Daniel (February 2004). "A (Not So) Brief History of the Transits of Venus". Journal of the Royal Astronomical Society of Canada. 98 (1): 11–13. Retrieved 18 February 2022. 5. Fernie, J. Donald (September–October 1998). "Transits, Travels and Tribulations, IV: Life on the High Arctic". American Scientist. 86 (5): 422. doi:10.1511/1998.37.3396. 6. Steven van Roode. "Historical observations of the transit of Venus". Retrieved 10 August 2009. 7. Glyndwr Williams. "Wales, William". Dictionary of Canadian Biography Online. Retrieved 6 August 2009. 8. "William Wales". Ian Ridpath. Retrieved 6 August 2009. 9. Wales, William. "Log book of HMS 'Resolution'". Cambridge Digital Library. Retrieved 28 May 2013. 10. Orchiston, Wayne (2007). Hockey, Thomas A. (ed.). The Biographical Encyclopedia of Astronomers: A-L. p. 1189.
ISBN 978-0-387-31022-0. 11. Christopher Ondaatje (15 March 2002). "From Fu Man Chu to a grizzly end". Times Higher Education. Retrieved 11 August 2009. 12. Hunt, Leigh (1828). Lord Byron and some of his contemporaries with recollections of the author's life and of his visit to Italy. Colburn. p. 352. 13. The Philosophical Transactions of the Royal Society of London, from Their Commencement, in 1665, to the Year 1800: 1763–1769. Royal Society. 1809. p. 683. 14. "Papers of the Board of Longitude : Confirmed minutes of the Board of Longitude, 1780-1801 (5 December 1795)". Cambridge Digital Library. Retrieved 15 January 2017. 15. "Wales Island Cannery". Porcher Island Cannery. Retrieved 10 August 2009. 16. "Captain George Vancouver". Discover Vancouver. Retrieved 10 August 2009. 17. "15045 Walesdymond (1998 XY21)". JPL Small-Body Database Browser. Retrieved 10 August 2009.
Sources
• Who's Who in Science (Marquis Who's Who Inc, Chicago Ill. 1968) ISBN 0-8379-1001-3
• Francis Lucian Reid "William Wales (ca. 1734–1798): playing the astronomer", Studies in History and Philosophy of Science, 39 (2008) 170–175
External links
• "Wales, William". Dictionary of National Biography. London: Smith, Elder & Co. 1885–1900.
• Journal of a Voyage, Made by Order of the Royal Society, to Churchill River, on the North-West Coast of Hudson's Bay; Of Thirteen Months Residence in That Country; and of the Voyage Back to England; In the Years 1768 and 1769: By William Wales
• Extracts of William Wales's Journal kept on his voyage aboard HMS Resolution
• Full digitised version of Wales' Logbook from his voyage on HMS Resolution
• The Original Astronomical Observations, Made in the Course of a Voyage...in the Resolution and Adventure – Results of Wales' work published in 1777
• Article on Wales compiled for Captain Cook Society
• The Transit of William Wales – Educational comic book produced by the Hudson's Bay Company for Canadian high school students
Methodology | Open | Published: 20 March 2015
A multi-criteria spatial deprivation index to support health inequality analyses
Pablo Cabrera-Barona, Thomas Murphy, Stefan Kienberger & Thomas Blaschke
International Journal of Health Geographics, volume 14, Article number: 11 (2015)
Deprivation indices are useful measures to analyze health inequalities. There are several methods to construct these indices; however, few studies have used Geographic Information Systems (GIS) and Multi-Criteria methods to construct a deprivation index. Therefore, this study applies Multi-Criteria Evaluation to calculate weights for the indicators that make up the deprivation index, and a GIS-based fuzzy approach to create different scenarios of this index is also implemented. The Analytical Hierarchy Process (AHP) is used to obtain the weights for the indicators of the index. The Ordered Weighted Averaging (OWA) method using linguistic quantifiers is applied in order to create different deprivation scenarios. Geographically Weighted Regression (GWR) and a Moran's I analysis are employed to explore spatial relationships between the different deprivation measures and two health factors: the distance to health services and the percentage of people that have never had a live birth. This last indicator was considered as the dependent variable in the GWR. The case study is Quito City, in Ecuador. The AHP-based deprivation index shows medium and high levels of deprivation (0.511 to 1.000) in specific zones of the study area, even though most of the study area has low values of deprivation. OWA results show deprivation scenarios that can be evaluated considering the different attitudes of decision makers. GWR results indicate that the deprivation index and its OWA scenarios can be considered as local estimators for health-related phenomena.
Moran's I calculations demonstrate that several deprivation scenarios, in combination with the 'distance to health services' factor, could be explanatory variables to predict the percentage of people that have never had a live birth. The AHP-based deprivation index and the OWA deprivation scenarios developed in this study are Multi-Criteria instruments that can support the identification of highly deprived zones and can support health inequalities analysis in combination with different health factors. The methodology described in this study can be applied in other regions of the world to develop spatial deprivation indices based on Multi-Criteria analysis.
Approaches to developing deprivation indices are diverse [1-4], and area-based deprivation indices have been proven to be useful in identifying patterns of inequalities in health outcomes [1-11]. Deprivation can be defined as any disadvantage of an individual or human group, related to the community or society to which the individual or human group belongs, and these disadvantages can be of a social or material nature [4,5]. Social deprivation can be linked to concepts of social fragmentation [11], and material deprivation can be related to the concept of poverty in terms of the lack of basic goods. These two kinds of deprivation are closely linked to public health and wellbeing [12]. Measuring deprivation requires the identification of two main issues: which indicators to use to construct a deprivation index, and how to combine these indicators. The criteria for choosing the different indicators that compose deprivation indices can vary.
In general, they depend on the availability of information in the census and the objective of the study [2-4,8,9]. There are several reference studies on constructing multiple deprivation indices, such as the Townsend Deprivation Index, which uses four indicators of material and social deprivation [4], and the Under Privileged Area score, also known as the Jarman Deprivation score, which considers eight deprivation indicators and has been used to determine remuneration for physicians in the United Kingdom [13,14]. Another known measure is the Carstairs deprivation index [15], which is very similar to the Townsend index but is adapted to the Scottish context. Common indicators for these three indices are overcrowding and unemployment. Townsend and Carstairs indices also include a very specific variable available in the British Census, namely the indicator of "Non car ownership". More recent efforts have used other kinds of indicators from different domains, including health, housing and vulnerability of the population, for the construction of deprivation indices [1-3,6-9]. However, the most common deprivation domains that can support studies of health are related to occupation, education and household conditions, including overcrowding [3]. Once the indicators for a deprivation index are chosen, the next important step is to define how they are going to be combined. Deprivation indicators can be combined using (i) simple additive techniques, (ii) weights for each indicator, or (iii) multivariate techniques [16]. The first technique simply adds the deprivation indicators [4,16], the second technique can include expert-based weights [17], and the third technique commonly uses indicator weights created using statistical analysis such as Principal Component Analysis [2,18]. Deprivation indices are constructed by integrating indicators generally extracted from census-area data [18,19].
In many parts of the world where census data are available, such indices can be geo-referenced using GIS. Subsequently, such geo-referenced data allow further spatial analyses, such as investigating spatial correlations [20], performing accessibility analysis [21], analyzing geographical patterns [22] or studying multiple scale evaluations [10] of deprivation measures. However, although deprivation indices can be represented in a spatially explicit way, there has been surprisingly little discussion so far about the spatial perspectives of these indices [10,17,22]. There is also not much documented experience - at least not through systematic comparisons of different scenarios - on how to construct these indices spatially. Based on this background, this paper shows the development of a deprivation index using techniques from Multi-Criteria decision making [8,17,23] and GIS-based fuzzy methods [17,24]. This methodology will show how an Analytical Hierarchy Process (AHP) is applied to obtain the weights for the different indicators that make up the deprivation index. AHP is a Multi-Criteria evaluation method that takes information from experts' judgments [23]. We then apply Ordered Weighted Averaging (OWA) in order to create different deprivation scenarios [17]. The indicators used to construct our spatial deprivation index follow a rights-based approach [25,26], and are extracted from the 2010 Ecuadorian Population and Housing Census. This rights-based approach prioritizes latent problems in Latin America, where basic needs problems (for example, not having sewerage systems) are more common than, for example, in European countries. The indicators used represent education, health, employment and housing conditions in census blocks of our study area, Quito City, Ecuador. This area has a total of 4034 census blocks, and the census block is considered the smallest area from which census information could be extracted.
An explorative spatial analysis using Geographically Weighted Regression (GWR) and Moran's I is applied to the deprivation index and its scenarios to evaluate how they are spatially related to the following health factors: the distance to health services, and the percentage of people that have never had a live birth. The distance to health services is a health-accessibility variable that can be examined in relation to deprivation measures in order to identify its effects on health [21]. The health factor of the percentage of people that have never had a live birth is related to a Population Census variable called "number of people that have never had a live birth". This indicator can represent health inequalities: when a woman's child is not born alive, this could be considered to be a consequence of a health condition, such as a reproductive or maternal health problem [27]. This indicator can be calculated using information available in the 2010 Ecuadorian Population and Housing Census, and therefore could be considered a useful health-related indicator that can be analyzed together with deprivation indices to be obtained from future Census data. This variable is obtained from women's answers about how many live births they have had. At the time of a child's birth, he or she is considered to be a "live birth" if he or she shows vital life signals such as breathing and movement.
Study area and materials
The case study is the urban area of the Metropolitan District of Quito, Ecuador (Figure 1). This area is known as Quito City, and is home to more than 1.5 million people distributed in 34 urban districts (Parishes) [28]. This urban area has a narrow shape due to its limits with the Pichincha Volcano in the west and the Valleys of Tumbaco and Los Chillos to the east. Over 80% of inhabitants are mestizos (mixed-ethnicity people) [28] but the city is also inhabited by minorities such as indigenous people, black people and white people.
Historically, the south of Quito City was home to blue-collar workers, as well as being the area where several factories and companies have settled [29]. In contrast, the north was inhabited by wealthier people. However, due to the influx of migrants from other areas of the country and the population growth [30], there is no single rule to locate different socio-economic groups in the city today, and we can find very poor neighborhoods in the north, and very new and up-market condominiums in the south.
Figure 1: Location of the case study.
The information to construct the deprivation index was derived from the 2010 Ecuadorian Population and Housing Census [28]. The advantages of using Population and Housing Census information to construct indices are that census data are commonly open access, and follow a standardization that allows a comparison of information across different places and times. A geocoded shape file of census blocks was also used in order to link the 2010 Census data to the 4034 census areas that make up the study area. For the calculation of the distance to health services, a data set of the geo-referenced health services in Quito City was used. This data set was provided by Ecuador's Ministry of Health.
Multi-Criteria Evaluation
A Multi-Criteria Evaluation (MCE) includes knowledge derived from different resources that can be integrated with GIS methods in order to support different kinds of analyses [23]. MCE combines information obtained from different criteria to produce an evaluation index [31]; a weight is allocated to each criterion to represent its importance. In this study, the Analytical Hierarchy Process (AHP) was applied. AHP is an MCE method developed by Saaty [32] that offers practical support for decision making and a straightforward way to obtain weights from criteria [23].
MCE methods, including AHP, also have the capacity to be integrated into GIS-based environments [23,24,33-36] and these GIS-based MCE approaches have been widely and successfully applied in environmental analysis [23,24,31,33,35,37]. The first step in an AHP is to identify a set of criteria. For the deprivation index developed in this study, the criteria are factors or variables that are considered to determine deprivation. A rights-based approach was used to choose the factors that make up the deprivation index [25,26], taking into account the framework of Buen Vivir (Good Living), that is based on human rights and nature rights. The Buen Vivir concept considers that in order to achieve a better quality of life, including time for leisure and harmony with nature, basic needs should first be satisfied [26]. Buen Vivir cannot be achieved if people do not have access to services that ensure their wellbeing and allow them to develop capabilities that create equal opportunities for everyone [26]. To have a good education, health, and to live in conditions of dignity, encourages actions that allow people to construct cohesive societies of Good Living. Human rights are universal. Therefore, the Buen Vivir concept can be applied in other countries, and it is not a concept which focuses only on Ecuador. Table 1 shows the different indicators considered for the construction of the deprivation index. Each indicator is considered as a criterion for the AHP. The chosen indicators fulfill the following requirements: i) to consider a human rights-based approach, ii) to be related to health and to have an affinity with material or social dimensions of deprivation [2,3,11,18,21] and iii) to be able to be represented at the census block level [2]. The chosen indicators belong to the dimensions of education, health, employment and housing conditions. 
Table 1. Criteria to construct the deprivation index
The indicators used to construct the index represent socio-economic problems: people with no education and people that work for no payment. People who are physically disabled for over a year will be limited in their normal work and daily activities, and those without insurance will be extremely disadvantaged when it comes to health care services. The housing indicators used represent limitations of access to services and a lack of quality of life in the households. Variance Inflation Factors (VIF) were calculated for all indicators used in order to identify multicollinearity. The VIF shows how much the variance of an estimated regression coefficient is increased as a result of collinearity with the other variables. All VIF obtained were less than 5, which means all selected indicators can be used for the construction of the index. The key step in any AHP is the creation of a pairwise comparison matrix to compute weights for each criterion while reducing the complexity of the phenomenon in question, because only two criteria are compared at one time [38]. For the comparison in the resulting pairwise matrix, a unified scale is used. The grade of importance of each indicator is evaluated in relation to all other indicators. The importance scale ranges from 1 to 9, whereby 1 means equal importance, 3 means moderate importance, 5 means strong or essential importance, 7 represents very strong importance and 9 indicates extreme importance. Values of 2, 4, 6 and 8 can also be used and are considered as intermediate values. In order to obtain the references for the grades of importance, 32 experts' judgments were taken into consideration. The consulted experts are members of public and private Ecuadorian institutions and work in the fields of Medicine, Geography and Territorial Planning, Environmental Sciences, and Social Sciences. They were consulted via an online questionnaire in September 2014.
The results of the pairwise comparison matrix are shown in Table 2, and the importance scores show that, according to the experts, the chosen indicators are of equal or very similar importance: for example, indicator B (% of people with no health insurance) is of the same importance as indicator A (% of people without any level of instruction), and indicator D (% of people that work with no payment) is of moderately greater importance than indicator C (% of people that are disabled for more than a year). The pairwise comparison matrix is reciprocal; consequently, it is only necessary to fill in one diagonal half of the matrix. After assigning the different levels of importance in the pairwise comparison matrix, a normalized matrix (N) is obtained as described below [39]: $$ N_{ij} = \frac{a_{ij}}{\sum_{i} a_{ij}} $$
Table 2. Results of the AHP method
The normalized value for each cell of N is obtained by calculating the ratio of each importance value $a_{ij}$ of the pairwise comparison matrix to the sum of the values of the corresponding column of this matrix. Afterwards, all the row values of the normalized matrix are added, and then the sum is divided by the number of the indicators used to construct the deprivation index. The result of this operation is a vector that contains the weights for each indicator (criterion), the eigenvector. One of the strengths of AHP is that one can evaluate the consistency of the experts' judgments, by calculating a consistency ratio (CR) that indicates the likelihood that the pairwise comparison matrix judgments were generated randomly [32]: $$ CR=\frac{CI}{RI} $$ Where CI is the consistency index and RI is the random index. CI is calculated using the equation: $$ CI=\frac{\lambda_{max}-n}{n-1} $$ Where n represents the number of criteria and $\lambda_{max}$ is obtained as follows: a second vector is obtained by multiplying the pairwise comparison matrix by the eigenvector.
Then a third vector is obtained by dividing the values of the second vector by the values of the eigenvector. $\lambda_{max}$ is the average of all the components of this final vector [39]. RI represents the consistency index of a random pairwise comparison matrix [38], and the values that this index can take depend on the number of criteria used [39]. Table 3 shows different values for the RI. In this study, we worked with twelve criteria or indicators; therefore, the RI value used is 1.48.
Table 3. Random indices
The CR obtained was 0.0019, a value less than 0.10. This value means that the pairwise comparison matrix is satisfactory [39], which is to say that there is a reasonable level of consistency in the experts' judgments [38,40]. The weights obtained for each indicator and the CR are also shown in Table 2. A first representation of the deprivation index was calculated based on the AHP weights by adding the weighted deprivation indicators. Linear min-max normalization was applied to this deprivation index. Values closer to 1 represent higher deprivation. We call the result of this calculation the AHP-based deprivation index.
Ordered Weighted Averaging (OWA)
The Ordered Weighted Averaging (OWA) operator provides an extension of the Boolean and weighted aggregation operations [39,41]. It ranks the criteria in an MCE and addresses the uncertainty arising from criteria interaction [24]. OWA works not only with criteria weights ($w_j$, $j = 1, 2, 3, \ldots, n$) but principally with order weights ($v_j$, $j = 1, 2, 3, \ldots, n$). Criteria weights are assigned to each criterion and indicate the level of importance of each criterion [42]. We applied AHP to calculate the criteria weights. Order weights, on the other hand, depend on the ranking of each criterion rather than on its attributes.
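Before moving on to the order weights, the AHP weight derivation and consistency check described above can be sketched in plain Python. This is a minimal sketch: the 3x3 pairwise matrix is a hypothetical toy example, not the paper's twelve-indicator matrix of Table 2, and the RI dictionary follows Saaty's commonly published values (with 1.48 for twelve criteria, as in the text).

```python
from statistics import fmean

# Random consistency indices RI by number of criteria (Saaty's values;
# the text uses RI = 1.48 for twelve criteria).
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
      8: 1.41, 9: 1.45, 10: 1.49, 11: 1.51, 12: 1.48}

def ahp_weights(A):
    """Criteria weights and consistency ratio from a reciprocal
    pairwise comparison matrix A (n x n, importance scores 1-9)."""
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    # normalised matrix: each cell divided by its column sum
    N = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    w = [fmean(N[i]) for i in range(n)]            # eigenvector estimate
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = fmean(Aw[i] / w[i] for i in range(n))
    cr = ((lam_max - n) / (n - 1)) / RI[n]         # CR = CI / RI
    return w, cr

# Toy example: criterion 1 moderately more important than criteria 2 and 3.
A = [[1.0, 3.0, 3.0],
     [1/3, 1.0, 1.0],
     [1/3, 1.0, 1.0]]
w, cr = ahp_weights(A)   # w ~ [0.6, 0.2, 0.2], cr ~ 0 (judgments consistent)
```

A CR below 0.10, as in the paper, indicates that the expert judgments are acceptably consistent.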
Order weights are assigned differentially at each location, depending on the respective criterion rank order [43]. For example, suppose v_1, v_2 and v_3 are order weights that have to be applied to the AHP-based weighted criteria X, Y and Z. If at one location the rank order is YXZ and at another location it is ZYX, the order weights are assigned as v_1*Y + v_2*X + v_3*Z and v_1*Z + v_2*Y + v_3*X, respectively. The OWA operator is defined as follows [44-46]:

$$ OWA_i = \sum_{j=1}^{n}\left(\frac{u_j v_j}{\sum_{j=1}^{n} u_j v_j}\right) z_{ij} $$

Where u_j is the criterion weight reordered according to each criterion attribute value, v_j is the order weight, and z_ij is the sequence obtained by reordering the attribute values. When using different order weights, different results can be produced. From a GIS-based perspective, therefore, using different Boolean operations such as union (OR) and intersection (AND), or weighted linear combination [43-46], will result in different spatial patterns. The key issue in OWA is to obtain the order weights. We used linguistic quantifiers to support the production of the order weights [42,44]. Linguistic quantifiers make it possible to translate natural language into mathematical formulations [42]: if we consider that Q is a linguistic quantifier, it can be represented as a fuzzy set over the interval 0 to 1, and if we consider that p is a value belonging to this interval, Q(p) represents the compatibility of p with the concept referred to by the quantifier Q [42,44] and is denoted by:

$$ Q(p) = p^{\alpha}, \quad \alpha > 0 $$

Where the parameter α changes depending on the linguistic quantifier, and can vary from the "at least one" to the "all" quantifier [38,42,44]. We used regular increasing monotone (RIM) quantifiers that produce order weights related to measures of ORness and tradeoff [42,44,46,47]. Table 4 shows the different values that the parameter α can take.
Table 4 Properties of Regular Increasing Monotone (RIM) quantifiers

In the OWA procedure, it is very important to evaluate the decision strategies. These strategies range between extremely optimistic and extremely pessimistic, and are to be interpreted according to the following logic: in the extremely optimistic strategy, the decision maker's attitude leads to weighting the highest possible outcome value (for this study the outcome value is the value of deprivation). From a probabilistic perspective, an extremely optimistic strategy is a situation in which a probability of 1, the highest probability, is assigned to the highest value at each location [45]. In other words, the highest order weight is assigned to the highest value at each location. The linguistic quantifier for the extremely optimistic strategy is "At least one", and this quantifier is equivalent to the logical OR (union) [44], meaning that something is true if at least one logical operand is true.

The other extreme is the extremely pessimistic strategy, where the decision maker's attitude leads to weighting the lowest possible outcome value. From a probabilistic perspective, in this strategy the probability of 1 is assigned to the lowest value at each location [45]. The linguistic quantifier for the extremely pessimistic strategy is "All", and this quantifier is equivalent to the logical AND (intersection) [44], meaning that something is true only if all logical operands are true.

The neutral decision strategy represents a full tradeoff between criteria, where equal order weights are applied to all possible values at each location. When increasing the degree of optimism from the neutral strategy, greater order weights are assigned to the higher criterion values and smaller weights to the lower criterion values.
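To make the effect of these decision strategies concrete, the order weights generated by a RIM quantifier Q(p) = p^α can be sketched as follows, assuming equal criteria weights so that v_j = Q(j/n) − Q((j−1)/n) (the function name is ours):

```python
import numpy as np

def rim_order_weights(n, alpha):
    """Order weights for n ranked positions from the RIM quantifier
    Q(p) = p**alpha, assuming equal criteria weights."""
    p = np.arange(n + 1) / n        # cumulative proportions 0, 1/n, ..., 1
    return np.diff(p ** alpha)      # v_j = Q(j/n) - Q((j-1)/n)

# alpha = 1 ("Half"): equal order weights, the neutral full-tradeoff strategy
neutral = rim_order_weights(4, 1.0)      # [0.25, 0.25, 0.25, 0.25]
# alpha = 2 ("Many"): weight shifts toward the lower-ranked (smaller) values
pessimistic = rim_order_weights(4, 2.0)  # [0.0625, 0.1875, 0.3125, 0.4375]
```

Very small α values push almost all weight onto the highest-ranked value (the optimistic "At least one" end), while very large α values push it onto the lowest-ranked value (the pessimistic "All" end).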
In this study, we used the following GIS-based MCE equation to calculate order weights for OWA [44]:

$$ v_j = \left(\sum_{k=1}^{j} u_k\right)^{\alpha} - \left(\sum_{k=1}^{j-1} u_k\right)^{\alpha} $$

Finally, the linguistic quantifier-based OWA is defined as follows [44]:

$$ OWA_i = \sum_{j=1}^{n}\left(\left(\sum_{k=1}^{j} u_k\right)^{\alpha} - \left(\sum_{k=1}^{j-1} u_k\right)^{\alpha}\right) z_{ij} $$

Table 5 provides an illustration of how to compute OWA_i for the case of α = 2 (the RIM quantifier "Many"), considering four hypothetical variables, each one with its respective weight.

Table 5 Illustration of OWA calculation for four criteria values, for the linguistic quantifier α = 2

The process described in our illustration was applied to all 4034 census blocks of our study area, for all 12 chosen indicators, for each one of the seven quantifiers: At least one (Extremely optimistic), Few (Very optimistic), Some (Optimistic), Half (Neutral), Many (Pessimistic), Most (Very pessimistic) and All (Extremely pessimistic). In order to process this large amount of information, we developed a tool to compute the Ordered Weighted Average with fuzzy quantifiers based on the method presented by Malczewski [44]. The tool is implemented as a Python toolbox in ArcGIS software (ESRI, Redlands, USA). Python is an open-source programming language that can be used in a wide variety of software application domains. Our Python toolbox uses NumPy, a Python package for scientific computing. During computation, NumPy applies the appropriate mathematical functions to compute the Ordered Weighted Average. Using the tool requires entering a feature layer with the criteria as attributes. The graphical user interface of the tool is displayed in Figure 2.
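The quantifier-based OWA that the toolbox computes can be sketched for a single census block as follows. This is a minimal NumPy sketch, not the toolbox code itself; it assumes, as in Malczewski's formulation, that the attribute values are sorted in descending order and the criteria weights u_j are reordered with them:

```python
import numpy as np

def owa(values, weights, alpha):
    """Linguistic-quantifier OWA for one location: the criterion values
    are ranked in descending order, the criteria weights are reordered
    with them, and the order weights come from cumulative weight sums
    raised to the power alpha."""
    z = np.asarray(values, dtype=float)
    u = np.asarray(weights, dtype=float)
    order = np.argsort(-z)                  # descending rank order
    z, u = z[order], u[order]
    cum = np.cumsum(u)
    prev = np.concatenate(([0.0], cum[:-1]))
    v = cum ** alpha - prev ** alpha        # order weights v_j
    return float(np.sum(v * z))

vals, w = [0.8, 0.2, 0.5, 0.1], [0.25, 0.25, 0.25, 0.25]
print(owa(vals, w, 1.0))     # "Half": the weighted mean, 0.4
print(owa(vals, w, 0.001))   # "At least one": close to the maximum
print(owa(vals, w, 1000))    # "All": close to the minimum
```

Applying such a function per census block and per quantifier reproduces the seven-scenario computation described above.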
To use the tool, the user must browse to the feature layer, then select the criteria from the drop-down list and enter the weight for each criterion. After the criteria and the weights are entered, the fuzzy quantifier must be selected from a drop-down list. This list has seven decision strategies: At least one (Extremely optimistic), Few (Very optimistic), Some (Optimistic), Half (Neutral), Many (Pessimistic), Most (Very pessimistic) and All (Extremely pessimistic). After the decision strategy is selected, the location for the output feature layer must be entered. The output feature layer is a copy of the input feature layer with the OWA values attached as an attribute.

Graphical user interface of the tool developed to compute OWA with fuzzy quantifiers.

We applied the tool to compute the OWA for the 4034 census blocks. While the tool is running, the selected linguistic quantifier is translated to a numeric parameter α, where the values assigned are: 0,001 for At least one, 0,1 for Few, 0,2 for Some, 1 for Half, 2 for Many, 10 for Most and 1000 for All [44]. Using the tool, the OWA was computed for all seven decision strategies, yielding seven scenarios for the deprivation index. These seven scenarios were normalized on a scale from 0 to 1 using linear min-max normalization.

Spatial relationships between the different deprivation measures and health factors

Two health indicators were chosen to evaluate the relation of the OWA deprivation scenarios and the AHP-based deprivation index with health: the distance of each census block to the nearest health service and the percentage of people in each census block that have never had a live birth. These two health indicators represent two different direct measures of the health dimension: a spatial measure of distances and a social measure of a health outcome. For the indicator of distance to health services, first, the centroids for each of the 4034 census blocks were calculated.
Sizes of the census blocks differ all over the study area (from around 3200 square meters to more than 400 000 square meters); therefore, centroids are a good representation of each census block location. Then, 128 health services were identified in the study area and the distances from each census block centroid to the nearest health service were calculated. The indicator of the percentage of people that have never had a live birth was calculated for each census block, using information available in the 2010 Ecuadorian Population and Housing Census: the "number of people that have never had a live birth" and the population of each census block.

Geographically Weighted Regression (GWR) was applied considering the measures of deprivation and distance to health services as the explanatory variables. The indicator of the percentage of people that have never had a live birth was considered the dependent variable. A different GWR was run for each OWA scenario of deprivation and for the AHP-based deprivation index. GWR is an extension of the standard regression techniques that allows the parameters β_k to vary spatially. GWR evaluates the variations of the regression model relationships across space and, contrary to simple regressions, GWR allows local parameter estimates [48-50]. The GWR model can be written as:

$$ Y(s_i) = \beta_0(s_i) + \sum_{k=1}^{M} \beta_k(s_i) X_k(s_i) + \varepsilon(s_i) $$

This equation means that at every location s_i, all coefficients β_k need to be estimated, and ε(s_i) is a random error with a mean of zero and a constant variance [50]. The estimation of the coefficients β_k requires the weighting of all observations, and the weights are a function of the distance between the location s_i and the observations around this location [49].
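The local estimation can be sketched as a weighted least-squares fit at each location. This is an illustrative sketch, not the software used in the study, and it assumes the Gaussian distance-decay kernel reported for the fitted models:

```python
import numpy as np

def gwr_local_betas(X, y, coords, s_i, bandwidth):
    """Estimate the local GWR coefficients at location s_i by weighted
    least squares, with Gaussian kernel weights w_ij = exp(-(h_ij/b)^2)."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    h = np.linalg.norm(coords - s_i, axis=1)     # distances h_ij to s_i
    w = np.exp(-(h / bandwidth) ** 2)            # Gaussian kernel weights
    XtW = X1.T * w                               # same as X1.T @ diag(w)
    return np.linalg.solve(XtW @ X1, XtW @ y)    # (X'WX)^-1 X'Wy

# Toy data: y depends exactly linearly on one explanatory variable,
# so every local fit recovers the same coefficients (intercept 2, slope 3)
coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2.0 + 3.0 * X[:, 0]
betas = gwr_local_betas(X, y, coords, s_i=np.array([1.0, 0.0]), bandwidth=1.0)
```

Repeating this fit at every census block centroid yields the spatially varying coefficient surfaces that GWR produces.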
The function to calculate the weights is the kernel function:

$$ w_{ij} = \exp\left(-\frac{h_{ij}^2}{b^2}\right) $$

Where w_ij is the weight of location s_j that is used to estimate a parameter β_k at the location s_i, h_ij is the distance between the observations s_j and s_i, and b is the kernel bandwidth [50]. The aim of applying GWR in this study is to explore how the AHP-based deprivation index and its OWA scenarios relate to health factors by determining the spatial correlations of these relationships.

The GWR technique is complemented with the application of the Global Moran's I. Moran's I is an index to measure spatial autocorrelation by comparing the value of a variable at one location with the values of this variable at all other locations [51]. Moran's I is defined by the following equation:

$$ I = \frac{n \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij}\left(x_i - \overline{x}\right)\left(x_j - \overline{x}\right)}{\left(\sum_{i} \sum_{j \neq i} w_{ij}\right) \sum_{i=1}^{n} \left(x_i - \overline{x}\right)^2} $$

Where n is the number of spatial units to be taken into account, x_i is the value of a unit, x̄ is the mean of all values across all n units, and w_ij is the spatial weight matrix, a function of distance that describes the neighborhood of the spatial units. A positive Moran's I indicates the existence of clusters of similar values, while a negative Moran's I indicates clusters of dissimilar values. A Moran's I closer to 0 indicates weak autocorrelation [52].

Deprivation index and its OWA scenarios

The AHP-based deprivation index results (Figure 3) display the presence of medium and high levels of deprivation (0,511 to 1,000) in specific zones of the study area, even though most of Quito City has low values of deprivation.
Higher levels of deprivation appear at the edges of the study area, and represent relatively recently settled neighborhoods created by socio-economically more deprived people. On the other hand, lower deprivation levels (0,000 to 0,146) are commonly present on the northern side of the city, a part of Quito generally inhabited by people with better socio-economic conditions. Moderately deprived areas are located in the south, a very industrial and commercial area traditionally inhabited by blue-collar workers. These results coincide with what was explained in the study area description and confirm the consistency (consistency ratio CR: 0,0019) of the AHP weights derived from the experts' judgments. The AHP-based deprivation index has proved very useful for evaluating levels of socio-economic deprivation under our human rights-based approach: deprivation caused by unsatisfied needs due to a lack of basic services and capabilities related to human rights. For example, people with lower levels of education and health who live in unworthy households with limited or no access to basic services are considered to have high levels of deprivation in many socio-economic dimensions.

AHP-based deprivation index result.

Seven OWA scenarios were obtained: "At least one", "Few", "Some", "Half", "Many", "Most" and "All" (Figure 4). The "At least one" deprivation scenario represents the extremely optimistic strategy, where the highest possible deprivation values are shown for each census block. In this scenario, decision makers have a high risk-taking propensity and weigh "positive outcomes" more highly [53], with "positive outcomes" meaning "higher values of deprivation criteria". In this scenario, the indicator with the maximum value gets full weighting [54]. The results are census blocks with higher deprivation scores than in the AHP-based deprivation index.
The "All" deprivation scenario represents the extremely pessimistic strategy, where the lowest possible deprivation values are shown for each census block; the indicator with the minimum value gets full weighting [54] and the census blocks have lower deprivation scores than in the AHP-based deprivation index. The "Half" deprivation scenario is equivalent to the AHP-based deprivation index, because equal order weights are applied to all indicators. The deprivation scenarios "Few" and "Some" are relatively optimistic scenarios, where greater order weights are assigned to higher criterion values and smaller weights to lower criterion values. The deprivation scenarios "Many" and "Most" are relatively pessimistic scenarios, where greater order weights are assigned to lower criterion values and smaller weights to higher criterion values.

OWA scenarios of the deprivation index.

The deprivation scenario with the linguistic quantifier "All" (logical AND) is considered the "worst-case scenario" [44]; in the case of our study, this scenario implies that no action needs to be taken regarding socio-economic deprivation in almost the entire territory of Quito City. Nevertheless, this scenario could be useful to detect the most deprived areas, and can discern areas where immediate action is required to reduce socio-economic deprivation. On the other hand, the deprivation scenario with the linguistic quantifier "At least one" (logical OR), representing extreme optimism, shows larger deprivation areas. With this strategy, a larger number of deprivation areas should be considered for socio-economic recuperation, but this may not be feasible for decision makers due to time and financial constraints. The "Half" scenario means that if the decision makers' risk-taking attitude is neutral, the AHP deprivation index constructed from the experts' judgments can simply be considered.
The "Few" and "Some" scenarios could support decision making that identifies areas where an extensive social-improvement program could work for most of the city, while the scenarios "Many" and "Most" could support decision making that focuses on taking action in highly socio-economically deprived areas without excessive financial and time investment. The GWR models show the goodness-of-fit results for the spatial relationships between the different deprivation measures and health factors (Table 6). A Gaussian kernel was used to solve each local regression, and the extent of the kernel was determined using the Akaike Information Criterion (AIC). The AIC is a relative measure of statistical model quality that takes into account the statistical goodness of fit and the tradeoff of the parameters used in the model. There is no fixed range of values for this measure, and the best model is considered to be the one with the lowest AIC value. The GWR models with the best goodness of fit are the "AHP-based" model and the "Half" model. Other models with low AIC values are the "Some" and "Many" models, showing the importance of using OWA scenarios as tradeoffs between a neutral scenario and extreme scenarios when describing deprivation and health interactions. The models mentioned ("AHP-based", "Half", "Some" and "Many") also explain similar proportions of the dependent variable variance: between 58% and 59%. However, this does not mean that these regressions produce an optimal prediction of the dependent variable in all locations.

Table 6 GWR statistics for all regressions performed

Moran's I statistics identified clusters in the residual values of all the GWRs performed (Table 7). Clustering with high levels of significance indicates that explanatory variables are missing. In Moran's I, the null hypothesis is a random distribution of values.
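The Global Moran's I used for this residual check can be sketched as a minimal implementation of the equation given in the Methods; the binary chain weight matrix below is only a toy example:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x and spatial weight matrix W
    (diagonal entries assumed zero)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                 # deviations from the mean
    n = len(x)
    return n * (z @ W @ z) / (W.sum() * (z @ z))

# Toy weight matrix: four units on a line, adjacent units share weight 1
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i([1, 2, 3, 4], W))      # positive: similar values cluster
print(morans_i([1, -1, 1, -1], W))    # negative: dissimilar values adjacent
```

Values near zero, as obtained for the best-fitting models here, are consistent with the null hypothesis of randomly distributed residuals.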
Table 7 shows a random distribution in the models that showed the best goodness of fit ("AHP-based", "Half", "Some" and "Many"), as well as in the "Most" model. This means that these deprivation scenarios, in combination with the 'distance to health services' factor, could be explanatory variables to predict the percentage of people that have never had a live birth. The models with residual clusters at high levels of significance ("At least one", "Few", "All") are models that do not completely explain the health dependent variable.

Table 7 Moran's I statistics for the residuals of all regressions performed

Our AHP-based deprivation index is a multidimensional index based on a rights-based conceptual approach, useful for representing deprivation in the dimensions of education, health, employment and housing. We conclude that our deprivation index has the potential to explain the socio-economic deprivation in the study area accurately because of i) the importance of the rights-based indicators used, ii) the consistency of the experts' opinions in the AHP method, and iii) the several alternative deprivation scenarios, which allow decision makers to identify urgent zones that can be addressed efficiently as well as a broader spectrum of zones that can be addressed using more resources. These OWA deprivation scenarios can be considered useful tools for decision makers and health planners. The different decision strategies offer different options when dealing with socio-economic deprivation in the study area. If decision makers decide not to use the AHP-based deprivation index, they can opt for a variety of tradeoff deprivation scenarios ("Few", "Some", "Many" and "Most") that can guide them to where their work will yield better results by saving time and financial resources. The "All" scenario is also interesting when it comes to identifying very deprived zones.
These zones represent bigger gaps in quality of life, and the people living there should be considered a priority by health planners and city authorities. The GWR models show that the deprivation index and its scenarios can be related to health factors, and that several deprivation scenarios, in combination with the 'distance to health services' factor, could be considered explanatory variables to predict the percentage of people that have never had a live birth. One limitation of this study is that no uncertainty analysis was carried out for the OWA scenarios. Even though this was not an objective of this article, we consider that a future study could incorporate uncertainty analysis for the different OWA deprivation scenarios. Another limitation is that this study does not develop a complete statistical deprivation-health factor model. We reiterate that the GWR and Moran's I analyses should only be seen as exploratory, and more research regarding this issue is needed. Future research could include the incorporation of more explanatory health variables that could interact with the AHP-based deprivation index and the OWA deprivation scenarios. The identification of additional health problems that can be explained to some degree by the methods implemented in this study is also important. Further work could include variations of the Multi-Criteria evaluation used, for example, the use of different techniques to obtain criteria weights and order weights for the deprivation indicators. This study has several strengths, and can be considered one of the first instances where Multi-Criteria evaluation methods such as AHP and OWA are utilized to create a deprivation index and deprivation scenarios. A strength of this study is that the AHP-OWA approach captures quantitative and qualitative information to produce different scenarios that are useful for decision makers faced with different decision strategies due to constraints in time and financial resources.
A further value of our study is that the OWA method is spatial in the sense that it aggregates the criteria for each census block depending on their values, and this aggregation is done for all linguistic quantifiers. Another strength of this work is the fact that the OWA procedure was automated with the development of the Python toolbox, which allows more efficient calculation of OWA deprivation scenarios for future studies. The methodology described in this study can be applied in other regions of the world to develop spatial deprivation indices based on Multi-Criteria analysis. An important contribution of this study is that the mixed method of applying AHP to calculate deprivation criteria weights and OWA to create different deprivation scenarios is a methodology that can be carried out in other studies beyond Latin America. The indicators considered in this study are common Population and Housing Census variables. However, as AHP and OWA methods are techniques that can be adapted to specific problems and phenomena, future studies can use the methodology presented here considering different deprivation indicators. Furthermore, the methods and results shown in this study can be considered important tools to support health planners and decision makers.

References

Niggebrugge A, Haynes R, Jones A, Lovett A, Harvey I. The index of multiple deprivation 2000 access domain: a useful indicator for public health? Soc Sci Med. 2005;60:2743–53.
Pampalon P, Pamel D, Gamache P, Raymond G. A deprivation index for health planning in Canada. Chronic Dis Canada. 2009;29(4):178–91.
Pasetto R, Sampaolo L, Pirastu R. Measures of material and social circumstances to adjust for deprivation in small-area studies of environment and health: review and perspectives. Ann Ist Super Sanita. 2010;46(2):185–97.
Townsend P. Deprivation. J Soc Policy. 1987;16:125–46.
Testi A, Ivaldi E. Material versus social deprivation and health: a case study of an urban area. Eur J Health Econ. 2009;10:323–8.
Adams J, Ryan V, White M. How accurate are Townsend deprivation scores as predictors of self-reported health? A comparison with individual level data. J Public Health. 2004;27(1):101–6.
Boyle P, Gatrell A, Duke-Williams O. Do area-level population change, deprivation and variations in deprivation affect individual-level self-reported limiting long-term illness? Soc Sci Med. 2001;53:795–9.
Cabrera Barona P. A multiple deprivation index and its relation to health services accessibility in a rural area of Ecuador. In: Vogler R, Car A, Strobl J, Griesebner G, editors. GI_Forum 2014. Geospatial Innovation for Society: 1–4 July 2014; Salzburg. Wichmann/OAW; 2014. p. 188–91.
Havard S, Deguen S, Bodin J, Louis K, Laurent O, Bard D. A small-area index of socioeconomic deprivation to capture health inequalities in France. Soc Sci Med. 2008;67:2007–16.
Schuurman N, Bell N, Dunn JR, Oliver L. Deprivation indices, population health and geography: an evaluation of the spatial effectiveness of indices at multiple scales. J Urban Health. 2007;84(4):591–603.
Stjärne MK, Leon A, Hallqvist J. Contextual effects of social fragmentation and material deprivation on risk of myocardial infarction—results from the Stockholm Heart Epidemiology Program (SHEEP). Int J Epidemiol. 2004;33:732–41.
Pampalon R, Raymond G. A deprivation index for health and welfare planning in Quebec. Chronic Dis Canada. 2000;21(3):104–13.
Jarman B. Identification of underprivileged areas. Br Med J. 1983;286:1705–9.
Jarman B. Underprivileged areas: validation and distribution of scores. Br Med J. 1984;289:1587–92.
Carstairs V, Morris R. Deprivation: explaining differences in mortality between Scotland and England and Wales. Br Med J. 1989;299:886–9.
Folwell K. Single measures of deprivation. J Epidemiol Community Health. 1995;49(2):S51–6.
Bell N, Schuurman N, Hayes M. Using GIS-based methods of multicriteria analysis to construct socio-economic deprivation indices. Int J Health Geogr. 2007;6(17):1–19.
Lalloué B, Monnez JM, Padilla C, Kihal W, Le Meur N, Zmirou-Navier D, et al. A statistical procedure to create a neighborhood socioeconomic index for health inequalities analysis. Int J Equity Health. 2013;12(21):1–11.
Messer LC, Laraia BA, Kaufman JS, Eyster J, Holzman C, Culhane J, et al. The development of a standardized neighborhood deprivation index. J Urban Health. 2006;83(6):1041–62.
Hogan JW, Tchernis R. Bayesian factor analysis for spatially correlated data, with application to summarizing area-level material deprivation from census data. J Am Stat Assoc. 2004;99(466):314–24.
Jordan H, Roderick P, Martin D. The index of multiple deprivation 2000 and accessibility effects on health. J Epidemiol Community Health. 2004;58:250–7.
Benach J, Yasui Y. Geographical patterns of excess mortality in Spain explained by two indices of deprivation. J Epidemiol Community Health. 1999;53:423–31.
Feizizadeh B, Blaschke T. Landslide risk assessment based on GIS multi-criteria evaluation: a case study in Bostan-Abad County, Iran. J Earth Sci Eng. 2011;1:66–71.
Feizizadeh B, Blaschke T, Nazmfar H. GIS-based ordered weighted averaging and Dempster–Shafer methods for landslide susceptibility mapping in the Urmia Lake Basin, Iran. International Journal of Digital Earth; 2012.
Mideros A. Ecuador: defining and measuring multidimensional poverty, 2006–2010. Cepal Rev. 2012;108:49–67.
Ramírez R. La vida (buena) como riqueza de los pueblos. Hacia una socio ecología política del tiempo. Economía e Investigación IAEN; 2012.
Pan American Health Organization. Salud en las Américas. Publicaciones Científicas y Técnicas 636. Capítulo de Ecuador; 2012.
Instituto Nacional de Estadísticas y Censos. Censo de Población y Vivienda 2010. 2014. http://www.ecuadorencifras.gob.ec/banco-de-informacion/. Accessed 10 Oct 2014.
Lozano Castro A. Quito, Ciudad Milenaria, Forma y símbolo. 1st ed. Abya Yala; 1991.
Carrión F, Erazo Espinosa J. La forma urbana de Quito: una historia de centros y periferias.
Bulletin de l'Institut Français d'Études Andines. 2012;41(3):503–22.
Yu J, Chen Y, Wu J. Cellular automata based spatial multi-criteria land suitability simulation for irrigated agriculture. Int J Geogr Inf Sci. 2011;25(1):131–48.
Saaty TL. A scaling method for priorities in hierarchical structure. J Math Psychol. 1977;15(3):34–9.
Feizizadeh B, Blaschke T. GIS-multicriteria decision analysis for landslide susceptibility mapping: comparing three methods for the Urmia lake basin, Iran. Nat Hazards. 2013;65:2105–28.
Marinoni O. Implementation of the analytical hierarchy process with VBA in ArcGIS. Comput Geosci. 2004;30:637–46.
Joerin F, Theriault M, Musy A. Using GIS and outranking multicriteria analysis for land-use suitability assessment. Int J Geogr Inf Sci. 2001;15(2):153–74.
Carver S. Integrating multi-criteria evaluation with geographical information systems. Int J Geogr Inf Sci. 1991;5(3):321–39.
Ramanathan R. A note on the use of the analytic hierarchy process for environmental impact assessment. J Environ Manag. 2001;63:27–35.
Boroushaki S, Malczewski J. Implementing an extension of the analytical hierarchy process using ordered weighted averaging operators with fuzzy quantifiers in ArcGIS. Comput Geosci. 2008;34:399–410.
Gómez Delgado M, Barredo Cano JI. Sistemas de Información geográfica y evaluación multicriterio en la ordenación del territorio. RA-MA Editorial; 2005.
Saaty TL. The analytic hierarchy process: planning, priority setting, resource allocation. McGraw-Hill; 1980.
Malczewski J. GIS‐based multicriteria decision analysis: a survey of the literature. Int J Geogr Inf Sci. 2006;20(7):703–26.
Meng Y, Malczewski J, Boroushaki S. A GIS-based multicriteria decision analysis approach for mapping accessibility patterns of housing development sites: a case study in Canmore, Alberta. J Geogr Inf Syst. 2011;3:50–61.
Jian H, Eastman R. Application of fuzzy measures in multi-criteria evaluation in GIS. Int J Geogr Inf Sci. 2000;14(2):173–84.
Malczewski J.
Ordered weighted averaging with fuzzy quantifiers: GIS-based multicriteria evaluation for land-use suitability analysis. Int J Appl Earth Observation Geoinf. 2006;8:270–7.
Malczewski J, Chapman T, Flegel C, Walters D, Shrubsole D, Healy MA. GIS-multicriteria evaluation with Ordered Weighted Averaging (OWA): case study of developing watershed management strategies. Environ Plan A. 2003;35(10):1769–84.
Yager RR. Quantifier guided aggregation using OWA operators. Int J Intell Syst. 1996;11:49–73.
Liu X, Shilian H. Orness and parameterized RIM quantifier aggregation with OWA operators: a summary. Int J Approx Reason. 2007;48:77–97.
Clement F, Orange D, Williams M, Mulley C, Epprecht M. Drivers of afforestation in Northern Vietnam: assessing local variations using geographically weighted regression. Appl Geogr. 2009;29:561–76.
Gao J, Li S. Detecting spatially non-stationary and scale-dependent relationships between urban landscape fragmentation and related factors using Geographically Weighted Regression. Appl Geogr. 2011;31:292–302.
Krivoruchko K. Spatial regression models: concepts and comparison. In: Spatial statistical data analysis for GIS users. 1st ed. Redlands, California: Esri Press; 2011. p. 483–537.
Braz Junior G, Cardoso de Paiva A, Corrêa S. Classification of breast tissues using Moran's index and Geary's coefficient as texture signatures and SVM. Comput Biol Med. 2009;39:1063–72.
Cai X, Wang D. Spatial autocorrelation of topographic index in catchments. J Hydrol. 2006;328:581–91.
Malczewski J. Integrating multicriteria analysis and geographic information systems: the ordered weighted averaging (OWA) approach. Int J Environ Technol Manage. 2006;6(1/2):7–19.
Amiri MJ, Mahiny AS, Hosseini SM, Jalali SG, Ezadkhasty Z, Karami SH. OWA analysis for ecological capability assessment in watersheds. Int J Environ Res. 2013;7(1):241–54.
Acknowledgements

The presented work has been funded by the Government of Ecuador through the Ecuadorian Secretary of Higher Education, Science, Technology and Innovation (SENESCYT) and the Ecuadorian Institute of Educational and Scholarship Credits (IECE) (Scholarship contract No. 375–2012). It has also been partially funded by the Austrian Science Fund (FWF) through the Doctoral College GIScience (DK W 1237 N23) at the University of Salzburg.

Author information

Interfaculty Department of Geoinformatics - Z_GIS, University of Salzburg, Schillerstraße 30, 5020 Salzburg, Austria
Pablo Cabrera-Barona, Thomas Murphy, Stefan Kienberger & Thomas Blaschke

Correspondence to Pablo Cabrera-Barona.

Authors' contributions

PCB conceived the study, drafted the manuscript, performed the AHP method and statistical analysis, and constructed the maps and tables. TM implemented the OWA method in Python and provided some text fragments for the description of the Python tool development in the Methods section. TB and SK were involved in the overall design of the manuscript, revised various versions of the manuscript and helped formulate the Discussion and Conclusion section. All authors read and approved the final manuscript.

Abbreviations

AHP: Analytical Hierarchy Process (Proceso Analítico Jerárquico)
OWA: Ordered Weighted Averaging (Sumatoria Lineal Ordenada Ponderada)
GWR: Geographically Weighted Regression (Regresión Ponderada Geográficamente)
\begin{document} \title{\textbf{Cartan Connection for \textsl{h-}Matsumoto change}} \author{M.$\,$K.$\,$\textsc{Gupta}, Abha \textsc{Sahu}\thanks{Corresponding author}~, Suman \textsc{Sharma}\\ \normalsize{Department of Mathematics}\\[-3mm] \normalsize{Guru Ghasidas Viswavidyalaya, Bilaspur (C.G.), India}\\[-3mm] \small{E-mail: [email protected], [email protected], [email protected]}} \date{} \maketitle \begin{abstract} In the present paper, we have studied the Matsumoto change $\overline{L}(x,y)= \frac{L^{2}(x,y)}{L(x,y) - \beta(x,y)} $ with an \textsl{h-}vector $b_{i}(x,y)$. We have derived some fundamental tensors for this transformation. We have also obtained the necessary and sufficient condition for which the Cartan connection coefficients for both the spaces $F^{n}=(M^n,L)$ and $\overline{F}^{\,n}=(M^{n},\overline{L})$ are the same.\\ \textbf{Keywords}: Finsler space, Matsumoto change and \textsl{h-}vector. \end{abstract} \section{Introduction} Let $M$ be an $ n $-dimensional $C^{\infty}$ manifold and let $T_{x}M$ denote the tangent space of $M$ at $x$. The tangent bundle of $M$ is the union of the tangent spaces, $TM:=\underset{x\in M}{\bigcup}T_{x}M$. A function $L: TM\rightarrow [0,\infty)$ is called a Finsler metric function if it has the following properties\cite{shen2001lectures} \\[-1cm] \begin{enumerate} \item $L$ is $C^{\infty}$ on $TM \backslash\{0\}$,\\[-8mm] \item For each $x\in{M}$, $L_{x}:= L\vert_{T_{x}M}$ is a Minkowski norm on $T_{x}M$. \end{enumerate} The pair $(M^{n}, L)$ is then called a Finsler space. The normalized supporting element, metric tensor, angular metric tensor and Cartan tensor are defined by $l_{i}=\dot{\partial_{i}}L$, $g_{ij}=\frac{1}{2}\dot{\partial_{i}}\dot{\partial_{j}}L^2 $, $ h_{ij}=L\dot{\partial_{i}}\dot{\partial_{j}}L $ and $ C_{ijk}=\frac{1}{2}\dot{\partial_{k}}g_{ij} $ respectively. The Cartan connection for the Finsler space $F^n$ is given by the triad $(F^{i}_{jk},N^{i}_{j},C^{i}_{jk})$. 
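These definitions can be sanity-checked symbolically. The block below is our own illustrative sketch (a simple two-dimensional Randers-type metric, not one taken from the paper) verifying the standard identity $g_{ij}=h_{ij}+l_{i}l_{j}$ relating the metric tensor, the angular metric tensor and the normalized supporting element:

```python
import sympy as sp

# illustrative 2D Randers-type metric L = sqrt(y1^2 + y2^2) + b*y1 (our choice)
y1, y2, b = sp.symbols('y1 y2 b', positive=True)
y = [y1, y2]
L = sp.sqrt(y1**2 + y2**2) + b*y1

l = [sp.diff(L, yi) for yi in y]                         # l_i = d_i L
g = [[sp.diff(L**2, yi, yj)/2 for yj in y] for yi in y]  # g_ij = (1/2) d_i d_j L^2
h = [[L*sp.diff(L, yi, yj) for yj in y] for yi in y]     # h_ij = L d_i d_j L

# the angular metric is the metric minus the rank-one part l_i l_j
for i in range(2):
    for j in range(2):
        assert sp.simplify(g[i][j] - h[i][j] - l[i]*l[j]) == 0
```

The identity follows from $\frac{1}{2}\dot{\partial_{i}}\dot{\partial_{j}}L^{2}=\dot{\partial_{i}}L\,\dot{\partial_{j}}L+L\,\dot{\partial_{i}}\dot{\partial_{j}}L$ and holds for any smooth $L$.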
The \textsl{h-}covariant and \textsl{v-}covariant derivatives of the tensor $T^{\,i}_j$ with respect to the Cartan connection are respectively given as follows:\\[-6mm] \begin{equation*}\begin{split} T^{\,i}_{j|k}&=\delta_{k}T^{\,i}_{j} + T^{\,r}_{j}F^{i}_{rk} - T^{\,i}_{r}F^{r}_{jk}\,,\\ T^{\,i}_{j}|_{k}&=\dot{\partial_{k}}T^{\,i}_{j} + T^{\,r}_{j}C^{i}_{rk} - T^{\,i}_{r}C^{r}_{jk}\,, \end{split} \end{equation*} where $\delta_{k}$ is the differential operator $\delta_{k}=\partial_{k}-N^{r}_{k}\dot{\partial_{r}}$. \\ In 1984, C. Shibata \cite{shibata1984invariant} introduced the change $\overline{L}=f(L,\beta)$ as a generalization of the Randers change, where $ f $ is a positively homogeneous function of degree one in $ L $ and $ \beta(x,y)=b_{i}(x)y^{i} $. This change is called a $ \beta $-change. An important class of $\beta$-changes is the Matsumoto change, given by $\overline{L}(x,y) = \frac{L^2}{L-\beta}\,.$ If $L(x,y)$ reduces to a Riemannian metric then $\overline{L}(x,y)$ becomes the Matsumoto metric. A famous example of a Finsler space, ``a slope measure of a mountain with respect to a time measure'', was given by M.$\,$Matsumoto \cite{matsumoto1989slope}. Due to his great contributions to Finsler geometry, this metric was named after him.\\ A.$\,$Tayebi et al. \cite{tayebi2014kropina} and Bankteswar Tiwari et al. \cite{tiwari2017generalized} discussed the Kropina change and the generalized Kropina change respectively, for the Finsler space with $m^{th}$ root metric. In 2017, A.$\,$Tayebi et al. \cite{tayebi2017matsumoto} obtained the condition for the Finsler space given by the Matsumoto change to be projectively related to the original Finsler space.\\[2mm] The concept of an \textsl{h-}vector $b_{i}$ was first introduced by H. Izumi \cite{izumi1980conformal}; such a vector is \textsl{v-}covariant constant with respect to the Cartan connection and satisfies $L C^{h}_{ij} b_{h}= \rho h_{ij} \,,$ where $\rho$ is a non-zero scalar function. 
He showed that the scalar $\rho$ depends only on the positional coordinates, \textit{i.e.} $\dot{\partial_{i}}\rho =0$. From the definition of an \textsl{h-}vector, it is clear that it depends not only on the positional coordinates but also on the directional arguments.\\[2mm] Gupta and Pandey \cite{gupta2009hypersurfaces , gupta2015finsler} discussed certain properties of the Randers change and the Kropina change with an \textsl{h-}vector. They \cite{gupta2015finsler} showed that \textit{if the \textsl{h-}vector is gradient then the scalar $\rho$ is constant}, \textit{i.e.} $\partial_{j}\rho=0$. In 2016, Gupta and Gupta \cite{gupta2017h, gupta2016hypersurface} analysed Finsler spaces subjected to an \textsl{h-}exponential change.\\ In the present paper, we have studied a Finsler metric defined by\\[-4mm] \begin{equation}\label{eq1} \overline{L}(x,y)= \frac{L^{2}(x,y)}{L(x,y) - b_{i}(x,y)y^{i}}\,, \end{equation} where $b_{i}(x,y)$ is an \textsl{h}-vector in $(M^{n},L)$.\\ The structure of this paper is as follows: In section $2$, we have obtained the expressions for different fundamental tensors of the transformed Finsler space. In section $3$, we have observed how the Cartan connection coefficients change under the Matsumoto change with an \textsl{h}-vector and have also found the necessary and sufficient condition for both connection coefficients to be the same. \begin{Remark} H.$\,$S.$\,$Shukla et al. \emph{\cite{shuklamatsumoto}} also discussed the Matsumoto change of a Finsler metric by an \textsl{h}-vector. Unfortunately, their results are incorrect because of an erroneous computation in Lemma $1.1$ of \emph{\cite{shuklamatsumoto}}. \end{Remark} \section{The Finsler space $\overline{F}^{\,n}= (M^{n},\overline{L})$} Let the Finsler space transformed by the Matsumoto change \eqref{eq1} with an \textsl{h}-vector be denoted by $\overline{F}^{\,n}= (M^{n},\overline{L})$. If we denote $\beta=b_{i}(x,y)y^{i}$, then the indicatory property of the angular metric tensor yields $\dot{\partial_{j}}\beta=b_{j}\,$. 
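The step behind $\dot{\partial_{j}}\beta=b_{j}$ can be spelled out (this short derivation is ours): since the \textsl{h-}vector is \textsl{v-}covariant constant, $\dot{\partial_{j}}b_{i}=b_{r}C^{r}_{ij}=\frac{\rho}{L}h_{ij}$, and the angular metric tensor is indicatory, $h_{ij}y^{i}=0$, so

```latex
\dot{\partial_{j}}\beta
   = \dot{\partial_{j}}\bigl(b_{i}y^{i}\bigr)
   = b_{j} + y^{i}\,\dot{\partial_{j}}b_{i}
   = b_{j} + \frac{\rho}{L}\,h_{ij}\,y^{i}
   = b_{j}\,.
```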
Throughout this paper, we have barred the geometrical objects associated with $\overline{F}^{\,n}$. \\From \eqref{eq1}, we get the normalized supporting element as \\[-6mm] \begin{equation}\label{eq2} \overline{l}_{i}= \frac{\tau}{(\tau-1)}\,l_{i}+ \frac{\tau^{2}}{(\tau-1)^{2}}\,m_{i}\,, \end{equation} \\[-1cm] where $\tau= \frac{L}{\beta}$ and $m_{i}= b_{i} - \frac{1}{\tau}l_{i}\,.$ \begin{Remark} The covariant vector $m_{i}$ satisfies the following relations \\ \emph{(i)}$\:\: m_{i}\neq 0 \qquad$\emph{(ii)}$\:\: m^{i}=g^{ij}m_{j}\qquad$ \emph{(iii)}$\:\: m^{i}m_{i}=b^2-\frac{1}{\tau^2}=m^2\qquad$\emph{(iv)}$ \:\: m_{i}y^{i}=0 \,.$ \end{Remark} Differentiating equation \eqref{eq2} with respect to $y^{j}$, and using the notation $L_{ij}=\dot{\partial_{j}}l_{i}$, we get \begin{equation*} \overline{L}_{ij}= \frac{\tau(\tau+\rho\tau-2)}{(\tau-1)^{2}}\, L_{ij}+\frac{2\tau^{2}}{\beta(\tau-1)^3}\,m_{i}m_{j}. \end{equation*} Therefore, the angular metric tensor $ \overline{h}_{ij}$ is obtained as \begin{equation} \overline{h}_{ij}=\frac{\tau^{2}(\tau+\rho\tau-2)}{(\tau-1)^{3}}\,h_{ij} + \frac{2\tau^4}{(\tau-1)^4}m_{i}m_{j}\,. \end{equation} The metric tensor $\overline{g}^{}_{ij}=\overline{h}_{ij}+\overline{l}_{i}\overline{l}_{j}$ is given by \begin{equation} \overline{g}^{}_{ij}=\frac{\tau^{2}(\tau+\rho\tau-2)}{(\tau-1)^{3}}\,g^{}_{ij}+\frac{\tau^{2}(1-\rho\tau)}{(\tau-1)^{3}}\,l_{i}l_{j}+\frac{\tau^{3}}{(\tau-1)^{3}}\,(m_{i}l_{j}+m_{j}l_{i})+ \frac{3\tau^4}{(\tau-1)^4}\,m_{i}m_{j}\,, \end{equation} which can be rewritten as\\[-5mm] \begin{equation}\label{eq3} \overline{g}^{}_{ij}= p\,g^{}_{ij}+ p^{}_{1}l_{i}l_{j}+ p^{}_{2}(m_{i}l_{j}+m_{j}l_{i})+p^{}_{3}\,m_{i}m_{j}\,, \end{equation} where \begin{equation*} p=\frac{\tau^{2}(\tau+\rho\tau-2)}{(\tau-1)^{3}}, \quad p^{}_{1}=\frac{\tau^{2}(1-\rho\tau)}{(\tau-1)^{3}}, \quad p^{}_{2}=\frac{\tau^{3}}{(\tau-1)^{3}}, \quad p^{}_{3}= \frac{3\tau^4}{(\tau-1)^4}\,. 
\end{equation*} \noindent The following lemma helps us to compute the inverse of metric tensor $\overline{g}^{}_{ij}\,$. \begin{Lemma} \emph{\cite{matsumoto1972c}:} Let $(m_{ij})$ be a non-singular matrix and $l^{}_{ij}= m_{ij}+n_{i}n_{j}$. The elements $l^{ij}$ of the inverse matrix, and the determinant of the matrix $(l^{}_{ij})$ are given by \\[-6mm] \begin{equation*} l^{ij}= m^{ij}-(1+n_{k}n^{k})^{-1}n^{i}n^{j},\quad det(l^{}_{ij})=(1+n_{k}n^{k})det(m_{ij}) \end{equation*} respectively, where $m^{ij}$ are elements of the inverse matrix of $(m_{ij})$ and $n^{k}=m^{ki} n_{i}$. \end{Lemma} The inverse metric tensor of $\overline{F}^{\,n}$ can be derived as follows:\\[-6mm] \begin{equation}\label{eq4} \overline{g}^{\,ij}= q\,g^{ij}+ q_{1}\,l^{i}l^{j}+ q_{2}\,(l^{i}m^{j}+m^{i}l^{j})+ q_{3}\,m^{i}m^{j}\,, \end{equation}\\[-12mm] where\\[-6mm] \begin{equation*} \quad q= \frac{1}{p}\,,\qquad q_{1}= \frac{-1}{2}\Big[\frac{p_{1}{p^{}_{3}}-p^{2}_{2}}{(p_{1}+p)p_{3}-p_{2}^{2}}+\frac{2p^{2}p^{2}_{2}p_{3}}{(3p+2p_{3}m^{2})\{(p_{1}+p)p_{3}-p_{2}^{2}\}^{2}}\Big] , \end{equation*} \begin{equation*} q_{2}=\frac{-2p_{2}p_{3}}{(3p+2p_{3}m^{2})\{(p_{1}+p)p_{3}-p_{2}^{2}\}}, \qquad q_{3}=\frac{-2p_{3}}{p(3p+2p_{3}m^{2})}\,. \end{equation*} The Cartan tensor $\overline{C}_{ijk}$ is obtained by differentiating the equation (\ref{eq3}) with respect to $ y^{k} $, as follows:\\[-8mm] \begin{equation}\label{eq6} \overline{C}^{}_{ijk}=p\,C^{}_{ijk}+V^{}_{ijk}\,, \end{equation}\\[-1cm] where\\[-12mm] \begin{equation*} V_{ijk}=K_{1}(h_{ij}m_{k}+h_{jk}m_{i}+h_{ki}m_{j})+K_{2}\,m_{i}m_{j}m_{k} \end{equation*} and\\[-12mm] \begin{equation*} K_{1}= \frac{\tau^{3}(\tau+3\rho\tau-4)}{2L(\tau-1)^4},\hspace{.3cm} K_{2}= \frac{6\tau^{4}}{\beta(\tau-1)^5}\,. 
\end{equation*}\\[-16mm] \begin{Remark} From above we can retrieve relations between the scalars as \begin{equation*} \frac{\partial\,p}{\partial\, \tau}=-\frac{2L}{{\tau}^2}\,K_{1}\,, \quad \frac{\partial\,p_{3}}{\partial\, \tau}=-\frac{2L}{{\tau}^2}\,K_{2}\,, \end{equation*} \begin{equation*} K_{1}=\frac{1}{2L}\left\lbrace p_{2} + p_{\,3}\left(\rho - \frac{1}{\tau} \right) \right\rbrace\,\quad \emph{and} \quad p_{\,1} + p_{2}\left(\rho - \frac{1}{\tau} \right) = 0\,. \end{equation*} \end{Remark} From equation \eqref{eq4} and \eqref{eq6}, we get the \textsl{(h)hv-}torsion tensor $\overline{C}^{\,i}_{jk}$\\[-6mm] \begin{equation}\label{eq7} \overline{C}^{\,i}_{jk}= C^{\,i}_{jk}+M^{\,i}_{jk}\,, \end{equation}where\\[-12mm] \begin{equation*}\begin{split} \quad M^{i}_{jk}&= q\,K_{1}(m_{k}h^{i}_{j}+m_{j}h^{i}_{k})+(q_{2}\,l^{i}+q_{3}\,m^{i})\left\lbrace 2K_{1}m_{j}m_{k}+\frac{p}{L} \rho \,h^{}_{jk}\right\rbrace \\ &+\left\lbrace q\,m^{i}+(q_{2}\,l^{i}+q_{3}\,m^{i})m^{2}\right\rbrace \left( K_{2}m_{j}m_{k}+K_{1}h^{}_{jk}\right). \end{split} \end{equation*} \section{Cartan Connection of the space $\overline{F}^{\,n}$} The Cartan connection for a Finsler space $\overline{F}^{\,n}$ is given by the traid $(\overline{F}^{\,i}_{jk},\overline{N}^{\,i}_{j},\overline{C}^{\,i}_{jk})$. The \textsl{v-}connection coefficient $ \overline{C}^{\,i}_{jk} $ is given by equation \eqref{eq7}. 
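The relations collected in the Remark of Section 2 can be verified by direct computation. A minimal symbolic check (our own sketch: $\rho$ is held fixed under $\partial/\partial\tau$ since it depends only on position, and $\beta$ is replaced by $L/\tau$):

```python
import sympy as sp

tau, rho, L = sp.symbols('tau rho L', positive=True)
beta = L/tau  # since tau = L/beta

# scalars of the transformed metric and Cartan tensor
p  = tau**2*(tau + rho*tau - 2)/(tau - 1)**3
p1 = tau**2*(1 - rho*tau)/(tau - 1)**3
p2 = tau**3/(tau - 1)**3
p3 = 3*tau**4/(tau - 1)**4
K1 = tau**3*(tau + 3*rho*tau - 4)/(2*L*(tau - 1)**4)
K2 = 6*tau**4/(beta*(tau - 1)**5)

assert sp.simplify(sp.diff(p,  tau) + (2*L/tau**2)*K1) == 0  # dp/dtau  = -(2L/tau^2) K1
assert sp.simplify(sp.diff(p3, tau) + (2*L/tau**2)*K2) == 0  # dp3/dtau = -(2L/tau^2) K2
assert sp.simplify(K1 - (p2 + p3*(rho - 1/tau))/(2*L)) == 0
assert sp.simplify(p1 + p2*(rho - 1/tau)) == 0
```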
Now, we obtain the \textsl{h-}connection coefficient $ \overline{F}^{\,i}_{jk} $ and the non-linear connection coefficient $ \overline{N}^{\,i}_{j} $.\\ First, we find the canonical spray of the transformed space $ \overline{F}^{\,n} $.\\ Differentiating equation \eqref{eq3} with respect to $x^{k}$, and using the definition of the \textsl{h-}covariant derivative, we obtain\\ \begin{equation}\label{eq8} \begin{split} \partial^{}_{k} \overline{g}^{}_{ij} = &\,p\,\partial^{}_{k} g^{}_{ij}+p^{}_{1}(l_{i}l_{r}F^{r}_{jk}+l_{j}l_{r}F^{r}_{ik}) +p^{}_{2}(\rho^{}_{k}h^{}_{ij}+l^{}_{i}b^{}_{j|k}+l^{}_{j}b^{}_{i|k}+m^{}_{r}F^{r}_{jk}l_{i}+m^{}_{r}F^{r}_{ik}l_{j}\\&+m^{}_{i}F^{r}_{jk}l_{r}+m^{}_{j}F^{r}_{ik}l_{r})+p^{}_{3}(m_{i}b^{}_{j|k}+m_{j}b^{}_{i|k}+m_{i}m_{r}F^{r}_{jk}+m_{j}m_{r}F^{r}_{ik})\\&+ 2(K_{1}h^{}_{ij}+2K_{2}m_{i}m_{j})(\beta^{}_{k}+ N^{r}_{k}m_{r})+K_{1}(h_{jr}N^{r}_{k}m_{i}+h_{ir}N^{r}_{k}m_{j})\,, \end{split} \end{equation} where $\partial_{k} \rho={\rho}_{|k}=\rho^{}_{k}$ and ${\beta}_{|k}={\beta}_{k}$.\\ \noindent Applying the Christoffel process with respect to the indices $i\,,j\,,k$ in the above equation, we obtain the Christoffel symbols as follows:\\[-4mm] \begin{equation}\begin{split} \overline{\gamma}_{ijk} = p{\gamma}_{ijk}&+ \mathfrak{S}_{ijk}\left\lbrace \frac{p^{}_{2}}{2}\rho_{k}h_{ij}+\left( \beta_{k} + N^{r}_{k}m_{r}\right) B_{ij}\right\rbrace + Q_{i}F_{jk}+Q_{k}F_{ji} +Q_{j}E_{ik}\\&+(\overline{g}_{rj}-p\,g_{rj})\Big\{{\gamma}^{\,r}_{ik} + g^{rt}(C_{ikm}N^{m}_{t}-C_{tkm}N^{m}_{i}-C_{itm}N^{m}_{k})\Big\}\,, \end{split}\end{equation} where the symbol $\mathfrak{S}_{ijk}$ is defined as $\mathfrak{S}_{ijk}\,U_{ijk}=U_{ijk}-U_{jki}+U_{kij}$ and we have used the notation\\[-6mm] \begin{equation*} Q_{i}=p^{}_{2}l_{i}+p^{}_{3}m_{i}\,, \quad B_{ij}=K^{}_{1}h_{ij}+K^{}_{2}m_{i}m_{j}\,, \end{equation*} \begin{equation*} 2E_{ij}=b_{i|j}+b_{j|i}\,, \!\qquad 2F_{ij}=b_{i|j}-b_{j|i}\,. 
\end{equation*} \\[-2cm] \begin{Remark} The tensors $Q_{i}$ and $ B_{ij} $ satisfy the following\\[2mm] \emph{(i)} $\quad \dot{\partial_{j}}Q_{i}=B_{ij}\qquad$ \emph{(ii)}$\quad B_{ij}= B_{ji}\qquad$ \emph{(iii)}$ \quad B_{ij}y^{i}=0 \,.$ \end{Remark} The Christoffel symbols of the second kind of the Finsler space $\overline{F}^{\,n}$ are given by\\[-4mm] \begin{equation}\begin{split}\label{eq9} \overline{\gamma}^{\,i}_{jk}={\gamma}^{\,i}_{jk} &+ (g^{it}-p\overline{g}^{\,it})(C_{jkm}N^{m}_{t} - C_{tkm}N^{m}_{j} - C_{jtm}N^{m}_{k}) \\[2mm]&+ \overline{g}^{\,is}\mathfrak{S}_{jsk}\left[ \left\lbrace \frac{p^{}_{2}}{2}\rho_{k}h_{js} + (\beta_{k} + N^{r}_{k}m_{r})B_{sj}\right\rbrace + Q_{j}F_{sk} + Q_{k}F_{sj}+ Q_{s}E_{jk} \right] \,. \end{split}\end{equation}\\[-3mm] Transvecting equation \eqref{eq9} by $ y^{j}y^{k} $ and using ${G}^{\,i}= \frac{1}{2}{\gamma}^{\,i}_{jk}y^{j}y^{k}$, we get\\[-6mm] \begin{equation}\label{eq15} \overline{G}^{\,i}={G}^{\,i} + {D}^{\,i}\,, \end{equation} where \begin{equation}\label{eq20} {D}^{\,i}=\frac{1}{2}\,\overline{g}^{\,is}\big[Q_{s}E_{oo} + 2p_2LF_{so}\big]\,. \end{equation} Thus, we have: \begin{Proposition} The spray coefficient of the transformed space is given by equation \eqref{eq15}. \end{Proposition} \begin{Remark} The zero `o' in the subscript is used to denote transvection by $y^{i}$, \textit{ i.e.} $ F_{so}=F_{si}y^{i} $. \end{Remark} Differentiating equation \eqref{eq15} with respect to $y^{j}$ and using $\dot{\partial_{j}}G^{i}=N^{i}_{j}$ and $\dot{\partial_{j}}\overline{g}^{\,is}= -2\,\overline{g}^{\,ir}\overline{C}^{\,s}_{rj}$, we get \begin{equation}\label{eq12} \overline{N}^{\,i}_{j}={N}^{\,i}_{j} + {D}^{\,i}_{j}\,, \end{equation}\\[-12mm] where \\[-8mm] \begin{equation}\label{eq30} {D}^{\,i}_{j}= \overline{g}^{\,ir} \Big\{ -2D^{m}(\,p\,C_{mrj} + V_{mrj}) + Q_{r}E_{oj} + E_{oo}B_{rj} +p_2LF_{rj} + Q_{j}F_{ro} + \frac{p_2}{2}\rho^{}_{k}y^{k}h_{rj}\Big\} . 
\end{equation} Thus, we have: \begin{Proposition} The non-linear connection coefficient of the transformed space is given by the equation \eqref{eq12}. \end{Proposition} Now, we are in a position to obtain the Cartan connection coefficients for the space $\overline{F}^{\,n}$. We know that the relation between the Christoffel symbols and the Cartan connection coefficients is given by \begin{equation*} {F}^{\,i}_{jk} = {\gamma}^{\,i}_{jk} + {g}^{\,is}({C}_{jkr}{N}^{r}_{s} - {C}_{skr}{N}^{r}_{j} - {C}_{jsr}{N}^{r}_{k})\,. \end{equation*} In view of equations \eqref{eq6}, \eqref{eq9} and \eqref{eq12}, we have \begin{equation*}\begin{split} \overline{F}^{\,i}_{jk} = {\gamma}^{\,i}_{jk} &+ (g^{it}-p\overline{g}^{\,it})(C_{jkm}N^{m}_{t} - C_{tkm}N^{m}_{j} - C_{jtm}N^{m}_{k}) \\ &+ \overline{g}^{\,is}\left\lbrace \mathfrak{S}_{jsk}\left( {\frac{p_2}{2}\rho^{}_{k}h_{js} + (\beta_{k} + N^{r}_{k}m_{r})B_{sj}}\right) + Q_{j}F_{sk} +Q_{k}F_{sj}+ Q_{s}E_{jk} \right\rbrace \\ &+ \overline{g}^{\,is}\Big\{(pC_{jkr} + V_{jkr})(N^{r}_{s} + D^{r}_{s}) - (pC_{skr} + V_{skr})(N^{r}_{j} + D^{r}_{j}) - (pC_{jsr} + V_{jsr})(N^{r}_{k} + D^{r}_{k})\Big\} \end{split}\end{equation*} which can be simplified as\\[-5mm] \begin{equation*}\begin{split} \overline{F}^{\,i}_{jk} = {F}^{\,i}_{jk} &+ \overline{g}^{\,is}\Big\{ \mathfrak{S}_{jsk}\Big(\frac{p_2}{2}\rho^{}_{k}h_{js} + \beta_{k}B_{js} - p\, C_{jsr}D^{r}_{k} - V_{jsr}D^{r}_{k}\Big) + Q_{j}F_{sk} + Q_{k}F_{sj} +Q_{s}E_{jk} \Big\}. \end{split}\end{equation*} The above equation can be rewritten as\\[-8mm] \begin{equation}\label{eq16} \overline{F}^{\,i}_{jk} = {F}^{\,i}_{jk} + {D}^{\,i}_{jk}\,, \end{equation}\\[-12mm] where\\[-8mm] \begin{equation}\begin{split}\label{eq10} {D}^{\,i}_{jk}=\overline{g}^{\,is}\Big\{ \mathfrak{S}_{jsk}\Big(\frac{p_2}{2}\rho^{}_{k}h_{js} + \beta_{k}B_{js} - p\, C_{jsr}D^{r}_{k} - V_{jsr}D^{r}_{k}\Big) + Q_{j}F_{sk} + Q_{k}F_{sj} + Q_{s}E_{jk} \Big\}\,. 
\end{split}\end{equation} Hence, we have: \begin{Theorem} The relation between the Cartan connection coefficients of $F^{n}$ and $\overline{F}^{\,n}$ is given by equation \eqref{eq16}. \end{Theorem} \begin{Remark}\label{R1} The tensors $D^{i}_{jk},\: D^{i}_{j}$ and $D^{i}$ are related as\\ \emph{(i)}$ \quad D^{\,i}_{jk}\,y^{k}=D^{\,i}_{j}\,, \qquad$ \emph{(ii)}$ \quad D^{\,i}_{j}\,y^{j}=2D^{\,i}\,, \qquad $ \emph{(iii)}$ \quad \dot{\partial_{j}}D^{\,i}=D^{\,i}_{j}.$ \end{Remark} Now, we want to find the condition under which the Cartan connection coefficients for both spaces $F^{n}$ and $\overline{F}^{\,n}$ are the same, \textit{i.e.} $\overline{F}^{\,i}_{jk} = {F}^{\,i}_{jk}$. In this case ${D}^{\,i}_{jk}=0$, which implies ${D}^{\,i}_{j}=0$ and hence ${D}^{\,i}=0$. Therefore the equation \eqref{eq20} gives \\[-8mm] \begin{equation*} 2\,p_2LF_{io} + E_{oo}Q_{i}=0\,, \end{equation*} \\[-8mm] which on transvection by $y^{i}$ gives $E_{oo} = 0$ and then $F_{io}=0$. Differentiating $E_{oo} = 0$ partially with respect to $y^{i}$ gives $E_{io}=0$. Therefore we have $E_{io}=0=F_{io}$, which implies $ b_{i|o}=b_{o|i}=\beta_{\,|i}=0$. Differentiating $ \beta_{\,|i} $ partially with respect to $ y^{j} $ and using the commutation formula $ \dot{\partial_{j}}(\beta_{\,|i})-(\dot{\partial_{j}}\beta)_{|i}=(\dot{\partial_{r}}\beta)C^{r}_{ij|o}\, $, we get \\[-6mm] \begin{equation}\label{eq31} b_{j|i}=-\,b_{r}C^{r}_{ij|o}\,. \end{equation} This will give us $F_{ij}\!=\!0$. Taking the \textsl{h-}covariant derivative of $LC^{r}_{ij}b_{r}=\rho h_{ij}$ and using ${\rho}^{}_{|k}\!=\!0$, $L_{|k}=0$ and $h_{ij|k}=0$, we get \\[-6mm] \begin{equation*} \left( C^{r}_{ij}b_{r}\right)_{|k}=\left( \frac{\rho}{L} h_{ij}\right)_{|k}=0 \,. \end{equation*}\\[-1cm] This gives \\[-12mm] \begin{equation*} C^{r}_{ij}b_{r|k} + C^{r}_{ij|k}b_{r}=0\,. 
\end{equation*} Transvecting by $ y^{k}$ and using $b_{r|o}=0$, we get $ C^{r}_{ij|o}b_{r} =0$ and then equation \eqref{eq31} gives $b_{i|j}=0 $, \textit{i.e.} the \textsl{h-}vector $b_{i}$ is parallel with respect to the Cartan connection of $F^{n}$.\\[3mm] \underline{Conversely}, if $b_{i|j}=0$ then we get $E_{ij}=F_{ij}=0$ and $\beta_{i}=\beta_{\,|i}=b_{j|i}\,y^{j}=0$. Then equation \eqref{eq20} reduces to $D^{i}=0$. From $F_{ij}=0$ we have $ {\rho_{i}}=0 $, which implies $ D^{i}_{j}=0$. Therefore, from equation \eqref{eq10}, we get $D^{i}_{jk}=0$, which gives $\overline{F}^{\,i}_{jk} = {F}^{\,i}_{jk}$. Thus, we have: \begin{Theorem}\label{T1} For the Matsumoto change with an \textsl{h-}vector, the Cartan connection coefficients for both spaces $F^{n}$ and $\overline{F}^{\,n}$ are the same if and only if the \textsl{h-}vector $b_{i}$ is parallel with respect to the Cartan connection of $F^{n}$. \end{Theorem} Now, differentiating equation \eqref{eq12} with respect to $ y^{k}$, and using $\dot{\partial_{k}}N^{i}_{j}=G^{i}_{jk}$, we obtain\\[-6mm] \begin{equation}\label{eq26} \overline{G}^{\,i}_{jk}=G^{i}_{jk}+\dot{\partial_{k}}\,D^{i}_{j}\,, \end{equation} where $ G^{i}_{jk} $ are the Berwald connection coefficients.\\ Now, if the \textsl{h-}vector $b_{i}$ is parallel with respect to the Cartan connection of $F^{n}$, then by Theorem \ref{T1}, the Cartan connection coefficients for both Finsler spaces $ F^{n} $ and $ \overline{F}^{\,n} $ are the same, \textit{i.e.} $D^{i}_{jk}=0$, which implies $D^{i}_{j}=0$. Then from equation \eqref{eq26}, we get $\overline{G}^{\,i}_{jk}=G^{i}_{jk}\,.$\\[2mm] \underline{Conversely}$\,$, if $\overline{G}^{\,i}_{jk}=G^{i}_{jk}\,$ then, from equation \eqref{eq26}, we have $\dot{\partial_{k}}\,D^{i}_{j}=0$, which on transvecting by $ y^{j}$ and using Remark \ref{R1}, gives $D^{i}_{k}=0\,$. 
Using the same procedure as in Theorem \ref{T1}, we get $b_{i|j}=0 $, \textit{i.e.} the \textsl{h-}vector $b_{i}$ is parallel with respect to the Cartan connection of $F^{n}$. \\ Thus, we have: \begin{Theorem} For the Matsumoto change with an \textsl{h-}vector, the Berwald connection coefficients for both spaces $F^{n}$ and $\overline{F}^{\,n}$ are the same if and only if the \textsl{h-}vector $b_{i}$ is parallel with respect to the Cartan connection of $F^{n}$. \end{Theorem} \section*{Conclusion} In the present paper, the Cartan connection of the transformed Finsler space has been obtained, and it has been shown that under the condition that the \textsl{h-}vector $b_{i}$ is parallel, \textit{i.e.} $ b_{i|j}=0 \,$, the Cartan connections of both spaces coincide.\\ \textit{ For this transformation one can also study further geometric properties of the transformed Finsler space, such as the curvature tensor, the torsion tensor, the T-tensor, etc.}\\ Gupta and Pandey \cite{gupta2015finsler} have proved that ``$\,$For the Kropina change with an \textsl{h-}vector, the Cartan connection coefficients for both spaces $F^{n}$ and $\overline{F}^{\,n}$ are the same if and only if the \textsl{h-}vector $b_{i}$ is parallel with respect to the Cartan connection of $F^{n}\,$". We observe here that the expansion of the Kropina change has a finite number of terms whereas that of the Matsumoto change has an infinite number of terms; nevertheless, in both cases the same result holds.\\ \textit{\bfseries The goal for future study in this area is to identify the class of changes with a parallel \textsl{h-}vector $b_{i}$ for which the Cartan connections of both Finsler spaces coincide}. \normalsize \end{document}
\begin{document} \title{The Quantum Absorption Refrigerator } \author{Amikam Levy and Ronnie Kosloff} \affiliation{ Institute of Chemistry The Hebrew University, Jerusalem 91904, Israel\\ } \begin{abstract} A quantum absorption refrigerator driven by noise is studied with the purpose of determining the limitations of cooling to absolute zero. The model consists of a working medium coupled simultaneously to hot, cold and noise baths. Explicit expressions for the cooling power are obtained for Gaussian and Poisson white noise. The quantum model is consistent with the first and second laws of thermodynamics. The third law is quantified: the cooling power ${\cal J}_c$ vanishes as ${\cal J}_c \propto T_c^{\alpha}$ when $T_c \rightarrow 0$, where $\alpha =d+1$ for dissipation by emission and absorption of quanta described by a linear coupling to a thermal bosonic field, with $d$ the dimension of the bath. \end{abstract} \pacs{03.65.Yz,05.70.Ln, 07.20.Pe,05.30.-d} \maketitle \section{Introduction} \label{sec:introduction} The absorption chiller is a refrigerator which employs a heat source to replace mechanical work for driving a heat pump \cite{jeff00}. The first such device, developed in 1850 by the Carr\'e brothers, became the first useful refrigerator. In 1926 Einstein and Szil\'ard invented an absorption refrigerator with no moving parts \cite{szilard1926}. This idea has recently been incorporated into an autonomous quantum absorption refrigerator with no external intervention \cite{k169,popescu10}. The present study is devoted to a quantum absorption refrigerator driven by noise. The objective is to study the scaling of the optimal cooling power when the absolute zero temperature is approached. This study is embedded in the field of {\em Quantum thermodynamics}, the study of thermodynamical processes within the context of quantum dynamics. Historically, consistency with thermodynamics led to Planck's law, the basis of quantum theory. 
Following the ideas of Planck on black body radiation, Einstein, five years later (1905), quantized the electromagnetic field \cite{einstein05}. {\em Quantum thermodynamics} is devoted to unraveling the intimate connection between the laws of thermodynamics and their quantum origin \cite{geusic67,spohn78,alicki79,k24,k122,k156,k169,lloyd,kieu04,segal06,bushev06,erez08,mahler08,allahmahler08,segal09,he09,mahlerbook,popescu10}. In this tradition the present study is aimed toward the quantum study of the third law of thermodynamics \cite{nerst06,landsberg56}, in particular quantifying the unattainability principle \cite{belgiorno03}: what is the scaling of the cooling power ${\cal J}_c$ of a refrigerator when the cold bath temperature approaches absolute zero, ${\cal J}_c \propto T_c^{\alpha}$ as $T_c \rightarrow 0$? \section{The quantum trickle} \label{sec:trickle} The minimum requirement for a quantum thermodynamical device is a system connected simultaneously to three reservoirs \cite{berry84}. These baths are termed the hot, cold and work reservoirs, as described in Fig. \ref{fig:1}. \begin{figure} \caption{The quantum trickle: A quantum heat pump designated by the Hamiltonian $\Op H_s$ coupled to a work reservoir with temperature $T_w$, a hot reservoir with temperature $T_h$ and a cold reservoir with temperature $T_c$. The heat and work currents are indicated. In steady state ${\cal J}_h+{\cal J}_c+{\cal P}=0$.} \label{fig:1} \end{figure} A quantum description requires a representation of the dynamics of the working medium and the three heat reservoirs. 
A reduced description is employed in which the dynamics of the working medium is described by the Heisenberg equation for the operator $\Op O$ for open systems \cite{lindblad76,breuer}: \begin{equation} \frac{d}{dt} \Op O ~~=~~ \frac{i}{\hbar} [ \Op H_s, \Op O ] +\frac{\partial \Op O}{\partial t}+ {\cal L}_h (\Op O)+ {\cal L}_c (\Op O)+ {\cal L}_w (\Op O)~, \label{eq:lvn} \end{equation} where $\Op H_s$ is the system Hamiltonian and ${\cal L}_g$ are the dissipative completely positive superoperators for each bath ($g=h,c,w$). A minimal Hamiltonian describing the essence of the quantum refrigerator is composed of three interacting oscillators: \begin{eqnarray} \begin{array}{rcl} \Op H_s &= &\Op H_0 ~+~ \Op H_{int}\\ \Op H_0 &=& \hbar \omega_h \Op a^{\dagger} \Op a +\hbar \omega_c \Op b^{\dagger}\Op b +\hbar \omega_w \Op c^{\dagger} \Op c \\ \Op H_{int}&=& \hbar \omega_{int} \left( \Op a^{\dagger} \Op b \Op c + \Op a \Op b^{\dagger} \Op c^{\dagger} \right)~. \end{array} \label{eq:hamil} \end{eqnarray} $\Op H_{int}$ represents an annihilation of excitations on the work and cold baths simultaneously with the creation of an excitation in the hot bath. In an open quantum system the superoperators ${\cal L}_g$ represent a thermodynamic isothermal partition allowing heat flow from the bath to the system. Such a partition is equivalent to the weak coupling limit between the system and bath \cite{k122}. The superoperators ${\cal L}_g$ are derived from the Hamiltonian: \begin{equation} \Op H = \Op H_s+\Op H_h+\Op H_c+\Op H_w +\Op H_{sh} + \Op H_{sc}+ \Op H_{sw}~, \label{eq:hamil1} \end{equation} where $\Op H_g$ are the bath Hamiltonians and $\Op H_{sg}$ represent the system-bath couplings. Each of the oscillators is linearly coupled to a heat reservoir; for example, for the hot bath: $\Op H_{sh} = \lambda_{sh} ( \Op a \Op A_h^{\dagger} + \Op a^{\dagger} \Op A_h)$. Each reservoir individually should equilibrate the working medium to thermal equilibrium with the reservoir temperature. 
In general, the derivation of a thermodynamically consistent master equation is technically very difficult \cite{alicki06}. Typical problems are approximations that violate the laws of thermodynamics. We therefore require that the master equations fulfill the thermodynamical laws. Under steady state conditions of operation they become: \begin{eqnarray} \begin{array}{rcl} {\cal J}_h+{\cal J}_c+{\cal P}&=&0\\ -\frac{{\cal J}_h}{T_h}-\frac{{\cal J}_c}{T_c}-\frac{{\cal P}}{T_w} &\ge& 0~, \end{array} \label{eq:thermo} \end{eqnarray} where ${\cal J}_k = \langle {\cal L}_k (\Op H) \rangle$. The first equality represents conservation of energy (first law) \cite{spohn78,alicki79}, and the second inequality represents positive entropy production in the universe $\Sigma_u \ge 0$ (second law). For refrigeration $T_w \ge T_h \ge T_c$. From the second law the scaling exponent $\alpha \ge 1$ \cite{k156}. \section{Noise driven refrigerator } {\bf Gaussian noise driven refrigerator}. In the absorption refrigerator the noise source replaces the work bath and its contact $ \hbar \omega_w \Op c^{\dagger} \Op c$ leading to: \begin{eqnarray} \begin{array}{rcl} \Op H_{int}&=& f(t)\left( \Op a^{\dagger} \Op b + \Op a \Op b^{\dagger} \right)= f(t) \Op X ~, \end{array} \label{eq:hamil2} \end{eqnarray} where $f (t) $ is the noise field. $\Op X=(\Op a^{\dagger} \Op b + \Op a \Op b^{\dagger})$ is the generator of a swap operation between the two oscillators and is part of a set of $SU(2)$ operators , $\Op Y=i(\Op a^{\dagger} \Op b - \Op a \Op b^{\dagger})$, $\Op Z = \left( \Op a^{\dagger} \Op a - \Op b^{\dagger} \Op b \right)$ and the Casimir $\Op N = \left( \Op a^{\dagger} \Op a + \Op b^{\dagger} \Op b \right)$. We first study a Gaussian source of white noise characterized by zero mean $\langle f(t) \rangle=0$ and delta time correlation $\langle f(t) f(t') \rangle = 2 \eta \delta(t-t')$. 
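Numerically, a white-noise field with $\langle f(t) \rangle=0$ and $\langle f(t) f(t') \rangle = 2 \eta \delta(t-t')$ is discretised by drawing, at each time step $dt$, an independent Gaussian number of variance $2\eta/dt$. The sketch below (our own, with arbitrary $\eta$ and $dt$) checks both moments:

```python
import numpy as np

eta, dt, n = 0.1, 1e-3, 200_000
rng = np.random.default_rng(0)

# <f(t) f(t')> = 2*eta*delta(t-t')  ->  discrete variance 2*eta/dt per step
f = rng.normal(0.0, np.sqrt(2*eta/dt), size=n)

mean_err = abs(f.mean()) / np.sqrt(2*eta/dt)   # sample mean, in units of one std
corr_err = abs(f.var()*dt/(2*eta) - 1.0)       # relative error of the correlation strength

assert mean_err < 0.05   # zero mean within sampling error
assert corr_err < 0.05   # correct delta-correlation normalisation
```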
The Heisenberg equation for a time independent operator $\Op O$ reduces to: \begin{equation} \frac{d}{dt} \Op O ~~=~~ i [ \Op H_s, \Op O ] +{\cal L}_n(\Op O)+{\cal L}_h (\Op O)+ {\cal L}_c (\Op O)~,\label{eq:glvn} \end{equation} where $\Op H_s = \hbar \omega_h \Op a^{\dagger} \Op a +\hbar \omega_c \Op b^{\dagger}\Op b $. The noise dissipator for Gaussian noise is ${\cal L}_n(\Op O) = -\eta [ \Op X, [\Op X , \Op O]]$ \cite{gorini76}. The next step is to derive the quantum master equation of each reservoir. We assume that the reservoirs are uncorrelated and also uncorrelated with the driving noise. These conditions simplify the derivation of ${\cal L}_h$, which becomes the standard energy relaxation term driving oscillator $ \hbar \omega_h \Op a^{\dagger} \Op a$ to thermal equilibrium with temperature $T_h$, while ${\cal L}_c$ drives oscillator $ \hbar \omega_c \Op b^{\dagger} \Op b$ to equilibrium at $T_c$ \cite{breuer}. \begin{eqnarray} \begin{array}{rcr} {\cal L}_h (\Op O )&~=~& \Gamma_h (N_h +1 ) \left( \Op a^{\dagger} \Op O \Op a -\frac{1}{2} \left\{\Op a^{\dagger} \Op a, \Op O \right\} \right)\\ &&~+~\Gamma_h N_h \left( \Op a \Op O \Op a^{\dagger} -\frac{1}{2} \left\{\Op a \Op a^{\dagger} , \Op O \right\} \right)\\ {\cal L}_c (\Op O )&~=~& \Gamma_c (N_c +1 ) \left( \Op b^{\dagger} \Op O \Op b -\frac{1}{2} \left\{\Op b^{\dagger} \Op b, \Op O \right\} \right)\\ &&~+~\Gamma_c N_c \left( \Op b \Op O \Op b^{\dagger} -\frac{1}{2} \left\{\Op b \Op b^{\dagger} , \Op O \right\} \right)\\ \end{array}~. \label{eq:relaxabsor} \end{eqnarray} In the absence of the stochastic driving field these equations drive oscillators $a$ and $b$ separately to thermal equilibrium provided that $N_h = (\exp(\frac {\hbar \omega_h}{k T_h})-1)^{-1} $ and $N_c = (\exp(\frac {\hbar \omega_c}{k T_c})-1)^{-1} $. The kinetic coefficients $\Gamma_{h/c}$ are determined from the baths' density functions \cite{k122}. The equations of motion are closed within the $SU(2)$ set of operators. 
To derive the cooling current ${\cal J}_c= \langle {\cal L}_c( \hbar \omega_c \Op b^{\dagger} \Op b)\rangle$, we solve for stationary solutions of $\Op N$ and $\Op Z$, obtaining: \begin{eqnarray} \begin{array}{rcl} {\cal J}_c &~=~& \hbar \omega_c\frac{(N_c-N_h)}{(2\eta)^{-1} +\Gamma_h^{-1} + \Gamma_c^{-1}} \end{array}~. \label{eq:Jc} \end{eqnarray} Cooling occurs for $N_c > N_h \Rightarrow \frac{\omega_h}{T_h} > \frac{\omega_c}{T_c}$. The coefficient of performance ($COP$) for the absorption chiller is defined by the relation $COP = \frac{{\cal J}_c}{{\cal J}_n}$; with the help of Eq. (\ref{eq:Jc}) we obtain the Otto cycle $COP$ \cite{jahnkemahler08}: \begin{equation} COP ~=~ \frac{\omega_c}{\omega_h - \omega_c} ~\le~ \frac{T_c}{T_h-T_c}~. \label{eq:COPstoch} \end{equation} A different viewpoint starts from the high temperature limit of the work bath $T_w$, based on the weak coupling limit in Eqs. (\ref{eq:hamil}) and (\ref{eq:hamil1}); then: \begin{eqnarray} \begin{array}{rcr} {\cal L}_w (\Op O )&~=~& \Gamma_w (N_w +1 ) \left( \Op a^{\dagger} \Op b \Op O \Op b^{\dagger} \Op a -\frac{1}{2} \left\{\Op a^{\dagger} \Op a \Op b \Op b^{\dagger} , \Op O \right\} \right)\\ &&~+~\Gamma_w N_w \left( \Op a \Op b^{\dagger} \Op O \Op a^{\dagger} \Op b -\frac{1}{2} \left\{\Op a \Op a^{\dagger} \Op b^{\dagger} \Op b , \Op O \right\} \right)\\ \end{array}~. \label{eq:relaxw} \end{eqnarray} where $N_w = (\exp(\frac {\hbar \omega_w}{k T_w})-1)^{-1} $. At finite temperature ${\cal L}_w(\Op O)$ does not lead to a closed set of equations. But in the limit of $T_w \rightarrow \infty$ it becomes equivalent to the Gaussian noise generator: ${\cal L}_w (\Op O)= -\eta/2 \left( [ \Op X , [\Op X, \Op O]]+ [ \Op Y , [\Op Y, \Op O]] \right)$, where $\eta= \Gamma_w N_w$. This noise generator leads to the same current ${\cal J}_c$ and $COP$ as Eqs. (\ref{eq:Jc}) and (\ref{eq:COPstoch}). We conclude that Gaussian noise represents the singular bath limit equivalent to $T_w \rightarrow \infty$. 
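To make Eqs. (\ref{eq:Jc}) and (\ref{eq:COPstoch}) concrete, the following sketch evaluates the cooling current and checks the Carnot bound for one illustrative parameter set; the frequencies, temperatures and rates are our own arbitrary choices, in units $\hbar = k = 1$:

```python
import math

hbar = k = 1.0  # natural units

def n_bose(omega, T):
    # thermal occupation N = (exp(hbar*omega/(k*T)) - 1)^(-1)
    return 1.0 / (math.exp(hbar*omega/(k*T)) - 1.0)

omega_h, omega_c = 2.0, 1.0     # illustrative oscillator frequencies
T_h, T_c = 2.0, 1.5             # cooling window: omega_h/T_h > omega_c/T_c
Gamma_h = Gamma_c = 0.1
eta = 0.05                      # Gaussian-noise strength

N_h, N_c = n_bose(omega_h, T_h), n_bose(omega_c, T_c)
J_c = hbar*omega_c*(N_c - N_h) / (1/(2*eta) + 1/Gamma_h + 1/Gamma_c)

COP = omega_c/(omega_h - omega_c)   # Otto COP
COP_carnot = T_c/(T_h - T_c)

assert N_c > N_h and J_c > 0        # refrigeration condition
assert COP <= COP_carnot            # second-law bound
```

Since $\omega_c/\omega_h < T_c/T_h$ in the cooling window and $x \mapsto x/(1-x)$ is increasing, the Otto $COP$ is automatically below the Carnot value.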
As a result the entropy generated by the noise is zero. The solutions are consistent with the first and second laws of thermodynamics. The $COP$ is restricted by the Carnot $COP$. For low temperatures the optimal cooling current can be approximated by ${\cal J}_c \simeq \omega_c \Gamma_c N_c$. Coupling to a thermal bosonic field, such as an electromagnetic or acoustic phonon field, implies $\Gamma_c \propto \omega_c^{d}$, where $d$ is the heat bath dimension. Optimizing the cooling current with respect to $\omega_c$, one obtains that the exponent $\alpha$ quantifying the third law, ${\cal J}_c \propto T_c^{\alpha}$, is given by $\alpha = d +1$. {\bf Poisson noise driven refrigerator}. Poisson white noise can be described as a sequence of independent random pulses with exponential inter-arrival times. These impulses drive the coupling between the oscillators in contact with the hot and cold baths, leading to \cite{luczka91,alicki06}: \begin{eqnarray} \label{eq:master-eq} \begin{array}{rcl} \dfrac{d \Op O}{dt}&=&(i/\hbar)[\Opt H,\Op O] - (i/\hbar)\lambda \langle \xi \rangle [\Op X,\Op O] \\ &&+\lambda\left( \int^{\infty}_{-\infty}d\xi P(\xi)e^{(i/\hbar)\xi \Op X}\Op O e^{(-i/\hbar)\xi \Op X} -\Op O \right)~, \end{array} \end{eqnarray} where $ \Opt H $ is the total Hamiltonian including the baths, $\lambda$ is the rate of events, and $\xi$ is the impulse strength, distributed according to $P(\xi)$. Using the Hadamard lemma and the fact that the operators form a closed $SU(2)$ algebra, we can separate the noise contribution into its unitary and dissipative parts, leading to the master equation \begin{equation} \label{eq:vn} \dfrac{d \Op O}{dt}=(i/\hbar)[\Opt H,\Op O]+(i/\hbar)[\Op H^{\prime} ,\Op O]+ {\cal L}_n(\Op O)~. \end{equation} The unitary part is generated by the additional Hamiltonian $ \Op H^{\prime}= \hbar\epsilon \Op X $ with interaction strength \begin{equation} \epsilon= -\dfrac{\lambda}{2}\int d\xi P(\xi)\left(2\xi/\hbar -\sin(2\xi /\hbar)\right)\nonumber~.
\end{equation} This term can cause a direct heat leak from the hot to the cold bath. The noise generator ${\cal L}_n(\Op O)$ can be reduced to the form $ {\cal L}_n(\Op O )= -\eta [\Op X,[\Op X,\Op O]]~, $ with a modified noise parameter: \begin{equation} \eta=\dfrac{\lambda}{4}\left( 1-\int d\xi P(\xi)\cos(2\xi /\hbar)\right) \nonumber~. \end{equation} The Poisson noise generates an effective Hamiltonian which is composed of $\Opt H$ and $\Op H^{\prime}$, modifying the energy levels of the working medium. This new Hamiltonian structure has to be incorporated in the derivation of the master equation; otherwise the second law will be violated. The first step is to rewrite the system Hamiltonian in its dressed form. A new set of bosonic operators is defined: \begin{eqnarray} \begin{array}{l} \Op A_{1} = \Op a \cos(\theta) +\Op b \sin(\theta) \\ \Op A_{2} = \Op b \cos(\theta) -\Op a \sin(\theta) ~. \end{array} \end{eqnarray} The dressed Hamiltonian is given by: \begin{equation} \label {eq:dressed H} \Op H_{s} = \hbar\Omega_{+}\Op A^{\dagger}_1 \Op A_1 + \hbar\Omega_{-}\Op A^{\dagger}_2 \Op A_2~, \end{equation} where $ \Omega_{\pm} = \dfrac{\omega_h +\omega_c}{2} \pm \sqrt{(\dfrac{\omega_h -\omega_c}{2})^2 +\epsilon^2} $ and $ \cos^2(\theta) = \dfrac{\omega_h-\Omega_{-}}{\Omega_{+}-\Omega_{-}} $. Eq. (\ref{eq:dressed H}) imposes the restriction $\Omega_{\pm}>0$, which can be translated to $\omega_h \omega_c > \epsilon^2$.
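The dressed frequencies $\Omega_{\pm}$ are the eigenvalues of the one-excitation block of the effective Hamiltonian. A short numerical check with illustrative values ($\hbar = 1$, all numbers assumed) confirms the trace, determinant, and stability condition stated above:

```python
import math

# Illustrative values (hbar = 1); eps is the Poisson-induced energy shift.
w_h, w_c, eps = 10.0, 1.0, 0.5

# Eigenvalues of the 2x2 one-excitation block [[w_h, eps], [eps, w_c]].
mean = (w_h + w_c) / 2.0
half = (w_h - w_c) / 2.0
Omega_p = mean + math.sqrt(half**2 + eps**2)
Omega_m = mean - math.sqrt(half**2 + eps**2)

# Mixing angle of the dressed modes A_1, A_2.
cos2 = (w_h - Omega_m) / (Omega_p - Omega_m)

# Diagonalization preserves trace and determinant of the block,
# and Omega_- > 0 is equivalent to w_h * w_c > eps^2.
assert abs((Omega_p + Omega_m) - (w_h + w_c)) < 1e-9
assert abs(Omega_p * Omega_m - (w_h * w_c - eps**2)) < 1e-9
assert (Omega_m > 0.0) == (w_h * w_c > eps**2)
assert 0.0 <= cos2 <= 1.0
```

The determinant identity $\Omega_+\Omega_- = \omega_h\omega_c - \epsilon^2$ makes the stability condition $\omega_h\omega_c > \epsilon^2$ immediate.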
The master equation in the Heisenberg representation becomes: \begin{equation} \dfrac{d \Op O}{dt}=(i/\hbar)[\Op H_s ,\Op O] + {\cal L}_h(\Op O) +{\cal L}_c(\Op O)+{\cal L}_n(\Op O)~, \end{equation} where \begin{eqnarray} \begin{array}{ll} {\cal L}_h(\Op O)=&\gamma_1^h \textbf{c}^2(\Op A_1\Op O\Op A_1^{\dagger}-\frac{1}{2}\{\Op A_1\Op A_1^{\dagger},\Op O\})\\ &+\gamma_2^h \textbf{c}^2(\Op A_1^{\dagger}\Op O\Op A_1-\frac{1}{2}\{\Op A_1^{\dagger}\Op A_1,\Op O\})\\ &+\gamma_3^h \textbf{s}^2(\Op A_2\Op O\Op A_2^{\dagger}-\frac{1}{2}\{\Op A_2\Op A_2^{\dagger},\Op O\})\\ &+\gamma_4^h \textbf{s}^2(\Op A_2^{\dagger}\Op O\Op A_2-\frac{1}{2}\{\Op A_2^{\dagger}\Op A_2,\Op O\}) \\ {\cal L}_c(\Op O)=&\gamma_1^c \textbf{s}^2(\Op A_1\Op O\Op A_1^{\dagger}-\frac{1}{2}\{\Op A_1\Op A_1^{\dagger},\Op O\})\\ &+\gamma_2^c \textbf{s}^2(\Op A_1^{\dagger}\Op O\Op A_1-\frac{1}{2}\{\Op A_1^{\dagger}\Op A_1,\Op O\})\\ &+\gamma_3^c \textbf{c}^2(\Op A_2\Op O\Op A_2^{\dagger}-\frac{1}{2}\{\Op A_2\Op A_2^{\dagger},\Op O\})\\ &+\gamma_4^c \textbf{c}^2(\Op A_2^{\dagger}\Op O\Op A_2-\frac{1}{2}\{\Op A_2^{\dagger}\Op A_2,\Op O\}) \end{array}~, \end{eqnarray} where $\textbf{s}=\sin(\theta)$ and $\textbf{c}=\cos(\theta)$. And the noise generator: \begin{equation} {\cal L}_n(\Op O)= -\eta [\Op W,[\Op W,\Op O]]~, \end{equation} where $\Op W=\sin(2 \theta)\Op Z+\cos(2 \theta)\Op X$ and a new set of operators which form an $SU(2)$ algebra is defined: $\Op X=(\Op A_1^{\dagger}\Op A_2+\Op A_2^{\dagger}\Op A_1)$ , $\Op Y=i(\Op A_1^{\dagger}\Op A_2 - \Op A_2^{\dagger}\Op A_1)$ and $\Op Z=(\Op A_1^{\dagger}\Op A_1-\Op A_2^{\dagger}\Op A_2)$. The total number of excitations is accounted for by the operator $\Op N=(\Op A_1^{\dagger}\Op A_1+\Op A_2^{\dagger}\Op A_2)$. \\ The generalized heat transport coefficients become $\zeta_+^k=\gamma_2^k-\gamma_1^k$ and $\zeta_-^k=\gamma_4^k-\gamma_3^k$ for $k= h, c$. 
Applying the Kubo relation \cite{kubo57,kossakowski77}: $\gamma_1^k=e^{-\hbar\Omega_{+}\beta_k}\gamma_2^k$ and $\gamma_3^k=e^{-\hbar\Omega_{-}\beta_k}\gamma_4^k$, leads to the detailed balance relation: \begin{eqnarray} \label{eq:occupation} \begin{array}{l} \frac{\gamma_1^k}{\zeta_+^k}=\frac{1}{e^{\hbar\Omega_{+}\beta_k}-1}\equiv N_+^k\\ \frac{\gamma_3^k}{\zeta_-^k}=\frac{1}{e^{\hbar\Omega_{-}\beta_k}-1}\equiv N_-^k\nonumber~. \end{array} \end{eqnarray} In general $\zeta_{\pm}^k$ are temperature independent and can be calculated for specific choices of the spectral density of the baths. For an electromagnetic or acoustic phonon field, $\zeta_{\pm}^k \propto \Omega_{\pm}^{d}$. The heat currents ${\cal J}_h$, ${\cal J}_c$ and ${\cal J}_n$ are calculated by solving the equations of motion for the operators at steady state in the low-temperature regime, where $\cos^2(\theta)\approx 1$ and $\sin^2(\theta)\approx 0$. \begin{eqnarray} \begin{array}{ll} \label{eq:equation of motion} \frac{d \Op N}{dt}=& -\frac{1}{2} (\zeta_+^h +\zeta_-^c) \Op N -\frac{1}{2} (\zeta_+^h -\zeta_-^c) \Op Z +(\zeta_+^h N_+^h +\zeta_-^c N_-^c) \\ \frac{d \Op Z}{dt}=& -\frac{1}{2} (\zeta_+^h +\zeta_-^c) \Op Z -\frac{1}{2} (\zeta_+^h -\zeta_-^c) \Op N +(\zeta_+^h N_+^h -\zeta_-^c N_-^c) -4\eta \Op Z \end{array} \end{eqnarray} Once the set of linear equations is solved, the exact expressions for the heat currents are extracted: ${\cal J}_h=\left\langle {\cal L}_h(\Op H_{s})\right\rangle $, $ {\cal J}_c=\left\langle {\cal L}_c(\Op H_{s})\right\rangle $ and $ {\cal J}_n=\left\langle {\cal L}_n(\Op H_{s})\right\rangle $. For simplicity, the distribution of impulses in Eq. (\ref{eq:master-eq}) is chosen as $P(\xi)=\delta (\xi-\xi_0)$. Then the effective noise parameter becomes: \begin{equation} \eta=\frac{\lambda}{4}(1-\cos(2\xi_0/\hbar))~. \label{eq:eta} \end{equation} The energy shift is controlled by: \begin{equation} \epsilon=-\frac{\lambda}{2}(2\xi_0 /\hbar-\sin(2\xi_0/\hbar))~.
\label{eq:eps} \end{equation} \begin{figure} \caption{Entropy production $\Sigma_k=-{\cal J}_k/T_k$ as a function of the impulse $\xi_0$ for the cold bath ($\Sigma_c$), the hot bath ($\Sigma_h$), and the total entropy production $\Sigma_u=\Sigma_h+\Sigma_c$. Parameters: $T_c=10^{-3}$, $T_h=2$, $\omega_c=T_c$, $\omega_h=10$, $\lambda=\omega_c$, $\zeta_{\pm}^k=\omega_c/10$ ($\hbar=k=1$).} \label{fig:2} \end{figure} Figure \ref{fig:2} shows the periodic structure of the heat current ${\cal J}_c$ and of the entropy production $\Sigma_c=-{\cal J}_c/T_c$ as functions of the impulse $\xi_0$. The second law of thermodynamics is satisfied: the large entropy generation in the hot bath compensates for the negative entropy generation associated with cooling the cold bath. The $COP$ for the Poisson driven refrigerator is restricted by the Otto and Carnot $COP$: \begin{equation} COP =\frac{\Omega_-}{\Omega_+ - \Omega_-} \le \frac{\omega_c}{\omega_h-\omega_c} \le \frac{T_c}{T_h-T_c}~. \label{eq:cop2} \end{equation} The heat current ${\cal J}_c$ is given by: \begin{equation} {\cal J}_c \approx \hbar \Omega_- \dfrac{N_-^c -N_+^h}{(2\eta)^{-1} + (\zeta_+^h)^{-1} +(\zeta_-^c)^{-1}}~. \label{eq:pjc} \end{equation} The scaling of the optimal cooling rate is now accounted for. The heat flow is maximized with respect to the impulse $\xi_0$ by maximizing $\eta$, Eq. (\ref{eq:eta}), which occurs for $\xi_0= (2n-1)\frac{\pi}{2}$ ($n=1,2,\ldots$; in units $\hbar=1$). On the other hand, the energy shift $\epsilon^2$, Eq. (\ref{eq:eps}), should be minimized. The optimum is obtained when $\xi_0=\frac{\pi}{2}$. The cooling power in the Poisson noise case, Eq. (\ref{eq:pjc}), is similar to the Gaussian one, Eq. (\ref{eq:Jc}). In the Poisson case, too, the noise driving parameter $\eta$ is restricted by $\omega_c$: $\epsilon$ is restricted by $\Omega_- \ge 0$, and therefore $\lambda$ is restricted to scale with $\omega_c$. In total, when $T_c \rightarrow 0$, ${\cal J}_c \propto T_c^{d+1}$.
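The impulse optimization can be made concrete with a small numerical sketch of Eqs. (\ref{eq:eta}) and (\ref{eq:eps}) for $P(\xi)=\delta(\xi-\xi_0)$; the rate $\lambda$ below is an illustrative value ($\hbar=1$):

```python
import math

lam = 1.0  # Poisson event rate (illustrative value; hbar = 1)

def eta(x0):
    """Effective noise strength, Eq. (eta), for P(xi) = delta(xi - x0)."""
    return lam / 4.0 * (1.0 - math.cos(2.0 * x0))

def eps(x0):
    """Coherent energy shift, Eq. (eps), from the same impulse train."""
    return -lam / 2.0 * (2.0 * x0 - math.sin(2.0 * x0))

# eta is maximal (= lam/2) at odd multiples of pi/2, while |eps| keeps
# growing with x0, so the best compromise is the first maximum x0 = pi/2.
assert abs(eta(math.pi / 2.0) - lam / 2.0) < 1e-12
assert eta(math.pi) < 1e-12                      # cos(2*pi) = 1: no noise
assert abs(eps(math.pi / 2.0)) < abs(eps(3.0 * math.pi / 2.0))
```

This makes explicit why $\xi_0 = \pi/2$ is selected: later maxima of $\eta$ carry a larger energy shift $|\epsilon|$ and hence a smaller $\Omega_-$.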
The optimal scaling relation ${ \cal J}_c \propto T_c^{\alpha}$ of the autonomous absorption refrigerators should be compared to the scaling of the discrete four-stroke Otto refrigerators \cite{k243}. In the driven discrete case the scaling depends on the external control scheduling function on the expansion stroke. For a scheduling function determined by a constant frictionless nonadiabatic parameter, the optimal cooling rate scales with $\alpha=2$. Faster frictionless scheduling procedures were found based on bang-bang-type optimal control solutions. These solutions led to a scaling of $\alpha=3/2$ when positive frequencies were employed, and ${\cal J}_c \propto -T_c/\log T_c$ when negative imaginary frequencies were allowed \cite{muga09,karl11}. The drawback of the externally driven refrigerators is that their analysis is complex. The optimal scaling assumes that the heat conductivity $\Gamma \gg \omega_c$, and that noise in the controls does not influence the scaling. For this reason an analysis based on the autonomous refrigerators is superior. \end{document}
Modelling the Age-Hardening Precipitation by a Revised Langer and Schwartz Approach with Log-Normal Size Distribution
Dongdong Zhao, Yijiang Xu, Sylvain Gouttebroze, Jesper Friis, Yanjun Li
Metallurgical and Materials Transactions A, Issue 9/2020, published 04 July 2020, Open Access. Manuscript submitted July 1, 2019.
1 Introduction The nucleation and growth kinetics of secondary phases during artificial aging heat treatment are crucial in enhancing the mechanical properties of various alloy systems.[1–6] 6xxx and 7xxx aluminium alloys can generally be precipitation strengthened via artificial aging, wherein complex precipitation of multiple precipitates occurs, contributing to the hardening of the material.[5, 7–11] The extent of precipitation strengthening is largely determined by the precipitate shape, size, and number density. Up to now, various modelling approaches have been developed to predict the precipitation behavior of secondary phases during the age-hardening process.[8, 9, 12–25] Based on the classical nucleation and growth theories, these models are able to predict the time evolution of the precipitation of secondary phases during heat treatment.
The modelling approaches developed so far to numerically implement the classical nucleation and growth theories generally include the "mean size approach" and the "multi-class approach".[13, 17, 19, 26–28] The mean size approach, also known as the "mean radius approach", was originally proposed by Langer and Schwartz (LS model), wherein steady-state nucleation theory was adopted to compute the time evolution of phase separation in mixtures.[26] An important simplification of the mean size approach is that only the particles with radius r > r* (r* is the critical radius) are included as the newly precipitated phase (see the hatched area in Figure 1). Particles with radius r < r* (see the unhatched area in Figure 1) are considered unstable and will dissolve into the supersaturated matrix. The LS model was afterwards improved by Kampmann and Wagner by replacing the linearized Gibbs–Thomson equation with a non-linearized counterpart (MLS model).[17, 29] Being mean size approaches, neither the LS model nor the MLS model can predict the explicit particle size distribution (PSD). Fig. 1—Log-normal particle size distribution with a mean particle radius \( \bar{r}_{n} \). r* is the critical radius. Particles with radius r < r* are continuously dissolving in the matrix. In the present revised Langer and Schwartz approach, only the hatched area with particle radius r > r* contributes to the mean particle radius \( \bar{r} \). \( \bar{r}_{n} \) is the mean radius of the full log-normal distribution and \( \bar{r} \) is the mean radius of the stable particles (r > r*). Unlike the "mean size approach", the "multi-class approach" is developed with a definition of discrete size classes and a partitioning of the temporal evolution of the PSD into a series of individual time steps,[17, 29] and hence is able to predict more information about the particles, especially the full evolution of the PSD.
The Kampmann and Wagner numerical model (KWN model) is widely recognized as the pioneering multi-class approach. The KWN model was afterwards improved by Myhr and Grong[13] by allowing for inter-fluxes between neighboring size classes, an approach later named the "Euler-like multi-class approach". In contrast, the "Lagrange-like multi-class approach" differs from the Euler-like approach in that it tracks the time evolution of each size class, without inter-size-class flow.[28] It is worth noting that the KWN-based multi-class approach is generic and flexible, which allows for easy extension. Coupling the KWN-based multi-class approach with the CALPHAD method enables efficient treatment of multi-phase precipitation in multi-component systems subjected to different heat treatment conditions.[25] Despite the advantages of the KWN-based multi-class approach, its application in certain circumstances is less feasible. For instance, treating the complex precipitation behavior near defects like grain boundaries or dislocations via the KWN multi-class approach is not affordable, since one has to discretize the defect region to account for the solute concentration variation. Implementing the KWN multi-class approach in each discrete element makes the modelling framework remarkably expensive. A detailed comparison between the mean size and multi-class approaches conducted by Perez et al.[19] reveals that in simple cases, the "mean size approach" is faster than, and as accurate as, the multi-class approaches in predicting the general course of precipitation: nucleation, growth, and coarsening. This suggests that the "mean size approach" is also able to predict equally accurate results in modelling frameworks wherein an implementation of the KWN multi-class approach is not computationally affordable.
Despite its inability to predict the PSD evolution, the "mean size approach" still has a wide range of applications, owing to its versatility and much lower computing load compared with the multi-class approaches. Via a "mean size approach" integrating nucleation, growth, and coarsening, Deschamps and Bréchet[12] investigated the effect of predeformation on the precipitation kinetics of an Al-Zn-Mg alloy during aging. A coarsening rate as a function of the mean and critical radius was introduced to weigh the pure growth equation against the pure coarsening equation, ensuring continuity from the growth to the coarsening stage. Perrard et al.[18] adopted the same approach with a modified coarsening rate to model the precipitation of NbC on dislocations in α-Fe. It is worth noting that both of these works implemented the Lifshitz-Slyozov-Wagner (LSW) kinetics[30] for describing coarsening.[12, 18] However, it was demonstrated by Perez et al.[19] that the "mean size approach" based on LSW theory is incapable of modelling the PSD evolution of non-LSW precipitation. Meanwhile, the powerful multi-class KWN approaches, which can predict the PSD evolution, also end up with an LSW particle size distribution at long aging times.[13, 19, 23] However, experimental data do not show the characteristic LSW size distribution of precipitates; rather, a log-normal size distribution is commonly observed. Indeed, none of the three classic approaches mentioned above can properly reproduce the experimental log-normal PSD. These concerns motivate the present research effort to develop an optimized modelling framework which simply imposes the realistic log-normal size distribution function commonly observed in experiments. In the present work, we present a revised Langer-Schwartz (RLS) model, which integrates the LS approach and the log-normal particle size distribution to depict the precipitation kinetics, including nucleation, growth, and coarsening.
The rest of the manuscript is arranged as follows. First, the methodologies formulating the present modelling framework, which include classical nucleation and growth theory, the LS approach, the log-normal distribution, and the solubility product, are presented in Section II. Thereafter, the numerical precipitation model is applied to treat the nucleation, growth, and coarsening behavior of the key precipitates in 6xxx and 7xxx alloys during aging. The simulation results are presented and discussed in Sections III and IV, followed by the conclusions in Section V. 2 Precipitation Model Prior to a comprehensive introduction of the RLS approach, it is necessary to state the hypotheses adopted in this model: (1) for simplification of the model, precipitates are assumed to be spherical; (2) the thermodynamics of the precipitates are described by the solubility product; (3) the precipitation reaction, including growth and dissolution, is controlled only by solute diffusion in the matrix; (4) local equilibrium at the precipitate/matrix interface is assumed, wherein the Gibbs–Thomson effect is implemented; (5) an initial constant shape parameter of the log-normal particle size distribution is assumed at the beginning of precipitation. 2.1 Nucleation The classical nucleation theory is employed to depict the formation of precipitates in a supersaturated solid solution. Within this theory, the nucleation rate is calculated following References 31 through 33 as $$ J = N_{0} Z\beta^{\ast} \exp \left( { - \frac{{\Delta G^{\ast} }}{{k_{B} T}}} \right)\exp \left( { - \frac{\tau }{t}} \right) $$ wherein N 0 is the number of nucleation sites per unit volume, k B the Boltzmann constant, and T the temperature. Z is the Zeldovich factor, calculated via[19] $$ Z = \frac{{v_{at}^{P} }}{{2\pi r^{\ast 2} }}\sqrt {\frac{\gamma }{{k_{{B}} T}}},$$ where \( v_{at}^{P} \) is the mean atomic volume of the precipitate and γ is the interfacial energy.
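As a hedged numerical sketch, Eqs. [1] and [2] can be combined with the standard classical-nucleation relations for the critical radius, nucleation barrier, and incubation time that appear below; every input value here is an assumed, illustrative number, not a fitted parameter of the model:

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K

# Illustrative inputs (assumed, not fitted): interfacial energy, driving
# force per unit volume, precipitate atomic volume, temperature, site
# density, and solute condensation rate beta*.
gamma  = 0.05       # J/m^2
dG_v   = 5.0e8      # J/m^3
v_at   = 1.8e-29    # m^3
T      = 448.0      # K (175 C aging)
N0     = 1.0e28     # nucleation sites per m^3
beta_s = 1.0e4      # 1/s

r_star  = 2.0 * gamma / dG_v                           # critical radius
dG_star = 16.0 * math.pi * gamma**3 / (3.0 * dG_v**2)  # nucleation barrier
Z = v_at / (2.0 * math.pi * r_star**2) * math.sqrt(gamma / (kB * T))  # Eq. [2]
tau = 4.0 / (2.0 * math.pi * beta_s * Z**2)            # incubation time

def J(t):
    """Transient nucleation rate, Eq. [1]."""
    return N0 * Z * beta_s * math.exp(-dG_star / (kB * T)) * math.exp(-tau / t)

# The Zeldovich factor is a small dimensionless number, and the rate
# rises monotonically toward its steady-state value as t grows past tau.
assert 0.0 < Z < 1.0
assert J(0.1 * tau) < J(tau) < J(10.0 * tau)
```

The final factor exp(−τ/t) is what distinguishes the transient rate from the steady-state rate, which is recovered for t ≫ τ.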
β* represents the condensation rate of solute atoms into a cluster with critical size r*, which can be evaluated based on Russell's equation[ 34 ] $$ \beta^{\ast} = \frac{{4\pi r^{\ast 2} }}{{a^{4} }}\left( {\sum\limits_{i} {\frac{1}{{D_{i} x_{i} }}} } \right)^{ - 1} $$ where x i and D i are the concentration and diffusion coefficients of solute element i, respectively. Note that Δ G* is the nucleation energy barrier which needs to be overcome to form a nucleus with size r*, which is obtained by $$ \Delta G^{\ast} = \frac{{16\pi \gamma^{3} }}{{3\Delta G_{v}^{2} }},$$ wherein Δ G v is the driving force for nucleation per unit volume, which is related to the critical radius r* via interfacial energy γ as $$ r^{\ast} = \frac{2\gamma }{{\Delta G_{v} }}. $$ Finally, τ in Eq. [ 1] is the incubation time for nucleation introduced by Kampmann and Wagner,[ 27 ] and can be calculated as $$ \tau = \frac{4}{{2\pi \beta^{\ast} Z^{2} }} $$ 2.2 RLS Approach Within the RLS modelling framework, the instantaneous evolution of the particle number density, particle mean radius, volume fraction, solute concentration in matrix etc. are depicted via differential equations. The evolution of the size distribution is given by the following continuity equation $$ \frac{\partial \phi \left( r \right)}{\partial t} = - \frac{\partial }{\partial r}\left( {v\left( r \right)\phi \left( r \right)} \right) + j\left( r \right) $$ where v is the particle growth rate, j( r) is the distributed nucleation rate, and φ( r) is the log-normal particle size distribution function, which will be presented in the next section. Recognizing that only particles with radius r > r* are counted into the number density n, we can get the time evolution of n through integrating Eq. [ 7], leading to $$ \frac{\partial n}{\partial t} = J - \phi \left( {r^{\ast} } \right)\frac{{\partial r^{\ast} }}{\partial t} $$ J is the nucleation rate. The mean radius of the particle \( \bar{r}. 
\) is defined by $$ \overline{r} = \frac{1}{n}\int\limits_{{r^{\ast} }}^{\infty } {\phi \left( r \right)rdr} $$ By applying the previously defined assumptions, the time derivative of Eq. [ 9] gives us the time evolution of \( \bar{r} \): $$ \frac{{\partial \bar{r}}}{\partial t} = v(\bar{r}) + \frac{1}{n} \cdot (\bar{r} - r^{\ast} )\phi (r^{\ast} )\frac{{\partial r^{\ast} }}{\partial t} + \frac{1}{n} \cdot J \cdot \left( {r^{\ast} + \delta r^{\ast} - \bar{r}} \right),$$ wherein the first term on the right of Eq. [ 10], \( v(\bar{r}) \), is the approximation for the growth rate of the particles, which can be described with the classic Zener equation.[ 35] The second term corresponds to the change of \( \bar{r} \) contributed by the dissolution of φ( r*) dr* particles with radius r*+dr* > r > r*. The third term denotes the change of \( \bar{r} \) induced by the nucleation of particles with radii slightly larger than r*. 2.3 Log-Normal Distribution In order for the RLS model to describe the full solution of particle coarsening in the later aging stage, a continuous size distribution function φ( r) is needed. Different from previous LS models, we assume a log-normal distribution of particle sizes for the size distribution function φ( r) in the present work. This assumption is feasible and sensible since a log-normal distribution of particle sizes has been frequently observed in various experiments. Implementing the log-normal size distribution for φ( r) enables the present RLS model to capture the experimental results in a more realistic manner. The log-normal distribution function φ( r) is defined as $$ \phi \left( x \right) = \frac{1}{{\sqrt {2\pi } sx}}\exp \left( { - \frac{{\left( {\ln x + s^{2} /2} \right)^{2} }}{{2s^{2} }}} \right), $$ wherein x is the normalized particle size of size class i with respect to the mean particle radius, i.e., \( r_{i} /\bar{r}.
\) s is the shape parameter of φ( r), given by $$ s^{2} = \ln \left( {1 + \frac{{\sigma_{r}^{2} }}{{\bar{r}^{2} }}} \right), $$ where σ r is the measured standard deviation of the experimental precipitate size distribution. A normalized distribution function with respect to the normalized particle size is shown in Figure 2. As one can see, a log-normal PSD with a larger shape parameter corresponds to a broader distribution. Fig. 2—Log-normal particle size distribution with four different shape parameters (0.1, 0.2, 0.3, 0.6). 2.4 Growth Rate The classical diffusion-controlled growth rate equation[35] has been adopted to describe both particle dissolution and growth in the differential equations, and is written as $$ v = \frac{dr}{dt} = \frac{{D_{j} }}{r}\frac{{\bar{x}_{j} - x_{j}^{i} \left( r \right)}}{{\alpha x_{j}^{p} - x_{j}^{i} \left( r \right)}}, $$ wherein D j is the solute diffusion coefficient in the matrix and r the spherical particle radius. \( \bar{x}_{j} ,\;x_{j}^{p} , \) and \( x_{j}^{i} \left( r \right) \) are the solute concentrations of element j in the matrix, in the particles, and at the particle/matrix interface, respectively. α is the ratio between the mean atomic volumes of matrix and precipitate. Local equilibrium of the solute concentration at the particle/matrix interface is assumed via the Gibbs–Thomson effect, which can be described by the following equation with the solubility product (considering an A m B n precipitate)[36]: $$ K\left( r \right) = \left( {x_{A}^{i} } \right)^{m} \left( {x_{B}^{i} } \right)^{n} = K^{\infty } \exp \left( {\frac{{2\gamma V_{m} }}{rRT}} \right), $$ wherein γ is the particle/matrix interfacial energy and \( x_{A}^{i} ,x_{B}^{i} \) represent the solute concentrations at the particle/matrix interface of elements A and B, respectively.
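The normalization built into Eq. [11] can be checked numerically: with the mean of ln x fixed at −s²/2, the distribution integrates to one, has unit mean in the normalized size x = r/r̄, and reproduces the variance relation of Eq. [12]. The quadrature below is a plain trapezoidal rule written only for this sketch:

```python
import math

def phi(x, s):
    """Log-normal PSD of Eq. [11] in normalized size x = r / r_bar."""
    return (1.0 / (math.sqrt(2.0 * math.pi) * s * x)
            * math.exp(-(math.log(x) + s**2 / 2.0)**2 / (2.0 * s**2)))

def trapz(f, a, b, n=100000):
    """Simple trapezoidal quadrature over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

s = 0.3   # shape parameter, within the range plotted in Figure 2
norm = trapz(lambda x: phi(x, s), 1e-6, 20.0)
mean = trapz(lambda x: x * phi(x, s), 1e-6, 20.0)
var  = trapz(lambda x: (x - 1.0)**2 * phi(x, s), 1e-6, 20.0)

assert abs(norm - 1.0) < 1e-3        # phi is a probability density
assert abs(mean - 1.0) < 1e-3        # mean radius is r_bar by construction
assert abs(var - (math.exp(s**2) - 1.0)) < 1e-3  # Eq. [12]: s^2 = ln(1 + var)
```

The last assertion is exactly Eq. [12] read in normalized units: the variance of x is e^{s²} − 1, so s² = ln(1 + σ_r²/r̄²).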
K ∞ is the solubility product at the equilibrium solvus boundary, given by $$ K^{\infty } = \left( {x_{A}^{\infty } } \right)^{m} \left( {x_{B}^{\infty } } \right)^{n} = \exp \left( {\frac{{\Delta S^{ \circ } }}{R} - \frac{{\Delta H^{ \circ } }}{RT}} \right),$$ where \( x_{A}^{\infty } \) and \( x_{B}^{\infty } \) are the equilibrium concentrations of the solute elements, and Δ S° and Δ H° are the formation entropy and enthalpy of the precipitate. Note that K ∞ is strictly valid only for a particle of infinite size. The driving force for nucleation can then be given by[ 37] $$ \Delta G_{v} = RT \cdot \ln \left[ {\left( {\bar{x}_{A} } \right)^{m} \left( {\bar{x}_{B} } \right)^{n} } \right] - \left( {\frac{{\Delta S^{ \circ } }}{R} - \frac{{\Delta H^{ \circ } }}{RT}} \right) $$ The critical size r* of the particles can be determined from the condition that the growth rate v (Eq. [ 13]) is zero, wherein the solute mean concentration equals the solute interfacial concentration, $$ \bar{x}_{A} = x_{A}^{i} \left( r \right),\;\bar{x}_{B} = x_{B}^{i} \left( r \right) $$ Substituting Eq. [ 17] into Eq. [ 14], we get the critical radius r* of the particles: $$ r^{\ast} = \frac{{2\gamma V_{m} }}{{RT \cdot \ln \left[ {\left( {\bar{x}_{A} } \right)^{m} \left( {\bar{x}_{B} } \right)^{n} } \right] - \left( {\frac{{\Delta S^{ \circ } }}{R} - \frac{{\Delta H^{ \circ } }}{RT}} \right)}} $$ A basic assumption in the present precipitation model is that the growth rate is diffusion-controlled. In multi-component systems, we have multiple growth rate equations for the precipitate as a result of the different solute elements. By assuming the same overall growth rate regardless of which element is considered, we have the following (considering an A m B n precipitate) $$ \frac{{D_{A} }}{r}\frac{{\bar{x}_{A} - x_{A}^{i} \left( r \right)}}{{\alpha x_{A}^{P} - x_{A}^{i} \left( r \right)}} = \frac{{D_{B} }}{r}\frac{{\bar{x}_{B} - x_{B}^{i} \left( r \right)}}{{\alpha x_{B}^{P} - x_{B}^{i} \left( r \right)}} $$ Solving Eqs.
[ 14] and [ 19], the solute concentrations at the particle/matrix interface and the growth rate can be determined. A mass balance is used to update the solute concentration in the matrix as the precipitation proceeds, evaluated by $$ \bar{x}_{i} = \frac{{\overline{{x_{i}^{0} }} \left( {1 + \alpha f_{v} - f_{v} } \right) - \alpha x_{i}^{P} f_{v} }}{{1 - f_{v} }} $$ where f v is the particle volume fraction. 3.1 Model Prediction of β″ Precipitation The precipitation kinetics of 6xxx alloys subjected to artificial aging is rather complex, involving the precipitation of multiple secondary phases, including pre-β″, β″, B′, β′, U1, U2, and the stable β precipitate. It is well established that the β″ phase, with a needle morphology, is the most effective strengthening precipitate. The present modelling framework is utilized to predict the precipitation of β″ in an Al-0.52Mg-0.75Si (in wt pct) alloy during aging treatment; the experimental data describing the precipitation behavior of β″ are from Reference 23. The classical nucleation theory is intrinsically sensitive to the particle/matrix interfacial energy γ, which makes this parameter crucial for the present modelling prediction. The γ of the β″/Al-matrix interface is difficult to determine due to its dependency on precipitate size and interfacial anisotropy.[2, 38] Since β″ is an early-stage metastable precipitate, the γ of the β″/Al-matrix interface is considered small, as a result of the full coherency of β″ with the Al matrix along the precipitate needle direction and semi-coherency along the a and c axes. In the present simulation, a value of 0.05 J/m^2 for γ was adopted, which is very close to the value of 0.045 J/m^2 utilized in Du's multi-class approach.[23] Note that a stoichiometry of Mg5Si6 is implemented for the β″ phase. The other key parameters for modelling the precipitation of β″ are summarized in Table I.
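The mass balance of Eq. [20] can be sketched directly; the mole fractions and the atomic-volume ratio α below are illustrative assumptions only:

```python
def matrix_conc(x0, xP, fv, alpha=0.92):
    """Mean matrix solute fraction after a precipitate volume fraction fv
    has formed, Eq. [20]; alpha is the matrix/precipitate atomic-volume
    ratio (here ~1.0e-5 / 1.092e-5, an assumed value)."""
    return (x0 * (1.0 + alpha * fv - fv) - alpha * xP * fv) / (1.0 - fv)

# Illustrative mole fractions: solute in the matrix vs. in the precipitate.
x0, xP = 0.008, 0.30

assert matrix_conc(x0, xP, 0.0) == x0   # nothing precipitated yet
# Solute is consumed monotonically as the precipitate fraction grows.
concs = [matrix_conc(x0, xP, f) for f in (0.0, 0.005, 0.01, 0.02)]
assert all(a > b for a, b in zip(concs, concs[1:]))
```

At f_v = 0 the expression collapses to the nominal composition, and for x^P > x̄⁰ it decreases monotonically in f_v, which is the behavior the solver relies on.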
Table I Key Input Parameters for the Precipitation Modelling of β″ in an Al-0.52 Wt Pct Mg-0.75 Wt Pct Si Alloy: Molar Volume of Al Matrix, 1.0 × 10^−5 m^3/mol; Molar Volume of β″ Precipitate, 1.092 × 10^−5 m^3/mol; Interfacial Energy γ, 0.05 J/m^2; Diffusion Mobility of Mg in Matrix,[49] 1.342 × 10^−19 m^2/s; Diffusion Mobility of Si in Matrix[49]; Shape Parameter of Log-Normal PSD; Solvus Boundary from Ref. [50]. Figure 3 displays the predicted number density, mean/critical radius, and volume fraction of the β″-Mg5Si6 precipitate as a function of aging time. The experimental results based on transmission electron microscopy (TEM) measurements from Du et al.[23] are also plotted for comparison. As one can see in Figure 3(a), the present model captures well the number density evolution of β″-Mg5Si6, both at the peak hardening and the later over-aging stage. Du et al.[23] have shown that their KWN multi-class model, in combination with a spherical particle assumption, is unable to predict well the number density evolution of β″-Mg5Si6, especially at the later aging stage. It was demonstrated that considering a non-spherical shape with an aspect ratio can effectively enhance the coarsening rate and hence increase the model capability. The present modelling framework is considered very promising given that it depicts the number density evolution well even with the spherical particle assumption. Figure 3(b) shows the time evolution of the mean and critical radius of β″-Mg5Si6 particles in comparison with the experimental data. The predicted mean radius is in good agreement with experimental results at the early aging stage (3, 36 hours), but deviates from experiment at the late aging stage (108 hours). This may be attributed to the phase transformation of the age-hardening precipitates. After long artificial aging, the dominant precipitate becomes β′ instead of β″, which is beyond the prediction ability of the present model.
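Using the Table I values for γ and the β″ molar volume, the critical radius of Eq. [18] can be sketched numerically when the solvus term is expressed through K∞; the matrix and equilibrium compositions below are assumed, illustrative mole fractions, not the Ref. [50] solvus data:

```python
import math

R, T = 8.314, 448.0     # J/(mol K); 175 C aging temperature
gamma = 0.05            # J/m^2, beta''/matrix interfacial energy (Table I)
V_m = 1.092e-5          # m^3/mol, beta'' molar volume (Table I)
m, n = 5, 6             # Mg5Si6 stoichiometry

# Assumed, illustrative mole fractions (not the Ref. [50] solvus data).
x_Mg, x_Si = 1.5e-3, 2.5e-3     # supersaturated matrix composition
xe_Mg, xe_Si = 1.0e-3, 2.0e-3   # equilibrium (solvus) composition

K_inf = xe_Mg**m * xe_Si**n
# Supersaturation term of Eq. [18], reading its second term as RT*ln(K_inf).
ln_S = math.log((x_Mg**m * x_Si**n) / K_inf)

r_star = 2.0 * gamma * V_m / (R * T * ln_S)    # critical radius

assert ln_S > 0.0 and r_star > 0.0
# Doubling the matrix solute content raises the driving force and
# therefore shrinks the critical radius.
ln_S2 = math.log(((2 * x_Mg)**m * (2 * x_Si)**n) / K_inf)
assert 2.0 * gamma * V_m / (R * T * ln_S2) < r_star
```

This reproduces the qualitative behavior the model relies on: as the matrix desaturates during aging, ln S falls and r* grows, driving the coarsening stage.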
Better prediction might be achieved by taking into account the precipitation of the β′ phase in the later aging stage. Figure 3(c) shows that the predicted volume fraction of β″-Mg5Si6 precipitates agrees less well with the experimental results; especially after 10^5 seconds of artificial aging, the predicted values are much smaller than those measured. A similar disagreement was also observed between the volume fractions predicted by the KWN multi-class approach and those measured.[23] Note that both models implemented a stoichiometry of Mg5Si6 for β″, which has recently been established to have a constitution of Al2Mg5Si4.[39] This can be one reason for the lower predicted volume fraction of precipitates. Moreover, the sudden increase in the measured volume fraction of early-stage β″ at the later aging stage is quite unlikely to occur. It is considered that the measured volume fraction may include contributions from precipitates formed at the later aging stage, which involve precipitate transformations and are beyond the prediction ability of the present modelling. Figure 3(d) shows the time evolution of the mean concentration of Mg and Si solutes in the matrix. A significant decrease of the Mg and Si concentrations in the matrix accompanies the substantial nucleation (cf. Figure 3(a)) and increasing volume fraction (cf. Figure 3(c)) of β″ precipitates on the timescale of 10^3 to 10^4 seconds, before the solubility limit is approached, after which the volume fraction increases only slowly during coarsening. Fig. 3—Predicted time evolution of (a) β″-Mg5Si6 precipitate number density, (b) precipitate radius, including mean and critical radius, (c) precipitate volume fraction, (d) mean concentration of Mg, Si solute in the matrix in an Al-0.52Mg-0.75Si alloy during artificial aging at 175 °C. The experimental results from Ref.
[23] are also plotted for comparison.

3.2 Model Prediction of β′ Precipitation

β′ is a later-stage precipitate in 6xxx alloys during artificial aging. This phase, which has a rod morphology, has a hexagonal unit cell with space group P6₃/m. The precipitation behavior of the β′ phase in a commercial 6056 Al alloy[41] during artificial aging is used to validate the present precipitation model. As discussed above, the interfacial energy γ drastically affects the nucleation rate and hence the number density. The β′ precipitate is coherent with the Al matrix only along the c axis, making the β′/Al-matrix interface less coherent than the β″/Al-matrix interface; a higher interfacial energy γ can therefore be expected for β′. A reasonable value of γ = 0.08 J/m², which enables a good prediction of the β′ precipitation behavior, has been used in the present work. Note that this value is somewhat smaller than the interfacial energy (0.104 to 0.112 J/m²) adopted in the multi-class Lagrangian-like modelling of β′ precipitation by Bardel et al.[8] The stoichiometry Mg₉Si₅ established by Vissers et al.[40] is used for the β′ precipitation modelling. The other key parameters for modelling the precipitation of β′ are summarized in Table II.

Table II Key Input Parameters for the Precipitation Modelling of β′ in a Commercial 6056 Aluminium Alloy

Figure 4 shows the predicted number density, mean/critical radius, and volume fraction of the β′-Mg₉Si₅ precipitate as a function of time. The experimental information on the β′ precipitate from the characterization of the 6056 alloy subjected to a T6 temper treatment by Donnadieu et al.[41] is also included in the figures for comparison. Figure 4 shows that the model predictions are consistent with the TEM measurements of β′ precipitation in number density, particle size, and volume fraction at the aging time of 8 hours.
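The sensitivity to the interfacial energy noted above follows directly from classical nucleation theory, where γ enters the energy barrier to the third power, so moving γ from 0.05 J/m² (β″) to 0.08 J/m² (β′) suppresses nucleation strongly. A minimal sketch of this sensitivity; the kinetic prefactor and the chemical driving force below are illustrative assumptions, not values fitted in the paper:

```python
import math

def nucleation_rate(gamma, delta_g_v, prefactor=1e35, temperature=448.15):
    """Classical nucleation theory: J = J0 * exp(-dG*/kT), with the
    barrier dG* = 16*pi*gamma^3 / (3*delta_g_v^2).
    gamma      : interfacial energy, J/m^2
    delta_g_v  : chemical driving force per unit volume, J/m^3 (assumed)
    prefactor  : kinetic prefactor J0, 1/(m^3 s) (illustrative)
    """
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    barrier = 16.0 * math.pi * gamma**3 / (3.0 * delta_g_v**2)
    return prefactor * math.exp(-barrier / (k_b * temperature))

# Illustrative driving force for an aged Al alloy (assumed value)
dG = 3.0e8  # J/m^3
j_beta2 = nucleation_rate(0.05, dG)  # beta''/matrix interface
j_beta1 = nucleation_rate(0.08, dG)  # beta'/matrix interface (less coherent)
# A modest increase in gamma suppresses nucleation by orders of magnitude
print(j_beta2 / j_beta1)
```

With these assumed inputs at 175 °C, the 0.05 → 0.08 J/m² change alone cuts the nucleation rate by several orders of magnitude, which is why γ is the dominant calibration parameter for the number density.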
A comparison between Figures 3 and 4 shows that the predicted coarsening rate of β″ at the late aging stage is much faster than that of β′. Such a higher coarsening rate of β″ may seem to contradict its smaller interfacial energy relative to β′, which tends to slow down the growth rate of β″. However, the coarsening behavior of a precipitate is not controlled by the interfacial energy alone. According to Eq. [10], the increase of the mean particle radius is driven by the Zener growth equation and by the dissolution of particles with instantaneous radius smaller than r* at the late aging (coarsening) stage; the nucleation rate at this stage approaches ~0, so newly nucleated particles no longer contribute to the mean particle radius. Since the aging temperature is the same, the diffusion-controlled Zener growth equation is not expected to cause a large discrepancy in the growth rates of the two precipitates. Rather, the dissolution of particles with radius r < r* should play the deterministic role in the coarsening rate. Eq. [18] shows clearly that multiple factors beyond the interfacial energy alone determine the critical radius, including the precipitate molar volume, the mean solute concentration, and the solvus boundary. The different time evolution of the critical radius r* causes the distinct coarsening rates of β″ and β′ predicted in the present modelling. Predicted time evolution of (a) β′-Mg₉Si₅ precipitate number density, (b) precipitate radius, including mean and critical radius, (c) precipitate volume fraction, (d) mean concentration of Mg, Si solute in the matrix in the 6056 alloy during artificial aging at 175 °C. The experimental results from Ref.
[41] are also plotted for comparison.

3.3 Model Prediction of η′ Precipitation

The generic precipitation sequence of an Al-Zn-Mg alloy during artificial aging is widely recognized as: solid solution → GP zones → metastable η′ → stable η, wherein the η′ precipitates serve as the major secondary phases contributing to the age hardening of Al-Zn-Mg alloys.[42] To predict the η′ precipitation behavior, a stoichiometry of Mg₄Zn₁₁Al₁ has been adopted in the present modelling. This stoichiometry is based on the atomic model of the η′ phase established by Auld and Cousland,[43] which has been validated by Wolverton[44] using density functional theory (DFT) calculations. It is worth noting that the η′/matrix interfaces are also anisotropic: the \( \{0001\}_{\eta'} / \{111\}_{\text{Al}} \) interface is coherent, while the \( \{10\bar{1}0\}_{\eta'} / \{110\}_{\text{Al}} \) interface is semi-coherent. This makes it difficult to determine the η′/matrix interfacial energy precisely. Based on DFT, Cao et al.[45] have predicted interfacial energies of ~44 and 190 mJ/m² for the coherent and semi-coherent interfaces, respectively. Hence, in the present work, an optimal value of 0.1 J/m² has been used. This interfacial energy is larger than the value of 0.06 J/m² adopted by Kamp et al.[46] to predict the precipitation and dissolution of the η′ phase during the friction stir welding process. The other key parameters adopted for modelling η′ precipitation are tabulated in Table III.

Table III Key Input Parameters for the Precipitation Modelling of η′ in a 7150 Alloy (entries include the molar volume of the η′ precipitate, an interfacial energy γ of 0.1 J/m², the diffusion mobility of Zn in the matrix,[49] and data from Refs. [44, 51])

The predicted time evolution of the number density, mean/critical radius, and volume fraction of η′ precipitates in a 7150 alloy is displayed in Figure 5.
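The critical radius r* that drives the coarsening comparisons above can be sketched with the simplest dilute-solution Gibbs–Thomson relation. This is an assumed textbook form, not the paper's Eq. [18], and the concentrations used below are illustrative:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def critical_radius(gamma, v_m, c_mean, c_eq, temperature):
    """Gibbs-Thomson critical radius in a dilute-solution approximation
    (an assumed simplification of the paper's Eq. [18]):
        r* = 2*gamma*V_m / (R*T*ln(c_mean/c_eq))
    gamma  : interfacial energy, J/m^2
    v_m    : precipitate molar volume, m^3/mol
    c_mean : mean solute concentration in the matrix
    c_eq   : equilibrium (solvus) concentration
    """
    supersaturation = c_mean / c_eq
    if supersaturation <= 1.0:
        return math.inf  # no driving force: every particle is sub-critical
    return 2.0 * gamma * v_m / (R_GAS * temperature * math.log(supersaturation))

# As the matrix is depleted, c_mean falls toward c_eq, r* grows, and
# ever-larger particles drop below r* and dissolve (coarsening).
T = 448.15  # 175 C
r_early = critical_radius(0.05, 1.092e-5, c_mean=0.008, c_eq=0.002, temperature=T)
r_late = critical_radius(0.05, 1.092e-5, c_mean=0.003, c_eq=0.002, temperature=T)
print(r_early, r_late)  # r* increases as the supersaturation decays
```

This makes explicit why r* is set jointly by γ, the molar volume, the mean solute concentration, and the solvus, rather than by the interfacial energy alone.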
The experimentally measured number density, mean/critical radius, and volume fraction at the peak-aging time of 24 hours, as determined by small-angle X-ray scattering (SAXS) and TEM methods,[47] are also included in the figures for comparison. Figure 5 clearly shows that the present modelling framework predicts the precipitation behavior of the η′ phase at the peak-aging stage well. Note that peak hardening of 7xxx alloys usually occurs at ~24 hours, much later than the peak-hardening time of 6xxx alloys, which can be ascribed to the much lower diffusivity of the solute elements at the lower aging temperature (~120 °C). The predicted peak hardening occurs in the time range of 10⁴ to 10⁵ seconds, consistent with the experimental observations. The lower diffusivity of the solute elements is also reflected in the much lower growth rate of the mean radius during aging. Figure 5(b) shows that, corresponding to the peak in the particle number density, a substantial increase in the mean particle radius also occurs on the timescale of 10⁴ to 10⁵ seconds, much longer than the timescale of 10³ to 10⁴ seconds in 6xxx alloys. Figure 5(e) shows the time evolution of the Al concentration in the matrix. Along with the precipitation of the η′ phase, the Al content of the matrix increases. This is not surprising considering the consumption of Mg and Zn solutes in the matrix by the continuous nucleation and growth of the η′ phase, which by the conservation law induces a monotonic increase of the Al content in the matrix.
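The conservation law invoked here can be made concrete with a simple lever-rule solute balance. The volume fraction and alloy contents below are illustrative assumptions, and equal molar volumes of matrix and precipitate are assumed for simplicity:

```python
def matrix_concentration(c0, c_precip, f_v):
    """Solute balance (lever rule): the mean matrix concentration that
    remains after a volume fraction f_v of precipitate with solute
    content c_precip has formed from an alloy of overall content c0:
        c_matrix = (c0 - f_v*c_precip) / (1 - f_v)
    All concentrations are in the same (mole-fraction) units; equal
    molar volumes of matrix and precipitate are assumed.
    """
    return (c0 - f_v * c_precip) / (1.0 - f_v)

# Illustrative numbers for an Al-Zn-Mg alloy (assumed, not fitted):
# eta' taken as Mg4Zn11Al1 -> atomic fractions Mg 4/16, Zn 11/16, Al 1/16
f_v = 0.03
c_mg = matrix_concentration(0.02, 4 / 16, f_v)
c_zn = matrix_concentration(0.025, 11 / 16, f_v)
c_al = 1.0 - c_mg - c_zn      # Al fraction of the matrix, by balance
c_al0 = 1.0 - 0.02 - 0.025    # Al fraction before precipitation
print(c_mg, c_zn, c_al)
# Mg and Zn are depleted, so the Al fraction of the matrix rises above c_al0
```

Because the η′ phase is richer in Mg and Zn than the alloy, any growth of f_v depletes those solutes and necessarily pushes the matrix Al fraction up, which is the monotonic increase seen in Figure 5(e).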
Figure 6 compares the properties of the η′ precipitate predicted by the RLS model, i.e., number density, mean radius, and volume fraction, with the experimental data measured in six 7xxx alloy systems.[47] The good agreement indicated in Figure 6 validates that the RLS model adequately captures the precipitation behavior of the η′ phase in these alloys. Predicted time evolution of (a) η′-Mg₄Zn₁₁Al₁ precipitate number density, (b) precipitate radius, including mean and critical radius, (c) precipitate volume fraction, mean concentration of (d) Mg, Zn, (e) Al solute in the matrix in the 7150 alloy. The experimental results from Ref. [47] are also plotted for comparison. Predicted properties of the η′-Mg₄Zn₁₁Al₁ precipitate by the RLS model against the experimental data in six alloy systems from Ref. [47]: (a) calculated vs experimental number density, (b) calculated vs experimental mean radius, (c) calculated vs experimental volume fraction. The same set of parameters as adopted in the present RLS modelling, except for the shape parameter, was employed to depict the evolution of the η′ precipitate radius in an Al-6.1 wt pct Zn-2.35 wt pct Mg model alloy subjected to artificial aging at 160 °C. Note that a different shape parameter of s = 0.07 instead of 0.01 was used for this precipitation modelling. The effect of this parameter on the precipitation behavior is discussed in a later section.
For the precipitation modelling of the target alloy aged at 160 °C with a slow heating rate, an interfacial energy of 0.09 J/m² instead of 0.1 J/m² (fast heating rate) was implemented. This corresponds to a lower nucleation energy barrier, accounting for the easier nucleation of η′ precipitates on GP zones as reported by Deschamps et al.[12,48] Figure 7 shows the predicted time evolution of the precipitate radius, including mean and critical radius, in comparison with the small-angle X-ray scattering (SAXS) and TEM data.[12,48] As indicated, the present modelling framework delivers a remarkably accurate prediction of the evolution of the precipitate radius throughout the aging process. At the early aging stage, however, the predicted precipitate radius is slightly smaller than the measured values in both the fast- and slow-heating cases. This discrepancy with the experiments can be explained by the presence of GP zones during the early aging stage, which contribute to the precipitate radius determined experimentally, whereas the predicted radius accounts only for the η′ precipitate, without the contribution of GP zones. Predicted time evolution of precipitate radius, including mean and critical radius, in comparison with the small-angle scattering and TEM data from Ref. [48]: (a) precipitate radius of the Al-Zn-Mg alloy aged at 160 °C with a fast heating rate, (b) precipitate radius of the Al-Zn-Mg alloy aged at 160 °C with a slow heating rate.

4.1 Log-Normal Distribution and the Shape Parameter

As demonstrated, the RLS precipitation model, within the framework of the mean-size approach, is able to accurately describe precipitation behavior, including the nucleation, growth, and coarsening of a variety of precipitates during aging treatment.
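The coupling of nucleation, growth, and solute depletion that such a mean-size model tracks can be illustrated with a heavily simplified Langer–Schwartz-style integration loop. All rate laws and constants below are dimensionless illustrations, not the paper's calibrated RLS model, and the full coarsening correction terms are omitted:

```python
import math

def simulate(steps=2000, dt=1.0):
    """Heavily simplified mean-size (Langer-Schwartz type) loop: one update
    each for number density n, mean radius r_mean, and matrix solute
    content c per time step. All constants are dimensionless illustrations,
    not the paper's calibrated model; the coarsening correction terms of
    the full RLS framework are omitted.
    """
    c0, c_eq, c_p = 0.02, 0.005, 0.5  # initial, solvus, precipitate content
    D, r0 = 1e-4, 5e-3                # effective diffusivity, capillarity scale
    n, r_mean, c = 0.0, 0.0, c0
    history = []
    for _ in range(steps):
        s = c / c_eq                  # supersaturation ratio
        # nucleation while supersaturated; nuclei appear just above r*
        if s > 1.0001:
            r_star = r0 / math.log(s)
            J = 1e2 * math.exp(-5.0 / math.log(s) ** 2)
            r_mean = (n * r_mean + J * dt * 1.05 * r_star) / (n + J * dt)
            n += J * dt
        # Zener-like diffusion-controlled growth of the mean particle
        if n > 0.0 and r_mean > 0.0:
            r_mean += dt * D * (c - c_eq) / (c_p * r_mean)
        # solute balance: the matrix is depleted as the volume fraction grows
        f_v = 4.0 / 3.0 * math.pi * r_mean ** 3 * n
        c = max(c_eq, c0 - f_v * (c_p - c0))
        history.append((n, r_mean, f_v, c))
    return history

history = simulate()
n_end, r_end, fv_end, c_end = history[-1]
# Nucleation and growth run until the matrix reaches the solvus; the final
# volume fraction then follows from the solute balance alone.
```

Even this crude sketch reproduces the qualitative sequence discussed above: a nucleation burst while the supersaturation is high, growth that depletes the matrix, and an end state fixed by the solute balance.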
As an important factor entering the RLS modelling framework, the log-normal size distribution has a large influence on the predicted precipitation behavior; hence the sensitivity of the model to the key physical parameter describing the log-normal size distribution, i.e., the shape parameter s, needs to be evaluated. Comparing Figures 3(b), 4(b), 5(b), and 7(a) and (b), one can see a correlation between the shape parameter s of the log-normal size distribution and the difference between \( \bar{r} \) and r* at the coarsening stage. A large shape parameter s corresponds to a large difference between \( \bar{r} \) and r* at the coarsening stage (Figures 3(b) and 7(a) and (b)), whereas a small s produces a small difference between the two (Figures 4(b) and 5(b)). This behavior can be explained by the intrinsic dispersion of the log-normal size distribution. A large shape parameter s describes a broader size distribution, for which one expects a relatively larger difference between \( \bar{r} \) and r*; conversely, a narrower size distribution, characterized by a smaller shape parameter, yields a much smaller difference between the two. In turn, the magnitude of the discrepancy between these two parameters at the coarsening stage can serve as an indication of whether there is a large change in the particle size distribution. Figure 8 displays the effect of the shape parameter s of the log-normal distribution on the time evolution of the particle radius, including the mean/critical radius, and the radius difference \( \bar{r} - r^{\ast} \) for the Q-Al₃Cu₂Mg₉Si₇ phase, as predicted by the RLS approach in the present work. The key parameters for modelling the precipitation of the Q phase are listed in Table IV. One can clearly identify that a smaller s corresponds to a smaller radius difference between \( \bar{r} \) and r*, and vice versa.
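The dispersion property used in this argument is easy to check numerically: for a log-normal PSD, s is the standard deviation of ln r, so a larger s both shifts the mean away from the median and places more particles below any fixed cutoff such as r*. A small sketch with illustrative radii:

```python
import math
import random

def lognormal_sample(median, s, k=200000, seed=0):
    """Draw particle radii from a log-normal PSD with the given median
    radius and shape parameter s (the standard deviation of ln r)."""
    rng = random.Random(seed)
    return [median * math.exp(s * rng.gauss(0.0, 1.0)) for _ in range(k)]

def stats(radii, r_star):
    """Mean radius and fraction of sub-critical particles (r < r*)."""
    mean = sum(radii) / len(radii)
    frac_subcritical = sum(r < r_star for r in radii) / len(radii)
    return mean, frac_subcritical

median, r_star = 2.0, 1.4  # nm, illustrative values only
narrow = stats(lognormal_sample(median, 0.01), r_star)
broad = stats(lognormal_sample(median, 0.30), r_star)
# A larger shape parameter widens the distribution: the mean moves away
# from the median (mean = median*exp(s^2/2)) and a larger fraction of
# particles sits below r*, feeding coarsening by dissolution.
print(narrow, broad)
```

For the narrow distribution essentially no particles fall below r*, while the broad one places a substantial fraction there, which is exactly the mechanism linking a large s to a large \( \bar{r} - r^{\ast} \) gap.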
Figure 8 also shows that the particle radius difference \( \bar{r} - r^{\ast} \) at the coarsening stage is not constant but increases monotonically, regardless of the magnitude of the initial shape parameter. This implies that the shape parameter of the log-normal distribution is continuously increasing, i.e., the size distribution keeps broadening during the coarsening stage. This feature agrees well with both experimental results and Euler approaches, which clearly show a broadening of the particle size distribution during aging, especially at the later stage.[13,23] However, the extent to which the radius difference \( \bar{r} - r^{\ast} \) grows with aging time depends on the initial shape parameter. For small s (cf. Figure 8(a)), the increase in the radius difference \( \bar{r} - r^{\ast} \) remains quite small even at long aging times (~10⁷ seconds), while a larger s produces a drastically increased radius difference \( \bar{r} - r^{\ast} \) (cf. Figure 8(b)). The magnitude of the radius difference \( \bar{r} - r^{\ast} \) at the coarsening stage thus serves to identify the broadening/evolution of the particle size distribution. Hence, the initial shape parameter s plays a key role in the later-stage evolution of the particle size distribution: the evolution of the size distribution is not significant during aging when a small shape parameter is selected in the modelling, while an initially large shape parameter results in a substantial broadening of the particle size distribution.
Effect of shape parameter s = (a) 0.01 and (b) 0.1 of the log-normal distribution on the time evolution of the Q-Al₃Cu₂Mg₉Si₇ precipitate radius, including the mean/critical radius and the radius difference between \( \bar{r} \) and r*, predicted by the RLS approach in the present work.

Table IV Key Input Parameters for the Precipitation Modelling of the Q-Al₃Cu₂Mg₉Si₇ Phase (including the diffusion mobility of Cu in the matrix[49])

Indeed, the shape parameter of the log-normal distribution influences not only the particle size distribution during precipitation but also the evolution of other quantities. Figure 9 shows the effect of the shape parameter s of the log-normal distribution on the time evolution of the precipitate number density, mean/critical radius, and volume fraction of the Q-Al₃Cu₂Mg₉Si₇ phase, predicted by the RLS approach in the present work. The shape parameter s hardly affects the number density evolution at the early and peak-aging stages. However, a larger s significantly accelerates coarsening at the later aging stage, as indicated by the faster decrease in number density with larger s (cf. Figure 9(a)). Correspondingly, the variation of the shape parameter does not change the evolution of the particle mean/critical radius at the early and peak-aging stages (cf. Figures 9(b) and (c)), but a larger s substantially enhances the growth rate of the mean/critical radius at the later aging stage. This feature can be accounted for by the different log-normal distributions characterized by different shape parameters. As discussed above, an initially large shape parameter produces a continuous broadening of the log-normal distribution during precipitation. A broader size distribution described by a larger s correspondingly contains more particles with radius r < r* (cf. Figure 1), which will dissolve into the solid solution matrix.
The dissolution of these smaller particles contributes substantially to the coarsening of the larger particles. Hence, a larger shape parameter s produces a faster coarsening rate of the precipitate at the later aging stage. Nevertheless, the variation of the shape parameter does not significantly influence the volume fraction of the precipitate (cf. Figure 9(d)). Effect of shape parameter s of the log-normal distribution on the time evolution of (a) Q-Al₃Cu₂Mg₉Si₇ precipitate number density, precipitate radius, including (b) mean and (c) critical radius, (d) precipitate volume fraction, predicted by the RLS approach in the present work.

4.2 Comparison with the Euler-Like Multi-class Approach

The present RLS model was compared with the Euler-like multi-class approach in terms of the predicted number density, particle radius, volume fraction, and solute mean concentration, in order to validate its accuracy and efficiency. The details of the Euler-like approach can be found in References 13 and 19. To facilitate the comparison, the same set of parameters (cf. Table IV) was adopted for both approaches. Figure 10 shows the precipitation results of the Q-Al₃Cu₂Mg₉Si₇ phase predicted by the two approaches in terms of number density, particle radius, volume fraction, and solute mean concentration. Note that a shape parameter of s = 0.04 is implemented for the log-normal distribution in the RLS model. The predictions of the two approaches are remarkably consistent with each other. The different particle size distributions, i.e., the log-normal distribution adopted in the RLS approach and the LSW distribution predicted by the Euler-like approach,[13] can be one reason for the slight differences in the evolution of the number density and radius between the two approaches.
The integration of the more realistic log-normal distribution, the solubility product, and the LS model makes this approach faster than, and equivalently accurate to, the multi-class approaches in precipitation prediction. Nevertheless, even though a log-normal distribution has been integrated into the RLS approach, a precise prediction of the evolution of the particle size distribution is still not possible, which is an intrinsic limitation of mean-size approaches. It is worth noting that the introduction of a log-normal distribution with an arbitrary shape parameter s is a simplification of the real size distribution. This simplification is hard to relate to the true physics of precipitation at the very beginning, where s = 0 when nucleation first starts, since all particles then nucleate with the same critical radius r*. During growth and coarsening at the later aging stage, s increases as a result of the nucleation of new nuclei with larger r* and of particle growth. Even as a mean-size approach, the RLS approach could be further improved if the evolution of the shape parameter s characterizing the log-normal distribution were properly addressed within the modelling framework during the precipitation process. Such a description of the evolution of the shape parameter s is complex and will be reported in our future research work. Predicted time evolution of the (a) particle number density, (b) mean particle radius, (c) precipitate volume fraction, and (d) solute concentration in the matrix by the RLS approach in the present work in comparison with the prediction results of the Euler-like multi-class approach.
Note that the Q-Al₃Cu₂Mg₉Si₇ phase is selected for the modelling of precipitation and a shape parameter of s = 0.04 is implemented for the log-normal distribution in the RLS model. A novel model, termed the RLS approach, which couples the Langer and Schwartz approach with a log-normal particle size distribution, has been developed to predict the precipitation behavior of the key precipitates β″, β′, and η′ in 6xxx and 7xxx Al alloys subjected to artificial aging. The available TEM and SAXS data on the precipitation of these secondary phases, in terms of number density, mean radius, and volume fraction, are well reproduced by the RLS approach. The simulation results reveal that the pre-defined log-normal size distribution in the RLS model is not fixed: the shape parameter increases during the precipitation process, corresponding to a broadening of the distribution. The broadening of the size distribution depends on the magnitude of the predefined shape parameter, i.e., broadening is faster when a large shape parameter is used in the modelling, and vice versa. Such broadening of the particle size distribution, as predicted by the present modelling, is consistent with the experimental observations. Moreover, the shape parameter also affects coarsening at the later aging stage, where a large shape parameter leads to a rapid decrease of the number density and an increased growth rate of the mean/critical radius. The good agreement with the Euler-like multi-class model indicates that the present RLS framework, which integrates the log-normal distribution and the Langer and Schwartz model, is faster and equivalently accurate in precipitation prediction, and hence can serve as an efficient approach for describing the simultaneous nucleation, growth, and coarsening of the key precipitates in multi-component Al alloys during aging treatments. Open Access funding provided by NTNU Norwegian University of Science and Technology (incl St.
Olavs Hospital - Trondheim University Hospital). This work is supported by the project Fundamentals of Intergranular Corrosion in Aluminium Alloys – FICAL (247598), a Knowledge Building Project for Industry co-financed by The Research Council of Norway (RCN), and the industrial partners Hydro, Gränges, Benteler, and Steertec. RCN and the industrial partners are gratefully acknowledged for their financial support. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Number Density

The evolution equation of the size distribution is given by the sum of a growth term and a nucleation term: $$ \frac{\partial \phi \left( r \right)}{\partial t} = - \frac{\partial }{\partial r}\left( {v\left( r \right)\phi \left( r \right)} \right) + j\left( r \right) $$ The number density n is defined considering only particles with radius r > r*: $$ n = \int\limits_{{r^{\ast} }}^{\infty } {\phi \left( r \right)dr} $$ (A1) $$ \frac{\partial n}{\partial t} = \frac{\partial }{\partial t}\left( {\int\limits_{{r^{\ast} }}^{\infty } {\phi \left( r \right)dr} } \right) = - \phi \left( {r^{\ast} } \right) \cdot \frac{{\partial r^{\ast} }}{\partial t} + \int\limits_{{r^{\ast} }}^{\infty } {\frac{\partial \phi \left( r \right)}{\partial t}} dr $$ The time derivative of the number density is obtained by combining Eqs.
[ 7] and [ A1]: $$ \frac{\partial n}{\partial t} = - \phi \left( {r^{\ast} } \right) \cdot \frac{{\partial r^{\ast} }}{\partial t} + \int\limits_{{r^{\ast} }}^{\infty } {\left( { - \frac{\partial }{\partial r}\left( {v\left( r \right)\phi \left( r \right)} \right) + j\left( r \right)} \right)} dr $$ Nucleation is assumed to form particles only with radius r* + Δr*, at a nucleation rate J. This means that j(r) is defined as: $$ j\left( r \right) = J \cdot \delta \left( {r - r^{\ast} - \Delta r^{\ast} } \right) $$ Assuming that v and φ vanish at infinity, and that v(r*) = 0 by definition, the equation becomes: $$ \frac{\partial n}{\partial t} = - \phi \left( {r^{\ast} } \right) \cdot \frac{{\partial r^{\ast} }}{\partial t} + \int\limits_{{r^{\ast} }}^{\infty } {j\left( r \right)dr} $$ We then obtain the final generic equation, valid for any size distribution: $$ \frac{\partial n}{\partial t} = J - \phi \left( {r^{\ast} } \right) \cdot \frac{{\partial r^{\ast} }}{\partial t} $$

Mean Radius

The mean radius is defined by: $$ \bar{r} = \frac{1}{n}\int\limits_{{r^{\ast} }}^{\infty } {\phi \left( r \right)rdr} $$ This equation is differentiated with respect to time and combined with Eqs.
[ 7] and [ 8]: $$ \frac{{\partial \bar{r}}}{\partial t} = \frac{\partial }{\partial t}\left( {\frac{1}{n}} \right)\int\limits_{{r^{\ast} }}^{\infty } {\phi \left( r \right)rdr + \frac{1}{n}\frac{\partial }{\partial t}} \left( {\int\limits_{{r^{\ast} }}^{\infty } {\phi \left( r \right)rdr} } \right) $$ $$ \frac{{\partial \bar{r}}}{\partial t} = - \frac{1}{{n^{2} }}\frac{\partial n}{\partial t}\int\limits_{{r^{\ast} }}^{\infty } {\phi \left( r \right)rdr + \frac{1}{n}} \left( { - \phi \left( {r^{\ast} } \right) \cdot r^{\ast} \cdot \frac{{\partial r^{\ast} }}{\partial t} + \int\limits_{{r^{\ast} }}^{\infty } {\frac{\partial }{\partial t}\left( {\phi \left( r \right)r} \right)dr} } \right) $$ $$ \frac{{\partial \bar{r}}}{\partial t} = - \frac{1}{n}\bar{r}\left( {J - \phi \left( {r^{\ast} } \right)\frac{{\partial r^{\ast} }}{\partial t}} \right) + \frac{1}{n}\left( { - \phi \left( {r^{\ast} } \right) \cdot r^{\ast} \cdot \frac{{\partial r^{\ast} }}{\partial t} + \int\limits_{{r^{\ast} }}^{\infty } {r\frac{\partial }{\partial t}\left( {\phi \left( r \right)} \right)dr} } \right) $$ $$ \frac{{\partial \bar{r}}}{\partial t} = - \frac{1}{n}\bar{r}\left( {J - \phi \left( {r^{\ast} } \right)\frac{{\partial r^{\ast} }}{\partial t}} \right) + \frac{1}{n}\left( { - \phi \left( {r^{\ast} } \right) \cdot r^{\ast} \cdot \frac{{\partial r^{\ast} }}{\partial t} + \int\limits_{{r^{\ast} }}^{\infty } {\left( { - r \cdot \frac{\partial }{\partial r}\left( {v\left( r \right)\phi \left( r \right)} \right) + r \cdot j\left( r \right)} \right)dr} } \right) $$ By applying Eq. 
A3 and assuming that the product of r, v, and φ vanishes at infinity, we obtain: $$ \frac{{\partial \bar{r}}}{\partial t} = - \frac{1}{n}\bar{r}\left( {J - \phi \left( {r^{\ast} } \right)\frac{{\partial r^{\ast} }}{\partial t}} \right) + \frac{1}{n}\left( { - \phi \left( {r^{\ast} } \right) \cdot r^{\ast} \cdot \frac{{\partial r^{\ast} }}{\partial t} + \int\limits_{{r^{\ast} }}^{\infty } {v\left( r \right)\phi \left( r \right)dr + \left( {r^{\ast} + \Delta r^{\ast} } \right) \cdot J} } \right) $$ In the present model, we simplify the integral of \( v\left( r \right)\phi \left( r \right) \) by replacing it with the growth rate at the mean radius, leading to the following equation: $$ \frac{{\partial \bar{r}}}{\partial t} = v\left( {\bar{r}} \right) + \frac{1}{n}\left( {\bar{r} - r^{\ast} } \right)\phi \left( {r^{\ast} } \right)\frac{{\partial r^{\ast} }}{\partial t} + \frac{1}{n} \cdot J \cdot \left( {r^{\ast} + \Delta r^{\ast} - \bar{r}} \right) $$

References

E.A. Mørtsell, C.D. Marioara, S.J. Andersen, J. Røyset, O. Reiso and R. Holmestad, Metall. Mater. Trans. A 2015, vol. 46, pp. 4369–4379.
S. Wenner and R. Holmestad, Scr. Mater. 2016, vol. 118, pp. 5–8.
E.A. Mørtsell, C.D. Marioara, S.J. Andersen, I.G. Ringdalen, J. Friis, S. Wenner, J. Røyset, O. Reiso and R. Holmestad, J. Alloys Compd. 2017, vol. 699, pp. 235–242.
C.D. Marioara, A. Lervik, J. Grønvold, O.
Lunder, S. Wenner, T. Furu and R. Holmestad, Metall. Mater. Trans. A 2018, vol. 49, pp. 5146–5156.
L.P. Ding, Z.H. Jia, J.-F. Nie, Y.Y. Weng, L.F. Cao, H.W. Chen, X.Z. Wu and Q. Liu, Acta Mater. 2018, vol. 145, pp. 437–450.
K. Li, A. Béché, M. Song, G. Sha, X.X. Lu, K. Zhang, Y. Du, S.P. Ringer and D. Schryvers, Scr. Mater. 2014, vol. 75, pp. 86–89.
D.D. Zhao, L.C. Zhou, Y. Kong, A.J. Wang, J. Wang, Y.B. Peng, Y. Du, Y.F. Ouyang and W.Q. Zhang, J. Mater. Sci. 2011, vol. 46, pp. 7839–7849.
D. Bardel, M. Perez, D. Nelias, A. Deschamps, C.R. Hutchinson, D. Maisonnette, T. Chaise, J. Garnier and F. Bourlier, Acta Mater. 2014, vol. 62, pp. 129–140.
M. Afshar, F.X. Mao, H.C. Jiang, V. Mohles, M. Schick, K. Hack, S. Korte-Kerzel and L.A. Barrales-Mora, Comp. Mater. Sci. 2019, vol. 158, pp. 235–242.
F. Qian, E.A. Mørtsell, C.D. Marioara, S.J. Andersen and Y.J. Li, Materialia 2018, vol. 4, pp. 33–37.
S.J. Andersen, H.W.
Zandbergen, J. Jansen, C. Træholt, U. Tundal and O. Reiso, Acta Mater. 1998, vol. 46, pp. 3283–3298.
A. Deschamps and Y. Brechet, Acta Mater. 1999, vol. 47, pp. 293–305.
O.R. Myhr and Ø. Grong, Acta Mater. 2000, vol. 48, pp. 1605–1615.
O.R. Myhr, Ø. Grong and S.J. Andersen, Acta Mater. 2001, vol. 49, pp. 65–75.
M. Nicolas and A. Deschamps, Acta Mater. 2003, vol. 51, pp. 6077–6094.
O.R. Myhr, Ø. Grong, H.G. Fjær and C.D. Marioara, Acta Mater. 2004, vol. 52, pp. 4997–5008.
R. Wagner, R. Kampmann and P.W. Voorhees, Mater. Sci. Tech. 2006, pp. 309–407.
F. Perrard, A. Deschamps and P. Maugis, Acta Mater. 2007, vol. 55, pp. 1255–1266.
M. Perez, M. Dumont and D. Acevedo-Reyes, Acta Mater. 2008, vol. 56, pp. 2119–2132.
Q. Du, W.J. Poole and M.A. Wells, Acta Mater. 2012, vol. 60, pp. 3830–3839.
D. den Ouden, L. Zhao, C. Vuik, J. Sietsma and F.J. Vermolen, Comp. Mater. Sci. 2013, vol. 79, pp. 933–943.
Zurück zum Zitat Z.S. Liu, V. Mohles, O. Engler and G. Gottstein, Comp. Mater. Sci. 2014, vol. 81, pp. 410–417. Z.S. Liu, V. Mohles, O. Engler and G. Gottstein, Comp. Mater. Sci. 2014, vol. 81, pp. 410–417. Zurück zum Zitat Q. Du, B. Holmedal, J. Friis and C.D. Marioara, Metall. Mater. Trans. A 2016, vol. 47, pp. 589-599. Q. Du, B. Holmedal, J. Friis and C.D. Marioara, Metall. Mater. Trans. A 2016, vol. 47, pp. 589-599. Zurück zum Zitat P. Priya, D.R. Johnson and M.J.M. Krane, Comp. Mater. Sci. 2017, vol. 139, pp. 273–284. P. Priya, D.R. Johnson and M.J.M. Krane, Comp. Mater. Sci. 2017, vol. 139, pp. 273–284. Zurück zum Zitat Q. Du, K. Tang, C.D. Marioara, S.J. Andersen, B. Holmedal and R. Holmestad, Acta Mater. 2017, vol. 122, pp. 178-186. Q. Du, K. Tang, C.D. Marioara, S.J. Andersen, B. Holmedal and R. Holmestad, Acta Mater. 2017, vol. 122, pp. 178-186. Zurück zum Zitat J.S. Langer and A.J. Schwartz, Phys. Rev. A 1980, vol. 21, pp. 948-958. J.S. Langer and A.J. Schwartz, Phys. Rev. A 1980, vol. 21, pp. 948-958. Zurück zum Zitat R. Kampmann and R. Wagner: Decomposition of Alloys: The Early Stages (Pergamon Press, Oxford, 1984), pp. 91-103. R. Kampmann and R. Wagner: Decomposition of Alloys: The Early Stages (Pergamon Press, Oxford, 1984), pp. 91-103. Zurück zum Zitat P. Maugis and M. Goune, Acta Mater. 2005, vol. 53, pp. 3359–3367. P. Maugis and M. Goune, Acta Mater. 2005, vol. 53, pp. 3359–3367. Zurück zum Zitat R. Kampmann, H. Eckerlebe and R. Wagner, Mat. Res. Soc. Symp. Proc. 1987, vol. 57, pp. 525-542. R. Kampmann, H. Eckerlebe and R. Wagner, Mat. Res. Soc. Symp. Proc. 1987, vol. 57, pp. 525-542. Zurück zum Zitat I.M. Lifshitz and V.V. Slyozov, J. Phys. Chem. Solids 1961, vol. 19, pp. 35-50. I.M. Lifshitz and V.V. Slyozov, J. Phys. Chem. Solids 1961, vol. 19, pp. 35-50. Zurück zum Zitat M. Perez, E. Courtois, D. Acevedo, T. Epicier and P. Maugis, Phil. Mag. Lett. 2007, vol. 87, pp. 645–656. M. Perez, E. Courtois, D. Acevedo, T. Epicier and P. Maugis, Phil. 
Mag. Lett. 2007, vol. 87, pp. 645–656. Zurück zum Zitat J.D. Robson, M.J. Jones and P.B. Prangnell, Acta Mater. 2003, vol. 51, pp. 1453-1468. J.D. Robson, M.J. Jones and P.B. Prangnell, Acta Mater. 2003, vol. 51, pp. 1453-1468. Zurück zum Zitat J.D. Robson, Acta Mater. 2004, vol. 52, pp. 4669–4676. J.D. Robson, Acta Mater. 2004, vol. 52, pp. 4669–4676. Zurück zum Zitat K.C. Russell, Chapter Nucleation in solids 1968, pp. 219-268. K.C. Russell, Chapter Nucleation in solids 1968, pp. 219-268. Zurück zum Zitat C. Zener, J. Appl. Phys. 1949, vol. 20, pp. 950-953. C. Zener, J. Appl. Phys. 1949, vol. 20, pp. 950-953. Zurück zum Zitat M. Perez, Scr. Mater. 2005, vol. 52, pp. 709–712. M. Perez, Scr. Mater. 2005, vol. 52, pp. 709–712. Zurück zum Zitat Ch.-A. Gandin and A. Jacot, Acta Mater. 2007, vol. 55, pp. 2539–2553. Ch.-A. Gandin and A. Jacot, Acta Mater. 2007, vol. 55, pp. 2539–2553. Zurück zum Zitat Y. Wang, Z.-K. Liu, L.-Q. Chen and C. Wolverton, Acta Mater. 2007, vol. 55, pp. 5934–5947. Y. Wang, Z.-K. Liu, L.-Q. Chen and C. Wolverton, Acta Mater. 2007, vol. 55, pp. 5934–5947. Zurück zum Zitat H.S. Hasting, A.G. Frøseth, S.J. Andersen, R. Vissers, J.C. Walmsley, C.D. Marioara, F. Danoix, W. Lefebvre and R. Holmestad, J. Appl. Phys. 2009, vol. 106, p. 123527. H.S. Hasting, A.G. Frøseth, S.J. Andersen, R. Vissers, J.C. Walmsley, C.D. Marioara, F. Danoix, W. Lefebvre and R. Holmestad, J. Appl. Phys. 2009, vol. 106, p. 123527. Zurück zum Zitat R. Vissers, M.A. van Huis, J. Jansen, H.W. Zandbergen, C.D. Marioara and S.J. Andersen, Acta Mater. 2007, vol. 55, pp. 3815-3823. R. Vissers, M.A. van Huis, J. Jansen, H.W. Zandbergen, C.D. Marioara and S.J. Andersen, Acta Mater. 2007, vol. 55, pp. 3815-3823. Zurück zum Zitat P. Donnadieu, M. Roux-Michollet and V. Chastagnier, Phil. Mag. A 1999, vol. 79, pp. 1347-1366. P. Donnadieu, M. Roux-Michollet and V. Chastagnier, Phil. Mag. A 1999, vol. 79, pp. 1347-1366. Zurück zum Zitat X. Fang, M. Song, K. Li, Y. Du, D.D. Zhao, C. 
Jiang and H. Zhang, J. Mater. Sci. 2012, vol. 47, pp. 5419–5427. X. Fang, M. Song, K. Li, Y. Du, D.D. Zhao, C. Jiang and H. Zhang, J. Mater. Sci. 2012, vol. 47, pp. 5419–5427. Zurück zum Zitat J.H. Auld and S.M. Cousland, J. Aust. Inst. Met. 1974, vol. 19, pp. 194-201. J.H. Auld and S.M. Cousland, J. Aust. Inst. Met. 1974, vol. 19, pp. 194-201. Zurück zum Zitat C. Wolverton, Acta Mater. 2001, vol. 49, pp. 3129-3142. C. Wolverton, Acta Mater. 2001, vol. 49, pp. 3129-3142. Zurück zum Zitat F.H. Cao, J.X. Zheng, Y. Jiang, B. Chen, Y.R. Wang and T. Hu, Acta Mater. 2019, vol. 164, pp. 207-219. F.H. Cao, J.X. Zheng, Y. Jiang, B. Chen, Y.R. Wang and T. Hu, Acta Mater. 2019, vol. 164, pp. 207-219. Zurück zum Zitat N. Kamp, A. Sullivan, R. Tomasi and J.D. Robson, Acta Mater. 2006, vol. 54, pp. 2003–2014. N. Kamp, A. Sullivan, R. Tomasi and J.D. Robson, Acta Mater. 2006, vol. 54, pp. 2003–2014. Zurück zum Zitat D.M. Liu, B.Q. Xiong, F.G. Bian, Z.H. Li, X.W. Li, Y.A. Zhang, Q.S. Wang, G.L. Xie, F. Wang and H.W. Liu, Mater. Sci. Eng. A 2015, vol. 639, pp. 245–251. D.M. Liu, B.Q. Xiong, F.G. Bian, Z.H. Li, X.W. Li, Y.A. Zhang, Q.S. Wang, G.L. Xie, F. Wang and H.W. Liu, Mater. Sci. Eng. A 2015, vol. 639, pp. 245–251. Zurück zum Zitat A. Deschamps, F. Livet and Y. Bréchet, Acta Mater. 1999, vol. 47, pp. 281-292. A. Deschamps, F. Livet and Y. Bréchet, Acta Mater. 1999, vol. 47, pp. 281-292. Zurück zum Zitat Y. Du, Y.A. Chang, B.Y. Huang, W.P. Gong, Z.P. Jin, H.H. Xu, Z.H. Yuan, Y. Liu, Y.H. He and F.-Y. Xie, Mater. Sci. Eng. A 2003, vol. 363, pp. 140–151. Y. Du, Y.A. Chang, B.Y. Huang, W.P. Gong, Z.P. Jin, H.H. Xu, Z.H. Yuan, Y. Liu, Y.H. He and F.-Y. Xie, Mater. Sci. Eng. A 2003, vol. 363, pp. 140–151. Zurück zum Zitat E. Povoden-Karadeniz, P. Lang, P. Warczok, A. Falahati, W. Jun and E. Kozeschnik, CALPHAD 2013, vol. 43, pp. 94–104. E. Povoden-Karadeniz, P. Lang, P. Warczok, A. Falahati, W. Jun and E. Kozeschnik, CALPHAD 2013, vol. 43, pp. 94–104. Zurück zum Zitat M.D. 
Jong, S.V.D. Zwaag and M. Sluiter, Int. J. Mat. Res. 2012, vol. 103, pp. 972-979. M.D. Jong, S.V.D. Zwaag and M. Sluiter, Int. J. Mat. Res. 2012, vol. 103, pp. 972-979. Zurück zum Zitat K.K. Chang, S.H. Liu, D.D. Zhao, Y. Du, L.C. Zhou and L. Chen, Thermo. Acta 2011, vol. 512, pp. 258–267. K.K. Chang, S.H. Liu, D.D. Zhao, Y. Du, L.C. Zhou and L. Chen, Thermo. Acta 2011, vol. 512, pp. 258–267. Dongdong Zhao Yijiang Xu Sylvain Gouttebroze Jesper Friis Yanjun Li https://doi.org/10.1007/s11661-020-05879-x Metallurgical and Materials Transactions A Competitive Healing of Creep-Induced Damage in a Ternary Fe-3Au-4W Alloy Thermodynamic Properties of Li-Sb Liquid Solution by QAM Magnetic-Field-Induced Liquid–Solid Interface Transformation and Its Effect on Microsegregation in Directionally Solidified Ni-Cr Alloy Analysis of Martensitic Transformation Plasticity Under Various Loadings in a Low-Carbon Steel: An Elastoplastic Phase Field Study Effect of a New High-Pressure Heat Treatment on Additively Manufactured AlSi10Mg Alloy In Situ Analysis of the Thermal Evolution of Electrodeposited Fe-C Coatings Die im Laufe eines Jahres in der "adhäsion" veröffentlichten Marktübersichten helfen Anwendern verschiedenster Branchen, sich einen gezielten Überblick über Lieferantenangebote zu verschaffen. Zur Marktübersicht in-adhesives, MKVS, Nordson/© Nordson, ViscoTec/© ViscoTec, Hellmich GmbH/© Hellmich GmbH
CommonCrawl
\begin{document} \title{A Line-Search Algorithm Inspired by the Adaptive Cubic Regularization Framework and Complexity Analysis} \author{ E. Bergou \thanks{MaIAGE, INRA, Universit\'e Paris-Saclay, 78350 Jouy-en-Josas, France ({\tt [email protected]}). } \and Y. Diouane\thanks{Institut Sup\'erieur de l'A\'eronautique et de l'Espace (ISAE-SUPAERO), Universit\'e de Toulouse, 31055 Toulouse Cedex 4, France ({\tt [email protected]}). } \and S. Gratton\thanks{INP-ENSEEIHT, Universit\'e de Toulouse, 31071 Toulouse Cedex 7, France ({\tt [email protected]}).} } \maketitle \footnotesep=0.4cm \begin{abstract} The adaptive regularized framework using cubics has emerged as an alternative to line-search and trust-region algorithms for smooth nonconvex optimization, with an optimal complexity among second-order methods. In this paper, we propose and analyze the use of an iteration-dependent scaled norm in the adaptive regularized framework using cubics. With such a scaled norm, the obtained method behaves as a line-search algorithm along the quasi-Newton direction with a special backtracking strategy. Under appropriate assumptions, the new algorithm enjoys the same convergence and complexity properties as the adaptive regularized algorithm using cubics. The complexity for finding an approximate first-order stationary point can be improved to be optimal whenever a second-order version of the proposed algorithm is considered. In a similar way, using the same scaled norm to define the trust-region neighborhood, we show that the trust-region algorithm behaves as a line-search algorithm. The good potential of the obtained algorithms is shown on a set of large-scale optimization problems. \end{abstract} \begin{center} \textbf{Keywords:} Nonlinear optimization, unconstrained optimization, line-search methods, adaptive regularized framework using cubics, trust-region methods, worst-case complexity. 
\end{center} \section{Introduction} An unconstrained nonlinear optimization problem considers the minimization of a scalar function known as the objective function. Classical iterative methods for solving this problem are trust-region (TR) \cite{Conn_Gould_Toin_2000,YYuan_2015}, line-search (LS) \cite{JEDennis_RBSchnabel_1983} and algorithms using cubic regularization. The latter class of algorithms was first investigated by Griewank \cite{AGriewank_1981} and then by Nesterov and Polyak \cite{YNesterov_BTPolyak_2006}. Recently, Cartis \textit{et al.} \cite{CCartis_NIMGould_PhLToint_2011_a} proposed a generalization to an adaptive regularized framework using cubics (ARC). The worst-case evaluation complexity of finding an $\epsilon$-approximate first-order critical point using TR or LS methods is known to be at most $\mathcal{O}(\epsilon^{-2})$ objective function or gradient evaluations, where $\epsilon \in ]0,1[$ is a user-defined accuracy threshold on the gradient norm \cite{YNesterov_2004,SGratton_ASartenaer_PhLToint_2008,CCartis_PhLSampaio_PhLToint_2015}. Under appropriate assumptions, ARC takes at most $\mathcal{O}(\epsilon^{-3/2})$ objective function or gradient evaluations to reduce the norm of the gradient of the objective function below $\epsilon$, thus substantially improving the worst-case complexity over the classical TR/LS methods~\cite{CCartis_NIMGould_PhLToint_2011_b}. Such a complexity bound can be improved using higher-order regularized models; we refer the reader, for instance, to the references \cite{Birgin2016,CCartis_NIMGould_PhLToint_2017}. More recently, a non-standard TR method~\cite{CurtRobiSama16} was proposed with the same worst-case complexity bound as ARC. It has also been proved that the same worst-case complexity $\mathcal{O}(\epsilon^{-3/2})$ can be achieved by means of a specific variable-norm in a TR method \cite{Martinez2017} or using quadratic regularization \cite{Birgin2017}. 
All previous approaches use a cubic sufficient descent condition instead of the more usual predicted-reduction based descent. Generally, they need to solve more than one linear system in sequence at each outer iteration (by outer iteration, we mean the sequence of the iterates generated by the algorithm), which makes the computational cost per iteration expensive. In \cite{Bergou_Diouane_Gratton_2017}, it has been shown how to use the so-called energy norm in the ARC/TR framework when a symmetric positive definite (SPD) approximation of the objective function Hessian is available. Within the energy norm, ARC/TR methods behave as LS algorithms along the Newton direction, with a special backtracking strategy and an acceptability condition in the spirit of ARC/TR methods. As long as the model of the objective function is convex, the LS algorithm proposed in \cite{Bergou_Diouane_Gratton_2017} and derived from ARC enjoys the same convergence and complexity analysis properties as ARC, in particular the first-order complexity bound of $\mathcal{O}(\epsilon^{-3/2})$. In the complexity analysis of the ARC method~\cite{CCartis_NIMGould_PhLToint_2011_b}, it is required that the Hessian approximation approximates the true Hessian accurately enough \cite[Assumption AM.4]{CCartis_NIMGould_PhLToint_2011_b}; obtaining such a convex approximation may be out of reach when handling nonconvex optimization. This paper generalizes the methodology proposed in \cite{Bergou_Diouane_Gratton_2017} to handle nonconvex models. We propose to use, in the regularization term of the ARC cubic model, an iteration-dependent scaled norm. In this case, ARC behaves as an LS algorithm with a worst-case evaluation complexity of finding an $\epsilon$-approximate first-order critical point of $\mathcal{O}(\epsilon^{-2})$ function or gradient evaluations. 
Moreover, under appropriate assumptions, a second-order version of the obtained LS algorithm is shown to have a worst-case complexity of $\mathcal{O}(\epsilon^{-3/2})$. The use of a scaled norm was first introduced in \cite[Section 7.7.1]{Conn_Gould_Toin_2000} for TR methods, where it was suggested to use the absolute-value of the Hessian matrix in the scaled norm; such a choice was described as ``the ideal trust region'' that reflects the proper scaling of the underlying problem. For a large-scale indefinite Hessian matrix, computing its absolute-value is certainly a computationally expensive task as it requires a spectral decomposition. This means that for large-scale optimization problems the use of the absolute-value based norm can be seen as out of reach. Our approach in this paper is different as it allows the use of subspace methods. In fact, as long as the quasi-Newton direction is not orthogonal to the gradient of the objective function at the current iterate, the specific choice of the scaled norm renders the ARC subproblem solution collinear with the quasi-Newton direction. Using subspace methods, we also consider the large-scale setting when the matrix factorizations are not affordable, implying that only iterative methods for computing a trial step can be used. Compared to the classical ARC, when using the Euclidean norm, the dominant computational cost, regardless of the function evaluation cost, of the resulting algorithm is mainly the cost of solving a linear system for successful iterations. Moreover, the cost of the subproblem solution for unsuccessful iterations becomes inexpensive, requiring only the update of a scalar. Hence, ARC behaves as an LS algorithm along the quasi-Newton direction, with a special backtracking strategy and an acceptance criterion in the spirit of the ARC algorithm. 
In this context, the obtained LS algorithm is globally convergent and requires a number of iterations of order $\epsilon^{-2}$ to produce an $\epsilon$-approximate first-order critical point. A second-order version of the algorithm is also proposed, by making use of the exact Hessian or at least of a good approximation of the exact Hessian, to ensure an optimal worst-case complexity bound of order $\epsilon^{-3/2}$. In this case, we investigate how the complexity bound depends on the quality of the chosen quasi-Newton direction in terms of being a sufficient descent direction. In fact, the obtained complexity bound can be worse than expected whenever the quasi-Newton direction is approximately orthogonal to the gradient of the objective function. Similarly to ARC, we show that the TR method also behaves as an LS algorithm using the same scaled norm as in ARC. Numerical illustrations over a test set of large-scale optimization problems are given in order to assess the efficiency of the obtained LS algorithms. The proposed analysis in this paper assumes that the quasi-Newton direction is not orthogonal to the gradient of the objective function during the minimization process. When this assumption is violated, one can either modify the Hessian approximation using regularization techniques or, when a second-order version of the LS algorithm is considered, switch to the classical ARC algorithm using the Euclidean norm until this assumption holds. In the latter scenario, we propose to check first if there exists an approximate quasi-Newton direction, among all the iterates generated using a subspace method, which is not orthogonal to the gradient and satisfies the desired properties. If not, one minimizes the model using the Euclidean norm until a new successful outer iteration is found. We organize this paper as follows. 
In Section~\ref{section:1}, we introduce the ARC method using a general scaled norm and derive the LS algorithm obtained on the basis of ARC when a specific scaled norm is used. Section~\ref{section:2} analyzes the minimization of the cubic model and discusses the choice of the scaled norm that simplifies solving the ARC subproblem. Section~\ref{section:3} discusses first how the iteration-dependent norm can be chosen to be uniformly equivalent to the Euclidean norm, and then we propose a second-order LS algorithm that enjoys the optimal complexity bound. The section ends with a detailed complexity analysis of the obtained algorithm. Similarly to ARC and using the same scaled norm, an LS algorithm in the spirit of the TR algorithm is proposed in Section~\ref{section:4}. Numerical tests are illustrated and discussed in Section~\ref{section:5}. Conclusions and future improvements are given in Section~\ref{section:6}. \section{ARC Framework Using a Specific $M_k$-Norm } \label{section:1} \subsection{ARC Framework} We consider a problem of unconstrained minimization of the form \begin{eqnarray} \label{nl_ls_problem} \displaystyle \min_{x \in \mathbb{R}^n} f(x), \end{eqnarray} where the objective function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is assumed to be continuously differentiable. The ARC framework \cite{CCartis_NIMGould_PhLToint_2011_a} can be described as follows: at a given iterate $x_k$, we define $m^Q_k : \mathbb{R}^n \rightarrow \mathbb{R}$ as an approximate second-order Taylor approximation of the objective function $f$ around $x_k$, i.e., \begin{eqnarray} \label{q_model} \displaystyle m^Q_k(s) & =& f(x_k) + s^{\top}g_k + \frac{1}{2} s^{\top} B_k s, \end{eqnarray} where $g_k = \nabla f(x_k)$ is the gradient of $f$ at the current iterate $x_k$, and $B_k$ is a symmetric local approximation (uniformly bounded from above) of the Hessian of $f$ at $x_k$. 
The trial step $s_k$ approximates the global minimizer of the cubic model $m_k(s)= m^Q_k(s) + \frac{1}{3}\sigma_k \| s \|^3_{M_k}$, i.e., \begin{eqnarray} \label{eq:nl_ARC_subproblem} s_k \approx \displaystyle \arg \min_{s \in \mathbb{R}^n} m_k(s), \end{eqnarray} where $\|.\|_{M_k}$ denotes an iteration-dependent scaled norm of the form $\|x\|_{M_k}= \sqrt{x^{\top}M_k x}$ for all $x \in \mathbb{R}^n$ and $M_k$ is a given SPD matrix. The scalar $\sigma_k >0$ is a dynamic positive parameter that might be regarded as the reciprocal of the TR radius in TR algorithms (see \cite{CCartis_NIMGould_PhLToint_2011_a}). The parameter $\sigma_k$ takes into account the agreement between the objective function $f$ and the model $m_k$. To decide whether the trial step is acceptable or not, a ratio between the actual reduction and the predicted reduction is computed as follows: \begin{equation} \label{rho} \rho_k \; = \; \frac{f(x_k) - f(x_k + s_k)}{ f(x_k) - m^Q_k(s_k)}. \end{equation} For a given scalar $0 < \eta < 1$, the $k^{th}$ outer iteration is said to be \textit{successful} if $\rho_k \ge \eta$, and \textit{unsuccessful} otherwise. For all \textit{successful} iterations, we set $x_{k+1}= x_k + s_k$; otherwise the current iterate is kept unchanged, $x_{k+1} = x_k$. We note that, unlike the original ARC \cite{CCartis_NIMGould_PhLToint_2011_a,CCartis_NIMGould_PhLToint_2011_b} where the cubic model is used to evaluate the denominator in (\ref{rho}), in recent works related to ARC, only the quadratic approximation $m^Q_k(s_k)$ is used in the comparison with the actual value of $f$ without the regularization parameter (see \cite{Birgin2016} for instance). Algorithm~\ref{algo:ARC} gives a detailed description of ARC. \LinesNumberedHidden \begin{algorithm}[!ht] \SetAlgoNlRelativeSize{0} \caption{\bf ARC algorithm.} \label{algo:ARC} \SetAlgoLined \KwData{select an initial point $x_0$ and the constant $0< \eta<1$. 
Set the initial regularization $\sigma_0 >0$ and $\sigma_{\min} \in ]0, \sigma_0]$, set also the constants $0< \nu_1 \le 1 <\nu_2 $.} \For{$k= 1, 2, \ldots$}{ Compute the step $s_k$ as an approximate solution of (\ref{eq:nl_ARC_subproblem}) such that \begin{eqnarray} \label{Cauchy_decrease} m_k(s_k) &\le & m_k(s^{\cc}_{k}) \end{eqnarray} where $s^{\cc}_{k} = - \delta^{\cc}_{k} g_k$ and $\delta^{\cc}_{k} = \displaystyle \arg \min_{t > 0} m_k(-t g_k)$ \; \eIf{$\rho_k \ge \eta$}{Set $x_{k+1}= x_k + s_k$ and $ \sigma_{k+1} = \max\{\nu_1 \sigma_k,\sigma_{\min}\}$\; }{Set $x_{k+1} = x_k$ and $\sigma_{k+1}= \nu_2\sigma_k$\;} } \end{algorithm} The Cauchy step $s^{\cc}_{k}$, defined in Algorithm \ref{algo:ARC}, is computationally inexpensive compared to the computational cost of the global minimizer of $m_k$. The condition (\ref{Cauchy_decrease}) on $s_k$ is sufficient for ensuring global convergence of ARC to first-order critical points. From now on, we will assume that first-order stationarity is not reached yet, meaning that the gradient of the objective function is nonzero at the current iteration $k$ (i.e., $g_k \neq 0$). Also, $\| \cdot \|$ will denote the vector or matrix $\ell_2$-norm, $\sign(\alpha)$ the sign of a real $\alpha$, and $I_n$ the identity matrix of size $n$. \subsection{An LS Algorithm Inspired by the ARC Framework} Using a specific $M_k$-norm in the definition of the cubic model $m_k$, we will show that the ARC framework (Algorithm \ref{algo:ARC}) behaves as an LS algorithm along the quasi-Newton direction. In a previous work \cite{Bergou_Diouane_Gratton_2017}, when the matrix $B_k$ is assumed to be positive definite, we showed that the minimizer $s_k$ of the cubic model defined in (\ref{eq:nl_ARC_subproblem}) becomes collinear with the quasi-Newton direction when the matrix $M_k$ is set equal to $B_k$. 
In this section, we generalize our proposed approach to cover the case where the linear system $B_ks = -g_k$ admits an approximate solution and $B_k$ is not necessarily SPD. Let $s^Q_k$ be an approximate solution of the linear system $B_k s=-g_k$ and assume that this step $s_k^Q$ is not orthogonal to the gradient of the objective function at $x_k$, i.e., there exists an $\epsilon_d>0$ such that $|g_k^{\top}s_k^Q | \ge \epsilon_d \|g_k \| \| s_k^Q\| $. Suppose that there exists an SPD matrix $M_k$ such that $M_ks^Q_k= \frac{\beta_k \|s^Q_k\|^2}{g_k^{\top}s_k^Q}g_k$ where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $\beta_{\max}> \beta_{\min}>0$; in Theorem \ref{th:equivnm} we will show that such a matrix $M_k$ exists. By using the associated $M_k$-norm in the definition of the cubic model $m_k$, one can show (see Theorem \ref{cor:1}) that an approximate stationary point of the subproblem (\ref{eq:nl_ARC_subproblem}) is of the form \begin{eqnarray} \label{eqdeltaarc} s_{k} & = & \delta_k s^Q_k, ~~\mbox{where}~~ \delta_k= \frac{2 }{1-\sign(g_k^{\top}s^Q_k) \sqrt{1 +4 \frac{\sigma_k \| s^Q_k \|^3_{M_k}}{|g_k^{\top}s^Q_k|} }} . \end{eqnarray} For \textit{unsuccessful} iterations in Algorithm~\ref{algo:ARC}, since the step direction $s^Q_k$ does not change, the approximate solution of the subproblem, given by (\ref{eqdeltaarc}), can be obtained simply by updating the step-size $\delta_k$. This means that the subproblem computational cost of \textit{unsuccessful} iterations becomes negligible compared to solving the subproblem as required by ARC when the Euclidean norm is used (see e.g., \cite{CCartis_NIMGould_PhLToint_2011_a}). As a consequence, the use of the proposed $M_k$-norm in Algorithm~\ref{algo:ARC} will lead to a new formulation of the ARC algorithm where the dominant computational cost, regardless of the objective function evaluation cost, is the cost of solving a linear system for \textit{successful} iterations. 
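As a numerical illustration (our own sketch, not part of the paper: the function name is hypothetical, and we use the identity $\|s^Q_k\|_{M_k}=\sqrt{\beta_k}\,\|s^Q_k\|$, which follows from the assumed relation $M_ks^Q_k= \frac{\beta_k \|s^Q_k\|^2}{g_k^{\top}s_k^Q}g_k$), the step length in (\ref{eqdeltaarc}) can be evaluated in closed form:

```python
import numpy as np

def ls_arc_step_length(g, sQ, sigma, beta):
    """Closed-form ARC step length along sQ under the scaled M_k-norm,
    using ||sQ||_{M_k}^3 = beta^{3/2} ||sQ||^3 (illustrative sketch)."""
    gts = float(g @ sQ)                         # g_k^T s_k^Q, assumed nonzero
    norm_M3 = beta**1.5 * np.linalg.norm(sQ)**3  # ||sQ||_{M_k}^3
    disc = np.sqrt(1.0 + 4.0 * sigma * norm_M3 / abs(gts))
    return 2.0 / (1.0 - np.sign(gts) * disc)
```

On an unsuccessful iteration only `sigma` changes, so recomputing the step length is a scalar update rather than a new subproblem solve.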
In other words, with the proposed $M_k$-norm, the ARC algorithm behaves as an LS method with a specific backtracking strategy and an acceptance criterion in the spirit of the ARC algorithm, i.e., the step is of the form $s_{k}= \delta_k s_k^Q $ where the step length $\delta_k>0$ is chosen such that \begin{eqnarray} \label{sdcond:arc} \rho_k=\frac{f(x_k) - f(x_k +s_{k} )}{ f(x_k) - m_k^Q(s_{k})} \ge \eta & \mbox{and}& m_k(s_{k}) \le m_k(-\delta^{\cc}_k g_k). \end{eqnarray} The step lengths $\delta_k$ and $\delta^{\cc}_k$ are computed respectively as follows: \begin{eqnarray} \delta_k&= &\frac{2}{1-\sign(g_k^{\top}s_k^Q)\sqrt{1 + 4 \frac{\sigma_k \beta_k^{3/2} \| s^Q_k \|^3}{|g_k^{\top}s_k^Q|}}} \label{eq:deltasigma:1}, \end{eqnarray} and \begin{eqnarray} \delta^{\cc}_k&= &\frac{2}{\frac{g_k^{\top}B_kg_k}{\|g_k \|^2} +\sqrt{\left(\frac{g_k^{\top}B_kg_k}{\|g_k \|^2}\right)^2 + 4 \sigma_k \chi_k^{3/2} \| g_k \| }} \label{eq:deltasigma:2} \end{eqnarray} where $\chi_k= \beta_k \left(\frac{5}{2} - \frac{3}{2} \cos(\varpi_k)^2 + 2 \left(\frac{1- \cos(\varpi_k)^2}{\cos(\varpi_k)}\right)^2\right)$ and $\cos(\varpi_k) = \frac{g_k^{\top}s_k^Q}{\|g_k\| \| s_k^Q\|}$. The $M_k$-norms of the vectors $s^Q_k$ and $g_k$ in the computation of $ \delta_k$ and $ \delta^{\cc}_k$ have been substituted using the expressions given in Theorem \ref{th:equivnm}. The value of $\sigma_k$ is set equal to the current value of the regularization parameter as in the original ARC algorithm. For large values of $\delta_k$ the decrease condition (\ref{sdcond:arc}) may not be satisfied. In this case, the value of $\sigma_k$ is enlarged using an expansion factor $\nu_2>1$. Iteratively, the value of $\delta_k$ is updated and the acceptance condition (\ref{sdcond:arc}) is checked again, until it is satisfied. We refer to the ARC algorithm as LS-ARC when the proposed scaled $M_k$-norm is used, as it then behaves as an LS-type method. Algorithm~\ref{algo:LS-ARC} details the final algorithm. 
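A bare-bones sketch of this backtracking mechanism (our own illustrative Python, not the authors' implementation; for brevity it checks only the ratio test of condition (\ref{sdcond:arc}) and omits the Cauchy-decrease comparison, and it takes $\|s^Q_k\|^3_{M_k}=\beta_k^{3/2}\|s^Q_k\|^3$ as above):

```python
import numpy as np

def ls_arc_backtracking(f, grad, B, x, sQ, sigma,
                        beta=1.0, eta=0.1, nu2=2.0, max_expand=50):
    """One outer LS-ARC iteration: expand sigma (hence shrink the step)
    until the acceptance ratio rho reaches eta.  Illustrative sketch only."""
    g = grad(x)
    gts = float(g @ sQ)                         # assumed nonzero
    norm_M3 = beta**1.5 * np.linalg.norm(sQ)**3
    for _ in range(max_expand):
        disc = np.sqrt(1.0 + 4.0 * sigma * norm_M3 / abs(gts))
        delta = 2.0 / (1.0 - np.sign(gts) * disc)
        s = delta * sQ
        pred = -(g @ s + 0.5 * s @ (B @ s))     # f(x_k) - m_k^Q(s_k)
        rho = (f(x) - f(x + s)) / pred
        if rho >= eta:
            return x + s, sigma                 # successful iteration
        sigma *= nu2                            # unsuccessful: scalar update only
    raise RuntimeError("no acceptable step length found")
```

On a convex quadratic with the exact Newton direction the first trial step is accepted, since the quadratic model then predicts the reduction exactly ($\rho_k = 1$).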
We recall again that this algorithm is nothing but the ARC algorithm using a specific $M_k$-norm. \LinesNumberedHidden \begin{algorithm}[!ht] \SetAlgoNlRelativeSize{0} \caption{\bf LS-ARC algorithm.} \label{algo:LS-ARC} \SetAlgoLined \KwData{select an initial point $x_0$ and the constants $0< \eta<1$, $0<\epsilon_d<1$, $0< \nu_1 \le 1 <\nu_2 $ and $0< \beta_{\min}< \beta_{\max}$. Set $\sigma_0 >0$ and $\sigma_{\min} \in ]0, \sigma_0]$.} \For{$k= 1, 2, \ldots$}{ Choose a parameter $\beta_k \in ]\beta_{\min}, \beta_{\max}[$\; Let $s^Q_k$ be an approximate solution of $B_k s=-g_k$ such that $|g_k^\top s^Q_k| \ge \epsilon_d \|g_k\|\|s^Q_k\|$ \; Set $\delta_k$ and $\delta^{\cc}_k$ respectively using (\ref{eq:deltasigma:1}) and (\ref{eq:deltasigma:2})\; \While{ {\normalfont condition \text{(\ref{sdcond:arc})} is not satisfied}}{ Set $\sigma_k \leftarrow \nu_2 \sigma_k $ and update $\delta_k$ and $\delta^{\cc}_k$ respectively using (\ref{eq:deltasigma:1}) and (\ref{eq:deltasigma:2})\; } Set $s_k= \delta_k s^Q_k$, $x_{k+1}= x_k + s_k$ and $ \sigma_{k+1} = \max \{\nu_1 \sigma_k,\sigma_{\min}\}$\; } \end{algorithm} Note that in Algorithm \ref{algo:LS-ARC} the step $s^Q_k$ may not exist or may be approximately orthogonal to the gradient $g_k$. A possible way to overcome this issue is to modify the matrix $B_k$ using regularization techniques. In fact, as long as the Hessian approximation is still uniformly bounded from above, the global convergence will still hold as well as a complexity bound of order $\epsilon^{-2}$ to drive the norm of the gradient below $\epsilon \in ]0,1[$ (see \cite{CCartis_NIMGould_PhLToint_2011_a} for instance). The complexity bound can be improved to be of the order of $\epsilon^{-3/2}$ if a second-order version of the algorithm LS-ARC is used, by taking $B_k$ equal to the exact Hessian or at least a good approximation of it (as in Assumption \ref{asm:B} of Section \ref{section:3}). 
In this case, modifying the matrix $B_k$ using regularization techniques so that the step $s_k^Q$ approximately solves the linear system $B_ks = -g_k$ while satisfying $|g_k^\top s^Q_k| \ge \epsilon_d \|g_k\|\|s^Q_k\|$ is no longer trivial. This second-order version of the algorithm LS-ARC will be discussed in detail in Section \ref{section:3}, where the convergence and complexity analysis, when the proposed $M_k$-norm is used, will be outlined. \section{On the Cubic Model Minimization} \label{section:2} In this section, we assume that the linear system $B_ks = -g_k$ has a solution. We will mostly focus on the solution of the subproblem (\ref{eq:nl_ARC_subproblem}) for a given outer iteration $k$. In particular, we will make explicit the condition to impose on the matrix $M_k$ in order to get the solution of the ARC subproblem collinear with the step $s^Q_k$. Hence, in such a case, one can get the solution of the ARC subproblem at a modest computational cost. The step $s^Q_k$ can be obtained exactly using a direct method if the matrix $B_k$ is not too large. Typically, one can use the $LDL^T$ factorization to solve this linear system. For large-scale optimization problems, computing $s^Q_k$ can be prohibitively computationally expensive. We will show that it is possible to relax this requirement by letting the step $s_k^Q$ be only an approximation of the exact solution using subspace methods. In fact, when an approximate solution is used and as far as the global convergence of Algorithm~\ref{algo:ARC} is concerned, all that is needed is that the solution of the subproblem (\ref{eq:nl_ARC_subproblem}) yields a decrease in the cubic model which is as good as the Cauchy decrease (as emphasized in condition (\ref{Cauchy_decrease})). 
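To make the subspace idea concrete, a truncated conjugate-gradient iteration (one possible Krylov subspace method; an illustrative sketch of our own that assumes $B_k$ is positive definite on the generated subspace, which the paper does not require in general) can serve as the approximate solver for $B_k s = -g_k$:

```python
import numpy as np

def truncated_cg(B, g, tol=1e-8, max_iter=10):
    """Approximate s^Q with B s = -g by a few CG iterations, i.e. by
    minimizing the quadratic model over a growing Krylov subspace."""
    s = np.zeros_like(g, dtype=float)
    r = -g.astype(float)          # residual of B s = -g at s = 0
    p = r.copy()
    rr = float(r @ r)
    for _ in range(max_iter):
        Bp = B @ p
        alpha = rr / float(p @ Bp)
        s = s + alpha * p
        r = r - alpha * Bp
        rr_new = float(r @ r)
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return s
```

Stopping after a fixed number of iterations corresponds to restricting the minimization to a low-dimensional subspace $\mathcal{L}_k$, which is exactly the setting analyzed in the next paragraphs.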
In practice, a version of Algorithm~\ref{algo:LS-ARC} solely based on the Cauchy step would suffer from the same drawbacks as the steepest descent algorithm on ill-conditioned problems, and faster convergence can be expected if the matrix $B_k$ also influences the minimization direction. The main idea consists of achieving a further decrease of the cubic model, better than the Cauchy decrease, by projection onto a sequence of embedded Krylov subspaces. We now show how to use a similar idea to compute a solution of the subproblem that is computationally cheap and yields the global convergence of Algorithm~\ref{algo:LS-ARC}. A classical way to approximate the exact solution $s^Q_k$ is by using subspace methods, typically a Krylov subspace method. For that, let $\mathcal{L}_k$ be a subspace of $\mathbb{R}^n$ and $l$ its dimension. Let $Q_k$ denote an $n \times l$ matrix whose columns form a basis of $\mathcal{L}_k$. Thus, every $s \in \mathcal{L}_k$ can be written as $s=Q_kz$ for some $z \in \mathbb{R}^l$. In this case, $s^Q_k$ denotes the exact stationary point of the model function $m^Q_k$ over the subspace $\mathcal{L}_k$ when it exists. For both cases, exact and inexact, we will assume that the step $s_k^Q$ is not orthogonal to the gradient $g_k$. In what follows, we state our assumption on $s_k^Q$ formally: \begin{assumption} \label{asm:1} The model $m_k^Q$ admits a stationary point $s_k^Q$ such that $|g_k^{\top}s_k^Q | \ge \epsilon_d \|g_k \| \| s_k^Q\| $ where $\epsilon_d>0$ is a pre-defined positive constant. 
\end{assumption} We also define a Newton-like step $s_k^{\NT}$ associated with the minimization of the cubic model $m_k$ over the subspace $\mathcal{L}_k$ in the following way: when $s^Q_k$ corresponds to the exact solution of $B_ks=-g_k$, \begin{eqnarray} \label{Newton_step_ARC} s_k^{\NT} = \delta_k^{\NT} s^Q_k, ~~\mbox{where} ~~ \delta^{\NT}_k= \arg \min_{\delta \in \mathcal{I}_k } m_k(\delta s_k^{Q}), \end{eqnarray} where $\mathcal{I}_k=\mathbb{R}_+$ if $ g_k^{\top} s_k^{Q} < 0$ and $\mathcal{I}_k=\mathbb{R}_-$ otherwise. If $s_k^Q$ is computed using an iterative subspace method, then $ s^{\NT}_k = Q_k z^{\NT}_k, $ where $z^{\NT}_k$ is the Newton-like step, as in (\ref{Newton_step_ARC}), associated with the following reduced subproblem: \begin{eqnarray} \label{reduce_model_arc} \min_{z\in \mathbb{R}^l} f(x_k) + z^{\top} Q_k^{\top} g_k + \frac{1}{2} z^{\top} Q_k^{\top} B_k Q_k z + \frac{1}{3}\sigma_k \| z \|^3_{Q_k^{\top} M_k Q_k}. \end{eqnarray} \begin{theorem} \label{th:1} Let Assumption \ref{asm:1} hold. The Newton-like step $s^{\NT}_k$ is of the form \begin{eqnarray} \label{eqdeltaarcN} s_k^{\NT} = \delta_k^{\NT} s^Q_k, ~~\mbox{where}~~ \delta_k^{\NT}& = & \frac{2 }{1-\sign(g_k^{\top}s_k^Q) \sqrt{1 +4 \frac{\sigma_k \| s_k^Q \|^3_{M_k}}{|g_k^{\top}s^Q_k|} }} . \end{eqnarray} \end{theorem} \begin{proof} Consider first the case where the step $s^Q_k$ is computed exactly (i.e., $B_ks_k^Q=-g_k$). In this case, for all $\delta \in \mathcal{I}_k$, one has \begin{eqnarray} \label{min_N_ARC_init} m_k(\delta s_k^Q) - m_k(0) &=& \delta g_k^{\top}s^{Q}_k + \frac{\delta^2}{2} [s^{Q}_k]^{\top} B_k [s^{Q}_k] + \frac{\sigma_k |\delta|^3 }{3} \| s^{Q}_k \|^3_{M_k} \nonumber \\ &=& (g_k^{\top}s^{Q}_k) \delta - (g_k^{\top}s_k^Q) \frac{\delta^2}{2} + (\sigma_k \| s_k^Q \|^3_{M_k}) \frac{ |\delta|^3}{3}. 
\end{eqnarray} If $g_k^{\top}s_k^Q<0$ (hence, $\mathcal{I}_k=\mathbb{R}_+$), we compute the value of the parameter $\delta^{\NT}_k$ at which the unique minimizer of the above function is attained. Taking the derivative of (\ref{min_N_ARC_init}) with respect to $\delta$ and equating the result to zero, one gets \begin{eqnarray} \label{equ:n:1} 0 & =& g_k^{\top}s^{Q}_k - (g_k^{\top}s_k^Q) \delta^{\NT}_k + \sigma_k \| s_k^Q \|^3_{M_k} \left(\delta^{\NT}_k\right)^2 , \end{eqnarray} and thus, since $ \delta^{\NT}_k>0$, \begin{eqnarray*} \delta^{\NT}_k& =& \frac{g_k^{\top}s_k^Q+ \sqrt{ \left(g_k^{\top}s_k^Q\right)^2 - 4 \sigma_k (g_k^{\top}s_k^Q)\| s_k^Q \|^3_{M_k} }}{ 2 \sigma_k \| s_k^Q \|^3_{M_k}} = \frac{2 }{1 + \sqrt{1 - 4 \frac{\sigma_k \| s_k^Q \|^3_{M_k}}{g_k^{\top}s^Q_k} }}. \end{eqnarray*} If $g_k^{\top}s_k^Q>0$ (hence, $\mathcal{I}_k=\mathbb{R}_-$), then again by taking the derivative of (\ref{min_N_ARC_init}) with respect to $\delta$ and equating the result to zero, one gets \begin{eqnarray} \label{equ:n:2} 0 & =& g_k^{\top}s^{Q}_k - (g_k^{\top}s_k^Q) \delta^{\NT}_k - \sigma_k \| s_k^Q \|^3_{M_k} \left(\delta^{\NT}_k\right)^2 , \end{eqnarray} and thus, since $ \delta^{\NT}_k<0$ in this case, \begin{eqnarray*} \delta^{\NT}_k& =& \frac{g_k^{\top}s_k^Q+ \sqrt{ \left(g_k^{\top}s_k^Q\right)^2 + 4 \sigma_k (g_k^{\top}s_k^Q)\| s_k^Q \|^3_{M_k} }}{ 2 \sigma_k \| s_k^Q \|^3_{M_k}} = \frac{2 }{1 - \sqrt{1 + 4 \frac{\sigma_k \| s_k^Q \|^3_{M_k}}{g_k^{\top}s^Q_k} }}. \end{eqnarray*} From both cases, one deduces that $ \delta^{\NT}_k= \frac{2 }{1-\sign(g_k^{\top}s_k^Q) \sqrt{1 +4 \frac{\sigma_k \| s_k^Q \|^3_{M_k}}{|g_k^{\top}s^Q_k|} }} .$ Consider now the case where $s_k^Q$ is computed using an iterative subspace method. In this case, one has $s^{\NT}_k= Q_k z^{\NT}_k$, where $z^{\NT}_k$ is the Newton-like step associated with the reduced subproblem (\ref{reduce_model_arc}).
Hence by applying the first part of the proof (the exact case) to the reduced subproblem (\ref{reduce_model_arc}), it follows that $$z^{\NT}_k = \bar{\delta}^{\NT}_k z^Q_k~~\mbox{ where~~} \bar{\delta}^{\NT}_k= \frac{2 }{1-\sign((Q_k^{\top} g_k)^{\top}z^Q_k) \sqrt{1 +4 \frac{\sigma_k \| z^Q_k \|^3_{{Q_k^{\top} M_k Q_k}}}{|(Q_k^{\top} g_k)^{\top}z^Q_k|} }}, $$ where $z^Q_k$ is a stationary point of the quadratic part of the minimized model in (\ref{reduce_model_arc}). Thus, by substituting $z^{\NT}_k$ in the formula $s^{\NT}_k= Q_k z^{\NT}_k$, one gets \begin{eqnarray*} s^{\NT}_k &= & Q_k \left( \frac{2 }{1-\sign((Q_k^{\top} g_k)^{\top}z^Q_k) \sqrt{1 +4 \frac{\sigma_k \| z^Q_k \|^3_{{Q_k^{\top} M_k Q_k}}}{|(Q_k^{\top} g_k)^{\top}z^Q_k|} }} z^Q_k\right) \\ &=& \frac{2}{1- \sign(g_k^{\top}Q_k z^Q_k)\sqrt{1 + 4 \frac{\sigma_k \| Q_k z^Q_k \|^3_{{M}_k}}{|g_k^{\top}Q_k z^Q_k|}}} Q_k z^Q_k \\ &=& \frac{2}{1- \sign(g_k^{\top}s_k^Q)\sqrt{1 + 4 \frac{\sigma_k \| s_k^Q\|^3_{{M}_k}}{|g_k^{\top}s_k^Q|}}} s_k^Q. \end{eqnarray*} \end{proof} In general, for the ARC algorithm, the matrix $M_k$ can be an arbitrary SPD matrix. Our goal, in this section, is to determine how one can choose the matrix $M_k$ so that the Newton-like step $s^{\NT}_k$ becomes a stationary point of the subproblem (\ref{eq:nl_ARC_subproblem}). The following theorem explicitly gives the necessary and sufficient condition on the matrix $M_k$ to reach this aim. \begin{theorem} \label{th:2} Let Assumption \ref{asm:1} hold. The step $s^{\NT}_k$ is a stationary point for the subproblem (\ref{eq:nl_ARC_subproblem}) if and only if there exists $\theta_k >0$ such that $M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k$. Note that $\theta_k=\|s^Q_k\|^2_{M_k}$.
\end{theorem} \begin{proof} Indeed, in the exact case, if we suppose that the step $ s^{\NT}_k$ is a stationary point of the subproblem~(\ref{eq:nl_ARC_subproblem}), this means that \begin{eqnarray} \label{equ:1} \nabla m_k(s^{\NT}_k) & = & g_k + B_ks^{\NT}_k+\sigma_k \|s^{\NT}_k\|_{M_k} M_k s^{\NT}_k= 0. \end{eqnarray} On the other hand, $s^{\NT}_k= \delta^{\NT}_k s^{Q}_k$ where $\delta^{\NT}_k$ is a solution of $ g_k^{\top}s^{Q}_k - (g_k^{\top}s_k^Q)\delta^{\NT}_k +\sigma_k \| s_k^Q \|^3_{M_k} |\delta^{\NT}_k| \delta^{\NT}_k =0$ (this equation can be deduced from (\ref{equ:n:1}) and (\ref{equ:n:2})). Hence, we obtain that \begin{eqnarray*} 0 & =& \nabla m_k(s^{\NT}_k) =g_k - \delta^{\NT}_k g_k + \sigma_k |\delta^{\NT}_k| \delta^{\NT}_k \| s_k^Q \|_{M_k} M_k s^Q_k \nonumber \\ &=& \left( 1 - \delta^{\NT}_k \right)g_k + \left( \frac{\sigma_k \| s_k^Q\|_{M_k}^3 }{g_k^{\top}s_k^Q} |\delta^{\NT}_k| \delta^{\NT}_k \right) \left( \frac{g_k^{\top}s_k^Q}{\| s_k^Q\|_{M_k}^2} M_k s_k^Q \right) \\ &=& \left( \frac{\sigma_k \| s_k^Q\|_{M_k}^3 }{g_k^{\top}s_k^Q} |\delta^{\NT}_k| \delta^{\NT}_k \right) \left( \frac{g_k^{\top}s_k^Q}{\| s_k^Q\|_{M_k}^2} M_k s_k^Q - g_k \right). \end{eqnarray*} Equivalently, we conclude that $ M_k s_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k $ where $\theta_k = \| s_k^Q\|_{M_k}^2 >0$. A similar proof applies when a subspace method is used to compute $s_k^Q$. \end{proof} The key condition to ensure that the ARC subproblem stationary point is equal to the Newton-like step $s^{\NT}_k$ is the choice of a matrix $M_k$ which satisfies the secant-like equation $M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k$ for a given $\theta_k>0$. The existence of such a matrix $M_k$ is not problematic as long as Assumption \ref{asm:1} holds. In fact, Theorem \ref{th:equivnm} will exhibit a range of $\theta_k>0$ for which the matrix $M_k$ exists. Note that in the formula of $s^{\NT}_k$, such a matrix is used only through the computation of the $M_k$-norm of $s_k^Q$.
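Indeed, the closed-form coefficient in (\ref{eqdeltaarcN}) depends on $M_k$ only through the scalar $\| s_k^Q \|^3_{M_k}$. The following sketch (with illustrative scalar values of our own choosing, not taken from the paper) verifies this closed form numerically against a brute-force one-dimensional minimization of $m_k(\delta s_k^Q)$ in the exact case $B_k s_k^Q = -g_k$:

```python
import math

# Sanity check of the closed-form delta_N of Theorem th:1 in the exact case
# B_k s_k^Q = -g_k, where the 1-D restriction of the cubic model reduces to
#   phi(delta) = gts*delta - gts*delta^2/2 + sigma*ns3*|delta|^3/3,
# with gts = g_k^T s_k^Q and ns3 = ||s_k^Q||_{M_k}^3 (illustrative values).
def delta_newton(gts, sigma, ns3):
    sgn = 1.0 if gts > 0 else -1.0
    return 2.0 / (1.0 - sgn * math.sqrt(1.0 + 4.0 * sigma * ns3 / abs(gts)))

def phi(delta, gts, sigma, ns3):
    return gts * delta - gts * delta**2 / 2.0 + sigma * ns3 * abs(delta)**3 / 3.0

def brute_force(gts, sigma, ns3):
    # minimize phi over I_k = R_+ (resp. R_-) on a fine grid
    grid = [i / 10000.0 for i in range(1, 100001)]
    if gts > 0:
        grid = [-t for t in grid]
    return min(grid, key=lambda d: phi(d, gts, sigma, ns3))

for gts in (-1.3, 0.7):                  # both signs of g_k^T s_k^Q
    closed = delta_newton(gts, 0.5, 2.0)
    numeric = brute_force(gts, 0.5, 2.0)
    assert abs(closed - numeric) < 1e-3
```

Note that only the scalar value $\| s_k^Q \|^3_{M_k}$ enters the computation, never the matrix $M_k$ itself.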
Therefore an explicit formula for the matrix $M_k$ is not needed, and only the value of $\theta_k = \|s_k^Q\|_{M_k}^2$ suffices for the computations. When the matrix $M_k$ satisfies the desired properties (as in Theorem \ref{th:2}), one is ensured that $ s^{\NT}_k$ is a stationary point of the model $m_k$. However, the ARC algorithm requires the approximate step to satisfy the Cauchy decrease given by (\ref{Cauchy_decrease}), and such a condition is not guaranteed by $s^{\NT}_k$ as the model $m_k$ may be non-convex. In the next theorem, we show that for a sufficiently large $\sigma_k$, $ s^{\NT}_k$ becomes the global minimizer of $m_k$, so satisfying the Cauchy decrease is no longer an issue. \begin{theorem} \label{cor:1} Let Assumption \ref{asm:1} hold. Let $M_k$ be an SPD matrix which satisfies $M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k$ for a fixed $\theta_k >0$. If the matrix $Q_k^{\top}(B_k + \sigma_k \|s^{\NT}_k\|_{M_k} M_k)Q_k$ is positive definite, then the step $s^{\NT}_k$ is the unique minimizer of the subproblem (\ref{eq:nl_ARC_subproblem}) over the subspace $\mathcal{L}_k$. \end{theorem} \begin{proof} Indeed, when $s^Q_k$ is computed exactly (i.e., $Q_k=I_n$ and $\mathcal{L}_k=\mathbb{R}^n$), then using \cite[Theorem 3.1]{Conn_Gould_Toin_2000} one has that a given vector $s^*_k$ is a global minimizer of $m_k$ if and only if it satisfies $$ (B_k + \lambda^*_k M_k)s^*_k=-g_k $$ where $B_k + \lambda^*_k M_k$ is a positive semidefinite matrix and $\lambda^*_k= \sigma_k \|s^*_k\|_{M_k}$. Moreover, if $B_k + \lambda^*_k M_k$ is positive definite, $s^*_k$ is unique. Since $M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k$, by applying Theorem \ref{th:2}, we see that $$ (B_k + \lambda^{\NT}_k M_k)s^{\NT}_k=-g_k $$ with $\lambda^{\NT}_k=\sigma_k \|s^{\NT}_k\|_{M_k}$. Thus, if we assume that $B_k + \lambda^{\NT}_k M_k$ is a positive definite matrix, then $s^{\NT}_k$ is the unique global minimizer of the subproblem (\ref{eq:nl_ARC_subproblem}).
Consider now the case where $s^Q_k$ is computed using a subspace method. Since $M_ks^Q_k= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k$, one has $Q_k^{\top}M_kQ_k z^Q_k= \frac{\theta_k}{(Q_k^{\top} g_k)^{\top}z^Q_k} Q_k^{\top}g_k$. Hence, if we suppose that the matrix $Q_k^{\top}(B_k + \lambda^{\NT}_k M_k)Q_k$ is positive definite, by applying the same argument as in the exact case to the reduced subproblem (\ref{reduce_model_arc}), we see that the step $z^{\NT}_k$ is the unique global minimizer of the subproblem (\ref{reduce_model_arc}). We conclude that $s^{\NT}_k= Q_k z^{\NT}_k$ is the global minimizer of the subproblem (\ref{eq:nl_ARC_subproblem}) over the subspace $\mathcal{L}_k$. \end{proof} Theorem \ref{cor:1} states that the step $s^{\NT}_k$ is the global minimizer of the cubic model $m_k$ over the subspace $\mathcal{L}_k$ as long as the matrix $Q_k^{\top}(B_k + \sigma_k \|s_k^{\NT}\|_{M_k} M_k)Q_k$ is positive definite, where $\lambda^{\NT}_k=\sigma_k \|s^{\NT}_k\|_{M_k}$. Note that \begin{eqnarray*} \lambda^{\NT}_k &= & \sigma_k \|s^{\NT}_k\|_{M_k} = \frac{2 \sigma_k \| s^Q_k \|_{M_k}}{\left |1-\sign(g_k^{\top}s_k^Q) \sqrt{1 +4 \frac{\sigma_k \| s_k^Q \|^3_{M_k}}{|g_k^{\top}s_k^Q|} }\right |} \to +\infty~~~~\mbox{as $\sigma_k \rightarrow \infty$.} \end{eqnarray*} Thus, since $M_k$ is an SPD matrix and the regularization parameter $\sigma_k$ is increased at unsuccessful iterations of Algorithm~\ref{algo:ARC}, the positive definiteness of the matrix $Q_k^{\top}(B_k + \sigma_k \|s^{\NT}_k\|_{M_k} M_k)Q_k$ is guaranteed after finitely many unsuccessful iterations. In other words, one is ensured that $ s^{\NT}_k$ will satisfy the Cauchy decrease after a certain number of unsuccessful iterations. \section{Complexity Analysis of the LS-ARC Algorithm} \label{section:3} For the LS-ARC algorithm to be well defined, one first needs to show that the proposed $M_k$-norm is uniformly equivalent to the Euclidean norm.
The next theorem gives a range of choices for the parameter $\theta_k$ that ensures the existence of an SPD matrix $M_k$ such that $M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q } g_k$ and the $M_k$-norm is uniformly equivalent to the $\ell_2$-norm. \begin{theorem} \label{th:equivnm}Let Assumption \ref{asm:1} hold. If \begin{equation} \label{I_def} \begin{tabular}{l} $ \theta_k = \beta_k \|s^Q_k\|^2 ~~\mbox{where}~ \beta_k \in ]\beta_{\min}, \beta_{\max}[ ~~\mbox{and}~~ \beta_{\max}> \beta_{\min}>0,$ \end{tabular} \end{equation} then there exists an SPD matrix $M_k$ such that \begin{enumerate}[i)] \item $M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q } g_k$, \item the $M_k$-norm is uniformly equivalent to the $\ell_2$-norm on $\mathbb{R}^n$ and for all $x \in \mathbb{R}^n$, one has \begin{equation} \label{eq:norms} \frac{\sqrt{\beta_{\min}}}{\sqrt{2} } \|x\| \le \| x\|_{M_k} \le \frac{\sqrt{2 \beta_{\max} }}{\epsilon_d} \| x\|. \end{equation} \item Moreover, one has $\|s_k^Q\|_{M_k}^2 = \beta_k \|s^Q_k\|^2$ and $\|g_k\|_{M_k}^2 = \chi_k \|g_k\|^2$, where $\chi_k= \beta_k \left(\frac{5}{2} - \frac{3}{2} \cos(\varpi_k)^2 + 2 \left(\frac{1- \cos(\varpi_k)^2}{\cos(\varpi_k)}\right)^2\right)$ and $\cos(\varpi_k) = \frac{g_k^{\top}s_k^Q}{\|g_k\| \| s_k^Q\|}$. \end{enumerate} \end{theorem} \begin{proof} Let $\bar{s}_k^Q= \frac{s_k^Q}{\|s_k^Q\|}$ and let $\bar{g}_k$ be a unit vector orthogonal to $\bar{s}_k^Q$ (i.e., $\|\bar{g}_k \| =1$ and $\bar{g}_k^\top \bar{s}_k^Q=0$) such that \begin{eqnarray} \label{eq:g_coordinate} \frac{g_k}{\|g_k\|} = \cos(\varpi_k) \bar{s}_k^Q + \sin(\varpi_k) \bar{g_k}.
\end{eqnarray} For a given $ \theta_k = \beta_k \|s^Q_k\|^2$, where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $\beta_{\max}> \beta_{\min}>0$, one would like to construct an SPD matrix $M_k$ such that $M_ks_k^Q= \frac{\theta_k g_k}{g_k^{\top}s_k^Q }$, hence \begin{eqnarray*} M_k\bar{s}_k^Q= \frac{\theta_k g_k}{g_k^{\top}s_k^Q \|s_k^Q\|}& =& \frac{\theta_k \|g_k\|}{g_k^{\top}s^Q_k \|s_k^Q\|} \left( \cos(\varpi_k) \bar{s}_k^Q + \sin(\varpi_k) \bar{g}_k \right)\\ &=& \beta_k \bar{s}_k^Q +\beta_k \tan (\varpi_k) \bar{g}_k. \end{eqnarray*} Using the symmetric structure of the matrix $M_k$, let $\gamma_k$ be a positive parameter such that \begin{eqnarray*} M_k= \left[\bar{s}_k^Q, \bar{g}_k \right] N_k \left[\bar{s}_k^Q, \bar{g}_k \right]^{\top} \text{ where } N_k= \left[ {\begin{array}{cc} \beta_k & \beta_k \tan (\varpi_k) \\ \beta_k \tan (\varpi_k) & \gamma_k \\ \end{array} } \right]. \end{eqnarray*} The eigenvalues $ \lambda^{\min}_k$ and $\lambda^{\max}_k$ of the matrix $N_k$ are the roots of $$ \lambda^2 - \left(\beta_k + \gamma_k \right) \lambda + \beta_k \gamma_k - \left( \beta_k \tan (\varpi_k) \right)^2=0,$$ hence \begin{eqnarray*} \lambda^{\min}_k = \frac{\left(\beta_k+ \gamma_k \right)-\sqrt{\vartheta_k}}{2}&\text{ and } & \lambda^{\max}_k=\frac{\left(\beta_k + \gamma_k \right)+\sqrt{\vartheta_k}}{2}, \end{eqnarray*} where $ {\vartheta_k} = \left(\beta_k - \gamma_k \right)^2 + 4\left( \beta_k \tan (\varpi_k) \right)^2.$ Note that both eigenvalues are monotonically increasing as functions of $\gamma_k$. One may choose $\lambda^{\min}_k$ to be equal to $\frac{1}{2} \beta_k = \frac{1}{2} \frac{\theta_k}{\|s_k^Q\|^2} $, so that $\lambda^{\min}_k > \frac{1}{2}\beta_{\min}$ is uniformly bounded away from zero.
In this case, from the expression of $\lambda^{\min}_k$, we deduce that $\gamma_k = 2 \beta_k \tan(\varpi_k)^2 +\beta_k/2 $ and \begin{eqnarray} \lambda^{\max}_k &= &\frac{3}{4} \beta_k + \beta_k \tan(\varpi_k)^2 +\sqrt{\frac{1}{16} \beta_k^2 + \frac{1}{2}\beta_k^2 \tan(\varpi_k)^2 + \beta_k^2 \tan(\varpi_k)^4} \nonumber \\ &=& \beta_k \left( \frac{3}{4} + \tan(\varpi_k)^2 +\sqrt{\frac{1}{16} + \frac{1}{2} \tan(\varpi_k)^2 + \tan(\varpi_k)^4} \right) \nonumber \\ &=& \beta_k \left( 1 + 2\tan(\varpi_k)^2\right). \label{val:gamma} \end{eqnarray} From Assumption \ref{asm:1}, i.e., $|g_k^{\top}s_k^Q| \ge \epsilon_d \|g_k \| \| s_k^Q\| $ where $\epsilon_d>0$, one has $\tan(\varpi_k)^2 \le \frac{1- \epsilon_d^2}{\epsilon_d^2}.$ Hence, $$ \lambda^{\max}_k \le \beta_{\max} \left( 1 + 2\frac{1- \epsilon_d^2}{\epsilon_d^2} \right) \le \frac{2 \beta_{\max}}{\epsilon_d^{2}}. $$ A possible choice for the matrix $M_k$ can be obtained by completing the family of vectors $\{\bar{s}_k^Q,\bar{g}_k \}$ to an orthonormal basis $\{\bar{s}_k^Q,\bar{g}_k, q_3, q_4, \ldots, q_n \}$ of $\mathbb{R}^n$ as follows: $$ M_k = [ \bar{s}_k^Q,\bar{g}_k, q_3, \ldots, q_n ] \left[ {\begin{array}{cc} N_k & 0 \\ 0 & D \end{array} } \right] [ \bar{s}_k^Q,\bar{g}_k, q_3, \ldots, q_n ]^{\top}, $$ where $D=\operatorname{diag}(d_3, \ldots, d_n) \in \mathbb{R}^{(n-2)\times (n-2)}$ with positive diagonal entries independent of $k$. One concludes that for all $ \theta_k = \beta_k \|s^Q_k\|^2$, where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $\beta_{\max}> \beta_{\min}>0$, the eigenvalues of the constructed $M_k$ are uniformly bounded away from zero and from above; hence the scaled $M_k$-norm is uniformly equivalent to the $\ell_2$-norm on $\mathbb{R}^n$ and for all $x \in \mathbb{R}^n$, one has \begin{equation*} \frac{\sqrt{\beta_{\min}}}{\sqrt{2} } \|x\| \le \sqrt{\lambda^{\min}_k} \|x\| \le \| x\|_{M_k} \le \sqrt{\lambda^{\max}_k} \|x\| \le \frac{\sqrt{2 \beta_{\max} }}{\epsilon_d} \| x\|.
\end{equation*} By multiplying $M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q } g_k$ from both sides by $s_k^Q$, one gets $$\|s_k^Q\|_{M_k}^2 = \theta_k=\beta_k \|s^Q_k\|^2.$$ Moreover, using (\ref{eq:g_coordinate}) and (\ref{val:gamma}), one has \begin{eqnarray*} \|g_k\|_{M_k}^2 &=& \|g_k\|^2 \left( \cos(\varpi_k) \bar{s}_k^Q + \sin(\varpi_k) \bar{g_k} \right)^{\top} \left( \cos(\varpi_k) M_k\bar{s}_k^Q + \sin(\varpi_k) M_k\bar{g_k} \right)\\ &=& \|g_k\|^2 \left( \frac{\theta_k \cos(\varpi_k)^2}{\|s_k^Q\|^2} + \gamma_k \sin(\varpi_k)^2 + 2 \sin(\varpi_k) \cos(\varpi_k) \frac{\theta_k \tan(\varpi_k)}{\|s_k^Q\|^2}\right)\\ &=& \beta_k \|g_k\|^2 \left( \cos(\varpi_k)^2 +\frac{5}{2} \sin(\varpi_k)^2 + 2 \sin(\varpi_k)^2 \tan(\varpi_k)^2 \right)\\ &=& \beta_k \|g_k\|^2 \left( \frac{5}{2} - \frac{3}{2} \cos(\varpi_k)^2 + 2 \left(\frac{1- \cos(\varpi_k)^2}{\cos(\varpi_k)}\right)^2 \right). \end{eqnarray*} \end{proof} A direct consequence of Theorem \ref{th:equivnm} is that, by choosing $ \theta_k>0$ of the form $\beta_k \|s^Q_k\|^2$, where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $\beta_{\max}> \beta_{\min}>0$, during the application of the LS-ARC algorithm, the global convergence and complexity bounds of the LS-ARC algorithm can be derived straightforwardly from the classical ARC analysis \cite{CCartis_NIMGould_PhLToint_2011_b}. In fact, as long as the objective function $f$ is continuously differentiable, its gradient is Lipschitz continuous, and its approximated Hessian $B_k$ is bounded for all iterations (see \cite[Assumptions AF1, AF4, and AM1]{CCartis_NIMGould_PhLToint_2011_b}), the LS-ARC algorithm is globally convergent and will require at most a number of iterations of order $\epsilon^{-2}$ to produce a point $x_{\epsilon}$ with $\|\nabla f(x_{\epsilon})\| \le \epsilon$ \cite[Corollary 3.4]{CCartis_NIMGould_PhLToint_2011_b}.
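The two-dimensional construction used in the proof of Theorem \ref{th:equivnm} can be checked numerically. The following sketch (for $n=2$, with toy vectors and parameter values of our own choosing) builds $M_k$ from $N_k$ and verifies both the secant-like equation and the claimed extreme eigenvalues:

```python
import math

# Illustrative construction of M_k (n = 2) following the proof of Theorem
# th:equivnm; the vectors g, sQ and the scalar beta below are toy values.
g = [2.0, -1.0]
sQ = [-1.5, -1.0]
beta = 1.0                                  # beta_k in ]beta_min, beta_max[

dot = g[0] * sQ[0] + g[1] * sQ[1]           # g_k^T s_k^Q (nonzero here)
ng, ns = math.hypot(*g), math.hypot(*sQ)
cosw = dot / (ng * ns)
theta = beta * ns**2                        # theta_k = beta_k ||s_k^Q||^2

# orthonormal pair (sbar, gbar) with g/||g|| = cos(w) sbar + sin(w) gbar, sin(w) >= 0
sbar = [sQ[0] / ns, sQ[1] / ns]
gbar = [-sbar[1], sbar[0]]
if g[0] * gbar[0] + g[1] * gbar[1] < 0:
    gbar = [-gbar[0], -gbar[1]]
tanw = math.sqrt(max(0.0, 1.0 - cosw**2)) / cosw
gamma = 2.0 * beta * tanw**2 + beta / 2.0   # choice giving lambda_min = beta/2

# M = [sbar gbar] N [sbar gbar]^T
P = [[sbar[0], gbar[0]], [sbar[1], gbar[1]]]
N = [[beta, beta * tanw], [beta * tanw, gamma]]
M = [[sum(P[i][a] * N[a][b] * P[j][b] for a in range(2) for b in range(2))
      for j in range(2)] for i in range(2)]

# i) secant-like equation M s_k^Q = (theta_k / g_k^T s_k^Q) g_k
Ms = [M[0][0] * sQ[0] + M[0][1] * sQ[1], M[1][0] * sQ[0] + M[1][1] * sQ[1]]
assert all(abs(Ms[i] - theta / dot * g[i]) < 1e-9 for i in range(2))

# eigenvalues of N match lambda_min = beta/2 and lambda_max = beta(1 + 2 tan^2)
tr, det = beta + gamma, beta * gamma - (beta * tanw)**2
disc = math.sqrt(tr * tr - 4.0 * det)
assert abs((tr - disc) / 2.0 - beta / 2.0) < 1e-9
assert abs((tr + disc) / 2.0 - beta * (1.0 + 2.0 * tanw**2)) < 1e-9
```

In higher dimensions the same check applies after padding $N_k$ with the diagonal block $D$, which leaves the two extreme eigenvalues discussed in the proof unchanged whenever the entries of $D$ lie between them.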
In what follows, we assume the following on the objective function $f$: \begin{assumption} \label{asm:f} Assume that $f$ is twice continuously differentiable with Lipschitz continuous Hessian, i.e., there exists a constant $L\ge 0$ such that for all $x, y\in \mathbb{R}^n$ one has $$ \|\nabla^2 f(x) - \nabla^2 f(y) \| \le L \|x -y \|. $$ \end{assumption} When the matrix $B_k$ is set to be equal to the exact Hessian of the problem and under Assumption \ref{asm:f}, one can improve the function-evaluation complexity of the ARC algorithm to $\mathcal{O}(\epsilon^{-3/2})$ by imposing, in addition to the Cauchy decrease, another termination condition during the computation of the trial step $s_k$ (see \cite{CCartis_NIMGould_PhLToint_2011_b,Birgin2016}). Such a condition is of the form \begin{eqnarray} \label{TC_s} \|\nabla m_k(s_k) \| & \le & \zeta \|s_k \|^2, \end{eqnarray} where $\zeta> 0$ is a given constant chosen at the start of the algorithm. When only an approximation of the Hessian is available during the application of Algorithm \ref{algo:ARC}, an additional condition has to be imposed on the Hessian approximation $B_k$ in order to ensure an optimal complexity of order $\epsilon^{-3/2}$. Such a condition is often stated as follows (see \cite[Assumption AM.4]{CCartis_NIMGould_PhLToint_2011_b}): \begin{assumption}\label{asm:B} The matrix $B_k$ approximates the Hessian $\nabla^2 f(x_k)$ in the sense that \begin{eqnarray} \label{strong:Dennis-more:approxiation} \|(\nabla^2 f(x_k) -B_k )s_k\| \le C \|s_k\|^2 \end{eqnarray} for all $k\ge 0$ and for some constant $C>0$. \end{assumption} Similarly, for the LS-ARC algorithm, the complexity bound can be improved to be of order $\epsilon^{-3/2}$ if one includes the following two requirements: (a) the step $s_{k}$ satisfies the criterion condition~(\ref{TC_s}), and (b) the Hessian approximation matrix $B_k$ satisfies Assumption \ref{asm:B}.
When our proposed $M_k$-norm is used, the termination condition (\ref{TC_s}) imposed on the cubic model $m_k$ can be expressed only in terms of $s_k^Q$ and $\nabla m_k^Q$. The latter condition will be required in the LS-ARC algorithm at each iteration to ensure that it takes at most $\mathcal{O}(\epsilon^{-3/2})$ iterations to reduce the gradient norm below $\epsilon$. Such a result is given in the following proposition. \begin{proposition} \label{lm1:wcc} Let Assumption \ref{asm:1} hold. Let $ \theta_k = \beta_k \|s^Q_k\|^2$ where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $\beta_{\max}> \beta_{\min}>0$. Then imposing the condition (\ref{TC_s}) in Algorithm \ref{algo:LS-ARC} is equivalent to the following condition \begin{eqnarray} \label{TC_s_arc_en} \|\nabla m_k^Q(s_k^Q) \| & \le & \frac{2 \sign(g_k^{\top}s_k^Q) \zeta }{-1+\sign(g_k^{\top}s_k^Q) \sqrt{1 +4 \frac{\sigma_k \theta_k^{3/2}}{|g_k^{\top}s^Q_k|} }} \|s_k^Q\|^2. \end{eqnarray} \end{proposition} \begin{proof} Since Assumption \ref{asm:1} holds and $ \theta_k = \beta_k \|s^Q_k\|^2$ as in (\ref{I_def}), Theorem \ref{th:equivnm} implies the existence of an SPD matrix $M_k$ such that $M_ks_k^Q=\frac{\theta_k}{g_k^{\top}s_k^Q}g_k $. Using such an $M_k$-norm, an approximate solution of the cubic model $m_k$ is of the form $s_k= \delta_k s^{Q}_k$, where $\delta_k$ is a solution of $ g_k^{\top}s^{Q}_k - (g_k^{\top}s_k^Q)\delta_k +\sigma_k \| s_k^Q \|^3_{M_k} |\delta_k| \delta_k =0$. Hence, \begin{eqnarray*} \nabla m_k(s_k) & = & g_k + B_k s_k+ \sigma_k \|s_k \|_{M_k} M_k s_k \\ &=& g_k + \delta_k B_k s^Q_k + \sigma_k |\delta_k| \delta_k \| s_k^Q \|_{M_k} M_k s^Q_k.
\end{eqnarray*} Since $M_ks_k^Q=\frac{\theta_k}{g_k^{\top}s_k^Q}g_k $ with $\theta_k=\|s_k^Q\|_{M_k}^2$, one has \begin{eqnarray*} \nabla m_k(s_k) & =& g _k+ \delta_k B_k s_k^Q + \frac{\sigma_k |\delta_k| \delta_k \| s_k^Q \|_{M_k}^3}{g_k^{\top}s_k^Q} g_k \\ &=& \left(1 + \frac{\sigma_k |\delta_k| \delta_k \| s_k^Q \|_{M_k}^3}{g_k^{\top}s_k^Q} \right) g_k + \delta_k B_k s_k^Q . \end{eqnarray*} From the fact that $ g_k^{\top}s^{Q}_k - (g_k^{\top}s_k^Q)\delta_k +\sigma_k \| s_k^Q \|^3_{M_k} |\delta_k| \delta_k =0$, one deduces \begin{eqnarray*} \nabla m_k(s_k) & =& \delta_k \left(g_k + B_k s_k^{Q} \right)= \delta_k \nabla m_k^Q(s_k^{Q}). \end{eqnarray*} Hence, the condition (\ref{TC_s}) is equivalent to \begin{eqnarray*} \|\nabla m_k^Q(s_k^{Q}) \| & \le & \frac{ \zeta}{|\delta_k|} \|s_k \|^2 = \zeta |\delta_k| \|s_k^Q\|^2. \end{eqnarray*} \end{proof} We note that the use of an exact solver to compute $s^Q_k$ implies that the condition (\ref{TC_s_arc_en}) is automatically satisfied at such an iteration. Moreover, when a subspace method is used to approximate the step $s^Q_k$, we note the important freedom to add a preconditioner to the problem. In this case, one would solve the quadratic problem with preconditioning until the criterion~(\ref{TC_s_arc_en}) is met. This is expected to happen early along the Krylov iterations when the preconditioner for the linear system $B_ks=-g_k$ is good enough. Algorithm \ref{algo:LS-ARC:s} summarizes a second-order variant of LS-ARC, referred to here as $\mbox{LS-ARC}_{\mbox{(s)}}$, which is guaranteed to have an improved iteration worst-case complexity of order $\epsilon^{-3/2}$ (see Theorem \ref{th:4-ters}).
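In an implementation, the right-hand side of (\ref{TC_s_arc_en}) reduces to the quantity $\zeta |\delta_k| \|s_k^Q\|^2$ derived in the proof above. The following small helper (a sketch; the function name and sample values are ours) computes this threshold and confirms that it is positive for both signs of $g_k^{\top}s_k^Q$ and that it agrees with $\zeta |\delta_k| \|s_k^Q\|^2$:

```python
import math

# Sketch of the inexactness threshold in (TC_s_arc_en); the function name and
# the sample values below are illustrative only.
def tc_threshold(gts, sigma, theta, zeta, norm_sQ):
    """Right-hand side of (TC_s_arc_en), i.e. zeta*|delta_k|*||s_k^Q||^2."""
    sgn = 1.0 if gts > 0 else -1.0
    root = math.sqrt(1.0 + 4.0 * sigma * theta**1.5 / abs(gts))
    return 2.0 * sgn * zeta / (-1.0 + sgn * root) * norm_sQ**2

# the threshold is positive whatever the sign of g_k^T s_k^Q ...
assert tc_threshold(-1.3, 0.5, 2.0, 0.1, 1.7) > 0
assert tc_threshold(+1.3, 0.5, 2.0, 0.1, 1.7) > 0

# ... and matches zeta*|delta_k|*||s_k^Q||^2 with delta_k from (eqdeltaarcN),
# using theta = ||s_k^Q||_{M_k}^2 so that ||s_k^Q||_{M_k}^3 = theta^{3/2}
for gts in (-1.3, 1.3):
    sgn = 1.0 if gts > 0 else -1.0
    delta = 2.0 / (1.0 - sgn * math.sqrt(1.0 + 4.0 * 0.5 * 2.0**1.5 / abs(gts)))
    assert abs(tc_threshold(gts, 0.5, 2.0, 0.1, 1.7)
               - 0.1 * abs(delta) * 1.7**2) < 1e-12
```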
\noindent \begin{algorithm}[!ht] \DontPrintSemicolon \SetAlgoNlRelativeSize{0} \caption{\bf $\mbox{LS-ARC}_{\mbox{(s)}}$ Algorithm.} \label{algo:LS-ARC:s} \begin{rm} \begin{description} \item In each iteration $k$ of Algorithm \ref{algo:LS-ARC}: Let $s^Q_k$ be an approximate solution of $B_k s=-g_k$ such that $|g_k^\top s^Q_k| \ge \epsilon_d \|g_k\|\|s^Q_k\|$ and the termination condition \begin{eqnarray*} \|\nabla m_k^Q(s_k^Q) \| & \le & \frac{2 \sign(g_k^{\top}s_k^Q) \zeta }{-1+\sign(g_k^{\top}s_k^Q) \sqrt{1 +4 \frac{\sigma_k \beta_k^{3/2} \|s^Q_k\|^3}{|g_k^{\top}s^Q_k|} }} \|s_k^Q\|^2 \end{eqnarray*} is satisfied for a given constant $\zeta >0$ chosen at the beginning of the algorithm. \end{description} \end{rm} \end{algorithm} The Hessian approximation $B_k$ (as required by Assumption \ref{asm:B}) involves $s_k$; hence finding a new matrix $B_k$ so that the new $s_k^Q$ satisfies Assumption \ref{asm:1} using regularization techniques is not trivial, as $s_k$ is unknown at this stage. A possible way to satisfy Assumption \ref{asm:B} without modifying $B_k$ is to choose $s^Q_k$ as the first iterate to satisfy (\ref{TC_s_arc_en}) when using a subspace method to solve the linear system $B_k s=-g_k$. Then, one checks whether $s^Q_k$ satisfies Assumption \ref{asm:1} or not. If $s^Q_k$ violates the latter assumption, one runs further iterations of the subspace method until Assumption \ref{asm:1} is satisfied. If the subspace method ends and Assumption \ref{asm:1} is still violated, one can restore this assumption by minimizing the cubic model using the $\ell_2$-norm until a successful outer iteration is found. For the sake of illustration, consider the minimization of the objective function $f(x,y) = x^2 - y^2$ for all $(x,y) \in \mathbb{R}^2$, starting from $x_0=(1,1)$, with $\sigma_0=1$, and $B_k$ being the exact Hessian of the problem during the application of the algorithm.
One starts by checking whether $s^Q_0 = (-1,-1)$ (i.e., the exact solution of the linear system $B_0 s = - g_0$) is a sufficient descent direction or not. Since the slope $g_0^\top s^Q_0$ is equal to zero for this example, the algorithm has to switch to the $\ell_2$-norm, and thus $s_0$ will be set as a minimizer of the cubic model with the $\ell_2$-norm to define the cubic regularization term. Using a subproblem solver (in our case the \texttt{GLRT} solver from \texttt{GALAHAD} \cite{NIMGould_DOrban_PhLToint_2003}, more details are given in Section 6), one finds the step $s_0 = (-0.4220, 2.7063)$ and the point $x_1 = x_0 + s_0$ is accepted (with $f(x_1)=-13.4027$). Computing the new gradient $g_1 = (1.1559, -7.4126)$ and the quasi-Newton direction $s^Q_1 = (-0.5780, -3.7063)$, one has $|g_1^\top s^Q_1| = 20.5485 \ge \epsilon_d \|g_1\|\|s_1^Q\|=0.0205$ where $\epsilon_d=10^{-3}$ (hence Assumption \ref{asm:1} holds). We then perform our proposed LS strategy along the direction $s_1^Q$ to obtain the step $s_1$. For this example, all the iterations $k$ except the first satisfy the condition $|g_k^{\top} s^Q_k| \ge \epsilon_d\|g_k\| \|s_k^Q\|$. We note that the considered minimization problem is unbounded from below, hence $f$ decreases to $-\infty$ during the application of the algorithm. LS strategies generally require a sufficient descent direction at each iteration; it thus seems natural that one may need to choose $\epsilon_d$ to be large (close to 1) to target good performance. However, during the application of $\mbox{LS-ARC}_{\mbox{(s)}}$, and in order to satisfy Assumption \ref{asm:1} (without modifying the matrix $B_k$), one may be encouraged to use a small $\epsilon_d$. In what follows, we will give a detailed complexity analysis of the $\mbox{LS-ARC}_{\mbox{(s)}}$ algorithm in this case. In particular, we will make explicit how the complexity bound depends on the choice of the constant $\epsilon_d$.
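The initial slope test of the example above can be reproduced in a few lines (only the descent check; the \texttt{GLRT} cubic-subproblem solve is not reproduced here):

```python
# Reproduces the initial slope test of the illustrative example
# f(x, y) = x^2 - y^2 with x_0 = (1, 1) and B_0 the exact Hessian.
x0 = (1.0, 1.0)
g0 = (2.0 * x0[0], -2.0 * x0[1])               # gradient (2x, -2y) at x_0
B0_diag = (2.0, -2.0)                          # Hessian diag(2, -2)
sQ0 = (-g0[0] / B0_diag[0], -g0[1] / B0_diag[1])   # solves B_0 s = -g_0
assert sQ0 == (-1.0, -1.0)

slope = g0[0] * sQ0[0] + g0[1] * sQ0[1]
# slope is exactly zero: the sufficient-descent test |g^T s| >= eps_d ||g|| ||s||
# fails for any eps_d > 0, so the method must fall back on the l2-norm step
assert slope == 0.0
```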
The following results are obtained from \cite[Lemma 2.1]{Birgin2016} and \cite[Lemma 2.2]{Birgin2016}: \begin{lemma} Let Assumptions \ref{asm:f} and \ref{asm:1} hold and consider Algorithm \ref{algo:LS-ARC:s}. Then for all $k\ge 0$, one has \begin{eqnarray} \label{decrease:cubic} f(x_k) - m^Q_k(s_k) & \ge&\frac{\sigma_k}{3} \|s_k\|^3_{M_k}, \end{eqnarray} and \begin{eqnarray} \label{sigma:max} \sigma_k & \le& \sigma_{\max} := \max \left\{ \sigma_0, \frac{3 \nu_2 L}{2 (1-\eta) } \right\}. \end{eqnarray} \end{lemma} The next lemma is an adaptation of \cite[Lemma 2.3]{Birgin2016} when the proposed $M_k$-norm is used in the ARC framework. \begin{lemma} Let Assumptions \ref{asm:f}, \ref{asm:B} and \ref{asm:1} hold. Consider Algorithm \ref{algo:LS-ARC:s} with $ \theta_k = \beta_k \|s^Q_k\|^2$ where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $\beta_{\max}> \beta_{\min}>0$. Then for all $k\ge 0$ \begin{eqnarray} \label{eq:g_s} \| s_k\| & \ge & \left( \frac{\|g_{k+1}\|}{L+ C+2\sqrt{2} \sigma_{\max} \beta_{\max}^{3/2}\epsilon_d^{-3} + \zeta }\right)^{\frac{1}{2}}. \end{eqnarray} \end{lemma} \begin{proof} Indeed, using Assumptions \ref{asm:f} and \ref{asm:B} together with a Taylor expansion, one has \begin{eqnarray*} \| g_{k+1}\| & \le & \| g_{k+1} - \nabla m_k (s_k) \| + \| \nabla m_k (s_k) \| \\ & \le & \|g_{k+1} - g_k -B_k s_k - \sigma_k \|s_k\|_{M_k} M_k s_k \| + \zeta \| s_k\|^2 \\ & \le & \|g_{k+1} - g_k - \nabla^2 f(x_k) s_k\| + \| (\nabla^2 f(x_k)-B_k) s_k \| + \sigma_k \|M_k\|^{3/2} \|s_k\|^2 + \zeta \| s_k\|^2\\ & \le & L\| s_k \|^2 + C \| s_k \|^2+(\sigma_{k} \|M_k\|^{3/2} +\zeta) \| s_k \|^2. \end{eqnarray*} Using (\ref{sigma:max}), one has \begin{eqnarray*} \| g_{k+1}\|& \le & (L + C +\sigma_{\max} \|M_k\|^{3/2} +\zeta) \| s_k \|^2.
\end{eqnarray*} Since Assumption \ref{asm:1} holds and $ \theta_k = \beta_k \|s^Q_k\|^2$ where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$, then using Theorem \ref{th:equivnm}, the norm of the matrix $M_k$ is bounded from above by $2\beta_{\max} \epsilon_d^{-2}$. Hence, \begin{eqnarray*} \| g_{k+1}\| & \le & \left(L+ C+ 2\sqrt{2} \sigma_{\max} \beta_{\max}^{3/2} \epsilon_d^{-3} +\zeta \right) \| s_k \|^2. \end{eqnarray*} \end{proof} \begin{theorem} \label{th:4-ters} Let Assumptions \ref{asm:f}, \ref{asm:B} and \ref{asm:1} hold. Consider Algorithm \ref{algo:LS-ARC:s} with $ \theta_k = \beta_k \|s^Q_k\|^2$ where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $\beta_{\max}> \beta_{\min}>0$. Then, given an $\epsilon>0$, Algorithm \ref{algo:LS-ARC:s} needs at most $$ \left \lfloor \kappa_{\mbox{s}}(\epsilon_d) \frac{f(x_0) - f_{\mbox{low}}}{\epsilon^{3/2}} \right \rfloor $$ iterations to produce an iterate $x_{\epsilon}$ such that $\|\nabla f(x_{\epsilon}) \| \le \epsilon$, where $f_{\mbox{low}}$ is a lower bound on $f$ and $\kappa_{\mbox{s}}(\epsilon_d)$ is given by \begin{eqnarray*} \kappa_{\mbox{s}}(\epsilon_d) & =& \frac{6\sqrt{2}\left(L+ C+ 2\sqrt{2} \sigma_{\max} \beta_{\max}^{3/2} \epsilon_d^{-3} + \zeta \right)^{3/2}}{\eta \sigma_{\min}\beta_{\min}^{3/2}}.
\end{eqnarray*} \end{theorem} \begin{proof} Indeed, at each iteration of Algorithm \ref{algo:LS-ARC:s}, one has \begin{eqnarray*} f(x_k) - f(x_k + s_k) & \ge & \eta (f(x_k) -m^Q_k(s_k)) \\ & \ge & \frac{\eta \sigma_k}{3} \|s_k\|^3_{M_k} \\ & \ge & \frac{\eta \sigma_{\min} \beta_{\min}^{3/2}}{6\sqrt{2}} \|s_k\|^3 \\ & \ge & \frac{\eta \sigma_{\min} \beta_{\min}^{3/2}}{6\sqrt{2}\left(L +C+ 2\sqrt{2} \sigma_{\max} \beta_{\max}^{3/2} \epsilon_d^{-3} +\zeta \right)^{3/2}} \|g_{k+1}\|^{3/2} \\ & \ge & \frac{\eta \sigma_{\min} \beta_{\min}^{3/2}}{6\sqrt{2}\left(L + C+2\sqrt{2} \sigma_{\max} \beta_{\max}^{3/2} \epsilon_d^{-3}+\zeta \right)^{3/2}} \epsilon^{3/2}, \\ \end{eqnarray*} by using (\ref{sdcond:arc}), (\ref{decrease:cubic}), (\ref{eq:norms}), (\ref{eq:g_s}), and the fact that $\|g_{k+1}\| \ge \epsilon$ before termination. Thus we deduce, for all iterations as long as the stopping criterion does not occur, \begin{eqnarray*} f(x_{0}) - f(x_{k+1}) &= & \sum_{j=0}^{k} f(x_{j}) - f(x_{j+1}) \\ &\ge& (k+1) \frac{\eta \sigma_{\min} \beta_{\min}^{3/2}}{6\sqrt{2}\left(L +C+ 2\sqrt{2} \sigma_{\max} \beta_{\max}^{3/2} \epsilon_d^{-3}+\zeta \right)^{3/2}} \epsilon^{3/2}. \end{eqnarray*} Hence, the required number of iterations to produce an iterate $x_{\epsilon}$ such that $\|\nabla f(x_{\epsilon}) \| \le \epsilon$ is given as follows: \begin{eqnarray*} k+1 &\le & \frac{6\sqrt{2}\left(L + C+ 2\sqrt{2} \sigma_{\max} \beta_{\max}^{3/2} \epsilon_d^{-3} +\zeta \right)^{3/2}}{\eta \sigma_{\min}\beta_{\min}^{3/2}} \frac{ f(x_{0}) - f_{\mbox{low}}}{\epsilon^{3/2} }, \end{eqnarray*} where $f_{\mbox{low}}$ is a lower bound on $f$. Thus the proof is completed. \end{proof} We note that $\kappa_{\mbox{s}}(\epsilon_d)$ can be large for small values of $\epsilon_d$.
Hence, although the displayed worst-case complexity bound is of order $\epsilon^{-3/2}$, it can be worse than it appears to be if the value of $\epsilon_d$ is very small (i.e., the chosen direction is almost orthogonal to the gradient). Such a result is consistent with the LS strategy, which requires a sufficient descent direction (i.e., an $\epsilon_d$ sufficiently large). \section{TR Algorithm Using a Specific $M_k$-Norm} \label{section:4} Similarly to the ARC algorithm, it is possible to make the TR algorithm behave as an LS algorithm by using the same scaled norm to define the trust-region neighborhood. As a reminder, in a basic TR algorithm \cite{Conn_Gould_Toin_2000}, one computes a trial step $p_k$ by approximately solving \begin{eqnarray} \label{eq:nl_TR_subproblem} \displaystyle \min_{p \in \mathbb{R}^n} & m_k^Q(p) & ~~~\mbox{s. t.}~~~ \|p\|_{M_k} \le \Delta_k, \end{eqnarray} where $\Delta_k >0$ is known as the TR radius. As in ARC algorithms, the scaled norm $\|.\|_{M_k}$ may vary along the iterations and $M_k$ is an SPD matrix. Once the trial step $p_k$ is determined, the objective function is computed at $x_k + p_k$ and compared with the value predicted by the model at this point. If the model predicts the objective function sufficiently well (i.e., the iteration is \textit{successful}), the trial point $x_k + p_k$ is accepted and the TR radius is possibly expanded (i.e., $\Delta_{k+1}=\tau_2 \Delta_{k}$ with $\tau_2\ge 1$). If the model turns out to predict the objective function poorly (i.e., the iteration is \textit{unsuccessful}), the trial point is rejected and the TR radius is contracted (i.e., $\Delta_{k+1}=\tau_1 \Delta_{k}$ with $\tau_1 < 1$). The ratio between the actual reduction and the predicted reduction for the TR algorithms is defined as in ARC (see (\ref{rho})).
For a given scalar $0 < \eta < 1$, the iteration is said to be \textit{successful} if $\rho_k \ge \eta$, and \textit{unsuccessful} otherwise. Algorithm~\ref{algo:TR} gives a detailed description of a basic TR algorithm.

\LinesNumberedHidden \begin{algorithm}[!ht] \SetAlgoNlRelativeSize{0} \caption{\bf TR algorithm.} \label{algo:TR} \SetAlgoLined \KwData{select an initial point $x_0$ and $0< \eta <1$. Set the initial TR radius $\Delta_0>0$, the constants $0 \le \tau_1< 1 \le \tau_2$, and $\Delta_{\max}>\Delta_0$.} \For{$k= 1, 2, \ldots$}{ Compute the step $p_k$ as an approximate solution of (\ref{eq:nl_TR_subproblem}) such that \begin{eqnarray} \label{TR:Cauchy_decrease} m_k^{Q}(p_k) &\le & m_k^{Q}(p^{\cc}_{k}) \end{eqnarray} where $p^{\cc}_{k} = - \alpha^{\cc}_{k} g_k \text{ and } \alpha^{\cc}_{k} = \displaystyle \arg \min_{0< t\le \frac{\Delta_k}{\|g_k\|_{M_k}}} m_k^Q(-t g_k)$\; \eIf{$\rho_k \ge \eta$}{Set $x_{k+1}= x_k +p_k$ and $ \Delta_{k+1} =\min\{\tau_2 \Delta_k,\Delta_{\max}\}$; }{Set $x_{k+1} = x_k$ and $\Delta_{k+1}= \tau_1 \Delta_k$\;} } \end{algorithm}

Note that for the TR subproblem, the solution we are looking for lies either in the interior of the trust region, that is, $\|p_k\|_{M_k} < \Delta_k$, or on the boundary, $\|p_k\|_{M_k} = \Delta_k$. If the solution is interior, $p_k$ is the unconstrained minimizer of the quadratic model $m_k^Q$. Such a scenario can only happen if $m_k^Q$ is convex. In the nonconvex case a solution lies on the boundary of the trust region, while in the convex case a solution may or may not do so. Consequently, in practice the TR algorithm first seeks the unconstrained minimizer of the model $m_k^Q$. If the model is unbounded from below, or if the unconstrained minimizer lies outside the trust region, the minimizer of the subproblem then occurs on the boundary of the trust region. In this section, we will assume that the solution $s_k^Q$ of the linear system $B_k s=-g_k$ is computed exactly.
Using arguments similar to those for the ARC algorithm, one can extend the obtained results to the case where a truncated step is used in the TR algorithm. Under Assumption \ref{asm:1}, we call the vector of the following form the Newton-like step associated with the TR subproblem: \begin{eqnarray} \label{Newton_step_TR} p^{\NT}_k = \alpha_k^{\NT} s^Q_k, ~~\mbox{where} ~\alpha^{\NT}_k = \displaystyle \arg \min_{\alpha \in \mathcal{R}_k} m^Q_k(\alpha s_k^{Q}), \end{eqnarray} where $\mathcal{R}_k =]0, \frac{\Delta_k}{ \|s_k^{Q}\|_{M_k}}]$ if $g_k^{\top} s_k^{Q} < 0$ and $\mathcal{R}_k=[ -\frac{\Delta_k}{ \|s_k^{Q}\|_{M_k}},0[ $ otherwise. Similarly to the ARC algorithm, one has the following results: \begin{theorem} \label{th:1:tr} Let Assumption \ref{asm:1} hold. \begin{enumerate} \item The Newton-like step (\ref{Newton_step_TR}) is of the following form: \begin{eqnarray} \label{eq:TR_newton-step} p^{\NT}_k & = & \alpha^{\NT}_k s^Q_k, ~~\mbox{where}~~ \alpha^{\NT}_k = \min \left\{1, - \sign(g_k^{\top}s_k^Q) \frac{ \Delta_k }{\| s_k^Q\|_{M_k}}\right\}. \end{eqnarray} \item When it lies on the boundary of the trust region, $p^{\NT}_k$ is a stationary point of the subproblem (\ref{eq:nl_TR_subproblem}) if and only if $M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k$ where $\theta_k=\|s^Q_k\|^2_{M_k}$. \item Let $\lambda^{\NT}_k = \frac{g_k^{\top}s_k^Q}{\theta_k} \left( 1 + \sign(g_k^{\top}s_k^Q) \frac{ \|s^Q_k\|_{M_k} }{\Delta_k} \right) $ and assume that $p^{\NT}_k$ lies on the boundary of the trust region. Then, if the matrix $B_k + \lambda^{\NT}_k M_k$ is positive definite, the step $p^{\NT}_k$ is the unique minimizer of the subproblem (\ref{eq:nl_TR_subproblem}) over the subspace $\mathcal{L}_k$. \end{enumerate} \end{theorem} \begin{proof} 1.
To calculate the Newton-like step $p^{\NT}_k $, we first note that, for all $\alpha \in \mathcal{R}_k$, \begin{eqnarray} \label{min_N_TR} m^Q_k(\alpha s^{Q}_k) - m^Q_k(0) &=& \alpha g_k^{\top}s^{Q}_k + \frac{\alpha^2}{2} [s^{Q}_k]^{\top} B_k [s^{Q}_k] \nonumber \\ &=& (g_k^{\top}s^{Q}_k) \alpha - (g_k^{\top}s_k^Q) \frac{\alpha^2}{2}. \end{eqnarray} Consider first the case where the curvature of the model along the Newton direction is positive, that is, when $g_k^{\top}s_k^Q<0$ (i.e., $\mathcal{R}_k =]0, \frac{\Delta_k}{ \|s_k^{Q}\|_{M_k}}]$), and compute the value of the parameter $\alpha$ at which the unique minimizer of (\ref{min_N_TR}) is attained. Let $\alpha^*_k$ denote this optimal parameter. Taking the derivative of (\ref{min_N_TR}) with respect to $\alpha$ and equating the result to zero, one has $\alpha^*_k = 1 $. Two sub-cases may then occur. The first is when this minimizer lies within the trust region (i.e., $\alpha^*_k \|s^{Q}_k\|_{M_k} \le \Delta_k$); then \begin{eqnarray*} \label{alpha_tr_N_1_1} \alpha^{\NT}_k &=& 1. \end{eqnarray*} If $\alpha^*_k \|s^{Q}_k\|_{M_k} > \Delta_k$, then the line minimizer is outside the trust region and we have \begin{eqnarray*} \label{alpha_tr_N_1_2} \alpha^{\NT}_k & = & \frac{\Delta_k}{\|s^{Q}_k\|_{M_k}}. \end{eqnarray*} Finally, we consider the case where the curvature of the model along the Newton-like step is negative, that is, when $g_k^{\top}s_k^Q> 0$. In that case, the minimizer lies on the boundary of the trust region, and thus \begin{eqnarray*} \label{alpha_tr_N_2} \alpha^{\NT}_k & = & - \frac{\Delta_k}{\|s^{Q}_k\|_{M_k}}. \end{eqnarray*} Combining all cases, one concludes that \begin{eqnarray*} p^{\NT}_k & = & \alpha^{\NT}_k s^Q_k, ~~\mbox{where}~~ \alpha^{\NT}_k = \min \left \{1, - \sign(g_k^{\top}s_k^Q) \frac{ \Delta_k }{\| s^Q_k\|_{M_k}}\right\}. \end{eqnarray*} 2.
Suppose that the Newton-like step lies on the boundary of the trust region, i.e., $ p^{\NT}_k = \alpha^{\NT}_k s^Q_k= - \sign(g_k^{\top}s_k^Q) \frac{ \Delta_k }{\| s^Q_k\|_{M_k}}s^Q_k. $ The latter step is a stationary point of the subproblem (\ref{eq:nl_TR_subproblem}) if and only if there exists a Lagrange multiplier $\lambda^{\NT}_k\ge 0$ such that \begin{eqnarray*} (B_k+ \lambda^{\NT}_k M_k)p^{\NT}_k &= &-g_k. \end{eqnarray*} Substituting $p^{\NT}_k= \alpha^{\NT}_k s_k^Q$ in the latter equation, one has \begin{eqnarray} \label{eq:tr:1} \lambda^{\NT}_k M_k s_k^Q &= &\left(1 - \frac{1}{\alpha^{\NT}_k}\right)g_k. \end{eqnarray} Multiplying this equation from the left by $(s_k^Q)^{\top}$, we deduce that \begin{eqnarray*} \label{lagrange-multiplier-tr} \lambda^{\NT}_k &= & \left(1 - \frac{1}{\alpha^{\NT}_k}\right)\frac{g_k^{\top}s_k^Q}{\|s^Q_k\|^2_{M_k}}=\frac{g_k^{\top}s_k^Q}{\theta_k} \left( 1 + \sign(g_k^{\top}s_k^Q) \frac{ \|s^Q_k\|_{M_k} }{\Delta_k} \right).\end{eqnarray*} Substituting this value of $\lambda^{\NT}_k$ in (\ref{eq:tr:1}), we obtain that $ M_ks_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k $ where $\theta_k = \| s_k^Q\|_{M_k}^2 >0$. 3. Suppose that the step $p^{\NT}_k$ lies on the boundary of the trust region and $M_k s_k^Q= \frac{\theta_k}{g_k^{\top}s_k^Q}g_k$. Then, applying item (2) of Theorem \ref{th:1:tr}, we see that $$ (B_k + \lambda^{\NT}_k M_k)p^{\NT}_k=-g_k $$ with $\lambda^{\NT}_k=\frac{g_k^{\top}s_k^Q}{\theta_k} \left( 1 + \sign(g_k^{\top}s_k^Q) \frac{ \|s^Q_k\|_{M_k} }{\Delta_k} \right)>0$. Applying \cite[Theorem 7.4.1]{Conn_Gould_Toin_2000}, we conclude that if the matrix $B_k + \lambda^{\NT}_k M_k$ is positive definite, then $p^{\NT}_k$ is the unique minimizer of the subproblem (\ref{eq:nl_TR_subproblem}) over the subspace $\mathcal{L}_k$.
\end{proof}

Given an SPD matrix $M_k$ that satisfies the secant equation $ M_ks^Q_k= \frac{\theta_k}{g_k^{\top}s_k^Q} g_k$, item (3) of Theorem \ref{th:1:tr} states that the step $p^{\NT}_k$ is the global minimizer of the associated subproblem over the subspace $\mathcal{L}_k$ as long as the matrix $B_k + \lambda^{\NT}_k M_k$ is SPD. We note that $\lambda^{\NT}_k$ goes to infinity as the trust-region radius $\Delta_k$ goes to zero, meaning that the matrix $B_k + \lambda^{\NT}_k M_k$ will be SPD whenever $\Delta_k$ is chosen sufficiently small. Since the TR update mechanism allows shrinking the value of $\Delta_k$ (when the iteration is declared unsuccessful), this condition will be satisfied automatically by the TR algorithm. Again, when Assumption \ref{asm:1} holds, we note that \textit{unsuccessful} iterations in the TR algorithm require only updating the value of the TR radius $\Delta_k$; the current step direction is kept unchanged. For such iterations, as long as there exists a matrix $M_k$ such that $M_ks^Q_k= \frac{\beta_k \|s^Q_k\|^2}{g_k^{\top}s_k^Q}g_k$, where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $\beta_{\max}> \beta_{\min}>0$, the approximate solution of the TR subproblem is obtained only by updating the step-size $\alpha^{\NT}_k$. This means that unsuccessful iterations do not require solving any extra subproblem. We note that during the application of the algorithm, we will take $\theta_k$ of the form $\beta_k \|s^Q_k\|^2_2$, where $\beta_k \in ]\beta_{\min}, \beta_{\max}[$ and $0<\beta_{\min}< \beta_{\max}$. Such a choice of the parameter $\theta_k$ ensures that the proposed $M_k$-norm is uniformly equivalent to the $\ell_2$-norm along the iterations (see Theorem \ref{th:equivnm}). In this setting, the TR algorithm behaves as an LS method with a specific backtracking strategy.
In fact, at the $k^{\mbox{th}}$ iteration, the step is of the form $p_k= \alpha_k s^Q_k $, where $s_k^Q $ is the (approximate) solution of the linear system $B_k s=-g_k$. The step length $ \alpha_k>0$ is chosen such that \begin{eqnarray} \label{sdcond} \frac{f(x_k) - f(x_k +p_{k} )}{ f(x_k) - m_k^Q(p_{k})} \ge \eta & ~~\mbox{and}~& m^Q_k(p_{k}) \le m^Q_k(-\alpha^{\cc}_k g_k). \end{eqnarray} The values of $\alpha_k$ and $\alpha^{\cc}_k$ are computed respectively as follows: \begin{eqnarray} \label{eq:deltatr:k} \alpha_k &= & \min \left\{1, - \sign(g_k^{\top}s^Q_k) \frac{ \Delta_k }{\beta_k^{1/2}\|s^Q_k\|}\right\} \end{eqnarray} and \begin{eqnarray} \label{eq:TR_cauchy-step} \alpha^{\cc}_k= \left\{ \begin{array}{ll} \frac{\Delta_k}{\chi_k^{1/2}\| g_k\|} & \text{ if } g_k^{\top}B_kg_k \le 0 ~\text{ or }~ \frac{\|g_k\|^2}{g_k^{\top}B_kg_k} \ge \frac{\Delta_k}{\chi_k^{1/2}\|g_k\|} , \\ \\ \frac{\|g_k\|^2}{g_k^{\top}B_kg_k} & \text{ else}, \end{array} \right. \end{eqnarray} where $\chi_k= \beta_k \left(\frac{5}{2} - \frac{3}{2} \cos(\varpi_k)^2 + 2 \left(\frac{1- \cos(\varpi_k)^2}{\cos(\varpi_k)}\right)^2\right)$ and $\cos(\varpi_k) = \frac{g_k^{\top}s_k^Q}{\|g_k\| \| s_k^Q\|}$. $\Delta_k$ is initially equal to the current value of the TR radius (as in the original TR algorithm). For large values of $\alpha_k$ the sufficient decrease condition (\ref{sdcond}) may not be satisfied; in this case, the value of $\Delta_k$ is contracted using the factor $\tau_1$. Iteratively, the value of $\alpha_k$ is updated and the acceptance condition (\ref{sdcond}) is checked again until it is satisfied. Algorithm~\ref{algo:LS-TR} details the adaptation of the classical TR algorithm when our proposed $M_k$-norm is used. We denote the final algorithm by LS-TR, as it behaves as an LS algorithm.
\LinesNumberedHidden \begin{algorithm}[!ht] \SetAlgoNlRelativeSize{0} \caption{\bf LS-TR algorithm.} \label{algo:LS-TR} \SetAlgoLined \KwData{select an initial point $x_0$ and the constants $0< \eta <1$, $0<\epsilon_d\le 1$, $0 \le \tau_1< 1 \le \tau_2$, and $0< \beta_{\min}< \beta_{\max}$. Set the initial TR radius $\Delta_0>0$ and $\Delta_{\max}>\Delta_0$. } \For{$k= 1, 2, \ldots$}{ Choose a parameter $\beta_k \in ]\beta_{\min}, \beta_{\max}[$\; Let $s^Q_k$ be an approximate stationary point of $m_k^Q$ satisfying $|g_k^\top s^Q_k| \ge \epsilon_d \|g_k\|\|s^Q_k\|$\; Set $\alpha_k$ and $\alpha_k^{\cc}$ using (\ref{eq:deltatr:k}) and (\ref{eq:TR_cauchy-step})\; \While{ {\normalfont condition (\ref{sdcond}) is not satisfied} }{ Set $\Delta_k \leftarrow \tau_1 \Delta_k$, and update $\alpha_k$ and $\alpha_k^{\cc}$ using (\ref{eq:deltatr:k}) and (\ref{eq:TR_cauchy-step})\; } Set $p_k= \alpha_k s^Q_k$, $x_{k+1}= x_k + p_k$ and $ \Delta_{k+1} =\min\{\tau_2 \Delta_k,\Delta_{\max}\}$; } \end{algorithm}

As long as the objective function $f$ is continuously differentiable, its gradient is Lipschitz continuous, and the Hessian approximation $B_k$ is bounded for all iterations, the TR algorithm is globally convergent and requires a number of iterations of order $\epsilon^{-2}$ to produce a point $x_{\epsilon}$ with $\|\nabla f(x_{\epsilon})\| \le \epsilon$ \cite{SGratton_ASartenaer_PhLToint_2008}. We note that satisfying Assumption \ref{asm:1} is not problematic. As suggested for the LS-ARC algorithm, one can modify the matrix $B_k$ using regularization techniques (as long as the Hessian approximation is kept uniformly bounded from above over all iterations, the global convergence and complexity bounds still hold \cite{Conn_Gould_Toin_2000}).
\section{Numerical Experiments} \label{section:5} In this section, we report the results of experiments performed in order to assess the efficiency and the robustness of the proposed algorithms (\texttt{LS-ARC} and \texttt{LS-TR}) compared with a classical LS algorithm using the standard Armijo rule. In the latter approach, the trial step is of the form $s_k= \delta_k d_k $, where $d_k=s_k^Q$ if $-g_k^{\top}s^Q_k \ge \epsilon_d \|g_k\| \| s^Q_k\|$ ($s^Q_k$ being an approximate stationary point of $m_k^Q$) and $d_k=-g_k$ otherwise, and the step length $\delta_k>0$ is chosen such that \begin{eqnarray} \label{cond:armijo} f(x_k +s_k) \le f(x_k) + \eta s_k^{\top}g_k, \end{eqnarray} where $\eta \in ]0,1[$. The appropriate value of $\delta_k$ is estimated using a backtracking approach with a contraction factor set to $\tau \in]0,1[$, where the step length is initially chosen to be $1$. This LS method will be called \texttt{LS-ARMIJO}. We implemented all the algorithms as Matlab m-files; for all the tested algorithms $B_k$ is set to the true Hessian $\nabla^2f(x_k)$, and $\epsilon_d=10^{-3}$ for both \texttt{LS-ARC} and \texttt{LS-ARMIJO}. Other numerical experiments (not reported here) with different values of $\epsilon_d$ (for instance $\epsilon_d = 10^{-1}$, $10^{-6}$, and $10^{-12}$) lead to almost the same results. By way of comparison, we have also implemented the standard ARC/TR algorithms (see Algorithms \ref{algo:ARC} and \ref{algo:TR}) using the Lanczos-based solvers \texttt{GLTR}/\texttt{GLRT} implemented in \texttt{GALAHAD} \cite{NIMGould_DOrban_PhLToint_2003}. The two subproblem solvers \texttt{GLTR/GLRT} are implemented in Fortran and interfaced with Matlab using the default parameters. For the subproblem formulation we used the $\ell_2$-norm (i.e., for all iterations the matrix $M_k$ is set to the identity). We shall refer to the ARC/TR methods based on \texttt{GLRT/GLTR} as \texttt{GLRT-ARC/GLTR-TR}.
The other parameters defining the implemented algorithms are set as follows: for \texttt{GLRT-ARC} and \texttt{LS-ARC}, $$ \eta=0.1, ~\nu_1=0.5, ~\nu_2=2,~\sigma_0=1, ~\mbox{and } \sigma_{\min}=10^{-16}; $$ for \texttt{GLTR-TR} and \texttt{LS-TR}, $$ \eta=0.1, ~\tau_1= 0.5 , ~\tau_2=2, ~\Delta_0=1,~ \mbox{and } \Delta_{\max}=10^{16}; $$ and last, for \texttt{LS-ARMIJO}, $$ \eta=0.1, ~\mbox{and } ~\tau= 0.5. \mbox{ } $$ In all algorithms the maximum number of iterations is set to $10000$, and the algorithms stop when $$ \|g_k\|\le 10^{-5}. $$ A crucial ingredient in \texttt{LS-ARC} and \texttt{LS-TR} is the management of the parameter $\beta_k$. A possible choice for $\beta_k$ is $|g_k^{\top} s_k^Q|/\|s^Q_k\|^2$. This choice is inspired by the fact that, when the Hessian matrix $B_k$ is SPD, this update corresponds to using the energy norm, meaning that the matrix $M_k$ is set equal to $B_k$ (see \cite{Bergou_Diouane_Gratton_2017} for more details). However, this choice did not lead to good performance of the algorithms \texttt{LS-ARC} and \texttt{LS-TR}. In our implementation, for the \texttt{LS-ARC} algorithm, we set the value of $\beta_k$ as follows: $\beta_k=10^{-4} \sigma_k^{-2/3}$ if $g_k^{\top}s_k^Q < 0$, and $2$ otherwise. With this choice, we allow \texttt{LS-ARC} to take approximately the Newton step at the start of the backtracking procedure. Similarly, for the \texttt{LS-TR} method, we set $\beta_k=1$, which allows \texttt{LS-TR} to use the Newton step at the start of the backtracking strategy (as in the \texttt{LS-ARMIJO} method). All the algorithms are evaluated on a set of unconstrained optimization problems from the CUTEst collection \cite{Gould2015}. The test set contains $62$ large-scale ($1000 \le n \le 10000$) CUTEst problems with their default parameters. For the algorithms \texttt{LS-TR}, \texttt{LS-ARC}, and \texttt{LS-ARMIJO}, we approximate the solution of the linear system $B_ks=-g_k$ using the \texttt{MINRES} Matlab solver.
The latter method is a Krylov subspace method designed to solve symmetric linear systems \cite{Paige_1975}. We run the algorithms with the \texttt{MINRES} default parameters, except for the relative tolerance error, which is set to $10^{-4}$. We note that on the tested problems, for \texttt{LS-ARC/LS-TR}, Assumption \ref{asm:1} was not violated frequently. The restoration of this assumption was ensured by performing iterations of \texttt{GLRT-ARC/GLTR-TR} (with the $\ell_2$-norm) until a new successful iteration was found. To compare the performance of the algorithms we use the performance profiles proposed by Dolan and Mor\'e~\cite{EDDolan_JJMore_2002} over a variety of problems. Given a set of problems $\mathcal{P}$ (of cardinality $|\mathcal{P}|$) and a set of solvers $\mathcal{S}$, the performance profile $\rho_s(\tau)$ of a solver~$s$ is defined as the fraction of problems where the performance ratio $r_{p,s}$ is at most $\tau$: \begin{eqnarray*} \rho_s(\tau) \; = \; \frac{1}{|\mathcal{P}|} \mbox{size} \{ p \in \mathcal{P}: r_{p,s} \leq \tau \}. \end{eqnarray*} The performance ratio $r_{p,s}$ is in turn defined by \[ r_{p,s} \; = \; \frac{t_{p,s} }{\min\{t_{p,s}: s \in \mathcal{S}\}}, \] where $t_{p,s} > 0$ measures the performance of the solver~$s$ when solving problem~$p$ (measured here by the number of function evaluations, the number of gradient evaluations, and the CPU time). Better performance of the solver~$s$, relative to the other solvers on the set of problems, is indicated by higher values of $\rho_s(\tau)$. In particular, efficiency is measured by $\rho_s(1)$ (the fraction of problems for which solver~$s$ performs the best) and robustness is measured by $\rho_s(\tau)$ for $\tau$ sufficiently large (the fraction of problems solved by~$s$). Following what is suggested in~\cite{EDDolan_JJMore_2002} for a better visualization, we plot the performance profiles in a $\log_2$-scale (for which $\tau=1$ will correspond to $\tau=0$).
\begin{figure} \caption{Performance profiles for $62$ large scale optimization problems (i.e., $1000 \le n \le 10000$).} \label{subfig2:pp:ls} \label{subfig1:pp:ls} \label{subfig3:pp:ls} \label{fig:pp:ls} \end{figure}

We present the obtained performance profiles in Figure~\ref{fig:pp:ls}. Regarding the gradient evaluation (i.e., outer iteration) performance profile, see Figure \ref{subfig2:pp:ls}, the LS approaches are the most efficient among all the tested solvers (in more than $60\%$ of the tested problems the LS methods perform best, while \texttt{GLRT-ARC} and \texttt{GLTR-TR} perform best in less than $15\%$). When it comes to robustness, all the tested approaches exhibit good performance; \texttt{GLRT-ARC} and \texttt{GLTR-TR} are slightly better. For the function evaluation performance profile, given in Figure~\ref{subfig1:pp:ls}, \texttt{GLRT-ARC} and \texttt{GLTR-TR} show better efficiency, but still not as good as the LS methods. In fact, in more than $50\%$ of the tested problems the LS methods perform best, while \texttt{GLRT-ARC} and \texttt{GLTR-TR} are better in less than $35\%$. The robustness of the tested algorithms is the same as in the gradient evaluation performance profile. In terms of the required computing time, see Figure~\ref{subfig3:pp:ls}, as one can expect, \texttt{GLRT-ARC} and \texttt{GLTR-TR} turn out to be very expensive compared to the LS approaches. In fact, unlike the LS methods, where only an approximate solution of one linear system is needed, the \texttt{GLRT}/\texttt{GLTR} approaches may require (approximately) solving multiple linear systems in sequence. Among the LS approaches, one can see that \texttt{LS-TR} displays better performance than \texttt{LS-ARMIJO} on the tested problems. The main difference between the two LS algorithms is the strategy for choosing the search direction whenever $g_k^{\top}s_k^Q>0$.
On the tested problems, the performance obtained with \texttt{LS-TR} suggests that going exactly in the opposite direction $-s_k^Q$, whenever $s_k^Q$ is not a descent direction, can be a good strategy compared to \texttt{LS-ARMIJO}.

\section{Conclusions} \label{section:6} In this paper, we have proposed the use of a specific norm in ARC/TR. With this norm choice, we have shown that the trial step of ARC/TR becomes collinear with the quasi-Newton direction. The obtained ARC/TR algorithms behave as LS algorithms with a specific backtracking strategy. Under mild assumptions, the proposed scaled norm was shown to be uniformly equivalent to the Euclidean norm. In this case, the obtained LS algorithms enjoy the same convergence and complexity properties as ARC/TR. We have also proposed a second-order version of the LS algorithm derived from ARC with an optimal worst-case complexity bound of order $\epsilon^{-3/2}$. Our numerical experiments showed encouraging performance of the proposed LS algorithms. A number of issues need further investigation, in particular the best choice and the impact of the parameter $\beta_k$ on the performance of the proposed LS approaches. Also, the analysis of the second-order version of ARC suggests that taking the Newton direction is suitable for defining a line-search method with an optimal worst-case complexity bound of order $\epsilon^{-3/2}$. It would be interesting to confirm the potential of the proposed line-search strategy compared to classical LS approaches using extensive numerical tests. \small \end{document}
# Basic modular arithmetic operations

A modular operation is an operation in which we take the remainder of a division. For example, the remainder of the division of 7 by 3 is 1, so we say that 7 modulo 3 is equal to 1. We write this as $7 \equiv 1 \pmod{3}$.

Addition and subtraction in modular arithmetic follow the same rules as in regular arithmetic, except that the result is reduced modulo the modulus. For example, if we add 2 to 1 modulo 3, we get:

$$(1 + 2) \equiv 0 \pmod{3}$$

since 3 leaves a remainder of 0 when divided by 3.

Multiplication in modular arithmetic is also straightforward. For example, if we multiply 2 by 3 modulo 4, we get:

$$(2 \times 3) \equiv 2 \pmod{4}$$

since 6 leaves a remainder of 2 when divided by 4.

Division in modular arithmetic requires us to find the multiplicative inverse of the divisor modulo the modulus. The multiplicative inverse of a number $a$ modulo $m$ is a number $b$ such that $ab \equiv 1 \pmod{m}$. For example, the multiplicative inverse of 2 modulo 5 is 3, because $2 \times 3 \equiv 1 \pmod{5}$.

Now that we have covered the basic operations in modular arithmetic, let's move on to the next section and explore how we can apply modular arithmetic in numerical methods.

## Exercise

Solve the following modular equations:

1. $(x + 2) \equiv 5 \pmod{7}$
2. $(x - 4) \equiv 1 \pmod{8}$

# Applying modular arithmetic in numerical methods

One common application of modular arithmetic in numerical methods is the calculation of large modular powers. For example, consider the calculation of $2^{100} \pmod{13}$. We can use the properties of modular exponentiation to simplify this calculation.

Fermat's Little Theorem states that if $p$ is a prime number and $a$ is an integer not divisible by $p$, then $a^{p-1} \equiv 1 \pmod{p}$. Using this theorem, we can calculate large modular powers efficiently.
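Before working through the example, the basic operations above can be checked directly in Python, where the `%` operator returns the remainder (this snippet is illustrative):

```python
# The % operator gives the remainder, i.e. reduction modulo m.
print(7 % 3)        # 1, matching 7 ≡ 1 (mod 3)
print((1 + 2) % 3)  # 0
print((2 * 3) % 4)  # 2

# Multiplicative inverse of 2 modulo 5: the b in [1, 5) with (2*b) % 5 == 1.
inverse = next(b for b in range(1, 5) if (2 * b) % 5 == 1)
print(inverse)      # 3
```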
For example, to calculate $2^{100} \pmod{13}$, we can use Fermat's Little Theorem as follows:

$$2^{12} \equiv 1 \pmod{13}$$

Since $100 = 8 \times 12 + 4$, we can calculate $2^{100} \pmod{13}$ as follows:

$$2^{100} \equiv (2^{12})^{8} \times 2^{4} \equiv 1 \times 16 \equiv 3 \pmod{13}$$

## Exercise

Find the solutions to the following Diophantine equations:

1. $3x + 5y = 10$
2. $4x - 7y = 15$

# Linear Diophantine equations

A linear Diophantine equation is an equation of the form:

$$ax + by = c$$

where $a$, $b$, and $c$ are integers, and $x$ and $y$ are variables.

To solve a linear Diophantine equation, we can use the Extended Euclidean Algorithm. The algorithm allows us to find the greatest common divisor of two numbers and, at the same time, find integer coefficients $x$ and $y$ such that $ax + by = \gcd(a, b)$.

For example, let's solve the equation $3x + 5y = 10$. The Extended Euclidean Algorithm gives the greatest common divisor of 3 and 5, which is 1, together with the coefficients 2 and $-1$, since $3 \times 2 + 5 \times (-1) = 1$. Multiplying by 10 yields the particular solution $x = 20$, $y = -10$.

So the general solution to the equation $3x + 5y = 10$ is $x = 20 + 5t$ and $y = -10 - 3t$, where $t$ is an integer.

## Exercise

Solve the following system of congruences:

1. $x \equiv 1 \pmod{4}$
2. $x \equiv 2 \pmod{7}$

# The Chinese Remainder Theorem

The Chinese Remainder Theorem is a mathematical theorem that states that if we have a system of congruences $x \equiv a_i \pmod{m_i}$ with pairwise coprime moduli $m_i$, then there exists a unique solution modulo the product of the moduli.

To apply the Chinese Remainder Theorem, we first need to find certain modular inverses. Writing $M = m_1 m_2 \cdots m_n$ and $M_i = M / m_i$, the solution is given by:

$$x \equiv \sum_{i=1}^{n} a_i \times M_i \times (M_i^{-1} \bmod m_i) \pmod{M}$$

where $M_i^{-1} \bmod m_i$ denotes the multiplicative inverse of $M_i$ modulo $m_i$.

For example, let's solve the system of congruences $x \equiv 2 \pmod{3}$ and $x \equiv 3 \pmod{5}$. Here $M = 15$, $M_1 = 5$, and $M_2 = 3$. The inverse of 5 modulo 3 is 2, and the inverse of 3 modulo 5 is 2, so:

$$x \equiv 2 \times 5 \times 2 + 3 \times 3 \times 2 \equiv 20 + 18 \equiv 38 \equiv 8 \pmod{15}$$

So the solution to the system of congruences is $x \equiv 8 \pmod{15}$.

## Exercise

Find the modular inverse of the following numbers:

1. $2 \pmod{7}$
2. $3 \pmod{5}$

# Modular exponentiation and its applications

The modular exponentiation operation is denoted as $a^b \pmod{m}$. For example, to calculate $2^3 \pmod{5}$, we can use the following formula:

$$2^3 \equiv (2 \times 2 \times 2) \pmod{5}$$

Now we can calculate $2^3 \pmod{5}$ as follows:

$$2^3 \equiv 8 \equiv 3 \pmod{5}$$

## Exercise

Solve the following modular equations:

1. $x^2 \equiv 4 \pmod{5}$
2. $x^3 \equiv 6 \pmod{7}$

# Solving modular equations

One common method to solve modular equations is to use the Chinese Remainder Theorem. We can split the modulus into prime factors and solve the equation modulo each prime factor. Then, we can use the Chinese Remainder Theorem to combine the solutions modulo the original modulus.

For example, let's solve the equation $x^2 \equiv 9 \pmod{10}$. We can split the modulus into the prime factors 2 and 5, solve $x^2 \equiv 1 \pmod{2}$ and $x^2 \equiv 4 \pmod{5}$, and then use the Chinese Remainder Theorem to combine the solutions modulo 10.

So the solutions to the equation $x^2 \equiv 9 \pmod{10}$ are $x \equiv 3 \pmod{10}$ and $x \equiv 7 \pmod{10}$.
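For small moduli, such quadratic congruences can also be solved (or a worked example double-checked) by exhaustive search. The helper below is an illustrative sketch, not part of the text:

```python
def solve_quadratic_congruence(a, m):
    """Return all x in [0, m) with x*x ≡ a (mod m), by exhaustive search."""
    return [x for x in range(m) if (x * x) % m == a % m]

print(solve_quadratic_congruence(9, 10))  # [3, 7]
print(solve_quadratic_congruence(1, 8))   # [1, 3, 5, 7]
```

Exhaustive search is only practical for small moduli; for large prime moduli one would use specialized algorithms such as Tonelli-Shanks instead.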
## Exercise

Implement the following Python function to calculate the modular inverse of a number:

```python
def modular_inverse(a, m):
    # Your implementation here
```

# Implementing modular arithmetic in Python

For example, we can implement the following Python function to calculate the modular exponentiation:

```python
def modular_exponentiation(a, b, m):
    result = 1
    for _ in range(b):
        result = (result * a) % m
    return result
```

Now, let's implement the following Python function to calculate the modular inverse of a number. It relies on a helper implementing the Extended Euclidean Algorithm:

```python
def extended_euclidean(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = extended_euclidean(b, a % b)
    return g, y1, x1 - (a // b) * y1

def modular_inverse(a, m):
    gcd, x, _ = extended_euclidean(a, m)
    if gcd != 1:
        raise ValueError("Modular inverse does not exist")
    return x % m
```

## Exercise

Implement the following Python function to calculate the modular inverse of a number using Fermat's Little Theorem:

```python
def modular_inverse_fermat(a, m):
    # Your implementation here
```

# Efficient calculation of large modular powers

For example, we can implement the following Python function to calculate the modular inverse of a number using Fermat's Little Theorem (note that this approach only works when $m$ is prime):

```python
from math import gcd

def modular_inverse_fermat(a, m):
    # m must be prime: by Fermat's Little Theorem, a**(m-2) ≡ a**(-1) (mod m).
    if gcd(a, m) != 1:
        raise ValueError("Modular inverse does not exist")
    return pow(a, m - 2, m)
```

Now we can calculate the modular inverse of a number efficiently using Fermat's Little Theorem.

## Exercise

Implement the following Python function to calculate the modular exponentiation using Fermat's Little Theorem:

```python
def modular_exponentiation_fermat(a, b, m):
    # Your implementation here
```

# Applications of modular arithmetic in cryptography

For example, RSA relies on the difficulty of factoring large numbers. The security of RSA depends on the difficulty of recovering the private exponent, which is the modular inverse of the public exponent modulo $\varphi(n)$; this is easy to compute when the factorization of $n$ is known, but believed to be hard otherwise.
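As a quick sanity check (this comparison harness is illustrative, not from the text), the loop-based `modular_exponentiation` defined earlier agrees with Python's built-in three-argument `pow`, which uses repeated squaring internally and therefore handles very large exponents efficiently:

```python
def modular_exponentiation(a, b, m):
    # Naive O(b) version, repeated here so the snippet is self-contained.
    result = 1
    for _ in range(b):
        result = (result * a) % m
    return result

# The built-in pow(a, b, m) computes the same values in O(log b) steps.
for a, b, m in [(2, 10, 1000), (3, 20, 7), (5, 0, 11)]:
    assert modular_exponentiation(a, b, m) == pow(a, b, m)

print(pow(2, 100, 13))  # 3
```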
## Exercise

Implement the following Python function to calculate the modular exponentiation using the binary exponentiation method:

```python
def modular_exponentiation_binary(a, b, m):
    # Your implementation here
```

# Modular arithmetic in polynomial arithmetic

For example, we can use modular arithmetic to calculate the modular exponentiation of a polynomial. This can be useful in cryptography and coding theory.

We have now covered all the sections in this textbook on applying modular arithmetic in numerical methods with Python. We have explored the basics of modular arithmetic, its applications in numerical methods, linear Diophantine equations, the Chinese Remainder Theorem, modular exponentiation and its applications, solving modular equations, implementing modular arithmetic in Python, efficient calculation of large modular powers, applications of modular arithmetic in cryptography, and modular arithmetic in polynomial arithmetic.

## Exercise

Solve the following system of modular equations:

1. $3x + 5y \equiv 10 \pmod{7}$
2. $4x - 7y \equiv 15 \pmod{13}$
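To make the polynomial-arithmetic remark above concrete, here is a small illustrative helper (the function name and interface are mine, not from the text) that multiplies two polynomials while reducing every coefficient modulo $m$:

```python
def poly_mul_mod(p, q, m):
    """Multiply polynomials p and q (coefficient lists, lowest degree first),
    reducing every coefficient modulo m."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] = (result[i + j] + a * b) % m
    return result

# (1 + x)(2 + x) = 2 + 3x + x^2; modulo 3 the middle coefficient vanishes.
print(poly_mul_mod([1, 1], [2, 1], 3))  # [2, 0, 1]
```

Repeated calls to a helper like this give polynomial exponentiation modulo $m$, the operation mentioned in the section above.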
# Vectors and matrices

A vector is a mathematical object that has both magnitude and direction. In linear algebra, vectors are typically represented as ordered lists of numbers. For example, the vector $\begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix}$ can be thought of as a point in 3D space with coordinates (2, 3, 4).

A matrix is a rectangular array of numbers. It is used to represent linear transformations, systems of linear equations, and more. For example, the matrix $\begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}$ can be used to represent a linear transformation from 2D space to 3D space.

## Exercise

Consider the vector $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$. What is its magnitude?

To find the magnitude of a vector, we can use the Euclidean norm. The Euclidean norm of a vector $\begin{bmatrix} x \\ y \\ z \end{bmatrix}$ is given by:

$$\sqrt{x^2 + y^2 + z^2}$$

So, the magnitude of the vector $\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ is:

$$\sqrt{1^2 + 2^2 + 3^2} = \sqrt{14}$$

# Matrix operations: addition, subtraction, and multiplication

Matrix addition and subtraction involve adding or subtracting corresponding elements of two matrices. For example, given matrices A and B:

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

$$B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$

The sum of A and B is:

$$A + B = \begin{bmatrix} 1 + 5 & 2 + 6 \\ 3 + 7 & 4 + 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$$

The difference of A and B is:

$$A - B = \begin{bmatrix} 1 - 5 & 2 - 6 \\ 3 - 7 & 4 - 8 \end{bmatrix} = \begin{bmatrix} -4 & -4 \\ -4 & -4 \end{bmatrix}$$

Matrix multiplication involves taking the dot product of rows of the first matrix with columns of the second matrix.
For example, given matrices A and B:

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

$$B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$

The product of A and B is:

$$AB = \begin{bmatrix} (1 \cdot 5 + 2 \cdot 7) & (1 \cdot 6 + 2 \cdot 8) \\ (3 \cdot 5 + 4 \cdot 7) & (3 \cdot 6 + 4 \cdot 8) \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$$

## Exercise

Given matrices A and B:

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

$$B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$

Find the sum, difference, and product of A and B.

The sum of A and B is:

$$A + B = \begin{bmatrix} 1 + 5 & 2 + 6 \\ 3 + 7 & 4 + 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$$

The difference of A and B is:

$$A - B = \begin{bmatrix} 1 - 5 & 2 - 6 \\ 3 - 7 & 4 - 8 \end{bmatrix} = \begin{bmatrix} -4 & -4 \\ -4 & -4 \end{bmatrix}$$

The product of A and B is:

$$AB = \begin{bmatrix} (1 \cdot 5 + 2 \cdot 7) & (1 \cdot 6 + 2 \cdot 8) \\ (3 \cdot 5 + 4 \cdot 7) & (3 \cdot 6 + 4 \cdot 8) \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$$

# Linear transformations, including projection and reflection

A linear transformation is a function that maps vectors from one vector space to another while preserving the operations of addition and scalar multiplication. In other words, a linear transformation can be represented as a matrix multiplication.

For example, let's consider a linear transformation that scales a vector by a factor of 2:

$$T(x) = 2x$$

This transformation can be represented by the matrix:

$$A = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$$

A projection is a linear transformation that maps a vector onto a lower-dimensional subspace, discarding the component orthogonal to that subspace. For example, consider a projection onto the x-axis:

$$P(x) = (x_1, 0)$$

This projection can be represented by the matrix:

$$B = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$$

A reflection is a linear transformation that flips a vector across a subspace (such as an axis), reversing the component perpendicular to it.
For example, consider a reflection across the x-axis:

$$R(x) = (x_1, -x_2)$$

This reflection can be represented by the matrix:

$$C = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

## Exercise

Given a vector $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$, find its image under the linear transformation A, projection B, and reflection C.

The image of the vector under the linear transformation A is:

$$A \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$$

The image of the vector under the projection B is:

$$B \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

The image of the vector under the reflection C is:

$$C \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}$$

# Determinants and their properties

The determinant of an $n \times n$ matrix A is defined as:

$$\det(A) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^n A_{i \sigma(i)}$$

where $S_n$ is the set of all permutations of the set $\{1, 2, \ldots, n\}$, and $\text{sgn}(\sigma)$ is the sign of the permutation $\sigma$.

Some properties of determinants include:

- A is invertible if and only if $\det(A) \neq 0$.
- If two rows (or two columns) of A are identical, then $\det(A) = 0$.
- The determinant is multiplicative: $\det(AB) = \det(A)\det(B)$. (It is not additive: in general, $\det(B + C) \neq \det(B) + \det(C)$.)
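These determinant facts are easy to spot-check numerically. The sketch below uses plain Python for $2 \times 2$ matrices; the helper names (`det2`, `matmul2`) are illustrative, not a standard library:

```python
def det2(M):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# The determinant is multiplicative, not additive: det(AB) = det(A) * det(B)
print(det2(matmul2(A, B)) == det2(A) * det2(B))   # True

# Two identical rows force a zero determinant
print(det2([[1, 2], [1, 2]]))                     # 0

# det(A) != 0 means A is invertible; for 2x2, inv = (1/det) [[d, -b], [-c, a]]
d = det2(A)
inv = [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]
print(matmul2(A, inv))                            # the identity matrix
```

The last check uses the classical adjugate formula for the $2 \times 2$ inverse, which ties the invertibility property directly back to the determinant.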
## Exercise

Find the determinant of the matrix:

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

The determinant of the matrix A is:

$$\det(A) = 1 \cdot 4 - 2 \cdot 3 = 4 - 6 = -2$$

# Eigenvalues and eigenvectors

For example, let's consider the matrix A:

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$

The eigenvalues of A are the roots of the characteristic equation:

$$\det(\lambda I - A) = 0$$

where $\lambda$ is an eigenvalue and $I$ is the identity matrix. The eigenvectors of A are the nonzero vectors that, when multiplied by the matrix A, are scaled by the corresponding eigenvalue.

## Exercise

Find the eigenvalues and eigenvectors of the matrix A.

Here the characteristic equation is $(\lambda - 2)^2 - 1 = \lambda^2 - 4\lambda + 3 = 0$, so the eigenvalues of A are:

$$\lambda_1 = 1$$

$$\lambda_2 = 3$$

The corresponding eigenvectors of A are:

$$v_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \quad (\text{for } \lambda_1 = 1)$$

$$v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \quad (\text{for } \lambda_2 = 3)$$

# Plotting data in MATLAB

For example, let's consider the following data points:

$$x = [1, 2, 3, 4, 5]$$

$$y = [2, 4, 6, 8, 10]$$

We can plot these data points in MATLAB using the following code:

```matlab
x = [1, 2, 3, 4, 5];
y = [2, 4, 6, 8, 10];
plot(x, y);
```

This code will create a line plot of the data points.

## Exercise

Plot the following data points in MATLAB:

$$x = [1, 2, 3, 4, 5]$$

$$y = [2, 4, 6, 8, 10]$$

To plot the data points in MATLAB, you can use the following code:

```matlab
x = [1, 2, 3, 4, 5];
y = [2, 4, 6, 8, 10];
plot(x, y);
```

This code will create a line plot of the data points.
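Returning to the eigenvalue exercise earlier: for a $2 \times 2$ matrix the characteristic polynomial is $\lambda^2 - \mathrm{tr}(A)\lambda + \det(A)$, so the eigenvalues follow from the quadratic formula, and each candidate eigenvector can be checked by multiplying it by A. A plain-Python spot-check (no MATLAB required):

```python
import math

A = [[2, 1], [1, 2]]
tr = A[0][0] + A[1][1]                       # trace = 4
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant = 3

# Roots of lambda^2 - tr*lambda + det = 0
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2
print(lam1, lam2)                            # 1.0 3.0

def apply(A, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# A v = lambda v: [1, -1] pairs with eigenvalue 1, [1, 1] with eigenvalue 3
print(apply(A, [1, -1]))   # [1, -1]
print(apply(A, [1, 1]))    # [3, 3]
```

The multiplication check makes the pairing unambiguous: applying A to $[1, 1]$ triples it, while $[1, -1]$ is left unchanged.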
# Customizing plots using MATLAB

For example, we can change the color of the plotted line by passing the `'Color'` name-value pair to `plot` (the color belongs to the line object; setting `'Color'` on `gca` would change the axes background instead):

```matlab
x = [1, 2, 3, 4, 5];
y = [2, 4, 6, 8, 10];
plot(x, y, 'Color', 'red');
```

We can also add labels to the plot using the `xlabel` and `ylabel` functions:

```matlab
x = [1, 2, 3, 4, 5];
y = [2, 4, 6, 8, 10];
plot(x, y);
xlabel('Time');
ylabel('Distance');
```

## Exercise

Customize the plot of the data points from the previous exercise by changing the line color to blue, adding x-axis and y-axis labels, and adding a title.

To customize the plot of the data points, you can use the following code:

```matlab
x = [1, 2, 3, 4, 5];
y = [2, 4, 6, 8, 10];
plot(x, y, 'Color', 'blue');
xlabel('Time');
ylabel('Distance');
title('Distance vs Time');
```

This code will create a line plot of the data points with a blue line, x-axis and y-axis labels, and a title.

# Advanced topics in linear algebra: null space, rank, and inverse matrices

The null space of a matrix A is the set of all vectors x that satisfy the equation Ax = 0. The null space of a matrix is also known as its kernel.

The rank of a matrix A is the dimension of its column space. It is the maximum number of linearly independent columns in A.

The inverse of a matrix A is a matrix B such that AB = BA = I, where I is the identity matrix.

## Exercise

Find the null space, rank, and inverse of the matrix:

$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$

Since $\det(A) = -2 \neq 0$, the columns of A are linearly independent, so the null space of A contains only the zero vector:

$$N(A) = \{x \in \mathbb{R}^2 : Ax = 0\} = \left\{ \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}$$

The rank of the matrix A is:

$$\text{rank}(A) = 2$$

The inverse of the matrix A is:

$$A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} 4 & -2 \\ -3 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 1 \\ 1.5 & -0.5 \end{bmatrix}$$

# Applications of linear algebra and data visualization in real-world scenarios

For example, image processing often involves linear transformations and data visualization to enhance or analyze images.
In natural language processing, linear algebra is used to represent text data as vectors and perform dimensionality reduction using techniques like Principal Component Analysis (PCA). In machine learning, linear algebra is used to train and test machine learning models, and data visualization is used to analyze and interpret the results.

## Exercise

Discuss the application of linear algebra and data visualization in a real-world scenario, such as image processing or natural language processing.

Linear algebra and data visualization are widely used in image processing to enhance or analyze images. For example, image transformations can be represented as matrix operations. In addition, data visualization techniques like histograms and heatmaps can be used to analyze image data and identify patterns or trends.

In natural language processing, linear algebra is used to represent text data as vectors and perform dimensionality reduction using techniques like Principal Component Analysis (PCA). This allows for more efficient analysis and processing of large text datasets. Data visualization techniques like word clouds and network graphs can be used to analyze and interpret the results.

In machine learning, linear algebra is used to train and test machine learning models. For example, linear regression models can be represented as matrix operations, and gradient descent algorithms can be used to optimize the model parameters. Data visualization techniques like scatter plots and decision boundaries can be used to analyze and interpret the results.
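To make the linear-regression remark concrete, here is a minimal sketch of fitting a line $y \approx ax + b$ by solving the $2 \times 2$ normal equations directly in plain Python. The data values are illustrative (reusing the points from the plotting exercise, which lie exactly on a line):

```python
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]
n = len(xs)

sxx = sum(x * x for x in xs)
sx = sum(xs)
sxy = sum(x * y for x, y in zip(xs, ys))
sy = sum(ys)

# Normal equations for least squares:
#   [sxx  sx] [a]   [sxy]
#   [sx    n] [b] = [sy ]
d = sxx * n - sx * sx            # determinant of the 2x2 system
a = (sxy * n - sx * sy) / d
b = (sxx * sy - sx * sxy) / d
print(a, b)   # 2.0 0.0 -- the data lie exactly on y = 2x
```

Solving the system by Cramer's rule is exactly the "linear regression as matrix operations" idea mentioned above, just written out for the two-parameter case.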
The Annals of Probability (Ann. Probab.), Volume 35, Number 4 (2007), 1438-1478.

Stochastic integration in UMD Banach spaces

J. M. A. M. van Neerven, M. C. Veraar, and L. Weis

In this paper we construct a theory of stochastic integration of processes with values in ℒ(H, E), where H is a separable Hilbert space and E is a UMD Banach space (i.e., a space in which martingale differences are unconditional). The integrator is an H-cylindrical Brownian motion. Our approach is based on a two-sided Lp-decoupling inequality for UMD spaces due to Garling, which is combined with the theory of stochastic integration of ℒ(H, E)-valued functions introduced recently by two of the authors. We obtain various characterizations of the stochastic integral and prove versions of the Itô isometry, the Burkholder–Davis–Gundy inequalities, and the representation theorem for Brownian martingales.

First available in Project Euclid: 8 June 2007. https://projecteuclid.org/euclid.aop/1181334250

Mathematics Subject Classification. Primary: 60H05 (stochastic integrals). Secondary: 28C20 (set functions, measures, and integrals in infinite-dimensional spaces); 60B11 (probability theory on linear topological spaces).

Keywords: stochastic integration in Banach spaces; UMD Banach spaces; cylindrical Brownian motion; γ-radonifying operators; decoupling inequalities; Burkholder–Davis–Gundy inequalities; martingale representation theorem.

Citation: van Neerven, J. M. A. M.; Veraar, M. C.; Weis, L. Stochastic integration in UMD Banach spaces. Ann. Probab. 35 (2007), no. 4, 1438-1478. doi:10.1214/009117906000001006.
Published by the Institute of Mathematical Statistics.
Can someone offer an intuitive understanding of linear/quadratic probing and double hashing?

I'm reading through Introduction to Algorithms, and I'm having trouble grasping intuitively how linear probing, quadratic probing, and double hashing exactly work. I suspect my confusion lies within my hazy understanding of hashing itself, so I'd appreciate if anyone could clear up these areas and help me grasp the concepts.

Here's what the textbook has to say about linear probing and quadratic probing: What does it mean to "first probe T[h'(k)]"? Also, what is "then we wrap around to slots T[0],..."? I'm also confused as to what primary clustering means; in particular, the part that talks about "long runs of occupied slots [building] up..." Any help would be great. Thank you.

number-theory algorithms hash-function

Bob John

- For historical perspective, probing was introduced in Communion by Whitley Streiber. – Will Jagy Apr 11 '13 at 23:22
- @WillJagy. Yuck! (reluctantly, +1) – Rick Decker Apr 12 '13 at 1:19
- @RickDecker if you google wiki probing one of the first few responses is completely informative. And very funny. – Will Jagy Apr 12 '13 at 1:50

In both cases, as you probably know, you have a universe of objects, $U$, and you wish to insert a number $n \le m$ of these objects into an array $T = T[0], T[1], \dots, T[m-1]$ with no more than one object in each array slot (commonly known as a bucket). One way to do this is to use a hash function $h(x)$ that maps $U$ into the set of array indices $\{0, 1, \dots, m-1\}$. The problem is that the size of $U$ is generally larger than $m$, the number of buckets in the array, so you have the potential that two different objects in your universe might be sent by $h$ to the same bucket, known as a collision. In other words, you might have different objects $x_1, x_2$ such that $h(x_1) = h(x_2)$.
To handle situations like this, you need not only a hash function, but also a protocol to deal with collisions when they occur. In linear probing, the protocol to insert an object $x$ into the array is to first look at the bucket at index $h(x)$, namely $T[h(x)]$, which is what your notes refer to as the "first probe". If that bucket, $T[h(x)]$ is already occupied by an object other than $x$, you then try to insert $x$ into $T[h(x)+1]$. If that doesn't work, you try to insert $x$ into $T[h(x)+2], T[h(x)+3]$, and so on, until you find a vacant bucket or reach the last slot, $T[m-1]$ in your array. What do you do if you come to the last slot and haven't found a place yet for $x$? A simple way is to do what your notes call "wrap around", namely continue searching, starting at the top bucket, $T[0]$ and continuing to search at $T[1], T[2], T[3]$ and so on, which explains the $\mod m$ in your notes. Unless the hash table is completely full, this strategy will always find an available slot for the object $x$. Visualize the array as a collection of boxes. Some of them are currently empty, so color them white. Some of them are already full, so color them black. Now think of inserting several more objects, using linear probing. Each attempt to insert an object into a black box will lead to a sequence of consecutive black boxes with your new element occupying the first white box after those, which you then color black. Each of these sequences of adjacent black boxes is known as a cluster and it's not hard to see that, first, these clusters will grow over time and, even worse, might bump into other already existing clusters, leading to even longer ones, all of which slow down the insertion process. The moral: big clusters lead to slow hashing, which defeats the whole purpose of this data structure. There are several other protocols that lessen this clustering. One is quadratic probing, described in your notes. 
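The linear-probing insertion protocol just described fits in a few lines of Python. In this sketch the hash function $h(x) = x \bmod m$, the table size, and the sample keys are illustrative choices, and "probes" counts the number of buckets examined per insertion:

```python
def linear_probe_insert(table, key):
    """Insert key into an open-addressed table using linear probing.
    Returns the number of buckets probed (1 means no collision)."""
    m = len(table)
    home = key % m                    # first probe at T[h(key)]
    for i in range(m):
        slot = (home + i) % m         # the mod m wraps around to T[0], T[1], ...
        if table[slot] is None:       # found a vacant bucket
            table[slot] = key
            return i + 1
    raise RuntimeError("hash table is full")

keys = [2, 17, 21, 35, 47, 13, 3, 6, 46, 29, 10]
table = [None] * 11
probe_counts = [linear_probe_insert(table, k) for k in keys]
print(probe_counts)     # later insertions need more probes as clusters form
print(sum(probe_counts))
```

Watching `probe_counts` grow for the later keys is exactly the primary-clustering slowdown described above: once a run of occupied buckets forms, any key hashing into it must walk the whole run.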
Without going into too much detail, this does roughly the same thing as linear probing, namely "keep probing (somehow) until you find an available slot". In linear probing the "somehow" is "at the current slot plus 1"; in quadratic probing, the "somehow" is "at another slot determined by a quadratic function". This still leads to clustering, but since the next slot you try isn't the next one in line, you're less likely to wind up with big clusters. Instead, you'll have a collection of smaller clusters, separated by collections of available buckets. In general, this will lead to more efficient hashing on the average.

A good way to see this in action is to try an example. Suppose, for instance, you have a hash table with 11 buckets and a very simple hash function from the integers into the indices $\{0, 1, 2, \dots, 10\}$ given by $h(x) = x \bmod 11$. Try inserting a collection of numbers like $2, 17, 21, 35, 47, 13, 3, 6, 46, 29, 10$ in that order and count the number of probes required using linear probing. Then compare that with quadratic probing, using successive probes given by $i^2 + 2i$. In other words, from an initial probe to index $h(x) = t$, look at slots
$$ t+3, t+8, t+4, t+2, t+2, t+4, t+8, t+3, t, t+10 $$
if necessary, wrapping $\bmod\ 11$ to keep the indices in range (these offsets are $i^2 + 2i \bmod 11$ for $i = 1, \dots, 10$; note that because the offsets repeat, a quadratic probe sequence may fail to reach some slots). Count the number of probes this takes; it should be less than linear probing (though I haven't tried it).

Rick Decker

- What is the wrap around to T[0], T[1], etc.? This is where most of my confusion arises. – Bob John Apr 15 '13 at 20:22
- Let's say that the slots of your hash table are $T[0], \dots, T[12]$ so you have 13 slots. Suppose you're using linear probing, trying to insert an element into $T[10]$ and suppose that slot and $T[11], T[12]$ are already occupied. Probing into $T[10], T[11]$ and $T[12]$ fails to find a vacant slot, so you continue with $T[0], T[1], T[2], \dots$ until you find a vacant slot.
It's as if your array was arranged in a circle, rather than a linear segment. – Rick Decker Apr 16 '13 at 12:10
Gilbreath's conjecture

Gilbreath's conjecture is a conjecture in number theory regarding the sequences generated by applying the forward difference operator to consecutive prime numbers and leaving the results unsigned, and then repeating this process on consecutive terms in the resulting sequence, and so forth. The statement is named after Norman L. Gilbreath who, in 1958, presented it to the mathematical community after observing the pattern by chance while doing arithmetic on a napkin.[1] In 1878, eighty years before Gilbreath's discovery, François Proth had, however, published the same observations along with an attempted proof, which was later shown to be false.[1]

Motivating arithmetic

Gilbreath observed a pattern while playing with the ordered sequence of prime numbers

2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, ...

Computing the absolute value of the difference between term n + 1 and term n in this sequence yields the sequence

1, 2, 2, 4, 2, 4, 2, 4, 6, 2, ...

If the same calculation is done for the terms in this new sequence, and the sequence that is the outcome of this process, and again ad infinitum for each sequence that is the output of such a calculation, the following five sequences in this list are

1, 0, 2, 2, 2, 2, 2, 2, 4, ...
1, 2, 0, 0, 0, 0, 0, 2, ...
1, 2, 0, 0, 0, 0, 2, ...
1, 2, 0, 0, 0, 2, ...
1, 2, 0, 0, 2, ...

What Gilbreath—and François Proth before him—noticed is that the first term in each series of differences appears to be 1.

The conjecture

Stating Gilbreath's observation formally is significantly easier to do after devising a notation for the sequences in the previous section. Toward this end, let $(p_{n})$ denote the ordered sequence of prime numbers, and define each term in the sequence $(d_{n}^{1})$ by

$d_{n}^{1}=p_{n+1}-p_{n},$

where $n$ is positive.
Also, for each integer $k$ greater than 1, let the terms in $(d_{n}^{k})$ be given by

$d_{n}^{k}=|d_{n+1}^{k-1}-d_{n}^{k-1}|.$

Gilbreath's conjecture states that every term in the sequence $a_{k}=d_{1}^{k}$ for positive $k$ is equal to 1.

Verification and attempted proofs

As of 2013, no valid proof of the conjecture has been published. As mentioned in the introduction, François Proth released what he believed to be a proof of the statement that was later shown to be flawed. Andrew Odlyzko verified that $d_{1}^{k}$ is equal to 1 for $k\leq n=3.4\times 10^{11}$ in 1993,[2] but the conjecture remains an open problem. Instead of evaluating n rows, Odlyzko evaluated 635 rows and established that the 635th row started with a 1 and continued with only 0s and 2s for the next n numbers. This implies that the next n rows begin with a 1.

Generalizations

In 1980, Martin Gardner published a conjecture by Hallard Croft that stated that the property of Gilbreath's conjecture (having a 1 in the first term of each difference sequence) should hold more generally for every sequence that begins with 2, subsequently contains only odd numbers, and has a sufficiently low bound on the gaps between consecutive elements in the sequence.[3] This conjecture has also been repeated by later authors.[4][5] However, it is false: for every initial subsequence of 2 and odd numbers, and every non-constant growth rate, there is a continuation of the subsequence by odd numbers whose gaps obey the growth rate but whose difference sequences fail to begin with 1 infinitely often.[6] Odlyzko (1993) is more careful, writing of certain heuristic reasons for believing Gilbreath's conjecture that "the arguments above apply to many other sequences in which the first element is a 1, the others even, and where the gaps between consecutive elements are not too large and are sufficiently random."[2][7] However, he does not give a formal definition of what "sufficiently random" means.
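The iterated-difference construction is easy to check numerically for small ranges. In the sketch below the sieve bound of 1000 (giving 168 primes) and the 150 checked rows are arbitrary; note that the first element of row $k$ depends only on the first $k + 1$ primes, so truncating the prime list does not affect the leading terms:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

row = primes_up_to(1000)          # 168 primes
leading_terms = []
for _ in range(150):
    # one application of the unsigned forward difference operator
    row = [abs(b - a) for a, b in zip(row, row[1:])]
    leading_terms.append(row[0])  # d_1^k, the first term of the k-th row

print(all(t == 1 for t in leading_terms))   # True, as the conjecture predicts
```

This is, in miniature, the same computation Odlyzko performed at vastly larger scale.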
See also

• Difference operator
• Prime gap
• Rule 90, a cellular automaton that controls the behavior of the parts of the rows that contain only the values 0 and 2

References

1. Caldwell, Chris. "The Prime Glossary: Gilbreath's conjecture". The Prime Pages. Archived from the original on 2012-03-24. Retrieved 2008-03-07.
2. Odlyzko, A. M. (1993). "Iterated absolute values of differences of consecutive primes". Mathematics of Computation. 61 (203): 373–380. doi:10.2307/2152962. JSTOR 2152962. Zbl 0781.11037. Archived from the original on 2011-09-27. Retrieved 2006-05-25.
3. Gardner, Martin (December 1980). "Patterns in primes are a clue to the strong law of small numbers". Mathematical Games. Scientific American. Vol. 243, no. 6. pp. 18–28.
4. Guy, Richard K. (2004). Unsolved Problems in Number Theory. Problem Books in Mathematics (3rd ed.). Springer-Verlag. p. 42. ISBN 0-387-20860-7. Zbl 1058.11001.
5. Darling, David (2004). "Gilbreath's conjecture". The Universal Book of Mathematics: From Abracadabra to Zeno's Paradoxes. John Wiley & Sons. pp. 133–134. ISBN 9780471667001. Archived from the original on 2016-05-05. Retrieved 2015-04-21.
6. Eppstein, David (February 20, 2011). "Anti-Gilbreath sequences". 11011110. Archived from the original on April 12, 2017. Retrieved April 12, 2017.
7. Chase, Zachary (2023). "A random analogue of Gilbreath's conjecture". Math. Ann. arXiv:2005.00530. doi:10.1007/s00208-023-02579-w.
Multiphase flow DNS

Direct numerical simulations (DNS) and large eddy simulations (LES): the point-particle assumption. The design of new nuclear reactors, and the safe, efficient operation of existing reactors, can benefit from fundamental understanding of the bubbly two-phase flows created as the water boils. This article focuses on a subset of multiphase flows called particle-laden suspensions involving nondeforming particles in a carrier fluid. Furthermore, this initial period becomes more significant with increasing Jakob number. The article concludes with a summary perspective on the importance of integrating theoretical, modeling, computational, and experimental efforts at different scales. In particular, the subject of interest is a system in which the carrier fluid is a liquid that transports dispersed gas bubbles.

[Figure credit: F. Shaffer, B. Gopalan, R. W. Breault, R. Cocco, S. R. Karri, R. Hays, and T. Knowlton, "High speed imaging of particle flow fields in CFB risers," copyright (2013), reproduced with permission from Elsevier.]

A persistent effort of our group has been to learn about the numerical pitfalls of existing methods and also to develop a scalable, useful, and robust solver for phase change. If the density of a material particle does not change, we have incompressible flow. This was a finite difference approach to the problem with a uniform, orthogonal computational grid.

[Figure: representation of a particle-laden mixing layer in a computational domain.]

DNS studies aimed at solving flows undergoing phase change commonly make the following two assumptions: i) a constant interface temperature, and ii) an incompressible flow treatment in both the gas and liquid regions, with the exception of the interface. Theoretical formulations to represent, explain, and predict these phenomena encounter peculiar challenges that multiphase flows pose for classical statistical mechanics.
The physical validity of these assumptions is examined in this work by studying a canonical, spherically symmetric bubble growth configuration, which is a popular validation exercise in DNS papers. Direct and continuous multiphase flow monitoring at the wellhead ensures greater measurement accuracy and eliminates the need for dedicated test lines and test separators.

[Figure: (a) the National Energy Technology Lab's Chemical Looping Reactor; (b), (c), (e) high-speed images of a section of the reactor at different magnifications [16] (APS Gallery of Fluid Motion); (d) VFEL simulation; (f) PR-DNS.]

In direct numerical simulations (DNS) of multiphase flows it is frequently found that features much smaller than the "dominant" flow scales emerge. Multiphase flows are flows with (finite-size) particles, droplets, or bubbles. In these lectures a relatively simple method to simulate the unsteady two-dimensional flow of two immiscible fluids, separated by a sharp interface, is introduced (see also Computational Methods for Multiphase Flow, Cambridge University Press, 2007). [Image courtesy of J. Capecelatro.]

Alternative theoretical formulations and extensions to current formulations are outlined as promising future research directions. Selected highlights of recent progress using PR-DNS to discover new multiphase flow physics and develop models are reviewed. Simulations of particle-laden turbulent flow are performed via direct numerical simulation (DNS) and large eddy simulation (LES) methods in the OpenFOAM software. The most accurate technique for these flows, Direct Numerical Simulation (DNS), captures all the length scales of turbulence in the flow.

[Figure: solution of an unsteady diffusion system in 1D and 2D, showing an accurately captured jump in temperature and its gradient.]

For incompressible flow the pressure is adjusted to enforce conservation of volume.
The first edition of Multiphase Flow with Droplets and Particles included a FORTRAN computer program for the multiphase flow of particles in a quasi-one-dimensional duct based on … Data generated by direct numerical simulations (DNS) of bubbly up-flow in a periodic vertical channel is used to generate closure relationships for a simplified two-fluid model for the average flow.

DNS of a turbulent multiphase Taylor-Green vortex. The training data for our model is generated from DNS of turbulent flows with bubbles, which provide complete information about the bubble trajectories and the underlying flow. (S. Vincent, Simulation of turbulent multiphase flows, Cargèse, France, 2-6 November 2015.) We focus on obtaining kinematic models for monodisperse systems, i.e., systems in which all the dispersed elements have the same size.

Desuperheating has widespread applications in desalination plants, power generation, food processing, and petrochemical fields. In the present work, an analytical expression is developed for the mass loading limit, defined as the limit beyond which liquid is unable to be vaporized in a general desuperheating system. Results show that DNS predictions are inaccurate during the initial period of bubble growth, which coincides with the inertial growth stage.

DNS for Multiphase Flow Model Generation and Validation. Multiphase flow simulations make for often striking visuals.
The Multiphase and Wetgas meters apply a combination of electrical impedance measurements with cross correlation for velocity measurements. Multiphase models and applications ... Slide (NTEC 2014): slug flow in interconnected subchannels; calculation grid of 204,512 cells; 18.7 mm channel; water inlet 0.23 m/s, air inlets 2.0 m/s and 0.5 m/s. A critical perspective on outstanding questions and potential limitations of PR-DNS for model development is provided. Numerical Methods for Multiphase Flow. Multiphase flow systems are a critical element of many industrial processes as they constitute the medium through which basic ingredients are processed to yield the final product(s). In the context of multiphase flows (the Computational Multi-Fluid Dynamics, CMFD, field), DNS means that all the interfacial and turbulent scales of the phenomenon must be fully resolved. This work begins from acquiring the experience accumulated by former PhD students. Now our focus has shifted to a finite volume strategy that is more robust towards non-orthogonal, non-uniform grids, which is also one of the reasons that most commercial fluid dynamics codes such as Fluent, Converge, and Star CCM+ use the finite volume method. This article appears in the following collection: Physical Review Fluids publishes a collection of papers associated with the invited talks presented at the 72nd Annual Meeting of the APS Division of Fluid Dynamics. This study presents two different machine learning approaches for the modeling of hydrodynamic force on particles in a particle-laden multiphase flow. Results from particle-resolved direct numerical simulations (PR-DNS) of flow over a random array of stationary particles for eight combinations of particle Reynolds number (Re) and volume fraction (φ) … (a) Initial configuration. 
We recently published the details of a solver developed using a sharp numerical scheme based on a high-order accurate level-set method. • Flow regime, e.g. This thesis deals with numerical simulation methods for multiphase flows where different fluid phases are simultaneously present. The results indicate that for early times, and particularly as the Jakob number increases (more pronounced vaporization), the common assumptions inherited in the Scriven solution and adopted in various computations become invalid. DNS of Multiphase Flows. Multiphase flows are everywhere: rain, air/ocean interactions, combustion of liquid fuels, boiling in power plants, refrigeration, blood. Research into multiphase flows is usually driven by "big" needs: early steam generation, nuclear power, space exploration, oil extraction, chemical processes. Many new processes depend on multiphase flows, such as cooling of electronics, additive manufacturing, carbon sequestration, etc. The reference solutions that are used to examine DNS results are based on a compressible saturated treatment of the bubble contents, coupled to a generalized form of the Rayleigh-Plesset equation, and an Arbitrary-Lagrangian-Eulerian solution of the liquid phase energy equation. (b) Initial particle number density profile. 
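The generalized Rayleigh-Plesset equation mentioned above can be illustrated in its classical incompressible, isothermal form; the sketch below integrates it with SciPy. All physical parameters (a water-like liquid, a 100 μm bubble, a polytropic gas law) are illustrative assumptions, not values taken from the work under discussion.

```python
# Classical (incompressible, isothermal) Rayleigh-Plesset equation for a
# spherical gas bubble; a sketch only -- all parameter values are assumed
# for illustration, not taken from the paper discussed above.
from scipy.integrate import solve_ivp

rho   = 1000.0     # liquid density [kg/m^3]
mu    = 1.0e-3     # liquid dynamic viscosity [Pa s]
sigma = 0.072      # surface tension [N/m]
p_inf = 101325.0   # far-field pressure [Pa]
R0    = 1.0e-4     # equilibrium bubble radius [m]
p_g0  = p_inf + 2.0 * sigma / R0   # gas pressure balancing p_inf at R0
kappa = 1.4                        # polytropic exponent (adiabatic)

def rhs(t, y):
    R, Rdot = y
    p_b = p_g0 * (R0 / R) ** (3.0 * kappa)      # polytropic gas pressure
    Rddot = ((p_b - p_inf - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

# Start slightly compressed and at rest, so the bubble oscillates about R0.
sol = solve_ivp(rhs, (0.0, 5.0e-5), [0.9 * R0, 0.0], rtol=1e-9, atol=1e-12)
R = sol.y[0]
print(R.min() / R0, R.max() / R0)
```

The compressible saturated treatment in the reference solution replaces the polytropic closure above with a thermodynamically consistent gas model, but the mechanical structure of the equation is the same.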
This interest arises from the diversity of applications that can benefit from accurate simulations of boiling or condensation processes, but also because the conservation laws at the interface introduce interesting and challenging computational problems, such as: These effects would be easy to capture if infinitesimal numerical resolution were available to track the motion of an interface and then exactly replicate the behavior of the underlying differential equations. Furthermore, the numerical findings presented in terms of streamwise profiles of mean droplet diameter, average vapor temperature, vapor-droplet slip velocity, and liquid mass show that the desuperheating process can be described as taking place in two distinct zones. A closed-form expression for a threshold time is derived, beyond which the commonly employed DNS assumptions hold. Simulating Multiphase Flows Using a Front-Tracking/Finite-Volume Method. In traditional DNS the goal is to examine the flow over a sufficiently large range of scales so that it is possible to infer how the collective motion of well-resolved bubbles … The hydrodynamic interactions in these flows result in rich multiscale physics, such as clustering and pseudo-turbulence, with important practical implications. Both images show a close-up view of the thermal sleeve region and the main pipe section and clearly illustrate the reduction in local vapor temperature coincident with the spray plume. Multiphase flow codes developed in various stages at UC Irvine and UDel (including DNS, LBM and LES solvers). This radius together with a corresponding Scriven-based temperature profile provides appropriate initial conditions such that DNS treatment based on the aforementioned assumptions remains valid over a broad range of operating conditions. 
We adopt the Eulerian approach because we focus our attention on dispersed (concentration smaller than 0.001) and small particles (the Stokes number has to be smaller than 0.2). (a) An image from high-speed video of a riser flow showing the complex hydrodynamics and multiscale features of the particle-laden suspension. B. Aboulhasanzadeh, S. Thomas, J. Lu and G. Tryggvason. "Capturing Subgrid Physics in DNS of Multiphase Flows." Development of a stable finite-volume solver for phase change can prove to be an important step. NURETH-14: The 14th International Topical Meeting on Nuclear Reactor Thermalhydraulics. It is also prevalent in many natural phenomena. An abrupt change in bulk velocity between the two phases at the interface, and a modified interfacial energy balance due to latent heat release/absorption. Tryggvason, Gretar, and Aboulhasanzadeh, Bahman. The simulations of the particle phase are performed in Matlab and CFDEM. Understanding multiphase flows is vital to addressing some of our most pressing human needs: clean air, clean water, and the sustainable production of food and energy. Examples include two-phase flows of gas-solid, gas-liquid or liquid-solid, and three-phase flows of gas-liquid-solid. Of natural gas-liquid multiphase flows, rain is perhaps the experience that (a) Initial configuration. Schematic showing the intersection of solid particles with the measurement region. 
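The dilute point-particle regime quoted above (concentration below 0.001, Stokes number below 0.2) is easy to check for a given particle population. A minimal sketch using the standard Stokes-drag response time; all numerical values are assumed examples, not data from the text:

```python
# Point-particle regime check: dilute (volume fraction < 1e-3) and low
# inertia (St < 0.2). Uses the standard Stokes-drag response time
# tau_p = rho_p * d_p**2 / (18 * mu); all numbers are assumed examples.
def stokes_number(rho_p, d_p, mu, tau_f):
    tau_p = rho_p * d_p ** 2 / (18.0 * mu)   # particle response time [s]
    return tau_p / tau_f

rho_p = 2500.0    # particle density [kg/m^3] (glass-like)
d_p   = 20e-6     # particle diameter [m]
mu    = 1.8e-5    # gas dynamic viscosity [Pa s]
tau_f = 0.05      # characteristic fluid time scale [s]

St = stokes_number(rho_p, d_p, mu, tau_f)
print(f"St = {St:.4f}")   # about 0.062, inside the point-particle regime
```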
It has direct applications in many industrial processes including riser reactors, bubble column reactors, fluidized bed reactors, dryers, and … Microfluidics - flow induced by beating (artificial) cilia. Why DNS? Multiscale Issues in DNS of Multiphase Flows. Numerical techniques - Direct Numerical Simulations (DNS) and Large-Eddy Simulations (LES). In direct numerical simulations (DNS) of multiphase flows the dominant scale generally sets the resolution requirement. For many multiphase flow problems, direct numerical simulations of large systems have become routine. bubbly flow, slug flow, annular flow, etc. For isothermal flow as we will be - Flows through porous media and along porous/permeable walls. DNS of Multiphase Flows. The flow is predicted using the governing physical principles: conservation of mass. Those features consist of thin films, filaments, drops, and boundary layers, and usually surface tension is strong so the geometry is simple. Here we primarily consider coupling to a Reynolds-averaged Navier-Stokes (RANS) solver, although many of the modeling considerations are equally applicable to LES or DNS coupling as well. This circumvents the continuity issue faced due to a sudden jump of the underlying quantities for which spatial derivatives are needed. In this paper we present three multiphase flow models suitable for the study of the dynamics of compressible dispersed multiphase flows. Desuperheating is essential for systems which need to regulate the temperature of superheated steam and is often used to protect downstream piping and equipment. 
The flow solver is an explicit projection finite-volume method, third order in time and second order in space, and the interface motion is computed using a … The development of numerical methods for two-phase flow with the capability to handle interfacial mass transfer due to phase change has been the subject of wide interest in recent years. Toronto, Sept. 25-30, 2011. • Multiscale multiphase flow • Turbulence: DNS (turbulence, interface) impossible. A key idea in our implementation is to apply the interfacial boundary conditions, which undergo a sudden jump in values, using the ghost fluid method. Numerical methods for dispersed multiphase flows (RANS-type methods): Reynolds-averaged conservation equations with turbulence model, point-particle assumption; mixture models. Multiphase flow regimes • User must know a priori the characteristics of the flow. The region of space occupied by the solids is hatched with vertical lines. Shear breakup of drops, bubble-induced drag reduction, dependency of lift on bubble formation, void fraction distribution in bubbly flows. Figure: The bubble radius is shown as predicted by the Scriven solution, our compressible saturated vapor model, and experimental results. Direct Numerical Simulation (DNS) serves as an irreplaceable tool to probe the complexities of multiphase flow and identify turbulent mechanisms that elude conventional experimental measurement techniques. 
DNS studies aimed at solving flows undergoing phase change commonly make the following two assumptions: i) a constant interface temperature and ii) an incompressible flow treatment in both the gas and liquid regions, with the exception of the interface. Figure: Results corresponding to the 50% mass loading case showing the averaged temperature field in (a) and instantaneous spray droplets colored by slip velocity in (b). • Predicting the transition from one regime to another is possible only if the flow regimes can be predicted by the same model. For a fairly detailed treatment of DNS of multiphase flows, including both a description of numerical methods and a survey of results, we suggest … A critical analysis of existing approaches leads to the identification of key desirable characteristics that a formulation must possess in order to be successful at representing these physical phenomena. In the second zone, which resides beyond the near-field, the desuperheating process displays a significantly reduced degree of vaporization, a near-equilibration of phasic velocities, and a milder change in the vapor temperature along the streamwise direction. We apply these models to the compressible (Ma = 0.2, 0.5) … CTFLab is a research laboratory led by Prof. Olivier Desjardins in the Sibley School of Mechanical and Aerospace Engineering at Cornell University. Note that this is simply a fictitious ghost phase that is assumed. This limit is subsequently compared to predictions originating from 3D numerical simulations based on a Lagrangian-Eulerian framework in combination with a RANS treatment for the vapor phase. This is not always the case. 
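Assumption i) above, a constant (saturation) interface temperature, amounts to a Dirichlet condition on the liquid-side temperature field. A minimal 1D explicit-diffusion sketch of that boundary treatment; every parameter is an assumed, illustrative value:

```python
# Assumption i): interface held at saturation temperature while heat
# diffuses through the superheated liquid. 1D explicit scheme; every
# parameter below is an assumed, illustrative value.
import numpy as np

alpha = 1.4e-7                 # liquid thermal diffusivity [m^2/s]
L     = 1.0e-3                 # liquid-side domain length [m]
N     = 101
dx    = L / (N - 1)
dt    = 0.4 * dx**2 / alpha    # stable explicit step (r = 0.4 < 0.5)

T_sat  = 373.15                # fixed interface temperature (assumption i)
T_bulk = 378.15                # superheated bulk liquid

T = np.full(N, T_bulk)
T[0] = T_sat
for _ in range(2000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = T_sat, T_bulk   # re-impose Dirichlet conditions
print(T[0], T[1], T[-1])
```

The resulting monotone thermal boundary layer near the interface is the discrete analogue of the Scriven-type temperature profile mentioned elsewhere in the text.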
Proceedings of the ASME 2013 Fluids Engineering Division Summer Meeting. The computations show that even cases having much smaller mass loadings than the theoretical limit yield significant accumulation of liquid along the walls. • Only model one flow regime at a time.
Principal ideal domain In mathematics, a principal ideal domain, or PID, is an integral domain in which every ideal is principal, i.e., can be generated by a single element. More generally, a principal ideal ring is a nonzero commutative ring whose ideals are principal, although some authors (e.g., Bourbaki) refer to PIDs as principal rings. The distinction is that a principal ideal ring may have zero divisors whereas a principal ideal domain cannot. Principal ideal domains are thus mathematical objects that behave somewhat like the integers, with respect to divisibility: any element of a PID has a unique decomposition into prime elements (so an analogue of the fundamental theorem of arithmetic holds); any two elements of a PID have a greatest common divisor (although it may not be possible to find it using the Euclidean algorithm). If x and y are elements of a PID without common divisors, then every element of the PID can be written in the form ax + by. Principal ideal domains are Noetherian, they are integrally closed, they are unique factorization domains and Dedekind domains. All Euclidean domains and all fields are principal ideal domains. 
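In the prototypical PID Z, the property quoted above (any element of the ideal generated by coprime x and y has the form ax + by) is effective: the extended Euclidean algorithm produces the coefficients. A short illustrative sketch:

```python
# In Z the Bezout property is effective: extended Euclid returns (g, a, b)
# with a*x + b*y == g == gcd(x, y).
def extended_gcd(x, y):
    old_r, r = x, y
    old_a, a = 1, 0
    old_b, b = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_a, a = a, old_a - q * a
        old_b, b = b, old_b - q * b
    return old_r, old_a, old_b

g, a, b = extended_gcd(240, 46)
print(g, a, b)   # a*240 + b*46 == g == 2
```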
Principal ideal domains appear in the following chain of class inclusions: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields. Examples include: • $K$: any field, • $\mathbb {Z} $: the ring of integers,[1] • $K[x]$: rings of polynomials in one variable with coefficients in a field. (The converse is also true, i.e. if $A[x]$ is a PID then $A$ is a field.) Furthermore, a ring of formal power series in one variable over a field is a PID since every ideal is of the form $(x^{k})$, • $\mathbb {Z} [i]$: the ring of Gaussian integers,[2] • $\mathbb {Z} [\omega ]$ (where $\omega $ is a primitive cube root of 1): the Eisenstein integers, • Any discrete valuation ring, for instance the ring of p-adic integers $\mathbb {Z} _{p}$. Non-examples. Examples of integral domains that are not PIDs: • $\mathbb {Z} [{\sqrt {-3}}]$ is an example of a ring which is not a unique factorization domain, since $4=2\cdot 2=(1+{\sqrt {-3}})(1-{\sqrt {-3}}).$ Hence it is not a principal ideal domain because principal ideal domains are unique factorization domains. Also, $\langle 2,1+{\sqrt {-3}}\rangle $ is an ideal that cannot be generated by a single element. 
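For Z[i] the PID property can be seen concretely: the ring is Euclidean for the norm N(a + bi) = a^2 + b^2, since dividing exactly and rounding each coordinate leaves a remainder of strictly smaller norm. A small sketch of this division and the resulting Euclidean algorithm, representing a + bi as a pair of ints:

```python
# Z[i] is Euclidean for N(a+bi) = a^2 + b^2: divide exactly, round each
# coordinate, and the remainder has norm at most N(y)/2 < N(y).
def g_div(x, y):
    """Return (q, r) with x = q*y + r and N(r) < N(y)."""
    a, b = x
    c, d = y
    n = c * c + d * d
    # exact quotient: (a+bi)/(c+di) = ((a*c + b*d) + (b*c - a*d)i) / n
    q = (round((a * c + b * d) / n), round((b * c - a * d) / n))
    qy = (q[0] * c - q[1] * d, q[0] * d + q[1] * c)
    return q, (a - qy[0], b - qy[1])

def g_gcd(x, y):
    """Euclidean algorithm in Z[i]; the result generates the ideal <x, y>."""
    while y != (0, 0):
        _, r = g_div(x, y)
        x, y = y, r
    return x

print(g_gcd((11, 3), (1, 8)))   # a generator of <11+3i, 1+8i>, up to units
```

Because remainders shrink in norm, the algorithm terminates, and the final value generates the ideal spanned by the two inputs, which is exactly why every ideal of Z[i] is principal.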
• $\mathbb {Z} [x]$: the ring of all polynomials with integer coefficients. It is not principal because $\langle 2,x\rangle $ is an ideal that cannot be generated by a single polynomial. • $K[x,y,\ldots ],$ the ring of polynomials in at least two variables over a ring K is not principal, since the ideal $\langle x,y\rangle $ is not principal. • Most rings of algebraic integers are not principal ideal domains. This is one of the main motivations behind Dedekind's definition of Dedekind domains, which allows replacing unique factorization of elements with unique factorization of ideals. In particular, many $\mathbb {Z} [\zeta _{p}],$ for the primitive p-th root of unity $\zeta _{p},$ are not principal ideal domains.[3] The class number of a ring of algebraic integers gives a measure of "how far away" the ring is from being a principal ideal domain. Modules Main article: Structure theorem for finitely generated modules over a principal ideal domain The key result is the structure theorem: If R is a principal ideal domain, and M is a finitely generated R-module, then $M$ is a direct sum of cyclic modules, i.e., modules with one generator. The cyclic modules are isomorphic to $R/xR$ for some $x\in R$[4] (notice that $x$ may be equal to $0$, in which case $R/xR$ is $R$). If M is a free module over a principal ideal domain R, then every submodule of M is again free.[5] This does not hold for modules over arbitrary rings, as the example $(2,X)\subseteq \mathbb {Z} [X]$ of modules over $\mathbb {Z} [X]$ shows. Properties In a principal ideal domain, any two elements a,b have a greatest common divisor, which may be obtained as a generator of the ideal (a, b). All Euclidean domains are principal ideal domains, but the converse is not true. 
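Over Z, the structure theorem above is algorithmic: putting a relation matrix into Smith normal form reads off the cyclic decomposition. The sketch below assumes SymPy's `smith_normal_form` helper is available; the relation matrix is an arbitrary illustrative example, not one taken from the text.

```python
# Structure theorem over the PID Z, computed via Smith normal form.
# Assumes SymPy's smith_normal_form helper; the relation matrix is an
# arbitrary illustrative example.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[ 2,  4,   4],
            [-6,  6,  12],
            [10, -4, -16]])
S = smith_normal_form(M, domain=ZZ)
# Z^3 / (rows of M) is isomorphic to Z/d1 + Z/d2 + Z/d3 with d1 | d2 | d3.
print(S)
```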
An example of a principal ideal domain that is not a Euclidean domain is the ring $\mathbb {Z} \left[{\frac {1+{\sqrt {-19}}}{2}}\right].$[6][7] In this domain no q and r exist, with 0 ≤ |r| < 4, so that $(1+{\sqrt {-19}})=(4)q+r$, despite $1+{\sqrt {-19}}$ and $4$ having a greatest common divisor of 2. Every principal ideal domain is a unique factorization domain (UFD).[8][9][10][11] The converse does not hold since for any UFD K, the ring K[X, Y] of polynomials in 2 variables is a UFD but is not a PID. (To prove this look at the ideal generated by $\left\langle X,Y\right\rangle .$ It is not the whole ring since it contains no polynomials of degree 0, but it cannot be generated by any single element.) 1. Every principal ideal domain is Noetherian. 2. In all unital rings, maximal ideals are prime. In principal ideal domains a near converse holds: every nonzero prime ideal is maximal. 3. All principal ideal domains are integrally closed. The previous three statements give the definition of a Dedekind domain, and hence every principal ideal domain is a Dedekind domain. Let A be an integral domain. Then the following are equivalent. 1. A is a PID. 2. Every prime ideal of A is principal.[12] 3. A is a Dedekind domain that is a UFD. 4. Every finitely generated ideal of A is principal (i.e., A is a Bézout domain) and A satisfies the ascending chain condition on principal ideals. 5. A admits a Dedekind–Hasse norm.[13] Any Euclidean norm is a Dedekind–Hasse norm; thus, (5) shows that a Euclidean domain is a PID. (4) compares to: • An integral domain is a UFD if and only if it is a GCD domain (i.e., a domain where every two elements have a greatest common divisor) satisfying the ascending chain condition on principal ideals. An integral domain is a Bézout domain if and only if any two elements in it have a gcd that is a linear combination of the two. A Bézout domain is thus a GCD domain, and (4) gives yet another proof that a PID is a UFD. 
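For the Euclidean domain K[x] (K a field), the gcd-as-linear-combination property discussed around (4) is computed by the extended Euclidean algorithm for polynomials. A quick sketch with SymPy's `gcdex`; the example polynomials are arbitrary:

```python
# Extended Euclidean algorithm in Q[x]: writes gcd(f, g) as s*f + t*g.
# Example polynomials are arbitrary; gcd(f, g) = x - 1 here.
from sympy import symbols, gcdex, simplify, rem

x = symbols('x')
f = x**4 - 1              # (x - 1)(x + 1)(x^2 + 1)
g = x**2 - 3*x + 2        # (x - 1)(x - 2)

s, t, h = gcdex(f, g, x)  # s*f + t*g == h, with h the monic gcd
residual = simplify(s*f + t*g - h)
r_f, r_g = rem(f, h, x), rem(g, h, x)   # h divides both f and g
print(h, residual)
```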
See also • Bézout's identity Notes 1. See Fraleigh & Katz (1967), p. 73, Corollary of Theorem 1.7, and notes at p. 369, after the corollary of Theorem 7.2 2. See Fraleigh & Katz (1967), p. 385, Theorem 7.8 and p. 377, Theorem 7.4. 3. Milne. "Algebraic Number Theory" (PDF). p. 5. 4. See also Ribenboim (2001), p. 113, proof of lemma 2. 5. Lecture 1. Submodules of Free Modules over a PID math.sc.edu Retrieved 31 March 2023 6. Wilson, Jack C. "A Principal Ring that is Not a Euclidean Ring." Math. Mag 46 (Jan 1973) 34-38 7. George Bergman, A principal ideal domain that is not Euclidean - developed as a series of exercises PostScript file 8. Proof: every prime ideal is generated by one element, which is necessarily prime. Now refer to the fact that an integral domain is a UFD if and only if its prime ideals contain prime elements. 9. Jacobson (2009), p. 148, Theorem 2.23. 10. Fraleigh & Katz (1967), p. 368, Theorem 7.2 11. Hazewinkel, Gubareni & Kirichenko (2004), p.166, Theorem 7.2.1. 12. "T. Y. Lam and Manuel L. Reyes, A Prime Ideal Principle in Commutative Algebra" (PDF). Archived from the original (PDF) on 26 July 2010. Retrieved 31 March 2023. 13. Hazewinkel, Gubareni & Kirichenko (2004), p.170, Proposition 7.3.3. References • Michiel Hazewinkel, Nadiya Gubareni, V. V. Kirichenko. Algebras, rings and modules. Kluwer Academic Publishers, 2004. ISBN 1-4020-2690-0 • John B. Fraleigh, Victor J. Katz. A first course in abstract algebra. Addison-Wesley Publishing Company. 5 ed., 1967. ISBN 0-201-53467-3 • Nathan Jacobson. Basic Algebra I. Dover, 2009. ISBN 978-0-486-47189-1 • Paulo Ribenboim. Classical theory of algebraic numbers. Springer, 2001. ISBN 0-387-95070-2 External links • Principal ring on MathWorld
\begin{document} \baselineskip=17pt \title{ Calculating relative power integral bases \ in totally complex quartic extensions of totally real fields } \thispagestyle{empty} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \noindent Mathematics Subject Classification: Primary 11Y50; Secondary 11R04, 11D57, 11D59.\\ Key words and phrases: power integral basis; calculating solutions of index form equations; relative quartic extensions; unit equation; Thue equation \begin{abstract} Some time ago we extended our monogenity investigations and calculations of generators of power integral bases to the relative case, cf. \cite{book}, \cite{gsz13}, \cite{grsz}. Up to now we considered (usually totally real) extensions of complex quartic fields. In the present paper we consider power integral bases in relative extensions of totally real fields. Totally complex quartic extensions of totally real number fields seem to be the simplest case, which we detail here. As we shall see, even in this case we have to overcome several unexpected difficulties, which we can, however, solve by properly (but not trivially) adjusting standard methods. We demonstrate our general algorithm on an explicit example. We describe how the general methods for solving relative index form equations in quartic relative extensions are modified in this case. As a byproduct we show that relative Thue equations in totally complex extensions of totally real fields can only have small solutions, and we construct a special method for the enumeration of small solutions of special unit equations. These statements can be applied to other diophantine problems as well. \end{abstract} \section{Introduction} Monogenity of number fields has an extensive literature, cf. \cite{nark}, \cite{book}. 
A number field $K$ of degree $n$ is monogenic if its ring of integers ${\mathbb Z}_K$ is a simple ring extension of ${\mathbb Z}$, that is if there exists $\alpha\in{\mathbb Z}_K$ such that ${\mathbb Z}_K={\mathbb Z}[\alpha]$. In this case $(1,\alpha,\ldots,\alpha^{n-1})$ is an integral basis of $K$, called a power integral basis. In the relative case, if $K$ is an extension field of $L$ with $[K:L]=n$, $K$ is (relatively) monogenic over $L$ if ${\mathbb Z}_K$ is a simple ring extension of the ring of integers ${\mathbb Z}_L$ of $L$, that is if there exists $\alpha\in{\mathbb Z}_K$ with ${\mathbb Z}_K={\mathbb Z}_L[\alpha]$ (see \cite{book}). In this case $(1,\alpha,\ldots,\alpha^{n-1})$ is a relative integral basis of $K$ over $L$, called a relative power integral basis. If $K$ is monogenic, then it is relatively monogenic over its subfield $L$, cf. \cite{grsz}. There are efficient algorithms for calculating generators of power integral bases in lower degree number fields, up to degree 5 (see \cite{book}), but there is no general efficient algorithm for number fields of arbitrary degree. These algorithms are especially efficient for cubic and quartic number fields, when the problem can be reduced to the resolution of (one or more) Thue equations. Some time ago we extended our investigations to the relative case, cf. \cite{book}, \cite{gsz13}, \cite{grsz}. At first we studied extensions of complex quadratic fields. In the present paper we consider relative extensions of totally real number fields (of arbitrary degrees). The simplest case seems to be that of relative quartic extensions, when we can follow the arguments of \cite{gprel4}, reducing the calculation of generators of relative power integral bases to solving relative Thue equations. Especially, if $K$ is a totally complex quartic extension of the totally real number field $L$, then, at first glance, the calculation seems to be easy. 
Trying to perform the calculation for an explicit example we meet some unexpected problems. The resolution of the cubic Thue equation (according to \cite{gprel4}) is not trivial, even the standard enumeration algorithm has to be adjusted properly. Also, we observe that if $K$ is a totally complex quartic extension of a totally real number field $L$, then the quartic relative Thue equation to be solved by \cite{gprel4} is trivial, having only small solutions (cf. Theorem \ref{thth}). These observations might well be applied also in several other calculations. We illustrate our paper by an explicit example that gives a direct insight into the calculations. \section{The general scheme for relative quartic extensions} We recall the general method of \cite{gprel4} which reduces the calculation of generators of relative power integral bases in quartic relative extensions to the resolution of relative Thue equations. In the following we denote by ${\mathbb Z}_K$ the ring of algebraic integers of any number field $K$. Let $M$ be a totally real number field of degree $m$. Assume $M={\mathbb Q}(\mu)$ with $\mu\in{\mathbb Z}_M$. Let $K=M(\xi)$ be a totally complex quartic extension of $M$. Denote by \[ f(x)=x^4+a_1x^3+a_2x^2+a_3x+a_4\in {\mathbb Z}_M[x] \] the relative defining polynomial of $\xi$ over $M$. Assume that $K$ has a relative integral basis over $M$. Our purpose is to determine all generators $\alpha\in{\mathbb Z}_K$ of relative power integral bases of $K$ over $M$. That is, we consider $\alpha\in {\mathbb Z}_K$ such that $(1,\alpha,\alpha^2,\alpha^3)$ is a relative integral basis of $K$ over $M$. Such an $\alpha$ has relative index \[ I_{K/M}(\alpha)=({\mathbb Z}_K^+:{\mathbb Z}_M[\alpha]^+)=1 \] (see \cite{grsz}). According to \cite{gprel4} $\alpha$ can be written in the form \begin{equation} \alpha=\frac{A+X\xi+Y\xi^2+Z\xi^3}{d} \label{alpha} \end{equation} where $A,X,Y,Z\in{\mathbb Z}_M$, and $d\in{\mathbb Z}$ is a non-zero common denominator. 
Set \[ F(U,V)=U^3-a_2U^2V+(a_1a_3-4a_4)UV^2+(4a_2a_4-a_3^2-a_1^2a_4)V^3, \] \[ Q_1(X,Y,Z)=X^2-a_1XY+a_2Y^2+(a_1^2-2a_2)XZ+(a_3-a_1a_2)YZ+(-a_1a_3+a_2^2+a_4)Z^2, \] \[ Q_2(X,Y,Z)=Y^2-XZ-a_1YZ+a_2Z^2. \] Let $i_0=I_{K/M}(\xi)$. Recall the main result of \cite{gprel4}: \begin{lemma} \label{lemma1} $\alpha\in{\mathbb Z}_K$ generates a relative power integral basis of $K$ over $M$ if and only if there exist $U,V\in{\mathbb Z}_M$ such that \begin{equation} N_{M/{\mathbb Q}}(F(U,V))=\frac{d^{6m}}{i_0}, \label{F} \end{equation} with \begin{equation} Q_1(X,Y,Z)=U,\;\;\; Q_2(X,Y,Z)=V. \label{Q12} \end{equation} \end{lemma} Note that Lemma \ref{lemma1} enables us to determine finitely many $(X,Y,Z)\in{\mathbb Z}_M^3$, such that all possible generators of relative power integral bases of $K$ over $M$ are of the form \begin{equation} \alpha=\frac{A+\varepsilon(X\xi+Y\xi^2+Z\xi^3)}{d}, \label{axyz} \end{equation} where $\varepsilon$ is a unit in $M$ and $A\in{\mathbb Z}_M$ is arbitrary (such that $\alpha\in{\mathbb Z}_K$). \section{Solving the cubic relative Thue equation over $M$} \label{cubiceq} To apply Lemma \ref{lemma1} we first have to determine the solutions $U,V\in{\mathbb Z}_M$ of equation (\ref{F}). By (\ref{F}) we have \begin{equation} F(U,V)=\varepsilon\cdot \nu \label{fuv} \end{equation} where $\varepsilon$ is a unit in $M$ and $\nu\in{\mathbb Z}_M$ of norm $d^{6m}/i_0$. As it is known, up to associates there are only finitely many possible values of $\nu$ that can be determined by an algebraic number theory package like Kash \cite{kash}, Magma \cite{magma} or Pari \cite{pari}. \cite{gp} gives an algorithm for the resolution of relative Thue equations. However, here we would like to emphasize some special features of this calculation. There are three possible cases according to the factorization of $F$:\\ A) $F$ splits into linear factors over $M$,\\ B) $F$ is irreducible over $M$,\\ C) $F$ is a product of a linear and a quadratic factor over $M$. 
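With the convention $f(x)=x^4+a_1x^3+a_2x^2+a_3x+a_4$ used in the formulas above, $F(U,1)$ is the classical cubic resolvent of $f$, whose roots are $x_1x_2+x_3x_4$, $x_1x_3+x_2x_4$, $x_1x_4+x_2x_3$ in terms of the roots $x_i$ of $f$. A quick numerical sanity check, with arbitrarily chosen coefficients:

```python
# Sanity check: F(U, 1) is the cubic resolvent of
# f(x) = x^4 + a1 x^3 + a2 x^2 + a3 x + a4, with roots
# x1*x2 + x3*x4, x1*x3 + x2*x4, x1*x4 + x2*x3.
# The coefficients below are arbitrary test values.
import numpy as np

a1, a2, a3, a4 = 1.0, -3.0, 2.0, 5.0
x1, x2, x3, x4 = np.roots([1.0, a1, a2, a3, a4])

def F(U, V=1.0):
    return (U**3 - a2 * U**2 * V + (a1 * a3 - 4 * a4) * U * V**2
            + (4 * a2 * a4 - a3**2 - a1**2 * a4) * V**3)

candidates = [x1 * x2 + x3 * x4, x1 * x3 + x2 * x4, x1 * x4 + x2 * x3]
residuals = [abs(F(u)) for u in candidates]
print(residuals)   # all numerically zero
```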
We make some remarks on all the three cases, but we give a complete description only in the most interesting case C), with details of Baker's method, reduction and enumeration algorithms. \subsection{A) $F$ splits into linear factors over $M$} If \begin{equation} F(U,V)=(U-\lambda_1 V)(U-\lambda_2 V)(U-\lambda_3 V), \label{fuvs} \end{equation} with $\lambda_1, \lambda_2, \lambda_3\in{\mathbb Z}_M$, then (\ref{fuv}) implies \begin{equation} U-\lambda_i V=\delta_i\nu_i \;\;\; (i=1,2,3), \label{uuvv} \end{equation} with units $\nu_i\in M$ and with $\delta_i\in{\mathbb Z}_M$ such that $\delta_1\delta_2\delta_3=\nu$ (the norms of $\delta_i$ divide the norm of $\nu$). We use Siegel's identity \[ (\lambda_1-\lambda_2)(U-\lambda_3 V) +(\lambda_2-\lambda_3)(U-\lambda_1 V) +(\lambda_3-\lambda_1)(U-\lambda_2 V) =0, \] which holds for any $U,V$. This gives rise to a unit equation \begin{equation} \alpha X+\beta Y=1, \label{unit} \end{equation} with \[ \alpha=\frac{(\lambda_1-\lambda_2)\delta_3}{(\lambda_3-\lambda_2)\delta_1},\;\; \beta=\frac{(\lambda_3-\lambda_1)\delta_2}{(\lambda_3-\lambda_2)\delta_1}, \] where \[ X=\frac{\nu_3}{\nu_1},\;\; Y=\frac{\nu_2}{\nu_1} \] are unknown units. This is a standard unit equation over $M$ that can be solved using the standard methods, see \cite{book}. We represent $X$ and $Y$ as power products of the fundamental units of $M$ with unknown exponents. We apply Baker's method and the reduction method, and enumerate the small solutions \cite{book}. These procedures involve only the fundamental units of $M$. \subsection{B) $F$ is irreducible over $M$} If $F$ is irreducible over $M$ then it splits into linear factors over a cubic extension $L$ of $M$. That is, (\ref{fuvs}) holds, with $\lambda_1, \lambda_2, \lambda_3$ which are relative conjugates of $\lambda=\lambda_1\in{\mathbb Z}_L$ over $M$. (\ref{uuvv}) is valid with $\delta_i\in{\mathbb Z}_L$ and units $\nu_i\in L$ which are relative conjugates of $\delta,\nu\in L$ over $M$, respectively. 
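The Siegel identity used in case A) (and again below) is a pure polynomial identity in $U$, $V$ and the $\lambda_i$; a one-line symbolic check:

```python
# The Siegel identity is an algebraic identity: it expands to zero.
from sympy import symbols, expand

U, V, l1, l2, l3 = symbols('U V lambda_1 lambda_2 lambda_3')
expr = ((l1 - l2) * (U - l3 * V)
        + (l2 - l3) * (U - l1 * V)
        + (l3 - l1) * (U - l2 * V))
print(expand(expr))   # 0
```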
We obtain a unit equation (\ref{unit}) which is formally the same as above. However there is an important difference. By calculating ${\displaystyle X=\frac{\nu_3}{\nu_1},\;\; Y=\frac{\nu_2}{\nu_1} }$ only the relative units remain in the quotients. To explain this situation denote by $\varepsilon_1,\ldots,\varepsilon_r$ the fundamental units in $M$. For the simplicity of our formulas assume that a set of fundamental units of $L$ is obtained by extending this set with units $\eta_1,\ldots,\eta_s$ of $L$. That is, any unit in $L$ can be written as \[ \varepsilon=\varepsilon_1^{a_1}\cdots \varepsilon_r^{a_r} \eta_1^{b_1}\cdots \eta_s^{b_s}, \] with exponents $a_1,\ldots,a_r,b_1,\ldots,b_s$. (Note that also in the general case any unit can be written in a similar form, possibly with some common denominators in the exponents; from our point of view this can be dealt with analogously.) Let $\vartheta$ be a generating element of $L$ over $M$. Denote by $f\in{\mathbb Z}_M[x]$ the relative defining polynomial of $\vartheta$ over $M$. Denote by $\vartheta^{(ij)}$ ($j=1,2,3$) the roots of the $i$-th conjugate of $f$ over $M$ ($i=1,\ldots,m$). Denote by $\zeta^{(ij)}$ the conjugates of any $\zeta\in L$ corresponding to $\vartheta^{(ij)}$. For $\delta\in M$ we have $\delta^{(ij)}=\delta^{(i)},\; j=1,2,3$. As we have seen, in (\ref{unit}), $X$ and $Y$ contain quotients of relative conjugates of $\nu$ over $M$, where \[ \nu^{(ij)}=\left(\varepsilon_1^{(i)}\right)^{a_1} \cdots \left(\varepsilon_r^{(i)}\right)^{a_r} \left(\eta_1^{(ij)}\right)^{b_1}\cdots\left(\eta_s^{(ij)}\right)^{b_s}. \] This yields that \[ Y= \frac{\nu^{(ij_2)}}{\nu^{(ij_1)}}= \left(\frac{\eta_1^{(ij_2)}}{\eta_1^{(ij_1)}}\right)^{b_1}\cdots \left(\frac{\eta_s^{(ij_2)}}{\eta_s^{(ij_1)}}\right)^{b_s}, \] and similarly for $X$. Therefore we obtain a unit equation in $X$ and $Y$, both terms with $s$ factors and the same exponents. 
The standard arguments (Baker's method, reduction, enumeration) can be applied in the same way as in A), but with the above $s$ factors corresponding to the relative units, to calculate $b_1,\ldots,b_s$. This determines $U,V$ up to a unit factor in $M$. \subsection{C) $F$ is a product of a linear and a quadratic factor over $M$} The most interesting case is when $F$ is a product of a linear and a quadratic factor over $M$. Then we have \[ F(U,V)=(U-\lambda_1 V)(U^2+\lambda_2 UV+\lambda_3 V^2), \] with $\lambda_i\in {\mathbb Z}_M\; (i=1,2,3)$, where the second-degree factor is irreducible over $M$. Denote by $G=M(\gamma)$ ($\gamma\in{\mathbb Z}_G$) a quadratic extension of $M$ such that the quadratic factor of $F$ splits into linear factors over $G$. Denote by $\gamma^{(ij)}$ ($j=1,2$) the roots of the $i$-th conjugate of the relative defining polynomial of $\gamma$ over $M$ ($i=1,\ldots,m$). Denote by $\delta^{(ij)}$ the conjugates of any $\delta\in G$ corresponding to $\gamma^{(ij)}$ ($i=1,\ldots,m,\;j=1,2$). For $\zeta\in M$ we have $\zeta^{(ij)}=\zeta^{(i)},\;j=1,2$. Then we have \begin{equation} F^{(i)}(U,V)=(U-\lambda_1^{(i)} V)(U-\gamma^{(i1)}V)(U-\gamma^{(i2)}V). \label{Fi} \end{equation} By (\ref{F}) and (\ref{Fi}) we have \begin{eqnarray} U-\lambda_1^{(i)} V&=& \delta_M^{(i)}\nu_M^{(i)} \nonumber\\ U-\gamma^{(i1)}V&=&\delta_G^{(i1)}\nu_G^{(i1)}\label{xy}\\ U-\gamma^{(i2)}V&=&\delta_G^{(i2)}\nu_G^{(i2)}\nonumber \end{eqnarray} where $\delta_M\in {\mathbb Z}_M$, the norm of which divides $d^{6m}/i_0$, $\nu_M$ is a unit in $M$, $\delta_G\in {\mathbb Z}_G$, the norm of which divides $d^{6m}/i_0$, and $\nu_G$ is a unit in $G$. (Up to associates there are only a few possible values of $\delta_M,\delta_G$.)
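The splitting (\ref{Fi}) of the quadratic factor over $G$ is easy to sanity-check numerically; the $\lambda_i$ and the test point below are arbitrary placeholders:

```python
import cmath

# Arbitrary placeholder coefficients: the quadratic factor is irreducible
# over the reals here (negative discriminant), mimicking case C).
lam1, lam2, lam3 = 2.0, -3.0, 5.0
disc = cmath.sqrt(lam2 * lam2 - 4 * lam3)
g1, g2 = (-lam2 + disc) / 2, (-lam2 - disc) / 2   # roots of x^2 + lam2*x + lam3

U, V = 1.7, -0.9                                  # arbitrary test point
lhs = (U - lam1 * V) * (U * U + lam2 * U * V + lam3 * V * V)
rhs = (U - lam1 * V) * (U - g1 * V) * (U - g2 * V)
assert abs(lhs - rhs) < 1e-9   # F(U,V) agrees with its split form over G
```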
Siegel's identity gives \[ (\lambda_1^{(i)}-\gamma^{(i1)})(U-\gamma^{(i2)}V) +(\gamma^{(i1)}-\gamma^{(i2)})(U-\lambda_1^{(i)}V) +(\gamma^{(i2)}-\lambda_1^{(i)})(U-\gamma^{(i1)}V) =0, \] whence \begin{equation} \alpha X+\beta Y=1, \label{abxy} \end{equation} with \[ \alpha=\frac{(\lambda_1^{(i)}-\gamma^{(i1)})\delta_G^{(i2)}} {(\gamma^{(i2)}-\gamma^{(i1)})\delta_M^{(i)}}, \;\; \beta=\frac{(\gamma^{(i2)}-\lambda_1^{(i)})\delta_G^{(i1)}} {(\gamma^{(i2)}-\gamma^{(i1)})\delta_M^{(i)}}, \] and the unknown units are \[ X=\frac{\nu_G^{(i2)}}{\nu_M^{(i)}}, \;\; Y=\frac{\nu_G^{(i1)}}{\nu_M^{(i)}}. \] Observe that $X$ and $Y$ are conjugate over $M$, as are $\alpha$ and $\beta$. Let again $\varepsilon_1,\ldots,\varepsilon_r$ denote the fundamental units of $M$. For simplicity's sake assume that a system of fundamental units of $G$ is obtained by extending this system by some relative units $\eta_1,\ldots,\eta_s$. Set \[ X^{(ij)}=(\varepsilon_1^{(i)})^{a_1}\cdots (\varepsilon_r^{(i)})^{a_r}\cdot (\eta_1^{(ij)})^{b_1}\cdots (\eta_s^{(ij)})^{b_s}. \] Let $A=\max |a_i|, B=\max |b_i|, E=\max(A,B)$. We detail the application of Baker's method and of the reduction and enumeration algorithms in this case C). \subsubsection{Baker's method} We apply standard arguments. Since $|N_{G/{\mathbb Q}}(X)|=1$, we have $\sum_{i=1}^m\sum_{j=1}^2 \log |X^{(ij)}|=0$. Hence there exists a pair of indices $(i_0,j_0)$ with \[ \log |X^{(i_0,j_0)}|<- c_1 E, \] where $c_1$ is a positive constant that can be easily calculated. According to standard arguments (cf.
\cite{book}) we have \[ \exp(-c_1 E)> |X^{(i_0,j_0)}|= \frac{1}{|\alpha^{(i_0,j_0)}|}|1-\beta^{(i_0,j_0)}Y^{(i_0,j_0)}| \geq \frac{1}{2|\alpha^{(i_0,j_0)}|} |\log|\beta^{(i_0,j_0)}Y^{(i_0,j_0)}|| \] \begin{equation} \geq \frac{1}{2|\alpha^{(i_0,j_0)}|} |\log |\beta^{(i_0,j_0)}| +a_1\log |\varepsilon_1^{(i_0)}|+\ldots +a_r\log |\varepsilon_r^{(i_0)}| +b_1\log |\eta_1^{(i_0,j_0)}|+\ldots +b_s\log |\eta_s^{(i_0,j_0)}|| \label{i2} \end{equation} \[ \geq \exp (-C \log E), \] where $C$ is a huge positive constant. In the last step we applied Baker's method, that is, e.g. the estimates of A. Baker and G. W\"ustholz \cite{bawu}. Comparing the beginning and the end of this series of inequalities we obtain an upper bound for $E$. Let $E_B$ be the maximum of these upper bounds (taken over all possible pairs $(i_0,j_0)$). This is usually of magnitude $10^{30}-10^{100}$. \subsubsection{Reduction} The next step is to reduce the bound $E_B$. For this purpose we apply Lemma 2.2.2 of \cite{book}. We recall here the crucial statement. Let $\zeta_1,\ldots, \zeta_n$ be multiplicatively independent algebraic numbers and let $d_1,\ldots,d_n$ be integers. Set $D=\max |d_i|$. Assume that \begin{equation} |d_1\zeta_1+\ldots+d_n\zeta_n|<c_1\exp(-c_2 D-c_3) \label{redineq} \end{equation} holds, where $c_1,c_2,c_3$ are given positive constants (of moderate size). Our purpose is to reduce the bound $D_0$ for $D$ obtained previously by Baker's method. Let $H$ be a large constant (an appropriate value is about $D_0^n$) and consider the lattice $\cal L$ spanned by the columns of the $(n+2)\times n$ matrix \[ \left( \begin{array}{cccc} 1&0&\ldots &0\\ 0&1&\ldots &0\\ \vdots &\vdots &\vdots &\vdots \\ 0&0&\ldots&1\\ H\cdot{\rm Re}(\zeta_1)&H\cdot{\rm Re}(\zeta_2)&\ldots &H\cdot{\rm Re}(\zeta_n)\\ H\cdot{\rm Im}(\zeta_1)&H\cdot{\rm Im}(\zeta_2)&\ldots &H\cdot{\rm Im}(\zeta_n)\\ \end{array} \right). \] Assume that the columns of the above matrix are linearly independent.
Denote by $b_1$ the first vector of an LLL-reduced basis of this lattice (cf. A. K. Lenstra, H. W. Lenstra Jr. and L. Lov\'asz \cite{lll}, M. Pohst \cite{dmv}). \begin{lemma} \label{redlemma} If $D\leq D_0$ and $|b_1|\geq \sqrt{(n+1)2^{n-1}} \cdot D_0$, then \[ D\leq \frac{\log H+\log c_1-c_3-\log D_0}{c_2}. \] \end{lemma} We apply this lemma based on the inequality \[ |\log |\beta^{(i_0,j_0)}| +a_1\log |\varepsilon_1^{(i_0)}|+\ldots +a_r\log |\varepsilon_r^{(i_0)}| +b_1\log |\eta_1^{(i_0,j_0)}|+\ldots +b_s\log |\eta_s^{(i_0,j_0)}|| \] \[ <2|\alpha^{(i_0,j_0)}|\cdot \exp(-c_1 E), \] see (\ref{i2}), letting $n=r+s+1, \zeta_1=\log|\beta^{(i_0,j_0)}|, \zeta_2=\log |\varepsilon_1^{(i_0)}|,\ldots, \zeta_{r+1}=\log |\varepsilon_r^{(i_0)}|$, $\zeta_{r+2}=\log |\eta_1^{(i_0,j_0)}|,\ldots, \zeta_{r+s+1}=\log |\eta_s^{(i_0,j_0)}|$. In the totally real case we may omit the last row of the matrix. Repeated application of the reduction method brings the bound obtained by Baker's method down to a so-called reduced bound. We perform this calculation for all possible pairs $(i_0,j_0)$ and $E_R$ is the maximum of the reduced bounds. This is usually of magnitude 100-1000, depending on the size of the example. \subsubsection{Enumeration} Observe that although the reduced bound $E_R$ is rather small, the number of possible values $-E_R\leq a_1,\ldots,a_r,b_1,\ldots,b_s \leq E_R$ is huge: $(2E_R+1)^{r+s}$. Hence we must apply the enumeration methods of \cite{book}. As we shall see, in the case C) that we consider, the general method of \cite{book} can only be applied with non-trivial modifications. Recall that in our equation (\ref{abxy}) $X$ and $Y$ are conjugate over $M$, as are $\alpha$ and $\beta$.
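Before turning to the enumeration details, note that once $|b_1|$ is known the bound of Lemma \ref{redlemma} is elementary arithmetic; the Python sketch below uses purely hypothetical constants (not those of our example), and the LLL step itself is assumed to be done by a standard implementation:

```python
import math

def reduced_bound(D0, H, c1, c2, c3, n, b1_norm):
    """New bound for D from the reduction lemma; None if its hypothesis
    on the length of the first LLL basis vector b1 fails."""
    if b1_norm < math.sqrt((n + 1) * 2 ** (n - 1)) * D0:
        return None   # enlarge H and repeat the LLL reduction
    return (math.log(H) + math.log(c1) - c3 - math.log(D0)) / c2

# purely hypothetical constants, for illustration only
new_D = reduced_bound(D0=1e32, H=1e170, c1=2.0, c2=0.18, c3=0.0,
                      n=6, b1_norm=1e34)
assert new_D is not None and new_D < 1e32   # the bound drops dramatically
```

This is why a single reduction step typically shrinks a Baker-type bound of size $10^{32}$ to a few thousand, and why repeating the step with smaller $H$ shrinks it further.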
We assumed that \[ X^{(ij)}=(\varepsilon_1^{(i)})^{a_1}\cdots (\varepsilon_r^{(i)})^{a_r}\cdot (\eta_1^{(ij)})^{b_1}\cdots (\eta_s^{(ij)})^{b_s}, \] where $\varepsilon_1,\ldots,\varepsilon_r$ is a set of fundamental units of $M$ and $\varepsilon_1,\ldots,\varepsilon_r,\eta_1,\ldots,\eta_s$ is a set of fundamental units in $G$. Further, $A=\max |a_i|, B=\max |b_i|, E=\max(A,B)$. We write equation (\ref{abxy}) in the form \begin{equation} \alpha^{(i1)}X^{(i1)}+\alpha^{(i2)}X^{(i2)}=1. \label{axax} \end{equation} The essence of the enumeration algorithm of \cite{book} is that we calculate an $S$ with \begin{equation} \frac{1}{S}\leq |\alpha^{(ij)} X^{(ij)}|\leq S, \label{SS} \end{equation} for all conjugates. As an initial value we can take \[ \log S=\max_{i,j}\log|\alpha^{(ij)}|+ E_R\cdot \max_{i,j} (|\log|\varepsilon_1^{(i)}||+\cdots +|\log|\varepsilon_r^{(i)}|| +|\log|\eta_1^{(ij)}||+\cdots +|\log|\eta_s^{(ij)}||). \] Then we show that $S$ can be replaced by a smaller constant $s$ at the price that we enumerate and test the exponent vectors $(a_1,\ldots,a_r,b_1,\ldots,b_s)$ in some exceptional sets. These sets are ellipsoids containing relatively few possible vectors. We repeat diminishing $S$ to $s$ until we reach a relatively small $s$ value (of magnitude 10). Finally we enumerate an ellipsoid of type (\ref{SS}) with the final $S=s$. We now describe how this procedure can be adapted to our case. Consider the following lemma, which is a modified version of Lemma 2.3.1 of \cite{book}. \begin{lemma} Let $s<S$ and assume that (\ref{SS}) holds. If for some $j$ the inequality \[ \frac{1}{s}\leq |\alpha^{(ij)}X^{(ij)}|\leq s \] is violated, then \\ I. either there exists a $j_0$ with \begin{equation} |\log|\alpha^{(ij_0)}X^{(ij_0)}||\leq \frac{2}{s}, \label{s1} \end{equation} II. or there exists a $j_0$ such that \begin{equation} \left|\log\left|\frac{\alpha^{(ij_0)}X^{(ij_0)}}{\alpha^{(ij)}X^{(ij)}}\right|\right| \leq \frac{2}{s}.
\label{s2} \end{equation} \label{enumlemma} \end{lemma} \noindent {\bf Proof}\\ Let $\{j_0\}=\{1,2\}\setminus \{j\}$.\\ If \[ \frac{1}{S}\leq |\alpha^{(ij)}X^{(ij)}|\leq \frac{1}{s}, \] then we have \[ |\log|\alpha^{(ij_0)}X^{(ij_0)}||\leq 2||\alpha^{(ij_0)}X^{(ij_0)}|-1| \leq 2|\alpha^{(ij_0)}X^{(ij_0)}-1|=2|\alpha^{(ij)}X^{(ij)}| \leq \frac{2}{s}, \] which implies (\ref{s1}). On the other hand, if \[ s\leq |\alpha^{(ij)}X^{(ij)}|\leq S, \] then \[ \left|\log\left|\frac{\alpha^{(ij_0)}X^{(ij_0)}}{\alpha^{(ij)}X^{(ij)}}\right|\right| \leq 2\left|1-\left|\frac{\alpha^{(ij_0)}X^{(ij_0)}}{\alpha^{(ij)}X^{(ij)}}\right|\right| \leq 2\left|1+\frac{\alpha^{(ij_0)}X^{(ij_0)}}{\alpha^{(ij)}X^{(ij)}}\right| =\frac{2}{|\alpha^{(ij)}X^{(ij)}|}\leq \frac{2}{s}, \] which implies (\ref{s2}). Note that in both cases we used the inequality $|\log x|<2|x-1|$, which holds for any complex number $x$ with $|x-1|<0.795$. In our calculations this is satisfied, since in the applications we have $2/s<0.795$. $\Box$ Now we explain how to enumerate the possible vectors $a_1,\ldots,a_r, b_1,\ldots,b_s$ if in addition to (\ref{SS}) (for all $i,j$) either (\ref{s1}) or (\ref{s2}) is satisfied (for certain $i,j_0$). \noindent {\bf Case I.}\\ We have \[ \log|\alpha^{(ij)}X^{(ij)}| =\log|\alpha^{(ij)}|+a_1\log |\varepsilon_1^{(i)}| +\ldots +a_r\log |\varepsilon_r^{(i)}| +b_1\log|\eta_1^{(ij)}|+\ldots +b_s \log|\eta_s^{(ij)}|, \] for all $i,j$.
Let \[ h:=\left( \begin{array}{c} \log|\alpha^{(11)}X^{(11)}|\\ \log|\alpha^{(12)}X^{(12)}|\\ \log|\alpha^{(21)}X^{(21)}|\\ \log|\alpha^{(22)}X^{(22)}|\\ \vdots\\ \log|\alpha^{(m1)}X^{(m1)}|\\ \log|\alpha^{(m2)}X^{(m2)}|\\ \log|\alpha^{(ij_0)}X^{(ij_0)}| \end{array} \right) ,\;\; g= \left( \begin{array}{c} \log|\alpha^{(11)}|\\ \log|\alpha^{(12)}|\\ \log|\alpha^{(21)}|\\ \log|\alpha^{(22)}|\\ \vdots\\ \log|\alpha^{(m1)}|\\ \log|\alpha^{(m2)}|\\ \log|\alpha^{(ij_0)}| \end{array} \right) \] \[ e_k= \left( \begin{array}{c} \log|\varepsilon_k^{(11)}|\\ \log|\varepsilon_k^{(12)}|\\ \log|\varepsilon_k^{(21)}|\\ \log|\varepsilon_k^{(22)}|\\ \vdots\\ \log|\varepsilon_k^{(m1)}|\\ \log|\varepsilon_k^{(m2)}|\\ \log|\varepsilon_k^{(ij_0)}| \end{array} \right) ,\; (k=1,\ldots,r), \;\;\; f_l= \left( \begin{array}{c} \log|\eta_l^{(11)}|\\ \log|\eta_l^{(12)}|\\ \log|\eta_l^{(21)}|\\ \log|\eta_l^{(22)}|\\ \vdots\\ \log|\eta_l^{(m1)}|\\ \log|\eta_l^{(m2)}|\\ \log|\eta_l^{(ij_0)}| \end{array} \right) ,\; (l=1,\ldots,s). \] Then \[ h=g+a_1e_1+\ldots +a_re_r+b_1f_1+\ldots +b_sf_s. \] Let \[ \lambda_{ij}=\frac{1}{\log S}, \;(1\leq i\leq m,\;j=1,2), \] and for the last coordinate let \[ \lambda=\frac{s}{2}. \] For any vector \[ v=\left( \begin{array}{c} x_{11}\\ x_{12}\\ x_{21}\\ x_{22}\\ \vdots\\ x_{m1}\\ x_{m2}\\ x_{ij_0} \end{array} \right) \;\;\; {\rm set} \;\;\; \varphi (v)=\left( \begin{array}{c} \lambda_{11}\cdot x_{11}\\ \lambda_{12}\cdot x_{12}\\ \lambda_{21}\cdot x_{21}\\ \lambda_{22}\cdot x_{22}\\ \vdots\\ \lambda_{m1}\cdot x_{m1}\\ \lambda_{m2}\cdot x_{m2}\\ \lambda \cdot x_{ij_0} \end{array} \right). \] Then \[ \varphi(h)=\varphi(g)+a_1\varphi(e_1)+\ldots +a_r\varphi(e_r) +b_1\varphi(f_1)+\ldots +b_s\varphi(f_s). \] Further, (\ref{SS}) and (\ref{s1}) imply \[ ||\varphi(h)||^2= ||\varphi(g)+a_1\varphi(e_1)+\ldots +a_r\varphi(e_r) +b_1\varphi(f_1)+\ldots +b_s\varphi(f_s) ||^2 \] \[ = \sum_{i=1}^m\sum_{j=1}^2\left(\frac{\log|\alpha^{(ij)}X^{(ij)}|}{\log S}\right)^2 +\left(\frac{s}{2}\log|\alpha^{(ij_0)}X^{(ij_0)}|\right)^2 \leq 2m+1.
\] The above $L^2$ norm defines an ellipsoid. \noindent {\bf Case II.}\\ We have \[ \log\left|\frac{\alpha^{(ij_0)}X^{(ij_0)}}{\alpha^{(ij)}X^{(ij)}}\right| = \log\left|\frac{\alpha^{(ij_0)}}{\alpha^{(ij)}}\right| + b_1\log\left|\frac{\eta_1^{(ij_0)}}{\eta_1^{(ij)}}\right| +\ldots + b_s\log\left|\frac{\eta_s^{(ij_0)}}{\eta_s^{(ij)}}\right|. \] Observe that here we only have quotients of conjugates of the relative units $\eta_1,\ldots,\eta_s$. By (\ref{SS}) we can derive \begin{equation} \frac{1}{S^2}< \left|\frac{\alpha^{(ij_0)}X^{(ij_0)}}{\alpha^{(ij)}X^{(ij)}}\right| <S^2 \label{S2} \end{equation} for any $i,j$. Further, (\ref{s2}) holds for $i,j_0$. Let \[ h:=\left( \begin{array}{c} \log\left|\frac{\alpha^{(11)}X^{(11)}}{\alpha^{(12)}X^{(12)}}\right|\\ \\ \log\left|\frac{\alpha^{(21)}X^{(21)}}{\alpha^{(22)}X^{(22)}}\right|\\ \vdots\\ \log\left|\frac{\alpha^{(m1)}X^{(m1)}}{\alpha^{(m2)}X^{(m2)}}\right|\\ \\ \log\left|\frac{\alpha^{(ij_0)}X^{(ij_0)}}{\alpha^{(ij)}X^{(ij)}}\right| \end{array} \right) ,\;\; g= \left( \begin{array}{c} \log\left|\frac{\alpha^{(11)}}{\alpha^{(12)}}\right|\\ \\ \log\left|\frac{\alpha^{(21)}}{\alpha^{(22)}}\right|\\ \vdots\\ \log\left|\frac{\alpha^{(m1)}}{\alpha^{(m2)}}\right|\\ \\ \log\left|\frac{\alpha^{(ij_0)}}{\alpha^{(ij)}}\right| \end{array} \right) ,\;\; f_l= \left( \begin{array}{c} \log\left|\frac{\eta_l^{(11)}}{\eta_l^{(12)}}\right|\\ \\ \log\left|\frac{\eta_l^{(21)}}{\eta_l^{(22)}}\right|\\ \vdots\\ \log\left|\frac{\eta_l^{(m1)}}{\eta_l^{(m2)}}\right|\\ \\ \log\left|\frac{\eta_l^{(ij_0)}}{\eta_l^{(ij)}}\right| \end{array} \right) ,\; (l=1,\ldots,s). \] Then \[ h=g+b_1f_1+\ldots +b_sf_s. \] Let \[ \lambda_{i}=\frac{1}{2\log S}, \;(1\leq i\leq m), \] and for the last coordinate let \[ \lambda_{m+1}=\frac{s}{2}.
\] For any vector \[ v=\left( \begin{array}{c} x_{1}\\ x_{2}\\ \vdots\\ x_{m}\\ x_{m+1} \end{array} \right) \;\;\; {\rm set} \;\;\; \varphi (v)=\left( \begin{array}{c} \lambda_1\cdot x_{1}\\ \lambda_2\cdot x_{2}\\ \vdots\\ \lambda_m\cdot x_{m}\\ \lambda_{m+1}\cdot x_{m+1} \end{array} \right). \] Then \[ \varphi(h)=\varphi(g)+b_1\varphi(f_1)+\ldots +b_s\varphi(f_s). \] Further, (\ref{S2}) and (\ref{s2}) imply \[ ||\varphi(h)||^2= ||\varphi(g)+b_1\varphi(f_1)+\ldots +b_s\varphi(f_s)||^2 \] \[ = \sum_{i=1}^m \left(\frac{1}{2\log S} \log\left|\frac{\alpha^{(i1)}X^{(i1)}}{\alpha^{(i2)}X^{(i2)}}\right|\right)^2 +\left(\frac{s}{2} \log\left|\frac{\alpha^{(ij_0)}X^{(ij_0)}}{\alpha^{(ij)}X^{(ij)}}\right|\right)^2 \leq m+1. \] The above $L^2$ norm defines an ellipsoid. {\bf Remarks} \begin{enumerate} \item The procedure can be continued both in Case I and Case II by taking the previous $s$ in the role of $S$ and choosing a smaller $s$. For an appropriate choice of $s$ see \cite{book}. Usually we take $s=\sqrt{S}$. \item Proceeding until a relatively small value of $S$ (of magnitude 10), we finish the procedure by enumerating in both cases an ellipsoid, taking all weights $1/\log S$ in Case I and $1/(2\log S)$ in Case II (cf. \cite{book}, see the last ellipsoids in our example). \item Enumerating the ellipsoids, in Case I we obtain all possible exponent vectors $(a_1,\ldots,a_r, b_1,\ldots,b_s)$. Observe that in Case II we can only enumerate the possible values of $(b_1,\ldots,b_s)$. For all possible $(b_1,\ldots,b_s)$ we let $a_1,\ldots,a_r$ run between $-E_R$ and $E_R$ and test whether the unit equation (\ref{axax}) holds. Since the ground field $M$ is usually of small degree, this can be done relatively fast. \item We emphasize that for the enumeration of the ellipsoids we use the improved method involving LLL reduction (see \cite{dmv}). \item To speed up the test of possible exponent vectors we use sieves (see \cite{book}).
This enables us to check mod $p$ congruences instead of equations in high-precision real numbers. Also, calculations with integers modulo $p$ are much faster than real arithmetic. \item The enumeration of exponent vectors in the exceptional ellipsoids must be performed for all possible pairs $(i,j_0)$ and we have to check all possible exponent vectors. \end{enumerate} \section{The quartic relative Thue equation} \label{qqeq} Having $U, V$ (determined up to a unit factor in $M$) we follow the methods of \cite{gprel4} (see also \cite{book}) to determine $X,Y,Z$. Set \[ Q_0(X,Y,Z)=V\cdot Q_1(X,Y,Z)-U\cdot Q_2(X,Y,Z). \] Let $X_0,Y_0,Z_0\in{\mathbb Z}_M$ be a nontrivial solution of \begin{equation} Q_0(X,Y,Z)=0, \label{q00} \end{equation} with, say, $Z_0\ne 0$. We represent $X,Y,Z$ with parameters $P,Q,R\in M$ in the form \begin{eqnarray} X&=&R\cdot X_0+P,\nonumber\\ Y&=&R\cdot Y_0+Q,\label{rpq}\\ Z&=&R\cdot Z_0.\nonumber \end{eqnarray} Substituting these representations into (\ref{q00}) we obtain an equation of the form \[ R(C_1P+C_2Q)=C_3P^2+C_4PQ+C_5Q^2, \] with $C_1,\ldots,C_5\in{\mathbb Z}_M$. We multiply the equations (\ref{rpq}) by $C_1P+C_2Q$ and use the above equation to eliminate $R$ on the right-hand sides. We obtain \begin{eqnarray} \kappa\cdot X &=& f_X(P,Q),\nonumber \\ \kappa\cdot Y &=& f_Y(P,Q),\label{xyzpq}\\ \kappa\cdot Z &=& f_Z(P,Q),\nonumber \end{eqnarray} with quadratic forms $f_X,f_Y,f_Z\in{\mathbb Z}_M[P,Q]$. As stated in \cite{gprel4} we can replace $P,Q,\kappa$ by integer parameters (by multiplying the equations by the square of a common denominator of $\kappa,P,Q$), and $\kappa$ may attain only finitely many non-associated values. Substituting these representations into (\ref{Q12}) we obtain quartic equations over $M$: \begin{eqnarray} F_1(P,Q)=Q_1(f_X(P,Q),f_Y(P,Q),f_Z(P,Q))&=&\kappa^2\cdot U,\nonumber\\ F_2(P,Q)=Q_2(f_X(P,Q),f_Y(P,Q),f_Z(P,Q))&=&\kappa^2\cdot V\label{relthue}.
\end{eqnarray} According to \cite{gprel4} at least one of these is a quartic relative Thue equation over $M$, having a root in $K$. \section{Relative Thue equations in totally complex extensions of totally real fields} We show that solving relative Thue equations of type (\ref{relthue}) is an easy matter. We formulate our assertion in a general form. Let $M$ be a totally real number field and let $K=M(\xi)$ be a totally complex extension of degree $k$ of $M$. More precisely, if $f(x)\in{\mathbb Z}_M[x]$ is the relative defining polynomial of $\xi$ over $M$, then all roots $\xi^{(i1)},\ldots, \xi^{(ik)}$ of a conjugate $f^{(i)}(x)$ of $f(x)$ are complex. Set \[ c_0=\frac{1}{\min_{i,j}|{\rm Im}(\xi^{(ij)})|}. \] Let $0\ne \nu\in{\mathbb Z}_M$, set \[ F(X,Y)=N_{K/M}(X-\xi Y), \] and consider the relative Thue equation \begin{equation} F(X,Y)=\nu, \;\; {\rm in}\;\; X,Y\in{\mathbb Z}_M. \label{ff} \end{equation} We denote by $|\overline{Z}|$ the size of $Z\in M$, that is, the maximum absolute value of its conjugates. \begin{theorem} All solutions $X,Y\in{\mathbb Z}_M$ of equation (\ref{ff}) satisfy \begin{equation} \max (|\overline{X}|,|\overline{Y}|) \leq |\overline{\nu}|^{1/k}(1+c_0 |\overline{\xi}|). \label{xysize} \end{equation} \label{thth} \end{theorem} \noindent {\bf Proof.}\\ Let $X,Y\in{\mathbb Z}_M$ be an arbitrary but fixed solution of equation (\ref{ff}). Denote by $\gamma^{(i)}$ the conjugates of $\gamma\in M$ corresponding to $\xi^{(i1)},\ldots, \xi^{(ik)}$. For any $1\leq i\leq m,\; 1\leq j\leq k$ set \[ \beta^{(ij)}=X^{(i)}-\xi^{(ij)}Y^{(i)}. \] For any $1\leq i\leq m$ we have \[ \prod_{j=1}^k \beta^{(ij)}=\nu^{(i)}. \] Let $j_0$ be the index with \[ |\beta^{(ij_0)}|=\min_{1\leq j\leq k}|\beta^{(ij)}|.
\] Then \[ |{\rm Im}(\xi^{(ij_0)})|\cdot |Y^{(i)}| =|{\rm Im}(X^{(i)}-\xi^{(ij_0)}Y^{(i)})| =|{\rm Im}(\beta^{(ij_0)})| \leq |\beta^{(ij_0)}| \leq |\nu^{(i)}|^{1/k}, \] the last step by the minimality of $|\beta^{(ij_0)}|$, whence \[ |Y^{(i)}|\leq \frac{|\nu^{(i)}|^{1/k}}{|{\rm Im}(\xi^{(ij_0)})|},\;\; |X^{(i)}|\leq |\nu^{(i)}|^{1/k}+|\xi^{(ij_0)}|\cdot |Y^{(i)}|, \] which implies our assertion.\\ $\Box$ Let us now return to the equations (\ref{relthue}). We denoted the fundamental units of $M$ by $\varepsilon_1,\ldots,\varepsilon_r$. $\kappa$ can be written in the form \[ \kappa=\kappa_0\cdot \varepsilon_1^{k_1}\ldots \varepsilon_r^{k_r} \] where $\kappa_0$ can only take finitely many values and $k_1,\ldots,k_r\in{\mathbb Z}$. Set $k_i=4k_i'+\ell_i$ with $-1\leq \ell_i\leq 2,\; (i=1,\ldots,r)$ and \[ P'=P\cdot \varepsilon_1^{-k_1'}\cdots \varepsilon_r^{-k_r'}, \;\; Q'=Q\cdot \varepsilon_1^{-k_1'}\cdots \varepsilon_r^{-k_r'}. \] Then we have \begin{eqnarray*} F_1(P',Q')&=&\kappa_0\cdot U\cdot \varepsilon_1^{\ell_1}\cdots \varepsilon_r^{\ell_r}, \\ F_2(P',Q')&=&\kappa_0\cdot V\cdot \varepsilon_1^{\ell_1}\cdots \varepsilon_r^{\ell_r}. \end{eqnarray*} One of these equations is a quartic relative Thue equation (see \cite{book}) that can be solved easily by the above Theorem for all possible values of $\kappa_0$ and for all possible $\ell_1,\ldots,\ell_r$. This gives $P$ and $Q$ up to a unit factor of $M$, whence we obtain $X,Y,Z$ by (\ref{xyzpq}) up to a unit factor in $M$. The generators of relative power integral bases of $K$ over $M$ are obtained by (\ref{axyz}). The possible values of $\alpha$ must be checked. \section{An example} As an example consider a root $\tau$ of the polynomial \[ f_3(x)=x^3-2x^2-5x-1. \] This generates a totally real cubic field (one of the ``simplest cubic fields'' of D. Shanks \cite{shanks}). The conjugates of $\mu=\tau+2$ are all positive, $\mu$ having defining polynomial $g_3(x)=x^3-8x^2+15x-7$. Therefore $\xi=\sqrt[4]{-\mu}$ is a totally complex algebraic integer of degree 4 over $M$.
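The two facts just used -- $f_3$ is totally real and all conjugates of $\mu=\tau+2$ are positive -- can be confirmed by a rough numerical sketch (simple bisection; not the certified arithmetic used in the actual computation):

```python
def bisect_root(f, a, b, tol=1e-12):
    """Locate a sign change of f on [a, b] by bisection."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return (a + b) / 2

f3 = lambda x: x**3 - 2*x**2 - 5*x - 1

# bracket the real roots by scanning for sign changes on a coarse grid
xs = [i / 10 for i in range(-100, 101)]
brackets = [(u, v) for u, v in zip(xs, xs[1:]) if f3(u) * f3(v) < 0]
roots = [bisect_root(f3, u, v) for u, v in brackets]

assert len(roots) == 3                  # f_3 is totally real
assert all(t + 2 > 0 for t in roots)    # all conjugates of mu = tau + 2 positive
```

Since every conjugate of $\mu$ is positive, every conjugate of $-\mu$ is negative, so all fourth roots of $-\mu$ are non-real; this is why $\xi$ is totally complex.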
Note that $\xi$ has defining polynomial \[ f_{12}(x)=x^{12}+8x^8+15x^4+7, \] and the field $K={\mathbb Q}(\xi)$ has an integral basis $(1,\xi,\ldots,\xi^{11})$, a power integral basis. Hence $K$ is monogenic. Our purpose is to determine all (non-equivalent) generators of relative power integral bases of $K$ over $M$. The integral basis $(1,\xi,\ldots,\xi^{11})$ of $K$ implies that the discriminant of $K$ is \[ D_K=D(f_{12})=2^{24}\cdot 7^3\cdot 19^8. \] It is easy to check that \[ (1,\mu,\mu^2,\xi,\xi\mu,\xi\mu^2, \xi^2,\xi^2\mu,\xi^2\mu^2, \xi^3,\xi^3\mu,\xi^3\mu^2) \] is also an integral basis of $K$, therefore any $\alpha\in{\mathbb Z}_K$ can be written in the form \begin{equation} \alpha=A+X\xi+Y\xi^2+Z\xi^3, \label{AAlpha} \end{equation} with $A,X,Y,Z\in{\mathbb Z}_M$ (that is, the constant $d$ in Lemma \ref{lemma1} is 1). Moreover, $m=3$ and $i_0=I_{K/M}(\xi)=1$. The relative defining polynomial of $\xi$ over $M$ is $x^4+\mu$, therefore equation (\ref{F}) is of the form \[ N_{M/{\mathbb Q}}(U(U^2-4\mu V^2))=\pm 1, \] that is, \begin{equation} U(U^2-4\mu V^2)=\nu, \label{Fex} \end{equation} where $\nu$ is a unit in $M$. In the absolute case a similar equation would be trivial to solve; this is not the case here. The above equation belongs to Case C) of the three main types of equations detailed in Section \ref{cubiceq}. Let $\gamma=\sqrt{\mu}$, $G=M(\gamma)$. This is a totally real sextic number field, and our $F(U,V)$ factorizes over ${\mathbb Z}_G$: \[ U(U+2\gamma V)(U-2\gamma V)=\nu. \] This implies that all three factors are units; the second and third factors are units in $G$, conjugate over $M$. For $i=1,2,3$ we obtain \begin{eqnarray} U^{(i)}&=&\nu_M^{(i)},\nonumber\\ U+2\gamma^{(i1)}V&=&\nu_G^{(i1)},\label{ccc}\\ U-2\gamma^{(i1)}V&=&U+2\gamma^{(i2)}V=\nu_G^{(i2)},\nonumber \end{eqnarray} where $\nu_M$ is a unit in $M$ and $\nu_G$ is a unit in $G$.
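The relation between $g_3$ and the claimed defining polynomial $f_{12}$ of $\xi$ can also be sanity-checked in floating point: if $\mu$ is a root of $g_3$, then any fourth root $\xi$ of $-\mu$ must satisfy $f_{12}$ (a numerical sketch only):

```python
g3 = lambda x: x**3 - 8*x**2 + 15*x - 7        # defining polynomial of mu
f12 = lambda x: x**12 + 8*x**8 + 15*x**4 + 7   # claimed defining polynomial of xi

# one real root of g3 lies in (5, 6); locate it by bisection
lo, hi = 5.0, 6.0
for _ in range(200):
    mid = (lo + hi) / 2
    if g3(lo) * g3(mid) <= 0:
        hi = mid
    else:
        lo = mid
mu = (lo + hi) / 2

xi = complex(-mu) ** 0.25        # a fourth root of -mu (principal branch)
assert xi.imag > 0               # xi is genuinely complex
assert abs(xi**4 + mu) < 1e-8    # xi^4 = -mu
assert abs(f12(xi)) < 1e-6       # xi is a root of f12
```

Indeed, $-\mu$ satisfies $y^3+8y^2+15y+7=0$, and substituting $y=x^4$ gives exactly $f_{12}$.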
By the above equations we obtain \[ 2\nu_M^{(i)}=\nu_G^{(i1)}+\nu_G^{(i2)}, \] that is, \begin{equation} \frac{1}{2} X^{(i1)}+\frac{1}{2} X^{(i2)}=1, \label{uuu} \end{equation} where $X=\nu_G/\nu_M$ is a unit in $G$. Using KASH \cite{kash} we calculated a system of fundamental units in $G$. $\gamma=\sqrt{\mu}$ has defining polynomial $f_6(x)=x^6-8x^4+15x^2-7$. The elements $(1,\gamma,\gamma^2,\gamma^3,\gamma^4,\gamma^5)$ form an integral basis in $G$. The coefficients of the fundamental units in this integral basis are \begin{eqnarray*} \varepsilon_1:&& \;\; (1,1,0,0,0,0),\\ \varepsilon_2:&& \;\; (1,-1,0,0,0,0),\\ \varepsilon_3:&& \;\; (2,0,-1,0,0,0),\\ \varepsilon_4:&& \;\; (3,1,-1,0,0,0),\\ \varepsilon_5:&& \;\; (3,-7,0,-7,0,-1). \end{eqnarray*} The relative conjugates (over $M$) of these units are the following: \begin{eqnarray*} \varepsilon_1^{(i2)}&=&\varepsilon_2^{(i1)},\\ \varepsilon_2^{(i2)}&=&\varepsilon_1^{(i1)},\\ \varepsilon_3^{(i2)}&=&\varepsilon_3^{(i1)},\\ \varepsilon_4^{(i2)}&=& \frac{\varepsilon_3^{(i1)}} {\varepsilon_1^{(i1)} \varepsilon_2^{(i1)} \varepsilon_4^{(i1)}},\\ \varepsilon_5^{(i2)}&=& \frac{\varepsilon_3^{(i1)}} {\varepsilon_1^{(i1)} \varepsilon_2^{(i1)} \varepsilon_5^{(i1)}}. \end{eqnarray*} Equation (\ref{uuu}) can be written as \begin{eqnarray} && \pm \frac{1}{2}\cdot (\varepsilon_1^{(i1)})^{a_1}\cdot (\varepsilon_2^{(i1)})^{a_2}\cdot (\varepsilon_3^{(i1)})^{a_3}\cdot (\varepsilon_4^{(i1)})^{a_4}\cdot (\varepsilon_5^{(i1)})^{a_5} \nonumber \\&& \pm \frac{1}{2}\cdot (\varepsilon_1^{(i2)})^{a_1}\cdot (\varepsilon_2^{(i2)})^{a_2}\cdot (\varepsilon_3^{(i2)})^{a_3}\cdot (\varepsilon_4^{(i2)})^{a_4}\cdot (\varepsilon_5^{(i2)})^{a_5} =1. \label{u2} \end{eqnarray} In our example we have $c_1=0.18298$ and $C=2.85992\cdot 10^{28}$ (cf. (\ref{i2})). Comparing the upper and lower bounds of the series of inequalities (\ref{i2}) we obtain $A_B<10^{32}$.
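The order of magnitude of this Baker bound can be reproduced by a simple fixed-point iteration: the chain of inequalities can only hold while $c_1E<C\log E$, so the critical $E$ satisfies $E=(C/c_1)\log E$ (a numerical sketch, not the rigorous derivation of $A_B$):

```python
import math

c1, C = 0.18298, 2.85992e28   # the constants of our example (cf. (i2))

E = 1.0e30                    # any reasonable starting value
for _ in range(100):          # the map E -> (C/c1) log E is a contraction here
    E = (C / c1) * math.log(E)

assert 1e30 < E < 1e32        # consistent with the stated bound A_B < 10^32
```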
The following table summarizes the reduction procedure: \[ \begin{array}{|c||c|c|c|c|}\hline & A< & H= & {\rm precision} & {\rm new \; bound \; for}\; A \\ \hline {\rm Step \;\;I} & 10^{32} & 10^{170} & 300\;{\rm digits} & 1736 \\ \hline {\rm Step \;\;II}& 1736 & 10^{30} & 100\;{\rm digits} & 336 \\ \hline {\rm Step \;\;III}& 336 & 10^{20} & 100\;{\rm digits} & 219\\ \hline \end{array} \] The procedure took a few minutes altogether and we obtained the reduced bound $A_R=219$. Observe that although the reduced bound is rather small, the number of possible values $-219\leq a_1,\ldots,a_5\leq 219$ is huge: $(2\cdot219+1)^5=16305067506199\approx 1.63\cdot 10^{13}$. We continue with the enumeration process. In Case I we performed the enumeration process with the following parameters: \[ \begin{array}{|c|c|c|c|}\hline & S & s & {\rm enumerated} \\ \hline {\rm Step \;\;I} & 10^{258} & 10^{20} & 6 \\ \hline {\rm Step \;\;II}& 10^{20} & 10^{10} & 6 \\ \hline {\rm Step \;\;III}& 10^{10} & 10^{3} & 19922\\ \hline {\rm Step \;\;IV}& 10^{3} & 10^{2} & 13506\\ \hline {\rm Step \;\;V}& 10^{2} & & 1194\\ \hline \end{array} \] Altogether the procedure took 2-3 minutes. In parallel with the enumeration we sieved modulo 113 and 787. We found that the only possible solution is $a_1=\ldots=a_5=0$. \noindent In Case II we have \[ \frac{\frac{1}{2}X^{(i1)}}{\frac{1}{2}X^{(i2)}}= \left(\frac{\varepsilon_1^{(i1)}}{\varepsilon_2^{(i1)}}\right)^{a_2-a_1} \left(\frac{\varepsilon_3^{(i1)}} {\varepsilon_1^{(i1)}\varepsilon_2^{(i1)}\varepsilon_4^{(i1)}}\right)^{a_4} \left(\frac{\varepsilon_3^{(i1)}} {\varepsilon_1^{(i1)}\varepsilon_2^{(i1)}\varepsilon_5^{(i1)}}\right)^{a_5}. \] Therefore in Case II we can determine the possible values of $a_2-a_1,a_4,a_5$.
We performed the enumeration algorithm with the following parameters: \[ \begin{array}{|c|c|c|c|c|}\hline & S & S^2 & s & {\rm enumerated} \\ \hline {\rm Step \;\;I} & 10^{258} & 10^{516} & 10^{20} & 0 \\ \hline {\rm Step \;\;II}& 10^{20} & 10^{40} & 10^{10} & 0 \\ \hline {\rm Step \;\;III}& 10^{10} & 10^{20} & 10^{3} & 38 \\ \hline {\rm Step \;\;IV}& 10^{3} & 10^6 & 10^{2} & 202\\ \hline {\rm Step \;\;V}& 10^{2} & 10^4 & & 79\\ \hline \end{array} \] Altogether the procedure took a few seconds. In this case there was no way to diminish the number of possible exponent vectors by sieving. For all the 319 possible values of $a_2-a_1,a_4,a_5$ we let $a_1,a_3$ run through the interval $[-219,219]$. The exponent vectors $(a_1,\ldots,a_5)$ were tested modulo 113, 787, 1223, 2053 to see whether they satisfy the unit equation (\ref{u2}). Finally we got three solutions of equation (\ref{u2}): \[ (a_1,\ldots,a_5)=(0,0,0,0,0),(0,1,0,0,0),(1,0,0,0,0). \] These correspond to \[ X=1,\;\; 1+\sqrt{\mu},\;\; 1-\sqrt{\mu}, \] that is, \[ \frac{1}{2}\cdot 1 +\frac{1}{2}\cdot 1 =1,\;\; \frac{1}{2}\cdot (1+\sqrt{\mu}) +\frac{1}{2}\cdot (1-\sqrt{\mu}) =1,\;\; \frac{1}{2}\cdot (1-\sqrt{\mu}) +\frac{1}{2}\cdot (1+\sqrt{\mu}) =1. \] By (\ref{ccc}), \[ X=\frac{\nu_G}{\nu_M}, \] with $U=\nu_M$ and $U-2\sqrt{\mu}V=\nu_G$, we have \[ \frac{V}{U}=\frac{1}{2\sqrt{\mu}}\left( 1-\frac{\nu_G}{\nu_M} \right). \] This gives an algebraic integer value only for $X=1$ (out of the above possible values of $X$), whence the only solution of equation (\ref{Fex}) is $U=\nu_M,V=0$. Following the general arguments of Section \ref{qqeq} we have \[ Q_1(X,Y,Z)=X^2+\mu Z^2=U=\nu_M,\;\; Q_2(X,Y,Z)=Y^2-XZ=0. \] We get \[ Q_0(X,Y,Z)=Y^2-XZ=0, \] with non-trivial solution $X_0=1,Y_0=0,Z_0=0$. We set \[ X=X_0R,\;\; Y=Y_0R+P,\;\; Z=Z_0R+Q, \] with parameters $P,Q,R\in M$. We obtain $P^2-RQ=0$. We multiply the above representations by $Q$ and replace $RQ$ by $P^2$.
Hence \begin{eqnarray} \kappa\cdot X&=&P^2,\nonumber\\ \kappa\cdot Y&=&PQ,\label{ss}\\ \kappa\cdot Z&=&Q^2,\nonumber \end{eqnarray} and we replace $\kappa,P,Q$ by integer parameters $P,Q\in{\mathbb Z}_M$ (see Section \ref{qqeq}). It follows that $\kappa$ can only be a unit in $M$. Substituting these representations into $Q_1(X,Y,Z)=U$ we obtain \begin{equation} P^4+\mu Q^4=\kappa^2\nu_M=\nu, \label{q1} \end{equation} where $\nu=\pm \varepsilon_1^{k_1}\varepsilon_2^{k_2}$ is a unit in $M$, $\varepsilon_1,\varepsilon_2$ being the fundamental units in $M$. Let $k_i=4k_i'+\ell_i$ with $-1\leq \ell_i\leq 2$ ($i=1,2$) and let \[ P'=P\cdot \varepsilon_1^{-k_1'}\cdot \varepsilon_2^{-k_2'}, \;\; Q'=Q\cdot \varepsilon_1^{-k_1'}\cdot \varepsilon_2^{-k_2'}. \] For $\nu'=\varepsilon_1^{\ell_1}\cdot \varepsilon_2^{\ell_2}$ we can easily solve the equation \[ (P')^4+\mu\cdot (Q')^4=\pm \nu' \] in $P',Q'\in {\mathbb Z}_M$ using Theorem \ref{thth} and obtain that $P'=\pm 1,\;Q'=0$ are the only solutions. Therefore up to a unit factor in $M$ we have $(P,Q)=(1,0)$, whence up to a unit factor in $M$ we obtain $(X,Y,Z)=(1,0,0)$. Hence up to equivalence the only generator of relative power integral bases of $K$ over $M$ is $\xi$. \section{Generators of absolute power integral bases} As we have seen, up to equivalence $\xi$ is the only generator of relative power integral bases of $K$ over $M$. By \cite{grsz} this implies that any generator of absolute power integral bases of $K$ must have the form \[ \zeta=z_0+z_1\mu+z_2\mu^2 \pm \varepsilon_1^{k_1}\varepsilon_2^{k_2}\cdot \xi, \] where $z_0,z_1,z_2,k_1,k_2\in{\mathbb Z}$ and $\varepsilon_1, \varepsilon_2$ are fundamental units in $M$. We let $z_1,z_2,k_1,k_2$ run through the interval $[-25,25]$. In addition to $\xi$ we found only one algebraic integer of this shape with index $<10^{15}$. This element has index 65329214857201. \end{document}
arXiv
Login | Create Sort by: Relevance Date Users's collections Twitter Group by: Day Week Month Year All time Based on the idea and the provided source code of Andrej Karpathy (arxiv-sanity) Probing the evolution of the EAS muon content in the atmosphere with KASCADE-Grande (1801.05513) KASCADE-Grande Collaboration: W.D. Apel, J.C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, D. Fuhrmann, A. Gherghel-Lascu, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, P. Łuczak, H.J. Mathes, H.J. Mayer, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski Jan. 17, 2018 astro-ph.HE The evolution of the muon content of very high energy air showers (EAS) in the atmosphere is investigated with data of the KASCADE-Grande observatory. For this purpose, the muon attenuation length in the atmosphere is obtained to $\Lambda_\mu = 1256 \, \pm 85 \, ^{+229}_{-232}(\mbox{syst})\, \mbox{g/cm}^2$ from the experimental data for shower energies between $10^{16.3}$ and $10^{17.0} \, \mbox{eV}$. Comparison of this quantity with predictions of the high-energy hadronic interaction models QGSJET-II-02, SIBYLL 2.1, QGSJET-II-04 and EPOS-LHC reveals that the attenuation of the muon content of measured EAS in the atmosphere is lower than predicted. Deviations are, however, less significant with the post-LHC models. 
The presence of such deviations seems to be related to a difference between the simulated and the measured zenith angle evolutions of the lateral muon density distributions of EAS, which also causes a discrepancy between the measured absorption lengths of the density of shower muons and the predicted ones at large distances from the EAS core. The studied deficiencies show that all four considered hadronic interaction models fail to describe consistently the zenith angle evolution of the muon content of EAS in the aforesaid energy regime. KASCADE-Grande Limits on the Isotropic Diffuse Gamma-Ray Flux between 100 TeV and 1 EeV (1710.02889) KASCADE-Grande Collaboration: W. D. Apel, J. C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I. M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, Z. Feng, D. Fuhrmann, A. Gherghel-Lascu, H. J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J. R. Hörandel, T. Huege, K.-H. Kampert, D. Kang, H. O. Klages, K. Link, P. Łuczak, H. J. Mathes, H. J. Mayer, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F. G. Schröder, O. Sima, G. Toma, G. C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski Oct. 8, 2017 astro-ph.HE KASCADE and KASCADE-Grande were multi-detector installations to measure individual air showers of cosmic rays at ultra-high energy. Based on data sets measured by KASCADE and KASCADE-Grande, 90% C.L. upper limits to the flux of gamma-rays in the primary cosmic ray flux are determined in an energy range of ${10}^{14} - {10}^{18}$ eV. The analysis is performed by selecting air showers with a low muon content as expected for gamma-ray-induced showers compared to air showers induced by energetic nuclei. The best upper limit of the fraction of gamma-rays to the total cosmic ray flux is obtained at $3.7 \times {10}^{15}$ eV with $1.1 \times {10}^{-5}$. 
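The fraction limit of order $10^{-5}$ quoted above comes from counting muon-poor (gamma-like) showers. A toy version of turning a background-free Poisson counting limit into a fraction — the event numbers below are invented, not the collaboration's:

```python
import math

# With zero gamma-like candidates and negligible background, the 90% C.L.
# Poisson upper limit on the expected signal count is -ln(0.10).
n_limit = -math.log(0.10)  # about 2.30 events

# Hypothetical total number of selected air showers in the energy bin.
n_total = 200_000
fraction_limit = n_limit / n_total  # upper limit on the gamma-ray fraction
```

Converting such a fraction into an absolute flux then requires the measured all-particle cosmic-ray flux at the same energy.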
Translated to an absolute gamma-ray flux this sets constraints on some fundamental astrophysical models, such as the distance of sources for at least one of the IceCube neutrino excess models. A comparison of the cosmic-ray energy scales of Tunka-133 and KASCADE-Grande via their radio extensions Tunka-Rex and LOPES (1610.08343) W.D. Apel, J.C. Arteaga-Velázquez, L. Bähren, P.A. Bezyazeekov, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, N.M. Budnev, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, O. Fedorov, B. Fuchs, H. Gemmeke, O. A. Gress, C. Grupen, A. Haungs, D. Heck, R. Hiller, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, Y. Kazarina, M. Kleifges, E.E. Korosteleva, D. Kostunin, O. Krömer, J. Kuijpers, L.A. Kuzmichev, K. Link, N. Lubsandorzhiev, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, R.R. Mirgazov, R. Monkhoev, C. Morello, J. Oehlschläger, E.A. Osipova, A. Pakhorukov, N. Palmieri, L. Pankov, T. Pierog, V.V. Prosin, J. Rautenberg, H. Rebel, M. Roth, G.I. Rubtsov, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, R. Wischnewski, J. Wochele, J. Zabierowski, A. Zagorodnikov, J.A. Zensus Oct. 27, 2016 astro-ph.IM, astro-ph.HE The radio technique is a promising method for detection of cosmic-ray air showers of energies around $100\,$PeV and higher with an array of radio antennas. Since the amplitude of the radio signal can be measured absolutely and increases with the shower energy, radio measurements can be used to determine the air-shower energy on an absolute scale. We show that calibrated measurements of radio detectors operated in coincidence with host experiments measuring air showers based on other techniques can be used for comparing the energy scales of these host experiments.
Using two approaches, first via direct amplitude measurements, and second via comparison of measurements with air shower simulations, we compare the energy scales of the air-shower experiments Tunka-133 and KASCADE-Grande, using their radio extensions, Tunka-Rex and LOPES, respectively. Due to the consistent amplitude calibration for Tunka-Rex and LOPES achieved by using the same reference source, this comparison reaches an accuracy of approximately $10\,\%$ - limited by some shortcomings of LOPES, which was a prototype experiment for the digital radio technique for air showers. In particular we show that the energy scales of cosmic-ray measurements by the independently calibrated experiments KASCADE-Grande and Tunka-133 are consistent with each other on this level. Improved absolute calibration of LOPES measurements and its impact on the comparison with REAS 3.11 and CoREAS simulations (1507.07389) W.D. Apel, J.C. Arteaga-Velazquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, R. Hiller, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, S. Nehls, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Dec. 18, 2015 astro-ph.IM, astro-ph.HE LOPES was a digital antenna array detecting the radio emission of cosmic-ray air showers. The calibration of the absolute amplitude scale of the measurements was done using an external, commercial reference source, which emits a frequency comb with defined amplitudes. 
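Absolute amplitude calibration against a common reference source fixes each experiment's overall scale; comparing two experiments calibrated with the same source then reduces to comparing measured-to-simulated amplitude ratios. A schematic with invented placeholder numbers (not Tunka-Rex/LOPES values):

```python
# If the radio amplitude is proportional to the primary energy, the ratio of
# measured to simulated amplitude acts as each experiment's energy-scale
# factor. The two ratios below are hypothetical placeholders.
ratio_experiment_a = 1.05
ratio_experiment_b = 0.98

scale_ratio = ratio_experiment_a / ratio_experiment_b
# Consistency at the ~10% level means the ratio deviates from 1 by < 0.10.
consistent_at_10_percent = abs(scale_ratio - 1.0) < 0.10
```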
Recently, we obtained improved reference values by the manufacturer of the reference source, which significantly changed the absolute calibration of LOPES. We reanalyzed previously published LOPES measurements, studying the impact of the changed calibration. The main effect is an overall decrease of the LOPES amplitude scale by a factor of $2.6 \pm 0.2$, affecting all previously published values for measurements of the electric-field strength. This results in a major change in the conclusion of the paper 'Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations' published in Astroparticle Physics 50-52 (2013) 76-91: With the revised calibration, LOPES measurements now are compatible with CoREAS simulations, but in tension with REAS 3.11 simulations. Since CoREAS is the latest version of the simulation code incorporating the current state of knowledge on the radio emission of air showers, this new result indicates that the absolute amplitude prediction of current simulations now is in agreement with experimental data. Comparing LOPES measurements of air-shower radio emission with REAS 3.11 and CoREAS simulations (1309.5920) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Dec. 
18, 2015 astro-ph.HE Cosmic ray air showers emit radio pulses at MHz frequencies, which can be measured with radio antenna arrays - like LOPES at the Karlsruhe Institute of Technology in Germany. To improve the understanding of the radio emission, we test theoretical descriptions with measured data. The observables used for these tests are the absolute amplitude of the radio signal, and the shape of the radio lateral distribution. We compare lateral distributions of more than 500 LOPES events with two recent and public Monte Carlo simulation codes, REAS 3.11 and CoREAS (v 1.0). The absolute radio amplitudes predicted by REAS 3.11 are in good agreement with the LOPES measurements. The amplitudes predicted by CoREAS are lower by a factor of two, and marginally compatible with the LOPES measurements within the systematic scale uncertainties. In contrast to any previous versions of REAS, REAS 3.11 and CoREAS now reproduce the shape of the measured lateral distributions correctly. This reflects a remarkable progress compared to the situation a few years ago, and it seems that the main processes for the radio emission of air showers are now understood: The emission is mainly due to the geomagnetic deflection of the electrons and positrons in the shower. Less important but not negligible is the Askaryan effect (net charge variation). Moreover, we confirm that the refractive index of the air plays an important role, since it changes the coherence conditions for the emission: Only the new simulations including the refractive index can reproduce rising lateral distributions which we observe in a few LOPES events. Finally, we show that the lateral distribution is sensitive to the energy and the mass of the primary cosmic ray particles. Revised absolute amplitude calibration of the LOPES experiment (1508.03471) K. Link, T. Huege, W.D. Apel, J.C. Arteaga-Velázquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. 
Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, R. Hiller, J.R. Hörandel, A. Horneffer, D. Huber, P.G. Isar, K-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Aug. 14, 2015 hep-ex, physics.ins-det, astro-ph.IM, astro-ph.HE One of the main aims of the LOPES experiment was the evaluation of the absolute amplitude of the radio signal of air showers. This is of special interest since the radio technique offers the possibility for an independent and highly precise determination of the energy scale of cosmic rays on the basis of signal predictions from Monte Carlo simulations. For the calibration of the amplitude measured by LOPES we used an external source. Previous comparisons of LOPES measurements and simulations of the radio signal amplitude predicted by CoREAS revealed a discrepancy of the order of a factor of two. A re-measurement of the reference calibration source, now performed for the free field, was recently performed by the manufacturer. The updated calibration values lead to a lowering of the reconstructed electric field measured by LOPES by a factor of $2.6 \pm 0.2$ and therefore to a significantly better agreement with CoREAS simulations. We discuss the updated calibration and its impact on the LOPES analysis results. Investigation of the radio wavefront of air showers with LOPES measurements and CoREAS simulations (ARENA 2014) (1507.07753) F.G. Schröder, W.D. Apel, J.C. Arteaga-Velazquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. 
Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus July 28, 2015 astro-ph.IM, astro-ph.HE We investigated the radio wavefront of cosmic-ray air showers with LOPES measurements and CoREAS simulations: the wavefront is of approximately hyperbolic shape and its steepness is sensitive to the shower maximum. For this study we used 316 events with an energy above 0.1 EeV and zenith angles below $45^\circ$ measured by the LOPES experiment. LOPES was a digital radio interferometer consisting of up to 30 antennas on an area of approximately 200 m x 200 m at an altitude of 110 m above sea level. Triggered by KASCADE-Grande, LOPES measured the radio emission between 43 and 74 MHz, and our analysis might strictly hold only for such conditions. Moreover, we used CoREAS simulations made for each event, which show much clearer results than the measurements suffering from high background. A detailed description of our result is available in our recent paper published in JCAP09(2014)025. The present proceeding contains a summary and focuses on some additional aspects, e.g., the asymmetry of the wavefront: According to the CoREAS simulations the wavefront is slightly asymmetric, but on a much weaker level than the lateral distribution of the radio amplitude. Reconstruction of the energy and depth of maximum of cosmic-ray air-showers from LOPES radio measurements (1408.2346) W. D. Apel, J. C. Arteaga-Velazquez, L. Bähren, K. Bekk, M. Bertaina, P. L. Biermann, J. Blümer, H. Bozdog, I. M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. 
Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J. R. Hörandel, A. Horneffer, D. Huber, T. Huege, P. G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H. J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F. G. Schröder, O. Sima, G. Toma, G. C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J. A. Zensus Aug. 11, 2014 hep-ex, astro-ph.IM, astro-ph.HE LOPES is a digital radio interferometer located at Karlsruhe Institute of Technology (KIT), Germany, which measures radio emission from extensive air showers at MHz frequencies in coincidence with KASCADE-Grande. In this article, we explore a method (slope method) which leverages the slope of the measured radio lateral distribution to reconstruct crucial attributes of primary cosmic rays. First, we present an investigation of the method on the basis of pure simulations. Second, we directly apply the slope method to LOPES measurements. Applying the slope method to simulations, we obtain uncertainties on the reconstruction of energy and depth of shower maximum Xmax of 13% and 50 g/cm^2, respectively. Applying it to LOPES measurements, we are able to reconstruct energy and Xmax of individual events with upper limits on the precision of 20-25% for the primary energy and 95 g/cm^2 for Xmax, despite strong human-made noise at the LOPES site. The wavefront of the radio signal emitted by cosmic ray air showers (1404.3283) W.D. Apel, J.C. Arteaga-Velázquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. 
Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Aug. 7, 2014 hep-ex, astro-ph.IM, astro-ph.HE Analyzing measurements of the LOPES antenna array together with corresponding CoREAS simulations for more than 300 measured events with energy above $10^{17}\,$eV and zenith angles smaller than $45^\circ$, we find that the radio wavefront of cosmic-ray air showers is of approximately hyperbolic shape. The simulations predict a slightly steeper wavefront towards East than towards West, but this asymmetry is negligible against the measurement uncertainties of LOPES. At axis distances $\gtrsim 50\,$m, the wavefront can be approximated by a simple cone. According to the simulations, the cone angle is clearly correlated with the shower maximum. Thus, we confirm earlier predictions that arrival time measurements can be used to study the longitudinal shower development, but now using a realistic wavefront. Moreover, we show that the hyperbolic wavefront is compatible with our measurement, and we present several experimental indications that the cone angle is indeed sensitive to the shower development. Consequently, the wavefront can be used to statistically study the primary composition of ultra-high energy cosmic rays. At LOPES, the experimentally achieved precision for the shower maximum is limited by measurement uncertainties to approximately $140\,$g/cm$^2$. But the simulations indicate that under better conditions this method might yield an accuracy for the atmospheric depth of the shower maximum, $X_\mathrm{max}$, better than $30\,$g/cm$^2$. This would be competitive with the established air-fluorescence and air-Cherenkov techniques, where the radio technique offers the advantage of a significantly higher duty-cycle. 
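A hyperbolic wavefront of the kind described above can be parametrized so that it is flat near the shower axis and approaches a cone of fixed angle far away. The specific formula and numbers here are a generic sketch under that assumption, not the exact LOPES parametrization:

```python
import math

def delay_times_c(r: float, rho: float, b: float) -> float:
    """Arrival-time delay (multiplied by c, so in meters) at axis
    distance r for a hyperbolic wavefront with asymptotic cone angle
    rho and curvature parameter b (larger b = flatter near the axis)."""
    return math.sqrt((r * math.sin(rho)) ** 2 + b ** 2) - b

rho = math.radians(2.0)  # hypothetical cone angle
b = 10.0                 # hypothetical curvature scale in meters
# Near the axis the delay vanishes; far away it approaches r * sin(rho),
# i.e. a simple cone, consistent with the conical approximation quoted
# for axis distances beyond ~50 m.
```

Since the cone angle correlates with the depth of the shower maximum, fitting rho to measured arrival times is what makes the wavefront sensitive to $X_\mathrm{max}$.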
Finally, the hyperbolic wavefront can be used to reconstruct the shower geometry more accurately, which potentially allows a better reconstruction of all other shower parameters, too. Highlights from the Pierre Auger Observatory (1310.4620) Antoine Letessier-Selvon, A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muniz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, T. Antivcic, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, R. Bardenet, J. Baeuml, C. Baus, J.J. Beatty, K.H. Becker, A. Belletoile, J.A. Bellido, S. BenZvi, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blumer, M. Bohacova, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, R.E. Burton, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, S.H. Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceicao, F. Contreras, H. Cook, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, G. De La Vega, W.J.M. de Mello Junior, J.R.T. de Mello Neto, I. De Mitri, V. de Souza, K.D. de Vries, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, J.C. Diaz, M.L. Diaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, J.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipcic, N. Foerster, B.D. Fox, C.E. 
Fracchiolla, E.D. Fraenkel, O. Fratu, U. Frohlich, B. Fuchs, R. Gaior, R.F. Gamarra, S. Gambetta, B. Garcia, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, H. Gemmeke, P.L. Ghia, M. Giller, J. Gitto, C. Glaser, H. Glass, F. Gomez Albarracin, M. Gomez Berisso, P.F. Gomez Vitale, P. Goncalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, P. Homola, J.R. Hoerandel, P. Horvath, M. Hrabovsky, D. Huber, T. Huege, A. Insolia, P.G. Isar, S. Jansen, C. Jarne, M. Josebachuili, K. Kadija, O. Kambeitz, K.H. Kampert, P. Karhan, P. Kasper, I. Katkov, B. Kegl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, J. Knapp d, R. Krause, N. Krohm, O. Kroemer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leao, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, I. Lhenry-Yvon, K. Link, R. Lopez, A. Lopez Aguera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, C. Macolino, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Maris, H.R. Marquez Falcon, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martinez Bravo, D. Martraire, J.J. Masias Meza, H.J. Mathes, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Micanovic, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, B. Morales, C. Morello, J.C. Moreno, M. Mostafa, C.A. Moura, M.A. Muller, G. Muller, M. Munchmeyer, R. Mussa, G. 
Navarra, J.L. Navarro, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.T. Nhung, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, L. Novzka, J. Oehlschlager, A. Olinto, M. Oliveira, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, G. Parente, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pekala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, M. Pontz, A. Porcelli, T. Preda, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, H. Rivera, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Martino, J. Rodriguez Rojo, M.D. Rodriguez-Frias, G. Ros, J. Rosado, T. Rossler, M. Roth, B. Rouille-d'Orfeuil, E. Roulet, A.C. Rovero, C. Ruhle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sanchez, P. Sanchez-Lucas, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovanek, F.G. Schroeder, A. Schulz, J. Schulz, S.J. Sciutto, M. Scuderi, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Smialkowski, R. Smida, G.R. Snow, P. Sommers, J. Sorokin, H. Spinka, R. Squartini, Y.N. Srivastava, S. Stanic, J. Stapleton, J. Stasielak, M. Stephan, M. Straub, A. Stutz, F. Suarez, T. Suomijarvi, A.D. Supanitsky, T. Susa, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, A. Tapia, M. Tartare, O. Tacscuau, R. Tcaciuc, N.T. Thao, J. Tiffenberg, C. Timmermans, W. Tkaczyk, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tome, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, D.B. Tridapalli, E. Trovato, M. Tueros, R. Ulrich, M. Unger, J.F. Valdes Galicia, I. Valino, L. Valore, G. 
van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cardenas, G. Varner, J.R. Vazquez, R.A. Vazquez, D. Veberic, V. Verzi, J. Vicha, M. Videla, L. Villasenor, H. Wahlberg, P. Wahrlich, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, S. Westerhoff, B.J. Whelan, A. Widom, G. Wieczorek, L. Wiencke, B. Wilczynska, H. Wilczynski, M. Will, C. Williams, T. Winchen, B. Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski (the Pierre Auger Collaboration) Oct. 19, 2013 astro-ph.HE The Pierre Auger Observatory is the world's largest cosmic ray observatory. Our current exposure reaches nearly 40,000 km$^2$ str and provides us with an unprecedented quality data set. The performance and stability of the detectors and their enhancements are described. Data analyses have led to a number of major breakthroughs. Among these we discuss the energy spectrum and the searches for large-scale anisotropies. We present analyses of our X$_{max}$ data and show how it can be interpreted in terms of mass composition. We also describe some new analyses that extract mass sensitive parameters from the 100% duty cycle SD data. A coherent interpretation of all these recent results opens new directions. The consequences regarding the cosmic ray composition and the properties of UHECR sources are briefly discussed. Pierre Auger Observatory and Telescope Array: Joint Contributions to the 33rd International Cosmic Ray Conference (ICRC 2013) (1310.0647) The Telescope Array, Pierre Auger Collaborations: T. Abu-Zayyad, M. Allen, R. Anderson, R. Azuma, E. Barcikowski, J. W Belz, D. R. Bergman, S. A. Blake, R. Cady, M. J. Chae, B. G. Cheon, J. Chiba, M. Chikawa, W. R. Cho, T. Fujii, M. Fukushima, K. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. Hibino, K. Honda, D. Ikeda, N. Inoue, T. Ishii, R. Ishimori, H. 
Ito, D. Ivanov, C. C. H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H. B. Kim, J. H. Kim, J. H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y. J. Kwon, J. Lan, J. P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J. N. Matthews, M. Minamino, K. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. Nakamura, H. Nanpei, T. Nonaka, A. Nozato, S. Ogio, S. Oh, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I. H. Park, M. S. Pshirkov, D. C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, A. L. Sampson, L. M. Scott, P. D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B. K. Shin, T. Shirahama, J. D. Smith, P. Sokolsky, R. W. Springer, B. T. Stokes, S. R. Stratton, T. A. Stroman, M. Takamura, M. Takeda, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S. B. Thomas, G. B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, Y. Wada, T. Wong, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel, A. Aab, P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I.F.M. Albuquerque, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muniz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, T. Antivcic, C. Aramo, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, K.B. Barber, R. Bardenet, J. Baeuml, C. Baus, J.J. Beatty, K.H. Becker, A. Belletoile, J.A. Bellido, S. BenZvi, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blumer, M. Bohacova, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, P. Buchholz, A. Bueno, R.E. Burton, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, S.H. 
Cheng, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, L. Collica, M.R. Coluccia, R. Conceicao, F. Contreras, H. Cook, M.J. Cooper, S. Coutu, C.E. Covault, A. Criss, J. Cronin, A. Curutiu, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, S.J. de Jong, G. De La Vega, W.J.M. de Mello Junior, J.R.T. de Mello Neto, I. De Mitri, V. de Souza, K.D. de Vries, L. del Peral, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, J.C. Diaz, M.L. Diaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, J.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipcic, N. Foerster, B.D. Fox, C.E. Fracchiolla, E.D. Fraenkel, O. Fratu, U. Frohlich, B. Fuchs, R. Gaior, R.F. Gamarra, S. Gambetta, B. Garcia, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, G. Garilli, A. Gascon Bravo, H. Gemmeke, P.L. Ghia, M. Giller, J. Gitto, C. Glaser, H. Glass, F. Gomez Albarracin, M. Gomez Berisso, P.F. Gomez Vitale, P. Goncalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gorham, P. Gouffon, S. Grebe, N. Griffith, A.F. Grillo, T.D. Grubb, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, A.E. Herve, G.C. Hill, C. Hojvat, N. Hollon, P. Homola, J.R. Hoerandel, P. Horvath, M. Hrabovsky, D. Huber, T. Huege, A. Insolia, P.G. Isar, S. Jansen, C. Jarne, M. Josebachuili, K. Kadija, O. Kambeitz, K.H. Kampert, P. Karhan, P. Kasper, I. Katkov, B. Kegl, B. Keilhauer, A. Keivani, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, J. Knapp, R. Krause, N. Krohm, O. Kroemer, D. Kruppke-Hansen, D. Kuempel, N. Kunka, G. La Rosa, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, M.S.A.B. Leao, D. Lebrun, P. 
Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, R. Lopez, A. Lopez Aguera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, C. Macolino, M. Malacari, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Maris, H.R. Marquez Falcon, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martinez Bravo, D. Martraire, J.J. Masias Meza, H.J. Mathes, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, S. Messina, R. Meyhandan, S. Micanovic, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, B. Morales, C. Morello, J.C. Moreno, M. Mostafa, C.A. Moura, M.A. Muller, G. Muller, M. Munchmeyer, R. Mussa, G. Navarra, J.L. Navarro, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.T. Nhung, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, L. Novzka, J. Oehlschlager, A. Olinto, M. Oliveira, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, G. Parente, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pekala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, S. Petrera, A. Petrolini, Y. Petrov, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, M. Pontz, A. Porcelli, T. Preda, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, H. Rivera, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, I. Rodriguez Cabo, G. Rodriguez Fernandez, J. Rodriguez Martino, J. Rodriguez Rojo, M.D. Rodriguez-Frias, G. Ros, J. Rosado, T. Rossler, M. Roth, B. Rouille-d'Orfeuil, E. Roulet, A.C. Rovero, C. Ruhle, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, F. Salesa Greus, G. Salina, F. Sanchez, P. Sanchez-Lucas, C.E. Santo, E. 
Santos, E.M. Santos, F. Sarazin, B. Sarkar, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovanek, F.G. Schroeder, A. Schulz, J. Schulz, S.J. Sciutto, M. Scuderi, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, O. Sima, A. Smialkowski, R. Smida, G.R. Snow, P. Sommers, J. Sorokin, H. Spinka, R. Squartini, Y.N. Srivastava, S. Stanic, J. Stapleton, J. Stasielak, M. Stephan, M. Straub, A. Stutz, F. Suarez, T. Suomijarvi, A.D. Supanitsky, T. Susa, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, A. Tapia, M. Tartare, O. Tacscuau, R. Tcaciuc, N.T. Thao, J. Tiffenberg, C. Timmermans, W. Tkaczyk, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tome, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, D.B. Tridapalli, E. Trovato, M. Tueros, R. Ulrich, M. Unger, J.F. Valdes Galicia, I. Valino, L. Valore, G. van Aar, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cardenas, G. Varner, J.R. Vazquez, R.A. Vazquez, D. Veberic, V. Verzi, J. Vicha, M. Videla, L. Villasenor, H. Wahlberg, P. Wahrlich, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, S. Westerhoff, B.J. Whelan, A. Widom, G. Wieczorek, L. Wiencke, B. Wilczynska, H. Wilczynski, M. Will, C. Williams, T. Winchen, B. Wundheiler, S. Wykes, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski (The Pierre Auger Collaboration) Oct. 2, 2013 astro-ph.IM, astro-ph.HE Joint contributions of the Pierre Auger and Telescope Array Collaborations to the 33rd International Cosmic Ray Conference, Rio de Janeiro, Brazil, July 2013: cross-calibration of the fluorescence telescopes, large scale anisotropies and mass composition. Investigation on the energy and mass composition of cosmic rays using LOPES radio data (1309.2410) N. Palmieri, W.D. Apel, J.C. Arteaga, L. 
Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmid, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus - LOPES Collaboration - Sept. 10, 2013 astro-ph.HE The sensitivity to the mass composition as well as the reconstruction of the energy of the primary particle are explored here by leveraging the features of the radio lateral distribution function. For the purpose of this analysis, a set of events measured with the LOPES experiment is reproduced with the latest CoREAS radio simulation code. Based on simulation predictions, a method which exploits the slope of the radio lateral distribution function is developed (Slope Method) and directly applied on measurements. As a result, the possibility to reconstruct both the energy and the depth of the shower maximum of the cosmic ray air shower using radio data and achieving relatively small uncertainties is presented. Vectorial Radio Interferometry with LOPES 3D (1308.2512) D. Huber, W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, T. Huege, P.G. Isar, K.H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. 
Schmid, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus Aug. 12, 2013 astro-ph.IM, astro-ph.HE One successful detection technique for high-energy cosmic rays is based on the radio signal emitted by the charged particles in an air shower. The LOPES experiment at Karlsruhe Institute of Technology, Germany, has made major contributions to the evolution of this technique. LOPES was reconfigured several times to improve and further develop the radio detection technique. In the latest setup LOPES consisted of 10 tripole antennas. With this, LOPES 3D was the first cosmic ray experiment measuring all three vectorial field components at once and thereby gaining the full information about the electric field vector. We present an analysis based on the data taken with special focus on the benefits of a direct measurement of the vertical polarization component. We demonstrate that by measuring all polarization components the detection and reconstruction efficiency is increased and noisy single channel data can be reconstructed by utilising the information from the other two channels of one antenna station. Comparison of LOPES data and CoREAS simulations using a full detector simulation (ICRC2013) (1308.2523) K. Link, W.D. Apel, J.C. Arteaga-Velázquez, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A.
Zensus The LOPES experiment at the Karlsruhe Institute of Technology, Germany, has been measuring radio emission of air showers for almost 10 years. For a better understanding of the emission process a detailed comparison of data with simulations is necessary. This is possible using a newly developed detector simulation including all LOPES detector components. After propagating a simulated event through this full detector simulation a standard LOPES like event file is written. LOPES data and CoREAS simulations can then be treated equally and the same analysis software can be applied to both. This gives the opportunity to compare data and simulations directly. Furthermore, the standard analysis software can be used with simulations which provide the possibility to check the accuracy regarding reconstruction of air shower parameters. We point out the advantages and present first results using such a full LOPES detector simulation. A comparison of LOPES data and the Monte Carlo code CoREAS based on an analysis using this detector simulation is shown. The <lnA> study in the primary energy range 10^{16} - 10^{17} eV with the Muon Tracking Detector in the KASCADE-Grande experiment (1308.2059) P. Łuczak, W.D. Apel, J.C. Arteaga-Velázquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, C. Curcio, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, J. Engler, B. Fuchs, D. Fuhrmann, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, D. Huber, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, M. Ludwig, H.J. Mathes, H.J. Mayer, M. Melissas, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, N. Palmieri, M. Petcu, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, J. Zabierowski Aug. 
9, 2013 astro-ph.HE The KASCADE-Grande Muon Tracking Detector enables with high accuracy the measurement of directions of EAS muons with energy above 0.8 GeV and up to 700 m distance from the shower centre. Reconstructed muon tracks are used to investigate muon pseudorapidity (eta) distributions. These distributions are nearly identical to the pseudorapidity distributions of their parent mesons produced in hadronic interactions. Comparison of the eta distributions from measured and simulated showers can be used to test the quality of the high energy hadronic interaction models. In this context a comparison of the QGSJet-II-2 and QGSJet-II-4 model will be shown. The pseudorapidity distributions reflect the longitudinal development of EAS and, as such, are sensitive to the mass of the cosmic rays primary particles. With various parameters of the eta distribution, obtained from the MTD data, it is possible to calculate the mean logarithmic mass of CRs. The results of the <lnA> analysis in the primary energy range 10^{16} eV - 10^{17} eV with the 1st quartile (Q1) of eta distribution will be presented. Mass sensitivity in the radio lateral distribution function (1308.0046) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, P. Buchholz, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, M. Finger, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmid, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. 
Zensus Measuring the mass composition of ultra-high energy cosmic rays is one of the main tasks in the cosmic rays field. Here we are exploring the composition signature in the coherent electromagnetic emission from extensive air showers, detected in the MHz frequency range. One of the experiments that successfully detects radio events in the frequency band of 40-80 MHz is the LOPES experiment at KIT. It is a digital interferometric antenna array and has the important advantage of taking data in coincidence with the particle detector array KASCADE-Grande. A possible method to look at the composition signature in the radio data, predicted by simulations, concerns the radio lateral distribution function, since its slope is strongly correlated with Xmax. Recent comparison between REAS3 simulations and LOPES data showed a significantly improved agreement in the lateral distribution function and for this reason an analysis on a possible LOPES mass signature through the slope method is promising. Trying to reproduce a realistic case, proton and iron showers are simulated with REAS3 using the LOPES selection information as input parameters. The obtained radio lateral distribution slope is analyzed in detail. The lateral slope method to look at the composition signature in the radio data is shown here and a possible signature of mass composition in the LOPES data is discussed. Reconstructing energy and Xmax of cosmic ray air showers using the radio lateral distribution measured with LOPES (1308.0053) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. 
Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmid, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus The LOPES experiment, a digital radio interferometer located at KIT (Karlsruhe Institute of Technology), obtained remarkable results for the detection of radio emission from extensive air showers at MHz frequencies. Features of the radio lateral distribution function (LDF) measured by LOPES are explored in this work for a precise reconstruction of two fundamental air shower parameters: the primary energy and the shower Xmax. The method presented here has been developed on (REAS3-)simulations, and is applied to LOPES measurements. Despite the high human-made noise at the LOPES site, it is possible to reconstruct both the energy and Xmax for individual events. On the one hand, the energy resolution is promising and comparable to the one of the co-located KASCADE-Grande experiment. On the other hand, Xmax values are reconstructed with the LOPES measurements with a resolution of 90 g/cm2 . A precision on Xmax better than 30 g/cm2 is predicted and achievable in a region with a lower human-made noise level. KASCADE-Grande measurements of energy spectra for elemental groups of cosmic rays (1306.6283) The KASCADE-Grande Collaboration: W.D. Apel, J.C. Arteaga-Velàzquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, J. Engler, M. Finger, B. Fuchs, D. Fuhrmann, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, D. Huber, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, H.J. Mayer, M. Melissas, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, N. Palmieri, M. Petcu, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, M. 
Wommer, J. Zabierowski June 26, 2013 astro-ph.HE The KASCADE-Grande air shower experiment [W. Apel, et al. (KASCADE-Grande collaboration), Nucl. Instrum. Methods A 620 (2010) 202] consists of, among others, a large scintillator array for measurements of charged particles, Nch, and of an array of shielded scintillation counters used for muon counting, Nmu. KASCADE-Grande is optimized for cosmic ray measurements in the energy range 10 PeV to about 2000 PeV, where exploring the composition is of fundamental importance for understanding the transition from galactic to extragalactic origin of cosmic rays. Following earlier studies of the all-particle and the elemental spectra reconstructed in the knee energy range from KASCADE data [T. Antoni, et al. (KASCADE collaboration), Astropart. Phys. 24 (2005) 1], we have now extended these measurements to beyond 200 PeV. By analysing the two-dimensional shower size spectrum Nch vs. Nmu for nearly vertical events, we reconstruct the energy spectra of different mass groups by means of unfolding methods over an energy range where the detector is fully efficient. The procedure and its results, which are derived based on the hadronic interaction model QGSJET-II-02 and which yield a strong indication for a dominance of heavy mass groups in the covered energy range and for a knee-like structure in the iron spectrum at around 80 PeV, are presented. This confirms and further refines the results obtained by other analyses of KASCADE-Grande data, which already gave evidence for a knee-like structure in the heavy component of cosmic rays at about 80 PeV [W. Apel, et al. (KASCADE-Grande collaboration), Phys. Rev. Lett. 107 (2011) 171104]. Ankle-like Feature in the Energy Spectrum of Light Elements of Cosmic Rays Observed with KASCADE-Grande (1304.7114) W.D. Apel, J.C. Arteaga-Velàzquez, K. Bekk, M. Bertaina, J. Blümer, H. Bozdog, I.M. Brancus, E. Cantoni, A. Chiavassa, F. Cossavella, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. 
Engel, J. Engler, M. Finger, B. Fuchs, D. Fuhrmann, H.J. Gils, R. Glasstetter, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, D. Huber, T. Huege, K.-H. Kampert, D. Kang, H.O. Klages, K. Link, P. Łuczak, M. Ludwig, H.J. Mathes, H.J. Mayer, M. Melissas, J. Milke, B. Mitrica, C. Morello, J. Oehlschläger, S. Ostapchenko, N. Palmieri, M. Petcu, T. Pierog, H. Rebel, M. Roth, H. Schieler, S. Schoo, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, H. Ulrich, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski April 26, 2013 astro-ph.HE Recent results of the KASCADE-Grande experiment provided evidence for a mild knee-like structure in the all-particle spectrum of cosmic rays at $E = 10^{16.92 \pm 0.10} \, \mathrm{eV}$, which was found to be due to a steepening in the flux of heavy primary particles. The spectrum of the combined components of light and intermediate masses was found to be compatible with a single power law in the energy range from $10^{16.3} \, \mathrm{eV}$ to $10^{18} \, \mathrm{eV}$. In this paper, we present an update of this analysis by using data with increased statistics, originating both from a larger data set including more recent measurements and by using a larger fiducial area. In addition, optimized selection criteria for enhancing light primaries are applied. We find a spectral feature for light elements, namely a hardening at $E = 10^{17.08 \pm 0.08} \, \mathrm{eV}$ with a change of the power law index from $-3.25 \pm 0.05$ to $-2.79 \pm 0.08$. Thunderstorm Observations by Air-Shower Radio Antenna Arrays (1303.7068) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, P. Buchholz, S. Buitink, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, P. Doll, M. Ender, R. Engel, H. Falcke, M. Finger, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. 
Mathes, M. Melissas, C. Morello, S. Nehls, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus March 28, 2013 astro-ph.IM, astro-ph.HE Relativistic charged particles present in extensive air showers lead to a coherent emission of radio pulses which are measured to identify the shower-initiating high-energy cosmic rays. Especially during thunderstorms, there are additional strong electric fields in the atmosphere, which can lead to further multiplication and acceleration of the charged particles and thus influence the form and strength of the radio emission. For a reliable energy reconstruction of the primary cosmic ray by means of the measured radio signal it is very important to understand how electric fields affect the radio emission. In addition, lightning strikes are a prominent source of broadband radio emissions that are visible over very long distances. On the one hand, this causes difficulties in the detection of the much weaker signal of the air shower. On the other hand, the recorded signals can be used to study features of the lightning development. The detection of cosmic rays via the radio emission and the influence of strong electric fields on this detection technique are investigated with the LOPES experiment in Karlsruhe, Germany. The important question of whether lightning is initiated by the high electron density at the maximum of a high-energy cosmic-ray air shower was also investigated, but could not be answered by LOPES. Nevertheless, these investigations demonstrate the capabilities of EAS radio antenna arrays for lightning studies. We report on studies of LOPES-measured radio signals of air showers taken during thunderstorms and give a short outlook on new measurements dedicated to searching for correlations of lightning and cosmic rays.
LOPES 3D reconfiguration and first measurements (1303.7070) D. Huber, W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, P. Buchholz, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, M. Finger, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F.G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus The Radio detection technique of high-energy cosmic rays is based on the radio signal emitted by the charged particles in an air shower due to their deflection in the Earth's magnetic field. The LOPES experiment at Karlsruhe Institute of Technology, Germany with its simple dipoles made major contributions to the revival of this technique. LOPES is working in the frequency range from 40 to 80 MHz and was reconfigured several times to improve and further develop the radio detection technique. In the current setup LOPES consists of 10 tripole antennas which measure the complete electric field vector of the radio emission from cosmic rays. LOPES is the first experiment measuring all three vectorial components at once and thereby gaining the full information about the electric field vector and not only a two-dimensional projection. Such a setup including also measurements of the vertical electric field component is expected to increase the sensitivity to inclined showers and help to advance the understanding of the emission mechanism. We present the reconfiguration and calibration procedure of LOPES 3D and discuss first measurements. 
LOPES 3D - vectorial measurements of radio emission from cosmic ray induced air showers (1303.7080) March 28, 2013 astro-ph.HE LOPES 3D is able to measure all three components of the electric field vector of the radio emission from air showers. This allows a better comparison with emission models. The measurement of the vertical component increases the sensitivity to inclined showers. By measuring all three components of the electric field vector LOPES 3D demonstrates by how much the reconstruction accuracy of primary cosmic ray parameters increases. Thus LOPES 3D evaluates the usefulness of vectorial measurements for large scale applications. LOPES-3D, an antenna array for full signal detection of air-shower radio emission (1303.6808) W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, P. Buchholz, E. Cantoni, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, M. Finger, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H. J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, F. G. Schröder, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, M. Wommer, J. Zabierowski, J.A. Zensus To better understand the radio signal emitted by extensive air-showers and to further develop the radio detection technique of high-energy cosmic rays, the LOPES experiment was reconfigured to LOPES-3D. LOPES-3D is able to measure all three vectorial components of the electric field of radio emission from cosmic ray air showers. 
The additional measurement of the vertical component ought to increase the reconstruction accuracy of primary cosmic ray parameters like direction and energy, provides an improved sensitivity to inclined showers, and will help to validate simulation of the emission mechanisms in the atmosphere. LOPES-3D will evaluate the feasibility of vectorial measurements for large scale applications. In order to measure all three electric field components directly, a tailor-made antenna type (tripoles) was deployed. The change of the antenna type necessitated new pre-amplifiers and an overall recalibration. The reconfiguration and the recalibration procedure are presented and the operationality of LOPES-3D is demonstrated. Cosmic Ray Measurements with LOPES: Status and Recent Results (ARENA 2012) (1301.2557) F.G. Schröder, W.D. Apel, J.C. Arteaga, L. Bähren, K. Bekk, M. Bertaina, P.L. Biermann, J. Blümer, H. Bozdog, I.M. Brancus, A. Chiavassa, K. Daumiller, V. de Souza, F. Di Pierro, P. Doll, R. Engel, H. Falcke, B. Fuchs, D. Fuhrmann, H. Gemmeke, C. Grupen, A. Haungs, D. Heck, J.R. Hörandel, A. Horneffer, D. Huber, T. Huege, P.G. Isar, K.-H. Kampert, D. Kang, O. Krömer, J. Kuijpers, K. Link, P. Luczak, M. Ludwig, H.J. Mathes, M. Melissas, C. Morello, J. Oehlschläger, N. Palmieri, T. Pierog, J. Rautenberg, H. Rebel, M. Roth, C. Rühle, A. Saftoiu, H. Schieler, A. Schmidt, O. Sima, G. Toma, G.C. Trinchero, A. Weindl, J. Wochele, J. Zabierowski, J.A. Zensus Jan. 11, 2013 astro-ph.IM, astro-ph.HE LOPES is a digital antenna array at the Karlsruhe Institute of Technology, Germany, for cosmic-ray air-shower measurements. Triggered by the co-located KASCADE-Grande air-shower array, LOPES detects the radio emission of air showers via digital radio interferometry. We summarize the status of LOPES and recent results. In particular, we present an update on the reconstruction of the primary-particle properties based on almost 500 events above 100 PeV. 
With LOPES, the arrival direction can be reconstructed with a precision of at least 0.65{\deg}, and the energy with a precision of at least 20 %, which, however, does not include systematic uncertainties on the absolute energy scale. For many particle and astrophysics questions the reconstruction of the atmospheric depth of the shower maximum, Xmax, is important, since it yields information on the type of the primary particle and its interaction with the atmosphere. Recently, we found experimental evidence that the slope of the radio lateral distribution is indeed sensitive to the longitudinal development of the air shower, but unfortunately, the Xmax precision at LOPES is limited by the high level of anthropogenic radio background. Nevertheless, the developed methods can be transferred to next generation experiments with lower background, which should provide an Xmax precision competitive to other detection technologies. Antennas for the Detection of Radio Emission Pulses from Cosmic-Ray induced Air Showers at the Pierre Auger Observatory (1209.3840) P. Abreu, M. Aglietta, M. Ahlers, E.J. Ahn, I.F.M. Albuquerque, D. Allard, I. Allekotte, J. Allen, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, L. Anchordoqui, S. Andringa, T. Antičić, C. Aramo, E. Arganda, F. Arqueros, H. Asorey, P. Assis, J. Aublin, M. Ave, M. Avenier, G. Avila, A.M. Badescu, M. Balzer, K.B. Barber, A.F. Barbosa, R. Bardenet, S.L.C. Barroso, B. Baughman, J. Bäuml, C. Baus, J.J. Beatty, K.H. Becker, A. Bellétoile, J.A. Bellido, S. BenZvi, C. Berat, X. Bertou, P.L. Biermann, P. Billoir, F. Blanco, M. Blanco, C. Bleve, H. Blümer, M. M. Boháčová, D. Boncioli, C. Bonifazi, R. Bonino, N. Borodai, J. Brack, I. Brancus, P. Brogueira, W.C. Brown, R. Bruijn, P. Buchholz, A. Bueno, L. Buroker, R.E. Burton, K.S. Caballero-Mora, B. Caccianiga, L. Caramete, R. Caruso, A. Castellina, O. Catalano, G. Cataldi, L. Cazon, R. Cester, J. Chauvin, S.H. Cheng, A. 
Chiavassa, J.A. Chinellato, J. Chirinos Diaz, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, H. Cook, M.J. Cooper, J. Coppens, A. Cordier, S. Coutu, C.E. Covault, A. Creusot, A. Criss, J. Cronin, A. Curutiu, S. Dagoret-Campagne, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, M. De Domenico, C. De Donato, S.J. de Jong, G. De La Vega, W.J.M. de Mello Junior, J.R.T. de Mello Neto, I. De Mitri, V. de Souza, K.D. de Vries, L. del Peral, M. del Río, O. Deligny, H. Dembinski, N. Dhital, C. Di Giulio, M.L. Díaz Castro, P.N. Diep, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, P.N. Dong, A. Dorofeev, J.C. dos Anjos, M.T. Dova, D. D'Urso, I. Dutan, J. Ebr, R. Engel, M. Erdmann, C.O. Escobar, J. Espadanal, A. Etchegoyen, P. Facal San Luis, H. Falcke, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, S. Fliescher, C.E. Fracchiolla, E.D. Fraenkel, O. Fratu, U. Fröhlich, B. Fuchs, R. Gaior, R.F. Gamarra, S. Gambetta, B. García, S.T. Garcia Roca, D. Garcia-Gamez, D. Garcia-Pinto, A. Gascon Bravo, H. Gemmeke, P.L. Ghia, M. Giller, J. Gitto, H. Glass, M.S. Gold, G. Golup, F. Gomez Albarracin, M. Gómez Berisso, P.F. Gómez Vitale, P. Gonçalves, J.G. Gonzalez, B. Gookin, A. Gorgi, P. Gouffon, E. Grashorn, S. Grebe, N. Griffith, M. Grigat, A.F. Grillo, Y. Guardincerri, F. Guarino, G.P. Guedes, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, A.E. Herve, C. Hojvat, N. Hollon, V.C. Holmes, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, F. Ionita, A. Italiano, S. Jansen, C. Jarne, S. Jiraskova, M. Josebachuili, K. Kadija, K.H. Kampert, P. Karhan, P. Kasper, I. Katkov, B. Kégl, B. Keilhauer, A. Keivani, J.L. Kelley, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, J. Knapp, D.-H. Koang, K. Kotera, N. Krohm, O. Krömer, D. Kruppke-Hansen, D. Kuempel, J.K. Kulbartz, N. 
Kunka, G. La Rosa, C. Lachaud, D. LaHurd, L. Latronico, R. Lauer, P. Lautridou, S. Le Coz, M.S.A.B. Leão, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, R. López, A. Lopez Agüera, K. Louedec, J. Lozano Bahilo, L. Lu, A. Lucero, M. Ludwig, H. Lyberis, M.C. Maccarone, C. Macolino, S. Maldera, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, J. Marin, V. Marin, I.C. Maris, H.R. Marquez Falcon, G. Marsella, D. Martello, L. Martin, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurel, D. Maurizio, P.O. Mazur, G. Medina-Tanco, M. Melissas, D. Melo, E. Menichetti, A. Menshikov, P. Mertsch, C. Meurer, R. Meyhandan, S. Mićanović, M.I. Micheletti, I.A. Minaya, L. Miramonti, L. Molina-Bueno, S. Mollerach, M. Monasor, D. Monnier Ragaigne, F. Montanet, B. Morales, C. Morello, E. Moreno, J.C. Moreno, M. Mostafá, C.A. Moura, M.A. Muller, G. Müller, M. Münchmeyer, R. Mussa, G. Navarra, J.L. Navarro, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.T. Nhung, M. Niechciol, L. Niemietz, N. Nierstenhoefer, D. Nitz, D. Nosek, L. Nožka, J. Oehlschläger, A. Olinto, M. Ortiz, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, N. Palmieri, G. Parente, E. Parizot, A. Parra, S. Pastor, T. Paul, M. Pech, J. Pȩkala, R. Pelayo, I.M. Pepe, L. Perrone, R. Pesce, E. Petermann, S. Petrera, A. Petrolini, Y. Petrov, C. Pfendner, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, V.H. Ponce, M. Pontz, A. Porcelli, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, J. Rautenberg, O. Ravel, D. Ravignani, B. Revenu, J. Ridky, S. Riggi, M. Risse, P. Ristori, H. Rivera, V. Rizi, J. Roberts, W. Rodrigues de Carvalho, G. Rodriguez, I. Rodriguez Cabo, J. Rodriguez Martino, J. Rodriguez Rojo, M.D. Rodríguez-Frías, G. Ros, J. Rosado, T. Rossler, M. Roth, B. Rouillé-d'Orfeuil, E. Roulet, A.C. Rovero, C. Rühle, A. Saftoiu, F. Salamida, H. 
Salazar, F. Salesa Greus, G. Salina, F. Sánchez, C.E. Santo, E. Santos, E.M. Santos, F. Sarazin, B. Sarkar, S. Sarkar, R. Sato, N. Scharf, V. Scherini, H. Schieler, P. Schiffer, A. Schmidt, O. Scholten, H. Schoorlemmer, J. Schovancova, P. Schovánek, F. Schröder, S. Schulte, D. Schuster, S.J. Sciutto, M. Scuderi, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, I. Sidelnik, G. Sigl, H.H. Silva Lopez, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, J. Sorokin, H. Spinka, R. Squartini, Y.N. Srivastava, S. Stanic, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, T. Suomijärvi, A.D. Supanitsky, T. Šuša, M.S. Sutherland, J. Swain, Z. Szadkowski, M. Szuba, A. Tapia, M. Tartare, O. Taşcău, R. Tcaciuc, N.T. Thao, D. Thomas, J. Tiffenberg, C. Timmermans, W. Tkaczyk, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, P. Travnicek, D.B. Tridapalli, G. Tristram, E. Trovato, M. Tueros, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, H. Wahlberg, P. Wahrlich, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, S. Westerhoff, B.J. Whelan, A. Widom, G. Wieczorek, L. Wiencke, B. Wilczyńska, H. Wilczyński, M. Will, C. Williams, T. Winchen, M. Wommer, B. Wundheiler, T. Yamamoto, T. Yapici, P. Younk, G. Yuan, A. Yushkov, B. Zamorano Garcia, E. Zas, D. Zavrtanik, M. Zavrtanik, I. Zaw, A. Zepeda, J. Zhou, Y. Zhu, M. Zimbres Silva, M. Ziolkowski (The Pierre Auger Collaboration), D. Charrier, L. Denis, G. Hilgers, L. Mohrmann, B. Philipps, O. Seeger Sept. 18, 2012 astro-ph.IM The Pierre Auger Observatory is exploring the potential of the radio detection technique to study extensive air showers induced by ultra-high energy cosmic rays. 
The Auger Engineering Radio Array (AERA) addresses both technological and scientific aspects of the radio technique. A first phase of AERA has been operating since September 2010 with detector stations observing radio signals at frequencies between 30 and 80 MHz. In this paper we present comparative studies to identify and optimize the antenna design for the final configuration of AERA consisting of 160 individual radio detector stations. The transient nature of the air shower signal requires a detailed description of the antenna sensor. As the ultra-wideband reception of pulses is not widely discussed in antenna literature, we review the relevant antenna characteristics and enhance theoretical considerations towards the impulse response of antennas including polarization effects and multiple signal reflections. On the basis of the vector effective length we study the transient response characteristics of three candidate antennas in the time domain. Observing the variation of the continuous galactic background intensity we rank the antennas with respect to the noise level added to the galactic signal.
Volume 20 Supplement 11 Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: genomics A hybrid and scalable error correction algorithm for indel and substitution errors of long reads Arghya Kusum Das1, Sayan Goswami2, Kisung Lee2 & Seung-Jong Park2 Long-read sequencing has shown promise in overcoming the short-length limitations of second-generation sequencing by providing more complete assembly. However, the analysis of long sequencing reads is challenged by their higher error rates (e.g., 13% vs. 1%) and higher cost ($0.3 vs. $0.03 per Mbp) compared to short reads. In this paper, we present a new hybrid error correction tool, called ParLECH (Parallel Long-read Error Correction using Hybrid methodology). The error correction algorithm of ParLECH is distributed in nature and efficiently utilizes the k-mer coverage information of high-throughput Illumina short-read sequences to rectify the PacBio long-read sequences. ParLECH first constructs a de Bruijn graph from the short reads, and then replaces the indel error regions of the long reads with their corresponding widest path (or maximum min-coverage path) in the short-read-based de Bruijn graph. ParLECH then utilizes the k-mer coverage information of the short reads to divide each long read into a sequence of low- and high-coverage regions, followed by majority voting to rectify each substituted error base. ParLECH outperforms the latest state-of-the-art hybrid error correction methods on real PacBio datasets. Our experimental evaluation results demonstrate that ParLECH can correct large-scale real-world datasets in an accurate and scalable manner. ParLECH can correct the indel errors of human genome PacBio long reads (312 GB) with Illumina short reads (452 GB) in less than 29 h using 128 compute nodes. ParLECH can align more than 92% of the bases of an E. coli PacBio dataset with the reference genome, demonstrating its accuracy. 
ParLECH can scale to over terabytes of sequencing data using hundreds of computing nodes. The proposed hybrid error correction methodology is novel and rectifies both indel and substitution errors present in the original long reads or newly introduced by the short reads. The rapid development of genome sequencing technologies has become the major driving force for genomic discoveries. The second-generation sequencing technologies (e.g., Illumina, Ion Torrent) have been providing researchers with the required throughput at significantly low cost ($0.03/million-bases), which enabled the discovery of many new species and variants. Although they are being widely utilized for understanding the complex phenotypes, they are typically incapable of resolving long repetitive elements, common in various genomes (e.g., eukaryotic genomes), because of the short read lengths [1]. To address the issues with the short read lengths, third-generation sequencing technologies (e.g., PacBio, Oxford Nanopore) have started emerging recently. By producing long reads greater than 10 kbp, these third-generation sequencing platforms provide researchers with significantly less fragmented assembly and the promise of a much better downstream analysis. However, the production costs of these long sequences are almost 10 times more expensive than those of the short reads, and the analysis of these long reads is severely constrained by their higher error rate. Motivated by this, we develop ParLECH (Parallel Long-read Error Correction using Hybrid methodology). ParLECH uses the power of MapReduce and distributed NoSQL to scale with terabytes of sequencing data [2]. Utilizing the power of these big data programming models, we develop fully distributed algorithms to replace both the indel and substitution errors of long reads. To rectify the indel errors, we first create a de Bruijn graph from the Illumina short reads. 
The indel errors of the long reads are then corrected using a widest path algorithm that maximizes the minimum k-mer coverage between two vertices in the de Bruijn graph. To correct the substitution errors, we divide the long read into a series of low- and high-coverage regions by utilizing the median statistics of the k-mer coverage information of the Illumina short reads. The substituted error bases are then replaced separately in those low- and high-coverage regions. ParLECH can achieve higher accuracy and scalability than existing error correction tools. For example, ParLECH successfully aligns 95% of E. coli long reads, maintaining larger N50 compared to the existing tools. We demonstrate the scalability of ParLECH by correcting a 312 GB human genome PacBio dataset, leveraging a 452 GB Illumina dataset (64x coverage), on 128 nodes in less than 29 h. The second-generation sequencing platforms produce short reads at an error rate of 1-2% [3], in which most of the errors are substitution errors. However, the low cost of production results in high coverage of data, which enables self-correction of the errors without using any reference genome. Utilizing the basic fact that the k-mers resulting from an error base will have significantly lower coverage compared to the actual k-mers, many error correction tools have been proposed, such as Quake [4], Reptile [5], Hammer [6], RACER [7], Coral [8], Lighter [9], Musket [10], Shrec [11], DecGPU [12], Echo [13], and ParSECH [14]. Unlike second-generation sequencing platforms, the third-generation sequencing platforms, such as PacBio and Oxford Nanopore sequencers, produce long reads where indel (insertion/deletion) errors are dominant [1]. Therefore, the error correction tools designed for substitution errors in short reads cannot produce accurate results for long reads. However, it is common to leverage the relatively lower error rate of the short-read sequences to improve the quality of long reads. 
While improving the quality of long reads, these hybrid error correction tools also reduce the cost of the pipeline by utilizing the complementary low-cost and high-quality short reads. LoRDEC [15], Jabba [16], Proovread [17], PacBioToCA [18], LSC [19], and ColorMap [20] are a few examples of hybrid error correction tools. LoRDEC [15] and Jabba [16] use a de Bruijn graph (DBG)-based methodology for error correction. Both tools build the DBG from Illumina short reads. LoRDEC then corrects the error regions in long reads through local assembly on the DBG, while Jabba iteratively uses different sizes of k-mer to polish the unaligned regions of the long reads. Some hybrid error correction tools use alignment-based approaches for correcting the long reads. For example, PacBioToCA [18] and LSC [19] first map the short reads to the long reads to create an overlap graph. The long reads are then corrected through a consensus-based algorithm. Proovread [17] reaches the consensus through iterative alignment procedures that increase the sensitivity of the long reads incrementally in each iteration. ColorMap [20] keeps information of consensual dissimilarity on each edge of the overlap graph and then utilizes Dijkstra's shortest path algorithm to rectify the indel errors. Although these tools produce accurate results in terms of successful alignments, their error correction process is lossy in nature, which reduces the coverage of the resultant data set. For example, Jabba, PacBioToCA, and Proovread use aggressive trimming of the error regions of the long reads instead of correcting them, losing a large number of bases after the correction [21] and thereby limiting the practical use of the resultant data sets. Furthermore, these tools use a stand-alone methodology to improve the base quality of the long reads, which suffers from scalability issues that limit their practical adoption for large-scale genomes. 
On the contrary, ParLECH is distributed in nature, and it can scale to terabytes of sequencing data on hundreds of compute nodes. ParLECH utilizes the DBG for error correction like LoRDEC. However, to improve the error correction accuracy, we propose a widest path algorithm that maximizes the minimum k-mer coverage between two vertices of the DBG. By utilizing the k-mer coverage information during the local assembly on the DBG, ParLECH is capable of producing more accurate results than LoRDEC. Unlike Jabba, PacBioToCA, and Proovread, ParLECH does not use aggressive trimming to avoid lossy correction. ParLECH instead further improves the base quality by correcting the substitution errors either present in the original long reads or newly introduced by the short reads during the hybrid correction of the indel errors. Although there are several tools to rectify substitution errors for second-generation sequences (e.g., [4, 5, 9, 13]), this phase is often overlooked in the error correction tools developed for long reads. However, this phase is important for hybrid error correction because a significant number of substitution errors are introduced by the Illumina reads. Existing pipelines depend on polishing tools, such as Pilon [22] and Quiver [23], to further improve the quality of the corrected long reads. Unlike the distributed error correction pipeline of ParLECH, these polishing tools are stand-alone and cannot scale with large genomes. LorMA [24], CONSENT [25], and Canu [26] are a few self-error correction tools that utilize long reads only to rectify the errors in them. These tools can automatically bypass the substitution errors of the short reads and are capable of producing accurate results. However, the sequencing cost per base for long reads is extremely high, and so it would be prohibitive to get long reads with the high coverage that is essential for error correction without reference genomes. 
Although Canu reduces the coverage requirement to half of that of LorMA and CONSENT by using the tf-idf weighting scheme for long reads, the almost 10-times-higher cost of PacBio sequencing remains a major obstacle to utilizing it for large genomes. Because of this practical limitation, we do not report the accuracy of these self-error correction tools in this paper. Rationale behind the indel error correction Since we leverage the lower error rate of Illumina reads to correct the PacBio indel errors, let us first describe an error model for Illumina sequences and its consequence on the DBG constructed from these reads. We first observe that k-mers, DNA words of a fixed length k, tend to have similar abundances within a read. This is a well-known property of k-mers that stems from each read originating from a single source molecule of DNA [27]. Let us consider two reads R1 and R2 representing the same region of the genome, where R1 has one error base. Assuming that the k-mers between the positions posbegin and posend represent an error region in R1 where the error base is at position \({pos}_{error} = \frac {pos_{end}+{pos}_{begin}}{2}\), we can make the following claim. Claim 1: The coverage of at least one k-mer of R1 in the region between posbegin and posend is lower than the coverage of any k-mer in the same region of R2. A brief theoretical rationale of the claim can be found in Additional file 1. Figure 1 shows the rationale behind the claim. Widest Path Example: Select correct path for high coverage error k-mers Rationale behind the substitution error correction After correcting the indel errors with the Illumina reads, a substantial number of substitution errors are introduced into the PacBio reads as they dominate in the Illumina short-read sequences. To rectify those errors, we first divide each PacBio long read into smaller subregions like short reads. 
Next, we classify only those subregions as errors where most of the k-mers have high coverage, and only a few low-coverage k-mers exist as outliers. Specifically, we use Pearson's skew coefficient (or median skew coefficient) to classify the true and error subregions. Figure 2 shows the histogram of three different types of subregions in a genomic dataset. Figure 2a has similar numbers of low- and high-coverage k-mers, making the skewness of this subregion almost zero. Hence, it is not considered as error. Figure 2b is also classified as true because the subregion is mostly populated with the low-coverage k-mers. Figure 2c is classified as error because the subregion is largely skewed towards the high-coverage k-mers, and only a few low-coverage k-mers exist as outliers. Existing substitution error correction tools do not analyze the coverage of neighboring k-mers and often classify the true yet low-coverage k-mers (e.g., Fig. 2b) as errors. Skewness in k-mer coverage statistics Another major advantage of our median-based methodology is that the accuracy of the method has a lower dependency on the value of k. Median values are robust because, for a relatively small value of k, a few substitution errors will not alter the median k-mer abundance of the read [28]. However, these errors will increase the skewness of the read. The robustness of the median values in the presence of sequencing errors is shown mathematically in Additional file 1. Big data framework in the context of genomic error correction Error correction for sequencing data is not only data- and compute-intensive but also search-intensive because the size of the k-mer spectrum increases almost exponentially with the increasing value of k (i.e., up to 4^k unique k-mers), and we need to search within this huge search space. For example, a large genome with 1 million reads of length 5000 bp involves more than 5 billion searches in a set of almost 10 billion unique k-mers. 
Since existing hybrid error correction tools are not designed for large-scale genome sequence data such as human genomes, we design ParLECH as a scalable and distributed framework equipped with Hadoop and Hazelcast. Hadoop is an open-source implementation of Google's MapReduce, a fully parallel and distributed framework for large-scale computation. It reads the data from a distributed file system called the Hadoop Distributed File System (HDFS) in small subsets. In the Map phase, a Map function executes on each subset, producing the output in the form of key-value pairs. These intermediate key-value pairs are then grouped based on the unique keys. Finally, a Reduce function executes on each group, producing the final output on HDFS. Hazelcast [29] is a NoSQL database, which stores large-scale data in the distributed memory using a key-value format. Hazelcast uses MurmurHash to distribute the data evenly over multiple nodes and to reduce collisions. The data can be stored in and retrieved from Hazelcast using hash table functions (such as get and put) in O(1) time. Multiple Map and Reduce functions can access this hash table simultaneously and independently, improving the search performance of ParLECH. Error correction pipeline Figure 3 shows the indel error correction pipeline of ParLECH. It consists of three phases: 1) constructing a de Bruijn graph, 2) locating errors in long reads, and 3) correcting the errors. We store the raw sequencing reads in HDFS, while Hazelcast is used to store the de Bruijn graph created from the Illumina short reads. We develop the graph construction algorithm following the MapReduce programming model and use Hadoop for this purpose. In the subsequent phases, we use both Hadoop and Hazelcast to locate and correct the indel errors. Finally, we write the indel error-corrected reads into HDFS. We describe each phase in detail in the subsequent sections. 
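The Map/shuffle/Reduce flow and the Hazelcast-style key-value storage described above can be illustrated with a toy, single-process k-mer counter. All names here are illustrative: in ParLECH the map and reduce tasks run distributed under Hadoop, and a Hazelcast map replaces the plain dict.

```python
from collections import defaultdict

def map_phase(reads, k):
    # Map: emit an intermediate (k-mer, 1) pair for every k-mer
    # occurrence in every read.
    for read in reads:
        for i in range(len(read) - k + 1):
            yield read[i:i + k], 1

def shuffle(pairs):
    # Shuffle: group intermediate values by their key (the k-mer).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts per k-mer; a dict stands in for the
    # Hazelcast put() calls that ParLECH would issue here.
    return {kmer: sum(vals) for kmer, vals in groups.items()}

counts = reduce_phase(shuffle(map_phase(["ACGT", "CGTA"], 3)))
# counts["CGT"] == 2 because "CGT" occurs in both reads
```

Because the groups are independent after the shuffle, the reduce work parallelizes trivially across nodes, which is the property ParLECH exploits.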
Indel error correction ParLECH has three major steps for hybrid correction of indel errors, as shown in Fig. 4. In the first step, we construct a DBG from the Illumina short reads with the coverage information of each k-mer stored in each vertex. In the second step, we partition each PacBio long read into a sequence of strong and weak regions (alternatively, correct and error regions, respectively) based on the k-mer coverage information stored in the DBG. We select the right and left boundary k-mers of two consecutive strong regions as the source and destination vertices, respectively, in the DBG. Finally, in the third step, we replace each weak region (i.e., indel error region) of the long read between those two boundary k-mers with the corresponding widest path in the DBG, which maximizes the minimum k-mer coverage between those two vertices. Error correction steps Figure 5 shows the substitution error correction pipeline of ParLECH. It has two different phases: 1) locating errors and 2) correcting errors. Like the indel error correction, the computation of each phase is fully distributed with Hadoop. These Hadoop-based algorithms work on top of the indel error-corrected reads that were generated in the last phase and stored in HDFS. The same k-mer spectrum that was generated from the Illumina short reads and stored in Hazelcast is used to correct the substitution errors as well. Substitution error correction De Bruijn graph construction and counting k-mers Algorithm 1 explains the MapReduce algorithm for de Bruijn graph construction, and Fig. 6 shows the working of the algorithm. The map function scans each read of the data set and emits each k-mer as an intermediate key and its previous and next k-mer as the value. The intermediate key represents a vertex in the de Bruijn graph, whereas the previous and the next k-mers in the intermediate value represent an incoming edge and an outgoing edge, respectively. 
An associated count of occurrence (1) is also emitted as a part of the intermediate value. After the map function completes, the shuffle phase partitions these intermediate key-value pairs on the basis of the intermediate key (the k-mer). Finally, the reduce function accumulates all the previous k-mers and next k-mers corresponding to the key as the incoming and outgoing edges, respectively. The same reduce function also sums together all the intermediate counts (i.e., 1) emitted for that particular k-mer. At the end of the reduce function, the entire graph structure and the count for each k-mer are stored in the NoSQL database of Hazelcast using Hazelcast's put method. For improved performance, we emit only a single nucleotide character (i.e., A, T, G, or C instead of the entire k-mer) to store the incoming and outgoing edges. The actual k-mer can be obtained by prepending/appending that character to the k−1 prefix/suffix of the vertex k-mer. De Bruijn graph construction and k-mer count Locating the indel errors of long reads To locate the errors in the PacBio long reads, ParLECH uses the k-mer coverage information from the de Bruijn graph stored in Hazelcast. The entire process is designed in an embarrassingly parallel fashion and developed as a Hadoop Map-only job. Each of the map tasks scans through each of the PacBio reads and generates the k-mers with the same value of k as in the de Bruijn graph. Then, for each of those k-mers, we search the coverage in the graph. If the coverage falls below a predefined threshold, we mark the k-mer as weak, indicating an indel error in the long read. It is possible to find more than one consecutive error in a long read. In that case, we mark the entire region as weak. If the coverage is above the predefined threshold, we denote the region as strong or correct. To rectify the weak region, ParLECH uses the widest path algorithm described in the next subsection. 
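A hedged, single-process sketch of the two phases just described: the de Bruijn graph construction of Algorithm 1 (with the map, shuffle, and reduce stages collapsed into one loop, and a dict standing in for Hazelcast) and the coverage-threshold scan that marks weak k-mer positions in a long read. The function names and the threshold value are assumptions for illustration, not ParLECH's actual API.

```python
from collections import defaultdict

def build_dbg(short_reads, k):
    """Collapsed map/reduce: vertex k-mer -> {in, out, count}.
    As in the paper, edges are stored as single characters; the
    neighbor k-mer is recovered by appending the character to the
    vertex's (k-1)-suffix (and prepending for incoming edges)."""
    graph = defaultdict(lambda: {"in": set(), "out": set(), "count": 0})
    for read in short_reads:
        for i in range(len(read) - k + 1):
            node = graph[read[i:i + k]]
            node["count"] += 1                  # reduce-side sum of 1s
            if i > 0:
                node["in"].add(read[i - 1])     # incoming edge
            if i + k < len(read):
                node["out"].add(read[i + k])    # outgoing edge
    return graph

def mark_regions(long_read, k, graph, threshold):
    """Label each k-mer position of the long read strong/weak by its
    coverage in the DBG (the Map-only location job)."""
    return ["strong" if graph[long_read[i:i + k]]["count"] >= threshold
            else "weak"
            for i in range(len(long_read) - k + 1)]

dbg = build_dbg(["ACGTC", "ACGTC", "CGTCA"], 3)
print(mark_regions("ACGTCA", 3, dbg, threshold=2))
```

Consecutive weak labels would then be merged into a single weak region whose flanking strong k-mers become the source and destination vertices for the path search.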
Correcting the indel errors Like locating the errors, our correction algorithm is also embarrassingly parallel and developed as a Hadoop Map-only job. Like LoRDEC, we use the pair of strong k-mers that enclose a weak region of a long read as the source and destination vertices in the DBG. Any path in the DBG between those two vertices denotes a sequence that can be assembled from the short reads. We implement the widest path algorithm for this local assembly. The widest path algorithm maximizes the minimum k-mer coverage of a path in the DBG. We use the widest path based on our assumption that the probability of having the k-mer with the minimum coverage is higher in a path generated from a read with sequencing errors than in a path generated from a read without sequencing errors for the same region in a genome. In other words, even if there are some k-mers with high coverage in a path, it is highly likely that the path includes some k-mer with low coverage that will be an obstacle to being selected as the widest path, as illustrated in Fig. 1. Therefore, ParLECH is equipped with the widest path technique to find a more accurate sequence to correct the weak region in the long read. Algorithm 2 shows our widest path algorithm implemented in ParLECH, a slight modification of Dijkstra's shortest path algorithm using a priority queue that leads to a time complexity of O(E log V). Instead of computing the shortest paths, ParLECH traverses the graph and updates the width of each path from the source vertex as the minimum width of any edge on the path (line 15). Locating the substitution errors Algorithm 3 shows the process to locate substitution base errors. To locate the substitution errors in the long reads, we first divide the long reads into shorter fragments. As the k-mers in a smaller subregion tend to have similar abundances [27], this divides the longer reads into a sequence of high- and low-coverage fragments. 
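Before continuing with the fragment analysis, the widest path search of Algorithm 2 above can be sketched as a bottleneck variant of Dijkstra's algorithm: the summed-distance relaxation is replaced by a max-min width update over vertex coverages, explored with a max-heap. The adjacency dict and coverage map below are hypothetical stand-ins for the Hazelcast-backed DBG.

```python
import heapq

def widest_path(graph, coverage, src, dst):
    """graph: vertex -> iterable of successor vertices.
    Returns (width, path) for the maximum-bottleneck path from src to
    dst, where a path's width is its minimum vertex coverage, or None
    if dst is unreachable."""
    best = {src: coverage[src]}
    heap = [(-coverage[src], src, [src])]       # negate width: max-heap
    while heap:
        neg_w, v, path = heapq.heappop(heap)
        width = -neg_w
        if v == dst:
            return width, path
        for u in graph.get(v, ()):
            w = min(width, coverage[u])         # bottleneck update
            if w > best.get(u, 0):              # wider path to u found
                best[u] = w
                heapq.heappush(heap, (-w, u, path + [u]))
    return None

graph = {"S": ["A", "B"], "A": ["T"], "B": ["T"]}
coverage = {"S": 10, "A": 2, "B": 8, "T": 9}
print(widest_path(graph, coverage, "S", "T"))   # → (8, ['S', 'B', 'T'])
```

With the priority queue this keeps the O(E log V) bound the paper cites, since each vertex is re-pushed only when a strictly wider path to it is found.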
If a fragment belongs to a low-coverage area of the genome, most of the k-mers in that fragment are expected to have low coverage. Otherwise, the k-mers are expected to have high coverage. This methodology enables ParLECH to better distinguish between true-yet-low-coverage and error-yet-high-coverage k-mers. By default, ParLECH uses the length of the short reads as the length of the shorter fragments. However, it can be easily modified with a user-defined length. The last fragment of a long read can have a length shorter than the default (or user-defined) length. This fragment is always ignored for correcting the substitution error, as it is considered insufficient for gathering any statistics. After dividing the long reads into shorter fragments, we calculate the Pearson's skew coefficient (mentioned as skewThreshold in Algorithm 3) of the k-mer coverage of each fragment as a threshold to classify those fragments as true or error. If the skew coefficient of the fragment lies in a certain interval, the fragment is classified as a true fragment without any error. Furthermore, the fragments with mostly low-coverage k-mers are also ignored. All the other fragments (i.e., the fragments highly skewed towards high-coverage k-mers) are classified as erroneous. Through this classification, all the low-coverage areas of the genome will be considered as correct even if they have low-coverage k-mers, as long as the coverage is similar to that of the neighboring k-mers. After classifying the fragments as true and error, we divide all the error fragments into high and low coverage. If the median k-mer coverage of a fragment is greater than the median coverage of the entire k-mer spectrum, the fragment is classified as high coverage. Otherwise, the fragment belongs to a low-coverage area. ParLECH uses a pattern of true and error k-mers to localize the errors and searches for the set of corrections with a maximum likelihood that makes all k-mers true. 
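The fragment classification just described can be sketched as follows. Pearson's median skew coefficient, 3(mean − median)/σ, flags fragments dominated by high-coverage k-mers with a few low-coverage outliers (the outliers pull the mean below the median, giving a strongly negative skew), and a comparison of the fragment median against the spectrum median then routes error fragments to the high- or low-coverage correction path. The threshold value and the sign convention here are illustrative assumptions, not ParLECH's tuned defaults.

```python
import statistics

def median_skew(coverages):
    # Pearson's second (median) skew coefficient: 3 * (mean - median) / stdev.
    std = statistics.pstdev(coverages)
    if std == 0:
        return 0.0
    return 3 * (statistics.mean(coverages) - statistics.median(coverages)) / std

def classify_fragment(coverages, spectrum_median, skew_threshold=0.5):
    """Return 'true' for balanced or mostly-low fragments, otherwise
    ('error', 'high'|'low') depending on the fragment's median k-mer
    coverage relative to the whole spectrum's median."""
    if median_skew(coverages) >= -skew_threshold:
        return "true"
    side = "high" if statistics.median(coverages) > spectrum_median else "low"
    return ("error", side)

print(classify_fragment([28, 30, 32, 30, 29, 31], 12))  # balanced -> 'true'
print(classify_fragment([30, 31, 29, 30, 2, 3], 12))    # -> ('error', 'high')
```

Note how a uniformly low-coverage fragment such as [3, 4, 5, 4, 3, 5] also comes out as 'true', matching the paper's point that low-but-consistent coverage should not be mistaken for error.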
Correcting the substitution errors To rectify the substitution errors, ParLECH uses a majority voting algorithm similar to that of Quake [4]. However, we have two major differences. First, ParLECH's majority voting algorithm is fully distributed and can scale over hundreds of nodes. Second, unlike Quake, ParLECH uses different thresholds for the low- and high-coverage areas of the genome to improve the accuracy. For each error base detected in the previous phase, ParLECH substitutes the base with each of the different nucleotide characters (i.e., A, T, G, and C) and calculates the coverage of all the k-mers with that base. Finally, the error base is replaced with the one for which all those k-mers meet or exceed the specified threshold for that area. In this section, we show the experimental results of ParLECH using various real-world sequence datasets. We evaluate ParLECH with respect to four real data sets including E. coli, yeast, fruit fly, and human genome. The details of the data sets are summarized in Table 1. The first three of them are relatively small-sized genomes. We use them to compare the accuracy of ParLECH with the existing hybrid error correction tools such as LoRDEC, Jabba, and Proovread. These data sets are also used to analyze the scalability and compare other resource consumption statistics such as memory requirement and CPU-hours. Table 1 Datasets The fourth one is the largest among all. It is a large human genome data set that consists of almost 764 GB of sequencing reads including both Illumina and PacBio sequences. We use it to showcase the scaling capability of ParLECH with hundreds of GBs of sequencing reads over hundreds of compute nodes. In our experiments, the other existing tools could not produce results for this data set. Computing environment To evaluate ParLECH, we use the SuperMic [30] HPC cluster, and Table 2 summarizes its configuration. The maximum number of compute nodes we can use for a single job is 128. 
Each node has 20 cores, 64 GB main memory, and one 250 GB hard disk drive (HDD). Note that the main bottleneck for our Hadoop jobs running on top of disk-based HDFS is the I/O throughput because each node is equipped with only one HDD. We expect that the performance of ParLECH can be significantly improved by using multiple HDDs per node and/or SSDs. Our previous work [31–33] demonstrates the effects of various computing environments for large-scale data processing. Table 2 Experimental environment Accuracy metrics We evaluate the accuracy of ParLECH with respect to three different metrics as follows: 1) %Aligned reads and %aligned bases: These accuracy metrics indicate how well the corrected long reads are aligned to the reference genome. We report the %alignment both in terms of the total number of reads as well as the total bases present in the data set. For all the data sets other than the human genome, we use BLASR [34] to align the long reads to the reference genome as it reports longer alignments by bridging long indel errors. However, for the large human genome, we use BWA-mem [35] to get the alignment results quickly. 2) N50 statistics: It is also important to preserve the input read depth in the corrected data set. Shorter reads and/or reduced depth may show better alignment but may have a negative impact on downstream analyses. Hence, we measure the N50 statistics of the data sets to indicate the discarding or trimming of errors in the long reads instead of rectifying them. 3) Gain: We also use the gain metric [5] to measure the fraction of effectively corrected errors by ParLECH. The gain is defined as $$ Gain = \frac{TP-FP}{TP+FN} $$ where TP (true-positive) is the number of error bases that are successfully corrected, FP (false-positive) is the number of true bases that are wrongly changed, and FN (false-negative) is the number of error bases that are falsely detected as correct. To measure TP, FP, and FN, we follow the procedure described in [36]. 
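A small sketch of that gain computation, written in terms of two error sets: Em, the real sequencing errors of a read, and Er, the errors remaining after correction (TP = |Em∖Er|, FP = |Er∖Em|, FN = |Er∩Em|, per the procedure of [36]). The example positions are made up for illustration.

```python
def gain(em, er):
    """Gain = (TP - FP) / (TP + FN) over sets of error positions."""
    em, er = set(em), set(er)
    tp = len(em - er)   # real errors successfully corrected
    fp = len(er - em)   # true bases wrongly changed by the tool
    fn = len(er & em)   # real errors the tool left uncorrected
    return (tp - fp) / (tp + fn)

# e.g., 8 real errors: 6 fixed, 2 missed, and 1 new error introduced
em = {1, 2, 3, 4, 5, 6, 7, 8}
er = {7, 8, 20}
print(gain(em, er))     # (6 - 1) / (6 + 2) = 0.625
```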
Let r be an original read and rc be the read after correction. We derive the set of real sequencing errors Em by mapping r to the reference genome and recording differences. Then, we measure Er, the set of errors remaining in rc, by applying global alignment between rc and the genomic region where r was mapped to and recording the differences in the alignment. Finally, we calculate TP=|Em∖Er|, FP=|Er∖Em|, and FN=|Er∩Em|. Comparison with existing tools Table 3 compares the accuracy of ParLECH with that of LoRDEC, Jabba, and Proovread in terms of the percentage of aligned reads and aligned bases. Table 4, on the other hand, compares the accuracy in terms of gain. We measure the accuracy metrics using BLASR by running multiple instances of BLASR in parallel for efficiently processing large datasets. Table 3 Accuracy comparison (Alignments) Table 4 Accuracy comparison (Gain) The results demonstrate that ParLECH can rectify the indel errors with significantly higher accuracy compared to LoRDEC, both in terms of the aligned bases and gain. Like LoRDEC, ParLECH does not correct the long reads in which there is no strong k-mer. However, ParLECH searches for strong k-mers in all reads regardless of their length, while LoRDEC filters out reads whose length is less than a threshold. Although Jabba attains significantly higher alignment accuracy compared to ParLECH, this high alignment accuracy is attained at the cost of reduced depth. This is because, unlike ParLECH, Jabba chooses to discard several of the uncorrected reads instead of rectifying them. As shown in Table 3, the total number of reads in the resulting error-corrected dataset is significantly higher for ParLECH compared to Jabba. Proovread attains almost similar alignment accuracy compared to ParLECH. However, it trims many of the error regions in each read and breaks an erroneous longer read at the error region, producing multiple shorter reads. 
Consequently, Proovread produces significantly lower N50 compared to ParLECH. We have further improved the accuracy by correcting the substitution errors of the long reads. This phase is not present in LoRDEC. However, it has a substantial impact on improving the quality of the data. As shown in Tables 3 and 4, by correcting the substitution errors, ParLECH improves the quality of the dataset by 1 to 3% over the indel error-corrected output, both in terms of alignment and gain. Figure 7 demonstrates the scalability of different phases of ParLECH. Figure 7a demonstrates the scalability of each phase of ParLECH's indel error correction pipeline for the fruit fly dataset. The results show that the processing time of all three phases (i.e., constructing a de Bruijn graph, locating errors in long reads, and correcting errors in long reads) improves almost linearly with the increasing number of compute nodes. Therefore, the overall execution time of ParLECH also shows almost linear scalability as we add more compute nodes. Scalability of ParLECH. a Time to correct indel error of fruit fly dataset. b Time to correct subst. error of fruit fly dataset Figure 7b demonstrates the scalability of different phases of ParLECH's substitution error correction pipeline for the same fruit fly dataset. Like the indel error correction phases, these phases are also linearly scalable with the increasing number of nodes. Figure 8 compares ParLECH with existing error correction tools. As shown in Fig. 8a, on a single node for the same E. coli data, ParLECH performs almost 1.5 times faster than Jabba and almost 7.5 times faster than Proovread. On a single node, LoRDEC shows slightly better (1.2 times faster) performance than ParLECH because both tools have similar asymptotic complexity (O(E log V)) whereas ParLECH has some distributed computing overhead. 
However, utilizing the power of Hadoop and Hazelcast, the embarrassingly parallel algorithm of ParLECH can be easily distributed over multiple nodes and eventually outperforms LoRDEC, which is not designed for distributed computing, by several orders of magnitude. Even though the correction algorithm of LoRDEC can work independently on each of the long reads, the computation cannot be distributed because of the absence of a proper scheduler. Comparing execution time of ParLECH with existing error correction tools. a Time for hybrid correction of indel errors in E. coli long reads (1.032 GB). b Time for correction of substitution errors in E. coli short reads (13.50 GB) Figure 8b compares the substitution error correction pipeline with Quake [4], an existing tool to correct the substitution errors of Illumina short-read sequences. For the same reason mentioned above, ParLECH outperforms Quake by several orders of magnitude when distributed over multiple nodes. For a fair comparison with Quake, we use the E. coli Illumina dataset only for this experiment. Since the major motivation of ParLECH is to correct the long-read errors, we did not report the results of accuracy comparison between ParLECH and Quake in this paper. Effects of different traversal algorithms on indel error correction To better understand the benefit of our widest path algorithm (ParLECH WP), we compare its accuracy with that of two other graph traversal algorithms, which are popular in this domain. The first one is Dijkstra's shortest path algorithm (ParLECH SP), and the other one is a greedy traversal algorithm (ParLECH Greedy). Table 5 reports the accuracy results of all three algorithms over the real PacBio data sets. Table 5 Effects of different traversal algorithms ParLECH SP replaces the weak region in the long read with the sequence corresponding to the shortest path in the DBG. ParLECH Greedy always selects the vertex with the maximum coverage among all neighboring vertices during its traversal. 
For ParLECH Greedy, the traversal often ends up at the tip of a dead-end path. So, we use a branching factor b (100 by default) such that, after traversing b successive vertices from the source vertex, the algorithm backtracks if it cannot reach the destination vertex. The algorithm aborts when all successors from the source vertex have been visited using this branching factor.

Although ParLECH SP has similar performance to ParLECH WP, because of the counterintuitive nature of shortest paths and the strong (high-coverage) k-mers desired for the correction, it cannot take advantage of the k-mer coverage information in a straightforward way, which adversely impacts the accuracy. ParLECH Greedy, on the other hand, can take advantage of the k-mer coverage information, but its accuracy depends highly on a high value of the branching factor, which poses a severe limitation on its performance.

Our widest path algorithm not only optimizes the performance but also makes better use of the k-mer coverage information. The algorithm maximizes the minimum coverage of the k-mers in a path. Compared to both ParLECH SP and ParLECH Greedy, ParLECH WP better balances the coverage of all the k-mers in a particular path of the DBG, which improves the accuracy of the resultant dataset. As shown in Table 5, the widest path shows almost 15 to 25% better alignment accuracy compared to the greedy algorithm, which is found to perform worst among all. Compared to the shortest path algorithm, the widest path shows an almost 6 to 13% improvement for the dataset.

Resource consumption statistics

Using the power of Hadoop and Hazelcast, ParLECH is capable of trading off between CPU-hours and DRAM utilization. That is, based on the data size and the available resources, ParLECH can be tuned to utilize the disk space at the cost of higher execution time. Table 6 compares the CPU-hour and DRAM resource consumption of ParLECH with existing error correction tools with respect to the E. coli dataset.
For the best (lowest) execution time, ParLECH consumes almost the same CPU-hours as LoRDEC, which is significantly less compared to Jabba and Proovread. For this performance, ParLECH needs the entire k-mer spectrum in DRAM. Consequently, it utilizes almost 32 GB of DRAM. However, ParLECH can process the same E. coli data consuming a significantly smaller amount (only 5 GB) of DRAM if configured properly. However, the process then takes more time to finish because of context switching between the DRAM and the hard disk.

Table 6 Comparing resource consumption of ParLECH with existing error correction tools with respect to the E. coli dataset

Processing large-scale human genomes

To showcase the data handling capability of ParLECH with hundreds of GBs of sequencing data and its scaling capability with hundreds of computing nodes, we analyze a large human genome dataset. This 312 GB PacBio dataset includes more than 23 million long reads with an average length of 6,587 base pairs. The corresponding Illumina dataset is 452 GB in size and contains more than 1.4 billion reads with a read length of 101 base pairs. To analyze this large dataset (764 GB cumulative), we use 128 nodes of the SuperMic cluster. We tuned ParLECH for maximum performance; that means we distributed the entire de Bruijn graph in the memory available across the cluster. The indel error correction process takes about 28.6 h, as shown in Table 7. After this indel error correction, 78.3% of the reads and 75.4% of the bases are successfully aligned to the reference genome. The substitution error correction process took another 26.5 h, successfully aligning 79.73% of the reads and 80.24% of the bases to the reference genome.

Table 7 Correcting a human genome

Conclusion

In this paper, we present a distributed hybrid error correction framework for PacBio long reads, called ParLECH. For efficient and scalable analysis of large-scale sequence data, ParLECH makes use of Hadoop and Hazelcast.
ParLECH uses the de Bruijn graph and k-mer coverage information from the short reads to rectify the errors of the long reads. We develop a distributed version of the widest path algorithm to maximize the minimum k-mer coverage in a path of the de Bruijn graph constructed from the Illumina short reads. We replace the indel error regions in a long read with their corresponding widest path. To improve the substitution accuracy, we develop a median statistics-based strategy that considers the relative k-mer abundance in a specific area of a genome to take care of high- and low-coverage areas separately. Our experimental results show that ParLECH can scale with hundreds of compute nodes and can improve the quality of large-scale sequencing datasets in an accurate manner. While correcting the errors, ParLECH takes care of high- and low-coverage regions of the sequencing reads separately and is better capable of balancing the k-mer coverage based on the neighborhood. Hence, we believe that it is a good starting point for detecting and correcting errors in RNA and metagenome sequences. The source code for ParLECH is available at https://github.com/arghyakusumdas/GenomicErrorCorrection.

Abbreviations

CCT: Center for Computation and Technology; DBG: De Bruijn graph; DNA: Deoxyribonucleic acid; DRAM: Dynamic random access memory; GB: Gigabytes; HDFS: Hadoop distributed file system; HPC: High-performance computing; LSU: Louisiana State University; NoSQL: Not only SQL; ParLECH: Parallel long-read error correction using hybrid methodology; RNA: Ribonucleic acid; SSD: Solid-state drive; UW: University of Wisconsin

References

Goodwin S, McPherson JD, McCombie WR. Coming of age: ten years of next-generation sequencing technologies. Nat Rev Genet. 2016; 17(6):333–51.

Das AK, Lee K, Park S-J. ParLECH: Parallel long-read error correction with Hadoop. In: 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE: 2018. p. 341–8. https://doi.org/10.1109/bibm.2018.8621549.

Lou DI, Hussmann JA, McBee RM, Acevedo A, Andino R, Press WH, Sawyer SL.
High-throughput DNA sequencing errors are reduced by orders of magnitude using circle sequencing. Proc Natl Acad Sci. 2013; 110(49). https://doi.org/10.1073/pnas.1319590110.

Kelley DR, Schatz MC, Salzberg SL. Quake: quality-aware detection and correction of sequencing errors. Genome Biol. 2010. https://doi.org/10.1186/gb-2010-11-11-r116.

Yang X, Dorman KS, Aluru S. Reptile: representative tiling for short read error correction. Bioinformatics. 2010; 26(20). https://doi.org/10.1093/bioinformatics/btq468.

Medvedev P, Scott E, Kakaradov B, Pevzner P. Error correction of high-throughput sequencing datasets with non-uniform coverage. Bioinformatics. 2011; 27(13). https://doi.org/10.1093/bioinformatics/btr208.

Ilie L, Molnar M. RACER: Rapid and accurate correction of errors in reads. Bioinformatics. 2013. https://doi.org/10.1093/bioinformatics/btt407.

Salmela L, Schröder J. Correcting errors in short reads by multiple alignments. Bioinformatics. 2011; 27(11). https://doi.org/10.1093/bioinformatics/btr170.

Song L, Florea L, Langmead B. Lighter: fast and memory-efficient sequencing error correction without counting. Genome Biol. 2014; 15(11). https://doi.org/10.1186/s13059-014-0509-9.

Liu Y, Schröder J, Schmidt B. Musket: a multistage k-mer spectrum-based error corrector for Illumina sequence data. Bioinformatics. 2013; 29(3). https://doi.org/10.1093/bioinformatics/bts690.

Schröder J, Schröder H, Puglisi SJ, Sinha R, Schmidt B. SHREC: a short-read error correction method. Bioinformatics. 2009; 25. https://doi.org/10.1093/bioinformatics/btp379.

Liu Y, Schmidt B, Maskell DL. DecGPU: distributed error correction on massively parallel graphics processing units using CUDA and MPI. BMC Bioinformatics. 2011; 12(1). https://doi.org/10.1186/1471-2105-12-85.

Kao W-C, Chan AH, Song YS. ECHO: a reference-free short-read error correction algorithm. Genome Res. 2011; 21(7). https://doi.org/10.1101/gr.111351.110.

Das AK, Shams S, Goswami S, Platania R, Lee K, Park S-J.
ParSECH: Parallel sequencing error correction with Hadoop for large-scale genome. In: Proceedings of the 9th International BICob Conference. ISCA: 2017. https://www.searchdl.org/PagesPublic/ConfPaper.aspx?ConfPprID=26C12DF8-87DB-E711-A40B-E4B3180586B9.

Salmela L, Rivals E. LoRDEC: accurate and efficient long read error correction. Bioinformatics. 2014; 30(24):3506–14.

Miclotte G, Heydari M, Demeester P, Audenaert P, Fostier J. Jabba: Hybrid error correction for long sequencing reads using maximal exact matches. In: International Workshop on Algorithms in Bioinformatics. Springer: 2015. p. 175–88. https://doi.org/10.1007/978-3-662-48221-6_13.

Hackl T, Hedrich R, Schultz J, Förster F. proovread: large-scale high-accuracy PacBio correction through iterative short read consensus. Bioinformatics. 2014; 30(21):3004–11.

Koren S, Schatz MC, Walenz BP, Martin J, Howard JT, Ganapathy G, Wang Z, Rasko DA, McCombie WR, Jarvis ED, et al. Hybrid error correction and de novo assembly of single-molecule sequencing reads. Nat Biotechnol. 2012; 30(7):693–700.

Au KF, Underwood JG, Lee L, Wong WH. Improving PacBio long read accuracy by short read alignment. PLoS ONE. 2012; 7(10):e46679.

Haghshenas E, Hach F, Sahinalp SC, Chauve C. CoLoRMap: Correcting long reads by mapping short reads. Bioinformatics. 2016; 32(17):545–51.

Zhang H, Jain C, Aluru S. A comprehensive evaluation of long read error correction methods. BioRxiv. 2019:519330. https://doi.org/10.1101/519330.

Walker BJ, Abeel T, Shea T, Priest M, Abouelliel A, Sakthikumar S, Cuomo CA, Zeng Q, Wortman J, Young SK, et al. Pilon: an integrated tool for comprehensive microbial variant detection and genome assembly improvement. PLoS ONE. 2014; 9(11):e112963.

Hsu J. PacBio® variant and consensus caller. https://github.com/PacificBiosciences/GenomicConsensus. Last accessed on 03 Feb 2018.

Salmela L, Walve R, Rivals E, Ukkonen E. Accurate self-correction of errors in long reads using de Bruijn graphs. Bioinformatics. 2016; 33(6):799–806.
Morisse P, Marchet C, Limasset A, Lecroq T, Lefebvre A. CONSENT: Scalable self-correction of long reads with multiple sequence alignment. BioRxiv. 2019:546630. https://doi.org/10.1101/546630.

Koren S, Walenz BP, Berlin K, Miller JR, Bergman NH, Phillippy AM. Canu: scalable and accurate long-read assembly via adaptive k-mer weighting and repeat separation. Genome Res. 2017; 27(5):722–36.

Crusoe MR, Alameldin HF, Awad S, Boucher E, Caldwell A, Cartwright R, Charbonneau A, Constantinides B, Edvenson G, Fay S, et al. The khmer software package: enabling efficient nucleotide sequence analysis. F1000Res. 2015; 4. https://doi.org/10.12688/f1000research.6924.1. PMID: 26535114; PMCID: PMC4608353.

Brown CT, Howe A, Zhang Q, Pyrkosz AB, Brom TH. A reference-free algorithm for computational normalization of shotgun sequencing data. 2012. arXiv preprint arXiv:1203.4802.

Johns M. Getting Started with Hazelcast: Packt Publishing Ltd; 2015. https://www.packtpub.com/big-data-and-business-intelligence/getting-started-hazelcast.

High Performance Computing, Louisiana State University. http://www.hpc.lsu.edu/resources/hpc/system.php?system=SuperMIC.

Das AK, Koppa PK, Goswami S, Platania R, Park S-J. Large-scale parallel genome assembler over cloud computing environment. J Bioinform Comput Biol. 2017. https://doi.org/10.1142/s0219720017400030.

Das AK, Park S-J, Hong J, Chang W. Evaluating different distributed-cyber-infrastructure for data and compute intensive scientific application. In: IEEE International Conference on Big Data: 2015. https://doi.org/10.1109/bigdata.2015.7363750.

Das AK, Hong J, Goswami S, Platania R, Lee K, Chang W, Park S-J, Liu L. Augmenting Amdahl's second law: A theoretical model to build cost-effective balanced HPC infrastructure for data-driven science. In: Cloud Computing (CLOUD), 2017 IEEE 10th International Conference On. IEEE: 2017. p. 147–54. https://doi.org/10.1109/cloud.2017.27.

Chaisson MJ, Tesler G.
Mapping single molecule sequencing reads using basic local alignment with successive refinement (BLASR): application and theory. BMC Bioinformatics. 2012; 13(1):238.

Li H, Durbin R. Fast and accurate short read alignment with Burrows–Wheeler transform. Bioinformatics. 2009; 25(14):1754–60.

Yang X, Chockalingam SP, Aluru S. A survey of error-correction methods for next-generation sequencing. Brief Bioinform. 2012; 14(1):56–66.

Acknowledgements

We would like to thank the Information Technology and Service (ITS) departments of both UW Platteville and LSU for providing the testing infrastructure required in different phases of the project.

This article has been published as part of BMC Genomics Volume 20 Supplement 11, 2019: Selected articles from the IEEE BIBM International Conference on Bioinformatics & Biomedicine (BIBM) 2018: genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-20-supplement-11.

Publication costs were funded by NSF grants (MRI-1338051, IBSS-L-1620451, SCC-1737557, RAPID-1762600), NIH grants (P20GM103458-10, P30GM110760-03, P20GM103424), LA Board of Regents grants (LEQSF(2016-19)-RD-A-08 and ITRS), and IBM faculty awards.

Author information

Department of Computer Science and Software Engineering, University of Wisconsin at Platteville, Platteville, WI, USA: Arghya Kusum Das

School of Electrical Engineering and Computer Science, Center for Computation and Technology, Louisiana State University, Baton Rouge, LA, USA: Sayan Goswami, Kisung Lee & Seung-Jong Park

AKD and KL developed the algorithms of long read error correction. SG and SJP evaluated and tested the tool. All the authors read and approved the final manuscript.

Correspondence to Arghya Kusum Das.

This file provides a brief account of the theoretical rationale for using the widest path algorithm (Claim 1), and a theoretical justification for why the median statistic has a lower dependency on the value of k.
Das, A.K., Goswami, S., Lee, K. et al. A hybrid and scalable error correction algorithm for indel and substitution errors of long reads. BMC Genomics 20, 948 (2019). https://doi.org/10.1186/s12864-019-6286-9

Keywords: Hybrid error correction; PacBio
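To complement the widest path discussion in the results above, here is a minimal single-machine sketch (our own illustration, not the authors' distributed Hadoop/Hazelcast implementation) of bottleneck-maximizing path search on a coverage-weighted graph. It uses a max-heap variant of Dijkstra's algorithm: the "distance" of a path is its minimum edge coverage, and the path with the largest bottleneck is expanded first. The graph encoding and the function name are ours.

```python
import heapq

def widest_path(graph, src, dst):
    """Return (path, bottleneck) maximizing the minimum edge weight from src to dst.

    graph: dict mapping node -> list of (neighbor, coverage) pairs, coverage > 0.
    """
    best = {src: float('inf')}     # best known bottleneck per node
    prev = {src: None}             # predecessor for path reconstruction
    heap = [(-float('inf'), src)]  # negated bottleneck: heapq is a min-heap
    while heap:
        neg_b, u = heapq.heappop(heap)
        b = -neg_b
        if u == dst:
            break                  # largest possible bottleneck for dst found
        if b < best.get(u, 0):
            continue               # stale heap entry
        for v, cov in graph.get(u, []):
            nb = min(b, cov)       # bottleneck of the path extended into v
            if nb > best.get(v, 0):
                best[v] = nb
                prev[v] = u
                heapq.heappush(heap, (-nb, v))
    if dst not in prev:
        return None, 0             # destination unreachable
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1], best[dst]
```

For instance, with edges A-B (coverage 5), B-D (2), A-C (3), C-D (4), the widest path from A to D is A-C-D with bottleneck 3, even though a greedy traversal starting from the highest-coverage neighbor would first follow A-B.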
Recent questions tagged inflection

Rewrite \(\cos 3\mathrm{A}\) in terms of \(\cos \mathrm{A}\)

Rewrite \(\sin 3 \alpha\) in terms of \(\sin \alpha\)
asked Jan 24 in Mathematics by ♦Gauss Diamond (71,867 points) | 8 views

If \(\lim _{x \rightarrow 1} \frac{x+x^2+x^3+\ldots+x^n-n}{x-1}=820\), \((n \in N)\), then the value of \(n\) is equal to

Evaluate \(\int \frac{x+1}{x^2+4 x+8} d x\).

Evaluate \(\int \frac{x^3}{(x-2)(x+3)} d x\).

Rewrite \(\int \frac{x^3}{(x-2)(x+3)} d x\) in terms of an integral with a numerator that has degree less than 2.

Find \(\int \frac{x^3}{(3-2 x)^5} d x\).

Evaluate \(\int \sqrt{4-9 x^2} d x\)

Evaluate \(\int \sqrt{1-x^2} d x\).

What is a vertical asymptote?

Evaluate the following limits.

Let \(\lim _{x \rightarrow 2} f(x)=2\), \(\lim _{x \rightarrow 2} g(x)=3\), and \(p(x)=3 x^2-5 x+7\). Find the following limits:

Describe three situations where \(\lim _{x \rightarrow c} f(x)\) does not exist.

Integral part of \((\sqrt{2}+1)^{6}\) is
asked Aug 15, 2022 in Mathematics by ♦Gauss Diamond (71,867 points) | 79 views

Suppose \(f^{\prime}\) is continuous on \([a, b]\) and \(\varepsilon>0\). Prove that there exists \(\delta>0\) such that \[ \left|\frac{f(t)-f(x)}{t-x}-f^{\prime}(x)\right|<\varepsilon \]

Suppose \(f^{\prime}(x)\) and \(g^{\prime}(x)\) exist, \(g^{\prime}(x) \neq 0\), and \(f(x)=g(x)=0\). Prove that \[ \lim _{t \rightarrow x} \frac{f(t)}{g(t)}=\frac{f^{\prime}(x)}{g^{\prime}(x)} \]

Suppose \(f\) is a real, continuously differentiable function on \([a, b]\), \(f(a)=f(b)=0\), and \[ \int_{a}^{b} f^{2}(x) d x=1 \] Prove that

If \(f(x)=0\) for all irrational \(x\), \(f(x)=1\) for all rational \(x\), prove that \(f \notin \mathcal{R}\) on \([a, b]\) for any \(a<b\).

Suppose \(f \geq 0\), \(f\) is continuous on \([a, b]\), and \(\int_{a}^{b} f(x) d x=0\). Prove that \(f(x)=0\) for all \(x \in[a, b]\).
Suppose \(\alpha\) increases on \([a, b]\), \(a \leq x_{0} \leq b\), \(\alpha\) is continuous at \(x_{0}\), \(f\left(x_{0}\right)=1\), and \(f(x)=0\) if \(x \neq x_{0}\). Prove that \(f \in \mathcal{R}(\alpha)\) and that \(\int f d \alpha=0\).

Prove a pointwise version of Fejér's Theorem: If \(f \in \mathcal{R}\) and \(f(x+), f(x-)\) exist for some \(x\), then \[ \lim _{N \rightarrow \infty} \sigma_{N}(f ; x)=\frac{1}{2}[f(x+)+f(x-)] \]

Suppose \(f \in \mathcal{R}\) on \([0, A]\) for all \(A<\infty\), and \(f(x) \rightarrow 1\) as \(x \rightarrow+\infty\). Prove that \[ \lim _{t \rightarrow 0} \int_{0}^{\infty} e^{-t x} f(x) d x=1 \quad(t>0) \]

For \(i=1,2,3, \ldots\), let \(\varphi_{i} \in \mathcal{C}\left(R^{1}\right)\) have support in \(\left(2^{-i}, 2^{1-i}\right)\), such that \(\int \varphi_{i}=1\). Put

Suppose \(f \in L^{2}(\mu)\), \(g \in L^{2}(\mu)\). Prove that \[ \left|\int f \bar{g} d \mu\right|^{2}=\int|f|^{2} d \mu \int|g|^{2} d \mu \] if and only if there is a constant \(c\) such that \(g(x)=c f(x)\) almost everywhere.

If \(f \in \mathcal{R}\) on \([a, b]\) and if \(F(x)=\int_{a}^{x} f(t) d t\), prove that \(F^{\prime}(x)=f(x)\) almost everywhere on \([a, b]\).

If \(f \geq 0\) and \(\int_{E} f d \mu=0\), prove that \(f(x)=0\) almost everywhere on \(E\).

Evaluate \(f(x)=\int_{0}^{x} \frac{1}{\sqrt{1+t^{2}}} d t\)

Solve the integral equation \[ \int_{0}^{x}\left((x-y)^{2}-2\right) f(y) d y=-4 x \] by applying differentiation and then solving the resulting differential equation.

Solve \[ \int_{0}^{x} e^{-x} f(s) d s=e^{-x}+x-1 \] by applying differentiation.
Evaluate the integral \(f(x)=\int_{0}^{x} \frac{1}{\sqrt{1+t^{2}}} d t\)
asked Jun 27, 2022 in Mathematics by ♦Gauss Diamond (71,867 points) | 64 views

Evaluate \(x^2+49=0\)
asked Jun 22, 2022 in Mathematics by ♦MathsGee Platinum (163,814 points) | 62 views

Evaluate \(\int \frac{1}{\sqrt[4]{x}+3} d x\)

Calculate \(\int 5 x^{4} d x\)
asked May 31, 2022 in Mathematics by ♦Gauss Diamond (71,867 points) | 88 views

Calculate \(\int \frac{1}{\sqrt[3]{x}} d x\)

State the definition of an inflection point of a function \(f\).
asked Feb 4, 2022 in Mathematics by ♦Gauss Diamond (71,867 points) | 148 views

What is the general antiderivative of \(6 x^{2}+2 x+5\)?

Sketch the graph of \(f(x)=\frac{\ln x}{x}\), showing all extrema.

Find the general form for the following antiderivative: \(\int \frac{z}{z^{2}+9} d z\)

The function \(f(x)=\dfrac{x^{3}-6 x}{x^{2}-4}\) has ...
asked Jan 13, 2022 in Mathematics by ♦MathsGee Platinum (163,814 points) | 114 views

Evaluate the integral: \(\int \dfrac{3 x+2}{\sqrt{4-3 x^{2}}} d x\)

Evaluate the integral: \(\int_{0}^{\pi} 2 x \cos 2 x d x\)

Evaluate \(\int \dfrac{\ln z}{(1+z)^{2}} d z\)

Evaluate the following integral: \(\int \frac{t}{\sqrt{t^{4}-1}} d t\).

Evaluate the integral \(\int_{1}^{2} \dfrac{2 s-2}{\sqrt{-s^{2}+2 s+3}} d s\)

Evaluate \(\int_{-1}^{2} \dfrac{x}{\sqrt{10+2 x+x^{2}}} d x\)
\begin{document} \sloppy {16W99, 17B38, 17B63 (MSC2020)} \begin{center} {\Large Double Lie algebras of a nonzero weight}

Maxim Goncharov, Vsevolod Gubarev \end{center}

\begin{abstract} We introduce the notion of $\lambda$-double Lie algebra, which coincides with the usual double Lie algebra when $\lambda = 0$. We show that every $\lambda$-double Lie algebra for $\lambda\neq0$ provides the structure of a modified double Poisson algebra on the free associative algebra. In particular, it confirms the conjecture of S. Arthamonov (2017). We prove that there are no simple finite-dimensional $\lambda$-double Lie algebras.

{\it Keywords}: modified double Poisson algebra, double Lie algebra, Rota---Baxter operator, matrix algebra. \end{abstract}

\section{Introduction}

The notion of a~double Poisson algebra on a given associative algebra was introduced by M. Van den Bergh in 2008~\cite{DoublePoisson} as a noncommutative analog of a Poisson algebra. The goal behind this notion was to develop noncommutative Poisson geometry. Let us briefly give the background of this object. Given a finitely generated associative algebra~$A$ and $n\in\mathbb{N}$, consider the representation space $\mathrm{Rep}_n(A) = \textrm{Hom}(A,M_n(F))$, where $F$ denotes the ground field. We want to equip~$A$ with a~structure such that $\mathrm{Rep}_n(A)$ is a~Poisson variety for every $n$. For $a\in A$, we may consider the matrix-valued function $a_{ij}$ on $\mathrm{Rep}_n(A)$, $1\leq i,j\leq n$. These functions generate the coordinate ring $\mathcal{O}(\mathrm{Rep}_n(A))$ and satisfy the relations $(ab)_{ij} = \sum \limits_{k=1}^n a_{ik}b_{kj}$. Thus, to define a Poisson bracket $\{\cdot,\cdot\}$ on $\mathrm{Rep}_n(A)$, one should know the value of $\{a_{ij},b_{kl}\}$ for all $a,b\in A$. For this reason, M.
Van den Bergh defined a~bilinear double bracket $\lbrace\kern-3pt\lbrace \cdot,\cdot\rbrace\kern-3pt\rbrace\colon A\otimes A\to A\otimes A$ satisfying the analogs of anti-commutativity and the Leibniz rule (valued in $A\otimes A$), as well as the Jacobi identity (valued in $A\otimes A\otimes A$). An associative algebra equipped with such a~double bracket is called a {\bf double Poisson algebra}. In~\cite{DoublePoisson}, it was shown that given a double Poisson algebra~$(A,\cdot,\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace)$, we have that $\mathcal{O}(\mathrm{Rep}_n(A))$ is a Poisson algebra under the bracket $$ \{a_{ij},b_{kl}\} = (\lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace_{(1)})_{kj}(\lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace_{(2)})_{il}, $$ where $\lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace = \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace_{(1)}\otimes \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace_{(2)}$. If one deals only with~$A$, then the product $\{a,b\} = \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace_{(1)}\lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace_{(2)}$ is a~derivation in its second argument and vanishes on commutators in its first one. Moreover, $(A,\{\cdot,\cdot\})$ satisfies the Leibniz rule $\{a,\{b,c\}\} = \{\{a,b\},c\} + \{b,\{a,c\}\}$ and $(A/[A,A],\{\cdot,\cdot\})$ is a Lie algebra. In the terminology of W. Crawley-Boevey~\cite{Crawley-Boevey}, every double Poisson algebra provides an $H_0$-Poisson structure. N.~Iyudu, M.~Kontsevich, and Y.~Vlassopoulos~\cite{IKV} showed that every double Poisson algebra appears as a particular part of a~pre-Calabi-Yau structure. In~\cite{Kac15}, A.~De~Sole, V.~G.~Kac, and D.~Valeri introduced and studied double Poisson vertex algebras. In the middle of the 2010s, S.~Arthamonov introduced the notion of a~{\bf modified double Poisson algebra}~\cite{Arthamonov0,Arthamonov} with weakened versions of anti-commutativity and the Jacobi identity.
This notion allowed S.~Arthamonov to study the Kontsevich system and give more examples of $H_0$-Poisson structures arising from double brackets. The notion of a {\bf double Lie algebra} arose directly from the definition of a double Poisson algebra: it is a vector space~$V$ endowed with a double bracket satisfying the anti-commutativity and Jacobi identity mentioned above, and we forget about the associative product on~$V$. As far as we know, T. Schedler was the first to define this notion explicitly~\cite{Schedler}. The importance of double Lie algebras is the following: every double Lie algebra structure defined on a vector space~$V$ can be uniquely extended to a~double Poisson algebra structure on the free associative algebra $\textrm{As}\langle V\rangle$. In~\cite{DoublePoissonFree}, A.~Odesskii, V.~Rubtsov, V.~Sokolov extended linear and quadratic double Lie algebras defined on an $n$-dimensional vector space to double Poisson algebras defined on the free $n$-generated associative algebra. It is known that double Lie algebras on a finite-dimensional vector space~$V$ are in one-to-one correspondence with skew-symmetric Rota---Baxter operators of weight~0 on the matrix algebra~$M_n(F)$, where $n = \dim(V)$~\cite{DoubleLie,DoublePoissonFree,Schedler}. Recall that a linear operator~$R$ defined on an algebra~$A$ is called a~{\bf Rota---Baxter operator} (RB-operator, for short) of weight~$\lambda$, if $$ R(x)R(y) = R( R(x)y + xR(y) + \lambda xy ) $$ for all $x,y\in A$. This notion first appeared in the article~\cite{Tricomi} of F. Tricomi in 1951 and was later rediscovered several times~\cite{Baxter,BelaDrin82}, see also the monograph of L.~Guo~\cite{GuoMonograph}. To date, applications of Rota---Baxter operators in symmetric polynomials, quantum field renormalization, pre- and postalgebras, shuffle algebra, etc. have been found~\cite{Aguiar00,Atkinson,FardThesis,GuoMonograph,Ogievetsky}.
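As a quick sanity check of the Rota---Baxter identity just recalled (a verification of ours, not taken from the original text), the scalar operator $R = -\lambda\,\mathrm{id}$ is an RB-operator of weight~$\lambda$ on any algebra:

```latex
% Both sides of the RB identity for R = -\lambda\,\mathrm{id}:
R(x)R(y) = (-\lambda x)(-\lambda y) = \lambda^2 xy,
\qquad
R(R(x)y + xR(y) + \lambda xy)
  = R(-\lambda xy - \lambda xy + \lambda xy)
  = R(-\lambda xy)
  = \lambda^2 xy.
```

Trivially, $R = 0$ is also an RB-operator of weight~$\lambda$; the two are related by the map $R \mapsto -R - \lambda\,\mathrm{id}$ mentioned in the preliminaries below.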
Let us mention the bijection~\cite{Aguiar00,Unital,Schedler} between RB-operators of weight~0 on the matrix algebra $M_n(F)$ and solutions of the {\bf associative Yang---Baxter equation} (AYBE)~\cite{Aguiar01,Polishchuk,Zhelyabin} on $M_n(F)$. Recently, this correspondence was established~\cite{AYBE-ext} in the weighted case, for both RB-operators and the AYBE~\cite{FardThesis}. In~\cite{Double-0}, the correspondence between double Lie algebras and skew-symmetric RB-operators of weight~0 on the matrix algebra was extended to the infinite-dimensional case. In~\cite{DoubleLie}, M. Goncharov and P. Kolesnikov proved that there are no simple finite-dimensional double Lie algebras. The example of a~countable-dimensional simple double Lie algebra was found in~\cite{Double-0}. We apply Rota---Baxter operators of nonzero weight on the matrix algebra to define a weighted analog of double Lie algebras. In this way, we show that a naive version of such a definition fails, see~\S3. However, we define what a $\lambda$-double Lie algebra is for a~fixed~$\lambda\in F$. Thus, in the finite-dimensional case we extend the bijections $$ \begin{matrix} \mbox{double Lie} \\ \mbox{algebra} \end{matrix} \Longleftrightarrow \begin{matrix}\mbox{skew-symmetric RB-operator} \\ \mbox{of weight\,0 on }M_n(F) \\ \end{matrix} \Longleftrightarrow \begin{matrix}\mbox{skew-symmetric solution} \\ \mbox{of AYBE on }M_n(F) \end{matrix} $$ to the weighted analogs of the objects as follows, $$ \begin{matrix} \lambda\mbox{-double Lie} \\ \mbox{algebra} \end{matrix} \Longleftrightarrow \begin{matrix}\lambda\mbox{-skew-symmetric RB-operator} \\ \mbox{of weight\,} \lambda\mbox{ on }M_n(F) \\ \end{matrix} \Longleftrightarrow \begin{matrix}(-\lambda)\mbox{-skew-symmetric solution} \\ \mbox{of AYBE}(-\lambda) \mbox{ on }M_n(F) \end{matrix} $$ The correspondence between $\lambda$-double Lie algebras and RB-operators of weight~$\lambda$ is helpful for constructing examples of $\lambda$-double Lie algebras.
As in the case $\lambda = 0$, we prove that there are no simple finite-dimensional $\lambda$-double Lie algebras. Recall~\cite{Double-0} that a double Lie algebra $V$ is said to be simple if $\lbrace\kern-3pt\lbrace V,V\rbrace\kern-3pt\rbrace \neq (0)$ and there are no nonzero proper subspaces $I$ in $V$ such that $\lbrace\kern-3pt\lbrace V,I\rbrace\kern-3pt\rbrace + \lbrace\kern-3pt\lbrace I,V\rbrace\kern-3pt\rbrace \subseteq I\otimes V + V\otimes I$. On the other hand, we find a~pair of interesting infinite-dimensional $\lambda$-double Lie algebras, one of which is $F[t]$ equipped with the double $\lambda$-skew-symmetric bracket $$ \lbrace\kern-3pt\lbrace t^n,t^m\rbrace\kern-3pt\rbrace = \frac{t^m\otimes t^{n+1}-t^n\otimes t^{m+1}}{t\otimes 1-1\otimes t}. $$ We show that this double Lie algebra~$M$ has exactly one nonzero proper ideal, which turns out to be isomorphic to~$M$. Finally, we prove that every $\lambda$-double Lie algebra structure on a vector space~$V$ generates a~unique modified double Poisson algebra structure on $\textrm{As}\langle V\rangle$. This general result confirms Conjecture~21 of S. Arthamonov (2017)~\cite{Arthamonov} about the double bracket~$\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace^{II}$ defined on the three-dimensional vector space $V = \textrm{Span}\{a_1,a_2,a_3\}$ as follows, \begin{gather*} \lbrace\kern-3pt\lbrace a_1,a_2\rbrace\kern-3pt\rbrace^{II} = -a_1\otimes a_2,\quad \lbrace\kern-3pt\lbrace a_2,a_1\rbrace\kern-3pt\rbrace^{II} = a_1\otimes a_2,\quad \lbrace\kern-3pt\lbrace a_2,a_3\rbrace\kern-3pt\rbrace^{II} = a_3\otimes a_2,\\ \lbrace\kern-3pt\lbrace a_3,a_1\rbrace\kern-3pt\rbrace^{II} = a_1\otimes a_3 - a_3\otimes a_1,\quad \lbrace\kern-3pt\lbrace a_3,a_2\rbrace\kern-3pt\rbrace^{II} = -a_3\otimes a_2.
\end{gather*} The conjecture says that the double bracket $\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace^{II}$ can be extended to a modified double Poisson algebra structure on $\textrm{As}\langle a_1,a_2,a_3\rangle$. We want to emphasize the following interesting parallelism. In~1982, A.A. Belavin and V.G. Drinfel'd proved~\cite{BelaDrin82} that given a skew-symmetric solution $r = \sum\limits a_i\otimes b_i$ of the classical Yang---Baxter equation (CYBE) on a semisimple finite-dimensional Lie algebra~$L$, we get a Rota---Baxter operator~$R$ of weight~0 on $L$ defined by the formula $R(x) = \sum \langle a_i,x\rangle b_i$. Here $\langle \cdot,\cdot\rangle$ denotes the Killing form on $L$. In~2017, M. Goncharov~\cite{Goncharov2} proved that a solution of the modified (i.\,e., with weakened skew-symmetry) CYBE on a simple finite-dimensional Lie algebra $L$ gives, by the same formula, an RB-operator of a~nonzero weight on~$L$. In the case of double algebras, skew-symmetric RB-operators of weight~0 on the matrix algebra produce double Poisson algebras. On the other hand, $\lambda$-skew-symmetric RB-operators of nonzero weight~$\lambda$ on the matrix algebra give rise to modified double Poisson algebras, structures with weaker anti-commutativity and Jacobi identity. Let us give a short outline of the work. In~\S2, we give the required preliminaries on Rota---Baxter operators, the associative Yang---Baxter equation, and double Lie algebras (including the infinite-dimensional case). In~\S3, we show that a naive version of a $\lambda$-double Lie algebra fails because of the properties of RB-operators of nonzero weight on the matrix algebra. In~\S4, we give the main definition of a $\lambda$-double Lie algebra. We provide both finite-dimensional and infinite-dimensional examples of $\lambda$-double Lie algebras, including simple infinite-dimensional ones. In the finite-dimensional case, we prove that there are no simple $\lambda$-double Lie algebras.
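For concreteness, here is a direct low-degree computation with the bracket on $F[t]$ displayed above (our own verification, not part of the original text). Writing $x = t\otimes 1$ and $y = 1\otimes t$, the defining fraction is a genuine polynomial, since $x-y$ divides the numerator:

```latex
\lbrace\kern-3pt\lbrace t^n,t^m\rbrace\kern-3pt\rbrace
  = \frac{x^m y^{n+1} - x^n y^{m+1}}{x - y},
\qquad\text{e.g.}\qquad
\lbrace\kern-3pt\lbrace t,t^2\rbrace\kern-3pt\rbrace
  = \frac{x^2 y^2 - x y^3}{x - y}
  = x y^2
  = t\otimes t^2,
\qquad
\lbrace\kern-3pt\lbrace t^n,t^n\rbrace\kern-3pt\rbrace = 0.
```

The diagonal brackets vanish because the numerator $t^n\otimes t^{n+1} - t^n\otimes t^{n+1}$ is identically zero.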
In~\S5, we prove that every $\lambda$-double Lie algebra on a vector space~$V$ can be uniquely extended to a modified double Poisson algebra on the free associative algebra $\textrm{As}\langle V\rangle$.

\section{Preliminaries}

\subsection{Rota---Baxter operators}

A~linear operator~$R$ defined on a (not necessarily associative) algebra~$A$ is called a~Rota---Baxter operator (RB-operator, for short) of weight~$\lambda$ if $$ R(x)R(y) = R( R(x)y + xR(y) + \lambda xy ) $$ holds for all $x,y\in A$. It is well known that given an RB-operator of weight~$\lambda$ on an algebra $A$, we have that $\widetilde{R} = -R-\lambda\textrm{id}$ is again an RB-operator of weight~$\lambda$ on $A$.

{\bf Proposition 1}~\cite{Unital}. Let $A$ be an algebra, let $R$ be an RB-operator of weight~$\lambda$ on~$A$, and let $\psi$ be either an automorphism or an antiautomorphism of $A$. Then the operator $R^{(\psi)} = \psi^{-1}R\psi$ is an RB-operator of weight~$\lambda$ on~$A$.

In~\cite{Spectrum}, the following general property of RB-operators on unital algebras was proved.

{\bf Theorem 1}. Let $A$ be a finite-dimensional unital algebra over a field~$F$. Given a~Rota---Baxter operator $R$ of weight~$\lambda$ on $A$, we have $\mathrm{Spec}\,(R)\subset\{0,-\lambda\}$.

{\bf Corollary 1}. Given a~Rota---Baxter operator~$R$ of nonzero weight~$\lambda$ on~$M_n(F)$, we have the decomposition $A = \ker(R^{N})\oplus \ker(R+\lambda\textrm{id})^{N}$ (as subalgebras), where $N=n^2$.

Given an algebra~$A$ and an ideal~$J$ of~$A$, a linear map $R\colon J\to A$ is called a Rota---Baxter operator of weight~$\lambda\in F$ from~$J$ to~$A$ if $$ R(a)R(b) = R(R(a)b + aR(b) + \lambda ab) $$ for all $a,b\in J$. For $J = A$, we obtain the usual notion of an RB-operator of weight~$\lambda$ on~$A$.

\subsection{Associative Yang---Baxter equation}

Let $A$ be an associative algebra, $r = \sum a_i\otimes b_i\in A\otimes A$.
The tensor $r$ is a solution of the associative Yang---Baxter equation (AYBE, \cite{Aguiar01,Polishchuk,Zhelyabin}) if \begin{equation}\label{AYBE} r_{13}r_{12}-r_{12}r_{23}+r_{23}r_{13} = 0, \end{equation} where $$ r_{12} = \sum a_i\otimes b_i\otimes 1,\quad r_{13} = \sum a_i\otimes 1\otimes b_i,\quad r_{23} = \sum 1\otimes a_i\otimes b_i $$ are elements from $A^{\otimes3}$. The switch map $\tau\colon A\otimes A\to A\otimes A$ acts in the following way: $\tau(a\otimes b) = b\otimes a$. A solution $r$ of AYBE is called skew-symmetric if $r + \tau(r) = 0$. {\bf Proposition 2}~\cite{Aguiar00}. Let $r = \sum a_i\otimes b_i$ be a solution of AYBE on an associative algebra~$A$. The linear map $P_r\colon A\to A$ defined as \begin{equation}\label{AYBE2RB} P_r(x) = \sum a_i x b_i \end{equation} is an RB-operator of weight zero on $A$. Later, in 2006, K.~Ebrahimi-Fard defined the associative Yang---Baxter equation of weight~$\lambda$ in his thesis~\cite[p.~113]{FardThesis}. Given an associative algebra~$A$ and a tensor $r\in A\otimes A$, we say that $r$~is a~solution of the associative Yang---Baxter equation of weight~$\lambda$ if \begin{equation}\label{wAYBE} r_{13}r_{12}-r_{12}r_{23}+r_{23}r_{13} = \lambda r_{13}. \end{equation} {\bf Proposition 3}~\cite{FardThesis,AYBE-ext}. Let $r = \sum a_i\otimes b_i$ be a solution of AYBE of weight~$\lambda$ on an associative algebra~$A$. The linear map $P_r\colon A\to A$ defined by~\eqref{AYBE2RB} is an RB-operator of weight~$-\lambda$ on~$A$. {\bf Theorem 2}~\cite{Unital,AYBE-ext}. The map $r\mapsto P_r$ is a bijection between the set of solutions of AYBE of weight~$\lambda$ on $M_n(F)$ and the set of RB-operators of weight~$-\lambda$ on $M_n(F)$. \subsection{Double Poisson and double Lie algebras} Let $V$ be a linear space over $F$. Given $u\in V^{\otimes n}$ and $\sigma \in S_n$, $u^\sigma$ denotes the result of the corresponding permutation of the tensor factors of~$u$. By a double bracket on $V$ we mean a linear map from $V\otimes V$ to $V\otimes V$.
Given an associative algebra~$A$, we consider the outer bimodule action of $A$ on $A\otimes A$: $b(a\otimes a') c = (ba)\otimes (a'c)$. {\bf Definition 1}~\cite{DoublePoisson}. A double Poisson algebra is an associative algebra $A$ equipped with a~double bracket satisfying the following identities for all $a,b,c\in A$ \begin{gather} \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace =- \lbrace\kern-3pt\lbrace b,a\rbrace\kern-3pt\rbrace ^{(12)}, \label{antiCom} \\ \lbrace\kern-3pt\lbrace a, \lbrace\kern-3pt\lbrace b,c\rbrace\kern-3pt\rbrace \rbrace\kern-3pt\rbrace _L -\lbrace\kern-3pt\lbrace b, \lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace \rbrace\kern-3pt\rbrace _R = \lbrace\kern-3pt\lbrace \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace,c\rbrace\kern-3pt\rbrace _L, \label{Jacobi} \\ \lbrace\kern-3pt\lbrace a,bc\rbrace\kern-3pt\rbrace = \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace c + b\lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace, \label{Leibniz} \end{gather} where $\lbrace\kern-3pt\lbrace a, b\otimes c \rbrace\kern-3pt\rbrace _L = \lbrace\kern-3pt\lbrace a,b \rbrace\kern-3pt\rbrace \otimes c$, $\lbrace\kern-3pt\lbrace a, b\otimes c\rbrace\kern-3pt\rbrace _R = (b\otimes \lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace )$, and $\lbrace\kern-3pt\lbrace a\otimes b, c\rbrace\kern-3pt\rbrace _L = (\lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace \otimes b)^{(23)}$. Anti-commutativity~\eqref{antiCom} and Leibniz rule~\eqref{Leibniz} imply~\cite{DoublePoisson} the following equality \begin{equation} \label{LeibnizTwo} \lbrace\kern-3pt\lbrace ab,c\rbrace\kern-3pt\rbrace = a*\lbrace\kern-3pt\lbrace b,c\rbrace\kern-3pt\rbrace + \lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace * b \end{equation} for the inner bimodule action of $A$ on $A\otimes A$: $b*(a\otimes a')*c = (ac)\otimes (ba')$. {\bf Definition 2}~\cite{DoublePoissonFree,Schedler,Kac15}. 
A double Lie algebra is a linear space $V$ equipped with a~double bracket satisfying the identities~\eqref{antiCom} and~\eqref{Jacobi}. An {\it ideal} of a double Lie algebra $V$ is a subspace $I\subseteq V$ such that $\lbrace\kern-3pt\lbrace V,I\rbrace\kern-3pt\rbrace + \lbrace\kern-3pt\lbrace I,V\rbrace\kern-3pt\rbrace \subseteq I\otimes V + V\otimes I$. Given an ideal $I$ of a double Lie algebra~$V$, we have a~natural structure of a double Lie algebra on the space $V/I$, i.\,e., $\lbrace\kern-3pt\lbrace x + I, y + I \rbrace\kern-3pt\rbrace = \lbrace\kern-3pt\lbrace x,y \rbrace\kern-3pt\rbrace + I\otimes V + V\otimes I$. Let~$L$ and $L'$ be double Lie algebras and let $\varphi\colon L\to L'$ be a~linear map. Then~$\varphi$ is called a {\it homomorphism} from~$L$ to~$L'$ if $$ (\varphi\otimes\varphi)(\lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace) = \lbrace\kern-3pt\lbrace \varphi(a),\varphi(b)\rbrace\kern-3pt\rbrace $$ holds for all $a,b\in L$. Note that the kernel of any homomorphism from~$L$ is an ideal of~$L$. A bijective homomorphism from~$L$ to $L'$ is called an isomorphism of double Lie algebras. A double Lie algebra $V$ is said to be {\it simple} if $\lbrace\kern-3pt\lbrace V,V\rbrace\kern-3pt\rbrace \neq (0)$ and there are no nonzero proper ideals in $V$. Suppose $V$ is a finite-dimensional space. In~\cite{DoubleLie}, it was shown that every double bracket $\lbrace\kern-3pt\lbrace \cdot,\cdot\rbrace\kern-3pt\rbrace$ on~$V$ is determined by a linear operator $R\colon\textrm{End} (V)\to \textrm{End}(V)$; more precisely, \begin{equation}\label{eq:Bracket_via_RB} \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace = \sum\limits_{i=1}^N e_i(a)\otimes R(e_i^*)(b) , \quad a,b\in V, \end{equation} where $e_1,\dots, e_N$ is a linear basis of $\textrm{End}(V)$ and $e_1^*,\dots, e_N^*$ is the corresponding dual basis relative to the trace form.
Let us explain how to relate an operator $R$ on $\textrm{End}(V)$ with a~bracket $\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace$ explicitly. Let $f_1,\ldots,f_n$ be a basis of $V$. By $e_{kl}$, $k,l\in\{1,\ldots,n\}$, we mean the standard basis of~$\textrm{End}(V)$, which acts on~$V$ by the formula $e_{ij}(f_k) = \delta_{jk} f_i$. Let us write $$ \lbrace\kern-3pt\lbrace f_k,f_l\rbrace\kern-3pt\rbrace = \sum\limits_{m=1}^n f_m\otimes v_m,\quad k,l=1,\ldots,n, $$ for some $v_m\in V$ (depending on $k$ and $l$). Define an operator $R\in\textrm{End}(\textrm{End}(V))$ as follows, $R(e_{km})(f_l) = v_m$, where $k,l,m=1,\ldots,n$. Then $$ \lbrace\kern-3pt\lbrace f_k,f_l\rbrace\kern-3pt\rbrace = \sum\limits_{m=1}^n f_m\otimes v_m = \sum\limits_{m,p=1}^n e_{mp}(f_k)\otimes R(e_{mp}^*)(f_l). $$ By linearity, we obtain the formula~\eqref{eq:Bracket_via_RB} for all $a,b\in V$. Note that the identity operator corresponds to the switch map $\tau\colon f_k\otimes f_l\to f_l\otimes f_k$. A linear operator $P$ on~$\textrm{End}(V)$ is called skew-symmetric if $P = -P^*$, where $P^*$ is the conjugate operator on $\textrm{End}(V)$ relative to the trace form. {\bf Theorem~3}~\cite{DoubleLie}. Let $V$~be a finite-dimensional vector space with a double bracket $\lbrace\kern-3pt\lbrace \cdot,\cdot \rbrace\kern-3pt\rbrace$ determined by an operator $R\colon\textrm{End}(V)\to \textrm{End}(V)$ by~\eqref{eq:Bracket_via_RB}. Then $V$ is a double Lie algebra if and only if $R$ is a~skew-symmetric RB-operator of weight~0 on $\textrm{End}(V)$. {\bf Remark 1}. Theorem~3 was stated in~\cite{Schedler} in terms of skew-symmetric solutions of the associative Yang---Baxter equation (AYBE). Since there exists a one-to-one correspondence between solutions of AYBE and Rota---Baxter operators of weight~0 on the matrix algebra~\cite{Unital}, Theorem~3 follows from~\cite{Schedler}. Actually, Theorem~3 was also mentioned in~\cite{DoublePoissonFree}.
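The correspondence just described can be tested computationally. The following sketch (ours, not from the paper; the choice $n=2$ and the tensor $r = e_{11}\otimes e_{12} - e_{12}\otimes e_{11}$ are illustrative assumptions) verifies over the integers that $r$ is a skew-symmetric solution of~\eqref{AYBE}, that the operator $P_r$ from~\eqref{AYBE2RB} satisfies the Rota---Baxter identity of weight~0, and that $P_r$ is skew-symmetric relative to the trace form, in accordance with Proposition~2 and the hypothesis of Theorem~3.

```python
# Sanity check (illustrative) for Proposition 2 and the skew-symmetry
# hypothesis of Theorem 3, on A = M_2 with integer entries.
# We take r = e11 (x) e12 - e12 (x) e11 and verify:
#   (a) r solves AYBE:  r13 r12 - r12 r23 + r23 r13 = 0,
#   (b) P_r(x) = sum_i a_i x b_i is an RB-operator of weight 0,
#   (c) P_r is skew-symmetric for the trace form: P_r* = -P_r.
from itertools import product

n = 2

def e(i, j):
    """Matrix unit (0-indexed storage) as a nested list."""
    m = [[0] * n for _ in range(n)]
    m[i][j] = 1
    return m

def mul(a, b):
    size = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(size)) for j in range(size)]
            for i in range(size)]

def add(a, b, s=1):
    return [[a[i][j] + s * b[i][j] for j in range(len(a))] for i in range(len(a))]

def neg(a):
    return [[-v for v in row] for row in a]

def kron(a, b):
    """Kronecker product, realizing tensor powers of M_n as bigger matrices."""
    na, nb = len(a), len(b)
    return [[a[i // nb][j // nb] * b[i % nb][j % nb]
             for j in range(na * nb)] for i in range(na * nb)]

ident = [[int(i == j) for j in range(n)] for i in range(n)]
# r = e11 (x) e12 - e12 (x) e11, stored as signed pairs (s, a_i, b_i)
terms = [(1, e(0, 0), e(0, 1)), (-1, e(0, 1), e(0, 0))]

def lift(pos):
    """r_{12}, r_{13}, r_{23} as elements of A (x) A (x) A = M_{n^3}."""
    out = [[0] * n ** 3 for _ in range(n ** 3)]
    for s, a, b in terms:
        p, q, w = {'12': (a, b, ident), '13': (a, ident, b), '23': (ident, a, b)}[pos]
        out = add(out, kron(kron(p, q), w), s)
    return out

r12, r13, r23 = lift('12'), lift('13'), lift('23')
aybe_residual = add(add(mul(r13, r12), mul(r12, r23), -1), mul(r23, r13))
assert all(v == 0 for row in aybe_residual for v in row)      # (a)

def P(x):      # P_r(x) = sum_i s_i a_i x b_i
    out = [[0] * n for _ in range(n)]
    for s, a, b in terms:
        out = add(out, mul(mul(a, x), b), s)
    return out

def Pstar(x):  # adjoint of P_r for <x,y> = tr(xy): sum_i s_i b_i x a_i
    out = [[0] * n for _ in range(n)]
    for s, a, b in terms:
        out = add(out, mul(mul(b, x), a), s)
    return out

units = [e(i, j) for i, j in product(range(n), repeat=2)]
for x in units:
    assert Pstar(x) == neg(P(x))                              # (c)
    for y in units:
        assert mul(P(x), P(y)) == P(add(mul(P(x), y), mul(x, P(y))))  # (b)
print("AYBE, weight-0 RB identity, and skew-symmetry verified")
```

Replacing `terms` by another signed tensor lets one test further candidate solutions of AYBE in the same way.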
\subsection{Infinite-dimensional double Lie algebras} Consider a countable-dimensional double Lie algebra $\langle V,\lbrace\kern-3pt\lbrace \cdot,\cdot\rbrace\kern-3pt\rbrace\rangle$. We fix a~linear basis $u_i$, $i\in\mathbb{N}$, of $V$. Define $e_{ij}\in\textrm{End}(V)$ by the formula $e_{ij}u_k = \delta_{jk}u_i$. Let $\varphi\in\textrm{End}(V)$, then we may write $\varphi = \sum\limits_{ij}a_{ij}e_{ij}$. We identify $\varphi$ with an infinite matrix $[\varphi] = (a_{ij})_{i,j\geq0}$. Since $\varphi\in\textrm{End}(V)$ is well-defined, there are only finitely many nonzero entries in every column of the matrix $[\varphi]$. Define the subalgebra~$\textrm{End}_f(V)$ of $\textrm{End}(V)$ as follows, $$ \textrm{End}_f(V) = \{\varphi\in\textrm{End}(V)\mid \mbox{ for every }i,\ [\varphi]_{ij} = 0 \mbox{ for almost all }j\}. $$ Denote by $I$ the linear span of the matrix units $e_{ij}$; it is an ideal in $\textrm{End}_f(V)$. Let $\varphi = \sum\limits_{i,j}a_{ij}e_{ij}\in\textrm{End}_f(V)$. We define the symmetric non-degenerate bilinear trace form $\langle \cdot,\cdot \rangle$ on $I\times \textrm{End}_f(V)\cup \textrm{End}_f(V)\times I$ as follows, $$ \langle e_{kl},\varphi \rangle = \langle \varphi,e_{kl}\rangle = \textrm{tr}(e_{kl}\varphi) = a_{lk}. $$ Moreover, the form is associative, i.\,e., $\langle a,bc\rangle = \langle ab,c\rangle$, where at least one of $a,b,c$ lies in~$I$ and the others are from $\textrm{End}_f(V)$. Given a double bracket~$\lbrace\kern-3pt\lbrace \cdot,\cdot\rbrace\kern-3pt\rbrace$ on a space~$V$, we may define a linear operator $R\colon I\to \textrm{End} (V)$ by the formula \begin{equation}\label{Bracket_via_RBInf} \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace = \sum\limits_{i,j\geq0} e_{ij}(a)\otimes R(e_{ji})(b), \quad a,b\in V. \end{equation} Conversely, given an operator $R\colon I\to \textrm{End} (V)$, one can define a double bracket on~$V$ by the formula~\eqref{Bracket_via_RBInf}.
We may define a conjugate operator $R^*\colon I\to \textrm{End} (V)$ as follows, \begin{equation}\label{Conjugate} \lbrace\kern-3pt\lbrace b,a\rbrace\kern-3pt\rbrace^{(12)} = \sum\limits_{i,j\geq0} e_{ij}(a)\otimes R^*(e_{ji})(b), \quad a,b\in V. \end{equation} In~\cite{Double-0}, the following equality was shown, \begin{equation}\label{InfConj} \langle R(x),y\rangle = \langle x,R^*(y)\rangle,\quad x,y\in I. \end{equation} {\bf Theorem~4}~\cite{Double-0}. Let $V$~be a countable-dimensional vector space with a fixed linear basis $u_i$ and with a~double bracket $\lbrace\kern-3pt\lbrace \cdot,\cdot \rbrace\kern-3pt\rbrace$ determined by a linear map $R\colon I\to \textrm{End}(V)$ as in~\eqref{Bracket_via_RBInf}. Then $V$ is a double Lie algebra if and only if $R$ is a~skew-symmetric RB-operator of weight~0 from~$I$ to $\textrm{End}(V)$. {\bf Theorem 5}~\cite{Double-0}. The double Lie algebra $L_2$ defined on $F[t]$ by the formula $$ \lbrace\kern-3pt\lbrace t^n,t^m\rbrace\kern-3pt\rbrace = \frac{t^n\otimes t^m-t^m\otimes t^n}{t\otimes 1-1\otimes t} $$ is simple. \section{Naive version of $\lambda$-double Lie algebras} Let us return to the finite-dimensional case. Suppose that we have a double bracket $\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace$ on a linear space $V$. We want to apply the connection~\eqref{eq:Bracket_via_RB} of the double bracket with a linear operator $R$ on $\textrm{End} (V)$. By Theorem~3, the double bracket is Lie if and only if $R$ is a skew-symmetric RB-operator of weight~0 on $\textrm{End} (V)$. What happens if $R$ is an RB-operator of nonzero weight~$\lambda$ on $\textrm{End} (V)$? We have the Jacobi identity~\eqref{Jacobi} if and only if (see the proofs of Theorems~3 and~4) $$ R(\theta_R(y)x) = 0,\quad x,y\in \textrm{End}(V), $$ where $\theta_R = R^* + R + \lambda\textrm{id}$. Thus, to avoid degenerate~$R$, we need $$\theta_R =R+R^*+\lambda \textrm{id}= 0,$$ which is some kind of skew-symmetry in the nonzero weight case.
Hence, we have~\eqref{Jacobi} and the following identity, \begin{equation}\label{bad-anticommutativity} \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace + \lbrace\kern-3pt\lbrace b,a\rbrace\kern-3pt\rbrace^{(12)} = - \lambda b\otimes a, \end{equation} which is an analogue of anticommutativity. Therefore, we want to define a $\lambda$-analogue of a double Lie algebra as a vector space~$V$ with a~double bracket satisfying~\eqref{Jacobi} and~\eqref{bad-anticommutativity}. Let us prove that such a notion is not natural. More precisely, we show that such finite-dimensional objects do not exist for $\lambda \neq 0$, and we do not expect that they exist in the infinite-dimensional case. By a variety of algebras we mean the class of all algebras satisfying a prescribed set of identities. For instance, the variety of associative algebras is defined by the identity $(xy)z = x(yz)$. By the famous Birkhoff theorem, a class~$\mathcal{M}$ of algebras forms a variety if and only if $\mathcal{M}$ is closed under taking homomorphic images, subalgebras, and direct products. Let $\mathcal{M}$ be a variety of algebras, let $A$ be an algebra from $\mathcal{M}$ over a field $F$, let $R\colon A\to A$ be a linear map, and let $\lambda\in F$. Consider the direct sum of vector spaces $D_R(A)=A\oplus \bar{A}$, where $\bar{A}$ is an isomorphic copy of $A$. Define a product on $D_R(A)$ as follows \begin{multline*} (a+\bar{b})*(x+\bar{y}) \\ = ax+R(ay)-aR(y)+R(bx)-R(b)x+\overline{ay+bx-R(b)y-bR(y)-\lambda by}. \end{multline*} {\bf Definition 3}. A bilinear non-degenerate symmetric form $\omega\colon A\times A\to F$ on an algebra $A$ is called invariant if for all $a,b,c\in A$: $$ \omega(ab,c)=\omega(a,bc). $$ In this case, the pair $(A,\omega)$ is called a quadratic algebra. The most important examples of quadratic algebras are: 1. A semisimple finite-dimensional Lie algebra over a field of characteristic zero with the Killing form. 2.
A matrix algebra $M_n(F)$ with the form $\omega\colon M_n(F)\times M_n(F)\to F$ defined as $$ \omega(a,b)=\textrm{tr}{(ab)}. $$ {\bf Remark 2}. If $(A,\omega)$ is a quadratic algebra, then $D_R(A)$ is isomorphic to the Drinfeld double of the bialgebra $(A,\delta_r)$ (here we mean a bialgebra in the general sense, that is, an algebra with a comultiplication), where $r=\sum a_i\otimes b_i\in A\otimes A$ corresponds to the map $R$ by the natural isomorphism of $\textrm{End}(A)$ and $A\otimes A$, and $\delta_r(a) = \sum (a_ia\otimes b_i-a_i\otimes ab_i)$ (see \cite{Goncharov2} for details). {\bf Proposition 4}. Let $A$ be an algebra from a variety $\mathcal{M}$. If $R$ is a Rota---Baxter operator of weight $\lambda$ on~$A$, then $D_R(A)$ is an algebra from $\mathcal{M}$. {\sc Proof}. Set \begin{equation} \label{i-map} i(a)=\bar{a}+(\lambda a+R(a)). \end{equation} Let us prove that $I(A)=\{i(a)\mid a\in A\}$ is an ideal of $D_R(A)$. Clearly, $I(A)$ is a subspace of $D_R(A)$ and $\dim(I(A)) = \dim A$. Let $a,b\in A$. Then \begin{equation}\label{u1} i(a)*b = (\bar{a}+\lambda a+R(a))*b = \overline{ab}+R(ab)-R(a)b+\lambda ab+R(a)b = i(ab), \end{equation} \begin{multline}\label{u2} i(a)*\bar{b} = (\bar{a}+\lambda a+R(a))*\bar{b} \\ = \overline{-\lambda ab-aR(b)-R(a)b+\lambda ab+R(a)b} \\ +R(\lambda ab+R(a)b)-\lambda aR(b)-R(a)R(b) \\ = \overline{-aR(b)}-\lambda aR(b)-R(aR(b))=i(-aR(b)). \end{multline} Hence, $I(A)$ is a left ideal of $D_R(A)$. Similarly, $I(A)$ is a right ideal of $D_R(A)$ and therefore, $I(A)$ is an ideal of $D_R(A)$. Moreover, \begin{equation}\label{u3} i(a)*i(b) = i(a)*(\bar{b}+\lambda b+R(b)) = -i(aR(b))+i(\lambda ab+aR(b)) = \lambda i( ab). \end{equation} If $\lambda\neq 0$, then the map $i_{\lambda}\colon a\mapsto \frac{i(a)}{\lambda}$ is an isomorphism of the algebras $A$ and $I(A)$. If $\lambda=0$, then $I(A)^2=0$. In both cases, the space $D_R(A)$ is equal to the direct sum of the subalgebras~$A$ and $I(A)$.
Now we have two situations: if $\lambda\neq 0$, then $D_R(A)$ is isomorphic to $A\otimes D$, where $D$ is the two-dimensional $F$-algebra with a basis $\{1,d\}$ subject to the relation $d^2=d$. If $\lambda=0$, then $D_R(A)$ is isomorphic to $A\otimes N$, where $N$ is the two-dimensional algebra with a basis $\{1,n\}$ subject to the relation $n^2=0$. Hence, $D_R(A)\in \mathcal{M}$. $\square$ {\bf Remark 3}. Let $\lambda\neq 0$. Define the map $j\colon a\mapsto -\bar{a}-R(a)$ and \begin{equation} \label{J-map} J(A)=\{j(a) \mid a\in A\}. \end{equation} Arguments similar to those in the proof of Proposition 4 show that $J(A)$ is an ideal in $D_R(A)$ and the map $j_\lambda\colon a\mapsto \frac{j(a)}{\lambda}$ is an isomorphism of $A$ and $J(A)$. Note that for all $a\in A$, $j(a)+i(a)=\lambda a$. That is, $D_R(A)=I(A)\oplus J(A)$. {\bf Remark 4}. A construction very close to $D_R(A)$ was suggested by K.~Uchino~\cite{Uchino} when $A$ is associative and $\lambda = 0$. Suppose in addition that a non-degenerate invariant symmetric bilinear form $\omega$ is defined on $A$. Then $\omega$ induces a form $Q$ on $D_R(A)$: for $a,b,c,d\in A$ put \begin{equation}\label{f1} Q(a+\bar{b},c+\bar{d})=\omega(a,d)+\omega(b,c). \end{equation} It is easy to see that $Q$ is a non-degenerate symmetric bilinear form on $D_R(A)$. {\bf Proposition 5}. Let $(A,\omega)$ be a quadratic algebra and let $Q$ be the form defined on $D_R(A)$ by~\eqref{f1}. Then the form $Q$ is invariant if and only if for all $a,b\in A$: \begin{equation}\label{p5.1} R(ab)+R^*(ab)+\lambda ab=0. \end{equation} {\sc Proof}. First of all, let us note that the following conditions are equivalent due to the non-degeneracy of the form on $A$: 1. For all $a,b\in A$: $R(ab)+R^*(ab)+\lambda ab=0$, 2. For all $a,b\in A$: $aR(b)+aR^*(b)+\lambda ab=0$, 3. For all $a,b\in A$: $R(a)b+R^*(a)b+\lambda ab=0$. Indeed, let $a,b,c\in A$. Then $$ \omega(R(ab)+R^*(ab)+\lambda ab,c)=\omega(a,bR^*(c)+bR(c)+\lambda bc). $$ This shows the equivalence of 1 and 2.
Similarly, 1 is equivalent to 3. Let $a,b,c\in A$. From the definition of $Q$ we have that $Q(A,A)=Q(\bar{A},\bar{A})=0$. Since $\omega$ is invariant, we have $$ Q(a*b,\bar{c})=\omega(ab,c)=\omega(a,bc)=Q(a,\overline{bc})=Q(a,b*\bar{c}). $$ Similarly, we can prove the following identities: $$ Q(a*\bar{b},c)=Q(a,\bar{b}*c),\quad Q(\bar{a}*b,c)=Q(\bar{a},b*c). $$ Further, $$ Q(\bar{a}*\bar{b},c)=-\omega(R(a)b+aR(b)+\lambda ab,c)=-\omega(a,R^*(bc)+R(b)c+\lambda bc). $$ On the other hand, $$ Q(\bar{a},\bar{b}*c)=Q(\bar{a},\overline{bc}+R(bc)-R(b)c)=\omega(a,R(bc)-R(b)c). $$ Therefore, $$Q(\bar{a}*\bar{b},c)-Q(\bar{a},\bar{b}*c)=-\omega(a,R^*(bc)+R(bc)+\lambda bc). $$ Since $\omega$ is non-degenerate, $Q(\bar{a}*\bar{b},c)-Q(\bar{a},\bar{b}*c)=0$ if and only if $$ R^*(bc)+R(bc)+\lambda bc=0 $$ for all $b,c\in A$. Similar arguments show that the identity $Q(a*\bar{b},\bar{c})=Q(a,\bar{b}*\bar{c})$ is also equivalent to the condition \eqref{p5.1}. Consider the equality $Q(\bar{a}*b,\bar{c})=Q(\bar{a},b*\bar{c})$. We have: $$ Q(\bar{a}*b,\bar{c})=\omega(R(ab)-R(a)b,c)=\omega(a,bR^*(c)-R^*(bc)). $$ Similarly, $$ Q(\bar{a},b*\bar{c})=\omega(a,R(bc)-bR(c)). $$ Therefore, $Q(\bar{a}*b,\bar{c})-Q(\bar{a},b*\bar{c})=0$ if and only if \begin{equation}\label{p5.2} R(bc)+R^*(bc)-bR(c)-bR^*(c)=0. \end{equation} It remains to note that \eqref{p5.2} follows from \eqref{p5.1} and the observation from the beginning of the proof. $\square$ {\bf Theorem 6}. Let $A=M_n(F)$ and let $\omega(x,y)=\textrm{tr}(xy)$ be the trace form on $A$. There are no Rota---Baxter operators of a~nonzero weight~$\lambda$ on~$A$ satisfying the equality $R+R^*+\lambda \textrm{id}=0$. {\sc Proof}. Assume the contrary. It is enough to consider the case when $\lambda=1$. By Proposition 5, $Q$ is a non-degenerate invariant bilinear form on $D_R(A)$. Let $E\in A$ be the identity matrix. Since $\textrm{tr}(E)\neq 0$, we have $Q(E,\bar{E})\neq 0$. Let $a,b\in A$.
Then \begin{equation}\label{th6} Q(\bar{a}*\bar{b},E) = -\omega(R(a)b+aR(b)+ab,E) = -\omega(a,R^*(b)+R(b)+b)=0. \end{equation} That is, $Q(\bar{A}*\bar{A},E)=0$. Therefore, $\bar{A}*\bar{A}\neq \bar{A}$ and $\bar{A}$ is not a semisimple algebra. Let $\bar{A}=B+N$, where $N$ is the nil-radical of $\bar{A}$ and $B$ is the semisimple component of~$\bar{A}$. Consider $\bar{E}=\bar{E_s}+\bar{E_n}$, where $\bar{E_s}\in B$, $\bar{E_n}\in N$. Note that $\bar{E_s}\in \bar{A}*\bar{A}$ and by \eqref{th6} $Q(E,\bar{E_s})=0$. Thus, $Q(E,\bar{E_n})=Q(E,\bar{E})\neq 0$. Consider the map $i\colon A\to I(A)$ defined by~\eqref{i-map}. Equality \eqref{u2} implies that $\bar{E_n}*i(E) =i(E)*\bar{E_n} =i(-R(E_n))$. Therefore, $\bar{E_n}*i(E)\in I(A)$ and $\bar{E_n}*i(E)$ is nilpotent too. Hence, $R(E_n)$ is a~nilpotent matrix. Let $J(A)$ be the ideal of $D_R(A)$ defined by~\eqref{J-map} and let $j \colon A\to J(A)$ be the map defined as $j(a)=-\bar{a}-R(a)$ for all $a\in A$. Recall that $-E_n=R(E_n)+R^*(E_n)$. Since $i(E_n)+j(E_n)=E_n$, \begin{multline*} j(E)*\bar{E_n} = (E-i(E))*\bar{E_n} = \bar{E}_n+i(R(E_n)) = \overline{E_n+R(E_n)}+R(E_n+R(E_n)) \\ = j(R^*(E_n)) = \bar{E_n}*j(E). \end{multline*} Therefore, $R^*(E_n)$ is a nilpotent element too. Then $\textrm{tr}(E_n)=-\textrm{tr}(R(E_n))-\textrm{tr}(R^*(E_n))=0$, which is a contradiction, since $\textrm{tr}(E_n)=\omega(E_n,E)=Q(\bar{E_n},E)\neq 0$. $\square$ {\bf Remark 5}. We have proved that there are no Rota---Baxter operators of nonzero weight $\lambda$ on $M_n(F)$ satisfying $R+R^*+\lambda\textrm{id}=0$. The same result can be proved for quadratic finite-dimensional simple Jordan and alternative algebras, since simple finite-dimensional algebras in these varieties are unital. However, everything changes if we consider a simple finite-dimensional Lie (or Malcev) algebra.
In \cite{Goncharov2}, it was proved that Rota---Baxter operators of nonzero weight~$\lambda$ satisfying $\theta_R=0$ on a simple finite-dimensional Lie (or Malcev) algebra $L$ are in one-to-one correspondence with solutions of the modified Yang---Baxter equation on $L$. \section{$\lambda$-double Lie algebras} In light of Theorem~6, we come to the following definition. {\bf Definition 4}. A $\lambda$-double Lie algebra is a linear space $V$ equipped with a double bracket satisfying the following identities \begin{gather} \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace + \lbrace\kern-3pt\lbrace b,a\rbrace\kern-3pt\rbrace^{(12)} = \lambda(a\otimes b-b\otimes a), \label{lambda-antiCom} \\ \lbrace\kern-3pt\lbrace a, \lbrace\kern-3pt\lbrace b,c\rbrace\kern-3pt\rbrace \rbrace\kern-3pt\rbrace _L -\lbrace\kern-3pt\lbrace b, \lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace \rbrace\kern-3pt\rbrace _R - \lbrace\kern-3pt\lbrace \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace,c\rbrace\kern-3pt\rbrace _L = -\lambda (b\otimes \lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace)^{(12)} \label{lambda-Jacobi} \end{gather} for $a,b,c\in V$. A $\lambda$-double Lie algebra for $\lambda = 0$ is an ordinary double Lie algebra. Let $(A,\omega)$ be a quadratic algebra with a~unit. Define $\textrm{tr}(a):=\omega(a,1)$. {\bf Definition 5}. Given a quadratic algebra~$A$, a~linear operator~$R$ on~$A$ is called $\lambda$-skew-symmetric if \begin{equation}\label{RBLambdaSkewSym} R(a)+R^*(a)+\lambda a=\lambda \textrm{tr}(a)1 \end{equation} for all $a\in A$. {\bf Remark 6}. Note that the condition~\eqref{RBLambdaSkewSym} already appeared in the article of O.~Ogievetsky and T.~Popov~\cite{Ogievetsky}. Let $V$ be a~finite-dimensional vector space with a double bracket $\lbrace\kern-3pt\lbrace \cdot,\cdot \rbrace\kern-3pt\rbrace$ and let $R$~be the corresponding operator on $\textrm{End}(V)$, see~\eqref{eq:Bracket_via_RB}.
Then the identity~\eqref{lambda-antiCom} is equivalent to the condition $\theta_R(y) = \lambda\textrm{tr}(y)E$ for all $y\in\textrm{End}(V)$. Further, the identity~\eqref{lambda-Jacobi} is fulfilled modulo~\eqref{lambda-antiCom} if and only if $R$ is an RB-operator of weight~$\lambda$ on $\textrm{End}(V)$. So, the following result holds. {\bf Theorem~7}. Let $V$~be a finite-dimensional vector space with a double bracket $\lbrace\kern-3pt\lbrace \cdot,\cdot \rbrace\kern-3pt\rbrace$ determined by an operator $R\colon\textrm{End}(V)\to \textrm{End}(V)$ as in~\eqref{eq:Bracket_via_RB}. Then $V$ is a $\lambda$-double Lie algebra if and only if $R$ is a~$\lambda$-skew-symmetric RB-operator of weight~$\lambda$ on $\textrm{End}(V)$. We call a solution~$r$ of AYBE of weight~$\lambda$ on $M_n(F)$ $\lambda$-skew-symmetric if $r$~satisfies the equality $$ r + \tau(r) = \lambda(E\otimes E-C), $$ where $C = \sum\limits_{i,j=1}^n e_{ij}\otimes e_{ji}$. Extending~\cite{AYBE-ext}, we get a one-to-one correspondence between $(-\lambda)$-skew-symmetric solutions of AYBE of weight~$-\lambda$ on $M_n(F)$ and $\lambda$-skew-symmetric RB-operators of weight~$\lambda$ on $M_n(F)$. {\bf Example 1}. A linear map $P$ defined on $M_n(F)$ as follows, $$ P(e_{ij}) = \begin{cases} -e_{ij}, & i<j, \\ 0, & i>j, \\ \sum\limits_{k\geq1}e_{i+k,\,i+k}, & i=j, \end{cases} $$ is an RB-operator of weight~1 on $M_n(F)$~\cite{Unital}. By definition, we have $$ P^*(e_{ij}) = \begin{cases} 0, & i<j, \\ -e_{ij}, & i>j, \\ \sum\limits_{k\geq1}e_{i-k,\,i-k}, & i=j, \end{cases} $$ so $P$ is 1-skew-symmetric. Let $\{f_i\}$ be a basis of the $n$-dimensional space~$V$ such that $e_{ij}(f_k) = \delta_{jk}f_i$. Due to the equivalence between RB-operators and double brackets, we have \begin{equation} \label{Ex10} \lbrace\kern-3pt\lbrace f_k,f_l\rbrace\kern-3pt\rbrace = \begin{cases} f_k\otimes f_l - f_l\otimes f_k, & k<l, \\ 0, & k\geq l. \end{cases} \end{equation} {\bf Remark 7}.
Given a $\lambda$-skew-symmetric RB-operator~$R$ of weight~$\lambda$ on $M_n(F)$, the operator $R^{(T)}$, which acts by the rule $R^{(T)}(a) = (R(a^T))^T$, where $T$ denotes the transpose in $M_n(F)$, is again a $\lambda$-skew-symmetric RB-operator of weight~$\lambda$ on $M_n(F)$. Indeed, Proposition~1 implies that $R^{(T)}$ is an RB-operator of weight~$\lambda$. The operator $R^{(T)}$ is $\lambda$-skew-symmetric, since the property $(S^{(T)})^* = (S^*)^{(T)}$ holds for all operators $S$ on $M_n(F)$. {\bf Example 2}. For $P$ from Example~1, consider $$ P^{(T)}(e_{ij}) = \begin{cases} 0, & i<j, \\ -e_{ij}, & i>j, \\ \sum\limits_{k\geq1}e_{i+k,\,i+k}, & i=j, \end{cases} $$ which is also a 1-skew-symmetric RB-operator of weight~1 on $M_n(F)$. Then \begin{equation} \label{Ex11} \lbrace\kern-3pt\lbrace f_k,f_l\rbrace\kern-3pt\rbrace = \begin{cases} f_k\otimes f_l, & k<l, \\ - f_l\otimes f_k, & k>l, \\ 0, & k = l. \end{cases} \end{equation} The RB-operator from the following example is close to the RB-operator from Example~2 for $n=3$; the only difference is the action on diagonal matrices. So, the obtained double bracket is a~kind of join of the double brackets~\eqref{Ex10} and~\eqref{Ex11}. {\bf Example 3}~\cite{Arthamonov}. Let $A$ be a three-dimensional vector space with a basis $a_1,a_2,a_3$. Define the double bracket on $A$: \begin{equation}\label{Art:Exm} \begin{gathered} \lbrace\kern-3pt\lbrace a_1,a_2\rbrace\kern-3pt\rbrace = -a_1\otimes a_2,\quad \lbrace\kern-3pt\lbrace a_2,a_1\rbrace\kern-3pt\rbrace = a_1\otimes a_2,\quad \lbrace\kern-3pt\lbrace a_2,a_3\rbrace\kern-3pt\rbrace = a_3\otimes a_2,\\ \lbrace\kern-3pt\lbrace a_3,a_1\rbrace\kern-3pt\rbrace = a_1\otimes a_3 - a_3\otimes a_1,\quad \lbrace\kern-3pt\lbrace a_3,a_2\rbrace\kern-3pt\rbrace = -a_3\otimes a_2. \end{gathered} \end{equation} All omitted brackets of generators are assumed to be zero. It is a $(-1)$-double Lie algebra.
In~\cite{Arthamonov}, this double bracket was defined in the context of so-called modified double Poisson algebras, see the next section. The corresponding linear operator $R$ on $M_3(F)$ defined by~\eqref{eq:Bracket_via_RB} for this double bracket equals \begin{gather*} R(e_{12}) = R(e_{13}) = R(e_{23}) = 0, \quad R(e_{21}) = e_{21}, \quad R(e_{31}) = e_{31}, \quad R(e_{32}) = e_{32}, \\ R(e_{11}) = -e_{22},\quad R(e_{22}) = 0,\quad R(e_{33}) = -(e_{11}+e_{22}), \end{gather*} and it is a Rota---Baxter operator of weight~$-1$ on $M_3(F)$ (see the case 1) from A)~\cite[Theorem~3]{GonGub}). Since \begin{gather*} R^*(e_{12}) = e_{12}, \quad R^*(e_{13}) = e_{13}, \quad R^*(e_{23}) = e_{23}, \quad R^*(e_{21}) = R^*(e_{31}) = R^*(e_{32}) = 0, \\ R^*(e_{11}) = -e_{33},\quad R^*(e_{22}) = -(e_{11}+e_{33}),\quad R^*(e_{33}) = 0, \end{gather*} $R$ is $(-1)$-skew-symmetric. {\bf Example 4}. A linear map $P_1$ defined on $M_n(F)$ as follows, $$ P_1(e_{ij}) = \begin{cases} \sum\limits_{k\geq1}e_{i+k,j+k}, & i\leq j, \\ - \sum\limits_{k\geq0}e_{i-k,j-k}, & i>j, \end{cases} $$ is an RB-operator of weight~1 on $M_n(F)$~\cite{Spectrum}. Since $$ P_1^*(e_{ij}) = \begin{cases} \sum\limits_{k\geq1}e_{i-k,j-k}, & i\geq j, \\ - \sum\limits_{k\geq0}e_{i+k,j+k}, & i<j, \end{cases} $$ $P_1$ is 1-skew-symmetric. Then $$ \lbrace\kern-3pt\lbrace f_k,f_l\rbrace\kern-3pt\rbrace = \begin{cases} -(f_l\otimes f_k + f_{l+1}\otimes f_{k-1}+\ldots+f_{k-1}\otimes f_{l+1}), & l<k, \\ f_k\otimes f_l+f_{k+1}\otimes f_{l-1}+\ldots+f_{l-1}\otimes f_{k+1}, & l\geq k, \end{cases} $$ is a 1-double bracket. If we extend Example~4 to the case of a countable-dimensional vector space, we get the space $V = F[t]$ equipped with the 1-double Lie bracket $$ \lbrace\kern-3pt\lbrace t^n,t^m\rbrace\kern-3pt\rbrace = -\frac{(t^n\otimes t^{m+1}-t^m\otimes t^{n+1})}{t\otimes 1-1\otimes t}. $$ Here we identify $t^k$ with $f_{k+1}$, $k\geq0$. Denote the obtained 1-double Lie algebra by $M_1$. {\bf Remark 8}.
We may prove an analogue of Theorem~4 for $\lambda$-double Lie algebras and Rota---Baxter operators $R\colon I\to \textrm{End}(V)$ when $V$ is countable-dimensional. Let us transform Example~4 as follows. Define the RB-operator $P_2 = P_1^{(\psi_n)}$ on~$M_n(F)$, where $\psi_n$ is an automorphism of $M_n(F)$ defined by the formula $\psi_n(e_{ij}) = e_{n+1-i,n+1-j}$. Then we extend $P_2$ as an operator from~$I$ to~$\textrm{End}(V)$. {\bf Example 5}. A linear map $P_2\colon I\to \textrm{End}(V)$ defined as follows, $$ P_2(e_{ij}) = \begin{cases} \sum\limits_{k\geq1}e_{i-k,j-k}, & i\geq j, \\ - \sum\limits_{k\geq0}e_{i+k,j+k}, & i<j, \end{cases} $$ is a 1-skew-symmetric RB-operator of weight~1 from $I$ to $\textrm{End}(V)$. Then $$ \lbrace\kern-3pt\lbrace f_k,f_l\rbrace\kern-3pt\rbrace = \begin{cases} f_k\otimes f_l+f_{k-1}\otimes f_{l+1}+\ldots+f_{l+1}\otimes f_{k-1}, & k>l, \\ -(f_{k+1}\otimes f_{l-1} + f_{k+2}\otimes f_{l-2}+\ldots+f_l\otimes f_k), & k\leq l, \end{cases} $$ is a 1-double bracket. The vector space $V = F[t]$ equipped with~the double bracket $$ \lbrace\kern-3pt\lbrace t^n,t^m\rbrace\kern-3pt\rbrace = \frac{t^{n+1}\otimes t^m-t^{m+1}\otimes t^n}{t\otimes 1-1\otimes t} $$ is a 1-double Lie algebra; denote it by $M_2$. {\bf Proposition 6}. Each of the 1-double Lie algebras $M_1$ and $M_2$ has only one nonzero proper ideal $I = tF[t]$. Moreover, $I$ is isomorphic to the whole double Lie algebra. {\sc Proof}. Let us prove the statement for $M_1$; the proof for~$M_2$ is analogous. Suppose that $I$ is a~nonzero proper ideal in $M_1$. Define $n$ as the minimal degree of elements from~$I$. Let us show that $n = 1$ and $t\in I$. If $n = 0$, then $1\in I$. Let us prove by induction on~$s\geq0$ that $t^s\in I$. For $s = 0$, it is true. Suppose that $s>0$ and we have proved that $t^j\in I$ for all $j<s$.
Consider $$ \lbrace\kern-3pt\lbrace 1,t^{2s}\rbrace\kern-3pt\rbrace = t^{2s-1}\otimes t + t^{2s-2}\otimes t^2 + \ldots + t^{s+1}\otimes t^{s-1} + t^s\otimes t^s + \ldots + 1\otimes t^{2s}. $$ So, $t^s\otimes t^s\in V\otimes I + I\otimes V$. Consider the map $\psi\colon V\otimes V\to V/I\otimes V/I$ acting as follows, $\psi(v\otimes w) = (v+I)\otimes (w+I)$. Applying the equality $I\otimes V + V\otimes I = \ker(\psi)$, we conclude that $\psi(t^s\otimes t^s) = 0$, which means that $t^s \in I$. Thus, $I = M_1$, which contradicts the assumption that $I$ is a proper ideal. For $n\geq 1$, consider $f = \sum\limits_{j=0}^n\alpha_j t^j\in I$. We have that the product $$ \lbrace\kern-3pt\lbrace 1,f\rbrace\kern-3pt\rbrace = \sum\limits_{j=1}^n\alpha_j (t^{j-1}\otimes t + \ldots + t\otimes t^{j-1}) + 1\otimes f - \alpha_0 1\otimes 1 $$ lies in $V\otimes I + I\otimes V$. When $n>1$, the elements $1+I,t+I,\ldots,t^{n-1}+I$ of $V/I$ are linearly independent, so we obtain a~contradiction. For the same reason, the case $n = 1$ and $\alpha_0\neq0$ does not hold. So, $n = 1$ and $t\in I$. As above, we may prove by induction on $s\geq1$ that $t^s\in I$. For this, it is enough to analyze the double product $\lbrace\kern-3pt\lbrace t,t^{2s-1}\rbrace\kern-3pt\rbrace$. Finally, the linear map $\xi\colon M_1\to I$ defined by the formula $\xi(t^n) = t^{n+1}$ is an isomorphism between $M_1$ and $I$. $\square$ {\bf Proposition 7}. Let $A$ be a quadratic algebra and let $R$ be a~$\lambda$-skew-symmetric Rota---Baxter operator of weight~$\lambda$ on~$A$. Then $R^*$ is also a~$\lambda$-skew-symmetric Rota---Baxter operator of weight $\lambda$ on $A$. {\sc Proof}. Since $R$ is $\lambda$-skew-symmetric, we have $R^*(a)=-R(a)-\lambda a+\lambda \textrm{tr}(a)1$. Therefore, \begin{multline} R^*(a)R^*(b) = R(a)R(b)+\lambda R(a) b+\lambda aR(b)-\lambda \textrm{tr}(b) R(a)-\lambda \textrm{tr}(a) R(b) \\ +\lambda^2 ab-\lambda^2 \textrm{tr}(b) a-\lambda^2\textrm{tr}(a)b+\lambda^2\textrm{tr}(a)\textrm{tr}(b)1.
\end{multline} On the other hand, \begin{multline} R^*(R^*(a)b+aR^*(b)+\lambda ab)=R^*(-R(a)b-\lambda ab+\lambda \textrm{tr}(a) b-aR(b)-\lambda ab +\lambda \textrm{tr}(b)a+\lambda ab) \\ = R^*(-R(a)b-aR(b)-\lambda ab)+\lambda R^*(\textrm{tr}(b)a+\textrm{tr}(a)b)\\ = R(a)R(b)+\lambda(R(a)b+aR(b)+\lambda ab)-\lambda\textrm{tr}(R(a)b+aR(b)+\lambda ab)1\\ -\lambda R(\textrm{tr}(b)a+\textrm{tr}(a)b)-\lambda^2 \textrm{tr}(a)b-\lambda^2\textrm{tr}(b)a+2\lambda^2\textrm{tr}(a)\textrm{tr}(b)1. \end{multline} Finally, \begin{multline*} R^*(a)R^*(b)-R^*(R^*(a)b+aR^*(b)+\lambda ab) =\lambda\textrm{tr}(R(a)b+aR(b)+\lambda ab)1-\lambda^2\textrm{tr}(a)\textrm{tr}(b)1\\ =\lambda \textrm{tr}(R(a)b+aR(b)+\lambda ab-\lambda \textrm{tr}(a)b)1=0, \end{multline*} since $$ \textrm{tr}(aR(b)) = \omega(aR(b),1) = \omega(a,R(b)) = \omega(R^*(a),b) = \omega(R^*(a)b,1) = \textrm{tr}(R^*(a)b) $$ and $R$ is~$\lambda$-skew-symmetric. $\square$ {\bf Remark 9}. Proposition~7 implies that given a quadratic algebra~$(A,\omega)$ equipped with a~$\lambda$-skew-symmetric Rota---Baxter operator of weight~$\lambda$, both linear operators $-R-\lambda\textrm{id}$ and $-R-\lambda\textrm{id}+\lambda\omega(\cdot,1)1$ are RB-operators of weight~$\lambda$. {\bf Remark 10}. Let $L$ be a~finite-dimensional $\lambda$-double Lie algebra for nonzero~$\lambda$ and let $R$ be a corresponding $\lambda$-skew-symmetric RB-operator of weight~$\lambda$ on $\textrm{End}(L)$. By Proposition~7, we get that $R^*$ is also a~$\lambda$-skew-symmetric RB-operator on $\textrm{End}(L)$. Thus, we may define a~new $\lambda$-double Lie algebra structure on the vector space~$L$ by~$R^*$ instead of~$R$. The RB-operators $P_1$ and $P_2$ from Examples~4 and~5 are dual to each other in this sense. Let $n = \dim(V)>1$, $A = \textrm{End}(V)$, and let $\omega$ be the trace form on $A$. By~Corollary~1, we have the decomposition $A = I_1\oplus I_2$ (as subalgebras), where $$ I_1=\ker(R^{N}),\quad I_2=\ker(R+\lambda\textrm{id})^{N}, \quad N=n^2.
$$ For the Rota---Baxter operator $R^*$ we analogously have the decomposition $A = J_1\oplus J_2$ for $J_1=\ker(R^*)^{N}$ and $J_2=\ker(R^*+\lambda\textrm{id})^{N}$. Define $I_2' = \ker(R+\lambda\textrm{id})$. Let us show that $I_2'\neq(0)$. Suppose that $I_2' = (0)$, then $I_2 = (0)$ and $I_1 = A$. Also, $-(R+\lambda\textrm{id})$ is an invertible RB-operator of weight~$\lambda$ on $A$. It is well-known that $R+\lambda\textrm{id}$ is a~homomorphism from $B$ to~$A$~\cite{Splitting}, where $B$ is the vector space $\textrm{End}(V)$ under the product $$ x\circ y = -(R+\lambda\textrm{id})(x)y - x(R+\lambda\textrm{id})(y) + \lambda xy = -( R(x)y + xR(y) + \lambda xy ). $$ Since $R+\lambda\textrm{id}$ is nondegenerate, it is an isomorphism between $B$ and $A$. On the other hand, $\ker(R)$ is an ideal of $B$ as the kernel of the homomorphism $R\colon B\to A$; it is nonzero, since $I_1 = A$ means that $R^{N} = 0$, i.\,e., $R$ is nilpotent. Since $B\cong A$~is simple, we conclude that $B = \ker(R)$, i.\,e., $R = 0$. But then $R^* = 0$ and $R$ is not $\lambda$-skew-symmetric, a contradiction. {\bf Proposition 8}. We have a) $\omega(I_1,J_2)=\omega(J_1,I_2)=0$; b) $I_2'$ is a nilpotent ideal in $R(A)$. {\sc Proof}. a) Let $x\in I_1$, $y\in J_2$. The restriction of the map $R+\lambda\textrm{id}$ to $I_1$ is nondegenerate. Therefore, $x=(R+\lambda\textrm{id})^N(z)$ for some $z\in I_1$. Then $$ \omega(x,y) = \omega((R+\lambda\textrm{id})^N(z),y) = \omega(z,(R^*+\lambda\textrm{id})^N(y)) = 0. $$ Similarly, $\omega(J_1,I_2)=0$. b) It is well-known that $I_2'$ is an ideal in $R(A)$, see, e.\,g.,~\cite[Lemma~8]{Spectrum}. Suppose that $I_2'$ is not nilpotent, then there exists a nonzero idempotent $e^2=e\in I_2'$. Since the trace of $e$ is a positive integer, $R^*(e)=\lambda\textrm{tr}(e)E\neq 0$. On the other hand, applying Proposition~7, we get $$ \lambda^2\textrm{tr}(e)^2E = R^*(e)R^*(e) = R^*(2\lambda \textrm{tr}(e)e + \lambda e) = \lambda^2(2\textrm{tr}(e)^2+\textrm{tr}(e))E $$ and $\textrm{tr}(e)^2+\textrm{tr}(e)=0$, a~contradiction.
$\square$ {\bf Corollary 2}. For every $x\in I_2'$ we have $\textrm{tr}(x)=0$ and $R^*(x)=0$. {\bf Corollary 3}. There is a natural isomorphism between: a) $J_1$ and $I_1^*$, b) $J_2$ and $I_2^*$. {\sc Proof}. Indeed, define a map $\gamma\colon A\to A^*$ as follows: $$ \gamma(f)(a)=\omega(f,a),\quad f,a\in A. $$ Since the form $\omega$ is non-degenerate, $\gamma$ is an isomorphism. By Proposition 8a, $\gamma(J_1)=I_1^*$ and $\gamma(J_2)=I_2^*$. $\square$ In particular, if $e_1,\ldots,e_p$ is a basis of $I_1$ and $e_{p+1},\ldots, e_N$ is a basis of $I_2$, then we may choose the dual basis $f_1,\ldots,f_N$ of $e_1,\ldots,e_N$ in such a way that $f_1,\ldots,f_p\in J_1$ and $f_{p+1},\ldots, f_N\in J_2$. {\bf Lemma}. Let $(V,\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace)$ be a $\lambda$-double Lie algebra for $\lambda\neq0$ and let $R$ be the corresponding Rota---Baxter operator of weight~$\lambda$ on $A=\textrm{End}(V)$. If $\dim V>1$, then $U=I_2'V$ is a~proper ideal of~$V$. {\sc Proof}. Without loss of generality, let $\lambda=1$. Since $I_2'$ is a nilpotent nonzero ideal in $R(A)$, $U$ is a nonzero proper subspace of $V$. Moreover, $R(A)U\subset U$. It means that for all $a\in V$ and $b\in U$ $$ \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace = \sum\limits_i e_i(a)\otimes R(e_i^*)(b)\in V\otimes U. $$ It remains to prove the inclusion $\lbrace\kern-3pt\lbrace b,a\rbrace\kern-3pt\rbrace\in V\otimes U + U\otimes V$ for the same $a,b$. If $y\in J_1$ and $x\in I_2'$, then \begin{equation} \label{J_1U<U} yx=-R^*(y)x-R(y)x+\textrm{tr}(y)x\in -R^*(y)x+I_2'\subset (R^*)^2(y)x+I_2'\subset\ldots\subset I_2'. \end{equation} This means that $J_1U\subset U$. Let $e_1,\ldots,e_k$ be a basis of $I_2'$ and let $e_1,\ldots,e_k,e_{k+1},\ldots, e_t$ be a basis of $I_2$. By Corollary~3, we may find a basis $f_1,\ldots,f_t$ of $J_2$ such that $\omega(e_i,f_j)=\delta_{ij}$ for all $i,j=1,\ldots,t$.
In particular, for $j=1,\ldots,k$ we have $f_j^*=e_j\in I_2'$, and so $$ f_j(b)\otimes R(e_j)(a)=-f_j(b)\otimes e_ja\in V\otimes U. $$ Let $f_{t+1},\ldots,f_q$ be a~basis of $J_1$. If $i=1,\ldots,k$ and $j=k+1,\ldots,t$, then $\textrm{tr}(f_je_i)=0$ and moreover, $\textrm{tr}(f_j I_2') = 0$ by Proposition~8a. Therefore, \begin{equation} \label{NoSimple:fjei} R(f_je_i)+R^*(f_je_i)+f_je_i=0. \end{equation} Then by Proposition~8a, $$ \textrm{tr}(R^*(f_je_i)a)=\textrm{tr}(f_je_iR(a))=0, $$ since $e_iR(a)\in I_2'$. Therefore, $R^*(f_je_i)=0$ and $f_je_i\in I_2'$ by~\eqref{NoSimple:fjei} for all $i=1,\ldots,k$ and $j=k+1,\ldots, t$. Consequently, $f_jU\subset U$ and we get $f_j(b)\otimes R(f_j^*)(a)\in U\otimes V$. Applying~\eqref{J_1U<U}, we finally have that \begin{multline*} \lbrace\kern-3pt\lbrace b,a \rbrace\kern-3pt\rbrace = \sum\limits_{j=1}^k f_j(b)\otimes R(f_j^*)(a) +\sum\limits_{j=k+1}^t f_j(b)\otimes R(f_j^*)(a) + \sum\limits_{j=t+1}^q f_j(b) \otimes R(f_j^*)(a) \\ \in V\otimes U + U\otimes V. \end{multline*} The lemma is proved. $\square$ {\bf Remark 11}. Let $\dim V>1$ and $J_2'=\ker(R^*+\lambda\textrm{id})$. Then the subspace $U'=J_2'V$ is also a proper ideal in $(V,\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace)$. {\bf Theorem 8}. There are no simple finite-dimensional $\lambda$-double Lie algebras. {\sc Proof}. For $\lambda = 0$, it was proved in~\cite{DoubleLie}. When $\lambda\neq0$ and $\dim V>1$, it follows from the Lemma. Finally, when $\lambda\neq0$ and $\dim V = 1$, it is easy to show that $\lbrace\kern-3pt\lbrace V,V\rbrace\kern-3pt\rbrace = 0$, so $V$ is not simple either. $\square$ \section{Modified double Poisson algebras} {\bf Definition 6}~\cite{Arthamonov0,Arthamonov}. Let $A$ be an associative algebra over $F$ with the product $ab=\mu(a\otimes b)$.
A double bracket $\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace$ on $A$ is called a modified double Poisson bracket~\cite{Arthamonov0,Arthamonov} if the following equalities hold \begin{gather} \lbrace\kern-3pt\lbrace a,bc \rbrace\kern-3pt\rbrace = (b\otimes 1)\lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace+\lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace(1\otimes c), \label{a1} \\ \lbrace\kern-3pt\lbrace ab,c \rbrace\kern-3pt\rbrace = (1\otimes a)\lbrace\kern-3pt\lbrace b,c\rbrace\kern-3pt\rbrace+\lbrace\kern-3pt\lbrace a,c\rbrace\kern-3pt\rbrace(b\otimes 1), \label{a2} \\ \{a,\{b,c\}\}-\{b,\{a,c\}\}=\{\{a,b\},c\}, \label{a3} \\ \{a,b\}+\{b,a\}=0\ \text{mod}\ [A,A] \label{a4} \end{gather} for all $a,b,c\in A$. Here $\{a,b\}=\mu\circ \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace$. Note that the identities~\eqref{a1} and~\eqref{a2} are exactly the Leibniz rules~\eqref{Leibniz} and~\eqref{LeibnizTwo} fulfilled in double Poisson algebras. The identities~\eqref{a3} and~\eqref{a4} are weakened versions of the Jacobi identity and anti-commutativity, respectively. S.~Arthamonov posed two conjectures on modified double Poisson brackets; one of them is the following. {\bf Conjecture} (S. Arthamonov, 2017~\cite{Arthamonov}). The bracket defined on $A\otimes A$ by~\eqref{Art:Exm} is a~modified double Poisson bracket. We prove a more general result: every $\lambda$-double Lie algebra generates the structure of a modified double Poisson algebra on the free associative algebra. {\bf Theorem 9}. Let $(V,\lbrace\kern-3pt\lbrace \cdot,\cdot\rbrace\kern-3pt\rbrace)$ be a finite-dimensional $\lambda$-double Lie algebra with nonzero~$\lambda$. Then equalities~\eqref{a1},~\eqref{a2} define a modified double Poisson algebra on the free associative algebra $A=\textrm{As}\langle e_1,\ldots,e_n\rangle$, where $e_1,\ldots,e_n$ is a basis of $V$. {\sc Proof}. For convenience, we will prove the statement for $\lambda=1$.
Take $a=x_1\ldots x_k$, $b=y_1\ldots y_l$, where $x_i,y_j\in \{e_1,\ldots,e_n\}$. Then $$ \lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace =\sum_{i=1}^k\sum_{j=1}^l (y_1\ldots y_{j-1}\otimes x_1\ldots x_{i-1})\lbrace\kern-3pt\lbrace x_i, y_j\rbrace\kern-3pt\rbrace(x_{i+1}\ldots x_{k}\otimes y_{j+1}\ldots y_l). $$ It is easy to check that $\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace$ satisfies~\eqref{a1} and~\eqref{a2}. For $x,y\in V$ we will use the following notation: $$\lbrace\kern-3pt\lbrace x, y\rbrace\kern-3pt\rbrace=(x,y)_{(1)}\otimes (x,y)_{(2)}.$$ Let us prove \eqref{a4}. We have \begin{multline}\label{th**} \{a,b\}=\sum\limits_{i,j} y_1\ldots y_{j-1}(x_i,y_j)_{(1)}x_{i+1}\ldots x_kx_1\ldots x_{i-1}(x_i,y_j)_{(2)}y_{j+1}\ldots y_l\\ =\sum\limits_{i,j}x_1\ldots x_{i-1}(x_i,y_j)_{(2)}y_{j+1}\ldots y_ly_1\ldots y_{j-1}(x_i,y_j)_{(1)}x_{i+1}\ldots x_k\ \text{mod}\ [A,A]. \end{multline} The equality \eqref{lambda-antiCom} implies that for all $a,b,c\in A$, \begin{equation}\label{tozh} a(x_i,y_j)_{(2)}b(x_i,y_j)_{(1)}c = -a(y_j,x_i)_{(1)}b(y_j,x_i)_{(2)}c+ay_jbx_ic-ax_iby_jc. \end{equation} We have \begin{multline*} \sum\limits_{i,j}x_1\ldots x_{i-1}(x_i,y_j)_{(2)}y_{j+1}\ldots y_ly_1\ldots y_{j-1}(x_i,y_j)_{(1)}x_{i+1}\ldots x_k\\ =-\sum\limits_{i,j}x_1\ldots x_{i-1}(y_j,x_i)_{(1)}y_{j+1}\ldots y_ly_1\ldots y_{j-1}(y_j,x_i)_{(2)}x_{i+1}\ldots x_k \allowdisplaybreaks \\ +\sum\limits_{i,j}(x_1\ldots x_{i-1}y_jy_{j+1}\ldots y_ly_1\ldots y_{j-1}x_ix_{i+1}\ldots x_k\\ -x_1\ldots x_{i-1}x_iy_{j+1}\ldots y_ly_1\ldots y_{j-1}y_jx_{i+1}\ldots x_k)\\ =-\{b,a\}+\sum_{i,j} x_1\ldots x_{i-1}( y_jy_{j+1}\ldots y_ly_1\ldots y_{j-1}x_i-x_iy_{j+1}\ldots y_ly_1\ldots y_{j-1}y_j)x_{i+1}\ldots x_k. \end{multline*} Note that $$ \sum_{j=1}^l y_jy_{j+1}\ldots y_ly_1\ldots y_{j-1}x_i-x_iy_{j+1}\ldots y_ly_1\ldots y_{j-1}y_j =\left[\sum_j y_{j+1}\ldots y_ly_1\ldots y_{j-1}y_j,x_i\right].
$$ Finally, \begin{multline*} \sum\limits_{i=1}^k x_1\ldots x_{i-1}\left(\sum_{j=1}^l y_jy_{j+1}\ldots y_ly_1\ldots y_{j-1}x_i-x_iy_{j+1}\ldots y_ly_1\ldots y_{j-1}y_j\right)x_{i+1}\ldots x_k \\ =\sum_{i=1}^k x_1\ldots x_{i-1}\left[\sum_j y_{j+1}\ldots y_ly_1\ldots y_{j-1}y_j,x_i\right]x_{i+1}\ldots x_k\\ =\sum_{j=1}^l [y_{j+1}\ldots y_ly_1\ldots y_{j-1}y_j, x_1\ldots x_k]\in [A,A]. \end{multline*} Therefore, $\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace$ satisfies \eqref{a4}. Let us prove~\eqref{a3}. Define $L(a,b,c) = \{a,\{b,c\}\}-\{b,\{a,c\}\}-\{\{a,b\},c\}$. First we note that induction on $\deg(c)$ allows us to assume that $\deg(c) = 1$, since for $c = c_1c_2$ we have by~\eqref{a1},~\eqref{a2}: \begin{multline*} L(a,b,c) = \{a,\{b,c_1c_2\}\} - \{b,\{a,c_1c_2\}\} - \{\{a,b\},c_1c_2\} \\ = \{a,c_1\{b,c_2\}\} + \{a,\{b,c_1\}c_2\} - \{b,c_1\{a,c_2\}\} - \{b,\{a,c_1\}c_2\} - \{\{a,b\},c_1c_2\} \\ = c_1\{a,\{b,c_2\}\} + \underline{\{a,c_1\}\{b,c_2\}} + \underline{\underline{\{b,c_1\}\{a,c_2\}}} + \{a,\{b,c_1\}\}c_2 - c_1\{b,\{a,c_2\}\} \\ - \underline{\underline{\{b,c_1\}\{a,c_2\} }} - \underline{\{a,c_1\}\{b,c_2\}} - \{b,\{a,c_1\}\}c_2 - c_1\{\{a,b\},c_2\} - \{\{a,b\},c_1\}c_2 \\ = c_1L(a,b,c_2) + L(a,b,c_1)c_2. \end{multline*} Let $a=x_1\ldots x_k$, $b=y_1\ldots y_l$, where $x_i,y_j\in \{e_1,\ldots,e_n\}$ and $c\in V$. We will use the following notation: $\alpha_i(x):=x_1\ldots x_{i-1}$ (for convenience, $\alpha_1(x):=1$), $\beta_i(x)=x_{i+1}\ldots x_k$ ($\beta_k(x):=1)$. That is, $a=\alpha_i(x)x_i\beta_i(x)$. Similarly, $\alpha_i(y)=y_1\ldots y_{i-1}$, $\beta_i(y)=y_{i+1}\ldots y_l$. Also, we need $\gamma_{i,j}(x) = \begin{cases} x_{i}x_{i+1}\ldots x_{j}, & i\leq j,\\ 1, & i>j. \end{cases}$ We have $$ \{b,c\}=\sum\limits_{j=1}^l(y_j,c)_{(1)}\beta_j(y)\alpha_j(y)(y_j,c)_{(2)}.
$$ Therefore, \begin{align} &\{a,\{b,c\}\} =\sum\limits_{j=1}^l\{a, (y_j,c)_{(1)}\beta_j(y)\alpha_j(y)(y_j,c)_{(2)}\} \nonumber \allowdisplaybreaks \\ & \ =\sum\limits_{i=1}^k \sum\limits_{j=1}^l(x_i,(y_j,c)_{(1)})_{(1)}\beta_i(x)\alpha_i(x)(x_i,(y_j,c)_{(1)})_{(2)}\beta_j(y)\alpha_j(y)(y_j,c)_{(2)} \nonumber \\ & \ +\sum\limits_{i=1}^k \sum\limits_{j=1}^{l-1}\sum\limits_{s=j+1}^l(y_j,c)_{(1)}\gamma_{j+1,s-1}(y) (x_i,y_s)_{(1)}\beta_i(x)\alpha_i(x)(x_i,y_s)_{(2)}\beta_s(y)\alpha_j(y)(y_j,c)_{(2)} \label{a(bc)-1} \allowdisplaybreaks \\ & \ +\sum\limits_{i=1}^k \sum\limits_{j=2}^l\sum\limits_{p=1}^{j-1}(y_j,c)_{(1)}\beta_j(y)\alpha_p(y) (x_i,y_p)_{(1)}\beta_i(x)\alpha_i(x)(x_i,y_p)_{(2)}\gamma_{p+1,j-1}(y)(y_j,c)_{(2)} \label{a(bc)-2} \allowdisplaybreaks \\ & \ +\sum\limits_{i=1}^k \sum\limits_{j=1}^l (y_j,c)_{(1)}\beta_j(y)\alpha_j(y)(x_i,(y_j,c)_{(2)})_{(1)}\beta_i(x)\alpha_i(x)(x_i,(y_j,c)_{(2)})_{(2)}. \nonumber \end{align} Similarly, \begin{align} &\{b,\{a,c\}\}=\sum\limits_{i=1}^k\{b, (x_i,c)_{(1)}\beta_i(x)\alpha_i(x)(x_i,c)_{(2)}\} \nonumber \\ & \ =\sum\limits_{i=1}^k \sum\limits_{j=1}^l(y_j,(x_i,c)_{(1)})_{(1)}\beta_j(y)\alpha_j(y)(y_j,(x_i,c)_{(1)})_{(2)}\beta_i(x)\alpha_i(x)(x_i,c)_{(2)} \nonumber \\ & \ +\sum\limits_{i=1}^{k-1} \sum\limits_{j=1}^l\sum\limits_{q=i+1}^k(x_i,c)_{(1)}\gamma_{i+1,q-1}(x) (y_j,x_q)_{(1)}\beta_j(y)\alpha_j(y)(y_j,x_q)_{(2)}\beta_q(x)\alpha_i(x)(x_i,c)_{(2)} \label{b(ac)-Psi} \allowdisplaybreaks \\ & \ +\sum\limits_{i=2}^k \sum\limits_{j=1}^l\sum\limits_{r=1}^{i-1}(x_i,c)_{(1)}\beta_i(x)\alpha_r(x) (y_j,x_r)_{(1)}\beta_j(y)\alpha_j(y)(y_j,x_r)_{(2)}\gamma_{r+1,i-1}(x)(x_i,c)_{(2)} \label{b(ac)-Gamma} \\ & \ +\sum\limits_{i=1}^k \sum\limits_{j=1}^l (x_i,c)_{(1)}\beta_i(x)\alpha_i(x)(y_j,(x_i,c)_{(2)})_{(1)}\beta_j(y)\alpha_j(y)(y_j,(x_i,c)_{(2)})_{(2)}. 
\nonumber \end{align} Finally, \begin{align} & \{\{a,b\},c\}=\sum\limits_{i=1}^k\sum\limits_{j=1}^{l} \{\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_i(x)(x_i,y_j)_{(2)}\beta_j(y),c\} \nonumber \\ & \ =\sum\limits_{i=1}^k\sum\limits_{j=2}^{l}\sum\limits_{p=1}^{j-1}(y_p,c)_{(1)}\gamma_{p+1,j-1}(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_i(x)(x_i,y_j)_{(2)}\beta_j(y)\alpha_p(y)(y_p,c)_{(2)} \label{(ab)c)-1} \\ & \ +\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{l}((x_i,y_j)_{(1)},c)_{(1)}\beta_i(x)\alpha_i(x)(x_i,y_j)_{(2)}\beta_j(y)\alpha_j(y)((x_i,y_j)_{(1)},c)_{(2)} \nonumber \allowdisplaybreaks \\ & \ +\sum\limits_{i=1}^{k-1}\sum\limits_{j=1}^{l}\sum\limits_{q=i+1}^k(x_q,c)_{(1)}\beta_q(x)\alpha_i(x)(x_i,y_j)_{(2)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\gamma_{i+1,q-1}(x)(x_q,c)_{(2)} \label{(ab)c-Gamma} \\ & \ +\sum\limits_{i=2}^{k}\sum\limits_{j=1}^{l}\sum\limits_{r=1}^{i-1}(x_r,c)_{(1)}\gamma_{r+1,i-1}(x)(x_i,y_j)_{(2)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_r(x)(x_r,c)_{(2)} \label{(ab)c-Psi} \allowdisplaybreaks \\ & \ +\sum\limits_{i=1}^k\sum\limits_{j=1}^{l} ((x_i,y_j)_{(2)},c)_{(1)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_i(x)((x_i,y_j)_{(2)},c)_{(2)} \nonumber \\ & \ +\sum\limits_{i=1}^k\sum\limits_{j=1}^{l-1}\sum\limits_{s=j+1}^l\! (y_s,c)_{(1)}\beta_s(y)\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_i(x)(x_i,y_j)_{(2)}\gamma_{j+1,s-1}(y)(y_s,c)_{(2)} \label{(ab)c-2}. \end{align} First we note that~\eqref{a(bc)-1} cancels with~\eqref{(ab)c)-1}, and~\eqref{a(bc)-2} cancels with~\eqref{(ab)c-2}.
By the same arguments and by~\eqref{tozh}, we rewrite the sum of~\eqref{b(ac)-Psi} and~\eqref{(ab)c-Psi} as follows, \begin{multline*} \sum\limits_{i=1}^{k-1} \sum\limits_{j=1}^l\sum\limits_{q=i+1}^k(x_i,c)_{(1)}\gamma_{i+1,q-1}(x) (y_j,x_q)_{(1)}\beta_j(y)\alpha_j(y)(y_j,x_q)_{(2)}\beta_q(x)\alpha_i(x)(x_i,c)_{(2)}\\ +\sum\limits_{i=2}^{k}\sum\limits_{j=1}^{l}\sum\limits_{r=1}^{i-1}(x_r,c)_{(1)}\gamma_{r+1,i-1}(x)(x_i,y_j)_{(2)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_r(x)(x_r,c)_{(2)} \\=\sum\limits_{i=1}^{k-1} \sum\limits_{j=1}^l\sum\limits_{q=i+1}^k\left ((x_i,c)_{(1)}\gamma_{i+1,q-1}(x) (y_j,x_q)_{(1)}\beta_j(y)\alpha_j(y)(y_j,x_q)_{(2)}\beta_q(x)\alpha_i(x)(x_i,c)_{(2)}\right.\\ +\left.(x_i,c)_{(1)}\gamma_{i+1,q-1}(x)(x_q,y_j)_{(2)}\beta_j(y)\alpha_j(y)(x_q,y_j)_{(1)}\beta_q(x)\alpha_i(x)(x_i,c)_{(2)} \right)\\ =\sum\limits_{i=1}^{k-1} \sum\limits_{j=1}^l\sum\limits_{q=i+1}^k(x_i,c)_{(1)}\gamma_{i+1,q-1}(x) y_j\beta_j(y)\alpha_j(y)x_q\beta_q(x)\alpha_i(x)(x_i,c)_{(2)} \allowdisplaybreaks \\ -\sum\limits_{i=1}^{k-1} \sum\limits_{j=1}^l\sum\limits_{q=i+1}^k (x_i,c)_{(1)}\gamma_{i+1,q-1}(x) x_q\beta_j(y)\alpha_j(y)y_j\beta_q(x)\alpha_i(x)(x_i,c)_{(2)}=:\Psi. \end{multline*} Note that $\sum\limits_{j=1}^ly_j\beta_j(y)\alpha_j(y)=\sum\limits_{j=1}^l\beta_j(y)\alpha_j(y)y_j$. Let $\psi(y)=\sum\limits_{j=1}^l\beta_j(y)\alpha_j(y)y_j$. Then the last equality takes the form \begin{multline*} \Psi=\sum\limits_{i=1}^{k-1} \sum\limits_{q=i+1}^k(x_i,c)_{(1)}\gamma_{i+1,q-1}(x) \psi(y)x_q\beta_q(x)\alpha_i(x)(x_i,c)_{(2)}\\ -\sum\limits_{i=1}^{k-1} \sum\limits_{q=i+1}^k (x_i,c)_{(1)}\gamma_{i+1,q-1}(x) x_q\psi(y)\beta_q(x)\alpha_i(x)(x_i,c)_{(2)} \allowdisplaybreaks \\ =\sum\limits_{i=1}^{k-1}\big((x_i,c)_{(1)}\psi(y)\beta_i(x)\alpha_i(x)(x_i,c)_{(2)}-(x_i,c)_{(1)}\beta_i(x)\psi(y)\alpha_i(x)(x_i,c)_{(2)}\big).
\end{multline*} Similarly, we rewrite the sum of~\eqref{b(ac)-Gamma} and~\eqref{(ab)c-Gamma} \begin{multline*} \sum\limits_{i=2}^k \sum\limits_{j=1}^l\sum\limits_{r=1}^{i-1}(x_i,c)_{(1)}\beta_i(x)\alpha_r(x) (y_j,x_r)_{(1)}\beta_j(y)\alpha_j(y)(y_j,x_r)_{(2)}\gamma_{r+1,i-1}(x)(x_i,c)_{(2)}\\ +\sum\limits_{i=1}^{k-1}\sum\limits_{j=1}^{l}\sum\limits_{q=i+1}^k(x_q,c)_{(1)}\beta_q(x)\alpha_i(x)(x_i,y_j)_{(2)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\gamma_{i+1,q-1}(x)(x_q,c)_{(2)} \allowdisplaybreaks \\ =\sum\limits_{i=2}^{k}\big((x_i,c)_{(1)}\beta_i(x)\psi(y)\alpha_i(x)(x_i,c)_{(2)}-(x_i,c)_{(1)}\beta_i(x)\alpha_i(x)\psi(y)(x_i,c)_{(2)}\big) =: \Gamma. \end{multline*} Observe that \begin{multline*} \Psi+\Gamma = \sum\limits_{i=1}^{k-1}\big((x_i,c)_{(1)}\psi(y)\beta_i(x)\alpha_i(x)(x_i,c)_{(2)}-(x_i,c)_{(1)}\beta_i(x)\psi(y)\alpha_i(x)(x_i,c)_{(2)}\big)\\ +\sum\limits_{i=2}^{k}\big((x_i,c)_{(1)}\beta_i(x)\psi(y)\alpha_i(x)(x_i,c)_{(2)}-(x_i,c)_{(1)}\beta_i(x)\alpha_i(x)\psi(y)(x_i,c)_{(2)}\big) \allowdisplaybreaks \\ =\sum\limits_{i=1}^{k-1}(x_i,c)_{(1)}\psi(y)\beta_i(x)\alpha_i(x)(x_i,c)_{(2)}-(x_1,c)_{(1)}\beta_1(x)\psi(y)\alpha_1(x)(x_1,c)_{(2)} \\ -\sum\limits_{i=2}^{k-1}(x_i,c)_{(1)}\beta_i(x)\psi(y)\alpha_i(x)(x_i,c)_{(2)}+\sum\limits_{i=2}^{k-1}(x_i,c)_{(1)}\beta_i(x)\psi(y)\alpha_i(x)(x_i,c)_{(2)} \\ +(x_k,c)_{(1)}\beta_k(x)\psi(y)\alpha_k(x)(x_k,c)_{(2)}-\sum\limits_{i=2}^{k}(x_i,c)_{(1)}\beta_i(x)\alpha_i(x)\psi(y)(x_i,c)_{(2)}\\ =\sum\limits_{i=1}^k (x_i,c)_{(1)}\psi(y)\beta_i(x)\alpha_i(x)(x_i,c)_{(2)} - \sum\limits_{i=1}^k(x_i,c)_{(1)}\beta_i(x)\alpha_i(x)\psi(y)(x_i,c)_{(2)}.
\end{multline*} Summing up the obtained equalities, we get that \begin{multline*} \{a,\{b,c\}\}-\{b,\{a,c\}\}-\{\{a,b\},c\}\\ =\sum\limits_{i=1}^k \sum\limits_{j=1}^l(x_i,(y_j,c)_{(1)})_{(1)}\beta_i(x)\alpha_i(x)(x_i,(y_j,c)_{(1)})_{(2)}\beta_j(y)\alpha_j(y)(y_j,c)_{(2)}\\ +\sum\limits_{i=1}^k \sum\limits_{j=1}^l (y_j,c)_{(1)}\beta_j(y)\alpha_j(y)(x_i,(y_j,c)_{(2)})_{(1)}\beta_i(x)\alpha_i(x)(x_i,(y_j,c)_{(2)})_{(2)}\\ -\sum\limits_{i=1}^k \sum\limits_{j=1}^l(y_j,(x_i,c)_{(1)})_{(1)}\beta_j(y)\alpha_j(y)(y_j,(x_i,c)_{(1)})_{(2)}\beta_i(x)\alpha_i(x)(x_i,c)_{(2)} \\ -\sum\limits_{i=1}^k \sum\limits_{j=1}^l (x_i,c)_{(1)}\beta_i(x)\alpha_i(x)(y_j,(x_i,c)_{(2)})_{(1)}\beta_j(y)\alpha_j(y)(y_j,(x_i,c)_{(2)})_{(2)} \allowdisplaybreaks \\ -\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{l}((x_i,y_j)_{(1)},c)_{(1)}\beta_i(x)\alpha_i(x)(x_i,y_j)_{(2)}\beta_j(y)\alpha_j(y)((x_i,y_j)_{(1)},c)_{(2)} \\ -\sum\limits_{i=1}^k\sum\limits_{j=1}^{l} ((x_i,y_j)_{(2)},c)_{(1)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_i(x)((x_i,y_j)_{(2)},c)_{(2)} \\ -\sum\limits_{i=1}^k (x_i,c)_{(1)}\psi(y)\beta_i(x)\alpha_i(x)(x_i,c)_{(2)}+\sum\limits_{i=1}^k(x_i,c)_{(1)}\beta_i(x)\alpha_i(x)\psi(y)(x_i,c)_{(2)}.
\end{multline*} Note that the sum $-\sum\limits_{i=1}^k\sum\limits_{j=1}^{l} ((x_i,y_j)_{(2)},c)_{(1)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_i(x)((x_i,y_j)_{(2)},c)_{(2)}$ can be considered as a linear function of $\lbrace\kern-3pt\lbrace x_i,y_j\rbrace\kern-3pt\rbrace^{(12)}=(x_i,y_j)_{(2)}\otimes (x_i,y_j)_{(1)}$: \begin{multline*} -\sum\limits_{i=1}^k\sum\limits_{j=1}^{l} ((x_i,y_j)_{(2)},c)_{(1)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_i(x)((x_i,y_j)_{(2)},c)_{(2)}\\ =-\sum\limits_{i=1}^k\sum\limits_{j=1}^{l}\mu^2\left ((1\otimes \beta_j(y)\alpha_j(y)\otimes \beta_i(x)\alpha_i(x))\Theta_c( (x_i,y_j)_{(2)}\otimes (x_i,y_j)_{(1)})\right ), \end{multline*} where the map $\mu^2\colon A\otimes A\otimes A\to A$ is defined as $\mu^2(x\otimes y\otimes z)=xyz$ and $$ \Theta_c(x\otimes y)=(x,c)_{(1)}\otimes y\otimes (x,c)_{(2)}. $$ Therefore, we apply \eqref{tozh} and get \begin{multline*} -\sum\limits_{i=1}^k\sum\limits_{j=1}^{l} ((x_i,y_j)_{(2)},c)_{(1)}\beta_j(y)\alpha_j(y)(x_i,y_j)_{(1)}\beta_i(x)\alpha_i(x)((x_i,y_j)_{(2)},c)_{(2)} \\ =\sum\limits_{i=1}^k\sum\limits_{j=1}^{l} ((y_j,x_i)_{(1)},c)_{(1)}\beta_j(y)\alpha_j(y)(y_j,x_i)_{(2)}\beta_i(x)\alpha_i(x)((y_j,x_i)_{(1)},c)_{(2)} \\ -\sum\limits_{i=1}^k\sum\limits_{j=1}^l(y_j,c)_{(1)}\beta_j(y)\alpha_j(y)x_i\beta_i(x)\alpha_i(x)(y_j,c)_{(2)} \allowdisplaybreaks \\ +\sum\limits_{i=1}^k\sum\limits_{j=1}^l(x_i,c)_{(1)}\beta_j(y)\alpha_j(y)y_j\beta_i(x)\alpha_i(x)(x_i,c)_{(2)}.
\end{multline*} Let us divide the summands of $ \{a,\{b,c\}\}-\{b,\{a,c\}\}-\{\{a,b\},c\}$ into two groups: $$ \{a,\{b,c\}\}-\{b,\{a,c\}\}-\{\{a,b\},c\} = I+J, $$ where \begin{multline*} I=\sum\limits_{i=1}^k \sum\limits_{j=1}^l(x_i,(y_j,c)_{(1)})_{(1)}\beta_i(x)\alpha_i(x)(x_i,(y_j,c)_{(1)})_{(2)}\beta_j(y)\alpha_j(y)(y_j,c)_{(2)}\allowdisplaybreaks \\ -\sum\limits_{i=1}^k \sum\limits_{j=1}^l (x_i,c)_{(1)}\beta_i(x)\alpha_i(x)(y_j,(x_i,c)_{(2)})_{(1)}\beta_j(y)\alpha_j(y)(y_j,(x_i,c)_{(2)})_{(2)} \\ -\sum\limits_{i=1}^{k}\sum\limits_{j=1}^{l}((x_i,y_j)_{(1)},c)_{(1)}\beta_i(x)\alpha_i(x)(x_i,y_j)_{(2)}\beta_j(y)\alpha_j(y)((x_i,y_j)_{(1)},c)_{(2)}\\ +\sum\limits_{i=1}^k(x_i,c)_{(1)}\beta_i(x)\alpha_i(x)\psi(y)(x_i,c)_{(2)}, \end{multline*} \begin{multline*} \allowdisplaybreaks J =\sum\limits_{i=1}^k \sum\limits_{j=1}^l (y_j,c)_{(1)}\beta_j(y)\alpha_j(y)(x_i,(y_j,c)_{(2)})_{(1)}\beta_i(x)\alpha_i(x)(x_i,(y_j,c)_{(2)})_{(2)} \\ -\sum\limits_{i=1}^k \sum\limits_{j=1}^l(y_j,(x_i,c)_{(1)})_{(1)}\beta_j(y)\alpha_j(y)(y_j,(x_i,c)_{(1)})_{(2)}\beta_i(x)\alpha_i(x)(x_i,c)_{(2)}\\ +\sum\limits_{i=1}^k\sum\limits_{j=1}^{l} ((y_j,x_i)_{(1)},c)_{(1)}\beta_j(y)\alpha_j(y)(y_j,x_i)_{(2)}\beta_i(x)\alpha_i(x)((y_j,x_i)_{(1)},c)_{(2)}\\ -\sum\limits_{i=1}^k\sum\limits_{j=1}^l(y_j,c)_{(1)}\beta_j(y)\alpha_j(y)x_i\beta_i(x)\alpha_i(x)(y_j,c)_{(2)}. \end{multline*} Finally, we can use \eqref{lambda-Jacobi} and get that $I=J=0$.
Indeed, from \eqref{lambda-Jacobi} it follows that the identity $$ \lbrace\kern-3pt\lbrace x_i,\lbrace\kern-3pt\lbrace y_j,c\rbrace\kern-3pt\rbrace \rbrace\kern-3pt\rbrace_L-\lbrace\kern-3pt\lbrace y_j,\lbrace\kern-3pt\lbrace x_i,c\rbrace\kern-3pt\rbrace\rbrace\kern-3pt\rbrace_R-\lbrace\kern-3pt\lbrace\lbrace\kern-3pt\lbrace x_i,y_j\rbrace\kern-3pt\rbrace,c\rbrace\kern-3pt\rbrace_L =-(y_j\otimes\lbrace\kern-3pt\lbrace x_i,c\rbrace\kern-3pt\rbrace)^{(12)} $$ can be rewritten as \begin{multline*} T_{ij}:= (x_i,(y_j,c)_{(1)})_{(1)}\otimes (x_i,(y_j,c)_{(1)})_{(2)}\otimes (y_j,c)_{(2)} \\ - (x_i,c)_{(1)}\otimes (y_j,(x_i,c)_{(2)})_{(1)} \otimes (y_j,(x_i,c)_{(2)})_{(2)} \\ - ((x_i,y_j)_{(1)},c)_{(1)}\otimes (x_i,y_j)_{(2)}\otimes ((x_i,y_j)_{(1)},c)_{(2)} \\ + (x_i,c)_{(1)}\otimes y_j\otimes (x_i,c)_{(2)} = 0. \end{multline*} Therefore, we get that $$ \sum\limits_{i=1}^k\sum\limits_{j=1}^lT_{ij}\cdot(\beta_i(x)\alpha_i(x)\otimes \beta_j(y)\alpha_j(y)\otimes 1)=0. $$ It remains to note that $$I=\mu^2\left (\sum\limits_{i=1}^k\sum\limits_{j=1}^lT_{ij}\cdot(\beta_i(x)\alpha_i(x)\otimes \beta_j(y)\alpha_j(y)\otimes 1)\right )=0.$$ The proof of the equality $J=0$ is similar. $\square$ {\bf Corollary 4}. The double bracket from Example~3 defines a modified double Poisson structure on the algebra $\textrm{As}\langle a_1,a_2,a_3\rangle$. Thus, the conjecture of S. Arthamonov~\cite{Arthamonov} holds. Theorem 9 combined with Corollary 35 from~\cite{Arthamonov} implies the following result. {\bf Theorem 10}. Let $(V,\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace)$ be a~finite-dimensional $\lambda$-double Lie algebra with nonzero~$\lambda$. Denote by $\lbrace\kern-3pt\lbrace\cdot,\cdot\rbrace\kern-3pt\rbrace$ its extension as a modified double Poisson bracket on $\textrm{As}(V)$.
Then $F[\textrm{Rep}_n(\textrm{As}(V))]^\textrm{tr}$ is equipped with a Poisson bracket uniquely defined by $\{\textrm{tr}(a), \textrm{tr}(b)\} = \textrm{tr}(\lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace_{(1)}\lbrace\kern-3pt\lbrace a,b\rbrace\kern-3pt\rbrace_{(2)})$ for any $a, b \in \textrm{As}(V)$. \section*{Acknowledgments} The authors are grateful to the anonymous reviewer for the helpful remarks. The research is supported by the Russian Science Foundation (project 21-11-00286). \noindent Maxim Goncharov \\ Sobolev Institute of Mathematics \\ Acad. Koptyug ave. 4, 630090 Novosibirsk, Russia \\ Novosibirsk State University \\ Pirogova str. 2, 630090 Novosibirsk, Russia \\ e-mail: [email protected] \noindent Vsevolod Gubarev \\ Sobolev Institute of Mathematics \\ Novosibirsk State University \\ e-mail: [email protected] \end{document}
4.7: One-way Anova

Book: Biological Statistics (McDonald), Chapter 4: Tests for One Measurement Variable
Contributed by John H. McDonald, Associate Professor (Biological Sciences) at the University of Delaware

Learning objective: to learn to use one-way anova when you have one nominal variable and one measurement variable; the nominal variable divides the measurements into two or more groups. It tests whether the means of the measurement variable are the same for the different groups.

When to use it

Analysis of variance (anova) is the most commonly used technique for comparing the means of groups of measurement data. There are lots of different experimental designs that can be analyzed with different kinds of anova; in this handbook, I describe only one-way anova, nested anova and two-way anova.

Fig. 4.7.1 The mussel Mytilus trossulus

In a one-way anova (also known as a one-factor, single-factor, or single-classification anova), there is one measurement variable and one nominal variable. You make multiple observations of the measurement variable for each value of the nominal variable. For example, here are some data on a shell measurement (the length of the anterior adductor muscle scar, standardized by dividing by length; I'll call this "AAM length") in the mussel Mytilus trossulus from five locations: Tillamook, Oregon; Newport, Oregon; Petersburg, Alaska; Magadan, Russia; and Tvarminne, Finland, taken from a much larger data set used in McDonald et al. (1991).

[Data table of AAM lengths for the five locations; only scattered values (0.0571, 0.0873, 0.0974, 0.1033, 0.0703, ...) survived extraction.]

The nominal variable is location, with the five values Tillamook, Newport, Petersburg, Magadan, and Tvarminne.
There are six to ten observations of the measurement variable, AAM length, from each location.

Null hypothesis

The statistical null hypothesis is that the means of the measurement variable are the same for the different categories of data; the alternative hypothesis is that they are not all the same. For the example data set, the null hypothesis is that the mean AAM length is the same at each location, and the alternative hypothesis is that the mean AAM lengths are not all the same.

How the test works

The basic idea is to calculate the mean of the observations within each group, then compare the variance among these means to the average variance within each group. Under the null hypothesis that the observations in the different groups all have the same mean, the weighted among-group variance will be the same as the within-group variance. As the means get further apart, the variance among the means increases. The test statistic is thus the ratio of the variance among means divided by the average variance within groups, or \(F_s\). This statistic has a known distribution under the null hypothesis, so the probability of obtaining the observed \(F_s\) under the null hypothesis can be calculated. The shape of the \(F\)-distribution depends on two degrees of freedom, the degrees of freedom of the numerator (among-group variance) and degrees of freedom of the denominator (within-group variance). The among-group degrees of freedom is the number of groups minus one. The within-group degrees of freedom is the total number of observations, minus the number of groups. Thus if there are \(n\) observations in \(a\) groups, the numerator degrees of freedom is \(a-1\) and the denominator degrees of freedom is \(n-a\). For the example data set, there are \(5\) groups and \(39\) observations, so the numerator degrees of freedom is \(4\) and the denominator degrees of freedom is \(34\). Whatever program you use for the anova will almost certainly calculate the degrees of freedom for you.
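The calculation just described is easy to do by hand. The sketch below is purely illustrative: the numbers are made up (they are not the real mussel data), and the group names are placeholders. It computes the among-group and within-group sums of squares, the two degrees of freedom, and their ratio \(F_s\).

```python
# Hand computation of the one-way anova F statistic described above.
# The data are made-up illustrative numbers, not the real mussel data.
from statistics import fmean

groups = {
    "Tillamook":  [0.070, 0.082, 0.091, 0.074, 0.088],
    "Newport":    [0.067, 0.075, 0.081, 0.069, 0.073],
    "Petersburg": [0.098, 0.105, 0.091, 0.102, 0.097],
}

a = len(groups)                                 # number of groups
n = sum(len(v) for v in groups.values())        # total observations
grand_mean = fmean(x for v in groups.values() for x in v)

# among-group and within-group sums of squares
ss_among = sum(len(v) * (fmean(v) - grand_mean) ** 2 for v in groups.values())
ss_within = sum((x - fmean(v)) ** 2 for v in groups.values() for x in v)

df_among = a - 1        # numerator degrees of freedom
df_within = n - a       # denominator degrees of freedom

# F_s is the among-group mean square over the within-group mean square
F_s = (ss_among / df_among) / (ss_within / df_within)
print(f"F({df_among},{df_within}) = {F_s:.2f}")
```

A quick sanity check on the arithmetic is that the total sum of squares partitions exactly into the among-group and within-group pieces.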
The conventional way of reporting the complete results of an anova is with a table (the "sum of squares" column is often omitted). Here are the results of a one-way anova on the mussel data:

                sum of squares   d.f.   mean square    Fs      P
among groups    0.00452           4     0.001113       7.12    2.8×10^-4
within groups   0.00539          34     0.000159
total           0.00991          38

If you're not going to use the mean squares for anything, you could just report this as "The means were significantly heterogeneous (one-way anova, \(F_{4,34}=7.12\, ,\; P=2.8\times 10^{-4}\))." The degrees of freedom are given as a subscript to \(F\), with the numerator first. Note that statisticians often call the within-group mean square the "error" mean square. I think this can be confusing to non-statisticians, as it implies that the variation is due to experimental error or measurement error. In biology, the within-group variation is often largely the result of real, biological variation among individuals, not the kind of mistakes implied by the word "error." That's why I prefer the term "within-group mean square." One-way anova assumes that the observations within each group are normally distributed. It is not particularly sensitive to deviations from this assumption; if you apply one-way anova to data that are non-normal, your chance of getting a \(P\) value less than \(0.05\) , if the null hypothesis is true, is still pretty close to \(0.05\) . It's better if your data are close to normal, so after you collect your data, you should calculate the residuals (the difference between each observation and the mean of its group) and plot them on a histogram. If the residuals look severely non-normal, try data transformations and see if one makes the data look more normal. If none of the transformations you try make the data look normal enough, you can use the Kruskal-Wallis test. Be aware that it makes the assumption that the different groups have the same shape of distribution, and that it doesn't test the same null hypothesis as one-way anova.
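The residual calculation described above can be sketched in a few lines; the data here are again made up for illustration, and plotting the histogram itself is left to whatever graphing tool you prefer:

```python
# Residuals for a normality check: each observation minus the mean of
# its own group.  Made-up illustrative data, not the mussel data.
from statistics import fmean

groups = {
    "A": [0.070, 0.082, 0.091, 0.074],
    "B": [0.067, 0.075, 0.081, 0.069],
}

residuals = []
for values in groups.values():
    m = fmean(values)
    residuals.extend(x - m for x in values)

# The residuals within each group sum to (numerically) zero by
# construction; plot `residuals` on a histogram to judge normality.
print(len(residuals), round(sum(residuals), 12))
```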
Personally, I don't like the Kruskal-Wallis test; I recommend that if you have non-normal data that can't be fixed by transformation, you go ahead and use one-way anova, but be cautious about rejecting the null hypothesis if the \(P\) value is not very far below \(0.05\) and your data are extremely non-normal. One-way anova also assumes that your data are homoscedastic, meaning the standard deviations are equal in the groups. You should examine the standard deviations in the different groups and see if there are big differences among them. If you have a balanced design, meaning that the number of observations is the same in each group, then one-way anova is not very sensitive to heteroscedasticity (different standard deviations in the different groups). I haven't found a thorough study of the effects of heteroscedasticity that considered all combinations of the number of groups, sample size per group, and amount of heteroscedasticity. I've done simulations with two groups, and they indicated that heteroscedasticity will give an excess proportion of false positives for a balanced design only if one standard deviation is at least three times the size of the other, and the sample size in each group is fewer than \(10\). I would guess that a similar rule would apply to one-way anovas with more than two groups and balanced designs. Heteroscedasticity is a much bigger problem when you have an unbalanced design (unequal sample sizes in the groups). If the groups with smaller sample sizes also have larger standard deviations, you will get too many false positives. The difference in standard deviations does not have to be large; a smaller group could have a standard deviation that's \(50\%\) larger, and your rate of false positives could be above \(10\%\) instead of at \(5\%\) where it belongs. 
If the groups with larger sample sizes have larger standard deviations, the error is in the opposite direction; you get too few false positives, which might seem like a good thing except it also means you lose power (get too many false negatives, if there is a difference in means). You should try really hard to have equal sample sizes in all of your groups. With a balanced design, you can safely use a one-way anova unless the sample sizes per group are less than \(10\) and the standard deviations vary by threefold or more. If you have a balanced design with small sample sizes and very large variation in the standard deviations, you should use Welch's anova instead. If you have an unbalanced design, you should carefully examine the standard deviations. Unless the standard deviations are very similar, you should probably use Welch's anova. It is less powerful than one-way anova for homoscedastic data, but it can be much more accurate for heteroscedastic data from an unbalanced design. If you reject the null hypothesis that all the means are equal, you'll probably want to look at the data in more detail. One common way to do this is to compare different pairs of means and see which are significantly different from each other. For the mussel shell example, the overall \(P\) value is highly significant; you would probably want to follow up by asking whether the mean in Tillamook is different from the mean in Newport, whether Newport is different from Petersburg, etc. It might be tempting to use a simple two-sample t–test on each pairwise comparison that looks interesting to you. However, this can result in a lot of false positives. When there are \(a\) groups, there are \(\frac{(a^2-a)}{2}\) possible pairwise comparisons, a number that quickly goes up as the number of groups increases. With \(5\) groups, there are \(10\) pairwise comparisons; with \(10\) groups, there are \(45\), and with \(20\) groups, there are \(190\) pairs. 
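The \(\frac{(a^2-a)}{2}\) formula is easy to check (a trivial sketch):

```python
# Number of possible pairwise comparisons among a groups: (a^2 - a) / 2.
def n_pairs(a):
    return (a * a - a) // 2

print(n_pairs(5), n_pairs(10), n_pairs(20))   # 10 45 190
```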
When you do multiple comparisons, you increase the probability that at least one will have a \(P\) value less than \(0.05\) purely by chance, even if the null hypothesis of each comparison is true. There are a number of different tests for pairwise comparisons after a one-way anova, and each has advantages and disadvantages. The differences among their results are fairly subtle, so I will describe only one, the Tukey-Kramer test. It is probably the most commonly used post-hoc test after a one-way anova, and it is fairly easy to understand. In the Tukey–Kramer method, the minimum significant difference (MSD) is calculated for each pair of means. It depends on the sample size in each group, the average variation within the groups, and the total number of groups. For a balanced design, all of the MSDs will be the same; for an unbalanced design, pairs of groups with smaller sample sizes will have bigger MSDs. If the observed difference between a pair of means is greater than the MSD, the pair of means is significantly different. For example, the Tukey MSD for the difference between Newport and Tillamook is \(0.0172\). The observed difference between these means is \(0.0054\), so the difference is not significant. Newport and Petersburg have a Tukey MSD of \(0.0188\); the observed difference is \(0.0286\), so it is significant. There are a couple of common ways to display the results of the Tukey–Kramer test. One technique is to find all the sets of groups whose means do not differ significantly from each other, then indicate each set with a different symbol.

Location     mean AAM   Tukey–Kramer
Newport       0.0748    a
Magadan       0.0780    a, b
Tillamook     0.0802    a, b
Tvarminne     0.0957    b, c
Petersburg    0.103     c

Then you explain that "Means with the same letter are not significantly different from each other (Tukey–Kramer test, \(P> 0.05\))."
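The decision rule itself is simple to apply in code. In this sketch the MSD values are the ones quoted above; computing an MSD from scratch requires the studentized range distribution, which is omitted here:

```python
# Tukey-Kramer decision rule: a pair of means differs significantly if the
# observed difference exceeds the minimum significant difference (MSD).
# The MSD values below are taken from the text, not computed here.
means = {"Newport": 0.0748, "Tillamook": 0.0802, "Petersburg": 0.103}
msd = {("Newport", "Tillamook"): 0.0172, ("Newport", "Petersburg"): 0.0188}

for (g1, g2), m in msd.items():
    diff = abs(means[g1] - means[g2])
    verdict = "significant" if diff > m else "not significant"
    print(g1, "vs.", g2, verdict)
```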
This table shows that Newport and Magadan both have an "a", so they are not significantly different; Newport and Tvarminne don't have the same letter, so they are significantly different. Another way you can illustrate the results of the Tukey–Kramer test is with lines connecting means that are not significantly different from each other. This is easiest when the means are sorted from smallest to largest: Fig. 4.7.2 Mean AAM (anterior adductor muscle scar standardized by total shell length) for Mytilus trossulus from five locations. Pairs of means grouped by a horizontal line are not significantly different from each other (Tukey–Kramer method, \(P> 0.05\)). There are also tests to compare different sets of groups; for example, you could compare the two Oregon samples (Newport and Tillamook) to the two samples from further north in the Pacific (Magadan and Petersburg). The Scheffé test is probably the most common. The problem with these tests is that with a moderate number of groups, the number of possible comparisons becomes so large that the P values required for significance become ridiculously small. The most familiar one-way anovas are "fixed effect" or "model I" anovas. The different groups are interesting, and you want to know which are different from each other. As an example, you might compare the AAM length of the mussel species Mytilus edulis, Mytilus galloprovincialis, Mytilus trossulus and Mytilus californianus; you'd want to know which had the longest AAM, which was shortest, whether M. edulis was significantly different from M. trossulus, etc. The other kind of one-way anova is a "random effect" or "model II" anova. The different groups are random samples from a larger set of groups, and you're not interested in which groups are different from each other. An example would be taking offspring from five random families of M. trossulus and comparing the AAM lengths among the families. 
You wouldn't care which family had the longest AAM, and whether family A was significantly different from family B; they're just random families sampled from a much larger possible number of families. Instead, you'd be interested in how the variation among families compared to the variation within families; in other words, you'd want to partition the variance. Under the null hypothesis of homogeneity of means, the among-group mean square and within-group mean square are both estimates of the within-group parametric variance. If the means are heterogeneous, the within-group mean square is still an estimate of the within-group variance, but the among-group mean square estimates the sum of the within-group variance plus the group sample size times the added variance among groups. Therefore subtracting the within-group mean square from the among-group mean square, and dividing this difference by the average group sample size, gives an estimate of the added variance component among groups. The equation is: \[\text{among-group variance}=\frac{MS_{among}-MS_{within}}{n_o}\] where \(n_o\) is a number that is close to, but usually slightly less than, the arithmetic mean of the sample size (\(n_i\)) of each of the \(a\) groups: \[n_o=\left ( \frac{1}{a-1} \right )\ast \left ( \text{sum}(n_i)-\frac{\text{sum}({n_i}^2)}{\text{sum}(n_i)} \right )\] Each component of the variance is often expressed as a percentage of the total variance components. Thus an anova table for a one-way anova would indicate the among-group variance component and the within-group variance component, and these numbers would add to \(100\%\). Although statisticians say that each level of an anova "explains" a proportion of the variation, this statistical jargon does not mean that you've found a biological cause-and-effect explanation.
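Here is a short sketch of the arithmetic, computing \(n_o\) as \(\text{sum}(n_i)-\text{sum}({n_i}^2)/\text{sum}(n_i)\), all divided by \(a-1\). The group sizes are hypothetical unequal sizes summing to \(39\); the mean squares are the ones from the mussel anova table earlier in this section:

```python
# Estimating the added among-group variance component (model II anova).
n_i = [10, 8, 7, 8, 6]       # hypothetical sample sizes for a = 5 groups
a = len(n_i)
total = sum(n_i)
# n_o is close to, but slightly less than, the mean sample size
n_o = (1 / (a - 1)) * (total - sum(n ** 2 for n in n_i) / total)

ms_among, ms_within = 0.001113, 0.000159   # from the anova table above
added_variance = (ms_among - ms_within) / n_o
print(round(n_o, 3), round(added_variance, 7))   # 7.744 0.0001232
```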
If you measure the number of ears of corn per stalk in \(10\) random locations in a field, analyze the data with a one-way anova, and say that the location "explains" \(74.3\%\) of the variation, you haven't really explained anything; you don't know whether some areas have higher yield because of different water content in the soil, different amounts of insect damage, different amounts of nutrients in the soil, or random attacks by a band of marauding corn bandits. Partitioning the variance components is particularly useful in quantitative genetics, where the within-family component might reflect environmental variation while the among-family component reflects genetic variation. Of course, estimating heritability involves more than just doing a simple anova, but the basic concept is similar. Another area where partitioning variance components is useful is in designing experiments. For example, let's say you're planning a big experiment to test the effect of different drugs on calcium uptake in rat kidney cells. You want to know how many rats to use, and how many measurements to make on each rat, so you do a pilot experiment in which you measure calcium uptake on \(6\) rats, with \(4\) measurements per rat. You analyze the data with a one-way anova and look at the variance components. If a high percentage of the variation is among rats, that would tell you that there's a lot of variation from one rat to the next, but the measurements within one rat are pretty uniform. You could then design your big experiment to include a lot of rats for each drug treatment, but not very many measurements on each rat. Or you could do some more pilot experiments to try to figure out why there's so much rat-to-rat variation (maybe the rats are different ages, or some have eaten more recently than others, or some have exercised more) and try to control it. 
On the other hand, if the among-rat portion of the variance was low, that would tell you that the mean values for different rats were all about the same, while there was a lot of variation among the measurements on each rat. You could design your big experiment with fewer rats and more observations per rat, or you could try to figure out why there's so much variation among measurements and control it better. There's an equation you can use for optimal allocation of resources in experiments. It's usually used for nested anova, but you can use it for a one-way anova if the groups are random effect (model II). Partitioning the variance applies only to a model II (random effects) one-way anova. It doesn't really tell you anything useful about the more common model I (fixed effects) one-way anova, although sometimes people like to report it (because they're proud of how much of the variance their groups "explain," I guess). Here are data on the genome size (measured in picograms of DNA per haploid cell) in several large groups of crustaceans, taken from Gregory (2014). The cause of variation in genome size has been a puzzle for a long time; I'll use these data to answer the biological question of whether some groups of crustaceans have different genome sizes than others. Because the data from closely related species would not be independent (closely related species are likely to have similar genome sizes, because they recently descended from a common ancestor), I used a random number generator to randomly choose one species from each family. Branchiopods Copepods Decapods Ostracods 13.49 0.63 6.81 2.78 8.82 16.09 0.87 2.80 After collecting the data, the next step is to see if they are normal and homoscedastic. It's pretty obviously non-normal; most of the values are less than \(10\) , but there are a small number that are much higher. A histogram of the largest group, the decapods (crabs, shrimp and lobsters), makes this clear: Fig. 
4.7.3 Histogram of the genome size in decapod crustaceans. The data are also highly heteroscedastic; the standard deviations range from \(0.67\) in barnacles to \(20.4\) in amphipods. Fortunately, log-transforming the data makes them closer to homoscedastic (standard deviations ranging from \(0.20\) to \(0.63\)) and look more normal: Fig. 4.7.4 Histogram of the genome size in decapod crustaceans after base-10 log transformation. Analyzing the log-transformed data with one-way anova, the result is \(F_{6,76}=11.72\, ,\; P=2.9\times 10^{-9}\). So there is very significant variation in mean genome size among these seven taxonomic groups of crustaceans. The next step is to use the Tukey-Kramer test to see which pairs of taxa are significantly different in mean genome size. The usual way to display this information is by identifying groups that are not significantly different; here I do this with horizontal bars: Fig. 4.7.5 Means and 95% confidence limits of genome size in seven groups of crustaceans. Horizontal bars link groups that are not significantly different (Tukey–Kramer test, \(P>0.05\)). Analysis was done on log-transformed data, then back-transformed for this graph. This graph suggests that there are two sets of genome sizes, groups with small genomes (branchiopods, ostracods, barnacles, and copepods) and groups with large genomes (decapods and amphipods); the members of each set are not significantly different from each other. Isopods are in the middle; the only group they're significantly different from is branchiopods. So the answer to the original biological question, "do some groups of crustaceans have different genome sizes than others," is yes. Why different groups have different genome sizes remains a mystery. Fig. 4.7.6 Length of the anterior adductor muscle scar divided by total length in Mytilus trossulus. Means ± one standard error are shown for five locations. The usual way to graph the results of a one-way anova is with a bar graph.
The heights of the bars indicate the means, and there's usually some kind of error bar, either 95% confidence intervals or standard errors. Be sure to say in the figure caption what the error bars represent. If you have only two groups, you can do a two-sample t–test. This is mathematically equivalent to an anova and will yield the exact same \(P\) value, so if all you'll ever do is comparisons of two groups, you might as well call them \(t\)–tests. If you're going to do some comparisons of two groups, and some with more than two groups, it will probably be less confusing if you call all of your tests one-way anovas. If there are two or more nominal variables, you should use a two-way anova, a nested anova, or something more complicated that I won't cover here. If you're tempted to do a very complicated anova, you may want to break your experiment down into a set of simpler experiments for the sake of comprehensibility. If the data severely violate the assumptions of the anova, you can use Welch's anova if the standard deviations are heterogeneous or use the Kruskal-Wallis test if the distributions are non-normal. I have put together a spreadsheet to do one-way anova (anova.xls) on up to \(50\) groups and \(1000\) observations per group. It calculates the \(P\) value, does the Tukey–Kramer test, and partitions the variance. Some versions of Excel include an "Analysis Toolpak," which includes an "Anova: Single Factor" function that will do a one-way anova. You can use it if you want, but I can't help you with it. It does not include any techniques for unplanned comparisons of means, and it does not partition the variance. Several people have put together web pages that will perform a one-way anova; one good one is here. It is easy to use, and will handle three to \(26\) groups and \(3\) to \(1024\) observations per group. It does not do the Tukey-Kramer test and does not partition the variance.
Salvatore Mangiafico's \(R\) Companion has a sample R program for one-way anova. There are several SAS procedures that will perform a one-way anova. The two most commonly used are PROC ANOVA and PROC GLM. Either would be fine for a one-way anova, but PROC GLM (which stands for "General Linear Models") can be used for a much greater variety of more complicated analyses, so you might as well use it for everything. Here is a SAS program to do a one-way anova on the mussel data from above.

DATA musselshells;
   INPUT location $ aam @@;
   DATALINES;
Tillamook 0.0571 Tillamook 0.0813 Tillamook 0.0831 Tillamook 0.0976 Tillamook 0.0923 Tillamook 0.0836
Newport 0.0873 Newport 0.0662 Newport 0.0672 Newport 0.0819
Petersburg 0.0974 Petersburg 0.1352 Petersburg 0.0817 Petersburg 0.1016 Petersburg 0.0968 Petersburg 0.1064 Petersburg 0.1050
Magadan 0.1033 Magadan 0.0915 Magadan 0.0781 Magadan 0.0685
Tvarminne 0.0703 Tvarminne 0.1026 Tvarminne 0.0956 Tvarminne 0.0973 Tvarminne 0.1039 Tvarminne 0.1045
;
PROC GLM DATA=musselshells;
   CLASS location;
   MODEL aam = location;
RUN;

The output includes the traditional anova table; the P value is given under "Pr > F".

                          Sum of
Source            DF     Squares       Mean Square    F Value    Pr > F
Model              4     0.00451967    0.00112992        7.12    0.0003
Error             34     0.00539491    0.00015867
Corrected Total   38     0.00991458

PROC GLM doesn't calculate the variance components for an anova. Instead, you use PROC VARCOMP. You set it up just like PROC GLM, with the addition of METHOD=TYPE1 (where "TYPE1" includes the numeral 1, not the letter el). The procedure has four different methods for estimating the variance components, and TYPE1 seems to be the same technique as the one I've described above. Here's how to do the one-way anova, including estimating the variance components, for the mussel shell example.
PROC VARCOMP DATA=musselshells METHOD=TYPE1;
   CLASS location;
   MODEL aam = location;
RUN;

The results include the following:

Type 1 Estimates
Variance Component      Estimate
Var(location)          0.0001254
Var(Error)             0.0001587

The output is not given as a percentage of the total, so you'll have to calculate that. For these results, the among-group component is \(\frac{0.0001254}{(0.0001254+0.0001587)}=0.4414\), or \(44.14\%\); the within-group component is \(\frac{0.0001587}{(0.0001254+0.0001587)}=0.5586\), or \(55.86\%\). If the data show a lot of heteroscedasticity (different groups have different standard deviations), the one-way anova can yield an inaccurate \(P\) value; the probability of a false positive may be much higher than \(5\%\). In that case, you should use Welch's anova. I've written a spreadsheet to do Welch's anova (welchanova.xls). It includes the Games-Howell test, which is similar to the Tukey-Kramer test for a regular anova. (Note: the original spreadsheet gave incorrect results for the Games-Howell test; it was corrected on April 28, 2015.) You can do Welch's anova in SAS by adding a MEANS statement with the name of the nominal variable and the word WELCH following a slash. Unfortunately, SAS does not do the Games-Howell post-hoc test. Here is the statement to add to the example SAS program from above to do Welch's anova:

MEANS location / WELCH;

Here is part of the output:

Welch's ANOVA for AAM
Source        DF      F Value   Pr > F
location    4.0000       5.66   0.0051
Error      15.6955

To do a power analysis for a one-way anova is kind of tricky, because you need to decide what kind of effect size you're looking for. If you're mainly interested in the overall significance test, the sample size needed is a function of the standard deviation of the group means. Your estimate of the standard deviation of means that you're looking for may be based on a pilot experiment or published literature on similar experiments. If you're mainly interested in the comparisons of means, there are other ways of expressing the effect size.
Your effect could be a difference between the smallest and largest means, for example, that you would want to be significant by a Tukey-Kramer test. There are ways of doing a power analysis with this kind of effect size, but I don't know much about them and won't go over them here. To do a power analysis for a one-way anova using the free program G*Power, choose "F tests" from the "Test family" menu and "ANOVA: Fixed effects, omnibus, one-way" from the "Statistical test" menu. To determine the effect size, click on the Determine button and enter the number of groups, the standard deviation within the groups (the program assumes they're all equal), and the mean you want to see in each group. Usually you'll leave the sample sizes the same for all groups (a balanced design), but if you're planning an unbalanced anova with bigger samples in some groups than in others, you can enter different relative sample sizes. Then click on the "Calculate and transfer to main window" button; it calculates the effect size and enters it into the main window. Enter your alpha (usually \(0.05\)) and power (typically \(0.80\) or \(0.90\)) and hit the Calculate button. The result is the total sample size in the whole experiment; you'll have to do a little math to figure out the sample size for each group. As an example, let's say you're studying transcript amount of some gene in arm muscle, heart muscle, brain, liver, and lung. Based on previous research, you decide that you'd like the anova to be significant if the means were \(10\) units in arm muscle, \(10\) units in heart muscle, \(15\) units in brain, \(15\) units in liver, and \(15\) units in lung. The standard deviation of transcript amount within a tissue type that you've seen in previous research is \(12\) units. Entering these numbers in G*Power, along with an alpha of \(0.05\) and a power of \(0.80\), the result is a total sample size of \(295\). 
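The effect size that G*Power computes for this design is its statistic \(f\), the standard deviation of the group means divided by the within-group standard deviation. A minimal sketch using the numbers from the gene-expression example, assuming equal group sizes (the division by the number of groups, rather than groups minus one, follows G*Power's convention for this effect size):

```python
# Effect size f for "ANOVA: Fixed effects, omnibus, one-way" in G*Power:
# f = (population SD of the group means) / (SD within groups).
import math

means = [10, 10, 15, 15, 15]   # hypothesized group means, from the example
sd_within = 12                 # within-group standard deviation
grand = sum(means) / len(means)
sd_means = math.sqrt(sum((m - grand) ** 2 for m in means) / len(means))
f = sd_means / sd_within
print(round(f, 3))   # 0.204
```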
Since there are five groups, you'd need \(59\) observations per group to have an \(80\%\) chance of having a significant (\(P< 0.05\)) one-way anova. Picture of Mytilus trossulus from Key to Invertebrates Found At or Near The Rosario Beach Marine Laboratory. Gregory, T.R. 2014. Animal genome size database. McDonald, J.H., R. Seed and R.K. Koehn. 1991. Allozymes and morphometric characters of three species of Mytilus in the Northern and Southern Hemispheres. Marine Biology 111:323-333. John H. McDonald (University of Delaware)
Differentiation is a fundamental topic in mathematics. It builds on the skills developed in elementary math, where we learn to reason about concepts by moving from one representation to another. Differentiation gives us the ability to measure how one variable changes with respect to another, rather than just their sum or average. This article aims to help you learn how to differentiate as efficiently as possible. Differentiation is a math topic that can be found in many mathematics syllabi such as the IGCSE Additional Mathematics (0606) and Singapore SEAB Additional Mathematics. Differentiation - Formula and Its Application What is differentiation? Calculus is a branch of mathematics that studies the continuous change between two variables. Calculus can be further classified into two kinds: differential and integral calculus. Differentiation in calculus is a key concept used to determine instantaneous rates of change. The instantaneous rate of change measures how quickly a quantity is changing at a particular moment. In this article, we will focus on exploring how to differentiate and provide several differentiation examples for you to practice. What are the rules of Differentiation? Derivative as Gradient Function Gradient functions appear throughout graphing and in methods such as linear and polynomial regression. The gradient function describes the slope of a curve at each value of $x$. A positive gradient means that the $y$-values go up as $x$ increases; a negative gradient means that the $y$-values go down as $x$ increases. So how do we calculate the gradient function?
The gradient function is obtained by differentiating the original function; evaluating it at a chosen $x$-value gives the slope of the tangent to the curve at that point. Derivative of Power Functions How do we find the gradient function of a power function? The Power Rule is a straightforward formula that helps to find the derivative of a power function, a function that has a variable base raised to a fixed real number. Here are some examples of power functions: $\text{f}\left( x \right)=x,\text{g}\left( x \right)={{x}^{3}},\text{h}\left( x \right)=\frac{1}{x}$ To find the derivative of power functions, we can use this formula: $\frac{\text{d}}{\text{d}x}\left( {{x}^{n}} \right)=n{{x}^{n-1}}$, where $n$ is a constant. For ${x}^{n}$, we bring the power, $n$, to the front of the $x$, and then subtract 1 from the power. Scalar Multiple Now, how will we do it if there is a constant in front of the power function? Let's explore another concept which is called the Scalar Multiple. $\frac{\text{d}}{\text{d}x}(k{{x}^{n}})=k\frac{\text{d}}{\text{d}x}({{x}^{n}})=kn{{x}^{n-1}}$, where $k$ is a constant. The constant will not affect the process of differentiation. In fact, we could just take the constant out and differentiate the remaining function according to the appropriate rule. The constant stays until the last step, where we multiply the constant with the answer. Simplify and you will get the final answer! Take note: it has to be a constant. Only constants can be brought out! Addition and Subtraction Rule Next, how do we differentiate if there is an addition or subtraction sign between the terms? $\begin{aligned}\frac{\text{d}}{\text{d}x}(k{{x}^{n}}+q{{x}^{m}})&=k\frac{\text{d}}{\text{d}x}({{x}^{n}})+q\frac{\text{d}}{\text{d}x}({{x}^{m}}) \\ & =kn{{x}^{n-1}}+qm{{x}^{m-1}} \end{aligned}$, where $k$ and $q$ are constants. Even though this might look complicated at first, we can differentiate each term independently.
The addition and subtraction do not affect the differentiation process. The Chain Rule is a fundamental rule in differentiation, which reveals how to find the derivative of a composite function. $\frac{\text{d}y}{\text{d}x}=\frac{\text{d}y}{\text{d}u}\times \frac{\text{d}u}{\text{d}x}$ The general rule for finding the derivative of a composite function is to:
1. Identify the nested function and let the nested function be the variable $u$.
2. Calculate the derivative of the nested function, $\frac{\text{d}u}{\text{d}x}$.
3. Rewrite the outer function $y$ in terms of $u$.
4. Calculate the derivative of the outer function, $\frac{\text{d}y}{\text{d}u}$.
5. Substitute the derivatives into the formula $\frac{\text{d}y}{\text{d}x}=\frac{\text{d}y}{\text{d}u}\times \frac{\text{d}u}{\text{d}x}$. Since $u$ is something that we introduced into the equation, substitute the original function back for the variable $u$.
6. Simplify to get the final answer.
If you are looking for a faster way to differentiate a composite function, you can also try this method:
1. Identify the nested function and the outer function.
2. Calculate the derivative of the outer function as an entity.
3. Calculate the derivative of the nested function.
4. Multiply the derivatives to get the final answer.
Find the derivative of each of the following: $y={{\left( 7{{x}^{3}}+x \right)}^{4}}$, $\text{f}\left( x \right)=\frac{1}{\sqrt{3{{x}^{2}}-1}}$, $y=\frac{2}{{{\left( 3-\sqrt{x} \right)}^{3}}}$ The Product Rule of Differentiation is another very important rule that helps us find the derivative of a product of two variables. $\frac{\text{d}}{\text{d}x}\left( uv \right)=u\frac{\text{d}v}{\text{d}x}+v\frac{\text{d}u}{\text{d}x}$ How to differentiate using the Product Rule:
1. Multiply the first term by the derivative of the second term.
2. Add to this the second term multiplied by the derivative of the first term.
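A quick numerical sanity check of the Product Rule. This is a minimal sketch; the function $x^{2}\sin x$ is chosen for illustration and is not one of the exercises:

```python
# Product rule check on u = x^2, v = sin(x):
# d/dx (x^2 sin x) = x^2 cos x + 2x sin x  (u dv/dx + v du/dx).
import math

def product(x):
    return x ** 2 * math.sin(x)

def product_rule(x):
    return x ** 2 * math.cos(x) + 2 * x * math.sin(x)

def central_diff(f, x, h=1e-6):
    # numerical derivative, for comparison with the rule
    return (f(x + h) - f(x - h)) / (2 * h)

print(product_rule(1.0))            # derivative at x = 1, by the rule
print(central_diff(product, 1.0))   # numerical estimate, should agree
```

The two printed values agree to about six decimal places, which is what you would expect from a central-difference approximation with this step size.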
Differentiate $\sqrt{x}{{({{x}^{2}}+2)}^{5}}$ with respect to $x$. What happens if you have to find the derivative of a quotient of two variables? $\frac{\text{d}}{\text{d}x}\left( \frac{u}{v} \right)=\frac{v\frac{\text{d}u}{\text{d}x}-u\frac{\text{d}v}{\text{d}x}}{{{v}^{2}}}$ How to differentiate using the Quotient Rule:
1. Multiply the denominator by the derivative of the numerator.
2. Subtract the product of the numerator and the derivative of the denominator.
3. Divide the result by the square of the denominator.
Find the derivative of $y=\frac{x+1}{\sqrt{3{{x}^{2}}-1}}$. Gradient of Curve It is easy to find the gradient of a function when the tangent line is easy to draw, but for most curves with complicated equations this is tricky, and a sketched tangent line does not always give enough accuracy. Fortunately, there is another way of getting information about the slope of a curve in the plane: instead of drawing a tangent line, we can use differentiation to find the answer! Differentiation is useful here because it reduces the problem to a calculation that can be done by hand much faster, and far more precisely, than drawing a tangent line. Find the gradient of the curve $y=\left( x-1 \right)\left( 2x+3 \right)$ at $x=2$.
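As a worked sketch of the gradient question above: expanding gives $y=2x^{2}+x-3$, so $\frac{\text{d}y}{\text{d}x}=4x+1$ and the gradient at $x=2$ is $9$. A short numerical check:

```python
# Gradient of y = (x - 1)(2x + 3) at x = 2.
# Expanding: y = 2x^2 + x - 3, so dy/dx = 4x + 1.
def y(x):
    return (x - 1) * (2 * x + 3)

def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

print(4 * 2 + 1)            # 9, by differentiation
print(central_diff(y, 2))   # approximately 9, numerically
```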
Question 7: Meeting the Curve Again Find the equation of the normal to the curve $y=3{{x}^{2}}+5x-9$ at the point where $x=-2$, hence find the $y$-coordinate of the point where this normal meets the curve again. Question 8: Finding Perpendicular and Parallel Lines Find the coordinates of point $P$ on the curve $y=3{{x}^{2}}-2x+1$ for which the normal at $P$ is parallel to the line $y=2x-3$. Connected Rate of Changes In finding the derivative of a function, we have to know the rate at which the function is changing at each point. We can determine the rate of change of one thing with respect to another by taking the derivative of one variable with respect to the other. For example, if an object is moving in a straight line, the rate of change of its position with respect to time is called velocity, the measure of how quickly the object's position changes over time. If two objects are moving at constant speeds along parallel lines, they are said to have constant relative velocity. If their speeds differ, their relative velocity is nonzero. The rate of change of velocity is called acceleration, which is defined as the derivative of velocity with respect to time. The same idea can be applied in different settings. Suppose we invest in a fund that pays $3\%$ per annum, compounded annually; what would the amount in the account be after ten years if the starting amount is $\$1000$? We can calculate the rate of change of the amount in the account by using differentiation too. When you know how things change with respect to time, you know how they vary over different intervals of time. This allows you to predict how they will behave in the future and whether or not a problem can be solved.
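The investment example above can be sketched numerically. Treating the balance as the continuous function $A(t)=1000\times 1.03^{t}$ is an assumption made here purely for illustration; with that assumption, the instantaneous rate of change is $A'(t)=A(t)\ln 1.03$:

```python
# Balance and instantaneous rate of change for $1000 at 3% per annum,
# modelled (as an assumption) by the continuous function A(t) = 1000 * 1.03**t.
import math

def amount(t):
    return 1000 * 1.03 ** t

print(round(amount(10), 2))                    # balance after 10 years: 1343.92
print(round(amount(10) * math.log(1.03), 2))   # rate of change in $/year at t = 10
```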
The formula for connected rates of change: $\frac{\text{d}y}{\text{d}t}=\frac{\text{d}y}{\text{d}x}\times \frac{\text{d}x}{\text{d}t}$ The radius, $r$ cm, of a hemisphere is increasing at a constant rate of $0.5$ cm/s. Find the rate of increase of the volume of the hemisphere when $r=3$. Question 10: Connected Rate of Change involving Coordinate Geometry The values of $x$ and $y$ are related by the equation $xy=23x-8$. If $x$ increases at the rate of $0.03$ unit/s, find the rate of change of $y$ when $y=21$. Question 11: Connected Rate of Change involving Area The figure shows part of the curve $y=2{{x}^{2}}+3$. The point $B(x,y)$ is a variable point that moves along the curve for $0<x<6$. $C$ is a point on the $x$-axis such that $BC$ is parallel to the $y$-axis and $A(6,0)$ lies on the $x$-axis. Express the area of triangle $ABC$, $T$ units², in terms of $x$, and find an expression for $\frac{\text{d}T}{\text{d}x}$. Given that when $x=2$, $T$ is increasing at the rate of $0.8$ units²/s, find the corresponding rate of change of $x$ at this instant. Differentiation Strategies in Math Increasing and Decreasing Functions A function describes a relationship between two variables. The derivative of a function provides a lot of detail about the shape of the graph; it is equal to the slope of the line tangent to the curve at each point. A positive derivative indicates an increasing function, one whose derivative is always positive: if you plot its graph, you will see that the graph goes up as $x$ increases. A negative derivative indicates a decreasing function, one whose derivative is always negative: the graph goes down as $x$ increases. The equation of a curve is $y={{x}^{3}}+4{{x}^{2}}+kx+3$, where $k$ is a constant. Find the set of values of $k$ for which the curve is always an increasing function. What happens when the derivative is $0$?
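For the hemisphere problem above, the connected-rates formula $\frac{\text{d}V}{\text{d}t}=\frac{\text{d}V}{\text{d}r}\times \frac{\text{d}r}{\text{d}t}$ works out as follows (a minimal sketch; the function name is illustrative):

```python
import math

def dV_dt(r, dr_dt):
    # hemisphere volume V = (2/3) * pi * r^3, so dV/dr = 2 * pi * r^2;
    # chain rule: dV/dt = dV/dr * dr/dt
    return 2 * math.pi * r**2 * dr_dt

print(dV_dt(3, 0.5))  # 9*pi ≈ 28.27 cm^3/s
```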
These points are called "stationary" because at these points the function is neither increasing nor decreasing. There are three types of stationary points: maximums, minimums, and points of inflexion. The maximum and minimum are the largest and smallest values the function takes nearby: the maximum is the highest point on the function's graph, and the minimum is the lowest point. A point of inflexion is where the curve changes concavity, from concave up to concave down or vice versa. Take Note: Maximum and minimum points are also called turning points. All turning points are stationary points but not all stationary points are turning points: a stationary point of inflexion is a stationary point but not a turning point. Moreover, not all points of inflexion are stationary points. First Derivative Test This is a technique used to determine whether a given stationary point is a maximum, minimum, or point of inflexion of a function. By checking whether the function is increasing or decreasing on either side of the point, the approximate shape of the graph is obtained. Find the coordinates of the stationary points on the curve $y={{x}^{3}}-3x+2$. Determine their nature by using the first derivative test. Hence, sketch the curve. Second Derivative Test The second derivative test uses the concavity of a function; it is a simple but powerful technique for distinguishing between a maximum point and a minimum point. If the second derivative is negative, the stationary point is a maximum. If the second derivative is positive, it is a minimum. If the second derivative is zero, the test is inconclusive: the point may be a maximum, a minimum, or a point of inflexion, so fall back on the first derivative test. Find the coordinates of the stationary points on the curve $y=2{{x}^{3}}+3{{x}^{2}}-120x+4$. Determine their nature by using the second derivative test. Hence, sketch the curve. Here are some essential steps for solving problems involving maximum and minimum values.
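The two tests can be mechanized for the curve $y={{x}^{3}}-3x+2$ from the first derivative test example (an illustrative sketch; the names are mine):

```python
def y(x):   return x**3 - 3 * x + 2
def dy(x):  return 3 * x**2 - 3      # first derivative
def d2y(x): return 6 * x             # second derivative

# stationary points: dy(x) = 0  =>  3x^2 = 3  =>  x = -1 or x = 1
for x in (-1.0, 1.0):
    kind = ("maximum" if d2y(x) < 0 else
            "minimum" if d2y(x) > 0 else
            "inconclusive")          # d2y = 0: use the first derivative test
    print(x, y(x), kind)
# -1.0 4.0 maximum
# 1.0 0.0 minimum
```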
Differentiation of Trigonometric, Logarithmic, and Exponential Functions Basics of Trigonometric Derivative Let's have a look at the differentiation formulas for trigonometric functions. You will have to use the Chain Rule. $\frac{\text{d}}{\text{d}x}(\cos (ax))=-a\sin \left( ax \right)$ $\frac{\text{d}}{\text{d}x}(\tan(ax))=a{{\sec }^{2}}\left( ax \right)$ $\frac{\text{d}}{\text{d}x}(\sin (ax))=a\cos \left( ax \right)$ Basics of Logarithmic Derivative Logarithmic functions have the following differentiation formulas: $\frac{\text{d}}{\text{d}x}\left[ \ln x \right]=\frac{1}{x}$ $\frac{\text{d}}{\text{d}x}\left[ \ln \left( ax+b \right) \right]=\frac{a}{ax+b}$ Basics of Exponential Derivative What about the differentiation formula for exponential functions? $\frac{\text{d}}{\text{d}x}\left( {{e}^{x}} \right)={{e}^{x}}$ $\frac{\text{d}}{\text{d}x}\left( {{e}^{ax}} \right)=a{{e}^{ax}}$ Please note the interesting case $\frac{\text{d}}{\text{d}x}\left( {{e}^{x}} \right)={{e}^{x}}$: the gradient of ${{e}^{x}}$ at every point is equal to the value of the function itself.
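Each of the formulas above can be spot-checked numerically with a central difference (an illustrative sketch; the helper name and tolerances are arbitrary choices of mine):

```python
import math

def central_diff(g, x, h=1e-6):
    # symmetric difference quotient approximating g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

a, b, x = 2.0, 1.0, 0.7

# d/dx sin(ax) = a*cos(ax)
assert abs(central_diff(lambda t: math.sin(a * t), x) - a * math.cos(a * x)) < 1e-6
# d/dx ln(ax + b) = a / (ax + b)
assert abs(central_diff(lambda t: math.log(a * t + b), x) - a / (a * x + b)) < 1e-6
# d/dx e^(ax) = a*e^(ax)
assert abs(central_diff(lambda t: math.exp(a * t), x) - a * math.exp(a * x)) < 1e-6

print("all derivative formulas check out numerically")
```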
For a diatomic molecule, what is the specific heat per mole at constant pressure/volume? At high temperatures, the specific heat at constant volume $\text{C}_{v}$ has three degrees of freedom from translation, two from rotation, and two from vibration. That means $\text{C}_{v}=\frac{7}{2}\text{R}$ by the Equipartition Theorem. However, I recall the Mayer formula, which states $\text{C}_{p}=\text{C}_{v}+\text{R}$. The ratio of specific heats for a diatomic molecule is usually $\gamma=\text{C}_{p}/\text{C}_{v}=7/5$. What then is the specific heat at constant pressure? Isn't the ratio normally $7/5$ for diatomic molecules? ShanZhengYang "At high temperatures, the specific heat at constant volume $C_v$ has three degrees of freedom from translation, two from rotation, and two from vibration." I can't understand this line. $C_v$ is a physical quantity, not a dynamical system, so how can it have degrees of freedom? You can say that an atom or molecule has some number of degrees of freedom, but it is wrong to say that a physical quantity (like temperature or specific heat) has degrees of freedom. Degrees of freedom is the number of independent coordinates necessary for specifying the position and configuration in space of a dynamical system. Now to answer your question, we know that the energy per mole of the system is $\frac{1}{2} fRT$, where $f$ is the number of degrees of freedom of the gas. $\therefore$ molar heat capacity, $C_v=(\frac{dE}{dT})_v=\frac{d}{dT}(\frac{1}{2}fRT)_v=\frac{1}{2}fR$ Now, $C_p=C_v+R=\frac{1}{2}fR+R=R(1+ \frac{f}{2})$ $\therefore$ $\gamma=1+ \frac{2}{f}$ Now for a diatomic gas: a diatomic gas has three translational (along the x, y and z axes) and two rotational (about the y and z axes) degrees of freedom, i.e. the total number of degrees of freedom is $5$.
Hence $C_v=\frac{1}{2}fR=\frac{5}{2}R$ and $C_p=R(1+ \frac{f}{2})=R(1+ \frac{5}{2})=\frac{7}{2}R$ Rajesh Sardar A diatomic molecule will have 7 degrees of freedom at high temperatures. However, the ratio of specific heats that you cited is for diatomic molecules around room temperature, which have 5 degrees of freedom. Physika In particular, when the thermal energy $k_B T$ is smaller than the spacing between the quantum energy levels, the contribution of the vibrational and rotational degrees of freedom will fall. At room temperature, the contribution of the vibrational mode of a diatomic molecule is often 0, and since a vibrational mode contributes two quadratic terms to the energy (kinetic and potential), $C_v$ will be $R$ lower than expected. Furthermore, since a rotation about the bond between the two atoms in a diatomic molecule is not really a rotation, the rotational modes number only 2, and a diatomic molecule has 7 quadratic degrees of freedom at high temperatures: 3 translational, 2 rotational, and 2 vibrational. When you take away the vibrational mode at lower temperatures, only 5 remain, and you get $C_v = \frac{5}{2}R$ and $C_p = C_v + R = \frac{7}{2}R$. eyqs Molecules are quite different from the monatomic gases like helium and argon. With monatomic gases, thermal energy comprises only translational motions. Translational motions are ordinary, whole-body movements in 3D space whereby particles move about and exchange energy in collisions—like rubber balls in a vigorously shaken container (see animation here [19]). These simple movements in the three dimensions of space mean individual atoms have three translational degrees of freedom. A degree of freedom is any form of energy in which heat transferred into an object can be stored. This can be translational kinetic energy, rotational kinetic energy, or other forms such as potential energy in vibrational modes.
Only three translational degrees of freedom (corresponding to the three independent directions in space) are available for any individual atom, whether it is free, as a monatomic molecule, or bound into a polyatomic molecule. As to rotation about an atom's axis (again, whether the atom is bound or free), its energy of rotation is proportional to the moment of inertia for the atom, which is extremely small compared to moments of inertia of collections of atoms. This is because almost all of the mass of a single atom is concentrated in its nucleus, which has a radius too small to give a significant moment of inertia. In contrast, the spacing of quantum energy levels for a rotating object is inversely proportional to its moment of inertia, and so this spacing becomes very large for objects with very small moments of inertia. For these reasons, the contribution from rotation of atoms on their axes is essentially zero in monatomic gases, because the energy spacing of the associated quantum levels is too large for significant thermal energy to be stored in rotation of systems with such small moments of inertia. For similar reasons, axial rotation around bonds joining atoms in diatomic gases (or along the linear axis in a linear molecule of any length) can also be neglected as a possible "degree of freedom", since such rotation is similar to rotation of monatomic atoms, and so occurs about an axis with a moment of inertia too small to be able to store significant heat energy. Cobra King
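The counting argument running through these answers can be condensed into a few lines; here $f$ is the number of quadratic degrees of freedom, and the function name is mine:

```python
R = 8.314  # molar gas constant, J/(mol*K)

def heat_capacities(f):
    """Molar C_v, C_p, and gamma for f quadratic degrees of freedom."""
    c_v = f * R / 2   # equipartition: (1/2)R per quadratic degree of freedom
    c_p = c_v + R     # Mayer's relation
    return c_v, c_p, c_p / c_v

# diatomic gas near room temperature: 3 translational + 2 rotational
c_v, c_p, gamma = heat_capacities(5)
print(round(gamma, 3))                   # 1.4, i.e. 7/5

# diatomic gas at high temperature: vibration adds 2 more quadratic terms
print(round(heat_capacities(7)[2], 3))   # 1.286, i.e. 9/7
```

Note that $\gamma=(f+2)/f$ is independent of the value of $R$.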
\begin{document} \centerline{\Large\textbf{ Fleming-Viot Processes in an Environment }\footnote{Supported by NSFC (No.10721091 )}} \centerline{ Hui He\footnote{ \textit{E-mail address:} { [email protected] }} } \centerline{Laboratory of Mathematics and Complex Systems, } \centerline{ School of Mathematical Sciences, Beijing Normal University,} \centerline{ Beijing 100875, People's Republic of China} {\narrower{\narrower{\narrower \begin{abstract} We consider a new type of lookdown process in which the spatial motion of each individual is influenced by an individual noise and a common noise, which could be regarded as an environment. A class of probability measure-valued processes on the real line $\mathbb{R}$ is then constructed. The sample path properties are investigated: the values of this new type of process are either purely atomic measures or absolutely continuous measures according to whether the individual noise is present. When the process is absolutely continuous with respect to Lebesgue measure, we derive a new stochastic partial differential equation for the density process. Finally, we show that such processes also arise from normalizing a class of measure-valued branching diffusions in a Brownian medium, in parallel with the classical result that Dawson-Watanabe superprocesses, conditioned to have total mass one, are Fleming-Viot superprocesses. \end{abstract} \noindent\textit{AMS 2000 subject classifications.} Primary 60G57, 60H15; Secondary 60K35, 60J70. \noindent\textit{Key words and phrases.} measure-valued process, superprocesses, Fleming-Viot process, random environment, stochastic partial differential equation \noindent\textbf{Abbreviated Title:} Fleming-Viot processes \par}\par}\par} \section{Introduction} In this work, we construct and study a new class of probability measure-valued Markov processes on the real line $\mathbb R$.
Our model arises from a modified stepwise mutation model (see Section 1.1.10 of \cite{[E00]} for classical stepwise mutation model): the mutation process of each individual in the model is influenced by an independent noise and a common noise. More precisely, suppose that $\{ W(t,x): x\in \mathbb{R}, t\geq0\}$ is space-time white noise based on Lebesgue measure, the common noise, and $\{B_i(t):t\geq0,i=1,2,\cdots\}$ is a family of independent standard Brownian motions, the individual noises, which are independent of $\{W(t,x): x\in\mathbb{R}\}$. The mutation of an individual in the stepwise mutation system with label $i$ is defined by the stochastic equations \begin{equation} \label{1.6} dx_i(t)=\epsilon dB_i(t)+\int_\mathbb{R}h(y-x_i(t))W(dt,dy),\textrm{\ \ \ }t\geq0,~~i=1,2,\cdots, \end{equation} where $W(dt,dy)$ denotes the time-space stochastic integral relative to $\{W_t(B)\}$ and $\epsilon\geq0.$ Suppose that $h\in C^2(\mathbb{R})$ is square-integrable. Let $\rho_{\epsilon}=\epsilon^2+\rho(0)$ and \begin{equation} \label{f1.1} \rho(x)=\int_{\mathbb{R}}h(y-x)h(y)dy, \end{equation} for $x\in \mathbb{R}.$ For each integer $m\geq1$, $\{(x_1(t),\cdots,x_m(t)):t\geq0\}$ is an $m$-dimensional diffusion process which is generated by the differential operator \begin{equation}\label{Gdiff} G^m:=\frac{1}{2}\sum_{i=1}^ma(x_i)\frac{\partial^2}{\partial x_i^2}+ \frac{1}{2}\sum_{i,j=1,i\neq j}^m\rho(x_i-x_j)\frac{\partial^2}{\partial x_i\partial x_j}. \end{equation} In particular, $\{x_i(t):t\geq0\}$ is a one-dimensional diffusion process with generator $G:=(\rho_{\epsilon}/2)\Delta$. Because of the exchangeability, a diffusion process generated by $G^m$ can be regarded as an interacting particle system or a measure-valued process. Heuristically, $\rho_{\epsilon}$ represents the speed of the particles and $\rho(\cdot)$ describes the interaction between them. 
Our interest comes from recent studies on connections between superprocesses and stochastic flows; see \cite{[DLW01]}, \cite{[DLZ04]}, \cite{[SA01]} and \cite{[W98]}. In those works, particles undergo random branching and their spatial motions are affected by the presence of stochastic flows. Some new classes of measure-valued processes were constructed from the empirical measure of the particles. Those measure-valued processes are quite different from the classical Dawson-Watanabe processes. There are at least two different ways to look at those processes: one is as a superprocess in a random environment, and the other as an extension of models of the motion of the mass by stochastic flows; see \cite{[LR05]}, \cite{[MX01]}. In this work we remove the branching structure of the particle systems in \cite{[W98]} but add a sampling mechanism. That is, whenever a particle's exponential `sampling clock' rings, it jumps to a position chosen at random from the current empirical distribution of the whole population. Its mutation then continues from its new position. This work is stimulated by the classical connections between Dawson-Watanabe processes and Fleming-Viot processes investigated in \cite{[EM91]} and \cite{[P92]}. It has been shown that the Fleming-Viot superprocess is the Dawson-Watanabe process conditioned to have total mass one. So we ask: what can we obtain if the measure-valued processes constructed in \cite{[DLW01]}, \cite{[DLZ04]}, \cite{[SA01]} and \cite{[W98]} are conditioned to have total mass one? The particle picture described in \cite{[P92]} suggests that the branching structure of such conditioned measure-valued branching processes may be replaced by a sampling mechanism.
Thus measure-valued branching processes constructed in \cite{[W98]}, conditioned to have total mass one, may have the generator \begin{equation} \label{f1.2} \mathcal{L}F(\mu):=\mathcal{A}F(\mu)+\mathcal{B}F(\mu), \end{equation} where \begin{eqnarray} \label{f1.3} \mathcal{A}F(\mu)\!\!&:=\!\!&\frac{1}{2}\int_{\mathbb{R}}\rho_{\epsilon}\frac{d^2}{dx^2} \frac{\delta F(\mu)}{\delta\mu(x)}\mu(dx)\cr\!\!&\!\!& +\frac{1}{2}\int_{\mathbb{R}^2}\rho(x-y)\frac{d^2}{dxdy} \frac{\delta^2F(\mu)}{\delta\mu(x)\delta\mu(y)}\mu(dx)\mu(dy), \end{eqnarray} \begin{equation} \label{f1.4} \mathcal{B}F(\mu):=\frac{\gamma}{2}\int_\mathbb{R}\int_{\mathbb R}\frac{\delta^2F(\mu)}{\delta\mu(x)\delta\mu(y)} \left(\mu(dx)\delta_{x}(dy)-\mu(dx)\mu(dy)\right), \end{equation} for some bounded continuous functions $F(\mu)$ on $P(\mathbb{R})$. The variational derivative is defined by \begin{equation} \label{f1.5} \frac{\delta F(\mu)}{\delta\mu(x)}=\lim_{r\rightarrow{0+}}\frac{1}{r}[F(\mu+r\delta_x)-F(\mu)],\textrm{\ \ \ } x\in \mathbb{R}, \end{equation} if the limit exists, and $\delta^2F(\mu) / \delta\mu(x)\delta\mu(y)$ is defined in the same way with $F$ replaced by $(\delta F/\delta\mu(y))$ on the right hand side. If we replace $\cal B$ in (\ref{f1.4}) by $$ \frac{\gamma}{2}\int_\mathbb{R} \frac{\delta^2F(\mu)}{\delta\mu(x)^2}\mu(dx), $$ then $\cal L$ is the generator of the measure-valued process constructed in \cite{[W98]}, where $\cal L$ acts on some bounded continuous functions on $M(\mathbb R)$, the space of finite measures on $\mathbb R$; see (1.8) of \cite{[W98]}. If the second term in $\cal A$ vanishes, then $\cal L$ is just the generator of a usual Fleming-Viot process. The main work in this paper is to solve the martingale problem and analyze the sample path properties of the solution. For $f\in B(\mathbb R^m)$, define $F_{m,f}(\mu)=\langle f,\mu^m\rangle$.
For $\mu\in P(\mathbb{R})$, we say a $P(\mathbb{R})$-valued continuous process $\{Z(t):t\geq0\}$ is a solution of the $(\mathcal {L},\mu)$-\textsl{martingale problem} if $Z(0)=\mu$ and \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{martfor} F(Z(t))-F(Z(0))-\int_0^t\mathcal {L}F(Z(s))ds,\textrm{\ \ \ } t\geq0, \eeqlb is a martingale for each $F\in\mathcal {D}(\mathcal {L}):=\bigcup_{m\geq1}\{F_{m,f}(\mu), f\in C^2(\mathbb{R}^m)\}.$ A simple calculation yields \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{GeneratorL} {\cal L} F_{m,f}(\mu)=\langle \mu^m, G^mf\rangle+\sum_{1\leq i<j\leq m}\gamma\left( \langle \mu^{m-1},\Psi_{ij}f\rangle-\langle\mu^m, f\rangle\right), \eeqlb where $\Psi_{ij}$ denotes the operator from $B(\mathbb{R}^{m})$ to $B(\mathbb{R}^{m-1})$ defined by \begin{equation} \label{replace} \Psi_{ij}f(x_1,\cdots,x_{m-1})=f(x_1,\cdots,x_{m-1},\cdots,x_{m-1},\cdots,x_{m-2}), \end{equation} where $x_{m-1}$ is in the places of the $i$th and the $j$th variables of $f$ on the right hand side. We shall show that the $({\cal L}, \mu)$-martingale problem is well-posed and call the solution a Fleming-Viot process in an environment (FVE for short). We will use the look-down construction suggested by \cite{[DK96]}, with some modifications, to show the existence of the solution. This look-down construction will help us in analyzing the sample path properties. The uniqueness of the $({\cal L},\mu)$-martingale problem will be proved by a classical duality argument. Since the spatial motions of individuals in the look-down system are not independent of each other, when solving the martingale problem we need some technical lemmas, which will be given in the Appendix.
Our other main results include: \begin{enumerate} \item State classification: when $\epsilon>0$, FVE is absolutely continuous with respect to $dx$ and we also deduce a new SPDE for the density process; when $\epsilon=0$ its values are purely atomic; \item When conditioned to have total mass one, a measure-valued branching process in a Brownian medium constructed in \cite{[W98]} is an FVE. \end{enumerate} The remainder of this paper is organized as follows. In Section 2, we solve the $({\cal L}, \mu)$-martingale problem. The state classification of the process will be investigated in Section 3. In the last section, Section 4, we derive the connection between FVE and the process constructed in \cite{[W98]}. Two technical lemmas will be given in the Appendix. \begin{remark} By Theorem 8.2.5 of \cite{[EK86]}, the closure of $\{(f,G^mf):f\in C_c^{\infty}(\mathbb{R}^m)\}$ denoted by $\bar{G}^m$ is single-valued and generates a Feller semigroup $(T_t^m)_{t\geq0}$ on $\hat{C}(\mathbb{R}^m)$. Note that this semigroup is given by a transition probability function and can therefore be extended to all of $B(\mathbb{R}^m)$. \end{remark} Notation: For the reader's convenience, we introduce here our main notation. Let $\hat{\mathbb{R}}$ denote the one-point compactification of $\mathbb{R}$. Given a topological space $E$, let $M(E)$ ($P(E)$) denote the space of finite measures (probability measures) on $E$. Let $B(E)$ denote the set of bounded measurable functions on $E$ and let $C(E)$ denote its subset comprising bounded continuous functions. Let $\hat{C}(\mathbb{R}^n)$ be the space of continuous functions on $\mathbb R^n$ which vanish at infinity and let $C_c^{\infty}(\mathbb{R}^n)$ be the space of functions with compact support and bounded continuous derivatives of any order. Let $C^2(\mathbb{R}^n)$ denote the set of functions in $C(\mathbb{R}^n)$ which are twice continuously differentiable with bounded derivatives up to the second order.
Let $\hat{C}^2(\mathbb{R}^n)$ be the subset of $C^2(\mathbb{R}^n)$ of functions that together with their derivatives up to the second order vanish at infinity. \\ \noindent Let $$ C_{\partial}^2(\mathbb{R}^n)=\{f+c: c\in\mathbb{R} \textrm{ and } f\in \hat{C}^2({\mathbb{R}^n})\} $$ We denote by $C_E[0,\infty)$ the space of continuous paths taking values in $E$. Let $D_E[0,\infty)$ denote the Skorokhod space of c\`{a}dl\`{a}g paths taking values in $E$. For $f\in C(\mathbb{R})$ and $\mu\in M(\mathbb{R})$ we shall write $\langle \mu, f\rangle$ for $\int fd\mu$. \section{Construction}\label{SECCON} \subsection{Uniqueness}\label{SECUNIQUE} In this subsection, we define a dual process to show the uniqueness of the $({\cal L},\mu)$-martingale problem. Let $\{M_t:t\geq0\}$ be a nonnegative integer-valued c\`{a}dl\`{a}g Markov process. For $i\geq j$, the transition intensities $q_{i,i-1}=\gamma i(i-1)/2$ and $q_{ij}=0$ for all other pairs $i,j$. Let $\tau_0=0$ and let $\{\tau_k:1\leq k\leq M_0-1\}$ be the sequence of jump times of $\{M_t:t\geq0\}$. That is $\tau_1=\inf\{t\geq0:M_t\neq M_0\},\cdots,\tau_k=\inf\{t>\tau_{k-1}:M_t\neq M_{\tau_{k-1}}\} .$\\ Let $\{\Gamma_k:1\leq k\leq M_0-1\}$ be a sequence of random operators which are conditionally independent given $\{M_t :t\geq 0\}$ and satisfy $$\textbf{P}\{\Gamma_k=\Psi_{ij}|M(\tau_k-)=l,M(\tau_k)=l-1\} =\left(\begin{array}{c}l\cr 2\end{array}\right)^{-1} ,\textrm{\ \ \ }1\leq i< j\leq l,$$ where $\Psi_{ij}$ are defined by (\ref{replace}). Let $\textbf{B}$ denote the topological union of $\{B(\mathbb{R}^m):m = 1,2,\cdots\}$ endowed with pointwise convergence on each $B(\mathbb{R}^m)$. 
Then \begin{equation} \label{functionf}F_t={T}_{t-\tau_k}^{M_{\tau_k}}\Gamma_k {T}_{\tau_k-\tau_{k-1}}^{M_{\tau_{k-1}}}\Gamma_{k-1}\cdots {T}_{\tau_2-\tau_1}^{M_{\tau_1}}\Gamma_1{T}_{\tau_1}^{M_0}F_0, \textrm{\ \ \ }\tau_k\leq t<\tau_{k+1},~~0\leq k\leq M_0-1, \end{equation} defines a Markov process $\{F_t:t\geq0\}$ taking values from ${\bf B}$. Clearly, $\{(M_t,F_t):t\geq0\}$ is also a Markov process. Let $\textbf{E}_{m,f}$ denote the expectation given $M_0=m$ and $F_0=f\in B(\mathbb{R}^m)$. \begin{theorem} \label{ThDual} Suppose that $\{Z(t):t\geq 0\}$ is a solution of the $({\mathcal{L}},\mu)$-martingale problem and assume that $\{Z(t):t\geq0\}$ and $\{(M_t,F_t):t\geq0\}$ are defined on the same probability space and independent of each other, then \begin{equation} \label{Dual} {\bf{E}}\left\langle Z(t)^m, f\right\rangle ={\bf{E}}_{m,f}\big{[}\left\langle \mu^{M_t},F_t\right\rangle \big{]} \end{equation} for any $t\geq0$, $f\in C(\mathbb{R}^m)$ and integer $m\geq1$. \end{theorem} \textbf{Proof}. In this proof we set $F_{\mu}(m,f)=F_{m,f}(\mu)=\langle \mu^m ,f\rangle$. It suffices to prove (\ref{Dual}) for $f\in C^2(\mathbb R^m)$. By the definition of $F_t$ and elementary properties of $M_t$, we know that $\{(M_t,F_t): t\geq0\}$ has weak generator $\mathcal {L}^\#$ given by \begin{eqnarray} \label{ThDuala} \mathcal {L}^\#F_{\mu}(m,f)= F_{\mu}(m,{G}^mf)+ \sum_{1\leq i<j\leq m}\gamma\left(F_{\mu}(m-1,\Psi_{ij}f)-F_{\mu}(m,f)\right) \end{eqnarray} with $f\in C^2(\mathbb{R}^m)$. In view of (\ref{GeneratorL}) we have \begin{equation} \label{ThDualb} \mathcal {L}^\#F_{\mu}(m,f)={\mathcal {L}}F_{m,f}(\mu). \end{equation} Thus if we can show that for $F_0\in C^2(\mathbb R^m)$, $F_t\in C^2(\mathbb R^m)$ for all $t\geq0$, then dual relationship (\ref{Dual}) follows from Corollary 4.4.13 of \cite{[EK86]}. To this end, it suffices to show that $T_t^m C^2(\mathbb R^m)\subset C^2(\mathbb R^m).$ When $\epsilon>0$, $G^m$ is uniform elliptic. 
The desired result follows from Theorem 0.5 on page 227 of \cite{[Dy65]}. When $\epsilon=0$, Lemma \ref{lehomo} yields the desired conclusion. We are done. $\Box$ \subsection{Look Down Processes}\label{SECLD} Suppose that $x_t=(x_1(t),\cdots,x_m(t))$ is a Markov process in $\mathbb{R}^m$ generated by ${G}^m$. By Lemma 2.3.2 of \cite{[D93]} we know that $(x_1(t),\cdots,x_m(t))$ is an exchangeable Feller process. Let $P_t^{(m)}$ denote its transition semigroup. Then $\{P_t^{(m)}, m\geq1\}$ is a consistent family of Feller semigroups on $C({\mbb R}}\def\dfN{{\mbb N}^m)$, i.e., for all $k\leq m$, any $k$-component of $G^m$-diffusion evolve as a $G^k$-diffusion. Let $\{B_{ijk},\,1\leq i<j,\,1\leq k<\infty\}$ and $\{B_{i0},\, i\geq1\}$ be independent Brownian motions, independent of $W$. Let $\{N_{ij},\,1\leq i<j\}$ be independent, unit rate Poisson processes, independent of $\{B_{ijk}\},\, W$ and let $\tau_{ijk}$ denote the $k$th jump time of $N_{ij}$. Let $\{X_i(0),\, i\geq1\}$ be an exchangeable sequence of random variables, independent of $\{U_{ijk}\}$, $\{U_{i0}\}$, $W$ and $\{N_{ij}\}$. Define $\gamma_{ijk}=\min\{\tau_{i'jk'},\, i'<j: \tau_{i'jk'}>\tau_{ijk}\}$; that is, $\gamma_{ijk}$ is the first jump time of $N_j\equiv \sum_{i<j}N_{ij}$ after $\tau_{ijk}$, and define $\gamma_{j0}=\min\{\tau_{ij1}: i<j\}$. Finally, for $0\leq t<\gamma_{j0}$ define \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{lookdown1} X_j(t)=X_j(0) +\epsilon B_{j0}(t)+\int_0^t\int_{{\mbb R}}\def\dfN{{\mbb N}}h(y-X_{j}(s))W(dyds) \eeqlb and for $\tau_{ijk}\leq t<\gamma_{ijk}$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{lookdown2} X_j(t)=X_{i}(\tau_{ijk})+\epsilon (B_{ijk}(t)-B_{ijk}({\tau_{ijk}})) +\int_{\tau_{ijk}}^t\int_{{\mbb R}}\def\dfN{{\mbb N}}h(y-X_{j}(s))W(dyds). 
\eeqlb Since the $G^m$-diffusions form an exchangeable consistent family of Feller diffusions, between the jump times of the Poisson processes the $X_j$ behave as a $G^1$-diffusion and any $n$-component of the particle systems evolves as a $G^n$-diffusion. At the jump times of $N_{ij}$, $X_j$ ``looks down'' at $X_i$, assumes the value of $X_i$ at the jump time, and then evolves as a $G^1$-diffusion; again any $n$-component of the particle systems evolves as a $G^n$-diffusion. Then $X=(X_1,X_2,\cdots)$ is a Markov process with generator given by \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{generatorX} {\mathbb A}f(x_1,\cdots,x_m)&=& G^mf(x_1,\cdots,x_m)\cr &&+\sum_{1\leq i<j\leq m}\left(f(\theta_{ij}(x_1,\cdots,x_m))-f(x_1,\cdots,x_m)\right), \eeqlb where $f\in C^2({\mbb R}}\def\dfN{{\mbb N}^m)$ and $\theta_{ij}(x_1,\cdots,x_m)$ denotes the element of ${\mbb R}}\def\dfN{{\mbb N}^m$ obtained by replacing $x_j$ by $x_i$ in $(x_1,\cdots,x_m)$. \\ As in \cite{[DK96]}, we want to compare the ${\mbb R}}\def\dfN{{\mbb N}^{\infty}$-valued process $X$ to a sequence of modified Moran-type models. Let $S_m$ denote the collection of permutations of $(1,\cdots,m)$ which we write as ordered $m$-tuples $s=(s_1,\cdots,s_m)$. Let $\pi_{ij}:S_m\rightarrow S_m$ denote the mapping such that $\pi_{ij}s$ is obtained from $s$ by interchanging $s_i$ and $s_j$ and let $\{M_{ijk}:1\leq i\neq j\leq m,k\geq1\}$ be independent random mappings $M_{ijk}: S_m\rightarrow S_m$ such that $P\{M_{ijk}s=s\}=P\{M_{ijk}s=\pi_{ij}s\}=\frac{1}{2}$. In the following we define an $S_m$-valued process $\Sigma^m$ and counting processes $\{\tilde{N}_{ij}, 1\leq i\neq j\leq m\}$ recursively. Let $\Sigma^m(0)$ be uniformly distributed on $S_m$ and independent of all other processes.
Let \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{tildeN} \tilde{N}_{ij}(t)=\sum_{1\leq k< l\leq m}\int_0^t {\bf1} _{\{\Sigma_i^m(r-)=k,\,\Sigma_j^m(r-)=l\}}dN_{kl}(r) \eeqlb and let $\Sigma^m$ be constant except for discontinuities determined by $\Sigma^m(\tilde{\tau}_{ijk})=M_{ijk}\Sigma^m(\tilde{\tau}_{ijk}-),$ where $\tilde{\tau}_{ijk}$ is the $k$-th jump time of $\tilde{N}_{ij}$, or more precisely, interpreting $\Sigma^m$ as a $\mathbb Z^m$-valued process, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{morans} {\Sigma}^{m}(t)=\sum_{1\leq i<j\leq m}\int_0^t \left(M_{ij(\tilde{N}_{ij}(r-)+1)}\Sigma^m(r-)\right) d\tilde{N}_{ij}(r). \eeqlb Next, define $\{\hat{N}_{ij},1\leq i\leq m<j\}$ by \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{hatN} \hat{N}_{ij}(t)=\sum_{k=1}^m\int_0^t{\bf 1}_{\{\Sigma_i^m(r-)=k\}}dN_{kj}(r) \eeqlb and let $\hat{\tau}_{ijk}$ denote the $k$-th jump time of $\hat{N}_{ij}$. Note that for $j>m$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray}\label{moranN} N_j=\sum_{1\leq i<j}N_{ij}=\sum_{1\leq i\leq m}\hat{N}_{ij}+\sum _{m<i\leq j}N_{ij}. \eeqlb By Lemma 2.1 of \cite{[DK96]}, $\{\tilde{N}_{ij}\}$ and $\{\hat{N}_{ij}\}$ are Poisson processes with intensities $\frac{1}{2}$ and 1, respectively. And for each $t\geq0$, $\Sigma^m(t)$ is independent of ${\cal G}_t=\sigma(\tilde{N}_{ij}(s),\hat{N}_{kl}(s):s\leq t,1\leq i\neq j\leq m, 1\leq k\leq m<l)$. Define $$Y_j^m(t)=X_{\Sigma_j^m(t)}(t),\quad j=1,\cdots,m.$$ \begin{lemma}\label{lemmaY}$ Y^m=(Y_1^m,\cdots, Y_m^m)$ is a Markov process with generator given by \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{generatorY} {\mathbb A}_mf(y_1,\cdots,y_m)&=& G^mf(y_1,\cdots,y_m)\cr &&+\frac{1}{2}\sum_{1\leq i\neq j\leq m}\left(f(\theta_{ij}(y_1,\cdots,y_m))-f(y_1,\cdots,y_m)\right), \eeqlb where $f\in C^2({\mbb R}}\def\dfN{{\mbb N}^m)$ and $\theta_{ij}(x_1,\cdots,x_m)$ denote the element of ${\mbb R}}\def\dfN{{\mbb N}^m$ obtained by replacing $x_j$ by $x_i$ in $(x_1,\cdots,x_m)$. 
\end{lemma} {\bf Proof. } The proof is similar to that of part (b) in Lemma 2.1 of \cite{[DK96]}. For $1\leq i,\,j\leq m$, define \begin{align}\label{tildeB} {\tilde{B}}_{j0}&=B_{\alpha 0},&\text{where }\alpha&=\Sigma_j^m(0),&\cr {\tilde{B}}_{ijk}&= B_{\alpha\beta\gamma}, &\text{where }\alpha&=\Sigma_i^m(\tilde{\tau}_{ijk}-), \beta=\Sigma_j^m(\tilde{\tau}_{ijk}-),& \cr &&\qquad\gamma&=N_{\alpha \beta}(\tilde{\tau}_{ijk}-)& \end{align} Define $\tilde{\gamma}_{ijk}=\min\{\tilde{\tau}_{i'jk'},\, i'\neq j: \tilde{\tau}_{i'jk'}>\tilde{\tau}_{ijk}\}$ and let $\tilde{\gamma}_{j0}$ be the first jump time of $\tilde{N}_j\equiv\sum_{i\neq j}\tilde{N}_{ij}$. By Lemma \ref{Aselection}, $Y_j^m(t)=X_{\Sigma^m_j(t)}^m(t)$ yields that for $0\leq t<\tilde{\gamma}_{j0}$ \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{Y1} Y^m_j(t)=Y^m_j(0) +\epsilon {\tilde{B}}_{j0}(t)+\int_0^t\int_{{\mbb R}}\def\dfN{{\mbb N}}h(y-Y^m_{j}(s))W(dyds) \eeqlb and for $\tilde{\tau}_{ijk}\leq t<\tilde{\gamma}_{ijk}$, \begin{eqnarray}}\def\eeqlb{\end{eqnarray} \label{Y2} Y^m_j(t)=Y_{i}(\tilde{\tau}_{ijk})+\epsilon ({\tilde{B}}_{ijk}(t) -{\tilde{B}}_{ijk}({\tilde{\tau}_{ijk}})) +\int_{\tilde{\tau}_{ijk}}^t\int_{{\mbb R}}\def\dfN{{\mbb N}}h(y-Y^m_{j}(s))W(dyds). \eeqlb By Lemmas A5.1 and A5.2 of \cite{[DK96]}, $\{{\tilde{B}}_{j0}\}, \{{\tilde{B}}_{ijk}\}$ and $\{Y_j(0)\}$ are independent of $\{\tilde{N}_{ij}\}$ and $\Sigma^m$. Furthermore, the ${\tilde{B}}_{j0}$ and the ${\tilde{B}}_{ijk}$ are independent Brownian motions and $(Y_1^m(0),\cdots,Y^m_m(0))$ has the same distribution as $(X_1(0),\cdots,X_m(0))$. Then the desired result follows from (\ref{Y1}) and (\ref{Y2}). $\Box$ By (\ref{Y1}) and (\ref{Y2}), we see $(Y_1^m(t),\cdots, Y^m_m(t))$ is exchangeable and has the same empirical measures as $(X_1,\cdots,X_m).$ From the construction above, $\Sigma^m(t)$ must be independent of $Y^m(t)$. Thus for each $t>0$, $(X_1(t),X_2(t),\cdots)$ is exchangeable. 
To show the existence of a solution to the $({\cal L}, \mu)$-martingale problem, we need the following lemma.
\begin{lemma}\label{Lemma2.3}
\begin{enumerate}
\item[(a).] Suppose that $Z(t)$ is a $P({\mbb R})$-valued process satisfying the martingale formula (\ref{martfor}) for every $F\in {\cal D}({\cal L})$. Then $\{Z(t):t\geq0\}$ has a continuous modification, and for $\phi\in C^2({\mbb R})$,
\begin{eqnarray}\label{l2.3a}
M_t(\phi):=\langle Z(t),\phi\rangle-\langle Z(0),\phi\rangle-\frac{\rho_{\epsilon}}{2}\int_0^t\langle Z(s),\phi'' \rangle ds
\end{eqnarray}
is a martingale with quadratic variation
\begin{eqnarray}\label{l2.3b}
\gamma\int_0^t \left(\langle Z(s),\phi^2\rangle-\langle Z(s),\phi\rangle^2\right)ds+ \int_0^t ds\int_{{\mbb R}}\langle Z(s),h(\cdot-y)\phi'\rangle^2dy.
\end{eqnarray}
\item[(b).] If a continuous $P({\mbb R})$-valued process $Z(t)$ satisfies the martingale problem (\ref{l2.3a}) and (\ref{l2.3b}), then it is also a solution of the $({\cal L}, \mu)$-martingale problem.
\end{enumerate}
\end{lemma}
{\bf Proof. } (a). The existence of a continuous modification follows from Lemma 2.1 of \cite{[EK87]} and the fact that (\ref{martfor}) is a martingale for each $F\in {\cal D}({\cal L})$, which also yields (\ref{l2.3a}) and (\ref{l2.3b}). The proof of assertion (b) is a classical approximation procedure; we leave it to the interested reader. $\Box$

Now we come to our main result in this section.
\begin{theorem} \label{ThExist} Given $\mu\in P(\mathbb R)$, suppose that $\{X_i(0),i\geq1\}$ is an exchangeable sequence of random variables such that
$$\lim_{m\rightarrow\infty}\frac{1}{m}\sum_{i=1}^m \delta_{X_i(0)}=\mu.$$
Let
\begin{eqnarray}\label{approdirac}
Z_m(t)=\frac{1}{m}\sum_{i=1}^m \delta_{X_i(t)}=\frac{1}{m}\sum_{i=1}^m\delta_{Y_i^m(t)}.
\end{eqnarray}
Then the $({\cal L},\mu)$-martingale problem has a solution $Z$ such that for each $t>0$,
\begin{eqnarray} \label{ThExista}
\lim_{m\rightarrow\infty}\sup_{s\leq t}\rho(Z_m(s),Z(s))=0\quad a.s.,
\end{eqnarray}
where $\rho$ denotes the Prohorov metric on $P(\mathbb R)$. \end{theorem}
{\bf Proof. } With the help of Lemma \ref{Lemma2.3}, which can be regarded as a version of Lemma 2.3 of \cite{[DK96]}, the proof is similar to that of Theorem 2.4 of \cite{[DK96]}. We omit it here. $\Box$

\section{Sample Path Properties}\label{SECstate}

In this section, we show that when $\epsilon>0$, $Z(t)$ is absolutely continuous with respect to $dx$ for almost all $t\geq0$, and when $\epsilon=0$ the values of $Z$ are purely atomic. We first describe the \textsl{weak atomic topology} on $M(\mathbb R)$ introduced by Ethier and Kurtz \cite{[EK94]}. Recall that $\rho$ denotes the Prohorov metric on $M(\mathbb R)$, which induces the topology of weak convergence. Define the metric $\rho_a$ on $M(\mathbb R)$ by
\begin{eqnarray} \label{atomictopo}
\rho_a(\mu,\nu)=\rho(\mu,\nu)&+&\sup_{0<\epsilon\leq1}\bigg{|}\int_{{\mbb R}} \int_{{\mbb R}}\Phi(|x-y|/\epsilon)\mu(dx)\mu(dy)\cr
\!\!&\!\!&\qquad\quad-\int_{{\mbb R}} \int_{{\mbb R}}\Phi(|x-y|/\epsilon)\nu(dx)\nu(dy)\bigg{|},
\end{eqnarray}
where $\Phi(\cdot)=\left(1-\cdot\right)_+.$ The topology on $M(\mathbb R)$ induced by $\rho_a$ is called the \textsl{weak atomic topology}. For $\mu\in M(\mathbb R)$, define $\mu^*=\sum\mu(\{x\})^2\delta_{x}$.
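To make the role of the supremum term in $\rho_a$ concrete, here is a small Python sketch (illustrative helper names of our own) evaluating the double integral $\int\int\Phi(|x-y|/\epsilon)\,\mu(dx)\mu(dy)$ for a purely atomic $\mu=\sum_k w_k\delta_{x_k}$; as $\epsilon\downarrow0$ it converges to $\mu^*({\mbb R})=\sum_k w_k^2$, which is how $\rho_a$ keeps track of the atomic part.

```python
# Illustrative sketch: the integral term of the weak-atomic metric rho_a,
# evaluated for a purely atomic measure mu = sum_k w_k * delta_{x_k}.
# `atoms` is a list of (location, weight) pairs (hypothetical representation).

def phi(r):
    """Phi(r) = (1 - r)_+ as in the definition of rho_a."""
    return max(1.0 - r, 0.0)

def atomic_functional(atoms, eps):
    """Double integral of Phi(|x - y| / eps) d mu(x) d mu(y)."""
    return sum(w1 * w2 * phi(abs(x1 - x2) / eps)
               for x1, w1 in atoms
               for x2, w2 in atoms)
```

For $\mu=\frac12\delta_0+\frac12\delta_1$ and any $\epsilon\leq1$ the cross terms vanish, so the functional already equals $\mu^*({\mbb R})=\frac14+\frac14=\frac12$.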
We need the following results of \cite{[EK94]}.
\begin{lemma} \label{EKlemmas} Let $\mu_n,\mu\in M({\mbb R})$.
\begin{enumerate}
\item[(a).] Suppose $\rho(\mu_n,\mu)\rar0$. Then $\rho(\mu_n^*,\mu^*)\rar0$ if and only if $\mu_n^*({\mbb R})\rightarrow\mu^*({\mbb R})$;
\item[(b).] $\rho_a(\mu_n,\mu)\rar0$ if and only if $\rho(\mu_n,\mu)\rar0$ and $\rho(\mu_n^*,\mu^*)\rar0$;
\item[(c).] Suppose $Z\in C_{(M({\mbb R}),\,\rho)}[0,\infty)$. If $Z^*({\mbb R})\in C_{[0,\infty)}[0,\infty)$, then $Z\in C_{(M({\mbb R}),\,\rho_a)}[0,\infty)$.
\end{enumerate}
\end{lemma}
{\bf Proof.} See Lemmas 2.1, 2.2 and 2.11 of \cite{[EK94]} for (a), (b) and (c), respectively. $\Box$

Our first main result in this section is the following theorem.
\begin{theorem} \label{Thstate} Suppose $Z$ is a solution of the $({\cal L},\mu)$-martingale problem and assume $\epsilon=0$. Then ${\bf P}\{Z(t)\in P_{a}(\mathbb R),\,t>0\}={\bf P}\{Z(\cdot)\in C_{(M({\mbb R}),\,\rho_a)}[0,\infty)\}=1,$ where $P_a(\mathbb R)$ denotes the collection of purely atomic probability measures on $\mathbb R$. \end{theorem}
{\bf Proof.} According to the look-down construction, (\ref{lookdown1}) and (\ref{lookdown2}), if $X_j$ `looks down' at $X_i$ and assumes the value of $X_i$ at the jump time, then $X_j$ and $X_i$ have the same sample path before the next jump time. Define
$$ x_i(t)=X_i(0)+\int_0^t\int_{\mathbb R}h(y-x_i(s))W(dyds),\quad t\geq0,\quad i=1,2,\cdots. $$
Therefore, by Lemma \ref{EKlemmas}, $Z_m(\cdot)\in D_{(P(\mathbb R),\, \rho_a)}[0,\infty)$ and $Z^*_m(t,\mathbb R)$ is monotone in $t\geq0$. According to Proposition 3.3 of \cite{[DK96]} and Lemma \ref{lehomo}, almost surely for $t>0$ there are only a finite number of paths alive in the `look-down' system; denote this number by $D(t)$, which is independent of $m$. Let ${t_0}>0$ be fixed. Note that $D$ is c\`{a}dl\`{a}g on $[t_0,+\infty)$.
Note that $D(t)\leq D(s)$ for $t>s$. Let $\{x_{c_i}(t_0), i=1,2,\cdots, D(t_0) \}$ be the enumeration of the living paths at $t_0$ with $x_{c_1}(t_0)<x_{c_2}(t_0)<\cdots<x_{c_{D(t_0)}}(t_0)$. Thus for $t>t_0$, we may represent $Z_m(t)$ by
\begin{eqnarray}\label{Thstatec}
Z_m(t)=\sum_{i=1}^{D(t_0)}\frac{b_{i,m}(t)}{m}\delta_{x_{c_i}(t)},\quad t\geq t_0,
\end{eqnarray}
where $b_{i,m}(t), i=1,2,\cdots$ are nonnegative integer-valued c\`{a}dl\`{a}g random processes defined on $[t_0,+\infty)$ with $\sum_{i=1}^{D(t)} b_{i,m}(t)=m$. Note that by Lemma \ref{lehomo}, for every $T>t_0$, almost surely,
\begin{eqnarray}\label{Thstated}
\inf_{i\neq j}\inf_{t_0\leq t\leq T}|x_{c_i}(t)-x_{c_j}(t)|>0.
\end{eqnarray}
Therefore, according to (\ref{ThExista}), we may represent $Z(t)$ by
\begin{eqnarray}\label{repZ}
Z(t)=\sum_{i=1}^{D(t_0)}{b_{i}(t)}\delta_{x_{c_i}(t)},\quad t\geq t_0,
\end{eqnarray}
where $b_{i}(t)\geq 0, i=1,2,\cdots$ are c\`{a}dl\`{a}g random processes defined on $[t_0,+\infty)$ with
\begin{eqnarray}\label{Thstatee}
\sup_{t_0\leq t\leq T}\sum_{i=1}^{D(t_0)}|b_{i,m}(t)/m-b_{i}(t)|\rar0,\quad a.s.\quad \text{as }m\rightarrow\infty.
\end{eqnarray}
Since $t_0$ is arbitrary, ${\bf P}\{Z(t)\in P_{a}(\mathbb R),\,t>0\}=1.$ From the above and Lemma \ref{EKlemmas}, we see that $Z(\cdot\vee t_0)\in D_{(P(\mathbb R),\,\rho_a)}[0,\infty),\,a.s.$ Consequently,
$$ Z_m^*(\cdot\vee t_0,\mathbb R)\rightarrow Z^*(\cdot\vee t_0,\mathbb R)\quad \text{in}\quad D_{\mathbb R}[0,\infty)\quad \text{as}\quad m\rightarrow\infty\quad a.s. $$
On the other hand, according to the `look-down' construction, if we define
$$ J(Z_m^*(t\vee t_0,{\mbb R})):=\int_0^{\infty}e^{-u}[1\wedge\sup_{0\leq t\leq u}|Z_m^*(t\vee t_0,{\mbb R})-Z_m^*((t\vee t_0)-,{\mbb R})|]du, $$
then
$$ J(Z_m^*(t\vee t_0,{\mbb R}))\leq \frac{4m+2}{m^2}\rightarrow 0\quad\text{as }m\rightarrow\infty.
$$
By Theorem 3.10.2 of \cite{[EK86]} and Lemma \ref{EKlemmas}, $Z(\cdot\vee t_0)\in C_{(P(\mathbb R),\,\rho_a)}[0,\infty),\,a.s.$ Set $D=\{(x,y)\in{\mbb R}^2:x=y\}$ and $D_2=D\times {\mbb R}^2+{\mbb R}^2\times D$. By approximating indicator functions by continuous functions, we see that (\ref{Dual}) holds for $f=1_{D}$ and $g=1_{D_2}$. Note that $\langle Z(t)^2, f\rangle=Z^*(t,{\mbb R})$ and $\langle Z(t)^2, f\rangle^2=\langle Z(t)^4,g\rangle$. Therefore, by (\ref{Dual}) and the right continuity of $(F_t,M_t)$,
$$ \lim_{t\downarrow0}{\bf E}|Z^*(t,{\mbb R})-\mu^*({\mbb R})|^2=\lim_{t\downarrow0}{\bf E}|\langle Z(t)^2, f\rangle-\langle \mu^2,f\rangle|^2=0. $$
By Lemma \ref{EKlemmas} and the monotonicity of $Z^*_m(t,\mathbb R)$, $\rho_a(Z(t),\mu)\rar0$ almost surely as $t\rar0$. Thus $Z(\cdot)\in C_{(P(\mathbb R),\,\rho_a)}[0,\infty),\,a.s.$ $\Box$

In the next theorem, we shall show that when $\epsilon>0$, $Z(t,dx)$ is absolutely continuous with respect to $dx$, and derive the SPDE for the density.
\begin{theorem}\label{Thcon} Suppose $Z$ is a solution of the $({\cal L},\mu)$-martingale problem and assume $\epsilon>0$.
Then for $t>0$, $Z(t,dx)$ is absolutely continuous with respect to $dx$ and the density $Z_t(x)$ satisfies the following SPDE: for $\phi\in {\cal S}(\mathbb R)$,
\begin{eqnarray}\label{SPDE}
\langle Z_t,\phi\rangle-\langle\mu,\phi\rangle\!\!&=\!\!&\int_0^t\int_{{\mbb R}}\sqrt{\gamma Z_s(x)}\phi(x){V}(dsdx) -\int_0^t\int_{{\mbb R}}\langle Z_s,\phi\rangle\sqrt{\gamma Z_s(x)}{V}(dsdx)\cr
\!\!&\!\!&+\int_0^t\int_{\mathbb R}\langle Z_s,h(x-\cdot)\phi'\rangle W(dsdx)+\frac{\rho_{\epsilon}}{2}\int_0^t\langle Z_s,\phi''\rangle ds,
\end{eqnarray}
where $V$ and $W$ are two independent Brownian sheets and ${\cal S}({\mbb R})$ is the space of rapidly decreasing $C^{\infty}$-functions on ${\mbb R}$ equipped with the Schwartz topology. \end{theorem}
{\bf Proof.} We borrow the ideas of Theorem 1.7 of \cite{[KS88]}. First, by the duality relationship (\ref{Dual}), one can derive that for any $\phi,\psi\in C(\mathbb R)$,
\begin{eqnarray}\label{moment1}
{\bf E}\langle Z(t),\phi\rangle=\langle\mu, T_t^1\phi\rangle
\end{eqnarray}
and
\begin{eqnarray}\label{moment2}
{\bf E}\left[\langle Z(t),\phi\rangle\langle Z(t),\psi\rangle\right]=e^{-\gamma t} \langle \mu^2,T_t^2\phi\psi\rangle+ \int_0^t e^{-\gamma s}\langle \mu T_{t-s}^1,\Psi_{12}(T_s^2 \phi\psi)\rangle ds.
\end{eqnarray}
For $\epsilon>0$, the semigroup $(T_t^m)_{t>0}$ is uniformly elliptic and has a density $q_m(t,x,y)$ satisfying
$$q_m(t,x,y)\leq c\cdot g_m({\epsilon' t},x,y),~~t>0,~x,y\in\mathbb{R}^m,$$
where $c$ is a constant and $g_m(t,x,y)$ denotes the transition density of $m$-dimensional standard Brownian motion; see \cite{[Dy65]}. Without loss of generality, we assume $\epsilon'=1$.
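The Gaussian computation used in the next step rests on the factorization $g_2(s,(y,y),(z_1,z_2))=g_1(s,y,z_1)g_1(s,y,z_2)$ together with the Chapman-Kolmogorov identity $\int g_1(u,x,z)g_1(s,z,y)dz=g_1(u+s,x,y)$ and the symmetry of the heat kernel. The following self-contained Python snippet (illustration only; the grid parameters and function names are ours) checks this numerically:

```python
import math

def g1(t, x, y):
    # transition density of one-dimensional standard Brownian motion
    return math.exp(-(x - y) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def conv(u, s, x, y, h=0.005, L=12.0):
    # Riemann-sum approximation of \int g1(u,x,z) g1(s,y,z) dz
    n = int(2 * L / h)
    return sum(g1(u, x, -L + k * h) * g1(s, y, -L + k * h)
               for k in range(n + 1)) * h

u, up, s, x, y = 0.3, 0.5, 0.7, 0.2, -0.4
# g2(s,(y,y),.) factorizes, so the double integral splits into two convolutions
lhs = conv(u, s, x, y) * conv(up, s, x, y)
rhs = g1(u + s, x, y) * g1(up + s, x, y)
```

Since the integrands are smooth and rapidly decaying, the Riemann sum is extremely accurate, and `lhs` agrees with `rhs` to many digits.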
Note that
\begin{eqnarray*}
&&\int_{{\mbb R}^2}q_1(u,x,z_1)q_1(u',x,z_2)q_1(t-s,z,y) q_2(s,(y,y),(z_1,z_2))dz_1dz_2\\
&\rightarrow& q_1(t-s,z,y) q_2(s,(y,y),(x,x))
\end{eqnarray*}
as $u,u'\rar0$. Meanwhile,
\begin{eqnarray*}
&&\int_{{\mbb R}^2}q_1(u,x,z_1)q_1(u',x,z_2)q_1(t-s,z,y) q_2(s,(y,y),(z_1,z_2))dz_1dz_2\\
&&\leq c\int_{{\mbb R}^2}g_1(u,x,z_1)g_1(u',x,z_2)g_1(t-s,z,y) g_2(s,(y,y),(z_1,z_2))dz_1dz_2\\
&&=cg_1(u+s,x,y)g_1(u'+s,x,y)g_1(t-s,z,y).
\end{eqnarray*}
Take $\phi=\phi_{u,x}=q_1(u,x,\cdot)$ and $\psi=\psi_{u',x}=q_1(u',x,\cdot)$ in (\ref{moment2}). By the dominated convergence theorem, when $u,u'\rar0$,
\begin{eqnarray}\label{Thcona}
&&\int_0^Tdt\int dx \int_0^t e^{-\gamma s}\langle \mu T_{t-s}^1,\Psi_{12}(T_s^2 \phi\psi)\rangle ds \cr
&&\rightarrow\int_0^Tdt\int dx\int_0^tds\int_{{\mbb R}^2}e^{-\gamma s} q_1(t-s,z,y) q_2(s,(y,y),(x,x))dy\mu(dz).
\end{eqnarray}
Similarly, we have
\begin{eqnarray}\label{Thconb}
&&\int_0^Tdt\int dx e^{-\gamma t} \langle \mu^2,T_t^2\phi\psi\rangle\cr
&&\rightarrow\int_0^Tdt\int dx\int_{{\mbb R}^4}e^{-\gamma t} q_2(t,(x_1,x_2),(x,x))\mu(dx_1)\mu(dx_2).
\end{eqnarray}
Combining (\ref{Thcona}) and (\ref{Thconb}) yields that $\{\langle Z(t), q_1(u,x,\cdot)\rangle, u>0\}$ is Cauchy in $L^2(\Omega\times[0,T]\times {\mbb R})$ as $u\rar0$. This implies the existence of the density $Z_t(x)$ of $Z_t$ in $L^2(\Omega\times[0,T]\times {\mbb R}).$

Next, we derive the SPDE (\ref{SPDE}). Choose a one-dimensional standard Brownian motion $\hat{B}_t$ independent of $Z_t$. For any fixed $c>\gamma/2$, set $G_t=\exp(\sqrt{\gamma}\hat{B}_t+(c-\gamma/2)t)$. Then $G_t>0$ and $G_t\rightarrow\infty$ as $t\rightarrow\infty$ a.s., and $G$ satisfies
$$ dG_t=\sqrt{\gamma}G_td\hat{B}_t+cG_tdt,\quad G_0=1. $$
Define $C_t=\int_0^t G_sds$.
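For completeness, the Itô computation behind the SDE for $G_t$ is the following (writing the exponent with the $\sqrt{\gamma}$ scaling that makes the drift and diffusion coefficients of the stated equation match):

```latex
G_t=\exp\Big(\sqrt{\gamma}\,\hat{B}_t+\Big(c-\frac{\gamma}{2}\Big)t\Big),\qquad
dG_t=G_t\Big(\sqrt{\gamma}\,d\hat{B}_t+\Big(c-\frac{\gamma}{2}\Big)dt\Big)
+\frac{\gamma}{2}\,G_t\,dt
=\sqrt{\gamma}\,G_t\,d\hat{B}_t+c\,G_t\,dt,\qquad G_0=1.
```

The correction term $\frac{\gamma}{2}G_t\,dt$ comes from the second-order part of Itô's formula applied to the exponential, since $d\langle\sqrt{\gamma}\hat{B}\rangle_t=\gamma\,dt$.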
$C_t$ is strictly increasing and $C_t\rightarrow\infty$ as $t\rightarrow\infty$ a.s. Let $C_t^{-1}$ denote its inverse function on $[0,\infty).$ Define the measure-valued process $I_t$ by
$$ I_t(dx)=G_{C_t^{-1}}\cdot Z_{C_t^{-1}}(dx). $$
By Ito's formula, (\ref{l2.3a}) and (\ref{l2.3b}),
\begin{eqnarray*}
\langle I_t,\phi\rangle&=&\langle I_0,\phi\rangle+\int_0^{C_t^{-1}}G_sdM_s(\phi) +\int_0^{C_t^{-1}}\sqrt{\gamma} G_s\langle Z_s,\phi\rangle d\hat{B}_s\\
&&~+c\int_0^{C_t^{-1}}G_s\langle Z_s,\phi\rangle ds+ \frac{\rho_{\epsilon}}{2}\int_0^{C_t^{-1}}G_s\langle Z_s,\phi''\rangle ds.
\end{eqnarray*}
Then
$$\tilde{M}_t(\phi):=\int_0^{C_t^{-1}}G_sdM_s(\phi) +\int_0^{C_t^{-1}}\sqrt{\gamma} G_s\langle Z_s,\phi\rangle d\hat{B}_s, \quad t\geq0, $$
is a local martingale with quadratic variation
$$ \langle \tilde{M}(\phi)\rangle_t={\gamma}\int_0^t\langle I_s,\phi^2\rangle ds+\int_0^tds \int_{\mathbb R}\langle I_s,h(x-\cdot)\phi'\rangle^2/\langle I_s,1\rangle dx. $$
Clearly, $I_t(dx)$ is also absolutely continuous with respect to $dx$; denote the corresponding density by $I_t(x)$. By the martingale representation theorem (see Theorem 3.3.6 of \cite{[KX95]} or Theorem III-7 of \cite{[EM90]}), there exist two independent $L^2(\mathbb R)$-cylindrical Brownian motions $\tilde{V}$ and $\tilde{W}$ (possibly on an extension of the probability space) such that
$$ \tilde{M}_t(\phi)=\int_0^t\langle f(s,I_s)^*\phi,d\tilde{V}_s\rangle_{L^2(\mathbb R)}+\int_0^t\langle g(s,I_s)^*\phi,d\tilde{W}_s\rangle_{L^2(\mathbb R)}, $$
where $f(s,I_s)$ and $g(s,I_s)$ are linear maps from $L^2(\mathbb R)$ to ${\cal S}'(\mathbb R)$, the space of Schwartz distributions, such that for $\phi\in {\cal S}({\mbb R})$,
$$f(s,I_s)^*\phi(x)=\sqrt{{\gamma}I_s(x)}\phi(x)$$
and
$$ g(s,I_s)^*\phi(x)=\int_{\mathbb R}h(x-y)\phi'(y)I_s(y)dy/\sqrt{\langle I_s,1\rangle}.
$$
Thus
\begin{eqnarray*}
\langle I_t,\phi\rangle&=&\int_0^t\langle f(s,I_s)^*\phi,d\tilde{V}_s\rangle_{L^2(\mathbb R)}+\int_0^t\langle g(s,I_s)^*\phi,d\tilde{W}_s\rangle_{L^2(\mathbb R)}\\
&&~+c\int_0^t\langle I_s,\phi\rangle/\langle I_s,1\rangle ds+ \frac{\rho_{\epsilon}}{2}\int_0^t\langle I_s,\phi''\rangle/\langle I_s,1\rangle ds.
\end{eqnarray*}
Define two new $L^2(\mathbb R)$-cylindrical Brownian motions $\hat{V}$ and $\hat{W}$ by
$$ \langle \hat{V}_t,\phi\rangle=\int_0^{C_t}\frac{1}{\sqrt{\langle I_s,1\rangle}}\langle d\tilde{V},\phi\rangle,\quad\langle \hat{W}_t,\phi\rangle=\int_0^{C_t}\frac{1}{\sqrt{\langle I_s,1\rangle}}\langle d\tilde{W},\phi\rangle. $$
Since $\tilde{V}$ and $\tilde{W}$ are independent, $\hat{V}$ and $\hat{W}$ are orthogonal (hence they are independent). Then we can find two independent Brownian sheets ${V}(dtdx)$ and ${W}(dtdx)$ such that
$$ \hat{V}_t(l)=\int_0^t\int_{{\mbb R}}l(x){V}(dsdx),\quad \hat{W}_t(l)=\int_0^t\int_{{\mbb R}}l(x){W}(dsdx),\quad \forall\, l\in L^2({\mbb R}). $$
Using Ito's formula and noting that $\langle Z_t,\phi\rangle=\langle I_{C_t},\phi\rangle/\langle I_{C_t},1\rangle$ yields
\begin{eqnarray*}
\langle Z_t,\phi\rangle-\langle\mu,\phi\rangle\!\!&=\!\!&\int_0^t\int_{{\mbb R}}\sqrt{\gamma Z_s(x)}\phi(x){V}(dsdx) -\int_0^t\int_{{\mbb R}}\langle Z_s,\phi\rangle\sqrt{\gamma Z_s(x)}{V}(dsdx)\cr
\!\!&\!\!&+\int_0^t\int_{\mathbb R}\langle Z_s,h(x-\cdot)\phi'\rangle W(dsdx)+\frac{\rho_{\epsilon}}{2}\int_0^t\langle Z_s,\phi''\rangle ds
\end{eqnarray*}
for $\phi\in {\cal S}(\mathbb R)$. This completes the proof. $\Box$

\section{Connections to Measure-valued Branching Processes in a Random Medium}

It has been shown that there are deep connections between the Dawson-Watanabe and Fleming-Viot superprocesses; see \cite{[EM91],[KS88],[P92]}.
In this section, we shall show that the Fleming-Viot process in a random environment is a measure-valued branching process in a Brownian medium, conditioned to have total mass one. Such measure-valued branching processes were first constructed and studied in \cite{[W97]} and \cite{[W98]}. The argument in this section is similar to that of \cite{[P92]}, with some modifications.

Let $\{\omega(t),t\geq0\}$ and $\{\hat{\omega}(t),t\geq0\}$ denote the coordinate processes on $C_{P({\mbb R})}[0,\infty)$ and $C_{M({\mbb R})}[0,\infty)$, respectively. Define ${\cal F}^0_t=\sigma(\omega(s); s\leq t)$, $\hat{\cal F}^0_t=\sigma(\hat{\omega}(s); s\leq t)$, ${\cal F}_t={\cal F}_{t+}^0$ and $\hat{\cal F}_t=\hat{\cal F}_{t+}^0$. Based on the results in \cite{[W98]} and the continuity of $\hat{\omega}$, for each $\mu\in M({\mbb R})$ there exists a unique probability measure $\hat{\bf Q}_{\mu}$ on $C_{M({\mbb R})}[0,\infty)$ such that for $\phi\in C^2({\mbb R})$,
\begin{eqnarray}\label{SDSM1}
\hat{M}_t(\phi):=\langle \hat{\omega} (t),\phi\rangle-\langle\mu,\phi\rangle-\frac{\rho_{\epsilon}}{2}\int_0^t\langle \hat{\omega}(s),\phi'' \rangle ds, \quad t\geq0,
\end{eqnarray}
under $\hat{\bf Q}_{\mu}$ is a continuous $\hat{\cal F}_t$-martingale starting at 0 with quadratic variation
\begin{eqnarray}\label{SDSM2}
\langle\hat{M}(\phi)\rangle_t=\gamma\int_0^t \langle \hat{\omega}(s),\phi^2\rangle ds+ \int_0^t ds\int_{{\mbb R}}\langle \hat{\omega}(s),h(\cdot-y)\phi'\rangle^2dy.
\end{eqnarray}
Let
\begin{eqnarray*}
C_+&=&\{f:[0,\infty)\rightarrow[0,\infty): f \textrm{ continuous}, \exists\,t_f\in(0,\infty] \textrm{ such that }\\
&&\qquad f(t)>0 \textrm{ if }t\in[0,t_f) \textrm{ and }f(t)=0 \textrm{ if } t\geq t_f\}
\end{eqnarray*}
with the compact-open topology.
Let $L_y\in P(C_+)$ denote the law of the unique solution of
$$ \eta_t=y+\int_0^t\sqrt{\gamma\eta_s}dB_s, $$
where $B$ is a standard Brownian motion. Note that
\begin{eqnarray}\label{Thdwfv2}
\hat{\bf Q}_{\mu}(\hat{\omega}({\mbb R})\in\cdot)=L_{\mu({\mbb R})}(\cdot).
\end{eqnarray}
For $\mu\in M({\mbb R})-\{0\}$, define $\bar{\mu}(\cdot)=\mu(\cdot)/\mu({\mbb R})$. Let $\{{\bf Q}_{\bar{\mu},f}(A):A\in {\cal F},f\in C_+\}$ be a regular conditional probability for $\bar{\omega}$ given $\hat{\omega}_{\cdot}({\mbb R})=f(\cdot)$ under $\hat{\bf Q}_{\mu}$, where $\cal F$ denotes the Borel $\sigma$-field on $C_{P({\mbb R})}[0,\infty)$. That is,
$$ \hat{\bf Q}_{\mu}(\bar{\omega}\in A|\hat{\omega}_{\cdot}({\mbb R})=f(\cdot))={\bf Q}_{\bar{\mu},f}(A) \quad \forall A\in {\cal F}. $$
\begin{lemma}\label{ThDWFV} For each $\mu\in M({\mbb R})-\{0\}$, there exists a subset $C_{\mu}$ of $C_+$ such that $L_{\mu({\mbb R})}(C_{\mu})=1$ and for $f\in C_{\mu}$, under ${\bf Q}_{\bar{\mu},f}$,
\begin{eqnarray}\label{ThDWFVm0}
{M}_t^f(\phi,\omega):=\langle {\omega}_t, \phi\rangle-\langle\bar{\mu},\phi\rangle-\frac{\rho_{\epsilon}}{2}\int_0^t\langle {\omega}_s,\phi'' \rangle ds, \quad t<t_f,
\end{eqnarray}
is an ${\cal F}_t$-martingale starting at 0 for every $\phi\in C^2({\mbb R})$ with
\begin{eqnarray} \label{ThDWFVm1}
\langle{M}^f(\phi)\rangle_t&=&\gamma\int_0^{t}(\langle{\omega}_s,\phi^2\rangle- \langle{\omega}_s,\phi\rangle^2)f(s)^{-1}ds \cr
&&\quad+\int_0^{t} ds\int_{{\mbb R}}\langle {\omega}_s,h(\cdot-y)\phi'\rangle^2dy\quad \forall\,t<t_f
\end{eqnarray}
and $\omega_t=\omega_{t_f} \textrm{ for all }t\geq t_f$. \end{lemma}
\begin{remark} Note that if $f=1$, then (\ref{ThDWFVm0}) and (\ref{ThDWFVm1}) are just (\ref{l2.3a}) and (\ref{l2.3b}), respectively.
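For intuition about the total-mass law $L_y$, here is an Euler-Maruyama sketch in Python (purely illustrative; the step size, seed and function names are our own) of the Feller branching diffusion $\eta_t=y+\int_0^t\sqrt{\gamma\eta_s}\,dB_s$, with the path absorbed at $0$ as in the definition of $C_+$:

```python
import math
import random

def simulate_eta(y, gamma, steps, dt=1e-3, seed=0):
    """Euler-Maruyama path of eta_t = y + int_0^t sqrt(gamma * eta_s) dB_s.
    The increment is clipped at 0, and once the path hits 0 it stays there
    (absorption at the extinction time t_f)."""
    rng = random.Random(seed)
    eta, path = y, [y]
    for _ in range(steps):
        if eta > 0.0:
            db = rng.gauss(0.0, math.sqrt(dt))
            eta = max(eta + math.sqrt(gamma * eta) * db, 0.0)
        path.append(eta)
    return path
```

A path of this kind plays the role of $f\in C_+$ in the conditioning below: nonnegative, continuous, and identically zero after its extinction time.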
\end{remark}
{\bf Proof.} Define $T_n=\inf\{t:\hat{\omega}_t({\mbb R})\leq 1/n\}$ and for $\phi\in C^2({\mbb R})$,
\begin{eqnarray} \label{ThDWFVa}
\bar{M}_t^n(\phi):=\int_0^{t\wedge T_n} \hat{\omega}_s({\mbb R})^{-1}d\hat{M}_s(\phi)-\int_0^{t\wedge T_n} \langle\hat{\omega}_s,\phi\rangle\hat{\omega}_s({\mbb R})^{-2}d\hat{M}_s(1).
\end{eqnarray}
Thus for fixed $t$, $\{\bar{M}_t^n(\phi):n\geq1\}$ is a martingale in $n$. By Ito's formula,
\begin{eqnarray} \label{ThDWFVb}
\langle\bar{\omega}_{t\wedge T_n},\phi\rangle =\langle\bar{\mu},\phi\rangle+\frac{\rho_{\epsilon}}{2}\int_0^{t\wedge T_n}\langle\bar{\omega}_s,\phi''\rangle ds+\bar{M}_t^n(\phi),
\end{eqnarray}
which implies that
\begin{eqnarray} \label{ThDWFVc}
\sup_{t\leq K,\, n\geq 1}|\bar{M}_t^n(\phi)|\leq 2||\phi||_{\infty}+\frac{K\rho_{\epsilon}}{2}||\phi''||_{\infty}.
\end{eqnarray}
Therefore, according to the martingale convergence theorem and the maximal inequality, $\bar{M}_t^n(\phi)$ converges as $n\rightarrow \infty$ uniformly for $t$ in compacts a.s. (by perhaps passing to a subsequence). We denote by $\bar{M}_t(\phi)$ the limit, which is a continuous martingale satisfying
\begin{eqnarray} \label{ThDWFVd}
\bar{M}_t^n(\phi)=\bar{M}_{t\wedge T_n}(\phi),\quad \forall t\geq0,\quad a.s.
\end{eqnarray}
and
\begin{eqnarray} \label{ThDWFVe}
\sup_{t\leq K}|\bar{M}_t(\phi)|\leq 2||\phi||_{\infty}+\frac{K\rho_{\epsilon}}{2}||\phi''||_{\infty}.
\end{eqnarray}
Letting $n\rightarrow\infty$ in (\ref{ThDWFVb}) yields
\begin{eqnarray} \label{ThDWFVf}
\langle\bar{\omega}_{t},\phi\rangle =\langle\bar{\mu},\phi\rangle+\frac{\rho_{\epsilon}}{2}\int_0^{t\wedge T_0}\langle\bar{\omega}_s,\phi''\rangle ds+\bar{M}_t(\phi),\quad \forall t\geq0~a.s.~\forall \phi\in C^2({\mbb R}),
\end{eqnarray}
where $T_0=\inf\{t:\hat{\omega}_t({\mbb R})=0\}$. Note that
\begin{eqnarray} \label{ThDWFV1}
\bar{M}_{t\wedge T_0}(\phi)=\bar{M}_{t}(\phi).
\end{eqnarray}
Let $s<t$ and let $F$ be a bounded $\sigma(\hat{\omega}_{\cdot}({\mbb R}))$-measurable random variable. Since $\{\hat{\omega}_t({\mbb R}):t\geq0\}$ is a martingale under $\hat{\bf Q}_{\mu}$, the martingale representation theorem implies that there exists a $\sigma(\hat{\omega}_s({\mbb R}):s\leq t)$-predictable function $f$ such that
\begin{eqnarray}\label{ThDWFVg}
F=\hat{\bf Q}_{\mu}(F)+\int_0^{\infty}f(s,\hat{\omega})d\hat{\omega}_s({\mbb R}).
\end{eqnarray}
According to (\ref{ThDWFVd}) and (\ref{ThDWFVg}),
\begin{eqnarray*}
\!\!&\!\!& \hat{\bf Q}_{\mu}((\bar{M}_{t\wedge T_n}(\phi)-\bar{M}_{s\wedge T_n}(\phi))F|{\cal F}_s)\\
\!\!&\!\!&\quad=\hat{\bf Q}_{\mu}((\bar{M}^n_{t}(\phi)-\bar{M}^n_{s}(\phi)) \int_0^{\infty}f(u,\hat{\omega})d\hat{\omega}_u({\mbb R})|{\cal F}_s)\\
\!\!&\!\!&\quad =\hat{\bf Q}_{\mu}((\int_{s\wedge T_n}^{t\wedge T_n} \hat{\omega}_u({\mbb R})^{-1}d\hat{M}_u(\phi)-\int_{s\wedge T_n}^{t\wedge T_n} \langle\hat{\omega}_u,\phi\rangle\hat{\omega}_u({\mbb R})^{-2}d\hat{M}_u(1)) \int_s^tf(u,\hat{\omega})d\hat{\omega}_u({\mbb R})|{\cal F}_s)\\
\!\!&\!\!&\quad=\hat{\bf Q}_{\mu}(\int_{s\wedge T_n}^{t\wedge T_n}\gamma(\langle\hat{\omega}_u,\phi\rangle\hat{\omega}_u({\mbb R})^{-1} -\langle\hat{\omega}_u,\phi\rangle\hat{\omega}_u({\mbb R})^{-1})f(u,\hat{\omega})du|{\cal F}_s)\\
\!\!&\!\!&\quad=0.
\end{eqnarray*}
Letting $n\rightarrow\infty$ in the above, we have
$$ \hat{\bf Q}_{\mu}((\bar{M}_t(\phi)-\bar{M}_s(\phi))F|{\cal F}_s)=0, $$
which yields that for fixed $\phi\in C^2({\mbb R})$, $\{\bar{M}_t(\phi):t\geq0\}$ is a martingale with respect to ${\cal G}_t:={\cal F}_t\vee\sigma(\hat{\omega}_s({\mbb R}):s\geq0)$. On the other hand, by (\ref{ThDWFVd}) and (\ref{ThDWFV1}),
\begin{eqnarray} \label{ThDWFVh}
\langle\bar{M}(\phi)\rangle_t&=&\gamma\int_0^{t\wedge T_0}(\langle\bar{\omega}_s,\phi^2\rangle- \langle\bar{\omega}_s,\phi\rangle^2)\hat{\omega}_s({\mbb R})^{-1}ds \cr
&&\quad+\int_0^{t\wedge T_0} ds\int_{{\mbb R}}\langle \bar{\omega}_s,h(\cdot-y)\phi'\rangle^2dy\qquad\hat{\bf Q}_{\mu}-a.s.
\end{eqnarray}
Set $M_t^f(\phi,\omega)=M_{t_f-}^f(\phi)$ for $t\geq t_f$.
By (\ref{ThDWFVf}) and (\ref{ThDWFV1}),
\begin{eqnarray}\label{ThDWFVi}
\bar{M}_t(\phi)=M_t^{\omega_{\cdot}({\mbb R})}(\phi,\bar{\omega}),\quad \forall\, t\geq0\quad\hat{\bf Q}_{\mu}-a.s.~\forall \phi\in C^2({\mbb R}).
\end{eqnarray}
Then for each $G\in b{\cal F}_t^0$ and $s<t$, by the ${\cal G}_t$-martingale property of $\bar{M}_t(\phi)$, (\ref{Thdwfv2}) and (\ref{ThDWFVi}),
$$ {\bf Q}_{\bar{\mu},f}\left(\left(M_t^f(\phi)-M_s^f(\phi)\right)G\right)=0\quad L_{\mu({\mbb R})}-a.a.~f. $$
By considering rational $s<t$, using the fact that $C_{P({\mbb R})}[0,\infty)$ with the local uniform topology is a standard measurable space, and taking limits in $s$ and $G$, we can find an $L_{\mu({\mbb R})}$-null set off which the above holds for all $s<t$ and $G\in {\cal F}_s$. That is, $\{M_t^f(\phi):t\geq0\}$ is an ${\cal F}_t$-martingale under ${\bf Q}_{\bar{\mu},f}$ for $L_{\mu({\mbb R})}$-a.a. $f$. Take $t_n^f=\inf\{u:f(u)\leq 1/n\}$. According to (\ref{ThDWFVh}) and the above arguments, we can deduce that for every $n\geq1$,
$$ M_{t\wedge t_n^f}^f(\phi)^2-\gamma\int_0^{t\wedge t_n^f}(\langle{\omega}_s,\phi^2\rangle- \langle{\omega}_s,\phi\rangle^2)f(s)^{-1}ds -\int_0^{t\wedge t_n^f} ds\int_{{\mbb R}}\langle {\omega}_s,h(\cdot-y)\phi'\rangle^2dy,\quad t\geq0, $$
is an ${\cal F}_t$-martingale under ${\bf Q}_{\bar{\mu},f}$ for $L_{\mu({\mbb R})}$-a.a. $f$. Now consider a countable subset $C_S({\mbb R})$ of $C^2({\mbb R})$ such that any function $\phi\in C^2({\mbb R})$ can be approximated by a sequence $\{\phi_k:\,k\geq1\}\subset C_S({\mbb R})$ in such a way that not only $\phi$ but all of its derivatives up to second order are approximated boundedly and pointwise. Taking limits in $M_t^f(\phi)$ and $\langle M^f(\phi)\rangle_t$ yields the desired conclusion.
$\Box$

For $T>0$, define $(\Omega_{T-},{\cal F}_{T-})=(C_{P({{\mbb R}})}[0,T), \text{Borel sets}).$ Let $(\hat{\Omega}_{T-},\hat{\cal F}_{T-})$ denote the same space with $M({\mbb R})$ in place of $P({\mbb R})$. If ${\bf Q}$ is a probability on $C_{P({\mbb R})}[0,\infty),$ then ${\bf Q}|_{T-}$ is defined on $(\Omega_{T-},{\cal F}_{T-})$ by ${\bf Q}|_{T-}(A)={\bf Q}(\omega|_{[0,T)}\in A).$ Similarly, one defines $(\Omega_{T},{\cal F}_{T})$, $(\hat{\Omega}_{T},\hat{\cal F}_{T})$ and ${\bf Q}|_{T}.$ Suppose ${\bf Q}_{\mu}$ is the unique probability measure on $C_{P({\mbb R})}[0,\infty)$ such that $\{\omega(t),t\geq0\}$ under $\bf Q_{\mu}$ is a solution of the $({\cal L},\mu)$-martingale problem. Our main result in this subsection is the following theorem, which is analogous to Corollary 4 of \cite{[P92]}.
\begin{theorem}\label{ThM} Suppose that $\{\mu_n\}\subset M({\mbb R})-\{0\}$ satisfies $\bar{\mu}_n\rightarrow\mu$ in $P({\mbb R})$.
\begin{enumerate}
\item[(a).] If for each $n$ there exists a function $f_n\in C_{\mu_n}$ such that for some $T>0$, $\sup_{0\leq t\leq S}|f_n-1|\rightarrow 0$ for every $S<T$ as $n\rightarrow\infty$, then
\begin{eqnarray}\label{Thm1}
{\bf Q}_{\bar{\mu}_n,f_n} |_{T-}\rightarrow {\bf Q}_{\mu}|_{T-}\text{ weakly on } (\Omega_{T-},{\cal F}_{T-}).
\end{eqnarray}
\item[(b).] Let $\{A_n\}$ be a sequence of Borel subsets of $C_+$ such that $L_{\mu_n({\mbb R})}(A_n)>0$ for every $n\geq1$. If for some $T>0$,
$$ \sup\{|g(t)-1|:g\in A_n,t\leq S\}\rar0 \text{ as }n\rightarrow\infty,\quad\forall S<T, $$
then
$$ \hat{\bf Q}_{\mu_n}(\bar{\omega}\in \cdot|\omega_{\cdot}({\mbb R})\in A_n)|_{T-}\rightarrow {\bf Q}_{\mu}|_{T-}\text{ weakly on } (\Omega_{T-},{\cal F}_{T-}). $$
\end{enumerate}
\end{theorem}
{\bf Proof.} (a).
It suffices to prove that for every $S<T$,
$$ {\bf Q}_{\bar{\mu}_n,f_n} |_{S}\rightarrow {\bf Q}_{\mu}|_{S}\text{ weakly on } (\Omega_{S},{\cal F}_{S}). $$
Let $\hat{\mathbb{R}}={\mbb R}\cup\{\partial\}$ denote the one-point compactification of $\mathbb{R}$. Since $\sup_{0\leq t\leq S}|f_n-1|\rightarrow 0$ for $S<T$ as $n\rightarrow\infty$, we have $\inf_{t\leq S}f_n\geq 1/2$ for $n$ large enough and
$$ |\langle M^{f_n}(\phi)\rangle_t-\langle M^{f_n}(\phi)\rangle_s|\leq \frac{\gamma}{2}||\phi||_{\infty}^2|t-s| +||\rho||_{\infty}||\phi'||^2_{\infty}|t-s|,\quad \forall\, s,t\leq S, \quad{\bf Q}_{\bar{\mu}_n,f_n}-a.s. $$
By Theorem 2.3 of \cite{[RC86]}, one can check that $\{{\bf Q}_{\bar{\mu}_n,f_n}|_S: n\geq1\}$ is tight in $P(C_{P(\hat{{\mbb R}})}[0,S])$. Let ${\bf Q}$ be a limit point in $P(C_{P(\hat{{\mbb R}})}[0,S])$. With a slight abuse of notation, we denote by $\{\omega_s:s\leq S\}$ the coordinate process of $C_{P(\hat{{\mbb R}})}[0,S]$. One may use the Skorohod representation theorem to see that under $\bf Q$,
\begin{eqnarray}\label{l2.3a1}
M_t(\phi):=\langle \omega_t,\phi\rangle-\langle \mu,\phi\rangle-\frac{\rho_{\epsilon}} {2}\int_0^t\langle \omega_s,\phi'' \rangle ds
\end{eqnarray}
is a continuous martingale starting at 0 for $t\leq S$ and $\phi\in C^2_{\partial}({\mathbb{R}})$ with quadratic variation
\begin{eqnarray}\label{l2.3b2}
\gamma\int_0^t \left(\langle \omega_s,\phi^2\rangle-\langle \omega_s,\phi\rangle^2\right)ds+ \int_0^t ds\int_{{\mbb R}}\langle \omega_s,h(\cdot-y)\phi'\rangle^2dy.
\end{eqnarray}
We claim that
$$\mathbf{Q}\{\omega_t(\{\partial\})=0~\textrm{for all}~ t\in[0,S]\}=1.$$
Consequently, $\mathbf{Q}$ is supported by $C_{P({{\mbb R}})}[0,S]$. For $k\geq1$, let
$$ \phi_k(x)=\begin{cases} \exp\{-\frac{1}{|x|^2-k^2}\},& \textrm{if}~|x|>k,\\ 0,&\textrm{if}~|x|\leq k.
\end{cases} $$
One can check that $\{\phi_k\}\subset C^2_{\partial}({\mathbb{R}})$ with $\lim_{|x|\rightarrow\infty}\phi_k(x)=1$ and $\lim_{|x|\rightarrow\infty}\phi_k'(x)=0$, and that $\phi_k(\cdot)\rightarrow1_{\{\partial\}}(\cdot)$ boundedly and pointwise; moreover, $||\phi_k'||_{\infty}\rightarrow0$ and $||\phi_k''||_{\infty}\rightarrow0$ as $k\rightarrow\infty$. By the martingale maximal inequality, we have
\begin{eqnarray*}
&&\mathbf{Q}\{\sup_{0\leq t\leq S}|M_t(\phi_k)-M_t(\phi_j)|^2\}\cr
&&\quad\leq4\gamma\int_0^S\mathbf{Q}\{\langle\omega_s,(\phi_k-\phi_j)^2\rangle\} ds+8\gamma\int_0^S\mathbf{Q}\{\langle\omega_s,|\phi_k-\phi_j|\rangle\} ds\cr
&&\qquad+4\int_0^Sds\int_{\hat{\mathbb{R}}}\mathbf{Q}\{\langle \omega_s,h(z-\cdot)(\phi_k'-\phi_j')\rangle^2\}dz.
\end{eqnarray*}
By the dominated convergence theorem, $\mathbf{Q}\{\sup_{0\leq t\leq S}|M_t(\phi_k)-M_t(\phi_j)|^2\}\rightarrow0$ as $k,j\rightarrow\infty$. Therefore, there exists $M^{\partial}=(M^{\partial}_t)_{t\leq S}$ such that for every $t\leq S$,
$$\mathbf{Q}\{|M_t(\phi_k)-M_t^{\partial}|^2\} \rightarrow0 $$
and (by perhaps passing to a subsequence)
$$ \sup_{0\leq s\leq t}|M_s(\phi_k)-M_s^{\partial}| \rightarrow0\quad {\bf Q}-a.s. $$
as $k\rightarrow\infty$. Hence $M^{\partial}$ is a continuous martingale. It follows from (\ref{l2.3a1}) that $M_t^{\partial}=\omega_t(\{\partial\})$ is a continuous martingale with mean zero. Thus $\mathbf{Q}(\omega_t(\{\partial\}))=0$. The claim then follows from the continuity of $\big{\{}\omega_t(\{\partial\}):t\geq0\big{\}}$. Extend $\bf Q$ to $C_{P({\mbb R})}[0,\infty)$ by setting the conditional distribution of $\{\omega_{t+S}:t\geq0\}$ given ${\cal F}_S^0$ equal to ${\bf Q}_{\omega_S}$. Then ${\bf Q}={\bf Q}_{\mu}$ and so ${\bf Q}|_S={\bf Q}_{\mu}|_S$. This completes the proof of (a).

(b). Let $H:\Omega|_{T-}\rightarrow {\mbb R}$ be bounded and continuous.
Then by Lemma \ref{ThDWFV} and (\ref{Thm1}), \begin{eqnarray*} &&|\hat{\bf Q}_{\mu_n}(H(\bar{\omega}) |\omega_{\cdot}(\mathbb{R})\in A_n)- {\bf Q}_{\mu}(H)|\cr &&\quad = |\hat{\bf Q}_{\mu_n}(H(\bar{\omega}) |\omega_{\cdot}(\mathbb{R})\in A_n\cap C_{\mu_n})- {\bf Q}_{\mu}(H)|\\ && \quad\leq \left|\int_{A_n\cap C_{\mu_n}} \left({\bf Q}_{\bar{\mu}_n,g}(H)-{\bf Q}_{\mu}(H)\right)dL_{\mu_n(\mathbb{R})}\, L_{\mu_n(\mathbb{R})}(A_n\cap C_{\mu_n})^{-1}\right|\\ &&\quad \leq \sup_{g\in A_n\cap C_{\mu_n}}|{\bf Q}_{\bar{\mu}_n,g}(H)-{\bf Q}_{\mu}(H)|\\ &&\quad \rightarrow0\quad \text{ as }\quad n\rightarrow\infty. \end{eqnarray*} We are done. $\Box$ \begin{corollary} \label{coroPE} Suppose that $\{\mu_n\}\subset M(\mathbb{R})-\{0\}$ satisfies $\bar{\mu}_n\rightarrow\mu$ in $P(\mathbb{R})$. For $T>0$, let $T_n\rightarrow T$ and $\delta_n\rightarrow0$ and assume $|\mu_n(\mathbb{R})-1|<\delta_n$. Then \begin{enumerate} \item[(a).] $\hat{\bf Q}_{\mu_n}(\bar{\omega}\in \cdot| \sup\limits_{t\leq T_n}|\omega_t(\mathbb{R})-1|<\delta_n) \xrightarrow{weakly} {\bf Q}_{\mu}|_{T-}\quad \text{on}\quad (\Omega_{T-},{\cal F}_{T-})$; \item[(b).] $\hat{\bf Q}_{\mu_n}(\hat{\omega}\in \cdot| \sup\limits_{t\leq T_n}|\omega_t(\mathbb{R})-1|<\delta_n) \xrightarrow{weakly} {\bf Q}_{\mu}|_{T-}\quad \text{on}\quad (\hat{\Omega}_{T-},\hat{\cal F}_{T-})$. \end{enumerate} \end{corollary} {\bf Proof.} Part (a) follows by applying Theorem \ref{ThM} with $$ A_n=\{g\in C_+:\sup_{t\leq T_n}|g(t)-1|<\delta_n\}. $$ Part (b) follows from (a) and the fact that for $S<T$ and $n$ large enough, $$ \hat{\bf Q}_{\mu_n}\left(\sup_{t\leq S}|\hat{\omega}_t(\mathbb{R})^{-1}-1|<\frac{\delta_n}{1-\delta_n}\bigg{|}A_n\right)=1. $$ $\Box$ \textbf{Acknowledgement}. I would like to give my sincere thanks to Professors Shui Feng, Zenghu Li, Jie Xiong and Hao Wang for their stimulating discussions.
\centerline{APPENDIX} \appendix \section{Random selections of stochastic integrals} \begin{lemma} \label{Aselection} Let $W(dsdy)$ be a space-time white noise on $[0,\infty)\times \mathbb{R}$ based on Lebesgue measure. Let $\{X_i(t),t\geq0,i=1,2, \cdots\}$ be a sequence of real-valued predictable stochastic processes. Let $h(x,y)$ be a measurable function on $\mathbb{R}\times\mathbb{R}$. Define stochastic integrals $$ Y_i(t):=\int_0^t\int_{\mathbb{R}}h(X_i(s),y)W(dsdy),\quad t\geq0,\quad i\geq 1. $$ Suppose $\pi$ is a random variable taking values in $\{1,2,\cdots\}$, independent of $\{X_i,i=1,2,\cdots\}$ and $W$. Then $$ Y_{\pi}(t)=\int_0^t\int_{\mathbb{R}}h(X_{\pi}(s),y)W(dsdy),\quad t\geq0. $$ \end{lemma} {\bf Proof.} If $h$ is a simple function, the desired conclusion is obvious. For the general case, one can consider the $L^2$ approximation and It\^{o}'s isometry; see Theorem 2.2.5 of \cite{[Wl86]}. $\Box$ \section{Stochastic flow of diffeomorphisms} In this part, we consider the following stochastic differential equation \begin{eqnarray}\label{SDEQN} \xi_t=x+\int_s^t\int_{\mathbb{R}}h(y-\xi_u)W(dudy),\quad x\in\mathbb{R},\quad t\geq s, \end{eqnarray} where $W(dudy)$ is a space-time white noise on $[0,\infty)\times \mathbb{R}$ based on Lebesgue measure. The existence and pathwise uniqueness for (\ref{SDEQN}) have been proved in \cite{[DLW01]}. \begin{lemma} \label{lehomo} Suppose $h\in C^2(\mathbb R)$.
There is a modification of the solution, denoted by $\xi_{s,t}(x)$, such that almost surely \begin{enumerate} \item[(1)] $\xi_{s,t}(x,\omega)$ is continuous in $(s,t,x)$ and satisfies $\lim_{t\downarrow s}\xi_{s,t}(x,\omega)=x;$ \item[(2)] $\xi_{s,t+u}(x,\omega)=\xi_{t,t+u}(\xi_{s,t}(x,\omega),\omega)$ holds for all $s<t$ and $u>0$; \item[(3)] the map $\xi_{s,t}(\cdot,\omega):\mathbb{R}\rightarrow\mathbb{R}$ is an onto homeomorphism for all $s<t$; \item[(4)] the map $\xi_{s,t}(\cdot,\omega):\mathbb{R}\rightarrow\mathbb{R}$ is a $C^2$-diffeomorphism for all $s<t$. \end{enumerate} \end{lemma} {\bf Proof. } The argument is similar to that in Chapter 2 of \cite{[Ku84]}; we omit it here and leave the details to interested readers. $\Box$ \textbf{References} \begin{enumerate} \renewcommand{\labelenumi}{[\arabic{enumi}]} \bibitem{[D93]} {} Dawson, Donald A. (1993): Measure-valued Markov processes, in: Lecture Notes in Math., Vol. 1541, pp. 1-260, Springer, Berlin. \bibitem{[DLW01]} {} Dawson, Donald A.; Li, Z.; Wang, H. (2001): Superprocesses with dependent spatial motion and general branching densities, \textit{Electron. J. Probab.} {\bf6}, no. 25, 33 pp. (electronic). \bibitem{[DLZ04]} {} Dawson, Donald A.; Li, Z.; Zhou, X. (2004): Superprocesses with coalescing Brownian spatial motion as large scale limits, \textit{J. Theoret. Probab.} {\bf17}, 673-692. \bibitem{[DK96]} Donnelly, P. and Kurtz, T. G. (1996): A countable representation of the Fleming-Viot measure-valued diffusion, \textit{Ann. Probab.} {\bf24}, no. 2, 698-742. \bibitem{[Dy65]} {} Dynkin, E. B. (1965): \textsl{Markov Processes, Vol. II}, Academic Press Inc., New York; Springer-Verlag, Berlin. \bibitem{[EM90]} {} El Karoui, N. and M\'{e}l\'{e}ard, S. (1990): Martingale measures and stochastic calculus, \textit{Probab. Theory Rel. Fields} {\bf 84}, 83-101. \bibitem{[E00]} {} Etheridge, A.
(2000): \textsl{An Introduction to Superprocesses}, AMS, Providence, Rhode Island. \bibitem{[EM91]} {} Etheridge, A. and March, P. (1991): A note on superprocesses, \textit{Probab. Theory Rel. Fields} {\bf 89}, 141-148. \bibitem{[EK86]} {} Ethier, S.N. and Kurtz, T.G. (1986): \textsl{Markov Processes: Characterization and Convergence}, John Wiley \& Sons, Inc., New York. \bibitem{[EK87]} {} Ethier, S.N. and Kurtz, T.G. (1987): The infinitely-many-alleles model with selection as a measure-valued diffusion, in: Stochastic Methods in Biology (Nagoya, 1985), Lecture Notes in Biomath., Vol. 70, Springer, Berlin, pp. 72-86. \bibitem{[EK94]} {} Ethier, S.N. and Kurtz, T.G. (1994): Convergence to Fleming-Viot processes in the weak atomic topology, \textit{Stochastic Process. Appl.} {\bf 54}, 1-27. \bibitem{[KX95]} {} Kallianpur, G. and Xiong, J. (1995): Stochastic differential equations in infinite-dimensional spaces, \textit{IMS Lecture Notes---Monograph Series} {\bf 26}, Institute of Mathematical Statistics. \bibitem{[KS88]} {} Konno, N. and Shiga, T. (1988): Stochastic partial differential equations for some measure-valued diffusions, \textit{Probab. Theory Related Fields} {\bf79}, 201-225. \bibitem{[K99]} Krylov, N. V. (1999): An analytic approach to SPDEs, in: Stochastic Partial Differential Equations: Six Perspectives, \textit{Math. Surveys Monogr.} \textbf{64}, 185-242, Amer. Math. Soc., Providence, RI. \bibitem{[Ku84]} Kunita, H. (1984): Stochastic differential equations and stochastic flows of diffeomorphisms, \textit{Lecture Notes in Math.}, Vol. 1097, 143-303, Springer, Berlin. \bibitem{[LR05]} {} Le Jan, Y. and Raimond, O. (2004): Flows, coalescence and noise, \textit{Ann. Probab.} {\bf32}, 1247-1315. \bibitem{[MX01]} {} Ma, Zhi-Ming and Xiang, Kai-Nan (2001): Superprocesses of stochastic flows, \textit{Ann. Probab.} {\bf 29}, 317-343. \bibitem{[P92]} {} Perkins, E. A. (1992): Conditional Dawson-Watanabe processes and Fleming-Viot processes,
\textit{Seminar on Stochastic Processes 1991}, Birkh\"{a}user, Basel, pp. 142-155. \bibitem{[RC86]} {} Roelly-Coppoletta, S. (1986): A criterion of convergence of measure-valued processes: application to measure branching processes, \textit{Stochastics} {\bf 17}, 43-65. \bibitem{[SA01]} {} Skoulakis, G. and Adler, Robert J. (2001): Superprocesses over a stochastic flow, \textit{Ann. Appl. Probab.} {\bf 11}, 488-543. \bibitem{[Wl86]} {} Walsh, J.B. (1986): An introduction to stochastic partial differential equations, \textit{Lecture Notes in Math.}, Vol. 1180, pp. 265-439, Springer, Berlin. \bibitem{[W97]} {} Wang, H. (1997): State classification for a class of measure-valued branching diffusions in a Brownian medium, \textit{Probab. Theory Related Fields} {\bf109}, 39-55. \bibitem{[W98]} {} Wang, H. (1998): A class of measure-valued branching diffusions in a random medium, \textit{Stochastic Anal. Appl.} {\bf 16}, 753-786. \bibitem{[Z07]} {} Zhou, X. (2007): A superprocess involving both branching and coalescing, \textit{Ann. Inst. H. Poincar\'{e} Probab. Statist.} {\bf 43}, 599-618. \end{enumerate} \end{document}
Detecting discordance enrichment among a series of two-sample genome-wide expression data sets Volume 18 Supplement 1 Proceedings of the 27th International Conference on Genome Informatics: genomics Yinglei Lai1, Fanni Zhang1, Tapan K. Nayak1, Reza Modarres1, Norman H. Lee2 & Timothy A. McCaffrey3 BMC Genomics volume 18, Article number: 1050 (2017) With the current microarray and RNA-seq technologies, two-sample genome-wide expression data have been widely collected in biological and medical studies. The related differential expression analysis and gene set enrichment analysis have been frequently conducted. Integrative analysis can be conducted when multiple data sets are available. In practice, discordant molecular behaviors among a series of data sets can be of biological and clinical interest. In this study, a statistical method is proposed for detecting discordance gene set enrichment. Our method is based on a two-level multivariate normal mixture model. It is statistically efficient, with a parameter space that grows only linearly in the number of data sets. The model-based probability of discordance enrichment can be calculated for gene set detection. We apply our method to a microarray expression data set collected from forty-five matched tumor/non-tumor pairs of tissues for studying pancreatic cancer. We divided the data set into a series of non-overlapping subsets according to the tumor/non-tumor paired expression ratio of gene PNLIP (pancreatic lipase, recently shown to be associated with pancreatic cancer). The log-ratio ranges from a negative value (i.e. more expressed in non-tumor tissue) to a positive value (i.e. more expressed in tumor tissue). Our purpose is to understand whether any gene sets are enriched in discordant behaviors among these subsets (as the log-ratio increases from negative to positive). We focus on KEGG pathways.
The detected pathways will be useful for our further understanding of the role of gene PNLIP in pancreatic cancer research. Among the top list of detected pathways, the neuroactive ligand receptor interaction and olfactory transduction pathways are the two most significant. Then, we consider gene TP53, which is well known for its role as a tumor suppressor in cancer research. Its log-ratio also ranges from a negative value (i.e. more expressed in non-tumor tissue) to a positive value (i.e. more expressed in tumor tissue). We divided the microarray data set again according to the expression ratio of gene TP53. After the discordance enrichment analysis, we observed overall similar results, and the above two pathways remained the most significant detections. More interestingly, only these two pathways have been identified as associated with pancreatic cancer in a pathway analysis of genome-wide association study (GWAS) data. This study illustrates that some disease-related pathways can be enriched in discordant molecular behaviors when an important disease-related gene changes its expression. Our proposed statistical method is useful in the detection of these pathways. Furthermore, our method can also be applied to genome-wide expression data collected by the recent RNA-seq technology. Genome-wide expression data have been widely collected by the recent microarray [1–3] or RNA-seq technologies [4, 5]. In addition to the differential expression analysis for the identification of potential study-related biomarkers [6], gene set enrichment analysis (or gene set analysis) for the identification of study-related pathways (or gene sets) has received considerable attention in the recent literature [7, 8]. It enables us to detect weak but coherent changes in individual genes through aggregating information from a specific group of genes.
In the current public databases, large genome-wide expression data sets or multiple genome-wide expression data sets have been made available [3, 9]. For a large data set, multiple subsets can be generated according to different stages of an important feature. Integrative analysis enables us to detect weak but coherent changes in individual datasets through aggregating information from different datasets [10–12]. Integrative gene set enrichment is an approach that aggregates information from a specific group of genes among different datasets [13–15]. Due to the aforementioned complex analysis scenario, different analysis methods are needed to address different study purposes. For example, the study purpose can be to identify gene sets with statistical significance after data integration (without considering whether changes are positive or negative) and an extension of traditional meta-analysis method can be used, or the study purpose can be to identify gene sets with concordance enrichment and a mixture model based approach can be used. In this study, we consider a series of related genome-wide expression data sets collected at different stages of an important feature. For an illustrative example, RNA-seq data can be collected at many different growth time points and we are interested in the following study purpose. The gene expression in some pathways may be overall high at early time points and overall low at later time points. It is biologically interesting to identify these pathways with clearly discordant behaviors. Pang and Zhao [16] have recently suggested a stratified gene set enrichment analysis. (Jones et al. [17] also recently conducted a stratified gene expression analysis.) The analysis purpose in this study is different from theirs. As we have explained, to achieve an efficient analysis for the detection of discordance among a series of related genome-wide expression data sets, we need a specific statistical method. 
In a differential expression analysis and/or gene set enrichment analysis, it is usually unknown whether a gene is truly differentially expressed (up-regulated or down-regulated) or non-differentially expressed (null). Statistically, we can conduct a test (e.g. t-test) for the observations from each gene and obtain a p-value to evaluate how likely the gene is differentially expressed. False discovery rate [6, 18] can be used to evaluate the proportion of false positives among claimed positives. Another approach can also be considered. It is based on the well-known finite normal-distribution mixture model [19]. Signed z-scores can be obtained from one-sided p-values [15, 20]. The assumption is that all the z-scores are a sample of a mixture model with three components: one with zero population mean representing non-differentially expressed genes and the other two with positive and negative population means representing up-regulated and down-regulated genes, respectively. The false discovery rate (FDR) can be conveniently calculated under this framework. In the mixture model approach, although the component information is still unknown, it can be estimated by the well-established E-M algorithm [19]. This information has been used to address the enrichment in concordance among different data sets [15]. In this study, our interest is to detect enrichment in discordance among a series of related genome-wide expression data sets collected at different stages of an important feature. The estimated component information can be useful in the calculation of discordance enrichment probability (see "Methods" for details). Therefore, our method is developed based on a mixture model. In the "Methods" section, we will review the background for our mixture model based approach. Without a structure consideration, the model parameter space increases exponentially with the increase of number of data sets. 
Therefore, a novel statistical contribution of this study is that we propose a two-level mixture model to achieve a parameter space that increases only linearly with the number of data sets. The model parameters can be estimated by the well-established E-M algorithm and the model-based probability of discordance enrichment can be calculated for gene set detection. Table 1 gives an artificial example to illustrate discordance enrichment. Assume there are six two-sample genome-wide expression data sets, and z-scores (see "Methods" for details) for all genes are calculated. Assume there is an important molecular pathway with nine genes, and their z-scores are shown in Table 1. A positive or negative z-score implies a possible up-regulation or down-regulation, respectively. In Table 1, there are several genes with some clearly positive and some clearly negative z-scores (e.g. absolute value greater than 4). For example, z-scores 7.7, 4.8, -4.9 and -7.6 are observed for gene G4; z-scores 6.5 and -8.1 are observed for gene G5; z-scores 7.9, 5.0, 4, -8.6 and -8.9 are observed for gene G6; z-scores 4.6, -5.6 and -9.0 are observed for gene G7; and z-scores 5.3, -4.1 and -4.8 are observed for gene G8. These observations of clear discordance suggest that, in this pathway, some genes may behave clearly differently among different data sets. Furthermore, five out of the nine genes show these clearly discordant behaviors. If we only expect about 30% of genes to show such behaviors, then this proportion is obviously large (>50%). An exploration of pathways (or gene sets) enriched in clear discordance will enable us to further understand the molecular mechanisms of complex diseases. Table 1 An artificial example for discordance illustration Pancreatic cancer related studies are important in public health [21]. Recently, gene PNLIP (pancreatic lipase) has been shown to be associated with the pancreatic cancer survival rate [22].
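The "clearly discordant" rule illustrated by Table 1 can be sketched in a few lines of Python. This is a minimal illustration, not part of the formal method: the function name is made up, the cutoff of 4 is the informal threshold mentioned in the text, and the unlisted z-scores of the other four pathway genes are assumed non-discordant.

```python
def clearly_discordant(zs, cut=4.0):
    """A gene is 'clearly discordant' if it has at least one strongly
    positive and at least one strongly negative z-score."""
    return any(z > cut for z in zs) and any(z < -cut for z in zs)

# z-scores quoted in the text for genes G4..G8 (other entries omitted)
genes = {
    "G4": [7.7, 4.8, -4.9, -7.6],
    "G5": [6.5, -8.1],
    "G6": [7.9, 5.0, 4.0, -8.6, -8.9],
    "G7": [4.6, -5.6, -9.0],
    "G8": [5.3, -4.1, -4.8],
}
flags = {g: clearly_discordant(z) for g, z in genes.items()}
# Assuming the four unlisted genes are not discordant, the pathway
# proportion is 5/9, which exceeds the 30% expectation from the text.
prop = sum(flags.values()) / 9
```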
A paired two-sample microarray genome-wide expression data set has been collected for studying pancreatic cancer [23]. One advantage of this paired design is that we can focus on the expression ratio between tumor and non-tumor tissues for each gene. One related biological motivation is to use the genome-wide expression data set to understand molecular changes related to the change of expression ratio of gene PNLIP. In this study, more specifically, our interest is to identify pathways or gene sets showing clearly discordant behavior when the expression ratio of gene PNLIP changes. Understanding these molecular changes can help us further investigate the role of gene PNLIP and even the general disease mechanism of pancreatic cancer. Gene expression profiles are measured as continuous variables. However, if we can perform this analysis with a relatively simple method, then the results can be more interpretable. Therefore, our approach is to divide the microarray data set into a series of non-overlapping subsets according to the tumor/non-tumor paired expression ratio of gene PNLIP. The log-ratio ranges from a negative value (i.e. more expressed in non-tumor tissue) to a positive value (i.e. more expressed in tumor tissue). Our purpose is to understand whether any gene sets are enriched in discordant behaviors among these subsets (as the log-ratio increases from negative to positive). Notice that we only use the expression ratio of gene PNLIP to divide the study data set. We do not consider the expression profiles of other genes for data division. There is no analysis optimization in the data division, and this strategy avoids selection bias in our analysis. The number of study subjects in the microarray data set is adequate for us to divide the data set into many subsets (e.g. more than five), so that the biological changes can be better explored.
After dividing the study data set into K non-overlapping subsets, we can perform genome-wide differential expression analysis for each subset. Genes can be generally categorized as up-regulated (positively differentially expressed), down-regulated (negatively differentially expressed) or null (non-differentially expressed). Genes may show concordant behaviors or discordant behaviors among different subsets. For example, showing positive differential expression in all K subsets is clearly a concordant behavior, while showing negative differential expression in the first subset but positive differential expression in the last subset is clearly a discordant behavior. In a genome-wide differential expression analysis, we usually calculate the test scores based on a chosen statistic (e.g. t-test) to evaluate whether genes are differentially expressed or not. For simplicity, we choose the well-known two-sample t-test. A strong positive or negative differential expression would result in a clearly positive or negative test score. A non-differential expression would result in a test score close to zero, and the test score could be either positive or negative (but rarely zero exactly). Therefore, if a gene is concordantly differentially expressed (e.g. all up-regulated with clearly positive test scores) in some subsets but not differentially expressed (e.g. all null with slightly positive test scores) in the other subsets, then it can be statistically difficult to evaluate whether the gene has an overall discordant behavior. Hence, in this study, we focus on genes with some clearly discordant behaviors: up-regulated in at least one subset and down-regulated in at least one subset (to avoid the statistical difficulty mentioned above). We are interested in identifying pathways or gene sets enriched in clearly discordant behaviors. We focus on KEGG pathways.
The detected pathways will be useful for our further understanding of the role of gene PNLIP in pancreatic cancer research. Gene TP53 is well known for its role as a tumor suppressor in general cancer studies. Its log-ratio in the microarray data set also ranges from a negative value (i.e. more expressed in non-tumor tissue) to a positive value (i.e. more expressed in tumor tissue). We also divide the microarray data set according to the expression ratio of gene TP53 and repeat the discordance enrichment analysis. We consider the analysis result based on gene TP53 a useful comparison with the analysis result based on gene PNLIP. Multiple data sets In this study, we consider a detection of gene set enrichment in discordant behaviors (or discordance gene set enrichment) for a series of two-sample genome-wide expression data sets. The term "enrichment in discordant behaviors" will be mathematically defined later. Let K be the number of data sets and let m be the number of common genes among this series of data sets. Each data set is collected for two given groups (the same for all K data sets). In general, one group represents a normal status and the other group represents an abnormal status. For a single two-sample genome-wide expression data set, differential expression analysis and gene set enrichment analysis are usually conducted. The purpose of differential expression analysis is to identify genes showing significant up-regulation or down-regulation when two sample groups are compared. The purpose of gene set enrichment analysis is to identify pathways (or gene sets) showing coordinated up-regulation or down-regulation, which may be considered an extension of differential expression analysis. Therefore, the following gene behaviors are usually of our research interest in two-sample expression data analysis: positive change (or up-regulation), negative change (or down-regulation) and null (or non-differentially expressed).
However, these underlying behaviors are usually not observed, and expression data are collected to make statistical inference about them. Data pre-processing is important for both microarray and RNA-seq data and has been well discussed in the literature [24–26]. In our study, the data can be downloaded from a well-known public database. We assume that the gene expression profiles have been appropriately pre-processed. In an analysis of multiple expression data sets, it is usually necessary to focus on common genes, and gene identifiers can be useful for this purpose. In our study, we divide a relatively large data set into a series of non-overlapping subsets. Therefore, all the genes in the downloaded data are common. Many statistical tests have been proposed for analyzing a two-sample genome-wide expression data set [27, 28]. In this study, the traditional paired two-sample t-test is chosen for its simplicity (although other statistics could certainly be considered; see below). For each gene in each data set (or subset), we perform the t-test to obtain a t-score. Its p-value is evaluated based on the permutation procedure (randomly switching the tumor/non-tumor labels for each pair of tissues) so that the normal distribution assumption is not required for the paired-difference data. All the permuted t-scores are pooled together so that tiny p-values can be calculated [29]. One-sided upper-tailed p-values are calculated so that the direction of change can be distinguished for each gene in each data set. Let \(p_{i,k}\) be the p-value for the i-th gene in the k-th data set. z-scores are obtained by an inverse normal transformation $$z_{i,k} = \Phi^{-1}(1-p_{i,k}), $$ where Φ(·) is the cumulative distribution function (c.d.f.) of the standard normal distribution (mean zero and variance one), so that a strong up-regulation yields a large positive z-score. This transformation has been widely used [20] and our proposed multivariate normal mixture model will be applied to the transformed z-scores.
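The inverse normal transformation can be done with the Python standard library alone. This sketch assumes the upper-tailed convention described above, mapping small p-values (strong up-regulation) to large positive z-scores; the helper name is illustrative.

```python
from statistics import NormalDist  # standard library, Python 3.8+

_STD_NORMAL = NormalDist()  # mean 0, variance 1

def p_to_z(p_upper):
    """Map a one-sided upper-tailed p-value to a signed z-score.

    Small p (strong evidence of up-regulation)   -> large positive z.
    p near 1 (strong evidence of down-regulation) -> large negative z.
    """
    return _STD_NORMAL.inv_cdf(1.0 - p_upper)

p_to_z(0.025)  # roughly +1.96
p_to_z(0.975)  # roughly -1.96
```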
Discordance enrichment Our proposed method is a type of gene set enrichment analysis. As discussed by Lai et al. [15], we define "enrichment" as "the number of events of interest is larger than expected", and our "event of interest" in this study is "a list of clearly discordant behaviors" from a gene. If we know whether the expression profile of a gene is up-regulated (simplified as "up"), down-regulated (simplified as "down") or non-differentially expressed (simplified as "null") in a data set, then a list of concordant behaviors among K data sets for this gene could be (up, up, …, up), (down, down, …, down) or (null, null, …, null). In this study, we focus on a list with at least one "up" and at least one "down" among K data sets. For example, a list like (down, up, up, …, up) is an event of interest but a list like (null, up, up, …, up) is not. The reason is that "down" and "up" can be visually distinguished by the negative ("-") and positive ("+") signs of z-scores, respectively. However, zero z-scores are rarely observed. Therefore, it is less clear to distinguish "null" from "up" (or "null" from "down"). Based on the expression profiles, we obtain z-scores to make statistical inference about genes' behaviors in each data set. To evaluate "discordance enrichment" as defined above, we consider a mixture model approach that allows us to estimate the probability of a behavior ("up", "down" or "null") and the expected number of events of interest (notice that these are not directly observed in the data sets). Let S be the set of genes for a pathway (or gene set in general) and \(m_S\) the number of genes in S. If the i-th gene in S shows a list of clearly discordant behaviors, then we set an indicator variable \(U_{S,i}=1\); otherwise, we set \(U_{S,i}=0\).
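Given a fitted model, each gene contributes a probability that its indicator \(U_{S,i}\) equals 1, and the tail probability of the count \(\sum_i U_{S,i}\) can be computed exactly with a Poisson-binomial recursion. The sketch below is only an illustration under the assumption that the \(U_{S,i}\) are independent given the fit; the function names are made up.

```python
def poisson_binomial_pmf(probs):
    """Exact pmf of a sum of independent Bernoulli(p_i) variables,
    built by dynamic programming over the genes."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1.0 - p)      # gene is not discordant
            nxt[k + 1] += mass * p          # gene is discordant
        pmf = nxt
    return pmf

def discordance_enrichment_score(probs, theta):
    """Pr(sum_i U_i > m_S * theta) for U_i ~ Bernoulli(p_i),
    assumed independent given the fitted mixture model."""
    pmf = poisson_binomial_pmf(probs)
    cutoff = len(probs) * theta             # m_S * theta
    return sum(mass for k, mass in enumerate(pmf) if k > cutoff)
```

For instance, with four genes each having discordance probability 0.5 and theta = 0.5, the score is Pr(sum > 2) for a Binomial(4, 0.5) count, i.e. 5/16.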
Then, we can calculate the discordance enrichment score (DES) for gene set S, which is a probability defined as $$DES_{S} = \mathbf{Pr}\left(\sum_{i=1}^{m_{S}} U_{S,i} > m_{S} \theta\right), $$ in which θ is the proportion of genes with clearly discordant behaviors. In our mixture model, we use normal distributions to model the z-scores. A novel contribution is that the parameter space of our model increases linearly when the number of data sets is increased. This is due to the two-level structure of our model. (The parameter space of a general model for this analysis increases exponentially when the number of data sets is increased.) For each gene in each data set, we consider three normal distribution components that represent up-regulation (positive distribution mean), down-regulation (negative distribution mean) and null (zero mean). (Theoretically, p-values under the null hypothesis are uniformly distributed. Therefore, z-scores under the null hypothesis are normally distributed with mean zero and variance one.) The mathematical details are described below. A two-level mixture model First, we describe the basic model structure for just one data set. Then, we introduce our novel two-level mixture model. A simple three-component normal distribution mixture model [30, 31] is considered for each z-score \(z_{i,k}\) (the i-th gene in the k-th data set, \(i=1,2,\ldots,m\) and \(k=1,2,\ldots,K\)): $$f(z_{i,k}) = \sum_{j_{k}=0}^{2} \rho_{j_{k},k} \phi_{\mu_{j_{k},k}, \sigma^{2}_{j_{k},k}}(z_{i,k}). $$ In the above model, \(\phi _{\mu, \sigma ^{2}}(\cdot)\) is the probability density function (p.d.f.) of a normal distribution with mean \(\mu\) and variance \(\sigma^2\). The three components represent up-regulation with \(\mu_{1,k}>0\), down-regulation with \(\mu_{2,k}<0\) and null with \(\mu_{0,k}=0\) (also recall that \(\sigma ^{2}_{0,k}=1\)). For this model, an assumption is that the p.d.f.
of \(z_{i,k}\) is simply \(\phi _{\mu _{j_{k},k}, \sigma ^{2}_{j_{k},k}}(z_{i,k})\) if we know the underlying component information \(j_k\) for the i-th gene in the k-th data set. However, the component information is usually not observed in practice. Then, we have this one-dimensional mixture model after the introduction of component proportion parameters \(\left \{ \rho _{j_{k},k}, j_{k}=0,1,2 \right \}\) for the k-th data set. When we extend the above mixture model to a higher dimension (i.e. K data sets), without a structure consideration, the parameter space increases exponentially due to the \(3^K\) different component combinations (3 components in each of K data sets). Therefore, when K is not a small number (e.g. \(K>4\)), we need a more efficient model [15]. Biologically, when different data sets are collected for the same or similar research purpose, some genes are likely to show consistent behaviors across different data sets and some genes are likely to show different behaviors. For genes likely showing consistent behaviors across K data sets, we consider a complete concordance (CC) multivariate model to approximate the distribution of \(\{z_{i,k}, k=1,2,\ldots,K\}\). For genes likely showing different behaviors across K data sets, we consider a complete independence (CI) multivariate model to approximate the distribution of \(\{z_{i,k}, k=1,2,\ldots,K\}\). (Notice that there is no overlap among multiple data sets. If the component information among these data sets is known, then the z-scores are independent.) We first describe the CI model and CC model as follows. The CI model assumes that the behaviors of the i-th gene are independent across different data sets. Therefore, we have the following mixture model: $$f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) = \prod_{k=1}^{K}\left[\sum_{j_{k}=0}^{2} \rho_{j_{k},k} \phi_{\mu_{j_{k},k},\sigma^{2}_{j_{k},k}}(z_{i,k})\right]. $$ This model is simply a product of K one-dimensional three-component mixture models.
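The CI density just defined is straightforward to evaluate numerically. A hedged sketch for one gene's z-score vector follows; the parameter layout (lists indexed by data set k and component j, with j = 0 null, 1 up, 2 down) and the function names are illustrative choices, not part of the paper's notation.

```python
from math import exp, pi, sqrt

def norm_pdf(z, mu, var):
    """Normal density phi_{mu, sigma^2}(z)."""
    return exp(-(z - mu) ** 2 / (2.0 * var)) / sqrt(2.0 * pi * var)

def f_ci(z, rho, mu, var):
    """Complete-independence (CI) density of one gene's z-score vector.

    z   : length-K list of z-scores, one per data set
    rho : rho[k][j], component proportions (j = 0 null, 1 up, 2 down)
    mu  : mu[k][j], component means (mu[k][0] = 0 for the null component)
    var : var[k][j], component variances (var[k][0] = 1 for the null)
    """
    dens = 1.0
    for k, zk in enumerate(z):
        dens *= sum(rho[k][j] * norm_pdf(zk, mu[k][j], var[k][j])
                    for j in range(3))
    return dens
```

With K = 1 and all mass on the null component, the density at z = 0 reduces to the standard normal density 1/sqrt(2*pi).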
The CC model assumes that the behaviors of the i-th gene are the same across different data sets. Although the component information is unknown, the components for different data sets must be consistent. Therefore, we have the following mixture model: $$f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})=\sum_{j=0}^{2}\left[\pi_{j} \prod_{k=1}^{K}\phi_{\mu_{j,k},\sigma_{j,k}^{2}}(z_{i,k})\right]. $$ This model has three components and each component is a product of K normal probability density functions. In practice, it is unknown whether the i-th gene is showing independent or consistent behaviors. Therefore, we consider CI and CC as two high-level components and propose the following two-level model for {z i,k ,k=1,2,…,K}: $$\begin{aligned} f(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) &= \lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) \\ & \quad+ (1-\lambda) f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}). \end{aligned} $$ Notice that this two-level model is still a mixture model. We further assume that \(\{ \mu _{j_{k},k}, \sigma _{j_{k},k}^{2}, j_{k}=0,1,2, k=1,2,\ldots,K \}\) are shared by both CI and CC models. It is evident that the model parameter space increases linearly when the number of data sets (K) increases. We can use the well-established Expectation-Maximization (E-M) algorithm [19] for parameter estimation. First, it is necessary to introduce some indicator variables (for component information) for the z-scores {z i,k ,k=1,2,…,K} of the i-th gene. Then, we describe the E-step and M-step. For high-level component information, $${}\begin{aligned} \omega_{i}\,=\,\left\{\!\ \begin{array}{ll} 1 &\text{if gene's behaviors are consistent with CC model;}\\ 0 &\text{if gene's behaviors are consistent with CI model.} \end{array} \right. \end{aligned} $$ For CI model component information, $${}\begin{aligned} \eta_{i,j_{k},k}=\left\{ \begin{array}{ll} 1 &\text{if \(z_{i,k}\) is sampled from the \(j_{k}\)-th component;}\\ 0 &\text{otherwise.} \end{array} \right. 
\end{aligned} $$ For CC model component information, $${}{\begin{aligned} \xi_{i,j}=\left\{ \begin{array}{ll} 1 &\text{if all}\, \{ z_{i,k}, k=1,2,\ldots,K \}\, \text{are sampled from the}\, j\text{-th component;}\\ 0 &\text{otherwise.} \end{array} \right. \end{aligned}} $$ The E-step is the calculation of the following expected values when all the parameter values are given. $${}{\begin{aligned} {\mathrm{E}}(\omega_{i}) = \frac{\lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})}{\lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) + (1-\lambda) f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})}, \end{aligned}} $$ $${}{\begin{aligned} {\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k}) = \frac{(1-\lambda)\rho_{j_{k},k}\phi_{\mu_{j_{k},k},\sigma_{j_{k},k}^{2}}(z_{i,k}) \prod_{h=1,h\neq k}^{K}\sum_{j_{h}=0}^{2}\rho_{j_{h},h}\phi_{\mu_{j_{h},h},\sigma_{j_{h},h}^{2}}(z_{i,h})}{\lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) + (1-\lambda) f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})}, \end{aligned}} $$ $${}\begin{aligned} {\mathrm{E}}(\omega_{i} \xi_{i,j}) = \frac{\lambda \pi_{j} \prod_{k=1}^{K} \phi_{\mu_{j,k},\sigma_{j,k}^{2}}(z_{i,k})}{\lambda f_{CC}(z_{i,1}, z_{i,2}, \ldots, z_{i,K}) +
(1-\lambda) f_{CI}(z_{i,1}, z_{i,2}, \ldots, z_{i,K})}. \end{aligned} $$ The M-step is the calculation of the following parameter values when all the component information is given: $$\begin{array}{@{}rcl@{}} \hat{\lambda}&=&\frac{1}{m}\sum_{i=1}^{m}{\mathrm{E}}(\omega_{i}), \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{\rho}_{j_{k},k}&=&\frac{\sum_{i=1}^{m}{\mathrm{E}}\left((1-\omega_{i}) \eta_{i,j_{k},k}\right)}{\sum_{i=1}^{m}\sum_{j_{h}=0}^{2}{\mathrm{E}} \left((1-\omega_{i}) \eta_{i,j_{h},k}\right)}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{\pi}_{j}&=&\frac{\sum_{i=1}^{m}{\mathrm{E}}(\omega_{i} \xi_{i,j})}{\sum_{i=1}^{m}\sum_{h=0}^{2}{\mathrm{E}}(\omega_{i} \xi_{i,h})}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{\mu}_{j_{k},k}&=&\frac{\sum_{i=1}^{m}\left[{\mathrm{E}}(\omega_{i} \xi_{i,j_{k}})+{\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k})\right] z_{i,k}}{\sum_{i=1}^{m}[{\mathrm{E}}(\omega_{i} \xi_{i,j_{k}})+{\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k})]}, \end{array} $$ $$\begin{array}{@{}rcl@{}} \hat{\sigma}^{2}_{j_{k},k}&=&\frac{\sum_{i=1}^{m}[{\mathrm{E}}(\omega_{i} \xi_{i,j_{k}})+{\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k})] (z_{i,k} - \hat{\mu}_{j_{k},k})^{2}}{\sum_{i=1}^{m}[{\mathrm{E}}(\omega_{i} \xi_{i,j_{k}})+{\mathrm{E}}((1-\omega_{i}) \eta_{i,j_{k},k})]}. \end{array} $$ The E-step and M-step are iterated until numerical convergence is achieved. In this study, numerical convergence is defined as the absolute difference between the current log-likelihood and the previous one being within a given tolerance value (e.g. 10^{-4}). Enrichment score As we have discussed in Discordance enrichment, in this study, we focus on genes' behaviors with at least one up-regulation and at least one down-regulation among K data sets (our event of interest: a gene with clearly discordant behaviors). However, we do not need to enumerate all these combinations (among 3^K in total). The related computing can be simplified if we enumerate the complement events instead.
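The E-step and M-step above can be combined into a single vectorized update. The following is a minimal NumPy sketch of one EM iteration for the two-level model; the (3, K) parameter layout and all names are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def norm_pdf(x, mu, var):
    """Normal density phi_{mu, var}(x), vectorized over arrays."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def em_step(Z, lam, pi, rho, mu, var):
    """One EM iteration for the two-level mixture model.

    Z   : (m, K) z-scores for m genes and K data sets
    lam : scalar, proportion of the CC (complete concordance) component
    pi  : (3,)   CC component proportions pi_j
    rho : (3, K) CI component proportions rho_{j,k}
    mu, var : (3, K) component means and variances shared by CC and CI
    """
    dens = norm_pdf(Z[:, None, :], mu, var)              # (m, 3, K)
    cc_prod = np.prod(dens, axis=2)                      # (m, 3): product over k
    f_cc = (pi * cc_prod).sum(axis=1)                    # (m,)
    mix = (rho * dens).sum(axis=1)                       # (m, K): 3-component mixtures
    f_ci = np.prod(mix, axis=1)                          # (m,)
    f = lam * f_cc + (1.0 - lam) * f_ci                  # two-level density
    # E-step: expected indicator values (responsibilities)
    w = lam * f_cc / f                                   # E(omega_i)
    xi = lam * pi * cc_prod / f[:, None]                 # E(omega_i xi_{i,j}), (m, 3)
    loo = f_ci[:, None] / mix                            # product over h != k, (m, K)
    eta = (1.0 - lam) * rho * dens * loo[:, None, :] / f[:, None, None]  # (m, 3, K)
    # M-step: closed-form parameter updates
    lam_new = w.mean()
    xi_sum = xi.sum(axis=0)
    pi_new = xi_sum / xi_sum.sum()
    eta_sum = eta.sum(axis=0)                            # (3, K)
    rho_new = eta_sum / eta_sum.sum(axis=0, keepdims=True)
    resp = xi[:, :, None] + eta                          # combined responsibilities
    denom = resp.sum(axis=0)
    mu_new = (resp * Z[:, None, :]).sum(axis=0) / denom
    var_new = (resp * (Z[:, None, :] - mu_new) ** 2).sum(axis=0) / denom
    return lam_new, pi_new, rho_new, mu_new, var_new
```

The leave-one-data-set-out product in the η update is obtained as f_CI divided by the k-th mixture term, which avoids an explicit loop over data sets.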
There are three combinations for complete concordance: (up, up,..., up), (down, down,..., down) and (null, null,..., null). They will be excluded. There are \(\sum _{l=1}^{K-1}{K \choose l}\) combinations with both "null" and "up" (without "down") and there are \(\sum _{l=1}^{K-1}{K \choose l}\) combinations with both "null" and "down" (without "up"). They will also be excluded. Then, the remaining combinations are our events of interest (at least one "up" and at least one "down"). According to the above computing strategy, based on the two-level mixture model, the related proportion (θ) of genes with clearly discordant behaviors (also see "Discordance enrichment" for more details) can be calculated as follows. $${}\begin{aligned} \theta = (1-\lambda)\left(1 - \sum_{j=0}^{2} \prod_{k=1}^{K} \rho_{j,k} - \sum_{\{j_{k}\} \in A} \prod_{k=1}^{K} \rho_{j_{k},k} - \sum_{\{j_{k}\} \in B} \prod_{k=1}^{K} \rho_{j_{k},k} \right), \end{aligned} $$ where A is the set of lists with a mix of 0's and 2's, and B is the set of lists with a mix of 0's and 1's. Let S be a gene set with m_S genes. As defined in Discordance enrichment, let the indicator variable U_{S,i}=1 if the i-th gene in S is showing a list of clearly discordant behaviors, and U_{S,i}=0 otherwise. Then, based on the two-level mixture model, the related probability can be calculated as follows. $${}\begin{aligned} \mathbf{Pr}(U_{S,i}=1) &= (1-\lambda)[ f_{CI}(z_{S,i,1}, z_{S,i,2}, \ldots, z_{S,i,K}) \\ & \quad- \sum_{j=0}^{2} \prod_{k=1}^{K} \rho_{j,k} \phi_{\mu_{j,k},\sigma^{2}_{j,k}}(z_{S,i,k}) \\ & \quad- \sum_{\{j_{k}\} \in A \cup B} \prod_{k=1}^{K} \rho_{j_{k},k} \phi_{\mu_{j_{k},k},\sigma^{2}_{j_{k},k}}(z_{S,i,k}) ] \\ & \quad / f(z_{S,i,1}, z_{S,i,2}, \ldots, z_{S,i,K}), \end{aligned} $$ where (z_{S,i,1}, z_{S,i,2}, …, z_{S,i,K}) are the related z-scores. Let ζ_{S,i} = Pr(U_{S,i}=1), which is a conditional probability according to the given model and observed data.
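The exclusion argument above is equivalent to inclusion-exclusion on "no up" and "no down". The following sketch cross-checks the closed form against direct enumeration of all 3^K combinations, assuming component code 0 is "null" and codes 1 and 2 are the two regulation directions (consistent with how sets A and B are defined); the function names are our own.

```python
import numpy as np
from itertools import product

def theta_enumerate(lam, rho):
    """Direct enumeration over all 3^K component combinations.
    rho: (3, K) CI proportions; codes assumed 0 = null, 1 = down, 2 = up."""
    K = rho.shape[1]
    total = 0.0
    for combo in product(range(3), repeat=K):
        if 1 in combo and 2 in combo:          # at least one down AND at least one up
            total += np.prod([rho[j, k] for k, j in enumerate(combo)])
    return (1.0 - lam) * total

def theta_closed_form(lam, rho):
    """Inclusion-exclusion: 1 - P(no up) - P(no down) + P(no up and no down)."""
    no_up = np.prod(rho[0] + rho[1])
    no_down = np.prod(rho[0] + rho[2])
    neither = np.prod(rho[0])
    return (1.0 - lam) * (1.0 - no_up - no_down + neither)
```

The closed form avoids the 3^K enumeration entirely, which matters once K grows beyond a handful of data sets.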
Under the assumption that z-scores from different genes are independent, the discordance enrichment score (DES) for gene set S, which has been defined in Discordance enrichment as \(DES_{S} = \mathbf {Pr}\left (\sum _{i=1}^{m_{S}} U_{S,i} > m_{S} \theta \right)\), can be calculated as follows. $$\begin{aligned} DES_{S} &= \sum_{U_{S,1}=0}^{1} \sum_{U_{S,2}=0}^{1} \cdots \sum_{U_{S,m_{S}}=0}^{1} \left[I\left(\sum_{i=1}^{m_{S}} U_{S,i}\right.\right.\\ & \left.\left. \quad > m_{S} \theta{\vphantom{\sum_{0}^{0}}}\right) \prod_{i=1}^{m_{S}} \zeta_{S,i}^{U_{S,i}} (1-\zeta_{S,i})^{1-U_{S,i}} \right], \end{aligned} $$ where I(true statement)=1 and I(false statement)=0 (indicator function). Since {ζ_{S,i}, i=1,2,…,m_S} are usually different for different genes, the above formula is a calculation of a tail probability for a heterogeneous Bernoulli process. The related computing issue and the related false discovery rate have already been discussed by Lai et al. [15]. Therefore, we only describe them briefly below. False discovery rate As discussed in the literature [15, 20], the above enrichment score is a conditional probability and a true positive proportion for gene set S. Therefore, the related false discovery rate [6, 18] for the top T gene sets {S_1, S_2, …, S_T} identified by the above DES can be conveniently derived as follows. $$FDR = 1 - \sum_{t=1}^{T} DES_{S_{t}}/T. $$ Computational approximation As discussed in Lai et al. [15], the exact calculation of DES can be difficult due to the complexity of the heterogeneous Bernoulli process. A Monte Carlo approximation has been suggested as follows. First, set an integer variable X=0. For the i-th gene in S, simulate a Bernoulli random variable with event probability ζ_{S,i}. Then, count the number of events from all genes in S, and increase X by one if this number is larger than m_S θ. Repeat the simulation and counting B times and report X/B as the approximated DES. B=2000 was suggested by Lai et al. [15].
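The X/B procedure just described, together with the FDR formula above, can be sketched as follows; apart from the suggested default B=2000, the names and defaults are our own choices.

```python
import numpy as np

def des_monte_carlo(zeta, theta, B=2000, seed=0):
    """Monte Carlo approximation of DES_S = Pr(sum_i U_{S,i} > m_S * theta)
    for a heterogeneous Bernoulli process with event probabilities zeta."""
    rng = np.random.default_rng(seed)
    zeta = np.asarray(zeta, dtype=float)
    m_s = zeta.size
    # each row is one replicate of the m_S Bernoulli draws
    counts = (rng.random((B, m_s)) < zeta).sum(axis=1)
    return float(np.mean(counts > m_s * theta))

def fdr_top(des_values):
    """FDR for the reported top T gene sets: 1 - average DES of those sets."""
    return 1.0 - float(np.mean(des_values))
```

Since each DES is itself a true positive proportion, averaging the reported scores and subtracting from one gives the FDR directly, without permutation.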
Genome-wide expression data and KEGG pathway collection Zhang et al. [23] recently conducted a genome-wide expression study for forty-five matched pairs of pancreatic tumor and adjacent non-tumor tissues. The data were collected by the microarray technology (Affymetrix GeneChip Human Gene 1.0 ST arrays) and were made publicly available in the NCBI GEO database [23]. The collections of gene sets or pathways can be downloaded from the Molecular Signature Database [7, 8]. At the time of study, the collections had been updated to version 4.0. In this study, we focus on 186 KEGG pathways for our data analysis. There are 28677 genes available for our discordance enrichment analysis. As we have explained in the Methods, we expect to identify pathways with enrichment in clearly discordant gene behaviors among a series of pre-defined genome-wide expression data sets. (Notice that a pathway with DES∼1 is significantly enriched in clearly discordant behaviors; and a pathway with DES∼0 is evidently not enriched in clearly discordant behaviors). Data division based on gene PNLIP The hierarchical clustering tree (with Euclidean distance and the "median" agglomeration method) for the log2-transformed ratio values of gene PNLIP is included in Fig. 1 a. Several major clusters of subjects can be generated if we cut the tree at 0.15. After including the isolated subjects into their nearby clusters, we can obtain seven clusters (subgroups of tumor/non-tumor pairs). Therefore, seven subsets of genome-wide expression data were defined accordingly with sample sizes 7+7, 7+7, 6+6, 4+4, 6+6, 9+9, and 6+6 (see Fig. 2 a). Figure 3 b shows the paired expression ratio values of gene PNLIP [log2-transformation applied here for the convenience of visualization of up-regulation (positive sign) or down-regulation (negative sign)]. Figure 3 a shows the individual expression values for gene PNLIP in different subsets. Notice that, from Fig.
3 b, subset 1 represents a clear down-regulation of gene PNLIP, and subsets 6 and 7 represent null behavior and up-regulation of gene PNLIP, respectively. Hierarchical clustering for data division. a Tree of paired-ratio values (log2-transformed) of gene PNLIP. b Tree of paired-ratio values (log2-transformed) of gene TP53 Comparison of expression and paired-ratio between gene TP53 vs. gene PNLIP. a Comparison of paired-ratio values (log2-transformed). Gray dotted lines represent the cutoff values for defining subsets. b Comparison of expression values for non-tumor tissues. c Comparison of expression values for tumor tissues Expression and paired-ratio of gene PNLIP. a Expression values for tissues in seven subsets (gray color represents non-tumor and dark color represents tumor). b Paired-ratio values (log2-transformed) in seven subsets (gray dotted vertical lines for their separation) z-Scores based on gene PNLIP Figure 4 shows pair-wise scatterplots comparing z-scores from the seven subsets defined by the paired-ratio of gene PNLIP. Most scatterplots for adjacent or close-to-adjacent subsets are showing a relatively regular positive correlation pattern (implying overall consistent gene behaviors). The scatterplots for far-from-adjacent subsets are mostly showing an irregular weak correlation pattern (implying a considerable amount of inconsistent gene behaviors). As mentioned above, subsets 1, 6 and 7 are representative for down-regulation, null and up-regulation of gene PNLIP, respectively. It is clear that the scatterplot for subsets 7 vs. 1 is showing the most irregular pattern, which implies that many genes have clearly discordant behaviors when gene PNLIP changes its behavior from down-regulation to up-regulation. z-score comparison (gene PNLIP).
Pair-wise scatterplots for comparing z-scores from seven subsets defined by the paired-ratio of gene PNLIP Significant pathways based on gene PNLIP Table 2 lists the significant KEGG pathways identified by the discordance enrichment analysis (with DES>0.80, also the related maximum FDR<0.05). Among these eleven pathways, there are the neuroactive ligand receptor interaction, olfactory transduction, alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways. The literature support for the association between pancreatic cancer and each of these pathways will be discussed later. For the olfactory transduction and neuroactive ligand receptor interaction pathways, Fig. 5 shows their z-score pattern changes when all the adjacent subsets are compared pair-wise and three representative subsets (1, 6, 7, see above for their details) are also compared pair-wise. For the pairs of subsets 2 vs. 1 and 3 vs. 2, concordant behaviors can be overall observed for the genes in these two pathways. Discordant behaviors can be overall observed for the pairs 6 vs. 5, 7 vs. 6, 6 vs. 1 and 7 vs. 1. Particularly for the pair 7 vs. 1 (up-regulation vs. down-regulation for gene PNLIP), the genes in the olfactory transduction pathway are mostly down-regulated in subset 1 but evenly up-regulated or down-regulated in subset 7, and the genes in the neuroactive ligand receptor interaction pathway are almost evenly up-regulated or down-regulated in both subsets. z-score comparison (gene TP53). Pair-wise scatterplots for comparing z-scores from six subsets defined by the paired-ratio of gene TP53 Table 2 Pathways identified by the discordance enrichment analysis Data division based on gene TP53 The hierarchical clustering tree (with Euclidean distance and the "median" agglomeration method) for the log2-transformed ratio values of gene TP53 is included in Fig. 1 b. Several major clusters of subjects can be generated if we cut the tree at 0.03.
After including these isolated subjects into their nearest clusters, we can obtain six clusters (subgroups of tumor/non-tumor pairs). Therefore, six subsets of genome-wide expression data were defined accordingly with sample sizes 4+4, 7+7, 6+6, 13+13, 10+10, and 5+5 (see Fig. 2 a). Figure 6 b shows the paired expression ratio values of gene TP53 [log2-transformation applied here for the convenience of visualization of up-regulation (positive sign) or down-regulation (negative sign)]. Figure 6 a shows the individual expression values for gene TP53 in different subsets. Notice that, from Fig. 6 b, subset 1 represents a clear down-regulation of gene TP53, and subsets 3 and 6 represent null behavior and up-regulation of gene TP53, respectively. Expression and paired-ratio of gene TP53. a Expression values for tissues in six subsets (gray color represents non-tumor and dark color represents tumor). b Paired-ratio values (log2-transformed) in six subsets (gray dotted vertical lines for their separation) z-Scores based on gene TP53 Figure 7 shows pair-wise scatterplots comparing z-scores from the six subsets defined by the paired-ratio of gene TP53. Many scatterplots for adjacent or close-to-adjacent subsets are still showing a relatively regular positive correlation pattern (implying overall consistent gene behaviors). Almost all the scatterplots for far-from-adjacent subsets are showing an irregular weak correlation pattern (implying a considerable amount of inconsistent gene behaviors). As mentioned above, subsets 1, 3 and 6 are representative for down-regulation, null and up-regulation of gene TP53, respectively. All the pair-wise scatterplots for these three subsets are showing irregular patterns (with the scatterplot for subsets 6 vs. 1 the most irregular), which implies that many genes have clearly discordant behaviors when gene TP53 changes its behavior from down-regulation to null, and then to up-regulation.
z-scores in two most significantly detected pathways (gene PNLIP). Pair-wise scatterplots for comparing z-scores in the given pathway (dark color) and out of the given pathway (gray color). All the adjacent subsets are compared pair-wise (e.g. 2 vs. 1, 3 vs. 2, 4 vs. 3, 5 vs. 4, 6 vs. 5 and 7 vs. 6) and three representative subsets (1 for down-regulation, 6 for null, and 7 for up-regulation) are also compared pair-wise (7 vs. 6 already shown, then 6 vs. 1 and 7 vs. 1). The order of scatterplots is shown as (a-p) Significant pathways based on gene TP53 Table 2 lists the significant KEGG pathways identified by the discordance enrichment analysis (with DES>0.80, also the related maximum FDR<0.10). Among these five pathways, there are the neuroactive ligand receptor interaction, olfactory transduction, alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways (which have been identified above by the analysis based on gene PNLIP). For the olfactory transduction and neuroactive ligand receptor interaction pathways, Fig. 8 shows their z-score pattern changes when all the adjacent subsets are compared pair-wise and three representative subsets (1, 3, 6, see above for their details) are also compared pair-wise. For the pairs of subsets 6 vs. 5, 5 vs. 4 and 4 vs. 3, concordant behaviors can be overall observed for the genes in these two pathways. Discordant behaviors can be overall observed for the pairs 2 vs. 1, 3 vs. 2, 3 vs. 1, 6 vs. 1 and 6 vs. 3. Particularly for the pair 6 vs. 1 (up-regulation vs. down-regulation for gene TP53), the genes in the olfactory transduction pathway are mostly down-regulated in subset 6 but evenly up-regulated or down-regulated in subset 1, and the genes in the neuroactive ligand receptor interaction pathway are somewhat evenly up-regulated or down-regulated in both subsets. z-scores in two most significantly detected pathways (gene TP53).
Pair-wise scatterplots for comparing z-scores in the given pathway (dark color) and out of the given pathway (gray color). All the adjacent subsets are compared pair-wise (e.g. 2 vs. 1, 3 vs. 2, 4 vs. 3, 5 vs. 4, and 6 vs. 5) and three representative subsets (1 for down-regulation, 3 for null, and 6 for up-regulation) are also compared pair-wise (3 vs. 1, 6 vs. 1 and 6 vs. 3). The order of scatterplots is shown as (a-p) Literature support We have conducted a discordance enrichment analysis based on gene PNLIP and a discordance enrichment analysis based on gene TP53. Between the two lists of identified pathways, four are in common: the neuroactive ligand receptor interaction, olfactory transduction, alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways (see Table 2). To further understand these pathways, we have checked the related biomedical literature. The genome-wide expression data analyzed in this study were collected based on the microarray technology for RNA profiling. Genome-wide association study (GWAS) data have also been collected for pancreatic cancer research based on the microarray technology for DNA profiling (single nucleotide polymorphism, or SNP). Wei et al. [32] recently conducted a pathway analysis of a large GWAS data set on pancreatic cancer. They reported only two pathways. Interestingly, these two pathways are the neuroactive ligand receptor interaction and olfactory transduction pathways (the top two identified in both of our analysis results; see above for details). Notice that their findings were based on a different type of molecular data. This strongly supports the discordance enrichment analysis results. We also found at least one supporting study for the alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways. Wenger et al.
[33] conducted a study on the roles of alpha-linolenic acid (ALA) and linoleic acid (LA) on pancreatic cancer and they observed an association between the disease and these two fatty acids. Insignificant pathways Figure 9 shows 186 DES based on PNLIP vs. DES based on TP53. These two lists of DES's are highly correlated (Spearman's rank correlation 0.642), although some pathways identified in the analysis results based on PNLIP are not significant in the analysis results based on TP53. Only a small number of pathways were identified by the discordance enrichment analysis. The histograms in the figure show that most pathways are showing insignificant DES's. For each of the two analysis results, there are more than 140 pathways (among 186) with DES<0.05. The number of pathways with both DES<0.01 or both DES<0.05 is 111 (60%) or 138 (74%), respectively. For both DES<0.25, <0.5 or <0.75, there are 154 (83%), 164 (88%) or 173 (93%) pathways, respectively. Therefore, most pathways are evidently not enriched in clearly discordant behaviors among the series of subsets defined by the paired expression ratio of gene PNLIP; neither are they among the series of subsets defined by the paired expression ratio of gene TP53. Many disease related pathways have been listed by KEGG (http://www.genome.jp/kegg/pathway.html). The collection of pancreatic cancer related pathways (or KEGG pancreatic cancer) and the collection of cancer related pathways (or KEGG pathways in cancer) are not enriched in either analysis result (DES<0.001). Among the pathway components of these two collections (e.g.
cell cycle pathway, apoptosis pathway, etc.), the highest DES value is <0.01 for the PPAR signaling pathway from the analysis results based on PNLIP, and the highest DES value is <0.05 for the cytokine-cytokine receptor interaction pathway from the analysis results based on TP53. Pathways like hedgehog signaling, proteasome, and primary immunodeficiency are also showing low DES values (all <0.05). Comparison of DES between gene TP53 vs. gene PNLIP. (left, lower) Scatterplot of DES based on gene TP53 vs. DES based on gene PNLIP; notice that there are overlapping dots in the scatterplot. (left, upper) Histogram of DES based on gene TP53. (right, lower) Histogram of DES based on gene PNLIP Expression profiles of PNLIP vs. TP53 PNLIP is a gene whose association with pancreatic cancer was shown recently [22]. TP53 is a well-known tumor suppressor gene. From the above comparison, it is interesting that the discordance enrichment analysis results based on PNLIP are highly correlated with the discordance enrichment analysis results based on TP53. To further understand this correlation, we compared the expression profile of PNLIP with the expression profile of TP53. Figure 2 a shows a relatively weak negative correlation (Spearman's rank correlation -0.250) between the two lists of paired-ratios but the correlation is not statistically significant (p-value=0.098). In the non-tumor group (Fig. 2 b), the negative correlation (Spearman's rank correlation -0.318) achieves a p-value of 0.033. In the tumor group (Fig. 2 c), the negative correlation (Spearman's rank correlation -0.276) is again not statistically significant (p-value=0.066). Furthermore, the ratio cutoff values for defining subsets were added to Fig. 2 a. A contingency table can be generated according to these grids (for example, the cell count is one for row one and column one in the table). The chi-square test for this sparse contingency table is not statistically significant (simulation-based p-value >0.3).
Therefore, in summary, gene PNLIP may be negatively associated with gene TP53, but no clear statistical significance has been observed in this study. Comparison to gene set analysis Efron and Tibshirani [34] have proposed a gene set analysis (GSA) method for analyzing enrichment in pathways (or gene sets). It was suggested by Maciejewski [35] that this method is preferred in a gene set enrichment analysis. In some situations of integrative data analysis, different data sets cannot be simply pooled together. For each data set, the p-value of enrichment in up-regulation can be obtained for each gene set. To integrate the p-values from multiple data sets (for the same gene set), we can consider Fisher's method (Fisher's combined probability test). The log-transformed p-values are summed and then multiplied by -2; under the null hypothesis, this statistic is well known to follow a chi-squared distribution (with degrees of freedom twice the number of combined p-values). In this way, we can perform an integrative gene set enrichment analysis of multiple data sets (when different data sets cannot be pooled together). Gene sets (or pathways) can be ranked by their chi-squared p-values. (Similarly, the p-value of enrichment in down-regulation can also be obtained by GSA for each gene set and each data set. Then, the related chi-squared p-values can be calculated by Fisher's method.) Notice that our analysis purpose is to detect discordance enrichment among multiple data sets. However, the discordance feature is usually not considered in a traditional integrative analysis. In this study, our analysis results were based on several subsets divided from a genome-wide expression data set with a relatively large sample size. These subsets could be pooled back (into the original large data set). Therefore, we applied GSA to the original data (so that we could take advantage of its relatively large sample size).
However, after considering the adjustment for multiple hypothesis testing, no pathways (or gene sets) could be identified even at the false discovery rate 0.3 (or FDR<30%). (Therefore, the details of the GSA results are not reported.) An application to The Cancer Genome Atlas (TCGA) data sets For a further illustration of our method, we performed a discordance enrichment analysis of the RNA sequencing (RNA-seq) data collected by The Cancer Genome Atlas (TCGA) project [3]. At the time of study, with the consideration of adequate numbers of normal/tumor subjects, we selected the RNA-seq data for studying prostate adenocarcinoma (PRAD), colon adenocarcinoma (COAD), stomach adenocarcinoma (STAD), head and neck squamous cell carcinoma (HNSC), thyroid carcinoma (THCA) and liver hepatocellular carcinoma (LIHC). Among these different types of diseases, we expected a certain level of dissimilarity in genome-wide expression profiles. Therefore, we applied our method to these six TCGA RNA-seq data sets (and our proposed two-level mixture model was useful for reducing the number of model parameters). Gene expression profiles for more than 20,000 common genes were available for our analysis. Among 186 KEGG pathways, we report the analysis results for a collection of cancer related pathways. There are sixteen of these pathways in KEGG but fourteen of them are available in the Molecular Signatures Database [7, 8]. In Table 3, the discordance enrichment analysis results are also compared to the results based on GSA-based Fisher's method (see Comparison to Gene Set Analysis for details). However, it is important to emphasize that the detection of discordance enrichment is our focus in this study and the feature of discordance is usually not considered in a traditional integrative analysis (e.g. Fisher's method).
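The Fisher combination step used in the GSA-based comparison can be sketched as follows. Because the statistic −2Σ log p has an even number of degrees of freedom (2K for K combined p-values), the chi-squared tail probability has a simple closed form and no statistics library is needed; the function name is our own illustrative choice.

```python
import math

def fisher_combine(pvalues):
    """Fisher's combined probability test.

    Under the null hypothesis, -2 * sum(log p) follows a chi-squared
    distribution with 2*len(pvalues) degrees of freedom. For even d.f. 2n,
    the tail probability is exp(-x) * sum_{k=0}^{n-1} x^k / k! with x = stat/2.
    """
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    n = len(pvalues)                       # degrees of freedom = 2n
    x = stat / 2.0
    return math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(n))
```

As a sanity check, combining a single p-value returns that p-value unchanged, and combining several small p-values yields a smaller combined p-value.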
Table 3 A comparison study Table 3 shows the comparison of our discordance enrichment scores (DES) to the p-values calculated by GSA-based Fisher's method (up-regulation or down-regulation). (A lower p-value indicates a more significant result, whereas a higher DES indicates a more significant result.) The p53 signaling pathway, cell cycle pathway, and PPAR signaling pathway are three pathways with significant GSA-Fisher p-values. For the p53 signaling pathway and cell cycle pathway, their DES values suggest low discordance among different types of diseases for these two well-known pathways. For the PPAR signaling pathway, its DES is also highly significant. Figure 10 shows a considerable amount of concordance as well as a considerable amount of discordance among different types of diseases for this pathway. With the consideration of either Bonferroni-type adjustment or FDR-type adjustment, no further detection can be made based on GSA-based Fisher's method. However, our method identified a few pathways with significant discordance enrichment (DES>0.999) including the focal adhesion, MAPK signaling, VEGF signaling and apoptosis pathways. Figure 11 shows a considerable amount of discordance among different types of diseases for the well-known apoptosis pathway. Furthermore, the WNT signaling, adherens junction, MTOR signaling and TGF-beta signaling pathways are also showing high DES values, which suggests possible discordance enrichment for these pathways. z-scores in PPAR signaling pathway (TCGA data). Pair-wise scatterplots for comparing z-scores in the given pathway (dark color) and out of the given pathway (gray color). x-Axis and y-axis represent z-scores for different types of diseases. The order of scatterplots is shown as (a-o) z-scores in apoptosis pathway (TCGA data). Pair-wise scatterplots for comparing z-scores in the given pathway (dark color) and out of the given pathway (gray color). x-Axis and y-axis represent z-scores for different types of diseases.
The order of scatterplots is shown as (a-o) In this study, we suggested a discordance gene set enrichment analysis for a series of two-sample genome-wide expression data sets. To reduce the parameter space, we proposed a two-level multivariate normal distribution mixture model. Our model is statistically efficient, with a parameter space that increases only linearly when the number of data sets is increased. Then, gene sets can be detected by the model-based probability of discordance enrichment. Based on our two-level model, if the proportion of the complete concordance component is high, then more genes behave concordantly among different data sets. Similarly, if the proportion of the complete independence component is high, then more genes behave discordantly among different data sets. In the complete concordance component (model), only completely concordant behaviors are considered: all "up," all "down" or all "null." Therefore, there are only three items j=0,1,2 for the outer summation term. For each completely concordant behavior, we have independence among different data sets. Statistically, conditional on an underlying completely concordant behavior (with probability π_j), we have an inner product term of probability density functions calculated based on different data sets. In the complete independence component (model), genes behave completely independently among different data sets, which is reflected in the outer product term. For each data set, the underlying behavior for each gene can be "up," "down" or "null." However, the behavior cannot be directly observed and the related probability density function is calculated based on a mixture model. Our method was applied to a microarray expression data set collected for pancreatic cancer research. The data were collected for forty-five matched tumor/non-tumor pairs of tissues.
These pairs were first divided into seven subgroups for defining seven subsets of genome-wide expression data, according to the paired expression ratio of gene PNLIP. This gene was recently shown to be associated with pancreatic cancer. Our purpose was to understand discordance gene set enrichment when gene PNLIP changes its behavior from down-regulation to up-regulation. Among a few identified pathways, the neuroactive ligand receptor interaction and olfactory transduction pathways were the most significant two. The alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways were also among the list. To better understand these results, we again divided the original data with forty-five pairs of tumor/non-tumor tissues into six subsets, according to the paired expression ratio of gene TP53 (a well-known tumor suppressor gene). The above four pathways were also identified by the discordance gene set enrichment analysis, with the neuroactive ligand receptor interaction and olfactory transduction pathways still the most significant two. After our literature search, we found that these two pathways were the only two identified for their association with pancreatic cancer in a recent independent pathway analysis of genome-wide association study (GWAS) data. For the alpha-linolenic-acid metabolism and linoleic-acid metabolism pathways, we found a previous study in which the association between pancreatic cancer and these two fatty acids (alpha-linolenic acid and linoleic acid) was observed. A few discordant behaviors from individual genes can be observed from Figs. 7 and 8. In Fig. 7 p, among genes in the neuroactive ligand receptor interaction pathway (black dots), a gene with the most negative z-score in subset 1 has the most positive z-score in subset 7. This is a clear change from down-regulation to up-regulation. In Fig.
8 a-b, among genes in the olfactory transduction pathway (black dots), a gene with the most positive z-score in subset 2 has a moderately positive z-score in subset 1, but its z-score in subset 3 is clearly negative. This is a clear change from up-regulation to down-regulation. We conducted a discordance gene set enrichment analysis based on gene PNLIP and a discordance gene set enrichment analysis based on gene TP53. Only a few among 186 KEGG pathways were identified. Most pathways (like cancer and pancreatic cancer related pathways) were evidently not enriched in discordant gene behaviors. This suggests unique molecular roles of both genes PNLIP and TP53 in pancreatic cancer development. There were four pathways identified from both analysis results and we found biomedical literature to support the association between pancreatic cancer and these pathways. Some pathways identified in one analysis were not identified in the other analysis. It is also biologically interesting to understand these pathways. It was biologically interesting to observe pathways with clearly discordant gene behaviors when the paired expression ratio of an important disease-related gene was changing. The analysis results in this study illustrated the usefulness of our proposed statistical method. Our method was developed based on z-scores that are statistical measures of differential expression, and many existing two-sample statistical tests could be used for generating z-scores. Therefore, in this study, we demonstrated our method based on a partition of a relatively large two-sample microarray data set as well as several two-sample genome-wide expression data sets collected by the recent RNA-seq technology. Our method is statistically novel for its two-level structure, which is developed based on a biological motivation (genes' behaviors among different data sets).
Due to this two-level structure, the parameter space of our model grows only linearly with the number of data sets, so the parameter estimates can remain statistically efficient. In our mixture model, conditional independence is the key to reducing the complexity of multivariate data analysis: for each gene, when the mixture component information is given for all the data sets, its z-scores are independent. (Notice that there is no overlap among the multiple data sets.) This unique feature gives our statistical model its mathematical and computational convenience. Our method is based on the well-established mixture model framework and uses the Expectation-Maximization (EM) algorithm for parameter estimation. One limitation is that the proposed three-component mixture model may not fit the z-scores well for some data. This can be improved by considering more components in the mixture model. For example, instead of simply considering down-regulation, null and up-regulation, we may consider more components such as strong down-regulation, weak down-regulation, null, weak up-regulation and strong up-regulation. This only proportionally increases the parameter space (still linear in the number of data sets for our two-level mixture model). It is also interesting to extend our method to more complicated analysis purposes. For example, we may be interested in identifying trend changes (monotonically increasing or decreasing) instead of general changes. Or we may have multiple data sets collected for different disease stages, where the data set for the normal/reference/control stage is not large enough to be divided and has to be used repeatedly in two-sample comparisons (so the z-scores are not even conditionally independent). For these situations, extending our method would require a considerable amount of research effort.
References
Schena M, Shalon D, Davis RW, Brown PO.
Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science. 1995; 270:467-70.
Lockhart D, Dong H, Byrne M, Follettie M, Gallo M, Chee M, Mittmann M, Wang C, Kobayashi M, Horton H, Brown E. Expression monitoring by hybridization to high-density oligonucleotide arrays. Nat Biotechnol. 1996; 14:1675-80.
Network TCGA. Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature. 2008; 455:1061-8.
Nagalakshmi U, Wang Z, Waern K, Shou C, Raha D, Gerstein M, Snyder M. The transcriptional landscape of the yeast genome defined by RNA sequencing. Science. 2008; 320:1344-9.
Wilhelm BT, Marguerat S, Watt S, Schubert F, Wood V, Goodhead I, Penkett CJ, Rogers J, Bahler J. Dynamic repertoire of a eukaryotic transcriptome surveyed at single-nucleotide resolution. Nature. 2008; 453:1239-43.
Storey JD, Tibshirani R. Statistical significance for genomewide studies. Proc Nat Acad Sci USA. 2003; 100:9440-5.
Mootha VK, Lindgren CM, Eriksson KF, Subramanian A, Sihag S, Lehar J, Puigserver P, Carlsson E, Ridderstrale M, Laurila E, Houstis N, Daly MJ, Patterson N, Mesirov JP, Golub TR, Tamayo P, Spiegelman B, Lander ES, Hirschhorn JN, Altshuler D, Groop L. PGC-1 α-response genes involved in oxidative phosphorylation are coordinately downregulated in human diabetes. Nat Genet. 2003; 34:267-73.
Subramanian A, Tamayo P, Mootha VK, Mukherjee S, Ebert BL, Gillette MA, Paulovich A, Pomeroy SL, Golub TR, Lander ES, Mesirov JP. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Nat Acad Sci USA. 2005; 102:15545-50.
Edgar R, Barrett T. NCBI GEO standards and services for microarray data. Nat Biotechnol. 2006; 24:1471-2.
de Magalhaes JP, Curado J, Church GM. Meta-analysis of age-related gene expression profiles identifies common signatures of aging. Bioinformatics. 2009; 25:875-81.
Choi JK, Yu U, Kim S, Yoo OJ.
Combining multiple microarray studies and modeling interstudy variation. Bioinformatics. 2003; 19 Supplement 1:84-90.
Tanner SW, Agarwal P. Gene vector analysis (GeneVA): A unified method to detect differentially-regulated gene sets and similar microarray experiments. BMC Bioinforma. 2008; 9:348.
Shen K, Tseng GC. Meta-analysis for pathway enrichment analysis when combining multiple genomic studies. Bioinformatics. 2010; 26:1316-23.
Chen M, Zang M, Wang X, Xiao G. A powerful Bayesian meta-analysis method to integrate multiple gene set enrichment studies. Bioinformatics. 2013; 29:862-9.
Lai Y, Zhang F, Nayak TK, Modarres R, Lee NH, McCaffrey TA. Concordant integrative gene set enrichment analysis of multiple large-scale two-sample expression data sets. BMC Genomics. 2014; 15 Suppl 1:6.
Pang H, Zhao H. Stratified pathway analysis to identify gene sets associated with oral contraceptive use and breast cancer. Cancer Inform. 2014; 13 (Suppl 4):73-8.
Jones AR, Troakes C, King A, Sahni V, De Jong S, Bossers K, Papouli E, Mirza M, Al-Sarraj S, Shaw CE, Shaw PJ, Kirby J, Veldink JH, Macklis JD, Powell JF, Al-Chalabi A. Stratified gene expression analysis identifies major amyotrophic lateral sclerosis genes. Neurobiol Aging. 2015; 36:2006-19.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: A practical and powerful approach to multiple testing. J R Stat Soc Series B. 1995; 57:289-300.
McLachlan GJ, Krishnan T. The EM Algorithm and Extensions, 2nd Edition. Hoboken, New Jersey, USA: John Wiley & Sons, Inc.; 2008.
McLachlan GJ, Bean RW, Jones LB. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays. Bioinformatics. 2006; 22:1608-15.
Brower V. Genomic research advances pancreatic cancer's early detection and treatment. J Nat Cancer Inst. 2015; 107:95.
Zhang G, He P, Tan H, Budhu A, Gaedcke J, Ghadimi BM, Ried T, Yfantis HG, Lee DH, Maitra A, Hanna N, Alexander HR, Hussain SP.
Integration of metabolomics and transcriptomics revealed a fatty acid network exerting growth inhibitory effects in human pancreatic cancer. Clin Cancer Res. 2013; 19:4983-93.
Zhang G, Schetter A, He P, Funamizu N, Gaedcke J, Ghadimi BM, Ried T, Hassan R, Yfantis HG, Lee DH, Lacy C, Maitra A, Hanna N, Alexander HR, Hussain SP. DPEP1 inhibits tumor cell invasiveness, enhances chemosensitivity and predicts clinical outcome in pancreatic ductal adenocarcinoma. PLoS One. 2012; 7:31507.
Amaratunga D, Cabrera J. Exploration and Analysis of DNA Microarray and Protein Array Data. Hoboken, New Jersey, USA: John Wiley & Sons, Inc; 2003.
Oshlack A, Robinson MD, Young MD. From RNA-seq reads to differential expression results. Genome Biol. 2010; 11:220.
Zheng W, Chung LM, Zhao H. Bias detection and correction in RNA-sequencing data. BMC Bioinforma. 2011; 12:290.
Tusher VG, Tibshirani R, Chu G. Significance analysis of microarrays applied to the ionizing radiation response. Proc Nat Acad Sci USA. 2001; 98:5116-21.
Smyth GK. Linear models and empirical Bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004; 3:3.
Dudoit S, Shaffer JP, Boldrick JC. Multiple hypothesis testing in microarray experiments. Stat Sci. 2003; 18:71-103.
Lai Y, Adam BL, Podolsky R, She JX. A mixture model approach to the tests of concordance and discordance between two large scale experiments with two-sample groups. Bioinformatics. 2007; 23:1243-50.
Lai Y, Eckenrode SE, She JX. A statistical framework for integrating two microarray data sets in differential expression analysis. BMC Bioinforma. 2009; 10 (Suppl. 1):23.
Wei P, Tang H, Li D. Insights into pancreatic cancer etiology from pathway analysis of genome-wide association study data. PLoS One. 2012; 7:46887.
Wenger FA, Kilian M, Jacobi CA, Schimke I, Guski H, Müller JM.
Does alpha-linolenic acid in combination with linoleic acid influence liver metastasis and hepatic lipid peroxidation in BOP-induced pancreatic cancer in Syrian hamsters? Prostaglandins Leukot Essent Fatty Acids. 2000; 62:329-34.
Efron B, Tibshirani R. On testing the significance of sets of genes. Ann Appl Stat. 2007; 1:107-29.
Maciejewski H. Gene set analysis methods: statistical models and methodological differences. Brief Bioinforma. 2014; 15:504-18.
This article has been published as part of BMC Genomics Volume 18 Supplement 1, 2016: Proceedings of the 27th International Conference on Genome Informatics: genomics. The full contents of the supplement are available online at http://bmcgenomics.biomedcentral.com/articles/supplements/volume-18-supplement-1.
This work was partially supported by the NIH grant GM-092963 (Y. Lai). The publication costs were funded by the Department of Statistics at The George Washington University.
YL conceived of the study, developed the methods, performed the statistical analysis, and drafted the manuscript; FZ developed the methods, performed the statistical analysis, and helped to draft the manuscript; TKN, RM, NHL and TAM helped to draft the manuscript. All authors read and approved the final manuscript.
Department of Statistics, The George Washington University, 801 22nd St. N.W., Rome Hall, 7th Floor, Washington, 20052, D.C., USA: Yinglei Lai, Fanni Zhang, Tapan K. Nayak & Reza Modarres
Department of Pharmacology and Physiology, The George Washington University Medical Center, Washington, 20037, D.C., USA: Norman H. Lee
Department of Medicine, Division of Genomic Medicine, The George Washington University Medical Center, Washington, 20037, D.C., USA: Timothy A. McCaffrey
Correspondence to Yinglei Lai.
Lai, Y., Zhang, F., Nayak, T.K. et al. Detecting discordance enrichment among a series of two-sample genome-wide expression data sets. BMC Genomics 18 (Suppl 1), 1050 (2017).
https://doi.org/10.1186/s12864-016-3265-2
Keywords: Discordance; Gene set enrichment; Mixture models
Itô calculus Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations. The central concept is the Itô stochastic integral, a stochastic generalization of the Riemann–Stieltjes integral in analysis. The integrands and the integrators are now stochastic processes: $Y_{t}=\int _{0}^{t}H_{s}\,dX_{s},$ where H is a locally square-integrable process adapted to the filtration generated by X (Revuz & Yor 1999, Chapter IV), which is a Brownian motion or, more generally, a semimartingale. The result of the integration is then another stochastic process. Concretely, the integral from 0 to any particular t is a random variable, defined as a limit of a certain sequence of random variables. The paths of Brownian motion fail to satisfy the requirements to be able to apply the standard techniques of calculus. So with the integrand a stochastic process, the Itô stochastic integral amounts to an integral with respect to a function which is not differentiable at any point and has infinite variation over every time interval. The main insight is that the integral can be defined as long as the integrand H is adapted, which loosely speaking means that its value at time t can only depend on information available up until this time. Roughly speaking, one chooses a sequence of partitions of the interval from 0 to t and constructs Riemann sums. Each Riemann sum uses a particular realization of the integrator. It is crucial which point of each small interval is used to evaluate the integrand. The limit is then taken in probability as the mesh of the partition goes to zero. Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions. Typically, the left end of the interval is used.
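The effect of the left-endpoint convention can be seen in a small numerical experiment (a sketch in Python with NumPy; the grid size and seed are arbitrary choices). For H = B, the left-endpoint Riemann sums satisfy the exact algebraic identity Σ B_{t_{i−1}}ΔB_i = (B_T² − Σ(ΔB_i)²)/2 for any discrete path, and since the quadratic variation Σ(ΔB_i)² tends to T, the Itô integral of B dB is (B_T² − T)/2 rather than the classical B_T²/2.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dB = rng.normal(0.0, np.sqrt(T / n), size=n)   # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))     # Brownian path on the grid

# Left-endpoint (Ito) Riemann sum for the integral of B dB
ito_sum = np.sum(B[:-1] * dB)

# Exact telescoping identity: sum B_{i-1} dB_i = (B_T^2 - sum dB_i^2) / 2
qv = np.sum(dB ** 2)                           # discrete quadratic variation, near T
assert abs(ito_sum - 0.5 * (B[-1] ** 2 - qv)) < 1e-8
assert abs(qv - T) < 0.05                      # [B]_T = T for Brownian motion
```

Using right endpoints instead would add Σ(ΔB_i)² ≈ T to the sum, which is why the choice of evaluation point matters here, unlike for finite-variation integrators.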
Important results of Itô calculus include the integration by parts formula and Itô's lemma, which is a change of variables formula. These differ from the formulas of standard calculus, due to quadratic variation terms. In mathematical finance, the evaluation strategy described above is interpreted as first deciding what to do and then observing the change in the prices. The integrand is how much stock we hold, the integrator represents the movement of the prices, and the integral is how much money we have in total, including what our stock is worth, at any given moment. The prices of stocks and other traded financial assets can be modeled by stochastic processes such as Brownian motion or, more often, geometric Brownian motion (see Black–Scholes). Then, the Itô stochastic integral represents the payoff of a continuous-time trading strategy consisting of holding an amount Ht of the stock at time t. In this situation, the condition that H is adapted corresponds to the necessary restriction that the trading strategy can only make use of the available information at any time. This prevents the possibility of unlimited gains through clairvoyance: buying the stock just before each uptick in the market and selling before each downtick. Similarly, the condition that H is adapted implies that the stochastic integral will not diverge when calculated as a limit of Riemann sums (Revuz & Yor 1999, Chapter IV). Notation The process Y defined before as $Y_{t}=\int _{0}^{t}H\,dX\equiv \int _{0}^{t}H_{s}\,dX_{s},$ is itself a stochastic process with time parameter t, which is also sometimes written as Y = H · X (Rogers & Williams 2000). Alternatively, the integral is often written in differential form dY = H dX, which is equivalent to Y − Y0 = H · X.
As Itô calculus is concerned with continuous-time stochastic processes, it is assumed that an underlying filtered probability space is given $(\Omega ,{\mathcal {F}},({\mathcal {F}}_{t})_{t\geq 0},\mathbb {P} ).$ The σ-algebra ${\mathcal {F}}_{t}$ represents the information available up until time t, and a process X is adapted if Xt is ${\mathcal {F}}_{t}$-measurable. A Brownian motion B is understood to be an ${\mathcal {F}}_{t}$-Brownian motion, which is just a standard Brownian motion with the properties that Bt is ${\mathcal {F}}_{t}$-measurable and that Bt+s − Bt is independent of ${\mathcal {F}}_{t}$ for all s,t ≥ 0 (Revuz & Yor 1999). Integration with respect to Brownian motion The Itô integral can be defined in a manner similar to the Riemann–Stieltjes integral, that is, as a limit in probability of Riemann sums; such a limit does not necessarily exist pathwise. Suppose that B is a Wiener process (Brownian motion) and that H is a right-continuous (càdlàg), adapted and locally bounded process. If $\{\pi _{n}\}$ is a sequence of partitions of [0, t] with mesh going to zero, then the Itô integral of H with respect to B up to time t is a random variable $\int _{0}^{t}H\,dB=\lim _{n\rightarrow \infty }\sum _{[t_{i-1},t_{i}]\in \pi _{n}}H_{t_{i-1}}(B_{t_{i}}-B_{t_{i-1}}).$ It can be shown that this limit converges in probability. For some applications, such as martingale representation theorems and local times, the integral is needed for processes that are not continuous. The predictable processes form the smallest class that is closed under taking limits of sequences and contains all adapted left-continuous processes. If H is any predictable process such that ∫0t H² ds < ∞ for every t ≥ 0 then the integral of H with respect to B can be defined, and H is said to be B-integrable. Any such process can be approximated by a sequence Hn of left-continuous, adapted and locally bounded processes, in the sense that $\int _{0}^{t}(H-H_{n})^{2}\,ds\to 0$ in probability.
Then, the Itô integral is $\int _{0}^{t}H\,dB=\lim _{n\to \infty }\int _{0}^{t}H_{n}\,dB$ where, again, the limit can be shown to converge in probability. The stochastic integral satisfies the Itô isometry $\mathbb {E} \left[\left(\int _{0}^{t}H_{s}\,dB_{s}\right)^{2}\right]=\mathbb {E} \left[\int _{0}^{t}H_{s}^{2}\,ds\right]$ which holds when H is bounded or, more generally, when the integral on the right hand side is finite. Itô processes An Itô process is defined to be an adapted stochastic process that can be expressed as the sum of an integral with respect to Brownian motion and an integral with respect to time, $X_{t}=X_{0}+\int _{0}^{t}\sigma _{s}\,dB_{s}+\int _{0}^{t}\mu _{s}\,ds.$ Here, B is a Brownian motion and it is required that σ is a predictable B-integrable process, and μ is predictable and (Lebesgue) integrable. That is, $\int _{0}^{t}(\sigma _{s}^{2}+|\mu _{s}|)\,ds<\infty $ for each t. The stochastic integral can be extended to such Itô processes, $\int _{0}^{t}H\,dX=\int _{0}^{t}H_{s}\sigma _{s}\,dB_{s}+\int _{0}^{t}H_{s}\mu _{s}\,ds.$ This is defined for all locally bounded and predictable integrands. More generally, it is required that Hσ be B-integrable and Hμ be Lebesgue integrable, so that $\int _{0}^{t}(H^{2}\sigma ^{2}+|H\mu |)ds<\infty .$ Such predictable processes H are called X-integrable. An important result for the study of Itô processes is Itô's lemma. In its simplest form, for any twice continuously differentiable function f on the reals and Itô process X as described above, it states that f(X) is itself an Itô process satisfying $df(X_{t})=f^{\prime }(X_{t})\,dX_{t}+{\frac {1}{2}}f^{\prime \prime }(X_{t})\sigma _{t}^{2}\,dt.$ This is the stochastic calculus version of the change of variables formula and chain rule. It differs from the standard result due to the additional term involving the second derivative of f, which comes from the property that Brownian motion has non-zero quadratic variation. 
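The extra second-derivative term in the simple form of Itô's lemma can be seen numerically (a Python sketch; the grid size, seed and tolerance are arbitrary choices). Taking f(x) = x³ and X = B (so σ = 1, μ = 0), the lemma gives dB³ = 3B² dB + 3B dt, and the 3B dt correction is exactly what the classical chain rule misses.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Ito's lemma for f(x) = x^3 applied to X = B: dB^3 = 3 B^2 dB + 3 B dt
ito_term = 3.0 * np.sum(B[:-1] ** 2 * dB)      # stochastic integral part
correction = 3.0 * np.sum(B[:-1]) * dt         # (1/2) f''(B) sigma^2 dt part

# Both sides of Ito's lemma agree up to discretization noise
assert abs(B[-1] ** 3 - (ito_term + correction)) < 0.2
```

Dropping `correction` (the classical chain rule) leaves a discrepancy of order 3∫B dt, which is typically of order one rather than of order of the discretization error.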
Semimartingales as integrators The Itô integral is defined with respect to a semimartingale X. These are processes which can be decomposed as X = M + A for a local martingale M and finite variation process A. Important examples of such processes include Brownian motion, which is a martingale, and Lévy processes. For a left continuous, locally bounded and adapted process H the integral H · X exists, and can be calculated as a limit of Riemann sums. Let πn be a sequence of partitions of [0, t] with mesh going to zero, $\int _{0}^{t}H\,dX=\lim _{n\rightarrow \infty }\sum _{t_{i-1},t_{i}\in \pi _{n}}H_{t_{i-1}}(X_{t_{i}}-X_{t_{i-1}}).$ This limit converges in probability. The stochastic integral of left-continuous processes is general enough for studying much of stochastic calculus. For example, it is sufficient for applications of Itô's Lemma, changes of measure via Girsanov's theorem, and for the study of stochastic differential equations. However, it is inadequate for other important topics such as martingale representation theorems and local times. The integral extends to all predictable and locally bounded integrands, in a unique way, such that the dominated convergence theorem holds. That is, if Hn → H and |Hn| ≤ J for a locally bounded process J, then $\int _{0}^{t}H_{n}\,dX\to \int _{0}^{t}H\,dX,$ in probability. The uniqueness of the extension from left-continuous to predictable integrands is a result of the monotone class lemma. In general, the stochastic integral H · X can be defined even in cases where the predictable process H is not locally bounded. If K = 1 / (1 + |H|) then K and KH are bounded. Associativity of stochastic integration implies that H is X-integrable, with integral H · X = Y, if and only if Y0 = 0 and K · Y = (KH) · X. The set of X-integrable processes is denoted by L(X). Properties The following properties can be found in works such as (Revuz & Yor 1999) and (Rogers & Williams 2000): • The stochastic integral is a càdlàg process.
Furthermore, it is a semimartingale. • The discontinuities of the stochastic integral are given by the jumps of the integrator multiplied by the integrand. The jump of a càdlàg process at a time t is Xt − Xt−, and is often denoted by ΔXt. With this notation, Δ(H · X) = H ΔX. A particular consequence of this is that integrals with respect to a continuous process are always themselves continuous. • Associativity. Let J, K be predictable processes, and K be X-integrable. Then, J is K · X integrable if and only if JK is X integrable, in which case $J\cdot (K\cdot X)=(JK)\cdot X$ • Dominated convergence. Suppose that Hn → H and |Hn| ≤ J, where J is an X-integrable process. Then Hn · X → H · X. Convergence is in probability at each time t. In fact, it converges uniformly on compact sets in probability. • The stochastic integral commutes with the operation of taking quadratic covariations. If X and Y are semimartingales then any X-integrable process will also be [X, Y]-integrable, and [H · X, Y] = H · [X, Y]. A consequence of this is that the quadratic variation process of a stochastic integral is equal to an integral of a quadratic variation process, $[H\cdot X]=H^{2}\cdot [X]$ Integration by parts As with ordinary calculus, integration by parts is an important result in stochastic calculus. The integration by parts formula for the Itô integral differs from the standard result due to the inclusion of a quadratic covariation term. This term comes from the fact that Itô calculus deals with processes with non-zero quadratic variation, which only occurs for infinite variation processes (such as Brownian motion). If X and Y are semimartingales then $X_{t}Y_{t}=X_{0}Y_{0}+\int _{0}^{t}X_{s-}\,dY_{s}+\int _{0}^{t}Y_{s-}\,dX_{s}+[X,Y]_{t}$ where [X, Y] is the quadratic covariation process. The result is similar to the integration by parts theorem for the Riemann–Stieltjes integral but has an additional quadratic variation term.
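The integration by parts formula can be verified exactly at the discrete level (a Python sketch; two independent Brownian paths on a grid, with an arbitrary seed). For left-endpoint sums, X_n Y_n − X_0 Y_0 = Σ X_{i−1}ΔY_i + Σ Y_{i−1}ΔX_i + Σ ΔX_i ΔY_i is an algebraic identity, and the last sum approximates the quadratic covariation [X, Y]_T, which is near 0 for independent Brownian motions and near T when X = Y.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 50_000
dX = rng.normal(0.0, np.sqrt(T / n), size=n)
dY = rng.normal(0.0, np.sqrt(T / n), size=n)   # independent of dX
X = np.concatenate(([0.0], np.cumsum(dX)))
Y = np.concatenate(([0.0], np.cumsum(dY)))

lhs = X[-1] * Y[-1]                            # X_0 = Y_0 = 0 here
int_X_dY = np.sum(X[:-1] * dY)
int_Y_dX = np.sum(Y[:-1] * dX)
cov = np.sum(dX * dY)                          # discrete [X, Y]_T

# Discrete integration by parts holds exactly (up to rounding)
assert abs(lhs - (int_X_dY + int_Y_dX + cov)) < 1e-8
# [X, Y]_T is near 0 for independent paths; [X, X]_T is near T
assert abs(cov) < 0.05
assert abs(np.sum(dX * dX) - T) < 0.05
```

The same computation with Y = X recovers the ∫B dB example: the covariation term Σ(ΔX)² ≈ T is what shifts the classical formula.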
Itô's lemma Main article: Itô's lemma Itô's lemma is the version of the chain rule or change of variables formula which applies to the Itô integral. It is one of the most powerful and frequently used theorems in stochastic calculus. For a continuous n-dimensional semimartingale X = (X1,...,Xn) and twice continuously differentiable function f from Rn to R, it states that f(X) is a semimartingale and, $df(X_{t})=\sum _{i=1}^{n}f_{i}(X_{t})\,dX_{t}^{i}+{\frac {1}{2}}\sum _{i,j=1}^{n}f_{i,j}(X_{t})\,d[X^{i},X^{j}]_{t}.$ This differs from the chain rule used in standard calculus due to the term involving the quadratic covariation [Xi, Xj]. The formula can be generalized to include an explicit time-dependence in $f,$ and in other ways (see Itô's lemma). Martingale integrators Local martingales An important property of the Itô integral is that it preserves the local martingale property. If M is a local martingale and H is a locally bounded predictable process then H · M is also a local martingale. For integrands which are not locally bounded, there are examples where H · M is not a local martingale. However, this can only occur when M is not continuous. If M is a continuous local martingale then a predictable process H is M-integrable if and only if $\int _{0}^{t}H^{2}\,d[M]<\infty ,$ for each t, and H · M is always a local martingale. The most general statement for a discontinuous local martingale M is that if (H² · [M])^(1/2) is locally integrable then H · M exists and is a local martingale. Square integrable martingales For bounded integrands, the Itô stochastic integral preserves the space of square integrable martingales, which is the set of càdlàg martingales M such that E[Mt²] is finite for all t.
For any such square integrable martingale M, the quadratic variation process [M] is integrable, and the Itô isometry states that $\mathbb {E} \left[(H\cdot M_{t})^{2}\right]=\mathbb {E} \left[\int _{0}^{t}H^{2}\,d[M]\right].$ This equality holds more generally for any martingale M such that H² · [M]t is integrable. The Itô isometry is often used as an important step in the construction of the stochastic integral, by defining H · M to be the unique extension of this isometry from a certain class of simple integrands to all bounded and predictable processes. p-Integrable martingales For any p > 1, and bounded predictable integrand, the stochastic integral preserves the space of p-integrable martingales. These are càdlàg martingales such that E(|Mt|^p) is finite for all t. However, this is not always true in the case where p = 1. There are examples of integrals of bounded predictable processes with respect to martingales which are not themselves martingales. The maximum process of a càdlàg process M is written as M*t = sup_{s≤t} |Ms|. For any p ≥ 1 and bounded predictable integrand, the stochastic integral preserves the space of càdlàg martingales M such that E[(M*t)^p] is finite for all t. If p > 1 then this is the same as the space of p-integrable martingales, by Doob's inequalities. The Burkholder–Davis–Gundy inequalities state that, for any given p ≥ 1, there exist positive constants c, C that depend on p, but not on M or on t, such that $c\mathbb {E} \left[[M]_{t}^{\frac {p}{2}}\right]\leq \mathbb {E} \left[(M_{t}^{*})^{p}\right]\leq C\mathbb {E} \left[[M]_{t}^{\frac {p}{2}}\right]$ for all càdlàg local martingales M. These are used to show that if (M*t)^p is integrable and H is a bounded predictable process then $\mathbb {E} \left[((H\cdot M)_{t}^{*})^{p}\right]\leq C\mathbb {E} \left[(H^{2}\cdot [M]_{t})^{\frac {p}{2}}\right]<\infty $ and, consequently, H · M is a p-integrable martingale. More generally, this statement is true whenever (H² · [M])^(p/2) is integrable.
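The Itô isometry can be checked by Monte Carlo (a Python sketch; path count, grid resolution and seed are arbitrary choices). With M = B, so that [M]_t = t, and the deterministic integrand H_s = s on [0, 1], both sides equal ∫₀¹ s² ds = 1/3.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, paths = 1.0, 500, 20_000
dt = T / n
t_left = np.arange(n) * dt                     # left endpoints of the grid
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))

# Stochastic integral of H_s = s against each Brownian path
I = (dB * t_left).sum(axis=1)

lhs = np.mean(I ** 2)                          # estimate of E[(H . B_1)^2]
rhs = np.sum(t_left ** 2 * dt)                 # discrete int_0^1 s^2 ds

assert abs(rhs - 1.0 / 3.0) < 2e-3             # Riemann sum is near 1/3
assert abs(lhs - rhs) < 0.02                   # isometry, up to Monte Carlo error
```

Here H is deterministic, so the integral is a Gaussian random variable and the isometry reduces to the usual variance formula for sums of independent increments; for adapted random integrands the same identity holds but the check requires simulating H along each path.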
Existence of the integral Proofs that the Itô integral is well defined typically proceed by first looking at very simple integrands, such as piecewise constant, left continuous and adapted processes where the integral can be written explicitly. Such simple predictable processes are linear combinations of terms of the form Ht = A·1{t > T} for stopping times T and FT-measurable random variables A, for which the integral is $H\cdot X_{t}\equiv \mathbf {1} _{\{t>T\}}A(X_{t}-X_{T}).$ This is extended to all simple predictable processes by the linearity of H · X in H. For a Brownian motion B, the property that it has independent increments with zero mean and variance Var(Bt) = t can be used to prove the Itô isometry for simple predictable integrands, $\mathbb {E} \left[(H\cdot B_{t})^{2}\right]=\mathbb {E} \left[\int _{0}^{t}H_{s}^{2}\,ds\right].$ By a continuous linear extension, the integral extends uniquely to all predictable integrands satisfying $\mathbb {E} \left[\int _{0}^{t}H^{2}\,ds\right]<\infty ,$ in such a way that the Itô isometry still holds. It can then be extended to all B-integrable processes by localization. This method allows the integral to be defined with respect to any Itô process. For a general semimartingale X, the decomposition X = M + A into a local martingale M plus a finite variation process A can be used. Then, the integral can be shown to exist separately with respect to M and A and combined using linearity, H · X = H · M + H · A, to get the integral with respect to X. The standard Lebesgue–Stieltjes integral allows integration to be defined with respect to finite variation processes, so the existence of the Itô integral for semimartingales will follow from any construction for local martingales. For a càdlàg square integrable martingale M, a generalized form of the Itô isometry can be used.
First, the Doob–Meyer decomposition theorem is used to show that a decomposition M² = N + ⟨M⟩ exists, where N is a martingale and ⟨M⟩ is a right-continuous, increasing and predictable process starting at zero. This uniquely defines ⟨M⟩, which is referred to as the predictable quadratic variation of M. The Itô isometry for square integrable martingales is then $\mathbb {E} \left[(H\cdot M_{t})^{2}\right]=\mathbb {E} \left[\int _{0}^{t}H_{s}^{2}\,d\langle M\rangle _{s}\right],$ which can be proved directly for simple predictable integrands. As with the case above for Brownian motion, a continuous linear extension can be used to uniquely extend to all predictable integrands satisfying E[H² · ⟨M⟩t] < ∞. This method can be extended to all local square integrable martingales by localization. Finally, the Doob–Meyer decomposition can be used to decompose any local martingale into the sum of a local square integrable martingale and a finite variation process, allowing the Itô integral to be constructed with respect to any semimartingale. Many other proofs exist which apply similar methods but which avoid the need to use the Doob–Meyer decomposition theorem, such as the use of the quadratic variation [M] in the Itô isometry, the use of the Doléans measure for submartingales, or the use of the Burkholder–Davis–Gundy inequalities instead of the Itô isometry. The latter applies directly to local martingales without having to first deal with the square integrable martingale case. Alternative proofs exist that make use only of the fact that X is càdlàg, adapted, and the set {H · Xt: |H| ≤ 1 is simple previsible} is bounded in probability for each time t, which is an alternative definition for X to be a semimartingale. A continuous linear extension can be used to construct the integral for all left-continuous and adapted integrands with right limits everywhere (caglad or L-processes). This is general enough to be able to apply techniques such as Itô's lemma (Protter 2004).
Also, a Khintchine inequality can be used to prove the dominated convergence theorem and extend the integral to general predictable integrands (Bichteler 2002). Differentiation in Itô calculus The Itô calculus is first and foremost defined as an integral calculus as outlined above. However, there are also different notions of "derivative" with respect to Brownian motion: Malliavin derivative Malliavin calculus provides a theory of differentiation for random variables defined over Wiener space, including an integration by parts formula (Nualart 2006). Martingale representation The following result allows one to express martingales as Itô integrals: if M is a square-integrable martingale on a time interval [0, T] with respect to the filtration generated by a Brownian motion B, then there is a unique adapted square integrable process α on [0, T] such that $M_{t}=M_{0}+\int _{0}^{t}\alpha _{s}\,\mathrm {d} B_{s}$ almost surely, and for all t ∈ [0, T] (Rogers & Williams 2000, Theorem 36.5). This representation theorem can be interpreted formally as saying that α is the "time derivative" of M with respect to Brownian motion B, since α is precisely the process that must be integrated up to time t to obtain Mt − M0, as in deterministic calculus. Itô calculus for physicists In physics, stochastic differential equations (SDEs), such as Langevin equations, are usually used rather than stochastic integrals. Here an Itô SDE is often formulated via ${\dot {x}}_{k}=h_{k}+g_{kl}\xi _{l},$ where $\xi _{j}$ is Gaussian white noise with $\langle \xi _{k}(t_{1})\,\xi _{l}(t_{2})\rangle =\delta _{kl}\delta (t_{1}-t_{2})$ and Einstein's summation convention is used.
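In practice such Langevin equations are integrated numerically, most simply with the Euler–Maruyama scheme, which discretizes the Itô formulation directly (a hedged Python sketch; the Ornstein–Uhlenbeck example, step size and seed are illustrative choices, corresponding to h = −x and g = 1 in the notation above). For dx = −x dt + dB the stationary variance is 1/2.

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n_steps, paths = 0.01, 500, 20_000     # integrate to T = 5
x = np.zeros(paths)                        # all paths start at x = 0

# Euler-Maruyama for the Ito SDE dx = -x dt + dB (Ornstein-Uhlenbeck)
for _ in range(n_steps):
    x += -x * dt + rng.normal(0.0, np.sqrt(dt), size=paths)

# By T = 5 the process is close to stationarity: Var(x) -> 1/2
assert abs(np.var(x) - 0.5) < 0.03
```

The scheme has a discretization bias of order dt (its stationary variance is 1/(2 − dt) rather than exactly 1/2), which is why the step size must shrink along with the Monte Carlo tolerance.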
If $y=y(x_{k})$ is a function of the xk, then Itô's lemma has to be used: ${\dot {y}}={\frac {\partial y}{\partial x_{j}}}{\dot {x}}_{j}+{\frac {1}{2}}{\frac {\partial ^{2}y}{\partial x_{k}\,\partial x_{l}}}g_{km}g_{ml}.$ An Itô SDE as above also corresponds to a Stratonovich SDE which reads ${\dot {x}}_{k}=h_{k}+g_{kl}\xi _{l}-{\frac {1}{2}}{\frac {\partial g_{kl}}{\partial {x_{m}}}}g_{ml}.$ SDEs frequently occur in physics in Stratonovich form, as limits of stochastic differential equations driven by colored noise if the correlation time of the noise term approaches zero. For a recent treatment of different interpretations of stochastic differential equations see for example (Lau & Lubensky 2007). Itô interpretation and supersymmetric theory of SDEs Main article: Supersymmetric theory of stochastic dynamics In the supersymmetric theory of SDEs, stochastic evolution is defined via stochastic evolution operator (SEO) acting on differential forms of the phase space. The Itô-Stratonovich dilemma takes the form of the ambiguity of the operator ordering that arises on the way from the path integral to the operator representation of stochastic evolution. The Itô interpretation corresponds to the operator ordering convention that all the momentum operators act after all the position operators. The SEO can be made unique by supplying it with its most natural mathematical definition of the pullback induced by the noise-configuration-dependent SDE-defined diffeomorphisms and averaged over the noise configurations. This disambiguation leads to the Stratonovich interpretation of SDEs that can be turned into the Itô interpretation by a specific shift of the flow vector field of the SDE. 
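The practical difference between the Itô and Stratonovich interpretations discussed above already shows up in the simplest multiplicative-noise example ẋ = xξ (a hedged Python sketch; path counts, step sizes and seed are illustrative). Read as an Itô SDE, dx = x dB has solution x₀ exp(B_t − t/2) with E[x_t] = x₀; read as Stratonovich, the solution is x₀ exp(B_t) with E[x_t] = x₀ e^{t/2}. A plain Euler–Maruyama step converges to the Itô solution, while a stochastic Heun (predictor–corrector) step converges to the Stratonovich one.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n, paths = 1.0, 500, 50_000
dt = T / n
x_ito = np.ones(paths)
x_str = np.ones(paths)

for _ in range(n):
    dB = rng.normal(0.0, np.sqrt(dt), size=paths)
    # Euler-Maruyama: converges to the Ito interpretation of dx = x dB
    x_ito = x_ito + x_ito * dB
    # Stochastic Heun (predictor-corrector): converges to Stratonovich
    pred = x_str + x_str * dB
    x_str = x_str + 0.5 * (x_str + pred) * dB

# E[x_1] = 1 under Ito, e^{1/2} ~ 1.6487 under Stratonovich
assert abs(np.mean(x_ito) - 1.0) < 0.05
assert abs(np.mean(x_str) - np.exp(0.5)) < 0.08
```

The gap between the two means is exactly the ½ g ∂g/∂x drift conversion term from the formula above, here ½ x, accumulated over the time interval.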
See also • Stochastic calculus • Itô's lemma • Stratonovich integral • Semimartingale • Wiener process References • Bichteler, Klaus (2002), Stochastic Integration With Jumps (1st ed.), Cambridge University Press, ISBN 0-521-81129-5 • Cohen, Samuel; Elliott, Robert (2015), Stochastic Calculus and Applications (2nd ed.), Birkhaueser, ISBN 978-1-4939-2867-5 • Hagen Kleinert (2004). Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore); Paperback ISBN 981-238-107-4. Fifth edition available online: PDF-files, with generalizations of Itô's lemma for non-Gaussian processes. • He, Sheng-wu; Wang, Jia-gang; Yan, Jia-an (1992), Semimartingale Theory and Stochastic Calculus, Science Press, CRC Press Inc., ISBN 978-0849377150 • Karatzas, Ioannis; Shreve, Steven (1991), Brownian Motion and Stochastic Calculus (2nd ed.), Springer, ISBN 0-387-97655-8 • Lau, Andy; Lubensky, Tom (2007), "State-dependent diffusion", Phys. Rev. E, 76 (1): 011123, arXiv:0707.2234, Bibcode:2007PhRvE..76a1123L, doi:10.1103/PhysRevE.76.011123 • Nualart, David (2006), The Malliavin calculus and related topics, Springer, ISBN 3-540-28328-5 • Øksendal, Bernt K. (2003), Stochastic Differential Equations: An Introduction with Applications, Berlin: Springer, ISBN 3-540-04758-1 • Protter, Philip E. (2004), Stochastic Integration and Differential Equations (2nd ed.), Springer, ISBN 3-540-00313-4 • Revuz, Daniel; Yor, Marc (1999), Continuous martingales and Brownian motion, Berlin: Springer, ISBN 3-540-57622-3 • Rogers, Chris; Williams, David (2000), Diffusions, Markov processes and martingales - Volume 2: Itô calculus, Cambridge: Cambridge University Press, ISBN 0-521-77593-0 • Mathematical Finance Programming in TI-Basic, which implements Ito calculus for TI-calculators. 
3.1: Measures of Center Book: Statistics Using Technology (Kozak) 3: Examining the Evidence Using Graphs and Statistics Contributed by Kathryn Kozak, Professor (Mathematics) at Coconino Community College This section focuses on measures of central tendency. Many times you are asking what to expect on average. For example, when you pick a major, you would probably ask how much you can expect to earn in that field. If you are thinking of relocating to a new town, you might ask how much you can expect to pay for housing. If you are planting vegetables in the spring, you might want to know how long it will be until you can harvest. These questions, and many more, can be answered by knowing the center of the data set. There are three measures of the "center" of the data. They are the mode, median, and mean. Any of the values can be referred to as the "average." The mode is the data value that occurs the most frequently in the data. To find it, you count how often each data value occurs, and then determine which data value occurs most often. The median is the data value in the middle of a sorted list of data. To find it, you put the data in order, and then determine which data value is in the middle of the data set. The mean is the arithmetic average of the numbers. This is the center that most people call the average, though all three – mean, median, and mode – really are averages. There are no symbols for the mode and the median, but the mean is used a great deal, and statisticians gave it a symbol. There are actually two symbols, one for the population parameter and one for the sample statistic. In most cases you cannot find the population parameter, so you use the sample statistic to estimate the population parameter. Definition \(\PageIndex{1}\) Population Mean: \(\mu=\frac{\Sigma x}{N}\), pronounced "mu". \(N\) is the size of the population. \(x\) represents a data value. 
\(\sum x\) means to add up all of the data values. Sample Mean: \(\overline{x}=\frac{\sum x}{n}\), pronounced "x bar". \(n\) is the size of the sample. The value for \(\overline{x}\) is used to estimate \(\mu\) since \(\mu\) can't be calculated in most situations. Example \(\PageIndex{1}\) finding the mean, median, and mode Suppose a vet wants to find the average weight of cats. The weights (in pounds) of five cats are in Table 3.1.1. 6.8 8.2 7.5 9.4 8.2 Table 3.1.1: Finding the Mean, Median, and Mode Find the mean, median, and mode of the weight of a cat. Before starting any mathematics problem, it is always a good idea to define the unknown in the problem. In this case, you want to define the variable. The symbol for the variable is \(x\). The variable is \(x =\) weight of a cat \(\overline{x}=\frac{6.8+8.2+7.5+9.4+8.2}{5}=\frac{40.1}{5}=8.02\) pounds You need to sort the list for both the median and mode. The sorted list is in Table 3.1.2. 6.8 7.5 8.2 8.2 9.4 Table 3.1.2: Sorted List of Cats' Weights There are 5 data points, so the middle of the list would be the 3rd number. (Just put a finger at each end of the list and move them toward the center one number at a time. Where your fingers meet is the median.) Table 3.1.3: Sorted List of Cats' Weights with Median Marked The median is therefore 8.2 pounds. Finding the mode is easiest to do from the sorted list that is in Table 3.1.2. Which value appears the most number of times? The number 8.2 appears twice, while all other numbers appear once. Mode = 8.2 pounds. A data set can have more than one mode. If there is a tie between two values for the most number of times, then both values are the mode and the data is called bimodal (two modes). If every data point occurs the same number of times, there is no mode. If there are more than two numbers that appear the most times, then usually there is no mode. In Example 3.1.1, there were an odd number of data points. In that case, the median was just the middle number. 
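The three centers for Example 3.1.1 can also be checked with Python's standard statistics module, shown here as an alternative to the calculator and R procedures used in this book:

```python
from statistics import mean, median, mode

weights = [6.8, 8.2, 7.5, 9.4, 8.2]  # cat weights from Table 3.1.1
print(round(mean(weights), 2))  # 8.02
print(median(weights))          # 8.2
print(mode(weights))            # 8.2
```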
What happens if there is an even number of data points? What would you do? Example \(\PageIndex{2}\) finding the median with an even number of data points Suppose a vet wants to find the median weight of cats. The weights (in pounds) of six cats are in Table 3.1.4. Find the median. 6.8 8.2 7.5 9.4 8.2 6.3 Table 3.1.4: Weights of Six Cats Variable: \(x =\) weight of a cat First sort the list if it is not already sorted. There are 6 numbers in the list so the number in the middle is between the 3rd and 4th numbers. Use your fingers starting at each end of the list in Table 3.1.5 and move toward the center until they meet. There are two numbers there. 6.3 6.8 7.5 8.2 8.2 9.4 Table 3.1.5: Sorted List of Weights of Six Cats To find the median, just average the two numbers. median \(=\frac{7.5+8.2}{2}=7.85\) pounds The median is 7.85 pounds. Example \(\PageIndex{3}\) finding the mean and median using technology Suppose a vet wants to find the median weight of cats. The weights (in pounds) of six cats are in Table 3.1.4. Find the median. Variable: \(x=\) weight of a cat You can do the calculations for the mean and median using technology. The procedure for calculating the sample mean ( \(\overline{x}\) ) and the sample median (Med) on the TI-83/84 is in Figures 3.1.1 through 3.1.4. First you need to go into the STAT menu, and then Edit. This will allow you to type in your data (see Figure 3.1.1). Figure 3.1.1: TI-83/84 Calculator Edit Setup Once you have the data into the calculator, you then go back to the STAT menu, move over to CALC, and then choose 1-Var Stats (see Figure 3.1.2). The calculator will now put 1-Var Stats on the main screen. Now type in L1 (2nd button and 1) and then press ENTER. (Note: if you have the newer operating system on the TI-84, then the procedure is slightly different.) If you press the down arrow, you will see the rest of the output from the calculator. The results from the calculator are in Figure 3.1.4. 
Figure 3.1.2: TI-83/84 Calculator CALC Menu Figure 3.1.3: TI-83/84 Calculator Input for Example 3.1.3 Variable Figure 3.1.4: TI-83/84 Calculator Results for Example 3.1.3 Variable The commands for finding the mean and median using R are as follows: variable<-c(type in your data with commas in between) To find the mean, use mean(variable) To find the median, use median(variable) So for this example, the commands would be weights<-c(6.8, 8.2, 7.5, 9.4, 8.2, 6.3) mean(weights) [1] 7.733333 median(weights) [1] 7.85 Example \(\PageIndex{4}\) effect of extreme values on mean and median Suppose you have the same set of cats from Example 3.1.1 but one additional cat was added to the data set. Table 3.1.6 contains the six cats' weights, in pounds. 6.8 7.5 8.2 8.2 9.4 22.1 Find the mean and the median. mean \(=\overline{x}=\frac{6.8+7.5+8.2+8.2+9.4+22.1}{6}=10.37\) pounds The data is already in order, thus the median is between 8.2 and 8.2. median \(=\frac{8.2+8.2}{2}=8.2\) pounds The mean is much higher than the median. Why is this? Notice that when the value of 22.1 was added, the mean went from 8.02 to 10.37, but the median did not change at all. This is because the mean is affected by extreme values, while the median is not. The very heavy cat brought the mean weight up. In this case, the median is a much better measure of the center. An outlier is a data value that is very different from the rest of the data. It can be really high or really low. An extreme value may be an outlier if it is far enough from the center. In Example 3.1.4, the data value 22.1 pounds is an extreme value and it may be an outlier. If there are extreme values in the data, the median is a better measure of the center than the mean. If there are no extreme values, the mean and the median will be similar, so most people use the mean. The mean is not a resistant measure because it is affected by extreme values. 
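Both of the last two examples can be verified in a few lines of Python, again as an alternative to the TI-83/84 and R steps shown; the data values are exactly those used above.

```python
from statistics import mean, median

# Example 3.1.2/3.1.3: six cats (even number of data points)
six_cats = [6.8, 8.2, 7.5, 9.4, 8.2, 6.3]
print(round(mean(six_cats), 4), round(median(six_cats), 2))        # 7.7333 7.85

# Example 3.1.4: one very heavy cat pulls the mean up, not the median
with_outlier = [6.8, 7.5, 8.2, 8.2, 9.4, 22.1]
print(round(mean(with_outlier), 2), round(median(with_outlier), 2))  # 10.37 8.2
```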
The median and the mode are resistant measures because they are not affected by extreme values. As a consumer you need to be aware that people choose the measure of center that best supports their claim. When you read an article in the newspaper and it talks about the "average", it usually means the mean, but sometimes it refers to the median. Some articles will use the word "median" instead of "average" to be more specific. If you need to make an important decision and the information says "average", it would be wise to ask if the "average" is the mean or the median before you decide. As an example, suppose that a company wants to use the mean salary as the average salary for the company. This is because the high salaries of the administration will pull the mean higher. The company can say that the employees are paid well because the average is high. However, the employees want to use the median, since it discounts the extreme values of the administration and will give a lower value of the average. This will make the salaries seem lower, suggesting that a raise is in order. Why use the mean instead of the median? The reason is that when multiple samples are taken from the same population, the sample means tend to be more consistent than other measures of the center. The sample mean is the more reliable measure of center. To understand how the different measures of center relate to skewed or symmetric distributions, see Figure 3.1.5. As you can see, sometimes the mean is smaller than the median and mode, sometimes the mean is larger than the median and mode, and sometimes they are the same values. Figure 3.1.5: Mean, Median, Mode as Related to a Distribution One last type of average is a weighted average. Weighted averages are used quite often in real life. Some teachers use them in calculating your grade in the course, or your grade on a project. Some employers use them in employee evaluations. The idea is that some activities are more important than others. 
As an example, a full-time teacher at a community college may be evaluated on their service to the college, their service to the community, whether their paperwork is turned in on time, and their teaching. However, teaching is much more important than whether their paperwork is turned in on time. When the evaluation is completed, more weight needs to be given to the teaching and less to the paperwork. This is a weighted average. weighted average \(=\frac{\sum x w}{\sum w}\), where \(w\) is the weight of the data value, \(x\). Example \(\PageIndex{5}\) weighted average In your biology class, your final grade is based on several things: a lab score, scores on two major tests, and your score on the final exam. There are 100 points available for each score. The lab score is worth 15% of the course, the two exams are worth 25% of the course each, and the final exam is worth 35% of the course. Suppose you earned scores of 95 on the labs, 83 and 76 on the two exams, and 84 on the final exam. Compute your weighted average for the course. Variable: \(x=\) score The weighted average is \(\frac{\Sigma x w}{\Sigma w}=\frac{\text { sum of the scores times their weights }}{\text { sum of all the weights }}\) weighted average \(=\frac{95(0.15)+83(0.25)+76(0.25)+84(0.35)}{0.15+0.25+0.25+0.35}=\frac{83.4}{1.00}=83.4 \%\) A weighted average can be found using technology. The procedure for calculating the weighted average on the TI-83/84 is in Figures 3.1.6 through 3.1.9. First you need to go into the STAT menu, and then Edit. This will allow you to type in the scores into L1 and the weights into L2 (see Figure 3.1.6). Figure 3.1.6: TI-83/84 Calculator Edit Setup Once you have the data into the calculator, you then go back to the STAT menu, move over to CALC, and then choose 1-Var Stats (see Figure 3.1.7). The calculator will now put 1-Var Stats on the main screen. Now type in L1 (2nd button and 1), then a comma (button above the 7 button), and then L2 (2nd button and 2), and then press ENTER. 
(Note: if you have the newer operating system on the TI-84, then the procedure is slightly different.) The results from the calculator are in Figure 3.1.9. The \(\overline{x}\) is the weighted average. Figure 3.1.8: TI-83/84 Calculator Input for Weighted Average Figure 3.1.9: TI-83/84 Calculator Results for Weighted Average The commands for finding a weighted average using R are as follows: x<-c(type in your data with commas in between) w<-c(type in your weights with commas in between) weighted.mean(x,w) So for this example, the commands would be x<-c(95, 83, 76, 84) w<-c(.15, .25, .25, .35) weighted.mean(x,w) [1] 83.4 The faculty evaluation process at John Jingle University rates a faculty member on the following activities: teaching, publishing, committee service, community service, and submitting paperwork in a timely manner. The process involves reviewing student evaluations, peer evaluations, and supervisor evaluation for each teacher and awarding him/her a score on a scale from 1 to 10 (with 10 being the best). The weights for each activity are 20 for teaching, 18 for publishing, 6 for committee service, 4 for community service, and 2 for paperwork. One faculty member had the following ratings: 8 for teaching, 9 for publishing, 2 for committee work, 1 for community service, and 8 for paperwork. Compute the weighted average of the evaluation. Another faculty member had ratings of 6 for teaching, 8 for publishing, 9 for committee work, 10 for community service, and 10 for paperwork. Compute the weighted average of the evaluation. Which faculty member had the higher average evaluation? a. Variable: \(x=\) rating evaluation \(=\frac{8(20)+9(18)+2(6)+1(4)+8(2)}{20+18+6+4+2}=\frac{354}{50}=7.08\) b. evaluation \(=\frac{6(20)+8(18)+9(6)+10(4)+10(2)}{20+18+6+4+2}=\frac{378}{50}=7.56\) c. The second faculty member had the higher average evaluation. You can find a weighted average using technology. The last thing to mention is which average is used on which type of data. Mode can be found on nominal, ordinal, interval, and ratio data, since the mode is just the data value that occurs most often. 
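The weighted-average computations above (the biology grade and the two faculty evaluations) can be reproduced directly from the formula \(\frac{\sum x w}{\sum w}\); this sketch assumes nothing beyond the numbers given in the examples.

```python
def weighted_average(scores, weights):
    # sum of x·w over sum of w, with w the weight attached to each score x
    return sum(x * w for x, w in zip(scores, weights)) / sum(weights)

# Biology course: labs, exam 1, exam 2, final exam
print(round(weighted_average([95, 83, 76, 84], [0.15, 0.25, 0.25, 0.35]), 1))  # 83.4

# Faculty evaluations: teaching, publishing, committee, community, paperwork
w = [20, 18, 6, 4, 2]
print(weighted_average([8, 9, 2, 1, 8], w))    # 7.08
print(weighted_average([6, 8, 9, 10, 10], w))  # 7.56
```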
You are just counting the data values. Median can be found on ordinal, interval, and ratio data, since you need to put the data in order. As long as there is order to the data you can find the median. Mean can be found on interval and ratio data, since you must have numbers to add together. Exercise \(\PageIndex{1}\) Cholesterol levels were collected from patients two days after they had a heart attack (Ryan, Joiner & Ryan, Jr, 1985) and are in Table 3.1.7. Find the mean, median, and mode. Table 3.1.7: Cholesterol Levels The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Pacific Ocean are listed in Table 3.1.8 (Lee, 1994). Find the mean, median, and mode. Length (km) Clarence 209 Clutha 322 Conway 48 Taieri 288 Waiau 169 Shag 72 Hurunui 138 Kakanui 64 Waipara 64 Rangitata 121 Ashley 97 Ophi 80 Waimakariri 161 Pareora 56 Selwyn 95 Waihao 64 Rakaia 145 Waitaki 209 Ashburton 90 Table 3.1.8: Lengths of Rivers (km) Flowing to Pacific Ocean The lengths (in kilometers) of rivers on the South Island of New Zealand that flow to the Tasman Sea are listed in Table 3.1.9 (Lee, 1994). Find the mean, median, and mode. Hollyford 76 Waimea 48 Cascade 64 Motueka 108 Arawhata 68 Takaka 72 Haast 64 Aorere 72 Karangarua 37 Heaphy 35 Cook 32 Karamea 80 Waiho 32 Mokihinui 56 Whataroa 51 Buller 177 Wanganui 56 Grey 121 Waitaha 40 Taramakau 80 Hokitika 64 Arahura 56 Table 3.1.9: Lengths of Rivers (km) Flowing to Tasman Sea Eyeglassmatic manufactures eyeglasses for their retailers. They research to see how many defective lenses they made during the time period of January 1 to March 31. Table 3.1.10 contains the defect and the number of defects. Find the mean, median, and mode. 
Defect Type Number of Defects Scratch 5865 Right shaped - small 4613 Flaked 1992 Wrong axis 1838 Chamfer wrong 1596 Crazing, cracks 1546 Wrong shape 1485 Wrong PD 1398 Spots and bubbles 1371 Wrong height 1130 Right shape - big 1105 Lost in lab 976 Spots/bubble - intern 976 Table 3.1.10: Number of Defective Lenses Print-O-Matic printing company's employees have salaries that are contained in Table 3.1.11. Salary ($) CEO 272,500 Driver 58,456 CD74 100,702 CD65 57,380 Embellisher 73,877 Folder 65,270 GTO 74,235 Handwork 52,718 Horizon 76,029 ITEK 64,553 Mgmt 108,448 Platens 69,573 Polar 75,526 Pre Press Manager 108,448 Pre Press Manager/ IT 98,837 Pre Press/ Graphic Artist 75,311 Designer 90,090 Sales 109,739 Administration 66,346 Table 3.1.11: Salaries of Print-O-Matic Printing Company Employees a. Find the mean and median. b. Find the mean and median with the CEO's salary removed. c. What happened to the mean and median when the CEO's salary was removed? Why? d. If you were the CEO, who is answering concerns from the union that employees are underpaid, which average of the complete data set would you prefer? Why? e. If you were a platen worker, who believes that the employees need a raise, which average would you prefer? Why? Print-O-Matic printing company spends specific amounts on fixed costs every month. The costs of those fixed costs are in Table 3.1.12. Monthly cost ($) Bank charges 482 Cleaning 2208 Computer expense 2471 Lease payments 2656 Postage 2117 Uniforms 2600 Table 3.1.12: Fixed Costs for Print-O-Matic Printing Company a. Find the mean and median. b. Find the mean and median with the bank charges removed. c. What happened to the mean and median when the bank charges were removed? Why? d. If it is your job to oversee the fixed costs, which average using the complete data set would you prefer to use when submitting a report to administration to show that costs are low? Why? e. 
If it is your job to find places in the budget to reduce costs, which average using the complete data set would you prefer to use when submitting a report to administration to show that fixed costs need to be reduced? Why? State which type of measurement scale each represents, and then which center measures can be used for the variable: You collect data on people's likelihood (very likely, likely, neutral, unlikely, very unlikely) to vote for a candidate. You collect data on the diameter at breast height of trees in the Coconino National Forest. You collect data on the year wineries were started. You collect the drink types that people in Sydney, Australia drink. You collect data on the height of plants using a new fertilizer. You collect data on the cars that people drive in Campbelltown, Australia. You collect data on the temperature at different locations in Antarctica. You collect data on the first, second, and third winner in a beer competition. Looking at Graph 3.1.1, state if the graph is skewed left, skewed right, or symmetric, and then state which is larger, the mean or the median. Graph 3.1.1: Skewed or Symmetric Graph An employee at Coconino Community College (CCC) is evaluated based on goal setting and accomplishments toward the goals, job effectiveness, competencies, and CCC core values. Suppose for a specific employee, goal 1 has a weight of 30%, goal 2 has a weight of 20%, job effectiveness has a weight of 25%, competency 1 has a weight of 4%, competency 2 has a weight of 3%, competency 3 has a weight of 3%, competency 4 has a weight of 3%, competency 5 has a weight of 2%, and core values has a weight of 10%. Suppose the employee has scores of 3.0 for goal 1, 3.0 for goal 2, 2.0 for job effectiveness, 3.0 for competency 1, 2.0 for competency 2, 2.0 for competency 3, 3.0 for competency 4, 4.0 for competency 5, and 3.0 for core values. Find the weighted average score for this employee. 
If an employee has a score less than 2.5, they must have a Performance Enhancement Plan written. Does this employee need a plan? An employee at Coconino Community College (CCC) is evaluated based on goal setting and accomplishments toward goals, job effectiveness, competencies, and CCC core values. Suppose for a specific employee, goal 1 has a weight of 20%, goal 2 has a weight of 20%, goal 3 has a weight of 10%, job effectiveness has a weight of 25%, competency 1 has a weight of 4%, competency 2 has a weight of 3%, competency 3 has a weight of 3%, competency 4 has a weight of 5%, and core values has a weight of 10%. Suppose the employee has scores of 2.0 for goal 1, 2.0 for goal 2, 4.0 for goal 3, 3.0 for job effectiveness, 2.0 for competency 1, 3.0 for competency 2, 2.0 for competency 3, 3.0 for competency 4, and 4.0 for core values. Find the weighted average score for this employee. If an employee has a score less than 2.5, they must have a Performance Enhancement Plan written. Does this employee need a plan? A statistics class has the following activities and weights for determining a grade in the course: test 1 worth 15% of the grade, test 2 worth 15% of the grade, test 3 worth 15% of the grade, homework worth 10% of the grade, semester project worth 20% of the grade, and the final exam worth 25% of the grade. If a student receives an 85 on test 1, a 76 on test 2, an 83 on test 3, a 74 on the homework, a 65 on the project, and a 79 on the final, what grade did the student earn in the course? A statistics class has the following activities and weights for determining a grade in the course: test 1 worth 15% of the grade, test 2 worth 15% of the grade, test 3 worth 15% of the grade, homework worth 10% of the grade, semester project worth 20% of the grade, and the final exam worth 25% of the grade. 
If a student receives a 92 on test 1, an 85 on test 2, a 95 on test 3, a 92 on the homework, a 55 on the project, and an 83 on the final, what grade did the student earn in the course? 1. mean = 253.93, median = 268, mode = none 3. mean = 67.68 km, median = 64 km, mode = 56 and 64 km 5. a. mean = $89,370.42, median = $75,311, b. mean = $79,196.56, median = $74,773, c. See solutions, d. See solutions, e. See solutions 7. a. ordinal- median and mode, b. ratio – all three, c. interval – all three, d. nominal – mode 9. Skewed right, mean higher 13. 76.75 
\begin{document} \edef\marginnotetextwidth{\the\textwidth} \title{\kern-30pt Soundness and Completeness of the NRB Verification Logic\kern-30pt} \author{ Author~1\inst{1} \and Author~2\inst{2} } \institute{ Author 1 address\\ \email{Author 1 email} \and Author 2 address\\ \email{Author 2 email} } \maketitle \begin{abstract} This short paper gives a model for and a proof of completeness of the NRB verification logic for deterministic imperative programs, the logic having been used in the past as the basis for automated semantic checks of large, fast-changing, open source C code archives, such as that of the Linux kernel source. The model is a coloured state transitions model that approximates from above the set of transitions possible for a program. Correspondingly, the logic catches all traces that may trigger a particular defect at a given point in the program, but may also flag false positives. \end{abstract} \pagestyle{plain} \section{Introduction} \customlabel{\thedefinition}{sec:Introduction} NRB program logic was first introduced in 2004 \cite{RST:2004} as the theory supporting an automated semantic analysis suite \cite{iCCS2006} targeting the C code of the Linux kernel. The analyses performed with this kind of program logic and automatic tools are typically much more approximate than those provided by more interactive or heavyweight techniques such as theorem-proving and model-checking \cite{Clarke1}, respectively, but the NRB combination has proved capable of rapidly scanning millions of lines of C code and detecting deadlocks scattered at one per million lines of code \cite{SEW30}. A rough synopsis of the characteristics of the logic, or of an approach using the logic, is that it is precise in following the often complex flow of control and sequence of events in an imperative language, but not very accurate at following data values. 
That is fine for a target language like C \cite{C89,C99}, where static analysis cannot reasonably hope to follow all data values accurately because of the profligate use of indirection through pointers in a typical program (a pointer may access any part of memory, in principle, hence writing through a pointer might `magically' change any value) and the NRB logic was designed to work around that problem by focussing instead on information derived from sequences of events. NRB is a logic with modal operators. The modalities do not denote a full range of actions as in Dynamic Logic ~\cite{Harel:2000}, but rather only the very particular action of the final exit from a code fragment being via a \code{return}, \code{break}, or \code{goto}. The logic is also configurable in detail to support the code abstractions that are of interest in different analyses; detecting the freeing of a record in memory while it may still be referenced requires an abstraction that counts the possible reference holders, for example, not the value currently in the second field from the right. The technique became known as `symbolic approximation' \cite{SA,ISOLA} because of the foundation in symbolic logic and because the analysis is guaranteed to be on the alarmist side (`approximate from above'); the analysis does not miss bugs in code, but does report false positives. In spite of a few years' pedigree behind it now, a foundational semantics for the logic has only just been published \cite{SCP} (as an Appendix to the main text), and this article aims to provide a yet simpler semantics for the logic and also a completeness result, with the aim of consolidating the technique's bona fides. 
Interestingly, the formal guarantee (`never miss, over-report') provided by NRB and the symbolic approximation technique is said not to be desirable in the commercial context by the very practical authors of the Coverity analysis tool \cite{Coverity,Bessey:2010}, which also has been used for static analysis of the Linux kernel and many very large C code projects. Allegedly, in the commercial arena, understandability of reports is crucial, not the guarantee that no bugs will be missed. The Coverity authors say that commercial clients tend to dismiss any reports that they do not understand, turning a deaf ear to explanations. However, the reports produced by our tools have always been filtered before presentation, so only the alarms that cannot be dismissed as false positives are seen. The layout of this paper is as follows. In Section~\ref{sec:A1} a model of programs as sets of `coloured' transitions between states is introduced, and the constructs of a generic imperative language are expressed in those terms. It is shown that the constructs obey certain algebraic laws, which soundly implement the established deduction rules of NRB logic. Section~\ref{sec:A2} shows that the logic is complete for deterministic programs, in that anything that is true in the model introduced in Section~\ref{sec:A1} can be proved using the formal rules of the NRB logic. Since the model contains at least as many state transitions as occur in reality, `soundness' of the NRB logic means that it may construct false alarms for when a particular condition may be breached at some particular point in a program, but that it may not miss any real alarms. `Completeness' means that the logic flags no more false alarms than are already to be predicted from the model, so if the model says that there ought to be no alarms at all (which means that there really are no alarms), then the logic can prove that. 
Thus, reasoning symbolically is not in principle an approximation here; it is not necessary to laboriously construct and examine the complete graph of modelled state transitions in order to be able to give a program a `clean bill of health' with reference to some potential defect, because the logic can always do the job as well. \section{Semantic Model} \customlabel{\thedefinition}{sec:A1} \begin{table}[t] \caption{NRB deduction rules for triples of assertions and programs. Unless explicitly noted, assumptions ${{\bf G}}_l p_l$ at left are passed down unaltered from top to bottom of each rule. We let ${\cal E}{}_1$ stand for any of ${{\bf R}}$, ${{\bf B}}$, ${{\bf G}}_l$, ${{\bf E}}_k$; ${\cal E}_2$ any of ${{\bf R}}$, ${{\bf G}}_l$, ${{\bf E}}_k$; ${\cal E}_3$ any of ${{\bf R}}$, ${{\bf G}}_{l'}$ for $l'\ne l$, ${{\bf E}}_k$; ${\cal E}_4$ any of ${{\bf R}}$, ${{\bf G}}_l$, ${{\bf E}}_{k'}$ for $k'\ne k$; $[h]$ the body of the subroutine named $h$. } \customlabel{\thedefinition}{tab:rules} \[ \begin{array}{c} \frac{ \triangleright~\{p\}\, P\, \{{{\bf N}}q\lor {\cal E}_{1}x\} \quad \triangleright~\{q\}\, Q\, \{{{\bf N}}r\lor {\cal E}_{1}x\} }{ \triangleright~\{p\}\, P\,{;}\,Q\,\, \{{{\bf N}}r\lor {\cal E}_{1}x\} }\mbox{\footnotesize[seq]} \qquad \frac{ \triangleright~\{p\}\, P\, \{{\bf B} q\lor {{\bf N}}p\lor {\cal E}_{2}x\} }{ \triangleright~\{p\}\, \code{do} \,\,P\, \{{{\bf N}}q\lor {\cal E}_{2}x\} }\mbox{\footnotesize[do]} \\[2ex] \frac{ }{ \triangleright~\{p\}\, \code{skip}\, \{{{\bf N}}\,p\} }\mbox{\footnotesize[skp]} \qquad \frac{ }{ \triangleright~\{p\}\, \code{return}\, \{{{\bf R}}\,p\} }\mbox{\footnotesize[ret]} \\[2ex] \frac{ }{ \triangleright~\{p\}\, \code{break}\,\,\, \{{{\bf B}}\,p\} }\mbox{\footnotesize[brk]} \quad \mbox{\footnotesize[$p{\rightarrow\kern0.5pt} p_l$]} \frac{ }{ {{\bf G}}_l\,p_l\,\triangleright~\{p\}\, \code{goto}\,\,l\, \{{{\bf G}}_l\,p\} }\mbox{\footnotesize[go]} \\[2ex] \frac{ }{ \triangleright~\{p\}\, \code{throw}\,\,k\, \{{{\bf
E}}_k\,p\} }\mbox{\footnotesize[throw]} \quad \frac{ }{ \triangleright~ \{q[e/x] \}\,\, x{=}e\,\, \{{{\bf N}}q\} }\mbox{\footnotesize[let]} \\[2ex] \frac{ \triangleright~ \{q\land p\}\, P\, \{r\} }{ \triangleright~\{p\}\, q\,{{\rightarrow\kern0.5pt}}P\,\, \{r\} }\mbox{\footnotesize[grd]} \quad \frac{ \triangleright~ \{p \}\, P\,\, \{q\} \quad \triangleright~ \{p \}\, Q\,\, \{q\} }{ \triangleright~ \{p \}\, P\,{\shortmid}\,Q\,\, \{q\} }\mbox{\footnotesize[dsj]} \\[2ex] \mbox{\footnotesize$[{\bf N} p_l{\rightarrow\kern0.5pt} q]$} \frac{ {{\bf G}}_l\,p_l\,\,\triangleright~ \{p \}~ P~ \{q\} }{ {{\bf G}}_l\,p_l\,\,\triangleright~ \{p \}~ P:l~ \{q\} }\mbox{\footnotesize[frm]} \qquad \frac{ {{\bf G}}_l\,p_l\,\,\triangleright~ \{p \}~ P~ \{{{\bf G}}_l p_l \lor {{\bf N}}q \lor {\cal E}_{3}x\} }{ \triangleright~ \{p \}~ \code{label}~l.P~ \{{{\bf N}}q\lor {\cal E}_{3}x\} }\mbox{\footnotesize[lbl]} \\[2ex] \frac{ \triangleright~ \{p \}~ [h]~ \{{\bf R} r \lor {{\bf E}}_kx_k\} }{ {{\bf G}}_l p_l\,\triangleright~ \{p \}~ \code{call}~h~ \{{{\bf N}}r\lor {{\bf E}}_kx_k\} }\mbox{\footnotesize[sub]} \quad \frac{ \triangleright~ \{p \}~ P~ \{{{\bf N}}r\lor {{\bf E}}_kq\lor{\cal E}_4 x\} \quad \triangleright~ \{q \}~ Q~ \{{{\bf N}}r\lor {{\bf E}}_k x_k\lor{\cal E}_4 x\} }{ \triangleright~ \{p \}~ \code{try}~P~\code{catch}(k)~Q~\{{{\bf N}}r\lor {{\bf E}}_k x_k\lor{\cal E}_4x \} }\mbox{\footnotesize[try]} \\[2ex] \frac{ \triangleright~\{p_i\}~P~\{q\} }{ \triangleright~\{ {\lor}\kern-3pt{\lor} p_i\}~P~\{ q\} } \qquad \frac{ \triangleright~\{p\}~P~\{q_i\} }{ \triangleright~\{ p\}~P~\{ {\land}\kern-3pt{\land} q_i\} } \qquad \frac{ {{\bf G}}_l\,p_{li}\,\triangleright~\{p\}~P~\{q\} }{ {\lor}\kern-3pt{\lor} {{\bf G}}_l\,p_{li}\,\triangleright~\{ p\}~P~\{ q\} } \\[2ex] \mbox{\footnotesize[$p'{\rightarrow\kern0.5pt} p, q{\rightarrow\kern0.5pt} q', p_l'{\rightarrow\kern0.5pt} p_l|{{\bf G}}_lq'{\rightarrow\kern0.5pt} {{\bf G}}_lp'_l$]} \frac{ {{\bf G}}_l\,p_l\,\triangleright~\{ p\}~P~\{ q\} }{ 
{{\bf G}}_l\,p_l'\,\triangleright~\{ p'\}~P~\{ q'\} } \end{array} \] \customlabel{\thedefinition}{tab:NRBG} \end{table} This section sets out a semantic model for the full NRBG(E) logic (`NRB' for short) shown in Table~\ref{tab:NRBG}. The `NRBG' part stands for `normal, return, break, goto', and the `E' part treats exceptions (catch/throw in Java, setjmp/longjmp in C), aiming at a complete treatment of classical imperative languages. This semantics simplifies a {\em trace model} presented in the Appendix to \cite{SCP}, substituting state transitions here for the traces there. A natural model of a program is as a relation of type $\mathds{P}(S\times S)$, expressing possible changes in a state of type $S$ as a set of pairs of initial and final states. We shall add a {\em colour} to this picture. The `colour' shows if the program has run {\em normally} through to the end (colour `${\bf N}$') or has terminated early via a \code{return} (colour `${\bf R}$'), \code{break} (colour `${\bf B}$'), \code{goto} (colour `${\bf G}_l$' for some label $l$) or an exception (colour `${\bf E}_k$' for some exception kind $k$). The aim is to document precisely the control flow in the program. In this picture, a deterministic program may be modelled as a set of `coloured' transitions of type \[ \mathds{P}(S\times \star\times S) \] where the colours $\star$ are a disjoint union \[ \star = \{{\bf N}\} \sqcup \{{\bf R}\} \sqcup \{{\bf B}\} \sqcup \{{\bf G}_l\,|\,l\in L\} \sqcup \{{\bf E}_k\,|\,k\in K\} \] and $L$ is the set of possible \code{goto} labels and $K$ the set of possible exception kinds. The programs we consider are in fact deterministic, but we will use the general setting. Where the relation is not defined on some initial state $s$, we understand that the initial state $s$ leads to the program getting hung up in an infinite loop, instead of terminating.
Relations representing deterministic programs thus have a set of images for any given initial state that is either of size zero (`hangs') or one (`terminates'). Only paths through the program that do not `hang' in an infinite loop are of interest to us, and what the NRB logic will say about a program at some point will be true only supposing control reaches that point, which it may never do. Programs are put together in sequence with the second program accepting as inputs only the states that the first program ends `normally' with. Otherwise the state with which the first program exited abnormally is the final outcome. That is, \begin{align*} \llbracket P;Q\rrbracket &= \{ s_0\mathop{\mapsto}\limits^\iota s_1 \in \llbracket P\rrbracket ~|~ \iota\ne{\bf N}\}\\ &\cup \,\{s_0\mathop{\mapsto}\limits^\iota s_2 \mid s_1\mathop{\mapsto}\limits^\iota s_2 \in \llbracket Q\rrbracket ,~ s_0\mathop{\mapsto}\limits^{\bf N} s_1\in \llbracket P\rrbracket \} \end{align*} This statement is not complete, however, because abnormal exits with a \code{goto} from $P$ may still re-enter in $Q$ if the \code{goto} label is in $Q$, and proceed. We postpone consideration of this eventuality by predicating the model with the sets of states $g_l$ {\em hypothesised} as being fed in at the label $l$ in the code. The model of $P$ and $Q$ with these sets as assumptions produces outputs that take account of these putative extra inputs at label $l$: \begin{align*} \llbracket P;Q\rrbracket_g &= \{ s_0\mathop{\mapsto}\limits^\iota s_1 \in \llbracket P\rrbracket_g ~|~ \iota\ne{\bf N}\}\\ &\cup \,\{s_0\mathop{\mapsto}\limits^\iota s_2 \mid s_1\mathop{\mapsto}\limits^\iota s_2 \in \llbracket Q\rrbracket_g ,~ s_0\mathop{\mapsto}\limits^{\bf N} s_1\in \llbracket P\rrbracket_g \} \end{align*} Later, we will tie things up by ensuring that the sets of states bound to early exits via a \code{goto}~$l$ in $P$ are exactly the sets $g_l$ hypothesised here as entries at label $l$ in $Q$ (and vice versa).
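To make the sequencing rule above concrete, here is a small executable sketch (Python chosen only for illustration; all names are ours, not part of the paper's formalism). A program is represented as a set of (initial state, colour, final state) triples, and `seq` implements exactly the composition just defined: abnormal exits of the first program are final, and normal exits chain into the second program.

```python
# A sketch of the coloured-transition model of programs. Colours are tags
# "N", "R", "B", ("G", l), ("E", k); a program is a set of triples
# (initial state, colour, final state). All names are illustrative.

def seq(P, Q):
    """Model of P;Q: abnormal exits of P are final; normal exits chain into Q."""
    abnormal = {(s0, c, s1) for (s0, c, s1) in P if c != "N"}
    chained = {(s0, c, s2)
               for (s0, c0, s1) in P if c0 == "N"
               for (t1, c, s2) in Q if t1 == s1}
    return abnormal | chained

S = {0, 1, 2}
skip = {(s, "N", s) for s in S}   # ends normally, state unchanged
ret = {(s, "R", s) for s in S}    # exits at once via a return flow

# skip;return behaves like return, and so does return;skip: a return
# blocks the second component because none of its transitions end normally.
assert seq(skip, ret) == ret
assert seq(ret, skip) == ret
```

Note that where the relation is undefined on a state (a `hang'), `seq` simply produces no transition from that state, matching the partiality of the model.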
The type of the {\em interpretation} expressed by the fancy square brackets is \[ \llbracket{-_1}\rrbracket_{-_2} : {\mathscr C}{\rightarrow\kern0.5pt}(L\pfun \mathds{P}S){\rightarrow\kern0.5pt} \mathds{P}(S\times \star\times S) \] where $g$, the second argument/suffix, has the partial function type $L\pfun \mathds{P}S$ and the first argument/bracket interior has type $\mathscr C$, denoting a simple language of imperative statements whose grammar is set out in Table~\ref{tab:BNF}. The models of some of its very basic statements as members of $\mathds{P}(S\times\star\times S)$ are shown in Table~\ref{tab:ex1to4} and we will discuss them and the interpretations of other language constructs below. \begin{table}[tb] \subtable{}{ \fbox{ \begin{minipage}[t]{0.465\textwidth} A \code{skip} statement is modelled as \[ \llbracket \code{skip} \rrbracket_g = \{ s\mathop{\mapsto}\limits^{{\bf N}} s\mid s\in S \} \] It makes the transition from a state to the same state again, and ends `normally'. \end{minipage} } } \subtable{}{ \fbox{ \begin{minipage}[t]{0.465\textwidth} A \code{return} statement has the model \[ \llbracket \code{return} \rrbracket_g = \{ s\mathop{\mapsto}\limits^{{\bf R}} s\mid s\in S \} \] It exits at once `via a return flow' after a single, trivial transition. \end{minipage} } } \subtable{}{ \fbox{ \begin{minipage}[t]{0.465\textwidth} The model of $\code{skip};\code{return}$ is \[ \llbracket \code{skip};\code{return}\rrbracket_g = \{ s\mathop{\mapsto}\limits^{{\bf R}} s\mid s\in S \} \] which is the same as that of \code{return}. It is made up of the compound of two trivial state transitions, $s\mathop{\mapsto}\limits^{{\bf N}} s$ from \code{skip} and $s\mathop{\mapsto}\limits^{{\bf R}} s$ from \code{return}, the latter ending in a `return flow'. 
\end{minipage} } } \subtable{}{ \fbox{ \begin{minipage}[t]{0.465\textwidth} The $\code{return};\code{skip}$ compound is modelled as: \[ \llbracket \code{return};\code{skip}\rrbracket_g = \{ s\mathop{\mapsto}\limits^{{\bf R}} s\mid s\in S \} \] It is made up of just the $s\mathop{\mapsto}\limits^{{\bf R}} s$ transitions from \code{return}. There is no transition that can be formed as the composition of a transition from \code{return} followed by a transition from \code{skip}, because no transition of the first ends `normally'. \end{minipage} } } \caption{Models of simple statements.} \customlabel{\thedefinition}{tab:ex1to4} \end{table} A real imperative programming language such as C can be mapped onto $\mathscr C$ -- in principle exactly, but in practice rather approximately with respect to data values, as will be indicated below. \begin{table}[t] \caption{Grammar of the abstract imperative language $\mathscr C$, where integer variables $x\in X$, term expressions $e \in \mathscr E$, boolean expressions $b \in \mathscr B$, labels $l \in L$, exceptions $k\in K$, statements $c \in \mathscr C$, integer constants $n \in {\mathds{Z}}$, infix binary relations $r \in R$, subroutine names $h \in H$. Note that labels (the targets of \code{goto}s) are declared with `\code{label}' and a label cannot be the first thing in a code sequence; it must follow some statement. Instead of \code{if}, $\mathscr C$ has guarded statements, and explicit nondeterminism, which, however, is only to be used here in the deterministic construct $b{\rightarrow\kern0.5pt} P \shortmid \lnot b{\rightarrow\kern0.5pt} Q$ for code fragments $P$, $Q$.
} \customlabel{\thedefinition}{tab:BNF} \footnotesize \begin{align*} {\mathscr C}~{:}{:}& {\text{=}} ~\code{skip} ~{\mid}~\code{return} ~{\mid}~\code{break} ~{\mid}~\code{goto}\,\,l ~{\mid}~c{;}c ~{\mid}~ x {=} e ~{\mid} ~b{{\rightarrow\kern0.5pt}} c ~{\mid}~c\,{\shortmid}\,c ~{\mid}~\code{do}~c ~{\mid}~c\,{:}\,l ~{\mid}~\code{label}\,\,l.c ~{\mid}~\code{call}\,\,h\\ &\mid ~\code{try}~c~\code{catch}(k)~c ~{\mid}~\code{throw}\,\,k \\ \mathscr{E}~{:}{:}& {\text{=}} ~n \mid x \mid n*e \mid e + e \mid b\,?\,e:e \\ \mathscr{B}~{:}{:}& {\text{=}} ~\top \mid \bot \mid e~r~e \mid b \lor b \mid b \land b \mid \lnot b \mid \exists x. b \\ R~{:}{:}& {\text{=}} ~{<} \mid {>} \mid {\le} \mid {\ge} \mid {=} \mid {\ne} \end{align*} \end{table} A conventional $\code{if}(b)~P~\code{else}~Q$ statement in C is written as the nondeterministic choice between two guarded statements $b{\rightarrow\kern0.5pt} P\shortmid\lnot b{\rightarrow\kern0.5pt} Q$ in the abstract language $\mathscr C$; the conventional $\code{while}(b)~P$ loop in C is expressed as $\code{do}\{\lnot b{\rightarrow\kern0.5pt}\code{break}\shortmid b{\rightarrow\kern0.5pt} P\}$, using the forever-loop of $\mathscr C$, etc. A sequence $P; l: Q$ in C with a label $l$ in the middle should strictly be expressed as $P : l; Q$ in $\mathscr C$, but we regard $P ; l : Q$ as syntactic sugar for that, so it is still permissible to write $P ; l: Q$ in $\mathscr C$. As a very special syntactic sweetener, we permit $l : Q$ too, even when there is no preceding statement $P$, regarding it as an abbreviation for $\code{skip} : l; Q$. Curly brackets may be used to group code statements for clarity in $\mathscr C$, and parentheses may be used to group expressions. The variables are globals and are not formally declared. The terms of $\mathscr C$ are piecewise linear integer forms in integer variables, so the boolean expressions are piecewise comparisons between linear forms. 
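The encoding of conditionals as guarded choice can be exercised directly in the transition model. The sketch below (Python, continuing the illustrative representation of programs as sets of coloured triples; all names are ours) models guards and nondeterministic choice, and checks that the encoding $b{\rightarrow\kern0.5pt} P \shortmid \lnot b{\rightarrow\kern0.5pt} Q$ of an \code{if} is deterministic because the two guards are disjoint:

```python
# Sketch: guarded statements and choice over the coloured-transition model.
# States are integers standing for a single variable x; names illustrative.

def guard(b, P):
    """Model of b -> P: keep only transitions whose initial state satisfies b."""
    return {(s0, c, s1) for (s0, c, s1) in P if b(s0)}

def choice(P, Q):
    """Model of P | Q: the plain union of the two transition sets."""
    return P | Q

S = range(-3, 4)
negate = {(s, "N", -s) for s in S}   # the assignment x = -1*x
skip = {(s, "N", s) for s in S}

# if (x < 0) x = -x  encoded as  x<0 -> negate | not(x<0) -> skip
abs_prog = choice(guard(lambda s: s < 0, negate),
                  guard(lambda s: s >= 0, skip))

# Determinism: exactly one transition per initial state, and the final
# state is the absolute value of the initial one.
assert all(s1 == abs(s0) for (s0, _, s1) in abs_prog)
assert len(abs_prog) == len(set(S))
```

Dropping the $\lnot b$ guard on the second branch would make the choice genuinely nondeterministic, which is why the deterministic fragment of $\mathscr C$ insists on complementary guards.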
\begin{example} A valid integer term is `$\rm 5x + 4y + 3$', and a boolean expression is `$\rm 5x + 4y + 3 < z - 4 \land y \le x$'. In consequence another valid integer term, taking the value of the first on the range defined by the second, and 0 otherwise, is `$\rm (5x + 4y + 3 < z - 4 \land y \le x)\,?\,5x + 4y + 3:0$'. \end{example} \noindent The limited set of terms in $\mathscr C$ makes it practically impossible to map standard imperative language assignments as simple as `$\rm x=x*y$' or `$\rm x= x\mid y$' (the bitwise or) succinctly. In principle, those could be expressed exactly point by point using conditional expressions (with at most $2^{32}$ disjuncts), but it is usual to model all those cases by means of an abstraction away from the values taken to attributes that can be represented more elegantly using piecewise linear terms. The abstraction may be to how many times the variable has been read since last written, for example, which maps `$\rm x= x*y$' to `$\rm x = x+1; y = y+1; x = 0$'. Formally, terms have a conventional evaluation as integers and booleans that is shown (for completeness!) in Table~\ref{tab:ev}. The reader may note the notation $s\,x$ for the evaluation of the variable named $x$ in state $s$, giving its integer value as result. We say that state $s$ {\em satisfies} boolean term $b\in\mathscr B$, written $s\models b$, whenever $\llbracket b\rrbracket s$ holds. \begin{table}[t] \caption{The conventional evaluation of integer and boolean terms of $\mathscr C$, for variables $x\in X$, integer constants $\kappa\in{\mathds{Z}}$, using $s\,x$ for the (integer) value of the variable named $x$ in a state $s$. The form $b[n/x]$ means `expression $b$ with integer $n$ substituted for all unbound occurrences of $x$'.
} \customlabel{\thedefinition}{tab:ev} \footnotesize \[ \begin{array}[t]{@{}r@{~}c@{~}l} \llbracket-\rrbracket&:&{\mathscr E} {\rightarrow\kern0.5pt} S {\rightarrow\kern0.5pt} {\mathds{Z}}\\ \llbracket x \rrbracket s&=&s\,x\\ \llbracket \kappa \rrbracket s&=&\kappa\\ \llbracket \kappa*e \rrbracket s&=&\kappa*\llbracket e \rrbracket s\\ \llbracket e_1 + e_2 \rrbracket s&=&\llbracket e_1 \rrbracket s + \llbracket e_2 \rrbracket s\\ \llbracket b\,?\,e_1:e_2 \rrbracket s&=& \mbox{if} ~\llbracket b \rrbracket s~ \mbox{then}~ \llbracket e_1 \rrbracket s~\mbox{else}~ \llbracket e_2 \rrbracket s \end{array} \quad \begin{array}[t]{r@{~}c@{~}l@{}} \llbracket-\rrbracket&:&{\mathscr B}{\rightarrow\kern0.5pt} S {\rightarrow\kern0.5pt} \mbox{\bf bool}\\ \llbracket \top \rrbracket s&=&\top\qquad \llbracket \bot \rrbracket s =\bot\\ \llbracket e_1 < e_2 \rrbracket s &=&\llbracket e_1 \rrbracket s < \llbracket e_2 \rrbracket s\\ \llbracket b_1 \lor b_2 \rrbracket s &=&\llbracket b_1 \rrbracket s \lor \llbracket b_2 \rrbracket s\\ \llbracket b_1 \land b_2 \rrbracket s &=&\llbracket b_1 \rrbracket s \land \llbracket b_2 \rrbracket s\\ \llbracket \lnot b \rrbracket s &=&\lnot (\llbracket b \rrbracket s)\\ \llbracket \exists x. b\rrbracket s &=& \exists n\in \mathds{Z}. \llbracket b[n/x] \rrbracket s \end{array} \] \end{table} The \code{label} construct of $\mathscr C$ declares a label $l\in L$ that may subsequently be used as the target in \code{goto}s. The component $P$ of the construct is the body of code in which the label is {\em in scope}. A label may not be mentioned except in the scope of its declaration. The same label may not be declared again in the scope of the first declaration. The semantics of labels and \code{goto}s will be further explained below. The only way of exiting the $\mathscr C$ \code{do} loop construct normally is via \code{break} in the body $P$ of the loop. An abnormal exit other than \code{break} from the body $P$ terminates the whole loop abnormally. 
Terminating the body $P$ normally evokes one more turn round the loop. So conventional \code{while} and \code{for} loops need to be mapped to a \code{do} loop with a guarded \code{break} statement inside, at the head of the body. The precise models for this and every construct of $\mathscr C$ as a set of coloured transitions are enumerated in Table~\ref{tab:interpretation}. \begin{table}[t] \caption{Model of programs of language $\mathscr C$, given as hypothesis the sets of states $g_l$ for $l\in L$ observable at $\code{goto}~l$ statements. A recursive reference means `the least set satisfying the condition'. For $h\in H$, the subroutine named $h$ has code $[h]$. The state $s$ altered by the assignment of $n$ to variable $x$ is written $s[x\mapsto n]$. } \customlabel{\thedefinition}{tab:interpretation} \footnotesize \[ \begin{array}[t]{@{}r@{~}l@{~}} \llbracket-\rrbracket_g&:~\mathscr C {\rightarrow\kern0.5pt} \mathds{P}(S\times\star\times S)\notag\\[0.5ex] \llbracket\code{skip}\rrbracket_g &= \{s_0\mathop{\mapsto}\limits^{{\bf N}}s_0\mid s_0\in S\}\\ \llbracket\code{return}\rrbracket_g &= \{s_0\mathop{\mapsto}\limits^{{\bf R}}s_0\mid s_0\in S\}\\ \llbracket\code{break}\rrbracket_g &= \{s_0\mathop{\mapsto}\limits^{{\bf B}}s_0\mid s_0\in S\}\\ \llbracket\code{goto}~l\rrbracket_g &= \{s_0\mathop{\mapsto}\limits^{{\bf G}_l}s_0\mid s_0\in S\}\\ \llbracket\code{throw}~k\rrbracket_g &= \{s_0\mathop{\mapsto}\limits^{{\bf E}_k}s_0\mid s_0\in S\}\\ \llbracket P;Q\rrbracket_g &= \{ s_0\mathop{\mapsto}\limits^\iota s_1 \in \llbracket P\rrbracket_g \mid \iota \ne {\bf N} \}\\ &\cup~ \{ s_0\mathop{\mapsto}\limits^\iota s_2 \mid s_1\mathop{\mapsto}\limits^\iota s_2\in\llbracket Q\rrbracket_g ,~ s_0\mathop{\mapsto}\limits^{{\bf N}} s_1\in \llbracket P\rrbracket_g \} \\ \llbracket x=e\rrbracket_g &= \{ s_0\mathop{\mapsto}\limits^{{\bf N}} s_0[x\mapsto\llbracket e\rrbracket s_0] \mid s_0\in S \} \\ \llbracket p {\rightarrow\kern0.5pt} P\rrbracket_g &= \{
s_0\mathop{\mapsto}\limits^\iota s_1 \in \llbracket P\rrbracket_g \mid \llbracket p\rrbracket s_0\} \\ \llbracket P\shortmid Q\rrbracket_g&= \llbracket P\rrbracket_g \cup \llbracket Q\rrbracket_g \\ \llbracket \code{do}~P\rrbracket_g &= \{ s_0\mathop{\mapsto}\limits^{{\bf N}} s_1 \mid s_0\mathop{\mapsto}\limits^{{\bf B}} s_1 \in \llbracket P\rrbracket_g \}\\ &\cup~ \{ s_0\mathop{\mapsto}\limits^\iota s_1 \in \llbracket P\rrbracket_g \mid~\iota\ne {{\bf N}},{{\bf B}} \}\\ &\cup~ \{ s_0\mathop{\mapsto}\limits^\iota s_2 \mid s_1\mathop{\mapsto}\limits^\iota s_2\in \llbracket \code{do}~P\rrbracket_g ,~ s_0\mathop{\mapsto}\limits^{{\bf N}} s_1 \in \llbracket P\rrbracket_g \} \\ \llbracket P : l \rrbracket_g &= \llbracket P\rrbracket_g \\ &\cup~ \{ s_0\mathop{\mapsto}\limits^{{\bf N}} s_1 \mid s_0\in S,~s_1 \in g_l \} \\ \llbracket \code{label}~ l~ P\rrbracket_g &= \llbracket P \rrbracket_{g\cup\{l\mapsto g_l^*\}} - \{ s_0\mathop{\mapsto}\limits^{{\bf G}_l} s_1\in \llbracket P \rrbracket_{g\cup\{l\mapsto g_l^*\}} \} \customlabel{\thedefinition}{eq:label} \\ &~\mbox{where}~g_l^* = \{ s_1 \mid s_0\mathop{\mapsto}\limits^{{\bf G}_l} s_1\in \llbracket P \rrbracket_{g\cup\{l\mapsto g_l^*\}} \} \\ \llbracket \code{call}~h\rrbracket_g &= \{ s_0\mathop{\mapsto}\limits^{{\bf N}} s_1 \mid s_0\mathop{\mapsto}\limits^{{\bf R}} s_1 \in \llbracket [h]\rrbracket_{\{\,\}} \} \\ &\cup~ \{ s_0\mathop{\mapsto}\limits^{{\bf E}_k} s_1 \in \llbracket [h]\rrbracket_{\{\,\}} \mid k\in K \} \\ \llbracket \code{try}~P~\code{catch}(k)~Q\,\rrbracket_g &= \{ s_0\mathop{\mapsto}\limits^\iota s_1 \in\llbracket P\rrbracket_g \mid ~\iota\ne {{\bf E}}_k \}\\ &\cup~ \,\{ s_0\mathop{\mapsto}\limits^\iota s_2 \mid s_1\mathop{\mapsto}\limits^\iota s_2\in \llbracket Q\rrbracket_g ,~ s_0\mathop{\mapsto}\limits^{{\bf E}_k} s_1\in \llbracket P\rrbracket_g \} \end{array} \] \end{table} Among the list of models in Table~\ref{tab:interpretation}, that of \code{label} declarations in particular requires explanation because labels are more explicitly controlled in $\mathscr C$ than in standard imperative languages.
Declaring a label $l$ makes it invisible from the outside of the block (while enabling it to be used inside), working just the same way as a local variable declaration does in a standard imperative programming language. A declaration removes from the model of a labelled statement the dependence on the hypothetical set $g_l$ of the states attained at \code{goto}~$l$ statements. All the instances of \code{goto}~$l$ statements are inside the block with the declaration at its head, so we can take a look to see what totality of states really do accrue at \code{goto}~$l$ statements; they are recognisable in the model because they are the outcomes of the transitions that are marked with ${\bf G}_l$. Equating the set of such states with the hypothesis $g_l$ gives the (least) fixpoint $g_l^*$ required in the \code{label}~$l$ model. The hypothetical sets $g_l$ of states that obtain at \code{goto}~$l$ statements are used at the point where the label $l$ appears within the scope of the declaration. We say that any of the states in $g_l$ may be an outcome of passing through the label $l$, because it may have been brought in by a \code{goto}~$l$ statement. That is an overestimate; in reality, if the state just before the label is $s_1$, then at most those states $s_2$ in $g_l$ that are reachable at a \code{goto}~$l$ from an initial program state $s_0$ that also leads to $s_1$ (either $s_1$ first or $s_2$ first) may obtain after the label $l$, and that may be considerably fewer $s_2$ than we calculate in $g_l^*$. 
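The least fixpoint $g_l^*$ can be computed by the standard Kleene iteration from the empty hypothesis, since enlarging $g_l$ can only add transitions to the model (the construction is monotone). A minimal sketch follows (Python; the `observed` map is our stand-in for extracting the final states of the ${\bf G}_l$-coloured transitions of $\llbracket P\rrbracket_{g\cup\{l\mapsto g_l\}}$, and the toy instance is invented for illustration):

```python
# Kleene iteration to the least fixpoint g_l^*: feed the hypothesised
# entry set back in until the set of states observed at goto-l
# statements stops growing. `observed` stands in for the G_l-outcomes
# of the model of the declaration body P; all names are illustrative.

def lfp(observed):
    g = frozenset()
    while True:
        g_next = frozenset(observed(g))
        if g_next == g:
            return g
        g = g_next

# Toy monotone instance: state 0 always reaches goto l, and any entry
# state below 2 reaches goto l again carrying its successor state.
observed = lambda g: {0} | {s + 1 for s in g if s < 2}
assert lfp(observed) == frozenset({0, 1, 2})
```

The iteration terminates whenever the state space relevant to the label is finite; in the analyses the states at a label are described by predicates, and the iteration is performed on those instead.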
Here is a visualisation of such a situation; the curly arrows denote a trace: \[ \begin{array}{ccc@{\quad}l@{\quad}l} & & \{s_1\}&l:&\kern-20pt\{s_1,s_2\}\\[-1ex] &\rotatebox{45}{$\leadsto$}&\\[-1ex] \{s_0\}& &\raisebox{1.25ex}{\rotatebox{-90}{$\leadsto$}}\\[-1ex] &&\\[-1ex] & & \{s_2\}&\code{goto}~l \end{array} \] If the initial precondition on the code admits more than one initial state $s_0$ then the model may admit more states $s_2$ after the label $l$ than occur in reality when $s_1$ precedes $l$, because the model does not take into account the dependence of $s_2$ on $s_1$ through $s_0$. It is enough for the model that $s_2$ proceeds from some $s_0$ and $s_1$ proceeds from some (possibly different) $s_0$ satisfying the same initial condition. In mitigation, \code{goto}s are sparsely distributed in real codes and we have not found the effect detrimental in practice. \begin{example} Consider the code $R$ and suppose the input is restricted to a unique state $s$: \[ \code{label}~A,B.\overbrace{ \underbrace{\code{skip};~\code{goto}~A ; ~B:~\code{return} ;~A}_Q:~\code{goto}~B}^P \] with labels $A$, $B$ in scope in body $P$, and the marked fragment $Q$. The single transitions made in the code $P$ and the corresponding statement sequences are: \begin{align*} &s\mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{{\bf G}}_A} s &\#~& \code{skip};~\code{goto}~A;\\[-0.75ex] &s \mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{{\bf G}}_B} s &\#~& \code{skip};~\code{goto}~A;A:~\code{goto}~B\\[-0.75ex] &s \mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{{\bf N}}} s \mathop{\mapsto}\limits^{{\bf R}} s &\#~& \code{skip};~\code{goto}~A;A:~\code{goto}~B;B:~\code{return} \end{align*} with observed states $g_A = \{ s \}$, $g_B = \{ s \}$ at the labels $A$ and $B$ respectively.
The $\code{goto}~B$ statement is not in the fragment $Q$ so there is no way of knowing about the set of states at $\code{goto}~B$ while examining $Q$. Without that input, the traces of $Q$ are \begin{align*} &s \mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{{\bf G}}_A} s &\#~&\code{skip};~\code{goto}~A\hspace{1in}\\ &s \mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{\bf N}} s &\#~&\code{skip};~\code{goto}~A;A:\hspace{1in} \end{align*} There are no possible entries at $B$ originating from within $Q$ itself. That is, the model $\llbracket Q\rrbracket_g$ of $Q$ as a set of transitions assuming $g_B = \{\,\}$, meaning there are no entries from outside, is $\llbracket Q \rrbracket_g = \{ s\mathop{\mapsto}\limits^{{\bf N}}s,s\mathop{\mapsto}\limits^{{\bf G}_A}s \}$. When we hypothesise $g_B = \{ s \}$ for $Q$, then $Q$ has more traces: \begin{align*} &s \mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{\bf N}} s \mathop{\mapsto}\limits^{{{\bf N}}} s \mathop{\mapsto}\limits^{{\bf R}} s &\#~& \code{skip};~\code{goto}~A;A:~\code{goto}~B;B:~\code{return} \end{align*} corresponding to these entries at $B$ from the rest of the code proceeding to the \code{return} in $Q$, and $\llbracket Q\rrbracket_g = \{ s\mathop{\mapsto}\limits^{{\bf N}}s,~ s\mathop{\mapsto}\limits^{{{\bf G}}_A}s,~ s\mathop{\mapsto}\limits^{{\bf R}}s \}$. In the context of the whole code $P$, that is the model for $Q$ as a set of initial to final state transitions. \customlabel{\thedefinition}{ex:6} \end{example} \begin{example} Staying with the code of Example~\ref{ex:6}, the set $\{ s\mathop{\mapsto}\limits^{{{\bf G}}_A}s,~ s\mathop{\mapsto}\limits^{{{\bf G}}_B}s,~ s\mathop{\mapsto}\limits^{{\bf R}}s \}$ is the model $\llbracket P\rrbracket_g$ of $P$ starting at state $s$ with assumptions $g_A$, $g_B$ of Example~\ref{ex:6}, and the sets $g_A$, $g_B$ are observed at the labels $A$, $B$ in the code under these assumptions. 
Thus $\{A\mapsto g_A, B\mapsto g_B\}$ is the fixpoint $g^*$ of the {\bf label} declaration rule in Table~\ref{tab:interpretation}. That rule says to next remove transitions ending at \code{goto}~$A$s and $B$s from visibility in the model of the declaration block, because they can go nowhere else, leaving only $\llbracket R\rrbracket_{\{\,\}} = \{ s\mathop{\mapsto}\limits^{{\bf R}}s\}$ as the set-of-transitions model of the whole block of code, which corresponds to the sequence $\code{skip};\code{goto}~A;A:~\code{goto}~B;B:~\code{return}$. \end{example} \noindent We extend the propositional language to ${\mathscr B}^*$ which includes the modal operators ${{\bf N}}$, ${{\bf R}}$, ${{\bf B}}$, ${{\bf G}}_l$, ${{\bf E}}_k$ for $l\in L$, $k\in K$, as shown in Table~\ref{tab:B*}, which defines a model of $\mathscr B^*$ on transitions. The predicate ${\bf N} p$ informally should be read as picking out from the set of all coloured state transitions `those normal-coloured transitions that produce a state satisfying $p$', and similarly for the other operators. \begin{table}[t] \caption{ Extending the language $\mathscr B$ of propositions to modal operators ${{\bf N}}$, ${{\bf R}}$, ${{\bf B}}$, ${{\bf G}}_l$, ${{\bf E}}_k$ for $l\in L$, $k\in K$. An evaluation on transitions is given for $b\in \mathscr B$, $b^* \in \mathscr B^*$. 
} \footnotesize \customlabel{\thedefinition}{tab:B*} \[ \mathscr B^*~{:}{:}{\text{=}}~ b \mid {{\bf N}}\,b^* \mid {{\bf R}}\,b^* \mid {{\bf B}}\,b^* \mid {{\bf G}}_l\,b^* \mid {{\bf E}}_k\,b^* \mid b^* \lor b^* \mid b^* \land b^* \mid \lnot b^* \] \[ \begin{array}[t]{r@{~}c@{~}l} \llbracket b \rrbracket(s_0\mathop{\mapsto}\limits^{\iota} s_1)&=&\llbracket b\rrbracket s_1 \\ \llbracket {{\bf N}}\,b^* \rrbracket(s_0\mathop{\mapsto}\limits^{\iota} s_1) &=& (\iota={{\bf N}}) \land \llbracket b^*\rrbracket (s_0\mathop{\mapsto}\limits^{\iota} s_1) \\ \llbracket {{\bf R}}\,b^* \rrbracket(s_0\mathop{\mapsto}\limits^{\iota} s_1) &=& (\iota={{\bf R}}) \land \llbracket b^*\rrbracket (s_0\mathop{\mapsto}\limits^{\iota} s_1) \\ \llbracket {{\bf B}}\,b^* \rrbracket(s_0\mathop{\mapsto}\limits^{\iota} s_1) &=& (\iota={{\bf B}}) \land \llbracket b^*\rrbracket (s_0\mathop{\mapsto}\limits^{\iota} s_1) \\ \llbracket {{\bf G}}_l\,b^* \rrbracket(s_0\mathop{\mapsto}\limits^{\iota} s_1) &=& (\iota={{\bf G}}_l) \land \llbracket b^*\rrbracket (s_0\mathop{\mapsto}\limits^{\iota} s_1) \\ \llbracket {{\bf E}}_k\,b^* \rrbracket(s_0\mathop{\mapsto}\limits^{\iota} s_1) &=& (\iota={{\bf E}}_k) \land \llbracket b^*\rrbracket (s_0\mathop{\mapsto}\limits^{\iota} s_1) \end{array} \] \end{table} The modal operators satisfy the algebraic laws given in Table~\ref{tab:modal}. Additionally, however, for non-modal $p\in \mathscr B$, \begin{equation} p = {{\bf N}} p \lor {{\bf R}} p \lor {{\bf B}} p \lor \lor\kern-6pt\lor {{\bf G}}_l p \lor \lor\kern-6pt\lor {{\bf E}}_k p \customlabel{\thedefinition}{eq:star} \end{equation} because each transition must be some colour, and those are all the colours.
The decomposition works in the general case too: \begin{proposition} Every $p\in \mathscr B^*$ can be (uniquely) expressed as \[ p = {{\bf N}} p_{{\bf N}} \lor {{\bf R}} p_{{\bf R}} \lor {{\bf B}} p_{{\bf B}} \lor \lor\kern-6pt\lor {{\bf G}}_l p_{{\bf G}_l} \lor \lor\kern-6pt\lor {{\bf E}}_k p_{{\bf E}_k} \] for some $p_{{\bf N}}$, $p_{{\bf R}}$, etc.\ that are free of modal operators. \customlabel{\thedefinition}{prop:1} \end{proposition} \begin{proof} \em Equation \eqref{eq:star} gives the result for $p\in \mathscr B$. The rest is by structural induction on $p$, using Table~\ref{tab:modal} and boolean algebra. Uniqueness follows because ${{\bf N}}p_{{\bf N}} = {{\bf N}}p_{{\bf N}}'$, for example, applying ${{\bf N}}$ to two possible decompositions, and applying the orthogonality and idempotence laws; apply the definition of ${{\bf N}}$ in the model in Table~\ref{tab:B*} to deduce $p_{{\bf N}}= p_{{\bf N}}'$ for non-modal predicates $p_{{\bf N}}$, $p_{{\bf N}}'$. Similarly for ${{\bf B}}$, ${{\bf R}}$, ${{\bf G}}_l$, ${{\bf E}}_k$. \fbox{\vbox to 1ex{}~} \end{proof} \begin{table}[t] \caption{Laws of the modal operators ${{\bf N}}$, ${{\bf R}}$, ${{\bf B}}$, ${{\bf G}}_l$, ${{\bf E}}_k$ with $M,M_1,M_2\in \{{{\bf N}},{{\bf R}},{{\bf B}},{{\bf G}}_l,{{\bf E}}_k\mid l\in L,k\in K\}$ and $M_1\ne M_2$. } \customlabel{\thedefinition}{tab:modal} \footnotesize \begin{align*} M(\bot) &= \bot &\text{(flatness)} \\ M(b_1\lor b_2) &= M(b_1)\lor M(b_2) &\text{(disjunctivity)} \\ M(b_1\land b_2) &= M(b_1)\land M(b_2) &\text{(conjunctivity)} \\ M(M b) &= M b &\text{(idempotence)} \\ M_2(M_1 b) = M_1(b) \land M_2(b) &= \bot&\text{(orthogonality)} \end{align*} \end{table} \noindent So modal formulae $p\in \mathscr{B}^*$ may be viewed as tuples $(p_{{\bf N}},p_{{\bf R}},p_{{\bf B}},p_{{{\bf G}}_l},p_{{{\bf E}}_k})$ of non-modal formulae from $\mathscr{B}$ for labels $l\in L$, exception kinds $k\in K$.
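The tuple view of a modal formula suggests a direct executable reading, sketched below (Python; the representation and names are ours, chosen to mirror the transition model used earlier): a formula is a mapping from colour to a non-modal predicate on the final state, and a coloured transition satisfies the formula exactly when the component for its colour holds of the final state.

```python
# Sketch: a modal formula as a mapping colour -> non-modal predicate,
# following the tuple decomposition; a missing component means "false".

def holds(p, transition):
    """A formula holds of a coloured transition iff the component for the
    transition's colour holds of the transition's final state."""
    s0, colour, s1 = transition
    return p.get(colour, lambda s: False)(s1)

# N(x > 0) \/ R(x = 0), written as its component tuple
q = {"N": lambda s: s > 0, "R": lambda s: s == 0}

assert holds(q, (5, "N", 1))        # a normal exit to a positive state
assert holds(q, (5, "R", 0))        # a return exit to state 0
assert not holds(q, (5, "N", 0))    # wrong component for the colour: fails
assert not holds(q, (5, "B", 1))    # orthogonality: a break exit fails both
```

The idempotence and orthogonality laws fall out of this representation immediately, since each transition consults exactly one component.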
That means that ${{\bf N}} p \lor {{\bf R}} q$, for example, is simply a convenient notation for writing down two assertions at once: one that asserts $p$ of the final states of the transitions that end `normally', and one that asserts $q$ on the final states of the transitions that end in a `return flow'. The meaning of ${{\bf N}} p \lor {{\bf R}} q$ is the union of the set of normal transitions whose final states satisfy $p$ with the set of transitions that end in a `return flow' and whose final states satisfy $q$. We can now give meaning to a notation that looks like (and is intended to signify) a Hoare triple with an explicit context of certain `\code{goto} assumptions': \begin{definition} Let $g_l = {\llbracket p_l \rrbracket}$ be the set of states satisfying $p_l\in \mathscr B$, labels $l\in L$. Then `${{\bf G}}_l\,p_l \triangleright \{p\}~P~\{q\}$', for non-modal $p,p_l\in \mathscr B$, $P\in \mathscr C$ and $q\in \mathscr B^*$, means: \begin{align*} \llbracket {{\bf G}}_l\,p_l\triangleright\{p\}~P~\{q\}\rrbracket &= \llbracket \{p\}~P~\{q\}\rrbracket_g \\ &= \forall s_0\mathop{\mapsto }\limits^{\iota} s_1 \in \llbracket P \rrbracket_g.~ \llbracket p \rrbracket s_0 \Rightarrow \llbracket q \rrbracket (s_0\mathop{\mapsto }\limits^{\iota} s_1) \end{align*} \customlabel{\thedefinition}{def:A5} \end{definition} That is read as `the triple $\{p\}~P~\{q\}$ holds under assumptions $p_l$ at $\code{goto}~l$ when every transition of $P$ that starts at a state satisfying $p$ also satisfies $q$'. The explicit Gentzen-style assumptions $p_l$ are free of modal operators. What is meant by the notation is that those states that may be attainable as the program traces pass through \code{goto} statements are assumed to be restricted to those that satisfy $p_l$. The ${{\bf G}}_l\,p_l$ assumptions may be separated by commas, as ${{\bf G}}_{l_1}\,p_{l_1}, {{\bf G}}_{l_2}\,p_{l_2},\dots$, with $l_1\ne l_2$, etc.
Or they may be written as a disjunction ${{\bf G}}_{l_1}\,p_{l_1}\lor {{\bf G}}_{l_2}\,p_{l_2}\lor\dots$ because the information in this modal formula is only the mapping $l_1\mapsto p_{l_1}$, $l_2\mapsto p_{l_2}$, etc. If the same $l$ appears twice among the disjuncts ${{\bf G}}_l\,p_l$, then we understand that the union of the two $p_l$ is intended. Now we can prove the validity of laws about triples drawn from what Definition~\ref{def:A5} says. The first laws are strengthening and weakening results on pre- and postconditions: \begin{proposition} The following algebraic relations hold: \begin{align} \llbracket \{\bot\}~P~\{q\} \rrbracket_g &{~\mathop{\Leftrightarrow}~} \top \customlabel{\thedefinition}{eq:G1} \\ \llbracket \{p\}~P~\{\top\} \rrbracket_g &{~\mathop{\Leftrightarrow}~} \top \customlabel{\thedefinition}{eq:G2} \\ \llbracket \{p_1\lor p_2\}~P~\{q\} \rrbracket_g &{~\mathop{\Leftrightarrow}~} \llbracket \{p_1\}~P~\{q\} \rrbracket_g \land \llbracket \{p_2\}~P~\{q\} \rrbracket_g \customlabel{\thedefinition}{eq:G3} \\ \llbracket \{p\}~P~\{q_1\land q_2\} \rrbracket_g &{~\mathop{\Leftrightarrow}~} \llbracket \{p\}~P~\{q_1\} \rrbracket_g \land \llbracket \{p\}~P~\{q_2\} \rrbracket_g \customlabel{\thedefinition}{eq:G4} \\ (p_1{{\rightarrow\kern0.5pt}} p_2) \land\llbracket \{p_2\}~P~\{q\} \rrbracket_g &~\mathop{\Rightarrow}~ \llbracket \{p_1\}~P~\{q\} \rrbracket_g \customlabel{\thedefinition}{eq:G5} \\ (q_1{{\rightarrow\kern0.5pt}} q_2) \land\llbracket \{p\}~P~\{q_1\} \rrbracket_g &~\mathop{\Rightarrow}~ \llbracket \{p\}~P~\{q_2\} \rrbracket_g \customlabel{\thedefinition}{eq:G6} \\ \llbracket \{p\}~P~\{q\} \rrbracket_{g'} &{~\mathop{\Rightarrow}~} \llbracket \{p\}~P~\{q\} \rrbracket_g \customlabel{\thedefinition}{eq:G7} \end{align} for $p,p_1,p_2\in \mathscr B$, $q,q_1,q_2\in \mathscr B^*$, $P\in \mathscr C$, and $g_l \subseteq g'_l\in\mathds{P}S$. 
\customlabel{\thedefinition}{prp:P3} \end{proposition} \begin{proof} \em (\ref{eq:G1}-\ref{eq:G4}) follow on applying Definition~\ref{def:A5}. (\ref{eq:G5}-\ref{eq:G6}) follow from (\ref{eq:G3}-\ref{eq:G4}) on considering the cases $p_1\lor p_2 = p_2$ and $q_1\land q_2 = q_1$. The reason for \eqref{eq:G7} is that $g'_l$ is a bigger set than $g_l$, so $\llbracket P \rrbracket_{g'}$ is a bigger set of transitions than $\llbracket P \rrbracket_g$ and thus the universal quantifier in Definition~\ref{def:A5} produces a smaller (less true) truth value. \fbox{\vbox to 1ex{}~} \end{proof} \begin{theorem}[Soundness] The following algebraic inequalities hold, for ${\cal E}{}_1$ any of ${{\bf R}}$, ${{\bf B}}$, ${{\bf G}}_l$, ${{\bf E}}_k$; ${\cal E}_2$ any of ${{\bf R}}$, ${{\bf G}}_l$, ${{\bf E}}_k$; ${\cal E}_3$ any of ${{\bf R}}$, ${{\bf B}}$, ${{\bf G}}_{l'}$ for $l'\ne l$, ${{\bf E}}_k$; ${\cal E}_4$ any of ${{\bf R}}$, ${{\bf B}}$, ${{\bf G}}_{l}$, ${{\bf E}}_{k'}$ for $k'\ne k$; $[h]$ the code of the subroutine called $h$: \begin{small} \begin{align} \left.\begin{array}{@{}l@{~}l} &\llbracket \{p\}\, P\, \{{{\bf N}}q\lor {\cal E}_{1}x\}\rrbracket_g\\ \land& \llbracket \{q\}\, Q\, \{{{\bf N}}r\lor {\cal E}_{1}x\}\rrbracket_g \end{array}\right\} &{~\mathop{\Rightarrow}~} \llbracket \{p\}\, P\,{;}\,Q\,\, \{{{\bf N}}r\lor {\cal E}_{1}x\}\rrbracket_g \customlabel{\thedefinition}{eq:AT9} \\ \llbracket \{p\}\, P\, \{{\bf B} q\lor {{\bf N}}p\lor {\cal E}_{2}x\}\rrbracket_g &{~\mathop{\Rightarrow}~} \llbracket \{p\}\, \code{do} \,\,P\, \{{{\bf N}}q\lor {\cal E}_{2}x\}\rrbracket_g \\ \top &{~\mathop{\Rightarrow}~} \llbracket \{p\}\, \code{skip}\, \{{{\bf N}}\,p\}\rrbracket_g \\ \top &{~\mathop{\Rightarrow}~} \llbracket \{p\}\, \code{return}\, \{{{\bf R}}\,p\}\rrbracket_g \\ \top &{~\mathop{\Rightarrow}~} \llbracket \{p\}\, \code{break}\,\,\, \{{{\bf B}}\,p\}\rrbracket_g \\ \top &{~\mathop{\Rightarrow}~} \llbracket \{p\}\, \code{goto}\,\,l\, \{{{\bf G}}_l\,p\}\rrbracket_g 
\customlabel{\thedefinition}{eq:AT14} \\ \top &{~\mathop{\Rightarrow}~} \llbracket \{p\}\, \code{throw}\,\,k\, \{{{\bf E}}_k\,p\}\rrbracket_g \customlabel{\thedefinition}{eq:AT15} \\ \llbracket \{b\land p\}\, P\, \{q\}\rrbracket_g &{~\mathop{\Rightarrow}~} \llbracket \{p\}\, b\,{{\rightarrow\kern0.5pt}}P\,\, \{q\}\rrbracket_g \\ \llbracket \{p \}\, P\,\, \{q\}\rrbracket_g \land \llbracket \{p \}\, Q\,\, \{q\}\rrbracket_g &{~\mathop{\Rightarrow}~} \llbracket \{p \}\, P\,{\shortmid}\,Q\,\, \{q\}\rrbracket_g \\ \top &{~\mathop{\Rightarrow}~} \llbracket \{q[e/x] \}\,\, x{=}e\,\, \{{{\bf N}}q\}\rrbracket_g \\ \llbracket \{p \}~ P~ \{q\}\rrbracket_g \land g_l\subseteq \{s_1\mid s_0\mathop{\mapsto}\limits^{{\bf N}}s_1\in\llbracket q\rrbracket \} &{~\mathop{\Rightarrow}~} \llbracket \{p \}~ P:l~ \{q\}\rrbracket_g \customlabel{\thedefinition}{eq:AT18} \\ \llbracket \{p \}~ P~ \{{{\bf G}}_l p_l \lor {{\bf N}}q \lor {\cal E}_{3}x\}\rrbracket_{g\cup\{l\mapsto p_l\}} &{~\mathop{\Rightarrow}~} \llbracket \{p \}~ \code{label}~l.P~ \{{{\bf N}}q\lor {\cal E}_{3}x\}\rrbracket_g \\ \llbracket \{p \}~ [h]~ \{{\bf R} r \lor {{\bf E}}_k x_k\}\rrbracket_{\{~\}} &{~\mathop{\Rightarrow}~} \llbracket \{p \}~ \code{call}~h~ \{{{\bf N}}r\lor {{\bf E}}_k x_k\}\rrbracket_{g} \customlabel{\thedefinition}{eq:AT20} \\ \left.\begin{array}{@{}l@{~}l} &\llbracket \{p \}~ P~ \{{{\bf N}}r\lor {{\bf E}}_k q \lor{\cal E}_{4} x \}\rrbracket_{g}\\~ \land& \llbracket \{q \}~ Q~ \{{{\bf N}}r\lor {{\bf E}}_k x_k\lor {\cal E}_{4} x \}\rrbracket_{g} \end{array}\right\} &{~\mathop{\Rightarrow}~} \llbracket \{p \}~ \code{try}~P~\code{catch}(k)~Q~ \{{{\bf N}}r\lor {{\bf E}}_k x_k\lor {\cal E}_{4} x \}\rrbracket_{g} \customlabel{\thedefinition}{eq:AT21} \end{align} \end{small} \customlabel{\thedefinition}{thm:T1} \end{theorem} \begin{proof} By evaluation, given Definition~\ref{def:A5} and the semantics from Table~\ref{tab:interpretation}. 
\fbox{\vbox to 1ex{}~} \end{proof} The reason why the theorem is titled `Soundness' is that its inequalities can be read as the NRB logic deduction rules set out in Table~\ref{tab:NRBG}, via Definition~\ref{def:A5}. The fixpoint requirement of the model at the \code{label} construct is expressed in the `arrival from a \code{goto} at a label' law \eqref{eq:AT18}, where it is stated that {\em if} the hypothesised states $g_l$ at a \code{goto}~$l$ statement are covered by the states $q$ immediately after code block $P$ and preceding label $l$, {\em then} $q$ holds after the label $l$ too. However, there is no need for any such predication when the $g_l$ are exactly the fixpoint of the map \[ g_l\mapsto \{s_1\mid s_0\mathop{\mapsto}\limits^{{\bf G}_l} s_1\in\llbracket P\rrbracket_g\} \] because that is what the fixpoint condition says. Thus, while the model in Table~\ref{tab:interpretation} satisfies equations (\ref{eq:AT9}-\ref{eq:AT21}), it satisfies more than they require -- some of the hypotheses in the equations could be dropped and the model would still satisfy them. But the NRB logic rules in Table~\ref{tab:NRBG} are validated by the model and thus are sound. \section{Completeness for deterministic programs} \customlabel{\thedefinition}{sec:A2} In proving completeness of the NRB logic, at least for deterministic programs, we will be guided by the proof of partial completeness for Hoare's logic in K.~R.~Apt's survey paper \cite{KRAPT}. We will need, for every (possibly modal) postcondition $q \in \mathscr B^*$ and every construct $R$ of $\mathscr C$, a non-modal formula $p\in \mathscr B$ that is weakest in $\mathscr B$ such that if $p$ holds of a state $s$, and $s\mathop{\mapsto}\limits^\iota s'$ is in the model of $R$ given in Table~\ref{tab:interpretation}, then $q$ holds of $s\mathop{\mapsto}\limits^\iota s'$. This $p$ is written $\mbox{wp}(R,q)$, the `weakest precondition on $R$ for $q$'. 
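The `weakest precondition' characterisation just given -- the set of states $s$ such that every transition of $R$ from $s$ satisfies $q$ -- can be made concrete on a finite trace set. The following Python sketch is our own toy encoding (the names `wp_semantic` and `triple_holds` are invented for illustration):

```python
# A finite sketch of the weakest *semantic* precondition.  A program denotes a
# set of transitions (s0, colour, s1); a postcondition q is a predicate on
# transitions (our own toy encoding, not the paper's formal machinery).

def wp_semantic(prog, q):
    """{ s | every transition of prog starting at s satisfies q }."""
    starts = {s0 for (s0, _, _) in prog}
    return {s for s in starts
            if all(q(t) for t in prog if t[0] == s)}

def triple_holds(pre, prog, q):
    """[[ {pre} prog {q} ]]: each transition from a pre-state satisfies q."""
    return all(q(t) for t in prog if t[0] in pre)

# Toy program: from even states it ends Normally in s+1, from odd it Returns.
prog = [(s, "N" if s % 2 == 0 else "R", s + 1) for s in range(6)]
q = lambda t: t[1] == "N"           # postcondition: `ends normally'

wp = wp_semantic(prog, q)           # here: the even start states
assert triple_holds(wp, prog, q)
# Weakest: enlarging it by any further start state breaks the triple.
for s in {t[0] for t in prog} - wp:
    assert not triple_holds(wp | {s}, prog, q)
```

The two assertions are exactly the two halves of the characterisation: the set is a precondition, and no strictly larger set of start states is.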
We construct it via structural induction on $\mathscr C$ at the same time as we deduce completeness, so there is an element of chicken versus egg about the proof, and we will not labour that point. We will also suppose that we can prove any tautology of $\mathscr B$ and $\mathscr B^*$, so `completeness of NRB' will be relative to that lower-level completeness. Notice that there is always a set $p\in \mathds{P}S$ satisfying the `weakest precondition' characterisation above. It is $\{ s\in S\mid s\mathop{\mapsto}\limits^\iota s'\in\llbracket R\rrbracket_g \Rightarrow s\mathop{\mapsto}\limits^\iota s' \in \llbracket q\rrbracket \}$, and it is called the weakest {\em semantic} precondition on $R$ for $q$. So we sometimes refer to $\text{wp}(R,q)$ as the `weakest {\em syntactic} precondition' on $R$ for $q$, when we wish to emphasise the distinction. The question is whether or not there is a formula in $\mathscr B$ that exactly expresses this set. If there is, then the system is said to be {\em expressive}, and that formula {\em is} the weakest (syntactic) precondition on $R$ for $q$, $\text{wp}(R,q)$. Notice also that a weakest (syntactic) precondition $\text{wp}(R,q)$ must encompass the semantic weakest precondition; that is because if there were a state $s$ in the latter and not in the former, then we could form the disjunction $\text{wp}(R,q)\lor(x_1=s x_1\land\dots x_n=s x_n)$ where the $x_i$ are the variables of $s$, and this would also be a precondition on $R$ for $q$, hence $x_1=s x_1\land\dots x_n=s x_n {\rightarrow\kern0.5pt} \text{wp}(R,q)$ must be true, as the latter is supposedly the weakest precondition, and so $s$ satisfies $\text{wp}(R,q)$ in contradiction to the assumption that $s$ is not in $\text{wp}(R,q)$. For orientation, then, the reader should note that `there is a weakest (syntactic) precondition in $\mathscr B$' means there is a unique strongest formula in $\mathscr B$ covering the weakest semantic precondition. 
We will lay out the proof of completeness inline here, in order to avoid excessively overbearing formality, and at the end we will draw the formal conclusion. A completeness proof is always a proof by cases on each construct of interest. It has the form `suppose that {\em foo} is true, then we can prove it like this', where \emph{foo} runs through all the constructs we are interested in. We start with assertions about the sequence construction $P;Q$. We will look at this in particular detail, noting where and how the weakest precondition formula plays a role, and skip that detail for most other cases. Thus we start with \emph{foo} equal to ${\bf G}_l\, g_l~\triangleright~\{p\}~P;Q~\{q\}$ for some assumptions $g_l\in \mathscr{B}$, but we do not need to take the assumptions $g_l$ into account in this case. \begin{em} \paragraph{Case $P;Q$}. Consider a sequence of two statements $P;Q$ for which $\{p\}~P;Q~\{q\}$ holds in the model set out by Definition~\ref{def:A5} and Table~\ref{tab:interpretation}. That is, suppose that initially the state $s$ satisfies predicate $p$ and that there is a progression from $s$ to some final state $s'$ through $P;Q$. Then $s\mathop{\mapsto}\limits^\iota s'$ is in $\llbracket P;Q\rrbracket_g$ and $s\mathop{\mapsto}\limits^\iota s'$ satisfies $q$. We will consider two subcases, the first where $P$ terminates normally from $s$, and the second where $P$ terminates abnormally from $s$. A third possibility, that $P$ does not terminate at all, is ruled out because a final state $s'$ is reached. Consider the first subcase, which means that we think of $s$ as confined to $\mbox{wp}(P,{\bf N}\top)$. According to Table~\ref{tab:interpretation}, that means that $P$ started in state $s_0=s$ and finished normally in some state $s_1$ and $Q$ ran on from state $s_1$ to finish normally in state $s_2=s'$. Let $r$ stand for the weakest precondition $\mbox{wp}(Q,{\bf N} q)$ that guarantees a normal termination of $Q$ with $q$ holding. 
By definition of weakest precondition, $\{r\}~Q~\{{\bf N} q\}$ is true and $s_1$ satisfies $r$ (if not, then $r\lor (x_1=s x_1\land x_2= s x_2 \land \dots)$ would be a weaker precondition for ${\bf N} q$ than $r$, which is impossible). The latter is true whatever $s_0$ satisfying $p$ and $\mbox{wp}(P,{\bf N}\top)$ we started with, so by definition of weakest precondition, $p\land \mbox{wp}(P,{\bf N}\top){\rightarrow\kern0.5pt} \mbox{wp}(P,{\bf N} r)$ must be true, which is to say that $\{p\land \mbox{wp}(P,{\bf N}\top)\}~P~\{{\bf N} r\}$ is true. By induction, it is the case that there are deductions $\vdash \{p\land \mbox{wp}(P,{\bf N}\top)\}~P~\{{\bf N} r\}$ and $\vdash \{r\}~Q~\{{\bf N} q\}$ in the NRB system. But the following rule \[ \frac{ \{p\land \mbox{wp}(P,{\bf N}\top)\}~P~\{{\bf N} r\} \quad \{r\}~Q~\{{\bf N} q\} }{ \{p\land \mbox{wp}(P,{\bf N}\top)\}~P;Q~\{{\bf N} q\} } \] is a derived rule of NRB logic. It is a specialised form of the general NRB rule of sequence. Putting these deductions together, we have a deduction of the truth of the assertion $ \{p\land \mbox{wp}(P,{\bf N}\top)\}~P;Q~\{{\bf N} q\}$. By weakening on the conclusion, since ${\bf N} q{\rightarrow\kern0.5pt} q$ is (always) true, we have a deduction of $ \{p\land \mbox{wp}(P,{\bf N}\top)\}~P;Q~\{q\}$. Now consider the second subcase, when the final state $s_1$ reached from $s=s_0$ through $P$ obtains via an abnormal flow out of $P$. This means that we think of $s$ as confined to $\mbox{wp}(P,\lnot{\bf N}\top)$. Now the transition $s_0\mathop{\mapsto}\limits^\iota s_1$ in $\llbracket P\rrbracket_g$ satisfies $q$, and $s$ is arbitrary in $p\land\mbox{wp}(P,\lnot{\bf N}\top)$, so $\{p\land\mbox{wp}(P,\lnot{\bf N}\top)\}~P~\{q\}$.
However, `not ending normally' (and getting to a termination, which is the case here) means `ending abnormally', i.e., ${\bf R}\top\lor{\bf B}\top\lor\dots$ through all of the available colours, as per Proposition~\ref{prop:1}, and we may write the assertion out as $\{p\land\mbox{wp}(P,{\bf R}\top\lor{\bf B}\top\dots)\}~P~\{q\}$. Considering the cases separately, one has $\{p\land\mbox{wp}(P,{\bf R}\top)\}~P~\{{\bf R} q\}$ (since ${\bf R} q$ is the component of $q$ that expects an ${\bf R}$-coloured transition), and $\{p\land\mbox{wp}(P,{\bf B}\top)\}~P~\{{\bf B} q\}$, and so on, all holding. By induction, there are deductions $\vdash \{p\land\mbox{wp}(P,{\bf R}\top)\}~P~\{{\bf R} q\}$, $\vdash \{p\land\mbox{wp}(P,{\bf B}\top)\}~P~\{{\bf B} q\}$, etc. But the following rule \[ \frac{ \{p \land\mbox{wp}(P,{\cal E}\top)\}~P~\{{\cal E} q\} }{ \{p \land\mbox{wp}(P,{\cal E}\top)\}~P;Q~\{{\cal E} q\} } \] is a derived rule of NRB logic for each `abnormal' colouring ${\cal E}$, and hence we have a deduction $\vdash \{p \land\mbox{wp}(P,{\cal E}\top)\}~P;Q~\{{\cal E} q\}$ for each of the `abnormal' colours ${\cal E}$. By weakening on the conclusion, since ${\cal E} q{\rightarrow\kern0.5pt} q$, for each of the colours ${\cal E}$, we have a deduction $\vdash \{p \land\mbox{wp}(P,{\cal E}\top)\}~P;Q~\{q\}$ for each of the colours ${\cal E}$. By the rule on disjunctive hypotheses (fourth from last in Table~\ref{tab:NRBG}) we now have a deduction $\vdash \{p\land (\mbox{wp}(P,{\bf N}\top)\lor\mbox{wp}(P,{\bf R}\top)\lor\dots)\}~P;Q~\{q\}$. But the weakest precondition is monotonic, so $\mbox{wp}(P,{\bf N}\top)\lor\mbox{wp}(P,{\bf R}\top)\lor\dots$ is covered by $\mbox{wp}(P,{\bf N}\top\lor{\bf R}\top\lor\dots)$, which is $\mbox{wp}(P,\top)$ by Proposition~\ref{prop:1}. 
But for a deterministic program $P$, the outcome from a single starting state $s$ can only be uniquely a normal termination, or uniquely a return termination, etc, and $\mbox{wp}(P,{\bf N}\top)\lor\mbox{wp}(P,{\bf R}\top)\lor\dots = \mbox{wp}(P,{\bf N}\top\lor{\bf R}\top\lor\dots) = \mbox{wp}(P,\top)$ exactly. The latter is just $\top$, so we have a proof $\vdash \{p\}~P;Q~\{q\}$. As to what the weakest precondition $\mbox{wp}(P;Q,q)$ is, it is $\mbox{wp}(P,{\bf N} \mbox{wp}(Q,q))\lor \mbox{wp}(P,{\bf R} q)\lor \mbox{wp}(P, {\bf B} q)\lor \dots$, the disjunction being over all the possible colours. \end{em} That concludes the consideration of the case $P;Q$. The existence of a formula expressing a weakest precondition is what really drives the proof above along, and in lieu of pursuing the proof through all the other construct cases, we note the important weakest precondition formulae below: \begin{itemize} \item The weakest precondition for assignment is $\mbox{wp}(x=e,{\bf N} q) = q[e/x]$ for $q$ without modal components. In general $\mbox{wp}(x=e, q) = {\bf N} q[e/x]$. \item The weakest precondition for a \code{return} statement is $\mbox{wp}(\code{return},q) = {\bf R} q$. \item The weakest precondition for a \code{break} statement is $\mbox{wp}(\code{break},q) = {\bf B} q$. Etc. \item The weakest precondition $\mbox{wp}(\code{do}~P,{\bf N} q)$ for a \code{do} loop that ends `normally' is $\mbox{wp}(P,{\bf B} q) \lor \mbox{wp}(P,{\bf N} \mbox{wp}(P,{\bf B} q)) \lor \mbox{wp}(P,{\bf N} \mbox{wp}(P,{\bf N} \mbox{wp}(P,{\bf B} q))) \lor \dots$. That is, we might break from $P$ with $q$, or run through $P$ normally to the precondition for breaking from $P$ with $q$ next, etc. Write $\mbox{wp}(P,{\bf B} q)$ as $p$ and write $\mbox{wp}(P,{\bf N} r)\land \lnot p$ as $\psi(r)$. Then $\mbox{wp}(\code{do}~P,{\bf N} q)$ can be written $p \lor \psi(p) \lor \psi(p\lor \psi(p)) \lor \dots$, which is the strongest solution to $\pi = p\lor\psi(\pi)$.
This is the weakest precondition for $p$ after $\code{while} (\lnot p)~ P$ in classical Hoare logic. It is an existentially quantified statement, stating that an initial state $s$ gives rise to exactly some $n$ passes through $P$ before the condition $p$ becomes true for the first time. It can classically be expressed as a formula of first-order logic and it is the weakest precondition for ${\bf N} q$ after $\code{do}~P$ here. The preconditions for ${\cal E} q$ for each `abnormal' coloured ending ${\cal E}$ of the loop $\code{do}~P$ are similarly expressible in $\mathscr B$, and the precondition for $q$ is the disjunction of each of the preconditions for ${\bf N} q$, ${\bf R} q$, ${\bf B} q$, etc. \item The weakest precondition for a guarded statement $\mbox{wp}(p{\rightarrow\kern0.5pt} P,q)$ is $p{\rightarrow\kern0.5pt} \mbox{wp}(P,q)$, as in Hoare logic; and the weakest precondition for a disjunction $\mbox{wp}(P\shortmid Q, q)$ is $\mbox{wp}(P,q) \land \mbox{wp}(Q,q)$, as in Hoare logic. However, we only use the deterministic combination $p{\rightarrow\kern0.5pt} P\shortmid \lnot p{\rightarrow\kern0.5pt} Q$ for which the weakest precondition is $(p{\rightarrow\kern0.5pt}\mbox{wp}(P,q))\land(\lnot p{\rightarrow\kern0.5pt}\mbox{wp}(Q,q))$, i.e. $p\land\mbox{wp}(P,q)\lor \lnot p\land\mbox{wp}(Q,q)$. \end{itemize} To deal with labels properly, we have to extend some of these notions and notations to take account of the assumptions ${\bf G}_l g_l$ that an assertion ${\bf G}_l g_l~\triangleright~\{ p \}~P~\{ q \}$ is made against. The weakest precondition $p$ on $P$ for $q$ is then $p = \mbox{wp}_g(P,q)$, with the $g_l$ as extra parameters. The weakest precondition for a label use $\mbox{wp}_g(P :l,q)$ is then $\mbox{wp}_g(P,q)$, provided that $g_l{\rightarrow\kern0.5pt} q$, since the states $g_l$ attained by $\code{goto}~l$ statements throughout the code are available after the label, as well as those obtained through $P$. 
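The \code{do}-loop precondition chain described above is a least-fixpoint computation, and over a finite state space it can be run directly. The sketch below is a toy model of our own (the loop body $P$ breaks at state $0$ and otherwise counts down normally; the names `wp`, `psi` and `pi` are invented):

```python
# A sketch of the do-loop weakest-precondition chain
#   wp(do P, N q) = p v psi(p) v psi(p v psi(p)) v ...
# computed by fixpoint iteration (toy model, our own encoding).

STATES = set(range(6))
# Loop body P as transitions (s0, colour, s1): break at 0, else count down.
P = [(0, "B", 0)] + [(s, "N", s - 1) for s in STATES if s > 0]

def wp(prog, colour, post):
    """States all of whose transitions have the given colour and land in post."""
    return {s for s in STATES
            if all(c == colour and s1 in post
                   for (s0, c, s1) in prog if s0 == s)}

q = STATES                           # postcondition N q with q = true
p = wp(P, "B", q)                    # break out of P with q: here {0}
psi = lambda r: wp(P, "N", r) - p    # run P normally into r, not yet breaking

pi = set()                           # least fixpoint of  pi = p v psi(pi)
while True:
    nxt = p | psi(pi)
    if nxt == pi:
        break
    pi = nxt

print(sorted(pi))                    # -> [0, 1, 2, 3, 4, 5]
```

Every state eventually reaches the break, so the iteration climbs $\{0\}$, $\{0,1\}$, \dots{} up to the whole state space, mirroring the `exactly some $n$ passes through $P$' reading in the text.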
The weakest precondition in the general situation where it is not necessarily the case that $g_l{\rightarrow\kern0.5pt} q$ holds is $\mbox{wp}_g(P,q\land (g_l{\rightarrow\kern0.5pt} q))$, which is $\mbox{wp}_g(P,q)$. Now we can continue the completeness proof through the statements of the form $P:l$ (a labelled statement) and $\code{label}~l.P$ (a label declaration). \begin{em} \paragraph{Case labelled statement}. If $\llbracket \{p\}~P:l~\{q\}\rrbracket_g$ holds, then every state $s = s_0$ satisfying $p$ leads through $P$ with $s_0\mathop{\mapsto}\limits^\iota s_1$ satisfying $q$, and also $q$ must contain all the transitions $s_0\mathop{\mapsto}\limits^{{\bf N}} s_1$ where $s_1$ satisfies $g_l$. Thus $s$ satisfies $\mbox{wp}_g(P,q)$ and ${\bf N} g_l{\rightarrow\kern0.5pt} q$ holds. Since $s$ is arbitrary in $p$, $p{\rightarrow\kern0.5pt} \mbox{wp}_g(P,q)$ holds and by induction, $\vdash {\bf G}_l g_l~\triangleright~\{p\} ~P~\{q\}$. Then, by the `frm' rule of NRB (Table~\ref{tab:NRBG}), we may deduce $\vdash {\bf G}_l g_l~\triangleright~\{p\} ~P:l~\{q\}$. \end{em} \begin{em} \paragraph{Case label declaration}. The weakest precondition for a declaration $\mbox{wp}_g(\code{label}\,l.P,q)$ is simply $p = \mbox{wp}_{g'}(P,q)$, where the assumptions after the declaration are $g' = g \cup \{l\mapsto g_l\}$ and $g_l$ is such that ${\bf G}_l g_l \triangleright \{ p \}~P~\{q\}$. In other words, $p$ and $g_l$ are simultaneously chosen to make the assertion hold, $p$ maximal and $g_l$ the least fixpoint describing the states at $\code{goto}~l$ statements in the code $P$, given that the initial state satisfies $p$ and assumptions ${\bf G}_l g_l$ hold. The $g_l$ are the predicates stating that after exactly some $n\in\mathds{N}$ more traversals through $P$ via $\code{goto}~l$, the trace from state $s$ will avoid another $\code{goto}~l$ for the first time and exit $P$ normally or via an abnormal exit that is not a $\code{goto}~l$.
If it is the case that $\llbracket \{p\}~\code{label}~l.P~\{q\}\rrbracket_g$ holds then every state $s=s_0$ satisfying $p$ leads through $\code{label}~l.P$ with $s_0\mathop{\mapsto}\limits^\iota s_1$ satisfying $q$. That means that $s_0\mathop{\mapsto}\limits^\iota s_1$ leads through $P$, but those are not all the transitions through $P$; there are extra transitions with $\iota = {\bf G}_l$ that are not considered. The `missing' transitions are precisely the ${\bf G}_l g_l$ where $g_l$ is the appropriate least fixpoint for $g_l = \{s_1\mid s_0\mathop{\mapsto}\limits^{{\bf G}_l}s_1 \in \llbracket P\rrbracket_{g\cup\{l\mapsto g_l\}}\}$, which is a predicate expressing the idea that $s_1$ at a $\code{goto}~l$ initiates exactly some $n$ traversals back through $P$ again before exiting $P$ for a first time other than via a $\code{goto}~l$. The predicate $q$ cannot mention ${\bf G}_l$ since the label $l$ is out of scope for it, but it may permit some, all or no ${\bf G}_l$-coloured transitions. The predicate $q\lor {\bf G}_l g_l$, on the other hand, permits all the ${\bf G}_l$-coloured transitions that exit $P$. Thus adding ${\bf G}_l g_l$ to the assumptions means that $s_0$ traverses $P$ via $s_0\mathop{\mapsto}\limits^\iota s_1$ satisfying $q\lor {\bf G}_l g_l$ even though more transitions are admitted. Since $s=s_0$ is arbitrary in $p$, $p{\rightarrow\kern0.5pt} \mbox{wp}_{g\cup\{l\mapsto g_l\}}(P,q\lor {\bf G}_l g_l)$ and by induction $\vdash {\bf G}_l g_l~\triangleright~\{p\}~P~\{q\lor {\bf G}_l g_l\}$, and then one may deduce $\vdash \{p\}~\code{label}~l.P~\{q\}$ by the `lbl' rule. \end{em} \fbox{\vbox to 1ex{}~} That concludes the text that would appear in a proof, but which we have abridged and presented as a discussion here! We have covered the typical case ($P;Q$) and the unusual cases ($P:l$, $\code{label}~l.P$).
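The least fixpoint of the map $g_l\mapsto \{s_1\mid s_0\mathop{\mapsto}\limits^{{\bf G}_l} s_1\in\llbracket P\rrbracket_{g\cup\{l\mapsto g_l\}}\}$ used in the label-declaration case can be computed by iterating from the least guess. The following toy sketch is our own encoding (the body behaves as $x<3 \rightarrow x{:=}x{+}1;\,\code{goto}~l \shortmid x\ge 3 \rightarrow \code{skip}$):

```python
# A sketch of the least-fixpoint computation of the goto assumptions g_l at a
# 'label l. P' declaration (toy model, our own encoding; names invented).

def body(s):
    """One pass of P from state s: a single coloured transition.
    x < 3 -> x := x+1; goto l   |   x >= 3 -> skip (end normally)."""
    return (s, "Gl", s + 1) if s < 3 else (s, "N", s)

def traces(entry, g_l):
    """[[P]]_{g u {l |-> g_l}}: passes from the entry states and from the
    states assumed to arrive back at the label via goto l."""
    return {body(s) for s in entry | g_l}

entry = {0}
g_l = set()                          # start from the least guess
while True:
    nxt = {s1 for (s0, c, s1) in traces(entry, g_l) if c == "Gl"}
    if nxt == g_l:
        break
    g_l = nxt

print(sorted(g_l))                   # -> [1, 2, 3]
```

Each iteration admits the goto arrivals discovered so far and collects the goto exits they in turn produce, stabilising at the states that reach the label after some finite number of further traversals.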
The proof-theoretic content of the discussion is: \begin{theorem}[Completeness] The system of NRB logic in Table~\ref{tab:NRBG} is complete for deterministic programs, relative to the completeness of first-order logic. \end{theorem} We do not know if the result holds for non-deterministic programs too, but it seems probable. A different proof technique would be needed (likely showing that attempting to construct a proof backwards either succeeds or yields a counter-model). Along with that we note \begin{theorem}[Expressiveness] The weakest precondition $\mbox{wp}(P,q)$ for $q\in \mathscr B^*$, $P\in \mathscr C$ in the interpretation set out in Definition~\ref{def:A5} and Table~\ref{tab:interpretation} is expressible in $\mathscr B$. \end{theorem} The observation above is that there is a formula in $\mathscr B$ that expresses the semantic weakest precondition exactly. \begin{comment} \section{NRBG(E) logic} On considering a fixpoint $g$ of the map $g\mapsto g'$ described in \eqref{eq:A1} of Proposition~\ref{prp:P1}, the equations (\ref{eq:G1}-\ref{eq:G7}) and (\ref{eq:AT9}-\ref{eq:AT21}) of Theorem~\ref{thm:T1} give the logical rules of Table~\ref{tab:rules}, via the translation of the semantics of assertions given in Definition~\ref{def:A5}. They are sound by construction. What is notable about the fixpoint is that the hypotheses $g_l$ on the left of the `$\triangleright$' cover all the ${{\bf G}}_l g_l$ reached as conclusions on the right. So conclusions cannot be weakened arbitrarily beyond a certain point without weakening hypotheses to match, leading to the restriction in the rule of weakening listed last in Table~\ref{tab:rules}, and in the {\bf go} rule. A proof must start with a generous guess $g_l$ as to the states that may arise at $\code{goto}~l$ statements, otherwise the {\bf go} rule will not apply. But too large a guess will make the {\bf frm} rule impossible to apply. So the guess has to be `just right'. 
\begin{example} Recall that $l{\,:\,}a$ is syntactic sugar for $\{\,\}{\,:\,}l;\,a$ where $\{\,\}$ is an empty traceset. We derive the rule \[ \frac{ {{\bf G}}_l p_l\,\triangleright\,\{ p \lor p_l \} ~ a ~ \{ q \} }{ {{\bf G}}_l p_l\,\triangleright\,\{ p \} ~ l : a ~ \{ q \} } \mbox{\small[frm$_0$]} \] as follows: \begin{prooftree} \AxiomC{$ $} \RightLabel{[frm]} \UnaryInfC{$ {{\bf G}}_l p_l\,\triangleright\,\{ p \} ~ : l ~ \{ p \lor p_l \} $} \AxiomC{$ {{\bf G}}_l p_l\,\triangleright\,\{ p \lor p_l \} ~ a ~ \{ q \} $} \RightLabel{[seq]} \BinaryInfC{$ {{\bf G}}_l p_l\,\triangleright\,\{ p \} ~ : l ; a ~ \{ q \} $} \end{prooftree} \end{example} \begin{example} The derivation for $\{\top\}~\code{label}~l.~l:\code{goto}~l~\{\bot\}$ is: \begin{prooftree} \AxiomC{} \RightLabel{[go]} \UnaryInfC{$ {{\bf G}}_l\top\,\triangleright~\{\top\}~\code{goto}~l~\{{{\bf G}}_l\top\} $} \UnaryInfC{$ {{\bf G}}_l\top\,\triangleright~\{\top\}~\code{goto}~l~\{{{\bf N}}\bot\lor {{\bf G}}_l\top\} $} \RightLabel{[frm$_0$]} \UnaryInfC{$ {{\bf G}}_l\top\,\triangleright~\{\top\}~l:\code{goto}~l~\{{{\bf N}}\bot\} $} \UnaryInfC{$ {{\bf G}}_l\top\,\triangleright~\{\top\}~l:\code{goto}~l~\{\bot\} $} \RightLabel{[lbl]} \UnaryInfC{$ \triangleright~\{\top\}~\code{label}~l.~l:\code{goto}~l~\{\bot\} $} \end{prooftree} The two unlabelled steps are by weakening of the conclusion and/or through the equivalence ${{\bf N}}\bot\leftrightarrow\bot$. If one had tried ${{\bf G}}_l\,p$ for some smaller $p$, then the first rule could not be applied. \end{example} \begin{remark} If we guess ${{\bf G}}_l\top$ and manage to prove a result, then the proof is valid, never mind if the guess is `just right' or not. In principle, we can go through the proof making the guess smaller (while still above the fixpoint ${{\bf G}}_l\,p_l$, whatever it is) and the steps remain valid. So we do not have to know what the `just right' guess is if we succeed in proving something from ${{\bf G}}_l\top$. 
The model-theoretic justification is that the logic of proof using ${{\bf G}}_l\top$ corresponds to the greatest fixpoint semantics $\llbracket a\rrbracket_{g_\top^*}$, not to the least fixpoint semantics $\llbracket a\rrbracket_{g_\bot^*}$ (see Remark~\ref{rem:rem2}). But since the greatest fixpoint set of traces includes the least fixpoint set of traces, so what is true of all the greatest fixpoint traces is also true of all the least fixpoint traces. \end{remark} The logic displayed in Table~\ref{tab:rules} has to be tailored from $C$ to the real language C. In particular, C has \code{if} statements instead of guarded statements and non-deterministic choice. Combining the rules {\bf grd} and {\bf dsj} gives the rule for C conditionals, at least when the test expression has no side effects: \[ \frac{ \triangleright~\{p\land c\}~a~\{q\} \qquad \triangleright~\{p\land \lnot c\}~b~\{q\} }{ \triangleright~\{p\}~\code{if}(c)~a~\code{else}~b~\{q\} }\mbox{[if]} \] When the test has a side-effect, we break the conditional up into an assignment or assignments followed by a conditional with a non-side-effecting test. C also has \code{setjmp} and \code{longjmp} instead of \code{try}/\code{catch} and \code{throw}. The C `\code{if}(!\code{setjmp}($k$)) $a$ \code{else} $b$' corresponds to the $C$ `\code{try} $a$ \code{catch}($k$) $b$' construction, and the C `\code{longjmp}($k$,1)' corresponds to the $C$ `\code{throw} $k$' construction. The \code{call} logic derives from inlining the subroutine body, turning \code{return}s into `\code{goto}~end' statements, and renaming labels to avoid collisions. But, in practice, calls are all treated as opaque satisfying the property being studied -- such as `balances takes and releases of locks' -- while the property is tested for possible failure in each and every subroutine in turn. 
\begin{remark} Here is a procedure that, if it terminates, terminates with the code decorated with preconditions and postconditions complying to Table~\ref{tab:rules}. Start with hypotheses $g_l = \bot$ on the left hand side, and construct preconditions and postconditions throughout the code satisfying the rules of Table~\ref{tab:rules} with the exception that the restriction on the left of the {\bf go} rule and that in the conditions on the left of the rule of weakening are ignored. Those relate to final fixpoint conditions $g_l^*$, not the intermediate conditions $g_l$ of this construction. Next construct the $g_l'$ that are at least a union of the preconditions found at all $\code{goto}~l$ statements, intersected with the precondition at the statement labelled $l$. So $g_l\Rightarrow g_l'$ by Lemma~\ref{lemma:A2}. Repeat the construction of preconditions and postconditions satisfying the rules of Table~\ref{tab:rules} throughout the code, this time with hypotheses $g_l'$ on the left hand side everywhere. Continue until, or if, a fixpoint $g_l = g_l' = g_l^*$ is reached. The fixpoint $g_l^*$ covers all the preconditions of $\code{goto}~l$ statements and fits inside the precondition of the statement labelled $l$. So the rules of Table~\ref{tab:rules} are satisfied. \customlabel{\thedefinition}{rem:rem6} \end{remark} \end{comment} \section{Summary} We have proven the NRB logic sound with respect to a simple transition-based model of programs, and showed that it is complete for deterministic programs. \noindent \end{document}
Role of flying cars in sustainable mobility

Akshat Kasliwal, Noah J. Furbush, James H. Gawron, James R. McBride, Timothy J. Wallington, Robert D. De Kleine, Hyung Chul Kim & Gregory A. Keoleian

Nature Communications volume 10, Article number: 1555 (2019)

Interest and investment in electric vertical takeoff and landing aircraft (VTOLs), commonly known as flying cars, have grown significantly. However, their sustainability implications are unclear. We report a physics-based analysis of primary energy and greenhouse gas (GHG) emissions of VTOLs vs. ground-based cars. Tilt-rotor/duct/wing VTOLs are efficient when cruising but consume substantial energy for takeoff and climb; hence, their burdens depend critically on trip distance. For our base case, traveling 100 km (point-to-point) with one pilot in a VTOL results in well-to-wing/wheel GHG emissions that are 35% lower but 28% higher than a one-occupant internal combustion engine vehicle (ICEV) and battery electric vehicle (BEV), respectively. Comparing fully loaded VTOLs (three passengers) with ground-based cars with an average occupancy of 1.54, VTOL GHG emissions per passenger-kilometer are 52% lower than ICEVs and 6% lower than BEVs. VTOLs offer fast, predictable transportation and could have a niche role in sustainable mobility.

The transportation sector faces the challenge of meeting growing demand for convenient passenger mobility while reducing congestion, improving safety, and mitigating emissions. Automated driving and electrification are disruptive technologies that may contribute to these goals, but they are limited by congestion on existing roadways and land-use constraints.
Electric vertical takeoff and landing aircraft (VTOLs) could overcome these limitations by enabling urban and regional aerial travel services. VTOLs with tilt-rotor, duct, and wing designs, such as the GL-10 prototype designed by NASA1, combine the convenience of local takeoff and landing like a helicopter with the efficient aerodynamic flight of an airplane. Although smaller and larger designs are possible, several companies are considering craft that can carry four to five occupants2. Initially, these VTOLs would likely be piloted taxi services, but with advances in aviation regulation and sensor and processor technology, could transition toward future automated control3. Electrification is a propulsion strategy for improving the sustainability of both aerial and ground-based transportation modes, owing to the superior efficiency of electric powertrains compared with combustion engines. One critical efficiency enabler for VTOLs is distributed electric propulsion (DEP), which uses physically smaller, electrically driven propulsors. These propulsors can be used with greater flexibility to leverage the benefits of aero-propulsive coupling and improve performance compared with more traditional designs4. This enables aerodynamically optimized designs, such as articulating propellers and high aspect-ratio blown wings, which allow efficient VTOL energy performance and significant noise reduction. DEP could facilitate VTOL success in the urban aerial taxi space, where conventional helicopters or vertical-lift aircraft have struggled. In principle, VTOLs can travel the shortest distance between two points, and their relatively modest sizes would enable near point-to-point service. Conversely, road networks are much less direct and consequently have an associated circuity factor, defined as the ratio of the shortest network route to the Euclidean distance between two points5.
This benefit of VTOL aerial systems could favor energy and travel-time performance, particularly in locations with congested and circuitous routing. High VTOL cruise speeds could reduce travel time further. Significant time savings and associated productivity gains could be a key factor in consumer adoption of VTOL transportation. There are many questions that need to be addressed to assess the viability of VTOLs including cost, noise, and societal and consumer acceptance. Our analysis assesses the environmental sustainability of VTOLs compared with ground-based passenger cars. There have been few studies of VTOLs' potential climate change implications6,7. We report the first comprehensive assessment of the primary energy and GHG emissions impacts of using electric VTOLs vs. ground-based light-duty vehicles for passenger mobility. Our analysis first focuses on a vehicle-to-vehicle comparison with one occupant (i.e., the pilot or driver) traveling point-to-point distances ranging from 5 to 250 km. The base case is assessed for a 100 km distance. As part of a sensitivity analysis, we compare the results on a passenger-kilometer traveled (PKT) basis. The VTOL is assumed to have three passengers and one pilot (i.e., four occupants), as it will most likely be used in a transportation-as-a-service business model where service providers seek to maximize utilization rates. Ground-based cars are assumed to be personally owned with a typical loading of 1.54 passengers/occupants8. Modeling details are available in the Methods section, with uncertainties explored in the Sensitivity Analysis. We assess use-phase burdens associated with both aerial (well-to-wing) and ground-based (well-to-wheel) transport. Total fuel cycle impacts encompass both upstream (mining, refining, and transportation of the fuel source) and downstream (operational) activities. 
Burdens from other life cycle stages, such as vehicle production and end-of-life, are not considered owing to a lack of standardization in VTOL fabrication materials, manufacturing processes, and design specifications. To quantify the use-phase sustainability of these mobility systems, two key metrics are chosen: primary energy use in units of megajoules [MJ] and GHG emissions in units of kilograms of carbon dioxide equivalents [kg-CO2e] on a 100-year global warming potential basis. Subsequently, differences in real-world occupancies are explored by normalizing those metrics on a PKT basis, which is useful when comparing different passenger transport modes9. We also compare the travel time of VTOLs vs. cars. Piloted operation for both modes of mobility is the basis of our analysis. Connected and automated operation are beyond the existing scope and will be considered in future work. We find that for our base case with 100 km point-to-point trips, VTOL GHG emissions are 35% lower than internal combustion engine vehicles (ICEVs), but 28% higher than battery electric vehicles (BEVs). Normalizing base-case emissions per PKT with expected loading gives VTOL burdens (with three passengers) that are 52% and 6% lower than for the ICEV and BEV (with 1.54 passengers), respectively. For short trips (up to 35 km), which dominate trip frequency for conventional cars, VTOLs have higher energy consumption and GHG emissions than ground-based vehicles. Time savings for VTOL rides compared with cars (83% for a 100 km trip) could act as a driver for consumer adoption. From the viewpoint of energy use and hence GHG emissions, it appears that VTOLs could have a niche role in sustainable mobility, particularly in regions with circuitous routes and/or high congestion.

VTOL results

The flight profile shown in Fig. 1 is broken up into the following five phases: takeoff hover, climb, cruise, descent, and landing hover. VTOLs, such as the NASA GL-10 depicted in Fig.
2, have a different travel time, speed, and power consumption profile during each phase, as discussed in the Methods section. The base-case GHG emissions associated with each phase are shown in Fig. 3 for trips between 5 and 250 km. A minimum of 5 km was chosen due to the 2.5 km horizontal slant range during both climb and descent phases. The base-case scenario of transporting one occupant over a point-to-point distance of 100 km has GHG emissions of 15.7 kg-CO2e. For shorter travel distances, where energetically intensive hover dominates the flight profile, the VTOL compares less favorably than it does for longer distances, where efficient cruise dominates the flight profile. Primary energy results follow the same trends as GHG emissions (refer to Supplementary Figs. 1 and 2).

Fig. 1: VTOL flight profile. The five phases of VTOL travel are takeoff hover, climb, cruise, descent, and landing hover. Each phase will have a different travel time, velocity, and power consumption.

Fig. 2: NASA GL-10 VTOL1. The takeoff and landing hover configuration for a prototype NASA VTOL is shown here. The tilt rotor and wing design combines the convenience of local takeoff and landing like a helicopter with the efficient aerodynamic flight of an airplane.

Fig. 3: VTOL GHG emissions over a range of trip distances. The GHG emission results for a single-occupant VTOL are broken out by the hover and cruise phases over trip distances from 5 to 250 km. The climb phase is modeled as part of cruise. Furthermore, the takeoff and landing hover phases are combined for simplicity and the powerless descent phase is omitted, as it is assumed to have zero emissions. See the Methods section for details.

Ground-based vehicle results

On-road adjusted city and highway fuel economies for the BEV were 304 Wh mi−1 (109.2 MPGe) and 309 Wh mi−1 (107.5 MPGe), respectively10. This results in a fuel economy of 108.5 MPGe for a combined 55% city/45% highway driving cycle11.
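The combined fuel economies quoted in this section (the BEV figures above and the ICEV figures that follow) come from the standard 55%/45% harmonic-mean weighting of city and highway values. A minimal sketch of that calculation, using the city/highway figures from the text (the function name is ours):

```python
def combined_economy(city, highway, city_share=0.55):
    """Harmonic-mean combination of city/highway fuel economy.

    Fuel economies (distance per unit energy) combine harmonically,
    because fuel *consumption* (energy per distance) adds linearly
    over the distance shares of the driving cycle.
    """
    highway_share = 1.0 - city_share
    return 1.0 / (city_share / city + highway_share / highway)

# BEV: 109.2 MPGe city, 107.5 MPGe highway -> ~108.5 MPGe combined
bev = combined_economy(109.2, 107.5)
# ICEV: 30.7 MPG city, 39.5 MPG highway -> ~34.1 MPG combined
icev = combined_economy(30.7, 39.5)
print(round(bev, 1), round(icev, 1))
```

Note that a simple arithmetic mean of the MPG values would overstate the combined figure; the harmonic mean is what the 55/45 cycle weighting implies.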
For the ICEV, on-road adjusted city and highway fuel economies were 30.7 and 39.5 MPG, respectively, yielding a combined fuel economy of 34.1 MPG10,11. See Methods section for details. Accounting for effects from fuel carbon intensity and a circuity factor of 1.205 (to incorporate actual road distance traveled between origin and destination) yields use-phase emissions values for the base-case single-occupant scenario of 12.3 kg-CO2e for the BEV and 24.3 kg-CO2e for the ICEV. The lower burdens from the BEV reflect the higher overall system efficiencies of electrified platforms over internal-combustion platforms. The fuel-to-motion conversion efficiency of ICEVs is 12–30%, depending on the drive cycle, whereas for BEVs this efficiency is 72–94%12.

VTOL vs. ground-based vehicle comparison

Figure 4 compares VTOL vs. ICEV and BEV emission intensity (kg-CO2e VKT−1) as a function of point-to-point trip distance (vehicle-kilometers traveled (VKT)) for our base case with one occupant in each vehicle. The associated primary energy used is shown in Supplementary Fig. 3. ICEV and BEV base-case emissions were found to be roughly 0.20 and 0.10 kg-CO2e VKT−1, respectively. As indicated in Fig. 3, VTOLs incur significant emissions for hover but are efficient in cruise. As a result, the VTOL emissions per VKT shown in Fig. 4 are ~0.59 kg-CO2e VKT−1 for the shortest trip (5 km) but decrease rapidly with increasing trip length, tending toward an asymptotic value of ~0.14 kg-CO2e VKT−1 for a 250 km trip. Base-case VTOL emissions (for the 100 km trip) are 0.15 kg-CO2e VKT−1.

Fig. 4: GHG emissions normalized by vehicle-kilometers traveled. The GHG emission results for single-occupant VTOLs and ground-based vehicles (ICEV and BEV) are normalized by vehicle-kilometers traveled (VKT). This illustrates the impact of amortizing the fixed burden from the hover phase over longer distances.
The VTOL GHG emissions break even with the ICEV at 35 km.

The ICEV performs better than the VTOL up to ~35 km, where aerial flight is dominated by the energy-intensive hover mode. GHG emissions for VTOLs drop substantially below those from ICEVs for trips longer than ~50 km. For long-distance trips, VTOLs can leverage efficient cruise performance to outperform ICEVs. The VTOL emissions approach, but do not match, those from BEVs for distances > ~120 km. For our base-case travel distance of 100 km (point-to-point), the VTOL has well-to-wing/wheel GHG emissions that are 35% lower but 28% higher than the ICEV and BEV, respectively. Initially, VTOLs are likely to operate as aerial taxis, and service providers would target near-full occupancy from a utilization–maximization standpoint, similar to current commercial airlines. Passengers could be incentivized to share VTOL rides given the significant expected time savings from flying. Thus, it seems likely that the average occupancy of VTOLs will be greater than that of conventional passenger cars. Given an expected occupancy difference, it can be argued that the emission burdens between VTOLs and ground-based vehicles should be compared on a PKT basis rather than the VKT basis shown in Fig. 4. The results of this assessment are described in the Sensitivity Analysis. Extensive variability exists in VTOL design and operational domains. The sensitivity analysis presented here includes the variation of six key VTOL parameters from the base-case values (for 100 km point-to-point travel). Table 1 contains the definitions for the key parameter input values and associated sources. Figure 5a, b summarize the results of the analysis. Figure 5a shows the sensitivity of the base-case VTOL emissions (kg-CO2e VKT−1) to grid carbon intensity, wind, lift-to-drag ratio (L/D), battery-specific energy, and powertrain assumptions.

Table 1: VTOL modeling input parameters

Fig. 5: (a) Sensitivity analysis for VTOL base-case scenario.
Five key modeling parameters are individually varied over realistic bounded ranges within the modeling of the 100 km base-case VTOL scenario. Variation in the electrical grid carbon intensity has the largest impact on the results, whereas the range of system efficiencies shows the smallest change. (b) Sensitivity analysis for passenger loading. The 100 km base-case VTOL scenario is modeled with passenger loading varying from 1 to 3. GHG emission results are normalized by passenger-kilometers traveled (PKT) to illustrate the impact of allocating the burden over more travelers. Dashed horizontal lines indicate results for ground-based vehicles (ICEV and BEV) with an average occupancy of 1.54 passengers. It should be noted that the pilot is not considered a passenger in the VTOL, whereas the driver is considered a passenger in the ground-based cars.

First, changing the 2020 electrical grid carbon intensity from the US average mix to the California and Central-and-Southern Plains grids results in a 52% decrease and 41% increase in emissions, respectively. It is noteworthy that a similar effect will be seen with the BEV when comparing these VTOL results with the BEV baseline. Second, although impacts of wind could equalize fluctuations in emissions for a defined route over multiple iterations of travel in an aerial taxi service, weather remains an important consideration that can affect VTOL energy use for a given trip. We estimate a 16% reduction in base-case emissions with a favorable 30-knot tailwind. Conversely, a 30-knot headwind increases these emissions by nearly 26%. Third, we examine the L/D ratio. An upper-bound aerodynamic efficiency value of 20 during cruise would reduce base-case GHG emissions by almost 13%. Conversely, a lower bound for the cruise L/D of 13 will result in a 26% increase from the baseline emissions. Fourth, we consider battery-specific energy.
If the specific energy is reduced from 400 Wh kg−1 in the VTOL baseline to the 250 Wh kg−1 assumed for the BEV10 (while keeping the battery capacity constant at 140 kWh), the emissions will increase by 18%. This also reduces the safe operating range from 250 to 220 km due to the higher energy consumption required to account for the added battery weight. Fifth, system efficiency is another important variable to consider. The efficiency is driven by the electric powertrain and modern propeller designs for lift and cruise functionality. This is bounded by a value of 70% on the lower end and 80% on the upper end. These result in a nearly 8% increase and 4% decrease from the baseline emissions, respectively. Sixth, we consider the impact of passenger loading on emissions calculated on a PKT basis. As noted previously, it seems likely that the average VTOL loading will be higher than for conventional ground-based vehicles. There are no available empirical data upon which to assume a typical VTOL occupancy; hence, we consider one to three passengers (alongside a pilot) spanning the complete range for the craft considered (corresponding to a maximum payload of 350 kg). The average number of passengers in a ground vehicle is 1.54 (including the driver)8, which forms a reasonable basis of comparison with the VTOL. Figure 5b shows the PKT results for VTOL vs. an ICEV and BEV, noting that the pilot in the VTOL is not considered a passenger. We define an occupant as any person who is physically contained in the vehicle, whereas a passenger is an occupant for whom the trip is being made. Therefore, the VTOL pilot is not considered a passenger. As seen from Fig. 5b, for two or more passengers the VTOL outperforms the ICEV and for three passengers the VTOL outperforms the BEV on a PKT basis. Specifically, a three-passenger VTOL has burdens that are 52% and 6% lower than for a 1.54-passenger ICEV and BEV, respectively. 
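The battery-mass penalty behind the specific-energy sensitivity case above can be checked directly from the quantities given in the text (the 140 kWh capacity and the two specific energies); the ~18% emissions change itself requires the full flight model:

```python
CAPACITY_WH = 140_000  # 140 kWh pack, held constant in the sensitivity case

mass_vtol_baseline = CAPACITY_WH / 400  # 400 Wh/kg baseline -> 350 kg
mass_bev_chemistry = CAPACITY_WH / 250  # 250 Wh/kg BEV chemistry -> 560 kg

# Switching chemistries adds 210 kg of battery the aircraft must carry,
# which is what drives the higher energy use and the reduced safe range.
print(mass_vtol_baseline, mass_bev_chemistry,
      mass_bev_chemistry - mass_vtol_baseline)
```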
Figure 5b indicates that VTOLs operating at or near-full capacity are relatively efficient, outperforming average-occupancy BEVs in the base case. Supplementary Fig. 4 contains the sensitivity analysis results for the BEV and ICEV. The point-to-point VTOL flight path results in a 100 km trip time of about 27 min, with a cruise speed of 150 mph (roughly 241 kph). For a highly congested commute of a similar distance, approximately the span of a major city, time savings of point-to-point travel can be significant. VTOL travel time is dependent on many factors that are hard to collectively characterize, including air traffic and airspace restrictions. Weather challenges are inherent with aircraft operation, which can create travel-time variability. Thirty-knot headwinds and tailwinds are considered as bounds in the Sensitivity Analysis, representing inclement weather that is potentially still safe for flight. This results in a nearly 3 min increase or decrease in travel time for the base-case 100 km trip. Although travel time for VTOLs can vary with weather conditions, the variability is relatively small and can be predicted given reasonable weather forecasts. Predictability is a major advantage of VTOL mobility, particularly in locations where road systems are highly congested and ground travel times highly unpredictable. On the ground, adopting the five-cycle test procedure (see Methods section) yields an average speed of 20.6 mph (33.2 kph) for all-city driving and 58.5 mph (94.1 kph) for all-highway driving. This results in an average speed of 29.1 mph (46.8 kph) for a combined 55% city/45% highway driving cycle. Assuming an average circuity factor of 1.20, defined as the ratio of actual and straight-line distance, leads to a travel time of 154 min for the base case. For context, a trip of similar length from Irvine to Malibu can take between 120 and 210 min during rush hour according to a Google Maps estimate13. 
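The 154 min ground travel time quoted above can be reproduced from the stated average speed and circuity factor; the speed and circuity values are from the text, while the function name is ours:

```python
def ground_travel_time_min(distance_km, circuity=1.20, avg_speed_kph=46.8):
    """Door-to-door driving time: road distance = straight-line x circuity."""
    road_km = distance_km * circuity
    return road_km / avg_speed_kph * 60.0

t_car = ground_travel_time_min(100)  # ~154 min for the 100 km base case
print(round(t_car))
# The text's VTOL time for the same trip is about 27 min (241 kph cruise
# plus hover/climb overhead), giving the roughly 83% saving reported below.
```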
For ground-based vehicles, variability in travel time is significant. From the 55% city/45% highway base case, the travel time increases by 41% using the city average speed and decreases by 50% using the highway average speed. The travel time for cars is significantly longer than for VTOLs, reflecting their much lower average cruise velocities and, to a lesser extent, more circuitous routing. Travel times as a function of distance for VTOL and ground-based cars are shown in Fig. 6. In the base case (100 km), this equates to an 83% time-saving switching from car to VTOL. This finding is consistent with the estimation of a sixfold travel-time advantage for VTOL travel in Silicon Valley14. As seen in Fig. 6, there is no overlap between uncertainty ranges of both aerial and ground travel times. Even for adverse wind conditions for the VTOL and faster highway travel for the cars, aerial modes still have lower associated journey times than ground-based modes. The time and route certainty for VTOLs, factors that are often unpredictable on the ground, could be valuable for passenger transport.

Fig. 6: Travel-time comparison. Travel-time results for the VTOL and ground-based vehicles (ICEV and BEV) are provided as a function of travel distance. Uncertainty bars show the impact of varying the assumptions for wind speed for VTOLs and urban-highway driving split for cars.

VTOL throughput will be aided by relatively short travel times and the added degree of freedom associated with aerial mobility. In principle, lines of VTOLs could travel over several vertical layers/stacks. However, regimented operation would likely be enforced by aviation authorities, and for VTOL deployment as an aerial taxi service, physical access to takeoff and landing sites would be limited. Presently, it is hard to gauge how these constraints would compare to limitations of ground-based vehicles.
However, it seems plausible that even in a conservative scenario, VTOL throughput would not be a limiting factor to their adoption. We present the first detailed sustainability assessment of VTOL flying cars. Although VTOLs are faced with economic, regulatory, and safety challenges, we determine that they may have a niche role in a sustainable mobility system. From the results of our assessment, four key insights for VTOL development can be drawn. First, due to the significantly higher burdens associated with fewer passengers on-board, operators would have to ensure VTOLs fly at near-full capacities for them to outperform conventional ground-based vehicles. This might be a plausible scenario for two reasons. Current airline service providers already operate with similarly high utilization targets. Also, given the significant time savings of VTOLs over cars, passengers may be motivated to share rides with others to reduce the higher costs expected of VTOL trips. While ridesharing in ground-based cars, passengers often have to trade off cost for travel time. This is not expected to be the case with VTOLs, with time-saving benefits being potentially important for their adoption. It should be noted here that single-occupant ground-based vehicles also have negative sustainability implications compared with fully loaded cars that combine passenger trips. Second, VTOLs emit fewer GHGs on a VKT basis compared with ICEVs for trips beyond 35 km. However, the average ground-based vehicle commute is only about 17 km long, with trips exceeding 35 km accounting for under 15% of all vehicle trips8. Hence, the trips where VTOLs are more sustainable than ICEVs make up only a small fraction of total annual vehicle-miles traveled on the ground. Consequently, VTOLs will be limited in their contribution (and role) in a sustainable mobility system.
For shorter distances, energy-intensive hover dominates the flight profile, thereby preventing the VTOL from leveraging efficient aerodynamic performance in cruise. VTOL sustainability performance is more advantageous when competing with ground-based vehicles traveling congested routes or indirect routes with higher circuity factors. The comparative energy, emissions, and time-saving benefits of VTOLs are maximized in areas with high congestion or with geographical barriers, which dictate indirect routing for ground-based transport. There could also be an opportunity to displace a portion of short-range regional jet travel with electric VTOLs to reduce GHG emissions. Small jets such as the Embraer 145 with a capacity of 49 passengers have a use-phase well-to-wing GHG burden of 0.10 and 0.20 kg-CO2e PKT−1 with load factors of 100% and 50%, respectively15. This is comparable to VTOLs with one to three passengers emitting 0.15–0.06 kg-CO2e PKT−1 for a 250 km trip. Third, the GHG emissions of electric VTOLs scale with the carbon intensity of the electricity grid. The carbon intensity of most electric grids is expected to be substantially lower in the future, as more renewable generation is brought on-line. Hence, the benefits of electric VTOLs over conventional fossil-fuel-powered road transportation are expected to grow in the future. Fourth, lower VTOL emissions, enabled largely by DEP, are not strongly contingent on advances in energy storage. Although superior battery chemistries (and higher specific energies) favor VTOLs' performance over BEVs (owing to greater weight dependencies for the former), they affect range more than they do sustainability impacts for either transport mode, as described in the Methods section. Related work by Uber7 and Ullman et al.6 supports the key takeaways from our study. Uber7 estimates a VTOL energy intensity of about 0.48 kWh km−1 at 241 kph for an 80 km trip. No detailed breakdown of the VTOL energy modeling is provided.
Under these conditions and assuming four occupants, the VTOL operational energy intensity yielded by our model is 0.43 kWh km−1 (about 10% lower). Further, the Ullman et al.6 model of VTOL range and energy consumption was reproduced according to the physics-based relationships stated in the study. Using the Ullman et al.6 model and input assumptions, the VTOL operational energy intensity is 0.57 kWh km−1 for a 1360 kg VTOL at 241 kph for a 100 km trip. Using our baseline input assumptions instead, Ullman's model produces an energy intensity of 0.37 kWh km−1 (about 35% lower), which is the same as the output from our model for a fully loaded VTOL. Despite these studies reporting higher energy intensities, the comparison with ground-based vehicles still remains promising. Using the 2020 US average grid mix, Uber's 0.48 kWh km−1 and Ullman's 0.57 kWh km−1 translate to GHG emissions of 0.09 and 0.10 kg-CO2e PKT−1, respectively (for three passengers on a 100 km trip). These compare favorably to the 0.07 and 0.13 kg-CO2e PKT−1 results for the BEV and ICEV, respectively. Our analysis provides an important first basis for assessing and guiding use-phase VTOL sustainability. Given the dynamic nature of rapid developments in the flying car space, VTOL deployment could emerge differently from our defined base case. This may alter our findings in unpredictable ways. Further, future work should consider the total vehicle cycle burdens for these aircraft, once there is more clarity on material selection, manufacturing processes, design, and disposal. Finally, despite certain sustainability benefits of VTOLs, their feasibility as a future transportation option depends on advances beyond those of a technical nature, including regulation, consumer, and societal acceptance of aerial transport in urban areas.

VTOL key parameter definitions

The key input parameters used throughout our VTOL physics-based model are defined in Table 1.
The table includes the base-case value and corresponding source, as well as the bounds used in the sensitivity analysis for each parameter.

VTOL range model

For modeling VTOL range, we begin with the potential energy (E) needed to lift the VTOL to a given altitude (h). This is considered alongside the system efficiency (η) used to convert energy stored in the battery to run the electric motor and finally create propulsion through the propellers. For a VTOL with a given takeoff mass (m) and given gravity constant (g), we have:

$$E = \frac{mgh}{\eta} \quad (1)$$

The aerodynamics of the VTOL frame convert this potential energy into a distance traveled (R), akin to an unpowered glide. The aerodynamic efficiency of the VTOL is specified by the L/D ratio and corresponds to how effectively a VTOL can convert altitude to distance traveled. As such:

$$\frac{L}{D} = \frac{R}{h} \quad (2)$$

Combining Eqs. (1) and (2), we arrive at the general Eq. (3) for VTOL electric aircraft range:

$$R = E\,\frac{L}{D}\,\eta\,\frac{1}{mg} \quad (3)$$

Refining this equation to investigate the key performance drivers, we factor the battery mass (mb) into both numerator and denominator terms. This yields the standard Breguet range equation adapted for electric aircraft in Eq. (4)16. Overall, the model converts the potential energy required to lift a VTOL to an altitude into range (R) achieved from gliding on the aerodynamic wings.

$$R = E\,\frac{L}{D}\,\eta\,\frac{1}{mg}\,\frac{m_{\mathrm{b}}}{m_{\mathrm{b}}} = E^{\ast}\beta\,\frac{L}{D}\,\eta\,\frac{1}{g} \quad (4)$$

Eq. (4) illustrates key VTOL performance drivers, namely battery technology and design efficiency, as shown in Table 1. Battery-specific energy (E*) is the limiting factor for VTOL range. Lithium-sulfur batteries are being discussed for aerospace applications17 and are currently being built with pack-specific energies of 400 Wh kg−1 18,19. The performance characteristics chosen for VTOL batteries (400 Wh kg−1 and 1 kW kg−1) appear to be plausible in the near future.
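The Breguet relation can be exercised numerically. The specific energy (400 Wh kg−1), the cruise efficiencies (0.9 powertrain × 0.85 propulsive), and the 60% available-capacity restriction are from the text; the battery mass fraction β = 0.30 and cruise L/D = 17 are illustrative assumptions of ours (the paper gives L/D sensitivity bounds of 13–20 but we do not know its exact base-case values), so the resulting range is a sketch, not the paper's 250 km figure:

```python
G = 9.81  # gravitational acceleration, m/s^2

def breguet_range_km(e_star_wh_per_kg, beta, l_over_d, eta):
    """Electric-aircraft Breguet range: R = E* * beta * (L/D) * eta / g."""
    e_star_j_per_kg = e_star_wh_per_kg * 3600.0  # Wh/kg -> J/kg
    return e_star_j_per_kg * beta * l_over_d * eta / G / 1000.0  # m -> km

eta = 0.9 * 0.85  # powertrain x propulsive efficiency in cruise (from text)
r_gross = breguet_range_km(400, beta=0.30, l_over_d=17, eta=eta)
# Only 60% of pack capacity is available for standard flight
# (80% usable minus 20% reserve, per the Methods text below).
r_usable = 0.60 * r_gross
print(round(r_gross), round(r_usable))
```

The gross figure is the theoretical glide-converted range; the usable figure shows how strongly the reserve and depth-of-discharge policies cut into it.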
Several battery chemistries with a practical specific energy upwards of 400 Wh kg−1 have been reported20. Research and development is underway to improve the battery cycle life and specific power. One study reported a cell-specific power of 10 kW kg−1 for a certain lithium-sulfur chemistry21. Moreover, VTOLs would likely not be regulated by safety requirements around battery packaging as stringent as for BEVs, such as those defined by the Federal Motor Vehicle Safety Standards22. Ground-based vehicles are prone to crashes and operational battery wear and tear, thereby warranting such constraints. For VTOLs, reduced overhead packaging weight enables greater realization of cell-specific to battery-specific energy compared with ground-based cars. Further, DEP enables alternative battery topographies for VTOLs, allowing for unique designs of battery warehousing. Another reason VTOLs can adopt advanced batteries earlier is that their service providers would be more likely to pay a premium compared with automotive manufacturers (from a cost-recovery and customer-base willingness to pay standpoint). The alignment of these factors indicates sufficient basis for our assumed value. Battery mass fraction (β) describes the mass of the battery packs that can be supported on the airframe. Other important considerations for battery performance include depth of discharge and reserve capacity. Battery health and cycle life are considered by restricting the usable battery capacity at 80%3,23. Current aviation regulations for on-demand, small commuter aircraft mandate 30 min of additional cruise fuel24. These regulations are designed for diverting to an alternate airport at the end of long-haul trips and not for short commuter hops with VTOLs, which do not need runway access. Conversely, an aggressive projection specifies 6 min additional cruise, which translates to a narrow safety margin3. 
Our model considers an intermediate reserve battery capacity of 20% for emergencies, which amounts to 15 min of additional cruise time or 5 min of reserve hover time25. Incorporating both considerations gives an available battery capacity for standard flight of 60%. VTOL design efficiency can be broken into aerodynamic efficiency, characterized by the L/D ratio, and system efficiency (η). L/D is a measure of the efficiency of converting potential energy from altitude into distance traveled. η is composed of powertrain efficiency (0.9) and propulsive efficiency (0.85 for climb and cruise, and 0.7 for hover)6,26.

VTOL energy and GHG emissions modeling

The diversity of VTOL designs calls for a physics-based approach to primary energy and GHG emission modeling. This section provides the details of the physics-based model and sample calculations for the base-case input values defined in Table 1. Performance drivers from Eq. (4) directly affect the maximum VTOL range and indirectly affect the emissions results through added battery capacity and mass. To construct a model specifically for energy and emissions, we consider the simplified VTOL flight profile shown in Fig. 1. The VTOL energy model combines climb and descent with cruise due to the uncertainty in the speed profile and transition to/from winged flight during these phases. We selected a cruise altitude of 305 m (1000 ft) for consistency with the Uber3 report and to meet the minimum safe altitude threshold in Federal Aviation Regulation Part 91.11927. As the VTOL is assumed to reach the same altitude during each flight, for simplicity, only the cruise horizontal slant range is assumed to change between different trips. Also note that although R_hover includes hover for both takeoff and landing, the corresponding ground roll is zero. Therefore, given a selected trip length (R), the range of each flight phase can be simplified as specified in Eq. (5).
$$R = R_{{\mathrm{hover}}} + R_{{\mathrm{climb}}} + R_{{\mathrm{cruise}}} + R_{{\mathrm{descent}}} \cong R_{{\mathrm{cruise}}}$$ With the horizontal slant range known for each phase of flight, we then assess the resulting power requirements. Each mode of flight has a constant average power draw (P) over its corresponding time of flight (t), which is used to find the overall energy requirements (E), as shown in Eq. (6). $$E = P_{{\mathrm{hover}}}t_{{\mathrm{hover}}} + P_{{\mathrm{cruise}}}t_{{\mathrm{cruise}}}$$ Power draw is equal to the product of force and velocity in the direction of flight. This velocity is specified as true airspeed (TAS), the velocity of the aircraft relative to the air. Headwinds or tailwinds do not change the TAS, but they do affect the groundspeed (GS). The time of flight and horizontal slant range during each phase are calculated from the GS. Although energy and emissions will change with fluctuating winds and the resulting GS for single flights, frequent back and forth along a given air-taxi route would likely average out these changes. The energy requirement calculated in Eq. (6) is adjusted for a 90% battery charge–discharge efficiency before applying a primary-to-delivered electricity factor to arrive at primary energy28. To determine GHG emissions, the computed energy is combined with 2020 US average grid mix projections from the 2017 GREET model, described in the modeling of ground-based vehicles29. Regional variations in generation portfolios are captured in the sensitivity analysis, with the electric grids of California and the Central-and-Southern Plains representing the two bookends for emission factors. Additional auxiliary power draws from systems such as advanced avionics or passenger comforts (phone charging, heating/cooling, radio, etc.) are excluded, as they would likely have a minor impact on the overall results.
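The groundspeed bookkeeping and the primary-energy adjustment described above can be sketched as follows. This is a minimal sketch: the function and variable names are our own illustrative choices, assuming the stated 90% charge–discharge efficiency and 40.8% primary-to-delivered electricity factor.

```python
# Sketch of Eq. (6) plus the groundspeed and grid adjustments described above.
# Names are illustrative; efficiency defaults are the stated base-case values.

def groundspeed_ms(tas_ms, wind_ms=0.0):
    """Groundspeed from true airspeed; wind_ms > 0 for a tailwind."""
    return tas_ms + wind_ms

def phase_time_s(range_m, gs_ms):
    """Time of flight for a phase, from its horizontal slant range."""
    return range_m / gs_ms

def trip_primary_energy_MJ(phases, eta_grid=0.408, eta_batt=0.90):
    """Eq. (6): E = sum of P*t over phases, adjusted to primary energy.

    phases: iterable of (average_power_W, duration_s) tuples."""
    delivered_J = sum(p * t for p, t in phases)
    return delivered_J / (1e6 * eta_grid * eta_batt)
```

For instance, a 30-knot (about 15.4 m s−1) tailwind shortens a 100 km cruise leg flown at 66.7 m s−1 TAS from roughly 1500 s to roughly 1218 s of flight time.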
For context, an advanced transponder has a power draw of the order of 200 W, which is three orders of magnitude smaller than the power requirements for our VTOL flight30. The detailed power and energy calculations for each phase of flight are described below. First, we examine the taxi phase. Power requirements for non-flight activities are aggregated in this segment. This includes wheel-driven taxi to a landing pad from a charging space and vice versa, system power during passenger ingress and egress, and other small draws. As the expected taxi time would be relatively short, about 1 min3, the energy expended relative to total flight energy is small and hence not accounted for in our model. Second, hover is examined. Hover is the most energetically intensive phase of the flight profile because, unlike helicopters, blown wing VTOL designs are optimized for cruise. Hover power (Phover) is modeled in Eq. (7) based on momentum theory31. For a mean sea level air density (ρ), disk loading (δ), and hover system efficiency (ηh), we have: $$P_{{\mathrm{hover}}} = \frac{{mg}}{{\eta _{\mathrm{h}}}}\sqrt {\frac{\delta }{{2\rho }}} $$ Hover power is primarily dependent on rotor disk loading, defined as the VTOL total weight divided by the lifting surface area. The disk loading parameter is chosen based on data provided in Stoll32, resulting in a δ value of 450 N m−2 for the VTOL. In addition to disk loading (δ), we use a hover system efficiency (ηh) of 0.63, which incorporates a powertrain efficiency of 0.9 and a propulsive efficiency of 0.7 (instead of 0.85) to account for lifting inefficiencies6. Hover relates to vertical takeoff and landing, as well as intermediate loitering, and thus has no associated ground roll6. Eq.
(7) culminates in an average power requirement of 250.6 kW: $$P_{{\mathrm{hover}}} = \frac{{1187.5\;{\mathrm{kg}} \ast 9.81\;{\mathrm{m}}\;{\mathrm{s}}^{ - 2}}}{{0.63}}\sqrt {\frac{{450\;{\mathrm{N}}\;{\mathrm{m}}^{ - 2}}}{{2 \ast 1.22\;{\mathrm{kg}}\;{\mathrm{m}}^{ - 3}}}} = 250.6\;{\mathrm{kW}}$$ For a minute-long hover (2 legs of 30 s each), the total primary energy required for this leg of the flight (constant across each trip) is 40.9 MJ (accounting for charge–discharge and primary-to-delivered energy efficiencies of 90% and 40.8%, respectively): $${\mathrm{Primary}}\;{\mathrm{energy}}\;{\mathrm{for}}\;{\mathrm{hover}} = \frac{{250.6\;{\mathrm{kW}} \ast 60\;{\mathrm{s}}}}{{1000 \ast 0.408 \ast 0.9}} = 40.9\;{\mathrm{MJ}}$$ Third, we model climb and descent in the same way as cruise, for three main reasons. First, the energy required in excess of cruise performance to climb and accelerate is approximately balanced out by the lower energy required during the descent and deceleration segment, such that assuming cruise performance for the whole duration is a good approximation. Second, limited data are available indicating how the VTOL TAS and corresponding L/D would change throughout climb and descent. Finally, due to the cruise altitude of 1000 ft and the assumed rate of climb (ROC) and rate of descent (ROD) of 1000 fpm, the climb and descent phases have a combined duration of only 2 min, a small portion of the 25 min flight in the base case. However, if/when an accurate velocity profile and VTOL configuration is made available, a higher fidelity modeling approach may be used. In this case, climb would be modeled separately from cruise and power requirements would be split up into two distinct parts. First, the potential energy used to lift the VTOL to a given altitude is converted to power (Pclimb,PE) by dividing by the time for climb (tclimb). This duration is found by specifying the ROC and the target altitude (h).
$$P_{{\mathrm{climb}},{\mathrm{PE}}} = \frac{{mgh}}{{t_{{\mathrm{climb}}}}}$$ Next, we consider the power necessary to overcome aerodynamic forces during climb (Pclimb,D). A flight path angle (γ) is specified for the VTOL during climb. Thus, we calculate the climb TAS (Vclimb) using γ and the ROC. L/Dclimb and Vclimb then determine this power. $$P_{{\mathrm{climb}},{\mathrm{D}}} = \frac{{mg}}{{L/D_{{\mathrm{climb}}}}}V_{{\mathrm{climb}}}$$ Combining these two power elements in Eqs. (8) and (9) yields Eq. (10), which also incorporates climb system efficiency (ηc): $$P_{{\mathrm{climb}}} = \left( \frac{{mh}}{{t_{{\mathrm{climb}}}}} + \frac{m}{{L/D_{{\mathrm{climb}}}}}V_{{\mathrm{climb}}} \right)\frac{g}{{\eta _{\mathrm{c}}}}$$ Noting that the cruise altitude divided by the time of flight in climb (h/tclimb) is equal to the ROC yields the final power Eq. (11) for climb (Pclimb), which would have to be integrated over the velocity and L/D profile during the climb phase. $$P_{{\mathrm{climb}}} = \frac{{mg}}{{\eta _{\mathrm{c}}}} \left({\mathrm{ROC}} + \frac{{V_{{\mathrm{climb}}}}}{{L/D_{{\mathrm{climb}}}}} \right)$$ Alternatively, we arrive at the same modeling equation using a free body diagram of a VTOL in climb, in which we observe the four forces acting on an aircraft: Lift (L), Weight (W), Thrust (T), and Drag (D). VTOL weight is the product of its takeoff mass (m) and the acceleration due to gravity (g). A diagram of the relationship is provided in Supplementary Fig. 5. The power needed for climb can be simplified as the product of the thrust produced by the VTOL and the TAS. From the momentum conservation principle, the thrust force is approximately resolved into the opposing drag force and a small component of the VTOL weight, as shown in Supplementary Fig. 5. Owing to the winged VTOL design, the component of induced velocity in climb, if any, is neglected. Next, the (Vclimb sin γ) term is expressed as the ROC.
Using a small flight path angle assumption, we consider weight to be approximately equal to lift. Drag force can be found by dividing weight by L/Dclimb. $${P_{{\mathrm{climb}}} = \frac{{TV_{{\mathrm{climb}}}}}{{\eta _{\mathrm{c}}}} = \frac{{V_{{\mathrm{climb}}}}}{{\eta _{\mathrm{c}}}}(mg{\mathrm{sin}}\gamma + D) = \frac{g}{{\eta _{\mathrm{c}}}} \left(mV_{{\mathrm{climb}}}{\mathrm{sin}}\gamma + \frac{{mV_{{\mathrm{climb}}}}}{{L/D_{{\mathrm{climb}}}}} \right) = \frac{{mg}}{{\eta _{\mathrm{c}}}} \left({\mathrm{ROC}} + \frac{{V_{{\mathrm{climb}}}}}{{L/D_{{\mathrm{climb}}}}} \right)}$$ Fourth, cruise flight is modeled using a simple force balance, depicted through the free body diagram shown in Supplementary Fig. 6. We assume equal force couples in steady, non-accelerated flight (equal lift and weight, and equal thrust and drag). We then use the thrust and cruise TAS (V) of the VTOL to find the power draw during cruise (Pcruise). As assumed, the thrust produced is equal to the drag force and is found by dividing the VTOL weight (W) by L/D. This yields Eq. (13), which also considers cruise system efficiency (ηc). $$P_{\mathrm{cruise}} = \frac{{mg}}{{\frac{L}{D}}}\frac{V}{{\eta _{\mathrm{c}}}} = \frac{( 1187.5\;{\mathrm{kg}} \ast 9.81 \;{\mathrm{m}}\;{\mathrm{s}}^{-2} \ast 66.7\;{\mathrm{m}}\;{\mathrm{s}}^{-1} )}{17 \ast 0.765 \ast 1000} = 59.7\;{\mathrm{kW}}$$ A cruise power (Pcruise) of ~59.7 kW is calculated. To determine the primary energy for cruise, we first use Eq. (5). In the base-case scenario: $$R_{{\mathrm{cruise}}} \cong 100\;{\mathrm{km}}$$ Using our 150 mph cruise velocity, a corresponding base-case cruise time of about 24.9 min is obtained.
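The hover, climb, and cruise power expressions above (Eqs. (7), (11), and (13)) can be sketched as below, using the base-case values from the sample calculations. The function names are ours; the air density default of 1.22 kg m−3 follows the sample calculation (with 1.225 kg m−3 the hover result rounds exactly to the paper's 250.6 kW).

```python
import math

G = 9.81  # gravitational acceleration, m s^-2

def hover_power_W(m_kg, delta_Nm2, rho_kgm3=1.22, eta_h=0.63):
    """Eq. (7): momentum-theory hover power from mass and disk loading."""
    return (m_kg * G / eta_h) * math.sqrt(delta_Nm2 / (2.0 * rho_kgm3))

def climb_power_W(m_kg, roc_ms, v_climb_ms, l_over_d_climb, eta_c=0.765):
    """Eq. (11): climb power, ROC term plus aerodynamic-drag term."""
    return (m_kg * G / eta_c) * (roc_ms + v_climb_ms / l_over_d_climb)

def cruise_power_W(m_kg, v_ms, l_over_d, eta_c=0.765):
    """Eq. (13): steady cruise power, with thrust = drag = W / (L/D)."""
    return m_kg * G * v_ms / (l_over_d * eta_c)

p_hover = hover_power_W(1187.5, 450.0)          # ~251 kW (paper rounds to 250.6 kW)
p_cruise = cruise_power_W(1187.5, 66.7, 17.0)   # ~59.7 kW
```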
Finally, $${\mathrm{Primary}}\;{\mathrm{energy}}\;{\mathrm{for}}\;{\mathrm{cruise}} = \frac{{59.7\;{\mathrm{kW}} \ast 24.9\min \ast \,60\;{\mathrm{s}}\;{\mathrm{min}}^{ - 1}}}{{1000 \ast 0.408 \ast 0.9}} = 243.0\;{\mathrm{MJ}}$$ Adding the individual primary energy associated with each leg of the flight profile gives a total base-case VTOL primary energy use of about 284 MJ. The primary energy is converted to GHG emissions by multiplying by 0.408 to convert back to delivered electricity, then multiplying by 0.135 kg-CO2e MJ−1 to get 15.7 kg-CO2e. Finally, we incorporate reserves. Present reserve requirements for VTOLs remain unclear without official regulation from aviation authorities. Existing FAA regulations for Part 135 aircraft mandate 30 min of additional cruise fuel. This regulation is designed for diversion to alternate airports at the end of long-haul trips, not for shorter commute hops with VTOLs that do not need a runway to land. Therefore, we use the current FAA regulation as a conservative upper bound for safety. At the lower end are two predictions of 6 and 10 min reserves for cruise6,7. Although these seem more appropriate to the functionality of VTOLs, safety would be of the highest concern for consumer adoption. Emergency scenarios or potentially adverse conditions call for greater robustness. Our model uses an intermediate value of 20% of total battery capacity as reserve25, amounting to 15 min of additional cruise time, or 5 min of reserve hover time. Ground-based vehicles energy and GHG emissions modeling Ground-based passenger vehicles are modeled as generic mid-sized, light-duty ICEVs and BEVs with fuel economy, powertrain efficiency, and battery-specific energy projected for the year 202010. Although battery-specific energy determines range for BEVs (as for VTOLs), it has minor impacts on fuel economy (which relates more to electric motor efficiency). For functional equivalence with our VTOL, a long-range (340 km, 210 mi) BEV is chosen.
This range accounts for battery health. Four factors need to be considered to assess the energy and emissions for cars. First, the fuel economy. Fuel economy values for urban and highway driving are adjusted for on-road (real-world) conditions using the five-cycle testing method (Supplementary Table 1)11. The five-cycle test is representative of typical US commuting, in that it covers five distinct driving patterns (including aggressive driving, extreme ambient temperatures, and heating and air-conditioning usage), and considers an equivalent payload weight. Scenarios corresponding to the bookend fuel economies of urban and highway driving are modeled as extremes and described as sensitivities. To obtain a baseline value, we compute the combined fuel economy as a harmonically weighted average of 55% city/45% highway driving activity, as specified by the US EPA11. For details, see Supplementary Eq. (1), as part of Supplementary Note 1. Second, the added weight from incremental payload and components, which increases fuel consumption. Payload-induced fuel consumption increase values of 0.073 and 0.27 L equivalent per 100 km per 100 kg, based on EPA five-cycle testing, were used for the BEV and ICEV, respectively33. An equivalency factor of roughly 8.9 kWh L−1 was applied. Third, the fuel carbon intensity, which dictates the emissions profile of the vehicle. The 2017 GREET model was used as a basis for these values29. Similar to the electric VTOL, the BEV GHG emissions are driven by the charging grid. For the base case, we assumed the 2020 US average distributed mix from GREET (34% coal, 28% natural gas, 19% nuclear, 8% hydro, 8% wind, and 3% other), with the delivered electricity corresponding to a GHG-100 intensity of 0.135 kg-CO2e MJ−1. The corresponding efficiency for primary-to-delivered grid electricity is 40.8%.
This factor is modified from GREET, which accounts for upstream energy impacts of nuclear-based grid electricity using the NREL US LCI database34 (see Supplementary Note 2 for details). We assume a 90% battery charge/discharge efficiency for the BEV, consistent with the VTOL28. The ICEV platform is powered by conventional E10 gasoline, with a lower heating value of 119.6 MJ gal−1, well-to-wheel primary energy use of 1.28 MJ per MJ of delivered fuel energy, and a well-to-wheel carbon intensity of 0.093 kg-CO2e per MJ of delivered energy. Fourth, circuitous (indirect) routing is an important consideration specific to ground-based modes. Ground-based routes are typically longer than the shortest distance between two points. This circuity factor is highly variable and depends on geographic location, urban density, preferred routing, and road-network connectivity. We used the US average circuity factor of 1.20 to determine the effective performance of ground transport modes5. For context, in an assessment of average circuity factors by country, Belarus has the most direct routing (average circuity factor 1.12), whereas Egypt has the least direct routing (average circuity factor 2.10)5. Dividing the BEV range of 340 km by the US average circuity factor provides a point-to-point BEV range of ~280 km. Travel-time modeling The travel time for VTOL flight was modeled based on the simple flight profile shown in Fig. 1. This begins with an assumed 30 s hover for takeoff. We then assume a ROC of 1000 fpm to 1000 ft cruise altitude, with a similar ROD, followed by a 30 s hover to landing35. A cruise TAS of 150 mph follows a leading VTOL design32. The resulting GS and travel time are calculated, with potential headwinds and tailwinds of 30 knots each modeled for their sensitivities. For ground-based vehicles, travel time is inherently more uncertain, changing with chosen route, traffic conditions, time of day, region, and weather36.
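The 55/45 harmonic fuel-economy weighting and the circuity adjustment described above can be sketched as follows. This is a minimal sketch with our own function names; the 30/40 mpg city/highway figures in the example are hypothetical, not values from the paper.

```python
def combined_fuel_economy(fe_city, fe_hwy, w_city=0.55):
    """EPA harmonic weighting: 55% city / 45% highway driving activity."""
    return 1.0 / (w_city / fe_city + (1.0 - w_city) / fe_hwy)

def point_to_point_range_km(route_range_km, circuity=1.20):
    """Effective point-to-point range after the US-average circuity factor."""
    return route_range_km / circuity

fe = combined_fuel_economy(30.0, 40.0)   # hypothetical 30/40 mpg -> ~33.8 mpg combined
r = point_to_point_range_km(340.0)       # 340 km route range -> ~283 km (~280 km)
```

Note that the harmonic (not arithmetic) average is used because fuel consumed per distance, not fuel economy, adds linearly across the city and highway shares.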
Here, a simple model incorporating effective distance and average velocity is employed for estimating commute time. For a general estimate of travel time, we first derive velocities for city and highway driving individually. This is done by computing the weighted average of factors specified in testing guidelines (Supplementary Table 2)11. The average speeds obtained are used as the basis for modeling travel times corresponding to all-city and all-highway driving patterns. For a baseline average speed—consistent with our fuel economy modeling approach—we use a harmonically weighted average of 55% city/45% highway driving speeds11. Equations for modeling travel time, consistent with the five-cycle testing procedure, are provided in Supplementary Note 3 (see Supplementary Eqs. (2), (3), and (4)). The authors declare that all data supporting the findings of this study are available within the paper and its Supplementary Information files. Code availability Supporting code is available from the authors upon reasonable request. Barnstorff, K. Ten-Engine Electric Plane Completes Successful Flight Test. https://www.nasa.gov/langley/ten-engine-electric-plane-completes-successful-flight-test. (NASA, 2017). Datta, A. Commercial Intra-City On-Demand Electric-VTOL Working Report (Vertical Flight Society, 2018). Amazon Web Services. Uber Elevate: eVTOL Vehicle Requirements and Missions. https://s3.amazonaws.com/uber-static/elevate/Summary Mission and Requirements.pdf (2018). Kim, H. D., Perry, A. T., & Ansell, P. J. A Review of Distributed Electric Propulsion Concepts for Air Vehicle Technology Technical Report (Aerospace Research Central, 2018). Ballou, R. H., Rahardja, H. & Sakai, N. Selected country circuity factors for road travel distance estimation. Trans. Res. Part A Policy Pract. 36, 843–848 (2002). Ullman, D. G., Homer, V., Horgan, P., & Oullette, R. Comparing Electric Sky Taxi Visions Technical Report (David Ullman, 2017). Uber. 
Uber Elevate: Fast Forwarding to a Future of On-Demand Urban Air Transportation. https://www.uber.com/elevate.pdf (2016). Oak Ridge National Laboratory. National Household Travel Survey Technical Report. https://nhts.ornl.gov/ (Oak Ridge National Laboratory, 2017). Chester, M. & Horvath, A. High-speed rail with emerging automobiles and aircraft can reduce environmental impacts in California's future. Environ. Res. Lett. 7, 034012 (2012). Elgowainy, A. et al. Cradle-to-Grave Lifecycle Analysis of US Light Duty Vehicle Fuel Pathways: A Greenhouse Gas Emissions and Economic Assessment of Current (2015) and Future (2025–2030) Technologies Technical Report (Argonne National Lab, 2016). US Environmental Protection Agency. Fuel Economy Labeling of Motor Vehicle Revisions to Improve Calculation of Fuel Economy Estimates Technical Report. http://www3.epa.gov/carlabel/documents/420r06017.pdf (US Environmental Protection Agency, 2006). US Fuel Economy Information. Where the Energy Goes: Electric Cars https://www.fueleconomy.gov/feg/atv-ev.shtml (2019). Google Maps https://www.google.com/maps/dir/Irvine,+CA/Malibu,+CA/@33.8709241,-118.5008353,10z/am=t/data=!4m18!4m17!1m5!1m1!1s0x80dcdd0e689140e3:0xa77ab575604a9a39!2m2!1d-117.8265049!2d33.6845673!1m5!1m1!1s0x80e81da9f908d63f:0x93b72d71b2ea8c5a!2m2!1d-118.7797571!2d34.0259216!2m2!7e2!8j1530288600!3e0!5i1 (2018). Antcliff, K. R., Moore, M. D., & Goodrich, K. H. Silicon Valley as an Early Adopter for On-Demand Civil VTOL Operations Technical Report (NASA, 2016). Chester, M. & Horvath, A. Environmental assessment of passenger transportation should include infrastructure and supply chains. Environ. Res. Lett. 4, 024008 (2009). Greatrix, D. R. In: Powered Flight. 29–62. https://doi.org/10.1007/978-1-4471-2485-6_2 (Springer, London, 2012). Service, R. F. New generation of batteries could power aerial drones, underwater robots. 
Science Magazine http://www.sciencemag.org/news/2018/03/new-generation-batteries-could-better-power-aerial-drones-underwater-robots (2018). Oxis Energy. Our Cell and Battery Technology Advantages https://oxisenergy.com/technology/ (2018). Sion Power. Licerion https://sionpower.com/products/ (2018). Ma, Y. et al. Lithium sulfur primary battery with super high energy density: based on the cauliflower-like structured C/S cathode. Sci. Rep. 5, 14949 (2015). Yuan, Z. et al. Hierarchical free‐standing carbon‐nanotube paper electrodes with ultrahigh sulfur‐loading for lithium–sulfur batteries. Adv. Funct. Mater. 24, 6105–6112 (2014). US National Highway Traffic Safety Administration. Lithium-Ion Battery Safety Issues for Electric and Plugin Hybrid Vehicles Technical Report. https://www.nhtsa.gov/sites/nhtsa.dot.gov/files/documents/12848-lithiumionsafetyhybrids_101217-v3-tag.pdf (US National Highway Traffic Safety Administration, 2017). Harish, A. et al. Economics of Advanced Thin-Haul Concepts and Operations Technical Report (Aerospace Research Central, 2016). Low, R. B., Dunne, M. J., Blumen, I. J. & Tagney, G. Factors associated with the safety of EMS helicopters. Am. J. Emerg. Med. 9, 103–106 (1991). Stolaroff, J. K. et al. Energy use and life cycle greenhouse gas emissions of drones for commercial package delivery. Nat. Commun. 9, 1054 (2018). Brown, A., & Harris, W. A Vehicle Design and Optimization Model for On-Demand Aviation Technical Report (Massachusetts Institute of Technology, 2018). US Government Publishing Office. Part 91 – General Operating and Flight Rules (14 C.F.R. § 91.119) Technical Report. https://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=3efaad1b0a259d4e48f1150a34d1aa77&rgn=div5&view=text&node=14:2.0.1.3.10&idno=14#se14.2.91_1119 (US Government Publishing Office, 2010). Cooney, G., Hawkins, T. R. & Marriott, J. Life cycle assessment of diesel and electric public transportation buses. J. Ind. Ecol. 17(5), 689–699 (2013). Argonne National Laboratory. 
GREET.Net Database (Argonne National Laboratory, Lemont, Illinois, USA, 2017). GTX 345 Specs. Garmin https://buy.garmin.com/en-US/US/p/140949#overview (2018). Johnson, W. Helicopter Theory (Courier Corporation ISBN-10: 0-486-68230-7, 1994). Stoll, A. Analysis and Full Scale Testing of the Joby S4 Propulsion https://nari.arc.nasa.gov/sites/default/files/attachments/Stoll-TVFW-Aug2015.pdf (NASA, 2015). Kim, H. C. & Wallington, T. J. Life cycle assessment of vehicle lightweighting: a physics-based model to estimate use-phase fuel consumption of electrified vehicles. Environ. Sci. Technol. 50, 11226–11233 (2016). National Renewable Energy Laboratory. U.S. Life Cycle Inventory Database (National Renewable Energy Laboratory, 2012). Stoll, A., & Mikic, G. V. Design Studies of Thin-Haul Commuter Aircraft with Distributed Electric Propulsion Technical Report (Aerospace Research Central, 2016). Salonen, M. & Toivonen, T. Modelling travel time in urban networks: comparable measures for private car and public transport. J. Transp. Geogr. 31, 143–153 (2013). Misra, A. Evolution of Fundamental Technologies for Future Electrified Aircraft Technical Report. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20180004254.pdf (NASA, 2017). Federal Aviation Administration. Advisory Circular: Aircraft Weight and Balance Control Technical Report. https://www.faa.gov/documentLibrary/media/Advisory_Circular/AC120-27E.pdf (Federal Aviation Administration, 2005). U.S. Standard Atmosphere. Engineering Toolbox https://www.engineeringtoolbox.com/standard-atmosphere-d_604.html (2018). We thank James E. Anderson and Sandy L. Winkler (Ford Motor Company), William J. Fredericks (Advanced Aircraft Company), and Geoffrey M. Lewis, Carlos E.S. Cesnik, and Nilton O. Renno (University of Michigan) for helpful discussions. This study was supported by Ford Motor Company through their Summer Internship Program and a Ford-University of Michigan Alliance Project Award (No. N022546-00). 
Research and Innovation Center, Ford Motor Company, Dearborn, Michigan, 48121, USA: Akshat Kasliwal, Noah J. Furbush, James R. McBride, Timothy J. Wallington, Robert D. De Kleine & Hyung Chul Kim. Center for Sustainable Systems, School for Environment and Sustainability, University of Michigan, 440 Church Street, Ann Arbor, Michigan, 48109, USA: Akshat Kasliwal, James H. Gawron & Gregory A. Keoleian. Department of Aerospace Engineering, University of Michigan, 1320 Beal Avenue, Ann Arbor, Michigan, 48109, USA: Noah J. Furbush. T.J.W., R.D.K., H.C.K., and G.A.K. designed and supervised the research. J.R.M. provided the governing VTOL model and parameters. A.K., N.J.F., and J.H.G. performed the research and wrote the paper, with inputs from all co-authors. Correspondence to Gregory A. Keoleian. Journal peer review information: Nature Communications thanks Joshuah Stolaroff, Alex Stoll, and David Ullman for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Kasliwal, A., Furbush, N.J., Gawron, J.H. et al. Role of flying cars in sustainable mobility. Nat Commun 10, 1555 (2019). https://doi.org/10.1038/s41467-019-09426-0
Conduction Band Offset Effect on the Cu2ZnSnS4 Solar Cells Performance
Ahmed Redha Latrous* | Ramdane Mahamdi | Naima Touafek | Marcel Pasquinelli
Higher Normal School "Assia Djebar", Ain El Bey Road, Constantine 25000, Algeria
LEA, Electronics Department, University of Batna2, Mostefa Ben Boulaïd, Batna 05000, Algeria
Higher National School of Biotechnology "Toufik Khaznadar" (ENSB), Ain El Bey Road, Constantine 25000, Algeria
DETECT Department, IM2NP Laboratory, UMR CNRS 7334, Aix Marseille University, Marseille 13000, France
Corresponding Author Email: [email protected]
https://doi.org/10.18280/acsm.450601
Among the causes of performance degradation in kesterite-based solar cells is a poor choice of the n-type buffer layer, which results in unfavorable band alignment: the conduction band offset (CBO) at the absorber/buffer junction interface is one of the major causes of a lower VOC. In this work, the effect of the CBO at the CZTS/Cd(1-x)ZnxS junction interface is studied as a function of the Zn composition x relative to (Zn+Cd), using the SCAPS-1D simulator package. The obtained results show that the solar cell performance reaches its maximum (Jsc = 13.9 mA/cm2, Voc = 0.757 V, FF = 65.6%, η = 6.9%) for an optimal value of CBO = -0.2 eV and a buffer Zn proportion of x = 0.4 (Cd0.6Zn0.4S). The CZTS solar cell parameters are also affected by the absorber thickness and the acceptor carrier concentration; the best performance is obtained for a CZTS absorber layer thickness d = 2.5 µm and NA = 10^16 cm-3. Optimizing the electron work function of the back metal contact exhibited an optimum value at 5.7 eV, with a power conversion efficiency of 13.1%, Voc of 0.961 V, FF of 67.3%, and Jsc of 20.2 mA/cm2.
absorber layer, buffer layer, CBO, Cd(1-x)ZnxS, CZTS, interface, SCAPS-1D, solar cell
A recent surge of interest in kesterite Cu2ZnSn(S,Se)4 (CZTSSe) solar cells has prompted extensive research into this promising thin-film absorber technology, considered near-ideal because of its environmental compatibility, the abundance of its constituent elements in the earth's crust, its direct band gap (1-1.5 eV), and its high absorption coefficient compared with its chalcopyrite, chalcogenide, and perovskite counterparts [1-5]. Experimental progress has yielded a conversion efficiency of 12.6% for the sulfur-selenium alloy CZTSSe [6], 11.6% for the selenium-based compound CZTSe [7], and above 9% for the pure-sulfide compound CZTS [8]. Despite all the efforts made in this field, the best kesterite performance remains below that of its neighbors, in particular solar cells based on the chalcopyrite Cu2InGaS4 [9, 10] and on CdTe [11]. It is worth mentioning that the CZTS absorber layer contains no toxic elements and no selenium (Se), and is therefore more ecological than the CZTSSe semiconductor. In addition, the CZTS structure is closely analogous to that of chalcopyrite semiconductors, which allows their processing technologies to be applied. Despite these qualities, CZTS cells still suffer from weak electronic properties, mainly due to high carrier recombination at the interface between the CZTS absorber and the buffer layer, where the atomic arrangement is extremely disordered; the conversion efficiency of CZTS solar cells is therefore mainly limited by a significant open-circuit voltage (Voc) deficit. Several factors can reduce Voc, among them unfavorable energy band alignment at the absorber/buffer heterojunction, which can cause strong recombination at this interface [12].
Different values of the conduction band offset (CBO) have been reported [13]. First-principles calculations indicate a "cliff"-like CBO, and most recent investigations, including measurements, agree that a "cliff"-like CBO contributes significantly to interface recombination and thereby to the VOC deficit. The "cliff" acts as a barrier that stops the flow of injected electrons (the majority carriers) from the buffer to the absorber under forward bias [14, 15]. To remedy this VOC deficit and avoid degrading the performance of the kesterite cell, the choice of the buffer layer is essential in order to reduce recombination losses at the absorber/buffer junction interface. Much research on the effect of band offsets at the absorber/buffer interface has been undertaken, primarily for ZnS and CdS buffer layers [16]. The conduction band minimum (CBM) of a ZnS buffer layer in CZTS solar cells is located above the CBM of the CZTS absorber layer; it therefore forms a barrier to photogenerated electrons and leads to high carrier losses. In the case of CdS, the CBM is located below the CBM of the CZTS absorber layer, causing a large negative CBO that increases interface recombination. In this perspective, and throughout our study, the Cd(1-x)ZnxS buffer layer offers a good compromise: its band gap, and hence the position of its CBM, can be tuned by controlling the Zn/(Zn + Cd) ratio.
This is the main goal of our work: to optimize the Cd(1–x)ZnxS buffer layer as a function of x by calculating, with the SCAPS-1D software, the conduction band alignment (CBO) at the CZTS/Cd(1–x)ZnxS junction interface, and thereby to improve the output performance of the proposed SLG/Mo/CZTS/Cd(1–x)ZnxS/ZnO:Al structure (the open-circuit voltage VOC, the short-circuit current density JSC, the fill factor FF, and the efficiency $\eta$) by simulating the effects of various electrical and optical parameters: the absorber layer thickness, the acceptor carrier concentration of the absorber layer, and the predominant impact of the electron work function of the back metal contact. Our proposed structure and numerical calculations are presented in the following sections.
2. Presentation of the Device
Our SLG/Mo/CZTS/Cd(1–x)ZnxS/ZnO:Al structure, illustrated in Figure 1, is adopted as the basic model throughout this study; many authors have analyzed its behavior experimentally [17]. In this structure, SLG (soda-lime glass) acts as the substrate, followed by Mo as the back contact, over which a thin p-type doped CZTS kesterite layer acts as the active absorber. A thin n-doped Cd(1–x)ZnxS layer, with a direct band gap tunable between 2.64 eV (x = 0) and 3.42 eV (x = 1) and an electron affinity varying between 4.8 eV (x = 0) and 3.9 eV (x = 1), is deposited on the CZTS layer, followed by an n-type ZnO window layer, which acts as the front contact and on which aluminum contacts are grown.
Figure 1. Proposed CZTS kesterite structure
The SCAPS-1D thin-film solar cell simulation software, developed at Ghent University, is used to enter the device parameters as well as the material parameters of each layer; it operates by solving the Poisson equation and the electron-hole continuity equations [18]. The physical parameters are selected from literature values and are listed in Table 1 [19-21].
The absorption coefficients for all materials used in this study are taken from SCAPS. We adopted an operating temperature of 300 K. Solar radiation is incident at the front contact with the air mass 1.5 global spectrum (AM1.5G) at a power of 10³ W/m², taking into account a series resistance of 3.25 Ω and a shunt resistance of 400 Ω. In Table 1, we fixed the thicknesses of the Cd(1-x)ZnxS buffer layer and of the window layer, and varied the thickness of the CZTS absorber layer. We also fixed the band gaps and electron affinities of the absorber and window layers, as well as the donor densities of the buffer and window layers, while varying the acceptor density of the CZTS absorber layer. The electron work function of the back metal contact was assumed to be variable [21]. The effects of radiative recombination and Auger electron/hole capture were considered in all cases. Note that the influence of defects at the various interfaces (CZTS/Cd(1-x)ZnxS and Cd(1-x)ZnxS/ZnO) is not taken into account.

Table 1. Physical values for the different layers of the proposed structure (CZTS, Cd(1-x)ZnxS, ZnO:Al)

Thickness [µm]
Band gap Eg [eV]
Electron affinity χ [eV]
Relative dielectric permittivity εr
NC (conduction band effective density of states) [cm⁻³]: 2.2×10¹⁸
NV (valence band effective density of states) [cm⁻³]
Electron thermal velocity [cm/s]
Hole thermal velocity [cm/s]
Electron mobility µn [cm²/V·s]
Hole mobility µp [cm²/V·s]: 2.5×10¹
Donor density ND [cm⁻³]
Acceptor density NA [cm⁻³]
Absorption coefficient α [cm⁻¹]: SCAPS
Radiative recombination coefficient Br [cm³/s]: 5×10⁻⁹
Auger electron capture coefficient [cm⁶/s]: 1×10⁻²⁹
Auger hole capture coefficient [cm⁶/s]
Defect type (A/D/N) and defect density [cm⁻³]: A: 8.5×10¹⁵, D: 1×10¹⁷

3.
Results and Discussion

3.1 Energy band calculation

Before optimization, the first step of this work consists of simulating the proposed basic structure SLG/Mo/CZTS/CdS/ZnO:Al with the SCAPS-1D software and validating it against experimental results [19]. The energy band alignment diagram of the different layers under non-equilibrium conditions is shown in Figure 2, as a function of the parameters summarized in Table 1, showing the conduction and valence band offsets as well as the band gaps and electron affinities at the CZTS/CdS junction interface. The band gap energy of the buffer layer is 2.4 eV and its electron affinity is 4.0 eV. The VBO of this junction is negative, which causes an increase in carrier recombination and consequently a decrease in VOC, due to an activation energy lower than the absorber band gap [22, 23]. This band offset causes significant minority carrier recombination, so the VOC value is reduced (Figure 3). The formation of the heterojunction brings the Fermi levels of all the layers into alignment at equilibrium, referenced to the vacuum level. Under illumination, excess free carriers are generated and the Fermi level splits into quasi-Fermi levels, giving rise to the open-circuit voltage VOC. According to the energy band diagram calculations, there is a spike-like CBO at the CZTS/CdS heterojunction interface (Figure 3). The electron affinity of CdS (~4 eV) is lower than that of CZTS (~4.25 eV), which generates a positive spike-like CBO with ΔEc = 0.25 eV that hinders the movement of electrons from the kesterite layer to the buffer layer, and a negative valence band offset with ΔEv = -0.65 eV.

Figure 2. Energy band diagram representation of the basic kesterite structure

Figure 3. Simulated energy band diagram of the basic kesterite structure before optimization

Figure 4.
Comparison between the experimental and simulated J-V characteristics of the structure before optimization

3.2 Current density-voltage (J-V) characteristics: simulation and validation

To validate the model of the basic structure before optimization and under illumination, the curve of current density versus voltage (J-V) was simulated and compared with the experimental curve [24], in accordance with the parameters listed in Table 1. There is good agreement between the experimental and simulated curves, which validates our set of parameters as a baseline for the simulations in this work (Figure 4).

3.3 Influence of the [Zn]/([Zn]+[Cd]) ratio of the Cd(1-x)ZnxS buffer layer

In this section, we study the impact of the [Zn]/([Zn]+[Cd]) ratio of the Cd(1-x)ZnxS thin layer on the energy band alignment, and therefore on the conduction band offsets (CBO) and valence band offsets (VBO) that lead to spike or cliff formation at the buffer/absorber junction interface. These band offsets prevent the two types of carriers, electrons and holes, from crossing these obstacles, producing strong recombination at the CZTS/Cd(1-x)ZnxS junction interface and inside the device, and thereby degrading the performance of the structure. In this perspective, extensive computations with the SCAPS-1D simulator were undertaken to introduce a promising Cd(1-x)ZnxS buffer layer into the basic structure by varying the ratio x between 0 (CdS) and 1 (ZnS); its band gap thus varies between 2.64 eV and 3.42 eV and its electron affinity between 4.8 eV and 3.9 eV. The band gap and electron affinity of Cd(1-x)ZnxS for different Zn compositions can be calculated by extrapolation of experimental curves [25, 26], given by Eqns. (1) and (2); the CBO and VBO are then deduced from Eqns. (3) and (4).
$\mathrm{E_g}\left(\mathrm{Cd_{(1-x)}Zn_xS}\right)=2.642+1.067x-0.285x^{2}\ (\mathrm{eV})$ (1)

$\chi\left(\mathrm{Cd_{(1-x)}Zn_xS}\right)=4.8-0.9x\ (\mathrm{eV})$ (2)

$\Delta \mathrm{E_C}=\chi(\mathrm{CZTS})-\chi\left(\mathrm{Cd_{(1-x)}Zn_xS}\right)\ (\mathrm{eV})$ (3)

$\Delta \mathrm{E_V}=\left[\mathrm{E_g}(\mathrm{CZTS})+\chi(\mathrm{CZTS})\right]-\left[\mathrm{E_g}\left(\mathrm{Cd_{(1-x)}Zn_xS}\right)+\chi\left(\mathrm{Cd_{(1-x)}Zn_xS}\right)\right]\ (\mathrm{eV})$ (4)

All the simulated energy band alignments of the conduction and valence bands as a function of the dimensions of the proposed cell are represented in Figure 5. It is observed that the CBO at the CZTS/Cd(1-x)ZnxS interface plays a large role in controlling carrier transport toward the metallic contact. The magnitude of this offset is determined by the difference between the electron affinities of the buffer and absorber layers. Increasing positive and negative band offsets form spike-like and cliff-like structures, respectively. With increasing electron affinity of the buffer layer, the CZTS/Cd(1-x)ZnxS junction interface changes from a spike-like to a cliff-like shape. In order to determine the optimal conduction band offset, the impact of ΔEc on VOC, JSC, FF and $\eta$ was simulated. The recombination process at the CZTS/Cd(1-x)ZnxS interface has already been treated by several authors [27]. In fact, the energy band diagram of the SLG/Mo/CZTS/Cd(1-x)ZnxS/ZnO:Al solar cell in Figure 5 illustrates the easy transfer of electrons from the CZTS absorber layer to the front contact through the Cd(1-x)ZnxS/ZnO:Al interface.
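As a quick numerical check of Eqns. (1)-(4), the short Python sketch below evaluates the band offsets for a few compositions. The CZTS parameters here are assumptions taken from the text (χ(CZTS) ≈ 4.25 eV as stated; Eg(CZTS) ≈ 1.5 eV, inferred from the quoted ΔEv = -0.65 eV at the CZTS/CdS interface), and the function names are illustrative:

```python
def eg_buffer(x):
    """Band gap of Cd(1-x)ZnxS from Eq. (1), in eV."""
    return 2.642 + 1.067 * x - 0.285 * x ** 2

def chi_buffer(x):
    """Electron affinity of Cd(1-x)ZnxS from Eq. (2), in eV."""
    return 4.8 - 0.9 * x

# Assumed CZTS parameters (not tabulated above; see lead-in)
EG_CZTS = 1.5    # eV
CHI_CZTS = 4.25  # eV

def band_offsets(x):
    """Return (CBO, VBO) at the CZTS/Cd(1-x)ZnxS interface, Eqs. (3)-(4)."""
    cbo = CHI_CZTS - chi_buffer(x)
    vbo = (EG_CZTS + CHI_CZTS) - (eg_buffer(x) + chi_buffer(x))
    return cbo, vbo

for x in (0.0, 0.4, 1.0):
    cbo, vbo = band_offsets(x)
    kind = "spike" if cbo > 0 else "cliff"
    print(f"x = {x:.1f}: CBO = {cbo:+.2f} eV ({kind}), VBO = {vbo:+.2f} eV")
```

For x = 0 and x = 1 this reproduces the -0.55 eV cliff and +0.35 eV spike quoted in the conclusion, and x = 0.4 gives a CBO of about -0.2 eV, the optimum reported below.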
The highest conversion efficiency and FF obtained are 6.9% and 65.6%, respectively, at an optimum value of CBO = -0.2 eV for x = 0.4; the corresponding values of VOC and JSC are 0.757 V and 13.9 mA/cm². Outside this range, we observe a decrease in the FF and the efficiency (Figure 6). The recombination phenomenon, which is the main factor in the reduction of the quantum efficiency, is consistent with the reduction in the fill factor and short-circuit current. It can also be concluded that negative VBO values are not an obstacle to the transfer of carriers from the surface at the absorber/buffer interface into the absorber layer, and do not significantly change the solar cell's performance. As a result, the CBO is the determining factor in improving the generated photocurrent and increasing the efficiency.

Figure 5. Energy band diagram evolution of the proposed structure

Figure 6. Variation of VOC, JSC, FF and $\eta$ with CBO at the CZTS/Cd(1-x)ZnxS junction interface of the proposed structure

Figure 7. J-V characteristics of the proposed structure before optimization for the different Cd(1−x)ZnxS buffer layers

Simulation results for the proposed structure using alternative Cd(1-x)ZnxS buffer layers instead of the standard CdS buffer layer show that these thin-film semiconductor materials can help improve the performance of the device, with Cd0.6Zn0.4S adopted as the optimal buffer layer. The J-V characteristics of the kesterite structure for the different Cd(1-x)ZnxS buffer compositions are illustrated in Figure 7. For all these reasons, this buffer layer is the most appropriate one for optimizing the parameters of the structure.
3.4 Influence of the CZTS absorber layer thickness

Having optimized the Cd0.6Zn0.4S buffer layer, we now turn to other factors influencing the performance of the proposed structure, in particular the thickness of the absorber layer [28, 29]. To analyze its impact, calculations were made for CZTS absorber layer thicknesses ranging from 10 nm to 10 µm, keeping all other parameters constant as described in Table 1. As shown in Figure 8, the curves obtained after simulation show that when the thickness of the absorber increases from 200 nm to 10000 nm, VOC increases by around 20%, while JSC decreases by 18%; FF and the conversion efficiency increase by 10% and 11%, respectively, compared with the basic structure before optimization. A good improvement was noted for an optimum absorber thickness of 2.5 µm.

Figure 8. Variation of VOC, JSC, FF and $\eta$ as a function of the CZTS absorber layer thickness

3.5 Influence of the acceptor carrier concentration of the CZTS absorber layer

Likewise, the acceptor carrier concentration NA of the CZTS absorber thin layer plays a major role in improving the performance of the structure, and one of the aims of this paper is to study its effect on the overall efficiency of our solar cell [28, 29]. After setting the optimum CZTS absorber layer thickness to 2.5 µm, we note the normalized output parameters (VOC = 0.825 V, JSC = 23.5 mA/cm², FF = 65.3%, $\eta$ = 7.7%) in Figure 9. As the carrier concentration increases from 1×10¹⁴ cm⁻³ to 1×10¹⁸ cm⁻³, the semiconductor becomes degenerate; this is one of the major concerns limiting the upper value of NA.
This generates a reduction of the space charge region on the absorber side, which in turn leads to a sharp decrease of the photogenerated carriers in this region [21], and therefore to a decrease in the current density JSC and the energy conversion efficiency. A good improvement was noted for an optimum acceptor carrier concentration NA of 1×10¹⁶ cm⁻³.

Figure 9. Variation of VOC, JSC, FF and $\eta$ as a function of the CZTS absorber layer acceptor carrier concentration

3.6 Influence of the back metal contact work function

The influence of the back metal contact work function (BMWF) is also taken into account in this work. After setting the optimum CZTS absorber layer thickness to 2.5 µm and the optimum acceptor carrier concentration NA to 1×10¹⁶ cm⁻³, the electron work function $\phi$ of the back contact metal was varied in the simulation from 4.6 eV to 6 eV. Increasing the BMWF yields a significant improvement in the solar cell parameters, as revealed in Figure 10 (VOC = 0.961 V, JSC = 20.2 mA/cm², FF = 67.3%, $\eta$ = 13.1%). A remarkably sharp rise is observed for BMWF in the range 5.2 to 6 eV, where JSC can reach up to around 20 mA/cm², similar to the results of [30], owing to the improvement of the CZTS/Mo interface by a good ohmic contact. The conversion efficiency remains almost constant for BMWF between 5.7 and 6 eV. According to the graph, the optimal value is 5.7 eV.

Figure 10. Variation of VOC, JSC, FF and $\eta$ as a function of the back metal contact work function

4. Conclusion

In this paper, the impact of the conduction band offset (CBO) at the absorber/buffer (CZTS/Cd(1-x)ZnxS) junction interface was investigated using the SCAPS-1D software.
We calculated a large negative value of -0.55 eV for the "cliff"-like CBO at the CZTS/CdS junction interface (buffer composition x = 0), which represents one of the main recombination mechanisms, whereas a calculated value of 0.35 eV for the "spike"-like CBO at the CZTS/ZnS junction interface (buffer composition x = 1) indicates the formation of a barrier to the photogenerated electrons, leading to high carrier losses. The results obtained are in good agreement with previously published works. Varying the proportion x between 0 and 1 allowed us to identify a new buffer layer, Cd0.6Zn0.4S, which reduces the recombination at the junction interface as well as the VOC deficit, and therefore improves the efficiency of our cell. For this buffer layer, the results show an improvement in VOC of 7% compared with ZnS and a clear improvement in efficiency of 30% compared with ZnS and 20% compared with CdS. The maximum values obtained (VOC = 961 mV, JSC = 20.2 mA/cm², FF = 67.3% and $\eta$ = 13.1%) are all reached for an optimal cliff value of around -0.2 eV at composition x = 0.4 (Cd0.6Zn0.4S) with an optimized CZTS absorber layer. The optimized parameter values were 2.5 µm for the CZTS layer thickness, 1×10¹⁶ cm⁻³ for the acceptor doping concentration NA, and 5.7 eV for the molybdenum back metal contact work function (BMWF).

Acknowledgments

The authors acknowledge Dr. Marc Burgelman and colleagues from Ghent University for the use of the SCAPS-1D program in all simulations reported in this paper.

References

[1] Mekky, A.H. (2020). Electrical and optical simulation of hybrid perovskite-based solar cell at various electron transport materials and light intensity. Annales de Chimie - Science des Matériaux, 44(3): 179-184. https://doi.org/10.18280/acsm.440304
[2] Touafek, N., Mahamdi, R., Dridi, C. (2019). Impact of the secondary phase ZnS on CZTS performance solar cells. International Journal of Control, Energy and Electrical Engineering, 9: 6-9.
[3] Vauche, L., Risch, L., Sanchez, Y., Dimitrievska, M., Pasquinelli, M., Goislard de Monsabert, T., Grand, P.P., Jaime-Ferrer, S., Saucedo, E. (2015). 8.2% pure selenide kesterite thin-film solar cells from large-area electrodeposited precursors. Progress in Photovoltaics: Research and Applications, 24(1): 38-51. https://doi.org/10.1002/pip.2643
[4] Vauche, L., Dubois, J., Laparre, A., Pasquinelli, M., Bodnar, S., Grand, P.P., Jaime, S. (2014). Rapid thermal processing annealing challenges for large scale Cu2ZnSnS4 thin films. Physica Status Solidi (a): Applications and Materials Science, 212(1): 103-108. https://doi.org/10.1002/pssa.201431387
[5] Ruiz, C.M., Pérez-Rodriguez, A., Arbiol, J., Morante, J.R., Bermúdez, V. (2014). Impact of the structure of Mo(S,Se)2 interfacial region in electrodeposited CuIn(S,Se)2 solar cells. Physica Status Solidi (a): Applications and Materials Science, 212(1): 61-66. https://doi.org/10.1002/pssa.201431435
[6] Wang, W., Winkler, M.T., Gunawan, O., Gokmen, T., Todorov, T.K., Zhu, Y., Mitzi, D.B. (2013). Device characteristics of CZTSSe thin-film solar cells with 12.6% efficiency. Advanced Energy Materials, 4(7): 1301465. https://doi.org/10.1002/aenm.201301465
[7] Lee, Y.S., Gershon, T., Gunawan, O., Todorov, T.K., Gokmen, T., Virgus, Y., Guha, S. (2015). Cu2ZnSnSe4 thin-film solar cells by thermal co-evaporation with 11.6% efficiency and improved minority carrier diffusion length. Advanced Energy Materials, 5(7): 1401372. https://doi.org/10.1002/aenm.201401372
[8] Hao, X., Sun, K., Yan, C., Liu, F., Huang, J., Pu, A., Green, M. (2016). Large VOC improvement and 9.2% efficient pure sulfide Cu2ZnSnS4 solar cells by heterojunction interface engineering. IEEE 43rd Photovoltaic Specialists Conference (PVSC), Portland, OR, USA, pp. 5-10. https://doi.org/10.1109/PVSC.2016.7750017
[9] Touafek, N., Mahamdi, R. (2014). Excess defects at the CdS/CIGS interface solar cells. Chalcogenide Letters, 11(11): 589-596.
[10] Touafek, N., Mahamdi, R. (2014).
Back surface recombination effect on the ultra-thin CIGS solar cells by SCAPS. International Journal of Renewable Energy Research, 4(4): 958-964.
[11] Akbarnejad, E., Ghorannevis, Z., Mohammadi, E., Fekriaval, L. (2019). Correlation between different CdTe nanostructures and the performances of solar cells based on CdTe/CdS heterojunction. Journal of Electroanalytical Chemistry, 849: 113358. https://doi.org/10.1016/j.jelechem.2019.113358
[12] Kaur, K., Kumar, N., Kumar, M. (2017). Strategic review of interface carrier recombination in earth abundant Cu–Zn–Sn–S–Se solar cells: Current challenges and future prospects. Journal of Materials Chemistry A, 5: 3069-3090. https://doi.org/10.1039/C6TA10543B
[13] Kumar, A., Thakur, A.D. (2018). Role of contact work function, back surface field and conduction band offset in CZTS solar cell. Japanese Journal of Applied Physics, 57(8S3): 08RC05. https://doi.org/10.7567/JJAP.57.08RC05
[14] Liu, X., Feng, Y., Cui, H., Liu, F., Hao, X., Conibeer, G., Mitzi, D.B., Green, M. (2016). The current status and future prospects of kesterite solar cells: A brief review. Progress in Photovoltaics, 24(6): 879-898. https://doi.org/10.1002/pip.2741
[15] Pal, K., Singh, P., Bhaduri, A., Thapa, K.B. (2019). Current challenges and future prospects for a highly efficient (> 20%) kesterite CZTS. Solar Energy Materials and Solar Cells, 196: 138-156. https://doi.org/10.1016/j.solmat.2019.03.001
[16] Wang, D., Zhao, W., Zhang, Y., Liu, S. (2018). Path towards high-efficient kesterite solar cells. Journal of Energy Chemistry, 27(4): 1040-1053. https://doi.org/10.1016/j.jechem.2017.10.027
[17] Khattak, Y.H., Baig, F., Ullah, S., Mari, B., Beg, S., Ullah, H. (2018). Enhancement of the conversion efficiency of thin film kesterite solar cell. Journal of Renewable and Sustainable Energy, 10(3): 033501. https://doi.org/10.1063/1.5023478
[18] Niemegeers, A., Burgelman, M., Decock, K., et al. (2013). SCAPS Manual, e-book.
[19] Haddout, A., Raidou, A., Fahoume, M., Elharfaoui, N., Lharch, M. (2018). Influence of CZTS layer parameters on cell performance of kesterite thin-film solar cells. Proceedings of the 1st International Conference on Electronic Engineering and Renewable Energy, Saidia, Morocco, pp. 640-646. https://doi.org/10.1007/978-981-13-1405-6_73
[20] Jhuma, F.A., Shaily, M.Z., Rashid, M.J. (2019). Towards high-efficiency CZTS solar cell through buffer layer optimization. Materials for Renewable and Sustainable Energy, 8(6): 1-7. https://doi.org/10.1007/s40243-019-0144-1
[21] Patel, M., Ray, A. (2012). Enhancement of output performance of Cu2ZnSnS4 thin film solar cells: A numerical simulation approach and comparison to experiments. Physica B: Condensed Matter, 407(21): 4391-4397. https://doi.org/10.1016/j.physb.2012.07.042
[22] Crovetto, A., Hansen, O. (2017). What is the band alignment of Cu2ZnSn(S,Se)4 solar cells? Solar Energy Materials and Solar Cells, 169: 177-194. https://doi.org/10.1016/j.solmat.2017.05.008
[23] Kaur, K., Kumar, N., Kumar, M. (2017). Strategic review of interface carrier recombination in earth abundant Cu–Zn–Sn–S–Se solar cells: Current challenges and future prospects. Journal of Materials Chemistry A, 5: 3069-3090.
[24] Mitzi, D.B., Gunawan, O., Todorov, T.K., Guha, S. (2011). The path towards a high performance solution-processed kesterite solar cell. Solar Energy Materials and Solar Cells, 95(6): 1421-1436. https://doi.org/10.1016/j.solmat.2010.11.028
[25] Hasan, N.B., Ghazi, R.A. (2016). Study optical and electrical properties of Cd1-xZnxS thin films prepared by spray pyrolysis technique. International Journal of Engineering and Advanced Research Technology (IJEART), 2(10): 33-36.
[26] Chowdhury, T.H., Ferdaous, M.T., Abdul Wadi, M.A., Chelvanathan, P., Amin, N., Islam, A., Kamaruddin, N., Zin, M.I.M., Ruslan, M.H., Sopian, K.B., Akhtaruzzaman, M.D. (2018).
Prospects of ternary Cd1−xZnxS as an electron transport layer and associated interface defects in a planar lead halide perovskite solar cell via numerical simulation. Journal of Electronic Materials, 47(5): 3051-3058. https://doi.org/10.1007/s11664-018-6154-4
[27] Jiang, Z.W., Gao, S.S., Wang, S.Y., Wang, D.X., Gao, P., Sun, Q., Zhou, Z.Q., Liu, W., Sun, Y., Zhang, Y. (2019). Insight into band alignment of Zn(O,S)/CZTSe solar cell by simulation. Chinese Physics B, 28(4): 048801. https://doi.org/10.1088/1674-1056/28/4/048801
[28] Haddout, A., Raidou, A., Fahoume, M. (2019). A review on the numerical modeling of CdS/CZTS-based solar cells. Applied Physics A, 125(124): 1-16. https://doi.org/10.1007/s00339-019-2413-3
[29] Cherouana, A., Labbani, R. (2017). Study of CZTS and CZTSSe solar cells for buffer layers selection. Applied Surface Science, 424(2): 251-255. https://doi.org/10.1016/j.apsusc.2017.05.027
[30] Ferdaous, M.T., Shahahmadi, S.A., Chelvanathan, P., Akhtaruzzaman, Md., Alharbi, F.H., Sopian, K., Tiong, S.K., Amin, N. (2019). Elucidating the role of interfacial MoS2 layer in Cu2ZnSnS4 thin film solar cells by numerical analysis. Solar Energy, 178: 162-172. https://doi.org/10.1016/j.solener.2018.11.055
Metals and Materials International, pp 1–9

Investigation on the Creep Behavior of AZ91 Magnesium Alloy Processed by Severe Plastic Deformation

Iraj Khoubrou, Bahram Nami, Seyyed Mehdi Miresmaeili

This paper describes the grain refinement due to equal-channel angular pressing (ECAP) and the creep properties of the ECAP-processed AZ91 magnesium alloy. The resulting microstructure and creep properties were examined by scanning electron microscopy and the impression creep test method. The microstructural evolution reveals that the grains were refined to 14 µm after four ECAP passes at 628 K, following route Bc. The creep tests were carried out under stresses in the range of 35 to 95 MPa at temperatures in the range of 538 to 583 K. Based on a power law between the impression rate and stress, the stress exponents were about 2 and the activation energies were about 129 kJ/mol, which is close to the value for lattice diffusion of magnesium. Considering the obtained results, it can be stated that grain boundary sliding is the dominant creep mechanism at low stresses and high temperatures.

Graphic Abstract: The deformation mechanism during creep of the AZ91 alloy at low stresses and high temperatures is grain boundary sliding (GBS), and the deformation behavior can be described by:

$$\dot{\varepsilon} = 7.25\left(\frac{b}{d}\right)^{2}\left(\frac{Gb}{kT}\right)\left(\frac{\sigma}{G}\right)^{2.02} D_{L}$$

Keywords: Magnesium alloy; Equal-channel angular pressing; Creep properties; Grain boundary sliding; Microstructure
© The Korean Institute of Metals and Materials 2019

Department of Materials Engineering and New Technologies, Shahid Rajaee Teacher Training University (SRTTU), Lavizan, Tehran, Iran

Khoubrou, I., Nami, B. & Miresmaeili, S.M. Met. Mater. Int. (2019). https://doi.org/10.1007/s12540-019-00318-y. Received 13 March 2019.
A designer has 3 fabric colors he may use for a dress: red, green, and blue. Four different patterns are available for the dress. If each dress design requires exactly one color and one pattern, how many different dress designs are possible? For each fabric color, the designer can choose one of four patterns. Thus, as there are three potential fabric colors, the designer can create $3 \cdot 4 = \boxed{12}$ different dress designs.
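The multiplication principle used here can also be verified by brute-force enumeration. A small Python sketch (the color and pattern names are illustrative placeholders):

```python
from itertools import product

colors = ["red", "green", "blue"]
patterns = ["P1", "P2", "P3", "P4"]  # four available patterns

# Each design pairs exactly one color with one pattern,
# so the set of designs is the Cartesian product of the two sets.
designs = list(product(colors, patterns))
print(len(designs))  # 3 * 4 = 12
```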
10.3: Sampling Distributions and the Central Limit Theorem

10: Estimating Unknown Quantities from a Sample

10.3.1 Sampling distribution of the mean
10.3.2 Sampling distributions exist for any sample statistic!
10.3.3 The central limit theorem

The law of large numbers is a very powerful tool, but it's not going to be good enough to answer all our questions. Among other things, all it gives us is a "long run guarantee". In the long run, if we were somehow able to collect an infinite amount of data, then the law of large numbers guarantees that our sample statistics will be correct. But as John Maynard Keynes famously argued in economics, a long run guarantee is of little use in real life:

[The] long run is a misleading guide to current affairs. In the long run we are all dead. Economists set themselves too easy, too useless a task, if in tempestuous seasons they can only tell us, that when the storm is long past, the ocean is flat again. (Keynes, 1923)

As in economics, so too in psychology and statistics. It is not enough to know that we will eventually arrive at the right answer when calculating the sample mean. Knowing that an infinitely large data set will tell me the exact value of the population mean is cold comfort when my actual data set has a sample size of N=100. In real life, then, we must know something about the behaviour of the sample mean when it is calculated from a more modest data set! With this in mind, let's abandon the idea that our studies will have sample sizes of 10000, and consider a very modest experiment indeed. This time around we'll sample N=5 people and measure their IQ scores. As before, I can simulate this experiment in R using the rnorm() function:

> IQ.1 <- round( rnorm(n=5, mean=100, sd=15 ))
> IQ.1
[1] 90 82 94 99 110

The mean IQ in this sample turns out to be exactly 95.
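The snippets here use R's rnorm(); for readers following along without R, a rough Python equivalent of this "five IQ scores" experiment (standard library only; the variable names and seed are my own) looks like:

```python
import random
import statistics

random.seed(0)  # any seed works; fixed here only for reproducibility

# Simulate N = 5 IQ scores: normally distributed, mean 100, sd 15, rounded
iq_sample = [round(random.gauss(100, 15)) for _ in range(5)]
sample_mean = statistics.mean(iq_sample)

print(iq_sample)
print(sample_mean)
```

Just as with the R version, each run (or seed) yields a different handful of scores and hence a different sample mean.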
Not surprisingly, this is much less accurate than the previous experiment. Now imagine that I decided to replicate the experiment. That is, I repeat the procedure as closely as possible: I randomly sample 5 new people and measure their IQ. Again, R allows me to simulate the results of this procedure:

> IQ.2 <- round( rnorm(n=5, mean=100, sd=15 ))
> IQ.2
[1] 78 88 111 111 117

This time around, the mean IQ in my sample is 101. If I repeat the experiment 10 times I obtain the results shown in the table below, and as you can see the sample mean varies from one replication to the next.

Table: Ten replications of the IQ experiment, each with a sample size of N=5.

Replication   Person 1   Person 2   Person 3   Person 4   Person 5   Sample mean
1             90         82         94         99         110        95.0
2             78         88         111        111        117        101.0
3             111        122        91         98         86         101.6
4             98         96         119        99         107        103.8
5             105        113        103        103        98         104.4
7             100        93         108        98         133        106.4
9             86         119        108        73         116        100.4
10            95         126        112        120        76         105.8

Now suppose that I decided to keep going in this fashion, replicating this "five IQ scores" experiment over and over again. Every time I replicate the experiment I write down the sample mean. Over time, I'd be amassing a new data set, in which every experiment generates a single data point. The first 10 observations from my data set are the sample means listed in the table above, so my data set starts out like this:

95.0 101.0 101.6 103.8 104.4 ...

What if I continued like this for 10,000 replications, and then drew a histogram?
Using the magical powers of R that's exactly what I did, and you can see the results in Figure 10.5. As this picture illustrates, the average of 5 IQ scores is usually between 90 and 110. But more importantly, what it highlights is that if we replicate an experiment over and over again, what we end up with is a distribution of sample means! This distribution has a special name in statistics: it's called the sampling distribution of the mean.

Sampling distributions are another important theoretical idea in statistics, and they're crucial for understanding the behaviour of small samples. For instance, when I ran the very first "five IQ scores" experiment, the sample mean turned out to be 95. What the sampling distribution in Figure 10.5 tells us, though, is that the "five IQ scores" experiment is not very accurate. If I repeat the experiment, the sampling distribution tells me that I can expect to see a sample mean anywhere between 80 and 120.

Figure 10.5: The sampling distribution of the mean for the "five IQ scores experiment". If you sample 5 people at random and calculate their average IQ, you'll almost certainly get a number between 80 and 120, even though there are quite a lot of individuals who have IQs above 120 or below 80. For comparison, the black line plots the population distribution of IQ scores.

Figure 10.6: The sampling distribution of the maximum for the "five IQ scores experiment". If you sample 5 people at random and select the one with the highest IQ score, you'll probably see someone with an IQ between 100 and 140.

One thing to keep in mind when thinking about sampling distributions is that any sample statistic you might care to calculate has a sampling distribution. For example, suppose that each time I replicated the "five IQ scores" experiment I wrote down the largest IQ score in the experiment. This would give me a data set that started out like this:

110 117 122 119 113 ...
Doing this over and over again would give me a very different sampling distribution, namely the sampling distribution of the maximum. The sampling distribution of the maximum of 5 IQ scores is shown in Figure 10.6. Not surprisingly, if you pick 5 people at random and then find the person with the highest IQ score, they're going to have an above average IQ. Most of the time you'll end up with someone whose IQ is measured in the 100 to 140 range.

Figures 10.7-10.9 illustrate how the sampling distribution of the mean depends on sample size. In each panel, I generated 10,000 samples of IQ data, and calculated the mean IQ observed within each of these data sets. The histograms in these plots show the distribution of these means (i.e., the sampling distribution of the mean). Each individual IQ score was drawn from a normal distribution with mean 100 and standard deviation 15, which is shown as the solid black line.

Figure 10.7: Each data set contained only a single observation, so the mean of each sample is just one person's IQ score. As a consequence, the sampling distribution of the mean is of course identical to the population distribution of IQ scores.

Figure 10.8: When we raise the sample size to 2, the mean of any one sample tends to be closer to the population mean than a single person's IQ score, and so the histogram (i.e., the sampling distribution) is a bit narrower than the population distribution.

Figure 10.9: By the time we raise the sample size to 10, we can see that the distribution of sample means tends to be fairly tightly clustered around the true population mean.

At this point I hope you have a pretty good sense of what sampling distributions are, and in particular what the sampling distribution of the mean is. In this section I want to talk about how the sampling distribution of the mean changes as a function of sample size.
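The sample-size effect shown in Figures 10.7-10.9 can be reproduced numerically. This Python sketch (numpy assumed; an illustrative translation, not the book's R code) measures how the spread of the sampling distribution shrinks as N grows:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def sampling_dist_sd(n, reps=10_000):
    """Standard deviation of `reps` simulated sample means of size n."""
    means = rng.normal(loc=100, scale=15, size=(reps, n)).mean(axis=1)
    return float(means.std())

# Spread of the sampling distribution of the mean for N = 1, 2 and 10.
spreads = {n: sampling_dist_sd(n) for n in (1, 2, 10)}
```

With N=1 the spread matches the population standard deviation (about 15); it then shrinks steadily as N increases, exactly as the three histograms suggest.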
Intuitively, you already know part of the answer: if you only have a few observations, the sample mean is likely to be quite inaccurate. If you replicate a small experiment and recalculate the mean you'll get a very different answer; in other words, the sampling distribution is quite wide. If you replicate a large experiment and recalculate the sample mean you'll probably get the same answer you got last time, so the sampling distribution will be very narrow. You can see this visually in Figures 10.7, 10.8 and 10.9: the bigger the sample size, the narrower the sampling distribution gets. We can quantify this effect by calculating the standard deviation of the sampling distribution, which is referred to as the standard error. The standard error of a statistic is often denoted SE, and since we're usually interested in the standard error of the sample mean, we often use the acronym SEM. As you can see just by looking at the picture, as the sample size N increases, the SEM decreases.

Okay, so that's one part of the story. However, there's something I've been glossing over so far. All my examples up to this point have been based on the "IQ scores" experiments, and because IQ scores are roughly normally distributed, I've assumed that the population distribution is normal. What if it isn't normal? What happens to the sampling distribution of the mean? The remarkable thing is this: no matter what shape your population distribution is, as N increases the sampling distribution of the mean starts to look more like a normal distribution.

To give you a sense of this, I ran some simulations using R. To do this, I started with the "ramped" distribution shown in the histogram in Figure 10.10. As you can see by comparing the triangular shaped histogram to the bell curve plotted by the black line, the population distribution doesn't look very much like a normal distribution at all. Next, I used R to simulate the results of a large number of experiments.
In each experiment I took N=2 samples from this distribution, and then calculated the sample mean. Figure ?? plots the histogram of these sample means (i.e., the sampling distribution of the mean for N=2). This time, the histogram produces a ∩-shaped distribution: it's still not normal, but it's a lot closer to the black line than the population distribution in Figure ??. When I increase the sample size to N=4, the sampling distribution of the mean is very close to normal (Figure ??), and by the time we reach a sample size of N=8 it's almost perfectly normal. In other words, as long as your sample size isn't tiny, the sampling distribution of the mean will be approximately normal no matter what your population distribution looks like!

# needed for printing
width  <- 6
height <- 6

# parameters of the beta
a <- 2
b <- 1

# mean and standard deviation of the beta
s <- sqrt( a*b / (a+b)^2 / (a+b+1) )
m <- a / (a+b)

# define function to draw a plot
plotOne <- function(n, N=50000) {

  # generate N random sample means of size n
  X <- matrix(rbeta(n*N, a, b), n, N)
  X <- colMeans(X)

  # plot the data
  hist( X, breaks=seq(0,1,.025), border="white", freq=FALSE,
        col="lightgrey",   # the original used book-specific colour variables here
        xlab="Sample Mean", ylab="", xlim=c(0,1.2),
        main=paste("Sample Size =", n), axes=FALSE,
        font.main=1, ylim=c(0,5) )
  box()
  axis(1)

  # plot the theoretical distribution
  lines( x <- seq(0,1.2,.01), dnorm(x, m, s/sqrt(n)),
         lwd=2, col="black", type="l" )
}

for( i in c(1,2,4,8) ) { plotOne(i) }

Figure 10.10: A demonstration of the central limit theorem. In panel a, we have a non-normal population distribution; and panels b-d show the sampling distribution of the mean for samples of size 2, 4 and 8, for data drawn from the distribution in panel a. As you can see, even though the original population distribution is non-normal, the sampling distribution of the mean becomes pretty close to normal by the time you have a sample of even 4 observations.
On the basis of these figures, it seems like we have evidence for all of the following claims about the sampling distribution of the mean:

- The mean of the sampling distribution is the same as the mean of the population
- The standard deviation of the sampling distribution (i.e., the standard error) gets smaller as the sample size increases
- The shape of the sampling distribution becomes normal as the sample size increases

As it happens, not only are all of these statements true, there is a very famous theorem in statistics that proves all three of them, known as the central limit theorem. Among other things, the central limit theorem tells us that if the population distribution has mean μ and standard deviation σ, then the sampling distribution of the mean also has mean μ, and the standard error of the mean is

$$\mathrm{SEM}=\frac{\sigma}{\sqrt{N}}$$

Because we divide the population standard deviation σ by the square root of the sample size N, the SEM gets smaller as the sample size increases. It also tells us that the shape of the sampling distribution becomes normal.

This result is useful for all sorts of things. It tells us why large experiments are more reliable than small ones, and because it gives us an explicit formula for the standard error it tells us how much more reliable a large experiment is. It tells us why the normal distribution is, well, normal. In real experiments, many of the things that we want to measure are actually averages of lots of different quantities (e.g., arguably, "general" intelligence as measured by IQ is an average of a large number of "specific" skills and abilities), and when that happens, the averaged quantity should follow a normal distribution. Because of this mathematical law, the normal distribution pops up over and over again in real data.
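The SEM formula can be checked directly by simulation. In this Python sketch (numpy assumed; illustrative, not from the text), the empirical standard deviation of 10,000 simulated sample means is compared with σ/√N:

```python
import numpy as np

mu, sigma, n = 100.0, 15.0, 25
rng = np.random.default_rng(seed=0)

# Empirical standard error: std of 10,000 simulated sample means of size n.
sample_means = rng.normal(mu, sigma, size=(10_000, n)).mean(axis=1)
empirical_sem = float(sample_means.std())

# Theoretical standard error from the central limit theorem.
theoretical_sem = sigma / np.sqrt(n)   # 15 / sqrt(25) = 3.0
```

The two numbers agree to within simulation noise, and the simulated means are centred on the population mean μ, as the theorem promises.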
Forest Fire Detection and Identification Using Image Processing and SVM
Mubarak Adam Ishag Mahmoud* and Honge Ren*

Abstract: Accurate forest fire detection algorithms remain a challenging issue, because some objects have the same features as fire, which may result in high false alarm rates. This paper presents a new video-based, image-processing forest fire detection method, which consists of four stages. First, a background-subtraction algorithm is applied to detect moving regions. Secondly, candidate fire regions are determined using the CIE L∗a∗b∗ color space. Thirdly, spatial wavelet analysis is used to differentiate between actual fire and fire-like objects, because candidate regions may contain moving fire-like objects. Finally, a support vector machine is used to classify each region of interest as either real fire or non-fire. The final experimental results verify that the proposed method effectively identifies forest fires.

Keywords: Background Subtraction, CIE L∗a∗b∗ Color Space, Forest Fire, SVM, Wavelet

1. Introduction

Forest fires are real threats to human lives, environmental systems and infrastructure. It is predicted that forest fires could destroy half of the world's forests by the year 2030 [1]. The only efficient way to minimize the damage caused by forest fires is to adopt early fire detection mechanisms. Thus, forest-fire detection systems are gaining a lot of attention in several research centers and universities around the world. Currently, there exist many commercial fire detection sensor systems, but all of them are difficult to apply in big open areas like forests, due to their delay in response, necessary maintenance, high cost and other problems.
In this study, an image-processing-based approach has been used for several reasons: the quick development of digital camera technology; the fact that a camera can cover large areas with excellent results; the response time of image-processing methods being better than that of existing sensor systems; and the overall cost of image-processing systems being lower than that of sensor systems. Several forest-fire detection methods based on image processing have been proposed. The methods presented in [2,3] share the same framework. These methods proposed forest fire detection using the YCbCr color space. In these methods, detection of the forest fire is based on four rules: the first and second rules are used to segment flame regions, while the third and fourth rules are used to segment high-temperature regions. The first one is based on the fact that, in any fire image, the red color value is larger than the green and the green is larger than the blue; this fact is represented in YCbCr as luminance Y being larger than chrominance blue (Y>Cb). In the second rule, the luminance Y value is larger than the average value of the Y component for the same image (Y>Ymean), while the Cb component is smaller than the average value of Cb (Cb<Cbmean). Additionally, Cr is larger than the average value of Cr (Cr>Crmean). The third rule depends on the fact that the fire region center, at high temperature, is white in color; this results in reducing the red component and increasing the blue component at the fire center, which is represented as (Cb>Y>Cr). The fourth rule is that Cr is smaller than the standard deviation of Cr for the same image (Crstd) multiplied by a constant τ (Cr<τ*Crstd). These methods are fast. However, they are susceptible to false positives because they are not able to differentiate between moving fire-like objects and actual fire. Wang and Ye [4] proposed a forest-fire disaster prevention method that can detect fire and smoke.
For fire detection, in any fire image, the red color value is larger than the green, and the green value is larger than the blue. The R component is also larger than the average of the R component for the same image. This rule is represented as (R>G>B), (R>Rmean). The RGB images are then converted to the HSV color space. Fire pixels are determined if the following conditions are met: 0≤H≤60, 0.2≤S≤1, 100≤V≤255. For smoke detection, RGB and k-means algorithms are used. Standard RGB smoke values C are taken from an image with significant smoke. The C value must be experimentally adjusted based on the results. The cluster center P is determined from the video stream after the image frames are clustered by the k-means algorithm. Smoke is detected if |P-C| < threshold. This method works well; nevertheless, smoke can spread quickly and has different colors depending on the burning materials, leading to false alarms. Chen et al. [5] designed a fire detection algorithm which combines the saturation channel of the HSV color space and the RGB color space. This method detects fire using three rules: R≥RT, R≥G>B, and S≥((255-R)*ST/RT). Determination of two thresholds (ST and RT) is needed. Based on the experimental results, the selected range is 55-65 for ST values and 115-135 for RT. This method is fast and computationally simple compared to the other methods. However, it suffers from false-positive alarms in the case of moving fire-like objects. In this study, a forest-fire detection method is proposed. It depends on multiple stages to identify forest fire. The final results indicate that the proposed algorithm has a good detection rate and fewer false alarms. The proposed algorithm is able to distinguish between fire and fire-like objects, which is the main crucial problem for most of the existing methods.
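The colour rules surveyed above are simple per-pixel tests. As an illustration (not the authors' code), the Chen et al. rules R≥RT, R≥G>B and S≥(255-R)*ST/RT can be sketched in Python with numpy; the thresholds RT=125 and ST=60 are picked from the middle of the ranges reported above:

```python
import numpy as np

def chen_fire_mask(rgb, rt=125, st=60):
    """Per-pixel fire test from Chen et al.'s three RGB/saturation rules.

    rgb: H x W x 3 uint8 array. Returns a boolean mask of candidate fire pixels.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    mx = rgb.max(axis=-1).astype(float)
    mn = rgb.min(axis=-1).astype(float)
    # HSV saturation scaled to 0..255, guarding against division by zero.
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1) * 255, 0)
    rule1 = r >= rt                       # red channel strong enough
    rule2 = (r >= g) & (g > b)            # red dominates green dominates blue
    rule3 = s >= (255 - r) * st / rt      # saturation grows as red saturates
    return rule1 & rule2 & rule3
```

A saturated orange pixel passes all three rules, while a sky-blue pixel fails the first one.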
The paper is organized as follows: Section 2 describes the methodology, Section 3 presents the experimental results, and Section 4 summarizes the achieved results and potential future directions.

2. Methodology

In this part, the proposed method is presented. It consists of multiple stages. First, background subtraction is applied, because the fire boundaries continuously change. Second, a color segmentation model is used to mark the candidate regions. Third, spatial wavelet analysis is carried out to distinguish between actual fire and fire-like objects. Finally, a support vector machine (SVM) is used for classifying the candidate regions as either actual fire or non-fire. The proposed algorithm's stages will be described in detail in the following subsections. Fig. 1 shows a flowchart of the proposed method.

2.1 Background Subtraction

Detecting moving objects is an essential step in most video-based fire detection methods, because the fire boundaries continuously fluctuate. Eq. (1) calculates the contrast between the current image and the background to determine the region of motion. Fig. 2 shows an example of background subtraction. A pixel at (x, y) is supposed to be moving if it satisfies Eq. (1) as follows.

[TeX:] $$\left| I_{n}(x, y)-B_{n}(x, y) \right| > thr \quad (1)$$

where In(x, y) and Bn(x, y) represent the pixel value at (x, y) for the current and background frame, and thr refers to a threshold value which is set to 3 experimentally. The background value is continuously updated using Eq. (2) as follows:

[TeX:] $$B_{n+1}(x, y)=\left\{\begin{array}{ll} B_{n}(x, y)+1 & \text{if } I_{n}(x, y)>B_{n}(x, y) \\ B_{n}(x, y)-1 & \text{if } I_{n}(x, y)<B_{n}(x, y) \\ B_{n}(x, y) & \text{if } I_{n}(x, y)=B_{n}(x, y) \end{array}\right. \quad (2)$$

where Bn+1(x, y) and Bn(x, y) represent the intensity pixel value at (x, y) for the current and previous background [6].
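Equations (1) and (2) can be sketched in a few lines of Python (numpy assumed; an illustrative re-implementation, not the authors' MATLAB code, with thr=3 as in the text):

```python
import numpy as np

def moving_mask(frame, background, thr=3):
    """Eq. (1): flag pixels whose difference from the background exceeds thr."""
    return np.abs(frame.astype(int) - background.astype(int)) > thr

def update_background(frame, background):
    """Eq. (2): nudge each background pixel one grey level toward the frame."""
    step = np.sign(frame.astype(int) - background.astype(int))
    return (background.astype(int) + step).astype(background.dtype)
```

Updating by a single grey level per frame keeps the background model stable: slow lighting changes are absorbed, while fast-moving fire pixels keep triggering the motion mask.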
Fig. 1. The proposed method flowchart.

Fig. 2. An original frame containing fire (a) and the frame containing fire after background subtraction (b).

2.2 Color-based Segmentation

Different kinds of moving things (e.g., trees, people, birds, etc.) as well as fire can be included after applying background subtraction. Thus, the CIE L∗a∗b∗ color space is used to select candidate regions of fire color.

2.2.1 RGB to CIE L*a*b* conversion

The conversion from RGB to the CIE L∗a∗b∗ color space is performed by using Eq. (3):

[TeX:] $$\left[\begin{array}{l} X \\ Y \\ Z \end{array}\right]=\left[\begin{array}{ccc} 0.412673 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.07169 \\ 0.019334 & 0.119193 & 0.950227 \end{array}\right] * \left[\begin{array}{l} R \\ G \\ B \end{array}\right] \\ L^{*}=\left\{\begin{array}{ll} 116 *\left(Y / Y_{n}\right)-16, & \text{if }\left(Y / Y_{n}\right)>0.008856 \\ 903.3 *\left(Y / Y_{n}\right), & \text{otherwise} \end{array}\right. \\ \begin{aligned} a^{*} =500 *\left(f\left(X / X_{n}\right)-f\left(Y / Y_{n}\right)\right), \\ b^{*} =200 *\left(f\left(Y / Y_{n}\right)-f\left(Z / Z_{n}\right)\right), \end{aligned} \\ f(t)=\left\{\begin{array}{ll} t^{1 / 3}, & \text{if } t>0.008856 \\ 7.787 * t+16 / 116, & \text{otherwise} \end{array}\right. \quad (3)$$

where Xn, Yn, and Zn represent the reference (white) color values. The RGB color channels range from 0 to 255 for 8-bit data representation, and the ranges of L*, a*, and b* are [0, 100], [-110, 110], and [-110, 110], respectively.
After calculating the values of the color channels (L*, a*, b*), the values of the average channels (L*m, a*m, b*m) are obtained using the following equations:

[TeX:] $$\begin{aligned} L_{m}^{*}=\frac{1}{N} \sum_{x} \sum_{y} L^{*}(x, y) \\ a_{m}^{*}=\frac{1}{N} \sum_{x} \sum_{y} a^{*}(x, y) \\ b_{m}^{*}=\frac{1}{N} \sum_{x} \sum_{y} b^{*}(x, y) \end{aligned} \quad (4)$$

where L*m, a*m and b*m are the average CIE L*a*b* channel values, and N is the total number of image pixels. To detect the candidate fire region using CIE L*a*b*, four rules are defined based on the notion that the fire region is the brightest area with near-red color in the image. The rules are as follows:

[TeX:] $$R1(x, y)=\left\{\begin{array}{ll} 1 & \text{if } L^{*}(x, y) \geq L^{*}_{m} \\ 0 & \text{otherwise} \end{array}\right. \quad (5)$$

[TeX:] $$R2(x, y)=\left\{\begin{array}{ll} 1 & \text{if } a^{*}(x, y) \geq a^{*}_{m} \\ 0 & \text{otherwise} \end{array}\right. \quad (6)$$

[TeX:] $$R3(x, y)=\left\{\begin{array}{ll} 1 & \text{if } b^{*}(x, y) \geq b^{*}_{m} \\ 0 & \text{otherwise} \end{array}\right. \quad (7)$$

[TeX:] $$R4(x, y)=\left\{\begin{array}{ll} 1 & \text{if } b^{*}(x, y) \geq a^{*}(x, y) \\ 0 & \text{otherwise} \end{array}\right. \quad (8)$$

where R1(x, y), R2(x, y), R3(x, y), and R4(x, y) are binary images. Fig. 3 shows the result of applying rules (5) through (8).

Fig. 3. Applying rules (5)-(8) to the input images: (i) original RGB images, (ii) binary images using rule (5), (iii) binary images using rule (6), (iv) binary images using rule (7), (v) binary images using rule (8), and (vi) binary images using rules (5) through (8).
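Rules (5)-(8) amount to thresholding each channel at its image-wide mean. A Python sketch (numpy assumed; the L*, a*, b* channels are taken as already-converted float arrays, and the function name is illustrative):

```python
import numpy as np

def candidate_fire_mask(L, a, b):
    """Combine rules (5)-(8): bright, reddish pixels whose b* dominates a*."""
    r1 = L >= L.mean()   # rule (5): brighter than the image average
    r2 = a >= a.mean()   # rule (6): redder than the image average
    r3 = b >= b.mean()   # rule (7): yellower than the image average
    r4 = b >= a          # rule (8): b* at least as large as a*
    return r1 & r2 & r3 & r4
```

Because every threshold is the image's own mean, the rules adapt automatically to overall scene brightness rather than relying on fixed constants.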
2.3 Spatial Wavelet Analysis for Color Variations

There is higher luminance contrast in genuine fire regions than in fire-like colored objects, due to the turbulent fire flicker. Spatial wavelet analysis is a good image-processing method that can be used to distinguish between genuine fire regions and fire-like colored regions. Thus, a 2D wavelet filter is applied to the red channel and the spatial wavelet energy is calculated for each pixel. Fig. 4 shows the wavelet energies of two videos, one containing actual fire and the other containing fire-like objects. It is clear that the regions containing actual fire have high variations and high wavelet energy. The following formula is used to calculate the wavelet energy:

[TeX:] $$E(x, y)=H L(x, y)^{2}+L H(x, y)^{2}+H H(x, y)^{2} \quad (9)$$

where E(x, y) is the spatial wavelet energy for a specific pixel, and HL, LH and HH are the high-low, low-high and high-high wavelet sub-images. The spatial wavelet energy for each block is calculated by averaging the energies of the pixels in the block as follows [7]:

[TeX:] $$E_{block}=\frac{1}{N_{b}} \sum_{x, y} E(x, y) \quad (10)$$

where Nb is the total number of pixels in the block. Eblock is used in the next stage as the SVM input, to classify the regions of interest as either fire or non-fire.

Fig. 4. Wavelet energy for actual fire (a) and a fire-like object (b).

2.4 Classification using SVM

SVM is nowadays commonly used in different fields of pattern recognition, because it provides high performance and accurate classification results with a limited training data set. The SVM idea is to create an optimal hyperplane to divide the input dataset into two classes with maximum margins. In this study, SVM is used to classify the regions of interest as either fire or non-fire.
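The wavelet-energy computation of Section 2.3 (Eqs. (9)-(10)) can be sketched with a single-level 2-D Haar transform written directly in numpy; this is an illustrative choice of wavelet, since the paper does not specify the filter:

```python
import numpy as np

def haar_energy(block):
    """Eqs. (9)-(10): mean of HL^2 + LH^2 + HH^2 over a block,
    using a one-level 2-D Haar transform."""
    x = block.astype(float)
    h, w = x.shape
    x = x[: h - h % 2, : w - w % 2]          # trim to even dimensions
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    lh = (a - b + c - d) / 2.0               # horizontal detail
    hl = (a + b - c - d) / 2.0               # vertical detail
    hh = (a - b - c + d) / 2.0               # diagonal detail
    return float(np.mean(hl ** 2 + lh ** 2 + hh ** 2))
```

A perfectly flat block has zero energy, while a flickering, high-variation block scores high, which is exactly the property that separates real flames from painted fire-colored surfaces.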
The SVM classification function is defined by the following formula:

[TeX:] $$f(x)=\operatorname{sign}\left(\sum_{i=0}^{l-1} w_{i} \cdot k\left(x, x_{i}\right)+b\right) \quad (11)$$

where sign() determines whether the class of x belongs to fire or non-fire (the +1 class and the -1 class), wi are the output weights of the kernel, k() represents a kernel function, xi are the support vectors, and l is the number of support vectors. In our proposed method, a one-dimensional feature vector has been used. The data in this study are nonlinearly separable, so no hyperplane may exist to separate the input data into two parts; therefore, a non-linear radial basis function (RBF) [8] is used, as follows:

[TeX:] $$k(x, y)=\exp \left(-\frac{\|x-y\|^{2}}{2 \sigma^{2}}\right) \text { for } \sigma>0 \quad (12)$$

where x, y represent the input feature vectors, and σ is a parameter for controlling the width of the effective basis function, experimentally set to 0.1, which gives a good performance. To train the SVM, a dataset consisting of 500 wavelet energies from actual fire videos and 500 fire-like and non-fire moving pixels was used.

3. Experimental Results

In this part, the experimental results of the proposed method are presented. The model is implemented using MATLAB (R2017a) and tested on an Intel Core i7 2.97 GHz PC with 8 GB RAM. To measure the proposed algorithm's performance, 10 videos collected from the Internet (http://www.ultimatechase.com) are used, eight of them with dimensions of 256×256. Table 1 shows a snapshot of the tested videos. A true positive is counted if an image frame has a fire pixel and is determined by the proposed algorithm as fire; if an image frame has no fire but is determined by the proposed algorithm as fire, it counts as a false positive. The results are shown in Table 2.
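The RBF kernel of Eq. (12) reduces to a one-line helper; this Python sketch (numpy assumed, with σ=0.1 as tuned in the paper) is illustrative rather than the authors' implementation:

```python
import numpy as np

def rbf_kernel(x, y, sigma=0.1):
    """Gaussian RBF of Eq. (12): exp(-||x - y||^2 / (2 * sigma^2))."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))
```

The kernel equals 1 when the two feature vectors coincide and decays toward 0 as they move apart, with σ setting how quickly the similarity falls off.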
Table 1. Videos used for the proposed algorithm evaluation

The experimental results in Table 2 show that the proposed method has an average true positive rate of 93.46% on the eight fire videos and a false positive rate of 6.89% on the two fire-colored moving object videos. These results indicate the good performance of the proposed method.

Table 2. Experimental results for testing the proposed forest-fire detection method

Video          Number of frames   Number of fire frames   TP    TP rate (%)   FP   FP rate (%)
Video_NO. 1    260                260                     230   88.46         -    -
Video_NO. 3    208                208                     203   97.6          -    -
Video_NO. 6    585                0                       -     -             34   5.81
Video_NO. 10   251                0                       -     -             20   7.97

3.1 Performance Evaluation

To evaluate the performance of the proposed algorithm, comparisons between the above-mentioned methods and the proposed algorithm are carried out. All of these methods are tested on a data set consisting of 300 images (200 forest-fire images and 100 non-fire images) collected from the Internet. The algorithms' performances are calculated using the F-score evaluation metric.

3.1.1 F-score

The F-score [9] is used to evaluate the performance of the detection methods. For any given detection method, there are four possible outcomes. If an image has fire pixels and it is determined by the algorithm as fire, then it is a true positive; if the same image is determined not to have fire pixels by the algorithm, it is a false negative. If an image has no fire and it is determined by the algorithm as no fire, it is a true negative, but if it is identified as fire by the algorithm, it counts as a false positive. Fire detection methods are evaluated using the following equations:

[TeX:] $$F=2 * \frac{(\text {precision} * \text {recall})}{(\text {precision}+\text {recall})} \quad (13)$$

[TeX:] $$\text {precision}=\frac{TP}{(TP+FP)} \quad (14)$$

[TeX:] $$\text {recall}=\frac{TP}{(TP+FN)} \quad (15)$$

where F refers to the F-score; TP, TN, FP and FN are true positive, true negative, false positive, and false negative, respectively.
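Eqs. (13)-(15) reduce to a few lines. This Python sketch computes precision, recall and F-score from raw counts (the counts in the test below are made up for illustration):

```python
def f_score(tp, fp, fn):
    """Eqs. (13)-(15): precision, recall and their harmonic mean (F-score)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f = 2 * precision * recall / (precision + recall)
    return f, precision, recall
```

Being a harmonic mean, F is dragged down by whichever of precision or recall is weaker, so a method cannot score well by trading one entirely for the other.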
A higher F-score means a better overall algorithm performance. Table 3 shows the comparison results.

- TP rate is TP divided by the overall number of fire images.
- TN rate is TN divided by the overall number of non-fire images.
- FN rate is FN divided by the overall number of fire images.
- FP rate is FP divided by the overall number of non-fire images.

Table 3. Evaluations of the four tested fire detection methods

Method                   TP rate (%)   FN rate (%)   TN rate (%)   FP rate (%)   Recall   Precision   F-score (%)
Premal and Vinsley [2]   91.5          8             89            13            0.920    0.876       89.74
Vipin [3]                86            9.5           82            11            0.901    0.887       89.38
Chen et al. [5]          83            16.5          88            26            0.834    0.761       79.58
Proposed method          94            5             90            8             0.949    0.922       93.52

Table 3 shows the F-scores of the four methods. The proposed method's F-score is 3.78% higher than that of the method described in Premal and Vinsley [2], which indicates the reliability of the proposed method.

4. Conclusion

This work presented an effective forest-fire detection method using image processing. Background subtraction and spatial wavelet analysis are used. In addition, SVM is used for classifying the candidate regions as either real fire or non-fire. A comparison between the existing methods and the proposed method is carried out. The final results indicate that the proposed forest-fire detection method achieves a good detection rate (93.46%) and a low false-alarm rate (6.89%) on fire-like objects. These results indicate that the proposed method is accurate and can be used in automatic forest-fire alarm systems. For future work, the method's accuracy could be improved by extracting more fire features and increasing the training data set.

The work is supported by Fundamental Research Funds for the Central Universities (No. 2572017PZ10).

Mubarak Adam Ishag Mahmoud
He received his B.S. in Engineering Technology from the Faculty of Engineering and Technology, University of Gezira in 2006 and his M.S. degree in Electronics Engineering from Sudan University of Science and Technology in 2012. Now he is a Ph.D.
candidate in Information and Computer Engineering, Northeast Forestry University, China.

Honge Ren
She received the Ph.D. degree from Northeast Forestry University, China, in 2009. She is currently a professor in the College of Information and Computer Engineering at Northeast Forestry University, a supervisor of doctoral students, and the director of the Heilongjiang Provincial Forestry Intelligent Equipment Engineering Research Center. Her main research interests include different aspects of artificial intelligence and distributed systems.

References

1. D. Stipanicev, T. Vuko, D. Krstinic, M. Stula, L. Bodrozic, "Forest fire protection by advanced video detection system: Croatian experiences," in Proceedings of the 3rd TIEMS Workshop on Improvement of Disaster Management Systems: Local and Global Trends, Trogir, Croatia, 2006.
2. C. E. Premal, S. S. Vinsley, "Image processing based forest fire detection using YCbCr colour model," in Proceedings of 2014 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 2014, pp. 1229-1237.
3. V. Vipin, "Image processing based forest fire detection," International Journal of Emerging Technology and Advanced Engineering, vol. 2, no. 2, pp. 87-95, 2012.
4. Y. L. Wang, J. Y. Ye, "Research on the algorithm of prevention forest fire disaster in the Poyang Lake Ecological Economic Zone," Advanced Materials Research, vol. 518-523, pp. 5257-5260, 2012.
5. T. H. Chen, P. H. Wu, Y. C. Chiou, "An early fire-detection method based on image processing," in Proceedings of 2004 International Conference on Image Processing, Singapore, 2004, pp. 1707-1710.
6. M. Kang, T. X. Tung, J. M.
Kim, "Efficient video-equipped fire detection approach for automatic fire alarm systems," Optical Engineering, vol. 52, no. 1, 2013.
7. B. U. Toreyin, Y. Dedeoglu, U. Gudukbay, A. E. Cetin, "Computer vision based method for real-time fire and flame detection," Pattern Recognition Letters, vol. 27, no. 1, pp. 49-58, 2006.
8. S. Theodoridis, A. Pikrakis, K. Koutroumbas, D. Cavouras, Introduction to Pattern Recognition: A Matlab Approach. New York, NY: Academic Press, 2010.
9. T. Fawcett, 2004, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.9777

Received: September 19, 2017
Revision received: November 30, 2017
Accepted: January 9, 2018
Corresponding Author: Honge Ren* ([email protected])

Mubarak Adam Ishag Mahmoud*, College of Information and Computer Engineering, Northeast Forestry University, Harbin, China, [email protected]
Honge Ren*, College of Information and Computer Engineering, Northeast Forestry University, Harbin, China, [email protected]
A novel gene selection algorithm for cancer classification using microarray datasets

Russul Alanni (ORCID: orcid.org/0000-0002-6445-0137), Jingyu Hou, Hasseeb Azzawi & Yong Xiang

BMC Medical Genomics volume 12, Article number: 10 (2019)

Microarray datasets are an important medical diagnostic tool as they represent the states of a cell at the molecular level. Available microarray datasets for classifying cancer types generally have a fairly small sample size compared to the large number of genes involved. This fact is known as the curse of dimensionality, which is a challenging problem. Gene selection is a promising approach that addresses this problem and plays an important role in the development of efficient cancer classification, due to the fact that only a small number of genes are related to the classification problem. Gene selection addresses many problems in microarray datasets, such as reducing the number of irrelevant and noisy genes, and selecting the most related genes to improve the classification results.

An innovative Gene Selection Programming (GSP) method is proposed to select relevant genes for effective and efficient cancer classification. GSP is based on the Gene Expression Programming (GEP) method with a newly defined population initialization algorithm, a new fitness function definition, and improved mutation and recombination operators. A Support Vector Machine (SVM) with a linear kernel serves as the classifier of the GSP.

Experimental results on ten microarray cancer datasets demonstrate that Gene Selection Programming (GSP) is effective and efficient in eliminating irrelevant and redundant genes/features from microarray datasets. The comprehensive evaluations and comparisons with other methods show that GSP gives a better compromise in terms of all three evaluation criteria, i.e., classification accuracy, number of selected genes, and computational cost.
The gene set selected by GSP has shown superior performance in cancer classification compared to those selected by the up-to-date representative gene selection methods. The gene subset selected by GSP can achieve a higher classification accuracy with less processing time. The rapid development of microarray technology in the past few years has enabled researchers to analyse thousands of genes simultaneously and obtain biological information for various purposes, especially for cancer classification. However, gene expression data obtained by microarray technology could bring difficulties to classification methods due to the fact that usually the number of genes in a microarray dataset is very large, while the number of samples is small. This fact is known as the curse of dimensionality in data mining [1,2,3,4]. Gene selection, which extracts informative and relevant genes, is one of the effective options to overcome the curse of dimensionality in microarray data based cancer classification. Gene selection is actually a process of identifying a subset of informative genes from the original gene set. This gene subset enables researchers to obtain substantial insight into the genetic nature of the disease and the mechanisms responsible for it. This technique can also decrease the computational costs and improve the cancer classification performance [5, 6]. Typically, the approaches for gene selection can be classified into three main categories: filter, wrapper and embedded techniques [6, 7]. The filter technique exploits the general characteristics of the gene expressions in the dataset to evaluate each gene individually without considering classification algorithms. The wrapper technique adds or removes genes to produce several gene subsets and then evaluates these subsets by using the classification algorithms to obtain the best gene subset for solving the classification problem.
The embedded technique sits between the filter and wrapper techniques in order to take advantage of the merits of both. However, most of the embedded techniques deal with genes one by one [8], which is time consuming, especially when the data dimension is large, as in microarray data. Naturally inspired evolutionary algorithms are more applicable and accurate than wrapper gene selection methods [9, 10] due to their ability to search for optimal or near-optimal solutions in large and complex spaces of possible solutions. Evolutionary algorithms also consider multiple attributes (genes) during their search for a solution, instead of considering one attribute at a time. Various evolutionary algorithms [11,12,13,14,15,16,17,18,19] have been proposed to extract informative and relevant cancer genes and meanwhile reduce the number of noisy and irrelevant genes. However, in order to obtain high accuracy results, most of these methods have to select a large number of genes. Chuang et al. [20] proposed the improved binary particle swarm optimization (IBPSO) method which achieved a good accuracy for some datasets but, again, selected a large number of genes. An enhancement of the BPSO algorithm was proposed by Mohamad et al. [21] by minimizing the number of selected genes. They obtained good classification accuracies for some datasets, but the number of selected genes is not small enough compared with other studies. Recently, Moosa et al. [22] proposed a modified Artificial Bee Colony algorithm (mABC). Another study [15] proposed a hybrid method by using the Information Gain algorithm to reduce the number of irrelevant genes and using an improved simplified swarm optimization (ISSO) to select the optimal gene subset. These two studies were able to achieve good accuracy with a small number of selected genes. However, the number of selected genes is still high compared with our method.
In recent years, a new evolutionary algorithm known as Gene Expression Programming (GEP) was initially introduced by Ferreira [23] and widely used in many applications for classification and decision making [24,25,26,27,28,29,30]. GEP has three main advantages: (1) flexibility, which makes it easy to design an optimal model; in other words, any part of the GEP steps can be improved or changed without adding any complexity to the whole process; (2) the power of achieving the target, inspired by the ideas of genotype and phenotype; and (3) data visualization: it is easy to visualize each step of GEP, which distinguishes it from many algorithms. These advantages make it easy to use the GEP process to create our new gene selection program and simulate the dynamic process of achieving the optimal solution in decision making. A few studies applied GEP as a feature selection method and obtained some promising results [31, 32], which encourages us to do further research. The GEP algorithm, based on its evolutionary structure, faces some computational problems when it is applied to complex and high-dimensional data such as microarray datasets. Inspired by the above circumstances, to enhance the robustness and stability of microarray data classifiers, we propose a novel gene selection method based on the improvement of GEP. This proposed algorithm is called Gene Selection Programming (GSP). The idea behind this approach is to control the GEP solution process by replacing the random adding, deleting and selection with systematic gene-ranking based selection. In this paper, four innovative operations are presented: attribute/gene selection (initializing the population), a mutation operation, a recombination operation and a new fitness function. More details of GSP are presented in the Methods section. In this work, a support vector machine (SVM) with a linear kernel serves to evaluate the performance of GSP. For better reliability we used leave-one-out cross validation (LOOCV).
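As a hedged illustration of how LOOCV scores a candidate gene subset, the sketch below holds out each sample once and trains on the rest. A simple nearest-centroid classifier stands in for the paper's linear-kernel SVM, and the toy expression matrix is invented for illustration; only the LOOCV protocol itself matches the text.

```python
# Sketch of leave-one-out cross validation (LOOCV) over a gene subset.
# The nearest-centroid classifier is a stand-in for the linear SVM.

def nearest_centroid_predict(train_X, train_y, x):
    """Predict the class whose per-class mean (centroid) is closest to x."""
    groups = {}
    for xi, yi in zip(train_X, train_y):
        groups.setdefault(yi, []).append(xi)
    best_label, best_dist = None, float("inf")
    for label, rows in groups.items():
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        dist = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

def loocv_accuracy(X, y, classify=nearest_centroid_predict):
    """Hold out each sample once, train on the rest, count correct predictions."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += classify(train_X, train_y, X[i]) == y[i]
    return correct / len(X)

# Toy expression matrix: 4 samples x 2 selected genes, 2 classes.
X = [[1.0, 0.9], [1.1, 1.0], [5.0, 5.2], [4.9, 5.1]]
y = ["A", "A", "B", "B"]
print(loocv_accuracy(X, y))  # -> 1.0
```

In the paper's setting, `classify` would wrap a linear-kernel SVM trained on the attributes selected by the chromosome under evaluation.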
The results were evaluated in terms of three metrics: classification accuracy, number of selected genes and CPU time. The rest of this paper is organized as follows: The overview of GEP and the proposed gene selection algorithm GSP are presented in the Methods section (section 2). Results section (section 3) provides the experimental results on ten microarray datasets. Discussion section (section 4) presents the statistical analysis and discussion about the experimental results. Finally, Conclusion section (section 5) gives the conclusions and directions of future research. Gene expression programming Gene Expression Programming (GEP) algorithm is an evolutionary algorithm. GEP consists of two parts. The first part is characteristic linear chromosomes (genotype), which are composed of one or more genes. Each gene consists of a head and a tail. The head may contain functional elements like {Q, +, −, ×, /} or terminal elements, while the tail contains terminals only. The terminals represent the attributes in the datasets. In this study, sometimes we use the term attribute to represent the gene in microarray dataset to avoid the possible confusion between the gene in microarray datasets and the gene in GEP chromosome. The size of the tail (t) is computed as t = h (n-1) + 1, where h is the head size, and n is the maximum number of parameters required in the function set. The second part of GEP is a phenotype which is a tree structure also known as expression tree (ET). When the representation of each gene in the chromosome is given, the genotype is established. Then the genotype can be converted to the phenotype by using specific languages invented by the GEP author. 
The GEP process has four main steps: initialize the population by creating the chromosomes (individuals), identify a suitable fitness function to evaluate the best individual, conduct genetic operations to modify the individuals to achieve the optimal solution in the next generation, and check the stop conditions. The GEP flowchart is shown in Fig. 1 (The flowchart of the GEP modelling). It is worth mentioning that the GEP algorithm faces some challenging problems, especially in computational efficiency, when it is applied to complex and high-dimensional data such as a microarray dataset. This motivates us to solve these problems and further improve the performance of the GEP algorithm by improving the evolution process. The details of the proposed gene selection programming (GSP) algorithm, which is based on GEP, for cancer classification are given in the following sub-sections. Systematic selection approach to initial GSP population Initializing the population is the first step in our gene selection method, for which candidates are constructed from two sets: terminal set (ts) and function set (fs). The terminal set should represent the attributes of the microarray dataset. The question is what attributes should be selected into the terminal set. Selecting all attributes (including the unrelated attributes) will affect the computational efficiency. The best way to reduce the noise from the microarray data is to minimize the number of unrelated genes. There are two commonly used ways to do that: either by identifying a threshold and selecting the genes ranked above it, or by selecting the top-n ranked genes (e.g. the top 50 ranked genes). Both ways have disadvantages: defining a threshold suitable for different datasets is very difficult, and deciding how many genes should be selected is subjective. To avoid these disadvantages, we use a different technique called the systematic selection approach.
The systematic selection approach consists of three steps: rank all the attributes, calculate the weight of each attribute, and select the attributes based on their weight using the Roulette wheel selection method. The details of these steps are shown in the following sub-sections. Attribute ranking We use the Information Gain (IG) algorithm [33] to rank the microarray attributes. IG is a filter method mainly used to rank and find the most relevant genes [15, 34, 35]. The attributes with a higher rank value have more impact on the classification process, while the attributes with a zero rank value are considered irrelevant. The rank values of all attributes are calculated once and saved in the buffer for later use in the program. Weight calculation The weight (w) of each attribute (i) is calculated based on Eq. (1): $$ {w}_i=\frac{r_i}{sum}\in \left[0,1\right] $$ where \( sum={\sum}_i{r}_i\ \forall i\in ts \), r is the rank value, and \( {\sum}_i{w}_i=1 \). The attributes with a higher weight contain more information about the classification. Attribute selection In our systematic selection approach, we use the Roulette wheel selection method, which is also known as proportionate selection [36], to select the strong attributes (i.e., the attributes with a high weight). With this approach, all the attributes are placed on the roulette wheel according to their weight. An attribute with a higher weight has a higher probability of being selected as a terminal element. This approach could reduce the number of irrelevant attributes in the final terminal set. The population is then initialized from this final terminal set (ts) and the function set (fs). Each chromosome (c) in GSP is encoded with the length of N*(gene_length), where N represents the number of genes in each chromosome (c) and the length of a gene (g) is the length of its head (h) plus the length of its tail (t).
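The weight calculation of Eq. (1) and the roulette-wheel draw can be sketched as below. The rank values are invented for illustration (in the paper they come from Information Gain), and the selection routine is the standard fitness-proportionate spin.

```python
# Hedged sketch of the systematic selection approach: normalize rank values
# into weights (Eq. 1), then draw terminals with roulette-wheel selection.
import random

def weights_from_ranks(ranks):
    """w_i = r_i / sum(r); weights sum to 1, zero-ranked attributes get 0."""
    total = sum(ranks.values())
    return {attr: r / total for attr, r in ranks.items()}

def roulette_select(weights, rng):
    """One spin of the wheel: higher-weight attributes are chosen more often."""
    pick = rng.random()
    acc = 0.0
    for attr, wt in weights.items():
        acc += wt
        if pick <= acc:
            return attr
    return attr  # numerical safety net for floating-point round-off

ranks = {"a1": 0.1, "a5": 0.4, "a9": 0.5}  # illustrative IG rank values
w = weights_from_ranks(ranks)
rng = random.Random(0)  # seeded for reproducibility
sample = [roulette_select(w, rng) for _ in range(1000)]
# a9 (weight 0.5) should dominate the selections over a1 (weight 0.1):
print(sample.count("a9") > sample.count("a1"))  # -> True
```

Repeated spins like this populate the final terminal set, so low-weight (likely irrelevant) attributes rarely make it into the initial chromosomes.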
In order to set the effective chromosome length in GSP, we need to determine the head size as well as the number of genes in each chromosome (details are in the Results section). The process of creating GSP chromosomes is illustrated in Algorithm 1. Fitness function design The objective of the gene selection method is to find the smallest subset of genes that can achieve the highest accuracy. To this end, we need to define a suitable fitness function for GEP that has the ability to find the best individuals/chromosomes. We define the fitness value of an individual chromosome i as follows: $$ {f}_i=2r\ast AC(i)+r\ast \frac{t-{s}_i}{t} $$ This fitness function consists of two parts. The first part is based on the accuracy result AC(i). This accuracy is measured based on the support vector machine (SVM) classifier using LOOCV. For example, if chromosome i is +/Qa2a1a5a6a3, its expression tree (ET) is constructed accordingly. Then, the input values for the SVM classifier are the attributes a2, a1 and a5. The second part of the fitness function is based on the number of selected attributes si in the individual chromosome and the total number t of attributes in the dataset. Parameter r is a random value within the range (0.1, 1) that weights the accuracy relative to the number of attributes. Since the accuracy value is more important than the number of selected attributes in measuring the fitness of a chromosome, we multiply the accuracy by 2r. Improved genetic operations The purpose of the genetic operations is to improve the individual chromosomes towards the optimal solution. In this work, we improve two genetic operations as shown below. Mutation is the most important genetic operator. It makes a small change to the genome by replacing one element with another. The accumulation of several changes can create a significant variation. Random mutation may result in the loss of important attributes, which may reduce the accuracy and increase the processing time.
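The fitness definition above translates directly into code. The sketch below implements f_i = 2r·AC(i) + r·(t − s_i)/t; the accuracy value would come from the LOOCV-scored SVM in the paper, and the numbers here are illustrative.

```python
# Hedged sketch of the GSP fitness function: f_i = 2r*AC(i) + r*(t - s_i)/t,
# where AC is the LOOCV accuracy, s_i the number of selected attributes,
# t the total number of attributes, and r a random value in (0.1, 1).
import random

def fitness(accuracy, n_selected, n_total, r):
    # Accuracy is weighted twice as heavily (2r) as the compactness term (r),
    # so a chromosome is rewarded first for accuracy, then for using few genes.
    return 2 * r * accuracy + r * (n_total - n_selected) / n_total

r = random.uniform(0.1, 1.0)
small = fitness(accuracy=0.95, n_selected=5, n_total=2000, r=r)
large = fitness(accuracy=0.95, n_selected=50, n_total=2000, r=r)
print(small > large)  # -> True: same accuracy, fewer genes => higher fitness
```

For equal accuracy the chromosome with the smaller attribute subset always scores higher, which is exactly the pressure toward small gene subsets described in the text.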
The critical question of mutation is which attributes are to be added or deleted. Ideally, each deleted terminal/function in the mutation operation should be covered by some other selected terminals/functions. This requirement can be fulfilled by using our method. To clarify the GSP mutation operation, we provide a simple example in Fig. 2. Example of GSP mutation In the example, the chromosome c has one gene. The head size is 3, so the tail length is h(n-1) + 1 = 4 and the chromosome length is (3+4) = 7. The weight table shows that the attribute with the highest weight in the chromosome is a9 and the attribute with the lowest weight is a1. With the mutation, the GSP method selects the weakest terminal lt (the terminal with the lowest weight), which is a1 in our example. There are two options to replace a1: the program could select either a function such as (/) or a terminal to replace it. In the latter option, the terminal should have a weight higher than that of a1, and the fitness value of the new chromosome c' must be higher than that of the original one. This new mutation operation is outlined in Algorithm 2. The second operation that we use in our gene selection method is the recombination operation. In recombination, two parent chromosomes are randomly chosen to exchange some material (short sequence) between them. The short sequence can be one or more elements in a gene (see Fig. 3). The two parent chromosomes could also exchange an entire gene in one chromosome with another gene in another chromosome. Recombination of 3 elements in gene 1 (from position 0 to 2) In this work, we improve the gene recombination by controlling the exchanging process. Suppose c1 and c2 are two chromosomes (see Fig. 4). The fitness value of c1 = 80% and the fitness value of c2 = 70% based on our fitness function (2).
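The weight-guided mutation described above can be sketched as follows: find the lowest-weight terminal, try replacing it with higher-weight terminals, and keep a replacement only if fitness improves. The weights and the stand-in fitness function are illustrative, not the paper's exact values.

```python
# Hedged sketch of the improved mutation: replace the weakest terminal (lt)
# with a higher-weight one, accepting the change only if fitness improves.

def mutate(chromosome, weights, fitness_fn):
    terminals = [s for s in chromosome if s in weights]
    weakest = min(terminals, key=lambda a: weights[a])
    # candidate replacements: terminals with strictly higher weight
    candidates = [a for a in weights if weights[a] > weights[weakest]]
    best = list(chromosome)
    for cand in candidates:
        trial = [cand if s == weakest else s for s in chromosome]
        if fitness_fn(trial) > fitness_fn(best):  # keep only improvements
            best = trial
    return best

weights = {"a1": 0.05, "a7": 0.25, "a9": 0.45}  # illustrative weight table
# Stand-in fitness: total weight of the terminals in the chromosome.
fit = lambda c: sum(weights.get(s, 0.0) for s in c)
child = mutate(["+", "Q", "a1", "a7"], weights, fit)
print(child)  # -> ['+', 'Q', 'a9', 'a7']: a1 (weakest) replaced by a9
```

Because a replacement is accepted only when fitness rises, the operation can never discard an important attribute the way a purely random mutation can.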
We select the "strong" gene (the one with the highest weight summation) from the chromosome that has the lowest fitness value (lc) and exchange it with the "weak" gene (the one with the lowest weight summation) from another chromosome that has the highest fitness value (hc). In general, this process increases the fitness of hc. We repeat the exchange process until we get a new chromosome (hc') with a higher fitness value than that of both parent chromosomes. The hc' has a higher probability of being carried into the next generation. This idea comes from the gene structure [37]. Example for GSP Recombination Based on the above innovative improvements for the GSP method in this section, we present the steps of GSP in Algorithm 3 with pseudocode. In this section, we evaluate the performance of the GSP method using ten microarray cancer datasets, which were downloaded from http://www.gems-system.org. Table 1 presents the details of the experimental datasets in terms of samples, attributes and classes. Table 1 Description of the experimental datasets Our experimental results contain three parts. Part 1 (Ev.1) evaluated the best setting for GSP based on the number of genes (g) in each chromosome and the head size (h). Part 2 (Ev.2) evaluated the GSP performance in terms of three metrics: classification accuracy, number of selected genes and CPU Time. To guarantee impartial classification results and avoid bias, this study adopted the LOOCV cross validation method in evaluating performance over each dataset. Our gene selection results were compared with three gene selection methods using the same classification model for the sake of fair competition. Part 3 (Ev.3) evaluated the overall GSP performance by comparing it with other up-to-date models.
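One exchange step of the controlled recombination can be sketched as below: the strong gene (highest weight sum) of the lower-fitness parent replaces the weak gene (lowest weight sum) of the higher-fitness parent. Genes, weights and parent chromosomes are illustrative.

```python
# Hedged sketch of one step of the controlled recombination: swap the
# lower-fitness parent's strongest gene into the higher-fitness parent,
# in place of that parent's weakest gene.

def gene_weight(gene, weights):
    """Weight summation of the terminals appearing in one gene."""
    return sum(weights.get(s, 0.0) for s in gene)

def recombine(parent_hi, parent_lo, weights):
    """parent_hi has the higher fitness; each parent is a list of genes."""
    strong = max(parent_lo, key=lambda g: gene_weight(g, weights))
    weak_idx = min(range(len(parent_hi)),
                   key=lambda i: gene_weight(parent_hi[i], weights))
    child = list(parent_hi)
    child[weak_idx] = strong  # strong gene replaces the weak gene
    return child

weights = {"a1": 0.05, "a3": 0.15, "a7": 0.25, "a9": 0.45}
hc = [["+", "a1", "a3"], ["Q", "a7"]]  # higher-fitness parent
lc = [["-", "a9", "a7"], ["Q", "a1"]]  # lower-fitness parent
child = recombine(hc, lc, weights)
print(child)  # -> [['-', 'a9', 'a7'], ['Q', 'a7']]
```

In the paper this step is repeated until the resulting hc' beats both parents on the fitness function; the sketch shows a single exchange.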
Ev.1 the best setting for gene and head To set the best values for the number of genes (g) of each chromosome and the size of the gene head (h) in the GSP method, we evaluated nine different settings to show their effect on the GSP performance results. For g we selected three values: 1, 2 and 3, and for each g value we selected three h values: 10, 15 and 20. We increased the values of h by 5 to make it clear to observe the effect of h values on the GSP performance, especially when the effect of increasing h is very slight. For more reliability, we used three different datasets (11_Tumors, Leukaemia 1, Prostate Tumor). The parameters used in GSP are listed in Table 2. Table 2 Parameters used in GSP The average results across the three experimental datasets are presented in Table 3. ACavg, Navg and Tavg represent the average accuracy, number of selected attributes and CPU time respectively for ten runs, while ACstd, Nstd and Tstd represent the standard deviations of the classification accuracy, number of selected attributes and CPU time respectively. Table 3 The results of different settings for g and h. Bold font indicates the best results Figure 5 shows the evaluation values in terms of ACavg, Tavg and Navg for three different numbers of genes in each chromosome. The evaluation values a The average accuracies (ACavg). b The average number of attributes (Navg). c The average CPU time (Tavg) It is observed from the results in Table 3 that: Comparing g with h: g has a stronger effect on the results than h. Regarding g results: when g was increased, ACavg, Tavg and Navg increased as well (positive relationships). The results of ACstd, Tstd and Nstd decreased when g was increased (negative relationships). The results became stable when the g value was greater than 2. Regarding h results: h has positive relationships with ACavg, Tavg and Navg and negative relationships with ACstd, Tstd and Nstd. The results became stable when the h value was over 15.
Increasing h values would increase the complexity of the model while the AC and N results would not show a notable enhancement. The best setting for g and h was 2 and 15 respectively. Ev.2: Comparison of the GSP performance with representative gene selection algorithms In order to evaluate the performance of our GSP algorithm objectively, we first evaluated its performance in terms of three evaluation criteria: classification accuracy (AC), number of selected attributes (N) and CPU Time (T). Then we compared the results with three popular gene selection algorithms, namely Particle Swarm Optimization (PSO) [48], GEP and GA [49], using the same model for the sake of a fair comparison. The parameters of the comparison methods are listed in Table 4. Table 4 Parameter setting of the competitors The Information Gain algorithm was used in order to filter irrelevant and noisy genes and reduce the computational load for the gene selection and classification methods. The support vector machine (SVM) with a linear kernel served as the classifier of these gene selection methods. In order to avoid selection bias, LOOCV was used. Weka software was used to implement the PSO and GA models with default settings, while the GEP model was implemented by using the Java package GEP4J [50]. Table 5 shows the comparison results of GSP with three gene selection algorithms across ten selected datasets. Table 5 Comparison of GSP with three gene selection algorithms on ten selected datasets. Bold font indicates the best results The experimental results showed that the GSP algorithm achieved the highest average accuracy result (99.92%) across the ten experimental datasets, while the average accuracies of other models were 97.643%, 97.886% and 94.904% for GEP, PSO and GA respectively. The standard deviation results showed that GSP had the smallest value (0.342671), while the average standard deviations were 3.425399, 3.3534102 and 5.24038421 for GEP, PSO and GA respectively.
This means the GSP algorithm made the classification performance more accurate and stable. The GSP algorithm achieved the smallest number of predictive/relevant genes (8.16), while the average number of predictive genes was 13.8, 16.14 and 473.5 for GEP, PSO and GA respectively. These evaluation results show that GSP is a promising approach for solving gene selection and cancer classification problems. CPU Time results showed that GSP took almost half of the time that GEP needed to achieve the best solution. However, the time is still long compared with the PSO and GA methods. Ev.3: Comparison of GSP with up-to-date classification models For further evaluation, we compared our GSP model with up-to-date classification models IBPSO, SVM [14], IG-GA [35], IG-ISSO [15], EPSO [21] and mABC [22]. This comparison was based on the classification result and the number of genes regardless of the methods of data processing and classification. The comparison results on ten datasets are presented in Table 6. Table 6 Comparison of the gene selection algorithms on ten selected datasets. Bold font indicates the best results It can be seen from Table 6 that GSP performed better than its competitors on seven datasets (11_Tumors, 9_Tumors, Lung_Cancer, Leukemia1, Leukemia2, SRBCT, and DLBCL), while mABC had better results on three datasets (Brain_Tumor1, Brain_Tumor2, and Prostate). Interestingly, all runs of GSP achieved 100% LOOCV accuracy with fewer than 5 selected genes on the Lung_Cancer, Leukemia1, Leukemia2, SRBCT, and DLBCL datasets. Moreover, over 98% classification accuracies were obtained on other datasets. These results indicate that GSP has a high potential to achieve the ideal solution with fewer genes, and the selected genes are the most relevant ones. Regarding the standard deviations in Table 6, results produced by GSP were almost consistent across all datasets. The differences in the accuracy results and the number of genes across runs were very small.
For GSP, the highest ACstd was 0.52 while the highest Nstd was 1.5. This means that GSP has a stable process to select and produce a near-optimal gene subset from a high dimensional dataset (gene expression data). We applied the GSP method to ten microarray datasets. The results of the GSP performance evaluations show that GSP can generate a subset of genes with a very small number of related genes for cancer classification on each dataset. Across the ten experimental datasets, the maximum number of selected genes is 17 with an accuracy of not less than 98.88%. The performance results of GSP and other comparative models (see Table 6) on the Prostate and Brain tumor datasets were not as good as the results on other datasets. This is probably due to the fact that these models concentrated on reducing the number of irrelevant genes, but ignored other issues such as missing values and redundancy. More effort needs to be made on microarray data processing before applying the GSP model to achieve better results. The GSP method on datasets 11_Tumors and 9_Tumors achieved relatively lower accuracy results (99.88% and 98.88% respectively) compared with the accuracy results on other datasets. The reason was the high number of classes (11 and 9 respectively), which could be a problem for any classification model. We noticed from the GSP performance that when the accuracy increased, the number of selected genes and the processing time decreased (negative relationship). This shows that GSP is an effective and efficient gene selection method. In this study, we have proposed an innovative gene selection algorithm (GSP). This algorithm can not only provide a smaller subset of relevant genes for cancer classification but also achieve higher classification accuracies in most cases with shorter processing time compared with GEP.
The comparisons with the representative state-of-the-art models on ten microarray datasets show the outperformance of GSP in terms of classification accuracy and the number of selected genes. However, the processing time of GSP is still longer than that of the PSO and GA models. Our future research direction is to reduce the processing time of GSP while still keeping the effectiveness of the method.
ACavg: The average value of accuracy
BPSO: Binary Particle Swarm Optimization
ET: Expression Tree
fs: Function set
GEP: Gene Expression Programming
GSP: Gene Selection Programming
IBPSO: Improved Binary Particle Swarm Optimization
ISSO: Improved Simplified Swarm Optimization
LOOCV: Leave-one-out cross validation
mABC: Modified Artificial Bee Colony algorithm
N: Number of genes in each chromosome
Navg: The average number of selected attributes
PSO: Particle Swarm Optimization
r: Rank value
SVM: Support Vector Machine
Tavg: The average value of CPU time
TS: Tabu Search
ts: Terminal set
Wang H-Q, Jing G-J, Zheng C. Biology-constrained gene expression discretization for cancer classification. Neurocomputing. 2014;145:30–6.
Espezua S, Villanueva E, Maciel CD, Carvalho A. A Projection Pursuit framework for supervised dimension reduction of high dimensional small sample datasets. Neurocomputing. 2015;149:767–76.
Seo M, Oh S. A novel divide-and-merge classification for high dimensional datasets. Comput Biol Chem. 2013;42:23–34.
Xie H, Li J, Zhang Q, Wang Y. Comparison among dimensionality reduction techniques based on Random Projection for cancer classification. Comput Biol Chem. 2016;65:165–72.
Tabakhi S, Najafi A, Ranjbar R, Moradi P. Gene selection for microarray data classification using a novel ant colony optimization. Neurocomputing. 2015;168:1024–36.
Du D, Li K, Li X, Fei M. A novel forward gene selection algorithm for microarray data. Neurocomputing. 2014;133:446–58.
Mundra PA, Rajapakse JC. Gene and sample selection for cancer classification with support vectors based t-statistic. Neurocomputing. 2010;73:2353–62.
Jin C, Jin S-W, Qin L-N.
Attribute selection method based on a hybrid BPNN and PSO algorithms. Appl Soft Comput. 2012;12:2147–55.
Alshamlan H, Badr G, Alohali Y. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling. Biomed Res Int. 2015;2015:604910.
Alshamlan HM, Badr GH, Alohali YA. The performance of bio-inspired evolutionary gene selection methods for cancer classification using microarray dataset. Int J Biosci, Biochem Bioinformatics. 2014;4:166.
Azzawi H, Hou J, Alanni R, Xiang Y. SBC: A New Strategy for Multiclass Lung Cancer Classification Based on Tumour Structural Information and Microarray Data. In: 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS); 2018. p. 68–73.
Chen K-H, Wang K-J, Tsai M-L, Wang K-M, Adrian AM, Cheng W-C, et al. Gene selection for cancer identification: a decision tree model empowered by particle swarm optimization algorithm. BMC Bioinformatics. 2014;15:1.
Zawbaa HM, Emary E, Hassanien AE, Parv B. A wrapper approach for feature selection based on swarm optimization algorithm inspired from the behavior of social-spiders. In: 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR); 2015. p. 25–30.
Mohamad MS, Omatu S, Deris S, Yoshioka M. A modified binary particle swarm optimization for selecting the small subset of informative genes from gene expression data. IEEE Trans Inf Technol Biomed. 2011;15:813–22.
Lai C-M, Yeh W-C, Chang C-Y. Gene selection using information gain and improved simplified swarm optimization. Neurocomputing. 2016;218:331–8.
Karaboga D, Basturk B. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In: International Fuzzy Systems Association World Congress; 2007. p. 789–98.
Jain I, Jain VK, Jain R. Correlation feature selection based improved-Binary Particle Swarm Optimization for gene selection and cancer classification. Appl Soft Comput.
2018;62:203–15.
Pino Angulo A. Gene Selection for Microarray Cancer Data Classification by a Novel Rule-Based Algorithm. Information. 2018;9:6.
Chuang L-Y, Yang C-H, Yang C-H. Tabu search and binary particle swarm optimization for feature selection using microarray data. J Comput Biol. 2009;16:1689–703.
Chuang L-Y, Chang H-W, Tu C-J, Yang C-H. Improved binary PSO for feature selection using gene expression data. Comput Biol Chem. 2008;32:29–38.
Mohamad MS, Omatu S, Deris S, Yoshioka M, Abdullah A, Ibrahim Z. An enhancement of binary particle swarm optimization for gene selection in classifying cancer classes. Algorithms Mol Biol. 2013;8:1.
Moosa JM, Shakur R, Kaykobad M, Rahman MS. Gene selection for cancer classification with the help of bees. BMC Med Genet. 2016;9:2–47.
Ferreira C. Gene expression programming in problem solving. In: Soft computing and industry. London: Springer; 2002. p. 635–53.
Azzawi H, Hou J, Xiang Y, Alanni R. Lung Cancer Prediction from Microarray Data by Gene Expression Programming. IET Syst Biol. 2016;10(5):168–78.
Yu Z, Lu H, Si H, Liu S, Li X, Gao C, et al. A highly efficient gene expression programming (GEP) model for auxiliary diagnosis of small cell lung cancer. PLoS One. 2015;10:e0125517.
Peng Y, Yuan C, Qin X, Huang J, Shi Y. An improved Gene Expression Programming approach for symbolic regression problems. Neurocomputing. 2014;137:293–301.
Kusy M, Obrzut B, Kluska J. Application of gene expression programming and neural networks to predict adverse events of radical hysterectomy in cervical cancer patients. Med Biol Eng Comput. 2013;51:1357–65.
Yu Z, Chen X-Z, Cui L-H, Si H-Z, Lu H-J, Liu S-H. Prediction of lung cancer based on serum biomarkers by gene expression programming methods. Asian Pac J Cancer Prev. 2014;15:9367–73.
Al-Anni R, Hou J, Abdu-aljabar R, Xiang Y. Prediction of NSCLC recurrence from microarray data with GEP. IET Syst Biol. 2017;11(3):77–85.
Azzawi H, Hou J, Alanni R, Xiang Y, Abdu-Aljabar R, Azzawi A.
Multiclass Lung Cancer Diagnosis by Gene Expression Programming and Microarray Datasets. In: International Conference on Advanced Data Mining and Applications; 2017. p. 541–53.
Alsulaiman FA, Sakr N, Valdés JJ, El Saddik A, Georganas ND. Feature selection and classification in genetic programming: Application to haptic-based biometric data. In: Computational Intelligence for Security and Defense Applications (CISDA 2009), IEEE Symposium on; 2009. p. 1–7.
Alanni R, Hou J, Azzawi H, Xiang Y. New Gene Selection Method Using Gene Expression Programing Approach on Microarray Data Sets. In: Lee R, editor. Computer and Information Science. Cham: Springer International Publishing; 2019. p. 17–31.
Yang Y, Pedersen JO. A comparative study on feature selection in text categorization. In: ICML; 1997. p. 412–20.
Dai J, Xu Q. Attribute selection based on information gain ratio in fuzzy rough set theory with application to tumor classification. Appl Soft Comput. 2013;13:211–21.
Yang C-H, Chuang L-Y, Yang CH. IG-GA: a hybrid filter/wrapper method for feature selection of microarray data. J Med Biol Eng. 2010;30:23–8.
Goldberg DE, Deb K. A comparative analysis of selection schemes used in genetic algorithms. Found Genet Algorithms. 1991;1:69–93.
Suryamohan K, Halfon MS. Identifying transcriptional cis-regulatory modules in animal genomes. Wiley Interdiscip Rev Dev Biol. 2015;4:59–84.
Su AI, Welsh JB, Sapinoso LM, Kern SG, Dimitrov P, Lapp H, et al. Molecular classification of human carcinomas by use of gene expression signatures. Cancer Res. 2001;61:7388–93.
Staunton JE, Slonim DK, Coller HA, Tamayo P, Angelo MJ, Park J, et al. Chemosensitivity prediction by transcriptional profiling. Proc Natl Acad Sci. 2001;98:10787–92.
Pomeroy SL, Tamayo P, Gaasenbeek M, Sturla LM, Angelo M, McLaughlin ME, et al. Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature. 2002;415:436–42.
Acknowledgements
We appreciate Deakin University staff for their continued cooperation. We thank Rana Abdul Jabbar for the guidance on data analysis.

Funding
No funding was received.

Availability of data and materials
All the datasets were downloaded from http://www.gems-system.org.
Author information
School of Information Technology, Deakin University, Burwood, 3125, VIC, Australia
Russul Alanni, Jingyu Hou, Hasseeb Azzawi & Yong Xiang

Authors' contributions
RA designed the study, wrote the code and drafted the manuscript. JH designed the model and the experiments and revised the manuscript. HA and YX participated in the model design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.

Corresponding author
Correspondence to Russul Alanni.

Citation
Alanni, R., Hou, J., Azzawi, H. et al. A novel gene selection algorithm for cancer classification using microarray datasets. BMC Med Genomics 12, 10 (2019). https://doi.org/10.1186/s12920-018-0447-6

Keywords
Gene selection
Microarray cancer dataset