Multiple periodic solutions for damped vibration systems with superquadratic terms at infinity

Ping Li & Guanwei Chen

By using variational methods, we obtain infinitely many nontrivial periodic solutions for a class of damped vibration systems with superquadratic terms at infinity. By using some weaker conditions, our results extend and improve some existing results in the literature. Besides, some examples are given to illustrate our results.

Introduction and the main result

In this paper, we study the existence of infinitely many nontrivial periodic solutions for the following damped vibration system: $$ \textstyle\begin{cases} \ddot{u}+(q(t)I_{N\times N}+B)\dot{u}+(\frac{1}{2}Bq(t)-A(t))u+ F_{u}(t,u)=0,\quad t\in\mathbb{R},\\ u(0)-u(T)=\dot{u}(0)-\dot{u}(T)=0, \quad T>0, \end{cases} $$ where \(u=u(t)\in C^{2}(\mathbb{R},\mathbb{R}^{N})\), \(I_{N\times N}\) is the \(N\times N\) identity matrix, \(q(t)\in L^{1}(\mathbb{R};\mathbb{R})\) is T-periodic and satisfies \(\int_{0}^{T} q(t)\,dt=0\), \(A(t)=[a_{ij}(t)]\) is a T-periodic symmetric \(N\times N\) matrix-valued function with \(a_{ij}\in L^{\infty}(\mathbb{R};\mathbb{R})\) (\(\forall i,j=1, 2,\ldots, N\)), and \(B=[b_{ij}]\) is an antisymmetric \(N\times N\) constant matrix. When \(B=0\) (the zero matrix), the authors of [6] studied a special case of (1.1) and obtained the existence and multiplicity of periodic solutions.
When \(B\neq0\), the author of [2] obtained infinitely many periodic solutions of (1.1) with \(F(t,u)\) satisfying the asymptotically quadratic condition $$ \lim_{ \vert u \vert \rightarrow\infty}\frac{F(t,u)}{ \vert u \vert ^{2}}=V(t) \quad\mbox{uniformly for } t \in[0,T], \mbox{ where } \inf_{t\in[0,T]}V(t)>0; $$ the authors of [4] obtained one existence result and two multiplicity results with \(F(t,u)\) satisfying the superquadratic condition $$ 0< \mu F(t,u)\leq\bigl(F_{u}(t,u),u\bigr),\quad\forall t\in[0,T], \forall \vert u \vert \geq r, $$ where \(\mu> 2\) and \(r\geq0\) are constants, and \((\cdot,\cdot)\) and \(|\cdot|\) denote the inner product and the associated norm in \(\mathbb{R}^{N}\); the author of [3] used a more general superquadratic condition (\(\lim_{|u|\rightarrow\infty}\frac{F(t,u)}{|u|^{2}}=+\infty\) uniformly for \(t\in[0,T]\)) and obtained infinitely many periodic solutions for (1.1). Using the more general superquadratic condition of [3] together with some weaker conditions, we also obtain infinitely many periodic solutions for (1.1). Our main result reads as follows.

Theorem 1.1. System (1.1) has infinitely many nontrivial T-periodic solutions if \(F(t,u)\) is T-periodic in t and even in u, and the following conditions hold:

\({(F_{1})}\): \(F(t,u)\) is measurable in t for every \(u\in\mathbb{R}^{N}\) and continuously differentiable in u for a.e. \(t\in[0,T]\), and there exist \(a\in C(\mathbb{R}^{+},\mathbb{R}^{+})\), \(b\in L^{1}([0,T];\mathbb{R}^{+})\) such that $$\bigl\vert F(t,u) \bigr\vert \leq a\bigl( \vert u \vert \bigr)b(t), \qquad \bigl\vert F_{u}(t,u) \bigr\vert \leq a\bigl( \vert u \vert \bigr)b(t), \quad \forall(t,u)\in[0,T]\times\mathbb{R}^{N}. $$

\({(F_{2})}\): \(F(t,u)\geq0\), \(\forall(t,u)\in[0,T]\times\mathbb{R}^{N}\), and $$\lim_{ \vert u \vert \rightarrow\infty}\frac{F(t,u)}{ \vert u \vert ^{2}}=+\infty\quad\textit {uniformly for } t\in[0,T]. 
$$ There exists\(\alpha>2\)such that $$\lim_{ \vert u \vert \rightarrow\infty}\frac{F_{u}(t,u)}{ \vert u \vert ^{\alpha-1}}< +\infty \quad\textit{uniformly for } t\in[0,T]. $$ There are constants\(b>0\)and\(1\leq\beta\in(\alpha -2,+\infty)\)such that $$\lim_{ \vert u \vert \rightarrow\infty}\inf\frac{(F_{u}(t,u),u)-2F(t,u)}{ \vert u \vert ^{\beta}}>b \quad\textit{uniformly for } t\in[0,T]. $$ To compare our result with the most related result [3], we firstly describe the result in [3]. ([3]) System (1.1) has infinitely many nontrivialT-periodic solutions if\(F\in C^{1}(\mathbb{R}\times\mathbb{R}^{N},\mathbb{R})\)isT-periodic in t and even inu, and it satisfies the following conditions: \({(\mathit{SF}_{1})}\): \(F(t,0)=0\), \(\forall t\in[0,T]\), and there are two constants\(d_{1}>0\)and\(\alpha_{1}>2\)such that $$\bigl\vert F_{u}(t,u) \bigr\vert \leq d_{1}\bigl(1+ \vert u \vert ^{\alpha_{1}-1}\bigr), \quad\forall(t,u)\in [0,T]\times \mathbb{R}^{N}. $$ \(\frac{1}{2}(F_{u}(t,u),u)\geq F(t,u)\geq0\)for all\((t,u)\in [0,T]\times\mathbb{R}^{N}\)and There is a constant\(b>0\)such that $$\lim_{ \vert u \vert \rightarrow\infty}\inf\frac{(F_{u}(t,u),u)-2F(t,u)}{ \vert u \vert ^{\alpha _{1}}}>b \quad\textit{uniformly for } t\in[0,T]. $$ The method of Theorem 1.1 is based on the fountain theorem of Bartsch [1], which is essentially different from the variant fountain theorem developed by Zou [7] used in [3]. Our result extends and improves the result in [3]. The reasons are as follows: We only need F to satisfy \((F_{1})\) rather than \(F\in C^{1}(\mathbb{R}\times\mathbb{R}^{N},\mathbb{R})\). We remove the condition \(\frac{1}{2}(F_{u}(t,u),u)\geq F(t,u)\) for all \((t,u)\in[0,T]\times\mathbb{R}^{N}\) in \((\mathit{SF}_{2})\). Condition \((F_{3})\) is weaker than \((\mathit{SF}_{1})\). 
Indeed, condition \((\mathit{SF}_{1})\) implies $$\bigl\vert F(t,u)-F(t,0) \bigr\vert = \biggl\vert \int_{0}^{1}\bigl(F_{u}(t,su),u\bigr)\,ds \biggr\vert \leq d_{1}\bigl( \vert u \vert + \vert u \vert ^{\alpha_{1}}\bigr),\quad\forall(t,u)\in[0,T]\times\mathbb{R}^{N}, $$ so it follows from the continuity of \(F(\cdot,0)\) that (for all \(\alpha \ge\alpha_{1}\)) $$\begin{gathered} \limsup_{ \vert u \vert \rightarrow\infty}\frac{ \vert F(t,u) \vert }{ \vert u \vert ^{\alpha}} \leq\limsup_{ \vert u \vert \rightarrow\infty}\frac{d_{1}( \vert u \vert + \vert u \vert ^{\alpha_{1}})+\max_{t\in[0,T]} \vert F(t,0) \vert }{ \vert u \vert ^{\alpha}} < +\infty\quad \mbox{uniformly for } t\in[0,T]. \end{gathered} $$ Moreover, the constant β in our condition \((F_{4})\) is more general than \(\alpha_{1}\) in \((\mathit{SF}_{3})\).

Example 1.1. The following example illustrates our result. Let $$F(t,u):=h(t) \vert u \vert ^{2}\ln\bigl(1+ \vert u \vert ^{2}\bigr),\quad u\in\mathbb{R}^{N}, t\in[0,T], $$ where \(h\in L^{\infty}(0,T;\mathbb{R}^{+})\) with \(\inf_{t\in[0,T]}h(t)>0\). Then $$F_{u}(t,u)=2h(t)\ln\bigl(1+ \vert u \vert ^{2}\bigr)u+ \frac{2h(t) \vert u \vert ^{2}u}{1+ \vert u \vert ^{2}}. $$ It is not hard to check that this function satisfies our conditions \((F_{1})\)–\((F_{4})\).

The rest of our paper is organized as follows. In Sect. 2, we establish the variational framework associated with (1.1), give some preliminary lemmas that are useful in the proof of our result, and then give the detailed proof of our main result.

Variational frameworks and the proof of Theorem 1.1

Let \(\|\cdot\|_{p}\) denote the norm of \(L^{p}([0,T];\mathbb{R}^{N})\) for any \(p\in[1,\infty]\). 
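Before setting up the framework, the gradient formula claimed in Example 1.1 can be checked symbolically. The sketch below (illustrative only, using sympy with \(N=2\) and \(h\) treated as a positive constant, since \(t\) is fixed) verifies that the partial derivatives of \(F\) match the stated expression for \(F_{u}\):

```python
import sympy as sp

# Symbolic check of the gradient in Example 1.1 for N = 2;
# h stands in for the value h(t) at a fixed t.
u1, u2, h = sp.symbols('u1 u2 h', real=True, positive=True)
r2 = u1**2 + u2**2                       # |u|^2
F = h * r2 * sp.log(1 + r2)              # F(t,u) = h(t)|u|^2 ln(1+|u|^2)

grad = [sp.diff(F, v) for v in (u1, u2)]
# Claimed formula: F_u = 2h ln(1+|u|^2) u + 2h |u|^2 u / (1+|u|^2)
claimed = [2*h*sp.log(1 + r2)*v + 2*h*r2*v/(1 + r2) for v in (u1, u2)]

ok = all(sp.simplify(g - c) == 0 for g, c in zip(grad, claimed))
```

Running this confirms the two expressions agree componentwise.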
Let $$\begin{aligned} W:= &{}\bigl\{ u=u(t): [0,T]\rightarrow\mathbb{R}^{N}| u\mbox{ is absolutely continuous}, u(0)=u(T), \mbox{ and}\\ &\dot{u}\in L^{2} \bigl([0,T];\mathbb{R}^{N}\bigr) \bigr\} \end{aligned} $$ with the inner product $$(u,v)_{W}:= \int_{0}^{T} \bigl[(u,v) +(\dot{u},\dot{v}) \bigr] \,dt, \quad \forall u,v\in W. $$ The corresponding norm is defined by \(\|u\|_{W}=(u,u)_{W}^{1/2}\). Obviously, W is a Hilbert space. Set $$Q(t):= \int_{0}^{t}q(s)\,ds $$ and define $$\Vert u \Vert _{0}:= \biggl( \int_{0}^{T}e^{Q(t)} \bigl( \vert u \vert ^{2} + \vert \dot{u} \vert ^{2} \bigr)\, dt \biggr)^{1/2}, \quad u\in W. $$ Obviously, the norm \(\|\cdot\|_{0}\) is equivalent to the usual one \(\|\cdot\|_{W}\) on W. We denote by \(\langle\cdot,\cdot\rangle_{0}\) the inner product corresponding to \(\|\cdot\|_{0}\) on W. The corresponding functional for problem (1.1) is defined by $$ \varPhi(u):=\frac{1}{2} \int_{0}^{T}e^{Q(t)} \bigl[ \vert \dot{u} \vert ^{2}+(Bu,\dot {u})+\bigl(A(t)u,u\bigr) \bigr]\,dt - \int_{0}^{T}e^{Q(t)}F(t,u)\,dt, \quad u\in W. $$ Let \(L: W\to W^{\ast}\) (\(W^{*}\) is the dual space of W) be an operator defined by $$Lu(v):= \int_{0}^{T}e^{Q(t)} \biggl[(B\dot{u},v)+ \frac {1}{2}q(t) (Bu,v) \biggr]\,dt, \quad\forall u,v\in W. $$ We can identify \(W^{\ast}\) with W by the Riesz representation theorem, so Lu can be viewed as a function belonging to W such that $$Lu(v)=\langle Lu,v\rangle_{0}, \quad\forall u,v\in W. $$ It is not hard to check that L is a bounded linear operator on W. From the discussion in [4], we get that L is self-adjoint and compact on W. Since B is an antisymmetric \(N\times N\) constant matrix, it follows from integration by parts that $$\langle Lu,u\rangle_{0}= \int_{0}^{T}e^{Q(t)} \biggl[(B\dot{u},u)+ \frac {1}{2}q(t) (Bu,u) \biggr]\,dt = \int_{0}^{T}e^{Q(t)} \bigl[(B\dot{u},u) \bigr]\,dt. 
$$ We define an operator \(K: W\to W^{\ast}\) by $$\langle Ku,v\rangle_{0}=\langle Lu,v\rangle_{0}+ \int_{0}^{T}e^{Q(t)} \bigl( \bigl(I_{N\times N}-A(t)\bigr)u,v \bigr)\,dt, \quad\forall u,v\in W. $$ Then it is easy to check that K is a bounded self-adjoint linear operator. Therefore, the definitions of \(\langle\cdot,\cdot\rangle_{0}\) and K imply that Φ defined in (2.1) can be rewritten as $$\varPhi(u)=\frac{1}{2}\bigl\langle (I-K)u,u\bigr\rangle _{0}- \int _{0}^{T}e^{Q(t)}F(t,u)\,dt, \quad u\in W, $$ where I denotes the identity operator. By the classical spectral theory, we have the decomposition $$W=W^{0} \oplus W^{-}\oplus W^{+}, $$ where \(W^{0}:=\ker(I-K)\), \(W^{+}\) and \(W^{-}\) are the positive and negative spectral subspaces of \(I-K\) in W, respectively. Besides, \(W^{-}\) and \(W^{0}\) are finite dimensional (see [4]). Obviously, we can define a new equivalent inner product \(\langle\cdot ,\cdot\rangle\) on W with corresponding norm \(\|\cdot\|\) such that $$\bigl\langle (I-K)u,u\bigr\rangle _{0}=\pm \Vert u \Vert ^{2}, \quad\forall u\in W^{\pm}. $$ Then we have $$\begin{aligned} \varPhi(u)&=\frac{1}{2}\bigl\langle (I-K)u,u \bigr\rangle _{0}-I(u) \\ &=\frac{1}{2}\bigl( \bigl\Vert u^{+} \bigr\Vert ^{2}- \bigl\Vert u^{-} \bigr\Vert ^{2} \bigr)-I(u), \quad u\in W, \end{aligned} $$ where \(I(u):=\int_{0}^{T}e^{Q(t)}F(t,u)\,dt\). Then, by the assumptions of F, we know that I and Φ are continuously differentiable and $$\varPhi'(u)v=\bigl\langle u^{+},v^{+}\bigr\rangle -\bigl\langle u^{-},v^{-}\bigr\rangle -I'(u)v, \quad I'(u)v= \int_{0}^{T}e^{Q(t)} \bigl(F_{u}(t,u),v \bigr)\,dt, $$ where \(u,v\in W=W^{-} \oplus W^{0}\oplus W^{+}\) with \(u=u^{-}+u^{0}+u^{+}\) and \(v=v^{-}+v^{0}+v^{+}\); besides, the T-periodic solutions of (1.1) are the critical points of the \(C^{1}\) functional \(\varPhi: W\rightarrow\mathbb{R}\) (see [4]). 
Since the embedding of W into \(C(0,T;\mathbb{R}^{N})\) is compact, there exists a constant \(C>0\) such that $$ \Vert u \Vert _{\infty}\leq C \Vert u \Vert ,\quad\forall u\in W, $$ where \(\|u\|_{\infty}=\max_{t\in[0,T]}|u(t)|\). Besides, by the Sobolev embedding theorem, we directly get the following lemma.

Lemma 2.1. W is compactly embedded in \(L^{p}([0,T];\mathbb{R}^{N})\), \(\forall p\in[1,+\infty]\).

In order to prove Theorem 1.1, we state the fountain theorem of Bartsch (see [1, 5]). Let W be a Banach space with the norm \(\|\cdot\|\) and \(W:=\overline {\bigoplus_{m\in\mathbb{N}}X_{m}}\) with \(\dim X_{m}<\infty\) for any \(m\in \mathbb{N}\). Set $$ Y_{k}:=\bigoplus_{m=1}^{k}X_{m}, \qquad Z_{k}:=\overline{\bigoplus_{m=k}^{\infty}X_{m}}. $$

Lemma 2.2 (Fountain theorem). Assume that \(\varPhi\in C^{1}(W,\mathbb{R})\) satisfies the Cerami condition, \(\varPhi(-u)=\varPhi(u)\), and for every \(k\in\mathbb{N}\) there exist \(\rho_{k}>r_{k}>0\) such that: (i) \(a_{k}:=\inf_{u\in Z_{k},\|u\|=r_{k}}\varPhi(u)\rightarrow +\infty\) as \(k\rightarrow\infty\); (ii) \(b_{k}:=\max_{u\in Y_{k},\|u\|=\rho_{k}}\varPhi(u)\leq0\). Then Φ has a sequence of critical values tending to +∞.

Under the Palais–Smale (PS) condition, the fountain theorem is established in [1, 5]. Because the deformation theorem holds true under the Cerami condition, the fountain theorem is still valid under the Cerami condition. Here, we say that \(\varPhi\in C^{1}(W,\mathbb{R})\) satisfies the Cerami condition if any sequence \(\{u_{n}\}\subset W\) such that \(\{\varPhi(u_{n})\}\) is bounded and \(\|\varPhi{'}(u_{n})\|(1+\|u_{n}\|)\rightarrow0\) as \(n\rightarrow\infty\) has a convergent subsequence; such a sequence is called a Cerami sequence.

Lemma 2.3. If assumptions \((F_{1})\), \((F_{3})\), and \((F_{4})\) hold, then Φ satisfies the Cerami condition \((C)\).

Proof. Assume that \(\{u_{n}\}\subset W\) is a sequence such that \(\{\varPhi(u_{n})\}\) is bounded and \(\|\varPhi{'}(u_{n})\| {(1+\|u_{n}\| )}\rightarrow0\). Part 1. 
We first prove the boundedness of \(\{u_{n}\}\). There is a constant \(M>0\) such that $$ \bigl\vert \varPhi(u_{n}) \bigr\vert \leq M\quad\mbox{and}\quad \bigl\Vert \varPhi{'}(u_{n}) \bigr\Vert \bigl(1+ \Vert u_{n} \Vert \bigr)\leq M,\quad\forall n\in\mathbb{N}^{+}. $$ It follows from \((F_{4})\) that there exist \(c_{1}>0\) and \(M_{1}>1\) such that $$ \bigl(F_{u}(t,u),u\bigr)-2F(t,u)\geq c_{1} \vert u \vert ^{\beta},\quad\forall \vert u \vert \geq M_{1}, \forall t\in[0,T]. $$ By \((F_{1})\), one has that $$ \bigl\vert \bigl(F_{u}(t,u),u\bigr)-2F(t,u) \bigr\vert \leq c_{2}b(t),\quad\forall \vert u \vert \leq M_{1},\forall t \in[0,T], $$ where \(c_{2}=(M_{1}+2)\max_{s\in[0,M_{1}]}a(s)\). Combining (2.5) and (2.6), we get that $$ \bigl(F_{u}(t,u),u\bigr)-2F(t,u)\geq c_{1} \vert u \vert ^{\beta}-c_{1}M^{\beta }_{1}-c_{2}b(t), \quad\forall u\in\mathbb{R}^{N},\forall t\in[0,T]. $$ It follows from \(e^{Q(t)}\geq d_{1}\) for some constant \(d_{1}>0\) (\(\forall t\in[0,T]\)), (2.4), (2.7), and the definitions of \(\varPhi (u)\) and \(\varPhi{'}(u)\) that $$ \begin{aligned}[b] 3M&\geq2\varPhi(u_{n})-\bigl\langle \varPhi{'}(u_{n}),u_{n}\bigr\rangle \\ &= \int_{0}^{T}e^{Q(t)} \bigl[ \bigl(F_{u}(t,u_{n}),u_{n} \bigr)-2F(t,u_{n}) \bigr]\,dt \\ &\geq d_{1}c_{1} \int_{0}^{T} \vert u_{n} \vert ^{\beta}\,dt-d_{1}c_{1}M_{1}^{\beta }T-d_{1}c_{2} \int_{0}^{T}b(t)\,dt, \quad\forall n\in \mathbb{N}^{+}. \end{aligned} $$ By (2.8) and \(b\in L^{1}([0,T];\mathbb{R}^{+})\), we have $$ \int_{0}^{T} \vert u_{n} \vert ^{\beta}\,dt\leq D,\quad\forall n\in\mathbb{N}^{+} $$ for some \(D>0\). Let \(\varPi_{n}=\{t\in[0,T]:|u_{n}|\geq M_{1}\}\); then we have $$\int_{\varPi_{n}} \vert u_{n} \vert ^{\beta}\,dt \leq D_{1},\quad\forall n\in \mathbb{N}^{+} $$ for some \(D_{1}>0\). Since \(\beta\geq1\), we also have $$ \int_{\varPi_{n}} \vert u_{n} \vert \,dt\leq D_{1},\quad\forall n\in\mathbb{N}^{+}. 
$$ For any \(n\in\mathbb{N}\), let \(\chi_{n}:\mathbb{R}\rightarrow\mathbb{R}\) be the indicator of \(\varPi_{n}\), that is, $$\chi_{n}(t):= \textstyle\begin{cases} 1, & t\in\varPi_{n},\\ 0,& t\notin\varPi_{n}, \end{cases}\displaystyle \quad\forall n\in \mathbb{N}^{+}. $$ Then, by the definition of \(\varPi_{n}\) and (2.10), we have $$\bigl\Vert (1-\chi_{n})u_{n} \bigr\Vert _{\infty}\leq M_{1}\quad\mbox{and}\quad \Vert \chi _{n}u_{n} \Vert _{1}\leq D_{1},\quad \forall n\in\mathbb{N}^{+}. $$ It follows from the equivalence of any two norms on the finite-dimensional space \(W^{0}\oplus W^{-}\) that $$\begin{aligned} \bigl\Vert u_{n}^{-} \bigr\Vert _{2}^{2}&=\bigl(u_{n}^{-},u_{n} \bigr)_{2} \\ &=\bigl(u_{n}^{-},(1-\chi_{n})u_{n} \bigr)_{2}+\bigl(u_{n}^{-},\chi_{n}u_{n} \bigr)_{2} \\ &\leq \bigl\Vert (1-\chi_{n})u_{n} \bigr\Vert _{\infty}\cdot \bigl\Vert u_{n}^{-} \bigr\Vert _{1}+ \bigl\Vert u_{n}^{-} \bigr\Vert _{\infty}\cdot \Vert \chi_{n}u_{n} \Vert _{1} \\ &\leq\bigl(h_{1} \bigl\Vert (1-\chi_{n})u_{n} \bigr\Vert _{\infty}+h_{2} \Vert \chi_{n}u_{n} \Vert _{1}\bigr) \bigl\Vert u_{n}^{-} \bigr\Vert _{2} \\ &\leq(h_{1} M_{1}+h_{2}D_{1}) \bigl\Vert u_{n}^{-} \bigr\Vert _{2}, \quad\forall n\in \mathbb{N}^{+} \end{aligned} $$ for some \(h_{1}, h_{2}>0\). Therefore, $$\bigl\Vert u_{n}^{-} \bigr\Vert _{2}\leq h_{1} M_{1}+h_{2}D_{1}, \quad\forall n\in \mathbb{N}^{+}, $$ which together with the equivalence of any two norms on the finite-dimensional space \(W^{0}\oplus W^{-}\) implies that $$ \bigl\Vert u_{n}^{-} \bigr\Vert \leq D_{2}, \quad\forall n\in\mathbb{N}^{+} $$ for some \(D_{2}>0\). It follows from \((F_{3})\) that there exist \(c_{3}>0\) and \(M_{2}>0\) such that $$F(t,u)\leq c_{3} \vert u \vert ^{\alpha},\quad\forall \vert u \vert \geq M_{2},\forall t\in[0,T]. $$ By \((F_{1})\), one has that $$\bigl\vert F(t,u) \bigr\vert \leq c_{4}b(t),\quad\forall \vert u \vert \leq M_{2},\forall t\in[0,T], $$ where \(c_{4}=\max_{s\in[0,M_{2}]}a(s)\). Hence, we obtain $$ F(t,u)\leq c_{3} \vert u \vert ^{\alpha}+c_{4}b(t), \quad\forall u\in\mathbb {R}^{N},\forall t\in[0,T]. 
$$ By (2.4), (2.11), (2.12), and \(e^{Q(t)}\leq d_{2}\) for some constant \(d_{2}>0\) (\(\forall t\in[0,T]\)), we have $$ \begin{aligned}[b] \frac{1}{2} \bigl\Vert u_{n}^{+} \bigr\Vert ^{2}&=\varPhi(u_{n})+ \frac{1}{2}\bigl\Vert u_{n}^{-} \bigr\Vert ^{2}+ \int _{0}^{T}e^{Q(t)}F(t,u_{n}) \,dt \\ &\leq M+\frac{1}{2}D_{2}^{2}+d_{2}c_{3} \int_{0}^{T} \vert u_{n} \vert ^{\alpha}\,dt+d_{2}c_{4} \int _{0}^{T}b(t)\,dt,\quad\forall n\in \mathbb{N}^{+}. \end{aligned} $$ If \(\beta>\alpha\), Hölder's inequality and (2.9) imply that $$ \int_{0}^{T} \vert u_{n} \vert ^{\alpha}\,dt\leq T^{(\beta-\alpha)/\beta}\biggl( \int _{0}^{T} \vert u_{n} \vert ^{\beta}\,dt\biggr)^{\alpha/\beta}< +\infty,\quad\forall n\in \mathbb{N}^{+}. $$ It follows from (2.13), (2.14), and \(b\in L^{1}([0,T];\mathbb{R}^{+})\) that \(\{u_{n}\}\) is bounded. If \(\beta\leq\alpha\), using (2.2), we have that $$\begin{aligned} \int_{0}^{T} \vert u_{n} \vert ^{\alpha}\,dt&= \int_{0}^{T} \vert u_{n} \vert ^{\beta } \vert u_{n} \vert ^{\alpha-\beta}\,dt \\ &\leq \Vert u_{n} \Vert _{\infty}^{\alpha-\beta} \int_{0}^{T} \vert u_{n} \vert ^{\beta}\,dt \\ &\leq C^{\alpha-\beta} \Vert u_{n} \Vert ^{\alpha-\beta} \int_{0}^{T} \vert u_{n} \vert ^{\beta }\,dt,\quad\forall n\in\mathbb{N}^{+}. \end{aligned} $$ Noting that \(\alpha-\beta<2\), it follows from (2.9), (2.13), and \(b\in L^{1}([0,T];\mathbb{R}^{+})\) that \(\{u_{n}\}\) is bounded. Part 2. Next we prove that the sequence \(\{u_{n}\}\) has a convergent subsequence. The boundedness of \(\{u_{n}\}\) implies that, up to a subsequence, \(u_{n}\rightharpoonup u\) in W. First, we prove $$ \int_{0}^{T}e^{Q(t)} \bigl(F_{u}(t,u_{n}),u_{n}-u \bigr)\,dt\rightarrow 0,\quad n\rightarrow\infty. $$ Note that Lemma 2.1 implies that \(u_{n}\rightarrow u\) in \(L^{p}\) and that there is a constant \(c_{5}>0\) such that $$ \Vert u_{n}-u \Vert _{p}\rightarrow0,\qquad \Vert u_{n} \Vert _{p}\leq c_{5} \Vert u_{n} \Vert , \quad \forall1\leq p< \infty. 
$$ For \(|u|\geq M_{3}\) (\(M_{3}>0\)), it follows from \((F_{3})\) that there exists \(c_{6}>0\) such that $$ \bigl\vert F_{u}(t,u) \bigr\vert \leq c_{6} \vert u \vert ^{\alpha-1}, \quad\forall \vert u \vert \geq M_{3},\forall t\in[0,T]. $$ By the boundedness of \(\{u_{n}\}\) and Lemma 2.1, we have \(\sup_{n\in\mathbb{N}^{+}}\Vert u_{n}\Vert <\infty\). It follows from \(e^{Q(t)}\leq d_{2}\), (2.16), (2.17), and Hölder's inequality that $$ \begin{aligned}[b] &\int_{0}^{T}e^{Q(t)} \bigl(F_{u}(t,u_{n}),u_{n}-u \bigr)\,dt \\ &\quad\leq d_{2} \int_{0}^{T}c_{6} \vert u_{n} \vert ^{\alpha-1} \vert u_{n}-u \vert \,dt \\ &\quad\leq c_{6}d_{2} \Vert u_{n}-u \Vert _{\alpha}\cdot \Vert u_{n} \Vert _{\alpha}^{\alpha -1} \\ &\quad\leq c_{5}^{\alpha-1}c_{6}d_{2} \Vert u_{n}-u \Vert _{\alpha}\cdot \Vert u_{n} \Vert ^{\alpha-1}\rightarrow0. \end{aligned} $$ For \(|u|\leq M_{3}\), by \((F_{1})\), one has that $$ \bigl\vert F_{u}(t,u) \bigr\vert \leq c_{7}b(t),\quad \forall \vert u \vert \leq M_{3},\forall t\in[0,T], $$ where \(c_{7}=\max_{s\in[0,M_{3}]}a(s)\). By Lemma 2.1, (2.19), \(e^{Q(t)}\leq d_{2}\), and \(b\in L^{1}([0,T];\mathbb {R}^{+})\), we obtain $$ \begin{aligned}[b] &\int_{0}^{T}e^{Q(t)} \bigl(F_{u}(t,u_{n}),u_{n}-u \bigr)\,dt \\ &\quad\leq d_{2} \int_{0}^{T}c_{7}b(t) \vert u_{n}-u \vert \,dt \\ &\quad\leq c_{7}d_{2} \Vert u_{n}-u \Vert _{\infty} \int_{0}^{T}b(t)\,dt\rightarrow0. \end{aligned} $$ Combining (2.18) and (2.20), we can see that (2.15) holds. Therefore, by (2.15), \(\varPhi {'}(u_{n})\rightarrow0\), \(u_{n}\rightharpoonup u\) in W, and the definition of \(\varPhi{'}\), we have $$ \begin{aligned}[b] 0&=\lim_{n\rightarrow\infty}\bigl\langle \varPhi{'}(u_{n}),u_{n}-u\bigr\rangle \\ &=\lim_{n\rightarrow\infty}(u_{n},u_{n}-u) -\lim _{n\rightarrow\infty} \int_{0}^{T}e^{Q(t)} \bigl(F_{u}(t,u_{n}),u_{n}-u \bigr)\,dt \\ &=\lim_{n\rightarrow\infty} \Vert u_{n} \Vert ^{2}- \Vert u \Vert ^{2}-0. \end{aligned} $$ Hence $$ \lim_{n\rightarrow\infty} \Vert u_{n} \Vert = \Vert u \Vert . 
$$ It follows from \(u_{n}\rightharpoonup u\) in W and (2.22) that $$\Vert u_{n}-u \Vert ^{2}= \Vert u_{n} \Vert ^{2}-2(u_{n},u)+ \Vert u \Vert ^{2}\rightarrow0, $$ so \(\{u_{n}\}\) has a convergent subsequence in W. Thus Φ satisfies the Cerami condition. □

Since \(\dim W^{0}\) and \(\dim W^{-}\) are finite, we can choose an orthonormal basis \(\{e_{m}\}_{m=1}^{k_{1}}\) of \(W^{0}\), an orthonormal basis \(\{e_{m}\}_{m=k_{1}+1}^{k_{2}}\) of \(W^{-}\), and an orthonormal basis \(\{e_{m}\}_{m=k_{2}+1}^{\infty}\) of \(W^{+}\), where \(1\leq k_{1}< k_{2}<\infty\). Then \(\{e_{m}\}_{m=1}^{\infty}\) is an orthonormal basis of W. Let \(X_{m}:=\mathbb{R}e_{m}\); then \(Y_{k}\) and \(Z_{k}\) can be defined as in (2.3).

Lemma 2.4. If \(Z_{k}=\overline{\bigoplus_{m\geq k}X_{m}}\), then $$\beta_{k}:=\sup_{u\in Z_{k}, \Vert u \Vert =1} \Vert u \Vert _{\infty}\rightarrow0\quad \textit{as }k\rightarrow\infty. $$

Proof. It is clear that \(0<\beta_{k+1}\leq\beta_{k}\), so \(\beta_{k}\rightarrow \beta\geq0\) as \(k\rightarrow\infty\). For every \(k\in\mathbb{N}\), there exists \(u_{k}\in Z_{k}\) such that \(\|u_{k}\|=1\) and \(\|u_{k}\|_{\infty}>\frac{1}{2}\beta_{k}\). By the definition of \(Z_{k}\), \(u_{k}\rightharpoonup0\) in W; then, by Lemma 2.1 in [3] and Rellich's embedding theorem (see [5]), \(u_{k}\rightarrow0\) in \(L^{\infty}([0,T];\mathbb{R}^{N})\). Therefore, we have β=0, that is, \(\beta_{k}\rightarrow0\). □

Proof of Theorem 1.1. For the Hilbert space W, define \(Y_{k}\) and \(Z_{k}\) as in (2.3). According to Lemma 2.3 and the evenness of \(F(t,\cdot )\), Φ satisfies the Cerami condition \((C)\) and \(\varPhi(-u)=\varPhi(u)\). It remains to verify conditions (i) and (ii) of Lemma 2.2. Verification of (i). Taking \(r_{k}=\beta_{k}^{-1}\) and using Lemma 2.4, one has that $$r_{k}\rightarrow+\infty\quad\mbox{as }k\rightarrow\infty. $$ Choose k large enough such that \(Z_{k}\subset W^{+}\) and $$r_{k}\geq\biggl(4\max_{s\in[0,1]}a(s) \int_{0}^{T}e^{Q(t)}b(t)\,dt \biggr)^{1/2}. 
$$ Now, for \(u\in Z_{k}\) with \(\|u\|=r_{k}\), by \((F_{1})\) we have that $$\begin{aligned} \varPhi(u)&=\frac{1}{2} \Vert u \Vert ^{2}- \int_{0}^{T}e^{Q(t)}F(t,u)\,dt \\ &\geq\frac{1}{2} \Vert u \Vert ^{2}-\max _{s\in[0, \Vert u \Vert _{\infty}]}a(s) \int _{0}^{T}e^{Q(t)}b(t)\,dt \\ &\geq\frac{1}{2} \Vert u \Vert ^{2}-\max _{s\in[0,\beta_{k} \Vert u \Vert ]}a(s) \int _{0}^{T}e^{Q(t)}b(t)\,dt \\ &\geq\frac{1}{2} \Vert u \Vert ^{2}-\max _{s\in[0,1]}a(s) \int_{0}^{T}e^{Q(t)}b(t)\, dt \\ &\geq\frac{ r_{k}^{2}}{4}, \end{aligned} $$ and therefore $$a_{k}=\inf_{u\in Z_{k},\|u\|=r_{k}}\varPhi(u)\geq\frac {r_{k}^{2}}{4} \rightarrow+\infty \quad\mbox{as }k\rightarrow\infty. $$ Verification of (ii). Similar to Lemma 2.3 in [2], for each finite-dimensional subspace \(Y_{k}\subset W\) (\(k\in\mathbb{N}\)), there exists a constant \(\epsilon_{k}>0\) such that $$ m\bigl(\varLambda_{u}^{k}\bigr)\geq\epsilon_{k}, \quad\forall u\in Y_{k}\setminus\{ 0\}, $$ where \(m(\cdot)\) denotes the Lebesgue measure in \(\mathbb{R}\) and \(\varLambda _{u}^{k}:=\{t\in[0,T]:|u|\geq\epsilon_{k}\|u\|\}\). Note that \(e^{Q(t)}\geq d_{1}\) for some constant \(d_{1}>0\) (\(\forall t\in[0,T]\)). By \((F_{2})\), for any \(k\in\mathbb{N}\), there exists a constant \(S_{k}>0\) such that $$ F(t,u)\geq\frac{ \vert u \vert ^{2}}{d_{1}\epsilon_{k}^{3}}, \quad\forall t\in [0,T],\forall \vert u \vert \geq S_{k}. 
$$ Hence, using (2.23), (2.24), \((F_{2})\), and \(e^{Q(t)}\geq d_{1}\), we have that, for any \(k\in\mathbb{N}\) and \(u\in Y_{k}\) with \(\|u\|\geq S_{k}/\epsilon_{k}\), $$\begin{aligned} \varPhi(u)&\leq\frac{1}{2} \bigl\Vert u^{+} \bigr\Vert ^{2}- \int_{0}^{T}e^{Q(t)}F(t,u)\,dt \\ &\leq\frac{1}{2} \Vert u \Vert ^{2}-\biggl( \int_{[0,T]\setminus\varLambda _{u}^{k}}e^{Q(t)}F(t,u)\,dt + \int_{\varLambda_{u}^{k}}e^{Q(t)}F(t,u)\,dt\biggr) \\ &\leq\frac{1}{2} \Vert u \Vert ^{2}- \int_{\varLambda_{u}^{k}}e^{Q(t)}F(t,u)\,dt \\ &\leq\frac{1}{2} \Vert u \Vert ^{2}-d_{1} \int_{\varLambda_{u}^{k}}\frac { \vert u \vert ^{2}}{d_{1}\epsilon_{k}^{3}}\,dt \\ &\leq\frac{1}{2} \Vert u \Vert ^{2}-\frac{d_{1}\epsilon_{k}^{2} \Vert u \Vert ^{2}}{d_{1}\epsilon_{k}^{3}}m \bigl(\varLambda_{u}^{k}\bigr) \\ &\leq\frac{1}{2} \Vert u \Vert ^{2}- \Vert u \Vert ^{2} \\ &=-\frac{1}{2} \Vert u \Vert ^{2}. \end{aligned}$$ Now, for any \(k\in\mathbb{N}\), if we choose $$\Vert u \Vert =\rho_{k}>\max\{r_{k},S_{k}/ \epsilon_{k}\}, $$ then (2.25) implies that $$b_{k}:=\max_{u\in Y_{k},\|u\|=\rho_{k}}\varPhi(u)\leq-\frac{1}{2}\rho _{k}^{2}\leq0,\quad\forall k\in\mathbb{N}. $$ Consequently, by Lemma 2.2, system (1.1) has infinitely many nontrivial T-periodic solutions. □

We obtain infinitely many nontrivial periodic solutions for a class of damped vibration systems with superquadratic terms at infinity. By using some weaker conditions, our results extend and improve some existing results in the literature.

Bartsch, T.: Infinitely many solutions of a symmetric Dirichlet problem. Nonlinear Anal. TMA 20, 1205–1216 (1993)
Chen, G.: Infinitely many nontrivial periodic solutions for damped vibration problems with asymptotically linear terms. Appl. Math. Comput. 245, 438–446 (2014)
Chen, G.: Periodic solutions of superquadratic damped vibration problems. Appl. Math. Comput. 270, 794–801 (2015)
Li, X., Wu, X., Wu, K.: On a class of damped vibration problems with super-quadratic potentials. Nonlinear Anal. 
72, 135–142 (2010)
Willem, M.: Minimax Theorems. Birkhäuser, Boston (1996)
Wu, X., Chen, S., Teng, K.: On variational methods for a class of damped vibration problems. Nonlinear Anal. 68, 1432–1441 (2008)
Zou, W.: Variant fountain theorems and their applications. Manuscr. Math. 104, 343–358 (2001)

Research supported by the National Natural Science Foundation of China (No. 11771182) and the Natural Science Foundation of Shandong Province (No. ZR2017JL005).

School of Mathematical Sciences, University of Jinan, Jinan, China
Ping Li & Guanwei Chen

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript. Correspondence to Guanwei Chen.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Li, P., Chen, G.: Multiple periodic solutions for damped vibration systems with superquadratic terms at infinity. Bound. Value Probl. 2019, 191 (2019). doi:10.1186/s13661-019-01306-2

Keywords: Damped vibration systems; Periodic solutions; Superquadratic
Additively manufactured metastructure design for broadband radar absorption

M. B. Abdullahi & M. H. Ali

Recent advances in material science and electronics have led to the rapid development of communication devices and radar detection techniques, resulting in an ever-increasing demand for improved stealth performance of air vehicles during scouting missions. Absorber design employing the metastructure concept has recently become a popular approach to improving radar stealth performance. Metastructures permit the realization of desired absorption characteristics by careful design of geometrical structures and material compositions. In this study, a metastructure designed based on a graphite SLS composite for radar absorption has been demonstrated. The unit cell of the proposed structure is simulated in COMSOL Multiphysics to determine the frequency-dependent absorption characteristic of the structure. It is fabricated by using a low-cost selective laser sintering technique of additive manufacturing technology. When measured, the prototype shows an effective absorption bandwidth of 1.04 GHz, in reasonable agreement with the simulated response of 2.08 GHz. The optimized structure exhibits ≤ − 10 dB reflectivity within a broad frequency range extending from 7.60 GHz to 18.00 GHz under normal incidence in both TE and TM polarizations. Furthermore, the absorption performance under different polarizations and incident angles has been investigated. Results indicate that the absorber displays polarization indifference and exhibits a wide angle-of-incidence tolerance of up to 45° in TE polarization and 30° in TM polarization. In this paper, the feasibility of using graphite SLS material to design and 3D print a metastructure for radar absorption has been established, as confirmed by the simulation and measurement results. 
The advantages of low cost, ultra-broad operating band, wide angle-of-incidence tolerance, and polarization insensitivity qualify the proposed absorber for stealth and electromagnetic shielding applications. Recent advances in material science and electronics have led to the rapid development of communication devices and radar detection techniques. This has compounded electromagnetic (EM) interference and radiation problems. Also, there is a pressing demand for improved stealth performance of air vehicles during scouting. In an attempt to find a solution to these problems, there has been extensive research interest in EM absorbing materials in the last few years. Current research is aimed at developing EM absorbers with the properties of thinner thickness, lower density, wider angle-of-incidence tolerance, better polarization insensitivity, and stronger and broader bandwidth absorption [1]. Though improvements to these properties can hardly be achieved simultaneously, there is a need to develop absorbers with an optimal property mix that is peculiar to the area of application, such as radar stealth [2,3,4], electromagnetic compatibility [5], energy harvesting [6], and so on. Recently, absorber design employing the metastructure concept has become a popular approach [7,8,9] for improving radar stealth performance, as it permits realization of desired absorption characteristics by carefully manipulating the geometry of the structure and/or the material composition. Metastructures (MS) are artificially engineered subwavelength units that can effectively manipulate the propagation of electromagnetic waves, resulting in unique electromagnetic features that are unattainable in conventional materials, such as near-zero or negative refractive index. Additive manufacturing technology, also known as 3D printing, is a promising processing technology for fabricating structures and devices with different geometries using computer-aided design [10]. 
It offers a high-efficiency, convenient, and low-cost fabrication process that involves printing successive layers of a given material on top of each other. Lately, 3D printing systems have been utilized to manufacture metamaterial absorbers (MMAs) of different structural designs and material bases [11,12,13,14,15,16,17]. However, the main hindrance to the fabrication of broadband MMAs using 3D printing technology is the limited range of materials compatible with 3D printers [18, 19]. Most of the available 3D printing materials are fully insulating or have low conductivity values [11], which limits their usage in 3D printing of EM absorbing metamaterials. To enhance the dielectric performance of the common 3D printing polymers, organic materials such as carbon, carbon black, carbon nanotubes, and graphite are loaded into the polymers, as hitherto demonstrated in non-3D-printing polymeric composites [20, 21]. Nowadays, commercially available conductive filaments like conductive PLA, conductive ABS, and graphene are being explored for the development of EM absorbers and devices using 3D printing [11,12,13,14,15,16,17,18,19, 22]. However, none of the existing reports reviewed has considered graphite SLS 3D printing material. Therefore, a cross-block patterned broadband metastructure absorber for radar absorption at X and Ku bands, manufactured using low-cost 3D printing technology, is presented in this work. The graphite SLS material is introduced in the design of the absorber for the first time. A preliminary designed structure is 3D printed via the selective laser sintering (SLS) technique, and its absorption performance is measured afterward. Good agreement between measured and simulated results of the preliminary absorber is observed. 
By optimizing the dimensions of the cross-block patterned preliminary resonator unit cell, an ultra-broad absorption band with reflectivity below −10 dB (absorptivity above 90%) is realized over the 7.6–18 GHz radar spectrum, as indicated by the simulation results. Moreover, the angular tolerance of the absorber in the transverse electric (TE) and transverse magnetic (TM) modes is investigated, as is its polarization dependence. The results indicate wide-angle-of-incidence tolerance for both polarization modes and polarization insensitivity at normal incidence. These promising features qualify the present metastructure absorber for various applications at radar frequencies, such as low-cost electromagnetic interference shielding, electromagnetic compatibility, and stealth technology. The electromagnetic absorption behaviour of a structure can generally be determined from Eq. (1), where the frequency-dependent quantities A(ω), R(ω), and T(ω) represent the absorption, reflectance, and transmittance, respectively. $$ A\left(\omega \right)=1-R\left(\omega \right)-T\left(\omega \right)=1-{\left|{S}_{11}\right|}^2-{\left|{S}_{21}\right|}^2 $$ where R(ω) = |S11|2 and T(ω) = |S21|2. Owing to the ground plane in the designed structure, T(ω) can be neglected, so Eq. (1) reduces to Eq. (2). $$ A\left(\omega \right)=1-R\left(\omega \right)=1-{\left|{S}_{11}\right|}^2 $$ Using the scattering parameter S11 from the simulations, the absorptivity of the RAMS is calculated with Eq. (2). The wave equation of the propagating electric field E, based on the Maxwell equations, is given by Eq. (3).
$$ \nabla \times {\mu}_r^{-1}\left(\nabla \times \boldsymbol{E}\right)-{k}_0^2\left({\varepsilon}_r-\frac{j\sigma}{\omega {\varepsilon}_0}\right)\boldsymbol{E}=0 $$ where μr is the relative permeability, k0 the free-space wavenumber, εr the relative permittivity, σ the conductivity, ω the angular frequency, and ε0 the permittivity of free space. In COMSOL, the S-parameters describing the reflection and transmission of the wave are defined as follows: $$ {S}_{11}=\frac{\int_{\partial\Omega}\left(\boldsymbol{E}-{\boldsymbol{E}}_1\right)\cdot {\boldsymbol{E}}_1}{\int_{\partial\Omega}{\boldsymbol{E}}_1\cdot {\boldsymbol{E}}_1} $$ $$ {S}_{21}=\frac{\int_{\partial\Omega}\boldsymbol{E}\cdot {\boldsymbol{E}}_2}{\int_{\partial\Omega}{\boldsymbol{E}}_2\cdot {\boldsymbol{E}}_2} $$ The S11 parameter evaluated from Eq. (4) is then substituted into Eq. (2) within COMSOL to calculate the magnitude of the absorptivity of the incident wave. Based on transmission-line theory, the impedance encountered by EM waves at layer i is given by [23]: $$ {Z}_{\mathrm{in},i}={Z}_i\,\frac{{Z}_{\mathrm{in},i+1}+{Z}_i\tanh \left({\gamma}_i{d}_i\right)}{{Z}_i+{Z}_{\mathrm{in},i+1}\tanh \left({\gamma}_i{d}_i\right)} $$ where \( {Z}_i=\sqrt{{\mu}_{r,i}/{\varepsilon}_{r,i}} \) and \( {\gamma}_i=j2\pi f\sqrt{{\varepsilon}_{r,i}{\mu}_{r,i}}/c \), with c the speed of light in vacuum. The magnitude of EM wave absorption, represented by the reflectivity of an absorber backed by a perfect electrical conductor (PEC) ground plate, can be written as Eq. (7): $$ R=20\log \left|\frac{{Z}_{\mathrm{in},1}-1}{{Z}_{\mathrm{in},1}+1}\right| $$ where R is the reflectivity and \( {Z}_{\mathrm{in},1} \) is the normalized input impedance at the surface layer. The reflectivity of the three-layered RAMS can be calculated from Eqs. (6) and (7) using the FEM formulation. Design of the structure The proposed absorber is a three-layer structure consisting of a cross-shaped surface structure, a square-block middle layer, and a conventional single-slab bottom layer, as revealed in Fig. 1.
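As a cross-check of the transmission-line relations in Eqs. (6) and (7), the layer recursion can be sketched directly. This is a minimal illustration (the function names and sample material values below are ours, not taken from the paper): starting from the short-circuit impedance imposed by the PEC ground plane, the recursion works upward to the surface layer.

```python
import numpy as np

def input_impedance(eps_r, mu_r, d, f, c=2.998e8):
    """Normalized input impedance of a PEC-backed layered absorber,
    via the recursion of Eq. (6). Layer 0 is the surface layer.
    eps_r, mu_r: complex relative permittivity/permeability per layer;
    d: layer thicknesses in metres; f: frequency in Hz."""
    z_in = 0.0 + 0.0j  # PEC ground plane acts as a short circuit
    # iterate from the layer touching the ground plane up to the surface
    for er, mr, di in reversed(list(zip(eps_r, mu_r, d))):
        zi = np.sqrt(mr / er)                           # layer impedance Z_i
        gi = 1j * 2 * np.pi * f * np.sqrt(er * mr) / c  # propagation constant gamma_i
        t = np.tanh(gi * di)
        z_in = zi * (z_in + zi * t) / (zi + z_in * t)
    return z_in

def reflectivity_db(z_in):
    """Eq. (7): reflectivity (dB) relative to free space."""
    return 20 * np.log10(abs((z_in - 1) / (z_in + 1)))
```

For a single PEC-backed layer the recursion collapses to the familiar short-circuited-line result, z_in = Z1 tanh(γ1 d1); any lossy passive stack gives |Γ| < 1 and hence a negative reflectivity in dB.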
The absorber layers are made of graphite SLS, an exclusive material from Graphite Additive Manufacturing, a UK-based company. It is produced by mixing Nylon 12 (PA) powder with graphite powder to enhance the dielectric performance of the base polymer. Since graphite SLS takes on the dielectric properties of graphite, the measured complex permittivity of graphite [24], shown in Fig. 2, is used in this work, together with a graphite loss tangent of 0.52. A copper film of 0.04 mm thickness is used as a ground plane to prevent transmission of the waves beyond the structure. The unit cell of the proposed radar-absorbing metastructure (RAMS), shown schematically in Fig. 1, is simulated with the frequency-domain solver of the COMSOL Multiphysics environment, based on the finite element method (FEM), to investigate its absorption characteristics. Designed unit cell of the radar-absorbing metamaterial with three-layer cross-block structure. a Model of periodic unit. b Model size. l1 = 4.0 mm, l2 = 2.6 mm, w = 1.8 mm, e = 0.6 mm, d1 = 0.8 mm, d2 = 0.8 mm, d3 = 2.6 mm Complex permittivity of graphite SLS material Simulation approach FEM is used in this study to investigate the EM absorbing property of the proposed design. In FEM, the simulation domain is discretized into sub-elements called mesh elements. Tetrahedral mesh elements are selected, with a maximum element size of one-tenth of the minimum wavelength (\( {\lambda}_{\mathrm{min}}/10 \)) of the input wave, to obtain reasonable accuracy within reasonable computational resources in COMSOL. The complete mesh consists of 16688 domain elements, 2294 boundary elements, and 276 edge elements. The boundaries of the simulation domain along the x and y axes were assigned perfect magnetic conductor (PMC) and perfect electric conductor (PEC) boundary conditions, as shown in Fig. 1b.
These conditions set the tangential magnetic and electric fields to zero, respectively [1], mimicking an infinite periodic structure arranged in the x and y directions. A periodic port boundary in the radio frequency (RF) module of COMSOL was used to generate plane-polarized electromagnetic wave excitation propagating along the z-axis. With this boundary condition, the magnitude of the reflection loss can be assessed. The impedance boundary condition (IBC), which treats any material behind the boundary as infinitely large, is chosen for the ground plane. It is used at boundaries where the field is known to penetrate only a short distance beyond the boundary; the IBC approximates this penetration and avoids the need to include another domain in the model. The IBC is given by Eq. (8): $$ \sqrt{\frac{\mu_0{\mu}_r}{\varepsilon_0{\varepsilon}_r-\frac{j\sigma}{\omega }}}\,\mathbf{n}\times \mathbf{H}+\mathbf{E}-\left(\mathbf{n}\cdot \mathbf{E}\right)\mathbf{n}=\left(\mathbf{n}\cdot {\mathbf{E}}_{\mathrm{s}}\right)\mathbf{n}-{\mathbf{E}}_{\mathrm{s}} $$ Figure 2 shows the measured dielectric properties of the graphite SLS material [24]. Together with the recorded loss tangent of 0.52, these material parameters are used in the simulation to evaluate the absorption behaviour of the proposed structure. Fabrication and measurement The fabrication of the proposed RAMS model was carried out by the UK-based Graphite Additive Manufacturing company. The RAMS was built using the selective laser sintering (SLS) process, and a partially enlarged picture of the 3D printed radar-absorbing metastructure is presented in Fig. 3. Additively manufactured radar-absorbing metastructure Selective laser sintering (SLS) is an additive manufacturing process that builds complex 3D parts by solidifying successive layers of powder material deposited on top of each other [25].
A wiper deposits a thin layer of powder particles (typically 0.1–0.3 mm thick) on a fabrication bed, and a laser scans over the particles in the shape of the desired object, as computed from a CAD model, causing the particles to melt and fuse. The final part is built by repeatedly depositing and sintering successive powder layers. A schematic overview of the SLS process is depicted in Fig. 4. No support structures are needed, which allows rapid production of parts with complicated geometries in a one-step process. Suitable materials for SLS include polymers, ceramics, and metals. Schematic of a typical SLS setup [31] The performance of the fabricated absorber structure was measured at the ABS Technics [26] laboratory to verify the COMSOL Multiphysics design. The NRL arch measurement procedure was adopted to obtain the absorber reflection loss (RL). RL was measured from 2.0 to 18 GHz at a near-normal incidence of 10° (each antenna oriented 10° from the normal to the reflective metal plate) for a TE-polarized incident wave. The schematic of the measurement setup is shown in Fig. 5, and the detailed measurement method, per IEEE Std 1128-1998, is given in [27]. Schematic of NRL arch method RL measurement setup Geometric Dimension Optimization The measured and simulated results of the 3D printed RAMS demonstrate below −10 dB reflectivity only in the 10–12 GHz range. To obtain broad absorption satisfying the below −10 dB reflectivity criterion across the X and Ku bands of the radar spectrum, the unit cell geometric dimensions are optimized using COMSOL Multiphysics' built-in Nelder-Mead method to achieve the desired absorption property. Additively manufactured RAMS The measured reflectivity of the 3D printed absorber at an incident angle of 10° for a TE-polarized wave is shown in Fig. 6, and the simulated reflectivity is plotted alongside the measured reflectivity in Fig. 7 for comparison.
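The dimension search needs a scalar figure of merit; a natural choice is the total width of the band where reflectivity stays at or below −10 dB, which the Nelder-Mead optimizer then maximizes over the unit-cell dimensions (each candidate evaluation requiring a full FEM solve). A sketch of such a scoring function on a sampled reflectivity curve, under our own naming rather than COMSOL's:

```python
import numpy as np

def effective_bandwidth(freq_ghz, refl_db, threshold=-10.0):
    """Total width (GHz) of the band(s) where reflectivity <= threshold,
    i.e. the 'effective absorption bandwidth' criterion used in the paper.
    freq_ghz must be sorted ascending; refl_db sampled at those points."""
    below = refl_db <= threshold
    df = np.diff(freq_ghz)
    # count a frequency step toward the bandwidth when both endpoints qualify
    return float(np.sum(df[below[:-1] & below[1:]]))
```

An optimizer (COMSOL's built-in Nelder-Mead, or equivalently `scipy.optimize.minimize(..., method="Nelder-Mead")`) would then minimize the negative of this score as a function of the geometric parameters l1, l2, w, e, d1, d2, d3.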
Measured reflectivity plot at 10° incidence of a TE-polarized wave Measured and simulated reflectivity plots at 10° incidence of a TE-polarized wave Geometric dimension optimized RAMS The optimized dimensions that yield the broadband absorption characteristics of the designed absorber are l1 = 15.0 mm, l2 = 3.5 mm, w = 4.2 mm, e = 1.6 mm, d1 = 1.2 mm, d2 = 0.6 mm, d3 = 3.2 mm. The simulated reflectivity of the optimized structure at normal incidence for both TE and TM polarizations is presented in Fig. 8. Simulated reflectivity plots of the optimized structure at normal incidence for TE and TM polarizations The absorber's angle-of-incidence tolerance is evaluated to assess its suitability for practical applications, which demand wide-angle performance. Simulations were carried out for the TE and TM modes at angles of incidence from 0° to 45° in steps of 15°. The reflectivity and absorptivity responses of the optimized design under different angles of incidence are displayed in Fig. 9 for the TE mode and in Fig. 10 for the TM mode. Another useful characteristic of an EM absorber is its ability to respond well at different polarization angles; the metastructure's response to different polarization angles is therefore investigated in this paper and shown in Fig. 11. Simulated reflectivity (a) and absorptivity (b) of the optimized structure at normal and oblique incidence for a TE-polarized wave Simulated reflectivity (a) and absorptivity (b) of the optimized structure at normal and oblique incidence for a TM-polarized wave Simulated reflectivity plots at different polarization angles at normal incidence for the optimized structure When the impedance of an absorber matches that of air, reflection is minimized, resulting in near-unity absorbance.
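The degree of impedance matching can be checked numerically from the simulated reflection coefficient. For a ground-backed structure (S21 ≈ 0) the standard retrieval reduces to the expression below; this is a sketch under the assumption that the principal square-root branch is the correct one (chosen so that Re(Z) ≥ 0 for a passive structure):

```python
import numpy as np

def effective_impedance(s11):
    """Normalized effective impedance Z_eff = sqrt((1+S11)^2 / (1-S11)^2)
    retrieved from the complex reflection coefficient of a ground-backed
    absorber. Z_eff -> 1 (the impedance of air) means perfect matching."""
    s11 = np.asarray(s11, dtype=complex)
    return np.sqrt((1 + s11) ** 2 / (1 - s11) ** 2)

# no reflection at all corresponds to unity normalized impedance
print(effective_impedance(0.0))  # → (1+0j)
```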
For comparison, the real and imaginary parts of the impedance of the metastructure and single-layer flat designs were calculated from the simulated complex S-parameters using \( {Z}_{\mathrm{eff}}\left(\omega \right)=\sqrt{\frac{{\left(1+{S}_{11}\right)}^2}{{\left(1-{S}_{11}\right)}^2}} \) and plotted in Figs. 12 and 13. To explore the physical absorption mechanism of the proposed RAMS, the current density, power loss density, and electric field distributions were simulated at the two absorption peaks of 9.0 and 15.4 GHz at an incident angle of 0°; the results are shown in Fig. 14. Calculated real effective impedance plots of the metastructure absorber and flat absorber Calculated imaginary effective impedance plots of the metastructure absorber and flat absorber Simulated current density [a 9 GHz, b 15.4 GHz], power loss density distribution [c 9 GHz, d 15.4 GHz], and electric field distribution [e 9 GHz, f 15.4 GHz] of the radar-absorbing metastructure For the additively manufactured RAMS, there is favourable agreement between the simulation and measurement curves. The effective absorption bandwidth, often defined as the frequency range in which reflectivity is ≤ −10 dB, is 1.04 GHz for the measured curve and 2.08 GHz for the simulated one. The deviations between simulated and measured results can be attributed to imperfections arising from the fabrication and measurement processes. The geometric dimension optimization has greatly improved the effective absorption bandwidth of the RAMS, covering almost the entire frequency range of interest (X and Ku bands), as shown in Fig. 8. The optimized absorber thickness is only 5.0 mm, against the 4.2 mm of the printed absorber. The reflection coefficient curves for the TE and TM polarizations are observed to overlap at normal incidence. Moreover, it can be seen from Fig.
9a that the effective bandwidth is unchanged as the angle of incidence increases to 15° and decreases slightly as the incident angle increases to 45°. Absorption levels diminish as the incident angle increases, as depicted in Fig. 9b. For TM polarization, the effective bandwidth is similarly unchanged as the angle of incidence increases to 15°, while it decreases as the incident angle is varied up to 45°, according to the results in Fig. 10a. When a TM wave interacts with the RAMS, absorption levels are greater than 90% at normal and 15° incidence, greater than 80% at 30°, and greater than 70% at 45°, as depicted in Fig. 10b. It is worth mentioning that, although the absorption level generally decreases with increasing angle of incidence, it remains greater than 70% in the X and Ku bands for all the incident angles considered, for both TE and TM waves. Simulation results for polarization angles varying from 0° to 60° indicate polarization-independent behaviour, as shown in Fig. 11. The curves overlap one another, which is attributed to the fourfold symmetry of the structural design of the absorber unit cell; structures exhibiting fourfold symmetry are known to demonstrate polarization-insensitive absorption behaviour [28,29,30]. In conclusion, the designed RAMS exhibits wide-angle-of-incidence and polarization-insensitive behaviour. It can be seen from Figs. 12 and 13 that the effective impedance of the designed metastructure is much closer to matching the impedance of air than that of the single-layer flat design, although both are designed from the same material and thickness. The real part of the metastructure's effective impedance (Fig. 12) is close to unity over most of the absorber's working frequencies. From Fig. 13, the imaginary part of the effective impedance is observed to be near zero over most of the absorber's working frequency bands.
This illustrates the effectiveness of the novel metastructure design for broadband radar spectrum absorption. As depicted in Fig. 14a, the current density at the 9.0 GHz absorption peak is concentrated at the middle layer and the topmost part of the surface layer. At 15.4 GHz, the current density distribution is dominant at the cross-shaped surface layer and the surface of the bottom layer (Fig. 14b). Remarkably, the power loss distribution coincides well with the current density distribution, as revealed in Fig. 14c, d, indicating that the current density plays a vital role in radar absorption, which is mainly caused by ohmic losses. The electric field distribution shown in Fig. 14e, f is similar to the current density and power loss distributions. Table 1 compares the absorber designed in this paper with other absorbers fabricated by 3D printing. Compared with the other absorbers in the literature listed in Table 1, the RAMS reported in this paper has a wider effective bandwidth of 10.40 GHz. It also has a favourable thickness of 5.0 mm, better than the absorber thicknesses obtained in [18, 19]. Moreover, as can be observed from Table 1, graphite SLS is introduced here for the first time. Table 1 Comparison of −10 dB absorption bandwidth range between the radar-absorbing metastructure of this work and previous works Finally, the present metastructure design can be described as a low-cost, polarization-independent radar absorber with wide-angle-of-incidence characteristics for X- and Ku-band applications. In this paper, we demonstrated the feasibility of using graphite SLS material to design and fabricate a metastructure for radar absorption by 3D printing, as confirmed by the simulation and measurement results. The simulation results for the optimized RAMS show that the structure can realize below −10 dB reflectivity in the 7.6–18.0 GHz frequency band at normal incidence for both TE and TM waves.
This property is maintained at oblique incidence up to 45° for the TE wave and 15° for the TM wave; the reflectivity remains below −5 dB (more than 70% absorptivity) at 45° incidence for the TM wave. The metastructure absorber contributes to impedance matching with air, which ensures minimized reflection and subsequent broadband absorption. Moreover, the proposed RAMS demonstrates polarization insensitivity and cost-effectiveness, as it is made from commercially available 3D printing material and technology, making it a practically viable candidate for stealth and electromagnetic interference shielding applications.
EM: Electromagnetic
3D: Three-dimensional
MMAs: Metamaterial absorbers
SLS: Selective laser sintering
TE: Transverse electric
TM: Transverse magnetic
RAMS: Radar-absorbing metastructure
FEM: Finite element method
PEC: Perfect electric conductor
PMC: Perfect magnetic conductor
IBC: Impedance boundary condition
Luo H, Chen F, Wang X, Dai W, Xiong Y, Yang J, Gong R (2019) A novel two-layer honeycomb sandwich structure absorber with high-performance microwave absorption. Compos Part A 119:1–7. https://doi.org/10.1016/j.compositesa.2019.01.015 Zhang L et al (2014) A broadband radar absorber based on perforated magnetic polymer composites embedded with FSS. IEEE Trans Magn 50(5):1–5, Art. 4004305 Abdalla MA, Hu Z (2012) On the study of development of X band metamaterial radar absorber. Adv Electromagn 1(3). https://doi.org/10.7716/aem.v1i3.25 Xu H, Bie S, Xu Y, Yuan W, Chen Q, Jiang J (2016) Broad bandwidth of thin composite radar absorbing structures embedded with frequency selective surfaces. Compos Part A 80:111–117. https://doi.org/10.1016/j.compositesa.2015.10.019 Chung DDL (2001) Electromagnetic interference shielding effectiveness of carbon materials. Carbon N Y 39(2):279–285. https://doi.org/10.1016/S0008-6223(00)00184-6 Hossain MJ, Faruque MRI, Islam MT (2018) Perfect metamaterial absorber with high fractional bandwidth for solar energy harvesting.
PLoS One 13(11):1–15 Fan Q, Yang X, Lei H, Liu Y, Huang Y, Chen M (2020) Gradient nanocomposite with metastructure design for broadband radar absorption. Compos Part A Appl Sci Manuf 129:105698. https://doi.org/10.1016/j.compositesa.2019.105698 Huang Y, Yuan X, Chen M, Song WL, Chen J, Fan Q, Tang L, Fang D (2018) Ultrathin flexible carbon fiber reinforced hierarchical metastructure for broadband microwave absorption with nano lossy composite and multiscale optimization. ACS Appl Mater Interfaces 10(51):44731–44740. https://doi.org/10.1021/acsami.8b16938 Xie S, Zhu L, Zhang Y, Ji Z, Wang J (2020) Three-dimensional periodic structured absorber for broadband electromagnetic radiation absorption. Electron Mater Lett 16(4):340–346. https://doi.org/10.1007/s13391-020-00219-y Nadgorny M, Ameli A (2018) Functional polymers and nanocomposites for 3d printing of smart structures and devices. ACS Appl Mater Interfaces 10(21):17489–17507. https://doi.org/10.1021/acsami.8b01786 Tasolamprou AC, Mentzaki D, Viskadourakis Z, Economou EN, Kafesaki M, Kenanakis G (2020) Flexible 3D printed conductive metamaterial units for electromagnetic applications in microwaves. Materials (Basel) 13(17):1–14. https://doi.org/10.3390/ma13173879 Jiang W, Yan L, Ma H, Fan Y, Wang J, Feng M, Qu S (2018) Electromagnetic wave absorption and compressive behavior of a three-dimensional metamaterial absorber based on 3D printed honeycomb. Sci Rep 8(1):1–7. https://doi.org/10.1038/s41598-018-23286-6 Lai W, Wang Y, He J (2020) Electromagnetic wave absorption properties of structural conductive ABS fabricated by fused deposition modeling. Polymers (Basel) 12(06):13. https://doi.org/10.3390/polym12061217 Lu Y, Chi B, Liu D, Gao S, Gao P, Huang Y, Yang J, Yin Z, Deng G (2018) Wideband metamaterial absorbers based on conductive plastic with additive manufacturing technology. ACS Omega 3(9):11144–11150. 
https://doi.org/10.1021/acsomega.8b01223 Kronberger R, Soboll P (2016) 3D-printed frequency selective surfaces for microwave absorbers. In: Proceedings of ISAP2016, Okinawa, Japan, pp 178–179. https://doi.org/10.1109/MWSYM.2015.7166769 Zhou D, Huang X, Du Z (2017) Analysis and design of multi-layered broadband radar absorbing metamaterial using 3D printing technology-based method. IEEE Antennas Wirel Propag 16:133–136. https://doi.org/10.1109/LAWP.2016.2560904 Wu H et al (2020) A study on the fused deposition modeling process of graphene/nano-Fe3O4 composite absorber and its absorbing properties of electromagnetic microwave. Appl Sci 10(1508):1–12 Ren J, Yin JY (2018) 3D-printed low-cost dielectric-resonator-based ultra-broadband microwave absorber using carbon-loaded acrylonitrile butadiene styrene polymer. Materials (Basel) 11:1249. https://doi.org/10.3390/ma11071249 Kjelgard KG, Wisland DT, Lande TS (2018) 3D printed wideband microwave absorbers using composite graphite/PLA filament. In: Proceedings of 48th European Microwave Conference, pp 859–862 Khurram AA, Rakha SA, Ali N, Zhou P, Munir A (2014) Effect of low-content carbon nanotubes on the dielectric and microwave absorption properties of graphite/polymer nanocomposites. J Appl Polym Sci 40891(20):1–7. https://doi.org/10.1002/app.40891 Ansari A, Akhta MJ (2017) Co/graphite based lightweight microwave absorber for electromagnetic shielding and stealth applications. Mater Res Express 4(1). https://doi.org/10.1088/2053-1591/aa570c Arbaoui Y, Laur V, Maalouf A, Queffelec P, Passerieux D, Delias A, Blondy P (2016) Full 3-D printed microwave termination: a simple and low-cost solution. IEEE Trans Microw Theory Tech 64(1):271–278. https://doi.org/10.1109/TMTT.2015.2504477 Xiong Y-J et al (2018) Structural broadband absorbing metamaterial based on three-dimensional printing technology. Acta Phys Sin 67(8):084202-1–084202-8.
https://doi.org/10.7498/aps.67.20172262 Hotta M, Hayashi M, Lanagan MT, Agrawal DK (2011) Complex permittivity of graphite, carbon black and coal powders in the ranges of X-band frequencies (8.2 to 12.4 GHz) and between 1 and 10 GHz. ISIJ Int 51(11):1766–1772 Kruth JP, Wang X, Laoui T, Froyen L (2003) Lasers and materials in selective laser sintering. Assem Autom 23(4):357–371. https://doi.org/10.1108/01445150310698652 Kenis JL (2020) ABS Technics. [Online]. Available: www.abstechnics.com IEEE (1998) IEEE Recommended Practice for Radio-Frequency (RF) Absorber Evaluation in the Range of 30 MHz to 5 GHz; ANSI/IEEE Std. 1128-1998. IEEE, New York Fan S, Song Y (2018) Bandwidth-enhanced polarization-insensitive metamaterial absorber based on fractal structures. J Appl Phys 123(8):085110-1–085110-8. https://doi.org/10.1063/1.5004629 Beeharry T, Yahiaoui R, Selemani K, Ouslimani HH (2018) A co-polarization broadband radar absorber for RCS reduction. Materials (Basel) 11:1–11. https://doi.org/10.3390/ma11091668 Abdulkarim YI, Deng L, Luo H, Huang S, Sabah C, Karaaslan M (2020) Electromagnetic simulations of polarization-insensitive and wide-angle multiband metamaterial absorber by incorporating double asterisk resonator. Bull Mater Sci 43(116):9. https://doi.org/10.1007/s12034-020-02098-3 Ganeriwala RK (2015) Multiphysics modelling of selective laser sintering/melting. PhD thesis, University of California The authors would like to thank the General Manager of ABS Technics, Johan L. Kennis, for offering to carry out the measurement of the absorption performance of the 3D printed absorber at their facility in Mol, Belgium, at no cost. No funding was received. Department of Physics, Bayero University, Kano, Nigeria M. B. Abdullahi & M. H. Ali Department of Physics, Usmanu Danfodiyo University Sokoto, Sokoto, Nigeria M. B. Abdullahi M. H. Ali M.H. Ali conceived the idea, gave directives, suggested the fabrication method, and corrected the manuscript. M.B.
Abdullahi carried out the design and the simulation and drafted and organized the manuscript. The authors have read and approved the final manuscript. Correspondence to M. B. Abdullahi. Abdullahi, M.B., Ali, M.H. Additively manufactured metastructure design for broadband radar absorption. Beni-Suef Univ J Basic Appl Sci 10, 24 (2021). https://doi.org/10.1186/s43088-021-00114-x Keywords: Graphite SLS; Metastructure; Wide-angle
Popular Science Monthly, Volume 44, December 1893

THE FRUIT INDUSTRY IN CALIFORNIA.

By CHARLES HOWARD SHINN.

It seems to me that an account of the present condition of the fruit industry in California would be of economic value, provided that it were entirely free from the advertising element. By the "advertising element" I mean that very natural and almost irrepressible desire of a resident of any portion of this magnificent country to attract others to his particular district. There ought to be some way of presenting statistical and other facts relating to one department of horticulture in a given American State, in exactly the same spirit that an expert upon cotton manufacture would arrange the statistics of the mills of Massachusetts. A considerable area of California lands is planted to orchards and vineyards. Some of these, as with other human enterprises, are profitable, and some are unprofitable; but all are producing fruit, most of which finds its way in some shape to markets outside the State. The range of these fruit products is very great, and many American producers, as well as those of Europe and other parts of the world, feel the competition of this food supply. An immense number of consumers, as well as this army of rival producers, must wish to obtain statistics of the California industry under consideration. The following article is an attempt to present the facts of a great fruit-growing industry so plainly that all its departments can be understood by the reader. First, let us examine the best available statistics of the area planted, and the kinds of fruit used.
These are much more complete now than when the officers of the last national census attempted to collect them from county officials, because competent agents of the State Board of Horticulture, themselves fruit-growers, spent the greater part of last year in making a "house-to-house canvass." They asked every man who owned an orchard to write down the number of acres he had in fruit trees, and classified the result, in many cases, by actual inspection of the orchard. The mass of details is of course too ponderous to be printed here, but the results can be analyzed so as to justify presentation in a series of tables. There are several ways of possible classification, but I can think of nothing better than to take the principal fruit-growing counties, give their areas and the acreage now planted, arranging the fruits reported upon in four divisions—the citrus and semitropic species, the nut-bearing trees, the ordinary deciduous fruits, and lastly the vines and small fruits.

Cluster of Uvaria. Olives. (One half natural diameter.)

The principal citrus and semitropic fruits grown in California are the fig, olive, lemon, and orange. The citron of commerce flourishes, but has not been much planted, and the lime does well in some districts. The pomegranate is in many gardens, but few commercial orchards exist, and the same is true of the loquat and guava. Here and there in sheltered, frostless places are the beginnings of some small plantations of pineapples, bananas, and date palms, and a few specimens of cherimoya, granadilla, alligator pear, jujube, melon shrub, chayota, the best species of opuntia, and other tropic and semitropic fruits that are being tested on a very small scale. Easily first, and type of the whole class, is the orange. It is commercially grown to the extent of a hundred acres or more in fifteen counties of California; eight counties contain over five hundred acres apiece.
The acreage of the new county of Riverside, created by the last Legislature, is necessarily included in San Bernardino, and that of Kings County in Tulare.

Table I.—Acreage of Semitropic Fruits.

County              Oranges   Olives   Lemons    Figs
Butte                 2,664      755       50     259
Los Angeles          12,297      788    2,000     973
Orange                5,412      270      681      82
San Bernardino       38,237    1,200    2,003     362
San Diego             1,500    1,063    4,790     291
Santa Barbara           540      871    1,276     950
Tulare                  604      320       63     182
Ventura                 548      613      443      62
Total (8 counties)   61,802    5,880   11,306   3,161

The forty-five remaining counties of the State contain acreages as follows: Oranges, 1,559; olives, 3,394; lemons, 1,090; figs, 3,119. Adding these totals, we obtain the area of the semitropic orchards of California, according to the latest and most reliable data. There are 64,361 acres of oranges, 9,274 acres of olives, 12,396 acres of lemons, and 5,280 acres of figs. The entire acreage devoted to the semitropic fruits above classified is 91,311. No reasonable allowance for small orchards overlooked would be likely to bring this total to much more than 95,000 acres.

Young Fig Tree. Tejon Ranch.

Studying the table, we observe that the leading orange-growing counties are San Bernardino and Los Angeles; the leading fig county is Los Angeles, with Santa Barbara very close, but both still under the thousand acre mark; the leading olive counties are San Diego and Santa Barbara, and the leading lemon counties are San Diego and Los Angeles. Placer, Butte, Sacramento, and Yuba are the only counties in the Sacramento Valley and northern Sierra foothills that have a hundred acres of oranges; Fresno, Stanislaus, and Tulare, in the San Joaquin Valley, have also barely commenced the culture of semitropic fruits. But the industry is more at home in the Coast Range valleys from Santa Barbara south and southeast.
There, also, it is of longer growth, three out of four trees being in bearing, while in the counties that have but lately begun to plant semitropic fruits more than half the orchards have not yet fruited to any extent. The beginnings of fig and olive orchards are more generally distributed throughout the State than are lemon and orange orchards. Classified from this standpoint, the lemon is represented by one or more acres in thirty counties, the orange in thirty-eight, the fig in forty-two, and the olive in forty-four.

Deciduous fruits cover a very wide range, both in variety and distribution. The apple, apricot, cherry, peach, prune, and pear are the principal deciduous fruits grown in California. There are some nectarine and quince orchards, and the Japanese persimmon is planted to some extent. Many other deciduous fruit trees find place in family orchards and experimental grounds, but those named comprise all that are of commercial value at the present time. A complete table of the deciduous fruit acreage by counties would include every one of the fifty-three. The apple, for instance, is grown everywhere. The peach and prune better represent the deciduous fruits. A unit of one hundred acres would force us to classify some forty-five counties. Even five hundred acres as a unit would list twenty-nine counties; but, by raising it to a thousand acres, we include all, or nearly all, of the famous deciduous fruit districts.

Table II.—Acreage of Deciduous Fruits.

County              Apples  Apricots  Cherries  Peaches   Pears  Prunes and plums
Alameda                505     3,310     2,171    1,375   1,701    4,236
Butte                  307       540       165    3,286     913    1,144
El Dorado              225        29        39    1,338     200      279
Fresno                 185       556         7    2,058     634    1,601
Kern                   338       320        25    1,079     315      946
Los Angeles          1,511     2,899        18    4,059   1,661    3,748
Orange                 128     1,492        10    1,203     900      905
Placer                 332       280       272    3,621   1,070      615
Sacramento             139       535       160    2,870   2,900    1,770
San Bernardino         222     1,554        15    2,090     402    1,463
Santa Clara            750     4,350     1,250    5,570     900    8,900
Solano                 153     3,733       436    4,915   3,050    2,870
Sonoma               4,121       229       317    2,507   1,407    2,600
Tehama                  86       574       100    3,182     520    1,328
Tulare                 147       724        10    3,800     642    5,270
Yolo                    75       824        50    1,040     621    1,522
Total (16 counties)  9,224    21,949     5,045   43,993  17,836   39,197

[Illustration: Drying the Apricots.]

The remaining thirty-seven counties of the State contain acreages as follows: Apples, 10,753; apricots, 8,176; cherries, 1,883; peaches, 11,007; pears, 5,906; prunes and plums, 15,445. Adding these totals, we obtain the area of the deciduous orchards. There are 19,977 acres of apples, 30,125 acres of apricots, 6,928 acres of cherries, 55,000 acres of peaches, 23,742 acres of pears, and 54,642 acres of prunes and plums. The deciduous fruits lead in acreage and value of products all other branches of California horticulture; and as the above table plainly shows, the same concentration of each separate variety of fruit in some particular district is manifest everywhere. There are apple counties, peach counties, prune counties, and always will be, although some changes will take place in a decade or two. Peaches, prunes, and apricots occupy nearly three fourths of the acreage.

[Illustration: Almond Tree in February. Rancho Chico.]

The cherry orchards, although covering the smallest area, are more profitable, and give employment to more laborers, in proportion to acreage, than any others of the class. The greater part of the apple crop is consumed at home, but all the other fruits must find their chief market outside the State.
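Table II is internally consistent, as a quick re-addition shows (a Python sketch; data transcribed from the table and the paragraph above):

```python
# Summing the sixteen-county deciduous table (Table II) and adding the
# "remaining thirty-seven counties" figures quoted in the text.
cols = ["apples", "apricots", "cherries", "peaches", "pears", "prunes_plums"]
rows = [
    (505, 3310, 2171, 1375, 1701, 4236),   # Alameda
    (307, 540, 165, 3286, 913, 1144),      # Butte
    (225, 29, 39, 1338, 200, 279),         # El Dorado
    (185, 556, 7, 2058, 634, 1601),        # Fresno
    (338, 320, 25, 1079, 315, 946),        # Kern
    (1511, 2899, 18, 4059, 1661, 3748),    # Los Angeles
    (128, 1492, 10, 1203, 900, 905),       # Orange
    (332, 280, 272, 3621, 1070, 615),      # Placer
    (139, 535, 160, 2870, 2900, 1770),     # Sacramento
    (222, 1554, 15, 2090, 402, 1463),      # San Bernardino
    (750, 4350, 1250, 5570, 900, 8900),    # Santa Clara
    (153, 3733, 436, 4915, 3050, 2870),    # Solano
    (4121, 229, 317, 2507, 1407, 2600),    # Sonoma
    (86, 574, 100, 3182, 520, 1328),       # Tehama
    (147, 724, 10, 3800, 642, 5270),       # Tulare
    (75, 824, 50, 1040, 621, 1522),        # Yolo
]
remaining = (10753, 8176, 1883, 11007, 5906, 15445)

sixteen = [sum(c) for c in zip(*rows)]
statewide = [a + b for a, b in zip(sixteen, remaining)]
print(dict(zip(cols, statewide)))
# sixteen  -> [9224, 21949, 5045, 43993, 17836, 39197], the printed totals
# statewide -> [19977, 30125, 6928, 55000, 23742, 54642], matching the text
```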
In addition to the acreage already tabulated, there are 1,080 acres of nectarines, 300 acres of quinces, and about 100 acres of Japan persimmons. This makes a grand total of 191,894 acres devoted to this class of fruits. Statistics are somewhat incomplete for some of the mountain counties, but it will not be safe to add more than five per cent, and we can then say in round numbers that 200,000 acres are planted with the deciduous fruits. The leading apple counties of the State are Sonoma, Los Angeles, Siskiyou, Santa Cruz, San Diego, and Humboldt. Nothing could better illustrate the extent to which the climate of California is modified by local conditions. San Diego is the most southern county, Siskiyou is the most northern, and they are separated from each other by more than seven hundred miles, but both contain great apple-growing districts. The leading apricot counties are Solano, Alameda, and Los Angeles. The cherry is chiefly grown in Alameda and Santa Clara. The peach industry has been most completely developed in Santa Clara, Solano, Los Angeles, Tulare, Butte, and Tehama. Nectarines are mostly planted in Sonoma and Alameda. Plums and prunes seem to belong chiefly to Santa Clara, Tulare, Alameda, and Solano. Lastly, the great pear districts are in Sacramento, Solano, Alameda, and Los Angeles. The Coast Range lowlands and foothills, together with a few districts in the San Joaquin and Sacramento Valleys, produce the bulk of all the deciduous fruits. Third among the horticultural divisions that I have thought it desirable to tabulate are the nut-bearing trees, comparatively small in present acreage, but likely to become more and more important industries. The nuts grown on a commercial scale are only two, the almond and the walnut. The chestnut, pistachio, filbert, pecan, and a few others have been planted to some extent. 
The following table shows the counties that have 1,000 acres and upward of either almonds or walnuts:

Table III.—Acreage of Nut-bearing Trees.

County              Almonds   Walnuts
Alameda               1,237        36
Butte                 1,588        12
Los Angeles             107     1,789
Orange                  102     2,592
Santa Barbara           340     1,203
Solano                1,470        70
Ventura                 150     6,310
Total (7 counties)    4,984    11,022

The remaining forty-nine counties only bring the total of almond trees in the State to 9,400 acres and that of walnuts to 14,912 acres. One can easily see how limited are the districts as yet devoted to these products. Three almond counties—Butte, Solano, and Alameda—contain nearly one half of the total acreage of the State; four walnut counties—Ventura, Orange, Los Angeles, and Santa Barbara—contain more than four fifths of all the trees planted. The almond, however, is grown to some extent in forty-six counties and the walnut in forty-five. Italian chestnuts, pecans, and filberts have been planted to the extent of perhaps 100 acres. This makes the total acreage of nut-bearing trees in the State 24,413. It is not likely that more than 100 or 200 acres were overlooked. In round numbers there may possibly be 25,000 acres in this class of trees.

The last division contains the grapes and small fruits. Wine and raisin grapes have been very carefully tabulated each year, but table grapes with less attention to details, and small fruits not at all until recently. The grape industry is mostly carried on in the fourteen counties represented by the following table:

Table IV.—Acreage of Grapes.

County              Wine grapes  Raisin grapes  Table grapes
Alameda                   6,396            164           236
Fresno                    5,474         43,928           100
Los Angeles               4,632            671         1,182
Napa                     18,177             10            52
Placer                      354            500         1,421
Sacramento                3,131            385         2,550
San Bernardino            1,024          2,591           274
San Diego                   132          4,455           510
Santa Clara              10,294            200         1,200
Santa Cruz                1,365            103         1,253
Solano                    1,928          1,328         1,167
Sonoma                   22,351            ...           427
Tulare                       70         10,264           100
Yolo                      1,575          5,500         1,500
Total (14 counties)      76,903         70,077        11,972

The total acreage of wine grapes is 91,428; that of raisin grapes is 81,773; and that of table grapes is 18,732. Besides, the area devoted to small fruits, as far as can be ascertained, is 5,081 acres. Alameda, Sacramento, and San Joaquin contain over three fifths of the small-fruit area of the State. Returning to grapes, the results are obtained from the statistics of the State Viticultural Commissioners' Report of 1891, with the figures for a few missing counties filled in from other reliable sources. As in previous tables, the chief centers of each department of the industry are easily recognized. Table grapes are of especial importance in Sacramento, Yolo, Placer, Santa Cruz, Santa Clara, Solano, and Los Angeles, in which counties more than half the table grapes are found. Two counties of the San Joaquin Valley, Fresno and Tulare, have planted five eighths of the total raisin-grape area of the State. Three wine counties—Sonoma, Napa, and Santa Clara—contain five ninths of the total wine-grape area.

In round numbers, then, the fruit and vine acreage of California in October, 1893, is as follows:

Citrus and semitropic      95,000
Deciduous fruits          200,000
Nut-bearing trees          25,000
Grapes                    191,933
Small fruits                5,081
Total                     517,014

Having ascertained the total acreage, the approximate number of fruit-bearing plants of the kinds tabulated can be readily found. Orchardists set trees at different distances apart according to the soil and the variety: 12' × 12', 15' × 15', 18' × 18', and 20' × 20' can be found within a mile of each other. Walnuts and other strong-growing trees are often set 30' × 30', with cultivated crops planted between for a few years.
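These planting distances translate into plants per acre through the 43,560 square feet in an acre; integer division reproduces the counts quoted in the text, since fractional trees are not planted (a quick sketch):

```python
# Converting planting distances into plants per acre.
SQFT_PER_ACRE = 43560

def plants_per_acre(spacing_ft):
    """Trees per acre for a square planting grid of the given spacing."""
    return SQFT_PER_ACRE // (spacing_ft * spacing_ft)

tree_spacings = [12, 15, 18, 20, 30]
print([plants_per_acre(s) for s in tree_spacings])
# -> [302, 193, 134, 108, 48], the figures given in the text

# Vineyard spacings of 4'x6', 4'x8', 6'x6', and 8'x8':
vine_spacings = [(4, 6), (4, 8), (6, 6), (8, 8)]
print([SQFT_PER_ACRE // (a * b) for a, b in vine_spacings])
# -> [1815, 1361, 1210, 680]
```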
The above systems of planting give respectively the following number of trees to the acre: 302, 193, 134, 108, and 48. Of course, there are many other planting distances in general use. The ordinary rule of multiplying the acreage by 100 has never seemed to me sufficiently accurate, and I should choose 150 as more nearly representative of the orchards of to-day. Grapevines are planted 4' × 6' in the case of some varieties, and 4' × 8', 6' × 6', and 8' × 8' in ordinary vineyards. These distances give the following numbers of plants to the acre: 1,815, 1,361, 1,210, and 680; about 1,200 is probably a fair average. Tabulated, with a fair allowance for the acreage planted in the spring of 1893, the sum of the whole matter is as follows:

Number of trees on 320,000 acres               48,000,000
Number of vines, etc., on 200,000 acres       240,000,000
Trees and vines of the plant of 1893 (about)    7,500,000
Total number of plants (about)                295,500,000

The reader must remember that every one of these plants, excepting vines grown from cuttings, has been propagated in a nursery, set out by hand with more or less carefulness, and pruned and cultivated. About sixty per cent of the fruit trees are now in partial or full bearing; in the vineyards the proportion is probably somewhat greater.

[Illustration: Hillside Vines and Trees. Niles Cañon.]

What is the gross yield from these trees? Like wheat, or any other staple crop, the average per acre is very much less than one would expect. There are often such heavy losses from late frosts, drought, insect pests, and fungoid diseases that only a person of more than ordinary intelligence can successfully manage large orchard interests. The average orchard, like the average farm, just about makes a fair living for an industrious man. That this is true can be easily shown by the following figures, and deductions from them:

Orchard and Vineyard Products in 1891.
Class                         Pounds
Canned fruit              64,790,120
Dried fruit               66,743,134
Fresh, deciduous         101,097,940
Prunes                    10,220,700
Raisins                   45,558,370
Citrus fruits             88,194,560
Figs                          50,000
Nuts                      10,318,060
Total shipment, in pounds 386,972,884

It requires not less than 600,000,000 pounds of fresh fruits, besides the nuts, to produce the above results. In round figures, then, 600,000,000 pounds represent the fruit surplus of the State, in the departments of deciduous fruits, citrus fruits, raisins, and table grapes. In addition there was a surplus of:

Wine (gallons)       11,114,029
Brandy (gallons)        799,614
Olive oil (cases)        12,088

Now, there are in California about 500,000 acres of the trees and vines which produce these 600,000,000 pounds of fresh fruit. That is 1,200 pounds to the acre, worth in the orchard from twelve to forty dollars, the average gross value of the crop from an acre of fruit. Of course, many of the trees are not yet in bearing, and some fruit-growers will always have far better returns than this. But the above average is very significant. It shows plainly that the industry can not exist upon a lower average price than one cent a pound for fruit in the orchard. But if the present orchards were in full bearing there might come an especially favorable season which would give a total, even without further planting, of fully 1,500,000,000 pounds. If there are 50,000 acres planted every year, and the old orchards are kept up, the present acreage will be doubled by 1901. But, to show what has been done under favorable circumstances, I give the following statement of the yield of a 700-acre San Joaquin Valley irrigated orchard in 1890:

Yield of 700-Acre Orchard (pounds).

Apricots      339,411
Peaches     2,115,314
Nectarines    210,518
Pears         280,124
Plums           4,705
Prunes         22,283
Total       2,972,335

This is a well-authenticated yield of nearly 3,000,000 pounds from the orchard, or, to be more exact, within a fraction of 4,246 pounds to the acre.
This fruit was sold for $84,365.01, or $120 per acre, gross receipts. The annual product of the 1,300 acres of vines and trees upon this ranch is confidently expected to be 10,000,000 pounds of fresh fruit when every acre comes into bearing, and that is practicable under first-class management. Ignorance or neglect would ruin both orchard and vineyard, however, in less than three years. The average yield per acre, as previously shown, is only 1,200 pounds, but here is a tract of 700 acres, not in full bearing, that gives three and a half times as much. By obtaining the highest possible price, the estimated possible sale of about $45 per acre (when the yield was 1,500 pounds) has been raised in this case to $120 per acre. Should the whole 1,200 acres ultimately yield 10,000,000 pounds, the average per acre will be more than four tons of green fruit, the increase being largely in the item of grapes. Four tons per acre, at a uniform price of one cent a pound, would yield $80, as against the average value of the State crop at that price, $13 per acre. If the 200,000 acres of deciduous fruits in the State could be made to yield at the rate of this irrigated San Joaquin Valley orchard, the product would now be about 850,000,000 pounds of fresh fruit. The same acreage in full bearing at the expected average would reach the enormous yield of 1,660,000,000 pounds. If the semitropic fruits and vineyards could be depended upon to yield in like proportion, it is safe to say that the fruit supply of the world would be more than provided for, and the transportation facilities of the great railroad lines would be overburdened. But horticulture, like agriculture, is subject to drawbacks and limitations. Orchards and vineyards, exactly the same as corn fields and wheat fields, give only a low general average.
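The orchard's per-acre arithmetic is easily reproduced (a Python sketch using the figures quoted above; the gross receipts divide out to a shade over $120 per acre):

```python
# Reproducing the per-acre arithmetic for the 700-acre irrigated orchard.
total_pounds = 2_972_335     # total yield printed in the table above
acres = 700
gross_receipts = 84_365.01

print(total_pounds / acres)              # about 4,246 pounds to the acre
print(round(gross_receipts / acres, 2))  # about $120.52 gross per acre

# "Four tons per acre, at a uniform price of one cent a pound":
print(4 * 2000 * 0.01)                   # $80, as stated in the text
```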
The industry of fruit-growing is established upon a solid foundation and is very prosperous, but the whole yield of the State can never be made proportionate to the yield obtained under exceptional circumstances. The acreage and yield of the orchards and vineyards have now been ascertained. The cash value of the total output can not be as closely calculated. Floating estimates vary even more than the floating estimates of the acreage. Healthy, well-managed orchards probably average gross sales of $100 per acre, taking all classes of fruit together, and one season with another, but there are no reliable statistics of this side of the industry.

[Illustration: Vacaville Pear Tree.]

Returning to an estimate of a present surplus of 600,000,000 pounds of fresh fruit, this at two cents (the average value in the orchards one year with another) would yield the growers $12,000,000 and would probably cost the consumer $36,000,000. This does not include the value of the product of the wine grapes. It only represents the output of the gold mine of the orchards. Commercially, of course, the volume of business created is represented by the cost to the consumer.

Studies of the future of an industry are seldom useful. Planting of trees and vines continues steadily, and if there is a demand the present output can be indefinitely increased. It is believed by the best horticultural authorities that fruit, in various forms, will become more and more a great food staple, used by the masses of the people, and that new markets for the enormous output can be developed from time to time in the United States and in Europe. Like wheat, a staple, fruit in the future will not make fortunes nor "pay for a ranch in one year," but will give safe, steady returns upon the labor and capital invested.
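The valuation just sketched reduces to two multiplications (a Python sketch; the markup from grower to consumer is taken as the threefold ratio implied by the text):

```python
# Valuing the 600-million-pound fresh-fruit surplus described above.
surplus_lbs = 600_000_000
orchard_price = 0.02   # dollars per pound, the grower's average
consumer_markup = 3    # orchard-to-consumer ratio implied by the text

to_growers = surplus_lbs * orchard_price
to_consumers = to_growers * consumer_markup
print(to_growers)               # $12,000,000 to the growers
print(to_consumers)             # $36,000,000 to the consumer
print(surplus_lbs / 500_000)    # 1,200 pounds to the acre on 500,000 acres
```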
The extensive area that might be devoted to fruit culture, if the demand justified such a use, can be seen by the following figures: San Bernardino, Riverside, San Diego, and Los Angeles Counties, all noted for their semitropic fruits, contain 20,913,000 acres, or in round numbers one fourth the area of the State. Fresno, Kern, and Tulare, the great irrigated counties of the San Joaquin Valley, famous for their vineyards and deciduous fruit orchards, contain 14,737,000 acres. The rich and beautiful fruit counties of Alameda, Butte, Placer, Sacramento, Santa Clara, Solano, Sonoma, and Ventura, added to the above, bring the total area to nearly 50,000,000 acres. It need not be supposed that all these immense districts can be cultivated. There are deserts and barren mountains, as well as fertile valleys, plains, and hillsides. But if only one third of the area of these counties is capable of being cultivated, and if only one third of the cultivated acreage is used for fruits, these counties alone can produce, when their orchards are in full bearing, twenty times as much fruit as the present entire yield of the State. The future of the fruit industry of California depends upon the growth of the demand for fruit products. All the other conditions are favorable for the development of the business, but the problem of the possible demand can only be solved by continuing to plant trees, gather fruit, and send it to the markets of the world. The picturesque side of California fruit-growing is very attractive and must long remain so. Just now everything is in the creative stage: vineyards and orchards are being extended along the valleys and up the slopes; the cabins of pioneers are giving place to modern cottages and stately dwellings; villages are fast becoming towns; and towns are rising to the rank of cities. Only about the old missions can one find orchards that deserve to be called venerable, as measured by European standards. 
Take out a few old trees of olive, fig, orange, and pear, and all that remain are less than forty years old. Blossoming springtime in these great orchards is charming, as almonds, apricots, peaches, and all the rest of the deciduous fruit trees come into flower over square miles. The very roadsides are sometimes covered with drifts of petals blown from the overhanging boughs. Loquats ripen and are fit to market almost before the last apple blossoms are gone in the orchards; cherries come next, then the early apricots and plums; the procession goes on month after month, even after the leaves fall. Late apples, pears, and Japanese persimmons mark the California December, mingling as they do with the ripening oranges and lemons and a few figs hanging on the leafless trees. Although the details of the orchard work vary considerably in different parts of California, the more important elements are much the same everywhere. The winter work of pruning is succeeded by the spring work of cultivation and the summer work of harvest. A highly organized system has been developed; laborsaving machinery is used to a great and increasing extent; and the actual cost of producing a pound of fruit can be proved to have lessened every year. One hesitates to say how cheaply fruit can be grown under favorable circumstances by intelligent Americans who know the business. Men are becoming rich at prices that ten years ago would have seemed ruinous. Of course, there is a limit to the process of cheapening production, but the end is still far off. The planting and culture of orchards; the thinning of green fruit; the gathering, handling, packing, shipping, and marketing of ripe fruit; the canning, drying, preserving, and other methods of utilizing fruit products—all these are in a process of continuous evolution. The foregoing glimpses of the subject indicate more than the beginnings of a great industry.
Whoever visits California will see surprisingly vast and imposing results in concrete forms. Valley after valley, town after town live by the toil of the orchardist and vineyardist. The sight is a cheering one, because successful fruit culture requires a high degree of skill and intelligence, a thickly settled rural community, and especial facilities for communication, with all that these things imply. The road-improvement societies are little needed in California fruit colonies. Sometimes the macadamized and sprinkled highways extend six or eight miles out of the town to the very edge of the orchards; then, as the wheat fields are reached, they degenerate into very ordinary country roads. But the educational requirements of this specialized industry extend into new departments of science, and are continually developing so rapidly that only a few trained observers can take note of the advance. Horticulture, applied to the daily needs of such industries as I have described, leaves its servants no time to dream dreams about possibilities of orchard life a century or even a decade hence. Multitudes of perplexing problems of culture and management arise, but two great tasks are always with the educated orchardist or vineyardist. One, briefly stated, is, "Can I produce new and vastly superior varieties by fertilization and scientific study of the laws of heredity and variation applied to plants?" The answer is, "Yes; there is no assignable limit to the capacity of our cultivated fruits, and of fruits still wild, to improve and develop new characteristics." The second great task relates to the ceaseless struggle with the lower forms of animal and vegetable life which prey upon useful forms in immeasurable and innumerable hosts. Gophers and jack-rabbits are now only pests of minor importance in thickly settled orchard districts, but the warfare of the horticulturist with fungoid diseases and parasitic insects long ago passed its amusing stage.

[Illustration: Fumigating Tent. Hydrocyanic-acid gas process for destroying scale. Chino Valley.]

It is a serious business of importance to the whole human race, because whatever threatens the food supply threatens the life of man. The practical applications of skill and capital in the field of preventive and remedial agencies have been remarkable. Every successful Californian fruit-grower has now learned that he must as regularly treat his trees for scale and other inflictions as he must plow his land, thin the fruit, or gather the crop. At the spraying season in the fruit districts it is literally true that the odor of the various preparations used to destroy insect life is universal for miles and for days at a time. Nine tenths of the discussions in the innumerable local clubs of fruit-growers that are doing so much for the practical advance of the industry in California are discussions upon methods for the destruction of these pests.

[Illustration: Almond Bough in July.]

Sometimes one sees hundreds of acres of orchard, in February, snow-white over every inch of twig, with "salt and lime wash," or other acres are brown with sulphite of soda or oil and alkali. Now and then comes an orchard where the remedies have been used in too great strength, and the buds and tender bark seem blighted and blackened. The prevailing enemy of the orchards is the insect family Coccidæ. The species that do most harm are the oyster-shell scale (Mytilaspis pomorum), the pernicious scale (Aspidiotus perniciosus), the yellow orange scale (Aspidiotus citrinus), the red orange scale (Aonidia aurantii), and the apricot scale (Lecanium armeniacum). The Florida red scale (Aspidiotus ficus) and the mining scale (Chionaspis biclavis), a very dangerous species from Tahiti, are being quarantined against by the horticultural commissioners. A cargo of 325,000 orange trees infested with the Tahiti species was once destroyed by Mr. Craw, the quarantine officer of the State Board.
Another group of scale insects known as "cottony scales" (Icerya and Dactylopius) is among the worst enemies of the orchardist. Aphides, canker worms, caterpillars, and fungoid diseases are as yet of much less immediate danger to the fruit-growers than the various Coccidæ, of which I have named only the prominent species. Many valuable formulas for summer and winter washes, for kerosene emulsions, and other preparations were first used in California. The hydrocyanic-acid-gas method is also a Californian invention. Derricks and tents are used in this gas treatment, and it solves many difficulties in the way of using washes and the spray system on the citrus fruit trees. The city of Riverside owns several complete sets of the necessary apparatus, and rents at a nominal rate to fruit-growers, who hire operators and furnish the necessary chemicals. Since this is not a technical treatise, however, I must refer students of the perpetual struggle going on in California between the orchardist and his insect enemies to the publications of the Agricultural Department of the State University and of the State Board of Horticulture. Here, in thousands of pages, the story is told in every detail. There is not only an active warfare going on against insect foes, but various predaceous and parasitic insects that destroy dangerous species have been called to the aid of the horticulturist. In conclusion, one must ask, "How goes the fight?" The statistics of the fruit industry answer this question. The cost of destroying insect pests has become a permanent item of expense, the results of which are increased profits. Care and management of orchards now include preparation of the soil; selection of varieties adapted to the place; planting and culture of the trees; pruning, according to different systems for different species and localities; the use of special fertilizers, and the destruction of noxious insect life.
The various coccids that infest the California orchard valleys are only to be found in dangerous numbers upon the orchards of the careless or the ignorant fruit-growers. Their multiplication is readily and safely checked on as large a scale as desired, and at a cost paid many times over by the increased crop. Sometimes, for several seasons and over large districts, the coccids disappear, but they return, and renewed expenditures of time and skill are necessary to conquer them again. The expense lessens, however, and the certainty of success increases, year after year as the fruit-grower becomes a specialist. Does this appear too difficult? It is the same old demand for intellect, inherent in the order of things. Horticulture in every division is a science as well as an art, and it more and more amply rewards the technical skill of the well-equipped specialist.

During the discussion in the British Association on anthropometric measurements, Dr. Garson expressed the opinion that there could be no better system than that adopted in the United States, where an enormous number of observations were made on a uniform plan in many schools. If the American plan could be adopted in Great Britain we should be able to compare children on both sides of the Atlantic, and have full details of the growth of the English race. The different methods of anthropometric observation now adopted rendered the results absolutely useless.
Rational design and dynamics of self-propelled colloidal bead chains: from rotators to flagella

Hanumantha Rao Vutukuri, Bram Bet, René van Roij, Marjolein Dijkstra & Wilhelm T. S. Huck

Scientific Reports, volume 7, Article number: 16758 (2017)

The quest for designing new self-propelled colloids is fuelled by the demand for simple experimental models to study the collective behaviour of their more complex natural counterparts. Most synthetic self-propelled particles move by converting the input energy into translational motion. In this work we address the question if simple self-propelled spheres can assemble into more complex structures that exhibit rotational motion, possibly coupled with translational motion as in flagella.
We exploit a combination of induced dipolar interactions and a bonding step to create permanent linear bead chains, composed of self-propelled Janus spheres, with a well-controlled internal structure. Next, we study how flexibility between individual swimmers in a chain can affect its swimming behaviour. Permanent rigid chains showed only active rotational or spinning motion, whereas longer semi-flexible chains showed both translational and rotational motion resembling flagellum-like motion, in the presence of the fuel. Moreover, we are able to reproduce our experimental results using numerical calculations with a minimal model, which includes full hydrodynamic interactions with the fluid. Our method is general and opens a new way to design novel self-propelled colloids with complex swimming behaviours, using different complex starting building blocks in combination with the flexibility between them.

Active or self-propelled colloidal-particle systems are currently a subject of great interest in soft condensed matter science, owing to their ability to mimic the collective behaviour of complex living systems, but also to serve as model systems to study intrinsically out-of-equilibrium systems1,2,3,4,5,6. Self-propelled particles can exhibit rich collective behaviour, such as clustering, segregation, and anomalous density fluctuations, by consuming internal energy or extracting energy from their local environment in order to generate their own motion1,2,3,4,5,7. In recent years, several experimental self-propelled particle systems have been developed, namely Janus spherical particles2,8,9,10,11, Janus rods12, bimetallic rods13, granular rods14, gear-shaped15,16, hetero-dimers17, and L-shaped particles18, based on different self-propulsion mechanisms. However, most of the experimental studies have been reported on Janus spherical19 and rod-like particle systems12.
Moreover, the majority of self-propelled particles move actively by converting the input energy directly into translational motion on the single-particle level. As a result, the self-propelled particles exhibit ballistic behaviour at time scales much smaller than the rotational diffusion time and diffusive behaviour at longer time scales, with an effective diffusion coefficient that is many times larger than the equilibrium diffusion coefficient.

In this work we ask ourselves whether, and if so how, self-propelled Janus spheres can assemble into bead chains that exhibit rotational motion, possibly coupled with translational motion. Some biological systems display active rotational motion, e.g., certain bacteria confined to two dimensions20, and dancing algae21. To the best of our knowledge, only a few synthetic systems show active rotation, e.g., a granular gear-shaped particle in an active bacterial bath15,16, multi-layered bimetallic rods22,23, L-shaped particles24, and random clusters of active Janus particles25,26. However, the swimming behaviour of these systems cannot be tuned by controlling the internal structure. Here, we develop a new type of internally driven colloidal-particle system, namely self-propelled colloidal bead chains with tunable stiffness. This system is suitable for studying dynamics on the single-particle level in concentrated systems, and exploits directed self-assembly of simple self-propelled building blocks. Moreover, we show that introducing (semi-)flexibility between the beads in a chain is sufficient to break the time-reversibility constraint and realize flagellum-like or helical-like motion that is powered by the fuel. To the best of our knowledge, flagellum-like motion has not been reported before in internally powered (self-propelled) colloidal swimmers.
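The ballistic-to-diffusive crossover described above is the standard active-Brownian-particle result: in two dimensions MSD(t) = (4D_T + v² τ_R) t + (v² τ_R²/2)(e^(−2t/τ_R) − 1), which is ballistic (≈ v²t²) for t ≪ τ_R and diffusive with D_eff = D_T + v² τ_R/4 for t ≫ τ_R. A short illustration with made-up parameter values (not data from this work):

```python
import math

# Mean-squared displacement of a 2D active Brownian particle.
# Parameter values below are illustrative assumptions, not measurements.
D_T = 0.2      # translational diffusion coefficient, um^2/s (assumed)
v = 3.0        # self-propulsion speed, um/s (assumed)
tau_R = 2.0    # rotational diffusion time 1/D_R, s (assumed)

def msd(t):
    """2D active-Brownian-particle mean-squared displacement."""
    return (4 * D_T + v**2 * tau_R) * t \
        + 0.5 * v**2 * tau_R**2 * (math.exp(-2 * t / tau_R) - 1)

# Long-time effective diffusion coefficient:
D_eff = D_T + v**2 * tau_R / 4
print(D_eff / D_T)   # 23.5x enhancement over passive diffusion

# Short times are ballistic: msd(t) ~ 4*D_T*t + v**2 * t**2
t = 1e-3
print(msd(t), 4 * D_T * t + v**2 * t**2)
```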
Only externally powered propellers, such as helix-shaped particles27,28,29,30 driven by rotating magnetic fields and DNA-linked assemblies of magnetic particles tethered to a red blood cell31 in biaxial magnetic fields, have shown this motion. However, these systems are not suitable for collective-behaviour studies because the long-ranged magnetic interactions between individual units are difficult to minimize. Additionally, they do not satisfy the force-free and torque-free condition28, which is essential for studying internally powered colloidal systems. Moreover, these systems are difficult to fabricate. Several simulation and theoretical studies have predicted that particle shape and swimming direction can affect the macroscopic behaviour of self-propelled particle systems1,2,6,32, but experimental realizations of shape-anisotropic particles in combination with varying propulsion directions are scarce. Several synthesis routes have been reported for 'passive' colloidal molecules and complex-shaped particles33,34,35,36,37,38, and for chains of particles using different starting building blocks, e.g., isotropic spherical particles with various linking mechanisms34,39,40,41,42, Janus particles43, and flattened particles44. However, making these model systems with self-propelling capabilities remains a challenge. Here, we take inspiration from the design principles of synthesizing passive colloidal molecules, where the colloidal spherical particles are treated as atoms and the interactions between them are tuned such that they self-assemble into well-defined complex clusters33,34,36. We show that self-assembly of the Janus particles into polymer-like chains with tunable stiffness provides a powerful route towards the fabrication of complex self-propelled particle systems.
Fabrication of permanent bead chains We first present our method to create catalytically powered colloidal rotators, i.e., active rigid linear bead chains, by connecting the individual self-propelled spherical swimmers in such a way that neighbouring swimmers in the chain always propel in opposite directions. Our fabrication method consisted of two steps (Fig. 1a): (i) aligning Janus particles into linear chains using a high-frequency external AC electric field45, and (ii) making these structures permanent using a combination of van der Waals attractions and linkers (i.e., inter-diffusion of the polymer chains that the particles are made of, and polyelectrolytes) to ensure that the chains remained intact even after the field was removed. Suspensions consisting of 0.02 vol% half-side Pt-coated polystyrene (PS) particles in deionized water were introduced into a rectangular electric cell (0.1 × 1.0 mm2). Upon application of a high-frequency external AC electric field (Erms = 0.01 Vμm−1, f = 800 kHz, where Erms is the root-mean-square electric-field strength and f is the frequency), the particles assembled into staggered or zig-zag linear chains in the direction of the applied field (Fig. S1b). The difference in the polarizability of the two sides (half-dielectric and half-metallic) of the Janus particles caused the particles to align into staggered chains, as shown in Fig. S1b. At low particle concentrations and at low field strengths, the stable structure is a string fluid phase that consists of staggered chains of particles in the field direction with a liquid-like order in the plane perpendicular to the field direction (Fig. S1b)45. However, these linear structures rely on the presence of the external electric field, i.e., the chains lost their identity (Fig. S1c) when the field was switched off 37,45. To circumvent this, we used a combination of a strong electric field and a heating step. At high field strengths (Erms = 0.04–0.05 Vμm−1, f = 800 kHz), the induced dipolar attractions between the particles are strong enough to push the particles together to distances where the van der Waals attractions take over and bind the particles irreversibly. In particular, the van der Waals attractions between the metallic halves of the particles are strong in comparison to those between the polymeric halves46. Due to some irregularities in the Pt coating, a small fraction of the polymeric side of a particle can also touch the polymeric side of neighbouring particles. These polymer-polymer connections are the key to making permanent bonds between neighbouring Janus spheres through a heating step that was developed in an earlier study by Vutukuri et al.34,39. In the heating step, the sample cell was heated to 60–65 °C, which is well below the glass transition temperature47 (Tg ≈ 107 °C) of polystyrene, as shown in Fig. S2, for about 2–3 minutes using a hot-air stream that was much wider than the sample cell34,39. After 2–3 minutes, the field was slowly switched off and we found that the particles were permanently attached to each other in a staggered fashion and acted as a single rigid body. Fig. S3 shows the length distribution of the resulting permanent bead chains.
Figure 1. (a) Fabrication of active rotators. Schematic diagram illustrating the steps involved in the preparation of active bead chains. The dark side represents the Pt-coated side of the particles. (b) Time evolution of an active rigid colloidal polymer chain or self-propelled rotator in a 1.0 vol% H2O2 solution. Time-lapse optical microscopy images show the circular motion of a bead chain in the presence of the fuel. The arrow depicts the direction of the rotational motion. (Inset) The direction of the propulsion forces acting along the length of the chain. Dark parts represent the metal-coated (Pt) side of the particle. Scale bars are 2.0 μm.
It is important to mention here that the length of the permanent chains can be controlled by varying the distance between the two electrodes, as developed in our previous study by Vutukuri et al.34. This procedure could thus significantly increase the monodispersity in length, if necessary, but was not used in the present study. Dynamics of self-propelled rotators or active rigid bead chains After creating the permanent structures, we studied their dynamic behaviour in the presence of H2O2. We note that it has been reported that, in the case of individual half Pt-coated Janus spheres, the platinum-catalyzed decomposition of H2O2 generates a concentration gradient of ions across the surface of the particles, inducing self-propulsion in the direction of the non-coated surface8,9,10,48. Although the underlying mechanism responsible for the propulsion is still under debate49, the dominant propulsion mechanism is probably a combination of diffusiophoresis and self-electrophoresis8,9,10,48. When we transferred the permanent staggered chains into a 1.0 vol% H2O2 solution, the bead chains showed autonomous, sustained rotating behaviour on the surface of the bottom wall (Fig. 1b). The rotating movement can be attributed to the fact that the propulsion forces on the Janus particles act in opposite directions in an alternating fashion along the length of the chain (see the inset of Fig. 1b), resulting in a constant torque on the chain, which induces its rotational motion. The time-lapse bright-field images show the rotational motion of a 6-bead chain in a 1.0 vol% H2O2 solution, as shown in Fig. 1b (see supplementary Movie S1). The inset of Fig. 2b shows the typical trajectory of the centre of mass of the bead chain.
In order to characterize the trajectories, we first extracted the centroids of each bead (Janus sphere) in a chain using a particle-tracking algorithm50, and then calculated the orientation angle from the coordinates of the first and the last bead as a function of time (see Materials and Methods section). From this, we calculated the mean-squared angular displacement (MSAD) and the translational mean-squared displacement (MSD) of the centre of mass of the chain, as shown in Fig. 2. Due to the persistent angular propulsion, the bead chains are actively rotating, as shown by the cumulative rotational angle θ(t) continuously increasing with time (inset of Fig. 2a). We obtained the average angular velocity (ω) and the passive (equilibrium) rotational diffusion coefficient (Dr,eq) of the rotator by fitting the MSAD curves with the equation \(\langle {\theta }^{2}\rangle =\langle {[\theta (t)-\theta (0)]}^{2}\rangle ={\omega }^{2}{t}^{2}+2{D}_{r}t\) (see more details in Materials and Methods section). The rotator or active bead chain was propelling with an angular velocity ω = 0.48 ± 0.03 rad/s. The obtained equilibrium rotational diffusion constant, Dr,eq = 0.005 ± 0.004 rad2/s, is consistent with the theoretical value51 as well as with our numerical calculations (see the bead-shell minimal model in Materials and Methods section).
Figure 2. Self-propelled 6-bead chain in a 1.0 vol% H2O2 solution. (a) Mean-squared angular displacement (MSAD) of the rigid active bead chain. The black squares represent the experimental measurement, and the solid red curve shows the quadratic fit from the MSAD equation. The top-left inset shows the cumulative rotational angle with time. (b) The typical mean-squared displacement (MSD) of a circular swimmer. The solid red curve is the fit from the MSD equation (eq. 5). The top-left inset illustrates the trajectory of the centre of mass of the bead chain.
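As an illustration of the MSAD fit described above, here is a minimal Python sketch (not the authors' analysis code) that recovers ω and Dr from synthetic data. Since the model ⟨θ²⟩ = ω²t² + 2Dr t is linear in the parameters (ω², 2Dr), an ordinary least-squares solve suffices; all data values below are assumed for illustration.

```python
import numpy as np

def fit_msad(t, msad):
    """Least-squares fit of <Δθ²> = ω²·t² + 2·D_r·t.

    The model is linear in (ω², 2·D_r), so a single
    linear least-squares solve returns both parameters."""
    A = np.column_stack([t**2, t])
    (w2, two_dr), *_ = np.linalg.lstsq(A, msad, rcond=None)
    return np.sqrt(w2), two_dr / 2.0

# Synthetic MSAD with the values reported for the 6-bead rotator
t = np.linspace(0.1, 10.0, 100)        # lag times in s
omega_true, dr_true = 0.48, 0.005      # rad/s, rad²/s
msad = omega_true**2 * t**2 + 2 * dr_true * t

omega, dr = fit_msad(t, msad)
print(omega, dr)   # recovers 0.48 rad/s and 0.005 rad²/s
```

On real, noisy MSAD data the same solve applies; only the residuals grow.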
Note that at the same fuel concentration (1.0 vol% H2O2), singlets propelled with a velocity of 1.58 μm/s (see Fig. S4). In contrast to single self-propelled spheres, our rotators show distinctive MSADs and MSDs. The main distinctive feature of the MSD is an oscillatory behaviour, indicating that the rotator performs a spiral-like "spira mirabilis" motion due to thermal fluctuations. As shown in Fig. 2b, the experimental MSD is in good quantitative agreement with the theoretical predictions reported for a thin rod with not only a self-propulsion force, but also a self-propulsion torque, i.e., a circular swimmer52. We note that the angular velocity obtained from the MSAD fit (ω = 0.48 ± 0.03 rad/s) is in quantitative agreement with the value obtained from the MSD fit (ω = 0.42 ± 0.09 rad/s). Next, we quantified the dynamics of bead chains in terms of angular velocity for different chain lengths and at different fuel concentrations. For each fuel concentration, we analysed 6 trajectories for each chain length in two separately prepared samples. In Fig. 3, we plot the average angular velocity as a function of chain length for two different fuel concentrations. It is apparent from Fig. 3 that chains with an even number of beads exhibit much more pronounced rotations than chains with an odd number of beads. This can be understood from the forces on the beads: in a perfectly symmetric chain where the propulsion forces are antiparallel, a chain with an even number of beads will experience a net torque but no net force. In contrast, a chain with an odd number of particles will experience no net torque but will experience a net force. In practice, small asymmetries result in both forces and torques on both types of chains, but the preference of odd-numbered chains for translation and of even-numbered chains for rotation is preserved.
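The even/odd argument above can be checked numerically. The sketch below uses a simplified straight-chain geometry with unit propulsion forces (not the paper's staggered geometry or resistance-tensor model) and computes the net force and the torque about the centre of mass for alternating force directions:

```python
import numpy as np

def net_force_torque(n_beads, f=1.0, spacing=1.0):
    """Net propulsion force and torque (about the centre of mass)
    for a straight chain with alternating ('zig-zag') propulsion
    forces ±f perpendicular to the chain axis."""
    x = np.arange(n_beads) * spacing           # bead positions on the x-axis
    fy = f * (-1.0) ** np.arange(n_beads)      # alternating ±f along y
    force = fy.sum()                           # net force (y-component)
    torque = np.sum((x - x.mean()) * fy)       # z-torque: Σ (x_i - x_cm)·f_y,i
    return force, torque

for n in (2, 4, 6):   # even: finite torque, zero net force -> rotation
    print(n, net_force_torque(n))
for n in (3, 5):      # odd: finite net force, zero torque -> translation
    print(n, net_force_torque(n))
```

The output confirms the parity argument exactly for the idealized symmetric chain.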
Both clockwise (CW) and counterclockwise (CCW) rotations are observed for different chain lengths. For longer bead chains (>3σ, where σ is the particle diameter) the gravitational height of the passive bead chains is l g > 1.0 μm, which is comparable to the particle diameter. As a result, they are confined to a plane where, once they start rotating in a particular direction, they tend to continue in that direction. We did not observe any switching from CCW to CW rotation or vice versa. For dimer and trimer chains, on the other hand, we did observe switching between the rotation directions. This can be attributed to the fact that the gravitational height for shorter bead chains is less than one particle diameter; for instance, the gravitational height of a dimer bead chain is ~0.3σ. As a consequence, shorter chains not only show small fluctuations out of the 2D plane, but also full rotations in 3D, thereby switching between CCW and CW rotation directions. For an even number of beads in a chain, the angular velocity decreases with increasing length at constant fuel concentration, as shown in Fig. 3. In order to capture the flow fields generated by the rotators in the fluid, we added uncoated PS spheres of 2.1 μm diameter as tracer particles to the sample cell. We subsequently followed the dynamics of the tracer particles as a function of time by means of bright-field microscopy. The trajectories reveal that tracer particles in close proximity to the 4-bead rotator show net motion (Fig. 4), while particles far from the rotator show Brownian motion, the effect of the rotator being negligible in this case. Based on our experimental observations, we believe that there is no bulk convective flow present in the system.
Figure 3. Angular velocities (ω) for varying lengths of active rigid chains and for two different H2O2 concentrations. Blue squares represent the angular velocity at a fuel concentration of 1.0 vol%, and red diamonds denote the angular velocity at a fuel concentration of 2.0 vol%.
Figure 4. The fluid flow generated by the 4-bead rotator in a 1.0 vol% H2O2 solution. The green line represents the trajectories of 2.1 μm PS tracer particles overlaid on a bright-field image. The white arrow depicts the rigid 4-bead rotator. The scale bar is 5.0 μm.
Self-propelled or active semiflexible bead chains Next, we study how flexibility between the beads in a chain affects the swimming behaviour in the presence of a fuel. Here we used PS (polystyrene) Janus particles sterically stabilized with a high molecular weight polyvinylpyrrolidone (PVP, Mw = 360 kg mol−1) as starting building blocks and repeated the same protocol as for the fabrication of active rigid chains. The particles stabilized with the higher molecular weight PVP form chains with a semiflexible character34, as the stabilizing polymers (PVP), chemically connected and/or entangled with the PS polymer chains, acted as linkers between the beads. Using Fourier mode analysis, we quantified the flexibility of the resulting chains by estimating the persistence length l p , yielding l p ≈ 20σ, where σ is the diameter of a bead34. When we introduced flexibility between the beads in a chain, the resulting chains showed a completely different swimming behaviour, consisting of coupled translational and rotational motion that strongly resembles flagellum-like or helical motion (see supplementary Movie S2). Here, we refer to the observed swimming behaviour as flagellum-like motion, by which we mean that the chain undergoes an intricate coupling between rotational and translational motion. We note that the rigid bead chains showed only active rotational motion.
The time-lapse bright-field images show the sequential shape and orientation changes of an active semiflexible chain, indicated by the change in intensity of the beads in the chain, as shown in the XY projection in Fig. 5a. We further confirmed this by measuring the intensity profile of the uncoated side of the third bead in the chain, which shows oscillatory behaviour with time (Fig. 5c). We note that the bright parts of the beads in the chain indicate the uncoated side of the bead in the focal plane, while the dark ones indicate beads that are either above or below the focal plane. Next, we analysed the motion of the chain in the orthogonal XZ view, which reveals the spiral motion of the chain (Fig. 5b). Figure 5d shows a typical trajectory of the centre of mass of the active semi-flexible bead chain during a time interval of 20 seconds. The resulting conformations and propulsion motion are a consequence of a coupling between the propulsion forces, chain flexibility and viscous drag from the solvent, which generates controlled deformations. As our chains are semiflexible in nature (persistence length l p ≈ 20σ; contour length l c ≈ 6σ), we expect that there is always some correlation between neighbouring particles, since a combination of chain flexibility and hydrodynamic forces may generate an asymmetry in the self-propulsion directions of the beads. As a result, the rotation cycle is non-reciprocal, leading to a net translational displacement of the chain28,53 (see supplementary Movie S2). Next, we examined the flows generated in the fluid by a self-propelled 6-bead chain, by means of 1.0 μm tracer particles and a 5.0 μm elongated glass piece (a contamination in the sample cell). Typical trajectories of tracers over 20 s are shown in Fig. 5e. One clearly deduces from the paths of the tracer particles that there is no bulk convective flow present in the system.
Figure 5. Dynamics of a self-propelled semiflexible colloidal bead chain in a 1.0 vol% H2O2 solution. (a) Time-lapse optical microscopy images show the sequential shape and orientation changes of the active bead chain, resulting in net translational motion. We note that the bright parts of the beads in the chain represent the uncoated sides of the beads in the focal plane, while the dark ones represent beads either above or below the focal plane. (b) Orthogonal XZ view of the same chain over 100 seconds. (c) Intensity profile of the centre of mass of the chain. (d) Trajectory of the centre of mass of the active semi-flexible bead chain over a time interval of 20 s. (e) The blue line shows the trajectory of the centre of mass of the semiflexible chain, and the red lines show the trajectories of a 1.0 μm tracer particle and a 5.0 μm elongated glass piece (a contamination in the sample cell), respectively, overlaid on a bright-field image. Scale bars are 5.0 μm in (a,e) and 2.0 μm in (b).
In the presence of 1.0 vol% H2O2, the chain showed both rotational and translational motion. The active rotational velocity around the axis of the active 6-bead semiflexible chain (Fig. 5) is 3.5 rad/s, and the chain propels along its long axis with a velocity of 0.9 μm/s. A chain of 8 beads propelled with a translational velocity of 0.62 μm/s and a rotational velocity around its axis of 3.8 rad/s in a 1.0 vol% H2O2 solution. Due to the large persistence length (l p ≈ 20σ) of the semiflexible chains, the shorter chains (≤4σ) behaved like rigid chains and showed rotator behaviour. In addition, we now demonstrate that our method can also be used to make more complex self-propelled chains composed of both semiflexible and rigid parts, as shown in Fig. 6. These complex chains were fabricated by mixing rigid and semiflexible chains together and repeating the same protocol as was used in making the constituent bead chains.
In the presence of fuel (1.0 vol% H2O2), the propelling behaviour (see supplementary Movie S3) of a half-rigid, half-semiflexible 8-bead chain resembles the swimming behaviour of microorganisms with a flexible tail and a rigid head, such as E. coli. The rotational velocity around the long axis of the half-rigid, half-semiflexible 8-bead chain is 1.9 rad/s and the propelling velocity is 1.2 μm/s. We believe that our method opens up new possibilities to realize more complex swimmers, for instance by attaching a semiflexible chain to a 'passive' single big sphere or a cargo.
Figure 6. Dynamics of a self-propelled complex colloidal bead chain consisting of a half-rigid and a half-semiflexible part in a 1.0 vol% H2O2 solution. Time-lapse optical microscopy images show the swimming behaviour. White lines depict the rigid and the semiflexible parts of the chain. Scale bar is 5.0 μm.
Bead-shell minimal model and equations of motion In order to better understand the experimentally observed propulsion behaviour of the bead chains, we complemented our study with a minimal model that includes full hydrodynamic interactions with the fluid (see more details in Materials and Methods section). Instead of modelling the catalytically powered self-propulsion in terms of a slip velocity on the surface of the Janus particles, we model the propulsion mechanism by an effective propulsion force of fixed magnitude acting on each of the Janus spheres. For a rigid chain of spheres, these propulsion forces sum up to a force and a torque acting on the rigid configuration. If the 6 × 6 hydrodynamic resistance tensor of this rigid body is known, the equations of motion of the object can be solved after imposing the effective force and torque. Alternatively, if the motion of the body is known, these equations of motion may be inverted to calculate the effective force and torque.
We apply this to the rigid rotating chains, for which the observed angular velocity is used to calculate, given the 'zig-zag' arrangement of the propulsion forces, the magnitude of the effective propulsion force (more details in Materials and Methods section). To check the validity of this procedure, we feed the obtained effective force magnitude back into the equation of motion, from which we calculate the angular velocity for rigid bead chains of different lengths. We found our calculations to be consistent with the experimentally measured angular velocities (Fig. 3) in all three cases of bead chains with an even number of beads, as shown in Fig. 7.
Figure 7. The experimentally measured angular velocities for bead chains of different lengths in comparison with the theoretical ones obtained from the average effective propulsion force using the resistance tensor.
Self-propelled semi-flexible bead chains As shown experimentally, the active rigid bead chains did not exhibit any translational motion (Figs 1b and 2), whereas the semiflexible chains showed a net translational motion (Fig. 5). Due to the overdamped nature of the dynamics at low Reynolds numbers, we expect that the internal motions within a semi-flexible bead chain will quickly decay towards an equilibrium configuration, rather than showing oscillatory motion as in the case of a driven oscillator with inertia. Below we show that a small anisotropy in both the shape and the force distribution of the bead chain, induced by the semiflexibility, can lead to translational motion of the bead chain, even when the net effective propulsion force on the chain remains zero. To this end, we analysed small anisotropies in shape by considering bead chains with the centres of the Janus spheres positioned in a C-shape and on a helical centreline, parameterized by a helical radius r and helical pitch p (see more details in Materials and Methods section).
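To illustrate why the angular velocity decreases with chain length, a much cruder free-draining estimate than the paper's resistance-tensor model can be sketched: each bead contributes Stokes drag at its lever arm, hydrodynamic coupling between beads is neglected, and all parameter values below are arbitrary illustrative units (not the experimental ones).

```python
import numpy as np

def omega_free_draining(n_beads, f=1.0, a=0.5, spacing=1.0, eta=1.0):
    """Angular velocity of an even, rigid 'zig-zag' chain in a
    free-draining approximation: each bead contributes Stokes drag
    6πηa at its lever arm about the centre of mass; hydrodynamic
    coupling between beads (included in the paper's resistance-tensor
    model) is neglected."""
    x = np.arange(n_beads) * spacing
    r = x - x.mean()                          # lever arms about the centre of mass
    torque = np.sum(r * f * (-1.0) ** np.arange(n_beads))
    xi_rot = np.sum(6 * np.pi * eta * a * r**2)   # rotational drag coefficient
    return abs(torque) / xi_rot

omegas = [omega_free_draining(n) for n in (2, 4, 6, 8)]
print(omegas)   # monotonically decreasing with chain length
```

In this approximation the torque grows only linearly with bead number while the rotational drag grows roughly cubically, so ω falls off with length, consistent with the trend in Fig. 3.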
In addition, we studied anisotropic arrangements of the propulsion forces that deviate from the 'zig-zag' arrangement. We summarized our numerical predictions of the 'swimming' behaviour for the active helical bead chains with varying shapes and propulsion force distributions in Table 1. We clearly found that anisotropy in both shape and force distribution is required to induce translational motion.
Table 1. Summary of whether or not translational motion is predicted theoretically, as indicated by Yes and No, for active helical bead chains with varying shapes and force distributions. We distinguish three different shapes (the 'zig-zag' linear bead chain, the C-shaped chain, and different types of helically shaped bead chains) and two different arrangements of the effective propulsion forces (an alternating 'zig-zag' force distribution as observed for the rigid bead chains, and a typical heterogeneous force distribution). Animations of the motion of these different bead chains, referred to in parentheses, can be found in the Supplementary Movies.
Hence, shape anisotropy in the form of a C-shaped or (moderately) helically shaped bead chain, combined with a heterogeneous distribution of propulsion forces, leads to a 'swimming' motion that qualitatively reproduces the experimentally observed propulsion behaviour (Fig. 5). This can also be clearly seen in animation 6 in the Supplementary information. We now take a closer look at the influence of the force distribution on the swimming behaviour. In Fig. 8, we compare the in-plane (projected) velocity v2D and the angular velocity around the long axis of the bead chain, ω||, for a set of 100 randomly chosen configurations of propulsion forces on a fixed helically shaped bead chain, with the (angular) velocity for a range of different helical bead chains and with the experimentally observed values. Here, we considered configurations with 6 randomly chosen orientations (blue), which are (almost) never force-free.
In addition, we considered zig-zag configurations with a single randomly chosen orientation (purple), as well as configurations with three randomly chosen directions of propulsion forces, augmented with the three opposite vectors. For the latter two categories, the total effective propulsion force vanishes. From Fig. 8, we observed that these random configurations do not produce translational motion that qualitatively agrees with the experimentally observed motion, and we therefore conclude that inducing translational motion requires a certain correlation between the directions of the effective propulsion forces on the beads. Since the experimental system is semiflexible in nature (persistence length l p ≈ 20σ; contour length l c ≈ 6σ) and is initially made from a linear zig-zag bead chain, there are always correlations between the force directions of neighbouring beads in the chain.
Figure 8. Comparison of the (magnitude of the) two-dimensional velocity v2D and angular velocity ω|| for different distributions of the propulsion forces on the beads of a helical bead chain. In blue, 100 realizations are shown where the direction of the propulsion force is chosen at random for each individual Janus particle. Force-free (i.e., vanishing effective propulsion force) realizations with a 'zig-zag' distribution, with forces aligned anti-parallel in a random direction, and realizations where the directions of the propulsion forces are chosen at random for three of the Janus particles with the three others being their opposites, are shown in purple and pink, respectively. These numbers were calculated for a specific choice of helical bead-chain shape, as shown in Fig. 9. In green, the (angular) velocity is plotted for the above-mentioned heterogeneous force distribution, for all the different helically shaped bead chains under consideration, while the red square indicates the values observed in the experiment.
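The three categories of force configurations can be generated in a few lines. The sketch below (illustrative only, with an assumed random seed and a 6-bead chain) checks that fully random directions are essentially never force-free, whereas the 'zig-zag' and paired-opposite constructions are force-free by construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """n random directions, uniformly distributed on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# (i) fully random force directions: (almost) never force-free
net = [np.linalg.norm(random_unit_vectors(6).sum(axis=0)) for _ in range(100)]

# (ii) 'zig-zag': one random direction with alternating sign -> force-free
u = random_unit_vectors(1)[0]
zigzag = np.array([u * (-1) ** i for i in range(6)])

# (iii) three random directions plus their opposites -> force-free
w = random_unit_vectors(3)
paired = np.vstack([w, -w])

print(min(net))                              # > 0: residual net force
print(np.linalg.norm(zigzag.sum(axis=0)))    # ~ 0
print(np.linalg.norm(paired.sum(axis=0)))    # ~ 0
```

This mirrors the constraint used for the purple and pink categories in Fig. 8, where only the force-free constructions are physically admissible for an internally powered swimmer.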
In conclusion, we have developed an effective yet simple method for the fabrication of internally powered colloidal bead chains with tunable stiffness. Our active bead chains are the first experimental realization of a new type of self-propelled particle that spontaneously rotates or spins when the spherical swimmers are rigidly connected, while showing flagellum-like propulsion when the connections between the spherical swimmers are semiflexible. In the presence of the fuel, active rigid bead chains experience equal and opposite propulsion forces along the length of the chain that induce active rotational motion. Due to the hydrodynamic forces and the flexibility of the chain, long (≥6σ) self-propelled semiflexible chains are subject to an asymmetry in the self-propulsion directions of the beads in a chain, leading to flagellum-like motion. We corroborated our experimental results by means of numerical calculations with a minimal theoretical model based on a description via effective propulsion forces. We were able to reproduce, qualitatively and to some extent quantitatively, the 'swimming' behaviour of the semiflexible bead chains. Also, the spiralling motion calculated from this model agrees well with the experimentally observed motion of the bead chains. With our numerical calculations we found that inducing translational motion of active semiflexible chains requires some correlation between the directions of the propulsion forces on the beads along the chain. These correlations may arise from the semiflexible nature of the chains, which initially had a well-defined linear zig-zag structure. Our findings demonstrate how a microscopic property (i.e., the flexibility between the beads) can affect the dynamical behaviour of self-propelled bead chains. In future studies, the length of the chains can be controlled using previously reported methods by Vutukuri et al.34.
Potentially, our method paves a way to study the collective behaviour of self-propelled or internally driven chains with tunable stiffness, and can likely be extended to light-activated propelling particles7 (e.g., half-titania-coated particles), where the rotational speed of rigid chains and the swimming speed of semiflexible bead chains can be controlled with the light intensity. Additionally, we can easily extend the procedure to magnetodielectric particles, such as magnetic nanoparticles embedded in micron-sized polymeric particles, so that we can control the directionality of the motion of active bead chains with either external magnetic or electric fields. This work will facilitate future designs of new complex self-propelled swimmers, for instance by attaching a 'passive' single big sphere or a cargo to an active semiflexible bead chain. Particle synthesis We synthesized54 electrostatically stabilized, negatively charged polystyrene (PS) particles of 1.35 μm diameter with a size polydispersity of 4%, and sterically stabilized (polyvinylpyrrolidone, PVP, Mw = 40 kg/mol) PS particles with a size polydispersity of 3%. These two types of particles were used in the fabrication of rigid bead chains, whereas PS particles stabilized with a higher molecular weight PVP (Mw = 360 kg/mol) were used in the fabrication of semiflexible bead chains. Sterically stabilized PS particles were synthesized using the method of Song et al.55. The size of the sterically stabilized PS particles was 1.40 μm with a size polydispersity of 5%. The particle size and polydispersity were determined using static light scattering and scanning electron microscopy. We note that, given our camera frame rate, it was easier to follow the dynamics of tracer particles around rigid bead chains using bigger particles (2.1 μm), because those chains rotate faster; smaller particles (1.0 μm) were sufficient to capture the tracer dynamics in the semiflexible case.
Janus particle preparation We first prepared a monolayer of spherical particles by slowly drying particles from a dilute suspension (0.05 wt%) on clean microscope glass slides. A 15 nm thick layer of platinum (Pt) was then deposited vertically using a sputter coater (Cressington 208HR). The slides were thoroughly washed with DI water, and the particles were subsequently detached by sonicating the slides in DI water for 10 min. Next, the particles were washed a couple of times with DI water, and the suspension with the desired concentration was transferred to the electric cell. Electric-field setup The electric cell consisted of a capillary with a 0.1 mm × 1.0 mm cross section and two 50 µm thick nickel-alloy wires (Goodfellow) threaded along the side walls56,57. We used a function generator (Agilent, Model 3312 OA) and a wide-band voltage amplifier (Krohn-Hite, Model 7602M) to generate the electric fields. We used high-frequency AC fields to minimize polarization effects of the electric double layer around the particles. The field strength and the frequency were measured with an oscilloscope (Tektronix, Model TDS3052). We used a homemade DC filter to remove the DC component of the signal. Fixation of bead chains After filling the cell with the Janus colloidal particle suspension, we sealed both ends of the capillary with UV-curing optical adhesive (Norland No. 68) and cured the glue with UV light (λ = 350 nm, UVGL-58 UV lamp). The dispersion was then exposed to an AC field (Erms = 0.04–0.05 Vμm−1, f = 800 kHz), with a ramping time of about 2 min. After 2–3 min, all the particles had assembled into zig-zag chains, one particle wide, in the field direction. The dispersion was subsequently heated to 60–65 °C, well below the glass transition temperature (Tg ≈ 107 °C) of polystyrene47, for about 2–3 minutes using a hot-air stream that was much wider than the sample cell34,39,58.
We further kept the field on for 2–3 mins while the sample cooled down to room temperature. After the fixation step, we carefully opened both ends of the electric cell and the bead chains were subsequently transferred to a small Eppendorf tube (0.5 ml) for further dilution. The dynamics of the chains were studied in Lab-Tek chambered cover-glass (Thermo Fisher Scientific). Internal structure of staggered or zig-zag chains Our method yielded regular and staggered (zig-zag) linear chains, independent of whether it was used in the fabrication of rigid or semiflexible chains. We note that our fabrication method exploits the difference in the polarizability of the two sides (half-dielectric and half-metallic) of the Janus particles in external AC electric fields, which causes the zig-zag chain configuration, as shown in Fig. S1. In the case of semiflexible chains (persistence length l p ≈ 20 σ; contour length l c ≈ 6 σ), we expect that there always remains some correlation between neighboring particles, stemming from the zig-zag pattern at synthesis, while a combination of the flexibility of the chain and hydrodynamic propulsion may generate an asymmetry in the self-propulsion direction of the beads. Particle tracking We recorded the particle dynamics using an Olympus IX81 confocal microscope, equipped with an Andor iXon3 camera, Andor 400-series solid-state lasers, and a Yokogawa CSU-X1 spinning disk. Bright-field optical micrographs and videos were recorded using a Nikon microscope equipped with a CCD camera (Olympus CKX41). We processed the recorded images and extracted the centroids of each particle in a chain using the particle tracking programs of Rogers et al.50. We then calculated the orientation of the chain from the pixel coordinates of the first and last beads in the chain. To uniquely identify both ends of the chain, it is required that the rotation angle of the chain between successive frames is less than π/2.
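The orientation extraction and the π/2 frame-to-frame condition can be sketched as follows (a minimal illustration of our own; the function names are not from the tracking software of Rogers et al.50):

```python
import math

def chain_orientation(first, last):
    """Orientation (rad) of the vector from the first to the last bead centroid."""
    return math.atan2(last[1] - first[1], last[0] - first[0])

def unwrap_orientations(raw_angles):
    """Resolve the head/tail ambiguity of the chain axis.  The measured axis
    angle is only defined up to a multiple of pi, so each frame is shifted by
    k*pi to land within pi/2 of the previous frame -- valid precisely because
    the true rotation between successive frames is below pi/2."""
    out = [raw_angles[0]]
    for a in raw_angles[1:]:
        k = round((out[-1] - a) / math.pi)
        out.append(a + k * math.pi)
    return out
```

Once the unwrapped angle sequence is available, the angular velocity and the mean squared angular displacement follow directly from frame-to-frame differences.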
Mean squared displacement of rigid chains The MSAD and MSDs can be determined by solving the equations of motion for a circular swimmer52. The overdamped equations can be written in 2D as $$\frac{dx(t)}{dt}=\nu \,\cos \,\theta (t)+{\xi }_{1}(t)$$ $$\frac{dy(t)}{dt}=\nu \,\sin \,\theta (t)+{\xi }_{2}(t)$$ $$\frac{d\theta (t)}{dt}=\omega +\zeta (t)$$ where ω is the rotational velocity, x and y are the center of mass coordinates, \(\nu \) is the propulsion velocity, and θ is the orientation of the bead chain. The Brownian noise terms, \(\xi \), \(\zeta \), are Gaussian random variables with zero mean whose magnitudes are taken from theoretical Brownian diffusivities. By solving Eqs (1)–(3) we obtain $$\mathrm{MSAD}:\quad \langle \Delta \theta^{2}\rangle =\langle [\theta (t)-\theta (0)]^{2}\rangle =\omega^{2}t^{2}+2D_{r}t$$ $$\mathrm{MSD}:\quad \langle \Delta L^{2}\rangle =\frac{2\nu^{2}}{(D_{r}^{2}+\omega^{2})^{2}}\Big[(D_{r}^{2}+\omega^{2})D_{r}t+\omega^{2}-D_{r}^{2}+e^{-D_{r}t}\big((D_{r}^{2}-\omega^{2})\cos (\omega t)-2\omega D_{r}\sin (\omega t)\big)\Big]+2(D_{\parallel }+D_{\perp })t$$ where D r is the rotational diffusion coefficient, \({D}_{\parallel }\) and \({D}_{\perp }\) are the translational diffusion coefficients along the major and minor axes, and \(\nu \) is the propulsion velocity. Resistance tensor, bead-shell model and equations of motion Often, the catalytically powered self-propulsion of (half-)platinum coated particles, such as the bead chains considered in this work, is modelled by a slip-velocity on the active part of the particle surface.
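These closed-form expressions are easy to mis-transcribe, so a small numerical sketch serves as a consistency check. The helper functions below (our own, with all parameter values illustrative) implement Eqs (4) and (5) and verify that the circle-swimmer MSD reduces to the standard active-Brownian-particle result in the limit ω → 0:

```python
import math

def msad(t, omega, Dr):
    """Eq. (4): mean squared angular displacement of a circle swimmer."""
    return omega**2 * t**2 + 2.0 * Dr * t

def msd(t, v, omega, Dr, Dpar, Dperp):
    """Eq. (5): in-plane mean squared displacement of a circle swimmer."""
    a = Dr**2 + omega**2
    osc = (Dr**2 - omega**2) * math.cos(omega * t) - 2.0 * omega * Dr * math.sin(omega * t)
    return (2.0 * v**2 / a**2) * (a * Dr * t + omega**2 - Dr**2
                                  + math.exp(-Dr * t) * osc) + 2.0 * (Dpar + Dperp) * t

def msd_abp(t, v, Dr, Dpar, Dperp):
    """omega -> 0 limit of Eq. (5): the standard active-Brownian-particle MSD."""
    return (2.0 * v**2 / Dr**2) * (Dr * t - 1.0 + math.exp(-Dr * t)) + 2.0 * (Dpar + Dperp) * t
```

Note that the MSAD is ballistic (ω²t²) at long times and diffusive (2D_r t) at short times, which is how ω and D_r can be extracted from experimental MSAD data.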
Moreover, since there are no external forces acting on the bead chains, the total hydrodynamic force and torque on the chain both vanish, such that the flow field around the bead chain is, to leading order, at most of dipolar character. However, as was proven by Ten Hagen et al.59, the motion of a self-propelled particle is equivalent to that of a passive particle (of identical shape) that is driven by an effective external force F eff and torque T eff. As described in ref.59, if both the distribution of the slip-velocity on the active particle surface and the flow field solutions around the driven passive particle for arbitrary external force and torque are known, the motion of the active particle may be solved by taking suitable choices of the driving forces in the conjugate passive particle system. Subsequently, the effective force F eff and torque T eff are calculated from the passive case by imposing the velocity and angular velocity obtained in the previous step. Due to the linearity of low-Reynolds number hydrodynamics, the force and torque relate to the particle velocity U and angular velocity ω in a linear fashion: $${\mathscr{F}}=\eta \,{\mathscr{R}}\,{\mathscr{V}}$$ where the 6-vector \({\mathscr{F}}=({{\boldsymbol{F}}}^{{\boldsymbol{e}}{\boldsymbol{f}}{\boldsymbol{f}}},{{\boldsymbol{T}}}^{{\boldsymbol{e}}{\boldsymbol{f}}{\boldsymbol{f}}})\) denotes the (effective) force and torque on the particle, the 6-vector \({\mathscr{V}}=({\boldsymbol{U}},{\boldsymbol{\omega }})\) denotes the (angular) velocity with respect to a chosen reference frame, and the 6 × 6 tensor \( {\mathcal R} \) denotes the hydrodynamic resistance tensor, which itself depends on the shape of the self-propelled particle under consideration. The linearity of this dependence is due to the linearity of the Stokes equation that governs the hydrodynamics at low Reynolds numbers.
Here, rather than calculating the effective force and torque from the slip-velocity on the active bead chains and the flow field solutions of the passive problem, we adopt the opposite strategy by directly determining the effective force and torque from bead chain systems for which both the motion and the geometry are accurately known (from the experiment). To this end, we numerically (using a bead-shell model, see p. 21) determine the tensor \( {\mathcal R} \) for the rigid bead chains, for which the (angular) velocity is measured in the experiment: U = 0, while \({\boldsymbol{\omega }}=\omega \hat{{\boldsymbol{z}}}\), which leads to \({{\boldsymbol{F}}}^{eff}=0\) and \({{\boldsymbol{T}}}^{eff}={T}^{eff}\hat{{\boldsymbol{z}}}\), where \({T}^{eff}\) is solved from the equation of motion \({\mathscr{F}}=\eta \,{\mathscr{R}}\,{\mathscr{V}},\) after we have determined the resistance tensor \( {\mathcal R} \). To proceed, we decompose the effective force and torque into effective forces acting on each of the individual Janus spheres that constitute the chain, as $${{\boldsymbol{F}}}^{{\boldsymbol{eff}}}=\sum _{i}{{\boldsymbol{F}}}_{i}\,{\rm{and}}\,{{\boldsymbol{T}}}^{{\boldsymbol{eff}}}=\sum _{i}{{\boldsymbol{F}}}_{i}\times ({{\boldsymbol{r}}}_{i}-{{\boldsymbol{r}}}_{cm})$$ where we assume that the force magnitude |F i | is equal for all i, while the direction is determined by the orientation of the catalytic hemispheres: pointing along the axis that connects the 'south' pole on the passive hemisphere to the 'north' pole of the active hemisphere, as shown in, e.g., Fig. 9 of the main text. This assumption is equivalent to assuming that the (chemical) interaction between catalytic surfaces is small, such that all F i are of equal magnitude, and that the symmetry of the active hemisphere is preserved such that the (effective) contributions of individual torques on each Janus sphere may be neglected, as would be the case for a single Janus sphere.
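The decomposition in Eq. (7) can be made concrete with a small numerical sketch. Below we place six bead centers along a chain, assign each bead an equal-magnitude propulsion force perpendicular to the chain axis with alternating direction (our reading of the zig-zag arrangement; the angular velocity used here is an illustrative number, not a measurement), check that the net force vanishes while a net torque remains, and solve |F_i| from Eq. (6):

```python
import numpy as np

# illustrative parameters; omega_meas is a made-up value, not a measurement
sigma = 1.4e-6       # bead diameter [m], from the particle size above
eta = 0.95e-3        # viscosity of water [Pa s]
R66 = 430e-18        # rotational resistance around z [m^3] (bead-shell value)
omega_meas = 0.75    # angular velocity of a rigid 6-bead chain [rad/s]

n = 6
x = (np.arange(n) - (n - 1) / 2) * sigma              # bead centers on the x-axis
r = np.column_stack([x, np.zeros(n), np.zeros(n)])
# unit propulsion directions: perpendicular to the chain axis, alternating sign
fdir = np.column_stack([np.zeros(n), (-1.0) ** np.arange(n), np.zeros(n)])

# Eq. (7) with |F_i| = 1: net force and net torque about the center of mass
F_net_unit = fdir.sum(axis=0)                          # vanishes: a pure rotator
T_net_unit = np.cross(r - r.mean(axis=0), fdir).sum(axis=0)

# Eq. (6) for pure rotation around z: T_eff = eta * R66 * omega, hence
Fmag = eta * R66 * omega_meas / abs(T_net_unit[2])     # |F_i| in newtons (~0.07 pN)
```

The alternating in-plane forces cancel pairwise (no net propulsion) but their lever arms do not, leaving a net torque around z, which is the mechanism behind the rigid rotators.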
Since an alternating ('zig-zag') orientation of the active hemispheres is clearly observed in the experiment, the magnitude |F i | is the only unknown and may be solved by plugging Eq. (7) into Eq. (6) for the rigid rotating chains, as described in the next section. Figure caption: A (constructed) configuration of effective propulsion forces that gives rise to a rotation of the bead chain. The forces are all perpendicular to the axis that connects the two end beads, which is lying on a horizontal plane in the experiment. The forces differ by a rotation around this axis, such that the resulting rotation is predominantly around this axis. In this work, we calculate the rigid body resistance tensor of (active) bead chains using a bead-shell model60,61, in which the surface of a rigid body is homogeneously covered by a large number (M) of small spheres of radius a. When this cluster of M spheres is given a non-zero common velocity, a disturbance flow field is created, which in turn causes hydrodynamic interactions between the small spheres that are given by the Rotne-Prager mobility tensor62,63. The forces on the individual spheres can then be calculated by a 3M × 3M matrix inversion, from which the total force and torque on the rigid body follow as the sum of the individual forces and torques around a chosen reference point. Subsequently, these results are extrapolated to \(M\to \infty \), while keeping Ma² finite, where typically \(M\) = 1000–3000 spheres are used in this procedure. In this large-\(M\) limit, we retrieve the boundary integral formulation of the Stokes equation, guaranteeing accurate results for the resistance tensor \( {\mathcal R} \). Self-propelled rigid bead chains We first validate our model for the active rigid bead chains by comparing the rotational diffusion coefficient for the 6-bead chain as obtained from the bead-shell model calculations to the experimental data.
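A minimal version of such a bead-shell calculation (our own sketch, not the production code of refs 60,61, and without the M → ∞ extrapolation) covers a sphere with M small beads, builds the Rotne-Prager mobility matrix, and solves for the bead forces under a common rigid-body velocity:

```python
import numpy as np

def fibonacci_sphere(M, R=1.0):
    """M roughly equidistant points on a sphere of radius R."""
    i = np.arange(M) + 0.5
    phi = np.arccos(1.0 - 2.0 * i / M)          # polar angles
    theta = np.pi * (1.0 + 5.0 ** 0.5) * i      # golden-angle azimuths
    return R * np.column_stack([np.sin(phi) * np.cos(theta),
                                np.sin(phi) * np.sin(theta),
                                np.cos(phi)])

def shell_drag(M=300, R=1.0, eta=1.0):
    """Translational drag coefficient of a sphere from a single-shell bead model."""
    pts = fibonacci_sphere(M, R)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    a = 0.5 * d[d > 0].min()                    # bead radius: no overlaps, RP valid
    mob = np.zeros((3 * M, 3 * M))
    eye = np.eye(3)
    for i in range(M):
        mob[3*i:3*i+3, 3*i:3*i+3] = eye / (6.0 * np.pi * eta * a)
        for j in range(i + 1, M):
            rv = pts[i] - pts[j]
            rr = np.linalg.norm(rv)
            hat = np.outer(rv, rv) / rr**2
            # Rotne-Prager mobility tensor for non-overlapping beads (rr >= 2a)
            blk = ((1.0 + 2.0 * a**2 / (3.0 * rr**2)) * eye
                   + (1.0 - 2.0 * a**2 / rr**2) * hat) / (8.0 * np.pi * eta * rr)
            mob[3*i:3*i+3, 3*j:3*j+3] = blk
            mob[3*j:3*j+3, 3*i:3*i+3] = blk
    # rigid-body translation: every bead moves with U = (1, 0, 0); solve for forces
    U = np.tile([1.0, 0.0, 0.0], M)
    F = np.linalg.solve(mob, U)
    return F.reshape(M, 3).sum(axis=0)[0]       # total drag force per unit speed
```

For a sphere the total drag should approach the Stokes value 6πηR; with M of a few hundred and no extrapolation this sketch already recovers it to within roughly ten percent, illustrating why the extrapolated bead-shell results are so accurate.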
Assuming that the direction perpendicular to the plane of view of the microscope is the z-axis, the relevant rotational resistance coefficient is R 66, from which the rotational diffusion coefficient follows via the Stokes-Einstein relation as D r = k B T/(η R 66). Here, using the viscosity of water and the experimental size of the Janus spheres, we find D r = 0.0096 rad²/s, which is consistent with the experimentally measured value. We include the experimental results for the rotational speed of the rigid chains in the presence of the fuel, the rotational diffusivity of the chains in water, and the rotational and translational speeds of the semiflexible chains in the fuel in the supplementary information (see Table SII). For completeness, we give the complete resistance tensor in the paragraph below. Next, we use the measurements of the rotational velocity of the rigid bead chains to estimate the value of the effective propulsion force acting on each individual Janus sphere. Note that in each case, we try to stay as close as possible to the observed 'zig-zag' shape, as shown in Fig. S5, where we estimate from experimental snapshots that the angle between the vectors connecting the centers of adjacent Janus spheres is 0.15π on average. Also, the distribution of the propulsion forces is arranged in an alternating zig-zag fashion. The validity of our model and the consistency of our assumptions, Eq. (7), are verified in Fig. 7, where we compare the experimentally observed angular velocity ω with the angular velocity that we calculate from Eq. (6) for chains of different length, where the value of |F i | is averaged over the data of the rigid bead chains.
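The quoted diffusivity follows directly from the Stokes-Einstein relation; a quick numerical check (assuming room temperature and the viscosity of water at 20 °C, with R 66 taken from the resistance tensor given in the text) reproduces the reported order of magnitude:

```python
kB = 1.380649e-23     # Boltzmann constant [J/K]
T = 293.0             # room temperature [K] (assumed)
eta = 1.0e-3          # viscosity of water at 20 degC [Pa s] (assumed)
R66 = 430e-18         # rotational resistance around z: 430 um^3 in SI units [m^3]

Dr = kB * T / (eta * R66)         # Stokes-Einstein: D_r = k_B T / (eta * R_66)
print(f"D_r = {Dr:.4f} rad^2/s")  # ~0.0094 rad^2/s, same order as the quoted 0.0096
```

The small difference from the quoted 0.0096 rad²/s is within the spread set by the assumed temperature and viscosity.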
The complete 6 × 6 resistance tensor of a rigid linear 'zigzag' bead chain of 6 beads, aligned along the x-axis, is found to be: $${\mathscr{R}}=(\begin{array}{cccccc}38.0\,\mu m & -0.35\,\mu m & -0.042\,\mu m & 0.097\,\mu {m}^{2} & -0.0041\,\mu {m}^{2} & -0.015\,\mu {m}^{2}\\ -0.35\,\mu m & 30.0\,\mu m & -0.056\,\mu m & -0.005\,\mu {m}^{2} & -0.01\,\mu {m}^{2} & 0.022\,\mu {m}^{2}\\ -0.042\,\mu m & -0.056\,\mu m & 38.0\,\mu m & 0.022\,\mu {m}^{2} & -0.0039\,\mu {m}^{2} & -0.09\,\mu {m}^{2}\\ 0.097\,\mu {m}^{2} & -0.005\,\mu {m}^{2} & 0.022\,\mu {m}^{2} & 430.0\,\mu {m}^{3} & -11.0\,\mu {m}^{3} & 1.1\,\mu {m}^{3}\\ -0.0041\,\mu {m}^{2} & -0.01\,\mu {m}^{2} & -0.0039\,\mu {m}^{2} & -11.0\,\mu {m}^{3} & 46.0\,\mu {m}^{3} & 0.15\,\mu {m}^{3}\\ -0.015\,\mu {m}^{2} & 0.022\,\mu {m}^{2} & -0.09\,\mu {m}^{2} & 1.1\,\mu {m}^{3} & 0.15\,\mu {m}^{3} & 430.0\,\mu {m}^{3}\end{array})$$ With the magnitude |F i | determined from the rigid chains and the consistency of our assumptions checked, we apply this model to the motion of the semiflexible bead chains. As mentioned above, we model the semiflexible bead chains by introducing shape anisotropy to the rigid bead chains, such that the centers of the beads lie on a helix with radius r and pitch p. We note that the aligned chain (p → ∞) and the c-shaped chain (p = 0) are special cases of this parameterization. Moreover, we consider a propulsion force distribution along the bead chain that is arranged in such a way that the chain will start rotating around the axis that connects the two ends, when the chain does not coincide completely with this axis. The propulsion forces are all perpendicular to the axis that connects the end points of the bead chain, and differ by a rotation around this axis. Below, we denote the angular velocity around this axis by ω || , while we represent the projected (2-dimensional) velocity in the xy-plane by v 2D. 
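The helical parameterization of the semiflexible chain shapes can be sketched as follows (our own construction: bead centers equally spaced in arc length along a helix of radius r and pitch p; the straight chain is recovered as p → ∞ and the c-shape as p = 0):

```python
import numpy as np

def helix_chain(n_beads, r, p, sigma=1.0):
    """Bead centers spaced sigma apart in arc length along the helix
    x = r cos t, y = r sin t, z = p t / (2 pi).  For tight helices the
    center-to-center (chord) distance is slightly below sigma."""
    if r == 0.0:                                 # degenerate case: straight chain
        return np.column_stack([np.zeros(n_beads), np.zeros(n_beads),
                                sigma * np.arange(n_beads)])
    dt = sigma / np.hypot(r, p / (2.0 * np.pi))  # arc length per unit t
    t = dt * np.arange(n_beads)
    return np.column_stack([r * np.cos(t), r * np.sin(t), p * t / (2.0 * np.pi)])
```

Sweeping r and p over a grid and feeding each resulting shape into the bead-shell resistance calculation is how parameter scans like those in Figs S6, S7 and 10 can be set up.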
Note that, as in the case of the rotating rigid bead chains, the net force on the chain vanishes. Therefore, any translational motion is entirely caused by a (shape-dependent) translation-rotation coupling in the resistance tensor described above, and is in fact independent of our effective description using propulsion forces (that we have obtained from the rigid case). We make a comparison between the theoretically calculated velocities associated with these bead chains and the velocity measured in the experiment, which is 0.9 µm/s = 0.64 σ/s. In Fig. S6, we plot the velocity of helical bead chains as a function of the helical radius r/σ and pitch α as obtained theoretically using our bead-shell model. We observe that over a large range of shapes, the velocity is around 0.2 σ/s. We also investigate the rotational velocity ω || for the same range of shapes, as shown in Fig. S7. Here, we observe a wide range of shapes that show an angular velocity ω || of 3.0 s−1, which compares well to the value of 3.5 s−1 in the experiments. Finally, we studied the ratio between rotation and translation velocity, which as described above, is purely an effect of the shape of the bead chain and is independent of the magnitude of the effective propulsion force. In Fig. 10 we show this ratio as a function of helical shape, and observe that this ratio is around 0.08 σ for a range of helical shapes, whereas the experimentally measured ratio is 0.18 σ. We conclude that our minimal model is able to predict this ratio up to a factor of two, which is striking given the simplicity of the model. Also, the spiraling motion observed from the animations coincides well with the experimentally observed motion. A better understanding of experimental uncertainties or a more detailed hydrodynamic model may improve these predictions. 
Figure caption: Ratio between the in-plane velocity v 2D and the angular velocity ω || around the bead chain axis, for a range of helically shaped bead chains characterized by the helical radius r/σ and pitch p/σ. Elgeti, J., Winkler, R. G. & Gompper, G. Physics of microswimmers-single particle motion and collective behavior: a review. Rep. Prog. Phys. 78, 056601 (2015). Bialké, J., Speck, T. & Löwen, H. Active colloidal suspensions: Clustering and phase behavior. J. Non-Crystalline Solids 407, 367–375 (2015). Wang, J. & Gao, W. Nano/Microscale motors: biomedical opportunities and challenges. Acs Nano 6, 5745–5751 (2012). Sengupta, S., Ibele, M. E. & Sen, A. Fantastic Voyage: Designing Self-Powered Nanorobots. Angew. Chem. Int. Edn. 51, 8434–8445 (2012). Sanchez, S., Soler, L. & Katuri, J. Chemically Powered Micro- and Nanomotors. Angew. Chem. Int. Edit. 54, 1414–1444 (2015). Bechinger, C. et al. Active particles in complex and crowded environments. Rev. Modern Phys. 88, 045006 (2016). Palacci, J. et al. Light-activated self-propelled colloids. Philos Trans A Math Phys Eng Sci 372, 0372 (2014). Ginot, F. et al. Nonequilibrium equation of state in suspensions of active colloids. Phys. Rev. X 5, 011004 (2015). Theurkauff, I., Cottin-Bizonne, C., Palacci, J., Ybert, C. & Bocquet, L. Dynamic Clustering in Active Colloidal Suspensions with Chemical Signaling. Phys. Rev. Lett. 108, 268303 (2012). Howse, J. R. et al. Self-motile colloidal particles: From directed propulsion to random walk. Phys. Rev. Lett. 99, 048102 (2007). Yan, J. et al. Reconfiguring active particles by electrostatic imbalance. Nat. Mater. 15, 1095–1097 (2016). Vutukuri, H. R. et al. Dynamic self-organization of side-propelling colloidal rods: experiments and simulations. Soft Matter 12, 9657–9665 (2016). Wang, W., Duan, W., Sen, A. & Mallouk, T. E. Catalytically powered dynamic assembly of rod-shaped nanomotors and passive tracer particles. Proc. Natl. Acad. Sci.
USA 110, 17744–17749 (2013). Narayan, V., Ramaswamy, S. & Menon, N. Long-lived giant number fluctuations in a swarming granular nematic. Science 317, 105–108 (2007). Sokolov, A., Apodaca, M. M., Grzybowski, B. A. & Aranson, I. S. Swimming bacteria power microscopic gears. Proc. Natl. Acad. Sci. USA 107, 969–974 (2010). Di Leonardo, R. et al. Bacterial ratchet motors. Proc. Natl. Acad. Sci. USA 107, 9541–9545 (2010). Chaturvedi, N., Hong, Y. Y., Sen, A. & Velegol, D. Magnetic Enhancement of Phototaxing Catalytic Motors. Langmuir 26, 6308–6313 (2010). Kummel, F. et al. Circular Motion of Asymmetric Self-Propelling Particles. Phys. Rev. Lett. 110, 198302 (2013). Wang, W., Duan, W., Ahmed, S., Sen, A. & Mallouk, T. E. From one to many: dynamic assembly and collective behavior of self-propelled colloidal motors. Acc. Chem. Res. 48, 1938–1946 (2015). Lauga, E., DiLuzio, W. R., Whitesides, G. M. & Stone, H. A. Swimming in circles: motion of bacteria near solid boundaries. Biophys J. 90, 400–412 (2006). Drescher, K. et al. Dancing Volvox: Hydrodynamic Bound States of Swimming Algae. Phys. Rev. Lett. 102, 168101 (2009). Qin, L. D., Banholzer, M. J., Xu, X. Y., Huang, L. & Mirkin, C. A. Rational design and synthesis of catalytically driven nanorotors. J. Am. Chem. Soc. 129, 14870–14872 (2007). Wang, Y. et al. Dynamic Interactions between Fast Microscale Rotors. J. Am. Chem. Soc. 131, 9926–9927 (2009). Kummel, F. et al. "Circular Motion of Asymmetric Self-Propelling Particles" Reply. Phys. Rev. Lett. 113, 029802 (2014). Ebbens, S., Jones, R. A. L., Ryan, A. J., Golestanian, R. & Howse, J. R. Self-assembled autonomous runners and tumblers. Phys. Rev. E 82, 015304 (2010). Boymelgreen, A., Yossifon, G., Park, S. & Miloh, T. Spinning Janus doublets driven in uniform ac electric fields. Phys. Rev. E 89, 011003 (2014). Zhang, L. et al. Artificial bacterial flagella: Fabrication and magnetic control. Appl. Phys. Lett. 94, 3079655 (2009). Pak, O. S., Gao, W., Wang, J. & Lauga, E. 
High-speed propulsion of flexible nanowire motors: Theory and experiments. Soft Matter 7, 8169–8181 (2011). Gao, W., Sattayasamitsathit, S., Manesh, K. M., Weihs, D. & Wang, J. Magnetically Powered Flexible Metal Nanowire Motors. J. Am. Chem. Soc. 132, 14403–14405 (2010). Gao, W. et al. Bioinspired Helical Microswimmers Based on Vascular Plants. Nano Lett. 14, 305–310 (2014). Dreyfus, R. et al. Microscopic artificial swimmers. Nature 437, 862–865 (2005). Soto, R. & Golestanian, R. Self-assembly of active colloidal molecules with dynamic function. Phys. Rev. E 91, 052304 (2015). Li, F., Josephson, D. P. & Stein, A. Colloidal Assembly: The Road from Particles to Colloidal Molecules and Crystals. Angew. Chem. Int. Edn. 50, 360–388 (2011). Vutukuri, H. R. et al. Colloidal analogues of charged and uncharged polymer chains with tunable stiffness. Angew. Chem. Int. Edn. 51, 11249–11253 (2012). Vutukuri, H. R., Imhof, A. & van Blaaderen, A. Fabrication of polyhedral particles from spherical colloids and their self-assembly into rotator phases. Angew. Chem. Int. Edn. 53, 13830–13834 (2014). Duguet, E., Desert, A., Perro, A. & Ravaine, S. Design and elaboration of colloidal molecules: an overview. Chem. Soc. Rev. 40, 941–960 (2011). Yethiraj, A. Tunable colloids: control of colloidal phase transitions with tunable interactions. Soft Matter 3, 1099–1115 (2007). Zerrouki, D., Baudry, J., Pine, D., Chaikin, P. & Bibette, J. Chiral colloidal clusters. Nature 455, 380–382 (2008). Smallenburg, F., Vutukuri, H. R., Imhof, A., van Blaaderen, A. & Dijkstra, M. Self-assembly of colloidal particles into strings in a homogeneous external electric or magnetic field. J. Phys. Condens. Matter. 24, 464113 (2012). Peng, B., Vutukuri, H. R., van Blaaderen, A. & Imhof, A. Synthesis of fluorescent monodisperse non-spherical dumbbell-like model colloids. J. Mater. Chem. 22, 21893–21900 (2012). Biswal, S. L. & Gast, A. P.
Mechanics of semiflexible chains formed by poly(ethylene glycol)-linked paramagnetic particles. Phys. Rev. E. 68, 021402 (2003). Byrom, J., Han, P., Savory, M. & Biswal, S. L. Directing Assembly of DNA-Coated Colloids with Magnetic Fields To Generate Rigid, Semiflexible, and Flexible Chains. Langmuir 30, 9045–9052 (2014). Gangwal, S., Cayre, O. J. & Velev, O. D. Dielectrophoretic assembly of metallodielectric Janus particles in AC electric fields. Langmuir 24, 13312–13320 (2008). Ramirez, L. M. et al. Polloidal Chains from Self-Assembly of Flattened Particles. Langmuir 29, 10340–10345 (2013). Hunter, R. J. Foundations of Colloid Science. (Oxford University Press, 2001). Mazur, S., Beckerbauer, R. & Buckholz, J. Particle size limits for sintering polymer colloids without viscous flow. Langmuir 13, 4287–4294 (1997). Ebbens, S. et al. Electrokinetic effects in catalytic platinum-insulator Janus swimmers. Epl-Europhys Lett. 106, 58003 (2014). Brown, A. & Poon, W. Ionic effects in self-propelled Pt-coated Janus swimmers. Soft Matter 10, 4016–4027 (2014). Rogers, S. S., Waigh, T. A., Zhao, X. B. & Lu, J. R. Precise particle tracking against a complicated background: polynomial fitting with Gaussian weight. Phys. Biol 4, 220–227 (2007). Tirado, M. M., Martínez, C. L. & de la Torre, J. G. Comparison of theories for the translational and rotational diffusion coefficients of rod‐like macromolecules. J. Chem. Phys. 81, 2047–2052 (1984). van Teeffelen, S. & Lowen, H. Dynamics of a Brownian circle swimmer. Phys. Rev. E 78, 020101 (2008). Purcell, E. M. Life at low Reynolds number. Am. J. Phys 45, 3–11 (1977). Goodwin, J., Hearn, J., Ho, C. & Ottewill, R. Studies on the preparation and characterisation of monodisperse polystyrene latices. Colloid and Polymer Science 252, 464–471 (1974). Song, J. S., Tronc, F. & Winnik, M. A. Two-stage dispersion polymerization toward monodisperse, controlled micrometer-sized copolymer particles. J. Am. Chem. Soc. 126, 6562–6563 (2004).
Vutukuri, H. R. et al. An experimental and simulation study on the self-assembly of colloidal cubes in external electric fields. Soft Matter 10, 9110–9119 (2014). Vutukuri, H. R., Badaire, S., Matthijs de Winter, D. A., Imhof, A. & van Blaaderen, A. Directed Self-Assembly of Micron-Sized Gold Nanoplatelets into Oriented Flexible Stacks with Tunable Interplate Distance. Nano Letters 15(8), 5617–5623 (2015). Vutukuri, H. R., Stiefelhagen, J., Vissers, T., Imhof, A. & van Blaaderen, A. Bonding assembled colloids without loss of colloidal stability. Adv. Mater. 24, 412–416 (2012). ten Hagen, B. et al. Can the self-propulsion of anisotropic microswimmers be described by using forces and torques? J Phys-Condens Mat 27, 194110 (2015). de la Torre, J. G., Huertas, M. L. & Carrasco, B. Calculation of hydrodynamic properties of globular proteins from their atomic-level structure. Biophysical journal 78, 719–730 (2000). Bet, B., Boosten, G., Dijkstra, M. & van Roij, R. Efficient shapes for microswimming: From three-body swimmers to helical flagella. J. Chem. Phys. 146, 4976647 (2017). Rotne, J. & Prager, S. Variational Treatment of Hydrodynamic Interaction in Polymers. J. Chem. Phys. 50, 4831–4837 (1969). Wajnryb, E., Mizerski, K. A., Zuk, P. J. & Szymczak, P. Generalization of the Rotne-Prager-Yamakawa mobility and shear disturbance tensors. J. Fluid Mechanics 731, 402 (2013). We would like to thank Frank Smallenburg for useful discussions. W.T.S.H. acknowledges financial support by the European Research Council Advanced Grant (246812 intercom), H.R.V. was partially supported by a Marie Skłodowska-Curie Intra European Fellowship (G.A. No. 708349- SPCOLPS) within Horizon 2020, and R.v.R. and B.B. by a NWO-VICI grant of the Netherlands Organization for Scientific Research.
Hanumantha Rao Vutukuri Present address: Soft Materials, Department of Materials, ETH Zurich, 8093, Zurich, Switzerland Institute for Molecules and Materials, Radboud University, Heyendaalseweg 135, 6525 AJ, Nijmegen, The Netherlands Hanumantha Rao Vutukuri & Wilhelm T. S. Huck Institute for Theoretical Physics, Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC, Utrecht, The Netherlands Bram Bet & René van Roij Soft Condensed Matter, Debye Institute for Nanomaterials Science, Utrecht University, Princetonplein 1, 3584 CC, Utrecht, The Netherlands Marjolein Dijkstra H.R.V. conceived the research and designed the experiments. H.R.V. performed all the experiments and analysis. W.T.S.H. supervised the research. B.B., R.v.R., and M.D. developed the numerical models. B.B. performed the calculations. H.R.V., B.B., and W.T.S.H. wrote the manuscript. All authors discussed the results and commented on the manuscript. Correspondence to Hanumantha Rao Vutukuri, Marjolein Dijkstra or Wilhelm T. S. Huck. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Supporting movie 1 Vutukuri, H.R., Bet, B., van Roij, R. et al. Rational design and dynamics of self-propelled colloidal bead chains: from rotators to flagella. Sci Rep 7, 16758 (2017). https://doi.org/10.1038/s41598-017-16731-5
Instability of standing waves for the inhomogeneous Gross-Pitaevskii equation Yongbin Wang 1, Binhua Feng 2 Department of Basic Teaching and Research, Qinghai University, Xining, 810016, China Department of Mathematics, Northwest Normal University, Lanzhou, 730070, China Received: 22 February 2020 Accepted: 08 May 2020 Published: 22 May 2020 MSC : 35Q55, 35A15 In this paper, we consider the instability of standing waves for an inhomogeneous Gross-Pitaevskii equation $ i\psi_t +\Delta \psi -a^2|x|^2\psi +|x|^{-b}|\psi|^{p}\psi = 0. $ This equation arises in the description of nonlinear waves such as the propagation of a laser beam in an optical fiber. We first prove that there exists $\omega_* > 0$ such that for all $\omega > \omega_*$, the standing wave $\psi(t, x) = e^{i\omega t}u_\omega(x)$ is unstable. Then, we deduce that if $\partial_\lambda^2S_\omega(u_\omega^\lambda)|_{\lambda = 1}\leq 0$, the ground state standing wave $e^{i\omega t}u_\omega(x)$ is strongly unstable by blow-up, where $u_\omega^\lambda(x) = \lambda^{\frac{N}{2}}u_\omega(\lambda x)$ and $S_\omega$ is the action. This result is a complement to the partial result of Ardila and Dinh (Z. Angew. Math. Phys. 2020), where the strong instability of standing waves has been studied under a different assumption. inhomogeneous Gross-Pitaevskii equation, strong instability, ground state Citation: Yongbin Wang, Binhua Feng. Instability of standing waves for the inhomogeneous Gross-Pitaevskii equation[J]. AIMS Mathematics, 2020, 5(5): 4596-4612. doi: 10.3934/math.2020295 [1] G. P. Agrawal, Nonlinear Fiber Optics, Academic Press, 2007. [2] G. Baym, C. J. Pethick, Ground state properties of magnetically trapped Bose-Einstein condensate rubidium gas, Phys. Rev. Lett., 76 (1996), 6-9. doi: 10.1103/PhysRevLett.76.6 [3] L. Pitaevskii, S. Stringari, Bose-Einstein condensation, International Series of Monographs on Physics, 116. The Clarendon Press, Oxford University Press, Oxford, 2003. [4] J.
Chen, On a class of nonlinear inhomogeneous Schrödinger equations, J. Appl. Math. Comput., 32 (2010), 237-253. doi: 10.1007/s12190-009-0246-5 [5] J. Chen, B. Guo, Sharp global existence and blowing up results for inhomogeneous Schrödinger equations, Discrete Contin. Dyn. Syst. Ser. B, 8 (2007), 357-367. [6] A. de Bouard, R. Fukuizumi, Stability of standing waves for nonlinear Schrödinger equations with inhomogeneous nonlinearities, Ann. Henri Poincaré, 6 (2005), 1157-1177. [7] V. D. Dinh, Blowup of H1 solutions for a class of the focusing inhomogeneous nonlinear Schrödinger equation, Nonlinear Analysis, 174 (2018), 169-188. doi: 10.1016/j.na.2018.04.024 [8] B. Feng, On the blow-up solutions for the nonlinear Schrödinger equation with combined power-type nonlinearities, J. Evol. Equ., 18 (2018), 203-220. doi: 10.1007/s00028-017-0397-z [9] B. Feng, H. Zhang, Stability of standing waves for the fractional Schrödinger-Choquard equation, Comput. Math. Appl., 75 (2018), 2499-2507. doi: 10.1016/j.camwa.2017.12.025 [10] F. Genoud, An inhomogeneous, L2-critical, nonlinear Schrödinger equation, Z. Anal. Anwend., 31 (2012), 283-290. doi: 10.4171/ZAA/1460 [11] F. Genoud, C. Stuart, Schrödinger equations with a spatially decaying nonlinearity: Existence and stability of standing waves, Discrete Contin. Dyn. Syst., 21 (2008), 137-186. doi: 10.3934/dcds.2008.21.137 [12] X. Luo, Stability and multiplicity of standing waves for the inhomogeneous NLS equation with a harmonic potential, Nonlinear Anal. Real World Appl., 45 (2019), 688-703. doi: 10.1016/j.nonrwa.2018.07.031 [13] J. Zhang, S. Zhu, Sharp energy criteria and singularity of blow-up solutions for the Davey-Stewartson system, Commun. Math. Sci., 17 (2019), 653-667. doi: 10.4310/CMS.2019.v17.n3.a4 [14] S. Zhu, Blow-up solutions for the inhomogeneous Schrödinger equation with L2 supercritical nonlinearity, J. Math. Anal. Appl., 409 (2014), 760-776. doi: 10.1016/j.jmaa.2013.07.029 [15] T.
Cazenave, Semilinear Schrödinger equations, Courant Lecture Notes in Mathematics vol. 10, New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 2003. [16] S. Le Coz, A note on Berestycki-Cazenave's classical instability result for nonlinear Schrödinger equations, Adv. Nonlinear Stud., 8 (2008), 455-463. [17] A. Bensouilah, V. D. Dinh, S. H. Zhu, On stability and instability of standing waves for the nonlinear Schrödinger equation with inverse-square potential, J. Math. Phys., 59 (2018), 18. [18] J. Chen, B. Guo, Strong instability of standing waves for a nonlocal Schrödinger equation, Physica D: Nonlinear Phenomena, 227 (2007), 142-148. doi: 10.1016/j.physd.2007.01.004 [19] Z. Cheng, Z. Shen, M. Yang, Instability of standing waves for a generalized Choquard equation with potential, J. Math. Phys., 58 (2017), 13. [20] Z. Cheng, M. Yang, Stability of standing waves for a generalized Choquard equation with potential, Acta Appl. Math., 157 (2018), 25-44. doi: 10.1007/s10440-018-0162-5 [21] V. D. Dinh, On instability of standing waves for the mass-supercritical fractional nonlinear Schrödinger equation, Z. Angew. Math. Phys., 70 (2019), 17. [22] B. Feng, Sharp threshold of global existence and instability of standing wave for the Schrödinger-Hartree equation with a harmonic potential, Nonlinear Anal. Real World Appl., 31 (2016), 132-145. doi: 10.1016/j.nonrwa.2016.01.012 [23] B. Feng, On the blow-up solutions for the fractional nonlinear Schrödinger equation with combined power-type nonlinearities, Commun. Pure Appl. Anal., 17 (2018), 1785-1804. doi: 10.3934/cpaa.2018085 [24] B. Feng, R. Chen, Q. Wang, Instability of standing waves for the nonlinear Schrödinger-Poisson equation in the L2-critical case, J. Dynam. Differential Equations, (2019), doi: 10.1007/s10884-019-09779-6. [25] B. Feng, J. Liu, H. Niu, et al.
Strong instability of standing waves for a fourth-order nonlinear Schrödinger equation with the mixed dispersions, Nonlinear Anal., 196 (2020), 111791. [26] R. Fukuizumi, M. Ohta, Instability of standing waves for nonlinear Schrödinger equations with potentials, Differ. Integral Equ., 16 (2003), 691-706. [27] R. Fukuizumi, M. Ohta, Strong instability of standing waves with negative energy for double power nonlinear Schrödinger equations, SUT J. Math., 54 (2018), 131-143. [28] M. Ohta, Strong instability of standing waves for nonlinear Schrödinger equations with harmonic potential, Funkcial. Ekvac., 61 (2018), 135-143. doi: 10.1619/fesi.61.135 [29] M. Ohta, Strong instability of standing waves for nonlinear Schrödinger equations with a partial confinement, Comm. Pure Appl. Anal., 17 (2018), 1671-1680. doi: 10.3934/cpaa.2018080 [30] R. Fukuizumi, M. Ohta, Strong instability of standing waves for nonlinear Schrödinger equations with attractive inverse power potential, Osaka J. Math., 56 (2019), 713-726. [31] Y. Wang, Strong instability of standing waves for Hartree equation with harmonic potential, Physica D: Nonlinear Phenomena, 237 (2008), 998-1005. doi: 10.1016/j.physd.2007.11.018 [32] J. Zhang, Sharp threshold for blowup and global existence in nonlinear Schrödinger equations under a harmonic potential, Comm. Partial Differential Equations, 30 (2005), 1429-1443. doi: 10.1080/03605300500299539 [33] J. Zhang, S. Zhu, Stability of standing waves for the nonlinear fractional Schrödinger equation, J. Dynam. Differential Equations, 29 (2017), 1017-1030. doi: 10.1007/s10884-015-9477-3 [34] A. H. Ardila, V. D. Dinh, Some qualitative studies of the focusing inhomogeneous Gross-Pitaevskii equation, Z. Angew. Math. Phys., 71 (2020), 24. [35] L. G. Farah, Global well-posedness and blow-up on the energy space for the inhomogeneous nonlinear Schrödinger equation, J. Evol. Equ., 16 (2016), 193-208. 
doi: 10.1007/s00028-015-0298-y Yongbin Wang Binhua Feng Yongbin Wang, Binhua Feng. Instability of standing waves for the inhomogeneous Gross-Pitaevskii equation[J]. AIMS Mathematics, 2020, 5(5): 4596-4612. doi: 10.3934/math.2020295
CommonCrawl
Effect of nitrogen addition on the carbon metabolism of soil microorganisms in a Calamagrostis angustifolia wetland of the Sanjiang Plain, northeastern China

Xiaohong Weng1,2, Xin Sui ORCID: orcid.org/0000-0002-4339-43081,2, Yingnan Liu3 na1, Libin Yang2,3 & Rongtao Zhang3 na1

Annals of Microbiology volume 72, Article number: 18 (2022)

Soil microorganisms are important mediators of land ecosystem functions and stability. However, nitrogen addition is known to affect the carbon-source utilization of soil microbial communities. Thus, this study sought to evaluate the effects of nitrogen addition on the carbon utilization capacity of soil microorganisms in the Sanjiang Plain wetland, northeastern China. Three nitrogen treatments (CK, 0 kg N ha−1 a−1; N40, 40 kg N ha−1 a−1; and N80, 80 kg N ha−1 a−1) were evaluated in the Honghe National Nature Reserve of the Sanjiang Plain. The carbon metabolism capacity of soil microorganisms in the C. angustifolia wetland was investigated after five consecutive years of nitrogen addition using the Biolog EcoPlate technique. The different nitrogen addition levels resulted in significant differences in pH, ammonium nitrogen (NH4+), dissolved organic carbon (DOC), and soil microbial alpha diversity. The average well-color development (AWCD) in the Biolog EcoPlate assay increased gradually with incubation time, and nitrogen level significantly affected the AWCD values (P < 0.05), with the N40 treatment exhibiting the highest value. Furthermore, the N80 treatment had significantly lower Shannon and Pielou diversity indices (P < 0.05). N40 significantly promoted carbohydrate, amino acid, and ester utilization by soil microorganisms, whereas N80 significantly inhibited carbohydrate, amino acid, alcohol, amine, and organic acid utilization.
Redundancy analysis (RDA) showed that the three treatments differed markedly in soil microbial community metabolism, with a cumulative variance contribution of 72.86%. In addition, RDA revealed that the N80 treatment was positively correlated with TN, SMC, DON, and TOC but negatively correlated with DOC, NH4+, pH, and NO3−. Long-term nitrogen addition leads to changes in soil microbial community structure and significantly alters the ability of soil microorganisms to utilize carbon sources in the Calamagrostis angustifolia wetland.

Soil nitrogen is one of the most important nutrient elements in ecosystems and plays a critical role in ecosystem structure and function. In wetland ecosystems, nitrogen is considered one of the main limiting nutrients for primary productivity; however, owing to intensified agricultural practices and the long-term application of large amounts of nitrogen fertilizer, the amount of available nitrogen in the soil has increased rapidly, affecting wetland ecosystem structure and function (Vitousek and Howarth 1991; Feng et al. 2015). Nitrogen-saturated inputs therefore affect a number of ecological processes (Nakaji et al. 2001; Lu et al. 2015), such as acidifying soils (Ouyang et al. 2005), altering litter decomposition (Song et al. 2011), and stimulating CO2 emissions (Song et al. 2013; Tao et al. 2018), as well as reducing soil microbial diversity and affecting microbial ecological functions (Wang et al. 2018; Zhang et al. 2018), ultimately having an impact on wetland ecosystems. Wetlands are important terrestrial ecosystems, covering 5 to 8% of the Earth's surface. Wetlands possess the characteristics of both terrestrial and aquatic ecosystems and are therefore highly biodiverse and productive, in addition to providing a wide range of ecosystem services (Wang et al. 2006a).
Nitrogen addition affects the structure and function of wetland ecosystems by altering soil environmental conditions such as nitrogen content and soil organic matter, thereby influencing soil microbial activity as well as the composition and diversity of soil microbial communities (Li et al. 2006; Wang et al. 2006b). Soil microorganisms are major participants in soil nitrogen transformation, play an important regulatory role in the soil nitrogen cycle (Zhang et al. 2009), and contribute significantly to the stability and function of wetlands. Because microorganisms respond very sensitively to environmental changes, soil microbial structure and function are important indicators of changes in soil quality (Yan et al. 2010). Previous studies have shown that continuous nitrogen input affects soil microorganisms in different ecosystems. For example, Li et al. (2013) showed that soil microbial activity decreased significantly when the ammonium sulfate concentration exceeded 528.5 mg/kg. In contrast, Wu et al. (2017) investigated the effect of nitrogen addition (NH4NO3) on soil microorganisms in coastal wetlands and found that soil microbial activity increased under N addition (3 and 6 g N ha−1 a−1). The reason may be that the form and amount of N affect the types of carbon sources used by the soil microbiota. Fang et al. (2014) reported that inputs of ammonium and nitrate nitrogen significantly promoted microbial metabolic activity and the utilization of carbon substrates. A long-term nitrogen input experiment conducted by Compton et al. (2004) in the Harvard Forest showed that increased nitrogen decreased microbial biomass and reduced the ability of microbial communities to utilize carbon sources. Thus, nitrogen input affects grassland, forest, and wetland ecosystems alike.
Wetlands play an important role in ecological processes such as controlling greenhouse gas emissions, regulating climate, and maintaining ecosystem balance (Liu 2004). Many scholars have therefore begun to focus on the effects of nitrogen addition on wetland soil microorganisms. Lu et al. (2021), studying the effect of nitrogen addition on soil microbial structure and function in coastal saline wetlands, found that a high N treatment (200 kg N ha−1 a−1, with NH4NO3 as the nitrogen source) increased soil nutrients but reduced soil microbial diversity. Wang et al. (2019) studied the effects of nitrogen addition on temperate marshland soils in northeastern China and found that nitrogen treatment (8 g N m−2 a−1, with NH4NO3 as the nitrogen source) reduced soil pH and altered the microbial community. The Sanjiang Plain is the largest freshwater marsh region in China. However, owing to the extensive and intensive use of the region for agricultural production over the past 50 years, the natural freshwater marshes in this region have received increasing amounts of exogenous N inputs (Zhao 1999; Wang et al. 2010; Zhang et al. 2007). Studies have shown that nitrogen input promotes bacterial growth (Bragazza et al. 2012), enhances methanogenic and anaerobic respiratory bacterial activity (Yavitt et al. 2012), accelerates soil denitrification processes (Francez et al. 2011), and inhibits aerobic methane oxidation (Lozanovska et al. 2016). Previous studies have reported that the NH4+-N/NO3−-N ratio has decreased from 5 to 2 since the 1970s, indicating that the dominant nitrogen form has changed (Hu et al. 2019). Our previous study also found that the bacterial and fungal diversity and composition in the wetlands of the Sanjiang Plain were significantly changed by NH4NO3 addition (Sui et al. 2016). However, how different nitrogen forms affect the function of soil microorganisms in the wetlands of the Sanjiang Plain remains unknown.
Therefore, we aimed to (i) evaluate the variation of soil physicochemical properties under different amounts of NO3− addition and (ii) clarify the changes in soil microbial carbon metabolism and functional diversity under these treatments. The Biolog EcoPlate method is a technique that allows researchers to identify the different types of carbon sources utilized by soil microorganisms (Garland 1997; Preston et al. 2002). This technique has been widely used in recent years as it provides a simple and rapid means of assessing the function of soil microbial communities (Konopka et al. 1998; Garland and Mills 1991). In this study, the carbon utilization capacity of soil microorganisms was investigated using the Biolog EcoPlate technique based on a long-term field simulation of nitrogen addition in the Honghe National Nature Reserve in the Sanjiang Plain. Our findings thus provide important insights into the mechanisms by which wetland ecosystems adapt to human activities, as well as a scientific basis for the promotion of sustainable development and the creation of more effective wetland ecosystem management strategies.

Effect of different nitrogen levels on soil physicochemical characteristics

Table 1 summarizes the average values of the main soil physicochemical properties under different nitrogen application levels. Different nitrogen levels had a significant effect on soil pH, NH4+ content, and DOC (P < 0.05). SMC, DON, and TN increased with increasing nitrogen addition concentration (CK < N40 < N80); soil pH and NO3− decreased with increasing nitrogen addition concentration (CK > N40 > N80); and the NH4+, DOC, and TOC contents first decreased and then increased with increasing nitrogen addition concentration.
Table 1 Physicochemical properties of wetland soil under different nitrogen addition conditions

Effects of simulated nitrogen addition on soil microbial carbon source metabolic activity

As illustrated in Fig. 1, the average well color development (AWCD) of the soil microbial community under different nitrogen addition concentrations increased with culture time. Specifically, this value increased rapidly at 0–96 h, indicating high microbial metabolic activity at this culture stage. The increase decelerated after 120 h and stabilized thereafter. Therefore, the AWCD values at 120 h of incubation were selected for subsequent analysis. The AWCD (i.e., an indicator of carbon utilization) of the microbial communities under the different nitrogen addition conditions exhibited the following order: N40 > CK > N80. Interestingly, the N40 treatment exhibited the highest carbon metabolic capacity.

Average well color development (AWCD) of the soil microbial community under different nitrogen addition concentrations as a function of incubation time

Changes in soil microbial functional diversity

To further determine the effect of different nitrogen concentrations on soil microbial carbon source utilization, several diversity indices were computed from the AWCD values at 120 h of culture. As indicated in Table 2, there was no significant difference between the soil microbial functional diversity of the N40 and CK treatments, but there was a significant difference between N80 and CK (P < 0.05).

Table 2 Soil microbial functional diversity index under different nitrogen addition concentrations

Utilization of different carbon sources by the soil microbial community

Figure 2 illustrates the effects of different nitrogen addition levels on the utilization rates of different types of carbon sources by soil microorganisms.
A total of 31 carbon sources were evaluated with the Biolog EcoPlate assay, which were divided into six categories: carbohydrates (7), amino acids (6), alcohols (3), esters (4), amines (3), and organic acids (8). As shown in Fig. 2, N40 significantly promoted carbohydrate, amino acid, and ester utilization by soil microorganisms (P < 0.05), whereas N80 inhibited carbohydrate, amino acid, alcohol, amine, and organic acid utilization (P < 0.05).

Utilization of different carbon sources by soil microbial communities under different nitrogen addition concentrations. 1 Carbohydrates, 2 amino acids, 3 alcohols, 4 esters, 5 amines, 6 acids; lowercase letters indicate significant differences (P < 0.05). CK, control; N40, 40 kg N ha−1 a−1; N80, 80 kg N ha−1 a−1

As shown in Fig. 3, the metabolic fingerprint of the N40-treated soil comprised eight carbon sources with AWCD > 2.0: α-D-lactose, β-methyl-D-glucoside, D-cellobiose, L-serine, D-mannitol, N-acetyl-D-glucosamine, D-galacturonic acid, and D-glucosaminic acid. Among these, the AWCD of D-mannitol was the highest, reaching a value of 3.42. The AWCD values of the N80 treatment were all below 2; among these, Tween 40 had the highest AWCD value, reaching only 1.67. Overall, the compounds within each group did not always behave consistently. The utilization intensity of D-xylose, i-erythritol, phenylethyl-amine, itaconic acid, and D-malic acid by soil microorganisms decreased with increasing nitrogen concentration (CK > N40 > N80); the utilization intensity of Tween 40 and 4-hydroxy benzoic acid increased with increasing nitrogen concentration (N80 > N40 > CK); γ-hydroxybutyric acid showed N80 > CK > N40; and the utilization intensity of the other carbon sources first increased and then decreased with increasing N concentration (N40 > CK > N80).
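The per-category utilization rates plotted in Fig. 2 are aggregates of the individual 120-h well values over each substrate group. A minimal sketch of that aggregation in Python (the well values below are illustrative numbers, not the study's data, and only a subset of the 31 substrates is shown):

```python
# Hypothetical 120-h well-color values per substrate (illustrative only)
wells = {
    "D-xylose": 0.8, "alpha-D-lactose": 2.1, "D-cellobiose": 2.4,
    "L-serine": 2.2, "L-arginine": 1.1,
    "Tween 40": 1.6, "Tween 80": 1.2,
}

# Substrate-to-category mapping, abbreviated from the full grouping of Fig. 3
categories = {
    "carbohydrates": ["D-xylose", "alpha-D-lactose", "D-cellobiose"],
    "amino acids": ["L-serine", "L-arginine"],
    "esters": ["Tween 40", "Tween 80"],
}

# Mean utilization per carbon-source category, as compared across treatments in Fig. 2
category_means = {
    cat: sum(wells[s] for s in subs) / len(subs)
    for cat, subs in categories.items()
}
print(round(category_means["carbohydrates"], 2))  # -> 1.77
```

Running this mapping once per treatment replicate yields the category-level means whose differences are tested across CK, N40, and N80.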
Carbon physiological spectrum and metabolic fingerprint of the soil microbial community in the Calamagrostis angustifolia wetland of the Sanjiang Plain. Carbohydrates: B2–D-xylose, H1–α-D-lactose, A2–β-methyl-D-glucoside, G2–α-D-glucose-1-phosphate, E1–α-cyclodextrin, F1–glycogen, G1–D-cellobiose; amino acids: A4–L-arginine, B4–L-asparagine, C4–L-phenylalanine, D4–L-serine, E4–L-threonine, F4–glycyl-L-glutamic acid; esters: B1–pyruvic acid methyl ester, C1–Tween 40, D1–Tween 80, A3–D-galactonic acid-γ-lactone; alcohols: C2–i-erythritol, D2–D-mannitol, H2–D,L-α-glycerol phosphate; amines: G4–phenylethyl-amine, H4–putrescine, E2–N-acetyl-D-glucosamine; acids: B3–D-galacturonic acid, F2–D-glucosaminic acid, C3–2-hydroxy benzoic acid, D3–4-hydroxy benzoic acid, E3–γ-hydroxybutyric acid, F3–itaconic acid, G3–α-ketobutyric acid, H3–D-malic acid

As illustrated in Fig. 4, the metabolic activity heat map of the soil microbial community can be divided into four groups. In group I, the AWCD values of D-galactonic acid-γ-lactone, phenylethyl-amine, D,L-α-glycerol phosphate, and glycyl-L-glutamic acid were not significantly different between the three treatments; in group II, the AWCD values of D-galacturonic acid, N-acetyl-D-glucosamine, D-mannitol, D-cellobiose, α-D-lactose, and β-methyl-D-glucoside in N80 were significantly lower than in CK and N40; in group III, putrescine, L-serine, D-malic acid, and α-D-glucose-1-phosphate were significantly higher in CK and N40 than in N80; and in group IV, the AWCD values of D-glucosaminic acid in N80 were significantly lower than in CK and N40. These results demonstrate that different nitrogen concentrations had significantly different effects on the activity and carbon source utilization preferences of the soil microbiota.

Heat map and hierarchical cluster analysis based on the average well color development (AWCD) at 120 h of soil microbial communities under different nitrogen addition treatments.
The samples are grouped based on their similarity to each other. The clustering results are arranged horizontally. Higher AWCD values are indicated in dark green, whereas lower AWCD values are indicated in yellow. CK, control; N40, 40 kg N ha−1 a−1; N80, 80 kg N ha−1 a−1

Factors influencing soil microbial carbon source utilization patterns under different nitrogen addition treatments

Table 3 summarizes the correlation coefficients of the main components of the 31 carbon sources. As shown in Table 3, a total of 21 carbon sources constituted the first principal component (PC1), including five carbohydrates, four amino acids, two esters, three alcohols, three amines, and three organic acids. Among them, D-cellobiose was the carbon source most strongly related to PC1, with a loading value of 0.989, followed by β-methyl-D-glucoside (0.987) and α-D-glucose-1-phosphate (0.979). Therefore, D-cellobiose, β-methyl-D-glucoside, and α-D-glucose-1-phosphate had the greatest influence on PC1. Additionally, seven carbon sources constituted the second principal component (PC2), including two carbohydrates, two amino acids, one ester, and two organic acids. Among them, D-xylose was the carbon source most relevant to PC2 (loading value of −0.841), followed by L-phenylalanine (0.805) and glycogen (0.785). Collectively, under nitrogen addition, the carbon metabolic activity of soil microorganisms in the Calamagrostis angustifolia wetland of the Sanjiang Plain was most strongly associated with D-cellobiose and D-xylose.

Table 3 Correlation coefficients of major components for the 31 kinds of carbon sources

The microorganisms in the soil samples subjected to the different nitrogen levels were cultured for 120 h. The redundancy analysis (RDA) showed that the variance contributions of RDA1 and RDA2 were 55.36% and 17.5%, respectively, for a cumulative variance contribution of 72.86%. As illustrated in Fig.
5, soil microbial Biolog substrate utilization patterns separated according to nitrogen addition. The N80 treatment clustered farthest from the CK and N40 treatments, whereas the CK and N40 treatments clustered closer to each other. Therefore, we concluded that the N80 treatment significantly changed the carbon source utilization capacity of soil microorganisms. In addition, the N80 treatment was positively correlated with TN, SMC, DON, and TOC but negatively correlated with DOC, NH4+, pH, and NO3−.

Redundancy analysis of soil microbial community functions and environmental factors under different nitrogen addition treatments. CK, control; N40, 40 kg N ha−1 a−1; N80, 80 kg N ha−1 a−1. SMC, soil moisture content; DOC, dissolved organic carbon; DON, dissolved organic nitrogen; TN, total nitrogen; TOC, total organic carbon

Table 4 presents the relationships between the different soil microbial functional diversity indices and soil physicochemical properties in the Sanjiang Plain. The AWCD value was highly positively correlated with pH (R2 = 0.69, P < 0.01). Furthermore, the Shannon-Wiener index (H), Simpson index (D), and Pielou index were highly correlated with pH, with R2 values reaching 0.68, 0.70, and 0.74, respectively.

Table 4 Correlation between soil environmental factors and soil microbial functional diversity under different nitrogen addition treatments

Different soil conditions can strongly affect soil microbial composition and diversity; therefore, shifts in the functions of the soil microbiota can be used as relevant ecological indicators. Long-term nitrogen input ultimately affects the structure and function of wetland ecosystems, so assessing the impact of long-term nitrogen addition on soil microbial function is critical to gaining comprehensive insight into wetland ecosystem dynamics.
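The principal-component loadings reported in Table 3 come from an ordination of the plot × substrate absorbance matrix. A minimal PCA sketch via SVD on column-standardized data (numpy only; the random matrix stands in for the real 9-plot × 31-substrate data, so the numbers it produces are illustrative, not the study's loadings):

```python
import numpy as np

def pca(X, n_components=2):
    """PCA of a samples x substrates matrix: standardize the columns,
    then take the leading right singular vectors as substrate loadings."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # sample coordinates on PC1, PC2
    loadings = Vt[:n_components].T                    # one row per substrate
    explained = (s ** 2) / (s ** 2).sum()             # variance proportion per component
    return scores, loadings, explained[:n_components]

# Stand-in data: 9 plots (3 treatments x 3 replicates) x 31 substrates
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 2.5, size=(9, 31))
scores, loadings, explained = pca(X)
```

On the real data, the rows of `loadings` with the largest absolute values on each component identify the substrates (e.g., D-cellobiose on PC1, D-xylose on PC2) that dominate the ordination.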
In this study, the AWCD value of the soil microbial community increased with culture time, and the experimental treatments exhibited the following order: N40 > CK > N80 (Fig. 1). Different durations and forms of N addition have produced significantly different effects on soil microbial carbon metabolism. For example, Yuan et al. (2012) found that, after seven consecutive years of N addition (as CO(NH2)2) in a Chinese fir plantation, a nitrogen treatment of 60 kg N ha−1 a−1 significantly increased the AWCD, whereas a treatment of 120 kg N ha−1 a−1 significantly decreased it. By contrast, Yu et al. (2013) found that, after 1 month of N addition, applied NH4NO3 at 150 kg N ha−1 a−1 promoted AWCD in shrubland, whereas a treatment of 50 kg N ha−1 a−1 inhibited it. Sui et al. (2016), who simulated nitrogen addition (NH4NO3) for four consecutive years in the Calamagrostis angustifolia wetland of the Sanjiang Plain, showed that AWCD values increased with increasing nitrogen concentration: HN (8 g N ha−1 a−1) > LN (4 g N ha−1 a−1) > CK. The differences among these studies may be related to differences in the form and concentration of the added N. Under short-term nitrogen addition, nitrogen application helps to alleviate nitrogen limitation and improve soil available nitrogen content, thereby promoting the functional activity of soil microbial carbon metabolism (Liu et al. 2010). However, long-term high nitrogen addition can result in soil acidification and reduced organic carbon content, impair substrate utilization by heterotrophic microbial communities, and reduce soil microbial productivity (Deforest 2004). Our findings indicated that higher nitrogen addition substantially decreased soil microbial activity, which also coincided with higher soil moisture.
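The AWCD values compared above follow the standard definition of Garland and Mills (1991): the mean blank-corrected absorbance over the substrate wells. A minimal sketch (truncating negative corrections to zero is a common convention assumed here, and the readings are made-up toy numbers, not the study's data):

```python
def awcd(absorbance, control):
    """Average well-color development: mean blank-corrected absorbance
    over the substrate wells, with negative corrections set to zero."""
    corrected = [max(a - control, 0.0) for a in absorbance]
    return sum(corrected) / len(corrected)

# Toy example with three substrate wells and a water-control reading of 0.15;
# a real EcoPlate replicate would pass all 31 substrate wells.
print(round(awcd([0.95, 0.40, 0.10], control=0.15), 3))  # -> 0.35
```

Computing this value at each reading time for each plate yields the AWCD-versus-incubation-time curves of Fig. 1.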
In soils with low nitrogen content, nitrogen application helps to alleviate nitrogen limitation and improve soil available nitrogen content, thereby promoting the functional activity of soil microbial carbon metabolism (Liu et al. 2010). Our study is therefore consistent with the results of Sui et al. (2016) and Wu et al. (2017), in which low concentrations of added nitrogen promoted the functional activity of carbon metabolism in soil microorganisms. However, long-term addition of nitrogen at high concentrations can impair substrate utilization by heterotrophic microbial communities and reduce soil microbial productivity (Deforest 2004); the long-term nitrogen input experiment conducted by Compton et al. (2004) in the Harvard Forest showed that high nitrogen addition leads to a decrease in soil microbial carbon biomass, which reduces the rate at which soil microorganisms utilize substrates. In addition, excessive nitrogen addition can lead to soil acidification (Table 1), and lower soil pH leads to changes in microbial biomass and microbial communities (Li et al. 2019). This may explain why the high concentration of added nitrogen decreased the soil microbial AWCD value. By contrast, Frey et al. (2004), in nitrogen addition experiments in the Harvard Forest, showed that substrate utilization by soil microorganisms in broad-leaved and mixed forests was not significantly related to nitrogen increase. Thus, the changes in soil microbial carbon metabolism under different nitrogen addition conditions may be related to ecosystem type, the duration of nitrogen application, the study period, and the amount and form of nitrogen applied. Therefore, the mechanism by which nitrogen addition affects soil microbial metabolic activity requires further study.
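The Shannon-Wiener, Simpson, and Pielou indices used in Table 2 can all be derived from the same blank-corrected well values, treating each well's share of the total color development as a relative abundance. A sketch under standard conventions (the positivity threshold and the Gini-Simpson form of the index are assumptions of this sketch, and the inputs are illustrative):

```python
import math

def functional_diversity(absorbance, control, threshold=0.0):
    """Shannon (H), Simpson (D), and Pielou (E) indices from
    blank-corrected Biolog well absorbances."""
    corrected = [a - control for a in absorbance]
    used = [x for x in corrected if x > threshold]   # wells showing color development
    total = sum(used)
    p = [x / total for x in used]                    # relative utilization per substrate
    H = -sum(pi * math.log(pi) for pi in p)          # Shannon-Wiener index
    D = 1.0 - sum(pi * pi for pi in p)               # Simpson (Gini-Simpson) index
    E = H / math.log(len(used)) if len(used) > 1 else 0.0  # Pielou evenness
    return H, D, E

# Two equally utilized substrates give maximal evenness (E = 1)
H, D, E = functional_diversity([1.1, 1.1, 0.1], control=0.1)
print(round(E, 3))  # -> 1.0
```

A community spreading its activity evenly over many substrates thus scores high on all three indices, which is the pattern the CK and N40 treatments show relative to N80.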
The Shannon diversity index, Simpson index, and Pielou index are composite indicators of the richness and evenness of microbial species (He et al. 2013a). Our findings indicated that different levels of nitrogen addition significantly changed the functional alpha diversity of the soil microbial community (Table 2). Specifically, the Shannon-Wiener, Simpson, and Pielou indices of the soil microorganisms were significantly higher in the CK and N40 treatments than in N80. These findings are consistent with those of Sui et al. (2016), who demonstrated that the Shannon and Simpson indices in wetland soils increased under a low concentration of added nitrogen (4 g N ha−1 a−1) and decreased under a high concentration (8 g N ha−1 a−1), with significant differences between treatments. This may be because moderate nitrogen application favors the growth of soil microorganisms, whereas high nitrogen content promotes the proliferation of some microbial populations while suppressing others, resulting in a decrease in microbial community diversity indices. The heat map shows that different nitrogen concentrations had different effects on the carbon source utilization patterns of the microbes in the soil samples. Combining Fig. 2 and Fig. 3, it can be seen that N40 promoted the utilization by soil microorganisms of all carbon sources among the carbohydrates, amino acids, and esters, whereas N80 inhibited the utilization of all carbon sources among the carbohydrates, amino acids, alcohols, amines, and organic acids, except for 4-hydroxy benzoic acid and γ-hydroxybutyric acid among the organic acids. In addition, D-cellobiose and D-xylose, the carbon sources most relevant to PC1 and PC2, are both carbohydrates, corresponding exactly to the results in Fig. 2 and Fig. 3. Frey et al.
(2004) also evaluated the effect of simulated nitrogen deposition on the soil microbial community of a hardwood forest and found that the utilization of carboxylic acids and carbohydrates was significantly higher in the low-nitrogen plots, whereas the utilization of citric acid and malonic acid was significantly higher in the high-nitrogen plots. Chakraborty et al. (2011) reported that the application of nitrogen fertilizer reduced the ability of soil microbial groups to decompose organic acids and amines. Zhu et al. (2014) found that anaerobic bacteria increased their use of amino acids and decreased their use of organic acids and carbohydrates after nitrogen application. Furthermore, gram-negative bacteria increased their use of carboxylic acids and decreased their use of amino acids and polymeric carbon sources, whereas yeast increased its use of carboxylic acids and polymeric carbon sources. These findings demonstrate that differences in the carbon source utilization patterns of soil microorganisms may be an adaptation to soil environmental changes. In this study, KNO3 was the main nitrogen source; the resulting increase in the soil NO3−/NH4+ ratio may therefore have reduced soil microbial activity and the ability to utilize certain carbon substrates. Soil carbon concentration also significantly affected soil microbial function and diversity. Fang et al. (2014) reported that high nitrogen concentrations significantly increase the DOC content of soil by changing the metabolic activity of microorganisms and the way they utilize carbon substrates. This may be because soil organic carbon is an important nutrient for soil microbial function and carbon metabolic activity, and variation in soil DOC concentration directly affects soil microbial composition and diversity, thereby changing soil microbial function (He et al. 2013b).
The soil carbon metabolism pattern of the N80 treatment was positively correlated with TN, SMC, DON, and TOC but significantly negatively correlated with DOC, NH4+, pH, and NO3− (Fig. 5). Increasing the N addition concentration also significantly reduced soil pH and NH4+ content (Table 1). The reason may be that adding nitrogen reduces soil pH, which in turn affects soil carbon metabolism capacity. Liu et al. (2010) found that long-term high nitrogen addition leads to soil acidification, altering soil microbial metabolism and reducing activity. Sui et al. (2021) reported that soil pH and total nitrogen had a significant effect on soil bacterial community structure. Fierer and Jackson (2006) demonstrated that soil pH significantly affects the composition of the bacterial community. Furthermore, Diao et al. (2019) found that soil pH was the main factor affecting microbial carbon source utilization under different nitrogen levels. Additionally, our redundancy analyses showed a distinct separation between the different nitrogen treatments tested herein, with the effect of the N80 treatment being significantly greater than that of N40. It is possible that a nonlinear threshold is crossed somewhere between the N40 and N80 treatments, which in turn allows the N80 treatment to have a greater effect on the composition, diversity, and carbon metabolism patterns of the microbial community than the N40 treatment. With the further development of agriculture and industry, it can be predicted that the amount of nitrogen input in the Sanjiang Plain will continue to increase, which will affect the structure and function of soil microorganisms. Carbon is the most important component in nature that can be utilized by soil microorganisms, and it is also the most consumed and utilized nutrient substrate. A large amount of nitrogen input will increase the amount of litter and change the quantity and quality of soil organic matter (Grandy et al.
2009; Liu et al. 2010), and it will also directly change the quality (C:N:P ratio) and quantity of the substrates decomposed by soil microbes, as well as the soil environment. Therefore, nitrogen addition affects soil nutrient availability, causing changes in microbial utilization of carbon sources and ultimately altering the structure and function of microbial communities (Chakraborty et al. 2011). In summary, the simulated nitrogen addition treatments had a significant effect on the soil microbial communities of the Sanjiang Plain, demonstrating that increases in nitrogen addition rates will invariably change the physicochemical properties of the wetland environment. Nevertheless, the Biolog analysis method can only reflect changes in microbial functions based on carbon source utilization patterns and thus cannot fully illustrate the functional diversity of soil microorganisms. This approach should therefore be combined with high-throughput sequencing, molecular biology, and phospholipid fatty acid analysis to better characterize the variation in microbial functional diversity.

The carbon source utilization capacity of the soil microorganisms in the Calamagrostis angustifolia wetland of the Sanjiang Plain changed significantly under the different nitrogen addition conditions. In particular, the N80 treatment significantly reduced the Shannon, Simpson, and Pielou diversity indices of the soil microorganisms and significantly changed their carbon source utilization capacity. Nitrogen addition alters the aforementioned soil properties, thereby affecting the carbon source utilization capacity of soil microorganisms. Therefore, once nitrogen addition rates exceed a certain threshold, the stability of this temperate wetland ecosystem will inevitably be compromised if no prevention strategies are implemented.

The experimental site was located in the Honghe National Nature Reserve of the Sanjiang Plain (47° 42′ 18″–47° 52′ 07″ N, 133° 34′ 38″–133° 46′ 29″ E) (Fig. 6).
This region has a total area of 2.18 × 104 ha, of which the wetland area constitutes approximately 1.1 × 104 ha, accounting for more than half of the total area of the reserve. The reserve has a temperate humid/semi-humid monsoon climate with long winters, severe cold and snow, and short spring and autumn seasons. The mean annual temperature (MAT) of this region is 1.9 °C, the average annual evaporation is 1166 mm, and the mean annual precipitation (MAP) is 585 mm, with precipitation mostly concentrated between July and September (Qu et al. 2015). The study site exhibits primarily bleached stagnant soil and fibrous organic soil; its main vegetation types are meadows and swamps, and the dominant plants are Calamagrostis angustifolia, Glyceria spiculosa, Carex lasiocarpa, and Carex pseudocuraica. Location of the research site (Honghe National Nature Reserve, Sanjiang Plain) Plot setting and sample collection To investigate the effect of nitrogen addition concentration on soil microbial function in the Sanjiang Plain, sample plots were established in May 2016: nine plots (5 m × 30 m) were randomly laid out at the Calamagrostis angustifolia wetland experiment station and treated with three nitrogen concentration levels. The nitrogen addition treatments were control (CK), 0 kg N ha−1 a−1; N40, 40 kg N ha−1 a−1; and N80, 80 kg N ha−1 a−1. Each treatment was performed in triplicate and randomly assigned. Nitrogen addition concentrations were set based on the amount of local agricultural fertilizer application. Applying large doses of N in the short term can effectively mimic long-term small-dose N inputs (Dise and Stevens 2005). Nitrogen additions greater than the current exogenous N input to the wetland ecosystem were used to study the response of the ecosystem to a possible future high N saturation state, and the different N addition levels were set to observe the effects of N input over the next 50 years.
The nitrogen source, KNO3, was dissolved in water and sprayed uniformly with a sprinkler in May of each year, during the growing season, the period mainly associated with agricultural fertilization (Sun et al. 2007). The control plots were sprayed with an equal amount of water. Sampling was conducted in October 2020 within each sample plot in the Calamagrostis angustifolia wetland. Five points (the center point of the diagonals and four points on the diagonals at equal distances from the center point) were selected using the diagonal five-point sampling method in each of the three treatment sample plots, and soil was collected from the surface layer (0–20 cm) with a 4-cm diameter soil auger. After plant debris and other impurities were removed, the collected soil samples were pooled, stored in a 4 °C cooler, and transported to the laboratory. One part of each sample was used for the Bio-Eco Plate experiments, whereas the other part was air-dried naturally and used for soil physicochemical analyses. Determination of soil physicochemical factors Soil pH was determined potentiometrically in a 2.5:1 water-to-soil suspension; soil moisture was determined gravimetrically by oven drying at 105 °C for 24 h; soil total nitrogen (TN) was determined with an elemental analyzer (Flash EA 1112 N, Thermo Fisher, Waltham, MA, USA) (Murphy et al. 2000); NO3−-N was determined via the phenol disulfonic acid colorimetric method; NH4+-N was determined via the potassium chloride leaching-indophenol blue colorimetric method; dissolved organic carbon (DOC) and dissolved organic nitrogen (DON) were extracted with K2SO4 at a soil-to-solution ratio of 1:5 for 30 min, centrifuged, filtered, and then measured with a TOC analyzer (Jones and Willett 2005); and total organic carbon (TOC) was determined with a TOC analyzer.
Determination of the functional diversity of the soil microbial community The Bio-Eco Plate incubation method was used to determine the ability of soil microbial communities to utilize 31 different carbon sources (Qu et al. 2015). The Bio-Eco Plate has 96 microtiter wells arranged as three replicates of 32 wells each. The first well of each replicate is a carbon-free control, while the other wells contain different carbon sources together with a tetrazolium dye. As the microorganisms respire on a carbon source, they reduce the tetrazolium dye in the well, changing its color (Xi and Hu 2003). A portion of each soil sample was activated in a thermostat at 25 °C for 24 h. Afterward, 10 g of fresh soil was weighed into a 200 mL triangular flask, to which 90 mL of 0.85% sterile NaCl solution was added, and the mixture was shaken at room temperature for 30 min at 200 r/min. The plates were inoculated with the resulting soil suspension and incubated continuously at 25 °C for 168 h. During the incubation period, the absorbance at 590 nm was measured and recorded at 24 h intervals (Feng et al. 2021). AWCD was calculated as follows (Velasco et al. 2009; Jin et al. 2014; Liao et al. 2013): $$AWCD=\sum \left({C}_i-R\right)/31$$ where Ci is the 590 nm absorbance value of a well containing a carbon source, R is the absorbance value of the control well, and 31 is the number of carbon-source wells in the Eco plate. If Ci − R ≤ 0, the value was recorded as 0. The functional diversity of the soil microbial communities was calculated using the 120-h cultivation data.
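As a concrete illustration, the following sketch applies the AWCD formula above, together with the standard Shannon-Wiener, Simpson, and Pielou definitions used in this study, to one hypothetical plate reading. The absorbance values and the NumPy-based workflow are assumptions for illustration, not the study's data or analysis pipeline:

```python
import numpy as np

# Hypothetical OD590 readings for one Eco-plate replicate: 31 carbon-source
# wells plus a water control (illustrative values, not measured data).
od_wells = 0.08 + 0.02 * np.arange(31)
od_control = 0.08                                 # R, the blank well

rel = np.clip(od_wells - od_control, 0.0, None)   # C_i - R, negatives set to 0
awcd = rel.sum() / 31.0                           # AWCD = sum(C_i - R) / 31

# Community functional diversity from the same relative absorbances,
# using the standard Shannon-Wiener, Simpson, and Pielou definitions:
p = rel / rel.sum()                    # P_i: well i's share of total color development
p_nz = p[p > 0]                        # drop zero wells so ln(P_i) is defined
shannon = -(p_nz * np.log(p_nz)).sum() # H = -sum P_i ln P_i
simpson = 1.0 - (p ** 2).sum()         # D = 1 - sum P_i^2
pielou = shannon / np.log(len(p_nz))   # J = H / ln S, S = number of positive wells
print(round(awcd, 3))                  # -> 0.3
```

In practice the same computation would be run per replicate at the 120-h reading before averaging across the three replicates.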
The soil diversity indices were calculated as follows (Pielou 1975; Whittaker 1972): $$\mathrm{Shannon}\hbox{-} \mathrm{Wiener}\kern0.17em \mathrm{diversity}\kern0.17em \mathrm{index}:H=-\sum {P}_i\;\ln\;{P}_i$$ $$\mathrm{Simpson}\kern0.17em \mathrm{diversity}\kern0.17em \mathrm{index}:D=1-\sum {\left({P}_i\right)}^2$$ $$\mathrm{Pielou}\kern0.17em \mathrm{diversity}\kern0.17em \mathrm{index}:J=H/\ln\;S$$ where Pi is the ratio of the ith relative absorbance value to the sum of the relative absorbance values of all samples, and S is the number of utilized carbon sources. All data were preprocessed using Excel 2010. One-way ANOVA was performed using SPSS 25.0 with the significance level set at 0.05. Scatter plots with trend lines were generated using Excel 2010, and histograms were generated using SigmaPlot 10.0. Diversity index analysis, heat maps, and redundancy analysis (RDA) were performed in R (vegan package). The original data are recorded in an Excel file named "Data record sheet" attached to this article.
CK: Control, 0 kg N ha−1 a−1
N40: 40 kg N ha−1 a−1
AWCD: Average well-color development
RDA: Redundancy analysis
SMC: Soil moisture content
DOC: Dissolved organic carbon
DON: Dissolved organic nitrogen
TN: Total nitrogen
TOC: Total organic carbon
Bragazza L, Buttler A, Habermacher J, Brancaleoni L, Gerdol R, Fritze H, Hanajík P, Laiho R, Johnson D (2012) High nitrogen deposition alters the decomposition of bog plant litter and reduces carbon accumulation. Glob Chang Biol 18(3):1163–1172 Chakraborty A, Chakrabarti K, Chakraborty A, Ghosh S (2011) Effect of long-term fertilizers and manure application on microbial biomass and microbial activity of a tropical agricultural soil. Biol Fertil Soils 47(2):227–233 Compton JE, Watruda LS, Porteousa LA, DeGrood S (2004) Response of soil microbial biomass and community composition to chronic nitrogen additions at Harvard forest. For Ecol Manage 196(1):143–158.
Deforest J (2004) Atmospheric nitrate deposition and the microbial degradation of cellobiose and vanillin in a northern hardwood forest. Soil Biol Biochem 36(6):965–971 Diao C, Lu XK, Tian J, Zhang YQ, Mo JM, Yu GR (2019) Effects of long-term nitrogen addition on the metabolic diversity of microbial carbon sources in subtropical forest soils. Acta Eco Sini 39(18):6622–6630 Dise NB, Stevens J (2005) Nitrogen deposition and reduction of terrestrial biodiversity: evidence from temperate grasslands. Sci China C Life Sci 48:720–728 Fang HJ, Cheng SL, Yu R, Xu MJ, Wang YS, Li LS, Dang XS, Wang L, Li YN (2014) Experimental nitrogen deposition alters the quantity and quality of soil dissolved organic carbon in an alpine meadow on the Qinghai-Tibetan Plateau. Appl Soil Ecol 81:1–11 Feng HF, Lin WQ, Xue L (2021) Interactive effects of nitrogen and phosphorus additions and different stand densities on soil microbial functional diversity of Acacia auriculiformis stands. Acta Eco Sini 41(6):2305–2314 Feng ZZ, Rütting T, Pleijel H, Wallin G, Reich PB, Kammann CI, Newton PCD, Kobayashi K, Luo YJ, Uddling J (2015) Constraints to nitrogen acquisition of terrestrial plants under elevated CO2. Glob Chang Biol 21(8):3152–3168 Fierer N, Jackson RB (2006) The diversity and biogeography of soil bacterial communities. P Natl Acad Sci USA 103:626–631 Francez AJ, Pinay G, Josselin N, Williams BL (2011) Denitrification triggered by nitrogen addition in Sphagnum magellanicum peat. Biogeochemistry 106:435–441 Frey SD, Knorr M, Parrent JL, Simpson RT (2004) Chronic nitrogen enrichment affects the structure and function of the soil microbial community in temperate hardwood and pine forests. Forest Ecol Manag 196(1):159–171 Garland JL (1997) Analysis and interpretation of community-level physiological profiles in microbial ecology. 
FEMS Microbiol Ecol 24(4):289–300 Garland JL, Mills AL (1991) Classification and characterization of heterotrophic microbial communities on the basis of patterns of community-level sole-carbon-source utilization. Appl Environ Microbiol 57(8):2351–2359 Grandy AS, Strickland MS, Lauber CL, Bradford MA, Fierer N (2009) The influence of microbial communities, management, and soil texture on soil organic matter chemistry. Geoderma 150(3-4):278–286 He JZ, Li J, Zhen YM (2013a) Thoughts on the microbial diversity-stability relationship in soil ecosystems. Biodivers Sci 21(4):412–421 He YT, Qi YC, Dong YS, Xiao SS, Peng Q, Liu XC, Sun LJ (2013b) Effects of nitrogen fertilization on soil microbial biomass and community functional diversity in temperate grassland in Inner Mongolia, China. Clean-Soil Air Water 41(12):1216–1221 Hu Y, Peuke AD, Zhao X, Yan J, Li C (2019) Effects of simulated atmospheric nitrogen deposition on foliar chemistry and physiology of hybrid poplar seedlings. Plant Physiol Biochem 143:94–108 Jin Z, Ji FY, Xu X, Xu XY, Chen QK, Li Q (2014) Microbial and metabolic characterization of a denitrifying phosphorus-uptake/side stream phosphorus removal system for treating domestic sewage. Biodegradation 25(6):777–786 Jones DL, Willett VB (2005) Experimental evaluation of methods to quantify dissolved organic nitrogen (DON) and dissolved organic carbon (DOC) in soil. Soil Biol Biochem 38(5):991–999 Konopka A, Oliver L, Turco RF Jr (1998) The use of carbon substrate utilization patterns in environmental and ecological microbiology. Microb Ecol 35(2):103–115 Li FL, Liu M, Li ZP, Jiang CY, Han FX, Che YP (2013) Changes in soil microbial biomass and functional diversity with a nitrogen gradient in soil columns. Appl Soil Ecol 64:1–6 Li WC, Sheng HY, Ekawati D, Jiang YP, Yang HM (2019) Variations in the compositions of soil bacterial and fungal communities due to microhabitat effects induced by simulated nitrogen deposition of a bamboo forest in wetland.
Forests 10(12):1098 Li XW, Cui BS, Wang QG (2006) Nitrogen in wetland soils: a review. Soils 38(2):143–147 Liao M, Xie XM, Peng Y, Chai JJ, Chen N (2013) Characteristics of soil microbial community functional and structure diversity with coverage of Solidago Canadensis L. J Cent South Univ 20(3):749–756 Liu ZF, Fu BJ, Zheng XX, Liu GH (2010) Plant biomass, soil water content and soil N:P ratio regulating soil microbial functional diversity in a temperate steppe: a regional scale study. Soil Biol Biochem 42(3):445–450 Liu ZG (2004) Carbon stock and GHG emission of wetland ecosystem. Scientia Geogr Sinica 24(5):634–639 Lozanovska I, Kuzyakov Y, Krohn J, Parvin S, Dorodnikov M (2016) Effects of nitrate and sulfate on greenhouse gas emission potentials from microform-derived peats of a boreal peatland: a 13C tracer study. Soil Biol Biochem 100:182–191 Lu GR, Xie BH, Cagle GA, Wang XH, Han GX, Wang XJ, Hou AX, Guan B (2021) Effects of simulated nitrogen deposition on soil microbial community diversity in coastal wetland of the Yellow River Delta. Sci Total Environ 757:143825 Lu X, Mao Q, Gilliam FS, Luo Y, Mo J (2015) Nitrogen deposition contributes to soil acidification in tropical ecosystems. Glob Chang Biol 20(12):3790–3801 Murphy DV, Macdonald AJ, Stockdale EA, Goulding WT, Fortune S, Gaunt JL, Poulton PR, Wakefield JA, Webster CP, Wilmer WS (2000) Soluble organic nitrogen in agricultural soils. Biol Fertil Soils 30(5-6):374–387 Nakaji T, Fukami M, Dokiya Y, Izuta T (2001) Effects of high nitrogen load on growth, photosynthesis and nutrient status of Cryptomeria japonica and Pinus densiflora seedlings. Trees Struct Funct 15(8):453–461 Ouyang H, Deng W, Wang QG (2005) A review on nitrogen transmission processes in natural wetlands. Acta Eco Sini 25(2):326–333 Pielou EC (1975) Ecological diversity. 
Wiley-Interscience, New York Preston MJ, Boddy L, Randerson PF (2002) Analysis of microbial community functional diversity using sole-carbon-source utilisation profiles - a critique. FEMS Microbiol Ecol 42(1):1–14 Qu TB, Wang CY, Pang SN, Zhang JF (2015) Utilization of carbon sources by soil microbial communities of four plant functional groups in Songnen steppe. Acta Eco Sini 35(17):5695–5702 Song C, Liu D, Yang G, Song Y, Rong M (2011) Effect of nitrogen addition on decomposition of Calamagrostis angustifolia litters from freshwater marshes of northeast China. Ecol Eng 37(10):1578–1582 Song CC, Wang LL, Tian HQ, Liu DY, Lu CQ, Xu XF, Zhang LH, Yang GS, Wan ZM (2013) Effect of continued nitrogen enrichment on greenhouse gas emissions from a wetland ecosystem in the Sanjiang Plain, northeast China: a 5 year nitrogen addition experiment. J Geophys Res-Biog 118(2):741–751 Sui X, Zhang R, Frey B, Yang L, Liu Y, Li MH (2021) Soil physicochemical properties drive the variation in soil microbial communities along a forest successional series in a degraded wetland in northeastern China. Ecol Evol 11:2194–2208 Sui X, Zhang RT, Liu YN, Xu N, Ni HW (2016) Influence of simulation nitrogen deposition on soil microbial functional diversity of Calamagrostis angustifolia wetland in Sanjiang Plain. Acta Agrestia Sin 24(6):1226–1233 Sun ZG, Liu JS, Wang JD (2007) Study on nitrogen concentration and deposition amount in wet deposition in typical wetland ecosystem of Sanjiang Plain. Syst Sci Compr Stud Agric 23(1):114–119 Tao B, Liu C, Zhang B, Dong J (2018) Effects of inorganic and organic nitrogen additions on CO2 emissions in the coastal wetlands of the Yellow River Delta, China. Atmos Environ 185:159–167 Velasco AG-V, Probanza A, Mañero FJG, Treviño AC, Moreno JM, Lucas Garcia JA (2009) Effect of fire and retardant on soil microbial activity and functional diversity in a Mediterranean pasture.
Geoderma 53(1):186–193 Vitousek PM, Howarth RW (1991) Nitrogen limitation on land and in the sea: how can it occur? Biogeochemistry 13:87–115 Wang C, Liu DW, Bai E (2018) Decreasing soil microbial diversity is associated with decreasing microbial biomass under nitrogen addition. Soil Biol Biochem 120:126–133 Wang GP, Liu JS, Wang JD, Yu JB (2006a) Soil phosphorus forms and their variations in depressional and riparian freshwater wetlands (Sanjiang Plain, northeast China). Geoderma 132(1):59–74 Wang JB, Fu XL, Zhang Z, Li MH, Cao HJ, Zhou XL, Ni HW (2019) Responses of soil respiration to nitrogen addition in the Sanjiang Plain wetland, northeastern China. PLoS One 14(1):e0211456 Wang LL, Song CC, Song YY, Guo YD, Wang XW, Sun XX (2010) Effects of reclamation of natural wetlands to a rice paddy on dissolved carbon dynamics in the Sanjiang Plain, Northeastern China. Ecol Eng 36: 1417–1423 Wang Y, Liu JS, Sun ZG, Wang JD, Zhang XL (2006b) Overview of biogeochemical studies of nitrogen in wetland systems. Wetland Sci 4(4):311–320 Whittaker RH (1972) Evolution and measurement of species diversity. TAXON 21(2-3):213–251 Wu SQ, Wang CZ, Li MS (2017) On soil function diversity of native coastal wetland under simulated nitrogen deposition. Soils 49(6):1153–1158 Xi JY, Hu HY (2003) Application of Biolog system in the study of microbial community. Acta Microbiol Sin 43(1):138–141 Yan H, Wu XY, Huang J, He ZM (2010) Microbial indicator of soil quality evaluation and its studying methods. J Shanxi Agricul Sci 38(10):78–81 Yavitt JB, Yashiro E, Cadillo-Quiroz H, Zinder SH (2012) Methanogen diversity and community composition in peatlands of the central to northern Appalachian Mountain region, North America. Biogeochemistry 109:117–131 Yu PY, Zhu F, Wang ZY, Yan WD, Su SF, Li TP (2013) Effects of nitrogen addition on metabolic function of microbial community in red soil of Cinnamomum camphora forest. 
J Cent South Univ For Technol 33(3):70–74 Yuan YH, Fan HB, Li HX, Liu WF, Shen FF, Guo HB (2012) Effects of simulated nitrogen deposition on soil microorganism in a Chinese fir plantation. Sci Silvae Sin 48(9):8–14 Zhang J, Lin XG, Yi R (2009) Advances in functional gene diversity of microorganism in relation to soil nitrogen cycling. Chin J Eco-Agric 17(5):1029–1034 Zhang L, Song C, Wang D, Wang Y (2007) Effects of exogenous nitrogen on freshwater marsh plant growth and N2O fluxes in Sanjiang Plain, northeast China. Atmos Environ 41(5):1080–1090 Zhang TA, Chen HYH, Ruan HH (2018) Global negative effects of nitrogen deposition on soil microbes. ISME J 12:1817–1825 Zhao KY (1999) Mires in China. Science Press, Beijing Zhu F, Li TP, Yu PY, Su SF, Hong XY, Chen T (2014) Carbon source utilization of soil microbial communities in response to nitrogen addition in the Cinnamomum camphora plantation. Sci Silvae Sin 50(8):82–89 We are grateful to the Scientific Paper Editing Co. Ltd. for language editing. The work was funded by the Heilongjiang Provincial Academy of Sciences Special Plan (YZ202003), special projects for the central government to guide the development of local science and technology (ZY20B15), Natural Sciences Foundation of Heilongjiang Province (LH2020C088), and Outstanding Youth Foundation of Heilongjiang University (JCL202006), supported by the Application Technology Research and Development Plan Project of Heilongjiang Province (GA19C006-6), and supported by the Key Laboratory of Forest Plant Ecology, Ministry of Education (K2020A02). Yingnan Liu and Rongtao Zhang contributed equally to this work. 
Engineering Research Center of Agricultural Microbiology Technology, Ministry of Education, Heilongjiang University, Harbin, 150500, China Xiaohong Weng & Xin Sui Heilongjiang Provincial Key Laboratory of Ecological Restoration and Resource Utilization for Cold Region, School of Life Sciences, Heilongjiang University, Harbin, 150080, China Xiaohong Weng, Xin Sui & Libin Yang Institute of Natural Resources and Ecology, Heilongjiang Academy of Sciences, Harbin, 150040, China Yingnan Liu, Libin Yang & Rongtao Zhang Xiaohong Weng Xin Sui Yingnan Liu Libin Yang Rongtao Zhang XW and XS performed the experiments, analyzed the data, and wrote this manuscript. YL designed this experiment and revised the manuscript. LY helped to analyze the data, and RZ helped to do the experiment and took the soil samples. The author(s) read and approved the final manuscript. Correspondence to Xin Sui. The study did not violate ethics, and all participants agreed to publish the paper. Weng, X., Sui, X., Liu, Y. et al. Effect of nitrogen addition on the carbon metabolism of soil microorganisms in a Calamagrostis angustifolia wetland of the Sanjiang Plain, northeastern China. Ann Microbiol 72, 18 (2022). https://doi.org/10.1186/s13213-022-01674-8 Calamagrostis angustifolia wetland Soil microorganism Bio-Eco Plate Papers from the 7th International Conference on Agricultural and Biological Sciences (ABS 2021)
Qingqing Li and Tianshou Zhou, Guangdong Province Key Laboratory of Computational Science, School of Mathematics, Sun Yat-Sen University, Guangzhou 510275, China * Corresponding author: Tianshou Zhou Fund Project: This work was partially supported by the National Nature Science Foundation of China under Grant No. 91530320 (T.Z.) and Grant No. 11775314 (T.Z.), and by the 973 Project of the Science and Technology Department of China under Grant No. 2014CB964703 (T.Z.) Positive and negative feedback loops in biological regulatory networks often appear in a multi-node manner, since regulatory processes are in general multi-step. Although it is well known that interlocked positive and negative feedback loops (iPNFLs) can generate sustained oscillations, how the number of nodes in each loop affects the oscillations remains elusive. By analyzing a model of iPNFLs with multiple nodes, we find that the node number of the negative loop mainly amplifies oscillation amplitudes, whereas that of the positive loop mainly reduces oscillatory regions, both depending on the (competitive or noncompetitive) way the two loops interact. We also find that, given an iPNFL network of the same structure, the noncompetitive model is more likely to produce large-amplitude oscillations than the competitive model. These results not only indicate that multi-node iPNFLs are an effective mechanism for promoting oscillations but are also helpful for the design of synthetic oscillators. Keywords: Interlocked feedback loops, oscillation, combinatorial regulation, synthetic oscillator. Mathematics Subject Classification: Primary: 92B05, 92C42; Secondary: 93C15. Citation: Qingqing Li, Tianshou Zhou. Interlocked multi-node positive and negative feedback loops facilitate oscillations.
Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3139-3155. doi: 10.3934/dcdsb.2018304
Figure 1. An example of interlocked positive and negative feedback loops.
(a) Network topology, where the homogeneous negative feedback loop contains $N$ nodes with $N$ being an odd number, the homogeneous positive feedback loop contains $M+1$ nodes with $M$ being a positive integer, and node $X_N$ is shared. (b, c) Two representative modes for the combinatorial regulation by two transcription factors, where (b) corresponds to noncompetitive binding whereas (c) corresponds to competitive binding. Figure 2. Influence of parameter $\gamma$ on the real and imaginary parts of the roots of the characteristic equation $f(\lambda)=0$: noncompetitive model. (a) $N=3$, $M=8$ (corresponding to the case of $N<M+1$); (b) $N=3$, $M=2$ (corresponding to the case of $N=M+1$); (c) $N=5$, $M=2$ (corresponding to the case of $N>M+1$); (d) bifurcation diagram of $x_3$ versus $\gamma$ for $N=3$, $M=2$. Here green solid and red dashed lines represent stable and unstable steady states, respectively, and the symbol 'HB' marks the Hopf bifurcation point. Other parameter values are set as $\alpha_n=\alpha_p=3$, $c_n=c_p=0.1$, $h_n=3$, $h_p=1$, $\gamma_n=\gamma_p=\gamma$, and $K_1=K_2=1$ Figure 3. The occurrence of oscillations in the system of interlocked multi-node positive and negative loops. (a, c) correspond to the noncompetitive model whereas (b, d) correspond to the competitive model. (a, b) show time series of component $x_5$, where the inset is a phase trajectory in the $(x_4,x_5)$ plane. (c, d) show both a stable region for the fixed point (labeled 'nonoscillation' in the diagram) and an oscillatory region (labeled 'oscillation') in the $(N,M)$ plane, where the green dashed line is the border between the two regions. Parameter values are set as $\alpha_n=\alpha_p=3$, $c_n=c_p=0.1$, $h_n=3$, $h_p=1$, $\gamma_n=\gamma_p=1$, $K_1=0.5$, and $K_2=4$ Figure 4.
The effect of the node number of the negative feedback loop on the amplitude and period of oscillations, where $M=2$. (a, c) correspond to the noncompetitive model, (b, d) correspond to the competitive model. (a, b) for oscillation amplitude, (c, d) for oscillation period. Parameter values are set as $\alpha_n=\alpha_p=3$, $c_n=c_p=0.1$, $h_n=3$, $h_p=1$, $\gamma_n=\gamma_p=1$, and $K_2=0.5$. Figure 5. The effect of the node number of the positive feedback loop on the amplitude and period of oscillations, where $N=5$. (a, c) correspond to the noncompetitive model, (b, d) correspond to the competitive model. (a, b) for oscillation amplitude, (c, d) for oscillation period. Parameter values are set as $\alpha_n=\alpha_p=3$, $c_n=c_p=0.1$, $h_n=3$, $h_p=1$, $\gamma_n=\gamma_p=1$, and $K_2=0.5$. Figure 6. Three-dimensional pseudo-diagram in the $(\gamma,K)$ plane, where the color bar represents oscillation amplitude and parameter $N$ is fixed at $N=3$. (a, b, c, d) correspond to non-competitive binding with $M=1,2,6,8$ from left to right (corresponding to the cases of $N>M+1$, $N=M+1$ and $N<M+1$, respectively); (e, f, g, h) correspond to competitive binding with $M=1,2,6,8$ from left to right (corresponding to the cases of $N>M+1$, $N=M+1$ and $N<M+1$, respectively). Parameter values are set as $\alpha_n=\alpha_p=3$, $c_n=c_p=0.1$, $h_n=3$, $h_p=1$. Figure 7. Three-dimensional pseudo-diagram in the $(\gamma,K)$ plane, where the color bar represents oscillation period and parameter $N$ is fixed at $N=3$. (a, b, c, d) correspond to non-competitive binding with $M=1,2,6,8$ from left to right (corresponding to the cases of $N>M+1$, $N=M+1$ and $N<M+1$, respectively); (e, f, g, h) correspond to competitive binding with $M=1,2,6,8$ from left to right (corresponding to the cases of $N>M+1$, $N=M+1$ and $N<M+1$, respectively). Other parameter values are set as $\alpha_n=\alpha_p=3$, $c_n=c_p=0.1$, $h_n=3$, $h_p=1$. Figure 8.
Comparison between influences of NFL node number and time delay on the oscillating region, where the number of PFL nodes $M$ is fixed at $M=2$. (a) displays the dependence of the maximum oscillation amplitude on the ratio $K_1/K_2$; (b) shows the oscillation region in the $(K_1,K_2)$ plane; (c) shows the oscillation region for $\tau=1.04$, also in the $(K_1,K_2)$ plane. Other parameter values are set as $\alpha_n=\alpha_p=3$, $c_n=c_p=0.1$, $h_n=3$, $h_p=1$.
Qingqing Li, Tianshou Zhou
A self-calibrated DLL-based clock generator for an energy-aware EISC processor
Sewook Hwang, Kyeong Min Kim, Jungmoon Kim, Seon Wook Kim, Chulwoo Kim
School of Electrical Engineering
This paper describes a low-jitter delay-locked loop (DLL)-based clock generator for dynamic frequency scaling in the extendable instruction set computing (EISC) processor. The DLL-based clock generator provides the system clock with frequencies of 0.5$\times$ to 8$\times$ of the reference clock, according to the workload of the EISC processor. The proposed analog self-calibration method and a phase detector with an auxiliary charge pump can effectively reduce the delay mismatch between delay cells in the voltage-controlled delay line and the static phase offset due to the current mismatch in the charge pump, respectively. The self-calibrated output waveform exhibits 9.7 ps of RMS jitter and 73.7 ps of peak-to-peak jitter at 120 MHz. The prototype clock generator, implemented in a 0.18-$\mu$m CMOS process, occupies an active area of 0.27 mm$^2$ and consumes 15.56 mA.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 21(3), 575–579, Article 6175980, 2013. https://doi.org/10.1109/TVLSI.2012.2188656
Keywords: calibration, delay-locked loop (DLL), dynamic frequency scaling (DFS), extendable instruction set computing (EISC)
The ANZIAM Journal
ON MULTIFRACTIONALITY OF SPHERICAL RANDOM FIELDS WITH COSMOLOGICAL APPLICATIONS
Part of: Stochastic processes; Cosmology; Applications; Inference from stochastic processes
Published online by Cambridge University Press: 18 August 2022
PHILIP BROADBRIDGE, RAVINDI NANAYAKKARA and ANDRIY OLENKO
Department of Mathematics and Statistics, La Trobe University, Melbourne, VIC 3086, Australia
Abstract
This paper investigates spatial data on the unit sphere. Traditionally, isotropic Gaussian random fields are considered as the underlying mathematical model of the cosmic microwave background (CMB) data. We discuss the generalized multifractional Brownian motion and its pointwise Hölder exponent on the sphere. The multifractional approach is used to investigate the CMB data from the Planck mission. These data consist of CMB radiation measurements at narrow angles of the sky sphere. The results obtained suggest that the estimated Hölder exponents for different CMB regions do change from location to location. Therefore, the CMB temperature intensities are multifractional. The methodology developed is used to suggest two approaches for detecting regions with anomalies in the cleaned CMB maps.
Keywords: random fields, multifractionality, Hölder exponent, spherical statistics, cosmic microwave background radiation, CMB anomalies
MSC classification — Primary: 60G60 (Random fields); Secondary: 60G15 (Gaussian processes), 60G22 (Fractional processes, including fractional Brownian motion), 62M40 (Random fields; image analysis), 83F05 (Cosmology), 62P35 (Applications to physics)
The ANZIAM Journal, Volume 64, Issue 2, April 2022, pp. 90–118
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited. © The Author(s), 2022. Published by Cambridge University Press on behalf of Australian Mathematical Publishing Association Inc.
1 Introduction
The notion of fractional Brownian motion (FBM) was introduced by Mandelbrot and Van Ness in 1968 [Reference Mandelbrot and Van Ness31]. The Hurst parameter H can be used to define the Hölder regularity of the FBM [Reference Ayache and Véhel7]. The multifractional Brownian motion (MBM) was first considered by Péltier and Lévy Véhel in 1995, extending the FBM [Reference Péltier and Véhel38]. The concept of multifractionality allows local fractional properties to depend on space-time locations. Multifractional processes were used to study complex stochastic systems which exhibit nonlinear behaviour in space and time. Multifractional behaviour of data has been found in many applications, such as image processing, stock price movements and signal processing [Reference Ayache and Véhel7, Reference Sheng, Chen and Qiu41]. The generalized multifractional Brownian motion (GMBM) is a continuous Gaussian process that was introduced by generalizing the traditional FBM and MBM (see [Reference Ayache and Véhel7]). In comparison to MBM, the Hölder regularity of GMBM can vary substantially. For example, GMBM can allow discontinuous Hölder exponents.
This has been an advantage in certain applications, such as medical image modelling, telecommunications, turbulence and finance, where the pointwise Hölder exponent can change rapidly. A Fourier spectrum's low frequencies control the long-range dependence of a stochastic process while the higher frequencies control the Hölder regularity. Therefore, GMBM can be used to model processes that exhibit erratic behaviour of the local Hölder exponent and long-range dependence [Reference Ayache and Véhel7]. The aim of this paper is to introduce new applied methodology based on the local Hölder exponent, and illustrate it by applying it to cosmic microwave background (CMB) radiation data. The following brief physics background is provided to enable a better understanding of the CMB data. The universe originated about 14 billion years ago and was characterized by an extremely high temperature. The cosmological theory that is generally accepted is given, for example, in the book by Weinberg [Reference Weinberg45]. From $10^{-35}$ seconds (s) after the original singularity up to $10^{-32}$ s, there was rapid exponential inflation by a factor greater than $e^{60}$ , driven by an as yet unidentified inflation quantum field. Correlation lengths following from vacuum energy fluctuations rapidly expanded to separations that are now observed in the CMB to be well beyond the horizon of light signals. Inflation is an answer to the horizon puzzle as well as the flatness puzzle and the absence of magnetic monopoles. From $10^{-5}$ s after the singularity, hadron particle–antiparticle pairs could form, followed by lepton pairs from 1 s. By 10 s, the universe had cooled enough so that pair annihilation led to photons being the dominant component of energy within the plasma for the next 380,000 years. During this time, within the plasma, the photon mean free path was relatively short, so the system was opaque. 
However, from 100,000 years onwards, He atoms and then neutral H atoms began to form. In the final stage of "recombination", at temperature almost 4000 kelvin (K), after 378,000 years the photons propagated freely and can be observed in the CMB. Due to cosmological expansion, their wavelengths have now stretched to the microwave part of the electromagnetic spectrum, exhibiting a high-accuracy blackbody spectrum of a thermalized environment, matching with a temperature of 2.725 K. However, small anisotropic perturbations of that temperature are now of most interest. They indicate large-scale density variations within the plasma universe that are associated with preferred locations in the subsequent formation of galaxies. Further back, anisotropy relates to the formation of matter, separating rival quantum field theories. On a sphere, a scalar function such as temperature is most conveniently expanded in a basis of spherical harmonics in polar coordinates $Y_l^m(\theta ,\varphi )$ . The largest variation from isotropy is a dipole structure at $l=1$ . That dipole can be transformed away by choosing a reference frame that has a speed of 368 km/s relative to our own frame. There is significant structure in the spherical harmonic spectrum up to at least $l=1000$ . There is a consistent interpretation that we are now in a dark-energy-dominated era, consisting of 73% or more of the mass energy, around 23% dark matter, the small but important remainder being ordinary matter and radiation. In the microwave region, the CMB spectrum closely follows that of a black body at equilibrium temperature 2.735 K, tracing back to a plasma temperature of around 4000 K at a time corresponding to redshift $z =1500$ at 50% atomic combination. Although the equilibrium spectrum is essential, there are important departures from equilibrium that give information on the state of the early universe. 
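As a quick numerical aside (an illustrative check, not part of the paper's analysis): Wien's displacement law $\lambda_{\max}=b/T$, with $b\approx 2.898\times 10^{-3}$ m·K, locates the peak of a 2.725 K blackbody near 1 mm, i.e. in the microwave band, while the same law at the ~4000 K recombination temperature gives a peak in the visible/near-infrared — consistent with the ~1500-fold redshift stretching described above.

```python
# Illustrative check of the blackbody temperatures quoted in the text via
# Wien's displacement law. The constant is standard physics; the temperatures
# (2.725 K today, ~4000 K at recombination) come from the surrounding text.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_mm(temperature_k: float) -> float:
    """Peak blackbody wavelength in millimetres at the given temperature."""
    return WIEN_B / temperature_k * 1e3

# Today's CMB at 2.725 K peaks near 1.06 mm -- firmly in the microwave band.
print(peak_wavelength_mm(2.725))
# At ~4000 K the peak is ~0.72 micrometres (visible/near-infrared), so the
# radiation has stretched by roughly 4000/2.725 ~ 1500, matching z ~ 1500.
print(peak_wavelength_mm(4000.0) * 1e3)  # in micrometres
```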
Relative anisotropic variations of spectral intensity from that of a black body are of the order of $10^{-4}$ . Calculations by Khatri and Sunyaev [Reference Khatri and Sunyaev25] showed that outside of a relatively small range of redshifts, external energy inputs from sources such as massive particle decay would dissipate by Compton and double Compton scattering and other relaxation processes to affect the signal by several lower orders of magnitude. The primary sources of anisotropy were large-scale acoustic waves whose compressions in the plasma universe were associated with raised temperatures. Using the current angular widths of anisotropies in the CMB, the current standard model $\Lambda $ CDM (cold dark matter plus dark energy) affords an estimate of the Hubble constant at $H_0=67.4\pm 1.4$ km/s/MPc [Reference Aghanim4]. This agrees well with data from the POLARBEAR Antarctica telescope that give $H_0=67.2\pm 0.57$ km/s/MPc [Reference Adachi2]. However, estimates from more recent emissions from closer galaxies, using both cepheid variables and type Ia supernovae as distance markers, give $H_0=74.03\pm 1.42$ km/s/MPc [Reference Riess, Casertano, Yuan, Macri and Scolnic40]. This unexplained discrepancy will eventually be resolved by newly found errors in the methodology of one or both of the competing large-z and small-z measurements, or in new physical processes that are currently unidentified. Within a turbulent plasma, there are electrodynamical processes that are far more complicated than the large-scale acoustic waves. When radiation by plasma waves is taken into account, useful kinetic equations and spectral functions can no longer be constructed by Bogoliubov's approach of closing the moment equations for electron distribution functions (see [Reference Klimontovich26, Ch. 5]). 
Even in controlled tokamak devices, the dynamical description of magnetic field lines has fractal attracting sets [Reference Viana, Da Silva, Kroetz, Caldas, Roberto and Sanjuán43], and charged particle trajectories may have fractal attractors under the influence of multiple magnetic drift waves [Reference Mathias, Viana, Kroetz and Caldas34]. At CMB frequencies below 3 GHz (that is, wavelengths larger than 10 cm), there have been indications of spectral intensities much higher than that of a 2.7 K black body [Reference Baiesi, Burigana, Conti, Falasco, Maes, Rondoni and Trombetti8]. Although there is a high level of confidence in measuring the universe's expansion factor from the CMB since the decoupling of photons from charged particles, the level of complexity of magnetohydrodynamics in plasma suggests that this subject might not be a closed book. Multifractal analysis is a tool that might contribute to understanding the multiscale data that are becoming successively finer-grained with each generation of radio telescope. The Planck mission [39] was launched in 2009 to measure the CMB with an extraordinary accuracy over a wide spectrum of infrared wavelengths. The signal obtained has been filtered by astrophysics teams using the best available technology. We feel that it is worthwhile to analyse the full signal that is currently available. Higher-resolution measurements in the future will distinguish which details of the analysis are due to physical causes, or various sources of galactic noise and measurement errors. Either way, a retrospective correction of our analysis could guide future signal processing. The CMB data can be utilized to understand how the early universe originated and to find out the key parameters of the Big Bang model [37]. 
Numerous researchers have suggested that the CMB data either are non-Gaussian or cannot be accurately described by mathematical models with few constant parameters (see [Reference Ade3, Reference Leonenko, Nanayakkara and Olenko29, Reference Marinucci32, Reference Minkov, Pinkwart and Schupp36]). The classical book by Weinberg [Reference Weinberg45] explained that this anisotropy in the plasma universe was significant enough to produce anisotropy in current galaxy distributions. For some recent results and discussion of fundamental cosmological models of the universe see [Reference Broadbridge and Deutscher12]. To detect departures from the isotropic model in actual CMB data several approaches can be employed (see, for example, [Reference Hamann, Gia, Sloan, Wang and Womersley21, Reference Leonenko, Nanayakkara and Olenko29]). Different approaches can give different results, and suggest to cosmologists sky regions for further investigations. The motivation of this paper is to check for multifractionality of the CMB temperature intensities from the Planck mission. Theoretical multifractional space-time models which differ from the standard cosmological model [Reference Calcagni, Kuroyanagi and Tsujikawa15] have suggested that the universe is not expanding monotonically, which produces multifractional behaviour. Calcagni et al. [Reference Calcagni, Kuroyanagi and Tsujikawa15] used the CMB data from the Planck mission and the Far Infrared Absolute Spectrophotometer to establish speculative constraints on multifractional space-time expansion scenarios. Further, fractional stochastic partial differential equations (SPDEs) were employed to model the CMB data [Reference Anh, Broadbridge, Olenko and Wang5]. The fractional SPDE models considered exhibited long-range dependence. 
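Before moving to the spherical setting, the idea behind Hölder-exponent estimation can be illustrated on the line: simulate a fractional Brownian motion with a known Hurst index $H$ and recover $H$ from quadratic variations computed at two scales. The sketch below is an assumption-laden illustration (exact Cholesky simulation of the fBm covariance and a simple two-scale ratio estimator), not code from the paper; the paper's own estimators for fields on the sphere are developed in its Sections 4 and 5.

```python
import numpy as np

rng = np.random.default_rng(0)

def fbm_paths(hurst, n=1024, n_paths=20):
    """Sample fBm on a unit-time grid via Cholesky of the exact covariance
    Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return chol @ rng.standard_normal((n, n_paths))     # shape (n, n_paths)

def hurst_qv(x):
    """Two-scale quadratic-variation (ratio) estimator of the Hurst index.
    E[(X_{t+d} - X_t)^2] = d^{2H}, so the mean squared increment at twice
    the step is 2^{2H} times larger, and log2 of the ratio gives 2H."""
    d1 = np.diff(x, axis=0)  # step-1 increments
    d2 = x[2:] - x[:-2]      # step-2 increments
    return 0.5 * np.log2(np.mean(d2**2) / np.mean(d1**2))

h_hat = hurst_qv(fbm_paths(hurst=0.7))
print(f"estimated H ~ {h_hat:.3f} (true 0.7)")
```

For multifractional processes the same kind of statistic is applied in a small window around each point, so that the estimated exponent becomes a function of location.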
In the literature, the most widely used model for describing CMB temperature intensities is isotropic Gaussian spherical random fields (see, for example, [Reference Ade3, Reference Lang and Schwab27, Reference Marinucci and Peccati33] for more details). Mathematical analysis of spherical random fields has attracted significant research attention in recent years (see [Reference Hamann, Gia, Sloan, Wang and Womersley21, Reference Le Gia and Peach28, Reference Marinucci and Peccati33] and the references therein). This paper continues these investigations. It develops methodology to investigate fractional properties of random fields on the unit sphere. The presented detailed analysis of actual CMB temperature intensities suggests the presence of multifractionality. The methodology developed was also used to detect anomalies in CMB maps. The results obtained were compared with a different method from [Reference Hamann, Gia, Sloan, Wang and Womersley21]. Both methods found the same anomalies, but each detected its own CMB regions of unusual behaviour. Applications of the methodology developed resulted in spatial clusters that matched very well with the temperature confidence mask (TMASK) of unreliable CMB intensities. Developing a methodology to detect multifractional behaviour and anomalies within the random fields framework is quite natural. In the CMB research context, determining areas with unusual Hölder exponent values can indicate locations of seeds of galaxies or areas that are problematic for preliminary signal processing of CMB maps. The anomalous locations detected are regions of potential interest for further investigations by astronomers, in particular, using analytic tools that are not yet routinely used. The structure of the paper is as follows. Section 2 provides the main notation and definitions related to the theory of random fields. Section 3 introduces the concept of multifractionality and discusses the GMBM. 
Section 4 presents results on the estimation of the pointwise Hölder exponent by using quadratic variations of random fields. Section 5 discusses the suggested estimation methodology. Numerical studies including computing the estimates of pointwise Hölder exponents for different one- and two-dimensional regions of the CMB sky sphere are given in Section 6. This section also demonstrates an application of our methodology to detect regions with anomalies in the cleaned CMB maps. Finally, the conclusions and some future research directions are presented in Section 7. All numerical studies were carried out by using Python version 3.9.4 and R version 4.0.3, specifically, the R package rcosmo [Reference Fryer, Li and Olenko18, Reference Fryer, Olenko, Li and Wang19]. A reproducible version of the code in this paper is available in the "Research materials" folder at the website https://sites.google.com/site/olenkoandriy/. 2 Main notation and definitions This section presents background material in the theory of random fields, fractional spherical fields and fractional processes. Most of the material included in this section is based on the papers [Reference Ayache6, Reference Lang and Schwab27, Reference Malyarenko30, Reference Marinucci and Peccati33]. Let $\mathbb {R}^{3}$ be the real three-dimensional Euclidean space and $s_2(1)$ be the unit sphere defined in $\mathbb {R}^{3}$ . That is, $s_2(1)= \{x \in \mathbb {R}^{3},\lVert x \rVert =1\}$ where $\lVert \cdot \rVert $ represents the Euclidean distance in ${\mathbb {R}}^3$ . Let ${SO}(3)$ denote the group of rotations on $\mathbb {R}^{3}$ . Let $(\Omega , \mathcal {F}, P)$ be a probability space. The symbol $\overset {d}{=}$ denotes equality in the sense of the finite-dimensional distributions. Definition 2.1. A function $T(\omega , x): \Omega \times s_2(1) \rightarrow \mathbb {R}$ is called a real-valued random field defined on the unit sphere. For simplicity, it will also be denoted by $T(x)$ , $x \in s_2(1)$ . 
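As a minimal computational companion to Definition 2.1 (an illustrative sketch, not from the paper): points of the domain $s_2(1)$ can be sampled uniformly by normalizing standard Gaussian vectors in $\mathbb{R}^3$, since the rotation invariance of the Gaussian distribution makes the normalized directions uniform on the sphere. Such samples give evaluation sites at which a realization of a random field $T(x)$ can be observed.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_sphere_points(n):
    """Uniform points on s_2(1): normalize i.i.d. standard Gaussian vectors."""
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

x = uniform_sphere_points(10_000)
# Every sample lies on the unit sphere ...
print(np.allclose(np.linalg.norm(x, axis=1), 1.0))
# ... and by symmetry each Cartesian coordinate averages to approximately 0.
print(np.abs(x.mean(axis=0)).max())
```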
Definition 2.2. The random field $T(x)$ is called strongly isotropic if, for all $k \in \mathbb {N}$ , $x_{1}, \ldots , x_{k} \in s_2(1)$ and $g \in {SO}(3)$ , the joint distributions of the random variables $T(x_{1}), \ldots , T(x_{k})$ and $T(g x_{1}), \ldots , T(g x_{k})$ have the same law. It is called $2$ -weakly isotropic (in the following it will be just called isotropic) if the second moment of $T(x)$ is finite, that is, if $E (\lvert T(x)\rvert ^{2}) < \infty $ for all $x \in s_2(1)$ and if for all pairs of points $x_{1}, x_{2} \in s_2(1),$ and for any rotation, $g \in {SO}(3)$ , we have $$ \begin{align*} E(T(x)) = E(T(gx)),\quad E (T(x_{1})\ T(x_{2})) = E (T(gx_{1}) T(gx_{2})). \end{align*} $$ Definition 2.3. The random field $T(x)$ is called Gaussian if for all $k \in \mathbb {N}$ and $x_{1}, \ldots , x_{k} \in s_2(1)$ the random variables $T(x_{1}), \ldots , T(x_{k})$ are multivariate Gaussian distributed, that is, $\sum _{i=1}^{k} a_{i} T(x_{i})$ is a normally distributed random variable for all $a_{i} \in \mathbb {R}$ , $i=1, \ldots , k,$ such that $\sum _{i=1}^{k} a_i^2 \neq 0.$ Let $T = \{ T(r,\theta ,\varphi ) \mid 0 \leq \theta \leq \pi , 0 \leq \varphi < 2\pi , r> 0\}$ be a spherical random field that has zero mean, finite variance and is mean-square continuous. Let the corresponding Lebesgue measure on the unit sphere be $\sigma _1(du) = \sigma _1(d\theta \cdot d\varphi ) = \sin {\theta }\;d\theta \;d\varphi $ , with $u = (\theta ,\varphi ) \in s_2(1)$ . For two points on $s_2(1)$ , we use $\Theta $ to denote the angle formed between two rays originating at the origin and pointing at these two points, and $\Theta $ is called the angular distance between these two points. To emphasize that a random field depends on Euclidean coordinates, the notation $\tilde {T}(x) = T(r,\theta ,\varphi )$ , $x \in \mathbb {R}^3$ , will be used. Remark 2.4. 
In the following, for the analysis of cosmological data, we will also be using the galactic coordinate system with the Sun as the centre to locate the relative positions of objects and motions within the Milky Way. This consists of galactic longitude $l, 0 \leq l < 2\pi $ , and galactic latitude b, $-\pi /2 \leq b \leq \pi /2$ . They are related to the spherical coordinates by $l=\varphi $ and $b=(\pi /2- \theta )$ . Remark 2.5. A real-valued second-order random field $\tilde {T}(x)$ , $x \in s_2(1)$ , with $E (\tilde {T}(x))=0$ is isotropic if $E (\tilde {T}(x_1)\tilde {T}(x_2)) = B(\cos {\Theta })$ , $x_1, x_2 \in s_2(1)$ , that is, if its covariance depends only on the angular distance $\Theta $ between $x_1$ and $x_2$ . The spherical harmonics are defined by $$ \begin{align*} Y_l^m (\theta,\varphi) = c_l^m\exp{(im \varphi)}P_l^m(\cos{\theta}), \quad l=0,1,\ldots; \ m=0, \pm 1,\ldots, \pm l, \end{align*} $$ where $$ \begin{align*} c_l^m = (-1)^m \bigg(\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}\bigg)^{1/2}, \end{align*} $$ and $P_l^m(\cos {\theta })$ are the associated Legendre polynomials of degree l and order m. Then the following spectral representation of spherical random fields holds in the mean-square sense: $$ \begin{align*} T(r, \theta, \varphi)=\sum_{l=0}^{\infty} \sum_{m=-l}^{l} Y_{l}^{m}(\theta, \varphi) a_{l}^{m}(r), \end{align*} $$ where $a_{l}^{m}(r)$ is a set of random coefficients defined by $$ \begin{align*} a_l^m(r)=\int_0^{\pi} \int_0^{2\pi} T(r,\theta,\varphi)\overline{Y_l^m(\theta, \varphi)}r^2 \sin{\theta}\;d\theta \;d\varphi = \int_{s_2(1)}\tilde{T}(ru)\overline{Y_l^m(u)}\sigma_1(du), \end{align*} $$ with $u= {x/\Vert x \Vert } \in s_2(1)$ , $r= \Vert x \Vert $ . Definition 2.6. A real-valued random field $\tilde {T}(x)$ , $x \in \mathbb {R}^{3}$ , has stationary increments if the equality $$ \begin{align*} \tilde{T}(x+{x^{\prime}})-\tilde{T}({x^{\prime}}) \stackrel{d}{=} \tilde{T}(x)-\tilde{T}(0), \quad x \in \mathbb{R}^{3}, \end{align*} $$ holds for all ${x^{\prime }} \in \mathbb {R}^{3}$ .
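Definition 2.6 can be illustrated with standard Brownian motion on the line, whose covariance is $C(s,t)=\min (s,t)$ : the variance of an increment computed from this covariance depends only on the lag, not on the starting point. A minimal pure-Python check (an illustrative sketch; the function names are ours):

```python
# Stationary increments of standard Brownian motion, checked via its
# covariance C(s, t) = min(s, t).  The increment variance
# E(T(x' + x) - T(x'))^2 = C(x'+x, x'+x) - 2*C(x'+x, x') + C(x', x')
# depends only on the lag x, not on the starting point x'.

def bm_cov(s, t):
    """Covariance of standard Brownian motion."""
    return min(s, t)

def increment_variance(x, x_prime):
    """E(T(x_prime + x) - T(x_prime))^2 computed from the covariance."""
    a, b = x_prime + x, x_prime
    return bm_cov(a, a) - 2.0 * bm_cov(a, b) + bm_cov(b, b)

# The variogram value equals the lag x for every starting point x':
for x_prime in (0.0, 0.5, 2.0, 10.0):
    print(increment_variance(0.3, x_prime))  # ~0.3 regardless of x'
```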
Remark 2.7. When $\tilde {T}(x)$ , $x \in \mathbb {R}^{3}$ , is a second-order random field with stationary increments, one has $E (\tilde {T}(x+x^{\prime })-\tilde {T}(x^{\prime }))^{2}=\mathcal {V}_{\tilde {T}}(x)$ for every $(x, x^{\prime }) \in \mathbb {R}^{3} \times \mathbb {R}^{3}$ , where $\mathcal {V}_{\tilde {T}}$ is called the variogram of the field $\tilde {T}$ . Definition 2.8. A real-valued random field $\tilde {T}(x)$ , $x \in \mathbb {R}^{3}$ , is said to be globally self-similar if, for some fixed positive real number H and for each positive real number a, it satisfies (2.1) $$ \begin{align} a^{-H} \tilde{T}(a x) \stackrel{d}{=} \tilde{T}(x), \; x \in \mathbb{R}^{3}. \end{align} $$ Remark 2.9. Apart from the degenerate case, the scale invariance property (2.1) holds only for a unique H, which is called the global self-similarity exponent. Definition 2.10. For each fixed $H \in (0,1),$ there exists a real-valued globally H-self-similar isotropic centred Gaussian field with stationary increments. This is called the fractional Brownian field (FBF) of Hurst parameter $H,$ and is denoted by $B_{H}(t)$ , $t \in \mathbb {R}^{3}$ . The corresponding covariance function is given, for all $(t^{\prime }, t^{\prime \prime }) \in \mathbb {R}^{3} \times \mathbb {R}^{3}$ , by $$ \begin{align*} E(B_{H}(t^{\prime}) B_{H}(t^{\prime \prime}))=2^{-1} \operatorname{Var} (B_{H} (\mathbf{e}_{0})) (\lVert t^{\prime}\rVert^{2 H}+\lVert t^{\prime \prime}\rVert^{2 H}- \lVert t^{\prime}-t^{\prime \prime}\rVert^{2 H}), \end{align*} $$ where $\mathbf {e}_{0}$ denotes an arbitrary vector of the unit sphere $s_2(1)$ . Remark 2.11. In the particular case where $H=1 / 2,$ the FBF is denoted by $B(t)$ , $t \in \mathbb {R}^{3}$ , and is called Lévy Brownian motion. Similarly, one can introduce an $H\text {-self-similar}$ process in the one-dimensional case. We also denote it by $B_{H}(t)$ , $t \geq 0$ . It will be called the fractional Brownian motion (FBM).
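A direct consequence of the covariance formula in Definition 2.10 is that the FBF increment variance equals $\operatorname {Var}(B_{H}(\mathbf {e}_{0}))\,\lVert t^{\prime }-t^{\prime \prime }\rVert ^{2H}$ , depending only on the lag, which is consistent with stationarity of the increments. The following pure-Python sketch verifies this identity numerically (the variance constant and test points are arbitrary illustrative choices):

```python
import math

# Check that the FBF covariance of Definition 2.10 yields an increment
# variance of var0 * ||t1 - t2||^(2H).  `var0` plays the role of
# Var(B_H(e0)) and is an arbitrary illustrative value.

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def fbf_cov(t1, t2, H, var0=1.0):
    """Covariance of the fractional Brownian field with Hurst parameter H."""
    diff = [a - b for a, b in zip(t1, t2)]
    return 0.5 * var0 * (norm(t1) ** (2 * H) + norm(t2) ** (2 * H)
                         - norm(diff) ** (2 * H))

def increment_variance(t1, t2, H, var0=1.0):
    """E(B_H(t1) - B_H(t2))^2 computed from the covariance function."""
    return (fbf_cov(t1, t1, H, var0) + fbf_cov(t2, t2, H, var0)
            - 2.0 * fbf_cov(t1, t2, H, var0))

H = 0.7
t1, t2 = [1.0, 2.0, 0.5], [-0.3, 1.1, 2.0]
lag = norm([a - b for a, b in zip(t1, t2)])
# Increment variance depends only on the lag ||t1 - t2||:
assert abs(increment_variance(t1, t2, H) - lag ** (2 * H)) < 1e-12
```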
Definition 2.12 [Reference Péltier and Véhel38]. The FBM with Hurst parameter $H(0<H<1)$ is defined by the stochastic integral $$ \begin{align*} B_{H}(t) = \frac{1}{\Gamma(H+1 / 2)}\bigg\{\int_{-\infty}^{0} ((t-s)^{H-1 / 2}-(-s)^{H-1 / 2}) \;{d} W(s) +\int_{0}^{t}(t-s)^{H-1 / 2}\; {d} W(s)\bigg\}, \end{align*} $$ where $t \geq 0$ and $W(\cdot )$ denotes a Wiener process on $(-\infty , \infty )$ . The Hurst parameter specifies the degree of self-similarity. When $H=0.5$ , the FBM reduces to the standard Brownian motion. In contrast to the Brownian motion, the increments of FBM are correlated. 3 Multifractional processes This section provides definitions and theorems related to multifractional processes. Most of the material presented in this section is based on [Reference Ayache6, Reference Benassi, Cohen and Istas9, Reference Benassi, Roux and Jaffard10, Reference Péltier and Véhel38]. Let $C^1$ be the class of continuously differentiable functions and $C^2$ be the class of functions where both first and second derivatives exist and are continuous. First, we introduce multifractional processes in the one-dimensional case. These will be used to analyse the CMB temperature intensities using the ring ordering Hierarchical Equal Area isoLatitude Pixelation (HEALPix) scheme. Definition 3.1 [Reference Benassi, Cohen and Istas9]. A multifractional Gaussian process $X(t), \; t \in [0,1],$ is a real Gaussian process whose covariance function $C(t,s)$ is of the form $$ \begin{align*} C(t, s)=\int_{\mathbb{R}} f(t, \lambda) \overline{f(s, \lambda)}\; {d} \lambda, \end{align*} $$ $$ \begin{align*} f(t, \lambda)=\frac{(e^{i t \lambda}-1) a(t, \lambda)}{\lvert \lambda\rvert ^{1 / 2+\alpha(t)}}. \end{align*} $$ The smoothness of the process is determined by the function $\alpha (\cdot )$ which is from $C^{1}$ with $0<\alpha (t)<1$ , $t \in [0,1]$ . 
The modulation of the process is determined by the function $a(t, \lambda )$ which is defined on $[0,1] \times \mathbb {R}$ and satisfies $a(t, \lambda )=a_{\infty }(t)+R(t, \lambda )$ , where $a_{\infty }(\cdot )$ is $C^1([0,1])$ with $a_{\infty }(t) \neq 0$ for all $t \in [0,1],$ and $R(\cdot , \cdot ) \in C^{1,2}([0,1] \times \mathbb {R})$ is such that, for some $\eta>0$ and a constant $C>0$ , for $i=0,1$ and $j=0,2$ we have $$ \begin{align*} \bigg\lvert \frac{\partial^{i+j}}{\partial t^{i} \partial \lambda^{j}} R(t, \lambda)\bigg\rvert \leqslant \frac{C}{\lvert\lambda\rvert^{\eta+j}}. \end{align*} $$ Definition 3.2 [Reference Péltier and Véhel38]. Multifractional Brownian motion (MBM) is given by $$ \begin{align*} B_{H(t)}(t)&=\frac{\sigma}{\Gamma(H(t)+1 / 2)}\bigg\{\int_{-\infty}^{0}((t-s)^{H(t)-1 / 2}-(-s)^{H(t)-1 / 2})\; {d} B(s) \\ & \quad + \int_{0}^{t}(t-s)^{H(t)-1 / 2}\; {d} B(s)\bigg\}, \end{align*} $$ where $B(s)$ is the standard Brownian motion and $\sigma ^{2}= \operatorname {Var}(B_{H(t)}(t))|_{t=1}$ . For the MBM, ${E} (B_{H(t)}(t))=0$ and $\operatorname {Var}({B}_{{H}({t})}(t))={\sigma ^{2}\lvert {t}\rvert ^{2 {H}({t})}}/2$ . The FBM is a special case of the MBM where the local Hölder exponent $H(t)$ is a constant, namely, $H(t)=H$ . In contrast to the FBM, the MBM is a nonstationary Gaussian process and its increments are not stationary. Definition 3.3. A function $H(\cdot ): \mathbb {R} \rightarrow \mathbb {R}$ is a $(\beta , c)$ -Hölder function, with $\beta>0$ and $c>0$ , if $$ \begin{align*} \lvert H (t_{1})-H (t_{2})\rvert \leqslant c \lvert t_{1}-t_{2}\rvert^{\beta} \end{align*} $$ for all $t_{1}, t_{2}$ satisfying $\lvert t_{1}-t_{2}\rvert <1$ . The MBM admits the following harmonizable representation (see, for example, [Reference Benassi, Roux and Jaffard10]).
If $H(\cdot ): \mathbb {R} \rightarrow [a, b] \subset (0,1)$ is a $\beta $ -Hölder function satisfying the assumption $\sup H(t)<\beta $ , then the MBM with functional parameter $H(\cdot )$ can be written as $$ \begin{align*} \operatorname{Re}\bigg(\int_{\mathbb{R}} \frac{({e}^{it \xi}-1)}{\lVert\xi\rVert^{H(t)+ 1 / 2}} {d} \tilde{W}(\xi)\bigg), \end{align*} $$ where ${\tilde {W}}(\cdot )$ is the complex isotropic random measure that satisfies $$ \begin{align*} {d}{\tilde {W}}(\cdot )={d} {W_{1}}(\cdot )+\mathrm {i}d {W_{2}}(\cdot ). \end{align*} $$ Here, ${W_{1}}(\cdot )$ and ${W_{2}}(\cdot )$ are independent real-valued Brownian measures. We now introduce the concept of the generalized multifractional Brownian motion (GMBM), which is an extension of the FBM and MBM. The GMBM was introduced to overcome the limitations that arise in applying the MBM to model data whose pointwise Hölder exponent has an irregular behaviour. The following definitions will be used to analyse the CMB temperature intensities using the ring and nested ordering HEALPix schemes for $d=1, 2$ , respectively. Definition 3.4 [Reference Ayache and Véhel7]. Let $[a, b] \subset (0,1)$ be an arbitrary fixed interval. An admissible sequence $(H_{n}(\cdot ))_{n \in \mathbb {N}}$ is a sequence of Lipschitz functions defined on $[0,1]$ and taking values in $[a, b]$ with Lipschitz constants $\delta _{n}$ such that $\delta _{n} \leqslant c_{1} 2^{n \alpha }$ , for all $n \in \mathbb {N}$ , where $c_{1}>0$ and $\alpha \in (0, a)$ are constants. Let $(H_{n}(\cdot ))_{n \in \mathbb {N}}$ be an admissible sequence.
The generalized multifractional field with the parameter sequence $(H_{n}(\cdot ))_{n \in \mathbb {N}}$ is the continuous Gaussian field $Y(x, y), \; (x, y) \in [0,1]^{d} \times [0,1]^{d}$ , defined for all $(x, y)$ as $$ \begin{align*} Y(x, y)=\operatorname{Re}\bigg(\int_{\mathbb{R}^{d}}\bigg(\sum_{n=0}^{\infty} \frac{({e}^{ix\xi}-1)}{\lVert\xi\rVert^{H_{n}(y)+ 1/2}} \hat{f}_{n-1}(\xi)\bigg)\, {d} \tilde{W}(\xi)\bigg), \end{align*} $$ where ${\tilde {W}}(\cdot )$ is the stochastic measure defined previously. The GMBM with the parameter sequence $(H_{n}(\cdot ))_{n \in \mathbb {N}}$ is the continuous Gaussian process $X(t), \; t \in [0,1]^{d}$ defined as the restriction of $Y(x, y)$ , $(x, y) \in {[0,1]}^{d} \times {[0,1]}^{d}$ to the diagonal, $X(t)=Y(t, t)$ . Compared with the FBM and MBM, one of the major advantages of the GMBM is that its pointwise Hölder exponent can be defined through the parameter $(H_{n}(\cdot ))_{n \in \mathbb {N}}$ . For every $t \in \mathbb {R}^{2},$ almost surely, $$ \begin{align*} \alpha_{X}(t)=H(t)=\liminf _{n \rightarrow \infty} H_{n}(t). \end{align*} $$ 4 The Hölder exponent This section presents basic notation, definitions and theorems associated with the pointwise Hölder exponent; see [Reference Ayache and Véhel7, Reference Benassi, Cohen and Istas9, Reference Istas and Lang24] for additional details. The pointwise Hölder exponent determines the regularity of a stochastic process. It describes local scaling properties of random fields, and can be used to detect multifractionality. Definition 4.1. The pointwise Hölder exponent of a stochastic process ${X(t)}$ , $t \in \mathbb {R},$ whose trajectories are continuous, is the stochastic process ${\alpha _{X}(t)}$ , ${t \in \mathbb {R}}$ , defined for every t as $$ \begin{align*} \alpha_{X}(t)=\sup \bigg\{\alpha \; \bigg| \limsup _{h \rightarrow 0} \frac{\lvert X(t+h)-X(t)\rvert}{\lvert h\rvert^{\alpha}}=0\bigg\}.
\end{align*} $$ For FBM, the pointwise Hölder exponent is constant: at any given point t, almost surely, $\alpha _{B_H}(t)=H$ . The pointwise Hölder regularity of MBM is likewise determined by its functional parameter: for every $t \in \mathbb {R}$ , almost surely, $\alpha _{X}(t)=H(t)$ . In the literature, the method of quadratic variations is a frequently used technique to estimate the Hölder exponent [Reference Benassi, Cohen and Istas9, Reference Istas and Lang24]. The following definition is used to compute the total increment in the one-dimensional case and will be applied for the ring ordering scheme of HEALPix points. Definition 4.2. Let $t \in [0,1]$ . For every integer $N \geq 2$ , the generalized quadratic variation $V_{N}^{(1)}(t)$ around t is defined by (4.1) $$ \begin{align} V_{N}^{(1)}(t) = \sum_{p \in v_{N}(t)}\bigg(\sum_{k \in F}e_k X\bigg(\frac{p+k}{N}\bigg)\bigg)^2, \end{align} $$ where $F=\{0,1,2\}$ , $e_{0}=1$ , $e_{1}=-2$ , $e_{2}=1$ , and $$ \begin{align*} v_{N}(t)= \{p \in \mathbb{N}\; | 0 \leqslant p \leqslant N-2 \text{ and } \lvert t -{p}/{N}\rvert \leqslant N^{-\gamma}\}.\end{align*} $$ Here $\gamma \in (0,1)$ is a fixed tuning parameter (see Theorem 4.4). The following definition is used to compute the total increment in the two-dimensional case and will be used for the nested ordering scheme of HEALPix points. Definition 4.3. Let $t = (t_1,t_2)\in [0,1]^2$ . For every integer $N \geq 2$ , the generalized quadratic variation $V_{N}^{(2)}(t)$ around t is defined by (4.2) $$ \begin{align} V_{N}^{(2)}(t) = \sum_{p \in v_{N}(t)}\bigg(\sum_{k \in F}d_k X\bigg(\frac{p+k}{N}\bigg)\bigg)^2, \end{align} $$ where $p={(p_1,p_2)}$ , ${(p+k)}/{N}=({{(p_1+k_1)}/{N}},{{(p_2+k_2)}/{N}})$ , $F=\{0,1,2\}^{2}$ and, for all $k=(k_{1}, k_{2}) \in F$ , $d_{k}=\prod _{l=1}^{2} e_{k_{l}}$ with $e_{0}=1$ , $e_{1}=-2$ and $e_{2}=1$ .
Here, $ v_{N}(t)=v_{N}^{1}(t_{1}) \times v_{N}^{2}(t_{2})$ and, for all $i=1, 2$ , $$ \begin{align*}v_{N}^{i} (t_{i}) = \{p_{i} \in \mathbb{N}\, | 0 \leqslant p_{i} \leqslant N-2 \text{ and } \lvert t_{i}-{p_{i}}/{N}\rvert \leqslant N^{-\gamma}\}.\end{align*} $$ The pointwise Hölder exponents are estimated for the one-dimensional ring ordering and two-dimensional nested ordering of HEALPix points by considering sufficiently large N and $d=1,2$ , respectively, in the following theorem which is a specialization of [Reference Ayache and Véhel7, Theorem 2.2] with $\delta =1$ . Theorem 4.4 [Reference Ayache and Véhel7]. Let $X(t), \; t \in [0,1]^d$ , be a GMBM with an admissible sequence $(H_n(\cdot ))_{n \in \mathbb {N}}$ ranging in $[a, b] \subset (0,1 - 1/(2d)).$ Then, for a fixed $\gamma \in (b, 1-1/(2d))$ and the sequence $(H_n(t))_{n \in \mathbb {N}}$ convergent to $H(t)$ , we have (4.3) $$ \begin{align} H(t) = \lim_{N \to \infty} \frac{1}{2}\bigg(d(1-\gamma)-\frac{\mathrm{\log} (V_{N}^{(d)}(t))}{\mathrm{\log}(N)}\bigg) \end{align} $$ almost surely. 5 Data and methodology This section presents an overview of the data and key ideas of the suggested methodology to study multifractionality of the CMB data that is based on theoretical results from Section 4. This and the next sections also provide a detailed justification of this methodology and its assumptions and required modifications of the formulas for the spherical case and CMB data. In the cosmological literature, it is widely accepted that CMB data are a realization of random fields on a sphere. This paper follows this approach to study the local properties of the corresponding spherical random field. We have developed and implemented a method of computing local estimators in a neighbourhood of each pixel. The CMB data are referenced by a very dense grid of pixels with equal areas on the sky sphere. They are stored according to the HEALPix format on the sphere.
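The estimator of Theorem 4.4 can be sketched in a few lines of pure Python: equation (4.1) with the weights $(1,-2,1)$ gives $V_{N}^{(1)}(t)$ , and equation (4.3) with $d=1$ turns it into an estimate of $H(t)$ for a finite N. This is an illustrative sketch on a toy sampled path, not the implementation used for the results in this paper:

```python
import math

# Sketch of the one-dimensional estimator: generalized quadratic variation
# (4.1) with second-difference weights (1, -2, 1), then the finite-N version
# of the limit formula (4.3) with d = 1.  `X` is a sampled path given as a
# list of values X[p] ~ X(p/N); the data below are illustrative only.

def quadratic_variation(X, t, gamma):
    """V_N^{(1)}(t) from equation (4.1)."""
    N = len(X) - 1
    total = 0.0
    for p in range(N - 1):                   # 0 <= p <= N - 2
        if abs(t - p / N) <= N ** (-gamma):  # p in v_N(t)
            total += (X[p] - 2 * X[p + 1] + X[p + 2]) ** 2
    return total

def holder_estimate(X, t, gamma, d=1):
    """Finite-N estimate of H(t) from equation (4.3)."""
    N = len(X) - 1
    V = quadratic_variation(X, t, gamma)
    return 0.5 * (d * (1 - gamma) - math.log(V) / math.log(N))

# Hand-checkable example: X(p/N) = (p/N)^2 on N = 4 intervals has constant
# second differences 2/N^2 = 1/8, so with gamma = 0 (all p included)
# V = 3 * (1/8)^2 = 0.046875.
X = [(p / 4) ** 2 for p in range(5)]
assert abs(quadratic_variation(X, 0.5, 0.0) - 0.046875) < 1e-12

# Scaling a path by c multiplies V by c^2, the invariance used in Section 5:
Xc = [3.0 * x for x in X]
assert abs(quadratic_variation(Xc, 0.5, 0.0) - 9 * 0.046875) < 1e-12
```

The last assertion illustrates why the intensity scaling discussed in Section 5 leaves the estimated exponent asymptotically unchanged.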
Each CMB pixel has a set of attributes, such as its unique location, temperature intensity and polarization data, which describe its properties. In this analysis the temperature intensities are used. The resolution parameter $N_{\text {side}}$ defines the number of pixels $N_{\text {pix}}$ on the sphere and their size. For example, for a given resolution $N_{\text {side}}=2048$ , there are $N_{\text {pix}}= 12 \times {(N_{\text {side}})}^2 = 50\,331\,648$ pixels observed on the CMB sky sphere [Reference Fryer, Li and Olenko18, Reference Gorski, Hivon, Banday, Wandelt, Hansen, Reinecke and Bartelmann20, Reference Hivon22]. The CMB data are stored at 5 and 10 arc-minute resolution on the CMB sky for the resolution parameters $N_{\text {side}}=2048$ and $N_{\text {side}}=1024$ , respectively. To estimate local Hölder exponent values one needs a sufficient number of observations in a neighbourhood of a given point. The dense HEALPix grid provides such high-resolution data to reliably estimate local Hölder exponent values. For all numerical results and estimates, the highest available resolution $N_{\text {side}}=2048$ was used. The Planck CMB intensity measurements vary in frequency from 30 to 857 GHz. They were obtained by separating the Planck CMB measurements from the foreground noise using several methods (COMMANDER, NILC, SEVEM and SMICA) [Reference Ade3]. In applied cosmological research, it is assumed that, after separation, the residual foreground noise component of Planck CMB temperature intensities is negligible. For multifractional data, $H(t)$ changes from location to location and $H(t) \not \equiv $ constant, where $t \in s_2(1)$ . Several methods to estimate the local Hölder exponent are available in the literature. Different methods often give inconsistent estimates of the Hölder exponent (see, for example, discussions in [Reference Bianchi11, Reference Struzik42]).
Inconsistent results by different techniques are due to their different assumptions [Reference Bianchi11]. We propose an estimation method based on the generalized quadratic variations given by (4.1) and (4.2) and their asymptotic behaviour in (4.3). The results of this method are also compared with another conventional method that uses the rescaled range (R/S) to estimate the Hölder exponent. This method is realized in the R package fractal [Reference Constantine and Percival17]. The CMB data exhibit variations of the temperature intensities at very small scales ( $\pm \ 1.8557 \times 10^{-3}$ ). To get reliable estimates of $H(t)$ , a large number of observations in neighbourhoods of each t is required. Thus, in this paper, we do not discuss the precision of the local estimators of $H(t)$ , but only pay attention to differences in the estimated values at different locations. For computing purposes, the temperature intensities were scaled as $$ \begin{align*} \text{scaled intensity} (t) =\frac{\text{intensity}(t)}{\max_{s\in s_2(1)}\lvert\text{intensity} (s)\rvert}. \end{align*} $$ It is clear from Definition 4.1 that this scaling does not change the values of $\alpha _X(t)$ . Also, by (4.1) and (4.2) the generalized quadratic variation of the scaled process $cX(t)$ is $c^2V_N^{(d)}(t)$ , $d=1,2$ . By (4.3), (5.1) $$ \begin{align} \lim_{N \to \infty} \frac{\log (c^2V_{N}^{(d)}(t))}{\log(N)} = \lim_{N \to \infty}\bigg(\frac{\log(c^2)}{\log(N)} + \frac{\log (V_{N}^{(d)}(t))}{\log(N)}\bigg) = \lim_{N \to \infty} \frac{\log (V_{N}^{(d)}(t))}{\log(N)}, \end{align} $$ which means that this scaling also does not affect $H(t)$ . As mentioned before, for small values of $\log (N)$ the estimates of $H(t)$ can be biased, which is now evident from the term $\log (c^2)/\log (N)$ in equation (5.1). However, this bias is due to the scaling effect only and is exactly the same for all values of t.
Although this might result in some errors in the estimates $\hat {H}(t)$ , it will not affect the analysis of differences in $H(t)$ values for different locations, which is the main aim of this analysis. Estimates of pointwise Hölder exponent values were computed using one- and two-dimensional regions of the CMB data and the HEALPix ring and nested orderings [Reference Gorski, Hivon, Banday, Wandelt, Hansen, Reinecke and Bartelmann20]. The ordering schemes are demonstrated in Figure 1. For fast computations, we exploited the well-known computational advantages of these HEALPix ordering representations. The method of quadratic variation for Hölder exponents was adjusted for the nested and ring representations on the sphere. Numerical studies showed that the proposed estimators are robust to changes of neighbourhood sizes. Figure 1 HEALPix ordering schemes. 6 Numerical studies This section presents numerical studies and applications of the methodology from Section 5 to CMB data. The pointwise Hölder exponent estimates $\hat {H}(t)$ are computed and analysed for one- and two-dimensional regions of the CMB temperature intensities acquired from the NASA/IPAC Infrared Science Archive [23]. The estimated Hölder exponents are used to quantify the roughness of the CMB temperature intensities. The methodology developed is also applied to detect possible anomalies in the CMB temperature intensities. 6.1 Estimates of Hölder exponent for one-dimensional CMB regions For the one-dimensional case, the HEALPix ring ordered CMB temperature intensities were modelled by a stochastic process $X(t)$ . Their Hölder exponents $H(t)$ were estimated by using the expression from equation (4.3) for the given large N with $d=1$ , where $V_{N}^{(1)}(t)$ was computed using equation (4.1), which can be explicitly written as $$ \begin{align*} V_N^{(1)}(t) = \sum_{p = 0}^{N-2}\bigg( X\bigg(\frac{p}{N}\bigg)- 2X\bigg(\frac{p+1}{N}\bigg) + X\bigg(\frac{p+2}{N}\bigg)\bigg)^2.
\end{align*} $$ As pixels on relatively small ring segments can be considered lying on approximately straight lines, the results from the case $d=1$ can be used. The parameter N was chosen to give approximately the number of pixels within a half ring of the CMB sky sphere. The parameter r is the distance from a HEALPix point t that is the centre of an interval in which we compute the total increment $V_{N}^{(1)}(t)$ . By the expression for $v_N(t)$ in Definition 4.2, the parameter $\gamma $ was computed as $\gamma ={-(\log (r)/\!\log (N))}$ for selected values of N and r. Then it was used in equation (4.3) to compute the estimated pointwise Hölder exponent values. According to the HEALPix structure of the CMB data with resolution $N_{\text {side}}=2048$ , the HEALPix ring ordering scheme results in $(4 \times N_{\text {side}}-1)$ rings [Reference Hivon22]. That is, for $N_{\text {side}} = 2048$ , the CMB sky sphere consists of $8191$ rings. Based on the HEALPix geometry, the number of pixels in the upper part rings increases with the ring number, $\text {Ring}=1,\ldots ,2047$ , as $4 \times \text {Ring}$ . The $(2N_{\text {side}}+1)=4097$ set of rings in the middle part of the CMB sky sphere have equal number of pixels, $4 \times N_{\text {side}}$ . The number of pixels in each of the final $(N_{\text {side}}-1)=2047$ rings in the lower part decreases according to the formula $4 \times (8191-\text {Ring}+1)$ . For the one-dimensional case, the estimated pointwise Hölder exponent values $\hat {H}(t)$ were computed as follows. First, a random CMB pixel was selected and its ring was determined. Then pixels belonging to the half of that particular ring were selected. Then, for each CMB pixel in this rim segment, the quadratic variation was computed by $V_{N}^{(1)}(t)$ given in equation (4.1). When computing the generalized quadratic variation for a CMB pixel, the pixels within a distance $r=0.08$ from it were considered. 
For these pixels, the squared increments were computed and used to obtain the total of increments. Finally, the Hölder exponents were estimated by substituting the total of increments and the other parameters in equation (4.3). First, three CMB pixels "552300", "1533000", "3253800" located in the corresponding upper part rings 525, 875 and 1275 were chosen. Then for each CMB pixel in these half rings, their corresponding estimated Hölder exponents $\hat {H}(t)$ were computed. Next, another three pixels "10047488", "32575488", "39948288" were chosen in the middle part of the CMB sky sphere. Their ring numbers were 2250, 5000 and 5900, respectively. Finally, three CMB pixels "47656664", "48651704", "49375304" belonging to the lower part rings, 7035, 7275 and 7500 were selected and the pointwise Hölder exponents of pixels in their rim segments were estimated. For example, Figure 2 shows the plots of the scaled intensities and the estimated pointwise Hölder exponents of the rim segments of rings 1275 and 5900, which belong to the upper and middle parts of the CMB sky sphere, respectively. It can be seen from Figures 2(a) and 2(b) that the majority of scaled intensities fall into the range $[-0.2, 0.2]$ and their fluctuations are random. Figures 2(c) and 2(d) show that the $\hat {H}(t)$ values in both rim sections are changing and the dispersion range for ring 1275 is wider than that of ring 5900. Similar plots and results were also obtained for other rings. Figure 2 Examples of scaled intensities and $\hat {H}(t)$ values for one-dimensional CMB regions. The summary of the estimated pointwise Hölder exponent values obtained by the discussed methodology is shown in Table 1. It is clear that the dispersion range of the $\hat {H}(t)$ values and the mean $\hat {H}(t)$ value change with ring numbers. These results suggest that the pointwise Hölder exponent values change from location to location. 
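As a cross-check of the ring structure described above, the HEALPix pixel counts per ring can be encoded directly. In the sketch below, `npix_in_ring` is our own helper (with rings numbered $1,\ldots ,4N_{\text {side}}-1$ from the north pole, following the counts quoted earlier); summing over all rings recovers the total $12 \times {(N_{\text {side}})}^2$ pixels:

```python
# Pixel counts per HEALPix ring, following the formulas quoted in the text:
# 4*Ring in the upper polar cap, 4*Nside in the (2*Nside + 1) equatorial
# rings, and 4*(nrings - Ring + 1) in the lower polar cap.

def npix_in_ring(ring, nside):
    """Number of pixels in ring `ring` (1-based) of a HEALPix sphere."""
    nrings = 4 * nside - 1
    if ring < nside:                  # upper polar cap: rings 1..nside-1
        return 4 * ring
    if ring <= 3 * nside:             # equatorial belt: 2*nside + 1 rings
        return 4 * nside
    return 4 * (nrings - ring + 1)    # lower polar cap, mirror of the upper

nside = 2048
nrings = 4 * nside - 1                # 8191 rings, as stated in the text
total = sum(npix_in_ring(r, nside) for r in range(1, nrings + 1))
assert nrings == 8191
assert total == 12 * nside ** 2       # 50 331 648 pixels
```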
The summary of the estimated pointwise Hölder exponent values obtained by the conventional (R/S) method using the command RoverS from the R package fractal is given in Table 2. Note that the dispersion range and the mean $\hat {H}(t)$ value change with the spiralling ring number. Similar results were also obtained for other available estimators of the Hölder exponent. Although these numerical values are inconsistent between different methods, they all suggest that the pointwise Hölder exponent values change from location to location. Table 1 Summary of $\hat {H}(t)$ values for pixels in different rings of the CMB sky sphere. Table 2 Summary of $\hat {H}(t)$ values for pixels in different rings of the CMB sky sphere using the R/S method. It is expected that temperature intensities are positively dependent/correlated in close regions; see the covariance analysis in [Reference Broadbridge, Kolesnik, Leonenko and Olenko13]. Therefore, running standard equality-of-means tests under independence assumptions will provide even more significant results if the hypothesis of equal means is rejected. To prove that distributions of $\hat {H}(t)$ are statistically different between different sky regions, we carried out several equality-of-means tests. Before that, the Shapiro test was used to ensure that the $\hat {H}(t)$ values satisfy the normality assumption. For all the cases considered in Table 1, their $\hat {H}(t)$ values failed the normality assumption. Since the CMB pixels close to each other can be dependent, to get more reliable results, we chose CMB pixels at distance 50 apart on a ring. The Shapiro test confirmed that in all the considered upper and lower part cases in Table 1, $\hat {H}(t)$ values at step 50 satisfied the normality assumption, whereas the $\hat {H}(t)$ values in the middle part failed the normality assumption. Let $\mu _1$ and $\mu _2$ be the $\text {mean}{({\hat {H}(t)})}$ values of the rim segments of rings 525 and 1275, respectively. 
To test the hypothesis $H_0: \mu _1 = \mu _2$ against $H_1: \mu _1 \neq \mu _2$ , we carried out the Wilcoxon test. The obtained p-value ( $3.048 \times 10^{-15}$ ) is significantly less than 0.05 and suggests that the means are different at the 5% level of significance. Similar results were obtained for the Wilcoxon tests between all pairs of the cases in Table 1. For example, Table 3 shows Wilcoxon test results for selected four rings, two in the upper part and one each in the middle and lower parts of the CMB sky sphere. Figure 3 shows the distribution box plots of the $\hat {H}(t)$ values in the rim segments of rings 525, 1275, 2250 and 7500. It is clear from Figure 3 that the $\text {mean}{({\hat {H}(t)})}$ values are different from each other in these cases. Table 3 The p-values for Wilcoxon tests between different rings. Figure 3 The distribution of $\hat {H}(t)$ values of four rim segments. Analogously to Table 3, for all Wilcoxon tests between the rim sectors in the upper, middle and lower parts, $p<0.05$ . Therefore, there is enough statistical evidence to suggest that the pointwise Hölder exponents change from location to location. While we compared Hölder exponents for different rings, from Figure 2 it is clear that $\hat {H}(t)$ is also changing for pixels within the same rings. 6.2 Estimates of Hölder exponent for two-dimensional CMB regions For two-dimensional sky regions, pointwise Hölder exponent values $H(t)$ were estimated according to equation (4.3) with $d=2$ , where $V_{N}^{(2)}(t)$ was computed using equation (4.2).
Equation (4.2) in Definition 4.3 can be written in the following explicit form: $$ \begin{align*} V_{N}^{(2)}(t) = &\sum_{p \in v_{N}(t)}\bigg\{\sum_{k_1 \in \{0,1,2\}} \sum_{k_2 \in \{0,1,2\}} e_{k_1}e_{k_2} X\bigg(\frac{p_1+k_1}{N},\frac{p_2+k_2}{N}\bigg)\bigg\}^2 \\ =& \sum_{p \in v_{N}(t)} \bigg\{X\bigg(\frac{p_1}{N},\frac{p_2}{N}\bigg) - 2X\bigg(\frac{p_1}{N},\frac{p_2+1}{N}\bigg) -2 X\bigg(\frac{p_1+1}{N},\frac{p_2}{N}\bigg) \\ &\quad +X\bigg(\frac{p_1}{N},\frac{p_2+2}{N}\bigg) + X\bigg(\frac{p_1+2}{N},\frac{p_2}{N}\bigg) + 4 X\bigg(\frac{p_1+1}{N},\frac{p_2+1}{N}\bigg) \\ &\quad - 2 X\bigg(\frac{p_1+1}{N},\frac{p_2+2}{N}\bigg) - 2 X\bigg(\frac{p_1+2}{N},\frac{p_2+1}{N}\bigg) + X\bigg(\frac{p_1+2}{N},\frac{p_2+2}{N}\bigg)\bigg\}^2. \end{align*} $$ To compute quadratic increments of spherical random fields, relatively small parts of the sphere can be approximately considered as regions of the plane and the above formula can be applied. Note that the internal summation set $\{ ( {(p_1 + k_1)/N}, {(p_2 + k_2)/N})\mid k_1, k_2 \in \{0, 1, 2\} \}$ can be very efficiently represented by the HEALPix nested structure. Indeed, all pixels have either seven or eight neighbours (see Figure 4). The $3 \times 3$ configuration with eight neighbours perfectly matches the internal summation set and can be directly used in computations of $V_N^{(2)}(t)$ . For the case of seven neighbours, an additional eighth neighbour, with intensity equal to that of an adjacent pixel, was imputed. For the resolution $N_{\text {side}} = 2048$ , only 24 out of 50 331 648 pixels have seven neighbours. For such a small number of pixels, the imputation has a negligible impact on the results. Figure 4 Examples of pixels with seven and eight neighbours for $N_{\text {side}}=4$ . Circular regions with radius $R=0.23$ were used in the computations in this section. Let N denote the number of pixels within such circular regions. Then, $N \approx 662\, 700$ pixels.
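The nine coefficients in the explicit expansion of (4.2) above are simply the outer product of the weights $(1,-2,1)$ with themselves; they sum to zero and annihilate affine functions, so $V_{N}^{(2)}(t)$ responds to curvature and roughness rather than to smooth trends. A short illustrative check:

```python
# The 2D stencil d_k = e_{k1} * e_{k2} from Definition 4.3, with
# e = (1, -2, 1), matches the nine coefficients in the explicit expansion
# of (4.2) and annihilates any affine function a + b*x + c*y.

e = (1, -2, 1)
d = {(k1, k2): e[k1] * e[k2] for k1 in range(3) for k2 in range(3)}

expected = {(0, 0): 1, (0, 1): -2, (0, 2): 1,
            (1, 0): -2, (1, 1): 4, (1, 2): -2,
            (2, 0): 1, (2, 1): -2, (2, 2): 1}
assert d == expected

# The stencil annihilates any affine function (arbitrary test coefficients):
a, b, c = 2.0, -1.5, 0.7
assert abs(sum(w * (a + b * k1 + c * k2) for (k1, k2), w in d.items())) < 1e-12
```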
To reduce the computation time, we chose a grid of 1000 CMB pixels with step $662 = [662\,700/1000]$ , where $[\cdot ]$ denotes the integer part, over the total number of pixels. To compute local estimators $\hat {H}(t)$ , for each chosen CMB pixel, a circular window with radius $r=0.01$ was selected. The value of $\gamma $ was computed as $\gamma ={- (\log (\sqrt {\pi }r/2)/\!\log (\sqrt {N}))}$ for given values of N and r. The factor $\sqrt {\pi }/2$ appeared as the number of pixels is proportional to a window area. To match the number of pixels in circular window regions that were used in computations and square regions used for summation in $V_{N}^{(2)}(t)$ , the length $2{d_0}$ of the side of squares should satisfy the equation ${(2{d_0})^2} = {\pi r^2}$ . The $\gamma $ obtained was substituted in equation (4.3) to compute the estimated pointwise Hölder exponent values. For $r=0.01$ , there are approximately 2836 pixels in each specified window. For each of these pixels, the squared increment was computed and the total of increments was obtained by the expression for $V_{N}^{(2)}(t)$ . Initially, the two-dimensional regions were selected randomly. Then, from those candidates, regions with majority warm, majority cold, a mixture of temperatures, and borderline regions were selected. The paper presents only results about those selected regions. Similar results were obtained for other analogous regions, but, due to space restrictions, are not given here. First, a circular CMB sky window of radius $R=0.23$ from a warm area with a majority of high temperature intensities was selected. The mean temperature intensity in the selected CMB sky region covering the warm area was $5.978\,61 \times 10^{-5}$ . The window is shown in Figure 5(a). The number of pixels in that specific window was 662 685. Then different circular CMB sky windows having a radius of $R=0.23$ covering cold, mixture, and borderline regions shown in Figures 5(b), 5(c) and 5(d) were chosen. 
For the cold region, the mixture of warm and cold, and the borderline region containing both warm and cold areas, the numbers of pixels were 662 697, 662 706 and 662 725, respectively. The value of $\gamma $ was computed as $\gamma =0.705$ for each CMB sky region. The corresponding mean temperature intensities were $-8.340\,55 \times 10^{-5}$ , $-1.740\,35 \times 10^{-5}$ and $7.598\,51 \times 10^{-6}$ . Figure 5 Sky windows used for computations. The plots of the estimated pointwise Hölder exponent values for each case are displayed in Figure 6. These $\hat {H}(t)$ values are mostly dispersed in the interval $[0.36, 0.86]$ . Figure 6 shows erratic and irregular behaviour in the distribution of $\hat {H}(t)$ values. It can be observed that the estimates in Figures 6(a) and 6(d), which cover substantial warm areas, have larger $\hat {H}(t)$ fluctuations than the $\hat {H}(t)$ values for cold regions. Figure 6 Local estimates $\hat {H}(t)$ for two-dimensional regions. A summary of the estimated pointwise Hölder exponents for each selected region is given in Table 4. It shows the mean CMB temperature intensities of each circular window. Table 4 also presents the estimated minimum, maximum and mean $\hat {H}(t)$ values computed by using the selected 1000 CMB pixels. It is clear from this table that the mean $\hat {H}(t)$ value is highest for the warm region and lowest for the borderline region. The mean $\hat {H}(t)$ values of the cold region and the mixture case lie in between. It is apparent from Table 4 that the range of the estimated pointwise Hölder exponent values changes with the temperature of the chosen regions of the CMB sky sphere. To further investigate the estimated pointwise Hölder exponents, they were computed for 100 random CMB pixels in each of the regions considered. Even after accounting for the variation across these 100 CMB pixels, the $\hat {H}(t)$ values still differed between regions.
The analyses suggested that the $\hat {H}(t)$ values for 100 and 1000 CMB pixels were consistent. Therefore, the results suggest that the estimated pointwise Hölder exponent values change from place to place. To test whether $\hat {H}(t)$ is significantly different between different sky windows, we carried out several equality-of-means tests. Initially, we carried out the Shapiro-Wilk test to check that the $\hat {H}(t)$ values satisfy the normality assumption. However, for all the cases considered in Table 4, the $\hat {H}(t)$ values failed the normality assumption. Figure 7 displays box plots of the distributions of the $\hat {H}(t)$ values in the CMB sky windows with warm, cold, mixture and borderline regions. It can be observed from Figure 7 that the $\hat {H}(t)$ distributions have extreme values in all four cases. Thus, we present only results from the Wilcoxon test, as it is robust to nonnormality of the data and the presence of outliers. Let $\mu _1$ and $\mu _2$ be the $\text {mean}{({\hat {H}(t)})}$ values in the sky windows with warm and cold regions, respectively. Testing the hypothesis $H_0: \mu _1 = \mu _2$ against $H_1: \mu _1 \neq \mu _2$ using a Wilcoxon test, we obtained $p <2.2 \times 10^{-16}$ , so the means are different at the 5% level of significance. Similar results were obtained for the Wilcoxon tests between all pairs of the cases, and the corresponding p-values are shown in Table 5. This suggests that the $\text {mean}{({\hat {H}(t)})}$ values are different from each other in all the cases. Apart from variations between the cases, we observe from Figure 6 and Table 4 that the estimated Hölder exponents also change within individual sky windows. Table 4 Analysis of CMB sky windows with different temperatures. Figure 7 The distribution of $\hat {H}(t)$ values for chosen sky windows. Table 5 The p-values for Wilcoxon tests between chosen sky windows.
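The pairwise comparisons above use the two-sample Wilcoxon (Mann-Whitney) rank-sum test. Below is a self-contained Python sketch of this test with the normal approximation; production implementations (for example, R's wilcox.test, which these analyses would typically use) add continuity and tie-variance corrections that are omitted here, so this is an illustration of the test's mechanics rather than the paper's exact computation.

```python
import math

def _ranks(values):
    """Ranks of the pooled sample, with ties replaced by their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1                 # average of ranks i+1, ..., j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def wilcoxon_rank_sum(x, y):
    """Two-sided rank-sum test; returns (rank sum of x, approximate p-value)."""
    n1, n2 = len(x), len(y)
    ranks = _ranks(list(x) + list(y))
    w = sum(ranks[:n1])                       # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2               # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # st. dev. under H0
    z = (w - mu) / sigma
    return w, math.erfc(abs(z) / math.sqrt(2))       # 2 * P(Z > |z|)
```

For two well-separated samples the p-value is small, while interleaved samples give a p-value far from significance.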
Therefore, there is enough evidence to suggest that the pointwise Hölder exponents change from location to location on the CMB sky sphere. 6.3 Analysis of CMB temperature anomalies As previously discussed in Section 1, several missions have measured the CMB temperature anisotropies, gradually increasing their precision by using advanced radio telescopes. This section discusses applications of the multifractional methodology to detect regions of CMB maps with "anomalies". In particular, it can help in evaluating various reconstruction methods for blocked regions with unavailable or too noisy data. It is well known that the CMB maps are affected by interference from the Milky Way, and radio signals emitted from our galaxy are much noisier than the CMB. Thus, the Milky Way blocks the CMB near the galactic plane. However, the smooth and predictable nature of the Milky Way's radiation spectrum has enabled the cosmological attributes to be recovered by subtracting this spectrum from the initially observed intensities [Reference Castelvecchi16]. In the Planck 2015 results, the CMB maps were cleaned and reconstructed using different techniques, namely COMMANDER, NILC, SEVEM and SMICA (see [Reference Ade3] for more information). We use the CMB map produced by the SMICA method [23] with $N_{\text {side}}=2048$. To examine the random behaviour of isotropic Gaussian fields on the sphere, a direction-dependent mathematical tool was proposed in [Reference Hamann, Gia, Sloan, Wang and Womersley21]. The authors applied their probe to investigate the CMB maps from Planck PR2 2015 and PR3 2018, with specific consideration of cosmological data from the inpainted maps. To detect departures from the traditional stochastic model of the CMB data, they utilized the autocorrelation of the sequence of full-sky Fourier coefficients and proposed an "AC discrepancy" function on the sphere.
For the inpainted Planck 2015 COMMANDER map, [Reference Hamann, Gia, Sloan, Wang and Womersley21] shows the maximum "AC discrepancy" at the galactic coordinates $(l,b) = (353.54, 1.79)$ . Similarly, for the inpainted Planck COMMANDER 2018, NILC 2018, SEVEM 2018 and SMICA 2018 maps with $N_{\text {side}}=1024$ , there are significant departures at the galactic coordinates $(12.57, 0.11)$ , $(61.17,-30.73)$ , $(261.25,-2.99)$ and $(261.34,-2.99)$ , respectively. A majority of these locations are in the masked regions of the galactic plane. The galactic coordinates corresponding to the largest deviations are different for each map, reflecting the discrepancies between the underlying inpainting techniques. The approach of Hamann et al. [Reference Hamann, Gia, Sloan, Wang and Womersley21] used directional dependencies in CMB data on the unit sphere. The results below are based on a different approach that uses the local roughness properties of these data. Therefore, the detected regions of high anomalies can be different for these two methods, as they reflect different physical anisotropic properties of the CMB (see, for example, Figure 10). The estimated local Hölder exponents on one-dimensional rings can be considered as directional local probes of CMB anisotropy. However, the estimates for two-dimensional regions are more complex and aggregate local information about roughness in different directions. In the following analysis, we use estimated values of the Hölder exponent to detect regions of possible anomalies in CMB maps. Figure 8 shows the plots of scaled intensities and estimated Hölder exponent values $\hat {H}(t)$ in one- and two-dimensional CMB regions of the great circle. We notice from Figure 8(a) that there is an increase in the fluctuations of the scaled intensity values within the HEALPix range $[25\,163\,000, 25\,164\,000]$ of the great circle ring. A low plateau of estimated $\hat {H}(t)$ values in Figure 8(b) corresponds to that range of HEALPix values.
The equator rim segment with the unusual plateau of $\hat {H}(t)$ values has CMB pixel numbers ranging from 25 163 208 to 25 163 852. Their corresponding galactic coordinates were found to be between $(65.02, 0.01)$ and $(93.32, 0.01)$ . Figure 8 Scaled intensities and estimated $\hat {H}(t)$ values in one- and two-dimensional regions of the great circle. Similarly, this unusual behaviour of $\hat {H}(t)$ values was observed in the two-dimensional CMB regions near the galactic plane/equator. Figure 8(c) shows the plot of scaled intensities in the two-dimensional space, and a spike in intensities can be observed near the specified range of HEALPix values. The corresponding lower valley of $\hat {H}(t)$ values can be seen in Figure 8(d). The four corners of the spherical region having unusual $\hat {H}(t)$ values have HEALPix values 23 404 309, 23 391 936, 23 564 929 and 24 158 424. Their galactic coordinates were found to be $(85.91, -1.66)$ , $(76.82, -1.66)$ , $(76.82, 4.05)$ and $(85.91, 4.05)$ , respectively. Table 6 shows the summary of CMB intensities in these one- and two-dimensional equatorial regions. The two-dimensional region around the unusual values was extracted as a rectangular spherical region from the circular CMB window, using the previously identified galactic coordinates to split it into the unusual region and the remaining region. It is clear that the range of temperature intensities is wider in the one- and two-dimensional regions around the unusual values than in the regions excluding them. Further, the variances of intensities in the anomalous regions are larger than in the remaining regions. Moreover, Table 6 confirms that the mean $\hat {H}(t)$ values in the anomalous regions are lower than in the remaining regions. Table 6 Analysis of CMB intensities near the equatorial region. Figure 9 shows the Planck 2015 map with blocked nonreliable CMB values.
The region of CMB values where the TMASK is applied by the SMICA reconstruction technique is removed in Figure 9. The TMASK of the CMB intensities utilized by the SMICA method determines the region where the inpainted CMB intensities in the galactic plane are considered to be reliable. The rectangular window shows a possible region of anomalies detected by the developed multifractional methodology. Figure 9 SMICA 2015 map with TMASK and the region of anomalies. We now apply this approach and investigate $\hat {H}(t)$ for all $t \in s_2(1)$ . First, the one-dimensional methodology was used. $\hat {H}(t)$ was estimated using the CMB intensities on rims, similar to the analysis in Figures 8(a) and 8(b). Moving windows with 4096 consecutive pixels, which is approximately half of a full ring, were used to obtain values of $\hat {H}(t)$ . To clearly show local behaviours, after several trials, sets $v_N(t)$ with 61 HEALPix points, that is, with a radius of 30 pixels, were selected. The results obtained are shown in Figure 10(a). To compare them with the AC discrepancy approach in [Reference Hamann, Gia, Sloan, Wang and Womersley21], Figure 10(b) shows the corresponding map obtained by applying the direction-dependent probe. The code from [Reference Wang44] was used to compute values of AC discrepancies for SMICA 2015 CMB intensities. The first map highlights $\hat {H}(t)$ values below the 5th percentile. AC discrepancy values above the 95th percentile were used for the second map. Both approaches detected the region of anomalies in Figure 9. However, from the locations of other discrepancy values, it is clear that these approaches detect different CMB anomalies. Figure 10 Discrepancy maps for CMB intensities from SMICA 2015. Very sharp changes in $\hat {H}(t)$ values in Figure 8(b) motivated the second method to detect anomalies, which is based on increments of $\hat {H}(t)$ values.
Figure 8(b) demonstrated substantial changes of $\hat {H}(t)$ for nearby t locations. These changes are permanent, as $\hat {H}(t)$ exhibits stable behaviour after a rapid "jump". Such changes are different from noise or outliers, where values at random distinct locations lie at an abnormal distance from the values at their surrounding points. To detect such rapid changes, we used the statistic ${\hat {H}}_{\Delta }(t) = \min _{t_1 \in {\Delta }{(t)}}\lvert {\hat {H}}(t)-{\hat {H}}(t_1)\rvert $ , where t and $t_1$ are indices of ring-ordered pixels and $\Delta (t) = \{t+10,\ldots ,t+20\}$ . The delay of 10 was selected to detect jumps that occur over short distances. The minimum over the set of consecutive points $\Delta {(t)}$ was used to eliminate outliers or noise that can result in isolated large differences $\lvert {\hat {H}}(t)-{\hat {H}}(t_1)\rvert $ . Figure 11(a) shows the computed ${\hat {H}}_{\Delta }(t)$ values for SMICA 2015 CMB intensities. Here, ${\hat {H}}_{\Delta }(t)$ values above the 95th percentile are plotted. Figure 11 ${\hat {H}}_{\Delta }$ discrepancy maps for CMB intensities from SMICA 2015. It is well known that the galactic centre and the equatorial region are the most problematic and questionable areas of the CMB maps. Our methodology classified the galactic centre as anomalous, based on its ${\hat {H}}_{\Delta }(t)$ value being above the 95th percentile. The contextualization has already been provided by Hamann et al. [Reference Hamann, Gia, Sloan, Wang and Womersley21]; it is due to difficulties in producing correctly cleaned CMB maps in this region. In Figure 11(b), the largest $5\%$ of ${\hat {H}}_{\Delta }(t)$ values are shown on the TMASK map. Note that in most cases, clusters of the largest ${\hat {H}}_{\Delta }(t)$ values are within the TMASK. It seems that the ${\hat {H}}_{\Delta }$ indices rather accurately detected many regions with unreliable CMB values. Analysis of other CMB maps gave similar results.
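The jump statistic ${\hat {H}}_{\Delta }(t)$ is straightforward to compute from a sequence of local estimates. A minimal Python sketch follows (the list-based indexing is an illustrative assumption; the paper's code is in R):

```python
def h_delta(h, t, lag_min=10, lag_max=20):
    """H_delta(t) = min over t1 in {t + lag_min, ..., t + lag_max} of |h[t] - h[t1]|.

    Taking the minimum over a block of consecutive lags suppresses isolated
    outliers, so only persistent jumps of h produce large values."""
    return min(abs(h[t] - h[t1]) for t1 in range(t + lag_min, t + lag_max + 1))
```

A persistent level shift of size 0.3 in the sequence yields a value near 0.3 just before the jump, whereas a single-point outlier inside the lag window yields 0, matching the design rationale described above.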
Summarizing, the methodology implemented to investigate the presence of multifractionality in the CMB temperature intensities could also serve as a mechanism to detect regions of anomalies in CMB maps. In this paper, we examined multifractional spherical random fields and their application to the analysis of cosmological data from the Planck mission. The paper developed a general methodology for the estimation of pointwise Hölder exponents of multifractional data observed on the unit sphere. It estimated pointwise Hölder exponent values for the actual CMB temperature intensities and checked for the presence of multifractionality. The estimators of pointwise Hölder exponents for one- and two-dimensional regions were obtained by using the ring and nested orderings of the HEALPix visualization structure. The analysis carried out revealed multifractionality in the CMB temperature intensities, since the computed pointwise Hölder exponent values substantially change from place to place on the CMB sky sphere. The proposed method was used for numerical studies of the CMB data and found anisotropies in the temperature intensities. In particular, the validity and usefulness of the method were evidenced by detecting numerous anisotropies in unreliable CMB regions of the TMASK. The approach developed and the computing techniques implemented can also be used for other types of spherical data, such as solar, planetary, meteorological, pollution and earth observation data. First, the R package rcosmo [Reference Fryer, Li and Olenko18] can be used to transform spherical data into the HEALPix format. Then the methods developed can be applied directly by using the publicly available R code for this paper. Some numerical approaches that were used to speed up computations for big CMB data sets will be reported in future publications.
In future studies, it would also be interesting to: • develop the distribution theory for the estimators of $H(t)$ ; • investigate the reliability and accuracy of various estimators of the Hölder exponent for the CMB; • study rates of convergence in Theorem 4.4 (see results on superconvergence by McLean [Reference McLean35]); • investigate changes of the Hölder exponents depending on evolutions of random fields driven by SPDEs on the sphere [Reference Anh, Broadbridge, Olenko and Wang5, Reference Broadbridge, Kolesnik, Leonenko and Olenko13, Reference Broadbridge, Kolesnik, Leonenko, Olenko and Omari14]; • study directional changes of the Hölder exponent by extending the results obtained for the conventional ring ordering to rings with arbitrary orientations; • apply our methodology to other spherical data, in particular, to new high-resolution CMB data from future CMB-S4 surveys [Reference Abazajian1]; • explore relations between the locations of the detected CMB anomalies and other cosmic objects. At high values of l, the signal will be weakened by Silk damping and possibly clouded by pressure variations due to turbulence within the plasma, as currently observed in stars. We feel that it is worthwhile to analyse the full signal that is currently available. Higher-resolution measurements in the future will distinguish how much of the analysis is due to physical causes or to currently remaining galactic noise. This research was partially supported under the Australian Research Council's Discovery Projects funding scheme (project no. DP160101366). We would like to thank Prof. A. Ayache for attracting our attention to and discussing multifractional models for random fields, and Prof. I. Sloan for various discussions about mathematical modelling of CMB data. The authors are also grateful to the referees for their suggestions that helped to improve the style of the paper.
This research includes computations using the Linux computational cluster Gadi of the National Computational Infrastructure (NCI), which is supported by the Australian Government and La Trobe University. We are also grateful for the use of data on the Planck/ESA mission from the Planck Legacy Archive. Abazajian, K. et al., "CMB-S4 science case, reference design, and project plan", Preprint, 2019, arXiv:1907.04473. Adachi, S. et al., "A measurement of the CMB E-mode angular power spectrum at subdegree scales from 670 square degrees of POLARBEAR data", Astrophys. J. 904 (2020) Article ID 65; doi:10.3847/1538-4357/abbacd. Ade, P. A. R. et al., "Planck 2015 results-XVI. Isotropy and statistics of the CMB", Astron. Astrophys. 594 (2016) 1–62; doi:10.1051/0004-6361/201526681. Aghanim, N. et al., "Planck 2018 results-VI. Cosmological parameters", Astron. Astrophys. 641 (2020) 1–67; doi:10.1051/0004-6361/201833910. Anh, V., Broadbridge, P., Olenko, A. and Wang, Y. G., "On approximation for fractional stochastic partial differential equations on the sphere", Stoch. Environ. Res. Risk Assess. 32 (2018) 2585–2603; doi:10.1007/s00477-018-1517-1. Ayache, A., Multifractional stochastic fields: wavelet strategies in multifractional frameworks (World Scientific, Singapore, 2018); doi:10.1142/8917. Ayache, A. and Véhel, J., "On the identification of the pointwise Hölder exponent of the generalized multifractional Brownian motion", Stoch. Process. Appl. 111 (2004) 119–156; doi:10.1016/j.spa.2003.11.002. Baiesi, M., Burigana, C., Conti, L., Falasco, G., Maes, C., Rondoni, L. and Trombetti, T., "Possible nonequilibrium imprint in the cosmic background at low frequencies", Phys. Rev. Res. 2 (2020) Article ID 013210; doi:10.1103/PhysRevResearch.2.013210. Benassi, A., Cohen, S.
and Istas, J., "Identifying the multifractional function of a Gaussian process", Statist. Probab. Lett. 39 (1998) 337–345; doi:10.1016/S0167-7152(98)00078-9.Google Scholar Benassi, A., Roux, D. and Jaffard, S., "Elliptic Gaussian random processes", Rev. Mat. Iberoam. 13 (1997) 19–90; doi:10.4171/rmi/217.CrossRefGoogle Scholar Bianchi, S., "Pathwise identification of the memory function of multifractional Brownian motion with application to finance", Int. J. Theor. Appl. Finance 8 (2005) 255–281; doi:10.1142/S0219024905002937.CrossRefGoogle Scholar Broadbridge, P. and Deutscher, K., "Solution of non-autonomous Schrödinger equation for quantized de Sitter Klein-Gordon oscillator modes undergoing attraction-repulsion transition", Symmetry 12 (2020) Article ID 943; doi:10.3390/sym12060943.Google Scholar Broadbridge, P., Kolesnik, A., Leonenko, N. and Olenko, A., "Random spherical hyperbolic diffusion", J. Stat. Phys. 177 (2019) 889–916; doi:10.1007/s10955-019-02395-0.CrossRefGoogle Scholar Broadbridge, P., Kolesnik, A., Leonenko, N., Olenko, A. and Omari, D., "Spherically restricted random hyperbolic diffusion", Entropy 22 (2020) Article ID 217; doi:10.3390/e22020217.CrossRefGoogle ScholarPubMed Calcagni, G., Kuroyanagi, S. and Tsujikawa, S., "Cosmic microwave background and inflation in multifractional spacetimes", J. Cosmol. Astropart. Phys. 8 (2016) Article ID 039; doi:10.1088/1475-7516/2016/08/039.Google Scholar Castelvecchi, D., "The quest to unlock the secrets of the baby universe", Nature 572 (2019) 298–302; doi:10.1038/d41586-019-02417-7.Google ScholarPubMed Constantine, W. and Percival, D., "Fractal: a fractal time series modeling and analysis package", R package version 2.0-4 (2017), Retrieved from https://CRAN.R-project.org/package=fractal.Google Scholar Fryer, D., Li, M. and Olenko, A., "rcosmo: R package for analysis of spherical, HEALPix and cosmological data", R J. 
12 (2020) 206–225; https://journal.r-project.org/archive/2020/RJ-2020-012/RJ-2020-012.pdf. Fryer, D., Olenko, A., Li, M. and Wang, Y. G., "rcosmo: Cosmic microwave background data analysis", R package version 1.1.0 (2019), Retrieved from https://CRAN.R-project.org/package=rcosmo. Gorski, K. M., Hivon, E., Banday, A., Wandelt, B. D., Hansen, F. K., Reinecke, M. and Bartelmann, M., "HEALPix: a framework for high-resolution discretization and fast analysis of data distributed on the sphere", Astrophys. J. 622 (2005) 759–771; doi:10.1086/427976. Hamann, J., Gia, Q. T. L., Sloan, I. H., Wang, Y. G. and Womersley, R. S., "A new probe of Gaussianity and isotropy with application to cosmic microwave background maps", Internat. J. Modern Phys. C 32 (2021) Article ID 2150084; doi:10.1142/S0129183121500844. Hivon, E., "Geometric and algebraic properties of HEALPix" (2022), https://healpix.jpl.nasa.gov/html/intronode4.htm (Accessed 20 June 2022). IRSA, "NASA/IPAC infrared science archive", https://irsa.ipac.caltech.edu/data/Planck/release-2/all-sky-maps/maps/component-maps/cmb/ (Accessed 20 June 2022). Istas, J. and Lang, G., "Quadratic variations and estimation of the local Hölder index of a Gaussian process", Ann. Inst. H. Poincaré Probab. Statist. 33 (1997) 407–436; doi:10.1016/S0246-0203(97)80099-4. Khatri, R. and Sunyaev, R. A., "Creation of the CMB spectrum: precise analytic solutions for the blackbody photosphere", J. Cosmol. Astropart. Phys. 2012 (2012) Article ID 038; doi:10.1088/1475-7516/2012/06/038. Klimontovich, Y. L., The statistical theory of non-equilibrium processes in a plasma (Pergamon Press, Oxford, 1967). Lang, A. and Schwab, C., "Isotropic Gaussian random fields on the sphere: regularity, fast simulation and stochastic partial differential equations", Ann. Appl. Probab.
25 (2015) 3047–3094; doi:10.1214/14-AAP1067. Le Gia, Q. T. and Peach, J., "A spectral method to the stochastic Stokes equations on the sphere", ANZIAM J. 60 (2019) C52–C64; doi:10.21914/anziamj.v60i0.13987. Leonenko, N., Nanayakkara, R. and Olenko, A., "Analysis of spherical monofractal and multifractal random fields", Stoch. Environ. Res. Risk Assess. 35 (2021) 681–701; doi:10.1007/s00477-020-01911-z. Malyarenko, A., Invariant random fields on spaces with a group action (Springer, Berlin, 2012); doi:10.1007/978-3-642-33406-1. Mandelbrot, B. B. and Van Ness, J. W., "Fractional Brownian motions, fractional noises and applications", SIAM Rev. 10 (1968) 422–437; doi:10.1137/1010093. Marinucci, D., "Testing for non-Gaussianity on cosmic microwave background radiation: a review", Statist. Sci. 19 (2004) 294–307; doi:10.1214/088342304000000783. Marinucci, D. and Peccati, G., Random fields on the sphere: representation, limit theorems and cosmological applications (Cambridge University Press, New York, 2011); doi:10.1017/CBO9780511751677. Mathias, A. C., Viana, R. L., Kroetz, T. and Caldas, I. L., "Fractal structures in the chaotic motion of charged particles in a magnetized plasma under the influence of drift waves", Phys. A 469 (2017) 681–694; doi:10.1016/j.physa.2016.11.049. McLean, W., "Implementation of high-order, discontinuous Galerkin time stepping for fractional diffusion problems", ANZIAM J. 62 (2020) 121–147; doi:10.1017/S1446181120000152. Minkov, M., Pinkwart, M. and Schupp, P., "Entropy methods for CMB analysis of anisotropy and non-Gaussianity", Phys. Rev. D 99 (2019) Article ID 103501; doi:10.1103/PhysRevD.99.103501. NASA, "Tests of big bang: the CMB", https://wmap.gsfc.nasa.gov/universe/bb_tests_cmb.html (Accessed 20 June 2022). Péltier, R.
and Véhel, J., "Multifractional Brownian motion: definition and preliminary results", Technical Report, 2645, Institut National de Recherche en Informatique et en Automatique, Le Chesnay Cedex, France, 1995, https://hal.inria.fr/inria-00074045.Google Scholar Planck Science Team, Planck, https://www.cosmos.esa.int/web/planck (Accessed 20 June 2022).Google Scholar Riess, A. G., Casertano, S., Yuan, W., Macri, L. M. and Scolnic, D., "Large Magellanic Cloud Cepheid standards provide a 1% foundation for the determination of the Hubble constant and stronger evidence for physics beyond $\lambda$ CDM", Astrophys. J. 876 (2019) Article ID 85; doi:10.3847/1538-4357/ab1422.CrossRefGoogle Scholar Sheng, H., Chen, Y. and Qiu, T., Fractional processes and fractional-order signal processing: techniques and applications (Springer, London, 2011). doi:10.1007/978-1-4471-2233-3.Google Scholar Struzik, Z., "Determining local singularity strengths and their spectra with the wavelet transform", Fractals 8 (2000) 163–179; doi:10.1142/S0218348X00000184.Google Scholar Viana, R. L., Da Silva, E. C., Kroetz, T., Caldas, I. L., Roberto, M. and Sanjuán, M. A. F., "Fractal structures in nonlinear plasma physics", Philos. Trans. Roy. Soc. A 369 (2011) 371–395; doi:10.1098/rsta.2010.0253.CrossRefGoogle ScholarPubMed Wang, Y. G., "CMBProbe, the Python package for generating AC discrepancy maps" (2022), https://github.com/wangyg19/CMBProbe (Accessed 20 June 2022).Google Scholar Weinberg, S., Cosmology (Oxford University Press, Oxford, 2008).Google Scholar Figure 4 Examples of pixels with seven and eight neighbours for $N_{\text {side}}=4$. You have Access Open access
3D-printed integrative probeheads for magnetic resonance Junyao Xie ORCID: orcid.org/0000-0002-9078-05191,2, Xueqiu You ORCID: orcid.org/0000-0001-9659-00811,2, Yuqing Huang1,2, Zurong Ni1,2, Xinchang Wang1,2, Xingrui Li2,3, Chaoyong Yang ORCID: orcid.org/0000-0002-2374-53422,3, Dechao Zhang1,2, Hong Chen4, Huijun Sun ORCID: orcid.org/0000-0003-1556-09001,2 & Zhong Chen ORCID: orcid.org/0000-0002-1473-22241,2,5 Nature Communications volume 11, Article number: 5793 (2020) Magnetic resonance (MR) technology has been widely employed in scientific research, clinical diagnosis and geological survey. However, the fabrication of MR radio frequency probeheads still faces difficulties in integration, customization and miniaturization. Here, we utilized 3D printing and liquid metal filling techniques to fabricate integrative radio frequency probeheads for MR experiments. The 3D-printed probehead, fabricated with micrometer precision, generally consists of liquid metal coils, customized sample chambers and radio frequency circuit interfaces. We screened different 3D printing materials and optimized the liquid metals by incorporating metal microparticles. The 3D-printed probeheads are capable of performing both routine and nonconventional MR experiments, including in situ electrochemical analysis, in situ reaction monitoring with continuous-flow separation of paramagnetic particles and ions, and small-sample MR imaging. Due to the flexibility and accuracy of 3D printing techniques, we can accurately obtain complicated coil geometries at the micrometer scale, shortening the fabrication timescale and extending the range of application scenarios.
With the extensive development of nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) techniques, these methods have found wide applications in various fields, such as oncology imaging, biological material detection, substance analysis, and in situ electrochemical reaction monitoring1,2,3,4. As one of the core components of magnetic resonance (MR) systems, radio frequency (RF) coils significantly influence the quality of MR experimental results. Conventional MR coils are usually fabricated by manual winding and printed circuit board lithography techniques, which generally require labor-intensive manufacturing and 2D fabrication processes5,6,7. Thus, it is imprecise and time consuming to fabricate coils with complex or irregular 3D structures, especially given the demands of miniaturization. Moreover, some unconventional NMR experiments, such as microliter-scale sample detection and biochemical reaction monitoring, require customized 3D microfluidic sample structures integrated with RF coils8,9. It is difficult for MRI samples of different shapes and sizes, or for microfluidic systems, to fit the homogeneous RF regions exactly, which reduces the signal-to-noise ratio (SNR) due to a lower filling factor. To overcome these difficulties, we have developed an integrative MR probehead fabrication method based on the combination of high-precision 3D printing and liquid metal (LM) infusion techniques. The 3D printing (or additive manufacturing) technique is a process of creating 3D objects with customized shapes and geometries by using different materials10. The use of 3D printing instead of subtractive manufacturing methods, such as mechanical machining and laser cutting, has attracted great interest in numerous applications in the field of rapid prototyping11. Three-dimensional microstructures created by 3D printing have been shown to have valuable applications in numerous fields, ranging from biomaterials to microelectronics12,13.
3D printing enables high flexibility in geometrical design, and provides a potential solution for fabricating stereoscopic RF coils with intricate structures, together with sample detection regions matched to the RF coils with high geometrical precision. Despite applications in auxiliary detection phantoms and supporting structures, 3D printing techniques still exhibit inherent difficulties in the fabrication of MR probeheads. These difficulties come from the special requirements for probehead structures and materials, including the coil structure-induced filling factor, material electromagnetic performance, and material magnetic susceptibility14,15. We demonstrate an approach, combining computer-aided design and manufacturing, to fabricate integrative MR probeheads using 3D printing and LM injection techniques. The MR probeheads consist of an RF coil with micrometer-scale conductive wires, customized sample chambers, and RF circuit interfaces, all of which are wrapped in a single 3D-printed polymer block. Custom-built MR coils, fabricated by perfusing channels with LM at room temperature, were integrated with complex sample chamber geometries and microfluidic systems. To the best of our knowledge, although there are some similar applications16,17,18, no previous studies have explored this type of method for the fabrication of integrative probeheads for MR systems. Using this method, we can build customized probeheads with coil structures that are more precisely adapted to sample dimensions and specifications. Thus, our probeheads can not only enhance the SNR due to the improved filling factor, but also meet the requirements for in situ chemical reaction monitoring and small-sized object imaging. The probehead performance can be further improved based on material screening and structural design optimization.
Probehead design and fabrication
Various MR coil structures and sample chamber geometries were custom designed in substrate blocks according to the experimental requirements. The dimensions of each coil were determined to fulfill the detection requirements for different sample volumes and MR system conditions. We simulated MR coils by using CST Microwave Studio (CST Studio Suite 2018, Computer Simulation Technology of America, Framingham, Massachusetts, USA) to determine the optimal RF magnetic field strength and homogeneity. To complete the welding and installation of the probeheads, additional protruding structures were designed. After the design was completed, different parts of the monolithic MR probehead were consolidated in SolidWorks software and fabricated by 3D printing. To inspect the performance of different 3D printing techniques for MR probehead applications, we adopted two of the most commonly used printing principles, i.e., fused deposition modeling (FDM) and stereolithography apparatus (SLA), to fabricate integrative MR RF probehead models. After the probeheads were printed, the necessary postprocessing was performed, and then the conductive coils were constructed using the liquid metal perfusion technique. Figure 1a–e illustrates the printing and manufacturing procedure of our designs. Fig. 1: 3D printing and manufacturing procedure of integrative MR probeheads for different scenarios. Both a fused deposition modeling (FDM) and b stereolithography apparatus (SLA) techniques are utilized to fabricate a complete probehead (c) layer by layer according to the simulation design. d Liquid metal is perfused into the model through the injection hole to form an RF coil. e The RF coil is connected to the matching circuit by two copper strips to form a complete probe. The entrance and exit of the liquid metal channel are completely sealed with silver paste.
Various 3D-printed probeheads suitable for MR applications can be fabricated and utilized, including f U-tube saddle probehead (SAP), U-tube Alderman-Grant probehead (AGP), reaction monitoring probehead (RMP), electrochemical reaction monitoring probehead (ECP), gradient probehead (GP) for MR, and g modified solenoid imaging probehead (MSO), modified Alderman-Grant imaging probehead (MAG) for MRI. The coil channel of the MSO probehead, before and after the liquid metal perfusion, is also shown. Multiple 3D-printed integrative RF probeheads for MR were designed (Supplementary Fig. 1) and printed (Fig. 1f, g). All probeheads were printed by using transparent materials to clearly illustrate their internal channel structures. The designed coil structures with different dimensions, such as solenoid, saddle and Alderman-Grant (AG) coils, were embedded inside polymer blocks. A U-tube saddle probehead (SAP) and U-tube Alderman-Grant probehead (AGP) were designed to perform routine microliter-scale sample experiments. To investigate in situ reactions, we designed multiturn microfluidic systems inside an in situ reaction monitoring probehead (RMP). The microfluidic channels act as micro-reactors that are integrated with NMR coils for in situ reaction kinetics monitoring with high sensitivity. An electrochemical reaction monitoring probehead (ECP) was fabricated to achieve in situ electrochemical observations. Designed with gradient coils, a gradient probehead (GP) can achieve complex NMR sequence analysis. In addition, a modified solenoid imaging probehead (MSO) and modified Alderman-Grant imaging probehead (MAG) were designed for use in MRI experiments on small samples with specific structural sizes.
Building materials selection
Both FDM and SLA 3D printers were used to fabricate the RF probeheads with various building materials.
Considering the stringent requirements for probehead materials to restrain energy dissipation, magnetic field inhomogeneity and the bottom envelope of the spectra in MR experiments19, we characterized the electromagnetic properties, including the dielectric constant and loss tangent, of selected building materials in the MR frequency range. In MR RF design, it is essential to use materials with a low dielectric constant and low dielectric loss to decrease the total electromagnetic loss in the signal path, improve the coils' quality factor (Q) and the SNR, and thus lower the minimum detectable signal14. The analytical results of the dielectric properties of the 3D-printed building materials and the unloaded Q of SAP probeheads made with these materials are provided in Table 1. All Q factors were measured on unloaded probes using an Agilent E5071C network analyzer. Polytetrafluoroethylene (PTFE) is a reference material that cannot be used for 3D printing, so its Q factor is not listed in the table. Modified LM paste (discussed below) was used as the conductive material. After comparing the dielectric properties of the materials and the corresponding SAP Q factors characterized in MR-relevant frequency ranges, we chose PLA (i.e., VisiJet M3 Crystal), which has lower dielectric values in the relevant resonance frequency range, as the probehead building material. Table 1 Dielectric constant and loss tangent of 3D-printed building materials. The magnetism of the building materials is also an extremely important factor. The components of the building materials we selected are mainly nonmetallic materials, including polylactic acid, acrylonitrile butadiene styrene copolymers, or acrylic resin. These components are all diamagnetic materials and have little effect on MR experiments.
In addition, because the RF coils were embedded in 3D-printed substrates, whether the materials contained hydrogen and the position/size of the material spectral bottom envelope were also key considerations in finally choosing a building material. Related onboard tests needed to be performed after the complete probes were fabricated with building materials that showed excellent electromagnetic properties in the previous evaluations. The onboard experimental results proved that PLA performed well in terms of both electrical performance and material background signals, so it was the preferred substrate for our 3D printing models. Although the applicability of our measurement results is limited by the commercial confidentiality of the materials, these performance evaluation aspects still hold considerable reference value for building material optimization and selection.
Conductive coil liquid metals optimization
Low-melting-point LMs were injected as conductive materials into 3D-printed micro-channels to form MR coils. We first considered several commonly used liquid metal pastes. Instead of mercury and silver pastes20, we chose gallium (Ga) and its alloys (e.g., EGaIn) as alternative conductive LM materials due to their low toxicity, high conductivity, good fluidity, and low thermal expansion. In addition, when exposed to air, gallium generates a thin (~1 nm) passivating oxide layer on its surface that protects the interior from excessive oxidation, thus benefiting the stability of the entire probehead21. MR coils should have excellent electrical conductivity to reduce the thermal noise of the conductive wires, thus improving the frequency response and quality factor and subsequently increasing the SNR of the acquired spectra. For this reason, the electrical properties (mainly conductivity) of the LMs were important measurement indicators.
To increase the conductivity of the MR coil materials, we proposed an approach for LM paste preparation by incorporating uncoated metal microparticles into gallium. According to the measurement results (Supplementary Fig. 5), we chose gold microparticles (AuMPs) as the additive metal, as Ga/Au paste with 1 wt% Au microparticles had better electrical conductivity than pure gallium, with the conductivity increased by ~3%. We then tested the resistance of Ga/Au pastes with different mixing ratios using an I-shaped mold. As the AuMPs content increased, the conductivity of the pastes first increased and then decreased, as shown in Fig. 2a. At an AuMPs ratio of 3 wt%, the conductivity of the LM reached a maximum of 3.82 × 106 S m−1, ~10% higher than that of pure gallium. AuMPs dispersed in pure gallium may act as electronic transmission bridges and thus improve the conductivity of the mixed LM pastes. The unexpected increase in the resistance of the Ga/Au pastes at higher mixing ratios is presumably related to the processing and molding methods of the conductive materials. As the amount of mixed particles increases, the contact resistance (caused by lattice mismatch22, etc.) between the micron-sized metal particles and the LM may increase due to the prolonged mixing time. In addition, the pressure applied during molding can affect the uniformity of the metal particle distribution in LM pastes and cause micron-level surface cavities. These effects were enhanced with increasing metal particle content. Fig. 2: Multi-proportion electrical performance and temperature-dependence characteristic measurement of LM pastes. a Conductivity of LM pastes consisting of gold microparticles (AuMPs) and gallium with different mixing ratios. b Temperature dependence of the conductivity of gold microparticles in gallium with different mixing ratios. Data represent means ± s.d. from five (a) and three (b) independent replicates. Source data are provided as a Source Data file.
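The conductivity values above follow from a simple bar-resistance measurement: for a uniform conductor, σ = L/(R·A). A minimal sketch (the bar dimensions and resistance below are hypothetical illustration values, not the measured data):

```python
# Convert the measured resistance of a cast LM bar to conductivity,
# sigma = L / (R * A). Dimensions and resistance here are hypothetical.

def conductivity(length_m: float, cross_section_m2: float, resistance_ohm: float) -> float:
    """Conductivity in S/m of a uniform bar from its measured resistance."""
    return length_m / (resistance_ohm * cross_section_m2)

# Example: a 20 mm bar with 1 mm x 1 mm cross-section measuring 5.2 mOhm
sigma = conductivity(20e-3, 1e-6, 5.2e-3)
print(f"{sigma:.2e} S/m")  # ~3.85e6 S/m, close to the peak Ga/Au value reported
```

A four-terminal (Kelvin) measurement would be preferred in practice so that contact resistance does not bias R.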
The saddle coil electrical conductivity was also measured with respect to temperature to show its performance under long, complex MR pulse sequences, as shown in Fig. 2b. The electrical performance of the LM pastes remained stable over the MR working temperature range (293–308 K), with differences of <5%. We prefer solid-state LMs as conductive materials in experiments because of their better electromagnetic stability compared with liquid-state LMs. During NMR experiments, we used an air circuit to control the probehead temperature in order to maintain the solid-state stability of the coil. However, LM coils may still experience partial, slight melting due to the long RF pulse duration, sample reaction heating, and other factors. The solidification rate increases after mixing because the metal particles act as condensation nuclei in the LM, inhibiting the supercooling phenomenon of gallium. This effect improved the stability of the probeheads as a result of the enhanced thermal tolerance of the MR coils. The magnetic properties of the LM pastes are mainly determined by gallium. Pure gallium has a low diamagnetism, −0.248 × 10−6 cGSM g−1 mass magnetic susceptibility in the solid phase (−0.085 × 10−6 cGSM g−1 for Cu), and very low paramagnetism, 0.002 × 10−6 cGSM g−1 mass magnetic susceptibility in the liquid phase23. In the experimental temperature range, the magnetic properties of the LM pastes had little effect on our experiments.
NMR coil simulation and verification
Designing and simulating RF coils for MR applications are cumbersome but fundamental tasks, in which the prediction of SNR performance, RF field homogeneity and depth of penetration greatly speeds up the design process by reducing the required hardware manufacturing iterations24. According to our integrated probehead design requirements and 3D printing technique limitations, a capacitor could not be included in the coil structure. This meant that all candidate coil structures should be noncapacitive.
There are three main types of noncapacitive volumetric coil structures: solenoid, saddle, and modified AG structures. Solenoid coils with excellent performance require a sufficient number of turns, which increases the inherent inductance of the resonant circuit, making it difficult to tune the coil to a high frequency (500 MHz in our design). A high number of turns also increases the thermal noise of the resonant circuit, reducing the quality factor and SNR of the coil14. Therefore, after determining the building material and conductive metal for the probeheads, we simulated two common noncapacitive coils, saddle and modified AG coils, for applications in a Varian 11.7 T NMR system. The homogeneity of the generated RF field in the signal detection area is an important index for evaluating coil experimental performance. For microliter-scale sample detection and miniaturization requirements, an inner diameter of 3 mm and a thickness of 400 µm were selected to define coil channel structures that are easy to clean and inject. The coils were designed and modulated based on the optimal sizes25; i.e., heights of 4.98 mm and 4.5 mm were initially adopted for the saddle and modified AG coils, with the aim of maintaining a homogeneous field with a height of ~2 mm. To evaluate the optimal performance of each coil, an RF (B1) field was simulated using CST Microwave Studio (Fig. 3). The simulation results (with PEC as the conductive material) demonstrated that the saddle coil had a higher Q (172), better RF field uniformity (ΔB = ± 4% \(\bar B\)) in the sample detection area, and a higher normalized SNR (1.0) than the AG coil (Q = 111, ΔB = ± 8% \(\bar B\), normalized SNR = 0.61)14. In addition, compared with an AG coil, a saddle coil of the same small size would be easier to tune to the desired high resonance frequencies, such as 500 MHz in our design. Thus, a saddle coil was chosen as the optimal structure for our NMR experiments. Fig.
3: RF magnetic field simulations of saddle and modified Alderman-Grant coils. Simulations of saddle (a) and modified Alderman-Grant (b) coils were both performed at a frequency of 500 MHz.
In situ electrochemical monitoring using 3D-printed probehead
Ethanol offers an attractive alternative as a liquid fuel because of its advantages of hypotoxicity, high energy density and renewability, and its usage considerably reduces dependence on traditional energy sources26,27. A 3D-printed electrochemistry-nuclear magnetic resonance (EC-NMR) probehead (i.e., ECP), shown in Fig. 4, was custom designed to perform in situ EC-NMR experiments to investigate the ethanol oxidation reaction. Conventional EC-NMR experiments generally adopt a dedicated reaction chamber, and the placement and isolation of multiple electrodes in the tube are complex and time-consuming4,28. Such dedicated cells are unnecessary in our design. Instead of a single sample tube, three long (18 mm) and wide (4.3 mm diameter) sample channels were constructed and integrated with RF coils to insert the electrodes (working electrode Pt, counter electrode Pt, and reference electrode Ag) without creating misconnections and to prevent reaction-generated bubbles from refluxing, which would affect experimental stability. These channels are independent of each other and converge in the NMR signal detection region. To achieve higher electrocatalytic efficiency29,30, platinum electrodes incorporated with platinum particles were used as the working and counter electrodes because they are more tolerant to poisoning effects, due to the adsorption of CO species, than bulk platinum electrodes. Fig. 4: In situ EC-NMR system and experimental results. a Schematic representation of the in situ EC-NMR setup and ECP probehead. The in situ ECP probehead contains three customized electrode sockets/channels, which can form a three-electrode structure.
b In situ 1H-NMR spectra and c time-resolved changes in the ethanol, acetic acid, and carbon dioxide concentrations during the ethanol oxidation reaction. In c, data represent means ± s.d. from three independent experiments. Source data are provided as a Source Data file. Figure 4a schematically shows the electrochemical probe setup for the in situ EC-NMR study. A series of in situ 1H NMR spectra were acquired to monitor the time-dependent behavior of the products and reactants during the ethanol oxidation reaction at 0.9 V for 10 h using a standard 1D proton NMR pulse sequence, as shown in Fig. 4b. The volume of aqueous solution involved in the reaction was 500 μL and included 1.0 M ethanol as the initial reactant and 0.1 M HClO4 as the supporting electrolyte. The NMR peak areas and chemical shifts were calibrated with TSP to classify and quantify the reaction species. Before electrolysis, ethanol showed spectral peaks at 3.57 ppm and 1.08 ppm, corresponding to hydrogen atoms in the terminal hydroxymethyl groups (−CH2OH) and protons in the methyl groups (−CH3), respectively. Hydroxyl groups were not detectable due to the fast proton exchange with water. As electrolysis progressed, a new NMR peak appeared at 2.08 ppm, indicating the formation of acetic acid (CH3COOH). The proton peak at 2.08 ppm increased due to the production of acetic acid as ethanol oxidation progressed, while the signals at 1.08 and 3.57 ppm decreased owing to the consumption of ethanol. The main intermediate (acetaldehyde) could not be observed in the experiment because acetaldehyde polymerization readily occurs under a high oxidation potential, especially when the monomer is adsorbed on active surface sites26,31. Thus, acetaldehyde would be further oxidized to acetic acid. The results indicate that the ECP design enables the in situ real-time monitoring of products and provides an efficient tool to gain insight into reaction pathways.
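Calibrating peak areas against an internal standard such as TSP follows the standard qNMR relation c_analyte = c_ref · (I_analyte/I_ref) · (N_ref/N_analyte), where N counts the equivalent protons behind each peak (9 for the TSP trimethylsilyl group, 3 for the ethanol CH3 triplet). A minimal sketch with hypothetical integral values, not the experimental data:

```python
# Standard qNMR quantification against an internal reference.
# Integral values below are hypothetical illustration numbers.

def qnmr_concentration(c_ref, integral_analyte, integral_ref, n_ref, n_analyte):
    """Analyte concentration from integrals normalized to a reference peak."""
    return c_ref * (integral_analyte / integral_ref) * (n_ref / n_analyte)

# Ethanol CH3 triplet (3 protons) against 10 mM TSP (9 protons)
c_etoh = qnmr_concentration(c_ref=0.010, integral_analyte=33.3,
                            integral_ref=1.0, n_ref=9, n_analyte=3)
print(f"{c_etoh:.2f} M")  # ~1.0 M, consistent with the 1.0 M initial ethanol
```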
To obtain further insight, quantitative analyses were conducted by integrating the 1H NMR spectra. Figure 4c shows the changes in the ethanol, acetic acid, and CO2 concentrations obtained from NMR measurements by normalizing the peak integrals to the TSP internal reference. Because the main products of ethanol electrolytic oxidation under high potential are acetic acid and CO/CO2 gas, one can calculate the quantity of gases from the mass balance. The ethanol concentration decreased significantly in the first 2 h, while the reaction products increased rapidly during the same time period. Then, the reaction rate dropped to a fairly low level, indicating that the ethanol oxidation activity on the catalyst was probably limited by intermediate poisoning species blocking C-C bond breaking on Pt active sites32. Incomplete ethanol oxidation to acetic acid thus prevailed over complete oxidation to CO2.
In situ reaction monitoring with continuous flow separation using 3D-printed probehead
Recent years have seen increased interest in the real-time in situ NMR analysis of intermediates and products of chemical reactions33,34. The continuous flow separation of sample components is an important step in chemical and biochemical analyses35,36. In the detection of samples containing ferromagnetic and paramagnetic particles/ions, NMR spectra may exhibit line splitting and peak overlap due to the inhomogeneous local magnetic field, which in severe cases complicates spectral analysis. Effective continuous-flow particle separation methods are essential for in situ NMR, enabling the real-time monitoring of reactions that generate paramagnetic species and facilitating the integration of other upstream/downstream control and treatment steps.
Several conventional particle/ion separation techniques, commonly employed for industrial and research applications, have significant limitations and difficulties, especially for in situ NMR experiments33,37. The implementation of these methods increases instrument complexity and requires additional expensive equipment. We present here a continuous-flow separation probehead (CFSP) that requires no extra equipment or special sample treatment, enabling in situ NMR studies of continuous reactions in which the reactants or products contain paramagnetic particles/ions that would otherwise impair detection. We exploited the particle size and electrical properties of the deposits and ions for separation, taking advantage of the adsorption properties of silica gel and the high magnetic field of NMR magnets. Particle/ion movement in a magnetic field varies greatly depending on these characteristics. Using our probehead, we successfully separated particles and ions with different dimensions and magnetic properties in a natural high-field magnet. From Fig. 5a, it can be seen that the particle and ion separation sample channels of the CFSP, a modified version of the RMP, mainly consist of a reaction channel, particle filter channel, ion separation channel and sample/waste liquid shunt channel. The design of the inner channel structure of the probehead is shown in detail in Supplementary Fig. 14. In our experiment, the CFSP was designed to monitor the oxidation of isopropyl alcohol with potassium permanganate in neutral solution, in which the dissolved manganese/potassium ions and the precipitated manganese dioxide particles need to be separated from the sample prior to signal detection to prevent paramagnetic effects in MR experiments38. A saddle coil was used in the CFSP to focus the RF field onto the product detection area with a total volume of 13 µL.
The two reactants were injected into the probehead through the sample inlets and then mixed and reacted in the reaction channel (1.8 mm in channel diameter, 15 mm in spiral diameter, 15 mm in height, 3-turn solenoid). During the reaction, manganese dioxide precipitate was continuously generated and suspended in the solution. As the first separation stage of the shunt channel, the adsorption tank (15 mm in height and 25 mm in diameter) of the filter channel was filled with silica gel particles, which have a strong adsorption capability and can effectively filter the particles generated in solution39,40, as shown in Fig. 5b. The paramagnetic particles were mainly deposited near the upper surface of the silica gel layer, which was more than 15 mm above the coil detection area and had little effect on the experiment. Silica gel also has a considerable degree of adsorption for metallic paramagnetic ions. Then, the filtered solution entered the ion separation channel (1.8 mm in diameter, 2.5 mm in height, and 1-turn solenoid), in which positive ions (Mn2+ and K+) were deflected by the Lorentz force to the area near the outer wall of the sample channel, according to the left-hand rule, under the effect of the vertical downward magnetic field, as shown in Fig. 5c. The separation fork consisted of two 45° angled channels at the end of the ion separation channel; the positive-ion-containing portion near the outer wall of the channel was diverted into the waste liquid channel for discharge, while the remaining portion with few paramagnetic particles and ions was directed to the detection area for NMR experiments. Fig. 5: Inner structures and separation principles of the CFSP. a The internal structure of the CFSP. b The principle of in situ filtration and separation of paramagnetic particles. c The principle of the Lorentz force separation of paramagnetic ions under a strong magnetic field.
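The magnitude of the per-ion Lorentz force can be estimated from the flow velocity and field strength. A back-of-envelope sketch: the flow rate and channel diameter are taken from the text, while treating the mean flow velocity as perpendicular to B and assuming an 11.7 T field are simplifying assumptions:

```python
# Order-of-magnitude estimate of the Lorentz force F = q*v*B on an Mn2+
# ion advected through the ion separation channel (assumed values).
import math

B = 11.7                 # T, static field (assumption: 11.7 T magnet)
q = 2 * 1.602e-19        # C, charge of Mn2+
flow = 100e-9 / 60       # m^3/s, 100 uL/min (the highest flow rate used)
radius = 0.9e-3          # m, half of the 1.8 mm channel diameter
v = flow / (math.pi * radius**2)   # mean flow velocity
F = q * v * B                      # force for flow perpendicular to B
print(f"v = {v:.2e} m/s, F = {F:.2e} N")  # ~6.5e-4 m/s, ~2.5e-21 N
```

The per-ion force is tiny; the sketch only illustrates the scale of the effect, not the net deflection, which depends on residence time and collective transport in the solution.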
The CFD module in COMSOL Multiphysics software was used to simulate the fluid mixing in the spiral channel to demonstrate the mixing efficiency of the two samples. Considering the flow rate of the samples in the microchannel, the laminar flow and dilute species transport modules were employed for the simulation. The mixing of the samples in the channel under different conditions can be obtained by sweeping parameters such as flow rate, diffusion coefficient, and viscosity. The simulation results under the experimental conditions were obtained (Supplementary Fig. 15). The results indicated that complete mixing of the two reactants in the spiral channel was achieved, even at the highest flow rate (100 μL min−1 at the inlet) in our work. The CFSP probehead (Supplementary Fig. 16) required some preliminary testing and preparation before being used for experiments (Supplementary Methods). We measured the effects of different paramagnetic potassium ion concentrations on the full-width-at-half-maximum (FWHM) of the spectra (Supplementary Fig. 17): the FWHM increased sharply with increasing ion concentration, and the rate of increase declined gradually as the peak intensity approached zero. There was little change in the spectral width of the potassium permanganate solution at different concentrations (Supplementary Fig. 18), indicating that the potassium permanganate solution has little paramagnetism. Therefore, the widened spectra mainly derive from the paramagnetic potassium ions and manganese ion impurities in the solution. The actual paramagnetic ion separation efficiency of the probe under high-field conditions was also measured in in situ NMR experiments with a 0.01 mol L−1 manganese ion sample (Fig. 6a). The content of manganese ions in the solution was measured by a manganese ion meter (LH-Mn1, Hangzhou Luheng Biotechnology Co., Ltd., China).
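The FWHM values used here can be extracted numerically from a sampled spectrum by interpolating the half-height crossings of the peak. A minimal sketch on a synthetic Lorentzian line (real data would come from the processed spectrum):

```python
# Numeric FWHM estimate from a sampled 1D spectrum via linear
# interpolation of the half-height crossings. Synthetic Lorentzian here.

def fwhm(x, y):
    """Full width at half maximum of the tallest peak in (x, y)."""
    half = max(y) / 2.0
    idx = [i for i, v in enumerate(y) if v >= half]
    i0, i1 = idx[0], idx[-1]
    def cross(ia, ib):
        # interpolate the half-height crossing between samples ia and ib
        return x[ia] + (half - y[ia]) * (x[ib] - x[ia]) / (y[ib] - y[ia])
    return cross(i1, i1 + 1) - cross(i0, i0 - 1)

n = 20001
x = [-50 + 100 * i / (n - 1) for i in range(n)]   # Hz offset axis
gamma = 2.0                                       # half-width at half-maximum
y = [gamma**2 / (xi**2 + gamma**2) for xi in x]   # Lorentzian, FWHM = 2*gamma
print(f"FWHM = {fwhm(x, y):.2f} Hz")              # ~4.00 Hz
```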
There was a significant difference in the paramagnetic ion concentration between the two outlet solutions after separation, which was demonstrated even more clearly in the resulting spectra. The Mn2+ concentration in the effluent outlet channel sample was significantly higher than that in the sample outlet channel. The differences in both the spectral FWHM and the ion concentration of the two solutions after separation were proportional to the flow rate. Fig. 6: The Mn2+ separation efficiency and the in situ separation results of the CFSP. a The Lorentz force separation efficiency of paramagnetic ions (0.01 mol L−1 manganese ions in aqueous solution) at different flow rates, shown through the full-width-at-half-maximum (FWHM). b In situ 1H-NMR spectra during the ethanol oxidation reaction. With the addition of a filter structure, spectra containing more useful information can be measured at different reaction times. In a, data represent means ± s.d. from three independent experiments. Source data are provided as a Source Data file. A series of in situ 1H NMR spectra, obtained with varying durations of the oxidation reaction, were acquired for 10 min (Fig. 6b). Compared with the results obtained without separation, the FWHM of the spectra of the reactants and products after filtration was significantly reduced, while the SNR and spectral resolution were increased, indicating that paramagnetic particles and ions were removed from the initial product. Some spectral line broadening was still present because the reaction continued after the particles and ions were separated, producing new paramagnetic substances. Gradual changes in the amounts of the reactant (ethanol) and product (acetic acid) with increasing residence time can be clearly detected.
This result proves that our CFSP probehead not only enables real-time reaction monitoring but also achieves the continuous flow separation of paramagnetic particles and ions, expanding the application field of in situ NMR experiments. 3D-printed integrative MRI probeheads were also custom designed and fabricated to further demonstrate the universal applicability of our approach. Performance tests and experimental results are illustrated in Supplementary Note 1. Despite the disadvantages in the conductivity of the coil materials, images with higher SNR were obtained using our LM probe. This demonstrates that our approach has great prospects and potential in custom MRI applications, mainly including flexible MRI41, integrated MRI, and micro-MRI42. The Q factor is one of the most significant performance parameters of an MR probe (coil) and plays a decisive role in signal sensitivity. The Q factor of an RF coil is affected by its geometrical structure, material conductivity, and resonance frequency. The formula for the inherent Q factor of a coil is Q = Lω0/R, where L denotes the coil inductance, ω0 represents the angular frequency of the signal, and R refers to the coil resistance14. The main factors affecting the R value are the conductivity of the coil material, the coil size, and the skin effect. It is worth mentioning that the skin effect of high-frequency alternating current has a great influence on the actual resistance of RF coils43. At a frequency of 500 MHz, the pathway for current flow inside an LM paste conductor is reduced to a surface layer of thickness approximately 2 × δ due to the skin effect. The skin depth δ for gallium is 19 µm, calculated by the following equation: $$\delta = \sqrt {\frac{\rho }{{\pi f\mu _0}}}$$ where ρ is the specific resistivity of gallium, µ0 is the permeability of vacuum, and f is the resonance frequency of the coil.
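The skin-depth equation above can be checked numerically. In the sketch below, the copper resistivity is a standard handbook value, while the gallium resistivity is back-calculated from the quoted 19 µm depth (gallium's resistivity varies with phase and crystal axis, so this value is an assumption):

```python
# Skin depth delta = sqrt(rho / (pi * f * mu0)) at the coil resonance frequency.
import math

MU0 = 4 * math.pi * 1e-7  # H/m, vacuum permeability

def skin_depth(rho_ohm_m: float, freq_hz: float) -> float:
    return math.sqrt(rho_ohm_m / (math.pi * freq_hz * MU0))

f = 500e6
print(f"Cu: {skin_depth(1.68e-8, f)*1e6:.1f} um")  # ~2.9 um (rho_Cu = 16.8 nOhm*m)
# The quoted 19 um for gallium corresponds to rho ~ 7.1e-7 Ohm*m:
print(f"Ga: {skin_depth(7.1e-7, f)*1e6:.1f} um")   # ~19.0 um
```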
Although gallium has ~15 times higher resistivity than copper at 0 Hz, the skin depth scales with the square root of resistivity, so copper's skin depth at 500 MHz is approximately four times smaller than gallium's; the larger conduction cross-section thus reduces the resistive loss of gallium relative to that of copper from a factor of ~15 to a factor of ~4. One way to measure the Q factor of an MR probe is to evaluate the ratio of the center frequency ω0 to the 3 dB bandwidth Δω of the reflection coefficient (S11). The most commonly used formula is Q = ω0/Δω, which is also used in our work to measure and evaluate the Q factor. The Q factor of MR coils fabricated from conventional materials (copper or copper alloy) at room temperature can usually reach 100–200, while coils made of special materials for custom applications may have a lower Q factor. The performance of standard and homemade MR probes is presented in Supplementary Table 142,44,45,46,47. Among them, the standard Varian commercial probe (slot tube type copper coil) used in our laboratory has a measured unloaded Q factor of 158; a solenoid NMR microcoil wrapped with polyurethane-coated copper wires has a Q factor of 26 at 300 MHz46; and an inkjet-printed MRI surface coil with silver paste material has a Q value of 12.5 at 400 MHz47. Despite the advantages of the skin effect, our LM coils (Q = 44) perform worse than copper coil probes (Q = 175) of the same structure and size due to their lower material conductivity (Supplementary Fig. 19). Although our LM probes are inferior to commercial probes in performance (Q factor), they are able to meet the requirements of our customized MR experiments. The applicability of the customized 3D LM probes was verified by the results of various applications in our work. Our design and fabrication approach for the 3D-printed integrative MR probehead focuses on addressing the precise and integrated custom machining problems encountered by conventional methods, rather than on optimizing probe performance.
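The Q = ω0/Δω evaluation reduces to a one-line calculation once the center frequency and 3 dB bandwidth are read off the S11 trace. A sketch; the bandwidth below is a hypothetical value chosen to reproduce the reported Q ≈ 44, not a measured number:

```python
# Quality factor from the S11 resonance: Q = omega0 / delta_omega.
# Angular-frequency ratios equal plain-frequency ratios, so f0/delta_f suffices.

def q_factor(f0_hz: float, bw_3db_hz: float) -> float:
    """Q from center frequency and 3 dB bandwidth of the reflection dip."""
    return f0_hz / bw_3db_hz

# A 500 MHz resonance with a hypothetical 11.4 MHz wide 3 dB dip gives Q ~ 44,
# matching the unloaded Q reported for the LM saddle coil.
print(f"Q = {q_factor(500e6, 11.4e6):.0f}")  # Q = 44
```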
Micrometer-scale 3D RF coils and closely matched sample channels with complex structures can be constructed flexibly and precisely by this approach, according to the requirements of MR experiments. This benefit dramatically improves the usability of our customized probes in special MR experiments. The performance of our probes still has considerable room for improvement. Specifically, the coil resistance could be effectively reduced by further optimizing the coil conductive material (such as mixing silver nanowires into the LM48), contributing to an improved Q factor for our probes. In addition, probe performance could also be enhanced by improvements in the post-processing of the probeheads, including the cleaning and filling of the channels. Moreover, combined with MR experimental requirements, it is necessary to determine an optimal coil structure and size to obtain an MR probe with a high Q factor. In the future, state-of-the-art 3D metal printing capabilities may overcome problems such as printing pores, 3D structural accuracy and compatibility with other materials, eliminating the need to add conductive components (e.g., copper foil) and capacitors in MR coils41,49. Looking ahead, a variety of 3D printing methods and material innovations will further enhance the performance and greatly expand the application prospects of our customized integrative probeheads. In conclusion, we have demonstrated an approach for the design and fabrication of integrative MR probeheads with high fabrication precision based on the combination of 3D printing and liquid metal filling techniques. Multi-size/multi-shape integrated MR probeheads containing customized sample channels were rapidly constructed for evaluation and experiments, benefiting from the great flexibility and convenience of our method.
The probehead materials, including 3D printing building materials and liquid metal pastes, were characterized, selected and optimized to improve the experimental performance, which also guided us in optimizing the MR probehead structures. To further verify the usability of our designs in conventional MR systems, we performed related NMR and MRI experiments using customized 3D-printed probeheads. According to the results, our proposed method is flexible and effectively meets the requirements of MR experiments, thus expanding practical applications. Our ongoing efforts to further improve probehead performance include optimizing the material performance, improving the design structures, and expanding the application areas. The proposed method presents a basis for customized probeheads for NMR studies and clinical MRI detection, opening up a new class of applications in MR systems. 3D printing and general fabrication process of printed probeheads Based upon the FDM technique (Fig. 1a), a 3D printing machine (ProJet 3510SD, 3D Systems Inc., Rock Hill, SC, USA) was used to construct MR prototypes with a printing resolution of 30 µm. During printing, the building materials and sacrificial materials were heated and ejected from one of the dual nozzles to construct the molding structures and the coil/sample hollow cavities, respectively. Similar to FDM, SLA (Cyclone W-1, Jiaxing Shanwei Electrical and Mechanical Co., Ltd., China and SLA300 DLC, Weinstein (Xiamen) Industrial Co., Ltd., China) was also used to create probehead models in a layer-by-layer fashion with a printing resolution of 25 µm based on a process of photopolymerization (Fig. 1b). With the help of CAD, an ultraviolet (UV) laser was focused on a vat of photopolymer resin, causing chains of molecules to link into polymers, which made up the bodies of the probeheads. After the 3D printing procedure, we infused LM pastes into the coil channels from the injection hole to form conductive electrical MR coils (Fig.
1c, d). Two thin copper strips were inserted into the LM through the coil pins as feedlines to the matching network (Fig. 1e). Then, the injection holes and RF circuit interfaces were sealed with silver paste (Pelco 16040-30, Ted Pella Inc., Redding, CA, USA) and epoxy glue to prevent LM paste leakage. Because of their strong fluidity, LM pastes can be easily injected into, and fully fill, irregular and narrow channels (diameter of 400 µm or smaller in our experiments). An ultrasonic vibration machine (KQ3200DA, Kunshan Ultrasound Instrument Co., Ltd., China) was used to remove tiny air bubbles remaining in the coil channels after LM paste injection. The solidification process, a step used to cure LM pastes to form solid structures, was essential to stabilize the encapsulant. Finally, the inserted RF circuit elements were welded with a matching circuit to constitute a complete MR probe. Electrical performance measurement of building materials and LM pastes Numerous measurement techniques can be used to characterize the electromagnetic properties of 3D printing building materials. We utilized a simple but robust method (ref. 19) to measure the dielectric properties of the selected common building materials used in either FDM or SLA 3D printing (Supplementary Fig. 2). The materials can be broadly divided into three groups, corresponding to the three printers: VisiJet M3 Crystal for the ProJet 3510SD; Formula L1 and Clara A for the SLA300 DLC; and PT-series for the Cyclone W-1. The dielectric constant εr and loss tangent tanδ of the materials in each resonator circuit at MR resonance frequencies were estimated (Supplementary Methods). We fabricated five samples of each building material to minimize the influence of printing inaccuracies, connectors, and parasitic losses on the measurement. Electrical performance tests of the metal materials were carried out in a temperature-controlled chamber.
All presented numerical measurements in our work were conducted five times under the same conditions to obtain an average. Fabrication of LM pastes LM pastes were prepared by incorporating uncoated metal microparticles into gallium, using high-energy sonication (FS-450N, Shanghai Sonxi Ultrasonic Instrument Co., Ltd., China; Supplementary Fig. 4). The gallium was acid-treated to remove the oxide skin, which facilitated the incorporation of metal particles into the gallium droplets. Instead of being purely on the exterior of the material, gallium oxide was distributed inside the bulk fluid after sonication (ref. 21). The dispersed oxides served as a support to suspend the metal microparticles in the low-viscosity fluid to form a paste, preventing settling during storage. After mixing and molding, the LM pastes were set aside overnight for stabilization and solidification at 263 K before characterization. The AgMPs and AuMPs (Shanghai Aladdin Biochemical Technology Co., Ltd., China) used for mixing both had a diameter of 10 µm. During mixing, the sonication energy was kept constant, while the amount of microparticles was increased. The AuMPs were weighed using a high-precision electronic balance and mixed with pure gallium at 0, 1, 2, 3, 4, and 5 wt% (of a 2 g mixture). Verification tests of the 3D-printed NMR probehead To initially verify the performance of our approach, the designed saddle coil structure was imported into SolidWorks software after CST simulation to model the 3D-printed integrative RF probehead. According to the simulation results, an RF coil with a length of 4.98 mm and an inner diameter of 3 mm was finally constructed in a substrate block. To obtain the optimal SNR performance of the coil, we designed the detection region of a sample-filled chamber (2.6 mm diameter) according to the coil geometry (3 mm diameter).
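The paste loadings described above ("0, 1, 2, 3, 4, and 5 wt% of a 2 g mixture") reduce to simple weigh-out arithmetic; a minimal sketch:

```python
def weigh_out(total_mass_g, wt_percent):
    """Mass of microparticles for a given weight fraction of the final
    particle-gallium mixture; the remainder is gallium."""
    particles = total_mass_g * wt_percent / 100.0
    gallium = total_mass_g - particles
    return particles, gallium

# AuMP loadings used for the 2 g pastes:
table = {w: weigh_out(2.0, w) for w in (0, 1, 2, 3, 4, 5)}
# e.g. 5 wt% -> 0.10 g AuMPs + 1.90 g gallium
```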
The accordingly defined SAP probehead was fabricated as an example with dimensions of 15 mm × 6 mm × 7 mm, and contained specially designed structures for connection and installation (Supplementary Fig. 7). An antenna test, Q factor calculation and nutation experiment were performed to test the basic performance of the SAP probehead after welding and installation (Supplementary Figs. 8–10). Because the structure and size of the coils are the same (a preferred saddle coil with optimal dimension scale), the unloaded Q factors of the 3D-printed NMR probeheads used in our work do not differ much, ranging from 45 to 50. A deionized water signal was acquired using the SAP probehead in a Varian 11.7 T NMR system to demonstrate the usability of our integrated design. The FWHM of the experimental spectrum acquired by using a proton pulse sequence in a single scan was 26 Hz with rough manual shimming. Electrode pretreatment and instrument installation in the EC-NMR experiment Platinum particles were electroplated on platinum wires by cyclic voltammetry in a conventional three-electrode electrochemical cell on a CHI760e workstation (Shanghai CHI Instrument Co. Ltd., China) using the two as-prepared platinum wires as the working electrode and counter electrode and a saturated calomel electrode as the reference electrode at −0.2 V (vs SCE) (ref. 29). The solution consisted of 3.0 mM H2PtCl6 and 0.1 M H2SO4. Electrochemical measurements were then performed in a solution of 0.1 M H2SO4 and 0.1 M C2H5OH using cyclic voltammetry and chronoamperometry (Supplementary Fig. 12). All chemical reagents, including ethanol (C2H5OH), chloroplatinic acid hydrate (H2PtCl6•6H2O), sulfuric acid (H2SO4, 98%), sodium phosphate (TSP) and deuterium oxide (D2O, 99.9%), were purchased from Sigma-Aldrich. The probe with the three-electrode electrochemical probehead (Supplementary Fig. 13) was mounted in the 11.7 T NMR instrument.
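For context, the 26 Hz FWHM quoted above can be converted to ppm. The sketch below assumes a proton frequency of roughly 500 MHz at 11.7 T (a rounded figure for illustration, not a value stated here):

```python
def linewidth_ppm(fwhm_hz, larmor_freq_mhz):
    """Convert an NMR line width from Hz to ppm: 1 ppm = larmor_MHz Hz."""
    return fwhm_hz / larmor_freq_mhz

width = linewidth_ppm(26.0, 500.0)  # ~0.052 ppm with rough manual shimming
```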
The three electrodes were connected to the electrochemical workstation by copper wire cables, with the electrochemical workstation located approximately five meters away from the magnet to avoid interference from the strong magnetic field. The electrochemical probehead consisted of a Pt wire as the CE, an Ag/AgCl wire as the RE, and a Pt wire loaded with Pt nanoparticles as the WE. All electrodes were inserted into the sample channels of the customized probehead and sealed with perforated tube caps. The distance between the electrodes and the detection area was arranged so as not only to realize in situ monitoring of electrochemical reactions, but also to effectively avoid interference in the monitoring area from small bubbles generated by the reaction. The parameters of the proton pulse sequence were a 10 μs excitation pulse length, 57 dBm RF power, 2 s delay, 3 s acquisition time, and 8 accumulations. CFSP preprocessing and experiments A series of pretreatments were required for the CFSP probe prior to NMR experiments. We first used an air pump to blow several small cotton balls into the filter cavity. Due to the funnel-shaped design at the bottom of the filter cavity, the cotton balls gathered at the exit. Silica gel (Shanghai Aladdin Biochemical Technology Co., Ltd., China) with particle sizes of 250–830 µm was then injected in the form of a deionized water suspension and accumulated in the filtration chamber due to the particles' stickiness and the blockage provided by the cotton balls. The inner diameter of the channel at the outlet of the filter cavity was reduced to 0.8 mm so that the cotton balls and silica gel particles could be kept in the cavity without being flushed out by the solvent. The probehead was placed for a period of time in a high-temperature drying tank and was used for experiments after the water had evaporated and the silica gel particles were completely dry.
Our probehead filtration chamber could hold a 15 mm thick silica gel layer, ensuring efficient filtration of precipitates in experiments. For in situ NMR reaction monitoring, optimizing the syringe-pump flow rate to obtain the best detection results over different reaction durations is both complex and important. For a microfluidic system, the channel volume between the mixing point and detection area and the flow rate are two important parameters that determine the residence time (ref. 33). In our design, we fixed the volume of the mixing channel at 400 µL and precisely adjusted and controlled the flow rate of the reactants through syringe pumps for the detection of products. The flow rate determined the residence time in the channel between the mixing point and detection area, and this residence time was taken as the reaction time. Moreover, the time for fluid to flow through the NMR detection region should also be considered to acquire complete NMR signals with high time resolution. As a result, the flow rate for each syringe pump was set to values from 20 µL min−1 to 100 µL min−1, corresponding to residence times from 10 min down to 2 min in the reaction channel, and detection times from 21 s down to 4 s with an estimated detection volume of 14 µL. Samples were injected into the CFSP channels at this slow flow rate and low pressure, ensuring sufficient time for the silica gel layer to block and adsorb paramagnetic precipitates. Reactants were pumped into the probehead by syringe pumps approximately 3 m outside the NMR magnet (Supplementary Fig. 20). Regular proton pulse sequences were used to detect the spectra of the reaction products after in situ shunting. The delay time between scans was scaled with the flow rate to ensure complete refreshment of the detection volume. The other sequence parameters were consistent with the previous EC experiment.
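The residence and detection times quoted above follow directly from volume/flow-rate ratios. The sketch below assumes, as the quoted numbers imply, that the two reactant pumps run at the same rate, so the total flow is twice the per-pump rate:

```python
CHANNEL_UL = 400.0  # mixing-point-to-detector channel volume (uL)
DETECT_UL = 14.0    # estimated detection volume (uL)

def times_for(per_pump_ul_min):
    """Residence time (min) in the reaction channel and dwell time (s)
    in the detection region for a given per-pump flow rate."""
    total = 2.0 * per_pump_ul_min       # two reactant pumps, equal rates
    residence_min = CHANNEL_UL / total
    detect_s = DETECT_UL / total * 60.0
    return residence_min, detect_s

# 20  uL/min per pump -> 10 min residence, 21 s in the detection region
# 100 uL/min per pump ->  2 min residence, ~4 s in the detection region
```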
For comparison, the same reaction was detected without filtration using a commercial 5 mm sample-tube probe (Varian 500 ID/PFG SP50P54TOL) with an unloaded Q factor of 160. The authors declare that the main data supporting the findings of this study are available within the article and its Supplementary Information files. Extra data are available from the corresponding author upon request. Source data are provided with this paper. Tirotta, I. et al. 19F magnetic resonance imaging (MRI): From design of materials to clinical applications. Chem. Rev. 115, 1106–1129 (2015). Cevallos-Cevallos, J. M., Reyes-De-Corcuera, J. I., Etxeberria, E., Danyluk, M. D. & Rodrick, G. E. Metabolomic analysis in food science: a review. Trends Food Sci. Technol. 20, 557–566 (2009). Ogata, K. et al. Revealing lithium-silicide phase transformations in nano-structured silicon-based lithium ion batteries via in situ NMR spectroscopy. Nat. Commun. 5, 3217 (2014). Cao, S. H. et al. Versatile, robust, and facile approach for in situ monitoring electrocatalytic processes through liquid electrochemical NMR spectroscopy. Anal. Chem. 91, 1686–1691 (2019). Webb, A. G. Radiofrequency microcoils in magnetic resonance. Prog. Nucl. Magn. Reson. Spectrosc. 31, 1–42 (1997). Lee, H., Sun, E., Ham, D. & Weissleder, R. Chip-NMR biosensor for detection and molecular analysis of cells. Nat. Med. 14, 869–874 (2008). Fugariu, I. et al. Towards single egg toxicity screening using microcoil NMR. Analyst 142, 4812–4824 (2017). Swyer, I. et al. Digital microfluidics and nuclear magnetic resonance spectroscopy for in situ diffusion measurements and reaction monitoring. Lab Chip 19, 641–653 (2019). Mompean, M. et al. Pushing nuclear magnetic resonance sensitivity limits with microfluidics and photochemically induced dynamic nuclear polarization. Nat. Commun. 9, 108 (2018).
Lee, J. Y., An, J. & Chua, C. K. Fundamentals and applications of 3D printing for novel materials. Appl. Mater. Today 7, 120–133 (2017). Silver, A. 3D printing in lab. Nature 565, 123–124 (2019). Sochol, R. D. et al. 3D printed microfluidics and microelectronics. Microelectron. Eng. 189, 52–68 (2018). Murphy, S. V. & Atala, A. 3D bioprinting of tissues and organs. Nat. Biotechnol. 32, 773–785 (2014). Mispelter J., Lupu M. & Briguet A. NMR Probeheads for Biophysical and Biomedical Experiments: Theoretical Principles and Practical Guidelines, 2nd edn. (Imperial College Press, 2015). Ngo, T. D., Kashani, A., Imbalzano, G., Nguyen, K. T. Q. & Hui, D. Additive manufacturing (3D printing): a review of materials, methods, applications and challenges. Compos. Part B Eng. 143, 172–196 (2018). Li, L. G., Abedini-Nassab, R. & Yellen, B. B. Monolithically integrated Helmholtz coils by 3-dimensional printing. Appl. Phys. Lett. 104, 190 (2014). Bharambe, V. et al. Vacuum-filling of liquid metals for 3D printed RF antennas. Addit. Manuf. 18, 221–227 (2017). Zhou, L. Y. et al. Three-dimensional printed wearable sensors with liquid metals for detecting the pose of snakelike soft robots. ACS Appl. Mater. Interfaces 10, 23208–23217 (2018). Behzadnezhad, B., Collick, B. D., Behdad, N. & Mcmillan, A. B. Dielectric properties of 3D-printed materials for anatomy specific 3D-printed MRI coils. J. Magn. Reson. 289, 113–121 (2018). Wu, S. Y., Yang, C., Hsu, W. Y. & Lin, L. W. 3D-printed microelectronics for integrated circuitry and passive wireless sensors. Microsyst. Nanoeng. 1, 15013 (2015). Daalkhaijav, U., Yirmibesoglu, O. D., Walker, S. & Menguc, Y. Rheological modification of liquid metal for additive manufacturing of stretchable electronics. Adv. Mater. Technol. 3, 1700351 (2018).
Losurdo, M., Suvorova, A., Rubanov, S., Hingerl, K. & Brown, A. S. Thermally stable coexistence of liquid and solid phases in gallium nanoparticles. Nat. Mater. 15, 995–1002 (2016). Pashaey, B. P. & Seleznev, V. V. Magnetic susceptibility of gallium-indium alloys in liquid state. Sov. Phys. J. 16, 565–566 (1973). Kozlov, M. & Turner, R. Fast MRI coil analysis based on 3-D electromagnetic and RF circuit co-simulation. J. Magn. Reson. 200, 147–152 (2009). Tang, J. A. et al. Solid-state STRAFI NMR probe for material imaging of quadrupolar nuclei. J. Magn. Reson. 225, 93–101 (2012). Wang, Y., Zou, S. Z. & Cai, W. B. Recent advances on electro-oxidation of ethanol on Pt- and Pd-based catalysts: from reaction mechanisms to catalytic materials. Catalysts 5, 1507–1534 (2015). Antolini, E. Catalysts for direct ethanol fuel cells. J. Power Sources 170, 1–12 (2007). Zhang, X. P. et al. NMR spectroelectrochemistry in studies of hydroquinone oxidation by polyaniline thin films. Electrochim. Acta 273, 300–306 (2018). Wang, J. L. et al. MoS2 nanoflower supported Pt nanoparticle as an efficient electrocatalyst for ethanol oxidation reaction. Int. J. Hydrog. Energy 44, 16411–16423 (2019). Tran, L. T. et al. Preparation and electrocatalytic characteristics of the Pt-based anode catalysts for ethanol oxidation in acid and alkaline media. Int. J. Hydrog. Energy 43, 20563–20572 (2018). Brimaud, S., Pronier, S., Coutanceau, C. & Leger, J. M. New findings on CO electrooxidation at platinum nanoparticle surfaces. Electrochem. Commun. 10, 1703–1707 (2008). Florez-Montano, J. et al. Mechanism of ethanol electrooxidation on mesoporous Pt electrode in acidic medium studied by a novel electrochemical mass spectrometry set-up. Electrochim. Acta 209, 121–131 (2016). Fang, H. X. et al. Probing the kinetics in supramolecular chemistry and molecular assembly by microfluidic-NMR spectroscopy. Sci. China Chem. 61, 1460–1464 (2018). Finch, G., Yilmaz, A. & Utz, M.
An optimised detector for in-situ high-resolution NMR in microfluidic devices. J. Magn. Reson. 262, 73–80 (2016). Salafi, T., Zeming, K. K. & Zhang, Y. Advancements in microfluidics for nanoparticle separation. Lab Chip 17, 11–33 (2017). Pamme, N. Continuous flow separations in microfluidic devices. Lab Chip 7, 1644–1659 (2007). Wensink, H. et al. Measuring reaction kinetics in a lab-on-a-chip by microcoil NMR. Lab Chip 5, 280–284 (2005). Evans, W. L. & Day, J. E. The oxidation of ethyl alcohol by means of potassium permanganate. J. Am. Chem. Soc. 38, 375–381 (2002). Mohan, D. & Pittman, C. U. Arsenic removal from water/wastewater using adsorbents: a critical review. J. Hazard. Mater. 142, 1–53 (2007). Li, J. R., Kuppler, R. J. & Zhou, H. C. Selective gas adsorption and separation in metal-organic frameworks. Chem. Soc. Rev. 38, 1477–1504 (2009). Corea, J. R. et al. Screen-printed flexible MRI receive coils. Nat. Commun. 7, 10839 (2016). Badilita, V. et al. On-chip three dimensional microcoils for MRI at the microscale. Lab Chip 10, 1387–1390 (2010). Varga, M. et al. Adsorbed eutectic GaIn structures on a neoprene foam for stretchable MRI coils. Adv. Mater. 29, 1703744 (2017). Alonso, J., Soleilhavoup, A., Wong, A., Guiga, A. & Sakellariou, D. Double helix dipole design applied to magnetic resonance: a novel NMR coil. J. Magn. Reson. 235, 32–41 (2013). van Bentum, P. J. M., Janssen, J. W. G., Kentgens, A. P. M., Bart, J. & Gardeniers, J. G. E. Stripline probes for nuclear magnetic resonance. J. Magn. Reson. 189, 104–113 (2007). Olson, D. L., Lacey, M. E. & Sweedler, J. V. High-resolution microcoil NMR for analysis of mass-limited, nanoliter samples. Anal. Chem. 70, 645–650 (1998). Mager, D. et al. An MRI receiver coil produced by inkjet printing directly on to a flexible substrate. IEEE Trans. Med. Imaging 29, 482–487 (2010). Bellew, A. T., Manning, H. G., da Rocha, C. G., Ferreira, M. S. & Boland, J. J.
Resistance of single Ag nanowire junctions and their role in the conductivity of nanowire networks. ACS Nano 9, 11422–11429 (2015). DebRoy, T. et al. Scientific, technological and economic issues in metal printing and their solutions. Nat. Mater. 18, 1026–1032 (2019). The work is supported by the National Natural Science Foundation of China (Grants U1632274, 11761141010, U1805261, 11475142, 22073078, and 61801411), and China Postdoctoral Science Foundation (2017M622075). Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, 361005, Xiamen, China Junyao Xie, Xueqiu You, Yuqing Huang, Zurong Ni, Xinchang Wang, Dechao Zhang, Huijun Sun & Zhong Chen State Key Laboratory for Physical Chemistry of Solid Surfaces, Xiamen University, 361005, Xiamen, China Junyao Xie, Xueqiu You, Yuqing Huang, Zurong Ni, Xinchang Wang, Xingrui Li, Chaoyong Yang, Dechao Zhang, Huijun Sun & Zhong Chen Department of Chemistry, Xiamen University, 361005, Xiamen, China Xingrui Li & Chaoyong Yang Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, 361005, Xiamen, China Fujian Science & Technology Innovation Laboratory for Energy Materials of China, 361005, Xiamen, China Zhong Chen J.X. conceived the idea, designed, and performed the experiments, analyzed data, and wrote the first draft of the manuscript. X.Y. conceived the idea, designed experiments, and helped write the manuscript. Y.H. and X.W. helped write the manuscript. Z.N. helped design experiments and analyzed the data. X.L. helped perform CFD simulations and their analysis under the supervision of C.Y. H.C. and D.Z. helped perform experiments. H.S. helped design experiments and analyzed the data. Z.C. supervised and conceived the idea, helped design experiments, and helped write the manuscript.
Correspondence to Xueqiu You or Huijun Sun or Zhong Chen. Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Xie, J., You, X., Huang, Y. et al. 3D-printed integrative probeheads for magnetic resonance. Nat Commun 11, 5793 (2020). https://doi.org/10.1038/s41467-020-19711-y Received: 10 December 2019 Accepted: 21 October 2020
What are the most important resonance structures of 5-aminocyclohexa-2,4-dienone? Show, by means of curly arrows and electron shifts, the resonance contributors to the structure given below. Suggest which one would not be a major contributor and state why. This is a question from my exam. I do not understand where I went wrong. My effort is placed at the end (along with a request). Also, my TA removed 1 point out of 7 for a reason. Can anyone decipher why? There's writing in red, but it is illegible to me. I don't understand what he refers to by 8 and 6 t or +, or whatever he wrote. organic-chemistry resonance yolo123 Wouldn't this isomerize to its tautomer m-aminophenol? – Abel Friedman The handwriting of your TA probably stems from ancient Egyptian script; I had a hard time reading it. In the end I agree with jerepierre, interpreting it as "6E" or "8E", short for 6/8 electrons. The last statement then translates to "but all have 8E." Before continuing to answer your question, I want to make a statement about resonance structures in general. Resonance structures are used to express a complicated bonding situation in terms of simpler, Lewis-type structures. None of these structures exists individually; only a mixture of all possible resonance structures exists at the same time. I am not a fan of declaring a single resonance structure as having more or less impact on the overall structure. Of course one can go ahead and calculate the total wave function in terms of resonance structures, generally known as valence bond theory, but this is overkill for such simple molecules. Then you get a mathematical representation of the electronic structure which you can decompose into contributions of the individual resonance structures. A simpler approach is to analyse the molecule with Natural Bond Orbital (NBO) analysis. I have done this for you, but only as an educational effort.
Maybe after understanding this concept it is easier to grasp why certain structures make a smaller contribution. Let me go ahead and explain a little NBO to you. The electronic structure of a molecule is in general highly delocalised. All orbitals are stretched over the whole molecule. Without changing the electron density, it is possible to localise (transform) these orbitals to represent two-centre interactions, which can be interpreted as bonds. This gives an approximate picture in terms of a Lewis structure. There will be electron density which cannot be assigned to a bond. The smaller this density is, the better the fit as a Lewis structure. For this educational approach I picked the 5 resonance structures you have posted plus an additional ionic form. (I would have deducted a point for the last structure.) There are many more ionic forms, but these will not have big effects, e.g. cleavage of a carbon–carbon bond. The following is based on NBO analyses from DF-BP86/def2-SVP.
\begin{array}{crr}\hline
\text{Structure} & \text{% Lewis} & \text{% non-Lewis}\\\hline
\mathbf{1} & 96.4 & 3.6 \\
\mathbf{2} & 96.8 & 3.2 \\
\mathbf{3} & 96.2 & 3.8 \\
\mathbf{4} & 95.8 & 4.2 \\
\mathbf{5} & 96.3 & 3.7 \\
\mathbf{6} & 95.2 & 4.7 \\\hline
\end{array}
The first statement that should be made is that this molecule is quite well described by Lewis resonance structures. However, delocalised bonding would describe the molecule much better. All structures are reasonably close to each other, as one would expect for a very delocalised system. The ionic structure has the smallest impact, which is plausible because nitrogen-hydrogen bonds are better described as covalent. However, it proves that there can (in the right medium) be a proton shift from nitrogen to oxygen, and it is therefore worth consideration. The structure with the second largest error (% non-Lewis) is $\mathbf{4}$. A positively charged carbon would be by far more electronegative than nitrogen.
And many imine-type structures are known, which makes $\mathbf{5}$ more important. (In this sense I would disagree with your explanation, which could also justify the deduction of a point.) In general: carbocation structures that are not in close proximity to (much) more electronegative elements should be considered as having a lesser influence on the electronic structure of a molecule. Martin - マーチン♦ Would it not be simpler or perhaps clearer to tabulate the relative energies of these structures and on that basis determine which are expected to be greater resonance contributors? – Buck Thorn ♦ @Buck These are all just interpretations of a single electronic structure; they all have the same energy, just different localisation schemes of the electron density. I guess one could do a full valence bond calculation, but I can't do that. Probably it would be a good idea to look at this with natural resonance theory, but I don't have access to that either. – Martin - マーチン ♦ Ah, so they are degenerate. You did a nice job of arranging the results. It's so tidy it's hard to see the steps involved, even with your explanation that NBO was used. I haven't worked with NBO so it's not easy to see how you "pick the structures." I thought you fed geometries + constraints into Gaussian and it spit out a density, but in this case ... well, I guess I'll have to read up on NBO. I think your teacher said "but all have it", referring to the split charge (?). I would personally say that the ones with less contribution would be the ones with positively charged carbons, because while N+ and O- are relatively stable, C+ is certainly not. Being a bit more specific, I would say (not 100% sure) that it is the 4th one, because it is the one in which the positive charge is on an unstable carbocation and close to a nitrogen atom, which would destabilise it even more due to its electronegativity.
Daniel AG I think those are "6E" and "8E". "6E" indicates that there is a carbon atom with only 6 total electrons in that resonance structure. "8E" indicates that all atoms bear octets of electrons. When determining importance of resonance structures, maximizing the number of atoms that follow the octet rule is the priority. It may be counterintuitive, but the structure with a positive charge on nitrogen contributes more to the electronic structure of the compound than the structures with positive charge on carbon. jerepierre
Let \(\mathcal{S} \subset \mathbb{R}^{4}\) be the vectors \(X=\left(x_{1}, x_{2}, x_{3}, x_{4}\right)\) that satisfy \(x_{1}+x_{2}-x_{3}+x_{4}=0\). a) What is the dimension of \(\mathcal{S}\)? b) Find a basis for the orthogonal complement of \(\mathcal{S}\).
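A quick numerical sanity check of both parts (my own sketch, not part of the original question): the solution set of one nontrivial linear equation in \(\mathbb{R}^{4}\) is a 3-dimensional hyperplane, and its orthogonal complement is the line spanned by the coefficient vector \((1, 1, -1, 1)\).

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n = (1, 1, -1, 1)  # coefficients of x1 + x2 - x3 + x4 = 0

# Three independent solutions of the equation: a basis for S, so dim S = 3.
basis_S = [(1, -1, 0, 0),
           (1, 0, 1, 0),
           (1, 0, 0, -1)]

# Each basis vector satisfies the equation, i.e. is orthogonal to n,
# so S is exactly the hyperplane n-perp and its complement is span{n}.
solves = all(dot(n, v) == 0 for v in basis_S)
```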
Chapter IX. Fields and Galois Theory Anthony W. Knapp Books By Independent Authors, 2016: 452-552 (2016) https://doi.org/10.3792/euclid/9781429799980-9 This chapter develops some general theory for field extensions and then goes on to study Galois groups and their uses. More than half the chapter illustrates by example the power and usefulness of the theory of Galois groups. Prerequisite material from Chapter VIII consists of Sections 1–6 for Sections 1–13 of the present chapter, and it consists of all of Chapter VIII for Sections 14–17 of the present chapter. Sections 1–2 introduce field extensions. These are inclusions of a base field in a larger field. The fundamental construction is of a simple extension, algebraic or transcendental, and the next construction is of a splitting field. An algebraic simple extension is made by adjoining a root of an irreducible polynomial over the base field, and a splitting field is made by adjoining all the roots of such a polynomial. For both constructions, there are existence and uniqueness theorems. Section 3 classifies finite fields. For each integer $q$ that is a power of some prime number, there exists one and only one finite field of order $q$, up to isomorphism. One finite field is an extension of another, apart from isomorphisms, if and only if the order of the first field is a power of the order of the second field. Section 4 concerns algebraic closure. Any field has an algebraic extension in which each nonconstant polynomial over the extension field has a root. Such a field exists and is unique up to isomorphism. Section 5 applies the theory of Sections 1–2 to the problem of constructibility with straightedge and compass. First the problem is translated into the language of field theory.
Then it is shown that three desired constructions from antiquity are impossible: "doubling a cube," trisecting an arbitrary constructible angle, and "squaring a circle." The full proof of the impossibility of squaring a circle uses the fact that $\pi$ is transcendental over the rationals, and the proof of this property of $\pi$ is deferred to Section 14. Section 5 concludes with a statement of the theorem of Gauss identifying integers $n$ such that a regular $n$-gon is constructible and with some preliminary steps toward its proof. Sections 6–8 introduce Galois groups and develop their theory. The theory applies to a field extension with three properties—that it is finite-dimensional, separable, and normal. Such an extension is called a "finite Galois extension." The Fundamental Theorem of Galois Theory says in this case that the intermediate extensions are in one-one correspondence with subgroups of the Galois group, and it gives formulas relating the corresponding intermediate fields and Galois subgroups. Sections 9–11 give three standard initial applications of Galois groups. The first is to proving the theorem of Gauss about constructibility of regular $n$-gons, the second is to deriving the Fundamental Theorem of Algebra from the Intermediate Value Theorem, and the third is to proving the necessity of the condition of Abel and Galois for solvability of polynomial equations by radicals—that the Galois group of the splitting field of the polynomial have a composition series with abelian quotients. Sections 12–13 begin to derive quantitative information, rather than qualitative information, from Galois groups. Section 12 shows how an appropriate Galois group points to the specific steps in the construction of a regular $n$-gon when the construction is possible. Section 13 introduces a tool known as Lagrange resolvents, a precursor of modern harmonic analysis. 
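The Galois correspondence of the Fundamental Theorem can be illustrated by a standard example (this illustration is not drawn from the chapter itself): the biquadratic extension $\mathbf{Q}(\sqrt{2},\sqrt{3})/\mathbf{Q}$ is a finite Galois extension of degree 4 with
$$\mathrm{Gal}\bigl(\mathbf{Q}(\sqrt{2},\sqrt{3})/\mathbf{Q}\bigr) \cong \mathbf{Z}/2\mathbf{Z} \times \mathbf{Z}/2\mathbf{Z},$$
and its three subgroups of order 2 correspond, as the theorem predicts, to the three intermediate fields $\mathbf{Q}(\sqrt{2})$, $\mathbf{Q}(\sqrt{3})$, and $\mathbf{Q}(\sqrt{6})$, with larger subgroups corresponding to smaller fields.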
Lagrange resolvents are used first to show that Galois extensions in characteristic 0 with cyclic Galois group of prime order $p$ are simple extensions obtained by adjoining a $p^\mathrm{th}$ root, provided all the $p^\mathrm{th}$ roots of 1 lie in the base field. Lagrange resolvents and this theorem about cyclic Galois groups combine to yield a derivation of Cardan's formula for solving general cubic equations. Section 14 begins the part of the chapter that depends on results in the later sections of Chapter VIII. Section 14 itself contains a proof that $\pi$ is transcendental; the proof is a nice illustration of the interplay of algebra and elementary real analysis. Section 15 introduces the field polynomial of an element in a finite-dimensional extension field. The determinant and trace of this polynomial are called the norm and trace of the element. The section gives various formulas for the norm and trace, including formulas involving Galois groups. With these formulas in hand, the section concludes by completing the proof of Theorem 8.54 about extending Dedekind domains, part of the proof having been deferred from Section VIII.11. Section 16 discusses how prime ideals split when one passes, for example, from the integers to the algebraic integers in a number field. The topic here was broached in the motivating examples for algebraic number theory and algebraic geometry as introduced in Section VIII.7, and it was the main topic of concern in that section. The present results put matters into a wider context. Section 17 gives two tools that sometimes help in identifying Galois groups, particularly of splitting fields of monic polynomials with integer coefficients. One tool uses the discriminant of the polynomial. The other uses reduction of the coefficients modulo various primes. Published: 1 January 2016 Digital Object Identifier: 10.3792/euclid/9781429799980-9 Rights: Copyright © 2016, Anthony W. 
Knapp
Introduction to multigraphr

This vignette serves as an introduction to the package multigraphr. Parts of the theoretical background are provided, but for more details consult the following literature, on which the package is based:

Shafie, T. (2015). A multigraph approach to social network analysis. Journal of Social Structure, 16. Link

Shafie, T. (2016). Analyzing local and global properties of multigraphs. The Journal of Mathematical Sociology, 40(4), 239-264. Link

Shafie, T. and Schoch, D. (to appear in 2021). Multiplexity analysis of networks using multigraph representations. Statistical Methods & Applications

Shafie, T. (under review). Goodness of fit tests for random multigraph models.

Make sure the library is loaded:

```r
library('multigraphr')
```

Multigraphs and applicability

Multigraphs are network representations in which multiple edges and edge loops (self-edges) are permitted. These data structures can be either directly observed or aggregated by classifying or cross-classifying node attributes into meta nodes. For the latter case, within-group edges correspond to self-edges. See the example below, where the original graph with 15 nodes and 12 edges (left) is aggregated based on node categories into a small multigraph with 4 nodes (right).

Edge aggregation can also be used to obtain multigraphs. Assume that we study a graph with three different types of relations over three periods of time. If we aggregate over time periods, we obtain for each edge category a multigraph for the total time period of three days. For more details on these kinds of aggregations, see Shafie (2015; 2016).

Multigraph representation of network data

Multigraphs are represented by their edge multiplicity sequence \[\mathbf{M} = (M_{ij} : (i,j) \in \cal{R} )\] where \(\cal{R}\) is the canonical site space for undirected edges \[\cal{R} = \lbrace (i,j) : 1 \leq i \leq j \leq n \rbrace\] i.e. \[(1,1) < (1,2) <···< (1,n) < (2,2) < (2,3) <···< (n,n)\] where \(n\) is the number of nodes. 
The number of vertex pair sites is given by \(\displaystyle r = \binom{n+1}{2}\). Edge multiplicities can also be represented as entries in a matrix \[\mathbf{M}= \begin{bmatrix} M_{11} & M_{12} & \dots & M_{1n} \\ 0 & M_{22} & \dots & M_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0& \ldots & M_{nn} \end{bmatrix} \qquad \mathbf{M} + \mathbf{M'}= \begin{bmatrix} 2M_{11} & M_{12} & \dots & M_{1n} \\ M_{12} & 2M_{22} & \dots & M_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ M_{1n} & M_{2n} & \ldots & 2M_{nn} \end{bmatrix}\] where the right-hand matrix is the equivalent of an adjacency matrix of a multigraph (Shafie, 2016).

Random multigraph models

Two probability models for generating undirected random multigraphs are implemented in the package, together with several statistics under these two models. Moreover, functions for goodness of fit tests are available for the presented models. Note that some of the functions are only practical for small scale multigraphs (with number of nodes less than 10 and number of edges less than 20).

Random stub matching model for multigraphs

The first model is obtained by random stub matching (RSM) given the observed degree sequence of a multigraph, so that edges are assigned to sites given the fixed degree sequence \(\mathbf{d}=(d_1, \ldots, d_n)\). The edge assignment probability sequence \(\mathbf{Q}\) is defined as a function of these degrees, and the edge assignment probabilities are given by \[\begin{equation} Q_{ij}= \left\{ \begin{array}{ll} \displaystyle \binom{d_i}{2}\bigg/ \binom{2m}{2} & \mbox{for $i=j$}\\ \displaystyle d_id_j\bigg/ \binom{2m}{2} & \mbox{for $i<j$} \ , \end{array} \right . \end{equation}\] The probability of a multigraph under this model is given by \[\begin{equation}P(\mathbf{M}=\mathbf{m})=\frac{2^{m_2} \binom {m}{\mathbf{m}}}{\binom{2m}{\mathbf{d}}}=\frac{2^{m_2} m! \prod_{i=1}^n d_i!}{(2m)! \prod_{i\leq j} m_{ij}!} \ ,\end{equation}\] where \(m_2=\sum \sum_{i<j}m_{ij}\) (Shafie, 2016). 
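As a concrete check of the edge assignment probabilities: for a multigraph on \(n=3\) nodes with degree sequence \(\mathbf{d}=(3,7,2)\), we have \(2m=12\) and \(\binom{2m}{2}=66\), so
\[
Q_{11}=\binom{3}{2}\Big/66=\frac{3}{66},\quad
Q_{22}=\binom{7}{2}\Big/66=\frac{21}{66},\quad
Q_{33}=\binom{2}{2}\Big/66=\frac{1}{66},
\]
\[
Q_{12}=\frac{3\cdot 7}{66}=\frac{21}{66},\quad
Q_{13}=\frac{3\cdot 2}{66}=\frac{6}{66},\quad
Q_{23}=\frac{7\cdot 2}{66}=\frac{14}{66},
\]
and the six probabilities sum to \(66/66=1\), as they must.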
Consider a small graph on 3 nodes and the following adjacency matrix:

```r
A <- matrix(c(1, 1, 0,
              1, 2, 2,
              0, 2, 0), nrow = 3, ncol = 3)
A
##      [,1] [,2] [,3]
## [1,]    1    1    0
## [2,]    1    2    2
## [3,]    0    2    0
```

The degree sequence of the multigraph has double counted diagonals (see the edge multiplicity matrix defined above) and is given by

```r
D <- get_degree_seq(adj = A, type = 'graph')
D
## [1] 3 7 2
```

so that the number of edges in the multigraph is half the sum of the degree sequence, which is equal to 6. The RSM model given this degree sequence shows that the sample space consists of 7 possible multigraphs, as represented by their multiplicity sequences \[\mathbf{M}= (M_{11}, M_{12}, M_{13}, M_{22}, M_{23}, M_{33}),\] which are stored in the data frame m.seq (each row corresponds to the edge multiplicity sequence of a unique multigraph):

```r
rsm_1 <- rsm_model(deg.seq = D)
rsm_1$m.seq
##   M11 M12 M13 M22 M23 M33
## 1   1   1   0   3   0   1
```

The probabilities associated with each multigraph/edge multiplicity sequence, together with the statistics 'number of loops', 'number of multiple edges' and 'simple graph or not', are stored in prob.dists:

```r
rsm_1$prob.dists
##     prob.rsm loops multiedges simple
## 1 0.03030303     5          1      0
```

More details on these statistics for analyzing structural properties of multigraphs are given below.

Independent edge assignment model for multigraphs

The second model is obtained by independent edge assignments (IEA) according to a common probability distribution. The \(m\) edges of the multigraph are independently assigned to the sites \((i,j)\in \mathcal{R}\) and the edge multiplicity sequence \(\mathbf{M}\) follows a multinomial distribution with parameters \(m\) and \(\mathbf{Q}=(Q_{ij}: (i, j) \in \mathcal{R})\), where \(\mathbf{Q}\) is the edge probability sequence with edge assignment probabilities \(Q_{ij}\) for each site \((i,j)\in \mathcal{R}\) (Shafie, 2015). 
The probability of a multigraph under this model is given by \[\begin{equation} P(\mathbf{M}=\mathbf{m})= \binom{m}{\mathbf{m}} \mathbf{Q}^{\mathbf{m}} = \frac{m!}{\prod_{i\leq j} m_{ij} !} \prod_{i\leq j}Q_{ij}^{m_{ij}}. \end{equation}\] Moments of certain statistics used to analyse multigraph structures are more easily derived under this model, facilitating the structural analysis of multigraphs since the full probability distribution of multigraphs is not needed. Thus, it is of interest to approximate the RSM model by the IEA model. There are two ways in which this can be done:

1. Independent edge assignment of stubs (IEAS)

In order to get an RSM approximation using the IEA model, we can simply ignore the dependency between the edge assignments in the RSM model. The distribution of \(\mathbf{M}\) is approximated with the edge probability sequence defined as a function of the fixed degrees, \(\mathbf{Q}(\mathbf{d})\). This approximation can be viewed as repeated assignment with replacement of stubs, whereas RSM is repeated assignment without replacement of stubs.

2. Independent stub assignment (ISA)

A Bayesian version of the RSM model is obtained by assigning a prior to the parameter \(\mathbf{d}\), i.e., assuming that the stubs are independently attached to the \(n\) nodes according to a probability distribution \(\mathbf{p}=(p_1, p_2, \ldots, p_n)\) where \(p_i>0\) and \(\sum_{i=1}^np_i = 1\). Thus, \(\mathbf{d}\) is the outcome of a random degree sequence that is multinomially distributed with parameters \(2m\) and \(\mathbf{p}\). Then it follows that the multiplicity sequence \(\mathbf{M}\) has an IEA distribution with edge probability sequence \(\mathbf{Q}(\mathbf{p})\). For the RSM approximation, \(\mathbf{p}=\mathbf{d}/2m\). The relations between the models are summarized in the figure below.

The function iea_model has both versions of the IEA model implemented and can be specified to approximate the RSM model. 
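For illustration, assume the usual independent-pairing form of \(\mathbf{Q}(\mathbf{p})\), with loop probability \(p_i^2\) and non-loop probability \(2p_ip_j\) (this form is an assumption of this sketch, not a formula quoted from the package documentation). With \(\mathbf{d}=(3,7,2)\) and \(2m=12\), the RSM approximation uses \(\mathbf{p}=\mathbf{d}/2m=(3/12,\,7/12,\,2/12)\), giving
\[
Q_{11}=\tfrac{9}{144},\quad Q_{22}=\tfrac{49}{144},\quad Q_{33}=\tfrac{4}{144},\quad
Q_{12}=\tfrac{42}{144},\quad Q_{13}=\tfrac{12}{144},\quad Q_{23}=\tfrac{28}{144},
\]
which sum to 1 and differ slightly from the IEAS probabilities \(\mathbf{Q}(\mathbf{d})\) above, the difference reflecting stub assignment with rather than without replacement.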
Consider using the IEA model to approximate the RSM model, so that edge assignment probabilities are functions of the observed degree sequence. Note that the sample space for multigraphs is much bigger than for the RSM model, so the multiplicity sequences are not printed (they can be found using the function get_edgemultip_seq for very small multigraphs, and their probabilities can be found using the multinomial distribution). The following shows the number of multigraphs under either of the two IEA models:

```r
ieas_1 <- iea_model(adj = A, type = 'graph', model = 'IEAS', K = 0, apx = TRUE)
isa_1  <- iea_model(adj = A, type = 'graph', model = 'ISA',  K = 0, apx = TRUE)
isa_1$nr.multigraphs
ieas_1$nr.multigraphs
```

The logical parameter apx determines whether the IEA model is being used as an approximation to the RSM model, and the parameter model specifies which IEA model is used for the approximation.

The IEA models can also be used independently of the RSM model. For example, the IEAS model can be used where edge assignment probabilities are estimated using the observed edge multiplicities (maximum likelihood estimates):

```r
ieas_2 <- iea_model(adj = A, type = 'graph', model = 'IEAS', K = 0, apx = FALSE)
```

The ISA model can also be used independently of the RSM model. Then, a sequence with the stub assignment probabilities (for example based on prior belief) should be given as argument:

```r
isa_2 <- iea_model(adj = A, type = 'graph', model = 'ISA', K = 0, apx = FALSE,
                   p.seq = c(1/3, 1/3, 1/3))
```

Statistics to analyze structural properties

Several statistics for analyzing the structural properties of multigraphs under the different models are implemented in the package. These include the number of loops, denoted \(M_1\), and the number of non-loops, denoted \(M_2\) (indicators of e.g. homophily and heterophily). Other statistics, which are only implemented for the IEA model, are those that are part of a so-called complexity sequence describing the distribution of edge multiplicities. 
This sequence is given by \(\mathbf{R}=(R_0, R_1, R_2, \ldots, R_m)\) where \[\begin{equation} R_k = \sum \sum_{i<j}I(M_{ij} = k) \ \textrm{ for } \ k = 0,1, \ldots, m \ , \end{equation}\] and \(I\) is an indicator variable. Thus, \(R_0\) denotes the number of vertex pair sites with no edge occupancy, \(R_1\) single edge occupancy, \(R_2\) double edge occupancy, and so forth. These statistics are useful as indicators of e.g. multiplexity/interlocking. Approximate 95% interval estimates for these statistics are given by \(\hat{E} \pm 2\sqrt{\hat{V}}\).

Example (continued)

Under the RSM model, the first two moments and interval estimates of the statistics \(M_1\) and \(M_2\) are given by

```r
rsm_1$M
##              M1    M2
## Expected  2.273 3.727
## Variance  0.986 0.986
## Upper 95% 4.259 5.713
## Lower 95% 0.287 1.741
```

which are calculated using the numerically found probability distributions under RSM (no analytical solutions exist for these moments). Under the IEA models (IEAS or ISA), moments of these statistics, together with the complexity statistic \(R_k\) representing the sequence of frequencies of edge sites with multiplicities \(k = 0,1, \ldots, m\), are found using derived formulas. Thus, there is no limit on multigraph size. 
When the IEAS model is used to approximate the RSM model (see above), these statistics are:

```r
ieas_1$M
##               M1    M2
## Observed   3.000 3.000
## Expected   2.273 3.727
## Variance   1.412 1.412
## Upper 95%  4.649 6.104
## Lower 95% -0.104 1.351

ieas_1$R
##              R0     R1     R2
## Observed  2.000  2.000  2.000
## Expected  2.674  1.588  1.030
## Variance  0.575  1.129  0.760
## Upper 95% 4.191  3.713  2.773
## Lower 95% 1.156 -0.537 -0.713
```

When the ISA model is used to approximate the RSM model (see above):

```r
isa_1$M
## Observed  3.000 3.000
isa_1$R
```

The interval estimates can then be visualized to detect discrepancies between observed and expected values, thus indicating social mechanisms at play, and to detect interval overlap and potential interdependence between different types of edges (for examples, see Shafie 2015, 2016; Shafie & Schoch 2021).

Goodness of fit tests

Goodness of fit tests of multigraph models use Pearson (\(S\)) and information divergence (\(A\)) test statistics under the random stub matching (RSM) and independent edge assignment (IEA) models, where the latter is either independent edge assignment of stubs (IEAS) or independent stub assignment (ISA). The tests are performed using goodness-of-fit measures between the edge multiplicity sequence of a specified model or an observed multigraph, and the expected multiplicity sequence according to a simple or composite hypothesis. 
Tests of a simple multigraph hypothesis

The following test statistics are used when the edge multiplicities are distributed according to \(\textrm{IEA}(\mathbf{Q})\) and the correct model \(\mathbf{Q_0}=\mathbf{Q}\) is tested: the Pearson statistic \[S_0=\sum\sum_{i \leq j}\frac{(M_{ij}-mQ_{0ij})^2}{mQ_{0ij}}=\sum\sum_{i \leq j}\frac{M_{ij}^2}{mQ_{0ij}}-m \stackrel{asymp}{\sim} \chi^2(r-1)\] the divergence statistic \[D_0=\sum\sum_{i\leq j}\frac{M_{ij}}{m} \log \frac{M_{ij}}{mQ_{0ij}} \quad \textrm{and} \quad A_0=\frac{2m}{\log \textrm{e}}D_0 \stackrel{asymp}{\sim} \chi^2(r-1)\]

Tests of a composite multigraph hypothesis

The composite multigraph hypotheses are ISA for unknown \(\mathbf{p}\) and IEAS for unknown \(\mathbf{d}\), where the parameters have to be estimated from the data \(\mathbf{M}\). When the correct model is tested, the statistics are given by the Pearson statistic \[\hat{S}=\sum\sum_{i \leq j}\frac{(M_{ij}-m\hat{Q}_{ij})^2}{m\hat{Q}_{ij}}=\sum\sum_{i \leq j}\frac{M_{ij}^2}{m\hat{Q}_{ij}}-m \stackrel{asymp}{\sim} \chi^2(r-n)\] the divergence statistic \[\hat{D}=\sum\sum_{i\leq j}\frac{M_{ij}}{m} \log \frac{M_{ij}}{m\hat{Q}_{ij}} \qquad \textrm{and} \quad \hat{A}=\frac{2m}{\log \textrm{e}}\hat{D}\stackrel{asymp}{\sim} \chi^2(r-n)\]

Simulated goodness of fit tests

Probability distributions of test statistics, summaries of tests, moments of test statistics, adjusted test statistics, critical values, significance levels according to the asymptotic distribution, and the power of tests can be examined using gof_sim, given a specified model from which we simulate observed values, and a null or non-null hypothesis from which we calculate expected values. This is in order to investigate the behavior of the null and non-null distributions of the test statistics and their fit to asymptotic \(\chi^2\) distributions, thus also checking how reliable the tests are for small-sized multigraphs. 
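As a quick concrete reading of the degrees of freedom: for a multigraph on \(n=4\) nodes,
\[
r=\binom{n+1}{2}=\binom{5}{2}=10, \qquad
S_0,\,A_0 \;\stackrel{asymp}{\sim}\; \chi^2(r-1)=\chi^2(9), \qquad
\hat{S},\,\hat{A} \;\stackrel{asymp}{\sim}\; \chi^2(r-n)=\chi^2(6),
\]
so the simple and composite hypotheses are compared against different reference distributions even for the same multigraph.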
Simulated goodness of fit tests for multigraphs with n = 4 nodes and m = 10 edges (be patient, these take a while to run).

(1) Testing a simple IEAS hypothesis with degree sequence (6,6,6,2) against an IEAS model with degrees (8,8,2,2):

```r
gof1 <- gof_sim(m = 10, model = 'IEAS', deg.mod = c(8,8,2,2),
                hyp = 'IEAS', deg.hyp = c(6,6,6,2))
```

(2) Testing a correctly specified simple IEAS hypothesis with degree sequence (14,2,2,2):

```r
gof2 <- gof_sim(m = 10, model = 'IEAS', deg.mod = c(14,2,2,2),
                hyp = 'IEAS', deg.hyp = c(14,2,2,2))
```

The non-null (gof1) and null (gof2) distributions of the test statistics, together with their asymptotic \(\chi^2\) distributions, can be visualized using e.g. ggplot2:
Fluoride removal from aqueous solution onto activated carbon of Catha edulis through the adsorption treatment technology
Jemal Fito, Hanan Said, Sisay Feleke and Abebe Worku
Environmental Systems Research 2019, 8:25
Nowadays, freshwater quality deterioration and quantity depletion are increasing rapidly across the globe. In particular, fluoride-polluted groundwater is causing severe water supply shortages and public health problems. Hence, this study was designed to investigate the performance of activated carbon produced from the Catha edulis stem for the removal of fluoride from aqueous solution. A C. edulis stem sample was collected from dumping sites of Addis Ababa City, and its activation and carbonization processes were performed using H2SO4 and a high temperature of 600 °C. The experimental study used a 3³ full factorial design, with three factors at three levels, namely pH (2, 7 and 9), contact time (60, 90 and 120 min) and adsorbent dose (0.5 g, 1.0 g and 1.5 g in 100 mL), at an initial fluoride concentration of 30 mg/L, which resulted in 81 experimental runs in triplicate. The calculated maximum adsorption capacity of 18 mg/g was found under the Langmuir isotherm, whereas the Freundlich model (R² = 0.98) better fitted the experimental data, which indicated that the adsorption process was multilayer and cooperative. The maximum fluoride removal of 73% was observed at the optimum condition of an adsorbent dose of 1.5 g in 100 mL, a contact time of 60 min and pH 2, whereas a predicted fluoride removal of 69% was calculated under the same experimental condition. Fluoride removal was strongly and positively influenced by the adsorbent dose, whereas pH had a weak negative effect on removal. Generally, the performance of activated carbon for the removal of fluoride from aqueous solution is promising. 
The study also indicated that C. edulis activated carbon is a potential candidate for water treatment technology.
Pollutant removal
Today, freshwater consumption is increasing at an unprecedented rate due to fast industrial advancement, world population explosion, mechanized agriculture expansion and the progress of civilization (Fito et al. 2017a; Fito and Alemu 2019). Basically, in all nations across the globe, access to safe water and sanitation is recognized as a fundamental human right for all (Fito et al. 2019a). However, the environmental crisis, particularly water pollution, is a major contributor to water quality deterioration and quantity depletion, which is a global challenge to achieving the sustainable development goals. In line with this, about 2.0–2.7 billion people are expected to face severe water shortages by 2050 under current water consumption and management practices (UN-Water 2003). Furthermore, an estimated 4.8–5.7 billion people will live in potentially water-scarce areas by 2050 (UN-Water 2015). It has also been reported that water pollution poses serious human health and environmental risks in many developing countries, particularly those in sub-Saharan Africa, including Ethiopia (Nienie et al. 2017; UN Water 2018; Fito et al. 2019b). Fluoride pollution of drinking water is very common, particularly in areas where groundwater is the only source of water supply (Sailaja et al. 2015). Fluoride exists in rocks in different minerals such as sellaite (MgF2), fluorspar (CaF2), cryolite (Na3AlF6) and fluorapatite (Ca5(PO4)3F) (Mohapatra et al. 2009). High fluoride concentrations above 4 mg/L can cause various human health problems, such as dental fluorosis, skeletal fluorosis, decreased birth rates, lower intelligence quotient, thyroid gland injury and neurological disorders (Amalraj and Pius 2017; WHO 2017; Dehghani et al. 2018). 
Fluorosis is a common and endemic disease in more than 25 countries worldwide, in both developed and developing regions (Jagtap et al. 2012; Amalraj and Pius 2017; Bhattacharya 2017). Generally, epidemiological studies show that drinking water contributes about 60% of the total per capita daily fluoride intake (Jagtap et al. 2012). Fluoride concentrations > 30 mg/L in groundwater have been reported in many countries and regions, including China, India, Sri Lanka, West Africa (Ghana, Ivory Coast, Senegal), North Africa (Libya, Sudan, Tunisia), South Africa, the East African Rift Valley (Kenya, Uganda, Tanzania, Ethiopia and Rwanda), northern Mexico, and Central Argentina (Joshi et al. 2012). The highest fluoride concentration of 33 mg/L in drinking groundwater was reported in the Ethiopian section of the East African Rift Valley (Nigussie et al. 2007). However, recently published papers report average fluoride concentrations in the range of 5 to 26 mg/L (Mulugeta et al. 2015; Kebede et al. 2016). About 11–14 million people in the Ethiopian Rift Valley rely on fluoride-polluted groundwater (Mulugeta et al. 2015; Kebede et al. 2016). Treatment of fluoride-polluted water has been performed using several purification techniques. The most common fluoride removal methods for drinking water, namely chemical precipitation, nanofiltration, membrane processes, ion exchange, reverse osmosis, electrocoagulation and electrodialysis, are expensive and thus non-sustainable for developing countries (Fito et al. 2019c). Additionally, these methods require high energy, chemical, operational and capital inputs and advanced technologies. These shortcomings of the various methods of defluoridation prompted researchers to find alternative treatment methods. In comparison with the other aforementioned treatment techniques, adsorption is the most appropriate and widely used technique for fluoride removal from drinking water. 
Flexibility, simplicity of design, relative ease of operation, cost-effectiveness, environmental considerations and production of high water quality are the major advantages of the adsorption treatment technology (Lavecchia et al. 2012; Nure et al. 2017). However, commercially developed activated carbon, although a universal adsorbent, is very expensive, which makes adsorption unsuitable for developing countries. The development of cost-effective and efficient adsorbents is still under investigation. In recent years, inexpensive, easily available materials with high carbon content and low inorganic composition have been used as potential raw materials for the preparation of activated carbon. Special attention is given to locally developed adsorbents, which are promising raw materials for the removal of contaminants from water and wastewater (Asaithambi et al. 2018). Hence, many studies have focused on adsorbents for fluoride removal from drinking water, such as an alum manufacturing process (Nigussie et al. 2007), alumina in bauxite (Lavecchia et al. 2012), activated alumina (Mulugeta et al. 2015), CaCl2-modified Crocus sativus leaves (Dehghani et al. 2018), fired clay pots (Kofa et al. 2017), bark of Morinda tinctoria (Amalraj and Pius 2017), bark of the Vitex negundo plant (Suneetha et al. 2015), lanthanum-impregnated bauxite (Vardhan and Srimurali 2016), lapsi seed stone (Joshi et al. 2012), Al–Ce hybrid adsorbent (Liu et al. 2010) and iron ore (Kebede et al. 2016). However, researchers are still looking for practical and affordable adsorbents which can be applied at the commercial scale, leading to improved water supply quality in regions of fluoride-saturated groundwater. Generally, adsorption is considered an attractive, effective, convenient, easy to apply, simple to design, low-cost and environmentally compatible technology (Loganathan et al. 2013). 
Khat (Catha edulis), a dicotyledonous evergreen shrub of the family Celastraceae, is a plant cultivated for the stimulant effect of its leaves. In Ethiopia, C. edulis stems are disposed of everywhere, especially around the khat market areas, streets and dumpsters. Thus C. edulis stems are a major contributor to the huge amount of solid waste in larger Ethiopian cities. The use of C. edulis stems as a potential source of activated carbon was explored in this study. Employing this byproduct as an adsorbent for removal of fluoride from the water supply may thus be an environmentally friendly and practical waste conversion method (Fito et al. 2017b). However, very few studies have been conducted so far on the preparation of activated carbon from C. edulis. Therefore, this study was designed to investigate the performance of activated carbon produced from C. edulis stems for the removal of fluoride from aqueous solution. Additionally, the experimental study used a 3³ full factorial approach, with three levels of each of the three independent variables, namely pH (2, 7 and 9), contact time (60, 90 and 120 min) and adsorbent dose (0.5 g, 1.0 g and 1.5 g in 100 mL). There are many advantages of such factorial experimental designs, including low cost, a minimum number of experiments, reduced treatment time and the possibility to investigate the main and interaction effects separately (Nure et al. 2017).
Adsorbent development
Catha edulis stem samples were collected from dumping sites of the large Addis Ababa Merkato market, which is locally called "Khat terra". The sample was washed with distilled water and dried in an oven at 100 °C until completely dried. The stems were cut into pieces of 10 mm. Carbonization and activation of the C. edulis stem were performed using different activating agents, such as KOH, H2SO4 and H3PO4, at temperatures of 500 and 600 °C. 
Based on the prescreening, the activated carbon developed using H2SO4 at 600 °C was selected because it had a better fixed carbon composition and higher surface area than the others. The procedure for the selected activating agent started by soaking sample chips in concentrated H2SO4 for 24 h in order to achieve a good carbon structure and a large surface area. The soaked sample was washed with distilled water and completely dried at 110 °C in an oven for 3 h. Then, pyrolysis of the sample was carried out at a temperature of 600 °C for 60 min in a muffle furnace (Ali et al. 2012). The activated carbon was washed thoroughly with 3 N HCl and heated to 50 °C to solubilize the minerals. Additionally, the activated carbon was washed with 1% NaHCO3 to remove residual acid, and further washing was done using deionized water until the activated carbon reached pH 7; it was then dried in an oven at 105 °C for 12 h (Shivayogimath et al. 2008). Finally, the activated carbon was passed through a 0.5 mm sieve and sealed in a polyethylene bag, which was stored for further characterization and use, as indicated in Fig. 1 (Gupta and Suhas 2009).
Fig. 1: Different stages of the activated carbon production
Adsorbent characterization
Proximate analysis
In the proximate analysis of the C. edulis activated carbon (CAC), the moisture content, volatile matter, ash content and fixed carbon of the adsorbent were measured. The thermal drying method was applied for the proximate analysis of the adsorbent. The adsorbent sample of 1.0 g was weighed in triplicate and placed in a clean, dried and weighed crucible in a preheated oven at 110 °C. The crucibles with samples were placed in an oven at 110 °C for 2 h. Then, the sample was cooled in desiccators at ambient temperature and its weight was measured again. Hence, the difference between the initial and the final mass of the CAC was used to determine the moisture content using Eq. 1. 
$$ {\text{Mc}} = \frac{{{\text{W}}_{\text{w}} - {\text{W}}_{\text{d}} }}{{{\text{W}}_{\text{w}} }} \times 100\% $$
where Mc is the moisture content of the CAC in percentage, Ww is the weight of the sample and Wd is the weight of the sample after drying (Milne et al. 1990; Anisuzzaman et al. 2015).
Similarly, for the determination of the ash content of the CAC, 1.0 g of the CAC sample was put into a crucible and heated at 500 °C in a muffle furnace for 4 h and allowed to cool in a desiccator to room temperature. Finally, the CAC sample was weighed and the percentage of the ash content of the CAC was calculated using Eq. 2 (Anisuzzaman et al. 2015).
$$ AC = \frac{{W_{1} }}{{W_{2} }} \times 100\% $$
where AC is the ash content in percentage, W2 is the weight of the CAC sample and W1 is the weight of the ash.
For the determination of the volatile matter of the CAC, 1.0 g of the CAC sample was taken and placed in a pre-dried crucible and heated in a muffle furnace regulated at 800 °C for 8 min. Then, the crucible was cooled in desiccators and weighed (Aragaw 2016). Finally, the volatile matter of the CAC was calculated using Eq. 3.
$$ Vm = \frac{{W_{2} - W_{1} }}{{W_{2} }} \times 100\% $$
where Vm is the volatile matter in percentage, W2 is the weight of the CAC sample and W1 is the weight of the CAC after heating.
Fixed carbon content was determined by deducting the moisture, volatile and ash content percentages using Eq. 4 (Nwabanne and Igbokwe 2012).
$$ {\text{Fixed Carbon Content }}\left( \% \right) = 100\% - \left( {{\text{MC}}\% + {\text{VC}}\% + {\text{Ash}}\% } \right) $$
where MC% is the moisture content in percent, VC% is the volatile content in percent and Ash% is the ash content in percent (Nure et al. 2017).
Determination of pH
A 1.0 g sample of CAC was added to 100 mL of distilled water in a beaker of 500 mL volume in order to determine the pH value of the CAC adsorbent. 
Then, the beaker was covered with a watch glass, boiled on a hot plate for 5 min and set aside for a few minutes to settle the bulk of the CAC particles. The supernatant liquid was poured off and cooled to room temperature (25 °C). Finally, the pH value of the solution was measured using a pH meter (Hach HQD Field Case Model 58258-00) (Dada et al. 2012).
Bulk density
The mass to volume ratio of the CAC adsorbent was used to determine the bulk density. A measuring cylinder with a volume of 10 cm3 was weighed without and with the adsorbent sample. The difference between the initial weight of the container and the final weight was taken as the mass of the CAC adsorbent. Then, the bulk density was calculated from the relationship of mass to volume using Eq. 5,
$$ \text{Bulk Density}\ (\text{kg/m}^{3}) = \frac{M}{V} $$
where M is the mass of the sample (kg) and V is the volume of the adsorbent sample (m3) (Dada et al. 2012).
Fourier transform infrared spectroscopy
The CAC was characterized using Fourier transform infrared spectroscopy (FTIR). The CAC adsorbent was mixed with dry KBr in the ratio of 2:200 in mg and ground very well. The adsorbent sample was scanned over the wavenumber range of 400–4000 cm−1 using an FTIR spectrophotometer (PerkinElmer 65 FT-IR) (Dolphen and Thiravetyan 2011). This was designed to determine the availability of functional groups on the surface of the CAC.
Optimization of batch adsorption
In order to prepare a fluoride stock solution of 1000 mg/L, 2.21 g of NaF salt was dissolved in 1 L of deionized water, and the working solution (30 mg/L) was prepared by diluting the stock solution with deionized water. The batch adsorption experiments were carried out using 100 mL of adsorbate solution in 250 mL conical flasks.
The adsorption process was investigated at an initial fluoride concentration of 30 mg/L, based on the maximum fluoride concentration found in many places in Ethiopia, particularly in the Rift Valley. The levels of the independent variables were pH (2, 7 and 9), contact time (60, 90 and 120 min) and CAC dose (0.5 g, 1.0 g and 1.5 g in 100 mL) (Tezcan et al. 2015). The experiment was designed as a 3³ full factorial design with 81 runs including triplicates, performed at room temperature (Table 1). The pH of the solutions was adjusted using 0.1 N HNO3 and NaOH solutions. The flasks were agitated at 120 rpm using magnetic stirrers on an orbital shaker for the specified contact times. Then, the solution was left to settle for 2 min and the finer particles were removed using Whatman filter paper 42 (Hegazy et al. 2014; Nure et al. 2017). The filtrate was used for the analysis of fluoride removal. Finally, the reading of the electrode was taken after the value had stabilized for 15 min.
Table 1 Factors and their levels for the removal of fluoride using the full factorial design: pH, dosage (g/100 mL) and contact time (min)
Adsorption equilibrium isotherms
The adsorption isotherms were determined using adsorbent doses of 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5 and 4.0 g added to 100 mL of the fluoride solution (30 mg/L) at constant values of pH 2 and a contact time of 60 min. The equilibrium adsorption capacity (qe, mg/g) is the amount of adsorbate per amount of adsorbent at the equilibrium concentration (Ce), keeping the operational temperature constant (room temperature). The adsorption equilibrium was checked using the two commonly used Freundlich and Langmuir isotherm models (Nure et al. 2017). The adsorption capacity was calculated using Eq. 6.
$$ q_{\text{e}} = \left( \frac{C_{0} - C_{\text{e}}}{m} \right) V $$
where C0 and Ce are the initial and equilibrium fluoride concentrations (mg/L), m is the adsorbent mass (g) and V is the solution volume (L). The general Langmuir isotherm equation is given in Eq. 7.
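As an illustration, Eq. 6 can be evaluated directly. The sketch below is a hypothetical example, not measured data: the equilibrium concentration (8 mg/L) and the dose (1.5 g in 100 mL) are assumed values chosen only to show the arithmetic.

```python
def adsorption_capacity(c0, ce, mass_g, volume_l):
    """Eq. 6: equilibrium adsorption capacity qe (mg/g).

    c0, ce: initial and equilibrium concentrations (mg/L);
    mass_g: adsorbent mass (g); volume_l: solution volume (L).
    """
    return (c0 - ce) / mass_g * volume_l

# Hypothetical equilibrium point: 30 mg/L initial fluoride, 8 mg/L left at
# equilibrium, 1.5 g of CAC in 100 mL of solution.
qe = adsorption_capacity(30.0, 8.0, 1.5, 0.1)   # ~1.47 mg/g
```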
$$ q_{e} = \frac{Q_{O} C_{e} b}{1 + C_{e} b} $$
where Ce is the equilibrium concentration (mg/L), Qo is the maximum adsorption capacity (mg/g), qe is the adsorption capacity (mg/g) and b is the Langmuir isotherm constant (L/mg) (Tezcan et al. 2015). The Freundlich isotherm model is expressed in Eq. (8).
$$ q = K_{f} C_{e}^{1/n} $$
where q (mg/g) is the adsorbed amount of adsorbate per unit mass of adsorbent, Ce (mg/L) is the equilibrium concentration, and Kf ((mg/g)(L/mg)^{1/n}) and 1/n are the Freundlich constants representing adsorption capacity and adsorption intensity, respectively (Nure et al. 2017).
One-way analysis of variance (ANOVA) was used for comparison of the mean fluoride removals at the 95% confidence level. The removal of fluoride through the adsorption treatment was further examined by linear regression between the independent variables (pH, contact time and adsorbent dose) and the dependent variable (removal efficiency).
Characterization of activated carbon
The proximate analysis of the CAC activated with H2SO4 was performed and the results obtained are displayed in Table 2. Basically, the intention of the experiment was to determine the composition of fixed carbon and ash content. A high composition of fixed carbon reflects the quality of the adsorbent, improving the surface area and adsorption performance. A very low CAC moisture content of 4% was recorded, which is an indicator of its good quality. The volatile matter of the adsorbent was 25%, whereas the ash content was only 18%, which showed that the inorganic content was insignificant. In another study of activated carbon derived from holm oak, a high ash content of 70.6% was reported, much higher than that of the activated carbon produced from C. edulis (Tezcan et al. 2015). A high ash content indicates low quality of the raw material for adsorbent preparation.
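The proximate relations in Eqs. 1–4 can be collected in a short script. The 1.0 g sample weighings below are hypothetical values chosen to reproduce the composition reported in this study (moisture 4%, volatile matter 25%, ash 18%, fixed carbon 53%):

```python
def moisture_content(w_wet, w_dry):
    """Eq. 1: moisture content (%) from wet and dried sample weights."""
    return (w_wet - w_dry) / w_wet * 100.0

def ash_content(w_ash, w_sample):
    """Eq. 2: ash content (%) from ash weight and original sample weight."""
    return w_ash / w_sample * 100.0

def volatile_matter(w_sample, w_after):
    """Eq. 3: volatile matter (%) from weight loss on heating at 800 degrees C."""
    return (w_sample - w_after) / w_sample * 100.0

def fixed_carbon(mc, vm, ash):
    """Eq. 4: fixed carbon (%) by difference."""
    return 100.0 - (mc + vm + ash)

# Hypothetical 1.0 g weighings reproducing the reported composition:
mc = moisture_content(1.00, 0.96)   # 4.0 %
vm = volatile_matter(1.00, 0.75)    # 25.0 %
ash = ash_content(0.18, 1.00)       # 18.0 %
fc = fixed_carbon(mc, vm, ash)      # 53.0 %
```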
This is because a large ash content is not favorable for activated carbon development and results in a low carbon content, which in turn can lower the adsorption capacity and efficiency. Hence, adsorbent development with low ash content is one of the critical concerns of scientists and researchers working on water and wastewater purification through adsorption treatment technology.
Table 2 Proximate analysis of the CAC adsorbent after treatment with sulfuric acid (mass in %): volatile matter 25, ash content 18, fixed carbon^a 53 (^a calculated by difference)
Another important parameter was the fixed carbon, which was 53%, a significant amount for locally prepared activated carbon. In another study of locally developed activated carbon, a moisture content of 4.0%, ash of 36.0%, fixed carbon of 42.2% and volatile matter of 16.8% were reported, which is in good agreement with the current study (Nure et al. 2017). Generally, the prepared activated carbon was a highly carbonaceous material and a good precursor for adsorbent development, suitable for contributing to water and wastewater purification at the industrial level.
The bulk density of the CAC developed using H2SO4 at the high temperature of 600 °C and the specific particle size of nearly 0.5 mm was determined. On average, a bulk density of 0.46 g/cm3 was obtained. This value lies in the acceptable range for carbonaceous materials of 0.35 to 1.2 g/cm3 reported by many studies (Sunday et al. 2018). Normally, a high bulk density is considered a mark of good quality in adsorbent materials. Hence, the CAC showed good material characteristics in terms of bulk density, with a value in the range recommended for an ideal adsorbent. Finally, the pH value of the CAC activated by sulfuric acid was found to be nearly neutral.
In the FTIR analysis of the C. edulis activated carbon, three major peaks were observed, as shown in Table 3.
The first peak, found in the wavenumber range of 3300–3600 cm−1, indicates the presence of the hydroxyl functional group, which could be associated with an organic acid, alcohol or phenol. The second peak, found in the range of 1500–1600 cm−1, is related to the stretching of the carboxylate bond. These two broad and long peaks observed on the surface of the adsorbent were probably associated with the chemical activation process. This particular surface modification might be attributed to hydrogen ions released from the sulfuric acid and could have resulted in the development of the hydroxide functional group on the surface of the adsorbent. The last weak peak, found at 1425 cm−1, was most likely associated with C–C stretching of an aromatic compound.
Table 3 FTIR analysis: peaks observed and corresponding functional groups (wavenumber in cm−1; nature of the peaks before and after activation; assignments include O–H, C–H stretching, the carboxylate bond, aromatic C–C stretching and C=CH stretching)
Fluoride removal analyses
The results of the fluoride removal from aqueous solution using the factorial experimental design and the corresponding predicted values are shown in Table 4. The adsorption efficiency increased by 16% as the adsorbent dose shifted from the low value (0.5 g) to the high value (1.5 g in 100 mL), but the efficiency decreased by the same amount as the pH of the solution shifted from 2 to 9, keeping the other factors constant. However, the effect of raising the contact time from 60 to 120 min on the adsorption performance was insignificant. This indicated that adsorption equilibrium was achieved at the lower contact time of the adsorption process.
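The ±16% figures above are main effects in the factorial sense: the difference between the mean removal at a factor's high and low levels, averaged over the other factors. A minimal sketch with hypothetical level means (the 72% and 56% values are illustrative, not the study's run averages):

```python
def main_effect(high_level_mean, low_level_mean):
    """Main effect of a factor in a factorial design: difference between
    the mean removal at its high and low levels, averaged over the runs
    at all combinations of the other factors."""
    return high_level_mean - low_level_mean

# Hypothetical mean removals (%) consistent with the reported +16% dose
# effect and -16% pH effect:
dose_effect = main_effect(72.0, 56.0)   # +16 -> higher dose raises removal
ph_effect = main_effect(56.0, 72.0)     # -16 -> higher pH lowers removal
```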
The overall effect of the independent factors on adsorption performance was investigated using the full factorial approach. The results obtained accounted for both the main and interaction effects, which represented the reality. On this basis, the maximum fluoride removal of 73% was observed at the optimum condition of an adsorbent dose of 1.5 g in 100 mL, a contact time of 60 min and pH 2 at the fixed initial fluoride concentration and room temperature; the predicted value of 69% was calculated under the same experimental condition. Across all the adsorption experiments, the minimum removal of 40.7% was recorded at an adsorbent dose of 0.5 g, a contact time of 60 min and a solution pH of 9. In other studies, fluoride removal in the range of 50–99% at an initial fluoride concentration of 1–12 mg/L with activated carbon from the bark of Vitex negundo (Suneetha et al. 2015); 76% fluoride removal at an initial concentration of 25 mg/L using lapsi seed stone activated carbon (Sahira et al. 2013); an adsorption capacity of 26 mg/g with the bark of Morinda tinctoria as an adsorbent (Amalraj and Pius 2017); fluoride removal of 85% at an initial concentration of 6.5 mg/L with a Crocus sativus leaves adsorbent (Dehghani et al. 2018); and a fluoride removal efficiency of 81% at an initial concentration of 1–5 mg/L with a modified sludge adsorbent (Li et al. 2018) were reported.
Table 4 Full factorial experimental matrix for fluoride removal displayed in percentage (experiment #, dose (g/100 mL), actual %, predicted %)
The main and interaction effects on adsorption
In this regression analysis, the higher-order interaction terms, such as the three-way interaction, had no significant impact on the adsorption performance. These terms were excluded from the regression analyses and from the model equation. The impacts of the main and two-way interaction effects on the adsorption were investigated at the 95% confidence level (Table 5).
Basically, there are two important concepts in linear regression models: the coefficient of determination (R2), which describes the degree of variability explained by the regression model, and the p value, which helps to check whether the main and interaction effects are statistically significant for the adsorption performance.
Table 5 Application of regression and ANOVA tests for fluoride removal (coefficients, t statistics, p values and regression statistics: multiple R, R square, adjusted R square, significance F)
Based on this principle, only two main effects were found to influence the adsorption treatment. Accordingly, only the coefficients b1 and b2 were statistically significant in this adsorption process and were incorporated into the model Eq. (9). The R2 value for the fluoride removal from aqueous solution was 92%, meaning that 92% of the variability in fluoride removal was explained by the linear regression model. This was a good indicator of the goodness of fit of the regression model. The statistically significant terms and their values were incorporated into the regression Eq. (9).
$$ Y_{\text{fluoride}} = 64.70 + 12.21X_{1} - 3.4X_{2} $$
where Yfluoride is the predicted value of the fluoride removal in percentage, X1 is the adsorbent dose and X2 is the pH of the solution; the other main and interaction effects were not significant and were removed from this equation. The regression coefficient of the adsorbent dose was positive and enhanced the adsorption performance, whereas increasing the pH value of the solution suppressed the adsorption performance, as reflected by the negative coefficient in the model equation. The degree of impact on the adsorption process was X1 ≫ X2, which basically depends on the magnitudes and signs of the coefficients in the model equation. Generally, the adsorption performance was strongly and positively influenced by the adsorbent dose but weakly and negatively influenced by the pH of the solution.
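Eq. 9 can be evaluated once the scaling of X1 and X2 is fixed. The sketch below assumes coded factor levels (−1, 0, +1), the usual convention in factorial designs; this coding is an assumption, not stated in the text:

```python
def predicted_removal(x1, x2):
    """Eq. 9: predicted fluoride removal (%); x1 = adsorbent dose level,
    x2 = pH level. Coded (-1, 0, +1) levels are assumed here."""
    return 64.70 + 12.21 * x1 - 3.4 * x2

center = predicted_removal(0, 0)              # 64.70 % at the design center
high_dose_low_ph = predicted_removal(1, -1)   # dose at high level, pH at low level
```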
Finally, the one-way ANOVA test showed that there were statistically significant differences among the mean fluoride concentrations of the adsorption effluent after the treatment.
Adsorption isotherms
The adsorption isotherm data were collected as the adsorption capacity (qe) and the equilibrium concentration of fluoride (Ce). The adsorption isotherms were investigated using the most commonly used adsorption models, namely the Langmuir and Freundlich isotherms. The linearized form of the Langmuir equation, Eq. 10, was used.
$$ \frac{1}{q_{e}} = \frac{1}{Q_{O}} + \frac{1}{Q_{O} b C_{e}} $$
The Langmuir adsorption isotherm is based on the assumption that each adsorbent surface site is capable of adsorbing one adsorbate species at a time, so that the adsorbed layer is one molecule thick (a monolayer). Once the adsorbate occupies the sites, no further adsorption is expected. The Langmuir isotherm was determined by plotting 1/qe versus 1/Ce, as indicated in Fig. 2. Based on this plot, the maximum adsorption capacity (Qo) and the Langmuir isotherm constant (b) were calculated from the intercept and slope of the Langmuir equation, respectively.
Fig. 2 Langmuir isotherm plot for fluoride removal using CAC
The Langmuir isotherm adsorption constant (b), which is related to the free energy of adsorption, was found to be 0.05 L/mg, whereas the maximum adsorption capacity (Qo) was 18 mg/g. Even though the high value of R2 (0.96) indicated a satisfactory fit of the Langmuir isotherm to the experimental data, the maximum adsorption capacity deviated considerably from the observed value (33 mg/g). Therefore, the model could not describe the mechanism of the adsorption process well under the study conditions. Based on the outcome of the Langmuir isotherm, it is difficult to make decisions on the prediction and improvement of the adsorbing surface chemistry.
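A quick numerical check of the fitted Langmuir constants, including the dimensionless separation factor RL = 1/(1 + bC0), can be sketched as follows; b = 0.05 L/mg, Qo = 18 mg/g and C0 = 30 mg/L are the values from this study, while the Ce = 10 mg/L evaluation point is an arbitrary illustration:

```python
def langmuir_q(ce, q0=18.0, b=0.05):
    """Langmuir prediction (Eq. 7) with the constants fitted in this study."""
    return q0 * b * ce / (1.0 + b * ce)

def separation_factor(b=0.05, c0=30.0):
    """Dimensionless Langmuir separation factor RL = 1 / (1 + b*C0)."""
    return 1.0 / (1.0 + b * c0)

q_at_10 = langmuir_q(10.0)   # 6.0 mg/g predicted at Ce = 10 mg/L
rl = separation_factor()     # 0.4, i.e. favorable since 0 < RL < 1
```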
Therefore, the assumption of a monolayer adsorption mechanism did not describe the phenomena well, indicating the need to check other assumptions. Nevertheless, the dimensionless parameter called the separation factor (RL) of the Langmuir adsorption process was also checked. The shape of the adsorption isotherm, as determined by the value of RL, identifies whether the adsorption system is favorable or unfavorable. This value can be calculated using Eq. 11,
$$ R_{\text{L}} = \frac{1}{1 + bC_{0}} $$
where RL is the dimensionless separation factor, C0 is the initial fluoride concentration (mg/L) and b is the Langmuir constant (L/mg). The parameter RL indicates the shape of the isotherm: RL > 1 (unfavorable), RL = 1 (linear), 0 < RL < 1 (favorable) and RL = 0 (irreversible). Based on the above equation, the calculated value of RL was 0.4. This value suggested that the adsorption process falls in the favorable category.
Freundlich isotherm
Similar to the Langmuir isotherm, the experimental data were checked against the linearized form of the Freundlich isotherm, Eq. 12. The Freundlich isotherm plot of the linear equation is shown in Fig. 3. The adsorption isotherm is described by the relationship between the bulk aqueous concentration of fluoride and the amount adsorbed on the surface of the adsorbent.
Fig. 3 Freundlich isotherm plot for fluoride removal from aqueous solution using CAC
$$ \log q = \log K_{f} + \frac{1}{n}\log C_{e} $$
The Freundlich isotherm is based on the assumption of a heterogeneous surface composed of different adsorption sites. It does not forecast saturation of the adsorbent by the adsorbate, which implies that multilayer adsorption is expected under such conditions. The Freundlich isotherm graph was drawn as log qe versus log Ce.
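The slope/intercept extraction from the log-log plot can be sketched as follows. The equilibrium data here are synthetic, generated exactly from the constants reported in this study (Kf = 0.094, n = 0.61), so the least-squares fit simply recovers them; real data would scatter around the line:

```python
import math

# Synthetic equilibrium data on the exact Freundlich curve q = Kf * Ce^(1/n):
KF, N = 0.094, 0.61
ce = [2.0, 5.0, 10.0, 15.0, 20.0]
q = [KF * c ** (1.0 / N) for c in ce]

# Linearized form: log q = log Kf + (1/n) * log Ce
x = [math.log10(c) for c in ce]
y = [math.log10(v) for v in q]

# Ordinary least-squares slope and intercept without external libraries:
n_pts = len(x)
mx, my = sum(x) / n_pts, sum(y) / n_pts
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

n_fit = 1.0 / slope          # Freundlich intensity n, from the slope
kf_fit = 10.0 ** intercept   # Freundlich capacity Kf, from the intercept
```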
The values of Kf and n were obtained from the intercept and slope of the linearized Freundlich equation, respectively. Specifically, n expresses the degree of nonlinearity between the solution concentration and the adsorption capacity, in addition to its relationship with the favorability of the adsorption process, whereas Kf ((mg/g)(L/mg)^{1/n}) is an indicator of adsorption capacity. Based on the Freundlich isotherm, a Freundlich intensity (n) of 0.61 and an adsorption capacity (Kf) of 0.094 were calculated; these basically depend on the nature of the adsorbate and the temperature of the system. A value of n = 1 indicates that the equilibrium distribution between the solid and liquid phases is independent of the concentration, whereas values of 1/n > 1 or 1/n < 1 indicate cooperative and normal adsorption, respectively (Nure et al. 2017). Moreover, the degree of surface heterogeneity increases sharply as the value of 1/n decreases (Liu et al. 2015). Specifically, the Freundlich intensity (n) value indicated that the adsorption process was cooperative, as reported in several recently published journal articles (Nure et al. 2017). Generally, the Freundlich isotherm depicted the adsorption process as heterogeneous with multilayer surfaces, suggesting that the binding sites were not equivalent but formed different adsorption layers. From the Freundlich equation, an R2 of 0.98 was found, which can be considered a good fit of the model to the adsorption mechanism. In contrast, the Langmuir maximum adsorption capacity showed a weak association between the calculated and observed adsorption capacities, which was further supported by its lower R2 value compared to the Freundlich isotherm.
Conclusion
This study investigated the removal of fluoride ions from aqueous solution using chemically (sulfuric acid) activated C. edulis stems in batch adsorption mode.
Three factors, namely the adsorbent dose, contact time and pH of the solution, were used in the factorial experimental design of the adsorption study. The maximum fluoride removal of 73% was recorded at the optimum condition of an adsorbent dose of 1.5 g in 100 mL, a contact time of 60 min and pH 2, whereas the model-based predicted value of the fluoride removal under the same experimental condition was 69%. Fluoride removal was strongly and positively influenced by the adsorbent dose, whereas the solution pH had a weak negative impact on the removal. Removal of fluoride was effectively achieved using chemically modified C. edulis. These results showed significant fluoride removal under the specific study conditions. The experiment-based maximum adsorption capacity was found to be 33.3 mg/g, whereas the predicted value from the Langmuir isotherm study was 18 mg/g. Catha edulis was found to be a good precursor for activated carbon with a good adsorption capacity. The Freundlich isotherm (R2 = 0.98) fitted the experimental data better, which indicated that the adsorption process was multilayer and cooperative. Generally, the performance of the activated carbon for the removal of fluoride from aqueous solution is promising, which would make the CAC a potential candidate for use in water treatment technology. However, further study of the treatment technology is expected with respect to adsorbent recovery, adsorption kinetics and thermodynamics.
ANOVA: analysis of variance
CAC: Catha edulis activated carbon
FTIR: Fourier transform infrared spectroscopy
UN: United Nations
We would like to thank the Ethiopian Road Authority for funding this research work and Addis Ababa Science and Technology University for supervising the financial support given by the authority. This research work was supported by the Ethiopian Road Authority. JF contributed to the experimental design, experimental supervision, statistical analysis, and manuscript writing and editing, whereas HS, SF and AW mainly contributed to data collection and manuscript editing.
All authors read and approved the final manuscript. This part is not applicable for this article. The authors have read and understood the policy on competing interests and declare that there are no competing interests among the authors and the fund provider for this work.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Department of Environmental Engineering, Addis Ababa Science and Technology University, P.O. Box 16417, Addis Ababa, Ethiopia
Water and Energy Design and Supervision Works Sector, Ethiopian Construction Design and Supervision Works Corporation, Addis Ababa, Ethiopia
Ethiopian Agriculture Research Council Secretariat, P.O. Box 8115, Addis Ababa, Ethiopia
Ali I, Asim M, Khan TA (2012) Low cost adsorbents for the removal of organic pollutants from wastewater. J Environ Manage 113:170–183. https://doi.org/10.1016/j.jenvman.2012.08.028
Amalraj A, Pius A (2017) Removal of fluoride from drinking water using aluminum hydroxide coated activated carbon prepared from bark of Morinda tinctoria. Appl Water Sci 7:2653–2665. https://doi.org/10.1007/s13201-016-0479-z
Anisuzzaman SM, Joseph CG, Taufiq-Yap YH et al (2015) Modification of commercial activated carbon for the removal of 2,4-dichlorophenol from simulated wastewater. J King Saud Univ Sci 27:318–330. https://doi.org/10.1016/j.jksus.2015.01.002
Aragaw TA (2016) Proximate analysis of cane bagasse and synthesizing activated carbon: emphasis on material balance.
J Environ Treat Tech 4:102–110
Asaithambi P, Beyene D, Raman A et al (2018) Removal of pollutants with determination of power consumption from landfill leachate wastewater using an electrocoagulation process: optimization using response surface methodology (RSM). Appl Water Sci 8:1–12. https://doi.org/10.1007/s13201-018-0715-9
Bhattacharya S (2017) Application of nanostructured materials in fluoride removal from contaminated groundwater. Eur Water 58:87–93
Dada AO, Olalekan AP, Olatunya AM, Dada O (2012) Langmuir, Freundlich, Temkin and Dubinin–Radushkevich isotherms studies of equilibrium sorption of Zn2+ unto phosphoric acid modified rice husk. IOSR J Appl Chem 3:38–45. https://doi.org/10.9790/5736-0313845
Dehghani MH, Farhang M, Afsharnia M, Mckay G (2018) Adsorptive removal of fluoride from water by activated carbon derived from CaCl2-modified Crocus sativus leaves: equilibrium adsorption isotherms, optimization, and influence of anions. Chem Eng Commun. https://doi.org/10.1080/00986445.2018.1423969
Dolphen R, Thiravetyan P (2011) Adsorption of melanoidins by chitin nanofibers. Chem Eng J 166:890–895
Fito J, Alemu K (2019) Microalgae–bacteria consortium treatment technology for municipal wastewater management. Nanotechnol Environ Eng 4:1–9. https://doi.org/10.1007/s41204-018-0050-2
Fito J, Tefera N, Demeku S, Kloos H (2017a) Water footprint as an emerging environmental tool for assessing sustainable water use of the bioethanol distillery at Metahara sugarcane farm, Oromiya Region, Ethiopia. Water Conserv Sci Eng 2:165–176. https://doi.org/10.1007/s41101-017-0038-y
Fito J, Tefera N, Van Hulle SWH (2017b) Adsorption of distillery spent wash on activated bagasse fly ash: kinetics and thermodynamics. J Environ Chem Eng 5:5381–5388.
https://doi.org/10.1016/j.jece.2017.10.009
Fito J, Bultossa G, Kloos H (2019a) Physicochemical and heavy metal constituents of the groundwater quality in Haramaya Woreda, Oromia Regional State, Ethiopia. Int J Energy Water Resour 3:23–32. https://doi.org/10.1007/s42108-019-00009-9
Fito J, Tefera N, Van Hulle SWH (2019b) Physicochemical properties of the sugar industry and ethanol distillery wastewater and their impact on the environment. Sugar Tech 21:265–277. https://doi.org/10.1007/s12355-018-0633-z
Fito J, Tefera N, Van Hulle WH (2019c) An integrated treatment technology for blended wastewater of the sugar industry and ethanol distillery. Environ Process. https://doi.org/10.1007/s40710-019-00366-x
Gupta VK, Suhas (2009) Application of low-cost adsorbents for dye removal—a review. J Environ Manage 90:2313–2342. https://doi.org/10.1016/j.jenvman.2008.11.017
Hegazy AK, Abdel-Ghani NT, El-Chaghaby GA (2014) Adsorption of phenol onto activated carbon from seaweed: determination of the optimal experimental parameters using factorial design. Appl Water Sci 42:952–956. https://doi.org/10.1016/j.jtice.2011.04.003
Jagtap S, Yenkie MK, Labhsetwar N, Rayalu S (2012) Fluoride in drinking water and defluoridation of water. Chem Rev 112:2454–2466
Joshi S, Pradhananga MA, Pradhananga RR (2012) Adsorption of fluoride ion onto zirconyl-impregnated activated carbon prepared from lapsi seed stone. J Nepal Chem Soc 30:13–23
Kebede B, Beyene A, Fufa F (2016) Experimental evaluation of sorptive removal of fluoride from drinking water using iron ore. Appl Water Sci 6:57–65.
https://doi.org/10.1007/s13201-014-0210-x
Kofa GP, Gomdje VH, Telegang C, Koungou SN (2017) Removal of fluoride from water by adsorption onto fired clay pots: kinetics and equilibrium studies. J Appl Chem 2017:1–7. https://doi.org/10.1155/2017/6254683
Lavecchia R, Medici F, Piga L, Rinaldi G (2012) Fluoride removal from water by adsorption on a high alumina content bauxite. Chem Eng Trans 26:225–230
Li Y, Yang S, Jiang Q et al (2018) The adsorptive removal of fluoride from aqueous solution by modified sludge: optimization using response surface methodology. Int J Environ Res Public Health 15:1–14. https://doi.org/10.3390/ijerph15040826
Liu H, Deng S, Li Z et al (2010) Preparation of Al–Ce hybrid adsorbent and its application for defluoridation of drinking water. J Hazard Mater 179:424–430. https://doi.org/10.1016/j.jhazmat.2010.03.021
Liu G, Xiao J, Ren H, Zhong H (2015) Adsorption thermodynamics and kinetics of N,N'-diisopropoxypropyl-N″,N‴-oxydiethylenedicarbonyl bis(thiourea) on chalcopyrite surfaces. J Ind Eng Chem 21:1306–1313. https://doi.org/10.1016/j.jiec.2014.06.003
Loganathan P, Vigneswaran S, Kandasamy J, Naidu R (2013) Defluoridation of drinking water using adsorption processes. J Hazard Mater 248–249:1–19. https://doi.org/10.1016/j.jhazmat.2012.12.043
Milne T, Brennan A, Glenn B (1990) Sourcebook of methods of analysis for biomass conversion and biomass conversion processes. Elsevier, London
Mohapatra M, Anand S, Mishra BK et al (2009) Review of fluoride removal from drinking water. J Environ Manage 91:67–77.
https://doi.org/10.1016/j.jenvman.2009.08.015
Mulugeta E, Zewge F, Johnson CA, Chandravanshi BS (2015) Aluminium hydro(oxide)-based (AO) adsorbent for defluoridation of drinking water: optimisation, performance comparison, and field testing. Water SA 41:121–128
Nienie AB, Sivalingam P, Laffite A et al (2017) Seasonal variability of water quality by physicochemical indexes and traceable metals in suburban area in Kikwit, Democratic Republic of the Congo. Int Soil Water Conserv Res 5:158–165. https://doi.org/10.1016/j.iswcr.2017.04.004
Nigussie W, Zewge F, Chandravanshi BS (2007) Removal of excess fluoride from water using waste residue from alum manufacturing process. J Hazard Mater 147:954–963. https://doi.org/10.1016/j.jhazmat.2007.01.126
Nure JF, Shibeshi NT, Asfaw SL et al (2017) COD and colour removal from molasses spent wash using activated carbon produced from bagasse fly ash of Matahara sugar factory, Oromiya region, Ethiopia. Water SA 43:470–479. https://doi.org/10.4314/wsa.v43i3.12
Nwabanne JT, Igbokwe P (2012) Application of response surface methodology for preparation of activated carbon from palmyra palm nut. N Y Sci J 5:18–25
Sahira J, Mandira A, Prasad PB, Ram PR (2013) Effects of activating agents on the activated carbons prepared from Lapsi seed stone. Res J Chem Sci 3:19–24
Sailaja BK, Bhagawan D, Himabindu V, Cherukuri J (2015) Removal of fluoride from drinking water by adsorption onto activated alumina and activated carbon. Int J Eng Res Appl 5:19–24
Shivayogimath CB, Hiremath MN, Lokeshappa B (2008) Preparation and characterization of granular activated carbon from Acacia nilotica stalk by KOH activation.
Int J Eng Sci Innov Technol 3:201–207
Sunday NJ, Okechukwu NS, Elom N et al (2018) Quantitative characterization of activated carbon from cow, donkey, chicken and horse bones from Ezzangbo in Ebonyi State, Nigeria. Am J Appl Chem 6:169–174. https://doi.org/10.11648/j.ajac.20180605.12
Suneetha M, Sundar BS, Ravindhranath K (2015) Removal of fluoride from polluted waters using active carbon derived from barks of Vitex negundo plant. J Anal Sci Technol 6:1–19. https://doi.org/10.1186/s40543-014-0042-1
Tezcan U, Ates F, Erginel N et al (2015) Adsorption of disperse orange 30 dye onto activated carbon derived from Holm Oak (Quercus ilex) acorns: a 3k factorial design and analysis. J Environ Manage 155:89–96. https://doi.org/10.1016/j.jenvman.2015.03.004
UN-Water (2003) Water for people, water for life. The United Nations World Water Development Report 36. https://doi.org/10.1017/cbo9781107415324.004
UN-Water (2015) Water for a sustainable world. The United Nations World Water Development Report 2015
UN-Water (2018) Nature-based solutions for water. The United Nations World Water Development Report 2018
Vardhan CMV, Srimurali M (2016) Removal of fluoride from water using a novel sorbent lanthanum-impregnated bauxite. SpringerPlus 5:1–18. https://doi.org/10.1186/s40064-016-3112-6
World Health Organisation (2017) Guidelines for drinking water quality. World Health Organisation, Geneva
Adaptive training with full-body movements to reduce bradykinesia in persons with Parkinson's disease: a pilot study Susanna Summa1, Angelo Basteris1,2, Enrico Betti3 & Vittorio Sanguineti1 Bradykinesia (slow movements) is a common symptom of Parkinson's disease (PD) and results in reduced mobility and postural instability. The objective of this study is to develop and demonstrate a technology-assisted exercise protocol that is specifically aimed at reducing bradykinesia. Seven persons with PD participated in this study. They were required to perform whole body reaching movements toward targets placed in different directions and at different elevations. Movements were recorded by a Microsoft Kinect movement sensor and used to control a human-like avatar, which was continuously displayed on a screen placed in front of the subjects. After completion of each movement, subjects received a 0-100 score that was inversely related to movement time. Target distance in the next movements was automatically adjusted in order to keep the score around a pre-specified target value. In this way, subjects always exercised with the largest movement amplitude they could sustain. The training protocol was organised into blocks of 45 movements toward targets placed in three different directions and at three different elevations (a total of nine targets). Each training session included a finite number of blocks, fitted within a fixed 40-minute duration. The whole protocol included a total of 10 sessions (approximately two sessions/week). As the primary outcome measure we took the absolute average acceleration. Various aspects of movement performance were taken as secondary outcome measures, namely accuracy (undershoot error), path curvature, movement time, and average speed. Throughout sessions, we observed an increase of the absolute average acceleration and speed and decreased undershoot error and movement time.
Exercise also significantly affected the relationship between target elevation and both speed and acceleration - the improvement was greater at higher elevations. The device and the protocol were well accepted by subjects and appeared safe and easy to use. Our preliminary results point to a training-induced reduction of bradykinesia. Bradykinesia (slow movements) is a common symptom in Parkinson's disease (PD) [1] and has important consequences for daily life activities. As regards the upper limb, it may cause difficulties in dexterous activities such as using work or kitchen tools. It may also contribute to impaired coordination in activities like sport or dressing. It has been suggested [2] that slow movements are a consequence of a reduced accuracy, which would lead to multiple corrections [3] and therefore to a greater movement time. However, this view is difficult to reconcile with previous observations [4] that movements in PD are characterized by prolonged acceleration phases, not prolonged decelerations as would be expected from multiple corrections. Problems with energy expenditure have often been associated with bradykinesia in PD. Protas et al. [5] and Schenkman et al. [6] suggested that individuals with PD spend about 20% more energy than healthy people during movements, which points to poor management of energy expenditure in terms of economy of movement. Canning et al. [7] and Stanley et al. [8] showed that, during motor exercise, the attainment of peak aerobic power occurs at a significantly lower exercise level with respect to healthy persons, thus indicating poor metabolic efficiency. Slower movements in PD have also been associated with reduced muscle strength and with an inability to generate rapid muscle contraction [9]. However, muscle weakness was not consistently observed in all muscles in persons with bradykinesia. Alteration in sensory processing is another possible explanation.
Persons with PD have an abnormal regulation of proprioception; for instance, lack of vision affects the speed/accuracy trade-off more than in controls [10]. However, it is unclear whether these problems arise from altered peripheral feedback or from abnormal central processing [11]. All the above explanations are hard to reconcile with the observation that persons with bradykinesia may indeed perform fast movements, e.g. to escape from danger (paradoxical kinesia) [12]. Also, persons with bradykinesia can exceed their preferred moving speed while maintaining a movement accuracy comparable to that of healthy subjects [13]. This suggests that bradykinesia in persons with PD is not a mere compensatory mechanism for impaired motor control or defective sensory processing. Rather, it may be a consequence of an implicit decision to select movements that have a lower energy expenditure or are characterized by lower force levels. Consistent with the emerging view of the role of the basal ganglia as action 'energizers' - see [14] for a review - Mazzoni et al. [15] suggested that dopaminergic pathways from the substantia nigra to the striatum may regulate the likelihood of moving at higher speeds. Rehabilitation may have an important impact on the quality of life of persons with PD. Physical exercise might help to reduce the motor symptoms - especially bradykinesia and balance problems - while keeping the levodopa (LD) dose as low as possible. Also, moderate endurance exercises have been reported to augment the efficacy of LD therapy [16]. A recent review [17] compared the effectiveness of physiotherapy intervention in persons with PD. The study took into consideration a number of common treatments (i.e. general physiotherapy, exercise, treadmill training, cueing, dance, or martial arts). Short-term (i.e.
<3 months) benefits of physiotherapy were observed in most outcomes, but were significant only for speed, two- or six-minute walk test, Freezing of Gait questionnaire, Timed Up & Go, Functional Reach Test, Berg Balance Scale, and UPDRS. While many treatments resulted in improved performance, no significant difference was observed between treatments, at least for the outcome measures that were taken into consideration. Recently, a technique originally developed for speech rehabilitation (Lee Silverman's Voice Therapy, LSVT) has been extended to specifically address motor bradykinesia (Training BIG, later known as LSVT BIG); see [18]. This technique is based on intensive full-body exercise, specifically aimed at increasing the sensory awareness of the widest range of motion that patients can achieve and encouraging the maximum speed. Farley et al. [18] related this technique to the speed-amplitude relation [19] - speed increases with movement amplitude - and observed that training of large amplitude movements involving the whole body induces a modification of this relation - in high-amplitude movements the speed improves more. In a comparative study [20], the LSVT BIG technique resulted in a greater improvement in motor performance with respect to either Nordic walking or non-supervised home exercise. Here we propose a novel approach for reducing bradykinesia, based on virtual reality, exergaming [21] and the low-cost natural user interface Microsoft Kinect. A few studies have tested the safety and feasibility of using this device with persons with Parkinson's disease. Pompeu et al. [22] used a commercial game suite - Microsoft Kinect Adventures™ - to engage the player in a variety of mini games that exploit full body motion. Galna et al. [23] used an exercise protocol specifically designed to train dynamic postural control.
Taking inspiration from the LSVT BIG technique, we designed an exercise protocol that relies on whole body reaching movements with different amplitudes and directions, to induce subjects to increase their movement speed and its sensitivity to movement amplitude. Movements were recorded through the Kinect device and displayed on a screen by an animated avatar in a mirror view, which provided subjects with knowledge of their performance. Depending on the measured movement time, an adaptive regulator continuously adjusted the distance of the targets to keep movement time close to a target value established by the therapist. In this way, the exercise was automatically and continuously adapted to the individual's degree of impairment. Experimental set-up The experimental apparatus included a video projector, displaying a virtual reality environment on a 2 m × 2 m screen. Subjects were required to stand in front of the screen, within a 3 m distance. A markerless motion capture sensor (Microsoft Kinect), placed below the screen, recorded the subjects' full-body movements in 3D space at a 30 Hz sampling rate. The device has a limited accuracy - 1 cm range, see [24] - but allows the trajectories of 'virtual' markers to be reconstructed in real time. Therefore, it can provide participants with immediate, continuous visual feedback of their movements. In our experiment, the reconstructed trajectories of 13 virtual 'markers' (one head marker, plus shoulder, elbow, hand, hip, knee, and foot, respectively left and right) were used to animate a ten-segment avatar. Estimates of the markers' spatial coordinates from the motion sensor data were obtained through the OpenNI (PrimeSense, Tel-Aviv, Israel, [25]) Application Program Interface (API). A specifically developed software application, based on the H3DAPI (SenseGraphics, Sweden, [26]) software environment and Python, was used to implement the task and the experimental protocol (see below).
The proposed exercise protocol involved full-body movements. While standing, subjects were required to reach one of nine targets, presented in random order. The movement was considered terminated when the hand first entered the target. Therefore, participants were not required to stop their movement when the target was reached. Target positions were defined in terms of a subject-centered reference frame, as points on the surface of two spheres, centered on each shoulder, at elevation angles of −45° (below shoulder, 'low'), 0° (shoulder level, 'middle') and 45° (above shoulder, 'high'). The targets' horizontal direction (azimuth) with respect to the ipsilateral shoulder marker was 30° (right), 150° (left) and frontal (intersection of the spheres with the sagittal plane); see Figure 1 for details. The radius of the spheres - i.e. the distance between the targets and the shoulder (target distance, TD) - was initially set to 150% of the subject's arm length, and was automatically adjusted during the exercise (see below), within the 50-150% range (of arm length). At the beginning of each session, the difficulty level was reset to its initial value. All movements started from a neutral posture in which both arms were extended downward, so that the hands were placed slightly below the pelvis. Target arrangement and visual environment. Left: The virtual environment consists of an animated avatar, which is continuously shown to the subject, a target point and a numeric score that is displayed after the end of each movement. Each trajectory can be decomposed into an approach (red) and a correction phase (green). The dashed line denotes the line of projection of the target onto the projection center used by the display. Right: The nine targets were placed at a distance TD from the shoulder, at three different elevations: low (blue), middle (green), high (red). For a given target, movement amplitude (MA) denotes the distance of the target from the start hand position.
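The subject-centered target placement described above can be sketched in a few lines of Python (the platform itself was implemented in Python, as noted above). The axis convention, the angle convention and the function name are our own assumptions; the paper only specifies the elevation and azimuth angles and the TD range.

```python
import numpy as np

def target_position(shoulder, td, elevation_deg, azimuth_deg):
    """Place a target on a sphere of radius td centred on the shoulder.
    Axes assumed here (not specified in the paper): x to the subject's
    right, y vertical (up), z away from the subject; azimuth is measured
    in the horizontal plane from the x axis."""
    el, az = np.radians(elevation_deg), np.radians(azimuth_deg)
    offset = td * np.array([np.cos(el) * np.cos(az),   # lateral
                            np.sin(el),                # vertical
                            np.cos(el) * np.sin(az)])  # depth
    return np.asarray(shoulder, dtype=float) + offset
```

For instance, a 'high' target lies 45° above shoulder level, so its vertical offset is td·sin(45°) above the shoulder, whatever the azimuth.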
A mirror image of the subject was continuously displayed on the screen as an animated avatar, in which the subject's hands were displayed as Ø15 cm spheres; see Figure 1 (left). While the subject was in the reference pose, one target (Ø15 cm) appeared on the screen (displayed as either an apple, a star or a bag of money). Subjects were required to reach the target as fast as possible, by using their preferred hand. In other words, subjects were free to choose with which hand to reach the target. In this sense, the task was bilateral. To facilitate reaching, subjects were also allowed to step in all directions. The task involved movements in three dimensions, but targets were only displayed as projections on the screen placed in front of the subjects. In this way, subjects had limited information on target location along the 'depth' direction. In fact, all points of the projection line connecting the projection center defined by the virtual environment and the 3D position of the virtual target project to the same point of the screen. The only information on 'depth' was provided by the size of the displayed target (targets, or body segments, that are further away look smaller when projected). As a consequence, the visual feedback on reaching accuracy was largely two-dimensional (on-screen distance between target and subject's hand). The movement was considered completed when either the distance between hand and target was less than the target size, i.e. 15 cm, or movement time was greater than 10 s. After completion of a movement, a 0−100 score was displayed on the screen, calculated as: $$ \text{score} = \left\lfloor 100 \cdot \frac{1/\text{MT} - 1/\text{MT}_{max}}{1/\text{MT}_{min}-1/\text{MT}_{max}} \right\rfloor \quad (1) $$ where MT is the total movement time; MT max and MT min are, respectively, the maximum and minimum durations; and ⌊x⌋ is the integer part of x.
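As a sketch, Eq. (1) can be written as follows. We take the integer part of the full 0-100 value, as the text's definition of ⌊x⌋ suggests, and apply the clamping described in the text, with the MT limits of 0.5 s and 10 s reported below.

```python
import math

# Limits from the pilot tests described in the text.
MT_MIN, MT_MAX = 0.5, 10.0   # seconds

def score(mt):
    """0-100 score, inversely related to movement time mt (in seconds).
    Movements slower than MT_MAX score 0; faster than MT_MIN score 100."""
    if mt >= MT_MAX:
        return 0
    if mt <= MT_MIN:
        return 100
    frac = (1.0 / mt - 1.0 / MT_MAX) / (1.0 / MT_MIN - 1.0 / MT_MAX)
    return math.floor(100.0 * frac)
```

Solving Eq. (1) for score = 25 with these limits gives MT = 1/0.575 ≈ 1.74 s, which matches the target movement time quoted in the protocol description.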
Based on pilot tests with healthy subjects, we set MT min and MT max to, respectively, 0.5 and 10 s. A zero score was assigned to movements whose duration was greater than MT max. Movements whose duration was less than MT min received a maximum (100) score. We did not explicitly tell subjects that the score was related to MT, but they all realized it after a few epochs. We also provided audio feedback: (i) an unpleasant sound when a zero score was achieved; (ii) a trumpet sound when the score was equal to 100, or (iii) a theme-specific 'ok' sound (e.g. a clink if the target was a bag of money) for intermediate score values. In this way, subjects were encouraged to move as fast as possible. Exercise protocol The exercise protocol was organized into epochs, each one corresponding to 5 repetitions of a target set - a sequence of all nine targets, in random order (i.e., 5×9=45 movements per epoch). After each epoch, subjects had to rest (sitting if necessary) for at least 1 min. The therapy protocol consisted of a total of 10 training sessions (2 sessions/week), each lasting 40 minutes. Depending on the individual conditions and thus on individual movement speeds, each session could involve a variable number of epochs. At the beginning of each session, an automatic calibration procedure was carried out to initialize the movement tracking algorithm, to estimate the subject's arm length and to establish the subject-centered reference frame with respect to which targets were specified. Each phase of this procedure was guided by vocal messages. We used a Bayesian procedure [27] to automatically adjust the target distance to the individual movement capabilities, on a per-target-set basis. After completion of a target set (i.e., nine movements), TD was adjusted in order to get the average score in the next target set as close as possible to a pre-specified target value.
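The actual adjustment is a Bayesian procedure whose details are given in [27]. Purely as an illustration, a much simpler proportional regulator with the same qualitative behavior (raise TD after high scores, lower it after low ones, clipped to the 50-150% range of arm length) could look like this; the proportional rule and the gain value are our assumptions, not the published algorithm.

```python
def update_target_distance(td, avg_score, target_score=25.0,
                           td_min=0.50, td_max=1.50, gain=0.005):
    """Simplified stand-in for the Bayesian regulator of [27].
    td: current target distance as a fraction of arm length;
    avg_score: mean 0-100 score over the last target set.
    TD is raised when the set scored above the target and lowered
    otherwise, then clipped to the protocol's 50-150% range."""
    td = td + gain * (avg_score - target_score)
    return min(td_max, max(td_min, td))
```

With this rule a subject who consistently scores above 25 sees the targets drift outward until either the score settles at the target value or TD saturates at 150% of arm length.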
Specifically, TD was increased if the average score was greater than the target value, and decreased if the average score was smaller. In other words, if a subject could not reach the target fast enough, the next targets were placed closer to the body. If subjects performed well, targets were placed farther away. In this way, subjects always made movements as wide as they could afford, while the score, and therefore the average speed, was kept around the specified target value. In our experiments the target score was set to 25/100, corresponding to MT=1.74 s. In summary, subjects were required to maintain a target average performance (quantified by the above duration score) within a pre-specified number of consecutive trials (the 'target set') and across different target elevations and movement amplitudes. The adaptive controller automatically adjusted the target distance (i.e., task difficulty) in order to maintain that average score. The study involved a total of seven subjects with idiopathic PD (see Table 1 for demographic and clinical information), recruited among the outpatients of the National Health System of the municipality of Genoa, Italy (ASL3 'Genovese'). Table 1 Subjects' demographic and clinical information The inclusion criteria were a diagnosis of Parkinson's disease made by a neurologist and the ability to stand up and take a few steps without a walking aid. Presence of serious psychiatric problems, severe receptive aphasia and inability to perform the Timed 'Up and Go' test (TUG) with aids and supervision were taken as exclusion criteria. Presence of early dementia did not in itself constitute an exclusion criterion. The age of the seven subjects (2M + 5F) was 67±5 years (range 60−76). Disease duration was 5±4 years (range 2−13).
We quantified subjects' impairment through the Unified Parkinson Disease Rating Scale (UPDRS) - part III (motor) - a 0-56 scale (0: normal; 56: maximally impaired) [28] - 15±10 (range: 5−28) and the Modified Hoehn and Yahr (H&Y) staging scale [29,30], a 1-5 scale (1: minimal disability, 5: maximum disability) - 3±1 (range 1.5-4). Before the start of the exercise protocol, the subjects' performance with the Timed 'Up and Go' test (TUG) [31] was 15±12 s (range 5−38 s) and with the 10-Meters-Walk Test (10MWT) [32] was 12±12 s (range: 4−39 s). In the latter test, subjects were instructed to walk as fast as possible. Two subjects (S1 and S3) exhibited an abnormal forward-flexed posture (camptocormia). All subjects were taking medications at the time of testing and were in their 'ON' phase during training. The research conforms to the ethical standards laid down in the 1964 Declaration of Helsinki that protects research subjects. Each subject signed a consent form that conforms to these guidelines. The raw recordings of the 3D trajectories of the 13 virtual markers were smoothed with a 4th order Savitzky-Golay filter with a 0.96 s time window (corresponding to 29 data samples). The same filter was used to estimate all subsequent time derivatives. The filter parameters correspond to a cut-off frequency of approximately 1.5 Hz. Although relatively low with respect to movement analysis standards, this value is necessary to deal with the low accuracy of the Kinect sensor. The Kinect system uses a reconstruction algorithm to estimate the positions of anatomical points (hand centroid etc.). This reconstruction is not 100% accurate, so that the estimated marker positions tend to fluctuate from one sample to the next. As a consequence, the estimated marker trajectories are more irregular and less smooth than in conventional marker-based motion capture systems [33]. Smoothing reduces this problem. 
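The smoothing and differentiation step can be reproduced, for instance, with SciPy's Savitzky-Golay filter and the stated parameters (4th order, 29-sample window, 30 Hz sampling). The paper does not say which implementation was used, so this is only an equivalent sketch.

```python
import numpy as np
from scipy.signal import savgol_filter

FS = 30.0    # Kinect sampling rate (Hz)
WIN = 29     # 29 samples = 0.96 s window
ORDER = 4    # polynomial order

def smooth_and_differentiate(pos):
    """pos: (n_samples, 3) raw marker trajectory. Returns the smoothed
    position together with velocity and acceleration estimated directly
    by Savitzky-Golay differentiation (deriv=1, 2)."""
    p = savgol_filter(pos, WIN, ORDER, deriv=0, axis=0)
    v = savgol_filter(pos, WIN, ORDER, deriv=1, delta=1.0 / FS, axis=0)
    a = savgol_filter(pos, WIN, ORDER, deriv=2, delta=1.0 / FS, axis=0)
    return p, v, a
```

Computing the derivatives inside the filter, rather than by finite differences on the smoothed signal, is what makes velocity and acceleration estimates usable despite the noisy Kinect marker positions.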
In spite of the limited tracking accuracy of this device [24], the smoothed trajectories still allowed reliable estimation of the main spatio-temporal features of the movement (path, duration, speed). Movement trajectories can be decomposed into an approach phase, in which subjects reach the target projection line, and a correction phase, in which subjects move along the projection line in order to achieve the target. PD subjects with bradykinesia tend to move slowly and to undershoot the target [34], so we expected them to have problems with both phases. In the analysis we only considered the movements that achieved a score greater than zero; the others were rejected. For each movement, we first identified the hand that subjects selected to perform the movement by comparing target distance measurements. We then focused on this hand for all subsequent analysis of each single movement. We then estimated the movement onset as the instant at which movement speed went above 10% of peak speed. The end of the approach phase was identified as the time when the speed went below this same threshold. Finally, movement end was estimated as the instant at which the distance between the hand and the target was smaller than the target size (i.e. 15 cm). To assess the effect of exercise, we focused on various aspects of movement performance. In addition to target distance, which is a measure of task difficulty and was automatically adjusted at every target set, for each movement we specifically looked at movement path, movement time and the average absolute acceleration (a measure of movement 'effort'). Movement path Movements toward a specific target, placed at distance TD from the shoulder, are characterized by a specific Movement Amplitude (MA), defined as the distance between the start position of the hand selected for the movement (i.e. its reference pose) and the target (see Figure 1). This quantity depends on TD but also on target location, thus it is target-dependent.
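The onset/approach-end/movement-end segmentation described above amounts to simple threshold crossings on the per-sample speed and hand-target distance; a sketch (function and variable names are ours):

```python
import numpy as np

def segment_movement(speed, hand_target_dist, target_size=0.15):
    """speed: hand speed per sample; hand_target_dist: hand-target
    distance per sample (m). Onset: first sample above 10% of peak
    speed; end of the approach phase: first later sample back below
    that threshold; movement end: first sample with the hand inside
    the target (< 15 cm). Returns sample indices."""
    speed = np.asarray(speed, float)
    thr = 0.10 * np.max(speed)
    onset = int(np.argmax(speed > thr))
    below = speed[onset:] < thr
    approach_end = (onset + int(np.argmax(below))) if below.any() else len(speed) - 1
    inside = np.asarray(hand_target_dist) < target_size
    end = int(np.argmax(inside)) if inside.any() else len(speed) - 1
    return onset, approach_end, end
```

Dividing the resulting sample counts by the 30 Hz sampling rate converts the indices into the onset and end times used for MT.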
We quantified the movement path in terms of the undershoot error (US), defined as the projection of the endpoint error - the difference between the target position and the final position at the end of the approach phase - onto the direction of the target with respect to the start position. As a measure of path curvature we calculated a Linearity index (LI), defined as the relative increase of path length (PL) with respect to the nominal MA: LI=PL/MA−1. A zero LI would correspond to a perfectly straight hand trajectory. Movement timing For each movement we calculated the Movement Time (MT) - which determined the movement score as explained above - defined as the time interval between movement onset and movement end. We also looked at the average speed (AS) for each movement. Movement effort The effort that subjects actually devoted to a movement was quantified by the average norm of acceleration (AA), calculated as the rectified tangential acceleration, averaged from movement start to movement end (i.e., the average of the absolute value of acceleration): $$ \text{AA}=\frac{1}{\text{MT}} \int_{0}^{\text{MT}} \left| \frac{dv}{dt} \right| \, dt $$ where v(t) is movement speed; see also [15]. In straight-line reaching movements, the average acceleration is proportional to the ratio between path length and the square of movement time, i.e. \(\text{AA}\propto\text{PL}/\text{MT}^{2}\); see [35,36]. We tentatively assumed that this relation holds in the present task. As a consequence, the score, and thus the reciprocal of movement time, is approximately proportional to the square root of the ratio between the absolute average acceleration and the path length, i.e. \(\sqrt {\text {AA}/\text {PL}}\). Hence AA and PL are two major determinants of movement time and therefore of the movement score. Specifically, increasing PL would require an increase of AA in order to keep MT (and thus the movement score) constant.
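For a single trial, the indicators defined above can be computed from the sampled hand path as follows. This is an illustrative implementation (the original analysis was done in Matlab); note that US should be evaluated on the path up to the end of the approach phase, whereas the timing and effort measures use the full movement.

```python
import numpy as np

def movement_indicators(traj, target, fs=30.0):
    """traj: (n, 3) hand path from movement onset to the chosen endpoint.
    Returns US, LI, MT, AS, AA as defined in the text."""
    traj = np.asarray(traj, float)
    target = np.asarray(target, float)
    start, endpoint = traj[0], traj[-1]
    ma = np.linalg.norm(target - start)                # movement amplitude
    direction = (target - start) / ma
    us = float(np.dot(target - endpoint, direction))   # undershoot error
    steps = np.diff(traj, axis=0)
    pl = float(np.sum(np.linalg.norm(steps, axis=1)))  # path length
    li = pl / ma - 1.0                                 # linearity index
    mt = (len(traj) - 1) / fs                          # movement time (s)
    avg_speed = pl / mt                                # average speed
    speed = np.linalg.norm(steps, axis=1) * fs         # tangential speed
    aa = float(np.mean(np.abs(np.diff(speed) * fs)))   # avg |acceleration|
    return us, li, mt, avg_speed, aa
```

A perfectly straight, constant-speed path that ends on the target gives US = 0, LI = 0 and AA ≈ 0, consistent with the definitions above.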
Since movements toward targets at different elevations have very different amplitudes, we expect that if they are forced to be of equal duration (i.e., equal score), absolute acceleration should also increase with target elevation. In other words, movements toward 'high' targets should require more effort to achieve the same score. As the controller regulates the average score and the adjusted target distance is common to all targets, irrespective of their elevation, targets at low elevation - requiring less effort - are expected to achieve a greater-than-average score, whereas targets at high elevation - requiring more effort - will achieve a lower-than-average score. With training, subjects are expected to improve their overall performance. This means that they should be able to achieve the same target score by reaching more distant targets. Furthermore, for a given target distance, they are expected to put more effort into their movements, i.e. they should increase their absolute average acceleration. From the recorded hand trajectories, their velocities and their accelerations, we calculated the above indicators for each individual movement. We took the average absolute acceleration as the primary outcome measure. All other indicators, which reflect different aspects of task performance, were taken as secondary outcome measures. To assess the overall effect of exercise on subjects' performance, for each indicator we ran a 2-way repeated-measures ANOVA, with training (pre- vs post-) and elevation (low, middle, high) as within-subject factors. We compared the trials performed under the most challenging condition, represented by the maximum target distance (150% of arm length). For this reason, we took the first epoch of the first session (pre- condition) and the first epoch of the last session (post- condition).
For the indicators that exhibited significant training and training × elevation effects, we additionally looked at their correlations with disease severity, as measured by the UPDRS - part III and the Modified Hoehn and Yahr staging scale. To do this, for each individual subject and for each indicator we calculated a linear regression over target elevation (low, middle, high), separately for the pre- and post-treatment conditions. We then took: (i) the intercept of the pre-treatment line as the pre-treatment performance measure; (ii) the corresponding slope; (iii) the difference in the intercepts of the post- and the pre-treatment lines as a measure of the treatment-related change in performance; and (iv) the difference in the slopes of the post- and pre-treatment lines. For each of the above indicators we took the correlation coefficients with disease severity. In all cases we took p=0.05 as the threshold for statistical significance. We used Matlab (Mathworks, Natick MA) for all data analysis. Both the visual environment and the exercise protocol were well accepted by all subjects. Subject S7 exited the study after 5 sessions for health reasons (flu) unrelated to the treatment protocol. This subject was not considered in any further statistical analysis. Although subjects were allowed to step, they very rarely did, likely because they did not feel safe in moving the arm while stepping. In all cases we observed no relevant changes of this behavior as training proceeded. Across sessions, subjects significantly increased (p=0.0335; paired samples t-test) the number of completed blocks of trials (epochs) during the (fixed) duration of each session; see Table 2 for details. Table 2 Number of epochs completed on the first (1) and the last treatment sessions (10) Regulation of target distance Based on subjects' performance (score), task difficulty - i.e. TD - was adaptively regulated 'as needed' [27].
In this way, the average score over sessions was expected to gradually get closer to this target value, and a concurrent increase in TD is an indirect indication of improved task performance. Figure 2 (left) shows the temporal evolution of score (top) and TD (bottom), averaged over sessions, for each individual subject. With the exception of subjects S1 and S5, who only approached the target score in the later sessions, all other subjects generally managed to keep their score close to the target value. Across sessions, subjects rapidly reduced the fraction of trials per session in which they got a zero score (target not reached within the timeout interval), from 27 ± 9% to 7 ± 1%. The effect was not significant due to the large between-subject variability. Temporal evolution of score (top) and TD (bottom). Left: Individual subjects. Each color represents a different subject. Right: average over subjects (black: S1 and S5; red: all other subjects). Dashed areas and bars denote the standard error (SE). With the exception of subjects S1 and S5, for whom TD remained close to its minimum value (50% of arm length), all other subjects exhibited a gradual TD increase; see Figure 2 (right). Several subjects exhibited a non-monotonic evolution of target distance over sessions. This is because the difficulty level was reset to its initial value at the beginning of each session, so that the temporal evolution of TD across sessions exhibits some variability. Movement performance Experimental observations confirmed that subjects generally used a two-step strategy for reaching the targets, consisting of an approach and a correction phase. During the approach phase, subjects reached the line joining the point of view and the actual position in space of the virtual target. All points of this line are projected into the same point on the screen. During the correction phase, subjects moved along this line to achieve the actual 3D target position; see Figure 1 (left).
The results of the 2-way ANOVA are summarized in Table 3. Table 3 Summary of the results of the 2-way analysis of variance (ns: not significant), for undershoot error (US), linearity index (LI), movement time (MT), average speed (AS) and average absolute acceleration (AA) Movement Path During the approach phase, subjects generally tended to undershoot the target, but the magnitude of the effect did not depend on target elevation (non-significant effect of elevation). We observed a significant training effect on the amount of undershoot (p=0.03) - undershoot decreases with training. This effect did not depend on target elevation (non-significant interaction between session and elevation); see Table 3 and Figure 3 (left). In contrast, we found no significant changes in path curvature (linearity index, LI) - curvature neither significantly depends on elevation nor significantly decreases with training. Effect of training on undershoot error and movement time. Undershoot Error (left) and Movement Time (right) in the first epoch of the beginning (Pre) and the first epoch at the end of the training protocol (Post). Error bars denote the SE. Movement Effort We assessed movement effort in terms of the average absolute acceleration. We found significant training (p=0.025) and elevation (p=0.02) effects. Figure 4 (right) summarizes the effect of training on movement effort. Temporal evolution and effect of training on average speed and average acceleration. Average Speed (left) and Average Acceleration (right) averaged across subjects (first and last epoch of each session). Bar graph of Average Speed and Average Acceleration in the first epoch of the beginning (Pre) and the last epoch at the end of the training protocol (Post). Error bars denote the SE. In addition, we observed a significant training × elevation effect (p=0.014); see Table 3. Figure 5 summarizes this effect. Interaction between training and target elevation.
Sensitivity of movement time (left), average speed (middle) and average absolute acceleration (right) to target elevation (low, middle, high), respectively at the beginning (Pre, blue line) and at the end of training (Post, orange line). Error bars denote the SE. Movement timing We observed a significant decrease (p=0.002) of movement time with training; see Figure 3 (right). We did not find significant elevation or elevation × training effects, see Table 3. As regards average speed, we found a significant elevation effect in the overall movement (p=0.008) - speed increases with target elevation. We also found a significant training effect (p=0.01); see Table 3. Figure 4 (left) summarizes the effect of training on average speed. We also observed a significant training × elevation interaction (p=0.005); see Figure 5. A look at the relation between MT and elevation - see Figure 5 (left) - suggests that before training MT is significantly greater at high elevation than at low elevation (p=0.026, post-hoc comparison with Bonferroni correction). At the end of the training, MT decreases and also becomes less dependent on MA (elevation effect not significant). Disease severity The relation between disease severity of the individual subjects - quantified through the Modified Hoehn and Yahr scale - and the corresponding performance indicators is summarized in Table 4. Table 4 Correlation of disease severity (Modified Hoehn and Yahr scale, H&Y) with the regression parameters (slope, intercept) of undershoot error (US), movement time (MT), average speed (AS) and average absolute acceleration (AA) with respect to target elevation We only found a significant correlation with the pre-treatment movement speed (AS; R=−0.82, p=0.04) - the greater the disease severity, the lower the speed. No statistically significant correlations were observed with the UPDRS score.
Clinical scales To assess whether the training protocol resulted in modifications of the subjects' degree of impairment, we performed clinical tests (TUG, 10MWT) before the start and after the end of the training protocol. The TUG score was 15±12 s (range 5−38 s) before training and 16±15 s (range 4−45 s) after training. The 10MWT score, respectively before and after training, was 12±12 s (range: 4−39 s) and 12±13 s (range: 3−37.7 s), see Table 5 for details. We found an improvement in, respectively, the TUG and the 10MWT in 3/6 and 5/6 subjects. However, these effects turned out to be non-significant from the statistical point of view (paired-sample t-test). Table 5 TUG and 10MWT tests before and after training We designed a technology-assisted exercise that specifically aims at increasing movement speed through the repeated practice of large amplitude movements. Six subjects (out of seven) successfully completed the trial, with the exclusion of S7 who exited the study for reasons unrelated to the treatment. All subjects verbally expressed a high level of acceptance for the treatment and the apparatus. They only reported a difficulty in assessing the 3D location of the targets. This is consistent with a previous study [23] pointing out that, while participants enjoyed the game and could gladly train at home, they exhibited a difficulty to 'discriminate between different types and orientations of visual objects'. Subjects gradually increased movement amplitude To encourage subjects to exercise at the maximum amplitude they could sustain, we adaptively regulated target distance (and therefore movement amplitude) so that subjects could achieve and maintain a target movement time [27]. This guaranteed both exercising at maximal effort but also safety and motivation (speed and amplitude were maintained within comfortable levels). Over the training sessions all subjects - see Figure 2 - exhibited a gradual increase of target distance. 
At the same time, all managed to maintain the movement score (based on movement time) close to the target value of 25/100. The fraction of trials in which subjects got a zero score also rapidly decreased across sessions. We decided to set the same target score for all subjects. For subjects S1 and S5 this was especially challenging, and they only managed to reach it on the final sessions of the training protocol. For all other subjects, the target appeared to be within easy reach, but they still found the task challenging and motivating. The proposed approach is similar to the LSVT BIG technique, in which subjects are encouraged to practice large amplitude movements through verbal cues by a therapist [18]. In our case, adaptive control of amplitude, time-based reward and the continuous display of the mirror image of the subject, of his/her movements and of the targets play a role similar to that of the verbal cues used in [18], as a way to promote subjects' awareness of the amplitude of their movements. Sensory awareness of movement magnitude is related to the integration of proprioception and vision, which is another essential aspect of the LSVT BIG technique. Subjects become faster and more accurate With training, we expected subjects to gradually improve both precision and speed of their movements. As regards precision, irrespective of target elevation subjects generally tended to undershoot the target. This is a well-documented symptom - hypometria - that has often been related to bradykinesia [2,37]. Specifically, bradykinesia may in part result from a reduced endpoint accuracy. Sheridan and Flowers [38] hypothesized that in order to maintain accuracy within acceptable limits, PD patients are forced to increase the duration of their movements. However, we suspect that in the present experiment the observed undershoot may be at least partly a consequence of a parsimonious strategy (i.e. 'stopping early') to deal with the lack of depth information.
In fact, we ran a few trials with healthy subjects and they reported similar problems (data not shown). Nevertheless, with training we indeed observed a significant decrease of the undershoot error; see Figure 3 (top). We also observed a significant decrease in the movement time - see Figure 3 (top) - and a corresponding increase in movement speed and in absolute acceleration - subjects tend to move faster and to put more effort into their movements, while also increasing their accuracy; see Figure 3 (bottom). A further, more indirect indication that subjects move faster is represented by the significant increase of the number of movements that subjects managed to complete within each 40-min training session. Finally, we looked at the relation between the amount of improvement (in motor performance, in motor motivation) and the initial degree of impairment, as measured by the Modified Hoehn and Yahr score and the UPDRS-III scale. We found a weak but significant negative correlation between disease severity and the pre-treatment speed - more severely impaired subjects initially make slower movements. In contrast, no significant relationship was observed between disease severity and performance improvement. These results suggest a simple relation between task-related performance measures and the overall degree of impairment. However, they should be taken cautiously given the small number of subjects, who are far from representative of the general PD population. Reduced bradykinesia or task familiarization? An improved speed and accuracy of the movement may result from either a true reduction of the bradykinesia symptoms, or mere familiarization with the task. As mentioned in the Introduction, bradykinesia has been associated with either a difficulty in selecting movements that require greater levels of energy expenditure [15,39,40] or an insensitivity to rewarding outcomes [41]. Formulations based on optimal control - e.g.
[40] - emphasize that movements are the result of a trade-off between reward and effort. Response vigor - the bias toward selecting high-speed movements - reflects this trade-off. The notion that the latter is mediated by the basal ganglia has found some empirical confirmation [14,42]. Vigor is difficult to quantify empirically [15]. Some studies have been looking at the observation that movement speed increases with movement amplitude - the amplitude-speed effect, see [18]. This relation has been reported in reaching, in walking, in handwriting and in eye movements. For instance, Choi et al. [43] analyzed saccades of various amplitudes and looked at the relationship between amplitude and speed, and how it depends on the subjects' degree of impulsivity, defined in terms of how long they are willing to wait for a rewarding outcome. Their main finding was that subjects' impulsivity correlated with the slope of the saccade's amplitude-speed relationship. In other studies [44] this effect was quantified in terms of the relationship between movement amplitude and the average acceleration, taken as a measure of effort. These authors reported that the handwriting movements of PD subjects have an abnormal stroke size - acceleration dependence. Taken together, the above studies suggest that the slope of the amplitude-speed or amplitude-acceleration dependence can be taken as a measure of vigor. In the present study we looked at the slopes of both the amplitude-acceleration and the amplitude-speed relations. We observed a significant effect of elevation (or, equivalently, amplitude) in the average absolute acceleration, which more directly reflects energy expenditure; see Table 3 and Figure 5. A similar effect was observed in the average speed - training led to an increase of the slope of the amplitude-speed relation. 
However, one problem with this interpretation is that familiarization with the task would result, by itself, in a generalized increase of movement speed, while not necessarily implying a vigor change. As regards the amplitude-absolute acceleration relationship, the model of Rigoux et al. [40] predicts that low vigor - i.e. a greater subjective importance given to movement effort - implies a greater sensitivity of MT to MA. To further explore this point, we looked at the empirical relation between MT, elevation (i.e. MA) and training; see Figure 5 (left). We found that before training MT is significantly greater at high elevation than at low elevation. At the end of the training MT not only decreases, but also becomes less dependent on MA (elevation effect no longer significant). Similar findings were reported by van Gemmert et al. [44] in the context of handwriting. They specifically looked at the relationship between the size and the duration of elementary movements (stroke), in healthy subjects and in persons with PD. Hence, our data exhibit an effect that is consistent with an increased vigor [40]. However, familiarization with the task would lead to a reduced MT in ways that are similar to those induced by vigor change, so that these aspects would be difficult to distinguish. Therefore, a slope increase in the AA vs MA relation may be at least in part a consequence of familiarization with the task. Similar considerations apply to the AS vs MA relation. In summary, our observed training-induced changes in both the amplitude-speed and the amplitude-acceleration relations are consistent with an increased vigor but are not conclusive in distinguishing between task familiarization and vigor change. Toward clinical application Although our findings are far from conclusive and await confirmation by a larger study, they nevertheless suggest a training-induced improvement of the bradykinesia symptoms.
We observed a modest improvement in some subjects in a variety of clinical scales, but these changes were not statistically significant. In contrast, Ebersbach et al. [20] delivered 1-hour treatment sessions, 4 sessions/week for 4 weeks (a total of 16 hours of treatment) and found a clinically significant reduction of the UPDRS-III score. A smaller reduction was observed after a shorter duration (2 weeks) version of the same LSVT protocol [45] (a total of 8 hours of treatment). After a Kinect-based training protocol consisting of fourteen 60-minute sessions with the Kinect Adventure game suite (a total of 14 hours of treatment), Pompeu et al. [22] also reported an improvement in activity (balance and gait) and participation (quality of life). It should be noted that our subjects only completed 40-minute treatment sessions, 2 sessions/week for 5 weeks (a total of 6.6 hours of treatment), which is a far lower dose than [20,22] but is similar to [45]. The better outcome of the latter may depend on the different intensity (similar treatment doses administered in half the time) and/or the behavioral training provided in addition to the large amplitude exercise. In all cases we found no evidence of plateau effects in the temporal evolution of performance indicators in Figure 4, which suggests that additional exercise might have resulted in even more improvement. Another limitation of our proposed approach with respect to the LSVT BIG technique is that, although we provided several forms of feedback on task performance, we did not explicitly stimulate subjects' motivation and we did not explicitly promote transfer of the improved performance to activities of daily living. Using a tangible (monetary) reward and/or directly measuring enjoyment, and possibly modulating them during training might further improve the outcome. We have explored the potential of the Microsoft Kinect by focusing on two specific symptoms of Parkinson's disease, namely bradykinesia and hypokinesia.
The rationale underlying the study is that bradykinesia can be mitigated by repeated exercise that specifically focuses on high-amplitude movements [18,20]. Although preliminary, our results point to a training-induced reduction of bradykinesia. However, we cannot conclude whether the observed outcome is the mere effect of familiarization with the task or is a consequence of an increased vigor. Proper discrimination between these two effects is indeed an open issue, which we leave to future developments. To address this, one could possibly focus on more automatic motor activities (e.g. handwriting, speech, etc.), for which a familiarization effect can be ruled out, or on comparing the effects of training with a baseline (e.g. healthy subjects, or PD subjects ON vs OFF medication). Another possibility is to use computational models that explicitly address learning and vigor change, to estimate learning-related and vigor change-related contributions to the observed changes of performance. The same arguments on the difficulty of distinguishing between performance improvements related to familiarization and those related to vigor also apply to assessing bradykinesia through clinical scales, none of which specifically addresses vigor. More generally, we wanted to explore the potential of natural user interfaces as rehabilitation devices. Natural interfaces are appealing because subjects can freely move and are not required to wear sensors or markers. This makes their use more intuitive and more comfortable, especially for older users. In fact, as in other reports, the device was well accepted by our subjects and appeared safe and easy to use. In the context of rehabilitation they are increasingly used in conjunction with off-the-shelf video games [22], but they also make it possible to design exercises that target specific types of impairment [23].
One secondary aspect is the low cost, which makes this treatment particularly affordable for rehabilitation centers and even individual users. Taken together, these aspects suggest that the proposed treatment may be suitable for training with little or no supervision by a therapist, possibly in domestic environments. Berardelli A, Rothwell JC, Thompson PD, Hallett M. Pathophysiology of bradykinesia in Parkinson's disease. Brain. 2001; 124(Pt 11):2131–46. Mazzoni P, Shabbott B, Cortés JC. Motor control abnormalities in Parkinson's Disease. Cold Spring Harbor Perspect Med. 2012; 2(6):a009282. Pfann KD, Buchman AS, Comella CL, Corcos DM. Control of movement distance in Parkinson's disease. Mov Disord. 2001; 16(6):1048–65. Phillips JG, Martin KE, Bradshaw JL, Iansek R. Could bradykinesia in Parkinson's disease simply be compensation? J Neurol. 1994; 241(7):439–47. Protas EJ, Stanley RK, Jankovic J, MacNeill B. Cardiovascular and metabolic responses to upper- and lower-extremity exercise in men with idiopathic Parkinson's disease. Phys Ther. 1996; 76(1):34–40. Schenkman M, Salay J, Scherer N, Kohrt W. Walking Economy in Patients With Mild to Moderate Parkinson's Disease. J Neurol Phys Ther. 2004; 28(4):172–3. Canning CG, Alison JA, Allen NE, Groeller H. Parkinson's disease: An investigation of exercise capacity, respiratory function, and gait. Arch Phys Med Rehabil. 2014; 78(2):199–207. Stanley RK, Protas EJ, Jankovic J. Exercise performance in those having Parkinson's disease and healthy normals. Med Sci Sport Exerc. 1999; 31(6):761–6. Corcos DM, Chen CM, Quinn NP, McAuley J, Rothwell JC. Strength in Parkinson's disease: relationship to rate of force generation and clinical status. Ann Neurol. 1996; 39(1):79–88. Klockgether T, Dichgans J. Visual control of arm movement in Parkinson's disease. Mov Disord. 1994; 9(1):48–56. Abbruzzese G, Berardelli A. Sensorimotor integration in movement disorders. Mov Disord. 2003; 18(3):231–40.
Berardelli A, Dick JP, Rothwell JC, Day BL, Marsden CD. Scaling of the size of the first agonist EMG burst during rapid wrist movements in patients with Parkinson's disease. J Neurol Neurosurg Psychiatry. 1986; 49(11):1273–9. Majsak MJ, Kaminski T, Gentile AM, Flanagan JR. The reaching movements of patients with Parkinson's disease under self-determined maximal speed and visually cued conditions. Brain. 1998; 121(Pt 4):755–66. Turner RS, Desmurget M. Basal ganglia contributions to motor control: a vigorous tutor. Current Opin Neurobiol. 2010; 20(6):704–16. Mazzoni P, Hristova A, Krakauer JW. Why don't we move faster? Parkinson's disease, movement vigor, and implicit motivation. J Neurosci. 2007; 27(27):7105–16. Muhlack S, Welnic J, Woitalla D, Müller T. Exercise improves efficacy of levodopa in patients with Parkinson's disease. Mov Disord. 2007; 22(3):427–30. Tomlinson CL, Patel S, Meek C, Herd CP, Clarke CE, Stowe R, et al. Physiotherapy versus placebo or no intervention in Parkinson's disease. Cochrane Database Syst Rev. 2013; 9. Farley BG, Koshland GF. Training BIG to move faster: the application of the speed-amplitude relation as a rehabilitation strategy for people with Parkinson's disease. Exp Brain Res. 2005; 167(3):462–7. Freund HJ, Büdingen HJ. The relationship between speed and amplitude of the fastest voluntary contractions of human arm muscles. Exp Brain Res. 1978; 31(1):1–12. Ebersbach G, Ebersbach A, Edler D, Kaufhold O, Kusch M, Kupsch A, et al. Comparing exercise in Parkinson's disease–the Berlin LSVT® BIG study. Mov Disord. 2010; 25(12):1902–8. Barry G, Galna B, Rochester L. The role of exergaming in Parkinson's disease rehabilitation: a systematic review of the evidence. J Neuroeng Rehabil. 2014; 11:33. Pompeu JE, Arduini LA, Botelho AR, Fonseca MBF, Pompeu SMAA, Torriani-Pasin C, et al.
Feasibility, safety and outcomes of playing Kinect Adventures!™ for people with Parkinson's disease: a pilot study. Physiotherapy. 2014; 100(2):162–8. Galna B, Jackson D, Schofield G, McNaney R, Webster M, Barry G, et al. Retraining function in people with Parkinson's disease using the Microsoft kinect: game design and pilot testing. J NeuroEng Rehabil. 2014; 11(1):60. Dutta T. Evaluation of the Kinect™ sensor for 3-D kinematic measurement in the workplace. Appl Ergon. 2012; 43(4):645–9. OpenNI User Guide. http://www.primesense.com. H3D.org: Open Source Haptics. http://www.h3dapi.org. Squeri V, Basteris A, Sanguineti V. Adaptive regulation of assistance "as needed" in robot-assisted motor skill learning and neuro-rehabilitation. In: IEEE International Conference on Rehabilitation Robotics: 2011. p. 1–6. http://www.ncbi.nlm.nih.gov/pubmed/22275579. Goetz CG, Poewe W, Rascol O, Sampaio C, Stebbins GT, Fahn S, et al. The unified Parkinson's disease rating scale (UPDRS): status and recommendations. Mov Disord. 2003; 18(7):738–50. Hoehn MM, Yahr MD. Parkinsonism: onset, progression, and mortality. Neurology. 1998; 50(2):318. Goetz CG, Poewe W, Rascol O, Sampaio C, Stebbins GT, Counsell C, et al. Movement disorder society task force report on the Hoehn and Yahr staging scale: status and recommendations. Mov Disord. 2004; 19(9):1020–8. Podsiadlo D, Richardson S. The timed "Up & Go": a test of basic functional mobility for frail elderly persons. J Am Geriatr Soc. 1991; 39(2):142–8. Schenkman M, Cutson TM, Kuchibhatla M, Chandler J, Pieper C. Reliability of impairment and physical performance measures for persons with Parkinson's disease. Phys Ther. 1997; 77(1):19–27. van Diest M, Stegenga J, Wörtche HJ, Postema K, Verkerke GJ, Lamoth CJC. Suitability of kinect for measuring whole body movement patterns during exergaming. J Biomech. 2014; 47(12):2925–32. Flowers K. Ballistic and corrective movements on an aiming task.
Intention tremor and parkinsonian movement disorders compared. Neurology. 1975; 25(5):413–21. Nelson WL. Physical principles for economies of skilled movements. Biol Cybern. 1983; 46(2):135–47. Nagasaki H. Asymmetric velocity and acceleration profiles of human arm movements. Exp Brain Res. 1989; 74(2):319–26. Broderick MP, Van Gemmert AWA, Shill HA, Stelmach GE. Hypometria and bradykinesia during drawing movements in individuals with Parkinson's disease. Exp Brain Res. 2009; 197(3):223–33. Sheridan MR, Flowers KA. Movement variability and bradykinesia in Parkinson's disease. Brain. 1990; 113(4):1149–61. Shadmehr R, Krakauer JW. A computational neuroanatomy for motor control. Exp Brain Res. 2008; 185(3):359–81. Rigoux L, Guigon E. A model of reward- and effort-based optimal decision making and motor control. PLoS Comput Biol. 2012; 8(10):e1002716. Shiner T, Seymour B, Symmonds M, Dayan P, Bhatia KP, Dolan RJ. The effect of motivation on movement: a study of bradykinesia in Parkinson's disease. PLoS One. 2012; 7(10):e47138. Pasquereau B, Turner RS. Limited encoding of effort by dopamine neurons in a cost–benefit trade-off task. J Neurosci. 2013; 33(19):8288–300. Choi JES, Vaswani PA, Shadmehr R. Vigor of movements and the cost of time in decision making. J Neurosci. 2014; 34(4):1212–23. Van Gemmert AWA, Adler CH, Stelmach GE. Parkinson's disease patients undershoot target size in handwriting and similar tasks. J Neurol Neurosurg Psychiat. 2003; 74(11):1502–8. Ebersbach G, Grust U, Ebersbach A, Wegner B, Gandor F, Kühn AA. Amplitude-oriented exercise in Parkinson's disease: a randomized study comparing LSVT-BIG and a short training protocol. J Neural Transm. 2014; 122(2):253–6. This work was partly supported by the EU Grant FP7-ICT-271724 (HUMOUR), by a grant from the Italian Ministry of University and Research (PRIN 2009) and with the kind contribution of the Italian Ministry of Foreign Affairs, Unit for S/T cooperation.
Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy Susanna Summa, Angelo Basteris & Vittorio Sanguineti School of Computer Science, University of Hertfordshire, Hatfield, UK Angelo Basteris Functional Recovery and Rehabilitation Service, Hospital 'La Colletta', Arenzano, Italy Enrico Betti Susanna Summa Vittorio Sanguineti Correspondence to Susanna Summa. EB, VS, AB and SS conceived the study. SS and AB developed the experimental setup and the software. EB recruited the subjects and performed the clinical assessments. SS, AB and EB performed the experiments. SS and VS analyzed the results. SS, EB and VS wrote the manuscript. All authors read and approved the final manuscript. Summa, S., Basteris, A., Betti, E. et al. Adaptive training with full-body movements to reduce bradykinesia in persons with Parkinson's disease: a pilot study. J NeuroEngineering Rehabil 12, 16 (2015). https://doi.org/10.1186/s12984-015-0009-5 Accepted: 04 February 2015
Function $g(x)$, such that $\int_{\mathbb{R}} f(g(x))dx = \int_{\mathbb{R}} f(x)dx$ for all $f\in L^1(\mathbb{R})$ It's not too hard to show that for all $f\in L^1(\mathbb{R})$ and for $g(x) = x-\frac{1}{x}$ we have $\int_{\mathbb{R}} f(g(x))dx = \int_{\mathbb{R}} f(x)dx$: use the substitution $y = x - \frac{1}{x}$ and split the integral into two parts: $x<0$ and $x>0$. On one of them make the choice $x = \frac{y + \sqrt{y^2 + 4}}{2}$, on the other $x = \frac{y - \sqrt{y^2 + 4}}{2}$ and add the resulting integrals. The existence of even one function $g(x)$ such that the above works is astonishing. Are there other choices of $g$ such that $\int_{\mathbb{R}} f(g(x))dx = \int_{\mathbb{R}} f(x)dx$ for all $f\in L^1(\mathbb{R})$? Edit: $g(x) = x + a$ for any $a$ would be a solution, but I was wondering if there are more complicated $g$ which work, or if $x - \frac{1}{x}$ is just a single example. Milen Ivanov How about $g(x)=x$? Trivial, but it works. – Adrian Keister Apr 5 '18 at 13:31 I meant a non-trivial choice of $g$. – Milen Ivanov Apr 5 '18 at 13:33 Actually $$\int_{\mathbb{R}\setminus\{0\}} f(g(x)) \,dx = \int_{\mathbb{R}}f(x) \,dx$$ – mucciolo Apr 5 '18 at 13:39 @mucciolo The notation $L^1(\Bbb R)$ implies Lebesgue integration, which disregards values taken at a single point. – Arnaud Mortier Apr 5 '18 at 13:41 See Glasser's master theorem. Please note that the statement in wiki is slightly off: the leading coefficient $|a|$ in the transform $u = |a|x - \sum_{n=1}^N \frac{|a_n|}{x - \beta_n}$ should really be $1$. – achille hui Apr 5 '18 at 15:38 The set of all such functions is closed under composition. As a result, all the functions $g^k$ (in the sense of composition) work. E.g.
$$g^2(x)=x-\frac1x -\frac{1}{x-\frac1x}$$ Adding to that the fact that you can also compose them with affine functions of slope $1$ (lisyarus' answer) yields already a pretty large class. Incorporating @J.G.'s comment, one gets for instance all rational functions of the form $$\frac{x^2+ax+b}{x+c}$$ with the only requirement that $b<c(a-c)$ ($a$ and $c$ are arbitrary). Arnaud Mortier Nice, that's great insight! – Milen Ivanov Apr 5 '18 at 13:58 IIRC $x-c/x$ works for any $c>0$, which expands the class even further. – J.G. Apr 5 '18 at 15:15 We have $\int f\circ g=\int f$ for every integrable $f$ if and only if the same holds for $f=\chi_E$, which says precisely $$m(g^{-1}(E))=m(E)$$ for every measurable set $E$, which is to say that $g$ is measure-preserving. If we assume in addition that $g$ is a smooth bijection this is equivalent to $|g'|=1$, so the only smooth bijections with this property are $$g(x)=\pm x+c.$$ This seemed so obvious that it was clear to me at first that $x-1/x$ cannot have the stated property. But of course that function is not injective; if $g(x)=x-1/x$ you can calculate $g^{-1}((a,b))$ explicitly, and sure enough you get two intervals with total length $b-a$. (This is actually consistent with the analysis above, if you look at it right. The relevant condition for a smooth bijection is actually $|(g^{-1})'|=1$, which is equivalent to $|g'|=1$. But now if $g(x)=x-1/x$ and you let $y_1$ and $y_2$ denote the two "branches" of $g^{-1}$ you easily calculate that $|y_1'|+|y_2'|=1$.) One can easily concoct discontinuous examples. For example if $A\subset(0,\infty)$ is measurable, let $$g(x)=\begin{cases}x,&(|x|\in A), \\-x,&(|x|\notin A).\end{cases}$$ then $g$ is a measure-preserving bijection. David C. Ullrich There is a not-so-well-known result about these kinds of integrals that quite often comes up on this site.
It was already mentioned in the comments by achille hui, but I'll add this as a CW as it's a shame not having it as an answer: Glasser's master theorem$^{[1]}$ says that if $f(x)$ is integrable then $$\int_{\mathbb{R}}f(x)\,{\rm d}x = \int_{\mathbb{R}}f(\phi(x))\,{\rm d}x$$ for all $\phi(x) = x - \sum_{n=1}^N\frac{|a_n|}{x-b_n}$ where $a_n,b_n$ are arbitrary constants and where the integrals are to be interpreted in a principal value sense. [1]: Glasser, M. L. "A Remarkable Property of Definite Integrals." Math. Comput. 40, 561-563, 1983 As mentioned by David C. Ullrich, such $g$ is called a measure-preserving transformation (MPT) on $\mathbb{R}$ (w.r.t. the Lebesgue measure, of course). The comment by achille hui provides an important family of MPTs given by $$ g(x) = \pm \left( x - c - \sum_{k=1}^{n} \frac{\mu_k}{x - \lambda_k} \right) $$ for $c, \lambda_k \in \mathbb{R}$ and $\mu_k \in (0, \infty)$, which is the statement of Glasser's master theorem (1983). More generally, a theorem by Letac (1977) states that any function of the form $$ g(x) = \pm \left( x - c - \int_{\mathbb{R}} \left( \frac{1}{x - \lambda} + \frac{\lambda}{1+\lambda^2} \right) \, \mu(d\lambda) \right)$$ for $c \in \mathbb{R}$ and a singular Borel measure $\mu$ with $\int_{\mathbb{R}} \frac{\mu(d\lambda)}{1+\lambda^2} < \infty$ is an MPT. This theorem includes interesting examples such as $$g(x) = x - \cot x$$ corresponding to $\mu = \sum_{k \in \mathbb{Z}} \delta_{\pi k}$. Notice also that Glasser's master theorem is a special case of this theorem with $\mu = \sum_{k=1}^{n} \mu_k \delta_{\lambda_k}$, although the proof techniques involved are different. Sangchul Lee Clearly if $g: \mathbb{R}\to\mathbb{R}$ is monotonic, we must have, under $u=g(x)$, $$ \int_{\mathbb{R}}f(g(x))dx=\int_{\mathbb{R}}f(u)(g^{-1})'(u)du $$ for all $f\in L^1(\mathbb{R})$. Thus $$(g^{-1})'(u)=1$$ which implies $g(x)=x+a$.
If $g: (-\infty,0)\to\mathbb{R}$ and $g: (0,\infty)\to\mathbb{R}$ are monotonic, namely $g^{-1}$ has two branches $x=g_1^{-1}(u):\mathbb{R}\to(-\infty,0)$ and $x=g_2^{-1}(u):\mathbb{R}\to(0,\infty)$, then \begin{eqnarray} \int_{\mathbb{R}}f(g(x))dx&=&\int_{(-\infty,0)}f(g(x))dx+\int_{(0,\infty)}f(g(x))dx\\ &=&\int_{\mathbb{R}}f(u)(g_1^{-1})'(u)du+\int_{\mathbb{R}}f(u)(g_2^{-1})'(u)du\\ &=&\int_{\mathbb{R}}f(u)\bigg[(g_1^{-1})'(u)+(g_2^{-1})'(u)\bigg]du \end{eqnarray} holds for all $f\in L^{1}(\mathbb{R})$. Thus we must have $$ (g_1^{-1})'(u)+(g_2^{-1})'(u)=1 $$ or $$ g_1^{-1}(u)+g_2^{-1}(u)=u. \tag{1} $$ for all $u\in\mathbb{R}$. Assume $g(x)$ has the form $g(x)=ax+\frac{b}{x+c}$. Thus if $g(x)$ is monotonic, then $a$ and $b$ must satisfy $ab<0$. WLOG, assume $a>0,b<0$. Using (1), it is easy to obtain $a=1$. So $$ g(x)=x+\frac{b}{x+c}, b<0 $$ xpaul The $(g_i^{-1})'$ in the first formula should actually be $|(g_i^{-1})'|$ (because for example $\int f(-x)=\int f$.) – David C. Ullrich Apr 6 '18 at 11:46 @DavidC.Ullrich, you are right. Thanks. – xpaul Apr 6 '18 at 14:01 Consider $g(x) = x+a$ for a fixed $a\in \mathbb R$. Then, using the substitution $y = x+a$, we get $dy = dx$, and thus $$\int\limits_{\mathbb R}f(g(x))dx = \int\limits_{\mathbb R}f(y)dy$$ lisyarus The functions of the form $$\phi(x) = x - \sum_{i=1}^{n-1} \frac{\rho_i}{x- \alpha_i} -\beta $$ where $\rho_i>0$ for all $1\le i \le {n-1}$ leave the Lebesgue measure invariant. The set of such functions is closed under composition. Sketch of proof: Assume $\alpha_1 < \ldots <\alpha_{n-1}$. On each of the intervals $(-\infty, \alpha_1)$, $(\alpha_1, \alpha_2)$, $\ldots$, $(\alpha_{n-2}, \alpha_{n-1})$, $(\alpha_{n-1},\infty)$ the function $\phi$ is strictly increasing from $-\infty$ to $\infty$. Therefore, for every $u \in \mathbb{R}$ the equation $$\phi(x)=u$$ has exactly $n$ distinct real roots.
From Vieta's relations we see that the sum of the roots equals $$u +\sum_{i=1}^{n-1}\alpha_i + \beta$$ Therefore, for every interval $I$ of $\mathbb{R}$ the set $\phi^{-1}(I)$ is a disjoint union of intervals of total length $|I|$. This implies $$\int_{\mathbb{R}}(\chi_I\circ \phi)\ d\mu= \int_{\mathbb{R}}\chi_I \ d \mu$$ for the characteristic function of the interval $I$. From here one concludes that $$\int f \circ\phi \ d\mu = \int f \ d\mu$$ for all $f \in L^1(\mathbb{R})$. Consider two such functions $\phi= x-\sum_1^{n-1} \frac{\rho_i}{x-\alpha_i} - \beta$ and $\psi = x - \sum_1^{n'-1}\frac{\rho'_i}{x-\alpha'_i} - \beta'$. Each of the intervals $(-\infty, \alpha'_1)$, $(\alpha'_1,\alpha'_2)$, $\ldots$, $(\alpha'_{n'-1},\infty)$ gets divided into intervals that map to one of $(-\infty, \alpha_1)$, $(\alpha_1, \alpha_2)$, $\ldots$, $(\alpha_{n-2}, \alpha_{n-1})$, $(\alpha_{n-1},\infty)$. We conclude that $\mathbb{R}$ gets divided into $n\cdot n'$ intervals that are each mapped strictly increasingly onto $(-\infty, \infty)$ by $\phi\circ \psi$. The rational function $\phi\circ \psi$ has therefore at least $n\cdot n'-1$ real finite poles and a pole at infinity. Since it is of degree $n\cdot n'$ it must have a partial fraction decomposition of the form $$x - \sum_1^{n n'-1} \frac{\rho''_i}{x- \alpha''_i}- \beta''$$ Now one checks that the $\rho''_i$ must be positive. Orest Bucicovschi
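The identity for $g(x)=x-1/x$ asked about above can also be checked numerically. The following sketch is my own addition (not part of the thread); the test function $f(x)=e^{-x^2}$, the truncation at $|x|=20$ and the cutoff near the singularity are arbitrary choices that keep the neglected mass far below the quadrature error:

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule (avoids NumPy version differences around np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

f = lambda u: np.exp(-u**2)      # any integrable test function would do
g = lambda x: x - 1.0 / x

x = np.linspace(-20.0, 20.0, 400001)     # the Gaussian tail beyond |x| = 20 is negligible
lhs = trapezoid(f(x), x)

xp = np.linspace(1e-3, 20.0, 400001)     # positive half-line, singularity at 0 excluded
xn = -xp[::-1]                           # negative half-line, mirrored grid
rhs = trapezoid(f(g(xp)), xp) + trapezoid(f(g(xn)), xn)

print(lhs, rhs)  # both agree with sqrt(pi) = 1.7724538... to several decimals
```

The close agreement is exactly what the substitution argument in the question predicts.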
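A multi-pole instance of Glasser's master theorem (quoted in one of the answers above) can be verified the same way. This is again my own sketch: the transform $\varphi(x)=x-\frac{1}{x-1}-\frac{2}{x+2}$, with poles at $1$ and $-2$, is an arbitrary example, and the integral of $f\circ\varphi$ is summed over the three branches between the poles:

```python
import numpy as np

def f(u):
    return np.exp(-u**2)                     # test function with integral sqrt(pi)

def phi(x):                                  # Glasser transform with poles at 1 and -2
    return x - 1.0 / (x - 1.0) - 2.0 / (x + 2.0)

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

eps = 1e-9                                   # keep grid points off the poles
branches = [(-20.0, -2.0 - eps), (-2.0 + eps, 1.0 - eps), (1.0 + eps, 20.0)]
total = 0.0
for a, b in branches:
    x = np.linspace(a, b, 200001)
    total += trapezoid(f(phi(x)), x)

print(total, np.sqrt(np.pi))  # the two values agree to several decimals
```

On each branch $\varphi$ increases from $-\infty$ to $+\infty$, so each branch contributes a full copy of the domain of $f$, and the three contributions sum to $\int f$.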
Research | Open | Published: 02 May 2015 A graph modification approach for finding core–periphery structures in protein interaction networks Sharon Bruckner1, Falk Hüffner2 & Christian Komusiewicz2 Algorithms for Molecular Biology, volume 10, Article number: 16 (2015) The core–periphery model for protein interaction (PPI) networks assumes that protein complexes in these networks consist of a dense core and a possibly sparse periphery that is adjacent to vertices in the core of the complex. In this work, we aim at uncovering a global core–periphery structure for a given PPI network. We propose two exact graph-theoretic formulations for this task, which aim to fit the input network to a hypothetical ground truth network by a minimum number of edge modifications. In one model each cluster has its own periphery, and in the other the periphery is shared. We first analyze both models from a theoretical point of view, showing their NP-hardness. Then, we devise efficient exact and heuristic algorithms for both models and finally perform an evaluation on subnetworks of the S. cerevisiae PPI network. A fundamental task in the analysis of PPI networks is the identification of protein complexes and functional modules. Herein, a basic assumption is that complexes in a PPI network are strongly connected among themselves and weakly connected to other complexes [1]. This assumption is usually too strict. To obtain a more realistic network model of protein complexes, several approaches incorporate the core–attachment model of protein complexes [2]. In this model, a complex is conjectured to consist of a stable core plus some attachment proteins, which have only transient interactions with the core. In graph-theoretic terms, the core thus is a dense subnetwork of the PPI network. The attachment (or: periphery) is less dense, but has edges to one or more cores. Current methods employing this type of modeling are based on seed growing [3-5].
Here, an initial set of promising small subgraphs is chosen as cores. Then, each core is separately greedily expanded by adding vertices to its core or its attachment (in each step, a vertex maximizing some specific objective function is chosen). The aim of these approaches was to predict protein complexes [4,5] or to reveal biological features that are correlated with topological properties of core–periphery structures in networks [3]. In this work, we use core–periphery modeling in a different context. Instead of searching for local core–periphery structures, we attempt to unravel a global core–periphery structure in PPI networks. To this end, we hypothesize that the true network consists of several core–periphery structures. We propose two precise models to describe this. In the first model, the core–periphery structures are disjoint. In the second model, the peripheries may interact with different cores, but the cores are disjoint. Then, we fit the input data to each formal model and evaluate the results on several PPI networks. Our approach. In spirit, our approach is related to the clique-corruption model of the CAST algorithm for gene expression data clustering [6]. In this model, the input is a similarity graph where edges between vertices indicate similarity. The hypothesis is that the objects corresponding to the vertices belong to disjoint biological groups of similar objects, the clusters. In the case of gene expression data, these are assumed to be groups of genes with the same function. Assuming perfect measurements, the similarity graph is a cluster graph. Definition 1. A graph G is a cluster graph if each connected component of G is a clique. Because of stochastic measurement noise, the input graph is not a cluster graph. The task is to recover the underlying cluster graph from the input graph. Under the assumption that the errors are independent, the most likely cluster graph is one that disagrees with the input graph on a minimum number of edges. 
Such a graph can be found by applying a minimum number of edge modifications (that is, edge insertions or edge deletions) to the input graph. This paradigm directly leads to the optimization problem CLUSTER EDITING [7-9]. We now apply this approach to our hypothesis that there is a global core–periphery structure in the PPI networks. In both models detailed here, we assume that all proteins of each core interact with each other; this implies that each core is a clique. We also assume that the proteins in the periphery interact only with the cores but not with each other. Hence, the peripheries are independent sets. In the first model, we assume that ideally the protein interactions give rise to vertex-disjoint core–periphery structures, that is, there are no interactions between different cores and no interactions between cores and peripheries of other cores. Then each connected component has at most one core which is a clique and at most one periphery which is an independent set. This is precisely the definition of a split graph. Definition 2. A graph G=(V,E) is a split graph if V can be partitioned into V 1 and V 2 such that G[ V 1] is an independent set and G[ V 2] is a clique. We call the vertices in V 1 periphery vertices and the vertices in V 2 core vertices. Note that the partition for a split graph is not always unique. Split graphs have been previously used to model core–periphery structures in social networks [10]. There, however, the assumption is that the network contains exactly one core–periphery structure. We assume that each connected component is a split graph; we call graphs with this property split cluster graphs. Our fitting model is described by the following optimization problem. SPLIT CLUSTER EDITING Input: An undirected graph G=(V,E). Task: Transform G into a split cluster graph by applying a minimum number of edge modifications.
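To make the target graph class concrete, the split cluster property can be checked componentwise. The sketch below is illustrative only: it relies on the classical degree-sequence characterization of split graphs by Hammer and Simeone (the text later notes that split graphs are recognizable by their degree sequence), with graphs given as plain vertex lists and edge pairs:

```python
def is_split(degrees):
    # Hammer-Simeone: a graph is a split graph iff its degree sequence
    # d1 >= ... >= dn satisfies  sum_{i<=m} d_i = m(m-1) + sum_{i>m} d_i,
    # where m = max{i : d_i >= i-1}.
    d = sorted(degrees, reverse=True)
    m = max((i for i in range(1, len(d) + 1) if d[i - 1] >= i - 1), default=0)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])

def components(vertices, adj):
    # connected components by iterative depth-first search
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_split_cluster_graph(vertices, edges):
    # every connected component must induce a split graph
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return all(is_split([len(adj[v]) for v in comp])
               for comp in components(vertices, adj))
```

For instance, a four-vertex path passes (core = the middle edge, periphery = the endpoints), while a four-cycle or a five-vertex path fails.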
In our second model, we allow the vertices in the periphery to be attached to an arbitrary number of cores, thereby connecting the cores. In this model, we thus assume that the cores are disjoint cliques and the vertices of the periphery are an independent set. Such graphs are called monopolar [11]. Definition 3. A graph is monopolar if its vertex set can be two-partitioned into V 1 and V 2 such that G[ V 1] is an independent set and G[ V 2] is a cluster graph. The partition (V 1,V 2) is called monopolar partition. Again, we call the vertices in V 1 periphery vertices and the vertices in V 2 core vertices. Our second fitting model now is the following. MONOPOLAR EDITING Input: An undirected graph G=(V,E). Task: Transform G into a monopolar graph by applying a minimum number of edge modifications and output a monopolar partition. Figure 1 shows an example graph along with optimal solutions for SPLIT CLUSTER EDITING and MONOPOLAR EDITING and, for comparison, CLUSTER EDITING. Clearly, the models behind SPLIT CLUSTER EDITING and MONOPOLAR EDITING are simplistic and cannot completely reflect biological reality. For example, subunits of protein complexes consisting of two proteins that first interact with each other and subsequently with the core of a protein complex are supported by neither of our models. Nevertheless, our models are less simplistic than pure clustering models that attempt to divide protein interaction networks into disjoint dense clusters. Furthermore, there is a clear trade-off between model complexity, algorithmic feasibility of models, and interpretability. An example input and optimal solutions to CLUSTER EDITING, SPLIT CLUSTER EDITING, and MONOPOLAR EDITING. Dashed edges are edge deletions, bold edges are edge insertions. CLUSTER EDITING and SPLIT CLUSTER EDITING produce the same two clusters but SPLIT CLUSTER EDITING assigns the blue vertex of the size-four cluster to the periphery.
In an optimal solution to MONOPOLAR EDITING the two blue vertices are in the periphery which is shared between two clusters. Note that the number of necessary edge modifications decreases from CLUSTER EDITING to SPLIT CLUSTER EDITING to MONOPOLAR EDITING. Further related work. In the following, we point to some related work in the literature that is not directly relevant for our algorithms and their evaluation but either considers models of core–periphery structure or optimization problems that are related to SPLIT CLUSTER EDITING or MONOPOLAR EDITING. Della Rossa et al. [12] proposed to compute core–periphery profiles that assign to each vertex a numerical coreness value. The computation of these values is based on a heuristic random-walk model. Their evaluation showed that the S. cerevisiae PPI network exhibits a clear core–periphery structure which significantly deviates from random networks with the same degree distribution. An adaption of the Markov Clustering algorithm MCL that incorporates the core-attachment model for protein complexes was presented by Srihari et al. [13]. The SPLIT EDITING problem is closely related to SPLIT CLUSTER EDITING as it asks to transform a graph into a (single) split graph by at most k edge modifications. SPLIT EDITING is, somewhat surprisingly, solvable in linear time [14]; in fact, the number of required modifications depends only on the degree sequence. Thus, split graphs are recognizable by their degree sequence. Another problem that is related to CLUSTER EDITING is COGRAPH EDITING which asks to destroy induced P 4's by modifying at most k edges [15]. COGRAPH EDITING has applications in the computation of phylogenies [16]. In a cograph, every connected component has diameter at most two; in split cluster graphs every connected component has diameter at most three. Finally, a further approach of fitting PPI networks to specific graph classes was proposed by Zotenko et al. 
[17] who find for a given PPI network a close chordal graph, that is, a graph without induced cycles of length four or more. The modification operation is insertion of edges. One notable difference is that the algorithm may be unable to construct a chordal graph from the input network [17]. Preliminaries. We consider undirected simple graphs G=(V,E) where n:=|V| denotes the number of vertices and m:=|E| denotes the number of edges. The open neighborhood of a vertex u is defined as N(u):={v∣{u,v}∈E}. We denote the neighborhood of a set U by $N(U):= \bigcup _{u\in U} N(u)\setminus U$ . The subgraph induced by a vertex set S is defined as G[S]:=(S,{{u,v}∈E∣u,v∈S}). One approach to solving NP-hard problems is based on the concept of fixed-parameter tractability [18,19]. Herein, instances I of a problem come along with a parameter k, for example the size of a solution. The aim is to obtain a fixed-parameter algorithm, that is, an algorithm with running time $f(k)\cdot n^{O(1)}$ where f depends only on k. Such an algorithm is efficient if k is small and f does not grow too rapidly. The exponential-time hypothesis (ETH) states that there is a constant c>1 such that 3-SAT cannot be solved in $(c-\varepsilon)^n$ time for any ε>0 [20]. Assuming the ETH, tight running time lower bounds can be shown; a survey on ETH-based running time lower bounds is given by Lokshtanov et al. [21]. If it is known that a parameterized problem L does not admit a $2^{o(k)}\cdot n^{O(1)}$-time algorithm (assuming the ETH), then a polynomial-time reduction from this problem to a problem L ′ with parameter k ′=O(k) implies that L ′ cannot be solved in $2^{o(k')}\cdot n^{O(1)}$ time (assuming the ETH). Note that the parameter in this case may also be the number of vertices or the number of edges of a graph.
Combinatorial properties and complexity Before presenting concrete algorithmic approaches for the two optimization problems, we show some properties of split cluster graphs and monopolar graphs which will be useful in the various algorithms. Furthermore, we present computational hardness results for the problems which will justify the use of integer linear programming (ILP) and heuristic approaches. Each connected component of the solution has to be a split graph. These graphs can be characterized by forbidden induced subgraphs (see Figure 2). The forbidden induced subgraphs for split graphs (2K 2, C 4, and C 5) and for split cluster graphs (C 4, C 5, P 5, necktie, and bowtie). Lemma 1 ([22]). A graph G is a split graph if and only if G does not contain an induced subgraph that is a pair of disjoint edges or a cycle of four or five edges, that is, G is (2K 2,C 4,C 5)-free. To obtain a characterization for split cluster graphs, we need to characterize the existence of 2K 2's within connected components. The following lemma will be useful for this purpose. Lemma 2. If a connected graph contains a 2K 2 as induced subgraph, then it contains a 2K 2=(V ′,E ′) such that there is a vertex v∉V ′ that is adjacent to at least one vertex of each K 2 of (V ′,E ′). Proof. Let G contain the 2K 2 {x 1,x 2},{y 1,y 2} as induced subgraph. Without loss of generality, let P=(x 1=p 1,p 2,…,p k =y 1) be a shortest path among all paths connecting some x i with some y j . Clearly, k>2. If k=3, then x 1 and y 1 are both adjacent to p 2. Otherwise, if k=4, then {x 2,x 1=p 1},{p 3,p 4=y 1} is a 2K 2 and x 1 and p 3 are both adjacent to p 2. Finally, if k>4, then P contains a P 5 as induced subgraph. The four outer vertices of this P 5 induce a 2K 2 whose K 2's each contain a neighbor of the middle vertex. We can now provide a characterization of split cluster graphs (see Figure 2). Theorem 1. A graph G is a split cluster graph if and only if G is a (C 4,C 5,P 5,necktie,bowtie)-free graph.
Proof. Let G be a split cluster graph, that is, every connected component is a split graph. Clearly, G does not contain a C 4 or C 5. If a connected component of G contains a P 5, then omitting the middle vertex of the P 5 yields a 2K 2, which contradicts the assumption that the connected component is a split graph. The same argument shows that the graph cannot contain a necktie or bowtie. Conversely, let G be (C 4,C 5,P 5,necktie,bowtie)-free. Clearly, no connected component contains a C 4 or C 5. Assume towards a contradiction that a connected component contains a 2K 2 consisting of the K 2's {a,b} and {c,d}. Then according to Lemma 2 there is a vertex v which is, without loss of generality, adjacent to a and c. If no other edges between the 2K 2 and v exist, then {a,b,v,c,d} is a P 5. Adding exactly one of {b,v} or {d,v} creates a necktie, and adding both edges results in a bowtie. No other edges are possible, since there are no edges between {a,b} and {c,d}. This leads to a linear-time algorithm for checking whether a graph is a split cluster graph. Theorem 2. There is an algorithm that determines in O(n+m) time whether a graph is a split cluster graph and outputs a forbidden induced subgraph if this is not the case. Proof. For each connected component, we run an algorithm by Heggernes and Kratsch [23] that checks in linear time whether a graph is a split graph, and if not, produces a 2K 2, C 4, or C 5. If the forbidden subgraph is a C 4 or C 5, we are done. If it is a 2K 2, then we find, using the method described in the proof of Lemma 2, in linear time an induced 2K 2 such that there is a vertex v that is adjacent to at least one vertex in each K 2. The subgraph induced by this 2K 2 plus v is either a P 5, necktie, or bowtie, as shown in the proof of Theorem 1. In contrast, SPLIT CLUSTER EDITING is NP-hard even in restricted cases. Before proving the hardness, we make the following observation that follows from a simple local improvement argument.
It will be used in our hardness proof and also in our algorithms. Observation 1. There is an optimal solution to SPLIT CLUSTER EDITING such that every degree-one vertex whose neighbor has degree at least two is a periphery vertex, and no inserted edge is incident with a periphery vertex. Theorem 3. SPLIT CLUSTER EDITING is NP-hard even on graphs with maximum degree 10. Further, it cannot be solved in $2^{o(k)}\cdot n^{O(1)}$ or $2^{o(n)}\cdot n^{O(1)}$ time if the exponential-time hypothesis (ETH) is true. Proof. We reduce from CLUSTER EDITING: Input: An undirected graph G=(V,E) and an integer k. Question: Can G be transformed into a cluster graph by applying at most k edge modifications? CLUSTER EDITING is NP-hard [24], even if the maximum degree of the input graph is five [25] and it cannot be solved in $2^{o(k)}\cdot n^{O(1)}$ time assuming ETH [25,26]. The reduction works as follows; we assume that the original instance does not contain isolated vertices. Given an instance (G,k) of CLUSTER EDITING, build a graph G ′=(V ′,E ′) that has the same vertices and edges as G and degG(v) additional degree-one vertices attached to each v∈V. We show that G can be transformed by at most k edge modifications into a cluster graph if and only if G ′ has a split cluster editing set of size at most k. First, if a set S of at most k edge modifications transforms G into a cluster graph $\tilde {G}$ , then performing the same modifications on G ′ transforms G ′ into a split cluster graph $\tilde {G'}$ : Each connected component of $\tilde {G'}$ contains a clique K of $\tilde {G}$ plus degG(v) degree-one vertices adjacent to each v∈K. The set of these degree-one vertices is an independent set. For the other direction, we show that there is a minimum-cardinality edge modification set S ′ that transforms G ′ into a split cluster graph $\tilde {G}'$ , such that performing S ′ on G transforms G into a cluster graph.
By Observation 1 and the fact that each vertex in G has degree at least one, we can assume that every vertex in V ′∖V is a periphery vertex in $\tilde {G}'$ . Consider some vertex v∈V. If v is a periphery vertex in $\tilde {G}'$ , then all degG(v) edges between v and V ′∖V are deleted (there are no edges between periphery vertices). Then, however, a solution with the same cost is to delete all degG(v) edges between v and V instead. This solution makes v a core vertex with neighbors in V ′ only. Hence, we can assume that S ′ makes every vertex in V a core vertex. Since $\tilde {G'}$ is a split cluster graph, each core is a clique and different cores are disjoint. Hence, S ′ transforms G into a cluster graph. This shows the correctness of the reduction. The hardness results follow from the previous hardness results and the fact that the solution size remains the same and that the maximum degree of the constructed graph G ′ is exactly twice the maximum degree of G. This hardness result motivates the study of algorithmic approaches such as fixed-parameter algorithms or ILP formulations. For example, SPLIT CLUSTER EDITING is fixed-parameter tractable for the parameter number of edge modifications k by the following search tree algorithm: Check whether the graph contains a forbidden subgraph. If this is the case, branch into the possibilities to destroy this subgraph. In each recursive branch, the number of allowed edge modifications decreases by one. Furthermore, since the largest forbidden subgraph has five vertices, at most ten possibilities for edge insertions or deletions have to be considered to destroy a forbidden subgraph. By Theorem 2, forbidden subgraphs can be found in O(n+m) time. Altogether, this implies the following. SPLIT CLUSTER EDITING can be solved in $O(10^k\cdot(n+m))$ time. This result is purely of theoretical interest. With further improvements of the search tree algorithm, practical running times might be achievable.
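As a concrete illustration of this branching strategy, here is a hedged toy implementation. It simplifies in two ways: the witness search is a brute-force scan over 4- and 5-vertex subsets instead of the linear-time routine of Theorem 2, and the split-graph test uses the degree-sequence characterization underlying the linear-time SPLIT EDITING algorithm [14]. Graphs are represented as sets of frozenset pairs:

```python
from itertools import combinations

def _is_split(degrees):
    # Hammer-Simeone degree-sequence test for split graphs
    d = sorted(degrees, reverse=True)
    m = max((i for i in range(1, len(d) + 1) if d[i - 1] >= i - 1), default=0)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])

def _is_split_cluster(vertices, edges):
    # every connected component must be a split graph
    adj = {v: {u for e in edges if v in e for u in e if u != v} for v in vertices}
    seen = set()
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u] - comp)
        seen |= comp
        if not _is_split([len(adj[v]) for v in comp]):
            return False
    return True

def _witness(vertices, edges):
    # brute force: a <=5-vertex set inducing a non-split-cluster subgraph;
    # by Theorem 1 such a set exists iff the graph is not split cluster
    for size in (4, 5):
        for S in combinations(vertices, size):
            if not _is_split_cluster(S, {e for e in edges if e <= set(S)}):
                return S
    return None

def split_cluster_editing(vertices, edges, k):
    """Search tree: return a set of at most k pairs whose toggling
    (insertion/deletion) yields a split cluster graph, or None."""
    S = _witness(vertices, edges)
    if S is None:
        return set()          # already a split cluster graph
    if k == 0:
        return None           # budget exhausted but a witness remains
    for p in map(frozenset, combinations(S, 2)):
        sol = split_cluster_editing(vertices, edges ^ {p}, k - 1)
        if sol is not None:
            return sol ^ {p}  # symmetric difference: p may be re-toggled
    return None
```

Because a toggled pair may be toggled back deeper in the recursion, the branches are combined by symmetric difference rather than union.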
For example, one could focus on improving the base of the exponential factor by a more elaborate case distinction, either designed manually (e.g. [27]) or automatically [28]. Another approach could be to study parameterized data reduction known as kernelization [18,19]. Monopolar graphs The class of monopolar graphs is hereditary, and thus it is characterized by forbidden induced subgraphs. The set of minimal forbidden induced subgraphs, however, is infinite [29]; for example, among graphs with five or fewer vertices, only the wheel W 4 is forbidden, but there are 11 minimal forbidden subgraphs with six vertices. In contrast to the recognition of split cluster graphs, which is possible in linear time by Theorem 2, deciding whether a graph is monopolar is NP-hard [30]. Algorithmic research is focused on the recognition problem for special graph classes. A fairly general such approach uses a 2-SAT formulation [31,32]. Thus MONOPOLAR EDITING is NP-hard already for k=0 edge modifications. As a consequence, it is not fixed-parameter tractable with respect to the number of edge modifications k unless P=NP (in contrast to SPLIT CLUSTER EDITING). Solution approaches To evaluate our model, it is helpful to obtain optimal solutions to eliminate or at least estimate the systematic bias that might be introduced by heuristics. We use an integer linear programming (ILP) formulation for this. Since it is not able to solve the hardest instances, we also present a heuristic based on simulated annealing. Forbidden subgraph ILP From Theorem 1, we can easily derive an ILP formulation for SPLIT CLUSTER EDITING. For each (undirected) pair of vertices {u,v}, we introduce binary variables e uv indicating whether the edge {u,v} is present in the solution graph.
Defining $\bar e_{uv} := 1 - e_{uv}$ , we have $$\text{minimize} \quad \sum_{\{u, v\} \in E} \bar e_{uv} + \sum_{\{u, v\} \notin E} e_{uv} \quad \text{subject to} \tag{1}$$ $$\sum_{\{u, v\} \in E_{F}} \bar e_{uv} + \sum_{\{u, v\} \notin E_{F}} e_{uv} \geq 1 \quad \forall (V_{F}, E_{F}) \in \mathcal{F}, \tag{2}$$ where $\mathcal F$ is the set of forbidden induced subgraphs on V. A constraint of type (2) forces that at least one edge differs from the forbidden subgraph. Since an n-vertex graph may contain $\Omega(n^5)$ forbidden subgraphs, in practice we use row generation (lazy constraints) and add in a callback only the constraints that are violated; by Theorem 2, we can find a violated constraint in linear time. The effectiveness of ILP solvers is largely based on getting good lower bounds from the LP relaxation. A common technique to improve this further is to add cutting planes, that is, inequalities that are already implied by any integral solution, but that cut off part of the polytope of the LP relaxation. We can derive some cutting planes by strengthening the forbidden subgraph constraints. For a C 5, at least two edits are required to obtain a split cluster graph, so we can replace the 1 on the right-hand side by a 2. For a P 5 uvwxy, we can use $$\bar e_{uv} + \bar e_{vw} + \bar e_{wx} + \bar e_{xy} + \tfrac{1}{2} e_{uw} + e_{vx} + \tfrac{1}{2} e_{wy} + \tfrac{1}{2} e_{xu} + \tfrac{1}{2} e_{yv} \geq 1. \tag{3}$$ A factor $\tfrac {1}{2}$ is permissible for edits that require at least one more edit; for example inserting {u,w} produces a necktie. The summand $e_{uy}$ is omitted, since this insertion produces a C 5, which needs at least two more edits. Similar strengthenings are possible for neckties and bowties.
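To make the constraint shapes concrete, the following sketch assembles the coefficient data of a type-(2) inequality and of the strengthened P 5 cut. The representation (dictionaries mapping vertex pairs to coefficients, plus a right-hand side) is an illustrative assumption, not the solver interface used in the paper:

```python
from itertools import combinations

def forbidden_subgraph_cut(VF, EF):
    """Inequality of type (2) for a forbidden induced subgraph (VF, EF):
    the (1 - e_uv) terms over its edges plus the e_uv terms over its
    non-edges must sum to at least 1."""
    EF = {frozenset(e) for e in EF}
    pairs = {frozenset(p) for p in combinations(VF, 2)}
    deletions = {p: 1.0 for p in pairs & EF}    # coefficients of (1 - e_uv)
    insertions = {p: 1.0 for p in pairs - EF}   # coefficients of e_uv
    return deletions, insertions, 1.0           # ..., right-hand side

def p5_cut(u, v, w, x, y):
    """Strengthened cut for an induced P5 u-v-w-x-y: the four path-edge
    deletions keep coefficient 1, insertions that force at least one
    further edit get coefficient 1/2, and e_uy is omitted (inserting
    {u,y} creates a C5, which needs at least two more edits)."""
    deletions = {frozenset(p): 1.0 for p in [(u, v), (v, w), (w, x), (x, y)]}
    insertions = {frozenset((u, w)): 0.5, frozenset((v, x)): 1.0,
                  frozenset((w, y)): 0.5, frozenset((x, u)): 0.5,
                  frozenset((y, v)): 0.5}
    return deletions, insertions, 1.0
```

Within a row-generation loop, such a tuple would be emitted whenever the separation routine of Theorem 2 reports a forbidden subgraph in the current integral solution.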
Partition variable ILP Since monopolar graphs have infinitely many forbidden subgraphs, which are NP-hard to find, the forbidden subgraph ILP formulation is not feasible for MONOPOLAR EDITING. We show an alternative formulation based on the observation that if we correctly guess the partition into core and independent set vertices, we can get a simpler forbidden subgraph characterization for both split cluster graphs and monopolar graphs. Lemma 3. Let G=(V,E) be a graph and $C \dot {\cup } I = V$ a partition of the vertices. Then G is a split cluster graph with core vertices C and periphery vertices I if and only if G does not contain an edge with both endpoints in I, nor an induced P 3 with both endpoints in C. Proof. " ⇒": We show the contraposition. Thus assume that there is an edge with both endpoints in I or an induced P 3 with both endpoints in C. Then I is not an independent set or C does not form a clique in each connected component, respectively. " ⇐": We again show the contraposition. If G is not a split cluster graph with core vertices C and periphery vertices I, then it must contain an edge with both endpoints in I, or C∩H does not induce a clique for some connected component H of G. In the first case we are done; in the second case, there are two vertices u,v∈C in the same connected component with {u,v}∉E. Consider a shortest path (u=p 1,…,p l =v) from u to v. If it contains a periphery vertex p i ∈I, then p i−1,p i ,p i+1 forms a forbidden subgraph. Otherwise, p 1,p 2,p 3 is one. For annotated monopolar graphs, the situation is even simpler. By Definition 3, the two-partition into C and I exactly demands that I is an independent set and G[C] is a cluster graph or, equivalently, P 3-free. Lemma 4. Let G=(V,E) be a graph and $C \dot {\cup } I = V$ a partition of the vertices. Then G is a monopolar graph with core vertices C and periphery vertices I if and only if it does not contain an edge with both endpoints in I, nor an induced P 3 whose vertices are contained in C. Proof. " ⇒": This follows directly from Definition 3. " ⇐": If G is not monopolar with core vertices C and periphery vertices I, then it must contain an edge with both endpoints in I, or G[C] is not a cluster graph. In the first case we are done; in the second case, there is a P 3 with all vertices in C, since that is the forbidden induced subgraph for cluster graphs. From Lemma 3, we can derive an ILP formulation for SPLIT CLUSTER EDITING. As before, we use binary variables e uv indicating whether the edge {u,v} is present in the solution graph. In addition, we introduce binary variables c u indicating whether a vertex u is part of the core. Defining $\bar e_{uv} := 1 - e_{uv}$ and $\bar c_{u} := 1 - c_{u}$ , and fixing an arbitrary order on the vertices, we have $$\text{minimize} \quad \sum_{\{u, v\} \in E} \bar e_{uv} + \sum_{\{u, v\} \notin E} e_{uv} \quad \text{subject to} \tag{4}$$ $$c_{u} + c_{v} + \bar e_{uv} \geq 1 \quad \forall u \neq v \tag{5}$$ $$\bar e_{uv} + \bar e_{vw} + e_{uw} + \bar c_{u} + \bar c_{w} \geq 1 \quad \forall u \neq v, v \neq w, w > u. \tag{6}$$ Herein, Constraint (5) forces that the periphery vertices are an independent set and Constraint (6) forces that core vertices in the same connected component form a clique. For MONOPOLAR EDITING, we replace Constraint (6) by $$\bar e_{uv} + \bar e_{vw} + e_{uw} + \bar c_{u} + \bar c_{v} + \bar c_{w} \geq 1 \quad \forall u \neq v, v \neq w, w > u, \tag{7}$$ which forces that the graph induced by the core vertices is a cluster graph. Data reduction (preprocessing) proved very effective for solving CLUSTER EDITING optimally [8,9]. Indeed, any instance can be reduced to one of at most 2k vertices [33,34], where k is the number of edge modifications. Unfortunately, the data reduction rules we devised for SPLIT CLUSTER EDITING were not applicable to our real-world test instances.
However, Observation 1 allows us to fix the values of some variables of Constraints (4) to (6) in the partition variable ILP for SPLIT CLUSTER EDITING: if a vertex u has only one vertex v as neighbor and deg(v)>1, then set c u =0 and e uw =0 for all w≠v. Since our instances have many degree-one vertices, this considerably reduces the size of the ILPs. The integer linear programming approach is not able to solve the hardest of our instances. Thus, we employ the well-known simulated annealing heuristic. This is a local search method, where we try a random modification of our current solution, and accept it if it improves the objective; but to escape local minima, we also accept it with a small probability if it makes the objective worse. More precisely, a change in the objective of Δ is accepted with probability exp(−Δ/T), where the factor T is reduced over the course of the algorithm down to zero, such that the algorithm initially explores a larger part of the search space, but eventually settles in a local minimum. We restart the simulated annealing algorithm, where each repetition has a fixed number of steps. For SPLIT CLUSTER EDITING, we start with a clustering where each vertex is a singleton. As random modification, we move a vertex to a cluster that contains one of its neighbors. Since this allows only a decrease in the number of clusters, we also allow moving a vertex into an empty cluster. For a fixed clustering, the optimal number of modifications can be computed in linear time by counting the edges between clusters and computing for each cluster a solution for SPLIT EDITING in linear time [14]. For MONOPOLAR EDITING, we additionally have a set representing the shared periphery. Accordingly, we allow moving a vertex into another cluster or into the independent set. 
Here, the optimal number of modifications for a fixed clustering can also be calculated in linear time: all edges in the independent set are deleted, all edges between clusters are deleted, and all missing edges within clusters are added. We test exact algorithms and heuristics for SPLIT CLUSTER EDITING (SCE) and MONOPOLAR EDITING (ME) on several PPI networks, and perform a biological evaluation of the modules found. We use three known methods for comparison. The algorithm by Luo et al. [3] ("Luo" for short) produces clusters with core and periphery, like SCE, but the clusters may overlap and might not cover the whole graph. Luo produces two types of core–periphery structures, those with a dense core, called k-plex core, and those with a star core. In the comparison, we consider only the structures with k-plex cores, since this model is closer to our models. For periphery, we consider only neighbors of the core (called 1-periphery by Luo et al. [3]) and not vertices with distance two to the core (called 2-periphery). The SCAN algorithm [35], like ME, partitions the graph vertices into "clusters", which we interpret as cores, and "hubs" and "outliers", which we interpret as periphery. SCAN is run with several parameter combinations, obtaining different results. For consistency, we select the results where the clusters have the highest modularity, as reported by the SCAN program itself. In addition, we compare the solutions of SCE and ME with optimal solutions of Cluster Editing (CE) (see Section 'Split cluster editing' for a formal problem definition). The result of such a solution is a cluster graph and the size-1 clusters of this cluster graph are an independent set. Accordingly, we interpret the size-1 clusters as periphery. We solve CE by a simple ILP with row generation, using the characterization by the forbidden subgraph P 3. Experimental setup Implementation details. 
The ILPs and the simulated annealing heuristic were implemented in C++ and compiled with the GNU g++ 4.7.2 compiler. As ILP solver, we used CPLEX 12.6.0. For both formulations, we use the heuristic solution found after 10 rounds as MIP start. For the forbidden subgraph formulation (Section 'Forbidden subgraph ILP'), in a lazy constraint callback, we find a forbidden subgraph using Theorem 2 and add the corresponding inequality of type (2) to the model. We then delete one of its vertices and try to find another forbidden subgraph, adding up to n inequalities per callback. For the partition variable formulation (Section 'Partition variable ILP'), we initially add all independent set constraints (5) and those P 3 constraints ((6), (7)) for which the vertices u,v,w induce a P 3 in the input graph. In a lazy constraint callback, we add violated P 3 constraints (usually only a few are needed). These constraints are also used as cutting planes, that is, we already add them in a cutting plane callback when they are violated by the fractional solution. In addition, we use the forbidden subgraphs C 4 and P 5 for SCE and the forbidden subgraph W 4 for ME as cutting planes (Eq. 2). In the cutting plane callbacks, we add the 500 inequalities which are violated the most, if the violation is at least 0.3 (these parameters were heuristically determined). In the simulated annealing heuristic, we use 20,000 steps and an initial T 0=1, and restart the procedure 100 times. The test machine is a 4-core 3.6 GHz Intel Xeon E5-1620 (Sandy Bridge-E) with 10 MB L3 cache and 64 GB main memory, running Debian GNU/Linux 7.0. CPLEX was allowed to use up to 8 threads, and we report wall clock times. Data. For comparison of the algorithms, we first use random graphs, where each possible edge is present with probability p, to examine variability of running times and limits of feasibility. For more realistic data, we generate subnetworks of the S. 
cerevisiae (yeast) protein interaction network from BioGRID [36]. Our networks contain only physical interactions. For each Gene Ontology (GO) term in the annotations of the Saccharomyces Genome Database (SGD) [37], we extract the subnetwork induced by only those proteins that are annotated with this term. We omit networks with fewer than 30 vertices (these can all be solved in less than one second). This yields 178 graphs with up to 2198 vertices, with a median of 66 vertices and 226 edges. For the biological evaluation, we focus on three particular subnetworks, corresponding to three essential processes: cell cycle, translation, and transcription.^a These are important subnetworks known to contain complexes. Table 1 shows some properties of these networks.

Table 1: Input properties of the process networks

Biological evaluation.

We evaluate our results using the following measures. First, we examine the coherence of the GO terms in our modules using the semantic similarity score calculated by G-SESAME [38]. We use this score to test the hypothesis that the cores are more stable than the peripheries. If the hypothesis is true, then the GO terms within a core should be more similar than the GO terms in the periphery. Hence, the pairwise similarity score within the core should be higher than in the periphery. We test only terms relating to process, not function, since proteins in the same complex play a role in the same biological process. Since ME, SCAN, and CE return multiple cores and only a single periphery, we assign to each cluster C its neighborhood N(C) as periphery. We consider only clusters with at least two core vertices and one periphery vertex. Next, we compare the resulting clusters with known protein complexes from the CYC2008 database [39].
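The subnetwork extraction step described above can be mirrored with plain dictionaries and sets. This is a hedged sketch with our own function names and data layout, not the paper's pipeline or file formats:

```python
def go_subnetworks(edges, go_annotations, min_vertices=30):
    """For each GO term, build the induced PPI subnetwork; keep large ones."""
    net_vertices = {v for e in edges for v in e}
    nets = {}
    for term, proteins in go_annotations.items():
        verts = net_vertices & set(proteins)
        if len(verts) < min_vertices:
            continue                 # small networks are omitted
        sub_edges = [(u, v) for u, v in edges if u in verts and v in verts]
        nets[term] = (verts, sub_edges)
    return nets
```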
Since the networks we analyze are subnetworks of the larger yeast network, we discard for each network the CYC2008 complexes that have less than 50% of their vertices in the current subnetwork, restrict them to proteins contained in the current subnetwork, and then discard those with fewer than three proteins. We test the overlap between the algorithm results and these complexes, treating the complexes as the "ground truth". We expect that the cores mostly correspond to complexes and that the periphery may contain complex vertices plus further vertices.

Random networks

Figure 3 shows running times for random graphs using the fastest ILP version (using partition variables and cutting planes). Each box represents 25 runs. For SCE, running times show large variation (note the logarithmic scale). Density p = 0.3 here yields harder instances than either denser or sparser instances. Already for n = 22, two instances with p = 0.3 could not be solved with the available memory, although another one takes only three seconds. For ME and p = 0.1 or p = 0.3, there are fewer outliers and the instances can be solved much more quickly than for the SCE model. Running times and the variance of running times seem to increase monotonically with density, however. Thus, for p = 0.5, SCE could be solved more quickly than ME.

Figure 3: Running times for random graphs. Left: SPLIT CLUSTER EDITING; right: MONOPOLAR EDITING. A star indicates an instance that was aborted due to insufficient memory.

The heuristic optimally solves SCE for all instances with known optimal solution; for ME, it is off by one for five instances.

PPI subnetworks

Figure 4 shows running times for the different ILP approaches on PPI subnetworks. Overall, we can observe that these instances are much easier than the random graph instances.
For SCE with the forbidden subgraph formulation, we see that strengthened inequalities such as Constraint (3) make it possible to solve more instances, and that using the P5 (in our instances the most frequent forbidden subgraph) not only as a forbidden subgraph but also as a cutting plane further improves the running time. However, neither version is as effective as the partition variable formulation. Here, using forbidden subgraphs as cutting planes has less effect, solving only one more instance. This is probably because adding the initial constraints ((5) to (7)) already produces a fairly tight relaxation. Moreover, finding the cutting planes is quite slow.

Figure 4: Running times of the different ILP formulations for the PPI subnetworks. Left: SPLIT CLUSTER EDITING; right: MONOPOLAR EDITING.

For ME, we note that instances can generally be solved slightly more quickly, consistent with the observations on sparse random networks. Using the W4 (the smallest forbidden subgraph for monopolar graphs) as a cutting plane also helps little, solving one more instance, but it might be useful for difficult instances with long running times. The heuristic for SCE finds an optimal solution for all 126 instances for which the optimal solution size is known. The ME heuristic optimally solves 104 of the 129 instances for which the optimal solution size is known. The average error is very small (0.61), but for one instance the heuristic produces a solution that is too large by 27. Possibly the independent set, which interacts with all clusters, makes local search approaches less effective here than for SCE. Figure 5 shows the running times for the fastest ILP approaches, that is, the partition variable ILPs with cuts, and the heuristics for both problems. Also shown are the running times of SCAN, LUO, and the ILP for CE. For the majority of the instances, the ILP approaches for SCE and ME are much slower than all other methods, including the ILP for CE.
SCAN and the ME heuristic are the fastest methods, solving each instance in less than a minute and most instances within a second. The SCE heuristic is substantially slower than the ME heuristic; this behavior is consistent with the observations for the ILP approaches. Finally, LUO is comparable with the SCE heuristic: it is faster than the exact ILP approaches but substantially slower than SCAN and the ME heuristic.

Figure 5: Running times of the best ILP formulations, of the two heuristics, and of LUO and SCAN for the PPI subnetworks.

Process networks

Our results are summarized in Table 2 (size statistics and average GO term coherence) and Table 3 (complex detection).

Table 2: Solution statistics and average GO term coherence for the process networks

Table 3: Complex detection statistics for the process networks

Running times and objective function.

For SCE, the ILP approach failed to solve the cell cycle and transcription networks, and for ME, it failed to solve the transcription network, with CPLEX running out of memory in each case. Thus, consistent with the previous types of instances, the theoretically harder ME problem was easier to solve in practice. This could be explained by the fact that the number k of necessary modifications is much lower, which could reduce the size of the branch-and-bound tree. For two of the three optimally solved instances, the heuristic finds the optimal solution within one minute. For the third instance (ME on the transcription network) it finds the optimal solution only after several hours; after one minute, its solution is 2.9% too large. This indicates that the heuristic gives good results, and in the following we use the heuristic solutions for the three instances not solvable by ILP. From experiments with other networks, we conjecture that the heuristic SCE solutions are optimal; we are less sure about the heuristic solutions for ME.
As for the PPI subnetworks, the SCAN algorithm runs very fast, finishing within seconds on all three networks; the LUO algorithm is considerably slower, as it needs several minutes on the translation network. CE is again slower than LUO but still considerably faster than SCE and ME.

Cluster statistics and GO term coherence.

Table 2 gives an overview of the number and average sizes of the output clusters and of their average GO term coherence in core and periphery. We say that a cluster is nontrivial if it has at least three vertices and at least two core vertices. We describe the results for the cell cycle network in more detail, since the results here are the most representative of the three networks. Then, we summarize our findings for the transcription and translation networks. The SCE solution identifies 14 nontrivial clusters; all other clusters are singletons. Only for one of the 14 nontrivial clusters is the GO term coherence lower in the core than in the periphery (for two clusters the scoring tool does not return a result, and four clusters have empty peripheries). This is in line with the hypothesis that cores have higher GO term coherence than peripheries. The ME result contains more nontrivial clusters than SCE (24). Compared to SCE, clusters have on average about the same size, but a slightly smaller core and a slightly larger periphery (recall that a periphery vertex may occur in more than one cluster). The average coherence in the cores is 0.58, lower than for SCE (0.64); this might be due to the fact that the cores are smaller for ME. On average, coherence in the periphery is much lower than in the cores, but for six clusters it is higher than in the core. SCAN identifies 7 hubs and 41 outliers, which together comprise the periphery. There are even more nontrivial clusters than for ME. Clusters are smaller than for SCE or ME; in particular, the periphery has on average only 4.4 vertices, as opposed to 7.3 for SCE or 9.8 for ME.
Coherence in the cores is similar to SCE and ME, and is likewise lower in the periphery. LUO outputs only large clusters (this is true for all subnetworks we tested). For the cell cycle network, 16 clusters are identified, each having at least 5 proteins in the core and 3 in the periphery, and the largest having 15 proteins in the core and 126 in the periphery (for SCE, one cluster has 10 proteins in the core and 56 in the periphery, and all other clusters for the three other methods have cores of at most 16 and peripheries of at most 30 vertices). The cores have much lower coherence on average than for the other methods, but again coherence in the periphery is even lower. CE outputs many nontrivial clusters; on average, the cores and peripheries are smaller than for SCE and ME. The average coherence is lower than for SCE and ME, but again the average coherence is higher in the core than in the periphery. We now describe the results for the transcription network. Again, ME outputs the smallest cores, followed by SCAN and CE. LUO again finds the largest cores and also the largest peripheries. Concerning the GO term analysis, we see a similar pattern here, in that LUO has worse coherence. The average core coherence is the highest for ME, and, unlike for CE and SCE, the average coherence is higher in the cores than in the periphery for ME. In the translation network, ME outputs the most nontrivial clusters, followed by CE and SCE. SCAN and LUO output the fewest nontrivial clusters (5 and 4, respectively). LUO has the best coherence values here. The average coherence is higher for CE than for ME, but the difference between the average core and periphery coherence is less pronounced for CE than for ME.

Complex detection.

Table 3 gives an overview of the number of detected complexes. Again, we describe the results for the cell cycle network in more detail and then summarize our findings for the transcription and translation networks.
Following our hypothesis, we say that a complex is detected by a cluster if at least 50% of the core belongs to the complex and at least 50% of the complex belongs to the cluster. Out of the seven complexes, SCE detects three without any error (the anaphase-promoting, DASH, and Far3p/Far7p/Far8p/Far9p/Far10p/Far11p complexes), and one (Mcm2-7) with an error of two additional proteins in the core that are not in the complex. The periphery contains between one and eight extra proteins that are not in the complex (which is allowed by our hypothesis). ME detects the same complexes as SCE, and additionally the mitotic checkpoint complex. For the anaphase-promoting complex, it misses one protein; all other complexes are detected without error. SCAN detects almost the same complexes as ME (it misses the Mcm2-7 complex). It also makes slightly more errors, for example having three extra proteins in the core for the anaphase-promoting complex plus one missing. LUO detects the same complexes as ME without missing any complex proteins, but it also finds more extra vertices in the cores. CE detects the same clusters as ME with a slightly higher number of missed complex proteins and extra core proteins. In the transcription network, the ME method comes out a clear winner: it detects all 11 complexes and has fewer errors than the other methods. CE detects more complexes than SCAN and SCE; LUO detects only 6 complexes for this network. In the translation network, SCE, ME, LUO, and CE detect the same four complexes. The SCAN algorithm does not seem to deal well with this network, since it does not detect any complex. LUO finds only four nontrivial clusters, corresponding to the four complexes also detected by SCE and ME; this might also explain why it has the best coherence values here.

Experiment evaluation

The coherence values for cores and peripheries indicate that a division of clusters into core and periphery makes sense.
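The two evaluation rules used in this comparison, the CYC2008 filtering from the setup and the 50%/50% detection criterion stated above, can be transcribed directly as predicates. Function and variable names below are our own:

```python
def filter_complexes(complexes, subnetwork_proteins):
    """Keep complexes with at least 50% of their proteins in the subnetwork,
    restricted to the subnetwork, and with at least three proteins left."""
    subnetwork_proteins = set(subnetwork_proteins)
    kept = []
    for cplx in map(set, complexes):
        present = cplx & subnetwork_proteins
        if len(present) >= 0.5 * len(cplx) and len(present) >= 3:
            kept.append(present)
    return kept

def detects(core, cluster, cplx):
    """A cluster detects a complex if at least 50% of the core is in the
    complex and at least 50% of the complex is in the cluster."""
    core, cluster, cplx = set(core), set(cluster), set(cplx)
    return (len(core & cplx) >= 0.5 * len(core)
            and len(cplx & cluster) >= 0.5 * len(cplx))
```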
Under the assumption that cores should be more coherent than peripheries, ME and LUO do best with respect to separating cores from peripheries. In detecting complexes, the ME method does best (20 detected), followed by CE (18), then SCE and LUO (15 each), and finally SCAN (12). This indicates that the model in which peripheries are shared is superior (note that in CE the size-1 clusters are also a shared periphery). One advantage of ME compared to CE is that the cores are smaller and thus contain fewer extra proteins which are not in the complex. Note that when comparing the number of detected complexes, SCE is at a disadvantage, since it can use each protein as periphery only once, while having large peripheries makes it easier to count a complex as detected. One approach here could be to consider clusters of size one as shared periphery (as we did for CE). The graph modification-based methods showed a more consistent behavior across the three test networks than LUO (which does not perform so well on the transcription network) and SCAN (which does not perform so well on the translation network). A further notable difference between the algorithms is that LUO outputs much larger peripheries for each cluster. Thus, the peripheries of the detected complexes contain many proteins which are not known to be in the complex (by our initial hypothesis, these extra proteins are not necessarily errors). The other four methods are much more conservative in this regard. Concerning the theoretical analysis of SPLIT CLUSTER EDITING, the following questions are open: Is SPLIT CLUSTER EDITING amenable to parameterized data reduction? That is, does SPLIT CLUSTER EDITING admit a polynomial-time reduction to a polynomial-size problem kernel (see [18] for a definition of problem kernel)? Does SPLIT CLUSTER EDITING admit a constant-factor approximation?
It would also be interesting to study the SPLIT CLUSTER DELETION problem, in which only edge deletions are allowed to transform the input graph into a split cluster graph. This variant is also NP-hard, by a reduction similar to the one presented for SPLIT CLUSTER EDITING. For MONOPOLAR EDITING it would be interesting to obtain any tractability results, for example by considering combinations of parameters. A first step here could be to study the problem of recognizing monopolar graphs more closely. There are many further variants of our models that could possibly yield better biological results or have algorithmic advantages. For instance, one could restrict the cores to have a certain minimum size. Also, instead of using split graphs as a core–periphery model, one could resort to dense split graphs [10], in which every periphery vertex is adjacent to all core vertices. Finally, one could allow some limited amount of interaction between periphery vertices. Further evaluation of the biological properties of the computed core–periphery structures also seems worthwhile. For example, it would be interesting to examine the peripheries more closely in order to determine whether SPLIT CLUSTER EDITING and MONOPOLAR EDITING are too conservative when determining the periphery of a cluster. Finally, one could explore the biological properties of those clusters that were identified by SPLIT CLUSTER EDITING or MONOPOLAR EDITING but that do not correspond to known protein complexes from the CYC2008 database (all output clusters are listed in the Additional file 1: Supplemental material).

^a To determine the protein subsets corresponding to each process, we queried BioMart [40] for all yeast genes annotated with the relevant GO terms: GO:0007049 (cell cycle), GO:0006412 (translation), and GO:0006351 (DNA-templated transcription). Note that this gives somewhat different results than using the SGD GO annotations.

References

1. Spirin V, Mirny LA.
Protein complexes and functional modules in molecular networks. PNAS. 2003; 100(21):12123–8.
2. Gavin A-C, Aloy P, Grandi P, Krause R, Boesche M, Marzioch M, et al. Proteome survey reveals modularity of the yeast cell machinery. Nature. 2006; 440(7084):631–6.
3. Luo F, Li B, Wan X-F, Scheuermann R. Core and periphery structures in protein interaction networks. BMC Bioinformatics. 2009; 10(Suppl 4):8.
4. Leung HC, Xiang Q, Yiu S-M, Chin FY. Predicting protein complexes from PPI data: a core-attachment approach. J Comput Biol. 2009; 16(2):133–44.
5. Wu M, Li X, Kwoh C-K, Ng S-K. A core-attachment based method to detect protein complexes in PPI networks. BMC Bioinformatics. 2009; 10(1):169.
6. Ben-Dor A, Shamir R, Yakhini Z. Clustering gene expression patterns. J Comput Biol. 1999; 6(3-4):281–97.
7. Shamir R, Sharan R, Tsur D. Cluster graph modification problems. Discrete Appl Math. 2004; 144(1–2):173–82.
8. Böcker S, Briesemeister S, Klau GW. Exact algorithms for cluster editing: evaluation and experiments. Algorithmica. 2011; 60(2):316–34.
9. Böcker S, Baumbach J. Cluster editing. In: Proceedings of the 9th Conference on Computability in Europe (CiE '13). LNCS. Berlin, Heidelberg: Springer; 2013. p. 33–44.
10. Borgatti SP, Everett MG. Models of core/periphery structures. Soc Netw. 1999; 21(4):375–95.
11. Chernyak ZA, Chernyak AA. About recognizing (α,β) classes of polar graphs. Discrete Math. 1986; 62(2):133–8.
12. Della Rossa F, Dercole F, Piccardi C. Profiling core-periphery network structure by random walkers. Sci Rep. 2013. Article no. 3.
13. Srihari S, Ning K, Leong H. MCL-CAw: a refinement of MCL for detecting yeast complexes from weighted PPI networks by incorporating core-attachment structure. BMC Bioinformatics. 2010; 11:504.
14. Hammer PL, Simeone B. The splittance of a graph. Combinatorica. 1981; 1(3):275–84.
15. Liu Y, Wang J, Guo J, Chen J. Complexity and parameterized algorithms for cograph editing. Theor Comput Sci. 2012; 461:45–54.
16. Hellmuth M, Wieseke N, Lechner M, Lenhof H-P, Middendorf M, Stadler PF. Phylogenomics with paralogs. PNAS. 2015; 112(7):2058–63.
17. Zotenko E, Guimarães KS, Jothi R, Przytycka TM. Decomposition of overlapping protein complexes: a graph theoretical method for analyzing static and dynamic protein associations. Algorithms Mol Biol. 2006; 1(7).
18. Downey RG, Fellows MR. Fundamentals of Parameterized Complexity. Texts in Computer Science. Berlin, Heidelberg: Springer; 2013.
19. Niedermeier R. Invitation to Fixed-Parameter Algorithms. Oxford: Oxford University Press; 2006.
20. Impagliazzo R, Paturi R, Zane F. Which problems have strongly exponential complexity? J Comput Syst Sci. 2001; 63(4):512–30.
21. Lokshtanov D, Marx D, Saurabh S. Lower bounds based on the exponential time hypothesis. Bull EATCS. 2011; 105:41–72.
22. Foldes S, Hammer PL. Split graphs. Congressus Numerantium. 1977; 19:311–5.
23. Heggernes P, Kratsch D. Linear-time certifying recognition algorithms and forbidden induced subgraphs. Nord J Comput. 2007; 14(1–2):87–108.
24. Křivánek M, Morávek J. NP-hard problems in hierarchical-tree clustering. Acta Informatica. 1986; 23(3):311–23.
25. Fomin FV, Kratsch S, Pilipczuk M, Pilipczuk M, Villanger Y. Subexponential fixed-parameter tractability of cluster editing. CoRR. 2011; abs/1112.4419.
26. Komusiewicz C, Uhlmann J. Cluster editing with locally bounded modifications. Discrete Appl Math. 2012; 160(15):2259–70.
27. Liu Y, Wang J, Xu C, Guo J, Chen J. An effective branching strategy based on structural relationship among multiple forbidden induced subgraphs. J Comb Optimization. 2015; 29(1):257–75.
28. Gramm J, Guo J, Hüffner F, Niedermeier R. Automated generation of search tree algorithms for hard graph modification problems. Algorithmica. 2004; 39(4):321–47.
29. Berger AJ. Minimal forbidden subgraphs of reducible graph properties. Discussiones Mathematicae Graph Theory. 2001; 21(1):111–7.
30. Farrugia A. Vertex-partitioning into fixed additive induced-hereditary properties is NP-hard. Electron J Combinatorics.
2004; 11(1):46.
31. Le VB, Nevries R. Complexity and algorithms for recognizing polar and monopolar graphs. Theor Comput Sci. 2014; 528:1–11.
32. Churchley R, Huang J. Solving partition problems with colour-bipartitions. Graphs Combinatorics. 2014; 30(2):353–64.
33. Cao Y, Chen J. Cluster editing: kernelization based on edge cuts. Algorithmica. 2012; 64(1):152–69.
34. Chen J, Meng J. A 2k kernel for the cluster editing problem. J Comput Syst Sci. 2012; 78(1):211–20.
35. Xu X, Yuruk N, Feng Z, Schweiger TAJ. SCAN: a structural clustering algorithm for networks. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '07). New York: ACM; 2007. p. 824–33.
36. Chatr-aryamontri A, Breitkreutz B-J, Heinicke S, Boucher L, Winter AG, Stark C, et al. The BioGRID interaction database: 2013 update. Nucleic Acids Res. 2013; 41(D1):816–23.
37. Cherry JM, Hong EL, Amundsen C, Balakrishnan R, Binkley G, Chan ET, et al. Saccharomyces Genome Database: the genomics resource of budding yeast. Nucleic Acids Res. 2012; 40(Database issue):700–5.
38. Du Z, Li L, Chen C-F, Yu PS, Wang JZ. G-SESAME: web tools for GO-term-based gene similarity analysis and knowledge discovery. Nucleic Acids Res. 2009; 37(suppl. 2):345–9.
39. Pu S, Wong J, Turner B, Cho E, Wodak SJ. Up-to-date catalogues of yeast protein complexes. Nucleic Acids Res. 2009; 37(3):825–31.
40. Kasprzyk A. BioMart: driving a paradigm change in biological data management. Database. 2011; 2011.

An extended abstract of this article appeared in the Proceedings of the 14th Workshop on Algorithms in Bioinformatics (WABI '14), volume 8701 of LNCS, Springer, pages 340–351.

International Max Planck Research School for Computational Biology and Scientific Computing, Ihnestr.
63-73, Berlin, 14195, Germany
Sharon Bruckner

Institut für Softwaretechnik und Theoretische Informatik, TU Berlin, Ernst-Reuter-Platz 7, Berlin, 10587, Germany
Falk Hüffner & Christian Komusiewicz

Correspondence to Christian Komusiewicz.

SB conceived the model and developed it with the other authors. Methods and experiments were jointly developed, and FH did the implementation. All authors read and approved the final manuscript.

Additional file 1: Supplemental material. This file contains the source code of the programs that generate the CPLEX ILPs, our input data (the three process networks), and our output clusters. All files are readable as plain text files.

Keywords: Protein complexes, Graph classes, NP-hard problems
Categorical and analytic invariants in Algebraic geometry 1 (September 14–18, 2015, Steklov Mathematical Institute, Moscow)

The aim of the conference is to bring together Japanese and Russian experts actively working in the area of algebraic and analytic geometry, homological algebra and string theory, in order to get an insight into the structure of complex varieties and certain interrelated invariants thereof, such as derived categories, semi-infinite Hodge structures, topological correlators and quantum motives, which reflect the properties of these varieties relevant to mirror symmetry.

Poster

Getting from the airport

Please use the high-speed railway called Aeroexpress (with moderately priced tickets, about 450 rubles, approx. 6 Euro). The Aeroexpress connects all international airports in Moscow with the subway stations: "Belorusskaja" for the Sheremetyevo airport, "Paveletskaja" for the Domodedovo airport, and "Kievskaja" for the Vnukovo airport. Follow the signs for Aeroexpress or Trains inside the airport. You may have to walk for up to 15 minutes inside the airport building, depending on your arrival terminal. The timetables of the Aeroexpress and other details are available on the official site http://www.aeroexpress.ru/en. The map of the Moscow Metro (subway) can be found here. You should take the orange (Kaluzhsko-Rizhskaya) line, southbound. If you come from the green line, change at Novokuznetskaya–Tretyakovskaya; if you come from the brown line, change at Oktyabrskaya. The participants of the conference are housed either in the hotel of the Steklov Mathematical Institute or in the building of the Faculty of Mathematics of the Higher School of Economics (Vavilova, 7).

Steklov Mathematical Institute

The Steklov Mathematical Institute is located within walking distance of the metro station "Akademicheskaya" ("Академическая"), orange line, South. It takes about 10–15 minutes to get from the station to the institute.
First one should take Dmitriya Ulyanova street (Дмитрия Ульянова) and then Vavilova street (Вавилова), see the map. The hotel is on the first floor. One should show a passport to the guard at the entrance of the building. Then one goes to the right till the very end and takes a stairway. A guard will help.

Vavilova, 7

The HSE Faculty of Mathematics is located within walking distance from the metro station "Leninskiy prospect" ("Ленинский проспект"), orange line, South. First one should take an unnamed street and then Vavilova street (Вавилова), see the map. Directions, pictures and other useful information are available here. One should show documents at the reception and then take the elevator. A person at the reception will help.

To get to the Steklov Institute you can walk 20 minutes straight along Vavilova street away from the center of the city, or you can take tram 14 or 39 from the stop "Metro Leninskiy prospect (Yuzhnii Vyhod)" ("Метро Ленинский проспект (южный выход)"), which is just in front of the HSE building, to the stop "Ulitsa Gubkina" ("Улица Губкина"), 4 stops along Vavilova street.

The conference sessions will take place at the Steklov Institute, conference hall, 9th floor. Turn right after entering the building. Don't use the elevators in front of the entrance: they don't go to the 9th floor. The elevators you need are located in the middle of the long corridor on the ground floor.

WiFi connection

In the Steklov Institute there is an open wireless network on the first floor (hotel) and on the 9th floor (conference hall); the network name is MIAN-FREE. Besides, there is a network MIAN with the login "mianconf" and the same password. However, some settings on the computer are required; see more details here.

Cafeteria

The cafeteria of the Steklov Institute is located on the ground floor of the building (the first floor in Russian usage). After the entrance one should go to the right till the very end.
Restaurants

Here are some restaurants nearby the Steklov Institute and Vavilova, 7. Some of the chains in Moscow: Mu-Mu (Му-Му), Elki-Palki (Елки-Палки), Shesh-Besh (Шеш-Беш).

Public transportation in Moscow

One ride costs 50 RUR; see more details here.

Bus, tram, and trolleybus: the driver sells tickets for a small surcharge. One inserts the ticket, blank side up, into the machine at the entrance of the vehicle. Keep the ticket till the end of the ride, as inspectors may check it.

Metro (subway): ticket offices are located near the turnstiles at the entrance to the metro.

ATM machines

There is an ATM machine in the Steklov Institute (it accepts Visa and MasterCard, but not Amex). It is located in the middle of the long corridor on the ground floor near the elevators. There is also an ATM machine close to the Steklov Institute at Vavilova, 23C1, see the map.

Organizing committee

Bondal Alexey Igorevich, Przyjalkowski Victor Vladimirovich, Roslyi Aleksei Andreevich, Saito Kyoji

Local organizers

Grishina Olga Valentinovna, Komarov Stanislav Igorevich, Kuznetsova Vera Vitalievna

Participants

Abuaf Roland, Brav Christopher, Efimov Alexander Ivanovich, Galkin Sergey, Hosono Shinobu, Ikeda Akishi, Ishii Akira, Karzhemanov Il'ya Vyacheslavovich, Katzarkov Ludmil, Kawamata Yujiro, Kuznetsov Alexander Gennad'evich, Logvinenko Timothy, Losev Andrei Semenovich, Milanov Todor, Ouchi Genki, Prokhorov Yuri Gennadievich, Shiraishi Yuuki, Takahashi Atsushi, Toda Yukinobu, Ueda Kazushi, Uehara Hokuto

Organizing institutions

Steklov Mathematical Institute of Russian Academy of Sciences, Moscow; Laboratory of algebraic geometry and its applications, National Research University "Higher School of Economics" (HSE), Moscow; Institute of Fundamental Science, Moscow; Kavli Institute for the Physics and Mathematics of the Universe

Program

Categorical and analytic invariants in Algebraic geometry 1, Moscow, September 14–18, 2015

September 14, 2015 (Mon)

1. Multipointed NC deformations and CY3folds (Y. Kawamata), 10:30–11:30, Moscow, Steklov Mathematical Institute
2.
On categorical joins (A. G. Kuznetsov)
3. Non-commutative virtual structure sheaves (Yu. Toda)
4. Moduli of relations of quivers (K. Ueda)
5. $P$-functors (T. Logvinenko)

September 15, 2015 (Tue)

6. Homological invariants of DG algebras and generalized degeneration (A. I. Efimov)
7. Lagrangian embeddings of cubic fourfolds containing a plane (G. Ouchi)
8. Calabi–Yau structures on dg categories and shifted symplectic structures on moduli (Ch. Brav)

September 16, 2015 (Wed)

9. Looking geometry from the moduli spaces of CICYs (Sh. Hosono)
10. From Riemann to Feynman geometry in Feynman approach to QFT (A. S. Losev)
11. The Calabi–Yau completion for a formal parameter (A. Ikeda)

September 17, 2015 (Thu)

12. Vertex algebras and Gromov–Witten invariants (T. Milanov)
13. On the Frobenius manifold from the Gromov–Witten theory for an orbifold projective line with $r$ orbifold points (Yu. Shiraishi)
14. Categorical Kaehler Metrics (L. Katzarkov)
15. Calabi–Yau dg categories to Frobenius manifolds via primitive forms (A. Takahashi)
16. Joins and Hadamard products (S. Galkin)

September 18, 2015 (Fri)

17. Explicit Dolgachev surfaces and exceptional collections (I. V. Karzhemanov)
18. Exceptional sheaves on the Hirzebruch surface $\mathbb{F}_2$ (H. Uehara)
19. Degenerations of del Pezzo surfaces in terminal Q-Gorenstein families (Yu. G. Prokhorov)
20. On the special McKay correspondence (A. Ishii)
21. Skew-growth function for dual Artin monoid (K. Saito)
Earth, Planets and Space

Full paper | Open | Published: 11 March 2019

Earthquake-induced prompt gravity signals identified in dense array data in Japan

Masaya Kimura, Nobuki Kame, Shingo Watada, Makiko Ohtani, Akito Araya, Yuichi Imanishi, Masaki Ando & Takashi Kunugi

Earth, Planets and Space, volume 71, Article number: 27 (2019)

Abstract

Earthquake ruptures cause mass redistribution, which is expected to induce transient gravity perturbations simultaneously at all distances from the source before the arrival of P-waves. A recent research paper reported the detection of such prompt gravity signals from the 2011 Tohoku-Oki earthquake by comparing observed acceleration waveforms and model simulations. The 11 observed waveforms presented in that paper, recorded in East Asia, shared a similar trend above the background seismic noise and were in good agreement with the simulations. However, the signal detection was less quantitative because the significance of the observed signals was not discussed and the waveforms at other stations in the region were not shown. In this study, similar trends were not observed in most of the data recorded near the stations used in the aforementioned study, suggesting that the reported signals were only local noise. We thus took a different approach to identify the prompt signals. We optimized the multi-channel data recorded by superconducting gravimeters, broadband seismometers, and tiltmeters. Though no signal was identified in the single-trace records, the stacked trace of the broadband seismometer array data in Japan showed a clear signal above the reduced noise level. The signal amplitude was 0.25 nm/s2 for an average distance of 987 km from the event hypocenter. This detection was confirmed with a statistical significance of \(7\sigma\), where \(\sigma\) is the standard deviation of the amplitude of the background noise.
This result provided the first constraint on the amplitude of the observed prompt signals and may serve as a reference in the detection of prompt signals in future earthquakes.

Introduction

Compressional seismic waves radiating from an earthquake accompany density perturbations, which give rise to widespread transient gravity perturbations \(\delta \varvec{g}\), even ahead of the wave front. The interest in earthquake-induced prompt gravity perturbations has increased in terms of both theoretical prediction and data signal detection with their potential for earthquake early warning (Harms et al. 2015; Harms 2016; Montagner et al. 2016; Heaton 2017; Vallée et al. 2017; Kimura 2018; Kimura and Kame 2019). In this paper, "prompt" denotes the time period between the event origin time and the P-wave arrival time. The study by Montagner et al. (2016) was the first to discuss prompt gravity signals in observed data. They searched for the signal from the 2011 Mw 9.0 Tohoku-Oki earthquake in the records of a superconducting gravimeter (SG) at Kamioka (approximately 510 km from the epicenter) and five nearby broadband seismometers of the Full Range Seismograph Network of Japan (F-net). Though they failed to identify a prompt signal with an amplitude that was obviously above the background noise level, they found that the 30-s average value immediately before the P-wave arrival was more prominent than the noise level with a statistical significance greater than 99% (corresponding to approximately \(3\sigma\) if the background noise has a normal distribution, where \(\sigma\) is the standard deviation of the noise). Based on this finding, they claimed the presence of a prompt gravity signal from the event. However, 99% significance seems considerably low for definite signal detection because it means that one in a hundred samples exceeds a reference level; this is too frequent to claim an anomaly in time series analysis. Heaton (2017) replied to Montagner et al.
(2016) with an objection that their data analysis did not include the appropriate response of the Earth. He pointed out that in the measurement of prompt gravity perturbations, the acceleration motion of the observational site \(\ddot{\varvec{u}}\) has to be considered because the gravimeter output \(\left( \varvec{a} \right)_{z}\) is affected by \(\ddot{\varvec{u}}\), i.e., \(\left( \varvec{a} \right)_{z} = \left( {\delta \varvec{g}} \right)_{z} - \left( {\ddot{\varvec{u}}} \right)_{z}\), where \(\left( \varvec{x} \right)_{z}\) indicates the vertical component of vector \(\varvec{x}\) with upward being positive. He exemplified in a simple spherical Earth model that the Earth's motion due to prompt gravity perturbations mostly decreases the gravimeter's sensitivity. Vallée et al. (2017) reported the detection of prompt gravity signals from the 2011 event based on both data analysis and theoretical modeling. From the records of regional broadband seismic stations in the Japanese islands and the Asian continent, they selected 11 waveforms based on the study's criteria. Nine waveforms among them showed a consistent visible downward trend starting from the origin time up to the respective P-wave arrival times (Figure 1 of Vallée et al. 2017). They then numerically simulated the prompt signals for the 11 stations considering the acceleration motion of the observational sites, i.e., a direct scenario based on Heaton (2017). To synthesize the sensor output \(\left( \varvec{a} \right)_{z}\), they evaluated both the gravity perturbation \(\delta \varvec{g}\) and the ground acceleration \(\ddot{\varvec{u}}\) directly generated by \(\delta \varvec{g}\) in a semi-infinite flat Earth model. The 11 pairs of observed and synthetic waveforms showed similarities to one another (Figure 3 in Vallée et al. 2017). However, their signal detection was relatively less quantitative. In contrast to Montagner et al. 
(2016), they did not discuss the significance of the observed signals with respect to background noise. In addition, the 11 observational stations they used were only a small subset of the approximately 200 available stations.

Fig. 1: a Model prediction of the prompt gravity perturbation \(\left( {\delta \varvec{g}^{\text{H}} } \right)_{z}\) (vertical component with upward positive) of the 2011 Tohoku-Oki earthquake for Kamioka Observatory. We used the infinite homogeneous Earth model of Harms et al. (2015), and no filter was applied. Time 0 was set to the event origin time \(t_{\text{eq}}\). The P-wave arrival time on the gravimetric record is 05:47:32.4 UTC (68.1 s after \(t_{\text{eq}}\)). b Distribution of the prompt gravity perturbation \(\left( {\delta \varvec{g}^{\text{H}} } \right)_{z}\) immediately before P-wave arrival at each location. The contour lines are drawn every 10 nm/s2. The star, the letters K and M, the red triangles, and the small dots indicate the epicenter, Kamioka Observatory, Matsushiro Observatory, the 71 F-net stations, and the 706 tiltmeter stations, respectively.

In this study, we search for prompt gravity signals from the 2011 event using a quantitative approach. Initially, we note that observed waveforms at other stations near those Vallée et al. (2017) presented barely showed a similar trend beyond noise ("Local records near the reported stations" section). Our analyses thus rely not on simulated waveforms but rather mostly on data, and we optimize multi-channel data recorded by different instruments ("Data" section). We first analyze SG data at two stations ("Superconducting gravimeters" section), but signal detection was unsuccessful. Next, we analyze records of the dense arrays of F-net ("F-net broadband seismometers" section) and High Sensitivity Seismograph Network Japan (Hi-net) ("Hi-net tiltmeters" section).
Although most single-channel records did not show any signal beyond noise, waveform-stacking successfully reduced the noise level and allowed identification of a prominent signal in the F-net data.

Results of data analyses

Local records near the reported stations

Vallée et al. (2017) selected 11 stations and showed the waveforms recorded at the stations. Their data processing (termed "procedure V" in this paper) and selection criteria are detailed in "Appendix 1." Because the presented waveforms showed a similar downward trend and amplitude over a wide range of hypocentral distances between 427 and 3044 km, the prompt signal waveforms of the 2011 event are not expected to vary significantly within a few hundred kilometers. This long-range spatial characteristic is also supported by the original model of Harms et al. (2015), who formulated the prompt gravity perturbation \(\delta \varvec{g}^{\text{H}}\) induced by an earthquake in an infinite homogeneous medium, where the superscript H denotes the modeling by Harms et al. (2015). Figure 1 shows \(\left( {\delta \varvec{g}^{\text{H}} } \right)_{z}\) for the 2011 event with contours drawn every 10 nm/s2. The spatial extent of \(\left( {\delta \varvec{g}^{\text{H}} } \right)_{z}\) is a few thousand kilometers (Fig. 1b), as noted by Kimura (2018). We checked whether the reported downward trends were recorded at other stations near those Vallée et al. (2017) used. Among the 11 stations, Fukue (FUK) in Japan and Mudanjiang (MDJ) and Zhalaiteqi Badaerhuzhen (NE93) in China had other available stations within 100 km and were eligible for this purpose. Figure 2a (modified from Figure 3 of Vallée et al. 2017) shows the observed and simulated waveforms at FUK for reference, and Fig. 2b shows the waveforms at the F-net stations near FUK. The hypocentral distances of FUK and the other 10 stations range from 1130 to 1390 km (Fig. 2c). The waveform at FUK (Fig. 2b) appears similar to that of Vallée et al.
(2017) (Fig. 2a) as it shows a similar downward trend beyond the noise level with a similar amplitude. They are not identical to each other because of the different signal processing procedures of Vallée et al. (2017) (procedure V) and this study (termed "procedure K" in this paper). Details of procedure K and the difference between the two procedures are described in "Appendix 2."

Fig. 2: Acceleration waveforms at F-net observational stations before P-wave arrival from the 2011 Tohoku-Oki earthquake. The black thick vertical line indicates the event origin time \(t_{\text{eq}}\). a Simulated (black) and observed (red) acceleration waveforms at FUK (modified from Fig. 3 of Vallée et al. 2017). The observed waveform was processed using procedure V. b Observed acceleration waveforms at FUK and the 10 surrounding stations processed using procedure K, which is perfectly causal. c Distribution map of the F-net observational stations near FUK.

Fig. 3: Acceleration waveforms at observational stations in China before P-wave arrival from the 2011 Tohoku-Oki earthquake. The black vertical line indicates the event origin time, which was set to 0. They were processed using procedure K. a Observed acceleration waveforms at stations near MDJ: NE5E, NE6E, MDJ, NE7E, and NE6D. We plotted waveforms at MDJ for both the STS-1 and STS-2 seismometers. Vallée et al. (2017) used data recorded by the STS-1 type. b Observed acceleration waveforms at stations near NE93: NE94, NE87, NE93, NE92, and NEA3. c Distribution map of the observational stations. The yellow star and red triangles indicate the epicenter and the stations, respectively.

However, the other 10 waveforms shown in Fig. 2b do not generally depict a downward trend. Rather, they generally appear as only noise, although Sefuri (SBR) does seem to show a slight downward trend. In other words, the stations near FUK barely showed the downward trend seen in the waveform at FUK. Figure 3 shows the records at the stations surrounding MDJ and NE93 processed using procedure K.
Again, similar downward trends are not observed at the stations near MDJ or NE93, and it is difficult to identify a significant signal beyond noise in a single trace. Though the STS-1 broadband seismometer at MDJ shows the downward trend beyond noise, the other stations near MDJ, and the STS-2 broadband seismometer at MDJ, do not show such a signal. At NE93, not only the surrounding stations but also NE93 itself does not show the trend seen in Vallée et al. (2017). In the end, we did not see the downward trend except for a few outliers. This waveform comparison suggests that the downward trend at NE93 (Figure 1 of Vallée et al. 2017) was not a signal but an artifact of procedure V, and the trend at FUK and MDJ was possibly just local site noise or affected by unknown local site responses.

Data

We analyzed three different types of data: gravity data from two SGs, ground velocity data from the F-net seismographic array, and ground tilt data from the Hi-net tiltmeter array. All 71 F-net stations are equipped with STS-1 or STS-2 broadband seismometers. A two-component borehole tiltmeter is installed at 706 Hi-net stations. These instruments are listed in Table 1. The instrumental responses of the SG, STS-1, and STS-2 to an acceleration input are shown in Additional file 1: Fig. S1.

Table 1 Observation instruments

Superconducting gravimeters

We used SG data recorded at a 40-Hz sampling rate (GWR5 channel) (Imanishi 2001). Figure 4 shows the recorded data at Kamioka (\(t_{\text{P}} = t_{\text{eq}} + 68.1 \,{\text{s}}\), where \(t_{\text{eq}}\) and \(t_{\text{P}}\) denote the event origin time and the visually selected P-wave arrival time, respectively). The data include the sensor response. The background microseism dominated, with an amplitude of 100 nm/s2. Obviously, no signal was identified. Figure 5 shows the noise power spectrum. In contrast to the 1-Hz sampling data (GGP1 channel) with a 0.061-Hz anti-aliasing filter used in the analysis of Montagner et al.
(2016), our 40-Hz sampling data contain the signal power in the frequency range higher than 0.061 Hz. After removing the trend component and multiplying a cosine taper in the first and last 10% sections of the time series, we applied a band-pass filter (five-pole 0.001-Hz high-pass and five-pole 0.03-Hz low-pass causal Butterworth filters) to the 1-h data (05:00–06:00 UTC) to reduce the relatively large noise power higher than 0.05 Hz. The lower corner frequency of 0.001 Hz was set to remove the long-period tidal variation. After filtering, the noise was significantly reduced (Fig. 6a). During the prompt period \(t_{\text{eq}} < t < t_{\text{P}}\), we do not see signals with amplitudes far beyond the noise level of the record prior to the event origin time.

Fig. 4: Original SG data at Kamioka with zero direct current offset (at a 40-Hz sampling rate).

Fig. 5: Noise power spectrum of the Kamioka SG data. The time window is 40 min between 05:00 and 05:40 UTC before the 2011 Tohoku-Oki event.

Fig. 6: 0.001–0.03 Hz band-pass-filtered SG data at a Kamioka and b Matsushiro.

For quantitative evaluation, we defined the noise level \(A_{\text{N}}\) in the time window [\(t_{1}\), \(t_{2}\)] as follows:

$$A_{\text{N}} = \sqrt {\frac{1}{{t_{2} - t_{1} }}\mathop \int \limits_{{t_{1} }}^{{t_{2} }} \left[ {x\left( t \right) - \mu } \right]^{2} {\text{d}}t} ,$$

where \(x\left( t \right)\) is the time series data and \(\mu = \frac{1}{{t_{2} - t_{1} }} \int \nolimits_{{t_{1} }}^{{t_{2} }} x\left( t \right){\text{d}}t\). For the Kamioka data, \(A_{\text{N}}\) decreased from 70 to 0.4 nm/s2 after filtering (\(t_{1} = 05:40\) UTC and \(t_{2} = t_{\text{eq}}\)). Figure 6b shows the data for Matsushiro (436 km from the hypocenter and \(t_{\text{P}} = t_{\text{eq}} + 57.3 \,{\text{s}}\)) after the same filtering process. Although \(A_{\text{N}}\) decreased from 80 to 0.7 nm/s2 after filtering, we did not recognize clear signals during the prompt period.
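The noise-level definition \(A_{\text{N}}\) and the causal band-pass filtering described above can be sketched numerically. This is an illustrative reconstruction, not the authors' processing code: the synthetic microseism trace, the SciPy filter design, and all names are our own assumptions; only the corner frequencies, pole counts, and sampling rate follow the values quoted in the text.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def noise_level(x):
    """RMS noise level A_N of a trace about its mean (discrete analog of
    the integral definition of A_N in the text)."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean((x - x.mean()) ** 2))

def causal_bandpass(x, fs, f_hp=0.001, f_lp=0.03):
    """Five-pole high-pass plus five-pole low-pass causal Butterworth
    filters in sequence; second-order sections are used for numerical
    stability at such low corner frequencies."""
    sos_hp = butter(5, f_hp, btype="highpass", fs=fs, output="sos")
    sos_lp = butter(5, f_lp, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos_lp, sosfilt(sos_hp, x))  # sosfilt is causal

# Synthetic 1-h, 40-Hz trace: a 0.2-Hz "microseism" plus white noise.
fs = 40.0
t = np.arange(0.0, 3600.0, 1.0 / fs)
rng = np.random.default_rng(0)
trace = 70.0 * np.sin(2.0 * np.pi * 0.2 * t) + 5.0 * rng.standard_normal(t.size)

filtered = causal_bandpass(trace, fs)
# Evaluate A_N away from the filter's start-up transient.
a_raw = noise_level(trace)
a_filt = noise_level(filtered[t > 600.0])
```

As in the text, the pass band of 0.001–0.03 Hz lies well below the microseism peak, so the filtered noise level drops by orders of magnitude while any slow prompt-period trend would survive.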
Note that the oscillation with a period of approximately 90 s is a parasitic mode of the instrument (Imanishi 2005, 2009).

F-net broadband seismometers

The frequency responses of the F-net STS-1 seismometers to velocity are flat between 0.003 and 10 Hz. Consequently, we did not deconvolve the sensor frequency responses from the recorded waveforms. The velocity data were converted to acceleration data by taking the finite difference in the time domain. In the vertical component of the F-net data, the typical value of \(A_{\text{N}}\) was 1000 nm/s2 (340 nm/s2 was the lowest value), dominated by the microseism. To reduce the microseismic noise, we applied the same filters (0.002-Hz two-pole high-pass and 0.03-Hz six-pole low-pass causal Butterworth filters) employed in Vallée et al. (2017) to all available data from 70 of the 71 stations (omitting one because of poor recording quality). After filtering, the microseism noise was successfully reduced to as low as 0.2 nm/s2 (Additional file 1: Fig. S2). However, we did not recognize clear signals. Only at two stations, FUK and SBR, could we find a downward trend before P-wave arrival. Next, a multi-station signal-stacking method was applied to further enhance the signals of interest. After the band-pass filtering, we selected 27 traces out of the 70 traces based on the noise level and stacked them aligned with \(t_{\text{P}}\) at each station, because we expected the maximum signal amplitude at the end of the prompt period (Fig. 1a). Figure 7a shows the stacked trace, and Additional file 1: Fig. S3a shows an enlarged view of the trace. The noise of the stacked trace significantly decreased, and the trace successfully showed a significant signal with an amplitude of 0.25 nm/s2. Our selection criterion and polarity reversal correction for the stacking are described in "Appendix 3," and the 27 stations are listed in Additional file 2: Table S1.
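The P-aligned stacking step can be illustrated with a small sketch. This is not the authors' code: the trace lengths, the toy ramp signal, and the helper name `stack_on_p` are hypothetical. The sketch only shows why averaging N traces aligned on their P arrivals suppresses incoherent noise by roughly \(\sqrt{N}\) while preserving a coherent signal that ends at \(t_{\text{P}}\).

```python
import numpy as np

def stack_on_p(traces, p_indices, pre, post):
    """Average traces aligned on their P-arrival samples (the stacking
    reference t_P); keeps only traces long enough for the window."""
    segments = []
    for x, ip in zip(traces, p_indices):
        seg = np.asarray(x[ip - pre: ip + post], dtype=float)
        if seg.size == pre + post:
            segments.append(seg)
    return np.mean(segments, axis=0)

# Toy data: 27 noisy traces sharing a weak ramp that ends at each P arrival.
rng = np.random.default_rng(1)
pre, post = 1800, 1                      # 30 min at 1 Hz before t_P
ramp = -0.25 * np.linspace(0.0, 1.0, pre + post) ** 2
traces, p_idx = [], []
for _ in range(27):
    ip = int(rng.integers(2000, 4000))   # P arrival differs per station
    x = 0.5 * rng.standard_normal(6000)  # incoherent background noise
    x[ip - pre: ip + post] += ramp       # coherent part aligned on t_P
    traces.append(x)
    p_idx.append(ip)

stacked = stack_on_p(traces, p_idx, pre, post)
# Incoherent noise in the stack drops by ~sqrt(27); the ramp toward t_P survives.
```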
The hypocentral distances of the 27 stations are between 505 and 1421 km (the average is 987 km), and the minimum and maximum \(t_{\text{P}}\) are 63 and 176 s after \(t_{\text{eq}}\), respectively.

Fig. 7: Stacked waveforms of the filtered data for 30 min before the P-wave arrivals. Time 0 was set to the stacking reference time \(t_{\text{P}}\). a Plot for F-net broadband seismometer data. b Plot for Hi-net tiltmeter data.

To quantify the signal detection in terms of statistical significance, we investigated the distributions of the background noise and the enhanced gravity signal. Figure 8 shows the histograms of the noise section (between \(-30\) and \(-3\) min before the aligned \(t_{\text{P}}\)) and the signal section in the stacked trace. Here, we defined the latter half of the time period \(-1\) min (\(\fallingdotseq\) minimum \(t_{\text{P}} - t_{\text{eq}}\)) \(< t < 0\), i.e., \(-30\) s \(< t < 0\), as the signal section because all 27 waveforms were expected to contain a signal with increasing amplitude toward the end of this time period, as shown in Fig. 1a. The noise histogram was approximated by a normal distribution with a standard deviation \(\sigma\). In our analysis, \(\sigma\) was given by \(A_{\text{N}}\) (0.035 nm/s2). On the other hand, before the aligned \(t_{\text{P}}\), the amplitude of the stacked trace generally increased with time, and the signal level exceeded \(3\sigma\) at \(t = - 20\) s and \(5\sigma\) at \(t = - 6\) s before finally reaching \(7\sigma\) at \(t = 0\) (Fig. 7a), i.e., the signal detection was verified with a statistical significance of \(7\sigma\).

Fig. 8: Amplitude histograms of the background noise (blue, left vertical axis) and the prompt gravity signal before P-wave arrival (red, right vertical axis).
The noise histogram was fitted by a normal distribution with a standard deviation \(\sigma = 0.035\) nm/s2 (black curve).

Hi-net tiltmeters

We also analyzed the data recorded by the Hi-net tiltmeters, which work as horizontal accelerometers. For our analysis, the tilt data in rad were converted into horizontal acceleration in m/s2 by multiplying by the gravity acceleration (9.8 m/s2). Because the sensor response is not known in the seismic frequency band, we could not deconvolve it from the data; however, tiltmeter records have been used as seismic records by comparing them to nearby broadband seismic records (e.g., within a bandwidth of 0.02–0.16 Hz) (Tonegawa et al. 2006). Because tiltmeters are designed to respond to static changes, the recordings are also reliable below 0.02 Hz. Compared to the F-net data, the Hi-net tiltmeter data were generally noisy. The typical value of \(A_{\text{N}}\) was 2000 nm/s2. After removing the trend component and applying the same band-pass filter as employed in Vallée et al. (2017), we again failed to identify a significant signal in any single channel. We then aligned 553 data traces out of 1412 traces (two horizontal components from each station) with respect to the P-wave arrival times for stacking. Our selection criterion is also described in "Appendix 3." The hypocentral distances of the 553 traces are between 264 and 1349 km (the average is 830 km). Figure 7b and Additional file 1: Figure S3b show the stacked trace and its enlarged view, respectively. In contrast to the F-net results, no prompt signal was identified. The noise level \(A_{\text{N}}\) was 0.08 nm/s2. The predicted signal amplitude of the stacked trace based on the infinite homogeneous Earth model of Harms et al. (2015) was 2 nm/s2 (Kimura 2018), where the theoretical time series were synthesized at each station and then filtered and stacked in alignment with the P-wave arrival time in the same manner as the observed data.
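The significance test used above (noise \(\sigma\) estimated from a long pre-event window, peak amplitude measured just before the aligned \(t_{\text{P}}\)) can be sketched as follows. The window choices and the function name are illustrative assumptions; the numbers (\(\sigma = 0.035\) nm/s2, amplitude 0.25 nm/s2) are taken from the text only to show that they correspond to an amplitude of roughly \(7\sigma\).

```python
import numpy as np

def detection_significance(trace, fs, noise_win, signal_win):
    """Peak |amplitude| in the signal window divided by the standard
    deviation of the demeaned noise window (A_N of the noise section)."""
    def cut(win):
        i0, i1 = int(win[0] * fs), int(win[1] * fs)
        return np.asarray(trace[i0:i1], dtype=float)
    noise = cut(noise_win)
    sigma = np.std(noise - noise.mean())
    return np.max(np.abs(cut(signal_win))) / sigma

# Synthetic stacked trace in nm/s^2 at 1 Hz: background noise with
# sigma = 0.035 and a downward ramp reaching -0.25 at the aligned t_P.
fs = 1.0
rng = np.random.default_rng(2)
trace = 0.035 * rng.standard_normal(1800)
trace[-30:] += np.linspace(0.0, -0.25, 30)

snr = detection_significance(trace, fs,
                             noise_win=(0, 1620),      # pre-event section
                             signal_win=(1770, 1800))  # last 30 s before t_P
# snr is on the order of 7 for these numbers
```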
In our analysis, such a large signal was confirmed not to exist in the data, and the upper signal level was constrained as 0.15 nm/s2 with 95% statistical significance (approximately \(2\sigma\)) in the horizontal component.

Difference from the previous SG study

The failure to detect prompt gravity signals in the SG data is consistent with the result of Montagner et al. (2016), who also analyzed the Kamioka SG and five nearby F-net stations and could not visually detect a clear signal in the time domain. On the other hand, at the Global Seismographic Network (GSN) station Matsushiro (MAJO), a signal detection was reported by Vallée et al. (2017). GSN MAJO and the Matsushiro SG are installed in the same tunnel, and the Kamioka SG and the five nearby F-net stations in Montagner et al. (2016) are located at nearly the same epicentral distance. The results of Montagner et al. (2016) and Vallée et al. (2017) thus seem inconsistent with one another. The signal amplitude at GSN MAJO shown in Vallée et al. (2017) may have been mere noise or affected by a local site response.

Significance of our stacked trace for theoretical modeling

The F-net stacked trace (Fig. 7a) showed a great improvement in the statistical significance of the signal detection. It provides the first observational constraint on prompt gravity signals and can work as a reference to validate future theoretical models. Once a model is developed that explains the sensor output in gravimetry and a reliable value of \(\delta \varvec{g}\) is constrained, related physical quantities such as the gravity gradient and spatial strain are constrained as well. As Heaton (2017) noted, the ground acceleration \(\ddot{\varvec{u}}\) affects the measurement of the prompt gravity perturbation \(\delta \varvec{g}\). Therefore, in the modeling of prompt gravity signals, not only \(\delta \varvec{g}\) but also \(\ddot{\varvec{u}}\) before P-wave arrival has to be calculated. Vallée et al.
(2017) analytically showed that in an infinite homogeneous non-self-gravitating medium, the induced \(\ddot{\varvec{u}}\) directly generated by \(\delta \varvec{g}\) becomes \(\ddot{\varvec{u}} = \delta \varvec{g}\), suggesting full cancelation of \(\delta \varvec{g}\) by \(\ddot{\varvec{u}}\). They then numerically investigated \(\delta \varvec{g}\) and the induced site motion \(\ddot{\varvec{u}}\) when exposed to the effects of a free surface in a layered non-self-gravitating half-space and evaluated the sensor output \(- \left( {\delta \varvec{g}} \right)_{z} + \left( {\ddot{\varvec{u}}} \right)_{z}\). Their simulated waveforms at the 11 stations showed the same downward monotonic trend and similar amplitude of approximately 1 nm/s2 within the wide range of 427–3044 km from the hypocenter. However, this simulated signal amplitude of 1 nm/s2 is significantly larger than our identified amplitude of 0.25 nm/s2 in the F-net stacked trace, suggesting that the simulation of Vallée et al. (2017) overestimated the sensor outputs. Our stacked waveform and Vallée et al.'s single-channel waveforms cannot be directly compared, but their amplitudes can be compared. Because all 27 stations used for the stacking are in the region where Vallée et al.'s simulated waveforms showed the same trend and amplitude of 1 nm/s2, if similar signals were recorded in the 27 traces, the resultant amplitude of the stacked waveform would also become 1 nm/s2. The identified signal level of 0.25 nm/s2 is, however, one-fourth of the expected value. Notably, the polarity of our stacked trace shows a negative trend toward the P-wave arrival, consistent with the observation and simulation of Vallée et al. (2017). A prospective candidate for a better theoretical model is a normal mode model of a spherical self-gravitating realistic Earth that addresses the fully coupled equations between the elastic deformation and gravity. Very recently, Juhel et al. 
(2019) conducted theoretical modeling using such a normal mode approach to compute prompt gravity signals. However, similar to Vallée et al. (2017), the fully coupled problem was not solved in that study. They first considered the prompt gravity perturbation \(\delta \varvec{g}\) induced by the earthquake elastic deformation and then considered the prompt gravity effect on the elastic deformation, which they termed a "two-step approach." Their simulation results were quite similar to those of Vallée et al. (2017) and seemed to overestimate the sensor output as well. Although a fully coupled model requires an enormous number of normal mode summations to precisely evaluate the prompt gravity perturbations, a numerical assessment should be conducted in the future.

Possible reasons for no finding with tiltmeters

The lack of signal identification in the stacked Hi-net trace (Fig. 7b) can be attributed to the large amplitude of the noise spectrum in the frequency band of the applied band-pass filter (in contrast to the SG and the F-net data in the vertical component). In this band, the typical noise level is more than 10 times that of the quiet SGs and the F-net. Another reason may be unknown effects of a free surface on the induced \(\ddot{\varvec{u}}\), which may more effectively cancel the horizontal component of \(\delta \varvec{g}\) compared to its vertical component.

Toward future detection of earthquake-induced prompt gravity signals using a gravity gradient sensor

We have shown that the identified prompt gravity signals were very small (approximately 0.25 nm/s2 for the average distance of 987 km); this can be attributed to the cancelation of gravity measurements by the acceleration motion of the ground and suggests that gravimetry is not the best approach for detecting prompt gravity perturbations. A gravity gradient measurement provides an alternative method to detect prompt signals from earthquakes (Harms et al. 2015; Juhel et al. 2018).
A spatially inhomogeneous gravity field induces tidal deformation of an object or spatial strain, which is observable even if the observer moves with the same acceleration as the prompt gravity perturbation. Detecting very small perturbations in the gravity gradient has been a challenge in identifying gravitational waves from space. Abbott et al. (2016) observed gravitational waves using laser interferometers in a high-frequency range from tens to hundreds of Hz. New state-of-the-art instruments, such as torsion bar antennas (TOBA) (Ando et al. 2010; Shoda et al. 2014), are being developed. Such instruments are intended to observe spatial strain through the tidal deformation of two crossing bars. The existing prototype TOBA attained a 10−8 s−2 sensitivity within a low-frequency range of 0.01–1 Hz (Shoda et al. 2014). The theoretical gravito-gradiograms and the prompt signal intensity map are shown for the 2011 Tohoku-Oki earthquake (Fig. 9). The expected signal level was 10−13 s−2. Though this value is 10−5 times smaller than the attained sensitivity, the next-generation TOBA will attain sufficient sensitivity to detect prompt signals. Prompt earthquake detection will significantly benefit from such ultra-sensitive sensors. a Theoretical six-component gravito-gradiograms of the 2011 Tohoku-Oki earthquake synthesized for Kamioka Observatory. Time 0 was set to the event origin time \(t_{\text{eq}}\). b Distribution of prompt gravity gradient changes immediately before P-wave arrival at each location (upper left: \(\ddot{h}_{11}\) component, upper center: \(\ddot{h}_{22}\) component, upper right: \(\ddot{h}_{33}\) component, lower left: \(\ddot{h}_{12}\) component, lower center: \(\ddot{h}_{13}\) component, and lower right: \(\ddot{h}_{23}\) component), where \(\ddot{h}_{ij}\) denotes the ijth component of the gravity gradient tensor (see "Appendix 4"). 
In these figures, the \(x_{1}\)-, \(x_{2}\)-, and \(x_{3}\)-axes correspond to the directions of east, north, and upward, respectively. The star and the letter K are the epicenter and Kamioka Observatory, respectively. The contour lines are drawn every \(2 \times 10^{ - 13} \, {\text{s}}^{ - 2}\) In "Appendix 4," we present an explicit expression of theoretical gravito-gradiograms, the waveforms of gravity gradients. We extended the expression of Harms et al. (2015), who used a seismic dislocation source, to a general source described as a moment tensor. Our extension will contribute to the interpretation of future observational records of various event mechanisms. We searched for prompt gravity signals from the 2011 Mw 9.0 Tohoku-Oki earthquake in seismic network data. Though nearly all the single-channel waveforms did not show any signals beyond the noise level except for several outliers, the stacked trace of F-net broadband records showed a clear signal in the vertical component. The identified signal level was 0.25 nm/s2 for the average distance of 987 km; this detection was verified at a statistical significance of \(7\sigma\) to the background noise. In addition, analysis of Hi-net tiltmeters constrained the upper limit of the signal in the horizontal components as 0.15 nm/s2 at 95% significance. The stacked F-net trace is the first constraint of earthquake-induced prompt gravity signals by observation and will be used as a reference to validate future theoretical models. Measurement of gravity gradients is a more promising method in the prompt detection of future earthquakes. State-of-the-art instruments, such as torsion bar antennas, are being developed to detect strain acceleration smaller than 10−13 s−2. 
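The noise-suppression logic behind the 27-trace stack can be illustrated with a short numerical sketch; all numbers below (sampling, noise level, trace length) are illustrative assumptions, not the actual F-net data:

```python
import numpy as np

# Stacking N traces that share a common signal but carry independent noise
# improves the signal-to-noise ratio by roughly sqrt(N).
rng = np.random.default_rng(0)
n_stations, n_samples = 27, 1800           # e.g. 30 min at 1 Hz (assumed)
t = np.linspace(0.0, 1.0, n_samples)
signal = -0.25 * t                         # common downward trend, nm/s2
noise_std = 0.8                            # per-station noise level, nm/s2
traces = signal + rng.normal(0.0, noise_std, (n_stations, n_samples))

stacked = traces.mean(axis=0)              # simple average stack
residual_std = np.std(stacked - signal)    # leftover noise in the stack
print(residual_std, noise_std / np.sqrt(n_stations))
```

Averaging N traces with independent noise leaves the common signal untouched while reducing the noise standard deviation by about √N ≈ 5.2 for N = 27, which is what makes a signal of a fraction of the single-trace noise level detectable.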
FUK: Fukue
F-net: Full Range Seismograph Network of Japan
GSN: Global Seismographic Network
Hi-net: High Sensitivity Seismograph Network Japan
IRIS: Incorporated Research Institutions for Seismology
MAJO: Matsushiro
MDJ: Mudanjiang
NE93: Zhalaiteqi Badaerhuzhen
SAC: Seismic Analysis Code
SBR: Sefuri
SG: superconducting gravimeter
TOBA: torsion bar antennas

Abbott BP et al (2016) Observation of gravitational waves from a binary black hole merger. Phys Rev Lett 116(6):61102. https://doi.org/10.1103/PhysRevLett.116.061102
Aki K, Richards PG (2002) Quantitative seismology, 2nd edn. University Science Books, Sausalito
Ando M, Ishidoshiro K, Yamamoto K, Yagi K, Kokuyama W, Tsubono K, Takamori A (2010) Torsion-bar antenna for low-frequency gravitational-wave observations. Phys Rev Lett 105(16):161101. https://doi.org/10.1103/PhysRevLett.105.161101
Goldstein P, Snoke A (2005) SAC availability for the IRIS community. IRIS DMC Newslett 7(1)
Harms J (2016) Transient gravity perturbations from a double-couple in a homogeneous half-space. Geophys J Int 205(2):1153–1164. https://doi.org/10.1093/gji/ggw076
Harms J, Ampuero JP, Barsuglia M, Chassande-Mottin E, Montagner J-P, Somala SN, Whiting BF (2015) Transient gravity perturbations induced by earthquake rupture. Geophys J Int 201(3):1416–1425. https://doi.org/10.1093/gji/ggv090
Heaton TH (2017) Correspondence: response of a gravimeter to an instantaneous step in gravity. Nat Commun 8:66. https://doi.org/10.1038/s41467-017-01348-z
Imanishi Y (2001) Development of a high-rate and high-resolution data acquisition system based on a real-time operating system. J Geod Soc Jpn 47(1):52–57. https://doi.org/10.11366/sokuchi1954.47.52
Imanishi Y (2005) On the possible cause of long period instrumental noise (parasitic mode) of a superconducting gravimeter. J Geod 78:683–690. https://doi.org/10.1007/s00190-005-0434-5
Imanishi Y (2009) High-frequency parasitic modes of superconducting gravimeters. J Geod 83:455–467. https://doi.org/10.1007/s00190-008-0253-6
Juhel K, Ampuero JP, Barsuglia M, Bernard P, Chassande-Mottin E, Fiorucci D, Harms J, Montagner J-P, Vallée M, Whiting BF (2018) Earthquake early warning using future generation gravity strainmeters. J Geophys Res Solid Earth 123(12):10889–10902. https://doi.org/10.1029/2018JB016698
Juhel K, Montagner J-P, Vallée M, Ampuero JP, Barsuglia M, Bernard P, Clévédé E, Harms J, Whiting BF (2019) Normal mode simulation of prompt elastogravity signals induced by an earthquake rupture. Geophys J Int 216(2):935–947. https://doi.org/10.1093/gji/ggy436
Kimura M (2018) No identification of predicted earthquake-induced prompt gravity signals in data recorded by gravimeters, seismometers, and tiltmeters and its interpretation based on the principle of gravimetry. Master thesis, The University of Tokyo, Japan. https://repository.dl.itc.u-tokyo.ac.jp/?action=pages_view_main&active_action=repository_view_main_item_detail&item_id=51237&item_no=1&page_id=28&block_id=31. Accessed 25 Feb 2019
Kimura M, Kame N (2019) Representation theorem and Green's function (3)—strain, stress, and density perturbation fields due to a point source using 2nd derivative of Green's function in an unbounded homogeneous isotropic elastic medium—. Zisin 2(71):153–160. https://doi.org/10.4294/zisin.2017-20 (in Japanese)
Montagner J-P, Juhel K, Barsuglia M, Ampuero JP, Chassande-Mottin E, Harms J, Whiting B, Bernard P, Clévédé E, Lognonné P (2016) Prompt gravity signal induced by the 2011 Tohoku-Oki earthquake. Nat Commun 7:13349. https://doi.org/10.1038/ncomms13349
Shoda A, Ando M, Ishidoshiro K, Okada K, Kokuyama W, Aso Y, Tsubono K (2014) Search for a stochastic gravitational-wave background using a pair of torsion-bar antennas. Phys Rev D 89(2):27101. https://doi.org/10.1103/PhysRevD.89.027101
Tonegawa T, Hirahara K, Shibutani T, Shiomi K (2006) Upper mantle imaging beneath the Japan Islands by Hi-net tiltmeter recordings. Earth Planets Space 58(8):1007–1012. https://doi.org/10.1186/BF03352605
Vallée M, Ampuero JP, Juhel K, Bernard P, Montagner J-P, Barsuglia M (2017) Observations and modeling of the elastogravity signals preceding direct seismic waves. Science 358:1164–1168. https://doi.org/10.1126/science.aao0746

MK performed most of the waveform analysis. NK supervised MK's work. NK and MK wrote the manuscript. MK, NK, and SW contributed to the planning. NK, SW, MO, AA, YI, MA, and TK contributed to the interpretation of the results and gave useful advice. YI and TK contributed to the SG and Hi-net tiltmeter data acquisition, respectively. All authors read and approved the final manuscript.

We thank the editor Severine Rosat and three anonymous reviewers for their constructive comments and suggestions that helped to improve the manuscript. We also thank Nobuaki Fuji for valuable discussion.

The SG data used in our study are available on request from the authors. The F-net data are available at the NIED F-net server http://www.fnet.bosai.go.jp. The Hi-net tiltmeter data used in our study can be obtained from NIED Japan by sending a request to [email protected].

This research was supported by JSPS (KAKENHI JP15K13559, JP16K05532, JP18J21734) and MEXT via the Program for Leading Graduate Schools and the Earthquake and Volcano Hazards Observation and Research Program.
Earthquake Research Institute, The University of Tokyo, Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
Masaya Kimura, Nobuki Kame, Shingo Watada, Akito Araya & Yuichi Imanishi

National Institute of Advanced Industrial Science and Technology, Namiki, Tsukuba, 305-8560, Japan
Makiko Ohtani

Department of Physics, The University of Tokyo, Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
Masaki Ando

National Research Institute for Earth Science and Disaster Resilience, Tennodai, Tsukuba, 305-0006, Japan
Takashi Kunugi

Correspondence to Masaya Kimura.

Additional file 1. Figures showing (1) a diagram of instrumental responses, (2) acceleration waveforms of F-net broadband seismometers, and (3) an enlarged view of the stacked traces of F-net broadband seismometers and Hi-net tiltmeters data.

Additional file 2. List of the 27 F-net stations used for the stacking.

Appendix 1: Data processing and station selection criteria of Vallée et al. (2017) and characteristics of the waveforms presented in the study
Vallée et al. (2017) retrieved all the regional broadband vertical seismic records at distances up to 3000 km from the 2011 Tohoku-Oki earthquake hypocenter from the Incorporated Research Institutions for Seismology (IRIS) data center and from F-net.
The number of stations was approximately 200 in this region, many of which were deployed in Japan and northeast China. They conducted signal processing (procedure V) as follows: they (1) terminated each station time series at the P-wave arrival time; (2) removed the mean value; (3) deconvolved the sensor response and converted it into a band-limited accelerogram using the Seismic Analysis Code (SAC, Goldstein and Snoke 2005) command "transfer"; and then (4) applied a band-pass filter (0.002-Hz two-pole high-pass and 0.03-Hz six-pole low-pass causal Butterworth filters). During procedure V, no tapering was applied to the records. Among all the processed records, they selected nine records based on a noise criterion that the waveform amplitude never exceeded ± 0.8 nm/s2 during the 30-min window before the earthquake origin time. They additionally selected two F-net stations (Shari, Fukue) to improve the azimuthal and distance coverage. The hypocentral distances of the selected 11 stations ranged from 427 to 3044 km. Nine waveforms out of the 11, three in the Japanese islands and six on the Asian continent, shared a downward trend beyond the seismic noise before the P-wave arrivals. The amplitudes immediately prior to the P-wave arrivals were approximately 1 nm/s2 for hypocentral distances of 1000–2000 km (Fig. 1 of Vallée et al. 2017), the range in which Vallée et al. considered the observability of the signals to reach a maximum.
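The band-pass filter of step (4) can be reproduced with standard tools; the sketch below assumes a 1-Hz sampling rate and uses causal (single-pass) filtering, as in procedure V:

```python
import numpy as np
from scipy import signal

# 0.002-Hz two-pole high-pass followed by 0.03-Hz six-pole low-pass,
# both causal Butterworth filters. The sampling rate is an assumption.
fs = 1.0                                   # Hz, assumed sampling rate
b_hp, a_hp = signal.butter(2, 0.002, btype="highpass", fs=fs)
b_lp, a_lp = signal.butter(6, 0.03, btype="lowpass", fs=fs)

def bandpass_causal(trace):
    """Apply both filters causally (lfilter, not the zero-phase filtfilt)."""
    return signal.lfilter(b_lp, a_lp, signal.lfilter(b_hp, a_hp, trace))

t = np.arange(0, 3600, 1.0 / fs)
in_band = np.sin(2 * np.pi * 0.01 * t)     # 0.01 Hz: inside the pass band
out_band = np.sin(2 * np.pi * 0.2 * t)     # 0.2 Hz: above the low-pass corner
print(np.max(np.abs(bandpass_causal(in_band)[1800:])),
      np.max(np.abs(bandpass_causal(out_band)[1800:])))
```

Causal filtering matters here: a zero-phase (two-pass) filter would smear energy backward in time, exactly the kind of acausal artifact discussed in "Appendix 2."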
Appendix 2: Data processing of this study and the difference between Vallée et al.'s procedures and those of this study
Our data processing (procedure K) is as follows: we (1) extracted the 60-min time series data starting at 46 min before the origin time; (2) calibrated the raw digital count data into velocity by dividing by the sensor sensitivity coefficient; (3) converted it from velocity to acceleration through the finite difference of the digital velocity data; (4) multiplied a cosine taper at the first and last 10% sections of the time series; and then (5) applied the same band-pass filter employed in procedure V. The 60-min time series is sufficiently long that the taper does not attenuate the signal of interest. Procedure K does not involve the recovery of the instrumental response and therefore is perfectly causal. Though the data used contain the subsequent large-amplitude P-waves, no acausal artificial signals originated from that section. In contrast, as mentioned in "Appendix 1", Vallée et al. (2017) terminated the waveforms at the P-wave arrival time and deconvolved the sensor response. Removal of the instrumental response from the terminal portion of a time series acts as an acausal filter on the waveform, which can generate spurious signals just before the P-wave arrival. In Fig. 3b, by contrast, the retrieved waveforms were processed using our perfectly causal method; the waveform at NE93 does not show the downward trend seen in Vallée et al. (2017). Note that most of the available data were clipped after the P-wave arrivals. This is why neither Vallée et al. (2017) nor our study deconvolved the sensor response from the time series including the P-wave section.
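Steps (2)-(4) of procedure K can be sketched as follows; the sensitivity and sampling values are illustrative assumptions, and a Tukey window with α = 0.2 implements the 10% cosine taper at each end:

```python
import numpy as np
from scipy import signal

# Minimal sketch of procedure K, steps (2)-(4), on a synthetic count record.
fs = 1.0                       # Hz, assumed sampling rate
sensitivity = 1.0e9            # counts per (m/s), assumed coefficient
counts = np.random.default_rng(1).normal(0, 1e3, 3600)   # raw digital counts

velocity = counts / sensitivity                      # step (2): calibrate
accel = np.gradient(velocity, 1.0 / fs)              # step (3): finite difference
taper = signal.windows.tukey(len(accel), alpha=0.2)  # step (4): cosine taper
tapered = accel * taper                              # over 10% of each end
```

The tapered trace would then be passed through the same causal band-pass filter as in procedure V (step 5).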
Appendix 3: Station selection criterion and polarity reversal correction for stacking of F-net and Hi-net data
For the stacking of F-net data, we selected 27 stations where the noise level \(A_{\text{N}}\) was less than one-twentieth of the reference value \(A_{\text{S}}\). Here, \(A_{\text{S}}\) is the amplitude of the synthetic gravity waveform predicted by the model of Harms et al. (2015), filtered in the same way as the observed waveforms. As an example, the filtering decreases the amplitude of the prompt gravity perturbation at Kamioka (Fig. 1a) from − 23 to − 5 nm/s2 (Kimura 2018). Our selection criterion based on the model of Harms et al. (2015) eventually corresponded to selecting stations with a very low noise level and a hypocentral distance longer than 500 km. For these 27 records, we applied a polarity reversal correction, i.e., the data were multiplied by − 1 at stations where the predicted gravity change \(\left( {\delta \varvec{g}^{\text{H}} } \right)_{z}\) is positive, and vice versa, based on the model of Harms et al. (2015) (Fig. 1b). The predicted polarities at the 27 stations were the same; we simply added the traces to obtain the stacked waveform. For the stacking of Hi-net data, the trace selection criterion and the polarity reversal correction were likewise based on Harms et al. (2015). Because of the noisy data, traces were chosen based on the criterion that the \(A_{\text{S}} /A_{\text{N}}\) ratio be greater than unity. Note that the model of Harms et al. (2015) provides the prompt gravity change for all three vector components.
Appendix 4: Expression for theoretical gravito-gradiograms
Here, we show the theoretical gravito-gradiogram, the waveform of a gravity gradient, in an explicit form. Our formula can be used to synthesize template waveforms for the detection of prompt gravity perturbations through the measurement of a gravity gradient or strain acceleration using state-of-the-art devices such as TOBA. Our expression is an extension of Harms et al.
(2015) as it can deal with the general seismic source represented by a moment tensor. We assume the same simplifications of Harms et al. (2015). The derivation starts from the equivalence between two potentials as follows (Harms et al. 2015): $$\delta \psi \left( {\varvec{x},t} \right) = - 4\pi G\rho_{0} \phi \left( {\varvec{x},t} \right),$$ where \(\varvec{x}\) is the receiver position, \(t\) the time, \(G\) the gravitational constant, \(\rho_{0}\) the density of the medium, \(\delta \psi\) the gravity potential perturbation, and \(\phi\) the compressional seismic potential. From Eq. 1, the gravity perturbation vector \(\delta \varvec{g}\left( {\varvec{x},t} \right)\) is represented as follows: $$\delta \varvec{g}\left( {\varvec{x},t} \right) = - \nabla \delta \psi \left( {\varvec{x},t} \right) = 4\pi G\rho_{0} \nabla \phi \left( {\varvec{x},t} \right) = 4\pi G\rho_{0} \varvec{u}^{\phi } \left( {\varvec{x},t} \right),$$ where \(\varvec{u}^{\phi }\) is the scalar potential component of the seismic displacement \(\varvec{u}\) \(\left( {\varvec{u}^{\phi } = \nabla \phi } \right)\). 
Employing the well-known solution of the seismic displacement from a general seismic source represented by a moment tensor (Aki and Richards 2002), we obtained the analytical expression for the components of the prompt gravity perturbations \(\delta \varvec{g}\left( {\varvec{x},t} \right)\) as follows: $$\begin{aligned} \delta g_{n} & = - \left( {15\gamma_{n} \gamma_{p} \gamma_{q} - 3\gamma_{n} \delta_{pq} - 3\gamma_{p} \delta_{qn} - 3\gamma_{q} \delta_{np} } \right)\frac{G}{{r^{4} }}\mathop \int \limits_{0}^{r/\alpha } \tau M_{pq} \left( {t - \tau } \right){\text{d}}\tau \\ & \quad + \left( {6\gamma_{n} \gamma_{p} \gamma_{q} - \gamma_{n} \delta_{pq} - \gamma_{p} \delta_{qn} - \gamma_{q} \delta_{np} } \right)\frac{G}{{\alpha^{2} r^{2} }}M_{pq} \left( {t - \frac{r}{\alpha }} \right) \\ & \quad + \gamma_{n} \gamma_{p} \gamma_{q} \frac{G}{{\alpha^{3} r}}\dot{M}_{pq} \left( {t - \frac{r}{\alpha }} \right), \\ \end{aligned}$$ where \(\gamma_{i}\) is the directional cosine, \(\delta_{ij}\) the Kronecker delta, \(r\) the distance between the source and receiver, \(\alpha\) the P-wave velocity, and \(M_{pq}\)(t) the moment function. This expression uses orthonormal bases and is familiar to seismologists. The first term on the right-hand side shows the prompt term. It coincides with that of Harms et al. (2015) for a shear dislocation (a double couple) source. In contrast to the prompt gravity acceleration, the measurement of the corresponding prompt gravity gradient change (or strain acceleration) is not affected by the ground motion. It is expressed as the spatial derivative of Eq. 
3 as follows: $$\begin{aligned} \ddot{h}_{nm} & : = \frac{{\partial \delta g_{n} }}{{\partial x_{m} }} = 4\pi G\rho_{0} \frac{{\partial u_{n}^{\phi } }}{{\partial x_{m} }} \\ & = R_{5} \frac{G}{{r^{5} }}\mathop \int \limits_{0}^{{\frac{r}{\alpha }}} \tau M_{pq} \left( {t - \tau } \right)d\tau + R_{3} \frac{G}{{\alpha^{2} r^{3} }}M_{pq} \left( {t - \frac{r}{\alpha }} \right) \\ & \quad + R_{2} \frac{G}{{\alpha^{3} r^{2} }}\dot{M}_{pq} \left( {t - \frac{r}{\alpha }} \right) + R_{1} \frac{G}{{\alpha^{4} r}}\ddot{M}_{pq} \left( {t - \frac{r}{\alpha }} \right), \\ \end{aligned}$$ where \(\ddot{h}_{nm}\) denotes the nmth component of the gravity gradient tensor and $$\begin{aligned} R_{5} & = 105\gamma_{n} \gamma_{p} \gamma_{q} \gamma_{m} - 15\left( {\delta_{mn} \gamma_{p} \gamma_{q} + \delta_{mp} \gamma_{q} \gamma_{n} + \delta_{mq} \gamma_{n} \gamma_{p} + \delta_{pq} \gamma_{n} \gamma_{m} + \delta_{qn} \gamma_{p} \gamma_{m} + \delta_{np} \gamma_{q} \gamma_{m} } \right) \\ & \quad + 3\left( {\delta_{pq} \delta_{mn} + \delta_{qn} \delta_{pm} + \delta_{np} \delta_{qm} } \right), \\ R_{3} & = - 45\gamma_{n} \gamma_{p} \gamma_{q} \gamma_{m} + 6\left( {\delta_{mn} \gamma_{p} \gamma_{q} + \delta_{mp} \gamma_{q} \gamma_{n} + \delta_{mq} \gamma_{n} \gamma_{p} + \delta_{pq} \gamma_{n} \gamma_{m} + \delta_{qn} \gamma_{p} \gamma_{m} + \delta_{np} \gamma_{q} \gamma_{m} } \right) \\ & \quad - \left( {\delta_{pq} \delta_{mn} + \delta_{qn} \delta_{pm} + \delta_{np} \delta_{qm} } \right), \\ R_{2} & = - 10\gamma_{n} \gamma_{p} \gamma_{q} \gamma_{m} + \left( {\delta_{mn} \gamma_{p} \gamma_{q} + \delta_{mp} \gamma_{q} \gamma_{n} + \delta_{mq} \gamma_{n} \gamma_{p} + \delta_{pq} \gamma_{n} \gamma_{m} + \delta_{qn} \gamma_{p} \gamma_{m} + \delta_{np} \gamma_{q} \gamma_{m} } \right), \\ R_{1} & = - \gamma_{n} \gamma_{p} \gamma_{q} \gamma_{m} . \\ \end{aligned}$$ This tensor is symmetric and has six different components. 
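The symmetry stated above can be checked numerically. The sketch below evaluates the \(R_{5}\) radiation-pattern coefficient for a random unit direction and contracts it with a symmetric moment tensor:

```python
import numpy as np

# Numerical check: after contraction with a symmetric moment tensor M_pq,
# the R5 term of Eq. 4 yields a symmetric tensor (h_nm = h_mn).
rng = np.random.default_rng(0)
gamma = rng.normal(size=3)
gamma /= np.linalg.norm(gamma)             # directional cosines, unit vector
M = rng.normal(size=(3, 3))
M = M + M.T                                # symmetric moment tensor
d = np.eye(3)                              # Kronecker delta

def R5(n, p, q, m):
    return (105 * gamma[n] * gamma[p] * gamma[q] * gamma[m]
            - 15 * (d[m, n] * gamma[p] * gamma[q] + d[m, p] * gamma[q] * gamma[n]
                    + d[m, q] * gamma[n] * gamma[p] + d[p, q] * gamma[n] * gamma[m]
                    + d[q, n] * gamma[p] * gamma[m] + d[n, p] * gamma[q] * gamma[m])
            + 3 * (d[p, q] * d[m, n] + d[q, n] * d[p, m] + d[n, p] * d[q, m]))

h = np.zeros((3, 3))
for n in range(3):
    for m in range(3):
        h[n, m] = sum(R5(n, p, q, m) * M[p, q]
                      for p in range(3) for q in range(3))
print(np.allclose(h, h.T))
```

The same check applies to the \(R_{3}\), \(R_{2}\), and \(R_{1}\) terms, since each is invariant under the exchange \(n \leftrightarrow m\).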
Equation 4 has four terms on the right-hand side, and the first term is the prompt term. Each term consists of (1) a radiation pattern, (2) a distance-dependent term, and (3) a time-dependent term. Once we specify the moment tensor, theoretical waveforms for any receiver position can be efficiently calculated using the formula. The full expression of \(\frac{{\partial u_{n} }}{{\partial x_{m} }}\) is presented in Kimura and Kame (2019). The expression of theoretical gravito-gradiograms from a single-force source is presented in Kimura (2018).
Keywords: Earthquake-induced gravity perturbation; Transient deformation; Time variable gravity; 2011 Tohoku-Oki earthquake; Seismology
Power control strategy of a photovoltaic system with battery storage system
Khouloud Bedoud (ORCID: orcid.org/0000-0003-1290-0041)1,2, Hichem Merabet (ORCID: orcid.org/0000-0001-7479-3195)1 & Tahar Bahi (ORCID: orcid.org/0000-0001-6822-2492)2
Journal of Engineering and Applied Science volume 69, Article number: 116 (2022)
In this paper, an intelligent approach based on fuzzy logic is developed to ensure operation at the maximum power point of a PV system under dynamic climatic conditions. The current distortion due to the use of static converters in photovoltaic production systems involves the consumption of reactive energy. For this reason, separate control of the active and reactive powers using a proportional-integral controller is applied. Using batteries for energy storage in a photovoltaic system has become an increasingly promising solution to improve the quality of the delivered current and voltage. For this purpose, the energy management of batteries for regulating the charge level under dynamic climatic conditions is studied. The research presented in this paper contributes to the application of fuzzy theory to improve the power and performance of a hybrid system comprising a grid-connected PV array, a battery, and an energy management strategy. To highlight the advantage of the FL-MPPT studied in this paper, its performance is compared and analyzed against the conventional P&O and NNT algorithms. Simulations are carried out in MATLAB/Simulink. The analysis of the results demonstrates improved energy quality.
Nowadays, the reduction of greenhouse gas emissions has become a genuine concern for governments around the world. Therefore, the exploitation of green and clean energy resources (solar and wind energy) is an essential solution for environmental protection on the one hand and for meeting the enormous energy demand on the other.
Thanks to their low cost, ease of installation and maintenance, and high efficiency, photovoltaic (PV) systems for the production of electrical energy from solar irradiation have seen significant development in different fields such as modern buildings, pumping systems, and rural areas [1,2,3,4,5]. Recently, numerous works on PV conversion systems have addressed control, storage [6], meteorological and operational parameters [7, 8], and thermal regulation [9]. However, developing a reliable control technique for operation at maximum power is necessary. A variety of approaches such as perturb and observe (P&O), hill climbing (HC), incremental conductance (IC), genetic algorithms (GA), artificial neural networks (ANN), and fuzzy logic (FL) [10, 11] have been studied and developed to extract and maintain operation at the maximum power point (MPP) of the PV system. In the literature, P&O and HC are the most widely used PV system algorithms, owing to their low cost, simplicity, and ease of implementation [12,13,14]. These two algorithms share the same operating principle, except that the output control variable is the duty cycle for HC and the voltage for P&O. Their two major drawbacks are the oscillation around the optimal power point and poor tracking of the MPP in the case of sudden changes in meteorological conditions, notably the temperature (T) and irradiation (G). To deal with these drawbacks, a modified P&O algorithm is reported in many studies for its better performance and dynamic efficiency compared to the classical P&O [13, 15]. Likewise, for the same reasons, several works have studied and developed an improved version of HC called IC, which can extract the MPP even under rapidly changing operating conditions, with fast convergence.
Authors in [13, 14, 16, 17] have confirmed that the difficulty of implementation and the high cost are the main disadvantages of the IC algorithm compared to P&O. Currently, the efficiency and excellent performance of maximum power point tracking (MPPT) approaches based on artificial intelligence, such as genetic algorithms (GA), artificial neural networks (ANN), and fuzzy logic (FL), have attracted the attention of researchers. A. Alice Hepzibah [14] argued that these algorithms are more stable and ensure a quick response time for all irradiance levels. The fuzzy logic MPPT (FL-MPPT) algorithm allows more efficient power extraction, is simple to use, and does not require sensors to measure temperature and irradiation, in contrast to other algorithms such as ANN, which need a large database for training, testing, and validation. FL-MPPT offers attractive properties: precise MPP localization, efficient and fast tracking with fewer oscillations, and reliable behavior. The high performance of FL-MPPT was experimentally verified and tested under different climate variations in [18]. This motivates us to develop an FL-MPPT algorithm based on Mamdani's method. The triangular membership function is used because it is the fastest form [19, 20]. Therefore, a suitable compromise between expert knowledge and the inference system rules is required. Moreover, since PV energy is intermittent, an energy management strategy is necessary for maintaining the balance between supply and demand [21]. In the case of high energy production, the surplus can be stored in batteries and used either at night or when the photovoltaic generator (PVG) output falls short [6, 22].
Indeed, the goal of the work presented in this paper is, on the one hand, to ensure good control strategies for the PV system in order to obtain a better quality of the energy injected into the grid and, on the other hand, to ensure better energy management of the battery storage system (BSS) under variations of irradiation and temperature. The primary purpose of BSS management and control is to increase the battery cycle life by reducing current fluctuation, avoiding battery overcharging, and maintaining a balance between supply and demand [23]. Despite the large number of works on this topic, few papers have studied the application of FL-MPPT to a grid-connected PV system equipped with an energy management system. The main contributions of our work are the application of FL-MPPT to a PV system with a BSS, a comparative study of the performance of this algorithm against the conventional P&O algorithm and the advanced NNT algorithm for MPP tracking, and an analysis of the quality of the current injected into the grid. This research work deals with the following control strategies under variable climatic conditions:
Fuzzy-based MPPT control to track the maximum power point;
DC-DC converter control using a duty cycle based on PI regulators;
DC-bus control;
DC-AC inverter control;
Comparative study;
BSS energy management.
The proposed system structure is shown in Fig. 1. It mainly includes the PVG, a DC-DC boost converter, a battery, and an inverter connected to the grid through inductors. The boost converter steps the input voltage up to a higher output voltage: when the switch is closed, energy is stored in the inductor, and it is discharged otherwise [24]. Generally, the structure consists of two parts: the first part is devoted to the control of the PV conversion chain (fuzzy MPPT, DC-DC converter, DC bus, and DC-AC inverter), while in the second part, the BSS energy management is carried out.
It can be seen that the BSS is directly connected to the DC bus through the control management system.
Structure of proposed system
PV array modeling
The PV panel consists of multiple modules connected in series or parallel to increase the voltage level or the current level, respectively. Figure 2 shows the PV cell equivalent circuit, composed of a current source, two resistances (series and shunt), and an antiparallel diode.
PV cell equivalent circuit
The current source (\({I}_{s})\) is expressed by the following equation [14, 25]:
$${I}_{s}=\left(\frac{G}{{G}_{ref}}\right)({I}_{s\_ref}+{K}_{sc}.\left(T-{T}_{ref}\right))$$
where \(G\) and \(T\) are the irradiance and the ambient temperature, respectively, and \({K}_{sc}\) is the short-circuit current coefficient. Under standard conditions, the current, irradiation, and temperature are \({I}_{s\_ref}\), \({G}_{ref}\), and \({T}_{ref}\), respectively. As shown in Eq. (1), the current varies with irradiation and temperature; on the other hand, the \({I}_{sat}\) current depends only on the temperature variation [26].
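Eq. (1) can be sketched directly; the panel parameters below are illustrative assumptions (the short-circuit current value is taken from Fig. 3):

```python
# Sketch of Eq. 1: photogenerated current as a function of irradiance G
# and temperature T. Parameter values are illustrative assumptions.
G_ref, T_ref = 1000.0, 25.0    # W/m^2, deg C (standard test conditions)
I_s_ref = 6.05                 # A, short-circuit current at STC (from Fig. 3)
K_sc = 0.003                   # A/K, short-circuit current coefficient (assumed)

def photocurrent(G, T):
    """I_s = (G / G_ref) * (I_s_ref + K_sc * (T - T_ref))."""
    return (G / G_ref) * (I_s_ref + K_sc * (T - T_ref))

print(photocurrent(1000.0, 25.0))   # 6.05 A at STC
print(photocurrent(500.0, 25.0))    # halved irradiance halves the current
```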
Following Kirchhoff's law, the output current of the PV panel (\({I}_{pv}\)) is given by [4, 13, 14, 27]:
$${I}_{pv}={I}_{s}-{I}_{d}-{I}_{shu}$$
So, we can write [28]:
$${I}_{pv}={I}_{s}- {I}_{sat}\left[\mathit{exp}\left(\frac{q\left({v}_{pv}+\left({I}_{pv}*{R}_{Ser}\right)\right)}{nkT}\right)-1\right]-\frac{{V}_{pv}+\left({I}_{pv}*{R}_{Ser}\right)}{{R}_{shu}}$$
$${{I}_{d}=I}_{sat}\left[\mathit{exp}\left(\frac{q\left({v}_{pv}+\left({I}_{pv}*{R}_{Ser}\right)\right)}{nkT}\right)-1\right]$$
$${I}_{shu}=\frac{{V}_{pv}+\left({I}_{pv}*{R}_{Ser}\right)}{{R}_{shu}}$$
The boost converter transfer function can be written as follows [26]:
$${v}_{m}=\frac{1}{1-D}{v}_{pv}$$
According to the power conservation law, the relationship between the input/output average currents is given by:
$${I}_{pv}=\frac{1}{1-D}{I}_{dc}$$
The DC bus equation is expressed by:
$$\frac{{dv}_{dc}}{dt}=\frac{1}{C}({I}_{dc}-{I}_{inv})$$
DC-AC inverter
The inverter, which is the adaptation stage, converts the DC voltage into an AC voltage with the desired frequency and amplitude. The inverter control ensures a better quality of the currents and powers (P, Q) injected into the grid. The relationship between the input/output inverter voltages is given by [29]:
$$\left\{\begin{array}{c}{v}_{an}=({S}_{1}-{S}_{2}){v}_{dc}\\ {v}_{bn}={(S}_{2}-{S}_{3}){v}_{dc}\\ {v}_{cn}={{(S}_{3}-S}_{1}){v}_{dc}\end{array}\right.$$
$$\left[\begin{array}{c}{v}_{a}\\ {v}_{b}\\ {v}_{c}\end{array}\right]=\frac{{v}_{dc}}{3}\left[\begin{array}{ccc}2& -1& -1\\ -1& 2& -1\\ -1& -1& 2\end{array}\right]\left[\begin{array}{c}{S}_{1}\\ {S}_{2}\\ {S}_{3}\end{array}\right]$$
where \({v}_{dc}\) is the DC voltage, and \({v}_{in}\) (i = a, b, c) and \({S}_{j}\) (j = 1, 2, 3) are the AC voltages and the switching state signals, respectively.
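Eq. (10) can be sketched numerically; the DC-link voltage below is an illustrative assumption:

```python
import numpy as np

# Sketch of Eq. 10: phase voltages of the two-level inverter as a function
# of the switching state vector (S1, S2, S3), each element being 0 or 1.
def phase_voltages(v_dc, s):
    m = np.array([[2.0, -1.0, -1.0],
                  [-1.0, 2.0, -1.0],
                  [-1.0, -1.0, 2.0]])
    return (v_dc / 3.0) * (m @ np.asarray(s, dtype=float))

v = phase_voltages(600.0, (1, 0, 0))   # one upper switch on, assumed 600-V link
print(v)   # the three phase voltages always sum to zero
```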
The grid voltage equation is given by [29]:
$$\left[\begin{array}{c}{v}_{ga}\\ {v}_{gb}\\ {v}_{gc}\end{array}\right]=\left[\begin{array}{c}{v}_{a}\\ {v}_{b}\\ {v}_{c}\end{array}\right]+R\left[\begin{array}{c}{I}_{ga}\\ {I}_{gb}\\ {I}_{gc}\end{array}\right]+L\frac{d}{dt}\left[\begin{array}{c}{I}_{ga}\\ {I}_{gb}\\ {I}_{gc}\end{array}\right]$$
In order to control the active (P) and reactive (Q) powers separately, the decoupling between these two electrical quantities has been studied and realized. For a balanced system, we can write the powers \({P}_{g}\) and \({Q}_{g}\) as follows [29]:
$$\left\{\begin{array}{c}{P}_{g}=\frac{3}{2}({v}_{gd}{I}_{gd}+{v}_{gq}{I}_{gq})\\ {Q}_{g}=\frac{3}{2}({v}_{gq}{I}_{gd}-{v}_{gd}{I}_{gq})\end{array}\right.$$
Aligning the d-axis of the rotating frame with the grid voltage vector (\({v}_{gq}=0\)), we can write:
$$\left\{\begin{array}{c}{P}_{g}=\frac{3}{2}{v}_{gd}{I}_{gd}\\ {Q}_{g}=-\frac{3}{2}{v}_{gd}{I}_{gq}\end{array}\right.$$
where \({v}_{gdq}\) is the grid voltage and \({I}_{gdq}\) is the grid current.
Control system and energy management
Fuzzy MPPT control
The main objective of the work exposed in this subsection is the extraction of the MPP from the PVG and, therefore, of the current \({I}_{MPP}\) and voltage \({v}_{MPP}\) used to define and adjust the duty cycle based on an efficient and robust fuzzy MPPT algorithm. The PV electrical behavior, current and power versus voltage at a temperature of 25 °C and an irradiation of 1 kW/m2, is shown in Fig. 3. At the short-circuit current (\({I}_{sc}\)), the voltage is equal to zero, and at the open-circuit voltage (\({V}_{oc}\)), the PV current is zero. The \({V}_{oc}\) and the \({I}_{sc}\) are 48.2 V and 6.05 A, respectively. Moreover, the optimal voltage (\({V}_{MPP}\)) and current (\({I}_{MPP}\)), which imply an optimal power (\({P}_{MPP}\)), are 40.51 V and 5.68 A, respectively (see Fig. 3), where \({P}_{MPP}\) varies depending on the climatic conditions.
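The location of the MPP on the P-V curve can be sketched with a simplified single-diode model (series and shunt resistances neglected); the diode parameters below are assumptions chosen only to roughly reproduce the curve of Fig. 3:

```python
import numpy as np

# Locate the MPP on the P-V curve of a simplified single-diode model.
# String layout, ideality factor, and saturation current are assumed.
q, k = 1.602e-19, 1.381e-23
T = 298.15                       # K (25 deg C)
n_cells, n_ideal = 72, 1.3       # assumed: cells in series, ideality factor
I_s, I_sat = 6.05, 5e-9          # A: photocurrent (= I_sc) and saturation current

v = np.linspace(0.0, 50.0, 5001)
vt = n_ideal * k * T / q * n_cells            # thermal voltage of the string
i = np.clip(I_s - I_sat * (np.exp(v / vt) - 1.0), 0.0, None)
p = v * i

k_mpp = np.argmax(p)                          # index of the maximum power point
print(v[k_mpp], i[k_mpp], p[k_mpp])           # approximate V_MPP, I_MPP, P_MPP
```

The same search is what an MPPT controller performs implicitly online: it moves the operating voltage until the numerically flat top of the P-V curve is reached.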
In order to ensure maximum power extraction, a fuzzy-logic-based MPPT control technique is used to generate the duty cycle (D) of the boost converter. Figure 4 shows the control system of the boost converter. PV (a) power and (b) current versus voltage for T = 25 °C and G = 1 kW/m2 Control strategy of the whole PV system The FL-MPPT consists of three blocks: fuzzification, inference system, and defuzzification. Figure 5 shows the structure of the FL-MPPT algorithm. The power variation ∆P and voltage variation ∆v are used as input variables of the fuzzy inference system, and ∆D as the output. The relation between these variables is defined based on fuzzy set theory. The fuzzy system inputs and output are given by: Fuzzy MPPT algorithm flowchart $$\left\{\begin{array}{c}\Delta P=P\left(k\right)-P\left(k-1\right) \\ \Delta v = v\left(k\right)-v\left(k-1\right) \\ \Delta D = D\left(k\right)-D\left(k-1\right)\end{array}\right.$$ During fuzzification, the membership functions map the numerical input variables to fuzzy variables. However, it is necessary to satisfy the criteria given by [30]: $$\left\{\begin{array}{c}E\left(k\right)=\frac{P\left(k\right)-P(k-1)}{I\left(k\right)-I(k-1)} \\ \Delta E\left(k\right)=E\left(k\right)-E(k-1)\end{array}\right.$$ where \(E\left(k\right)\) and \(\Delta E\left(k\right)\) represent the error and the error variation at instant k. \(E\left(k\right)\) locates the operating point relative to the MPP at instant k, while \(\Delta E\left(k\right)\) gives the displacement direction. The optimal power point is reached when \(E\left(k\right)\) is zero, thanks to the dynamic variation of the duty cycle according to the climatic conditions. In the inference system block, the rules are applied to the previously fuzzified input data. Mamdani's inference method with Max–Min composition has been used. Figure 6 shows the 3D surface of the fuzzy rules presented in Table 1.
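The controller input signals defined above (∆P, ∆v, E(k), ∆E(k)) can be computed from consecutive samples as in this small sketch (the guard against a zero current change is an added assumption, since the formula divides by I(k) − I(k−1)):

```python
class FuzzyMpptInputs:
    """Tracks P, v, I between samples and returns (dP, dv, E, dE) as defined in the text."""

    def __init__(self):
        self.p_prev = self.v_prev = self.i_prev = None
        self.e_prev = 0.0

    def step(self, p, v, i):
        if self.p_prev is None:                 # first sample: no differences yet
            self.p_prev, self.v_prev, self.i_prev = p, v, i
            return 0.0, 0.0, 0.0, 0.0
        dP = p - self.p_prev
        dv = v - self.v_prev
        di = i - self.i_prev
        # E(k) = dP/dI; zero slope of P vs I means the operating point is at the MPP.
        e = dP / di if abs(di) > 1e-12 else 0.0
        de = e - self.e_prev                    # dE(k): displacement direction
        self.p_prev, self.v_prev, self.i_prev, self.e_prev = p, v, i, e
        return dP, dv, e, de
```

Driving `step` with measured P, v, I at each control period yields the crisp inputs that the fuzzification stage then maps onto the NB/N/Z/P/PB sets.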
The membership functions (MF) of the linguistic variables NB = negative big, N = negative, Z = zero, P = positive, and PB = positive big are illustrated in Figs. 7 and 8. Fuzzy MPPT surface Table 1 Fuzzy rules base Output membership variable of the fuzzy MPPT \(\Delta \mathrm{D}\) It should be noted that defuzzification transforms the fuzzy variables back into numerical variables. The centroid algorithm has been used: \(\Delta D\) is defuzzified using Eq. 16 [30]. $$\Delta D=\frac{\sum_{j=1}^{n}\mu (\Delta {D}_{j} )\Delta {D}_{j}}{\sum_{j=1}^{n}\mu (\Delta {D}_{j} )}$$ Control management and energy storage Several works have studied the control of the energy-loss rate caused by the battery-based energy storage and management system [31]. Indeed, in the work published by W. Greenwood et al. [32], the authors used the percentage change of the ramp rate. Other methods are exposed in [33]. The management technique developed in this paper makes it possible to control the battery state of charge (SOC) and discharge according to the desired electrical quantities (voltage and current) at a steady voltage, as well as the energy generated by the PV system, with a reduced response time. All this holds under different weather variations, while avoiding complete destocking and overcharging of the battery in order to increase its life cycle. The SOC and the battery voltage \({v}_{Bat}\) can be calculated as a function of \({I}_{Bat}\) by the equations below [34,35,36]: $$SOC=100\left(1-\left(\frac{\int {I}_{Bat}\,dt}{{C}_{Bat}}\right)\right)$$ $${v}_{Bat}={v}_{Bat-oc}-R{I}_{Bat}$$ $${v}_{Bat-oc}={v}_{0}-{v}_{P}\left(\frac{1-SOC}{SOC}\right){C}_{Bat}+\alpha {e}^{-\beta \left(1-SOC\right){C}_{Bat}}$$ where \({v}_{Bat-oc}\) is the BSS open-circuit voltage, \(R\) is the battery internal resistance, \({C}_{Bat}\) is the capacity (Ah), \({v}_{P}\) is the polarization voltage, and \({v}_{0}\) is the constant voltage of the BSS.
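The fuzzification and centroid-defuzzification steps described above can be sketched as follows; the triangular membership shape is an assumption for illustration (the paper shows its MFs only graphically in Figs. 7 and 8):

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b (assumed shape)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def centroid_defuzzify(dD_values, memberships):
    """Centroid (center-of-gravity) defuzzification of the duty-cycle increment, Eq. (16):
    dD = sum(mu_j * dD_j) / sum(mu_j) over the sampled output universe."""
    num = sum(mu * d for mu, d in zip(memberships, dD_values))
    den = sum(memberships)
    return num / den if den else 0.0
```

A rule-inference stage (Mamdani Max-Min over Table 1) would produce the `memberships` vector; the centroid then yields a single crisp ∆D applied to the boost converter.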
\(\beta\) and \(\alpha\) represent the capacity and the exponential voltage, respectively. Indeed, an implementation of the proposed technique based on proportional–integral (PI) regulators is illustrated in Fig. 9. Two control loops are considered: the first regulates the DC-bus voltage and generates the reference current \({I}_{Bat-Ref}\), while the second controls the current \({I}_{Bat}\) to generate the switching signal \({D}_{CC}\) of the charging circuit. When \({v}_{dc}\) is higher than its reference value, the charging circuit operates as a buck converter in charging mode; on the other hand, when \({v}_{dc}\) is lower than its reference value, it operates as a boost converter in discharging mode. Structure of the control management The modeling and control algorithms of the whole system have been developed in MatLab/Simulink over 8.5 s of simulation. The electrical parameters of the adopted PV module "Sun Power SPR-230E-WHT-D" and of the battery are summarized in Tables 2 and 3, respectively, given in the Appendix. Regarding the solar irradiation and temperature profiles, Fig. 10 depicts the different shapes (ramp up, ramp down, and step up) recommended by the European dynamic standard test EN-50530 [18, 26], used as input disturbances in order to account for realistic atmospheric conditions. Table 2 PV Sun Power SPR-230E-WHT-D parameters Table 3 Battery parameters lithium-ion Curves of (a) irradiation and (b) temperature Under these conditions, the PV generator voltage and current are calculated instantaneously and used as inputs of the fuzzy MPPT algorithm to impose the duty cycle (D) of the boost converter. Likewise, the DC voltage is subsequently used as input of the inverter control algorithm to generate the control signals of the IGBT semiconductors and of the storage system.
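The two-loop PI structure of Fig. 9 and the coulomb-counting SOC equation can be sketched as below. The gains, limits, and sign convention (positive battery current = discharge, consistent with the SOC integral reducing the charge) are assumptions for illustration, not the paper's tuning:

```python
class PI:
    """Discrete PI regulator with output clamping (illustrative gains, not the paper's)."""

    def __init__(self, kp, ki, dt, lo=-1e9, hi=1e9):
        self.kp, self.ki, self.dt, self.lo, self.hi = kp, ki, dt, lo, hi
        self.integ = 0.0

    def step(self, error):
        self.integ += error * self.dt
        return min(max(self.kp * error + self.ki * self.integ, self.lo), self.hi)

def battery_control_step(vdc, vdc_ref, i_bat, outer, inner):
    """Outer loop: DC-bus voltage error -> I_Bat_ref.
    Inner loop: battery-current error -> duty cycle D_cc of the charging circuit."""
    i_bat_ref = outer.step(vdc_ref - vdc)
    d_cc = inner.step(i_bat_ref - i_bat)
    return i_bat_ref, d_cc

def soc_update(soc, i_bat, dt_s, c_bat_ah):
    """Coulomb counting per the SOC equation: SOC = 100*(1 - integral(I_Bat dt)/C_Bat)."""
    return soc - 100.0 * (i_bat * dt_s / 3600.0) / c_bat_ah
```

When `vdc` sags below its reference, the outer loop raises `i_bat_ref` (battery discharges, boost mode); when `vdc` exceeds the reference, `i_bat_ref` goes negative (battery charges, buck mode).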
Figure 11 shows that the DC voltage is well balanced; it converges rapidly towards the reference value VDC-ref = 600 V, with slight variations at the ramp-up and ramp-down of solar irradiation, at a constant temperature of T = 25 °C for \(1\le t<3\) s and T = 45 °C for \(4.8\le t<5.5\) s. However, under rapidly increasing irradiation from 500 W/m2 to 800 W/m2 at t = 7.5 s, a significant peak is noticed; the control nevertheless effectively damps this variation, rapidly increasing irradiation being the main challenge. DC voltage Furthermore, the waveform of the active power injected into the grid is depicted in Fig. 12. In the beginning, the solar irradiation is fixed at 500 W/m2 for 1 s. Afterward, it increases gradually from 500 W/m2 up to 1000 W/m2 over 2 s, and the active power changes from 5 to 9 kW: the proposed FL-MPPT algorithm rapidly tracks the new MPP without overshoot. For \(4.8\ \mathrm{s}\le t<5.5\ \mathrm{s}\), the temperature is constant at T = 45 °C and the solar irradiation follows a down-ramp to 500 W/m2. The active-power waveform is smooth and closely follows the evolution of the solar irradiation. In the case where the solar irradiation is constant at G = 500 W/m2 and the temperature follows a decreasing ramp for \(6.7\ \mathrm{s}\le t<6.9\ \mathrm{s}\), a slight increase in active power can be observed, which settles at 5.4 kW. A fast step-up transient of solar radiation from 500 W/m2 to 800 W/m2 at a constant temperature of T = 25 °C takes place at t = 7.5 s. Hence, the MPP is accurately and consistently tracked according to the solar irradiation intensity. As illustrated in Fig. 13, the reactive power remains unchanged and keeps its reference value (Q = 0) regardless of the climatic conditions, so the unity power factor is achieved.
Reactive power Figure 14 shows the voltage (\({v}_{g}\)) and the current (\({I}_{g}\)) injected into the grid, with a sinusoidal shape for the three voltage and current phases. The voltage injected into the grid remains at constant amplitude, as illustrated in Fig. 14a, while the current injected into the grid varies with the solar irradiation, as shown in Fig. 14b: a ramp-up or rapidly increasing irradiation (t = 7.5 s) leads to an increase of the PV system's current. a Voltage injected to the grid. b Current injected to the grid To highlight the advantages of the FL-MPPT studied in this paper, its performance has been compared with the conventional P&O and NNT algorithms under the EN-50530 dynamic test, as shown in Figs. 15a and 16a. The comparison is evaluated through the total harmonic distortion (THD), the response time, and the ripple around the MPP. The NNT-MPPT algorithm is a multilayer network with one input layer consisting of 1 neuron, two hidden layers with 4 and 40 neurons, respectively, and one output layer; PPV and VPV are the inputs, and D is the output of the NNT. Indeed, the Levenberg–Marquardt (LM) algorithm is used for NNT training, combined with gradient descent and Newton's method, with the mean square error as the performance function. The linear "purelin" function is used in the output layer and the sigmoid activation function "tansig" in the hidden layers. a PV power. b PV power zoom a PV current. b PV current zoom Table 4 summarizes the comparative analysis for G = 1000 W/m2 and T = 45 °C. The enlarged waveforms (zoom) of the power and current in Figs. 15b and 16b show a reduced ripple with the proposed FL-MPPT algorithm, and hence low energy losses. Moreover, the spectrum analysis of IPV depicted in Fig. 17 shows that the minimum THD value, 3.29%, is obtained with the FL-MPPT algorithm.
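The forward pass of the described NNT (two tansig hidden layers, purelin output) can be sketched as below. The layer shapes assume the two inputs PPV and VPV; the weights would come from the Levenberg–Marquardt training described in the text, which is not reproduced here:

```python
import numpy as np

def tansig(x):
    # MATLAB's tansig is the hyperbolic tangent sigmoid.
    return np.tanh(x)

def nnt_duty(x, weights, biases):
    """Forward pass of the described MLP: tansig hidden layers, purelin (identity) output.
    weights/biases are lists of per-layer arrays, assumed obtained from LM training."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = tansig(w @ h + b)
    return weights[-1] @ h + biases[-1]   # purelin output layer: duty cycle D
```

With layer sizes 2 -> 4 -> 40 -> 1 as described, a trained parameter set maps each (PPV, VPV) sample directly to a duty cycle, which is what makes the NNT tracker fast at run time despite its training cost.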
The latter achieves excellent performance, including a fast response time of 0.03 s, compared to 0.1 s and 0.08 s in [37] and [38], respectively, and reduced power (PPV = 4.92 W) and current oscillations, with a ripple of 3.35 compared to 24.56 and 62.45 for NNT and P&O, respectively. This ensures a better quality of the energy injected into the grid. Moreover, during the step when the PV system is under rapidly increasing irradiation at a fixed temperature of T = 25 °C (t = 7.5 s), the FL-MPPT performs significantly better than the P&O-MPPT in terms of overshoot and response time. Table 4 Performance comparison for G = 1000 W/m2 and T = 45 °C Spectrum analysis of PV current in the cases of a FL-MPPT, b NNT-MPPT, and c P&O-MPPT The European EN-50530 dynamic test highlights the performance of the FL-MPPT algorithm. The results analysis shows the following advantages: the proposed control algorithm has proved its performance in terms of response time and overshoot, even for rapidly increasing irradiation; the active power is stable and smooth and evolves with the irradiation intensity, reaching the maximum power at each moment thanks to the FL-MPPT, which continuously locates the MPP, with a THD of less than 5% as per the IEEE-519 standard [39]; the unity power factor is achieved; and the energy losses are low, due to the smooth waveform with small oscillations around the MPP and the fast response time. Although the proposed approach is the best in terms of performance and power quality, it has limitations, particularly under partial-shading operating conditions. In these conditions, the characteristic Ppv = f(Vpv) presents several local maxima in addition to the global MPP. Therefore, in the case of partial shading, the use of metaheuristic methods is recommended to locate the global maximum power point. Figures 18 and 19 show the outputs of the control management and energy storage system: the battery current (\({I}_{Bat}\)) and the battery voltage (\({v}_{Bat}\)), respectively.
It is noted that the battery current perfectly follows the reference current delivered by the PI regulator of the DC voltage, with little ripple. When the battery current is negative, the battery absorbs energy from the PV system; otherwise, the management system ensures the discharge of the battery. Figure 20 shows the battery state of charge (SOC). Battery current State of charge (SOC %) This study develops and presents a PV conversion chain associated with a battery storage system in the MatLab/Simulink environment. For better PV-system efficiency, the installation must be accompanied by the development of consistent control strategies and BSS energy management. The simulation results show that, despite the gradual or sudden variations in irradiation and temperature recommended by the EN-50530 dynamic test, the feasibility and effectiveness of the control and management systems are proved. The active power and the DC voltage follow their desired values accurately. The reactive power is kept at zero to ensure the unity power factor. The application of the FL-MPPT algorithm for faster tracking of the MPP improves the system performance and the quality of the energy injected into the grid, with fewer oscillations. Its THD of 3.29% is the lowest value compared with the other algorithms, such as P&O and NNT. Also, the economical and simpler FL-MPPT algorithm requires neither auxiliary circuits (sensors to measure temperature and irradiation) like P&O nor a large database like NNT for learning. Indeed, the work developed in this paper presents a promising solution for controlling the powers, ensuring the unity power factor, and maintaining the balance between demand and supply. Finally, as perspectives of this work, the development of practical online control techniques and the implementation of storage-system management and inverter fault diagnosis will be the subject of our future work. All presented data are available upon request.
ANN: Artificial neural network
\({v}_{Bat\_C}\): Battery charge voltage
\({I}_{Bat}\): Battery current
\({I}_{PV}\): PV panel current
\({v}_{Bat\_D}\): Battery discharge voltage
\({v}_{Bat}\): Battery voltage
\(k\): Boltzmann constant
\({I}_{d}\): Diode current
\({D}_{CC}\): Duty cycle of charging circuit
\(q\): Electron charge
FL: Fuzzy logic
FL-MPPT: Fuzzy logic MPPT
GA: Genetic algorithms
HC: Hill climbing
\(n\): Ideality factor
MPP: Maximum power point
MPPT: Maximum power point tracking
\({v}_{oc}\): Open circuit voltage
\({I}_{s}\): Photoelectric current
PVG: Photovoltaic generator
P&O: Perturb and observe
\({I}_{sat}\): Reverse saturation current
\({I}_{sc}\): Short circuit current
\({R}_{Ser}\): Series resistance
\({R}_{shu}\): Shunt resistance
Fabio LA, Adélio JM, Geraldo CG, Sérgio MRS, Alexandre R (2010) Photovoltaic solar system connected to the electric power grid operating as active power generator and reactive power compensator. Sol Energy 84:1310–1317. https://doi.org/10.1016/j.solener.2010.04.011 Herman B, Antti P, Antti A, Tero M, Mikko RAP, Anders VL (2020) Photovoltaic system modeling: a validation study at high latitudes with implementation of a novel DNI quality control method. Sol Energy 204:316–329. https://doi.org/10.1016/j.solener.2020.04.068 Aissou S, Rekioua D, Mezzai N, Rekioua T, Bacha S (2015) Modeling and control of hybrid photovoltaic wind power system with battery storage. Energy Convers Manage 89:615–625. https://doi.org/10.1016/j.enconman.2014.10.034 Bedoud K, Bahi T, Merabet H (2019) Modeling and characteristics study of photovoltaic generator. In: ICSRESA 2019, 1st International Conference on Sustainable Renewable Energy Systems and Applications, 2019, IEEExplore, pp. 1–6. https://doi.org/10.1109/ICSRESA49121.2019.9182545 Bartosz CMS, Dorota C (2021) Analysis of operation and energy performance of a heat pump driven by a PV system for space heating of a single family house in Polish conditions. Renewable Energy 165:117–126.
https://doi.org/10.1016/j.renene.2020.11.026 Hadi T, Hamid T (2021) Adaptive robust control-based energy management of hybrid PV-Battery systems with improved transient performance. Int J Hydrogen Energy 46:7442–7453. https://doi.org/10.1016/j.ijhydene.2020.11.243 Bigorajski J, Chwieduk D (2019) Analysis of a micro photovoltaic/thermal–PV/T system operation in moderate climate. Renewable Energy 137:127–136. https://doi.org/10.1016/j.renene.2018.01.116 Fuentes M, Vivar M, De La Casa J et al (2018) An experimental comparison between commercial hybrid PV-T and simple PV systems intended for BIPV. Renew Sustain Energy Rev 93:110–120. https://doi.org/10.1016/j.rser.2018.05.021 Hongtao Xu, Ning W, Chenyu Z, Zhiguo Qu, Fariborz K (2021) Energy conversion performance of a PV/T-PCM system under different thermal regulation strategies. Energy Convers Manag 29:113660. https://doi.org/10.1016/j.enconman.2020.113660 Ali RR, Mohammad HM, Shahriar J (2013) Classification and comparison of maximum power point tracking techniques for photovoltaic system: a review. Renew Sustain Energy Rev 19:433–443. https://doi.org/10.1016/j.rser.2012.11.052 Ahmed F, Ibrahim Z, Dina A (2018) Improved teaching–learning-based optimization algorithm-based maximum power point trackers for photovoltaic system. Electr Eng 100:1773–1784. https://doi.org/10.1007/s00202-017-0654-8 Sharma S, Tikiwala M, Dadhaniya R (2015) Implementation of MPPT algorithm on PV panel using Pic 16F877 controller. Int J Res Eng Technol 4(6):60–67. https://doi.org/10.15623/IJRET.2015.0406009 Alik R, Jusoh A (2017) Modified perturb and observe (P&O) with checking algorithm under various solar irradiation. Sol Energy 148:128–139. https://doi.org/10.1016/j.solener.2017.03.064 Alice HA, Premkumar K (2020) ANFIS current–voltage controlled MPPT algorithm for solar powered brushless DC motor based water pump. Electr Eng 102:421–435. 
https://doi.org/10.1007/s00202-019-00885-8 Mazen AS, Mohamed TEM, Mohamed G (2018) An improved perturb-and-observe based MPPT method for PV systems under varying irradiation levels. Sol Energy 171:547–561. https://doi.org/10.1016/j.solener.2018.06.080 Lyden S, Haque ME (2015) Maximum power point tracking techniques for photovoltaic systems: a comprehensive review and comparative analysis. Renew Sustain Energy Rev 52:1504–1518. https://doi.org/10.1016/j.rser.2015.07.172 Anup A, Satarupa B, Suman S, Mrutyunjaya N (2016) A review of maximum power-point tracking techniques for photovoltaic systems. Int J Sustain Energy 35(5):478–501. https://doi.org/10.1080/14786451.2014.918979 Doubabi H, Salhi I, Chennani M, Essounbouli N (2021) High performance MPPT based on TS Fuzzy-integral backstepping control for PV system under rapid varying irradiance-experimental validation. ISA Trans 118:247–259. https://doi.org/10.1016/j.isatra.2020.01.009 Karthika S, Velayutham K, Rathika P, Devaraj D (2014) Fuzzy logic based maximum power point tracking designed for 10 kW solar photovoltaic system with different membership functions. Int J Electr Comput Energ Electron Commun Eng 8(6):1013–1018. https://doi.org/10.1016/j.isatra.2021.02.004 Verma P, Garg R, Mahajan P (2020) Asymmetrical interval type-2 fuzzy logic control based MPPT tuning for PV system under partial shading condition. ISA Trans 100:251–263. https://doi.org/10.1016/j.isatra.2020.01.009 Mohamed AB, Hossam H, Ripon KC, Michael R (2021) PV-Net: an innovative deep learning approach for efficient forecasting of short-term photovoltaic. Energy Prod J Clean Prod 303:127037. https://doi.org/10.1016/j.jclepro.2021.127037 Chakir A, Tabaa M, Moutaouakkil F, Medromi H, Julien-Salame M, Dandache A et al (2020) Optimal energy management for a grid connected PV-battery system. Energy Rep 6:218–231. 
https://doi.org/10.1016/j.egyr.2019.10.040 Wenlong J, Derrick KXL, Chean HL, Wallace SHW, Wong MLD (2017) Hybrid energy storage retrofit for standalone photovoltaic-battery residential energy system. In: IEEE Innovative Smart Grid Technologies (ISGT-Asia), 4–7 December 2017, Auckland, New Zealand: IEEExplore. 1–6. https://doi.org/10.1109/ISGT-Asia.2017.8378395 Ali Khan M, Ahteshamul H, Bharath Kurukuru VS (2020) Intelligent control of a novel transformerless inverter topology for photovoltaic applications. Electr Eng 102:627–641. https://doi.org/10.1007/s00202-019-00899-2 Nacer B, Syed K, Saleh AAG, Ayshah SA, Alex I (2021) Accurate modeling and simulation of solar photovoltaic panels with simulink-MATLAB. J Comput Electron 20:974–983. https://doi.org/10.1007/s10825-021-01656-0 Abdelhakim B, Ilhami C, Korhan K (2017) Implementation of a modified P&O-MPPT algorithm adapted for varying solar radiation conditions. Electr Eng 99:839–846. https://doi.org/10.1007/s00202-016-0457-3 Shubhranshu MP, Pravat KR (2021) Differential evolution with dynamic control factors for parameter estimation of photovoltaic models. J Comput Electron 20:330–343. https://doi.org/10.1007/s10825-020-01617-z Hemza B, Djaafer L, Nasserdine B (2021) Model predictive control and ANN-based MPPT for a multi-level grid-connected photovoltaic inverter. Electr Eng. https://doi.org/10.1007/s00202-021-01355-w Issa H, Khaled M, Mohamed A, Ralph K (2020) Efficient model predictive power control with online inductance estimation for photovoltaic inverters. Electr Eng 102:549–562. https://doi.org/10.1007/s00202-019-00893-8 Hai T, Zhan J, Muranaka K (2022) An efficient fuzzy-logic based MPPT controller for grid-connected PV systems by farmland fertility optimization algorithm. Optik 267:169636.
https://doi.org/10.1016/j.ijleo.2022.169636 de la Parra I, Marcos J, García M, Marroyo L (2016) Dynamic ramp-rate control to smooth short-term power fluctuations in large photovoltaic plants using battery storage systems. In: 42nd Annual Conference of the IEEE Industrial Electronics Society; 23–26 October 2016, Florence, Italy: IEEExplore 3052–3057. https://doi.org/10.1109/IECON.2016.7793564 Greenwood W, Lavrova O, Mammoli A, Cheng F, Willard S (2013) Optimization of solar PV smoothing algorithms for reduced stress on a utility-scale battery energy storage system. In Electrical Energy Storage Applications and Technologies (EESAT) conference; 21–23 October 2013, San Diego Marriot Marquis and Marina in San Diego, CA. https://www.sandia.gov/ess-ssl/EESAT/2013_papers/Optimization_of_Solar_PV_Smoothing_Algorithms_for_Reduced_Stress_on_a_Utility-Scale_Battery_Energy_Storage_System.pdf George H, Andrew C, Jeremy K (2017) Comparative analysis of domestic and feeder connected batteries for low voltage networks with high photovoltaic penetration. J Energy Storage 13:334–343. https://doi.org/10.1016/j.est.2017.07.019 Pradeep KS, Satyaranjan J, Chitti B (2022) Power management and bus voltage control of a battery backup-based stand-alone PV system. Electr Eng 104:97–110. https://doi.org/10.1007/s00202-021-01391-6 Ardashir M, Sakthivel R (2020) Energy management in photovoltaic battery hybrid systems: a novel type-2 fuzzy control. Int J Hydro Energy 45:20970–20928. https://doi.org/10.1016/j.ijhydene.2020.05.187 Sajjad D, Karzan W, Mehrdad K, Alireza R, Mohammad RM, Majid G (2019) Enhanced control strategies for a hybrid battery/photovoltaic system using FGS-PID in grid-connected mode. Int J Hydro Energy 4:14642–14660. https://doi.org/10.1016/j.ijhydene.2019.04.174 Lekouaghet B, Boukabou A, Lourci N et al (2018) Control of PV grid connected systems using MPC technique and different inverter configuration models. Elect Power Syst Res 154:287–298. 
https://doi.org/10.1016/j.epsr.2017.08.027 Touil SA, Boudjerda N, Boubakir A, Drissi KEK (2019) A sliding mode control and artificial neural network based MPPT for a grid-connected photovoltaic source. Asian J Control 21(4):1892–1905. https://doi.org/10.1002/asjc.2007 Liang X, Andalib-Bin-Karim C (2018) Harmonics and mitigation techniques through advanced control in grid-connected renewable energy sources: a review. IEEE Trans Ind Appl 54(4):3100–3111. https://doi.org/10.1109/TIA.2018.2823680 The authors would like to thank the team members. Research Center in Industrial Technologies CRTI, BP 64, Cheraga, Algeria Khouloud Bedoud & Hichem Merabet Automatic Laboratory and Signals, Badji Mokhtar University, Annaba, Algeria Khouloud Bedoud & Tahar Bahi Khouloud Bedoud Hichem Merabet Tahar Bahi All authors contributed to the study development and have read and approved the final version. Preparation of the original project and writing of the manuscript: Khouloud Bedoud; formal analysis and investigation: Hichem Merabet; methodology: Tahar Bahi. Correspondence to Khouloud Bedoud. Bedoud, K., Merabet, H. & Bahi, T. Power control strategy of a photovoltaic system with battery storage system. J. Eng. Appl. Sci. 69, 116 (2022). https://doi.org/10.1186/s44147-022-00163-8 Photovoltaic system MPPT control
Indication of anisotropy in arrival directions of ultra-high-energy cosmic rays through comparison to the flux pattern of extragalactic gamma-ray sources (1801.06160) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A.C. Cobos Cerutti, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaïor, B. García, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, A. Haungs, T. Hebbeker, D. Heck, P. 
Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L.A.S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. 
Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Feb. 6, 2018 astro-ph.CO, astro-ph.HE A new analysis of the dataset from the Pierre Auger Observatory provides evidence for anisotropy in the arrival directions of ultra-high-energy cosmic rays on an intermediate angular scale, which is indicative of excess arrivals from strong, nearby sources. The data consist of 5514 events above 20 EeV with zenith angles up to 80 deg recorded before 2017 April 30. Sky models have been created for two distinct populations of extragalactic gamma-ray emitters: active galactic nuclei from the second catalog of hard Fermi-LAT sources (2FHL) and starburst galaxies from a sample that was examined with Fermi-LAT. 
Flux-limited samples, which include all types of galaxies from the Swift-BAT and 2MASS surveys, have been investigated for comparison. The sky model of cosmic-ray density constructed using each catalog has two free parameters, the fraction of events correlating with astrophysical objects and an angular scale characterizing the clustering of cosmic rays around extragalactic sources. A maximum-likelihood ratio test is used to evaluate the best values of these parameters and to quantify the strength of each model by contrast with isotropy. It is found that the starburst model fits the data better than the hypothesis of isotropy with a statistical significance of 4.0 sigma, the highest value of the test statistic being for energies above 39 EeV. The three alternative models are favored against isotropy with 2.7-3.2 sigma significance. The origin of the indicated deviation from isotropy is examined and prospects for more sensitive future studies are discussed. Inferences on Mass Composition and Tests of Hadronic Interactions from 0.3 to 100 EeV using the water-Cherenkov Detectors of the Pierre Auger Observatory (1710.07249) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. 
Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. 
Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. 
Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello We present a new method for probing the hadronic interaction models at ultra-high energy and extracting details about mass composition. This is done using the time profiles of the signals recorded with the water-Cherenkov detectors of the Pierre Auger Observatory. The profiles arise from a mix of the muon and electromagnetic components of air-showers. Using the risetimes of the recorded signals we define a new parameter, which we use to compare our observations with predictions from simulations. We find, firstly, inconsistencies between our data and predictions over a greater energy range and with substantially more events than in previous studies. Secondly, by calibrating the new parameter with fluorescence measurements from observations made at the Auger Observatory, we can infer the depth of shower maximum for a sample of over 81,000 events extending from 0.3 EeV to over 100 EeV. Above 30 EeV, the sample is nearly fourteen times larger than that currently available from fluorescence measurements and extends the covered energy range by half a decade. The energy dependence of the average depth of shower maximum is compared to simulations and interpreted in terms of the mean of the logarithmic mass. We find good agreement with previous work and extend the measurement of the mean depth of shower maximum to greater energies than before, significantly reducing the statistical uncertainty associated with the inferences about mass composition. Spectral Calibration of the Fluorescence Telescopes of the Pierre Auger Observatory (1709.01537) The Pierre Auger Collaboration: A. Aab, P.
Abreu, M. Aglietta, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, R. Gaior, B. García, D. Garcia-Pinto, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, A. Gorgi, P. Gorham, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. 
Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, D. Rogozin, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. 
Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, A. Shadkam, R.C. Shellard, G. Sigl, G. Silli, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 2, 2017 physics.ins-det, astro-ph.IM We present a novel method to measure precisely the relative spectral response of the fluorescence telescopes of the Pierre Auger Observatory. We used a portable light source based on a xenon flasher and a monochromator to measure the relative spectral efficiencies of eight telescopes in steps of 5 nm from 280 nm to 440 nm. Each point in a scan had approximately 2 nm FWHM out of the monochromator. Different sets of telescopes in the observatory have different optical components, and the eight telescopes measured represent two each of the four combinations of components represented in the observatory. 
We made an end-to-end measurement of the response from different combinations of optical components, and the monochromator setup allowed for more precise and complete measurements than our previous multi-wavelength calibrations. We find an overall uncertainty in the calibration of the spectral response of most of the telescopes of 1.5% for all wavelengths; the six oldest telescopes have larger overall uncertainties of about 2.2%. We also report changes in physics measurables due to the change in calibration, which are generally small. The Pierre Auger Observatory: Contributions to the 35th International Cosmic Ray Conference (ICRC 2017) (1708.06592) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, A.M. Badescu, A. Balaceanu, F. Barbato, R.J. Barreira Luz, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, J. Biteau, S.G. Blaess, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, A. Cancio, F. Canfora, L. Caramete, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, A.G. Chavez, J.A. Chinellato, J. Chudoba, R.W. Clay, A. Cobos, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, G. Consolati, F. Contreras, M.J. Cooper, S. Coutu, C.E. Covault, J. Cronin, S. D'Amico, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, J. Debatin, O. Deligny, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, J.C. D'Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, A. Dundovic, J. Ebr, R. Engel, M. Erdmann, M.
Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, R. Gaïor, B. García, F. Gaté, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, A. Gorgi, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Herve, G.C. Hill, C. Hojvat, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, I. Jandt, J.A. Johnsen, M. Josebachuili, J. Jurysek, A. Kääpä, O. Kambeitz, K.H. Kampert, B. Keilhauer, N. Kemmerich, E. Kemp, J. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M. Lauscher, R. Legumina, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, M. Malacari, M. Mallamaci, D. Mandat, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, J.J. Masías Meza, H.J. Mathes, S. Mathys, G. Matthiae, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Middendorf, L. Miramonti, B. Mitrica, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, G. Müller, M.A. Muller, S. Müller, R. Mussa, I. Naranjo, L. Nellen, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, F. Pedreira, J. Pękala, R. Pelayo, J. Peña-Rodriguez, L. A. S. Pereira, M. 
Perlin, L. Perrone, C. Peters, S. Petrera, J. Phuntsok, R. Piegaia, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, M. Plum, J. Poh, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, R. Ramos-Pollan, J. Rautenberg, D. Ravignani, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, G. Rodriguez Fernandez, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, G. Salina, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, M. Schauer, V. Scherini, H. Schieler, M. Schimp, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, A. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, R.C. Shellard, G. Sigl, G. Silli, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. F. Soriano, R. Squartini, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, M. Stolpovskiy, F. Strafella, A. Streich, F. Suarez, M. Suarez Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, J. Swain, Z. Szadkowski, A. Taboada, O.A. Taborda, V.M. Theodoro, C. Timmermans, C.J. Todero Peixoto, L. Tomankova, B. Tomé, G. Torralba Elipe, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, A. van Vliet, E. Varela, B. Vargas Cárdenas, R.A. Vázquez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, A. Weindl, M. Wiedeński, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, L. Yang, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, Z. Zong, F. Zuccarello Oct. 2, 2017 astro-ph.CO, astro-ph.IM, astro-ph.HE Contributions of the Pierre Auger Collaboration to the 35th International Cosmic Ray Conference (ICRC 2017), 12-20 July 2017, Bexco, Busan, Korea. 
AWAKE readiness for the study of the seeded self-modulation of a 400 GeV proton bunch (1708.01087) P. Muggli, E. Adli, R. Apsimon, F. Asmus, R. Baartman, A.-M. Bachmann, M. Barros Marin, F. Batsch, J. Bauche, V. K. Berglyd Olsen, M. Bernardini, B. Biskup, A. Boccardi, T. Bogey, T. Bohl, C. Bracco, F. Braunmuller, S. Burger, G. Burt, S. Bustamante, B. Buttenschon, A. Butterworth, A. Caldwell, M. Cascella, E. Chevallay, M. Chung, H. Damerau, L. Deacon, A. Dexter, P. Dirksen, S. Doebert, J. Farmer, V. Fedosseev, T. Feniet, G. Fior, R. Fiorito, R. Fonseca, F. Friebel, P. Gander, S. Gessner, I. Gorgisyan, A. A. Gorn, O. Grulke, E. Gschwendtner, A. Guerrero, J. Hansen, C. Hessler, W. Hofle, J. Holloway, M. Huther, M. Ibison, M.R. Islam, L. Jensen, S. Jolly, M. Kasim, F. Keeble, S.-Y. Kim, F. Krause, A. Lasheen, T. Lefevre, G. LeGodec, Y. Li, S. Liu, N. Lopes, K. V. Lotov, M. Martyanov, S. Mazzoni, D. Medina Godoy, O. Mete, V. A. Minakov, R. Mompo, J. Moody, M. T. Moreira, J. Mitchell, C. Mutin, P. Norreys, E. Oz, E. Ozturk, W. Pauw, A. Pardone, C. Pasquino, K. Pepitone, A. Petrenko, S. Pitmann, G. Plyushchev, A. Pukhov, K. Rieger, H. Ruhl, J. Schmidt, I. A. Shalimova, E. Shaposhnikova, P. Sherwood, L. Silva, A. P. Sosedkin, R. I. Spitsyn, K. Szczurek, J. Thomas, P. V. Tuev, M. Turner, V. Verzilov, J. Vieira, H. Vincke, C. P. Welsch, B. Williamson, M. Wing, G. Xia, H. Zhang Aug. 3, 2017 physics.plasm-ph, physics.acc-ph AWAKE is a proton-driven plasma wakefield acceleration experiment. We show that the experimental setup briefly described here is ready for systematic study of the seeded self-modulation of the 400 GeV proton bunch in the 10 m-long rubidium plasma with density adjustable from 1 to 10$\times10^{14}$ cm$^{-3}$. We show that the short laser pulse used for ionization of the rubidium vapor propagates all the way along the column, suggesting full ionization of the vapor.
We show that ionization occurs along the proton bunch, at the laser time, and that the plasma that follows affects the proton bunch. AWAKE, The Advanced Proton Driven Plasma Wakefield Acceleration Experiment at CERN (1512.05498) E. Gschwendtner, E. Adli, L. Amorim, R. Apsimon, R. Assmann, A.-M. Bachmann, F. Batsch, J. Bauche, V.K. Berglyd Olsen, M. Bernardini, R. Bingham, B. Biskup, T. Bohl, C. Bracco, P. N. Burrows, G. Burt, B. Buttenschon, A. Butterworth, A. Caldwell, M. Cascella, E. Chevallay, S. Cipiccia, H. Damerau, L. Deacon, P. Dirksen, S. Doebert, U. Dorda, J. Farmer, V. Fedosseev, E. Feldbaumer, R. Fiorito, R. Fonseca, F. Friebel, A.A. Gorn, O. Grulke, J. Hansen, C. Hessler, W. Hofle, J. Holloway, M. Huther, D. Jaroszynski, L. Jensen, S. Jolly, A. Joulaei, M. Kasim, F. Keeble, Y. Li, S. Liu, N. Lopes, K.V. Lotov, S. Mandry, R. Martorelli, M. Martyanov, S. Mazzoni, O. Mete, V.A. Minakov, J. Mitchell, J. Moody, P. Muggli, Z. Najmudin, P. Norreys, E. Oz, A. Pardons, K. Pepitone, A. Petrenko, G. Plyushchev, A. Pukhov, K. Rieger, H. Ruhl, F. Salveter, N. Savard, J. Schmidt, A. Seryi, E. Shaposhnikova, Z.M. Sheng, P. Sherwood, L. Silva, L. Soby, A.P. Sosedkin, R.I. Spitsyn, R. Trines, P.V. Tuev, M. Turner, V. Verzilov, J. Vieira, H. Vincke, Y. Wei, C.P. Welsch, M. Wing, G. Xia, H. Zhang Dec. 17, 2015 physics.plasm-ph, physics.acc-ph The Advanced Proton Driven Plasma Wakefield Acceleration Experiment (AWAKE) aims at studying plasma wakefield generation and electron acceleration driven by proton bunches. It is a proof-of-principle R&D experiment at CERN and the world's first proton-driven plasma wakefield acceleration experiment. The AWAKE experiment will be installed in the former CNGS facility and uses the 400 GeV/c proton beam bunches from the SPS. The first experiments will focus on the self-modulation instability of the long (rms ~12 cm) proton bunch in the plasma. These experiments are planned for the end of 2016.
Later, in 2017/2018, low energy (~15 MeV) electrons will be externally injected to sample the wakefields and be accelerated beyond 1 GeV. The main goals of the experiment will be summarized. A summary of the AWAKE design and construction status will be presented. Path to AWAKE: Evolution of the concept (1511.09032) A. Caldwell, E. Adli, L. Amorim, R. Apsimon, T. Argyropoulos, R. Assmann, A.-M. Bachmann, F. Batsch, J. Bauche, V.K. Berglyd Olsen, M. Bernardini, R. Bingham, B. Biskup, T. Bohl, C. Bracco, P.N. Burrows, G. Burt, B. Buttenschon, A. Butterworth, M. Cascella, S. Chattopadhyay, E. Chevallay, S. Cipiccia, H. Damerau, L. Deacon, P. Dirksen, S. Doebert, U. Dorda, E. Elsen, J. Farmer, S. Fartoukh, V. Fedosseev, E. Feldbaumer, R. Fiorito, R. Fonseca, F. Friebel, G. Geschonke, B. Goddard, A.A. Gorn, O. Grulke, E. Gschwendtner, J. Hansen, C. Hessler, S. Hillenbrand, W. Hofle, J. Holloway, C. Huang, M. Huther, D. Jaroszynski, L. Jensen, S. Jolly, A. Joulaei, M. Kasim, F. Keeble, R. Kersevan, N. Kumar, Y. Li, S. Liu, N. Lopes, K.V. Lotov, W. Lu, J. Machacek, S. Mandry, I. Martin, R. Martorelli, M. Martyanov, S. Mazzoni, M. Meddahi, L. Merminga, O. Mete, V.A. Minakov, J. Mitchell, J. Moody, A.-S. Muller, Z. Najmudin, T.C.Q. Noakes, P. Norreys, J. Osterhoff, E. Oz, A. Pardons, K. Pepitone, A. Petrenko, G. Plyushchev, J. Pozimski, A. Pukhov, O. Reimann, K. Rieger, S. Roesler, H. Ruhl, T. Rusnak, F. Salveter, N. Savard, J. Schmidt, H. von der Schmitt, A. Seryi, E. Shaposhnikova, Z.M. Sheng, P. Sherwood, L. Silva, F. Simon, L. Soby, A.P. Sosedkin, R.I. Spitsyn, T. Tajima, R. Tarkeshian, H. Timko, R. Trines, T. Tueckmantel, P.V. Tuev, M. Turner, F. Velotti, V. Verzilov, J. Vieira, H. Vincke, Y. Wei, C.P. Welsch, M. Wing, G. Xia, V. Yakimenko, H. Zhang, F. Zimmermann Nov. 29, 2015 physics.plasm-ph, physics.acc-ph This report describes the conceptual steps in reaching the design of the AWAKE experiment currently under construction at CERN. 
We start with an introduction to plasma wakefield acceleration and the motivation for using proton drivers. We then describe the self-modulation instability, a key to an early realization of the concept. This is then followed by the historical development of the experimental design, where the critical issues that arose and their solutions are described. We conclude with the design of the experiment as it is being realized at CERN and some words on the future outlook. A summary of the AWAKE design and construction status as presented in this conference is given in [1]. Effects of Neutron Irradiation on Carbon Doped MgB2 Wire Segments (cond-mat/0507275) R.H.T. Wilke, S.L. Bud'ko, P.C. Canfield, D.K. Finnemore, Raymond J. Suplinskas, J. Farmer, S.T. Hannahs July 12, 2005 cond-mat.supr-con We have studied the evolution of superconducting and normal state properties of neutron-irradiated Mg(B$_{.962}$C$_{.038}$)$_2$ wire segments as a function of post-exposure annealing time and temperature. The initial fluence fully suppressed superconductivity and resulted in an anisotropic expansion of the unit cell. Superconductivity was restored by post-exposure annealing. The upper critical field, H$_{c2}$(T=0), approximately scales with T$_c$, starting with an undamaged T$_c$ near 37 K and H$_{c2}$(T=0) near 32 T. Up to an annealing temperature of 400 °C the recovery of T$_c$ tends to coincide with a decrease in the normal state resistivity and a systematic recovery of the lattice parameters. Above 400 °C a decrease in order along the c-direction coincides with an increase in resistivity, but no apparent change in the evolution of T$_c$ and H$_{c2}$. To first order, it appears that carbon doping and neutron damage affect the superconducting properties of MgB$_2$ independently. Distribution of parallel vortices studied by spin-polarized neutron reflectivity and magnetization (cond-mat/0108364) S.-W. Han, P. F. Miceli, J. Farmer, G. Felcher, R. Goyette, G. T.
Kiehne, J. B. Ketterson Aug. 23, 2001 cond-mat.supr-con, cond-mat.str-el We present studies of non-uniformly distributed vortices in Nb/Al multilayers, with the applied field nearly parallel to the film surface, using spin-polarized neutron reflectivity (SPNR) and DC magnetization measurements. We have observed peaks above the lower critical field, Hc1, in the M-H curves from the multilayers. Previous work, using a model calculation that minimizes the Gibbs free energy, suggested that the peaks could be ascribed to vortex-line transitions for spatial commensuration in a thin-film superconductor. In order to directly determine the distribution of vortices, we performed SPNR measurements on the multilayer and found that the distribution and density of vortices differ between ascending and descending fields. At an ascending field of 2000 Oe, just below the first peak in the M-H curve, SPNR shows that the vortices are mostly localized near the midline of the film, whereas at the descending 2000 Oe they are distributed over a broader region. This is consistent with the observation that more vortices are trapped at the descending field. When the applied field is slightly tilted (< 3.5 degrees), we observe another peak at a smaller field. The peak position is consistent with the parallel lower critical field (Hc1||). We argue that the vortices run along the applied field below Hc1|| and rotate parallel to the surface at Hc1||. Orientation of Vortices in a Superconducting Thin-Film: Quantitative Comparison of Spin-Polarized Neutron Reflectivity and Magnetization (cond-mat/0008244) S.-W. Han, J. Farmer, H. Kaiser, P. F. Miceli, I. V. Roshchin, L. H. Greene Aug. 17, 2000 cond-mat.supr-con We present a quantitative comparison of the magnetization measured by spin-polarized neutron reflectivity (SPNR) and DC magnetometry on a 1370 Å-thick Nb superconducting film.
As a function of magnetic field applied in the film plane, SPNR exhibits reversible behavior whereas the DC magnetization shows substantial hysteresis. The difference between these measurements is attributed to a rotation of vortex magnetic field out of the film plane as the applied field is reduced. Since SPNR measures only the magnetization parallel to the film plane whereas DC magnetization is strongly influenced by the perpendicular component of magnetization when there is a slight sample tilt, combining the two techniques allows one to distinguish two components of magnetization in a thin film.
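The closing point of this abstract, that SPNR sees only the in-plane magnetization while a DC magnetometer with a slight sample tilt also picks up the perpendicular component, can be illustrated with a purely geometric projection. The function below is an illustrative sketch; the moment values and angles are assumptions for demonstration, not data from the paper.

```python
import math

def measured_components(m, theta_v_deg, tilt_deg):
    """Project a vortex moment m, tilted theta_v_deg out of the film
    plane, onto the two instrument-sensitive directions.

    SPNR is sensitive only to the component parallel to the film plane.
    A DC magnetometer aligned with the applied field additionally picks
    up the out-of-plane component when the sample is tilted by tilt_deg.
    """
    theta_v = math.radians(theta_v_deg)
    tilt = math.radians(tilt_deg)
    m_inplane = m * math.cos(theta_v)    # what SPNR measures
    m_outplane = m * math.sin(theta_v)   # invisible to SPNR
    # DC signal along the field axis mixes both components via the tilt
    m_dc = m_inplane * math.cos(tilt) + m_outplane * math.sin(tilt)
    return m_inplane, m_dc
```

For an untilted sample with vortices lying fully in-plane, the two techniques agree; once the vortex field rotates out of the plane, only the tilted DC measurement retains a signal, which is the geometric origin of the hysteresis difference described above.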
Towards label-free 3D segmentation of optical coherence tomography images of the optic nerve head using deep learning
Sripad Krishna Devalla,1 Tan Hung Pham,1,2 Satish Kumar Panda,1 Liang Zhang,1 Giridhar Subramanian,1 Anirudh Swaminathan,1 Chin Zhi Yun,1 Mohan Rajan,3 Sujatha Mohan,3 Ramaswami Krishnadas,4 Vijayalakshmi Senthil,4 John Mark S. De Leon,5 Tin A. Tun,1,2 Ching-Yu Cheng,2,6 Leopold Schmetterer,2,7,9,10,11 Shamira Perera,2,8 Tin Aung,2,8 Alexandre H. Thiéry,12,14 and Michaël J. A. Girard13,15
1Ophthalmic Engineering & Innovation Laboratory, Department of Biomedical Engineering, Faculty of Engineering, National University of Singapore, Singapore
2Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
3Rajan Eye Care Hospital, Chennai, India
4Glaucoma Services, Aravind Eye Care Systems, Madurai, India
5Department of Health Eye Center, East Avenue Medical Center, Quezon City, Philippines
6Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
7Nanyang Technological University, Singapore
8Duke-NUS Graduate Medical School, 8 College Rd, Singapore 169857, Singapore
9Department of Clinical Pharmacology, Medical University of Vienna, Austria
10Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
11Institute of Clinical and Molecular Ophthalmology, Basel, Switzerland
12Department of Statistics and Applied Probability, National University of Singapore, Singapore
13Ophthalmic Engineering and Innovation Laboratory (OEIL), Singapore Eye Research Institute, 20 College Road, Singapore 169856, Singapore
Vijayalakshmi Senthil https://orcid.org/0000-0003-3599-2401
https://doi.org/10.1364/BOE.395934
Sripad Krishna Devalla, Tan Hung Pham, Satish Kumar Panda, Liang Zhang, Giridhar Subramanian, Anirudh Swaminathan, Chin Zhi Yun, Mohan Rajan, Sujatha Mohan, Ramaswami Krishnadas, Vijayalakshmi Senthil, John Mark S. De Leon, Tin A. Tun, Ching-Yu Cheng, Leopold Schmetterer, Shamira Perera, Tin Aung, Alexandre H. Thiéry, and Michaël J. A. Girard, "Towards label-free 3D segmentation of optical coherence tomography images of the optic nerve head using deep learning," Biomed. Opt. Express 11, 6356-6378 (2020)
Original Manuscript: May 7, 2020; Revised Manuscript: August 17, 2020; Manuscript Accepted: August 19, 2020
Recently proposed deep learning (DL) algorithms for the segmentation of optical coherence tomography (OCT) images to quantify the morphological changes to the optic nerve head (ONH) tissues during glaucoma have limited clinical adoption due to their device-specific nature and the difficulty in preparing manual segmentations (training data). We propose a DL-based 3D segmentation framework that is easily translatable across OCT devices in a label-free manner (i.e.
without the need to manually re-segment data for each device). Specifically, we developed 2 sets of DL networks: the 'enhancer' (enhance OCT image quality and harmonize image characteristics from 3 devices) and the 'ONH-Net' (3D segmentation of 6 ONH tissues). We found that the 'ONH-Net' trained on any of the 3 devices successfully segmented ONH tissues from the other two unseen devices with high performance (Dice coefficients > 0.92), but only when the 'enhancer' was used to preprocess the OCT images. We demonstrate that it is possible to automatically segment OCT images from new devices without ever needing manual segmentation data from them.

1. Introduction

The complex 3D structural changes of the optic nerve head (ONH) tissues that manifest with the progression of glaucoma have been extensively studied and better understood owing to the advancements in optical coherence tomography (OCT) imaging [1]. These include changes such as the thinning of the retinal nerve fiber layer (RNFL) [2,3], changes in the choroidal thickness [4], minimum rim width [5], and lamina curvature and depth [6,7]. The automated segmentation and analysis of these parameters in 3D from OCT volumes could improve the current clinical management of glaucoma. Robustly segmenting OCT volumes remains extremely challenging. While commercial OCTs have in-built proprietary segmentation software, they can segment some, but not all, of the ONH tissues [8–10]. To address this, several research groups have developed an overwhelming number of traditional image-processing-based 2D [8,11–15] and 3D [16–21] segmentation tools; however, these are generally tissue-specific [11–13,15,16,21], computationally expensive [20,22], require manual input [17,19], and are often prone to errors in scans with pathology [20,23,24]. Recent deep learning (DL) based systems have however exploited a combination of low- (i.e. edge information, contrast and intensity profile) and high-level features (i.e.
speckle pattern, texture, noise) from OCT volumes to identify different tissues, yielding human-level [25–32] and pathology-invariant [25,26,31] segmentations. Yet, given the variability in image characteristics (e.g. contrast or speckle noise) across devices as a result of proprietary processing software [33], a DL system designed for one device cannot be directly translated to others [34]. Since it is common for clinics to own different OCT devices, and for patients to be imaged by different OCT devices during their care, the device-specific nature of these DL algorithms considerably limits their clinical adoption. While there currently exist only a few major commercial manufacturers of spectral-domain OCT (SD-OCT), such as Carl Zeiss Meditec (Dublin, CA, USA), Heidelberg Engineering (Heidelberg, Germany), Optovue Inc. (Fremont, CA, USA), Nidek (Aichi, Japan), Optopol Technology (Zawiercie, Poland), Canon Inc. (Tokyo, Japan), Leica Microsystems (Wetzlar, Germany), etc., several others have already started to or will soon be releasing the next-generation OCT devices. This further increases the complexity of deploying DL algorithms clinically. Given that reliable segmentations [33] are an important step towards diagnosing glaucoma accurately, there is a need for a single DL segmentation framework that is not only translatable across devices, but also versatile enough to accept data from next-generation OCT devices. In this study, we developed a DL-based 3D segmentation framework that is easily translatable across OCT devices in a label-free manner (without the need to manually re-segment data for each device). To achieve this, we first designed an 'enhancer': a DL network that can improve the quality of OCT B-scans and harmonize image characteristics across OCT devices. Because of such pre-processing, we demonstrate that a segmentation framework trained on one device can be used to segment volumes from other unseen devices. 2.
Methods

The proposed study consisted of two parts: (1) image enhancement, and (2) 3D segmentation. We first designed and validated a DL-based image enhancement network to simultaneously de-noise (reduce speckle noise), compensate (improve tissue visibility and eliminate artefacts) [35], contrast enhance (better differentiate tissue boundaries) [35], and histogram equalize (reduce intensity inhomogeneity) OCT B-scans from three commercially available SD-OCT devices (Spectralis, Cirrus, RTVue). The network was trained and tested with images from all three devices. A 3D DL-based segmentation framework was then designed and validated to isolate six ONH tissues from OCT volumes. The framework was trained and tested separately on OCT volumes from each of the three devices, with and without image enhancement. The overall schematic of the study is shown in Supplement 1 (Fig. S1).

2.2 Patient recruitment

A total of 450 patients were recruited from four centers: the Singapore National Eye Center (Singapore), Rajan Eye Care Hospital (Chennai, India), Aravind Eye Hospital (Madurai, India), and the East Avenue Medical Center (Quezon City, Philippines) (Table 1). All subjects gave written informed consent. The study adhered to the tenets of the Declaration of Helsinki and was approved by the institutional review boards of the respective hospitals. The cohort comprised 225 healthy and 225 glaucoma subjects. The inclusion criteria for healthy subjects were: an intraocular pressure (IOP) less than 21 mmHg, healthy optic discs with a vertical cup-disc ratio (VCDR) less than or equal to 0.5, and normal visual field tests. Glaucoma was diagnosed based on the presence of glaucomatous optic neuropathy (GON), VCDR > 0.7, and/or neuroretinal rim narrowing with repeatable glaucomatous visual field defects. We excluded subjects with corneal abnormalities that could compromise the quality of the scans. Table 1.
Patient Populations and Scanning Specifications

2.3 Optical coherence tomography imaging

All 450 subjects were seated and imaged using spectral-domain OCT under dark room conditions in the respective hospitals. 150 subjects (75 healthy + 75 glaucoma) had one of their ONHs imaged using Spectralis (Heidelberg Engineering, Heidelberg, Germany), 150 (75 healthy + 75 glaucoma) using Cirrus (model: HD 5000, Carl Zeiss Meditec, Dublin, CA, USA), and another 150 (75 healthy + 75 glaucoma) using RTVue (Optovue Inc., Fremont, CA, USA). For glaucoma subjects, the eye with GON was imaged, and if both eyes met the inclusion criteria, one eye was randomly selected. For healthy controls, the right ONH was imaged. The scanning specifications for each device can be found in Table 1. From the dataset of 450 volumes, 390 (130 from each device) were used for training and testing the image enhancement network, while the remaining 60 (20 from each device) were used for training and testing the 3D segmentation framework.

2.4 Image enhancement

The enhancer network was trained to reproduce simple mathematical operations including spatial averaging, compensation, contrast enhancement, and histogram equalization. When using images from a single device, the use of a DL network to perform such operations would be unnecessary, as one could readily use the mathematical operators instead. However, when mixing images from multiple devices, besides performing such enhancement operations, the network also reduces the differences in image characteristics across the devices, resulting in images that are 'harmonized' (i.e. less device-specific), a necessary step towards robust device-independent 3D segmentation.

2.4.1 Image enhancement–dataset preparation

The 390 volumes were first resized (in pixels) to 448 (height) x 352 (width) x 96 (number of B-scans), and a total of 37,440 baseline B-scans (12,480 per device) were obtained. Each B-scan (Fig.
1, (A) [1]) was then digitally enhanced (Fig. 1, (A) [4]) by performing spatial averaging (each pixel value was replaced by the mean of its 8 lateral neighbours; Fig. 1, (A) [2]) [36], compensation with contrast enhancement (contrast exponent = 2; Fig. 1, (A) [3]) [35], and histogram equalization (contrast limited adaptive histogram equalization [CLAHE], clip limit = 2; Fig. 1, (A) [4]) [37]. The compensated image with contrast enhancement $I^{SC}$ was defined as:

(1) $$M_{i,j} = \sum_{k=i}^{N} I_{k,j}^{n}$$

(2) $$I_{i,j}^{SC} = \frac{I_{i,j}^{n}}{2M_{i,j}}$$

where $I$ was the intensity map of the image ($i = 0$: top of the image; $i = N$: bottom of the image); $M_{i,j}$ was the compensation profile that enhanced the A-scan pixel intensity at depth $i$ for a given A-scan $j$; and $n$ was the exponent used to control the contrast profile (also known as the contrast exponent; $n = 2$ was used based on the results from the earlier study [35]).

Fig. 1. The dataset preparation for the image enhancement network is shown in (A). Each B-scan (A [1]) was digitally enhanced (A [4]) by performing spatial averaging (each pixel value was replaced by the mean of its 8 lateral neighbors; A [2]) [36], compensation and contrast enhancement (contrast exponent = 2; A [3]) [35], and histogram equalization (contrast limited adaptive histogram equalization [CLAHE], clip limit = 2; A [4]) [37]. For training the 3D segmentation framework (B), the following tissues were manually segmented from OCT volumes: (1) the RNFL and prelamina (in red), (2) the ganglion cell complex (GCC; ganglion cell layer + inner plexiform layer; in cyan), (3) all other retinal layers (in blue); (4) the retinal pigment epithelium (RPE; in pink); (5) the choroid (in yellow); and (6) the lamina cribrosa (LC; in indigo). Noise (in grey) and vitreous humor (in black) were also isolated.

The detailed implementation of CLAHE can be found in [38].
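The compensation step in Eqs. (1)-(2) above can be sketched numerically as follows (a minimal sketch, assuming a 2D B-scan stored as a NumPy array with depth along the first axis; the function name and the small `eps` guard against division by zero are illustrative additions):

```python
import numpy as np

def compensate(bscan, n=2, eps=1e-12):
    """Compensation with contrast enhancement, following Eqs. (1)-(2).

    bscan : 2D array of intensities in [0, 1]; rows are depths (i),
            columns are A-scans (j). n is the contrast exponent.
    """
    contrasted = bscan.astype(float) ** n               # I^n
    # Eq. (1): M[i, j] = sum of I^n over depths k = i .. N,
    # i.e. a reverse cumulative sum along the depth axis
    m = np.flip(np.cumsum(np.flip(contrasted, axis=0), axis=0), axis=0)
    # Eq. (2): compensated, contrast-enhanced image
    return contrasted / (2.0 * m + eps)
```

By construction, the deepest pixel of every A-scan with non-zero signal maps to 0.5, and shallower pixels are rescaled according to the signal energy remaining beneath them, which is what brightens poorly visible deep tissue.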
The clip limit was a factor that limited the extent of intensity over-amplification during the process of histogram equalization. A clip limit of 2 [38] was empirically chosen to prevent intensity over-amplification, especially for already hyperreflective structures such as the retinal pigment epithelium. The image enhancement network was then trained with a training dataset of 36,000 pairs (12,000 per device) of baseline and digitally-enhanced B-scans. During the process of hyperparameter tuning, 80% (28,800 pairs) of the training dataset were used for the initial training, while the remaining 20% (7,200 pairs) were used for the subsequent validation. An independent test set of 1,440 pairs was used to truly evaluate the performance of the enhancer network. B-scans from the same patient were not shared between the training and testing datasets.

2.4.2 Image enhancement–network description

Briefly, as in our earlier DL-based image enhancement study [39], the proposed enhancer exploited the inherent advantages of U-Net [40] and its skip connections [41], residual learning [42], dilated convolutions [43], and multi-scale hierarchical feature extraction [44]. We used the same network architecture, except that the output layer was now activated by the sigmoid activation function [45] (originally tanh). The design (Fig. S1; refer to Supplement 1), implementation, significance of each component, and data augmentation details can be found in our earlier study [39]. The loss function was a weighted combination of the root mean square error (RMSE) and a multi-scale perceptual loss [46] function based on the VGG19 DL model [47]. Pixel-to-pixel loss functions (e.g., RMSE) compare only the low-level features (i.e., edge information) between the DL prediction and its corresponding ground-truth, often leading to over-smoothed (blurred) images [46], especially in image-to-image translation problems (e.g., de-noising).
However, perceptual-loss-based functions exploit the high-level features (i.e., texture, abstract patterns) [46,48–50] in these images to assess their differences, enabling the DL network to achieve human-like visual understanding [19]. Thus, a weighted combination of both loss functions allows the DL network to preserve the low- and high-level features in its predictions, limiting the effects of blurring. To compute the perceptual loss, the output of the enhancer (referred to as the 'DL-enhanced' B-scan) and its corresponding digitally-enhanced B-scan were separately passed through the VGG-19 [47] DL model that was pre-trained on the ImageNet dataset [51]. Feature maps at multiple scales (5 scales; outputs from the 2nd, 4th, 6th, 10th, and 14th convolutional layers) were extracted, and the perceptual loss was computed as the mean RMSE (average over all scales) between the extracted features from the 'DL-enhanced' B-scan and its corresponding 'digitally-enhanced' B-scan. Experimentally, the RMSE and perceptual loss, when combined (total loss) in a weighted ratio of 1.0:0.01, offered the best performance (qualitative and quantitative; as described below).
The individual and total loss functions were defined as:

(3) $$\mathcal{L}_{RMSE}(I_{DL\,Enhanced}, I_{Digitally\,Enhanced}) = \sqrt{\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\bigl(I(h,w)_{DL\,Enhanced} - I(h,w)_{Digitally\,Enhanced}\bigr)^{2}}$$

(4) $$\mathcal{L}_{Perceptual}(I_{DL\,Enhanced}, I_{Digitally\,Enhanced}) = \frac{1}{5}\sqrt{\sum_{i \in \{2,4,6,10,14\}}\frac{1}{C_{i}H_{i}W_{i}}\,\lVert P_{i}(I_{DL\,Enhanced}) - P_{i}(I_{Digitally\,Enhanced})\rVert^{2}}$$

(5) $$\mathcal{L}_{Total} = \mathcal{L}_{RMSE} + 0.01 \times \mathcal{L}_{Perceptual}$$

where $I_{DL\,Enhanced}$ and $I_{Digitally\,Enhanced}$ are the intensity maps of the DL-predicted and the ground-truth images; $H$ and $W$ are the height and width of the image; and $C_{i}$, $H_{i}$, and $W_{i}$ represent the channel depth, height, and width of the feature maps $P_{i}(\cdot)$ extracted at convolutional layer $i$. The enhancer comprised a total of 900 K trainable parameters, and was trained end-to-end using the Adam optimizer [52] with a learning rate of 0.0001. We trained and tested on an NVIDIA GTX 1080 founders edition GPU with CUDA 10.1 and cuDNN v7.5 acceleration. Using the given hardware configuration, the DL network enhanced a single 'baseline' B-scan in under 25 ms.

2.4.3 Image enhancement–quality analysis

Upon training, the network was used to enhance the unseen baseline B-scans from all three devices. The DL-enhanced B-scans were qualitatively assessed by two expert observers (S.K.D & T.P.H) for the following: (1) noise reduction, (2) deep tissue visibility and blood vessel shadows, (3) contrast enhancement and intensity inhomogeneity, and (4) DL-induced artifacts.
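The combined loss can be sketched as follows (a minimal sketch: the pre-trained VGG-19 feature extractor is abstracted as a callable `features(x)` returning one (C, H, W) array per selected layer, and the perceptual term follows the textual description, i.e. the average of per-scale RMSEs; all names are illustrative):

```python
import numpy as np

def rmse_loss(pred, target):
    """Eq. (3): pixel-wise root mean square error."""
    return np.sqrt(np.mean((pred - target) ** 2))

def perceptual_loss(pred, target, features):
    """Mean RMSE between feature maps at the selected layers.
    `features(x)` stands in for the pre-trained VGG-19 network."""
    per_scale = []
    for fp, ft in zip(features(pred), features(target)):
        c, h, w = fp.shape
        per_scale.append(np.sqrt(np.sum((fp - ft) ** 2) / (c * h * w)))
    return np.mean(per_scale)

def total_loss(pred, target, features, weight=0.01):
    """Eq. (5): weighted 1.0 : 0.01 combination of the two losses."""
    return rmse_loss(pred, target) + weight * perceptual_loss(pred, target, features)
```

The 0.01 weight keeps the pixel-wise term dominant (sharp intensity agreement) while the perceptual term discourages the over-smoothing that a pure RMSE loss would produce.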
2.4.4 Image enhancement–quantitative analysis

The following metrics were used to quantitatively assess the performance of the enhancer: (1) the universal image quality index (UIQI) [53], and (2) the structural similarity index (SSIM) [54]. We used the UIQI to assess the extent of image enhancement (baseline vs. DL-enhanced B-scans), while the SSIM was used to assess the structural reliability of the DL-enhanced B-scans (digitally-enhanced vs. DL-enhanced). Unlike the traditional error summation methods (e.g., RMSE) that compare only intensity differences, the UIQI jointly modeled the (1) loss of correlation ($L_C$), (2) luminance distortion ($D_L$), and (3) contrast distortion ($D_C$) to assess image quality [53]. It was defined as (x: baseline; y: DL-enhanced B-scan):

(6) $$UIQI(x,y) = L_C \times D_L \times D_C$$

(7) $$L_C = \frac{\sigma_{xy}}{\sigma_x \sigma_y};\qquad D_L = \frac{2\mu_x\mu_y}{\mu_x^2 + \mu_y^2};\qquad D_C = \frac{2\sigma_x\sigma_y}{\sigma_x^2 + \sigma_y^2}$$

$L_C$ measured the degree of linear correlation between the baseline and DL-enhanced B-scans; $D_L$ and $D_C$ measured the distortion in luminance and contrast, respectively; $\mu_x$, $\sigma_x$, $\sigma_x^2$ denoted the mean, standard deviation, and variance of the intensity for B-scan x, while $\mu_y$, $\sigma_y$, $\sigma_y^2$ denoted the same for B-scan y; $\sigma_{xy}$ was the cross-covariance between the two B-scans. The UIQI was defined between -1 (poor quality) and +1 (excellent quality). As in our previous study [39], the SSIM (x: digitally-enhanced; y: DL-enhanced B-scan) was defined as:

(8) $$SSIM(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

The constants $C_1$ and $C_2$ (to stabilize the division) were chosen as 6.50 and 58.52, as recommended in a previous study [54].
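The global forms of the two metrics in Eqs. (6)-(8) can be computed directly as sketched below (a sketch over whole images; in practice both indices are often evaluated over local windows and averaged, and the function names are illustrative):

```python
import numpy as np

def uiqi(x, y):
    """Eqs. (6)-(7): universal image quality index (global form)."""
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    sxy = ((x - mx) * (y - my)).mean()     # cross-covariance
    l_c = sxy / (sx * sy)                  # loss of correlation
    d_l = 2 * mx * my / (mx**2 + my**2)    # luminance distortion
    d_c = 2 * sx * sy / (sx**2 + sy**2)    # contrast distortion
    return l_c * d_l * d_c

def ssim(x, y, c1=6.50, c2=58.52):
    """Eq. (8): structural similarity index with the study's constants."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * sxy + c2)) / \
           ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))
```

For identical inputs, all three UIQI factors and the SSIM evaluate to 1, which is the upper bound of both indices.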
The SSIM was defined between -1 (no similarity) and +1 (perfect similarity).

2.5 3D segmentation–dataset preparation

The 60 volumes used for training and testing the 3D segmentation framework (20 from each device, balanced with respect to pathology) were manually segmented (slice-wise) by an expert observer (SD) using Amira (version 6, FEI, Hillsboro, OR). The following classes of tissues were segmented (Fig. 1, (B)): (1) the RNFL and prelamina (in red), (2) the ganglion cell complex (GCC; ganglion cell layer + inner plexiform layer; in cyan), (3) all other retinal layers (in blue); (4) the retinal pigment epithelium (RPE; in pink); (5) the choroid (in yellow); and (6) the lamina cribrosa (LC; in indigo). Noise (all regions below the choroid-sclera interface; in grey) and vitreous humor (in black) were also isolated. We were unable to obtain a full-thickness segmentation of the LC due to limited visibility [55]. We also excluded the peripapillary sclera due to its poor visibility and the extreme subjectivity of its boundaries, especially in Cirrus and RTVue volumes. To optimize computational speed, the volumes (baseline OCT + labels) for all three devices were resized (in voxels) to 112 (height) x 88 (width) x 48 (number of B-scans).

2.5.1 Deep learning based 3D segmentation of the ONH

Recent studies have demonstrated that 3D CNNs can improve the reliability of automated segmentation [56–63], and even out-perform their 2D variants [57]. This is because 3D CNNs not only harness the information from each image, but also effectively combine it with the depth-wise spatial information from adjacent images. Despite their tremendous potential, the applications of 3D CNNs in ophthalmology are still in their infancy [64–69], and they have not yet been explored for the segmentation of the ONH tissues.
Further, there exist discrepancies in the delineation of ambiguous regions (e.g., the choroid-sclera boundary, the LC boundary) even among different well-trained DL models, depending upon the type and complexity of the architecture/feature extraction, the learning method, etc., causing variability in the automated measurements. To address this, recent DL studies have explored ensemble learning [31,70–78], a meta-learning approach that synergizes (combines and fine-tunes) [75] the predictions from multiple networks to offer a single prediction that is closest to the ground-truth. Specifically, ensemble learning has been shown to generalize better and increase the robustness of segmentations in OCT [31,71] and other medical imaging modalities [72–74,77]. In this study, we designed and validated 'ONH-Net', a 3D segmentation framework inspired by the popular 3D U-Net [58], to isolate six ONH tissues from OCT volumes. The ONH-Net consisted of three segmentation networks (3D CNNs) and one 3D CNN for ensemble learning (referred to as the 'ensembler'). Each of the three segmentation CNNs offered an equally plausible segmentation; these were then synergized by the 'ensembler' to yield the final 3D segmentation of the ONH tissues.

2.5.2 3D segmentation–network description

The design of the three segmentation CNNs was based on the 3D U-Net [58] and its variants [65]. Briefly, each CNN (Fig. 2, (A)) comprised four micro-U-Nets (Fig. 2, (B); μ-U-Nets) and a latent space (Fig. 2, (C); LS). We used multi-scale hierarchical feature extraction [39,44] to obtain smoother tissue boundaries. The three CNNs differed from each other only in the design of the 'feature extraction' (FE) units (Fig. 2, (D); Types 1-3), thus resulting in three equally plausible segmentations.

Fig. 2. The DL architecture of the proposed 3D segmentation framework (three segmentation CNNs + one ensembler network) is shown. Each CNN (A) comprised four micro-U-Nets (μ-U-Nets; B) and a latent space (LS; C).
The three CNNs differed from each other only in the design of the 'feature extraction' (FE) units (D; Types 1-3). The ensembler (E) consisted of three sets of 3D convolutional layers, with each set separated by a dropout layer. ONH-Net (F) was then assembled by using the three trained CNNs as parallel input pipelines to the ensembler network.

The ensembler (Fig. 2, (E)) consisted of three sets of 3D convolutional layers, with each set separated by a dropout layer (50%) [79] to limit overfitting and improve generalizability. Each of the three segmentation CNNs was first trained end-to-end with the same labeled dataset. The ONH-Net was then assembled by using the three trained CNNs as parallel input pipelines to the ensembler network (Fig. 2, (F)). Finally, we trained the ONH-Net (ensembler weights: trainable; segmentation CNN weights: frozen) end-to-end using the same aforementioned labeled dataset. During this process, each segmentation CNN provided equally plausible segmentation feature maps (obtained from the last 3D convolution layer), which were then concatenated and fed to the ensembler for fine-tuning. The ONH-Net was trained separately for each device. The design and implementation details can be found in Supplement 1. All the DL networks (segmentation CNNs, ONH-Net) were trained with the stochastic gradient descent (SGD; learning rate: 0.01; Nesterov momentum: 0.05 [80]) optimizer, and the Jaccard distance was used as the loss function [26]. We empirically observed that the SGD optimizer with Nesterov momentum offered better generalizability and faster convergence than the Adam optimizer [52] for OCT segmentation problems that typically use limited data, while Adam performed better for image-to-image translation problems (i.e., enhancement [39]) that use much larger datasets. However, we are unable to theoretically explain this yet for our case. Given the limitations in hardware, all the DL networks were trained with a batch size of 1.
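The Jaccard distance loss mentioned above can be sketched in its soft (differentiable) form over predicted class probabilities (a minimal sketch; the smoothing term, which avoids division by zero for empty masks, is a common addition and an assumption here, not something the paper specifies):

```python
import numpy as np

def jaccard_distance(pred, target, smooth=1.0):
    """Soft Jaccard (IoU) distance: 1 - |pred ∩ target| / |pred ∪ target|.

    pred   : predicted probabilities in [0, 1]
    target : binary ground-truth mask of the same shape
    """
    intersection = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)
```

A perfect prediction gives a distance of 0; the loss penalizes both missed tissue voxels and false positives through the union term, which is why it copes better with class imbalance than plain cross-entropy in sparse segmentation masks.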
To circumvent the scarcity of data, all the DL networks used custom data augmentation techniques (B-scan-wise) as in our earlier study [26]. We ensured that the same data augmentation was used for each B-scan in a given volume. The three CNNs consisted of 7.2 M (Type 1), 7.2 M (Type 2), and 12.4 M (Type 3) trainable parameters, while the ONH-Net consisted of 28.86 M parameters (2.06 M trainable parameters [ensembler], 26.8 M non-trainable parameters [trained CNNs with weights frozen]). All the DL networks were trained and tested on an NVIDIA GTX 1080 founders edition GPU with CUDA 10.1 and cuDNN v7.5 acceleration. Using the given hardware configuration, the ONH-Net was trained in 12 hours (10 hours for each CNN [trained in parallel; one per GPU]; 2 hours for fine-tuning with the ensembler). Once trained, each OCT volume was segmented in about 120 ms.

2.5.3 3D segmentation–training and testing

We used a five-fold cross-validation approach (for each device) to train and test the performance of ONH-Net. In this process, the labeled dataset (20 OCT volumes + manual segmentations) was split into five equal parts. One part (the 'left-out' set; 4 OCT volumes + manual segmentations) was used as the testing dataset, while the remaining four parts (16 OCT volumes + manual segmentations) were used as the training dataset. The entire process was repeated five times, each with a different 'left-out' testing dataset (and corresponding training dataset). In total, for each device, the segmentation performance was assessed on 20 OCT volumes (4 per validation; 5-fold cross-validation).

2.5.4 3D segmentation–qualitative analysis

The segmentations obtained from the trained ONH-Net on unseen data were manually reviewed by expert observers (S.D. & T.P.H) and compared against their corresponding manual segmentations.
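The five-fold scheme described in Section 2.5.3 amounts to the following partitioning (a sketch with a deterministic contiguous split; the function name and volume identifiers are illustrative, and any shuffling the authors may have applied is not specified):

```python
def five_fold_splits(volume_ids, k=5):
    """Yield (train, test) splits: each fold holds out len(ids)/k
    volumes for testing and trains on the remaining ones."""
    fold = len(volume_ids) // k
    for i in range(k):
        test = volume_ids[i * fold:(i + 1) * fold]
        train = volume_ids[:i * fold] + volume_ids[(i + 1) * fold:]
        yield train, test
```

For the 20 labeled volumes per device, each fold tests on 4 volumes and trains on 16; over the five folds, every volume is tested exactly once.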
2.5.5 3D segmentation–quantitative analysis

We used the following metrics to quantitatively assess the segmentation performance: (1) the Dice coefficient (DC); (2) specificity (Sp); and (3) sensitivity (Sn). The metrics were computed in 3D for the following tissues: (1) the RNFL and prelamina; (2) the GCC; (3) all other retinal layers; (4) the RPE; and (5) the choroid. Given the subjectivity in the visibility of the posterior LC boundary [55], we excluded the LC from quantitative assessment. Noise and vitreous humor were also excluded. The Dice coefficient was used to assess the spatial overlap between the manual and DL segmentations (between 0 [no overlap] and 1 [perfect overlap]). For each tissue, the DC was computed as:

(9) $$DC = \frac{2 \times |D \cap M|}{|D| + |M|}$$

where $D$ and $M$ were the voxels that represented the chosen tissue in the DL-segmented and the corresponding manually segmented volumes. Specificity (Sp) was used to assess the true negative rate of the segmentation framework and was defined as:

(10) $$Sp = \frac{|\bar{D} \cap \bar{M}|}{|\bar{M}|}$$

where $\bar{D}$ represented the voxels that did not belong to the chosen tissue in the DL-segmented volume, while $\bar{M}$ represented the same in the corresponding manually segmented volume. Sensitivity (Sn) was used to assess the true positive rate and was defined as:

(11) $$Sn = \frac{|D \cap M|}{|M|}$$

2.5.6 3D segmentation–effect of image enhancement

To assess if image enhancement had an effect on segmentation performance, we trained and tested ONH-Net on the baseline and the DL-enhanced datasets. For both datasets, ONH-Net was trained on any one device (Spectralis/Cirrus/RTVue), but tested on all three devices (Spectralis, Cirrus, and RTVue). Paired t-tests were used to compare the differences (means) in the segmentation performance (Dice coefficients, sensitivities, specificities; mean of all tissues) for both cases.
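The three metrics in Eqs. (9)-(11) reduce to set operations on boolean voxel masks, as sketched below for a single tissue class (the function name is illustrative; specificity is computed over the non-tissue voxels, i.e. the true negative rate):

```python
import numpy as np

def segmentation_metrics(dl, manual):
    """Dice (Eq. 9), specificity (Eq. 10), and sensitivity (Eq. 11)
    for one tissue class; dl and manual are boolean voxel masks."""
    d, m = dl.astype(bool), manual.astype(bool)
    dice = 2 * np.sum(d & m) / (np.sum(d) + np.sum(m))   # spatial overlap
    specificity = np.sum(~d & ~m) / np.sum(~m)           # true negative rate
    sensitivity = np.sum(d & m) / np.sum(m)              # true positive rate
    return dice, specificity, sensitivity
```

In practice, the metrics would be computed once per tissue class by thresholding the per-class probability maps, then averaged across tissues as reported in the Results.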
For all experiments, the segmentation performance was compared between glaucoma and healthy subjects.

2.5.7 3D segmentation–device independency

When tested on a given device (Spectralis/Cirrus/RTVue), paired t-tests were used to assess the differences (Spectralis vs. Cirrus; Cirrus vs. RTVue; RTVue vs. Spectralis) in the segmentation performance depending on the device used for training ONH-Net. The process was performed with both the baseline and DL-enhanced datasets.

3. Results

3.1 Image enhancement–qualitative analysis

The enhancer was tested on a total of 1,440 (480 from each device) unseen baseline B-scans. In the DL-enhanced B-scans from all three devices (Fig. 3, 3rd column), the ONH tissue boundaries appeared sharper, with a uniformly enhanced intensity profile (compared to the respective 'baseline' B-scans). The blood vessel shadows were also reduced, with improved deep-tissue (choroid-scleral interface, LC) visibility. In all cases, the DL-enhanced B-scans were consistently similar to their corresponding digitally-enhanced B-scans (Fig. 3, 2nd column), with no DL-induced artifacts.

Fig. 3. The qualitative performance of the image enhancement network is shown for six randomly selected (1-6) subjects (2 per device). The 1st, 2nd, and 3rd columns represent the baseline, digitally-enhanced, and the corresponding DL-enhanced B-scans for patients imaged with the Spectralis (1-2), Cirrus (3-4), and RTVue (5-6) devices, respectively.

3.2 Image enhancement–quantitative analysis

The mean UIQI (mean ± SD) for the DL-enhanced B-scans (compared to baseline B-scans) was 0.94 ± 0.02, 0.95 ± 0.03, and 0.97 ± 0.01 for Spectralis, Cirrus, and RTVue, respectively, indicating improved image quality. The mean SSIM (mean ± SD) for the DL-enhanced B-scans (compared to digitally-enhanced B-scans) was 0.95 ± 0.02, 0.91 ± 0.02, and 0.93 ± 0.03 for Spectralis, Cirrus, and RTVue, respectively, indicating strong structural similarity.
3.3 3D segmentation performance–qualitative analysis

When trained and tested on the baseline volumes from the same device (Fig. 4, 5, and 6; 4th column), ONH-Net successfully isolated all ONH layers. Further, the DL segmentations appeared consistent with their respective manual segmentations (Fig. 4, 5, and 6; 3rd column; refer to Fig. S3 in Supplement 1 for 3D visualization), with no difference in the segmentation performance between glaucoma and healthy OCT volumes.

Fig. 4. The qualitative performance (one randomly chosen B-scan per volume) of the ONH-Net 3D segmentation framework for three healthy (1-3) and three glaucoma (4-6) subjects is shown. The framework was trained on volumes from Spectralis, and tested on the Spectralis (1, 4), Cirrus (2, 5), and RTVue (3, 6) devices, respectively. The 1st, 2nd, and 3rd columns represent the baseline, DL-enhanced, and the corresponding manual segmentation for the chosen B-scan. The 4th and 5th columns represent the DL segmentations when ONH-Net was trained and tested using the baseline and DL-enhanced volumes, respectively.

Fig. 5. The qualitative performance (one randomly chosen B-scan per volume) of the ONH-Net 3D segmentation framework for three healthy (1-3) and three glaucoma (4-6) subjects is shown. The framework was trained on volumes from Cirrus, and tested on the Spectralis (1, 4), Cirrus (2, 5), and RTVue (3, 6) devices, respectively. The 1st, 2nd, and 3rd columns represent the baseline, DL-enhanced, and the corresponding manual segmentation for the chosen B-scan. The 4th and 5th columns represent the DL segmentations when the ONH-Net was trained and tested using the baseline and DL-enhanced volumes, respectively.

Fig. 6. The qualitative performance (one randomly chosen B-scan per volume) of the ONH-Net 3D segmentation framework for three healthy (1-3) and three glaucoma (4-6) subjects is shown. The framework was trained on volumes from RTVue, and tested on the Spectralis (1, 4), Cirrus (2, 5), and RTVue (3, 6) devices, respectively.
The 1st, 2nd, and 3rd columns represent the baseline, DL-enhanced, and the corresponding manual segmentation for the chosen B-scan. The 4th and 5th columns represent the DL segmentations when the ONH-Net was trained and tested using the baseline and DL-enhanced volumes, respectively.

3.4 3D segmentation performance–quantitative analysis

When trained and tested on the baseline volumes (same device), the mean Dice coefficients (mean of all tissues; mean ± SD) were: 0.93 ± 0.02, 0.93 ± 0.02, and 0.93 ± 0.02 for Spectralis, Cirrus, and RTVue, respectively. The mean sensitivities / specificities (mean of all tissues; mean ± SD) were: 0.94 ± 0.02 / 0.99 ± 0.00, 0.93 ± 0.02 / 0.99 ± 0.00, and 0.93 ± 0.02 / 0.99 ± 0.00, respectively.

3.5 3D segmentation performance–effect of image enhancement and device independency

Without image enhancement (baseline dataset), ONH-Net trained with one device was unable to segment even a single ONH tissue reliably on the other two devices (Fig. 4; 2nd, 3rd, 5th, 6th rows; 4th column; similarly for Fig. 5–6). In all cases, Dice coefficients were always lower than 0.65, sensitivities lower than 0.77, and specificities lower than 0.80. However, with image enhancement (DL-enhanced dataset), ONH-Net trained with one device was able to accurately segment all tissue layers on the other two devices, with mean Dice coefficients and sensitivities > 0.92 (Fig. 4–6, 5th column). In addition, when trained and tested on the same device, its performance improved for several ONH layers (p < 0.05). The tissue-wise quantitative metrics for the aforementioned cases can be found in Supplement 1 (Tables S1-S6). Further, when trained and tested with the DL-enhanced OCT volumes, irrespective of the device used for training, there were no significant differences (at p < 0.05) in the segmentation performance for all tissues (Fig. 7, 8, 9), except for the LC. The tissue-wise quantitative metrics for the individual cases can be found in Supplement 1 (Tables S7-S12).
Finally, we observed no significant differences in the segmentation performance between glaucoma and healthy subjects.

Fig. 7. The device independent segmentation performance of the proposed ONH-Net is shown. The segmentation performance on four randomly chosen (1-2 healthy; 3-4 glaucoma) Spectralis volumes from the test set is shown (one B-scan per volume). The 1st, 2nd, and 3rd columns represent the baseline, DL enhanced, and the corresponding manual segmentation for the chosen B-scan. The 4th, 5th, and 6th columns represent the segmentations obtained when tested using the Spectralis, Cirrus, and RTVue trained segmentation models, respectively.

Fig. 8. The device independent segmentation performance of the proposed ONH-Net is shown. The segmentation performance on four randomly chosen (1-2 healthy; 3-4 glaucoma) Cirrus volumes from the test set is shown (one B-scan per volume). The 1st, 2nd, and 3rd columns represent the baseline, DL enhanced, and the corresponding manual segmentation for the chosen B-scan. The 4th, 5th, and 6th columns represent the segmentations obtained when tested using the Spectralis, Cirrus, and RTVue trained segmentation models, respectively.

Fig. 9. The device independent segmentation performance of the proposed ONH-Net is shown. The segmentation performance on four randomly chosen (1-2 healthy; 3-4 glaucoma) RTVue volumes from the test set is shown (one B-scan per volume). The 1st, 2nd, and 3rd columns represent the baseline, DL enhanced, and the corresponding manual segmentation for the chosen B-scan. The 4th, 5th, and 6th columns represent the segmentations obtained when tested using the Spectralis, Cirrus, and RTVue trained segmentation models, respectively.

In this study, we proposed a 3D segmentation framework (ONH-Net) that is easily translatable across OCT devices in a label-free manner (i.e. without the need to manually re-segment data for each device). Specifically, we developed 2 sets of DL networks.
The first (referred to as the 'enhancer') was able to enhance OCT image quality from 3 OCT devices, and harmonized image characteristics across these devices. The second performed 3D segmentation of 6 important ONH tissue layers. We found that the use of the 'enhancer' was critical for our segmentation network to achieve device independency. In other words, our 3D segmentation network trained on any of the 3 devices successfully segmented ONH tissue layers from the other two devices with high performance. Our work suggests that it is possible to automatically segment OCT volumes from a new OCT device without having to re-train ONH-Net with manual segmentations from that device.

Besides existing commercial SD-OCT manufacturers, the democratization and emergence of OCT as the clinical gold-standard for in vivo ophthalmic examinations [81] have encouraged the entry of several new manufacturers to the market as well. Further, owing to advancements in imaging technology, there has been a rise of next-generation devices: swept-source [82], polarization sensitive [83], and adaptive optics [84] based OCTs. Given that preparing reliable manual segmentations (training data) for OCT-based DL algorithms requires months of training for a skilled technician, and that it would take more than 8 hours of manual work to accurately segment just a single 3D volume for a limited number of tissue layers (here 6), it will soon become practically infeasible to perform manual segmentations for all OCT brands, device models, generations, and applications. Furthermore, only a few research groups have successfully managed to exploit DL to fully isolate ocular structures from 3D OCT images [25–29,32], and only for a very limited number of devices. There is therefore a strong need for a single DL segmentation framework that can easily be translated across all existing and future OCT devices, thus eliminating the excruciating task of preparing training datasets manually.
Our approach provides a high-performing solution to that problem. Eventually, we believe, this could open doors for multi-device glaucoma management.

While classical image processing frameworks can indeed be used to improve the quality of OCT images, the resulting enhanced images would still retain the device-specific image characteristics (i.e., intensity and contrast profiles). In this study, we hypothesized that by reducing the device-specific characteristics of the enhanced images, it might be possible to 'deceive' DL networks that subsequently use them into perceiving images from multiple devices in a similar manner. Given that this might not be achievable using simple mathematical operations, we proposed a DL approach that did not require explicit or hardcoded functions, but rather learned to do so organically. During training, when repeatedly exposed to images from multiple devices, the enhancer network constantly refined its weights to best suit all of them. As a result, we visually observed that the DL enhanced images had characteristics (i.e., intensity and contrast profiles) that were less specific to any one device. However, we have not yet been able to quantify this observation, and further research is required to do so.

In this study, we found that the use of the enhancer was crucial for ONH-Net to achieve device independency, in other words, the ability to segment OCT volumes from devices it had not been trained with earlier. This can be attributed to the design of the proposed DL networks, which allowed a perception of visual information through a host of low-level (e.g. tissue boundaries) and high-level abstract features (e.g. speckle pattern, intensity, and contrast profile).
When image enhancement was used as a pre-processing step, the enhancer not only improved the quality of low-level features, but also reduced differences in high-level abstract features across OCT devices, thus 'deceiving' ONH-Net into perceiving volumes from all three devices similarly. This enabled ONH-Net trained on the DL-enhanced OCT volumes from one device to successfully isolate the ONH tissues from the other two devices with very high performance (mean Dice coefficients > 0.92). Note that this performance is superior to that of our previous 2D segmentation framework, which also had the additional caveat that it only worked on a single device [26]. In addition, irrespective of the device used for training, there were no significant differences (p > 0.05) in segmentation performance. In all cases, our DL segmentations were deemed clinically reliable (refer to Supplement 1).

To confirm the hypothesis on the need for the enhancer network, we also trained and tested ONH-Net with only the digitally enhanced images. Although the quality of the digitally enhanced images was comparable to that of the DL enhanced images, the segmentation performance when tested on unseen devices was still poor (refer to Table S13 in Supplement 1). This can be attributed to the fact that the digitally enhanced OCT images still retained their device-specific image characteristics, thus re-iterating the necessity of obtaining harmonized images as a precursor to achieving device independency.

In a recent landmark study, De Fauw et al. [71] proposed the idea of using device-independent representations (segmentation maps) for the diagnosis of retinal pathologies from OCT images. However, the study was not truly device-independent: even though the diagnosis network was device-independent, the segmentation network was still trained with multiple devices. Similarly, our approach may not truly be considered device-independent.
While ONH-Net is device-independent, the enhancer (on which ONH-Net relies) needs to be trained with data for all considered devices. But this is still a very acceptable option, because the enhancer only requires un-labeled images (i.e. non-segmented; ∼100 OCT volumes) for any new device being considered, after which automated segmentation can be performed without ever needing manual segmentations for that new device. Such a task would require a few minutes rather than the several weeks/months needed for manual segmentations.

Finally, the proposed approach should not be confused with 'transfer learning' [85], a DL technique gaining momentum in medical imaging [74,86–89]. In that technique, a DL network is first pre-trained on large datasets (e.g. ImageNet [51]), and when subsequently fine-tuned on a smaller dataset for the task of interest (e.g. segmentation), it re-uses the pre-trained knowledge (high-level representations [e.g. edges, shapes]) to generalize better. In our approach, the generalization of ONH-Net was achieved using the enhanced images, and not the actual knowledge of the enhancer network, thus keeping the learning of the two networks mutually exclusive, yet necessary.

There are several limitations to this study that warrant further discussion. First, we used only 20 volumes in total to test the segmentation performance for each device. Second, the study was performed only using spectral-domain OCT devices, but not swept-source. Third, although the enhancer simultaneously addressed multiple issues affecting image quality, we were unable to quantify the effect of each. Also, we were unable to quantify the extent to which the 'DL-enhanced' B-scans were harmonized. Fourth, we observed slight differences in LC curvature and LC thickness when the LC was segmented using ONH-Net trained on different devices (Fig. 7, Fig. 8, Fig. 9; 2nd and 4th rows).
Given the significance of LC morphology in glaucoma [90], this subjectivity could affect glaucoma diagnosis; this is yet to be tested. Further, in a few B-scans (Fig. 7, Fig. 8, Fig. 9; 6th column), we observed that the GCC segmentations were thicker when ONH-Net was trained on volumes from the RTVue device. These variabilities might limit truly multi-device glaucoma management. We are currently exploring the use of advanced DL concepts such as semi-supervised learning [91] to address these issues, which may have occurred as a result of limited training data. Finally, although ONH-Net's performance was unaffected by the presence of glaucoma, it is unclear if the same will be true in the presence of other conditions such as cataract [92], peripapillary atrophy [93], and high myopia [94] that commonly co-exist with glaucoma.

To summarize, we demonstrate as a proof of concept that it is possible to develop DL segmentation tools that are easily translatable across OCT devices without ever needing additional manual segmentation data. The core contributions of this study were: (1) the development of ONH-Net – a highly modular DL approach for the segmentation of 3D OCT volumes of the ONH; and (2) the development of the enhancer – a DL approach to enhance OCT image quality from multiple devices and simultaneously reduce the differences in device-specific image characteristics. Through these contributions, we were able to address (as a proof of concept) the device-specific nature of DL algorithms, an important factor that limits the translation and wide-spread adoption of DL algorithms in clinics. Finally, we hope the proposed framework can help patients through longitudinal follow-up on multiple devices, and encourage multi-center glaucoma studies.
Ministry of Education - Singapore (R-155-000-183-112, R-397-000-280-112, R-397-000-294-114, R-397-000-308-112); National University of Singapore (NUSYIA FY16 P16, R-155-000-180-133); National Medical Research Council (NMRC/OFIRG/0048/2017, NMRC/STAR/0023/2014). Dr. Michaël J. A. Girard and Dr. Alexandre H. Thiéry are co-founders of Abyss Processing. See Supplement 1 for supporting content. 1. J. S. Schuman, "Spectral domain optical coherence tomography for glaucoma (an AOS thesis)," Trans Am Ophthalmol Soc 106, 426–458 (2008). 2. C. Bowd, R. N. Weinreb, J. M. Williams, and L. M. Zangwill, "The retinal nerve fiber layer thickness in ocular hypertensive, normal, and glaucomatous eyes with optical coherence tomography," Arch. Ophthalmol. 118(1), 22–26 (2000). [CrossRef] 3. A. Miki, F. A. Medeiros, R. N. Weinreb, S. Jain, F. He, L. Sharpsten, N. Khachatryan, N. Hammel, J. M. Liebmann, C. A. Girkin, P. A. Sample, and L. M. Zangwill, "Rates of retinal nerve fiber layer thinning in glaucoma suspect eyes," Ophthalmology 121(7), 1350–1358 (2014). [CrossRef] 4. Z. Lin, S. Huang, B. Xie, and Y. Zhong, "Peripapillary Choroidal Thickness and Open-Angle Glaucoma: A Meta-Analysis," J Ophthalmol 2016, 5484568 (2016). 5. J. M. D. Gmeiner, W. A. Schrems, C. Y. Mardin, R. Laemmer, F. E. Kruse, and L. M. Schrems-Hoesl, "Comparison of Bruch's Membrane Opening Minimum Rim Width and Peripapillary Retinal Nerve Fiber Layer Thickness in Early Glaucoma Assessment," Invest. Ophthalmol. Visual Sci. 57(9), OCT575–OCT584 (2016). [CrossRef] 6. K. J. Halupka, B. J. Antony, M. H. Lee, K. A. Lucy, R. S. Rai, H. Ishikawa, G. Wollstein, J. S. Schuman, and R. Garnavi, "Retinal optical coherence tomography image enhancement via deep learning," Biomed. Opt. Express 9(12), 6205–6221 (2018). [CrossRef] 7. S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch, "Lamina cribrosa depth in different stages of glaucoma," Invest. Ophthalmol. Visual Sci. 
56(3), 2059–2064 (2015). [CrossRef] 8. F. A. Almobarak, N. O'Leary, A. S. C. Reis, G. P. Sharpe, D. M. Hutchison, M. T. Nicolela, and B. C. Chauhan, "Automated Segmentation of Optic Nerve Head Structures With Optical Coherence Tomography," Invest. Ophthalmol. Visual Sci. 55(2), 1161–1168 (2014). [CrossRef] 9. K. X. Cheong, L. W. Lim, K. Z. Li, and C. S. Tan, "A novel and faster method of manual grading to measure choroidal thickness using optical coherence tomography," Eye 32(2), 433–438 (2018). [CrossRef] 10. S. L. Mansberger, S. A. Menda, B. A. Fortune, S. K. Gardiner, and S. Demirel, "Automated Segmentation Errors When Using Optical Coherence Tomography to Measure Retinal Nerve Fiber Layer Thickness in Glaucoma," Am. J. Ophthalmol. 174, 1–8 (2017). [CrossRef] 11. B. Al-Diri, A. Hunter, and D. Steel, "An Active Contour Model for Segmenting and Measuring Retinal Vessels," IEEE Trans. Med. Imaging 28(9), 1488–1497 (2009). [CrossRef] 12. M. A. Mayer, J. Hornegger, C. Y. Mardin, and R. P. Tornow, "Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients," Biomed. Opt. Express 1(5), 1358–1383 (2010). [CrossRef] 13. S. Niu, Q. Chen, L. de Sisternes, D. L. Rubin, W. Zhang, and Q. Liu, "Automated retinal layers segmentation in SD-OCT images using dual-gradient and spatial correlation smoothness constraint," Comput. Biol. Med. 54, 116–128 (2014). [CrossRef] 14. J. Tian, P. Marziliano, M. Baskaran, T. A. Tun, and T. Aung, "Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images," Biomed. Opt. Express 4(3), 397–411 (2013). [CrossRef] 15. L. Zhang, K. Lee, M. Niemeijer, R. F. Mullins, M. Sonka, and M. D. Abramoff, "Automated segmentation of the choroid from clinical SD-OCT," Invest. Ophthalmol. Visual Sci. 53(12), 7510–7519 (2012). [CrossRef] 16. Z. Hu, M. D. Abràmoff, Y. H. Kwon, K. Lee, and M. K. 
Garvin, "Automated Segmentation of Neural Canal Opening and Optic Cup in 3D Spectral Optical Coherence Tomography Volumes of the Optic Nerve Head," Invest. Ophthalmol. Visual Sci. 51(11), 5708–5717 (2010). [CrossRef] 17. H. Ishikawa, J. Kim, T. R. Friberg, G. Wollstein, L. Kagemann, M. L. Gabriele, K. A. Townsend, K. R. Sung, J. S. Duker, J. G. Fujimoto, and J. S. Schuman, "Three-Dimensional Optical Coherence Tomography (3D-OCT) Image Enhancement with Segmentation-Free Contour Modeling C-Mode," Invest. Ophthalmol. Visual Sci. 50(3), 1344–1349 (2009). [CrossRef] 18. R. Kafieh, H. Rabbani, M. D. Abramoff, and M. Sonka, "Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map," Med. Image Anal. 17(8), 907–928 (2013). [CrossRef] 19. K. Lee, H. Zhang, A. Wahle, M. D. Abràmoff, and M. Sonka, "Multi-layer 3D Simultaneous Retinal OCT Layer Segmentation: Just-Enough Interaction for Routine Clinical Use," in VipIMAGE 2017, (Springer International Publishing, 2018), 862–871. 20. Y. Sun, T. Zhang, Y. Zhao, and Y. He, "3D Automatic Segmentation Method for Retinal Optical Coherence Tomography Volume Data Using Boundary Surface Enhancement," arXiv:1508.00966 [cs.CV] (2015). 21. C. Wang, Y. Wang, D. Kaba, H. Zhu, Y. Lv, Z. Wang, X. Liu, and Y. Li, "Segmentation of Intra-retinal Layers in 3D Optic Nerve Head Images," in Image and Graphics, (Springer International Publishing, 2015), 321–332. 22. D. Alonso-Caneiro, S. A. Read, and M. J. Collins, "Automatic segmentation of choroidal thickness in optical coherence tomography," Biomed. Opt. Express 4(12), 2795–2812 (2013). [CrossRef] 23. R. A. Alshareef, S. Dumpala, S. Rapole, M. Januwada, A. Goud, H. K. Peguda, and J. Chhablani, "Prevalence and Distribution of Segmentation Errors in Macular Ganglion Cell Analysis of Healthy Eyes Using Cirrus HD-OCT," PLoS One 11(5), e0155319 (2016). [CrossRef] 24. J. Chhablani, T. Krishnan, V. Sethi, and I. 
Kozak, "Artifacts in optical coherence tomography," Saudi. J. Ophthalmol. 28(2), 81–87 (2014). [CrossRef] 25. S. K. Devalla, K. S. Chin, J.-M. Mari, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiéry, and M. J. A. Girard, "A Deep Learning Approach to Digitally Stain Optical Coherence Tomography Images of the Optic Nerve Head," Invest. Ophthalmol. Visual Sci. 59(1), 63–74 (2018). [CrossRef] 26. S. K. Devalla, P. K. Renukanand, B. K. Sreedhar, G. Subramanian, L. Zhang, S. Perera, J.-M. Mari, K. S. Chin, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiéry, and M. J. A. Girard, "DRUNET: a dilated-residual U-Net deep learning network to segment optic nerve head tissues in optical coherence tomography images," Biomed. Opt. Express 9(7), 3244–3265 (2018). [CrossRef] 27. L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. Farsiu, "Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search," Biomed. Opt. Express 8(5), 2732–2744 (2017). [CrossRef] 28. D. Lu, M. Heisler, S. Lee, G. W. Ding, E. Navajas, M. V. Sarunic, and M. F. Beg, "Deep-learning based multiclass retinal fluid segmentation and detection in optical coherence tomography images using a fully convolutional neural network," Med. Image Anal. 54, 100–110 (2019). [CrossRef] 29. A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, "ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks," Biomed. Opt. Express 8(8), 3627–3642 (2017). [CrossRef] 30. X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, "Choroid segmentation from Optical Coherence Tomography with graph-edge weights learned from deep convolutional neural networks," Neurocomputing 237, 332–341 (2017). [CrossRef] 31. T. H. Pham, S. K. Devalla, A. Ang, S. Zhi Da, A. H. Thiery, C. Boote, C.-Y. Cheng, V. Koh, and M. J. A. 
Girard, "Deep Learning Algorithms to Isolate and Quantify the Structures of the Anterior Segment in Optical Coherence Tomography Images," arXiv:1909.00331 [eess.IV] (2019). 32. F. G. Venhuizen, B. van Ginneken, B. Liefers, M. J. J. P. van Grinsven, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, "Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks," Biomed. Opt. Express 8(7), 3292–3316 (2017). [CrossRef] 33. T. C. Chen, A. Hoguet, A. K. Junk, K. Nouri-Mahdavi, S. Radhakrishnan, H. L. Takusagawa, and P. P. Chen, "Spectral-Domain OCT: Helping the Clinician Diagnose Glaucoma: A Report by the American Academy of Ophthalmology," Ophthalmology 125(11), 1817–1827 (2018). [CrossRef] 34. D. Romo-Bucheli, P. Seeböck, J. I. Orlando, B. S. Gerendas, S. M. Waldstein, U. Schmidt-Erfurth, and H. Bogunović, "Reducing image variability across OCT devices with unsupervised unpaired learning for improved segmentation of retina," Biomed. Opt. Express 11(1), 346–363 (2020). [CrossRef] 35. M. J. Girard, N. G. Strouthidis, C. R. Ethier, and J. M. Mari, "Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head," Invest. Ophthalmol. Visual Sci. 52(10), 7738–7748 (2011). [CrossRef] 36. W. Wu, O. Tan, R. R. Pappuru, H. Duan, and D. Huang, "Assessment of frame-averaging algorithms in OCT image analysis," Ophthalmic Surg. Lasers Imaging 44(2), 168–175 (2013). [CrossRef] 37. S. M. P. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, "Adaptive histogram equalization and its variations," Comput. Gr. Image Process 39(3), 355–368 (1987). [CrossRef] 38. B. S. Min, D. K. Lim, S. J. Kim, and J. H. Lee, "A Novel Method of Determining Parameters of CLAHE Based on Image Entropy," IJSEIA 7(5), 113–120 (2013). [CrossRef] 39. S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. 
Aung, L. Schmetterer, A. H. Thiéry, and M. J. A. Girard, "A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head," Sci. Rep. 9(1), 14454 (2019). [CrossRef] 40. O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, (Springer International Publishing, 2015), 234–241. 41. M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, "The Importance of Skip Connections in Biomedical Image Segmentation," arXiv:1608.04117 [cs.CV] (2016). 42. K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385 [cs.CV] (2015). 43. F. Yu and V. Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions," arXiv:1511.07122 [cs.CV] (2015). 44. Y. Liu, M. M. Cheng, X. Hu, K. Wang, and X. Bai, "Richer Convolutional Features for Edge Detection," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), 5872–5881. 45. C. Nwankpa, W. Ijomah, A. Gachagan, and S. Marshall, "Activation Functions: Comparison of trends in Practice and Research for Deep Learning," arXiv:1811.03378 [cs.LG] (2018). 46. J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual Losses for Real-Time Style Transfer and Super-Resolution," arXiv:1603.08155 [cs.CV] (2016). 47. K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," in 3rd International Conference on Learning Representations (ICLR 2015), (San Diego, CA, USA, 2015). 48. H. Cheong, S. K. Devalla, T. H. Pham, Z. Liang, T. A. Tun, X. Wang, S. Perera, L. Schmetterer, A. Tin, C. Boote, A. H. Thiery, and M. J. A. Girard, "DeshadowGAN: A Deep Learning Approach to Remove Shadows from Optical Coherence Tomography Images," arXiv:1910.02844v1 [eess.IV] (2019). 49.
K. Armanious, C. Jiang, M. Fischer, T. Küstner, K. Nikolaou, S. Gatidis, and B. Yang, "MedGAN: Medical Image Translation using GANs," arXiv:1806.06397 [cs.CV] (2018). 50. H. Liao, W.-A. Lin, J. Yuan, S. K. Zhou, and J. Luo, "Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction," arXiv:1906.01806 [eess.IV] (2019). 51. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition (2009), 248–255. 52. D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," arXiv:1412.6980 [cs.LG] (2014). 53. Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Process. Lett. 9(3), 81–84 (2002). [CrossRef] 54. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef] 55. J. M. Mari, N. G. Strouthidis, S. C. Park, and M. J. A. Girard, "Enhancement of Lamina Cribrosa Visibility in Optical Coherence Tomography Images Using Adaptive Compensation," Invest. Ophthalmol. Visual Sci. 54(3), 2238–2247 (2013). [CrossRef] 56. F. Milletari, N. Navab, and S.-A. Ahmadi, V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation (2016), pp. 565–571. 57. C. M. Deniz, S. Xiang, R. S. Hallyburton, A. Welbeck, J. S. Babb, S. Honig, K. Cho, and G. Chang, "Segmentation of the Proximal Femur from MR Images using Deep Convolutional Neural Networks," Sci. Rep. 8(1), 16485 (2018). [CrossRef] 58. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, (Springer International Publishing, 2016), 424–432. 59. H. R. Roth, H. Oda, X. Zhou, N. Shimizu, Y. Yang, Y. Hayashi, M.
Oda, M. Fujiwara, K. Misawa, and K. Mori, "An application of cascaded 3D fully convolutional networks for medical image segmentation," Comput. Med. Imag. Grap. 66, 90–99 (2018). [CrossRef] 60. Y. Huang, Q. Dou, Z.-X. Wang, L.-Z. Liu, Y. Jin, C. Li, L. Wang, H. Chen, and R.-H. Xu, 3D RoI-aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation (2019). 61. D. Müller and F. Kramer, "MIScnn: A Framework for Medical Image Segmentation with Convolutional Neural Networks and Deep Learning," arXiv:1910.09308 [eess.IV] (2019). 62. H. Roth, L. Lu, A. Farag, A. Sohn, and R. Summers, Spatial Aggregation of Holistically-Nested Networks for Automated Pancreas Segmentation (2016). 63. Q. Dou, L. Yu, H. Chen, Y. Jin, X. Yang, J. Qin, and P. A. Heng, "3D deeply supervised network for automated segmentation of volumetric medical images," Med. Image Anal. 41, 40–54 (2017). [CrossRef] 64. A. Abbasi, A. Monadjemi, L. Fang, H. Rabbani, and Y. Zhang, "Three-dimensional optical coherence tomography image denoising through multi-input fully-convolutional networks," Comput. Biol. Med. 108, 1–8 (2019). [CrossRef] 65. S. Feng, W. Zhu, H. Zhao, F. Shi, D. Xiang, and X. Chen, VinceptionC3D: a 3D convolutional neural network for retinal OCT image classification, SPIE Medical Imaging (SPIE, 2019), Vol. 10949. 66. N. Eladawi, M. Elmogy, M. Ghazal, L. Fraiwan, A. Aboelfetouh, A. Riad, H. Sandhu, R. Keynton, and A. El-Baz, "Early Signs Detection of Diabetic Retinopathy Using Optical Coherence Tomography Angiography Scans Based on 3D Multi-Path Convolutional Neural Network," in 2019 IEEE International Conference on Image Processing (ICIP) (2019), 1390–1394. 67. M.-X. Li, S.-Q. Yu, W. Zhang, H. Zhou, X. Xu, T.-W. Qian, and Y.-J. Wan, "Segmentation of retinal fluid based on deep learning: application of three-dimensional fully convolutional neural networks in optical coherence tomography images," Int. J. Ophthalmol 12, 1012–1020 (2019). 68. E. Noury, S. Sudhakaran, R. Chang, A.
Ran, C. Cheung, S. Thapa, H. Rao, S. Dasari, M. Riyazuddin, S. Nagaraj, and R. Zadeh, Detecting Glaucoma Using 3D Convolutional Neural Network of Raw SD-OCT Optic Nerve Scans (2019). 69. S. Maetschke, B. Antony, H. Ishikawa, G. Wollstein, J. Schuman, and R. Garnavi, "A feature agnostic approach for glaucoma detection in OCT volumes," PLoS One 14(7), e0219126 (2019). [CrossRef] 70. A. Benou, R. Veksler, A. Friedman, and T. Riklin Raviv, "Ensemble of expert deep neural networks for spatio-temporal denoising of contrast-enhanced MRI sequences," Med. Image Anal. 42, 145–159 (2017). [CrossRef] 71. J. De Fauw, J. R. Ledsam, B. Romera-Paredes, S. Nikolov, N. Tomasev, S. Blackwell, H. Askham, X. Glorot, B. O'Donoghue, D. Visentin, G. van den Driessche, B. Lakshminarayanan, C. Meyer, F. Mackinder, S. Bouton, K. Ayoub, R. Chopra, D. King, A. Karthikesalingam, C. O. Hughes, R. Raine, J. Hughes, D. A. Sim, C. Egan, A. Tufail, H. Montgomery, D. Hassabis, G. Rees, T. Back, P. T. Khaw, M. Suleyman, J. Cornebise, P. A. Keane, and O. Ronneberger, "Clinically applicable deep learning for diagnosis and referral in retinal disease," Nat. Med. 24(9), 1342–1350 (2018). [CrossRef] 72. N. Georgiev and A. Asenov, "Automatic Segmentation of Lumbar Spine MRI Using Ensemble of 2D Algorithms," in Computational Methods and Clinical Applications for Spine Imaging, (Springer International Publishing, 2019), 154–162. 73. K. Kamnitsas, W. Bai, E. Ferrante, S. McDonagh, M. Sinclair, N. Pawlowski, M. Rajchl, M. Lee, B. Kainz, D. Rueckert, and B. Glocker, "Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation," in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, (Springer International Publishing, 2018), 450–462. 74. X. Liu, L. Faes, A. U. Kale, S. K. Wagner, D. J. Fu, A. Bruynseels, T. Mahendiran, G. Moraes, M. Shamdas, C. Kern, J. R. Ledsam, M. K. Schmid, K. Balaskas, E. J. Topol, L. M. Bachmann, P. A. Keane, and A. K. 
Denniston, "A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis," Lancet Glob Health 1(6), e271–e297 (2019). [CrossRef] 75. Q. Lyu, H. Shan, and G. Wang, MRI Super-Resolution with Ensemble Learning and Complementary Priors (2019). 76. L. Rokach, "Ensemble-based classifiers," Artif. Intell. Rev. 33(1-2), 1–39 (2010). [CrossRef] 77. T. Zhou, S. Ruan, and S. Canu, "A review: Deep learning for medical image segmentation using multi-modality fusion," Array 3-4, 100004 (2019). [CrossRef] 78. F. Li, H. Chen, Z. Liu, X.-d. Zhang, M.-s. Jiang, Z.-z. Wu, and K.-q. Zhou, "Deep learning-based automated detection of retinal diseases using optical coherence tomography images," Biomed. Opt. Express 10(12), 6204–6226 (2019). [CrossRef] 79. N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," J. Mach. Learn. Res. 15, 1929–1958 (2014). 80. S. Vaswani, F. Bach, and M. Schmidt, "Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron," arXiv:1810.07288 [cs.LG] (2018). 81. J. Fujimoto and E. Swanson, "The Development, Commercialization, and Impact of Optical Coherence Tomography," Invest. Ophthalmol. Visual Sci. 57(9), OCT1–OCT13 (2016). [CrossRef] 82. A. Yasin Alibhai, C. Or, and A. J. Witkin, "Swept Source Optical Coherence Tomography: a Review," Curr. Ophthalmol. Rep. 6(1), 7–16 (2018). [CrossRef] 83. J. F. de Boer, C. K. Hitzenberger, and Y. Yasuno, "Polarization sensitive optical coherence tomography - a review [Invited]," Biomed. Opt. Express 8(3), 1838–1873 (2017). [CrossRef] 84. M. Pircher and R. J. Zawadzki, "Review of adaptive optics OCT (AO-OCT): principles and applications for retinal imaging [Invited]," Biomed. Opt. Express 8(5), 2536–2562 (2017). [CrossRef] 85. K. Weiss, T. M. Khoshgoftaar, and D. 
Wang, "A survey of transfer learning," J. Big. Data 3(1), 9 (2016). [CrossRef] 86. J. Chang, J. Yu, T. Han, H. Chang, and E. Park, "A method for classifying medical images using transfer learning: A pilot study on histopathology of breast cancer," in 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom) (2017), 1–4. 87. M. Maqsood, F. Nazir, U. Khan, F. Aadil, H. Jamal, I. Mehmood, and O.-Y. Song, "Transfer Learning Assisted Classification and Detection of Alzheimer's Disease Stages Using 3D MRI Scans," Sensors 19(11), 2645 (2019). [CrossRef] 88. A. Hosny, C. Parmar, T. P. Coroller, P. Grossmann, R. Zeleznik, A. Kumar, J. Bussink, R. J. Gillies, R. H. Mak, and H. J. W. L. Aerts, "Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study," PLoS Med. 15(11), e1002711 (2018). [CrossRef] 89. M. Raghu, C. Zhang, J. Kleinberg, and S. Bengio, "Transfusion: Understanding Transfer Learning for Medical Imaging," arXiv:1902.07208 [cs.CV] (2019). 90. S. H. Lee, T. W. Kim, E. J. Lee, M. J. Girard, and J. M. Mari, "Diagnostic Power of Lamina Cribrosa Depth and Curvature in Glaucoma," Invest. Ophthalmol. Visual Sci. 58(2), 755–762 (2017). [CrossRef] 91. G. Bortsova, F. Dubost, L. Hogeweg, I. Katramados, and M. D. Bruijne, "Semi-Supervised Medical Image Segmentation via Learning Consistency under Transformations," arXiv:1911.01218 [cs.CV] (2019). 92. J. M. Heltzer, "Coexisting glaucoma and cataract," Ophthalmology 111(2), 408–409 (2004). [CrossRef] 93. J. B. Jonas, "Clinical implications of peripapillary atrophy in glaucoma," Curr. Opin. Ophthalmol. 16(2), 84–88 (2005). [CrossRef] 94. L. Xu, Y. Wang, S. Wang, Y. Wang, and J. B. Jonas, "High Myopia and Glaucoma Susceptibility: The Beijing Eye Study," Ophthalmology 114(2), 216–220 (2007). [CrossRef]
Weinreb, J. M. Williams, and L. M. Zangwill, "The retinal nerve fiber layer thickness in ocular hypertensive, normal, and glaucomatous eyes with optical coherence tomography," Arch. Ophthalmol. 118(1), 22–26 (2000). A. Miki, F. A. Medeiros, R. N. Weinreb, S. Jain, F. He, L. Sharpsten, N. Khachatryan, N. Hammel, J. M. Liebmann, C. A. Girkin, P. A. Sample, and L. M. Zangwill, "Rates of retinal nerve fiber layer thinning in glaucoma suspect eyes," Ophthalmology 121(7), 1350–1358 (2014). Z. Lin, S. Huang, B. Xie, and Y. Zhong, "Peripapillary Choroidal Thickness and Open-Angle Glaucoma: A Meta-Analysis," J Ophthalmol 2016, 5484568 (2016). J. M. D. Gmeiner, W. A. Schrems, C. Y. Mardin, R. Laemmer, F. E. Kruse, and L. M. Schrems-Hoesl, "Comparison of Bruch's Membrane Opening Minimum Rim Width and Peripapillary Retinal Nerve Fiber Layer Thickness in Early Glaucoma Assessment," Invest. Ophthalmol. Visual Sci. 57(9), OCT575–OCT584 (2016). K. J. Halupka, B. J. Antony, M. H. Lee, K. A. Lucy, R. S. Rai, H. Ishikawa, G. Wollstein, J. S. Schuman, and R. Garnavi, "Retinal optical coherence tomography image enhancement via deep learning," Biomed. Opt. Express 9(12), 6205–6221 (2018). S. C. Park, J. Brumm, R. L. Furlanetto, C. Netto, Y. Liu, C. Tello, J. M. Liebmann, and R. Ritch, "Lamina cribrosa depth in different stages of glaucoma," Invest. Ophthalmol. Visual Sci. 56(3), 2059–2064 (2015). F. A. Almobarak, N. O'Leary, A. S. C. Reis, G. P. Sharpe, D. M. Hutchison, M. T. Nicolela, and B. C. Chauhan, "Automated Segmentation of Optic Nerve Head Structures With Optical Coherence Tomography," Invest. Ophthalmol. Visual Sci. 55(2), 1161–1168 (2014). K. X. Cheong, L. W. Lim, K. Z. Li, and C. S. Tan, "A novel and faster method of manual grading to measure choroidal thickness using optical coherence tomography," Eye 32(2), 433–438 (2018). S. L. Mansberger, S. A. Menda, B. A. Fortune, S. K. Gardiner, and S. 
Demirel, "Automated Segmentation Errors When Using Optical Coherence Tomography to Measure Retinal Nerve Fiber Layer Thickness in Glaucoma," Am. J. Ophthalmol. 174, 1–8 (2017). B. Al-Diri, A. Hunter, and D. Steel, "An Active Contour Model for Segmenting and Measuring Retinal Vessels," IEEE Trans. Med. Imaging 28(9), 1488–1497 (2009). M. A. Mayer, J. Hornegger, C. Y. Mardin, and R. P. Tornow, "Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients," Biomed. Opt. Express 1(5), 1358–1383 (2010). S. Niu, Q. Chen, L. de Sisternes, D. L. Rubin, W. Zhang, and Q. Liu, "Automated retinal layers segmentation in SD-OCT images using dual-gradient and spatial correlation smoothness constraint," Comput. Biol. Med. 54, 116–128 (2014). J. Tian, P. Marziliano, M. Baskaran, T. A. Tun, and T. Aung, "Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images," Biomed. Opt. Express 4(3), 397–411 (2013). L. Zhang, K. Lee, M. Niemeijer, R. F. Mullins, M. Sonka, and M. D. Abramoff, "Automated segmentation of the choroid from clinical SD-OCT," Invest. Ophthalmol. Visual Sci. 53(12), 7510–7519 (2012). Z. Hu, M. D. Abràmoff, Y. H. Kwon, K. Lee, and M. K. Garvin, "Automated Segmentation of Neural Canal Opening and Optic Cup in 3D Spectral Optical Coherence Tomography Volumes of the Optic Nerve Head," Invest. Ophthalmol. Visual Sci. 51(11), 5708–5717 (2010). H. Ishikawa, J. Kim, T. R. Friberg, G. Wollstein, L. Kagemann, M. L. Gabriele, K. A. Townsend, K. R. Sung, J. S. Duker, J. G. Fujimoto, and J. S. Schuman, "Three-Dimensional Optical Coherence Tomography (3D-OCT) Image Enhancement with Segmentation-Free Contour Modeling C-Mode," Invest. Ophthalmol. Visual Sci. 50(3), 1344–1349 (2009). R. Kafieh, H. Rabbani, M. D. Abramoff, and M. Sonka, "Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map," Med. Image Anal. 17(8), 907–928 (2013). K. Lee, H. Zhang, A. Wahle, M. 
D. Abràmoff, and M. Sonka, "Multi-layer 3D Simultaneous Retinal OCT Layer Segmentation: Just-Enough Interaction for Routine Clinical Use," in VipIMAGE 2017, (Springer International Publishing, 2018), 862–871. Y. Sun, T. Zhang, Y. Zhao, and Y. He, "3D Automatic Segmentation Method for Retinal Optical Coherence Tomography Volume Data Using Boundary Surface Enhancement," arXiv:1508.00966 [cs.CV] (2015). C. Wang, Y. Wang, D. Kaba, H. Zhu, Y. Lv, Z. Wang, X. Liu, and Y. Li, "Segmentation of Intra-retinal Layers in 3D Optic Nerve Head Images," in Image and Graphics, (Springer International Publishing, 2015), 321–332. D. Alonso-Caneiro, S. A. Read, and M. J. Collins, "Automatic segmentation of choroidal thickness in optical coherence tomography," Biomed. Opt. Express 4(12), 2795–2812 (2013). R. A. Alshareef, S. Dumpala, S. Rapole, M. Januwada, A. Goud, H. K. Peguda, and J. Chhablani, "Prevalence and Distribution of Segmentation Errors in Macular Ganglion Cell Analysis of Healthy Eyes Using Cirrus HD-OCT," PLoS One 11(5), e0155319 (2016). J. Chhablani, T. Krishnan, V. Sethi, and I. Kozak, "Artifacts in optical coherence tomography," Saudi. J. Ophthalmol. 28(2), 81–87 (2014). S. K. Devalla, K. S. Chin, J.-M. Mari, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiéry, and M. J. A. Girard, "A Deep Learning Approach to Digitally Stain Optical Coherence Tomography Images of the Optic Nerve Head," Invest. Ophthalmol. Visual Sci. 59(1), 63–74 (2018). S. K. Devalla, P. K. Renukanand, B. K. Sreedhar, G. Subramanian, L. Zhang, S. Perera, J.-M. Mari, K. S. Chin, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiéry, and M. J. A. Girard, "DRUNET: a dilated-residual U-Net deep learning network to segment optic nerve head tissues in optical coherence tomography images," Biomed. Opt. Express 9(7), 3244–3265 (2018). L. Fang, D. Cunefare, C. Wang, R. H. Guymer, S. Li, and S. 
Farsiu, "Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search," Biomed. Opt. Express 8(5), 2732–2744 (2017). D. Lu, M. Heisler, S. Lee, G. W. Ding, E. Navajas, M. V. Sarunic, and M. F. Beg, "Deep-learning based multiclass retinal fluid segmentation and detection in optical coherence tomography images using a fully convolutional neural network," Med. Image Anal. 54, 100–110 (2019). A. G. Roy, S. Conjeti, S. P. K. Karri, D. Sheet, A. Katouzian, C. Wachinger, and N. Navab, "ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks," Biomed. Opt. Express 8(8), 3627–3642 (2017). X. Sui, Y. Zheng, B. Wei, H. Bi, J. Wu, X. Pan, Y. Yin, and S. Zhang, "Choroid segmentation from Optical Coherence Tomography with graph-edge weights learned from deep convolutional neural networks," Neurocomputing 237, 332–341 (2017). T. H. Pham, S. K. Devalla, A. Ang, S. Zhi Da, A. H. Thiery, C. Boote, C.-Y. Cheng, V. Koh, and M. J. A. Girard, "Deep Learning Algorithms to Isolate and Quantify the Structures of the Anterior Segment in Optical Coherence Tomography Images," arXiv:1909.00331 [eess.IV] (2019). F. G. Venhuizen, B. van Ginneken, B. Liefers, M. J. J. P. van Grinsven, S. Fauser, C. Hoyng, T. Theelen, and C. I. Sánchez, "Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks," Biomed. Opt. Express 8(7), 3292–3316 (2017). T. C. Chen, A. Hoguet, A. K. Junk, K. Nouri-Mahdavi, S. Radhakrishnan, H. L. Takusagawa, and P. P. Chen, "Spectral-Domain OCT: Helping the Clinician Diagnose Glaucoma: A Report by the American Academy of Ophthalmology," Ophthalmology 125(11), 1817–1827 (2018). D. Romo-Bucheli, P. Seeböck, J. I. Orlando, B. S. Gerendas, S. M. Waldstein, U. Schmidt-Erfurth, and H. 
Bogunović, "Reducing image variability across OCT devices with unsupervised unpaired learning for improved segmentation of retina," Biomed. Opt. Express 11(1), 346–363 (2020). M. J. Girard, N. G. Strouthidis, C. R. Ethier, and J. M. Mari, "Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head," Invest. Ophthalmol. Visual Sci. 52(10), 7738–7748 (2011). W. Wu, O. Tan, R. R. Pappuru, H. Duan, and D. Huang, "Assessment of frame-averaging algorithms in OCT image analysis," Ophthalmic Surg. Lasers Imaging 44(2), 168–175 (2013). S. M. P. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, "Adaptive histogram equalization and its variations," Comput. Gr. Image Process 39(3), 355–368 (1987). B. S. Min, D. K. Lim, S. J. Kim, and J. H. Lee, "A Novel Method of Determining Parameters of CLAHE Based on Image Entropy," IJSEIA 7(5), 113–120 (2013). S. K. Devalla, G. Subramanian, T. H. Pham, X. Wang, S. Perera, T. A. Tun, T. Aung, L. Schmetterer, A. H. Thiéry, and M. J. A. Girard, "A Deep Learning Approach to Denoise Optical Coherence Tomography Images of the Optic Nerve Head," Sci. Rep. 9(1), 14454 (2019). O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, (Springer International Publishing, 2015), 234–241. E. V. Michal Drozdzal, Gabriel Chartrand, Samuel Kadoury, and Chris Pal, "The Importance of Skip Connections in Biomedical Image Segmentation," arXiv:1608.04117 [cs.CV] (2016). X. Z. Kaiming He, Shaoqing Ren, and Jian Sun, "Deep Residual Learning for Image Recognition," arXiv:1512.03385 [cs.CV] (2015). F. Yu and V. Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions," arXiv:1511.07122 [cs.CV] (2015). Y. Liu, M. M. Cheng, X. Hu, K. Wang, and X. 
Bai, "Richer Convolutional Features for Edge Detection," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017), 5872–5881. C. Nwankpa, W. Ijomah, A. Gachagan, and S. Marshall, "Activation Functions: Comparison of trends in Practice and Research for Deep Learning," arXiv:1811.03378 [cs.LG] (2018). Justin Johnson, Alexandre Alahi, and L. Fei-Fei, "Perceptual Losses for Real-Time Style Transfer and Super-Resolution," arXiv:1603.08155 [cs.CV] (2016). K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," in 3rd International Conference on Learning Representations (ICLR 2015), (San Diego, CA, USA, 2015). H. Cheong, S. K. Devalla, T. H. Pham, Z. Liang, T. A. Tun, X. Wang, S. Perera, L. SchmeŠerer, A. Tin, C. Boote, A. H. Thiery, and M. J. A. Girard, "DeshadowGAN: A Deep Learning Approach to Remove Shadows from Optical Coherence Tomography Images," arXiv:1910.02844v1 [eess.IV] (2019). Karim Armanious, Chenming Jiang, Marc Fischer, Thomas Küstner, Konstantin Nikolaou, Sergios Gatidis, and B. Yang, "MedGAN: Medical Image Translation using GANs," arXiv:1806.06397 [cs.CV] (2018). Haofu Liao, Wei-An Lin, Jianbo Yuan, S. Kevin Zhou, and J. Luo, "Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction," arXiv:1906.01806 [eess.IV] (2019). J. Deng, W. Dong, R. Socher, L. Li, L. Kai, and F.-F. Li, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009), 248–255. J. B. Diederik and P. Kingma, "Adam: A Method for Stochastic Optimization," arXiv:1412.6980 [cs.LG] (2014). W. Zhou and A. C. Bovik, "A universal image quality index," IEEE Signal Process. Lett. 9(3), 81–84 (2002). W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. on Image Process. 13(4), 600–612 (2004). J. M. Mari, N. G. Strouthidis, S. C. Park, and M. J. A. 
Girard, "Enhancement of Lamina Cribrosa Visibility in Optical Coherence Tomography Images Using Adaptive Compensation," Invest. Ophthalmol. Visual Sci. 54(3), 2238–2247 (2013). F. Milletari, N. Navab, and S.-A. Ahmadi, V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation (2016), pp. 565–571. C. M. Deniz, S. Xiang, R. S. Hallyburton, A. Welbeck, J. S. Babb, S. Honig, K. Cho, and G. Chang, "Segmentation of the Proximal Femur from MR Images using Deep Convolutional Neural Networks," Sci. Rep. 8(1), 16485 (2018). Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, "3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation," in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, (Springer International Publishing, 2016), 424–432. H. R. Roth, H. Oda, X. Zhou, N. Shimizu, Y. Yang, Y. Hayashi, M. Oda, M. Fujiwara, K. Misawa, and K. Mori, "An application of cascaded 3D fully convolutional networks for medical image segmentation," Comput. Med. Imag. Grap. 66, 90–99 (2018). Y. Huang, Q. Dou, Z.-X. Wang, L.-Z. Liu, Y. Jin, L. Chaofeng, L. Wang, H. Chen, and R.-H. Xu, 3D RoI-aware U-Net for Accurate and Efficient Colorectal Tumor Segmentation (2019). D. Müller and F. Kramer, "MIScnn: A Framework for Medical Image Segmentation with Convolutional Neural Networks and Deep Learning," arXiv:1910.09308 [eess.IV] (2019). H. Roth, L. Lu, A. Farag, A. Sohn, and R. Summers, Spatial Aggregation of Holistically-Nested Networks for Automated Pancreas Segmentation (2016). Q. Dou, L. Yu, H. Chen, Y. Jin, X. Yang, J. Qin, and P. A. Heng, "3D deeply supervised network for automated segmentation of volumetric medical images," Med. Image Anal. 41, 40–54 (2017). A. Abbasi, A. Monadjemi, L. Fang, H. Rabbani, and Y. Zhang, "Three-dimensional optical coherence tomography image denoising through multi-input fully-convolutional networks," Comput. Biol. Med. 108, 1–8 (2019). S. Feng, W. Zhu, H. Zhao, F. Shi, D. 
Xiang, and X. Chen, VinceptionC3D: a 3D convolutional neural network for retinal OCT image classification, SPIE Medical Imaging (SPIE, 2019), Vol. 10949. N. Eladawi, M. Elmogy, M. Ghazal, L. Fraiwan, A. Aboelfetouh, A. Riad, H. Sandhu, R. Keynton, and A. El-Baz, "Early Signs Detection of Diabetic Retinopathy Using Optical Coherence Tomography Angiography Scans Based on 3D Multi-Path Convolutional Neural Network," in 2019 IEEE International Conference on Image Processing (ICIP), 2019), 1390–1394. M.-X. Li, S.-Q. Yu, W. Zhang, H. Zhou, X. Xu, T.-W. Qian, and Y.-J. Wan, "Segmentation of retinal fluid based on deep learning: application of three-dimensional fully convolutional neural networks in optical coherence tomography images," Int. J. Ophthalmol 12, 1012–1020 (2019). E. Noury, S. Sudhakaran, R. Chang, A. Ran, C. Cheung, S. Thapa, H. Rao, S. Dasari, M. Riyazuddin, S. Nagaraj, and R. Zadeh, Detecting Glaucoma Using 3D Convolutional Neural Network of Raw SD-OCT Optic Nerve Scans (2019). S. Maetschke, B. Antony, H. Ishikawa, G. Wollstein, J. Schuman, and R. Garnavi, "A feature agnostic approach for glaucoma detection in OCT volumes," PLoS One 14(7), e0219126 (2019). A. Benou, R. Veksler, A. Friedman, and T. Riklin Raviv, "Ensemble of expert deep neural networks for spatio-temporal denoising of contrast-enhanced MRI sequences," Med. Image Anal. 42, 145–159 (2017). J. De Fauw, J. R. Ledsam, B. Romera-Paredes, S. Nikolov, N. Tomasev, S. Blackwell, H. Askham, X. Glorot, B. O'Donoghue, D. Visentin, G. van den Driessche, B. Lakshminarayanan, C. Meyer, F. Mackinder, S. Bouton, K. Ayoub, R. Chopra, D. King, A. Karthikesalingam, C. O. Hughes, R. Raine, J. Hughes, D. A. Sim, C. Egan, A. Tufail, H. Montgomery, D. Hassabis, G. Rees, T. Back, P. T. Khaw, M. Suleyman, J. Cornebise, P. A. Keane, and O. Ronneberger, "Clinically applicable deep learning for diagnosis and referral in retinal disease," Nat. Med. 24(9), 1342–1350 (2018). N. Georgiev and A. 
Asenov, "Automatic Segmentation of Lumbar Spine MRI Using Ensemble of 2D Algorithms," in Computational Methods and Clinical Applications for Spine Imaging, (Springer International Publishing, 2019), 154–162. K. Kamnitsas, W. Bai, E. Ferrante, S. McDonagh, M. Sinclair, N. Pawlowski, M. Rajchl, M. Lee, B. Kainz, D. Rueckert, and B. Glocker, "Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation," in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, (Springer International Publishing, 2018), 450–462. X. Liu, L. Faes, A. U. Kale, S. K. Wagner, D. J. Fu, A. Bruynseels, T. Mahendiran, G. Moraes, M. Shamdas, C. Kern, J. R. Ledsam, M. K. Schmid, K. Balaskas, E. J. Topol, L. M. Bachmann, P. A. Keane, and A. K. Denniston, "A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis," Lancet Glob Health 1(6), e271–e297 (2019). Q. Lyu, H. Shan, and G. Wang, MRI Super-Resolution with Ensemble Learning and Complementary Priors (2019). L. Rokach, "Ensemble-based classifiers," Artif. Intell. Rev. 33(1-2), 1–39 (2010). T. Zhou, S. Ruan, and S. Canu, "A review: Deep learning for medical image segmentation using multi-modality fusion," Array 3-4, 100004 (2019). F. Li, H. Chen, Z. Liu, X.-d. Zhang, M.-s. Jiang, Z.-z. Wu, and K.-q. Zhou, "Deep learning-based automated detection of retinal diseases using optical coherence tomography images," Biomed. Opt. Express 10(12), 6204–6226 (2019). N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," J. Mach. Learn. Res. 15, 1929–1958 (2014). S. Vaswani, F. Bach, and M. Schmidt, "Fast and Faster Convergence of SGD for Over-Parameterized Models and an Accelerated Perceptron," arXiv:1810.07288 [cs.LG] (2018). J. Fujimoto and E. 
Swanson, "The Development, Commercialization, and Impact of Optical Coherence Tomography," Invest. Ophthalmol. Visual Sci. 57(9), OCT1–OCT13 (2016). A. Yasin Alibhai, C. Or, and A. J. Witkin, "Swept Source Optical Coherence Tomography: a Review," Curr. Ophthalmol. Rep. 6(1), 7–16 (2018). J. F. de Boer, C. K. Hitzenberger, and Y. Yasuno, "Polarization sensitive optical coherence tomography - a review [Invited]," Biomed. Opt. Express 8(3), 1838–1873 (2017). M. Pircher and R. J. Zawadzki, "Review of adaptive optics OCT (AO-OCT): principles and applications for retinal imaging [Invited]," Biomed. Opt. Express 8(5), 2536–2562 (2017). K. Weiss, T. M. Khoshgoftaar, and D. Wang, "A survey of transfer learning," J. Big. Data 3(1), 9 (2016). J. Chang, J. Yu, T. Han, H. Chang, and E. Park, "A method for classifying medical images using transfer learning: A pilot study on histopathology of breast cancer," in 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), 2017), 1–4. M. Maqsood, F. Nazir, U. Khan, F. Aadil, H. Jamal, I. Mehmood, and O.-Y. Song, "Transfer Learning Assisted Classification and Detection of Alzheimer's Disease Stages Using 3D MRI Scans," Sensors 19(11), 2645 (2019). A. Hosny, C. Parmar, T. P. Coroller, P. Grossmann, R. Zeleznik, A. Kumar, J. Bussink, R. J. Gillies, R. H. Mak, and H. J. W. L. Aerts, "Deep learning for lung cancer prognostication: A retrospective multi-cohort radiomics study," PLoS Med. 15(11), e1002711 (2018). M. Raghu, C. Zhang, J. Kleinberg, and S. Bengio, "Transfusion: Understanding Transfer Learning for Medical Imaging," arXiv:1902.07208 [cs.CV] (2019). S. H. Lee, T. W. Kim, E. J. Lee, M. J. Girard, and J. M. Mari, "Diagnostic Power of Lamina Cribrosa Depth and Curvature in Glaucoma," Invest. Ophthalmol. Visual Sci. 58(2), 755–762 (2017). G. Bortsova, F. Dubost, L. Hogeweg, I. Katramados, and M. D. 
Bruijne, "Semi-Supervised Medical Image Segmentation via Learning Consistency under Transformations," arXiv:1911.01218 [cs.CV] (2019). J. M. Heltzer, "Coexisting glaucoma and cataract," Ophthalmology 111(2), 408–409 (2004). J. B. Jonas, "Clinical implications of peripapillary atrophy in glaucoma," Curr. Opin. Ophthalmol. 16(2), 84–88 (2005). L. Xu, Y. Wang, S. Wang, Y. Wang, and J. B. Jonas, "High Myopia and Glaucoma Susceptibility: The Beijing Eye Study," Ophthalmology 114(2), 216–220 (2007). Aadil, F. Abbasi, A. Abdulkadir, A. Aboelfetouh, A. Abramoff, M. D. Abràmoff, M. D. Aerts, H. J. W. L. Ahmadi, S.-A. Alahi, Alexandre Al-Diri, B. Almobarak, F. A. Alonso-Caneiro, D. Alshareef, R. A. Amburn, E. P. Ang, A. Antony, B. Antony, B. J. Armanious, Karim Asenov, A. Askham, H. Aung, T. Austin, J. D. Ayoub, K. Babb, J. S. Bach, F. Bachmann, L. M. Back, T. Bai, W. Bai, X. Balaskas, K. Baskaran, M. Beg, M. F. Bengio, S. Benou, A. Bi, H. Blackwell, S. Bogunovic, H. Boote, C. Bortsova, G. Bouton, S. Bovik, A. C. Bowd, C. Brox, T. Bruijne, M. D. Brumm, J. Bruynseels, A. Bussink, J. Canu, S. Chang, G. Chang, H. Chang, J. Chang, R. Chaofeng, L. Chartrand, Gabriel Chauhan, B. C. Chen, H. Chen, P. P. Chen, Q. Chen, T. C. Chen, X. Cheng, C.-Y. Cheng, M. M. Cheong, H. Cheong, K. X. Cheung, C. Chhablani, J. Chin, K. S. Cho, K. Chopra, R. Çiçek, Ö. Collins, M. J. Conjeti, S. Cornebise, J. Coroller, T. P. Cromartie, R. Cunefare, D. Dasari, S. de Boer, J. F. De Fauw, J. de Sisternes, L. Demirel, S. Deng, J. Deniz, C. M. Denniston, A. K. Devalla, S. K. Diederik, J. B. Ding, G. W. Dong, W. Dou, Q. Duan, H. Dubost, F. Duker, J. S. Dumpala, S. Egan, C. Eladawi, N. El-Baz, A. Elmogy, M. Ethier, C. R. Faes, L. Fang, L. Farag, A. Farsiu, S. Fauser, S. Fei-Fei, L. Feng, S. Ferrante, E. Fischer, Marc Fischer, P. Fortune, B. A. Fraiwan, L. Friberg, T. R. Friedman, A. Fu, D. J. Fujimoto, J. Fujimoto, J. G. Fujiwara, M. Furlanetto, R. L. Gabriele, M. L. Gachagan, A. Gardiner, S. K. Garnavi, R. 
Garvin, M. K. Gatidis, Sergios Georgiev, N. Gerendas, B. S. Geselowitz, A. Ghazal, M. Gillies, R. J. Girard, M. J. Girard, M. J. A. Girkin, C. A. Glocker, B. Glorot, X. Gmeiner, J. M. D. Goud, A. Greer, T. Grossmann, P. Guymer, R. H. Hallyburton, R. S. Halupka, K. J. Hammel, N. Han, T. Hassabis, D. Hayashi, Y. He, F. He, Y. Heisler, M. Heltzer, J. M. Heng, P. A. Hinton, G. E. Hitzenberger, C. K. Hogeweg, L. Hoguet, A. Honig, S. Hornegger, J. Hosny, A. Hoyng, C. Hu, X. Hu, Z. Huang, D. Huang, S. Huang, Y. Hughes, C. O. Hughes, J. Hunter, A. Hutchison, D. M. Ijomah, W. Ishikawa, H. Jain, S. Jamal, H. Januwada, M. Jiang, Chenming Jiang, M.-s. Jin, Y. Johnson, Justin Jonas, J. B. Junk, A. K. Kaba, D. Kadoury, Samuel Kafieh, R. Kagemann, L. Kai, L. Kaiming He, X. Z. Kainz, B. Kale, A. U. Kamnitsas, K. Karri, S. P. K. Karthikesalingam, A. Katouzian, A. Katramados, I. Keane, P. A. Kern, C. Kevin Zhou, S. Keynton, R. Khachatryan, N. Khan, U. Khaw, P. T. Khoshgoftaar, T. M. Kim, J. Kim, S. J. Kim, T. W. King, D. Kingma, P. Kleinberg, J. Koh, V. Koltun, V. Kozak, I. Kramer, F. Krishnan, T. Krizhevsky, A. Kruse, F. E. Kumar, A. Küstner, Thomas Kwon, Y. H. Laemmer, R. Lakshminarayanan, B. Ledsam, J. R. Lee, E. J. Lee, J. H. Lee, K. Lee, M. Lee, M. H. Lee, S. Lee, S. H. Li, F. Li, F.-F. Li, K. Z. Li, L. Li, M.-X. Li, S. Li, Y. Liang, Z. Liao, Haofu Liebmann, J. M. Liefers, B. Lienkamp, S. S. Lim, D. K. Lim, L. W. Lin, Wei-An Lin, Z. Liu, L.-Z. Liu, Q. Liu, X. Liu, Y. Liu, Z. Lu, D. Lu, L. Lucy, K. A. Luo, J. Lv, Y. Lyu, Q. Mackinder, F. Maetschke, S. Mahendiran, T. Mak, R. H. Mansberger, S. L. Maqsood, M. Mardin, C. Y. Mari, J. M. Mari, J.-M. Marshall, S. Marziliano, P. Mayer, M. A. McDonagh, S. Medeiros, F. A. Mehmood, I. Menda, S. A. Meyer, C. Michal Drozdzal, E. V. Miki, A. Milletari, F. Min, B. S. Misawa, K. Monadjemi, A. Montgomery, H. Moraes, G. Mori, K. Müller, D. Mullins, R. F. Nagaraj, S. Navab, N. Navajas, E. Nazir, F. Netto, C. Nicolela, M. T. Niemeijer, M. 
Nikolaou, Konstantin Nikolov, S. Niu, S. Nouri-Mahdavi, K. Noury, E. Nwankpa, C. O'Donoghue, B. O'Leary, N. Oda, H. Oda, M. Or, C. Orlando, J. I. Pal, Chris Pan, X. Pappuru, R. R. Park, E. Park, S. C. Parmar, C. Pawlowski, N. Peguda, H. K. Perera, S. Pham, T. H. Pircher, M. Pizer, S. M. P. Qian, T.-W. Qin, J. Rabbani, H. Radhakrishnan, S. Raghu, M. Rai, R. S. Raine, R. Rajchl, M. Ran, A. Rao, H. Rapole, S. Read, S. A. Rees, G. Reis, A. S. C. Ren, Shaoqing Renukanand, P. K. Riad, A. Riklin Raviv, T. Ritch, R. Riyazuddin, M. Rokach, L. Romera-Paredes, B. Romo-Bucheli, D. Ronneberger, O. Roth, H. Roth, H. R. Roy, A. G. Ruan, S. Rubin, D. L. Rueckert, D. Salakhutdinov, R. Sample, P. A. Sánchez, C. I. Sandhu, H. Sarunic, M. V. SchmeŠerer, L. Schmetterer, L. Schmid, M. K. Schmidt, M. Schmidt-Erfurth, U. Schrems, W. A. Schrems-Hoesl, L. M. Schuman, J. Schuman, J. S. Seeböck, P. Sethi, V. Shamdas, M. Shan, H. Sharpe, G. P. Sharpsten, L. Sheet, D. Sheikh, H. R. Shi, F. Shimizu, N. Sim, D. A. Simoncelli, E. P. Simonyan, K. Sinclair, M. Socher, R. Sohn, A. Song, O.-Y. Sonka, M. Sreedhar, B. K. Srivastava, N. Steel, D. Strouthidis, N. G. Subramanian, G. Sudhakaran, S. Sui, X. Suleyman, M. Summers, R. Sun, Jian Sung, K. R. Sutskever, I. Swanson, E. Takusagawa, H. L. Tan, C. S. Tan, O. Tello, C. ter Haar Romeny, B. Thapa, S. Theelen, T. Thiery, A. H. Thiéry, A. H. Tian, J. Tin, A. Tomasev, N. Topol, E. J. Tornow, R. P. Townsend, K. A. Tufail, A. Tun, T. A. van den Driessche, G. van Ginneken, B. van Grinsven, M. J. J. P. Vaswani, S. Veksler, R. Venhuizen, F. G. Visentin, D. Wachinger, C. Wagner, S. K. Wahle, A. Waldstein, S. M. Wan, Y.-J. Wang, C. Wang, D. Wang, G. Wang, K. Wang, S. Wang, X. Wang, Y. Wang, Z. Wang, Z.-X. Wei, B. Weinreb, R. N. Weiss, K. Welbeck, A. Williams, J. M. Witkin, A. J. Wollstein, G. Wu, J. Wu, W. Wu, Z.-z. Xiang, D. Xiang, S. Xie, B. Xu, L. Xu, R.-H. Xu, X. Yang, B. Yang, X. Yang, Y. Yasin Alibhai, A. Yasuno, Y. Yin, Y. Yu, F. Yu, J. Yu, L. Yu, S.-Q. 
Yuan, Jianbo Zadeh, R. Zangwill, L. M. Zawadzki, R. J. Zeleznik, R. Zhang, C. Zhang, H. Zhang, L. Zhang, S. Zhang, T. Zhang, W. Zhang, X.-d. Zhang, Y. Zhao, H. Zhao, Y. Zheng, Y. Zhi Da, S. Zhong, Y. Zhou, H. Zhou, K.-q. Zhou, T. Zhou, W. Zhou, X. Zhu, H. Zhu, W. Zimmerman, J. B. Zisserman, A. Zuiderveld, K. Am. J. Ophthalmol. (1) Arch. Ophthalmol. (1) Artif. Intell. Rev. (1) Biomed. Opt. Express (12) Comput. Biol. Med. (2) Comput. Gr. Image Process (1) Comput. Med. Imag. Grap. (1) Curr. Ophthalmol. Rep. (1) Curr. Opin. Ophthalmol. (1) IEEE Signal Process. Lett. (1) IEEE Trans. Med. Imaging (1) IEEE Trans. on Image Process. (1) IJSEIA (1) Int. J. Ophthalmol (1) Invest. Ophthalmol. Visual Sci. (11) J Ophthalmol (1) J. Big. Data (1) J. Mach. Learn. Res. (1) Lancet Glob Health (1) Med. Image Anal. (4) Nat. Med. (1) Neurocomputing (1) Ophthalmic Surg. Lasers Imaging (1) PLoS Med. (1) Saudi. J. Ophthalmol. (1) Sci. Rep. (2) Trans Am Ophthalmol Soc (1) » Supplement 1 Supplement 1 (1) M i , j = ∑ k = i N I k , j n (2) I i , j S C = I i , j n 2 M i , j (3) L R M S E ( I D L E n h a n c e d , I D i g i t a l l y E n h a n c e d ) = 1 H W ∑ h = 1 H ∑ w = 1 W ( I ( h , w ) D L E n h a n c e d − I ( h , w ) D i g i t a l l y E n h a n c e d ) 2 (4) L P e r c e p t u a l ( I D L E n h a n c e d , I D i g i t a l l y E n h a n c e d ) = ∑ i = 2 , 4 , 6 , 10 , 14 1 C i H i W i | | P i ( I D L E n h a n c e d ) − P i ( I D i g i t a l l y E n h a n c e d ) | | 2 5 (5) L T o t a l = L R M S E + 0.01 × L P e r c e p t u a l (6) U I Q I ( x , y ) = L C × D L × D C (7) L C = σ x y σ x y σ x y ; D L = 2 μ x μ y μ x 2 + μ y 2 ; D C = 2 σ x σ y σ x 2 + σ y 2 (8) S S I M ( x , y ) = ( 2 μ x μ y + C 1 ) ( 2 σ x y + C 2 ) ( μ x 2 + μ y 2 + C 1 ) ( σ x 2 + σ y 2 + C 2 ) (9) D C = 2 × | D ∩ M | | D | + | M | (10) S p = | D ¯ ∩ M ¯ | | M | (11) S n = | D ∩ M | | M | Patient Populations and Scanning Specifications No of subjects Scanning Specifications Spectralis Singapore National Eye Center 
57 11 97 horizontal B-scans (32μm distance between B-scans, 384 A-scans per B-scan); covering an area of 15 ° x 10 ° centered on the ONH; 20x signal averaging. Aravind Eye Hospital 18 64 Cirrus Rajan Eye Care Hospital 75 75 200 horizontal B-scans (30 μm,200 A-scans per B-scans); covering an area of 6mm x 6mm centered on the ONH. RTVue East Avenue Medical Center 75 75 101 horizontal B-scans (40 μm distance between B-scans; 101 A-scans per B-scan); covering an area of 20 ° x 20 ° centered on the ONH.
CommonCrawl
VII Training Course in the Physics of Correlated Electron Systems and High-Tc Superconductors
Vietri sul Mare (Salerno), Italy

Participant Seminar Abstracts

Dr. Luigi Amico, Dipartimento di Metodologie Fisiche e Chimiche per l'Ingegneria, Università di Catania
Scaling of entanglement close to quantum phase transitions
Abstract: We discuss entanglement near a quantum phase transition by analyzing the properties of the concurrence for a class of exactly solvable models in one dimension. We find that entanglement can be classified in the framework of scaling theory. Further, we reveal a profound difference between classical correlations and the non-local quantum correlation, entanglement: the correlation length diverges at the phase transition, whereas entanglement in general remains short-ranged. [Nature 416, 608 (2002)]

Prof. Victor Yarzhemsky, Institute of General and Inorganic Chemistry of RAS
Space-group approach to the wavefunction of a Cooper pair and its applications to high-temperature superconductors
Abstract: The standard theory of space groups, based on the induced-representation method, is applied to construct Cooper-pair wavefunctions as zero-total-momentum states obeying the Pauli exclusion principle. It is shown that in many cases the results are similar to those of the point-group approach; the differences between the space-group and point-group approaches are discussed. The method is applied to UPt3, for which E2u symmetry of the superconducting order parameter is obtained, and to perovskite systems such as the high-temperature superconductors and Sr2RuO4. The influence of different types of time-reversal symmetry violation on the structure of the superconducting order parameter is discussed (see papers 1, 2 and 5).

Dr.
Balazs Dora, Department of Physics, Budapest University of Technology and Economics
Unconventional density waves in quasi-one-dimensional systems
Abstract: We consider the possibility of the formation of unconventional charge and spin density waves (UCDW, USDW) in quasi-one-dimensional electronic systems. In analogy with unconventional superconductivity, we develop a mean-field theory of UDW allowing for a momentum-dependent gap on the Fermi surface. Conditions for the appearance of such a low-temperature phase are investigated. The thermodynamic properties are found to be very similar to those of d-wave superconductors. The linear (optical conductivity) and nonlinear (threshold electric field) response is calculated. These theoretical results convincingly describe the low-temperature phase of the $\alpha$-(BEDT-TTF)$_2$KHg(SCN)$_4$ salt.

Dr. Anna Posazhennikova, Katholieke Universiteit Leuven
On the toy model of the pseudogap
Abstract: The problem of pseudogap formation in an electronic system, induced by fluctuations of the order parameter, is revisited. We make the observation that a large class of current theories are theoretically equivalent to averaging the free energy of the pseudogap system over a quenched-disordered distribution of the order parameter. We examine the cases of both infinite and finite correlation length, showing how the interplay of pseudogap formation and superconductivity can be treated in this approach.

Mr. Marcin Raczkowski, Institute of Physics, Jagellonian University
Competition between Vertical and Diagonal Static Stripes in the HF approximation
Abstract: The charge localization and the tendency of doped holes towards self-organization into striped patterns is one of the most interesting topics in the physics of high-$T_c$ superconductors. A qualitative picture of stable static stripe phases can be given within the single-band Hubbard model using the Hartree-Fock approximation.
Here we investigate the properties and stability of the filled (one doped hole per stripe site) vertical stripes (VS) and diagonal stripes (DS) by varying the on-site Coulomb repulsion $U$ for two representative doping levels, $x=1/8$ and $x=1/6$, and reveal the microscopic reasons for the observed transition from VS to DS with increasing $U$. In the weak-coupling regime of $U=4t$, where $t$ is the hopping element, the stability of VS is best explained by the solitonic mechanism, which leads to a kinetic-energy gain due to hopping perpendicular to the stripes. In contrast, the stability of DS in the strong-coupling regime of $U=6t$ is less obvious. We show that the charge densities along DS ($m_i^z=0$) are lower than along VS, and that the nonequivalent atoms within antiferromagnetic domains in the case of DS have larger site magnetization densities and consequently lower probabilities of double occupancy. Hence, DS have a more favorable potential energy, which explains their stability in the large-$U$ regime.

Mr. Marcos Rigol Madrazo, Institut für Theoretische Physik III, Universität Stuttgart
Quantum Monte Carlo study of confined fermions in 1-D optical lattices
Abstract: Quantum Monte Carlo simulations are used to study the ground state of the one-dimensional fermionic Hubbard model in a harmonic trap. Local phases appear in the system, and a local order parameter is defined to characterize them. The establishment of the Mott phase does not proceed via the traditional quantum phase transition. Important implications for the experimental study of these systems are deduced.

Mr.
Adam Rycerz, Marian Smoluchowski Institute of Physics, Jagellonian University
On metal-insulator transition for a one-dimensional correlated nanoscopic chain
Abstract: We have applied our novel numerical scheme, combining Lanczos diagonalization in the Fock space with an ab initio renormalization of the single-particle (Wannier) functions, to study the ground-state properties of the Extended Hubbard Model. Through finite-size scaling we determine the discontinuity of the momentum distribution at the Fermi surface. Our results imply Fermi-liquid behavior for lattice parameter $a < 3a_0$ ($a_0$ is the Bohr radius) and a zero-temperature transition to a localized spin system for larger $a$. Future applications of the method are listed. The talk will be complemented by possible experimental verifications of the presented theoretical results, with respect to recently discussed limitations of ARPES experiments for one-dimensional systems.

Dr. Yasuhiro Saiga, Department of Physics, Tokyo Institute of Technology
Two-Dimensional t-J Model in a Staggered Field
Abstract: The two-dimensional t-J model in a staggered field is studied by numerically exact diagonalization of up to 20 sites. For the low-hole-density region and a realistic value of $J/t$, it is found that the presence of the staggered field strengthens the attraction between two holes. With increasing field, the $d_{x^2-y^2}$-wave superconducting correlations are enhanced while the extended-s-wave ones hardly change. This implies that coexistence of $d_{x^2-y^2}$-wave superconducting order and commensurate antiferromagnetic order occurs in a staggered field.
Service innovation management practices in the telecommunications industry: what does cross country analysis reveal?

Syed Abidur Rahman ORCID: orcid.org/0000-0002-7889-920X1, Seyedeh Khadijeh Taghizadeh1, T. Ramayah2 & Noor Hazlina Ahmad2

SpringerPlus volume 4, Article number: 810 (2015)

Service innovation management practice has so far been scrutinized mainly in the developed countries, where it originated. The current study attempts to propose a framework and to empirically validate and explain service innovation practices for successful performance in the telecommunications industry of two developing countries, Malaysia and Bangladesh. The research framework proposes relationships among organisational culture, the operating core (innovation process, cross-functional organisation, and implementation of tools/technology), competition-informed pricing, and performance. A total of 176 usable responses from both countries are analysed for the purpose of the research. The findings show that organisational culture tends to be more influential on the innovation process and cross-functional organisation in the Malaysian telecommunications industry. In contrast, implementation of tools/technology plays a more instrumental role in competition-informed pricing practices in Bangladesh. This study revealed a few differences in innovation management practices between the two developing countries. The findings have strategic implications for the service sectors in both developing countries regarding the implementation of innovative enterprises, especially in Bangladesh, where innovation is the basis for survival. Testing innovation management practices in developing countries is itself a distinctive contribution to the field of innovation management.

Innovation coupled with the performance of firms is a subject receiving significant attention within academia (Damanpour 2014) due to its rapid and dramatic impact on society and organisations across borders.
In order to achieve its ultimate goals, an organisation's managerial practices and activities can play a vital role. In this regard, a few rudimentary and imperative management practices are considered in this study to understand the extent to which such practices help organisations accomplish performance, specifically in a developing-country context. Scholars claim that countries and regions are endowed with diverse types of resources and infrastructures (Chen and Hsiao 2013) and rely on their own organisational culture in how they practice (Aycan et al. 2000). Earlier literature illustrated the influence of national and organisational culture on different managerial practices in organisations (Ardichvili et al. 2006) as well as on successful innovation (Lee et al. 2013; Büschgens et al. 2013). On the issue of culture, some scholars have asserted that national culture, along with other factors, influences the organisation and its culture (Tayeb 1994). More specifically, the literature suggests that organisational culture is an integral part of national culture (Iorgulescu and Marcu 2015). However, Hogan and Coote (2014) noted that despite much focused attention on the topic of organisational culture and innovation, the extant literature does not sufficiently document the organisational culture that enables innovation. To achieve successful innovation, scholars give importance to three operating cores of innovation as fundamental aspects of innovation management: innovation process, cross-functional organisation, and implementation of tools/technology, introduced by Hull et al. (1996). These practices facilitate service companies in managing their new service development process in the best way (Collins and Hull 2002), as this approach has proved to be faster, cheaper, and better for service development than serial alternatives (Liker et al. 1999).
As scholars have highlighted, the innovation process, cross-functional organisation, and implementation of tools/technology are increasingly necessary for survival under conditions of hyper-competition (Hull 2004). Further, the literature suggests that, in the process, a great deal of effort must be put into the implementation of new products/services (Orfila-Sintes et al. 2005). The innovation process comprises various activities, including effectiveness in market assessment, benchmarking, identifying customer needs, quality function deployment, and review of product design (Hull 2004). This guidance can create value for customers, who are the focus of innovation (De Jong and Vermeulen 2003). In addition, cross-functional teams are often seen as key for innovation projects (Blindenbach-Driessen 2015), carrying out every practice and process in a systematic and sustainable way (Weiss and Legrand 2011). It is a generally accepted notion that people are of central importance in a cross-functional organisation, as each has the capability to find and solve problems. A cross-functional organisation with high-performance teamwork can bring success to firms, while its absence can produce the reverse (Weiss and Legrand 2011). In the innovation literature, tools/technology mainly represents the usage of computer and information technology (CIT). Most service firms are knowledge-based and depend heavily on information technology (IT) (Hull and Tidd 2003b); hence, IT can facilitate the decision-making process in the development cycle within a shorter time (Hull 2004). In addition, CIT enables team members to share their experience in the service development cycle and to systematically compare their services with competitors' (Tidd and Bessant 2009). It allows management to evaluate and control all projects through stored day-to-day information, as well as to learn and conduct staff training by reviewing customer and user satisfaction, evaluating projects, and auditing (Mudrak et al. 2005).
Moreover, this study considers competition-informed pricing as an important practice for new service development. Competition-informed pricing refers to using the prices of competing products, rather than customer demand, as a benchmark. Competition-informed pricing assumes that the company's cost structure matches that underlying competitors' pricing (Shapiro and Jackson 1978). According to Hinterhuber (2004), when making pricing decisions the manager must take into consideration the competitive perspective, which requires being informed about competitors' pricing. Competition-informed pricing was chosen because the current study focuses on the telecommunications industry, where it plays a persuasive role. As a matter of fact, in the telecommunications industry the level of competition is more intense than in almost any other industry, irrespective of a country's economic and social state. The market structure of the telecommunications industry is considered an oligopoly. In an oligopoly market, there are only a few firms, which have considerable control over their prices, but each firm must consider the courses of action, activities, and reactions of its rivals (Noam 2006). Hence, an organisation cannot overlook the importance of today's hyper-competitive market in its innovation process, because researchers have noted that innovation has a synchronized relationship with competitors (Goto 2009). Finally, the study attempts to reveal the impact of such practices on innovation performance. Performance reflects the business initiatives and strategies taken by the firm. Previous researchers argued that innovation in an organisation directly and positively influences the improvement of business performance (Tidd et al. 2005).
Innovation, as a firm's unique resource, can lead to competitive advantage and improvement in performance, effectiveness, and efficiency (Barney 1991). If firms are highly focused on innovation, they will be more successful in offering new products and services, which subsequently results in greater performance (Eisingerich et al. 2009). However, over the past years many countries have faced difficulties in strengthening innovation performance (OECD 2007), which diverges with their capacity to innovate. To examine this, the study is framed within the telecommunications industry of two countries. Most importantly, the study intends to test in developing countries a framework that has partially been moulded and tested in developed countries. In the recent literature, scholars have called for modifying and testing in emerging economies the management theories and frameworks that were typically built in the northern part of the globe (Barrett et al. 2015). This is thus an opportunity to substantiate whether a framework initiated in developed countries explicates similar underlying causal effects across developing countries. We have chosen two Asian countries: one considered an innovation-driven country (Malaysia), and the other considered only a factor-driven country with insufficient capacity to innovate (Bangladesh) (World Economic Forum 2015). Bangladesh is a prominent member of the "Next Eleven" group, considered the most promising group of emerging economies in the globe, and the country plans to become a middle-income country by the year 2021 (Planning Commission 2012). On the other hand, Malaysia is one of the most promising developing countries, planning to enter the club of 'developed countries' by the year 2021 (Malaysian Investment Development Authority 2014).
However, to achieve such an economic shift by the year 2021, it is presumed that innovation and its practice in industry can be one of the driving forces. In both countries, the telecommunications industry plays a leading role in the development of the economy. The profiles of the two telecommunications industries indicate a close similarity in terms of operations and ownership. DiGi, a Malaysian telecommunications company, and GrameenPhone, a Bangladeshi telecommunications company, are both foreign subsidiaries of the Telenor group, Norway. DiGi holds the second position in terms of market share in Malaysia, and GrameenPhone holds the largest market share in Bangladesh. Similarly, Robi Axiata, a subsidiary of the Malaysian Celcom Axiata group, operates in Bangladesh with a significant market share, as its parent group does in Malaysia. Therefore, the current study attempts to propose a framework and to empirically validate and explain service innovation practices in these emerging countries, as researchers have noted the limited number of studies in this context (Taghizadeh et al. 2014). The results may serve policy makers as a guideline to enhance innovation performance through firm resources and capabilities. This paper is structured in seven sections. The second section provides an overview of the theoretical justification of the variables, helping the reader to understand the proposed research framework, as well as the formulation of hypotheses. The research methodology and the findings of the empirical analysis are discussed in section three. In section four, a discussion derived from the results is presented. The implications, conclusion, and limitations with future directions of the research are presented in sections five, six, and seven, respectively.

Theoretical background and hypothesis development

Today, changes are taking place everywhere, raising complexity in the environment; e.g.
changes in economic conditions lead to the opening of new markets while closing others (van Riel 2005). Such a domino effect subsequently increases the level of global competition and rivalry among companies (van Riel 2005). To overcome this complexity, management needs to have a balanced, comprehensive, and proactive approach (Ottenbacher 2007). Scholars believe that successful service innovation not only depends on how a firm manages projects, coordinates inputs of different functions, and links up with its customers, but also relies on being able to develop strategic approaches and look widely (Tidd et al. 2005). The literature on new service development reveals that the growth and performance of any organisation rely on efficient management of innovation in a competitive climate (Jiménez-Jiménez and Sanz-Valle 2011; Tidd and Bessant 2009). In the literature, a composite model was illustrated comprising three managerial practices: innovation process, cross-functional organisation, and implementation of tools/technology, introduced by Hull et al. (1996). These practices facilitate service companies in managing their new service development process in the best way (Collins and Hull 2002), as this approach has proved to be faster, cheaper, and better for service development than serial alternatives (Liker et al. 1999). Innovation process, cross-functional organisation, and implementation of tools/technology are known as the operating core and are increasingly necessary for survival under conditions of hyper-competition (Hull 2004). This operating core includes both marketing and developmental operations, in contrast to literature dealing with the market on the one hand and organisation behaviour on the other (Hull 2004). The innovation process represents a disciplined practice to control the procedure from idea generation to launch (Hull and Tidd 2003a). According to Hull and Tidd (2003a) and Liker et al.
(1999), the innovation process denotes the mechanistic form of an organisation, where rules and regulations are structured and maintained accordingly. Hull and Tidd (2003a) pointed out that in the setting of an innovative process, organisations tend to be effective, efficient, and characterized by standardized procedures. A clear division of labour and an authoritarian chain of command prevail when companies embrace the innovative process for service innovation management (Liker et al. 1999). Cross-functional organisation involves the coordination of people at all stages of innovation practice (Tidd et al. 2005). Liker et al. (1999) asserted that an innovative organisation is characterised by an organic setting that tends to be flexible, with few rules and standard procedures. Teamwork and a creative combination of various views, perspectives, and disciplines characterize innovative organisational practices (Tidd and Bessant 2009). Co-involvement of operations people, who develop the services and deliver systems support behind the scenes, is necessary for firms' success (Magnusson et al. 2003). Tools/technology denotes enabling computer information technologies (CIT) in supporting communication (Hull et al. 1996). According to Collins and Hull (2002), organisational transformation and transaction capabilities are enhanced by the adoption of CIT tools, such as communication devices and data distribution approaches. As the complexity of the business environment has increased, organisations are required to create a collaborative and creative working place through the implementation of CIT tools (Klein and Dologite 2000). According to scholars, the proliferation of information and technology has created a revolution in the current trend, with a wider economic perspective across national borders (Erumban and De Jong 2006).
However, Hull and Tidd (2003a) found that training and championing, as parts of organisational culture, influence the shaping of the innovation process, cross-functional organisation, and implementation of tools/technology in service-oriented companies. Hence, we propose that organisational culture can play a stimulating role in practicing the operating core. Scholars have noted that organisational culture plays an influential role in the management practices of firms (Zammuto and O'Connor 1992). Organisational culture is a complex set of values, beliefs, assumptions, and symbols that a firm should institute in its business operation (Miron et al. 2004; Chang and Lin 2007; Barney 1986; Martins and Terblanche 2003). According to Naranjo-Valencia et al. (2011), to facilitate the successful implementation of innovation, organisations should meet the requirements of internal behaviour and external relations that comply with the organisational culture. In fact, organisational culture is a source of new ideas within the organisation (Uzkurt et al. 2013). As suggested by Chang and Lin (2007), this paper conceptualizes organisational culture by considering four cultural traits (i.e. cooperativeness, innovativeness, consistency, and effectiveness) within a single domain. Cooperativeness focuses primarily on mutual cooperation, as in an extended family, representing strong teamwork and mutual trust. Innovativeness can be characterized by a focus on creativity, adaptability, and dynamism, which allows employees scope for self-development. Consistency emphasizes maintaining order, rules and regulations, uniformity, and efficiency throughout the organisational structure. The cultural trait of effectiveness indicates the competitiveness, goal achievement, and efficiency of organisational activities.
Therefore, this paper proposes that organisational culture may have an effect on the operating core, and thus the following hypotheses are worthy of testing:

H1. Organisational culture facilitates the practice of continuous process improvement in service development.

H2. Organisational culture enables the practice of cross-functional organisation to a great level in service development.

H3. Organisational culture accelerates the implementation of information technology tools in service development.

In an oligopoly market, high barriers to entry for new competitors exist to a great extent. Such barriers impede new entrants from competing in the market due to the high start-up capital cost (McConnell et al. 2009). To achieve the desired performance in an oligopoly market, each firm must consider the courses of action, activities, and reactions of its rivals (Noam 2006). Thus, competition-informed pricing, i.e. setting prices using information gathered from competitors, can be helpful in dealing with pricing complexity. Hinterhuber (2004) believes that when making pricing decisions a manager must take the competitive perspective into consideration. Competition-informed pricing tends to enhance the likelihood of setting the right price in the face of a competitor's innovation practices, including pricing that may match or exceed the firm's price for innovated products and services. The prices of competitive products and the competitive advantages of competitors dictate that the firm needs to evaluate its position in the market vis-a-vis the competitors (Ingenbleek et al. 2003). The competitor's current price strategy and strength to react are important components of competition-informed pricing. While firms practice competition-informed pricing, it is also imperative for them to consider the market structure, the degree of competition in the market, and the competitive advantages of competitors in the market.
Such activities in fact reflect the market players' overall knowledge of the competition. In this vein, this research suggests that the operating core can facilitate managers' efforts to gather information related to competitors. For example, the innovation process involves external investigation for developing new products and services (Hull 2003). It may help firms in the practice of price decision-making through the involvement of functional departments in the procedure of understanding the strategic movements of rivals in the market. Inter-functional coordination and cooperation are deemed instrumental in efficient innovation management when gathering data about the right price from the competitors' perspective. It can be assumed that the degree of competition can be understood through the propensity for coordination of people in an organisation. Likewise, CIT tools, along with continuous updating of the service development process, may facilitate firms in gathering competitors' price-related information in a shorter time. Considering the above discussion, the following hypotheses are formulated:

H4. Continuous process improvement increases the level of gathering competition-informed pricing in service development.

H5. A cross-functional organisation facilitates the level of gathering competition-informed pricing in service development.

H6. Implementation of information technology tools makes gathering competition-informed pricing easier in service development.

A previous study found a relationship between competition-informed pricing and firm performance (Ingenbleek et al. 2003). In fact, it is difficult to find an ideal measurement of business performance, particularly in collecting performance data. In past studies, the performance of an organisation was frequently evaluated by simple financial indicators such as return on investment (ROI), return on sales, or sales growth.
This study measured non-financial performance focusing on innovation activities in terms of new service development and delivery process improvement, as also found in earlier research (e.g. Hull 2003; Hull and Tidd 2003a). Coming up with upgraded features, higher quality of services, shorter delivery time for services, reduced cost of service development, and higher quality in the delivery process are the major indicators used to measure the performance of the organisation in terms of new service development and delivery process (Hull and Tidd 2003a). To achieve better performance, firms are expected to implement appropriate pricing practices (Hultink et al. 1997), as scholars have asserted that setting the right price drives superior performance for firms (Dutta et al. 2003). Competition-informed pricing increases the chance of setting the right price by knowing competitors' innovation (Ingenbleek et al. 2003). Gathering information on competitors' price strategy enables a quantitative evaluation of the firm's relative position (Ingenbleek et al. 2003). Therefore, we propose that understanding competitors' pricing trends, the degree of competition, and the market structure will enable service companies to upgrade services with new features and reduce response time. Thus, the following hypothesis is presented for testing:

H7. The greater the practice of competition-informed pricing, the higher the level of performance improvement.

Based on the above discussion, this paper proposes that competition-informed pricing can mediate the relationship between the operating core and performance. Hardly any research has examined the impact of the operating core on the performance of service firms through the possible role of competition-informed pricing. The rationale for testing this mediating effect arises from the market structure of the telecommunications industry.
Practicing the operating core of service innovation alone is perhaps not enough to achieve performance enhancement in service industries. While service-based companies embrace the operating core of innovation practices, they subsequently need to understand the position of their competitors in the market. It is a generally accepted notion that in a competitive market, each and every company follows its competitors' pricing and pricing strategy. Vermeulen and van der Aa (2003) mentioned that most organisations use services developed by some competitor, adjusting the competitor's product within their own innovation process. To a great extent, such companies try to obtain as much information as possible on competitors' prices. By understanding the pricing position of competitors, service companies attempt to attain higher performance. Thus, the following hypotheses are worthy of testing:

H8. Competition-informed pricing mediates the relationship between process and performance.

H9. Competition-informed pricing mediates the relationship between organisation and performance.

H10. Competition-informed pricing mediates the relationship between implementing tools/technology and performance.

Finally, we believe that the culture, service innovation practices, pricing, and firm performance of mobile phone companies should differ significantly between Malaysia and Bangladesh. The reasons for choosing these two contexts are discussed in the introduction. Therefore, we test all path relationships through multi-group analysis.

H11. All the hypothesised relationships in the proposed framework will differ between Malaysian and Bangladeshi telecommunications companies.

Thus, the research framework (Fig. 1) aims to explore the relationship of organisational culture as a predictor of the operating core (innovation process, cross-functional organisation, and implementation of tools/technology) for new service development.
Further, we explore the mediating role of competition-informed pricing practices in the relationship between operating core practices and performance.

Research methodology and result

Sample and data

To test the research framework and hypotheses, we considered the telecommunications industry in Bangladesh and Malaysia. In Bangladesh, the three largest of the six telecommunications companies (GrameenPhone, Robi Axiata, and Airtel) were chosen, as they hold more than 60 % of the total market share in the country. Similarly, the three largest of the six telecommunications companies in Malaysia (DiGi, Maxis, and Celcom Axiata) were chosen; they likewise hold more than 60 % of the total market share in the country. Purposive sampling was chosen because specific managers form the respondent pool for the research questionnaire survey. In Malaysia, there are 820 branch offices of DiGi, Maxis, and Celcom, from which we collected 98 usable responses. In Bangladesh, there are in total 621 branch offices, and 78 usable responses were collected. To run the analysis of the current framework with three predictors, a minimum sample size of 77 is required, which would generate a power of 0.80 for a model with a medium effect size (Hair et al. 2013). Therefore, a total of 176 usable responses from both countries are analysed for the purpose of the research. Table 1 provides the demographic statistics of the sample data.

Table 1 Demographic profile of respondents

The questionnaire was developed from past studies. The items for organisational culture (OC1 to OC9) were taken from Chang and Lin (2007). In the survey questionnaire, the respondents were asked to respond to the items on organisational culture on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree) with the question "How much do you agree on the following practices…?"
The items for innovation process (PRC1–PRC5), cross-functional organisation (ORG1–ORG5), and tools/technology (TLS1–TLS5) were taken from Hull (2003) and Hull and Tidd (2003a), and anchored on a 5-point Likert scale (1 = very low extent to 5 = very high extent). In measuring the innovation process, the respondents were asked to rate the items considering the following statement: "By the practice of innovation process, our company is…" To measure cross-functional organisation, the statement was "By the practice of cross-functional organisation, our company has…" Tools/technology was measured on the basis of the following statement: "In the implementation of information technology tools, our company has…" The items for competition-informed pricing (COMIP1–COMIP5) were taken from Ingenbleek et al. (2003), and measured on a 5-point Likert scale (1 = very low extent to 5 = very high extent). The managers were asked to indicate "To what extent does your company take into consideration…?" The items for performance were taken from Hull and Tidd (2003a) in terms of service development (SD1–SD5) and delivery process (DP1–DP5), measured on a 5-point Likert scale (1 = very low extent to 5 = very high extent). In measuring performance, the respondents were asked to rate the items considering the following statement: "To what extent has your operation system changed based on the following…" Details of the items are given in the "Appendix". To ensure that there is no common method bias in the questionnaire survey, we performed Harman's single factor test. This revealed that the first factor accounted for 45.018 % of the variance, which is less than the threshold of 50 % of total variance explained (Podsakoff et al. 2003).
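Harman's single-factor test is commonly operationalized as the share of total variance captured by the first unrotated factor, often approximated by the first principal component of the standardized item matrix. A minimal sketch of that computation; the simulated Likert-style data with a shared "method" factor is an illustrative assumption, not the study's data:

```python
import numpy as np

def harman_single_factor_variance(items: np.ndarray) -> float:
    """Proportion of total variance captured by the first principal
    component of the standardized item matrix (n_obs x n_items)."""
    z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
    # Covariance of standardized data is the correlation matrix;
    # eigenvalues partition the total variance (= number of items).
    eigvals = np.linalg.eigvalsh(np.cov(z, rowvar=False))
    return eigvals.max() / eigvals.sum()

# Illustrative data: 176 respondents, 10 items sharing a common factor
rng = np.random.default_rng(0)
common = rng.normal(size=(176, 1))            # hypothetical shared method factor
items = common + rng.normal(size=(176, 10))   # items = common signal + noise
share = harman_single_factor_variance(items)
print(f"First factor explains {share:.1%} of total variance")
```

If the first factor's share stays below the conventional 50 % threshold, common method bias is judged not to be a serious concern.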
In this study, to see whether any differences exist between the subsidiary groups (DiGi in Malaysia and GrameenPhone in Bangladesh are both subsidiaries of the Telenor group; Robi in Bangladesh and Celcom in Malaysia are subsidiaries of the Axiata group), an independent-samples t test was conducted to compare the six variables. The parent companies Telenor and Axiata were considered as two groups: DiGi and GrameenPhone formed group 1, and Celcom and Robi formed group 2. The results show that the p value from the independent t test is not significant for five variables; the exception is organisational culture, which shows a slight difference in means between the two groups of subsidiaries. Therefore, an effect-size test was calculated to determine the magnitude of the difference, as suggested by Cohen (1988). The effect size is determined by Cohen's d value, computed as:

$${\text{Cohen's d}} = {\text{difference between sample means}}/{\text{pooled standard deviation}}$$

Cohen's d values are interpreted according to the categories: 0.20–0.49 (small), 0.50–0.79 (medium), and 0.80 or above (large). The result of the test indicates that the effect size for the variable is small (0.21); therefore, the homogeneity of the two groups of subsidiaries is established. The small effect size indicates that response bias is not a threat. To achieve our research objectives and analyse the measurement and structural models, we used structural equation modelling (SEM) with the PLS approach, specifically SmartPLS version 2.0 M3 Beta (Ringle and Wende 2005). PLS-SEM can be viewed as quite similar to multiple regression analysis, examining possible relationships with less emphasis on the measurement model (Hair et al. 2013).
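The subsidiary-group comparison above (independent-samples t test followed by Cohen's d with a pooled standard deviation) can be reproduced in a few lines. The group means, standard deviations, and sizes below are illustrative assumptions, not the study's actual scores:

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d: difference between sample means / pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical organisational-culture scores for the two subsidiary groups
rng = np.random.default_rng(1)
group1 = rng.normal(3.9, 0.5, 88)   # Telenor subsidiaries (illustrative)
group2 = rng.normal(3.8, 0.5, 88)   # Axiata subsidiaries (illustrative)

t_stat, p_value = stats.ttest_ind(group1, group2)
d = abs(cohens_d(group1, group2))
size = "negligible" if d < 0.2 else "small" if d < 0.5 else "medium" if d < 0.8 else "large"
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {d:.2f} ({size})")
```

A d in the 0.20–0.49 band, as reported in the text (0.21), supports treating the two subsidiary groups as homogeneous.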
The individual path coefficients in the PLS structural model can be interpreted as standardised beta coefficients from ordinary least squares regression (Götz et al. 2010). Each path coefficient's significance can be assessed through a bootstrapping procedure; significant paths in the hypothesised direction empirically support the proposed causal relationship, and vice versa (Hair et al. 2011; Yung and Bentler 1994; Efron 1979). Bootstrapping in PLS is a nonparametric test that involves repeated random sampling with replacement from the original sample to create bootstrap samples and obtain standard errors for hypothesis testing (Hair et al. 2011). Regarding the number of re-samples, Chin (2010) suggested performing bootstrapping with 1000 re-samples; accordingly, the bootstrapping procedure with 1000 re-samples was used in the current study to test the significance of the path coefficients (regression coefficients). Path coefficients take standardised values between −1 and +1: estimates close to +1 represent a strong positive linear relationship, and vice versa for negative values (Hair et al. 2013). In addition, PLS is considered appropriate for a multi-group analysis exploring the differences between the companies of the two countries. The responses of Bangladeshi and Malaysian telecommunications managers were split into two data sets (Bangladesh = 78 samples, Malaysia = 98 samples). To estimate the structural model, all criteria such as convergent validity, discriminant validity, and measurement invariance were checked separately, as suggested by Hair et al. (2013). Factor loadings of the items, average variance extracted (AVE), and composite reliability (CR) were used to assess the convergent validity of the data (Hair et al. 2009). To ensure indicator reliability, the main loadings and cross-loadings of the items were checked.
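The bootstrapping logic described above (resample cases with replacement, re-estimate the standardised path coefficient, and use the spread of the re-estimates as a standard error) can be sketched as follows. The data here are simulated, and a simple correlation stands in for the PLS path estimate:

```python
import numpy as np

def bootstrap_path(x, y, n_boot=1000, seed=1):
    """Bootstrap a standardised path coefficient: resample cases with
    replacement, re-estimate, and derive a standard error and t value."""
    rng = np.random.default_rng(seed)
    n = len(x)
    beta_hat = np.corrcoef(x, y)[0, 1]   # standardised simple-regression beta
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)             # sampling with replacement
        boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    se = boot.std(ddof=1)
    return beta_hat, se, beta_hat / se

# Simulated latent scores for two constructs (176 cases, as in the study).
rng = np.random.default_rng(42)
x = rng.normal(size=176)
y = 0.5 * x + rng.normal(scale=0.8, size=176)
beta, se, t = bootstrap_path(x, y)
print(f"beta = {beta:.3f}, bootstrap SE = {se:.3f}, t = {t:.2f}")
```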
In accordance with Chin (1998), we retained the items that exceeded the recommended loading of 0.6, while three items (OC8, OC9, TLS4) that fell below this cut-off value were deleted. Two further items (OC4 and ORG5) were deleted because of cross-loading. The AVE of every construct exceeded the cut-off value of 0.5 suggested in the literature (Henseler et al. 2009; Hair et al. 2013), and the CR values of the constructs met the minimum threshold of 0.7 suggested by Hair et al. (2011). Table 2 shows the results. Table 2 PLS factor loadings, CR, and AVE of full and country samples After convergent validity, we analysed the discriminant validity of the model. Discriminant validity was assessed for both the full and split samples by comparing the correlations between constructs with the square root of the average variance extracted for each construct (Fornell and Larcker 1981). The square roots of the AVEs are greater in all cases than the off-diagonal elements in their corresponding rows and columns, indicating that the required discriminant validity was achieved (Table 3). In total, the measurement model demonstrated adequate convergent and discriminant validity. Table 3 Discriminant validity of data sets Measurement invariance was then tested. According to Hair et al. (2013), researchers should ensure the construct measures are invariant across the groups before comparing path coefficients across groups using parametric PLS-MGA. Bootstrapping was applied separately to each group according to the number of observations in its data set. Using the outer weights and standard errors for each group, and the Levene's test procedure suggested by Hair et al. (2013), the invariance test was run for all items.
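Both AVE and CR are simple functions of the standardised outer loadings. A minimal sketch, with hypothetical loadings for a five-item construct:

```python
def ave_and_cr(loadings):
    """AVE and composite reliability from standardised outer loadings,
    assuming uncorrelated indicator error terms (error = 1 - loading^2)."""
    squared = [l ** 2 for l in loadings]
    ave = sum(squared) / len(loadings)
    error_var = sum(1 - s for s in squared)
    cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + error_var)
    return ave, cr

# Hypothetical loadings for a five-item construct.
ave, cr = ave_and_cr([0.72, 0.78, 0.81, 0.75, 0.69])
print(f"AVE = {ave:.3f}, CR = {cr:.3f}")  # AVE = 0.564, CR = 0.866
# Cut-offs used in the text: AVE > 0.5, CR > 0.7.
```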
In this test, if the test for equality of group variances is significant, unequal standard errors are assumed and the standard error used in the test statistic (t value) is computed as follows: $$S_{12} = \sqrt{S_{1}^{2} + S_{2}^{2}}$$ If the test for equality of group variances is not significant, equal standard errors are assumed and the standard error is computed as follows: $$S_{12} = \left( \sqrt{\frac{(N_{1} - 1)^{2}}{N_{1} + N_{2} - 2} \cdot S_{1}^{2} + \frac{(N_{2} - 1)^{2}}{N_{1} + N_{2} - 2} \cdot S_{2}^{2}} \right) \cdot \left( \sqrt{\frac{1}{N_{1}} + \frac{1}{N_{2}}} \right)$$ The criterion is that at least two measurement items of each construct should not differ between the groups. The results show that there is no significant difference between the two groups. Table 4 shows the results. Table 4 Invariance test After testing the measurement model, the structural model was analysed. The R² values and the path coefficients (beta and significance) show how well the data support the hypothesised model (Chin 1998). We used the bootstrapping method with 1000 re-samples to estimate the significance of the path coefficients (Chin 1998). The path coefficients for the full and split data are shown in Table 5 and Fig. 2. Table 5 Result for direct relationships Structural models. **p < 0.01, *p < 0.05 Hypotheses related to organisational culture and operating core From the analysis, we found that H1 was supported in the full data (β = 0.520, p < 0.01), the Malaysian data (β = 0.545, p < 0.01), and the Bangladeshi data (β = 0.314, p < 0.01). H2 was supported in the full data (β = 0.584, p < 0.01), the Malaysian data (β = 0.651, p < 0.01), and the Bangladeshi data (β = 0.350, p < 0.01). H3 was supported in the full data (β = 0.567, p < 0.01), the Malaysian data (β = 0.471, p < 0.01), and the Bangladeshi data (β = 0.545, p < 0.01).
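The parametric multi-group comparison of path coefficients implied by these pooled standard errors can be sketched directly. The path coefficients below are taken from the study (tools/technology → competition-informed pricing), but the bootstrap standard errors are hypothetical, so the resulting t value is only illustrative:

```python
import math

def mga_t(p1, p2, s1, s2, n1, n2, equal_var):
    """Parametric PLS-MGA t statistic for the difference between two groups'
    path coefficients, using the pooled standard errors given in the text."""
    if equal_var:   # Levene's test not significant: equal variances assumed
        s12 = math.sqrt((n1 - 1) ** 2 / (n1 + n2 - 2) * s1 ** 2
                        + (n2 - 1) ** 2 / (n1 + n2 - 2) * s2 ** 2) \
              * math.sqrt(1 / n1 + 1 / n2)
    else:           # Levene's test significant: unequal variances assumed
        s12 = math.sqrt(s1 ** 2 + s2 ** 2)
    return (p1 - p2) / s12  # compare with t critical value, df = n1 + n2 - 2

# Path coefficients from the study; standard errors are hypothetical.
t = mga_t(p1=0.536, p2=0.163, s1=0.09, s2=0.11, n1=78, n2=98, equal_var=True)
print(f"t = {t:.2f}")
```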
Hypotheses related to operating core (innovation process, cross-functional organisation, and implementation of tools/technology) and competition-informed pricing H4 was supported in the full data set (β = 0.170, p < 0.05) and the Malaysian data set (β = 0.255, p < 0.05), but not in the Bangladeshi data set. H5 was supported in the full data set (β = 0.266, p < 0.05) and the Bangladeshi data set (β = 0.275, p < 0.05), but not in the Malaysian data set. H6 was supported in the full data set (β = 0.295, p < 0.01) and the Bangladeshi data set (β = 0.536, p < 0.01), but not in the Malaysian data set. Hypotheses related to competition-informed pricing and performance H7 was supported in all the data sets: the full (β = 0.602, p < 0.01), the Malaysian (β = 0.562, p < 0.01), and the Bangladeshi (β = 0.596, p < 0.01). Hypotheses related to the mediating effect of competition-informed pricing on the relationship between operating core and performance H9 was supported only in the full data set. H10 was supported in the full and Bangladeshi data sets, but not in the Malaysian data set (Table 6). Table 6 Result for mediating effect To explore the differences, we carried out a PLS multi-group analysis on the Bangladeshi and Malaysian subsamples, testing the differences between the path coefficients across the two data sets; the results are shown in Table 7. Three paths differ significantly between the two countries' data sets: organisational culture and process (p = 0.036), organisational culture and organisation (p = 0.003), and tools/technology and competition-informed pricing (p = 0.016) (Table 7).
Table 7 Path differences by country The results of the study show a significant relationship between organisational culture and the operating core (innovation process, cross-functional organisation, and implementation of tools/technology) in both the Bangladeshi and the Malaysian contexts. This is in line with the earlier notion that internal behaviour and external relations, as parts of organisational culture, facilitate the successful implementation of innovation in the developed-country context (Naranjo-Valencia et al. 2011). Similar findings are observed in the current study, which focuses on developing countries. Organisational culture, as a source of new ideas (Uzkurt et al. 2013), facilitates the practice of the operating core (innovation process, cross-functional organisation, and implementation of tools/technology) in the telecommunications industry. Earlier researchers found that training and championing influence the shaping of innovative organisations and processes (Hull and Tidd 2003a); the current study instead emphasises the overall organisational culture in relation to the practice of the operating core, and its results give a similar impression in the context of the telecommunications sector in Malaysia and Bangladesh. Such a practice of organisational culture cannot be expected to be the same across all organisations or all countries. Consistent with this, the multi-group analysis shows that the relationship between organisational culture and process, as well as between organisational culture and cross-functional organisation, differs significantly between the Malaysian and Bangladeshi telecommunications industries.
Based on the findings, the practice of organisational culture in relation to process (β = 0.545) and cross-functional organisation (β = 0.651) is stronger in the Malaysian telecommunications sector than in the Bangladeshi telecommunications sector, where process holds a standardised beta of 0.314 and cross-functional organisation a standardised beta of 0.350. According to scholars, cultural differences have implications for the organisations operating within them (Tayeb 1994). Furthermore, it has been asserted that cultural values at the individual or societal level are greatly influenced by the national culture (Thornton et al. 2011). National cultures low on individualism emphasise strong group solidarity. Cultures high on uncertainty avoidance prefer to follow clear rules of conduct, while cultures low on uncertainty avoidance relish novel events and value innovation. Cultures high on harmony focus on accepting matters as they are, whereas a low level of harmony indicates the prominence of assertiveness in advancing personal or group interests (Li et al. 2013). In the context of this study, therefore, organisational culture can be expected to differ between the companies of these countries, presumably under the influence of different national cultures. Perhaps, owing to the advancement of a modern organisational culture in Malaysia, the telecommunication companies are able to blend the mechanistic process and the organic cross-functional organisation as concurrent innovation practices. It can be argued that the multi-ethnic setting of Malaysian culture leads organisational culture to practise both mechanistic and organic structures simultaneously. In the Malaysian context, cooperativeness and steadiness are entrenched in the society, presumably influenced by the cultural harmony of the nation.
From an economic point of view, Malaysia is at a developmental stage and is considered one of the emerging tigers of Asia; the government has already taken various measures to achieve developed-nation status. In this light, the culture of cooperativeness, creativity, efficacy, and competitiveness among the Malaysian telecommunication companies appears supportive of innovation at such a transitional stage. More specifically, based on the data, the study suggests that cooperativeness is the most significant dimension of organisational culture, followed by consistency and innovativeness, for the telecommunication companies of both countries. Among the Malaysian telecommunications companies, cooperativeness and consistency appear to carry more weight, whereas among the Bangladeshi telecommunications companies cooperativeness and innovativeness matter more for shaping effective innovation practices. The relationship between the innovation process and competition-informed pricing is significant in the Malaysian telecommunications sector, whereas in the Bangladeshi telecommunications sector it is insignificant. Theoretically, the innovation process reflects the mechanistic stance of the organisation. According to Liker et al. (1999) and Tidd and Hull (2011), a mechanistic organisation is appropriate when the environment is efficient, effective, and stable. The findings of this study reflect what was advocated earlier in the context of innovation in developed countries. The Malaysian telecommunications sector is presumably at a mature stage, with greater efficiency and effectiveness than the Bangladeshi telecommunications sector; such an efficient and mature state of the industry calls for attention to the most important stakeholders in the business environment, namely competitors.
Given this context, it is noteworthy that the Malaysian telecommunications industry combines the competition-informed pricing practice with a mechanistic state of business operation. The innovation process can improve a firm's performance if the practice of gathering price-related information from competitors is emphasised: competition-informed pricing helps managers in the Malaysian telecommunications field to understand the upper limit of the price decision while practising the innovation process for performance improvement. Through the competition-informed pricing practice, therefore, the mechanistic state of the organisation can help achieve performance. In contrast to Malaysia, the relationships of cross-functional organisation and tools/technology with competition-informed pricing are significant in the Bangladeshi telecommunications sector. Bangladesh is at a point where it is about to take flight towards the development of innovation: foreign investment is growing in the country, with considerable interest among telecommunication companies around the world, and the market is experiencing rapid changes in organisational operation and strategy. As Liker et al. (1999) and Tidd and Hull (2011) suggest, organisations tend to be organic when the environment is unstable and dynamic and there are few rules and regulations. In this scenario, the significant influence of cross-functional organisation on competition-informed pricing is a justifiable conclusion: understanding competitors' pricing strategy and strength in the market is facilitated by the use of cross-functional team members within the innovative organisation.
Computer information technology (CIT) tools indeed update the service innovation cycle among cross-functional team members and increase the frequency of cross-functional team members' communication in the value chain, as highlighted in previous studies (Collins and Hull 2002; Tidd and Hull 2011). The result of the current study thus points to a facilitating role of competition-informed pricing in the implementation of tools/technology for achieving a firm's goals and performance, but only in the Bangladeshi telecommunications sector. Since the services offered in the telecommunications industry are very similar across companies, competition is apparently higher, which prompts the companies to consider competition-informed pricing. In the multi-group analysis, the relationship between tools/technology and competition-informed pricing differs significantly in the Bangladeshi telecommunications sector (β = 0.536) compared with the Malaysian telecommunications sector (β = 0.163). In line with the resource-based view (RBV), organisational resources are converted into capabilities that affect competitive advantage (Barney 1991). In this study, the resources, namely the innovation process, cross-functional organisation, and tools/technology, have a causal effect on a firm capability, competition-informed pricing; this capability in turn has a causal effect on competitive advantage, here measured as performance. It has likewise been argued in the literature that by capitalising on resources an organisation can dominate and achieve a high level of performance (Barney 1991). Interestingly, the mediating effect of competition-informed pricing on the relationship between tools/technology and performance is significant only in the Bangladeshi data set.
The reason probably lies in the state of progress of business innovation in Bangladesh. Bangladesh is striving towards international benchmarks; being in transition from a least developed to an emerging country, its business organisations are proactive in inculcating the practice of using tools/technology. In Malaysia, on the other hand, tools/technology have been part of business operations for a fairly long time. A significant difference between the Malaysian and Bangladeshi telecommunications sectors in these relationships is therefore observed. Managerial relevance The illustrated research model is a useful theoretical framework for explaining the elements of the operating core practices of service innovation that influence higher performance through the mediating effect of competition-informed pricing. According to the results of this study, managers in the Malaysian telecommunications sector do not take competition-informed pricing into account when practising the operating core of service innovation to achieve higher performance. By contrast, managers of the Bangladeshi telecommunication companies should take competition-informed pricing into account when practising the operating core of service innovation, so as to realise greater performance and counter the unstable environment. The study also reflects the state of organisational culture practice in both countries' industries. It is recommended that managers in the Bangladeshi telecommunications industry develop an organisational culture that yields performance advantages through the practice of service innovation. Overall, the findings suggest that the telecommunications industry can raise its level of performance when managers consider competition-informed pricing for new services, supported by the operating core of service innovation management.
Managers in the industry should look to competitors when setting the price of a service, alongside practising an innovative process, an innovative organisation, and implementing tools or technology. This may help managers gain insight into the practice of service innovation, organisational culture, and performance. Taken together, the results of this study show that service innovation practice differs between Malaysia and Bangladesh. In Malaysia, organisational culture is a stronger predictor of the operating core of service innovation than in the Bangladeshi telecommunications sector. Furthermore, in the Malaysian telecommunications sector competition-informed pricing does not play a role between the operating core of service innovation and performance, while in the Bangladeshi telecommunications sector competition-informed pricing facilitates the relationship of tools or technology with performance. In addition, the relationship between tools or technology and competition-informed pricing is strong in the Bangladeshi telecommunications sector but not significant in the Malaysian one. If the respective managers in both countries consider these issues, it should contribute greatly to the practice of service innovation management as a whole. Limitations and future directions of research This paper has limitations that should be noted. It is based on a single industry, with the sample drawn only from the telecommunications industry, which potentially limits the generalisation of the findings across other industries. This could be overcome by extending the scope of the research to a larger database comprising responses from managers representing a number of industries.
Although this paper is based purely on a quantitative methodology using established constructs, these had not been used in any prior study in Bangladesh or Malaysia. Future studies could adopt a mixed methodology comprising qualitative and quantitative approaches, contributing to greater generalisation of the findings. In addition, future studies could look into the other subsidiaries of the Telenor group and Axiata group operating in Asian countries such as India, Pakistan, Myanmar, Indonesia, Brunei, and Thailand, in order to test the applicability of the framework in developing countries. Ardichvili A, Maurer M, Li W, Wentling T, Stuedemann R (2006) Cultural influences on knowledge sharing through online communities of practice. J Knowl Manag 10(1):94–107 Aycan Z, Kanungo R, Mendonca M, Yu K, Deller J, Stahl G, Kurshid A (2000) Impact of culture on human resource management practices: a 10-country comparison. Appl Psychol 49(1):192–221 Barney JB (1986) Organisational culture: can it be a source of sustained competitive advantage? Acad Manage Rev 11(3):656–665 Barney JB (1991) Firm resources and sustained competitive advantage. J Manage 17(1):99–120 Barrett M, Davidson E, Prabhu J, Vargo SL (2015) Service innovation in the digital age: key contributions and future directions. MIS Q 39(1):135–154 Blindenbach-Driessen F (2015) The (in)effectiveness of cross-functional innovation teams: the moderating role of organisational context. IEEE Trans Eng Manage 62(1):29–38 Büschgens T, Bausch A, Balkin DB (2013) Organisational culture and innovation: a meta-analytic review. J Prod Innov Manage 30(4):763–781 Chang SE, Lin C-S (2007) Exploring organisational culture for information security management. Ind Manage Data Syst 107(3):438–458 Chen C-J, Hsiao Y-C (2013) The endogenous role of location choice in product innovations. J World Bus 48(3):360–372 Chin WW (1998) The partial least squares approach for structural equation modeling.
In: Marcoulides GA (ed) Modern methods for business research. Psychology Press, New York, pp 295–336 Chin WW (2010) How to write up and report PLS analyses. In: Vinzi VE, Chin WW, Henseler J, Wang H (eds) Handbook of partial least squares. Springer, Berlin, pp 655–690 Cohen J (1988) Statistical power analysis for the behavioral sciences. Erlbaum, Routledge Collins PD, Hull FM (2002) Early simultaneous influence of manufacturing across stages of the product development process: impact on time and cost. Int J Innov Manag 6(1):1–24 Damanpour F (2014) Footnotes to research on management innovation. Org Stud 35(9):1265–1285 De Jong JPJ, Vermeulen PAM (2003) Organizing successful new service development: a literature review. Manag Decis 41(9):844–858 Dutta S, Zbaracki MJ, Bergen M (2003) Pricing process as a capability: a resource-based perspective. Strateg Manag J 24(7):615–630 Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7(1):1–26 Eisingerich AB, Rubera G, Seifert M (2009) Managing service innovation and interorganisational relationships for firm performance; to commit or diversify? J Serv Res 11(4):344–356 Erumban AA, De Jong SB (2006) Cross-country differences in ICT adoption: a consequence of culture? J World Bus 41(4):302–314 Fornell C, Larcker DF (1981) Evaluating structural equation models with unobservable variables and measurement error. J Mark Res 18(1):39–50 Goto A (2009) Innovation and competition policy. Jpn Econ Rev 60(1):55–62 Götz O, Liehr-Gobbers K, Krafft M (2010) Evaluation of structural equation models using the partial least squares (PLS) approach. In: Vinzi VE, Chin WW, Henseler J, Wang H (eds) Handbook of partial least squares. Springer, Berlin, pp 691–711 Hair JF Jr, Hult GTM, Ringle CM, Sarstedt M (2013) A primer on partial least squares structural equation modeling (PLS-SEM). Sage Publishers, UK Hair JF, Black WC, Babin BJ, Anderson RE (2009) Multivariate data analysis.
Pearson Prentice Hall, Upper Saddle River, New Jersey Hair JF, Ringle CM, Sarstedt M (2011) PLS-SEM: indeed a silver bullet. J Market Theory Prac 19(2):139–152 Henseler J, Ringle C, Sinkovics R (2009) The use of partial least squares path modeling in international marketing. Adv Int Market (AIM) 20:277–320 Hinterhuber A (2004) Towards value-based pricing—an integrative framework for decision making. Ind Mark Manage 33(8):765–778 Hogan SJ, Coote LV (2014) Organisational culture, innovation, and performance: a test of Schein's model. J Bus Res 67(8):1609–1621 Hull FM (2003) Product development in service enterprises: case studies of good practice. In: Tidd J, Hull FM (eds) Service innovation: organisational responses to technological opportunities and market imperatives, vol 9. Imperial College Press, UK, pp 371–390 Hull FM (2004) Innovation strategy and the impact of a composite model of service product development on performance. J Serv Res 7(2):167–180 Hull FM, Tidd J (2003a) A composite framework of product development and delivery effectiveness in services. In: Tidd J, Hull FM (eds) Service innovation: organisational responses to technological opportunities and market imperatives, vol 9. Series on Technology Management. Imperial College Press, UK, pp 343–371 Hull FM, Tidd J (2003b) The organisation of new service development in the USA and UK. In: Tidd J, Hull F (eds) Service innovation: organisational responses to technological opportunities and market imperatives, vol 9. Imperial College Press, UK, pp 137–174 Hull FM, Collins PD, Liker JK (1996) Composite forms of organisation as a strategy for concurrent engineering effectiveness. IEEE Trans Eng Manage 43(2):133–142 Hultink EJ, Griffin A, Hart S, Robben HS (1997) Industrial new product launch strategies and product development performance. J Prod Innov Manage 14(4):243–257 Ingenbleek P, Debruyne M, Frambach RT, Verhallen TMM (2003) Successful new product pricing practices: a contingency approach.
Market Lett 14(4):289–305 Iorgulescu A, Marcu M (2015) The relationship between national culture and organisational culture. Soc Sci Educ Res Rev 2(2):93–98 Jiménez-Jiménez D, Sanz-Valle R (2011) Innovation, organisational learning, and performance. J Bus Res 64(4):408–417 Klein EE, Dologite DG (2000) The role of computer support tools and gender composition in innovative information system idea generation by small groups. Comput Hum Behav 16(2):111–139 Lee S-G, Trimi S, Kim C (2013) The impact of cultural differences on technology adoption. J World Bus 48(1):20–29 Li K, Griffin D, Yue H, Zhao L (2013) How does culture influence corporate risk-taking? J Corp Finance 23:1–22 Liker JK, Collins PD, Hull FM (1999) Flexibility and standardization: test of a contingency model of product design—manufacturing integration. J Prod Innov Manage 16(3):248–267 Magnusson PR, Matthing J, Kristensson P (2003) Managing user involvement in service innovation experiments with innovating end users. J Serv Res 6(2):111–124 Malaysian Investment Development Authority (2014) Services sector. http://www.mida.gov.my/home/services-sector/posts/ Martins E, Terblanche F (2003) Building organisational culture that stimulates creativity and innovation. Eur J Innov Manage 6(1):64–74 McConnell C, Brue S, Flynn S (2009) Economics: principles, problems, and policies, 18th edn. McGraw-Hill Education, New York Miron E, Erez M, Naveh E (2004) Do personal characteristics and cultural values that promote innovation, quality, and efficiency compete or complement each other? J Org Behav 25(2):175–199 Mudrak T, van Wagenberg A, Wubben E (2005) Innovation process and innovativeness of facility management organisations. Facilities 23(3/4):103–118 Naranjo-Valencia JC, Jiménez-Jiménez D, Sanz-Valle R (2011) Innovation or imitation? The role of organisational culture. Manag Decis 49(1):55–72 Noam EM (2006) Fundamental instability: why telecom is becoming a cyclical and oligopolistic industry. 
Inf Econ Policy 18(3):272–284 OECD (2007) Innovation and growth-rationale for an innovation strategy. http://www.oecd.org/science/inno/39374789.pdf Orfila-Sintes F, Crespí-Cladera R, Martínez-Ros E (2005) Innovation activity in the hotel industry: evidence from Balearic Islands. Tour Manag 26(6):851–865 Ottenbacher MC (2007) Innovation management in the hospitality industry: different strategies for achieving success. J Hosp Tour Res 31(4):431–454 Planning Commission (2012) Perspective plan of Bangladesh 2010–2021. General Economics Division, Planning Commission, Government of the People's Republic of Bangladesh. http://www.plancomm.gov.bd/wp…/09/Perspective-Plan-of-Bangladesh.pdf Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP (2003) Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol 88(5):879 Ringle C, Wende W (2005) SmartPLS. http://www.smartpls.de Shapiro BP, Jackson BB (1978) Industrial pricing to meet customer needs. Harv Bus Rev 56(6):119–127 Taghizadeh SK, Jayaraman K, Rahman SA, Malkifar S (2014) A glance on service innovation scenario: case of leading telecommunication companies in Malaysia. Int J Bus Innov 1(5):4–22 Tayeb M (1994) Organisations and national culture: methodology considered. Org Stud 15(3):429–445 Thornton PH, Ribeiro-Soriano D, Urbano D (2011) Socio-cultural factors and entrepreneurial activity: an overview. Int Small Bus J 29(2):105–118 Tidd J, Bessant J (2009) Managing innovation: integrating technological, market and organisational change, 4th edn. Wiley, West Sussex Tidd J, Hull FM (2011) Service innovation: development, delivery and performance. The handbook of innovation and services: a multi-disciplinary perspective. Edward Elgar Publishing Limited, USA Tidd J, Bessant J, Pavitt K (2005) Managing innovation: integrating technological, market and organisational change, 3rd edn.
Wiley, West Sussex Uzkurt C, Kumar R, Kimzan HS, Eminoglu G (2013) Role of innovation in the relationship between organisational culture and firm performance: a study of the banking sector in Turkey. Eur J Innov Manage 16(1):92–117 van Riel ACR (2005) Introduction to the special issue on service innovation management. Manag Serv Qual 15(6):493–495 Vermeulen P, van der Aa W (2003) Organizing innovation in services. In: Tidd J, Hull FM (eds) Service innovation: organisational responses to technological opportunities and market imperatives, vol 9. Imperial College Press, London, pp 35–53 Weiss DS, Legrand C (2011) Innovative intelligence: the art and practice of leading sustainable innovation in your organisation. Wiley, Canada World Economic Forum (2015) The global competitiveness report 2014–2015. In: Schwab (ed). Geneva. Available via http://www3.weforum.org/docs/WEF_GlobalCompetitivenessReport_2014-15.pdf. Accessed 21 Sept 2015 Yung YF, Bentler PM (1994) Bootstrap-corrected ADF test statistics in covariance structure analysis. Br J Math Stat Psychol 47(1):63–84 Zammuto RF, O'Connor EJ (1992) Gaining advanced manufacturing technologies' benefits: the roles of organisation design and culture. Acad Manag Rev 17(4):701–728 SAR participated in the data collection from Bangladesh and wrote the Introduction and Discussion sections. SKT participated in the data collection from Malaysia and drafted the Theoretical Background along with the Discussion section. TR carried out the statistical analysis and assisted in writing the Research Methodology section. NHA contributed to writing the Managerial Relevance, Conclusion, and Limitations and Future Directions of Research sections. All authors read and approved the final manuscript. Syed Abidur Rahman received his Ph.D. degree in the area of entrepreneurship and innovation from Universiti Sains Malaysia. He works at Stamford University Bangladesh as an Assistant Professor. He has published several articles in academic journals.
His areas of interest are the base of the pyramid, entrepreneurship, sustainable development, and innovation. Seyedeh Khadijeh Taghizadeh is a Ph.D. candidate in the area of marketing and innovation at Universiti Sains Malaysia. Her areas of interest are service innovation, sustainable development, and entrepreneurship. She has published several articles in academic journals and attended several international conferences. T. Ramayah is currently a Professor at the School of Management, Universiti Sains Malaysia. He has also presented numerous papers at local and international conferences, having won three "Best Paper" awards. His publications have appeared in Computers in Human Behavior; Resources, Conservation and Recycling; International Journal of Information Technology & Decision Making (IJITDM); International Journal of Information Management; Engineering, Construction and Architectural Management (ECAM); and North American Journal of Psychology. Noor Hazlina Ahmad, Ph.D., is an Associate Professor at the School of Management, USM. She joined the university after completing her Ph.D. at the University of Adelaide, Australia. Her research lies at the interdisciplinary intersection of entrepreneurship and organisational studies, accumulating ground-breaking evidence of cultural constraints on entrepreneurial behaviour. The authors would like to thank Dr. Marcus Griffin (English language editor) for proofreading the manuscript. Department of Business Administration, Stamford University Bangladesh, 744, Saat Masjid Road, Dhaka, Bangladesh: Syed Abidur Rahman & Seyedeh Khadijeh Taghizadeh. School of Management, Universiti Sains Malaysia, Pulau Penang, Malaysia: T. Ramayah & Noor Hazlina Ahmad. Correspondence to Syed Abidur Rahman. See Table 8. Table 8 Measurement items. Rahman, S.A., Taghizadeh, S.K., Ramayah, T. et al.
Service innovation management practices in the telecommunications industry: what does cross country analysis reveal? SpringerPlus 4, 810 (2015). https://doi.org/10.1186/s40064-015-1580-8 Keywords: Service innovation practices; Innovation process; Cross-functional organisation; Tools/technology
Can space expand with unlimited speed?

According to this article on the European Space Agency web site, just after the Big Bang and before inflation the currently observable universe was the size of a coin. One millionth of a second later the universe was the size of the Solar System, which is an expansion much, much faster than the speed of light. Can space expand with unlimited speed?

big-bang space-expansion faster-than-light

asked by cziko; edited Feb 17, 2016 at 17:57 by John Rennie

Related: physics.stackexchange.com/q/20056 – John Rennie
More related: physics.stackexchange.com/q/44386 – John Rennie
Possible duplicates: physics.stackexchange.com/q/26549/2451 and links therein. – Qmechanic ♦
The expansion hasn't got a speed. It is a misnomer to say that it is a speed. It should be called an expansion rate. It is not like two points having a relative speed; it is more like a scaling rate of the unit distance. If there were no masses in the universe, we would not sense any expansion at all. – Oktay Doğangün
In "Many Worlds in One", Vilenkin remarks that expansion differs from relative motion. However, as quoted in their preprint whose main title was "Expanding Confusion", Davis (of Lineweaver and Davis, cited in Pulsar's answer) had rejected the physicality of "spatial expansion" as any sort of force or drag. I'm making this comment mainly to help myself find Oktay Doğangün's helpful comment and Pulsar's answer, whose color-coded diagram is easier to follow than a couple of the 3 panels in Lineweaver & Davis's paper. – Edouard, Oct 9, 2021 at 3:01
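Before the answers, a rough order-of-magnitude check of the figures quoted in the question. The two sizes below are my assumptions (a 2 cm coin and a Solar System diameter of about 9×10^12 m, roughly the diameter of Neptune's orbit); the point survives any reasonable choice:

```python
# Rough check: a "coin-sized" patch growing to "Solar-System-sized"
# within one millionth of a second. The sizes are assumptions:
# a 2 cm coin and a Solar System diameter of about 9e12 m.
C = 2.998e8            # speed of light, m/s
coin = 0.02            # m
solar_system = 9.0e12  # m
dt = 1.0e-6            # s

# Effective speed at which the two edges of the patch separate
v_eff = (solar_system - coin) / dt
print(f"effective expansion speed ~ {v_eff / C:.1e} times c")
```

Whatever precise sizes one assumes, the edges of the patch separate some ten orders of magnitude faster than light, which is exactly what the answers below address.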
I will try to clarify a few of these issues; for more information, I highly recommend the article "Expanding Confusion: common misconceptions of cosmological horizons and the superluminal expansion of the Universe" from Tamara M. Davis and Charles H. Lineweaver. I will assume a standard ΛCDM-model, with $$ \begin{align} H_0 &= 67.3\;\text{km}\,\text{s}^{-1}\text{Mpc}^{-1},\\ \Omega_{R,0} &= 9.24\times 10^{-5},\\ \Omega_{M,0} &= 0.315,\\ \Omega_{\Lambda,0} &= 0.685,\\ \Omega_{K,0} &= 1 - \Omega_{R,0} - \Omega_{M,0} - \Omega_{\Lambda,0} = 0. \end{align} $$ The expansion of the universe can be described by a scale factor $a(t)$, which can be thought of as the length of an imaginary ruler that expands along with the universe, relative to the present day, i.e. $a(t_0)=1$ where $t_0$ is the present age of the universe. From the standard equations, one can derive the Hubble parameter $$ H(a) = \frac{\dot{a}}{a} = H_0\sqrt{\Omega_{R,0}\,a^{-4} + \Omega_{M,0}\,a^{-3} + \Omega_{K,0}\,a^{-2} + \Omega_{\Lambda,0}}, $$ such that $H(1)=H_0$ is the Hubble constant. In a previous post, I showed that the age of the universe, as a function of $a$, is $$ t(a) = \frac{1}{H_0}\int_0^a\frac{a'\,\text{d}a'}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a' + \Omega_{K,0}\,a'^2 + \Omega_{\Lambda,0}\,a'^4}}, $$ which can be numerically inverted to yield $a(t)$, and consequently $H(t)$. It also follows that the present age of the universe is $t_0=t(1)=13.8$ billion years. Now, another consequence of the Big Bang models is Hubble's Law, $$ v_\text{rec}(t_\text{ob}) = H(t_\text{ob})\,D(t_\text{ob}), $$ describing the relation between the recession velocity $v_\text{rec}(t_\text{ob})$ of a light source and its proper distance $D(t_\text{ob})$, at a time $t_\text{ob}$. In fact, this follows immediately from the definition of $H(t_\text{ob})$, since $v_\text{rec}(t_\text{ob})$ is proportional to $\dot{a}$ and $D(t_\text{ob})$ is proportional to $a$. 
However, it should be noted that this is a theoretical relation: neither $v_\text{rec}(t_\text{ob})$ nor $D(t_\text{ob})$ can be observed directly. The recession velocity is not a "true" velocity, in the sense that it is not an actual motion in a local inertial frame; clusters of galaxies are locally at rest. The distance between them increases as the universe expands, which can be expressed as $v_\text{rec}(t_\text{ob})$. Some cosmologists therefore prefer to think of $v_\text{rec}(t_\text{ob})$ as an apparent velocity, a theoretical quantity with little physical meaning. A related quantity that is observable is the redshift of a light source, which is the cumulative increase in wavelength of the photons as they travel through the expanding space between source and observer. There is a simple relation between the scale factor and the redshift of a source, observed at a time $t_\text{ob}$: $$ 1 + z(t_\text{ob}) = \frac{a(t_\text{ob})}{a(t_\text{em})}, $$ such that the observed redshift of a photon immediately gives the time $t_\text{em}$ at which the photon was emitted. The proper distance $D(t_\text{ob})$ of a source is also a theoretical quantity. It's an "instantaneous" distance, which can be thought of as the distance you would obtain with a (very long!) measuring tape if you were able to "stop" the expansion of the universe. It can however be derived from observable quantities, such as the luminosity distance or the angular diameter distance. The proper distance to a source, observed at time $t_\text{ob}$ with a redshift $z_\text{ob}$ is $$ D(z_\text{ob},t_\text{ob}) = a_\text{ob}\frac{c}{H_0}\int_{a_\text{ob}/(1+z_\text{ob})}^{a_\text{ob}}\frac{\text{d}a}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a + \Omega_{K,0}\,a^2 + \Omega_{\Lambda,0}\,a^4}}, $$ with $a_\text{ob} = a(t_\text{ob})$. The furthest objects that we theoretically can observe have infinite redshift; they mark the edge of the observable universe, also known as the particle horizon. 
Ignoring inflation, we get: $$ D_\text{ph}(t_\text{ob}) = a_\text{ob}\frac{c}{H_0}\int_0^{a_\text{ob}}\frac{\text{d}a}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a + \Omega_{K,0}\,a^2 + \Omega_{\Lambda,0}\,a^4}}. $$ In practice though, the furthest we can see is the CMB, which has a current redshift $z_\text{CMB}(t_0)\approx 1090$. A source that has a recession velocity $v_\text{rec}(t_\text{ob})=c$ has a corresponding distance $$ D_\text{H}(t_\text{ob})=\frac{c}{H(t_\text{ob})}. $$ This is called the Hubble distance. Almost there, just a few more quantities need to be defined. The photons that we observe at a time $t_\text{ob}$ have travelled on a null geodesic called the past light cone. It can be defined as the proper distance that a light source had at a time $t_\text{em}$ when it emitted the photons that we observe at $t_\text{ob}$: $$ D_\text{lc}(t_\text{em},t_\text{ob})= a_\text{em}\frac{c}{H_0}\int_{a_\text{em}}^{a_\text{ob}}\frac{\text{d}a}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a + \Omega_{K,0}\,a^2 + \Omega_{\Lambda,0}\,a^4}}. $$ There are two special cases: for $t_\text{ob}=t_0$ we have our present-day past light cone (i.e. the photons that we are observing right now), and for $t_\text{ob}=\infty$ we get the so-called cosmic event horizon: $$ D_\text{eh}(t_\text{em})= a_\text{em}\frac{c}{H_0}\int_{a_\text{em}}^\infty\frac{\text{d}a}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a + \Omega_{K,0}\,a^2 + \Omega_{\Lambda,0}\,a^4}}. $$ For light emitted today, $t_\text{em}=t_0$, this has a special significance: if a source closer to us than $D_\text{eh}(t_0)$ emits photons today, then we will be able to observe those at some point in the future. In contrast, we will never observe photons emitted today by sources further than $D_\text{eh}(t_0)$. One final definition: instead of proper distances, we can use co-moving distances. These are distances defined in a co-ordinate system that expands with the universe. 
In other words, the co-moving distance of a source that moves away from us along with the Hubble flow, remains constant. The relation between co-moving and proper distance is simply $$ D_c(t) = \frac{D(t)}{a(t)}, $$ so that both are the same at the present day $a(t_0)=1$. Thus $$ \begin{align} D_\text{c,ph}(t_\text{ob}) &= \frac{D_\text{ph}(t_\text{ob})}{a_\text{ob}},\\ D_\text{c,lc}(t_\text{em},t_\text{ob}) &= \frac{D_\text{lc}(t_\text{em},t_\text{ob})}{a_\text{em}},\\ D_\text{c,H}(t_\text{ob}) &= \frac{D_\text{H}(t_\text{ob})}{a_\text{ob}}. \end{align} $$ In fact, it would have been more convenient to start with co-moving distances instead of proper distances; in case you've been wondering where all the above integrals come from, those can be derived from the null geodesic of the FLRW metric: $$ 0 = c^2\text{d}t^2 - a^2(t)\text{d}\ell^2, $$ such that $$ \text{d}\ell = \frac{c\,\text{d}t}{a(t)} = \frac{c\,\text{d}a}{a\,\dot{a}} = \frac{c\,\text{d}a}{a^2\,H(a)}, $$ and $\text{d}\ell$ is the infinitesimal co-moving distance. So, what can we do with all these tedious calculations? Well, we can draw a graph of the evolution of the expanding universe (after inflation). Inspired by a similar plot in the article from Davis & Lineweaver, I made the following diagram: This graph contains a lot of information. On the horizontal axis, we have the co-moving distance of light sources, in Gigalightyears (bottom) and the corresponding Gigaparsecs (top). The vertical axis shows the age of the universe (left) and the corresponding scale factor $a$ (right). The horizontal thick black line marks the current age of the universe (13.8 billion years). Co-moving sources have a constant co-moving distance, so that their world lines are vertical lines (the black dotted lines correspond with sources at 10, 20, 30, etc Gly). Of course, our own world line is the thick black vertical line, and we are currently situated at the intersection of the horizontal and vertical black line. 
The yellow lines are null geodesics, i.e. the paths of photons. The scale of the time axis is such that these photon paths are straight lines at 45° angles. The orange line is our current past light cone. This is the cross-section of the universe that we currently observe: all the photons that we receive now have travelled on this path. The path extends to the orange dashed line, which is our future light cone. The particle horizon, i.e. the edge of our observable universe, is given by the blue line; note that this is also a null geodesic. The red line is our event horizon: photons emitted outside the event horizon will never reach us. The purple dashed curves are distances corresponding with particular redshift values $z(t_\text{ob})$, in particular $z(t_\text{ob}) = 1, 3, 10, 50, 1000$. Finally, the green curves are lines of constant recession velocity, in particular $v_\text{rec}(t_\text{ob}) = c, 2c, 3c, 4c$. Of course, the curve $v_\text{rec}(t_\text{ob}) = c$ is nothing else than the Hubble distance. What can we learn from all this? Quite a lot: The current (co-moving) distance of the edge of the observable universe is 46.2 billion ly. Of course, the total universe can be much bigger, and is possibly infinite. The observable universe will keep expanding to a finite maximum co-moving distance at cosmic time $t = \infty$, which is 62.9 billion ly. We will never observe any source located beyond that distance. Curves of constant recession velocity expand to a maximum co-moving distance, at $t_\text{acc} = 7.7$ billion years, and then converge again. This time $t_\text{acc}$, indicated by the horizontal black dashed line, is in fact the moment at which the expansion of the universe began to accelerate. Curves of constant redshift also expand first, and converge when $t$ becomes very large. 
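Several of the numbers quoted in this list can be reproduced directly from the integrals defined earlier. A sketch (SciPy assumed; the hard-coded $c/H_0 = 14.53$ Gly follows from $H_0 = 67.3$ km/s/Mpc):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

OMEGA_R, OMEGA_M, OMEGA_L = 9.24e-5, 0.315, 0.685
C_OVER_H0 = 14.53  # Hubble radius c/H0 in Gly, for H0 = 67.3 km/s/Mpc

def E(a):
    """sqrt(Omega_R + Omega_M*a + Omega_L*a**4), the common denominator."""
    return np.sqrt(OMEGA_R + OMEGA_M * a + OMEGA_L * a**4)

def comoving(a_lo, a_hi):
    """Co-moving distance (Gly) between two scale factors: (c/H0) int da/E."""
    value, _ = quad(lambda a: 1.0 / E(a), a_lo, a_hi)
    return C_OVER_H0 * value

D_ph_now = comoving(0.0, 1.0)     # particle horizon today      (~46.2 Gly)
D_ph_max = comoving(0.0, np.inf)  # its asymptotic maximum      (~62.9 Gly)
D_eh_now = comoving(1.0, np.inf)  # event horizon today         (~16.7 Gly)
D_H_now = C_OVER_H0 / E(1.0)      # Hubble distance today       (~14.5 Gly)
v_horizon = D_ph_now / C_OVER_H0  # Hubble's law: v/c = D/(c/H0), ~3.18

def redshift_at(D_c):
    """Redshift of a source on today's past light cone at co-moving D_c Gly."""
    a_em = brentq(lambda a: comoving(a, 1.0) - D_c, 1e-9, 1.0)
    return 1.0 / a_em - 1.0

print(D_ph_now, D_ph_max, D_eh_now, D_H_now, v_horizon, redshift_at(10.0))
```

For example, `redshift_at(10.0)` reproduces the quoted $z \approx 0.87$ for a source at 10 Gly, and `v_horizon` recovers the $3.18c$ recession of the particle horizon.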
This means that a given source, which moves along a vertical line, will be observed with an infinite redshift when it enters the particle horizon, after which its redshift will decrease to a minimum value, and finally increase again to infinity at $t = \infty$. In other words, every galaxy outside our local cluster will eventually be redshifted to infinity when the universe becomes very old. This is due to the dominance of dark energy at late cosmic times. Photons that we currently observe from sources at co-moving distances of 10, 20, 30 and 40 Gly have redshifts of 0.87, 2.63, 8.20 and 53.22 respectively. The edge of the observable universe is receding from us with a recession velocity of more than 3 times the speed of light ($3.18c$, to be exact). In other words, we can observe sources that are moving away from us faster than the speed of light. Sources at co-moving distances of 10, 20, 30 and 40 Gly are receding from us at 0.69, 1.38, 2.06 and 2.75 times the speed of light, respectively. Sources outside our particle horizon are moving away even faster. There is no a priori limit to the maximum recession velocity: it is proportional to the size of the total universe, which could be infinite. The Hubble distance lies completely inside the event horizon. It will asymptotically approach the event horizon (as well as the curve of constant redshift 1) as $t$ goes to infinity. The current Hubble distance is 14.5 Gly (corresponding with $z=1.48$), while the current distance to the event horizon is 16.7 Gly ($z=1.87$). Photons emitted today by sources that are located between these two distances will still reach us at some time in the future. Although the difference between the Hubble distance and the event horizon today is rather small, this difference was much larger in the past. Consider for example the photons that we observe today, emitted by a source at a co-moving distance of 30 Gly.
It emitted those photons at $t=0.62$ Gy, when the source was moving away from us at $3.5c$. The source continued its path along the vertical dotted line, while the photons moved on our past light cone. At $t=0.83, 1.64, 4.06$ Gy those photons passed regions that were moving away from us at $3c, 2c, c$ respectively. Along the way, those photons accumulated a total redshift of 53.22. From all the above, it should be clear that the Hubble distance is not a horizon. I should stress again that all these calculations are only valid for the standard ΛCDM-model. Apologies for the very lengthy post, but I hope it has clarified a few things. – Pulsar
+1 for the Davis and Lineweaver reference - I can't cite that paper enough around here. Also awesome entire rest of the post! And a belated welcome to Physics Stack Exchange!
I find this to be vacuous philosophical babbling. A velocity is always a coordinate-dependent concept, even in special relativity (and in Newtonian physics). That doesn't mean that we shouldn't talk about it. We must talk about it because it is a crucial concept to describe all motion in Nature. For objects separated by large regions of a curved spacetime, it becomes ambiguous and important what we exactly mean by a velocity, but it is still true that there is a correspondence with a situation in special relativity, which is the reason why we don't see things behind the cosmic horizon.
Having the isometric embedding is pretty funky, but your plot can put a great deal more info on essentially the same plot, so they would go well together. – Selene Routley
@WetSavannaAnimal Thank you very much, I really appreciate that. Yes, it took quite a bit of time :-) I made my own cosmology notes and I wrote a Python programme for all the calculations, and to create plots like this; all in all a few weeks' work. I've made similar plots for proper distances, which you can see here. As for wiki, that would be a good idea, I'll think about it. Thanks again!
Yes, the expansion of space itself is allowed to exceed the speed-of-light limit because the speed-of-light limit only applies to regions where special relativity – a description of the spacetime as a flat geometry – applies. In the context of cosmology, especially a very fast expansion, special relativity doesn't apply because the curvature of the spacetime is large and essential. The expansion of space makes the relative speed between two places/galaxies scale like $v=Hd$ where $H$ is the Hubble constant and $d$ is the distance. When this $v$ exceeds $c$, it means that the two places/galaxies are "behind the horizons of one another", so they can't observe each other anytime soon. But they're still allowed to exist. In quantum gravity, i.e. string theory, there may exist limits on the acceleration of the expansion, but the relevant maximum acceleration is extreme – Planckian – and doesn't invalidate any process we know, not even those in cosmic inflation. – Luboš Motl
@Motl Then one can assume that sending messages between such galaxies is impossible, since they are receding from each other faster than the speed of light? How then do they still affect each other through gravity, which propagates at the speed of light?
– Force
Dear Jim, yes, as I said, the fact that the mutual speed exceeds the speed of light - although it's not really well-defined - means that they can't see each other or otherwise communicate. Otherwise, they also can't send signals to each other gravitationally - gravitational signals propagate at the speed of light, too. However, both faraway galaxies still feel the gravitational field - but in general relativity, gravity is given by the local curvature of the spacetime, which is there regardless of what the other distant galaxy is doing right now.
"the fact that the mutual speed exceeds the speed of light - although it's not really well-defined - means that they can't see each other or otherwise communicate" Yes, it's not uniquely defined, and that's the fundamental answer to the question. However, for the most common cosmological definition of the speed, the remainder of this sentence is false. See arxiv.org/abs/astro-ph/0310808. – user4552
Dear Ben, maybe I am not using the "most common definition of the speed" in a curved spacetime, but the statement of mine is surely correct for the "most natural definition of the speed" in a curved spacetime for any comparison with special relativity.
Still, photons can travel between galaxies that are receding from each other faster than light already at the time of emission; see the Davis & Lineweaver paper linked by @Pulsar below. By the way, impressively, Davis wrote this paper as part of her Ph.D. – Thriveth
Your question is based on a fundamental misconception. You say: At the beginning, right after the Big Bang, the universe was the size of a coin
The universe may well be infinite in size, and if so it has always been infinite in size right back to the moment of the Big Bang. There is no point in the observable universe that is moving away from us at faster than the speed of light, but assuming the universe is infinite, or at least much bigger than the bit we can see, everything farther away from us than the edge of the observable universe is moving away from us faster than the speed of light. As Luboš says this doesn't violate relativity since it's space that's expanding not the objects themselves moving, and there is no limit to the expansion rate of space. In fact if there was a period of inflation immediately after the Big Bang, during this period space expanded at a rate that makes the speed of light look positively glacial. If you're interested in a bit more detail about how we model the expansion of the universe search this site for "FLRW metric", or Google for it. John RennieJohn Rennie $\begingroup$ "There is no point in the observable universe that is moving away from us at faster than the speed of light[...]" This is a common misconception. See Davis and Lineweaver arxiv.org/abs/astro-ph/0310808 . The answer by Pulsar is the only one that addresses the fundamental issue inherent the question, which is that GR doesn't have a well defined notion of the velocity of one object relative to some other distant object. (The Davis and Lineweaver uses one particular definition in the context of cosmology.) $\endgroup$ I'm going to go with "yes, but that's less interesting than you may think." 1. The laws of physics are local Every law of physics that we know of only "sees" a tiny portion of the universe. The universe seems to consist of the same physical laws being applied identically and independently to every little part of itself. If you look at any tiny part of an expanding universe, nothing untoward is going on. 
Everything is following the same laws as in any other situation, and nothing is exceeding the speed of light. When you stitch all of these pieces together, you get a global spacetime where the total volume of space seems to increase very quickly, but this "total volume of space" doesn't appear in any physical law, and in some sense you could think of it as a human invention.
2. Even globally, it's not clear that anything untoward is going on
The Milne model is the zero-density limit of the standard (FLRW) expanding cosmological model. It's a useful source of counterexamples to misconceptions about cosmology because it's actually just a portion of Minkowski space (the flat spacetime of special relativity) in different coordinates, so you can apply your special-relativistic intuition and calculational techniques to problems in cosmology, often getting results that contradict what might appear to be true in the FLRW coordinates. In the Milne model, recessional velocities between objects can be arbitrarily high (exceeding $c$ or any particular multiple of $c$). This doesn't contradict special relativity because the definition of "recessional velocity" doesn't match the usual definition of "velocity" in special relativity. The recessional velocity is, in SR terms, the rapidity (times $c$). In the Milne model, you and your friend (both at rest relative to the Hubble flow) can be 1 meter apart, wait for 1 second (measured by your respective watches), and at the end of that second be $10^{100}$ meters apart – or any other time interval and two distances you like, as long as the later distance is larger than the earlier one. How can there possibly be "room" for this in Minkowski space? It's pretty easy to see what's going on. Since any inertial frame is as valid as any other, I'll pick one where you and your friend have equal and opposite velocities $\pm \mathbf v$.
After a time $\tau$ has elapsed on your watches, your $t$ coordinates will have increased by $\gamma\tau$ and your $x$ coordinates by $\pm\mathbf v\gamma\tau$. Since $\gamma\to\infty$ as $|\mathbf v|\to c$, these coordinate changes can be arbitrarily large, so there can be plenty of "room" at the end even if $\tau$ is small. Another way of looking at this is that the triangle inequality doesn't work in spacetime. You might expect that if you and your friend start at the same point and you each travel in a straight line (inertial motion) for 1 second (elapsed proper time = length of worldline), that the distance between you should be at most 2 light seconds. In fact, though, the distance can be anything. If we classify that as "superluminal expansion of space" (and I think we should, since we are literally doing FLRW cosmology here), then superluminal expansion of space is allowed even in special relativity. When you move from this special case to general FLRW cosmology, you lose the special-relativistic correspondence, but I don't think that makes the possibility of "superluminal" expansion any more surprising. On the contrary: if it can happen in special relativity, then of course it can happen in general relativity. answered Sep 26, 2019 at 20:36 – benrg
Very interesting. What about the approximation to an "empty" universe?
@Edouard There's no Newtonian gravity in this answer. The Milne cosmology isn't Newtonian. In Guth's toy Newtonian model, the collapse time doesn't depend on the radius of the universe, but it does depend on the density. That's true in FLRW cosmology too. You can cut off the FLRW geometry at some arbitrary radius (at least if $p=0$) and the time to the big crunch will be independent of the cutoff. It depends on local properties (density, Hubble parameter) and not on global properties (total mass, radius).
– benrg, Feb 2, 2021 at 22:07
I should add, in order to avoid inconsistencies in the text and in the derived formulas, that the expression for $D(z_{ob},t_{ob})$ as it is written down here already IS a comoving distance, which forces you to set $a_{ob} = a(t_{0}) = 1$ and $t_{ob} = t_0$, $z_{ob} = 0$. This is a direct consequence of setting $ds^2 = 0$ in the FLRW line element, yielding the light-cone equation. This distance is also the comoving distance given on the horizontal axis of the diagram. The text and the derived formulas should be adapted to these notions. For a correct treatment, refer to the Davis and Lineweaver papers. – Rene Kail (edited Oct 13, 2018 at 6:52 by Buzz ♦)
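Returning to benrg's special-relativity example above: the claim that two clocks can each tick one second while their coordinate separation grows without bound is just the statement that $2v\gamma\tau$ diverges as $v \to c$. A minimal numerical sketch:

```python
import math

C = 2.998e8  # speed of light, m/s

def separation(v_over_c, tau):
    """Coordinate gap (m) between two observers launched from one event with
    velocities +v and -v, after proper time tau (s) has elapsed on each clock.
    Each worldline reaches x = +/- v*gamma*tau, so the gap is 2*v*gamma*tau."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)
    return 2.0 * v_over_c * C * gamma * tau

# After one second of proper time, the gap exceeds any bound as v -> c:
for v in (0.9, 0.999, 0.999999999999):
    print(f"v = {v} c  ->  gap after 1 s of proper time: {separation(v, 1.0):.3e} m")
```

Reaching $10^{100}$ m within one second of proper time only requires a correspondingly absurd (but finite) rapidity.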
Exponential of Real Number is Strictly Positive/Proof 5/Lemma

Let $x$ be a real number. Let $\exp$ denote the (real) exponential function. Then:
$$\forall x \in \R: \exp x \ne 0$$
This proof assumes the definition of $\exp$ as the solution to an initial value problem. That is, suppose $\exp$ satisfies:
$$(1): \quad D_x \exp x = \exp x$$
$$(2): \quad \exp 0 = 1$$
on $\R$. Aiming for a contradiction, suppose that $\exists \alpha \in \R: \exp \alpha = 0$. Suppose that $\alpha > 0$. Let $J = \left[{0 \,.\,.\, \alpha}\right]$. From Exponential Function is Continuous, $\exp$ is continuous on $J$. From Max and Min of Function on Closed Real Interval:
$$\exists K \in \R: \forall x \in J: \left\vert{\exp x}\right\vert < K$$
Then, $\forall n \in \N: \exists c_n \in J$ such that:
$$\begin{align}
1 &= \exp 0 && \text{by hypothesis $(2)$}\\
&= \sum_{j \mathop = 0}^{n - 1} \frac{\exp^{(j)} \alpha}{j!} (-\alpha)^j + \frac{\exp c_n}{n!} (-\alpha)^n && \text{Taylor's Theorem for Univariate Functions}\\
&= \sum_{j \mathop = 0}^{n - 1} \frac{\exp \alpha}{j!} (-\alpha)^j + \frac{\exp c_n}{n!} (-\alpha)^n && \text{by hypothesis $(1)$}\\
&= \sum_{j \mathop = 0}^{n - 1} \frac{0}{j!} (-\alpha)^j + \frac{\exp c_n}{n!} (-\alpha)^n && \text{from our assumption aiming at a contradiction}\\
&= \frac{\exp c_n}{n!} (-\alpha)^n
\end{align}$$
Taking absolute values and using $\left\vert{\exp c_n}\right\vert < K$, it follows that $\forall n \in \N: 1 \le K \dfrac{\alpha^n}{n!}$. That is, dividing both sides by $K$:
$$\forall n \in \N: \dfrac 1 K \le \dfrac{\alpha^n}{n!}$$
But from Power over Factorial, $\dfrac{\alpha^n}{n!} \to 0$. This contradicts our assumption. The same argument, mutatis mutandis, proves the result for $\alpha < 0$.
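The "Power over Factorial" step is easy to see numerically; a quick illustration (with $\alpha = 10$ chosen arbitrarily):

```python
from math import factorial

# For any fixed alpha, alpha**n / n! grows while n < alpha and then
# collapses toward 0, so it eventually drops below 1/K for every K > 0.
alpha = 10.0
seq = [alpha**n / factorial(n) for n in range(0, 101, 10)]
print(seq[0], seq[1], seq[-1])  # 1.0, then large, then vanishingly small
```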
By hypothesis $(2)$:
$$\alpha = 0 \implies \exp \alpha = 1 \ne 0$$
$\blacksquare$

Retrieved from "https://proofwiki.org/w/index.php?title=Exponential_of_Real_Number_is_Strictly_Positive/Proof_5/Lemma&oldid=362868"
Categories: Proofs by Contradiction, Exponential of Real Number is Strictly Positive
Reconstructing neuronal circuitry from parallel spike trains

Ryota Kobayashi, Shuhei Kurita, Anno Kurth, Katsunori Kitano, Kenji Mizuseki, Markus Diesmann, Barry J. Richmond & Shigeru Shinomoto

Nature Communications volume 10, Article number: 4468 (2019)

Subjects: Neural decoding

State-of-the-art techniques allow researchers to record large numbers of spike trains in parallel for many hours. With enough such data, we should be able to infer the connectivity among neurons.
Here we develop a method for reconstructing neuronal circuitry by applying a generalized linear model (GLM) to spike cross-correlations. Our method estimates connections between neurons in units of postsynaptic potentials and the amount of spike recordings needed to verify connections. The performance of inference is optimized by counting the estimation errors using synthetic data. This method is superior to other established methods in correctly estimating connectivity. By applying our method to rat hippocampal data, we show that the types of estimated connections match the results inferred from other physiological cues. Thus our method provides the means to build a circuit diagram from recorded spike trains, thereby providing a basis for elucidating the differences in information processing in different brain regions.

Introduction

Over the past decade it has become possible to record from much larger numbers of neurons than in the past1,2,3,4,5, even though this number is still a mere shadow of the total number of neurons present. The premise behind collecting these large data sets is that this could lead to improvements in correlating neuronal activity with specific sensations, motion, or memory, and possibly lead to improvements in adaptation and learning as well6,7,8,9,10. Having such large data sets leads to difficulties in handling the data and interpreting the results. There are two main approaches to handle large amounts of recording data. In the first approach, researchers have developed methods to reduce dimensionality while minimizing the loss of information11,12,13. The second approach, which we take here, is to use all of the data to carry out mesoscopic neuroanatomy, that is, to reveal the fine neuronal circuitry in which neural circuit computation is carried out.
From these high channel count recordings, one should be able to estimate neuronal connectivity by quantifying the degree to which firing from a given neuron is influenced by the firing of neurons from which the index neuron is receiving input14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30. For this purpose, we develop an analytical tool that estimates neuronal connectivity in measurement units of postsynaptic potentials (PSPs). In this study we also investigate how much data are needed to reliably estimate the connections between pairs of neurons. Because reconstructed connectivity is not guaranteed to reflect anatomical connectivity31,32,33, we evaluate the accuracy of estimation by directly comparing the estimated connections with the true connections, using synthetic data generated by simulating a network of Hodgkin–Huxley (HH)-type neurons or a large network of leaky integrate-and-fire (LIF) neurons. Finally, we apply this method to spike trains recorded from rat hippocampus. For the experimental data, we compare our estimates of whether an innervating connection is excitatory or inhibitory with the results obtained by manually analyzing other physiological information such as spike waveforms, autocorrelograms, and mean firing rate.

Estimating neuronal connections

To estimate neuronal connectivity between each pair of neurons, we obtain the cross-correlation (CC) by collecting spike times of a neuron measured relative to every spike of a reference neuron (Fig. 1a). We explore the CC for evidence of a monosynaptic impact of a few milliseconds using the generalized linear model (GLM). Here, neuronal connectivity is detected by fitting a coupling filter, while slow, large-scale wavy fluctuations that are often present in recorded spike trains are absorbed by adapting the slow part of the GLM. We call our method "GLMCC" (METHODS).

Fig. 1: Estimating neuronal connections.
a Connectivity between neurons is estimated by fitting a generalized linear model (GLM) to the cross-correlation (CC). \({J}_{ij}\) represents a coupling from the \(j\)-th neuron to the \(i\)-th neuron. Excitatory and inhibitory neurons are depicted as triangles and circles, and their synaptic connections are colored magenta and cyan, respectively. Surrounding neurons may induce large-scale fluctuations in the CC (light green line). b Neuronal connectivity is visualized by the Hinton diagram, in which excitatory and inhibitory connections are represented by magenta and cyan squares, respectively, with sizes (areas) proportional to the postsynaptic potential (PSP) \({w}_{ij}\). c Distributions of excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs) of a simulated network.

Criterion for the presence of connections

A neuronal connection is considered significant when the estimated parameter falls outside the confidence interval of a given significance level for the null hypothesis that the connection is absent. If the parameter remains within the confidence interval, the state of the connection is undetermined (METHODS). The number of pairs considered to be connected will depend on the significance level \(\alpha\) and on the strength of the correlation. Estimation methods treat connections as if they were all direct ones, causing strong indirect influences to be mistaken for direct connections. Neurophysiologists often try to avoid these false positives (FPs) by shifting the significance level to small values, that is, by moving \(\alpha\) to very stringent levels. However, being conservative about FPs means that existing connections important for information processing will be missed, thereby producing many false negatives (FNs).
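As an illustrative sketch (function names are ours, not from the paper), this criterion amounts to a three-way decision against the null-hypothesis bound \(J_{\pm} = \pm c/(T\tau\lambda_{\rm pre}\lambda_{\rm post})^{1/2}\) derived in the METHODS, with \(c = 5.16\) for \(\alpha = 0.001\):

```python
import math

def null_bound(T, tau, rate_pre, rate_post, c=5.16):
    """Half-width J_+ of the null-hypothesis confidence interval (c = 5.16 for alpha = 0.001)."""
    return c / math.sqrt(T * tau * rate_pre * rate_post)

def classify_connection(J_hat, T, tau, rate_pre, rate_post, c=5.16):
    """Classify an estimated coupling J_hat as excitatory, inhibitory, or undetermined."""
    J_plus = null_bound(T, tau, rate_pre, rate_post, c)
    if J_hat > J_plus:
        return "excitatory"
    if J_hat < -J_plus:
        return "inhibitory"
    return "undetermined"  # parameter stays inside the confidence interval
```

For a 90 min (5400 s) recording of two 5 Hz neurons with \(\tau = 0.004\) s, the bound is about 0.22, so only couplings of larger magnitude are declared significant.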
To capture the manner in which the numbers of FPs and FNs change with the level of conservatism used for estimating connections, we applied our inference model to spike trains obtained from a network of HH neurons, in which the true anatomical connectivity is known. With this knowledge, we searched for the optimal level of conservatism or the significance level that may balance the conflicting demands for reducing FPs and FNs. Our simulation used a network of 1000 HH neurons consisting of 800 excitatory and 200 inhibitory neurons (cf. Fig. 1b). In the simulation, excitatory neurons innervated 12.5% of other neurons with excitatory postsynaptic potentials (EPSPs). These excitatory connections were log normally distributed34,35,36,37 (Fig. 1c). Inhibitory neurons randomly innervated 25% of other neurons with inhibitory postsynaptic potentials (IPSPs). These inhibitory connections were normally (Gaussian) distributed38. We simulated the network for a period representing 5400 s (90 min) with step sizes of 0.01 and 0.001 ms for excitatory and inhibitory neurons, respectively (METHODS). Our simulation reproduced irregular neuronal firing and skewed distribution of firing rates, which are consistent with balanced state network models39 (Supplementary Fig. 1). To illustrate the performance of estimating connections, we sampled 20 neurons out of the entire population. Figure 2a shows the estimated connection matrices obtained using different significance levels, in reference to the true connectivity. Here we have not considered weak excitatory connections whose EPSPs are smaller than 1 mV, because the amount of spike recording is insufficient for identifying connections of this level. The connection matrix is divided into four quadrants representing connections between inhibitory–excitatory, excitatory–excitatory, excitatory–inhibitory, and inhibitory–inhibitory neurons. 
True connections for the second and third quadrants are excitatory, and those of the fourth and first quadrants are inhibitory. For \(\alpha =0.01\), too many false connections were assigned to pairs of neurons; there were 15 false connections (4.3%) in this sample. At the other extreme, all FPs can be excluded by decreasing the significance level (down to \(\alpha =10^{-24}\)). In the latter case most existing connections are lost, and a large number of FNs arise; 22 among 32 existing connections (69%) are missed in this example. The numbers of FPs and FNs for the excitatory and inhibitory categories are shown below the connection matrices, indicating that the total number of FPs and FNs may be minimized between these extreme cases.

Fig. 2: Selecting the significance level. a The connection matrices are estimated with different levels of conservatism against making false positives (FPs), which are represented by the significance level \(\alpha\). In each connection matrix, the \(x\)-axis indicates reference (index) neurons. The connection matrix is divided into four quadrants representing inhibitory–excitatory, excitatory–excitatory, excitatory–inhibitory, and inhibitory–inhibitory zones. The numbers of FP and false negative (FN) connections for the excitatory and inhibitory categories are depicted below the matrices. In the true connectivity, weak excitatory connections whose EPSPs are smaller than 1 mV are not considered. b The Matthews correlation coefficient (MCC) is plotted against the significance level \(\alpha\). The MCC takes a maximum at an intermediate level of cautiousness, given by \(\alpha\, =\,0.001\). Source data are provided as a Source Data file.

To balance the FPs and FNs simultaneously, we selected the significance level that maximized the Matthews correlation coefficient (MCC)27,40. The significance level was set to \(\alpha\, =\,0.001\) (Fig. 2b).
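The MCC balances all four outcome counts in a single score; a minimal sketch of this selection (the per-\(\alpha\) counts below are illustrative, not taken from the paper):

```python
import math

def matthews_cc(tp, fp, tn, fn):
    """Matthews correlation coefficient from the four outcome counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Pick the significance level with the highest MCC
# (counts per alpha are illustrative placeholders):
counts_by_alpha = {
    1e-2:  (30, 15, 300, 2),   # lenient: many FPs
    1e-3:  (28, 4, 311, 4),    # intermediate
    1e-24: (10, 0, 315, 22),   # stringent: many FNs
}
best_alpha = max(counts_by_alpha, key=lambda a: matthews_cc(*counts_by_alpha[a]))
```

With these illustrative counts the intermediate level wins, mirroring the behavior shown in Fig. 2b.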
Although false connections remain, the neuronal circuit was most accurately reconstructed with \(\alpha \,=\,0.001\). We adopted \(\alpha\, =\,0.001\) throughout the following analyses.

Duration of spike recording

The necessary duration of spike recording can be estimated even without fitting the statistical model to the spike trains. This is because the distribution of the connection parameter for the null hypothesis is obtained solely in terms of the observation interval (\(T\)) and the firing rates of the pre- and postsynaptic neurons (\({\lambda }_{{\rm{pre}}}\) and \({\lambda }_{{\rm{post}}}\)) (METHODS). The confidence interval of the connection parameter (\(J\)) is
$${J}_{\pm }\,=\,\pm c/{\left(T\tau {\lambda }_{{\rm{pre}}}{\lambda }_{{\rm{post}}}\right)}^{1/2},\qquad (1)$$
where \(\tau\) is the time scale of synaptic impact, which is chosen by maximizing the model likelihood: \(\tau\, =\,0.004\) s for the simulation data and \(\tau \,=\,0.001\) s for the rat hippocampal data. The coefficient \(c\) is given as 5.16 for \(\alpha \,=\,0.001\). We assume that the connection parameter \(J\) is proportional to the PSP, \(w\) mV41:
$$J\,=\,aw.\qquad (2)$$
The coefficient \(a\) is determined using synthetic data as \(a\,=\,0.39\) for the EPSP and \(a\,=\,1.57\) for the IPSP. By combining this with Eq. (1), the necessary duration of spike recording needed to determine the likely presence of a connection of PSP \(w\) is given as
$$T\,> \,\frac{{c}^{2}}{\tau {\lambda }_{{\rm{pre}}}{\lambda }_{{\rm{post}}}{a}^{2}{w}^{2}}.\qquad (3)$$
Because the coefficient \(a\) is larger for the IPSP than for the EPSP, an inhibitory connection is detected more easily than an excitatory connection of the same PSP magnitude \(|w|\). This is in conflict with the results of some other studies16,42,43.
The disagreement is due to the difference in simulation models; in our simulation model, the time scale of the inhibitory synapse is chosen to be longer than that of the excitatory synapse on the basis of physiological experiments44,45. Accordingly, the inhibitory response is slower and has a larger integrated effect than the excitatory response. Our GLMCC should be able to properly detect the overall integrated effect (Supplementary Fig. 2). To make reliable inference, in addition to the above relation, it is also necessary to have collected a sufficiently large number of spikes during the interaction time window on the order of a few milliseconds. Here we require (METHODS):
$$T{\lambda }_{{\rm{pre}}}{\lambda }_{{\rm{post}}}\,> \,10/\tau \, [{{\rm{s}}}^{-1}].\qquad (4)$$
Table 1 shows the results for several cases of firing rates and assumed PSPs using \(\alpha =0.001\). Unsurprisingly, detecting a weak connection of a low-firing neuron requires gathering data for a long period of time. Figure 3a shows the connections estimated with different observation time windows, illustrating how weak connections become visible as the recording duration increases.

Table 1 Duration of spike recording required for verifying neuronal connections

Fig. 3: Neuronal circuits reconstructed from different observation time windows. a Neuronal connections estimated from the observation time windows of 600, 1800, and 5400 s (10, 30, and 90 min) are plotted in reference to true connectivity. In each connection matrix, the \(x\)-axis indicates reference neurons. In the network graphs shown in the second panel, excitatory and inhibitory neurons are depicted as triangles and circles, respectively. b Estimated postsynaptic potentials (PSPs) (\(\hat{w}\)) plotted against true parameters (\(w\)) were computed for 100 neurons randomly selected from the simulation.
Points in the first and third quadrants represent qualitatively correct inferences for excitatory and inhibitory connections (magenta and cyan, respectively). Points on the nonzero \(y\)-axis represent the false positive connections for unconnected pairs. Points on the nonzero \(x\)-axis represent the false negatives. c Detection status for connections of given PSPs with respect to the observation window (\(T\)). Connections estimated as excitatory and inhibitory are colored magenta and cyan, respectively, while undetermined ones are colored gray. Diagonal and vertical lines represent the theoretical formulas (3) and (4), respectively. Source data are provided as a Source Data file.

Estimating PSPs

We believe that our method is of particular interest because it couches the connections in terms of PSPs for the individual neuronal pairs. Figure 3b compares the estimated PSPs (\(\hat{w}\)) against the true values (\(w\)) from the numerical simulation. Here we represent \(\hat{w}\,=\,0\) if the connection is undetermined, i.e., not significant. Thus, unconnected links (\(w\,=\,0\)) that were classified as undetermined (true negatives) are placed at the origin. Points lying on the nonzero \(x\)-axis are existing connections that were not detected. Points lying on the nonzero \(y\)-axis are the functional or virtual connections that were estimated for unconnected pairs. The points in the first and third quadrants represent true positives, or existing connections whose signs were correctly inferred as excitatory or inhibitory, respectively. The points in the second and fourth quadrants are existing connections whose signs were misclassified. The number of nonzero connections increases with the recording duration. Existing connections with large PSP amplitude tend to be detected with the signs correctly identified (points in the first and third quadrants). There are also virtual connections assigned for unconnected pairs (nonzero \(y\)-axis).
The number of such FPs is larger than the expected number of statistical errors (Fig. 2a). This implies that the false connections may not be mere statistical fluctuations, but rather that they may reflect functional connectivity mediated indirectly via other unobserved neurons. Figure 3c demonstrates the way individual connections emerge as the recording duration increases. Here the abscissa is the observation window (\(T\)) multiplied by the firing rates of the pre- and postsynaptic neurons (\({\lambda }_{{\rm{pre}}}\) and \({\lambda }_{{\rm{post}}}\)) so that all data are organized into a unified formula (inequality (3)). The values of \(T{\lambda }_{{\rm{pre}}}{\lambda }_{{\rm{post}}}\) for the excitatory connections tended to be smaller than those of inhibitory connections, because the firing rates of excitatory neurons were typically lower than those of inhibitory neurons.

Excitatory–inhibitory (E–I) dominance index

The probability of misassigning individual connectivity for unconnected pairs tends to be higher than the statistical significance level, because their firing is generally correlated with each other due to indirect interactions through unobserved neurons. Nevertheless, the excitatory and inhibitory characteristics of individual neurons can be inferred with a lower error rate, because we can refer to multiple connections for each neuron. We define an excitatory–inhibitory (E–I) dominance index as
$${d}_{{\rm{ei}}}\,=\,\frac{{n}_{{\rm{e}}}\,-\,{n}_{{\rm{i}}}}{{n}_{{\rm{e}}}\,+\,{n}_{{\rm{i}}}},$$
where \({n}_{{\rm{e}}}\) and \({n}_{{\rm{i}}}\) represent the numbers of identified excitatory and inhibitory connections projecting from each neuron, respectively. The E–I dominance indexes computed for 2 networks of 80 neurons each are plotted against the firing rates of the neurons (Fig. 4a). In this case, the excitatory and inhibitory characteristics of individual neurons were well identified based on the E–I dominance indexes.
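A minimal sketch of this index and the resulting cell-type call (helper names are ours):

```python
def ei_dominance(n_e, n_i):
    """E-I dominance index d_ei = (n_e - n_i) / (n_e + n_i); None if no connections found."""
    if n_e + n_i == 0:
        return None
    return (n_e - n_i) / (n_e + n_i)

def putative_type(n_e, n_i):
    """Call a neuron excitatory (d_ei > 0), inhibitory (d_ei < 0), or leave it unclassified."""
    d = ei_dominance(n_e, n_i)
    if d is None or d == 0:
        return "unclassified"
    return "excitatory" if d > 0 else "inhibitory"
```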
Inhibitory neurons typically exhibited higher firing rates in comparison to excitatory neurons. The firing irregularity, measured using the local variation (\(Lv\)) of interspike intervals46,47, is plotted against the firing rate. Spiking of inhibitory neurons tended to be more regular (smaller \(Lv\)) than that of excitatory neurons.

Fig. 4: Excitatory–inhibitory (E–I) dominance index. a E–I dominance index \({d}_{{\rm{ei}}}\,=\,({n}_{{\rm{e}}}\,-\,{n}_{{\rm{i}}})/({n}_{{\rm{e}}}\,+\,{n}_{{\rm{i}}})\) and the firing irregularity (\(Lv\)), plotted against the firing rate for 160 HH-type neurons (2 networks of 80 neurons). Excitatory and inhibitory neurons are plotted as triangles and disks, colored magenta and cyan, respectively. b The rates at which excitatory and inhibitory characteristics are identified correctly according to \({d}_{{\rm{ei}}}\,> \,0\) and \({d}_{{\rm{ei}}}\,<\,0\), respectively for excitatory and inhibitory neurons. Source data are provided as a Source Data file.

If we can record many spike trains in parallel for a long time, many excitatory and inhibitory neurons may be correctly identified according to \({d}_{{\rm{ei}}}\,> \,0\) and \({d}_{{\rm{ei}}}\,<\,0\), respectively. Figure 4b illustrates the manner in which the ratio of such correct identification depends on the total number of spike trains and the duration of observation.

Real spike trains

We apply our method to spike trains recorded from the hippocampal CA1 area of a rat while it was exploring a square open field (hc-3 data sets in Collaborative Research in Computational Neuroscience (CRCNS))48. Figure 5a displays the connections obtained with different observation time windows, demonstrating that more connections become visible as the recording duration increases, similar to the results seen with synthetic data. The connection matrix is divided into four quadrants according to the putative classification performed by manually analyzing waveforms, autocorrelograms, and mean firing rates49,50,51.
We observe that connections in the third, fourth, and first quadrants of the connectivity matrix, representing the excitatory–inhibitory, inhibitory–inhibitory, and inhibitory–excitatory zones, respectively, are detected in a relatively short observation window. This is consistent with our formula (3), given that inhibitory neurons typically fire at high rates, though inhibitory neurons are not necessarily a uniform population52. Connections in the second quadrant, representing the excitatory–excitatory zone, only appear after increasing the observation time window, and the estimated connection pattern remains sparse; more connections might have been identified if the observation period had been even longer. However, the estimated connection pattern is consistent with the finding, using intracellular recording in vitro, that inter-pyramidal connections in the hippocampus CA1 are sparse53.

Fig. 5: Neuronal circuits reconstructed from real spike trains in vivo. a Neuronal connections estimated from spike trains recorded from the hippocampal CA1 area of a rat. Estimations were made with observation time windows of 600, 1800, and 5400 s (10, 30, and 90 min) for neurons whose firing rate is \(> 0.5\) [Hz]. In each connection matrix, the \(x\)-axis indicates reference neurons. The connection matrix is partitioned into groups of putative excitatory and inhibitory neurons defined manually according to other physiological cues such as waveforms. b Cross-correlations of several pairs of neurons computed at different time windows. The slow part of the GLM adapted to the data is depicted as a light green line. The coupling filter is separately depicted in magenta, cyan, or gray, for the excitatory, inhibitory, or undetermined, respectively. Corroborated connections are indicated by arrows. c E–I dominance index (\({d}_{{\rm{ei}}}\)) and firing irregularity (\(Lv\)) are plotted against the firing rates for putative excitatory and inhibitory neurons.
Neurons with \(> 1\) connection and firing rate \(> 0.1\) [Hz] are plotted in the E–I dominance index. d Estimated connections among neurons in CA1 and Entorhinal Cortex (EC). The connection matrix is partitioned into putative excitatory and inhibitory neurons in CA1 and EC. One EC unit, whose excitatory or inhibitory characteristic was not determined by the manual analysis, is put in the gap (gray) between the excitatory and inhibitory groups. In the network graph shown in the second panel, excitatory- and inhibitory-dominated connections are depicted in magenta and cyan, while connections of mixed characteristics are depicted in gray. Source data are provided as a Source Data file.

Figure 5b shows CCs of several neuron pairs (see Supplementary Fig. 3 for all the detected pairs). Here, we have excluded spike records at an interval of \(\pm 1\) ms in the cross-correlogram, because near-synchronous spikes were not detected in the experiment due to the shadowing effect54. The CCs become less noisy as the observation time increases, and some connections are resolved (8–7, 13–3, 14–7, and 15–8). Some real spike trains exhibited large-scale wavy fluctuations (13–11), which may suggest that these neurons are under the influence of brain activity with lagged phases or perhaps they were responding to some unidentified external stimulus. Our method absorbs these fluctuations by adapting the slow part of the GLM (demonstrated as light green lines in Fig. 5b), and succeeds in detecting a tiny impact by fitting coupling filters (lines colored magenta, cyan, and gray, respectively represent excitatory, inhibitory, and undetermined connections in Fig. 5b). In Fig. 5c, we plotted the E–I dominance index (\({d}_{{\rm{ei}}}\)) and the firing irregularity (\(Lv\)) against the firing rate. The E–I dominance index is roughly consistent with the putative excitatory and inhibitory neurons.
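The firing irregularity \(Lv\) used in these plots can be computed from the interspike intervals; a minimal sketch following the definition of the local variation46,47 (\(Lv \approx 1\) for Poisson firing, smaller for more regular firing):

```python
def local_variation(isi):
    """Local variation Lv of a sequence of interspike intervals:
    Lv = 3/(n-1) * sum_i ((T_i - T_{i+1}) / (T_i + T_{i+1}))**2.
    Lv ~ 0 for regular firing, ~ 1 for Poisson firing, > 1 for bursty firing."""
    n = len(isi)
    if n < 2:
        raise ValueError("need at least two interspike intervals")
    s = sum(((isi[i] - isi[i + 1]) / (isi[i] + isi[i + 1])) ** 2
            for i in range(n - 1))
    return 3.0 * s / (n - 1)
```

For example, a perfectly regular train gives \(Lv = 0\), while alternating short and long intervals push \(Lv\) toward larger values.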
The irregularity of the putative excitatory neurons tended to be higher (larger \(Lv\)) than that of the inhibitory neurons, similar to what we observed with the simulation data. The good separation of the putative excitatory and inhibitory neurons in these plots implies that we can classify recorded cells into excitatory and inhibitory neurons reliably without having to rely on their waveforms, as the E–I dominance index, firing irregularity, and firing rate are obtained solely from the spike times. We also attempted to analyze a set of spike trains recorded simultaneously from multiple regions including CA1 and the Entorhinal Cortex (EC). Figure 5d demonstrates a matrix of estimated connections among excitatory and inhibitory neurons in CA1 and EC. Though the number of inter-regional connections was small in this sample data, our analysis method is generally applicable to any set of spike trains, irrespective of the recorded areas.

Comparison with other methods

We compared our method with the conventional CC method16 and the jittering method25 by applying these methods to synthetic and biological data. With the synthetic data, we can compare the performance of inferring connectivity with the true connectivity (Fig. 6a). Here, we have not shown excitatory connections smaller than 1 mV in the true connectivity matrix as in Fig. 2a, because they are unlikely to be detected in a 90 min recording. The relative performance of the analysis methods is unchanged even if the smaller EPSPs are included. The conventional CC analysis tended to produce a number of FPs, revealing a vulnerability to fluctuations in cross-correlograms. In contrast, the jittering method avoided making FPs, but missed many existing connections, in particular inhibitory connections.
This result may have occurred because the decrease in the firing rate induced by an inhibitory interaction is slower than an impulsive response to an excitatory stimulus; the jittering method counts spikes in each bin and tends to overlook a slower modulation in the firing rate. The number of false connections was 88, 27, and 13, respectively for the conventional CC method, the jittering method, and the GLMCC method, indicating the superiority of the present method. We also examined the manner in which the number of errors varies with the firing rate of neurons, and found that the estimation error increases with the firing rates (Supplementary Fig. 4).

Fig. 6: Comparison of estimation methods. a Connections estimated using the conventional cross-correlation method, the jittering method, and our GLMCC method, in reference to the true connectivity of the synthetic data (used in Fig. 3). For the GLMCC and the true connectivity, the size of each square is proportional to the PSP amplitude, while for the first two methods, the estimated connections are represented in equal size, because they do not estimate the PSP. b Neuronal connections estimated from spike trains recorded from the hippocampus of a rat (used in Fig. 5d). Source data are provided as a Source Data file.

We also compared the connections estimated from the real biological data recorded from the hippocampus of a rat (Fig. 6b). The conventional CC method and the jittering method suggested many (false) excitatory connections from putative inhibitory neurons to other neurons. With the GLMCC, most of the detected inhibitory connections in the hippocampal data are from inhibitory to inhibitory or from inhibitory to excitatory neurons, consistent with the low numbers of FPs and FNs for inhibitory connections obtained with this method on the synthetic data.

Testing with large-scale simulations

We have tuned the GLMCC method using synthetic data of a network of 1000 HH neurons and assessed the estimation performance.
We have also tested the method with simulation data of different inhibitory connectivities and with data generated by LIF neurons29, and confirmed that the method estimates the connectivity accurately for these data as well (Supplementary Fig. 5). In the original simulation, 1000 HH neurons are densely connected, with excitatory neurons innervating 12.5% of other neurons with EPSPs. However, the effective connectivity is rather sparse, because the EPSPs are log normally distributed and the majority of them are weak. Accordingly, the number of effective connections each neuron receives is not large in this network size. Considering the realistic situation in which each neuron receives strong connections from a number of neurons, we carried out simulations of a larger-scale network consisting of 10,000 LIF neurons using the NEST simulator55 (Supplementary Note 1 and Supplementary Tables 1, 2, and 3). By performing simulations of different connection densities, we examined the manner in which the number of false estimates varies with the number of connections. Figure 7 demonstrates the proportions of FPs and FNs counted for each pair of neurons, indicating the stable estimation of the GLMCC method and its superiority to other existing methods. Sample connectivity matrices are presented in Supplementary Fig. 6.

Fig. 7: The number of estimation errors computed for networks of 10,000 LIF neurons. Horizontal axes represent the average number of excitatory inputs to each neuron. For excitatory (inhibitory) connectivity, FPs represent directed links that were mistakenly assigned as excitatory (inhibitory), whereas FNs represent excitatory (inhibitory) connections that were assigned as disconnected or inhibitory (excitatory). Source data are provided as a Source Data file.

Discussion

We have presented a method for reconstructing neuronal circuitry from multichannel extracellular neuronal recordings.
This method, based on a combination of the GLM and CC, can balance the antagonistic demands for reducing FPs and FNs when estimating neuronal connectivity. Our method is tolerant of the large variations in firing activity that often occur in vivo. As a critical part of the method, we show a framework for estimating the necessary duration of the spike recordings so that any likely neuronal connections are detected. The duration is presented in terms of the firing rates of the pre- and postsynaptic neurons, and the presumed PSP. It would be ideal to be able to estimate individual connections using intracellular or patch clamp recordings where the postsynaptic current caused by presynaptic neuronal firing can be measured, as is done with recordings from the rat cortex34,37. While those methods can reliably detect synaptic connections, they are limited because only a few neurons can be recorded simultaneously. With the recent increase in parallel high channel count extracellular recordings from anaesthetized and behaving animal subjects1,2, it is possible to estimate the connection strength between a number of neurons20,28. Several strong analytical methods for estimating connections from spike trains have been developed, including the CC analysis14,15,18,21 and the GLM8,19,23,27,29. While CCs have been used to estimate neuronal connectivity, this classical CC analysis becomes unreliable when there are large fluctuations in the data. One approach to solving this problem has been to jitter the time stamps of spikes25,28. We tested the performance of the conventional CC method and the jittering method in estimating connectivity using synthetic data, and found that our GLMCC performed better than conventional methods (Fig. 6). Another approach has been to apply GLM to parallel spike trains. However, the size of the computation increases as the recording time increases. 
Because the number of neuronal pairs increases with the square of the number of spike trains (e.g., 10,000 pairs should be examined for 100 parallel spike trains), the computation for estimating the individual connections of each pair should be modest. Our analysis can be conducted in a reasonable computation time with amounts of data that can reasonably be collected, as our GLM analyses the CC for a time window of 100 ms rather than the entire spike trains. Our GLMCC may also adapt to wavy fluctuations in the CC, making it tolerant to large-scale fluctuations that are often attendant on real spike trains in vivo (cf. Fig. 5b). There could also be fluctuations on an even longer time scale. There are several methods for processing such nonstationarity, including the state-space models56,57 or the Gaussian process58. Such slow fluctuations may induce variation in the CC amplitude, but they would not appear in the averaged cross-correlogram \(c(t)\) in an interval of 100 ms in our framework. In general, biological data are accompanied by large nonstationary fluctuations; neuronal firing rate may change according to behavioral contexts, and individual units may even appear or disappear due to unstable recording. To examine whether our method provides consistent estimates of neuronal connections, we split the recordings in half and compared the connections estimated from each half (Supplementary Fig. 7). We found that the estimated connections exhibited significant overlap between the first and second halves. Thus, our inference method provides consistent estimates, not only for synthetic data, but also for experimental data. It would be interesting to test our estimation against biological connectivity information obtained by the latest experimental techniques such as intracellular current injection59 or optogenetic control60. Because recording time is limited, a possible restriction on inferring connectivity could be that there is not enough data.
Here we estimated the duration of spike recordings needed so that any likely neuronal connections would be detected (cf. Table 1). It should be noted that the limit given in Eq. (3) or Table 1 is not due to a limitation of our method, but an essential limitation caused by the sparse firing itself. Even if a given neuron fired several times with each spike occurring shortly after the firing of an index neuron, such evidence may not be sufficient to confirm the presence of a synaptic connection. Thus, enough data are needed so that spike co-occurrence becomes statistically significant28. When we applied our method to data recorded from the rat hippocampus, we identified connections for four types of pairs: excitatory–excitatory, excitatory–inhibitory, inhibitory–inhibitory, and inhibitory–excitatory. These numbers were consistent with those identified physiologically51, supporting the efficacy of our method. Typically, pyramidal neurons have low background firing rates and interneurons have higher firing rates. Our analysis (cf. inequality (3)) indicates that the necessary recording duration is inversely proportional to the product of the firing rates of the pre- and postsynaptic neurons. Thus, connections between neurons firing at high frequencies can be detected with a relatively short observation duration. In contrast, for neurons with low firing rates, data will have to be collected for much longer periods, and we expect that excitatory–excitatory connections will be detected only if there is a relatively long recording period. The consequences of this have been seen with experimental data; for instance, synapses that connect with inhibitory interneurons were frequently detected, whereas connections between excitatory neurons were rarely detected20,61. The hippocampal data analyzed in this study (Fig. 5a) conform to this pattern, and our analysis provides insight into how this happens.
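As an illustrative aside (not part of the published analysis), the scaling described here can be turned into a rough calculator for the required recording duration, by rearranging inequality (3) with the approximation \(c(0)\approx T{\lambda }_{{\rm{pre}}}{\lambda }_{{\rm{post}}}\) and enforcing the spike-count requirement of inequality (4); the default synaptic time scale and threshold used below are assumptions, not fitted values:

```python
import math

def required_duration(J, rate_pre, rate_post, tau=0.004, z_alpha=2.58):
    """Rough recording duration (s) needed to detect a connection of
    strength J (dimensionless GLM connection parameter).

    Rearranges |J| > 1.57 * z_alpha * (tau * c(0))**-0.5 with
    c(0) ~ T * rate_pre * rate_post (rates in Hz, tau in s), and also
    requires > 10 spikes in the interaction time window.
    """
    t_significance = (1.57 * z_alpha / abs(J)) ** 2 / (tau * rate_pre * rate_post)
    t_spike_count = 10.0 / (rate_pre * rate_post * tau)
    return max(t_significance, t_spike_count)
```

For instance, under these assumed defaults a pair of 1-Hz neurons requires on the order of an hour of recording to resolve a moderate connection, whereas a pair firing at 10 Hz requires orders of magnitude less, consistent with the observation that connections involving high-rate interneurons are detected far more readily.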
Our approach and method provide a means for estimating a map of neuronal connections from high channel count simultaneous recordings. We presume, based on anatomical differences, that these maps will have different structures in different functional brain regions. Having a reliable technique for estimating the maps offers the opportunity to identify these different structures, thereby providing a basis for understanding the variations in information processing that arises from differences in anatomy and connected structures. Estimating neuronal connectivity Here we describe our GLM analysis, the basis of validating connections and selecting the significance level, and the method of estimating the PSP. GLMCC To discover neuronal connections between a pair of neurons, we devise a GLM that detects short-term synaptic impacts in the CC (as schematically depicted in Fig. 1a and as real cross-correlograms of rat hippocampal data in Fig. 5b). We designed the GLMCC as $$c(t)\,=\,\exp \left(a(t)\,+\,{J}_{12}f(t)\,+\,{J}_{21}f(-t)\right),$$ where \(t\) is the time from the spikes of the reference neuron, and \(a(t)\) represents large-scale fluctuations produced outside of the pair of neurons. \({J}_{ij}\) represents neuronal connection from the \(j\)th neuron to the \(i\)th neuron. The time profile of the synaptic interaction is modeled as \(f(t)\,=\,\exp (-\frac{t\,-\,d}{\tau })\) for \(t\ > \ d\) and \(f(t)\,=\,0\) otherwise, where \(\tau\) is the typical time scale of synaptic impact and \(d\) is the transmission delay. The connection parameter \({J}_{ij}\) of our GLMCC can be derived from a model of the original interaction process between neurons (Supplementary Note 2). 
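The intensity model above can be sketched in a few lines of code; the values of \(\tau\) and \(d\) used here are illustrative placeholders rather than fitted ones:

```python
import numpy as np

def synaptic_filter(t, J, tau=4.0, d=1.0):
    """Synaptic time profile: J * exp(-(t - d)/tau) for t > d, else 0.

    Times are in ms; tau (synaptic time scale) and d (transmission
    delay) are illustrative defaults, not fitted values.
    """
    t = np.asarray(t, dtype=float)
    return np.where(t > d, J * np.exp(-(t - d) / tau), 0.0)

def glmcc_rate(t, a, J12, J21, tau=4.0, d=1.0):
    """GLMCC intensity c(t) = exp(a(t) + J12*f(t) + J21*f(-t)),
    where a is a callable giving the slow background term."""
    t = np.asarray(t, dtype=float)
    return np.exp(a(t) + synaptic_filter(t, J12, tau, d)
                  + synaptic_filter(-t, J21, tau, d))
```

With a flat background \(a(t)=0\), the intensity equals 1 for negative lags and rises above 1 shortly after lag \(d\) when \({J}_{12}>0\), which is exactly the short-term excess that signals a \(2\to 1\) connection.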
Given an underlying rate \(c(t)\), the probability for spikes to occur at \(\{{t}_{k}\}\,=\,\{{t}_{1},{t}_{2},\cdots \ ,{t}_{N}\}\) is obtained theoretically as62, $$p(\{{t}_{k}\}|\theta)\,=\,{\prod }_{k}c({t}_{k})\exp \left[-{\int }_{-W}^{W}c(t)\ dt\right],$$ where \(\theta \,=\,\{{J}_{12},{J}_{21},a(t)\}\), representing a set of parameters that characterize \(c(t)\). To detect short-term synaptic impacts of a few ms hidden in large-scale fluctuations in the CC, we make \(a(t)\) adapt to the slow part of the fluctuations. This may be done by providing a prior distribution that penalizes a large gradient of \(a(t)\): $$p(\theta)\propto \exp \left[-\frac{1}{\gamma}{\int}_{-W}^{W}{\left(\frac{da}{dt}\right)}^{2}\ dt\right],$$ where \(\gamma\) is a hyperparameter representing the flatness of \(a(t)\); \(a(t)\) is nearly constant if \(\gamma\) is small, or is otherwise rapidly fluctuating. We selected the hyperparameter using the ABIC (Akaike Bayesian Information Criterion)63 so that c(t) fits the experimental CCs, and adopted the mean value: \(\gamma \,=\,5\times 1{0}^{-4}\) [ms\({}^{-1}\)]. For the connection parameters \({J}_{12}\) and \({J}_{21}\), we have assumed uniform priors. The posterior distribution of a set of parameters \(\theta \,=\,\{{J}_{12},{J}_{21},a(t)\}\), given the spike data \(\{{t}_{k}\}\), is obtained from Bayes' rule as $$p(\theta |\{{t}_{k}\})\,=\,\frac{p(\{{t}_{k}\}|\theta )p(\theta )}{p(\{{t}_{k}\})}.$$ The parameters are determined with the maximum a posteriori (MAP) estimate, that is, by maximizing the posterior distribution or its logarithm: $$\mathrm{log}\ {\it{p}}(\theta |\{{t}_{k}\})\,=\,{\sum }_{k}\mathrm{log}\ {\it{c}}({t}_{k})-{\int }_{-W}^{W}{\it{c}}(t)\ {\it{dt}}\\ \,-\,\frac{1}{\gamma }{\int }_{-W}^{W}{\left(\frac{\it{da}}{\it{dt}}\right)}^{2}\ {\it{dt}} + {\rm{const.}}$$ The MAP inference for \(\theta \,=\,\{{J}_{12},{J}_{21},a(t)\}\) was performed efficiently using the Levenberg–Marquardt method (Supplementary Note 3). 
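A minimal discretized version of this objective (up to the additive constant) might look as follows; the window, bin width, and kernel parameters are illustrative assumptions, and the actual fitting maximizes the objective with the Levenberg–Marquardt method as noted above:

```python
import numpy as np

def log_posterior(spike_lags, a_grid, J12, J21, gamma=5e-4,
                  W=50.0, dt=1.0, tau=4.0, d=1.0):
    """Discretized GLMCC log-posterior (up to a constant).

    spike_lags: relative spike times (ms) in [-W, W] collected in the CC.
    a_grid: values of a(t) on a grid of spacing dt covering [-W, W].
    """
    t = np.arange(-W, W, dt) + 0.5 * dt                     # bin centers
    f = lambda s, J: np.where(s > d, J * np.exp(-(s - d) / tau), 0.0)
    lags = np.asarray(spike_lags, dtype=float)
    # intensity at the observed spike lags (a(t) linearly interpolated)
    c_spikes = np.exp(np.interp(lags, t, a_grid) + f(lags, J12) + f(-lags, J21))
    # intensity on the grid, for the integral term
    c_grid = np.exp(a_grid + f(t, J12) + f(-t, J21))
    log_lik = np.sum(np.log(c_spikes)) - np.sum(c_grid) * dt
    # (1/gamma) * integral of (da/dt)^2 dt, discretized
    penalty = np.sum(np.diff(a_grid) ** 2) / dt / gamma
    return log_lik - penalty
```

The three terms correspond directly to the sum over spikes, the integral of \(c(t)\), and the smoothness penalty on \(a(t)\) in the expression above.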
Statistical test for determining connectivity We determine the presence of a neuronal connection by disproving the null hypothesis that a connection is absent. Namely, we conclude that a connection is likely present if the estimated parameter is outside the confidence interval for the null hypothesis; otherwise, the presence of a connection is undetermined. The null hypothesis is that two neurons generate spikes at their baseline firing rates independently of each other. According to Poisson statistics, the variance of the number of spikes generated in a time interval \(\Delta\) after the spike of a reference neuron is equal to its mean. The mean spike number is obtained by multiplying the intensity \(c(0)\) by an interval \(\Delta\), $$n\,=\,c(0)\Delta .$$ Assuming that the connection \(J\) is small, the average number of spikes caused by a neuronal connection during an interval \(\Delta\) is approximated as $$\delta n\,=\,c(0)J\tau (1\,-\,{e}^{-\Delta /\tau }).$$ The condition that the synaptic interaction produces a significant impact on the CC is \(|\delta n|> {z}_{\alpha }\sqrt{n}\), where \({z}_{\alpha }\) is a threshold for the normal distribution (\({z}_{\alpha }=2.58\) for \(\alpha =0.01\) and \({z}_{\alpha }=3.29\) for \(\alpha =0.001\)). In terms of the estimated connection parameter \(\hat{J}\), this condition is given as $$|\hat{J}|\,> \,{z}_{\alpha }\frac{{\Delta }^{1/2}}{\tau (1-{e}^{-\Delta /\tau })}\cdot \frac{1}{{(c(0))}^{1/2}}.$$ Here, \({\Delta }^{1/2}/(\tau (1-{e}^{-\Delta /\tau }))\) on the right-hand side of this inequality is dependent on \(\Delta\) but it takes the lowest value \(1.57{\tau }^{-1/2}\) at \(\Delta =1.26\tau\). Thus we have the following inequality: $$|\hat{J}|\,> \,1.57{z}_{\alpha }{(\tau c(0))}^{-1/2}.$$ The typical duration of spike recording needed for the connectivity inference (inequality (3)) is obtained from Eq. 
(14) by approximating \(c(0)=T{\lambda }_{{\rm{pre}}}{\lambda }_{{\rm{post}}}\), where \(T\) is the total duration of recording. Another requirement is that spike trains should contain a sufficiently large number of spikes to make a reliable inference. A typical number of spikes contained in the CC in the interaction time window is \(T{\lambda }_{{\rm{pre}}}{\lambda }_{{\rm{post}}}\tau\). By requiring this to be >10, we obtain the inequality (4). Selecting the significance level Although we obtained the confidence interval of the connection parameter \({J}_{ij}\) at a given significance level above, the probability of assigning spurious connectivity to anatomically disconnected pairs is higher than that level, because spike trains are correlated. Such spurious connections or FPs may be reduced by decreasing the significance level. However, this operation may cause the vast majority of existing connections to be missed, thus producing a huge number of FNs. Thus, the significance level should be chosen so that these conflicting demands (of reducing FPs and FNs) are optimally balanced. As we can directly count FPs and FNs in simulation data, we may select a significance level such that the performance of the inference is maximized. As a measure for assessing the performance of connectivity inference, we adopt the MCC40 defined as $$MCC=\frac{{N}_{{\rm{TP}}}{N}_{{\rm{TN}}}-{N}_{{\rm{FP}}}{N}_{{\rm{FN}}}}{\sqrt{({N}_{{\rm{TP}}}+{N}_{{\rm{FP}}})({N}_{{\rm{TP}}}+{N}_{{\rm{FN}}})({N}_{{\rm{TN}}}+{N}_{{\rm{FP}}})({N}_{{\rm{TN}}}+{N}_{{\rm{FN}}})}},$$ where \({N}_{{\rm{TP}}}\), \({N}_{{\rm{TN}}}\), \({N}_{{\rm{FP}}}\), and \({N}_{{\rm{FN}}}\) represent the numbers of true positive, true negative, FP, and FN connections, respectively.
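For reference, the MCC defined above is straightforward to compute from the four confusion-matrix counts; this small helper is a sketch, with a zero denominator mapped to 0 by the usual convention:

```python
import math

def matthews_cc(n_tp, n_tn, n_fp, n_fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((n_tp + n_fp) * (n_tp + n_fn)
                      * (n_tn + n_fp) * (n_tn + n_fn))
    return (n_tp * n_tn - n_fp * n_fn) / denom if denom > 0 else 0.0
```

A perfect classifier gives MCC = 1, chance-level performance gives MCC = 0, and total disagreement gives MCC = −1.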
Because there are excitatory and inhibitory connections, we may obtain one coefficient for each category. To evaluate the quality of inference in terms of a single measure, here we take the macro-average MCC that gives equal importance to these categories (Macro-average)64: $$MCC=\frac{MC{C}_{{\rm{E}}}+MC{C}_{{\rm{I}}}}{2}.$$ In computing the coefficient for the excitatory category \(MC{C}_{E}\), we classify connections as excitatory or other (disconnected and inhibitory); for the inhibitory category \(MC{C}_{I}\), we classify connections as inhibitory or other (disconnected and excitatory). Here we evaluate \(MC{C}_{E}\) by considering only excitatory connections of reasonable strength (EPSP \(\,> \,\) 1 mV), as EPSPs are distributed log-normally and there are a number of weak connections that are hard to detect in several hours. Estimating PSPs from GLM connection parameters We translate the GLM connection parameters \({J}_{ij}\) into biological PSPs \({w}_{ij}\) (in mV). This relation is obtained by numerically simulating a network of neurons interacting through known connections \(\{{w}_{ij}\}\) and by applying the GLM to their spike trains to estimate the connection parameters \(\{{J}_{ij}\}\). Regarding synaptic connections \({w}_{ij}\) for which \({J}_{ij}\) was estimated with the correct sign, we assume a proportional relation as in Eq. (2): $${J}_{ij}=a{w}_{ij}.$$ The coefficient \(a\) is determined by applying regression analysis to the synthetic data. We obtained \(a=0.39\) for EPSPs and \(a=1.57\) for IPSPs. When connection parameters \({\hat{J}}_{ij}\) are newly estimated from spike trains, they can be translated into PSPs using the relation: $${\hat{w}}_{ij}={\hat{J}}_{ij}/a.$$ Figure 3b compares the estimated PSPs \({\hat{w}}_{ij}\) with the original PSP values \({w}_{ij}\) of a model neural network. In our numerical simulation, synaptic connectivity is given in terms of conductance. Thus we have to translate conductance into PSP.
The translation rule is described in Supplementary Note 4 and Supplementary Fig. 8. Details of existing methods Here we describe the details of the conventional CC method and the jittering method, which were compared with the present GLMCC method in estimating synaptic connectivity. The CC method estimates the deviation in the cross-correlogram at short time-lags16. A synaptic connection is detected if the spike count is outside the confidence interval for a null hypothesis that two spike trains are independent stationary Poisson processes. The cross-correlogram was constructed by counting the number of spikes in an interval [−50, \(+\)50] ms with a bin size of \(\Delta =1\) ms. The confidence interval is given by \([{\bar{n}}_{{\rm{cc}}}-{z}_{\alpha }\sqrt{{\bar{n}}_{{\rm{cc}}}},{\bar{n}}_{{\rm{cc}}}+{z}_{\alpha }\sqrt{{\bar{n}}_{{\rm{cc}}}}]\), where \({\bar{n}}_{{\rm{cc}}}={\lambda }_{{\rm{pre}}}{\lambda }_{{\rm{post}}}T\Delta\) is the expected number of spikes; \({\lambda }_{{\rm{pre}}}\) and \({\lambda }_{{\rm{post}}}\) are the firing rates of the pre- and postsynaptic neurons, respectively; and \({z}_{\alpha }\) is the threshold for the normal distribution. We have chosen the significance level \(\alpha =0.01\). The jittering method was introduced to avoid false detection caused by large fluctuations in the background cross-correlogram20,25. Here we adopted the parameters in the original method. Namely, we generated surrogate data sets by randomly perturbing or jittering the original data in a uniform interval of [−5,\(+\)5] ms to estimate a global band at an acceptance level of 99%. An excitatory or inhibitory monosynaptic connection was identified if the original cross-correlogram at a bin size of 1 ms protruded beyond the band anywhere in the region [1, 4] ms. A network of HH-type neurons We ran a numerical simulation of a network of 1000 HH-type neurons interacting through fixed synapses.
Of them, 800 excitatory neurons innervate 12.5% of the other neurons with EPSPs that are log-normally distributed34,35,37, whereas 200 inhibitory neurons randomly innervate 25% of the other neurons with IPSPs that are normally distributed. Simulated spike trains and the connectivity matrix (EPSPs and IPSPs) are available on figshare65. Neuron models For excitatory pyramidal cells, we adopted HH-type models developed by Destexhe et al.66. The membrane potential \(V\) obeys the equation: $${C}_{{\rm{m}}}^{{\rm{pyr}}}\frac{dV}{dt}\,=\,-{I}_{{\rm{L}}}-{I}_{{\rm{Na}}}-{I}_{{\rm{K}}}-{I}_{{\rm{M}}}-{I}_{{\rm{tot}}},$$ where \({C}_{{\rm{m}}}^{{\rm{pyr}}}\) is the membrane capacitance, \({I}_{{\rm{L}}}={g}_{{\rm{L}}}^{{\rm{pyr}}}(V-{E}_{{\rm{L}}}^{{\rm{pyr}}})\) is the leak current, \({I}_{{\rm{Na}}}\,=\,{g}_{{\rm{Na}}}^{{\rm{pyr}}}{m}^{3}h(V\,-\,{E}_{{\rm{Na}}}^{{\rm{pyr}}})\) is the Na\({}^{+}\) current, \({I}_{{\rm{K}}}\,=\,{g}_{{\rm{K}}}^{{\rm{pyr}}}{n}^{4}(V\,-\,{E}_{{\rm{K}}}^{{\rm{pyr}}})\) is the delayed-rectifier K\({}^{+}\) current, \({I}_{{\rm{M}}}\,=\,{g}_{{\rm{M}}}^{{\rm{pyr}}}p(V\,-\,{E}_{{\rm{K}}}^{{\rm{pyr}}})\) is the muscarinic potassium current, and \({I}_{{\rm{tot}}}\) is the total input current from the other neurons. The gating variables \(x\in \{m,h,n,p\}\) are described by the kinetic equation: $$\frac{dx}{dt}\,=\,{\alpha }_{x}(V)(1\,-\,x)\,-\,{\beta }_{x}(V)x,$$ where \({\alpha }_{x}\) and \({\beta }_{x}\) are the activation and inactivation functions, respectively. The activation and inactivation functions and the parameter values are summarized in Table 2. Table 2 Parameters for pyramidal neurons and interneurons For inhibitory interneurons, we adopted the HH-type models developed by Erisir et al.67.
The membrane potential \(V\) obeys the equation: $${C}_{{\rm{m}}}^{{\rm{inh}}}\frac{dV}{dt}=-{I}_{{\rm{L}}}-{I}_{{\rm{Na}}}-{I}_{{{\rm{K}}}_{1}}-{I}_{{{\rm{K}}}_{2}}-{I}_{{\rm{tot}}},$$ where \({C}_{{\rm{m}}}^{{\rm{inh}}}\) is the membrane capacitance, \({I}_{{\rm{L}}}\,=\,{g}_{{\rm{L}}}^{{\rm{inh}}}(V\,-\,{E}_{{\rm{L}}}^{{\rm{inh}}})\) is the leak current, \({I}_{{\rm{Na}}}\,=\,{g}_{{\rm{Na}}}^{{\rm{inh}}}{m}^{3}h(V\,-\,{E}_{{\rm{Na}}}^{{\rm{inh}}})\) is the Na\({}^{+}\) current, and \({I}_{{{\rm{K}}}_{1}}\,=\,{g}_{{{\rm{K}}}_{1}}^{{\rm{inh}}}{n}_{1}^{4}(V\,-\,{E}_{{\rm{K}}}^{{\rm{inh}}})\) and \({I}_{{{\rm{K}}}_{2}}={g}_{{{\rm{K}}}_{2}}^{{\rm{inh}}}{n}_{2}^{2}(V-{E}_{{\rm{K}}}^{{\rm{inh}}})\) are the delayed-rectifier K\({}^{+}\) currents due to the Kv1.3 and Kv3.1–Kv3.2 conductances, respectively, and \({I}_{{\rm{tot}}}\) is the total input current. The gating variables \(x\in \{m,h,{n}_{1},{n}_{2}\}\) follow the kinetic equation (18), with the activation and inactivation functions prescribed by the original paper67. The parameter values are summarized in Table 2. Synaptic connections Each neuron receives synaptic currents induced by the firing of other neurons. Excitatory synaptic currents are mediated by 2-amino-3-(5-methyl-3-oxo-1,2-oxazol-4-yl) propanoic acid (AMPA) and N-methyl-D-aspartate (NMDA) receptors, whereas inhibitory synaptic currents are mediated by \(\gamma\)-aminobutyric acid (GABA)-A receptors. The total input current to the \(i\)th neuron is given by $${I}_{{\rm{tot}}}^{i} = {\sum}_{j{:} {\rm{Pyramidal}} \, {\rm{cells}}} \left({I}_{{\rm{AMPA}}}^{ij} + {I}_{{\rm{NMDA}}}^{ij} \right) + {\sum }_{j{:}{\rm{Interneurons}}} {I}_{{\rm{GABA}}}^{ij}+{I}_{\rm{bg}},$$ where \({I}_{{\rm{AMPA}}}^{ij}\), \({I}_{{\rm{NMDA}}}^{ij}\), and \({I}_{{\rm{GABA}}}^{ij}\) represent, respectively, the synaptic currents mediated by the AMPA, NMDA, and GABA receptors, and \({I}_{{\rm{bg}}}\) represents the background current.
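As an illustration of the gating kinetics shared by both neuron models, a forward-Euler update of \(dx/dt={\alpha }_{x}(V)(1-x)-{\beta }_{x}(V)x\) can be sketched as follows; the rate functions shown are generic placeholder forms, not the ones prescribed in Table 2:

```python
import math

def gating_step(x, V, alpha, beta, dt=0.01):
    """One forward-Euler step of dx/dt = alpha(V)*(1-x) - beta(V)*x (dt in ms)."""
    return x + dt * (alpha(V) * (1.0 - x) - beta(V) * x)

# Placeholder rate functions for illustration only (not the Table 2 forms).
alpha_n = lambda V: 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
beta_n = lambda V: 0.125 * math.exp(-(V + 65.0) / 80.0)
```

Iterating at a fixed voltage relaxes \(x\) to its steady state \({\alpha }_{x}/({\alpha }_{x}+{\beta }_{x})\) with time constant \(1/({\alpha }_{x}+{\beta }_{x})\), which is how such gating variables are typically validated before embedding them in the full membrane equation.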
For AMPA-mediated current, we adopted the depressing synapse model proposed by Tsodyks et al.44 $${I}_{{\rm{AMPA}}}^{ij}\,=\,{g}_{{\rm{AMPA}}}^{ij}{w}_{j}(t)({V}_{i}\,-\,{E}_{{\rm{AMPA}}}),$$ $$\begin{array}{ccc}{\tau }_{{\rm{ina}}}^{{\rm{AMPA}}}\frac{d{w}_{j}(t)}{dt}&=&-{w}_{j}(t)\\ &&+\,{U}_{{\rm{AMPA}}}{r}_{j}(t){\sum }_{k}\delta (t-{t}_{k}^{j}-{d}_{{\rm{AMPA}}}),\end{array}$$ $${\tau }_{{\rm{rec}}}^{{\rm{AMPA}}}\frac{d{r}_{j}(t)}{dt}\,=\,-{r}_{j}(t)\,+\,1\,-\,{w}_{j}(t),$$ where \({g}_{{\rm{AMPA}}}^{ij}\) is the maximal AMPA conductance, \({V}_{i}\) is the membrane potential of the postsynaptic neuron, \({t}_{k}^{j}\) is the \(k\)th spike time of the presynaptic neuron, and \({d}_{{\rm{AMPA}}}\) is the synaptic conduction delay. For each connection, the conduction delay is drawn from a uniform distribution between 0 and 2 ms. \({w}_{j}\) and \({r}_{j}\) represent the fraction of synaptic resources in the effective and recovered states, respectively. The AMPA parameter values are summarized in Table 3. Table 3 Parameters for synaptic currents and background inputs For NMDA-mediated current, we adopted the first-order kinetic equation proposed by Destexhe et al.68 $${I}_{{\rm{NMDA}}}^{ij}={g}_{{\rm{NMDA}}}^{ij}{r}_{j}(t)f({V}_{i})({V}_{i}-{E}_{{\rm{NMDA}}}),$$ $$\begin{array}{ccc}\frac{d{r}_{j}(t)}{dt}&=&{\alpha }_{{\rm{NMDA}}}T(t-{t}_{{\rm{pre}}}-{d}_{{\rm{NMDA}}})(1-{r}_{j}(t))\\ &&-{\beta }_{{\rm{NMDA}}}{r}_{j}(t),\end{array}$$ $$f({V}_{i})={\left(1.0+0.28[{{\rm{Mg}}}^{2+}]{e}^{-0.062{V}_{i}}\right)}^{-1},$$ where [Mg\({}^{2+}\)] = 1.0 mM is the extracellular magnesium concentration, \({t}_{{\rm{pre}}}\) is the last spike time of the presynaptic neuron, \({d}_{{\rm{NMDA}}}\) is the conduction delay drawn from a uniform distribution between 0 and 2 ms, and \(T(t)\) represents the transmitter concentration in the cleft. 
When a spike occurs in a presynaptic neuron, a transmitter pulse is induced such that \(T(t)\,=\,1\) mM for a short period (1 ms) and the concentration returns to \(T(t)\,=\,0\). The NMDA parameter values are summarized in Table 3. For GABA-A-mediated current, we adopted the depressing synapse model proposed by Tsodyks et al.44 $${I}_{{\rm{GABA}}}^{ij}={g}_{{\rm{GABA}}}^{ij}{w}_{j}(t)({V}_{i}-{E}_{{\rm{GABA}}}),$$ $$\begin{array}{ccc}&&{\tau }_{{\rm{ina}}}^{{\rm{GABA}}}\frac{d{w}_{j}(t)}{dt}=-{w}_{j}+\\ &&\,\,\,\,\,\,{U}_{{\rm{GABA}}}{r}_{j}(t){\sum }_{k}\delta (t-{t}_{k}^{j}-{d}_{{\rm{GABA}}}),\end{array}$$ $${\tau }_{{\rm{rec}}}\frac{d{r}_{j}(t)}{dt}=-{r}_{j}(t)+1-{w}_{j}(t),$$ where \({d}_{{\rm{GABA}}}\) is the conduction delay drawn from a uniform distribution between 1 and 3 ms. The GABA parameter values are summarized in Table 3. We ran a simulation of a network consisting of 800 pyramidal neurons and 200 interneurons interconnected with a fixed strength. Each neuron receives 100 excitatory inputs randomly selected from 800 pyramidal neurons and 50 inhibitory inputs selected from 200 interneurons. The AMPA conductance (\({g}_{{\rm{AMPA}}}^{ij}\)) is drawn independently from a log-normal distribution34,35 $$P(x)\,=\,\frac{1}{\sqrt{2\pi }\sigma x}\exp \left(-\frac{{(\mathrm{log}\,{\it{x}}\,-\,\mu )}^{2}}{2{\sigma }^{2}}\right),$$ where \(\mu \,=\,-3.37\) and \(\sigma \,=\,1.3\) are the mean and SD of the natural logarithm of the AMPA conductance. The NMDA and GABA conductances (\({g}_{{\rm{NMDA}}}^{ij}\) and \({g}_{{\rm{GABA}}}^{ij}\)) are sampled from the normal distribution $$P(x)=\frac{1}{\sqrt{2\pi }\sigma }\exp \left(-\frac{{(x-\mu )}^{2}}{2{\sigma }^{2}}\right),$$ where \(\mu\) and \(\sigma\) are the mean and SD of the conductances.
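The conductance sampling can be sketched as follows; the random seed and sample sizes are illustrative, and negative normal draws are resampled, matching the procedure used for the normally distributed conductances:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def sample_lognormal_g(n, mu=-3.37, sigma=1.3):
    """Log-normal conductances; mu and sigma parameterize log g."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n)

def sample_normal_g(n, mu, sigma):
    """Normal conductances; any negative draw is resampled."""
    g = rng.normal(mu, sigma, size=n)
    while np.any(g < 0):
        neg = g < 0
        g[neg] = rng.normal(mu, sigma, size=int(neg.sum()))
    return g
```

The heavy right tail of the log-normal reproduces the experimentally observed mix of many weak and a few very strong excitatory synapses.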
Parameters are \({\mu }_{{\rm{NMDA}}}\,=\,8.5\times 1{0}^{-4}\ {\rm{mS}}\ {{\rm{cm}}}^{-2}\), \({\sigma }_{{\rm{NMDA}}}\,=\,8.5\times 1{0}^{-5}\ {\rm{mS}}\ {{\rm{cm}}}^{-2}\) and \({\mu }_{{\rm{GABA}}}\,=\,0.34\ {\rm{mS}}\ {{\rm{cm}}}^{-2}\), \({\sigma }_{{\rm{GABA}}}\,=\,0.27\ {\rm{mS}}\ {{\rm{cm}}}^{-2}\) for the NMDA and GABA conductance, respectively. If the sampled value is less than zero, the conductance is resampled from the same distribution. Because our model network is smaller than real cortical networks, where each neuron receives inputs from on the order of 1000 neurons, we added a background current to represent inputs from many neurons, as previously done by Destexhe et al.69. The background current is given as the sum of excitatory and inhibitory inputs: $${I}_{{\rm{bg}}}\,=\,{g}_{{\rm{e}}}(t)(V\,-\,{E}_{{\rm{AMPA}}})\,+\,{g}_{{\rm{i}}}(t)(V\,-\,{E}_{{\rm{GABA}}}),$$ where the total excitatory and inhibitory conductances \({g}_{{\rm{e,i}}}(t)\) obey the Ornstein–Uhlenbeck process70, representing random bombardments from a number of neurons: $$\frac{d{g}_{x}}{dt}\,=\,-\frac{{g}_{x}(t)\,-\,{g}_{x,0}}{{\tau }_{x}}\,+\,\sqrt{\frac{2{\sigma }_{x}^{2}}{{\tau }_{x}}}\xi (t),$$ where \(x\) represents excitatory (e) or inhibitory (i), \({g}_{x,0}\) and \({\sigma }_{x}\) are the asymptotic mean and SD of the conductance, \({\tau }_{x}\) is the synaptic time constant, and \(\xi (t)\) is the Gaussian white noise with zero mean and unit variance. Parameters for the background inputs are summarized in Table 3. Simulation codes were written in C++ and parallelized with the OpenMP framework. Simulations were conducted on a computer with Intel Xeon Processors E5-2650v2. The time step was 0.01 ms for excitatory (pyramidal) neurons and 0.001 ms for inhibitory (inter) neurons. The neural activity was simulated up to 10,000 s. Spike trains were recorded from the hippocampal area of a rat while it was exploring an open square field.
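The Ornstein–Uhlenbeck background conductance above can be integrated with a simple Euler–Maruyama scheme; this is a sketch, and the parameter values used when calling it are illustrative, not those of Table 3:

```python
import numpy as np

def simulate_ou_conductance(g_mean, sigma, tau, t_max, dt=0.01, seed=1):
    """Euler-Maruyama integration of
    dg/dt = -(g - g_mean)/tau + sqrt(2*sigma**2/tau) * xi(t),
    with time in ms; returns the sampled conductance trace."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    g = np.empty(n)
    g[0] = g_mean                      # start at the asymptotic mean
    amp = np.sqrt(2.0 * sigma**2 / tau) * np.sqrt(dt)
    for i in range(1, n):
        g[i] = g[i - 1] - (g[i - 1] - g_mean) / tau * dt \
               + amp * rng.standard_normal()
    return g
```

Over durations much longer than \({\tau }_{x}\), the simulated trace fluctuates around \({g}_{x,0}\) with standard deviation close to \({\sigma }_{x}\), which is the intended stationary behavior of the background input.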
Experimental procedures, data collection, and spike sorting are described in detail in Mizuseki et al.51. All protocols were approved by the Institutional Animal Care and Use Committees of Rutgers University and New York University. Hippocampal principal cells and interneurons were separated on the basis of their waveforms, autocorrelograms, and mean firing rates49,50,51. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. The source data underlying Figs. 2–7 are provided as a Source Data file. Simulated data generated by a network of 1,000 Hodgkin–Huxley neurons have been deposited in figshare65 (https://doi.org/10.6084/m9.figshare.9637904). All experimental data used in this paper can be found in the hc-3 data sets at CRCNS48 (CRCNS.org. https://doi.org/10.6080/K09G5JRZ). A ready-to-use version of the web application, the source code, and example data sets are available at our website, http://www.ton.scphys.kyoto-u.ac.jp/%7Eshino/GLMCC, and are also hosted publicly on GitHub, accessible via https://github.com/NII-Kobayashi. Simulation codes of a large network of LIF neurons are available on ModelDB (https://senselab.med.yale.edu/modeldb/ShowModel.cshtml?model=258807). Simulation codes of a network of HH neurons are available upon request from the corresponding author. Buzsáki, G. Large-scale recording of neuronal ensembles. Nat. Neurosci. 7, 446 (2004). Jun, J. J. et al. Fully integrated silicon probes for high-density recording of neural activity. Nature 551, 232 (2017). Mitz, A. R. et al. High channel count single-unit recordings from nonhuman primate frontal cortex. J. Neurosci. Methods 289, 39–47 (2017). Pachitariu, M. et al. Suite2p: beyond 10,000 neurons with standard two-photon microscopy. Preprint at https://www.biorxiv.org/content/10.1101/061507v2.abstract (2017). Stringer, C., Pachitariu, M., Steinmetz, N., Carandini, M. & Harris, K. D.
High-dimensional geometry of population responses in visual cortex. Nature 571, 361–365 (2019). Brown, E. N., Kass, R. E. & Mitra, P. P. Multiple neural spike train data analysis: state-of-the-art and future challenges. Nat. Neurosci. 7, 456 (2004). Hatsopoulos, N., Joshi, J. & O'Leary, J. G. Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. J. Neurophysiol. 92, 1165–1174 (2004). Pillow, J. W. et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995 (2008). Ohiorhenuan, I. E. et al. Sparse coding and high-order correlations in fine-scale cortical networks. Nature 466, 617 (2010). Stevenson, I. H. & Kording, K. P. How advances in neural recording affect data analysis. Nat. Neurosci. 14, 139 (2011). Churchland, M. M. et al. Neural population dynamics during reaching. Nature 487, 51 (2012). Cunningham, J. P. & Byron, M. Y. Dimensionality reduction for large-scale neural recordings. Nat. Neurosci. 17, 1500 (2014). Kobak, D. et al. Demixed principal component analysis of neural population data. eLife 5, e10989 (2016). Perkel, D. H., Gerstein, G. L. & Moore, G. P. Neuronal spike trains and stochastic point processes: II. Simultaneous spike trains. Biophys. J. 7, 419–440 (1967). Toyama, K., Kimura, M. & Tanaka, K. Organization of cat visual cortex as investigated by cross-correlation technique. J. Neurophysiol. 46, 202–214 (1981). Aertsen, A. M. & Gerstein, G. L. Evaluation of neuronal connectivity: sensitivity of cross-correlation. Brain Res. 340, 341–354 (1985). Reid, R. C. & Alonso, J.-M. Specificity of monosynaptic connections from thalamus to visual cortex. Nature 378, 281 (1995). Sakurai, Y. Hippocampal and neocortical cell assemblies encode memory processes for different types of stimuli in the rat. J. Neurosci. 16, 2809–2819 (1996). Okatan, M., Wilson, M. A. & Brown, E. N.
Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Comput. 17, 1927–1961 (2005). Fujisawa, S., Amarasingham, A., Harrison, M. T. & Buzsáki, G. Behavior-dependent short-term assembly dynamics in the medial prefrontal cortex. Nat. Neurosci. 11, 823 (2008). Grün, S. Data-driven significance estimation for precise spike correlation. J. Neurophysiol. 101, 1126–1140 (2009). Stevenson, I. H. et al. Bayesian inference of functional connectivity and network structure from spikes. IEEE Trans. Neural Syst. Rehabil. Eng. 17, 203–213 (2009). Chen, Z., Putrino, D. F., Ghosh, S., Barbieri, R. & Brown, E. N. Statistical inference for assessing functional connectivity of neuronal ensembles with sparse spiking data. IEEE Trans. Neural Syst. Rehabil. Eng. 19, 121–135 (2011). Ito, S. et al. Extending transfer entropy improves identification of effective connectivity in a spiking cortical network model. PLoS One 6, e27431 (2011). Amarasingham, A., Harrison, M. T., Hatsopoulos, N. G. & Geman, S. Conditional modeling and the jitter method of spike resampling. J. Neurophysiol. 107, 517–531 (2012). Stetter, O., Battaglia, D., Soriano, J. & Geisel, T. Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals. PLoS Comput. Biol. 8, e1002653 (2012). Kobayashi, R. & Kitano, K. Impact of network topology on inference of synaptic connectivity from multi-neuronal spike data simulated by a large-scale cortical network model. J. Comput. Neurosci. 35, 109–124 (2013). Schwindel, C. D., Ali, K., McNaughton, B. L. & Tatsuno, M. Long-term recordings improve the detection of weak excitatory-excitatory connections in rat prefrontal cortex. J. Neurosci. 34, 5454–5467 (2014). Zaytsev, Y. V., Morrison, A. & Deger, M. Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity. J. Comput. Neurosci. 39, 77–103 (2015).
Cai, Z., Neveu, C. L., Baxter, D. A., Byrne, J. H. & Aazhang, B. Inferring neuronal network functional connectivity with directed information. J. Neurophysiol. 118, 1055–1069 (2017). Brody, C. D. Correlations without synchrony. Neural Comput. 11, 1537–1551 (1999). Gerstein, G. L., Bedenbaugh, P. & Aertsen, A. M. Neuronal assemblies. IEEE Trans. Biomed. Eng. 36, 4–14 (1989). Stevenson, I. H., Rebesco, J. M., Miller, L. E. & Körding, K. P. Inferring functional connections between neurons. Curr. Opin. Neurobiol. 18, 582–588 (2008). Song, S., Sjöström, P. J., Reigl, M., Nelson, S. & Chklovskii, D. B. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 3, e68 (2005). Teramae, J.-N., Tsubo, Y. & Fukai, T. Optimal spike-based communication in excitable networks with strong-sparse and weak-dense links. Sci. Rep. 2, 485 (2012). Ikegaya, Y. et al. Interpyramid spike transmission stabilizes the sparseness of recurrent network activity. Cereb. Cortex 23, 293–304 (2013). Buzsáki, G. & Mizuseki, K. The log-dynamic brain: how skewed distributions affect network operations. Nat. Rev. Neurosci. 15, 264 (2014). Hoffmann, J. H. et al. Synaptic conductance estimates of the connection between local inhibitor interneurons and pyramidal neurons in layer 2/3 of a cortical column. Cereb. Cortex 25, 4415–4429 (2015). Potjans, T. C. & Diesmann, M. The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. Cereb. Cortex 24, 785–806 (2014). Matthews, B. W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta 405, 442–451 (1975). Fetz, E. E. & Gustafsson, B. Relation between shapes of post-synaptic potentials and changes in firing probability of cat motoneurones. J. Physiol. 341, 387–410 (1983). Volgushev, M., Ilin, V. & Stevenson, I. H. Identifying and tracking simulated synaptic inputs from neuronal firing: insights from in vitro experiments.
PLoS Comput. Biol. 11, e1004167 (2015). Melssen, W. & Epping, W. Detection and estimation of neural connectivity based on crosscorrelation analysis. Biol. Cybern. 57, 403–414 (1987). Tsodyks, M. V. & Markram, H. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl Acad. Sci. USA 94, 719–723 (1997). Gupta, A., Wang, Y. & Markram, H. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science 287, 273–278 (2000). Shinomoto, S., Shima, K. & Tanji, J. Differences in spiking patterns among cortical neurons. Neural Comput. 15, 2823–2842 (2003). Mochizuki, Y. et al. Similarity in neuronal firing regimes across mammalian species. J. Neurosci. 36, 5736–5747 (2016). Mizuseki, K., Sirota, A., Pastalkova, E., Diba, K. & Buzsáki, G. Multiple single unit recordings from different rat hippocampal and entorhinal regions while the animals were performing multiple behavioral tasks. (CRCNS Org, 2013). Skaggs, W. E., McNaughton, B. L., Wilson, M. A. & Barnes, C. A. Theta phase precession in hippocampal neuronal populations and the compression of temporal sequences. Hippocampus 6, 149–172 (1996). Csicsvari, J., Hirase, H., Czurko, A. & Buzsáki, G. Reliability and state dependence of pyramidal cell-interneuron synapses in the hippocampus: an ensemble approach in the behaving rat. Neuron 21, 179–189 (1998). Mizuseki, K., Sirota, A., Pastalkova, E. & Buzsáki, G. Theta oscillations provide temporal windows for local circuit computation in the entorhinal-hippocampal loop. Neuron 64, 267–280 (2009). Freund, T. F. & Buzsáki, G. Interneurons of the hippocampus. Hippocampus 6, 347–470 (1996). Deuchars, J. & Thomson, A. CA1 pyramid-pyramid connections in rat hippocampus in vitro: dual intracellular recordings with biocytin filling. Neuroscience 74, 1009–1018 (1996). Pillow, J. W., Shlens, J., Chichilnisky, E. & Simoncelli, E. P.
A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings. PLoS One 8, e62123 (2013). Gewaltig, M.-O. & Diesmann, M. Nest (neural simulation tool). Scholarpedia 2, 1430 (2007). Koyama, S., CastellanosPérez-Bolde, L., Shalizi, C. R. & Kass, R. E. Approximate methods for state-space models. J. Amer. Stat. Assoc. 105, 170–180 (2010). Article CAS MathSciNet Google Scholar Chen, Z. & Brown, E. N. State space model. Scholarpedia 8, 30868 (2013). Zhou, B., Moorman, D. E., Behseta, S., Ombao, H. & Shahbaba, B. A dynamic bayesian model for characterizing cross-neuronal interactions during decision-making. J. Amer. Stat. Assoc. 111, 459–471 (2016). Marshall, L. et al. Hippocampal pyramidal cell-interneuron spike transmission is frequency dependent and responsible for place modulation of interneuron discharge. J. Neurosci. 22, RC197 (2002). English, D. F. et al. Pyramidal cell-interneuron circuit architecture and dynamics in hippocampal networks. Neuron 96, 505–520 (2017). Barthó, P. et al. Characterization of neocortical principal cells and interneurons by network interactions and extracellular features. J. Neurophysiol. 92, 600–608 (2004). Daley, D. J. & Vere-Jones, D. An introduction to the theory of point processes. (Springer-Verlag, New York, 2003). Akaike, H. Likelihood and the bayes procedure. in Selected papers of Hirotugu Akaike 309–332 (Springer, 1998). Sun A. & Lim E.-P. Hierarchical text classification and evaluation, in Proceedings of ICDM 2001 521–538 (IEEE, 2001). Kobayashi R. et al. Synthetic spike data generated by a network of 1000 hodgkin-huxley type neurons. Figshare (2019) https://doi.org/10.6084/m9.figshare.9637904. Destexhe, A. & Paré, D. Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. J. Neurophysiol. 81, 1531–1547 (1999). Erisir, A., Lau, D., Rudy, B. & Leonard, C. Function of specific k. 
channels in sustained high-frequency firing of fast-spiking neocortical interneurons. J. Neurophysiol. 82, 2476–2489 (1999). Destexhe, A., Mainen, Z. F. & Sejnowski, T. J. Kinetic models of synaptic transmission. Methods Neuronal Model. 2, 1–25 (1998). Destexhe, A., Rudolph, M., Fellous, J.-M. & Sejnowski, T. J. Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience 107, 13–24 (2001). Tuckwell, H. C. Introduction to theoretical neurobiology:, nonlinear and stochastic theories 2 (Cambridge University Press, Cambridge, 1988). We thank Yuzuru Yamanaka, Tatsuya Goto, Kazuki Fujita, Daisuke Endo, and Masahiro Naito for their constructive comments on this manuscript. Furthermore, this paper was greatly improved by the comments of anonymous reviewers. R.K. is supported by JSPS KAKENHI grant numbers JP17H03279, JP18K11560, and JP19H01133, JST ACT-I Grant Number JPMJPR16UC, the Okawa Foundation for Information and Telecommunications, and the Open Collaborative Research and MOU grant at the National Institute of Informatics in Japan. S.K. is supported by JST ACT-I Grant Number JPMJPR17U8. A.K. and M.D. received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under Specific Grant Agreement No. 785907 (Human Brain Project SGA2). K.M. is supported by JSPS KAKENHI grant numbers JP16H04656 and JP17K19462. B.J.R. is supported by US NIMH Intramural Program with report number ZIAMH002619-27. S.S. is supported by JSPS KAKENHI Grant numbers JP26280007 and JP17H06028, and the New Energy and Industrial Technology Development Organization (NEDO). 
National Institute of Informatics, Tokyo, 101-8430, Japan (Ryota Kobayashi)
Department of Informatics, SOKENDAI (The Graduate University for Advanced Studies), Tokyo, 101-8430, Japan
Center for Advanced Intelligence Project, RIKEN, Tokyo, 103-0027, Japan (Shuhei Kurita)
Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52425, Jülich, Germany (Anno Kurth & Markus Diesmann)
Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
Department of Information Science and Engineering, Ritsumeikan University, Kusatsu, 525-8577, Japan (Katsunori Kitano)
Department of Physiology, Osaka City University Graduate School of Medicine, Osaka, 545-8585, Japan (Kenji Mizuseki)
Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany (Markus Diesmann)
Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, MD, 20814, USA (Barry J. Richmond)
Department of Physics, Kyoto University, Kyoto, 606-8502, Japan (Shigeru Shinomoto)
Brain Information Communication Research Laboratory Group, ATR Institute International, Kyoto, 619-0288, Japan (Anno Kurth)
S.S. conceived the project. R.K. and S.S. developed methodology for reconstructing neuronal connectivity. S.K. and K.K. performed the network simulation of HH neurons. K.M. performed the experiment. A.K. and M.D. performed the large-scale simulation of LIF neurons. S.S. and B.J.R. wrote the manuscript based on input from R.K. All authors commented on the manuscript. S.S. supervised the project.
Correspondence to Shigeru Shinomoto.
Peer review information: Nature Communications thanks Zhe Chen, Marius Pachitariu and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Kobayashi, R., Kurita, S., Kurth, A. et al. Reconstructing neuronal circuitry from parallel spike trains. Nat Commun 10, 4468 (2019). https://doi.org/10.1038/s41467-019-12225-2
Labor Force Participation Rate: Purpose, Formula, and Trends
By Adam Hayes, Ph.D., CFA; reviewed by Robert C. Kelly; fact checked by Pete Rathburn

What Is the Labor Force Participation Rate?
The labor force participation rate is an estimate of an economy's active workforce. The formula is the number of people ages 16 and older who are employed or actively seeking employment, divided by the total non-institutionalized, civilian working-age population. In the 12 months ending December 2022, the U.S. labor force participation rate ranged between a low of 62% and a high of 62.3%, according to the U.S. Bureau of Labor Statistics (BLS), which publishes the figures monthly. As of December 2022, it is 62.3%.
From 2013 on, the monthly figures held steady in the vicinity of 63%, after a sharp decline in the wake of the Great Recession; however, in early 2020, the labor force participation rate fell dramatically, dropping from 63.4% to 61.4% in the first half of the year, as a result of the COVID-19 pandemic. Its low point was reached in April 2020, when the rate sank to 60.2%.

The labor force participation rate indicates the percentage of all people of working age who are employed or are actively seeking work. In conjunction with the unemployment numbers, it can offer some perspective into the state of the economy. Starting in 2013, the U.S. labor force participation rate held steady at around 63% until the COVID-19 pandemic struck. It was 62.1% as of November 2022. The rate varies over time based on social, demographic, and economic trends. Global labor force participation has shown a steady decline since 1990.

Understanding the Labor Force Participation Rate
The labor force participation rate is an important metric to use when analyzing employment and unemployment data because it measures the number of people who are actively job-hunting as well as those who are currently employed. It omits institutionalized people (in prisons, nursing homes, or mental health facilities) and members of the military. It includes all other people aged 16 or older and compares the proportion of those who are working or seeking work outside the home to those who are neither working nor seeking work outside the home.

Because it accounts for people who have given up looking for work, this may make the labor force participation rate a somewhat more reliable figure than the unemployment rate. The unemployment numbers do not take into account those who have given up looking for work. Some economists argue that the labor force participation rate and unemployment data should be considered together in an effort to better understand an economy's real employment status.
Labor Force Participation Rate Formula
The formula for labor force participation, applied to all members of the population aged 16 or older, is:

    Labor Force Participation Rate = (Number Employed + Number Seeking Work) × 100 / Civilian Non-Institutional Population

Factors That Affect Participation Rate
Labor force participation does not exist in a vacuum. Instead, it is impacted by a variety of social, economic, and demographic factors. As these factors change, labor force participation might go up or down. These changes can happen quickly or slowly. They might have a short-term impact on labor force participation, or they might create long-term change.

Short- and long-term economic trends can influence the labor force participation rate. In the long run, industrialization and the accumulation of wealth can have an impact. Industrialization tends to increase participation by creating employment opportunities. High levels of accumulated wealth can reduce participation because wealthier people simply have less need to work for a living. In the short term, business cycles and unemployment rates influence the participation rate. During an economic recession, the labor force participation rate tends to fall because many laid-off workers become discouraged and give up looking for jobs. Economic policies such as heavy labor market regulation and generous social benefit programs may also tend to decrease labor force participation.
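The participation-rate formula given above can be sketched as a small function. This is a minimal illustration, not BLS code; the function name and the sample figures are made up for the example (figures in thousands, chosen only to land near the published rate):

```python
def participation_rate(employed: float, seeking_work: float, civilian_noninst_pop: float) -> float:
    """Labor force participation rate, as a percentage.

    The labor force is everyone ages 16+ who is employed or actively
    seeking work; the denominator is the civilian non-institutional
    population (excluding the military, prisons, nursing homes, etc.).
    """
    labor_force = employed + seeking_work
    return labor_force * 100 / civilian_noninst_pop

# Hypothetical figures, in thousands:
rate = participation_rate(employed=158_000, seeking_work=6_000, civilian_noninst_pop=264_000)
print(f"{rate:.1f}%")  # 62.1%
```

Note that people who are neither employed nor actively seeking work (discouraged workers, students, retirees) appear only in the denominator, which is why the rate can fall even while the unemployment rate holds steady.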
Social expectations and changes to those expectations can impact who is available to participate in the workforce. As different groups are expected to work or not, the labor force participation rate will go up or down. If married men are considered responsible for supporting their families, for example, while married women stay home, then women will stop working after marriage or after having children, which lowers the labor force participation rate. If the expectation is that both parents should be able to work, however, then some parents of either gender will leave the workforce, while others will stay.

Expectations for education can also impact the labor force participation rate. If the majority of young people learn a trade or a family business as they are growing up, and are then expected to work immediately after finishing a high school education, then adults will start entering the workforce between ages 17 and 19. In countries or demographic groups where attending college is more common, though, more young adults will continue their education after high school. Labor force participation will go down because they won't join the workforce until their early or mid-twenties.

Demographic Factors
Changes in the working-age population from generation to generation influence labor force participation as well. As large age cohorts enter retirement age, the labor force participation rate can fall. For example, the retirement of a steady stream of baby boomers has reduced labor force participation. Baby boomers are one of the largest demographic blocks in the population. Since the generations after the baby boomers are smaller, the boomers will not be replaced by as many active, younger workers when they retire.

The labor force participation rate has changed based on economic, social, and demographic trends over the long term. It rose steadily through the second half of the 20th century, peaking at 67.3% in April 2000.
As the Great Recession hit in 2008, the participation rate entered several years of steep decline, stabilizing at around 63% by 2013. The trend in the women's labor force participation rate largely parallels the long-term trends for the total population. The women's labor force participation rate nearly doubled from 32% to 60% in the 50 years from 1948 to 1998. This rate has since dropped to 54.6% in April 2020, from 57.9% in February 2020. It has increased since, sitting at 56.5% in November 2022. The U.S. labor force participation rate of 62.3% in December 2022 included 56.8% participation for women and 68.1% participation from men.

Why the Participation Rate Has Declined
According to the Federal Reserve, the share of prime-working-age people (25 to 54 years old) in the labor force peaked at 72% in 1995 and declined to 63.7% over the next 25 years. This roughly corresponds to some of the declining trends in labor force participation in the 21st century. There are a number of reasons that the labor force participation rate has declined.

The Great Recession: During the Great Recession from 2007 to 2009, unemployment rose from 5% to 10%. In the decade that followed, the labor market recovered. But many workers who had left the workforce never returned to full-time work, even after jobs were available. Though overall unemployment returned to pre-recession levels, rates of long-term unemployment increased as workers who had lost jobs stayed out of the labor force for longer periods of time.

COVID-19: There was another sharp drop in labor participation in early 2020, as the COVID-19 pandemic shut down the U.S. economy. Many vulnerable workers were unable or unwilling to remain in face-to-face jobs, while others left their jobs to take care of family members at home. Due to caregiving expectations, women left the workforce at higher rates than men did.

Retirement: Baby boomers are the largest segment of the population.
As they reach retirement age and leave the labor force, the participation rate goes down, since there aren't enough younger workers to replace them. From 2007 to 2014, up to half the decline in labor force participation was a result of the workforce aging, according to the presidential Council of Economic Advisors.

College: An increase in college attendance at the younger end of the age spectrum is another factor that reduces labor force participation. College enrollment by 18- to 24-year-olds increased from around 35% to 41% from 2000 to 2018; however, enrollment rates have dropped due to the pandemic, with undergraduate enrollment declining 7.8% from fall 2020 to fall 2021. It has continued to fall as of October 2022, but at a slower rate.

The national unemployment rate in the United States in December 2022 was 3.5%.

Global Labor Force Participation
Global labor force participation has shown a steady decline since 1990. According to the World Bank, the global labor force participation rate stood at 59% at the end of 2021, down from 62% in 2010. The following table highlights the countries with the highest and lowest labor force participation rates as of 2021:

Countries with Highest and Lowest Labor Force Participation Rates (2021)
Country (Highest)    Rate    Country (Lowest)    Rate
Qatar                87%     Tajikistan          40%
Madagascar           85%     Algeria             40%
Solomon Islands      85%     Moldova             39%
Zimbabwe             84%     Jordan              38%
Tanzania             83%     Yemen               37%
Rwanda               82%     Somalia             34%
North Korea          82%     Djibouti            31%
Cambodia             80%
Nepal                80%
Source: The World Bank

The U.S. territory of Puerto Rico also made the list, ranking among those with the lowest labor force participation rates at 40%.

What Does the Labor Force Participation Rate Measure?
The labor force participation rate measures a country's active workforce of people 16 and older. It takes into account people who have stopped looking for work but still want to work, unlike the unemployment rate.

What Affects the Labor Force Participation Rate?
Three major factors influence the rate: economic, demographic, and social. For instance, the recent retirement of baby boomers in great numbers has pushed the rate down, while the introduction of large numbers of women into the workforce in the second half of the 20th century increased the rate. In April 2020, after the COVID-19 pandemic struck the U.S., the rate went down by more than 3 percentage points compared to the beginning of that year.

How Does the U.S. Rate Compare With Those of Other Countries?
According to the World Bank's most recent data from 2021, the U.S. falls in the middle of the pack at 61%, two points ahead of the world rate of 59%. There were seven countries with the same rate as the U.S. (including Germany, Austria, and Russia). As of November 2022, the U.S. rate stands at 62.1%.

How Is the Labor Force Participation Rate Measured?
The labor force participation rate is measured by the Bureau of Labor Statistics, based on a monthly household survey by the U.S. Census Bureau. This survey asks respondents about their age and whether they are employed or looking for work. On that basis, the government can estimate the labor force participation rate.

Why Is the Labor Force Participation Rate Declining?
The participation rate has steadily declined since the late 1990s, largely due to the retirement of baby boomers and other demographic changes. In 2020, there was a sharp drop in labor participation due to the COVID-19 pandemic, which shuttered many businesses and forced many vulnerable people to leave the workforce.

The labor force participation rate measures the percentage of adults who are either employed or actively looking for a job. It does not include those in the military, prisons, or otherwise outside of the ordinary labor market. It also accounts for the people who are not seeking work, making it a more reliable statistic than the regular unemployment rate.

Sources:
U.S. Bureau of Labor Statistics. "Labor Force Statistics from the Current Population Survey."
U.S. Bureau of Labor Statistics. "Labor Force Statistics from the Current Population Survey: Concepts and Definitions," Select "Labor Force Participation Rate."
U.S. Bureau of Labor Statistics. "Labor Force Statistics from the Current Population Survey: Concepts and Definitions," Select "Civilian Labor Force, or Labor Force."
U.S. Bureau of Labor Statistics. "Table A-15: Alternative Measures of Labor Underutilization."
Federal Reserve Bank of St. Louis, FRED Economic Data. "Labor Force Participation Rate."
Federal Reserve Bank of St. Louis, FRED Economic Data. "Labor Force Participation Rate — Women."
U.S. Department of Labor. "Labor Force Status of Men and Women."
Federal Reserve Bank of St. Louis. "Demographics Help Explain the Fall in the Labor Force Participation Rate."
U.S. Bureau of Labor Statistics. "Labor Force Projections to 2020: A More Slowly Growing Workforce," Page 44.
U.S. Bureau of Labor Statistics. "Great Recession, Great Recovery? Trends From the Current Population Survey."
Harvard University. "The Labor Force Participation Rate Since 2007: Causes and Policy Implications," Page 3.
National Center for Education Statistics. "College Enrollment Rates," Page 1.
National Student Clearinghouse Research Center. "COVID-19: Stay Informed with the Latest Enrollment Information: Figure 1."
U.S. Bureau of Labor Statistics. "Employment Situation Summary."
The World Bank. "Labor Force Participation Rate, Total (% of Total Population Ages 15+)."
March 2019, 8(1): 221-230. doi: 10.3934/eect.2019012
Shock wave formation in compliant arteries
Cristóbal Rodero 1, J. Alberto Conejero 2,*, and Ignacio García-Fernández 3
1. Division of Imaging Sciences and Biomedical Engineering, King's College London, St. Thomas' Hospital, SE1 7EH London, United Kingdom
2. Instituto Universitario de Matemática Pura y Aplicada, Universitat Politècnica de València, E-46022 València, Spain
3. CoMMLab, Departament d'Informàtica, Universitat de València, E-46100 Burjassot, València, Spain
* Corresponding author: J. Alberto Conejero
Received March 2018. Revised July 2018. Published January 2019.

Abstract: We focus on the problem of shock wave formation in a model of blood flow along an elastic artery. We analyze the conditions under which this phenomenon can appear and we provide an estimation of the instant of shock formation. Numerical simulations of the model have been conducted using the Discontinuous Galerkin Finite Element Method. The results are consistent with certain phenomena observed by practitioners in patients with arteriopathies, and they could predict the possible formation of a shock wave in the aorta.

Keywords: Navier-Stokes conservation law, blood flow, elastic arteries.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Cristóbal Rodero, J. Alberto Conejero, Ignacio García-Fernández. Shock wave formation in compliant arteries. Evolution Equations & Control Theory, 2019, 8(1): 221-230. doi: 10.3934/eect.2019012

Figure 1. An artery as a compliant tube, where variable x denotes the spatial coordinate and t the temporal one.
Figure 2. Decomposition of the domain $D$.
Figure 3. Several beat-like boundary conditions (23) for $u(0, 0) = 0$.
Figure 4. Formation of a shock wave with a beat-like boundary condition. The discontinuity at $x = 20$ is due to the nature of the DG method, which provides two values in the frontier between elements.
Stability estimates and a Lagrange-Galerkin scheme for a Navier-Stokes type model of flow in non-homogeneous porous media. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020234 Kuijie Li, Tohru Ozawa, Baoxiang Wang. Dynamical behavior for the solutions of the Navier-Stokes equation. Communications on Pure & Applied Analysis, 2018, 17 (4) : 1511-1560. doi: 10.3934/cpaa.2018073 Hermenegildo Borges de Oliveira. Anisotropically diffused and damped Navier-Stokes equations. Conference Publications, 2015, 2015 (special) : 349-358. doi: 10.3934/proc.2015.0349 Hyukjin Kwean. Kwak transformation and Navier-Stokes equations. Communications on Pure & Applied Analysis, 2004, 3 (3) : 433-446. doi: 10.3934/cpaa.2004.3.433 Vittorino Pata. On the regularity of solutions to the Navier-Stokes equations. Communications on Pure & Applied Analysis, 2012, 11 (2) : 747-761. doi: 10.3934/cpaa.2012.11.747 C. Foias, M. S Jolly, I. Kukavica, E. S. Titi. The Lorenz equation as a metaphor for the Navier-Stokes equations. Discrete & Continuous Dynamical Systems - A, 2001, 7 (2) : 403-429. doi: 10.3934/dcds.2001.7.403 Igor Kukavica. On regularity for the Navier-Stokes equations in Morrey spaces. Discrete & Continuous Dynamical Systems - A, 2010, 26 (4) : 1319-1328. doi: 10.3934/dcds.2010.26.1319 Cristóbal Rodero J. Alberto Conejero Ignacio García-Fernández
CommonCrawl
Quantifying the role of weather on seasonal influenza Marion Roussel1,2, Dominique Pontier1,2, Jean-Marie Cohen3, Bruno Lina4,5 & David Fouchet1,2 Improving knowledge about influenza transmission is crucial to upgrade surveillance networks and to develop accurate predictive models that enhance public health intervention strategies. Epidemics usually occur in winter in temperate countries and during the rainy season in tropical countries, suggesting an impact of climate on influenza spread. Despite many studies, the role of weather on influenza spread is not yet fully understood. In the present study, we investigated this issue at two different levels. First, we evaluated how weekly (intra-annual) variations in clinical incidence could be linked to those of climatic factors. We considered that only a fraction of the human population is susceptible at the beginning of a year, due to immunity acquired in previous years. Second, we focused on epidemic sizes (cumulated numbers of reported clinical cases) and examined how their inter-annual and regional variations could be related to differences in the winter climatic conditions of the epidemic years over the regions. We quantified the impact of fifteen climatic variables in France using incidence data from the Réseau des GROG surveillance network over eleven regions and nine years. At the epidemic scale, no impact of climatic factors was highlighted. At the intra-annual scale, six climatic variables had a significant impact: average temperature (5.54 ± 1.09 %), absolute humidity (5.94 ± 1.08 %), daily variation of absolute humidity (3.02 ± 1.17 %), sunshine duration (3.46 ± 1.06 %), relative humidity (4.92 ± 1.20 %) and daily variation of relative humidity (4.46 ± 1.24 %).
Since in practice the impacts of two highly correlated variables are very hard to disentangle, we performed a principal component analysis, which revealed two groups of three highly correlated climatic variables: one comprising the first three variables listed above, the other comprising the last three. These results suggest that, among the six factors that appeared significant, only two (one per group) could in fact have a real effect on influenza spread, although it is not possible to determine which ones on purely statistical grounds. Our results support the idea of an important role of climate in the spread of influenza. Influenza is one of the most significant diseases in humans, generating worldwide annual epidemics that result in about three to five million cases of severe illness and about 250,000 to 500,000 deaths [1]. Improving knowledge about key epidemiological parameters of influenza, such as survival, transmission and reproduction in hosts, is essential to upgrade surveillance networks and to develop more accurate predictive models. Better epidemic predictions would allow more appropriate public health prevention and intervention strategies. Epidemics occur mainly during the winter months in temperate countries [2–4], unlike in tropical and sub-tropical countries where they generally happen during the rainy season [5–8]. These differences suggest an impact of climate on influenza spread. Climate might affect influenza diffusion (onset, duration, size) by impacting individuals' contact rates (frequency and duration), population immunity and virus survival outside the human body. The role of weather is however not fully understood [9], despite many laboratory studies of host susceptibility according to environmental conditions [10–12] and mathematical modeling approaches analyzing the link between influenza morbidity or mortality and climatic factors [13–18].
Various climatic factors such as temperature, humidity, rainfall, UV radiation, sunshine duration and wind speed might have an impact on influenza spread. In temperate countries, humidity and temperature might play an important role. Several laboratory studies showed that cold and dry weather promotes higher virus survival outside the human body and better transmission [11, 19]. Cold air inhalation chills the nasal epithelium, leading to an inhibition of the mechanical defenses of the respiratory mucosa and of the immune system [20]. Moreover, models explaining influenza epidemics (e.g., onset, peak, mortality) according to climatic factors reinforce the role of humidity and temperature in influenza spread in the United States [13, 15] as well as in Europe [16, 21]. Rainfall might have an impact in tropical and sub-tropical countries, such as in Central and South America [22–24] and in Asia [25, 26]. Another theory suggests a link between vitamin D secretion and influenza immunity, which is supported by experiments [27, 28]. As UV radiation is involved in vitamin D production, a lack of UV radiation in winter in temperate countries leads to a reduction of vitamin D production and might boost influenza epidemics [29, 30]. Dowell [31] also suggested a role of dark/light cycles and photoperiod on the immune system, caused by melatonin fluctuations. Thereby, UV radiation and sunshine duration might have an indirect effect on influenza infections. Finally, in China, Xiao et al. [32] proposed that a low wind speed contributes to influenza spread: a strong wind may disperse the virus in the environment, limiting its diffusion. The aim of this study is to quantify the impact of several climatic factors, such as temperature, humidity and rainfall, on influenza epidemics in France, a temperate country.
The role of weather can be estimated from the variation of influenza propagation in an area according to its climate variation. Previous studies usually compared observed epidemics to epidemics modeled taking climatic factors into account, by comparing incidence or mortality within an epidemic year [13–18]. The impact of the climatic factors included in the model is supported if modeled and observed epidemics are similar. However, little information is available about influenza transmission. Modeling approaches make many assumptions about within-host virus dynamics, such as the incubation and infectious periods, the basic reproduction number R0, etc. Such hypotheses may have a strong impact on simulated influenza propagation, which might lead to a misestimation of climatic effects. In order to reduce the set of model hypotheses, we built an autoregressive model based on the shape of the observed epidemics over time. We explained the intra-seasonal variation of incidence in eleven French regions over nine epidemic years (an epidemic year runs from October of one year to April of the next) with the climatic variables listed above, to quantify their respective impacts globally over all regions, then specifically in each region for the significant climatic variables. The originality of our model is to consider that only a fraction of the human population is susceptible at the beginning of a year, due to immunity acquired in previous years. Considering loss of immunity when modeling influenza epidemics might be important [33], even though, to our knowledge, almost no studies about influenza and climate take it into account. Here we call susceptible those individuals that could be infected and develop symptoms, as we only had data about infected people presenting symptoms. We then quantified potential effects of climatic factors on the inter-seasonal variation of influenza epidemics.
To do so, we built an autoregressive linear model that explains the epidemic size according to the average value of the climatic factors over an epidemic year, for the nine epidemic years and the eleven French regions. Epidemiological data come from the Réseau des GROG (Regional Influenza Surveillance Group) sentinel network, a French surveillance network made up of general practitioners and pediatricians. These physician sentinels identify cases of respiratory pathogens, including influenza. Each region has on average 25 sentinels (from 10 to 75 depending on regions and epidemic years) involved in the Réseau des GROG sentinel network. Every week from October to April, they report the intensity of their activity by giving the number of days they worked, the number of medical acts performed and the number of acute respiratory infections (ARI), defined as the sudden onset of at least one respiratory sign (cough, rhinitis, coryza, etc.) and at least one systemic sign suggesting an acute infectious context (fever, fatigue, headache, myalgia, malaise, etc.). In addition, sentinels randomly take nasal/pharyngeal swab samples from patients with an ARI of less than 48 h. Analysis of these samples allows virological confirmation of influenza infections. Using the weekly information reported by each physician sentinel (clinical reports and virological sample analyses), the Réseau des GROG sentinel network is able to provide an estimate of the number of influenza-infected individuals, called the influenza incidence.
First, they define the ARI incidence \(I_{ARI}\), the number of ARI cases for a region and a week t, as: $$ I_{ARI}(t)=\left(\frac{Ped_{region}}{Ped_{GROG\ participants}(t)}\cdot ARI_{Ped}(t)\right)+\left(\frac{GP_{region}}{GP_{GROG\ participants}(t)}\cdot ARI_{GP}(t)\right) $$ where \(ARI_{GP}(t)\) and \(ARI_{Ped}(t)\) stand for the number of ARI cases for week t reported, respectively, by general practitioners (GP) and pediatricians of the Réseau des GROG sentinel network. \(GP_{region}\) and \(Ped_{region}\) are, respectively, the number of GPs and pediatricians of a region. \(GP_{GROG\ participants}(t)\) and \(Ped_{GROG\ participants}(t)\) represent the number of GPs and pediatricians who participated in surveillance during week t. The age of infected individuals was not taken into account, assuming that climatic factors have a uniform impact on influenza spread within the population. Second, the Réseau des GROG sentinel network estimates the influenza incidence relying on both the ARI incidence and virological data. For each week of each region, an influenza positivity rate (for all circulating strains) is defined as the ratio of the number of positive samples to the total number of samples collected over a week. It is calculated using a moving average of order 3, taking into account the positivity rate of the week concerned and those of the weeks before and after, in order to remove excessive fluctuations. We assumed that the positivity rate corresponds to the actual proportion of influenza cases among the ARI cases reported by the Réseau des GROG sentinel network. The influenza incidence \(I_{influenza}\) is defined as the ARI incidence weighted by the positivity rate \(T_{+}\): $$ I_{influenza}(t)=I_{ARI}(t) \cdot T_{+}(t) $$ Epidemiological data are available from the 2003–2004 epidemic year to the 2012–2013 epidemic year. However, we excluded the 2009–2010 epidemic year, during which the H1N1 pandemic happened, in order to study only seasonal epidemics.
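The two estimation steps above can be sketched in Python. This is a simplified sketch of the formulas, not the network's actual pipeline; the function names and the edge handling of the moving average are our assumptions.

```python
import numpy as np

def ari_incidence(ari_gp, ari_ped, gp_region, ped_region,
                  gp_participants, ped_participants):
    """Scale the weekly ARI counts reported by participating GPs and
    pediatricians up to the whole region (first formula above)."""
    return (ped_region / np.asarray(ped_participants, float)
            * np.asarray(ari_ped, float)
            + gp_region / np.asarray(gp_participants, float)
            * np.asarray(ari_gp, float))

def smoothed_positivity(positives, samples):
    """Weekly influenza positivity rate smoothed by a centered moving
    average of order 3; the first and last weeks keep their raw rate
    (an assumption: the text does not specify the edge handling)."""
    rate = np.asarray(positives, float) / np.asarray(samples, float)
    smooth = rate.copy()
    smooth[1:-1] = (rate[:-2] + rate[1:-1] + rate[2:]) / 3.0
    return smooth

def influenza_incidence(i_ari, t_plus):
    """ARI incidence weighted by the smoothed positivity rate T+."""
    return np.asarray(i_ari, float) * np.asarray(t_plus, float)
```

For example, a region with 100 GPs of which 10 reported, and 20 pediatricians of which 2 reported, scales the reported counts by factors of 10 each.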
Climatic data We chose eleven French regions: Aquitaine, Lower Normandy, Brittany, Upper Normandy, Île-de-France, Lorraine, Nord-Pas-de-Calais, Pays de la Loire, Picardy, Provence-Alpes-Côte d'Azur (PACA) and Rhône-Alpes, which have different climates. Aquitaine, Pays de la Loire, Brittany, Lower Normandy, Upper Normandy and Nord-Pas de Calais have an oceanic climate; Île-de-France, Picardy and Lorraine have an oceanic climate with continental influences; PACA has a Mediterranean climate and the climate of Rhône-Alpes combines continental, Mediterranean and mountainous influences (see Fig. 1). Map of France showing the eleven studied regions according to their climate: Aquitaine, Lower Normandy, Brittany, Upper Normandy, Nord-Pas de Calais and Pays de la Loire in blue for their oceanic climate, Île-de-France, Lorraine, and Picardy in green for their oceanic climate with continental influences, PACA in orange for its Mediterranean climate and Rhône-Alpes in yellow for its continental, Mediterranean and mountainous influences, with the geographical location of the 65 meteorological stations (in red) Climatic data were provided by Météo-France, the French national meteorological service. We picked 65 meteorological stations (see Fig. 1) to collect data in order to estimate climatic factors that globally describe each region. We had information on temperature, relative humidity, absolute humidity, rainfall, sunshine duration (highly correlated with UV radiation), and wind speed (see Additional file 1). Choosing relevant climatic factors is not necessarily easy, as illustrated by Davis et al. [34], who highlighted the challenge of selecting an appropriate measure of the humidity covariate. As epidemiological data were available weekly, we created weekly climatic variables from the daily meteorological data by averaging the daily data. The climatic variables built are defined in Table 1.
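Building the weekly climatic variables from daily station records amounts to averaging first over stations, then over the seven days of each week. A minimal sketch, assuming a (stations x days) array layout:

```python
import numpy as np

def weekly_average(daily, days_per_week=7):
    """Collapse a (stations x days) array of daily measurements into one
    regional value per week: average over stations, then over the days of
    each week. Trailing incomplete weeks are dropped (our assumption)."""
    daily = np.asarray(daily, dtype=float)
    regional_daily = daily.mean(axis=0)            # average over stations
    n_weeks = regional_daily.size // days_per_week
    trimmed = regional_daily[:n_weeks * days_per_week]
    return trimmed.reshape(n_weeks, days_per_week).mean(axis=1)
```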
Table 1 Definition of the climatic variables Climatic factors can impact influenza spread both by increasing the transmissibility of the virus and by increasing the susceptibility of its human host. One particularity of our data set is that the variability in influenza incidence is reported at two different scales: the transmission scale (intra-seasonal variation) and the epidemic scale (inter-seasonal variation). The impact of climatic factors may occur at both scales, where it will be observed in slightly different ways. At the transmission scale – during a seasonal epidemic of a given year in a given region – climatic factors favorable to influenza diffusion will lead to an observed increase in the apparent transmission of the disease. At this scale we will search for significant associations between weekly variations of climatic factors and those of the apparent disease transmission rate (defined below). The different observed epidemics (in all regions and epidemic years) will be treated as independent replicates. At the epidemic scale, the impact of a climatic factor (in a region over an entire epidemic year) may mainly be observed through an increase or decrease in the epidemic size (the total number of infected individuals). At this scale we will search for significant associations between the size of the epidemic and the average value of the different climatic factors (over an epidemic year in a region). Because the two scales imply different response variables, they will be treated separately and independently. Impact of climatic factors at the transmission scale We built an autoregressive statistical model with a lag of one week to explain variations in the weekly influenza incidence with climatic factors, for eleven French regions over nine epidemic years.
Our model is inspired by general epidemiological models in which the number of infected and symptomatic individuals at time t, I(t), is modeled as a general function depending on i) the number of infected and symptomatic individuals at time t − 1, I(t − 1), and ii) the number of individuals at time t − 1 who are susceptible to develop the symptomatic form of the disease upon infection, S(t − 1): $$ I(t) = \beta(t)\cdot I(t-1)^{a}\cdot S(t-1)^{b} $$ where a and b are constants (heterogeneity parameters) extending the mass-action type model into a more general form, which has been shown to be a relevant way to approximate epidemic shapes in populations with heterogeneous mixing [35]. β is the apparent transmission rate of the virus; a = b = 1 corresponds to the mass-action model [36]. With a logarithmic transformation the relationship becomes: $$ \log\left(I(t)\right)= \log\left(\beta(t)\right)+a\cdot \log\left(I(t-1)\right) + b\cdot \log\left(S(t-1)\right) $$ In fact, the numbers of infected and susceptible individuals are not directly observed. Let Î and Ŝ denote estimates of the numbers of infected and susceptible individuals, respectively. Considering that i) the numbers of infected and susceptible individuals are only estimated and ii) there is stochasticity in the transmission process, relationship (2) becomes: $$ \log\left(\widehat{I}(t)\right)= \log\left(\beta(t)\right)+a\cdot \log\left(\widehat{I}(t-1)\right) + b\cdot \log\left(\widehat{S}(t-1)\right)+\varepsilon_1 $$ To analyze the impact of a climatic factor \(F_c\), we considered that the transmission rate is given by: $$ \log\left(\beta(t)\right) = c\cdot F_c+d+\varepsilon_2 $$ where c quantifies the link between \(F_c\) and β, d is a constant and \(\varepsilon_2\) is a random term independent of \(F_c\), modeling the fluctuations in β that are independent of \(F_c\), i.e., due to other factors.
Not all of the human population is susceptible to influenza, e.g., due to immunity acquired from previous infections. However, estimating the influenza-susceptible (non-immune) population is difficult due to the seasonal variation of circulating strains, loss-of-immunity phenomena and the fact that asymptomatic cases are not detected. In this model, we keep a pragmatic statistical view by considering that the susceptible pool decreases linearly every week with the infection of new individuals. The estimated susceptible population Ŝ for a week t and a given region is thus given by: $$ \widehat{S}(t) = \widehat{N}-\widehat{I}_{cum}(t-1) $$ where \(\widehat{I}_{cum}\) is the number of infected individuals cumulated from the beginning of the epidemic year to week t − 1. Note that introducing \(\widehat{I}_{cum}(t-1)\) in the model implicitly introduces a link between I(t) and I(t − 2), I(t − 3), etc. \( \widehat{N} \) is a statistical parameter (constant in time) introduced to model a linear relationship between the number of individuals that are susceptible to develop the symptomatic form of the influenza infection and the cumulated number of individuals that developed a symptomatic influenza infection until t − 1. From a biological point of view, it can be interpreted as the total number of individuals that could potentially develop an observable form of the disease upon infection, but this interpretation has to be taken with caution (see Discussion). Combining equations (3), (4) and (5) we get: $$ Y(t) = c\cdot F_c + d+a\cdot Y(t-1)+b\cdot \log\left(\widehat{N}-\widehat{I}_{cum}(t-1)\right)+\varepsilon $$ where Y(t) is the logarithm of the estimated number of infected individuals. \( \varepsilon = \varepsilon_1+\varepsilon_2 \) is the total residual error, assumed to follow a centered Gaussian distribution with standard deviation σ.
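Once \( \widehat{N} \) is fixed, the combined model above is linear in (a, b, c, d), so under the Gaussian error assumption the maximum-likelihood fit reduces to ordinary least squares, and \( \widehat{N} \) itself can be profiled over a grid of candidate values. The sketch below illustrates this idea for a single region-year series; it is our own simplification, not necessarily the authors' exact inference routine.

```python
import numpy as np

def fit_transmission_model(i_hat, f_c, n_grid):
    """Fit Y(t) = c*F_c(t) + d + a*Y(t-1) + b*log(N - I_cum(t-1)) for one
    region-year series by ordinary least squares at each candidate N in
    n_grid, keeping the N with the smallest residual sum of squares."""
    i_hat = np.asarray(i_hat, float)
    f_c = np.asarray(f_c, float)
    y = np.log(i_hat)                      # Y(t): log of estimated incidence
    i_cum = np.cumsum(i_hat)               # cumulated cases up to week t
    best = None
    for n in n_grid:
        pool = n - i_cum[:-1]              # N - I_cum(t-1) for t = 1..T-1
        if np.any(pool <= 0):
            continue                       # N must exceed the cumulated cases
        X = np.column_stack([f_c[1:],                # climatic term (c)
                             np.ones(len(y) - 1),    # intercept (d)
                             y[:-1],                 # autoregressive term (a)
                             np.log(pool)])          # susceptible-pool term (b)
        coef = np.linalg.lstsq(X, y[1:], rcond=None)[0]
        resid = y[1:] - X @ coef
        rss = float(resid @ resid)
        if best is None or rss < best[0]:
            best = (rss, coef, n)
    rss, (c, d, a, b), n = best
    return {"a": a, "b": b, "c": c, "d": d, "N": n, "rss": rss}
```

On noise-free synthetic data generated from the model itself, the least-squares step recovers the generating coefficients exactly at the true N, which makes the profiling step easy to sanity-check.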
We defined \( \widehat{\alpha}=\frac{I_{T_{max}}}{\widehat{N}} \), which provides an estimate of the proportion of individuals who developed the disease (with symptoms) in the pool of individuals that could have developed it. Here \(T_{max}\) denotes the time at which influenza surveillance ends (mid-April), so that \(I_{T_{max}}\) is the cumulated number of infected individuals at the end of surveillance. α = 1 means that all individuals who could potentially become sick acquired the infection, and suggests that the disease has a sufficient transmission to reach the entire susceptible pool of the population. Conversely, α < 1 suggests that the virus spread was not sufficient to reach the entire susceptible pool. Since all the model coefficients (a, b, c, d and α) may depend on both the region (R) and the epidemic year (Y), many different models can be considered depending on how Y and R affect the coefficients. Models are synthesized as follows: $$ a(X),\ b(Z),\ c(U),\ d(V),\ \alpha(W) $$ where X, Z, U, V and W are formulas depending on R and Y. To take a few examples, let x be a generic variable that can be a, b, c, etc.: x(0) means that x = 0 in the model; x(1) means that x is constant (intercept model); x(R) means that x depends on the region; x(R + Y) means that x depends on both the region and epidemic year in an additive way, and x(R ⋅ Y) in a multiplicative way. The most complicated model considered (i.e., the complete model) is not the model where all parameters depend multiplicatively on R and Y (R ⋅ Y), which would contain too many parameters to be tractable. Since a and b are shape parameters for the spread of the epidemic, it is reasonable to assume that they are characteristics of the region (a(R) and b(R)). d affects the average transmission rate of the virus.
It can be different between regions (which show different demographic characteristics) and between epidemic years (because the circulating influenza strain is different from one epidemic year to the next), but it is reasonable to consider that it will only be slightly affected by the interaction between these two factors, ruling out d(R ⋅ Y) in favor of d(R + Y). That is why the most complicated model considered was a(R), b(R), c(R), d(R + Y), α(R ⋅ Y). Model parameters were inferred using maximum likelihood estimation. The analysis was performed in two steps. In the first step, we tried to reduce as much as possible the complexity of the model that will be used to test climatic factors and estimate their impact. An AIC criterion was used, selecting the model having the lowest AIC; if the difference between two AIC values is less than two, the most parsimonious model is chosen. In that procedure, the coefficient c was fixed to zero (model c(0)) in order to select a model that is independent of climatic data. In the second step, climatic factors were introduced into the model selected in step 1. In this section, we examine how increases or decreases in the value of climatic factors during an epidemic can impact the apparent transmission rate. Global variations in the average value of the climatic factors between regions and epidemic years are not of interest here. That is why climatic factors were first centered within years and regions: for a climatic factor f measured during a week t, an epidemic year Y and a region R, we define: $$ \varphi_{t,Y,R} = f_{t,Y,R} - \overline{f_{Y,R}} $$ where \( \overline{f_{Y,R}} \) denotes the mean of climatic factor f over the surveillance period of epidemic year Y in region R. To allow easy comparison between the estimated coefficients of the fifteen climatic factors, each of them was then reduced: $$ F_{t,Y,R}=\frac{\varphi_{t,Y,R}}{sd_{\varphi}} $$ where \(sd_{\varphi}\) stands for the standard deviation of the variable φ, computed
over all epidemic weeks t, epidemic years Y and regions R. In total, fifteen climatic factors were tested, leading to potentially important multiple-testing problems. Since climatic factors are not independent, applying a simple Bonferroni correction would lead to a loss of statistical power [37]. Instead, we preferred a multiple-testing correction based on permutation tests [38]. The idea of the permutation test we developed here is to keep the same values for all the climatic factors but to shuffle the week indexes within a given region and a given epidemic year (in order to break the potential association between any climatic factor and the observed course of the epidemic). Mathematically, let \(F_{t,Y,R}\) be the value of the climatic factor F during the t-th week of region R and epidemic year Y, and let P be a permutation of the week indexes t. The permuted climatic factor associated with permutation P in region R and year Y is then defined by \(F_{P(t),Y,R}\). The main advantage of this permutation procedure is that it conserves the within-epidemic-year-and-region correlation structure between the climatic factors. One permutation of the climatic factors is then defined as a set of permutations (one for each epidemic year in each region), leading to a set of permuted climatic factors in all regions and for all epidemic years. Note that these permuted factors have no reason to be correlated with the apparent disease transmission rate (the permutation is purely random) and hence can be considered as realizations of the null hypothesis H0: "the apparent transmission rate is not linked to any climatic factor". We used the maximum absolute value of the estimated climatic-factor coefficients (\(c_{max}\)) as a test statistic for H0. We generated 10,000 permutations of the climatic factors (see above) and for each one we calculated \(c_{max}\), leading to 10,000 independent realizations of \(c_{max}\) under H0.
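The permutation machinery just described can be sketched as follows. `c_max_statistic` stands for a caller-supplied routine that refits the model for each permuted factor and returns the largest |c|; the dict layout (one (factors x weeks) array per region-year) and the function names are our assumptions.

```python
import numpy as np

def permute_within_region_year(factors, rng):
    """factors: dict mapping (region, year) -> (n_factors x n_weeks) array.
    Shuffle week indexes with ONE permutation per region-year, applied to
    all factors at once so their mutual correlation structure is kept."""
    permuted = {}
    for key, block in factors.items():
        perm = rng.permutation(block.shape[1])
        permuted[key] = block[:, perm]
    return permuted

def permutation_threshold(factors, c_max_statistic, n_perm=10000, seed=0):
    """Empirical 95 % quantile of c_max under H0 ('the apparent
    transmission rate is not linked to any climatic factor')."""
    rng = np.random.default_rng(seed)
    stats = [c_max_statistic(permute_within_region_year(factors, rng))
             for _ in range(n_perm)]
    return np.quantile(stats, 0.95)
```

Applying the same column permutation to every factor of a region-year is what preserves the between-factor correlation structure under H0.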
The 95 % quantile of this distribution defines the significance threshold. Climatic factors are considered significantly linked to the apparent transmission rate if the absolute value of their c estimate from the data lies above this threshold. Model parameters are estimated using maximum likelihood. Standard errors of the parameter estimates were determined using the square roots of the diagonal elements of the covariance matrix (the inverse of the negative expected value of the Hessian matrix). Model implementation and permutation tests were performed in Python. Impact of climatic factors at the epidemic scale To evaluate the impact of climatic factors at the epidemic scale, we considered as a response variable (ES) the ratio of the cumulated number of infected individuals across the entire epidemic period (from the first week of the epidemic in the first affected region to the last week in the last affected region) to the total population, an indicator of the epidemic size. As individuals infected in a previous year are immunized the following year if there is little influenza virus evolution (i.e., antigenic drift), the epidemic size of one year determines the number of susceptible individuals the year after. We expected a negative correlation between the epidemic sizes of consecutive years: if the epidemic size was high in the previous year, fewer individuals are susceptible the year after, leading to a smaller epidemic. That is why we considered an autoregressive linear model taking into account the correlation between the epidemic size of an epidemic year and that of the previous one. We used a logarithmic transformation in order to meet the normality and homoscedasticity assumptions on the residuals.
The model is defined as: $$ \log\left(ES_{Y,R}\right) = a_0+a_Y+a_R+b\cdot \log\left(ES_{Y-1,R}\right)+c\cdot \overline{F_{Y,R}} $$ where \(a_0\), b and c are constant model parameters and \(a_Y\) (respectively \(a_R\)) models potential systematic variations in the epidemic size between epidemic years (respectively regions). These two terms account for the fact that some regions may be more prone to important epidemics (e.g., due to population demography) and that the strains circulating in some epidemic years can be more virulent, or affect a larger part of the human population due to more important genetic differences with the strains of the previous epidemic years. \( \overline{F_{Y,R}} \) denotes the mean value of climatic factor F over the entire epidemic year. First, we selected the model parameters (\(a_Y\), \(a_R\) and b) using an AIC criterion, and then we assessed the impact of climatic factors. Multiple hypothesis testing was corrected as in the previous section. Values of Y and R were shuffled together (pairs of values for Y and R were randomly re-attributed to all epidemics). For a permutation P, new climatic factors were built as \( \overline{F_{P(Y,R)}} \). The advantage of this permutation procedure is that, as above, it keeps the covariance structure between the climatic factors. As previously, the permutation test is used to determine a significance threshold for the c coefficients, using the maximum absolute estimated value of the c coefficients as a statistic. Model parameters were estimated using the classical tools of linear models implemented in R 3.1.2 [39]. In order to reduce the complexity of the model, we performed an AIC selection without climatic factors. According to the AIC criterion, we chose the model with all coefficients (a, b, d and α) independent of regions and epidemic years (see Table 2). Then we built models adding each climatic factor to the chosen model.
Finally we made permutations to test the impact of the climatic factors as described in the Methods section. Table 2 AIC selection at the transmission scale Six climatic factors appeared significant: the average absolute humidity, the average temperature, the average relative humidity, the daily variation of relative humidity, the sunshine duration and the daily variation of absolute humidity (see Fig. 2). The parameters and impacts of these climatic factors are summarized in Table 3. In order to search for confounding effects we built a principal component analysis (PCA) on the climatic data using R 3.1.2 [39] and the package ade4 [40–42]. The correlation circle of the PCA shows the correlations between variables (see Fig. 3). Two groups of variables are observed: on the one hand, average temperature, average absolute humidity and daily variation of absolute humidity, which are positively correlated; on the other hand, average relative humidity, which is negatively correlated with daily variation of relative humidity and sunshine duration. Theoretical distribution under the null hypothesis with the threshold (the 95th quantile) in green and the |c| values in red, standing for the climatic impacts of each factor estimated for the eleven regions and for the nine epidemic years (from 2003–2004 to 2012–2013, except 2009–2010) at the transmission scale.
1: Average temperature, 2: Daily variation of temperature, 3: Relative weekly variation of temperature, 4: Absolute weekly variation of temperature, 5: Average relative humidity, 6: Daily variation of relative humidity, 7: Relative weekly variation of relative humidity, 8: Absolute weekly variation of relative humidity, 9: Average absolute humidity, 10: Daily variation of absolute humidity, 11: Relative weekly variation of absolute humidity, 12: Absolute weekly variation of absolute humidity, 13: Average wind speed, 14: Rainfall height, 15: Sunshine duration Table 3 Global climatic impacts Correlation circle of the principal component analysis (PCA) on climatic data. A: Average temperature, B: Average absolute humidity, C: Average relative humidity, D: Daily variation of relative humidity, E: Sunshine duration, F: Daily variation of absolute humidity. The PCA explains 85.47 % of the variance, with its first two axes explaining, respectively, 48.73 and 36.74 %. Besides the evaluation of the impact of climatic factors at the transmission scale, the model built allowed the susceptible population for an epidemic year, \( \widehat{N} \), to be estimated, with \( \widehat{\alpha} \) providing an estimate of the proportion of individuals who developed the disease in the pool of individuals that could have developed it. In the fifteen climatic models, estimates of α ranged between 0.98 and 1 with a very low standard deviation (< 0.01). Regional and seasonal variations appear in the epidemic size (see Fig. 4). In order to evaluate the impact of climatic factors on these variations we first chose a model according to the AIC criterion and second we built models with each climatic factor and tested the climatic impacts with permutations.
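The epidemic-scale procedure described above — a dummy-coded fit of the log-linear model, then permutation of the (Y, R) pairs to obtain a significance threshold for c — can be sketched in Python. Everything below is a hypothetical illustration: the data are synthetic placeholders and a plain least-squares fit stands in for the linear-model tools actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_regions = 9, 11

# Synthetic placeholders: log epidemic sizes log(ES[Y, R]) and the mean
# value of one climatic factor per epidemic year and region.
log_es = rng.normal(-4.0, 0.1, size=(n_years, n_regions))
fbar = rng.normal(size=(n_years, n_regions))

def design(fvals):
    """Dummy-coded design matrix for
    log(ES[Y,R]) = a0 + aY + aR + b*log(ES[Y-1,R]) + c*Fbar[Y,R];
    the first usable year and the first region are the baseline levels."""
    rows, y = [], []
    for Y in range(1, n_years):
        for R in range(n_regions):
            year_d = np.eye(n_years)[Y][2:]        # aY dummies
            region_d = np.eye(n_regions)[R][1:]    # aR dummies
            rows.append(np.concatenate(
                ([1.0], year_d, region_d, [log_es[Y - 1, R]], [fvals[Y, R]])))
            y.append(log_es[Y, R])
    return np.array(rows), np.array(y)

def fit_c(fvals):
    """Least-squares estimate of the climatic coefficient c."""
    X, y = design(fvals)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[-1]

observed = abs(fit_c(fbar))

# Permutation test: (Y, R) pairs are shuffled together -- which would keep
# the covariance structure between factors if several were present -- and
# the model is refitted on each shuffled factor.
null = []
for _ in range(200):
    perm = rng.permutation(n_years * n_regions)
    shuffled = fbar.ravel()[perm].reshape(n_years, n_regions)
    null.append(abs(fit_c(shuffled)))

threshold = np.quantile(null, 0.95)   # the 95% quantile of the null
significant = observed > threshold
```

With several climatic factors, the statistic would be the maximum |c| across factors, as in the text, which builds the multiple-testing correction into the threshold itself.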
Boxplot of the ratio of the cumulated number of infected individuals across the entire epidemic period to the total population (Y) of the eleven regions according to the nine epidemic years The auto-regressive coefficient b was not retained by the AIC selection procedure (see Table 4). That is why we chose a model considering only seasonal and regional variations to evaluate the impact of climatic factors. Table 4 AIC selection at the epidemic scale No climatic factor appeared significant at the epidemic scale (see Fig. 5), meaning that none of the climatic factors well explained the variation of epidemic size between regions and epidemic years. Theoretical distribution under the null hypothesis with the threshold (the 95th quantile) in green and the |c| values in red, standing for the climatic impacts of each factor estimated for the eleven regions and for the nine epidemic years (from 2003–2004 to 2012–2013, except 2009–2010) at the epidemic scale. 1: Average temperature, 2: Daily variation of temperature, 3: Relative weekly variation of temperature, 4: Absolute weekly variation of temperature, 5: Average relative humidity, 6: Daily variation of relative humidity, 7: Relative weekly variation of relative humidity, 8: Absolute weekly variation of relative humidity, 9: Average absolute humidity, 10: Daily variation of absolute humidity, 11: Relative weekly variation of absolute humidity, 12: Absolute weekly variation of absolute humidity, 13: Average wind speed, 14: Rainfall height, 15: Sunshine duration Considering that variations in epidemic size could not be explained by our (measured) climatic variables, we then tried to decompose these variations into three sources. First, variations in region characteristics (e.g., population size or non-measured climatic factors) can lead to systematic differences between regions. Second, temporal variations (e.g., in strain characteristics) can lead to systematic increases or decreases of epidemic sizes in all regions.
Third, local conditions (in given epidemic years and regions) may also affect epidemic sizes. To quantify these three sources of variation, we built a model considering epidemic year and region as random variables: \( \log(ES_{Y,R}) = a_0 + a_Y + a_R + \varepsilon \), where \(a_Y\) (respectively \(a_R\)) is distributed according to a centered Gaussian distribution with a standard deviation \(\sigma_Y\) (respectively \(\sigma_R\)). ε stands for the residual variations, taking into account the local variations of a given epidemic year and region; it is distributed according to a centered Gaussian distribution with a standard deviation \(\sigma_\varepsilon\). The homoscedasticity of the residuals is shown in Additional file 2: Figure S1. Parameters were estimated with R 3.1.2 [39] using the package lme4 [43, 44]. We found \( {\widehat{\sigma}}_Y=0.036 \), \( {\widehat{\sigma}}_R=0.013 \) and \( {\widehat{\sigma}}_{\varepsilon }=0.0217 \), meaning that variations from one epidemic year to another, from one region to another and due to local conditions account for 50.9, 18.4 and 30.7 %, respectively. In the present paper, we presented the results of the analysis of the statistical link between influenza spread and fifteen climatic factors. Data were obtained from the French Réseau des GROG sentinel network. The network is based on voluntary practitioners who i) record acute respiratory infections and ii) randomly send nasal samples for an antigenic confirmation (or rejection) of influenza infection. Based on those two pieces of information, the Réseau des GROG sentinel network provides influenza incidence estimates of clinical cases. Two metrics were used for linking virus spread to climatic data: weekly incidence data of clinical cases and the epidemic size – measured as the total number of recorded clinical cases over the epidemic period. Results of the analysis failed to isolate any correlation between epidemic size and climatic factors.
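As a side note, the percentages quoted above (50.9, 18.4 and 30.7 %) are consistent with shares of the summed standard deviations, rather than of the variances; a quick arithmetic check:

```python
# Shares of the three sources of variation in epidemic size, computed from
# the estimated standard deviations quoted in the text.
sigma_year, sigma_region, sigma_local = 0.036, 0.013, 0.0217

total = sigma_year + sigma_region + sigma_local
shares = [round(100 * s / total, 1)
          for s in (sigma_year, sigma_region, sigma_local)]
# shares -> [50.9, 18.4, 30.7], matching the percentages in the text
```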
Regarding weekly incidence data, we considered that incidence at time t was first affected by both the number of infected and susceptible individuals at time t − 1, as is classically assumed in epidemic dynamic models of infectious diseases [36]. Six climatic factors were found to be significantly linked to influenza spread: average temperature, average absolute and relative humidity, daily variations of absolute and relative humidity, as well as sunshine duration. However, a principal component analysis revealed that, among these six factors, two groups of three highly correlated factors could be separated. From a practical point of view, this implies that within each of the two groups, it is likely that only one factor has a biological link to influenza spread, the two remaining factors being linked to the disease spread because they are linked to the first factor (confounding effect). The first group of factors is made up of average temperature and absolute humidity, and daily variations of absolute humidity. The role of cold and dry weather on influenza spread has been highlighted by laboratory studies [19, 20] and modeling approaches in temperate countries [13, 15, 16, 21] including France [45]. Moreover, models that included weekly variations of both temperature and absolute humidity in Israel [46] and in New York City [47] predicted influenza epidemics reliably (with better estimations using both factors than only one). That is why both the average temperature and absolute humidity seem to play an important role in influenza spread. The second group of factors is made up of average relative humidity, daily variations of relative humidity and sunshine duration. Both laboratory [11] and simulation [14] studies highlighted the impact of relative humidity. As for sunshine duration, a decrease in sunshine might favor influenza spread [31], but surprisingly our results showed a positive impact of sunshine duration on influenza epidemic spread.
That is why the average relative humidity might impact influenza spread whereas sunshine duration might be a confounding factor. Overall, the impact of the significant factors remained relatively low (a few percent). This is not surprising when we compare our finding with what is found in the literature (3 % impact of absolute humidity in the Netherlands [21], less than 2 % impact of both absolute humidity and temperature on influenza mortality in the USA [15]). However, it is important to raise reasonable hypotheses for explaining why the impact of climatic factors is found to be so low. First, low impacts can arise from the presence of important noise in the data. The Réseau des GROG sentinel network is based on a limited number of voluntary practitioners, leading to noise in incidence estimates. Second, in order to obtain relatively reliable incidence estimates, we had to average incidence over entire regions. Climate and disease spread can be disparate within a region, weakening the link between climatic factors and disease spread. Third, the model, which has a lag of one week (linking incidence at time t with the number of susceptible and infected individuals at time t − 1), may be too simple. Indeed, simple compartmental models may not be sufficient to describe an influenza epidemic properly. Models are becoming more complex by, for example, taking into account more heterogeneous influenza transmission in the population (e.g., agent-based models) and including a contact network among people [48–50]. Finally, correlation between influenza spread and single climatic factors can be too simplistic. Climate can have a strong impact on disease spread, but in a more complex way involving several factors and potential interactions between these factors. Such combinations of factors were not considered in the model because it would have led to a huge number of hypothesis tests.
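To give a sense of the scale of that problem, counting the candidate combinations among the fifteen climatic factors:

```python
from math import comb

# Number of candidate models if every combination of the 15 climatic
# factors were tested, illustrating the multiple-testing explosion.
n_factors = 15
pairs = comb(n_factors, 2)        # 105 two-factor combinations
triples = comb(n_factors, 3)      # 455 three-factor combinations
subsets = 2 ** n_factors - 1      # 32767 non-empty subsets of factors
```

Each combination could additionally carry interaction terms, multiplying the number of models, and every extra test inflates the multiple-testing burden that the permutation procedure must correct for.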
Such an investigation of the most relevant combinations of climatic factors would be more relevantly achieved using descriptive statistics, but this was not the purpose of our study. Another important question arising from our results concerns the disparity of the link of climatic factors with influenza spread using weekly incidence data and epidemic size data. The first obvious potential explanation is the lower statistical power associated with epidemic size data. Epidemic size is estimated only once per year while incidence is estimated every week, so epidemic size data contain less statistical information. An interesting alternative hypothesis could be that epidemic size and weekly incidence data capture different biological phenomena. Basically, incidence (corrected by the number of susceptible and infected individuals) may vary between weeks according to climatic factors for two reasons: i) because individuals are more likely to develop the clinical form of the infection and ii) because infection is more likely, i.e., the virus transmission rate increases. Epidemic size is schematically the result (product) of two phenomena: i) the proportion of individuals in the region that are susceptible to develop the clinical form of the disease upon infection and ii) the fraction of these individuals that will be reached by the virus, i.e., that will effectively become infected. While the latter phenomenon is linked to the virus transmission rate, the link is not linear. In particular, for large enough transmission rates, all susceptible individuals become infected during an epidemic and this term is only weakly affected by the transmission rate. Interestingly, in that case, epidemic sizes are mainly an indicator of individuals' susceptibility and hence contain information that differs from that of weekly incidence data.
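The saturation effect described above can be illustrated with the classical final-size relation of the SIR model (going back to Kermack and McKendrick [36]), \(z = 1 - e^{-R_0 z}\), where z is the fraction of susceptibles ultimately infected; the sketch below solves it by fixed-point iteration:

```python
import math

def final_size(r0, iters=200):
    """Fraction z of initial susceptibles ultimately infected in an SIR
    epidemic, from the final-size relation z = 1 - exp(-r0 * z), solved
    by fixed-point iteration (the non-zero root exists for r0 > 1)."""
    z = 0.5
    for _ in range(iters):
        z = 1.0 - math.exp(-r0 * z)
    return z

# For moderate R0 the attack rate still responds to the transmission rate,
# but for large R0 essentially all susceptibles are infected.
sizes = {r0: final_size(r0) for r0 in (1.5, 3.0, 10.0)}
# sizes[1.5] ~ 0.58, sizes[3.0] ~ 0.94, sizes[10.0] ~ 1.0
```

Once the attack rate saturates near one, the epidemic size carries little information about the transmission rate itself, which is consistent with climatic factors showing up at the weekly-incidence scale but not at the epidemic scale.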
The proportion of the susceptible (to the clinical disease) population that ultimately develops the disease is an important quantity for both data analysis interpretation and disease management. In data analysis, it tells us how to interpret epidemic size data. When all susceptible individuals acquire the infection, then epidemic size is an indicator of the proportion of susceptible individuals in the population, i.e., the proportion of individuals that are in a state of health (in terms of innate and acquired immunity) that does not permit them to control the disease upon infection. From a management point of view, if all individuals acquire the infection, this means that the virus transmission rate is high and reducing it will not necessarily reduce its impact. In our study, we introduced a term that we interpreted as the proportion of susceptible individuals who ultimately got infected. This is an interesting result, but one which should be interpreted with great caution. First, susceptibility is here defined as the ultimate development of the disease upon infection. It is hence not necessarily equivalent to susceptibility defined by antibody profiles. Second, it is important to recall that it is primarily a model parameter introduced for statistical convenience (i.e., a shape parameter). The fact that it equals one in our model only means that the decay in disease incidence at the end of the epidemic can be explained without having to assume any susceptible pool that would have escaped the infection. Since the study was not designed for estimating this biological quantity, we invite the reader not to interpret it as a formal estimation procedure of the proportion of susceptible individuals, but as a point raising interesting questions. Several improvements could be brought to our analysis. First, it would be interesting to differentiate between the different subtypes of influenza.
Influenza epidemics are often due to several subtypes that generate potentially shifted epidemics [51]. Practically, in our model this would imply that the number of susceptible individuals does not necessarily decrease with the cumulated number of influenza cases from all subtypes, but is subtype specific. Even though the use of permutation tests tends to reduce this problem, it would still be interesting to study the different subtypes separately because they might be differentially affected by climatic factors. Unfortunately, this information was not available in our data set. The second interesting improvement that could be brought to our model is the consideration of different age classes. Indeed, influenza is known to spread differentially within and between age classes [52–54]. However, introducing age classes in our model would tend to make it more complex. In the current paper we adopted a practical point of view by considering only the global spread of the epidemic without considering the heterogeneity of individuals that may exist within a population (age classes, social classes, job-dependent degree of exposure, etc.). Proper modeling of the relationship between climatic variables and infectious disease spread and impact is a challenging task. We presented a way to reconcile statistical and dynamical models of infectious diseases that keeps the simplicity of the statistical approach while introducing key knowledge about infection dynamics (such as the decay of incidence after the epidemic peak). We performed our study on two important influenza response variables at two levels: intra- and inter-annual. Linking variations of weekly incidence data with climatic factors is relevant because it allows anticipating the decay or increase in the number of influenza cases in the weeks to come. The epidemic size is also a very important measure because it allows quantifying the impact of influenza according to climatic factors.
This is especially valuable in the context of global climate changes to anticipate the future impact of influenza. World Health Organization: Influenza (Seasonal). 2014 http://www.who.int/mediacentre/factsheets/fs211/en/. Viboud C, Boëlle P-Y, Pakdaman K, Carrat F, Valleron A-J, Flahault A. Influenza Epidemics in the United States, France, and Australia, 1972–1997. Emerg Infect Dis J. 2004;10:32–9. Tamerius JD, Shaman J, Alonso WJ, Bloom-Feshbach K, Uejio CK, Comrie A, Viboud C. Environmental predictors of seasonal influenza epidemics across temperate and tropical climates. PLoS Pathog. 2013;9:e1003194. Finkelman BS, Viboud C, Koelle K, Ferrari MJ, Bharti N, Grenfell BT. Global Patterns in Seasonal Activity of Influenza A/H3N2, A/H1N1, and B from 1997 to 2005: Viral Coexistence and Latitudinal Gradients. PLoS One. 2007;2:e1296. Moura FEA, Perdigão ACB, Siqueira MM. Seasonality of influenza in the tropics: a distinct pattern in Northeastern Brazil. Am J Trop Med Hyg. 2009;81:180–3. Rao BL, Banerjee K. Influenza surveillance in Pune, India, 1978–90. Bull WHO. 1993;71:177–81. Rao BL, Yeolekar LR, Kadam SS, Pawar MS, Kulkarni PB, More BA, Khude MR. Influenza surveillance in Pune, India, 2003. Southeast Asian J Trop Med Public Health. 2005;36:906–9. Dosseh A, Ndiaye K, Spiegel A, Sagna M, Mathiot C. Epidemiological and virological influenza survey in Dakar, Senegal: 1996-1998. Am J Trop Med Hyg. 2000;62:639–43. Fuhrmann C. The effects of weather and climate on the seasonality of influenza: what we know and what we need to know. Geography Compass. 2010;4:718–30. Lowen AC, Steel J, Mubareka S, Palese P. High temperature (30 °C) blocks aerosol but not contact transmission of influenza virus. J Virol. 2008;82:5650–2. Lowen AC, Mubareka S, Steel J, Palese P. Influenza virus transmission is dependent on relative humidity and temperature. PLoS Pathog. 2007;3:e151. McDevitt J, Rudnick S, First M, Spengler J. 
Role of absolute humidity in the inactivation of influenza viruses on stainless steel surfaces at elevated temperatures. Appl Environ Microbiol. 2010;76:3943–7. Shaman J, Pitzer VE, Viboud C, Grenfell BT, Lipsitch M. Absolute humidity and the seasonal onset of influenza in the continental United States. PLoS Biol. 2010;8:e1000316. Żuk T, Rakowski F, Radomski JP. A model of influenza virus spread as a function of temperature and humidity. Comput Biol Chem. 2009;33:176–80. Barreca AI, Shimshack JP. Absolute humidity, temperature, and influenza mortality: 30 years of county-level evidence from the United States. Am J Epidemiol. 2012;176:S114–S22. van Noort SP, Águas R, Ballesteros S, Gabriela M, Gomes M. The role of weather on the relation between influenza and influenza-like illness. J Theor Biol. 2012;298:131–7. Jaakkola K, Saukkoriipi A, Jokelainen J, Juvonen R, Kauppila J, Vainio O, Ziegler T, Rönkkö E, Jaakkola JJK, Ikäheimo TM, et al. Decline in temperature and humidity increases the occurrence of influenza in cold climate. Environ Health. 2014;13:1–8. Chong KC, Goggins W, Zee BCY, Wang MH. Identifying meteorological drivers for the seasonal variations of influenza infections in a subtropical city - Hong Kong. Int J Environ Res Public Health. 2015;12:1560–76. Lofgren E, Fefferman NH, Naumov YN, Gorski J, Naumova EN. Influenza seasonality: underlying causes and modeling theories. J Virol. 2007;81:5429–36. Eccles R. An explanation for the seasonality of acute upper respiratory tract viral infections. Acta Otolaryngol. 2002;122:183–91. te Beest DE, van Boven M, Hooiveld M, van den Dool C, Wallinga J. Driving factors of influenza transmission in the Netherlands. Am J Epidemiol. 2013;178:1469–77. Soebiyanto RP, Clara W, Jara J, Castillo L, Sorto OR, Marinero S, de Antinori MEB, McCracken JP, Widdowson M-A, Azziz-Baumgartner E. The role of temperature and humidity on seasonal influenza in tropical areas: Guatemala, El Salvador and Panama, 2008–2013. PLoS One. 
2014;9:e100659. Alonso WJ, Viboud C, Simonsen L, Hirano EW, Daufenbach LZ, Miller MA. Seasonality of influenza in Brazil: a traveling wave from the Amazon to the Subtropics. Am J Epidemiol. 2007;165:1434–42. Mahamat A, Dussart P, Bouix A, Carvalho L, Eltges F, Matheus S, Miller MA, Quenel P, Viboud C. Climatic drivers of seasonal influenza epidemics in French Guiana, 2006–2010. J Infect. 2013;67:141–7. Soebiyanto RP, Adimi F, Kiang RK. Modeling and predicting seasonal influenza transmission in warm regions using climatological parameters. PLoS One. 2010;5:e9450. Chumkiew S, Srisang W, Jaroensutasinee M, Jaroensutasinee K. Climatic factors affecting on influenza cases in Nakhon Si Thammarat. World Acad Sci Eng Technol. 2007;1:633–6. Helming L, Böse J, Ehrchen J, Schiebe S, Frahm T, Geffers R, Probst-Kepper M, Balling R, Lengeling A. 1α,25-dihydroxyvitamin D3 is a potent suppressor of interferon γ-mediated macrophage activation. Blood. 2005;106:4351–8. Abu-Amer Y, Bar-Shavit Z. Impaired bone marrow-derived macrophage differentiation in vitamin D deficiency. Cell Immunol. 1993;151:356–68. Cannell JJ, Vieth R, Umhau JC, Holick MF, Grant WB, Madronich S, Garland CF, Giovannucci E. Epidemic influenza and vitamin D. Epidemiol Infect. 2006;134:1129–40. Urashima M, Segawa T, Okazaki M, Kurihara M, Wada Y, Ida H. Randomized trial of vitamin D supplementation to prevent seasonal influenza A in schoolchildren. Am J Clin Nutr. 2010;91:1255–60. Dowell SF. Seasonal variation in host susceptibility and cycles of certain infectious diseases. Emerg Infect Dis. 2001;7:369–74. Xiao H, Tian H, Lin X, Gao L, Dai X, Zhang X, Chen B, Zhao J, Xu J. Influence of extreme weather and meteorological anomalies on outbreaks of influenza A (H1N1). Chin Sci Bull. 2013;58:741–9. Yaari R, Katriel G, Huppert A, Axelsen JB, Stone L. Modelling seasonal influenza: the role of weather and punctuated antigenic drift. J R Soc Interface. 2013;10:20130298. Davis RE, McGregor GR, Enfield KB. 
Humidity: a review and primer on atmospheric moisture and human health. Environ Res. 2016;144(Part A):106–16. Roy M, Pascual M. On representing network heterogeneities in the incidence rate of simple epidemic models. Ecol Complex. 2006;3:80–90. Kermack WO, McKendrick AG. A contribution to the mathematical theory of epidemics. Proc R Soc Lond A. 1927;115:700–21. Nakagawa S. A farewell to Bonferroni: the problems of low statistical power and publication bias. Behav Ecol. 2004;15:1044–5. Good P. Permutation tests: a practical guide to resampling methods for testing hypotheses. New York: Springer; 2000. R Core Team. R: A Language and Environment for Statistical Computing. 2014. Dray S, Dufour AB, Chessel D. The ade4 package-II: Two-table and K-table methods. R News. 2007;7:47–52. Dray S, Dufour A-B, et al. The ade4 package: implementing the duality diagram for ecologists. J Stat Softw. 2007;22:1–20. Chessel D, Dufour AB, Thioulouse J. The ade4 package-I: One-table methods. R News. 2004;4:5–10. Bates D, Maechler M, Bolker BM, Walker S. Fitting Linear Mixed-Effects Models using lme4. 2015. Bates D, Maechler M, Bolker B, Walker S. lme4: Linear mixed-effects models using Eigen and S4. 2015. Viboud C, Pakdaman K, Boëlle P-Y, Wilson M, Myers M, Valleron A-J, Flahault A. Association of influenza epidemics with global climate variability. Eur J Epidemiol. 2004;19:1055–9. Axelsen JB, Yaari R, Grenfell BT, Stone L. Multiannual forecasting of seasonal influenza dynamics reveals climatic and evolutionary drivers. Proc Natl Acad Sci U S A. 2014;111:9538–42. Shaman J, Karspeck A. Forecasting seasonal outbreaks of influenza. Proc Natl Acad Sci U S A. 2012;109:20425–30. Eubank S, Guclu H, Anil Kumar VS, Marathe MV, Srinivasan A, Toroczkai Z, Wang N. Modelling disease outbreaks in realistic urban social networks. Nature. 2004;429:180–4. Lunelli A, Pugliese A, Rizzo C. Epidemic patch models applied to pandemic influenza: contact matrix, stochasticity, robustness of predictions.
Math Biosci. 2009;220:24–33. Balcan D, Gonçalves B, Hu H, Ramasco JJ, Colizza V, Vespignani A. Modeling the spatial spread of infectious diseases: the GLobal Epidemic and Mobility computational model. J Comput Sci. 2010;1:132–45. Arkema JMS, Meijer A, Meerhoff TJ, Velden J, Paget WJ. Epidemiological and virological assessment of influenza activity in Europe, during the 2006-2007 winter. Euro Surveill. 2008;13:18958. Del Valle SY, Hyman JM, Hethcote HW, Eubank SG. Mixing patterns between age groups in social networks. Soc Netw. 2007;29:539–54. Glass K, Mercer GN, Nishiura H, McBryde ES, Becker NG. Estimating reproduction numbers for adults and children from case data. J R Soc Interface. 2011;8:1248–59. Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, Massari M, Salmaso S, Tomba GS, Wallinga J, et al. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 2008;5:e74. We acknowledge the practitioners of the Réseau des GROG sentinel network and the labs involved in the surveillance. We thank Isabelle Daviaud from Open Rome who sorted the epidemiological data from the Réseau des GROG sentinel network and Annick Auffray from Météo-France for her kind help with meteorological data. This work was archived using the computing facilities of the CC LBBE/PRABI and of the CC IN2P3. It was performed within the framework of the LABEX ECOFECT (ANR-11-LABX-0048) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). Epidemiological data are available in Additional file 3 and climatic data are available on the Météo-France website. All authors participated in the design of the study. MR conducted the analysis and prepared the initial draft of the manuscript. DF and DP supervised the analysis and writing of the manuscript. JMC is a general practitioner specialist in influenza surveillance and BL is a virologist specialist in influenza viruses.
All authors contributed to the writing of and critically revised the manuscript. All authors approved the final version of the manuscript. Ethical approval and consent to participate Surveillance forms were routinely used in the influenza seasons, and oral informed consent was obtained from the ARI patient at the moment of swab taking in accordance with national regulations. All swab results and forms were anonymized by the laboratories before they were sent to the GROG network coordination, and only identified by the number given by each laboratory for virological tests. In accordance with the French applicable law n°2011–2012 of the 29th December, article 5, no clearance of an Ethics Committee is required in France for the retrospective analysis of anonymized data collected within routine influenza surveillance schemes. University Lyon 1, CNRS, UMR 5558, Biometry and Evolutionary Biology laboratory, Bât. Grégor Mendel 43 bd du 11 novembre 1918, Villeurbanne Cedex, F-69622, France Marion Roussel, Dominique Pontier & David Fouchet LabEx ECOFECT, Eco-evolutionary Dynamics of infectious Diseases, University of Lyon, Lyon, France OPEN ROME, Paris, France Jean-Marie Cohen Laboratory of Virology, Centre National de Référence des Virus Influenzae, Hospices Civils de Lyon, Lyon, France Bruno Lina Virpath, EA4610, Faculty of Medecine Lyon Est, University Claude Bernard Lyon 1, Cedex08, Lyon, 69372, France Correspondence to Marion Roussel. Meteorological data description. (PDF 211 kb) Figure of the homoscedasticity of the residuals of the model at the epidemic scale. (PDF 97 kb) Epidemiological data. (PDF 136 kb) Roussel, M., Pontier, D., Cohen, J. et al. Quantifying the role of weather on seasonal influenza.
BMC Public Health 16, 441 (2016). doi:10.1186/s12889-016-3114-x Received: 19 December 2015
CALOR 2016, EXCO in Daegu, Republic of Korea
Towards a technological prototype for a high-granularity electromagnetic calorimeter for future lepton colliders
New concepts for calorimetry
Taikan Suehara (Kyushu University)
A key ingredient to meet the requirements of the physics program at energy frontier machines such as future lepton colliders or the LHC are calorimeters with unprecedented high granularity. These kinds of calorimeters allow for the application of particle flow algorithms that rely on an excellent particle separation within the calorimeter. The R&D program comprises an electromagnetic calorimeter with tungsten — with a radiation length $X_0=3.5$ mm, Molière radius $R_M=9$ mm and interaction length $\lambda_I=96$ mm — as absorber material and silicon as the active material. French and Japanese groups within the CALICE collaboration are conducting an intensive program for the development of highly granular calorimeters. A physics prototype with a pixel size of $1 \times 1~{\rm cm^2}$, dedicated mainly to demonstrating the physics potential of such a calorimeter, was successfully operated in the years 2005-2011. It has been proven that the pixelised silicon wafers are particularly suited to assuring a high separation power while allowing at the same time a stable detector operation over a long time. These are the reasons why this technology has been chosen for the upgrade of the forward calorimeters of the CMS experiment at the LHC. The technology is also studied for the upgrade of the ATLAS detector. The design of the (next) prototype for a silicon tungsten (SiW) electromagnetic calorimeter puts the emphasis on understanding and overcoming the engineering challenges imposed by the requirements on detector compactness.
The main units of the calorimeter prototype are: - The so-called alveolar structure, made of pre-impregnated carbon fibre and epoxy, which is also equipped with the tungsten absorber material. This alveolar structure was fabricated during winter 2011/12 and is now subject to external mechanical studies including thermal tests. - Layers with a length of up to 1.5 m that carry up to 8 Active Signal Units, or ASUs, each the unit of four silicon wafers, a PCB and the readout electronics. An ASU features a lateral dimension of $18 \times 18~{\rm cm^2}$ and, in the most aggressive design, a height of about 2 mm. It comprises 1024 cells read out by 16 ASICs. In an initial phase, ASUs carrying only one wafer and four ASICs were tested, which allowed, for example, the concept of embedded electronics to be validated. We have now turned to the production of several fully equipped ASUs that undergo validation in beam tests and on test benches. In both cases the flexible and in many parts scalable data acquisition system will be used. The presentation will report on first results of this validation but will also sketch the main steps of the involved production process, which is in part supported by the European AIDA-2020 programme and, more recently, by the French excellence programme P2IO. Both programmes seek explicitly to foster synergies between the necessary R&D programs for CALICE, ATLAS and CMS. An overview of synergies is thus part of the proposed contribution for the CALOR conference. The contribution will finally report on the R&D programme on silicon wafers and on irradiation tests carried out in 2015 at Japanese irradiation facilities. The CALICE collaboration is preparing large scale prototypes for highly granular calorimeters for detectors to be operated at a future lepton collider.
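For orientation, the ASU figures quoted above (a lateral size of $18 \times 18~{\rm cm^2}$, 1024 cells, 16 ASICs) imply the following channel count per ASIC and cell pitch, assuming square cells uniformly tiling the ASU:

```python
import math

# Back-of-the-envelope figures derived from the quoted ASU numbers:
# 18 x 18 cm^2 lateral size, 1024 cells, 16 readout ASICs.
asu_side_cm = 18.0
n_cells = 1024
n_asics = 16

cells_per_asic = n_cells // n_asics                       # 64 channels/ASIC
cell_pitch_mm = 10.0 * asu_side_cm / math.sqrt(n_cells)   # 5.625 mm pitch
```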
Currently a prototype of a silicon-tungsten electromagnetic calorimeter (SiW ECAL) is being assembled which, in terms of dimensions and layout, already meets most of the requirements given by the lepton collider physics programme and hence the detector design. In particular, the front end electronics will be embedded into the layer structure of the calorimeter and have to fit within alveolar layers of less than 1 cm in height. In this contribution the design of the prototype and the steps towards its realisation will be presented. Note finally that the presented technology also plays a key role in the upgrades of the LHC experiments CMS and ATLAS. Roman Poeschl (Laboratoire de l'Accelerateur Lineaire (FR))
David Austin

Many of us are surprised, at one time or another, to meet a person by chance and later discover that we have some kind of connection or a friend in common. We scratch our heads and say, "Gee, it's a small world." This experience was investigated in the 1960s by the social psychologist Stanley Milgram through a classic experiment. A group of participants living in Omaha, Nebraska was randomly chosen, each of whom was asked to send a folder by mail to a target participant in Sharon, Massachusetts, just outside Boston, subject to this rule: If you do not know the target person on a personal basis, do not try to contact him directly. Instead, mail this folder ... to a personal acquaintance who is more likely than you to know the target person ... it must be someone you know on a first-name basis. Beginning with 160 participants in Nebraska, 44 of the folders, or 27.5%, were successfully delivered to their target through a chain of intermediaries. Somewhat surprisingly, the completed chains required a relatively small number of intermediaries; the median number of intermediaries was five, with one chain being completed with only two intermediaries. This experiment has entered our cultural imagination through the phrase "six degrees of separation," as the completed chains required a median of six legs on their journey. This experiment has been repeated in other forms. Most recently, a study of the structure of friendships on Facebook, which included 721 million individuals and 69 billion friendship links, found that two Facebook users are separated by an average distance of 4.74 links or 3.74 intermediaries.
Of course, one could argue the degree to which Facebook friendship mirrors real-life acquaintanceship, but this is still a remarkable result considering Facebook's international audience. Once you start looking, you will see this phenomenon all around. For instance, the trivia game Six Degrees of Kevin Bacon links actors together when they have appeared together in a movie. As of last year, no actor was more than eight links away from Kevin Bacon. The Oracle of Baseball links baseball players if they have ever played on the same team, creating a chain between Babe Ruth and Justin Verlander with just five intermediaries. Mathematicians have their own version in which mathematicians are linked if they have published a mathematics paper together. Mathematicians sometimes cite their Erdős number, which is the length of the smallest chain connecting them to the prolific mathematician Paul Erdős. In Milgram's original experiment, individuals, aware of only their own acquaintances, lacked any deeper knowledge of the global structure of the network. So while it is surprising that individuals are connected by relatively short chains, it is even more surprising that the participants were able to find these chains. A network in which most pairs of individuals are linked by short chains, though they may be separated by large geographic distances, is known as a small-world network. In addition, when individuals may find these short chains using only local information, we say the network is searchable. In this article, we will look at this phenomenon from a mathematical perspective and learn how human networks are organized, perhaps unwittingly, to create this phenomenon.

The Watts-Strogatz model

Before beginning, let's think about what features we would like to build into our mathematical model. First, the majority of our acquaintances are with people who are geographically close to us through, say, family, co-workers, and friends at school.
Second, many of our acquaintances are also acquainted with each other. For instance, two of our co-workers are likely to be acquainted as well. Finally, most of us have a smaller number of acquaintances who are further removed geographically, such as friends or family members who have moved away. In spite of the fact that the Internet now facilitates more far-flung friendships, a considerable amount of current research supports these assumptions. The first model we'll consider was introduced by Watts and Strogatz in the late 1990s in an attempt to understand whether our three assumptions are enough to create small-world networks. The model begins with $n$ nodes arranged in a ring, with each node linked to its $k$ closest neighbors. Here is an example with $n=30$ nodes and $k=4$ links to neighbors. To include some long-range links in our model, we will consider each edge and, with probability $p$, change one of the endpoints to another randomly chosen node. In this way, we create a family of networks by varying the parameter $p$ in the range $0\leq p\leq 1$. (Figure: sample networks rewired with $p=0$, $p=0.01$, $p=0.1$, and $p=1$.) At one end of the spectrum, where $p=0$, none of the edges are rewired, and we have a completely regular network. On the other end, where $p=1$, every edge is rewired, and we have a completely random network. In this way, the parameter $p$ interpolates between completely regular and completely random behavior. Watts and Strogatz characterize these networks with two quantities. The first, denoted $L(p)$, measures the average distance between nodes in the network, while the second, denoted $C(p)$, measures the amount of clustering in the network. More specifically, $L(p)$ is the number of links in the shortest path between a pair of nodes, averaged over all pairs of nodes. In a small-world network, we would expect that most pairs are joined by relatively short chains of links and that $L(p)$ is therefore relatively small.
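The ring-and-rewire construction, together with the two statistics $L(p)$ and $C(p)$, is easy to simulate. Here is a short Python sketch of my own (the function names and data layout are my choices, not the original authors'):

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=None):
    """Ring of n nodes, each joined to its k nearest neighbors (k even);
    every edge is then rewired with probability p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    edges = []
    for v in range(n):
        for j in range(1, k // 2 + 1):
            w = (v + j) % n
            edges.append([v, w])
            adj[v].add(w)
            adj[w].add(v)
    for e in edges:
        if rng.random() < p:
            v, w = e
            # candidate endpoints, avoiding self-loops and duplicate edges
            choices = [x for x in range(n) if x != v and x not in adj[v]]
            if choices:
                x = rng.choice(choices)
                adj[v].discard(w)
                adj[w].discard(v)
                adj[v].add(x)
                adj[x].add(v)
                e[1] = x
    return adj

def avg_path_length(adj):
    """L: links in the shortest path, averaged over reachable pairs (BFS)."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """C: fraction of possible links among a node's neighbors that actually
    exist, averaged over all nodes."""
    fractions = []
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for i in range(d) for j in range(i + 1, d)
                    if nbrs[j] in adj[nbrs[i]])
        fractions.append(links / (d * (d - 1) / 2))
    return sum(fractions) / len(fractions)
```

For the unrewired ring itself ($p=0$, $k=4$) this gives $C=1/2$, in agreement with the known closed form $C(0)=3(k-2)/(4(k-1))$ for the ring lattice.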
Remember that each node is connected to its $k$ nearest neighbors when $p=0$. Among these $k$ neighbors, there are possibly $k(k-1)/2$ links. Given a node in our network, we may ask what fraction of these links actually exist. The clustering coefficient $C(p)$ is the average of this fraction over each of the nodes. In practical terms, this quantity measures the fraction of our acquaintances who are also acquainted with one another. We are therefore interested in networks with a high clustering coefficient. Following Watts and Strogatz, I performed an experiment in which I looked at networks with $n=1000$ nodes and $k=10$ nearest neighbors. For many values of $p$, I created twenty networks for which I computed $L(p)$ and $C(p)$. The plot below shows the averages, normalized by their values at $p=0$, on a logarithmic scale. The important point to notice is that there is a very large range of probabilities $p$, roughly $0.001< p < 0.1$, in which $L(p)$, the average distance between nodes, is relatively small, and the clustering coefficient $C(p)$ is relatively large. These are the conditions that we desire for a small-world network, so we conclude that it is relatively easy to create small-world networks. In other words, it doesn't take a lot of long-range links to create a small world. Speaking quantitatively, $L(0)$ is proportional to $n$. For values of $p$ that give a small world, however, $L(p)$ is bounded by a polynomial in $\log(n)$. Watts and Strogatz applied this kind of analysis to the network of film actors described above, the power grid in the western United States, and the neural network of the nematode worm C. elegans. All three examples demonstrate the properties of a small world. From this, they speculate that the small-world property is probably ubiquitous in the real world.

How can we find short chains of links?
While it is surprising that short chains of links existed in Milgram's experiment, it is perhaps even more surprising that they could be found by the participants, given the limited information they possess. For instance, if we knew the structure of the entire network---that is, if we knew everyone's acquaintances---it would be relatively straightforward to find the shortest chain. However, individual participants are aware of only their own acquaintances. With this information, how are short chains discovered? A hint is given by this figure of Milgram's, in which the geographic location of each of the intermediaries in one of the successful chains is shown. Notice how each intermediary sent the folder to a person who is geographically closer to the target; remarkably, the distance is roughly halved at each step. This led Kleinberg to study a new model for networks that is based on the Watts-Strogatz model and that incorporates geographic distance in the distribution of links. Kleinberg's model begins with nodes on a uniform two-dimensional $n\times n$ lattice (we may use a lattice in any dimension $k$). There is a natural notion of geographic distance between any two nodes; we define $d(u,v)$, the distance between nodes $u$ and $v$, to be the smallest number of steps needed to walk from one node to the other along the grid. Generalizing the network of Watts and Strogatz, Kleinberg assumed that any node had links to all nodes within a chosen distance $p$ and then added $q$ long-range links chosen at random. For instance, when $p=2$ and $q=3$, the red node in the center may be linked to the blue nodes. So far, this is nothing more than a two-dimensional version of the Watts-Strogatz model. The novel feature of Kleinberg's model is that the long-range links are chosen to favor shorter ones over longer ones.
For instance, I live in Michigan; in Kleinberg's model, it is more likely that I have a long-range connection to someone in Illinois than to someone in France. In particular, we will introduce a parameter $\alpha\geq0$ and choose a long-range connection linking $u$ to $v$ with probability proportional to $d(u,v)^{-\alpha}$: $$ {\rm Pr}[u\to v]\propto \frac{1}{d(u,v)^\alpha}. $$ When $\alpha=0$, we have the Watts-Strogatz model in which long-range links are chosen without reference to the geographic distance between nodes. However, when $\alpha=1$, the probability that a long-range link exists between $u$ and $v$ is inversely proportional to the distance between the nodes. Kleinberg then proposed an algorithm that models the dynamics of the Milgram experiment: at each step, the intermediary sends the folder to his acquaintance that is closest to the target. In this way, each intermediary uses only his knowledge of his own acquaintances and not the entire structure of links in the network. We imagine that the folder requires one unit of time to move from one acquaintance to another and determine the delivery time $T$ required for the folder to move from the source to the target. I repeated a simulation of Kleinberg's by studying a lattice of 20,000 by 20,000 nodes placed on a torus to minimize boundary effects. With fixed values of $\alpha$, $p$, and $q$, I constructed 1000 networks finding, for each, the delivery time given by the algorithm between two fixed nodes. To summarize the results, the logarithm of the delivery time $T$ is plotted below as a function of $\alpha$. This simulation clearly shows that there is an optimal value of $\alpha$; that is, in networks constructed with $\alpha\approx2$, the average delivery time is shortest. In fact, Kleinberg provides a theoretical understanding of this behavior through two results: When $\alpha=2$, the average delivery time is at most proportional to $(\log(n))^2$.
For other values of $\alpha$, the average length of chains produced by the algorithm is at least proportional to $n^\beta$, where $\beta$ is described in the graph below. Remember that a pair of nodes in a small-world network is, on average, joined by a chain whose length is bounded by a polynomial in $\log(n)$. The upshot of Kleinberg's study is that these short chains may be found by this algorithm--that is, the network is searchable--only when $\alpha=2$. Kleinberg's analysis is guided by the diagram in Milgram's paper, which shows that at each step, the distance to the target is roughly halved. In the same way, we will consider, at each step of the folder's journey, the distance to the target in our small-world network, and ask how long it takes to halve that distance. Easley and Kleinberg encourage us to think of this strategy through "scales of resolution." For instance, to deliver a letter to a distant person, we first find the country, then the state, the city, the street, and then the house. What we will see is that in the network constructed with $\alpha=2$, the folder spends roughly the same amount of time in each one of these scales. We will say that the folder is in phase $j$ if the distance from the folder's current holder $u$ to the target $t$ is greater than $2^j$ and at most $2^{j+1}$. We therefore ask how long the folder spends in phase $j$ before entering phase $j-1$. Since the geographic distance from the original source to the target is less than $2n$, the phase in which the folder starts out is no more than $\log(n)+1$. Consequently, there are at most $\log(n)+1$ phases that the folder passes through. When $\alpha=2$, we will see that the time spent in each phase is roughly constant and proportional to $\log(n)$. Let's think about this informally from the point of view of "scales of resolution" before digging into a more careful argument.
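The greedy forwarding rule that drives all of this can be sketched in Python. This is my own illustration, with one simplification: instead of fixing a random network in advance, each holder's $q$ long-range contacts are sampled on the fly (by rejection sampling), which keeps the code short while preserving the link distribution.

```python
import random

def torus_dist(u, v, n):
    """Lattice (L1) distance between nodes u=(x, y) and v on an n-by-n torus."""
    dx = abs(u[0] - v[0])
    dy = abs(u[1] - v[1])
    return min(dx, n - dx) + min(dy, n - dy)

def sample_long_link(u, n, alpha, rng):
    """One long-range contact of u, chosen with Pr[v] proportional to
    d(u, v)^(-alpha), via rejection sampling."""
    while True:
        v = (rng.randrange(n), rng.randrange(n))
        d = torus_dist(u, v, n)
        if d > 0 and rng.random() < d ** (-alpha):
            return v

def greedy_delivery_time(n, alpha, source, target, p=1, q=1, rng=None):
    """Forward the folder, at each step, to the current holder's contact
    that is geographically closest to the target; return the step count."""
    rng = rng or random.Random(0)
    u, steps = source, 0
    while u != target:
        # local contacts: every node within lattice distance p
        contacts = [((u[0] + dx) % n, (u[1] + dy) % n)
                    for dx in range(-p, p + 1) for dy in range(-p, p + 1)
                    if 0 < abs(dx) + abs(dy) <= p]
        # q freshly sampled long-range contacts with exponent alpha
        contacts += [sample_long_link(u, n, alpha, rng) for _ in range(q)]
        u = min(contacts, key=lambda v: torus_dist(v, target, n))
        steps += 1
    return steps
```

Since some local contact is always one step closer to the target, the distance decreases every step and delivery is guaranteed; the long-range links only determine how much faster than the worst case the folder arrives.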
Assuming the folder is at some node $u$, let's ask what the chances are that $u$ has a long-range link into the annular region shown below. How many points are in this annular region? Since the number of nodes in each disk is proportional to $d^2$, the number of nodes in the annular region is also proportional to $d^2$. The probability that there is a long range link to a node in that region is between $1/(2d)^2$ and $1/d^2$. Multiplying the number of nodes in the annular region by the probability of choosing one of the nodes in the annular region gives an approximate result that is independent of $d$. In this way, we see that the long-range links from $u$ are distributed uniformly over all scales when those links are randomly chosen with $\alpha=2$. We therefore expect that the time spent in each phase of the folder's journey to be independent of the phase. Notice that this value of $\alpha$ is adapted to the underlying two-dimensional geometric structure of the network. If we instead choose another network based on a $k$-dimensional grid, we should expect that $\alpha=k$ will be the exponent that enables our algorithm to find short paths. Let's make this precise. Suppose that we are currently in phase $j$ with the folder held by $u$. We would like to estimate how long it takes us to move into phase $j-1$ by passing the folder to a node $v$ whose distance to $t$ is no more than $2^j$. First, we have said that long-range links are chosen with probability proportional to $d^{-2}$. Let's estimate the constant of proportionality. If we are at node $u$, then the probability of choosing a long-range link to node $v$ is $${\rm Pr}[u\to v] = \frac{d(u,v)^{-2}}{\sum_{u\neq w}d(u,w)^{-2}}.$$ To estimate the denominator, we know that there are $4d$ nodes at distance $d$ from $u$ and that the distance between nodes is bounded by $2n-2$, a distance realized by nodes at opposite corners of the lattice. 
Therefore, $$ \begin{eqnarray*} \sum_{u\neq w} d(u,w)^{-2} & \leq & \sum_{d=1}^{2n-2} 4d\cdot d^{-2} \\ & = & \sum_{d=1}^{2n-2} 4/d \\ & \leq & 4+4\ln(2n-2) \\ & \leq & 4\ln(6n). \end{eqnarray*} $$ This means that the probability of a long-range link existing between $u$ and $v$ is greater than $${\rm Pr}[u\to v] \geq \frac{1}{4\ln(6n) d(u,v)^2}.$$ We will now estimate the probability that a long-range link reduces the folder's current phase. In this case, the distance from $u$ to the target $t$ is at most $2^{j+1}$. To reduce the phase, we need to find a long-range link to a node $v$ within a distance of $2^j$ of $t$. This means that $$d(u,v)\leq 2^{j+1}+2^j < 2^{j+2}.$$ For these nodes $v$, we then have $${\rm Pr}[u\to v] \geq \frac{1}{4\ln(6n) (2^{j+2})^2}.$$ The number of nodes within a distance $2^j$ of $t$ is $$1+4\sum_{d=1}^{2^j}d = 1 + 2\cdot 2^j\bigl(2^j+1\bigr) = 1 + 2^{2j+1} + 2^{j+1} > 2^{2j-1}.$$ Therefore, the probability that a long-range link exists that will reduce the phase is at least $$\frac{2^{2j-1}}{4\ln(6n) (2^{j+2})^2} = \frac{1}{128\ln(6n)}.$$ This statement is the crux of the argument. Notice that this estimate is independent of the current phase $j$: at each phase, we have a lower bound for the probability of leaving that phase that is independent of the phase. This justifies our earlier statement that the long-range links are distributed uniformly across all scales of resolution. We may now estimate $\overline{T}_j$, the expected amount of time we spend in phase $j$: $$\begin{eqnarray*} \overline{T}_j & = & \sum_{i=1}^\infty i~{\rm Pr}[T_j = i] \\ & = & \sum_{i=1}^\infty {\rm Pr}[T_j \geq i] \\ & \leq & \sum_{i=1}^\infty \left(1-\frac{1}{128\ln(6n)}\right)^{i-1} \\ & = & 128\ln(6n). \end{eqnarray*} $$ This shows that the average time spent in each phase is independent of the phase. Therefore, the average delivery time $\overline{T}$ satisfies: $$\overline{T} \leq (\log(n)+1)\cdot 128\ln(6n) \leq C(\log(n))^2 $$ for some constant $C$.
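The scale-uniformity at the heart of this argument is easy to check numerically. The short sketch below is my own illustration (the lattice size $n$ is arbitrary): it sums the unnormalized link weights $d^{-2}$, using the fact that $4d$ nodes sit at distance $d$, and groups them into dyadic blocks $2^j \leq d < 2^{j+1}$, which play the role of the phases above.

```python
import math

n = 512  # lattice side; an arbitrary illustrative size
max_d = 2 * n - 2  # largest possible distance on the n-by-n lattice

# Unnormalized link weight d^(-2) carried by the 4d nodes at distance d,
# accumulated into dyadic blocks 2^j <= d < 2^(j+1).
mass = {}
for d in range(1, max_d + 1):
    j = d.bit_length() - 1  # the j with 2^j <= d < 2^(j+1)
    mass[j] = mass.get(j, 0.0) + 4 * d * d ** (-2)

# Keep only blocks that fit entirely within the lattice.
complete = [m for j, m in mass.items() if 2 ** (j + 1) - 1 <= max_d]

# Every complete block carries nearly the same weight (it tends to
# 4 ln 2 ~ 2.77 from above): the "uniform across scales" claim.
assert all(2.7 < m <= 4.0 for m in complete)

# The normalizing denominator is bounded by 4 ln(6n), as derived above.
assert sum(4 / d for d in range(1, max_d + 1)) <= 4 * math.log(6 * n)
```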
Small-world networks are characterized by the fact that the distance between any two points is not more than a polynomial in $\log(n)$. Since our algorithm finds paths having this property, we are assured that our algorithm finds suitably short paths. In other words, our network is searchable. We have now shown that $\alpha=2$ leads to searchable small-world networks. A similar type of analysis shows that other values of $\alpha$ do not. Smaller values of $\alpha$ diminish the importance of geographic distance; the long-range links are too random to allow us to home in on the target. Larger values of $\alpha$ create too many shorter long-range links, so that following them saves relatively little time.

Putting our theory to the test

Kleinberg studies a family of small-world networks, parametrized by $\alpha$, and finds that only one value of $\alpha$ gives a searchable small-world network, suggesting that searchable small-world networks exist in some delicate balance. Let's see how this corresponds with real-world data. Liben-Nowell, Novak, Kumar, Raghavan, and Tomkins tested this prediction by considering LiveJournal, a network of over a million bloggers, each of whom publishes a profile containing his or her geographic location as well as a list of other LiveJournal users considered to be "friends." We may then construct a network consisting of the nearly half million users who give a location in the continental United States and use the published geographic information to determine the geographic distance $d(u,v)$ between two users. Let's first ask whether this network satisfies the properties of a searchable small-world network. Users have, on average, eight friends, a relatively small number compared to other social networks. However, this network demonstrates a high clustering coefficient---when $u$ and $v$ have a friend in common, they are themselves friends roughly one-fifth of the time. How does our searching algorithm perform?
In a simulation, a randomly chosen source user $s$ attempted to send a message to a randomly chosen target user $t$. At each step, the current holder of the message $u$ sends it to the friend geographically closest to $t$. If $u$ has no friends closer to $t$, then the chain unsuccessfully ends. In a half million trials, the chain successfully concluded 13% of the time with an average delivery time just a little over four. This study was then repeated after making a small modification; the current user $u$ was allowed to send the folder to a random friend if he or she had no friends closer to $t$. In this case, 80% of the chains completed with a median length of 12. These results are illustrated in the following figure, in which the inset shows the results given the modified algorithm. The vertical scale represents $f(k)$, the fraction of completed chains having delivery time $k$. Copyright 2005 National Academy of Sciences, U.S.A. From this, we may conclude that the LiveJournal network forms a searchable small-world network. Let's now look at the distribution of links and ask whether it follows Kleinberg's inverse-square relationship between probability and distance. One problem needs to be addressed first. In Kleinberg's model, nodes are uniformly distributed in the plane; however, users in the LiveJournal network are not uniformly distributed across the continental United States. In the figure below, successive circles, each centered at Ithaca, New York, represent an increase of 50,000 nodes in the network. As we might expect, the nodes are more highly concentrated on the east and west coasts. To this end, we will use rank to replace the role of distance in determining the probability that a link exists between two nodes. Given a node $u$, we define ${\rm rank}_u(v)$, the rank of $v$ with respect to $u$, to be the number of nodes that are closer to $u$ than $v$ is. 
For instance, in the figure below, ${\rm rank}_u(v)= 6$ since there are six nodes closer to $u$ than $v$ is (we include the node $u$ in this count). If we consider two nodes $u$ and $v$ in the network placed on a uniform lattice and separated by a distance $d$, there are roughly $d^2$ nodes closer to $u$ than $v$ is. This says that the rank of $v$ is proportional to $d^2$. Our earlier analysis showed that we obtain a searchable small-world network when a link between $u$ and $v$ exists with probability proportional to $d^{-2}$. Expressed in terms of the rank, we therefore expect a link from $u$ leading to $v$ with probability proportional to ${\rm rank}_u(v)^{-1}$; this holds regardless of the dimension $k$ in which we create our lattice. In fact, Liben-Nowell and his collaborators define a rank-based friendship network to be one in which the probability of a link between $u$ and $v$ is inversely proportional to ${\rm rank}_u(v)$: $$ {\rm Pr}[u\to v] \propto \frac{1}{{\rm rank}_u(v)}. $$ They then prove that the expected delivery time in a rank-based friendship network is at most proportional to $(\log(n))^3$. In other words, rank-based friendship networks are searchable. Liben-Nowell et al. then looked at the LiveJournal network to determine whether it is a rank-based friendship network. The results are summarized in the figure below. Here we see the probability that a link exists between two nodes plotted as a function of the rank between the nodes. The quantity $\epsilon$ is what the authors call the "background probability," a component of the probability that is independent of geography and due to factors such as friendships forming online through shared interests. As can be seen, the probability and rank are very nearly inversely proportional to one another, as required for a searchable small-world network. Below, we see the probability plotted as a function of the rank for the groups of LiveJournal users on the East and West Coasts of the United States.
The inverse proportion is even more striking here. As Liben-Nowell and his collaborators conclude: "In a lamentably imperfect world, it is remarkable that people form friendships so close to the perfect distribution for navigating their social structures." What we have seen is a rather remarkable story. Beginning with a mathematical model of Milgram's experiment, Kleinberg's analysis suggests that there is some underlying structure to the probabilistic distribution of the links in a searchable small-world network. After refining the characterization of this distribution, Liben-Nowell et al. then verified that one particular searchable real-world network closely matched this distribution. Indeed, a similar analysis of Facebook, performed by Backstrom et al., again found the relationship between the distribution of links and rank to be very close to an inverse proportionality. There appears to be a remarkable agreement between real-world social networks and the simple models we have created to study them. Finally, let us return to Milgram's original experiment. Our models have relied only on geographic distance as a means of determining the person to whom the folder is sent. Many participants indicated, however, that they sometimes chose their recipient based on occupation rather than geography. This suggests that it may be useful to consider models that include more dimensions so as to incorporate other factors such as occupation. Indeed, this approach has been taken by Watts, Dodds, and Newman, who created a different model of a small-world network and showed that searchable networks become more common as more dimensions are added. I would like to thank David Liben-Nowell, who graciously shared the figures from the LiveJournal study with me, and the Proceedings of the National Academy of Sciences for permission to reprint them.

Stanley Milgram. The small-world problem, Psychology Today, Vol. 1, 60–67, 1967.
Lars Backstrom, Paolo Boldi, Marco Rosa, Johan Ugander, Sebastiano Vigna. Four Degrees of Separation, 2012. Available at http://arxiv.org/abs/1111.4570.
Duncan Watts, Steven Strogatz. Collective dynamics of 'small world' networks, Nature, Vol. 393, 440-442, 1998.
Jon Kleinberg. Navigation in a small world, Nature, Vol. 406, 845, 2000. Available at http://www.cs.cornell.edu/home/kleinber/nat00.pdf.
Jon Kleinberg. The small-world phenomenon: an algorithmic perspective. In Proc. 32nd ACM Symposium on the Theory of Computing, 163-170, 2000. Available at http://www.cs.cornell.edu/home/kleinber/swn.ps.
Jon Kleinberg. The small-world phenomenon and the dynamics of information. In Proc. 14th Advances in Neural Information Processing Systems, 431-438, 2001. Available at http://www.cs.cornell.edu/home/kleinber/nips14.pdf.
David Easley, Jon Kleinberg. Networks, Crowds, and Markets: Reasoning about a Highly Connected World, Cambridge University Press, 2010. Available at http://www.cs.cornell.edu/home/kleinber/networks-book/.
David Liben-Nowell, Jasmine Novak, Ravi Kumar, Prabhakar Raghavan, Andrew Tomkins. Geographic routing in social networks, Proceedings of the National Academy of Sciences, Vol. 102, no. 33, 11623-11628, 2005.
Ravi Kumar, David Liben-Nowell, Jasmine Novak, Prabhakar Raghavan, Andrew Tomkins. Theoretical Routing in Social Networks, Technical Report MIT-LCS-TR-990, MIT Press, 2005.
Lars Backstrom, Eric Sun, Cameron Marlow. Find me if you can: Improving geographical prediction with social and spatial proximity. In Proc. 19th International World Wide Web Conference, 2010.
Duncan Watts, Peter Dodds, Mark Newman. Identity and search in social networks, Science, Vol. 296, no. 5571, 1302-1305, 2002.
(4)/(5)+(16)/(20) - adding of fractions: step by step solution for the given fractions, with full explanation.

Solution for the given fractions

$ \frac{4}{5}+\frac{16}{20}=? $

The common denominator of the two fractions is: 20

$ \frac{4}{5} = \frac{(4*4)}{(4*5)} = \frac{16}{20} $

$ \frac{16}{20} = \frac{(1*16)}{(1*20)} = \frac{16}{20} $

Fractions adjusted to a common denominator

$ \frac{4}{5}+\frac{16}{20} = \frac{16}{20}+\frac{16}{20} $

$ \frac{16}{20}+\frac{16}{20} = \frac{(16+16)}{20} $

$ \frac{(16+16)}{20} = \frac{32}{20} $

$ \frac{32}{20} = \frac{8}{5} $
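The same computation (find the common denominator, add the numerators, reduce) can be reproduced with Python's standard `fractions` module:

```python
from fractions import Fraction

a = Fraction(4, 5)
b = Fraction(16, 20)   # stored in lowest terms as 4/5
total = a + b          # common denominator 20, sum 32/20, reduced to 8/5

print(total)           # 8/5
assert total == Fraction(32, 20) == Fraction(8, 5)
```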
On the qualitative behavior of the solutions to second-order neutral delay differential equations

Shyam Sundar Santra, Hammad Alotaibi & Omar Bazighifan (ORCID: orcid.org/0000-0002-7251-9608)

Differential equations of second order appear in numerous applications such as fluid dynamics, electromagnetism, quantum mechanics, neural networks and the field of time symmetric electrodynamics. The aim of this work is to establish necessary and sufficient conditions for the oscillation of the solutions to a second-order neutral differential equation. First, we have taken a single delay and later the results are generalized for multiple delays. Some examples are given and open problems are presented. Consider the class of nonlinear neutral delay differential equations of the form

$$ \bigl(a\bigl(w'\bigr)^{\mu } \bigr)'(y)+c(y)g \bigl(u\bigl(\varsigma (y)\bigr) \bigr)=0, $$

where \(w(y)=u(y)+b(y)u(\vartheta (y))\) and μ is the ratio of two odd positive integers. We assume the following conditions hold.

(A1) \(a, c, \vartheta, \varsigma \in C (\mathbb{R_{+}},\mathbb{R_{+}})\) such that \(\vartheta (y)\leq y\), \(\varsigma (y)\leq y\) for \(y \geq y_{0}\), \(\vartheta (y) \to \infty \), \(\varsigma (y) \to \infty \) as \(y \to \infty \).

(A2) \(g \in C(\mathbb{R,\mathbb{R}})\) is non-decreasing and odd with \(ug(u)>0\) for \(u\neq 0\).

(A3) \(a(y)>0\) and \(\int _{0}^{\infty } (a(\eta ) )^{-1/\mu }\,d\eta =\infty \). By letting \(A(y)=\int _{0}^{y} (a(\eta ) )^{-1/\mu }\,d\eta \), we have \(\lim_{y \to \infty } A(y)=\infty \).

(A4) \(b \in C(\mathbb{R_{+}},\mathbb{R_{-}})\) with \(-1+(2/3)^{1/\mu } \leq -b_{0} \leq b(y) \leq 0 \) for \(y \in \mathbb{R_{+}}\).

(A5) \(b \in C(\mathbb{R_{+}},\mathbb{R_{-}})\) with \(-1 <-b_{0} \leq b(y) \leq 0 \) for \(y \in \mathbb{R_{+}}\).

In 1978, Brands [1] showed that the solutions to

$$ u''(y)+c(y)u\bigl(y-\varsigma (y)\bigr) =0 $$

are oscillatory if and only if the solutions to \(u''(y)+c(y)u(y) =0\) are oscillatory. Baculikova et al.
[2] considered (1) and studied the oscillatory behavior of (1) for \(g(u)=u\), \(0\leq {}b(y)\leq {}b_{0}<\infty \) and (A3). They obtained sufficient conditions for the oscillation of the solutions of the linear counterpart of (1), using comparison techniques. Chatzarakis et al. [3] considered the equation $$\begin{aligned} \bigl(a\bigl(u^{\prime }\bigr)^{\mu _{2}} \bigr)^{\prime }(y)+c(y)u^{{\mu _{2}}}\bigl( \varsigma (y)\bigr)=0. \end{aligned}$$ Also, Chatzarakis et al. [4] studied (2) to obtain new oscillation criteria. Džurina [5] studied the linear counterpart of (1) when \(0\leq b(y)\leq b_{0}<\infty \) and (A3) hold and established sufficient conditions for the oscillation of its solutions by comparison techniques. Karpuz et al. [6] studied (1) for various ranges of the neutral coefficient b. Pinelas and Santra [7] studied necessary and sufficient conditions for the solutions of $$ \bigl(u(y)+b(y)u(y-\vartheta ) \bigr)^{\prime }+\sum _{j=1}^{m}c_{j}(y)g \bigl(u(y-\varsigma _{j}) \bigr)=0. $$ Wong [8] obtained necessary and sufficient conditions for the oscillation of $$ \bigl(u(y)+bu(y-\vartheta ) \bigr)''+ c(y)g\bigl(u(y- \varsigma )\bigr)=0, $$ where the constant b satisfies \(-1< b<0\). Grace et al. [9] studied (1) and established sufficient conditions for \(0 \leq b(y) <1\). For further work on equations of this type, we refer the reader to [10–36] and the references cited therein. We may note that most of the authors considered only sufficient conditions, and only a few considered necessary and sufficient conditions. Hence, the objective of this work is to establish both necessary and sufficient conditions for the oscillation of (1) without using comparison techniques. In Sect. 2 some preliminary results are presented, Sect. 3 deals with the main results, Sect. 4 presents the conclusion and the final section includes open problems. In this section, two lemmas are presented which we need for our work in the sequel.
Lemma 2.1 Assume that (A1)–(A3) and either (A4) or (A5) hold, and that the solution u of (1) is an eventually positive solution. Then, for sufficiently large y, w satisfies one of the following two cases: (i) \(w(y)<0\), \(w^{\prime }(y)>0\) and \((a(w^{\prime })^{\mu })^{\prime }(y)<0\); (ii) \(w(y)>0\), \(w^{\prime }(y)>0\) and \((a(w^{\prime })^{\mu })^{\prime }(y)<0\). Assume there exists a \(y_{1} \geq {}y_{0}\) such that \(u(y)>0\), \(u(\vartheta (y))>0\), and \(u(\varsigma (y))>0\) for \(y\geq {}y_{1}\). From (1) and (A2), we have $$ \bigl(a\bigl(w^{\prime }\bigr)^{\mu } \bigr)^{\prime }(y)=-c(y)g \bigl(u\bigl(\varsigma (y)\bigr) \bigr)< 0 \quad\text{for } y\geq {}y_{1}, $$ which implies that \((a(w^{\prime })^{\mu } )(y)\) is non-increasing on \([y_{1},\infty )\). We have \(a(y)>0\), and thus either \(w^{\prime }(y)<0\) or \(w^{\prime }(y)>0\) for \(y\geq {}y_{2}\), where \(y_{2}\geq {}y_{1}\). If \(w^{\prime }(y)>0\) for \(y\geq {}y_{2}\), then w satisfies either (i) or (ii). We prove now that \(w^{\prime }(y)<0\) cannot occur. If \(w^{\prime }(y)<0\) for \(y\geq {}y_{2}\), then there exists \(\kappa _{1}>0\) such that \((a(w^{\prime })^{\mu } )(y)\leq -\kappa _{1}\) for \(y\geq {}y_{2}\), which yields upon integration over \([y_{2},y)\subset [y_{2},\infty )\) after dividing through by a $$ w(y)\leq {}w(y_{2})-\kappa _{1}^{1/\mu } \int _{y_{2}}^{y} \bigl(a(\eta ) \bigr)^{-1/\mu }\,d \eta \quad\text{for } y\geq {}y_{2}. $$ By virtue of condition (A3), \(\lim_{y\to \infty }w(y) =-\infty \). We consider the following possibilities: Let the solution u be unbounded. There exists a sequence \(\{y_{k}\}\) such that \(\lim_{k \to \infty } y_{k} = \infty \) and \(\lim_{k\to \infty } u(y_{k}) =\infty \), where \(u(y_{k}) = \max \{u(\eta ): y_{0} \leq \eta \leq y_{k}\}\). Since \(\lim_{y \to \infty } \vartheta (y) = \infty \), \(\vartheta (y_{k}) > y_{0}\) for all sufficiently large k.
By \(\vartheta (y) \leq y\), $$ u \bigl(\vartheta (y_{k}) \bigr) \leq \max \bigl\{ u(\eta ): y_{0} \leq \eta \leq \vartheta (y_{k})\bigr\} \leq \max \bigl\{ u(\eta ): y_{0} \leq \eta \leq y_{k} \bigr\} = u(y_{k}). $$ Therefore, for all large k, $$ w(y_{k}) = u(y_{k}) + b(y_{k})u \bigl(\vartheta (y_{k}) \bigr) \geq \bigl(1+ b(y_{k})\bigr)u(y_{k}) > 0, $$ which contradicts \(\lim_{y \to \infty } w(y) = -\infty \). Let the solution u be bounded. Then w is bounded as well, which again contradicts \(\lim_{y \to \infty } w(y) = -\infty \). Hence, w satisfies one of the cases (i) or (ii). This completes the proof. □ Lemma 2.2 Assume that (A1)–(A3) and either (A4) or (A5) hold, that u is an eventually positive solution of (1), and that w satisfies case (i). Then \(\lim_{ y \to \infty }u(y)=0\). Assume that there exists a \(y_{1} \geq {}y_{0}\) such that \(u(y)>0\), \(u(\vartheta (y))>0\), and \(u(\varsigma (y))>0\) for \(y\geq {}y_{1}\). Then Lemma 2.1 holds and w satisfies one of the cases (i) or (ii) on \([y_{2},\infty )\) for some \(y_{2} \geq y_{1}\). Let w satisfy (i) for \(y \geq y_{2}\). Therefore, $$\begin{aligned} 0&\geq \lim_{y \to \infty }w(y)= \limsup_{y\to \infty } w(y) \geq \limsup_{y \to \infty } \bigl(u(y)-b_{0} u\bigl(\vartheta (y)\bigr) \bigr) \\ &\geq \limsup_{y \to \infty } u(y)+\liminf_{y\to \infty } \bigl(-b_{0} u\bigl(\vartheta (y)\bigr) \bigr) = (1-b_{0}) \limsup_{y \to \infty } u(y), \end{aligned}$$ which implies that \(\limsup_{y \to \infty } u(y)=0\) and hence \(\lim_{y \to \infty }u(y)=0\). □ Remark 1 In view of (ii) of Lemma 2.1, it is obvious that \(\lim_{y\to \infty }w(y)>0\), i.e., there exists \(\kappa _{1}>0\) such that \(w(y)\geq \kappa _{1}\) for all large y.
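As a concrete illustration of the function A defined in (A3), consider the data used later in Example 3.2: \(a(y)=e^{-y}\) and \(\mu =3/5\), so that \((a(\eta ))^{-1/\mu }=e^{5\eta /3}\) and \(A(y)=\frac{3}{5}(e^{5y/3}-1)\to \infty \) as required. The sketch below (Python; the midpoint-rule quadrature and the grid size are illustrative choices, not part of the paper) cross-checks the closed form numerically:

```python
import math

MU = 3 / 5  # μ from Example 3.2

def A_numeric(y, n=100000):
    # Midpoint-rule approximation of A(y) = ∫_0^y a(η)^(-1/μ) dη with a(η) = e^(-η)
    h = y / n
    return h * sum(math.exp((1 / MU) * (i + 0.5) * h) for i in range(n))

def A_closed(y):
    # Closed form from Example 3.2: A(y) = (3/5)(e^(5y/3) - 1)
    return (3 / 5) * (math.exp(5 * y / 3) - 1)

for y in (0.5, 1.0, 2.0):
    # The quadrature should reproduce the closed form to high accuracy
    assert abs(A_numeric(y) - A_closed(y)) / A_closed(y) < 1e-4
```

Since the integrand grows exponentially, \(A(y)\) indeed diverges, which is what (A3) demands of the examples in Sects. 3 and 4.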
The case when \(g(v)/v^{\mu _{1}}\) is non-increasing Suppose that there exists \({\mu _{1}}\) such that \(0<{\mu _{1}}<\mu \) and $$ \frac{g(v)}{v^{\mu _{1}}} \geq \frac{g(u)}{u^{\mu _{1}}} \quad\text{for } 0< v \leq u. $$ For example the function \(g(u)=|u|^{\mu _{2}} \operatorname{sgn}(u)\) with \(0<{\mu _{2}}<{\mu _{1}}<\mu \) satisfies (5). Theorem 3.1 Assume that (A1)–(A4) and (5) hold. Then each unbounded solution of (1) is oscillatory if and only if $$\begin{aligned} \int _{Y}^{\infty }c(\eta )g \bigl(\kappa ^{1/ \mu } A\bigl(\varsigma (\eta )\bigr) \bigr)\,d\eta =+\infty \quad\forall Y>0\textit{ and }\kappa >0. \end{aligned}$$ On the contrary, we assume that there exists a nonoscillatory unbounded solution \(u(y)\) of (1). Suppose that the solution \(u(y)\) is eventually positive. Then there exists \(y_{1} \geq y_{0}\) such that \(u(y)>0\), \(u(\vartheta (y))>0\) and \(u(\varsigma (y))>0\) for \(y\geq {}y_{1}\). Proceeding as in the proof of Lemma 2.1, we see that \((a(w')^{\mu } )(y)\) is non-increasing, and w satisfies one of the cases (i) or (ii) on \([y_{2},\infty )\), where \(y_{2}\geq {}y_{1}\). Then we have the following two possible cases. Case 1. Let w satisfy (i) for \(y\geq {}y_{2}\). As u is an unbounded solution, there exists \(T\geq {}y_{2}\) such that \(u(T)=\max \{u(s): y_{2}\leq s\leq {}T\}\). Since \(w(T)<0\) and \(u(\vartheta (T))\leq u(T)\), we have \(u(T)\leq {}w(T)+\{1-(2/3)^{1/\mu }\}u(\vartheta (T))< u(T)\), which leads to a contradiction. Case 2. Let w satisfy (ii) for \(y\geq y_{2}\). Note that \(\lim_{y \to \infty } (a(w')^{\mu } )(y)\) exists. Using \(w(y) \leq u(y)\) (which holds because \(b(y)\leq 0\)) in (1) and integrating the resulting inequality from y to +∞, we obtain $$ \int _{y}^{\infty }c(\eta )g \bigl(w\bigl(\varsigma (\eta ) \bigr) \bigr)\,d\eta \leq \bigl(a\bigl(w'\bigr)^{\mu } \bigr) (y). $$ Hence $$ w'(y)\geq \biggl[\frac{1}{a(y)} \int _{y}^{\infty }c(\eta )g \bigl(w\bigl( \varsigma (\eta )\bigr) \bigr)\,d\eta \biggr]^{1/\mu } $$ for \(y\geq y_{3}\), where \(y_{3}\geq y_{2}\).
Let \(y_{4}> y_{3}\) be a point such that $$ A(y)-A(y_{3})\geq \frac{1}{2}A(y),\quad y \geq y_{4}. $$ Then integrating (7) from \(y_{3}\) to y, we get $$\begin{aligned} w(y)-w(y_{3})& \geq \int _{y_{3}}^{y} \biggl[\frac{1}{a(\eta )} \int _{ \eta }^{\infty }c(\zeta )g \bigl(w\bigl(\varsigma ( \zeta )\bigr) \bigr)\,d\zeta \biggr]^{1/ \mu } \,d\eta \\ & \geq \int _{y_{3}}^{y} \biggl[\frac{1}{a(\eta )} \int _{y}^{\infty }c( \zeta )g \bigl(w\bigl(\varsigma ( \zeta )\bigr) \bigr)\,d\zeta \biggr]^{1/\mu }\,d\eta, \end{aligned}$$ i.e., $$\begin{aligned} w(y) & \geq \bigl(A(y)-A(y_{3}) \bigr) \biggl[ \int _{y}^{\infty } c(\zeta )g \bigl(w\bigl(\varsigma ( \zeta )\bigr) \bigr)\,d\zeta \biggr]^{1/\mu } \\ &\geq \frac{1}{2}A(y) \biggl[ \int _{y}^{\infty }c(\zeta )g \bigl(w\bigl( \varsigma ( \zeta )\bigr) \bigr)\,d\zeta \biggr]^{1/\mu }. \end{aligned}$$ Since \((a(w')^{\mu } )(y)\) is non-increasing on \([y_{4},\infty )\), there exist \(\kappa >0\) and \(y_{5}> y_{4}\) such that \((a(w')^{\mu } )(y) \leq \kappa \) for \(y\geq y_{5}\). Integrating the inequality \(w'(y) \leq (\kappa / a(y))^{1/\mu }\), we have $$ w(y)\leq w(y_{5})+\kappa ^{1/\mu } \bigl(A(y)-A(y_{5}) \bigr). $$ Since \(\lim_{y\to \infty }A(y)=\infty \), the last inequality becomes $$\begin{aligned} w(y) \leq \kappa ^{1/\mu } A(y) \quad\text{for } y\geq y_{5}. \end{aligned}$$ On the other hand, (5) implies that $$\begin{aligned} g \bigl(w\bigl(\varsigma (\zeta )\bigr) \bigr)= \frac{g (w(\varsigma (\zeta )) )}{w^{{\mu _{1}}} (\varsigma (\zeta ) )} w^{{\mu _{1}}} \bigl(\varsigma (\zeta ) \bigr)\geq \frac{ g (\kappa ^{1/\mu } A(\varsigma (\zeta )) )}{ (\kappa ^{1/\mu }A(\varsigma (\zeta )) )^{{\mu _{1}}}} {w^{{\mu _{1}}} \bigl(\varsigma (\zeta ) \bigr)}.
\end{aligned}$$ Consequently, (8) becomes $$ w(y)\geq \frac{A(y)}{2} \biggl[ \int _{y}^{\infty } \frac{c(\zeta )g (\kappa ^{1/\mu } A(\varsigma (\zeta )) )w^{\mu _{1}}(\varsigma (\zeta ))}{ (\kappa ^{1/\mu }A(\varsigma (\zeta )) )^{{\mu _{1}}}}\,d \zeta \biggr]^{1/\mu }. $$ If we define $$ \Upsilon (y)= \int _{y}^{\infty } \frac{c(\zeta )g (\kappa ^{1/\mu } A(\varsigma (\zeta )) )w^{\mu _{1}}(\varsigma (\zeta ))}{ (\kappa ^{1/\mu }A(\varsigma (\zeta )) )^{{\mu _{1}}}}\,d \zeta, $$ then \(w^{\mu _{1}} / (\kappa ^{1/\mu }A )^{\mu _{1}} \geq \Upsilon ^{{ \mu _{1}}/\mu }/ (2\kappa ^{1/\mu } )^{\mu _{1}}\). Taking the derivative of ϒ we get $$\begin{aligned} \Upsilon '(y) \leq - \frac{g (\kappa ^{1/\mu } A(\varsigma (y)) )c(y)w^{{\mu _{1}}}(\varsigma (y))}{ (\kappa ^{1/\mu }A(\varsigma (y)) )^{{\mu _{1}}}} \leq - \frac{c(y)g (\kappa ^{1/\mu } A(\varsigma (y)) )}{ (2\kappa ^{1/\mu } )^{\mu _{1}}} \Upsilon ^{{\mu _{1}}/\mu }\bigl(\varsigma (y)\bigr) \leq 0. \end{aligned}$$ Therefore, \(\Upsilon (y)\) is non-increasing on \([y_{5}, \infty )\) so \(\Upsilon ^{{\mu _{1}}/\mu }(\varsigma (y))/\Upsilon ^{{\mu _{1}}/\mu }(y) \geq 1\), and $$\begin{aligned} \bigl(\Upsilon ^{1-{\mu _{1}}/\mu }(y) \bigr)' & \leq - (1-{\mu _{1}}/ \mu ) \Upsilon ^{-{\mu _{1}}/\mu }(y) \frac{c(y)g (\kappa ^{1/\mu } A(\varsigma (y)) )}{ (2\kappa ^{1/\mu } )^{\mu _{1}}} \Upsilon ^{{\mu _{1}}/\mu } \bigl(\varsigma (y) \bigr) \\ & \leq -(1-{\mu _{1}}/\mu ) \frac{c(y)g (\kappa ^{1/\mu } A(\varsigma (y)) )}{ (2\kappa ^{1/\mu } )^{\mu _{1}}}. \end{aligned}$$ We have \({\mu _{1}}/\mu <1\) and \(\Upsilon (y)\) is positive and non-increasing. 
Integrating the last inequality from \(y_{5}\) to y, we have $$\begin{aligned} \frac{(1-{\mu _{1}}/\mu )}{(2 \kappa ^{1/\mu })^{{\mu _{1}}}} \int _{y_{5}}^{y}c( \eta )g \bigl(\kappa ^{1/\mu } A\bigl(\varsigma (\eta )\bigr) \bigr)\,d\eta \leq - \bigl[\Upsilon ^{1-{\mu _{1}}/\mu }(\eta ) \bigr]_{y_{5}}^{y} < \Upsilon ^{1-{ \mu _{1}}/\mu }(y_{5})< \infty, \end{aligned}$$ which contradicts (6). If \(u(y)<0\) for \(y\geq {}y_{1}\), then we set \(\overline{u}(y):=-u(y)\) for \(y\geq {}y_{1}\) in (1). Using (A2), we find $$ \bigl(a(y) \bigl(\overline{w}'(y)\bigr)^{\mu } \bigr)'+c(y) \overline{g} \bigl(\overline{u}\bigl( \varsigma (y)\bigr) \bigr)=0 \quad\text{for } y\geq {}y_{1}, $$ where \(\overline{w}(y)=\overline{u}(y)+b(y)\overline{u}(\vartheta (y))\) and \(\overline{g}(u):=-g(-u)\) for \(u\in \mathbb{R}\). Clearly, g̅ satisfies (A2). Then, proceeding as above, we can find the same contradiction. To prove that condition (6) is necessary, assume that (6) does not hold; so for some \(\kappa > 0\) and \(Y \geq y_{0}\) we have $$ \int _{Y}^{\infty }c(\eta )g \bigl(\kappa ^{1/\mu } A \bigl(\varsigma (\eta )\bigr) \bigr)\,d\eta \leq \frac{\kappa }{3}. $$ We set $$\begin{aligned} S={} &\biggl\{ u: u \in C\bigl([y_{0},\infty),\mathbb{R}\bigr), u(y)=0 \text{ for } y \in [y_{0}, Y] \text{ and} \\ & \biggl(\frac{\kappa }{3} \biggr)^{1/\mu }\bigl[A(y)-A(Y)\bigr]\leq u(y) \leq \kappa ^{1/ \mu } \bigl[A(y)-A(Y)\bigr] \text{ for } y \geq Y\biggr\} . \end{aligned}$$ We define the operator \(\Omega: S \to C([y_{0},+\infty ),\mathbb{R})\) by $$ (\Omega u) (y)= \textstyle\begin{cases} 0,& y \in [y_{0}, Y], \\ -b(y)u (\vartheta (y) )+\int _{Y}^{y} [\frac{1}{a(\eta )} [\frac{\kappa }{3}+\int _{\eta }^{\infty }c(\zeta )g (u( \varsigma (\zeta )) )\,d\zeta ] ]^{1/\mu } \,d\eta, & y\geq Y.
\end{cases} $$ For every \(u \in S\) and \(y \geq Y\), we have $$\begin{aligned} (\Omega u) (y) &\geq \int _{Y}^{y} \biggl[\frac{1}{a(\eta )} \biggl[ \frac{\kappa }{3}+ \int _{\eta }^{\infty }c(\zeta )g \bigl(u\bigl(\varsigma ( \zeta )\bigr) \bigr)\,d\zeta \biggr] \biggr]^{1/\mu }\,d\eta \\ &\geq \int _{Y}^{y} \biggl[\frac{1}{a(\eta )} \frac{\kappa }{3} \biggr]^{1/ \mu }\,d\eta = \biggl(\frac{\kappa }{3} \biggr)^{1/\mu }\bigl[A(y)-A(Y)\bigr]. \end{aligned}$$ For every \(u \in S\) and \(y \geq Y\), we have \(u(y)\leq \kappa ^{1/\mu } A(y)\) and \(g(u(y))\leq g(\kappa ^{1/\mu } A(y))\). Then $$\begin{aligned} (\Omega u) (y)&\leq - b(y)u \bigl(\vartheta (y) \bigr)+ \int _{Y}^{y} \biggl[\frac{1}{a(\eta )} \biggl( \frac{\kappa }{3}+\frac{\kappa }{3} \biggr) \biggr]^{1/\mu }\,d\eta \\ &\leq b_{0} \kappa ^{1/\mu } \bigl[A\bigl(\vartheta (y) \bigr)-A(Y) \bigr]+(2 \kappa /3)^{1/\mu } \bigl[A(y)-A(Y) \bigr] \\ &\leq b_{0} \kappa ^{1/\mu } \bigl[A(y)-A(Y) \bigr]+ (2\kappa /3)^{1/ \mu } \bigl[A(y)-A(Y) \bigr] \\ &= \bigl(b_{0}+(2/3)^{1/\mu } \bigr)\kappa ^{1/\mu } \bigl[A(y)-A(Y) \bigr] \leq \kappa ^{1/\mu } \bigl[A(y)-A(Y) \bigr], \end{aligned}$$ which implies that \(\Omega u \in S\). Let us define now a sequence of continuous functions \(u_{n}: [y_{0}, +\infty )\to \mathbb{R}\) by the recursive formula $$\begin{aligned} &u_{0}(y)= \textstyle\begin{cases} 0,& y \in [y_{0}, Y], \\ (\frac{\kappa }{3} )^{1/\mu }[A(y)-A(Y)],& y\geq Y, \end{cases}\displaystyle \\ &u_{n}(y)= (\Omega u_{n-1} ) (y),\quad n\geq 1. \end{aligned}$$ Inductively, it is easy to verify that, for \(n\geq 1\), $$ \biggl(\frac{\kappa }{3} \biggr)^{1/\mu } \bigl[A(y)-A(Y) \bigr]\leq u_{n-1}(y) \leq u_{n}(y) \leq \kappa ^{1/\mu } \bigl[A(y)-A(Y) \bigr]. $$ Therefore the point-wise limit of the sequence exists. Let \(\lim_{n \to \infty }u_{n}(y)=u(y)\) for \(y \geq y_{0}\).
By Lebesgue's dominated convergence theorem, \(u \in S\) and \((\Omega u)(y) =u(y)\), so \(u(y)\) is a positive solution of (1) on \([Y,\infty )\). Hence, (6) is necessary. This completes the proof. □ Example 3.2 Consider the delay differential equation $$ \bigl(e^{-y} \bigl(\bigl(u(y)-e^{-y}u(y-1) \bigr)' \bigr)^{3/5}\bigr)'+y\bigl(u(y-2) \bigr)^{1/3}=0,\quad y\geq 0. $$ Here \(\mu = 3/5\), \(a(y)=e^{-y}\), \(-1 < b(y)=-e^{-y} \leq 0\), \(c(y)=y\), \(\vartheta (y)=y-1\), \(\varsigma (y)=y-2\), \(A(y)=\int _{0}^{y} e^{5s/3} \,ds= \frac{3}{5} (e^{5y/3}-1 )\), \(g(v)=v^{1/3}\). For \({\mu _{1}}=1/2\), we have a decreasing function \(g(v)/v^{\mu _{1}}=v^{-1/6}\). Now $$\begin{aligned} \int _{0}^{\infty } c(\eta )g \bigl(\kappa ^{1/\mu } A\bigl(\varsigma (\eta )\bigr) \bigr)\,d\eta = \int _{0}^{\infty }\eta \biggl(\kappa ^{5/3} \frac{3}{5} \bigl(e^{5(\eta -2)/3}-1 \bigr) \biggr)^{1/3} \,d\eta =\infty \quad\forall \kappa >0. \end{aligned}$$ So, all the conditions of Theorem 3.1 hold, and therefore every unbounded solution of (9) is oscillatory. Theorem 3.3 Let assumptions (A1)–(A4) hold. Then each unbounded solution of (1) oscillates if and only if (6) holds for every \(\kappa >0\). To prove sufficiency by contradiction, assume that the solution u of (1) is eventually positive and unbounded. So, there exists \(y_{1}\geq {}y_{0}\) such that \(u(y)>0\), \(u (\vartheta (y) )>0\) and \(u (\varsigma (y) )>0\) for \(y\geq {}y_{1}\). Proceeding as in the proof of Lemma 2.1, \((a(w')^{\mu } )(y)\) is non-increasing, and w satisfies one of the cases (i) or (ii) on \([y_{2},\infty )\), where \(y_{2}\geq {}y_{1}\). We have the following two possible cases. Case 1. Let w satisfy (i) for \(y \geq y_{2}\). This case is similar to the proof of Theorem 3.1. Case 2. Let w satisfy (ii) for \(y \geq y_{2}\).
Since \(w(y)\) is unbounded and monotonically increasing, it follows that $$ \lim_{y \to \infty }\frac{w^{\mu }(y)}{A^{\mu }(y)}=\lim_{y \to \infty } \frac{(w'(y))^{\mu }}{(A'(y))^{\mu }}=\lim_{y\to \infty } \bigl(a\bigl(w' \bigr)^{\mu } \bigr) (y)=\ell < \infty. $$ If \(\ell =0\), then \(\lim_{y\to \infty }A(y)=+\infty \) implies that \(\lim_{y\to \infty }w(y)< +\infty \), which is invalid (since \(w(y)\) is unbounded). Hence \(\ell \neq 0\). Therefore, there exist a constant \(\kappa > 0\) and a \(y_{2} > y_{1}\) such that \(w(y)\geq \kappa ^{1/\mu } A(y)\) for \(y\geq y_{2}\). Consequently, \(u(y) \geq w(y) \geq \kappa ^{1/\mu } A(y)\) for \(y \geq y_{2}\). Using \(u(y)\geq \kappa ^{1/\mu } A(y)\) in (1) and then integrating the final inequality from \(y_{2}\) to +∞, we obtain a contradiction to (6) for every \(\kappa >0\). By using the same transformation as in the proof of Theorem 3.1 we can get a contradiction for an eventually negative unbounded solution, so we omit it here. One can prove the necessary part by following the proof of Theorem 3.1. So we omit it here. The proof of the theorem is complete. □ Theorem 3.4 Assume that (A1)–(A4) and (5) hold. Then each solution of (1) is oscillatory or \(\lim_{y \to \infty }u(y)=0\) if and only if (6) holds for every \(\kappa >0\). On the contrary, we assume that the solution u of (1) is eventually positive. Then there exists \(y_{1}\geq {}y_{0}\) such that \(u(y)>0\), \(u(\vartheta (y))>0\) and \(u(\varsigma (y))>0\) for \(y\geq {}y_{1}\). Proceeding as in the proof of Lemma 2.1, we see that \((a(w')^{\mu } )(y)\) is non-increasing, and w satisfies one of the cases (i) or (ii) on \([y_{2},\infty )\), where \(y_{2}\geq {}y_{1}\). Thus, we have the following two possible cases. Case 1. Let w satisfy (i) for \(y\geq y_{2}\). Then, by Lemma 2.2, we have \(\lim_{y \to \infty }u(y)=0\). Case 2. Let w satisfy (ii) for \(y\geq y_{2}\). The case follows from the proof of Theorem 3.1. The necessary part is similar to Theorem 3.1.
The proof of the theorem is complete. □ The case when \(g(u)/u^{\mu _{1}}\) is non-decreasing Suppose that there exists \({\mu _{1}}>\mu \) such that $$ \frac{g(v)}{v^{\mu _{1}}} \leq \frac{g(u)}{u^{\mu _{1}}} \quad\text{for }0< v \leq u. $$ For example we might consider the function \(g(u)=|u|^{\mu _{2}} \operatorname{sgn}(u)\) with \(\mu <{\mu _{1}}<{\mu _{2}}\), which satisfies (10). Theorem 3.5 Assume that (A1)–(A3), (A5), (10) and \(\varsigma ^{\prime }(y) \geq 1\) hold. Then each solution of (1) oscillates or \(\lim_{y \to \infty }u(y)=0\) if and only if $$\begin{aligned} \int _{Y}^{\infty } \biggl[\frac{1}{a(\zeta )} \biggl[ \int _{\zeta }^{\infty }c( \eta )\,d\eta \biggr] \biggr]^{1/\mu }\,d\zeta =+\infty \quad\forall Y>0. \end{aligned}$$ Proceeding as in the proof of Theorem 3.4, we can conclude that \(\lim_{y \to \infty }u(y)=0\) when w satisfies (i). Let us now consider case (ii), for \(y\geq y_{2}\). By Remark 1, there exist a constant \(\kappa > 0\) and \(y_{2} >y_{1}\) such that \(w (\varsigma (y) )\geq \kappa \) for \(y\geq y_{2}\). Consequently, $$\begin{aligned} g \bigl(w\bigl(\varsigma (y)\bigr) \bigr)= \frac{g (w(\varsigma (y)) )}{w^{{\mu _{1}}} (\varsigma (y) )}w^{{ \mu _{1}}} \bigl(\varsigma (y) \bigr) \geq \frac{g(\kappa )}{\kappa ^{{\mu _{1}}}}w^{{\mu _{1}}} \bigl( \varsigma (y) \bigr) \end{aligned}$$ for \(y\geq y_{2}\). Using \(w(y) \leq u(y)\) and (12) in (1), and then integrating the final inequality we have $$ \lim_{Z \to \infty } \bigl[ \bigl(a\bigl(w' \bigr)^{\mu } \bigr) (\eta ) \bigr]_{y}^{Z}+ \frac{g(\kappa )}{\kappa ^{{\mu _{1}}}} \int _{y}^{\infty }c(\zeta )w^{{ \mu _{1}}} \bigl( \varsigma (\zeta ) \bigr)\,d\zeta \leq 0.
$$ Since \((a(w')^{\mu } )(y)\) is non-increasing and positive, we have $$\begin{aligned} \frac{g(\kappa )}{\kappa ^{{\mu _{1}}}} \int _{y}^{\infty }c(\eta )w^{{ \mu _{1}}} \bigl(\varsigma (\eta ) \bigr)\,d\eta \leq \bigl(a\bigl(w'\bigr)^{\mu } \bigr) (y) \leq \bigl(a\bigl(w'\bigr)^{\mu } \bigr) \bigl( \varsigma (y)\bigr) \leq a(y) \bigl(\bigl(w'\bigr)^{\mu } \bigr) \bigl(\varsigma (y)\bigr) \end{aligned}$$ for all \(y \geq y_{2}\). Therefore, $$\begin{aligned} \biggl(\frac{g(\kappa )}{\kappa ^{{\mu _{1}}}} \biggr)^{1/\mu } \biggl[ \frac{1}{a(y)} \biggl[ \int _{y}^{\infty }c(\zeta )w^{\mu _{1}} \bigl( \varsigma (\zeta ) \bigr)\,d\zeta \biggr] \biggr]^{1/\mu } \leq w' \bigl( \varsigma (y) \bigr) \end{aligned}$$ implies that $$\begin{aligned} \biggl(\frac{g(\kappa )}{\kappa ^{{\mu _{1}}}} \biggr)^{1/\mu } \biggl[ \frac{1}{a(y)} \biggl[ \int _{y}^{\infty }c(\zeta )\,d\zeta \biggr] \biggr]^{1/ \mu } \leq \frac{w' (\varsigma (y) )}{w^{{\mu _{1}}/\mu } (\varsigma (y) )} \leq \frac{w' (\varsigma (y) )\varsigma '(y)}{w^{{\mu _{1}}/\mu } (\varsigma (y) )}. \end{aligned}$$ Integrating the final inequality from \(y_{2}\) to +∞, we have $$\begin{aligned} \biggl(\frac{g(\kappa )}{\kappa ^{{\mu _{1}}}} \biggr)^{1/\mu } \int _{y_{2}}^{ \infty } \biggl[\frac{1}{a(\zeta )} \biggl[ \int _{\zeta }^{\infty }c(\eta )\,d \eta \biggr] \biggr]^{1/\mu }\,d\zeta &< \int _{y_{2}}^{\infty } \frac{w' (\varsigma (\eta ) )\varsigma '(\eta )}{w^{{\mu _{1}}/\mu } (\varsigma (\eta ) )}\,d \eta \\ &\leq \frac{w^{1-{\mu _{1}}/\mu }(\varsigma (y_{2}))}{{\mu _{1}}/\mu -1}< \infty, \end{aligned}$$ which contradicts (11). Next, we show that (11) is necessary. Assume that (11) does not hold; then there exists \(Y \geq y_{0}\) such that $$ \int _{Y}^{\infty } \biggl[\frac{1}{a(\zeta )} \biggl[ \int _{\zeta }^{\infty }c( \eta )\,d\eta \biggr] \biggr]^{1/\mu }\,d\zeta \leq \frac{(1-b_{0}) (g(1) )^{-1/\mu }}{5}. $$
We set $$ S= \biggl\{ u \in C\bigl([y_{0},\infty),\mathbb{R}\bigr): u(y)= \frac{1-b_{0}}{5} \text{ for } y\in [y_{0},Y], \text{ and } \frac{1-b_{0}}{5}\leq u(y)\leq 1 \text{ for } y \geq Y \biggr\} . $$ We define the operator \(\Omega: S \to C([y_{0},\infty ),\mathbb{R})\) by $$ (\Omega u) (y)= \textstyle\begin{cases} \frac{1-b_{0}}{5}, & y \in [y_{0}, Y], \\ -b(y)u (\vartheta (y) )+\frac{1-b_{0}}{5}+\int _{Y}^{y} [ \frac{1}{a(\eta )} [\int _{\eta }^{\infty }c(\zeta )g (u( \varsigma (\zeta )) )\,d\zeta ] ]^{1/\mu } \,d\eta, & y \geq Y. \end{cases} $$ For every \(u \in S\) and \(y \geq Y\), \((\Omega u)(y)\geq \frac{1-b_{0}}{5}\) and $$\begin{aligned} (\Omega u) (y)&\leq b_{0}+\frac{1-b_{0}}{5}+ \bigl(g(1) \bigr)^{1/\mu } \int _{Y}^{y} \biggl[\frac{1}{a(\eta )} \biggl[ \int _{\eta }^{\infty }c( \zeta )\,d\zeta \biggr] \biggr]^{1/\mu } \,d\eta \\ &\leq b_{0}+\frac{1-b_{0}}{5}+\frac{1-b_{0}}{5}= \frac{3b_{0}+2}{5} < 1, \end{aligned}$$ which implies that \(\Omega u \in S\). The remaining proof follows from Theorem 3.1. This completes the proof. □ Consider the differential equation $$ \bigl( \bigl( \bigl(u(y)-e^{-y}u\bigl(\vartheta (y) \bigr) \bigr)' \bigr)^{1/5} \bigr)'+(y+1) \bigl(u(y-2)\bigr)^{\frac{7}{3}}=0,\quad y\geq 0. $$ Here \(\mu = 1/5\), \(a(y)=1\), \(b(y)=-e^{-y}\), \(c(y)=y+1\), \(\varsigma (y)=y-2\), \(g(v)=v^{\frac{7}{3}}\). For \({\mu _{1}}=4/3\), we have \(g(v)/v^{\mu _{1}}=v\), which is an increasing function. To check (11) we have $$ \int _{2}^{\infty } \biggl[ \int _{\zeta }^{\infty }(\eta +1)\,d\eta \biggr]^{5} \,d\zeta =\infty. $$ So, all conditions of Theorem 3.5 hold, and therefore each solution of (13) oscillates or converges to zero. It is worth noting that we have established the necessary and sufficient conditions when \(-1 < b(y) \leq 0\). These conditions do not hold in all ranges of \(b(y)\). Remark 2 Theorems 3.1–3.5 also hold for the equation $$ \bigl(a(y) \bigl(w'(y)\bigr)^{\mu } \bigr)'+\sum _{j=1}^{m}c_{j}(y)g_{j} \bigl(u\bigl(\varsigma _{j}(y)\bigr) \bigr)=0, $$ where \(w(y)=u(y)+b(y)u(\vartheta (y))\) and \(b, a, c_{j}, g_{j}, \varsigma _{j}\) \((j =1,2,\dots,m)\) satisfy assumptions (A1)–(A5).
In order to extend Theorems 3.1–3.5, we need to find an index j such that \(c_{j}, g_{j}, \varsigma _{j}\) satisfy (6) or (11). Example 4.1 Consider the neutral differential equation $$ \bigl(e^{-y} \bigl( \bigl(u(y)-e^{-y}u\bigl( \vartheta (y)\bigr) \bigr)' \bigr)^{3/5} \bigr)'+\frac{1}{y+1}\bigl(u(y-2)\bigr)^{1/3} + \frac{1}{y+2}\bigl(u(y-1)\bigr)^{1/5}=0, \quad y\geq 0. $$ Here \(\mu = 3/5\), \(a(y)=e^{-y}\), \(b(y)=-e^{-y}\), \(\varsigma _{1}(y)=y-2\), \(\varsigma _{2}(y)=y-1\), \(A(y)=\int _{0}^{y} e^{5s/3} \,ds= \frac{3}{5} (e^{5y/3}-1 )\), \(g_{1}(v)=v^{1/3}\) and \(g_{2}(v)=v^{1/5}\). For \({\mu _{1}}=1/2\), we have decreasing functions \(g_{1}(v)/v^{\mu _{1}}=v^{-1/6}\) and \(g_{2}(v)/v^{\mu _{1}}=v^{-3/10}\). Now, $$\begin{aligned} & \int _{0}^{\infty } \sum_{j=1}^{m}c_{j}( \eta )g_{j} \bigl(\kappa ^{1/ \mu } A\bigl(\varsigma _{j}(\eta )\bigr) \bigr)\,d\eta \\ &\quad \geq \int _{0}^{\infty } c_{1}( \eta )g_{1} \bigl(\kappa ^{1/\mu } A\bigl(\varsigma _{1}( \eta )\bigr) \bigr)\,d\eta \\ &\quad = \int _{0}^{\infty }\frac{1}{\eta +1} \biggl(\kappa ^{5/3}\frac{3}{5} \bigl(e^{5(\eta -2)/3}-1 \bigr) \biggr)^{1/3} \,d\eta =\infty \quad\forall \kappa >0. \end{aligned}$$ So, all the conditions of Theorem 3.1 hold, and therefore every unbounded solution of (14) is oscillatory. Example 4.2 Consider the neutral differential equation $$ \bigl( \bigl( \bigl(u(y)-e^{-y}u\bigl(\vartheta (y) \bigr) \bigr)' \bigr)^{5/7} \bigr)'+y \bigl(u(y-2)\bigr)^{5/3} + (y+1) \bigl(u(y-1)\bigr)^{3}=0,\quad y \geq 0. $$ Here \(\mu = 5/7\), \(a(y)=1\), \(b(y)=-e^{-y}\), \(c_{1}(y)=y\), \(c_{2}(y)=y+1\), \(\varsigma _{1}(y)=y-2\), \(\varsigma _{2}(y)=y-1\), \(g_{1}(v)=v^{5/3}\) and \(g_{2}(v)=v^{3}\). For \({\mu _{1}}=4/3\), we have increasing functions \(g_{1}(v)/v^{\mu _{1}}=v^{1/3}\) and \(g_{2}(v)/v^{\mu _{1}}=v^{5/3}\). Clearly, all the conditions of Theorem 3.5 hold. Thus, each solution of (15) oscillates or \(\lim_{y \to \infty }u(y)=0\). Examples 4.1 and 4.2 prove the feasibility and effectiveness of Remark 2.
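The divergence required by condition (6) in Example 4.1 can also be observed numerically. The sketch below (Python, taking \(\kappa =1\) and a crude midpoint rule on \([3,Y]\); both choices are illustrative and not part of the paper) shows the partial integrals of \(c_{1}(\eta )g_{1}(A(\varsigma _{1}(\eta )))\) growing as Y increases, consistent with the integral being \(+\infty \):

```python
import math

def A(y):
    # A(y) = (3/5)(e^(5y/3) - 1) for a(y) = e^(-y) and μ = 3/5 (Example 4.1)
    return (3 / 5) * (math.exp(5 * y / 3) - 1)

def integrand(eta):
    # c1(η) g1(κ^(1/μ) A(ς1(η))) with κ = 1, c1(η) = 1/(η+1), g1(v) = v^(1/3), ς1(η) = η - 2
    return (1 / (eta + 1)) * A(eta - 2) ** (1 / 3)

def partial_integral(Y, lo=3.0, n=20000):
    # Midpoint-rule approximation of ∫_lo^Y integrand(η) dη
    h = (Y - lo) / n
    return h * sum(integrand(lo + (i + 0.5) * h) for i in range(n))

vals = [partial_integral(Y) for Y in (6.0, 12.0, 18.0)]
assert vals[0] < vals[1] < vals[2]  # positive integrand: partial integrals keep growing
```

Since the integrand behaves like \(e^{5\eta /9}/\eta \) for large η, no finite truncation point can capture the whole mass, which is the numerical face of condition (6).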
Open problem This work leads to some open problems: Can we find necessary and sufficient conditions for the oscillation of solutions to the second-order differential equation (1) for the other ranges of the neutral coefficient b? Is it possible to generalize this work to fractional order? Brands, J.J.M.S.: Oscillation theorems for second-order functional-differential equations. J. Math. Anal. Appl. 63(1), 54–64 (1978) Baculikova, B., Dzurina, J.: Oscillation theorems for second order neutral differential equations. Comput. Math. Appl. 61, 94–99 (2011) Chatzarakis, G.E., Dzurina, J., Jadlovska, I.: New oscillation criteria for second-order half-linear advanced differential equations. Appl. Math. Comput. 347, 404–416 (2019) Chatzarakis, G.E., Jadlovska, I.: Improved oscillation results for second-order half-linear delay differential equations. Hacet. J. Math. Stat. 48(1), 170–179 (2019) Džurina, J.: Oscillation theorems for second order advanced neutral differential equations. Tatra Mt. Math. Publ. 48, 61–71 (2011) Karpuz, B., Santra, S.S.: Oscillation theorems for second-order nonlinear delay differential equations of neutral type. Hacet. J. Math. Stat. 48(3), 633–643 (2019) Pinelas, S., Santra, S.S.: Necessary and sufficient condition for oscillation of nonlinear neutral first-order differential equations with several delays. J. Fixed Point Theory Appl. 20(1), 27 (2018) Wong, J.S.W.: Necessary and sufficient conditions for oscillation of second order neutral differential equations. J. Math. Anal. Appl. 252(1), 342–352 (2000) Grace, S.R., Džurina, J., Jadlovska, I., Li, T.: An improved approach for studying oscillation of second-order neutral delay differential equations. J. Inequal. Appl. 2018, 193 (2018) Agarwal, R.P., Bohner, M., Li, T., Zhang, C.: Oscillation of second order differential equations with a sublinear neutral term. Carpath. J. Math.
30, 1–6 (2014) Abdalla, B., Abdeljawad, T.: On the oscillation of Caputo fractional differential equations with Mittag-Leffler nonsingular kernel. Chaos Solitons Fractals 127, 173–177 (2019) Abdalla, B., Abodayeh, K., Abdeljawad, T., Alzabut, J.: New oscillation criteria for forced nonlinear fractional difference equations. Vietnam J. Math. 45, 609–618 (2017) Abdalla, B., Abdeljawad, T.: On the oscillation of Hadamard fractional differential equations. Adv. Differ. Equ. 409, 1–12 (2018) Abdalla, B., Alzabut, J., Abdeljawad, T.: On the oscillation of higher order fractional difference equations with mixed nonlinearities. Hacet. J. Math. Stat. 47(2), 207–217 (2018) Baculikova, B., Dzurina, J.: Oscillation theorems for second order nonlinear neutral differential equations. Comput. Math. Appl. 62, 4472–4478 (2011) Baculikova, B., Li, T., Dzurina, J.: Oscillation theorems for second order neutral differential equations. Electron. J. Qual. Theory Differ. Equ. 74, 1 (2011) Bazighifan, O., Elabbasy, E.M.: Oscillation of higher-order differential equations with distributed delay. J. Inequal. Appl. 2019, 55 (2019) Bazighifan, O., Dassios, I.: Riccati technique and asymptotic behavior of fourth-order advanced differential equations. Mathematics 8, 1–11 (2020) Bazighifan, O., Ruggieri, M., Santra, S.S., Scapellato, A.: Qualitative properties of solutions of second-order neutral differential equations. Symmetry 12(9), 1–10 (2020) Santra, S.S., Bazighifan, O., Ahmad, H., Chu, Y.-M.: Second-order differential equation: oscillation theorems and applications. Math. Probl. Eng. 2020, Article ID 8820066 (2020). https://doi.org/10.1155/2020/8820066 Santra, S.S., Dassios, I., Ghosh, T.: On the asymptotic behavior of a class of second-order non-linear neutral differential equations with multiple delays. Axioms 9, 134 (2020). 
https://doi.org/10.3390/axioms9040134 Karpuz, B., Santra, S.: New criteria for the oscillation and asymptotic behavior of second-order neutral differential equations with several delays. Turk. J. Math. 44, 1990–2003 (2020). https://doi.org/10.3906/mat-2006-103 Santra, S.S., Bazighifan, O., Ahmad, H., Yao, S.-W.: Second-order differential equation with multiple delays: oscillation theorems and applications. Complexity 2020, Article ID 8853745 (2020). https://doi.org/10.1155/2020/8853745 Santra, S.S., Ghosh, T., Bazighifan, O.: Explicit criteria for the oscillation of second-order differential equations with several sub-linear neutral coefficients. Adv. Differ. Equ. 2020, 643 (2020). https://doi.org/10.1186/s13662-020-03101-1 Li, T., Rogovchenko, Y.V.: Oscillation theorems for second order nonlinear neutral delay differential equations. Abstr. Appl. Anal. 2014, Article ID 594190 (2014) Qian, Y., Xu, R.: Some new oscillation criteria for higher order quasi-linear neutral delay differential equations. Differ. Equ. Appl. 3, 323–335 (2011) Pinelas, S., Santra, S.S.: Necessary and sufficient conditions for oscillation of nonlinear first order forced differential equations with several delays of neutral type. Analysis 39(3), 97–105 (2019) Ragusa, M.A.: Elliptic boundary value problem in vanishing mean oscillation hypothesis. Comment. Math. Univ. Carol. 40(4), 651–663 (1999) Ragusa, M.A., Tachikawa, A.: Regularity for minimizers for functionals of double phase with variable exponents. Adv. Nonlinear Anal. 9, 710–728 (2020) Santra, S.S.: Existence of positive solution and new oscillation criteria for nonlinear first order neutral delay differential equations. Differ. Equ. Appl. 8(1), 33–51 (2016) Santra, S.S.: Oscillation analysis for nonlinear neutral differential equations of second order with several delays.
Mathematica 59(82), 111–123 (2017) Santra, S.S.: Oscillation analysis for nonlinear neutral differential equations of second order with several delays and forcing term. Mathematica 61(84), 63–78 (2019) Santra, S.S.: Necessary and sufficient condition for oscillatory and asymptotic behavior of second-order functional differential equations. Kragujev. J. Math. 44(3), 459–473 (2020) Santra, S.S., Dix, J.G.: Necessary and sufficient conditions for the oscillation of solutions to a second-order neutral differential equation with impulses. Nonlinear Stud. 27(2), 375–387 (2020) Yang, Q., Xu, Z.: Oscillation criteria for second order quasi-linear neutral delay differential equations on time scales. Comput. Math. Appl. 62, 3682–3691 (2011) Ye, L., Xu, Z.: Oscillation criteria for second order quasilinear neutral delay differential equations. Appl. Math. Comput. 207, 388–396 (2009) The authors are thankful to the editors and the referees for their valuable suggestions and comments, which improved the content of this paper. The authors received no direct funding for this work. Department of Mathematics, JIS College of Engineering, Kalyani, 741235, India Shyam Sundar Santra Department of Mathematics, Faculty of Science, Taif University, Taif, 21944, Saudi Arabia Hammad Alotaibi Department of Mathematics, Faculty of Science, Hadhramout University, Hadhramout, 50512, Yemen Omar Bazighifan Department of Mathematics, Faculty of Education, Seiyun University, Hadhramout, 50512, Yemen The authors declare that they read and approved the final manuscript. Correspondence to Omar Bazighifan. Santra, S.S., Alotaibi, H. & Bazighifan, O. On the qualitative behavior of the solutions to second-order neutral delay differential equations. J Inequal Appl 2020, 256 (2020). https://doi.org/10.1186/s13660-020-02523-5 Keywords: Non-oscillation; Lebesgue's dominated convergence theorem; Necessary and sufficient conditions
CommonCrawl
Government assistance and total factor productivity: firm-level evidence from China

Richard Harris ORCID: orcid.org/0000-0001-8066-3629 & Shengyu Li

Journal of Productivity Analysis volume 52, pages 1–27 (2019)

Industrial policy, particularly through the provision of large-scale assistance to industry in the form of 'tax holidays' and subsidies to firms, is very important in China. A major contribution of this paper is to introduce firm-level measures of assistance directly into industry-level production functions determining firm output, using Chinese firm-level panel data for 1998–2007, and to analyse the impact of government assistance on TFP at the firm level. Our results indicate inverted U-shaped gains from assistance: across the 26 industries considered, firms receiving assistance rates of 1–10, 10–19, 20–49 and 50+% experienced on average 4.5, 9.4, 9.2 and −3% gains in TFP level, respectively. We then decompose the growth of TFP and relate it to assistance and formal political connections between firms and the government. We find that in general firms receiving assistance contributed relatively more to TFP growth than non-assisted firms. However, this was largely through new firms being 'encouraged' to start up rather than through firms open throughout 1998 to 2007 improving. There is also evidence that closure rates were truncated as a result of assistance. Moreover, the better results for assisted firms were very much 'driven' by a sub-group that received assistance but had no formal political connections and were not State-owned.

Providing assistance to industry as part of an industrial strategy has a long history, in both developing and developed economies (Schwartz and Clements 1999). Until more recently, such approaches were presumed to have been largely a failure, summed up by Cohen (2006, p.
88) as follows: "The standard criticism levelled against sectoral industrial policies is that the state has neither the necessary information nor adequate incentives to make better choices than the market… it tends to misestimate … the negative long-term effects of the protection granted to certain firms and the negative impacts of the benefits granted to promoted sectors on other sectors." However, industrial policy is generally now regarded more favourably, as shown by various contributions to recent books on the topic (e.g., Felipe 2015; Stiglitz and Lin 2013). Rather than just 'believe' in the market and allow economic success to be generated by globalisation allied to government intervention in support of liberalisation, privatisation and deregulation, "… it has become obvious that all governments are engaged in various forms of industrial policies… (therefore) the question is not whether any government should use industrial policy but rather how to use industrial policy in the best way" (Stiglitz et al. 2013, pp. 5–6).

China is perceived as a country that provides large-scale assistance to industry (Haley and Haley 2013). But was government assistance targeted at the right firms and sectors and at an appropriate level? A recent paper by Aghion et al. (2015) investigated whether the distribution of government assistance to firms in China enhanced productivity, finding that assistance was allocated to competitive sectors and/or fostered competition in a sector,Footnote 1 so enhancing productivity growth over the 1998–2007 period. Their approach was essentially to test if subsidies were correlated with initial competition levels, where the latter was measured using a Lerner index. They also measured the concentration of assistance across firms within each sector (using a Herfindahl index).
Both the correlations obtained at the sector-city level (the Lerner indices) and the Herfindahl indices were regressed on firm-level total factor productivity (TFP) estimates obtained using an Olley–Pakes approach. Both measures were found to have positive and significant impacts on TFP, and this is taken as evidence that government assistance was targeted at the right firms and sectors. However, Aghion et al. (op. cit.) did not test directly whether receiving assistance had a direct impact on each firm's TFP, nor the extent of such assistance; if receiving assistance is found to lower firm-level TFP (at least for some categories of firms or at, say, high levels of assistance) then it may well be that overall industrial policy in China introduces distortions that increase misallocation and work against the productivity-enhancing effects associated with the (more macro-level) distribution of assistance. Whether assistance acts as a boost to investment and production, and the extent to which this improves/reduces efficiency and thus productivity growth, is largely an empirical issue. Thus a major contribution of this paper is to fill this gap in the literature, by introducing variables that measure the assistance (including tax 'holidays' and subsidies) received by each firm directly into production functions determining firm output and analysing the impact of the assistance on TFP at the firm-level. A system-GMM econometric approach is used to measure firm-level TFP (with the variables representing assistance instrumented by their lagged values). To check the robustness of our results, the impact of assistance is also tested using a production function approach based on 'matching' firms receiving assistance with those not receiving 'treatment' who nonetheless had very similar characteristics to the assisted sub-group (Imbens and Rubin 2015).
Both sets of results indicate that across the 26 industries considered Chinese firms that received assistance had higher TFP during 1998–2007, although there is some evidence that too high a level of assistance has negative consequences for TFP, suggesting that 'rent-seeking' and/or the pursuit of profit is blunted when firms become too dependent on government help, especially when such help is tied to 'political control' by the state (which is the case in China as explained below). To justify such results, we provide a simple model in the appendix that sets out how this is consistent with economic theory. Apart from the Aghion et al. (2015) study, we are only aware of a study by Girma et al. (2009) who used the same database as we use (but only for 1999 to 2005) to consider whether subsidies boosted export sales for domestic firms in manufacturing (finding subsidies stimulated exporting intensities of existing exporters but had little impact on encouraging firms to enter exporting). The major differences with the current study are: we include all (and not just domestic) firms in manufacturing and utilities covering 1998–2007; the more important form of assistance provided through 'tax holidays' (as well as subsidies) is included; and our dependent variable is TFP. Other studies, mostly covering developed economies, that consider the impact of assistance on productivity are relatively scarce, usually relate only to labour productivity (not TFP) and have produced mixed results. For example, Irwin and Klenow (1996) found no impact on labour productivity of R&D subsidies for U.S. 
high-tech companies; for Japanese forestry, Managi (2010) found a negative relationship between subsidies and TFP; Einio (2014) reports no instantaneous impacts of R&D support programmes in Finland on productivity (although there is evidence of long-term gains); Huang (2015) shows that tax credit use among Taiwanese firms enhanced their productivity; while Koski and Pajarinen (2015) report that R&D subsidies had no statistically significant impact on labour productivity in Finnish firms during 2003–2010, although employment subsidies and other subsidies (the latter covering similar State aid instruments as included in the present study) were negatively related to output-per-worker.Footnote 2

The paper is set out as follows. In the next section we discuss the rationale for the (Chinese) government providing assistance to firms, where government aid can be central, state or local (or some combination of all three levels). In Section 3 we discuss briefly the form that assistance takes and present some background information on its importance to firms. Following this, we estimate industry-level production functions using system-GMM and a 'matching' approach, to test whether assistance impacted on the level of TFP across firms. In Section 5 we decompose the growth of TFP and relate it to assistance and the extent of formal political connections between firms and the government. The paper concludes with a summary and some ideas for further research that would extend the approach taken in this paper.

The rationale for government assistance to firms

The starting (traditional neoclassical) position is usually that markets are efficient such that they are the best mechanism by which to allocate resources (cf. the model of general equilibrium associated with Arrow and Debreu 1954); the exception is when there are market failures (European Commission 2002).
Traditionally such failures have been associated with imperfect and asymmetric information being available to (especially smaller) firms, and/or imperfect (risk) markets leading to higher (financial) costs for such firms and, more generally, a problem of incomplete markets (Greenwald and Stiglitz 1986). Failures are also associated with not being able to capture positive externalities in other firms—such as R&D spillovers—or the wider benefits gained from geographic agglomeration (e.g., intra-industry specialization through Marshall-Arrow-Romer economies and/or inter-industry Jacobian urbanization economies—see Marshall 1890, Arrow 1962, and Romer 1986 and Jacobs 1970, 1986). Such justification for government intervention on the grounds of market failure has been criticized by those who do not adhere to the neoclassical tradition; for example, evolutionary economists (e.g., Metcalfe and Georghiou 1998) have argued that information costs, leading to asymmetric outcomes, are one of the features of the market, and they are in part necessary as a selection device (for promoting the fittest firms) and in providing incentives for learning and discovery, which is crucial to the process of variety creation upon which the evolutionary view of markets is based (as Metcalfe and Georghiou, op. cit., point out "a profit opportunity known to everybody is a profit opportunity for nobody"). This does not mean that there is no rationale for government intervention, assuming that it sees a direct increase in economic benefits from more firms gaining information and thus acting on that information (e.g., by adopting certain technologies, increasing their overall capabilities, etc.). For example, Casson (1999) argues that in this situation the government has a comparative advantage in information, and it is on this basis (not market failure) that it can justify intervention. See also Cohen (2006, section 3.1).
More recently, there has been an emphasis on dynamic factors that lead to a comparative advantage (Rodrik 2006), such as the importance of knowledge and firm capabilities as a source of firm performance and thus productivity growth.Footnote 3 Thus government intervention to enhance both learning and learning spillovers is especially warranted to coordinate structural transformations that will close the "knowledge" gap that exists with firms at the (international) frontier, so moving resources from low- to high-productivity sectors (it is argued—see Felipe 2015—that such sectors do not develop naturally in developing economies without government help). Thus, for example, Khan (2015) sets out a model of the 'competitiveness curve' that justifies assistance to industry (particularly in developing economies) based on providing 'rents for learning' to cover knowledge and capability gaps and encourage learning-by-doing. In developing economies like China, firms initially lack the sophisticated organisations and technical capabilities to produce goods and services at global quality standards (and costs), and assistance buys time to engage in the learning that is needed, as well as encouraging inward foreign direct investment from firms that have the required competencies (which should also lead to additional spillover effects).

In China, there is an additional rationale for government providing (large-scale) assistance to firms; in principle all firms in China can be subject to political control—i.e., there is a lishu relationship, which means firms are "subordinate to" political influence (the Chinese name for this relationship, as represented in the National Bureau of Statistics database we use below, is 隶属关系). In practice the lishu relationship includes "… approvals for licences, domain, major projects, major operations decisions (such as profit distribution and investment) and firm structures" (Tan et al. 2007, p.
788), all of which are set to meet political objectives. As well as controls, the lishu relationship also involves government support and subsidies (e.g., access to finance, more favourable tax treatment, granting of contracts, access to raw materials and other 'scarce resources'Footnote 4, etc.). The relationship is much stronger for publicly owned firms (e.g., state-owned enterprises, or SOEs, and collectively owned enterprises), who are also expected to meet certain 'social' goals set by politicians, such as employment targets, but it is still relevant to privately-owned and foreign-owned firms (either because of the strength of political connections and/or because of intervention by government). An essential difference in the lishu relationship between publicly-controlled and privately-owned firms tends to be that the former are more beset with meeting policy goals (e.g., employment) rather than receiving favourable treatment such as subsidies and/or access to finance (Wu et al. 2012). However, Xia et al. (2009) state that over time the importance of lishu has diminished, especially following reforms introduced in 1997, and the vast majority of newly established privately owned firms that have set up in China since the late 1990s have opted not to have any (formal) lishu relationship with the government (central, regional or local). Evidence for this is provided by Ding et al. (2015, Table 1), who show that the proportion of medium- to large-sized Chinese firms in manufacturing and utilities with no political connections increased from 15.7% in 1998 to 76% by 2007. And yet the same data (which is also used in this study—see Table 1 above) shows that on average between 1998 and 2007 nearly 57% of firms receiving assistance had no formal political connections (nearly 52% of all firms, which includes those with no assistance, had no political connections).
This provides strong support for the claim made by Haley and Haley (2013, Chapter 1) that under the operation of Chinese State Capitalism, the government is able to meet its industrial strategies not so much directly through traditional lishu relationships but rather through ensuring firms are dependent on government for financial assistance that creates mutual dependence. That is, Haley and Haley (op. cit., pp. 21–22) note that "in China political factors matter at least as much as, and often more than, economic factors for firms' and markets' performance and therefore for the dispensation of subsidies". They also argue—based on case studies—that there is substantial evidence that Chinese production subsidies have encouraged many overseas (and especially U.S.) firms to move manufacturing to China, after developing their technological competencies in their home countries.

Table 1 Percentage of firmsa receiving tax holidaysb, subsidies or both, China 1998–2007

Because of the decentralisation of power in China to the provinces, and the further layer of often strong local government with its own agenda, firms can have different (even several) links with central, provincial and local governments, each with hidden and often conflicting budgetary processes. Li and Zhou (2005) point out that local government officials have a major incentive to develop the economies in their jurisdictions because their political careers depend on the economic performance of their regions. Thus Walder (1992, pp. 528–29) comments that "China's national budget is a nested hierarchy of independent budgets—each government unit exercises property rights over firms under their financial jurisdiction… each of which seeks to expand its revenues by capturing investment, subsidies, and grants". Haley and Haley (op. cit. p.
21) review the case study evidence that shows "provincial governments deploy massive subsidies to support favoured business groups and further provincial rather than central objectives or efficiencies". Thus while Chinese policymakers in the period after the 'open door' reforms starting in 1992 sought to learn from how Korea and Japan achieved large-scale development, which included lessons in subsidising strategic industries, there is evidence (Heilmann and Shih 2013) that full-scale assistance to firms (and industrial policy more generally) only really got going in the 1990s once Chinese policymakers had concluded that by supporting targeted firms they could advance the state's interests in the new economic order (Thun 2004). Historically, such help had been limited to State-owned Enterprises (SOEs), but since the 1990s this has been extended to privately-owned firms as well. In terms of the type of assistance usually given to firms, this tends to be based on 'horizontal' (covering activities that take place in a broad range of sectors and typically affecting the 'infrastructure' surrounding firms) and 'vertical' (more targeted on specific firms and sectors) policies. The former has in more recent times received greater support as it is seen to have a smaller impact on competition (since it is not about 'picking winners' as all firms should face a 'level playing field'), whereas vertical policies can favour one sub-group of firms to the detriment of others. That said, even horizontal policies impact more on certain firms (e.g., those more engaged in R&D, or located in sectors with attributes that are being encouraged by policy, such as higher value-added). In the Chinese context, Lin et al. (2015) argue there has been a continuous upgrading towards more capital-intensive sectors with (latent) dynamic comparative advantage (rather than the static advantage of having a substantial, relatively cheap abundance of sufficiently skilled labour).
In broad terms, industrial policy pursues the growth of 'pillars' (key industriesFootnote 5) where technology acquisition and improving competitive advantage feature strongly. Firms receive various financial incentives (including 'tax holidays', grants, and access to cheap loans) that are consistent with providing additional liquidity and sharing risk, and thus overall subsidizing production and investment; however, as Haley and Haley (2013, pp. 31–32) point out, official information is very limited on how much assistance is provided, to whom and for what. Thus they conclude that "generally, despite stated policies, outsiders cannot ascertain the true policies that underlie subsidies".

Thus whether assistance acts as a boost to investment and production, while at the same time underpinning productivity growth, is largely an empirical issue. Does it mitigate market failures, help infant industries, new firm entry and underdeveloped capital markets, coordinate (vertical and horizontal) linkages in production and enhance learning-by-doing; or, as Porter (1990) argues, do subsidies dull the market incentives firms face, delay adjustment and innovativeness, and overall constrain flexibility and instead create a culture of 'rent seeking' (particularly for SOEs and those firms with strong political connections with government—cf. Yu et al. 2010; Tan et al. 2007)? Based on the discussion in this section, and in anticipation of the results presented below, we provide a simple theoretical model in the appendix where generally assistance lowers the 'user' cost of capital, so relaxing likely financial constraints and allowing firms to upgrade the quality of their capital stock, which in turn will lead to increases in TFP (e.g., through lowering costs as 'vintage' capital stock is replaced by more efficient, newer capital equipment; and/or through allowing firms to introduce new, higher quality products).
The model also allows for managerial effort to be divided between pursuing higher levels of TFP and rent-seeking (e.g., through lishu relationships), where the latter (cet. par.) increases profitability (without increasing TFP) and thus boosts the personal reward to managers (e.g., when assistance 'leaks' into higher profits, through such 'soft budget' constraints, managers obtain greater bonus-related payFootnote 6 and this creates an agency problem—see, for example, Hanke and Heine 2015). The outcome is that we are able to show theoretically that up to a certain assistance rate (which we denote in the model as \(\widehat \gamma\)) managerial effort is dominated by efforts to improve TFP; however, when actual government assistance becomes too high (\(\gamma > \widehat \gamma\)) this dulls the pursuit of higher TFP as 'rent seeking' dominates managerial efforts. Note that lack of data (e.g., on managerial effort) means we cannot directly estimate and therefore test the assumptions underlying the model in the appendix.

The extent of government assistance to Chinese firms

The data source used in this study covers medium- to large-sized firms belonging to 26 industries covering manufacturing and utilities for 1998–2007. A discussion of the unbalanced panel dataset used—the annual accounting reports required by government to be filed by industrial firms with the NBS over the period of 1998–2007—is presented in Ding et al. (2015). This dataset includes all SOEs and other types of enterprises with annual sales of five million yuan (about $817,000) or more. Brandt et al. (2012) provide a thorough discussion of this extensively used dataset, which for present purposes covered nearly 600 thousand firms, corresponding to some 2.2 million firm-year observations. Table 1 presents information on the percentages that received assistance during this period, with firms sub-divided into those who only received tax holidays, subsidies, or both types of assistance.
Information on subsidies received is reported by firms, while the value of tax holidays is calculated from taxes paid on profits and VAT, combined with data on value-added and profits-before-tax for each firm. Firms that did not pay the full 17% rate of VAT or 33% profits tax are considered to have received a tax holiday.Footnote 7 Table 1 also reports the average tariff (ad valorem equivalent) on final imported goods as an additional source of assistance to firms, computed using the WITS (World Bank) database.Footnote 8 The percentage of firms receiving government assistance increased from over 53% in 1998 to over 72% in 2007 (Table 1). The largest form of assistance was tax holidays, while the percentage of firms receiving only subsidies was relatively small (and fairly constant); those receiving both tax holidays and subsidies rose from around 4% of firms in 1998 to over 8% by 2007. During this period, and reflecting China joining the WTO at the end of 2001, protection from overseas competition fell, with the average tariff rate declining from some 18 to 10% (see Table 24.1 in Harrison et al. (2014), for details on tariff rates across industriesFootnote 9). In terms of the financial value of assistance, Table 2 presents assistance rates (calculated using data on the total value of assistance divided by total value-added produced for each sub-group shownFootnote 10) broken down into type of assistance and by ownership categories. Relief from paying VAT at its full rate was the most valuable source of help received (worth between 5.9 and 7.5% of value-added during the period), followed by profit tax 'holidays' (increasing from around 2% in 1998 to nearly 5% in 2007). Direct subsidies were worth significantly less (on average around 1% of value-added over 1998–2007).
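The imputation of tax holidays described above can be sketched as follows. This is a rough illustration only: the statutory rates (17% VAT and 33% profits tax) come from the text, but the function names and the toy firm figures are invented for the example.

```python
# Sketch of the implied tax-holiday calculation described in the text:
# a firm paying less than the statutory 17% VAT rate or 33% profits-tax
# rate is treated as receiving a tax holiday worth the shortfall.
# Function names and the toy numbers are illustrative, not from the dataset.

STATUTORY_VAT = 0.17
STATUTORY_PROFIT_TAX = 0.33

def implied_tax_holiday(value_added, profit_before_tax, vat_paid, profit_tax_paid):
    """Return the implied tax holiday (in yuan) for one firm-year."""
    vat_shortfall = max(STATUTORY_VAT * value_added - vat_paid, 0.0)
    profit_shortfall = max(STATUTORY_PROFIT_TAX * max(profit_before_tax, 0.0)
                           - profit_tax_paid, 0.0)
    return vat_shortfall + profit_shortfall

def assistance_rate(tax_holiday, subsidies, value_added):
    """Assistance rate = total assistance / value-added, as in Table 2."""
    return (tax_holiday + subsidies) / value_added

# Toy firm: 1000 value-added, 200 profit, paid 100 VAT and 30 profit tax,
# and received a 20 subsidy.
holiday = implied_tax_holiday(1000.0, 200.0, 100.0, 30.0)
print(round(holiday, 6))                                 # → 106.0 (70 VAT + 36 profit tax)
print(round(assistance_rate(holiday, 20.0, 1000.0), 6))  # → 0.126
```

In this toy case the firm's assistance rate of 12.6% of value-added would place it in the 10–19% sub-group used in the tables below.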
Cumulatively, assistance rose from around 10% of value-added in 1998 to 13% by 2007; foreign-owned firms (including those based in special economic areas and Taiwan) received the highest rates of assistance, rising slowly over time, while (perhaps unexpectedly) SOEs as a sub-group received the lowest rates of assistance.Footnote 11

Table 2 Value of assistance to industry as a percentage of total value-added produced, China 1998–2007

The direct impact of assistance on firm level productivity

In this section we present the empirical findings on the relationship between the rate of government assistance received and TFP. The methodology (and justification for its use—such as the need to use a fixed-effects estimator; the strengths of the approach versus the Olley and Pakes (1996) and Levinsohn and Petrin (2003) approaches; the need to estimate a gross-output versus value-added production function; and the consistency of estimating TFP using a single-stage, rather than multi-stage, approach) has been fully set out in Ding et al. (2015), where a system Generalised Method of Moments (GMM) approach was used to estimate log-linear Cobb-Douglas gross-output production functions for 26 industries in China, using annual firm-level National Bureau of Statistics (NBS) data for 1998–2007. Specifically, we estimate the following model:

$$y_{it} = \alpha _i + \alpha _Ee_{it} + \alpha _Mm_{it} + \alpha _Kk_{it} + \alpha _XX_{it} + \alpha _Tt + \varepsilon _{it}$$

where endogenous y, e, m and k refer respectively to the logarithms of real gross output, employment, intermediate inputs, and the capital stock in firm i at time t (i = 1,…,N; t = 1,…,T); and Xit is a vector of observed (proxy) variables determining TFP.
In particular we include dummy variables measuring the rate of assistance received (compared to the benchmark sub-group who received no assistance); we do this because we find that the effect of assistance on TFP is non-linear and this is a standard way of accounting for non-linearity. Also included in the vector Xit are firm characteristics such as firm age, political affiliation, firm ownership, export behavior, whether the firm engaged in R&D, financial variables, and geographic location (Table 3 provides a list of the variables used; further discussion is provided in Ding et al., op. cit., relating to their Table 1). Lastly, t is a time trend, measuring exogenous gains in TFP over time.

Table 3 Descriptive statistics for variables used in determining assistance to firms, China 1998–2007

Equation (1)—in dynamic form with additional lagged values of output and factor inputs—is estimated using the two-step XTABOND2 system GMM approach (Arellano and Bond 1991) implemented in STATA (this also involves correcting for any potential finite sample bias using Windmeijer's (2005) approach). Thus Eq. (1) is estimated both in first-differences and in levels, allowing for fixed effects and tackling endogeneity of the right-hand-side variables (including the lagged dependent variable) and selection bias by using lagged values of the endogenous variables as instruments in the first differences equation, and first-differences of the same variables as instruments in the levels equation (Blundell and Bond 1998).Footnote 12 In this study, gross output, intermediate inputs, labour, and capital are treated as endogenous, as well as assistance rates, political affiliation, capital ownership,Footnote 13 exporting, and R&D. Lastly, according to Arellano and Bond (1991), the presence of second-order autocorrelation implies that the estimates are inconsistent. Panel tests for autocorrelation are used to establish whether second-order correlation is an issue.
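To fix ideas, the fixed-effects logic behind Eq. (1) can be illustrated on simulated data. This is a deliberately simplified within-groups OLS sketch, not the two-step system-GMM estimator with lagged instruments actually used in the paper, and all coefficient values and sample sizes are invented.

```python
# Simplified illustration of the fixed-effects production function in Eq. (1):
# demeaning each firm's data over time removes the firm effect alpha_i,
# after which OLS recovers the input elasticities. (The paper's estimator is
# two-step system GMM via xtabond2, which additionally instruments the
# endogenous inputs; that step is omitted here.)
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years = 200, 8
alpha_i = rng.normal(0.0, 1.0, n_firms)          # firm fixed effects
true_beta = np.array([0.20, 0.60, 0.15])         # alpha_E, alpha_M, alpha_K (invented)

X = rng.normal(0.0, 1.0, (n_firms, n_years, 3))  # logs of e, m, k
y = alpha_i[:, None] + X @ true_beta + rng.normal(0.0, 0.05, (n_firms, n_years))

# Within transformation: subtract each firm's time mean, eliminating alpha_i.
Xd = (X - X.mean(axis=1, keepdims=True)).reshape(-1, 3)
yd = (y - y.mean(axis=1, keepdims=True)).reshape(-1)

beta_hat, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
print(np.round(beta_hat, 2))   # close to [0.2, 0.6, 0.15]
```

With strictly exogenous inputs, as simulated here, the within estimator recovers the elasticities; it is precisely because real inputs are endogenous that the paper instruments them with lagged values in the GMM step.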
The detailed results from estimating Eq. (1) for 26 two-digit industries/industry groups are presented in Table 7 (in the unpublished appendix). These are very similar to those presented in Ding et al. (2015), to which the interested reader is directed for a full discussion. Here we concentrate on the parameter estimates for the assistance variables (Table 4, top half). Firstly, as the diagnostics show, the models estimated pass various tests of the validity of the instruments used and tests for autocorrelation. All the models for the 26 industries pass the Hansen test for over-identification at the 10% level or better, suggesting the validity of the instrument set used. With regard to tests for autocorrelation, none show evidence of second-order serial correlation in the differenced residuals (based on a 10% significance level), suggesting the overall consistency of our estimates.

Table 4 Long-run impact on TFP of assistance to firms (26 industries, China, 1998–2007)

Table 4 shows that in 11 out of 26 industries the impact of assistance on TFP increases monotonically for those firms that receive less than 10, 10–19 and 20–49% assistance rates; for a further 10 industries assistance rates between 1–9% have a significantly positive effect while the impact is greater for those in receipt of 10–19% assistance rates, and approximately the same for those receiving 20–49% compared to 10–19% assistance. Only for the petroleum sector, measuring instruments, electronic power generation and gas production is there a decline in the positive impact of assistance on TFP for the 20–49% sub-group compared to 10–19%. Tobacco is the only sector where assistance (for any sub-group) has no statistically significant impact on TFP.
In 9 industries firms with assistance rates 50+% experienced significant declines in TFP (especially coal mining, electronic power generation and water production), while in nonmetal products receiving 50+% assistance boosted TFP by 5.7% and in metal products the impact was 13.7% higher TFP (only in the latter sector does TFP increase monotonically across all assistance rate sub-groups). On average across all 26 industries, the parameter estimates in Table 4 show that firms receiving assistance rates of 1–10, 10–19, 20–49 and 50+% experienced on average 4.5, 9.4, 9.2 and −3% gains in TFP, respectively,Footnote 14 which is consistent with the theoretical model in the appendix which proposes that "over-assistance" induces firm managers to substitute managerial effort for rent-seeking effort, which consequently lowers TFP. This complements the result obtained by Aghion et al. (2015) that "… driving the Herfindahl for the dispersion of tax holidays on income taxes and value-added taxes to 0 would lead to an increase in TFP of 8.5 to 10.3 percentage points" (pp. 15–16). Thus both studies show that assistance to industry in China has an impact on firm-level TFP.

Based on the results from estimating Eq. (1), we can also calculate an index of TFP. The obvious approach would be \(ln\widehat P_{it} = y_{it} - \widehat \alpha _Ee_{it} - \widehat \alpha _Mm_{it} - \widehat \alpha _Kk_{it}\), but this is not a proper TFP index, because the measure of input growth (\(\widehat \alpha _Ee_{it} + \widehat \alpha _Mm_{it} + \widehat \alpha _Kk_{it}\)) does not satisfy axiom X5 (proportionality) in O'Donnell (2015), except in the case of constant returns-to-scale.
The solution is to restore proportionality by using a special case of the Färe and Primont (1995) input index:

$$ln\widehat P_{it} = y_{it} -\frac{1}{\widehat \alpha_{E}+\widehat \alpha_{M}+\widehat \alpha_{K}}\left(\widehat \alpha_{E}e_{it}+\widehat \alpha_{M}m_{it}+\widehat \alpha_{K}k_{it}\right)=\frac{1}{\widehat \alpha_{E}+\widehat \alpha_{M}+\widehat \alpha_{K}}\left(\widehat \alpha_{i}+\widehat \alpha_{X}X_{it}+\widehat \alpha_{T}t_{it}+\widehat\varepsilon _{it}\right)$$

and use Eq. (2) to summarise our results. Figure 1, which shows the cumulative distribution of TFP for firms with different rates of assistance, confirms that assisted firms generally had higher TFP, with a gap between the best and worst performing sub-groups of 0.133 at the widest point.Footnote 15

TFP distribution in China by rates of government assistance, 1998–2007

Lastly, even though in estimating Eq. (1) we allow for fixed effects, endogeneity and selection bias for certain key right-hand-side variables (via instrumenting intermediate inputs, labour, capital, assistance rates, political affiliation, capital ownership, exporting, and R&D), we also as a robustness check re-estimated Eq. (1) using a 'matched' sample approach (Imbens and Rubin 2015), even though the asymptotic theory for matching and then using system GMM is not yet developed. Separately for each of the 26 industries covered here we used a propensity-score approach to predict the likelihood of receiving assistance and then used one-to-one 'matching' to create an overlapping 'treatment' and 'control' group of firms (the STATA procedure PSMATCH2 was used).Footnote 16 This smaller sample was then used to re-estimate Eq. (1)—using system-GMM but this time not instrumenting assistance rates—with the key results reported in Table 4 (lower half). Tests of the appropriateness of the 'matching' technique (using PSTEST in STATA) based on Rubin's B and R show that the 'treatment' and 'control' groups are sufficiently balanced.
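The role of the normalisation in Eq. (2) — dividing by the sum of the estimated elasticities so that the input index satisfies proportionality — can be checked numerically. The elasticity values below are illustrative, not the paper's estimates.

```python
# Sketch of the Färe–Primont-style TFP index in Eq. (2): input elasticities
# are normalised to sum to one, which restores proportionality (scaling all
# inputs by lambda raises the input index by exactly log(lambda)).
# Coefficient values here are invented for illustration.
import math

a_E, a_M, a_K = 0.20, 0.60, 0.15        # illustrative elasticity estimates
s = a_E + a_M + a_K                      # returns to scale (here 0.95)

def ln_tfp(y, e, m, k):
    """ln TFP = ln output minus the normalised (Färe–Primont) input index."""
    return y - (a_E * e + a_M * m + a_K * k) / s

base = ln_tfp(5.0, 1.0, 2.0, 3.0)
# Scale all inputs by lambda = 2 (i.e. add log 2 to each log input): the
# input index rises by exactly log 2, so measured TFP falls by log 2 when
# output is held fixed -- the proportionality axiom in O'Donnell (2015).
lam = math.log(2)
scaled = ln_tfp(5.0, 1.0 + lam, 2.0 + lam, 3.0 + lam)
print(round(base - scaled, 10))          # → 0.6931471806 (= log 2)
```

Without the division by s, the same experiment would change measured TFP by s·log 2 rather than log 2, which is why the unnormalised index fails proportionality away from constant returns.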
Moreover, the results obtained with respect to the parameter estimates attached to the assistance rate dummies are generally similar; the averages across all industries of the impact of assistance on TFP for the various sub-groups are less than 10 percentage points different when the 'full' data and 'matched' data results are compared. This confirms that the estimates produced in Table 4 (top half) of the impact of assistance on TFP are indeed robust. TFP growth and the impact of assistance The previous section shows the impact of receiving assistance on the level of TFP, while this section takes the next step: we decompose the growth of aggregate TFP and consider the contribution of government assistance. That is, while in Section 4 we have found that assistance generally leads to higher levels of TFP, evidence is still needed as to whether government aid induces higher aggregate TFP growth—where the latter includes firms in operation throughout 1998–2007 (covering within-firm and inter-firm changes to TFP) as well as the impact of new firm entry and firm closures.Footnote 17 Put differently, TFP growth is not just about changes in the distribution of the TFP level across firms, but also the (re-)allocation of resources across firms as they expand or contract. We measure TFP growth and its decomposition using the well-known Haltiwanger approach (Foster et al. 1998). The index of productivity in year t is defined as a geometrically weighted average of individual firm-level productivities (Eq. 2). This index and its growth between t and t−k can therefore be written as follows:Footnote 18 $$lnP_t = \mathop {\sum}\nolimits_i {\theta _{it}lnP_{it}} \quad \quad {\mathrm{\Delta }}lnP_t = lnP_t - lnP_{t - k}$$ where P measures productivity and θit is the share of output for firm i in period t. 
Thus, productivity growth can be expressed as follows: $$\begin{array}{l}{\mathrm{\Delta }}lnP_{t} = \overbrace {\mathop {\sum}\nolimits_{i} {\theta_{it - k}\Delta lnP_{it}} }^{{\mathrm{within}} - {\mathrm{firm}}\,{\mathrm{(continuers)}}} + \overbrace {\mathop {\sum}\nolimits_{i} {(lnP_{it - k} - lnP_{t - k})\Delta \theta_{it}} }^{{\mathrm{between}} - {\mathrm{firm}}\left( {{\mathrm{continuers}}} \right)} + \overbrace {\mathop {\sum}\nolimits_{i} {\Delta lnP_{it}\Delta \theta_{it}} }^{{\mathrm{cross}} - {\mathrm{firm}}({\mathrm{continuers}})}\\ + \overbrace {\mathop {\sum}\nolimits_{i} {\theta_{it}\left( {lnP_{it} - lnP_{t - k}} \right)} }^{{\mathrm{entering}}\,{\mathrm{firms}}} - \overbrace {\mathop {\sum}\nolimits_{i} {\theta_{it - k}\left( {lnP_{it - k} - lnP_{t - k}} \right)} }^{{\mathrm{exiting}}\, {\mathrm{firms}}}\end{array}$$ Using estimates of lnPt for 1998 and 2007 (Eq. 2) and Eqs. (3, 4), we obtain the results in Table 5. The latter shows that overall Chinese firms achieve on average TFP growth of 7.9% p.a., with 80%Footnote 19 of this attributable to the impact of new firm entry (firm closure actually decreased TFP growth by 0.3% p.a.). The next major source was from continuing firms becoming internally more productive (contributing 21% of overall growthFootnote 20). Table 5 Firm-level TFP growth (average per annum) by State ownership, political affiliation and assistance, 1998–2007, China Table 5 also shows that SOE's contributed 2.1% to overall TFP growth (or 27% of the totalFootnote 21), which is significantly below what might have been expected given their share of total output in 1998 (over 42%), and in part reflects their having lower TFP levels. Much of the contribution to SOE TFP growth was due to the closure of inefficient firms (contributing 62%Footnote 22 of total TFP growth).
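The five terms of the Foster–Haltiwanger–Krizan decomposition sum to the aggregate change by construction. A toy example with two continuers, one entrant and one exiter (all shares and log-productivities invented) verifies the identity:

```python
# Foster-Haltiwanger-Krizan decomposition of Eq. (4) on toy data.
# Each tuple is (share, lnP); shares sum to 1 within each period.
cont = {"A": ((0.40, 1.0), (0.35, 1.2)),   # continuers: (t-k values, t values)
        "B": ((0.30, 0.8), (0.30, 0.9))}
exitr = {"C": (0.30, 0.5)}                  # present only in t-k
entr  = {"D": (0.35, 1.1)}                  # present only in t

# Aggregate (share-weighted) log productivity in each period
lnP_tk = sum(s * p for (s, p), _ in cont.values()) + sum(s * p for s, p in exitr.values())
lnP_t  = sum(s * p for _, (s, p) in cont.values()) + sum(s * p for s, p in entr.values())

within  = sum(s0 * (p1 - p0) for (s0, p0), (s1, p1) in cont.values())
between = sum((p0 - lnP_tk) * (s1 - s0) for (s0, p0), (s1, p1) in cont.values())
cross   = sum((p1 - p0) * (s1 - s0) for (s0, p0), (s1, p1) in cont.values())
entry   = sum(s * (p - lnP_tk) for s, p in entr.values())
exit_   = sum(s * (p - lnP_tk) for s, p in exitr.values())

delta = within + between + cross + entry - exit_   # equals lnP_t - lnP_tk exactly
```

Note how the entrant and exiter terms are both measured relative to base-period aggregate productivity, so entry raises growth only if entrants are above the t−k average, and exit raises growth only if exiters were below it.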
Similarly, Table 5 shows that firms that had strong political connections (i.e., lishu relationships with provincial or central government) also contributed much less to overall TFP growth, which is in line with the results for SOE's (see Wu et al. 2012). Importantly, Table 5 shows that firms receiving assistance made a relatively larger contribution to overall TFP growth p.a. (non-assisted firms, accounting for some 49% of output in 1998, contributed just over 25% of overall TFP growth and thus in aggregate the assisted group contributed around 75% of total TFP growth). Further, taking account of their share of output and hence relative importance in 1998,Footnote 23 all the assisted sub-groups performed better than the non-assisted group, reflecting both stronger improvements in TFP over time and larger increases in their shares of total output (note, in Table 5 the TFP index for firms with a 50+% assistance rate is higher than the TFP indices for non-assisted and <10% assisted firms; in Fig. 1, the average TFP of the 50+% sub-group is about the same or lower with the difference being due to TFP not being weighted by output shares in Fig. 1). The entry of more productive and the closure of less productive firms dominated the composition of TFP growth in non-assisted firms (accounting for 44 and 33% of overall growth, respectively), and there was also a significant contribution from 'within-firm' improvements in productivity. Overall, this is in line with what might be expected when no government incentives are received and firms face the full impact of market competition. For the assisted sub-groups, there is much less reliance on the closure of less productive firms as a means of improving TFP growth (contributions were either small or negative) which suggests that assistance may have helped to 'prop-up' a proportion of relatively unproductive firms. 
There is also little evidence to suggest that assisted firms overall experienced higher 'within-firm' productivity gains, relative to non-assisted firms. Instead, the major source of TFP improvement for assisted firms tended to be the opening of new firms, perhaps in part attracted (facilitated) by the subsidies available from government. Lastly, we present results when firms are grouped by whether they were State-owned, had any political connections, and received assistance (at any positive rate). Part of the reason for grouping the data in this way is that during the period covered the proportion of output attributed to firms with no political connections increased from 12.9% in 1998 to over 55% by 2007 (Table 5). And yet Table 3 shows that on average between 1998–2007 nearly 57% of firms receiving assistance had no formal political connections. This provides strong support for the claim made by Haley and Haley (2013, Chapter 1) that under the operation of Chinese State Capitalism, the government has become less reliant on formal, traditional lishu relationships and instead ensures firms are dependent on government for financial assistance that creates mutual dependence. Table 6 shows that firms receiving government financial assistance that had no formal political connections and were not State-owned had (by some margin) the best performance: this sub-group contributed nearly 59% to overall TFP growth, with the highest TFP levels in both 1998 and 2007, and the largest increase in market share (up from 8% to nearly 45%). In this sub-group, some 97% of TFP growth was due to the entry of new firms. The next largest contribution to aggregate TFP growth came from firms receiving no assistance, where, as stated above, net entry and, to a lesser extent, 'within-firm' improvements had the largest impact on productivity growth.
Table 6 Firm-level TFP growth (average per annum) by whether State-owned (SOE), political affiliation (PA) and receiving assistance, 1998–2007, China In contrast, firms that received assistance and were either SOE's (with no formal political links, i.e. lishu relationship) or had political connections (but were not SOE's)—the 'remainder' sub-group—contributed the least to TFP growth, despite their having some 26% of total output in 1998 (which only fell to 23% by 2007). For this sub-group, there were significant, but counter-balancing contributions to TFP growth from positive 'within-firm' improvements and the entry of more productive firms, and even larger negative impacts through the closure of more productive firms. The final sub-group (assisted SOE's with political connections) contributed some 9% to overall TFP growth, but they had the lowest TFP levels in both years and lost around 50% of their market share over the period. This sub-group saw little improvement in TFP growth through the 'within-firm' contribution, although there were significant gains through the most productive 'continuing' firms gaining market shares at the expense of the least productive firms ('between-firm' effects). Summary and conclusions Industrial policy, particularly through the provision of large-scale assistance to industry in the form of 'tax holidays' and subsidies to firms, is very important in China (e.g., the data used here for medium- to large-sized firms in manufacturing and utilities shows that in 2007 over 72% of firms received government assistance, worth around 13% of their value-added). Recently Aghion et al. (2015) have reported that the distribution of government assistance to firms in China has enhanced productivity over the 1998–2007 period, given that it was allocated to competitive sectors and /or fostered competition in a sector. 
However, they did not test directly whether receiving assistance had a direct impact on each firm's TFP, perhaps thereby introducing distortions that work against the productivity-enhancing effects associated with the distribution of assistance. A major contribution of this paper has been to use Chinese firm-level panel data for 1998–2007 to introduce measures of assistance received by each firm directly into industry-level production functions determining firm output. The latter were estimated using a system-GMM econometric approach (with assistance instrumented by its lagged values); and by estimating production functions using 'matched' data comprising firms receiving assistance and firms not receiving 'treatment' who nonetheless had very similar characteristics to the assisted sub-group. The results indicated that, across the 26 industries considered, Chinese firms that received assistance had higher TFP during 1998–2007, although there is some evidence that too high a level of assistance has negative consequences for TFP. On average the results showed that firms receiving assistance rates of 1–10, 10–19, 20–49 and 50+% experienced on average 4.5, 9.4, 9.2 and −3% gains in TFP, respectively. While we find that government assistance generally boosted TFP at the firm level, we also show that aggregate TFP growth was largely achieved through assisted new firms being 'encouraged' to start-up rather than through continuing firms improving, and there is also some evidence that closure rates were truncated as a result of government assistance. That said, overall assisted firms did contribute more to TFP growth than non-assisted firms, but this was very much 'driven' by a sub-group that received government financial assistance but had no formal political connections and were not State-owned.
Assisted firms that were SOE's and/or had political connections were the lowest performers, which suggests that state policy to boost TFP worked best in China when it was de-coupled from formal political control. Turning to further work that could be done, we have not at this stage set out to test if different forms of assistance (i.e., different types of tax holidays as well as subsidies to firms) have differential impacts. Our initial attempts to do this using system-GMM suffered from collinearity problems, so further experimentation with regard to modelling is necessary. Taking the Haltiwanger results a stage further, it would also be interesting to model directly the impact of assistance on the (hazard rate of) firm closure. Competition-friendly policies are defined as those that allocate assistance to a wide group of firms in a sector (so encouraging competition) and/or that target younger and more productive firms. Karhunen and Huovari (2015), using similar data, confirm these results for Finland. Note, this is not limited to 'catch-up' in developing economies; 'network failures' in general arise because technological know-how (broadly defined) is partly tacit and therefore cannot be diffused easily. Networks can be important for the transfer of such tacit knowledge (they are mutual learning processes fostered by well-managed collaboration between specialists in complementary fields, as well as between designers, producers and end-users), and they can also partly overcome the problems associated with firms experiencing bounded rationality and consequently bounded vision (Teece and Pisano 1998). Closer ties to government can also help businesses to overcome market and state failures in securing property rights and enforcing contracts—Li et al. (2008) and Zhou (2013). Note, therefore, this definition of politically connected firms is different to the approach adopted by Faccio (2006), who looked at such connections across 47 countries (excluding China). 
There are currently around 15 'pillar' industries set by the central government in China, from technology-intensive sectors like aerospace and computing, through to wholesale and retailing. The 'culture' sector is also now a pillar industry. Or when corruption is present, it may be possible for them to use the extra profits to reward themselves more directly. The value attributed to any profits tax holiday is computed as: (0.33 × profits-before-tax) − profits tax paid. The value of any VAT holiday is (0.17 × value-added) − VAT paid. Du et al. (2014), Harrison (2014) and Aghion et al. (2015) provide further details. Others (e.g., Aghion et al. 2015) have also included the 'implied' rate of interest firms paid on loans (calculated as interest payments divided by current liabilities) to measure the extent to which firms may have received loans at below-market interest rates. Certainly the implied interest rate across firms did decline between 1998 and 2004 (before rising again between 2004 and 2007)—see Fig. 2 in the unpublished appendix. However, the percentage of firms paying zero interest, because they had no borrowings, also rose dramatically from around 29% in 1998 to around 42% in 2007 (mostly due to the growth in importance of smaller privately-owned businesses during this period—see Table 1 in Ding et al. (2015)—who were generally unable to secure loans from the Chinese banking system). Given this, no direct measure of the 'implied' cost of borrowing is included in this study (although, note, we do include measures of firm liquidity in our determinants of TFP—see Table 3 below). Table 8 in the unpublished appendix provides the breakdown used in this study. That is, not the average across firms—totals for each sub-group were instead used. Table 9 in the unpublished appendix provides a breakdown of assistance rates across ownership sub-groups by type of assistance.
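The imputed value of the two tax holidays defined above is simple arithmetic against the statutory rates (33% profits tax, 17% VAT). The firm figures below are invented for illustration:

```python
# Imputed value of tax holidays as defined in the text, using the statutory
# 33% profits-tax and 17% VAT rates. All firm figures are hypothetical (yuan m).
profits_before_tax, profits_tax_paid = 10.0, 2.3
value_added, vat_paid = 40.0, 5.0

profits_tax_holiday = 0.33 * profits_before_tax - profits_tax_paid  # tax forgone vs. statutory rate
vat_holiday = 0.17 * value_added - vat_paid                         # VAT forgone vs. statutory rate
```

A positive value means the firm paid less than the statutory liability, i.e. it enjoyed a holiday of that size.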
It is also important to note that while SOEs had lower rates of assistance, the NBS data shows that in 1998 SOEs received nearly 39% of all assistance by value (¥64.6 of a total of ¥167.4 billion); in 2007 they received just over 14% of all assistance (¥207.9 of a total of ¥1453.2 billion). We use Roodman's (2009) 'collapse' procedure in all our estimations using XTABOND2 in STATA, such that only the instruments applicable to each variable—not the full instrument set covering all variables—are used. Too many instruments have been shown to often result in a Hansen p-value at or very close to 1. Note we expect that (in particular) receiving assistance, being politically affiliated and State-owned are endogenous (to each other) and that is why we instrument these variables and test whether the instruments used are appropriate. With regard therefore to identifying causality, we have endogenous relationships which imply causality in both directions, and we are able to take account of this through the use of an instrumental variable approach. Only if a set of structural equations were estimated with say a FIML approach (which implies the as yet to be developed methodology of multi-equation fixed effects modelling which is beyond the scope of this paper) could structural parameters be retrieved and direct causal relationships amongst all the endogenous variables be separated out. Instead here we can identify the reduced form causal impact of a change in assistance on TFP, based on unbiased parameter estimates, given we instrument endogenous variables. This is based on taking a simple average across all industries (irrespective of whether parameter estimates were statistically significant or not) and expressing the results as \(e^{\Sigma \alpha } - 1\). The Kolmogorov–Smirnov test for equality of distribution functions between the 20 to <50% and 50+% distributions has a d-statistic of −0.133 (and associated p-value of 0.00).
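Because the dependent variable in Eq. (1) is in logs, dummy-variable coefficients translate into percentage effects via \(e^{\alpha} - 1\) (or \(e^{\Sigma \alpha} - 1\) when several relevant coefficients are summed), which is the transformation used in the footnote above. A two-line illustration with made-up coefficients:

```python
import math

# Percentage TFP effect implied by summed log-equation dummy coefficients
# (coefficient values are hypothetical, not the paper's estimates).
alphas = [0.030, 0.015]                 # e.g. a direct effect plus an interaction term
pct = math.exp(sum(alphas)) - 1         # slightly above the naive sum of the coefficients
```

For small coefficients the transformation barely matters, but for larger ones the naive reading of a dummy coefficient as a percentage understates the implied effect.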
The equation used for the propensity score matching was \(Assisted_{it} = \alpha _0 + \alpha _1\,Assisted_{it - 1} + \alpha _2\left( {High\,political\,affiliation\, \times \,SOE} \right)_{it} + \alpha _xZ_{it} + \mu _{it}\), where Zit comprises a set of control variables determining the probability of being assisted (involving the indicated variables in Table 3, including industry and year dummies). In the context of the exercise undertaken here, firms that are in the dataset in 2007 and not 1998 are deemed 'new entrants'; firms that appear in 1998 but not 2007 are classified as closed. Since the NBS dataset does not include enterprises with annual sales below ¥5 million, it is important to note that about 80% of all industrial firms are excluded from the sample. However, as shown in Brandt et al. (2012), using the full census of firms periodically carried out in China, the omitted firms only account for some 9.9% of output in 2004, and 2.5% of exports. Moreover, a comparison of 1995 NBS and Census data shows the NBS has a similar level of coverage, allowing Brandt et al. (2012) to state that "… the NBS decision rule on which firms to include in their annual sample is not introducing any systematic bias in our estimates". Thus the issue that firms may still exist but be below the ¥5 million benchmark is not likely to have any significant impact here. As will be seen, we combine the between-firm and cross-firm effects into one 'between firm' effect. i.e., 6.28 ÷ 7.88. Recall the figures in Column 1 (Table 5) reflect two components, as shown in Eq. (3): a within-subgroup productivity change and the relative importance of the subgroup over time. That is, we can rewrite Eq.
(3) as: \({\mathrm{\Delta }}lnP_t = \mathop {\sum}\nolimits_i {\theta _{it}lnP_{it}} - \mathop {\sum}\nolimits_i {\theta _{it - k}lnP_{it - k}}\) Note that we can shift the location of productivity ω to make sure the optimal choice is between 0 and 1, without loss of generality. Note that η < −1, so γ* > 0. We assume \(\alpha _K > \frac{{ - 1}}{{\left( {1 + \eta } \right)}}\), so γ* < 1. Thus, the optimal effort is an interior solution. Aghion P, Cai J, Dewatripont M, Du L, Harrison A, Legros P (2015) Industrial policy and competition. Am Econ J Macroecon 7(4):1–32 Arellano M, Bond S (1991) Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Rev Econ Stud 58(2):277–297 Arrow KJ (1962) The economic implications of learning by doing. Rev Econ Stud 29:155–173 Arrow KJ, Debreu G (1954) Existence of an equilibrium for a competitive economy. Econometrica 22(3):265–290. https://doi.org/10.2307/1907353 Blundell R, Bond S (1998) Initial conditions and moment restrictions in dynamic panel data models. J Econ 87:115–143 Brandt L, Van Biesebroeck J, Zhang Y (2012) Creative accounting or creative destruction? Firm-level productivity growth in Chinese manufacturing. J Dev Econ 97:339–351 Casson M (1999) Market failure and government support for business: a comment, for the DTI, mimeo Cohen E (2006) Theoretical foundations of industrial policy. EIB Pap 11(1):85–106 Ding S, Guariglia A, Harris R (2015) The determinants of productivity in Chinese large and medium-sized industrial firms, 1998–2007. J Product Anal. https://doi.org/10.1007/s11123-015-0460-0 Du L, Harrison A, Jefferson G (2014) FDI spillovers and industrial policy: the role of tariffs and tax holidays. World Dev 64:366–383 Einio E (2014) R&D subsidies and company performance: evidence from geographic variation in government funding based on the ERDF population-density rule. 
Rev Econ Stat 96(4):710–728 European Commission (2002) A study of business support services and market failure. A report to the European Commission, the Foundation for SME Development, University of Durham, July. http://ec.europa.eu/DocsRoom/documents/3646/attachments/1/translations/en/renditions/pdf Faccio M (2006) Politically connected firms. Am Econ Rev 96(1):369–386 Färe R, Primont D (1995) Multi-output production and duality: theory and applications. Kluwer Academic Publishers, Boston Felipe J (ed) (2015) Development and modern industrial policy in practice: issues and country experiences, Asian Development Bank and Edward Elgar, Cheltenham, UK Foster L, Haltiwanger J, Krizan CJ (1998) Aggregate productivity growth: lessons from microeconomic evidence, NBER Working Paper No. 6803 Girma S, Gong Y, Gorg H, Zhihong Y (2009) Can production subsidies explain China's export performance? Evidence from firm-level data. Scand J Econ 111(4):863–891 Greenwald BC, Stiglitz JE (1986) Externalities in economies with imperfect information and incomplete markets. Q J Econ 101(2):229–264. https://doi.org/10.2307/1891114 Haley UCV, Haley GT (2013) Subsidies to Chinese industry: state capitalism, business strategy, and trade policy. Oxford University Press, Oxford Hanke PC, Heine K (2015) Subsidies and corporate governance—an agency approach. Manag Decis Econ 36:256–264 Harrison A, Shenggen F, Kanbur R, Wei SJ, Zhang X (2014) Trade and Industrial Policy: China in the 1990s to Today. The Oxford Companion to the Economics of China. Oxford University Press, Oxford, pp 161–170 Heilmann S, Shih L (2013) The rise of industrial policy in China, 1978–2012. Harvard-Yenching Institute Working paper series Huang C-H (2015) Tax credits and total factor productivity: firm-level evidence from Taiwan. J Technol Transf 40(6):932–947 Imbens GW, Rubin DB (2015) Causal inference for statistics, social, and biomedical sciences: an introduction. 
Cambridge University Press, Cambridge Irwin DA, Klenow PJ (1996) High-tech R&D subsidies: estimating the effects of Sematech. J Int Econ 40:323–344 Jacobs J (1970) The economy of cities. Jonathan Cape, London Jacobs J (1986) Cities and the wealth of nations. Penguin, Harmondsworth Khan MH (2015) Industrial policy design and implementation challenges, In: Felipe J (ed) Development and modern industrial policy in practice: issues and country experiences. Asian Development Bank and Edward Elgar, Cheltenham, UK, pp 94–126 Karhunen H, Huovari J (2015) R&D subsidies and productivity in SMEs. Small Bus Econ 45(4):805–823 Koski H, Pajarinen M (2015) Subsidies, the shadow of death and labor productivity. J Ind Competition Trade 15:189–204 Levinsohn J, Petrin A (2003) Estimating production functions using inputs to control for unobservables. Rev Econ Stud 70(2):317–341 Li H, Zhou L-A (2005) Political turnover and economic performance: the incentive role of personnel control in China. J Public Econ 89:1743–1762 Li H, Meng L, Wang Q, Zhou L-A (2008) Political connections, financing and firm performance: evidence from Chinese private firms. J Dev Econ 87:283–299 Lin JY, Long CX, Zhang X (2015) Industrial diversification in the People's Republic of China, In: Felipe J (ed) Development and modern industrial policy in practice: issues and country experiences. Asian Development Bank and Edward Elgar, Cheltenham, UK, pp 197–218 Managi S (2010) Productivity measures and effects from subsidies and trade: an empirical analysis for Japan's forestry. Appl Econ 42:3871–3883 Marshall A (1890) Principles of Economics. Macmillan, London Metcalfe S, Georghiou L (1998) Equilibrium and evolutionary foundations of technology policy. STI Rev 1998(1):22–26. https://www.oecd-ilibrary.org/science-and-technology/sti-review_sti_rev-v1998-1-en O'Donnell CJ (2015) Using information about technologies, markets and firm behaviour to decompose a proper productivity index. J Econom.
https://doi.org/10.1016/j.jeconom.2015.06.009 Olley GS, Pakes A (1996) The dynamics of productivity in the telecommunications equipment industry. Econometrica 64(6):1263–1297 Porter ME (1990) The competitive advantage of nations. Free Press, New York, NY Rodrik D (2006) Industrial development: stylized facts and policies revised. Copy at http://j.mp/2oz4ySE. http://drodrik.scholar.harvard.edu/publications/industrial-development-stylized-facts-and-policies-revised Romer PM (1986) Increasing returns and long-run growth. J Political Econ 94:1002–1037 Roodman D (2009) How to do xtabond2: an introduction to difference and system GMM in Stata. Stata J 9:86–136 Rubin DB (2001) Using propensity scores to help design observational studies: application to the tobacco litigation. Health Serv Outcomes Res Methodol 2:169–188 Schwartz G, Clements B (1999) Government subsidies. J Economic Surv 13(2):119–147 Stiglitz JE, Lin JY (eds) (2013) The industrial policy revolution I: the role of government beyond ideology. Palgrave Macmillan, Basingstoke, UK Stiglitz JE, Lin JY, Monga C (2013) Introduction: the rejuvenation of industrial policy. In: Stiglitz JE, Lin JY (eds) The industrial policy revolution I: the role of government beyond ideology, Palgrave Macmillan, Basingstoke, UK, pp 1–18 Tan J, Li S, Xia J (2007) When iron fist, visible hand, and invisible hand meet: firm-level effects of varying institutional environments in China. J Bus Res 60:786–794 Teece DJ, Pisano G (1998) The dynamic capabilities of firms: an introduction. In: Dosi G, Teece DJ, Chytry J (eds) Technology, organization and competitiveness. Perspectives on industrial and corporate change. Oxford University Press, Oxford, pp 193–214 Thun E (2004) Industrial policy, Chinese style: FDI, regulation, and dreams of national champions in the auto sector. J East Asian Stud 4:453–489 Walder AG (1992) Property rights and stratification in socialist redistributive economies.
Am Sociol Rev 57:524–539 Windmeijer F (2005) A finite sample correction for the variance of linear efficient two-step GMM estimators. J Econ 126:25–51 Wu W, Wu C, Rui OM (2012) Ownership and the value of political connections: evidence from China. Eur Financ Manag 18:695–729 Yu M, Hui Y, Pan H (2010) Political connections, rent seeking, and the fiscal subsidy efficiency of local governments. (In Chinese with English summary). Jingji Yanjiu/Economic Res J 45(3):65–77 Xia J, Li S, Long C (2009) The transformation of collectively owned enterprises and its outcomes in China, 2001-05. World Dev 37:1651–1662 Zhou W (2013) Political connections and entrepreneurial investment: evidence from China's transition economy. J Bus Ventur 28:299–315 Department of Economics and Finance, Durham University Business School, Durham, UK School of Economics, UNSW Business School, Sydney, NSW, Australia Shengyu Li Correspondence to Richard Harris. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. For simplicity, the firm employs a Cobb-Douglas function: $$Y = e^{\omega} E^{\alpha_E}M^{\alpha_M}K^{\alpha_K}$$ (A.1) where Y, E, M and K refer to output, employment, intermediate inputs and capital stock; ω is the physical productivity of the firm, and we assume constant returns to scale so αE + αM + αK = 1. With imperfect competition, demand is: $$Y = P^\eta e^{f(x)}$$ (A.2) and we assume the elasticity of demand is η < −1; P is the price of the product; and f(x) is the quality of the product which is a function of the managerial effort x. We assume f(x) is increasing in x. Factor prices are PE, PM, and PK(γ) and in particular, the 'user' cost of capital PK(γ) is a function of government assistance γ (with \(\frac{{\partial P_K}}{{\partial \gamma }} < 0\)).
The firm maximises profit, subject to the production function (A.1) and the demand curve (A.2): $$\pi _f\left( {x, \gamma } \right) = {\mathop {{\mathrm{max}}}\limits_{E,M,K}}PY - P_EE - P_MM - P_K\left( \gamma \right)K$$ After some manipulation, the profit function can be shown to be: $$\pi _f\left( {x,\gamma } \right) = {\mathrm{\Phi }}\left[ {P_K\left( \gamma \right)} \right]^{\left( {1 + \eta } \right)\alpha _K}e^{ - \omega \left( {1 + \eta } \right) + f\left( x \right)}$$ where \({\mathrm{\Phi }} = \left( { - \frac{1}{\eta }} \right)\left( {\frac{\eta }{{\eta + 1}}} \right)^{\eta + 1}\left[ {P_E^{\alpha_E}P_M^{\alpha_M}\frac{1}{{\alpha _E^{\alpha_E}\alpha _M^{\alpha_M}\alpha _K^{\alpha_K}}}} \right]^{\eta + 1}\), which does not involve x or γ. As well as managerial effort to boost product quality, managers can also exert rent-seeking effort q: $$\pi _r\left( {q,\gamma } \right) = \gamma g\left( q \right)$$ where g(q), the share of government assistance that directly rewards management, is increasing in rent-seeking effort q. Assuming that the nominal salary of managers is a share β of firm profit, we can write the problem of the manager as: $${\mathop {{\mathrm{max}}}\limits_{x,q}} \beta \pi _f\left( {x,\gamma } \right) + \pi _r\left( {q,\gamma } \right)$$ subject to the constraint that total effort (x + q) = 1. This imposes a trade-off for the manager of allocating her effort between pursuing TFP (hence higher profit) and pursuing rent-seeking to boost her private rewards without having to make the effort of boosting TFP. To simplify the solution of the problem (and without loss of generality), we set f(x) = x and \(g(q) = e^{q - 1}\), and \(P_K\left( \gamma \right) = e^{ - \gamma }\overline P _K\) where \(\overline P _K = 1\) is the normalised market 'user' cost of capital.
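Given these functional forms, the closed-form optimal effort \(x^{\ast}(\gamma)\) derived in the next paragraph of the appendix can be checked numerically. All parameter values below are hypothetical, chosen only to satisfy the appendix's assumptions η < −1 and \(\alpha_K > -1/(1+\eta)\); the sketch confirms that the closed form solves the first-order condition and that effort (and hence TFP, a positive affine transform of it since −1/η > 0) peaks at \(\gamma^{\ast} = -1/(\alpha_K(1+\eta))\), as Proposition 1 states.

```python
import math

# Numerical check of the appendix model (all parameter values are hypothetical).
eta, alpha_K = -3.0, 0.6        # eta < -1; alpha_K > -1/(1+eta) = 0.5, so gamma* < 1
omega, beta_Phi = 1.0, 0.8      # physical productivity and beta*Phi (illustrative)

def foc(x, g):
    # beta*Phi*[P_K(gamma)]^((1+eta)*alpha_K) * e^(-omega(1+eta)+x) - gamma*e^(-x)
    pk_term = math.exp(-g) ** ((1 + eta) * alpha_K)   # P_K(gamma) = e^(-gamma)
    return beta_Phi * pk_term * math.exp(-omega * (1 + eta) + x) - g * math.exp(-x)

def x_star(g):
    # closed-form solution of the first-order condition
    return 0.5 * ((eta + 1) * omega - math.log(beta_Phi)
                  + math.log(g) + alpha_K * (1 + eta) * g)

gamma_star = -1.0 / (alpha_K * (1 + eta))   # interior peak, = 5/6 with these parameters

residual = foc(x_star(0.5), 0.5)            # the closed form zeroes the FOC
below, peak, above = x_star(0.4), x_star(gamma_star), x_star(0.99)
```

With these numbers, effort rises in γ up to γ* ≈ 0.833 and falls beyond it, matching the "over-assistance" result.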
Thus, the first order condition of the manager's problem is $${\mathrm{\beta }}\Phi \left[ {P_K\left( \gamma \right)} \right]^{\left( {1 + \eta } \right)\alpha _K}e^{ - \omega \left( {1 + \eta } \right) + x} - \gamma e^{ - x} = 0$$ where the first term measures the marginal return to managerial effort from firm profit that determines nominal salary, while the second term represents the marginal return of managerial effort from rent-seeking. The former is positive while the latter is negative (as she has less rent-seeking effort to spend the more profit-seeking effort is allocated). The trade-off between the two implies optimal managerial effort is:Footnote 24 $$x^ \ast \left( \gamma \right) = \frac{1}{2}\left[ {\left( {\eta + 1} \right)\omega - ln\beta {\mathrm{\Phi }} + ln\gamma + \alpha _K\left( {1 + \eta } \right)\gamma } \right]$$ Note that \(x^{\ast}(\gamma)\) is a concave function of γ, with a maximum at \(\gamma ^ \ast = \frac{{ - 1}}{{\alpha _K\left( {1 + \eta } \right)}}\).Footnote 25 Also, measured TFP is: $$TFP\left(\gamma\right)=\frac{-1}{\eta}\left[x^{*}\left(\gamma\right)-\omega\left(1+\eta\right) \right]=\frac{-1}{2\eta}\left[-\left(\eta+1\right)\omega-ln\beta\Phi +ln\gamma +\alpha_{K}\left(1+\eta\right)\gamma\right]$$ As η < −1, measured TFP is higher if physical productivity ω is higher. More importantly, assistance generally increases TFP except when assistance is too high. This is summarized in the following proposition. Proposition 1: When\(\gamma \;<\; \gamma^{\ast}, x^{\ast}(\gamma)\) and TFP(γ) are increasing in γ; when \(\gamma \ge \gamma^{\ast}, x^{\ast}(\gamma)\) and TFP(γ) are decreasing in γ. That is, government subsidies lower the marginal cost of production and provide an incentive to the manager to allocate more effort to pursue higher profitability (via higher TFP), but over-assistance induces the manager to substitute managerial effort by rent-seeking effort, and consequently lowers TFP. Unpublished appendix Tables 7–9, Fig.
Table 7: Long-run two-step system-GMM production function (26 industries, China, 1998–2007). Table 8: Average final goods tariffs by industry in China, 1998–2007. Fig. 2: Percentage of firms making no interest payments and implied interest rate for those making interest payments, China, 1998–2007. Source: NBS.
Harris, R., Li, S. Government assistance and total factor productivity: firm-level evidence from China. J Prod Anal 52, 1–27 (2019). https://doi.org/10.1007/s11123-019-00559-4. Issue Date: December 2019. Keywords: Political connections; Firm-level.
CommonCrawl
Linear Relationship Definition
By Adam Hayes
What Is a Linear Relationship?
A linear relationship (or linear association) is a statistical term used to describe a straight-line relationship between two variables. Linear relationships can be expressed either in a graphical format, where the two variables are connected by a straight line, or in a mathematical format, where the independent variable is multiplied by the slope coefficient and added to a constant, which determines the dependent variable. A linear relationship may be contrasted with a polynomial or non-linear (curved) relationship.
A linear relationship (or linear association) is a statistical term used to describe a straight-line relationship between two variables. Linear relationships can be expressed either in a graphical format or as a mathematical equation of the form y = mx + b. Linear relationships are fairly common in daily life.
The Linear Equation Is:
Mathematically, a linear relationship is one that satisfies the equation
$$y = mx + b$$
where m is the slope and b is the y-intercept. In this equation, "x" and "y" are two variables which are related by the parameters "m" and "b". Graphically, y = mx + b plots in the x-y plane as a line with slope "m" and y-intercept "b". The y-intercept "b" is simply the value of "y" when x = 0. The slope "m" is calculated from any two individual points (x1, y1) and (x2, y2) as:
$$m = \frac{y_2 - y_1}{x_2 - x_1}$$
What Does a Linear Relationship Tell You?
There are three criteria an equation has to meet in order to qualify as a linear one: it can't consist of more than two variables, all of the variables must be to the first power, and the equation must graph as a straight line. A linear function in mathematics is one that satisfies the properties of additivity and homogeneity. Linear functions also observe the superposition principle, which states that the net output of two or more inputs equals the sum of the outputs of the individual inputs. A commonly used linear relationship is a correlation, which describes how one variable changes in a linear fashion to changes in another variable. In econometrics, linear regression is an often-used method of generating linear relationships to explain various phenomena. Not all relationships are linear, however. Some data describe relationships that are curved (such as polynomial relationships) while still other data cannot be parameterized. Mathematically similar to a linear relationship is the concept of a linear function. In one variable, a linear function can be written as:
$$f(x) = mx + b$$
where m is the slope and b is the y-intercept. This is identical to the given formula for a linear relationship except that the symbol f(x) is used in place of y. This substitution is made to highlight the meaning that x is mapped to f(x), whereas the use of y simply indicates that x and y are two quantities, related by the parameters m and b. In the study of linear algebra, the properties of linear functions are extensively studied and made rigorous.
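As a quick illustration (Python, with made-up points), the slope formula and the additivity/homogeneity properties can be checked directly. Note that f(x) = mx + b satisfies additivity and homogeneity in the strict sense only when b = 0:

```python
def slope(p1, p2):
    """Slope m from two points (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

def make_linear(m, b):
    """Return the function f(x) = m*x + b."""
    return lambda x: m * x + b

m = slope((1, 3), (3, 7))   # (7 - 3) / (3 - 1) = 2
f = make_linear(m, 1)       # f(x) = 2x + 1

# Additivity and homogeneity hold for the b = 0 case:
g = make_linear(m, 0)       # g(x) = 2x
```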
Given a scalar c and two vectors A and B from R^N, the most general definition of a linear function states that:
$$f(cA + B) = c \times f(A) + f(B)$$
Examples of Linear Relationships
Linear relationships are pretty common in daily life. Let's take the concept of speed for instance. The formula we use to calculate speed is as follows: the rate of speed is the distance traveled over time. If someone in a white 2007 Chrysler Town and Country minivan is traveling between Sacramento and Marysville in California, a 41.3-mile stretch on Highway 99, and the journey ends up taking 40 minutes, she will have been traveling about 62 mph. While there are more than two variables in this equation, it's still a linear equation because one of the variables will always be a constant (distance). A linear relationship can also be found in the equation distance = rate x time. Because distance is a positive number (in most cases), this linear relationship would be expressed on the top right quadrant of a graph with an X and Y-axis. If a bicycle made for two was traveling at a rate of 30 miles per hour for 20 hours, the rider will end up traveling 600 miles. Represented graphically with the distance on the Y-axis and time on the X-axis, a line tracking the distance over those 20 hours would travel straight out from the convergence of the X and Y-axis. In order to convert Celsius to Fahrenheit, or Fahrenheit to Celsius, you would use the equations below. These equations express a linear relationship on a graph:
$$\degree C = \frac{5}{9}(\degree F - 32)$$
$$\degree F = \frac{9}{5}\degree C + 32$$
Assume that the independent variable is the size of a house (as measured by square footage) which determines the market price of a home (the dependent variable) when it is multiplied by the slope coefficient of 207.65 and is then added to the constant term $10,500.
If a home's square footage is 1,250, then the market value of the home is (1,250 x 207.65) + $10,500 = $270,062.50. Graphically, and mathematically, it appears as follows: In this example, as the size of the house increases, the market value of the house increases in a linear fashion. Some linear relationships between two objects can be called a "constant of proportionality." This relationship appears as
$$Y = kX$$
where k is a constant and Y and X are proportional quantities. When analyzing behavioral data, there is rarely a perfect linear relationship between variables. However, trend-lines can be found in data that form a rough version of a linear relationship. For example, you could look at the sale of ice cream and the number of hospital visits as the two variables at play in a graph and find a linear relationship between the two.
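The house-price line and the standard temperature conversions above are all linear functions of one variable; a short Python sketch with the numbers quoted in the text:

```python
def market_value(sqft, slope=207.65, intercept=10_500.0):
    """Home-price model from the example: value = 207.65 * sqft + $10,500."""
    return slope * sqft + intercept

def c_to_f(c):
    """Celsius to Fahrenheit: F = (9/5) * C + 32."""
    return 9.0 / 5.0 * c + 32.0

def f_to_c(f):
    """Fahrenheit to Celsius: C = (5/9) * (F - 32)."""
    return 5.0 / 9.0 * (f - 32.0)

print(market_value(1_250))  # ~ 270062.50, matching the worked example
```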
The Magnus Prediction Models September 18, 2018, Micah Blake McCurdy, @IneffectiveMath Estimating Shooter and Goalie Talent I am interested in isolating which NHL players shoot the puck well, and which NHL goaltenders do a good job at preventing shots from becoming goals. To that end I have fit a regression model which replicates some of the simple features of shooting and saving. Throughout this article, when I say "shot" I will mean "unblocked shot", that is, goals, saves, and misses (including shots that hit the post or the crossbar). Furthermore, when I talk of shooting talent, I mean the ability to score more than one would expect given the shot location, so a player may well take a lot of shots from great scoring locations and still be "a bad shooter" in some sense. Generating many such shots is obviously desirable and surely can be done more often by talented players, but I do not consider any such talents to be part of shooting talent, which is (half of) the subject of this article. Throughout, I'll be using only 5v5 shots, since I think the hockey assumptions underlying the model are only valid for a single score state. However, one could presumably fit such a model (with perhaps slightly different tuning parameters) for 5v4 and even for 5v3, and then obtain aggregate estimates for players by combining their estimates from the various different models. Once a shot is being taken by a given player from a certain spot against a specific goaltender, I estimate the probability that such a shot will be a goal. This process is modelled with a generalized ridge logistic regression, for a detailed exposition please see Section 3. 
Briefly: I use a design matrix for which every row is a shot with the following columns: An indicator for the shooter; An indicator for the goaltender; A set of indicators for shot type, where wrist and snap shots are (together, undistinguished) taken as the "base" shot type, and dummy variables are set to 1 for slap shots, backhands, wraparounds, and tips (including deflections); An indicator for "rush shots", that is, shots for which the previous recorded play-by-play event is in a different zone and no more than four seconds prior; An indicator for "rebound shots", that is, shots for which the previous recorded play-by-play event is another shot taken by the same team no more than three seconds prior; The distance from the shot location to the net, divided by 89 ft, making the distance at the intersection of the blue line and the split line "1"; The "visible net", that is, the width of the net projected onto the plane which is square to the shooter, divided by six feet. For shots from the split line, the visible net has value 1, and for shots very close to the goal line, the visible net is close to 0; and An intercept. I make a slightly unusual modification to shot distances; namely, shots which are recorded as coming from closer than ten feet are assigned a distance of 10 ft. This is to stop small variations in shot location from having outsize effects on the regression, and also because it is close to the threshold of minimum human reaction time for goaltenders given typical NHL wrist shot speeds. The observation is 1 for goals and 0 for saves or misses. The model is fit by maximizing the likelihood, that is, for a given model, forming the product of the predicted probabilities for all of the events that did happen (90% chance of a save here times 15% of that goal there, etc.).
Large products are awkward, so we solve the mathematically equivalent problem of maximizing the logarithm of the likelihood, and before we do so we add a term of the form \(-\beta^T\Lambda\beta\), where we use \(\Lambda\) to encode our prior knowledge, as described below. Simple formulas for the \(\beta\) which maximizes this likelihood do not seem to exist, but we can still find it by iteratively computing: $$ \beta_{n+1} = ( X^TX + \Lambda )^{-1} X^T ( X \beta_n + Y - f(X,\beta_n) ) $$ where \(f(X,\beta)\) is the vector function whose entry at position i is \((1 + \exp(-X_i\beta))^{-1}\), where \(X_i\) is the i'th row of \(X\) (this choice of \(f\) is what makes the regression logistic). By starting with \(\beta_0\) as the zero vector and iterating until convergence, I obtain estimates of shooter ability and goaltending ability, with suitable modifications for shot location and type. This model is zero-biased, which is to say that we consider deviations from average ability to be on-their-face unlikely and bias our results towards average. Another way of saying the same thing is that we begin with an assumption (of a certain strength) that all players are of league-average ability and then let the observed data slowly update our knowledge, instead of beginning with an assumption that we know nothing about the shooters and goaltenders at all. The bias is controlled by the matrix \(\Lambda\), which must be positive definite for the above formula to be well-defined and for the resulting \(\beta\) to be the minimizer of the penalized error. As in my 5v5 shot rate model, I use a diagonal matrix, where the entries corresponding to goaltenders and shooters are \(\lambda = 100\) and those corresponding to all other columns are 0.001, that is, very close to zero. As for that model, the non-trivial \(\lambda\) values were chosen by varying \(\lambda\) and choosing a value where player estimates have stabilized.
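The iteration above is straightforward to implement. The sketch below is my own NumPy illustration of the stated update rule, not the author's code, and it is run on a tiny made-up design matrix; at a fixed point the update implies \(\Lambda\beta = X^T(Y - f(X,\beta))\), which the test exploits.

```python
import numpy as np

def fit_ridge_logistic(X, Y, lam, tol=1e-12, max_iter=5000):
    """Zero-biased (ridge-penalized) logistic regression via the iteration
    beta_{n+1} = (X^T X + Lambda)^(-1) X^T (X beta_n + Y - f(X, beta_n))."""
    Lam = lam * np.eye(X.shape[1])
    A = np.linalg.inv(X.T @ X + Lam)
    beta = np.zeros(X.shape[1])   # start at league-average (zero) abilities
    for _ in range(max_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))   # f(X, beta): predicted goal probabilities
        beta_next = A @ (X.T @ (X @ beta + Y - p))
        if np.max(np.abs(beta_next - beta)) < tol:
            return beta_next
        beta = beta_next
    return beta

# Tiny made-up example: an intercept column plus one covariate.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
Y = np.array([0.0, 0.0, 1.0, 1.0])
beta = fit_ridge_logistic(X, Y, lam=1.0)
```

Because the penalty matrix is positive definite, this iteration is a contraction and converges from the zero vector, mirroring the fitting procedure described in the text.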
In the future, I will publish results for all seasons, but for now, I record the results of fitting this model on all of the 5v5 shots in the 2016-2018 regular seasons. First, the non-player covariates are:
Covariate: Value
Constant: -2.55
Slapshot: +0.0836
Tip/Deflection: -0.222
Backhand: -0.175
Wraparound: -0.300
Rush: +0.228
Rebound: +0.754
Distance: -2.86
Visible Net: +1.15
Logistic regression coefficient values can be difficult to interpret, but negative values always mean "less likely to become a goal" and positive values mean "more likely to become a goal". To compute the probability that a shot with a given description will become a goal, add up all of the model covariates to obtain a number, and then apply the logistic function to it, that is, $$ x \mapsto \frac{1}{1 + \exp(-x)}$$ This function (after which the regression type is named) is very convenient for modelling probabilities, since it monotonically takes the midpoint of the number line (that is, zero) to 50% while taking large negative numbers to values close to zero and large positive numbers to values close to one. Thus, for instance, we might want to compute the goal probability of a wrist shot from 30 feet out (just below the tops of the circles), on the split line, neither on the rush nor a rebound. To do this, begin with the constant value -2.55. We have encoded distance by dividing by 89, so we multiply 30/89 by the distance coefficient of -2.86 to obtain -0.964. From the split line, the visible net is 1, so we add +1.15. Wrist and snap shots are taken as the base category, so no shot type term needs to be added. Since the shot is neither a rush shot nor a rebound, we have all the terms we need; adding them together gives -2.364. Applying the logistic function gives 8.6%, close to the historical percentage of six to eight percent from this area.
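The worked example above can be reproduced in a few lines of Python. The coefficient values are the fitted ones quoted in the table; the function and its signature are my own illustration, not code from the article:

```python
import math

# Fitted non-player coefficients quoted in the text above.
COEF = {
    "constant": -2.55, "slapshot": 0.0836, "tip": -0.222, "backhand": -0.175,
    "wraparound": -0.300, "rush": 0.228, "rebound": 0.754,
    "distance": -2.86, "visible_net": 1.15,
}

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def goal_probability(distance_ft, visible_net, shot_type="wrist",
                     rush=False, rebound=False):
    """Sum the relevant covariates, then apply the logistic function."""
    x = COEF["constant"]
    x += COEF["distance"] * (max(distance_ft, 10.0) / 89.0)  # distances under 10 ft are clamped
    x += COEF["visible_net"] * visible_net
    if shot_type in ("slapshot", "tip", "backhand", "wraparound"):
        x += COEF[shot_type]   # wrist/snap is the base category: no term
    if rush:
        x += COEF["rush"]
    if rebound:
        x += COEF["rebound"]
    return logistic(x)
```

For the 30-foot wrist shot from the split line, `goal_probability(30, 1.0)` recovers the 8.6% figure computed by hand above.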
The overall features of the model are more or less as expected---shots from farther away are less likely to go in, seeing more of the net is good, rush shots are good, rebound shots are even better. The (very slight) positive value for slapshots and negative value for tips and deflections may seem surprising at first; after all, slapshots are scored only rarely and tips score often. However, slapshots are systematically taken far from the net, while tips and deflections come almost always from close to the net. After accounting for shot location, there is almost no difference between wrist and slap shots; tips are, after all, somewhat less precise than wrist shots, and the player tipping the prior shot generally isn't looking at the net. As the above example shows, the model can already be used without specifying shooters or goaltenders. However, this is perhaps a little boring. Below are the values for all the goaltenders who faced at least one shot in the 2016-2018 regular seasons. I've inverted the scale so that the better performances are at the top. The scale is in the same units as for the non-player covariates above, so even the best or worst performances are smaller than the effect of a shot being a rush shot, for instance, consistent with goaltending performances being broadly similar across the league. Similarly for forward and defender results, which I've put on separate pages for performance reasons.
Certain questions of feedback stabilization for Navier-Stokes equations
Andrei Fursikov (Department of Mechanics & Mathematics, Moscow State University, Moscow 119991) and Alexey V. Gorshkov (Department of Mechanics and Mathematics, Moscow State University, 119991 Moscow, Russian Federation)
Evolution Equations & Control Theory, June 2012, 1(1): 109-140. doi: 10.3934/eect.2012.1.109
Received November 2011; Revised February 2012; Published March 2012
The authors study the stabilization problem for the Navier-Stokes and Oseen equations near a steady-state solution by feedback control. The cases of control in the initial condition (start control), as well as impulse and distributed controls on the right-hand side supported in a fixed subdomain of the domain $G$ filled with a fluid, are investigated. The cases of bounded and unbounded domain $G$ are considered.
Keywords: Navier-Stokes equations, control, stabilization, feedback operator, invariant manifold.
Mathematics Subject Classification: Primary: 93D15, 76D0.
Citation: Andrei Fursikov, Alexey V. Gorshkov. Certain questions of feedback stabilization for Navier-Stokes equations. Evolution Equations & Control Theory, 2012, 1 (1): 109-140. doi: 10.3934/eect.2012.1.109
Tuning Electronic Properties of Blue Phosphorene/Graphene-Like GaN van der Waals Heterostructures by Vertical External Electric Field
Jingjing Guo, Zhongpo Zhou, Hengheng Li, Haiying Wang and Chang Liu
Accepted: 6 May 2019
The structural and electronic properties of monolayer and bilayer blue phosphorene/graphene-like GaN van der Waals heterostructures are studied using first-principles calculations. The results show that the monolayer-blue phosphorene/graphene-like GaN heterostructure is an indirect bandgap semiconductor with intrinsic type II band alignment. More importantly, an external electric field tunes the bandgap of monolayer-blue phosphorene/graphene-like GaN and bilayer-blue phosphorene/graphene-like GaN, and the relationship between the bandgap and the external electric field indicates a Stark effect. A semiconductor-to-metal transition is observed in the presence of a strong electric field.
Keywords: Heterostructure; Blue phosphorene; Graphene-like GaN; External electric field
Two-dimensional (2D) materials such as graphene [1], transition metal dichalcogenides (TMDs) [2], black phosphorene (BP) [3], and graphene-like GaN (g-GaN) [4] have been in the spotlight owing to their fascinating physical properties and potential applications in devices. In this fast-emerging research area, the way in which heterostructures are assembled from isolated atomic layers remains an exciting research field. It is considered a novel way to construct devices, integrating the properties of each isolated component to achieve characteristics suited to nanoelectronics [5, 6]. Owing to the interaction between atomic layers [7], these heterostructures possess outstanding properties compared with pure 2D materials, and their properties are preserved without degradation when they are bonded together in a layer-by-layer way. To date, many efforts have been made to obtain van der Waals (vdW) heterostructures.
It is worth noting that blue phosphorene (blue-P)-based vdW heterostructures such as blue-P/TMDs [8–10] and blue-P/graphene [11] have attracted increasing attention due to their excellent electronic and optical characteristics. Among the above-mentioned 2D semiconductor materials, the blue-P monolayer was first prepared by epitaxial growth on Au (111) substrates in 2016 [7]. Z. Zhang et al. predicted the epitaxial growth of blue-P monolayers on GaN (001) substrates and proposed an unconventional "half-layer" growth mechanism. They also pointed out that blue-P is more stable on the GaN (001) surface due to the chemical affinity between phosphorus and gallium and the good lattice matching [12]. Blue-P, consisting of a vertically corrugated single layer of phosphorus atoms, attracts intense research interest due to its superb properties, such as a sizable bandgap and high mobility [13, 14]. In addition, g-GaN, as a novel 2D material, can be synthesized experimentally by means of a migration-enhanced encapsulated growth (MEEG) technique [15]. Theoretical simulation has shown that g-GaN is a semiconductor with an indirect bandgap, which can be efficiently manipulated by an external electric field [16]. Like other 2D materials, g-GaN can also be hydrogenated and halogenated conveniently. All these studies show that g-GaN is a promising 2D semiconductor for applications in many important fields. The lattice parameter of g-GaN matches well with that of blue-P, which indicates that blue-P/g-GaN is an ideal material system for constructing heterostructures, as well as an excellent inserting layer for tuning electronic properties through the interlayer interaction. In this regard, it is important to investigate the electronic and optical properties of blue-P/g-GaN vdW heterostructures. However, few studies have investigated the properties of blue-P/g-GaN vdW heterostructures [17, 18].
In this work, the electronic structural properties and the variation of the bandgap energy (Eg) with the vertical external electric field (Eext) in blue-P/g-GaN vdW heterostructures are evaluated using first-principles calculations with a vdW-corrected exchange-correlation functional. The band structures and electrical properties of the monolayer and bilayer blue-P/g-GaN vdW heterostructures have been investigated using the Cambridge Serial Total Energy Package (CASTEP) [19], which is based on density functional theory (DFT) [20, 21] in a plane-wave basis set with projector augmented-wave (PAW) potentials [22, 23]. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) [24] functional is adopted to describe the electron exchange-correlation energy. Since the GGA-PBE approximation usually underestimates the Eg of semiconductors, the hybrid functional HSE06 is employed to correct it. The effect of the vdW interaction [25] is described by Grimme's DFT-D2 method. Here, a 500 eV cut-off energy for the plane-wave basis was set to ensure the convergence of the total energy. A vacuum thickness of 20 Å along the Z direction of the blue-P/g-GaN heterostructures is added to eliminate spurious interactions between periodic images. The atomic positions are optimized until the convergence tolerance of the force on each atom is smaller than 0.001 eV/Å. The first Brillouin zone is sampled with a fine grid of 7 × 7 × 1 for structure optimization and 21 × 21 × 1 for electronic-state calculations. Several structures shown in our previous work have been studied as a benchmark to obtain the most stable structure of the bilayer heterostructures [18]. The optimized lattice constants are 3.25 Å and 3.20 Å for bilayer-blue-P and g-GaN, respectively, in agreement with reported studies [9, 26]. The lattice mismatch is only about 2% [18].
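The computational settings above can be collected into a single checkable summary. This is not CASTEP input syntax; the dictionary below merely restates the parameters quoted in the text (functional, cutoff, vacuum, force tolerance, k-grids) and re-derives the quoted lattice mismatch from the optimized lattice constants.

```python
# Hypothetical restatement of the calculation parameters quoted above;
# not an actual CASTEP input file, just a programmatic summary.
dft_settings = {
    "functional": "GGA-PBE",          # exchange-correlation functional
    "gap_correction": "HSE06",        # hybrid functional used to correct Eg
    "dispersion": "Grimme DFT-D2",    # vdW correction scheme
    "cutoff_eV": 500,                 # plane-wave cutoff energy
    "vacuum_A": 20,                   # vacuum thickness along z (angstrom)
    "force_tol_eV_per_A": 0.001,      # geometry-optimization convergence
    "kgrid_relax": (7, 7, 1),         # Brillouin-zone grid for relaxation
    "kgrid_bands": (21, 21, 1),       # grid for electronic-state calculation
}

# Lattice mismatch from the optimized lattice constants of the two layers.
a_blue_p, a_ggan = 3.25, 3.20        # angstrom
mismatch = abs(a_blue_p - a_ggan) / a_ggan
print(f"lattice mismatch: {mismatch:.1%}")  # → lattice mismatch: 1.6%
```

The ~1.6% value is consistent with the "only about 2%" mismatch quoted in the text.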
In order to obtain the minimum-energy configuration and evaluate the thermal stability of the structures, the blue-P layer is shifted relative to the g-GaN layer by finite amounts δx/y, and the lowest-energy configuration is identified. The evolution of the total energy difference as a function of δx and δy is shown in our previous studies [18]. Figure 1a shows the atomic structures of the side and top view of bilayer-blue-P on g-GaN. The optimum stacking mode of the blue-P bilayers is consistent with the previous paper [27]. Figure 1b demonstrates the relation between the binding energy (Eb) at the interface and the interlayer distance of blue-P and g-GaN (dblue-P/g-GaN). Its definition has been described in detail in our previous studies [18]. The Eb is about 49 meV for the single-layer blue-P with an equilibrium distance of 3.57 Å. For the bilayer, the binding energy is almost the same as that of the single layer, whereas the equilibrium distance is 3.52 Å. These binding energies are of the same order of magnitude as those of other vdW crystals, such as BP/graphene [Eb = 60 meV] [11], blue-P/graphene [Eb = 70 meV] [6], and bilayer blue-P [Eb = 25 meV] [27]. a Side and top view of bilayer blue-P on g-GaN. b Binding energy as a function of the distance dblue-P/g-GaN for the monolayer and bilayer system. The inset shows a zoom near the minimum of the binding energy Figure 2a-b displays the band structures of the monolayer-blue-P/g-GaN heterostructure and the bilayer-blue-P/g-GaN heterostructure, with Eg of 1.26 eV and 1.075 eV calculated using GGA, respectively. With the HSE06 method, the Eg is 2.2 eV and 1.91 eV, respectively. For both heterostructures, the minimal-energy states in the conduction band are near the M point and the maximal-energy states in the valence band are at the K point; these two points are not at the same crystal momentum in the Brillouin zone. Thus, both semiconductor heterostructures have an indirect bandgap.
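The equilibrium distance in Fig. 1b is the minimum of the binding-energy curve, which near the minimum is roughly parabolic and can therefore be located by a quadratic fit to a few sampled points. The sketch below uses hypothetical (d, Eb) samples generated from an assumed parabola; only the 3.57 Å equilibrium distance and ~49 meV binding energy come from the text.

```python
import numpy as np

# Hypothetical samples of the binding-energy curve near its minimum,
# generated from an assumed parabola Eb(d) = 200*(d - 3.57)**2 - 49 (meV).
d = np.array([3.40, 3.50, 3.57, 3.65, 3.75])   # interlayer distances (Å)
eb = 200.0 * (d - 3.57) ** 2 - 49.0            # binding energies (meV)

# Quadratic fit; the vertex -b/(2a) estimates the equilibrium distance.
a, b, c = np.polyfit(d, eb, 2)
d_eq = -b / (2.0 * a)
eb_min = np.polyval([a, b, c], d_eq)
print(f"equilibrium distance ≈ {d_eq:.2f} Å, Eb ≈ {eb_min:.0f} meV")
```

With real DFT data the samples would come from total-energy calculations at several fixed interlayer distances, but the vertex formula is the same.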
The Eg of the monolayer-blue-P/g-GaN heterostructure is 0.63 eV smaller than that of monolayer blue-P (1.89 eV), while the Eg of the bilayer-blue-P/g-GaN heterostructure is 0.043 eV smaller than that of bilayer blue-P (1.118 eV). The band bending can be obtained from the difference between the Fermi levels of the blue-P with the g-GaN system and the free-standing blue-P [28]: ΔEF = W − WP, where W is the work function of the composed system (blue-P/g-GaN), and WP is the work function of the pristine blue-P. ΔEF values of − 1.17 eV and − 0.81 eV are obtained for the monolayer-blue-P/g-GaN and bilayer-blue-P/g-GaN heterojunctions, respectively, as shown in Fig. 2c, d. As one can see, the energy band alignment at the interface is of the staggered-gap (type II) kind for both the monolayer-blue-P/g-GaN and bilayer-blue-P/g-GaN heterostructures. Band structures of a monolayer-blue-P/g-GaN heterostructure, and b bilayer-blue-P/g-GaN heterostructure, respectively; band alignments and work functions related to c monolayer-blue-P/g-GaN heterostructure and d bilayer-blue-P/g-GaN heterostructure A heterostructure is often subjected to an external electric field to tune its electronic properties when applied in nanoelectronic devices. In order to study the influence of Eext on the electronic structure, the band structures are calculated under different Eext for the blue-P/g-GaN heterostructures. As reported in previous work, the change in the geometrical structure of the heterostructure under Eext is negligible, but the band structure changes greatly [29]. Figure 3a shows the evolution of the Eg as a function of Eext from − 1.0 eV/Å to 1.0 eV/Å. The direction of Eext from top (g-GaN layer) to bottom (blue-P layer) is taken as the forward direction. It is clearly shown that the monolayer-blue-P/g-GaN and bilayer-blue-P/g-GaN heterostructures exhibit a bandgap modulation with Eext.
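The staggered (type II) alignment can be read off from the band-edge positions of the two layers on a common energy reference: type I if one layer's gap lies entirely inside the other's, type II if the edges are staggered, type III if the gaps do not overlap at all. A small classifier, using illustrative band-edge energies (not values computed in this work):

```python
def band_alignment(vbm_a, cbm_a, vbm_b, cbm_b):
    """Classify a heterojunction from the band edges (eV) of layers A and B."""
    # Type I (straddling): one gap entirely contains the other.
    if (vbm_a >= vbm_b and cbm_a <= cbm_b) or (vbm_b >= vbm_a and cbm_b <= cbm_a):
        return "type I (straddling)"
    # Type III (broken): the two gaps do not overlap in energy.
    if cbm_a <= vbm_b or cbm_b <= vbm_a:
        return "type III (broken)"
    # Otherwise both band edges of one layer sit below those of the other.
    return "type II (staggered)"

# Illustrative edges (eV, common vacuum reference): VBM/CBM of layers A and B.
print(band_alignment(-5.0, -3.5, -5.8, -4.2))  # → type II (staggered)
```

In a type II junction the lowest-energy electron and hole states live in different layers, which is why this alignment promotes electron-hole separation.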
For monolayer-blue-P/g-GaN, in the case of a forward Eext, the Eg increases linearly with Eext for Eext ≤ 0.4 eV/Å (L-increase range). The monolayer-blue-P/g-GaN heterostructure reaches its maximum Eg at Eext = 0.5 eV/Å and shows little change in the range 0.4 < Eext < 0.6 eV/Å (saturation range), which enhances the band offsets and thus promotes the separation of electron-hole pairs. The initial enlargement of Eg is attributed to Eext partially counterbalancing the built-in electric field (Eint). The Eg enters a linear-decrease range for Eext > 0.6 eV/Å (L-decrease range). Thus, the heterostructure shows metallic behavior when it is subjected to a stronger electric field. This originates from dielectric breakdown as well as charge tunneling. In contrast, under a reverse Eext the Eg declines linearly with increasing field strength (L-decrease range), caused by the conduction band minimum (CBM) band edge shifting toward the valence band maximum (VBM). However, at Eext = − 0.7 eV/Å, the bandgap begins to decrease sharply, which may be due to breakdown. When Eext < − 0.8 eV/Å, the blue-P/g-GaN heterojunction undergoes a transition from semiconductor to metal (metal range). These results reveal that both the Eg and the semiconductor-to-metal transition of the blue-P/g-GaN heterostructure depend on electrostatic gating, which could be exploited in high-performance electronic and optoelectronic devices. In addition, the effect of Eext on the Eg of the bilayer-blue-P/g-GaN heterostructure is the same as for the single layer, but the transition from semiconductor to metal occurs at a smaller electric field. a Eg vs Eext of monolayer-blue-P/g-GaN and bilayer-blue-P/g-GaN heterostructures. b–e The band structures of the monolayer-blue-P/g-GaN heterostructure with Eext of 0.3 eV/Å, 0.5 eV/Å, − 0.3 eV/Å, and − 0.7 eV/Å.
The EF is set to 0 and indicated by the red dashed line To explore the effect of the electric field on the band structure, the relation between the energy band structures and the external electric field is calculated. The band structures of the monolayer-blue-P/g-GaN heterostructures with Eext of 0.3 eV/Å, 0.5 eV/Å, − 0.3 eV/Å, and − 0.7 eV/Å are shown in Fig. 3b–e. Under Eext of 0.3 eV/Å and 0.5 eV/Å (Fig. 3b-c), the Eg increases to 1.651 eV and 1.757 eV, respectively. This indicates that the quasi-Fermi level of the g-GaN monolayer is shifted downward, and the quasi-Fermi level of the blue-P monolayer is lifted upward. However, for Eext of − 0.3 eV/Å and − 0.7 eV/Å (Fig. 3d-e), the Eg decreases to 0.888 eV and 0.49 eV, respectively. The quasi-Fermi level of g-GaN moves upward, and the quasi-Fermi level of blue-P moves downward. The results show that the bandgap varies linearly with the applied vertical Eext, indicating a giant Stark effect [30]. Upon applying a vertical Eext, the subband states of the valence and conduction bands undergo mixing, leading to a field-induced splitting of the electronic levels. The electrostatic potential difference induced by the external field considerably changes the electronic structures near the Fermi level [31]. Figure 4a–d shows the isosurface of charge accumulation (in orange) and depletion (light green), which exhibits the change in charge density of the blue-P/g-GaN heterojunction for Eext values of 0.3 eV/Å, 0.5 eV/Å, − 0.3 eV/Å, and − 0.7 eV/Å, respectively. Upon applying a forward Eext, as exhibited in Fig. 4a-b, positive charges (holes) tend to transfer from the blue-P layer to the g-GaN layer, and negative charges (electrons) transfer from g-GaN to the blue-P layer. Concurrently, one can see that the amount of transferred charge at Eext = 0.5 eV/Å exceeds that at 0.3 eV/Å.
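In the linear (Stark) regime the gap behaves as Eg(Eext) ≈ Eg(0) + S·Eext. Using the monolayer gap values quoted above (0.888 eV at −0.3 eV/Å, 1.26 eV at zero field, 1.651 eV at +0.3 eV/Å), a least-squares line gives a rough Stark slope S; treat this as an order-of-magnitude estimate extracted from the quoted points, not a coefficient reported by the study.

```python
import numpy as np

# Field (in the eV/Å units used in the text) and monolayer gap (eV)
# at the three points quoted for the linear range.
e_ext = np.array([-0.3, 0.0, 0.3])
e_gap = np.array([0.888, 1.26, 1.651])

slope, intercept = np.polyfit(e_ext, e_gap, 1)
print(f"Stark slope ≈ {slope:.2f} eV per (eV/Å); zero-field gap ≈ {intercept:.2f} eV")
```

The near-linear fit (slope ≈ 1.3) is what the text refers to as a giant Stark effect; deviations appear only near the saturation and breakdown ranges.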
Essentially, a positive external electric field drives the charge along the field direction, confining it within the atomic planes and thereby facilitating the transfer of charge from blue-P to g-GaN. In contrast, a negative Eext induces electrons to accumulate/deplete at the opposite side, as visualized in Fig. 4c-d. A negative external electric field drives the charge in the opposite direction and thus transfers charge from g-GaN to blue-P. Accordingly, the quasi-Fermi level of the g-GaN monolayer and EVBM rise, while the quasi-Fermi level of the blue-P monolayer and ECBM decrease, resulting in a linear reduction of the bandgap. Simultaneously, electrons are transferred from blue-P to g-GaN under a reverse Eext. It is found that the amount of transferred charge increases with increasing electric field intensity. a–d Isosurface of charge accumulation and depletion of the monolayer-blue-P/g-GaN heterostructure under Eext of 0.3 eV/Å, 0.5 eV/Å, − 0.3 eV/Å, and − 0.7 eV/Å, respectively. Orange and light green isosurfaces represent positive charge accumulation and charge depletion, respectively. e Planar-averaged electron density Δρ(z) at different electric fields for monolayer-blue-P/g-GaN To clarify how Eext modulates the electronic properties, the integrated charge density difference of the monolayer-blue-P/g-GaN heterostructure as a function of the perpendicular distance is calculated and displayed in Fig. 4e. Positive values in Fig. 4e indicate charge accumulation, and negative values represent charge depletion. For Eext = 0, the charge density difference of the heterostructure is obtained by ∆ρ = ρheterostructure − ρg-GaN − ρblue-P. The change of the plane-averaged charge density difference at the interface indicates that electrons are transferred from the g-GaN layer to the blue-P layer across the interface, whereas the holes remain on the g-GaN side.
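The planar-averaged density difference used above simply integrates each 3D charge density over the in-plane coordinates before subtracting. A minimal numpy sketch on a regular grid, with synthetic densities standing in for the DFT output:

```python
import numpy as np

def planar_avg_diff(rho_a, rho_b, dx, dy):
    """Integrate rho(x, y, z) over x and y for each z, then subtract:
    Δρ(z) = ∫rho_a dxdy - ∫rho_b dxdy.  Arrays have shape (nx, ny, nz);
    the in-plane integral is a simple rectangle rule, sum * dx * dy."""
    return (rho_a.sum(axis=(0, 1)) - rho_b.sum(axis=(0, 1))) * dx * dy

# Synthetic example: the perturbed density adds an in-plane-uniform
# z-dependent sheet of charge f(z) on top of a random background.
nx, ny, nz = 8, 8, 40
dx = dy = 0.5                       # grid spacings (arbitrary units)
z = np.linspace(-1.0, 1.0, nz)
rho_0 = np.random.default_rng(0).random((nx, ny, nz))
f = np.exp(-z ** 2)                 # known perturbation profile
rho_ext = rho_0 + f[None, None, :]

delta = planar_avg_diff(rho_ext, rho_0, dx, dy)
# Expected analytically: f(z) * (nx*dx) * (ny*dy) = f(z) * 16
```

The random background cancels in the subtraction, leaving only the field-induced profile, which is exactly the quantity plotted in Fig. 4e.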
The surface-averaged differential charge under an electric field is calculated for 0.3 eV/Å and − 0.3 eV/Å. The Eext can influence the transfer of charges in the heterostructure. It can be described as [29] $$ \Delta \rho_{E_{\mathrm{ext}}}(z)=\int {\rho}_{E_{\mathrm{ext}}}\left(x,y,z\right)\, dx\, dy-\int {\rho}_{E_0}\left(x,y,z\right)\, dx\, dy $$ where \( \int {\rho}_{E_{\mathrm{ext}}}\left(x,y,z\right)\, dx\, dy \) and \( \int {\rho}_{E_0}\left(x,y,z\right)\, dx\, dy \) are the planar-integrated charge densities at height z in the supercell of the monolayer-blue-P/g-GaN heterostructure with and without Eext, respectively. The direction of charge transfer induced by the negative (blue line) Eext is opposite to that of the positive (red line) Eext. The integrated charge density quantitatively illustrates that the amount of transferred charge increases with the strength of Eext. The amount of charge transferred in the blue-P/g-GaN heterostructure at Eext = 0.3 eV/Å is larger than that at 0 eV/Å and − 0.3 eV/Å, because the positive external electric field localizes the charges along the direction of the applied field, confining the charges to the g-GaN planes. In order to distinguish the contributions of blue-P and g-GaN to the band structure, the projected density of states of the heterostructures is calculated and shown in Fig. 5a. It can be seen that the VBM contribution mainly comes from g-GaN, and the CBM contribution mainly from blue-P. Figure 5b displays the isosurface of charge accumulation and depletion of the monolayer-blue-P/g-GaN and bilayer-blue-P/g-GaN under 0.5 eV/Å and 0.7 eV/Å external fields, respectively. Due to the dielectric breakdown of the bilayer-blue-P/g-GaN at a 0.7 eV/Å external field, the current related to the charge transfer would have saturated under the increasing external field, which is in accordance with Fig. 3a. a TDOS of bilayer-blue-P/g-GaN heterostructure. PDOS of P, Ga, and N in the heterostructure.
b Isosurface of charge accumulation and depletion of monolayer-blue-P/g-GaN heterostructure under Eext of 0.3 eV/Å, 0.5 eV/Å, − 0.3 eV/Å, and − 0.7 eV/Å, respectively In summary, the structural and electronic properties of the monolayer-blue-P/g-GaN and bilayer-blue-P/g-GaN vdW heterostructures are investigated using first-principles calculations. The results show that the monolayer-blue-P/g-GaN heterostructure is an indirect bandgap semiconductor with intrinsic type II band alignment. The band offset and Eg of monolayer-blue-P/g-GaN and bilayer-blue-P/g-GaN can be continuously tuned by Eext, and the relation between Eg and Eext indicates a Stark effect. The Eg becomes zero at − 0.8 and 0.9 eV/Å for monolayer-blue-P/g-GaN, and − 0.5 and 0.7 eV/Å for bilayer-blue-P/g-GaN, indicating a transition from semiconductor to metal. Blue-P: Blue phosphorene; CASTEP: Cambridge Serial Total Energy Package; CBM: Conduction band minimum; DFT: Density functional theory; GGA: Generalized gradient approximation; g-GaN: Graphene-like GaN; MEEG: Migration-enhanced encapsulated growth; PAW: Projector augmented wave; PBE: Perdew-Burke-Ernzerhof; TMDs: Transition metal dichalcogenides; VBM: Valence band maximum; vdW: van der Waals The authors are grateful to Dr. Xiaodong Yang from Nanjing University, Prof. Shengzhan Lu and Prof. Tianxing Wang from Henan Normal University for their help with the DFT calculations. This work is supported by the High-Performance Computing Center of Henan Normal University. This work is supported by NSFC nos. 11404100 and 11304083, the Young Scholar Foundation of Henan Normal University no. 5101029470616, and the surplus foundation for vertical scientific research projects of Henan Normal University no. 5201029120301. This work is also supported by the China Scholarship Council (nos. 201608410308 and 201608410415). The datasets generated during and/or analyzed during the current study are available from the corresponding author on request.
JG and ZZ designed the simulation, analyzed the data, and wrote the paper. HL, HW, and CL checked the manuscript. All authors read and approved the final manuscript. Henan Key Laboratory of Photovoltaic Materials, and School of Physics and Materials Science, Henan Normal University, Xinxiang, 453007, China Key Laboratory of Artificial Micro- and Nano-structures of Ministry of Education, and School of Physics and Technology, Wuhan University, Wuhan, 430072, China Novoselov KS, Geim AK, Morozov SV, Jiang D, Zhang Y, Dubonos SV, Grigorieva IV, Firsov AA (2004) Electric field effect in atomically thin carbon films. Science. 306:666–669 Ramasubramaniam A, Naveh D, Towe E (2011) Tunable band gaps in bilayer transition-metal dichalcogenides. Phys Rev B 84:205325 Rodin AS, Carvalho A, Castro Neto AH (2014) Strain-induced gap modification in black phosphorus. Phys Rev Lett 112:176801 Al Balushi ZY, Wang K, Ghosh RK, Vila RA, Eichfeld SM, Caldwell JD, Qin X, Lin YC, DeSario PA, Stone G, Subramanian S, Paul DF, Wallace RM, Datta S, Redwing JM, Robinson JA (2016) Two-dimensional gallium nitride realized via graphene encapsulation. Nat Mater. 15:1166–1171 Roy T, Tosun M, Kang JS, Sachid AB, Desai SB, Hettick M, Hu CC, Javey A (2014) Field-effect transistors built from all two-dimensional material components. ACS Nano. 8:6259–6264 Sun M, Chou JP, Ren Q, Zhao Y, Yu J, Tang W (2017a) Tunable Schottky barrier in van der Waals heterostructures of graphene and g-GaN. Appl. Phys. Lett. 110:192 Deng D, Novoselov KS, Fu Q, Zheng N, Tian Z, Bao X (2016) Catalysis with two-dimensional materials and their heterostructures. Nat Nanotechnol.
11:218–230 Peng Q, Wang Z, Sa B, Wu B, Sun Z (2016a) Electronic structures and enhanced optical properties of blue phosphorene/transition metal dichalcogenides van der Waals heterostructures. Sci Rep. 6:31994 Peng Q, Wang Z, Sa B, Wu B, Sun Z (2016b) Blue phosphorene/MS2 (M = Nb, Ta) heterostructures as promising flexible anodes for lithium-ion batteries. ACS Appl Mater Interfaces. 8:13449–13457 Zhang ZY, Si MS, Peng SL, Zhang F, Wang YH, Xue DS (2015) Bandgap engineering in van der Waals heterostructures of blue phosphorene and MoS2 : a first principles calculation. J Solid State Chem 231:64–69 Padilha JE, Fazzio A, da Silva AJ (2015) Van der Waals heterostructure of phosphorene and graphene: tuning the Schottky barrier and doping by electrostatic gating. Phys Rev Lett. 114:066803 Zeng J, Cui P, Zhang Z (2017) Half Layer By Half Layer Growth of a Blue Phosphorene Monolayer on a GaN(001) Substrate. Phys Rev Lett. 118:046101 Zhu Z, Tomanek D (2014) Semiconducting layered blue phosphorus: a computational study. Phys. Rev. Lett. 112:176802 Xiao J, Long M, Zhang X, Ouyang J, Xu H, Gao Y (2015) Theoretical predictions on the electronic structure and charge carrier mobility in 2D phosphorus sheets. Sci Rep. 5:9961 Balushi ZYA, Wang K, Ghosh RK, Vilá RA, Eichfeld SM, Caldwell JD, Qin X, Lin YC, Desario PA, Stone G (2016) Two-dimensional gallium nitride realized via graphene encapsulation. Nature Materials. 15:1166 Chen Q, Hu H, Chen X, Wang J (2011) Tailoring band gap in GaN sheet by chemical modification and electric field: Ab initio calculations. Appl Phys Lett. 98:1687 Sun M, Chou JP, Yu J, Tang W (2017b) Electronic properties of blue phosphorene/graphene and blue phosphorene/graphene-like gallium nitride heterostructures.
Phys Chem Chem Phys. 19:17324–17330 Guo J, Zhou Z, Wang T, Lu Z, Yang Z, Liu C (2017) Electronic structure and optical properties for blue phosphorene/graphene-like GaN van der Waals heterostructures. Curr Appl Phys. 17:1714–1720 Clark SJ, Segall M, Pickard CJ, Hasnip P, Probert M, Refson K, Payne MC (2005) First principles methods using CASTEP. 220:567–570 Klimes J, Bowler DR, Michaelides A (2010) Chemical accuracy for the van der Waals density functional. J Phys Condens Matter. 22:022201 Kresse G, Furthmüller J (1996) Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Physical Review B 54:11169–11186 Blöchl PE (1994) Projector augmented-wave method. Phys Rev B. 50:17953–17979 Kresse GJ, Joubert D (1999) From ultrasoft pseudopotentials to the projector augmented-wave method. 59:1758 Perdew JP, Burke K, Ernzerhof M (1997) Generalized gradient approximation made simple [Phys. Rev. Lett. 77, 3865 (1996)]. Phys Rev Lett 78:1396–1396 Grimme S, Antony J, Ehrlich S, Krieg H (2010) A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu. J Chem Phys. 132:154104 Shao Y, Xiao Z, Bi C, Yuan Y, Huang J (2014) Origin and elimination of photocurrent hysteresis by fullerene passivation in CH3NH3PbI3 planar heterojunction solar cells. Nature Communications. 5:5784 Mogulkoc Y, Modarresi M, Mogulkoc A, Ciftic YO (2016) Electronic and optical properties of bilayer blue phosphorus. Computational Materials Science. 124:23–29 Khomyakov PA, Giovannetti G, Rusu PC, Brocks G, van den Brink J, Kelly PJ (2009) First-principles study of the interaction and charge transfer between graphene and metals.
Physical Review B. 79 Xiong W, Xia C, Du J, Wang T, Peng Y, Wei Z, Li J (2017) Band engineering of the MoS2/stanene heterostructure: strain and electrostatic gating. Nanotechnology. 28:195702 Huang L, Li J (2016) Tunable electronic structure of black phosphorus/blue phosphorus van der Waals p-n heterostructure. Applied Physics Letters. 108:083101 Tian X-Q, Wang X-R, Wei Y-D, Liu L, Gong Z-R, Gu J, Du Y, Yakobson BI (2017) Highly tunable electronic structures of phosphorene/carbon nanotube heterostructures through external electric field and atomic intercalation. Nano Lett. 17:7995–8004
Perceived parental support in childhood and adolescence and suicidal ideation in young adults: a cross-sectional analysis of the i-Share study Melissa Macalli1, Marie Tournier1,2, Cédric Galéra1, Ilaria Montagni1, Aicha Soumare1, Sylvana M. Côté1,3 & Christophe Tzourio ORCID: orcid.org/0000-0002-6517-29841,4 BMC Psychiatry volume 18, Article number: 373 (2018) Suicidal ideation and suicidal risk assessment are major concerns for health professionals. The perception of a low level of parental support is a risk factor for suicidal tendencies among adolescents, but little is known about its long-term impact on the vulnerability to suicidal behavior in young adults. We investigated whether the perceived level of parental support during childhood and adolescence was associated with current suicidal ideation in young adults. We retrieved data collected in the i-Share study from February 1st, 2013 through January 30, 2017. This cross-sectional study included 10,015 French students, aged 18–24 years, that completed an on-line self-reported questionnaire about suicidal ideation in the last 12 months and their perceived parental support in childhood and adolescence. We performed multinomial logistic regressions and sensitivity analyses to assess associations between the degree of perceived parental support and the frequency of suicidal thoughts, after adjusting for the main known risk factors of suicidal ideation. We employed multiple imputation to account for missing data. The study sample included 7539 female (75.7%) and 2436 male (24.3%) students (mean [SD] age 20.0 [1.8] years). About one in five students reported occasional suicidal thoughts (n = 1775, 17.7%) and 368 students (3.7%) reported frequent suicidal thoughts. The adjusted multinomial logistic regression revealed a significant negative association between perceived parental support and suicidal thoughts.
A lack of perceived parental support in childhood and adolescence was associated with > 4-fold elevated risk of occasional (adjusted OR, 4.55; 95% CI: 2.97–6.99) and nearly 9-fold elevated risk of frequent (adjusted OR, 8.58; 95% CI: 4.62–15.96) suicidal thoughts, compared to individuals that perceived extremely strong parental support. This association was strongest among students with no personal history of depression or suicide attempts. Students that perceived low levels of past parental support had a higher risk of suicidal ideation. Past perceived parental support appeared to be a potent marker of suicidal risk in young adults. This marker should be routinely collected in studies on suicidal risk in young adults, and it could be considered an additional screening tool. Suicide is the second leading cause of death among young adults between the ages of 15 and 29 years [1, 2], including college students [3]. The estimated prevalence of suicide ideation ranges from 6 to 12% among college students [4,5,6,7]. A recent meta-analysis pooled 36 college student cohorts (i.e., 634,662 students) and estimated that the 12-month prevalence of suicidal ideation was 10.6% (95% CI: 9.1–12.3). There was no significant difference between the prevalence estimates according to European and North American nationalities [8]. Suicidal ideation is common in young adults. It is the first step on the pathway to suicide [5] and one of the main risk factors for suicide attempts and suicides [9,10,11]. Suicidal behaviors are the result of complex interactions between social, psychological, and environmental risk factors. Moreover, many investigators have shown that, within this etiological heterogeneity, familial contributions are potent factors. Adoption and twin studies suggested that genetic factors account only in part for transmission of suicidal behavior [12]. 
Suicidal behaviors have also been associated with exposure of children and adolescents to domestic violence or sexual abuse, family conflicts, parent loss, parental divorce or separation, and a family history of mood disorder or substance abuse [5, 13,14,15]. Some studies have suggested that low levels of perceived parental support (PPS) were associated with higher suicidal ideation in adolescents [16,17,18], but little is known about this relationship in young adults. In this study, we tested the hypothesis that a low level of PPS in childhood and adolescence induced a persistent impact on the vulnerability to suicidal ideation in young adults. Study population and data collection Our study sample comprised participants of the ongoing, internet-based, Students' Health Research Enterprise (i-Share) project, a prospective, population-based study of students in French-speaking universities and higher education institutions. Enrollment in the i-Share project started in 2013; to be eligible, a student had to be officially registered at a University or higher education institute, at least 18 years of age, able to read and understand French, and provide informed consent for participation. No compensation was given to students for completing the questionnaire on which the analyses of the current paper are based. The i-Share protocol was approved by the "Commission nationale de l'informatique et des libertés" (CNIL- National Commission of Informatics and Liberties), which ensures that data collection does not violate freedom, rights, or human privacy. No information on ethnic or racial origin was collected in i-Share. Students were informed about the purpose and aims of the study through flyers, communications in classes, social media, and a newsletter (http://www.i-Share.fr). After formal pre-registration on the i-Share online portal, a change of password, and validation of the informed consent, students completed the self-administered baseline questionnaire. 
This questionnaire recorded information on the participant's health status, personal and family medical history, sociodemographic characteristics, and lifestyle habits. For this cross-sectional study, we acquired data from a large sample of students that had participated in the i-Share cohort study between February 2013 and January 2017. Students were eligible only when they completed all items in the questionnaire. We restricted our analyses to students aged 18–24 years, because this age range is defined as young adulthood by the World Health Organization [19], and it is typically associated with major changes in a student's life. Outcome: Suicidal thoughts Suicidal thoughts were investigated with the question: In the last 12 months, how often have you thought of committing suicide (had suicidal ideation)? Participants selected one of three possible responses: (1) no suicidal thoughts, (2) occasional suicidal thoughts, and (3) frequent suicidal thoughts. Exposure variable: Perceived parental support in childhood and adolescence PPS was investigated with the question: During your childhood and adolescence, how would you describe the support and comfort provided by your family? Participants selected one of five different responses: (1) none, (2) low, (3) moderate, (4) strong, and (5) extremely strong. The following potentially confounding self-reported variables were considered in the analyses: age, gender (male, female), parents divorced or separated (yes, no), parental death (yes, no), did not live in parental home during childhood (yes, no), perceived economic status in childhood (adequate to very comfortable, difficult to very difficult), parental history of depression or anxiety (yes, no), and personal history of depression or attempted suicide (yes, no). Given the low number of students with a history of attempted suicide, the history of depression and attempted suicide were grouped into one variable.
Descriptive analyses We first described the variables in the whole sample and divided individuals into three groups, according to the frequency of suicidal thoughts. Continuous variables are expressed as the mean ± SD. Categorical variables are described as the proportion (range). The Kruskal-Wallis test was used to compare the distributions of age in the three groups of suicidal thoughts. Proportions were compared with the Chi-square test. Model construction Unadjusted and adjusted multinomial logistic regression models were used to assess the relationship between PPS and the frequency of suicidal thoughts. Model convergence was systematically checked. The assumption of linearity of the logit was tested for the continuous variable, age. We also tested interactions between PPS and the covariates included in the model. The fully adjusted complete-case analysis took into account age, gender, parental divorce or separation, parental death, not living in the parental home during childhood, perceived economic status in childhood, and a personal history of depression or attempted suicide. We also conducted a sensitivity analysis in another adjusted model, which included the additional factor: parental history of depression or anxiety. Non-response and multiple imputation Students were allowed to declare that they were not willing to answer certain questions considered sensitive, such as suicidal thoughts, PPS, personal history of depression or attempted suicide, parental divorce, parental death, and not living in the parental home in childhood. These refusals were coded as missing data. The literature shows the value of imputing missing data rather than excluding incomplete cases [20]. It also highlights the pitfalls associated with misuse of this method [21], such as making a false assumption of missingness at random. It is therefore advisable to present the results of both approaches and compare them.
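The core of chained-equation imputation can be sketched deterministically: each incomplete column is repeatedly regressed on the other columns and its missing entries replaced by fitted values. Full MICE, as used in this study, additionally adds a random draw to each prediction and repeats the whole process to create several imputed datasets (10 here); that stochastic layer is omitted in this illustrative skeleton.

```python
import numpy as np

def chained_imputation(X, n_iter=5):
    """Deterministic skeleton of chained-equation imputation: each incomplete
    column is regressed (OLS with intercept) on the other columns, and its
    missing (NaN) entries are replaced by the fitted values, iteratively."""
    X = np.asarray(X, dtype=float).copy()
    miss = np.isnan(X)
    # Initial fill: column means of the observed values.
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            predictors = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(X.shape[0]), predictors])
            beta, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ beta
    return X

# Toy demonstration: column 1 depends linearly on column 0, so its masked
# entries are recovered from the other columns.
x = np.linspace(0.0, 1.0, 50)
data = np.column_stack([x, 2 * x + 1, x ** 2 + 0.3])
data[[5, 20], 1] = np.nan
filled = chained_imputation(data)
```

Because the toy relation is exactly linear, the imputed values match the true ones; real survey data would of course be recovered only approximately.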
Thus, in the main analysis, participants with missing data were excluded in order to present a complete-case analysis. Then we performed a sensitivity analysis with multiple imputation on missing data in order to take into account non-response among eligible students. We chose the method known as multiple imputation by chained equations (MICE). This method is based on a Markov chain Monte Carlo algorithm adapted for imputation of arbitrary (non-monotone) missing-data patterns [22]. First, we analyzed the shape of the refusal data; then we formulated the assumption of missing-at-random data. We hypothesized that the typology of the non-response data was informative, and that it did not occur completely at random, because it reflected the student's unwillingness to answer. Therefore, according to this method, we assumed that all the information on the missing data was contained in the observed data. This method assumes that the distributions of the incomplete variables are conditional on the other variables of the imputation model. We performed 10 imputations. For each set of imputed data, we estimated the imputed model parameters by taking into account all variables of the chosen model. Attributable fraction The attributable fraction (AF) was defined as the proportion of suicidal thoughts attributable to PPS, calculated as follows [23]: $$ \mathrm{AF}=\frac{p\left( RR-1\right)}{p\left( RR-1\right)+1} $$ where p represents the underlying prevalence of the risk factor (low PPS) in our population; i.e., the prevalence of low PPS levels (including questionnaire responses of moderate, low, or none) in our sample. RR is the risk of suicidal thoughts in the exposed population (i.e., the population that described PPS as moderate, low, or none) divided by the risk of suicidal thoughts in the unexposed population (the population that described PPS as extremely strong or strong).
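The attributable-fraction formula above translates directly into code. The numbers in the example are illustrative only, not the study's estimates:

```python
def attributable_fraction(p, rr):
    """AF = p*(RR - 1) / (p*(RR - 1) + 1), where p is the prevalence of the
    risk factor and RR the relative risk in the exposed vs unexposed group."""
    excess = p * (rr - 1.0)
    return excess / (excess + 1.0)

# Illustrative: a risk factor present in 30% of the population with RR = 2.5.
af = attributable_fraction(0.30, 2.5)
print(f"attributable fraction ≈ {af:.1%}")  # → attributable fraction ≈ 31.0%
```

Note that AF rises with both the prevalence of the exposure and the strength of the association, and is zero when RR = 1.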
This calculation was also used to determine the AF for other family events measured in the study. All p-values were two-tailed, and p < 0.05 was considered statistically significant. Analyses were performed with SAS version 9.4 (SAS Institute Inc., Cary, NC, USA). Sample description based on suicidal thoughts Of the 11,968 students that fully completed the questionnaire, 720 were excluded because they were not between 18 and 24 years of age. Another 1233 (11%) were excluded because they were not willing to answer questions related to suicidal thoughts, PPS, and confounding variables (Fig. 1). For all variables, excluded non-respondents comprised less than 4% (range: 1 to 4%). Participants were allowed to refuse to answer the question regarding parental history of depression or anxiety. In this case, a rather large number of individuals (n = 1045, 10%) refused to respond; therefore, we decided to include this group of participants in the analyses. The final study population included 10,015 students. Flow chart of the study population (i-Share cohort, 2013–2017) Compared to the 10,015 included participants, the 1233 excluded students that did not answer the questions of interest reported more occasional (n = 208, 25.6% vs. n = 1775, 17.7%) and frequent (n = 77, 9.5% vs. n = 368, 3.7%, p < 0.0001) suicidal thoughts. Additionally, compared to included students, the excluded students were more likely to describe PPS as "none" and to respond "yes" to a personal history of depression, attempted suicide, or negative childhood events (Table 1). Table 1 Comparison of key variables between participants and nonparticipants* In our sample, 7872 (78.7%) students reported no suicidal thoughts. About one student in five reported suicidal thoughts in the past 12 months; of these, 1775 (17.7%) had occasional suicidal thoughts and 368 (3.7%) had frequent suicidal thoughts. Table 2 shows the sample characteristics, both overall and categorized by the type of suicidal thoughts.
The mean age of the participants was 20.0 years (±1.8) and about three quarters were female (n = 7539; 75.7%). Participants with occasional and frequent suicidal thoughts were more likely to declare negative family events, such as parental divorce (p < 0.0001) or parental death (p = 0.0004), compared to those with no suicidal thoughts. They were more likely to be scholarship holders (p < 0.0001) and to describe their economic status in childhood as difficult to very difficult (p < 0.0001). Students that declared suicidal thoughts also reported more parental history of depression or anxiety than those with no suicidal thoughts. A personal history of depression or attempted suicide was reported by more than half the students with frequent suicidal thoughts (n = 216; 58.7%), about a quarter of those with occasional suicidal thoughts (n = 496; 27.9%), and less than 10% of those without suicidal thoughts (n = 662; 8.4%, p < 0.0001). Table 2 Characteristics of the study sample categorized by the frequency of suicidal thoughts over the preceding year* The proportion of participants that reported strong PPS (including extremely strong and strong) declined as the frequency of suicidal thoughts increased (p < 0.0001). Conversely, the proportion of participants that reported weak PPS (including questionnaire responses of moderate, low, and none) increased with increases in the frequency of suicidal thoughts. Thus, a total lack of PPS was more common among participants with frequent suicidal thoughts (n = 28; 7.6%) than among those with occasional suicidal thoughts (n = 56, 3.2%) or those without suicidal thoughts (n = 72, 0.9%). Association between PPS in childhood and adolescence and suicidal thoughts over the preceding year Table 3 summarizes the unadjusted and adjusted multinomial logistic model results. We observed a negative association between PPS and suicidal thoughts.
The unadjusted univariate multinomial logistic regression also showed that lower levels of PPS were associated with higher frequencies of suicidal thoughts. Moreover, having a total lack of PPS (questionnaire response: 'none') strongly increased the odds of having suicidal thoughts occasionally (OR, 5.77; 95% CI: 4.00–8.32) and frequently (OR, 18.38; 95% CI: 11.08–30.48), compared to having an extremely strong PPS (Table 3). Table 3 Associations between perceived parental support in childhood and adolescence and the frequency of suicidal thoughts over the preceding year* A multiple adjustment to the model reduced the estimated values, but they remained relatively high. Thus, a total lack of PPS showed aORs of 4.43 for occasional suicidal thoughts (95% CI: 3.02–6.49) and 9.81 for frequent suicidal thoughts (95% CI: 5.60–17.19). In all models, we observed that the PPS was negatively associated with the risk of suicidal thoughts. Further adjusting with the parental history of depression or anxiety did not change the general interpretation of the results (Table 3). We tested the interaction between gender and perceived parental support; it was not significant (p for interaction = 0.40). To investigate whether there was a statistically significant interaction between PPS and a personal history of depression or attempted suicide, we performed a stratified regression analysis. The risks of both occasional and frequent suicidal thoughts increased with decreases in the level of PPS in both strata, but the risks were higher among students without a personal history of depression or attempted suicide (p for interaction < 0.001; Table 4). For example, a total lack of PPS was associated with a 6-fold risk of frequent suicidal thoughts (aOR, 6.01; 95% CI: 2.65–14.00) among students with a personal history of depression or attempted suicide, but the risk was eight-fold (aOR, 8.06; 95% CI: 2.92–22.23) for students without a personal history of depression or attempted suicide.
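A quick sanity check that can be applied to any reported odds ratio: for a Wald-type confidence interval, the point estimate sits midway between the bounds on the log scale, so the OR should roughly equal the geometric mean of the CI limits. Applied to the adjusted estimates for a total lack of PPS quoted above:

```python
import math

def implied_or(lo, hi):
    """Point estimate implied by a Wald-type 95% CI (geometric mean of bounds)."""
    return math.sqrt(lo * hi)

# CIs reported in the text for a total lack of PPS.
occ = implied_or(3.02, 6.49)    # ≈ 4.43, matching the reported aOR
freq = implied_or(5.60, 17.19)  # ≈ 9.81, matching the reported aOR
```

Both reported aORs are consistent with their confidence intervals under this check, which also catches transcription errors where a CI cannot contain its own point estimate.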
Table 4 Associations between perceived parental support in childhood and adolescence and suicidal thoughts over the preceding year, categorized by whether individuals had a personal history of depression or attempted suicide* When the model was tested after adding multiple imputations of non-response data, we found that the relative efficiency of the imputation on each of the variables was greater than 95%. This finding indicated that the number of imputations was sufficient for the fraction of non-response data. The estimations obtained from the imputed model were close to those obtained from the non-imputed model (Table 5). The association between PPS and suicidal thoughts was statistically significant for all models (p < 0.05) except in the multiple imputation models for the students without a history of depression or attempted suicide. Table 5 Associations between perceived parental support and suicidal thoughts in an adjusted multinomial logistic regression model after multiple imputation for non-response data* We also found that the risks associated with PPS were higher than the risks associated with other negative childhood events measured in our study (Table 6), independent of the adjustment. This was confirmed by the calculation of the attributable fraction. We found that 20.5% of occasional and frequent suicidal thoughts could be attributed to an insufficient PPS (frequency of PPS rated 'moderate', 'low', or 'none' in our sample: 26%), which was the same percentage (20.5%) attributed to a personal history of depression or attempted suicide (frequency of personal history of depression or attempted suicide in our sample: 13.7%). Both these percentages were notably higher than the percentage of suicidal thoughts attributed to parental divorce (6.9%) or parental death (1.1%).
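The AF formula from the Methods can also be inverted to recover the relative risk implied by the reported figures (20.5% of suicidal thoughts attributed to low PPS, at a 26% exposure prevalence). This is a back-of-the-envelope check on internal consistency, not a quantity reported by the authors:

```python
def implied_rr(af, p):
    """Invert AF = p(RR-1) / (p(RR-1) + 1) to obtain RR, given the
    attributable fraction af and the exposure prevalence p."""
    return 1.0 + af / (p * (1.0 - af))

# Figures reported in the text: AF = 20.5%, low-PPS prevalence = 26%.
rr = implied_rr(af=0.205, p=0.26)  # ≈ 2.0, i.e. roughly a doubling of risk
```

An implied relative risk near 2 is of the same order as the adjusted associations reported for weak-to-moderate PPS levels, which supports the plausibility of the AF calculation.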
Table 6 Association between student characteristics and suicidal thoughts* In this cross-sectional study on a large sample of 10,015 students, we found a strong association between PPS in childhood and adolescence and suicidal thoughts in young adults. Lower levels of PPS were associated with a higher frequency of both occasional and frequent suicidal thoughts. Thus, a total lack of PPS was associated with a more than 4-fold increased risk of occasional suicidal thoughts (aOR, 4.55; 95% CI: 2.97–6.99) and a nearly 9-fold increased risk of frequent suicidal thoughts (aOR, 8.58; 95% CI: 4.62–15.96). In all models, we observed a negative association between the level of PPS and the frequency of suicidal thoughts. Sensitivity analyses modifying the adjustment and multiple imputation modeling provided consistent results. Few studies have described the association between PPS and suicidal thoughts in young adults, and most did not control for confounding factors related to the family environment. In one study that included 5183 Chinese students, suicidal ideation was associated with poor family structures and relationships or improper parenting styles [24]. In another study that included 188 African American students, strong family support was associated with a lower incidence of suicide ideation [25]. Similarly, in a Taiwanese study that included 2919 college students, a positive linear trend was observed between increased suicidal tendency and a parenting style with low affection [26]. In a younger age range (adolescents 12 to 18 years old), inadequate social and family support was shown to increase the risk of suicide or suicidal ideation [27,28,29,30,31]. Similarly, a cross-sectional study that included 448 adolescents aged 13 to 17 years measured PPS with the Perceived Social Support from Family Scale. In that study, each one-point increase in PPS was associated with a 54% lower frequency of suicidal plans [32].
Compared to the present study, all those previous studies were more focused on current parental support, in addition to other social determinants. However, a longitudinal study conducted among a large sample of adolescents showed conflicting results. Parental support was predictive of lower levels of depression but was not significantly associated with the outcomes related to suicidal behaviors [33]. The present study had some important strengths, including the size of the sample, the strength of the associations, the PPS-dose-dependent pattern, the consistency of our results with previous studies, and the large number of variables collected and adjusted for in the multivariable models. Furthermore, when we performed multiple imputation for non-response data and the sensitivity analysis, we found consistent results. However, there were also some limitations in this study. First, it was a cross-sectional analysis; therefore, we could not strictly separate the timing of exposure, outcome, and covariates. Moreover, no causality could be inferred between PPS and suicidal ideation. Second, only brief and succinct measures could be used in large-sample studies, and both perceived parental support and suicidal ideation were assessed with only one item. This limitation has to be taken into account when interpreting our results. However, the prevalence of suicidal thoughts found in our study falls within the range reported by the main studies on the subject [4,5,6, 8], which was somewhat reassuring. Third, the voluntary participation of students may have introduced a self-selection bias, although it is difficult to see how this potential bias could have influenced the observed associations. Fourth, the information was self-reported, which could lead to an information bias, particularly if participants under-reported the frequency of suicidal thoughts or the presence of a personal and/or family history, due to considerations of social acceptability.
Again, this under-reporting would be expected to have a low impact on the associations observed. Fifth, there is an over-representation of women in our sample compared to the 56% of female students in France. However, we tested interactions with gender for the main analyses and none was significant. Further, in stratified analyses, aOR did not differ significantly between males and females and the confidence intervals were largely overlapping (data not shown). Finally, although we had information on confounding factors, we could not rule out the influence of residual or unmeasured confounding factors, due to the complexity of the suicidal thought process. In addition, because our predictor variable was PPS, a recall bias might have led to an overestimation of the associations between support and suicidal ideation. However, our findings on the differential roles that PPS played for students with and without a history of depression suggested that a recall bias might have had limited effect. Indeed, we postulated that, if recall bias was a major source of influence, the association between low support and suicidal thoughts would be stronger in students with a history of depression compared to those without. Instead, we found the reverse; students without a history of depression were more likely to report suicidal thoughts, when they also reported low parental support during childhood and adolescence. We found that our estimation of the risk associated with PPS was higher than previous estimated risks of other variables known to be associated with suicidal ideation, such as parental divorce [34, 35] or parental death [36, 37]. We also noted that the specific role of low PPS could not be distinguished from the roles of other negative parenting practices, such as abuse or neglect (not measured in our study). However, our associations remained significant after controlling for other negative childhood events, such as parental death or divorce. 
The association between PPS and suicidal thoughts could reflect a familial aggregation of suicidal thoughts and mood disorders. Contributors to this type of aggregation might be unknown psycho-social, clinical, or biological factors, including genetic factors [38]; for example, parents that provide low support might be experiencing depression [39]. Our results in young adults, if confirmed by other studies, could eventually lead to the development of intervention programs for families at an early stage of life and thus be a prerequisite for more targeted, less costly, and more effective prevention interventions [40]. Such programs have yielded promising results, decreasing the incidence of suicidal thoughts in young adults when the intervention is started in adolescence [41, 42]. These interventions, aimed primarily at building parenting support and supervision capacities, are strategies developed by the CDC in suicide prevention [43]. Other programs, such as attachment-based family therapy [44], aim to transform the quality of adolescent-parent attachment in order to provide the adolescent with a safer relationship that can support them during difficult times and crises related to suicidal thoughts and behaviors. Regardless of the subjectivity of the PPS variable or the etiology of suicidal thoughts, evaluating PPS could be useful in assessments of suicidal risk in young adults. Given that PPS is a relatively neutral, non-intrusive variable, health professionals can readily assess PPS to improve suicide risk screening. Our findings indicated that PPS could be a particularly important marker, because the association between PPS and suicidal ideation was stronger in the absence than in the presence of a personal history of attempted suicide or depression. This is remarkable, because a personal history of attempted suicide or depression is an important marker of suicidal risk.
Our findings highlighted the importance of interventions that aim to screen for and correct risky situations that children might face at home. To summarize, our results indicated that a low PPS in childhood and adolescence was strongly associated with frequent suicidal thoughts in young adults. This issue should be systematically addressed in further clinical studies on suicidal risk in young people and, if confirmed, it could be considered in routine care. Longitudinal studies should assess the ability of PPS to predict the risk of suicide attempts and suicide.
AF: Attributable fraction
CNIL: Commission nationale de l'informatique et des libertés - National Commission of Informatics and Liberties
i-Share: Internet-based Students' Health Research Enterprise
MICE: Multiple imputation by chained equation
PPS: Perceived parental support
RR: Relative risk
SD: Standard deviation
World Health Organization (WHO). Suicide data. 2017. http://www.who.int/mental_health/prevention/suicide/suicideprevent/en/. Accessed 19 Feb 2018. Centers for Disease Control and Prevention (CDC). 2015. Web-based Injury Statistics Query and Reporting System (WISQARS). https://www.cdc.gov/violenceprevention/suicide/statistics/. Accessed 15 Jan 2018. Schwartz AJ. College student suicide in the United States: 1990-1991 through 2003-2004. J Am Coll Heal. 2006;54(6):341–52. Garlow SJ, Rosenberg J, Moore JD, et al. Depression, desperation, and suicidal ideation in college students: results from the American Foundation for Suicide Prevention College screening project at Emory University. Depress Anxiety. 2008;25(6):482–8. Wilcox HC, Arria AM, Caldeira KM, Vincent KB, Pinchevsky GM, O'Grady KE. Prevalence and predictors of persistent suicide ideation, plans, and attempts during college. J Affect Disord. 2010;127(1–3):287–94. Arria AM, O'Grady KE, Caldeira KM, Vincent KB, Wilcox HC, Wish ED. Suicide ideation among college students: a multivariate analysis. Arch Suicide Res. 2009;13(3):230–46. Brener ND, Hassan SS, Barrios LC.
Suicidal ideation among college students in the United States. J Consult Clin Psychol. 1999;67(6):1004–8. Mortier P, Cuijpers P, Kiekens G, Auerbach RP, Demyttenaere K, Green JG, et al. The prevalence of suicidal thoughts and behaviours among college students: a meta-analysis. Psychol Med. 2018;48(4):554–65. Reinherz HZ, Giaconia RM, Silverman AB, Friedman A, Pakiz B, Frost AK, et al. Early psychosocial risks for adolescent suicidal ideation and attempts. J Am Acad Child Adolesc Psychiatry. 1995;34(5):599–611. Brent DA, Johnson B, Bartle S, Bridge J, Rather C, Matta J, et al. Personality disorder, tendency to impulsive violence, and suicidal behavior in adolescents. J Am Acad Child Adolesc Psychiatry. 1993;32(1):69–75. Van Orden KA, Witte TK, Cukrowicz KC, Braithwaite S, Selby EA, Joiner TE. The interpersonal theory of suicide. Psychol Rev. 2010;117(2):575–600. Brent DA, Mann JJ. Family genetic studies, suicide, and suicidal behavior. Am J Med Genet C Semin Med Genet. 2005;133C(1):13–24. Bridge JA, Brent DA, Johnson BA, Connolly J. Familial aggregation of psychiatric disorders in a community sample of adolescents. J Am Acad Child Adolesc Psychiatry. 1997;36(5):628–36. King CA, Kerr DCR, Passarelli MN, Foster CE, Merchant CR. One-year follow-up of suicidal adolescents: parental history of mental health problems and time to post-hospitalization attempt. J Youth Adolesc. 2010;39(3):219–32. Morano CD, Cisler RA, Lemerond J. Risk factors for adolescent suicidal behavior: loss, insufficient familial support, and hopelessness. Adolescence. 1993;28(112):851–65. Kang B-H, Kang J-H, Park H-A, Cho Y-G, Hur Y-I, Sim WY, et al. The mediating role of parental support in the relationship between life stress and suicidal ideation among middle school students. Korean J Fam Med. 2017;38(4):213–9. Sharaf AY, Thompson EA, Walsh E. Protective effects of self-esteem and family support on suicide risk behaviors among at-risk adolescents. J Child Adolesc Psychiatr Nurs. 2009;22(3):160–8.
Czyz EK, Liu Z, King CA. Social connectedness and one-year trajectories among suicidal adolescents following psychiatric hospitalization. J Clin Child Adolesc Psychol. 2012;41(2):214–26. World Health Organization (WHO). Young people and health: challenge for society. 2003. http://whqlibdoc.who.int/trs/WHO_TRS_731_fre.pdf. Accessed 22 Jan 2018. Janssen KJ, Donders ART, Harrell FE Jr, Vergouwe Y, Chen Q, Grobbee DE, Moons KG. Missing covariate data in medical research: to impute is better than to ignore. J Clin Epidemiol. 2010;63(7):721–7. Sterne JA, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, Carpenter JR. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393. Faris PD, Ghali WA, Brant R, Norris CM, Galbraith PD, Knudtson ML, et al. Multiple imputation versus data enhancement for dealing with missing data in observational health care outcome analyses. J Clin Epidemiol. 2002;55(2):184–91. Rosen L. An intuitive approach to understanding the attributable fraction of disease due to a risk factor: the case of smoking. Int J Environ Res Public Health. 2013;10(7):2932–43. Zhai H, Bai B, Chen L, Han D, Wang L, Qiao Z, et al. Correlation between family environment and suicidal ideation in university students in China. Int J Environ Res Public Health. 2015;12(2):1412–24. Harris TL, Molock SD. Cultural orientation, family cohesion, and family support in suicide ideation and depression among African American college students. Suicide Life Threat Behav. 2000;30(4):341–53. Gau SS-F, Chen Y-Y, Tsai F-J, Lee M-B, Chiu Y-N, Soong W-T, et al. Risk factors for suicide in Taiwanese college students. J Am Coll Heal. 2008;57(2):135–42. Barzilay S, Brunstein Klomek A, Apter A, Carli V, Wasserman C, Hadlaczky G, et al. Bullying victimization and suicide ideation and behavior among adolescents in Europe: a 10-country study. J Adolesc Health.
2017;61(2):179–86. Joiner TE. Why people die by suicide. Cambridge: Harvard University Press; 2005. King CA, Merchant CR. Social and interpersonal factors relating to adolescent suicidality: a review of the literature. Arch Suicide Res. 2008;12(3):181–96. Miller AB, Esposito-Smythers C, Leichtweis RN. Role of social support in adolescent suicidal ideation and suicide attempts. J Adolesc Health. 2015;56(3):286–92. Kerr DCR, Preuss LJ, King CA. Suicidal adolescents' social support from family and peers: gender-specific associations with psychopathology. J Abnorm Child Psychol. 2006;34(1):103–14. Klaus NM, Mobilio A, King CA. Parent-adolescent agreement concerning adolescents' suicidal thoughts and behaviors. J Clin Child Adolesc Psychol. 2009;38(2):245–55. LeCloux M, Maramaldi P, Thomas KA, Wharff EA. A longitudinal study of health care resources, family support, and mental health outcomes among suicidal adolescents. Anal Soc Issues Public Policy. 2017;17(1):319–38. https://doi.org/10.1111/asap.12139. Lindström M, Rosvall M. Parental separation in childhood, social capital, and suicide thoughts and suicide attempts: a population-based study. Psychiatry Res. 2015;229(1–2):206–13. De Goede M, Spruijt E. Effects of parental divorce and youth unemployment on adolescent health. Patient Educ Couns. 1996;29(3):269–76. Rostila M, Berg L, Arat A, Vinnerljung B, Hjern A. Parental death in childhood and self-inflicted injuries in young adults-a national cohort study from Sweden. Eur Child Adolesc Psychiatry. 2016;25(10):1103–11. Guldin M-B, Li J, Pedersen HS, Obel C, Agerbo E, Gissler M, et al. Incidence of suicide among persons who had a parent who died during their childhood: a population-based cohort study. JAMA Psychiatry. 2015;72(12):1227–34. Turecki G, Brent DA. Suicide and suicidal behavior. Lancet. 2016;387(10024):1227–39. Taraban L, Shaw DS, Leve LD, Natsuaki MN, Ganiban JM, Reiss D, et al.
Parental depression, overreactive parenting, and early childhood externalizing problems: moderation by social support. Child Dev. 2018. https://doi.org/10.1111/cdev.13027. Heckman JJ. Skill formation and the economics of investing in disadvantaged children. Science. 2006;312(5782):1900–2. Reider EE, Sims BE. Family-based preventive interventions: can the onset of suicidal ideation and behavior be prevented? Suicide Life Threat Behav. 2016;46(Suppl 1):S3–7. Connell AM, McKillop HN, Dishion TJ. Long-term effects of the family check-up in early adolescence on risk of suicide in early adulthood. Suicide Life Threat Behav. 2016;46:S15–22. Centers for Disease Control and Prevention (CDC). 2017. https://www.cdc.gov/violenceprevention/suicide/prevention.html. Accessed 10 Oct 2018. Ewing ESK, Diamond G, Levy S. Attachment-based family therapy for depressed and suicidal adolescents: theory, clinical model and empirical support. Attach Hum Dev. 2015;17(2):136–56. The authors are grateful to the coordinating team of the i-Share project for their assistance in setting up and collecting data. In particular, we would like to thank: Clotilde Pollet, Edwige Pereira, Sarah Qchiqach, Elena Milesi, and Marie Mougin. We thank all the students that participated in the i-Share study. This work was supported by the French National Research Agency (Agence nationale de la Recherche) via the program "Investissements d'Avenir", grant ANR-10-COHO-05. The funding sponsor played no role in the design of the study; in the collection, analyses, or interpretation of data; in writing the manuscript; or in the decision to publish the results. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Inserm, Bordeaux Population Health Research Center, University of Bordeaux, UMR 1219, F-33000, Bordeaux, France Melissa Macalli, Marie Tournier, Cédric Galéra, Ilaria Montagni, Aicha Soumare, Sylvana M. 
Côté & Christophe Tzourio Charles Perrens Hospital, Bordeaux, France Marie Tournier University of Montreal, Montreal, Québec, Canada Sylvana M. Côté University of Bordeaux, 146 rue Léo Saignat, 33076, Bordeaux Cedex, Bordeaux, France Christophe Tzourio MM, CT, and SC designed the study. MM and AS conducted the statistical analysis. MM and CT wrote the first draft of the manuscript. All co-authors had full access to the data, read, revised, and approved the final manuscript. Correspondence to Christophe Tzourio. The i-Share project was approved by the "Commission nationale de l'informatique et des libertés" (National Commission of Informatics and Liberties, Reference DR-2013-019). Students were informed on the nature and purpose of the study and provided on-line consent. Macalli, M., Tournier, M., Galéra, C. et al. Perceived parental support in childhood and adolescence and suicidal ideation in young adults: a cross-sectional analysis of the i-Share study. BMC Psychiatry 18, 373 (2018). https://doi.org/10.1186/s12888-018-1957-7 i-Share cohort Causes, treatment and prevention of suicide
Weisong Dong 1, and Chang Li 2, School of Mathematics, Tianjin University, Tianjin 300354, China Hua Loo-Keng Center for Mathematical Sciences, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China * Corresponding author: Chang Li Received April 2020 Revised September 2020 Published November 2020 We derive second order estimates for $ \chi $-plurisubharmonic solutions of complex Hessian equations with right hand side depending on the gradient on compact Hermitian manifolds. Keywords: Complex Hessian equations, Second order estimates, Hermitian manifolds. Mathematics Subject Classification: Primary: 35J15, 53C55; Secondary: 58J05, 35B45. Citation: Weisong Dong, Chang Li. Second order estimates for complex Hessian equations on Hermitian manifolds. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2020377
Communications on Pure & Applied Analysis, 2020, 19 (12) : 5581-5590. doi: 10.3934/cpaa.2020252 Shenglan Xie, Maoan Han, Peng Zhu. A posteriori error estimate of weak Galerkin fem for second order elliptic problem with mixed boundary condition. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020340 Huiying Fan, Tao Ma. Parabolic equations involving Laguerre operators and weighted mixed-norm estimates. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5487-5508. doi: 10.3934/cpaa.2020249 Md. Masum Murshed, Kouta Futai, Masato Kimura, Hirofumi Notsu. Theoretical and numerical studies for energy estimates of the shallow water equations with a transmission boundary condition. Discrete & Continuous Dynamical Systems - S, 2021, 14 (3) : 1063-1078. doi: 10.3934/dcdss.2020230 Mathew Gluck. Classification of solutions to a system of $ n^{\rm th} $ order equations on $ \mathbb R^n $. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5413-5436. doi: 10.3934/cpaa.2020246 Sabira El Khalfaoui, Gábor P. Nagy. On the dimension of the subfield subcodes of 1-point Hermitian codes. Advances in Mathematics of Communications, 2021, 15 (2) : 219-226. doi: 10.3934/amc.2020054 Andy Hammerlindl, Jana Rodriguez Hertz, Raúl Ures. Ergodicity and partial hyperbolicity on Seifert manifolds. Journal of Modern Dynamics, 2020, 0: 331-348. doi: 10.3934/jmd.2020012 Wenqiang Zhao, Yijin Zhang. High-order Wong-Zakai approximations for non-autonomous stochastic $ p $-Laplacian equations on $ \mathbb{R}^N $. Communications on Pure & Applied Analysis, 2021, 20 (1) : 243-280. doi: 10.3934/cpaa.2020265 Hedy Attouch, Aïcha Balhag, Zaki Chbani, Hassan Riahi. Fast convex optimization via inertial dynamics combining viscous and Hessian-driven damping with time rescaling. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021010 Weisong Dong Chang Li
CommonCrawl
Rough singular integrals associated to polynomial curves Yulin Zhang1 & Feng Liu ORCID: orcid.org/0000-0002-4177-98451 Journal of Inequalities and Applications volume 2022, Article number: 19 (2022) In this paper, the authors establish the boundedness of singular integral operators associated to polynomial curves as well as the related maximal operators with rough kernels \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) and \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) for some \(\gamma >1\) on the Triebel–Lizorkin spaces. It should be pointed out that the bounds are independent of the coefficients of the polynomials in the definition of the operators. The main results of this paper not only essentially improve and generalize some known results but also complement some recent boundedness results. It is well known that the Triebel–Lizorkin spaces contain many important function spaces such as Lebesgue spaces, Hardy spaces, Sobolev spaces, and Lipschitz spaces. Over the last several years, a considerable amount of attention has been given to investigating the boundedness of singular integral operators with various rough kernels on the Triebel–Lizorkin spaces. In particular, many scholars have studied the bounds for singular integral operators with singularity along various sets under the rough kernels \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) and \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) for some \(\gamma >1\); see, for example, [10] for polynomial mappings, [29] for homogeneous mappings, and [27] for surfaces of revolution. However, it is unknown whether singular integral operators associated to polynomial curves with such rough kernels are bounded on the Triebel–Lizorkin spaces. The main purpose of this paper is to address this question. In addition, we establish bounds for the related maximal singular integral operators on the Lebesgue and Triebel–Lizorkin spaces.
Before stating our main results, let us recall some pertinent definitions, notations, and backgrounds. Let \(n\geq 2\) be an integer and let \({\mathrm{S}}^{n-1}\) denote the unit sphere in \(\mathbb{R}^{n}\) equipped with the normalized Lebesgue measure dσ. Let \(\Omega \in L^{1}({\mathrm{S}}^{n-1})\) be a homogeneous function of degree zero on \(\mathbb{R}^{n}\) and satisfy $$\begin{aligned} \int _{{\mathrm{S}}^{n-1}}\Omega (u)\,d\sigma (u)=0. \end{aligned}$$ The singular integral operator \(T_{h,\Omega }\) is defined as $$\begin{aligned} T_{h,\Omega }f(x):={\mathrm{p.v.}} \int _{\mathbb{R}^{n}} \frac{\Omega (y/ \vert y \vert )h( \vert y \vert )}{ \vert y \vert ^{n}}f(x-y)\,dy, \end{aligned}$$ where \(f\in \mathcal{S}(\mathbb{R}^{n})\) (the Schwartz class) and \(h\in \Delta _{1}(\mathbb{R}_{+})\). For \(\gamma >0\), the notation \(\Delta _{\gamma }(\mathbb{R}_{+})\) denotes the set of all measurable functions h on \(\mathbb{R}_{+}:=(0,\infty )\) satisfying $$\begin{aligned} \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})}=\sup_{R>0} \biggl( \frac{1}{R} \int _{0}^{R} \bigl\vert h(t) \bigr\vert ^{\gamma }\,dt \biggr)^{1/\gamma }< \infty. \end{aligned}$$ It is not difficult to see that \(L^{\infty }(\mathbb{R}_{+})=\Delta _{\infty } (\mathbb{R}_{+}) \subsetneq \Delta _{\gamma _{2}}(\mathbb{R}_{+})\subsetneq \Delta _{ \gamma _{1}}(\mathbb{R}_{+})\) for \(0<\gamma _{1}<\gamma _{2}<\infty \). For the sake of simplicity, we denote \(T_{h,\Omega }=T_{\Omega }\) when \(h\equiv 1\). The theory of singular integral originated in Calderón and Zygmund's work [4] in which they used the rotation method to establish the \(L^{p}(\mathbb{R}^{n}) (1< p<\infty )\) of \(T_{\Omega }\) if \(\Omega \in L\log L({\mathrm{S}}^{n-1})\). Since then, more and more scholars have been devoted to studying the boundedness of singular integrals with various rough kernels. 
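The nesting \(\Delta _{\gamma _{2}}(\mathbb{R}_{+})\subsetneq \Delta _{\gamma _{1}}(\mathbb{R}_{+})\) for \(\gamma _{1}<\gamma _{2}\) noted above comes from the monotonicity of power means. A small numerical sanity check (ours, not part of the paper; the sample function \(h(t)=2+\sin t\) and the helper name are our choices):

```python
import numpy as np

# Approximate the Delta_gamma quantity (1/R \int_0^R |h(t)|^gamma dt)^(1/gamma)
# on a finite set of radii R; the sup over R > 0 is only sampled, so this is
# a sanity check, not a proof.
def delta_gamma_norm(h, gamma, radii, pts=100001):
    best = 0.0
    for R in radii:
        t = np.linspace(0.0, R, pts)
        # uniform-grid average approximates (1/R) * integral over [0, R]
        best = max(best, np.mean(np.abs(h(t)) ** gamma) ** (1.0 / gamma))
    return best

h = lambda t: 2.0 + np.sin(t)   # bounded, so h lies in every Delta_gamma class
radii = [0.5, 1.0, 5.0, 20.0, 100.0]
n1, n2, n4 = (delta_gamma_norm(h, g, radii) for g in (1.0, 2.0, 4.0))
print(n1, n2, n4)  # nondecreasing in gamma and bounded by sup|h| = 3
```

By the power-mean inequality the three printed values are nondecreasing, which is exactly why membership in \(\Delta _{\gamma }(\mathbb{R}_{+})\) becomes more restrictive as γ grows.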
Particularly, Coifman and Weiss [12] proved that \(T_{\Omega }\) is of type \((p, p)\) for \(1< p<\infty \) if \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) (see also [15]). It is remarkable that \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) remains, up to now, the weakest size condition known for the \(L^{p}\) boundedness of \(T_{\Omega }\). Later on, an active extension of the theory was due to Fefferman [23], who discovered that the Calderón–Zygmund rotation method is no longer available if \(T_{h,\Omega }\) is also rough in the radial direction, for instance \(h\in L^{\infty }(\mathbb{R}_{+})\), so that new methods had to be developed. More precisely, Fefferman [23] showed that \(T_{h,\Omega }\) is of type \((p, p)\) for \(1< p<\infty \) if \(\Omega \in \mathrm{Lip}_{\alpha }({\mathrm{S}}^{n-1})\) for some \(\alpha >0\) and \(h\in L^{\infty }(\mathbb{R}_{+})\). Fefferman's result was later improved by Namazi [32] by assuming \(\Omega \in L^{q}({\mathrm{S}}^{n-1})\) for some \(q>1\) instead of \(\Omega \in \mathrm{Lip}_{\alpha }({\mathrm{S}}^{n-1})\). Meanwhile, Duoandikoetxea and Rubio de Francia [16] used the Littlewood–Paley theory to improve the results to the case \(\Omega \in L^{q}({\mathrm{S}}^{n-1})\) for any \(q>1\) and \(h\in \Delta _{2}(\mathbb{R}_{+})\). The boundedness of rough singular integral operators on Triebel–Lizorkin spaces has also been studied extensively by many authors. In 2002, Chen, Fan, and Ying [5] first showed that \(T_{\Omega }\) is bounded on \(\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})\) if \(\Omega \in L^{r}({\mathrm{S}}^{n-1})\) for some \(r>1\). Later on, the result was extended and improved by many authors. For example, see [2, 6] for the case \(\Omega \in \mathcal{F}_{\beta }({\mathrm{S}}^{n-1})\) (the Grafakos–Stefanov function class in [25]), [9, 10] for the case \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\). For the operators \(T_{\Omega }\) and \(T_{h,\Omega }\), the singularities are along the diagonal \(\{x=y\}\).
However, many problems in analysis have led one to consider singular integral operators with singularity along more general sets. One of the principal motivations for the study of such operators is the requirements of several complex variables and large classes of "subelliptic" equations (see [37, 39]). Consequently, more and more scholars have been devoted to studying the \(L^{p}\) bounds for rough singular integral operators with singularity along various sets. For example, see [3, 22, 34] for polynomial mappings, [17, 19] for real-analytic submanifolds, [11, 28] for homogeneous mappings, [1, 18, 20, 26] for polynomial curves. Other interesting works can be found in [7, 8, 35, 36, 42], among others. In this paper we focus on the singular integrals associated to polynomial curves with rough kernels. Let \(h, \Omega \) be given as in (1.2) and P be a real polynomial on \(\mathbb{R}\) satisfying \(P(0)=0\). For a function \(\varphi:\mathbb{R}_{+}\rightarrow \mathbb{R}\), we define the singular integral operator associated to polynomial compound curves \(\{P(\varphi (|y|))y/|y|; y\in \mathbb{R}^{n}\}\) by $$\begin{aligned} T_{h,\Omega,P,\varphi }f(x):={\mathrm{p.v.}} \int _{\mathbb{R}^{n}}f\bigl(x-P\bigl( \varphi \bigl( \vert y \vert \bigr)\bigr)y/ \vert y \vert \bigr)\frac{\Omega (y/ \vert y \vert )h( \vert y \vert )}{ \vert y \vert ^{n}}\,dy, \end{aligned}$$ where \(f\in \mathcal{S}(\mathbb{R}^{n})\). When \(\varphi (t)\equiv t\), we denote \(T_{h,\Omega,P,\varphi }=T_{h,\Omega,P}\). Particularly, \(T_{h,\Omega,P}=T_{h, \Omega }\) when \(P(t)\equiv t\). In 1997, Fan and Pan [20] first established the \(L^{2}\) boundedness for \(T_{h,\Omega,P}\) if \(h\in L^{\infty }(\mathbb{R}_{+})\) and \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\). Subsequently, Al-Hasan and Pan [1] improved the result by establishing the following. Theorem A ([1]) Let \(h\in L^{\infty }(\mathbb{R}_{+})\) and \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) satisfy (1.1).
Then, for \(1< p<\infty \), there exists a constant \(C>0\) independent of \(h, \Omega \) and the coefficients of P such that $$\begin{aligned} \Vert T_{h,\Omega,P}f \Vert _{L^{p}(\mathbb{R}^{n})}\leq C \Vert h \Vert _{L^{\infty }( \mathbb{R}_{+})} \Vert \Omega \Vert _{H^{1}({\mathrm{S}}^{n-1})} \Vert f \Vert _{L^{p}( \mathbb{R}^{n})}, \quad\forall f\in L^{p}\bigl(\mathbb{R}^{n} \bigr). \end{aligned}$$ Later on, the \(L^{p}\) mapping properties for \(T_{h,\Omega,P}\) have been investigated by many authors. For example, see [18] for the case \(h\equiv 1\) and \(\Omega \in \mathcal{F}_{\beta }({\mathrm{S}}^{n-1})\), [26] for the case \(\Omega \in L\log L({\mathrm{S}}^{n-1})\). Based on (2.4) and Theorem A, a natural question is the following. Is \(T_{h,\Omega,P}\) bounded on \(F_{\alpha }^{p,q}(\mathbb{R}^{n})\) if \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) for some \(\gamma \in (1,\infty ]\) and \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\)? Our investigation will not only address this question, but also deal with a more general class of operators. More specifically, we have the following result. Theorem 1.1 Let P be a real polynomial on \(\mathbb{R}\) satisfying \(P(0)=0\) and \(\varphi \in \mathfrak{F}_{1}\) or \(\mathfrak{F}_{2}\). Here, \(\mathfrak{F}_{1}\) (resp., \(\mathfrak{F}_{2}\)) is the set of all functions \(\phi:\mathbb{R}_{+}\rightarrow \mathbb{R}\) satisfying the following condition \((a)\) (resp., \((b)\)): ϕ is an increasing \(\mathcal{C}^{1}\) function such that \(t\phi '(t)\geq C_{\phi }\phi (t)\) and \(\phi (2t)\leq c_{\phi }\phi (t)\) for all \(t>0\), where \(C_{\phi }\) and \(c_{\phi }\) are independent of t. ϕ is a decreasing \(\mathcal{C}^{1}\) function such that \(t\phi '(t)\leq - C_{\phi }\phi (t)\) and \(\phi (t)\leq c_{\phi }\phi (2t)\) for all \(t>0\), where \(C_{\phi }\) and \(c_{\phi }\) are independent of t. 
Suppose that \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) satisfies (1.1) and \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) for some \(\gamma \in (1,\infty ]\). Then For \(\alpha \in \mathbb{R}\) and \((1/p,1/q)\in \mathcal{R}_{\gamma }\), there exists a constant \(C>0\) independent of \(h, \gamma, \Omega \) and the coefficients of P such that $$\begin{aligned} \Vert T_{h,\Omega,P,\varphi }f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} \leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \Vert \Omega \Vert _{H^{1}({ \mathrm{S}}^{n-1})} \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})}. \end{aligned}$$ Here, \(\mathcal{R}_{\gamma }\) is the interior of the convex hull of three squares \((\frac{1}{2},\frac{1}{2}+\frac{1}{\max \{2,\gamma '\}})^{2}\), \((\frac{1}{2} -\frac{1}{\max \{2,\gamma '\}},\frac{1}{2})^{2}\), and \((\frac{1}{2\gamma }, 1-\frac{1}{2\gamma })^{2}\). For \(\alpha >0\) and \((1/p,1/q)\in \mathcal{R}_{\gamma }\), there exists a constant \(C>0\) independent of \(h, \gamma, \Omega \) and the coefficients of P such that $$\begin{aligned} \Vert T_{h,\Omega,P,\varphi }f \Vert _{F_{\alpha }^{p,q}(\mathbb{R}^{n})}\leq C \gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \Vert \Omega \Vert _{H^{1}({ \mathrm{S}}^{n-1})} \Vert f \Vert _{F_{\alpha }^{p,q}(\mathbb{R}^{n})}. \end{aligned}$$ Remark 1.1 There are some model examples in the class \(\mathfrak{F}_{1}\) such as \(t^{\alpha } (\alpha >0), t^{\alpha }(\ln (1+t))^{\beta } (\alpha, \beta >0), t\ln \ln (e+t)\), real-valued polynomials P on \(\mathbb{R}\) with positive coefficients and \(P(0)=0\), and so on. We now give examples in the class \(\mathfrak{F}_{2}\) such as \(t^{\delta }(\delta <0)\) and \(t^{-1}\ln (1+1/t)\). It was pointed out in [26] that for \(\varphi \in \mathfrak{F}_{1}\) (or \(\mathfrak{F}_{2}\)) there exists a constant \(B_{\varphi }>1\) such that \(\varphi (2t)\geq B_{\varphi }\varphi (t)\) (or \(\varphi (t)\geq B_{\varphi }\varphi (2t)\)). 
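As a sanity check on Remark 1.1 (ours, not part of the paper), one can verify numerically that the model example \(\varphi (t)=t\ln \ln (e+t)\) satisfies the defining conditions of \(\mathfrak{F}_{1}\) with the particular constants \(C_{\varphi }=1\) and \(c_{\varphi }=4\) (these constants are our own choices):

```python
import numpy as np

e = np.e
phi = lambda t: t * np.log(np.log(e + t))
# analytic derivative: phi'(t) = ln(ln(e+t)) + t / ((e+t) * ln(e+t))
dphi = lambda t: np.log(np.log(e + t)) + t / ((e + t) * np.log(e + t))

t = np.logspace(-6, 6, 2000)          # sample many scales of t > 0
increasing = bool(np.all(dphi(t) > 0))
growth = bool(np.all(t * dphi(t) >= phi(t) * (1 - 1e-9)))         # t phi'(t) >= 1 * phi(t)
doubling = bool(np.all(phi(2 * t) <= 4.0 * phi(t) * (1 + 1e-9)))  # phi(2t) <= 4 phi(t)
print(increasing, growth, doubling)
```

Indeed, writing \(\varphi (t)=tg(t)\) with \(g(t)=\ln \ln (e+t)\) increasing gives \(t\varphi '(t)=\varphi (t)+t^{2}g'(t)\geq \varphi (t)\), which is exactly what the second check samples.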
(i) It is clear that \(\mathcal{R}_{\gamma _{1}} \subsetneq \mathcal{R}_{\gamma _{2}}\) for \(\gamma _{1}<\gamma _{2}\) and \(\mathcal{R}_{\infty }=(0,1)\times (0,1)\). In view of (2.4), we see that Theorem 1.1 essentially improves and generalizes Theorem A. (ii) Our methods used to deal with Fourier transform estimates of some measures are different from those in the proof of Theorem A. In fact, the authors in [1] used the \(TT^{*}\) method to prove Theorem A. However, the \(TT^{*}\) method is not needed in the proof of Theorem 1.1. (iii) Part (i) of Theorem 1.1 improves and generalizes Theorem 1 in [9], in which the authors showed that \(T_{h,\Omega }\) is bounded on \(\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})\) for \(\alpha \in \mathbb{R}\) and \(1< p, q<\infty \), provided that \(h\in L^{\infty }(\mathbb{R}_{+})\) and \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\). (iv) Theorem 1.1 is new, even in the special case \(h\equiv 1\) or \(\alpha =0\), \(q=2\), \(\varphi (t)\equiv t\), or \(P(t)\equiv t\). The second motivation of this paper is concerned with the \(L^{p}\) boundedness of maximal truncated singular integrals associated to polynomial curves. Let \(h, \Omega, P, \varphi \) be given as in (1.3). The maximal truncated singular integral operator \(T_{h,\Omega,P,\varphi }^{*}\) is defined by $$\begin{aligned} T_{h,\Omega,P,\varphi }^{*}f(x):=\sup_{\epsilon >0} \biggl\vert \int _{ \vert y \vert >\epsilon }f\bigl(x-P\bigl(\varphi \bigl( \vert y \vert \bigr)\bigr)y/ \vert y \vert \bigr) \frac{\Omega (y/ \vert y \vert )h( \vert y \vert )}{ \vert y \vert ^{n}}\,dy \biggr\vert , \end{aligned}$$ where \(f\in \mathcal{S}(\mathbb{R}^{n})\).
The type of operator \(T_{h,\Omega,P,\varphi }^{*}\) was first studied by Fan, Guo, and Pan [18] who proved that \(T_{h,\Omega,P,\varphi }^{*}\) is bounded on \(L^{p}(\mathbb{R}^{n})\) for \((2\beta -1)/(2\beta -2)< p<2\beta -1\) if \(h\equiv 1\), \(\varphi (t)\equiv t\), and \(\Omega \in \mathcal{F}_{\beta }({\mathrm{S}}^{n-1})\) for some \(\beta >3/2\). Recently, Liu [26] proved that \(T_{h,\Omega,P, \varphi }^{*}\) is of type \((p, p)\) for \(1< p<\infty \), provided that \(\varphi \in \mathfrak{F}_{1}\) or \(\mathfrak{F}_{2}\), \(\Omega \in L\log L({\mathrm{S}}^{n-1})\) and h satisfies certain radial condition. Based on (2.1), (2.2) and the results related to \(T_{h,\Omega,P, \varphi }^{*}\), a natural question is the following. Is \(T_{h,\Omega,P,\varphi }^{*}\) bounded on \(L^{p}(\mathbb{R}^{n})\) for some \(p>1\) under the same conditions of Theorem 1.1? This question can be addressed by the following. Let \(P, \varphi \) be given as in Theorem 1.1. Suppose that \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) satisfies (1.1) and \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) for some \(\gamma \in (4/3,\infty ]\). Then there exists a constant \(C>0\) independent of \(h, \gamma, \Omega \) and the coefficients of P such that $$\begin{aligned} \bigl\Vert T_{h,\Omega,P,\varphi }^{*}f \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \Vert \Omega \Vert _{H^{1}({\mathrm{S}}^{n-1})} \Vert f \Vert _{L^{p}(\mathbb{R}^{n})}, \quad\forall f\in L^{p} \bigl(\mathbb{R}^{n}\bigr). \end{aligned}$$ Here, \(p\in (\gamma ',\infty )\) if \(\gamma \geq 2\) or \(p\in (\gamma ', {2\gamma '}/{(\gamma '-2)})\) if \(\gamma \in (4/3,2)\). Theorem 1.2 is new, even in the special case \(h\equiv 1\) or \(\varphi (t)\equiv t\). 
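The admissible range of p in Theorem 1.2 can be made concrete with a small helper (ours, not from the paper; the function name and the examples are our own, and \(\gamma =\infty \) is handled via \(\gamma '=1\)):

```python
import math

def p_range(gamma):
    """Open interval of admissible p in Theorem 1.2 for gamma in (4/3, inf]."""
    gp = 1.0 if math.isinf(gamma) else gamma / (gamma - 1.0)  # conjugate index gamma'
    if gamma >= 2:
        return (gp, math.inf)                 # p in (gamma', infinity)
    return (gp, 2.0 * gp / (gp - 2.0))        # p in (gamma', 2gamma'/(gamma'-2))

print(p_range(1.5))       # (3.0, 6.0)
print(p_range(2.0))       # (2.0, inf)
print(p_range(math.inf))  # (1.0, inf)
```

Note that as \(\gamma \downarrow 4/3\) one has \(\gamma '\uparrow 4\) and the interval \((\gamma ',2\gamma '/(\gamma '-2))\) degenerates to a point, consistent with the restriction \(\gamma >4/3\) in the theorem.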
It is unknown whether the operator \(T_{h,\Omega,P,\varphi }\) appearing in Theorem 1.2 is bounded on \(L^{p}(\mathbb{R}^{n})\) for some \(p>1\) if \(\gamma \in (1,4/3]\), even in the special case \(\varphi (t)=t\), which is very interesting. The third motivation of this paper is concerned with the boundedness of maximal truncated singular integrals associated to polynomial curves on Triebel–Lizorkin spaces. The first work related to the boundedness for maximal singular integral operator on Triebel–Lizorkin spaces was due to Zhang and Chen [43], who showed that the maximal singular integral operator is bounded on \(\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})\) and \(F_{\alpha }^{p,q}(\mathbb{R}^{n})\) for \(0<\alpha <1\) and \(1< p, q<\infty \) by assuming that \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\). Recently, Liu, Xue, and Yabuta [30] established the boundedness for the maximal singular integral operators associated to polynomial mappings on Triebel–Lizorkin spaces under the conditions \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) with some \(\gamma >1\) and \(\Omega \in L\log L({\mathrm{S}}^{n-1})\). Very recently, the authors [31] obtained the boundedness for \(T_{h,\Omega,P, \varphi }^{*}\) on Triebel–Lizorkin spaces, provided that \(h\equiv 1\), \(\Omega \in \mathcal{F}_{\beta }({\mathrm{S}}^{n-1})\) with some \(\beta >3/2\) and \(\varphi \in \mathfrak{F}_{3}\), where \(\mathfrak{F}_{3}\) is the set of all functions ϕ satisfying the following conditions: (a) ϕ is a positive increasing function on \((0,\infty )\) such that \(t^{\delta }\phi '(t)\) is monotonic on \((0,\infty )\) for some \(\delta \in \mathbb{R}\); (b) There exist positive constants \(C_{\phi }\) and \(c_{\phi }\) such that \(t\phi '(t)\geq C_{\phi }\phi (t)\) and \(\phi (2t)\leq c_{\phi }\phi (t)\) for all \(t>0\). It is clear that \(\mathfrak{F}_{3}\subsetneq \mathfrak{F}_{1}\). 
There are some model examples for the class \(\mathfrak{F}_{3}\) such as \(t^{\alpha } (\alpha >0)\), \(t^{\beta }\ln (1+t) (\beta \geq 1)\), \(t\ln \ln (e+t)\), real-valued polynomials P on \(\mathbb{R}\) with positive coefficients and \(P(0)=0\) and so on. Based on the above, it is natural to ask the following question. Is \(T_{h,\Omega,P,\varphi }^{*}\) defined in (1.4) bounded on the Triebel–Lizorkin spaces if \(h\equiv 1\) and \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\)? Our next result will give a positive answer to Question 1.3. Let P be a real polynomial on \(\mathbb{R}\) satisfying \(P(0)=0\) and \(\varphi \in \mathfrak{F}_{3}\). Suppose that \(h\equiv 1\) and \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) satisfies (1.1). Then, for \(0<\alpha <1\) and \(1< p, q<\infty \), there exists a constant \(C>0\) independent of Ω and the coefficients of P such that $$\begin{aligned} &\bigl\Vert T_{h,\Omega,P,\varphi }^{*}f \bigr\Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} \leq C \Vert \Omega \Vert _{H^{1}({\mathrm{S}}^{n-1})} \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}( \mathbb{R}^{n})},\quad \forall f\in \dot{F}_{\alpha }^{p,q}\bigl(\mathbb{R}^{n} \bigr);\\ &\bigl\Vert T_{h,\Omega,P,\varphi }^{*}f \bigr\Vert _{F_{\alpha }^{p,q}(\mathbb{R}^{n})} \leq C \Vert \Omega \Vert _{H^{1}({\mathrm{S}}^{n-1})} \Vert f \Vert _{F_{\alpha }^{p,q}( \mathbb{R}^{n})},\quad \forall f\in F_{\alpha }^{p,q}\bigl(\mathbb{R}^{n} \bigr). \end{aligned}$$ Moreover, both \(T_{h,\Omega,P,\varphi }^{*}:F_{\alpha }^{p,q}(\mathbb{R}^{n}) \rightarrow \dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})\) and \(T_{h,\Omega,P, \varphi }^{*}:F_{\alpha }^{p,q}(\mathbb{R}^{n}) \rightarrow F_{\alpha }^{p,q} (\mathbb{R}^{n})\) are continuous. The boundedness part in Theorem 1.3 implies [43, Theorem 1.2] when \(P(t)=\varphi (t)\equiv t\). It should be pointed out that Theorem 1.3 is new, even in the special case \(\varphi (t)\equiv t\). The paper is organized as follows. In Sect. 
2 we present some preliminary definitions and lemmas, which are the main ingredients of proving Theorems 1.1–1.3. The proofs of Theorems 1.1–1.3 will be given in Sect. 3. It should be pointed out that the main methods and ideas employed in this paper are a combination of ideas and arguments from [1, 21, 22, 27, 30, 41]. However, some new techniques are needed in the main proofs. The new ideas in our proofs are to define suitable measures and to estimate them appropriately. Throughout the paper, for any \(p\in [1,\infty ]\), we denote by \(p'\) the conjugate index of p, which satisfies \({1}/{p}+{1}/{p'}=1\). Here, we set \(1'=\infty \) and \(\infty '=1\). The letter C or c, sometimes with certain parameters, will stand for positive constants, not necessarily the same at each occurrence, which are independent of the essential variables. In what follows, we set \(\mathfrak{R}_{n}=\{\zeta \in \mathbb{R}^{n}; 1/2<|\zeta |\leq 1\}\). Let \(\triangle _{\zeta }(f)\) be the difference of f for an arbitrary function f defined on \(\mathbb{R}^{n}\) and \(\zeta \in \mathbb{R}^{n}\), i.e., \(\triangle _{\zeta }(f)(x)=f_{\zeta }(x)-f(x)\), where \(f_{\zeta }(x)=f(x+\zeta )\). For any \(t\in \mathbb{R}\), we set \(\exp (t)=e^{-2\pi it}\). We also use the conventions \(\sum_{i\in \emptyset }a_{i}=0\) and \(\prod_{i\in \emptyset }a_{i}=1\). Preliminary definitions and lemmas Preliminary definitions In this subsection we give the definitions of several rough kernels and their relationships. Definition 2.1 (Hardy spaces) The Hardy space \(H^{1}({\mathrm{S}}^{n-1})\) is the set of all functions \(\Omega \in L^{1}({\mathrm{S}}^{n-1})\) which satisfy \(\Vert \Omega \Vert _{H^{1}({\mathrm{S}}^{n-1})}<\infty \), where $$\begin{aligned} \Vert \Omega \Vert _{H^{1}({\mathrm{S}}^{n-1})}:= \int _{{\mathrm{S}}^{n-1}}\sup_{0\leq r< 1} \biggl\vert \int _{{\mathrm{S}}^{n-1}}\Omega (\theta ) \frac{1-r^{2}}{ \vert rw-\theta \vert ^{n}}\,d\sigma (\theta ) \biggr\vert \,d\sigma (w). 
\end{aligned}$$ (\(L(\log L)^{\alpha }({\mathrm{S}}^{n-1})\) class) The class \(L(\log L)^{\alpha }({\mathrm{S}}^{n-1})\) for \(\alpha >0\) denotes the class of all measurable functions Ω on \({\mathrm{S}}^{n-1}\) which satisfy $$\begin{aligned} \Vert \Omega \Vert _{L(\log L)^{\alpha }({\mathrm{S}}^{n-1})}:= \int _{{\mathrm{S}}^{n-1}} \bigl\vert \Omega (\theta ) \bigr\vert \log ^{\alpha }\bigl( \bigl\vert \Omega (\theta ) \bigr\vert +2\bigr)\,d \sigma (\theta )< \infty. \end{aligned}$$ (Grafakos–Stefanov class) The Grafakos–Stefanov class \(\mathcal{F}_{\beta }({\mathrm{S}}^{n-1})\) for \(\beta >0\) denotes the set of all integrable functions over \({\mathrm{S}}^{n-1}\) which satisfy the condition $$\begin{aligned} \sup_{u\in {\mathrm{S}}^{n-1}} \int _{{\mathrm{S}}^{n-1}} \bigl\vert \Omega (v) \bigr\vert \biggl(\log ^{+}\frac{1}{ \vert u\cdot v \vert } \biggr)^{\beta }\,d\sigma (v)< \infty. \end{aligned}$$ We remark that \(\mathcal{F}_{\beta }({\mathrm{S}}^{n-1})\) was introduced by Grafakos and Stefanov [25] in the study of the \(L^{p}\) boundedness of singular integral operator with rough kernels. 
The following inclusion relations are known: $$\begin{aligned} &L^{r}\bigl({\mathrm{S}}^{n-1}\bigr)\subsetneq L(\log L)^{\beta _{1}}\bigl({\mathrm{S}}^{n-1}\bigr) \subsetneq L(\log L)^{\beta _{2}}\bigl({\mathrm{S}}^{n-1}\bigr) \quad{\text{for }} r>1 {\text{ and }} 0< \beta _{2}< \beta _{1}; \\ &L(\log L)^{\beta }\bigl({\mathrm{S}}^{n-1}\bigr)\subsetneq H^{1}\bigl({\mathrm{S}}^{n-1}\bigr) \subsetneq L^{1} \bigl({\mathrm{S}}^{n-1}\bigr) \quad{\text{for }} \beta \geq 1; \end{aligned}$$ $$\begin{aligned} &L(\log L)^{\beta }\bigl({\mathrm{S}}^{n-1}\bigr)\nsubseteq H^{1}\bigl({\mathrm{S}}^{n-1}\bigr) \nsubseteq L(\log L)^{\beta }\bigl({\mathrm{S}}^{n-1}\bigr) \quad{\text{for }} 0< \beta < 1; \\ &{\mathcal{F}}_{\beta _{1}}\bigl({\mathrm{S}}^{n-1}\bigr)\subsetneq { \mathcal{F}}_{ \beta _{2}}\bigl({\mathrm{S}}^{n-1}\bigr),\quad 0< \beta _{2}< \beta _{1}; \\ &\bigcup_{q>1}L^{q}\bigl({ \mathrm{S}}^{n-1}\bigr)\subsetneq {\mathcal{F}}_{\beta }\bigl({ \mathrm{S}}^{n-1}\bigr),\quad \beta >0; \\ &\bigcap_{\beta >1}{\mathcal{F}}_{\beta }\bigl({ \mathrm{S}}^{n-1}\bigr)\nsubseteq H^{1}\bigl({ \mathrm{S}}^{n-1}\bigr)\nsubseteq \bigcup_{\beta >1}{ \mathcal{F}}_{\beta }\bigl({ \mathrm{S}}^{n-1}\bigr). \end{aligned}$$ Let us present the definitions of Triebel–Lizorkin spaces. (Triebel–Lizorkin spaces) Let \(\mathcal{S}'(\mathbb{R}^{n})\) be the tempered distribution class on \(\mathbb{R}^{n}\). 
For \(\alpha \in \mathbb{R}\) and \(0< p, q\le \infty (p\neq \infty )\), we define the homogeneous Triebel–Lizorkin spaces \(\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})\) by $$\begin{aligned} \dot{F}_{\alpha }^{p,q}\bigl(\mathbb{R}^{n}\bigr):= \biggl\{ f\in \mathcal{S}' \bigl( \mathbb{R}^{n}\bigr): \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} = \biggl\Vert \biggl(\sum _{i\in \mathbb{Z}}2^{-i\alpha q} \vert \Psi _{i}*f \vert ^{q} \biggr)^{1/q} \biggr\Vert _{L^{p}(\mathbb{R}^{n})}< \infty \biggr\} , \end{aligned}$$ where \(\widehat{\Psi _{i}}(\xi )=\phi (2^{i}\xi )\) for \(i\in \mathbb{Z}\) and \(\phi \in \mathcal{C}_{c}^{\infty }(\mathbb{R}^{n})\) satisfies the conditions: \(0\leq \phi (x)\leq 1\); \(\operatorname{supp}(\phi )\subset \{x: 1/2\leq |x|\leq 2\}\); \(\phi (x)>c>0\) if \(3/5\leq |x|\leq 5/3\). The inhomogeneous versions of Triebel–Lizorkin spaces are denoted by \(F_{\alpha }^{p,q}(\mathbb{R}^{n})\) and are obtained by adding the term \(\|\Theta *f\|_{L^{p}(\mathbb{R}^{n})}\) to the right-hand side of (2.3) with \(\sum_{i\in \mathbb{Z}}\) replaced by \(\sum_{i\geq 1}\), where \(\Theta \in \mathcal{S}(\mathbb{R}^{n})\), \(\operatorname{supp}(\hat{\Theta })\subset \{\xi: |\xi |\leq 2\}\), \(\hat{\Theta }(x)>c>0\) if \(|x|\leq 5/3\). The following properties are well known (see [24, 40]): $$\begin{aligned} &\dot{F}_{0}^{p,2}\bigl(\mathbb{R}^{n} \bigr)=L^{p}\bigl(\mathbb{R}^{n}\bigr) \quad{\text{for }} 1< p< \infty; \end{aligned}$$ $$\begin{aligned} &F_{\alpha }^{p,q}\bigl(\mathbb{R}^{n}\bigr) \sim \dot{F}_{\alpha }^{p,q}\bigl( \mathbb{R}^{n}\bigr) \cap L^{p}\bigl(\mathbb{R}^{n}\bigr)\quad {\text{and}} \\ & \Vert f \Vert _{F_{\alpha }^{p,q}(\mathbb{R}^{n})}\sim \Vert f \Vert _{\dot{F}_{\alpha }^{p,q} (\mathbb{R}^{n})}+ \Vert f \Vert _{L^{p}(\mathbb{R}^{n})}\quad {\text{for }} \alpha >0,1< p, q< \infty. \end{aligned}$$ Our next definition is concerned with the \(H^{1}({\mathrm{S}}^{n-1})\) atom. 
(\(H^{1}({\mathrm{S}}^{n-1})\) atom) A function \(a:{\mathrm{S}}^{n-1}\rightarrow \mathbb{C}\) is a \((1,\infty )\) atom if there exist \(\vartheta \in {\mathrm{S}}^{n-1}\) and \(\varrho \in (0,1]\) such that $$\begin{aligned} &\operatorname{supp}(a)\subset {\mathrm{S}}^{n-1}\cap B(\vartheta,\varrho ),\quad {\text{where }} B(\vartheta,\varrho )=\bigl\{ y\in \mathbb{R}^{n}: \vert y-\vartheta \vert < \varrho \bigr\} ; \end{aligned}$$ $$\begin{aligned} &\Vert a \Vert _{L^{\infty }({\mathrm{S}}^{n-1})}\leq \varrho ^{-n+1}; \end{aligned}$$ $$\begin{aligned} &\int _{{\mathrm{S}}^{n-1}}a(y)\,d\sigma (y)=0. \end{aligned}$$ Preliminary lemmas We start with the following atomic decomposition of \(H^{1}({\mathrm{S}}^{n-1})\). Lemma 2.1 ([13, 14]) Let \(\Omega \in H^{1}({\mathrm{S}}^{n-1})\) satisfy (1.1). Then there exist a sequence of complex numbers \(\{c_{j}\}_{j\geq 1}\) and a sequence of \((1,\infty )\) atoms \(\{\Omega _{j}\}_{j\geq 1}\) such that $$\begin{aligned} \Omega =\sum_{j=1}^{\infty }c_{j} \Omega _{j}, \quad \Vert \Omega \Vert _{H^{1}({\mathrm{S}}^{n-1})}\approx \sum _{j=1}^{\infty } \vert c_{j} \vert . \end{aligned}$$ In order to deal with certain estimates for Fourier transforms of some measures, we need the following properties of \((1,\infty )\) atoms. ([21]) Let \(\zeta =(\zeta _{1},\ldots,\zeta _{n})\neq (0,\ldots,0)\) and \(\zeta '= \zeta /|\zeta |=(\zeta _{1}',\ldots,\zeta _{n}')\). Suppose that \(n\geq 3\) and \(b(\cdot )\) is a \((1,\infty )\) atom on \({\mathrm{S}}^{n-1}\) supported in \({\mathrm{S}}^{n-1}\cap B(\zeta ',\varrho )\), where \(\varrho \in (0,1]\). Let $$\begin{aligned} &F_{b}(s)=\bigl(1-s^{2}\bigr)^{{(n-3)}/{2}}\chi _{(-1,1)}(s) \int _{{\mathrm{S}}^{n-2}}b\bigl(s,\bigl(1-s^{2} \bigr)^{{1}/{2}} \tilde{y}\bigr)\,d\sigma (\tilde{y}),\\ &G_{b}(s)=\bigl(1-s^{2}\bigr)^{{(n-3)}/{2}}\chi _{(-1,1)}(s) \int _{{\mathrm{S}}^{n-2}} \bigl\vert b\bigl(s,\bigl(1-s^{2} \bigr)^{{1}/{2}} \tilde{y}\bigr) \bigr\vert \,d\sigma ( \tilde{y}). 
\end{aligned}$$ Then there exists a positive constant C, independent of b, such that $$\begin{aligned} &\operatorname{supp}(F_{b})\subset \bigl(\zeta _{1}'-2r \bigl(\zeta '\bigr),\zeta _{1}'+2r\bigl( \zeta '\bigr)\bigr),\\ & \operatorname{supp}(G_{b})\subset \bigl(\zeta _{1}'-2r\bigl(\zeta '\bigr), \zeta _{1}'+2r\bigl(\zeta '\bigr)\bigr);\\ &\Vert F_{b} \Vert _{L^{\infty }(\mathbb{R})}\leq C \bigl\vert r\bigl( \zeta '\bigr) \bigr\vert ^{-1}, \qquad \Vert G_{b} \Vert _{L^{\infty }(\mathbb{R})}\leq C \bigl\vert r\bigl(\zeta '\bigr) \bigr\vert ^{-1};\\ &\int _{\mathbb{R}}F_{b}(s)\,ds=0, \end{aligned}$$ where \(r(\zeta ')=|\zeta |^{-1}|A_{\varrho }(\zeta )|\) and \(A_{\varrho }(\zeta )= (\varrho ^{2}\zeta _{1},\varrho \zeta _{2}, \ldots,\varrho \zeta _{n})\). Let \(\zeta =(\zeta _{1},\zeta _{2})\neq (0,0)\) and \(\zeta '=\zeta /|\zeta |=(\zeta _{1}', \zeta _{2}')\). Suppose that \(n=2\) and \(b(\cdot )\) is a \((1,\infty )\) atom on \({\mathrm{S}}^{1}\) supported in \({\mathrm{S}}^{1}\cap B(\zeta ',\varrho )\), where \(\varrho \in (0,1]\). Let $$\begin{aligned} &F_{b}(s)=\bigl(1-s^{2}\bigr)^{-{1}/{2}}\chi _{(-1,1)}(s) \bigl(b\bigl(s,\bigl(1-s^{2}\bigr)^{{1}/{2}} \bigr)+b\bigl(s,-\bigl(1-s^{2}\bigr)^{{1}/{2}}\bigr) \bigr),\\ &G_{b}(s)=\bigl(1-s^{2}\bigr)^{-{1}/{2}}\chi _{(-1,1)}(s) \bigl( \bigl\vert b\bigl(s,\bigl(1-s^{2} \bigr)^{{1}/{2}}\bigr) \bigr\vert + \bigl\vert b\bigl(s,- \bigl(1-s^{2}\bigr)^{{1}/{2}}\bigr) \bigr\vert \bigr). 
\end{aligned}$$ $$\begin{aligned} &\operatorname{supp}(F_{b})\subset \bigl(\zeta _{1}'-2r \bigl(\zeta '\bigr),\zeta _{1}'+2r\bigl( \zeta '\bigr)\bigr),\qquad \operatorname{supp}(G_{b})\subset \bigl(\zeta _{1}'-2r\bigl(\zeta '\bigr), \zeta _{1}'+2r\bigl(\zeta '\bigr)\bigr);\\ &\int _{\mathbb{R}}F_{b}(s)\,ds=0;\\ &\Vert F_{b} \Vert _{L^{q}(\mathbb{R})}\leq C \bigl\vert r\bigl( \zeta '\bigr) \bigr\vert ^{-1+{1}/{q}}, \qquad \Vert G_{b} \Vert _{L^{q}(\mathbb{R})}\leq C \bigl\vert r\bigl(\zeta '\bigr) \bigr\vert ^{-1+{1}/{q}}, \end{aligned}$$ for some \(q\in (1,2)\), where \(r(\zeta ')=|\zeta |^{-1}|A_{\varrho }(\zeta )|\) and \(A_{\varrho }(\zeta )=(\varrho ^{2}\zeta _{1},\varrho \zeta _{2})\). The following oscillatory estimates are useful for our proofs. ([33, Corollary, p. 186]) Let \(l\in \mathbb{N}\setminus \{0\}\), \(\{\mu _{i}\}_{i=1}^{l}\subset \mathbb{R}\), and \(\{d_{i}\}_{i=1}^{l}\) be distinct positive real numbers. Let \(\psi \in \mathcal{C}^{1}([0,1])\). Then there exists \(C>0\) independent of \(\{\mu _{j}\}_{j=1}^{l}\) such that $$\begin{aligned} \biggl\vert \int _{\delta }^{\tau }\exp \bigl(\mu _{1}t^{d_{1}}+ \cdots +\mu _{l}t^{d_{l}}\bigr) \psi (t)\,dt \biggr\vert \leq C \vert \mu _{1} \vert ^{-\epsilon } \biggl( \bigl\vert \psi ( \tau ) \bigr\vert + \int _{ \delta }^{\tau } \bigl\vert \psi '(t) \bigr\vert \,dt \biggr) \end{aligned}$$ holds for \(0\leq \delta <\tau \leq 1\) and \(\epsilon =\min \{{1}/{d_{1}},{1}/{l}\}\). Let \(\Phi (t)=t^{\alpha _{1}}+\mu _{2}t^{\alpha _{2}}+\cdots +\mu _{n}t^{ \alpha _{n}}\), where \(\{\mu _{i}\}_{i=2}^{n}\) are real parameters, and \(\{\alpha _{i}\}_{i=1}^{n}\) are distinct positive (not necessarily integer) exponents. Suppose that \(\varphi \in \mathfrak{F}_{3}\) and \(t^{\delta }\varphi '(t)\) is monotonic on \((0,\infty )\) for some \(\delta \in \mathbb{R}\). 
Then, for any \(r>0\) and \(\lambda \neq 0\), $$\begin{aligned} \biggl\vert \int _{r/2}^{r}\exp \bigl(\lambda \Phi \bigl(\varphi (t)\bigr)\bigr)\frac{dt}{t} \biggr\vert \leq C \bigl\vert \lambda \varphi (r)^{\alpha _{1}} \bigr\vert ^{-\epsilon }, \end{aligned}$$ with \(\epsilon =\min \{1/\alpha _{1},1/n\}\). Here, \(C>0\) is independent of \(\{\mu _{i}\}_{i=2}^{n}\), but may depend on φ and δ. We end this section by presenting a well-known result. ([38, pp. 476–478]) Let \(\mathcal{P}=(P_{1},\ldots,P_{d})\) with each \(P_{i}\) being a real polynomial defined on \(\mathbb{R}^{n}\). Then the maximal operator \(M_{\mathcal{P}}\) defined by $$\begin{aligned} M_{\mathcal{P}}f(x)=\sup_{r>0}\frac{1}{r^{n}} \biggl\vert \int _{ \vert t \vert \leq r}f\bigl(x-\mathcal{P}(t)\bigr)\,dt \biggr\vert \end{aligned}$$ satisfies $$\begin{aligned} \Vert M_{\mathcal{P}}f \Vert _{L^{p}(\mathbb{R}^{d})}\leq C_{p} \Vert f \Vert _{L^{p}( \mathbb{R}^{d})},\quad \forall 1< p< \infty {\textit{ and }} f\in L^{p} \bigl( \mathbb{R}^{d}\bigr). \end{aligned}$$ Here, \(C_{p}>0\) is independent of the coefficients of \(\{P_{i}\}_{i=1}^{d}\) and f. Proofs of Theorems 1.1–1.3 In this section we prove Theorems 1.1–1.3. In Sect. 3.1 we present some notation and lemmas, which are the main ingredients of the proofs of Theorems 1.1–1.3. The proofs of Theorems 1.1–1.3 will be given in Sect. 3.2. Some notation and lemmas In what follows, let \(N\in \mathbb{N}\setminus \{0\}\) and \(P(t)=\sum_{i=1}^{N}a_{i}t^{i}\) with \(a_{N}\neq 0\). Then there exist \(0< l_{1}< l_{2}<\cdots <l_{\Lambda }=N\) such that \(P(t)=\sum_{i=1}^{\Lambda }a_{l_{i}}t^{l_{i}}\) with \(a_{l_{i}}\neq 0\) for all \(1\leq i\leq \Lambda \). Set $$\begin{aligned} P_{0}(t)=0,\qquad P_{s}(t)=\sum_{i=1}^{s}a_{l_{i}}t^{l_{i}},\quad 1\leq s\leq \Lambda. \end{aligned}$$ It is clear that \(P(t)=P_{\Lambda }(t)\) and \(l_{s}\geq s\) for \(1\leq s\leq \Lambda \). Let \(h, \Omega \) be given as in (1.2).
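The passage from P to its nonzero monomials and the partial sums \(P_{s}\) in (3.1) can be illustrated by a short script (a sketch; the helper names are ours, not from the text):

```python
# Sketch: extract the exponents l_1 < ... < l_Lambda of the nonzero
# monomials of P(t) = sum_{i=1}^N a_i t^i and evaluate the partial sums
# P_s(t) = sum_{i<=s} a_{l_i} t^{l_i}.  Helper names are illustrative.

def nonzero_monomials(coeffs):
    """coeffs[i-1] is a_i, the coefficient of t^i; keep the nonzero ones."""
    return [(i, a) for i, a in enumerate(coeffs, start=1) if a != 0]

def partial_sum(monomials, s, t):
    """P_s(t); in particular P_0 = 0 and P_Lambda = P."""
    return sum(a * t**l for l, a in monomials[:s])

# P(t) = 3t + 2t^3, i.e. a_1 = 3, a_2 = 0, a_3 = 2; so l_1 = 1, l_2 = 3.
mono = nonzero_monomials([3, 0, 2])
assert [l for l, _ in mono] == [1, 3]
assert all(l >= s for s, (l, _) in enumerate(mono, start=1))  # l_s >= s
assert partial_sum(mono, 0, 2.0) == 0                         # P_0 = 0
assert partial_sum(mono, 2, 2.0) == 3*2.0 + 2*2.0**3          # P_Lambda = P
```

The assertions mirror the two facts noted above: \(P_{\Lambda }=P\) and \(l_{s}\geq s\).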
For \(0\leq s\leq \Lambda \), \(y, \xi \in \mathbb{R}^{n}\), a vector \(\theta \in {\mathrm{S}}^{n-1}\), and a function \(\varphi:[0,\infty )\rightarrow \mathbb{R}\), we set $$\begin{aligned} \Gamma _{s,\theta }(y,\xi )=\sum_{i=s+1}^{\Lambda }a_{l_{i}} \varphi \bigl( \vert y \vert \bigr)^{l_{i}}\theta \cdot \xi. \end{aligned}$$ Define the measures \(\{\sigma _{h,\Omega,k,\theta,s}\}_{k\in \mathbb{Z}}\) and \(\{|\sigma _{h,\Omega,k,\theta,s}|\}_{k\in \mathbb{Z}}\) by $$\begin{aligned} &\widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi )= \int _{2^{k\gamma '}< \vert y \vert \leq 2^{(k+1)\gamma '}}\exp \bigl(P_{s}\bigl(\varphi \bigl( \vert y \vert \bigr)\bigr)y'\cdot \xi +\Gamma _{s, \theta }(y, \xi )\bigr)\frac{\Omega (y/ \vert y \vert )h( \vert y \vert )}{ \vert y \vert ^{n}}\,dy,\\ &\widehat{ \vert \sigma _{h,\Omega,k,\theta,s} \vert }(\xi )= \int _{2^{k\gamma '}< \vert y \vert \leq 2^{(k+1)\gamma '}}\exp \bigl(P_{s}\bigl(\varphi \bigl( \vert y \vert \bigr)\bigr)y'\cdot \xi +\Gamma _{s, \theta }(y, \xi )\bigr)\frac{ \vert \Omega (y/ \vert y \vert )h( \vert y \vert ) \vert }{ \vert y \vert ^{n}}\,dy, \end{aligned}$$ where \(P_{s}\) is given as in (3.1). Note that \(\Gamma _{s,\theta }(y,\xi )\) is independent of \({y}/{|y|}\). In view of (1.1), it is easy to see that $$\begin{aligned} \widehat{\sigma _{h,\Omega,k,\theta,0}}(\xi )=0,\quad \forall k\in \mathbb{Z}, \xi \in \mathbb{R}^{n}. \end{aligned}$$ We have the following estimates. Let \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) for some \(\gamma \in (1,\infty ]\) and Ω be a \((1,\infty )\) atom satisfying (2.6)–(2.8) with \(0<\varrho \leq 1\) and \(\vartheta =\theta =(1,0,\ldots,0)\in {\mathrm{S}}^{n-1}\). Assume that \(\varphi \in \mathfrak{F}_{1}\) or \(\varphi \in \mathfrak{F}_{2}\).
Then, for \(1\leq s\leq \Lambda \) and \(\xi =(\xi _{1},\ldots,\xi _{n})\neq (0,\ldots,0)\), there exists a constant \(C>0\) independent of \(h, \Omega, \gamma, \xi \) and \(\{a_{l_{s}}\}_{s=1}^{\Lambda }\) such that $$\begin{aligned} \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi )- \widehat{\sigma _{h,\Omega,k,\theta,s-1}}(\xi ) \bigr\vert \leq C\gamma ' \Vert h \Vert _{ \Delta _{\gamma }(\mathbb{R}_{+})}\min \bigl\{ 1,\varphi \bigl(2^{(k+1)\gamma '} \bigr)^{l_{s}} \bigl\vert L_{s}( \xi ) \bigr\vert \bigr\} , \end{aligned}$$ where $$\begin{aligned} L_{s}(\xi )=\bigl(a_{l_{s}}\varrho ^{2}\xi _{1},a_{l_{s}}\varrho \xi _{2}, \ldots,a_{l_{s}} \varrho \xi _{n}\bigr). \end{aligned}$$ We only prove (3.3) for the case \(\varphi \in \mathfrak{F}_{1}\) since the case \(\varphi \in \mathfrak{F}_{2}\) is analogous. Fix \(1\leq s\leq \Lambda \) and \(\xi '=\xi /|\xi |=(\xi _{1}',\ldots, \xi _{n}')\). Let \(\mathcal{O}\) be the rotation such that \(\mathcal{O}(\xi ')=\vartheta \) and \(\mathcal{O}^{-1}\) denote the inverse of \(\mathcal{O}\). Then \(\mathcal{O}^{2}(\xi ')=(\xi _{1}',\eta _{2}',\ldots,\eta _{n}')\). Let \(Q_{n-1}\) be a rotation in \(\mathbb{R}^{n-1}\) such that \(Q_{n-1}(\xi _{2}',\ldots,\xi _{n}')=(\eta _{2}',\ldots,\eta _{n}')\) and let R be the transformation defined by \(R(z_{1},z_{2},\ldots,z_{n})=(z_{1},Q_{n-1}(z_{2},\ldots,z_{n}))\). Then, for any \(y'=(u,y_{2}',\ldots,y_{n}')\in {\mathrm{S}}^{n-1}\), we have \(\vartheta \cdot R(y')=\vartheta \cdot y'=u\) and \(\Omega (\mathcal{O}^{-1}R(y'))\) is a \((1,\infty )\) atom supported in \({\mathrm{S}}^{n-1}\cap B(\xi ',\varrho )\).
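In the plane the rotation \(\mathcal{O}\) with \(\mathcal{O}(\xi ')=\vartheta \) is explicit, which makes this reduction easy to visualize; here is a minimal numerical sketch (pure Python, with our own helper names, purely for illustration):

```python
import math

# Sketch of the rotation reduction in the plane: the rotation O with
# O(xi') = theta = (1, 0) for a unit vector xi' = (c, s) is
# O = [[c, s], [-s, c]].  (Names are ours, for illustration only.)

def rotation_to_e1(xi):
    c, s = xi
    return [[c, s], [-s, c]]

def apply(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

xi = (3/5, 4/5)                               # a unit vector xi'
O = rotation_to_e1(xi)
image = apply(O, xi)                          # should be theta = (1, 0)
assert math.isclose(image[0], 1.0) and abs(image[1]) < 1e-12
# O is a rotation: it preserves lengths and has determinant 1.
v = (0.2, -0.9)
assert math.isclose(math.hypot(*apply(O, v)), math.hypot(*v))
assert math.isclose(O[0][0]*O[1][1] - O[0][1]*O[1][0], 1.0)
```

In higher dimensions the same role is played by \(\mathcal{O}\) and the block rotation R above; only the existence of such rotations is used in the proof.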
By some changes of variables, we have $$\begin{aligned} &\widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi ) \\ &\quad= \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \exp \Biggl( \sum _{i=s+1}^{\Lambda }a_{l_{i}}\varphi (t)^{l_{i}} \xi \cdot \theta \Biggr) \int _{{\mathrm{S}}^{n-1}}\Omega \bigl(y'\bigr)\exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi (t)^{l_{i}}\xi \cdot y' \Biggr)\,d\sigma \bigl(y'\bigr)h(t)\frac{dt}{t} \\ &\quad= \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}}\exp \Biggl( \sum _{i=s+1}^{\Lambda }a_{l_{i}}\varphi (t)^{l_{i}} \vert \xi \vert \xi _{1}' \Biggr) \\ & \qquad{}\times \int _{{\mathrm{S}}^{n-1}}A\bigl(y'\bigr)\exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}\varphi (t)^{l_{i}} \vert \xi \vert \xi '\cdot \mathcal{O}^{-1}R\bigl(y'\bigr) \Biggr)\,d\sigma \bigl(y'\bigr)h(t)\frac{dt}{t} \\ &\quad= \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}}\exp \Biggl( \sum _{i=s+1}^{\Lambda }a_{l_{i}}\varphi (t)^{l_{i}} \vert \xi \vert \xi _{1}' \Biggr) \int _{\mathbb{R}}F_{A}(u)\exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du\,h(t)\frac{dt}{t}, \end{aligned}$$ where \(A(y')=\Omega (\mathcal{O}^{-1}R(y'))\) and \(F_{A}\) is defined as in Lemma 2.2 (in case \(n>2\)) or Lemma 2.3 (in case \(n=2\)). Notice that \(A(\cdot )\) is a \((1,\infty )\) atom supported in \(B(\xi ',\varrho )\). Invoking Lemmas 2.2 and 2.3, one finds that $$\begin{aligned} &\operatorname{supp}(F_{A})\subset \bigl(\xi _{1}'-2r \bigl(\xi '\bigr),\xi _{1}'+2r\bigl(\xi '\bigr)\bigr); \end{aligned}$$ $$\begin{aligned} &\Vert F_{A} \Vert _{L^{\infty }(\mathbb{R})}\leq C \bigl\vert r\bigl(\xi '\bigr) \bigr\vert ^{-1}, \quad{\text{if }} n\geq 3; \end{aligned}$$ $$\begin{aligned} &\Vert F_{A} \Vert _{L^{q}(\mathbb{R})}\leq C \bigl\vert r\bigl(\xi '\bigr) \bigr\vert ^{-1+1/q},\quad {\text{if }} n=2 \end{aligned}$$ for some \(q\in (1,2)\).
Here, \(r(\xi ')=|\xi |^{-1}|A_{\varrho }(\xi )|\), where \(A_{\varrho }(\xi )=(\varrho ^{2}\xi _{1},\varrho \xi _{2},\ldots, \varrho \xi _{n})\) for \(n\geq 3\) and \(A_{\varrho }(\xi )=(\varrho ^{2}\xi _{1},\varrho \xi _{2})\) for \(n=2\). In view of (3.5) and (3.6), $$\begin{aligned} & \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi )- \widehat{ \sigma _{h,\Omega,k,\theta,s-1}}(\xi ) \bigr\vert \\ &\quad= \Biggl\vert \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}}\exp \Biggl(\sum _{i=s+1}^{\Lambda }a_{l_{i}}\varphi (t)^{l_{i}} \vert \xi \vert \xi _{1}' \Biggr) \int _{\mathbb{R}}F_{A}(u)\exp \Biggl(\sum _{i=1}^{s-1}a_{l_{i}} \varphi (t)^{l_{i}} \vert \xi \vert u \Biggr) \\ &\qquad{} \times \bigl(\exp \bigl(a_{l_{s}}\varphi (t)^{l_{s}} \vert \xi \vert \xi _{1}'\bigr)-\exp \bigl(a_{l_{s}}\varphi (t)^{l_{s}} \vert \xi \vert u\bigr)\bigr)\,du\,h(t) \frac{dt}{t} \Biggr\vert \\ &\quad\leq \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \int _{ \mathbb{R}} \bigl\vert F_{A}(u) \bigr\vert \min \bigl\{ 2,2\pi \varphi \bigl(2^{(k+1)\gamma '}\bigr)^{l_{s}} \vert a_{l_{s}} \xi \vert \bigl\vert \xi _{1}'-u \bigr\vert \bigr\} \,du \bigl\vert h(t) \bigr\vert \frac{dt}{t} \\ &\quad\leq \min \bigl\{ 2,4\pi \vert a_{l_{s}}\xi \vert r\bigl(\xi '\bigr)\varphi \bigl(2^{(k+1) \gamma '}\bigr)^{l_{s}}\bigr\} \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \bigl\vert h(t) \bigr\vert \frac{dt}{t} \int _{\mathbb{R}} \bigl\vert F_{A}(u) \bigr\vert \,du. \end{aligned}$$ From (3.6)–(3.8), one sees that there exists \(C>0\) independent of \(h, \Omega, \gamma \) such that $$\begin{aligned} \int _{\mathbb{R}} \bigl\vert F_{A}(u) \bigr\vert \,du \leq C.
\end{aligned}$$ Moreover, by Hölder's inequality, one has $$\begin{aligned} \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \bigl\vert h(t) \bigr\vert \frac{dt}{t}& = \sum_{i=0}^{[\gamma ']} \int _{2^{k \gamma '+i}}^{2^{k\gamma '+i+1}} \bigl\vert h(t) \bigr\vert \frac{dt}{t} \\ & \leq \sum_{i=0}^{[ \gamma ']} \biggl( \int _{2^{k\gamma '+i}}^{2^{k\gamma '+i+1}} \bigl\vert h(t) \bigr\vert ^{\gamma }\frac{dt}{t} \biggr)^{1/\gamma } \biggl( \int _{2^{k\gamma '+i}}^{2^{k \gamma '+i+1}}\frac{dt}{t} \biggr)^{1/\gamma '} \\ & \leq 2^{1/\gamma }\bigl(\bigl[\gamma '\bigr]+1\bigr) \Vert h \Vert _{ \Delta _{\gamma }(\mathbb{R}_{+})}(\ln 2)^{1/\gamma '}\leq 4\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})}. \end{aligned}$$ Here, \([x]=\max \{k\in \mathbb{Z}:k\leq x\}\) for \(x\in \mathbb{R}\). Finally, it follows from (3.9)–(3.11) that $$\begin{aligned} \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi )-\widehat{\sigma _{h,\Omega,k,\theta,s-1}}(\xi ) \bigr\vert \leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})}\min \bigl\{ 1,\varphi \bigl(2^{(k+1)\gamma '}\bigr)^{l_{s}} \bigl\vert L_{s}(\xi ) \bigr\vert \bigr\} , \end{aligned}$$ where \(C>0\) is independent of \(h, \Omega, \gamma \). This proves (3.3) and completes the proof. □ Let \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) for some \(\gamma \in (1,\infty ]\) and Ω be a \((1,\infty )\) atom satisfying (2.6)–(2.8) with \(0<\varrho \leq 1\) and \(\vartheta =\theta =(1,0,\ldots,0)\in {\mathrm{S}}^{n-1}\). Assume that \(\varphi \in \mathfrak{F}_{1}\) or \(\varphi \in \mathfrak{F}_{2}\). Then, for \(1\leq s\leq \Lambda \) and \(\xi =(\xi _{1},\ldots,\xi _{n})\neq (0,\ldots,0)\), there exist \(\delta >0\) and \(C>0\) independent of \(h, \Omega, \gamma, \xi \), and \(\{a_{l_{s}}\}_{s=1}^{\Lambda }\) such that $$\begin{aligned} \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi ) \bigr\vert \leq C\gamma ' \Vert h \Vert _{ \Delta _{\gamma }(\mathbb{R}_{+})}\min \bigl\{ 1,\bigl(\varphi \bigl(2^{k\gamma '}\bigr)^{l_{s}} \bigl\vert L_{s}( \xi ) \bigr\vert \bigr)^{-1/(2l_{s}\gamma '\delta )}\bigr\} , \end{aligned}$$ where \(L_{s}(\xi )\) is given as in (3.4) and \(\delta =1\) if \(n\geq 3\) and \(\delta >2\) if \(n=2\). We only prove (3.12) for the case \(\varphi \in \mathfrak{F}_{1}\) since the other case is analogous.
By (3.5) and Hölder's inequality, we have $$\begin{aligned} & \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi ) \bigr\vert \\ &\quad\leq \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \Biggl\vert \int _{\mathbb{R}}F_{A}(u)\exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert \bigl\vert h(t) \bigr\vert \frac{dt}{t} \\ &\quad\leq \biggl( \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \bigl\vert h(t) \bigr\vert ^{\gamma }\frac{dt}{t} \biggr)^{1/\gamma } \Biggl( \int _{2^{k\gamma '}}^{2^{(k+1) \gamma '}} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{\gamma '}\frac{dt}{t} \Biggr)^{1/ \gamma '} \\ &\quad\leq \biggl( \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \bigl\vert h(t) \bigr\vert ^{\gamma }\frac{dt}{t} \biggr)^{1/\gamma } \Vert F_{A} \Vert _{L^{1}(\mathbb{R})}^{ \max \{1-2/\gamma ',0\}}\bigl(\gamma ' \bigr)^{\max \{1/\gamma '-1/2,0\}} \\ &\qquad{} \times \Biggl( \int _{2^{k\gamma '}}^{2^{(k+1) \gamma '}} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2}\frac{dt}{t} \Biggr)^{\min \{1/ \gamma ',1/2\}}. \end{aligned}$$ Notice that $$\begin{aligned} & \biggl( \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \bigl\vert h(t) \bigr\vert ^{\gamma }\frac{dt}{t} \biggr)^{1/\gamma } \\ &\quad\leq \Biggl(\sum_{i=0}^{[\gamma ']} \int _{2^{k \gamma '+i}}^{2^{k\gamma '+i+1}} \bigl\vert h(t) \bigr\vert ^{\gamma }\frac{dt}{t} \Biggr)^{1/ \gamma } \leq \bigl(2\bigl(\bigl[ \gamma '\bigr]+1\bigr) \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})}^{\gamma } \bigr)^{1/\gamma }\leq \bigl(4\gamma '\bigr)^{1/\gamma } \Vert h \Vert _{\Delta _{\gamma }( \mathbb{R}_{+})}. 
\end{aligned}$$ This together with (3.10) and (3.13) implies $$\begin{aligned} & \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi ) \bigr\vert \\ &\quad\leq \bigl(4\gamma '\bigr)^{ \max \{1/2,1/\gamma \}} \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \\ &\qquad{} \times \Biggl( \int _{2^{k\gamma '}}^{2^{(k+1) \gamma '}} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2}\frac{dt}{t} \Biggr)^{\min \{1/ \gamma ',1/2\}}. \end{aligned}$$ By some changes of variables and the properties for φ, we have $$\begin{aligned} & \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \Biggl\vert \int _{ \mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2}\frac{dt}{t} \\ &\quad\leq \sum_{\mu =0}^{[\gamma ']} \int _{2^{k \gamma '+\mu }}^{2^{k\gamma '+\mu +1}} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}\varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2}\frac{dt}{t} \\ &\quad\leq \sum_{\mu =0}^{[\gamma ']} \int _{ \varphi (2^{k\gamma '+\mu })}^{\varphi (2^{k\gamma '+\mu +1})} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}t^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2} \frac{dt}{\varphi ^{-1}(t)\varphi '(\varphi ^{-1}(t))} \\ &\quad\leq \frac{1}{C_{\varphi }}\sum_{\mu =0}^{[ \gamma ']} \int _{\varphi (2^{k\gamma '+\mu })}^{\varphi (2^{k\gamma '+ \mu +1})} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}t^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2}\frac{dt}{t} \\ &\quad= \frac{1}{C_{\varphi }}\sum_{\mu =0}^{[\gamma ']} \int _{ \frac{\varphi (2^{k\gamma '+\mu +1})}{\varphi (2^{k\gamma '+\mu })}}^{1} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi \bigl(2^{k\gamma '+\mu +1}\bigr)^{l_{i}}t^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2} \frac{dt}{t} \\ &\quad\leq 
\frac{1}{C_{\varphi }}\sum_{\mu =0}^{[ \gamma ']} \int _{c_{\varphi }^{-1}}^{1} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}\varphi \bigl(2^{k\gamma '+\mu +1}\bigr)^{l_{i}}t^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2}\frac{dt}{t} \\ &\quad\leq \frac{1}{C_{\varphi }}\sum_{\mu =0}^{[ \gamma ']} \int _{\mathbb{R}} \int _{\mathbb{R}} \bigl\vert F_{A}(u) \overline{F_{A}(v)} \bigr\vert \\ &\qquad{}\times \Biggl\vert \int _{c_{\varphi }^{-1}}^{1}\exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}\varphi \bigl(2^{k\gamma '+\mu +1}\bigr)^{l_{i}}t^{l_{i}} \vert \xi \vert (u-v) \Biggr)\frac{dt}{t} \Biggr\vert \,du\,dv. \end{aligned}$$ Fix \(\mu \in \{0,1,\ldots,[\gamma ']\}\); then we get by Lemma 2.4 that $$\begin{aligned} & \Biggl\vert \int _{c_{\varphi }^{-1}}^{1}\exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}\varphi \bigl(2^{k\gamma '+\mu +1}\bigr)^{l_{i}}t^{l_{i}} \vert \xi \vert (u-v) \Biggr)\frac{dt}{t} \Biggr\vert \\ &\quad\leq C\min \bigl\{ 1,\bigl( \vert a_{l_{s}}\xi \vert \varphi \bigl(2^{k\gamma '+\mu +1}\bigr)^{l_{s}} \vert u-v \vert \bigr)^{-1/l_{s}} \bigr\} \\ &\quad\leq C\bigl( \vert a_{l_{s}}\xi \vert \varphi \bigl(2^{k\gamma '+\mu +1}\bigr)^{l_{s}} \vert u-v \vert \bigr)^{-1/(l_{s} \delta )}, \end{aligned}$$ where \(\delta =1\) if \(n\geq 3\) and \(\delta =q'\) if \(n=2\). Here, q is given as in the proof of Lemma 3.1, and the constant \(C>0\) is independent of \(u, v, \xi, \mu, k\), and \(\{a_{l_{i}}\}_{i=1}^{s}\). Combining (3.15) with (3.16), $$\begin{aligned} & \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \Biggl\vert \int _{ \mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}} \varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2}\frac{dt}{t} \\ &\quad\leq C\gamma '\bigl(\varphi \bigl(2^{k\gamma '} \bigr)^{l_{s}} \vert a_{l_{s}} \xi \vert \bigr)^{-1/(l_{s}\delta )} \int _{\mathbb{R}} \int _{\mathbb{R}} \bigl\vert F_{A}(u) \overline{F_{A}(v)} \bigr\vert \vert u-v \vert ^{-1/(l_{s}\delta )} \,du\,dv.
\end{aligned}$$ Define the function \(b(u)=r(\xi ')F_{A}(r(\xi ')u+\xi _{1}')\). In view of (3.6)–(3.8) we see that \(\operatorname{supp}(b)\subset (-2,2)\) and \(\|b\|_{L^{\infty }(\mathbb{R})}\leq C\) for \(n\geq 3\) and \(\|b\|_{L^{q}(\mathbb{R})}\leq C\) for \(n=2\). By some changes of variables, $$\begin{aligned} & \int _{\mathbb{R}} \int _{\mathbb{R}} \bigl\vert F_{A}(u) \overline{F_{A}(v)} \bigr\vert \vert u-v \vert ^{-1/(l_{s}\delta )} \,du\,dv \\ &\quad= \bigl\vert r\bigl(\xi '\bigr) \bigr\vert ^{-1/(l_{s}\delta )} \int _{-2}^{2} \int _{-2}^{2} \bigl\vert b(u) \overline{b(v)} \bigr\vert \vert u-v \vert ^{-1/(l_{s}\delta )}\,du\,dv. \end{aligned}$$ When \(n\geq 3\), by the fact \(\|b\|_{L^{\infty }(\mathbb{R})}\leq C\) and \(\delta =1\), we get $$\begin{aligned} \int _{-2}^{2} \int _{-2}^{2} \bigl\vert b(u)\overline{b(v)} \bigr\vert \vert u-v \vert ^{-1/l_{s}}\,du\,dv \leq C \int _{-2}^{2} \int _{-2}^{2} \vert u-v \vert ^{-1/l_{s}} \,du\,dv\leq C. \end{aligned}$$ When \(n=2\), by the fact \(\|b\|_{L^{q}(\mathbb{R})}\leq C\) and Hölder's inequality, $$\begin{aligned} &\int _{-2}^{2} \int _{-2}^{2} \bigl\vert b(u)\overline{b(v)} \bigr\vert \vert u-v \vert ^{-1/(l_{s}q')}\,du\,dv\\ &\quad \leq C \Vert b \Vert _{L^{q}(\mathbb{R})}^{2} \biggl( \int _{-2}^{2} \int _{-2}^{2} \vert u-v \vert ^{-1/l_{s}} \,du\,dv \biggr)^{1/q'}\leq C. \end{aligned}$$ Therefore, we get from (3.18) that $$\begin{aligned} \int _{\mathbb{R}} \int _{\mathbb{R}} \bigl\vert F_{A}(u) \overline{F_{A}(v)} \bigr\vert \vert u-v \vert ^{-1/(l_{s} \delta )} \,du\,dv\leq C \bigl\vert r\bigl(\xi '\bigr) \bigr\vert ^{-1/(l_{s}\delta )}.
\end{aligned}$$ It follows from (3.19) and (3.17) that $$\begin{aligned} &\int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \Biggl\vert \int _{\mathbb{R}}F_{A}(u) \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}\varphi (t)^{l_{i}} \vert \xi \vert u \Biggr)\,du \Biggr\vert ^{2}\frac{dt}{t} \\ &\quad\leq C\gamma '\bigl(\varphi \bigl(2^{k\gamma '} \bigr)^{l_{s}} \bigl\vert L_{s}( \xi ) \bigr\vert \bigr)^{-1/(l_{s}\delta )}, \end{aligned}$$ where \(C>0\) is independent of \(h, \Omega, \gamma, \varrho, \xi, k\) and \(\{a_{l_{i}}\}_{i=1}^{s}\). In view of (3.20) and (3.14), $$\begin{aligned} \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi ) \bigr\vert \leq C\gamma ' \Vert h \Vert _{ \Delta _{\gamma }(\mathbb{R}_{+})}\bigl(\varphi \bigl(2^{k\gamma '}\bigr)^{l_{s}} \bigl\vert L_{s}( \xi ) \bigr\vert \bigr)^{-\min \{1/\gamma ',1/2\}/(l_{s}\delta )}, \end{aligned}$$ where \(C>0\) is independent of \(h, \Omega, \gamma, \varrho, \xi, k\), and \(\{a_{l_{i}}\}_{i=1}^{s}\). On the other hand, we get by (3.5), (3.10), and (3.11) that $$\begin{aligned} \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi ) \bigr\vert \leq \Vert F_{A} \Vert _{L^{1}( \mathbb{R})} \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \bigl\vert h(t) \bigr\vert \frac{dt}{t}\leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})}. \end{aligned}$$ Then (3.12) follows from (3.21) and (3.22). □ Let \(h\in \Delta _{\gamma }(\mathbb{R}_{+})\) for some \(\gamma \in (1,\infty ]\) and Ω be a \((1,\infty )\) atom satisfying (2.1)–(2.3) with \(0<\varrho \leq 1\) and \(\vartheta =\theta =(1, 0,\ldots,0)\in {\mathrm{S}}^{n-1}\). Let \(\varphi \in \mathfrak{F}_{1}\) or \(\varphi \in \mathfrak{F}_{2}\). 
Then, for \(\gamma '< p<\infty \), there exists a constant \(C>0\) independent of \(h, \Omega, \gamma, \xi, \theta \), and \(\{a_{l_{s}}\}_{s=1}^{\Lambda }\) such that $$\begin{aligned} \Bigl\Vert \sup_{k\in \mathbb{Z}} \bigl\vert \vert \sigma _{h,\Omega,k,\theta,0} \vert *f \bigr\vert \Bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq C \gamma ' \Vert h \Vert _{\Delta _{\gamma }( \mathbb{R}_{+})} \Vert f \Vert _{L^{p}(\mathbb{R}^{n})}, \quad\forall f\in L^{p}\bigl( \mathbb{R}^{n} \bigr). \end{aligned}$$ We only consider the case \(\varphi \in \mathfrak{F}_{1}\) since the other case can be obtained similarly. Fix \(k\in \mathbb{Z}\); by a change of variables, $$\begin{aligned} \vert \sigma _{h,\Omega,k,\theta,0} \vert *f(x)&= \int _{2^{k \gamma '}< \vert y \vert \leq 2^{(k+1)\gamma '}}f \Biggl(x-\sum_{j=1}^{\Lambda }a_{l_{j}} \varphi \bigl( \vert y \vert \bigr)^{l_{j}}\theta \Biggr) \frac{ \vert \Omega (y/ \vert y \vert )h( \vert y \vert ) \vert }{ \vert y \vert ^{n}}\,dy \\ & = \int _{2^{k\gamma '}}^{2^{(k+1) \gamma '}}f \Biggl(x-\sum _{j=1}^{\Lambda }a_{l_{j}}\varphi (t)^{l_{j}} \theta \Biggr) \bigl\vert h(t) \bigr\vert \frac{dt}{t} \Vert \Omega \Vert _{L^{1}({\mathrm{S}}^{n-1})}. \end{aligned}$$ It is clear that \(\|\Omega \|_{L^{1}({\mathrm{S}}^{n-1})}\leq C\).
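The next step bounds the dt/t integral of \(|h|\) by splitting \([2^{k\gamma '},2^{(k+1)\gamma '}]\) into about \(\gamma '\) dyadic blocks, exactly as in the bound \(\int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}}|h(t)|\,dt/t\leq 4\gamma '\|h\|_{\Delta _{\gamma }(\mathbb{R}_{+})}\) established earlier. As a concrete numerical sanity check (a sketch of ours with \(h\equiv 1\), for which \(\|h\|_{\Delta _{\gamma }(\mathbb{R}_{+})}=1\) under the usual definition of the \(\Delta _{\gamma }\) norm):

```python
import math

# Sanity check of the dyadic splitting bound for the dt/t integral of |h|:
# with h == 1 (so ||h||_{Delta_gamma} = 1 under the usual definition),
#   int_{2^{k*gp}}^{2^{(k+1)*gp}} dt/t = gp * ln 2 <= 4 * gp,
# where gp stands for gamma'.  (A numerical sketch, not from the text.)

def dyadic_integral(k, gp, n=100000):
    """Midpoint rule for int_{2^{k*gp}}^{2^{(k+1)*gp}} dt/t."""
    a, b = 2.0**(k*gp), 2.0**((k+1)*gp)
    step = (b - a)/n
    return sum(step/(a + (i + 0.5)*step) for i in range(n))

for gp in (1.0, 1.5, 4.0):
    val = dyadic_integral(k=0, gp=gp)
    assert math.isclose(val, gp*math.log(2), rel_tol=1e-6)  # exact value
    assert val <= 4*gp                                      # the bound
```

The closed-form value \(\gamma '\ln 2\) shows the constant \(4\gamma '\) is comfortable; the factor \(\gamma '\) (rather than an absolute constant) is what forces the restriction \(p>\gamma '\) in the lemma above.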
By Hölder's inequality and a change of variables, one has $$\begin{aligned} & \bigl\vert \vert \sigma _{h,\Omega,k,\theta,0} \vert *f(x) \bigr\vert \\ &\quad\leq C \int _{2^{k\gamma '}}^{2^{(k+1)\gamma '}} \Biggl\vert f \Biggl(x-\sum _{j=1}^{\Lambda }a_{l_{j}}\varphi (t)^{l_{j}} \theta \Biggr) \Biggr\vert \bigl\vert h(t) \bigr\vert \frac{dt}{t} \\ &\quad\leq C\sum_{i=0}^{[\gamma ']} \int _{2^{k \gamma '+i}}^{2^{k\gamma '+i+1}} \Biggl\vert f \Biggl(x-\sum _{j=1}^{\Lambda }a_{l_{j}}\varphi (t)^{l_{j}}\theta \Biggr) \Biggr\vert \bigl\vert h(t) \bigr\vert \frac{dt}{t} \\ &\quad\leq C \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})}\sum_{i=0}^{[\gamma ']} \Biggl( \int _{2^{k\gamma '+i}}^{2^{k\gamma '+i+1}} \Biggl\vert f \Biggl(x-\sum _{j=1}^{\Lambda }a_{l_{j}}\varphi (t)^{l_{j}} \theta \Biggr) \Biggr\vert ^{\gamma '}\frac{dt}{t} \Biggr)^{1/\gamma '} \\ &\quad= C \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})}\sum_{i=0}^{[ \gamma ']} \Biggl( \int _{\varphi (2^{k\gamma '+i})}^{\varphi (2^{k \gamma '+i+1})} \Biggl\vert f \Biggl(x-\sum _{j=1}^{\Lambda }a_{l_{j}}t^{l_{j}} \theta \Biggr) \Biggr\vert ^{\gamma '} \frac{dt}{\varphi ^{-1}(t)\varphi '(\varphi ^{-1}(t))} \Biggr)^{1/ \gamma '} \\ &\quad\leq C \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})}\sum_{i=0}^{[\gamma ']} \Biggl( \int _{\varphi (2^{k\gamma '+i})}^{ \varphi (2^{k\gamma '+i+1})} \Biggl\vert f \Biggl(x-\sum _{j=1}^{\Lambda }a_{l_{j}}t^{l_{j}} \theta \Biggr) \Biggr\vert ^{\gamma '}\frac{dt}{t} \Biggr)^{1/\gamma '} \\ &\quad\leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \Biggl(\sup _{r>0}\frac{1}{r} \int _{ \vert t \vert \leq r} \Biggl\vert f \Biggl(x- \sum _{j=1}^{\Lambda }a_{l_{j}}t^{l_{j}}\theta \Biggr) \Biggr\vert ^{ \gamma '}\,dt \Biggr)^{1/\gamma '}. 
\end{aligned}$$ It follows that $$\begin{aligned} \sup_{k\in \mathbb{Z}} \bigl\vert \vert \sigma _{h,\Omega,k,\theta,0} \vert *f(x) \bigr\vert \leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \Biggl(\sup_{r>0}\frac{1}{r} \int _{ \vert t \vert \leq r} \Biggl\vert f \Biggl(x-\sum _{j=1}^{\Lambda }a_{l_{j}}t^{l_{j}}\theta \Biggr) \Biggr\vert ^{\gamma '}\,dt \Biggr)^{1/ \gamma '}. \end{aligned}$$ This together with Lemma 2.6 yields (3.23). □ The following result is the main ingredient of the proof of Theorem 1.2. Let \(A>0\), \(\Lambda \in \mathbb{N}\setminus \{0\}\) and \(\{\sigma _{k,s}:0\leq s\leq \Lambda {\textit{ and }}k\in \mathbb{Z}\}\) be a family of uniformly bounded Borel measures on \(\mathbb{R}^{n}\) with \(\widehat{\sigma _{k,0}}(\xi )=0\) for every \(k\in \mathbb{Z}\) and \(\xi \in \mathbb{R}^{n}\). For \(1\leq s\leq \Lambda \), let \(\eta _{s}>1\), \(v\geq 1\), \(\delta _{s}, \beta _{s}>0\), \(\{a_{k,s,v}\}\) be a sequence of positive numbers, \(\ell _{s}\in \mathbb{N}\setminus \{0\}\) and \(L_{s}:\mathbb{R}^{n}\rightarrow \mathbb{R}^{\ell _{s}}\) be a linear transformation. Suppose that there exists a constant \(C>0\) independent of A such that the following are satisfied for \(k\in \mathbb{Z}, \xi \in \mathbb{R}^{n}\) and \(s\in \{1,\ldots,\Lambda \}\): (a) \(|\widehat{\sigma _{k,s}}(\xi )|\leq CA\min \{1,|a_{k,s,v}L_{s}(\xi )|^{- \delta _{s}/v}\}\); (b) \(|\widehat{\sigma _{k,s}}(\xi )-\widehat{\sigma _{k,s-1}}(\xi )| \leq CA|a_{k,s,v}L_{s}( \xi )|^{\beta _{s}/v}\); (c) \(\inf_{k\in \mathbb{Z}}\frac{a_{k+1,s,v}}{a_{k,s,v}}\geq \eta _{s}^{v}\) or \(\inf_{k\in \mathbb{Z}}\frac{a_{k,s,v}}{a_{k+1,s,v}}\geq \eta _{s}^{v}\); (d) For some \(q\in (1,\infty )\), it holds that $$\begin{aligned} \Bigl\Vert \sup_{k\in \mathbb{Z}} \bigl\vert \vert \sigma _{k,s} \vert *f \bigr\vert \Bigr\Vert _{L^{q}(\mathbb{R}^{n})} \leq CA \Vert f \Vert _{L^{q}(\mathbb{R}^{n})}, \quad\forall f\in L^{q}\bigl( \mathbb{R}^{n}\bigr).
\end{aligned}$$ Then there exists a constant \(C>0\) such that $$\begin{aligned} \Biggl\Vert \sup_{k\in \mathbb{Z}} \Biggl\vert \sum _{j=k}^{\infty }\sigma _{j,\Lambda }*f \Biggr\vert \Biggr\Vert _{L^{p}(\mathbb{R}^{n})}\leq CA \Vert f \Vert _{L^{p}( \mathbb{R}^{n})}, \quad\forall f\in L^{p}\bigl(\mathbb{R}^{n}\bigr), \end{aligned}$$ where \(p=2\) if \(q=2\), \(p\in (q,2]\) if \(q\in (1,2)\), and \(p\in [2,\min \{q, \frac{2q}{q-1}\})\) if \(q>2\). Here, \(C>0\) is independent of \(A, v, \{L_{s}\}_{s=1}^{\Lambda }, f\), but may depend on \(p, n, \Lambda, \{\ell _{s}\}_{s=1}^{\Lambda }, \{\beta _{s}\}_{s=1}^{\Lambda }, \{\delta _{s}\}_{s=1}^{\Lambda }\), and \(\{\eta _{s}\}_{s=1}^{\Lambda }\). We adapt the method of [22] to prove this lemma. For simplicity, we only consider the case \(\inf_{k\in \mathbb{Z}} \frac{a_{k+1,s,v}}{a_{k,s,v}}\geq \eta _{s}^{v}\), since the other case can be proved similarly. For \(s\in \{1,\ldots,\Lambda \}\), we set \(r_{s}=\operatorname{rank}(L_{s})\) and let \(\pi _{r_{s}}^{n}(\xi )=(\xi _{1},\ldots,\xi _{r_{s}})\) be the projection from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{r_{s}}\). Invoking [22, Lemma 6.1], there exist two nonsingular linear transformations \(H_{s}:\mathbb{R}^{r_{s}}\rightarrow \mathbb{R}^{r_{s}}\) and \(G_{s}:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}\) such that $$\begin{aligned} \bigl\vert H_{s}\pi _{r_{s}}^{n}G_{s}( \xi ) \bigr\vert \leq \bigl\vert L_{s}(\xi ) \bigr\vert \leq \ell _{s} \bigl\vert H_{s} \pi _{r_{s}}^{n}G_{s}( \xi ) \bigr\vert . \end{aligned}$$ Let \(\phi \in \mathcal{C}_{0}^{\infty }(\mathbb{R})\) be such that \(\operatorname{supp}(\phi )=\{|t|\leq 1\}\) and \(\phi (t)\equiv 1\) for \(|t|<1/2\).
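The cutoff φ is used to telescope the measures \(\sigma _{k,s}\) across the parameter s on the Fourier side. The algebra behind that telescoping can be previewed with scalar stand-ins for the transforms and the cutoff factors (illustrative numbers of ours, not the actual measures):

```python
from math import prod, isclose

# Toy check of the telescoping that the cutoff phi enables: with scalar
# stand-ins sigma[s] for the transforms and p[j] for the cutoff factors
# (arbitrary illustrative numbers), the quantities
#   mu_s = sigma[s] * prod_{j > s} p_j  -  sigma[s-1] * prod_{j >= s} p_j
# sum over s = 1, ..., Lam to sigma[Lam], provided sigma[0] = 0.

Lam = 4
sigma = [0.0, 1.3, -0.7, 2.2, 0.5]           # sigma[0] = 0 is essential
p = {j: 0.1*j + 0.3 for j in range(1, Lam + 1)}

def mu(s):
    tail = prod(p[j] for j in range(s + 1, Lam + 1))
    return sigma[s]*tail - sigma[s - 1]*p[s]*tail

total = sum(mu(s) for s in range(1, Lam + 1))
assert isclose(total, sigma[Lam])            # the sum collapses to sigma[Lam]
```

Writing \(T_{s}=\prod_{j>s}p_{j}\), each term is \(\sigma _{s}T_{s}-\sigma _{s-1}T_{s-1}\), so the sum telescopes to \(\sigma _{\Lambda }T_{\Lambda }-\sigma _{0}T_{0}=\sigma _{\Lambda }\); the same cancellation underlies the decomposition below.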
For \(s\in \{1,\ldots,\Lambda \}\), we define a sequence of measures \(\{\mu _{k,s}\}_{k\in \mathbb{Z}}\) on \(\mathbb{R}^{n}\) by $$\begin{aligned} \widehat{\mu _{k,s}}(\xi )=\widehat{\sigma _{k,s}}(\xi ) \prod_{j=s+1}^{\Lambda }\phi \bigl( \bigl\vert a_{k,j,v}H_{j}\pi _{r_{j}}^{n}G_{j}( \xi ) \bigr\vert \bigr)- \widehat{\sigma _{k,s-1}}(\xi )\prod _{j=s}^{\Lambda }\phi \bigl( \bigl\vert a_{k,j,v}H_{j} \pi _{r_{j}}^{n}G_{j}( \xi ) \bigr\vert \bigr). \end{aligned}$$ It is not difficult to see that $$\begin{aligned} \sigma _{k,\Lambda }=\sum_{s=1}^{\Lambda }\mu _{k,s}. \end{aligned}$$ In view of (3.27) we write $$\begin{aligned} \sup_{k\in \mathbb{Z}} \Biggl\vert \sum_{j=k}^{\infty } \sigma _{j,\Lambda }*f \Biggr\vert \leq \sum_{s=1}^{\Lambda } \sup_{k\in \mathbb{Z}} \Biggl\vert \sum_{j=k}^{\infty } \mu _{j,s}*f \Biggr\vert =:\sum_{s=1}^{\Lambda }T_{s}^{*}(f). \end{aligned}$$ Therefore, for (3.24), it suffices to show that $$\begin{aligned} \bigl\Vert T_{s}^{*}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})} \leq C_{p}A \Vert f \Vert _{L^{p}( \mathbb{R}^{n})} \end{aligned}$$ for all \(1\leq s\leq \Lambda \), where \(p=2\) if \(q=2\), and \(p\in (q,2]\) if \(q\in (1,2)\), and \(p\in [2,\min \{q,\frac{2q}{q-1}\})\) if \(q>2\). Here, \(C_{p}>0\) is independent of \(A, v, L_{s}, f\), but may depend on \(p, n, \Lambda, \ell _{s}, \beta _{s}, \delta _{s}, \eta _{s}\). Let \(\psi \in \mathcal{S}(\mathbb{R})\) be such that \(\psi (\xi )\equiv 1\) when \(|\xi |<1\) and \(\psi (\xi )\equiv 0\) when \(|\xi |>\eta _{s}\). Define the function \(\Phi _{k}\) by \(\widehat{\Phi _{k}}(\xi )=\psi (a_{k,s,v}|H_{s}\pi _{r_{s}}^{n} G_{s}( \xi )|)\). Write $$\begin{aligned} \sum_{j=k}^{\infty }\mu _{j,s}*f&= \Phi _{k}*T_{s}(f)+( \delta -\Phi _{k})*\sum _{j=k}^{\infty }\mu _{j,s}*f-\Phi _{k}* \sum_{j=-\infty }^{k-1}\mu _{j,s}*f\\ &=:I_{k,1}(f)+I_{k,2}(f)+I_{k,3}(f), \end{aligned}$$ where δ is the Dirac delta function and $$\begin{aligned} T_{s}(f)=\sum_{k\in \mathbb{Z}}\mu _{k,s}*f. 
\end{aligned}$$ Consequently, $$\begin{aligned} T_{s}^{*}(f)\leq \sup_{k\in \mathbb{Z}} \bigl\vert I_{k,1}(f) \bigr\vert +\sup_{k\in \mathbb{Z}} \bigl\vert I_{k,2}(f) \bigr\vert +\sup_{k\in \mathbb{Z}} \bigl\vert I_{k,3}(f) \bigr\vert . \end{aligned}$$ We first prove that $$\begin{aligned} \bigl\Vert T_{s}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq C_{p}A \Vert f \Vert _{L^{p}( \mathbb{R}^{n})} \end{aligned}$$ for \(p\in (\frac{2q}{q+1},\frac{2q}{q-1})\) and \(s\in \{1,\ldots,\Lambda \}\), where \(C_{p}>0\) is independent of \(A, v, L_{s}, f\), but may depend on \(p, n, \Lambda, \ell _{s}, \beta _{s}, \delta _{s}, \eta _{s}\). In view of assumptions (a), (b), and (3.26), $$\begin{aligned} &\bigl\vert \widehat{\mu _{k,s}}(\xi ) \bigr\vert \leq CA\bigl( \bigl\vert a_{k,s,v} L_{s}(\xi ) \bigr\vert ^{\beta _{s}/v}+ \bigl\vert a_{k,s,v} L_{s}(\xi ) \bigr\vert ^{1/v}\bigr); \end{aligned}$$ $$\begin{aligned} &\bigl\vert \widehat{\mu _{k,s}}(\xi ) \bigr\vert \leq CA\min \bigl\{ 1, \bigl\vert a_{k,s,v} L_{s}(\xi ) \bigr\vert ^{- \delta _{s}/v}+ \bigl\vert a_{k,s,v}L_{s}(\xi ) \bigr\vert ^{-1/v}\bigr\} . \end{aligned}$$ Let \(\{\Psi _{k,s}\}_{k\in \mathbb{Z}}\) be a sequence of nonnegative functions in \(\mathcal{C}_{0}^{\infty }(\mathbb{R})\) such that $$\begin{aligned} &\mathop{\operatorname{supp}}(\Psi _{k,s})\subset \bigl[a_{k+1,s,v}^{-1},a_{k-1,s,v}^{-1} \bigr], \qquad\sum_{k\in \mathbb{Z}}\Psi _{k,s}^{2}(t)=1,\\ &\biggl\vert \biggl(\frac{d}{dt} \biggr)^{j}\Psi _{k,s}(t) \biggr\vert \leq C_{j} \vert t \vert ^{-j} \quad{\text{for all }} t>0 {\text{ and }} j=1,2,\ldots, \end{aligned}$$ where \(C_{j}\) are independent of \(s, v\), and k. Define the Fourier multiplier operator \(S_{j,s}\) by $$\begin{aligned} \widehat{S_{j,s}f}(\xi )=\Psi _{j,s}\bigl( \bigl\vert H_{s}\pi _{r_{s}}^{n}G_{s}(\xi ) \bigr\vert \bigr) \hat{f}(\xi ) \quad{\text{for }} j\in \mathbb{Z}.
\end{aligned}$$ Thus, the operator \(T_{s}\) can be decomposed as $$\begin{aligned} T_{s}(f)=\sum_{k\in \mathbb{Z}}\mu _{k,s}* \sum_{j \in \mathbb{Z}}S_{j+k,s}S_{j+k,s}f =\sum _{j\in \mathbb{Z}} \sum_{k\in \mathbb{Z}}S_{j+k,s}( \mu _{k,s}*S_{j+k,s}f) =: \sum_{j\in \mathbb{Z}}T_{s,j}(f). \end{aligned}$$ By the Littlewood–Paley theory, Plancherel's theorem, and assumption (c), we then use (3.33) and (3.34) to obtain $$\begin{aligned} \bigl\Vert T_{s,j}(f) \bigr\Vert _{L^{2}(\mathbb{R}^{n})} &\leq C \biggl\Vert \biggl(\sum_{k\in \mathbb{Z}} \vert \mu _{k,s}*S_{j+k,s}f \vert ^{2} \biggr)^{1/2} \biggr\Vert _{L^{2}(\mathbb{R}^{n})} \\ & \leq C \biggl(\sum_{k\in \mathbb{Z}} \int _{a_{j+k+1,s,v}^{-1}\leq \vert H_{s}\pi _{r_{s}}^{n}G_{s}( \xi ) \vert \leq a_{j+k-1,s,v}^{-1}} \bigl\vert \widehat{\mu _{k,s}}(\xi ) \bigr\vert ^{2} \bigl\vert \hat{f}(\xi ) \bigr\vert ^{2}\,d\xi \biggr)^{1/2} \\ & \leq CA\eta _{s}^{-c \vert j \vert } \Vert f \Vert _{L^{2}(\mathbb{R}^{n})} \end{aligned}$$ for some \(c>0\), where \(C>0\) is independent of \(A, v, L_{s}, f\), but may depend on \(\ell _{s}, \beta _{s}, \delta _{s}, \eta _{s}\). On the other hand, by our assumption (d), (3.26) and a well-known result on maximal functions (see [22]), there exists a constant \(C>0\) independent of \(A, v, L_{s}\) such that $$\begin{aligned} \Bigl\Vert \sup_{k\in \mathbb{Z}} \bigl\vert \vert \mu _{k,s} \vert *f \bigr\vert \Bigr\Vert _{L^{q}( \mathbb{R}^{n})} \leq CA \Vert f \Vert _{L^{q}(\mathbb{R}^{n})}, \quad\forall f\in L^{q}\bigl( \mathbb{R}^{n}\bigr) \end{aligned}$$ for any \(1\leq s\leq \Lambda \). Using (3.38) and the lemma in [16, pp. 
544], $$\begin{aligned} \biggl\Vert \biggl(\sum_{k\in \mathbb{Z}} \vert \mu _{k,s}*g_{k} \vert ^{2} \biggr)^{{1}/{2}} \biggr\Vert _{L^{p}(\mathbb{R}^{n})}\leq C_{p}A \biggl\Vert \biggl(\sum _{k \in \mathbb{Z}} \vert g_{k} \vert ^{2} \biggr)^{{1}/{2}} \biggr\Vert _{L^{p}(\mathbb{R}^{n})} \end{aligned}$$ for \(|1/p-1/2|=1/(2q)\) and arbitrary functions \(\{g_{k}\}_{k}\in L^{p}(\ell ^{2}, \mathbb{R}^{n})\). Here, \(C_{p}>0\) is independent of \(A, v, L_{s}\). Combining (3.39) with the Littlewood–Paley theory implies $$\begin{aligned} \bigl\Vert T_{s,j}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})} &\leq C \biggl\Vert \biggl(\sum_{k\in \mathbb{Z}} \vert \mu _{k,s}*S_{j+k,s}f \vert ^{2} \biggr)^{1/2} \biggr\Vert _{L^{p}(\mathbb{R}^{n})} \\ & \leq CA \biggl\Vert \biggl(\sum_{k \in \mathbb{Z}} \vert S_{j+k,s}f \vert ^{2} \biggr)^{1/2} \biggr\Vert _{L^{p}(\mathbb{R}^{n})} \leq CA \Vert f \Vert _{L^{p}(\mathbb{R}^{n})}, \end{aligned}$$ where \(|1/p-1/2|=1/(2q)\) and \(C>0\) is independent of \(A, v, L_{s}\). By interpolation between (3.37) and (3.40), we have that, for any \(p\in (\frac{2q}{q+1},\frac{2q}{q-1})\) and some \(c'>0\), $$\begin{aligned} \bigl\Vert T_{s,j}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq CA\eta _{s}^{-c' \vert j \vert } \Vert f \Vert _{L^{p}( \mathbb{R}^{n})}. \end{aligned}$$ Inequality (3.41) together with (3.36) and Minkowski's inequality implies (3.32). By (3.32) and a well-known result on maximal functions (see [22]), we have that, for all \(p\in (\frac{2q}{q+1},\frac{2q}{q-1})\), $$\begin{aligned} \Bigl\Vert \sup_{k\in \mathbb{Z}} \bigl\vert I_{k,1}(f) \bigr\vert \Bigr\Vert _{L^{p}( \mathbb{R}^{n})} \leq C \bigl\Vert T_{s}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq C_{p}A \Vert f \Vert _{L^{p}(\mathbb{R}^{n})}, \end{aligned}$$ where \(C_{p}>0\) is independent of \(A, v, L_{s}, f\), but may depend on \(p, n, \ell _{s}, \beta _{s}, \delta _{s}, \eta _{s}\). We now estimate \(\|\sup_{k\in \mathbb{Z}}|I_{k,2}(f)|\|_{L^{p}(\mathbb{R}^{n})}\). 
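For clarity, the passage from (3.41) to (3.32) above is simply Minkowski's inequality applied to the decomposition \(T_{s}=\sum_{j\in \mathbb{Z}}T_{s,j}\), together with a geometric series that converges because \(\eta _{s}>1\): $$\begin{aligned} \bigl\Vert T_{s}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq \sum_{j\in \mathbb{Z}} \bigl\Vert T_{s,j}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})} \leq CA\sum_{j\in \mathbb{Z}}\eta _{s}^{-c' \vert j \vert } \Vert f \Vert _{L^{p}(\mathbb{R}^{n})} =CA\frac{1+\eta _{s}^{-c'}}{1-\eta _{s}^{-c'}} \Vert f \Vert _{L^{p}(\mathbb{R}^{n})}, \end{aligned}$$ and the last factor is a finite constant depending only on \(\eta _{s}\) and \(c'\).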
Write $$\begin{aligned} \sup_{k\in \mathbb{Z}} \bigl\vert I_{k,2}(f) \bigr\vert \leq \sum_{j=0}^{ \infty }\sup _{k\in \mathbb{Z}} \bigl\vert (\delta -\Phi _{k})*\mu _{j+k,s}*f \bigr\vert =:\sum_{j=0}^{\infty }I_{j}(f). \end{aligned}$$ An application of (3.38) shows that $$\begin{aligned} \bigl\Vert I_{j}(f) \bigr\Vert _{L^{q}(\mathbb{R}^{n})}\leq C \Bigl\Vert \sup_{k\in \mathbb{Z}} \bigl\vert \vert \mu _{j+k,s} \vert * \vert f \vert \bigr\vert \Bigr\Vert _{L^{q}(\mathbb{R}^{n})} \leq CA \Vert f \Vert _{L^{q}(\mathbb{R}^{n})}. \end{aligned}$$ In view of Plancherel's theorem, (3.25), and (3.33), we have that, for some \(c>0\), $$\begin{aligned} \bigl\Vert I_{j}(f) \bigr\Vert _{L^{2}(\mathbb{R}^{n})}^{2} &\leq \biggl\Vert \biggl(\sum_{k\in \mathbb{Z}} \bigl\vert ( \delta -\Phi _{k})*\mu _{j+k,s}*f \bigr\vert ^{2} \biggr)^{1/2} \biggr\Vert _{L^{2}(\mathbb{R}^{n})}^{2} \\ & \leq \sum_{k\in \mathbb{Z}} \int _{\{a_{k,s,v} \vert H_{s}\pi _{r_{s}}^{n}G_{s}(\xi ) \vert \geq 1 \}} \bigl\vert \widehat{\mu _{j+k,s}}(\xi ) \bigr\vert ^{2} \bigl\vert \hat{f}(\xi ) \bigr\vert ^{2}\,d\xi \\ & \leq \sum_{k\in \mathbb{Z}}\sum_{i=-\infty }^{k} \int _{\{a_{i,s,v}^{-1} \leq \vert L_{s}( \xi ) \vert < a_{i-1,s,v}^{-1}\}} \bigl\vert \widehat{\mu _{j+k,s}}(\xi ) \bigr\vert ^{2} \bigl\vert \hat{f}( \xi ) \bigr\vert ^{2}\,d\xi \\ & \leq C\sum_{k\in \mathbb{Z}}\sum _{i=-\infty }^{k}A^{2}\eta _{s}^{-c(j+k-i)} \int _{\{a_{i,s,v}^{-1} \leq \vert L_{s}(\xi ) \vert < a_{i-1,s,v}^{-1}\}} \bigl\vert \hat{f}(\xi ) \bigr\vert ^{2}\,d\xi \\ & \leq CA^{2}\eta _{s}^{-jc} \sum _{i=0}^{\infty }\eta _{s}^{-ic} \Vert f \Vert _{L^{2}(\mathbb{R}^{n})}^{2} \\ & \leq CA^{2}\eta _{s}^{-jc} \Vert f \Vert _{L^{2}( \mathbb{R}^{n})}^{2}. \end{aligned}$$ $$\begin{aligned} \bigl\Vert I_{j}(f) \bigr\Vert _{L^{2}(\mathbb{R}^{n})}\leq CA\eta _{s}^{-jc/2} \Vert f \Vert _{L^{2}( \mathbb{R}^{n})}. 
\end{aligned}$$ An interpolation between (3.44) and (3.45) gives that $$\begin{aligned} \bigl\Vert I_{j}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq CA\eta _{s}^{-\tau j} \Vert f \Vert _{L^{p}( \mathbb{R}^{n})} \end{aligned}$$ for some \(\tau >0\) and \(p\in [2,q]\) if \(q>2\) or \(p\in (q,2]\) if \(q\in (1,2)\) or \(p=2\) if \(q=2\). Combining this with (3.43) leads to $$\begin{aligned} \Bigl\Vert \sup_{k\in \mathbb{Z}} \bigl\vert I_{k,2}(f) \bigr\vert \Bigr\Vert _{L^{p}( \mathbb{R}^{n})}\leq CA \Vert f \Vert _{L^{p}(\mathbb{R}^{n})} \end{aligned}$$ for \(p\in [2,q]\) if \(q>2\) or \(p\in (q,2]\) if \(q\in (1,2)\) or \(p=2\) if \(q=2\). It remains to estimate \(\|\sup_{k\in \mathbb{Z}}|I_{k,3}(f)|\|_{L^{p}(\mathbb{R}^{n})}\). Write $$\begin{aligned} \sup_{k\in \mathbb{Z}} \bigl\vert I_{k,3}(f) \bigr\vert = \sup_{k\in \mathbb{Z}} \Biggl\vert \sum_{j=1}^{\infty } \Phi _{k}*\mu _{k-j,s}*f \Biggr\vert \leq \sum _{j=1}^{\infty }\sup_{k\in \mathbb{Z}} \vert \Phi _{k}*\mu _{k-j,s}*f \vert =:\sum _{j=1}^{\infty }J_{j}(f). \end{aligned}$$ In view of (3.38), one can get $$\begin{aligned} \bigl\Vert J_{j}(f) \bigr\Vert _{L^{q}(\mathbb{R}^{n})}\leq C \Bigl\Vert \sup_{k\in \mathbb{Z}} \bigl\vert \vert \mu _{j-k,s} \vert * \vert f \vert \bigr\vert \Bigr\Vert _{L^{q}(\mathbb{R}^{n})} \leq CA \Vert f \Vert _{L^{q}(\mathbb{R}^{n})}. 
\end{aligned}$$ In view of Plancherel's theorem, we use (3.33) and (3.25) to get $$\begin{aligned} & \bigl\Vert J_{j}(f) \bigr\Vert _{L^{2}(\mathbb{R}^{n})} \\ &\quad\leq \biggl\Vert \biggl(\sum_{k\in \mathbb{Z}} \vert \Phi _{k}*\mu _{k-j,s}*f \vert ^{2} \biggr)^{1/2} \biggr\Vert _{L^{2}(\mathbb{R}^{n})} \\ &\quad\leq \biggl(\sum_{k\in \mathbb{Z}} \int _{\{a_{k,s,v} \vert H_{s} \pi _{r_{s}}^{n}G_{s}(\xi ) \vert \leq \eta _{s}\}} \bigl\vert \widehat{\mu _{k-j,s}}( \xi ) \bigr\vert ^{2} \bigl\vert \hat{f}(\xi ) \bigr\vert ^{2}\,d\xi \biggr)^{1/2} \\ &\quad\leq C \biggl( \int _{\mathbb{R}^{n}}\sum_{k\in \mathbb{Z}} \bigl\vert \widehat{\mu _{k-j,s}}(\xi ) \bigr\vert ^{2}\chi _{\{a_{k,s,v} \vert L_{s}( \xi ) \vert \leq \ell _{s}\eta _{s}\}} \bigl\vert \hat{f}(\xi ) \bigr\vert ^{2}\,d\xi \biggr)^{1/2} \\ &\quad\leq CA\bigl(\eta _{s}^{-\beta _{s}j}+\eta _{s}^{-j} \bigr) \Vert f \Vert _{L^{2}( \mathbb{R}^{n})} \\ &\qquad{} \times \biggl(\sup_{\xi \in \mathbb{R}^{n}} \sum_{k\in \mathbb{Z}} \bigl( \bigl\vert a_{k,s,v}L_{s}(\xi ) \bigr\vert ^{2\beta _{s}/v}+ \bigl\vert a_{k,s,v} L_{s}(\xi ) \bigr\vert ^{2/v}\bigr)\chi _{\{a_{k,s,v} \vert L_{s}(\xi ) \vert \leq \ell _{s}\eta _{s} \}} \biggr)^{1/2} \\ &\quad\leq CA\bigl(\eta _{s}^{-\beta _{s}j}+\eta _{s}^{-j} \bigr) \Vert f \Vert _{L^{2}( \mathbb{R}^{n})}, \end{aligned}$$ where in the last inequality of (3.49) we have used the properties of the lacunary sequence and the fact that \(\ell _{s}\eta _{s}>1\), \(v\geq 1\). Here, \(C>0\) is independent of \(A, v, L_{s}\), but may depend on \(n, \ell _{s}, \beta _{s}, \delta _{s}, \eta _{s}\). An interpolation between (3.48) and (3.49) leads to $$\begin{aligned} \bigl\Vert J_{j}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq CA\bigl(\eta _{s}^{-\theta \beta _{s}j}+ \eta _{s}^{-\theta j}\bigr) \Vert f \Vert _{L^{p}(\mathbb{R}^{n})} \end{aligned}$$ for some \(\theta >0\), where \(p\in [2,q]\) if \(q>2\) or \(p\in (q,2]\) if \(q\in (1,2)\) or \(p=2\) if \(q=2\).
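For the reader's convenience, the interpolation step above can be made explicit. After linearizing the supremum in \(J_{j}\) in the standard way, Riesz–Thorin interpolation between (3.48) and (3.49), with the parameter \(\theta \in (0,1]\) determined by \(1/p=(1-\theta )/q+\theta /2\), together with the elementary inequality \((a+b)^{\theta }\leq a^{\theta }+b^{\theta }\) for \(a, b>0\), gives $$\begin{aligned} \bigl\Vert J_{j}(f) \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq (CA)^{1-\theta } \bigl(CA\bigl(\eta _{s}^{-\beta _{s}j}+\eta _{s}^{-j}\bigr) \bigr)^{\theta } \Vert f \Vert _{L^{p}(\mathbb{R}^{n})} \leq CA\bigl(\eta _{s}^{-\theta \beta _{s}j}+\eta _{s}^{-\theta j}\bigr) \Vert f \Vert _{L^{p}(\mathbb{R}^{n})}, \end{aligned}$$ which yields (3.50).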
By (3.47), (3.50), and Minkowski's inequality, for \(p\in [2,q]\) if \(q>2\) or \(p\in (q,2]\) if \(q\in (1,2)\) or \(p=2\) if \(q=2\). Then (3.29) follows from (3.31), (3.42), (3.46), and (3.51). This finishes the proof of Lemma 3.4. □ Let \(\Lambda, v\in \mathbb{N}\setminus \{0\}\). For \(1\leq s\leq \Lambda \), let \(\{a_{k,s,v}\}_{k\in \mathbb{Z}}\) be a lacunary sequence of positive numbers. For \(1\leq s\leq \Lambda \), let \(\delta _{s}>0, \eta _{s}>1, \ell _{s}\in \mathbb{N}\setminus \{0 \}\), and \(L_{s}:\mathbb{R}^{n}\rightarrow \mathbb{R}^{\ell _{s}}\) be linear transformations. Let \(\{\sigma _{s,k}: 0\leq s\leq \Lambda{\textit{ and }}k\in \mathbb{Z}\}\) be a family of measures on \(\mathbb{R}^{n}\) with \(\sigma _{0,k}=0\) for every \(k\in \mathbb{Z}\). Suppose that there exist \(p_{0}, q_{0}>1\) satisfying \((p_{0},q_{0})\neq (2,2)\) and \(c, A>0\) independent of v and \(\{L_{s}\}_{s=1}^{\Lambda }\) such that the following conditions are satisfied for any \(1\leq s\leq \Lambda \), \(k\in \mathbb{Z}\), \(\xi \in \mathbb{R}^{n}\), and \(\{g_{k,j}\}\in L^{p_{0}}(\mathbb{R}^{n},\ell ^{q_{0}}(\ell ^{2}))\): \(|\widehat{\sigma _{s,k}}(\xi )|\leq cA\min \{1,|a_{k,s,v}L_{s}(\xi )|^{-{ \delta _{s}}/{v}}\}\); \(|\widehat{\sigma _{s,k}}(\xi )-\widehat{\sigma _{s-1,k}}(\xi ) | \leq cA|a_{k,s,v}L_{s}(\xi )|^{{1}/{v}}\); $$\begin{aligned} &\biggl\Vert \biggl(\sum_{j\in \mathbb{Z}} \biggl(\sum _{k\in \mathbb{Z}} \vert \sigma _{s,k}*g_{k,j} \vert ^{2} \biggr)^{q_{0}/2} \biggr)^{1/q_{0}} \biggr\Vert _{L^{p_{0}}(\mathbb{R}^{n})}\\ &\quad\leq cA \biggl\Vert \biggl(\sum _{j \in \mathbb{Z}} \biggl(\sum_{k\in \mathbb{Z}} \vert g_{k,j} \vert ^{2} \biggr)^{q_{0}/2} \biggr)^{1/q_{0}} \biggr\Vert _{L^{p_{0}}(\mathbb{R}^{n})}. 
\end{aligned}$$ Then, for \(\alpha \in \mathbb{R}\) and \(({1}/{p},{1}/{q})\in B_{1}B_{2} \setminus \{({1}/{p_{0}},{1}/{q_{0}}),({1}/{2},{1}/{2}) \}\), there exists a constant \(C>0\) independent of v and \(\{L_{s}\}_{s=1}^{\Lambda }\) such that $$\begin{aligned} \biggl\Vert \sum_{k\in \mathbb{Z}}\sigma _{\Lambda,k}*f \biggr\Vert _{ \dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} \leq CA \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}( \mathbb{R}^{n})}, \end{aligned}$$ where \(B_{1}=({1}/{2},{1}/{2})\), \(B_{2}=({1}/{p_{0}},{1}/{q_{0}})\), and \(B_{1}B_{2}\) is the line segment from \(B_{1}\) to \(B_{2}\). Proof If \(\inf_{k\in \mathbb{Z}}\frac{a_{k+1,s,v}}{a_{k,s,v}}\geq \eta _{s}^{v}\) for all \(1\leq s\leq \Lambda \), the corresponding result has been proved in [27, Lemma 2.5]. Similar arguments will give the corresponding result for the case \(\inf_{k\in \mathbb{Z}}\frac{a_{k,s,v}}{a_{k+1,s,v}} \geq \eta _{s}^{v}\). The details are omitted. □ In order to prove Theorem 1.3, we need the following characterization of the Triebel–Lizorkin spaces. Lemma 3.6 Let \(0<\alpha <1\), \(1< p<\infty \), \(1< q\leq \infty \), and \(1\leq r<\min \{p,q\}\). Then $$\begin{aligned} \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})}\approx \biggl\Vert \biggl( \sum _{k\in \mathbb{Z}}2^{kq\alpha } \biggl( \int _{\mathfrak{R}_{n}} \bigl\vert \triangle _{2^{-k}\zeta }(f) \bigr\vert ^{r}\,d\zeta \biggr)^{q/r} \biggr)^{{1}/{q}} \biggr\Vert _{L^{p}(\mathbb{R}^{n})}. \end{aligned}$$ Our main ingredient in proving Theorem 1.3 is the following boundedness criterion. Lemma 3.7 Let \(v\geq 1\), \(\Lambda \in \mathbb{N} \setminus \{0\}\), and \(\{\sigma _{k,s}:k\in \mathbb{Z}, 1\leq s\leq \Lambda \}\) be a family of Borel measures on \(\mathbb{R}^{n}\) with \(\sigma _{k,0}=0\) for all \(k\in \mathbb{Z}\). Let \(|\sigma _{k,s}|\) be the total variation of \(\sigma _{k,s}\). Let \(\{a_{k,s,v}\}_{k\in \mathbb{Z}}\) be a lacunary sequence of positive numbers.
For \(1\leq s\leq \Lambda \), let \(\eta _{s}>1, \beta _{s}, \gamma _{s}>0\), \(M_{s}\in \mathbb{N}\setminus \{0\}\), and \(L_{s}:\mathbb{R}^{n}\rightarrow \mathbb{R}^{M_{s}}\) be linear transformations. Suppose that there exist \(C, A>0\) independent of v such that, for \(1\leq s\leq \Lambda \), \(k\in \mathbb{Z}\), and \(\xi \in \mathbb{R}^{n}\), the following conditions are satisfied: \(\max \{|\widehat{\sigma _{k,s}}(\xi ) -\widehat{\sigma _{k,s-1}}( \xi )|,|\widehat{|\sigma _{k,s}|}(\xi ) -\widehat{|\sigma _{k,s-1}|}( \xi )|\}\leq CA|a_{k,s,v}L_{s}(\xi )|^{1/v}\); \(\max \{|\widehat{\sigma _{k,s}}(\xi )|,|\widehat{|\sigma _{k,s}|}( \xi )|\} \leq CA\min \{1,|a_{k,s,v}L_{s}(\xi )|^{-\beta _{s}/v}\}\); There exists \(\vartheta \in \mathbb{R}^{n}\) such that \(\sup_{k\in \mathbb{Z}}||\sigma _{k,0}|*f(x)|\leq CA|f(x+\vartheta )|\) for any \(x\in \mathbb{R}^{n}\); There exist \(p_{0}, q_{0}>1\) satisfying \((p_{0},q_{0})\neq (2,2)\), \(1< r_{0}<\min \{p_{0},q_{0}\}\), and \(2\le u<\infty \) such that $$\begin{aligned} & \biggl\Vert \biggl(\sum_{l\in \mathbb{Z}} \biggl\Vert \biggl(\sum_{k\in \mathbb{Z}} \bigl\vert \vert \sigma _{k,s} \vert * g_{l, \zeta,k} \bigr\vert ^{u} \biggr)^{1/u} \biggr\Vert _{L^{r_{0}}(\mathfrak{R}_{n})}^{q_{0}} \biggr)^{1/q_{0}} \biggr\Vert _{L^{p_{0}}(\mathbb{R}^{n})} \\ &\quad\leq CA \biggl\Vert \biggl(\sum_{l\in \mathbb{Z}} \biggl\Vert \biggl(\sum_{k\in \mathbb{Z}} \vert g_{l,\zeta,k} \vert ^{u} \biggr)^{1/u} \biggr\Vert _{L^{r_{0}}(\mathfrak{R}_{n})}^{q_{0}} \biggr)^{1/q_{0}} \biggr\Vert _{L^{p_{0}}( \mathbb{R}^{n})}. 
\end{aligned}$$ Then, for \(\alpha \in (0,1)\) and \(({1}/{p},{1}/{q})\in P_{1}P_{2}\setminus \{({1}/{p_{0}},{1}/{q_{0}}) \}\), there exists a constant \(C>0\) independent of A and v such that $$\begin{aligned} &\biggl\Vert \biggl(\sum_{l\in \mathbb{Z}}2^{lq\alpha } \biggl( \int _{ \mathfrak{R}_{n}}\sup_{k\in \mathbb{Z}} \bigl\vert \vert \sigma _{k,s} \vert * \vert \triangle _{2^{-l}\zeta }f \vert \bigr\vert \,d\zeta \biggr)^{q} \biggr)^{{1}/{q}} \biggr\Vert _{L^{p}(\mathbb{R}^{n})}\leq CA \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}( \mathbb{R}^{n})}, \\ &\quad\forall 1\leq s\leq \Lambda. \end{aligned}$$ Here, \(P_{1}P_{2}\) denotes the line segment from \(P_{1}\) to \(P_{2}\) with \(P_{1}=({1}/{2}, {1}/{2})\) and \(P_{2}=({1}/{p_{0}},{1}/{q_{0}})\). Suppose also that the following inequality holds for \(1\leq s\leq \Lambda \): $$\begin{aligned} &\biggl\Vert \biggl(\sum_{j\in \mathbb{Z}} \biggl(\sum _{k\in \mathbb{Z}} \vert \sigma _{k,s}*g_{k,j} \vert ^{2} \biggr)^{q_{0}/2} \biggr)^{1/q_{0}} \biggr\Vert _{L^{p_{0}}(\mathbb{R}^{n})}\\ &\quad \leq CA \biggl\Vert \biggl(\sum _{j \in \mathbb{Z}} \biggl(\sum_{k\in \mathbb{Z}} \vert g_{k,j} \vert ^{2} \biggr)^{q_{0}/2} \biggr)^{1/q_{0}} \biggr\Vert _{L^{p_{0}}(\mathbb{R}^{n})}. \end{aligned}$$ Then, for \(\alpha \in (0,1)\) and \(({1}/{p},{1}/{q})\in P_{1}P_{2}\setminus \{({1}/{p_{0}},{1}/{q_{0}}),({1}/{2},{1}/{2}) \}\), there exists a constant \(C>0\) independent of A and v such that $$\begin{aligned} \Biggl\Vert \Biggl(\sum_{l\in \mathbb{Z}}2^{lq\alpha } \Biggl( \int _{ \mathfrak{R}_{n}}\sup_{k\in \mathbb{Z}} \Biggl\vert \sum _{j=k}^{ \infty }\sigma _{j,\Lambda }* \triangle _{2^{-l}\zeta }f \Biggr\vert \,d\zeta \Biggr)^{q} \Biggr)^{{1}/{q}} \Biggr\Vert _{L^{p}(\mathbb{R}^{n})}\leq CA \Vert f \Vert _{\dot{F}_{ \alpha }^{p,q}(\mathbb{R}^{n})}. \end{aligned}$$ The lemma can be proved by the arguments similar to those used in deriving [30, Lemma 2.9]. We omit the details. 
□ Proof of Theorem 1.1 Let \(h, \Omega, P, \varphi \) be given as in Theorem 1.1. By Lemma 2.1, there exist a sequence of complex numbers \(\{c_{j}\}_{j=1}^{\infty }\) and a sequence of \((1,\infty )\) atoms \(\{\Omega _{j}\}_{j=1}^{\infty }\) such that \(\Omega =\sum_{j=1}^{\infty }c_{j}\Omega _{j}\) and \(\|\Omega \|_{H^{1}({\mathrm{S}}^{n-1})} \approx \sum_{j=1}^{\infty }|c_{j}|\). By the definition of \(T_{h,\Omega,P,\varphi }\), one has $$\begin{aligned} T_{h,\Omega,P,\varphi }f=\sum_{j=1}^{\infty }c_{j}T_{h,\Omega _{j},P, \varphi }f. \end{aligned}$$ In view of (3.52) and the definition of \(\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})\), we have that, for \(1< p, q<\infty \) and \(\alpha \in \mathbb{R}\), $$\begin{aligned} \Vert T_{h,\Omega,P,\varphi }f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} \leq \sum _{j=1}^{\infty } \vert c_{j} \vert \Vert T_{h,\Omega _{j},P,\varphi }f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})}. \end{aligned}$$ Therefore, to prove Theorem 1.1, it suffices to prove that there exists \(C>0\) independent of \(h, \gamma, \Omega \) and the coefficients of P such that $$\begin{aligned} \Vert T_{h,\Omega,P,\varphi }f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} \leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} \end{aligned}$$ holds for any \((1,\infty )\) atom Ω, \(\alpha \in \mathbb{R}\), and \((p,q)\in \mathcal{R}_{\gamma }\). Let Ω be a \((1,\infty )\) atom satisfying (2.6)–(2.8) with \(0<\varrho \leq 1\) and \(\vartheta \in {\mathrm{S}}^{n-1}\). Without loss of generality, we may assume that \(\vartheta =\theta =(1,0,\ldots,0)\). By the definition of \(\sigma _{h,\Omega,k,\theta,\Lambda }\), we have $$\begin{aligned} T_{h,\Omega,P,\varphi }f=\sum_{k\in \mathbb{Z}}\sigma _{h, \Omega,k,\theta,\Lambda }*f.
\end{aligned}$$ Note that if \(\varphi \in \mathfrak{F}_{1}\) or \(\varphi \in \mathfrak{F}_{2}\), there exist \(C_{1}, C_{2}>0\) depending only on φ such that $$\begin{aligned} C_{1}\leq \frac{\varphi (2t)}{\varphi (t)}\leq C_{2}, \quad\forall t>0. \end{aligned}$$ In view of (3.55) and Lemma 3.1, $$\begin{aligned} \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s}}(\xi )- \widehat{\sigma _{h,\Omega,k,\theta,s-1}}(\xi ) \bigr\vert \leq C\gamma ' \Vert h \Vert _{ \Delta _{\gamma }(\mathbb{R}_{+})}\min \bigl\{ 1,\bigl(\varphi \bigl(2^{k\gamma '} \bigr)^{l_{s}} \bigl\vert L_{s}( \xi ) \bigr\vert \bigr)^{1/\gamma '}\bigr\} . \end{aligned}$$ By the properties of φ and arguments similar to those used in deriving [27, Lemma 2.4], one obtains that there exists \(C>0\) independent of \(h, \Omega, \gamma \) and \(\{a_{l_{i}}\}_{i=1}^{\Lambda }\) such that $$\begin{aligned} &\biggl\Vert \biggl(\sum_{j\in \mathbb{Z}} \biggl(\sum _{k\in \mathbb{Z}} \vert \sigma _{h,\Omega,k,\theta,s}*g_{k,j} \vert ^{2} \biggr)^{q/2} \biggr)^{1/q} \biggr\Vert _{L^{p}(\mathbb{R}^{n})} \\ &\quad\leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \biggl\Vert \biggl(\sum_{j\in \mathbb{Z}} \biggl(\sum_{k\in \mathbb{Z}} \vert g_{k,j} \vert ^{2} \biggr)^{q/2} \biggr)^{1/q} \biggr\Vert _{L^{p}(\mathbb{R}^{n})} \end{aligned}$$ for all \(1\leq s\leq \Lambda \) and \((1/p,1/q)\in \mathcal{R}_{\gamma }\). Then (3.53) follows from (3.2), (3.54), (3.56), (3.57), and Lemmas 3.2 and 3.5. □ Proof of Theorem 1.2 Let \(h, \Omega, P, \varphi \) be given as in Theorem 1.2. By Lemma 2.1, there exist a sequence of complex numbers \(\{c_{j}\}_{j=1}^{\infty }\) and a sequence of \((1,\infty )\) atoms \(\{\Omega _{j}\}_{j=1}^{\infty }\) such that \(\Omega =\sum_{j=1}^{\infty }c_{j}\Omega _{j}\) and \(\|\Omega \|_{H^{1}({\mathrm{S}}^{n-1})} \approx \sum_{j=1}^{\infty }|c_{j}|\).
In view of the definition of \(T_{h,\Omega,P, \varphi }^{*}\), $$\begin{aligned} T_{h,\Omega,P,\varphi }^{*}f\leq \sum_{j=1}^{\infty } \vert c_{j} \vert T_{h, \Omega _{j},P,\varphi }^{*}f. \end{aligned}$$ In view of (3.58), to prove Theorem 1.2, it suffices to show that there exists \(C>0\) independent of \(h, \gamma, \Omega \) and the coefficients of P such that $$\begin{aligned} \bigl\Vert T_{h,\Omega,P,\varphi }^{*}f \bigr\Vert _{L^{p}(\mathbb{R}^{n})}\leq C\gamma ' \Vert h \Vert _{\Delta _{\gamma }(\mathbb{R}_{+})} \Vert f \Vert _{L^{p}(\mathbb{R}^{n})} \end{aligned}$$ holds for any \((1,\infty )\) atom Ω and \(p\in (\gamma ',\infty )\) if \(\gamma \geq 2\) or \(p\in (\gamma ',\frac{2\gamma '}{\gamma '-2})\) if \(\gamma \in (4/3,2)\). Let Ω be a \((1,\infty )\) atom satisfying (2.6)–(2.8) with \(0<\varrho \leq 1\) and \(\vartheta \in {\mathrm{S}}^{n-1}\). Without loss of generality, we may assume that \(\vartheta =\theta =(1,0,\ldots,0)\). Let \(\{\sigma _{h,\Omega,k,\theta,s}\}_{s=0}^{\Lambda }\) be given as in the proof of Theorem 1.1. By a simple argument following from the proof of [18, Theorem 2], one has $$\begin{aligned} T_{h,\Omega,P,\varphi }^{*}f\leq \sup_{k\in \mathbb{Z}} \bigl\vert \vert \sigma _{h,\Omega,k,\theta,\Lambda } \vert *f \bigr\vert +\sup _{k\in \mathbb{Z}} \Biggl\vert \sum_{j=k}^{\infty } \sigma _{h,\Omega,j, \theta,\Lambda }*f \Biggr\vert . \end{aligned}$$ By (3.2), (3.54), (3.56), (3.60), and Lemmas 3.2–3.4, we have (3.59) for \(p\in (\gamma ',\infty )\) if \(\gamma \geq 2\) or \(p\in (\gamma ', \frac{2\gamma '}{\gamma '-2})\) if \(\gamma \in (4/3,2)\). □ Proof of Theorem 1.3 Let \(h, \Omega, P, \varphi \) be given as in Theorem 1.3.
Notice that $$\begin{aligned} \bigl\vert \triangle _{\zeta }\bigl(T_{h,\Omega,P,\varphi }^{*}f \bigr) (x) \bigr\vert & = \bigl\vert T_{h,\Omega,P, \varphi }^{*}f(x+\zeta )-T_{h,\Omega,P,\varphi }^{*}f(x) \bigr\vert \\ & = \bigl\vert T_{h,\Omega,P,\varphi }^{*}f_{\zeta }(x)-T_{h,\Omega,P,\varphi }^{*}f(x) \bigr\vert \leq T_{h,\Omega,P,\varphi }^{*}\bigl( \triangle _{\zeta }(f)\bigr) (x),\quad \forall x, \zeta \in \mathbb{R}^{n}. \end{aligned}$$ This together with Lemma 3.6 and (3.52) implies that, for \(\alpha \in (0,1)\) and \(1< p, q<\infty \), $$\begin{aligned} &\bigl\Vert T_{h,\Omega,P,\varphi }^{*}f \bigr\Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} \\ &\quad\leq C \biggl\Vert \biggl(\sum_{l\in \mathbb{Z}}2^{lq \alpha } \biggl( \int _{\mathfrak{R}_{n}} \bigl\vert \triangle _{2^{-l}\zeta } \bigl(T_{h, \Omega,P,\varphi }^{*}f\bigr) \bigr\vert \,d\zeta \biggr)^{q} \biggr)^{{1}/{q}} \biggr\Vert _{L^{p}( \mathbb{R}^{n})} \\ & \quad\leq C \biggl\Vert \biggl( \sum_{l\in \mathbb{Z}}2^{lq\alpha } \biggl( \int _{\mathfrak{R}_{n}} \bigl\vert T_{h, \Omega,P,\varphi }^{*}\bigl( \triangle _{2^{-l}\zeta }(f)\bigr) \bigr\vert \,d\zeta \biggr)^{q} \biggr)^{{1}/{q}} \biggr\Vert _{L^{p}(\mathbb{R}^{n})} \\ &\quad \leq C\sum_{j=1}^{\infty } \vert c_{j} \vert \biggl\Vert \biggl(\sum_{l\in \mathbb{Z}}2^{lq\alpha } \biggl( \int _{\mathfrak{R}_{n}} \bigl\vert T_{h,\Omega _{j},P,\varphi }^{*}\bigl( \triangle _{2^{-l}\zeta }(f)\bigr) \bigr\vert \,d\zeta \biggr)^{q} \biggr)^{{1}/{q}} \biggr\Vert _{L^{p}( \mathbb{R}^{n})}. 
\end{aligned}$$ Therefore, to establish the bounds for \(T_{h,\Omega,P,\varphi }^{*}\) on \(\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})\), it suffices to show that $$\begin{aligned} \biggl\Vert \biggl(\sum_{l\in \mathbb{Z}}2^{lq\alpha } \biggl( \int _{ \mathfrak{R}_{n}} \bigl\vert T_{h,\Omega,P,\varphi }^{*}\bigl( \triangle _{2^{-l}\zeta }(f)\bigr) \bigr\vert \,d \zeta \biggr)^{q} \biggr)^{{1}/{q}} \biggr\Vert _{L^{p}(\mathbb{R}^{n})}\leq C \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})} \end{aligned}$$ holds for any \((1,\infty )\) atom Ω, \(\alpha \in (0,1)\), and \(1< p, q<\infty \). Here, \(C>0\) is independent of Ω and the coefficients of P. In what follows, let Ω be a \((1,\infty )\) atom satisfying (2.6)–(2.8) with \(0<\varrho \leq 1\) and \(\vartheta \in {\mathrm{S}}^{n-1}\). Without loss of generality, we may assume that \(\vartheta =\theta =(1, 0,\ldots,0)\). Let \(P, \{P_{s}\}_{s=0}^{\Lambda }, \{\Gamma _{s, \theta }\}_{s=0}^{\Lambda }\), \(\{L_{s}\}_{s=1}^{\Lambda }\), and \(\{\sigma _{h, \Omega,k,\theta,s}\}_{s=0}^{\Lambda }\) be given as in the proof of Theorem 1.1. We define the measures \(\{\nu _{k,s}\}_{s=0}^{2\Lambda }\) and \(\{|\nu _{k,s}|\}_{s=0}^{2\Lambda }\) by $$\begin{aligned} &\widehat{\nu _{k,s}}(\xi )= \int _{2^{k}< \vert y \vert \leq 2^{k+1}}\exp \Biggl( \sum_{i=1}^{s}a_{l_{i}} \varphi \bigl( \vert y \vert \bigr)^{l_{i}}\xi \cdot \theta \Biggr) \frac{\Omega (y/ \vert y \vert )}{ \vert y \vert ^{n}}\,dy, \quad 0\leq s\leq \Lambda,\\ &\nu _{k,s}=\sigma _{h,\Omega,k,\theta,s-\Lambda }, \quad\Lambda +1\leq s\leq 2 \Lambda,\\ &\widehat{ \vert \nu _{k,s} \vert }(\xi )= \int _{2^{k}< \vert y \vert \leq 2^{k+1}}\exp \Biggl( \sum_{i=1}^{s}a_{l_{i}} \varphi \bigl( \vert y \vert \bigr)^{l_{i}}\xi \cdot \theta \Biggr) \frac{ \vert \Omega (y/ \vert y \vert ) \vert }{ \vert y \vert ^{n}}\,dy,\quad 0\leq s\leq \Lambda,\\ &\vert \nu _{k,s} \vert = \vert \sigma _{h,\Omega,k,\theta,s-\Lambda } \vert ,\quad \Lambda +1\leq s\leq 2\Lambda.
\end{aligned}$$ Let \(\xi =(\xi _{1},\ldots,\xi _{n})\). By (1.1) and a change of variable, one has $$\begin{aligned} \widehat{\nu _{k,s}}(\xi )=0,\quad \forall 0\leq s\leq \Lambda. \end{aligned}$$ Invoking Lemma 2.5, one finds $$\begin{aligned} \bigl\vert \widehat{ \vert \nu _{k,s} \vert }(\xi ) \bigr\vert = \Biggl\vert \int _{2^{k}}^{2^{(k+1)}} \exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}\varphi (t)^{l_{i}} \xi _{1} \Biggr)\frac{dt}{t} \Biggr\vert \Vert \Omega \Vert _{L^{1}({\mathrm{S}}^{n-1})} \leq C\bigl(\varphi \bigl(2^{k+1}\bigr)^{l_{s}} \vert a_{l_{s}}\xi _{1} \vert \bigr)^{-1/l_{s}}. \end{aligned}$$ Combining this with the trivial estimate \(|\widehat{|\nu _{k,s}|}(\xi )|\leq C\) yields that $$\begin{aligned} \bigl\vert \widehat{ \vert \nu _{k,s} \vert }(\xi ) \bigr\vert \leq C\min \bigl\{ 1,\bigl(\varphi \bigl(2^{k}\bigr)^{l_{s}} \vert a_{l_{s}} \xi _{1} \vert \bigr)^{-1/l_{s}}\bigr\} ,\quad 1\leq s\leq \Lambda. \end{aligned}$$ By the definition of \(|\nu _{k,s}|\) and the arguments similar to those used to derive (3.12), $$\begin{aligned} \bigl\vert \widehat{ \vert \nu _{k,s} \vert }(\xi ) \bigr\vert & = \bigl\vert \widehat{ \vert \sigma _{h,\Omega,k,s-\Lambda } \vert }( \xi ) \bigr\vert \\ & \leq C\min \bigl\{ 1,\bigl(\varphi \bigl(2^{k}\bigr)^{l_{s-\Lambda }} \bigl\vert L_{s- \Lambda }(\xi ) \bigr\vert \bigr)^{-1/(2(s-\Lambda )l_{s-\Lambda }\delta )}\bigr\} , \quad{\text{for }} \Lambda +1\leq s\leq 2\Lambda. \end{aligned}$$ We get from (3.12) that $$\begin{aligned} \bigl\vert \widehat{\nu _{k,s}}(\xi ) \bigr\vert & = \bigl\vert \widehat{\sigma _{h,\Omega,k,s-\Lambda }}(\xi ) \bigr\vert \\ & \leq C\min \bigl\{ 1,\bigl(\varphi \bigl(2^{k}\bigr)^{l_{s-\Lambda }} \bigl\vert L_{s- \Lambda }(\xi ) \bigr\vert \bigr)^{-1/(2(s-\Lambda )l_{s-\Lambda }\delta )}\bigr\} , \quad{\text{for }} \Lambda +1\leq s\leq 2\Lambda. 
\end{aligned}$$ One can easily check that $$\begin{aligned} & \bigl\vert \widehat{ \vert \nu _{k,s} \vert }(\xi )-\widehat{ \vert \nu _{k,s-1} \vert }(\xi ) \bigr\vert \\ &\quad= \Biggl\vert \int _{2^{k}}^{2^{k+1}} \Biggl(\exp \Biggl(\sum _{i=1}^{s}a_{l_{i}}\varphi (t)^{l_{i}} \xi _{1} \Biggr)-\exp \Biggl(\sum_{i=1}^{s-1}a_{l_{i}} \varphi (t)^{l_{i}}\xi _{1} \Biggr) \Biggr)\frac{dt}{t} \Biggr\vert \Vert \Omega \Vert _{L^{1}({\mathrm{S}}^{n-1})} \\ &\quad\leq C\varphi \bigl(2^{k+1}\bigr)^{l_{s}} \vert a_{l_{s}}\xi _{1} \vert \leq C\varphi \bigl(2^{k} \bigr)^{l_{s}} \vert a_{l_{s}} \xi _{1} \vert , \quad{\text{for }} 1\leq s\leq \Lambda. \end{aligned}$$ Arguments similar to those used in deriving (3.56) show that $$\begin{aligned} \bigl\vert \widehat{ \vert \nu _{k,s} \vert }(\xi )-\widehat{ \vert \nu _{k,s-1} \vert }(\xi ) \bigr\vert &= \bigl\vert \widehat{ \vert \sigma _{h,\Omega,k,\theta,s-\Lambda } \vert }(\xi )- \widehat{ \vert \sigma _{h,\Omega,k,\theta,s-\Lambda -1} \vert }(\xi ) \bigr\vert \\ & \leq C\min \bigl\{ 1,\varphi \bigl(2^{k}\bigr)^{l_{s- \Lambda }} \bigl\vert L_{s-\Lambda }(\xi ) \bigr\vert \bigr\} \quad{\text{for }} \Lambda +1 \leq s \leq 2\Lambda. \end{aligned}$$ In view of (3.56), $$\begin{aligned} \bigl\vert \widehat{\nu _{k,s}}(\xi )-\widehat{\nu _{k,s-1}}(\xi ) \bigr\vert &= \bigl\vert \widehat{\sigma _{h,\Omega,k,\theta,s-\Lambda }}(\xi )- \widehat{\sigma _{h,\Omega,k,\theta,s-\Lambda -1}}(\xi ) \bigr\vert \\ & \leq C\min \bigl\{ 1,\varphi \bigl(2^{k}\bigr)^{l_{s- \Lambda }} \bigl\vert L_{s-\Lambda }(\xi ) \bigr\vert \bigr\} \quad{\text{for }} \Lambda +1 \leq s \leq 2\Lambda. \end{aligned}$$ We now define linear transformations \(I_{s}\) for \(1\leq s\leq 2\Lambda \) by $$\begin{aligned} I_{s}(\xi )= \textstyle\begin{cases} a_{l_{s}}\xi _{1}& {\text{if }} 1\leq s\leq \Lambda; \\ L_{s-\Lambda }(\xi )& {\text{if }} \Lambda +1\leq s\leq 2\Lambda.
\end{cases}\displaystyle \end{aligned}$$ We also set $$\begin{aligned} \gamma _{s}= \textstyle\begin{cases} l_{s}& {\text{if }} 1\leq s\leq \Lambda; \\ l_{s-\Lambda }& {\text{if }} \Lambda +1\leq s\leq 2\Lambda \end{cases}\displaystyle \end{aligned}$$ and $$\begin{aligned} \beta _{s}= \textstyle\begin{cases} \frac{1}{sl_{s}}& {\text{if }} 1\leq s\leq \Lambda; \\ \frac{1}{2(s-\Lambda )l_{s-\Lambda }\delta }& {\text{if }} \Lambda +1 \leq s\leq 2\Lambda. \end{cases}\displaystyle \end{aligned}$$ It follows from (3.63)–(3.69) that $$\begin{aligned} &\max \bigl\{ \bigl\vert \widehat{\nu _{k,s}}(\xi )-\widehat{\nu _{k,s-1}}(\xi ) \bigr\vert , \bigl\vert \widehat{ \vert \nu _{k,s} \vert }(\xi ) -\widehat{ \vert \nu _{k,s-1} \vert }(\xi ) \bigr\vert \bigr\} \leq C \bigl\vert 2^{k \gamma _{s}}I_{s}( \xi ) \bigr\vert ,\quad 1\leq s\leq 2\Lambda; \end{aligned}$$ $$\begin{aligned} &\max \bigl\{ \bigl\vert \widehat{\nu _{k,s}}(\xi ) \bigr\vert , \bigl\vert \widehat{ \vert \nu _{k,s} \vert }(\xi ) \bigr\vert \bigr\} \leq CA\min \bigl\{ 1, \bigl\vert 2^{k\gamma _{s}}I_{s}(\xi ) \bigr\vert ^{-\beta _{s}}\bigr\} ,\quad 1 \leq s\leq 2\Lambda. \end{aligned}$$ Moreover, $$\begin{aligned} \sup_{k\in \mathbb{Z}} \bigl\vert \vert \nu _{k,0} \vert *f(x) \bigr\vert \leq C \bigl\vert f(x) \bigr\vert . \end{aligned}$$ From (3.60) we see that $$\begin{aligned} T_{h,\Omega,P,\varphi }^{*}f\leq \sup_{k\in \mathbb{Z}} \bigl\vert \vert \nu _{k,2\Lambda } \vert *f \bigr\vert +\sup_{k\in \mathbb{Z}} \Biggl\vert \sum_{i=k}^{\infty }\nu _{i,2\Lambda }*f \Biggr\vert .
\end{aligned}$$ Using Lemmas 2.4 and 2.5 in [31], we obtain that, for any \(1\leq s\leq 2\Lambda \) and \(1< p, q, r<\infty \), $$\begin{aligned} &\biggl\Vert \biggl(\sum_{i\in \mathbb{Z}} \biggl(\sum _{k\in \mathbb{Z}} \vert \nu _{k,s}*g_{k,i} \vert ^{2} \biggr)^{q/2} \biggr)^{1/q} \biggr\Vert _{L^{p}( \mathbb{R}^{n})} \leq C \biggl\Vert \biggl(\sum _{i\in \mathbb{Z}} \biggl(\sum_{k\in \mathbb{Z}} \vert g_{k,i} \vert ^{2} \biggr)^{q/2} \biggr)^{1/q} \biggr\Vert _{L^{p}(\mathbb{R}^{n})}; \end{aligned}$$ $$\begin{aligned} & \biggl\Vert \biggl(\sum_{i\in \mathbb{Z}} \biggl\Vert \biggl( \sum_{k\in \mathbb{Z}} \bigl\vert \vert \nu _{k,s} \vert *g_{i,\zeta,k} \bigr\vert ^{2} \biggr)^{1/2} \biggr\Vert _{L^{r}(\mathfrak{R}_{n})}^{q} \biggr)^{1/q} \biggr\Vert _{L^{p}( \mathbb{R}^{n})} \\ &\quad\leq C \biggl\Vert \biggl(\sum_{i\in \mathbb{Z}} \biggl\Vert \biggl(\sum_{k\in \mathbb{Z}} \vert g_{i,\zeta,k} \vert ^{2} \biggr)^{1/2} \biggr\Vert _{L^{r}(\mathfrak{R}_{n})}^{q} \biggr)^{1/q} \biggr\Vert _{L^{p}( \mathbb{R}^{n})}. \end{aligned}$$ By (3.63), (3.70)–(3.72), (3.74), (3.75) and invoking Lemma 3.7, we have that, for \(\alpha \in (0,1)\), \(1< p, q<\infty \), and \(1\leq s\leq 2\Lambda \), $$\begin{aligned} &\biggl\Vert \biggl(\sum_{l\in \mathbb{Z}}2^{lq\alpha } \biggl( \int _{ \mathfrak{R}_{n}}\sup_{k\in \mathbb{Z}} \bigl\vert \vert \nu _{k,s} \vert * \vert \triangle _{2^{-l}\zeta }f \vert \bigr\vert \,d\zeta \biggr)^{q} \biggr)^{{1}/{q}} \biggr\Vert _{L^{p}(\mathbb{R}^{n})}\leq C \Vert f \Vert _{\dot{F}_{\alpha }^{p,q}( \mathbb{R}^{n})}, \end{aligned}$$ $$\begin{aligned} &\Biggl\Vert \Biggl(\sum_{l\in \mathbb{Z}}2^{lq\alpha } \Biggl( \int _{ \mathfrak{R}_{n}}\sup_{k\in \mathbb{Z}} \Biggl\vert \sum _{i=k}^{ \infty }\nu _{i,2\Lambda }*\triangle _{2^{-l}\zeta }f \Biggr\vert \,d\zeta \Biggr)^{q} \Biggr)^{{1}/{q}} \Biggr\Vert _{L^{p}(\mathbb{R}^{n})} \leq C \Vert f \Vert _{\dot{F}_{ \alpha }^{p,q}(\mathbb{R}^{n})}. 
\end{aligned}$$ Then (3.62) follows from (3.73), (3.76), and (3.77). Furthermore, the boundedness for \(T_{h,\Omega,P,\varphi }^{*}\) on \(F_{\alpha }^{p,q}(\mathbb{R}^{n})\) follows from the boundedness for \(T_{h,\Omega,P,\varphi }^{*}\) on \(\dot{F}_{\alpha }^{p,q}(\mathbb{R}^{n})\), (2.4), (2.5), and Theorem 1.2. By (3.61), (3.62), and the arguments similar to those used in deriving the continuity part of [31, Theorem 1.1], we can get the continuity part in Theorem 1.3. This completes the proof of Theorem 1.3. □ Al-Hasan, A., Pan, Y.: \(L^{p}\)-boundedness of a singular integral operator. Can. Math. Bull. 41(4), 404–412 (1998) Al-Qassem, H., Cheng, L., Pan, Y.: Boundedness of rough integral operators on Triebel–Lizorkin spaces. Publ. Mat. 56, 261–277 (2012) Al-Salman, A., Pan, Y.: Singular integrals with rough kernels in \(L\log L({\mathrm{S}}^{n-1})\). J. Lond. Math. Soc. 66(2), 153–174 (2002) Calderón, A.P., Zygmund, A.: On singular integrals. Am. J. Math. 78, 289–309 (1956) Chen, J., Fan, D., Ying, Y.: Singular integral operators on function spaces. J. Math. Anal. Appl. 276, 691–708 (2002) Chen, J., Zhang, C.: Boundedness of rough singular integral on the Triebel–Lizorkin spaces. J. Math. Anal. Appl. 337, 1048–1052 (2008) Chen, P., Duong, X.T., Li, J., Wu, Q.: Compactness of Riesz transform commutator on stratified Lie groups. J. Funct. Anal. 277, 1639–1676 (2019) Chen, W., Fu, Z., Grafakos, L., Wu, Y.: Fractional Fourier transforms on \(L^{p}\) and applications. Appl. Comput. Harmon. Anal. 55, 71–96 (2021) Chen, Y., Ding, Y.: Rough singular integrals on Triebel–Lizorkin space and Besov space. J. Math. Anal. Appl. 347, 493–501 (2008) Chen, Y., Ding, Y., Liu, H.: Rough singular integrals supported on submanifolds. J. Math. Anal. Appl. 368, 677–691 (2010) Cheng, L.C.: Singular integrals related to homogeneous mappings. Mich. Math. J. 47, 407–416 (2000) Coifman, R.R., Weiss, G.: Extensions of Hardy spaces and their use in analysis. Bull. Am. Math. Soc.
83(6), 569–645 (1977) Colzani, L.: Hardy spaces on sphere. PhD Thesis, Washington University, St Louis, MO (1982) Colzani, L., Taibleson, M., Weiss, G.: Maximal estimates for Cesàro and Riesz means on sphere. Indiana Univ. Math. J. 33(6), 873–889 (1984) Connett, W.C.: Singular integrals near \(L^{1}\) in harmonic analysis in Euclidean spaces. In: Proc. Sympos. Pure Math., pp. 163–165. Am. Math. Soc., Providence (1979) Duoandikoetxea, J., Rubio de Francia, J.L.: Maximal and singular integral operators via Fourier transform estimates. Invent. Math. 84(3), 541–561 (1986) Fan, D., Guo, K., Pan, Y.: Singular integrals with rough kernels along real-analytic submanifolds in \(\mathbb{R}^{n}\). Integral Equ. Oper. Theory 33, 8–19 (1999) Fan, D., Guo, K., Pan, Y.: A note of a rough singular integral operator. Math. Inequal. Appl. 2(1), 73–81 (1999) Fan, D., Guo, K., Pan, Y.: Singular integrals with rough kernels along real-analytic submanifolds in \(\mathbb{R}^{3}\). Trans. Am. Math. Soc. 355(3), 1145–1165 (2003) Fan, D., Pan, Y.: \(L^{2}\)-boundedness of a singular integral operator. Publ. Mat. 41, 317–333 (1997) Fan, D., Pan, Y.: A singular integral operator with rough kernel. Proc. Am. Math. Soc. 125, 3695–3703 (1997) Fan, D., Pan, Y.: Singular integral operators with rough kernels supported by subvarieties. Am. J. Math. 119(4), 799–839 (1997) Fefferman, R.: A note on singular integrals. Proc. Am. Math. Soc. 74(2), 266–270 (1979) Frazier, M., Jawerth, B., Weiss, G.: Littlewood–Paley Theory and the Study of Function Spaces. CBMS Reg. Conf. Ser., vol. 79. Am. Math. Soc., Providence (1991) Grafakos, L., Stefanov, A.: \(L^{p}\) bounds for singular integrals and maximal singular integrals with rough kernels. Indiana Univ. Math. J. 47, 455–469 (1998) Liu, F.: On singular integrals associated to surfaces. Tohoku Math. J. 66(1), 1–14 (2014) Liu, F.: Rough singular integrals associated to surfaces of revolution on Triebel–Lizorkin spaces. Rocky Mt. J. Math. 
47(5), 1617–1653 (2017) Liu, F., Mao, S., Wu, H.: On rough singular integrals related to homogeneous mappings. Collect. Math. 67(1), 113–132 (2016) Liu, F., Wu, H.: Singular integrals related to homogeneous mappings in Triebel–Lizorkin spaces. J. Math. Inequal. 11(4), 1075–1097 (2017) Liu, F., Xue, Q., Yabuta, K.: Rough maximal singular integral and maximal operators supported by subvarieties on Triebel–Lizorkin spaces. Nonlinear Anal. 171, 41–72 (2018) Liu, F., Xue, Q., Yabuta, K.: Boundedness and continuity of maximal singular integrals and maximal functions on Triebel–Lizorkin spaces. Sci. China Math. 63(5), 907–936 (2020) Namazi, J.: A singular integral. Proc. Am. Math. Soc. 96, 201–219 (1986) Ricci, F., Stein, E.: Harmonic analysis on nilpotent groups and singular integrals I: oscillatory integrals. J. Funct. Anal. 73, 179–194 (1987) Sato, S.: Estimates for singular integrals and extrapolation. Stud. Math. 192, 219–233 (2009) Shi, S., Fu, Z., Lu, S.: On the compactness of commutators of Hardy operators. Pac. J. Math. 307, 239–256 (2020) Shi, S., Xiao, J.: Fractional capacities relative to bounded open Lipschitz sets complemented. Calc. Var. Partial Differ. Equ. 56, 1–22 (2017) Stein, E.M.: Problems in harmonic analysis related to curvature and oscillatory integrals. In: Proceedings of the International Congress of Mathematicians, pp. 196–221. Amer. Math. Soc., Providence (1987) Stein, E.M.: Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals. Princeton University Press, Princeton (1993) Stein, E.M.: Some geometrical concepts arising in harmonic analysis. Geom. Funct. Anal. Special volume, Part 1, 434–453 (2000) Triebel, H.: Theory of Function Spaces. Monogr. Math., vol. 78. Birkhäser, Basel (1983) Yabuta, K.: Triebel–Lizorkin space boundedness of Marcinkiewicz integrals associated to surfaces. Appl. Math. J. Chin. Univ. 
30(4), 418–446 (2015) Yang, M., Fu, Z., Sun, J.: Existence and large time behavior to coupled chemotaxis fluid equations in Besov–Morrey spaces. J. Differ. Equ. 266, 5867–5894 (2019) Zhang, C., Chen, J.: Boundedness of singular integrals and maximal singular integrals on Triebel–Lizorkin spaces. Int. J. Math. 21(2), 157–168 (2010) The authors want to express their sincere thanks to the referee for his or her valuable remarks and suggestions, which made this paper more readable. The first author was supported by Young Teachers and Top-Notch Talents in Undergraduate Teaching of Shandong University of Science and Technology (No. BJRC20180502). This second author was supported partially by the NNSF of China (No. 11701333). College of Mathematics and System Sciences, Shandong University of Science and Technology, Qingdao, 266590, P.R. China Yulin Zhang & Feng Liu Yulin Zhang Feng Liu YLZ was a major contributor in writing the manuscript. FL performed the validation and formal analysis. All authors read and approved the final manuscript. Correspondence to Feng Liu. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Zhang, Y., Liu, F. 
Rough singular integrals associated to polynomial curves. J Inequal Appl 2022, 19 (2022). https://doi.org/10.1186/s13660-022-02754-8 Keywords: Singular integral; Maximal singular integral; Polynomial curves; $H^{1}({\mathrm{S}}^{n-1})$
What is the fastest/most efficient algorithm for estimating Euler's Constant $\gamma$? What is the fastest algorithm for estimating Euler's Constant $\gamma \approx0.57721$? Using the definition: $$\lim_{n\to\infty} \sum_{x=1}^{n}\frac{1}{x}-\log n=\gamma$$ I finally get $2$ decimal places of accuracy when $n\geq180$. The third correct decimal place only comes when $n \geq638$. Clearly, this method is not very efficient (it can be expensive to compute $\log$). What is the best method to use to numerically estimate $\gamma$ efficiently? sequences-and-series limits numerical-methods estimation eulers-constant J. M. is a poor mathematician ArgonArgon The paper "On the computation of the Euler constant $\gamma$" by Ekatharine A. Karatsuba, in Numerical Algorithms 24(2000) 83-97, has a lot to say about this. This link might work for you. In particular, the author shows that for $k\ge 1$, $$ \gamma= 1-\log k \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1}}{(r-1)!(r+1)} + \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1} }{(r-1)! (r+1)^2}+\mbox{O}(2^{-k})$$ and more explicitly $$\begin{align*} -\frac{2}{(12k)!} - 2k^2 e^{-k} \le \gamma -1+&\log k \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1}}{(r-1)!(r+1)} - \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1} }{(r-1)! (r+1)^2}\\ &\le \frac{2}{(12k)!} + 2k^2 e^{-k}\end{align*}$$ for $k\ge 1$. Since the series has fast convergence, you can use these to get good approximations to $\gamma$ fairly quickly. Matthew ConroyMatthew Conroy $\begingroup$ ... not to be confused with Karatsuba of Karatsuba algorithm. $\endgroup$ – user2468 Apr 9 '12 at 21:39 $\begingroup$ Thanks! Here are the results from this series to those who are interested (Sorry about the lack of line breaks): k=1, sum=0.7965995992978246, error=0.21938393439629178. k=5, sum=0.5892082678451087, error=0.011992602943575847. k=10, sum=0.5773243590712589, error=1.086941697260313E-4. k=15, sum=0.5772165124955206, error=8.47593987773898E-7. Great series! 
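The slow convergence described in the question is easy to verify numerically; here is a quick Python check of the quoted thresholds (an illustration, not part of the original post):

```python
import math

GAMMA = 0.5772156649015329  # reference value of Euler's constant

def naive_gamma(n):
    """Approximate gamma straight from the definition: H_n - log(n)."""
    h = sum(1.0 / k for k in range(1, n + 1))
    return h - math.log(n)

# The error decays only like 1/(2n), so each extra digit costs a
# tenfold increase in the number of terms.
for n in (180, 638, 10**5):
    print(n, naive_gamma(n))
```

The n ≥ 638 threshold in the question matches truncation of the decimal expansion rather than rounding: the approximation first begins with 0.577... at exactly n = 638.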
$\endgroup$ – Argon Apr 10 '12 at 2:06 $\begingroup$ Great answer Matthew! $\endgroup$ – GarouDan Apr 10 '12 at 19:04 $\begingroup$ @user2468: Ekatherina Karatsuba is Anatolii Karatsuba's daughter. $\endgroup$ – A. Rex Mar 19 '13 at 4:50 $\begingroup$ @Argon, what an amazing coincidence! The continued fraction algorithm I posted for $k=15$ has almost the same error $8.4354308339798 \cdot 10^{-7}$ $\endgroup$ – Yuriy S Feb 17 '16 at 20:58 I like $$ \gamma = \lim_{n \rightarrow \infty} \; \; \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \frac{1}{n^2 + 1 } - \cdots - \frac{1}{n^2 + n} \; \; \right) $$ because it needs no logarithm and the error is comparable to the final term used. n sum error n^2 * error 1 0.5 0.07721566490153287 0.07721566490153287 10 0.5757019096925315 0.001513755209001322 0.1513755209001322 100 0.5771991634147917 1.650148674114948e-05 0.1650148674114948 1000 0.5772154984013406 1.665001923001341e-07 0.1665001923001341 10000 0.5772156632363485 1.665184323762503e-09 0.1665184323762503 I found this formula on page 82, the January 2012 issue (volume 119, number 1) of the M. A. A. American Mathematical Monthly. It was sent in by someone named Jouzas Juvencijus Macys, possibly for the Problems and Solutions section. He stopped the sum at $-1/n^2.$ I noticed that the error would be minimized by continuing the sum to $-1/(n^2 + n).$ If you want, you can add a single term $1/(6 n^2)$ to get the error down to $n^{-3}.$ $$ \gamma = \lim_{n \rightarrow \infty} \; \; \frac{1}{6n^2} + \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \frac{1}{n^2 + 1 } - \cdots - \frac{1}{n^2 + n} \; \; \right) $$ n sum error 1 0.6666666666666666 -0.08945100176513376 10 0.5773685763591982 -0.0001529114576653834 100 0.5772158300814584 -1.651799255153463e-07 1000 0.5772156650680073 -1.664743898288634e-10 10000 0.5772156649030152 -1.482369782479509e-12 EDIT, December 2013. 
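Karatsuba's truncated series transcribes directly into a few lines of Python. The sketch below is my own transcription of the displayed formula (exact big-integer ratios are used because the alternating terms grow large before cancelling, and double precision limits how large $k$ can usefully be):

```python
import math

GAMMA = 0.5772156649015329

def karatsuba_gamma(k):
    """Truncated Karatsuba series for gamma, as displayed above."""
    s1 = s2 = 0.0
    for r in range(1, 12 * k + 2):          # r = 1, ..., 12k+1
        # (-1)^(r-1) k^(r+1) / (r-1)!  via an exact integer ratio
        t = k ** (r + 1) / math.factorial(r - 1)
        if r % 2 == 0:
            t = -t
        s1 += t / (r + 1)
        s2 += t / (r + 1) ** 2
    return 1.0 - math.log(k) * s1 + s2
```

Consistent with the stated $O(2^{-k})$-type bound, the error drops by several orders of magnitude between $k=10$ and $k=15$.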
I just got a nice note, with English preprint, from Prof. Macys. The original article is in Lithuanian in 2008. A Russian version and matching English translation are both 2013: the Springer website is not quite up to Volume 94, number 5, pages 45-50. The English language journal is called Mathematical Notes. Oh, the title is "On the Euler-Mascheroni constant." If desired, you can put two correction terms to get the error down to $n^{-4}.$ $$ \gamma = \lim_{n \rightarrow \infty} \; \; \frac{-1}{6n^3} +\frac{1}{6n^2} + \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \cdots - \frac{1}{n^2 + n} \; \; \right) $$ 10 0.5772019096925316 1.375520900126492e-05 100 0.5772156634147917 1.486741174616668e-09 Will JagyWill Jagy $\begingroup$ The behavior of the approximation error for Macys's series makes it a good candidate for extrapolation methods. Richardson worked nicely (as in the answer I gave), but maybe other methods can do better... $\endgroup$ – J. M. is a poor mathematician Apr 10 '12 at 11:38 $\begingroup$ The first expression may be written as a series: $$ \gamma= \sum_{n=1}^\infty \left(\frac{2}{n}-\sum_{j=n(n-1)+1}^{n(n+1)} \frac{1}{j}\right) $$ math.stackexchange.com/a/1591256/134791 $\endgroup$ – Jaume Oliver Lafont Jan 1 '16 at 20:50 $\begingroup$ that leads to a closed form for $\psi\left(z+1\right)$ math.stackexchange.com/questions/1357091/… $\endgroup$ – Jaume Oliver Lafont Jan 12 '16 at 9:40 $\begingroup$ Changing the weight of the last term also gets the error down to $n^{-4}$ $$lim_{n \to \infty} \left( 2H_n-\frac{1}{6}H_{n^2+n-1}-\frac{5}{6}H_{n^2+n}\right)$$ because this cancels two terms in the asymptotic expansion. 
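The Macys limit above needs nothing but harmonic sums, so it is easy to check numerically; here is a Python sketch (illustrative) of the formula as given in the answer, with the optional $1/(6n^2)$ correction:

```python
GAMMA = 0.5772156649015329

def macys_gamma(n, corrected=False):
    """gamma ~ H_n - (1/(n+1) + ... + 1/(n^2+n)),
    optionally adding the 1/(6 n^2) correction term."""
    s = sum(1.0 / k for k in range(1, n + 1))
    s -= sum(1.0 / k for k in range(n + 1, n * n + n + 1))
    if corrected:
        s += 1.0 / (6.0 * n * n)
    return s
```

With $n = 100$ this reproduces the tabulated error of about $1.65\cdot10^{-5}$, and the corrected version the error of about $-1.65\cdot10^{-7}$.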
The results are very similar without additional terms, just weight 5/6 for 1/(n^2+n): 10 0.577217061207682 1.39630614900659 E-6 100 0.577215665064940 1.63407322137730 E-10 600 0.577215664901661 1.28173793434336 E-13 $\endgroup$ – Jaume Oliver Lafont Jan 14 '16 at 11:41 $\begingroup$ @JaumeOliverLafont Thank you $\endgroup$ – Will Jagy Jan 14 '16 at 18:37 A good place for fast evaluation of constants is Gourdon and Sebah's 'Numbers, constants and computation'. They got $108\cdot 10^6$ digits for $\gamma$ in 1999 (see the end of their 2004 article 'The Euler constant') and propose a free program for high precision evaluation of various constants 'PiFast'. On his page of constants Simon Plouffe has Euler's constants to 10^6 digits (the file looks much smaller sorry...) using Brent's splitting algorithm (see the 1980 paper of Brent 'Some new algorithms for high-precision computation of Euler's constant' or more recently 3.1 in Haible and Papanikolaou's 'Fast multiprecision evaluation of series of rational numbers'). It seems that the 1999 record was broken in 2009 by A. Yee & R. Chan with 29,844,489,545 digits 'Mathematical Constants - Billions of Digits' (warning: the torrent file proposed there is more than 11Gb large! An earlier 52Mb file of 'only' 116 million digits is available here using the method proposed by Gourdon and Sebah). Raymond ManzoniRaymond Manzoni $\begingroup$ The 2.3 Exponential integral methods chapter on the 2nd link is really great. Thanks. $\endgroup$ – Alexey Burdin Nov 21 '19 at 13:40 $\begingroup$ Glad it helped @Alexey Burdin and excellent continuation! $\endgroup$ – Raymond Manzoni Nov 21 '19 at 23:30 (N.B. The previous version of this answer featured both the Brent-McMillan algorithm and the acceleration of Macys's series; I have decided to move the Brent-McMillan material into a new answer in the interest of having only one method per answer.) 
The convergence properties of Macys's series in Will's answer can be improved a fair bit, if you're willing to devote some amount of computational effort; due to the $n^{-2}$ behavior of the error, one obvious choice for a convergence acceleration method is Richardson extrapolation. Skipping some hairy details (which I might include later if I find time, but see Marchuk/Shaidurov if you must), the working formula is $$\gamma=\lim_{n\to\infty} G_n=2\lim_{n\to\infty} \sum_{i=1}^{n+1} \frac{(-1)^{n-i} i^{2n+2}}{(n+i+1)!(n-i+1)!}\left(\sum_{k=i+1}^{i(i+1)} \frac1{k}-\sum_{k=1}^i \frac1{k}\right)$$ Here are some sample results: $$\begin{array}{ccc}n&G_n&\gamma-G_n\\10&0.577210083083&5.581818\times10^{-6}\\50&0.577215659731&5.170456\times10^{-9}\\100&0.577215664665&2.362333\times10^{-10}\\200&0.577215664891&1.061648\times10^{-11}\\250&0.577215664898&3.902515\times10^{-12}\\300&0.577215664900&1.721878\times10^{-12}\\350&0.577215664901&8.618620\times10^{-13}\end{array}$$ For higher precision, there isn't much of an improvement; I would still recommend Brent-McMillan if one needs many digits of $\gamma$. J. M. is a poor mathematicianJ. M. is a poor mathematician $\begingroup$ On another note: I also like using convergence acceleration methods (e.g. Cohen-Rodriguez Villegas-Zagier or Levin) on the following alternating series for the Stieltjes constants: $$\gamma_n=\frac{(\log\,2)^n}{n+1}\sum_{k=1}^\infty \frac{(-1)^k}{k}B_{n+1}(\log_2 k)$$ where $B_n(x)$ is a Bernoulli polynomial, and $\gamma=\gamma_0$ $\endgroup$ – J. M. is a poor mathematician Apr 10 '12 at 5:48 $\begingroup$ The Macys acceleration is very nice. How soon do we get error below $1.532860 \times 10^{−12},$ so that we see $G_n$ beginning with $0.5772156649...?$ $\endgroup$ – Will Jagy Apr 10 '12 at 19:06 $\begingroup$ @Will: I expanded my results a bit... $\endgroup$ – J. M. is a poor mathematician Apr 14 '12 at 6:45 $\begingroup$ I just got a note from Prof. Macys. 
I think he would like this version; his interest seems to be in keeping to rational functions of $n,$ which i rather like as well. $\endgroup$ – Will Jagy Dec 2 '13 at 20:42 Finch's Mathematical Constants mentions these papers: D. W. DeTemple, A quicker convergence to Euler's constant, Amer. Math. Monthly (1993) 468-470. T. Negoi, A faster convergence to Euler's constant, Math. Gazette 83 (1999) 487-489. lhflhf As it turns out, the convergence of the Karatsuba series presented in Matthew's answer can be improved. This time, the geometric behavior of the error (as can be ascertained from the bounds presented) can be exploited through the use of the Shanks transformation. (Richardson can be made to work here as well, but the results are not as spectacular.) $$\varepsilon_0^{(k)}=1-\log(k+1) \sum_{r=1}^{12k+13} \frac{ (-k)^{r+1}}{(r-1)!(r+1)} + \sum_{r=1}^{12k+13} \frac{ (-k)^{r+1} }{(r-1)!(r+1)^2}$$ Wynn's version of the Shanks transformation uses the recursion $$\varepsilon_{k+1}^{(n)}=\varepsilon_{k-1}^{(n+1)}+\frac1{\varepsilon_{k}^{(n+1)}-\varepsilon_k^{(n)}}$$ It would seem that a two-dimensional array would be required for implementation, but one can arrange things such that only a one-dimensional array is required, through clever overwriting. Here is a Mathematica routine to demonstrate: wynnEpsilon[seq_?VectorQ] := Module[{n = Length[seq], ep, res, v, w}, res = {}; Do[ ep[k] = seq[[k]]; w = 0; v = w; w = ep[j]; ep[j] = v + (If[Abs[ep[j + 1] - w] > 10^-(Precision[w]), ep[j + 1] - w, 10^-(Precision[w])])^-1; , {j, k - 1, 1, -1}]; res = {res, ep[If[OddQ[k], 1, 2]]}; , {k, n}]; Flatten[res] (actually the same as the routine presented in this answer). 
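For readers without Mathematica, the same $\varepsilon$-recursion can be sketched in Python; this version keeps explicit columns rather than using Wynn's in-place overwriting trick, and is exercised here on the partial sums of the alternating series for $\log 2$ (an illustration, not a port of the exact routine above):

```python
import math

def wynn_epsilon(seq):
    """Wynn's epsilon algorithm; returns the last entry of the
    highest even epsilon column (the accelerated estimate)."""
    prev = [0.0] * (len(seq) + 1)   # column eps_{-1}, identically zero
    cur = list(seq)                 # column eps_0: the partial sums
    best = cur[-1]
    for col in range(1, len(seq)):
        nxt = []
        for i in range(len(cur) - 1):
            d = cur[i + 1] - cur[i]
            nxt.append(prev[i + 1] + (1.0 / d if d != 0 else 1e300))
        prev, cur = cur, nxt
        if col % 2 == 0:            # even columns carry the estimates
            best = cur[-1]
    return best

# partial sums of log 2 = 1 - 1/2 + 1/3 - ...
s, sums = 0.0, []
for k in range(1, 13):
    s += (-1.0) ** (k + 1) / k
    sums.append(s)
```

Twelve raw partial sums are good to only about two digits; the transformed value is accurate to better than $10^{-6}$.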
Here's a comparison of Karatsuba's series, with and without Shanks transformation: gamprox = Table[N[1 - Log[k]*Sum[(-k)^(r + 1)/((r + 1)*(r - 1)!), {r, 1, 12*k + 1}] + Sum[(-k)^(r + 1)/((r + 1)^2*(r - 1)!), {r, 1, 12*k + 1}], 50], {k, 30}]; trans = wynnEpsilon[gamprox]; gamprox[[20]] - EulerGamma // N 1.31827*10^-7 trans[[20]] - EulerGamma // N 6.49869*10^-18 Last[gamprox] - EulerGamma // N Last[trans] - EulerGamma // N Not too shabby, in my humble opinion... Hmm, I don't know whether is is actually competing. The Euler-gamma can also be seen as the "regularized" sum of all zeta's at the nonpositive integer arguments (mostly expressed in a sum-formula using the Bernoulli-numbers). If I do a convergence-acceleration-method, (in the sense of a Nörlund-matrix-summation) I get the following approximations, where the partial sums are documented in steps of 5. $ \qquad \small \begin{array} {ll|ll} k & \text{approx partial sum to k'th term}& k & \text{approx partial sum to k'th term}\\ \hline \\ 1&1/2&6&0.576161647582377561685517908649\\ 11&0.577642454055878876964082277383&16&0.577256945328427287289300010076\\ 21&0.577203007376005733835733501905&26&0.577213676374423017168422213469\\ 31&0.577216385568992428628821604406&36&0.577215824990983093761408431095\\ 41&0.577215639855185823618977575460&46&0.577215658304198651397646593838\\ 51&0.577215664821529245660187000460&56&0.577215664719517597256388852446\\ 61&0.577215665026720261633726731324&66&0.577215664986600655216189453626\\ 71&0.577215664916609466905818220446&76&0.577215664902581218436870655519\\ 81&0.577215664899673837349687879474&86&0.577215664900634540946733895948\\ 91&0.577215664901420597291612350155&96&0.577215664901605693627171524946\\ 101&0.577215664901606197813305786106&106&0.577215664901564816031433598865\\ 111&0.577215664901542872603251577435&116&0.577215664901534551921030743308\\ 121&0.577215664901532778454660696838&126&0.577215664901532679657316339069\\ 
131&0.577215664901532833003775498032&136&0.577215664901532904864265239897\\ 141&0.577215664901532914818902560099&146&0.577215664901532899695081822359\\ 151&0.577215664901532883268517660911&156&0.577215664901532871664134738564\\ 161&0.577215664901532865398778282147&166&0.577215664901532862462629396963\\ 171&0.577215664901532861321338522582&176&0.577215664901532860909786316139\\ 181&0.577215664901532860775151258773&186&0.577215664901532860715949714001\\ 191&0.577215664901532860680839731393&196&0.577215664901532860654520816630\\ 201&0.577215664901532860635577825538&206&0.577215664901532860623198773464\\ 211&0.577215664901532860615556602981&216&0.577215664901532860611334824284\\ 221&0.577215664901532860609026420469&226&0.577215664901532860607850525370\\ 231&0.577215664901532860607246781354&236&0.577215664901532860606925091229\\ 241&0.577215664901532860606758778390&246&0.577215664901532860606658539051 \\ \ldots \\ \hline &&&0.577215664901532860606512090082 \\ &&&\text{(final value as given by Pari/GP)} \end{array} $ Well, this might not compete because of the computation effort for the coefficients of the Noerlund summation and also it seems, as if the rate/quality of convergence decreases with the increasing steps, so this should possibly be seen only as a sidenote. (Reminder as to how to reproduce the behaviour: \\Pari/GP, using user-defined procedures NoerlundSum(1.7,1.0)*ZETA[,1] \\matrix-function NoerlundSum and ZETA-matrix Gottfried HelmsGottfried Helms I do not know about the best method, however numerically evaluating the integral $$\gamma = - \int_0^1\!dx\,\ln \ln x^{-1}$$ seems to be pretty efficient. FabianFabian $\begingroup$ Do you mean $$- \int_0^1\!\, \ln \ln x^{-1} dx $$ $\endgroup$ – Argon Apr 9 '12 at 20:47 $\begingroup$ @Argon: yes. You can use your favorite numerical method to obtain an approximation to the integral. 
$\endgroup$ – Fabian Apr 9 '12 at 20:49 $\begingroup$ I would then need to compute $\ln \ln \frac{1}{x}$ at several places between $x=0$ and $x=1$ to approximate the definite integral, which again may be quite costly. $\endgroup$ – Argon Apr 9 '12 at 20:49 $\begingroup$ @Aragon: I'm not sure I understand your concern... $\endgroup$ – Fabian Apr 9 '12 at 20:52 $\begingroup$ It takes lots of computational effort simply to find the value of $\log$ to a sufficient accuracy. To do this twice for each summation is very costly and requires much effort. $\endgroup$ – Argon Apr 9 '12 at 21:44 I quite like the Brent-McMillan algorithm myself (which is based on the relationships between the Euler-Mascheroni constant and modified Bessel functions): $$\gamma=\lim_{n\to\infty}\mathscr{G}_n=\lim_{n\to\infty}\frac{\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2 (H_k-\log\,n)}{\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2}$$ where $H_k=\sum\limits_{j=1}^k \frac1{j}$ is a harmonic number. It requires the use of a logarithm, but the algorithm is quite simple and reasonably efficient (in particular, we have the inequality $0 < \mathscr{G}_n-\gamma < \pi\exp(-4n)$). Here's some Mathematica code for the Brent-McMillan algorithm (which should be easily translatable to your language of choice): a = u = N[-Log[n], n]; b = v = 1; i = 1; While[True, k = (n/i)^2; a *= k; b *= k; a += b/i; If[u + a == u || v + b == v, Break[]]; u += a; v += b; i++]; u/v The integer parameter n controls the accuracy; very roughly, the algorithm will yield n-2 or so correct digits.
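The same recurrence in double-precision Python (a sketch: a fixed term count replaces the "sums stop changing" test, and machine precision caps the attainable accuracy, so $n$ beyond about 8 buys nothing here; serious use needs arbitrary-precision arithmetic):

```python
import math

GAMMA = 0.5772156649015329

def brent_mcmillan_gamma(n):
    """G_n = sum_k (n^k/k!)^2 (H_k - log n) / sum_k (n^k/k!)^2,
    with 0 < G_n - gamma < pi * exp(-4n)."""
    lnn = math.log(n)
    t = 1.0    # (n^k / k!) at k = 0
    h = 0.0    # harmonic number H_k
    u = -lnn   # numerator accumulator (k = 0 term)
    v = 1.0    # denominator accumulator
    for k in range(1, 4 * n + 20):   # terms are negligible well before this
        t *= n / k
        h += 1.0 / k
        w = t * t
        u += w * (h - lnn)
        v += w
    return u / v
```

Already at $n=8$ the truncation bound $\pi e^{-32}\approx 4\cdot10^{-14}$ is near the double-precision floor.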
The Brent-McMillan paper also presents more elaborate schemes for computing $\gamma$, such as $$\gamma=\lim_{n\to\infty}\frac{\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2 (H_k-\log\,n)}{\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2}-\frac{\frac1{4n}\sum\limits_{k=0}^\infty \frac{(2k)!^3}{k!^4 (16n)^{2k}}}{\left(\sum\limits_{k=0}^\infty \left(\frac{n^k}{k!}\right)^2\right)^2}$$ but I have no experience in using them. $\begingroup$ $\pi e^{-4n}$ is a pretty fast rate of convergence. (+1) Have you investigated the computational efficiency in terms of how many multiplications and additions would be needed to get a given accuracy? $\endgroup$ – robjohn♦ Jul 26 '12 at 0:26 $\begingroup$ Not in full, but there's something to be said about it being the method of choice for arbitrary-precision computation of $\gamma$ in computing environments like Maple and Mathematica. $\endgroup$ – J. M. is a poor mathematician Jul 26 '12 at 1:27 There is another interesting formula $$\small 1- \gamma = \sum_{k=2}^\infty {\zeta(k)-1\over k}$$ found in mathworld (see eq 123) . If we simply use approximations to the zetas by truncating their series, and write this in an array $\small \begin{array} {lll} 1 & 1 & 1 & 1 & \cdots & 1 \\ {1 \over 2^2} & {1 \over 2^3} & {1 \over 2^4} & {1 \over 2^5} & \cdots&{1 \over 2^c}\\ {1 \over 3^2} & {1 \over 3^3} & {1 \over 3^4} & {1 \over 3^5} & \cdots&{1 \over 2^c}\\ \cdots & \cdots & \cdots & \cdots & \cdots & &\\ {1 \over r^2} & {1 \over r^3} & {1 \over r^4} & {1 \over r^5} & \cdots&{1 \over r^c}\\ \hline \zeta_r(2)&\zeta_r(3)&\zeta_r(4)&\zeta_r(5)&\cdots&\zeta_r(c)& \end{array} $ then we can write an approximation-formula for the Euler $\small \gamma$ $$\small 1-\gamma_{r,c} = \sum_{k=2}^c {\zeta_r(k)-1\over k}$$ which depends on the number of rows r and the number of columns c . 
Now to reduce the number of coefficients needed to arrive at a good approximation we can use the alternating (column-)sums and convert by the eta/zeta-conversion term additionally we can use Eulersummation for convergence acceleration for the (now alternating) $\small \zeta_r(c) $ we can even introduce Euler-summation of (small) negative order to accelerate convergence of the sum of zetas (which itself is non-alternating). If we use all three accelerations, we get a double sum $$\small 1-\gamma_{r,c} = \sum_{k=2}^c \sum_{j=1}^r a_{j,k}{ 1 \over j^k}$$ where the $\small a_{j,k} $ contain the factors due to the denominator in the $\small \gamma$-formula and due to the threefold convergence-acceleration. I did actually implement this in Pari/GP and the surprising result was, that the best approximations were (using order 0.5 in the Eulersummation for the columns and -0.25 for the Eulersummation of the approximated zetas), if roughly r=c . Then the number of correct digits were about r/2; so with r=64 and c=64 we get $\small \gamma$ to 31 digits accuracy. So the effort comes out to be $$\small \text{ # of correct digits} \sim r/2 \qquad \text{ if } r \sim c $$ The cost of computation of the complete array of zeta-terms is thus in principle quadratic in d (the required number of correct digits); for the Euler-sums a vector for the column-acceleration and another vector for the row-acceleration is required whose values can recursively be computed and are thus linear with the number of rows resp the number of columns and thus also linear with d. (The convergence-acceleration (1.) by using the alternating sums costs nearly nothing) $\begingroup$ For small enough values of $\zeta$/$\eta$, there is this algorithm described by Borwein. The Euler-transformed $\eta$ series is a special case of this algorithm; the general method is equivalent to performing Cohen-Rodriguez Villegas-Zagier acceleration to the $\eta$ series (which PARI/GP supports as sumalt()). $\endgroup$ – J. M. 
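The underlying identity $1-\gamma=\sum_{k\ge2}(\zeta(k)-1)/k$ itself can be checked without any acceleration machinery. In the Python sketch below, the Euler-Maclaurin tail estimate for $\zeta(k)-1$ is my own addition (not part of the answer), used so that plain truncation of the slowly converging $k=2$ column does not spoil the result:

```python
import math

GAMMA = 0.5772156649015329

def zeta_minus_1(k, n=32):
    """zeta(k) - 1 = sum_{r>=2} r^(-k); tail estimated by Euler-Maclaurin."""
    s = sum(r ** -k for r in range(2, n + 1))
    a = float(n + 1)
    # sum_{r>=a} f(r) ~ int_a^inf f + f(a)/2 - f'(a)/12 + f'''(a)/720
    s += (a ** (1 - k) / (k - 1) + 0.5 * a ** -k
          + k * a ** (-k - 1) / 12
          - k * (k + 1) * (k + 2) * a ** (-k - 3) / 720)
    return s

def gamma_from_zetas(kmax=60):
    # (zeta(k) - 1)/k decays like 2^(-k)/k, so kmax = 60 is ample
    return 1.0 - sum(zeta_minus_1(k) / k for k in range(2, kmax + 1))
```

This naive evaluation already gives roughly ten digits, which puts the acceleration results above in context.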
is a poor mathematician Apr 14 '12 at 13:35 $\begingroup$ @J.M. :true; however it is unknown to me how many operations (terms for the sum) are used by sumalt (I only know it is roughly related to the current float-precision). In the light of some other answers I wanted an explicite description and enumeration of operations which are required for some required correct digits. $\endgroup$ – Gottfried Helms Apr 14 '12 at 13:56 I found this 2013 paper claiming to offer the first general continued fraction for $\gamma$ with regular pattern. It has almost exponential convergence. The $\log_{10}$ of absolute error is plotted below: The error is of the order of $10^{-17}$ at $100$ steps of the algorithm. The continued fraction it creates is a little bit complicated, starting with several terms having no apparent pattern: $$\gamma=\cfrac{1}{2-\cfrac{1}{4-\cfrac{5}{16+\cfrac{36}{59-\cfrac{15740}{404-\cfrac{1489700}{30422-...}}}}}}$$ The next partial quotents grow very fast in absolute value. The computation of partial quotents after $5$ is based on a complicated three order reccurence relation, which is described in details in the paper on page 14. First, we define: $$q_n=\sum^{n}_{k=0} \left( \begin{matrix} n \\ k \end{matrix} \right)^2 k!$$ Then we define the few initial values $d_1=-1$, $d_2=-2$, $d_3=-5$, $d_4=8$ for a $3^{rd}$ oder reccurence relation for $n \geq 3$: $$(n-1)(n-2)d_{n+2}=$$ $$=(n-2)(n+1)(n^2+3n-2)d_{n+1}-n^2(2n^3+n^2-7n-4)d_n+n^4(n-1)^2d_{n-1}$$ Then the partial quotents $\frac{a_n|}{|b_n}$ will be defined for $n \geq 6$ as: $$a_n=-\frac{(n-1)^2}{4}d_n d_{n-2}$$ $$b_n=n^2 d_{n-1}+\frac{(n-1)(n-2)}{2}q_{n-2}$$ Here is my implementation in Mathematica. 
A0 = {{0, 1}, {1, 0}}; Af = {{1}, {0}}; Nm = 100; q = Table[\!\( \*UnderoverscriptBox[\(\[Sum]\), \(k = 0\), \(n\)]\( \*SuperscriptBox[\(Binomial[n, k]\), \(2\)]\ \(k!\)\)\), {n, 1, Nm}]; d = Table[0, {n, 1, Nm}]; d[[1]] = -1; d[[4]] = 8; Do[d[[n + 2]] = ((n + 1) (n^2 + 3 n - 2) d[[n + 1]])/(n - 1) - ( n^2 (2 n^3 + n^2 - 7 n - 4) d[[n]])/((n - 1) (n - 2)) + n^4 (n - 1)/(n - 2) d[[n - 1]], {n, 3, Nm - 2}]; a = Table[0, {n, 1, Nm}]; b = Table[0, {n, 1, Nm}]; b[[1]] = 2; b[[3]] = 16; b[[5]] = 404; a[[1]] = 1; a[[2]] = -1; a[[4]] = 36; a[[5]] = -15740; Do[a[[n]] = 1/4 (-(n - 1)^2) d[[n]] d[[n - 2]]; b[[n]] = n^2 d[[n - 1]] + 1/2 (n - 1) (n - 2) q[[n - 2]], {n, 6, Nm}]; er = Table[0, {n, 1, Nm}]; Do[A1 = {{b[[n]], 1}, {a[[n]], 0}}; P0 = A0.Af; A0 = A0.A1; P = A0.Af; Pf = N[P[[1, 1]]/P[[2, 1]], 20]; er[[n]] = Log[10, Pf - EulerGamma]; Print[n, " ", Pf, " ", Pf - EulerGamma], {n, 1, Nm}] ListPlot[er] 1 arXiv:1010.1420 [math.NT] Yuriy SYuriy S No heavy machinery or special series manipulation required for this. Simple enough to do on a standard scientific calculator. By applying the Euler-Maclaurin summation formula, one finds that $$\lim_{b\to\infty}H_b-H_{n-1}-\ln(b)+\ln(n)=\frac1{2n}+\left(\sum_{j=1}^p\frac{B_{2j}}{2j\times n^{2j}}\right)+R_{n,p}$$ and thus, we may derive $$\gamma=H_{n-1}-\ln(n)+\frac1{2n}+\left(\sum_{j=1}^p\frac{B_{2j}}{2j\times n^{2j}}\right)+R_{n,p}$$ $$H_n=\sum_{k=1}^n\frac1k$$ $$|R_{n,p}|\le\frac{2\zeta(2p)(2p-1)!}{(2\pi n)^{2p}}$$ where $\zeta$ is the Riemann zeta function. It suffices to use $$\zeta(s)<\frac1{1-2^{1-s}}\left(1-\frac1{2^s}\right)$$ Or more simply, $$\zeta(s)<2$$ both for $s\ge2$. 
For example, with $n=5$ and $p=4$, we can approximate out $7$ places: $$\gamma=H_4-\ln(5)+\frac1{10}+\frac1{300}-\frac1{75000}+\frac1{3937500}-\frac1{93750000}+R_{5,4}$$ $$\gamma\approx0.577215664+R_{5,4}$$ $$|R_{5,4}|<1.1\times10^{-8}$$ Simply Beautiful Art $\begingroup$ What does the function $f$ in the linked WP article correspond to in your answer? $\endgroup$ – flawr Sep 16 '17 at 22:34 $\begingroup$ @flawr It corresponds to $1/x$, summed/integrated from $n$ to $b$. $\endgroup$ – Simply Beautiful Art Sep 16 '17 at 22:55
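The truncated Euler-Maclaurin recipe scripts directly; here is a Python sketch (illustrative) with the Bernoulli numbers through $B_8$ hard-coded, matching $p=4$:

```python
import math

GAMMA = 0.5772156649015329
B = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30}   # Bernoulli numbers B_2 .. B_8

def em_gamma(n, p=4):
    """gamma ~ H_{n-1} - log n + 1/(2n) + sum_{j<=p} B_{2j}/(2j n^{2j})."""
    s = sum(1.0 / k for k in range(1, n)) - math.log(n) + 1.0 / (2 * n)
    for j in range(1, p + 1):
        s += B[2 * j] / (2 * j * n ** (2 * j))
    return s
```

With $n=5$, $p=4$ this lands well inside the quoted bound $|R_{5,4}|<1.1\times10^{-8}$, and modestly larger $n$ exhausts double precision.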
EJNMMI Research Quantitative pulmonary blood flow measurement using 15O-H2O PET with and without tissue fraction correction: a comparison study Keiko Matsunaga1,2, Masahiro Yanagawa3, Tomoyuki Otsuka4, Haruhiko Hirata4, Takashi Kijima4, Atsushi Kumanogoh4, Noriyuki Tomiyama3, Eku Shimosegawa1,2 & Jun Hatazawa2 EJNMMI Research volume 7, Article number: 102 (2017) Cite this article Physiological measures per lung parenchyma, rather than per lung volume, sometimes reflect the disease status. PET images of the lung, which are usually expressed per lung volume, could confound the interpretation of the disease status, especially in cases with a prominent heterogeneity in aeration. The aim of the present study was to develop a method for measuring pulmonary blood flow (PBF) with aeration correction using 15O-H2O PET and to compare the results with those obtained using a conventional method. We obtained the voxel-based tissue fraction (TF) derived from density images converted from transmission images obtained using an external 137Cs point source. Quantitative PBF values with and without the TF were calculated using 15O-H2O PET to examine contralateral lung tissue in 9 patients with unilateral lung cancer. The heterogeneity in PBF before and after TF correction was then evaluated and compared. As a measure of PBF heterogeneity, we used the skewness and kurtosis of the PBF distribution. The mean PBF of contralateral lung was 1.4 ± 0.3 mL/min per mL of lung. The TF-corrected PBF was 5.0 ± 0.6 mL/min per mL of lung parenchyma. After TF correction, the skewness and kurtosis of the PBF decreased significantly. The present PBF calculation method using TF correction demonstrated that the normal PBF increased significantly and the PBF distribution became uniform. The proposed TF correction method is a promising technique to account for variations in density when interpreting PBF in PET studies. The lung is a unique organ comprised of some tissue and a large quantity of air. 
The average size of human lung alveoli is much smaller than the usual PET voxel size [1], making it difficult to measure physiological values per lung parenchyma using PET. Physiological values per parenchyma can be obtained using the volume fraction of parenchyma per lung, to which we refer hereafter as the tissue fraction (TF) [2, 3]. TF heterogeneity is prominent in patients with chronic obstructive pulmonary disease (COPD) or interstitial lung disease (ILD) [4, 5]. Recently, Lambrou et al. [2] and Holman et al. [3] presented a method for calculating the standardized uptake value (SUV) and kinetic parameters in lung with TF correction using the density distribution derived from CT. They applied this method to 18F-FDG SUV images in patients with ILD. Before correction, the 18F-FDG uptake seemed to be restricted to the CT-identified fibrotic region. After TF correction, however, the 18F-FDG uptake increased not only in the fibrotic region, but also in normal-appearing lung regions. In addition, the 18F-FDG uptake in normal-appearing lung tissue in patients with ILD was significantly higher than the uptake in normal subjects, suggesting that normal-appearing lung tissue is also affected by the disease process. TF-corrected images could clarify the disease process in the lung. 15O-H2O PET using a one-tissue compartment model [6, 7] is the reference standard for measuring pulmonary blood flow (PBF) per lung volume, and this technique has been validated against 68Ga microsphere measurements in dogs [8] and 68Ga-labeled macroaggregate measurements in humans [9]. However, the PBF per lung parenchyma with TF correction using 15O-H2O PET has not yet been reported. The TF distribution is derived from the density distribution, which can be obtained from transmission images using CT or an external source. Rhodes et al. [10] and Schuster et al. 
[11] reported that density was linearly correlated with the attenuation coefficient derived from a 15-min 68Ga/68Ge transmission scan. Transmission data reflects the time-averaged density distribution over several respiratory cycles, which can be easily registered with PET data. Therefore, in PET scanners equipped with an external radioactive source, TF-corrected images can be easily obtained using transmission data. In pulmonary kinetic PET studies with TF correction, the fraction of the blood volume in the lung should be considered, since 20 % of the lung volume is blood [10]. However, a method for measuring PBF with correction for the pulmonary blood volume has not been fully established. The aim of the present study was to develop a method for measuring PBF per parenchyma using the TF derived from transmission images. We applied this method to contralateral lung tissue in patients with lung cancer and examined the heterogeneity in PBF before and after TF correction. We also investigated the effect of blood volume correction on PBF measurements. Between April 2012 and July 2015, 13 patients with stage 4 non-small cell lung cancer, who were scheduled to receive chemotherapy at the Osaka University Hospital, underwent 15O-H2O dynamic PET. All the patient scans were obtained as part of a prospective study designed to evaluate lung cancer and PBF before and after the administration of bevacizumab. The permission of the Institutional Review Board and informed patient consent were obtained. To analyze normal lung, 9 patients (6 men, 3 women; mean age ± standard deviation (SD), 60 ± 10) with unilateral lung cancer were chosen. We analyzed the PBF of the contralateral lung. The patient characteristics are summarized in Table 1. Table 1 Patient characteristics CT analysis to examine the emphysematous change Eight of nine patients in this study had a history of smoking, which is a risk factor of COPD [12]. 
Emphysema is sometimes associated with COPD, which can cause TF heterogeneity. The relative area of lung parenchyma with attenuation coefficients of less than −950 HU on thin-section CT scans obtained during inspiration is reported to be an objective measurement of the extent of macroscopic emphysema and a reflection of microscopic emphysema [13]. We reviewed the non-contrast chest high-resolution (HR) CT of each patient to assess the presence of emphysematous change. In eight of nine cases, HRCT was taken within 1 year before or after 15O-H2O PET. In one case (No. 2), non-contrast HRCT was taken 1.5 years before 15O-H2O PET. Images were acquired in the supine position with a 0.625 mm slice thickness, a 0.4 s rotation time, a beam pitch of 0.98, and an x-ray tube voltage of 120 kV. The tube current was determined by automatic exposure control, which is clinically used for dose reduction. During scanning, the patients were asked to hold their breath after a deep inspiration. We used the lung analysis module of the Fujifilm Synapse Vincent system (Fujifilm Corporation, Tokyo, Japan) to obtain the volume fraction of lung parenchyma with attenuation coefficients of less than −950 HU relative to the whole contralateral lung (LAA−950). Five patients (No. 2, 3, 6, 7, 9) underwent spirometry within 1 year of PET. We examined the ratio of forced expiratory volume in 1 s to forced vital capacity (FEV1/FVC) as an indication of airflow limitation, which accompanies COPD. Phantom study to investigate the relationship between density and the linear attenuation coefficient Prior to applying our proposed technique to the patient datasets, we performed phantom studies to confirm the linear relationship between the linear attenuation coefficient derived from the transmission scan and density. Studies were performed using a SET-3000 GCT/X scanner (Shimadzu Corp., Kyoto, Japan).
The transmission scanner had an axial length of 23 mm and was equipped with a rotating 137Cs point source and a tungsten collimator. The rotation speed was 3 s per cycle [14]. We performed 5-min transmission scans. The transmission image shows the distribution of the linear attenuation coefficient (cm−1) and was reconstructed using a maximum a posteriori algorithm. Following the methods reported by Rhodes et al. [10] and Schuster et al. [11], we performed transmission scans using a variety of phantoms with known densities (in g/mL): Styrofoam (0.016), sawdust (0.11), Japanese cypress (0.40), ethanol for disinfection (0.86), cat litter containing minerals (0.89), and water (1.00). The sawdust, ethanol, cat litter, and water were placed within a light polyethylene container of known weight and volume and scanned. The densities of these materials were then plotted against the attenuation coefficient obtained by the transmission scan. Derivation of TF We assumed that a lung voxel would contain three components: air, parenchyma, and blood [3]. ρ lung (see the Appendix for a list of variables) can be considered as $$ {\rho}_{lung}={\rho}_{air}\cdot {V}_{air}+{\rho}_{pa}\cdot {V}_{pa}+{\rho}_v\cdot {V}_V. $$ By definition, V air + V pa + V V equals 1. ρ pa and ρ v are approximately equivalent to the averaged density of the mediastinal blood pool ρ med (g/cm3). Since ρ air (1.2 × 10−3 g/cm3) is about 1000 times smaller than ρ med (1.06 g/cm3), ρ air ⋅ V air is negligible in Eq. 1. Therefore, $$ {\displaystyle \begin{array}{ll}{\rho}_{lung}& ={\rho}_{med}\cdot {V}_{pa}+{\rho}_{med}\cdot {V}_V\\ {}& ={\rho}_{med}\left(1-{V}_{air}\right)\end{array}} $$ We obtained V air as follows: $$ {V}_{air}=1-{\rho}_{lung}/{\rho}_{med}. $$ If density and the attenuation coefficient derived from the transmission scan have a linear relationship, Eq. 3 is expressed as $$ {V}_{air}=1-{T}_{lung}/{T}_{med}. $$ T lung /T med corresponds to TF.
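Eqs. 3 and 4 amount to a voxel-wise division of the transmission image by the mediastinal attenuation value. A minimal sketch of this step (the function and variable names are ours, not from the paper):

```python
import numpy as np

def tissue_fraction(t_lung, t_med):
    """Voxel-wise tissue fraction, TF = T_lung / T_med (Eq. 4).

    t_lung : lung linear attenuation coefficients (cm^-1), array-like
    t_med  : mean attenuation of the mediastinal blood pool (cm^-1)
    """
    tf = np.asarray(t_lung, dtype=float) / float(t_med)
    return np.clip(tf, 0.0, 1.0)  # a volume fraction must stay in [0, 1]

# Using the mean values reported later in the paper:
# lung ~0.03 cm^-1, mediastinal blood pool ~0.1 cm^-1
tf = tissue_fraction([0.03, 0.02, 0.05], 0.1)
v_air = 1.0 - tf  # air fraction per voxel (Eq. 3/4)
```

With the reported mean lung attenuation of 0.03 cm−1 this gives TF ≈ 0.3, matching the averaged TF reported in the Discussion.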
C PET can be considered as $$ {\displaystyle \begin{array}{ll}{C}_{PET}& ={V}_{air}\cdot {C}_{PET_{air}}+{V}_{pa}\cdot {C}_{PET_{pa}}+{V}_{\mathrm{v}}\cdot {C}_{RV}\\ {}& =\left(1-{V}_{air}-{V}_V\right)\cdot {C}_{PET_{pa}}+{V}_{\mathrm{v}}\cdot {C}_{RV},\end{array}} $$ where \( {C}_{PET_{air}} \), \( {C}_{PET_{pa}} \), and C RV are the radioactivities of air, lung parenchyma, and pulmonary circulation blood, respectively. \( {C}_{PET_{air}} \) equals 0. PBF with and without TF correction We used the standard single-tissue-compartment model [8] to calculate the PBF without TF correction as follows: $$ {C}_{PET}(t)=F\cdot {C}_{RV}(t)\otimes {e}^{-\left(\frac{F}{V_T}+\lambda \right)t}, $$ where ⊗ is the convolution operation. \( {C}_{PET_{pa}} \) would be represented as follows using the TF-corrected PBF F corr : $$ {C}_{PET_{pa}}={F}_{corr}\cdot {C}_{RV}(t)\otimes {e}^{-\left(\frac{F_{corr}}{V_{Tcorr}}+\lambda \right)t}. $$ Thus, the relationship between C PET and F corr , V Tcorr is as follows: $$ {\displaystyle \begin{array}{l}{C}_{PET}=\left(1-{V}_{air}-{V}_V\right)\cdot {F}_{corr}\cdot {C}_{RV}(t)\otimes {e}^{-\left(\frac{F_{corr}}{V_{Tcorr}}+\lambda \right)t}+{V}_{\mathrm{v}}\cdot {C}_{RV}\\ {}=\left({T}_{lung}/{T}_{med}-{V}_V\right)\cdot {F}_{corr}\cdot {C}_{RV}(t)\otimes {e}^{-\left(\frac{F_{corr}}{V_{Tcorr}}+\lambda \right)t}+{V}_{\mathrm{V}}\cdot {C}_{RV}.\end{array}} $$ We also assessed a model with TF correction but without pulmonary circulation blood volume correction: $$ {C}_{PET}=\left(1-{V}_{air}\right)\cdot {F}_{corr\hbox{'}}\cdot {C}_{RV}(t)\otimes {e}^{-\left(\frac{F_{corr\hbox{'}}}{V_{Tcorr\hbox{'}}}+\lambda \right)t}. $$ A flow chart of the analytical procedures is shown in Fig. 1. Flow chart of the algorithm The need to include a correction for the pulmonary circulation blood volume in the model was assessed using the Akaike information criteria (AIC) [15]. Scanning protocol Studies were performed using a SET-3000 GCT/X scanner (Shimadzu Corp., Kyoto, Japan).
This scanner has an axial field of view of 26 cm, divided into 99 contiguous planes. The intrinsic spatial resolution is 3.5 mm full width at half maximum (FWHM) in-plane and 4.2 mm FWHM axially [14]. Patients were positioned supine on the scanner bed, with both the tumor and the aortic arch or heart in the center of the axial field of view. For attenuation correction, a transmission scan (5 min) was performed using a 137Cs point source. Transmission images were also used for the derivation of TF images. After the transmission scan, a 10-min list mode scan was started simultaneously with an intravenous injection of 185 MBq of 15O–H2O (18.5 mL at 0.5 mL/s). The images were corrected for scatter radiation using the hybrid dual-energy window method [16]. The emission scan was reconstructed into 22 frames (1 × 10 s, 8 × 5 s, 4 × 10 s, 2 × 15 s, 3 × 20 s, 2 × 30 s, and 6 × 60 s) using the two-dimensional dynamic row-action maximum-likelihood algorithm (DRAMA) after 3-D Gaussian smoothing with a 6-mm FWHM. The voxel size was 4.7 × 4.7 × 2.6 mm. Input function The volumes of interest (VOIs) were manually drawn over the right ventricular cavity so as to include the hottest voxel in approximately five consecutive image planes of the frame in which the first pass of the bolus was best visualized. The average VOI was 2.3 cm3. The projection of the resulting VOI onto all the image frames yielded the time–activity curve for the pulmonary circulation C RV (t). Parametric imaging Parametric PBF images with and without TF correction were generated using the basis function method [17]. Equation 8, which is used to obtain PBF with TF, is rewritten as follows: $$ {C}_{PET}(t)=\left({T}_{lung}/{T}_{med}-{V}_V\right)\cdot {F}_{corr}\cdot {B}_{corr}+{V}_V\cdot {C}_{RV}, $$ where B corr = C RV (t) ⊗ e −θt and θ = F corr /V Tcorr + λ. The nonlinear term B corr in Eq.
10 including θ is precalculated as the discrete basis function B corri for the available range of θ: $$ {B}_{corri}=\kern0.5em {C}_{RV}(t)\otimes {e}^{-{\theta}_it}. $$ Using the basis function B corri , Eq. 10 becomes: $$ {C}_{PET}(t)=\left({T}_{lung}/{T}_{med}-{V}_V\right)\cdot {F}_{corr}\cdot {B}_{corr i}+{V}_V\cdot {C}_{RV}. $$ Solving Eq. 12 is a linear problem in the parameters F corr and V V. For each basis function, F corr and V V in Eq. 12 are estimated by means of the linear least squares technique. θ is determined by searching for the minimum sum of squared residuals between the estimated and observed data among all basis functions. From the determined F corr , V V , and θ values, V Tcorr can be calculated. These computations are done for each voxel. In the present study, 50 logarithmically spaced precomputed basis functions with exponent values θ ranging from λ to 2 s−1 were used. PBF without TF correction F and PBF with TF and without blood volume correction F corr' were also calculated using the basis function method in the same way. Definition of lung fields We defined the lung contralateral to the cancer-affected side as normal lung. We then defined the VOI on normal lung in the transmission images. The mean transmission values (linear attenuation coefficients) of the mediastinal blood pool and normal lung in 10 subjects were 0.1 and 0.03 cm−1, respectively. We used an isocontour of 0.04 cm−1 as a threshold. The resulting lung region corresponded to the lung field visually. Then, we manually removed the regions corresponding to large pulmonary vessels using PMOD, version 3.4 (PMOD Technologies Ltd.). Within the remaining regions, we obtained the TF and the PBF with and without the TF. Intensity histograms of the TF and PBF were generated over the normal lung field for both the original and the TF-corrected data. From these histograms, we estimated the means of the TF and PBF distributions.
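The basis function estimation of Eqs. 11–12 can be sketched as follows: for each θ_i the model is linear in the two coefficients a = (TF − V_V)·F_corr and b = V_V, which are obtained by linear least squares, and θ is selected by the minimum residual sum of squares. This is a simplified single-voxel sketch on a uniform time grid with names of our choosing, not the authors' implementation (the paper uses 50 basis functions; a denser grid is used here):

```python
import numpy as np

LAMBDA_O15 = 0.00567  # physical decay constant of 15O (s^-1), from the Appendix

def lung_tac(t, c_rv, a, theta, b):
    """Model of Eq. 12 on a uniform time grid:
    C_PET(t) = a * B(theta) + b * C_RV(t), with
    B(theta) = C_RV convolved with exp(-theta * t)."""
    dt = t[1] - t[0]
    basis = np.convolve(c_rv, np.exp(-theta * t))[: len(t)] * dt
    return a * basis + b * c_rv

def basis_function_fit(c_pet, c_rv, t, tf, lam=LAMBDA_O15,
                       n_basis=200, theta_max=2.0):
    """Single-voxel basis function fit of Eq. 12.

    For each theta_i the model is linear in a = (TF - V_V) * F_corr
    and b = V_V, solved by linear least squares; theta is chosen by
    the minimum residual sum of squares.  Then F_corr = a / (TF - V_V)
    and V_Tcorr = F_corr / (theta - lam)."""
    dt = t[1] - t[0]
    best = None
    for theta in np.logspace(np.log10(lam), np.log10(theta_max), n_basis):
        basis = np.convolve(c_rv, np.exp(-theta * t))[: len(t)] * dt
        design = np.column_stack([basis, c_rv])
        coef, *_ = np.linalg.lstsq(design, c_pet, rcond=None)
        rss = float(np.sum((c_pet - design @ coef) ** 2))
        if best is None or rss < best[0]:
            best = (rss, theta, coef)
    _, theta, (a, v_v) = best
    f_corr = a / (tf - v_v)
    return f_corr, v_v, f_corr / (theta - lam)
```

On noiseless synthetic data generated from the same model, the fit recovers the simulated F_corr and V_V to within the spacing of the θ grid.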
In addition, the skewness, a measure of the asymmetry of the distribution, and the kurtosis, a measure of the peakedness of the distribution, were analyzed to evaluate the heterogeneity of the PBF among the different methods. We removed regions where the blood flow was higher than 6 mL/min/cm3 (more than 3 SD of PBF) from the calculation because these areas correspond to the pulmonary vasculature or the apical area affected by the spillover of the subclavicular vein. We calculated the AIC in a voxel-wise manner and averaged the values over the normal lung for TF-corrected images with and without pulmonary blood volume correction. We statistically compared the mean, skewness, and kurtosis of the PBF with and without TF. We also compared the mean, skewness, kurtosis, and AIC of the TF-corrected PBF with and without pulmonary blood correction. The statistical tests were performed using the Mann–Whitney U test with Matlab (MathWorks, Inc., Natick, MA). The results of LAA−950 and FEV1/FVC are shown in Table 1. In seven of eight patients with a smoking history, LAA−950 was less than 12% and emphysematous changes were confined to the subpleural area. In one patient (No. 9), the emphysematous changes were severe and LAA−950 was as much as 25.9%. The results of FEV1/FVC were available in five patients. Three patients showed normal results, one showed mild airflow limitation, and one showed moderate airflow limitation. The linear relationship between the linear attenuation coefficient and the known density for each phantom is shown in Fig. 2. The correlation coefficient r was 0.999. The intercept value (0.001 cm−1) was 1% of the value for water (0.1 cm−1) and was therefore ignored when calculating the calibration factor for converting the linear attenuation coefficient into a quantitative measure of density. Correlation between the known density of various phantoms and the linear attenuation coefficient.
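The calibration in Fig. 2 is an ordinary least-squares line. The sketch below reproduces the fit; the attenuation values are synthesized here from the reported regression (0.0991·density + 0.001) for illustration only, not taken from the actual phantom measurements:

```python
import numpy as np

# Known phantom densities (g/mL) from the study
density = np.array([0.016, 0.11, 0.40, 0.86, 0.89, 1.00])

# Hypothetical attenuation coefficients (cm^-1), generated from the
# reported regression line for illustration only
mu = 0.0991 * density + 0.001

slope, intercept = np.polyfit(density, mu, 1)
r = np.corrcoef(density, mu)[0, 1]

# Ignoring the ~1% intercept, as in the paper, density is recovered
# from an attenuation map as mu / slope
recovered = mu / slope
```

Dropping the intercept biases the recovered densities by only about 0.01 g/mL, which is the justification given in the text for ignoring it.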
The regression equation relating the two variables was: linear attenuation coefficient = 0.0991 × (density) + 0.001 (r = 0.999). Figure 3 shows a single slice obtained in a representative case illustrating the uncorrected PBF, the PBF corrected for both TF and blood volume, and the PBF corrected for TF but without blood volume correction. This patient had slight emphysematous change confined to the subpleural area, but it was hard to detect a corresponding PBF change. Visually, the gravity-related dorso-ventral gradient in PBF appeared to be less prominent after TF correction. As for the TF-corrected PBF images, no visual differences were observed between the images with and those without blood volume correction. The histograms of PBF without TF correction, PBF with TF and without blood volume correction, and PBF with TF and blood volume correction are also shown in Fig. 3. After the TF correction, the skewed and peaked distributions of the PBF approached Gaussian distributions, indicating that TF correction reduced the heterogeneity of the PBF (Fig. 3 f, g, and h). Transmission image (a), PBF image without TF correction (b), PBF image with TF and blood volume correction (c), PBF with TF and without blood volume correction (d), histograms of TF (e), PBF image without TF correction (f), PBF image with TF and blood volume correction (g), and PBF image with TF and without blood volume correction (h) in patient 1, who had right upper lobe lung cancer The quantitative results for PBF are presented in Table 2. The mean PBF after correction was 3.6 times higher than the uncorrected value. The skewness and the kurtosis of the TF-corrected PBF significantly decreased, compared with those of the uncorrected PBF, indicating that the distribution of the PBF had become more uniform and symmetrical. Since V V was negligibly small (0.0005 ± 0.0004), the TF-corrected PBF with and without blood volume correction did not differ significantly.
The AIC also did not differ significantly between the TF-corrected images with and those without blood volume correction. The averaged V T and \( {V}_{T_{corr}} \) were 0.17 ± 0.03 and 0.60 ± 0.08, respectively. Table 2 Summary of TF and PBF Figure 4 shows representative slices of the PBF and the PBF corrected for both TF and blood volume obtained in a case with severe emphysematous change. HRCT images of the corresponding slices are also shown. In Fig. 4b, c, a low-PBF region remained after the TF correction, which corresponded to the area where emphysematous change was prominent on HRCT. In contrast, in the relatively low PBF area in the right lung in Fig. 4e, PBF increased after the TF correction (Fig. 4f). HRCT image (a), PBF image without TF correction (b), PBF image with TF and blood volume correction (c) of apical area and those of basal area (d-f) in patient 9, who had left lower lobe lung cancer and emphysematous changes In the present study, we proposed a novel method for calculating the quantitative PBF, taking the TF and blood volume correction into account. We measured the TF using transmission scans obtained with a 137Cs external source. The averaged TF over the normal lung was 0.3, which was consistent with the result obtained from a 68Ga transmission scan [10, 11]. We measured the PBF of the contralateral lung without TF correction by fitting the time–activity curve of the lung to a one-tissue compartment model. The averaged PBF without TF correction was 1.4 ± 0.3 mL/min/cm3 lung. Schuster et al. reported that the averaged PBF in 15 normal subjects was 1.4 ± 0.2 mL/min/cm3 lung using 15O-H2O PET [9]. They obtained F and V T separately by administering 15O-H2O twice, while we obtained both simultaneously using a single injection of 15O-H2O. Although our calculation method differed slightly from that of the above-mentioned previous report, the results were consistent.
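The voxel-wise AIC comparison between the models with and without the blood volume term can be reproduced with the standard least-squares form, AIC = n·ln(RSS/n) + 2k. The paper does not state its exact formula, so this Gaussian-error form and the helper name are assumptions:

```python
import numpy as np

def aic_least_squares(residuals, n_params):
    """AIC for a least-squares fit under Gaussian errors:
    AIC = n * ln(RSS / n) + 2 * k."""
    residuals = np.asarray(residuals, dtype=float)
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return n * np.log(rss / n) + 2 * n_params

# Comparing the 2-parameter model (no blood volume term) with the
# 3-parameter model: with identical residuals, the extra parameter
# adds a penalty of 2 and the simpler model wins.
res = np.full(22, 0.05)  # 22 frames, hypothetical constant residual
aic_2p = aic_least_squares(res, 2)
aic_3p = aic_least_squares(res, 3)
```

This illustrates the conclusion drawn in the text: when the blood volume term does not reduce the residuals, the AIC favors the model without it.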
Due to the history of smoking in eight of nine patients, we examined the presence of emphysematous changes in the contralateral lung. Slight emphysematous changes confined to the subpleural region were observed in seven of nine patients, and spirometry showed normal results or mild airflow limitation. In one case, emphysematous changes were severe and there was moderate airflow limitation on spirometry. Pulmonary blood flow over the whole lung is obtained by dividing cardiac output by functional residual capacity [9]. In cases with airflow limitation, functional residual capacity is expected to increase, which could reduce PBF. Indeed, the averaged PBF was 0.9 mL/min/cm3 in the case with severe emphysema, which was lower than the PBF of the remaining cases with slight or no emphysematous change (1.5 ± 0.3 mL/min/cm3). We evaluated the effect of TF correction on quantitative PBF measurements. The ratio of PBF to the TF-corrected PBF was expected to be equivalent to the TF. The ratio of the average PBF to the TF-corrected average PBF was 0.3, which was reasonable since it corresponded to the mean TF (Table 2). After TF correction, the PBF distribution became less skewed and less peaked. The asymmetrical distribution in the PBF without TF correction was thought to be mainly due to the vertical gradient in the PBF. A TF gradient in the dorso-ventral direction because of gravity was also reported in a 68Ga transmission study [10]. Therefore, the dorso-ventral gradient in the TF-corrected PBF, which is equivalent to the PBF divided by TF, was expected to be smaller than that of the uncorrected PBF, reducing the asymmetry of the PBF distribution. Hopkins et al. evaluated the PBF and lung density in normal supine subjects using arterial spin labeling and proton density imaging during MRI examinations [18]. They reported that the dorso-ventral gradient in the PBF was reduced after the normalization of PBF according to density, consistent with our results.
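The skewness and kurtosis used as heterogeneity measures are the third and fourth standardized moments of the voxel-wise PBF histogram. A minimal NumPy version (the paper's MATLAB implementation is not shown; whether excess kurtosis, which is 0 for a Gaussian, was reported is an assumption):

```python
import numpy as np

def skewness(x):
    """Third standardized moment: positive for a right-skewed distribution."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def excess_kurtosis(x):
    """Fourth standardized moment minus 3: 0 for a Gaussian distribution."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 4) - 3.0)

# A right-skewed sample (like the uncorrected PBF histogram) has positive
# skewness; a symmetric sample has skewness near zero.
right_skewed = np.array([1.0, 1.0, 1.0, 1.5, 2.0, 6.0])
symmetric = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
```

A decrease in both moments after TF correction, as reported in Table 2, corresponds to the histogram becoming more symmetric and less sharply peaked.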
In COPD or ILD, histological changes in the pulmonary vasculature have been reported [19,20,21]. The density distributions of the lung in these patients were heterogeneous. Since the raw PBF does not reflect the PBF per parenchyma, the PBF per lung parenchyma could be overestimated in low aeration areas and underestimated in high aeration areas. Thus, TF-corrected PBF images may contribute to clarifying the disease process, especially in these diseases. Indeed, in one case with severe emphysematous change, we found a region where the PBF remained decreased even after TF correction and a region where the PBF per parenchyma was only mildly decreased after TF correction. We estimated the effect of the pulmonary blood volume on the calculation of PBF. V V was negligibly small, and the model with blood volume correction was not superior to that without blood volume correction based on the AIC analysis. This result seems to contradict the volume fraction of the pulmonary vascular bed, which was 10–20%, derived from 15O-CO PET [10]. The pulmonary vascular bed is composed of arterial (precapillary), capillary, and venous (postcapillary) components. In our formulation, V V corresponds to the precapillary blood volume fraction [22] because the first-pass extraction of 15O-H2O by lung tissue could be considered to be 100% [8]. The precapillary component of the lung could be estimated to be about 0.03 per lung volume based on a histological analysis [23]. We could not distinguish such small contributions from the precapillary blood volume in the PET data. Although the contribution of the pulmonary vascular bed was not taken into account in former studies measuring PBF with 15O-H2O PET [8, 9], their results were in good agreement with the PBF derived from microsphere measurements, which supports our result. The present study had several limitations. First, the types of PET scanners currently available are limited.
Although we used a PET scanner that was equipped with an external transmission source, PET/CT scanners with CT-based transmission scans are mainstream among PET devices. CT images could be used to derive the TF [2, 3] instead of transmission images; however, careful registration of the CT images with the PET images is required because of respiratory movement. A TF derived from CT taken at deep inspiration may introduce errors caused by misregistration with the PET emission data. Therefore, CT at shallow breathing [2] or cine-CT over a complete breathing period [3] could be useful. Second, we could not separate the capillary and post-capillary components using the present TF correction method. In our definition of TF, true lung parenchyma components (alveoli, bronchioles, and interstitium) in addition to capillary and post-capillary vascular components were considered as "parenchyma". To obtain the PBF per true parenchyma, information on the capillary and post-capillary vascular components, which could be derived from 15O-CO PET, is required. Rhodes et al. [10] estimated the volume fraction of the true lung parenchyma components by subtracting the 15O-CO vascular component image from the density image derived from the transmission scan. Third, we did not precisely consider the gradient-dependent difference in the TF that exists between the lung parenchyma and the blood component. There was a vertical gradient in the volume fraction of the true parenchyma (0.12 at the ventral portion, and 0.16 at the dorsal portion), while the volume fraction of the blood had a steeper gradient (0.075 at the ventral portion and 0.21 at the dorsal portion). Therefore, if we use the TF of the true parenchyma in the PBF correction, the dorso-ventral gradient in the corrected PBF might be larger than in our present results. In other words, we might have overcorrected the vertical heterogeneity of the PBF.
We have developed a novel method of calculating the TF-corrected PBF using 15O-H2O PET. We derived the TF from transmission images. We applied this method to the contralateral lung in patients with lung cancer. After TF correction, the heterogeneous distribution of the PBF arising from gravity became more uniform. AIC: Akaike information criteria COPD: Chronic obstructive pulmonary disease CT: Computed tomography DRAMA: Dynamic row-action maximum-likelihood algorithm FDG: Fluorodeoxyglucose FEV1/FVC: The ratio of forced expiratory volume in 1 s to forced vital capacity FWHM: Full width at half maximum HU: Hounsfield unit ILD: Interstitial lung disease LAA−950 : Volume fraction of lung parenchyma with attenuation coefficients of less than −950 HU relative to the whole contralateral lung PBF: Pulmonary blood flow SUV: Standardized uptake value TF: Tissue fraction VOI: Volume of interest Ochs M, Nyengaard JR, Jung A, et al. The number of alveoli in the human lung. Am J Respir Crit Care Med. 2004;169:120–4. Lambrou T, Groves AM, Erlandsson K, et al. The importance of correction for tissue fraction effects in lung PET: preliminary findings. Eur J Nucl Med Mol Imaging. 2011;38:2238–46. Holman BF, Cuplov V, Millner L, et al. Improved correction for the tissue fraction effect in lung PET/CT imaging. Phys Med Biol. 2015;60:7387–402. Rennard SI. COPD: overview of definitions, epidemiology, and factors influencing its development. Chest. 1998;113(4 Suppl):235S–41S. Raghu G, Collard HR, Egan JJ, et al. An official ATS/ERS/JRS/ALAT statement: idiopathic pulmonary fibrosis: evidence-based guidelines for diagnosis and management. Am J Respir Crit Care Med. 2011;183:788–824. Kety SS. Measurement of local blood flow by the exchange of an inert, diffusible substance. Methods Med Res. 1960;8:228–36. Kety SS. The theory and application of the exchange of inert gas at the lung and tissues. Pharmacol Rev. 1951;3:1–41. Mintun MA, Ter-Pogossian MM, Green MA, Lich LL, Schuster DP.
Quantitative measurement of regional pulmonary blood flow with positron emission tomography. J Appl Physiol. 1986;60:317–26. Schuster DP, Kaplan JD, Gauvain K, Welch MJ, Markham J. Measurement of regional pulmonary blood flow with PET. J Nucl Med. 1995;36:371–7. Rhodes CG, Wollmer P, Fazio F, Jones T. Quantitative measurement of regional extravascular lung density using positron emission and transmission tomography. J Comput Assist Tomogr. 1981;5:783–91. Schuster DP, Marklin GF, Mintun MA, Ter-Pogossian MM. PET measurement of regional lung density: 1. J Comput Assist Tomogr. 1986;10:723–9. Xu X, Weiss ST, Rijcken B, Schouten JP. Smoking, changes in smoking habits, and rate of decline in FEV1: new insight into gender differences. Eur Respir J. 1994;7:1056–61. Gevenois PA, de Maertelaer V, De Vuyst P, Zanen J, Yernault JC. Comparison of computed density and macroscopic morphometry in pulmonary emphysema. Am J Respir Crit Care Med. 1995;152:653–7. Matsumoto K, Kitamura K, Mizuta T, et al. Performance characteristics of a new 3-dimensional continuous-emission and spiral-transmission high-sensitivity and high-resolution PET camera evaluated with the NEMA NU 2-2001 standard. J Nucl Med. 2006;47:83–90. Akaike H. A new look at the statistical model identification. IEEE Trans Automat Contr. 1974;19:716–23. Ibaraki M, Miura S, Shimosegawa E, et al. Quantification of cerebral blood flow and oxygen metabolism with 3-dimensional PET and 15O: validation by comparison with 2-dimensional PET. J Nucl Med. 2008;49:50–9. Watabe H, Jino H, Kawachi N, et al. Parametric imaging of myocardial blood flow with 15O-water and PET using the basis function method. J Nucl Med. 2005;46:1219–24. Hopkins SR, Henderson AC, Levin DL, et al. Vertical gradients in regional lung density and perfusion in the supine human lung: the slinky effect. J Appl Physiol. 2007;103:240–8. Zanini A, Chetta A, Imperatori AS, Spanevello A, Olivieri D.
The role of the bronchial microvasculature in the airway remodelling in asthma and COPD. Respir Res. 2010;11:132. Santos S, Peinado VI, Ramírez J, et al. Characterization of pulmonary vascular remodelling in smokers and patients with mild COPD. Eur Respir J. 2002;19:632–8. Farkas L, Kolb M. Pulmonary microcirculation in interstitial lung disease. Proc Am Thorac Soc. 2011;8:516–21. Ohta S, Meyer E, Fujita H, Reutens DC, Evans A, Gjedde A. Cerebral [15O]water clearance in humans determined by PET: I. Theory and normal values. J Cereb Blood Flow Metab. 1996;16:765–80. Singhal S, Henderson R, Horsfield K, Harding K, Cumming G. Morphometry of the human pulmonary arterial tree. Circ Res. 1973;33:190–7. We thank Mr. Kouichi Fujino and Mr. Takashi Kamiya for their assistance with the scanning. We thank Dr. Hiroshi Watabe for the PET data format conversion. We also acknowledge Mr. Sadahiro Naka for the production of 15O-H2O. This study was funded by the Osaka Medical Research Foundation for Intractable Disease and KAKENHI Grant Number 17H01575 from the Ministry of Education, Culture, Sports, Science and Technology, Japan. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. 
Department of Molecular Imaging in Medicine, Osaka University Graduate School of Medicine, Suita, Osaka, Japan Keiko Matsunaga & Eku Shimosegawa Department of Nuclear Medicine and Tracer Kinetics, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka, Japan Keiko Matsunaga, Eku Shimosegawa & Jun Hatazawa Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan Masahiro Yanagawa & Noriyuki Tomiyama Department of Respiratory Medicine, Allergy and Rheumatic Disease, Osaka University Graduate School of Medicine, Suita, Osaka, Japan Tomoyuki Otsuka, Haruhiko Hirata, Takashi Kijima & Atsushi Kumanogoh Keiko Matsunaga Masahiro Yanagawa Tomoyuki Otsuka Haruhiko Hirata Takashi Kijima Atsushi Kumanogoh Noriyuki Tomiyama Eku Shimosegawa Jun Hatazawa KM, MY, TK, AK, NT, ES, and JH contributed to the study design. KM, MY, ES, and JH contributed to data collection. KM, ES, and JH contributed to data analysis and manuscript drafting. MY, TO, HH, TK, and AK contributed to the patient recruitment. All authors read and approved the final manuscript. Correspondence to Jun Hatazawa. All procedures performed were approved by the ethical committee of Osaka University Hospital (No.11109). Informed consent was obtained from all individual participants included in the study. ρ lung = average density of lung over voxel (g/cm3) ρ air = density of air (g/cm3). ρ pa = density of lung parenchyma (g/cm3). ρ v = density of blood (g/cm3). V air = fraction of unit volume occupied by air at a specific voxel. V pa = fraction of unit volume occupied by parenchyma at a specific voxel. V V = fraction of unit volume occupied by blood at a specific voxel. T lung = linear attenuation coefficient of lung (cm−1). T med = linear attenuation coefficient of mediastinum (cm−1). F = PBF per lung volume (mL/s/cm3). V T = distribution volume of water.
λ = physical decay constant of 15O (0.00567 s−1). F corr = PBF per lung parenchyma(mL/s/cm3). \( {V}_{T_{corr}} \) = distribution volume of water in lung parenchyma. F corr' = PBF per lung parenchyma without pulmonary circulation blood volume correction (mL/s/cm3). \( {V}_{T_{corr}'} \) = distribution volume of water in lung parenchyma without pulmonary circulation blood volume correction. Matsunaga, K., Yanagawa, M., Otsuka, T. et al. Quantitative pulmonary blood flow measurement using 15O-H2O PET with and without tissue fraction correction: a comparison study. EJNMMI Res 7, 102 (2017). https://doi.org/10.1186/s13550-017-0350-8 15O-H2O Tissue fraction correction
CommonCrawl
A multi-view genomic data simulator
Michele Fratello1,2, Angela Serra2, Vittorio Fortino3, Giancarlo Raiconi2, Roberto Tagliaferri2 & Dario Greco3
BMC Bioinformatics volume 16, Article number: 151 (2015)
OMICs technologies allow the state of a large number of different features (e.g., mRNA expression, miRNA expression, copy number variation, DNA methylation, etc.) to be assayed from the same samples. The objective of these experiments is usually to find a reduced set of significant features, which can be used to differentiate the conditions assayed. In terms of developing novel computational feature selection methods, this task is challenging due to the lack of fully annotated biological datasets to be used for benchmarking. A possible way to tackle this problem is to generate appropriate synthetic datasets, whose composition and behaviour are fully controlled and known a priori. Here we propose a novel method centred on the generation of networks of interactions among different biological molecules, especially those involved in regulating gene expression. Synthetic datasets are obtained from models based on ordinary differential equations with known parameters. Our results show that the generated datasets mimic the behaviour of real data well, as popular data analysis methods are able to selectively identify existing interactions. The proposed method can be used in conjunction with real biological datasets in the assessment of data mining techniques. The main strength of this method lies in the full control over the simulated data while retaining coherence with the real biological processes. The R package MVBioDataSim is freely available to the scientific community at http://neuronelab.unisa.it/?p=1722. OMICs technologies allow the comprehensive and parallel measurement of multiple molecular events (e.g., DNA modifications, RNA transcription and protein translation) in the same samples.
Exploiting such complex and rich data is needed in the context of systems biology for building global models able to explain complex phenotypes. In order to get useful information, the data must first be mined in search of relevant subsets of features, but classical feature selection methods can fail, as they typically test one feature at a time without considering potential interactions among features. Likewise, single data layers (views) analysed separately could provide incomplete and fragmented information. In contrast, multi-view learning approaches take into account the different views simultaneously to reconstruct the underlying structure of the data. They can be benchmarked on real and synthetic datasets. A common problem with real datasets is that they are not fully understood and well annotated, whereas synthetic data, although under full control, may be too simplistic to efficiently simulate the complex regulatory interactions among the molecules. Different approaches for simulating biological data have been proposed. A first method consists of generating synthetic data with multivariate distributions similar to those observed in real datasets [1-3]. New data can be generated using models that incorporate phenotypic variation, additive and multiplicative noise, transcriptional activity or inactivity, and/or block-correlation structures. An alternative method focuses on generating data from synthetic transcriptional regulatory networks (TRNs). The main idea is to generate regulatory networks that include different types of biological interactions and produce biologically plausible synthetic gene expression data. An important point of these simulation methods is the computational technique used to quantitatively model the network interactions. A common technique for this purpose is based on solving a set of ordinary differential equations (ODEs) that explicitly model the variation of concentration of gene products.
In [4-7], different models for the definition of the interactions are proposed. In [5,6], interaction networks are sampled from existing ones. Starting from a given real network and a seed node of the network, a new network is constructed by sampling the modules of the real network. The main drawback of this method is that the number of possible networks that can be generated is limited by the size of the original network used for sampling. In [4], network topologies are generated based on different theoretical random network models. The main disadvantage of these models is that none of them can reproduce the characteristic hierarchical modularity of TRNs. In [7], a hierarchical modular network is generated by reproducing modules on different scales [8]. Starting with a network without connections, nodes are connected to each other following the patterns of known modules at different scales. Once the topology is defined, interactions among the regulators are modelled by ODEs. In [4] interactions among regulators are modelled as the product of several Hill equations, one for each regulator. In [7] complex interactions among regulators, like cooperation and competition, are modelled with continuous Boolean logic functions. None of these simulators is able to produce multi-view data, but they provide a valuable source of techniques to be used for this purpose. The state of a cell is governed by a series of complex biological processes, such as protein synthesis, which are regulated by different control structures. The transcription factors (TFs) are proteins that bind to specific regions of the genome regulating, together with other molecular signals such as histone modifications and DNA methylation, the transcription rate of the genes [9]. At the post-transcriptional level, microRNAs (miRNAs), whose transcription is regulated similarly to that of other genes, repress protein expression [10].
A priori knowledge on the targeting patterns of TFs and miRNAs can be used, for instance, to produce network models of interaction. TRNs can be modelled as graphs in which nodes represent genes and edges represent the interactions between genes, such as activation or repression. Since the flow of information follows a precise direction, these graphs are directed. TRNs can be characterized by a set of global and local topological properties. As in other networks, the degree distribution in TRNs follows a power-law decay P(k)≈k −α with 2<α<3 [11,12]. This distribution is characteristic of scale-free networks, in which the degree of a node is independent of the size (scale) of the network. Another global characterization of TRNs is the clustering coefficient. For each node of the network it is defined as $$ C = \frac{n}{k \cdot \left(k - 1 \right)} $$ where n is the number of connections between the neighbours and k is the number of neighbours. Studies have confirmed that the clustering coefficient in TRNs depends only on the degree of the nodes and is again distributed as a power-law C(k)≈k −1 [8,13,14]. Together, these two properties imply that genes with low degree have a higher clustering coefficient than highly connected nodes, leading to a hierarchical network of separate modules of genes interconnected by high-degree genes. On a local scale, genes organize into modules. The most significantly frequent patterns of connections between genes of a module are called motifs [15], each with different dynamical properties, such as self-regulation, feed-forward and feed-back loops and dense overlapping regulons [15,16] (Figure 1). The most frequent motifs that involve miRNA and TF interactions are the feed-back and feed-forward loops [17-19]. Motifs of interactions. Graphical representation of the interactions between genes and miRNAs. Arrows are for activation, blunt edges are for repression.
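As a concrete illustration of the clustering coefficient defined above, the sketch below (Python for illustration; the dict-of-sets `adj` representation is an assumption, not part of the package) counts ordered pairs of connected neighbours:

```python
def clustering_coefficient(adj, node):
    """C = n / (k * (k - 1)) for one node, where n is the number of
    (ordered) connections among the node's neighbours and k is the
    number of neighbours. adj maps each node to a set of neighbours."""
    neigh = adj[node]
    k = len(neigh)
    if k < 2:
        return 0.0  # coefficient undefined for k < 2; 0 by convention
    # count ordered pairs of neighbours that are themselves connected
    n = sum(1 for a in neigh for b in neigh if a != b and b in adj[a])
    return n / (k * (k - 1))
```

For a node whose three neighbours share a single bidirectional link among themselves, this gives C = 2 / (3 · 2) = 1/3.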
Intuitively, network construction is based on an iterative procedure. The key idea is to construct a regulatory network starting from a graph without edges, in which each node represents a gene or a miRNA, and to add connections between nodes imitating a randomly chosen well-known motif. Every time a motif is constructed into the network, all the participating nodes are removed from the graph. Regulating genes of the constructed motifs are kept in a separate set of nodes, namely H. When no nodes remain in the graph, a new graph is constructed with the nodes stored in H, again with no edges, and the procedure restarts. This iterative method goes on until no nodes are left. The reinsertion of the regulating genes ensures the creation of a modular hierarchy of nodes. The methods proposed here have been implemented as an R [20] package freely available from (Additional file 1) http://neuronelab.unisa.it/?p=1722. The idea of creating a modular hierarchical network by replicating the same module at different scales was proposed in [8]. In [7] this replication procedure was extended by constructing a network using a set of motifs instead of a single one replicated at different scales. In this work, this idea is further extended with the addition of the interactions among TFs and miRNAs, with the objective of synthesizing multi-view biological data. A set of motifs containing TF-TF, miRNA-TF, and TF-miRNA interactions is defined based on [11,15,17], and recursively used as local templates to construct a network that satisfies the condition of hierarchical modularity. The procedure starts with a network N=(V_N, E_N) of n genes and m miRNAs, with n+m=|V_N|, and without edges, E_N=∅. In each step a pool of random motifs is generated. For each motif a score S is computed. This score measures the reduction in the difference between the degree distribution specified by the user and the current degree distribution.
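Setting the motif score aside for a moment (it is detailed next), the outer iteration can be sketched as follows. This is a deliberately simplified illustration in Python (the package itself is in R): motifs are reduced to star-shaped groups with one regulator, and node selection is random rather than score-driven.

```python
import random

def build_hierarchical_network(n_nodes, motif_size=3, seed=0):
    """Skeleton of the iterative motif-based construction (simplified).

    Nodes are wired into star-shaped 'motifs'; the regulator of each
    motif is kept in H and reinserted once the current level is consumed,
    which yields the hierarchy of modules described in the text."""
    rng = random.Random(seed)
    V = list(range(n_nodes))
    H, edges = [], []
    while V or H:
        if not V:            # level consumed: reinsert the regulators
            V, H = H, []
            if len(V) == 1:  # a single root remains: hierarchy complete
                break
        rng.shuffle(V)
        group, V = V[:motif_size], V[motif_size:]
        hub, targets = group[0], group[1:]
        edges += [(hub, t) for t in targets]
        H.append(hub)        # regulators are wired together at the next level
    return edges
```

The reinsertion of hubs through H is what produces the hierarchy: each level's regulators become the nodes wired together at the next level, and every non-root node ends up regulated exactly once in this toy version.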
The score is the sum over a set of sub-scores $$ S\left(M\right) = \sum_{\substack{i \in \mathsf{genes}\left(V_{N}\right) \\ j \in \mathsf{genes}\left(V_{M}\right)}} Sg_{ij} + \sum_{\substack{i \in \mathsf{mirnas}\left(V_{N}\right) \\ j \in \mathsf{mirnas}\left(V_{M}\right)}} Sm_{ij} $$ Each sub-score indicates the advantage of connecting node i of V_N in the role of node j of V_M. For each i ∈ genes(V_N) and j ∈ genes(V_M), the sub-score is given by $$ Sg_{ij}=\sum\limits_{k=1}^{\vert V_{N} \vert} Sg_{ijk} $$ where Sg_ijk is calculated by $$ Sg_{ijk}=\mathsf{sign}\left(\vert d^{\mathrm{p}}_{k} - p_{k} \vert - \vert d^{\mathrm{p}}_{k} - f_{kij} \vert \right) \cdot \frac{\vert d^{\mathrm{p}}_{k} - p_{k} \vert}{d^{\mathrm{p}}_{k}} $$ in which \(d^{\mathrm {p}}_{k}\) is the desired portion of nodes with degree k, sampled from a power-law with parameter α specified as input by the user; p_k is the current portion of nodes with degree k, and f_kij is the portion of nodes with degree k if node i gets the connections of node j. The sign(·) factor determines whether adding the connections of node j to node i is a good decision (sign(·)>0) or not (sign(·)<0). The factor \(\frac {\vert d^{\mathrm {p}}_{k} - p_{k} \vert }{d^{\mathrm {p}}_{k}}\) determines the magnitude of the advantage or disadvantage of edge additions to N, normalized by the desired number of nodes of degree k. Sub-scores for nodes i ∈ mirnas(V_N) and j ∈ mirnas(V_M) are computed differently, since miRNA-gene interactions obey different properties. The desired portion of nodes regulated by a miRNA is denoted by \(d^{\mathrm {e}}_{k}\), which is sampled from an exponential distribution of parameter λ given as input [18], whereas the desired number of nodes that regulate a miRNA is sampled from a power-law as in the previous case.
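All the sub-score terms, for genes and miRNAs alike, share the same sign(·) · |d − p| / d pattern; a generic sketch (Python for illustration; the function and argument names are hypothetical):

```python
import math

def subscore_term(d_target, p_current, f_candidate):
    """One term of the degree-distribution sub-score:
    sign(|d - p| - |d - f|) * |d - p| / d.

    d_target: desired portion of nodes with degree k,
    p_current: current portion,
    f_candidate: portion if the candidate connections are made.
    Positive values reward moves toward the target distribution."""
    gain = abs(d_target - p_current) - abs(d_target - f_candidate)
    sign = math.copysign(1.0, gain)
    return sign * abs(d_target - p_current) / d_target
```

Moving the current portion from 0.1 toward a target of 0.3 yields a positive score; moving away from the target yields a negative one.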
Globally considered, $$ Sm_{ij} = \sum\limits_{k=1}^{\vert \mathsf{mirnas}\left(V_{N}\right)\vert} Sm^{\text{in}}_{ijk} + Sm^{\text{out}}_{ijk} $$ $$ Sm^{\text{in}}_{ijk}=\mathsf{sign}\left(\vert d^{\mathrm{p}}_{k} - p^{\text{in}}_{k} \vert - \vert d^{\mathrm{p}}_{k} - f^{\text{in}}_{kij} \vert \right) \cdot \frac{\vert d^{\mathrm{p}}_{k} - p^{\text{in}}_{k} \vert}{d^{\mathrm{p}}_{k}} $$ $$ Sm^{\text{out}}_{ijk}=\mathsf{sign}\left(\vert d^{\mathrm{e}}_{k} - p^{\text{out}}_{k} \vert - \vert d^{\mathrm{e}}_{k} - f^{\text{out}}_{kij} \vert \right) \cdot \frac{\vert d^{\mathrm{e}}_{k} - p^{\text{out}}_{k} \vert}{d^{\mathrm{e}}_{k}} $$ Given the score of each motif in the pool, a motif is selected by sampling a distribution proportional to the scores. The selected motif is used as a template. A subset of nodes of the current network N is sampled using the sub-scores Sg_ij and Sm_ij and connected as the nodes in the motif. During each edge addition, a set of parameters is generated in order to characterize the dynamical properties of the interaction, making the overall behaviour of the motif similar to its real-world counterparts. For example, using the same terminology of [15], the single-input motif (Figure 1) is considered to generate coordinated expression of a set of genes and, more interestingly, scheduled expression schemes, in which the regulated genes are expressed in a defined order. The selected nodes are then removed from N, and the nodes that took the role of x in Figure 1 are added to a separate set H, which is initially empty. When there are no more nodes to connect in V_N, the nodes in H are passed into V_N, H is set to ∅ and a new iteration is started. This process goes on until both V_N and H are empty. Each time V_N gets the nodes of H, modules of nodes in the network get connected hierarchically. When the network construction is completed, a special class of nodes is added to the network: signalling nodes.
These nodes are responsible for transferring information to the network [15,21]. Stimulation signals are an example of the information passed. The system state can be set through signals, as covered in the next section. The number of signalling nodes to be placed in the network is determined by the user. Signalling nodes only have outgoing edges. Target genes are determined by sampling a distribution proportional to the out-degree of the nodes of the network. This ensures that the majority of genes controlled by signals have enough capability of controlling the state of the network during simulation. A more concise representation of the network generation procedure is reported in Algorithm 1. Simulation of the system is based on ODEs. Concentrations of gene products are modelled by continuous variables on a limited time interval [22]. The rate of production of a given element x_i depends on the concentration of its regulatory components, both genes and miRNAs (to avoid cluttering the notation, we omit the explicit time dependency of concentrations and concentration rates) $$ \frac{\mathrm{d}x_{i}}{\mathrm{d}t} = f_{i}(\mathbf{x}, \mathbf{m}) $$ x is the vector of concentrations of the genes regulating x_i, m is the vector of concentration levels of the miRNAs regulating x_i and f_i is a non-linear regulation function of these components. A common model for f_i(x,m) with a single regulating gene x_j and a single miRNA m_k is $$ \frac{\mathrm{d}x_{i}}{\mathrm{d}t} = p_{i} \cdot r_{i} \left(x_{j}\right) - d_{i}\left(m_{k} \right) \cdot x_{i} $$ where p_i is the basal production rate of x_i, i.e., the basic rate of production; r_i(x_j) is the function that models the regulation of x_j on x_i and d_i(m_k) is the degradation function [22,23], which depends on the concentration level of m_k. A common regulation function is the Hill equation [24] $$ h\left(x_{j}; \theta, \mu\right) = \frac{x_{j}^{\mu}}{x_{j}^{\mu} + \theta^{\mu}} $$ with h(x_j;θ, μ)∈[0,1].
Parameter θ>0 is the value at which h(x_j;θ, μ)=0.5, i.e., a threshold on the concentration level of x_j; μ>0 controls the steepness of the function. For μ>1 the Hill equation has a sigmoid shape (Figure 2). Hill functions. Shapes of the Hill function for different values of the parameters. The solid red line is a Hill function with parameters θ=0.5 and μ=5. The shaded red area is the family of Hill functions obtained when θ∈[0.3,0.8] and μ is fixed. Similarly, the solid blue line is the Hill function of parameters θ=0.3 and μ=6. The shaded blue area is the family of Hill functions obtained when μ∈[2,10] and θ is fixed. The degradation rate of target genes is directly influenced by the regulating miRNA m_k [25,26]. The degradation function is defined as $$ d_{i}\left(m_{k} \right) = d_{i0} + d_{i} \cdot h\left(m_{k}; \theta, \mu\right) $$ The first term d_i0 is the basal degradation rate, that is, the rate of degradation of x_i independent of m_k, and d_i is the rate of degradation dependent on the concentration of m_k. The miRNA rate of production is assumed to follow a law similar to that of the genes, but with a constant degradation rate. When there is more than one regulator, the Hill equation alone does not suffice. Hence, there is the need for a model taking into account interactions among regulators in addition to interactions between regulators and the regulated gene. Since most of the interactions among regulators are unknown [15], we apply the same idea proposed in [7] and define the possible interactions among regulators by combinations of simple functions.
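Before turning to multiple regulators, the single-regulator model above can be sketched as follows (Python for illustration; all parameter values are arbitrary placeholders, not values used by the package):

```python
def hill(x, theta, mu):
    """Hill equation h(x; theta, mu) = x^mu / (x^mu + theta^mu), in [0, 1];
    theta is the half-activation threshold, mu the steepness."""
    return x ** mu / (x ** mu + theta ** mu)

def rate(x_i, x_j, m_k, p_i=1.0, d_i0=0.1, d_i=0.5,
         theta_r=0.5, mu_r=5.0, theta_d=0.3, mu_d=6.0):
    """dx_i/dt = p_i * h(x_j) - (d_i0 + d_i * h(m_k)) * x_i:
    production driven by regulator x_j, degradation enhanced by miRNA m_k."""
    return (p_i * hill(x_j, theta_r, mu_r)
            - (d_i0 + d_i * hill(m_k, theta_d, mu_d)) * x_i)
```

At x_j = θ the production term runs at half its maximal rate, and a highly expressed miRNA pushes the effective degradation rate from d_i0 up towards d_i0 + d_i.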
Here we follow the same approach and define the same simple interaction functions among regulators: All regulators need to be highly expressed to activate the regulated gene $$\begin{array}{@{}rcl@{}} &\mathsf{COOP}\left(x_{1}, \ldots, x_{n}\right)=\mathsf{min}\left(h\left(x_{1}\right), \ldots, h\left(x_{n}\right) \right)& \end{array} $$ Contemporary activation of all regulators is not necessary to activate the regulated gene $$\begin{array}{@{}rcl@{}} &\mathsf{SYN}\left(x_{1}, \ldots, x_{n}\right)=\mathsf{min}\left(1, h\left(x_{1}\right) + \ldots + h\left(x_{n}\right) \right)& \end{array} $$ Activation of the regulator means the target gene is repressed $$\begin{array}{@{}rcl@{}} &\mathsf{INH}\left(x\right)=1-h\left(x\right)& \end{array} $$ Regulator x_1 competes with repressor x_2 $$\begin{array}{@{}rcl@{}} &\mathsf{COMP}\left(x_{1}, x_{2}\right)=\mathsf{max}\left(0, h\left(x_{1}\right) - h\left(x_{2}\right) \right)& \end{array} $$ It should be noted that in this case the threshold and steepness parameters are different for each interaction. The specific regulation function of each gene is defined by the composition of randomly sampled functions and of the regulators that will interact. Since miRNAs tend to increase the rate of degradation of target genes, resulting in reduced expression levels, we assume that the only type of interaction among miRNAs regulating the same target gene is synergistic inhibition. Once all the system parameters are specified, the set of ODEs is solved with a numerical procedure over a given time interval. An initial value for the system must be specified. The result of the simulation can be used either as a time series dataset or as steady-state microarray data obtained by sampling the time series. Different experimental conditions can be simulated using controlling signals for the synthetic subjects.
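The four interaction functions defined above translate directly into code; a minimal sketch (Python for illustration), where each argument is an already-computed Hill term h(x):

```python
def coop(*h):
    """COOP: all regulators must be highly expressed (min of Hill terms)."""
    return min(h)

def syn(*h):
    """SYN: regulators act independently; activations add up, capped at 1."""
    return min(1.0, sum(h))

def inh(h):
    """INH: an active regulator represses the target."""
    return 1.0 - h

def comp(h1, h2):
    """COMP: activator h1 competes with repressor h2."""
    return max(0.0, h1 - h2)
```

Regulation functions are then built by composition, e.g. comp(coop(h1, h2), h3) for two cooperating activators opposed by one repressor.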
A large set of different stimuli can be simulated, from the inhibition of some hub gene (with a constant 0 signal) to periodic drug administration (using periodic signals).
Variability of the model
In order to generate plausible expression values for different simulated subjects, a degree of variability must be present in the model. We used a two-level model comprising biological and technical variability. Biological variability is an intrinsic characteristic among individuals of the same species and is implemented in the synthetic system as a small amount of noise in the system parameter values. Specifically, white noise with low standard deviation is added for each subject to be simulated. Technical variability is an inevitable part of the data acquisition process and is simulated by implementing the measurement error model proposed in [27], which considers two error components. For each true expression level x_i, the measured intensity y_i is given by $$ y_{i} = c + x_{i}e^{\eta} + \epsilon $$ where c is the constant mean background level. ε is an additive error term distributed as \(\mathcal {N}\left (0, \sigma _{\epsilon } \right)\) that represents the background noise and mostly influences genes expressed at low levels. The second error term is \(\eta ~\sim ~\mathcal {N}\left (0, \sigma _{\eta }\right)\), a multiplicative factor that represents the proportional error that mostly influences higher expression values.
Results and discussions
In order to verify the hierarchical modularity of the generated regulatory networks, we constructed different sets of networks of different sizes with default parameters. The scale-free property was verified by generating 50 networks of 1000 nodes with the same scale parameter α=2.2. We measured the degree distribution for each network and fitted a line in the log-log plot. The resulting fit is shown in Figure 3.
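A fit of this kind can be sketched as an ordinary least-squares line on log(frequency) versus log(degree) (Python for illustration; the `fit_powerlaw` helper is hypothetical, not part of the package, which is in R):

```python
import math
from collections import Counter

def fit_powerlaw(degrees):
    """Estimate the scale parameter alpha of P(k) ~ k^-alpha by fitting
    a least-squares line to log(frequency) vs. log(degree)."""
    counts = Counter(d for d in degrees if d > 0)
    xs = [math.log(k) for k in counts]           # log degree
    ys = [math.log(c) for c in counts.values()]  # log frequency, same order
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # slope in log-log space is -alpha
```

On a degree sample drawn with frequencies proportional to k^-2.2, the estimate recovers a value close to 2.2 (up to integer rounding of the counts).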
From the generated networks we estimated a scale parameter \(\hat {\alpha }=2.5566\) and a goodness-of-fit parameter of R 2=0.9362. Fitting of degree distribution. The degree distribution of 50 networks generated with the same size is fitted by a line in log-log space. The resulting estimated scale parameter is \(\hat {\alpha }=2.5566\) with R 2=0.9362. We then verified the scale-invariance of the clustering coefficient. For this, we generated 100 networks with size randomly sampled from the interval [10,1000]. Figure 4 shows the estimated scale parameter of the clustering coefficient distribution as a function of network size. Together these results show that the generated networks have the hierarchical modularity property of real regulatory networks. Scale invariance of clustering coefficient. Simulation of 100 networks of random size in [10,1000] shows that the estimated scaling parameter of the clustering coefficient is independent of the network size and approximates the value found in real networks. In the rest of this section we report four cases of analysis that can be performed on the generated datasets: two examples explore the topology of the network and are based on network reconstruction methods and on clustering methods; the other two are feature relevance methods: a filter method based on t-tests and a wrapper approach based on the Boruta method. For the experiments we generated three regulatory networks from which we generated different simulated datasets of increasing complexity:
GRN1: 1000 genes, 100 miRNAs and 10 controlling signals
GRN2: 1000 genes, 300 miRNAs and 35 controlling signals
GRN3: 500 genes, 100 miRNAs and 20 controlling signals
In all cases the synthetic datasets are obtained by simulating the regulatory network for 100 time points. The resulting dataset is obtained by taking the expression values at the last simulated time point.
We wanted to test if the synthetic networks generated with the proposed model can be reconstructed with commonly used tools for this task. From GRN1 we generated a dataset of 75 samples by assigning to each of the 10 controlling signals a constant value randomly sampled from a uniform distribution in [0,1]. We estimated the significance of each connection, for both the gene-only expression dataset and the genes+miRNAs dataset, with PANDA [28]. PANDA is a message-passing network prediction method based on interactions among TFs and regulated genes. Information on each type of interaction is propagated to the others iteratively, resulting in a prediction score for each interaction. For both types of regulatory networks we provided different numbers of a priori connections. We executed PANDA with prior information covering 10%, 25%, 50%, 75% and 100% of all actual connections of the gene-gene network (1656 edges) and of the full network interactions (3969 edges). In addition, we introduced noisy prior information in the form of false connections. Different quantities of incorrect edges have been tested, namely 10%, 25%, 50%, 75% and 100% of incorrect edges. Since the PANDA scores can be interpreted as z-scores, we set a p-value threshold of 0.05 for both nominal and Bonferroni-corrected p-values. We also set a threshold of 0.05 on the false discovery rate (FDR). For each significant connection we calculated the length of the path in the actual synthetic network. The results of the analysis are reported in Tables 1, 2, 3 and 4 (where 1 signifies direct interaction, >1 signifies indirect interaction, and Inf signifies no interaction). When given correct prior information, PANDA is able to mark as significant almost 100% of the true interactions, whereas when noisy (false) prior information is passed, none of it is marked as significant.
Table 1 Gene-only path length
Table 2 Gene-only path length with false information
Table 3 Whole-network path length
Table 4 Whole-network path length with false information
We carried out additional tests using ARACNE [29], which estimates pairwise interactions by the degree of mutual information shared among the nodes under examination. Indirect connections that may arise are removed by applying the data processing inequality. Starting from the expression dataset we estimated the mutual information matrix for both gene-only interactions and for the full regulatory network. We set a threshold of 0.05 on the weights of the reconstructed connections and checked how many of them are actual connections in the synthetic network. The path lengths for the interactions predicted by ARACNE on the synthetic network are listed in Table 5. In the gene-only network, most of the interactions found do not actually exist, whereas in the full network, comprising both genes and miRNAs, about half of the interactions found exist in the network but have an average path length of 4.38.
Table 5 Whole-network path length with ARACNE
The high rate of erroneous interactions may be due to the fact that ARACNE works well when the role of the loops in the regulatory network is negligible [29], whereas the networks produced by the proposed simulator involve both feedback and feed-forward loops on different scales (i.e., from loops of nodes to loops of motifs) that may produce complex behaviours like oscillations or memory states. In addition, miRNAs also participate in loops with genes. Both facts may explain the high levels of false interactions found in the gene-only network, where the miRNA layer of information needed to explain the behaviour of genes is not included in the analysis. We speculate that the large number of direct interactions inferred by ARACNE in the full regulatory network may be due to the simplistic model of variation employed.
This results in nodes of the same pathway sharing too much information, such that they look directly connected with respect to the mutual information.
Clustering of genes and miRNAs
Broadly speaking, clustering a set of objects aims to partition them into disjoint subsets. This partition is such that objects from different subsets are as dissimilar as possible, whereas objects of the same cluster are maximally similar. Clustering has been widely applied to gene expression profiles across subjects. Gene clustering can be used as a dimensionality reduction technique in which only a representative of each cluster is used, instead of the entire dataset, for further analysis [30]. In addition, gene clustering can be useful to predict the functional role of unknown genes based on the known genes of the same cluster [31]. We analysed two different synthetic datasets. The first dataset was generated from GRN2. The dataset is made of two classes, each of 50 samples. The signalling genes were all set to 0 for the first condition and to 1 for the second condition (relative expression levels). The second dataset is made of 75 samples divided into three classes of 25 samples each. The dataset is simulated from GRN1. For each condition we defined a constant expression value for the 10 controlling genes by randomly sampling a uniform distribution \(\mathcal {U}\left (0,1\right)\). In both experiments, for each sample we added a small amount of white noise to the network parameters, simulated the network over an interval of 100 time points, and took the last time point as the expression dataset. For both synthetic datasets we used the k-means clustering algorithm on the features (genes and miRNAs). Genes and miRNAs of both datasets have been standardized so that the mean of each gene and miRNA is 0 and the standard deviation is 1, then clustered into 50 groups. Data standardization makes the differences among genes and miRNAs depend on their correlations.
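The link between standardization and correlation can be made concrete: after z-scoring (population standard deviation), the squared Euclidean distance between two profiles of length n equals 2n(1 − r), with r their Pearson correlation. A small sketch (Python for illustration; the helper names are ours, not the package's):

```python
import math

def standardize(v):
    """z-score a profile: zero mean, unit (population) standard deviation."""
    n = len(v)
    mu = sum(v) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in v) / n)
    return [(x - mu) / sd for x in v]

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pearson(a, b):
    """Pearson correlation computed via standardized profiles."""
    za, zb = standardize(a), standardize(b)
    return sum(x * y for x, y in zip(za, zb)) / len(a)
```

So k-means with Euclidean distance on standardized profiles effectively groups correlated features, which is why the resulting clusters tend to recover connected nodes.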
Figure 5 shows some of the clustered genes and miRNAs along with the information coming from the known regulatory network (the actual connections). As can be seen, nodes (genes and miRNAs) that are clustered together are actually connected in the network from which the data has been generated. Revealed interactions among clustered genes. Clustered genes and miRNAs together with interactions. The majority of nodes that are clustered together are actually connected in the network from which data has been simulated.
Feature relevance
Due to the high-dimensional nature of OMICs data, effective modelling for inference or prediction in bioinformatics cannot be performed without an initial phase of feature selection. Different approaches to feature selection are available, which can be summarized in three categories: filter, wrapper and embedded methods, each with its own advantages and disadvantages [30]. We performed two feature relevance analyses. The first dataset is made of 50 samples for each condition (2 conditions in total) generated from GRN3. In this experiment we wanted to simulate the case in which the 2 different conditions are well characterized by a subset of controlling signals in the form of an expression signature, by setting 13 of the 20 control signals to deterministic values. Specifically, for the first condition the first 13 signals were set to the pattern 1100011011011, with the remaining 7 control signals set to random values sampled from \(\mathcal {U}\left (0,1\right)\). For the second condition the first 13 signals were set to 1001110100010. We applied a filter feature selection method based on the t-test [32]. We set a threshold on the FDR of 0.05 and marked as significant all the genes with a q-value below the threshold. 8 out of the 39 significant genes are actual signalling genes, 22 significant genes are at path length 1 from a signalling gene, and the remaining 9 are at path length 2 from a signalling gene.
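The FDR thresholding step of this filter can be sketched with the Benjamini-Hochberg step-up procedure (Python for illustration; the p-values would come from the per-gene t-tests):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: return the (sorted) indices of the
    features declared significant at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # find the largest rank k with p_(k) <= (k / m) * alpha;
    # every feature up to rank k is then declared significant
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])
```

Note that the step-up rule keeps every feature up to the largest passing rank, even if some intermediate p-value exceeds its own threshold.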
The same was done for miRNAs: 3 out of 5 are at distance 1 from a signalling gene and the remaining 2 are at distance 2. The second dataset consists of 75 samples divided into three classes of 25 samples each, generated from GRN1 by setting the controlling signals to random constant values sampled from \(\mathcal {U}\left (0,1\right)\). For this more complex dataset we used the wrapper method of Boruta [33]. This method relies on the random forest classifier. The significance of each feature is assessed by comparing its importance, given by the random forest, to the importance of a randomly permuted version of the same feature. Features that are significantly more important than their random permutations are marked as relevant. The procedure marked 59 genes as relevant: 9 are signalling genes, 28 are directly connected to (at least) a signalling gene, 20 are at distance 2 from a signalling gene and the remaining 3 genes are at distance 3 from a signalling gene. Of the 6 relevant miRNAs, 5 are at distance 2 from a signalling gene and only 1 is at distance 1 from a signalling gene. From these experiments it is to be noted that almost all the signalling genes that were set to different values for each experimental condition are recognised as significant. The remaining signalling genes that are not marked as significant may have been set to values too similar between different conditions, or the amount of noise may be such as to deteriorate the pattern. It should also be noted that both feature relevance procedures marked as significant nodes directly connected to at least one signalling gene or in the same pathway. This shows the capability of the proposed model to propagate information through modules of locally connected genes. Here we proposed a multi-view biological data simulator based on ordinary differential equations with the objective of benchmarking multi-view learning methods.
We ensured that the generated data are biologically relevant, since the features must follow patterns of interaction similar to those observed in real biological networks. We showed different cases of analysis where the simulated datasets can complement real datasets in the assessment of novel methods for data analysis. At the same time the sample analysis further validated the proposed approach, since information coherent with the regulatory network is extracted from the synthetic dataset. It will be possible to implement additional layers of complexity (e.g., including DNA methylation or copy number variations) as more comprehensive and systematic knowledge on the biological interactions arises. TRN: Transcriptional regulatory network ODE: Ordinary differential equations TF: Transcription factor miRNA: microRNA FDR: False discovery rate Bian S, Wang W. Computational intelligence and security. Lecture Notes in Computer Science. vol. 3801. Berlin, Heidelberg: Springer; 2005, pp. 809–14. doi:10.1007/11596448. Zhang J, Coombes K. UMPIRE: Ultimate Microarray Prediction, Inference, and Reality Engine. In: BIOTECHNO 2011, The Third International Conference on Bioinformatics, Biocomputational Systems and Biotechnologies: 2011. p. 121–125. Muselli M, Bertoni A, Frasca M, Beghini A, Ruffino F, Valentini G. A mathematical model for the validation of gene selection methods. IEEE/ACM Trans Comput Biol Bioinform. 2011; 8(5):1385–92. doi:10.1109/TCBB.2010.83. Mendes P, Sha W, Ye K. Artificial gene networks for objective comparison of analysis algorithms. Bioinformatics. 2003; 19(Suppl 2):122–9. doi:10.1093/bioinformatics/btg1069. Van den Bulcke T, Van Leemput K, Naudts B, van Remortel P, Ma H, Verschoren A, et al. SynTReN: a generator of synthetic gene expression data for design and analysis of structure learning algorithms. BMC Bioinformatics. 2006; 7:43. doi:10.1186/1471-2105-7-43.
Schaffter T, Marbach D, Floreano D. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods. Bioinformatics. 2011; 27(16):2263–70. doi:10.1093/bioinformatics/btr373. Di Camillo B, Toffolo G, Cobelli C. A gene network simulator to assess reverse engineering algorithms. Ann N Y Acad Sci. 2009; 1158:125–42. doi:10.1111/j.1749-6632.2008.03756.x. Ravasz E, Somera AL, Mongru DA, Oltvai ZN, Barabási AL. Hierarchical organization of modularity in metabolic networks. Science. 2002; 297(5586):1551–5. doi:10.1126/science.1073374. Bernstein BE, Birney E, Dunham I, Green ED, Gunter C, Snyder M. An integrated encyclopedia of DNA elements in the human genome. Nature. 2012; 489(7414):57–74. doi:10.1038/nature11247. Ambros V. The functions of animal microRNAs. Nature. 2004; 431(7006):350–5. doi:10.1038/nature02871. Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U. Network motifs: simple building blocks of complex networks. Science. 2002; 298(5594):824–7. doi:10.1126/science.298.5594.824. Thieffry D, Huerta AM, Pérez-Rueda E, Collado-Vides J. From specific gene regulation to genomic networks: a global analysis of transcriptional regulation in Escherichia coli. Bioessays. 1998; 20(5):433–40. doi:10.1002/(SICI)1521-1878(199805)20:5<433::AID-BIES10>3.0.CO;2-2. Barabási AL, Oltvai ZN. Network biology: understanding the cell's functional organization. Nat Rev Genet. 2004; 5(2):101–13. doi:10.1038/nrg1272. Potapov AP, Voss N, Sasse N, Wingender E. Topology of mammalian transcription networks. Genome Inform. 2005; 16(2):270–8. Alon U. Network motifs: theory and experimental approaches. Nat Rev Genet. 2007; 8(6):450–61. doi:10.1038/nrg2102. Shen-Orr SS, Milo R, Mangan S, Alon U. Network motifs in the transcriptional regulation network of Escherichia coli. Nat Genet. 2002; 31(1):64–8. doi:10.1038/ng881. Shalgi R, Lieber D, Oren M, Pilpel Y.
Global and local architecture of the mammalian microRNA-transcription factor regulatory network. PLoS Comput Biol. 2007; 3(7):e131. doi:10.1371/journal.pcbi.0030131. Martinez NJ, Ow MC, Barrasa MI, Hammell M, Sequerra R, Doucette-Stamm L, et al. A C. elegans genome-scale microRNA network contains composite feedback motifs with high flux capacity. Genes Dev. 2008; 22(18):2535–49. doi:10.1101/gad.1678608. Sun J, Gong X, Purow B, Zhao Z. Uncovering MicroRNA and Transcription Factor Mediated Regulatory Networks in Glioblastoma. PLoS Comput Biol. 2012; 8(7):e1002488. doi:10.1371/journal.pcbi.1002488. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria; 2014. http://www.r-project.org/. Hecker M, Lambeck S, Töepfer S, van Someren E, Guthke R. Gene regulatory network inference: data integration in dynamic models - a review. Biosystems. 2009; 96(1):86–103. doi:10.1016/j.biosystems.2008.12.004. de Jong H. Modeling and simulation of genetic regulatory systems: a literature review. J Comput Biol. 2002; 9(1):67–103. doi:10.1089/10665270252833208. Karlebach G, Shamir R. Modelling and analysis of gene regulatory networks. Nat Rev Mol Cell Biol. 2008; 9(10):770–80. doi:10.1038/nrm2503. Hill A. The possible effects of the aggregation of the molecules of haemoglobin on its dissociation curves. J Physiol (Lond). 1910; 40:4–7. Vohradsky J, Panek J, Vomastek T. Numerical modelling of microRNA-mediated mRNA decay identifies novel mechanism of microRNA controlled mRNA downregulation. Nucleic Acids Res. 2010; 38(14):4579–85. doi:10.1093/nar/gkq220. Khanin R, Vinciotti V. Computational modeling of post-transcriptional gene regulation by microRNAs. J Comput Biol. 2008; 15(3):305–16. doi:10.1089/cmb.2007.0184. Rocke DM, Durbin B. A model for measurement error for gene expression arrays. J Comput Biol. 2001; 8(6):557–69. doi:10.1089/106652701753307485. Glass K, Huttenhower C, Quackenbush J, Yuan GC.
Passing messages between biological networks to refine predicted interactions. PloS One. 2013; 8(5):e64832. doi:10.1371/journal.pone.0064832. Margolin AA, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Dalla Favera R, et al. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006; 7(Suppl 1):S7. doi:10.1186/1471-2105-7-S1-S7. Guyon I, Elisseeff A. An introduction to variable and feature selection. J Mach Learn Res. 2003; 3:1157–82. D'haeseleer P. How does gene expression clustering work? Nat Biotechnol. 2005; 23(12):1499–1501. doi:10.1038/nbt1205-1499. Saeys Y, Inza I, Larrañaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007; 23(19):2507–17. doi:10.1093/bioinformatics/btm344. Kursa MB, Jankowski A, Rudnicki WR. Boruta - A system for feature selection. Fundamenta Informaticae. 2010; 101:271–85. doi:10.3233/FI-2010-288. This work has been supported by the European Commission, under grant agreement FP7-309329 (NANOSOLUTIONS). Department of Medical, Surgical, Neurological, Metabolic and Ageing Sciences, Second University of Napoli, Napoli, Italy: Michele Fratello. Department of Computer Science, Fisciano, Italy: Angela Serra, Giancarlo Raiconi & Roberto Tagliaferri. Unit of Systems Toxicology and Nanosafety Research Centre, Finnish Institute of Occupational Health, FIOH, Helsinki, Finland: Vittorio Fortino & Dario Greco. Correspondence to Dario Greco. DG and RT conceived and supervised the study. MF developed the methods, analysed and interpreted the data, and implemented the software. AS, VF and GR participated in the development of the methods and the analysis of the data. All the authors have participated in drafting the manuscript.
All authors read and approved the final manuscript. Additional file 1: MVBioDataSim R package. The R implementation of the proposed method is available as an R package attached to this paper. Fratello, M., Serra, A., Fortino, V. et al. A multi-view genomic data simulator. BMC Bioinformatics 16, 151 (2015). doi:10.1186/s12859-015-0577-1. Received: 03 December 2014. Keywords: Multi-view; Regulatory network; Gene-miRNA interactions; OMICs data simulation
Detection of single-base mutation of DNA oligonucleotides with different lengths by terahertz attenuated total reflection microfluidic cell Mingjie Tang,1,2,5 Mingkun Zhang,1,5 Liangping Xia,1,4 Zhongbo Yang,1 Shihan Yan,1 Huabin Wang,1 Dongshan Wei,1,3,* Chunlei Du,1 and Hong-Liang Cui1 1Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing, 400714, China 2University of Chinese Academy of Sciences, Beijing 100049, China 3School of Electronic Engineering, Dongguan University of Technology, Dongguan, Guangdong, 523808, China 4Key Laboratory of Micro Nano Optoelectronic Devices and Intelligent Perception Systems, Yangtze Normal University, Chongqing, 408100, China 5These authors contributed equally to this work *Corresponding author: [email protected] https://doi.org/10.1364/BOE.400487 Mingjie Tang, Mingkun Zhang, Liangping Xia, Zhongbo Yang, Shihan Yan, Huabin Wang, Dongshan Wei, Chunlei Du, and Hong-Liang Cui, "Detection of single-base mutation of DNA oligonucleotides with different lengths by terahertz attenuated total reflection microfluidic cell," Biomed. Opt.
Express 11, 5362-5372 (2020). Original Manuscript: June 18, 2020; Revised Manuscript: August 6, 2020; Manuscript Accepted: August 26, 2020. Many human genetic diseases are caused by single-base mutations in the gene sequence. Since DNA molecules differing by a single-base mutation are extremely difficult to differentiate, existing detection methods are invariably complex and time-consuming. We propose a new label-free and fast terahertz (THz) spectroscopic technique based on a home-made terahertz attenuated total reflection (ATR) microfluidic cell and a terahertz time-domain spectroscopy (THz-TDS) system to detect single-base-mutated DNA molecules. The target DNA molecules are the normal hemoglobin gene and the sickle cell anemia gene (15 nt), and the JAK2 wild-type gene and the JAK2 V617F mutant gene (39 nt), associated with sickle cell anemia and thrombocytopenia, respectively. Results show that oligonucleotide fragments with a single-base mutation can be identified by THz spectroscopy combined with the ATR microfluidic cell, and that the recognition of short oligonucleotide fragments with a single-base mutation is better than that of long oligonucleotide fragments. The terahertz biosensor is shown to have high sensitivity and can be used to detect DNA molecules directly in the solution environment.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement DNA is one of the most important biological macromolecules and plays an essential role in life activities. As the carrier of genetic information, it participates in the transmission and expression of genetic information in cells, so as to promote and control the metabolic process. DNA detection not only provides important information for scientific research [1,2], but can also be employed for early diagnosis, treatment and prognosis of diseases (especially tumors) [3,4]. As such, DNA detection has been widely used in all aspects of biomedicine, and is one of the indispensable detection methods in basic medical research and clinical disease treatment [5]. Many human diseases, such as thalassemia and a large variety of tumors, are caused by single-base mutations in the gene sequence. These single-base mutations can be used as biomarkers and are very useful for early medical diagnosis of diseases [6]. A single-base mutation is a point mutation that occurs at a specific position in a genome and constitutes the most common form of genetic variation. Single-base mutations occur relatively frequently: in the human genome, roughly one base in every 100-300 may mutate [7]. A single-base mutation may affect gene function through amino acid substitution, modification of gene expression or alteration of gene splicing, and is closely associated with various common diseases and individual differences in drug metabolism [8]. Although numerous techniques for point mutation detection have been proposed to date, most of these approaches require target amplification, typically with polymerase chain reaction (PCR) [9–11]. Additional efforts are thus needed to explore more broadly applicable methods for sensitive, accurate, rapid, and low-cost single-base mutation identification [12–14].
At present, a number of detection methods for single-base mutations exist, and new methods keep emerging. Most of these methods can be regarded as comprising two separate steps: distinguishing the specific site, and detecting the change and analyzing the data. Discrimination of the single-base mutation site is mainly achieved by hybridization, PCR, molecular conformation and enzymatic methods. Quantitative detection methods mainly include gel electrophoresis, fluorescence, DNA chips and mass spectrometry. Thus, most single-base mutation detection is a combination of the above two categories of methods. For example, the task of distinguishing a specific site based on DNA hybridization can be accomplished by a combination of the fluorescence method [15] and DNA chip technology [16]. Sickle cell anemia (SCD) is an autosomal recessive genetic disease, which belongs to the β-hemoglobin diseases. β-hemoglobinopathy is caused by mutation of the HBB gene and is the most common single-base mutation genetic disease affecting humans. SCD is caused by the mutation of the 6th codon, GAG, of the HBB gene into GTG: this leads to the replacement of the 6th glutamate (Glu) residue of the HBB protein by a valine (Val) residue, and consequently the normal hemoglobin is replaced by sickle hemoglobin S (HbS) [17]. Under reduced-oxygen conditions, HbS is prone to polymerization, which leads to sickling, hemolysis and aggregation of red blood cells and adhesion of white blood cells in the microvascular system, and eventually leads to vascular occlusion. This process results in various serious sequelae. The main clinical manifestations of SCD are chronic hemolytic anemia, predisposition to infection, recurrent pain crises, and tissue and organ damage caused by chronic ischemia. Primary thrombocytopenia (PT) is a kind of myeloproliferative disease (MPD) characterized by a negative Ph chromosome. The JAK2 gene plays a key role in growth factor signalling in human hematopoietic cells.
The JAK2 V617F mutation is very important for the study of the pathogenesis of MPNs [18]. The mutation occurs at the 1849th position of the gene: the original guanine is replaced by thymine, which leads to the missense coding of valine into phenylalanine [19]. A series of mechanisms then leads to an over-sensitivity of hematopoietic cells to growth factors, resulting in abnormal cell proliferation. Early, accurate, simple and rapid identification of these single-base mutations is of particular importance for understanding the pathogeny and enabling early therapy of the corresponding diseases. To that end, a new detection method for single-base mutations has been developed in this work. Terahertz spectroscopy has been maturing into a versatile tool for scientific research in recent years. Its band lies between the microwave and infrared regions of the electromagnetic spectrum, with an approximate frequency range of 0.1 to 10 THz. Terahertz radiation causes no known damage to biological systems at moderate power levels [20–22], and it can provide conformational information of biological molecules closely related to biological functions in cells, which is difficult to obtain by other optical, X-ray and nuclear magnetic resonance spectroscopy techniques [23]. However, applications of terahertz spectroscopy in biomedicine have long faced a bottleneck resulting from the strong absorption of terahertz waves by water, which overwhelms most of the terahertz spectral characteristics of biomolecules in solution, even though biomolecules reveal their conformational information only in solution.
In order to overcome this bottleneck, one effective way is to confine the active biomolecules in a micro-nano structure, which can not only reduce the strong terahertz absorption of water in solution [24,25], but also more closely mimic the conformation of biomolecules confined in the cell, so as to enhance the THz signal and improve the detection accuracy and sensitivity [26–28]. In this study, an attenuated total reflection (ATR) microfluidic cell was used as the loading device for liquid samples. Based on the microfluidic structure, the ATR mode is used in place of the transmission mode; this removes the influence of sample thickness on the subsequent calculations and eliminates standing-wave interference. In this work, oligonucleotides associated with sickle cell anemia and thrombocytopenia, both caused by single-base variations, are selected as the research targets. THz-TDS combined with ATR microfluidic detection is applied to this biomedical diagnosis research, which confirms the potential value of THz-TDS in clinical disease detection. 2. Experimental methods 2.1 Experimental preparation 2.1.1 Design and preparation of oligonucleotides The two oligonucleotides (Normal Hemoglobin, Sickle Cell Anemia) with a 15 nt target single-base mutation, and the other two (JAK2 Wild, JAK2 V617F Mutant) with a 39 nt target single-base mutation (Table 1), were synthesized and then purified by HPLC (high-performance liquid chromatography) at Sangon Biotech Co., Ltd. (Shanghai, China). They were dissolved separately in a TE buffer at pH 7.2 at concentrations of 5 µg/µL and 0.5 µg/µL. Table 1. Sequences of oligonucleotides investigated in this study 2.1.2 Design and preparation of microstructures A liquid cell was purchased from Hellma UK LTD (Order Number: 121-0.10-40) as a liquid sample cell for THz measurements of the buffer and the four DNA oligonucleotide solutions. The cell has two quartz windows separated by a depth of 100 µm. The inlet and outlet holes are at the edge of the circle.
The inner diameter of the cell is 13 mm. The outer ring is used to store excess liquid, as shown in Fig. 1(a). Before each THz spectroscopy measurement, the liquid sample was gently injected into the circular reservoir through the inlet hole by pipette. The reservoir was fully immersed by the liquid sample and the excess liquid sample flowed into the outer loop. The volume of the reservoir is around 14.0 µL. Fig. 1. Schematic diagram of (a) conventional liquid cell, (b) terahertz ATR microfluidic cell and (c) its cover. Attenuated total reflection (ATR) gives access to the interaction between the evanescent THz wave and the measured object on the surface of the prism where total internal reflection occurs. The evanescent wave propagates along the tangent direction of the interface and its amplitude decreases exponentially with the distance away from the interface. In the terahertz band, the penetration depth of the evanescent wave (the penetration distance at which the amplitude attenuates to 1/e of the incident amplitude) is generally tens of microns. Therefore, when the sample to be measured is placed on the surface of the ATR prism, the evanescent wave transmits into the bottom layer of the sample through the prism, so the emitted terahertz signal will mainly reflect the physicochemical properties of the bottom layer of the sample in the THz band (as shown in Fig. 1(b)). For the same detection instrument, the penetration depth of the evanescent wave is related only to the refractive indices of the prism and the liquid sample and to the THz wavelength in the sample, but not to the thickness of the sample. This effectively circumvents the problem faced by the transmission detection method, which needs to accurately control and measure the sample thickness. Furthermore, the THz-ATR scheme presented here also improves the detection accuracy and simplifies the pre-processing of the sample.
At the same time, such a reflection-based detection setup does not suffer from standing-wave interference, which simplifies the post-processing of the data. The prism material is selected based on its high transmission and refraction characteristics in the terahertz band. At present, the power of terahertz sources is weak and the spot of the terahertz beam is relatively large. In order to couple as much radiation energy into and out of the prism as possible for a given THz band, an appropriate prism thickness should be selected. Then, according to the dielectric properties of the samples on the prism surface, the critical angle for total internal reflection is calculated to determine the bevel angle of the prism. The sample cell is divided into two parts: one is the bearing cell on the prism surface, which is integrated with the prism, and the other is the cover, as shown in Fig. 1(b). The cover of the sample cell is made of PDMS. A hexagonal groove with a depth of 200 µm is engraved on the inner side of the PDMS. The sample inlet and outlet are set at two opposite corners of the hexagonal groove, as shown in Fig. 1(c). The liquid sample was gently injected into the hexagonal groove through the inlet hole by a syringe connected to a plastic pipe. The groove was fully immersed by the liquid sample and the excess liquid sample flowed into the outlet pipe. 2.2 THz spectroscopy measurement A commercial THz-TDS system (Tera K15, Menlo Systems GmbH, Munich, Germany) in the transmission mode was utilized to measure the THz spectra of the DNA oligonucleotide solutions in the liquid cell and the THz ATR microfluidic cell. Briefly, a femtosecond laser with a center wavelength of 1560 nm, a repetition rate of 100 MHz and a pulse width below 90 fs was split into a pump beam and a probe beam.
The pump beam was incident on a biased photoconductive (PC) antenna to emit a THz pulse that was pre-collimated by a high-resistivity silicon lens attached behind the PC antenna. The pulse was then collimated by a polymethylpentene (TPX) lens with an effective focal length of about 50 mm, and focused by another TPX lens. The THz pulse transmitted through the sample was detected via a process nearly the reverse of the THz emission process. An optical delay line in the probe optical path was utilized to sample the THz pulse with a time interval of 33.3 fs, and the sampled data were used to rebuild the THz time-domain pulse signal. A relative humidity of 3% was maintained by a purge of nitrogen gas during measurements. 2.3 Calculation of absorption coefficient According to the Beer-Lambert law, the absorption coefficient of the oligonucleotide solution sample, after subtracting the contribution from the buffer solution, was calculated as (1)$$\mathrm{\Delta }\alpha = \frac{{\ln \left( {\frac{{{I_{buff}}}}{{{I_s}}}} \right)}}{d}.$$ where ${I_{buff}}$ and ${I_s}$ are the power transmissions of the buffer solution and the oligonucleotide solution, respectively, and d is the thickness of the liquid cell ($d$ = 0.1 mm). The calculation of the THz absorption coefficient of DNA oligonucleotide solutions in the ATR microfluidic cell proceeds as follows. If the electric field amplitude of the incident terahertz wave is ${\tilde{E}_{\textrm{in}}}$, then according to the law of reflection (2)$$\frac{{{{\tilde{E}}_{\textrm{sam}}}}}{{{{\tilde{E}}_{\textrm{ref}}}}} = \left( {\frac{{{{\tilde{E}}_{\textrm{sam}}}}}{{{{\tilde{E}}_{\textrm{in}}}}}} \right)\left( {\frac{{{{\tilde{E}}_{\textrm{in}}}}}{{{{\tilde{E}}_{\textrm{ref}}}}}} \right) = \frac{{{{\tilde{r}}_{\textrm{sam}}}}}{{{{\tilde{r}}_{\textrm{ref}}}}}.$$ where ${\tilde{r}_{\textrm{ref}}}$ is the reflection coefficient when the liquid sample cell is empty and ${\tilde{r}_{\textrm{sam}}}$ is the reflection coefficient after the sample is injected.
According to Fresnel's law of reflection, the reflection coefficients ${\tilde{r}_{12}}$ and ${\tilde{r}_{23}}$ of the silicon-microfluidics interface and the microfluidics-PDMS interface under P polarization can be written down. The total reflection coefficient $\tilde{r}$ of the terahertz wave incident on the ATR microfluidic cell is (3)$$\tilde{r} = \frac{{{{\tilde{r}}_{12}} + {{\tilde{r}}_{23}}\textrm{exp}\left( {i\frac{{4\pi d}}{\lambda }\sqrt {\tilde{\varepsilon } - {\varepsilon_{\textrm{Si}}}{{\sin }^2}\theta } } \right)}}{{1 + {{\tilde{r}}_{12}}{{\tilde{r}}_{23}}\textrm{exp}\left( {i\frac{{4\pi d}}{\lambda }\sqrt {\tilde{\varepsilon } - {\varepsilon_{\textrm{Si}}}{{\sin }^2}\theta } } \right)}}.$$ where d is the microfluidic channel depth, ${\varepsilon _{\textrm{Si}}}$ is the dielectric constant of the silicon prism, and $\tilde{\varepsilon }$ is the complex dielectric constant of the sample on the total reflection surface. When the sample is not injected into the microfluidic cell, $\tilde{r} = {\tilde{r}_{\textrm{ref}}}$; when the sample fills the microfluidic cell, $\tilde{r} = {\tilde{r}_{\textrm{sam}}}$. The complex permittivity of the liquid sample can be obtained by combining Eqs. (2) and (3). Furthermore, the absorption coefficient of the sample is obtained as (4)$$\mathrm{\alpha }(\nu )= \frac{4}{{\textrm{c}\mathrm{\pi }}}\nu \varepsilon ^{\prime\prime}.$$ where $\nu $ is the frequency, $\textrm{c}$ is the speed of light, and $\varepsilon ^{\prime\prime}$ is the imaginary part of the complex permittivity. 3.1 THz spectroscopy analysis of oligonucleotides based on the conventional liquid cell The THz spectra of the four oligonucleotides (normal hemoglobin gene, sickle cell anemia gene, JAK2 gene wild type, and JAK2 V617F gene mutation) in TE buffer solution loaded into the liquid cell (Fig. 1(a)) were measured using the THz-TDS system.
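Equation (3) can be evaluated numerically. The sketch below implements the three-layer (silicon prism / sample / PDMS cover) p-polarized reflection coefficient; the explicit Fresnel coefficients are the textbook forms, which the paper does not spell out, and the permittivities, channel depth, wavelength and 45° incidence angle are illustrative assumptions, not the paper's values:

```python
import numpy as np

def fresnel_p(eps_i, eps_j, kzi, kzj):
    # Textbook p-polarization Fresnel reflection coefficient between media i, j
    return (eps_j * kzi - eps_i * kzj) / (eps_j * kzi + eps_i * kzj)

def atr_reflection(eps_sample, lam, d, theta, eps_si=11.7, eps_pdms=2.35):
    """Total reflection coefficient of the Si/sample/PDMS stack, Eq. (3).

    lam and d share the same length unit; theta is the internal incidence
    angle in radians. eps_si and eps_pdms are illustrative values.
    """
    kx2 = eps_si * np.sin(theta) ** 2                 # conserved tangential part
    kz = lambda eps: (2 * np.pi / lam) * np.sqrt(eps - kx2 + 0j)
    kz1, kz2, kz3 = kz(eps_si), kz(eps_sample), kz(eps_pdms)
    r12 = fresnel_p(eps_si, eps_sample, kz1, kz2)
    r23 = fresnel_p(eps_sample, eps_pdms, kz2, kz3)
    phase = np.exp(1j * 2 * kz2 * d)                  # exp(i*4*pi*d/lam*sqrt(...))
    return (r12 + r23 * phase) / (1 + r12 * r23 * phase)

# 1 THz (vacuum wavelength ~300 um), 200 um deep channel, 45 deg incidence,
# and a lossy water-like sample permittivity (all illustrative)
r = atr_reflection(eps_sample=4.2 + 2.0j, lam=300.0, d=200.0, theta=np.pi / 4)
print(abs(r))
```

The `+ 0j` forces a complex square root so that evanescent (negative-argument) branches are handled correctly; for a lossy sample `|r| < 1`, and inverting the measured amplitude and phase against Eq. (3) recovers the sample permittivity, from which Eq. (4) gives the absorption coefficient.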
Each oligonucleotide solution sample was measured three times, on different days, to minimize fluctuations in instrument performance. Before each measurement, the commercial sample cell was cleaned with anhydrous ethanol and deionized water and then dried under nitrogen, three times. The absorption coefficient of each sample in the frequency range of 0.3-1.4 THz is shown in Fig. 2. From this figure, we can see that there is no characteristic absorption peak for any sample, and that there are significant differences in THz absorption coefficients between the normal hemoglobin gene and sickle cell anemia gene solutions at the 5 µg/µL concentration. In this frequency band, we can clearly see that the difference in absorption coefficients between the normal hemoglobin gene and the sickle cell anemia gene is greater than that between the wild type of the JAK2 gene and the mutant type of the JAK2 V617F gene. There is some overlap of the error bars in Fig. 2(b), but the oligonucleotides of healthy genes and mutant genes from sickle cell anemia can nevertheless be differentiated from the averaged absorption intensity over all frequencies. Fig. 2. Terahertz absorption spectra of two short chain oligonucleotides (a) and two long chain oligonucleotides (b) at the concentration of 5.0 µg/µL, based on the conventional liquid sample cell. In addition, to demonstrate the system's capability for trace detection, the samples were diluted 10 times and measured again. At this concentration of 0.5 µg/µL, the absorption coefficients of the four oligonucleotide solutions in the frequency range of 0.3-1.4 THz were obtained, and are shown in Fig. 3. It was found that the differences in THz absorption between the four oligonucleotide solutions decreased and were no longer significant.
It can be seen from the figure that the difference in the absorption coefficient between the normal hemoglobin gene and the sickle cell anemia gene is close to that between the wild type and the mutant type of the JAK2 V617F gene at 0.5 µg/µL. It is worth noting that the difference in absorption coefficient between the four oligonucleotide solutions is not prominent, due to the strong water absorption. From Figs. 2 and 3 it can be concluded that the absorption coefficients at low concentration (0.5 µg/µL) and high concentration (5 µg/µL) show the same trend for the four oligonucleotide chains. Small differences in the absorption coefficients of the same oligonucleotide chain at different concentrations, and of different oligonucleotides at the same concentration, can be observed in the insets of Figs. 2 and 3. The absorption coefficient itself is believed to be a manifestation of the interplay and competition between hydration-layer absorption and free-water absorption [29–31]. It is speculated that the differing absorption coefficients are indicative of different hydration layers around the oligonucleotides. The differences in the THz absorption coefficient between concentrations and between DNA sequences may be related to the number of hydrogen bonds formed around the oligonucleotides. Long chain oligonucleotides are more likely to form secondary structures in solution at low concentration, and a thicker hydration layer is more likely to gather around the secondary-structure conformation, which may give rise to stronger terahertz absorption [32–35]. This is consistent with our previous experimental results [36]. 3.2 THz spectroscopic analysis of oligonucleotides based on the THz ATR microfluidic cell Based on the conventional THz liquid cell, the recognition effect for the two groups of healthy and mutated genes is not ideal at the lower (0.5 µg/µL) concentration.
Because the THz liquid cell serves as the carrier of the DNA molecular solution, the transmission signal is heavily affected by the sample thickness. Hence the THz attenuated total reflection microfluidic cell is used as the carrier of the DNA molecular solution in the experiment. With the same detection instrument, the evanescent-wave penetration depth is related only to the sample refractive index and the THz wavelength, not to the sample thickness. This avoids the need of the transmission detection method to control and measure the sample thickness accurately, and simplifies the sample pre-treatment process. At the same time, such a reflection detection scheme does not introduce standing-wave interference, which simplifies the data post-processing as well. THz spectra of the four oligonucleotide samples (normal hemoglobin gene, sickle cell anemia gene, JAK2 gene wild type, JAK2 V617F gene mutation) in TE buffer solution loaded into the THz ATR microfluidic cell were measured using the THz-TDS system. Each oligonucleotide solution sample was measured three times, on different days. In order to ensure the stability of the system, the bearing cell on the prism remains mounted in the light path; when the sample is replaced, only the cover is removed. Before each measurement, the bearing cell and the cover of the self-developed ATR microfluidic cell were cleaned separately. The bearing cell was cleaned alternately with anhydrous ethanol and deionized water, and then dried under nitrogen, three times. The cover was cleaned ultrasonically in baths of anhydrous ethanol and deionized water for 3 min each, and then dried under nitrogen. The silicon surface of the bearing cell adheres to the PDMS after cleaning. Four oligonucleotides at a concentration of 0.5 µg/µL were used for detection.
The absorption coefficients of the four oligonucleotide solutions in the frequency range of 0.3–1.4 THz are shown in Fig. 4. There were some differences in THz absorption between the normal hemoglobin gene and sickle cell anemia gene solutions. It can be seen from the figure that, at 0.5 µg/µL, the difference in the absorption coefficient between the normal hemoglobin gene and the sickle cell anemia gene is greater than that between the wild type of the JAK2 gene and the mutant JAK2 V617F gene, and the difference between the normal hemoglobin gene and the sickle cell anemia gene with 15 nt is more prominent than that with 39 nt, with less error-bar overlap. Fig. 4. Terahertz absorption spectra of two short-chain oligonucleotides (a) and two long-chain oligonucleotides (b) at the concentration of 0.5 µg/µL based on the THz ATR microfluidic cell. To ascertain that the difference is statistically meaningful and can be used to discriminate the four samples, a principal component analysis (PCA) was performed on more than 30 measurements for each oligonucleotide solution to classify the four groups of oligonucleotides. PCA is an unsupervised classification method that is often applied to spectral data, where the variables are numerous and often correlated. PCA reduces the dimension of the data as far as possible without losing important information of the original data [37–39]. Figure 5 shows the three-dimensional view of the THz spectral principal component analysis of these four single-base-mutation DNA oligonucleotides. The cumulative percentage variance of the first three principal components is 99.23%: PC1 explains 95.85% of the variance, PC2 2.77%, and PC3 0.61%. The three-dimensional space spanned by the first three PC scores thus retains 99.23% of the information in the original THz data, covering an overwhelming majority of the useful information. Fig. 5.
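The PCA step described above can be sketched as follows. The spectra here are synthetic stand-ins for the measured THz absorption spectra; the class count, scan count, frequency grid, and noise level are all assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for the data set: 4 classes x 30 scans x 200 frequency
# points, each class differing by a small class-specific scaling plus noise.
base = np.sin(np.linspace(0, 3, 200))
X = np.vstack([base * (1 + 0.05 * k) + 0.01 * rng.standard_normal((30, 200))
               for k in range(4)])

# Project onto the first three principal components, as in the 3-D score plot.
pca = PCA(n_components=3)
scores = pca.fit_transform(X)                     # (120, 3) PC scores
cum_var = pca.explained_variance_ratio_.sum()     # cumulative variance of PC1-PC3
```

Because the spectral channels are highly correlated, a handful of components captures nearly all of the variance, mirroring the 99.23% figure reported for the first three PCs.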
Principal component analysis of THz spectra of four DNA oligonucleotides (0.5 µg/µL) based on the THz ATR microfluidic cell. (NH: normal hemoglobin gene, SCA: sickle cell anemia gene, JW: JAK2 gene wild type, JVM: JAK2 V617F gene mutation). Based on the principal component analysis, quadratic discriminant analysis (QDA) was employed to classify these four oligonucleotides. QDA is a well-known discriminant analysis approach that has been used successfully for classification in various fields [40,41]. The quality of the QDA classification depends on the number of principal components retained: more principal components generally result in a higher recognition rate. In the present case, the optimal QDA model was obtained when 9 PCs were employed. Figure 6 shows the best identification results of the QDA model with 9 principal components. The classification rates by cross-validation were 100% in the calibration set and 90.3% in the prediction set. All 80 samples were correctly classified in the calibration set; in the prediction set, 47 of 52 samples were correctly classified, and 5 samples from the long-chain oligonucleotides were misclassified, which affirms the understanding that the longer the chain, the higher the conformational similarity of oligonucleotides with a single-base mutation in solution. Fig. 6. Identification results of the QDA model with 9 PCs for four different DNA oligonucleotides in the calibration set (a) and prediction set (b). (NH: normal hemoglobin gene, SCA: sickle cell anemia gene, JW: JAK2 gene wild type, JVM: JAK2 V617F gene mutation). From the cluster results of the first three principal components and the identification results of QDA, we can see that the recognition efficacy for the two groups of healthy and mutated genes using the ATR microfluidic cell is better than that for oligonucleotides of the same concentration in the conventional liquid cell.
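A minimal sketch of the PCA-then-QDA pipeline, again on synthetic stand-in spectra. The sample sizes, noise level, and train/test split are assumptions for illustration, not the paper's 80/52 calibration/prediction partition.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic 4-class spectra: 33 scans per class, 200 frequency points.
base = np.sin(np.linspace(0, 3, 200))
X = np.vstack([base * (1 + 0.05 * k) + 0.01 * rng.standard_normal((33, 200))
               for k in range(4)])
y = np.repeat(np.arange(4), 33)

# The first 9 PC scores feed the discriminant model, as in the paper's setup.
scores = PCA(n_components=9).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(
    scores, y, test_size=0.4, random_state=0, stratify=y)

qda = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)
accuracy = qda.score(X_te, y_te)   # held-out recognition rate
```

The dimensionality reduction matters here: QDA estimates one covariance matrix per class, so fitting it on a few PC scores rather than on hundreds of raw frequency channels keeps those estimates well conditioned.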
The results also show that the recognition rate for short single-base-mutation oligonucleotides is higher than for long ones. It is worth noting that the detection results using the THz ATR microfluidic cell and the conventional liquid cell are otherwise quite consistent. It can also be observed from Fig. 4 that the absorption coefficients of long-chain oligonucleotides (39 nt) were higher than those of short-chain oligonucleotides (15 nt) at the same concentration and frequency, which is again consistent with the detection results using the conventional liquid cell. Furthermore, the recognition efficacy of the ATR microfluidic cell was markedly better than that of the conventional liquid cell for low-concentration trace substances, because the former is not affected by the uncertainty of sample-thickness determination or by standing-wave interference.

4. Conclusions

Detection of single-base mutations in DNA molecules is an important part of the analysis of diseases of genetic origin. However, differentiation of DNA molecules with a single-base mutation is notoriously difficult, because the corresponding discrepancies in physical, chemical, and other detectable properties are usually minuscule, and the few available detection methods are convoluted and time-consuming. As THz detection of biomolecular solutions with conventional liquid cells is strongly affected by the sample thickness, we proposed and demonstrated a self-developed terahertz attenuated total reflection microfluidic cell to detect single-base-mutation DNA molecules (normal hemoglobin and sickle cell anemia genes with 15 nt, JAK2 wild type and JAK2 V617F mutants with 39 nt) using THz time-domain spectroscopy. The proposed THz biosensor technology has high sensitivity: only a small amount of sample is needed to differentiate DNA molecular solutions with different mutations and lengths.
The THz absorption spectra of DNA oligonucleotides with single-base mutation were obtained, and their differences were observed and analyzed by the principal component analysis and quadratic discriminant analysis methods. The recognition rate of short oligonucleotides with single-base mutation is higher than that of long ones. Our results indicate that DNA oligonucleotides with a single-base mutation can be clearly identified by THz spectroscopy employing our attenuated total reflection microfluidic cell. Funding: National Natural Science Foundation of China (61771138, 61775213, 61875196). The authors declare no conflicts of interest. References: 1. A. Cistaro, L. Cassalia, C. Ferrara, C. Atzori, D. Vai, N. Quartuccio, P. Fania, G. P. Vaudano, and D. Imperiale, "Brain 18F-FDG PET/CT findings in a case of genetic Creutzfeldt-Jakob disease due to V203I heterozygous mutation in the PRNP gene," J. Neurol. 264(1), 170–173 (2017). [CrossRef] 2. G. M. Milley, E. T. Varga, Z. Grosz, B. Bereznai, Z. Aranyi, J. Boczan, P. Dioszeghy, B. Kalman, A. Gal, and M. J. Molnar, "Three novel mutations and genetic epidemiology analysis of the Gap Junction Beta 1 (GJB1) gene among Hungarian Charcot-Marie-Tooth disease patients," Neuromuscular Disord. 26(10), 706–711 (2016). [CrossRef] 3. C. Kraus, J. Hoyer, G. Vasileiou, M. Wunderle, M. P. Lux, P. A. Fasching, M. Krumbiegel, S. Uebe, M. Reuter, M. W. Beckmann, and A. Reis, "Gene panel sequencing in familial breast/ovarian cancer patients identifies multiple novel mutations also in genes others than BRCA1/2," Int. J. Cancer 140(1), 95–102 (2017). [CrossRef] 4. A. Rahman, B. Stanley, and A. K. Rahman, "Ultrasensitive label-free detection and quantitation of DNA hybridization via terahertz spectrometry," Proc. SPIE 7568, 756810 (2010). [CrossRef] 5. J. Wang, "From DNA biosensors to gene chips," Nucleic Acids Res. 28(16), 3011–3016 (2000). [CrossRef] 6. M. B. Wabuyele, H. Farquar, W. Stryjewski, R. P. Hammer, S. A. Soper, Y. W. Cheng, and F.
Barany, "Approaching real-time molecular diagnostics: Single-pair fluorescence resonance energy transfer (spFRET) detection for the analysis of low abundant point mutations in K-ras oncogenes," J. Am. Chem. Soc. 125(23), 6937–6945 (2003). [CrossRef] 7. R. Sachidanandam, D. Weissman, S. C. Schmidt, J. M. Kakol, L. D. Stein, G. Marth, S. Sherry, J. C. Mullikin, B. J. Mortimore, D. L. Willey, S. E. Hunt, C. G. Cole, P. C. Coggill, C. M. Rice, Z. M. Ning, J. Rogers, D. R. Bentley, P. Y. Kwok, E. R. Mardis, R. T. Yeh, B. Schultz, L. Cook, R. Davenport, M. Dante, L. Fulton, L. Hillier, R. H. Waterston, J. D. Mcpherson, B. Gilman, S. Schaffner, W. J. Van Etten, D. Reich, J. Higgins, M. J. Daly, B. Blumenstiel, J. Baldwin, N. S. Stange-Thomann, M. C. Zody, L. Linton, E. S. Lander, D. Altshuler, and I. S. M. W. Grp, "A map of human genome sequence variation containing 1.42 million single nucleotide polymorphisms," Nature 409(6822), 928–933 (2001). [CrossRef] 8. Y. Wang, C. Li, X. Li, Y. Li, and H. B. Kraatz, "Unlabeled hairpin-DNA probe for the detection of single-nucleotide mismatches by electrochemical impedance spectroscopy," Anal. Chem. (Washington, DC, U. S.) 80(6), 2255–2260 (2008). [CrossRef] 9. M. L. Larramendy, W. El-Rifai, and S. Knuutila, "Comparison of fluorescein isothiocyanate- and Texas red-conjugated nucleotides for direct labeling in comparative genomic hybridization," Cytometry 31(3), 174–179 (1998). [CrossRef] 10. H. Ozaki and L. W. Mclaughlin, "The Estimation of Distances between Specific Backbone-Labeled Sites in DNA Using Fluorescence Resonance Energy-Transfer," Nucleic Acids Res. 20(19), 5205–5214 (1992). [CrossRef] 11. Z. R. Zhu and A. S. Waggoner, "Molecular mechanism controlling the incorporation of fluorescent nucleotides into DNA by PCR," Cytometry 28(3), 206–211 (1997). [CrossRef] 12. G. D. Liu and Y. H. Lin, "Electrochemical quantification of single-nucleotide polymorphisms using nanoparticle probes," J. Am. Chem. Soc. 
129(34), 10394–10401 (2007). [CrossRef] 13. H. Cheon, H. J. Yang, S. H. Lee, Y. A. Kim, and J. H. Son, "Terahertz molecular resonance of cancer DNA," Sci. Rep. 6(1), 37103 (2016). [CrossRef] 14. X. J. Qin, S. X. Xu, L. Deng, R. F. Huang, and X. F. Zhang, "Photocatalytic electrosensor for label-free and ultrasensitive detection of BRCA1 gene," Biosens. Bioelectron. 85, 957–963 (2016). [CrossRef] 15. S. Tyagi, D. P. Bratu, and F. R. Kramer, "Multicolor molecular beacons for allele discrimination," Nat. Biotechnol. 16(1), 49–53 (1998). [CrossRef] 16. J. A. Warrington, N. A. Shah, X. Chen, M. Janis, C. Liu, S. Kondapalli, V. Reyes, M. P. Savage, Z. Zhang, R. Watts, M. Deguzman, A. Berno, J. Snyder, and J. Baid, "New developments in high-throughput resequencing and variation detection using high density microarrays," Hum. Mutat. 19(4), 402–409 (2002). [CrossRef] 17. V. M. Ingram, "Gene Mutations in Human Haemoglobin - Chemical Difference Between Normal and Sickle Cell Haemoglobin," Nature 180(4581), 326–328 (1957). [CrossRef] 18. Z. Y. Wu, X. J. Zhang, X. Xu, Y. M. Chen, T. T. Hu, Z. H. Kang, S. B. Li, H. Wang, W. W. Liu, X. C. Ma, and M. Guan, "The mutation profile of JAK2 and CALR in Chinese Han patients with Philadelphia chromosome-negative myeloproliferative neoplasms," J. Hematol. Oncol. 7(1), 48 (2014). [CrossRef] 19. S. Y. Kim, K. Im, S. N. Park, J. Kwon, J. A. Kim, and D. S. Lee, "CALR, JAK2, and MPL Mutation Profiles in Patients With Four Different Subtypes of Myeloproliferative Neoplasms Primary Myelofibrosis, Essential Thrombocythemia, Polycythemia Vera, and Myeloproliferative Neoplasm, Unclassifiable," Am. J. Clin. Pathol. 143(5), 635–644 (2015). [CrossRef] 20. S. W. Smye, J. M. Chamberlain, A. J. Fitzgerald, and E. Berry, "The interaction between Terahertz radiation and biological tissue," Phys. Med. Biol. 46(9), R101–R112 (2001). [CrossRef] 21. X. Yang, D. S. Wei, S. H. Yan, Y. P. Liu, S. Yu, M. K. Zhang, Z. B. Yang, X. Y. Zhu, Q. Huang, H. L. Cui, and W. 
L. Fu, "Rapid and label-free detection and assessment of bacteria by terahertz time-domain spectroscopy," J. Biophoton. 9(10), 1050–1058 (2016). [CrossRef] 22. A. N. Bogomazova, E. M. Vassina, T. N. Goryachkovskaya, V. M. Popik, A. S. Sokolov, N. A. Kolchanov, M. A. Lagarkova, S. L. Kiselev, and S. E. Peltek, "No DNA damage response and negligible genome-wide transcriptional changes in human embryonic stem cells exposed to terahertz radiation," Sci. Rep. 5(1), 7749 (2015). [CrossRef] 23. X. C. Zhang, "Terahertz wave imaging: horizons and hurdles," Phys. Med. Biol. 47(21), 3667–3677 (2002). [CrossRef] 24. A. J. Baragwanath, G. P. Swift, D. Dai, A. J. Gallant, and J. M. Chamberlain, "Silicon based microfluidic cell for terahertz frequencies," J. Appl. Phys. (Melville, NY, U. S.) 108(1), 013102 (2010). [CrossRef] 25. P. A. George, W. Hui, F. Rana, B. G. Hawkins, A. E. Smith, and B. J. Kirby, "Microfluidic devices for terahertz spectroscopy of biomolecules," Opt. Express 16(3), 1577–1582 (2008). [CrossRef] 26. M. K. Zhang, Z. B. Yang, M. J. Tang, S. H. Yan, D. S. Wei, H. L. Cui, and C. L. Du, "The Properties, Preparation Approaches and Uses of Microfluidic Channels for Terahertz Absorption Signatures Detection in Aqueous," Int Conf Manip Manu, 362–365 (2016). 27. E. R. Brown, E. A. Mendoza, Y. Kuznetsova, A. Neumann, and S. R. J. Brueck, "THz Signatures of DNA in Nanochannels under Electrophoretic Control," IEEE Sensors, 153–155 (2013). 28. E. R. Brown, E. A. Mendoza, D. Y. Xia, and S. R. J. Brueck, "Narrow THz Spectral Signatures Through an RNA Solution in Nanofluidic Channels," IEEE Sens. J. 10(3), 755–759 (2010). [CrossRef] 29. M. L. Li, T. Y. Chang, D. S. Wei, M. J. Tang, S. H. Yan, C. L. Du, and H. L. Cui, "Label-free detection of anti-estrogen receptor alpha and its binding with estrogen receptor peptide alpha by terahertz spectroscopy," RSC Adv. 7(39), 24338–24344 (2017). [CrossRef] 30. S. Ebbinghaus, S. J. Kim, M. Heyden, X. Yu, U. Heugen, M. Gruebele, D. M. 
Leitner, and M. Havenith, "An extended dynamical hydration shell around proteins," Proc. Natl. Acad. Sci. U. S. A. 104(52), 20749–20752 (2007). [CrossRef] 31. B. Born, S. J. Kim, S. Ebbinghaus, M. Gruebele, and M. Havenith, "The terahertz dance of water with the proteins: the effect of protein flexibility on the dynamical hydration shell of ubiquitin," Faraday Discuss. 141, 161–173 (2009). [CrossRef] 32. U. Heugen, G. Schwaab, E. Brundermann, M. Heyden, X. Yu, D. M. Leitner, and M. Havenith, "Solute-induced retardation of water dynamics probed directly by terahertz spectroscopy," Proc. Natl. Acad. Sci. U. S. A. 103(33), 12301–12306 (2006). [CrossRef] 33. D. M. Leitner, M. Gruebele, and M. Havenith, "Solvation dynamics of biomolecules: modeling and terahertz experiments," HFSP J. 2(6), 314–323 (2008). [CrossRef] 34. N. Y. Tan, R. Li, P. Braeuer, C. D'agostino, L. F. Gladden, and J. A. Zeitler, "Probing hydrogen-bonding in binary liquid mixtures with terahertz time-domain spectroscopy: a comparison of Debye and absorption analysis," Phys. Chem. Chem. Phys. 17(8), 5999–6008 (2015). [CrossRef] 35. Y. Xu and M. Havenith, "Perspective: Watching low-frequency vibrations of water in biomolecular recognition by THz spectroscopy," J. Chem. Phys. 143(17), 170901 (2015). [CrossRef] 36. M. J. Tang, M. K. Zhang, S. H. Yan, L. P. Xia, Z. B. Yang, C. L. Du, H. L. Cui, and D. S. Wei, "Detection of DNA oligonucleotides with base mutations by terahertz spectroscopy and microstructures," PLoS One 13(1), e0191515 (2018). [CrossRef] 37. S. Nakajima, H. Hoshina, M. Yamashita, C. Otani, and N. Miyoshi, "Terahertz imaging diagnostics of cancer tissues with a chemometrics technique," Appl. Phys. Lett. 90(4), 041102 (2007). [CrossRef] 38. G. S. Geng, G. B. Dai, D. D. Li, S. L. Zhou, Z. X. Li, Z. B. Yang, Y. H. Xu, J. G. Han, T. Y. Chang, H. L. Cui, and H. B. Wang, "Imaging brain tissue slices with terahertz near-field microscopy," Biotechnol. Prog. 35(2), e2741 (2019). [CrossRef] 39. L. F. 
Xiao, M. J. Tang, Q. F. Li, and A. H. Zhou, "Non-invasive detection of biomechanical and biochemical responses of human lung cells to short time chemotherapy exposure using AFM and confocal Raman spectroscopy," Anal. Methods 5(4), 874–879 (2013). [CrossRef] 40. N. Stone, C. Kendall, J. Smith, P. Crow, and H. Barr, "Raman spectroscopy for identification of epithelial cancers," Faraday Discuss. 126, 141–157 (2004). [CrossRef] 41. W. H. Shao, Y. J. Li, S. F. Diao, J. M. Jiang, and R. X. Dong, "Rapid classification of Chinese quince (Chaenomeles speciosa Nakai) fruit provenance by near-infrared spectroscopy and multivariate calibration," Anal. Bioanal. Chem. 409(1), 115–120 (2017). [CrossRef]
Equations on this page:

(1) \(\Delta\alpha = \dfrac{\ln(I_{\mathrm{buff}}/I_{\mathrm{s}})}{d}\)

(2) \(\dfrac{\tilde{E}_{\mathrm{sam}}}{\tilde{E}_{\mathrm{ref}}} = \left(\dfrac{\tilde{E}_{\mathrm{sam}}}{\tilde{E}_{\mathrm{in}}}\right)\left(\dfrac{\tilde{E}_{\mathrm{in}}}{\tilde{E}_{\mathrm{ref}}}\right) = \dfrac{\tilde{r}_{\mathrm{sam}}}{\tilde{r}_{\mathrm{ref}}}\)

(3) \(\tilde{r} = \dfrac{\tilde{r}_{12} + \tilde{r}_{23}\exp\!\left(i\,\frac{4\pi d}{\lambda}\sqrt{\tilde{\varepsilon} - \varepsilon_{\mathrm{Si}}\sin^{2}\theta}\right)}{1 + \tilde{r}_{12}\,\tilde{r}_{23}\exp\!\left(i\,\frac{4\pi d}{\lambda}\sqrt{\tilde{\varepsilon} - \varepsilon_{\mathrm{Si}}\sin^{2}\theta}\right)}\)

(4) \(\alpha(\nu) = \dfrac{4\pi\nu}{c}\,\varepsilon''\)

Table: Sequences of oligonucleotides investigated in this study

Oligo ID             Sequence (5′→3′)
Normal Hemoglobin    CTGACTCCTGAGGAG
Sickle Cell Anemia   CTGACTCCTGTGGAG
JAK2 Wild            TTGGTTTTAAATTATGGAGTCTGTGTCTGTGGAGACGAG
JAK2 V617F Mutant    TTGGTTTTAAATTATGGAGTCTGTTTCTGTGGAGACGAG
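Eq. (1), the differential absorption coefficient relative to the buffer, is straightforward to apply numerically. A small sketch follows; the intensity values and path length are illustrative numbers, not measured data from this experiment.

```python
import numpy as np

def delta_alpha(I_buff, I_sample, d):
    """Differential absorption coefficient from Eq. (1):
    delta_alpha = ln(I_buff / I_sample) / d,
    with d the interaction path length in metres."""
    return np.log(np.asarray(I_buff, dtype=float) /
                  np.asarray(I_sample, dtype=float)) / d

# Illustrative: a sample transmitting 90% of the buffer intensity
# over a 50 um path.
da = delta_alpha(1.0, 0.9, 50e-6)
```

Applied per frequency point, this yields the Δα spectra plotted against frequency in Figs. 2–4.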
An interaction regression model for crop yield prediction

Javad Ansarifar1, Lizhi Wang1 & Sotirios V. Archontoulis2

Scientific Reports volume 11, Article number: 17754 (2021)

Crop yield prediction is crucial for global food security yet notoriously challenging due to the multitudinous factors that jointly determine the yield, including genotype, environment, management, and their complex interactions. Integrating the power of optimization, machine learning, and agronomic insight, we present a new predictive model (referred to as the interaction regression model) for crop yield prediction, which has three salient properties. First, it achieved a relative root mean square error of 8% or less in three Midwest states (Illinois, Indiana, and Iowa) in the US for both corn and soybean yield prediction, outperforming state-of-the-art machine learning algorithms. Second, it identified about a dozen environment by management interactions for corn and soybean yield, some of which are consistent with conventional agronomic knowledge whereas others require additional analysis or experiments to prove or disprove. Third, it quantitatively dissected crop yield into contributions from weather, soil, management, and their interactions, allowing agronomists to pinpoint the factors that favorably or unfavorably affect the yield of a given location under a given weather and management scenario. The most significant contribution of the new prediction model is its capability to produce accurate predictions and explainable insights simultaneously. This was achieved by training the algorithm to select features and interactions that are spatially and temporally robust, balancing prediction accuracy on the training data against generalizability to the test data. Predicting crop yield is crucial to addressing emerging challenges in food security, particularly in an era of global climate change.
Accurate yield predictions not only help farmers make informed economic and management decisions but also support famine prevention efforts. Underlying crop yield prediction is a fundamental research question in plant biology, which is to understand how plant phenotype is determined by genotype (G), environment (E), management (M), and their interactions (G \(\times \) E \(\times \) M)1,2,3,4,5,6. State-of-the-art crop yield prediction methods fall into three main categories: linear models, machine learning models, and crop models, which have complementary strengths and limitations. Linear models are explainable by quantifying the additive effect of each variable, but they often struggle to achieve high prediction accuracy due to the inability to capture the intrinsically nonlinear interactions among G, E, and M variables. Machine learning models have been successfully used for crop yield prediction, including stepwise multiple linear regression7, random forest8, neural networks9,10,11, convolutional neural networks12, recurrent neural networks13, weighted histograms regression14, interaction based model15, and association rule mining and decision tree16. Most of these studies were based on environmental and managerial variables only, due to lack of publicly available genotype data at the state or national scale. Some studies16,17,18,19 explored the relationship between genotype and grain yield from regional yield trials from a plant breeding perspective, which would be hard to scale up to statewide or nationwide predictions. Many machine learning algorithms are scalable to large datasets and have reasonably high prediction accuracy. However, due to the black-box nature of these models, prediction accuracy is sensitive to model structure and parameter calibration, and it can prove difficult to explain why predictions are accurate or inaccurate. 
Crop models are another type of nonlinear models, including APSIM20, DSSAT21,22, RZWQM23, and SWAP/WOFOST24, which build upon the physiological understanding of plant and soil processes to develop biologically meaningful non-linear equations to predict crop yield and other phenotypes. These models provide explicit (albeit complex) explanations of the interactions between traits and environmental conditions in different phases of the crop growth cycle. They also offer biological insights into causes of phenotypic variation25. Nevertheless, the collection of trait measurement data and calibration of model coefficients can be labor intensive and time consuming26,27,28,29, computation speed could be low29, and prediction accuracy may not be as high as some machine learning algorithms. We propose a novel model, the interaction regression model, for crop yield prediction, which attempts to combine the strengths and avoid the limitations of the aforementioned approaches. At the core of this model lies a combinatorial optimization algorithm, which not only selects the most revealing E and M features but also detects their most pronounced interactions; the contributions of these features and interactions to the crop yield are then quantified with a multiple linear regression. To ensure the explainability of the results, we trained our algorithm to find features and interactions that are spatially and temporally robust, which means that they should be consistently predictive of crop yield across all counties in all years. As such, results from this model have the potential to propose biologically and agronomically insightful hypotheses on E \(\times \) M interactions that can be validated experimentally. A similar concept of robust inference model in spatial–temporal models was presented in Santos and Erniel30. A measure of robustness was proposed in Nogueira et al.31, which was based on the number of overlapping features selected using different subsets of training data. 
In our approach, the robustness measure is defined as the average prediction performance in multiple validation datasets at different temporal and spatial spectra. As such, our robustness definition allowed the algorithm to strike a balance between prediction accuracy and generalizability. The proposed model demonstrated notable performance in a comprehensive case study, in which it was compared with eight other machine learning models to predict corn and soybean yield in 293 counties of the states of Illinois, Indiana, and Iowa from 2015 to 2018. Moreover, we tested the model's prediction performance both with and without knowledge of growing-season weather, as well as its temporal and spatial extrapolation performance in unseen counties. The proposed model not only achieved a relative root mean square error (RRMSE) of less than 8% for both corn and soybean in all three states, outperforming all other machine learning models in the case study, but also produced explainable insights. In particular, our model identified 11 E \(\times \) M interactions for corn and 12 for soybean, and also dissected the total yield into contributions from weather, soil, management, and their interactions. To test the generalizability of the model in terms of both temporal and spatial extrapolation, we trained the model using historical data from two states up to 2017 and applied it to predict corn yield in a third state for 2018, and the resulting average RRMSE was less than 10%. Let X denote the set of explanatory (including environment and management) variables and y the crop yield of a given county for a given year. We propose the interaction regression model to describe the relationship between X and y as follows.
$$\begin{aligned} \hat{y}_i =\beta _0 + \sum _{j \in {\mathcal {P}}} X_{i,j} \beta _j + \sum _{m \in \mathcal {M}} b_m Z_{i,m}, \quad \forall i \in \mathcal {N}, \end{aligned}$$ where,\(\mathcal {N}\) is the set of sample observations (one sample per county per year), \(\mathcal {P}\) is the set of explanatory variables, \(\mathcal {M}\) is the set of interactions, \(\hat{y}_i\) is predicted crop yield of sample i, \(\beta _0\) is the intercept of crop yield, \(\beta _j\) is the additive effect of variable j, \(X_{i,j}\) is the explanatory variable j of sample i, \(b_m\) is the effect of interaction m, and \(Z_{i,m}\) is the interaction variable m of sample i. Key to Eq. (1) is to decipher the interaction matrix Z from explanatory variables. We use a kernel-based approach to represent the interactions as $$\begin{aligned} Z_{i,m} = \sum \limits _{k \in \mathcal {K}} \delta _{m,k} K_k(X_i), \end{aligned}$$ where, \(K_k(\cdot )\) is the type k kernel function, \(\mathcal {K}\) is the set of kernel functions that we use to describe nonlinear relationships between explanatory variables and crop yield, and \(\delta _{m,k}\) is a binary variable indicating whether interaction m is best described by the type k kernel (\(\delta _{m,k} = 1\)) or not (\(\delta _{m,k} = 0\)). In order to solve Eq. (1), we propose an approach that consists of three major steps: data pre-processing, robust feature and interaction selection, and linear regression, as illustrated in Fig. 1. Key elements of the three steps are summarized as follows. Illustration of the proposed interaction regression model for crop yield prediction. Step 1 is data pre-processing. In step 2, Algorithms 1 and 2 select robust features and interactions, which are then used in step 3 to predict the crop yield with a multiple linear regression model. 
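The structure of Eqs. (1) and (2) can be sketched in a few lines of Python. The paper's six kernel functions are defined in its Appendix 1 and are not reproduced here; the two kernels below (a product kernel for two-way interactions and a square kernel for self-interactions), as well as all feature indices and coefficients, are illustrative assumptions only:

```python
import numpy as np

# Sketch of Eqs. (1)-(2): y_hat = beta0 + X @ beta + Z @ b, where each column of Z
# is a kernel applied to selected features. The paper's six kernels are in its
# Appendix 1; the two used here are illustrative assumptions.
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.uniform(size=(n, p))              # explanatory variables, scaled to [0, 1]

def product_kernel(X, j, k):              # two-way interaction between features j and k
    return X[:, j] * X[:, k]

def square_kernel(X, j):                  # self-interaction (quadratic effect)
    return X[:, j] ** 2

# Hypothetical set of selected interactions (feature indices are made up)
Z = np.column_stack([product_kernel(X, 0, 1), square_kernel(X, 2)])

# Synthetic yield with known additive and interactive effects plus small noise
y = 1.0 + X @ np.array([2, 0, 1, 0, 0.5]) + Z @ np.array([3.0, -1.5]) + rng.normal(0, 0.01, n)

# Step 3 of the paper: multiple linear regression on the augmented design [1 | X | Z]
design = np.column_stack([np.ones(n), X, Z])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta0, beta, b = coef[0], coef[1:1 + p], coef[1 + p:]
```

On this synthetic data, ordinary least squares on the augmented design matrix recovers both the additive effects \(\beta \) and the interaction effects b, which is the essence of the model's final regression step.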
Here, \(\hat{y}\) is the predicted yield, \(\beta _\text {W}\), \(\beta _\text {S}\), and \(\beta _\text {M}\) are, respectively, the additive effects of weather, soil, and management features, whereas \(\beta _\text {I}\) is the effect of E × M interactions. This plot was created with Microsoft PowerPoint (Version 16.0.12827.20200 32-bit).

Step 1: Data pre-processing

We collected weather data from the Iowa Environmental Mesonet32, soil data from the Gridded Soil Survey Geographic Database33, and management and yield performance data from the National Agricultural Statistics Service34 for all 293 counties of the states of Illinois, Indiana, and Iowa from 1990 to 2018. Weather variables include precipitation (Prcp, mm), solar radiation (Srad, MJ m\(^{-2}\)), maximum temperature (Tmax, \(^\circ \)C), and minimum temperature (Tmin, \(^\circ \)C) from weeks 13 (late March) to 52 (late December). Soil variables include dry bulk density (BDdry, g cm\(^{-3}\)), clay percentage (clay, %), soil pH (pH), drained upper limit (dul, mm mm\(^{-1}\)), soil saturated hydraulic conductivity (ksat, mm day\(^{-1}\)), wilting point (ll, mm mm\(^{-1}\)), soil organic matter (om, %), sand percentage (sand, %), and saturated volumetric water content (sat, mm mm\(^{-1}\)) at nine different depths of soil: 0–5, 5–10, 10–15, 15–30, 30–45, 45–60, 60–80, 80–100, and 100–120 cm. Weather and soil data were available at 1 km\(^2\) spatial resolution. To compute county-level information, we had to scale up and aggregate the soil and weather information. We averaged the soil data within each county to compute county-level soil information, whereas we took the median of the weather data within each county to obtain county-level weather information. Management variables include acres planted at the county level and weekly cumulative percentages of planted and harvested acreages.
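The county-level aggregation described above (averaging soil grid cells, taking the median of weather grid cells) can be sketched with pandas; the county names and values below are made up for illustration:

```python
import pandas as pd

# Sketch of county-level aggregation: soil cells averaged, weather cells reduced
# by the median. Column names and values are illustrative, not the actual dataset.
grid = pd.DataFrame({
    "county": ["Story", "Story", "Story", "Boone", "Boone"],
    "om":     [3.1, 3.5, 3.3, 2.8, 3.0],     # soil organic matter (%)
    "prcp":   [10.0, 12.0, 50.0, 8.0, 9.0],  # weekly precipitation (mm)
})

county = grid.groupby("county").agg(om=("om", "mean"), prcp=("prcp", "median"))
```

Note how the median damps an outlier grid cell (the 50.0 mm value above), which is presumably part of the motivation for using the median rather than the mean for the more volatile weather fields.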
We also created additional variables using the weather and management data based on agronomic insight to help enhance the performance of the model, such as growing degree days, number of rainy days, and heat units. Due to the lack of publicly available genotypic data, we extracted two new variables using additional data from the National Agricultural Statistics Service34 to account for the trend of genetic improvements2: (1) trend of historical yields and (2) trend of population density for corn and pod count for soybean. These two variables were put in the category of management variables. All variables were normalized to the [0, 1] interval.

Step 2: Robust feature and interaction selection

To avoid overfitting, we selected a subset of all explanatory variables (features) to predict crop yield. We applied an elastic net regularization model to select a set of high-quality features for each category of weather, soil, and management, and then we used forward and backward stepwise selection to identify features and interactions that are spatially and temporally robust across different counties over different years. These robust features and interactions were selected using a similar algorithm from our previous study35, which was modified to iterate between exploring new interactions and cross-validating their performances. This process continues until a set of robust features and interactions has been discovered that leads to good prediction accuracy on the training data and generalizability on the validation data. The way interactions were represented in our model differs from the classical factorial interaction. However, they are also similar in the sense that our algorithm explores all possible factorial combinations to identify the most effective interactions to include in the model.
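The first stage of Step 2, keeping features with nonzero elastic net coefficients, can be sketched as follows. The synthetic data, the penalty settings, and the omission of the subsequent forward/backward stepwise refinement are all simplifications:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Sketch of elastic-net-based feature screening: features whose coefficients
# survive the combined L1/L2 penalty are retained. Data and penalty values are
# illustrative; the paper then refines this set with stepwise selection.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = 4 * X[:, 0] - 3 * X[:, 5] + rng.normal(0, 0.1, 300)  # only features 0 and 5 matter

model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-6)    # indices of kept features
```

On this toy problem the L1 component zeroes out the irrelevant features, leaving the two truly predictive ones.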
Step 3: Linear regression

The last step of the prediction model is a multiple linear regression, which attributes crop yield to additive contributions from weather, soil, management, and their interactions. As such, this prediction model combines the strengths of explainability of linear regression, prediction accuracy of machine learning, and agronomic insights. More details about the kernel functions in Eq. (1) and the algorithm for solving it are provided in Appendix 1.

Experimental setting

We compared the performance of the proposed algorithm with that of eight other machine learning algorithms from the literature: linear regression was implemented in R; stepwise regression was implemented in R using the MASS package36; LASSO, ridge, and elastic net were implemented in R using the glmnet package37; random forest was implemented in R using the ranger package38; extreme gradient boosting (XGBoost) was implemented in R using the xgboost package39; and neural network was implemented in Python using the Sklearn package40. We fed all original explanatory variables as input to these eight algorithms. The linear regression algorithm uses all features without interaction selection; stepwise regression, LASSO regression, ridge regression, and elastic net have their default feature selection settings in the software packages without interaction selection; random forest, XGBoost, and neural network use different modeling structures for feature and interaction selection. As such, the different performances of these algorithms can be attributed to how they select features and interactions from the same set of explanatory data. All nine algorithms were deployed to predict both corn and soybean yields in the states of Illinois, Indiana, and Iowa from 2015 to 2018. To predict yield for the test year t, the training data included all the explanatory (weather, soil, and management) and response (crop yield) data from 1990 to year \(t-1\).
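The temporal evaluation scheme (train on 1990 through \(t-1\), test on year t) can be sketched as:

```python
import pandas as pd

# Sketch of the leave-future-out split: to predict test year t, train on all
# samples strictly before t. Data values are illustrative placeholders.
data = pd.DataFrame({"year": range(1990, 2019), "yield": range(29)})

def temporal_split(df, test_year):
    train = df[df["year"] < test_year]   # 1990 .. t-1
    test = df[df["year"] == test_year]   # year t only
    return train, test

train, test = temporal_split(data, 2015)
```

This guards against information leaking backward in time, which an ordinary random train/test split would not.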
A 10-fold CV over training and validation partitions was applied to tune the hyperparameters using a grid search approach. Illustration of generating scenarios and predicting yield at each week during the growing season. This plot was created with Microsoft PowerPoint (Version 16.0.12827.20200 32-bit). Crop yield prediction during the growing season is informative for farmers to make economic or management decisions, but it is also very challenging due to weather and management uncertainty. Our model was able to provide weekly predictions by integrating continuously updated weather and management data with future weather scenarios. For this purpose, we first trained the proposed model on historical information and then utilized this trained model to predict yield performance during the growing season. The process of generating scenarios during the growing season and predicting yield performance is illustrated in Fig. 2. For the prediction at each week, we recorded the observed weather and management information and estimated the remainder in advance to construct complete weather and management profiles. For the unobserved part of the data, we used observations from previous years as different scenarios at each week. Therefore, we could generate several predictions for corn and soybean at each week, one corresponding to each scenario. As more weather and management data were observed, the uncertainty decreased; thus, the prediction accuracy was expected to improve over time as actual observations became available to replace the estimated weather and management. Our previous work using a crop model suggested that weather uncertainty decreased by 60% by mid-July in Iowa for both corn and soybean41. The final prediction at each week was the median of yield performances across scenarios.
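The scenario procedure can be sketched as follows; the stand-in predict function replaces the trained interaction regression model, and all data are synthetic:

```python
import numpy as np

# Sketch of in-season scenario prediction: the observed part of the weather
# profile is fixed, the unobserved remainder is filled from each historical
# year, the model predicts once per scenario, and the weekly point prediction
# is the median across scenarios. The "model" below is a stand-in.
rng = np.random.default_rng(2)
weeks = 40
observed = rng.uniform(size=12)             # weather observed so far (12 weeks)
history = rng.uniform(size=(10, weeks))     # 10 historical years as scenarios

def predict(weather):                       # placeholder for the trained model
    return 5.0 + 4.0 * weather.mean()

scenarios = [np.concatenate([observed, h[len(observed):]]) for h in history]
predictions = np.array([predict(s) for s in scenarios])
weekly_point_prediction = np.median(predictions)
interval = (np.quantile(predictions, 0.25), np.quantile(predictions, 0.75))
```

The interquartile range across scenarios yields the kind of prediction interval shown in Fig. 4, and the interval narrows as observed weeks replace scenario weeks.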
To explore the prediction performance of the proposed interaction regression model for corn and soybean in completely unseen counties, we created four datasets by removing the historical data of some counties from the training and validation sets. For the first three datasets, we removed data for Illinois (IL), Indiana (IN), and Iowa (IA) from training and validation sets, respectively; for the last dataset, we randomly picked 100 out of the 293 counties and removed all their data from training and validation sets. For this purpose, for the test dataset of unseen counties in 2018, the historical dataset of seen counties from 1990 to 2017 was divided into four time-wise folds. Then, the proposed framework used these folds for feature selection and interaction detection. After extracting robust features and interactions for each dataset, we partitioned the validation and training sets as the two years preceding the test year 2018 (years 2016 and 2017) and the dataset corresponding to the remaining years back to 1990 (years 1990 to 2015), respectively. Then, for each test dataset, we trained the model using its training partition and robust features and interactions, and the trained models were utilized to predict crop yield of the unseen counties in the year 2018.

Prediction accuracy comparison with other machine learning models

Prediction errors for two crops over four test years using nine algorithms are summarized in Table 1. More comparisons in terms of the relative RMSE (RRMSE), the relative squared error (RSE), the mean absolute error (MAE), the relative absolute error (RAE), and the coefficient of determination (\(R^2\)) of the nine models are reported in Appendix 2. These results suggested that the proposed model outperformed other models for all test years for both corn and soybean in all evaluation criteria. The test root mean square errors (RMSE) are also lower than what has been reported in the literature13,14,16,29.
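The evaluation metrics in Table 1 and Appendix 2 can be computed as follows; RRMSE is taken here as RMSE divided by the mean observed yield, expressed in percent, which is a common definition that we assume matches the paper's:

```python
import numpy as np

# Sketch of the reported metrics: RMSE, RRMSE (%), MAE, and R^2.
def metrics(y_true, y_pred):
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    rrmse = 100 * rmse / np.mean(y_true)          # assumed normalization by mean yield
    mae = np.mean(np.abs(err))
    r2 = 1 - np.sum(err ** 2) / np.sum((y_true - np.mean(y_true)) ** 2)
    return rmse, rrmse, mae, r2

# Illustrative county yields in t/ha (made-up values)
y_true = np.array([10.0, 11.0, 9.0, 10.5])
y_pred = np.array([10.2, 10.8, 9.1, 10.4])
rmse, rrmse, mae, r2 = metrics(y_true, y_pred)
```

Because RRMSE is scale-free, it allows the corn and soybean results (which differ greatly in absolute yield) to be compared on the same footing.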
As such, the different performances of our model and others can be attributed to how our model selects high-quality and robust features and interactions from the same set of explanatory data. Second, due to the sparsity of the modeling structure, which specifically separates interactive effects from additive effects of features, the algorithm is less prone to overfitting than some machine learning approaches. In terms of computation time, the proposed approach took approximately two hours for each test year, which was comparable with the neural network model. Table 1 RMSE (in t/ha) of nine algorithms for corn and soybean yield prediction over four test years.

Prediction performance with known weather after growing season

Figure 3 illustrates the prediction performance of the proposed model after the end of the growing season, when all the weather data have been observed. These results indicate that the proposed model has an RRMSE lower than 8% in all three states (and most of the counties) over multiple years for both corn and soybean. For reference, the prediction accuracy of other recent studies ranged from 7.6% mean absolute percentage error for corn using deep neural networks42 to 16.7% RRMSE for corn using random forest8. RRMSE for corn and soybean yield prediction from 2015 to 2018. These plots were created with R (version 3.6.3)43.

Prediction performance with updating weather during growing season

Figure 4 shows the predictions of corn and soybean yield during the growing season of 2018 in the three states, updated weekly to incorporate new weather data. Compared with the USDA predictions, results from the proposed model have two advantages: (1) interval predictions throughout the growing season with weekly updates, and (2) county-level (as opposed to state-level) predictions with good accuracy. The pattern of increased yield prediction from April to July was caused by weather and planting time in 2018, and it varied across different counties.
Our prediction continues to update until the end of December, which is more than 2 months after the end of the growing season. This is because the model is able to capture factors that affect crop yield from crop maturity to harvest, such as adverse weather conditions during harvesting.

Temporal and spatial extrapolation performance

The prediction performance of the proposed interaction regression model for corn and soybean in unseen counties in the test year 2018 is reported in Table 2. Investigation of the performance of the proposed model using four datasets, created by removing the historical data of some counties from the training and validation sets, suggests that the proposed approach has satisfactory prediction performance in both temporal and spatial extrapolation. The corn yield prediction results reveal that a model trained on two of the three states Illinois, Indiana, and Iowa is able to predict corn yield in the remaining state with at most 8.98% error. In contrast, soybean yield prediction at unseen locations using a model trained on seen locations was not sufficiently robust. This means that corn yield is more predictable than soybean yield at completely unseen locations with new weather, soil, and management profiles. The results also suggest that soybean yield prediction is more sensitive to the model than corn yield prediction. State-level predictions of corn and soybean during the growing season for three states in 2018. Our model provided weekly predictions based on observed weather information; prediction intervals were constructed using historical weather scenarios for yet-to-be-observed weather. The dashed red curve is the median prediction, and the pink interval is defined by the first and third quantiles under multiple weather scenarios, constructed using historical weather data. The dotted blue curves are USDA predictions, which were released in August, September, and October of 2018 at the state level.
The solid black line indicates the actual state average yield, which was announced by USDA in February 2019. These plots were created with MATLAB R2018a (Version 9.4.0.813654 64-bit). Table 2 RMSE in t/ha (and RRMSE in %) of the interaction regression model for the extrapolation of crop yield for unseen counties in the year 2018.

Explainable insights

The proposed model provided accurate predictions as well as additive and interactive effects, which could help farmers, breeders, and agronomists better understand the complex and interactive relationships among environment and management. Our model selected 202 robust features and 11 two-way interactions to predict the corn yield. Out of the 202 features, 155 were for weather, 37 for soil, and 10 for management. For reference, the total number of variables is 613 (including 440 for weather, 90 for soil, 83 for management), thus the total number of possible two-way interactions is \(613^2 = 375,769\) (quadratic effects are considered self-interactions44,45). These features and interactions were carefully selected to balance prediction accuracy with spatial and temporal consistency. As such, the same set of features and interactions applies to all counties in the three states for all years between 2015 and 2018. Similarly, our model selected 160 robust features (including 91 for weather, 59 for soil, and 10 for management) and 12 two-way interactions to predict the soybean yield. The contributions of the selected features and interactions for corn and soybean are visualized in Fig. 5. The size of the bars shows the effects of variables and interactions on yield performance. The yield trend is a significant factor in estimating the yield of both corn and soybean. Soybean has one self-interaction, which involves minimum temperature between October 15 and October 21 and has a negative effect on soybean yield.
Corn has two self-interactions, involving cold days from April 2 to April 8 and the cumulative percentage of planted acreages from May 14 to May 20, with positive and negative effects, respectively. The number of weather factors in estimating corn yield is larger than for soybean yield. In contrast, the number of soil factors in estimating soybean yield is more than twice the number of soil factors in the prediction of corn yield. Corn yield is more sensitive than soybean yield to management factors. The detected interactions reveal that most of the interactions are between weather variables from April to September (emergence to reproductive stages). Moreover, temperature plays an important role in most interactions, in the form of maximum and minimum temperatures and numbers of cold days. A close-up view of the interactions is shown in Fig. 6 in two lower circular graphs, in which all 11 interactions for corn and 12 for soybean are numbered. The circular graphs indicate additive and interactive effects for corn and soybean. Curves inside the inner circle connect the two variables involved in the two-way interactions. The bars in the first layer around the circle represent the effects of the interactions, and the bars in the second layer show the additive effects of the features. Positive and negative effects are illustrated with red and blue colors, respectively. These plots were created with MATLAB R2018a (Version 9.4.0.813654 64-bit). The circular graphs show the interactions for corn (left) and soybean (right) that were discovered by the proposed model. Curves inside the inner circle connect the two variables involved in the interactions. The first layer outside the circle shows the positive (red) or negative (blue) effects of the interactions. These plots were created with MATLAB R2018a (Version 9.4.0.813654 64-bit).
We present the contributions of weather (\(\beta _\text {W} W\)), soil (\(\beta _\text {S} S\)), management (\(\beta _\text {M} M\)), and their interactions (\(\beta _\text {I} I\)) in all counties in 2015 and 2018 as violin plots in Fig. 7. The size of each violin denotes the contribution of the corresponding variable to yield. Although their contributions change from year to year, high-impact features, including maximum and minimum temperatures, number of cold days, soil organic matter, wilting point, planting time, and yield trend, show high contributions to yield consistently over time. The contributions of the yield trend and heat units are skewed to the positive side, which means they increase yield performance. High variance in temperature, soil organic matter, wilting point, clay percentage, and drained upper limit indicates that the counties across the US Corn Belt have experienced very different climates and have widely varying soil structures, especially in 2015. The cumulative percentage of planted acreages, the ninth self-interaction in corn yield prediction, negatively impacts yield performance by as much as \(-4.5\) t/ha in 2015 and \(-2\) t/ha in 2018. However, interactions 6, 7, and 8 contribute positively to corn yield. Interactions play a more important role in the yield prediction of corn compared with soybean. Results also reveal that weather conditions in earlier weeks of the growing season have more influence on yield than later ones, and that late planting time is associated with lower yield performance. These findings are consistent with results from field experimental studies41,46,47,48,49,50,51. Violin plots of estimated contributions of weather (first row), soil (second row), management (third row) and interaction (fourth row) variables on corn and soybean yield in 2015 (left) and 2018 (right). Each dot on a violin plot represents a county-level observation. X-axis numbers of the lower panels correspond to the interaction numbers in Fig. 6.
These plots were created with MATLAB R2018a (Version 9.4.0.813654 64-bit).

Insightful interactions

The upper row of Fig. 8 illustrates three of the interactions for corn using partial dependence plots, which are a popular way to show the marginal effect that one or two features have on the predicted outcome of a machine learning model. Two-way interaction ❹ for corn: the combination of low solar radiation and high maximum temperature during the late grain filling period negatively affects corn yields. This is consistent with agronomic intuition, as low solar radiation limits the energy for photosynthesis, and high maximum temperatures are associated with additional yield losses through tissue respiration and increased evapotranspiration stress. Self interaction ❽ for corn: average yield drops from 9.455 to 9.15 t/ha as the number of cold days in the week of April 2 increases from 0 to 4. This is insightful because soil organic matter mineralization and soil water evaporation slow down at low temperatures, leading to delayed field operations due to reduced production of nitrogen and a wetter soil surface. The upward trend of yield as the number of cold days increases from 4 to 7 days is counter-intuitive biologically, but it may reveal an important agronomic insight: when the low temperatures last long enough, farmers may start to take actions (e.g., more fertilization and irrigation) to offset their negative impact on corn yield. Self interaction ❾ for corn: completing planting by May 14 is ideal for the yield, and leaving 50% of planting unfinished by May 20 may reduce the yield by 1.25 t/ha. This is consistent with the well-known benefit of early planting47. It was also validated in 2019, when the weather-caused delay in planting in IL and IN led to decreased yields34. The lower row of Fig. 8 illustrates two of the interactions for soybean using partial dependence plots.
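A partial dependence curve of the kind shown in Fig. 8 can be computed by sweeping one feature over a grid while averaging model predictions over the observed values of all other features; a minimal sketch with a stand-in model and synthetic data:

```python
import numpy as np

# Sketch of one-dimensional partial dependence: fix the feature of interest at
# each grid value and average predictions over the rest of the data. The model
# below is a stand-in, not the fitted interaction regression model.
rng = np.random.default_rng(3)
X = rng.uniform(size=(100, 3))

def model(X):                                  # placeholder trained model
    return 2 * X[:, 0] - X[:, 0] ** 2 + 0.5 * X[:, 1]

def partial_dependence(model, X, feature, grid):
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                     # fix the feature of interest
        pd_values.append(model(Xv).mean())     # average over the other features
    return np.array(pd_values)

grid = np.linspace(0, 1, 5)
pd_curve = partial_dependence(model, X, feature=0, grid=grid)
```

For a two-way interaction plot, the same sweep is done over a grid of value pairs for two features.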
Self interaction ❸ for soybean: lower temperature, even near freezing, in mid- to late-October is favorable for soybean yield. Two-way interaction ❺ for soybean: high precipitation in mid July makes the yield sensitive to night temperature in late August; warmer nights may lead to a 0.45 t/ha higher yield than cooler nights. It has been reported that higher temperature negatively impacts soybean yield52,53; our results further suggest that precipitation may also affect the extent of such impact. A possible interpretation is that higher temperature accelerates leaf senescence and increases remobilization of nitrogen and dry matter from vegetative tissues to grains, and such a process may be more sensitive to temperature at a higher level of soil moisture. The upper row indicates partial dependence plots of interactions ❹ (left), ❽ (center), and ❾ (right) for corn. The lower row shows partial dependence of interactions ❸ (left) and ❺ (right) for soybean. These plots were created with MATLAB R2018a (Version 9.4.0.813654 64-bit).

Dissection of crop yield

Breakdowns of observed yields in three states from 2015 to 2018 into contributions of weather (\(\beta _\text {W} W\)), soil (\(\beta _\text {S} S\)), management (\(\beta _\text {M} M\)), and their interactions (\(\beta _\text {I} I\)) are shown in Figs. 9 and 10 for corn and soybean, respectively. These contributions differ by county and change over time. In 2015, weather was the deciding variable for the yield, whereas interactions played a more important role in 2018. Due to their relatively static nature and lack of dramatic changes across the three Midwest states, soil variables demonstrated a lower effect on crop yield than the dynamic weather, management, and their interactions28,54.
Breakdown of observed corn yield in three states from 2015 to 2018 into contributions of weather (\(\beta _\text {W} W\)), soil (\(\beta _\text {S} S\)), management (\(\beta _\text {M} M\)), and their interactions (\(\beta _\text {I} I\)). These plots were created with R (version 3.6.3)43. Breakdown of observed soybean yield in three states from 2015 to 2018 into contributions of weather (\(\beta _\text {W} W\)), soil (\(\beta _\text {S} S\)), management (\(\beta _\text {M} M\)), and their interactions (\(\beta _\text {I} I\)). These plots were created with R (version 3.6.3)43. The main contributions of the proposed model are summarized in its three salient properties compared with other machine learning models. The first property is the use of robust features and interactions, so that the same prediction model carries over from year to year. From an agronomic point of view, conventional feature selection techniques are not well suited for yield prediction, because the training dataset changes from year to year and leads to the selection of a different set of features each time; the biological conclusions drawn from different sets of features then differ as well. A robust selection structure addresses this gap. Second, the proposed model addresses the limitation of machine learning models in transparency by deciphering environment by management interactions for corn and soybean yield. The proposed model was designed to efficiently select a subset of interactions that are spatially and temporally robust, resulting in high performance while being less prone to overfitting. Third, the proposed model quantifies the contributions of weather (\(\beta _\text {W} W\)), soil (\(\beta _\text {S} S\)), management (\(\beta _\text {M} M\)), and their interactions (\(\beta _\text {I} I\)) to observed yield, whereas capable machine learning models such as neural networks, random forests, and XGBoost cannot quantify these contributions.
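The third property amounts to splitting the fitted linear predictor into group-wise dot products; a minimal sketch, with made-up feature groupings and coefficients:

```python
import numpy as np

# Sketch of the yield dissection in Figs. 9-10: a fitted linear model splits each
# prediction additively into weather, soil, management, and interaction parts.
# Group sizes and coefficient values are illustrative assumptions.
rng = np.random.default_rng(4)
n = 6
W = rng.uniform(size=(n, 3))                 # weather features
S = rng.uniform(size=(n, 2))                 # soil features
M = rng.uniform(size=(n, 2))                 # management features
I = rng.uniform(size=(n, 2))                 # interaction features
beta_W = np.array([1.0, 0.5, 0.2])
beta_S = np.array([0.3, 0.4])
beta_M = np.array([0.6, 0.1])
beta_I = np.array([0.8, -0.2])
beta0 = 2.0

contrib = {
    "weather": W @ beta_W,
    "soil": S @ beta_S,
    "management": M @ beta_M,
    "interaction": I @ beta_I,
}
y_hat = beta0 + sum(contrib.values())        # contributions add back to the prediction
```

Because the model is linear in its (kernel-augmented) features, the four contribution terms sum exactly to the predicted yield, which is what makes the per-county breakdown possible.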
We proposed the interaction regression model for crop yield prediction, which made three major contributions. First, it outperformed state-of-the-art machine learning algorithms with respect to prediction accuracy in a comprehensive case study, which used historical data of three Midwest states from 1990 to 2018. Second, it was able to identify about a dozen E \(\times \) M interactions for corn and soybean yield, which are spatially and temporally robust and can be used to form counter-intuitive, insightful, and testable hypotheses. Third, it was able to explain the contributions of weather, soil, management, and their interactions to crop yield. Achieving these three contributions simultaneously is particularly significant, since no other crop yield prediction algorithm has been able to satisfactorily address both prediction accuracy and explainability. The proposed model and computational experiments are not without limitations. For example, the robust feature and interaction selection algorithms are heuristic in nature: they find high-quality solutions efficiently but do not guarantee global optimality. As the number of features grows (e.g., with genetic information), the proposed heuristic algorithm may lose efficiency in terms of the running time needed to find robust features and interactions. Our model seeks only self- or two-way interactions; new models are required to discover higher-order interactions between variables. The non-linear interaction functions in this paper are limited to six predefined kernel functions, a set that can be extended in future research. The performance of the algorithm may be further improved by applying more advanced techniques for hyperparameter tuning55. Due to the lack of publicly available information on genotype and management, the W, S, and M data used in our case study may be disproportionate to their true contributions to crop yield.
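One simple way to make feature selection "robust" in the sense discussed here is to keep only features that a per-year selector picks in most yearly training sets. This is a hedged sketch, not the authors' heuristic; the per-year selections below are fabricated.

```python
# Robust feature selection across years: retain features chosen in at least a
# given fraction of the yearly training sets, so the selected set (and the
# biological story told from it) does not flip when the training year changes.
from collections import Counter

def robust_features(yearly_selections, min_fraction=0.8):
    counts = Counter(f for sel in yearly_selections for f in set(sel))
    n_years = len(yearly_selections)
    return sorted(f for f, c in counts.items() if c / n_years >= min_fraction)

# Hypothetical output of some per-year selector over five training years.
yearly = [
    {"prcp_jul", "tmax_aug", "soil_om"},
    {"prcp_jul", "tmax_aug", "plant_date"},
    {"prcp_jul", "tmax_aug", "soil_om"},
    {"prcp_jul", "soil_om", "tmax_aug"},
    {"prcp_jul", "tmax_aug", "radn_jun"},
]
stable = robust_features(yearly, min_fraction=0.8)
# → ['prcp_jul', 'tmax_aug']  (picked every year; 'soil_om' only 3 of 5 times)
```

The threshold `min_fraction` trades stability against coverage; a stricter threshold yields fewer but more reproducible features.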
However, the proposed modeling approach was designed for both discrete and continuous explanatory variables and is capable of analyzing all G, W, S, and M variables and their interactions. Future research should explore the possibility of including additional data (such as high-dimensional genotype data, plant traits, detailed management strategies, and satellite images) to further improve prediction accuracy and make more biologically and agronomically insightful discoveries. The implementation of the proposed model and dataset used in this study are available at https://github.com/ansarifar/An-Explainable-Model-for-Crop-Yield-Prediction. Cooper, M. et al. Integrating Genetic Gain and Gap Analysis to Predict Improvements in Crop Productivity (Crop Science, 2020). Duvick, D. Genetic progress in yield of United States maize (Zea mays L.). Maydica 50, 193 (2005). Hipólito, J., Boscolo, D. & Viana, B. F. Landscape and crop management strategies to conserve pollination services and increase yields in tropical coffee farms. Agric. Ecosyst. Environ. 256, 218–225 (2018). Filippi, C., Mansini, R. & Stevanato, E. Mixed integer linear programming models for optimal crop selection. Comput. Oper. Res. 81, 26–39 (2017). Alminana, M. et al. Wische: A DSS for water irrigation scheduling. Omega 38, 492–500 (2010). Dai, Z. & Li, Y. A multistage irrigation water allocation model for agricultural land-use planning under uncertainty. Agric. Water Manag. 129, 69–79 (2013). Drummond, S. T., Sudduth, K. A., Joshi, A., Birrell, S. J. & Kitchen, N. R. Statistical and neural methods for site-specific yield prediction. Trans. ASAE 46, 5 (2003). Jeong, J. H. et al. Random forests for global and regional crop yield predictions. PLoS One 11, 210 (2016). Liu, J., Goering, C. & Tian, L. A neural network for setting target corn yields. Trans. ASAE 44, 705 (2001). Kaul, M., Hill, R. L. & Walthall, C.
Artificial neural networks for corn and soybean yield prediction. Agric. Syst. 85, 1–18 (2005). Crane-Droesch, A. Machine learning methods for crop yield prediction and climate change impact assessment in agriculture. Environ. Res. Lett. 13, 114003 (2018). Russello, H. Convolutional Neural Networks for Crop Yield Prediction Using Satellite Images (IBM Center for Advanced Studies, 2018). You, J., Li, X., Low, M., Lobell, D. & Ermon, S. Deep Gaussian process for crop yield prediction based on remote sensing data. In Thirty-First AAAI Conference on Artificial Intelligence (2017). Marko, O., Brdar, S., Panic, M., Lugonja, P. & Crnojevic, V. Soybean varieties portfolio optimisation based on yield prediction. Comput. Electron. Agric. 127, 467–474 (2016). Ansarifar, J., Akhavizadegan, F. & Wang, L. Performance prediction of crosses in plant breeding through genotype by environment interactions. Sci. Rep. 10, 1–11 (2020). Romero, J. R. et al. Using classification algorithms for predicting durum wheat yield in the province of Buenos Aires. Comput. Electron. Agric. 96, 173–179 (2013). González-Camacho, J. M. et al. Applications of machine learning methods to genomic selection in breeding wheat for rust resistance. Plant Genome 11, 1–15 (2018). Basnet, B. R. et al. Hybrid wheat prediction using genomic, pedigree, and environmental covariables interaction models. Plant Genome 12, 1–13 (2019). González-Camacho, J. M., Crossa, J., Pérez-Rodríguez, P., Ornella, L. & Gianola, D. Genome-enabled prediction using probabilistic neural network classifiers. BMC Genom. 17, 208 (2016). Keating, B. A. et al. An overview of APSIM, a model designed for farming systems simulation. Eur. J. Agron. 18, 267–288 (2003). Basso, B., Liu, L. & Ritchie, J. T. A comprehensive review of the CERES-wheat, -maize and -rice models' performances. In Advances in Agronomy Vol. 136 27–132 (Elsevier, 2016). Monsi, M. & Saeki, T.
On the factor light in plant communities and its importance for matter production. Ann. Bot. 95, 549 (2005). Ahuja, L. & Ma, L. Methods of Introducing System Models into Agricultural Research (American Society of Agronomy, 2011). Eitzinger, J., Trnka, M., Hösch, J., Žalud, Z. & Dubrovskỳ, M. Comparison of CERES, WOFOST and SWAP models in simulating soil water content during growing season under different soil conditions. Ecol. Model. 171, 223–246 (2004). Heslot, N., Akdemir, D., Sorrells, M. & Jannink, J.-L. Integrating environmental covariates and crop modeling into the genomic selection framework to predict genotype by environment interactions. Theor. Appl. Genet. 127, 463–480 (2014). Bassu, S. et al. How do various maize crop models vary in their responses to climate change factors? Glob. Change Biol. 20, 2301–2320 (2014). Lamsal, A. et al. Efficient crop model parameter estimation and site characterization using large breeding trial data sets. Agric. Syst. 157, 170–184 (2017). Puntel, L. A., Pagani, A. & Archontoulis, S. V. Development of a nitrogen recommendation tool for corn considering static and dynamic variables. Eur. J. Agron. 105, 189–199 (2019). Akhavizadegan, F., Ansarifar, J., Wang, L., Huber, I. & Archontoulis, S. V. A time-dependent parameter estimation framework for crop modeling. Sci. Rep. 11, 1–15 (2021). Santos, J. & Barrios, E. Robust inference in semiparametric spatial-temporal models. Commun. Stat. Simul. Comput. 20, 1–20 (2019). Nogueira, S., Sechidis, K. & Brown, G. On the stability of feature selection algorithms. J. Mach. Learn. Res. 18, 6345–6398 (2017). Iowa Environmental Mesonet. https://mesonet.agron.iastate.edu. Gridded Soil Survey Geographic (gSSURGO) Database. https://gdg.sc.egov.usda.gov. National Agricultural Statistics Service. https://quickstats.nass.usda.gov. Ansarifar, J. & Wang, L.
New algorithms for detecting multi-effect and multi-way epistatic interactions. Bioinformatics 35, 5078–5085 (2019). Ripley, B. et al. MASS: Support functions and datasets for Venables and Ripley's MASS. R Package Version 7-3 (2011). Friedman, J., Hastie, T. & Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 33, 1 (2010). Wright, M. N. & Ziegler, A. ranger: A fast implementation of random forests for high dimensional data in C++ and R. arXiv:1508.04409 (arXiv preprint) (2015). Chen, T. & Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (ACM, 2016). Pedregosa, F. et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011). Archontoulis, S. V. et al. Predicting crop yields and soil-plant nitrogen dynamics in the US corn belt. Crop Sci. 60, 721–738 (2020). Kim, N. et al. A comparison between major artificial intelligence models for crop yield prediction: Case study of the midwestern United States, 2006–2015. ISPRS Int. J. Geo Inf. 8, 240 (2019). Hornik, K. R FAQ. https://CRAN.R-project.org/doc/FAQ/R-FAQ.html (2020). Alvarez, R. & Grigera, S. Analysis of soil fertility and management effects on yields of wheat and corn in the rolling pampa of Argentina. J. Agron. Crop Sci. 191, 321–329 (2005). Leeper, R., Runge, E. & Walker, W. Effect of plant-available stored soil moisture on corn yields. I. Constant climatic conditions 1. Agron. J. 66, 723–727 (1974). Kessler, A., Archontoulis, S. V. & Licht, M. A. Soybean yield and crop stage response to planting date and cultivar maturity in Iowa, USA. Agron. J. 112, 382–394 (2020). Baum, M., Archontoulis, S. & Licht, M. Planting date, hybrid maturity, and weather effects on maize yield and crop stage. Agron. J. 111, 303–313 (2019). Fan, Y., Li, H. & Miguez-Macho, G.
Global patterns of groundwater table depth. Science 339, 940–943 (2013). Rizzo, G., Edreira, J. I. R., Archontoulis, S. V., Yang, H. S. & Grassini, P. Do shallow water tables contribute to high and stable maize yields in the US corn belt? Glob. Food Sec. 18, 27–34 (2018). Pasley, H. R. et al. Nitrogen rate impacts on tropical maize nitrogen use efficiency and soil nitrogen depletion in eastern and southern Africa. Nutr. Cycling Agroecosyst. 20, 1–12 (2020). Nichols, V. A. et al. Maize root distributions strongly associated with water tables in Iowa, USA. Plant Soil 444, 225–238 (2019). Wilhelm, W. & Wortmann, C. S. Tillage and rotation interactions for corn and soybean grain yield as affected by precipitation and air temperature. Agron. J. 96, 425–432 (2004). Zhao, C. et al. Temperature increase reduces global yields of major crops in four independent estimates. Proc. Natl. Acad. Sci. 114, 9326–9331 (2017). Zipper, S. C., Soylu, M. E., Booth, E. G. & Loheide, S. P. Untangling the effects of shallow groundwater and soil texture as drivers of subfield-scale yield variability. Water Resour. Res. 51, 6338–6358 (2015). Bergstra, J. & Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012). This work was partially supported by the National Science Foundation under the LEAP HI and GOALI programs (Grant number 1830478) and under the EAGER program (Grant number 1842097). Department of Industrial and Manufacturing Systems Engineering, Iowa State University, Ames, IA, 50011, USA Javad Ansarifar & Lizhi Wang Department of Agronomy, Iowa State University, Ames, IA, 50011, USA Sotirios V. Archontoulis Javad Ansarifar J.A., L.W., and S.V. designed the research questions. J.A. prepared and cleaned the database. J.A. performed the experiment, statistical analysis, and analyzed the dataset. J.A. designed and implemented a new algorithm.
J.A. created the figures. J.A., L.W., and S.V. interpreted experiment results. J.A., L.W., and S.V. wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Javad Ansarifar. The authors declare no competing interests. Ansarifar, J., Wang, L. & Archontoulis, S.V. An interaction regression model for crop yield prediction. Sci Rep 11, 17754 (2021). https://doi.org/10.1038/s41598-021-97221-7
Fourier Integrals and Transforms (Fourier integrals and the Dirac δ-function). The connection between the momentum and position representations relies on the notions of Fourier integrals and Fourier transforms (for a more extensive coverage, see the module MATH3214). The Fourier integral is the non-discrete analogue of a Fourier series. Chapter 7: 7.2-7.3 Fourier Transform, Prob. 7.2-20, Prob. 7.1-19. Related videos: an example on Fourier cosine and sine integrals; the integral of e^(ikx) from -pi to pi where k is an integer; Complex Fourier Series: https://youtu.be/aC0j8CW58AM; f(x) = x(2-x), x = 0 to 2: https://youtu.be/32Q0tMddoRw.

Problem. I have to find the Fourier integral representation of $f(x)=\begin{cases}0, & x<0,\\ e^{-x}, & x>0,\end{cases}$ and hence show that $\int_0^{\infty}\frac{\cos\omega x+\omega\sin\omega x}{1+\omega^2}\,d\omega=\begin{cases}0, & x<0,\\ \pi/2, & x=0,\\ \pi e^{-x}, & x>0,\end{cases}$ and report the values of x for which f(x) equals its Fourier integral. A similar exercise (Math Q&A library): a) using the Fourier integral representation, show the identity above; b) evaluate the Fourier series of $f(x)=x$, $-\pi\le x\le\pi$. From the discussion: "The problem only states that the function is $f(x)=e^{-x}$, $x>0$, and $f(-x)=f(x)$; in that case, I think the problem is asking for the Fourier integral representation of the even extension." "I am a little confused about the domain also." "Notice here how I used 0 and $\infty$ as my bounds; is this correct?" "The reason I ask is, since this function is not odd, the Fourier sine transform gives you only the imaginary part of the full Fourier transform." "Do you mean the Fourier sine transform of the function, $\sqrt{\frac{2}{\pi}}\int_0^{\infty}f(x)\sin(kx)\,dx$? If you check your solution and multiply it by the factor 1/2 you will …" "I don't see any reason not to include 0 in each of the cases."

The Fourier integral representation of a function is $f(x)=\int_0^{\infty}\bigl[A(\omega)\cos\omega x+B(\omega)\sin\omega x\bigr]\,d\omega$, built from the identity $\cos\lambda(t-x)=\cos\lambda t\cos\lambda x+\sin\lambda t\sin\lambda x$. If f(t) is a function without too many horrible discontinuities (technically, if f(t) is decent enough that $\int_a^b f(t)\,dt$ is defined, makes sense as a Riemann integral for example, for all finite intervals $-\infty<a<b<\infty$), this representation is known as Fourier's integral. (Fourier Integral and Integration Formulas) Invent a function f(x) such that the Fourier integral representation implies the formula $e^{-x}=\frac{2}{\pi}\int_0^{\infty}\frac{\cos\omega x}{1+\omega^2}\,d\omega$.

Conventions. The Fourier transform of a function f(x) is usually denoted by a "hat": (FT) $\hat f(\omega)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(x)\,e^{-i\omega x}\,dx$; sometimes it is denoted by a "tilde" ($\tilde f$), and seldom just by a corresponding capital letter $F(\omega)$. The inverse, a Fourier integral, is (FI) $f(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega x}\,d\omega$. Wolfram Alpha defines the Fourier transform of an integrable function in the same way up to constants; another common convention places $2\pi$ in the exponent: (1) $F(u)=\int_{-\infty}^{\infty}f(x)\,e^{-2\pi iux}\,dx$, and F(u) is in turn related to f(x) by the inverse Fourier transform (2) $f(x)=\int_{-\infty}^{\infty}F(u)\,e^{2\pi iux}\,du$. In mathematics, the Fourier inversion theorem says that for many types of functions it is possible to recover a function from its Fourier transform. Intuitively it may be viewed as the statement that if we know all frequency and phase information about a wave then we may reconstruct the original wave precisely. Writing the two transforms as a repeated integral, we obtain the usual statement of Fourier's integral theorem. On the derivative property: in my case this would mean that I can look at the Fourier transform of the derivative, divided by ip; I can split up that last integral (in order to get rid of the absolute value of x) and combine it with the constant from earlier.

Asides. Differentiation of Fourier series (use $\text{Re}(e^{inx})=\cos(nx)$, $\text{Im}(e^{inx})=\sin(nx)$): if the derivative f′(x) is also piecewise continuous and f(x) satisfies the periodicity condition, the series may be differentiated term by term. To calculate $f(2)=\frac{e^2-1}{2e}+\frac{e^2-1}{e}\sum_{n=1}^{\infty}\frac{(-1)^n}{1+\pi^2n^2}$ for the 2-periodic extension of $e^x$ from $(-1,1)$, you just notice that it is the same sum as for $f(0)=1$. The integral of $e^x$ is $e^x$ itself; but we add an integration constant after the value of every indefinite integral, hence $\int e^x\,dx=e^x+C$, where $\int$ is the symbol of integration. The process of finding integrals is called integration. The representation of a function given on a finite interval of the real axis by a Fourier series is very important; as we know, the Fourier series expansion of such a function exists. (Fourier Transform) Let f(x) = x for |x| … The Fourier transform calculator with steps is an online tool which helps you to find the Fourier transform of a specified function, and the complex Fourier series calculator allows you to transform a function of time into a function of frequency.

Fourier integral operators. In mathematical analysis, Fourier integral operators have become an important tool in the theory of partial differential equations; the class of Fourier integral operators contains differential operators as special cases. We study a class of Fourier integral operators on compact manifolds with boundary X and Y, associated with a natural class of symplectomorphisms $\chi\colon T^*Y\setminus 0\to T^*X\setminus 0$. By continuity and compactness, the property remains true in a sufficiently small collar neighborhood of the boundary. FOURIER SINE AND COSINE INTEGRALS (Engineering Mathematics II, MAP 4306-4768, Spring 2002: Fourier Integral Representations, basic formulas and facts).
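The identity $\int_0^{\infty}\frac{\cos\omega x+\omega\sin\omega x}{1+\omega^2}\,d\omega=\pi e^{-x}$ for $x>0$ discussed above can be sanity-checked numerically. This is a sketch using only the Python standard library; the truncation point L, the step count, the test point x = 1, and the tolerance are all ad-hoc choices (the tail of the truncated integral is of order 1/L).

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule with an even number n of subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

x = 1.0
L = 200 * math.pi  # truncate the infinite range at a multiple of the period

def integrand(w):
    return (math.cos(w * x) + w * math.sin(w * x)) / (1 + w * w)

approx = simpson(integrand, 0.0, L, 200_000)
exact = math.pi * math.exp(-x)  # claimed value of the integral for x > 0
# approx agrees with exact to roughly the size of the truncated tail
```

The sine part of the integrand decays only like $1/\omega$, so the plain truncation converges slowly; truncating at a multiple of the period keeps the leftover oscillatory tail small.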
Fourier cosine and sine integrals (Laplace integrals). For an even function f(x), $B(w)=0$ and $f(x)=\int_0^{\infty}A(w)\cos wx\,dw$ with $A(w)=\frac{2}{\pi}\int_0^{\infty}f(v)\cos wv\,dv$; for an odd function f(x), $A(w)=0$ and $f(x)=\int_0^{\infty}B(w)\sin wx\,dw$ with $B(w)=\frac{2}{\pi}\int_0^{\infty}f(v)\sin wv\,dv$. For $f(x)=e^{-kx}$ ($x>0$, $k>0$) these give the Laplace integrals $A(w)=\frac{2}{\pi}\,\frac{k}{k^2+w^2}$ and $B(w)=\frac{2}{\pi}\,\frac{w}{k^2+w^2}$. Calculating $A(\omega)$ for the example above: $A(\omega)=\frac{1}{\pi}\int_{-\infty}^{\infty}f(u)\cos\omega u\,du=\frac{1}{\pi}\int_0^{\infty}e^{-u}\cos\omega u\,du$. Your formulas for $a_n$ and $b_n$ are correct.

Introduction to the Fourier integral. The Fourier integral is obtained from a regular Fourier series, which strictly speaking applies only to periodic signals. The trick is to take the limit of the Fourier series as the originally finite period of the periodic signal goes to infinity; that means the signal is never repeated, and the sum passes into an integral of the form $x(t)=\int_{-\infty}^{\infty}y(\omega)\,e^{i\omega t}\,d\omega$. What is the significance of the Fourier integral? In mathematics, an integral assigns numbers to functions in a way that describes displacement, area, volume, and other concepts that arise by combining infinitesimal data; along with differentiation, integration is a fundamental, essential operation of calculus and serves as a tool throughout analysis. An analogous role to the Fourier series on a finite interval is played by the representation of a function f given on the whole axis by a Fourier integral: (1) $f(x)=\int_0^{\infty}\bigl(a(\lambda)\cos\lambda x+b(\lambda)\sin\lambda x\bigr)\,d\lambda$. Fourier Theorem: if the complex function $g\in L^2(\mathbb{R})$ (i.e. g square-integrable), then g is recovered from its transform in the $L^2$ sense. In words, equation [1] states that y at time t is equal to the integral of x(τ) from minus infinity up to time t; now recall the derivative property of the Fourier transform for a function g(t): we can substitute $h(t)=dg(t)/dt$ (i.e. h(t) is the time derivative of g(t)) into equation [3]; since g(t) is an arbitrary function, h(t) is as well.

Fourier series notes. Let f(x) be a 2π-periodic piecewise continuous function defined on the closed interval $[-\pi,\pi]$. Fourier series of odd and even functions: the Fourier coefficients $a_0$, $a_n$, or $b_n$ may turn out to be zero after integration in certain Fourier series problems; this indicates that attempting to discover the zero coefficients one by one could be a lengthy operation that should be avoided. FOURIER SERIES LINKS: f(x) = (π − x)/2, x = 0 to 2π, deduce π/4 = 1 − 1/3 + 1/5 − 1/7 + …; Fourier Series of e^x from -pi to pi, featuring the sum of (-1)^n/(1+n^2); Fourier Series Formulas: https://youtu.be/iSw2xFhMRN0; Integral of e^(ax)·cos(bx). Subject: Engineering Mathematics 3; video: Fourier expansion of f(x) = e^{-x} in (0, 2π); chapter: Fourier Series; faculty: Prof. Mahesh Wagh. Answer (1 of 4): in order to compute this, you'll need integrals having integrands of the type $Ce^x\cos(nx)$ and $Ce^x\sin(nx)$ for some suitable constant C. Compute both in one sweep by computing an integral with an integrand of the form $Ce^{(1+in)x}$.

From Fourier Series and Integrals (Chapter 4, p. 320): every cosine has period 2π. Figure 4.3 shows two even functions, the repeating ramp RR(x) and the up-down train UD(x) of delta functions. That sawtooth ramp RR is the integral of the square wave; the delta functions in UD give the derivative of the square wave. (For sines, the integral and derivative are cosines.)

On the distributional answer: it is true that it cannot be simply $2\delta(x)$. I more or less have pinned down the problem with Mathematica: first I noticed that asking for the FT of $\omega(\dots+\dots)$ returns the $2\delta(x)$, while asking for $(\omega\times\dots+\omega\times\dots)$ returns the result I quote above. @Hyperplane, thank you for pointing that out. A Class of Fourier Integral Operators on Manifolds with Boundary: in this section we introduce the Fourier integral operators we are interested in and describe their mapping properties.
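The "compute both in one sweep" suggestion can be checked numerically: the two real integrals over $(-\pi,\pi)$ are the real and imaginary parts of the single complex integral of $e^{(1+in)x}$, whose antiderivative is $e^{(1+in)x}/(1+in)$. A sketch with the standard library only; the choice n = 3 and the midpoint-rule resolution are arbitrary.

```python
import cmath
import math

def coeff_integrals(n, steps=100_000):
    """Midpoint-rule approximations of the integrals of e^x cos(nx) and
    e^x sin(nx) over (-pi, pi), used here only as an independent check."""
    h = 2 * math.pi / steps
    c = s = 0.0
    for k in range(steps):
        x = -math.pi + (k + 0.5) * h
        ex = math.exp(x)
        c += ex * math.cos(n * x)
        s += ex * math.sin(n * x)
    return c * h, s * h

n = 3
# One sweep: evaluate the antiderivative e^{(1+in)x}/(1+in) at the endpoints.
z = (cmath.exp((1 + 1j * n) * math.pi)
     - cmath.exp(-(1 + 1j * n) * math.pi)) / (1 + 1j * n)
c_num, s_num = coeff_integrals(n)
# z.real matches the cosine integral and z.imag the sine integral
```

Dividing these integrals by $\pi$ gives the Fourier coefficients $a_n$ and $b_n$ of $e^x$ on $(-\pi,\pi)$, which is where the sum $\sum(-1)^n/(1+n^2)$ mentioned above comes from.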
August 2017, 10(4): 799-813. doi: 10.3934/dcdss.2017040 On the geometry of the p-Laplacian operator Bernd Kawohl 1 and Jiří Horák 2. 1 Mathematisches Institut, Universität zu Köln, 50923 Köln, Germany; 2 Fakultät Maschinenbau, TH Ingolstadt, Postfach 21 04 54, 85019 Ingolstadt, Germany. * Corresponding author: Bernd Kawohl. Received April 2016; Revised August 2016; Published April 2017. Under a Creative Commons license. The $p$-Laplacian operator $\Delta_pu={\rm div }\left(|\nabla u|^{p-2}\nabla u\right)$ is not uniformly elliptic for any $p\in(1,2)\cup(2,\infty)$ and degenerates even more when $p\to \infty$ or $p\to 1$. In those two cases the Dirichlet and eigenvalue problems associated with the $p$-Laplacian lead to intriguing geometric questions, because their limits for $p\to\infty$ or $p\to 1$ can be characterized by the geometry of $\Omega$. In this little survey we recall some well-known results on eigenfunctions of the classical 2-Laplacian and elaborate on their extensions to general $p\in[1,\infty]$. We report also on results concerning the normalized or game-theoretic $p$-Laplacian $\Delta_p^Nu:=\tfrac{1}{p}|\nabla u|^{2-p}\Delta_pu=\tfrac{1}{p}\Delta_1^Nu+\tfrac{p-1}{p}\Delta_\infty^Nu$ and its parabolic counterpart $u_t-\Delta_p^N u=0$. These equations are homogeneous of degree 1 and $\Delta_p^N$ is uniformly elliptic for any $p\in (1,\infty)$. In this respect it is more benign than the $p$-Laplacian, but it is not of divergence type. Keywords: p-Laplacian, viscosity solutions, variational methods, nodal lines, eigenfunctions. Mathematics Subject Classification: Primary: 35J92; Secondary: 35K92, 35D40, 49L25. Citation: Bernd Kawohl, Jiří Horák. On the geometry of the p-Laplacian operator. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4): 799-813.
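The decomposition of $\Delta_p^N$ quoted in the abstract follows from a short computation. The sketch below assumes a smooth $u$ with non-vanishing gradient and uses the standard notation $\Delta_\infty^N u=|\nabla u|^{-2}\langle D^2u\,\nabla u,\nabla u\rangle$ and $\Delta_1^N u=\Delta u-\Delta_\infty^N u$:

```latex
\begin{align*}
\Delta_p u &= |\nabla u|^{p-2}\bigl(\Delta u + (p-2)\,\Delta_\infty^N u\bigr),\\
\Delta_p^N u &= \tfrac{1}{p}\,|\nabla u|^{2-p}\,\Delta_p u
             = \tfrac{1}{p}\bigl(\Delta u + (p-2)\,\Delta_\infty^N u\bigr)\\
           &= \tfrac{1}{p}\bigl(\Delta_1^N u + \Delta_\infty^N u + (p-2)\,\Delta_\infty^N u\bigr)
             = \tfrac{1}{p}\,\Delta_1^N u + \tfrac{p-1}{p}\,\Delta_\infty^N u .
\end{align*}
```

The weights $\tfrac1p$ and $\tfrac{p-1}{p}$ sum to 1, which is what makes $\Delta_p^N$ a convex combination of the normalized 1- and $\infty$-Laplacians and explains its uniform ellipticity for $p\in(1,\infty)$.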
Figure 1. The positive viscosity solution of (4.4)
Figure 2. Conceivable nodal lines of the second eigenfunction for $p=\infty$ in the disc
Figure 3. Illustration of (5.4) and (5.5)
Figure 4. Numerical simulation of $u_{15}$ and side view in diagonal direction
Figure 5. Numerical simulation of $u_p$: normalized values along half of the diagonal for $p=2, 3, 4, 6, 8, 10, 15$ (left), and for $p=15$ compared to the line $y=x$ (right)
Angular momentum and magnetic moment [duplicate]
Duplicate of: Relation between magnetic moment and angular momentum -- classic theory
I have just started studying MRI physics and was reading F. Bloch's paper on Nuclear Induction. https://doi.org/10.1103/PhysRev.70.460 On page 463, it is mentioned,
To obtain this variation does not require the solution of the Schroedinger equation. It is enough to remember the general fact that the quantum-mechanical expectation value of any quantity follows in its time dependence exactly the classical equations of motion and that the magnetic and angular momenta of each nucleus are parallel to each other. The parallelity between the magnetic moment $\mu$ and the angular momentum a for each nucleus implies $\mu = \gamma a$
Are the magnetic and angular momenta of the proton always parallel to each other? Why is this so?
quantum-mechanics quantum-spin magnetic-moment nuclear-magnetic-resonance
That's odd phrasing, since it implies that it's "magnetic momentum" rather than magnetic moment – Nihar Karve
@NiharKarve Yes, it is indeed odd. This sentence describes the variation with time of the vector M (resultant nuclear magnetic moment per unit volume). I have also edited the text to give the next paragraph, which implies that he is talking about the magnetic moment. – Julian
Does this answer your question? Relation between magnetic moment and angular momentum -- classic theory
@NiharKarve I had seen this before and it says that the direction of the magnetic moment and the angular momentum is perpendicular to the plane of the loop and hence they are parallel, but this is a classical approach to it. I want to know if this is always the case since the magnetic moment of any particle is due to its intrinsic angular momentum.
Also, if it is only true for electrons or for all particles like protons.
Well, it is true that the spin magnetic moment is always (anti-)parallel to the spin angular momentum - that's the basis for the definition of the g-factor. I believe you can invoke the Dirac equation to see this.
The Pauli equation in the weak magnetic field approximation is $$ \left[\frac{1}{2m}(p^2-q(\vec{L}+2\vec{S})\cdot \vec{B})\right] |\psi\rangle = i\hbar\frac{\partial}{\partial t}|\psi\rangle $$ which is itself obtained from the non-relativistic limit of the Dirac equation. The $\frac{q}{2m}\vec{L}\cdot \vec{B}$ and $\frac{2q}{2m}\vec{S}\cdot \vec{B}$ terms are exactly the perturbation to the Hamiltonian of the form $-\vec{\mu}\cdot\vec{B}$ in, for example, the Zeeman effect, so we can identify the orbital magnetic moment $\vec\mu_L$ with $\frac{q}{2m}\vec{L}$ and the spin magnetic moment $\vec\mu_S$ with $\frac{q}{m}\vec{S}$ - so the magnetic moment is aligned with the spin angular momentum. A curious observation is that the spin magnetic moment for the electron is twice the classical result (the orbital magnetic moment) - this factor of two$^\dagger$ is called the g factor; analogous results hold for other subatomic particles, whose g-factors typically differ.
$\dagger$ Actually, loop diagrams in QED lead to a g-factor slightly greater than two: $2 + \frac{\alpha}{\pi} + \ldots$ as a perturbation series.
Nihar Karve
Thank you. This equation makes more sense.
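Bloch's remark that expectation values follow "the classical equations of motion" is the practical content of $\vec\mu = \gamma\vec a$: the torque equation becomes pure precession, $d\vec\mu/dt = \gamma\,\vec\mu\times\vec B$, with $|\vec\mu|$ conserved and angular rate $\gamma|\vec B|$. A quick numerical sketch (the $\gamma$ value is the proton's; the field and initial moment are arbitrary choices):

```python
import numpy as np

gamma = 2.675e8                 # proton gyromagnetic ratio, rad s^-1 T^-1
B = np.array([0.0, 0.0, 1.0])   # static 1 T field along z

def rhs(mu):
    # Bloch's classical equation of motion: dmu/dt = gamma * (mu x B)
    return gamma * np.cross(mu, B)

mu0 = np.array([1.0, 0.0, 0.5])         # arbitrary initial moment (arb. units)
T = 2*np.pi/(gamma*np.linalg.norm(B))   # one Larmor period
n = 20000
h = T/n
mu = mu0.copy()
for _ in range(n):                      # classic fixed-step RK4
    k1 = rhs(mu)
    k2 = rhs(mu + h/2*k1)
    k3 = rhs(mu + h/2*k2)
    k4 = rhs(mu + h*k3)
    mu = mu + h/6*(k1 + 2*k2 + 2*k3 + k4)

# |mu| is conserved and mu returns to its start after one Larmor period
print(np.linalg.norm(mu) - np.linalg.norm(mu0), np.max(np.abs(mu - mu0)))
```

The component along $\vec B$ stays fixed while the transverse part rotates, which is exactly the precessing magnetization picture used in MRI.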
How does VHF/UHF propagate beyond the expected (radio) horizon? I am not asking about the fairly well-known effect of the earth "appearing less curved to radio waves" that are otherwise still essentially line-of-sight, but a deeper arcanum: In the ARRL Antenna Book, 17th edition (1994) there is a discussion of "Reliable VHF coverage" starting on page 23-7 in the Radio Wave Propagation chapter. The claim is made, Because of age-old ideas, misconceptions about the coverage obtainable in our VHF bands persist. This reflects the thoughts that VHF waves travel only in straight lines, […] However, let us survey the picture in the light of modern wave-propagation knowledge and see what the bands above 50 MHz are good for on a day-to-day basis, ignoring the anomalies [presumably referring to the tropospheric ducting of the previous section] that may result in extensions of normal coverage. It goes on, after mentioning an article by D.W. Bray, K2LMG in the November 1961 QST magazine, to present two graphs that plot "tropospheric path loss" against distance. The curves therein rise steeply from 120 dB of loss at a distance of 0 miles [?!] to around 180 dB near 50 miles, then level off slightly so that at 500 miles there is a path loss around 240 dB. (That's reading the 50% reliability chart roughly; there are actually 4 lines plotted for 144/50, 220, 432, and 1296 MHz, as well as a second separate chart showing 99% reliability; the 99% reliability chart is very approximately 10–20 dB worse than the 50% one at any given point.) UPDATE: thanks to W0BTU Mike, here are the actual charts scanned from an earlier edition: What "modern wave-propagation knowledge" is this referring to? What mechanism(s) would allow VHF signals to be 99% reliably received 500 miles away, albeit with more than 250 dB of path loss, or 50%-of-the-time reliability with a little less loss? (These path-loss charts do NOT assume any antenna-height gain.)
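For scale, the chart values quoted above can be set against plain free-space path loss, $L_{fs} = 20\log_{10}d_{\mathrm{km}} + 20\log_{10}f_{\mathrm{MHz}} + 32.45$ dB; the gap between the two is what the over-horizon mechanisms have to account for. A quick sketch using the rough 144 MHz chart readings from the question:

```python
import math

def fspl_db(d_km, f_mhz):
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20*math.log10(d_km) + 20*math.log10(f_mhz) + 32.45

MILE_KM = 1.609344
# (distance in miles, approximate 50%-reliability chart reading in dB)
for miles, chart_db in [(50, 180), (500, 240)]:
    fs = fspl_db(miles*MILE_KM, 144)
    print(f"{miles:4d} mi: free-space {fs:5.1f} dB, chart ~{chart_db} dB, "
          f"excess ~{chart_db - fs:.0f} dB")
```

The chart's loss exceeds free space by tens of dB even at 50 miles, so these curves are describing something much weaker than a direct ray.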
propagation vhf uhf
asked by natevw - AF7TB
Troposcatter? EME? – Phil Frost - W8II Feb 24 '17 at 14:09
Here are some nomographs and accompanying text from an earlier version of the ARRL VHF Manual. Go to w0btu.com/files/vhf and download VHF_distance_coverage_nomographs.zip. I've found them to be a good predictor of VHF coverage back when I was operating SSB and CW on the low end of 2m back in the 1980s. And somewhat related is this webpage: w0btu.com/VHF-UHF_vertical_antenna_stacking.html – Mike Waters♦ Mar 7 '17 at 15:14
@MikeWaters Thanks, those are very similar to my edition! I've grabbed the chart I tried to describe and added it to my question, hope you don't mind. (Was on your site just the other day while researching beverage antennas and glad to meet you on this site now too!) – natevw - AF7TB Mar 7 '17 at 20:03
@natevw-AF7TB I don't mind at all! In August 2018 I converted the TIFF files to four PNG files there. Feel free to add those. – Mike Waters♦ Oct 5 '18 at 18:38
Turns out that, after turning to discussion of HF propagation for a number of intervening pages, this Antenna Book ends up getting back around to its own answer for this question! From the "Scatter Modes" section on page 23-30 of the same 17th edition:
The wave energy of VHF stations is not gone after it reaches the radio horizon, described early in this chapter. It is scattered, but it can be heard to some degree for hundreds of miles. Everything on Earth, and in the regions of space up to at least 100 miles, is a potential scattering agent. Tropospheric scatter is always with us […] this is what produces that nearly flat portion of the curves given in an earlier section on reliable VHF coverage. … As long ago as the early 1950s, VHF enthusiasts found that VHF contests could be won with high power, big antennas and a good ear for signals deep in the noise.
… Ionospheric scatter works much the same as the tropo version, [… and] can fill in the skip zone with marginally readable signals scattered from ionized trails of meteors, small areas of random ionization, cosmic dust, satellites and whatever may come into the antenna patterns at 50 to 150 miles or so above the Earth. […] [bold added for emphasis] It goes on similarly to discuss "backscatter" and "transequatorial scatter" before going on to a different section on "auroral propagation" (which can also affect VHF but is probably not related to the reliable propagation graphs). So in short, "scatter" (in many forms) is claimed as the mechanism that allows VHF signals to be heard hundreds of miles beyond the primary "radio horizon". I also believe the ARRL editors consider the experimental discoveries of the various scatter modes to be the "modern wave-propagation knowledge" referred to earlier — in this "Scatter Modes" section there are a couple of historical references around the same dates as the QST article, including the "early 1950s" one quoted above as well as transequatorial scatter as "an amateur 50-MHz discovery in the years 1946–1947". Terrestrial, point-to-point propagation paths can exist due to diffraction of the radiated e-m fields over terrain peaks and man-made structures, with each diffraction adding loss at the receive antenna to the normal inverse-distance field loss for an LOS path of that total length. The graphic below illustrates this for an FM broadcast station, where the line-of-sight path is severely obstructed, but the signal can be received well beyond those obstructions. The real propagation path would consist of several straight-line segments over terrain peaks, joined together to connect the transmit and receive antennas. In this example, the additional loss due to diffractions compared to an LOS path of that total length is shown to be 76.59 dB. The received field will vary over time depending on atmospheric K-factor and other conditions.
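The single-obstruction version of the diffraction loss described above can be sketched with the standard knife-edge model: the Fresnel parameter is $\nu = h\sqrt{\tfrac{2}{\lambda}\left(\tfrac{1}{d_1}+\tfrac{1}{d_2}\right)}$, and ITU-R P.526 gives the approximation $J(\nu) \approx 6.9 + 20\log_{10}\left(\sqrt{(\nu-0.1)^2+1}+\nu-0.1\right)$ dB for $\nu > -0.78$. The geometry below is made up for illustration (it is not the 76.59 dB multi-edge path in the answer):

```python
import math

def knife_edge_loss_db(h, d1, d2, f_mhz):
    """Single knife-edge diffraction loss (dB), ITU-R P.526 approximation.
    h: obstruction height above the direct ray in m (negative = clearance),
    d1, d2: distances from each antenna to the edge in m, f_mhz: frequency."""
    lam = 299.792458/f_mhz                   # wavelength in metres
    v = h*math.sqrt(2/lam*(1/d1 + 1/d2))     # Fresnel diffraction parameter
    if v <= -0.78:
        return 0.0                           # effectively unobstructed
    return 6.9 + 20*math.log10(math.sqrt((v - 0.1)**2 + 1) + v - 0.1)

# a 100 m ridge halfway along a 40 km, 98.1 MHz FM path (hypothetical numbers)
loss = knife_edge_loss_db(100, 20e3, 20e3, 98.1)
print(round(loss, 1), "dB additional loss over line-of-sight")
```

Each additional edge on a real path adds its own such term, which is how the large aggregate losses in the answer's example arise.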
Richard Fry
Older ARRL publications I have refer to what you describe in the first paragraph as knife edge diffraction, IIRC. But they don't mention man-made structures (clusters of tall buildings in cities?), only jagged mountain peaks. That's intriguing! Do you happen to know of any examples? – Mike Waters♦ Oct 5 '18 at 18:48
Here is a quote with a short reference about this from Engineering Considerations for Microwave Communications Systems (GTE Lenkurt, Inc, 1970): "The effect of man-made obstacles depends entirely upon their shape and position. Microwave-transparent objects, which are few, are ignored. A large round container such as a gas storage reservoir, if partially in the path, causes both diffraction and dispersion as well as some blocking." – Richard Fry Oct 5 '18 at 19:51
I really don't know – in a review, I'd have marked that sentence as far too vague – what the author means with "modern wave-propagation knowledge". If I need to benchmark that knowledge against a 31 year old Ham article, well… I don't really think it's a great advocate. "Modern knowledge", to me, is probably something that research on an academic level has yielded and is now dissipating into technological areas such as amateur radio, and thus, an article in a ham mag might, by definition, not be used as describing the state of the sciences in 1961¹. So, I dispute your "expected radio horizon"; just because a ham expert in 1961 modeled something is really no great reason to expect the same to be accurate in 1994.
Anyway, having minimal knowledge of earth observation and radio propagation modelling, I'd go for:
- having the ability to actually simulate non-trivial situations, including:
  - clearer idea of atmospheric properties such as permittivity ($\epsilon_r$) and magnetic permeability ($\mu_r$) as well as charge density ($\rho$),
  - non-linear gradients of above properties,
  - non-perfectly-spherical atmosphere,
  - actually looking into things that are far, far more granular than just saying "ok, here's the troposphere that we model as a thing with the following refractive index and a constant attenuation $\alpha$", including effects like weather-based inversion of conductivities etc.
- being able to model effects of ground conductivity etc.
- actually having data on how charged, conductive, vapor-saturated and shaped the strata of the atmosphere are, based on a lot of satellite and radioastronomy experiments
- having a far better understanding of the interaction between large antennas and their surroundings
¹I know that article is cited in many ham places. I've never read it. However, it seems to me that modeling VHF transmission's reach in 1961 should basically be equivalent to asking a couple of WWII and cold war radio engineers; it's really not like radio reach is not a very important strategic factor, and I'm very certain that all involved parties had very accurate recordings of how far they could reach, and will have worked to improve their models to match that, long before 1961. These models might not have been public back in the day, but really, they also shouldn't be that much of rocket science to recreate.
In 1994, there should be no "surprise" in how VHF propagates terrestrially – I really think it's very worth writing articles that bring research-level theory, models and experimentation to the amateur masses (which, by the beard of Hertz, are pretty good at such things), but you must then compare these to the state of the art back then in academics, not in ham mags.
That's just unfair – many countries simply had restrictions on radio usage during WWII, so the amateur radio community just needed a decade or two to catch up. That catching-up phase was an especially fruitful one, what with all the semiconductor technology emerging at the same time. Downside of that being a phase is that if you look online, you still find a lot of people trying to build the 1960s kits nowadays, even trying to get the same diodes and transistors of the day – there's really no good reason you'd want a Germanium noise gen– err transistor amplifier if you can have a silicon one for cheap, if you just don't stick to material from the "golden era of rediscovering possibilities". I attribute that to a lot of kit manufacturers and mag publishers just copying articles from back then, until the original source and its restrictions got lost. Enough ranting for today.
Marcus Müller
Hi Marcus, I don't begrudge your rant; I hear what you're saying but would appreciate a bit more balance towards explaining "ϵ, μ gradients" (you mean EM fields?) and your other ideas for the mechanisms there. – natevw - AF7TB Feb 23 '17 at 18:11
You're right – though I must admit that I'm looking at this a bit from an "aftermath" position. It's kind of hard for me to know to what state to compare – for example, 1961's models definitely had tropospheric diffraction models, but I don't know how well they actually modelled things like large-scale "waveguiding" due to relatively strong changes of atmospheric properties within a couple m of height difference (i.e. weather), or whether the models assumed effects of ionized upper atmosphere and so on. – Marcus Müller Feb 23 '17 at 18:24
Thanks! And re. "expected radio horizon" I am not sure what you are disputing? My intent was to draw a distinction between the notion of a basically "hard" radio horizon 15% further away than the visual horizon (e.g.
en.wikipedia.org/w/… expectations) versus the notion that signals propagate weakly beyond that — how? – natevw - AF7TB Feb 23 '17 at 18:25
I must admit I wasn't even aware of this 115% Line-Of-Sight model! – Marcus Müller Feb 23 '17 at 18:26
I'd have to get a large stack of paper out and get going, but the idea is: As light entering water, beams always bend towards the normal at a medium interface if they enter a medium with higher refractive index $n$ (optical density), and away from it when going into one of lower refractive index. Electromagnetically, this is an effect of how the Poynting vector shifts if you increase $\epsilon$ or $\mu$ ($n=\sqrt{\epsilon\mu}$, by the way). You can now model the atmosphere as a ball with decreasing $n$ the further you go from the center; thus, you can, analytically, find an equation for EM "beam-bend". – Marcus Müller Feb 23 '17 at 18:33
Historically, this is the best time of year (late September through early December) in the northern hemisphere to enjoy VHF and UHF propagation enhancements ("band openings") caused by sharp air temperature differences between two (or more) distinct layers of air. These occur far below the ionosphere. However, band openings can occur in any season. The two types --somewhat related-- are described below.
Temperature Inversions
By far the most common type. Just two layers in the troposphere are involved. Often incorrectly referred to as "ducting", these occur where there is a sudden change in air temperature vs. height, and can even cover an area encompassing many hundreds of square miles. They usually occur along a cold occlusion or occluded front. While most common in the fall and spring, I know a ham in Ohio who observed a spectacular band opening years ago in January, where the temperature dropped from above freezing to -20° F in a matter of hours.
(It would have been even more spectacular had other hams known about it, but there was no Internet or VHF DX clusters in those days.)
Tropospheric Ducting
A genuine duct consists of three layers of air. They can be spectacular and extend over a greater distance, but are rare. A true tropo duct almost never covers an area anywhere near as large as a temperature inversion. By "waveguiding" I assume that Marcus means "tropospheric ducting". Below is what a duct looks like. More details can be found here.
Features common to both inversions and ducts
Both inversions and ducts usually occur when the air is relatively calm. Once it gets windy, the air masses start to mix and the band opening gradually disappears. Experienced VHF/UHF enthusiasts understand that whenever a hot day followed by a rapid and large temperature drop in the evening occurs, there might be a good opening. The larger the drop, the better the DX. You can tell if you have a band opening in your area by transmitting on 146.94 (or another very common FM repeater frequency) followed by receiving countless beeps, heterodynes, and distant repeater IDs when you unkey and listen. During the better band openings, all it takes is an HT to experience that. Depending on who you ask, that can either be a blessing or a nuisance. ;-) They can and do cause interference to both local and distant repeater groups. Hams operating SSB or CW below ~144.250 or on simplex FM frequencies within that area are treated to a special delight.
$\endgroup$ – Mike Waters♦ Oct 6 '18 at 2:42 $\begingroup$ I'd just like to point out that the refraction of electromagnetic waves by a temperature inversion, is exactly why you see the 'mirage' effect on a hot road (which has very hot air in contact with the road, with cooler air above it) $\endgroup$ – Scott Earle♦ Oct 7 '18 at 12:54 Not the answer you're looking for? Browse other questions tagged propagation vhf uhf or ask your own question. Can I talk 2 Meter VHF across the United States Effect of night on VHF/UHF propagation Why can't VHF / UHF be used with ionosphere reflection? VHF / UHF - Terrestrial Circular Polarized Antenna UHF Into VHF Power Amplifier How to learn about the physics behind propagation? Troposcatter: really all that bad? Why is VHF better than UHF in this situation? Better explanation for beyond-line-of-sight VHF signal reach than "lower curvature to radio waves" Weakly directive VHF/UHF antenna Does a UHF/VHF handheld transceiver kit exist?
CommonCrawl
Integrable reductions of the dressing chain
December 2019, 6(2): 277-306. doi: 10.3934/jcd.2019014
Charalampos Evripidou 1,2,*, Pavlos Kassotakis 2 and Pol Vanhaecke 3
1 Department of Mathematics, Faculty of Science, University of Hradec Kralove, Czech Republic
2 Department of Mathematics and Statistics, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus
3 Université de Poitiers, Laboratoire de Mathématiques et Applications, UMR 7348 du CNRS, Bât. H3, Boulevard Marie et Pierre Curie, Site du Futuroscope, TSA 61125, 86073 POITIERS Cedex 9, France
* Corresponding author: Charalampos Evripidou
Received March 2019 Revised July 2019 Published November 2019
Fund Project: The research of the first author was supported by the project "International mobilities for research activities of the University of Hradec Králové", CZ.02.2.69/0.0/0.0/16_027/0008487.
In this paper we construct a family of integrable reductions of the dressing chain, described in its Lotka-Volterra form. For each $k, n\in \mathbb N$ with $n \geqslant 2k+1$ we obtain a Lotka-Volterra system $\hbox{LV}_b(n, k)$ on $\mathbb{R}^n$ which is a deformation of the Lotka-Volterra system $\hbox{LV}(n, k)$, which is itself an integrable reduction of the $(2m+1)$-dimensional Bogoyavlenskij-Itoh system $\hbox{LV}({2m+1}, m)$, where $m = n-k-1$. We prove that $\hbox{LV}_b(n, k)$ is both Liouville and non-commutative integrable, with rational first integrals which are deformations of the rational first integrals of $\hbox{LV}({n}, {k})$. We also construct a family of discretizations of $\hbox{LV}_b(n, 0)$, including its Kahan discretization, and we show that these discretizations are also Liouville and superintegrable.
Keywords: Integrable systems, deformations, discretizations.
Mathematics Subject Classification: 53D17, 70H06.
Citation: Charalampos Evripidou, Pavlos Kassotakis, Pol Vanhaecke. Integrable reductions of the dressing chain. Journal of Computational Dynamics, 2019, 6 (2) : 277-306. doi: 10.3934/jcd.2019014
Ramachandran, B., Lau, Ka-Sing. Functional equations in probability theory (1991).
Ramachandran, B. Some pioneering results on infinite divisibility revisited. Journal of the Indian Statistical Association 2005. 43:63-76
Barakat, H. M., Ramachandran, B. Continuability/identifiability of local weak limits for certain normalized intermediate/central rank sequences of order statistics. Journal of the Indian Statistical Association 2001. 39:1-31
Ramachandran, B. On the results of Ibragimov, Titov, and Blank on distribution functions on the real line -- their convolution powers coinciding on a half-line. Journal of the Indian Statistical Association 1998. 36:75-81
Ramachandran, B. On the determinacy of certain probability distribution functions on the real line through their values on a half-line. Journal of the Indian Statistical Association 1998. 36:45-59
Ramachandran, B. On certain classes of functions of bounded variation on the real line which are determined by their values on a half-line. Journal of the Indian Statistical Association 1998. 36:61-73
Ramachandran, B. Characteristic functions with some powers real -- III. Statistics & Probability Letters 1997. 34:33-36
Ramachandran, B. On geometric-stable laws, a related property of stable processes, and stable densities of exponent one. Annals of the Institute of Statistical Mathematics 1997. 49:299-313
Ramachandran, B. Complex-valued characteristic functions with (some) real powers -- II. Journal of Statistical Planning and Inference 1997. 63:337-349
Ramachandran, B. Complex-valued characteristic functions with (some) real powers. Sankhyā, Series A 1996. 58:1-7
Ramachandran, B. Characteristic functions taking constant values on intervals of the real line. Statistics & Probability Letters 1996. 28:269-270
Ramachandran, B., Rao, C. Radhakrishna. Solutions of functional equations arising in some regression problems and a characterization of the Cauchy law.
Ramachandran, B., Rao, C. Radhakrishna. Some results on characteristic functions and characterizations of the normal and generalized stable laws.
Ramachandran, B. On the theorems of Rossberg and Riedel on the normal law. Sankhyā, Series A 1993. 55:321-
Ramachandran, B. Characteristic functions and shift-symmetry. Gujarat Statistical Review 1990. 17A:169-174
Lau, Ka-Sing, Ramachandran, B. Integrated Cauchy functional equations with error terms on $[0, \infty)$.
Ramachandran, B. Two remarks on vague convergence. Sankhyā, Series A 1989. 51:233-235
Ramachandran, B., Lau, Ka-Sing, Gu, Hua Min. On characteristic functions satisfying a functional equation and related classes of simultaneous integral equations. Sankhyā, Series A 1988. 50:190-198
Ramachandran, B., Seshadri, V. On a property of strongly reproductive exponential families on $R$ (Corr: V7 p87). Statistics & Probability Letters 1988. 6:171-174
Ramachandran, B. On the equation $f(x)=\int_{-\infty}^{+\infty}f(x+y)\,d\mu(y)$ for $x\ge0$. Sankhyā, Series A 1988. 50:44-51
Ramachandran, B. On the equation $f(x)=\int f(x+y)\,d\mu(y)$. Sankhyā, Series A 1987. 49:195-198
Ramachandran, B. Renewal type equations on $Z$ (STMA V26 2695). Sankhyā, Series A 1984. 46:319-325
Ramachandran, B., Rao, B. L. S. P. On the equation $f(x)=\int f(x+y)\,d\mu(y)$ (STMA V26 2696). Sankhyā, Series A 1984. 46:326-338
Ramachandran, B. An integral equation in probability theory and its implications.
Ramachandran, B. On the equation $f(x)$ = an improper integral of $f(x+y)$ with respect to a measure on $y$. Sankhyā, Series A 1982. 44:364-371
Ramachandran, B. On some fundamental lemmas of Linnik (STMA V22 1479).
Ramaswami, V., Balasubramanian, K., Ramachandran, B. Correction to ``The stable laws revisited'' (1976, V38, p300-303). Sankhyā, Series A 1980. 42:298-299
Ramachandran, B. On the strong Markov property of the exponential laws (STMA V22 1478).
Ramachandran, B. On the ``strong memorylessness property'' of the exponential and geometric probability laws. Sankhyā, Series A 1979. 41:244-251
Ramachandran, B. On a conjecture of Geisser's. Sankhyā, Series A 1975. 37:423-427
Ramachandran, B., Rao, C. Radhakrishna. Solutions of functional equations arising in some regression problems and a characterization of the Cauchy law. Sankhyā, Series A 1970. 32:1-30
Ramachandran, B. On characteristic functions and moments. Sankhyā, Series A 1969. 31:1-12
Ramachandran, B., Rao, C. Radhakrishna. Some results on characteristic functions and characterizations of the normal and generalized stable laws. Sankhyā, Series A 1968. 30:125-140
Ramachandran, B. On one-sided distribution functions. Sankhyā, Series A 1966. 28:315-318
Ramachandran, B. An extension of a theorem of Mamay, with application. Sankhyā, Series A 1965. 27:303-310
Ramachandran, B. On the order and the type of entire characteristic functions. The Annals of Mathematical Statistics 1962. 33:1238-1255
Free Probability and its Applications
Session code: fpa
Session type: Special Sessions
Organizers: Michael Anshelevic (Texas A&M University), Octavio Arizmendi (CIMAT), James A. Mingo (Queen's University)

Tuesday, Jul 25 [McGill U., Arts Building, Room 260]
11:45 Victor Perez Abreu (Centro de Investigación en Matemáticas, Guanajuato, Mexico), On new noncommutative processes arising from matricial random processes
12:15 Jonathan Novak (University of California, San Diego), Semiclassical asymptotics of $\mathrm{GL}_N(\mathbb{C})$ tensor products
14:15 Mario Diaz (Queen's University), A New Application of Free Probability Theory: Data Privacy
14:45 Solesne Bourguin (Boston University), Some recent results on Wigner integrals
15:45 Paul Skoufranis (York University), Conditional Bi-Free Independence
16:15 Yinzheng Gu (Queen's University), Bi-monotonic independence for pairs of algebras
17:00 Pierre Tarrago (Centro de Investigación en Matemáticas, Guanajuato, Mexico), Free wreath product quantum groups and free probability
17:30 Jiun-Chau Wang (University of Saskatchewan), Multiplicative bi-free infinite divisibility

Wednesday, Jul 26 [McGill U., Arts Building, Room 260]
11:15 Alexandru Nica (University of Waterloo), An application of free cumulants to meandric systems
11:45 James Pascoe (Washington University in St.
Louis), Applications of model-realization theory to inverse problems in free probability
13:45 Todd Kemp (University of California, San Diego), Partitioned Matrices with Correlations
14:15 Emily Redelmeier (ISARA Corporation), Cumulants in the finite-matrix and higher-order free cases
14:45 Brent Nelson (University of California, Berkeley), Free Stein kernels and an improvement of the free logarithmic Sobolev inequality
15:15 Alexey Kuznetsov (York University), Free stable distributions
16:15 Isaac Pérez Castillo (Universidad Nacional Autónoma de México), Large deviation function for the number of eigenvalues of sparse random graphs inside an interval

Victor Perez Abreu (Centro de Investigación en Matemáticas, Guanajuato, Mexico)
On new noncommutative processes arising from matricial random processes
The Dyson-Brownian motion is the process of eigenvalues of a matrix Brownian motion. The limit of its empirical spectral process, as the dimension increases, is the free Brownian motion. In this talk we present new noncommutative processes arising in a similar way from fractional Hermitian Brownian and fractional Wishart processes.
Scheduled time: Tuesday, July 25 at 11:45
Location: McGill U., Arts Building, Room 260

Jonathan Novak (University of California, San Diego)
Semiclassical asymptotics of $\mathrm{GL}_N(\mathbb{C})$ tensor products
It has been known since seminal work of Biane that the asymptotic behaviour of irreducible representations of the complex general linear group in coupled semiclassical/large-dimension limits is governed by additive free convolution. Biane's original work on this connection required superlinear decay of the semiclassical parameter as a function of $N$. More recently, Bufetov and Gorin conjectured that linear decay is sufficient. I will present recent work with Collins and Sniady in which we prove an unconditional result: semiclassical limits of tensor products are governed by free probability irrespective of the decay rate of the semiclassical parameter.
Mario Diaz (Queen's University)
A New Application of Free Probability Theory: Data Privacy
Many applications of free probability theory have emerged from random matrix theory problems. In this talk we will discuss a new problem connected to the area of data privacy. In addition to our recent developments, we will discuss a conjecture concerning the free multiplicative convolution of a Marchenko-Pastur and a Bernoulli distribution. This is joint work with S. Asoodeh, J. Mingo, S. Belinschi, F. Alajaji, and T. Linder.

Solesne Bourguin (Boston University)
Some recent results on Wigner integrals
In this talk, we present recent results dealing with Wigner integrals, which are the building blocks of free Brownian motion functionals. We will discuss limit theorems and quantitative limit theorems involving Wigner integrals, as well as several characterizations of freeness for free random variables having the form of Wigner integrals. If time permits, we will discuss applications of these characterizations to limit theorems.

Paul Skoufranis (York University)
Conditional Bi-Free Independence
In this talk, we discuss the recent extension of the notion of bi-free independence to two-state systems. This so-called conditional bi-free independence enables one to keep track of more information pertaining to the actions of the left and right regular representations on reduced free product spaces, thereby permitting a greater number of non-commutative probabilities to be modelled. The focus of this talk will be the definition of conditional bi-free independence, the combinatorial formula for both the moment and cumulant functions, the operator-valued setting, the partial R-transforms, and infinitely divisible conditional bi-free distributions. (Joint work with Y. Gu.)

Yinzheng Gu (Queen's University)
Bi-monotonic independence for pairs of algebras
According to Muraki's classification work, there are only five notions of independence in a natural sense: tensor, free, Boolean, monotonic, and anti-monotonic.
Following Voiculescu's extension from free to bi-free independence, the notion of Boolean independence has recently been upgraded to bi-Boolean independence as well. In this talk, we consider a similar generalization in the framework of monotonic probability and introduce the notion of bi-monotonic independence for pairs of algebras. Time permitting, we will discuss related topics such as bi-monotonic cumulants, convolution, and a connection with operator-valued monotonic independence.

Pierre Tarrago (Centro de Investigación en Matemáticas, Guanajuato, Mexico)
Free wreath product quantum groups and free probability
A free wreath product is an algebraic construction which builds a new quantum group from a compact matrix quantum group and a non-commutative permutation group, in the same spirit as the usual wreath product. In this talk, I will present some recent results on the representation theory of certain free wreath products: I will first introduce the notion of vectorial Boolean cumulants for a compact quantum group, and then I will give an explicit basis of the intertwiner spaces of a free wreath product in terms of those vectorial Boolean cumulants. We significantly use a connection between representation theory of non-commutative permutation groups and planar algebras, and some of our results generalize to arbitrary free products of planar algebras. This is a joint work with Jonas Wahl.

Jiun-Chau Wang (University of Saskatchewan)
Multiplicative bi-free infinite divisibility
We will present a bi-free analogue of Khintchine's characterization of infinite divisibility for commuting left and right unitaries.

Alexandru Nica (University of Waterloo)
An application of free cumulants to meandric systems
I will discuss a class of meandric systems which contains a good number of interesting examples, and where on the other hand the expected number of components can be calculated via arguments concerning free cumulants. This is joint work with Ian Goulden and Doron Puder.
Scheduled time: Wednesday, July 26 at 11:15

James Pascoe (Washington University in St. Louis)
Applications of model-realization theory to inverse problems in free probability
Classically, Nevanlinna showed that there is a bijection between positive finite Borel measures on the reals and analytic self-maps of the upper half plane which satisfy an asymptotic condition, via the Cauchy transform. More recently, analogous problems have been considered in free probability by various authors. That is, there should be a correspondence between noncommutative probability and function theory on a noncommutative upper half plane. We will discuss how to re-frame recent developments in Agler model-realization theory developed on the upper half plane to completely understand the inverse problem in the free probabilistic context. This talk represents joint work with Benjamin Passer and Ryan Tully-Doyle.

Todd Kemp (University of California, San Diego)
Partitioned Matrices with Correlations
There is a vast literature on "band matrices", which are symmetric random matrices with independent but not necessarily identically-distributed entries. Recently, I have been studying models where independence is also abandoned. I will present two kinds of results. First, in joint work with D. Zimmermann, we show that if the matrix entries are partitioned into independent blocks, and if the blocks are not too big, then the empirical spectral distribution still concentrates on its mean as the dimension grows. Second, I will discuss recent and ongoing work with undergraduate research students on several partitioned matrix models whose blocks grow with dimension, where the empirical spectral distribution has a limit which can be computed using the tools of operator-valued free probability, and other combinatorial and analytic means.
Emily Redelmeier (ISARA Corporation)
Cumulants in the finite-matrix and higher-order free cases
I will discuss an extension of the matrix cumulants first defined by Capitaine and Casalis which follows naturally from their interpretation in connection with diagrammatic methods for computing matrix integrals. The asymptotics of these quantities are the higher-order free cumulants. I will focus on the real and quaternionic cases.

Brent Nelson (University of California, Berkeley)
Free Stein kernels and an improvement of the free logarithmic Sobolev inequality
In their 2015 paper, Ledoux, Nourdin, and Peccati used Stein kernels and Stein discrepancies to improve the classical logarithmic Sobolev inequality (relative to a Gaussian distribution). Simply put, Stein discrepancy measures how far a probability distribution is from the Gaussian distribution by looking at how badly it violates the integration by parts formula. In free probability, free semicircular operators are known to satisfy a corresponding "integration by parts formula" by way of the free difference quotients. Using this fact, we define in this talk the non-commutative analogues of Stein kernels and Stein discrepancies and use them to produce an improvement of Biane and Speicher's free logarithmic Sobolev inequality from 2001. This talk is based on joint work with Max Fathi.

Alexey Kuznetsov (York University)
Free stable distributions
I will discuss some analytical properties of free stable distributions, derived using the Mellin transform technique. The results include an explicit formula for the Mellin transform and explicit series representations for the characteristic function and for the density of a free stable distribution. All of these formulas bear close resemblance to the corresponding expressions for classical stable distributions. One consequence of these results is a factorization of a classical stable random variable into an independent (in the classical sense) product of a free stable random variable and a power of a Gamma(2) random variable.
This talk is based on joint work with Takahiro Hasebe.

Isaac Pérez Castillo (Universidad Nacional Autónoma de México)
Large deviation function for the number of eigenvalues of sparse random graphs inside an interval
We present a general method to obtain the exact rate function $\Psi_{[a,b]}(k)$ controlling the large deviation probability $\text{Prob}[\mathcal{I}_N[a,b]=kN] \asymp e^{-N\Psi_{[a,b]}(k)}$ that an $N \times N$ sparse random matrix has $\mathcal{I}_N[a,b]=kN$ eigenvalues inside the interval $[a,b]$. The method is applied to study the eigenvalue statistics in two distinct examples: (i) the shifted index number of eigenvalues for an ensemble of Erd\"os-R\'enyi graphs and (ii) the number of eigenvalues within a bounded region of the spectrum for the Anderson model on regular random graphs. A salient feature of the rate function in both cases is that, unlike rotationally invariant random matrices, it is asymmetric with respect to its minimum. The asymmetric character depends on the disorder in a way that is compatible with the distinct eigenvalue statistics corresponding to localized and delocalized eigenstates. The results also show that the level compressibility $\kappa_2/\kappa_1$ for the Anderson model on a regular graph fulfills $0 < \kappa_2/\kappa_1 < 1$ in the bulk regime, in contrast to the behavior found in Gaussian random matrices.
Density-functional fluctuation theory of crowds
J. Felipe Méndez-Valderrama, Yunus A. Kinkhabwala, Jeffrey Silver, Itai Cohen & T. A. Arias
Subjects: Computational biophysics; Computer modelling; Statistical physics

A primary goal of collective population behavior studies is to determine the rules governing crowd distributions in order to predict future behaviors in new environments. Current top-down modeling approaches describe, instead of predict, specific emergent behaviors, whereas bottom-up approaches must postulate, instead of directly determine, rules for individual behaviors. Here, we employ classical density functional theory (DFT) to quantify, directly from observations of local crowd density, the rules that predict mass behaviors under new circumstances. To demonstrate our theory-based, data-driven approach, we use a model crowd consisting of walking fruit flies and extract two functions that separately describe spatial and social preferences. The resulting theory accurately predicts experimental fly distributions in new environments and provides quantification of the crowd "mood". Should this approach generalize beyond milling crowds, it may find powerful applications in fields ranging from spatial ecology and active matter to demography and economics.

Identifying the role of social interactions and environmental influences on living systems has been the goal of many recent studies of collective population behavior [1-15]. Current agent-based models of crowds can reproduce many emergent behaviors, ranging from random milling to swarming, but often must postulate preconceived rules for individual agent interactions with each other and their environment [1-10].
In contrast to such bottom-up approaches, some studies have inferred interaction rules from observations of individual motions within a crowd for a few species of fish [11,12], birds [13], and insects [14,15], but these studies have largely been limited to specific behaviors and have not been developed for making predictions under new circumstances. To date, a general predictive approach to emergent collective behavior in living systems has been lacking. Such approaches, however, have been developed successfully for large collections of interacting atoms and molecules in the field of statistical physics. One of the central tenets of statistical physics is that generic thermodynamic behaviors emerge from underlying interaction rules among large numbers of particles [16,17]. Remarkably, these emergent behaviors are often insensitive to the detailed nature of the underlying interactions. Here, we pursue the hypothesis that a similar scenario emerges in the study of large crowds [18-22], so that behaviors arising from generic agent-based models can be predicted using a top-down approach. Accordingly, our strategy is to begin with a family of models that roughly capture the "microscopic" behaviors of individuals as they rearrange within a crowd. We do this, not because we are interested directly in individual behaviors, but rather because we are interested in the generic "macroscopic" behaviors that emerge in crowds en masse. This tack is not a priori obvious since active systems do not possess a fixed energy, their temperature is ill-defined, and there are no obvious equilibrium states [23]. Nonetheless, we show here that mathematical equivalents of free energy, the Hamiltonian, and equilibrium states arise naturally from plausible models of crowd behavior. In this work, we present the following results.
We introduce a general class of plausible agent-based models in which two different functions, "vexation" and "frustration," quantify location and social preferences, respectively. For this class of models, we develop a coarse-grained approach stemming from classical density-functional theory (DFT) that allows us to determine the general mathematical form of the probability distributions describing a crowd. We then discuss the conditions a system must possess to be describable by our theory and test our approach using a living system consisting of walking fruit flies (Drosophila melanogaster), which we confine to a variety of two-dimensional environments. For this fruit-fly system, we successfully extract the vexation and frustration functions corresponding to a variety of different physical settings. Furthermore, these functions are sufficiently stable that, by mixing and matching functions from different experiments, we accurately predict crowd distributions in new environments. Finally, by exposing the fly system to conditions that elicit distinct social motivations, we are able to identify changes in the overall behavior of the crowd, i.e., its "mood," by tracking the evolution of the social preference function.

General mathematical form of crowd-density distributions

Consider, as an example, a crowd at a political rally (Fig. 1a). Under such circumstances, individuals will seek the best locations—presumably closest to the stage—while avoiding overcrowded areas where there is insufficient "personal space." Moreover, individuals will, from time to time, move to new, better locations that become available.

Fig. 1: Resulting density-functional approach. (a) Schematic of a crowd in which agents attempt to get as close to the stage as possible while avoiding overcrowding. (b) In the absence of interactions, the mean of each probability distribution (vertical dashed line) indicates location preference, from which we can extract a bin-dependent vexation functional, \(v_b\). (c) Resulting bin-dependent vexations. (d–f) Crowds in environments with uniform vexation but with neutral, repulsive, or attractive interactions. For neutral interactions, we expect complete spatial randomness leading to Poisson distributed counts within each bin. The repulsive and attractive interactions are thus reflected in the deviation of the probability distribution from the Poisson form [26]. From these deviations we can extract a bin-independent frustration functional, \(f_N\), whose curvature indicates the nature and intensity of the interaction.

A plausible agent-based model of this behavior would assign an intrinsic desirability to each location x through a "vexation" function V(x) that takes its minimum value at the most ideal location near the stage. In addition, it would account for crowding effects through the local crowd areal density n(x) by introducing a "frustration" function f′(n), so that the relative preferability of location x is actually the sum of vexation and frustration effects, V(x) + f′(n(x)). Finally, this model would include a behavioral rule to account for the tendency of individuals to seek improved locations. When an agent considers a move from location x to x′, the change in the agent's dissatisfaction is ΔH ≡ (V(x′) + f′(n(x′))) − (V(x) + f′(n(x))). A rule where each agent executes such moves with probability \(1/(e^{\Delta H} + 1)\) captures the intuition that moves that increase the dissatisfaction (ΔH > 0) are unlikely, moves that decrease the dissatisfaction (ΔH < 0) are likely, and moves with ΔH = 0 occur with 50% probability. The disadvantage of such an agent-based modeling approach is that the rules for each agent are postulated, and comparison with experiment requires gathering statistics from repeated simulations, each of which scales as the number of agents or worse.
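The move rule just described is straightforward to simulate. The following is a minimal sketch of our own (not the authors' code): agents hop between neighboring bins of a one-dimensional arena, each proposed move being executed with probability 1/(exp(ΔH)+1). The quadratic vexation, the linear frustration derivative f′(N) = g·N, and the convention that an agent does not count itself in its origin-bin density are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_crowd(n_agents=200, n_bins=21, steps=50000, g=0.5):
    """Agents occupy bins 0..n_bins-1. Each step, one agent proposes a
    hop to a neighboring bin b' and executes it with probability
    1/(exp(dH) + 1), where dH = (V[b'] + f'(N[b'])) - (V[b] + f'(N[b])).
    V is a quadratic vexation (minimum at the central bin) and
    f'(N) = g*N is a repulsive frustration derivative."""
    V = 0.1 * (np.arange(n_bins) - n_bins // 2) ** 2
    pos = rng.integers(0, n_bins, size=n_agents)
    counts = np.bincount(pos, minlength=n_bins).astype(float)
    for _ in range(steps):
        i = rng.integers(n_agents)                       # pick an agent
        b = pos[i]
        bp = min(max(b + rng.choice([-1, 1]), 0), n_bins - 1)
        if bp == b:
            continue
        # Exclude the moving agent itself from its origin-bin density.
        dH = (V[bp] + g * counts[bp]) - (V[b] + g * (counts[b] - 1.0))
        if rng.random() < 1.0 / (np.exp(dH) + 1.0):      # the move rule
            pos[i] = bp
            counts[b] -= 1.0
            counts[bp] += 1.0
    return counts

counts = simulate_crowd()  # crowd piles up near the vexation minimum
```

After relaxation the occupation profile balances vexation against repulsion, with the highest counts at the vexation minimum, which is exactly the kind of steady-state density statistics the top-down theory then describes without simulating individuals.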
Again, our purpose here is not to develop such a model in detail, but rather to explore the top-level, global behaviors that emerge from this class of models, which we conjecture should apply to crowds more generally. To extract such global behaviors, we develop a top-down approach by considering the system as a whole and summing the changes in the individual agent dissatisfactions ΔH to obtain a net global population dissatisfaction functional H[n(x)] (Methods). Integrating over dn and area element dA yields $$H[n({\mathbf{x}})] \equiv F[n({\mathbf{x}})] + {\int} V({\mathbf{x}})n({\mathbf{x}})dA,$$ where the net frustration effect at location x is described by \(f(n) = {\int} f\prime (n){\kern 1pt} dn\), and a local density approximation24,25 \(F[n({\mathbf{x}})] \equiv {\int} f(n({\mathbf{x}})){\kern 1pt} dA\) is in this case sufficient for capturing the crowd behavior. This global functional H[n(x)] and the model described above then lead mathematically to the prediction (Methods) that the probability for observing a crowd arrangement with density n(x) will be given by the probability density functional $$P[n({\mathbf{x}})] = Z^{ - 1}{\mathrm{exp}}( - H[n({\mathbf{x}})]),$$ where Z is an overall normalization constant. Since we cannot measure the function n(x) directly in experimental crowds, we instead consider discrete counts of individuals within equal area bins (quadrats)26. Thus, to make contact with experiments we discretize Eq. 1 as \(H = \mathop {\sum}\nolimits_b \left( {f_{N_b} + v_bN_b} \right)\), where vb is the average value of the vexation V(x) over bin b, and \(f_{N_b} \equiv f(N_b/A)A\) approximates the total frustration contribution of bin b of area A (Methods). Substituting this discretization into Eq. 2, the overall probability factors into independent distributions for each bin of the form $$P_b(N) = z_b^{ - 1}\frac{1}{{N!}}\left( {e^{ - v_b}} \right)^Ne^{ - f_N},$$ where zb is a bin-dependent normalization constant and N! 
accounts for equivalent configurations among the bins (Methods). Thus, we predict that the fluctuations of the bin counts will be statistically independent and follow a modified Poisson form for each bin. This formulation dramatically reduces the complexity of the system description from tracking each individual to tracking the local density in each bin. Additionally, instead of rules with potentially complex interactions for each agent, the global system behavior of the density is determined by just two functions, vb and a bin-independent fN. Because this reduction in the number of variables is the result of transitioning to a local-density description as in classical density-functional theory, but now with the modification that interactions are inferred from density fluctuations, we call our approach density-functional fluctuation theory (DFFT). Remarkably, rather than having to be postulated, these functions can be extracted directly from measurements of density distributions in each bin. In particular, in the case of neutral interactions (fN = 0), the bin counts will be single-parameter Poisson distributed, as expected for an experiment counting so-called completely spatially random events26. From the mean of these distributions one can extract an effective vb (Fig. 1b, c), or logarithm of the so-called intensity26, that can arise either from actual preferences for particular locations or from other kinetic interactions with the environment, such as slowing down near barriers27. When interactions are present, such probability distributions can vary substantially (Fig. 1e, f) from their non-interacting form (Fig. 1d). For example, so-called contagious distributions, which correspond to attractive interactions and show increased variance-to-mean ratios, have been observed26,28,29.
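The neutral-interaction baseline above suggests a simple recipe: read off vb from the mean count of a Poisson-distributed bin, and use the variance-to-mean ratio to flag attractive or repulsive deviations. A minimal sketch (the counts in the test are synthetic, not fly data):

```python
import math

def vexation_from_counts(counts):
    """Neutral interactions (f_N = 0) give Poisson-distributed bin counts
    with intensity exp(-v_b), so v_b = -ln(mean count) up to an overall
    additive constant (the normalization gauge)."""
    mean = sum(counts) / len(counts)
    return -math.log(mean)

def variance_to_mean(counts):
    """Ratio > 1 suggests attraction (contagious distributions), ratio < 1
    suggests repulsion; ratio = 1 is the Poisson/neutral baseline."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean
```

In practice each list of counts would be the time series of occupancies of one bin.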
If the interaction is strongly attractive, groups will form, resulting in a bimodal bin probability distribution corresponding to low- and high-density regions (Fig. 1e), with the high-density region constrained by the packing limit. In contrast, highly repulsive interactions (Fig. 1f) lead to a more uniform distribution of individuals in the crowd26 and narrow the bin probability distribution. Finally, from deviations from the Poisson form, we can determine an effective frustration function fN, without assuming any particular functional form, that describes any local interaction, attractive or repulsive. This formulation holds whether the interactions depend directly on density or on more complex factors such as orientation distributions and higher-order many-body interactions (Methods). The power of this approach is that, since vb is tied to the interactions with the environment and fN is tied to inter-agent interactions, it may be possible to combine vexations and frustrations from previous measurements to predict future crowd behaviors. Several conditions must be met when applying this methodology to crowds under realistic circumstances. For example, the system must be sufficiently ergodic. Thus, the time scales for measurements must be longer than the system decorrelation time. In addition, the agent interactions with their environment should be sufficiently independent of the agent density, the agent interactions should be sufficiently independent of location, and both should be stable over the measurement time. Finally, bin sizes must be appropriately chosen. The bins must be large enough to yield reliable estimates of density, as well as to avoid trivial correlations in neighboring bins, yet small enough that the underlying vexation and local density are nearly constant across each bin.
Extraction of functionals for model system of walking flies To test whether this approach applies to actual populations, we consider a model crowd consisting of wild-type male Drosophila melanogaster from an out-bred laboratory stock. It is well known that flies exhibit complex spatial preferences30,31 and social behaviors32,33. Here we seek to determine whether a large crowd of individuals with such complex behaviors indeed can be described within our vexation and frustration framework. The flies are confined in 1.5 mm tall transparent chambers where they can walk freely but cannot fly or climb on top of each other. We record overhead videos of the flies, bin the arena, and use custom MATLAB-based tracking algorithms (Methods) to measure the individual bin counts Nb in each video frame. To explore a variety of behaviors, we use arenas of different shapes30 and apply heat gradients34 across the arenas to generate different spatial preferences. We find that the flies fully adjust to such changes in their environments after 5 min. We also find that the behavior of the flies changes slowly over a time scale of hours (Methods). We thus take care to make our observations over 10-minute windows during time periods where the behavior is stable. A top-down image of 65 flies in a quasi-1D arena that is uncomfortably heated on the right is shown in Fig. 2a. We find that a bin size of 0.15 cm2, corresponding to the area of approximately 7 flies, ensures that the counts are spatially independent (Fig. 2b) and that the density does not vary substantially over each bin. We also find that the decorrelation time for Nb is about 5 s (Fig. 2c), indicating that the system is sufficiently ergodic over the time scale of our observation windows. We show representative probability distributions Pb(N) for a high and a low density bin in Fig. 2d, e, respectively. We find that the distribution peaks are centered at higher N near the left side of the chamber, suggesting lower vexation there.
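The decorrelation-time check mentioned above amounts to computing a normalized autocorrelation of each bin-count time series and reading off where it decays. A sketch (the synthetic series in the test is illustrative):

```python
import random  # used only to generate a synthetic series in the test

def autocorrelation(series, max_lag):
    """Normalized autocorrelation of a bin-count time series. The lag at
    which it decays toward zero estimates the decorrelation time, which
    must be short compared with the observation window for the
    ergodicity requirement discussed in the text."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    acf = []
    for lag in range(max_lag + 1):
        c = sum((series[t] - mean) * (series[t + lag] - mean)
                for t in range(n - lag)) / (n - lag)
        acf.append(c / var)
    return acf
```

Applied to real data, `series` would be the occupancy of one bin sampled frame by frame, and the analysis window would be chosen several decorrelation times long.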
Additionally, the high-density probability distribution is significantly narrower than the fitted Poisson distribution, hinting that there are repulsive interactions among the flies. Statistical analysis and extraction of functionals for walking fruit fly experiments. a Single frame of 65 flies walking in a quasi-1D chamber of dimensions 10 cm × 0.8 cm divided into 48 bins with approximate area 0.15 cm2. Heat is applied on the right side of the chamber so that the temperature varies from 35 °C on the left to 50 °C on the right. b Averaged spatial correlation function. c Averaged temporal correlation function. d–e Probability distributions of the number of flies in the two bins outlined in a in red and magenta, respectively. f The "pseudo-free energy," −ln(N!Pb(N)), for eight representative bins. The observed positive curvature indicates deviations from the Poisson form and repulsive interactions. g Frustration functional, fN, obtained from collapse of the pseudo-free energies for all 48 bins upon removal of the Poisson contributions. h Vexation for each bin as measured from the Poisson contributions to the pseudo-free energies. S.d. error bars in d–f computed from Bayesian posterior distribution assuming a Dirichlet prior. S.d. error bars in g computed from linear propagation of errors displayed in f To validate our description and quantify the vexations and frustrations, we plot what we call, as a mnemonic, the "pseudo-free energy" −ln(N!Pb(N)) = (vbN + ln zb) + fN versus N in Fig. 2f. To determine whether the frustration fN is indeed universal, we subtract a linear term corresponding to a bin-dependent vexation and normalization constant, vbN + ln zb, from each curve. Remarkably, the resulting curves can be made to collapse, indicating that a single, universal frustration function fN applies equally well to all bins (Fig. 2g).
The positive curvature indicates that higher densities are less preferable than expected from non-interacting populations, and thus indicates repulsive interactions. We also show the bin-dependent vexation values vb used to collapse the curves in Fig. 2h. Finally, as an indicator of the strength of the collapse, we find that modifying the best least-squares-fit Poisson distributions by including just eight universal frustration values (f0 through f7) decreases our reduced χ2 value for 166 degrees of freedom from 8.1 to 0.95. Additionally, the likelihood-ratio test favors our DFFT model, assigning probability p < 0.001 to the hypothesis that the frustration values should be taken to be zero and a vexation-only model be used. This latter test confirms that the aforementioned reduction in χ2 is not a result of overfitting (Methods). Predictions of crowd density under new circumstances An important consequence of the physical independence of fN from vb is that it should be possible to use the frustrations extracted from the quasi-1D chamber to predict fly distributions in distinct vexations (Fig. 3). We demonstrate this capability by predicting the measured density distributions for large numbers of flies (on the order of 100) in three distinct geometries and temperature gradients (Fig. 3a). Using measurements of just a few flies in each chamber, we extract density distributions and determine the corresponding vexation vb. Combining this few-fly vexation for each environment with the many-fly frustration fN extracted from the quasi-1D geometry, we predict the fly distributions under dense conditions. Fig. 3c, d shows this procedure for the stair-case geometry. We find that the individual fly probability distributions (density normalized by total number of flies) for low and high densities are significantly different (Fig. 3e).
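The collapse procedure can be illustrated on synthetic distributions generated from the modified Poisson form of Eq. 3. The gauge choice f0 = f1 = 0 used below is one convenient convention for fixing the linear freedom; it is not necessarily the one used in the experimental analysis:

```python
import math

def synthetic_bin_distribution(v_b, f, n_max):
    """P_b(N) ∝ exp(-v_b N - f_N)/N!, normalized over N = 0..n_max (Eq. 3)."""
    w = [math.exp(-v_b * N - f[N]) / math.factorial(N) for N in range(n_max + 1)]
    z = sum(w)
    return [x / z for x in w]

def pseudo_free_energy(P):
    """E_b(N) = -ln(N! P_b(N)) = v_b N + ln z_b + f_N."""
    return [-math.log(math.factorial(N) * p) for N, p in enumerate(P)]

def remove_linear_part(E):
    """Subtract the line through the N = 0 and N = 1 points; the remainder
    is bin-independent whenever a single frustration f_N describes every
    bin (here expressed in the gauge f_0 = f_1 = 0)."""
    slope = E[1] - E[0]
    return [e - (E[0] + slope * N) for N, e in enumerate(E)]
```

Two bins with different vexations but the same underlying frustration collapse onto the same residual curve after the linear subtraction, which is the signature exploited in Fig. 2g.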
In contrast, our DFFT approach, which includes the interactions, predicts a more homogeneous population that matches the observed distribution (Fig. 3d). These results demonstrate that, using our DFFT analysis, it is indeed possible to make accurate predictions by combining vexations from low-density experiments in different environments with a frustration that corresponds to a particular behavior ("mood"). Predictions of large crowd distributions in three new environments. a Experimental observations of dense crowds (124, 219, and 189 flies) in three chambers with different geometries, two with applications of heat creating temperature differences of up to 20 °C. b Measured single-fly probability distributions, NAve/NTot. c, d DFFT protocol applied to the stair-case geometry. c Measurement of the density for 3 flies is used to determine the vexation, vb. d Combining this vexation with the extracted quasi-1D frustration from Fig. 2 leads to the high-density DFFT prediction. e Comparison of single-fly probabilities for the sparse and dense populations shows significant population shifts as indicated by a correlation coefficient r = 0.73 and a σmean = 3.8. f DFFT analysis that incorporates interactions predicts the measured dense population distribution within statistical uncertainty (r = 0.96 with a σmean = 1.0). Vertical error bars correspond to s.d. of bin-occupation distributions and horizontal error bars correspond to s.e.m. of the observed density within a given bin Frustration used to quantify the "mood" of a crowd Conversely, by keeping the environmental conditions fixed and analyzing different time points in the experiments or changing the ratio of male to female flies, the resulting change in "mood" can be quantified by extracting the corresponding functionals. For example, after spending about six hours in the chamber without food or water, the flies exhibit transient groups or clusters of about 10–20 individuals.
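The mix-and-match prediction step can be sketched end to end: take a vexation profile (as would be extracted from a few-fly run), take a frustration from a separate experiment, and tune the reservoir parameter μ (Methods) until the predicted mean occupations sum to the target population. All numbers here are illustrative, not fits to the fly data:

```python
import math

def bin_mean(v, mu, f, n_max=60):
    """Mean occupation of a bin with vexation v under the grand-canonical
    form P(N) ∝ exp(-(v - mu)N - f_N)/N!, truncated at n_max."""
    w = [math.exp(-(v - mu) * N - f[min(N, len(f) - 1)]) / math.factorial(N)
         for N in range(n_max + 1)]
    z = sum(w)
    return sum(N * x for N, x in enumerate(w)) / z

def predict_dense(v_bins, f, n_total, n_max=60):
    """Bisect the reservoir parameter mu until the predicted mean
    occupations sum to the target number of agents, then return them."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mu = 0.5 * (lo + hi)
        if sum(bin_mean(v, mu, f, n_max) for v in v_bins) < n_total:
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    return [bin_mean(v, mu, f, n_max) for v in v_bins]
```

With fN = 0 the predicted means simply follow the Poisson intensities e^(μ−v); a repulsive frustration flattens the predicted profile, which is the homogenization effect seen in the dense-fly experiments.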
This change in behavior is quantified by the different curvatures for the frustrations fN characterizing the initial (blue curve) and deprived states (red curve) in Fig. 4. The nearly flat frustration associated with this behavior indicates that male flies are willing to surmount their natural repulsion and form higher density groups under deprivation conditions, a previously undocumented spontaneous self-organized change in collective behavior31,32,35. Attraction between individuals can be induced by introducing female flies. For groups of flies with equal numbers of males and females that have been separated for several days, we find pair formation (yellow ellipses). This behavior is characterized by a sharp downward curvature in the frustration at low N (yellow curve). Exposing this population to similar deprivation conditions drives formation of larger groups (purple circle) at the expense of pair formation. This behavior is captured by the shift of downward curvature in the frustration to larger bin occupations of N ≈ 7 (purple curve). These data establish that the DFFT approach has the power to detect and quantify changes in social behaviors. Extracting frustrations to quantify changing behavior. Frustrations measured for flies in a 4 cm square chamber. The experiment duration was seven hours. The frustrations were extracted from two different 10-minute intervals corresponding to the initial and final stages of experiments on two different populations. The blue curve (90♂) exhibits a positive curvature at all occupancies, indicating an aversion to crowding at all densities. The red curve characterizes interactions for the same population 6 hours later. The lower curvature indicates significantly reduced aversion to grouping. The yellow curve (30♂ + 25♀) exhibits a downward curvature at low occupations, reflecting mating interactions between pairs of flies (yellow ellipses).
At higher occupancies, the lack of curvature indicates a more neutral response to changes in occupation number. Finally, the purple curve characterizes interactions for the same mixed-sex population 6 hours later. The downward curvature shifts to higher occupancies and is followed by a region of positive curvature. The corresponding inflection point indicates a preference for group formation with a density of about eight flies per bin. S.d. error bars calculated from the maximum likelihood (ML) covariance matrix of the DFFT distribution in Eq. 3 Collectively, these results demonstrate that top-down approaches are a promising method for predicting crowd distributions and quantifying crowd behaviors. The DFFT analysis that we present is particularly powerful because it separates the influence of the environment on agents from interactions among those agents. This separation then enables predictions of crowd distributions in new situations through mixing and matching of the vexations and frustrations from previous observations in different scenarios. In addition, the real-time quantification of frustrations opens the door to tracking behavioral changes and potentially extrapolating the time evolution of frustrations to anticipate future behaviors. There are a number of directions in which the formal framework suggested here can be extended, paralleling developments from the traditional density-functional theory literature. Extensions to time-dependent DFT methods (TDDFT)36,37 would enable the prediction of situations in which crowds gather and disperse in response to changes in the environment. This approach would also apply to situations in which the center of mass of the entire group is moving as a whole, such as in herd migration and bacterial and insect swarming.
Moreover, by including the local current density ("flow") in the functional, such approaches may even be able to describe crowds where correlated subgroups move with different local velocities, such as in flocks of birds. Likewise, extensions to multicomponent DFT38 would enable corresponding predictions and observations in crowds composed of distinct groups exhibiting interactions such as inter-group conflict, predator-prey relations, or mating behavior. Should these results extend to human populations, the implications are profound. From publicly available video data of people milling in public spaces, this approach could predict how people would distribute themselves under extreme crowding. Additionally, a simple application running on a hand-held device could easily measure density fluctuations and extract functionals that are indicative of the current behavioral state or mood of the crowd. Through comparison with a library of functionals measured from past events, such an application could provide early warning as a crowd evolves towards a dangerous behavior. Finally, given the recent proliferation of newly available cell-phone and census data39,40, these approaches may also extend to population flows on larger scales, such as migration. Here, vexations could correspond to political or environmental drivers and frustrations to population pressures. The resulting predictions of migration during acute events would enable better planning by all levels of government officials, from local municipalities to international bodies40,41, with the potential to save millions of human lives. Global dissatisfaction functional H[n(x)] The main text describes a net global population dissatisfaction functional H[n(x)]. To derive this functional, we begin by considering a deterministic model, in which agents reject or accept potential moves with unit probability according to whether ΔH ≡ (V(x′) + f′(n(x′))) − (V(x) + f′(n(x))) is positive or negative, respectively.
In such a model, it is clear that equilibrium is attained and all motion ceases when ΔH = 0 for all pairs of points x and x′. This statement is equivalent to the combination V(x) + f′(n(x)) attaining some constant value μ across the system, $$V({\mathbf{x}}) + f\prime (n({\mathbf{x}})) = \mu .$$ This equation corresponds precisely to the Lagrange-multiplier equation for minimization of the functional $$H[n({\mathbf{x}})] \equiv {\int} f(n({\mathbf{x}})){\kern 1pt} dA + {\int} V({\mathbf{x}})n({\mathbf{x}}){\kern 1pt} dA,$$ subject to the constraint of fixed number of agents \(N = {\int} n({\mathbf{x}})\,dA\), with μ being the corresponding Lagrange multiplier. Here, μ plays an analogous role to the "chemical potential" from statistical physics. Probability density functional P[n(x)] To make the transition to the probability functional P[n(x)], we note that the stochastic model described in the text maps directly onto a particular Markov chain. Each step on this chain corresponds to a three-stage process. First, (a) an agent is selected at random to consider a possible move from current location x. Selecting a random agent at each time step allows agents to adjust their locations at equal rates. In this approach, choosing the physical time interval between Markov steps to be inversely related to the number of agents preserves the time scale of the overall crowd dynamics. Second, (b) a location x′ nearby x is selected at random as a move to be considered by the given agent. We note that for this work, we assume that the new location x′ is selected in a symmetric way so that agents at x contemplate moves to x′ with the same probability that agents at x′ contemplate moves to x. This assumption seems most plausible given the systems we consider here. Other selection criteria, however, are possible and would modify the distribution below.
Finally, (c) the contemplated move is accepted or rejected according to the probability 1/(eΔH + 1), where ΔH is defined specifically as the change in the value of the functional described in Eq. 5 as a result of the move. There are two critical things to note about this Markov chain. The first is that it gives a very natural description of agent behavior. The second is that it corresponds precisely to the standard Metropolis-Barker algorithm42,43 for drawing random samples from the Boltzmann distribution P ∝ exp(−H) for a Hamiltonian H. Thus, under our proposed motion model, the population itself naturally samples from the distribution quoted in the text, $$P[n({\mathbf{x}})] = Z^{ - 1}{\mathrm{exp}}( - H[n({\mathbf{x}})]).$$ Discretization H = Σb(fNb + vbNb) To arrive at the discretization described in the text, it is important to note that the density n(x) appearing in the probability functional P[n(x)] corresponds to the fluctuating crowd density, as opposed to the average density nave(x). As such, in practice, this density must be described in terms of the discrete locations xa of all agents a in the crowd at any given time. The most natural description for the associated density operator is $$n({\mathbf{x}}) = \mathop {\sum}\limits_a \delta ^{(\sigma )}({\mathbf{x}},{\mathbf{x}}_a),$$ where δ(σ)(x, xa) is a function describing the range over which the presence of an agent at xa contributes to the density n(x) at point x. To conserve the number of agents, this function must integrate to unity. The analysis carried out in the text divides space into bins b of area Ab, and estimates the density in each bin as n = Nb/Ab where Nb corresponds to the total number of agents in bin b.
This definition sets the range function as $${\delta}^{(\sigma )}({\mathbf{x}},{\mathbf{x}}_{a}) \equiv \left\{ {\begin{array}{*{20}{l}} {\frac{1}{A_{b}}} \hfill & {{\mathrm{if}}\,{\mathbf{x}}\,{\mathrm{and}}\,{\mathbf{x}}_{a}\,{\mathrm{are}}\,{\mathrm{in}}\,{\mathrm{the}}\,{\mathrm{same}}\,{\mathrm{bin}}\,b{\kern 1pt} } \hfill \\ 0 \hfill & {{\mathrm{otherwise}}{\kern 1pt} } \hfill \end{array}} \right.$$ To capture relevant variations in vexation and density, the bins cannot be selected so large that these quantities vary significantly across each bin. Alternatively, to avoid missing the effects of nearby agents, the bins cannot be selected to be smaller than the agent's interaction range. Finally, combining Eqs. 5, 7, and 8 yields $$H[n({\mathbf{x}})] = \mathop {\sum}\limits_b f_{N_b} + \mathop {\sum}\limits_b v_bN_b,$$ where \(f_{N_b} \equiv f(N_b/A_b)A_b\) and \(v_b \equiv {\int}_b V({\mathbf{x}}){\kern 1pt} dA/A_b\). Bin occupation probability distributions Pb(N) To arrive at the final discrete probability expression in the text, there are now two routes. One can directly insert Eq. 9 above into Eq. 2 from the main text, or one can employ Eq. 9 directly to compute ΔH to determine the probabilities for moves. In the latter case, the predicted probability distribution becomes exact so long as we interpret f′(n) in the main text at points x′ and x to represent forward and reverse finite-difference derivatives \(f_{+}^{\prime} (n({\mathbf{x}}\prime )) = (f(n({\mathbf{x}}\prime ) + {\mathrm{\Delta }}) - f(n({\mathbf{x}}\prime )))/{\mathrm{\Delta }}\) and \(f_{-}^\prime \left( {n\left( {\mathbf{x}} \right)} \right) = \left( {f\left( {n\left( {\mathbf{x}} \right)} \right) - f\left( {n\left( {\mathbf{x}} \right) - {\mathrm{\Delta }}} \right)} \right)/{\mathrm{\Delta }}\), respectively, where Δ ≡ 1/Ab.
Finally, because the Boltzmann factor above gives probabilities for individual arrangements of agents among bins, we must account for the multiple ways to realize a set of bin counts {Nb} by permuting individuals among the bins. Multiplying by the combinatorial factor Ntot!/(N1!…Nb!…), we find $$P(\{ N_b\} ) = \frac{{N_{{\mathrm{tot}}}!}}{Z}\mathop {\prod}\limits_b \frac{{e^{ - f_{N_b} - v_bN_b}}}{{N_b!}},$$ where Z is a normalization factor. As described in the text, we note that the form of the joint probability distribution above predicts the occupations of different bins to be very nearly statistically independent. The only deviation from complete statistical independence comes from the constraint of a fixed total number of agents \(N_{{\mathrm{tot}}} = \mathop {\sum}\nolimits_b N_b\). Due to this constraint, the probability distribution is difficult to use in making predictions. We can overcome this difficulty using a standard technique from statistical physics. Specifically, introducing a factor \(e^{\mu N_{{\mathrm{tot}}}}\) removes the constraint without significantly affecting the calculated local distributions. As a result, the individual bin distributions then become statistically independent and of the form $$P_b(N) = z_b^{ - 1}\frac{1}{{N!}}\left( {e^{ - (v_b - \mu )}} \right)^Ne^{ - f_N}.$$ In statistical physics this mathematical transformation corresponds to using a Grand Canonical Ensemble44 to simplify statistical calculations. Physically, this approach corresponds to relaxing the constraint of a fixed number of agents by allowing exchanges between the system being considered and a large reservoir whose vexation is controlled by μ. Mathematically, we can add and subtract a constant within the exponent, (vb − c − (μ − c)) without affecting the distribution. Accordingly, we redefine vb and μ with a constant shift such that vb ← vb − c and μ ← μ − c and, further, choose c so that μ = 0, resulting in Eq. 3 in the text. 
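The correspondence between the move model and the Boltzmann form, including the permutation factor, can be checked numerically in a minimal two-bin system. The vexation and frustration values in the test are illustrative assumptions:

```python
import math
import random

def exact_two_bin(n_tot, v, f):
    """Exact P(N agents in bin 0) from the Boltzmann form with the
    permutation factor, for two bins with vexations v[0], v[1] and a
    shared frustration list f (Eq. 10 specialized to B = 2)."""
    w = []
    for N in range(n_tot + 1):
        M = n_tot - N
        w.append(math.comb(n_tot, N)
                 * math.exp(-f[N] - f[M] - v[0] * N - v[1] * M))
    z = sum(w)
    return [x / z for x in w]

def sample_two_bin(n_tot, v, f, steps, seed=0):
    """Metropolis-Barker chain from the text: a randomly chosen agent
    contemplates switching bins and accepts with probability
    1/(exp(dH) + 1). Returns the empirical distribution of N."""
    rng = random.Random(seed)
    N = n_tot // 2                       # agents currently in bin 0
    hist = [0] * (n_tot + 1)
    for _ in range(steps):
        if rng.random() < N / n_tot:     # picked an agent in bin 0
            M = n_tot - N                # contemplate moving it to bin 1
            dH = (f[N - 1] + f[M + 1] + v[1]) - (f[N] + f[M] + v[0])
            if rng.random() < 1.0 / (math.exp(dH) + 1.0):
                N -= 1
        else:                            # picked an agent in bin 1
            M = n_tot - N                # contemplate moving it to bin 0
            dH = (f[N + 1] + f[M - 1] + v[0]) - (f[N] + f[M] + v[1])
            if rng.random() < 1.0 / (math.exp(dH) + 1.0):
                N += 1
        hist[N] += 1
    return [h / steps for h in hist]
```

For a long enough chain the empirical histogram converges to the exact distribution, which is the content of the Metropolis-Barker correspondence invoked above.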
Note that motion between bins is controlled only by differences in vexations, so that none of this affects the dynamics represented in our analysis. When considering a different number of agents in the same chamber, however, μ will take on a different value and so μ − c can no longer be set to zero. Accordingly, to predict distributions for new numbers of flies, we employ Eq. 11 above and adjust μ so that the vexation of the associated reservoir fixes the new total number of flies. Orientation and higher-order many-body interactions Remarkably, our conclusions hold also for plausible models in which the inter-agent interactions are not explicitly expressed in terms of the local density n(x). To see this, we can consider the same behavioral rule of moves accepted according to probability 1/(eΔH + 1), but with H now defined as a sum of two parts, $$H \equiv U(\{ {\mathbf{x}}_a\} ) + \mathop {\sum}\limits_a V({\mathbf{x}}_a),$$ where V(x) is the usual vexation function for the individual agents, and now U({xa}) is some potentially complex many-body interaction of finite range depending explicitly on the locations of all of the agents xa. As above, the form of the Markov chain associated with the move model leads directly to the Boltzmann distribution P({xa}) = Z−1e−H. To recover the frustration-vexation probability form analyzed throughout the text, we now follow the standard statistical mechanics approach of defining a pseudo-free-energy functional by integrating out internal degrees of freedom. Specifically, we will keep the bin occupancies constant while integrating over all arrangements of agents consistent with these occupancies. For sufficiently small bins in which vexation does not vary significantly, we again find to a good approximation \(\mathop {\sum}\nolimits_a V({\mathbf{x}}_a) = \mathop {\sum}\nolimits_b v_bN_b\), so that vexation simply gives a constant factor.
Next, for sufficiently large bins, the net contributions to U({xa}) from interactions occurring within the bins will be large compared to the boundary effects from contributions from interactions crossing bin boundaries. Thus, we can imagine decomposing the overall interaction into a sum over the bins of the interactions just among agents a within each bin b, \(U(\{ {\mathbf{x}}_a\} ) = \mathop {\sum}\nolimits_b U(\{{\mathbf{x}}_a\} _{a\, \in \,b})\), where we can improve accuracy by repeating the same agent locations {xa}a∈b in neighboring bins (so-called periodic boundary conditions). Combining these approximations, and summing over all ways to assign agents to bins with counts {Nb} and over all possible locations for the agents within each bin, yields the same frustration-vexation form considered throughout the text, $$P\left( {\left\{ {N_b} \right\}} \right) = Z^{ - 1}\frac{{N!}}{{N_1! \ldots N_B!}}\left( {\mathop {\prod}\limits_b e^{ - f_{N_b}}} \right)e^{ - \mathop {\sum}\limits_b v_bN_b},$$ where B is the total number of bins, and $$e^{ - f_N} \equiv {\int}_A \ldots {\int}_A e^{ - U\left( {{\mathbf{x}}_1, \ldots ,{\mathbf{x}}_N} \right)}{\kern 1pt} dA_1 \ldots dA_N$$ defines the effective bin-frustration functional fN as an N-dimensional integral over the area of a single bin (with periodic boundary conditions applied to the interactions). Finally, we note that the above generalizes naturally to orientation-dependent interactions by considering the coordinates {xa} to include orientation as well as spatial coordinates. If the vexation is orientation-independent, we recover precisely the form above. Otherwise, the entire framework generalizes naturally to consideration of joint location-orientation densities n(x,θ). All experiments were performed 3–15 days post-eclosion using common fruit flies (D. melanogaster) from an out-bred laboratory stock reared at room temperature on a 12 h/12 h day-night cycle.
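The N-dimensional integral defining fN above can be estimated by direct Monte Carlo for a postulated interaction. In the sketch below the pairwise form and its parameters are illustrative assumptions, not quantities measured from the flies; the integral is normalized by A^N, which only shifts fN by N ln A, a constant absorbable into vb:

```python
import math
import random

def frustration_mc(N, side, u_pair, samples=20000, seed=1):
    """Monte Carlo estimate of f_N from
    e^{-f_N} = ∫…∫ e^{-U(x_1,…,x_N)} dA_1…dA_N / A^N,
    with U assumed to be a sum of pairwise terms u_pair over a square
    bin of the given side length."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        pts = [(rng.uniform(0, side), rng.uniform(0, side))
               for _ in range(N)]
        U = sum(u_pair(pts[i], pts[j])
                for i in range(N) for j in range(i + 1, N))
        acc += math.exp(-U)
    return -math.log(acc / samples)

def soft_repulsion(p, q):
    """Illustrative short-ranged repulsive pair interaction (an
    assumption for demonstration, not extracted from data)."""
    return math.exp(-3.0 * math.hypot(p[0] - q[0], p[1] - q[1]))
```

For a purely repulsive pair interaction this construction yields f0 = f1 = 0 and a positively curved fN, the qualitative shape seen for the male-fly crowds.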
Flies are anesthetized using CO2 and sorted within a few days post-eclosion. We wait for 24 h after sorting before running experiments. Most observations started between 1 and 5 h after the light was turned on. The experiment chambers are constructed by sandwiching a 1.5 mm thick aluminum frame between two transparent acrylic sheets. The chamber is suspended above an LED light table. Holes in the upper acrylic sheet allow for introducing flies via aspiration from above. To heat the chambers, 2 Ω high-power resistors are adhered using JB Weld to the aluminum sheet and powered by a variable power supply. On the opposite side of the sheet, a beaker of ice water is used as a heat sink. Chamber temperature is measured at two locations using a contact thermometer to ensure no more than 2 degrees Celsius drift and consistent temperature gradients between trials. We heat one side of the chamber to temperatures between 40 and 50 degrees Celsius34. The opposing side of the chamber is connected to a heat sink and kept at temperatures between 25 and 35 degrees Celsius. We find that the resulting temperature gradient drives a strong avoidance of the hotter wall without causing fly death, since the flies stay out of the high-temperature region. A video camera (AVT Marlin, Andover, MA) records overhead images of flies at frame rates around 30 fps and relays these images to a computer where they are analyzed by a custom MATLAB program in real-time. The entire apparatus was enclosed in a black box to prevent biases introduced by ambient light or additional visual cues. To label fly centroids, images were thresholded to find fly silhouettes. For high density experiments, large groups become common and a more sophisticated approach is necessary to separate clusters, which may be as large as 10 flies. First, the images of several individual flies are combined to make a single, averaged fly mask. This mask is then convolved with images of fly groups.
The best fits for these convolutions are used to approximate the locations of flies whose silhouettes overlap. (For additional details, see the code provided under the Code Availability statement below.) Labeling is then checked manually, and we find this technique robust enough to label male flies with 0.25% error, i.e. 1 in 400 flies mislabeled. The mating flies required extensive manual corrections due to changes in the fly postures and the polydispersity of fly sizes, since females are larger than males. For the analysis in this paper we sampled these positions at intervals of 1 s. Due to wall-exclusion effects, the area of a chamber is different from the area accessible to the centroid of a fly. We thus exclude the outer area of the chamber corresponding to approximately half the width of a fly. Areas of the bins are then extracted using images from the experiment. To demonstrate another method for tracking flies that measures only local densities, a simpler method was used for counting flies in the "C"-shaped chamber. After thresholding, the number of pixels corresponding to a fly were summed in each bin, and a discrete fly density was then assigned to each bin using knowledge of the total number of flies in the chamber. This method has the advantage of computational speed, but weights larger flies more heavily and requires reanalysis for different bin sizes.

Measurement timing and thermal ramp protocol

Observations for Fig. 3 were conducted approximately 5–15 min after the flies were introduced into the chamber, so that they could explore their new chamber and settle into a steady state. To measure the vexation of the square experiment, we performed 12 separate single-fly measurements, each lasting 10 min. Similar results are obtained if three flies are used over a single 10 min period.
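The mask-convolution localization described above can be illustrated with a minimal greedy template matcher. This is a simplified stand-in for the authors' MATLAB implementation; the suppression strategy and the function name are our own assumptions:

```python
import numpy as np

def locate_by_mask(image, mask, n_flies):
    """Locate n_flies in a thresholded image by correlating with an
    averaged fly mask, greedily taking the best correlation peak and
    suppressing that region before searching for the next fly."""
    mh, mw = mask.shape
    H = image.shape[0] - mh + 1
    W = image.shape[1] - mw + 1
    img = image.astype(float).copy()
    centers = []
    for _ in range(n_flies):
        score = np.zeros((H, W))
        for i in range(H):              # valid-mode cross-correlation
            for j in range(W):
                score[i, j] = np.sum(img[i:i+mh, j:j+mw] * mask)
        i, j = np.unravel_index(np.argmax(score), score.shape)
        centers.append((i + mh // 2, j + mw // 2))
        img[i:i+mh, j:j+mw] = 0.0       # suppress the matched silhouette
    return centers
```

On a synthetic frame with two well-separated silhouettes, the two recovered centers sit at the silhouette centroids; for genuinely overlapping silhouettes, the suppression step is what allows a second, distinct peak to emerge.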
Thus, measurements of vexation in the "C"- and stair-shaped chambers used two and three concurrent flies, respectively, and needed only a single 10 min observation to measure the vexation. To probe the changing fly behaviors shown in Fig. 4, we track the flies for up to 9 h, before flies begin to die from deprivation (refs 45,46). To test whether fly behavior changes over our standard 10 min time windows, we compare the probabilities, P_b(N), from the first 5 min of the window with those from the last 5 min and find that they are consistent. The only exception is the very first 5 min after the flies are introduced into the chamber, while they become oriented to their new environment; we do not include this period in our analysis. To elicit different behaviors and location preferences with the same population of flies, we apply a heat gradient to generate an avoidance behavior (ref. 34), starting 20 min after the flies are introduced to the chamber. By minute 30, the chamber has reached a steady temperature and we observe that the flies exhibit an approximately constant average distribution. At minute 40, we turn the heat off and let the chamber adjust to room temperature for the remainder of the experiment. Throughout these observations, we qualitatively observe several different behaviors. For the first 5 min, flies are most active and their frustration has a slightly higher positive curvature than the frustration for the 5–15 min period. When the chamber is heated, the frustration stays approximately the same despite the drastic change in the vexation. After the chamber cools down, flies enter a readjustment phase where they are much less active. After this readjustment phase, however, flies again exhibit behavior similar to that from the 5–15 min interval. By 6 h, flies in all the experiments switch to a grouping behavior, as shown in Fig. 4.
Validation of assumptions underlying theoretical analysis

As mentioned above, we made some general assumptions in developing our theory, which we now validate for the walking-fly system. First, to verify attainment of equilibrium and sufficient ergodicity, we consider the normalized autocorrelation function $$c_{\mathrm{T}}({\mathrm{\Delta }}t) \equiv \frac{{\left\langle {\mathop {\sum}\nolimits_b {N_b(t)N_b(t + {\mathrm{\Delta }}t)} } \right\rangle _t}}{{\left\langle {\mathop {\sum}\nolimits_b {N_b(t)N_b(t)} } \right\rangle _t}},$$ where ⟨…⟩_t indicates an average over all times. This function shows the expected rapid exponential decay (Fig. 2c), and its integral gives the decorrelation time τ = 0.92 s. Indeed, we find this time to be quite short, typically on the order of a few seconds, for all of our experimental runs. This decay time is two orders of magnitude faster than the typical run time and does not vary significantly when computed in different time sub-windows, strongly suggesting rapid mixing and stationarity of the random process, thereby allowing the interchange of time and ensemble averages and establishing the existence of equilibrium on the timescales under study. Our videos thus represent hundreds of independent samples drawn from the equilibrium ensemble underlying our analysis. We next consider whether the bins are truly independently distributed, as expected from Eq. 3. Accordingly, we consider the normalized time-averaged spatial-correlation function $$c_{\mathrm{S}}({\mathrm{\Delta }}) \equiv \frac{{\left\langle {\mathop {\sum}\nolimits_b {N_b(t)N_{b + {\mathrm{\Delta }}}(t)} } \right\rangle _{b,t}}}{{\left\langle {N_b(t)N_b(t)} \right\rangle _{b,t}}},$$ where ⟨…⟩_{b,t} indicates an average over all times and bins, and Δ is the two-dimensional vector displacement between bins (Fig. 2b). The data show essentially no correlation between bins, thereby verifying the product form of the global bin distribution function in Eq. 3 of the main text.
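The autocorrelation c_T(Δt) can be computed directly from an occupancy matrix following the definition above. The sketch below uses our own names, and reads the decorrelation time off as a simple sum over the sampled correlation values; the published analysis code may use a different estimator:

```python
import numpy as np

def occupancy_autocorrelation(N_bt):
    """Normalized temporal autocorrelation c_T(dt) of bin occupancies.

    N_bt: array of shape (B, T), with N_bt[b, t] = N_b(t).
    Returns c_T for dt = 0 .. T-1; c_T(0) = 1 by construction.
    """
    B, T = N_bt.shape
    denom = np.mean(np.sum(N_bt * N_bt, axis=0))      # <sum_b N_b(t)^2>_t
    c = np.empty(T)
    for dt in range(T):
        # <sum_b N_b(t) N_b(t+dt)>_t over the overlapping frames
        prod = np.sum(N_bt[:, :T - dt] * N_bt[:, dt:], axis=0)
        c[dt] = np.mean(prod) / denom
    return c

def decorrelation_time(c, frame_interval=1.0):
    """Decorrelation time as the integral (here, discrete sum) of c_T."""
    return frame_interval * float(np.sum(c))
```

For perfectly static occupancies, c_T(Δt) = 1 for all lags; for real data, the sum is dominated by the initial exponential decay, giving a τ of order a few frames at 1 s sampling.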
This confirms not only that we have chosen appropriately sized bins but also, more fundamentally, establishes that there are few or no fly-fly interaction effects between bins, so that the local density approximation (LDA) form for the frustration, \(F[n({\mathbf{x}})] = {\int} f(n({\mathbf{x}}))\,dA\), indeed gives a good representation of the behavior of the fly populations at scales greater than 0.15 cm².

To estimate the frustration and vexation for the crowds in our experiments, we start by constructing the posterior function P(fN, vb|Nb(t)), which represents the relative likelihood of different parameter choices for our model given the data (number counts within each bin) that have actually been observed. Then, to find the a posteriori estimate of the parameters, we maximize this likelihood by performing a numerical gradient minimization of $$\begin{array}{l} - {\mathrm{ln}}P(f_N,v_b|N_b(t)) = C + TB\left( {\left\langle {{\mathrm{ln}}z_b} \right\rangle _b + \left\langle {v_bN_b(t) + {\mathrm{ln}}N_b(t)! + f_{N_b(t)}} \right\rangle _{b,t}} \right)\\ + \mathop {\sum}\limits_N \frac{{f_N^2}}{{2\sigma ^2}} + \mathop {\sum}\limits_b \frac{{v_b^2}}{{2\sigma ^2}},\end{array}$$ where C is an irrelevant normalization constant, B is the total number of bins in the system, T is the total number of independent time samples employed, and ⟨…⟩_b and ⟨…⟩_{b,t} represent averages over all bins, or over bins and times, respectively. Finally, in the last two terms, σ sets the width about zero of a Gaussian prior distribution on the frustration and vexation parameters. This Gaussian prior reflects the fact that the frustration and vexation parameters vb and fN can in principle take any real value, but in practice generally fall in a range from about −15 to 15, because these parameters enter as exponentials in our probability models.
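The objective above transcribes almost directly into code. The sketch below is a simplified version with the chemical potential absorbed into v_b; the names and structure are ours, not those of the published repository. It evaluates −ln P up to the irrelevant constant C:

```python
import math

def neg_log_posterior(f, v, counts, sigma=15.0):
    """-ln P(f_N, v_b | data), up to the constant C.

    f: list with f[N] = frustration for occupancy N
    v: list of vexations, one per bin
    counts: counts[b] is the sequence of observed occupancies N_b(t) in bin b
    """
    Nmax = len(f) - 1
    total = 0.0
    for b, vb in enumerate(v):
        # per-bin normalization z_b = sum_N e^{-(v_b N + f_N)} / N!
        zb = sum(math.exp(-(vb * N + f[N])) / math.factorial(N)
                 for N in range(Nmax + 1))
        total += len(counts[b]) * math.log(zb)
        for N in counts[b]:
            # data term: v_b N_b(t) + ln N_b(t)! + f_{N_b(t)}
            total += vb * N + math.log(math.factorial(N)) + f[N]
    # Gaussian priors on the parameters
    total += sum(fN ** 2 for fN in f) / (2 * sigma ** 2)
    total += sum(vb ** 2 for vb in v) / (2 * sigma ** 2)
    return total
```

This scalar objective can then be handed to any gradient-based minimizer (e.g. `scipy.optimize.minimize`) over the free parameters, with f[0] and f[1] pinned to zero as discussed below.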
Because the amount of data that we handle is on the order of tens of thousands of frames, the likelihood peaks strongly around its maximum, and the precise form of the Gaussian prior is largely irrelevant. Indeed, changing the value of σ from a reasonable value of 15 to an unreasonably small value of 1 changes our final results for the frustration by only 11.4%. Throughout the rest of our work, we take σ = 15.

Uncertainty in parameter estimation

The sharp peaks associated with the large amount of data ensure the accuracy of the asymptotic Gaussian approximation, in which the joint probability distribution representing the range of parameters supported by the data is a multivariate Gaussian distribution. As a result, the associated covariance matrix of uncertainties in the parameters is the inverse of the Fisher information matrix I (i.e., the matrix of second derivatives of −lnP evaluated at the location of its maximum). The matrices of parameter uncertainties and cross-correlations among them are computed as follows.
For our full DFFT model, with vexation and frustration, and the simple Poisson model, with vexation only, we calculate the inverses of the following matrices, respectively, $$I_{{\mathrm{DFFT}}}(\{ f_N\} ,\{ v_b\} ) = \left( {\begin{array}{*{20}{c}} {\left[ {I_{ff}} \right]_{N_{{\mathrm{max}}} \times N_{{\mathrm{max}}}}} & {\left[ {I_{fv}} \right]_{N_{{\mathrm{max}}} \times B}} \\ {\left[ {I_{fv}^T} \right]_{B \times N_{{\mathrm{max}}}}} & {\left[ {I_{vv}} \right]_{B \times B}} \end{array}} \right),$$ $$I_{{\mathrm{Poisson}}}(\{ v_b\} ) = \left[ {I_{vv}} \right]_{B \times B},$$ where the matrix elements of each block are $$\left[ {I_{ff}} \right]_{N,N^\prime } = T\delta _{NN^\prime }\left( {\mathop {\sum}\limits_{\bar{b}} P_{\,\bar{b}}\left( N \right) - \mathop {\sum}\limits_{\bar{b}} {P_{\,\bar{b}}\left( N \right)P_{\,\bar{b}}\left( {N^\prime } \right)} } \right),$$ $$\left[ {I_{fv}} \right]_{N,b} = TP_b\left( N \right)\left( {N - \mathop {\sum}\limits_{\widetilde{N}} \widetilde{N}P_b\left( {\widetilde{N}} \right)} \right),$$ $$\left[ {I_{vv}} \right]_{b,b\prime } = T\delta _{bb\prime }\left( {\mathop {\sum}\limits_{\widetilde{N}} \widetilde{N}^{2} P_b\left( {\widetilde{N}} \right) - \left( {\mathop {\sum}\limits_{\widetilde{N}} \widetilde{N} P_b \left( {\widetilde{N}} \right) } \right)^{2}} \right).$$ Here, Pb(N) is defined as in Eq. 3 of the main text, T again represents the total number of independent time frames, and the bars and tildes mark internal summation indices. Finally, a subtle but important ambiguity arises in the extraction of frustrations and vexations. Specifically, because the exponent in the observed probabilities for each bin takes the form (ln zb + vbN + fN), making the replacements (vb → vb − α; zb → zb − β; fN → fN + β + αN) leaves the predictions of the model unchanged, and any choice of parameters corresponding to these replacements represents the data equally well. As a result, the Fisher matrices described above are singular.
To resolve this "gauge invariance" and remove the singularity, we must break the symmetry among equivalent models by adding two constraints (one for α and one for β) to our choice of fN. Here, we do this by enforcing the natural choice that f0 ≡ 0 and f1 ≡ 0, corresponding to the convention that the frustration does not affect the probability for bins with either N = 0 or N = 1 flies. Finally, in terms of the information matrices above, implementing this constraint corresponds to dropping the first two rows and columns, associated with these parameters, from the IDFFT matrix.

Uncertainty in predictions of average occupations

With the uncertainties in the extraction of the vexation and frustration parameters from above, we next determine the uncertainties in our predictions of the average bin occupations for large populations in new arenas. The predicted mean densities are $$\bar N_b = \mathop {\sum}\limits_{N = 0}^{N_{{\mathrm{max}}}} NP_b(N) = \frac{1}{{z_b}}\mathop {\sum}\limits_{N = 0}^{N_{{\mathrm{max}}}} N\frac{{e^{ - (v_b - \mu )N - f_N}}}{{N!}},$$ where the normalization is $$z_b = \mathop {\sum}\limits_{N = 0}^{N_{{\mathrm{max}}}} \frac{{e^{ - (v_b - \mu )N - f_N}}}{{N!}},$$ Pb(N) is the probability of having N flies in bin b, vb is the vexation in bin b, and fN is the frustration associated with having N flies in a bin. We accordingly computed the associated uncertainties using standard linearized error propagation as $$\sigma (\bar N_b) = \sqrt {\left( {\frac{{\partial \bar N_b}}{{\partial v_b}}} \right)^2{\mathrm{var}}(v_b) + \mathop {\sum}\limits_{N,N^\prime = 2}^{N_{{\mathrm{max}}}} \frac{{\partial \bar N_b}}{{\partial f_N}}\frac{{\partial \bar N_b}}{{\partial f_{N^{\prime}}}}{\mathrm{covar}}(f_N,f_{N^\prime })} ,$$ where var(X) and covar(X, Y) represent the variance of random variable X and the covariance between X and Y, respectively, as determined by the inverse of the Fisher information matrix discussed above.
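The predicted mean occupancy \(\bar N_b\) above is straightforward to evaluate numerically; a minimal sketch (the helper name and the explicit μ argument are our own):

```python
import math

def mean_occupancy(vb, f, mu=0.0):
    """Predicted mean occupancy N_bar_b = sum_N N P_b(N), with
    P_b(N) = exp(-(vb - mu) N - f[N]) / (N! z_b)."""
    Nmax = len(f) - 1
    w = [math.exp(-(vb - mu) * N - f[N]) / math.factorial(N)
         for N in range(Nmax + 1)]
    zb = sum(w)                                   # normalization z_b
    return sum(N * w[N] for N in range(Nmax + 1)) / zb
```

With all frustrations zero, P_b(N) reduces to a (truncated) Poisson distribution with rate e^{−(v_b − μ)}, so the predicted mean approaches that rate; nonzero f_N for N ≥ 2 then penalizes or favors crowding relative to the Poisson baseline.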
Finally, the derivatives needed in Eq. 25 are $$\frac{{\partial \bar N_b}}{{\partial v_b}} = - \left( {\left\langle {N_b^2} \right\rangle - \bar N_b^2} \right),$$ $$\frac{{\partial \bar N_b}}{{\partial f_N}} = - \left( {\frac{{N - \bar N_b}}{{z_b}}} \right)\frac{{e^{ - \,N(v_b - \mu ) - f_{N}}}}{{N!}},$$ where \(\left\langle {N_b^2} \right\rangle \equiv \mathop {\sum}\nolimits_N N^2P_b(N)\) with Pb(N) as defined above. A few technical notes are in order to understand the terms present in Eq. 25. First, note that cross-correlations between vexations in different bins are not relevant because \(\bar N_b\) depends solely on vb and not on the vexations of other bins. Also, cross-correlations between extracted vexations vb and frustrations fN are zero in our case because we extract the vexations and frustrations from different, and thus independent, experiments when making our predictions for average occupations. Finally, the uncertainties in f0 and f1 are not included because these uncertainties are zero by the gauge choice discussed in the section above.

Uncertainty in experimentally measured bin statistics

For each independent bin, we obtain from the experiment a sequence of length N_T, with each element corresponding to a bin occupation that can range from zero to the maximum packing of flies, N = 0,…,Nmax. From these data, we hope to extract probability parameters pN describing the bin-occupation distributions studied in the main text. For simplicity of notation, we here use lower-case p to denote experimentally measured probabilities. To account for time correlations in bin occupancies, particularly at high frame rates, we down-sample at intervals given by the decorrelation time τ and actually consider uncorrelated sequences of length T = N_T/τ. The data then correspond to the result of a random process of making T independent selections among Nmax + 1 possible bin occupations.
Thus, for each bin, the probability of observing a given data sequence becomes the multinomial distribution, $$\frac{T!}{h_0! \cdots h_{N_{{\mathrm{max}}}}!}\,p_0^{h_0} \cdots p_{N_{{\mathrm{max}}}}^{h_{N_{{\mathrm{max}}}}},$$ where hN represents the number of times ("hits") we observe each of the possible occupancies N. To extract the underlying uncertainties, we note that Bayes' theorem gives the following distribution for the probability parameters to take the values {pN} given the actually observed counts {hN}, $$P(\{ p_N\} |\{ h_N\} ) = \frac{{P\left( {\left. {\{ h_N\} } \right|\{ p_N\} } \right)P\left( {\{ p_N\} } \right)}}{{P(\{ h_N\} )}} \propto \left( {\mathop {\prod}\limits_{N = 0}^{N_{{\mathrm{max}}}} \frac{{p_N^{h_N}}}{{h_N!}}} \right)P(\{ p_N\} ).$$ This posterior probability is proportional to an undetermined prior probability P({pN}) describing our a priori expectations for the values of the {pN} parameters. However, as per our discussion surrounding Eq. 17 above, in the large-T limit the Poisson-like product factor in Eq. 29 will be highly peaked, and the unknown prior P({pN}) will not have a substantial effect on the posterior distribution. To completely eliminate the effects of unwarranted assumptions entering through our choice of prior, we assume an uninformative prior distribution that is consistent with the invariance of the probability values under the inclusion of new samples, and choose the multivariate generalization of Haldane's uninformative improper prior distribution (ref. 47), $$P(\{ p_N\} ) = \frac{1}{{\mathop {\prod}\nolimits_{N = 0}^{N_{{\mathrm{max}}}} {p_N} }}.$$ With this choice, upon normalization, Eq. 29 becomes the Dirichlet distribution, $$P(\{ p_N\} |\{ h_N\} ) = \Gamma \left( {\mathop {\sum}\limits_{N\prime = 0}^{N_{{\mathrm{max}}}} h_{N\prime }} \right)\mathop {\prod}\limits_{N = 0}^{N_{{\mathrm{max}}}} \frac{{p_N^{h_N - 1}}}{{\Gamma (h_N)}},$$ where Γ(x) is the Gamma function.
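The mean and standard deviation of this Dirichlet posterior take simple closed forms; a quick sketch (the helper name is our own):

```python
import math

def occupancy_stats(hits):
    """Posterior mean and uncertainty of the occupancy probabilities p_N
    under the Dirichlet posterior with the Haldane prior.

    hits: hits[N] = number of (decorrelated) frames with occupancy N.
    Returns (p_bar, sigma): p_bar[N] = h_N / T and
    sigma[N] = sqrt(p_bar[N] (1 - p_bar[N]) / (T + 1)), with T = sum(hits).
    """
    T = sum(hits)
    p_bar = [h / T for h in hits]
    sigma = [math.sqrt(p * (1 - p) / (T + 1)) for p in p_bar]
    return p_bar, sigma
```

Note the (1 − p̄_N) factor: for occupancies observed in a large fraction of frames, the uncertainty is noticeably smaller than the naïve Poisson-counting estimate.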
This distribution yields expected values for the probabilities equal precisely to the observed frequencies, \(\bar p_N = h_N/T\). The variances of this distribution then give our desired uncertainties, $$\sigma (p_N) = \sqrt {\frac{{h_N\left( {T - h_N} \right)}}{{T^2(T + 1)}}} = \sqrt {\frac{{\bar p_N(1 - \bar p_N)}}{{T + 1}}} .$$ Note that when T is large and \(\bar p_N \ll 1\), the uncertainties correspond to what we would naïvely expect from Poisson counting, namely an uncertainty of \(\sqrt {h_N}\) in the counts, corresponding to an uncertainty of \(\sqrt {h_N} /T = \sqrt {\overline{p}_N/T}\) in the extracted probabilities. Such an analysis, however, misses the important factor of \(\sqrt {1 - \overline{p}_N}\) and leads to significant errors in our case. Finally, for the uncertainty in the experimental average occupation \(\bar N_{{\mathrm{expt}}} = \mathop {\sum}\nolimits_N Np_N\), the corresponding variance is $${\mathrm{var}}\left( {\bar N_{{\mathrm{expt}}}} \right) = \mathop {\sum}\limits_{N \ne N^{\prime}} NN^{\prime}{\mathrm{covar}}\left( {p_N,p_{N^\prime }} \right) + \mathop {\sum}\limits_N N^2\sigma (p_N)^2,$$ where the needed covariances of the Dirichlet distribution are $${\mathrm{covar}}(p_N,p_{N^\prime }) = \frac{{ - h_Nh_{N^\prime }}}{{T^2(T + 1)}} = \frac{{ - \bar p_N\bar p_{N^\prime }}}{{T + 1}}.$$

Code availability

Readers can access the code related to parameter estimation and crowd-density predictions at https://github.com/MendezV/DFFT or https://doi.org/10.5281/zenodo.1285931. Readers can also access code related to the image-analysis procedures at https://github.com/yunuskink/Fitfly-fly-tracking or https://doi.org/10.5281/zenodo.1304326. There are no access restrictions to this software.

Data availability

The fly density data that support the findings of this study are available in the Open Science Framework database at https://doi.org/10.17605/OSF.IO/7UBZ2.

References

1. Silverberg, J. L., Bierbaum, M., Sethna, J. P. & Cohen, I.
Collective motion of humans in mosh and circle pits at heavy metal concerts. Phys. Rev. Lett. 110, 228701 (2013).
2. Helbing, D., Farkas, I. & Vicsek, T. Simulating dynamical features of escape panic. Nature 407, 487 (2000).
3. Bellomo, N. & Dogbe, C. On the modeling of traffic and crowds: a survey of models, speculations, and perspectives. SIAM Rev. 53, 409–463 (2011).
4. Leonard, N. E. & Fiorelli, E. Virtual leaders, artificial potentials and coordinated control of groups. In Proceedings of the 40th IEEE Conference on Decision and Control, Vol. 3, 2968–2973 (IEEE, 2001).
5. Couzin, I. D., Krause, J., Franks, N. R. & Levin, S. A. Effective leadership and decision-making in animal groups on the move. Nature 433, 513 (2005).
6. Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I. & Shochet, O. Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75, 1226 (1995).
7. Moussaïd, M., Helbing, D. & Theraulaz, G. How simple rules determine pedestrian behavior and crowd disasters. Proc. Natl Acad. Sci. USA 108, 6884–6888 (2011).
8. An, L. Modeling human decisions in coupled human and natural systems: review of agent-based models. Ecol. Model. 229, 25–36 (2012).
9. Zheng, X., Zhong, T. & Liu, M. Modeling crowd evacuation of a building based on seven methodological approaches. Build. Environ. 44, 437–445 (2009).
10. Treuille, A., Cooper, S. & Popović, Z. Continuum crowds. ACM Trans. Graph. 25, 1160–1168 (2006).
11. Hinz, R. C. & de Polavieja, G. G. Ontogeny of collective behavior reveals a simple attraction rule. Proc. Natl Acad. Sci. USA 114, 2295–2300 (2017).
12. Katz, Y., Tunstrøm, K., Ioannou, C. C., Huepe, C. & Couzin, I. D. Inferring the structure and dynamics of interactions in schooling fish. Proc. Natl Acad. Sci. USA 108, 18720–18725 (2011).
13. Ballerini, M. et al.
Interaction ruling animal collective behavior depends on topological rather than metric distance: evidence from a field study. Proc. Natl Acad. Sci. USA 105, 1232–1237 (2008).
14. Berman, G. J., Choi, D. M., Bialek, W. & Shaevitz, J. W. Mapping the stereotyped behaviour of freely moving fruit flies. J. R. Soc. Interface 11, 20140672 (2014).
15. Kelley, D. H. & Ouellette, N. T. Emergent dynamics of laboratory insect swarms. Sci. Rep. 3, 1073 (2013).
16. Boltzmann, L. Über die mechanische Bedeutung des zweiten Hauptsatzes der Wärmetheorie (vorgelegt in der Sitzung am 8 February 1866) (Staatsdruckerei, 1866).
17. van der Waals, J. D. Over de Continuiteit van den Gas- en Vloeistoftoestand, Vol. 1 (Sijthoff, 1873).
18. Toner, J. & Tu, Y. Flocks, herds, and schools: a quantitative theory of flocking. Phys. Rev. E 58, 4828 (1998).
19. Mora, T. & Bialek, W. Are biological systems poised at criticality? J. Stat. Phys. 144, 268–302 (2011).
20. Cavagna, A. et al. Dynamic scaling in natural swarms. Nat. Phys. 13, 914 (2017).
21. Tennenbaum, M., Liu, Z., Hu, D. & Fernandez-Nieves, A. Mechanics of fire ant aggregations. Nat. Mater. 15, 54 (2016).
22. Sinhuber, M. & Ouellette, N. T. Phase coexistence in insect swarms. Phys. Rev. Lett. 119, 178003 (2017).
23. Solon, A. P. et al. Pressure is not a state function for generic active fluids. Nat. Phys. 11, 673 (2015).
24. Sham, L. J. & Kohn, W. One-particle properties of an inhomogeneous interacting electron gas. Phys. Rev. 145, 561 (1966).
25. Hohenberg, P. & Kohn, W. Inhomogeneous electron gas. Phys. Rev. 136, B864 (1964).
26. Diggle, P. J. Statistical Analysis of Spatial and Spatio-Temporal Point Patterns, 3rd edn (Chapman & Hall/CRC, 2013).
27. Kim, I. S. & Dickinson, M. H.
Idiothetic path integration in the fruit fly Drosophila melanogaster. Curr. Biol. 27, 2227–2238 (2017).
28. Neyman, J. On a new class of contagious distributions, applicable in entomology and bacteriology. Ann. Math. Statist. 10, 35–57 (1939).
29. Taylor, L. R. Aggregation, variance and the mean. Nature 189, 732–735 (1961).
30. Besson, M. & Martin, J.-R. Centrophobism/thigmotaxis, a new role for the mushroom bodies in Drosophila. Dev. Neurobiol. 62, 386–396 (2005).
31. Ramdya, P. et al. Mechanosensory interactions drive collective behaviour in Drosophila. Nature 519, 233 (2015).
32. Schneider, J., Atallah, J. & Levine, J. D. One, two, and many—a perspective on what groups of Drosophila melanogaster can tell us about social dynamics. Adv. Genet. 77, 59 (2012).
33. Schneider, J., Dickinson, M. H. & Levine, J. D. Social structures depend on innate determinants and chemosensory processing in Drosophila. Proc. Natl Acad. Sci. USA 109, 17174–17179 (2012).
34. Zars, M. & Zars, T. High and low temperatures have unequal reinforcing properties in Drosophila spatial learning. J. Comp. Physiol. A 192, 727 (2006).
35. Ramdya, P., Schneider, J. & Levine, J. D. The neurogenetics of group behavior in Drosophila melanogaster. J. Exp. Biol. 220, 35–41 (2017).
36. Runge, E. & Gross, E. K. U. Density-functional theory for time-dependent systems. Phys. Rev. Lett. 52, 997 (1984).
37. Chan, G. K.-L. & Finken, R. Time-dependent density functional theory of classical fluids. Phys. Rev. Lett. 94, 183001 (2005).
38. Capitani, J. F., Nalewajski, R. F. & Parr, R. G. Non-Born–Oppenheimer density functional theory of molecular systems. J. Chem. Phys. 76, 568–573 (1982).
39. Wesolowski, A. et al. Quantifying the impact of human mobility on malaria. Science 338, 267–270 (2012).
40. Hauer, M. E. Migration induced by sea-level rise could reshape the US population landscape. Nat. Clim. Change 7, 321–325 (2017).
41. Clark, P. U. et al.
Consequences of twenty-first-century policy for multi-millennial climate and sea-level change. Nat. Clim. Change 6, 360–369 (2016).
42. Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1092 (1953).
43. Barker, A. A. Monte Carlo calculations of the radial distribution functions for a proton-electron plasma. Aust. J. Phys. 18, 119–134 (1965).
44. Reif, F. Fundamentals of Statistical and Thermal Physics, Chapter 8 (McGraw Hill, New York, 1965).
45. Ji, F. & Zhu, Y. A novel assay reveals hygrotactic behavior in Drosophila. PLoS ONE 10, e0119162 (2015).
46. Bell, W. J., Cathy, T., Roggero, R. J., Kipp, L. R. & Tobin, T. R. Sucrose-stimulated searching behaviour of Drosophila melanogaster in a uniform habitat: modulation by period of deprivation. Anim. Behav. 33, 436–448 (1985).
47. Haldane, J. A note on inverse probability. Math. Proc. Camb. Philos. Soc. 28, 55–61 (1932).

Acknowledgements

The authors thank Xiaoning Wang, Marc-Antoine Bouvattier, Alonso Botero, Tom Corwin, Tom Mifflin, Greg Godfrey, and Nathan Sitaraman for their help with the initial stages of the project. We further thank the Cohen and Arias groups for discussions throughout this work. The work was primarily funded by the Army Research Office (ARO W911NF-16-1-0433). J.F.M. was also supported in part by the Office of the Vice President for Research at the Universidad de Los Andes. Y.K. was also supported in part by funding from National Science Foundation Graduate Research Fellowship Award No. DGE-1650441.

Author information

These authors contributed equally: J. Felipe Méndez-Valderrama, Yunus A. Kinkhabwala.

Department of Physics, Universidad de Los Andes, Bogotá, 111711, Colombia
J. Felipe Méndez-Valderrama

Department of Applied and Engineering Physics, Cornell University, Ithaca, NY, 14853, USA
Yunus A.
Kinkhabwala

Metron Inc., Scientific Solutions, Reston, VA, USA
Jeffrey Silver

Department of Physics, Cornell University, Ithaca, NY, 14853, USA
Itai Cohen & T. A. Arias

Contributions

J.F.M.V.: development and implementation of analyses to extract vexations, frustrations, and predicted mean occupations; analysis of statistical uncertainties in all of these quantities and also in the extraction of bin-occupancy distributions from experimental data; final display format for cross-correlating predicted mean occupancies with predictions; theoretical parts of the Methods section; significant contributions to the main text. Y.A.K.: design and implementation of experiments along with development of image-analysis techniques; running data through the analyses provided by Méndez-Valderrama; experimental parts of the Methods section; design and implementation of figures; an early draft of the manuscript; significant contributions to the main text. Y.A.K. and J.F.M.V. contributed equally to this work. J.S.: co-development of the underlying Markov chain; proper accounting for the degeneracy factor; identification of the multinomial and Poisson distributions for the non-interacting case; initiation of the use of maximum-likelihood estimation and Bayesian uncertainty techniques. I.C.: significant input into the design of experiments, and primary responsibility for the main text. T.A.A.: development of the underlying motion model, density-functional theory analysis, and prediction of the form of population fluctuations; co-development of the underlying Markov chain; complete early draft of the manuscript, and significant contributions to the main text.

Correspondence to T. A. Arias.

Méndez-Valderrama, J. F., Kinkhabwala, Y. A., Silver, J. et al. Density-functional fluctuation theory of crowds. Nat. Commun. 9, 3538 (2018).
DOI: https://doi.org/10.1038/s41467-018-05750-z
Article | Open | Published: 25 June 2019

Elasmobranch bycatch in the demersal prawn trawl fishery in the Gulf of Papua, Papua New Guinea

W. T. White1,2, L. Baje3, C. A. Simpfendorfer4, S. A. Appleyard1,2, A. Chin (ORCID: orcid.org/0000-0003-1813-4042)4, B. Sabub3, E. Rochel5 & G. J. P. Naylor6

Scientific Reports volume 9, Article number: 9254 (2019)

The elasmobranch bycatch of the Gulf of Papua Prawn Fishery is investigated in detail for the first time. Fisheries observers collected data on the elasmobranch bycatch from a total of 403 trawl sets (1,273 hrs) in the Gulf of Papua. A total of 40 species of elasmobranchs were recorded, ranging in size from a stingray of 12 cm disc width to a sawfish of 350 cm total length. High mortality rates (>80%) were recorded, attributed to the long trawl durations (up to 4 hours). The future inclusion of bycatch reduction devices would likely reduce the number of larger elasmobranchs being caught, based on evidence from the prawn trawl fisheries of northern Australia, and is being investigated by the PNG National Fisheries Authority. Differences in catch composition were detected across the management zones as well as between the two monsoonal seasons (SE Monsoon and NW Monsoon). Increased monitoring and additional research are required, and management plans should address the elasmobranch bycatch and, in particular, its high mortality rate.

The majority of the global elasmobranch catch is incidental, in the form of bycatch from fisheries that target teleosts or crustaceans (refs 1,2). Accurate catch-composition data are often not available and, particularly in developing countries, there are few management regulations or reporting requirements for elasmobranchs, and even less so for bycatch (ref. 3). Large declines in abundance of demersal elasmobranchs have been recorded from several locations in the Indo-West Pacific, e.g. Thailand (refs 4,5) and Indonesia (ref. 6).
For example, in the Andaman Sea region of Thailand, a comprehensive survey of sharks observed at fish landing sites in 2014 and 2015 recorded far fewer landings of larger sharks compared to a 2004 survey (ref. 5). Similarly, a study of elasmobranch fisheries in southern Indonesia highlighted that catches of elasmobranchs in the Java Sea had declined by at least one order of magnitude between 1976 and 1997 (ref. 6). However, there have been very few studies of the elasmobranch bycatch of most demersal fisheries in the tropical Indo-West Pacific. Elasmobranchs are important apex predators in marine ecosystems, but many populations are under significant pressure from overexploitation (ref. 4). Elasmobranchs typically have a K-selected life history, i.e. slow growth rates, low fecundity, late maturity and long gestation periods, which leads to low productivity (ref. 4). Thus, it is important to obtain information on the catch composition of elasmobranch bycatch from fisheries, to provide the evidence-based science required for fisheries assessments. Approximately 44% of annual global tropical prawn catches come from the Coral Triangle region, with the majority coming from Indonesia (18% of the global catch) (ref. 7). Although Papua New Guinea (PNG) contributes only about 0.1% of the global tropical prawn catch, it is the only Pacific Island nation with a demersal prawn trawl fishery. Despite its small size, this is one of the most valuable export fisheries for PNG, earning revenue of ~K10 million (~US$3 million) annually (ref. 8). No foreign trawl vessels currently operate in PNG waters. The first surveys for prawns in the Gulf of Papua (GoP) were carried out in 1954, while commercial fishing explicitly targeting prawns commenced in 1969 (refs 9,10). The GoP prawn fishery currently extends along the south coast of PNG from the mouth of the Fly River in the west to the Iokea coast in the east, to depths of about 40 m (ref. 8). The main prawn species targeted are the banana prawn Penaeus merguiensis and Indian white prawn P.
indicus, with lower catches of giant tiger prawn P. monodon and green tiger prawn P. semisulcatus8. Although an estimated 9,603 km2 of the GoP is considered suitable for trawling, 1,388 km2 receives more than 50% of the total fishing effort (hours of trawling)8. The majority of fishing effort is in Kerema Bay and Orokolo Bay, which are around a 20 hr steam from Port Moresby. The GoP prawn fishery is currently limited to 15 licenses, although only 6 vessels operated in 2014 and 20158 (NFA unpubl. data). Between 1990 and 2011, the average annual catch of prawns in the GoP prawn fishery was 625.9 mt8. Although there are detailed data on the teleost bycatch in the GoP prawn fishery11, these data included no information on the elasmobranch bycatch. Prior to the current study, no data existed on the elasmobranch bycatch of this fishery, either in terms of catch composition or the fate of the bycatch following capture. Between 2014 and 2015, PNG's National Fisheries Authority (PNG NFA) ran an observer program to investigate the elasmobranch bycatch in the GoP prawn fishery. The current study provides the first detailed investigation of this bycatch in PNG, including species, sex and size composition, and addresses the question of whether species richness and abundance vary at a spatial and temporal level. The major aim of this study was to provide management options to PNG's NFA relating to the elasmobranch bycatch of the GoP prawn fishery, based on evidence-based science.

Sampling effort

A total of 7 fishing trips were observed in this study. Five of the fishing trips comprised 9–17 days of trawling activity, and two comprised 35 and 36 days of trawling. The latter two trips were not planned as long trips prior to departure from Port Moresby in September 2015, but both skippers remained fishing for an extended period. It was not determined why these trips were longer than the other fishing trips observed.
Data were obtained based on catches from 1,273 hours of trawling in 403 trawls within the GoP, carried out between June 2014 and August 2015 at depths of 6 to 37 m. Although trawls were carried out across all fishery management zones in the GoP (Fig. 1), effort was not evenly distributed across them (Table 1). Trawling was concentrated in fisheries management zone 6 (n = 146 trawls), followed by zones 7 (n = 97 trawls) and 2 (n = 67 trawls), equating to 77% of all trawls being recorded from these three zones. Trawling in each zone covered roughly similar depth ranges. The majority of observed trawls were carried out during the day (6:00 am to 5:59 pm, n = 282) rather than at night (6:00 pm to 5:59 am, n = 120). The majority of trawls (64%, n = 258) were conducted during the Southeast Monsoon (SE), i.e. between May and September. This difference reflects the closure of fishing zones 2–8 to trawling between 1st December and 31st March12 (during the Northwest Monsoon, NW).

Map of Papua New Guinea: (a) whole country with yellow box indicating the Gulf of Papua region; (b) Gulf of Papua showing the locations of the trawl sets observed in this study superimposed over the fisheries management zones (1–8) and the extralimital Fly zone (0). Map data: ©2017 Google Earth, NASA; ©2013, TerraMetrics, Inc. www.terrametrics.com.

Table 1 Number of trawls, number of hours trawled, depth range fished, number of elasmobranchs recorded and catch per unit effort (CPUE, number of elasmobranchs/hour of trawling) in each of the fishing zones in the Gulf of Papua.

A total of 2,030 elasmobranch specimens were recorded from 339 of the 402 (i.e. 84%) trawls observed. No elasmobranchs were recorded in the remaining 63 trawls observed. Most elasmobranchs were recorded in fishing zone 6 (n = 878), followed by zones 2 (n = 351) and 7 (n = 314). Overall catch per unit effort (CPUE), calculated as number of elasmobranchs caught per hour of trawling (in all trawls, i.e.
including those with zero elasmobranchs), was 1.7 elasmobranchs hr−1. There was a significant difference in CPUE between the zones (GLMM, P < 0.005), but not between seasons or times of day (P > 0.05). Post-hoc two-tailed t-tests with Holm-Bonferroni correction found that only four pairwise comparisons were significant: zone 7 differed from zones 0, 1, 2 and 8 (P < 0.005), while all other comparisons were not significant (P > 0.05). Post-hoc t-tests also indicated that the western (zones 0–4) and eastern (zones 5–8) zones differed significantly (P < 0.005). CPUE was greatest in fishing zones 0 and 1 (2.5 elasmobranchs hr−1), and lowest in fishing zone 7 (1.0 elasmobranchs hr−1), with the exception of zone 4 where no elasmobranchs were caught during the three trawls observed (Table 1). The low CPUE in fishing zone 7 is the most likely cause of the significant pairwise differences found.

Elasmobranch species diversity and biomass

A total of 40 species (18 sharks and 22 rays) and 14 families (5 shark and 9 ray) of elasmobranchs were recorded during the survey period (Table 2). An additional family (Ginglymostomatidae) and two additional species (Nebrius ferrugineus and Urogymnus asperrimus) were also verified from images of the bycatch of the GoP prawn fishery provided by other sources, but not during the observer trips in this study. These two species are included in Table 2, but are not included in any analyses or summaries below.

Table 2 Abundance (number), biomass (in kg), CPUE (elasmobranchs per 100 hrs), size range (TL, total length; DW, disc width) recorded and maximum known size of each elasmobranch species recorded in this study. The two species with an asterisk were recorded from this fishery outside of this study and were not used in the analyses. The a and b parameters used to convert the estimated total lengths or disc widths (cm) to total weight (g) are provided in Table 3 together with their source.
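The length-to-weight conversion referred to in the Table 2 caption uses the standard fisheries power-law relationship W = a·L^b. A minimal sketch follows; note the a and b values below are hypothetical placeholders for illustration, not the actual parameters given in Table 3.

```python
def estimated_weight_g(length_cm, a, b):
    """Estimate total weight (g) from length (TL or DW, in cm) via W = a * L^b."""
    return a * length_cm ** b

# Hypothetical parameters for illustration only (not the Table 3 values).
a, b = 0.005, 3.0
print(estimated_weight_g(60.0, a, b))  # → 1080.0, estimated weight in grams for 60 cm
```

In practice a and b are species-specific, so biomass estimation sums per-specimen estimates using each species' own parameters.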
Although sharks represented 71% of the total elasmobranch abundance in the catches, they represented only 34% of the total biomass. The most species-rich families were Carcharhinidae (whaler sharks) and Dasyatidae (stingrays), with 12 and 11 species, respectively.

Table 3 Length to weight conversion parameters, and their source, used to estimate biomass of specimens measured but not weighed.

Six species represented 65% of the total elasmobranch catch (by abundance): Rhizoprionodon taylori (29.4%), Carcharhinus coatesi (9.5%), Gymnura australis (7.6%), Sphyrna lewini (6.6%), Maculabatis astra (6.6%) and Hemigaleus australiensis (5.8%). Of the remaining species, 24 were rarely caught (<1% of total abundance). Five species represented 40% of the total elasmobranch biomass: Rhynchobatus palpebratus (9.2%), Himantura australis (9.2%), Rhizoprionodon taylori (8.2%), Maculabatis astra (6.7%) and Rhinoptera neglecta (6.7%). Of the remaining species, 14 represented <1% of total biomass. A total of 81% of elasmobranchs recorded by observers were recorded as dead at capture, 15% as dying or moribund, and only ~4% as alive. The long average trawl duration is likely to be the main contributor to the high mortality rate. The sharks and shark-like rays in the bycatch typically have their fins removed. Although large portions of the fish bycatch are discarded (mostly dead), in the more inshore areas close to major towns, such as Kerema, there is an arrangement with local villagers to access the bycatch. In this scenario, up to 10 small boats can pull up to the trawler, with as many as 30–40 people coming on-board to divide up the bycatch from the trawl haul. Prior to this arrangement, inshore fishing by the trawlers was often met with hostility from the local villagers.

Size compositions

The bycatch of sharks and rays in the GoP prawn trawl fishery encompassed a wide size range of individuals.
Captured elasmobranchs ranged from newly hatched Chiloscyllium punctatum (18 cm TL) and presumably newborn Hemitrygon longicauda (12 cm DW), up to a 350 cm TL Pristis pristis. Thus, the catch composition from this fishery likely provides a reasonably comprehensive inventory of the sharks and rays occurring in the GoP. Based on the size-frequency data for the most abundant species (Figs 2–5), bycatch from the trawl fishery included juveniles close to birth size for all species, except R. taylori. The latter species has a birth size of 22–26 cm TL13, but the smallest specimens recorded were in the 30–32 cm length class. The newborns of this species are very slender and may escape capture by trawl nets, or alternatively may occur in different areas or positions in the water column, thus evading capture. The capture of late-term pregnant females indicates that they give birth in the GoP, and so newborn individuals are expected to occur within the trawled area.

Size-frequency histograms of the most abundant shark species represented by 9 or more individuals in the trawl catches of the Gulf of Papua: (a) Chiloscyllium punctatum; (b) Stegostoma fasciatum; (c) Hemigaleus australiensis; (d) Carcharhinus brevipinna; (e) Carcharhinus coatesi; (f) Carcharhinus fitzroyensis. In this Figure and Figs 3–5, the species are placed in phylogenetic order from bamboo sharks through to cownose rays; white bars denote females, grey bars males and black bars unsexed individuals; the total number (n) of individuals, known size at birth (red line) and known size at maturity (left dashed line denotes known size at maturity for males, right dashed line denotes known size at maturity for females; a single dashed line indicates both sexes mature at that size, or that maturity size is known for only males, in which case it is denoted with an 'M' above the line) is given for each species; the size scale bar (x-axis) extends to the maximum known size for each of the species.
Size-frequency histograms of the most abundant shark species represented by 9 or more individuals in the trawl catches of the Gulf of Papua: (a) Carcharhinus limbatus; (b) Carcharhinus macloti; (c) Rhizoprionodon acutus; (d) Rhizoprionodon taylori; (e) Eusphyra blochii; (f) Sphyrna lewini.

Size-frequency histograms of the most abundant ray species represented by 9 or more individuals in the trawl catches of the Gulf of Papua: (a) Rhynchobatus palpebratus; (b) Gymnura australis; (c) Hemitrygon longicauda; (d) Himantura australis; (e) Himantura leoparda; (f) Maculabatis astra.

Size-frequency histograms of the most abundant ray species represented by 9 or more individuals in the trawl catches of the Gulf of Papua: (a) Neotrygon annotata; (b) Pateobatis hortlei; (c) Aetomylaeus caeruleofasciatus; (d) Rhinoptera neglecta.

For several species, only immature individuals were recorded, e.g. Carcharhinus brevipinna, C. limbatus, Sphyrna lewini, Hemitrygon longicauda (Figs 2–4). Note that in this case and below, determination of immature individuals for each species is based on the theoretical size-at-maturity (see Figs 2–5) and not sampling observations from this study, as not all individuals were examined for maturity. The catches of several of the remaining abundant species consisted of mostly immature individuals with only a very small proportion of mature individuals, i.e. Chiloscyllium punctatum, Stegostoma fasciatum, Hemigaleus australiensis, Eusphyra blochii and Pateobatis hortlei (Figs 2, 3 and 5). A few species were represented by a higher proportion of mature than immature individuals, i.e. Carcharhinus fitzroyensis, C. macloti, R. taylori and Neotrygon annotata (Figs 2, 3 and 5). Only a small number of species were represented by all size classes in the bycatch of the trawl fishery, i.e. Carcharhinus coatesi, Gymnura australis and Maculabatis astra.

Sex ratios

For most species, sex ratios did not differ significantly from parity (χ2 test, P > 0.05).
However, significantly more females than males were recorded for Carcharhinus fitzroyensis (3.2:1, P < 0.05), Rhizoprionodon taylori (1.9:1, P < 0.001), Eusphyra blochii (1.9:1, P < 0.01), and Maculabatis astra (1.5:1, P < 0.05). In contrast, significantly more males than females were recorded for Carcharhinus coatesi (2.2:1, P < 0.001), C. limbatus (4.5:1, P < 0.05), Rhizoprionodon acutus (2.1:1, P < 0.001), and Pateobatis hortlei (3.8:1, P < 0.005).

Size at maturity

Size at maturity was calculated for those species with an adequate spread of data across the size classes for one or both sexes, i.e. Hemigaleus australiensis, Rhizoprionodon acutus, Gymnura australis, Maculabatis astra and Aetomylaeus caeruleofasciatus. Note that for the two most abundant shark species, the Australian blackspot shark C. coatesi and Australian sharpnose shark R. taylori, size-at-maturity of GoP populations will be provided in two manuscripts currently in review (L. Baje, unpublished data). Males of Hemigaleus australiensis below 56 cm TL possessed non- or partially calcified claspers, while claspers of specimens above 65 cm TL were all fully calcified (Fig. 6a). The L50 for males was calculated as 60.6 cm TL, but the high upper confidence value of 189.5 suggests the data were not adequate for a robust calculation. Claspers of Rhizoprionodon acutus below 43 cm TL were non-calcified, while all those above 76 cm TL were fully calcified (Fig. 6b). The L50 for males was calculated as 58.8 cm TL (57.0–62.2 cm TL). Only a single adult (106 cm TL) and a single subadult (99 cm TL) male Rhynchobatus palpebratus were recorded (Fig. 6c). Although an accurate L50 could not be calculated, size at maturity likely occurs between 99 and 106 cm TL.

Clasper length vs. size (total length, TL, or disc width, DW) relationships for five species of sharks and rays: (a) Hemigaleus australiensis; (b) Rhizoprionodon acutus; (c) Rhynchobatus palpebratus; (d) Gymnura australis; and (e) Maculabatis astra.
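L50 and DW50 values like those reported here are conventionally estimated from a logistic maturity ogive fitted to binary maturity-at-length data. The text does not restate the exact fitting procedure used, so the sketch below only shows the ogive form and the L50 it implies; the coefficients a and b are hypothetical illustrative values, not fitted results.

```python
import math

def p_mature(length_cm, a, b):
    """Logistic maturity ogive: P(mature | length) = 1 / (1 + exp(-(a + b*L)))."""
    return 1.0 / (1.0 + math.exp(-(a + b * length_cm)))

def l50(a, b):
    """Length at which 50% of individuals are mature: L50 = -a / b."""
    return -a / b

a, b = -15.0, 0.25  # hypothetical coefficients, not fitted values
print(l50(a, b))            # → 60.0 (cm)
print(p_mature(60.0, a, b)) # → 0.5, by construction at L50
```

Confidence intervals such as the 57.0–62.2 cm TL range quoted above would come from the uncertainty in the fitted a and b, typically via likelihood profiling or bootstrapping.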
All male Gymnura australis below 34 cm DW possessed non- or partially calcified claspers, while all specimens above 38 cm DW had fully calcified claspers (Fig. 6d). The DW50 for males was calculated as 34.2 cm DW (33.8–35.1 cm DW). All female G. australis below 48 cm DW were immature, while all those above 52 cm DW were mature. The DW50 for females was calculated as 50.0 cm DW (46.2–52.1 cm DW). All male Maculabatis astra below 39 cm DW possessed non-calcified claspers, while all those above 45 cm DW possessed fully calcified claspers (Fig. 6e). The DW50 for males was calculated as 41.5 cm DW (38.1–42.0 cm DW). Two male Aetomylaeus caeruleofasciatus of 34 and 37 cm DW possessed non-calcified claspers, while two males of 45 and 48 cm DW possessed fully calcified claspers. The DW50 for males was calculated as 36.3 cm DW (30.0–40.2 cm DW).

Spatial patterns in elasmobranch abundance

Catch composition between fishing zones was significantly different overall (ANOSIM, P < 0.001; R = 0.378; Fig. 7), and in most of the pairwise comparisons, except zone 0 vs. zones 1 and 5, and zone 1 vs. all other areas (due to low sample sizes). The species most diagnostically different across zones were C. coatesi, R. acutus, R. taylori, H. australiensis, M. astra and G. australis, and to a lesser extent E. blochii and C. brevipinna (Table 4).

Non-metric multidimensional scaling (MDS) ordination of the elasmobranch catches in each of the fisheries management zones of the Gulf of Papua. Within each zone, each sample represents 5 randomly pooled trawl sets.

Table 4 Species identified by similarity percentages (SIMPER) as typifying fishing zones (bold text), and those species distinguishing each pair of fishing zones (normal text). Note, the low number of samples for zone 1 did not allow determination of typifying species, and all pairwise comparisons were not significant based on ANOSIM results. Thus, zone 1 is not included in this table.
Also excluded from this table is zone 3, which did not have adequate trawls to include in analyses, and zone 4, where no elasmobranchs were caught in the three trawls undertaken.

When comparing the number of each species caught per trawl in each of the fishing zones, clear species distribution patterns were apparent. Neotrygon annotata (n = 35) was only caught in the western half of the GoP (fishing zones 0–4), and not in the eastern half (fishing zones 5–8). Other species were far more abundant in the western than eastern half of the GoP: E. blochii (1.70 vs. 0.55 individuals/trawl), Carcharhinus fitzroyensis (0.31 vs. 0.06 individuals/trawl), Rhynchobatus palpebratus (0.81 vs. 0.46 individuals/trawl), Hemitrygon longicauda (0.72 vs. 0.19 individuals/trawl), Himantura australis (0.34 vs. 0.08 individuals/trawl), and Pateobatis hortlei (1.04 vs. 0.15 individuals/trawl). In contrast, some species were only caught in the eastern half of the GoP (restricted to species with 5 or more individuals): Stegostoma fasciatum (n = 10), Carcharhinus brevipinna (n = 20), Glaucostegus typus (n = 5) and Aetobatus ocellatus (n = 5). The following species were also more abundant in the eastern than the western half of the GoP: Chiloscyllium punctatum (0.81 vs. 0.16 individuals/trawl), H. australiensis (0.97 vs. 0.27 individuals/trawl), R. acutus (1.88 vs. 1.14 individuals/trawl), R. taylori (3.63 vs. 2.88 individuals/trawl), S. lewini (1.42 vs. 0.33 individuals/trawl), M. astra (1.87 vs. 0.87 individuals/trawl), Aetomylaeus caeruleofasciatus (0.53 vs. 0.03 individuals/trawl) and R. neglecta (0.37 vs. 0.01 individuals/trawl).

Temporal patterns in elasmobranch abundance

Elasmobranch catches were compared for trawls in the two seasons (NW and SE Monsoon) influencing PNG. There was no significant difference in catches of most species (ANOSIM, P > 0.05), although this was likely the consequence of the low abundances recorded for most species.
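The per-species seasonal (and earlier, sex-ratio) comparisons rely on chi-square tests of counts in two categories. A minimal stdlib sketch of the two-category goodness-of-fit statistic follows; the text does not state how expected counts were derived, so the proportional-to-effort split below, and the counts themselves, are illustrative assumptions.

```python
def chi_square_two_categories(observed, expected_props=(0.5, 0.5)):
    """Chi-square goodness-of-fit statistic for two categories (df = 1)."""
    total = sum(observed)
    stat = 0.0
    for obs, p in zip(observed, expected_props):
        expected = total * p
        stat += (obs - expected) ** 2 / expected
    return stat

CRITICAL_5PCT_1DF = 3.841  # chi-square critical value for df = 1, alpha = 0.05

# Hypothetical counts for one species: 120 individuals in SE Monsoon trawls
# vs 40 in NW Monsoon trawls, with the expected split set (as an assumption)
# proportional to a hypothetical 64%/36% share of trawling effort per season.
stat = chi_square_two_categories((120, 40), expected_props=(0.64, 0.36))
print(round(stat, 2), stat > CRITICAL_5PCT_1DF)  # → 8.4 True
```

With the default 50:50 expectation the same function serves as the sex-ratio parity test reported per species above.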
However, significantly more individuals of the following species were caught in the SE Monsoon trawls than the NW Monsoon trawls: C. punctatum (χ2 test, P < 0.005), H. australiensis (χ2 test, P < 0.005), C. brevipinna (χ2 test, P < 0.005), R. taylori (χ2 test, P < 0.005), S. lewini (χ2 test, P < 0.005), and R. neglecta (χ2 test, P < 0.05). In contrast, significantly more individuals of the following species were caught in the NW Monsoon trawls than the SE Monsoon trawls: C. coatesi (χ2 test, P < 0.005), R. palpebratus (χ2 test, P < 0.005), N. annotata (χ2 test, P < 0.005) and P. hortlei (χ2 test, P < 0.05). Since the catch composition was shown to vary across fishing zones, particularly when comparing the eastern and western halves of the GoP, comparisons between the seasons were restricted to fishing zones with high numbers of trawls carried out in both seasons, i.e. zones 6, 7 and 8. Following ordination of the elasmobranch catches for fishing zone 6, the NW Monsoon samples (n = 2) formed a group discrete from the SE Monsoon samples (Fig. 8). ANOSIM demonstrated that the catch data across the two seasons were significantly different (P < 0.005; R = 0.464). SIMPER designated R. taylori to be the main species causing this difference, followed by M. astra and C. coatesi. It should be noted that only 2 samples were available for the NW Monsoon. Additional samples from this zone during the NW Monsoon are needed to validate whether these differences are real. Similar ordinations for fishing zones 7 and 8 showed less discrete groupings of NW and SE Monsoon samples, and ANOSIM demonstrated that the catch data were not significantly different (P > 0.05) in either case. Significantly more R. taylori were caught in the SE Monsoon period (0.57 individuals hr−1) across all zones than in the NW Monsoon period (0.24 individuals hr−1) (χ2 test, P < 0.005). More importantly, R.
taylori was only recorded in the bycatch in zones 5–8 during the SE Monsoon, and not in any of the NW Monsoon trawls. It should be noted that although ~8 times more trawling occurred across these four zones in the SE Monsoon, it is still notable that no R. taylori were recorded in the 105 hours of trawling during the NW Monsoon in these zones. In contrast, 93 R. taylori were recorded in the trawl catches in zones 0–2 during the NW Monsoon (0.33 individuals hr−1, vs. 0.57 individuals hr−1 in zones 5–8 in the SE Monsoon).

Non-metric multidimensional scaling (MDS) ordination of the elasmobranch catches in fisheries management zone 6 in both the Northwest (NW) and Southeast (SE) Monsoon seasons. Within each season, each sample represents 5 randomly pooled trawl sets.

When comparing day vs. night trawls, significantly more individuals of the following species were caught at night: Chiloscyllium punctatum (χ2 test, P < 0.005), Hemigaleus australiensis (χ2 test, P < 0.005), Gymnura australis (χ2 test, P < 0.05), Hemitrygon longicauda (χ2 test, P < 0.005) and Rhinoptera neglecta (χ2 test, P < 0.05). No species were caught in significantly higher numbers during the day. Ordination of the overall elasmobranch catches for fishing zones 2, 6, 7 and 8 showed mostly high overlap between day and night samples. ANOSIM demonstrated that the catch data were not significantly different (P > 0.05) between day and night in fishing zones 6, 7 and 8, but were weakly significantly different in fishing zone 2 (P < 0.05; R = 0.266). SIMPER designated R. taylori, C. punctatum and R. palpebratus to be the species most responsible for the difference between day and night trawls in fishing zone 2.

The large number of elasmobranch species recorded in this study, i.e. 18 sharks and 22 rays, reflects the high diversity of sharks and rays in PNG.
Although the GoP prawn fishery only covers a relatively small proportion of PNG's marine area, ~31% of the confirmed 130 elasmobranch species recorded from PNG13 were caught in this fishery. It should be noted that some larger and/or faster-swimming species may not be captured in prawn trawl nets (see e.g.14). Since no Bycatch Reduction Devices (BRDs) or Turtle Excluder Devices (TEDs) are used in the GoP prawn fishery and the nets have a stretched mesh size of ~50 mm, all bycatch encountered is likely to be retained in the trawl nets. PNG NFA is currently investigating the feasibility of introducing BRDs and/or TEDs into the GoP prawn fishery. The closest comparable prawn trawl fisheries to the GoP prawn fishery are the Northern Prawn Fishery (NPF) in northern Australia and the Torres Strait Prawn Fishery (TSPF). The NPF operates over two fishing seasons, the first of 2.5 months and the second of up to 4 months duration, while the TSPF operates at night from 1 February to 1 December (http://www.afma.gov.au/fisheries/northern-prawn-fishery/). TEDs and BRDs were introduced into these fisheries in 200015. In a study carried out before the introduction of TEDs and BRDs16, the fish bycatch from 401 trawls was determined. The total trawling time in that study was ~204 hrs, vs. 1,273 hrs in the present study. That study recorded a total of ~1,160 kg of elasmobranchs from 25 species and 11 families (vs. 4,368 kg from 40 species and 14 families in the present study), with a higher elasmobranch CPUE by weight than in the present study, i.e. 5.7 vs. 3.4 kg elasmobranchs hr−1. In a larger survey of the NPF, a bycatch of 56 elasmobranch species from 16 families was recorded15. A total of 35 of the 40 species recorded in the present study have also been recorded from the bycatch of the NPF15, highlighting the similarity in elasmobranch assemblages between the two regions. The high elasmobranch species diversity in the GoP prawn fishery presents a challenge to the monitoring and management of this fishery.
The prawn trawl management plan12 states that "removing of fin from shark and returning to the sea alive is prohibited", with no other policies regarding sharks and rays caught as bycatch. There was no evidence from the observer data of any breaches of this policy; however, compliance levels when observers are not present are unknown. It is important to recognise the high level of mortality of sharks and rays in this fishery (81% of sharks and rays were dead at capture), due largely to the long haul times (up to 4 hrs) and the lack of BRDs and TEDs. While TEDs are likely to exclude larger sharks and rays, particularly wedgefishes, the majority of the catches are small rays and sharks which are unlikely to be excluded15. Furthermore, the most threatened species, the sawfishes, are likely to still get entangled in the net or the TED itself. Thus, although the introduction of TEDs and BRDs into the GoP prawn fishery in the future could see a reduction in the capture of some sharks and rays, it is unlikely to prevent the majority of sharks and rays from being caught. The susceptibility of the various shark and ray species caught in the NPF and TSPF in northern Australia has been investigated15. The species deemed most at risk and least sustainable in the NPF were the four species of sawfish (Pristidae) and Pateobatis jenkinsii (not recorded in the GoP prawn fishery bycatch). A similar sustainability study and ecological risk assessment needs to be conducted on the elasmobranch bycatch in the GoP prawn fishery. It is likely that many of the larger species of sharks and rays are highly susceptible to trawling activities. However, it should be noted that a large portion of the Gulf of Papua is considered not suitable for trawling11, so there are likely to be refugia away from trawling activities for a number of species.
Any assessment of the sustainability of elasmobranch species in the GoP prawn fishery needs to be extended to incorporate the impact of other fisheries operating in the GoP. In particular, various coastal fisheries, e.g. gillnetting and seine netting, capture many of the same species caught as bycatch in the GoP prawn fishery (W. White, unpublished data). The limited information available for these fisheries indicates that many species overlap with the GoP prawn fishery. The significantly lower CPUE in the eastern half of the GoP may be due to a number of factors. Firstly, the western half of the GoP is adjacent to enormous river outflows, e.g. the Fly, Kikori and Purari rivers, with presumably higher productivity compared to the eastern half of the GoP. Thus, the western half may naturally have higher abundances of elasmobranchs, but there are no data that bear on this question. An alternative explanation is that the much higher trawling pressure in the eastern half of the GoP, e.g. 70% of trawls recorded in this study were in zones 6–8, has resulted in lower abundances of elasmobranchs through overexploitation. However, without a historical time series of elasmobranch CPUE data from the GoP prawn fishery, it is not possible to determine the impacts of trawling on elasmobranch populations and species in the GoP. The trawl data have contributed to knowledge of elasmobranch diversity in the region. One species of shark (C. fitzroyensis) and two species of rays (M. microps and U. acanthobothrium) recorded during this study were the first records of these elasmobranch species in PNG waters13. Furthermore, specimens of H. australis and A. caeruleofasciatus retained from the trawl catches contributed to the description of these two recently described species, both also found in northern Australia17,18. The presence of only immature individuals of C. brevipinna, C. limbatus and S.
lewini in the trawl bycatch suggests that these species may have nursery areas within the GoP, but this suggestion needs to be evaluated rigorously based on the criteria that are used to positively identify nursery locations19,20. These three species have been recorded as using inshore nursery areas for their young elsewhere in their wide ranges, e.g. off South Carolina in the USA21,22,23. Only immature individuals of H. longicauda were also recorded. New-born and juvenile individuals of this species are regularly caught in the intertidal zone by seine net fishers along the Western and Gulf Provinces (W. White, unpublished data), and adults are largely unknown. Recently, several images of adult specimens have been taken from shallow waters in the Western Province of PNG (J. Page, pers. comm.). Adults of this species may occupy different, possibly more specific habitats than the juveniles, outside of the regularly trawled area in the GoP. The full size range of a number of elasmobranch species was recorded in the trawl bycatch in the GoP, indicating that they not only give birth in this area, but occur in these inshore waters at all stages of their life cycle. The most abundant sharks which fall into this category were C. punctatum, H. australiensis, C. coatesi, R. acutus, R. taylori and E. blochii, which were represented in the trawl bycatch by a wide size range of individuals from juveniles to adults. Similar distributions are known for these species from northern Australia where they also occur24,25 (in25 two species under the older names H. microstoma [=australiensis] and C. dussumieri [=C. coatesi]). The most abundant rays represented in the catch by a wide size range were R. palpebratus, G. australis, M. astra and A. caeruleofasciatus. All size ranges of these four ray species were also represented in the trawl bycatch in Australia's northern prawn fishery15 (in15 three species under the older names R. djiddensis [presumably mostly R. palpebratus], Himantura toshi [=M.
astra], and A. nichofii [=A. caeruleofasciatus]). Although a number of species were caught in significantly different numbers in the two seasons examined, the SE Monsoon and NW Monsoon, the effects of fishing zone also needed to be taken into account. The most informative seasonal information came from comparing catches in the two seasons from within a fishery management zone. Results showed that the biggest difference between seasons was the lack of catches of R. taylori in the eastern zones during the NW Monsoon. Off northeastern Australia, R. taylori was found to prefer seagrass habitats, but between December and February they abruptly move to sandy inshore habitats, coinciding with increased river discharge26. This may account for the seasonal differences in R. taylori catches in the GoP. In the NW Monsoon, increasing river discharges into the GoP may have a similar effect, causing R. taylori to select more inshore, sandy habitats where trawling is limited. As more river outflows are present in the western half of the GoP, this would account for catches of R. taylori in zones 0–2 during the NW Monsoon but not in zones 5–8. The western half is also shallower, so there is likely a combined effect of R. taylori populations moving more inshore and westward during the NW Monsoon. Unfortunately, no trawls in zones 0–2 were recorded in this study during the SE Monsoon to determine whether the catches remained stable or differed significantly according to the season. More detailed seasonal catch data are required from within each of the management zones to confirm the above hypotheses regarding R. taylori and to determine other seasonal patterns. Although only weakly significant or no differences were seen in catches between day and night-time trawls, several species were caught in significantly higher numbers at night than during the day, i.e. C. punctatum, H. australiensis, G. australis, H. longicauda and R. neglecta.
In contrast, no species were caught in significantly higher numbers during the day. It is possible that these species undergo diel movements between shallower, inshore waters and the deeper coastal waters where trawling is more prevalent. There are limited data on diel patterns in these species from trawl bycatch data elsewhere, or more generally in similar habitats. More detailed bycatch data from the GoP, ideally coupled with comparative inshore catch composition data, are needed to be able to investigate diel patterns in more detail. The GoP prawn fishery bycatch includes a wide diversity of shark and ray species, ranging in size from only 12 cm in width to over 3 m in length. Prior to this study, no data existed on the elasmobranch bycatch of the GoP prawn fishery in PNG. This study provides crucial baseline data for this fishery in PNG. The introduction of BRDs and TEDs in the future will likely benefit at least the larger size classes of a number of species that are currently landed in this fishery15,27. The long average duration of the trawl hauls in the GoP prawn fishery contributes to the high catch mortality of sharks and rays, which needs to be considered by fisheries managers. The removal of fins from most sharks and the use of the meat of some species possibly makes them a desirable bycatch, which should also be considered by fisheries managers. Further research and monitoring of the GoP prawn fishery shark and ray bycatch is required, coupled with more comprehensive data from the coastal fisheries adjacent to the fisheries management zones. The GoP is located on the southern coastline of mainland PNG and is adjacent to the northwestern margin of the Coral Sea. It has a total area of ~30,000 km2 and consists of a broad shelf with a maximum width of ~150 km near the Fly Delta, narrowing to less than 20 km east of the Purari River delta28. The southwestern tip of the GoP joins the shallow Torres Strait shelf of northeastern Australia.
The numerous rivers which flow into the GoP, including the massive Fly River system, discharge ~1.5 billion tonnes of sediments each year28. Within the GoP, muddy to sandy bottoms are prevalent to about 50 m depth, creating a large area suitable for trawling. Between 50 and 70 m, large rocky peaks occur which are unsuitable for trawling, and beyond about 80 m depth the sea floor rapidly drops off, making trawling difficult11. The Gulf of Papua prawn fishery is divided into eight management zones in the coastal waters of the GoP: 1 – North Fly; 2 – Cape Blackwood; 3 – Purari; 4 – Orokolo Bay; 5 – West Kerema Bay; 6 – Kerema Bay; 7 – Freshwater Bay; 8 – Iokea29 (Fig. 1). Some trawling also occurs off the Fly River mouth southwest of zone 1; this area is referred to as 0 – Fly. The trawl zones from Cape Blackwood to Iokea (2–8) are closed to fishing between 1st December and 31st March each year under the current Gulf of Papua Prawn Fishery Management Plan12. This work is a collaboration with the National Fisheries Authority (NFA), the government agency responsible for managing commercial fisheries and implementing fisheries research in PNG. Fishery observers were deployed on-board prawn trawlers with the task of identifying and recording sharks and rays observed in the bycatch, as well as the additional parameters described below. The sharks recorded and/or collected by observers in this study had already suffered mortality in the process of fishing and subsequent landing, and no sharks were intentionally sacrificed for the study. Although some sharks and rays were alive when the nets were emptied on the vessel deck, they had all suffered mortality before observers were able to handle them. All sampling procedures were approved by the NFA. No further permits were required by relevant authorities. All data were obtained by PNG NFA observers during commercial prawn trawling activities.
The PNG NFA observer program consists of well-trained observers (trained both by the Secretariat of the Pacific Community fisheries program and by the current project) with detailed data collection protocols and species identification guides. Five PNG NFA observers were deployed on 7 commercial prawn trawl trips (across 5 vessels) between June 2014 and August 2015. Details of the 5 trawl vessels on which observers were placed are provided in Table 5. Twin- or quad-rigged trawls were deployed, with the total sweep of both of these rigging types being 60 m8. A small 'try' net was deployed during the trawls and was checked every 15 minutes. When the try net indicated suitable prawn catches, the vessel trawled along that depth contour for up to 4 hours8. The nets use ~50 mm stretched mesh, the minimum mesh size permitted for catching prawns in this fishery12. During a fishing trip, trawling occurred on a 24-hour basis and each vessel trawled for about 250 days per year during the open season8. Since only a single observer was deployed per fishing trip, observer coverage was restricted to around 12 hours per day, i.e. about 4 trawls per day. Bycatch data used in this paper are available from the Dryad Digital Repository: https://doi.org/10.5061/dryad.f77972k. All data used in this study were collected by PNG National Fisheries Authority personnel, the approved national authority for fisheries research in PNG.

Table 5 Specifications of the five trawl vessels on which observers were deployed in this study (from8).

Elasmobranch species identification

All sharks and shark-like rays less than ~80 cm total length (TL) and rays less than ~40 cm disc width (DW) were retained whole and frozen for subsequent processing in Port Moresby. All larger specimens were photographed, measured and sexed on the deck and a genetic sample was taken and stored in ethanol for subsequent DNA analysis if required.
Sampling processes enabled the identification and accurate verification of all sharks and rays caught during trawling. For specimens that could not be accurately identified from the images taken by the observers, genetic techniques were employed to confirm identifications. To allow for the best comparisons with other species, the NADH2 mitochondrial marker was sequenced and compared with other specimens in the Chondrichthyan Tree of Life database (https://sharksrays.org). Amplification and sequencing of the NADH2 gene followed the protocols outlined in30.

Catch composition

For each specimen, the horizontal stretched total length (TL; for all sharks and shark-like rays, i.e. guitarfishes, wedgefishes and sawfishes) or disc width (DW; for all stingrays, eagle, butterfly and cownose rays) was recorded. Sex was also recorded and, for males, the degree of calcification of the claspers and the outer length of the claspers were recorded where possible. However, clasper length was only recorded for a subset of those males for which clasper calcification was recorded. For specimens frozen on-board, processing was undertaken at a laboratory at the Biological Sciences building at the University of Papua New Guinea. As well as TL or DW, total weight, female maturity (where possible) and clasper outer length were also recorded. Internal male maturity stages, e.g. testes development and degree of coiling of the vas deferens, were not recorded due to time constraints and because claspers provide an accurate means of determining maturity in males. The number, sex and size of embryos were recorded from pregnant females. Significance of sex ratios for species with more than 10 sexed individuals across the fishing zones was tested with a χ2 test in Microsoft™ Excel. The condition at the time of landing on the vessel deck (dead, dying or moribund, or alive) was recorded by observers for 924 of the 2,030 elasmobranchs caught.
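The 1:1 sex-ratio test described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' analysis: the study used Microsoft™ Excel, and the counts below are invented, not taken from the paper.

```python
# A minimal, stdlib-only sketch of a chi-squared goodness-of-fit test
# against an expected 1:1 sex ratio. The counts are hypothetical.
import math

def sex_ratio_chi2(males: int, females: int):
    """Return (chi-squared statistic, p-value) for H0: 1:1 sex ratio."""
    expected = (males + females) / 2
    stat = ((males - expected) ** 2 + (females - expected) ** 2) / expected
    # With one degree of freedom the chi-squared p-value has a closed form.
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

stat, p = sex_ratio_chi2(30, 60)  # hypothetical counts for one species
print(round(stat, 2), p < 0.05)  # → 10.0 True
```

For a 1:1 expected ratio the test has a single degree of freedom, which is why the p-value reduces to the complementary error function used above.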
Maturity analyses

For species with adequate maturity data (>15 individuals with maturity recorded, representing both juveniles and adults), the size (TL or DW) at maturity of females and males was calculated. For males, individuals with either non-calcified or partially calcified claspers were considered immature, while those with fully calcified claspers were considered mature31. For females, individuals without mature uteri and ovaries were considered immature and those with mature ovaries and uteri were considered mature31. The size at which 50% of females or males of a particular species attain maturity (S50) was derived using logistic regression, where the proportion P of sharks that were mature at size S was calculated as

$$P=\frac{1}{1+\exp [-\ln(19)\frac{(S-{S}_{50})}{({S}_{95}-{S}_{50})}]},$$

where S50 and S95 are the sizes at which 50% and 95% of the individuals, respectively, were mature. Maximum likelihood estimates of the parameters were obtained using the routine SOLVER in Microsoft™ Excel. Reported estimates of parameters were determined as the median values derived from 200 sets of randomly resampled data, with the same sample size, drawn from the data on the observed maturity status at size for individuals. The approximate 95% CI were estimated as the 2.5 and 97.5 percentiles of the 200 estimates resulting from these resampled data32. Clasper (outer) length vs. body size relationships were produced for species with adequate sample sizes of immature and mature individuals. Note that only a subsample of specimens which had clasper calcification recorded also had clasper length measured.

Calculating biomass

The weight of each individual measured but not weighed was calculated using a power curve (W = aL^b) derived from individuals that were measured and weighed. For those species without adequate numbers of measured and weighed individuals, a and b parameters were obtained from published sources.
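The maturity-ogive fit and bootstrap described above can be sketched as follows. This is an illustrative reimplementation under stated assumptions, not the authors' code: the study used Excel's SOLVER and 200 resamples, whereas this sketch uses a crude grid search as the maximum-likelihood routine, 100 resamples, and simulated data with an assumed true S50 of 55 cm.

```python
# Sketch of fitting the logistic maturity ogive P = 1/(1+exp(-ln(19)(S-S50)/(S95-S50)))
# by maximum likelihood, with percentile bootstrap CIs. Data are simulated.
import math
import random

def ogive(size, s50, s95):
    """Proportion mature at a given size (logistic form from the text)."""
    return 1.0 / (1.0 + math.exp(-math.log(19) * (size - s50) / (s95 - s50)))

def neg_log_lik(s50, s95, data):
    """Negative binomial log-likelihood of (size, mature?) observations."""
    if s95 <= s50:
        return float("inf")
    total = 0.0
    for size, mature in data:
        p = min(max(ogive(size, s50, s95), 1e-9), 1.0 - 1e-9)
        total -= math.log(p) if mature else math.log(1.0 - p)
    return total

def fit(data):
    # Crude grid search standing in for SOLVER's maximum-likelihood routine.
    grid = [(a, b) for a in range(30, 80, 2) for b in range(32, 100, 2)]
    return min(grid, key=lambda prm: neg_log_lik(prm[0], prm[1], data))

random.seed(1)
# Hypothetical observations: (length in cm, mature?) with true S50 = 55, S95 = 70.
data = [(L, random.random() < ogive(L, 55, 70)) for L in range(30, 92, 2)]
s50, s95 = fit(data)

# Percentile bootstrap for S50 (100 resamples here; the paper used 200).
boots = sorted(fit(random.choices(data, k=len(data)))[0] for _ in range(100))
ci_s50 = (boots[2], boots[97])  # ~2.5 and 97.5 percentiles
print(s50, s95, ci_s50)
```

With real maturity-at-size records the `data` list would simply be replaced by the observed (size, maturity) pairs for one sex of one species.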
If no published parameters were available, those from a morphologically similar species were used to estimate biomass, including: Neotrygon annotata parameters used for N. picta, Aetobatus narinari parameters used for A. ocellatus and Mobula alfredi, and Himantura australis/leoparda parameters used for Pateobatis fai, Pastinachus ater, Urogymnus acanthobothrium, U. granulatus and an unknown stingray. The weight of the single Megatrygon microps recorded (~180 cm DW) was conservatively estimated to be ~80 kg based on a published weight of 75 kg for a 170 cm DW specimen from Iranian waters33.

Spatial and temporal multivariate analyses

The number of each elasmobranch species in each trawl haul in each of the fishing zones (0–8) was determined. Single trawl hauls often contained only a few of the total elasmobranch species recorded, which was problematic for multivariate analyses. To overcome this, data were pooled for groups of trawls, similar to what is done with stomach content data34,35. Thus, the catch data were randomly allocated into groups of five trawl hauls within each of the fishing zones and mean catch composition values determined. These pooled samples were subjected to non-metric multidimensional scaling (MDS) ordination in order to determine whether fishing zone influenced the elasmobranch species composition. Prior to subjecting the pooled catch data to MDS ordination, data were square-root transformed and a similarity matrix was constructed using the Bray-Curtis similarity coefficient, with ordination performed in the PRIMER v7 package following the techniques outlined in36. One-way analyses of similarities (ANOSIM) were used to test whether the elasmobranch catches differed significantly amongst fishing zones. Similarity percentages (SIMPER) were employed to determine the elasmobranch species that typified particular fishing zones and/or contributed most to the dissimilarities between zones37.
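The pooling and similarity-matrix steps described above can be sketched as follows. This is a minimal illustration with hypothetical haul counts (the study itself used the PRIMER v7 package): hauls are grouped in fives, mean catches per species are square-root transformed, and a pairwise Bray-Curtis similarity is computed.

```python
# Sketch of pooling trawl hauls and computing Bray-Curtis similarity
# between pooled samples. Counts are hypothetical, not the paper's data.
import math

def pool_means(hauls, group_size=5):
    """Mean catch per species over consecutive groups of hauls."""
    groups = [hauls[i:i + group_size] for i in range(0, len(hauls), group_size)]
    return [[sum(col) / len(g) for col in zip(*g)] for g in groups]

def bray_curtis_similarity(x, y):
    """PRIMER-style similarity: 100 * (1 - Bray-Curtis dissimilarity)."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return 100.0 * (1.0 - num / den)

# Ten hypothetical hauls with counts of three species -> two pooled samples.
hauls = [[4, 0, 1], [2, 1, 0], [3, 0, 0], [5, 2, 1], [1, 0, 0],
         [0, 3, 2], [1, 4, 1], [0, 2, 3], [2, 5, 0], [0, 1, 2]]
pooled = [[math.sqrt(v) for v in sample] for sample in pool_means(hauls)]
print(round(bray_curtis_similarity(*pooled), 1))  # → 63.1
```

In the full analysis, a matrix of such similarities over all pooled samples feeds into MDS ordination, ANOSIM and SIMPER.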
To remove the effect of the abundances of the various species on the ordination, a separate analysis with a presence/absence transformation was also undertaken. To determine the effect of season (Northwest Monsoon, NW, between November and April; or Southeast Monsoon, SE, between May and October) on catch composition, means of randomly allocated groups of five trawl hauls within either season within each fishing zone were produced and analysed as described above. Similarly, to determine the effect of day vs. night on catch composition, means of randomly allocated groups of five trawl hauls within day or night within each fishing zone were produced and analysed as above. To test for differences in the catch per unit of effort (CPUE, number per hour trawled) of elasmobranchs, a generalised linear mixed model (GLMM) with Fisheries Management Zone, Season (NW monsoon/SE monsoon) and Time of Day (day/night) as factors was used. Zones 3 and 4 had limited data and were thus excluded from the analyses. To account for differences in fishing power and observer ability between vessels, Fishing Trip was included as a random factor in the model. No interactions between factors were tested due to limitations in the data set. CPUE values were square-root transformed to meet assumptions of normality in the data.

Bonfil, R. Overview of world elasmobranch fisheries. FAO Fisheries Technical Paper 341. 1–119 (1994). Oliver, S., Braccini, M., Newman, S. J. & Harvey, E. S. Global patterns in the bycatch of sharks and rays. Mar. Pol. 54, 86–97 (2015). Walker, T. I. Management measures in Management techniques for elasmobranch fisheries. FAO Fish Technical Paper 474 (eds Musick, J. A. & Bonfil, R.) 216–242 (FAO, 2005). Stevens, J. D., Bonfil, R., Dulvy, N. K. & Walker, P. A. The effects of fishing on sharks, rays, and chimaeras (chondrichthyans), and the implications for marine ecosystems. ICES J. Mar. Sci. 57, 476–494 (2000). Arunrugstichai, S., True, J. D. & White, W. T.
Catch composition and aspects of the biology of sharks caught by Thai commercial fisheries in the Andaman Sea. J. Fish Biol. 92, 1487–1504 (2018). Blaber, S. J. M. et al. Elasmobranchs in southern Indonesian fisheries: the fisheries, the status of the stocks and management options. Rev. Fish Biol. Fisher. 19, 367–391, https://doi.org/10.1007/s11160-009-9110-9 (2009). Banks, R. & Macfadyen, G. A blueprint for sustainable tropical shrimp trawl fisheries. A WWF Commissioned report (Poseidon Aquatic Resource Management Ltd, 2010). Liviko, I. Gulf of Papua Status Report 2012. Report prepared for FAO/GEF Regional Workshop on Work Planning – Year 1, 6–9 November 2012 Bangkok, Thailand (National Fisheries Authority, 2012). Rapson, A. M. Small mesh trawling in Papua. Papua and New Guinea Agr. 10, 15–25 (1955). Rapson, A. M. & McIntosh, G. R. Prawn surveys in Papua and New Guinea. Biological Series 10/5 (Department of Agriculture, Stock and Fisheries, 1971). Kailola, P. J. & Wilson, M. A. The trawl fishes of the Gulf of Papua. Research Bulletin No. 20 (Department of Primary Industry, 1978). NFA. The Gulf of Papua Prawn Fishery Management Plan. (National Fisheries Authority, 1998). White, W. T. et al. Sharks and rays of Papua New Guinea. ACIAR Monograph No. 189 (Australian Centre for International Agricultural Research, 2018). Wassenberg, T. J. et al. The effectiveness of fish and shrimp trawls for sampling fish communities in tropical Australia. Fish. Res. 30, 241–251 (1997). Stobutzki, I. C., Miller, M. J., Heales, D. S. & Brewer, D. T. Sustainability of elasmobranchs caught as bycatch in a tropical prawn (shrimp) trawl fishery. Fish. Bull. 100, 800–821 (2002). Stobutzki, I. C., Miller, M. J., Jones, P. & Salini, J. P. Bycatch diversity and variation in a tropical Australian penaeid fishery; the implications for monitoring. Fish. Res. 53, 283–301 (2001). Last, P. R., White, W. T. & Naylor, G. Three new stingrays (Myliobatiformes: Dasyatidae) from the Indo–West Pacific. 
Zootaxa 4147, 377–402, https://doi.org/10.11646/zootaxa.4147.4.2 (2016). White, W. T., Last, P. R. & Baje, L. Aetomylaeus caeruleofasciatus, a new species of eagle ray (Myliobatiformes: Myliobatidae) from northern Australia and New Guinea. Ichthyol. Res. 63, 94–109 (2015). Heupel, M. R., Carlson, J. K. & Simpfendorfer, C. A. Shark nursery areas: concepts, definition, characterization and assumptions. Mar. Ecol. Prog. Ser. 337, 287–297 (2007). Martins, A. P. B., Heupel, M. R., Chin, A. & Simpfendorfer, C. A. Batoid nurseries: definition, use and importance. Mar. Ecol. Prog. Ser. 595, 253–267 (2018). Castro, J. I. The shark nursery of Bulls Bay, South Carolina, with a review of the shark nurseries of the southeastern coast of the United States. Environ. Biol. Fish. 38, 37–48 (1993). Simpfendorfer, C. A. & Milward, N. E. Utilisation of a tropical bay as a nursery area by sharks of the families Carcharhinidae and Sphyrnidae. Environ. Biol. Fish. 37, 337–345 (1993). Duncan, K. M. & Holland, K. N. Habitat use, growth rates and dispersal patterns of juvenile hammerhead sharks Sphyrna lewini in a nursery habitat. Mar. Ecol. Prog. Ser. 312, 211–221 (2006). Stevens, J. D. & McLoughlin, K. J. Distribution, size and sex composition, reproductive biology and diet of sharks from northern Australia. Aust. J. Mar. Freshwat. Res. 42, 151–199 (1991). Harry, A. V. et al. Evaluating catch and mitigating risk in a multispecies, tropical, inshore shark fishery within the Great Barrier Reef World Heritage Area. Mar. Freshwat. Res. 62, 710–721 (2011). Munroe, S. E. M., Simpfendorfer, C. A. & Heupel, M. R. Habitat and space use of an abundant nearshore shark, Rhizoprionodon taylori. Mar. Freshwat. Res. 65, 959–968 (2014). Brewer, D., Rawlinson, N., Eayrs, S. & Burridge, C. An assessment of Bycatch Reduction Devices in a tropical Australian prawn fishery. Fish. Res. 36, 195–215 (1998). Harris, P. T. et al. 
Late quaternary deltaic and carbonate sedimentation in the Gulf of Papua foreland basin: response to sea-level change. J. Sediment. Res. 66, 801–819 (1996). Kare, B., Koren, L. & Milton, D. Biological survey trip report (23rd March–12th April 2004). (National Fisheries Authority, 2004). Naylor, G. J. P. et al. A DNA sequence-based approach to the identification of shark and ray species and its implications for global elasmobranch diversity and parasitology. B. Am. Mus. Nat. Hist. 367, 1–263 (2012). White, W., Platell, M. & Potter, I. Relationship between reproductive biology and age composition and growth in Urolophus lobatus (Batoidea: Urolophidae). Marine Biology 138, 135–147 (2001). Wood, M. Statistical inference using bootstrap confidence intervals. Significance 1, 180–182 (2004). Moore, A. B. M., White, W. T. & Peirce, R. Additions to the shark fauna of the Persian (Arabian) Gulf. Zool. Middle East 50, 83–88 (2010). Platell, M. E., Potter, I. C. & Clarke, K. R. Resource partitioning by four species of elasmobranch (Batoidea: Urolophidae) in coastal waters of temperate Australia. Mar. Biol. 131, 719–734 (1998). White, W. T., Platell, M. E. & Potter, I. C. Comparisons between the diets of four abundant species of elasmobranchs in a subtropical embayment: implications for resource partitioning. Mar. Biol. 144, 439–448 (2004). Clarke, K. R. & Gorley, R. N. Primer v5: user manual/tutorial. (Primer-E, 2001). Clarke, K. R. Non-parametric multivariate analyses of changes in community structure. Aust. J. Ecol. 18, 117–143 (1993). Motta, F. S., Caltabellotta, F. P., Namora, R. C. & Gadig, O. B. F. Length-weight relationships of sharks caught by artisanal fisheries from southeastern Brazil. J. App. Ichthyol. 30, 239–240 (2014). Lyle, J. M. Observations on the biology of Carcharhinus cautus (Whitley), C. melanopterus (Quoy & Gaimard) and C. fitzroyensis (Whitley) from northern Australia. Aust. J. Mar. Freshwat. Res. 38, 701–710 (1987). Castro, J. I. 
Biology of the blacktip shark, Carcharhinus limbatus, off the southeastern United States. Bull. Mar. Sci. 59, 508–522 (1996). Stevens, J. D. & Wiley, P. D. Biology of two commercially important carcharhinid sharks from northern Australia. Aust. J. Mar. Freshwat. Res. 37, 671–688 (1986). Stevens, J. D. & Lyle, J. M. Biology of three hammerhead sharks (Eusphyra blochii, Sphyrna mokarran and S. lewini) from northern Australia. Aust. J. Mar. Freshwat. Res. 40, 129–146 (1989). Salini, J. P. et al. Northern Australian sharks and rays: the sustainability of target and bycatch species. Phase 2. Final report Project No. 2002/064 (CSIRO, 2007). Gordon, I. A new record extending the southerly distribution of the shark ray (Rhina ancylostoma), and notes on the behaviour of the specimen in captivity. Aust. J. Mar. Freshwat. Res. 43, 319–323 (1992). Rajapackiam, S., Mohan, S. & Rudramurthy, N. On the landing of a large size guitar fish, Rhina ancylostoma at Chennai Fishery Harbour. Mar. Fish. Info. Serv. 191, 28 (2007). Uchida, S., Toda, M. & Kamei, Y. Reproduction of elasmobranchs in captivity in Elasmobranchs as living resources: advances in the biology, ecology, systematics, and the status of the fisheries. NOAA Technical Report, NMFS 90 (eds Pratt, H. L. Jr., Gruber, S.H. & Taniuchi, T.) 211–237 (NOAA, 1990). Wallace, J. H. The batoid fishes of the east coast of southern Africa. Part I: sawfishes and guitarfishes. Investigational Report, Oceanographic Research Institute 16, 1–32 (1967). Bassos-Hull, K. et al. Life history and seasonal occurrence of the spotted eagle ray, Aetobatus narinari, in the eastern Gulf of Mexico. Environ. Biol. Fish. 97, 1039–1056 (2014). The research was supported by funds from the Australian Centre for International Agricultural Research (ACIAR; project FIS/2012/102) and the PNG National Fisheries Authority. The authors would like to thank the NFA observers who collected the high quality data used in this study, i.e. 
Baera Nawia, Ian Tony, Ronald Wala, Sarea Tova and Siwen Ohuesaho. Collection of data would not have been possible without the help from the crew of the fishing vessels Charisma, Louro, Lavai No. 1, Siwi and Ipali. Thanks also go to Ralph Mana at the University of PNG who provided ample work space for processing and storage of the samples retained by the observers. Thanks also to Leban Gisawa, Thomas Usu, Luanah Yaman and Brian Kumasi (PNG NFA), Ann Fleming, Chris Barlow (ACIAR) and Jes Sammut (UNSW) for their support with this ACIAR project.

CSIRO Australian National Fish Collection, National Research Collections Australia, Castray Esplanade, Hobart, 7001, Tasmania, Australia: W. T. White & S. A. Appleyard. CSIRO Oceans & Atmosphere, Castray Esplanade, Hobart, 7001, Tasmania, Australia. Papua New Guinea National Fisheries Authority, P.O. Box 2016, Port Moresby, National Capital District, Papua New Guinea: L. Baje & B. Sabub. Centre for Sustainable Tropical Fisheries and Aquaculture & College of Science and Engineering, James Cook University, Townsville, Queensland, 4108, Australia: C. A. Simpfendorfer & A. Chin. College of Charleston, Hollings Marine Laboratory, 331 Fort Johnson Road, 29412, Charleston, SC, USA: E. Rochel. Florida Museum of Natural History, University of Florida, Gainesville, FL, 32611, USA: G. J. P. Naylor.

Conceived, designed, analysed and prepared the manuscript: W.W., S.A., L.B., C.S., A.C., E.R., G.N. Provided field support: B.S. All authors reviewed the manuscript. Correspondence to W. T. White. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. https://doi.org/10.1038/s41598-019-45715-w
Jan Maas ERC Grant IST Austria Analysis & Math. Physics Seminar Optimal transport, stochastic partial differential equations, stochastic analysis. Homogenisation of one-dimensional discrete optimal transport Abstract: This paper deals with dynamical optimal transport metrics defined by discretisation of the Benamou–Brenier formula for the Kantorovich metric $W_2$. Such metrics appear naturally in discretisations of $W_2$-gradient flow formulations for dissipative PDE. However, it has recently been shown that these metrics do not in general converge to $W_2$, unless strong geometric constraints are imposed on the discrete mesh. In this paper we prove that, in a $1$-dimensional periodic setting, discrete transport metrics converge to a limiting transport metric with a non-trivial effective mobility. This mobility depends sensitively on the geometry of the mesh and on the non-local mobility at the discrete level. Loosely speaking, the result quantifies to what extent discrete transport can make use of microstructure in the mesh to reduce the cost of transport. with P. Gladbach, E. Kopfer, and L. Portinale 30 pages, submitted for publication. Non-commutative calculus, optimal transport and functional inequalities in dissipative quantum systems Abstract: We study dynamical optimal transport metrics between density matrices associated to symmetric Dirichlet forms on finite-dimensional $C^*$-algebras. Our setting covers arbitrary skew-derivations and it provides a unified framework that simultaneously generalizes recently constructed transport metrics for Markov chains, Lindblad equations, and the fermionic Ornstein-Uhlenbeck semigroup. We develop a non-commutative differential calculus that allows us to obtain non-commutative Ricci curvature bounds, logarithmic Sobolev inequalities, transport-entropy inequalities, and spectral gap estimates. with E.
Carlen Scaling limits of discrete optimal transport Abstract: We consider dynamical transport metrics for probability measures on discretisations of a bounded convex domain in $\mathbb{R}^d$. These metrics are natural discrete counterparts to the Kantorovich metric $\mathbb{W}_2$, defined using a Benamou–Brenier type formula. Under mild assumptions we prove an asymptotic upper bound for the discrete transport metric $\mathcal{W}_\mathcal{T}$ in terms of $\mathbb{W}_2$, as the size of the mesh $\mathcal{T}$ tends to $0$. However, we show that the corresponding lower bound may fail in general, even on certain one-dimensional and symmetric two-dimensional meshes. In addition, we show that the asymptotic lower bound holds under an isotropy assumption on the mesh, which turns out to be essentially necessary. This assumption is satisfied for the regular triangular and hexagonal lattices, and it implies Gromov–Hausdorff convergence of the transport metric. with P. Gladbach and E. Kopfer On the geometry of geodesics in discrete optimal transport Abstract: We consider the space of probability measures on a discrete set $\mathcal{X}$, endowed with a dynamical optimal transport metric. Given two probability measures supported in a subset $\mathcal{Y} \subseteq \mathcal{X}$, it is natural to ask whether they can be connected by a constant speed geodesic with support in $\mathcal{Y}$ at all times. Our main result answers this question affirmatively, under a suitable geometric condition on $\mathcal{Y}$ introduced in this paper. The proof relies on an extension result for subsolutions to discrete Hamilton-Jacobi equations, which is of independent interest. with M. Erbar and M. Wirth Calc. Var. Partial Differential Equations 58 (2019), no. 1, 58:19. Gradient flow and entropy inequalities for quantum Markov semigroups with detailed balance Abstract: We study a class of ergodic quantum Markov semigroups on finite-dimensional unital $C^*$-algebras. 
These semigroups have a unique stationary state $\sigma$, and we are concerned with those that satisfy a quantum detailed balance condition with respect to $\sigma$. We show that the evolution on the set of states that is given by such a quantum Markov semigroup is gradient flow for the relative entropy with respect to $\sigma$ in a particular Riemannian metric on the set of states. This metric is a non-commutative analog of the $2$-Wasserstein metric, and in several interesting cases we are able to show, in analogy with work of Otto on gradient flows with respect to the classical $2$-Wasserstein metric, that the relative entropy is strictly and uniformly convex with respect to the Riemannian metric introduced here. As a consequence, we obtain a number of new inequalities for the decay of relative entropy for ergodic quantum Markov semigroups with detailed balance. J. Funct. Anal. 273 (5) (2017), 1810-1869. Transport based image morphing with intensity modulation Abstract: We present a generalized optimal transport model in which the mass-preserving constraint for the $L^2$-Wasserstein distance is relaxed by introducing a source term in the continuity equation. The source term is also incorporated in the path energy by means of its squared $L^2$-norm in time of a functional with linear growth in space. This extension of the original transport model enables local density modulation, which is a desirable feature in applications such as image warping and blending. A key advantage of the use of a functional with linear growth in space is that it allows for singular sources and sinks, which can be supported on points or lines. On a technical level, the $L^2$-norm in time ensures a disintegration of the source in time, which we use to obtain the well-posedness of the model and the existence of geodesic paths. The numerical discretization is based on the proximal splitting approach and selected numerical test cases show the potential of the proposed approach. 
Furthermore, the approach is applied to the warping and blending of textures. with M. Rumpf and S. Simon Scale Space and Variational Methods in Computer Vision. SSVM 2017. Lecture Notes in Computer Science, vol 10302. Springer, Cham Long-time behavior of a finite volume discretization for a fourth order diffusion equation Abstract: We consider a non-standard finite-volume discretization of a strongly non-linear fourth order diffusion equation on the $d$-dimensional cube, for arbitrary $d \geq 1$. The scheme preserves two important structural properties of the equation: the first is the interpretation as a gradient flow in a mass transportation metric, and the second is an intimate relation to a linear Fokker-Planck equation. Thanks to these structural properties, the scheme possesses two discrete Lyapunov functionals. These functionals approximate the entropy and the Fisher information, respectively, and their dissipation rates converge to the optimal ones in the discrete-to-continuous limit. Using the dissipation, we derive estimates on the long-time asymptotics of the discrete solutions. Finally, we present results from numerical experiments which indicate that our discretization is able to capture significant features of the complex original dynamics, even with a rather coarse spatial resolution. with D. Matthes Nonlinearity 29 (7) (2016), 1992-2023. Entropic Ricci curvature bounds for discrete interacting systems Abstract: We develop a new and systematic method for proving entropic Ricci curvature lower bounds for Markov chains on discrete sets. Using different methods, such bounds have recently been obtained in several examples (e.g., 1-dimensional birth and death chains, product chains, Bernoulli–Laplace models, and random transposition models). However, a general method to obtain discrete Ricci bounds had been lacking. Our method covers all of the examples above. In addition we obtain new Ricci curvature bounds for zero-range processes on the complete graph. 
The method is inspired by recent work of Caputo, Dai Pra and Posta on discrete functional inequalities. with M. Fathi Ann. Appl. Probab. 26 (3) (2016), 1774-1806. From large deviations to Wasserstein gradient flows in multiple dimensions Abstract: We study the large deviation rate functional for the empirical distribution of independent Brownian particles with drift. In one dimension, it has been shown by Adams, Dirr, Peletier and Zimmer that this functional is asymptotically equivalent (in the sense of $\Gamma$-convergence) to the Jordan–Kinderlehrer–Otto functional arising in the Wasserstein gradient flow structure of the Fokker–Planck equation. In higher dimensions, part of this statement (the lower bound) has been recently proved by Duong, Laschos and Renger, but the upper bound remained open, since their proof relies on regularity properties of optimal transport maps that are restricted to one dimension. In this note we present a new proof of the upper bound, thereby generalising the result of Adams, Dirr, Peletier and Zimmer to arbitrary dimensions. with M. Erbar and M. Renger Electron. Comm. Probab. 20 (2015), no 89, 1-12. Discrete Ricci curvature bounds for Bernoulli-Laplace and random transposition models Abstract: We calculate a Ricci curvature lower bound for some classical examples of random walks, namely, a chain on a slice of the $n$-dimensional discrete cube (the so-called Bernoulli–Laplace model) and the random transposition shuffle of the symmetric group of permutations on $n$ letters. with M. Erbar and P. Tetali Ann. Fac. Sci. Toulouse Math 24 (4) (2015), 781-800. A generalized model for optimal transport of images including dissipation and density modulation Abstract: In this paper the optimal transport and the metamorphosis perspectives are combined. For a pair of given input images, geodesic paths in the space of images are defined as minimizers of a resulting path energy.
To this end, the underlying Riemannian metric measures the rate of transport cost and the rate of viscous dissipation. Furthermore, the model is capable of dealing with strongly varying image contrast and explicitly allows for sources and sinks in the transport equations which are incorporated in the metric related to the metamorphosis approach by Trouvé and Younes. In the non-viscous case with source term, existence of geodesic paths is proven in the space of measures. The proposed model is explored on the range from merely optimal transport to strongly dissipative dynamics. For this model a robust and effective variational time discretization of geodesic paths is proposed. This requires minimizing a discrete path energy consisting of a sum of consecutive image matching functionals. These functionals are defined on corresponding pairs of intensity functions and on associated pairwise matching deformations. Existence of time discrete geodesics is demonstrated. Furthermore, a finite element implementation is proposed and applied to instructive test cases and to real images. In the non-viscous case this is compared to the algorithm proposed by Benamou and Brenier including a discretization of the source term. Finally, the model is generalized to define discrete weighted barycentres with applications to textures and objects. with M. Rumpf, C. Schönlieb and S. Simon ESAIM Math. Model. Numer. Anal. 49 (6) (2015), 1745-1769. Gradient flow structures for discrete porous medium equations Abstract: We consider discrete porous medium equations of the form $\partial_t\rho_t = \Delta \varphi(\rho_t)$, where $\Delta$ is the generator of a reversible continuous time Markov chain on a finite set $\mathcal{X}$, and $\varphi$ is an increasing function. We show that these equations arise as gradient flows of certain entropy functionals with respect to suitable non-local transportation metrics.
This may be seen as a discrete analogue of the Wasserstein gradient flow structure for porous medium equations in $\mathbf{R}^n$ discovered by Otto. We present a one-dimensional counterexample to geodesic convexity and discuss Gromov-Hausdorff convergence to the Wasserstein metric. with M. Erbar Discrete Contin. Dyn. Syst. 34 (4) (2014), 1355-1374. An analog of the 2-Wasserstein metric in non-commutative probability under which the fermionic Fokker-Planck equation is gradient flow for the entropy Abstract: Let $\mathfrak{C}$ denote the Clifford algebra over $\mathbb{R}^n$, which is the von Neumann algebra generated by $n$ self-adjoint operators $Q_j$, $j=1,\dots,n$ satisfying the canonical anticommutation relations, $Q_iQ_j+Q_jQ_i = 2\delta_{ij}I$, and let $\tau$ denote the normalized trace on $\mathfrak{C}$. This algebra arises in quantum mechanics as the algebra of observables generated by $n$ Fermionic degrees of freedom. Let ${\mathfrak P}$ denote the set of all positive operators $\rho\in\mathfrak{C}$ such that $\tau(\rho) =1$; these are the non-commutative analogs of probability densities in the non-commutative probability space $(\mathfrak{C},\tau)$. The Fermionic Fokker-Planck equation is a quantum-mechanical analog of the classical Fokker-Planck equation with which it has much in common, such as the same optimal hypercontractivity properties. In this paper we construct a Riemannian metric on ${\mathfrak P}$ that we show to be a natural analog of the classical $2$-Wasserstein metric, and we show that, in analogy with the classical case, the Fermionic Fokker-Planck equation is gradient flow in this metric for the relative entropy with respect to the ground state. We derive a number of consequences of this, such as a sharp Talagrand inequality for this metric, and we prove a number of results pertaining to this metric. Several open problems are raised. Comm. Math. Phys. 331 (3) (2014), 887-926. 
Approximating rough stochastic PDEs Abstract: We study approximations to a class of vector-valued equations of Burgers type driven by a multiplicative space-time white noise. A solution theory for this class of equations has been developed recently in [Hairer, Weber, Probab. Theory Related Fields, 2013]. The key idea was to use the theory of controlled rough paths to give definitions of weak/mild solutions and to set up a Picard iteration argument. In this article the limiting behaviour of a rather large class of (spatial) approximations to these equations is studied. These approximations are shown to converge and convergence rates are given, but the limit may depend on the particular choice of approximation. This effect is a spatial analogue to the Itô-Stratonovich correction in the theory of stochastic ordinary differential equations, where it is well known that different approximation schemes may converge to different solutions. with M. Hairer and H. Weber Comm. Pure Appl. Math. 67 (5) (2014), 776-870. Gromov-Hausdorff convergence of discrete transportation metrics Abstract: This paper continues the investigation of 'Wasserstein-like' transportation distances for probability measures on discrete sets. We prove that the discrete transportation metrics $\mathcal{W}_N$ on the $d$-dimensional discrete torus ${\mathbf{T}_N^d}$ with mesh size $\frac1N$ converge, when $N\to\infty$, to the standard 2-Wasserstein distance $W_2$ on the continuous torus in the sense of Gromov-Hausdorff. This is the first convergence result for the recently developed discrete transportation metrics $\mathcal{W}$. The result shows the compatibility between these metrics and the well-established 2-Wasserstein metric. with N. Gigli SIAM J. Math. Anal. 45 (2) (2013), 879-899. Poisson stochastic integration in Banach spaces Abstract: We prove new upper and lower bounds for Banach space-valued stochastic integrals with respect to a compensated Poisson random measure.
Our estimates apply to Banach spaces with non-trivial martingale (co)type and extend various results in the literature. We also develop a Malliavin framework to interpret Poisson stochastic integrals as vector-valued Skorohod integrals, and prove a Clark-Ocone representation formula. with S. Dirksen and J. van Neerven Electron. J. Probab. 18 (2013), no. 100, 1-28. Ricci curvature of finite Markov chains via convexity of the entropy Abstract: We study a new notion of Ricci curvature that applies to Markov chains on discrete spaces. This notion relies on geodesic convexity of the entropy and is analogous to the one introduced by Lott, Sturm, and Villani for geodesic measure spaces. In order to apply to the discrete setting, the role of the Wasserstein metric is taken over by a different metric, having the property that continuous time Markov chains are gradient flows of the entropy. Using this notion of Ricci curvature we prove discrete analogues of fundamental results by Bakry-Émery and Otto-Villani. Furthermore we show that Ricci curvature bounds are preserved under tensorisation. As a special case we obtain the sharp Ricci curvature lower bound for the discrete hypercube. Arch. Ration. Mech. Anal. 206 (3) (2012), 997-1038. A spatial version of the Itô-Stratonovich correction Abstract: We consider a class of stochastic PDEs of Burgers type in spatial dimension 1, driven by space-time white noise. Even though it is well-known that these equations are well-posed, it turns out that if one performs a spatial discretisation of the nonlinearity in the "wrong" way, then the sequence of approximate equations does converge to a limit, but this limit exhibits an additional correction term. This correction term is proportional to the local quadratic cross-variation (in space!) of the gradient of the conserved quantity with the solution itself.
This can be understood as a consequence of the fact that for any fixed time, the law of the solution is locally equivalent to Wiener measure, where space plays the role of time. In this sense, the correction term is similar to the usual Itô-Stratonovich correction term that arises when one considers different temporal discretisations of stochastic ODEs. with M. Hairer Ann. Probab. 40 (4) (2012), 1675–1714. Whitney coverings and the tent spaces $T^{1,q}(\gamma)$ for the Gaussian measure Abstract: We introduce a technique for handling Whitney decompositions in Gaussian harmonic analysis and apply it to the study of Gaussian analogues of the classical tent spaces $T^{1,q}(\gamma)$ of Coifman-Meyer-Stein. with J. van Neerven and P. Portal Ark. Mat. 50 (2) (2012), 379-395. Gradient flows of the entropy for finite Markov chains Abstract: Let $K$ be an irreducible and reversible Markov kernel on a finite set $\mathcal{X}$. We construct a metric $\mathcal{W}$ on the set of probability measures on $\mathcal{X}$ and show that with respect to this metric, the law of the continuous time Markov chain evolves as the gradient flow of the entropy. This result is a discrete counterpart of the Wasserstein gradient flow interpretation of the heat flow in $\mathbb{R}^n$ by Jordan, Kinderlehrer, and Otto (1998). The metric $\mathcal{W}$ is similar to, but different from, the $L^2$-Wasserstein metric, and is defined via a discrete variant of the Benamou-Brenier formula. A Trotter product formula for gradient flows in metric spaces Abstract: We prove a Trotter product formula for gradient flows in metric spaces. This result is applied to establish convergence in the $L^2$-Wasserstein metric of the splitting method for some Fokker-Planck equations and porous medium type equations perturbed by a potential. The published version of the article contains a typo in Theorem 1.1. The factor $\frac12$ on the left hand side of (1.4) should be removed. This has been corrected in an erratum. with Ph.
Clément J. Evol. Equ. 11 (2) (2011), 405-427. Conical square functions and non-tangential maximal functions with respect to the Gaussian measure Abstract: We study, in $L^{1}(\mathbb{R}^n;\gamma)$ with respect to the Gaussian measure, non-tangential maximal functions and conical square functions associated with the Ornstein-Uhlenbeck operator by developing a set of techniques which allow us, to some extent, to compensate for the non-doubling character of the Gaussian measure. The main result asserts that conical square functions can be controlled in $L^1$-norm by non-tangential maximal functions. Along the way we prove a change of aperture result for the latter. This complements recent results on Gaussian Hardy spaces due to Mauceri and Meda. Publ. Mat. 55 (2) (2011), 313-341. Gradient estimates and domain identification for analytic Ornstein-Uhlenbeck operators Abstract: Let $P$ be the Ornstein-Uhlenbeck semigroup associated with the stochastic Cauchy problem \[ dU(t) = AU(t) dt + dWH (t), \] where $A$ is the generator of a $C_0$-semigroup $S$ on a Banach space $E$, $H$ is a Hilbert subspace of $E$, and $W_H$ is an $H$-cylindrical Brownian motion. Assuming that $S$ restricts to a $C_0$-semigroup on $H$, we obtain $L^p$-bounds for $D_H P(t)$. We show that if $P$ is analytic, then the invariance assumption is fulfilled. As an application we determine the $L^p$-domain of the generator of $P$ explicitly in the case where $S$ restricts to a $C_0$-semigroup on $H$ which is similar to an analytic contraction semigroup. with J. van Neerven Parabolic Problems: The Herbert Amann Festschrift, Birkhäuser (2011), 463-477. Malliavin calculus and decoupling inequalities in Banach spaces Abstract: We develop a theory of Malliavin calculus for Banach space-valued random variables. Using radonifying operators instead of symmetric tensor products we extend the Wiener-Itô isometry to Banach spaces. 
In the white noise case we obtain two sided $L^p$-estimates for multiple stochastic integrals in arbitrary Banach spaces. It is shown that the Malliavin derivative is bounded on vector-valued Wiener-Itô chaoses. Our main tools are decoupling inequalities for vector-valued random variables. In the opposite direction we use Meyer's inequalities to give a new proof of a decoupling result for Gaussian chaoses in UMD Banach spaces. J. Math. Anal. Appl. 363 (2) (2010), 383-398. Boundedness of Riesz transforms for elliptic operators on abstract Wiener spaces Abstract: Let $(E,H,\mu)$ be an abstract Wiener space and let $D_V := VD$, where $D$ denotes the Malliavin derivative and $V$ is a closed and densely defined operator from $H$ into another Hilbert space ${\underline{H}}$. Given a bounded operator $B$ on ${\underline{H}},$ coercive on the range $\overline{\mathsf{R}(V)}$, we consider the operators $A:= V^* BV$ in $H$ and $\underline{A}:= VV^* B$ in ${\underline{H}}$, as well as the realisations of the operators $L: = D_V^* BD_V$ and $\underline{L} := D_VD_V^* B$ in $L^p(E,\mu)$ and $L^p(E,\mu;{\underline{H}})$ respectively, where $1 < p < \infty$. Our main result asserts that the following four assertions are equivalent: ${\mathsf D}(\sqrt{L}) = {\mathsf D}(D_V)$ with $\| \sqrt{L}f\|_{p} \eqsim \| D_V f\|_{p}$ for $f\in {\mathsf D}(\sqrt{L})$; $\underline{L}$ admits a bounded $H^\infty$-functional calculus on $\overline{\mathsf{R}(D_V)}$; ${\mathsf D}(\sqrt{A}) = {\mathsf D}(V)$ with $\| \sqrt{A}h\| \eqsim \| Vh \|$ for $h\in {\mathsf D}(\sqrt{A})$; $\underline{A}$ admits a bounded $H^\infty$-functional calculus on $\overline{\mathsf{R}(V)}$. This is a nonsymmetric generalisation of the classical Meyer inequalities of Malliavin calculus (where ${\underline{H}}=H$, $V = I$, $B = \frac12 I$). A one-sided version of the main result, giving $L^p$-boundedness of the Riesz transform $D_V/\sqrt{L}$ in terms of a square function estimate, is also obtained. 
As an application let $-A$ generate an analytic $C_0$-contraction semigroup on a Hilbert space $H$ and let $-L$ be the $L^p$-realisation of the generator of its second quantisation. Our results imply that two-sided bounds for the Riesz transform of $L$ are equivalent with the Kato square root property for $A$. A Clark-Ocone formula in UMD Banach spaces Abstract: Let $H$ be a separable real Hilbert space and let $\mathbb{F}=(\mathscr{F}_t)_{t\in [0,T]}$ be the augmented filtration generated by an $H$-cylindrical Brownian motion $(W_H(t))_{t\in [0,T]}$ on a probability space $(\Omega,\mathscr{F},\mathbb{P})$. We prove that if $E$ is a UMD Banach space, $1\leq p<\infty$, and $F\in \mathbb{D}^{1,p}(\Omega;E)$ is $\mathscr{F}_T$-measurable, then $$ F = \mathbb{E} (F) + \int_0^T P_{\mathbb{F}} (DF)\,dW_H,$$ where $D$ is the Malliavin derivative of $F$ and $P_{\mathbb{F}}$ is the projection onto the ${\mathbb{F}}$-adapted elements in a suitable Banach space of $L^p$-stochastically integrable $\mathcal{L}(H,E)$-valued processes. Electron. Comm. Probab. 13 (2008), 151-164. On the domain of non-symmetric Ornstein-Uhlenbeck operators in infinite dimensions Abstract: We consider the linear stochastic Cauchy problem \[ dX(t) = AX(t)\,dt + B\,dW_H(t),\qquad t\geq 0, \] where $A$ generates a $C_0$-semigroup on a Banach space $E$, $W_H$ is a cylindrical Brownian motion over a Hilbert space $H$, and $B:H\to E$ is a bounded operator. Assuming the existence of a unique minimal invariant measure $\mu_\infty$, let $L_p$ denote the realization of the Ornstein-Uhlenbeck operator associated with this problem in $L^p(E,\mu_\infty)$. Under suitable assumptions concerning the invariance of $\mathrm{Ran}(B)$ under the semigroup generated by $A$, we prove the following domain inclusions, valid for $1 < p \leq 2$: \begin{equation}\begin{aligned}\label{eq:} \mathscr{D}((-L_p)^{1/2}) & \hookrightarrow W_H^{1,p}(E,\mu_\infty), \\ \mathscr{D}(L_p) &\hookrightarrow W_H^{2,p}(E,\mu_\infty). 
\end{aligned}\end{equation} Here $W_H^{k,p}(E,\mu_\infty)$ denotes the $k$-th order Sobolev space of functions with Fréchet derivatives up to order $k$ in the direction of $H$. No symmetry assumptions are made on $L_p$. Infin. Dimens. Anal. Quantum Probab. Relat. Topics 11 (4) (2008), 603-626. On analytic Ornstein-Uhlenbeck semigroups in infinite dimensions Abstract: We extend to infinite dimensions an explicit formula of Chill, Fašangová, Metafune, and Pallara for the optimal angle of analyticity of analytic Ornstein-Uhlenbeck semigroups. The main ingredient is an abstract representation of the Ornstein-Uhlenbeck operator in divergence form. Arch. Math. (Basel) 89 (3) (2007), 226-236. Entropic Ricci curvature for discrete spaces Abstract: We give a short overview on a recently developed notion of Ricci curvature for discrete spaces. This notion relies on geodesic convexity properties of the relative entropy along geodesics in the space of probability densities, for a metric which is similar to (but different from) the 2-Wasserstein metric. The theory can be considered as a discrete counterpart to the theory of Ricci curvature for geodesic measure spaces developed by Lott–Sturm–Villani. In: Modern Approaches to Discrete Curvature, Springer Lecture Notes in Mathematics 2184 (2017), 159-174. Analysis of infinite dimensional diffusions. I defended my thesis on 21 April 2009 at TU Delft. Coauthors Eric Carlen (Rutgers) Philippe Clément (Delft) Sjoerd Dirksen (RWTH Aachen) Matthias Erbar (Bonn) Max Fathi (Toulouse) Nicola Gigli (SISSA) Peter Gladbach (Leipzig) Martin Hairer (Warwick) Eva Kopfer (Bonn) Daniel Matthes (TU Munich) Jan van Neerven (Delft) Pierre Portal (ANU Canberra) Lorenzo Portinale (IST Austria) Michiel Renger (WIAS Berlin) Martin Rumpf (Bonn) Carola Schönlieb (Cambridge) Stefan Simon (Bonn) Prasad Tetali (Georgia Tech) Hendrik Weber (Warwick) Melchior Wirth (Jena)
Comparing the effect of sucrose gel and metronidazole gel in treatment of clinical symptoms of bacterial vaginosis: a randomized controlled trial Somayyeh Khazaeian1, Ali Navidian2, Shahin-dokht Navabi-Rigi1, Marzieh Araban3, 4, Faraz Mojab5 and Safoura Khazaeian6 Accepted: 7 September 2018 Lactobacilli, as normal vaginal flora, have a central role in controlling the body environment and preventing the growth of pathogens. Sucrose, by promoting the growth of Lactobacilli, accelerates the suppression of pathogenic bacteria. The aim of this research was to compare the effects of sucrose gel with those of metronidazole gel in treating women with bacterial vaginosis (BV). This triple-blind clinical trial (IRCT2016112631105N1) was conducted with 70 sexually active, premenopausal women diagnosed with bacterial vaginosis through meeting at least three out of four Amsel criteria. The subjects were randomly divided into two groups of 35 patients, one group treated with sucrose vaginal gel and the other with metronidazole vaginal gel. The treatment period was 14 days for each group. At the end of the treatment period, each woman's improvement status was determined by the elimination of at least three out of four Amsel criteria (homogeneous vaginal discharge, presence of clue cells > 20%, positive whiff test and vaginal pH value > 4.5), and clinical complaints and reported side effects of medication were recorded for the patients. Data were analyzed using the t test, chi-squared test and McNemar's test. The sucrose vaginal gel and metronidazole vaginal gel were not significantly different in reducing patients' clinical complaints or in eliminating at least three out of four of the Amsel criteria that were positive before treatment.
With an 85.7% improvement rate with sucrose gel and an 88.5% improvement rate with metronidazole gel, the difference in therapeutic response was not significant, and the two gels did not differ statistically in improving the disease (p = 0.389). It seems that sucrose vaginal gel might be considered a possible alternative to metronidazole vaginal gel in the treatment of bacterial vaginosis. Iranian Registry of Clinical Trials, IRCT2016112631105N1. Registered on 27 December 2016. Keywords: Sucrose vaginal gel, Metronidazole vaginal gel. Bacterial vaginosis (BV) is the most common vaginal disorder in women, a syndrome associated with changes in vaginal ecology. In this clinical syndrome, the Lactobacilli of the normal vaginal flora are replaced by anaerobic bacteria and Gardnerella vaginalis [1, 2]. The prevalence of vaginosis has been reported to be 20–49% in Africa, 11% in the UK and 15–30% in the USA [3]. Overall, the prevalence rate varies between 22% and 50% across studies [4, 5]. Anaerobic bacteria constitute less than 1% of the vaginal flora of healthy women; in patients with BV, however, their concentration increases dramatically. What triggers changes in normal vaginal flora has not been identified so far, but it is assumed that frequent vaginal alkalinization through factors such as vaginal douching and frequent sexual activity plays a role [6]. Four features comprise the Amsel criteria [7, 8], the basis of clinical diagnosis of the disease: (1) a gray, homogeneous, thin vaginal discharge; (2) a positive whiff test (amine odor after treatment with potassium hydroxide); (3) presence of clue cells in the vaginal fluid (> 20%); and (4) vaginal pH greater than 4.5. If untreated, this infection has adverse consequences, including spontaneous abortion, pre-term delivery, postpartum endometritis, increased risk of sexually transmitted infections, pelvic inflammatory disease, postoperative infections and urinary tract infections [9–11].
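The "at least three out of four" decision rule behind the Amsel criteria can be sketched as a small helper function; this is a hypothetical illustration, not code from the study:

```python
# Hypothetical sketch of the Amsel diagnostic rule described above:
# BV is diagnosed when at least three of the four criteria are present.

def meets_amsel_criteria(homogeneous_discharge: bool,
                         positive_whiff_test: bool,
                         clue_cells_over_20pct: bool,
                         ph_above_4_5: bool) -> bool:
    """Return True when at least 3 of the 4 Amsel criteria hold."""
    criteria = [homogeneous_discharge, positive_whiff_test,
                clue_cells_over_20pct, ph_above_4_5]
    return sum(criteria) >= 3

# Example: discharge, whiff test and pH positive, clue cells absent -> BV
print(meets_amsel_criteria(True, True, False, True))  # True
```

The same rule, with the inequality reversed, is what the paper later uses to declare improvement: a patient is considered cured when at least three of the four criteria have disappeared at follow-up.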
The standard treatment for BV is oral or vaginal metronidazole. However, the beneficial effects of this drug should be weighed against its complications; side effects include nausea, abdominal pain and a metallic taste in the mouth [12]. Because antibiotic resistance is considered one of the greatest threats to public health, non-antibiotic alternative therapies for BV are essential [13]. Currently, the use of herbal medicines and complementary medicine has particular importance in developing countries, and abundant research conducted by the World Health Organization in this field has established a strong scientific basis in this area [14]. Recently, some studies have demonstrated an antibacterial effect of sucrose in the vagina [13, 15]. Lactobacilli, as normal vaginal flora, have a central role in controlling the body's ecosystem and preventing the growth of pathogens [16]. The normal flora uses sucrose as its main source of nutrition to produce lactic acid and hydrogen peroxide, which reduce the pH and create an undesirable environment for the growth of pathogens. In addition, the osmolarity of sucrose causes the absorption of water and subsequently the disappearance of pathogenic bacteria [17]. Given the complications associated with chemical drugs [18], the widespread use of traditional medications [19], the need for alternative treatments with fewer side effects, and the high prevalence of this infection among women, this study was conducted to compare the effects of sucrose gel with those of metronidazole gel in the treatment of women with bacterial vaginosis. This triple-blind, parallel randomized clinical trial (IRCT2016112631105N1) was conducted with 70 married women aged between 15 and 45 years with bacterial vaginosis, who were patients at the women's clinic at Ali ibn Abi Talib Hospital from May 2012 to March 2013.
The facility is the largest medical center affiliated with the university in Zahedan, Iran. The sample size required to meet 80% power at 5% risk of type I error was 30 women per group, so a sample size of 35 per group was planned to account for a 10% loss to follow-up rate. This sample size was estimated assuming an 80% response to treatment (clinical symptoms) relative to baseline, using the formula $$ n=\left(\frac{z_{\alpha}\sqrt{2\pi_1\left(1-\pi_1\right)}+z_{\beta}\sqrt{\pi_1\left(1-\pi_1\right)+\pi_2\left(1-\pi_2\right)}}{\pi_1-\pi_2}\right)^2. $$ Therefore, 70 patients who met the inclusion criteria and had BV confirmed by a gynecologist who was not a member of the research team were enrolled in the study. The study design and objectives were explained to the patients. All patients were asked to give written informed consent. Randomization was achieved using sealed, opaque, sequentially numbered envelopes developed from a random number generator. A midwife who was not involved in the recruitment of participants prepared the envelopes. As such, 35 patients were included in the group receiving sucrose gel and 35 patients composed the group receiving metronidazole gel, based on a 1:1 ratio and in a single block. The inclusion criteria were as follows: being sexually active; not being pregnant or breastfeeding at the time; not taking immunosuppressive drugs; not using an intrauterine device (IUD); no vaginal douching or antibiotic therapy within the 2 weeks prior to sampling; lack of any specific illness requiring treatment; the diagnosis of BV based on the presence of at least three out of four of the Amsel criteria (homogeneous vaginal discharge, presence of clue cells > 20%, amine odor when potassium hydroxide solution is added to the vaginal secretion, vaginal pH value > 4.5); and the absence of flagellated Trichomonas parasites or Candida infection in the vaginal specimen based on laboratory tests.
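As a numerical check, the two-proportion sample-size formula above can be evaluated directly. The z values below are the standard ones for a two-sided α of 0.05 and 80% power; the paper states π₁ = 0.8, while the comparison proportion π₂ = 0.5 used here is an illustrative assumption (it is not reported in the text):

```python
import math

# Sketch of the sample-size formula quoted above.
# z_alpha = 1.96 (two-sided, alpha = 0.05); z_beta = 0.84 (80% power).
# pi1 = 0.80 is stated in the paper; pi2 = 0.50 is an assumed value,
# chosen only for illustration (the paper does not report it).

def sample_size_two_proportions(pi1, pi2, z_alpha=1.96, z_beta=0.84):
    numerator = (z_alpha * math.sqrt(2 * pi1 * (1 - pi1))
                 + z_beta * math.sqrt(pi1 * (1 - pi1) + pi2 * (1 - pi2)))
    return (numerator / (pi1 - pi2)) ** 2

n = sample_size_two_proportions(0.80, 0.50)
print(round(n))  # 30
```

Under this assumed π₂, the formula reproduces the 30 women per group reported in the paper, before inflation to 35 for the anticipated 10% loss to follow-up.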
Exclusion criteria were as follows: not using the drug as prescribed, the need to take antibiotics, or reluctance to continue. Data were collected through questionnaires on demographics, a self-reporting sheet on patients' symptoms, an observation checklist used during the first and second visits, observation using a microscope (Olympus, Japan) and pH test strips (Merck, Germany). The checklist included questions related to patient complaints and the Amsel criteria. Content validity was established for the observation checklist, and its reliability was assessed with the kappa statistic of inter-rater agreement; the agreement coefficient was 0.9. The Olympus microscope used is known to meet validity standards, and its validity was evaluated by calibrating the microscope. To assess the reliability of the pH test strips, five standard specimens were prepared and their pH was measured. Validity was confirmed by the correlation between standard levels and pH test strip results. The study methodology was as follows: After history-taking, vaginal specimens were obtained from subjects in the lithotomy position; a sterile speculum without lubricant material was used. Each woman's vagina and cervix were examined for evidence of inflammation and abnormal findings, and the vaginal discharge was assessed in terms of color, texture and smell. A discharge specimen from the upper part of the lateral wall of the vagina was placed on two slides using a swab. One to two drops of normal saline were added to the first specimen, which was examined under a microscope for the presence of clue cells and Trichomonas vaginalis. The second specimen was mixed with one drop of potassium hydroxide (KOH) 10% solution and examined for Candida hyphae and amine odor (specimens with flagellated Trichomonas parasites or Candida infection were excluded from the study). The pH of the vaginal discharge was determined using pH test strips.
Patient complaints were recorded on the first-visit checklist. Assessment of discharge specimen homogeneity was conducted by two midwifery experts (after training on specific procedures for this study), and their vaginal harvesting and microscopic observation techniques were verified by a laboratory sciences expert. Moreover, the research team gynecologist confirmed all diagnoses. After definitive diagnosis of BV in the specimens, patients were randomly assigned by random numbers to the sucrose gel treatment group or the metronidazole gel treatment group. In this study, metronidazole gel 0.75% and sucrose gel 9% were prepared at the Laboratory of the Pharmacy School of Shahid Beheshti University in Tehran, Iran. The gels were inserted into identical 70-g tubes, and each tube was coded separately as A or B. The gels had no discernible differences in appearance, shape, color, or odor and were placed inside completely identical tubes. Examiners, patients and the analysis team were unaware of which type of gel was within the tubes. Study participants were advised to use the gel with an applicator morning and night for five days. A self-report sheet was given to patients to confirm that they had self-administered the treatment correctly each time; patients were asked to bring the sheet to clinic visits. Participating women were advised to refrain from the following during the study: (1) intercourse without condoms; (2) vaginal douching, spermicides or other vaginal medications; and (3) taking antibiotics other than those they may have been using for the study. Patients were asked to return 14 days after starting treatment. The Amsel clinical criteria and patient complaints were re-evaluated, and results were recorded on the observation record forms. The absence of at least three out of four Amsel criteria 14 days after terminating treatment was considered indicative of treatment improvement [11, 20, 21]; any other result was considered treatment failure.
Participants, the gynecologist (assessor) and the statistician were blinded to group assignments from the beginning to the end of the study and data analysis. Data were analyzed using descriptive statistics (mean and standard deviation) and inferential statistics (t test, chi-squared test and McNemar's test) using SPSS 21 software. As mentioned, in this study, 70 women with BV were assigned to one of two groups of 35 patients. Figure 1 shows the Consolidated Standards of Reporting Trials (CONSORT) flow diagram of the study participants. One group used sucrose vaginal gel and the other used metronidazole vaginal gel. The results showed no significant difference between the two groups in age, weight, age at onset of sexual activity and the number of pregnancies (Table 1). Table 1 compares the mean and standard deviation of demographic characteristics and fertility status in the two groups with bacterial vaginosis: age, duration of marriage (years), age at first pregnancy (years) and the number of pregnancies; P values were derived from the t test. The most frequent educational level was high-school completion in both groups, with a frequency of 34.3% and 40% (p > 0.05), respectively, in the sucrose gel and metronidazole gel groups. In terms of occupation, 77.1% of subjects in the sucrose group and 88.6% of subjects in the metronidazole group were housewives (p > 0.05). The chi-squared test showed no significant difference between the two treatment groups in terms of education level and occupation (p > 0.05). Also, based on the chi-squared test, there was no statistically significant difference between the two treatment groups in terms of clinical complaints before and after treatment (p > 0.05).
The McNemar test indicated a statistically significant difference between clinical complaints before and after treatment with sucrose gel and before and after treatment with metronidazole gel (Table 2). Table 2 compares the absolute and relative frequency distribution of patients' clinical complaints before and after treatment: abundant vaginal discharge (intra-group P < 0.001), malodor (intra-group P = 0.001) and dyspareunia, with the between-group comparison after treatment giving P > 0.05; intra-group P values were derived from the McNemar test and between-group P values from the chi-squared test. Three of the four Amsel criteria - homogeneous gray discharge, positive whiff test and pH > 4.5 - were reported 100% positive in both groups before treatment, and the chi-squared test showed no significant difference between the two groups (p > 0.05). Comparison of the results demonstrated that both treatments were effective in eliminating at least three out of four Amsel criteria, and no statistically significant difference was found between the two groups in terms of response to treatment (Table 3). Table 3 compares the absolute and relative frequency distribution of the Amsel criteria (presence of vaginal discharge, positive whiff test, presence of clue cells, vaginal pH > 4.5) before and after treatment. Treatment improvement was defined as the absence of at least three out of four Amsel criteria 14 days after terminating treatment. In this study, 85.7% of those undergoing sucrose therapy and 88.5% of those undergoing metronidazole therapy improved. The chi-squared test showed no significant difference in therapeutic response, and the two drugs did not differ significantly in meeting criteria for treatment success (p > 0.05).
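The paired before/after comparisons above use McNemar's test, which depends only on the discordant pairs (patients whose symptom status changed between visits). A minimal sketch with the usual continuity correction follows; the counts in the example are hypothetical, not the study's data:

```python
import math

# Sketch of McNemar's test for paired before/after binary outcomes.
# b and c are the discordant pair counts (e.g. symptom present before
# treatment but absent after, and vice versa). The counts used in the
# example below are made up for illustration.

def mcnemar_p_value(b, c):
    # Chi-square statistic with continuity correction, 1 degree of freedom
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > chi2) = erfc(sqrt(chi2/2))
    return math.erfc(math.sqrt(chi2 / 2))

# Example: 25 patients lost a symptom after treatment, 2 gained it
print(mcnemar_p_value(25, 2))  # p well below 0.001
```

A heavily one-sided split of discordant pairs, as in the example, yields the kind of highly significant intra-group p-value the paper reports for abundant discharge and malodor.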
Adverse effects related to drugs in both treatment methods were investigated: there were three cases (8.9%) of vaginal dryness and one case (2.9%) of itching in the metronidazole gel group, and one case (2.9%) of vaginal dryness in the sucrose gel group. The aim of this parallel, randomized clinical trial was to compare the effect of sucrose gel and metronidazole gel in the treatment of clinical symptoms of bacterial vaginosis. The overall results showed that sucrose gel could be an alternative to metronidazole gel in the treatment of bacterial vaginosis. It has been documented that sucrose gel, with its restoration of normal vaginal flora, promotion of growth of Lactobacilli and suppression of pathogenic bacteria in patients with bacterial vaginosis, is considered a new choice for non-antibiotic-based treatment for BV [14, 16]. Lactobacilli are dominant, normal vaginal flora with the ability to inhibit adhesion strength and growth of pathogens, diminish pathogens' access to nutrients and modulate the host immune response [22]. Mastromarino et al., in a study carried out between 2002 and 2004, concluded that the use of Lactobacillus tablets can be effective in restoring normal vaginal flora. In patients with recurrent infections and those who frequently use antibiotics, Lactobacilli are reduced. Prescribing Lactobacillus was more effective than metronidazole [16]. There are limited studies in this regard; however, these studies have shown the positive effect of sucrose vaginal gel in improving symptoms of the infection. Xing et al., in a study of eight hospitals in China in 2008, found that metronidazole and sucrose are effective in the treatment of BV. Additionally, this study found that using sucrose vaginal gel could improve both the clinical and laboratory index of BV [23]. A randomized, double-blind, multi-center, parallel-group, phase III clinical trial conducted by Xiao et al.
in 2015 found that the cure rate using sucrose gel was 80% while the cure rate using metronidazole was 70% [24]. In our study, the improvement rates using sucrose gel and metronidazole gel were 85% and 88%, respectively, which indicates a greater improvement rate than in the trial conducted by Xiao et al. in 2015. Sucrose gel has no antibiotic properties - hence, there is no possibility of resistance - and it is a type of nutrition for Lactobacilli; it was also shown to help shift vaginal flora from the type seen in BV toward Lactobacilli in an animal model [15]. Also, because the presence of Lactobacilli is an important factor in the prevention of infection, sucrose gel may be more effective than antibiotics such as metronidazole. For example, metronidazole not only prevents the growth of pathogenic bacteria but also simultaneously inhibits the growth of Lactobacilli. This could be one of the causes of poor response to treatment and recurrence of infection [13]. The animal model of Hu et al., which showed the similarity of rhesus macaque vaginal microbial flora to that of patients with bacterial vaginosis, concluded that sucrose gel could cause changes in vaginal bacterial flora by decreasing the vaginal pH. In addition, they reported a significant increase in the relative frequency of Lactobacillus DNA, from 50.84% to 96.98% (p < 0.001). However, there was no change in the relative frequency of Lactobacillus DNA in the control group. They confirmed the important role of Lactobacilli in vaginal flora, and thus in women's health, through the properties of probiotics, which prevent the growth of pathogens such as anaerobic bacteria, fungi and viruses by eliminating competition or producing organic acids and hydrogen peroxide. They introduced sucrose as a facilitator for the growth of Lactobacilli [15]. Sucrose is a disaccharide composed of glucose and fructose carbohydrates [25]. Studies have reported an elevated survival rate of Lactobacilli in acidic conditions caused by glucose [26].
Lactobacilli, through glucose breakdown and hydrogen-ion production, are able to reduce pH, creating an unsuitable environment for pathogen growth [27, 28]. Monosaccharides, especially fructose, have a significant inhibitory effect on the adhesion of pathogens to the mucosa [29]. In addition, the combination of fructose and glucose produces a significant increase in osmolarity and water reabsorption which, in turn, provides an unsuitable environment for fungi and other pathogens [30, 31]. Fructose and glucose also make up the main composition of honey; in fact, they comprise 85% to 95% of the sugar in honey [32]. The antimicrobial properties of honey, known for thousands of years, have been attributed to hydrogen peroxide, elevated osmolarity due to high levels of sugars such as fructose, glucose and sucrose, and the acidity of its ingredients [33]. For example, the contents of honey, especially fructose, accelerate epithelial growth in wounds. It also eliminates the bad odor of wounds that is caused by lactic acid byproducts. In addition, the glucose oxidase activity in honey leads to low-level production of hydrogen peroxide which, in turn, prevents the growth of bacteria [34]. Given the aforementioned facts, it can be stated that the structure of sucrose has an effective role in inhibiting pathogen growth. Although the results of the present study and other limited research reveal the benefits of sucrose in the improvement of bacterial vaginosis, further studies are needed to determine the mechanism of action of Lactobacilli in the presence of sucrose. Importantly, any such study should have a larger sample size to enhance the accuracy of the assessment. In addition, it is necessary to examine time dimensions in terms of infection recurrence levels to adequately determine the value of replacing routine antibiotics.
A limitation of this study was that two things were beyond the control of the researchers: subjects may have provided incorrect answers on the self-report sheet that assessed whether they used the medication correctly, and participants may not have complied with the study's specialized health advice. Although being sexually active was one of our inclusion criteria, collecting data on this variable quantitatively might have provided more valid data on this risk factor. This limitation should be considered in future research.

Limitation of findings in clinical practice

As sucrose gel is not readily available in many pharmacologic environments, it may be more costly and more difficult for women to obtain it than to obtain a prescription for metronidazole gel. The findings gleaned from the present study demonstrated that sucrose vaginal gel is effective in treating bacterial vaginosis. Given the positive effects of sucrose and its compounds on normal vaginal flora, it seems that further studies can contribute to achieving beneficial outcomes in the treatment of infections in women, especially bacterial vaginosis.

BV: bacterial vaginosis
CONSORT: Consolidated Standards of Reporting Trials
IRCT: Iranian Registry of Clinical Trials
KOH: potassium hydroxide

The authors hereby express their thanks and appreciation to the Research and Technology Deputy of Zahedan University of Medical Sciences for approving and funding the project, to the authorities of Ali ibn Abi Talib (AS) Hospital in Zahedan, and to all persons who helped us conduct this study. We appreciate those women who participated in this study. Also, we are grateful to Dr Teimoori and Dr Jahantigh, who helped us to implement the study. The Research and Technology Deputy of Zahedan University of Medical Sciences, Iran, funded the study. Raw data listings will not be shared due to confidentiality reasons. If anyone has a question concerning the raw data, please contact [email protected].
SoK was the main investigator, designed the study and drafted the manuscript. AN, Sh-DN, FM and SaK helped as consultants. MA was the supervisor of the study, helped in data analysis and provided the final manuscript. All authors read and approved the final version of the manuscript. The study (code 90–1305) was approved by the Ethics Committee of Zahedan University of Medical Sciences and recorded in the Iranian Registry of Clinical Trials with the code IRCT2016112631105N1. The research was carried out after obtaining permission from the Research Deputy of Zahedan University of Medical Sciences and providing an introduction letter to the head of the women's clinic at the hospital. All participants were informed about the study and confidentiality protocols. Informed consent was obtained from all the participants. Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Pregnancy Health Research Center, Department of Midwifery, Zahedan University of Medical Sciences, Zahedan, Iran
Pregnancy Health Research Center, Department of Counseling, Zahedan University of Medical Sciences, Zahedan, Iran
Social Determinants of Health Research Center, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
Department of Health Education and Promotion, Public Health School, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
Department of Pharmacognosy, School of Pharmacy, Shahid Beheshti University of Medical Sciences and Health Services, Tehran, Iran
Department of Obstetrics and Gynecology, School of Medicine, Zahedan University of Medical Sciences, Zahedan, Iran

Homayouni A, Bastani P, Ziyadi S, Mohammad-Alizadeh-Charandabi S, Ghalibaf M, Mortazavian AM, Mehrabany EV. Effects of probiotics on the recurrence of bacterial vaginosis: a review. J Lower Genital Tract Dis. 2014;18(1):79–86.
Romoren M, Velauthapillai M, Rahman M, Sundby J, Klouman E, Hjortdahl P. Trichomoniasis and bacterial vaginosis in pregnancy: inadequately managed with the syndromic approach. Bull World Health Organ. 2007;85(4):297–304.
Demba E, Morison L, Van der Loeff MS, Awasana AA, Gooding E, Bailey R, Mayaud P, West B. Bacterial vaginosis, vaginal flora patterns and vaginal hygiene practices in patients presenting with vaginal discharge syndrome in The Gambia, West Africa. BMC Infect Dis. 2005;5(1):1.
Allsworth JE, Lewis VA, Peipert JF. Viral sexually transmitted infections and bacterial vaginosis: 2001–2004 National Health and Nutrition Examination Survey data. Sex Transm Dis. 2008;35(9):791–6.
Allsworth JE, Peipert JF. Prevalence of bacterial vaginosis: 2001–2004 national health and nutrition examination survey data. Obstet Gynecol. 2007;109(1):114–20.
Burkman RT. Berek & Novak's Gynecology. JAMA.
2012;308(5):516–7.
Bohbot J-M, Vicaut E, Fagnen D, Brauman M. Treatment of bacterial vaginosis: a multicenter, double-blind, double-dummy, randomised phase III study comparing secnidazole and metronidazole. Infect Dis Obstet Gynecol. 2010;2010. https://doi.org/10.1155/2010/705692.
MacPhee RA, Hummelen R, Bisanz JE, Miller WL, Reid G. Probiotic strategies for the treatment and prevention of bacterial vaginosis. Expert Opin Pharmacother. 2010;11(18):2985–95.
Gallo MF, Macaluso M, Warner L, Fleenor ME, Hook EW, Brill I, Weaver MA. Bacterial vaginosis, gonorrhea, and chlamydial infection among women attending a sexually transmitted disease clinic: a longitudinal analysis of possible causal links. Ann Epidemiol. 2012;22(3):213–20.
Machado D, Castro J, Palmeira-de-Oliveira A, Martinez-de-Oliveira J, Cerca N. Bacterial vaginosis biofilms: challenges to current therapies and emerging solutions. Front Microbiol. 2015;6:1528.
Simbar M, Azarbad Z, Mojab F, Majd HA. A comparative study of the therapeutic effects of the Zataria multiflora vaginal cream and metronidazole vaginal gel on bacterial vaginosis. Phytomedicine. 2008;15(12):1025–31.
Brandt M, Abels C, May T, Lohmann K, Schmidts-Winkler I, Hoyme U. Intravaginally applied metronidazole is as effective as orally applied in the treatment of bacterial vaginosis, but exhibits significantly less side effects. Eur J Obstet Gynecol Reprod Biol. 2008;141(2):158–62.
Z-m Z, Liao Q-p, Yao C, Geng L, Feng L-h, Shi H-r, Xin X-y, Li P, Wang H-l, Pang Y-c. Directed shift of vaginal flora after topical application of sucrose gel in a phase III clinical trial: a novel treatment for bacterial vaginosis. Chin Med J. 2010;123(15):2051–7.
Iwalokun B, Ogunledun A, Ogbolu D, Bamiro S, Jimi-Omojola J.
In vitro antimicrobial properties of aqueous garlic extract against multidrug-resistant bacteria and Candida species from Nigeria. J Med Food. 2004;7(3):327–33.
Hu K-t, Zheng J-x, Yu Z-j, Chen Z, Cheng H, Pan W-g, Yang W-z, Wang H-z, Deng Q-w, Zeng Z-m. Directed shift of vaginal microbiota induced by vaginal application of sucrose gel in rhesus macaques. Int J Infect Dis. 2015;33:32–6.
Mastromarino P, Macchia S, Meggiorini L, Trinchieri V, Mosca L, Perluigi M, Midulla C. Effectiveness of Lactobacillus-containing vaginal tablets in the treatment of symptomatic bacterial vaginosis. Clin Microbiol Infect. 2009;15(1):67–74.
Lidbeck A, Nord CE. Lactobacilli and the normal human anaerobic microflora. Clin Infect Dis. 1993;16(Supplement 4):S181–7.
Ashafa AOT. Medicinal potential of Morella serata (Lam.) Killick (Myricaceae) root extracts: biological and pharmacological activities. BMC Complement Altern Med. 2013;13(1):163.
Mapfunde S, Sithole S, Mukanganyama S. In vitro toxicity determination of antifungal constituents from Combretum zeyheri. BMC Complement Altern Med. 2016;16(1):162.
Verstraelen H, Verhelst R, Roelens K, Temmerman M. Antiseptics and disinfectants for the treatment of bacterial vaginosis: a systematic review. BMC Infect Dis. 2012;12(1):148.
Mikic AN, Budakov D. Comparison of local metronidazole and a local antiseptic in the treatment of bacterial vaginosis. Arch Gynecol Obstet. 2010;282(1):43–7.
Erickson KL, Hubbard NE. Probiotic immunomodulation in health and disease. J Nutr. 2000;130(2):403S–9S.
Xiao BB, Wu C, Lin HX, Zhang D, Geng L, Wang HL, Yu FH, Zhu SN, Yao C, Liao QP. Sucrose gel for the treatment of bacterial vaginosis: a phase II clinical trial. Beijing Da Xue Xue Bao Yi Xue Ban.
2010;42(6):746–51.
Xiao BB, Zhang D, Chen R, Shi HR, Xin XR, Wang HL, Pang YC, Zhu SN, Yao C, Liao QP. Sucrose gel for treatment of bacterial vaginosis: a randomized, double-blind, multi-center, parallel-group, phase III clinical trial. Beijing Da Xue Xue Bao Yi Xue Ban. 2015;47(6):925–32.
Sánchez-Lozada LG, Mu W, Roncal C, Sautin YY, Abdelmalek M, Reungjui S, Le M, Nakagawa T, Lan HY, Yu X. Comparison of free fructose and glucose to sucrose in the ability to cause fatty liver. Eur J Nutr. 2010;49(1):1–9.
Charalampopoulos D, Pandiella S, Webb C. Evaluation of the effect of malt, wheat and barley extracts on the viability of potentially probiotic lactic acid bacteria under acidic conditions. Int J Food Microbiol. 2003;82(2):133–41.
Corcoran B, Stanton C, Fitzgerald G, Ross R. Survival of probiotic lactobacilli in acidic environments is enhanced in the presence of metabolizable sugars. Appl Environ Microbiol. 2005;71(6):3060–7.
Hong S-I, Kim Y-J, Pyun Y-R. Acid tolerance of Lactobacillus plantarum from kimchi. LWT-Food Sci Technol. 1999;32(3):142–8.
Chen Q, Yan Q, Wang K, Zhuang Z, Wang X. Portal of entry for pathogenic Vibrio alginolyticus into large yellow croaker Pseudosciaena crocea, and characteristics of bacterial adhesion to mucus. Dis Aquat Org. 2008;80(3):181–8.
Hallsworth JE, Magan N. Effect of carbohydrate type and concentration on polyhydroxy alcohol and trehalose content of conidia of three entomopathogenic fungi. Microbiology. 1994;140(10):2705–13.
Rao SS, Attaluri A, Anderson L, Stumbo P. Ability of the normal human small intestine to absorb fructose: evaluation by breath testing. Clin Gastroenterol Hepatol. 2007;5(8):959–63.
Olaitan PB, Adeleke OE, Iyabo O. Honey: a reservoir for microorganisms and an inhibitory agent for microbes.
Afr Health Sci. 2007;7(3).
Lee H, Churey JJ, Worobo RW. Antimicrobial activity of bacterial isolates from different floral sources of honey. Int J Food Microbiol. 2008;126(1):240–4.
Lusby P, Coombes A, Wilkinson J. Honey: a potent agent for wound healing? J Wound Ostomy Cont Nurs. 2002;29(6):295–300.
Stringy Geometry
Session code: sg
Alejandro Adem (University of British Columbia)
Xiang Tang (Washington University)

11:45 Alejandro Adem (University of British Columbia), Twisted K-theory for Actions with Maximal Rank Isotropy
14:15 Takashi Kimura (Boston University), Stringy operations in equivariant K-theory and cohomology
14:45 Yunfeng Jiang (University of Kansas), On motivic virtual signed Euler characteristics
15:45 Vincent Bouchard (University of Alberta), Quantization and Topological Recursion
17:00 Emily Clader (San Francisco State University), Higher-genus wall-crossing in Gromov-Witten and Landau-Ginzburg theory
17:30 Hsian-Hua Tseng (Ohio State), A tale of four theories
11:15 Bernardo Uribe Jongbloed (Universidad del Norte in Barranquilla, Colombia), Stringy structures in cohomology and K-theory of orbifolds
13:45 Carla Farsi (University of Colorado Boulder), The spectrum of orbifold connected sums and collapsing
14:15 Dorette Pronk (Dalhousie University), Mapping Groupoids for Topological Orbifolds
14:45 Ilya Shapiro (University of Windsor), Some invariance properties of cyclic cohomology with coefficients
15:15 Xiang Tang, Untitled
16:15 Ernesto Lupercio (CINVESTAV), Untitled

Alejandro Adem
Twisted K-theory for Actions with Maximal Rank Isotropy
In this talk we discuss twisted K-theory for actions of compact Lie groups with maximal rank isotropy. This is joint work with Jose Manuel Gomez and Jose Maria Cantarero.
Takashi Kimura
Stringy operations in equivariant K-theory and cohomology
We describe algebraic structures in the K-theory and the cohomology of complex orbifolds in the framework of equivariant K-theory and cohomology which generalize familiar operations from topology. These include Chen-Ruan products on orbifold cohomology whose K-theoretic analog, under some conditions, admits power operations and compatible characteristic classes.
We will explain how this structure can be used to endow such a K-theory with a positive structure which plays a role analogous to classes of vector bundles in topology. We describe some applications and some open problems. Yunfeng Jiang On motivic virtual signed Euler characteristics In joint work with R. Thomas, we defined several invariants for the total space of the dual obstruction cone for a perfect obstruction theory, which we call the virtual signed Euler characteristics. These invariants can be used to study the Vafa-Witten invariants for projective surfaces. In this talk I will talk about a motivic version of the invariants of the dual obstruction cone. Vincent Bouchard Quantization and Topological Recursion The Eynard-Orantin topological recursion appears in a wide variety of geometric contexts, from Gromov-Witten theory to knot theory. From the data of a spectral curve, it recursively reconstructs generating functions for appropriate enumerative invariants. The ubiquity of this recursive structure can be understood in terms of quantization. From the topological recursion, one can construct a wave-function, which is then conjectured to be annihilated by a differential operator that is a quantization of the spectral curve. In this talk I will give an overview of the conjectural relation between topological recursion and quantization, highlighting its foundations in terms of tau functions and variational formulae. I will also present a recent theorem that proves the conjecture for a large class of genus zero spectral curves, and briefly mention recent results on higher genus spectral curves. This is based on joint work with N.K. Chidambaram, T. Dauphinee and B. Eynard.
Emily Clader Higher-genus wall-crossing in Gromov-Witten and Landau-Ginzburg theory The theory of quasi-maps, developed in recent work of Ciocan-Fontanine and Kim, is a generalization of Gromov-Witten theory that depends on an additional stability parameter varying over positive rational numbers. When that parameter tends to infinity, Gromov-Witten theory is recovered, while when it tends to zero, the resulting theory encodes B-model information. Ciocan-Fontanine and Kim proved a wall-crossing formula exhibiting how the theory changes with the stability parameter, and in this talk, we discuss an alternative proof of their result as well as a generalization to other gauged linear sigma models. This is joint work with Felix Janda and Yongbin Ruan. Hsian-Hua Tseng A tale of four theories Around a decade ago the following four $(\mathbb{C}^*)^2$-equivariant theories were proven to be equivalent: (1) Gromov-Witten theory of $\mathbb{P}^1\times \mathbb{C}^2$ relative to three fibers; (2) Donaldson-Thomas theory of $\mathbb{P}^1 \times \mathbb{C}^2$ relative to three fibers; (3) Quantum cohomology of Hilbert schemes of points on $\mathbb{C}^2$; (4) Quantum cohomology of symmetric product stacks of $\mathbb{C}^2$. In this talk we'll discuss these four equivalences. We'll also sketch some new developments, namely higher genus extensions of these equivalences (joint work with R. Pandharipande). Bernardo Uribe Jongbloed Universidad del Norte in Barranquilla, Colombia Stringy structures in cohomology and K-theory of orbifolds The adjective stringy was coined by Yongbin Ruan in order to denote those structures that can be associated to orbifolds which are constructed from loops or strings on the orbifold. The first one was the orbifold cohomology of Chen-Ruan, and then several others appeared in the literature, such as the stringy product on twisted orbifold K-theory of Adem-Leida-Ruan and the Stringy K-theory of Jarvis-Kaufmann-Kimura.
These stringy K-theory structures, once applied to an orbifold point [*/G], can be understood as the isomorphism classes of representations of a twisted Drinfeld double of G. In this talk I will explain how the study of the isomorphism classes of twisted Drinfeld doubles gives us information on nontrivial isomorphisms of stringy K-theories for different orbifolds. Carla Farsi The spectrum of orbifold connected sums and collapsing The Laplace operator on an orbifold is a non-negative self-adjoint operator on functions (or forms), and its spectrum is an orbifold invariant. Isospectral orbifolds are orbifolds whose Laplace spectra coincide. Though many geometric quantities, such as volume and dimension, are determined by the spectrum, it is known that there are pairs of isospectral orbifolds with different numbers and kinds of singular points. In particular, by the work of Rossetti-Schueth-Weilandt, there are isospectral pairs for which the maximum order of the orbifold isotropy groups is different. The question of whether an orbifold with singular points can be isospectral to a manifold, i.e. an orbifold without singular points, is currently open. Generalizing work of Anné, Colbois, and Takahashi for manifolds, we study the behavior of the spectrum of a connected sum of orbifolds when one component of the connected sum is collapsed to a point. We use this to demonstrate that there are singular orbifolds and manifolds whose spectra are arbitrarily close to one another. In the process, we derive a Hodge-de Rham theory for orbifolds. Joint work with Emily Proctor and Chris Seaton. Dorette Pronk Mapping Groupoids for Topological Orbifolds We consider topological orbifolds as proper étale groupoids, i.e., topological groupoids with a proper diagonal and étale structure maps. We call these orbigroupoids.
To describe maps between these groupoids and 2-cells between them, we will use the bicategory of fractions of the 2-category of orbigroupoids and continuous functors with respect to a subclass of the Morita equivalences which is suitably small and gives a bicategory of fractions that is equivalent to the usual one. We will present several nice results about the equivalence relation on the 2-cell diagrams in this bicategory that then enable us to obtain a very explicit description of the topological groupoids $\mbox{Map}\,(G,H)$ encoding the new generalized maps from $G$ to $H$ and equivalence classes of 2-cell diagrams between them. When $G$ has a compact orbit space we show that the mapping groupoid is an orbigroupoid and has the appropriate universal properties to be the mapping object. In particular, sheaves on this groupoid form the mapping topos for geometric morphisms between the toposes of sheaves on $G$ and $H$. This construction is invariant under Morita equivalence: Morita equivalent copies of $G$ and $H$ result in a Morita equivalent mapping groupoid. This groupoid can also be viewed as a pseudo colimit of mapping groupoids in the original 2-category of topological groupoids and continuous functors. Ilya Shapiro Some invariance properties of cyclic cohomology with coefficients While Morita invariance of cyclic cohomology is well understood, in light of recent work on a categorical approach to cyclic cohomology with coefficients it became possible to formulate and consider $2$-Morita invariance. Just as the usual Morita invariance can be viewed as the dependence of cohomology only on the category of modules, $2$-Morita invariance requires a modification of the definition so that the cohomology depends only on the $2$-category of categorical representations of a monoidal category.
This is natural from the point of view of local $3$d-TFTs which are determined by their value ($2$-category) at a point, or the invariance of cyclic cohomology under a version of a categorified Fourier transform. Xiang Tang Ernesto Lupercio CINVESTAV
AMB Express
Baofukang suppository promotes the repair of vaginal epithelial cells in response to Candida albicans
Ting Li1, Xiaoxi Niu1, Xu Zhang2, Suxia Wang2 & Zhaohui Liu1
AMB Express volume 6, Article number: 109 (2016) Cite this article
Vulvovaginal candidiasis (VVC) is an opportunistic fungal infection, predominantly caused by Candida albicans, affecting a significant number of women of reproductive age. The traditional Chinese medicine Baofukang suppository is widely used in the clinic for its antimicrobial activity and is therefore of great interest as a potential antifungal drug for the prevention of VVC. We evaluated the cytotoxic activity of the Baofukang suppository using the VK2/E6E7 vaginal epithelial cell (VEC) line. When cells were treated with the Baofukang suppository, the secretion of the measured cytokines and chemokines (IL-2, IL-4, IL-6, IL-8, and IL-17) by infected VK2/E6E7 cells was significantly up-regulated (P < 0.05), except for IL-4 (11.70 ± 1.82 vs. 14.88 ± 4.72, P = 0.343), compared to the infected control cells. The secretion of non-B IgG exhibited the same trend. Our scanning electron microscopy results revealed that C. albicans can invade VECs by both induced endocytosis and active penetration. The Baofukang suppository effectively inhibited the adhesion, hyphal formation, and proliferation of C. albicans, notably restored vaginal epithelial cell morphology and viability, and enhanced the local immune function of the VECs. These preliminary results suggest promising antimicrobial properties of the Baofukang suppository, which may be efficacious as an antifungal therapy candidate by up-regulating Th1 cellular immunity, the Th17 axis of the innate immune response, and the secretion of vaginal epithelial-derived IgG. These combined effects collectively restore the immune function of the infected VECs against C. albicans in vitro. The majority of women experience at least one episode of vulvovaginal candidiasis (VVC) in their lifetime (Sobel et al.
1998). The vaginal epithelium as a mucosal surface is of immense importance in host defense and immune surveillance (Moyes et al. 2011). Specifically, it functions as the first line of host defense against pathogen invasion, providing a physical barrier and protecting underlying tissues and organs (Cole 2006). Products containing essential oils have been formulated for intra-vaginal use, and are often recommended as home remedies for the treatment of vaginal candidiasis by published books and articles (Marti and Hine 1995; Newall et al. 1996). The Baofukang suppository is a type of traditional Chinese medicine that consists primarily of two Chinese medicinal herbs: (1) zedoary turmeric oil; and (2) borneol. Zedoary turmeric oil is a volatile oil steam-distilled from Curcuma phaeocaulis which contains a variety of effective components currently in clinical use, including beta-elemene, curcumin, curzerenone, and Zimmer ketones. These components may provide potent pharmacological activities, such as antitumor and antiviral effects and enhancement of the immune response, particularly by promoting the regeneration and repair of damaged or inflamed mucosa (Kamazeri et al. 2012). Due to its potent pharmacological action, this plant extract has begun to attract significant attention. Borneol, a bicyclic monoterpenoid alcohol, is extracted from the essential oil of various subtropical evergreen broad-leaved trees such as Cinnamomum (Lauraceae) and has been commonly used in traditional Chinese medicine as an adjuvant for more than 1500 years (Jiang et al. 2008). It demonstrates anti-inflammatory, analgesic, and antibacterial properties, while also accelerating percutaneous drug absorption and increasing the bioavailability of drugs in the brain tissue (Almeida et al. 2013; Slamenova et al. 2009).
Therefore, the aim of this work was to evaluate the in vitro antifungal properties and mechanisms of the Baofukang suppository, with a view to its wider clinical application as a potential antifungal drug for the prevention of VVC.

Vaginal epithelial cell culture

The VK2/E6E7 vaginal epithelial cell line (ATCC® CRL-2616), obtained from the American Type Culture Collection (ATCC; Rockville, MD, USA), is an epithelial cell line derived from the vaginal mucosa of a healthy premenopausal female undergoing vaginal repair surgery that was subsequently immortalized with human papillomavirus 16/E6E7. VK2 cells were cultured in keratinocyte-serum free medium (K-SFM, Gibco, USA) supplemented with 5 ng/mL recombinant epidermal growth factor and 50 µg/mL bovine pituitary extract (Invitrogen Corporation, Grand Island, NY, USA), 100 U/mL penicillin (Life Technologies, Grand Island, NY, USA), 100 µg/mL streptomycin (Life Technologies) and 0.4 mM CaCl2 at 37 °C with 5% CO2 in a high-humidity environment. A subculture of the cells was performed every 3–4 days.

Microbial strains and growth conditions

Candida albicans collection strains (ATCC-11006) were grown aerobically overnight on Sabouraud-dextrose agar plates (Becton Dickinson, Cockeysville, MD, USA) at 37 °C until the mid-exponential growth phase. The blastoconidia were collected, resuspended in RPMI 1640 and adjusted to 1.0 × 10^5 cells/mL after counting with a hemocytometer (Hausser Scientific; Horsham, PA, USA).

Drug preparation

The Baofukang suppository (Hainan Bikai Pharmaceutical Co., Ltd.) is a traditional Chinese medicine, with every 1.74 g tablet consisting of 88 mg zedoary turmeric oil, 75 mg borneol, and other components as a preservation matrix. One vaginal suppository tablet (water soluble) was dissolved in 44 mL serum-free RPMI 1640 culture medium to prepare a drug stock solution of 3.95 × 10^4 µg/mL, and the solution was passed through a 0.22 µm membrane filter for sterilization.
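The stock-solution arithmetic above (one 1.74 g tablet in 44 mL of medium) can be checked directly:

```python
# Sanity check of the stock-solution concentration stated above:
# one 1.74 g tablet dissolved in 44 mL of medium.
tablet_mass_ug = 1.74 * 1e6  # 1.74 g expressed in micrograms
volume_ml = 44.0
stock_ug_per_ml = tablet_mass_ug / volume_ml
print(f"stock concentration = {stock_ug_per_ml:.3g} ug/mL")
```

which agrees with the stated 3.95 × 10^4 µg/mL after rounding.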
All drug solutions were stored at −20 °C until further experiments. Evaluation of cytotoxic activity CCK-8 (Dojindo Laboratories, Tokyo, Japan) was used to evaluate the cytotoxicity of the Baofukang suppository at concentrations of 5, 10, 20, 40, 80, and 160 µg/mL. A total volume of 200 µL VK2 cell suspension was seeded into each well of a 96-well microtiter plate and placed into a humidified atmosphere containing 5% CO2 at 37 °C for 24 h before the cells were treated. Untreated cells that received only media were used as the negative control. Concentrations of 0, 5, 10, 20, 40, 80, and 160 µg/mL Baofukang suppository were added to the VK2 cells for 24 h. The cells in each well were incubated in 100 µL K-SFM containing 10 µL CCK-8 reagents at 37 °C for 1 h. The plate was then shaken on an automatic mixer for 3 min and the absorbance at 450 nm (A450) was measured using a Multiscan GO micro-plate reader. The results were expressed as the percentage of cell viability and plotted. $$\text{Cell viability}\ (\%) = \frac{A_{450}(\text{treated}) - A_{450}(\text{blank})}{A_{450}(\text{control}) - A_{450}(\text{blank})} \times 100\%$$ The concentration of the sample that inhibited 10% of cell growth, as calculated by SPSS 13.0, was the 10% inhibition concentration (IC10). This dose was defined as a safe dose with minimal toxic side effects, i.e., the highest concentration at which the Baofukang suppository still had no effect on cell viability (≥90% survival) (Namiecinski et al. 2004; Qiao et al. 2013). Cytokine and chemokine analysis of coculture supernatants For the examination of cytokines and chemokines, epithelial cells (1 × 105 cells/mL) were cocultured with C.
albicans (1 × 105/mL) at a ratio of 1:1 in separate wells for 12 h with the VK2 cells in a total volume of 2 mL K-SFM complete medium in 24-well tissue culture plates (Costar, Corning, NY, USA). Following the 12 h coculture, the culture medium was aspirated, the cells were washed three times with PBS, and the medium was replaced with 1 mL of 20 µg/mL Baofukang suppository (IC10), as described above, for an additional 24 h. The supernatants were collected, centrifuged at 12,000g for 5 min, and finally stored at −80 °C until an enzyme-linked immunosorbent assay (ELISA, eBioscience, USA) was performed. The supernatants were assayed for the levels of IL-6, IL-2, IL-4, IL-8, and IL-17 cytokines according to the manufacturer's instructions. New standard curves were generated for every set of experiments. The absorbance values and concentrations of each cytokine were determined using a Ceres 900 automated microplate reader (Bio-Tek Corp., Winooski, VT, USA) and Kineticalc software (Bio-Tek). Each independent experiment was performed in triplicate. Epithelial-derived IgG and sIgA analysis of coculture supernatants To further explore the local immune function of vaginal epithelial cells, we stimulated VK2/E6E7 cells with C. albicans (1 × 105/mL) and detected the levels of secreted non-B IgG and IgA in the culture supernatants (collected as described above) by an ELISA (eBioscience). The ELISAs were conducted as mentioned above. Scanning electron microscopy Specimens were fixed overnight in 2.5% glutaraldehyde in 0.1 M sodium cacodylate buffer (pH 7.4; Electron Microscopy Sciences, Hatfield, PA, USA) at 4 °C, rinsed three times with PBS, dehydrated in graded ethanol (25, 50, 75, 95, and 100%), and dried using the critical point drying method (BALTEC, Balzers, Liechtenstein). The dried samples were glued onto SEM stubs, sputter-coated with a 10 nm thick layer of gold (BALTEC, Balzers, Liechtenstein), and examined using a scanning electron microscope (S-3400 N, Hitachi, Japan).
Statistical analysis All data are presented as the mean ± standard deviation of three independent measurements. The statistical analyses were conducted using SPSS version 13.0 (SPSS, Chicago, IL, USA). Statistical comparisons between the groups were carried out using a one-way ANOVA. Subsequent pairwise comparisons were performed using the LSD method. Significant differences were defined as having a P value of less than 0.05. Effects of the Baofukang suppository on VK2/E6E7 cell viability To mimic clinical situations in which antibiotics or antifungals may be safe and well-tolerated in the human body, we determined whether the Baofukang suppository affects cell viability. The A450 values of the VK2/E6E7 cells treated with the Baofukang suppository at doses of 0, 5, 10, 20, 40, 80, and 160 µg/mL are plotted in Fig. 1. As shown in Fig. 1, high doses (>20 µg/mL) of the Baofukang suppository did inhibit vaginal epithelial cell viability or growth, while there were no observed changes in the cells exposed to the low doses (≤20 µg/mL). The IC10 calculated by SPSS 13.0 was 19.89 µg/mL, which was considered to be a safe dose for VK2 cells with minimal toxic side effects and was selected for further experiments. Cytotoxicity of VECs in the presence of the Baofukang suppository. The vaginal epithelial cells were treated with the Baofukang suppository for 24 h. The mean values and standard deviations (error bars) are shown Baofukang suppository modulates cytokine release by human vaginal epithelial cells In the present study, at baseline, the levels of IL-2, IL-4, IL-6, IL-8, and IL-17 production by VK2 cells were 46.81 ± 3.07, 34.42 ± 5.60, 27.96 ± 0.15, 21.53 ± 0.78, and 43.36 ± 3.68 pg/mL, respectively (Fig. 2). After 24 h of co-incubation with 20 µg/mL of the Baofukang suppository alone, there were no significant changes in the levels of IL-2 (45.76 ± 3.31 pg/mL, P = 0.679) or IL-17 (45.13 ± 4.718 pg/mL, P = 0.526).
Comparatively, the Baofukang suppository stimulated a significant up-regulation in the production of IL-8 (25.91 ± 0.50 pg/mL, P < 0.0001) and a significant down-regulation in the production of IL-4 (25.06 ± 1.65 pg/mL, P = 0.018) and IL-6 (23.08 ± 0.12 pg/mL, P < 0.0001) by the VK2/E6E7 cells (Fig. 2). The production of IL-2, IL-4, IL-6, IL-8, and IL-17 (expressed as pg/mL) by the vaginal epithelial cell line, VK2/E6E7 cells cultivated alone, grown with 20 µg/mL of the Baofukang suppository, and infected with C. albicans (1 × 105/mL). The cytokine levels were measured after 12 h of infection and a subsequent 24 h of co-incubation with 20 µg/mL of the Baofukang suppository. V represents the vaginal epithelial cells cultivated alone; V+B represents vaginal epithelial cells co-incubated with 20 µg/mL of the Baofukang suppository for 24 h; V+C represents vaginal epithelial cells infected with C. albicans for 12 h; V+C+B represents the vaginal epithelial cells infected with C. albicans for 12 h, then treated with 20 µg/mL of the Baofukang suppository for another 24 h After 12 h of challenge with C. albicans, IL-2, IL-4, IL-6, IL-8, and IL-17 had substantially declined (0.4- to 0.8-fold, P < 0.05; Fig. 2); however, when treated with 20 µg/mL Baofukang suppository, all of the above-mentioned cytokines in the infected VK2/E6E7 cell cultures were up-regulated (IL-2: 25.14 ± 3.43 vs. 31.59 ± 1.90 pg/mL, P = 0.030; IL-4: 11.70 ± 1.82 vs. 14.88 ± 4.72 pg/mL, P = 0.343; IL-6: 8.91 ± 0.65 vs. 13.74 ± 0.51 pg/mL, P < 0.0001; IL-8: 12.41 ± 0.06 vs. 17.63 ± 0.41 pg/mL, P < 0.0001; IL-17: 12.99 ± 2.57 vs. 19.52 ± 2.13 pg/mL, P = 0.039; respectively), when compared to the untreated vaginal cells infected with C. albicans (Fig. 2). We determined the Th1/Th2 balance by calculating the IL-2/IL-4 ratio (Table 1).
Table 1 Th1/Th2 cytokines/ratios The Baofukang suppository modulates epithelial-derived IgG secreted by human vaginal epithelial cells To determine whether infected, uninfected, or treated vaginal epithelial cells secrete epithelial-derived IgG and sIgA, the culture supernatants were examined by an ELISA (Fig. 3). Secretory IgA (sIgA), which we had previously assumed to be the most abundant Ig isotype secreted into culture supernatants, was undetectable in this experiment (data not shown). The production of epithelial-derived IgG (expressed in µg/mL) by the vaginal epithelial cell line, VK2/E6E7 cells. The IgG levels were measured after 12 h of infection and the subsequent 24 h of co-incubation with the Baofukang suppository. V represents the vaginal epithelial cells cultivated alone; V+B represents the vaginal epithelial cells co-incubated with 20 µg/mL of the Baofukang suppository for 24 h; V+C represents the vaginal epithelial cells infected with C. albicans for 12 h; V+C+B represents the vaginal epithelial cells infected with C. albicans for 12 h, then treated with 20 µg/mL of the Baofukang suppository for another 24 h. ***Significant difference compared to the V group (P < 0.0001), ##significant difference compared to the V+C group (P = 0.025). Each sample was repeated three times. The error bars indicate the standard deviation Surprisingly, we found that the VECs spontaneously secrete epithelial-derived IgG under baseline conditions (Fig. 3). The baseline level of IgG secreted by the VK2 cells was 0.64 ± 0.13 µg/mL, which dropped sharply following infection or co-incubation with the Baofukang suppository alone (P < 0.0001). However, when the infected VECs were treated with 20 µg/mL Baofukang suppository, the interaction pattern of this suppository was changed, and the level of IgG secreted by the treated VK2/E6E7 cells was statistically up-regulated (0.42 ± 0.06 µg/mL) over the untreated infected cells (0.24 ± 0.02 µg/mL) (P = 0.025).
Despite this increase, the levels did not reach the baseline values (P = 0.013). Baofukang suppository promotes the repair of infected cells For further insight into the interactions of C. albicans with VECs, VEC monolayers were preferred even though the vaginal mucosa is composed of stratified squamous epithelium, because only the superficial epithelial cells that are exposed to the luminal surface interact with C. albicans. Thus, establishing models with multiple layers of cells could complicate the quantification of the interactions between the epithelial cells and the pathogen. The cell surface of the normal, uninfected cells was covered with microvilli or a microvilli crest that formed an irregular, loose net-like membrane ruffle (Fig. 4A). At 6 and 12 h post-infection, epithelial cell adherence and invasion were observed (Fig. 4B, C). Our results suggest that C. albicans can invade vaginal epithelial cells by two distinct mechanisms that ultimately result in cellular damage. The first is the induction of cellular endocytosis by the C. albicans hypha, and the other is active penetration without pseudopod formation and endocytosis. Scanning electron micrographs (SEM) of the vaginal epithelial cells. SEM of the control cells (A), C. albicans infected cells at 6 h (B), C. albicans infected cells at 12 h (C) and treated cells (D); the latter represents the vaginal epithelial cells infected with C. albicans for 12 h, then treated with 20 µg/mL Baofukang suppository for another 24 h. Fusion of the microvilli-like structures forming membrane leaflets that envelop the invading hyphal cells is indicated by the red arrow; damaged cell debris is indicated by the white arrow; pseudohyphae elements "engulfed" into vaginal epithelial cells are indicated by yellow arrows; a budding spore is indicated by blue arrows After 24 h of treatment with the Baofukang suppository, as shown at ×3000 high magnification in Fig.
4D, the invasive blastoconidia and hyphae initially observed were significantly reduced or completely absent. The treated cells had a normal shape, good cell viability, and a relatively intact and smooth cell membrane, and were well adhered to the wall and completely stretched, similar to the uninfected cells shown in Fig. 4A. These results illustrate that the Baofukang suppository could not only effectively inhibit the adhesion, hyphal formation, and proliferation of C. albicans, but also notably restore the VEC morphology and viability. Thus, this serves to enhance the local immune function of the VECs. VECs lining the mucosal surfaces of the vagina provide a first-line barrier defense system for both innate and acquired immune functions. In anticandidal mucosal immunity, the innate role of the epithelial response involves the release of epithelial cell-derived cytokines and chemokines in response to C. albicans, which appear to have important roles in recruiting and activating a variety of immune cells, immunoregulation, and tissue repair (Dongari-Bagtzoglou et al. 2005; Schaller et al. 2004). This serves to further mediate the innate and adaptive responses of protective mucosal immunity against C. albicans. Our results have demonstrated that VECs secrete various soluble immunological mediators under normal culture conditions. However, after challenging the cells with C. albicans for 12 h, the secretion of all the cytokines and chemokines produced by the VECs was significantly decreased. This decrease may partially be due to a direct invasion by C. albicans and a strongly diminished total number of VK2/E6E7 cells, resulting in damage to the local innate immune function. Martinez et al. (2009) confirmed that the numbers of VK2/E6E7 cells remained stable within 6 h of C. albicans infection, but decreased to approximately one-eighth after inoculation for 12 h.
To mimic the effect of the Baofukang suppository in treating vulvovaginal candidiasis under in vivo conditions, the VK2/E6E7 cells were infected with C. albicans for 12 h and then treated with the Baofukang suppository for 24 h. The levels of IL-2, IL-6, IL-8, and IL-17 were significantly increased, while IL-4 was only slightly increased in the treated cells compared with the infected control. Th1 cells are associated with interferon γ or IL-2 cooperating with cytotoxic CD8+ T cells, while Th2 cells are associated with IL-4 or IL-6 promoting a humoral, proinflammatory response (Schilling et al. 2007; Woo et al. 2015). An increase in the Th1/Th2 ratio is associated with the enhancement of cell-mediated immunity. Thus, the Baofukang suppository may enhance or restore a protective Th1 response in infected cells, which represents the immunological hallmark of candidal lesions believed to play a crucial role in the clearance of mycotic infection (Fidel 2002). IL-2 and IL-4 are the classic representative Th1 and Th2 cytokines, respectively (Fidel and Sobel 1994; Romani 1999; Woo et al. 2015). A strong role for Th1-type cell-mediated immunity against Candida was demonstrated by various experimental models (Klein et al. 1984; Clift 1984). An increased Th1/Th2 ratio (IL-2/IL-4) is associated with the activation of cell-mediated immunity and would be potentially beneficial for pathogen or cancer elimination, while a decreased ratio would raise the risk of inflammation progression. Our results show that the IL-2/IL-4 ratio was significantly increased in the Baofukang suppository-treated cells, indicating that the Baofukang suppository can indirectly up-regulate local vaginal cellular immunity during infection, promoting host defense against invading pathogenic microorganisms.
Moreover, evidence has been mounting that a Th17-driven immune response plays a predominant role in modulating a defensive mucosal immune response against Candida in both mice and humans (Huang et al. 2004; Conti et al. 2009; Eyerich et al. 2008). An impaired IL-17 response appears to be responsible for the pathogenesis of chronic mucocutaneous candidiasis (Eyerich et al. 2008). Our data also affirm that the Baofukang suppository partly restores IL-17 production by VECs against Candida infection in vitro, thus initiating an early Th17-type innate immune response against extracellular Candida adhesion and filamentation. Active cytokine production by VECs would enable a rapid response to an infection and provide protection within the vaginal microenvironment. Immunoglobulins are among the key molecules of the humoral immune response, and they were previously thought to be produced only by B cells and no other cell types. However, 20 years ago, a series of studies demonstrated that non-B cancer cells and normal non-B cells (Qiu et al. 2003, 2013; Zhao et al. 2011) are capable of producing Igs, but it remains unclear whether normal VECs express functional Ig molecules. We cultured the immortalized VEC line, VK2 cells, and used an ELISA to determine whether IgG or sIgA was secreted by the VECs. To our surprise, sIgA was undetectable in the cell supernatants. In this report, we noted for the first time that epithelial-derived IgG was secreted by VECs in vitro, which challenges the classical concept that B cells are the only source of Ig. Antibody-mediated protection involving anti-secreted aspartyl proteinase or anti-Candida-mannoprotein antibodies appears to play an indispensable role in mucosal immunity against vaginitis. However, this hypothesis cannot explain the clinical observations that no antibody deficiency was seen in women with recurrent VVC and that protective Candida-specific antibody production was absent in inoculated mice (Ashman et al. 2004).
Our study confirms our hypothesis that non-B IgG can be expressed in VECs, and it appears to be involved in the innate immune response of the vagina against mycotic infections, which can be partially repaired by the Baofukang suppository treatment. Further studies should ascertain whether vaginal epithelial-derived IgG participates in the local mucosal immunity of the vagina against various common pathogens. The SEM observations were first conducted to visualize the different stages (6 and 12 h) of the interaction of C. albicans with VECs. Similar to the pathological processes of vulvovaginal candidiasis, the yeast and filamentous forms of C. albicans are capable of adhering to and invading the VECs, enabling the fungal cells to translocate across the vaginal mucosal barrier. C. albicans invades the vaginal epithelial monolayer via two mechanisms: (1) induced endocytosis; or (2) active penetration (Zhu and Filler 2010; Dalle et al. 2010; Naglik et al. 2011). Endocytosis is defined as the engulfment by epithelial cells, with membrane protrusions at the point of entry of the invading hypha, and active penetration is defined as direct penetration of the epithelial cell surface at its apical side by an invading hypha (Dalle et al. 2010). The two mechanisms appear throughout the entire process of invasion, while the former occurs more frequently during the early stage and the latter more frequently in the late stage in vitro (Dalle et al. 2010). We also confirm that hypha formation is the key to fungal invasion and damage, and that VK2 cells, when not fully damaged by fungal cells, can kill or "engulf" the attached C. albicans. However, this antifungal phenomenon was not observed when the VK2 cells were killed by C. albicans. When the infected cells were treated with the Baofukang suppository for 24 h, C. albicans adhesion, invasion, and the resulting cellular injury were largely restored to a normal condition compared with the untreated infected condition.
The antimicrobial activity of the Baofukang suppository against C. albicans could, in part, be associated with the major constituents of zedoary turmeric oil. Moreover, these results also demonstrated a direct role for the Baofukang suppository in inhibiting germ tube formation and hyphal elongation for antifungal defense at mucosal surfaces. Therefore, it can be used as a natural preservative in food or pharmaceuticals. In summary, the Baofukang suppository could not only effectively inhibit the adhesion, hyphal formation, and proliferation of C. albicans, but also notably restores the VEC morphology and viability, thereby enhancing the local immune function of these cells. The preliminary results suggest promising antimicrobial properties of the Baofukang suppository, which may be efficacious as an antifungal therapy via up-regulating Th1-type cellular immunity, the Th17-axis, and the secretion of vaginal epithelial-derived IgG. These responses serve to restore the immune function of the infected VECs against C. albicans in vitro. VVC: vulvovaginal candidiasis ATCC: American Type Culture Collection K-SFM: keratinocyte-serum free medium IC10: the 10% inhibition concentration ELISA: enzyme-linked immunosorbent assay SEM: scanning electron microscopy sIgA: secretory IgA VECs: vaginal epithelial cells Almeida JR, Souza GR, Silva JC, Saraiva SR, Júnior RG, Quintans Jde S, Barreto Rde S, Bonjardim LR, Cavalcanti SC, Quintans LJ Jr (2013) Borneol, a bicyclic monoterpene alcohol, reduces nociceptive behavior and inflammatory response in mice. Sci World J 2013:808460. doi:10.1155/2013/808460 Ashman RB, Farah CS, Wanasaengsakul S, Hu Y, Pang G, Clancy RL (2004) Innate versus adaptive immunity in Candida albicans infection. Immunol Cell Biol 82:196–204 Clift RA (1984) Candidiasis in the transplant patient. Am J Med 77:34–38 Cole AM (2006) Innate host defense of human vaginal and cervical mucosae.
Curr Top Microbiol Immunol 306:199–230 Conti HR, Shen F, Nayyar N, Stocum E, Sun JN, Lindemann MJ, Ho AW, Hai JH, Yu JJ, Jung JW, Filler SG, Masso-Welch P, Edgerton M, Gaffen SL (2009) Th17 cells and IL-17 receptor signaling are essential for mucosal host defense against oral candidiasis. J Exp Med 206:299–311 Dalle F, Wachtler B, L'Ollivier C, Holland G, Bannert N, Wilson D, Labruère C, Bonnin A, Hube B (2010) Cellular interactions of Candida albicans with human oral epithelial cells and enterocytes. Cell Microbiol 12:248–271 Dongari-Bagtzoglou A, Villar CC, Kashleva H (2005) Candida albicans-infected oral epithelial cells augment the anti-fungal activity of human neutrophils in vitro. Med Mycol 43:545–549 Eyerich K, Foerster S, Rombold S, Seidl HP, Behrendt H, Hofmann H, Ring J, Traidl-Hoffmann C (2008) Patients with chronic mucocutaneous candidiasis exhibit reduced production of Th17-associated cytokines IL-17 and IL-22. J Invest Dermatol 128:2640–2645 Fidel PL Jr (2002) Distinct protective host defenses against oral and vaginal candidiasis. Med Mycol 40:359–375 Fidel PL Jr, Sobel JD (1994) The role of cell-mediated immunity in candidiasis. Trends Microbiol 2:202–206 Huang W, Na L, Fidel PL, Schwarzenberger P (2004) Requirement of interleukin-17A for systemic anti-Candida albicans host defense in mice. J Infect Dis 190:624–631 Jiang XF, Zou JL, Yuan YM, Francis CL, Qiao YJ, Yao MC (2008) Preliminary study: biotransformation of borneol to camphor in mice, rats, and rabbits. Mode Tradit Chin Med Mater Med 10:27–36 Kamazeri TS, Samah OA, Taher M, Susanti D, Qaralleh H (2012) Antimicrobial activity and essential oils of Curcuma aeruginosa, Curcuma mangga, and Zingiber cassumunar from Malaysia. Asian Pac J Trop Med 5:202–209 Klein RS, Harris CA, Small CB, Moll B, Lesser M, Friedland GH (1984) Oral candidiasis in high-risk patients as the initial manifestation of the acquired immunodeficiency syndrome. 
N Engl J Med 311:354–358 Marti J, Hine A (1995) The alternative health and medicine encyclopedia. Gale Research International Inc., New York Martinez RC, Seney SL, Summers KL, Nomizo A, De Martinis EC, Reid G (2009) Effect of Lactobacillus rhamnosus GR-1 and Lactobacillus reuteri RC-14 on the ability of Candida albicans to infect cells and induce inflammation. Microbiol Immunol 53:487–495 Moyes DL, Murciano C, Runglall M, Islam A, Thavaraj S, Naglik JR (2011) Candida albicans yeast and hyphae are discriminated by MAPK signaling in vaginal epithelial cells. PLoS ONE 6:e26580 Naglik JR, Moyes DL, Wachtler B, Hube B (2011) Candida albicans interactions with epithelial cells and mucosal immunity. Microbes Infect 13:963–976 Namiecinski M, Pulaski L, Kochman A, Skolimowski J, Bartosz G, Metodiewa D (2004) Cytotoxicity, cytoprotection and neurotoxicity of novel deprenyl-related propargylamines, stable nitroxide free radicals, in vitro and in vivo. In Vivo 18:171–180 Newall CA, Anderson LA, Philipson JD (1996) Herbal medicines. A guide for health-care professionals. The Pharmaceutical Press, London Qiao B, Kerr M, Groselj B, Teo MT, Knowles MA, Bristow RG, Phillips RM, Kiltie AE (2013) Imatinib radiosensitises bladder cancer by targeting homologous recombination. Cancer Res 73:1611–1620 Qiu X, Zhu X, Zhang L, Mao Y, Zhang J, Hao P, Li G, Lv P, Li Z, Sun X, Wu L, Zheng J, Deng Y, Hou C, Tang P, Zhang S, Zhang Y (2003) Human epithelial cancers secrete immunoglobulin G with unidentified specificity to promote growth and survival of tumor cells. Cancer Res 63:6488–6495 Qiu X, Sun X, He Z, Huang J, Hu F, Chen L, Lin P, You MJ, Medeiros LJ, Yin CC (2013) Immunoglobulin gamma heavy chain gene with somatic hypermutation is frequently expressed in acute myeloid leukemia. Leukemia 27:92–99 Romani L (1999) Immunity to Candida albicans: Th1, Th2 cells and beyond.
Curr Opin Microbiol 2:363–367 Schaller M, Boeld U, Oberbauer S, Hamm G, Hube B, Korting HC (2004) Polymorphonuclear leukocytes (PMNs) induce protective Th1-type cytokine epithelial responses in an in vitro model of oral candidosis. Microbiology 150:2807–2813 Schilling T, Kozian A, Kretzschmar M, Huth C, Welte T, Bühling F, Hedenstierna G, Hachenberg T (2007) Effects of propofol and desflurane anaesthesia on the alveolar inflammatory response to one-lung ventilation. Br J Anaesth 99:368–375 Slamenova D, Horvathova E, Wsolova L, Sramková M, Navarová J (2009) Investigation of anti-oxidative, cytotoxic, DNA-damaging and DNA-protective effects of plant volatiles eugenol and borneol in human-derived HepG2, Caco-2 and VH10 cell lines. Mutat Res 677:46–52 Sobel JD, Faro S, Force RW, Foxman B, Ledger WJ, Nyirjesy PR, Reed BD, Summers PR (1998) Vulvovaginal candidiasis: epidemiologic, diagnostic, and therapeutic considerations. Am J Obstet Gynecol 178:203–211 Woo JH, Baik HJ, Kim CH, Chung RK, Kim DY, Lee GY, Chun EH (2015) Effect of propofol and desflurane on immune cell populations in breast cancer patients: a randomized trial. J Korean Med Sci 30:1503–1508 Zhao Y, Liu Y, Chen Z, Korteweg C, Gu J (2011) Immunoglobulin g (IgG) expression in human umbilical cord endothelial cells. J Histochem Cytochem 59:474–488 Zhu W, Filler SG (2010) Interactions of Candida albicans with epithelial cells. Cell Microbiol 12:273–282 The main experimental conception and design: ZL; Performed the experiments: TL and XN; Analyzed the data and contributed reagents: TL, XN and ZL; Writing the manuscript: TL and XN. All authors read and approved the final manuscript. Thanks to Dr. Xiaoyan Qiu for providing helpful comments for the paper. Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors. This work was supported by a grant from the National Natural Science Foundation of China (Grant Number 81571394). 
Department of Obstetrics and Gynecology, Peking University First Hospital, 8 Xishiku Street, Xicheng District, Beijing, 100034, China Ting Li, Xiaoxi Niu & Zhaohui Liu Laboratory of Electron Microscopy, Ultrastructural Pathology Center, Peking University First Hospital, Beijing, 100034, China Xu Zhang & Suxia Wang Ting Li Xiaoxi Niu Xu Zhang Suxia Wang Zhaohui Liu Correspondence to Zhaohui Liu. Ting Li and Xiaoxi Niu contributed equally to this work as joint first author Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Li, T., Niu, X., Zhang, X. et al. Baofukang suppository promotes the repair of vaginal epithelial cells in response to Candida albicans . AMB Expr 6, 109 (2016). https://doi.org/10.1186/s13568-016-0281-1 Baofukang suppository
Faster Diffusion-Relaxation Correlation Spectroscopic Imaging (DR-CSI) using Optimized Experiment Design Daeun Kim1 and Justin P. Haldar1 1Electrical Engineering, University of Southern California, Los Angeles, CA, United States We propose a new experiment design method to accelerate the recent novel diffusion-relaxation correlation spectroscopic imaging (DR-CSI) experiment. DR-CSI acquires imaging data across a range of different b-value and echo time combinations. This enables new insights into tissue microstructure, but the contrast encoding can be slow. Our experiment design approach selects a small subset of the most informative observations to acquire using results from estimation theory. We demonstrate with ex vivo mouse spinal cord MR data that the new experiment design approach enables DR-CSI to be accelerated by a factor of more than 2 without a substantial loss in quality. The diffusion-relaxation correlation spectroscopic imaging (DR-CSI) experiment is a novel high-dimensional extension of traditional diffusion MRI and MRI relaxometry1. Similar to an earlier diffusion-relaxation correlation spectroscopy method (non-imaging)2-3, DR-CSI jointly and nonseparably encodes diffusion and relaxation information, and uses that information to construct a 2D diffusion-relaxation correlation spectrum for every voxel. Compared to traditional analyses, this 2D spectrum enables better resolution of the microstructural tissue compartments that exist inside a macroscopic imaging voxel1. However, despite the potential of DR-CSI, the need to jointly encode both diffusion and relaxation information leads to relatively long acquisition times. In this work, we demonstrate that, guided by principled methods from estimation theory, DR-CSI acquisitions can be substantially accelerated without a substantial loss of information. 
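To make the joint contrast encoding concrete, the following sketch simulates the noiseless DR-CSI signal for a single voxel containing two tissue compartments, sampled over the same b-value/TE grid used in this work. The compartment amplitudes, diffusivities, and T2 values here are illustrative assumptions, not measured values from the abstract.

```python
import numpy as np

# Hypothetical two-compartment voxel: amplitudes A, diffusivities D (mm^2/s),
# and T2 values (ms) are illustrative assumptions.
A = np.array([0.6, 0.4])
D = np.array([0.3e-3, 1.0e-3])   # mm^2/s
T2 = np.array([60.0, 120.0])     # ms

# The acquisition grid used in this work: 7 b-values x 4 TEs = 28 images.
b_values = np.array([0, 500, 1000, 2000, 3000, 4000, 5000])  # s/mm^2
TEs = np.array([40, 80, 120, 160])                           # ms

# Signal for every (b, TE) combination:
#   s(b, TE) = sum_s A_s * exp(-b * D_s) * exp(-TE / T2_s)
bb, tt = np.meshgrid(b_values, TEs, indexing="ij")
signal = sum(A[s] * np.exp(-bb * D[s]) * np.exp(-tt / T2[s]) for s in range(len(A)))

print(signal.shape)  # (7, 4): one value per contrast encoding
```

Inverting this nonseparable bi-exponential decay at every voxel is what yields the 2D diffusion-relaxation correlation spectrum.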
Theory and Methods The Cramér-Rao Bound (CRB) is a lower bound on the variance of any unbiased estimator4, and is often used as a metric for deriving optimal experiment designs5-7. We hypothesize that CRB-based experiment design will enable accelerated DR-CSI acquisitions without a significant loss of information. Based on this hypothesis, we consider the DR-CSI signal model: $$f(\boldsymbol{\theta},\boldsymbol{\gamma}_n)=\sum_{s=1}^S A_s e^{-b_nD_s} e^{-TE_n/T2_s}$$, where $$$\boldsymbol{\theta}=\left[ \bf{A}~\bf{D}~\bf{T2} \right]^T \in \mathbb{R}^{3S \times1}$$$ is the set of unknown diffusion-relaxation parameters to be estimated and $$$\boldsymbol{\gamma}_n = \left[b_n~TE_n \right]$$$ are the experimental conditions for the $$$n$$$th acquisition. Our goal is to choose the experimental conditions $$$\boldsymbol{\gamma}_n = \left[b_n~TE_n \right]$$$ so that the data measurements contain as much information about $$$\boldsymbol{\theta}$$$ as possible. The CRB is defined by $$$\text{COV}(\hat{\boldsymbol{\theta}}) \geq \text{CRB}(\boldsymbol{\theta})=\mathbf{F}^{-1} (\boldsymbol{\theta})$$$, where the Fisher information matrix $$$\mathbf{F}(\boldsymbol{\theta})$$$ is given by $$$[\mathbf{F}(\boldsymbol{\theta})]_{ij} = \sum_{n=1}^N \frac{1}{\sigma^2} \frac{\partial f(\boldsymbol{\theta},\boldsymbol{\gamma}_n)}{\partial \theta_i} \frac{\partial f(\boldsymbol{\theta},\boldsymbol{\gamma}_n)}{\partial \theta_j}$$$ assuming Gaussian random noise statistics. Based on the CRB, the experimental conditions are determined by minimizing the following objective function: $$J(\boldsymbol{\gamma};\boldsymbol{\theta}) = \sum_{i=1}^{3S}w_i \frac{\sqrt{\left[\text{CRB}(\boldsymbol{\theta})\right]_{ii}}}{\theta_i}$$, where the $$$w_i$$$ are weighting coefficients that emphasize the tissue parameters of interest. Our experiments used the same ex-vivo mouse spinal cord datasets from previous DR-CSI work1 (three sham controls and three with traumatic spinal cord injury).
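The Fisher information matrix and the CRB-based design cost can be sketched numerically as follows. The derivatives follow directly from the exponential model; the parameter values and noise level in the usage example are illustrative assumptions.

```python
import numpy as np

def fisher_crb(theta, gammas, sigma=1.0):
    """Fisher information matrix and CRB for the model
    f(theta, gamma_n) = sum_s A_s exp(-b_n D_s) exp(-TE_n / T2_s),
    with theta = [A_1..A_S, D_1..D_S, T2_1..T2_S] and gammas a list of
    (b_n, TE_n) pairs, assuming i.i.d. Gaussian noise of std sigma."""
    S = len(theta) // 3
    A, D, T2 = theta[:S], theta[S:2 * S], theta[2 * S:]
    F = np.zeros((3 * S, 3 * S))
    for b, TE in gammas:
        e = np.exp(-b * D - TE / T2)  # per-compartment decay factor
        # Partial derivatives of f with respect to A_s, D_s, and T2_s:
        grad = np.concatenate([e, -b * A * e, (TE / T2**2) * A * e])
        F += np.outer(grad, grad) / sigma**2
    return F, np.linalg.inv(F)

def crb_objective(theta, gammas, w=None):
    """Design cost J: weighted sum of normalized CRB standard deviations."""
    _, crb = fisher_crb(theta, gammas)
    w = np.ones(len(theta)) if w is None else w
    return float(np.sum(w * np.sqrt(np.diag(crb)) / theta))

# Illustrative single-compartment example (A = 1, D = 1e-3 mm^2/s, T2 = 80 ms):
theta = np.array([1.0, 1e-3, 80.0])
gammas = [(0, 40), (1000, 40), (0, 80), (2000, 120)]
print(crb_objective(theta, gammas))
```

Candidate encoding sets can then be compared by evaluating `crb_objective` on each; smaller values indicate a more informative design.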
This data was sampled at every combination of 7 b-values (0, 500, 1000, 2000, 3000, 4000 and 5000 s/mm2) and 4 TEs (40, 80, 120 and 160 ms) for a total of 28 images. Representative images are shown in Figure 1(a). The spectra estimated from this set of 28 images were considered as a "fully sampled" ground truth. We used the sequential backward selection algorithm9 to select a subset of 12 contrast-encoding parameters from the original full set of 28. This corresponds to a substantial (2.3x) reduction in experimental duration. For reference, we also compared against 12-encoding rectangular grid and random sampling8 schemes. The sampling schemes are displayed in Fig. 1(b). For rectangular grid sampling, two different options (4 b-values$$$\times$$$3 TEs and 3 b-values$$$\times$$$4 TEs) were considered. For random sampling, ten different sampling realizations were considered, and we report results derived from the best-performing option. For all sampling schemes, DR-CSI spectra were reconstructed using the same dictionary-based spatially-regularized nonnegative least squares optimization approach described in previous DR-CSI work1. Figure 2 shows spatially-averaged DR-CSI spectra for control and injured spinal cords. In the control cord, the spectra from the CRB-based and 4x3 rectangular grid sampling schemes have the best performance in resolving the two distinct spectral peaks that are known to be present from the "fully sampled" ground truth. In the injured cord, the spectrum from the CRB-based sampling has good consistency with the ground truth, and correctly demonstrates three distinct spectral peaks. Figure 3(a) shows spatially-varying DR-CSI spectra from a region of the "fully sampled" ground truth in which the tissue transitions between injury and gray matter (GM). Fig. 3(b-c) show the DR-CSI spectra obtained from the CRB-based and 4x3 rectangular grid accelerated acquisitions. The tissue transitions are still seen in both of the accelerated acquisitions.
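The sequential backward selection algorithm referenced above (ref. 9) can be sketched as a greedy loop that repeatedly discards the encoding whose removal hurts a design cost the least. The cost function below is an A-optimality surrogate on a linearized single-compartment model, an illustrative stand-in for the abstract's weighted CRB objective; the parameter values are assumptions.

```python
import numpy as np

def sequential_backward_selection(candidates, n_keep, cost):
    """Greedily drop one candidate encoding at a time, always removing the one
    whose deletion increases the design cost the least."""
    selected = list(range(len(candidates)))
    while len(selected) > n_keep:
        best_cost, best_drop = None, None
        for i in selected:
            trial = [candidates[j] for j in selected if j != i]
            c = cost(trial)
            if best_cost is None or c < best_cost:
                best_cost, best_drop = c, i
        selected.remove(best_drop)
    return [candidates[i] for i in selected]

# Candidate encodings: the full 7 b-values x 4 TEs grid (28 combinations).
bvals = [0, 500, 1000, 2000, 3000, 4000, 5000]   # s/mm^2
TEs = [40, 80, 120, 160]                         # ms
grid = [(b, te) for b in bvals for te in TEs]

def cost(gammas, D=1e-3, T2=80.0):
    # Jacobian of s = A exp(-b D) exp(-TE/T2) w.r.t. (A, D, T2), at A = 1
    # (an assumed operating point, not a value from the abstract).
    J = np.array([[np.exp(-b * D - te / T2),
                   -b * np.exp(-b * D - te / T2),
                   (te / T2**2) * np.exp(-b * D - te / T2)] for b, te in gammas])
    return float(np.trace(np.linalg.inv(J.T @ J)))  # A-optimality criterion

subset = sequential_backward_selection(grid, 12, cost)
print(len(subset))  # 12 of the 28 encodings are kept
```

The greedy deletion order makes the search tractable (a few hundred cost evaluations here) compared with exhaustively scoring all 28-choose-12 subsets.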
Figure 4 shows that the spatial maps of the integrated spectral peaks from the accelerated scans are also highly similar to the maps from the ground truth. In each case, the maps seem to be consistent with the known anatomy of the spinal cord: The first three components appear to correspond to white matter and gray matter compartments. The last component is only present in the injured cords, and seems to be associated with a new compartment related to the injury. We demonstrated that DR-CSI can be subsampled, leading to a substantial acceleration of the experiment without a substantial loss of information. Our results also confirm the hypothesis that CRB-based sampling design works well for achieving this acceleration, although other sampling design approaches can also yield similarly good results. This work was supported in part by NSF CAREER award CCF-1350563, and NIH research grants R21-EB022951 and R01-NS089212. 1. Kim D, Kim JH, Haldar JP. Diffusion-relaxation correlation spectroscopic imaging (DR-CSI): An enhanced approach to imaging microstructure. In: Proc. Int. Soc. Magn. Reson. Med. 2016; p. 660. 2. Hürlimann MD, Venkataramanan L, Flaum C. The diffusion-spin relaxation time distribution function as an experimental probe to characterize fluid mixtures in porous media. J Chem Phys 2002; 117:10223-10232. 3. Callaghan PT, Godefroy S, Ryland BN. Use of the second dimension in PGSE NMR studies of porous media. Magn Reson Imaging 2003; 21:243-248. 4. Kay SM. Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory. Upper Saddle River: Prentice Hall, 1993. 5. Cavassila S, Deval S, Huegen C, van Ormondt D, Graveron-Demilly D. Cramér-Rao bounds: an evaluation tool for quantitation. NMR Biomed 2001; 14:278-283. 6. Alexander DC. A general framework for experiment design in diffusion MRI and its application in measuring direct tissue-microstructure features. Magn Reson Med 2008; 60:439-448. 7.
Zhao B, Haldar JP, Setsompop K, Wald LL. Optimal experiment design for magnetic resonance fingerprinting. In: Proc. IEEE Engineering in Medicine and Biology Conf. 2016; p. 453-456. 8. Benjamini D, Basser PJ. Use of marginal distributions constrained optimization (MADCO) for accelerated 2D MRI relaxometry and diffusometry. J Magn Reson 2016; 271:40-45. 9. Reeves SJ, Zhe Z. Sequential algorithms for observation selection. IEEE Trans Signal Process 1999; 47:123-132. Figure 1. (a) The "fully sampled" diffusion- and relaxation-encoded images from a control mouse spinal cord, containing 28 total images (every combination of 7 b-values and 4 TEs). (b) The sampling schemes we used with only 12 contrast encodings. Figure 2. Estimated 2D diffusion-relaxation correlation spectra (averaged across all voxels) from representative (a) control and (b) injured spinal cords. The first column shows representative diffusion- and relaxation-encoded images. The second column shows the integrated 2D spectra from the ground truth. The right four columns show the integrated 2D spectra from the four different accelerated scans, corresponding to the sampling schemes shown in Fig. 1(b). Figure 3. Spatially-varying DR-CSI spectra from (a) the "fully sampled" ground truth, (b) accelerated CRB-based sampling, and (c) 4x3 rectangular grid sampling. Spectra are shown from the transition region between injury and gray matter, corresponding to the red box drawn on the injured cord in Fig. 2. Figure 4. Spatial maps of the integrated spectral peaks from the DR-CSI reconstruction from representative (a) control and (b) injured cords. The correspondences between spatial maps and spectral peaks are indicated using the color-coding scheme shown in the left column (red: comp. 1, blue: comp. 2, green: comp. 3 and yellow: comp. 4). Proc. Intl. Soc. Mag. Reson. Med. 25 (2017)
In layman's terms, what is a stock? Can you explain to a layman how money is created? Can you explain in layman's terms what cryptocurrency is? What is an index fund? What is an exchange traded fund (ETF)? What is a stock's price to earnings ratio? What is a company's enterprise value, and how is it calculated? What is a stock dividend? What are the common financial ratios that every investor should know? What is the annual return of the S&P 500? What is a trade deficit? What is tier 1 and tier 2 capital? How does the central bank control the supply of money? What does a central bank do? What is the Federal Reserve, and what does it do? What is treasury yield, and why is it important? Why does China buy US Treasuries? What is the best financial advice you've ever received? What is the downside of low interest rates? How to estimate skewness and kurtosis? What does a quantitative analyst do? What are the most respected quantitative hedge funds? What kind of background is required to work for a quantitative hedge fund? What should I major in to pursue a career in quantitative finance? Do quants need to be experts in C++? What do quants in credit risk do? What kind of mathematical skills should I learn to pursue a career at a quantitative hedge fund? What are the applications of machine learning to finance? What is the best way to learn stock investing? What book should I read to get into quantitative trading? Can you explain intuitively what a risk-neutral measure is? What type of mathematical skills are expected for each type of quant? What is the career progression for a quant? What separates quant finance from other fields of finance? Why do many hedge funds care about top schools? Does Warren Buffett use quantitative strategies? What is the Sharpe ratio, and how is it used in finance? As of Sep 2019, do you think the US stock market is over-valued?
To get a job in quant finance, which major is better: data science or a master's in finance? What is VIX, how is it calculated, and why is it important? How does an analyst value a stock? What is a treasury bond? Zhuo Yu • Director at Bank of America Answered 1 month ago Treasury bonds are long-term debt securities. They pay interest semi-annually until maturity. At maturity, the investor is paid the face value of the bond. A long-term Treasury will generally pay a higher interest rate than shorter-maturity Treasuries to compensate for the additional risks inherent in the longer maturity. Treasuries are relatively safe since they are backed by the U.S. government. The price and interest rate of a Treasury bond are determined at auction, where the price is set at par, at a premium, or at a discount to par. If the yield to maturity (YTM) is greater than the interest rate, the bond will be issued at a discount. If the YTM is equal to the interest rate, the price will be equal to par. Finally, if the YTM is less than the interest rate, the Treasury bond will be sold at a premium to par. In a single auction, a bidder can buy up to \(\$\)5 million in bonds by non-competitive bidding or up to 35 percent of the initial offering amount by competitive bidding. In addition, the bonds are sold in increments of \(\$\)100 and the minimum purchase is \(\$\)100. What is a savings bond? U.S. savings bonds are non-marketable securities that earn interest for 30 years. Interest accumulates, and the investor receives everything when s/he redeems the savings bond. The bond can be redeemed after one year, but if it is redeemed within five years of the purchase date, the investor loses the last three months' interest. What is a 10-K for a public company? A 10-K is a report filed annually by a publicly traded company about its financial performance. It is required by the U.S. Securities and Exchange Commission (SEC).
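The par/premium/discount relationship in the Treasury bond answer above can be checked with a short pricing function. A minimal sketch; the face value, coupon, maturity, and semi-annual convention are illustrative assumptions, not quoted market data:

```python
# Price a coupon bond by discounting its cash flows at the yield to maturity.
def bond_price(face, annual_coupon_rate, annual_ytm, years, freq=2):
    c = face * annual_coupon_rate / freq           # coupon paid each period
    y = annual_ytm / freq                          # yield per period
    n = years * freq                               # number of periods
    pv_coupons = sum(c / (1 + y) ** t for t in range(1, n + 1))
    pv_face = face / (1 + y) ** n
    return pv_coupons + pv_face

# 30-year bond with a 3% annual coupon, semi-annual payments:
assert abs(bond_price(100, 0.03, 0.03, 30) - 100) < 1e-6   # YTM = coupon -> par
assert bond_price(100, 0.03, 0.04, 30) < 100               # YTM > coupon -> discount
assert bond_price(100, 0.03, 0.02, 30) > 100               # YTM < coupon -> premium
```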
A 10-K report includes the company's organizational structure, financial statements, executive compensation, earnings per share, subsidiaries, and other relevant data. The report is required by the SEC to keep investors informed of a company's financial condition before they buy or sell shares in the corporation. What is an asset? An asset is a resource that an individual or corporation owns with the expectation that it will provide future value, i.e., a resource that can generate cash flow in the future. A corporation's assets are reported on its balance sheet. What is a balance sheet? A balance sheet reports a company's assets, liabilities, and shareholders' equity at a given point in time. It is a financial statement that provides a snapshot of what a company owns and owes, as well as the amount invested by shareholders. It is used together with the income statement and the statement of cash flows to conduct fundamental analysis or to calculate financial ratios. The balance sheet adheres to the following accounting equation: Assets = Liabilities + Equity Intuitively, a company has to pay for all the things it owns (assets) by either borrowing money (taking on liabilities) or taking it from investors (issuing shareholders' equity). For example, if a company takes out a ten-year, \(\$\)100,000 loan from a bank, its assets will increase by \(\$\)100,000. Its liabilities will also increase by \(\$\)100,000, balancing the two sides of the equation. If the company takes \(\$\)80,000 from investors, its assets will increase by that amount, as will its shareholders' equity. A number of ratios can be derived from the balance sheet to assess how healthy a company is. These include the debt-to-equity ratio, the acid-test ratio, and many others. What is the capital asset pricing model? The Capital Asset Pricing Model (CAPM) describes the relationship between systematic risk and expected return for assets, particularly stocks.
CAPM is widely used for pricing risky securities and calculating expected returns for assets given the risk of those assets and the cost of capital. The expected return of an asset given its risk is calculated as follows: \(ER = R_f+\beta\times (ER_m-R_f)\) where \(ER\) = expected return, \(R_f\) = risk-free rate, \(\beta\) = beta of the investment, and \(ER_m-R_f\) = market risk premium. The beta measures the risk of the investment relative to the market. If a stock has a beta greater than one, it is riskier than the market; if a stock has a beta of less than one, it is less risky than the market. The goal of the CAPM formula is to assess whether a stock is fairly valued when its risk and the time value of money are compared to its expected return. For example, consider a stock worth \(\$\)100 per share today that pays a 3% annual dividend. The stock has a beta of 1.2. Assume that the risk-free rate is 3% and the market return is 7% per year. The expected return of the stock based on the CAPM formula is 7.8%=3%+1.2×(7%−3%). The expected return of the CAPM formula is used to discount the expected dividends and capital appreciation of the stock over the expected holding period. If the discounted value of those future cash flows is equal to \(\$\)100, then the CAPM formula indicates the stock is fairly valued relative to risk. Can you explain blockchain in layman's terms? What is the debt service coverage ratio (DSCR)? The debt-service coverage ratio (DSCR) measures a company's ability to service its current debt obligations. \(DSCR=\frac{ \textrm{Net Operating Income}}{\textrm{Total Debt Service}}\)
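The CAPM example above can be reproduced in a few lines; the helper names are ours, for illustration only:

```python
# CAPM: ER = R_f + beta * (ER_m - R_f)
def capm_expected_return(risk_free, beta, market_return):
    return risk_free + beta * (market_return - risk_free)

# DSCR = net operating income / total debt service
def dscr(net_operating_income, total_debt_service):
    return net_operating_income / total_debt_service

# The worked example: risk-free 3%, market return 7%, beta 1.2 -> 7.8%.
er = capm_expected_return(0.03, 1.2, 0.07)
assert abs(er - 0.078) < 1e-9
```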
A Note on Viral Marketing – Part II: How Hotmail Grew January 27, 2014 by Brian Laung Aoaeh Hotmail is one example of a product that spread through the use of viral marketing techniques. This case study will cover the early days of Hotmail, explore some of the underlying factors that led to its spread, and examine one model that has been used to describe the growth of its user base. ((Any errors in appropriately citing my sources are entirely mine. Let me know what you object to, and how I might fix the problem. Any data in this post is only as reliable as the sources from which I obtained it.)) Ray Tomlinson is credited with inventing email as we know it today. Before 1972, email could only be sent between users of the same computer. The problem became more complex once different computers were connected to one another to form a network, and a user on one computer wanted to send email to users on a different computer. Important contributions to the evolution of email were made by others, and commercial email packages began to appear in 1976. ((Ian Peter, The History of Email. Accessed at http://www.nethistory.info/History%20of%20the%20Internet/email.html on Jan 17, 2013.)) Sabeer Bhatia and Jack Smith met at Apple Computer in the early 1990s, and later joined a startup called Firepower Systems. In 1995 they started discussing the idea of building a startup themselves. Their first idea was to build a database on Sun's Java technology. They called it JavaSoft. Venture capitalists turned them down. During the period when they were working on JavaSoft, they encountered a number of obstacles that prevented them from communicating freely with each other. Jack Smith developed a system that allowed them to have their email displayed on a web page. This became the basis for Hotmail. They soon obtained $300,000 in funding from DFJ and rounded up an additional $100,000 in capital. This was in early 1996.
The funding terms ascribed Hotmail an implied valuation of $2,000,000. ((Oliver A. Hugo and Elizabeth W. Garnsey, Hotmail: Delivering E-mail to the World, http://doczine.com/bigdata/1/1370291311_60c0e3de77/4e7-hotmailcase26apr02.pdf. Accessed on Jan. 26th, 2014.)) At the urging of the venture capitalists backing Hotmail, Bhatia and Smith did two things. First, they struck a strategic relationship with Four11, another DFJ portfolio startup which ran "the most comprehensive 'people finder' on the Internet" at that time according to PC Magazine. Second, they automatically included the text "P.S. I love you. Get your own free Hotmail at www.hotmail.com" at the end of every email that was sent by a Hotmail user. ((There seem to be variants of the exact message that was appended to the end of each email, but it is consistently reported that a message was included with every email sent from Hotmail.)) Hotmail launched in July 1996, with 100 users signing up in the first hour. By September it boasted 100,000 subscribers. That number rose to 1,000,000 by January 1997, and 8,000,000 by October. Though Hotmail had run out of cash before it launched its email service to the public, it went on to raise additional capital from venture capitalists. By August 1996 it was valued at $20,000,000, up 10x from the $2,000,000 at which it had been valued just 8 months earlier. To model the growth of Hotmail's subscriber base we'll turn to a model called the Bass Model, after Professor Frank M. Bass, who first published it in 1963 as a section of another paper. ((http://www.bassbasement.org/BassModel/)) The Bass Model states that the probability of adoption by those who have not yet adopted is a linear function of those who have previously adopted. The mathematical expression for the model is given below. ((Frank M. Bass, A New Product Growth for Model Consumer Durables, January 1969. Available at http://www.bassbasement.org/F/N/FMB/Pubs/Bass%201969%20New%20Prod%20Growth%20Model.pdf.
Accessed on Jan. 26th, 2014)) $latex \frac{f(t)}{1-F(t)}=p+\frac{q}{M}\left[ A\left( t \right) \right]$ In the equation above: t represents time, and the first full time interval of sales is t = 1; p represents the coefficient of innovation; q represents the coefficient of imitation; M is a constant representing the potential market, i.e., the number of eventual purchasers of the product; A(t) represents the cumulative number of adopters up to time t; f(t) represents the fraction of the potential market that adopts the product at time t; and F(t) represents the portion of the potential market that has adopted the product up to and including time t, so that f(t) is the first derivative of F(t) with respect to t. Alan Montgomery uses the Bass Model to fit the model's results to actual data from Hotmail's first year and reports a very good fit. ((Alan L. Montgomery, Applying Quantitative Marketing Techniques to the Internet, available at http://www.andrew.cmu.edu/user/alm3/papers/internet%20marketing.pdf, July 2000. Accessed Jan. 26th, 2014)) He uses estimates of 0.0012 for p, 0.008 for q, and 9,670,000 for M. I will tackle models like the Bass Model in later posts. It is reported that Bhatia sent a message to a friend in India using Hotmail, and three weeks after that Hotmail had 100,000 users there. ((Willix Halim, My Top Five "Growth Hacking" Techniques, http://e27.co/my-top-five-growth-hacking-techniques/. Accessed on Jan. 27th, 2014.)) Hotmail was eventually bought by Microsoft in 1998, a year and a half after it launched to the public. The value of the deal was not made public but is rumored to be as high as $400,000,000. ((Jeff Peline, Microsoft Buys Hotmail, January 3rd, 1998, http://news.cnet.com/2100-1033-206717.html. Accessed on Jan. 27th, 2014.)) Whatever you call it, "Growth Hacking" or "Viral Marketing", it works. Hotmail spent a fraction of the capital that its rivals spent on marketing and advertising, but experienced significantly more growth. In the next post on this topic I will study the tactics Dropbox used to grow its user base.
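The Bass Model can be simulated directly with Montgomery's parameter estimates. A minimal discrete-time sketch; the step size and its interpretation are assumptions for illustration, and Montgomery's actual fit may use a different discretization:

```python
import numpy as np

# Discrete-time Bass model: new adopters per step equal the hazard
# (p + q*A/M) applied to the remaining market (M - A).
def bass_adopters(p, q, M, steps):
    """Return cumulative adopters A(t) for t = 0..steps."""
    A = np.zeros(steps + 1)
    for t in range(steps):
        A[t + 1] = A[t] + (p + q * A[t] / M) * (M - A[t])
    return A

# Montgomery's estimates for Hotmail: p = 0.0012, q = 0.008, M = 9,670,000.
A = bass_adopters(p=0.0012, q=0.008, M=9_670_000, steps=80)
```

The curve is monotonically increasing and saturates below M, reproducing the familiar S-shaped diffusion profile.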
Filed Under: Case Studies, How and Why, Long Read, Sales and Marketing Tagged With: Early Stage Startups, Long Read, Viral Marketing
Geometry & Topology VOL. 3 · NO. 1 | 1999 Contact Lie algebras of vector fields on the plane Boris M Doubrov, Boris P Komrakov Geom. Topol. 3 (1), 1-20, (1999) DOI: 10.2140/gt.1999.3.1 KEYWORDS: contact vector fields, filtered, graded Lie algebras, differential invariants, 17B66, 53C30, 34A26, 58A20 The paper is devoted to the complete classification of all real Lie algebras of contact vector fields on the first jet space of one-dimensional submanifolds in the plane. This completes Sophus Lie's classification of all possible Lie algebras of contact symmetries for ordinary differential equations. As a main tool we use the abstract theory of filtered and graded Lie algebras. We also describe all differential and integral invariants of new Lie algebras found in the paper and discuss the infinite-dimensional case. Classical 6j-symbols and the tetrahedron Justin Roberts Geom. Topol. 3 (1), 21-66, (1999) DOI: 10.2140/gt.1999.3.21 KEYWORDS: $6j$–symbol, asymptotics, tetrahedron, Ponzano–Regge formula, geometric quantization, Scissors congruence, 22E99, 81R05, 51M20 A classical 6j–symbol is a real number which can be associated to a labelling of the six edges of a tetrahedron by irreducible representations of SU(2). This abstract association is traditionally used simply to express the symmetry of the 6j–symbol, which is a purely algebraic object; however, it has a deeper geometric significance.
Ponzano and Regge, expanding on work of Wigner, gave a striking (but unproved) asymptotic formula relating the value of the 6j–symbol, when the dimensions of the representations are large, to the volume of an honest Euclidean tetrahedron whose edge lengths are these dimensions. The goal of this paper is to prove and explain this formula by using geometric quantization. A surprising spin-off is that a generic Euclidean tetrahedron gives rise to a family of twelve scissors-congruent but non-congruent tetrahedra. Embeddings from the point of view of immersion theory : Part I Geom. Topol. 3 (1), 67-101, (1999) DOI: 10.2140/gt.1999.3.67 KEYWORDS: embedding, immersion, calculus of functors, 57R40, 57R42 Let M and N be smooth manifolds without boundary. Immersion theory suggests that an understanding of the space of smooth embeddings emb(M,N) should come from an analysis of the cofunctor V↦emb(V,N) from the poset O of open subsets of M to spaces. We therefore abstract some of the properties of this cofunctor, and develop a suitable calculus of such cofunctors, Goodwillie style, with Taylor series and so on. The terms of the Taylor series for the cofunctor V↦emb(V,N) are explicitly determined. In a sequel to this paper, we introduce the concept of an analytic cofunctor from O to spaces, and show that the Taylor series of an analytic cofunctor F converges to F. Deep excision theorems due to Goodwillie and Goodwillie–Klein imply that the cofunctor V↦emb(V,N) is analytic when dim(N)− dim(M)≥3. Embeddings from the point of view of immersion theory : Part II Thomas G Goodwillie, Michael Weiss Geom. Topol. 3 (1), 103-118, (1999) DOI: 10.2140/gt.1999.3.103 Let M and N be smooth manifolds. For an open V⊂M let emb(V,N) be the space of embeddings from V to N. By the results of Goodwillie and Goodwillie–Klein, the cofunctor V↦emb(V,N) is analytic if dim(N)− dim(M)≥3. We deduce that its Taylor series converges to it.
For details about the Taylor series, see Part I The bottleneck conjecture Greg Kuperberg KEYWORDS: metric geometry, euclidean geometry, Mahler conjecture, bottleneck conjecture, Central symmetry, 52A40, 46B20, 53C99 The Mahler volume of a centrally symmetric convex body K is defined as M(K)=(VolK)(VolK∘). Mahler conjectured that this volume is minimized when K is a cube. We introduce the bottleneck conjecture, which stipulates that a certain convex body K♢⊂K×K∘ has least volume when K is an ellipsoid. If true, the bottleneck conjecture would strengthen the best current lower bound on the Mahler volume due to Bourgain and Milman. We also generalize the bottleneck conjecture in the context of indefinite orthogonal geometry and prove some special cases of the generalization. $\mathbb{R}$–covered foliations of hyperbolic 3-manifolds Danny Calegari KEYWORDS: $\mathbb{R}$–covered foliations, slitherings, hyperbolic 3–manifolds, transverse geometry, 57M50, 57R30, 53C12 We produce examples of taut foliations of hyperbolic 3–manifolds which are ℝ–covered but not uniform — ie the leaf space of the universal cover is ℝ, but pairs of leaves are not contained in bounded neighborhoods of each other. This answers in the negative a conjecture of Thurston. We further show that these foliations can be chosen to be C0 close to foliations by closed surfaces. Our construction underscores the importance of the existence of transverse regulating vector fields and cone fields for ℝ–covered foliations. Finally, we discuss the effect of perturbing arbitrary ℝ–covered foliations. Vanishing lines in generalized Adams spectral sequences are generic M J Hopkins, J H Palmieri, J H Smith KEYWORDS: Adams spectral sequence, vanishing line, generic, 55T15, 55P42 We show that in a generalized Adams spectral sequence, the presence of a vanishing line of fixed slope (at some term of the spectral sequence, with some intercept) is a generic property. 
Seiberg–Witten invariants and pseudo-holomorphic subvarieties for self-dual, harmonic 2–forms Clifford Henry Taubes KEYWORDS: Four–manifold invariants, symplectic geometry, 53C07, 52C15 A smooth, compact 4–manifold with a Riemannian metric and b2+≥1 has a non-trivial, closed, self-dual 2–form. If the metric is generic, then the zero set of this form is a disjoint union of circles. On the complement of this zero set, the symplectic form and the metric define an almost complex structure; and the latter can be used to define pseudo-holomorphic submanifolds and subvarieties. The main theorem in this paper asserts that if the 4–manifold has a non zero Seiberg–Witten invariant, then the zero set of any given self-dual harmonic 2–form is the boundary of a pseudo-holomorphic subvariety in its complement. Lefschetz fibrations and the Hodge bundle Ivan Smith KEYWORDS: symplectic geometry, Lefschetz fibration, stable curves, signature, 53C15, 53C55, 58F99 Integral symplectic 4–manifolds may be described in terms of Lefschetz fibrations. In this note we give a formula for the signature of any Lefschetz fibration in terms of the second cohomology of the moduli space of stable curves. As a consequence we see that the sphere in moduli space defined by any (not necessarily holomorphic) Lefschetz fibration has positive "symplectic volume"; it evaluates positively with the Kähler class. Some other applications of the signature formula and some more general results for genus two fibrations are discussed. All two dimensional links are null homotopic Arthur C Bartels, Peter Teichner KEYWORDS: link homotopy, Milnor group, concordance, 57Q45, 57Q60 We show that any number of disjointly embedded 2–spheres in 4–space can be pulled apart by a link homotopy, ie, by a motion in which the 2–spheres stay disjoint but are allowed to self-intersect. 
Transversal torus knots John B Etnyre KEYWORDS: tight, contact structure, transversal knots, torus knots, 57M50, 57M25, 53C15 We classify positive transversal torus knots in tight contact structures up to transversal isotopy. Non-positively curved aspects of Artin groups of finite type Mladen Bestvina KEYWORDS: Artin groups, nonpositive curvature, 20F32, 20F36, 55P20 Artin groups of finite type are not as well understood as braid groups. This is due to the additional geometric properties of braid groups coming from their close connection to mapping class groups. For each Artin group of finite type, we construct a space (simplicial complex) analogous to Teichmüller space that satisfies a weak nonpositive curvature condition and also a space "at infinity" analogous to the space of projective measured laminations. Using these constructs, we deduce several group-theoretic properties of Artin groups of finite type that are well-known in the case of braid groups. Piecewise Euclidean structures and Eberlein's Rigidity Theorem in the singular case Michael W Davis, Boris Okun, Fangyang Zheng KEYWORDS: piecewise Euclidean structure, CAT(0) space, Hadamard space, rigidity theorem, 57S30, 53C20 In this article, we generalize Eberlein's Rigidity Theorem to the singular case, namely, one of the spaces is only assumed to be a CAT(0) topological manifold. As a corollary, we get that any compact irreducible but locally reducible locally symmetric space of noncompact type does not admit a nonpositively curved (in the Aleksandrov sense) piecewise Euclidean structure. Any hyperbolic manifold, on the other hand, does admit such a structure. Examples of Riemannian manifolds with positive curvature almost everywhere Peter Petersen, Frederick Wilhelm KEYWORDS: Positive curvature, unit tangent bundle of $S^4$, 53C20, 53C20, 58B20, 58G30 We show that the unit tangent bundle of S4 and a real cohomology ℂP3 admit Riemannian metrics with positive sectional curvature almost everywhere. 
These are the only examples so far with positive curvature almost everywhere that are not also known to admit positive curvature. Circle-valued Morse theory and Reidemeister torsion Michael Hutchings, Yi-Jen Lee KEYWORDS: Morse–Novikov complex, Reidemeister torsion, Seiberg–Witten invariants, 57R70, 53C07, 57R19, 58F09 Let X be a closed manifold with χ(X)=0, and let f:X→S1 be a circle-valued Morse function. We define an invariant I which counts closed orbits of the gradient of f, together with flow lines between the critical points. We show that our invariant equals a form of topological Reidemeister torsion defined by Turaev [Math. Res. Lett. 4 (1997) 679–695]. We proved a similar result in our previous paper [Topology 38 (1999) 861–888], but the present paper refines this by separating closed orbits and flow lines according to their homology classes. (Previously we only considered their intersection numbers with a fixed level set.) The proof here is independent of the previous proof, and also simpler. Aside from its Morse-theoretic interest, this work is motivated by the fact that when X is three-dimensional and b1(X)>0, the invariant I equals a counting invariant I3(X) which was conjectured in our previous paper to equal the Seiberg–Witten invariant of X. Our result, together with this conjecture, implies that the Seiberg–Witten invariant equals the Turaev torsion. This was conjectured by Turaev and refines the theorem of Meng and Taubes [Math. Res. Lett 3 (1996) 661–674]. The Burau representation is not faithful for $n = 5$ Stephen Bigelow KEYWORDS: Braid group, Burau representation, 20F36, 57M07, 20C99 The Burau representation is a natural action of the braid group Bn on the free ℤ[t,t−1]–module of rank n−1. It is a longstanding open problem to determine for which values of n this representation is faithful. It is known to be faithful for n=3. 
Moody has shown that it is not faithful for n≥9 and Long and Paton improved on Moody's techniques to bring this down to n≥6. Their construction uses a simple closed curve on the 6–punctured disc with certain homological properties. In this paper we give such a curve on the 5–punctured disc, thus proving that the Burau representation is not faithful for n≥5. An elementary approach to the mapping class group of a surface Bronisław Wajnryb KEYWORDS: mapping class group, surface, curve complex, group presentation, 20F05, 20F34, 57M05, 20F38, 57M60 We consider an oriented surface S and a cellular complex X of curves on S, defined by Hatcher and Thurston in 1980. We prove by elementary means, without Cerf theory, that the complex X is connected and simply connected. From this we derive an explicit simple presentation of the mapping class group of S, following the ideas of Hatcher–Thurston and Harer.
Evaluate: $\left\{\begin{array} { l } x-2y=11 \\ x+5y=-17\end{array} \right.$ Expression: $\left\{\begin{array} { l } x-2y=11 \\ x+5y=-17\end{array} \right.$ Move the variable to the right-hand side and change its sign $\left\{\begin{array} { l } x=11+2y \\ x+5y=-17\end{array} \right.$ $\left\{\begin{array} { l } x=11+2y \\ x=-17-5y\end{array} \right.$ Since both expressions $11+2y$ and $-17-5y$ are equal to $x$, set them equal to each other forming an equation in $y$ $11+2y=-17-5y$ Solve the equation for $y$ $y=-4$ Substitute the given value of $y$ into the equation $x=-17-5y$ $x=-17-5 \times \left( -4 \right)$ Solve the equation for $x$ $x=3$ The possible solution of the system is the ordered pair $\left( x, y\right)$ $\left( x, y\right)=\left( 3, -4\right)$ Check if the given ordered pair is the solution of the system of equations $\left\{\begin{array} { l } 3-2 \times \left( -4 \right)=11 \\ 3+5 \times \left( -4 \right)=-17\end{array} \right.$ Simplify the equalities $\left\{\begin{array} { l } 11=11 \\ -17=-17\end{array} \right.$ Since all of the equalities are true, the ordered pair is the solution of the system Calculate: 4 (1)/(4)-3 (5)/(16) Calculate: (3)/(8) = (15)/(a) Calculate: 3 (1)/(6)-1 (2)/(3) Calculate: 11sqrt(1210)+sqrt(1000)-131sqrt(10) Solve for: 2(x-y)^2-5(y-x)-12 Solve for: (6x^2+21x+12)/(x+3) Evaluate: y=ln(x)^3 Evaluate: 0.3 * (1)/(150) Evaluate: (d)/(dx) ((x+8)/((7x+4)^2)) Solve for: -2z-4=-14
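The worked substitution solution above can be verified numerically; this sketch solves the same system with NumPy's linear solver:

```python
import numpy as np

# The system:  x - 2y = 11
#              x + 5y = -17
A = np.array([[1.0, -2.0],
              [1.0,  5.0]])
b = np.array([11.0, -17.0])
x, y = np.linalg.solve(A, b)
assert abs(x - 3) < 1e-9 and abs(y - (-4)) < 1e-9   # (x, y) = (3, -4)
```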
Compositional variations along the route of Chang'e-3 Yutu rover revealed by the lunar penetrating radar Chunyu Ding1,2, Zhiyong Xiao ORCID: orcid.org/0000-0002-9405-62451,3,4, Yan Su2,5, Jiannan Zhao6 & Jun Cui1,2,4 Using the high-frequency lunar penetrating radar data obtained by the Chang'e-3 mission, we apply the frequency-shift method to calculate the decay rate of the electromagnetic wave in the regolith-like ejecta deposits of the Ziwei crater. The radar data are divided into segments according to the navigation points along the traverse route of the Yutu rover. For each segment, we calculate the bulk loss tangent of materials within the top ~ 50 ns of the radar data based on the frequency decreasing rate of the electromagnetic wave. The loss tangent varies from ~ 0.011–0.017 along the route of Yutu, and it is within the range of the measured loss tangent of Apollo regolith samples. Using the empirical relationship between loss tangent and TiO2 + FeO content derived from the Apollo lunar samples, we estimate the TiO2 + FeO content for the bulk regolith along the route of Yutu, which is ~ 23–30 wt.%. This value is comparable with that estimated using both orbital reflectance spectral data and in situ observation made by the Yutu rover. The loss tangent derived along the route of Yutu is larger than the average value of returned lunar samples, which is mainly caused by the larger content of TiO2 + FeO at the landing site compared to the global average. Variations of the TiO2 + FeO content along the route of Yutu are mainly due to the excavation of the Ziwei crater. The TiO2 + FeO content map derived by the radar has a much higher spatial resolution compared to orbital observation, testifying to the feasibility of this technique for regional geology study. Most of our knowledge about the composition of the Moon is derived from orbital observations, since the number of both returned lunar samples and collected lunar meteorites is still rather limited.
Orbital observations using spectrometers that work at various wavelengths have excellent spatial coverage, but they are usually limited in spatial resolution. On the other hand, highly heterogeneous materials exist everywhere on the Moon due to intense material transportation, mixing, and metamorphism by impact cratering (Huang et al. 2017). Therefore, there is a gap in the spatial resolution of interpreted compositions between orbital and laboratory measurements, and caution should be used when directly comparing compositional data obtained from orbit with those measured for samples at laboratories. For example, reflectance spectra obtained from orbit (normally > 10 m/pixel) are usually compared with those measured at laboratories for typical minerals and lunar samples to determine the possible compositions (e.g., Pieters et al. 2000), but the two measurements have a scale difference of at least an order of magnitude. Recently, Wu and Hapke (2018) analyzed the visible–near-infrared spectrometer data obtained on the lunar surface by the Chang'e-3 mission (i.e., CE-3), and heterogeneous reflectance spectra were observed both at the surface and in the near-subsurface materials. Lunar penetrating radar (i.e., LPR) can also be used to deduce the composition of surface materials, especially the content of TiO2 + FeO (Schaber et al. 1975). The rate of energy decay of electromagnetic waves in lunar regolith is mainly controlled by the content of TiO2 + FeO (Strangway et al. 1977), so the loss tangent of lunar regolith derived from the energy decay rate of electromagnetic waves can be used to estimate the amount of TiO2 + FeO (Campbell et al. 1997). Pommerol et al. (2010) noticed that radar echoes returned from high-Ti regions on the Moon, detected by the lunar radar sounder onboard the Selenological and Engineering Explorer (Kaguya) mission, were low. Applying this method, Campbell et al. 
(1997) estimated the bulk composition of regolith in the lunar mare using Earth-based radar data. The LPR onboard the Yutu rover of the CE-3 mission obtained the first in situ radar profile on the surface of the Moon (Su et al. 2014). Compared to the radar data obtained by Earth-based telescopes (e.g., Campbell et al. 1997), the LPR data obtained by CE-3 have a much shorter wavelength ("Data and method" section) and higher spatial resolution. This work utilizes the CE-3 LPR data to derive the composition along the route of Yutu, which fills the observation gap in terms of spatial resolution between the orbital (e.g., Zhao et al. 2014) and in situ measurements (e.g., Wu and Hapke, 2018). Furthermore, instead of probing only the top-most materials, the LPR-derived contents of TiO2 + FeO represent the average value of materials within the radar detection ranges. We introduce the data obtained by the CE-3 high-frequency LPR in the "Data and method" section; the frequency-shift technique used to estimate the loss tangent of lunar materials and the method used to derive the TiO2 + FeO content are introduced in the "Estimate of loss tangent" section. The derived loss tangent values and the TiO2 + FeO contents are discussed in the "Estimate of loss tangent" and "Bulk TiO2 + FeO content" sections, respectively, and the reliability of these results is verified in the "Reliability of the loss tangent estimated" and "Reliability of the TiO2 + FeO contents estimated" sections. Indications of the results are discussed in the "Indications to regional geology" section. Data and method We use the CE-3 high-frequency LPR data in this study. The LPR system onboard the Yutu rover consists of two channels that have center frequencies of 60 and 500 MHz, respectively (Fang et al. 2014; Su et al. 2014). The transmitted pulse of the LPR was generated by a digital integrated circuit, so that the transmitted waveform was constant (Fang et al. 
2014), enabling the derivation of loss tangent from the measured frequency drift rate (Irving and Knight, 2003). The high-frequency LPR was operated with different gain values during the mission (Feng et al. 2017), and a constant gain of 0 dB was used from the navigation point N105 to N208 (Fig. 1). To avoid uncertainties raised by the different gain values (Feng et al. 2017), we use the radar data from N105 to N208 here, which contain ~ 1600 valid traces after removing the redundant data. The landing site of the Chang'e-3 mission and the traverse route of the Yutu rover. a Geological context of the landing site. The landing site (white star) is located on the rim of the ~ 450 m diameter Ziwei crater. The base image is obtained by the Lunar Reconnaissance Orbiter (M102285549LE+RE; 1.66 m/pixel). b Image acquired by the descent camera (image ID: CE3_BMYK_LCAM-3006) on the Chang'e-3 lander shows numerous small craters within the landing site. The landing site is marked by the white star and the white line is the route of the Yutu rover. c Traverse route and speed of the Yutu rover from the navigation points N101 to N208 (black dots). The color bar is for the moving speed of the rover. Navigation points starting with '1' belong to the first lunar day, and those starting with '2' to the second lunar day. The white star represents the position of the lander The radar data are processed following the routine procedure (e.g., Su et al. 2014). The Yutu rover was driven at different speeds from the navigation points N105 to N208 (Feng et al. 2017). To remedy distortions in the radargram that are caused by the different moving speeds (e.g., data sections N202–203 and N204–206 shown in Fig. 1c), we perform a trace-equivalent calculation for the data based on the average rover speed (i.e., 0.055 m/s). Afterward, the standard processing procedure (e.g., removal of the direct-current component and the background, and band-pass filtering) is applied to obtain the final LPR radargram (Fig. 2a). 
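The processing chain described above (trace-equivalent resampling to a uniform along-track grid, removal of the direct-current component and background, and band-pass filtering) can be sketched in Python. The spatial sampling interval, filter order, and pass band below are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_lpr(traces, positions, fs=3.2e9, band=(250e6, 750e6)):
    """Sketch of the routine LPR processing chain.

    traces    : (n_traces, n_samples) array of raw A-scans
    positions : (n_traces,) along-track distance of each trace (m)
    fs        : time-sampling frequency of the high-frequency channel
    band      : assumed band-pass corners around the 500 MHz center
    """
    # 1) Trace-equivalent resampling: interpolate the traces onto a
    #    uniform along-track grid so that variable rover speed does not
    #    distort the radargram (5 cm spacing is an assumed value).
    uniform_x = np.arange(positions[0], positions[-1], 0.05)
    resampled = np.empty((uniform_x.size, traces.shape[1]))
    for j in range(traces.shape[1]):
        resampled[:, j] = np.interp(uniform_x, positions, traces[:, j])

    # 2) Remove the direct-current component of each trace.
    resampled -= resampled.mean(axis=1, keepdims=True)

    # 3) Background removal: subtract the mean trace to suppress
    #    horizontal ringing common to all traces.
    resampled -= resampled.mean(axis=0, keepdims=True)

    # 4) Zero-phase band-pass filtering around the channel center frequency.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, resampled, axis=1)
```

The zero-phase filter (filtfilt) is chosen here so that filtering does not shift the two-way travel times of the reflectors.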
Radargram obtained by the high-frequency LPR from the navigation points N105 to N208. The two-way travel time of the electromagnetic wave is shown in the right y-axis. a The final-processed radargram. The left y-axis is the depth corresponding to an assumed dielectric constant of 1. b Interpretations of subsurface structure of the radargram. The solid green lines are uneven reflectors that are interpreted as the boundary between the ejecta deposits of Ziwei and deeper more competent materials (e.g., Xiao et al. 2015; Fa et al. 2015). The dashed pink lines mark the approximate boundary where the signal transition is not obvious. The distance values shown in the lower panel are from ~ 33.2 to 107.4 m, which are the route lengths from the landing site. The left y-axis marks depths that correspond to an assumed dielectric constant of 3.2 (Fa et al. 2015) We use the top ~ 50 ns of the radargram for this study. There is a general consensus that this part of the radargram corresponds to the continuous ejecta deposits of the Ziwei crater (Fig. 1a, e.g., Xiao et al. 2015; Fa et al. 2015; Zhang et al. 2019), which is supported by both the geological context (Qiao et al. 2016) and the signal transition of radar echoes (Xiao et al. 2015). Data at larger depths are less clear in terms of the geological interpretation (Xiao et al. 2015; Zhang et al. 2015; Zhang et al. 2019), and the signal-to-noise ratio is lower (Xing et al. 2017). To avoid possible controversies regarding the reliability of both the data and the geological interpretation at larger depths, we focus on the top ~ 50 ns of the radargram here. Previous studies have found that materials within this depth (i.e., the continuous ejecta deposits of Ziwei) have similar properties to typical regolith materials (Lai et al. 2016; Xiao et al. 2015). Estimate of loss tangent We focus on the imaginary part of the relative permittivity (i.e., loss tangent) to invert the content of TiO2 + FeO. 
Radar permittivity consists of real and imaginary parts. For the top ~ 50 ns of the LPR radargram, the real part of the relative permittivity has been estimated using hyperbolic reflectors in the radargram, resulting in values below 5 (Feng et al. 2017; Lai et al. 2016). However, the imaginary part for the lunar surface at the CE-3 landing site was taken as a constant value of ~ 0.014, and this value was estimated using the empirical relationship between loss tangent and the content of TiO2+FeO of the surface regolith (~ 27.8% on average), which was obtained by the Active Particle-induced X-ray Spectrometer (APXS) onboard the Yutu rover (Ling et al. 2015). However, the APXS observations are only for 4 points (Ling et al. 2015), and it is our target to resolve the TiO2 + FeO content along the route of the Yutu rover. Measurements of the permittivity of returned lunar regolith revealed that at radar frequencies larger than 10^5 Hz, the loss tangent of lunar regolith is primarily affected by the content of TiO2 + FeO, while the loss tangent is only weakly related to the bulk density (Strangway et al. 1977). On the other hand, the Apollo 17 mission performed both radar and seismic detections for the subsurface, and a sudden increase was observed in both the imaginary and real parts at the possible boundary between the regolith layer and the competent basalts; this increase was interpreted to be caused by the larger bulk density of competent basalts (Strangway et al. 1977). The top ~ 50 ns of the CE-3 LPR radargram is restricted within regolith-like materials (Xiao et al. 2015; Lai et al. 2016), so that the content of TiO2 + FeO can be estimated using the loss tangent derived from the LPR data. Loss tangent can be estimated in both the time domain (i.e., from a decay function of signal amplitude with time; Brzostowski and McMechan, 1992) and the frequency domain (i.e., from a decreasing function of centroid frequency with time; Brzostowski and McMechan, 1992; Turner and Siggins, 1994). 
Compared to the time-domain method, the advantage of the frequency-domain method is that the frequency shift of the reflected electromagnetic waves is not affected by reflection losses or far-field geometrical spreading (Liu et al. 1998). Difficulties raised by the amplitude decay of the reflected electromagnetic waves can thus be avoided using the frequency-domain method. The relationship between the dielectric losses and the downshift of the centroid frequency (Quan and Harris, 1997) can be applied to estimate the loss tangent of the lunar regolith (Irving and Knight, 2003; Lauro et al. 2017). This method is also widely used to estimate the loss tangent of media both in seismic detections (e.g., Quan and Harris, 1997) and in ground penetrating radar (Irving and Knight, 2003; Liu et al. 1998; Quan and Harris, 1997). When electromagnetic waves propagate in lossy materials such as lunar regolith, the centroid frequency of the received electromagnetic wave (fr) is lower than that transmitted (ft), a phenomenon called wavelet dispersion, which is caused by dielectric losses (Turner and Siggins, 1994). To estimate the loss tangent using the frequency-shift method, the LPR data are transformed to the frequency domain, and the centroid frequency and its time decay for each trace of the LPR data are calculated using the short-time Fourier transformation (STFT; Griffin and Lim, 1984). The decreasing rate of the centroid frequency of the received signal (i.e., the downtrend slope) is computed using a linear least-squares fit (Irving and Knight, 2003). Loss tangent is proportional to the downtrend slope (Quan and Harris, 1997; Irving and Knight, 2003). The frequency drift of the received electromagnetic waves is calculated based on the assumption that for electromagnetic waves at radar wavelengths, the amplitudes of the transmitted signals (x), those lost through the medium (r), and those of the received signals (y) follow a linear system (Liu et al. 1998; Quan and Harris, 1997). 
Therefore, at each time (τ), the amplitude of the transmitted and that of the received radar waves can be related via convolution based on the amplitude losses in the medium, as shown in Fig. 3. In a linear signal system, the time-domain signal can be translated to a frequency-domain signal using the Fourier transformation (FFT; Rabiner and Gold, 1975), which simplifies the calculation procedure (Cooley et al. 1969). Equation (1) shows the linear system in the frequency domain (Fig. 3). $$ Y(f)=X(f)\cdot R(f) $$ The LPR data are treated as a linear signal system. Fourier transformation and inverse Fourier transformation are used to transform the signal system between the time domain and the frequency domain (Rabiner and Gold, 1975). In the time domain, the received signal y(τ) is derived from the convolution between the transmitted signal x(τ) and that of the loss medium r(τ). In the frequency domain, the amplitude spectrum of the received signal Y(f) equals the product of the amplitude spectra of the transmitted waveform X(f) and the loss medium R(f) where f is frequency, Y(f) is the frequency spectrum of the received signal. X(f) is the frequency spectrum of the transmitted radar pulse. R(f) is the propagation function of the radar pulse, which includes both attenuation and phase terms. The transmitted radar pulse of the high-frequency LPR channel can be approximated as a plane wave (Irving and Knight, 2003), so that the frequency spectrum of the received signal Y(f) at a given distance d can be stated as Eq. (2). $$ Y(f)=X(f)\cdot {e}^{-\alpha d}\cdot {e}^{- j\beta d} $$ where α is the attenuation term, and β is the phase term in the loss medium R(f). The loss tangent (tanδ) of the lossy medium is proportional to the attenuation function (α), and their relationship is expressed as Eq. (3). A constant tanδ not depending on frequency is assumed in this study for simplicity. 
$$ \alpha =\frac{\omega }{\upsilon_p}{\left\{\frac{\sqrt{1+{\tan}^2\delta }-1}{2}\right\}}^{\frac{1}{2}} $$ where ω = 2πf, f is the frequency of the radar pulse, and vp is the velocity of the radar pulse in the medium. The loss tangent of typical lunar regolith is ~ 0.005 (Strangway et al. 1977), so that the square of the loss tangent (tan²δ) is far less than one. Applying the binomial approximation \( \sqrt{1+{\chi}^2}-1\approx {\chi}^2/2 \) to Eq. (3), we obtain $$ \alpha (f)=\frac{\pi \tan \delta }{\upsilon_p}f $$ The electromagnetic wave transmitted by the high-frequency LPR follows a constant Ricker function (Fang et al. 2014), which can also be approximated by the Gaussian function shown in Eq. (5) (Lauro et al. 2017): $$ X(f)=\frac{2}{f_0\sqrt{\pi }}\exp \left[-\frac{4{\left(f-{f}_0\right)}^2}{f_0^2}\right] $$ where f0 is the dominant frequency of the transmitted radar pulse, which is also recognized as the peak frequency (Zhang et al. 2002). Substituting Eqs. (4) and (5) into Eq. (2), the frequency spectrum of the received signal can be expressed as Eq. (6). A more detailed derivation can be found in the supplementary information. $$ Y(f)=\frac{2}{f_0\sqrt{\pi }}\exp \left\{-\frac{4{\left[f-\left({f}_0-\frac{\pi \tan \delta {f}_0^2d}{8{\upsilon}_p}\right)\right]}^2}{f_0^2}\right\}\cdot \exp \left[{\left(\frac{\pi \tan \delta {f}_0d}{4{\upsilon}_p}-2\right)}^2-4\right]\cdot \exp \left(- j\beta d\right) $$ The first exponential expression of Eq. (6) is the Gaussian form of the transmitted radar pulse. The second is the attenuation of the radar pulse, and the third is the phase of the radar pulse. The centroid frequency of the received signal (fr) is given by Eq. (7) (Irving and Knight, 2003; Liu et al. 1998; Quan and Harris, 1997). $$ {f}_r=\frac{\int_0^{\infty }f\cdot Y(f)\, df}{\int_0^{\infty }Y(f)\, df} $$ The centroid frequency of the transmitted radar wave X(f) is ft, which is expressed as Eq. 
(8) (Quan and Harris, 1997). $$ {f}_t=\frac{\int_0^{\infty }f\cdot X(f)\, df}{\int_0^{\infty }X(f)\, df}={f}_0 $$ Combining Eqs. (6), (7), and (8), the centroid frequency of the received signal (fr) is related to the centroid frequency of the transmitted signal (ft) and the loss tangent (Quan and Harris, 1997), and is expressed as $$ {f}_r={f}_t-\frac{\pi \tan \delta {f}_t^2}{8}\tau $$ where τ = d/υp is the propagation time. Fitting the centroid frequency of the received signal (fr) against the propagation time (τ), we can estimate the loss tangent (tanδ) of the medium and the centroid frequency of the transmitted signal (ft). Equation (10) shows the relationship between the loss tangent (tanδ) and the frequency drift rate (Δfr/Δτ). $$ \tan \delta =-\frac{8}{\pi {f}_t^2}\cdot \frac{\Delta {f}_r}{\Delta \tau } $$ In the calculation, the STFT is applied to the first ~ 50 ns of each A-scan to transform the received LPR signal from the time domain, y(τ), to the frequency domain, Y(f). The ~ 50 ns boundary (i.e., solid green and dashed pink curves; Fig. 2b) corresponds to the approximated depth of surface regolith along the rover route. The sampling frequency is set to 3.2 GHz, which corresponds to the time sampling interval (0.3125 ns) of the high-frequency LPR (Su et al. 2014). Each A-scan is divided into 8 equally sized segments (i.e., ~ 6 ns each), and the overlapped time width is set to 50% of the segment width (i.e., 3 ns) to ensure reliable frequency resolution (Niethammer et al. 2000). A 3 ns Hamming window is applied for each segment of the A-scan for the STFT. For each segment of the LPR data that was obtained between adjacent navigation points (Fig. 1), the frequency drift rate of the radar data is calculated. Figure 4 shows the centroid frequency of the received signals versus time delays, which are fitted using linear equations. It is notable that fr here is derived from the data after reducing the system noise. 
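The per-trace procedure of Eqs. (7), (9), and (10) — STFT, centroid frequency at each time delay, linear fit of the downtrend, and conversion of the slope to loss tangent — can be sketched as follows. This is a minimal illustration, not the authors' code; the window length and overlap approximate the values quoted in the text, and the function name is ours:

```python
import numpy as np
from scipy.signal import stft

def loss_tangent_from_trace(trace, fs=3.2e9, f_t=500e6):
    """Estimate the loss tangent of one A-scan via the frequency-shift
    method (Eqs. 7, 9, and 10)."""
    nperseg = 19                      # ~6 ns segments at 0.3125 ns sampling
    f, tau, Z = stft(trace, fs=fs, window="hamming",
                     nperseg=nperseg, noverlap=nperseg // 2)
    amp = np.abs(Z)
    # Centroid frequency of the received signal at each time delay (Eq. 7).
    f_r = (f[:, None] * amp).sum(axis=0) / amp.sum(axis=0)
    # Linear least-squares fit of f_r against tau gives the drift rate.
    slope, _ = np.polyfit(tau, f_r, 1)
    # Eq. (10): tan(delta) = -8 * (dfr/dtau) / (pi * f_t**2).
    return -8.0 * slope / (np.pi * f_t ** 2)
```

For a synthetic trace whose instantaneous frequency drifts downward, the routine returns a positive loss tangent; applied per navigation-point segment and averaged over the traces between adjacent navigation points, it reproduces the workflow described above.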
Deriving fr from the raw data would cause substantial oscillation in the echo pattern. Table 1 shows the results and the related errors. In general, the derived loss tangent is ~ 0.011–0.017 (Table 1), and the average value is 0.014. Referring to the relationship between loss tangent and penetration depth of the high-frequency LPR (Xing et al. 2017; Fig. 5a), our estimated loss tangent corresponds to penetrating depths of ~ 12.5–15.7 m (ε = 2.9), which is consistent with the actual penetrating depth of the high-frequency LPR (Xiao et al. 2015; Fa et al. 2015). The relationship between the centroid frequency of the received signal (fr) and the time delay for each segment of the LPR data. The data segments (a–j) are divided according to the adjacent navigation points from N105 to N208. The red lines are the best fit between fr and τ using the least-squares method. The slopes of the best-fit lines are the drift rate of fr with τ Table 1 Parameter estimation for each navigation point. The centroid frequency of the received radar signal, the drift slope of the centroid frequency, the derived loss tangent, and the derived TiO2 + FeO contents at different segments of the LPR data. The uncertainties for all the estimated values are the 95% confidence bounds. The navigation points are shown in Figs. 1c and 5b a The relationship between the different loss tangent values and the penetration depth of the high-frequency channel (Xing et al. 2017). The pink, green and blue lines show the electromagnetic wave propagation in media that have a relative permittivity of 2.3, 2.9, and 3.5, respectively. The red dashed line shows the position of our estimated average loss tangent, and the other two dashed lines correspond to the lower and upper limits of our estimated loss tangent, respectively. b The estimated content of TiO2 + FeO from the navigation points N105 to N208 along the route of Yutu. c An interpolated trend map for the content of TiO2 + FeO in the traverse region of Yutu. 
This map is based on the measurement shown in panel b using a natural neighbor interpolation method. The base map of the main frame is obtained by the Lunar Reconnaissance Orbiter (M102285549LE + RE; 1.66 m/pixel), and that for the inset is obtained by the descent camera onboard the CE-3 lander (image ID CE3_BMYK_LCAM-3006) Bulk TiO2+FeO content The average density of materials within the top ~ 50 ns of LPR data is estimated to be 1.8 g/cm3 (Fa et al. 2015), which is consistent with the density of the typical lunar regolith (Carrier et al. 1991). Therefore, applying the empirical relationship between the loss tangent and TiO2 + FeO content of lunar regolith samples (Eq. 11; Carrier et al. 1991), the bulk TiO2 + FeO content of materials within the ~ 50 ns of the radargram can be calculated. $$ \tan \delta ={10}^{\left(0.030\times \left(\%{TiO}_2+\% FeO\right)-2.676\right)} $$ The TiO2 + FeO content is estimated as ~ 23–30 wt.% (Table 1), and the average value is ~ 27 wt.%. The estimated bulk concentration of TiO2 + FeO exhibits a decreasing trend from the navigation points N105 to N202 (Fig. 5b). In general, navigation points further away from the rim of Ziwei (Fig. 1) exhibit a higher content of TiO2 + FeO (Fig. 5b). However, the TiO2 + FeO concentration at N107–N108 is distinctly less than the surroundings, suggesting that the composition of materials in this region is patchy in distribution. Assuming that the change of the bulk TiO2 + FeO contents along the traverse route of Yutu is continuous outward, the TiO2 + FeO contents along the route of Yutu are interpolated to derive a regional trend (Fig. 5c) using the natural neighbor method (Sibson, 1981). Reliability of the loss tangent estimated In this study, the loss tangent of subsurface materials along the route of Yutu is estimated using the relationship between the centroid frequency of the received signals and the time delay. 
Table 1 shows that the centroid frequency of the LPR transmitter is ~ 495.1–510.4 MHz. The average value (501.3 MHz) is consistent with the designed centroid frequency (500 MHz) of the high-frequency LPR (Fang et al. 2014), suggesting that the base data used to estimate the loss tangent are reliable. The top ~ 50 ns of the CE-3 LPR data are restricted within the continuous ejecta deposits of the Ziwei crater (Xiao et al. 2015; Fa et al. 2015). The ejecta deposits are dominated by pre-impact regolith, as evidenced by the geological context (Qiao et al. 2016). This is also consistent with the relative permittivity value of ~ 2–5 estimated using the LPR data (Fa et al. 2015; Lai et al. 2016; Feng et al. 2017), which is substantially less than that of typical lunar rocks (Carrier et al. 1991). The derived loss tangent values shown in Table 1 are within the range of, and toward the upper end of, the measured loss tangent of regolith samples returned by the Apollo missions (0.00057 to 0.0232; Carrier et al. 1991). Typical lunar regolith samples exhibit loss tangents less than 0.01, and this is also true for in situ measurements performed on the Moon. The Surface Electrical Properties Experiment carried out by the Apollo 17 mission measured the permittivity and loss tangent of materials at the landing site, and the interpreted regolith layer (~ 7 m thick) has a bulk loss tangent of 0.008 and a relative permittivity of 3.8 (Strangway et al. 1977). On the contrary, the loss tangent of the interpreted bedrocks at the Apollo 17 landing site is 0.035 and the relative permittivity is 7.7 (Strangway et al. 1977). It is notable that the value of loss tangent alone cannot be used to determine whether the medium is porous regolith or competent bedrock. 
The Lunar Radar Sounder (operating frequency of 5 MHz) onboard the Kaguya orbiter (SELENE) estimated that the loss tangent for the ~ 200 m thick mare units at Oceanus Procellarum is ~ 0.001–0.01, but the estimated relative permittivity is ~ 5.76–8.08 (Ono et al. 2009). Reliability of the TiO2 + FeO contents estimated The TiO2 + FeO contents derived from the loss tangent are the average values for materials within the top ~ 50 ns of the LPR data. This value is consistent with various orbital and in situ observations for this region. The Gamma Ray Spectrometer onboard Lunar Prospector found that the TiO2 + FeO content on the mare surface where CE-3 landed is ~ 25.2 wt.% (Prettyman et al. 2006). The ultraviolet-visible spectrometer onboard the Clementine mission found that the TiO2 + FeO content near the CE-3 landing site is 24–26 wt.% (with a FeO content of ~ 19 wt.% and a TiO2 content of ~ 5–7 wt.%; Ling et al. 2015). Likewise, the FeO and TiO2 contents estimated by the Multispectral Imager onboard the Kaguya mission (Fig. 6a, b) are also consistent with the measurements made here (Fig. 6c and Table 1), and are basically consistent with those estimated by the Clementine mission (Zhao et al. 2014). Furthermore, in situ measurements performed by the CE-3 APXS found that the TiO2 + FeO content is ~ 27.8 wt.% (Ling et al. 2015). Therefore, the estimated TiO2 + FeO contents are in line with previous studies. The TiO2 (a), FeO (b), and their combined (c) contents at the CE-3 landing site (white stars). The data are derived from the Multispectral Imager data, and the maps are revised from Zhao et al. (2014). The white dashed line is the boundary between the Imbrian- (north) and Eratosthenian-aged mare units (south). North is up in all the panels The higher content of TiO2 + FeO compared to the global average of mare surfaces (12%; Lucey et al. 1995) explains the larger than average loss tangent derived in Table 1. 
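The empirical relation between loss tangent and TiO2 + FeO content (Eq. 11) can be inverted in one line; a quick numerical check (function names are ours) reproduces the values reported above: the average loss tangent of 0.014 gives ~ 27.4 wt.%, and the estimated range of ~ 0.011–0.017 maps to ~ 24–30 wt.%.

```python
import math

def tio2_feo_from_loss_tangent(tan_delta):
    """Invert Eq. (11): tan(delta) = 10**(0.030*(%TiO2 + %FeO) - 2.676)."""
    return (math.log10(tan_delta) + 2.676) / 0.030

def loss_tangent_from_tio2_feo(wt_percent):
    """Forward form of Eq. (11) (Carrier et al. 1991)."""
    return 10.0 ** (0.030 * wt_percent - 2.676)

# tio2_feo_from_loss_tangent(0.014)  ->  ~27.4 wt.%
# tio2_feo_from_loss_tangent(0.011)  ->  ~23.9 wt.%
# tio2_feo_from_loss_tangent(0.017)  ->  ~30.2 wt.%
```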
The measured loss tangent of returned lunar regolith samples exhibits a large range (from less than 0.001 to ~ 0.03), indicating that the loss tangent of lunar regolith is only weakly dependent on the density or relative permittivity and that the dominant factor is the content of TiO2 + FeO (Olhoeft et al. 1975; Strangway et al. 1977). Regolith samples that have > 0.01 loss tangent uniformly feature a high content of TiO2 + FeO (Table 1). Therefore, while the CE-3 landing region has the highest content of TiO2 + FeO among lunar mare surfaces (Zhao et al. 2014), the regolith layer, which is the dominant material within the ejecta deposits of Ziwei, also features the largest loss tangent. Indications to regional geology Along the route of Yutu, places further away from the rim of Ziwei generally exhibit a slightly larger TiO2 + FeO content of ~ 30 wt.%, e.g., navigation points N105–N107 (Fig. 5b). This content is comparable with that of the Eratosthenian-aged mare surface around the landing site (white star in Fig. 6c). However, places closer to the rim of Ziwei have a TiO2 + FeO content of ~ 24 wt.% (e.g., navigation points N203–N207; Fig. 5b), which is closer to that of the Late Imbrian-aged mare basalts to the north (Fig. 6c). This comparison indicates that the generally lower content of TiO2 + FeO closer to the rim of Ziwei may be caused by the impact excavation of deeper Imbrian-aged materials, as the Eratosthenian-aged basalts have been penetrated through by Ziwei. Also, the route between navigation points N107–N108 features a lower content of TiO2 + FeO compared to the surrounding area, indicating that the excavated low-TiO2 + FeO materials follow a patchy distribution. Since the relative permittivity of materials within the top ~ 50 ns of the radargram is close to that of lunar regolith (e.g., Fa et al. 
2015), the excavated Imbrian-aged materials are most likely from the paleo-regolith that developed between the eruptions of the Eratosthenian- and Imbrian-aged mare basalts. This interpretation is consistent with the geological interpretation of the LPR data (Xiao et al. 2015). Furthermore, most of the continuous ejecta deposits of the Ziwei crater exhibit a high content of TiO2 + FeO (Fig. 6c), suggesting that most of the ejecta is from the shallow part of the Ziwei crater site, while Imbrian-aged paleo-regolith with low TiO2 + FeO contents from the deeper part is minor and limited to the crater rim and restricted locations (e.g., N107–N108; Fig. 5c), because the Ziwei crater is small (Qiao et al. 2016). The TiO2+FeO content derived here bridges the resolution gap between in situ and orbital observations, attesting to the advantage of ground penetrating radar in estimating the bulk composition of regolith materials. Compared to compositional data obtained by orbital observations (e.g., the maps shown in Fig. 6 represent the highest resolution maps obtained from orbit), the large variations of the TiO2 + FeO content along the route of Yutu (Fig. 5b, c) are not expected (Fig. 6c), calling for more careful interpretations of compositional data obtained from orbit. The high-frequency radar data obtained by the lunar penetrating radar onboard the Yutu rover of Chang'e-3 are used to derive the content of TiO2 + FeO for the shallow materials along the route of Yutu. The frequency-shift method is applied to the top ~ 50 ns of the radar data to evaluate the drift rates of the received radar signals, which are related to the loss tangent of the bulk materials. The estimated loss tangent is ~ 0.011–0.017, which is consistent with, but slightly larger than, the measured results for returned lunar regolith. 
The larger than average loss tangent for the bulk material along the route of Yutu is mainly caused by the high content of TiO2 + FeO (~ 23–30 wt.%), which is consistent with both orbital and in situ observations (Prettyman et al. 2006; Ling et al. 2015; Zhang et al. 2015). The resolved TiO2 + FeO contents exhibit a lower value towards the rim of Ziwei, which is caused by the excavation of the Late-Imbrian-aged paleo-regolith. This study shows that ground penetrating radar can be applied as a useful supplementary tool to investigate the bulk composition of lunar regolith, as both the spatial resolution and the detection depth are better than those of orbital observations. Source codes and data used in this study are open to public access (https://doi.org/10.5281/zenodo.3884522). The LPR data are also available at the Ground Application System of Lunar Exploration, National Astronomical Observatories, Chinese Academy of Sciences (http://moon.bao.ac.cn). LPR: Lunar penetrating radar CE-3: Chang'e-3 SELENE: Selenological and Engineering Explorer APXS: Active particle-induced X-ray spectrometer FFT: Fast Fourier transformation STFT: Short time Fourier transformation Brzostowski MA, McMechan GA (1992) 3-D tomographic imaging of near-surface seismic velocity and attenuation. Geophysics 57(3):396–403. Campbell BA, Hawke BR, Thompson TW (1997) Regolith composition and structure in the lunar maria: Results of long-wavelength radar studies. Journal of Geophysical Research: Planets 102(E8):19307–320. Carrier D, Olhoeft R, Mendell W (1991) Physical properties of the lunar surface. In: Heiken GH, Vaniman DT, French BM (ed) Lunar source book-A user's guide to the moon. Cambridge University Press, Cambridge, pp 475–594. Cooley W, Lewis W, Welch D (1969) The Fast Fourier Transform and Its Applications. IEEE Transactions on Education 12:27–34. Fa W, Zhu M-H, Liu T, Plescia JB (2015) Regolith stratigraphy at the Chang'E-3 landing site as seen by lunar penetrating radar. 
Geophysical Research Letters 42(23):10,179–10,187. Fang G, Zhou B, Ji Y, Zhang Q, Shen S, Li Y, Guang H, Tang C, Gao Y, Lu W, Ye S, Han H, Zheng J, Wang S (2014) Lunar Penetrating Radar onboard the Chang'e-3 mission. Research in Astronomy and Astrophysics 14(12):1607–22. Feng J, Su Y, Ding C, Xing S, Dai S, Zou Y (2017) Dielectric properties estimation of the lunar regolith at CE-3 landing site using lunar penetrating radar data. Icarus 284:424–30. Griffin D, Lim J (1984) Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing 32:236–43. Huang Y, Minton D, Hirabayashi M, Elliott R, Richardson E, Fassett I, Zellner B (2017) Heterogeneous impact transport on the Moon. Journal of Geophysical Research: Planets 122(6):1158–80. Irving D, Knight J (2003) Removal of wavelet dispersion from ground-penetrating radar data. Geophysics 68:960–70. Lai J, Xu Y, Zhang X, Tang Z (2016) Structural analysis of lunar subsurface with Chang′E-3 lunar penetrating radar. Planetary and Space Science 120:96–102. Lauro SE, Mattei E, Cosciotti B, Di Paolo F, Arcone SA, Viccaro M, Pettinelli E (2017) Electromagnetic signal penetration in a planetary soil simulant: Estimated attenuation rates using GPR and TDR in volcanic deposits on Mount Etna. Journal of Geophysical Research: Planets 122(7):1392–1404. Ling Z, Jolliff BL, Wang A, Li C, Liu J, Zhang J et al (2015) Correlated compositional and mineralogical investigations at the Chang′e-3 landing site. Nature Communications 6(1):8880. Liu L, Lane JW, Quan Y (1998) Radar attenuation tomography using the centroid frequency downshift method. Journal of Applied Geophysics 40:105–16. Lucey PG, Taylor GJ, Malaret E (1995) Abundance and Distribution of Iron on the Moon. Science 268(5214):1150–53. Niethammer M, Jacobs LJ, Qu J, Jarzynski J (2000) Time-frequency representation of Lamb waves using the reassigned spectrogram. 
The Journal of the Acoustical Society of America 107(5):L19–L24. Olhoeft GR, Strangway DW (1975) Dielectric properties of the first 100 meters of the Moon. Earth and Planetary Science Letters 24(3):394–404. Ono T, Kumamoto A, Nakagawa H, Yamaguchi Y, Oshigami S, Yamaji A, Kobayashi T, Kasahara Y, Oya H (2009) Lunar Radar Sounder Observations of Subsurface Layers Under the Nearside Maria of the Moon. Science 323(5916):909–12. Pieters CM, Taylor LA, Noble SK, Keller LP, Hapke B, Morris RV, Allen CC, McKAY DS, Wentworth S (2000) Space weathering on airless bodies: Resolving a mystery with lunar samples. Meteoritics & Planetary Science 35(5):1101–07. Pommerol A, Kofman W, Audouard J, Grima C, Beck P, Mouginot J, Herique A, Kumamoto A, Kobayashi T, Ono T (2010) Detectability of subsurface interfaces in lunar maria by the LRS/SELENE sounding radar: Influence of mineralogical composition. Geophysical Research Letters 37:L03201. Prettyman TH, Hagerty JJ, Elphic RC, Feldman WC, Lawrence DJ, McKinney GW, Vaniman DT (2006) Elemental composition of the lunar surface: Analysis of gamma ray spectroscopy data from Lunar Prospector. Journal of Geophysical Research: Planets 111:E12007. Qiao L, Xiao ZY, Zhao JN, Xiao L (2016) Subsurface structures at the Chang'e-3 landing site: Interpretations from orbital and in-situ imagery data. Journal of Earth Science 27:707–15. Quan Y, Harris JM (1997) Seismic attenuation tomography using the frequency shift method. Geophysics 62:895–905. Rabiner LR, Gold B (1975) Theory and application of digital signal processing. Prentice-Hall, New Jersey. Schaber GG, Thompson TW, Zisk SH (1975) Lava flows in mare imbrium: An evaluation of anomalously low earth-based radar reflectivity. The Moon 13:395–423. Sibson R (1981) A brief description of natural neighbor interpolation. In: Interpolating Multivariate Data. John Wiley, New York, pp 21–36. Strangway D, Pearce G, Olhoeft G (1977) Magnetic and dielectric properties of lunar samples. 
In: The Soviet-American Conference on Cosmochemistry of the Moon and Planets, NASA Special Publication-370, Washington, pp 417–33. Su Y, Fang G, Feng J, Xing S, Ji Y, Zhou B et al (2014) Data processing and initial results of Chang'e-3 lunar penetrating radar. Research in Astronomy and Astrophysics 14(12):1623–32. Turner G, Siggins AF (1994) Constant Q attenuation of subsurface radar pulses. Geophysics 59:1192–1200. Wu Y, Hapke B (2018) Spectroscopic observations of the Moon at the lunar surface. Earth and Planetary Science Letters 484:145–53. Xiao L, Zhu P, Fang G, Xiao Z, Zou Y, Zhao J et al (2015) A young multilayered terrane of the northern Mare Imbrium revealed by Chang'E-3 mission. Science 347(6227):1226–29. Xing S, Su Y, Feng J, Dai S, Xiao Y, Ding C, Li C (2017) The penetrating depth analysis of Lunar Penetrating Radar onboard Chang'e-3 rover. Research in Astronomy and Astrophysics 17(5):046. Zhang C, Ulrych TJ (2002) Estimation of quality factors from CMP records. Geophysics 67(5):1542–47. Zhang J, Yang W, Hu S, Lin Y, Fang G, Li C et al (2015) Volcanic history of the Imbrium basin: A close-up view from the lunar rover Yutu. Proceedings of the National Academy of Sciences 112(17):5342–47. Zhang L, Zeng Z, Li J, Huang L, Huo Z, Zhang J, Huai N (2019) A story of regolith told by Lunar Penetrating Radar. Icarus 321:148–60. Zhao J, Huang J, Qiao L, Xiao Z, Huang Q, Wang J et al (2014) Geologic characteristics of the Chang'E-3 exploration region. Science China Physics, Mechanics and Astronomy 57(3):569–76. We thank Dr. Sebastian Emanuel Lauro and Dr. Federico Di Paolo for helpful discussion. Comments and suggestions provided by two anonymous reviewers significantly helped clarify and improve the manuscript. Authors' information Dr. Chunyu Ding is an associate researcher at Sun Yat-sen University. He received a Ph.D. in Astronomy from the National Astronomical Observatories, Chinese Academy of Sciences in 2017. Dr.
Zhiyong Xiao is an associate professor at the Planetary Environmental and Astrobiological Research Laboratory, Sun Yat-sen University. Dr. Jiannan Zhao is a postdoc at China University of Geosciences (Wuhan). Dr. Yan Su is the PI of data acquisition subsystem at the Ground Application System of Lunar Exploration, National Astronomical Observatories, Chinese Academy of Sciences. Dr. Jun Cui is the director of Planetary Environmental and Astrobiological Research Laboratory, Sun Yat-sen University. The authors are supported by the B-type Strategic Priority Program of the Chinese Academy of Sciences (XDB41000000), the National Natural Science Foundation of China (41773063, 41525015, and 41830214), the Science and Technology Development Fund of Macau (0042/2018/A2), the Pre-research Project on Civil Aerospace Technologies (D020101, D020202) of CNSA, and the Opening Fund of the Key Laboratory of Lunar and Deep Space Exploration, Chinese Academy of Sciences (ldse201702; ldse201908). Planetary Environmental and Astrobiological Research Laboratory, School of Atmospheric Sciences, Sun Yat-sen University, Zhuhai, China Chunyu Ding, Zhiyong Xiao & Jun Cui Key Laboratory of Lunar and Deep Space Exploration, National Astronomical Observatories, Chinese Academy of Sciences, Beijing, China Chunyu Ding, Yan Su & Jun Cui State Key Laboratory of Lunar and Planetary Sciences, Space Science Institute, Macau University of Science and Technology, Macau, China Zhiyong Xiao Chinese Academy of Sciences, Center for Excellence in Comparative Planetology, Hefei, China Zhiyong Xiao & Jun Cui University of Chinese Academy of Sciences, Beijing, China Yan Su Planetary Science Institute, School of Earth Sciences, China University of Geosciences (Wuhan), Wuhan, China Jiannan Zhao Chunyu Ding Jun Cui C.D. and Z.X. proposed the topic, conceived and designed the study, and analyzed the data. Y.S, J.Z., and J.C. collaborated with the corresponding author in the construction of manuscript.
All authors read and approved the final manuscript. Correspondence to Zhiyong Xiao. Ding, C., Xiao, Z., Su, Y. et al. Compositional variations along the route of Chang'e-3 Yutu rover revealed by the lunar penetrating radar. Prog Earth Planet Sci 7, 32 (2020). https://doi.org/10.1186/s40645-020-00340-4 Regolith Loss tangent 1. Space and planetary sciences
Comparison of short-term electrical load forecasting methods for different building types Arne Groß1,2, Antonia Lenders1,3, Friedhelm Schwenker3, Daniel A. Braun3 & David Fischer4 The transformation of the energy system towards volatile renewable generation increases the need to manage decentralized flexibilities more efficiently. For this, precise forecasting of uncontrollable electrical load is key. Although there is an abundance of studies presenting innovative individual methods for load forecasting, comprehensive comparisons of popular methods are hard to come across. In this paper, eight methods for day-ahead forecasts of supermarket, school and residential electrical load on the level of individual buildings are compared. The compared algorithms came from machine learning and statistics and a median ensemble combining the individual forecasts was used. In our examination, nearly all the studied methods improved forecasting accuracy compared to the naïve seasonal benchmark approach. The forecast error could be reduced by up to 35% compared to the benchmark. From the individual methods, the neural networks achieved the best results for the school and supermarket buildings, whereas the k-nearest-neighbor regression had the lowest forecasting error for households. The median ensemble narrowly yielded a lower forecast error than all individual methods for the residential and school category and was only outperformed by a neural network for the supermarket data. However, this slight increase in performance came at the cost of a significantly increased computation time. Overall, identifying a single best method remains a challenge specific to the forecasting task. To mitigate the effects of climate change and protect the environment, Germany set a goal to increase its share of renewable energy in power generation to 80% by 2050 (Bundesministerium für Wirtschaft und Energie 2017).
However, since renewable energy generation from sources such as wind or sun is highly volatile, accurate forecasts of non-controllable electrical load are necessary to flexibly manage and achieve demand-supply balance. Load forecasting is divided into three types depending on the forecasting horizon: short-term load forecasting (STLF), which is used as a term to denote forecast horizons of up to one week ahead, medium-term load forecasting (MTLF) ranging from one week to one year ahead and long-term load forecasting (LTLF), which predicts load profiles of one year and more (Hahn et al. 2009). MTLF is necessary for fuel supply planning and maintenance and LTLF is crucial for power systems planning (Kyriakides and Polycarpou 2007). STLF is relevant for day-to-day operations of power systems such as energy trading in deregulated markets and unit dispatching or energy management on the individual building or household level (Gajowniczek and Zabkowski 2014). Another way to categorize load forecasting besides the forecasting horizon is the level of aggregation of load profiles. The discrepancy in forecasting performance can be substantial as forecasting more aggregated load profiles yields lower forecast errors (Sevlian and Rajagopal 2018). Most of the previous research on load forecasting focused on aggregated load data at, for example, city or country level (Mirowski et al. 2014; Hayes et al. 2015). However, since smart metering data becomes increasingly available, load forecasting at the level of end-users gains increasing attention (Kong et al. 2017; Shi et al. 2017). Furthermore, work on load forecasting differs in the granularity of the data used. Most previous research focused on hourly data and only 12% of papers reviewed by (Amasyali and El-Gohary 2018) used sub-hourly data. Even though higher data granularity allows for decision-making at higher frequency, two challenges arise from higher-sampled data.
First, a coarser data granularity yields smaller forecasting errors due to the smoothing of load fluctuations. Vice versa, a finer data granularity leads to a higher forecasting error. Second, a finer data granularity increases the number of data points, presenting a challenge for computationally intensive machine learning methods. Forecasting methods used in prior studies Various methods are used for forecasting tasks in the literature. The most popular methods include machine learning (ML) methods such as artificial neural networks (ANNs) and Support Vector Regression (SVR), statistical methods like ARIMA and regression models (Amasyali and El-Gohary 2018; Kuster et al. 2017). A review in (Yildiz et al. 2017) reports that linear regression models are easier to implement, use and understand compared to the 'black-box' ML methods, while the forecasting accuracy of the ML models was higher in the performed study. Even though ML methods enjoy great popularity in the time series forecasting research field, their suitability is still debated (Makridakis et al. 2018a; Hippert et al. 2001; Yildiz et al. 2017). This debate is attributed by (Hippert et al. 2001) to a rather unsystematic testing of the models, poor predictive results arising from overfitting of the networks and a lack of sufficient comparison with benchmarks. As for the type of load profile, most studies investigate demand prediction of non-residential buildings, namely 81%, compared to 19%, which explored prediction of residential building demand (Amasyali and El-Gohary 2018). For prediction of non-residential building consumption no single method was found to be superior to all others, as (Penya et al. 2011) for example found a simple auto-regressive (AR) model to be most successful, whereas (Massana et al. 2016) obtained the lowest forecasting errors with SVR. The same holds for residential buildings. In Kong et al.
(2017), the authors found LSTM to yield the lowest forecasting error for day-ahead predictions, whereas (Humeau et al. 2013) reported the linear regression to perform best at the individual household level in their comparison of linear regression, SVR and MLP. However, on the aggregated level, SVR yielded the lowest forecast error. For day-ahead forecasts of 27 households, (Lusis et al. 2017) identified the SVR to be the best performing method. Overall, for residential as well as non-residential building load profiles no consensus on a single best method for STLF could be determined from our literature research. Additionally, most studies used few methods for comparison and oftentimes only one type of data. In this paper, we aim to fill this identified gap by providing a competitive comparison study of popular load forecasting methods on a large database. We focused our work on STLF, more specifically on day-ahead forecasting of individual electrical load profiles. The database consists of three different categories including education (school load profiles), industry (supermarket load profiles) and residential data. From the literature review, seven widely popular methods from machine learning and statistics were selected for our comparative study. Furthermore, an ensemble technique was applied and a naïve seasonal model was used as a benchmark. The methods include support vector regression (SVR), multiple linear regression (LR), a simple multi-layer perceptron with one hidden layer (MLP), long short-term memory network (LSTM), random forest regression (RF), k-nearest neighbor regression (KNN) and an auto-regressive integrated moving-average model with explanatory variables (ARIMA). Additionally, a forecast is obtained by combining the individual forecasts in a median ensemble. Our study makes a comprehensive comparison of these popular STLF methods with the intention to determine the most suitable method for each type of load profile.
To summarize, the contributions of this paper are the following:
Comparison of seven different methods from ML and statistics as well as one median stacked ensemble for day-ahead forecasting of electrical load profiles
Three different datasets, including school, supermarket and residential buildings
High-resolution data with 15-minute sampling
Adaptation to a specific type of load profile through optimal feature and hyperparameter selection
Predictions on the building level as well as a comparison to the forecasting error of predicting aggregated data
After this introductory section, the paper is structured in the following way: "Methods" section introduces the seven algorithms and one benchmark method used in this study and gives a description of the experiment and the dataset, as well as an outline of the preprocessing and hyperparameter tuning steps. "Results" section presents the results and in the "Discussion" and "Conclusion" sections the key findings are put into context and summarized. In the following, the seven methods used in this paper are presented. Figure 1 shows the conceptual idea of forecasting in a supervised fashion, where the method can be any ML or statistics method capable of such a supervised approach. Conceptual graph of explanatory forecasting. On the left-hand side the explanatory variables (aka features) are depicted, which are input to a method. The method finds a function mapping from these input features to a value, which is the forecast. If not specified otherwise, the algorithms were implemented in an explanatory fashion, such that explanatory variables (aka features) were used to predict the future electrical load: $$ y_{t+1} = f(\boldsymbol{x_{t+1}}^{T}) + \epsilon_{t+1}, $$ where the electrical load one step into the future yt+1 is predicted with xt+1T, a vector of a multivariate time series at time t+1 comprising n explanatory variables (Hyndman and Athanasopoulos 2018). ε is the error.
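The explanatory setup above amounts to ordinary supervised regression: a mapping is learned from a feature vector to the load value of the same timestep, and the day-ahead forecast is obtained by evaluating it on the (known) features of the next day. A minimal sketch with ordinary least squares on synthetic data (all data and feature choices here are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quarter-hourly load with a daily pattern (96 steps per day).
t = np.arange(96 * 30)
load = 2.0 + np.sin(2 * np.pi * t / 96) + 0.1 * rng.standard_normal(t.size)

# Explanatory features for every timestep; here just a time-of-day encoding.
X = np.column_stack([
    np.ones(t.size),             # intercept
    np.sin(2 * np.pi * t / 96),
    np.cos(2 * np.pi * t / 96),
])

# Fit f on the history (ordinary least squares), predict the last day.
beta, *_ = np.linalg.lstsq(X[:-96], load[:-96], rcond=None)
day_ahead = X[-96:] @ beta
print(day_ahead.shape)  # → (96,)
```

Any of the methods compared in this paper could replace the least-squares fit here, as long as it follows the same fit/predict pattern on the feature matrix.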
To predict one whole day in the future, the multivariate time series needs to be available for this whole future time period as well. Therefore, forecasts themselves sometimes need to be utilized as inputs. The features used in this work are specified in the "Feature extraction and selection" section. Some methods are intrinsically not explanatory, such as ARIMA models, which are time series forecasting models (see Auto-Regressive integrated moving-Average model with explanatory variables). The time series regression forecasting framework is defined as: $$ y_{t+1} = f(y_{t}, y_{t-1},..., y_{t-l}, \boldsymbol{x_{t+1}}^{T}) + \epsilon_{t+1}, $$ where l is the length of the immediate history, which is used in addition to the explanatory variables xt+1T to predict one step ahead. Since the aim is to do day-ahead forecasts, predicting only yt+1 is not enough. To predict all values of one day, an iterative multi-step forecasting method was used for ARIMA as well as the LSTM. The reason for choosing an explanatory forecasting framework was findings from a preliminary experiment, where explanatory forecasting yielded more promising results compared to time series or time series regression forecasting. However, a more in-depth investigation into this topic was out of scope for this paper and is left for future work. Multiple linear regression Multiple linear regression (LR) assumes a linear relationship between independent explanatory variables x1,x2,...xn and the dependent variable y: $$ y_{t+1} = \beta_{1} x_{1,t+1} + \beta_{2} x_{2,t+1} +... + \beta_{n} x_{n,t+1} + \epsilon_{t+1}. $$ To estimate the β values, the error εt+1 was minimized on the training data using the ordinary least squares method. Auto-Regressive integrated moving-Average model with explanatory variables A very well known and often used statistical method for forecasting is the auto-regressive integrated moving-average (ARIMA) model (Mirowski et al. 2014; Gross and Galiana 1987; Yildiz et al.
2017). The ARIMA (p,d,q) model here is defined as: $$ \begin{aligned} y_{t+1}^{(d)} = \beta_{1} x_{1,t+1} +... + \beta_{k} x_{k,t+1} + \phi_{1} y_{t}^{(d)} +...\\ + \phi_{p} y_{t+1-p}^{(d)} - \theta_{1} \epsilon_{t} -... - \theta_{q} \epsilon_{t+1-q} + \epsilon_{t+1} \end{aligned} $$ where y(d) denotes the d-order difference (Hyndman and Athanasopoulos 2018, Ch.8). After the identification of the model order (the p, q and d value), maximum likelihood estimation is used to find the parameters β1,..βk,ϕ1,...ϕp,θ1,...θq. Support vector regression The idea of support vector regression, which is an extension of the support vector machine, is to find a function where each prediction y is at most ε far away from the target value (Smola and Schölkopf 2004). The support vector regression line is described by $$ y = w^{T} \phi(x) + b, $$ and the parameters wT and b are obtained from data using $$ \min_{w, b, \xi_{i}, \xi_{i}^{*}} \hspace{0.5cm} \frac{1}{2} \left\lVert w \right\Vert^{2} + C \sum_{i=1}^{N} (\xi_{i} + \xi_{i}^{*}) $$ $$\begin{array}{@{}rcl@{}} y_{i} - w^{T}\phi(x_{i}) -b \leq \epsilon + \xi_{i}^{*} \hspace{1 cm} i = 1,...,N \end{array} $$ $$\begin{array}{@{}rcl@{}} w^{T}\phi(x_{i}) - y_{i} +b \leq \epsilon + \xi_{i} \hspace{1 cm} i = 1,...,N \end{array} $$ $$\begin{array}{@{}rcl@{}} \xi_{i}, \xi_{i}^{*} \geq 0, \hspace{1cm} i = 1,...,N. \end{array} $$ where w are the weights, ϕ(·) is the transformation of the training data from feature to kernel space and b is the bias. N is the number of samples in the training set. The goal is to get a regression line which is on the one hand flat and on the other hand minimizes the prediction error. To achieve a flat regression function, one wants to minimize the norm. Deviations from the ε-tube are tolerated by the slack variables ξi and \(\xi_{i}^{*}\). The constant C represents a trade-off between the flatness of the function and how many predictions can be tolerated outside of the ε-tube.
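The ε-tube and the constant C correspond directly to the `epsilon` and `C` parameters of scikit-learn's `SVR`. A small illustration on synthetic data (kernel choice and parameter values are illustrative, not the tuned ones from this study):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] - 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)

# epsilon is the half-width of the tolerance tube; C penalizes points whose
# residual exceeds epsilon (the slack variables xi, xi*).
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

pred = model.predict(X)
inside_tube = float(np.mean(np.abs(y - pred) <= 0.1 + 1e-9))
print(f"fraction of training points inside the eps-tube: {inside_tube:.2f}")
```

Increasing C pushes more training points into the tube at the cost of a less flat function; increasing epsilon widens the tube and yields a simpler model.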
Random forest regression Random forests (RF) are an ensemble method, comprising a voting committee of n binary decision trees. Each tree is built from a randomly sampled subset of the original training data, since single decision trees are prone to overfitting on the training data. The last step is then to average the predictions of each tree to obtain the final prediction of the RF (Bishop 2006, Ch.14). K-Nearest neighbor regression In k-nearest neighbor (KNN) regression the Euclidean distance between the query feature vector and every training feature vector is computed. The labels of the k closest vectors are averaged and yield the prediction yt+1 (Ahmed et al. 2010). Multi-layer perceptron A multi-layer perceptron (MLP) is a feed-forward neural network. One can discriminate three different types of layers: the input layer, where each neuron gets one input dimension's value, the hidden layers, and the output layer. Apart from the input nodes, all neurons calculate a weighted sum of their inputs including a bias term, apply a differentiable, non-linear activation function and pass the output on to the next layer. In the training process the error between the output of the network and the real labels is minimized by propagating the error gradient back through the network and updating the weights and biases in the direction of the negative gradient, such that the overall error is decreased (Bishop 2006, Ch. 5). Long short-term memory network In Hochreiter and Schmidhuber (1997), a recurrent neural network architecture called long short-term memory (LSTM) is designed, including an input gate, an output gate and a forget gate, which dynamically regulate the flow of information. The gates can be seen as three filters, which decide what past information is relevant and make it possible to learn long-term dependencies.
Naturally, the LSTM is intended for sequences and as such the forecasting framework differs from the explanatory forecasting formulation and uses the time series regression framework defined in Eq. 2. For an in-depth description of the LSTM we refer the reader to (Hochreiter and Schmidhuber 1997). Naïve seasonal model The naïve seasonal model in this work assumes the electrical load next week is the same as the electrical load of this week. We refer the reader to (Hyndman and Athanasopoulos 2018, Ch. 3.1) for a general naïve seasonal model formulation. Many profiles are very repetitive and follow a strong weekly seasonality. This observation is exploited by the naïve seasonal method. Therefore, we expect this method to already result in reasonably good predictions, which, paired with its straightforward and simple implementation, led to our choice to use it as the benchmark method. Median ensemble In the RF method, a set of decision trees is used to determine the forecast from the resulting ensemble of individual forecasts. Using this idea, the forecasts of all individual methods presented previously are used to generate an additional forecast. This forecast is obtained by taking the median of all forecasts at every timestep over the horizon. We want to investigate whether this simple combination of predictions in a second level would lead to an increased forecasting performance compared to the individual methods. Data preprocessing and forecasting setup Three datasets were used for this study: 19 electrical load profiles of residential buildings, 20 electrical load profiles of schools and 20 electrical load profiles of supermarkets. All time series are from buildings in Germany and were collected in the scope of the Fraunhofer ISE projects synGHD and synPRO (Fischer et al. 2015). The residential and school load profiles each span a time of 18 months, whereas the supermarket load profiles span a time of 10 months.
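Both the naïve seasonal benchmark and the median ensemble described above reduce to a few lines of code. A minimal sketch for quarter-hourly data (the helper names are ours, not the study's):

```python
import numpy as np

STEPS_PER_DAY = 96              # 15-min resolution
STEPS_PER_WEEK = 7 * STEPS_PER_DAY

def naive_seasonal(history, horizon=STEPS_PER_DAY):
    """Benchmark: repeat the values observed exactly one week earlier.

    history must cover at least one full week; horizon must be shorter
    than one week.
    """
    history = np.asarray(history)
    return history[-STEPS_PER_WEEK:-STEPS_PER_WEEK + horizon]

def median_ensemble(forecasts):
    """Combine forecasts timestep-wise by the median.

    forecasts: array-like of shape (n_methods, horizon).
    """
    return np.median(np.asarray(forecasts), axis=0)

# Tiny example: three hypothetical method forecasts for a 4-step horizon.
f = [[1.0, 2.0, 3.0, 4.0],
     [1.2, 1.8, 2.9, 4.2],
     [0.9, 2.1, 3.3, 3.9]]
print(median_ensemble(f))  # → [1. 2. 3. 4.]
```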
The granularity of the data is 15 minutes. An example week for each profile is shown in Fig. 2. Each building type shows a different degree to which the load profile follows a typical pattern. Example weeks for each category to show the different properties of the electrical load profiles. To obtain the aggregated load profile of each category the time series in the respective data set were summed up. The resulting aggregated profiles exhibited fewer fluctuations and a smoother, more regular pattern compared to individual profiles of the specific categories. Especially the aggregated residential load profile showed a more recognisable seasonality in contrast to household demand on the building level. In all three data categories of individual load profiles a daily seasonality could be observed. Individual school and supermarket time series additionally exhibited a strong weekly seasonality, where the weekend (schools) or Sundays (supermarkets) had substantially lower energy demand compared to workdays. The average autocorrelation values for a lag of 96 (one day prior) of schools and supermarkets are 0.68 and 0.56, respectively, and for a lag of 672 the correlation values are 0.78 (schools) and 0.85 (supermarkets) (see Fig. 3). Average autocorrelation plots for each category. Lag 96 is one day prior and lag 672 is exactly one week prior. In the individual household load profiles the weekly seasonality was not as strong. The correlation of lag 96 (one day prior) and lag 672 (one week prior) for the residential category is only at 0.22 for both lag values, which emphasises that individual households only show a weak seasonality. However, the autocorrelation values of the aggregated residential load profile are 0.71 for 'lag 96' and 'lag 672', showing a more distinct daily and weekly seasonality compared to individual residential profiles.
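Lagged autocorrelations like the lag-96 and lag-672 values quoted above can be computed directly with pandas. Here on a synthetic quarter-hourly profile with a daily and a weekly component (the resulting numbers are illustrative, not the study's):

```python
import numpy as np
import pandas as pd

t = np.arange(96 * 7 * 8)  # eight weeks of 15-min data
load = pd.Series(
    1.0
    + 0.5 * np.sin(2 * np.pi * t / 96)     # daily cycle (96 steps)
    + 0.3 * (t % 672 < 5 * 96)             # workday/weekend step (672 steps)
    + 0.05 * np.random.default_rng(2).standard_normal(t.size)
)

print(f"lag 96  (one day prior):  {load.autocorr(lag=96):.2f}")
print(f"lag 672 (one week prior): {load.autocorr(lag=672):.2f}")
```

For a profile like this, the lag-672 value is higher than the lag-96 value because the weekday/weekend step only repeats exactly at the weekly lag.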
Overall, the individual supermarket load profiles display the most regular weekly pattern, followed by school and then the very irregular residential electrical load profiles. The individual load profiles in the supermarket category differed only slightly from each other apart from different base loads. The school load profiles showed different base loads, and the schools also varied in the course of the school day, such as when lunch breaks occurred and how long a regular school day lasted. Similarly to (Gerossier et al. 2018), we observed individual residential load profiles to exhibit substantially varying electricity demand. Data preprocessing All time series were converted to coordinated universal time. The preprocessing steps are described in the following: Missing values Only load profiles with less than six consecutive missing values were included in the datasets. If less than six consecutive values were missing, the values were replaced by linear interpolation. The outlier detection included an additive seasonal decomposition with a weekly seasonality. On the residual part of the decomposition an interquantile range (IQR) filter with a lower limit of 2% and an upper limit of 98% detected outliers. The IQR scaling parameter was 1.5 for the school and supermarket data and 3.5 for the residential category as this category has higher and more sudden fluctuations resulting in a more challenging outlier detection (Dawson 2011). The values which were identified as outliers were replaced by linear interpolation between the two neighboring values. Train/Validation/Test Split To train the methods a sliding window approach was chosen. The four months prior to the day to be predicted were used as the training set. As validation set six days were selected for 6-fold cross-validation to find the best hyperparameters and features.
These six days were chosen to include two days that were either a Saturday or a Monday, since on these days the change from work week to weekend, or vice versa, occurs. At least two normal workdays and two weekend days had to be included, and for the school category at least two days had to be school vacation days. The final test set comprised 68 days for the residential and school data and 52 days for the supermarket data as the time series in this category were shorter. The test days included two work weeks (Monday to Friday) from each season. Of these eight work weeks, at least two had to be in school vacation times; in addition, three weekends from each season and four national holidays were included. For supermarket data the number of test days is smaller since spring could not be evaluated as it was in the training data. All datasets were min-max scaled. For this, the scaler was fitted on the training set. The features of the test set were scaled for forecasting using the same scaler. The final prediction was then inversely scaled back to the original data representation. Feature extraction and selection Since we used explanatory forecasting, the features are of major importance to successful prediction. The features which were available belong to three categories: calendar information, history information and weather information. Calendar information includes features describing the seasonal components of the data and features that represent national and school holidays. The seasonalities in the data can be represented by a set of one-hot encoding vectors (aka dummy features) or by Fourier terms. Seasonality dummy encoding was used to generate two sets of vectors, one set with hour-of-the-week encoding, featuring 167 vectors, and one set with weekday and hour-of-day encoding, featuring 29 vectors. The Fourier terms were created according to (Hyndman and Athanasopoulos 2018, Ch. 5.4).
Fourier terms allow reducing the number of features for seasonality compared to seasonal dummy features, depending on the number k of sine and cosine pairs used. The Fourier terms were used to generate features including multiple seasonalities, for example daily, weekly and annual seasonality. Altogether six Fourier term feature sets were extracted. A Fourier feature set with k=10 for weekly as well as daily seasonality was selected most often in the feature selection process. History information included the electrical load value one week prior and the electrical load value one day prior. Weather information was obtained from the European Centre for Medium-Range Weather Forecasts according to the location of the load profiles. The weather information includes temperature values and global solar irradiation. Both values could only be obtained hourly, but were upsampled to a quarter-hourly resolution to fit the time series data. Since not only single features were extracted, but feature sets such as the seasonal dummy features or Fourier terms, filter methods and embedded methods for feature selection were found to be unfeasible (Chandrashekar and Sahin 2014). However, the wrapper method can be applied with feature sets and therefore a wrapper method was utilized, namely sequential forward selection. From each dataset category two time series were selected for which a sequential forward selection was used to choose the best features for each method and category individually. The feature selection was limited to use only one feature set describing seasonality (so either one-hot encoding type or Fourier type) and stopped as soon as the forecasting performance of the method did not improve more than 0.1% compared to the last added features. For schools the dummy features for school vacation were set as mandatory beforehand to enforce this information in the final feature set.
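A Fourier feature set with k sine/cosine pairs per seasonality, in the spirit of (Hyndman and Athanasopoulos 2018, Ch. 5.4), can be generated as follows (our own helper, not the study's code):

```python
import numpy as np

def fourier_terms(n_steps, period, k):
    """Return k sine/cosine pairs for one seasonality, shape (n_steps, 2k)."""
    t = np.arange(n_steps)
    cols = []
    for j in range(1, k + 1):
        cols.append(np.sin(2 * np.pi * j * t / period))
        cols.append(np.cos(2 * np.pi * j * t / period))
    return np.column_stack(cols)

# k=10 pairs for daily (96 steps) and weekly (672 steps) seasonality, as in
# the feature set most often selected in the study: 40 columns in total.
X_season = np.hstack([fourier_terms(672, 96, 10), fourier_terms(672, 672, 10)])
print(X_season.shape)  # → (672, 40)
```

Compared to the 167 hour-of-the-week dummy vectors, these 40 columns encode the same two seasonalities much more compactly.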
The features found for the two time series of each category were compared and combined for each method yielding the set of final features used for the hyperparameter tuning (HPT) and forecast of the test set. The features for the median ensemble comprise the predictions of the individual methods. Hyperparameter tuning Hyperparameter tuning was done with one time series from each category, but for every algorithm individually. We used Bayesian optimization to find the optimal hyperparameters for each algorithm (Snoek et al. 2012). The objective function was to minimize the mean root-mean-square error (RMSE) of the six validation days, which were chosen for cross-validation. The LSTM was restricted to use a sequence length of up to one day only in order to limit computational efforts. For the MLP only a single hidden layer was used to evaluate the forecasting performance of the simplest feed-forward neural network. Evaluation criteria To compare the results of the final forecast the normalized root-mean-square error (NRMSE) was calculated for all 68/52 days of every time series and averaged. This was done for every method. Therefore, every method in each category has an averaged NRMSE over 68/52 times 20/19 values. The normalized root mean square error is defined as: $$ \text{NRMSE} = \frac{\sqrt{\frac{\sum_{t=1}^{T} (\hat{y_{t}} - y_{t})^{2}}{T}}}{\bar{y}}, $$ where yt is a measured sample at time t, \(\hat {y_{t}}\) is the prediction at time t, T is the number of samples and \(\bar {y}\) is the mean of all observations y. All forecasting measures suffer from specific drawbacks (Shcherbakov et al. 2013) as quantifying the quality of a forecast is not straightforward (Hyndman and Koehler 2006). In this case, the NRMSE is prone to the influence of large outliers. Furthermore, the time in seconds for every algorithm was measured.
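The NRMSE above is simply the RMSE divided by the mean of the observations; a direct implementation:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the mean observation."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return rmse / y_true.mean()

y = [2.0, 4.0, 6.0]       # mean = 4
yhat = [3.0, 4.0, 5.0]    # errors of ±1 -> RMSE = sqrt(2/3)
print(round(nrmse(y, yhat), 3))  # → 0.204
```

Because the squared errors are averaged before the root is taken, a single large outlier dominates the score, which is the sensitivity noted above.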
The times are divided into fit and predict operations, where fit describes the process of building the model and fitting the training data, and predict refers to the step of predicting one day ahead. After the preprocessing steps, the individual feature selection for every method in every category and the individual tuning of hyperparameters, the final forecast was conducted. For the final forecast the selected test days were predicted for each time series in all categories and with all methods. For these test days the forecasting performance was evaluated with the error measure from Evaluation criteria. Additionally, the final forecast was performed for the aggregated load profile in each category. The same hyperparameters and features that were found on the individual building level were used for the forecast at the aggregate level.

Building level results

The forecasting accuracy of the different categories varied greatly. The mean NRMSE of the best methods ranges from 0.18 for the supermarkets to 1.01 for the residential buildings. The mean NRMSE for the residential dataset ranged from 1.01 to 1.41 for the different algorithms (see Fig. 4, top left). All methods in the residential category have a lower mean NRMSE than the benchmark naïve seasonal method. The best performing method is the KNN (1.01 mean NRMSE) with an improvement of 28.8% compared to the benchmark method in terms of mean NRMSE. A two-tailed paired Student's t-test between the distributions of the NRMSE over all buildings for the KNN method and the benchmark resulted in a p-value of 2.68·10^-10. The observed greater forecasting performance of KNN is therefore significant.

Fig. 4: Mean NRMSE of each method for the test days (upper left: residential category, upper right: school category, supermarket category in the bottom row).

For the school category the neural networks had the lowest NRMSE, followed by SVR and RF.
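The reported improvements over the benchmark follow directly from the mean NRMSE values. A sketch of the arithmetic for the residential KNN (the small deviation from the reported 28.8% presumably comes from rounding of the published NRMSE values):

```python
def improvement(nrmse_method, nrmse_benchmark):
    """Relative improvement of a method over the benchmark, in percent of mean NRMSE."""
    return 100.0 * (nrmse_benchmark - nrmse_method) / nrmse_benchmark

# Residential category: KNN (1.01) vs. the naive seasonal benchmark (1.41)
res_knn = improvement(1.01, 1.41)  # roughly 28.4% with the rounded values
```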
The simple one-layer MLP led to the lowest forecasting error with 0.28, compared to the mean NRMSE of 0.44 of the worst performing method, the multiple linear regression. This is the only category where the naïve seasonal method did not consistently show the highest NRMSE compared to the other methods. However, this most likely stems from a disadvantageous selection of features. The improvement of the best performing MLP compared to the benchmark method is 23.5% in terms of mean NRMSE. The difference between the MLP and the naïve seasonal method was found to be significant by a paired Student's t-test (p=1.08·10^-5). As in the school category, the simple MLP resulted in the lowest forecasting errors for the supermarkets, with a mean NRMSE of 0.18. The naïve seasonal method again performed worst (mean NRMSE of 0.28), which corresponds to an improvement of 35.2% in terms of mean NRMSE. As with the residential and school categories, the difference between the MLP method and the benchmark was significant according to a paired Student's t-test (p=2.58·10^-8). We also found that the individual buildings in one category varied in the magnitude of the forecasting error. For households KNN achieved NRMSEs of 0.66 and 1.56 for two different buildings; the same behaviour can be observed for schools (MLP; NRMSE of 0.155 vs. 0.43) and supermarkets (MLP; NRMSE of 0.12 vs. 0.32). This is the reason why the standard deviations in Fig. 4 are quite large for all methods. However, the relative order of performance of the individual methods did not change much between the households. Another reason for the high standard deviation becomes apparent when breaking down the forecasting accuracy by day type. The supermarket and school datasets exhibited a higher NRMSE for national holidays compared to every other day type (weekdays, Saturdays, Sundays and school holidays). In future work this could be counteracted by treating national holidays like Sundays in the feature representation.
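The significance tests above compare the per-building errors of a method against the matched errors of the benchmark. A sketch of the paired t statistic with stdlib Python (in practice a library routine such as scipy.stats.ttest_rel would also supply the p-value; the example data in the test are invented):

```python
import math

def paired_t_statistic(errors_a, errors_b):
    """t statistic of a two-tailed paired Student's t-test on matched error samples
    (e.g. per-building NRMSE of one method vs. the benchmark)."""
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of the differences
    return mean / math.sqrt(var / n)  # compared against a t distribution with n-1 dof
```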
The results of creating a median ensemble (M-Ens) from the predictions of all other individual methods can also be seen in Fig. 4. For the residential and school categories the ensemble resulted in the lowest mean NRMSE (0.996 and 0.276, respectively), and for the supermarket dataset (0.184) only the MLP has a lower mean NRMSE. However, especially for the residential time series it is noteworthy that the four to five methods with the lowest forecasting errors are very close together.

Aggregate level results

The results of the day-ahead forecast of the aggregated load profiles yielded smaller forecasting errors compared to the forecasts on the individual building level (see Table 1). The residential category had the largest improvement from the aggregation of time series, with 73.6% in terms of mean NRMSE for the KNN, whereas schools had an improvement of 42.95% compared to the individual building level for the MLP. Day-ahead forecasts of supermarkets on the aggregated level improved by 53% compared to the individual building level in terms of mean NRMSE for the MLP.

Table 1: The mean NRMSE of all methods for the individual building level and for the aggregate level.

Computational cost results

In addition to forecasting performance, one should keep in mind the computational and time costs required by a method in real-life applications. The computational and hence time resources varied greatly (see Table 2). LSTM and ARIMA took the most time and resources, which in some applications could be unsuitable to the task or require custom hardware. The multiple linear regression needed the least amount of time, followed by KNN.

Table 2: Times in seconds for the fit and predict steps of each method for all categories.

The ordering of forecasting accuracy by building category follows the extent to which the electric load profiles follow a repeating weekly pattern (cf. Fig. 3). Other studies (e.g. Makridakis et al.
(2018a)) suggest that decomposing the data into trend, seasonality and residuals prior to the forecasting procedure may improve forecasting performance, depending on the specific model and the data. However, we chose to evaluate the forecast methods with minimal preprocessing to facilitate application of the methods. The seasonality in the data can then easily be represented by the temporal features, and the seasonal patterns can be learned by the forecasting method. It therefore makes sense that with an explanatory forecasting method the error is smallest for the most regularly repeating load profiles. Overall, the supermarket time series show the strongest weekly seasonality, followed by schools and then residences with only weak weekly seasonality, which is reflected in the magnitude of the forecasting errors. Here, especially the neural networks and the LSTM performed well, methods which are often successfully applied to detect signals in data afflicted with noise. As discussed, a regular seasonal pattern can be identified as such a signal. The load profiles of individual households did not follow a seasonal pattern to the same extent. Furthermore, the residential data exhibited rapidly changing and unique fluctuations of load profiles on the building level. For the class of load profiles with this characteristic, LSTM and MLP were not among the best performing methods. Instead, KNN performed best, but was closely followed by SVR, ARIMA and LR. Since the forecasting errors are close together, this provides no clear indication which method is truly the best for households, although from a computational resources perspective LR was fastest. As the fluctuations were smoothed out, the residential category showed the highest increase in forecast quality through aggregation of several data sets. Note that the accuracy of the MLP is close to the best performing method on the more regular aggregated profiles.
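The median ensemble discussed above combines the forecasts of the individual methods pointwise; a minimal sketch (the example forecasts are invented):

```python
import statistics

def median_ensemble(method_forecasts):
    """Pointwise median over the forecasts of the individual methods.

    method_forecasts: one forecast series per method, all of equal length.
    """
    return [statistics.median(values) for values in zip(*method_forecasts)]

# Three hypothetical method forecasts for four time steps:
ensemble = median_ensemble([[1.0, 2.0, 3.0, 4.0],
                            [1.2, 1.8, 3.5, 3.9],
                            [0.9, 2.1, 2.8, 4.2]])
```

Each ensemble value discards the most extreme individual predictions, which explains the observed stabilizing effect, at the cost of having to generate every individual forecast first.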
Overall, the forecasting errors of the supermarket and school categories are substantially smaller than those of the residential category, emphasising how challenging the task of residential load prediction on the individual building level is. Generally, these observations are in line with other studies. SVR was found to perform well by Massana et al. (2016) for non-residential profiles. In Kong et al. (2017), the authors found that LSTM performs well for residential load profiles. In contrast, LSTM performed particularly weakly on residential data in our study. However, the profiles used in Kong et al. (2017) were restricted to households with an electric heating system showing a strong daily pattern. This study was restricted to using forecast algorithms on the unprocessed data without deseasonalizing or detrending, which reduces the effort for the practitioner. This implies that a load forecast method should be selected based on the degree to which the profile to be predicted follows a seasonal pattern. However, for all three load profile categories SVR was among the three best performing methods with reasonably small computational costs. The ensemble did not, in contrast to expectation (e.g. in Makridakis et al. (2018b)), lead to a considerably better forecasting performance and only resulted in minor improvements of forecasting accuracy. We observed a stabilizing effect due to the ensemble, albeit at a high time and computational effort, as all individual predictions are necessary to create it. Of the individual methods, LSTM and ARIMA took the most time and resources, which in some applications could be unsuitable to the task or require custom hardware. The mismatch in time required by ARIMA in the supermarket category compared to the other two categories can be ascribed to ARIMA using no MA terms in the supermarket case.
Both the ARIMA and the LSTM methods used a time series forecasting approach, and since our data has a high resolution, including a long immediate history increases the computational load significantly. In most cases, the slight decrease in forecast error of these methods or of the median ensemble will not justify the much higher computational time compared with more basic ML methods such as MLP or SVR. However, setting up such a more basic ML model can lead to up to 30% improvement in forecast accuracy at reasonable computational cost compared to a naïve seasonal approach. The hyperparameter tuning and the feature selection were conducted on one and two time series, respectively, to minimize computational effort. For this, we selected time series in the medium range of annual consumption compared to the other buildings in the category. We assumed the chosen building load profiles would be representative of their category. This is a limitation of the study, as especially the residential category could potentially profit from clustering the residential buildings and finding hyperparameters and features for each cluster individually.

This paper gives a comprehensive comparison of popular methods for day-ahead forecasting on individual school, supermarket and residential load profiles. All methods are compared against a naïve seasonal benchmark method. Especially for the residential load, forecasting on the consumer level is challenging compared to the forecasting problem on aggregated data. However, forecasts on the individual building level are crucial due to the increasing integration of volatile renewable energy generation. We found that all methods, apart from the LR in the school category, outperformed the benchmark method. Furthermore, the different load profile categories were predictable according to the regularity of their patterns. The neural networks, especially the MLP, worked best for school and supermarket data.
Even though the KNN yielded the smallest forecasting error for households, the forecasting errors of the first four methods were so close together that it is difficult to pick one best performing method on forecasting error alone. For all datasets the SVR performed well and has reasonable computational cost. The median ensemble narrowly led to the best forecasting performance for the residential and school load profiles and was only slightly outperformed by the MLP method for the supermarket data. However, the computational effort is significantly larger, as all individual forecasts must be generated for the ensemble. We conclude that investing the extra time and computational cost for setting up a learned model compared to the benchmark method is justified, as the learned method can achieve a better prediction by up to 30% less error in terms of the NRMSE. Some ideas for future work are clustering the buildings prior to HPT and feature selection, other ways of pre-processing and a more in-depth investigation of the different forecasting frameworks. The addition of meta data and occupancy behaviour is worth exploring in the future as well.

Data is not published due to legal restrictions but will be made available on request.

Footnotes:
1. https://www.ise.fraunhofer.de/de/forschungsprojekte/synghd.html
2. https://www.elink.tools/elink-tools/synpro
3. https://www.ecmwf.int/

References

Ahmed, NK, Atiya AF, Gayar NE, El-Shishiny H (2010) An empirical comparison of machine learning models for time series forecasting. Econ Rev 29(5-6):594–621.
Amasyali, K, El-Gohary NM (2018) A review of data-driven building energy consumption prediction studies. Renew Sust Energ Rev 81:1192–1205.
Bishop, CM (2006) Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, Berlin.
Bundesministerium für Wirtschaft und Energie, BMWI (2017) Das Erneuerbare-Energien-Gesetz. https://www.erneuerbare-energien.de/EE/Redaktion/DE/Dossier/eeg.html. Accessed: 19 Feb 2020.
Chandrashekar, G, Sahin F (2014) A survey on feature selection methods. Comput Electr Eng 40(1):16–28.
Dawson, R (2011) How significant is a boxplot outlier? J Stat Educ 19(2).
Fischer, D, Härtl A, Wille-Haussmann B (2015) Model for electric load profiles with high time resolution for German households. Energy Build 92:170–179.
Gajowniczek, K, Zabkowski T (2014) Short term electricity forecasting using individual smart meter data. Procedia Comput Sci 35:589–597.
Gerossier, A, Girard R, Bocquet A, Kariniotakis G (2018) Robust day-ahead forecasting of household electricity demand and operational challenges. Energies 11(12):3503.
Gross, G, Galiana FD (1987) Short-term load forecasting. Proc IEEE 75(12):1558–1573.
Hahn, H, Meyer-Nieberg S, Pickl S (2009) Electric load forecasting methods: Tools for decision making. Eur J Oper Res 199(3):902–907.
Hayes, B, Gruber J, Prodanovic M (2015) Short-term load forecasting at the local level using smart meter data. In: 2015 IEEE Eindhoven PowerTech, 1–6. IEEE, New York.
Hippert, HS, Pedreira CE, Souza RC (2001) Neural networks for short-term load forecasting: A review and evaluation. IEEE Trans Power Syst 16(1):44–55.
Hochreiter, S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780.
Humeau, S, Wijaya TK, Vasirani M, Aberer K (2013) Electricity load forecasting for residential customers: Exploiting aggregation and correlation between households. In: 2013 Sustainable Internet and ICT for Sustainability (SustainIT), 1–6. IEEE, New York.
Hyndman, RJ, Athanasopoulos G (2018) Forecasting: Principles and Practice. OTexts, Monash University, Australia.
Hyndman, RJ, Koehler AB (2006) Another look at measures of forecast accuracy. Int J Forecast 22(4):679–688.
Kong, W, Dong ZY, Jia Y, Hill DJ, Xu Y, Zhang Y (2017) Short-term residential load forecasting based on LSTM recurrent neural network. IEEE Trans Smart Grid 10(1):841–851.
Kuster, C, Rezgui Y, Mourshed M (2017) Electrical load forecasting models: A critical systematic review. Sustain Cities Soc 35:257–270.
Kyriakides, E, Polycarpou M (2007) Short term electric load forecasting: A tutorial. In: Chen K, Wang L (eds) Trends in Neural Computation, 391–418. Springer, Berlin.
Lusis, P, Khalilpour KR, Andrew L, Liebman A (2017) Short-term residential load forecasting: Impact of calendar effects and forecast granularity. Appl Energy 205:654–669.
Makridakis, S, Spiliotis E, Assimakopoulos V (2018a) Statistical and machine learning forecasting methods: Concerns and ways forward. PLoS ONE 13(3):1–26.
Makridakis, S, Spiliotis E, Assimakopoulos V (2018b) The M4 Competition: Results, findings, conclusion and way forward. Int J Forecast 34(4):802–808. https://doi.org/10.1016/j.ijforecast.2018.06.001.
Massana, J, Pous C, Burgas L, Melendez J, Colomer J (2016) Short-term load forecasting for non-residential buildings contrasting artificial occupancy attributes. Energy Build 130:519–531.
Mirowski, P, Chen S, Ho TK, Yu C-N (2014) Demand forecasting in smart grids. Bell Labs Tech J 18(4):135–158.
Penya, YK, Borges CE, Fernández I (2011) Short-term load forecasting in non-residential buildings. In: IEEE Africon'11, 1–6. IEEE.
Sevlian, R, Rajagopal R (2018) A scaling law for short term load forecasting on varying levels of aggregation. Int J Electr Power Energy Syst 98:350–361.
Shcherbakov, MV, Brebels A, Shcherbakova NL, Tyukov AP, Janovsky TA, Kamaev VA (2013) A survey of forecast error measures. World Appl Sci J 24(24):171–176.
Shi, H, Xu M, Li R (2017) Deep learning for household load forecasting: a novel pooling deep RNN. IEEE Trans Smart Grid 9(5):5271–5280.
Smola, AJ, Schölkopf B (2004) A tutorial on support vector regression. Stat Comput 14(3):199–222.
Snoek, J, Larochelle H, Adams RP (2012) Practical Bayesian optimization of machine learning algorithms. In: Advances in Neural Information Processing Systems, 2951–2959.
Curran Associates, Inc., Red Hook.
Yildiz, B, Bilbao JI, Sproul AB (2017) A review and analysis of regression and machine learning models on commercial building electricity load forecasting. Renew Sust Energ Rev 73:1104–1122.

This research was supported by the German Ministry for Economic Affairs and Energy (BMWi) via SynGHD (03ET7534A). Publication funding was provided by the German Federal Ministry for Economic Affairs and Energy.

Author affiliations:
Fraunhofer Institute for Solar Energy Systems ISE, Freiburg, 79110, Germany: Arne Groß & Antonia Lenders
IMTEK, Faculty of Engineering, University of Freiburg, Freiburg, 79110, Germany: Arne Groß
Ulm University, Institute of Neural Information Processing, Ulm, 89081, Germany: Antonia Lenders, Friedhelm Schwenker & Daniel A. Braun
greenventory GmbH, Freiburg, 79108, Germany: Antonia Lenders

Author contributions: AG supervised the design and implementation of experiments and discussed the experimental results. AL planned, implemented and carried out the experiments. AG and AL wrote the paper. DF provided the data, directed the project and supervised the process. FS was regularly consulted and supervised the design and progress of the experiments. DAB supervised and contributed to result analysis. All authors read and approved the final manuscript.

Correspondence to Arne Groß.

Groß, A., Lenders, A., Schwenker, F. et al. Comparison of short-term electrical load forecasting methods for different building types. Energy Inform 4, 13 (2021). https://doi.org/10.1186/s42162-021-00172-6
The One-way FSI Method Based on RANS-FEM for the Open Water Test of a Marine Propeller at the Different Advance Coefficient
Mobin Masoomi, Amir Mosavi
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: Fluid-Structure Interaction; OpenFOAM; One-way approach; Structural Analysis

This study addressed the fluid-structure interaction of an open water test for the VP1304 propeller to predict pressure and stress distributions with a low-cost and high-precision method. The most striking aspect of such a method (one-way coupling) is that one hydrodynamic solution can be used for a number of different structural sets involving other materials or different layup methods and combinations of layers. The open-access software OpenFOAM with an open-source solver is used to simulate the fluid domain. Abaqus is used to evaluate and predict the deformation and strength of the blade with the finite element method (FEM). The coupling approach is based on the dry condition, which means the added-mass effect due to the vibration of the propeller blades is neglected. The pressures imposed on the blades are extracted from the fluid solver for each time step. These pressures then serve as the load condition for the structure solver. This approach was verified in a previous paper (wedge impact); a key factor for the present solution is the rotational rate interrelated between the two solution domains, which is explained in this paper. Finally, the blades' stress and strain are calculated and compared at each advance coefficient.

The Complexity of Mathematics
Frank Vega
Subject: Mathematics & Computer Science, Computational Mathematics
Keywords: complexity classes; regular languages; reduction; number theory; one-way; primes

In mathematics, the Riemann hypothesis is a conjecture that the Riemann zeta function has its zeros only at the negative even integers and complex numbers with real part 1/2. Many consider it to be the most important unsolved problem in pure mathematics.
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution. We prove the Riemann hypothesis using complexity theory. Number theory is a branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions. Goldbach's conjecture is one of the most important unsolved problems in number theory. Nowadays, it is one of the open problems of Hilbert and Landau. We show that Goldbach's conjecture is true using complexity theory as well. An important complexity class is 1NSPACE(S(n)) for some S(n). These mathematical proofs are based on the fact that if some unary language belongs to 1NSPACE(S(log n)), then the binary version of that language belongs to 1NSPACE(S(n)) and vice versa.

P versus NP
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: complexity classes; combinatorial optimization; polynomial time; reduction; logarithmic space; one-way

$P$ versus $NP$ is considered one of the most important open problems in computer science. This consists in knowing the answer to the following question: Is $P$ equal to $NP$? The precise statement of the $P$ versus $NP$ problem was introduced independently by Stephen Cook and Leonid Levin. Since that date, all efforts to find a proof for this problem have failed. Another major complexity class is $\textit{P-Sel}$, the class of decision problems for which there is a polynomial time algorithm (called a selector) with the following property: whenever it is given two instances, a $``yes"$ and a $``no"$ instance, the algorithm can always decide which is the $``yes"$ instance. It is known that if $NP$ is contained in $\textit{P-Sel}$, then $P = NP$. We consider the problem of computing the sum of the weighted densities of states of a Boolean formula in $3CNF$.
Given a Boolean formula $\phi$ with $m$ clauses, the density of states $n(E)$ for some integer $0 \leq E \leq m$ counts the number of truth assignments that leave exactly $E$ clauses unsatisfied in $\phi$. The weighted density of states $m(E)$ is equal to $E \times n(E)$. The sum of the weighted densities of states of a Boolean formula in $3CNF$ with $m$ clauses is equal to $\sum_{E = 0}^{m} m(E)$. We prove that we can calculate the sum of the weighted densities of states in polynomial time. Given two Boolean formulas $\phi_{1}$ and $\phi_{2}$ in $3CNF$ with $n$ variables and $m$ clauses, the combinatorial optimization problem $\textit{SELECTOR-3SAT}$ consists in selecting the formula which is satisfiable, where every clause from $\phi_{1}$ and $\phi_{2}$ can be unsatisfied for some truth assignment. We assume that the formula that is satisfiable has the minimum sum of the weighted densities of states. In this way, we solve $\textit{SELECTOR-3SAT}$ with an exact polynomial time algorithm. We claim this problem is a selector of $3SAT$ and thus, $P = NP$.

Study on the Preferred Application-Oriented Index for Mental Fatigue Detection
Tianhong Duan, Nong Zhang, Kaiway Li, Xuelin Hou, Jun Pei
Subject: Behavioral Sciences, Cognitive & Experimental Psychology
Keywords: mental fatigue; one-way ANOVA; digital decoding testing; relative fatigue index (RFI); sensitivity ordering

Most research on mental fatigue evaluation has concentrated on indexes that require sophisticated and large instruments, which makes the detection of mental fatigue cumbersome, time-consuming, and difficult to apply on a large scale. A quick and sensitive mental fatigue detection index is necessary so that mental workers can be alerted in time and take corresponding countermeasures. But to date, no studies have compared the sensitivity of common objective evaluation indexes. To solve these problems, this study recruited 56 human subjects.
These subjects were evaluated using six fatigue indexes: the Stanford sleepiness scale, digital span, digital decoding, short-term memory, critical flicker fusion frequency (CFF), and speed perception deviation. The results of fatigue tests before and after mental fatigue were compared, and a one-way analysis of variance (ANOVA) was performed on the speed perception deviation. The result indicated the significance of this index. Considering individual differences, the relative fatigue index (RFI) was proposed to compare the sensitivity of the indexes. The results showed that when the self-rated fatigue grade changed from non-fatigue to mild fatigue, the ranges of RFI values for digital span, digital decoding, short-term memory and CFF were 0.175–0.258, 0.194–0.316, 0.068–0.139, and 0.055–0.075, respectively. Correspondingly, when the self-rated fatigue grade changed from non-fatigue to severe fatigue, the ranges of RFI values for the above indexes were 0.175–0.258, 0.194–0.316, 0.068–0.139, and 0.055–0.075, respectively. These results suggest that the sensitivity of digital decoding, digital span, short-term memory, and CFF decreased sequentially when the self-evaluated fatigue grade changed from no fatigue to mild or severe fatigue. The RFI of the speed perception deviation is highly variable between individuals, so it is not suitable as an evaluation index. In mental fatigue testing, digital decoding testing can provide faster, more convenient, and more accurate results.

The Original Method of Deriving Transformations for Kinematics with a Universal Reference System
Roman Szostek
Subject: Physical Sciences, General & Theoretical Physics
Keywords: kinematics; universal frame of reference; coordinate and time transformation; one-way speed of light; summing speed; relative speed

The article presents the original derivation method of transformations for kinematics with a universal reference system.
This method allows deriving transformations that meet the results of the Michelson-Morley and Kennedy-Thorndike experiments only in some frame of reference, e.g. in laboratories moving in relation to a universal frame of reference with small speeds. The obtained transformations are the basis for the derivation of a new physical theory, which has been called the Special Theory of Ether. The generalized transformations can be expressed by relative speeds (26)-(27) or by the parameter δ(v) (37)-(38). Based on the conclusions of the Michelson-Morley and Kennedy-Thorndike experiments, the parameter δ(v) was determined. This allows the transformations to take a special form (81)-(82), which is consistent with experiments in which the velocity of light is measured. On the basis of the obtained transformations, the formulas for summing speed and relative speed were also determined. The entire article includes only original research conducted by its author.

Derivation of All Linear Transformations that Meet the Results of Michelson-Morley's Experiment and Discussion of the Relativity Basics
Subject: Physical Sciences, General & Theoretical Physics
Keywords: coordinate and time transformation; kinematics; universal frame of reference; one-way speed of light; anisotropy of cosmic microwave background

The article presents formal proof that the Special Theory of Relativity is wrong, that is, the interpretation of the mathematics on which STR is based, proposed by Einstein, is incorrect. The article shows that there are infinitely many kinematics in which the one-way speed of light is always equal to c. The kinematics of the Special Theory of Relativity (STR) is only one of those infinitely many kinematics. It presents that the mathematics on which STR kinematics is based can be interpreted differently, and this leads to other conclusions on the properties of this kinematics. In this article, the whole class of linear transformations of time and coordinates was derived.
Transformations were derived on the assumption that the conclusions from the Michelson-Morley and Kennedy-Thorndike experiments are met for the observer from each inertial frame of reference, i.e. that the mean velocity of light in vacuum flowing along the way back and forth is constant. It was also assumed that there is at least one inertial frame of reference in which the velocity of light in vacuum in each direction has the same value c, and the space is isotropic for observers from this distinguished inertial frame of reference (universal frame of reference). The derived transformations allow building many different kinematics consistent with the Michelson-Morley and Kennedy-Thorndike experiments. The class of transformations derived in the study is a generalization of the transformations derived in the paper [10], which consists in enabling non-zero values of the parameter e(v). The idea of such a generalization derives from the person who gave me this extended class of transformations for analysis and publication.

Factors Influencing Citizen Satisfaction in Getting Public Service (Case Study: The Service User of the Investment and One Stop Service Agency of Tanah Bumbu Regency in 2018)
Muhammad Iqbal, Indriani Mahbubah
Subject: Social Sciences, Political Science
Keywords: citizen satisfaction; public service; one-stop service; one-stop service agency

The present study explained factors influencing citizen satisfaction with service in the Investment and One-Stop Service Agency of Tanah Bumbu Regency. In particular, this research analyses the level of citizen satisfaction and the extent to which the Awareness, Rules, Organizational, Income, Skill-Ability, and Service Facility factors influence Citizen Satisfaction. This study uses a mixed methodology with a sequential explanatory strategy. Using incidental sampling with Slovin's formula, the calculated number of samples is 93 respondents. The quantitative data were analyzed with the SmartPLS 3.0 program.
The findings showed that the level of citizen satisfaction falls into the category "Satisfied". Furthermore, the variable of Citizen Satisfaction is influenced by the Awareness, Rules, Organizational, Income, Skill-Ability and Service Facility factors by 70.5%. The Awareness, Rules, Organizational, and Skill-Ability factors have a significant influence on Citizen Satisfaction, whereas the Income and Service Facility factors do not.

Preprint CASE REPORT | doi:10.20944/preprints202212.0279.v1
A Model Approach to Achieving SDGs: A Case Study from Dayalbagh, India
Apurva Narayan, Pami Dua, Ashita Swarup Allamraju
Subject: Earth Sciences, Environmental Sciences
Keywords: SDG; Dayalbagh Way of life; Agroecology; Sustainable Agriculture

The multiple crises that the world is facing (climate change, COVID-19 and war) have halted or reversed the progress of the world towards the achievement of the Sustainable Development Goals. Using a case study of Dayalbagh, a locality in metropolitan Agra, India, and headquarters of the Radhasoami faith, we examine the potential benefits of employing agroecology to achieve the United Nations Sustainable Development Goals (SDGs). The active, disciplined and cooperative community-based lifestyle followed in Dayalbagh, with a strong focus on agriculture and service, demonstrates how most of the SDGs can be achieved. It offers lessons for policy makers in terms of focus areas for policy support and reaching the last, lowest, least and the lost.

A One Health Perspective to Recognize Fusarium and Neocosmospora as Important in Clinical Practice
Valeri Sáenz, Carlos Alvarez-Moreno, Patrice Le Pape, Silvia Restrepo, Josep Guarro, Adriana Marcela Celis Ramírez
Subject: Life Sciences, Other
Keywords: Fusarium; Neocosmospora; One Health

A strategy to propose solutions to health-related problems recognizes that people, animals, and the environment are interconnected.
Fusarium and Neocosmospora exemplify this interaction because they are capable of infecting plants, animals, and humans. This review provides information on various aspects of these relations and proposes how to approach fusariosis with a One Health methodology. We give a framework for understanding infection pathogenesis through the epidemiological triad and explain how the broad use of fungicides in agriculture may play a role in the treatment of human fusariosis. We assess how plumbing systems and hospital environments might serve as reservoirs for animal and human infections, and we explain the role of antifungal resistance mechanisms in both human medicine and agriculture. Our review emphasizes the importance of developing interdisciplinary research in which the interactions among aquatic animals, plants, and human disease can be explored through coordinated and collaborative actions.

Similarities between Hurricanes and Galaxies: A Short Review Jim Henry, Mesut Yurukcu, George Nnanna Subject: Keywords: Universe; galaxies; Milky Way Galaxy; spiral galaxies; hurricanes; pattern.

The Universe was created according to the fundamental laws of science. Nature is lazy: it forms with the least possible effort. Natural patterns, such as those of pinecones, sunflowers, pineapples, and cacti, have a double spiral structure. If we look at these plants' centers, we see the seeds line up in spiral shapes, and the number of spirals whirling in each direction gives us the Fibonacci numbers. Many more examples of these natural patterns could be given; however, one is unique and remarkable: the similarity between spiral galaxies, such as the Milky Way, and hurricanes. Are they similar in every property, or just in shape and rotational movement? What are the similarities between them? This short review tries to answer these questions by surveying the literature. The first part of the article gives some background on hurricanes and galaxies.
The second part focuses on the comparison between hurricanes and galaxies. Finally, we conclude the article with our remarks.

Two-Way Coupling Fluid-Structure Interaction (FSI) Approach to Inertial Focusing Dynamics Under Dean Flow Patterns in Asymmetric Serpentines Eric Pedrol, Jaume Massons, Francesc Díaz, Magdalena Aguiló Subject: Physical Sciences, Fluids & Plasmas Keywords: inertial focusing; dean flow; serpentine; fsi; two-way coupling

The dynamics of a spherical particle in an asymmetric serpentine is studied by finite element method (FEM) simulations in a physically unconstrained system. The two-way coupled, time-dependent solutions illustrate the path of the particle along a curve where a secondary flow (Dean flow) has developed. The simulated conditions were adjusted to match those of an experiment in which particles were focused under inertial focusing conditions. The obtained rotational modes allowed us to infer the influence of the local flow around the particle. We propose a new approach to finding the decoupled secondary flow contribution employing a quasi-Stokes flow.

One-bath One-step Dyeing of Polyester/Cotton (PC) Blends Fabric With Disperse Dyes After Acetylation of Cotton Girmaye Kumsa Bana, Gemeda Gebino Gelebo, Gezu Ketema Janka Subject: Engineering, Automotive Engineering Keywords: Polyester/cotton blend; One-bath one-step dyeing; surface modification; Acetylation; Disperse dye

Usually, a two-bath dyeing process, using disperse dyes and reactive dyes separately, is employed for the dyeing of PC blends. The cost, dyeing cycle, energy consumption and chemical consumption of the double bath are considerably higher than those of one-step, single-bath dyeing methods. In this study, a one-bath dyeing process using one kind of dye was investigated: polyester/cotton blends were dyed in a one-bath, one-step process with disperse dye after surface modification of the cotton by acetylation.
Surface modification of the cotton was carried out using fibrous acetylation methods, and the effect of acetic anhydride concentration and time on percent acetyl content at room temperature was studied. Dyeing of the modified polyester/cotton was carried out in an HTHP dyeing machine at different dye concentrations and dyeing temperatures. The surface chemistry, thermal decomposition behaviour and moisture regain of the modified polyester/cotton blend were evaluated, as were the color strength of the dyed fabrics and their fastness to washing, light, and rubbing, together with tear strength and abrasion resistance. The effects of dye concentration and temperature on color strength and on tensile strength in the warp and weft directions were assessed. The optimum surface modification was obtained with an acetylating agent concentration of 16% and a reaction time of 2.5 hours, giving a percent acetylation of 34%. The FTIR spectrum confirmed the acetylation. The dyeing experiments showed that the optimum was obtained with a dye concentration above 1% at a temperature of 120 °C; warp tensile strength decreased by 12% and weft tensile strength by 9% relative to the control half-bleached fabric. The results of this study show that a modified polyester/cotton blend fabric dyed with disperse dye in a one-step, one-bath process presents good fastness properties and colour strength values compared with conventionally two-bath dyed fabric.

Alternative Paradigms in Animal Health Decisions: A Framework for Thinking beyond Money Liz Paola Noguera Z., Sonja Hartnack, Paul R. Torgerson Subject: Medicine & Pharmacology, Other Keywords: One Health; zoonosis; animal health

Zoonoses are diseases transmitted from (vertebrate) animals to humans. Control and prevention of these diseases require an appropriate way to measure health for prudent and well-balanced decisions in public health.
We propose a framework that aims to explore, understand and open up a conversation about the non-monetary value of animals through environmental and normative ethics. As an example of its application, participants choose what they would be willing to give in exchange for curing an animal in hypothetical scenarios, selecting as tradeoffs a human health condition to suffer, an amount of money, and lifetime. We believe that considering animals beyond their monetary value in public health decisions will contribute to a more rigorous assessment of the burden of zoonotic diseases, among other health decisions. This method might help complement existing health metrics, adding more comprehensive values for animal and human health within the "One Health" approach.

Untangling Cosmic Magnetic Fields: Faraday Tomography at Metre Wavelengths with LOFAR Shane O'Sullivan, Marcus Brüggen, Cameron Van Eck, Martin Hardcastle, Marijke Haverkorn, Timothy Shimwell, Cyril Tasse, Valentina Vacca, Cathy Horellou, George Heald Subject: Physical Sciences, Astronomy & Astrophysics Keywords: magnetic fields; Faraday tomography; large-scale structure; AGN; Milky Way

The technique of Faraday tomography is a key tool for the study of magnetised plasmas in the new era of broadband radio polarisation observations. In particular, observations at metre wavelengths provide significantly better Faraday depth accuracies than traditional cm-wavelength observations. However, the effect of Faraday depolarisation makes the polarised signal very challenging to detect at metre wavelengths (MHz frequencies). In this work, Faraday tomography is used to characterise the Faraday rotation properties of polarised sources found in data from the LOFAR Two-Metre Sky Survey (LoTSS). Of the 76 extragalactic polarised sources analysed here, we find that all host a radio-loud AGN.
The majority of the sources (∼64%) are large FRII radio galaxies, with a median projected linear size of 710 kpc and a median radio luminosity at 144 MHz of 4 × 10^26 W/Hz. In several cases, both hotspots are detected in polarisation at an angular resolution of ∼20". One such case allowed a study of intergalactic magnetic fields on scales of 3.4 Mpc. Other detected source types include an FRI radio galaxy and at least 8 blazars. Most sources display simple Faraday spectra; however, we highlight one blazar that displays a complex Faraday spectrum, with two close peaks in the Faraday dispersion function.

Effect of One Belt One Road Initiative (OBORI) Policy on the International Spread of Chinese Brands Karamoko K.E.H. N'da, Jiaoju Ge, Ren Jifan, Jia Wang Subject: Behavioral Sciences, Other Keywords: International Online Shopping; One Belt One Road Initiative; Chinese Brands; Brand Preference; International Online Consumers

Since its advent, the OBORI has been the subject of numerous studies. However, most previous studies investigated only the potential impact of the OBORI on the Chinese economy and geopolitics, so its real effect on Chinese international commerce in OBORI countries has not yet been evaluated. Accordingly, this study models the OBORI's effect on purchases of Chinese product brands across member countries. The assessment is based on 18,362 purchases by International Online Consumers (IOCs) on a Chinese international online selling platform; the data were obtained using a programming language and the Octopus software. The OBORI policy's effect on purchases of Chinese brands was examined with a Difference-in-Differences Model (DIDM). The results show that the impact of the OBORI is weak in the real market, but that it could be significant if the OBORI included more developed and economically strong countries.
For Chinese brands and policymakers, we show how including developed countries in the OBORI project could contribute more to purchases of Chinese product brands. The study thus enables decision-makers to understand the current impact of the OBORI on the real market and its potential effect if more developed and economically strong countries were included.

Emerging Roles of Urine-Derived Components for the Management of Bladder Cancer: One Man's Trash is Another Man's Treasure Sarah Minkler, Fabrice Lucien, Michael Kimber, Agnes Bourgois-Mochel, Margareth Musser, Chad Johannes, Igor Frank, John Cheville, Karin Allenspach, Jonathan Mochel Subject: Life Sciences, Biochemistry Keywords: Bladder Cancer; Exosomes; Organoids; One Health

Urinary bladder cancer (UBC) is the most common malignancy of the urinary tract in humans, with an estimated global prevalence of 1.1 million cases over 5 years. Due to high rates of recurrence and resistance to chemotherapy, UBC is one of the most expensive cancers to treat, resulting in significant health care costs. There is, therefore, a critical need to develop innovative molecular and cellular tools to refine patient stratification and help predict response to treatment. Urine is an underused resource of biological components shed from bladder tumors, such as exfoliated cells and extracellular vesicles, that could serve as molecular fingerprints and provide valuable biological insights into tumor phenotype and mechanisms of resistance to chemotherapy. Additionally, characterization of urine-derived extracellular vesicles and cells could provide reliable biomarkers for predicting response to neoadjuvant therapy.
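The Difference-in-Differences design used in the OBORI study above can be sketched in a few lines. This is a minimal illustration of the classic 2x2 estimator, not the study's model; the group means below are made-up illustrative numbers, not the study's data.

```python
# Minimal sketch of a 2x2 Difference-in-Differences (DiD) estimator:
# the change in the treated group minus the change in the control group,
# which nets out the common time trend.

def did_estimate(treated_before, treated_after, control_before, control_after):
    """Classic 2x2 DiD: (treated change) - (control change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_after) - mean(treated_before)) - (
        mean(control_after) - mean(control_before)
    )

# Hypothetical average brand purchases per consumer, pre- and post-policy.
effect = did_estimate(
    treated_before=[10, 12, 11],   # member countries, pre-policy
    treated_after=[15, 16, 17],    # member countries, post-policy
    control_before=[9, 10, 11],    # non-member countries, pre-policy
    control_after=[10, 11, 12],    # non-member countries, post-policy
)
print(effect)  # -> 4.0
```

In practice the DIDM is usually estimated as a regression with group, period, and interaction terms plus controls; the interaction coefficient equals this difference of differences in the 2x2 case.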
One Health, Fermented Foods and Gut Microbiota Victoria Bell, Jorge Ferrão, Lígia Pimentel, Manuela Pintado, Tito Fernandes Subject: Biology, Ecology Keywords: One Health, fermented foods, microbiota, nutrition

The microbiome is presently one of the hottest areas of scientific and medical research and exerts a marked influence on the host during homeostasis and disease. Fermented foods arose from the human relationship with the microbial environment. Beyond the traditionally recognized effects of fermented foods and beverages on digestive health and well-being, there is now strong evidence of their general health benefits, namely their significance for the gut microbiota and brain function. We highlight the possibilities in this field, how little is still known, and call for a convergence of the interdisciplinary research fields of One Health microbe-nutrition with fermented foods and gut-brain research. Changes in present-day society, a consequence of civilisation (diets with more sugar, fat and salt, and changed habits and lifestyles), contribute to the likelihood of an inflammatory microbiome, reflected particularly in the global epidemics of obesity and poor mental health. Although two recent papers claim that probiotics perturb, rather than aid, microbiota recovery back to baseline after antibiotic administration in humans, consuming fermented foods has been shown to reduce inflammation and so improve gut health and the proper function of the body's immune system.
UAV-assisted WRSN Online Charging Strategy Based on Dynamic Queue and Improved K-means Tianle Shan, Yang Wang, Chuanxin Zhao, Rongyu Zou, Yanzhe Sun Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Rechargeable Sensor Network; Unmanned Aerial Vehicle; One-to-one Charging; Space-time Collaboration; Optimal Charging Trajectory

To address the low charging efficiency caused by scattered sensor nodes in traditional wireless rechargeable sensor networks (WRSNs), a UAV-assisted WRSN Online Charging Strategy Based on Dynamic Queue and Improved K-means (UOCS) is proposed. The scheme assumes that the energy consumption of nodes is unpredictable: a node generates a charging request only when its energy falls below a threshold, and nodes that issue charging requests are served on demand. The scheme combines the one-to-one charging characteristic of UAVs, the selection and allocation timing of the waiting queues and of the number of UAVs, and an improved K-means partitioning based on space-time coordination (SPKM), which simplifies the problem of coordinated charging by multiple UAVs and maximizes energy efficiency and charging success rate; the optimal charging trajectory is found under the constraint that no node dies of energy starvation. Finally, a simulation experiment comparing UOCS with existing UAV charging scheduling strategies shows that UOCS achieves the best node survival rate with low algorithm complexity.

What Determines Spontaneous Physical Activity in Patients with Parkinson's Disease? Agnieszka Gorzkowska, Joanna Cholewa, Andrzej Malecki, Aleksandra Klimkowicz, Jarosław Cholewa Subject: Medicine & Pharmacology, Clinical Neurology Keywords: Parkinson's disease; physical activity; sedentary way; non-motor symptoms; apathy; dopaminergic therapy

Physical activity (PA) is a factor that may influence the symptoms of Parkinson's disease (PD).
The aim of this study was to identify potential determinants of spontaneous PA in a group of PD patients. 134 PD patients aged 65.2±9.2 years, with Hoehn-Yahr scale ≤ 4 and Mini Mental State Examination (MMSE) ≥ 24, were examined. For the study purposes, the authors analyzed age, sex, education, history of PD, dopaminergic treatment, and the severity of PD symptoms using the Unified Parkinson's Disease Rating Scale (UPDRS) and the Hoehn-Yahr scale. Additionally, all participants were evaluated with a set of scales for specific neuropsychiatric symptoms, including depression, anxiety, apathy, fatigue and sleep disorders. Linear regression with backward elimination was used. In the total explanatory model, which accounted for 12% of the variability in activity (R²=0.125; F(16,133)=2.185; p<0.01), the significant predictor was starting therapy with a dopamine agonist (DA) (β=0.420; t=4.068; p<0.001), which was associated with a longer duration of moderate PA. In the total explanatory model for time spent sitting, which accounted for more than 13% of the variance (R²=0.135; F(16,130)=2.267; p<0.01), the significant predictors were secondary education and the UPDRS results. Patients with secondary or vocational education, those starting treatment with a DA, and those with less severe Parkinson's symptoms (UPDRS) spent less time sitting each day. It is thus possible to identify determinants of spontaneous PA, which may have consequences for influencing the modifiable conditions of PA and for the proper approach to patients with unmodifiable PA factors.

Synthesis and in vitro Antibacterial Activity of Quaternization 10-Methoxycanthin-6-one Derivatives Na Li, Jiang-Kun Dai, Dan Liu, Jin-Yi Wang, Jun-Ru Wang Subject: Chemistry, Medicinal Chemistry Keywords: 10-methoxycanthin-6-one; quaternization; antibacterial; SARs

Natural products are an important source of antibacterial agents, and canthin-6-one alkaloids have displayed potential antibacterial activity in our previous work.
To improve this activity, twenty-two new 3-N-benzylated 10-methoxycanthin-6-ones were designed and synthesized through a quaternization reaction. Their in vitro antibacterial activity against three bacteria was evaluated by the double dilution method. Four compounds (6f, 6i, 6p and 6t) displayed 2-fold superiority (minimum inhibitory concentration (MIC) = 3.91 µg/mL) over the agrochemical propineb against the agricultural pathogenic bacteria R. solanacearum and P. syringae. Moreover, the structure–activity relationships (SARs) were carefully summarized to guide the development of antibacterial canthin-6-one agents.

Heterogeneous Dynamic Correlation Research Among Industrial Structure Distortion, Two-way FDI, and Carbon Emission Intensity Jiansheng You, Guohan Ding, Liyuan Zhang Subject: Social Sciences, Economics Keywords: two-way FDI; structural distortion; ecological civilization construction; spatial econometrics; carbon emission intensity

In this paper, industrial structure distortion, two-way FDI and carbon emission intensity are brought into a unified research framework, and, based on China's panel data from 2011 to 2020, empirical tests are conducted employing Exploratory Spatial Data Analysis (ESDA), a spatial econometric model and an intermediary effect test. The results are as follows. Firstly, China's industrial structure distortion index shows a downward trend; it is highest in the west, followed by the middle, and lowest in the east. Secondly, the relationship between carbon emission intensity and economic development shows a "decoupling" effect, and carbon emission intensity keeps decreasing year by year. The spatial disparity is remarkable, showing a pattern of "the east leading, the middle catching up and the west lagging". At the provincial level, the carbon emission intensity of every province except Xinjiang declined to varying degrees.
In terms of spatial distribution, the polarization of carbon emission intensity is significant, and the traditional spatial distribution pattern has been broken. Thirdly, there is a positive spatial correlation between China's industrial structure distortion, two-way FDI and carbon emission intensity. Industrial structure distortion not only increases local carbon emission intensity but also produces reverse spillover to adjacent areas. IFDI and OFDI provide a strong driving force for the decline of carbon emission intensity: IFDI promotes the decline of carbon emission intensity in adjacent areas, while OFDI increases the carbon emission intensity in surrounding areas; the interaction of IFDI and OFDI can significantly reduce the carbon emission intensity of both local and adjacent areas. Fourthly, the intermediary effect analysis shows that IFDI and OFDI are two channels through which industrial structure distortion affects carbon emission intensity; that is, industrial structure distortion affects carbon emission intensity by affecting two-way FDI.

Design of ethylene-vinyl acetate copolymer fiber with two-way shape memory effect Xiaoming Qi, Wentong Yang, Laiming Yu, Wenjun Wang, Haohao Lu, Yanglong Wu, Shanwen Zhu, Yaofeng Zhu, Xiangdong Liu, Yubing Dong, Yaqin Fu Subject: Materials Science, Polymers & Plastics Keywords: shape memory polymer fiber; two-way shape memory effect; melt spinning; UV curing

One-dimensional shape memory polymer fibers (SMPFs) have obvious advantages in mechanical properties, dispersion properties and weavability. In this work, a method for fabricating semi-crystalline ethylene-vinyl acetate copolymer (EVA) fiber with a two-way shape memory effect by melt spinning and ultraviolet (UV) curing was developed, and the effect of crosslink density on fiber performance was systematically analyzed by gel fraction measurement, tensile tests, DSC and TMA analysis.
The results showed that the crosslink density and shape memory properties of the EVA fiber could be easily adjusted by controlling the UV curing time. The resulting EVA fiber, with a cylindrical structure, had a diameter of 247.13 ± 10.07 μm, and its mechanical strength and elongation at break were 64.46 MPa and 114.33%, respectively. The critical impact of the crosslink density and of the applied constant stress on the two-way shape memory effect was analyzed. Moreover, a single EVA fiber could lift more than 143 times its own weight and achieve 9% reversible actuation strain, and its reversible actuation capability was significantly enhanced by a simple winding design, which offers great potential for applications in smart textiles, flexible actuators and artificial muscles.

Socio-economic Factors Affecting Fertilization Sustainability in Bangladesh: Effects of Traditional Way of Fertilization and Rental Land Farming K. M. Atikur Rahman, Dunfu Zhang Subject: Social Sciences, Sociology Keywords: excessive fertilization; agro environment; rental land; traditional way; younger farmers; environmental consciousness

The study focuses on how socio-economic and demographic indicators affect fertilization sustainability (excessive fertilization). Principally, we aim to examine the magnitude and significance of the effects of three socio-demographic variables, the traditional way of fertilization, rental land farming, and farmers' younger age, on over-fertilization in Bangladesh and other developing countries. In the 1960s, the Bangladesh state authority launched a campaign, 'Grow more Food', to feed its huge population, providing farmers with chemical fertilizers and pesticides at subsidized low prices. Farmers began to use huge amounts of fertilizers to gain high yields and have continued to the present, causing considerable environmental harm.
We interviewed 210 Bangladeshi farmers in 2016 (face-to-face, focus group discussion, and phone interview) using a semi-structured questionnaire. Data were analyzed using a General Linear Model (GLM) in a univariate analysis of variance. The study found that the effect of the traditional way of fertilization on excessive fertilization is strongly significant, at the 1% level. In addition, rental land farming and farmers' younger age have significant influences on over-fertilization, though at weaker significance levels (5% and 10%, respectively). Policy makers can formulate fertilizer policy on the basis of these findings.

Drivers of Antibiotic Use in Semi-intensive Poultry Farms: Evidence From a Survey in Senegal Eve Emes, Adiouma Faye, Nichola Naylor, Dagim Belay, Babacar Ngom, Awa Gueye Fall, Gwenan Knight, Michel Dione Subject: Life Sciences, Other Keywords: antimicrobial resistance; antimicrobial stewardship; One Health; agriculture; biosecurity

Antimicrobial resistance (AMR), the capacity of microbial pathogens to survive in the presence of antimicrobials, is considered one of the greatest threats to human health worldwide and is growing rapidly in importance. AMR is thought to be driven in part by antimicrobial use (AMU) in livestock production. Reducing AMU in agriculture is therefore important, but doing so may endanger farmers' incomes and hamper broader food security. Understanding the drivers of farmers' antibiotic use is essential to designing interventions that avoid harming agricultural output and safeguard farmers' economic security. In this study, we analyse AMUSE survey data from poultry farmers in Senegal to explore the effects of vaccination, attitudes towards AMR, and biosecurity practices on AMU, animal mortality, and farm productivity. We found that farmers with more "AMR-aware" attitudes may be less likely to use antibiotics in healthy birds.
Stronger on-farm biosecurity was associated with less use of antibiotics in healthy birds and, in some specifications, was linked to higher broiler productivity. Vaccination and AMU were both linked with higher disease prevalence, and both factors appeared conducive to higher broiler productivity. Overall, there is evidence that awareness-raising and biosecurity improvements could encourage prudent use of antibiotics, and that biosecurity and vaccination could to some extent replace antibiotic use as productivity-enhancing and disease-management tools on broiler farms. Finally, issues of farm antimicrobial stewardship must be considered at the structural level, with farm behaviours contingent on interaction with state and private stakeholders.

A Quantitative Framework for Evaluating the Societal Impact of Antimicrobial Use Reduction in Agriculture Eve Tresco Emes Subject: Life Sciences, Other Keywords: AMR; agriculture; One Health; health economics; policy; modelling

Antimicrobial resistance (AMR) is an increasingly pressing threat to human, animal, and environmental health. Reducing the use of antibiotics in agriculture has been identified as a key way to curb the spread of AMR. However, the effects of such policies on AMR prevalence, and their broader impacts on agricultural, health, and economic outcomes at the population level, have proven very difficult to estimate and compare. This paper draws on and formalises ideas presented at the JPIAMR New Perspectives on Bacterial Drug Resistance workshop in June 2022. With reference to the emerging literature on the topic, it proposes a quantitative framework for estimating the causal relationships needed to quantify the cross-sectoral impacts of AMR policies in agriculture, and for comparing these outcomes in like terms in a way which can feed directly into policy decision-making, notably without prohibitive data requirements.
The ability of researchers to apply frameworks such as this will be increasingly necessary in order to capture the impacts of AMR policies holistically and to situate them in the broader policy context, especially where the mechanisms of transmission are opaque or complex, where data availability is limited, and where policymakers must allocate scarce resources among many competing objectives.

One Step Mechanosynthesis of Graphene Oxide Directly from Graphite Victor Ibarra, Demetrio Mendoza, Alma Sanchez, Rosa Vazquez, Karina Aleman, Marius Ramirez, Victor Castano Subject: Physical Sciences, Applied Physics Keywords: graphene; graphene oxide; mechanochemistry; solvent-free; one-step

Graphene oxide was synthesized by a one-step, environmentally friendly mechanochemical process directly from graphite and characterized by Raman, FT-IR and UV/vis spectroscopies, atomic force microscopy, X-ray diffraction, scanning electron microscopy, energy-dispersive X-ray spectroscopy and thermogravimetric analysis. The spectroscopic analysis shows that the functional groups and oxygen content of the synthesized material are comparable with those of graphene oxide synthesized by previously reported methods (Hummers). Thermogravimetric analysis reveals thermal stability up to 400 °C.

CAR T-Cell Immunotherapy in Human and Veterinary Oncology: Changing the Odds against Hematological Malignancies Jonathan P Mochel, Stephen C Ekker, Chad M Johannes, Albert E Jergens, Karin Allenspach, Agnes Bourgois-Mochel, Michael Knouse, Sebastien Benzekry, Wesley Wierson, Amy K LeBlanc, Saad S Kenderian Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: immuno-oncology; CAR T-cell; lymphoma; one health

The advent of the genome editing era brings forth the promise of adoptive cell transfer using engineered chimeric antigen receptor (CAR) T-cells for targeted cancer therapy.
CAR T-cell immunotherapy is probably one of the most encouraging developments for the treatment of hematological malignancies. In 2017, two CAR T-cell therapies were approved by the U.S. Food and Drug Administration: one for the treatment of pediatric Acute Lymphoblastic Leukemia (ALL), the other for adult patients with advanced lymphomas. However, despite significant progress in the area, CAR T-cell therapy is still in its early days and faces significant challenges, including the complexity and costs associated with the technology. B-cell lymphoma is the most common hematopoietic cancer in dogs, with an incidence approaching 0.1% and a total of 20-100 cases per 100,000 individuals, and it is a widely accepted naturally occurring model for human non-Hodgkin's lymphoma. Current treatment is with combination chemotherapy protocols, which prolong life for less than a year in canines and are associated with severe dose-limiting side effects, such as gastrointestinal and bone marrow toxicity. To date, one canine study generated CAR T-cells by transfection of mRNA for CAR domain expression. While this was shown to provide transient anti-tumor activity, the results were modest, indicating that stable genomic integration of CAR modules is required in order to achieve lasting therapeutic benefit. This Commentary summarizes the current state of knowledge on CAR T-cell immunotherapy in human medicine and its potential applications in animal health, while discussing the potential of the canine model as a translational system for immuno-oncology research.
One-Pot Synthesis of Coumarins Unsubstituted on the Pyranic Nucleus Catalysed by a Wells–Dawson Heteropolyacid (H6P2W18O62) Yu-Feng Sun, Jia-Meng Liu, Jing Sun, Ya-Tao Huang, Jia Lu, Min-Min Li, Nuo Jin, Xiao-Feng Dai, Bei Fan Subject: Chemistry, Organic Chemistry Keywords: coumarin; one-pot synthesis; catalysis; Wells–Dawson heteropolyacid

A method is described for producing coumarins unsubstituted on the pyranic nucleus from phenol derivatives and ethyl 3,3-diethoxypropionate by Pechmann condensation under solvent-free conditions, catalyzed by a Wells–Dawson heteropolyacid (H6P2W18O62). The catalytic method was also applied successfully to synthesize various substituted coumarins from the corresponding phenols and ethyl 3,3-diethoxypropionate. This work provides a novel, cheaper and safer way to synthesize coumarins unsubstituted on the pyranic nucleus.

Can a Priori Unknown Values of Biomechanical Parameters Be Determined with Sufficient Accuracy in MBS Using Sensitivity Analysis? Analyzing the Interaction Characteristics between Cervical Vertebra and Pedicle Screw Ivanna Kramer, Sabine Bauer Subject: Engineering, Mechanical Engineering Keywords: multibody simulation; multi-way sensitivity analysis; spinal implant anchor screw; stiffness and damping parameters

Finite element (FE) modeling is commonly used to investigate the influence of medical devices, such as implants and screws, on the biomechanical behavior of the spine. Another simulation method is multi-body simulation (MBS), where the model is composed of several non-deformable bodies. MBS solvers generally require a very short computing time for dynamic tasks compared to an FE analysis. Considering this computational advantage, in this study we examine whether parameters whose values are not known a priori can be determined with sufficient accuracy using an MBS model.
Therefore, we propose a many-at-a-time sensitivity analysis method that approximates these a priori unknown parameters without requiring long simulation times. The method enables a high degree of MBS model optimization to be achieved in an iterative process. The sensitivity analysis is applied to a simplified screw-vertebra model, consisting of an anterior anchor implant screw and the vertebral body of C4. An experiment described in the literature is used as a basis for developing and assessing the potential of the method and for validating the model's behavior. The optimal model parameters for the MBS model were determined to be c = 823224 N/m for stiffness and d = 488 Ns/m for damping. The presented parameter identification method can be used in studies involving more complex MBS spine models, or to set initial values for parameters that are not otherwise available for FE models.

Preprint HYPOTHESIS | doi:10.20944/preprints202012.0680.v1 Is Dark Matter Light? Lorne Hofstetter Subject: Physical Sciences, Astronomy & Astrophysics Keywords: dark matter; Milky Way dark matter halo; galaxy kinematics; rotation curves; cosmology; dark energy

Elucidating the nature of dark matter in galactic systems remains one of the important unsolved mysteries of modern cosmology. As a thought experiment, we consider a galaxy model in which the light radiating outward from stellar objects produces a gravitational effect larger than Einstein's theory of gravity predicts. Using computer simulations, we observe that this assumption allows the basic rotation curve profiles observed in both dwarf and late-type spiral galaxies to be recreated. Notably, a separate mass model describing the dark matter halo is neither needed nor used. This toy model may also lead to insights about the nature of dark energy.
If the gravitational effects in the universe are currently dominated by radiated light, as this toy model may suggest, the cosmic scale factor would be closely linked to the time-history and spatial distribution of star formation and death rates. An accelerating universe may simply be a manifestation of star death rates exceeding star formation rates in the current epoch. Scoping Review of National Antimicrobial Stewardship Activities in Eight African Countries and Adaptable Recommendations Nduta Kamere, Sandra Tafadzwa Garwe, Oluwatosin Olugbenga Akinwotu, Chloe Zabrina Tuck, Eva M. Krockow, Sara Yadav, Agbaje Ganiyu Olawale, Ayobami Hassan Diyaolu, Derick Munkombwe, Eric Muringu, Eva Prosper Muro, Felix Kaminyoghe, Hameedat Taiye Ayotunde, Love Omoniyei, Mashood Oluku Lawal, Shuwary Hughric Adekule Barlatt, Tumaini J. Makole, Winnie Nambatya, Yvonne Esseku, Victoria Rutter, Diane Ashiru-Oredope Subject: Medicine & Pharmacology, Other Keywords: Antibiotic Resistance; CwPAMS; National Action Plans; Pharmacy; One Health Antimicrobial resistance (AMR) is a global health problem threatening safe, effective healthcare delivery in all countries and settings. The ability of microorganisms to become resistant to the effects of antimicrobials is an inevitable evolutionary process. The misuse and overuse of antimicrobial agents have increased the importance of a global focus on antimicrobial stewardship (AMS). This review provides insight into the current AMS landscape and identifies contemporary actors and initiatives related to AMS projects in eight African countries (Ghana, Kenya, Malawi, Nigeria, Sierra Leone, Tanzania, Uganda, and Zambia), which form a network of countries participating in the Commonwealth Partnerships for Antimicrobial Stewardship (CwPAMS) programme.
We focus on common themes across the eight countries, including the current status of AMR, infection prevention and control, AMR implementation strategies, AMS, antimicrobial surveillance, antimicrobial use, antimicrobial consumption surveillance, a One Health approach, digital health, pre-service and in-service AMR & AMS training, access to and supply of medicines, and the impact of COVID-19. Recommendations suitable for adaptation are presented, including the development of a national AMS strategy and incorporation of AMS in pharmacists' and other healthcare professionals' curricula for pre-service and in-service training. Concentrations of Ciprofloxacin in the World's Rivers Are Associated With the Prevalence of Fluoroquinolone Resistance in E. coli: A Global Ecological Analysis Subject: Medicine & Pharmacology, Other Keywords: Rivers; one-health; E. coli; fluoroquinolones; antimicrobial resistance; AMR Extremely low concentrations of ciprofloxacin may select for antimicrobial resistance. A recent global survey found that ciprofloxacin concentrations exceeded safe levels at 64 sites. We assessed whether national median ciprofloxacin concentrations in rivers were associated with fluoroquinolone resistance in Escherichia coli. Methods: Spearman's regression was used to assess the country-level association between the national prevalence of fluoroquinolone resistance in E. coli and the median ciprofloxacin concentration in each country's rivers. Results: The prevalence of fluoroquinolone resistance in E. coli was positively correlated with the concentration of ciprofloxacin in rivers (ρ=0.36; P=0.011; N=48). Conclusions: Steps to reduce the concentrations of fluoroquinolones in rivers may help prevent the emergence of resistance in E. coli and other bacterial species.
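The country-level statistic used in the ecological analysis above is Spearman's rank correlation. As a minimal illustration of how that coefficient is computed, the pure-Python sketch below ranks both variables (averaging tied ranks) and takes the Pearson correlation of the ranks; the toy inputs in the usage line are ours, not the study's data (the study reports ρ = 0.36 over N = 48 countries).

```python
def rank(values):
    """Average 1-based ranks of `values`, assigning tied items the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly concordant toy data give rho = 1.0
rho = spearman([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0])
```

In practice one would use `scipy.stats.spearmanr`, which also returns the p-value reported in the abstract; the hand-rolled version is only meant to show what the statistic measures.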
Gene Editing and Gene Therapy: Entering Uncharted Territory in Veterinary Oncology Wesley Wierson, Alex Abel, Elizabeth Siegler, Stephen Ekker, Chad Johannes, Saad Kenderian, Jonathan Mochel Subject: Medicine & Pharmacology, Oncology & Oncogenics Keywords: Gene Editing; Gene Therapy; Oncology; Comparative Medicine; One Health With rapid advances in gene editing and gene therapy technologies, the development of genetic, cell, or protein-based cures to disease is no longer the realm of science fiction but that of today's practice. The impact of these technologies is rapidly bringing them to the veterinary market, both as enhanced therapeutics and for modeling their outcomes for translational application. Simply put, gene editing enables scientists to modify an organism's DNA a priori through the use of site-specific DNA targeting tools like clustered regularly interspaced short palindromic repeats and CRISPR-associated protein 9 (CRISPR/Cas9). Gene therapy is a broader term that encompasses the addition of exogenous genetic material into specific cells to correct a genetic defect. More precisely, the U.S. Food and Drug Administration (FDA) defines gene therapy as "a technique that modifies a person's genes to treat or cure disease" by either (i) replacing a disease-causing gene with a healthy copy of the gene; (ii) inactivating a disease-causing gene that was not functioning properly; or (iii) introducing a new or modified gene into the body to help treat a disease. In some instances, this can be accomplished through direct transfer of DNA or RNA into target cells of interest or more broadly through gene editing. While gene therapy is possible through the simple addition of genetic information into cells of interest, gene editing allows the genome to be reprogrammed intentionally through the deletion of diseased alleles, reconstitution of wild-type sequence, or targeted integration of exogenous DNA to impart new function.
Cells can be removed from the body, altered, and reinfused, or edited in vivo. Indeed, manufacturing and production efficiencies in gene editing and gene therapy in the 21st century have brought the therapeutic potential of in vitro and in vivo reprogrammed cells to the front lines of therapeutic intervention (Brooks et al., 2016). For example, CAR-T cell therapy is revolutionizing hematologic cancer care in humans and is being translated to canines by us and others, and gene therapy trials are ongoing for mitral valve disease in dogs. Anomalous Diffusion With an Apparently Negative Diffusion Coefficient in a One-Dimensional Quantum Molecular Chain Model Sho Nakade, Kazuki Kanki, Satoshi Tanaka, Tomio Petrosky Subject: Physical Sciences, Acoustics Keywords: anomalous diffusion; one dimensional quantum system; irreversibility vs. reversibility An interesting anomaly of the diffusion process, with an apparently negative diffusion coefficient defined through the mean-square displacement, in a one-dimensional quantum molecular chain model is shown. Nevertheless, the system satisfies the H-theorem, so the second law of thermodynamics is satisfied. The "diffusion constant" becomes negative due to the effect of the phase mixing process, which is a characteristic result of the one-dimensionality of the system. We illustrate the situation where this negative "diffusion constant" appears.
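The "diffusion constant" discussed in the anomalous-diffusion abstract above is defined through the mean-square displacement (MSD); for normal one-dimensional diffusion, MSD(t) ≈ 2Dt, so a shrinking MSD reads as an apparently negative D. The sketch below, with toy trajectories of our own invention, shows how an ensemble-averaged MSD is computed.

```python
def msd(trajectories, lag):
    """Ensemble-averaged squared displacement at time index `lag`
    for a list of 1-D position trajectories (equal-length lists)."""
    displacements = [(traj[lag] - traj[0]) ** 2 for traj in trajectories]
    return sum(displacements) / len(displacements)

# Two symmetric toy trajectories: the mean position stays 0,
# but the MSD grows with the lag.
example = msd([[0.0, 1.0, 2.0], [0.0, -1.0, -2.0]], lag=2)
```

A decreasing `msd` over successive lags is the signature the paper interprets as an apparently negative diffusion coefficient.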
Molecular Identification of a Tentatively Novel Hantavirus in Malaysian Bronze Tube-Nosed Bat (Murina aenea) Brigitta Zana, Gabor Kemenesi, Dora Buzas, Gabor Csorba, Tamas Gorfol, Faisal Ali Anwarali Khan, Nurul Farah Diyana Ahmad Tahir, Safia Zeghbib, Monika Madai, Henrietta Papp, Fanni Foldes, Peter Urban, Robert Herczeg, Gabor Endre Toth, Ferenc Jakab Subject: Life Sciences, Virology Keywords: Mulu mobatvirus; MinION; Tb1-Lu; Mobatvirus; one health concept In the past ten years several novel hantaviruses were discovered in shrews, moles and bats, suggesting the dispersal of hantaviruses across many animal taxa other than rodents during their evolution. Interestingly, the co-evolutionary analyses of most recent studies have raised the possibility that non-rodents may have served as the primordial mammalian hosts and harboured the ancestors of rodent-borne hantaviruses as well. The aim of our study was to investigate the presence of hantaviruses in bat lung tissue homogenates originally collected for taxonomic purposes in Malaysia in 2015. Hantavirus-specific nested RT-PCR screening of 116 samples, targeting the L segment of the virus, revealed the positivity of two lung tissue homogenates originating from the Murina aenea bat species. Nanopore sequencing of hantavirus-positive samples resulted in partial genomic data from the S, M and L genome segments. The obtained results provide the first molecular evidence for a hantavirus in Murina aenea and the first discovery of a hantavirus in Murina bat species. Sequence analysis of the PCR amplicon and partial genome segments suggests the identified virus may represent a novel species in the genus Mobatvirus within the family Hantaviridae. Furthermore, our results provide additional genomic data to help extend our knowledge about the evolution of these viruses.
Evidence of Antimicrobial Resistance in Bats and Its Planetary Health Impact for Surveillance of Zoonotic Spillover Events: A Scoping Review Popy Devnath, Nabil Karah, Jay P Graham, Elizabeth Rose, Muhammad Asaduzzaman Subject: Medicine & Pharmacology, Veterinary Medicine Keywords: Antimicrobial resistance (AMR); Bats; Zoonotic spillover; Planetary health; One health As a result of the COVID-19 pandemic, as well as other outbreaks such as SARS and Ebola, bats are recognized as a critical species for mediating zoonotic infectious disease spillover events. While there is growing concern about increased antimicrobial resistance (AMR) globally during this pandemic, knowledge of AMR circulating between bats and humans is limited. In this paper, we have reviewed the evidence of AMR in bats and discussed the planetary health aspect of AMR to elucidate how this is associated with the emergence, spread and persistence of antibiotic resistance at the human-animal interface. The presence of clinically significant resistant bacteria in bats and wildlife has a broad impact on zoonotic pandemic surveillance, disease transmission and treatment modalities. We searched MEDLINE through PubMed and Google Scholar to retrieve relevant studies (n=38) that provided data on resistant bacteria in bats through September 30, 2022. There is substantial variability in the results from studies measuring the prevalence of AMR based on geographic location, bat types and time. We found all major groups of gram-positive and gram-negative bacteria in bats which are resistant to commonly used antibiotics. Most alarmingly, recent studies have increasingly identified methicillin-resistant Staphylococcus aureus (MRSA), ESBL-producing and colistin-resistant Enterobacteriaceae in samples from bats. This evidence of the abundance of superbugs in both humans and wild mammals like bats could facilitate a greater understanding of which specific pathways of exposure should be targeted.
We believe that these data will also facilitate future pandemic preparedness as well as global AMR containment during pandemic events and beyond. Patterns and Risk Factors of Antibiotic Use in Poultry Farming and the Farmers: A Cross-Sectional One-Health Study in Pakistan Um e Habiba, Amjad Khan, Elia John Mmbaga, Muhammad Asaduzzaman Subject: Medicine & Pharmacology, Veterinary Medicine Keywords: antimicrobial resistance; One Health; poultry; poultry farmers; antibiotic use; Pakistan Antimicrobial resistance (AMR) due to community carriage of antibiotic-resistant bacteria is highly prevalent in the WHO South-East Asia region. One of the major reasons is the misuse of antibiotics in animal farming practices and at the community level, which threatens both human and animal health. However, this multifaceted One Health (OH) problem of antibiotic use (ABU) in poultry farms and the respective farmers is not well studied in countries like Pakistan. Therefore, we conducted an OH cross-sectional study in rural Punjab to explore the current practices of ABU in poultry and poultry farmers, associated factors, their healthcare-seeking behaviour and biosecurity practices. We found all the participating farmers using antibiotics for poultry, 60% of which were colistin sulphate and amoxicillin trihydrate. A significant share of antibiotic consumption in poultry farms (60%) and among poultry farmers (50%) occurred without prescription. Most of the farms (85%) had no wastewater drainage system, causing direct shedding of poultry waste and antibiotic residue into the surrounding environment. Lack of farmers' education, professional farm training and duration of farming experience were the factors significantly associated with ABU and knowledge of AMR. Our study implies the necessity of an integrated OH-AMR policy with the inclusion of farmers' education, mass awareness, and strict antibiotic usage guidelines.
Cross-Sectional Study on the Knowledge About Pet Ownership, Zoonoses and Practices of Pet Owners in the North of Portugal Beatriz Do Vale, Ana Patrícia Lopes, Maria Conceição Fontes, Mário Silvestre, Luís Cardoso, Ana Cláudia Coelho Subject: Medicine & Pharmacology, Veterinary Medicine Keywords: Knowledge; One Health; Pet ownership; Pets; Portugal; Public Health; Zoonoses Pet ownership is common in modern society. In Portugal, 38% and 31% of all households own at least one dog or cat, respectively. Few studies have ascertained the knowledge of pet owners about pet ownership and zoonoses, and none of them was carried out in Portugal. The aim of the present study was to assess household knowledge and practices related to pet ownership and zoonoses in the North of Portugal. A questionnaire was completed by 424 pet owners from November 2019 to February 2020. Most respondents (97.2%) considered pets an important part of the family, especially women (p = 0.036); 73.1% allowed their pets free access to indoors; 41.3% denied sharing the bed with their pets and 29% admitted they did it daily; 20.3% reported never kissing their pets or letting pets lick their faces; 73.6% considered animals potential sources of diseases to humans, but only 25.9% reported knowing the definition of zoonoses; 96.9% considered the role of veterinarians in protecting public health important. The low level of knowledge of pet owners and the occurrence of high-risk behaviors indicate a need to strengthen communication between veterinarians, physicians, pet owners and the general public to reduce the risk of acquisition and transmission of zoonoses.
Enhanced Photoelectrocatalytic Performance of TiO2 Nanorods in Photoelectrochemical Water Splitting Cell by Using an Alcoholic Sacrificial Agent Armin Hariri, Neda Gilani, Javad Vahabzadeh Pasikhani Subject: Engineering, Biomedical & Chemical Engineering Keywords: TiO2 nanorods; water splitting; photoelectrocatalyst; sacrificial agent; one-pot hydrothermal Photoelectrocatalytic water splitting using various TiO2 nanostructures is a promising approach to generate hydrogen without harmful byproducts. However, their effective performance is restricted by drawbacks such as rapid electron-hole pair recombination and the backward reaction producing H2O. Thus, in this study, the possibility of enhancing the hydrogen generation rate by adding methanol as a sacrificial agent to the anodic chamber of a two-compartment photoelectrochemical cell is investigated. Herein, one-dimensional elongated TiO2 nanorods, fabricated via a facile one-pot hydrothermal method, are utilized as a potent photoanode. Voltammetric characterizations confirm that the addition of an alcoholic sacrificial agent has a significant effect on the photoelectrochemical properties of TiO2 nanorods: by adding 10 wt% methanol, the photocurrent density and photoconversion efficiency increased from 0.8 mA.cm-2 to 1.5 mA.cm-2 and from 0.28% to 0.45%, respectively. The results of photoelectrocatalytic water splitting indicated that the hydrogen generation rate in the presence of methanol was about 1.2 times higher than that from pure water splitting. These enhancements can be attributed to the key role of methanol: methanol molecules not only inhibit electron-hole pair recombination but also accelerate the hydrogen generation rate by sharing their hydrogen atoms.
Secure Communication for Two-way Relay Networks with Imperfect CSI Cong Sun, Ke Liu, Dahu Zheng, Wenbao Ai Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: physical layer security; semi-infinite programming; amplify-and-forward two-way relay; imperfect CSI; robust optimization This paper considers a two-way relay network, where two source nodes exchange messages through several relays in the presence of an eavesdropper, and the channel state information (CSI) of the eavesdropper is imperfectly known. The amplify-and-forward relay protocol is used and the relay beamforming weights are designed. The model is built to minimize the total relay transmit power while guaranteeing the quality of service at the users and preventing the eavesdropper from decoding the signals. Due to the imperfect CSI, a semi-infinite programming problem is obtained. An algorithm is proposed to solve the problem, with the iterative points updated through a linesearch technique, where feasibility is preserved during the iterations. The optimality property is analyzed. The obtained subproblems are quadratically constrained quadratic programming problems, either with fewer than four constraints or with only one variable, which are solved optimally. Simulation results demonstrate the importance of the proposed model, and imply that the proposed algorithm is efficient and converges very fast, with more than 85% of the problems solved optimally.
Electromagnetic Performances Evaluation of an Outer-Rotor Flux-Switching Permanent Magnet Motor Based on Electrical-Thermal Two-Way Coupling Method Zhengming Shu, Xiaoyong Zhu, Li Quan, Yi Du, Chang Liu Subject: Arts & Humanities, Anthropology & Ethnography Keywords: electrical-thermal two-way coupling; flux-switching permanent magnet motor; thermal analysis; permanent magnet material characteristics Online: 4 April 2017 (08:38:40 CEST) Flux-switching permanent magnet (FSPM) motors have gained increasing attention in electric vehicle (EV) applications due to the advantages of high power density and high efficiency. However, the heat sources of both the permanent magnet (PM) and the armature winding are located in the limited stator space of FSPM motors, which may result in PM overheating and irreversible demagnetization caused by temperature rise, an effect often ignored in conventional thermal analysis. In this paper, a new electrical-thermal two-way coupling design method is proposed to analyze the electromagnetic performances, where the change of PM material characteristics under different temperatures is taken into consideration. Firstly, the motor topology and design equations are introduced. Secondly, the demagnetization curves of PM materials under different temperatures are modeled, since PM materials are sensitive to temperature. Based on the electrical-thermal two-way coupling method, the motor performances, such as the load PM flux linkage and output torque, are evaluated in detail. Then, the motor is optimized, and the electromagnetic performances of the initial and improved motors are compared. Finally, a prototype motor is manufactured, and the results are validated by experimental measurements. New Insights into the Structural Requirements of Isatin-derived Pro-apoptotic Agents against Acute Myeloid Leukemia Ahmed K.
Hamdy, Takashi Sakamoto, Tsugumasa Toma, Masaharu Sakamoto, Mohammed A.S Abourehab, Masami Otsuka, Mikako Fujita, Hiroshi Tateishi, Mohamed O. Radwan Subject: Chemistry, Medicinal Chemistry Keywords: isatin; indolin-2-one; acute myeloid leukemia; apoptosis; ERK1/2; MAPK Searching for bioactive compounds within the huge chemical space is like trying to find a needle in a haystack. Isatin is a unique natural compound endowed with different biologically pertinent activities, especially in cancer therapy. Herein, we envisaged that adopting a hybrid strategy of isatin and an α,β-unsaturated ketone would afford new chemical entities with strong chemotherapeutic potential. Of interest, compounds 5b and 5g demonstrated significant antiproliferative activities against different cancer genotypes according to the NCI assay. Concomitantly, their IC50 values against HL-60 cells were 0.38 ± 0.08 and 0.57 ± 0.05, respectively, with remarkable apoptosis induction and moderate cell cycle arrest at the G1 phase. Intriguingly, an impressive safety profile for 5b was reflected by a 37.2-fold selectivity for HL-60 over PBMCs from a healthy donor. This prompted us to further explore their mechanism of action with in vitro and in silico tools. Conclusively, 5b and 5g stand out as strong chemotherapeutic agents that hold clinical promise against acute myeloid leukemia. Detection of Human Case of Sylvatic Dengue Virus 2 During Routine Surveillance of Fever in Senegal, Kolda 2021 Idrissa Dieng, Samba Niang Sagne, Mignane Ndiaye, Mamadou Aliou Barry, Cheikh Talla, Moufid Mhamadi, Diamilatou Balde, Cheikh Talibouya Toure, Boly Diop, Amadou Alpha Sall, Gamou Fall, Cheikh Loucoubar, Oumar Faye, Ousmane Faye Subject: Biology, Anatomy & Morphology Keywords: Febrile patient; Kolda; Circulation; Sylvatic Dengue virus 2; 2021; One health A human case of dengue virus 2 was detected in a febrile patient living in Sare Yoba, Kolda region (southern Senegal).
Phylogenetic analysis based on the partial sequence of the NS5 gene reveals that the virus belongs to the DENV-2 sylvatic genotype and is closely related to a strain (JF260983, 98.98% identity) detected in Spain in 2009 from a tourist who had travelled to Guinea-Bissau (bordering the Kolda region). This highlights a potential recent underreported circulation of sylvatic dengue in the southern part of Senegal and calls for reinforced integrated surveillance among humans, non-human primates and arboreal mosquitoes through a One Health approach. The Shelter Dog in a One Health View. A Model Kennel in Southern Italy Danila D'Angelo, Luigi Sacchettino, Angelo Quaranta, Michele Visone, Luigi Avallone, Claudia Gatta, Francesco Napolitano Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: One Health; shelter dog; dog adoption; dog well-being; dog behavior Today, the kennel is considered one of the crucial concerns of the human-animal relationship, since it is very often regarded as an animal dump where dogs are exiled, thus representing a burden on society. Therefore, drawing up strategies for a new "kennel conception", as an added value for human society, the environment, and dogs, is still an unmet need. Here, we describe the activities of a dog shelter in southern Italy which faithfully meets criteria aimed at a One Health perspective. It normally relies on an initial careful assessment by a veterinary behaviorist, in order to guarantee the most suitable life conditions for the animals in the kennels, increase the chances of adoption and enroll them in projects tailored to their predispositions. Accordingly, dogs housed there are normally included in training courses to increase the skills to be used in different human social contexts, like support to inmates, rescue in rubble, animal-assisted interventions, as well as zooanthropology educational programs.
The main strength of this groundbreaking shelter lies in its environmental protection schedule, whose purposes, pursued with technically and economically sustainable tools, point towards the continuous improvement and minimization of environmental impact, promoting joint integrative projects for a sustainable One Health framework. Antimicrobial Resistance Surveillance System Mapping in Different Countries Ramendra Pati Pandey, Riya Mukherjee, Chung-Ming Chang Subject: Life Sciences, Microbiology Keywords: AMR, Surveillance; One Health Approach; Alternative Antibiotics; Comparative Medicine; Phage Therapy The excessive use of antibiotics has extensively increased antimicrobial resistance worldwide, which has become a major public health concern among countries. Controlling this threat requires proper monitoring of antimicrobial usage along with the increasing rate of antimicrobial resistance (AMR). Further, surveillance of both parameters is highly recommended for comparing the differences between distinct countries. Moreover, alternatives to antibiotics are also being surveyed and researched for use in the near future. AMR is an issue that needs immense attention from various sectors, so multisectoral intervention is highly encouraged for better outcomes. One Health is one of the approaches that play a vital role in resolving this issue. In this research paper, six different European countries are discussed in terms of antimicrobial usage and AMR in the human and livestock sectors, with the help of a literature study and various reports published by different organizations. A data study was conducted to collect the data for the comparison. Data sources of AMR and antimicrobial usage are analyzed, and a thorough comparison of both antimicrobial use and AMR is conducted. Also, the application of One Health is studied for a balanced system.
This article describes various surveillance systems formed to keep track of the evolving AMR situation and the consumption of antimicrobials by humans as well as animals. The article does not provide all the details required to monitor the AMR issue, but it allows readers to become acquainted with broad information about antimicrobial resistance across the six countries of Europe. The regular data collected by the different organizations play a vital role in monitoring the status of AMR and antimicrobial usage by humans and in livestock. These annual reports have greatly helped governments decide on alternatives and have supported many training activities to combat the AMR situation globally. AMR prevention is linked to the One Health concept: as antibiotic resistance genes persist at the interface between the environment, animal health and human health, an approach is required in all three areas that stresses the concept of One Health. One Health Approach for Combating Antimicrobial Resistance in Animals Chung-Ming Chang, Ramendra Pati Pandey, Riya Mukherjee Subject: Life Sciences, Microbiology Keywords: One Health Strategies; Antimicrobial Resistance; Salmonella isolates; Poultry Farms, Turkey Poults Antimicrobial resistance (AMR) is an increasing hazard to human and animal health that necessitates an international response. Surveillance methods in high-income nations aided in the development of measures to combat AMR in animals. Demand for meat is increasing in many countries, making it critical to implement anti-AMR initiatives. Surveillance of AMR, on the other hand, is at best in its infancy, and the current evidence base for informing policymakers is geographically disparate. All of the isolates had high rates of AMR to medicines that are critical/highly important in human and animal medicine. A higher incidence of AMR was found in poultry farms.
Our findings show that AMR, including MDR, is common in E. coli and Salmonella spp. commonly found in poultry. The study promotes the development of national policies, programs, and additional research based on a "One Health" approach that helps humans and animals, as well as the environment. Pumice as a Novel Nature Heterogeneous Catalyst for Synthesis of 3,4-dihydropyrimidine-2-(1H)-Ones/Thiones via Biginelli Reaction under Solvent-free Conditions Lamya H. Al-Wahaibi, Antar A. Abdelhamid, Sayed A. Saber, Nadia A.A. Elkanzi, Ali M. Ali Subject: Chemistry, Analytical Chemistry Keywords: One pot reaction; 3,4- Dihydropyrimidin-2(1H) ones / thiones and Pumice Abstract: Pumice is presented as an efficient and environmentally friendly new catalyst for the synthesis of 3,4-dihydropyrimidin-2(1H)-ones/thiones via one-pot multicomponent condensation of aromatic aldehydes, urea/thiourea and ethyl acetoacetate or acetylacetone in excellent yields (96-99%). The advantages of this new catalyst are that it is very cheap, available, non-toxic and stable under thermal conditions, with easy work-up, improved yields, very pure reaction products obtained without chromatographic methods, and solvent-free conditions. An Anti-Noise Fault Diagnosis Method of Bearing based on Multi-Scale 1DCNN Jie Cao, Zhidong He, Jinhua Wang, Ping Yu Subject: Engineering, Mechanical Engineering Keywords: intelligent fault diagnosis; bearing; anti-noise; one-dimensional convolution neural network In recent years, intelligent fault diagnosis algorithms using deep learning methods have achieved much success. However, the signals collected by sensors contain a lot of noise, which has a great impact on the accuracy of the diagnostic model. To address this problem, we propose a one-dimensional convolutional neural network with multi-scale kernels (MSK-1DCNN) and apply this method to bearing fault diagnosis.
We use a multi-scale convolution structure to extract different fault features from the original signal, and use the ELU activation function instead of the ReLU function in the multi-scale convolution structure to improve the anti-noise ability of MSK-1DCNN; we then train the network on a training set with pepper noise to suppress overfitting. We use the Case Western Reserve University bearing data to verify the effectiveness of the algorithm and compare it with other fault diagnosis algorithms. Experimental results show that the proposed improvements effectively improve the diagnostic performance of MSK-1DCNN under strong noise, and the diagnosis accuracy is higher than that of the other compared algorithms. Intestinal Schistosomiasis and Giardiasis Co-Infection in Sub-Saharan Africa: Can a One Health Approach Improve Control of Each Waterborne Parasite Simultaneously? John Archer, Lisa O'Halloran, Hajri Al-Shehri, Shannan Summers, Tapan Bhattacharyya, Narcis Kabaterine, Aaron Atuhaire, Moses Adriko, Moses Arianaitwe, Martyn Stewart, James LaCourse, Bonnie Webster, Amaya Bustinduy, Russell Stothard Subject: Life Sciences, Microbiology Keywords: One Health; Schistosoma mansoni; Giardia duodenalis; Sanitation and Hygiene (WASH); Uganda Both intestinal schistosomiasis and giardiasis are co-endemic throughout many areas of sub-Saharan Africa, significantly impacting the health of millions of children in endemic areas. While giardiasis is not considered a neglected tropical disease, intestinal schistosomiasis is formally grouped within the NTD umbrella and, as such, receives significant advocacy and financial support for large-scale control annually. Given the many epidemiological similarities between intestinal schistosomiasis and giardiasis, in this review we critically discuss why disease surveillance and control activities for giardiasis are largely absent within low- and middle-income countries.
With advances in new methods of parasite diagnostics and the provision of existing anti-parasitic medications, intestinal schistosomiasis and giardiasis co-infection could not only be better understood but also more effectively controlled. In this light, we appraise the suitability of a One Health approach for intestinal schistosomiasis which, if adopted more broadly, could also pave a way forward for more inclusive public health actions against giardiasis. The Pipeline of Processing fMRI Data with Python Based on the Ecosystem NeuroDebian Qiang Li, Rong Xue Subject: Mathematics & Computer Science, Analysis Keywords: neuroscience; big data; functional Magnetic Resonance (fMRI); pipeline; one platform system In neuroscience research, and specifically in medical imaging analysis, mining latent medical information from big medical data is important for finding solutions to diseases. In this review, we focus on functional magnetic resonance imaging (fMRI), a non-invasive neuroimaging technique that has already become a popular tool in clinical neuroscience and functional cognitive science research. Once fMRI data are acquired, a variety of software options, both open-source and commercial, are available, and it is very hard to choose the best software to analyze the data. Worse, combining more than one software package can make the final results inconsistent and unstable, which is why we want to build a single pipeline to analyze the data. On the other hand, with the growth of machine learning, Python has become one of the most popular programming languages. It is open-source and dynamic, and its communities, libraries and contributors have increased rapidly in recent years. Through this review, we hope to make neuroimaging data analysis easier, more stable and more uniform, based on one platform system.
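The single-platform pipeline idea in the fMRI review above can be sketched as a chain of plain Python processing stages applied in one place, so the whole analysis stays reproducible. The stage names and toy data below are ours for illustration; a real pipeline would wrap tools from the NeuroDebian ecosystem (for example via nipype) rather than these stand-in functions.

```python
def run_pipeline(data, stages):
    """Apply each processing stage in order and return the final result."""
    for stage in stages:
        data = stage(data)
    return data

# Toy stages standing in for real steps such as motion correction or smoothing.
def demean(xs):
    """Subtract the mean from a 1-D signal."""
    m = sum(xs) / len(xs)
    return [x - m for x in xs]

def rectify(xs):
    """Take the absolute value of each sample."""
    return [abs(x) for x in xs]

result = run_pipeline([1.0, 2.0, 3.0], [demean, rectify])
```

Keeping every stage behind one `run_pipeline` call is the "one platform" point of the review: swapping a stage never changes how the rest of the chain is invoked.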
One-tube RT-PCR for the Simultaneous Detection and Typing Duck Hepatitis A Virus Subtypes 1 and 3 Xueming Chen, Yuhuan Chen, Chunguo Liu, Xiaojun Li, Hongyu Liu, Xiaofei Bai, Ming Liu, Yun Zhang Subject: Biology, Agricultural Sciences & Agronomy Keywords: DHAV-1; DHAV-3; Phylogenetic analysis; One-tube RT-PCR; Simultaneously The co-circulation of duck hepatitis A virus subtypes 1 (DHAV-1) and 3 (DHAV-3) in ducklings has resulted in significant economic losses. Because ducklings infected with DHAV-1 or DHAV-3 show similar clinical signs and gross lesions, it is important to discriminate these subtypes as early as possible for better clinical management. On the basis of multiple alignments of the 5′-noncoding region sequences of DHAV-1 and DHAV-3 strains, universal and type-specific primers were designed and synthesized. Using the primers in a one-tube reverse transcription-PCR (RT-PCR) assay, reference strains of DHAV-1 and DHAV-3 (isolated over a span of 60 years and covering many different countries) were successfully amplified, indicating that the primer sequences were completely conserved. The amplicon sequences and the sizes of amplicons from reference DHAV-1 and DHAV-3 isolates correlated completely with their genotypes. Moreover, with this one-tube RT-PCR system, the amplicon sizes from liver samples of reference DHAV-1- or DHAV-3-infected birds matched perfectly with their respective genotypes, as determined by virus isolation and neutralization tests. No other RNA viruses of duck origin were detected with the synthesized primers. The sensitivity of viral RNA detection was 10 pg. With this system, 20% genotype 1, 45% genotype 3, and 9% co-infection of the two genotypes were detected in 55 clinical samples. This novel approach could be used for the rapid genotyping of DHAV-1 and/or DHAV-3 infections in routine clinical surveillance or epidemiologic screening.
Design, Synthesis, and Biological Activity of Novel penta-1,4-dien-3-one Derivatives Containing a H-phosphonate Scaffold Lijuan Chen, Tao Guo, Rongjiao Xia, Xu Tang, Ying Chen, Cheng Zhang, Wei Xue Subject: Chemistry, Organic Chemistry Keywords: penta-1,4-dien-3-one; H-phosphonate; antibacterial activities; antiviral activities A series of penta-1,4-dien-3-one derivatives containing an H-phosphonate scaffold were designed and synthesized. The structures of all title compounds were determined by 1H-NMR, 13C-NMR, 31P-NMR, and HRMS. Bioassay results showed that several of the title compounds exhibited remarkable antibacterial and antiviral activities. Among these, compounds 3c and 3o exhibited substantial antibacterial activities against Xanthomonas oryzae pv. oryzae (Xoo) and Xanthomonas axonopodis pv. citri (Xac). In addition, compounds 3c, 3f, and 3r showed remarkable curative activities against tobacco mosaic virus (TMV), with 50% effective concentration (EC50) values of 290.0, 234.0, and 373.6 μg/mL, respectively. These were superior to that of ningnanmycin (386.2 μg/mL). Compound 3r exhibited comparable protective activity against TMV, with an EC50 value of 291.1 μg/mL, which was better than that of ningnanmycin (297.1 μg/mL). Notably, the solubility of all title compounds improved relative to the lead compound curcumin. These results suggest that penta-1,4-dien-3-one derivatives containing an H-phosphonate scaffold may be considered promising candidates for antibacterial and antiviral agents. The Effects of One-Child Policy on the Economy and the Environment in an OLG Framework Peter J. Stauvermann, Jin Hu, Ronald Ravinesh Kumar Subject: Social Sciences, Economics Keywords: one-child policy; environment; OLG model; fertility; human capital; child tax In this paper we take China's one-child policy as an example and investigate its environmental impact.
We develop a model for an economy using a standard overlapping generations model extended with human capital, endogenous fertility, and changing life expectancy. To model the environmental impact of economic activities, we use a modified IPAT model. We show that China's one-child policy has a very strong positive impact on the environment, particularly if we consider the whole human legacy. A Bidirectional Adaptive Multihop Routing Algorithm for Wireless Body Area Networks Abdelrahman Miky, Mohamed Saleh, Bassem Mokhtar, M. R. M. Rizk Subject: Engineering, Electrical & Electronic Engineering Keywords: Wireless Body Area Networks; Adaptive Routing; Two-way Communication in BANs; Routing protocol in BAN; Fuzzy logic Wireless Body Area Networks are composed of sensor nodes that may be implanted in the body or worn on it. A node is composed of a sensing unit, a processor and a radio unit. One of the nodes, the sink, acts as a gateway between the body area network and other networks such as the Internet. We propose a routing protocol that constructs paths between nodes such that the final network topology is a tree rooted at the sink. The protocol's aim is to increase network lifetime and reliability, and to adapt to network conditions dynamically. Moreover, the protocol enables communication between nodes and sink both in the upstream direction, from nodes to sink, and in the downstream direction, from sink to nodes. When the network tree is constructed, a node chooses its parent, i.e., its next hop to the sink, by using one of various criteria. Namely, these are the number of hops between parent and sink, the energy level of the parent, the received signal strength from the parent, the number of the current parent's children, and a fuzzy logic function that combines multiple criteria. Moreover, as time progresses the tree structure may dynamically change to adapt to conditions such as the near-depletion of a routing node's energy.
Simulation results show improvements in network lifetime and energy consumption over the older version of the protocol. A Flexible Wireless Sensor Network based on Ultra-Wide Band Technology for Ground Instability Monitoring Lorenzo Mucchi, Sara Jayousi, Alessio Martinelli, Stefano Caputo, Emanuele Intrieri, Giovanni Gigli, Teresa Gracchi, Francesco Mugnai, Massimiliano Favalli, Alessandro Fornaciai, Luca Nannipieri Subject: Engineering, Electrical & Electronic Engineering Keywords: Ultra-Wide Band; wireless sensor networks; monitoring; warning system; ground instability; landslide; Time Of Flight; Two-way ranging An innovative wireless sensor network (WSN) based on Ultra-Wide Band (UWB) technology for accurate 3D superficial monitoring of ground deformations, such as landslides and subsidence, is proposed. The system has been designed and developed as part of a European Life+ project, called Wi-GIM (Wireless Sensor Network for Ground Instability Monitoring). The details of the architecture, the localization via wireless technology and the data processing protocols are described. The flexibility and accuracy achieved by the UWB two-way ranging technique are analysed and compared with those of traditional systems, such as robotic total stations (RTSs) and Ground-based Interferometric Synthetic Aperture Radar (GB-InSAR), highlighting the pros and cons of the UWB solution for detecting surface movements. An extensive field trial campaign allows the validation of the system and the analysis of its sensitivity to different factors (e.g., sensor node inter-visibility, effects of temperature, etc.). The Wi-GIM system represents a promising solution for landslide monitoring and can be adopted in conjunction with traditional systems or as an alternative in areas where the available resources are inadequate.
The versatility, easy/fast deployment and cost-effectiveness, together with the good accuracy, make the Wi-GIM system a possible solution for municipalities that cannot afford expensive/complex systems to monitor potential landslides in their territory. High Pyrethroid Resistance to Deltamethrin and DDT in Major Malaria Vector Anopheles gambiae s.l. from South-Western Nigeria is Probably Driven by Metabolic Resistance Mechanisms Adedapo O. Adeogun, Ahmed Omotayo, Ayodele Babalola, Tosin Joseph, Oluwakemi Adesalu, Romoke Jimoh, Tolulope Oyeniyi, Samson Awolola, Olusola Ladokun Subject: Life Sciences, Other Keywords: Insecticide resistance; Anopheles gambiae; Anopheles coluzzii; Cytochrome P450; Glutathione-S-Transferases Background: Insecticide resistance in Anopheles gambiae s.l. is a major challenge for malaria vector control in Nigeria. Both target-site insensitivity and metabolic resistance have been implicated in the resistance process, with the latter receiving little attention in Nigeria. Therefore, we investigated metabolic enzyme activities in Anopheles gambiae s.l. populations resistant to Deltamethrin and Dichlorodiphenyltrichloroethane (DDT) in South-West Nigeria. Methods: Anopheles larvae were collected from Ibadan, Oyo and Badagry, Lagos. Adults were exposed to Deltamethrin and DDT using the WHO method. Cohorts of the populations were further exposed to Piperonyl Butoxide (PBO) and Deltamethrin. Insecticide-exposed and unexposed cohorts were examined for metabolic enzyme activities. Results were compared between exposed and unexposed samples using ANOVA (P<0.05). Results: Mosquitoes were identified as An. gambiae (89%, Ibadan; 0%, Badagry) and An. coluzzii (11%, Ibadan; 100%, Badagry). The populations showed varied levels of resistance to Deltamethrin (26%, Ibadan; 71%, Badagry) and DDT (2%, Ibadan; 44%, Badagry). Mortality to Deltamethrin increased from 26% to 64% (Ibadan) and from 71% to 84% (Badagry) when populations were pre-exposed to PBO.
Biochemical analysis revealed significantly higher levels (P<0.05) of cytochrome P450 and GST in exposed samples. Conclusions: Cytochrome P450 and GST are involved in Deltamethrin and DDT resistance in Anopheles gambiae s.l. populations in South-West Nigeria. Preparation of Degradable Superhydrophobic Mg/P/Z/F/H Composite Materials and Their Anticorrosion Zhongxian Xi, Chengqing Yuan, Xiuqin Bai, Chun Wang, Anne Neville Subject: Materials Science, Surfaces, Coatings & Films Keywords: superhydrophobic surface; degradable; one-step method; Dip-coating; poly(ε-caprolactone) (PCL) In this study, degradable superhydrophobic Mg/P/Z/F/H (magnesium/poly(ε-caprolactone)/zinc oxide/1H,1H,2H,2H-perfluorodecyltriethoxysilane (PFDTES)/heating process) composite materials were prepared through a one-step method for enhancing the corrosion resistance of AZ91D magnesium alloys. Electrochemical measurements showed that Mg/P/Z/F/H materials significantly improved the corrosion resistance of magnesium alloys in 3.5 wt.% NaCl. The self-cleaning, adhesion and stability tests suggested that the Mg/P/Z/F/H composite materials had good self-cleaning properties, good adhesion strength and stability in a wet atmosphere. One Cycle Control of a PWM Rectifier: A New Approach Andres Salazar, Rodrigo Teixeira, Werbert da Silva, Guilherme Pillon, João Carvalho Neto, Elmer Villarreal, Alberto Lock Subject: Engineering, Electrical & Electronic Engineering Keywords: Power factor corrector; One cycle control; Common-mode voltage; Common-mode current In this work, a Digital Signal Processor (DSP) based One Cycle Control (OCC) strategy for a Power Factor Corrector (PFC) rectifier, which presents Common-mode Voltage (CMV) immunity, is analyzed. The proposed strategy utilizes an emulated-resistance controller in a closed-loop configuration to set the dc-link voltage and achieve unity power factor (UPF).
It is shown that if the PFC can achieve the UPF condition and the phase voltage is only affected by CMV, then the phase current is free from CMV; a lead-lag compensator (LLC) is used to average the phase current. Another possible condition is also analyzed. The proposal is verified by simulation and experiments. An Eye on the Dog as the Scientist's Best Friend for Translational Research in Ophthalmology: Focus on the Ocular Surface Lionel Sebbag, Jonathan P. Mochel Subject: Medicine & Pharmacology, Ophthalmology Keywords: Ocular Surface; Tear Film; Albumin; Pharmacology; Animal Models; Translational Research; One Health Preclinical animal studies provide valuable opportunities to better understand human diseases and contribute to major advances in medicine. This review provides a comprehensive overview of ocular parameters in humans and selected animals, with a focus on the ocular surface, detailing species differences in ocular surface anatomy, physiology, tear film dynamics and tear film composition. We describe major pitfalls that tremendously limit the translational potential of traditional laboratory animals (i.e., rabbits, mice and rats) in ophthalmic research, and highlight the benefits of integrating companion dogs with clinical analogues to human diseases into preclinical pharmacology studies. This One Health approach can help accelerate and improve the framework in which ophthalmic research is translated to the human clinic. Studies can be conducted in canine subjects with naturally occurring or non-invasively induced ocular surface disorders (e.g., dry eye disease, conjunctivitis), reviewed herein, and tear fluid can be easily retrieved from canine eyes for various bioanalytical purposes. In this review, we discuss common tear collection methods, including capillary tubes and Schirmer tear strips, and provide guidelines for tear sampling and extraction to improve the reliability of analyte quantification (drugs, proteins, others).
Salmonella Surveillance Systems in Swine and Humans in Spain: A Review Marta Martínez-Avilés, Macarena Garrido-Estepa, Julio Álvarez, Ana de la Torre Subject: Medicine & Pharmacology, Veterinary Medicine Keywords: Zoonoses; food-borne; disease control; public health; domestic livestock; pigs; One health Non-typhoid salmonellosis is a common and problematic foodborne zoonotic disease in which pork and pork products can be an important potential source of infection. In order to prevent this disease, important efforts to monitor the situation in the main source, livestock, are conducted in most developed countries. In the European Union, EFSA and ECDC compile information at the member state level, even though important differences in production systems and surveillance systems exist. Here, Salmonella surveillance systems in one of the main sources of foodborne salmonellosis, swine, and in humans in Spain were reviewed to identify potential gaps and discuss potential ways of integration under a One Health approach. Despite the extensive information generated through these surveillance activities, source attribution can only be performed routinely through ad-hoc outbreak investigations, and national reports on human outbreaks do not provide sufficiently detailed information to gain a better understanding of the epidemiology of the pathogen. Human and animal monitoring of Salmonella would benefit from a better exchange of information and collaboration. Analysis of spatio-temporal trends in livestock and humans could help to identify likely sources of infection and to target surveillance efforts in areas with higher prevalence or where specific strains are found. A Critical Review of Disinfection Processes to Control SARS-CoV-2 Transmission in the Food Industry Adrián Pedreira, Yeşim Taşkın, Míriam R.
García Subject: Biology, Other Keywords: SARS-CoV-2; COVID-19 pandemic; food industry; disinfection trade-offs; one-health Industries of the food sector have made a great effort to control SARS-CoV-2 indirect transmission, through objects or surfaces, by updating cleaning and disinfection protocols previously focused on inactivating other pathogens, as well as food spoilage microorganisms. The information, although scarce at the beginning of the COVID-19 pandemic, has started to be sufficiently reliable to avoid over-conservative disinfection procedures. This work reviews the literature to propose a holistic view of the disinfection process where the decision variables, such as the type and concentration of the active substance, are optimised to guarantee the inactivation of SARS-CoV-2 and other usual pathogens and spoilage microorganisms while minimising possible side-effects on the environment and on animal and human health. A Decision Tree Based Intrusion Detection System for Identification of Malicious Web Attacks Samir Bandyopadhyay, Ratul Chowdhury, Pallabi Banerjee, Soumya Deep Dey, Banani Saha Subject: Keywords: Intrusion Detection System; NSL-KDD Dataset; One Hot Encoding; Information Gain; Decision Tree In today's world, cyber attacks are among the major issues concerning organizations that deal with technologies such as cloud computing, big data, IoT, etc. In the area of cyber security, the intrusion detection system (IDS) plays a crucial role in identifying suspicious activities in network traffic. Over the past few years, a lot of research has been done in this area, but in the current scenario network attacks are diversifying in both volume and variety. In this regard, this research article proposes a novel IDS where a combination of information gain and a decision tree algorithm has been used for the purposes of dimension reduction and classification. For experimental purposes, the NSL-KDD dataset has been used.
Initially, out of the 41 features present in the dataset, only 5 features with high information gain values are selected for classification. The applicability of the selected features is evaluated through various machine learning based algorithms. The experimental results show that the decision tree based algorithm records the highest recognition accuracy among all the classifiers. Based on the initial classification result, a novel methodology based on decision trees has been further developed which is capable of identifying multiple attacks by analyzing the packets of various transactions in real time. Pricing to the Scenario: Price-Setting Newsvendor Models for Innovative Products Xiuyan Ma Subject: Social Sciences, Business And Administrative Sciences Keywords: price-setting newsvendor, one-shot decision theory, innovative product, scenario, behavioral operations research In this paper, we consider a manufacturer who produces and sells a kind of innovative product in a monopoly market environment. Because the life cycle of an innovative product is usually shorter than its procurement lead time, one unique demand quantity (scenario) will occur in the selling season; thus there is only one chance for the manufacturer to determine both the optimal production quantity and the optimal sale price. Considering this one-time feature of such a decision problem, a price-setting newsvendor model for innovative products is proposed. Different from the existing price-setting newsvendor models, the proposed models determine the optimal production quantity and sale price based on some specific state (scenario) which is most applicable for the manufacturer. The theoretical analysis provides managerial insights into manufacturers' behaviors in a monopoly market for an innovative product, and several phenomena in the luxury goods market are well explained. Chitosan Decorated Copper Nanoparticles as Efficient Catalyst for Synthesis of Novel Quinoline Derivatives Kahdijah S. Alghamdi, Nesreen S.I.
Ahmed, D. Bakhotmah, Mohamed Mokhtar M. Mostafa Subject: Chemistry, Applied Chemistry Keywords: chitosan-copper NPs; quinoline derivatives; ultrasonic irradiation; one-pot synthesis; green-sustainable perspectives Chitosan decorated copper nanoparticle (CS/CuNPs) catalysts were synthesized via reduction methods utilizing a green protocol. The CS/CuNPs hybrid catalysts were tested for the synthesis of quinoline derivatives utilizing a one-pot multicomponent reaction (MCR) under ultrasonic irradiation. The best catalyst (CS/CuNPs), which provided good conversion reaction yield and high turnover frequency (TOF), was characterized using Fourier transform infrared (FTIR), thermogravimetric analysis (TGA), X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and X-ray photoelectron spectroscopy (XPS) techniques. Generalization of the scope of the proposed catalytic process was studied using different aldehydes. Excellent product yields and high TOF in an even shorter reaction time (~5 min) were attained. Recyclability of the catalyst over five re-uses without detectable loss in product yield was recorded. The current method is a green process utilizing an environmentally benign catalyst and is considered to be a promising sustainable protocol for the synthesis of fine chemicals. Peripheral Nerve Injury Treatments and Advances: One Health Perspective Bruna Lopes, Patrícia Sousa, Rui Alvites, Mariana Branquinho, Ana Sousa, Carla Mendonça, Luís Miguel Atayde, Ana Lúcia Luís, Artur S. P. Varejão, and Ana Colette Maurício Subject: Life Sciences, Biotechnology Keywords: Mesenchymal stem cells; Nerve Guide Conduits; Nerve recovery; One Health; Peripheral nerve injury; Secretome Peripheral nerve injuries (PNI) can have several etiologies, such as trauma and iatrogenic interventions, which can lead to loss of structure and/or impairment of function.
These changes can cause a partial or complete loss of motor and sensory functions, physical disability, and neuropathic pain, which in turn can affect quality of life. For these reasons, PNI is a major public health concern. This review aims to revisit the concepts associated with PNI. First, the anatomy of the peripheral nerve is detailed to explain the different types of injury. Then, some of the available therapeutic strategies are explained, including surgical methods, pharmacological therapies, and the use of cell-based therapies alone or in combination with biomaterials in the form of tube guides. Nevertheless, even with the various available treatments, it is difficult to achieve a perfect outcome with complete functional recovery. This review also aims to explain the urge for new approaches and to understand the methods used to evaluate nerve regeneration from a One Health perspective. In vitro models followed by in vivo models are very important for translating these achievements to human medicine. Getting out of Crises: Environmental, Social-ecological and Evolutionary Research Needed to Avoid Future Risks of Pandemics Delphine Destoumieux-Garzon, Franziska Matthies-Wiesler, Nicolas Bierne, Aurélie Binot, Jérôme Boissier, Anais Devouge, Jeanne Garric, Kim Gruetzmacher, Christoph Grunau, Jean-François Guégan, Sylvie Hurtrez-Boussès, Anke Huss, Serge Morand, Clare Palmer, Denis Sarigiannis, Roel Vermeulen, Robert Barouki Subject: Keywords: One Health; Planetary Health; Pandemics; Ecology; Evolution; Environment; Climate change; Biodiversity loss; Emergence; Pathogen The implementation of One Health/EcoHealth/Planetary Health approaches has been identified as key (i) to address the strong interconnections between the risk for pandemics, climate change and biodiversity loss, and (ii) to develop and implement solutions to these interlinked crises.
As a response to the multiple calls of scientists in that direction, we have put forward seven long-term research questions regarding COVID-19 and emerging infectious diseases (EIDs) that are based on an effective integration of environmental, ecological, evolutionary, and social sciences to better anticipate and mitigate EIDs. Research needs cover the social-ecology of infectious disease agents, their evolution, the determinants of susceptibility of humans and animals to infections, and the human and ecological factors accelerating infectious disease emergence. For comprehensive investigation, they include the development of nature-based solutions to the interlinked global planetary crises, addressing ethical and philosophical questions regarding the relationship of humans to nature and regarding transformative changes to safeguard the environment and human health. In support of this research, we propose the implementation of innovative multidisciplinary facilities embedded locally in social-ecosystems: the "ecological health observatories" and the "living laboratories". This work has been carried out in the frame of the EC project HERA (www.HERAresearchEU.eu), which aims to set the priorities for an environment, climate and health research agenda in the EU by adopting a systemic approach in the face of global environmental change. A Step Forward to Revolutionise Intrusion Detection System Using Deep Convolution Neural Network Samir Bandyopadhyay, Ratul Chowdhury, Arindam Roy, Banani Saha Subject: Keywords: Intrusion Detection System; NSL-KDD Dataset; One Hot Encoding; Information Gain; Convolution Neural Network Cyber security plays an important role in protecting our computers, networks, programs and data from unauthorized access. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) are two main categories of cyber security tools, designed to identify suspicious activities in inbound and outbound network packets and to restrict suspicious incidents.
Deep neural networks play a significant role in the construction of IDS and IPS. This paper highlights a novel IDS using an optimized convolution neural network (CNN-IDS). The optimized CNN-IDS model is an improvement over CNN which selects the best weighted model by considering the loss in every epoch. All the experiments have been conducted on the well-known NSL-KDD dataset. Information gain has been used for dimensionality reduction. The accuracy of the proposed model is evaluated through the optimized CNN for both binary and multiclass categories. Finally, a critical comparison has been performed with other general classifiers like J48, Naive Bayes, NB tree, Random forest, Multilayer Perceptron (MLP), Support Vector Machine (SVM), Recurrent Neural Network (RNN) and Convolution Neural Network (CNN). All the experimental results demonstrate that the optimized CNN-IDS model records the best recognition rate with minimum model construction time. Coronavirus Disease 2019 – COVID-19 Kuldeep Dhama, Khan Sharun, Ruchi Tiwari, Shubhankar Sircar, Sudipta Bhat, Yashpal Singh Malik, Karam Pal Singh, Wanpen Chaicumpa, D. Katterine Bonilla-Aldana, Alfonso J. Rodriguez-Morales Subject: Medicine & Pharmacology, Other Keywords: emerging coronavirus; 2019-nCoV; SARS-CoV-2; COVID-19; diagnosis; vaccines; therapy; one health In the past decades, several new diseases have emerged in new geographical areas, with pathogens including Ebola, Zika, Nipah, and coronaviruses (CoVs). Recently, a new type of viral infection emerged in Wuhan City, China, and the initial genomic sequencing data of this virus did not match previously sequenced CoVs, suggesting a novel CoV strain (2019-nCoV), which has now been termed severe acute respiratory syndrome CoV-2 (SARS-CoV-2).
Although Coronavirus disease 2019 (COVID-19) is suspected to originate from an animal host (zoonotic origin) followed by human-to-human transmission, the possibility of other routes such as food-borne transmission should not be ruled out. Compared to diseases caused by previously known human CoVs, COVID-19 shows less severe pathogenesis but higher transmission competence, as is evident from the continuously increasing number of confirmed cases globally. Compared to other emerging viruses such as Ebola virus, avian H7N9, SARS-CoV, or MERS-CoV, SARS-CoV-2 has shown relatively low pathogenicity and moderate transmissibility. Codon usage studies suggest that this novel virus may have been transferred from an animal source such as bats. Early diagnosis by real-time PCR and next-generation sequencing has facilitated the identification of the pathogen at an early stage. Since no antiviral drug or vaccine exists to treat or prevent SARS-CoV-2, potential therapeutic strategies that are currently being evaluated predominantly stem from previous experience with treating SARS-CoV, MERS-CoV, and other emerging viral diseases. In this review, we address epidemiological, diagnostic, clinical, and therapeutic aspects, including perspectives on vaccines and preventive measures that have already been globally recommended. On a Class of Hermite-Obrechkoff One-Step Methods with Continuous Spline Extension Francesca Mazzia, Alessandra Sestini Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Initial Value Problems; One-step Methods; Hermite-Obrechkoff methods; symplecticity; B–Splines; BS Methods The class of A-stable symmetric one-step Hermite-Obrechkoff (HO) methods introduced in [1] for dealing with Initial Value Problems is analyzed. Such schemes have the peculiarity of admitting a multiple knot spline extension collocating the differential equation at the mesh points.
As a new result, it is shown that these maximal order schemes are conjugate symplectic, which is a benefit when the methods have to be applied to Hamiltonian problems. Furthermore, a new efficient approach for the computation of the spline extension is introduced, adopting the same strategy developed in [2] for the BS linear multistep methods. The performance of the schemes is tested in particular on some Hamiltonian benchmarks and compared with that of the Gauss Runge-Kutta schemes and Euler-Maclaurin formulas of the same order. Recent Advances in Double-Lumen Tube Malposition in Thoracic Surgery: A Bibliometric Analysis and Narrative Literature Review Xi Zhang, Dong-Xu Wang, Jing-Qiu Wei, He Liu, Si-Ping Hu Subject: Medicine & Pharmacology, Anesthesiology Keywords: Double lumen tube; Malposition; Thoracic surgery; Airway management; One-lung ventilation; Fiberoptic bronchoscopy; Bibliometric analysis Thoracic surgery has increased drastically in recent years, especially in light of the severe outbreak of the 2019 novel coronavirus disease (COVID-19). Routine "passive" chest computed tomography (CT) screening of inpatients detects, in a timely way, some pulmonary diseases requiring thoracic surgery. As an essential device for thoracic anesthesia, the double-lumen tube (DLT) is particularly important for anesthesia and surgery. With the continuous upgrading of DLTs and the widespread use of fiberoptic bronchoscopy (FOB), the position of the DLT in thoracic surgery is gradually becoming more stable and easier to observe or adjust. However, DLT malposition still occurs when transferring patients from the supine to the lateral position in thoracic surgery, which leads to lung isolation failure and hypoxemia during one-lung ventilation (OLV). Recently, some innovative DLTs and improved intervention methods have shown good results in reducing the incidence of DLT malposition.
This review aims to summarize recent studies on the incidence of left-sided DLT malposition, the reasons for and effects of malposition, and current methods for reducing DLT malposition, together with prospects for possible approaches. Meanwhile, we use bibliometric analysis to summarize the research trends and hot spots of DLT research. A Generalizable One Health Framework for the Control of Zoonotic Diseases Ria Ghai, Ryan Wallace, James Kile, Trevor Shoemaker, Antonio Vieira, Maria Negron, Sean Shadomy, Julie Sinclair, Grace Goryoka, Stephanie Salyer, Casey Barton Behravesh Subject: Medicine & Pharmacology, Veterinary Medicine Keywords: One Health; zoonotic disease; zoonotic disease control; anthrax; brucellosis; rabies; rift valley fever; zoonotic influenza Effectively preventing and controlling zoonotic diseases requires a One Health approach that involves collaboration across sectors responsible for human health, animal health (both domestic and wildlife), and the environment, as well as other partners. Here we describe the Generalizable One Health Framework (GOHF), a five-step framework that provides structure for using a One Health approach in zoonotic disease programs being implemented at the local, sub-national, national, regional, or international level. Part of the framework is a toolkit that compiles existing resources and presents them following a stepwise schematic, allowing users to identify relevant resources as they are required. Coupled with recommendations for implementing a One Health approach for zoonotic disease prevention and control in technical domains including laboratory, surveillance, and preparedness and response, this framework can mobilize One Health and thereby enhance and guide capacity building to combat zoonotic disease threats at the human-animal-environment interface. One-step nucleic acid amplification (OSNA) of Sentinel Lymph Node in Early Stage Endometrial Cancer: Spanish Multicenter Study (ENDO-OSNA). M. D. Diestro, A.
Berjón, I. Zapardiel, L. Yébenes, I. Ruiz, A. Lekuona, M. Rezola, I. Jaunarena, J. Siegrist, M. Sánchez-Pastor, M. Cuadra, A. Sagasta, I. Guerra, L. I. Lete, F. Roldán, C.-B. Marta, M. J. Boillos, M. J. Cardiel, C. López-de la Manzanara, F. Relea, P. J. Coronado, A. Pascual, M. J. Román, G. Peiró, L. J. Matute, B. Montero, J. C. Muruzábal, R. Guarch, C. Zorrero, A. Calatrava, L. Ribot, I. Costa, A. Hernández, D. Hardisson Subject: Medicine & Pharmacology, Allergology Keywords: Endometrial cancer; sentinel lymph node; micrometastases; ultrastaging; one-step nucleic acid amplification; OSNA; cytokeratin 19 The objective of this study was to evaluate the efficacy of one-step nucleic acid amplification (OSNA) for the detection of sentinel lymph node (SLN) metastasis compared to standard pathological ultrastaging in patients with early-stage endometrial cancer (EC). A total of 526 SLNs from 191 patients with EC were included in the study; of these, 379 SLNs (147 patients) were evaluated by both methods, OSNA and standard pathological ultrastaging. The central 1-mm portion of each lymph node was subjected to semi-serial sectioning at 200-μm intervals and examined by hematoxylin-eosin staining and immunohistochemistry with CK19; the remaining tissue was analysed by OSNA for CK19 mRNA. The OSNA assay detected metastases in 19.7% of patients (14.9% micrometastasis and 4.8% macrometastasis), whereas pathological ultrastaging detected metastasis in 8.8% of patients (3.4% micrometastasis and 5.4% macrometastasis). Using the established cut-off value for detecting SLN metastasis by OSNA in EC (250 copies/μl), the sensitivity of the OSNA assay was 92%, specificity 82%, diagnostic accuracy 83%, and negative predictive value 99%. Discordant results between the two methods were recorded in 20 patients (13.6%). OSNA resulted in upstaging in 12 patients (8.2%). OSNA could aid in the identification of patients requiring adjuvant treatment at the time of diagnosis.
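The diagnostic performance figures quoted above (sensitivity, specificity, accuracy, negative predictive value) are standard functions of a 2×2 confusion matrix. A minimal sketch in Python, using hypothetical confusion-matrix counts chosen for illustration only (not the actual ENDO-OSNA data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
    npv = tn / (tn + fn)                        # negative predictive value
    return sensitivity, specificity, accuracy, npv

# Hypothetical counts for illustration only (not the study's data):
sens, spec, acc, npv = diagnostic_metrics(tp=23, fp=24, tn=110, fn=2)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"accuracy={acc:.2f} NPV={npv:.2f}")
```

A high NPV, as reported in the abstract, is what supports using a negative OSNA result to rule out SLN metastasis.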
Efforts to Identify and Combat Antimicrobial Resistance in Uganda: A Systematic Review Kivumbi Mark Tefero, Claire J Standley Subject: Medicine & Pharmacology, Allergology Keywords: antimicrobial resistance; antimicrobial stewardship; antiviral resistance; antibacterial resistance; antimalarial resistance; antifungal resistance; One Health; Uganda The global burden of antimicrobial resistance is on the rise, resulting in higher morbidity and mortality in our communities. The spread of antimicrobial resistance in the environment and the development of resistant microbes are challenges to the control of antimicrobial resistance. Approaches such as antimicrobial stewardship programmes and enhanced surveillance have been devised to curb its spread. However, particularly in lower- and middle-income countries, the overall extent of antimicrobial resistance, and knowledge on ongoing surveillance, stewardship or investigation efforts, are often poorly understood. This study aimed to look at the efforts that have been undertaken to combat antimicrobial resistance in Uganda as a means of establishing an overview of the situation, to help inform future decisions. We conducted a systematic literature review of the PubMed database to assess the efforts made in Uganda to investigate and combat antimicrobial resistance. A search combining keywords associated with antimicrobial resistance was used to look up relevant studies published between 1995 and 2020 on surveillance of antimicrobial resistance in Uganda, and the susceptibility of microbes to different drugs. The search yielded 430 records, 163 of which met the inclusion criteria for analysis. The studies were categorized according to country and region, the type of antimicrobial resistance, context of the study, study design and outcome of the study. Antibacterial resistance and antimalarial resistance had the most published studies, while antiviral and antifungal resistance were each represented by very few studies.
Most studies were conducted in humans and hospital settings, with very few in veterinary and One Health contexts. The results from our work can inform public health policy on antimicrobial stewardship, as it contributes to understanding the status of antimicrobial resistance surveillance in Uganda, and can also help to guide future research efforts. Notably, a One Health approach needs to be followed with respect to surveillance of antimicrobial resistance to better understand the mechanisms of resistance transfer across the human-animal-environment interface, including additional investigation of antiviral and antifungal resistance. Bilateral Tempered Fractional Derivatives Manuel D. Ortigueira, Gabriel Bengochea Subject: Mathematics & Computer Science, General Mathematics Keywords: Tempered Fractional Derivative; One-sided Tempered Fractional Derivative; Bilateral Tempered Fractional Derivative; Tempered Riesz potential The bilateral tempered fractional derivatives are introduced, generalising previous work on the one-sided tempered fractional derivatives and the two-sided fractional derivatives. An analysis of the tempered Riesz potential is performed, showing that it cannot be considered a derivative. Optimizing Open Data to Support One Health: Best Practices to Ensure Interoperability of Genomic Data from Microbial Pathogens Ruth E. Timme, William J.
Wolfgang, Maria Balkey, Sai Laxmi Gubbala Venkata, Robyn Randolph, Marc Allard, Errol Strain Subject: Life Sciences, Other Keywords: Genomic Epidemiology; GenomeTrakr; microbial pathogen surveillance; NCBI submission; whole genome sequencing; QA/QC; One Health The holistic approach of One Health, which sees human, animal, plant, and environmental health as a unit rather than discrete parts, requires not only interdisciplinary cooperation, but standardized methods for communicating and archiving data, enabling participants to easily share what they have learned and allowing others to build upon their findings. Ongoing work by NCBI and the GenomeTrakr project illustrates how open data platforms can help meet the needs of federal and state regulators, public health laboratories, departments of agriculture, and universities. Here we describe how microbial pathogen surveillance can be transformed by having an open-access database along with Best Practices for contributors to follow. First, we describe the open pathogen surveillance framework, hosted on the NCBI platform. We cover the current community standards for WGS quality, provide an SOP for assessing your own sequence quality, and recommend QC thresholds for all submitters to follow. We then provide an overview of NCBI data submission along with step-by-step details. Finally, we provide curation guidance and an SOP for keeping your public data current within the database. These Best Practices can serve as models for other open data projects, thereby advancing the One Health goals of Findable, Accessible, Interoperable and Reusable (FAIR) data.
Evaluation of Different Cannulation Strategies for Aortic Arch Surgery Using a Cardiovascular Numerical Simulator Beatrice De Lazzari, Massimo Capoccia, Roberto Badagliacca, Claudio De Lazzari Subject: Engineering, Biomedical & Chemical Engineering Keywords: Aortic surgery; Aortic arch; Three-way cannulation approach; Carotid artery perfusion; Pressure-volume loop; Lumped parameter model; Software simulation; Cardiovascular modelling; CARDIOSIM Aortic disease has a significant impact on quality of life. Involvement of the aortic arch requires preservation of blood supply to the brain during surgery. Deep hypothermic circulatory arrest is an established technique for this purpose, although neurological injury remains high. Additional techniques have been used to reduce the risk, although controversy still remains. A three-way cannulation approach, including both carotid arteries and the femoral artery or the ascending aorta, has been used successfully for aortic arch replacement and redo procedures. We have developed circuits of the circulation to simulate blood flow during this type of cannulation set-up. The aim is to analyse, using the CARDIOSIM© cardiovascular simulation platform, how the haemodynamic and energetic parameters are affected, and the benefit derived, with particular reference to the cerebral circulation. Effective One-Class Classifier Model for Memory Dump Malware Detection Mahmoud Al-Qudah, Zein Ashi, Mohammad Alnabhan, Qasem Abu Al-Haija Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: novelty class; One-Class SVM (OCSVM); memory dump; Malware; Principal Component Analysis (PCA); dimensionality reduction Malware complexity is rapidly increasing, causing catastrophic impacts on computer systems. Memory dump malware is gaining increased attention due to its ability to expose plaintext passwords or key encryption files.
This paper presents an enhanced classification model based on a One-Class SVM (OCSVM) classifier that can identify any deviation from normal memory dump file patterns and detect it as malware. The proposed model integrates OCSVM and Principal Component Analysis (PCA) for increased model sensitivity and efficiency. An up-to-date dataset known as "MALMEMANALYSIS-2022" was utilized during the evaluation phase of this study. The accuracy achieved by the traditional one-class classification (TOCC) model was 55%, compared to 99.4% for the one-class classification with PCA (OCC-PCA) model. These results confirm the improved performance achieved by the proposed model. Positive Association between the Use of Quinolones in Food Animals and the Prevalence of Fluoroquinolone Resistance in E. Coli and K. Pneumoniae, A. Baumanii and P. Aeruginosa: A Global Ecological Analysis Subject: Medicine & Pharmacology, Other Keywords: One-health; food-animals; E. coli; K. pneumoniae; Acinetobacter; P. aeruginosa; fluoroquinolones; antimicrobial resistance; antibiotic consumption Background: It is unclear what underpins the large global variations in the prevalence of fluoroquinolone resistance in gram-negative bacteria. We tested the hypothesis that different intensities in the use of quinolones for food animals play a role. Methods: We used Spearman's correlation to assess whether the country-level prevalence of fluoroquinolone resistance in human infections with Acinetobacter baumannii, Escherichia coli, Klebsiella pneumoniae and Pseudomonas aeruginosa was correlated with the use of quinolones for food-producing animals. Linear regression was used to assess the relative contributions of country-level quinolone consumption for food animals and humans to fluoroquinolone resistance in these 4 species. Results: The prevalence of fluoroquinolone resistance in each species was positively associated with quinolone use for food-producing animals (E. coli [ρ=0.55; P<0.001], K.
pneumoniae [ρ=0.58; P<0.001]; A. baumannii [ρ=0.54; P=0.004]; P. aeruginosa [ρ=0.48; P=0.008]). Linear regression revealed that quinolone consumption in both humans and food animals was independently associated with fluoroquinolone resistance in E. coli and A. baumannii. Conclusions: Reducing quinolone use in food-producing animals may help retard the spread of fluoroquinolone resistance in various gram-negative bacterial species. The Meaning of Accelerated Motion Qing Li Subject: Physical Sciences, Acoustics Keywords: the linear and curvilinear process of the continuum; one quantitative continuum; the infinitely great; accelerating force Abstract: Unlike accelerated motion or curvilinear motion, a nonlinear motion state (non-inertial system) can be described by differential equations or other algebraic equations in axiom 2; the accelerated motion in axiom 3 can be considered to be described by the equation of the linear and curvilinear process of the continuum extending to infinite distance. Further, this linear and curvilinear process of the continuum is in essence a quantitative continuum, as a unity of infinite quantities and infinite dimensions at infinite distance (infinity) relative to all orientations in which we exist. This indicates that an accelerated motion accumulates continuously, starting from finite quantities and leaping to transit from finite to infinite quantities by an infinitely great accelerating force. Stateless One-time Authenticated Session Resumption in TLS Handshake Using Paired Token Byoungcheon Lee Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Transport Layer Security; Handshake; Session resumption; Paired token; Stateless; One-time authenticated session resumption; Privacy; Untraceability Transport Layer Security (TLS) is a cryptographic protocol that provides communications security between two peers, and it is widely used in many applications.
To reduce latency in the TLS handshake, session resumption using a pre-shared key (PSK) has been used. But current PSK-mode handshake methods reuse a fixed session key multiple times over the lifetime of the session ticket. Reusing a fixed session key requires great care from the standpoint of communications security: it is vulnerable to replay attacks, and there is a possibility of tracking users. The paired token (PT) is a new secondary credential scheme that provides a pre-shared key in a stateless way in a client-server environment. The server issues a paired token (a public token and a secret token) to an authenticated client. The public token represents the signed identity of the client, and the secret token is a kind of shared secret between client and server. Once a client is equipped with a PT, it can be used for many symmetric-key cryptographic applications such as authentication, authorization, and key establishment. It was also shown that it can be used for one-time authenticated key establishment using the time-based one-time password (TOTP) approach. In this paper we apply the PT and TOTP approach to TLS to achieve stateless one-time authenticated session resumption. The server executes the full handshake of TLS 1.3 and issues a PT to the authenticated client. Then client and server can execute one-time authenticated session resumption using the PT, stateless on the server side. In every run of session resumption distinct session keys are established, so that the same PT can be used safely over a longer lifetime. If an anonymous PT is used with renewal issuing, user privacy, untraceability and forward security can be achieved easily. This will provide a huge performance gain in large-scale distributed services. Insights Into the Roles of the Sideroflexins / SLC56 Family in Iron Homeostasis and Iron-Sulfur Biogenesis Nesrine Tifoun, José M.
De las Heras, Arnaud Guillaume, Sylvina Bouleau, Bernard Mignotte, Nathalie Le Floch Subject: Life Sciences, Biochemistry Keywords: sideroflexin; mitochondria; mitochondrial transporters; iron homeostasis; iron-sulfur cluster; heme biosynthesis; one-carbon metabolism; ferroptosis; ferritinophagy Sideroflexins (SLC56 family) are highly conserved multi-spanning transmembrane proteins inserted in the inner mitochondrial membrane in eukaryotes. Few data are available on their molecular function, but since their first description they have been thought to be metabolite transporters, probably required for iron utilization inside the mitochondrion. Like numerous mitochondrial transporters, sideroflexins remain poorly characterized. The prototypic member SFXN1 has recently been identified as the previously unknown mitochondrial transporter of serine. Nevertheless, open questions on the molecular function of sideroflexins remain unresolved, especially their link with iron metabolism. Here, we review the current knowledge on sideroflexins, their presumed mitochondrial functions and the sparse - but growing - evidence linking sideroflexins to iron homeostasis and iron-sulfur cluster biogenesis. Since an imbalance in iron homeostasis can be detrimental at the cellular and organismal levels, we also investigate the relationship between sideroflexins, iron and physiological disorders. Investigating the functions of sideroflexins constitutes an emerging research field of great interest and will certainly lead to major discoveries in mitochondrial physiopathology. Phage Assisted Continuous Evolution (PACE): A How-to Guide for Directed Evolution Serban C. Popa, Ichiro Inamoto, Benjamin W. Thuronyi, Jumi A.
Shin Subject: Life Sciences, Biochemistry Keywords: continuous evolution; protein design; protein engineering; phage; bacterial one-hybrid; plaque assay; mutational analysis; DNA sequencing Directed evolution methods are becoming increasingly popular, as they are extremely powerful for developing new biomolecules with altered or novel activities, e.g., proteins with new catalytic functions or substrate specificities, and nucleic acids that recognize an intended target. Especially useful are systems that incorporate continuous evolution, where the protein to be evolved undergoes continuous mutagenesis to evolve a desired trait with little to no input from the researcher once the system is started. However, continuous evolution methods can be challenging to implement in the lab and daunting for researchers deciding whether to invest time and resources. Our intent is to provide basic information and helpful suggestions that we have gained from our experience with bacterial phage-assisted continuous evolution (PACE). Specifically, we review factors to consider before adopting PACE for a given evolution scheme; different types of selection circuits that can be utilized, with particular focus on the PACE-B1H selection system; what optimization of a PACE selection circuit may look like, using directed evolution of ME47 as a case study; and additional techniques that may be incorporated into a PACE experiment. With this information, researchers will be better equipped to determine whether PACE is a valid strategy for evolving their proteins and how to set up a valid selection circuit. Predictor Packing in Developing Unprecedented Shaped Colloidal Particles Mubarak Ali, I-Nan Lin, C.-J.
Yeh Subject: Materials Science, Nanotechnology Keywords: fundamental forces; transition state gold atoms; packing and assembling; process parameters; one-dimensional particles; multi-dimensional particles Developing particles of different anisotropic shapes has been a hot topic for decades, as such particles offer special properties not attainable by other means. At the same time, controlling atoms to develop a particle of a certain size and shape is quite a challenging job. In this study, gold particles of different shapes are developed via a pulse-based electron-photon-solution interface process. Gold atoms in a certain transition state develop a monolayer assembly at the solution surface around the light glow (in argon plasma) generated at the bottom of a copper capillary (the cathode). The rate of uplifting of gold atoms to the solution surface is controlled by the forcing energy (travelling photons) pursuing the electrons and high-energy photons (in high density) entering the solution. Gold atoms are dissociated from the precursor under heat energy dissipated into the solution, supplied by photonic current propagating through the immersed graphite rod (the anode). Placing packets of nano-shape energy of a tuned pulse protocol over the compact monolayer assembly comprising transition-state atoms develops tiny-sized particles of formed shape. On separation of joint tiny particles into two equilateral triangular-shaped tiny particles, exerting forces of surface format elongate atoms of one-dimensional arrays, converting them into structures of smooth elements. Due to the immersing level of force, such tiny-shaped particles pack from different zones at the centre of the light glow, where they assemble structures of smooth elements for developing monolayers of different shapes of particles.
Developing one-dimensional particles involves assembling structures of smooth elements by packing tiny-shaped particles from the nearly rearward zones of reflection of the north-south poles, whereas developing multi-dimensional particles involves assembling structures of smooth elements by packing tiny-shaped particles from the east-west poles and nearby regions. Depending on the number of assembled structures of smooth elements at the point of nucleation, packing of tiny-shaped particles from different zones develops different shapes of the anisotropic particles. At fixed precursor concentration, increasing the process time results in particles of low aspect ratio. Under tuned parameters, the mechanisms developing particles that exhibit unprecedented features are discussed. Peierls and Spin-Peierls Instabilities in the Per2[M(mnt)2] Series of One-Dimensional Organic Conductors; Experimental Realization of a 1D Kondo Lattice for M = Pd, Ni and Pt Jean-Paul Pouget, Pascale Foury-Leylekian, Manuel Almeida Subject: Physical Sciences, Condensed Matter Physics Keywords: organic conductors; one dimensional metal; Kondo lattice; Peierls and spin-Peierls transitions; frustrated anti-ferromagnetic systems We summarize the structural instabilities exhibited by the one-dimensional (1D) (arene)2X family of organic conductors in relation to their electronic and magnetic properties. With a charge transfer of one electron to each anion X, these salts exhibit a quarter-filled (hole) conduction band located on the donor stacks. Compounds built with donors such as fluoranthene and perylene derivatives and anions X such as PF6 or AsF6 exhibit a high-temperature (TP~170K) conventional Peierls transition, which is announced by a sizeable regime of 1D 2kF charge density wave fluctuations (kF is the Fermi wave vector of the 1D electron gas located on the Per stacks).
Surprisingly, and probably because of the presence of a multi-sheet warped Fermi surface, the Peierls transition is considerably reduced in the perylene series α-(Per)2[M(mnt)2], where X is the dithiolate molecule with M=Au, Cu, Co and Fe. Special attention is devoted in this paper to the physical properties of the α-(Per)2[M(mnt)2] salts, which for M=Pt, Pd and Ni incorporate segregated S=1/2 1D antiferromagnetic (AF) dithiolate stacks together with 1D metallic Per stacks. We jointly analyse the structural and magnetic properties of these salts in relation to the 1D spin-Peierls (SP) instability located on the dithiolate stacks. We show that the SP instability of the Pd and Ni derivatives occurs in the classical (adiabatic) limit, while the SP instability of the Pt derivative occurs in the quantum (anti-adiabatic) limit. Furthermore, we show that in the Pd and Ni derivatives frustrated first-neighbour direct and second-neighbour indirect (through a fine tuning with the mediated 2kF RKKY coupling interaction on the Per stacks) AF interactions add their contribution to the SP instability to open a singlet-triplet gap. Our analysis of the data shows unambiguously that the magnetic α-(Per)2[M(mnt)2] salts are a typical realization of the physics predicted for two-chain 1D Kondo lattices. "Superwobbling" and tRNA-34 wobble and tRNA-37 anticodon loop modifications in evolution and devolution of the genetic code Lei Lei, Zachary Frome Burton Subject: Biology, Other Keywords: elongation factor Tu; evolution of the genetic code; four-way wobbling; genetic code degeneracy; inosine; mitochondria; queuosine; superwobbling; tRNA modification; tRNA wobble U modification The genetic code evolved around the reading of the tRNA anticodon on the primitive ribosome, and tRNA-34 wobble and tRNA-37 modifications coevolved with the code.
We posit that EF-Tu, the closing mechanism of the 30S ribosomal subunit, methylation of wobble U34 at the 5-carbon, and suppression of wobbling at the tRNA-36 position were partly redundant and overlapping functions that coevolved to establish the code. The genetic code devolved in the evolution of mitochondria to reduce the size of the tRNAome (all of the tRNAs of an organism or organelle). "Superwobbling" or four-way wobbling describes a major mechanism for shrinking the mitochondrial tRNAome. In superwobbling, unmodified wobble tRNA-U34 can recognize all four codon wobble bases (A, G, C and U), allowing a single unmodified tRNA-U34 to read a 4-codon box. During code evolution, to suppress superwobbling in 2-codon sectors, U34 modification by methylation at the 5-carbon position appears essential. As expected, at the base of code evolution, tRNA-37 modifications mostly related to the identity of the adjacent tRNA-36 base. tRNA-37 modifications help maintain the translation frame during elongation. Candida Massiliensis sp. nov. Isolated from A Clinical Sample Jihane Kabtani, Fatima Boulanouar, Muriel Militiello, Carole Cassagne, Stéphane Ranque Subject: Life Sciences, Microbiology Keywords: biolog phenotypic technology; Candida; energy-dispersive X-ray spectroscopy; genotype; multilocus DNA sequencing; one new taxon; yeast The majority of Candida species are known as non-pathogenic yeasts and are rarely involved in human diseases. Recently, however, case reports of human infections caused by non-albicans Candida species have increased, mostly in immunocompromised hosts. Our study aimed to describe and characterise, as thoroughly as possible, a new species of the genus Candida, named here Candida massiliensis (PMML0037), isolated from a clinical sample of human sputum.
We compared genetic data based on the sequences of four genetic regions: the "Internal Transcribed Spacers" of rRNA, the D1/D2 domains (28S large subunit rRNA), and parts of the genes encoding Translation Elongation Factor 1-α and β-tubulin 2, with morphological characters from scanning electron microscopy (TM 4000 Plus, SU5000), physiological characters, including the results of oxidation and assimilation tests of different carbon sources using the Biolog system, and chemical mapping by energy-dispersive X-ray spectroscopy. Lastly, the in vitro antifungal susceptibility profile was determined using the E-test™ exponential gradient method. The multilocus analysis supported the genetic position of Candida massiliensis (PMML0037) as a new species of the genus Candida, and the phenotypic analysis highlighted its unique morphological and chemical profile compared to the other Candida species included in the study. Uganda Mountain Community Health System Perspectives and Capacities Towards Emerging Infectious Disease Surveillance Aggrey Siya, Richardson Mafigiri, Richard Migisha, Rebekah C. Kading Subject: Medicine & Pharmacology, Allergology Keywords: Alerts; Village Health Teams; Community Based Surveillance; Integrated Disease Surveillance and Reporting; Elgon; Climate Change; One Health In mountain communities like Sebei, Uganda, which are highly vulnerable to emerging and reemerging infectious diseases, community-based surveillance plays an important role in the monitoring of public health hazards. In this survey, we explored the capacities of Village Health Teams (VHTs) in the Sebei communities of Mount Elgon to undertake surveillance tasks for emerging and reemerging infectious diseases in the context of a changing climate. We used participatory epidemiology techniques to elucidate VHTs' perceptions of climate change and public health and to assess their capacities in conducting surveillance for emerging and reemerging infectious diseases.
Overall, VHTs perceived climate change to be occurring, with wider impacts on public health. However, they have inadequate capacities for collecting surveillance data. The VHTs lack transport to navigate through their communities and have insufficient capacities in using mobile phones for sending alerts. They do not engage in reporting other hazards related to the environment, wildlife and domestic livestock that would accelerate infectious disease outbreaks. Records are not maintained for disease surveillance activities, and the abilities of VHTs to analyze data are also limited. However, VHTs have access to platforms that can enable them to disseminate public health information. The VHTs thus need to be retooled to conduct their work effectively and efficiently by equipping them with adequate logistics and knowledge on collecting, storing, analyzing, and relaying data, which will improve infectious disease response and mitigation efforts. Effects of Domain Boundaries on the Diffraction Patterns of One Dimensional Structures Frederic Timmer, Joachim Wollschläger Subject: Physical Sciences, Condensed Matter Physics Keywords: spot-profile analysis; one dimensional physics; low energy electron diffraction; binary surface technique; supercell model; domain boundary Motivated by diffraction experiments on the (2√3×2√3)R30∘ reconstructed Si(111) surface obtained by deposition of rare earth elements (Dy, Tb) and silicide formation, we analyse the splitting and non-splitting of superstructure spots. For this purpose, we model diffraction patterns for one-dimensional structures generated by the binary surface technique and use supercell models to keep the analysis simple. Diffraction patterns are calculated in the framework of kinematical diffraction theory and are analyzed as a function of the domains and domain boundaries. Basic properties of the diffraction patterns are analyzed for model systems of two-fold and three-fold periodicity.
The rules derived from these calculations are applied to the "real-world" system of Si(111)-(2√3×2√3)R30∘-RESi2 (RE = Dy or Tb). Depending on the combination of domains and domain boundaries of different types, a plethora of different features is observed in the diffraction patterns. These are analyzed to determine the sizes of both domain boundaries and domains from the experimentally observed splitting of specific superstructure spots. RNA Polymerase and Transcription Mechanism: Forefront of Physicochemical Study as Chemical Reactions Nobuo Shimamoto, Masahiko Imashimizu Subject: Life Sciences, Molecular Biology Keywords: transcriptional regulation; reaction theory; prediction of promoters; one-dimensional diffusion; rate equation; detailed balance; antenna effect; physicochemical techniques Transcriptional regulation has been widely studied as one of the main bridges between biology and other basic sciences as well as medicine. The traffic across this bridge has been mostly unidirectional: chemistry and physics have provided many tools for biology, although the supply is now saturating. Traffic in the opposite direction, the supply of subjects that develop chemistry and physics, has been scant. However, if such a supply exists, it will come at least from transcription, where the notion of a chemical reaction is strongest. This topic aims to increase the opposite traffic by introducing the forefront of physicochemical studies of transcription.
Bias Correction of Chinese Fengyun-3C Microwave Humidity and Temperature Sounder Measurements in Retrieval of Atmospheric Parameters Qiurui He, Zhenzhan Wang, Jieying He Subject: Earth Sciences, Atmospheric Science Keywords: FY-3C/MWHTS; linear regression correction; neural networks correction; one-dimensional variational algorithm; atmospheric temperature and humidity profiles The Microwave Humidity and Temperature Sounder (MWHTS) on board the Fengyun (FY)-3C satellite measures the outgoing radiance from the Earth's surface and atmospheric constituents. MWHTS makes measurements in the isolated oxygen absorption line near 118 GHz and in the vicinity of the strong water vapor line around 183 GHz, and can provide fine vertical distribution structure of both atmospheric humidity and temperature. However, in order to obtain accurate soundings of humidity and temperature by the physical retrieval method, the bias between the observed radiances and those simulated by a radiative transfer model from the background or first-guess profiles must be corrected. In this study, two bias correction methods are developed through correlation analysis between MWHTS measurements and the air mass identified by the first-guess profiles of the physical inversion: one is linear regression correction (LRC) and the other is neural network correction (NNC), representing the linear and nonlinear relationships between MWHTS measurements and air mass, respectively. Both correction methods have been applied to MWHTS observed brightness temperatures over the geographic area (180° W-180° E, 60° S-60° N). The corrected results are evaluated by the probability density function of the difference between corrected observations and simulated values, and by the root mean square error (RMSE) with respect to simulated observations. The numerical results show that the NNC method performs better, especially in MWHTS channels 1 and 7-9, whose peak weighting function heights are close to the surface.
In order to assess the effects of the bias correction methods proposed in this study on the retrieval accuracy, a one-dimensional variational system was built and applied to the MWHTS uncorrected and corrected brightness temperatures to estimate atmospheric temperature and humidity profiles. The retrieval results show that the NNC has better performance, which is to be expected. An indication of the stability and robustness of the NNC method is given, which suggests that the NNC method has promising application prospects in physical retrieval. Cytoprotective Activity of Newly Synthesized 3-(arylmethyl)-6-methyl-4-phenylpyridin-2(1H)-one Derivatives Shynggys Sergazy, Zarina Shulgau, Aigerim Zhulikeyeva, Yerlan Ramankulov, Irina V. Palamarchuk, Ivan V. Kulakov Subject: Chemistry, Organic Chemistry Keywords: 3-aminopyridin-2(1H)-one derivatives; 3-(arylmethyl)-6-methyl-4-phenylpyridin-2(1H)-ones; antiradical activity; cytoprotective activity Currently, studies are being conducted on the possible role of the cytoprotective effect of biologically active substances under conditions of cerebral hypoxia or cardiomyopathies. At the same time, oxidative stress is considered one of the important mechanisms of cellular cytotoxicity and a target for the action of cytoprotectors. The aim of this study is to search for derivatives of 3-(arylmethyl)-6-methyl-4-phenylpyridin-2(1H)-ones. The probability of cytoprotective action was assessed by two tests measuring cell viability (with neutral red dye and the MTT test). It was found that, under the conditions of our experiment, some derivatives of 3-(arylmethyl)-6-methyl-4-phenylpyridin-2(1H)-ones have pronounced cytoprotective activity, providing better cell survival in vitro, including in the MTT test and under conditions of blood hyperviscosity. To corroborate the results obtained in vitro, molecular docking of the synthesized derivatives was also carried out.
The drug omeprazole (co-crystallized with the enzyme) was used as a reference. It was shown that all synthesized derivatives of 3-(arylmethyl)-6-methyl-4-phenylpyridin-2(1H)-ones had higher affinity for the selected protein than the standard gastro-cytoprotector omeprazole. The studied derivatives of 3-(arylmethyl)-6-methyl-4-phenylpyridin-2(1H)-ones also fully satisfy Lipinski's rule of five (RO5), which increases their chances for possible use as orally active drugs with good absorption and moderate lipophilicity. Thus, the results obtained make it possible to evaluate derivatives of 3-(arylmethyl)-6-methyl-4-phenylpyridin-2(1H)-ones as having relatively high cytoprotective potential. The Avocado (Persea americana Mill.): A Review and Sustainability Perspectives Subhash Janardhan Bhore, Daniela Salgado Ochoa, Amina Al Houssari, Angela Lopez Zelaya, Ru Yang, Zixin Chen, Sarah Siddiqui Deeya, Sheila C. da Silva Sens, Margherita Schumann, Zihan Zhang, Eslam Eltantawy Subject: Biology, Plant Sciences Keywords: Agriculture; antioxidants; Avocado; Cameroon; CAMAAY; deforestation; environment; food security; green gold; health; one health; sustainable development goals (SDGs); sustainability The fruits of the Avocado (Persea americana Mill.) plant are well-known for their high nutritional value, unique taste, and healthy oil. The plant has a history of about 10,000 years. Avocado fruit offers many health benefits, and its production is rapidly increasing. The Food and Agriculture Organization (FAO)'s recent data suggest that world Avocado production in 2019 was twice that of 2010 (3,778,010 tons). Avocado's global Gross Production Value was about 5.812 billion USD in 2018, and it is likely to increase rapidly because of the increasing demand for Avocado fruits. Avocado oil is also used in the cosmetic industry because of its therapeutic properties, and it boosts the economic value of the Avocado industry.
Avocado fruits have a rough green-gold skin; the fruits are called 'the green gold' because of their massive demand in the worldwide market and the lucrative business around them. The cultivation of Avocado has tremendous potential for increasing the rural economy and rural agriculture-based employment and for reducing the poverty rate of growers. On the other hand, the Avocado industry is highly criticised for deforestation, massive water utilisation, polluting water bodies with insecticides and fertilisers, posing a threat to other plant species, and environmental pollution. However, this does not diminish the importance of Avocado. Cameroon's average temperature is about 23 °C, which is considered optimal for Avocado propagation and commercial cultivation. The Cameroon Association of Active Youths (CAMAAY) wants to explore the possibilities of engaging Cameroon youths in Avocado cultivation. This review aims to provide an overview of Avocado. The review also highlights Avocado cultivation related issues from a one health and sustainability perspective in line with the global goals. Stateless Reassociation in WPA3 Using Paired Token Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Wi-Fi; WPA3; PMK caching; Stateless reassociation; Paired token; Secondary credential; JSON web token; One-time authenticated key establishment In WPA3, a secure connection is executed in two sequential stages. First, in the authentication and association stage, a pairwise master key (PMK) is generated. Second, in the post-association stage, a pairwise transient key (PTK) is generated from the PMK using the traditional 4-way handshake protocol. To reduce the heavy computation of the first stage, PMK caching can be used. If the client and AP have previously authenticated and hold a PMK cache, the client can skip the first, heavy stage and reuse the cached PMK to directly execute the 4-way handshake.
However, PMK caching is a primitive technology for managing a shared key between client and AP, and it has many limitations: the AP has to manage a stateful cache for multiple clients, the cache lifetime is limited, etc. Paired token (PT) \cite{LZ} is a new secondary credential scheme that provides a stateless pre-shared key (PSK) in a client-server environment. The server issues a paired token (a public token and a secret token) to an authenticated client, where the public token has the role of a signed identity and the secret token is a kind of shared secret. Once the client is equipped with a PT, it can be used for many symmetric-key cryptographic applications such as authentication, authorization, key establishment, etc. In this paper we apply the PT approach to WPA3 and try to replace PMK caching with one-time authenticated key establishment using PT. At the end of the authentication and association stage, the AP securely issues a PT to the client. Then, in the reassociation stage, the client and AP can compute the same one-time authenticated PMK from the PT in a stateless way and compute the PTK using the traditional 4-way handshake protocol. With this kind of stateless reassociation technology, an AP can provide high-performance service to a huge number of clients. Method of Accounting for Higher Harmonics in the Calculation of the Electronic Structure of the States of a Linear Molecule Excited by High-Intensity X-Rays Anton Kasprzhitskii, Georgy Lazorenko, Victor Yavna Subject: Physical Sciences, Atomic & Molecular Physics Keywords: one-center method, molecular orbital, higher harmonics, excited state, ionized state, linear molecule, hydrogen fluoride, lithium monofluoride, boron monofluoride The modern development of high-intensity and high-resolution X-ray technology allows detailed studies of the multiphoton absorption and scattering of X-ray photons by deep and subvalent shells of molecular systems over a wide energy range.
The interpretation of experimental data requires the improvement of computational methods for obtaining excited and ionized electron states of molecular systems with one or several vacancies. The specificity of these problems requires the use of molecular orbitals obtained in a one-center representation. Slow convergence of one-center expansions is a significant disadvantage of this approach; it affects the accuracy of the calculation of spectroscopic quantities. We offer a method of including higher harmonics in the one-center expansion of a molecular orbital by using the wave functions of electrons of the deep shells of a ligand (an off-center atom of a molecule). The method makes it possible to correctly account for the electron density of a linear molecule near the ligand when describing vacancies created in the molecular core that lead to radial relaxation of the electron density. An analysis of the parameters of the one-center expansion of the ligand functions depending on the ligand's charge is performed. The criteria for the inclusion of higher harmonics of the one-center decomposition of the ligand functions in the molecular orbital are determined. The efficiency of the method is demonstrated on the example of the diatomic molecules HF, LiF, and BF by estimating the energy characteristics of their ground and ionized states. Type I Almost-Homogeneous Manifolds of Cohomogeneity One—IV Zhuang-dan Guan, Pilar Orellana, Anthony Van Subject: Mathematics & Computer Science, Geometry & Topology Keywords: Kähler manifolds, Einstein metrics, Ricci curvature, fibration, almost-homogeneous, cohomogeneity one, semisimple Lie group, Sasakian Einstein, Calabi-Yau metrics This is the fourth part of [6] on the existence of Kähler-Einstein metrics on the general type I almost-homogeneous manifolds of cohomogeneity one. We carry all the results in [8] over to the type I cases.
In part II [14], we obtained many new Kähler-Einstein manifolds as well as Fano manifolds without Kähler-Einstein metrics. In particular, by applying Theorem 15 therein, we have complete results in Theorems 3 and 4 of that paper. However, we only have partial results in Theorem 5 there. In this note, we report recent progress on the Fano manifolds $N_{n,m}$ when $n > 15$ and $N'_{n,m}$ when $n > 4$. We give two nice pictures for these two classes of manifolds; see our Theorems 1 and 2 in the last section. Moreover, we pose two conjectures. Once these two conjectures are solved, the question for these two classes of manifolds will be completely settled. By applying our results to the canonical circle bundles, we also obtain Sasakian manifolds with or without Sasakian-Einstein metrics. That also gives some open Calabi-Yau manifolds. In Vitro Antibacterial Activities of Aniline Dithiocarbamate Crystals with its Corresponding Oxovanadium(IV) and Zinc(II) Coordination Compounds Ayodele Odularu, Peter Adewale Ajibade, Albert Bolhuis Subject: Chemistry, Medicinal Chemistry Keywords: one-pot synthesis; single crystal x-ray crystallography; oxovanadium(IV); zinc(II); spectroscopic studies; in vitro antibacterial studies Antibacterial activities can be improved using mixed ligands. The mixed ligands involved in this research are sodium sulfadiazine (Na-sfz) and dithiocarbamate (ai-dtc). One-pot synthesis was used to synthesize the ligand aniline dithiocarbamate (ai-dtc) and the corresponding coordination compounds [VO(sfz)(ai-dtc)] and [Zn(sfz)(ai-dtc)]. Crystals of ai-dtc, which grew from the solution after five days of refrigeration, were diffracted by single-crystal X-ray crystallography to reveal the structure. Other characterization techniques involving physicochemical parameters, FT-IR, UV-Vis and NMR (1H NMR and 13C NMR) were carried out on the ligands ai-dtc and sfz and the corresponding coordination compounds.
Differences in the FT-IR, UV-Vis and NMR results between the ligands and their respective coordination compounds confirmed the coordination. The in vitro antibacterial studies showed that the ligands (not the metal complexes) had modest activity against the Gram-positive bacterium Staphylococcus aureus, whereas the coordination compounds had modest activity against the Gram-negative bacteria Escherichia coli and Pseudomonas aeruginosa. Design and Implementation of an Environmental Planning Analysis Platform based on the "Three Lines One Permit" Xuya Zhang, Kuikui Yuan, Hongdi Lv, Lei Yu, Changbo Qin Subject: Earth Sciences, Environmental Sciences Keywords: Three Lines One Permit; web-based interactive analysis; online environmental planning analysis platform; EIA approval; Web-GIS; geospatial data; Guangzhou Currently, an interactive environmental planning analysis platform based on "Three Lines One Permit" (TLOP) is being developed to support environmental planning, construction project approval, and the application of TLOP outcome data in Guangzhou. The main objective is to provide governments, businesses and the public with environmental planning analysis tools to determine the sites of construction projects. The platform uses a browser-server architecture. Its core functions are an interactive environmental planning analysis tool for construction projects and a results display tool supporting map viewing. It provides users with access to a large amount of detailed geospatial data and TLOP results data, along with environmental planning analysis functions. This article describes the architecture and implementation of the platform and presents a case study illustrating its functionality. At present, the platform has been deployed and trial-operated. The content of the analysis framework is constantly expanding. This promotes the matching of environmental planning and analysis with local conditions.
This will advance the application of TLOP and improve the efficiency of project construction and the level of ecological environment planning and management. One Health Landscape in Sub-Saharan African Countries Folorunso Oludayo Fasina, Olubunmi G. Fasanmi, Yilma J. Makonnen, Charles Bebay, Bernard Bett, Kristina Roesel Subject: Keywords: one health; Africa; public health; animal health; environment health; zoonosis; emerging and re-emerging diseases; food safety; antimicrobial resistance; toxicosis An evaluation of emerging issues in One Health (OH) in Sub-Saharan Africa was undertaken to map the existing OH initiatives in Sub-Saharan African (SSA) countries. A desk review, an expert opinion survey, limited interviews and wider consultations with selected OH stakeholders were conducted. The strengths, weaknesses, opportunities and threats to OH initiatives were identified. OH influence, interest and impacts were evaluated. One Health is transitioning from multidisciplinary to transdisciplinary concepts, and the OH viewpoint should move beyond a 'proxy for zoonoses' to include issues of climate change, nutrition and food safety, social sciences, geography, policy and planning, economics, welfare and well-being, antimicrobial resistance (AMR), vector-borne diseases, toxicosis and pesticide issues. While the identified major strengths should be boosted, the weaknesses should be addressed. OH networks in SSA were spatially and temporally spread across SSA, and stakeholders were classified as key, latent, marginal and OH defenders. Imbalance in stakeholder representation led to hesitation in buy-in from stakeholders outside the main networks.
A theory of change, monitoring and evaluation frameworks, and tools for standardized evaluation of OH policies are needed for a sustained future of OH, and future OH engagement should be output- and outcome-driven, not activity-driven. A national roadmap for OH implementation and institutionalization is necessary, and proofs of concept in OH should be verified and scaled up. Dependence on external funding is unsustainable and must be addressed. The necessary policy and legal instruments to support OH nationally and sub-nationally should be implemented, taking cognizance of contemporary issues like urbanization, endemic poverty and other emerging issues. Utilizing current technologies and the OH approach to address the ongoing COVID-19 pandemic and other emerging diseases is desirable. Finally, OH implementation should be anticipatory, not reactive, to significantly benefit budgeting and to contain disease outbreaks in animal sources before the risk of spillover to humans arises. Philosophia Naturalis Renovata: Natural Philosophy for the Twenty-First Century Bruce MacLennan Subject: Arts & Humanities, Philosophy Keywords: natural philosophy; philosophy of science; Jungian psychology; depth psychology; analytical psychology; phenomenological psychology; evolutionary psychology; active imagination; Aristotle's four causes; aesthetics in science; philosophy as a way of life A revitalized practice of natural philosophy can help people to live a better life and promote a flourishing ecosystem. Such a philosophy is natural in two senses. First, it is natural by seeking to understand the whole of nature, including mental phenomena. In particular, a comprehensive natural philosophy should address the phenomena of sentience by embracing first- and second-person methods of investigation. Moreover, to expand our understanding of the world, natural philosophy should embrace a full panoply of explanations, similar to Aristotle's four causes.
Second, such a philosophy is natural by being grounded in human nature, taking full account of human capacities and limitations. Future natural philosophers should also make use of all human capacities, including emotion and intuition as well as reason and perception, to investigate nature. Finally, since the majority of our brain's activities are unconscious, natural philosophy should explore the unconscious mind with the aim of deepening our relation to the rest of nature and enhancing well-being. Highly Virulent and Multidrug-Resistant Escherichia coli Sequence Type 58 from a Sausage in Germany Elias Eger, Marielle Domke, Stefan E. Heiden, Madeleine Paditz, Veronika Balau, Christiane Huxdorff, Dirk Zimmermann, Timo Homeier-Bachmann, Katharina Schaufler Subject: Life Sciences, Microbiology Keywords: antimicrobial resistance; CTX-M-1; Enterobacteriaceae; Escherichia coli; extended-spectrum β-lactamase; food safety; IncI1; next-generation sequencing; One Health; virulence Studies have previously described the occurrence of multidrug-resistant (MDR) Escherichia coli in human and veterinary medical settings, livestock, and, to a lesser extent, in the environment and food. While they mostly analyzed foodborne E. coli regarding phenotypic and sometimes genotypic antibiotic resistance and basic phylogenetic classification, we have limited understanding of the in vitro and in vivo virulence characteristics and global phylogenetic contexts of these bacteria. Here, we investigated in depth an E. coli strain (PBIO3502) isolated from a pork sausage in Germany in 2021. Whole-genome sequence analysis revealed sequence type (ST)58, which is an emerging international high-risk clonal lineage. In addition to its MDR phenotype, which mostly matched the genotype, PBIO3502 demonstrated pronounced virulence features, including in vitro biofilm formation, siderophore secretion, serum resilience, and in vivo mortality in Galleria mellonella larvae.
Together with the genomic analysis, which indicated close phylogenetic relatedness of our strain to publicly available, clinically relevant representatives of the same ST, these results suggest the zoonotic and pathogenic character of PBIO3502, with the potential to cause infection in humans and animals. Our study also highlights the necessity of the One Health approach, which integrates human, animal, and environmental health, as well as the role of meat products and food chains in the putative transmission of MDR pathogens.
Quantum Computing Stack Exchange is a question and answer site for engineers, scientists, programmers, and computing professionals interested in quantum computing.

Syndrome extraction operator as matrix?

I am trying to understand how to obtain the syndrome extraction operator as a matrix in the quantum repetition code (if it even exists). The syndrome measurement is defined here (page 4) as: [perform] controlled-nots from the first and second received qubits to the first ancilla qubit, then from the first and third received qubits to the second ancilla bit. First of all, are the ancilla qubits entangled with $\vert \psi \rangle$? Here is what I have understood so far. Given $\vert \psi \rangle = \alpha \vert x_1, x_2, x_3\rangle + \beta \vert \overline{x_1}, \overline{x_2}, \overline{x_3}\rangle$ (where $\overline{x_{i}} = x_i + 1 \quad (\text{mod }2)$), we can define an operator $S$ as: $$S: \alpha \vert x_1, x_2, x_3\rangle + \beta \vert \overline{x_1}, \overline{x_2}, \overline{x_3}\rangle \rightarrow \underbrace{\vert x_1 \oplus x_2\rangle}_{\text{1$^{\text{st}}$ ancilla qubit}} \underbrace{\vert x_1 \oplus x_3\rangle}_{\, \text{ 2$^{\text{nd}}$ ancilla qubit}},$$ or equivalently as: $$S: \alpha \vert x_1, x_2, x_3\rangle + \beta \vert \overline{x_1}, \overline{x_2}, \overline{x_3}\rangle \rightarrow \underbrace{\vert \overline{x_1} \oplus \overline{x_2}\rangle}_{\text{1$^{\text{st}}$ ancilla qubit}} \underbrace{\vert \overline{x_1} \oplus \overline{x_3}\rangle}_{\, \text{ 2$^{\text{nd}}$ ancilla qubit}}.$$ This will yield the ancilla values corresponding to the following states: $$ \begin{align*} \text{State$\qquad~~$} & \text{$~~~~$Ancilla qubits}\\ \alpha \vert 000\rangle + \beta \vert 111\rangle &\qquad\qquad \vert 00\rangle\\ \alpha \vert 100\rangle + \beta \vert 011\rangle &\qquad\qquad \vert 11\rangle\\ \alpha \vert 010\rangle + \beta \vert 101\rangle &\qquad\qquad \vert 10\rangle\\ \alpha \vert
001\rangle + \beta \vert 110\rangle &\qquad\qquad \vert 01\rangle\\ \alpha \vert 110\rangle + \beta \vert 001\rangle &\qquad\qquad \vert 01\rangle\\ \alpha \vert 101\rangle + \beta \vert 010\rangle &\qquad\qquad \vert 10\rangle\\ \alpha \vert 011\rangle + \beta \vert 100\rangle &\qquad\qquad \vert 11\rangle\\ \alpha \vert 111\rangle + \beta \vert 000\rangle &\qquad\qquad \vert 00\rangle\\\\ \end{align*} $$ How can I perform such an operation using a matrix?

error-correction matrix-representation

asked by M. Al Jumaily

To answer your first question, no, the ancillas are not entangled with the system. Equation 2 of the link you provided gives you the joint state of the system and the introduced ancillas. For further discussion about ancillas: quantumcomputing.stackexchange.com/questions/1855/… users.physics.ox.ac.uk/~Steane/qec/qec_ams_4.html – Purva Thakre

@PurvaThakre, thank you for the suggestion. I did spend lots of time trying to grasp what is happening. I think I now understand that an ancilla qubit is a temporary slot that is used to store some information temporarily. My question now is: should one ancilla qubit in the repetition code be $\vert 000 \rangle$ rather than $\vert 0 \rangle$ so that the vectors are aligned with the operator's dimensions? If not, how can a 2D vector be used with an $8 \times 8$ matrix? I am more concerned about the details of the math here. – M. Al Jumaily

First of all, are the ancilla qubits entangled with $|\psi\rangle$?

Yes, depending on what the state is. Let's say you started with $$ \alpha|000\rangle+\beta|111\rangle, $$ but it has experienced an error of $(\cos\theta I+i\sin\theta X)$ on the first qubit. So, your state becomes $$ \cos\theta(\alpha|000\rangle+\beta|111\rangle)+i\sin\theta(\alpha|100\rangle+\beta|011\rangle).
$$ When you perform syndrome extraction, you'll get $$ \cos\theta(\alpha|000\rangle+\beta|111\rangle)|00\rangle+i\sin\theta(\alpha|100\rangle+\beta|011\rangle)|11\rangle. $$ This state has entanglement between the computational qubits register and the ancilla register. Of course, in the next step, you measure the ancilla qubits, and that projects the ancilla qubits into a basis state, removing all entanglement between system and ancillas. As for what the matrix is, I strongly recommend against performing such calculations. Once you're up to 5 qubits, you're talking a $32\times 32$ matrix, which is fairly awful. It is far better to just talk about the action on (relevant) basis states, as you did. However, since you have explicitly asked: I calculated this in Mathematica using cNOT[i_, j_] := IdentityMatrix[32] + KroneckerProduct[IdentityMatrix[2^(i - 1)], {{0, 0}, {0, 1}}, IdentityMatrix[2^(j - i - 1)], {{-1, 1}, {1, -1}}, IdentityMatrix[2^(5 - j)]] TeXForm[cNOT[1, 4].cNOT[1, 5].cNOT[2, 4].cNOT[3, 5]] (the first 3 qubits are the system qubits, and qubits 4&5 are the ancillas) giving the answer $$ \left( \begin{array}{cccccccccccccccccccccccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 
0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \end{array} \right) $$ We can then verify the calculation that I showed before: initial = {\[Alpha], 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \[Beta], 0, 0, 0}; witherror = KroneckerProduct[{{Cos[\[Theta]], I Sin[\[Theta]]}, {I Sin[\[Theta]], Cos[\[Theta]]}}, IdentityMatrix[2^4]].initial U.witherror DaftWullieDaftWullie $\begingroup$ Thank you for the detailed reply. I have two question please: the first is when the ancilla qubits are measured, the whole system doesn't collapse but just the subsystem? how can this be achieved? 
Also, how did you come up with IdentityMatrix[32] + ..., and what about the two $2 \times 2$ matrices MatrixForm[{{0, 0}, {0, 1}}] and MatrixForm[{{-1, 1}, {1, -1}}]?

The whole system does, kind of, collapse. There's a standard way of talking about the state after a measurement, particularly when you only measure some qubits. – DaftWullie

The logic of a controlled-not is to do nothing (identity matrix) unless the control qubit is in state 1. If it is, don't do the identity on the target (so subtract it) and add on the not action.

Can you kindly direct me to learning more about the standard way of talking about the state after a measurement? I am relatively new to QC.

Anything that talks about the measurement postulate will give you the formalism to handle this, although it might not talk explicitly about this case. What you need is to describe a projective measurement on only some qubits. For example, on 2 qubits, where you project onto the first qubit, you have the projection operators $|0\rangle\langle 0|\otimes I$ and $|1\rangle\langle 1|\otimes I$.
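As a cross-check of the Mathematica construction in the answer, the same syndrome-extraction unitary can be rebuilt from projectors. The sketch below is mine, not part of the original thread; it uses Python/NumPy with data qubits indexed 0-2 and ancillas 3-4 (mirroring the answer's 1-indexed qubits 1-3 and 4-5), and the helper names `cnot` and `basis` are my own:

```python
import numpy as np

I2 = np.eye(2)
X  = np.array([[0., 1.], [1., 0.]])
P0 = np.diag([1., 0.])   # |0><0| on the control: leave the target alone
P1 = np.diag([0., 1.])   # |1><1| on the control: apply X to the target

def kron_all(ops):
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(n, ctrl, tgt):
    """CNOT on n qubits (0-indexed): identity branch + flipped branch."""
    ops0 = [I2] * n; ops0[ctrl] = P0
    ops1 = [I2] * n; ops1[ctrl] = P1; ops1[tgt] = X
    return kron_all(ops0) + kron_all(ops1)

# First ancilla (qubit 3) records x1 XOR x2, second (qubit 4) records x1 XOR x3,
# matching cNOT[1,4].cNOT[1,5].cNOT[2,4].cNOT[3,5] in the answer.
U = cnot(5, 2, 4) @ cnot(5, 0, 4) @ cnot(5, 1, 3) @ cnot(5, 0, 3)

def basis(bits):
    v = np.zeros(2 ** len(bits))
    v[int("".join(map(str, bits)), 2)] = 1.
    return v

alpha, beta = 0.6, 0.8
# Error-free codeword: ancillas stay |00>.
psi = alpha * basis([0,0,0,0,0]) + beta * basis([1,1,1,0,0])
assert np.allclose(U @ psi, psi)

# Bit flip on the first qubit: both parity checks fire, syndrome |11>.
flip1 = alpha * basis([1,0,0,0,0]) + beta * basis([0,1,1,0,0])
assert np.allclose(U @ flip1,
                   alpha * basis([1,0,0,1,1]) + beta * basis([0,1,1,1,1]))
```

The assertions reproduce the first two rows of the syndrome table in the question; `U` itself is the same 32×32 permutation matrix printed in the answer.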
DC Motor Theory

One of the most important developments in the field of electricity is the electric motor. The electric motor converts electrical power into rotating mechanical power. Motors are used for such items as refrigeration and air conditioning, food mixers, vacuum cleaners, grinders, pumps, power bench saws, lathes, various wood and metal machines, as well as hundreds of other useful machines.

DC Motor Operation (Working Principle)

The DC motor is simply an application of magnetic principles. Motor rotation depends on the interaction of magnetic fields. You will recall that the laws of magnetism state that: Like poles repel each other. Unlike poles attract each other. A north pole repels a north pole. A south pole repels a south pole. North and south poles attract each other.

The theory of the simple DC motor is detailed in Figures 1 through 9. Figures 1 and 2 diagram the basic parts, the fields and the armature. Figures 3 and 4 put the motor parts together. Figures 5 through 9 take you through the motor action. Examine these figures.

Figure 1. A magnetic field exists between the north and south poles of a permanent magnet.

Figure 2. An electromagnet is wound on an iron core and the core is placed on a shaft so it can rotate. This assembly is called the armature.

Figure 3. The armature is placed in the permanent magnetic field.

Figure 4. The ends of the armature coil are connected to semicircular sections of metal called commutators. Brushes contact the rotating commutator sections and energize the armature coil from an external power source. (Recall that the polarity of the armature electromagnets depends on the direction of the current flowing through the coil.) A battery is connected to the brushes. Current flows into brush A to commutator section A, through the coil to section B, and back to the battery through brush B, completing the circuit. The armature coil is magnetized as indicated in the sketch.

Figure 5.
Figure 5. The north pole of the armature is repelled by the north pole of the field magnet. The south pole of the armature is repelled by the south pole of the field magnet. The armature turns one quarter revolution, or 90 degrees.

Figure 6. The north pole of the armature is attracted by the south pole of the field magnet. The south pole of the armature is attracted by the north pole of the field magnet. The armature turns another quarter turn. It has now turned one-half revolution.

Figure 7. As the commutator sections turn with the armature, section B contacts brush A and section A contacts brush B. The current now flows into section B and out of section A. The current has been reversed in the armature due to the commutator's switching action. The current reversal changes the polarity of the armature, so that unlike poles are next to each other.

Figure 8. Like poles repel each other and unlike poles attract each other. The armature turns another quarter turn.

Figure 9. Unlike poles attract each other and the armature turns the last quarter turn, completing one revolution. The commutator and brushes are now lined up in their original positions, which causes the current to reverse in the armature again. The armature continues to rotate by repulsion and attraction. The current is reversed at each one-half revolution by the commutator.

The construction of a simple DC motor is very similar to that of a DC generator. In fact, a DC generator and motor are often interchangeable in use. In these cases, they are referred to as DC machines.

To make the motor more powerful, the permanent field magnets can be replaced by electromagnets called field windings. The field winding is placed over a soft iron pole piece and consists of many turns of enamel-covered copper wire. Like the generator's, the field windings can have an independent source of voltage connected to them, or they can be connected in series or parallel with the armature windings to a single voltage source; examine Figure 10.
Figure 10. Sketches and schematic diagrams of field winding connections. A–Shunt wound motor, connected in parallel. B–Series wound motor. C–Separately excited field motor.

Construct a trial motor, Figure 11. Connect the motor first in series, then in parallel as a shunt motor. Compare the speed and power of the two configurations.

Figure 11. Examine the trial motor with series and parallel connections.

A Practical Motor

In industry, motors are made in a slightly different manner than discussed above. Rotational force comes from the interaction between the magnetic field found around a current-carrying conductor and a fixed magnetic field. A conductor carrying a current has a magnetic field around it; the direction of the field depends on the direction of the current. When this conductor is placed in a fixed magnetic field, the interaction between the two fields causes motion. Study Figures 12 through 16.

Figure 12. A magnetic field exists between the poles of a permanent magnet. The arrows indicate the direction of the field.

Figure 13. A current-carrying conductor has a magnetic field; its direction depends on the direction of the current. Use the left hand rule to determine direction.

Figure 14. The field around the conductor flows with the permanent field above the conductor but opposes the permanent field below the conductor. The conductor will move toward the weakened field.

Figure 15. The current has been reversed in the conductor, causing the conductor field to reverse. Now the field is reinforced below the conductor and weakened above the conductor. The conductor will move up.

Figure 16. The single conductor is replaced by a coil of conductors wound in the slots of an armature core. Notice how the interaction of the two fields will produce rotation. Coil side A moves up and coil side B moves down. The rotation is clockwise.

Armature coils on industrial motors are connected to commutator sections, as in the trial motor. The theory of operation is similar.
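The push on the conductor described in Figures 14 and 15 can be quantified by the standard force law F = I(L × B), which the text does not state explicitly; it is the usual quantitative form of this field interaction. The field, conductor length, and current values in the sketch below are invented for illustration:

```python
# Illustrative sketch (assumption: the standard force law F = I (L x B)
# quantifies the motion described in Figures 14 and 15). All numbers
# below are made-up example values, not data from the text.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def force_on_conductor(current, length_vec, b_field):
    """F = I (L x B); the direction of L follows the current direction."""
    return tuple(current * c for c in cross(length_vec, b_field))

B = (1.0, 0.0, 0.0)   # fixed permanent field along +x
L = (0.0, 1.0, 0.0)   # 1 m of conductor carrying current along +y

one_way = force_on_conductor(2.0, L, B)     # force along -z
reversed_i = force_on_conductor(-2.0, L, B) # current reversed: force along +z
```

Reversing the sign of the current flips the force vector, which is exactly the reversal of motion between Figures 14 and 15.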
A practical motor has several armature coils wound in separate slots around the core. Each coil has a commutator section. Increasing the number of field poles gives the motor greater power. A four-pole motor is sketched in Figure 17. The current divides into four parts, and the current flowing in the windings under each field pole produces rotation. This increases the turning power, or torque, of the motor.

Figure 17. The torque of the motor is increased by adding armature coils and field coils.

Counter-electromotive Force

When a conductor cuts through a magnetic field, a voltage is induced in the moving conductor. So although a motor is meant to convert electrical energy into mechanical energy, once the armature begins to rotate, the motor also acts as a generator. The generated electromotive force that opposes the applied emf is called counter-electromotive force, often written as counter emf or c-emf. It is a result of the generator action of the motor. If the motor were connected to a prime mover and rotated in the same direction as the DC motor, it would produce a voltage with the opposite polarity. See Figure 18.

Figure 18. The generator and the motor are rotating clockwise. The DC generator develops a polarity opposite of the motor polarity for the same clockwise rotation. This is the basis of counter emf.

The counter emf magnitude increases as the rotational speed and the field strength increase. Therefore:

\[\text{Counter emf} = \text{Speed} \times \text{Field Strength} \times K\]

where K is a constant that varies from motor to motor. It is affected by factors such as the number of windings.
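The counter-emf relation above can be tried numerically. The motor constant K and the operating figures below are made-up example values, not data from the text:

```python
# Counter emf = Speed x Field Strength x K (relation from the text).
# K is a hypothetical motor constant; real values depend on the motor's
# construction, e.g. the number of windings.

def counter_emf(speed_rpm, field_strength, k):
    return speed_rpm * field_strength * k

K = 0.05  # made-up constant for this example

full_speed = counter_emf(1200, 1.5, K)  # 90 V at full speed
half_speed = counter_emf(600, 1.5, K)   # 45 V: halving the speed halves the counter emf
```

The proportionality to speed is what matters: as the armature slows, the counter emf falls with it, which drives the current behavior discussed next.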
The actual effective voltage applied across the armature windings must equal:

\[E_{source} - E_{counter} = E_{armature}\]

The current flowing in the armature windings at any given instant can be found using Ohm's law when the ohmic resistance of the windings is known:

\[I_{armature} = \frac{E_{armature}}{R_{armature}}\]

It is important to note that as the rotation of the motor armature slows, less counter emf is generated. With less counter emf being produced, the current through the armature circuit increases. The current continues to increase until the motor stops rotating, as it does when physically overloaded. When the motor stalls, the current through the armature circuit is limited only by the resistance of the armature, which results in extremely high current values.

A DC motor must be properly protected against overload conditions. Overload protection can be provided through one of several methods; the method used depends upon the size, type, and application of the motor. The circuit feeding power to the motor is usually protected by a fuse or circuit breaker, which provides the best protection against damage from short-circuit or locked-rotor conditions. Locked rotor is a term meaning that the rotor is not turning, because of a physical impedance, while power is applied to the motor.

Actual overload protection is usually provided by a thermo-overload device: a simple ratchet wheel held in place by a metal alloy such as solder. When an overload generates sufficient current to melt the solder, the wheel is freed to rotate, causing the circuit to open. This allows the motor to shut down safely before any damage occurs to the equipment or injury to personnel. See Figure 19.
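The two relations above combine into a short numeric sketch. The 120 V supply and 0.5 Ω armature resistance are invented example values; the point is how the current climbs as the counter emf falls away:

```python
# E_source - E_counter = E_armature, then Ohm's law I = E_armature / R
# (relations from the text; the voltage and resistance are made up).

def armature_current(e_source, e_counter, r_armature):
    e_armature = e_source - e_counter  # effective voltage across the windings
    return e_armature / r_armature     # Ohm's law

E_SOURCE = 120.0  # hypothetical supply voltage, volts
R_ARM = 0.5       # hypothetical armature winding resistance, ohms

running = armature_current(E_SOURCE, 110.0, R_ARM)  # at speed: (120 - 110) / 0.5 = 20 A
stalled = armature_current(E_SOURCE, 0.0, R_ARM)    # locked rotor: 120 / 0.5 = 240 A
```

With no counter emf, only the small armature resistance limits the current, which is why a stalled motor draws such extreme current and why overload protection is required.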
Figure 19. The thermo-overload mounts to the bottom of a motor starter and provides protection for motor overloads.

Another type of overload protection is the bimetallic overload device. Bimetallic overload devices contain a bimetallic strip with contacts at each end. A bimetallic strip has two layers, each made of a different metal. Current flows through the bimetallic strip and through the contacts. Because the two metals expand at different rates, the strip bends as current heats it; when the current reaches a set level, the strip bends far enough that the contacts separate, opening the circuit. Once the bimetallic strip cools, it returns to its original shape and closes the contacts once more. See Figure 20.

According to the National Electrical Code, DC motors over one horsepower should be protected by a fuse or breaker rated at no more than 150 percent of the full-load current. The conductor feeding the motor should be sized to carry at least 125 percent of the full-load current. The thermo-overload device is sized close to the maximum full-load current rating of the motor, usually at 115 to 125 percent of the full-load current, depending on the exact type of motor and its application.

Figure 20. The two different metals expand to different lengths, causing an open circuit.

Commutation and Interpoles

As the motor armature rotates, the current in the armature windings routinely reverses. This is caused by commutator action. Due to the self-inductance of the windings, however, the current does not reverse instantly, which results in sparking at the commutator brushes. There are a number of methods for preventing these sparks. Changing the position of the brushes is one method: the brushes are moved slightly against the direction of rotation, and the counter emf induced under the previous pole opposes the self-induction caused by the decreasing current in the coil. Sparking is eliminated.
This method, however, is not practical for large motors used under varying load conditions, because as the load varies, the brush position must be changed by hand. Instead, larger motors use interpoles to reduce the sparking. An interpole is a smaller field pole placed midway between the main field poles, Figure 21. The interpole has the same polarity as the main field poles and follows the main pole in the direction of rotation. Interpoles are also called commutating poles.

Figure 21. Interpoles reduce sparking at the commutator.

A counter emf is developed as the armature passes the interpole. This counter emf overcomes the emf caused by self-induction in the armature windings. The windings of the interpole are connected in series with the armature and carry the armature current. Thus, interpole field strength varies as the load varies, and it provides automatic control of commutator sparking.

Speed Regulation

Many motors are designed for special purposes. Some develop full power under load, while others must be brought up to speed before the load is applied. Once the operating speed for a motor is determined on the job, the motor should maintain that speed under varying load conditions. The difference between the no-load speed and the full-load speed, expressed as a percentage of the full-load speed, is called the percent of speed regulation. The equation is written:

\[\text{Percent Speed Regulation} = \frac{\text{No-Load Speed} - \text{Full-Load Speed}}{\text{Full-Load Speed}} \times 100\%\]

A low speed-regulation percentage means that the motor operates at a fairly constant speed, regardless of the load applied.
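The speed-regulation formula above, written as a function; the no-load and full-load speeds are example figures, not values from the text:

```python
# Percent speed regulation =
#     (no-load speed - full-load speed) / full-load speed * 100
# The speeds below are hypothetical example values.

def percent_speed_regulation(no_load_rpm, full_load_rpm):
    return (no_load_rpm - full_load_rpm) / full_load_rpm * 100.0

reg = percent_speed_regulation(1800.0, 1750.0)  # roughly 2.9 percent
```

A figure under a few percent, as here, indicates a motor that holds nearly constant speed from no load to full load.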
Shock Wave Interactions and the Riemann-flat Condition: The Geometry behind Metric Smoothing and the Existence of Locally Inertial Frames in General Relativity (1610.02390)
Moritz Reintjes, Blake Temple
Oct. 28, 2019; math.DG, gr-qc

We prove that the essential smoothness of the gravitational metric at shock waves in GR, a PDE regularity issue for weak solutions of the Einstein equations, is determined by a geometrical condition which we introduce and name the {\it Riemann-flat condition}. The Riemann-flat condition determines whether or not the essential smoothness of the gravitational metric is two full derivatives more regular than the Riemann curvature tensor. This provides a geometric framework for the open problem as to whether {\it regularity singularities} (points where the curvature is in $L^\infty$ but the essential smoothness of the gravitational metric is only Lipschitz continuous) can be created by shock wave interaction in GR, or whether metrics Lipschitz at shocks can always be smoothed one level to $C^{1,1}$ by coordinate transformation. As a corollary of the ideas we give a proof that locally inertial frames always exist in a natural sense for shock wave metrics in spherically symmetric spacetimes, independent of whether the metric itself can be smoothed to $C^{1,1}$ locally. This latter result yields an explicit procedure (analogous to Riemann Normal Coordinates in smooth spacetimes) for constructing locally inertial coordinates for Lipschitz metrics, and is a new regularity result for GR solutions constructed by the Glimm scheme.
The Regularity Transformation Equations: An elliptic mechanism for smoothing gravitational metrics in General Relativity (1805.01004)
May 2, 2018; gr-qc

{\it Regularity singularities} are points in spacetime where the gravitational metric tensor of General Relativity fails to be at least two levels more regular than its curvature tensor. Whether regularity singularities exist for shock wave solutions constructed by the Glimm scheme in GR is an open problem. In this paper we address the problem at the general level of connections $\Gamma\in W^{m,p}$ satisfying $d\Gamma\in W^{m,p}$ as well, and ask the question as to whether there always exists a coordinate transformation with Jacobian $J\in W^{m+1,p}$ that smooths the connection by one order. Introducing a new approach to this problem, we derive a system of nonlinear elliptic Poisson equations, which we call the {\it Regularity Transformation equations} (RT-equations), with matrix-valued differential forms as unknowns, and prove that the existence of solutions to these equations is equivalent to the Riemann-flat condition, which was shown by authors to be equivalent to the existence of a coordinate transformation smoothing the connection by one order. Different from earlier approaches, our method does not employ any apriori coordinate ansatz. In a forthcoming paper we establish an existence theory which demonstrates the consistency of the RT-equations at the levels of smoothness we seek, and a mathematical framework for completing the existence theory is outlined in the final section.

Constrained Systems of Conservation Laws: A Geometric Theory (1510.06677)
Moritz Reintjes
Sept. 27, 2017; math.AP, math-ph, math.MP

We address the Riemann and Cauchy problems for systems of $n$ conservation laws in $m$ unknowns which are subject to $m-n$ constraints ($m\geq n$).
Such constrained systems generalize systems of conservation laws in standard form to include various examples of conservation laws in Physics and Engineering beyond gas dynamics, e.g., multi-phase flow in porous media. We prove local well-posedness of the Riemann problem and global existence of the Cauchy problem for initial data with sufficiently small total variation, in one spatial dimension. The key to our existence theory is to generalize the $m\times n$ systems of constrained conservation laws to $n\times n$ systems of conservation laws with states taking values in an $n$-dimensional manifold and to extend Lax's theory for local existence as well as Glimm's random choice method to our geometric framework. Our resulting existence theory allows for the accumulation function to be non-invertible across hypersurfaces.

The Fermionic Signature Operator and Space-Time Symmetries (1708.09643)
Felix Finster, Moritz Reintjes
June 28, 2019; math-ph, math.MP, gr-qc

We show that and specify how space-time symmetries give rise to corresponding symmetries of the fermionic signature operator and generalized fermionic projector states.

A note on incompressibility of relativistic fluids and the instantaneity of their pressures (1601.08106)
June 14, 2017; gr-qc

We introduce a natural notion of incompressibility for fluids governed by the relativistic Euler equations on a fixed background spacetime, and show that the resulting equations reduce to the incompressible Euler equations in the classical limit as $c\rightarrow \infty$. As our main result, we prove that the fluid pressure of solutions of these incompressible "relativistic" Euler equations satisfies an elliptic equation on each of the hypersurfaces orthogonal to the fluid four-velocity, which indicates infinite speed of propagation.

Spacetime is Locally Inertial at Points of General Relativistic Shock Wave Interaction between Shocks from Different Characteristic Families (1409.5060)
Feb. 7, 2017; gr-qc

We prove that spacetime is locally inertial at points of shock wave collision in General Relativity. The result applies for collisions between shock waves coming from different characteristic families, in spherically symmetric spacetimes. We give a constructive proof that there exist coordinate transformations which raise the regularity of the gravitational metric tensor from $C^{0,1}$ to $C^{1,1}$ in a neighborhood of such points of shock wave interaction, and a $C^{1,1}$ metric regularity suffices for locally inertial frames to exist. This result corrects an error in our earlier RSPA-publication, which led us to the wrong conclusion that such coordinate transformations, which smooth the metric to $C^{1,1}$, cannot exist. Our result here proves that regularity singularities, (a type of mild singularity introduced in our RSPA-publication), do not exist at points of interacting shock waves from different families in spherically symmetric spacetimes, and this generalizes Israel's famous 1966 result to the case of such shock wave interactions. The strategy of proof here is an extension of the strategy outlined in our RSPA-paper, but differs fundamentally from the method used by Israel. The question whether regularity singularities exist in more complicated shock wave solutions of the Einstein Euler equations still remains open.

The Fermionic Signature Operator and Hadamard States in the Presence of a Plane Electromagnetic Wave (1609.04516)
Jan. 23, 2017; hep-th, math-ph, math.MP

We give a non-perturbative construction of a distinguished state for the quantized Dirac field in Minkowski space in the presence of a time-dependent external field of the form of a plane electromagnetic wave. By explicit computation of the fermionic signature operator, it is shown that the Dirac operator has the strong mass oscillation property. We prove that the resulting fermionic projector state is a Hadamard state.
A Non-Perturbative Construction of the Fermionic Projector on Globally Hyperbolic Manifolds II - Space-Times of Infinite Lifetime (1312.7209)
Jan. 14, 2017; math-ph, math.MP, gr-qc, math.FA

The previous functional analytic construction of the fermionic projector on globally hyperbolic Lorentzian manifolds is extended to space-times of infinite lifetime. The construction is based on an analysis of families of solutions of the Dirac equation with a varying mass parameter. It makes use of the so-called mass oscillation property which implies that integrating over the mass parameter generates decay of the Dirac wave functions at infinity. We obtain a canonical decomposition of the solution space of the massive Dirac equation into two subspaces, independent of observers or the choice of coordinates. The constructions are illustrated in the examples of ultrastatic space-times and de Sitter space-time.

"Regularity Singularities" and the Scattering of Gravity Waves in Approximate Locally Inertial Frames (1506.04074)

It is an open question whether solutions of the Einstein-Euler equations are smooth enough to admit locally inertial coordinates at points of shock wave interaction, or whether "regularity singularities" can exist at such points. The term {\it regularity singularity} was proposed by the authors as a point in spacetime where the gravitational metric tensor is Lipschitz continuous ($C^{0,1}$), but no smoother, in any coordinate system of the $C^{1,1}$ atlas. An existence theory for shock wave solutions in $C^{0,1}$ admitting arbitrary interactions has been proven for the Einstein-Euler equations in spherically symmetric spacetimes, but $C^{1,1}$ is the requisite smoothness required for space-time to be locally flat. Thus the open problem of regularity singularities is the problem as to whether locally inertial coordinate systems exist at shock waves within the larger $C^{1,1}$ atlas.
To clarify this open problem, we identify new "Coriolis type" effects in the geometry of $C^{0,1}$ shock wave metrics and prove they are essential in the sense that they can never be made to vanish within the atlas of {\it smooth} coordinate transformations, the atlas usually assumed in classical differential geometry. Thus the problem of existence of regularity singularities is equivalent to the question as to whether or not these Coriolis type effects are essentially non-removable and 'real', or merely coordinate effects that can be removed, (in analogy to classical Coriolis forces), by going to the less regular atlas of $C^{1,1}$ transformations. If essentially non-removable, it would argue strongly for a 'real' new physical effect for General Relativity, providing a physical context to the open problem of regularity singularities.

A Non-Perturbative Construction of the Fermionic Projector on Globally Hyperbolic Manifolds I - Space-Times of Finite Lifetime (1301.5420)

We give a functional analytic construction of the fermionic projector on a globally hyperbolic Lorentzian manifold of finite lifetime. The integral kernel of the fermionic projector is represented by a two-point distribution on the manifold. By introducing an ultraviolet regularization, we get to the framework of causal fermion systems. The connection to the "negative-energy solutions" of the Dirac equation and to the WKB approximation is explained and quantified by a detailed analysis of closed Friedmann-Robertson-Walker universes.

A Proposal of a Damping Term for the Relativistic Euler Equations (1511.08183)
Nov. 25, 2015; gr-qc

We introduce a damping term for the special relativistic Euler equations in $3$-D and show that the equations reduce to the non-relativistic damped Euler equations in the Newtonian limit. We then write the equations as a symmetric hyperbolic system for which local-in-time existence of smooth solutions can be shown.
Points of General Relativistic Shock Wave Interaction are "Regularity Singularities" where Spacetime is Not Locally Flat (1105.0798)
June 15, 2015; math.AP, gr-qc

We show that the regularity of the gravitational metric tensor in spherically symmetric spacetimes cannot be lifted from $C^{0,1}$ to $C^{1,1}$ within the class of $C^{1,1}$ coordinate transformations in a neighborhood of a point of shock wave interaction in General Relativity, without forcing the determinant of the metric tensor to vanish at the point of interaction. This is in contrast to Israel's Theorem which states that such coordinate transformations always exist in a neighborhood of a point on a smooth single shock surface. The results thus imply that points of shock wave interaction represent a new kind of singularity for perfect fluids evolving in spacetime, singularities that make perfectly good sense physically, that can form from the evolution of smooth initial data, but at which the spacetime is not locally Minkowskian under any coordinate transformation. In particular, at such singularities, delta function sources in the second derivatives of the gravitational metric tensor exist in all coordinate systems of the $C^{1,1}$ atlas, but due to cancelation, the curvature tensor remains uniformly bounded.

No Regularity Singularities Exist at Points of General Relativistic Shock Wave Interaction between Shocks from Different Characteristic Families (1506.04081)

We give a constructive proof that coordinate transformations exist which raise the regularity of the gravitational metric tensor from $C^{0,1}$ to $C^{1,1}$ in a neighborhood of points of shock wave collision in General Relativity. The proof applies to collisions between shock waves coming from different characteristic families, in spherically symmetric spacetimes.
Our result here implies that spacetime is locally inertial and corrects an error in our earlier RSPA-publication, which led us to the false conclusion that such coordinate transformations, which smooth the metric to $C^{1,1}$, cannot exist. Thus, our result implies that regularity singularities, (a type of mild singularity introduced in our RSPA-paper), do not exist at points of interacting shock waves from different families in spherically symmetric spacetimes. Our result generalizes Israel's celebrated 1966 paper to the case of such shock wave interactions but our proof strategy differs fundamentally from that used by Israel and is an extension of the strategy outlined in our original RSPA-publication. Whether regularity singularities exist in more complicated shock wave solutions of the Einstein Euler equations remains open.

The 'Regularity Singularity' at Points of General Relativistic Shock Wave Interaction (1112.1803)
Sept. 18, 2014; gr-qc

A proof that a new kind of non-removable {\it "regularity singularity"} forms when two shock waves collide within the theory of General Relativity was first announced in Proc. Roy. Soc. A \cite{ReintjesTemple}. In the present paper we give complete proofs of the claims in \cite{ReintjesTemple} and extend the results on the regularity of the Einstein curvature tensor to the full Riemann curvature tensor. The main result is that, in a neighborhood of a point where two shock waves collide in a spherically symmetric spacetime, the gravitational metric tensor cannot be lifted from $C^{0,1}$ to $C^{1}$ within the class of $C^{1,1}$ coordinate transformations. This contrasts Israel's celebrated theorem \cite{Israel}, which states that around each point on a {\it single} shock surface there exists a coordinate system in which the metric is $C^{1,1}$ regular.
Moreover, at points of shock wave interaction, delta function sources exist in the second derivatives of the gravitational metric tensor in all coordinate systems of the $C^{1,1}$-atlas, but due to cancellation, the Einstein and Riemann curvature tensors remain sup-norm bounded. We conclude that points of shock wave interaction are a new kind of spacetime singularity, (which we name "regularity singularity"), singularities that can form from the evolution of smooth initial data for perfect fluids and that lie in physical spacetime, but at such points {\it locally inertial} coordinates fail to exist.

The Dirac Equation and the Normalization of its Solutions in a Closed Friedmann-Robertson-Walker Universe (0901.0602)
Feb. 6, 2013; math-ph, math.MP, gr-qc

We set up the Dirac equation in a Friedmann-Robertson-Walker geometry and separate the spatial and time variables. In the case of a closed universe, the spatial dependence is solved explicitly, giving rise to a discrete set of solutions. We compute the probability integral and analyze a space-time normalization integral. This analysis allows us to introduce the fermionic projector in a closed Friedmann-Robertson-Walker geometry and to specify its global normalization as well as its local form.
Characterisation of age and polarity at onset in bipolar disorder

Janos L. Kalman, Loes M. Olde Loohuis, Annabel Vreeker, Andrew McQuillin, Eli A. Stahl, Douglas Ruderfer, Maria Grigoroiu-Serbanescu, Georgia Panagiotaropoulou, Stephan Ripke, Tim B. Bigdeli, Frederike Stein, Tina Meller, Susanne Meinert, Helena Pelin, Fabian Streit, Sergi Papiol, Mark J. Adams, Rolf Adolfsson, Kristina Adorjan, Ingrid Agartz, Sofie R. Aminoff, Heike Anderson-Schmidt, Ole A. Andreassen, Raffaella Ardau, Jean-Michel Aubry, Ceylan Balaban, Nicholas Bass, Bernhard T. Baune, Frank Bellivier, Antoni Benabarre, Susanne Bengesser, Wade H Berrettini, Marco P. Boks, Evelyn J. Bromet, Katharina Brosch, Monika Budde, William Byerley, Pablo Cervantes, Catina Chillotti, Sven Cichon, Scott R. Clark, Ashley L. Comes, Aiden Corvin, William Coryell, Nick Craddock, David W. Craig, Paul E. Croarkin, Cristiana Cruceanu, Piotr M. Czerski, Nina Dalkner, Udo Dannlowski, Franziska Degenhardt, Maria Del Zompo, J. Raymond DePaulo, Srdjan Djurovic, Howard J. Edenberg, Mariam Al Eissa, Torbjørn Elvsåshagen, Bruno Etain, Ayman H. Fanous, Frederike Fellendorf, Alessia Fiorentino, Andreas J. Forstner, Mark A. Frye, Janice M. Fullerton, Katrin Gade, Julie Garnham, Elliot Gershon, Michael Gill, Fernando S.
Goes, Katherine Gordon-Smith, Paul Grof, Jose Guzman-Parra, Tim Hahn, Roland Hasler, Maria Heilbronner, Urs Heilbronner, Stephane Jamain, Esther Jimenez, Ian Jones, Lisa Jones, Lina Jonsson, Rene S. Kahn, John R. Kelsoe, James L. Kennedy, Tilo Kircher, George Kirov, Sarah Kittel-Schneider, Farah Klöhn-Saghatolislam, James A. Knowles, Thorsten M. Kranz, Trine Vik Lagerberg, Mikael Landen, William B. Lawson, Marion Leboyer, Qingqin S. Li, Mario Maj, Dolores Malaspina, Mirko Manchia, Fermin Mayoral, Susan L. McElroy, Melvin G. McInnis, Andrew M. McIntosh, Helena Medeiros, Ingrid Melle, Vihra Milanova, Philip B. Mitchell, Palmiero Monteleone, Alessio Maria Monteleone, Markus M. Nöthen, Tomas Novak, John I. Nurnberger, Niamh O'Brien, Kevin S. O'Connell, Claire O'Donovan, Michael C. O'Donovan, Nils Opel, Abigail Ortiz, Michael J. Owen, Erik Pålsson, Carlos Pato, Michele T. Pato, Joanna Pawlak, Julia-Katharina Pfarr, Claudia Pisanu, James B. Potash, Mark H Rapaport, Daniela Reich-Erkelenz, Andreas Reif, Eva Reininghaus, Jonathan Repple, Hélène Richard-Lepouriel, Marcella Rietschel, Kai Ringwald, Gloria Roberts, Guy Rouleau, Sabrina Schaupp, William A Scheftner, Simon Schmitt, Peter R. Schofield, K. Oliver Schubert, Eva C. Schulte, Barbara Schweizer, Fanny Senner, Giovanni Severino, Sally Sharp, Claire Slaney, Olav B. Smeland, Janet L. Sobell, Alessio Squassina, Pavla Stopkova, John Strauss, Alfonso Tortorella, Gustavo Turecki, Joanna Twarowska-Hauser, Marin Veldic, Eduard Vieta, John B. Vincent, Wei Xu, Clement C. Zai, Peter P. Zandi, Psychiatric Genomics Consortium (PGC) Bipolar Disorder Working Group, International Consortium on Lithium Genetics (ConLiGen), Colombia-US Cross Disorder Collaboration in Psychiatric Genetics, Arianna Di Florio, Jordan W. Smoller, Joanna M. Biernacka, Francis J. McMahon, Martin Alda, Bertram Müller-Myhsok, Nikolaos Koutsouleris, Peter Falkai, Nelson B. Freimer, Till F.M. Andlauer, Thomas G. Schulze, Roel A. 
Ophoff Journal: The British Journal of Psychiatry / Volume 219 / Issue 6 / December 2021 Published online by Cambridge University Press: 25 August 2021, pp. 659-669 Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools. To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics. Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts. Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO. AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits.
Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses. Systematic and other reviews: criteria and complexities Robert T Sataloff, Matthew L Bush, Rakesh Chandra, Douglas Chepeha, Brian Rotenberg, Edward W Fisher, David Goldenberg, Ehab Y Hanna, Joseph E Kerschner, Dennis H Kraus, John H Krouse, Daqing Li, Michael Link, Lawrence R Lustig, Samuel H Selesnick, Raj Sindwani, Richard J Smith, James Tysome, Peter C Weber, D Bradley Welling Journal: The Journal of Laryngology & Otology / Volume 135 / Issue 7 / July 2021 Print publication: July 2021 Clinical evaluation of Sofia Rapid Antigen Assay for detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) among emergency department to hospital admissions SARS-CoV-2/COVID-19 Richard D. Smith, J. Kristie Johnson, Colleen Clay, Leo Girio-Herrera, Diane Stevens, Michael Abraham, Paul Zimand, Mark Ahlman, Sheri Gimigliano, Richard Zhao, Cynthia Hildenbrand, Fermin Barrueto, Surbhi Leekha Journal: Infection Control & Hospital Epidemiology , First View Published online by Cambridge University Press: 24 June 2021, pp. 1-6 To determine the utility of the Sofia SARS rapid antigen fluorescent immunoassay (FIA) to guide hospital-bed placement of patients being admitted through the emergency department (ED). Cross-sectional analysis of a clinical quality improvement study. This study was conducted in 2 community hospitals in Maryland from September 21, 2020, to December 3, 2020. In total, 2,887 patients simultaneously received the Sofia SARS rapid antigen FIA and SARS-CoV-2 RT-PCR assays on admission through the ED. Rapid antigen results and symptom assessment guided initial patient placement while confirmatory RT-PCR was pending. 
The sensitivity, specificity, positive predictive values, and negative predictive values of the rapid antigen assay were calculated relative to RT-PCR, overall and separately for symptomatic and asymptomatic patients. Assay sensitivity was compared to RT-PCR cycle threshold (Ct) values. Assay turnaround times were compared. Clinical characteristics of RT-PCR–positive patients and potential exposures from false-negative antigen assays were evaluated. For all patients, overall agreement was 97.9%; sensitivity was 76.6% (95% confidence interval [CI], 71%–82%), and specificity was 99.7% (95% CI, 99%–100%). We detected no differences in performance between asymptomatic and symptomatic individuals. As RT-PCR Ct increased, the sensitivity of the antigen assay decreased. The mean turnaround time for the antigen assay was 1.2 hours (95% CI, 1.0–1.3) and for RT-PCR it was 20.1 hours (95% CI, 18.9–40.3) (P < .001). No transmission from antigen-negative/RT-PCR–positive patients was identified. Although not a replacement for RT-PCR for detection of all SARS-CoV-2 infections, the Sofia SARS antigen FIA has clinical utility for potential initial timely patient placement. A history of high-power laser research and development in the United Kingdom 60th Celebration of First Laser Colin N. Danson, Malcolm White, John R. M. Barr, Thomas Bett, Peter Blyth, David Bowley, Ceri Brenner, Robert J. Collins, Neal Croxford, A. E. Bucker Dangor, Laurence Devereux, Peter E. Dyer, Anthony Dymoke-Bradshaw, Christopher B. Edwards, Paul Ewart, Allister I. Ferguson, John M. Girkin, Denis R. Hall, David C. Hanna, Wayne Harris, David I. Hillier, Christopher J. Hooker, Simon M. Hooker, Nicholas Hopps, Janet Hull, David Hunt, Dino A. Jaroszynski, Mark Kempenaars, Helmut Kessler, Sir Peter L. Knight, Steve Knight, Adrian Knowles, Ciaran L. S. Lewis, Ken S. Lipton, Abby Littlechild, John Littlechild, Peter Maggs, Graeme P. A. Malcolm, OBE, Stuart P. D. Mangles, William Martin, Paul McKenna, Richard O. 
Moore, Clive Morrison, Zulfikar Najmudin, David Neely, Geoff H. C. New, Michael J. Norman, Ted Paine, Anthony W. Parker, Rory R. Penman, Geoff J. Pert, Chris Pietraszewski, Andrew Randewich, Nadeem H. Rizvi, Nigel Seddon, MBE, Zheng-Ming Sheng, David Slater, Roland A. Smith, Christopher Spindloe, Roy Taylor, Gary Thomas, John W. G. Tisch, Justin S. Wark, Colin Webb, S. Mark Wiggins, Dave Willford, Trevor Winstone Journal: High Power Laser Science and Engineering / Volume 9 / 2021 Published online by Cambridge University Press: 27 April 2021, e18 The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years. Relating pollen representation to an evolving Amazonian landscape between the last glacial maximum and Late Holocene Richard J. Smith, Francis E. Mayle, S. Yoshi Maezumi, Mitchell J. Power Journal: Quaternary Research / Volume 99 / January 2021 Published online by Cambridge University Press: 25 August 2020, pp. 63-79 Print publication: January 2021 In contrast to temperate regions, relationships between basin characteristics (e.g., type/size) and fossil pollen archives have received little attention in Amazonia. 
Here, we compare fossil pollen records of a small palm swamp (Cuatro Vientos; CV) and a nearby large lake (Laguna Chaplin, LCH) in Bolivian Amazonia, demonstrating that palm swamps can yield Quaternary pollen archives recording the history of terrestrial vegetation beyond the basin margin, rather than merely a history of localized swamp vegetation dynamics. The pollen assemblages from these two contrasting basins display remarkable agreement throughout their late Quaternary history, indicating that past drier climates supported a savanna landscape during the last glacial maximum (LGM; 24,000–18,000 cal yr BP) and a savanna/semideciduous forest mosaic during the middle Holocene (7000–4750 cal yr BP) at both regional (inferred from LCH) and local (inferred from CV) spatial scales. Additionally, the local-scale catchment of CV and the basin's proximity to the riverine forests of the Río Paraguá enables exploration of the extent of gallery/riverine forests during the LGM and middle Holocene. We show that, between 24,000 and 4000 cal yr BP, riverine/gallery rainforests were substantially reduced compared with present, challenging the hypothesis that gallery rainforests were important refugia for rainforest species during the drier LGM and middle Holocene. Periconception maternal low-protein diet adversely affects male mouse fetal bone growth and mineral density quality in late gestation Stuart A. Lanham, Stephanie J. Smith, Adam J. Watkins, Emma S. Lucas, Niamh MacCaoilte, Richard O.C. Oreffo, Tom P. Fleming, Judith J. Eckert Journal: Journal of Developmental Origins of Health and Disease / Volume 12 / Issue 3 / June 2021 Print publication: June 2021 Adverse programming of adult non-communicable disease can be induced by poor maternal nutrition during pregnancy and the periconception period has been identified as a vulnerable period.
In the current study, we used a mouse maternal low-protein diet fed either for the duration of pregnancy (LPD) or exclusively during the preimplantation period (Emb-LPD) with control nutrition provided thereafter and postnatally to investigate effects on fetal bone development and quality. This model has been shown previously to induce cardiometabolic and neurological disease phenotypes in offspring. Micro 3D computed tomography examination at fetal stages (embryonic days E14.5 and E17.5), reflecting early and late stages of bone formation, demonstrated that LPD treatment caused increased bone formation of relatively high mineral density quality in males, but not females, at E14.5, disproportionate to fetal growth, with bone quality maintained at E17.5. In contrast, Emb-LPD caused a late increase in male fetal bone growth, proportionate to fetal growth, at E17.5, affecting central and peripheral skeleton and of reduced mineral density quality relative to controls. These altered dynamics in bone growth coincide with increased placental efficiency indicating compensatory responses to dietary treatments. Overall, our data show fetal bone formation and mineral quality are dependent upon maternal nutritional protein content and are sex-specific. In particular, we find the duration and timing of poor maternal diet to be critical in the outcomes with periconceptional protein restriction leading to male offspring with increased bone growth but of poor mineral density, thereby susceptible to later disease risk. The GLEAM 4-Jy (G4Jy) Sample: I. Definition and the catalogue Sarah V. White, Thomas M. O. Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, Bi-Qing For, B. M.
Gaensler, Melanie Johnston-Hollitt, André Offringa, Lister Staveley-Smith Journal: Publications of the Astronomical Society of Australia / Volume 37 / 2020 Published online by Cambridge University Press: 01 June 2020, e018 The Murchison Widefield Array (MWA) has observed the entire southern sky (Declination, $\delta< 30^{\circ}$ ) at low radio frequencies, over the range 72–231MHz. These observations constitute the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we use the extragalactic catalogue (EGC) (Galactic latitude, $|b| >10^{\circ}$ ) to define the GLEAM 4-Jy (G4Jy) Sample. This is a complete sample of the 'brightest' radio sources ( $S_{\textrm{151\,MHz}}>4\,\text{Jy}$ ), the majority of which are active galactic nuclei with powerful radio jets. Crucially, low-frequency observations allow the selection of such sources in an orientation-independent way (i.e. minimising the bias caused by Doppler boosting, inherent in high-frequency surveys). We then use higher-resolution radio images, and information at other wavelengths, to morphologically classify the brightest components in GLEAM. We also conduct cross-checks against the literature and perform internal matching, in order to improve sample completeness (which is estimated to be $>95.5$ %). This results in a catalogue of 1863 sources, making the G4Jy Sample over 10 times larger than that of the revised Third Cambridge Catalogue of Radio Sources (3CRR; $S_{\textrm{178\,MHz}}>10.9\,\text{Jy}$ ). Of these G4Jy sources, 78 are resolved by the MWA (Phase-I) synthesised beam ( $\sim2$ arcmin at 200MHz), and we label 67% of the sample as 'single', 26% as 'double', 4% as 'triple', and 3% as having 'complex' morphology at $\sim1\,\text{GHz}$ (45 arcsec resolution). 
We characterise the spectral behaviour of these objects in the radio and find that the median spectral index is $\alpha=-0.740 \pm 0.012$ between 151 and 843MHz, and $\alpha=-0.786 \pm 0.006$ between 151MHz and 1400MHz (assuming a power-law description, $S_{\nu} \propto \nu^{\alpha}$ ), compared to $\alpha=-0.829 \pm 0.006$ within the GLEAM band. Alongside this, our value-added catalogue provides mid-infrared source associations (subject to 6" resolution at 3.4 $\mu$ m) for the radio emission, as identified through visual inspection and thorough checks against the literature. As such, the G4Jy Sample can be used as a reliable training set for cross-identification via machine-learning algorithms. We also estimate the angular size of the sources, based on their associated components at $\sim1\,\text{GHz}$ , and perform a flux density comparison for 67 G4Jy sources that overlap with 3CRR. Analysis of multi-wavelength data, and spectral curvature between 72MHz and 20GHz, will be presented in subsequent papers, and details for accessing all G4Jy overlays are provided at https://github.com/svw26/G4Jy. The GLEAM 4-Jy (G4Jy) Sample: II. Host galaxy identification for individual sources Sarah V. White, Thomas M. O. Franzen, Chris J. Riseley, O. Ivy Wong, Anna D. Kapińska, Natasha Hurley-Walker, Joseph R. Callingham, Kshitij Thorat, Chen Wu, Paul Hancock, Richard W. Hunstead, Nick Seymour, Jesse Swan, Randall Wayth, John Morgan, Rajan Chhetri, Carole Jackson, Stuart Weston, Martin Bell, B. M. Gaensler, Melanie Johnston–Hollitt, André Offringa, Lister Staveley–Smith The entire southern sky (Declination, $\delta< 30^{\circ}$ ) has been observed using the Murchison Widefield Array (MWA), which provides radio imaging of $\sim$ 2 arcmin resolution at low frequencies (72–231 MHz). 
This is the GaLactic and Extragalactic All-sky MWA (GLEAM) Survey, and we have previously used a combination of visual inspection, cross-checks against the literature, and internal matching to identify the 'brightest' radio-sources ( $S_{\mathrm{151\,MHz}}>4$ Jy) in the extragalactic catalogue (Galactic latitude, $|b| >10^{\circ}$ ). We refer to these 1 863 sources as the GLEAM 4-Jy (G4Jy) Sample, and use radio images (of ${\leq}45$ arcsec resolution), and multi-wavelength information, to assess their morphology and identify the galaxy that is hosting the radio emission (where appropriate). Details of how to access all of the overlays used for this work are available at https://github.com/svw26/G4Jy. Alongside this we conduct further checks against the literature, which we document here for individual sources. Whilst the vast majority of the G4Jy Sample are active galactic nuclei with powerful radio-jets, we highlight that it also contains a nebula, two nearby, star-forming galaxies, a cluster relic, and a cluster halo. There are also three extended sources for which we are unable to infer the mechanism that gives rise to the low-frequency emission. In the G4Jy catalogue we provide mid-infrared identifications for 86% of the sources, and flag the remainder as: having an uncertain identification (129 sources), having a faint/uncharacterised mid-infrared host (126 sources), or it being inappropriate to specify a host (2 sources). For the subset of 129 sources, there is ambiguity concerning candidate host-galaxies, and this includes four sources (B0424–728, B0703–451, 3C 198, and 3C 403.1) where we question the existing identification. 
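The G4Jy abstracts above quote two-point spectral indices under a power-law description, $S_{\nu} \propto \nu^{\alpha}$. As a quick illustration of how such an index is computed and used for extrapolation (with made-up flux densities, not actual G4Jy measurements), a minimal sketch:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point radio spectral index alpha, defined via the
    power law S_nu = k * nu**alpha quoted in the G4Jy abstract."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

def extrapolate(s1, nu1, alpha, nu2):
    """Predict the flux density at frequency nu2 from the power law
    fitted through (nu1, s1) with index alpha."""
    return s1 * (nu2 / nu1) ** alpha

# Hypothetical steep-spectrum source: 6.0 Jy at 151 MHz falling to
# 1.6 Jy at 843 MHz gives alpha close to the sample median (~ -0.77).
alpha = spectral_index(6.0, 151e6, 1.6, 843e6)
```

A source with a steeper (more negative) index fades faster with frequency, which is why a low-frequency selection at 151 MHz is less biased towards Doppler-boosted flat-spectrum sources than high-frequency surveys.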
Consortium of Otolaryngology Journal Editors: collegiality and contributions Robert T Sataloff, Rakesh Chandra, Edward W Fisher, David Goldenberg, Ehab Y Hanna, Jonas Johnson, David W Kennedy, Dennis H Kraus, John H Krouse, Michael Link, Lawrence R Lustig, Bert W O'Malley, Jr, Jay F Piccirillo, Robert Ruben, Sandra Schwartz, Samuel H Selesnick, Raj Sindwani, Richard J Smith, Michael G Stewart, James Tysome, Peter C Weber, D Bradley Welling Journal: The Journal of Laryngology & Otology / Volume 134 / Issue 5 / May 2020 An ultra-wide bandwidth (704 to 4 032 MHz) receiver for the Parkes radio telescope George Hobbs, Richard N. Manchester, Alex Dunning, Andrew Jameson, Paul Roberts, Daniel George, J. A. Green, John Tuthill, Lawrence Toomey, Jane F. Kaczmarek, Stacy Mader, Malte Marquarding, Azeem Ahmed, Shaun W. Amy, Matthew Bailes, Ron Beresford, N. D. R. Bhat, Douglas C.-J. Bock, Michael Bourne, Mark Bowen, Michael Brothers, Andrew D. Cameron, Ettore Carretti, Nick Carter, Santy Castillo, Raji Chekkala, Wan Cheng, Yoon Chung, Daniel A. Craig, Shi Dai, Joanne Dawson, James Dempsey, Paul Doherty, Bin Dong, Philip Edwards, Tuohutinuer Ergesh, Xuyang Gao, JinLin Han, Douglas Hayman, Balthasar Indermuehle, Kanapathippillai Jeganathan, Simon Johnston, Henry Kanoniuk, Michael Kesteven, Michael Kramer, Mark Leach, Vince Mcintyre, Vanessa Moss, Stefan Osłowski, Chris Phillips, Nathan Pope, Brett Preisig, Daniel Price, Ken Reeves, Les Reilly, John Reynolds, Tim Robishaw, Peter Roush, Tim Ruckley, Elaine Sadler, John Sarkissian, Sean Severs, Ryan Shannon, Ken Smart, Malcolm Smith, Stephanie Smith, Charlotte Sobey, Lister Staveley-Smith, Anastasios Tzioumis, Willem van Straten, Nina Wang, Linqing Wen, Matthew Whiting Published online by Cambridge University Press: 08 April 2020, e012 We describe an ultra-wide-bandwidth, low-frequency receiver recently installed on the Parkes radio telescope. The receiver system provides continuous frequency coverage from 704 to 4032 MHz.
For much of the band ( ${\sim}60\%$ ), the system temperature is approximately 22 K and the receiver system remains in a linear regime even in the presence of strong mobile phone transmissions. We discuss the scientific and technical aspects of the new receiver, including its astronomical objectives, as well as the feed, receiver, digitiser, and signal processor design. We describe the pipeline routines that form the archive-ready data products and how those data files can be accessed from the archives. The system performance is quantified, including the system noise and linearity, beam shape, antenna efficiency, polarisation calibration, and timing stability. The relationship between antihypertensive medications and mood disorders: analysis of linked healthcare data for 1.8 million patients Richard J. Shaw, Daniel Mackay, Jill P. Pell, Sandosh Padmanabhan, David S. Bailey, Daniel J. Smith Journal: Psychological Medicine / Volume 51 / Issue 7 / May 2021 Published online by Cambridge University Press: 24 January 2020, pp. 1183-1191 Recent work suggests that antihypertensive medications may be useful as repurposed treatments for mood disorders. Using large-scale linked healthcare data we investigated whether certain classes of antihypertensive, such as angiotensin antagonists (AAs) and calcium channel blockers, were associated with reduced risk of new-onset major depressive disorder (MDD) or bipolar disorder (BD). Two cohorts of patients treated with antihypertensives were identified from Scottish prescribing (2009–2016) and hospital admission (1981–2016) records. Eligibility for cohort membership was determined by a receipt of a minimum of four prescriptions for antihypertensives within a 12-month window. One treatment cohort (n = 538 730) included patients with no previous history of mood disorder, whereas the other (n = 262 278) included those who did. Both cohorts were matched by age, sex and area deprivation to untreated comparators. 
Associations between antihypertensive treatment and new-onset MDD or bipolar episodes were investigated using Cox regression. For patients without a history of mood disorder, antihypertensives were associated with increased risk of new-onset MDD. For AA monotherapy, the hazard ratio (HR) for new-onset MDD was 1.17 (95% CI 1.04–1.31). Beta blockers' association was stronger (HR 2.68; 95% CI 2.45–2.92), possibly indicating pre-existing anxiety. Some classes of antihypertensive were associated with protection against BD, particularly AAs (HR 0.46; 95% CI 0.30–0.70). For patients with a past history of mood disorders, all classes of antihypertensives were associated with increased risk of future episodes of MDD. There was no evidence that antihypertensive medications prevented new episodes of MDD but AAs may represent a novel treatment avenue for BD. Health screening, cardiometabolic disease and adverse health outcomes in individuals with severe mental illness Robert Pearsall, Richard J. Shaw, Gary McLean, Moira Connolly, Kate A. Hughes, James G. Boyle, John Park, Daniel J. Smith, Daniel Mackay Journal: BJPsych Open / Volume 5 / Issue 6 / November 2019 Published online by Cambridge University Press: 08 November 2019, e97 Poor physical health in severe mental illness (SMI) remains a major issue for clinical practice. To use electronic health records of routinely collected clinical data to determine levels of screening for cardiometabolic disease and adverse health outcomes in a large sample (n = 7718) of patients with SMI, predominantly schizophrenia and bipolar disorder. We linked data from the Glasgow Psychosis Clinical Information System (PsyCIS) to morbidity records, routine blood results and prescribing data. There was no record of routine blood monitoring during the preceding 2 years for 16.9% of the cohort. 
However, monitoring was poorer for male patients, younger patients aged 16–44, those with schizophrenia, and for tests of cholesterol, triglyceride and glycosylated haemoglobin. We estimated that 8.0% of participants had diabetes and that lipid levels, and use of lipid-lowering medication, were generally high. Electronic record linkage identified poor health screening and adverse health outcomes in this vulnerable patient group. This approach can inform the design of future interventions and health policy. 5 - Diamonds and the Mantle Geodynamics of Carbon By Steven B. Shirey, Karen V. Smit, D. Graham Pearson, Michael J. Walter, Sonja Aulbach, Frank E. Brenker, Hélène Bureau, Antony D. Burnham, Pierre Cartigny, Thomas Chacko, Daniel J. Frost, Erik H. Hauri, Dorrit E. Jacob, Steven D. Jacobsen, Simon C. Kohn, Robert W. Luth, Sami Mikhail, Oded Navon, Fabrizio Nestola, Paolo Nimis, Mederic Palot, Evan M. Smith, Thomas Stachel, Vincenzo Stagno, Andrew Steele, Richard A. Stern, Emilie Thomassot, Andrew R. Thomson, Yaakov Weiss Edited by Beth N. Orcutt, Isabelle Daniel, Université Claude-Bernard Lyon I, Rajdeep Dasgupta, Rice University, Houston Book: Deep Carbon Print publication: 17 October 2019, pp 89-128 The science of studying diamond inclusions for understanding Earth history has developed significantly over the past decades, with new instrumentation and techniques applied to diamond sample archives revealing the stories contained within diamond inclusions. This chapter reviews what diamonds can tell us about the deep carbon cycle over the course of Earth's history. It reviews how the geochemistry of diamonds and their inclusions informs us about the deep carbon cycle, the origin of the diamonds in Earth's mantle, and the evolution of diamonds through time. Atomic Structure of Extended Defects in GaAs-based Heterostructures Abhinandan Gangopadhyay, Aymeric Maros, Nikolai Faleev, Richard R. King, David J.
Smith Journal: Microscopy and Microanalysis / Volume 25 / Issue S2 / August 2019 Determining Key Influences on Patient Ability to Successfully Manage Noncommunicable Disease After Natural Disaster Benjamin J. Ryan, Richard C. Franklin, Frederick M. Burkle, Jr., Erin C. Smith, Peter Aitken, Peter A. Leggat Journal: Prehospital and Disaster Medicine / Volume 34 / Issue 3 / June 2019 Natural disasters often damage or destroy the protective public health service infrastructure (PHI) required to maintain the health and well-being of people with noncommunicable diseases (NCDs). This interruption increases the risk of an acute exacerbation or complication, potentially leading to a worse long-term prognosis or even death. Disaster-related exacerbations of NCDs will continue, if not increase, due to an increasing prevalence and sustained rise in the frequency and intensity of disasters, along with rapid unsustainable urbanization in flood plains and storm-prone coastal zones. Despite this, the focus of disaster and health systems preparedness and response remains on communicable diseases, even when the actual risk of disease outbreaks post-disaster is low, particularly in developed countries. There is now an urgent need to expand preparedness and response beyond communicable diseases to include people with NCDs. Hypothesis/Problem: The developing evidence-base describing the risk of disaster-related exacerbation of NCDs does not incorporate the perspectives, concerns, and challenges of people actually living with the conditions. To help address this gap, this research explored the key influences on patient ability to successfully manage their NCD after a natural disaster. A survey of people with NCDs in Queensland, Australia collected data on demographics, disease, disaster experience, and primary concern post-disaster. Descriptive statistics and chi-square tests with a Bonferroni-adjustment were used to analyze data. There were 118 responses to the survey. 
Key influences on the ability to self-manage post-disaster were access to medication, medical services, water, treatment and care, power, and food. Managing disease-specific symptoms associated with cardiovascular disease, diabetes, mental health, and respiratory diseases were primary concerns following a disaster. Stress and anxiety, loss of sleep, weakness or fatigue, and shortness of breath were common concerns for all patients with NCDs. Those dependent on care from others were most worried about shortness of breath and slow healing sores. Accessing medication and medical services were priorities for all patients post-disaster. The key influences on successful self-management post-disaster for people with NCDs must be reflected in disaster plans and strategies. Achieving this will reduce exacerbations or complications of disease and decrease demand for emergency health care post-disaster. Reinvestigating Cougar Mountain Cave: New Perspectives on Stratigraphy, Chronology, and a Younger Dryas Occupation in the Northern Great Basin Richard L. Rosencrance, Geoffrey M. Smith, Dennis L. Jenkins, Thomas J. Connolly, Thomas N. Layton Journal: American Antiquity / Volume 84 / Issue 3 / July 2019 Cougar Mountain Cave is located in Oregon's Fort Rock Basin. In 1958, avocationalist John Cowles excavated most of the cave's deposits and recovered abundant fiber, lithic, wood, and osseous artifacts. A crew from the University of California, Davis returned to the site in 1966 to evaluate the potential for further research, collecting additional lithic and fiber artifacts from disturbed deposits and in situ charcoal from apparently undisturbed deposits. Because Cowles took few notes or photographs, the Cougar Mountain Cave collection—most of which is housed at the Favell Museum in Klamath Falls, Oregon—has largely gone unstudied even though it contains diagnostic artifacts spanning the Holocene and, potentially, the terminal Pleistocene. 
We recently submitted charcoal and basketry from the site for radiocarbon dating, providing the first reliable sense of when Cougar Mountain Cave was first occupied. Our results indicate at least a Younger Dryas age for initial occupation. The directly dated basketry has provided new information about the age ranges and spatial distributions of diagnostic textile types in the northwestern Great Basin. Integration of genomic and clinical data augments surveillance of healthcare-acquired infections Doyle V. Ward, Andrew G. Hoss, Raivo Kolde, Helen C. van Aggelen, Joshua Loving, Stephen A. Smith, Deborah A. Mack, Raja Kathirvel, Jeffery A. Halperin, Douglas J. Buell, Brian E. Wong, Judy L. Ashworth, Mary M. Fortunato-Habib, Liyi Xu, Bruce A. Barton, Peter Lazar, Juan J. Carmona, Jomol Mathew, Ivan S. Salgo, Brian D. Gross, Richard T. Ellison III Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 6 / June 2019 Determining infectious cross-transmission events in healthcare settings involves manual surveillance of case clusters by infection control personnel, followed by strain typing of clinical/environmental isolates suspected in said clusters. Recent advances in genomic sequencing and cloud computing now allow for the rapid molecular typing of infecting isolates. To facilitate rapid recognition of transmission clusters, we aimed to assess infection control surveillance using whole-genome sequencing (WGS) of microbial pathogens to identify cross-transmission events for epidemiologic review. Clinical isolates of Staphylococcus aureus, Enterococcus faecium, Pseudomonas aeruginosa, and Klebsiella pneumoniae were obtained prospectively at an academic medical center, from September 1, 2016, to September 30, 2017. Isolate genomes were sequenced, followed by single-nucleotide variant analysis; a cloud-computing platform was used for whole-genome sequence analysis and cluster identification. 
Most strains of the 4 studied pathogens were unrelated, and 34 potential transmission clusters were present. The characteristics of the potential clusters were complex and likely not identifiable by traditional surveillance alone. Notably, only 1 cluster had been suspected by routine manual surveillance. Our work supports the assertion that integration of genomic and clinical epidemiologic data can augment infection control surveillance for both the identification of cross-transmission events and the inclusion of missed and exclusion of misidentified outbreaks (ie, false alarms). The integration of clinical data is essential to prioritize suspect clusters for investigation, and for existing infections, a timely review of both the clinical and WGS results can hold promise to reduce HAIs. A richer understanding of cross-transmission events within healthcare settings will require the expansion of current surveillance approaches. Treatment-resistant and Multi-therapy resistant criteria for bipolar depression: A consensus definition – CORRIGENDUM Diego Hidalgo-Mazzei, Michael Berk, Andrea Cipriani, Anthony J. Cleare, Arianna Di Florio, Daniel Dietch, John R. Geddes, Guy M. Goodwin, Heinz Grunze, Joseph F. Hayes, Ian Jones, Siegfried Kasper, Karine Macritchie, R. Hamish McAllister-Williams, Richard Morriss, Sam Nayrouz, Sofia Pappa, Jair C. Soares, Daniel J. Smith, Trisha Suppes, Peter Talbot, Eduard Vieta, Stuart Watson, Lakshmi N. Yatham, Allan H. Young, Paul R. A. Stokes Journal: The British Journal of Psychiatry / Volume 214 / Issue 5 / May 2019 Published online by Cambridge University Press: 28 February 2019, p. 309 Syndromic surveillance: two decades experience of sustainable systems – its people not just data! Gillian E. Smith, Alex J. Elliot, Iain Lake, Obaghe Edeghere, Roger Morbey, Mike Catchpole, David L. 
Heymann, Jeremy Hawker, Sue Ibbotson, Brian McCloskey, Richard Pebody, Public Health England Real-time Syndromic Surveillance Team Journal: Epidemiology & Infection / Volume 147 / 2019 Published online by Cambridge University Press: 22 February 2019, e101 Syndromic surveillance is a form of surveillance that generates information for public health action by collecting, analysing and interpreting routine health-related data on symptoms and clinical signs reported by patients and clinicians rather than being based on microbiologically or clinically confirmed cases. In England, a suite of national real-time syndromic surveillance systems (SSS) has been developed over the last 20 years, utilising data from a variety of health care settings (a telehealth triage system, general practice and emergency departments). The real-time systems in England have been used for early detection (e.g. seasonal influenza), for situational awareness (e.g. describing the size and demographics of the impact of a heatwave) and for reassurance of lack of impact on population health of mass gatherings (e.g. the London 2012 Olympic and Paralympic Games). We highlight the lessons learnt from running SSS for nearly two decades, and propose questions and issues still to be addressed. We feel that syndromic surveillance is an example of the use of 'big data', but contend that the focus for sustainable and useful systems should be on the added value of such systems and the importance of people working together to maximise the value for the public health of syndromic surveillance services. Treatment-resistant and multi-therapy-resistant criteria for bipolar depression: consensus definition Journal: The British Journal of Psychiatry / Volume 214 / Issue 1 / January 2019 Published online by Cambridge University Press: 06 December 2018, pp.
27-35 Most people with bipolar disorder spend a significant percentage of their lifetime experiencing either subsyndromal depressive symptoms or major depressive episodes, which contribute greatly to the high levels of disability and mortality associated with the disorder. Despite the importance of bipolar depression, there are only a small number of recognised treatment options available. Consecutive treatment failures can quickly exhaust these options, leading to treatment-resistant bipolar depression (TRBD). Remarkably few studies have evaluated TRBD and those available lack a comprehensive definition of multi-therapy-resistant bipolar depression (MTRBD). The aim was to reach consensus regarding threshold definition criteria for TRBD and MTRBD. Based on the evidence of standard treatments available in the latest bipolar disorder treatment guidelines, TRBD and MTRBD criteria were agreed by a representative panel of bipolar disorder experts using a modified Delphi method. TRBD criteria in bipolar depression were defined as failure to reach sustained symptomatic remission for 8 consecutive weeks after two different treatment trials, at adequate therapeutic doses, with at least two recommended monotherapy treatments or at least one monotherapy treatment and another combination treatment. MTRBD included the same initial definition as TRBD, with the addition of failure of at least one trial with an antidepressant, a psychological treatment and a course of electroconvulsive therapy. The proposed TRBD and MTRBD criteria may provide an important signpost to help clinicians, researchers and stakeholders in judging how and when to consider new non-standard treatments. However, some challenging diagnostic and therapeutic issues were identified in the consensus process that need further evaluation and research. Declaration of interest In the past 3 years, M.B.
has received grant/research support from the NIH, Cooperative Research Centre, Simons Autism Foundation, Cancer Council of Victoria, Stanley Medical Research Foundation, MBF, NHMRC, Beyond Blue, Rotary Health, Geelong Medical Research Foundation, Bristol Myers Squibb, Eli Lilly, Glaxo SmithKline, Meat and Livestock Board, Organon, Novartis, Mayne Pharma, Servier, Woolworths, Avant and the Harry Windsor Foundation, has been a speaker for Astra Zeneca, Bristol Myers Squibb, Eli Lilly, Glaxo SmithKline, Janssen Cilag, Lundbeck, Merck, Pfizer, Sanofi Synthelabo, Servier, Solvay and Wyeth and served as a consultant to Allergan, Astra Zeneca, Bioadvantex, Bionomics, Collaborative Medicinal Development, Eli Lilly, Grunbiotics, Glaxo SmithKline, Janssen Cilag, LivaNova, Lundbeck, Merck, Mylan, Otsuka, Pfizer and Servier. A.J.C. has in the past 3 years received honoraria for speaking from Astra Zeneca and Lundbeck, honoraria for consulting from Allergan, Janssen, Lundbeck and LivaNova and research grant support from Lundbeck. G.M.G. holds shares in P1Vital and has served as consultant, advisor or CME speaker for Allergan, Angelini, Compass pathways, MSD, Lundbeck, Otsuka, Takeda, Medscape, Minervra, P1Vital, Pfizer, Servier, Shire and Sun Pharma. J.G. has received research funding from National Institute for Health Research, Medical Research Council, Stanley Medical Research Institute and Wellcome. H.G. received grants/research support, consulting fees or honoraria from Gedeon Richter, Genericon, Janssen Cilag, Lundbeck, Otsuka, Pfizer and Servier. R.H.M.-W. has received support for research, expenses to attend conferences and fees for lecturing and consultancy work (including attending advisory boards) from various pharmaceutical companies including Astra Zeneca, Cyberonics, Eli Lilly, Janssen, Liva Nova, Lundbeck, MyTomorrows, Otsuka, Pfizer, Roche, Servier, SPIMACO and Sunovion. R.M. 
has received research support from Big White Wall, Electromedical Products, Johnson and Johnson, Magstim and P1Vital. S.N. received honoraria from Lundbeck, Jensen and Otsuka. J.C.S. has received funds for research from Alkermes, Pfizer, Allergan, J&J, BMS and been a speaker or consultant for Astellas, Abbott, Sunovion, Sanofi. S.W. has, within the past 3 years, attended advisory boards for Sunovion and LivaNova and has undertaken paid lectures for Lundbeck. D.J.S. has received honoraria from Lundbeck. T.S. has reported grants from Pathway Genomics, Stanley Medical Research Institute and Palo Alto Health Sciences; consulting fees from Sunovion Pharmaceuticals Inc.; honoraria from Medscape Education, Global Medical Education and CMEology; and royalties from Jones and Bartlett, UpToDate and Hogrefe Publishing. S.P. has served as a consultant or speaker for Janssen and Sunovion. P.T. has received consultancy fees as an advisory board member from the following companies: Galen Limited, Sunovion Pharmaceuticals Europe Ltd, myTomorrows and LivaNova. E.V. received grants/research support, consulting fees or honoraria from Abbott, AB-Biotics, Allergan, Angelini, Dainippon Sumitomo, Ferrer, Gedeon Richter, Janssen, Lundbeck, Otsuka and Sunovion. L.N.Y. has received grants/research support, consulting fees or honoraria from Allergan, Alkermes, Dainippon Sumitomo, Janssen, Lundbeck, Otsuka, Sanofi, Servier, Sunovion, Teva and Valeant. A.H.Y. has undertaken paid lectures and advisory boards for all major pharmaceutical companies with drugs used in affective and related disorders and LivaNova. He has also previously received funding for investigator-initiated studies from AstraZeneca, Eli Lilly, Lundbeck and Wyeth. P.R.A.S. has received research funding support from Corcept Therapeutics Inc. Corcept Therapeutics Inc fully funded attendance at their internal conference in California, USA, and all related expenses.
He has received grant funding from the Medical Research Council UK for a collaborative study with Janssen Research and Development LLC. Janssen Research and Development LLC are providing non-financial contributions to support this study. P.R.A.S. has received a presentation fee from Indivior and an advisory board fee from LivaNova.
ISI Entrance 2020 Problems and Solutions – B.Stat & B.Math
Problems and Solutions of ISI BStat and BMath Entrance 2020 of Indian Statistical Institute. Post author By Cheenta 5 Comments on ISI Entrance 2020 Problems and Solutions – B.Stat & B.Math
This post contains Indian Statistical Institute, ISI Entrance 2020 Problems and Solutions. Try them out. This is a work in progress.
Subjective Paper – ISI Entrance 2020 Problems and Solutions
Objective Paper – ISI Entrance 2020 Problems and Solutions
Objective Answer Key
Let \( \iota \) be a root of the equation \( x^2 + 1 = 0 \) and let \( \omega \) be a root of the equation \( x^2 + x + 1 = 0 \). Construct a polynomial $$ f(x) = a_0 + a_1 x + \cdots + a_n x^n $$ where \( a_0, a_1, \cdots , a_n \) are all integers such that \( f(\iota + \omega) = 0 \). Answer: \( f(x) = x^4 + 2x^3 + 5x^2 + 4x + 1 \)
Let \( a \) be a fixed real number. Consider the equation $$(x+2)^2 (x+7)^2 + a = 0, \quad x \in \mathbb{R} $$ where \( \mathbb{R} \) is the set of real numbers. For what values of \( a \) will the equation have exactly one root? Answer: \( -(2.5)^4 \)
Let \( A \) and \( B \) be variable points on the \(x\)-axis and \(y\)-axis respectively such that the line segment \( AB \) is in the first quadrant and of a fixed length \( 2d \). Let \( C \) be the mid-point of \( AB \) and \( P \) be a point such that (a) \( P \) and the origin are on the opposite sides of \( AB \) and, (b) \( PC \) is a line segment of length \( d \) which is perpendicular to \( AB \). Find the locus of \( P \). Answer: The line segment connecting \( (d, d) \) to \( (\sqrt{2} d, \sqrt{2} d) \)
Let a real-valued sequence \( \{x_n\}_{n \geq 1} \) be such that $$ \displaystyle{\lim_{n \to \infty} n x_n = 0 }.
$$ Find all possible real values of \( t \) such that \( \displaystyle{\lim_{n \to \infty} x_n \cdot (\log n)^t = 0 }. \)
Prove that the largest pentagon (in terms of area) that can be inscribed in a circle of radius \( 1 \) is regular (i.e., has equal sides).
Prove that the family of curves $$ \displaystyle{ \frac{x^2}{a^2 + \lambda} + \frac{y^2}{b^2 + \lambda} = 1} $$ satisfies $$ \displaystyle{ \frac{dy}{dx} \left(a^2 - b^2\right) = \left(x + y \frac{dy}{dx}\right)\left(x \frac{dy}{dx} - y\right) } $$
Consider a right-angled triangle with integer-valued sides \( a < b < c \) where \( a, b, c \) are pairwise co-prime. Let \( d = c - b \). Suppose \( d \) divides \( a \). Then (a) Prove that \( d \leq 2 \) (b) Find all such triangles (i.e. all possible triplets \( a, b, c \)) with perimeter less than \( 100 \).
A finite sequence of numbers \( (a_1, \cdots , a_n) \) is said to be alternating if $$ \displaystyle{ a_1 > a_2,\ a_2 < a_3,\ a_3 > a_4,\ a_4 < a_5, \cdots \quad \textrm{or} \quad a_1 < a_2,\ a_2 > a_3,\ a_3 < a_4,\ a_4 > a_5, \cdots } $$ How many alternating sequences of length \( 5 \), with distinct numbers \( a_1, \cdots , a_5 \), can be formed such that \( a_i \in \{ 1, 2, \cdots , 20 \} \) for \( i = 1, \cdots , 5 \)? Answer: \( 32 \times { {20} \choose {5} } \)
$1$. The number of subsets of $\{1,2,3, \ldots, 10\}$ having an odd number of elements is (A) $1024$ (B) $512$ (C) $256$ (D) $50$
$2$. For the function on the real line $\mathbb{R}$ given by $f(x)=|x|+|x+1|+e^{x}$, which of the following is true? (A) It is differentiable everywhere. (B) It is differentiable everywhere except at $x=0$ and $x=-1$. (C) It is differentiable everywhere except at $x=1/2$. (D) It is differentiable everywhere except at $x=-1/2$.
$3$. If $f, g$ are real-valued differentiable functions on the real line $\mathbb{R}$ such that $f(g(x))=x$ and $f^{\prime}(x)=1+(f(x))^{2},$ then $g^{\prime}(x)$ equals (A) $\frac{1}{1+x^{2}}$ (B) $1+x^{2}$ (C) $\frac{1}{1+x^{4}}$ (D) $1+x^{4}$
$4$.
The number of real solutions of $e^{x}=\sin (x)$ is (A) $0$ (B) $1$ (C) $2$ (D) infinite.
$5$. What is the limit of $\sum_{k=1}^{n} \frac{e^{-k / n}}{n}$ as $n$ tends to $\infty?$ (A) The limit does not exist. (B) $\infty$ (C) $1-e^{-1}$ (D) $e^{-0.5}$
$6$. A group of 64 players in a chess tournament needs to be divided into 32 groups of 2 players each. In how many ways can this be done? (A) $\frac{64!}{32!\, 2^{32}}$ (B) ${64 \choose 2}{62 \choose 2} \cdots {4 \choose 2}{2 \choose 2}$ (C) $\frac{64!}{32!\, 32!}$ (D) $\frac{64!}{2^{64}}$
$7$. The integral part of $\sum_{n=2}^{9999} \frac{1}{\sqrt{n}}$ equals (A) $196$ (B) $197$ (C) $198$ (D) $199$
$8$. Let $a_{n}$ be the number of subsets of $\{1,2, \ldots, n\}$ that do not contain any two consecutive numbers. Then (A) $a_{n}=a_{n-1}+a_{n-2}$ (B) $a_{n}=2 a_{n-1}$ (C) $a_{n}=a_{n-1}-a_{n-2}$ (D) $a_{n}=a_{n-1}+2 a_{n-2}$
$9$. There are $128$ numbers $1,2, \ldots, 128$ which are arranged in a circular pattern in clockwise order. We start deleting numbers from this set in a clockwise fashion as follows. First delete the number $2,$ then skip the next available number (which is 3) and delete 4. Continue in this manner, that is, after deleting a number, skip the next available number clockwise and delete the number available after that, till only one number remains. What is the last number left? (A) $1$ (B) $63$ (C) $127$ (D) None of the above.
$10$. Let $z$ and $w$ be complex numbers lying on the circles of radii 2 and 3 respectively, with centre $(0,0)$. If the angle between the corresponding vectors is 60 degrees, then the value of $|z+w| / |z-w|$ is: (A) $\frac{\sqrt{19}}{\sqrt{7}}$ (B) $\frac{\sqrt{7}}{\sqrt{19}}$ (C) $\frac{\sqrt{12}}{\sqrt{7}}$ (D) $\frac{\sqrt{7}}{\sqrt{12}}$
$11$. Two vertices of a square lie on a circle of radius $r$ and the other two vertices lie on a tangent to this circle.
Then the length of the side of the square is (A) $\frac{3 r}{2}$ (B) $\frac{4 r}{3}$ (C) $\frac{6 r}{5}$ (D) $\frac{8 r}{5}$
$12$. For a real number $x,$ let $[x]$ denote the greatest integer less than or equal to $x$. Then the number of real solutions of $|2 x-[x]|=4$ is (A) $4$ (B) $3$ (C) $2$ (D) $1$
$13$. Let $f, g$ be differentiable functions on the real line $\mathbb{R}$ with $f(0)>g(0)$. Assume that the set $M=\{t \in \mathbb{R} \mid f(t)=g(t)\}$ is non-empty and that $f^{\prime}(t) \geq g^{\prime}(t)$ for all $t \in M$. Then which of the following is necessarily true? (A) If $t \in M,$ then $t<0$. (B) For any $t \in M, f^{\prime}(t)>g^{\prime}(t)$. (C) For any $t \notin M, f(t)>g(t)$. (D) None of the above.
$14$. Consider the sequence $1,2,2,3,3,3,4,4,4,4,5,5,5,5,5, \ldots$ obtained by writing one $1,$ two $2$'s, three $3$'s and so on. What is the $2020^{\text{th}}$ term in the sequence? (A) $62$ (B) $63$ (C) $64$ (D) $65$
$15$. Let $A=\{x_{1}, x_{2}, \ldots, x_{50}\}$ and $B=\{y_{1}, y_{2}, \ldots, y_{20}\}$ be two sets of real numbers. What is the total number of functions $f: A \rightarrow B$ such that $f$ is onto and $f\left(x_{1}\right) \leq f\left(x_{2}\right) \leq \cdots \leq f\left(x_{50}\right)?$ (A) $49 \choose 19$ (B) $49 \choose 20$ (C) $50 \choose 19$ (D) $50 \choose 20$
$16$. The number of complex roots of the polynomial $z^{5}-z^{4}-1$ which have modulus $1$ is (A) $0$ (B) $1$ (C) $2$ (D) more than $2$
$17$. The number of real roots of the polynomial $p(x)=\left(x^{2020}+2020 x^{2}+2020\right)\left(x^{3}-2020\right)\left(x^{2}-2020\right)$ is (A) $2$ (B) $3$ (C) $2023$ (D) $2025$
$18$. Which of the following is the sum of an infinite geometric sequence whose terms come from the set $\{1, \frac{1}{2}, \frac{1}{4}, \ldots, \frac{1}{2^{n}}, \ldots\}?$ (A) $\frac{1}{5}$ (B) $\frac{1}{7}$ (C) $\frac{1}{9}$ (D) $\frac{1}{11}$
$19$.
If $a, b, c$ are distinct odd natural numbers, then the number of rational roots of the polynomial $a x^{2}+b x+c$ (A) must be $0$. (B) must be $1$. (C) must be $2$. (D) cannot be determined from the given data.
$20$. Let $A, B, C$ be finite subsets of the plane such that $A \cap B, B \cap C$ and $C \cap A$ are all empty. Let $S=A \cup B \cup C$. Assume that no three points of $S$ are collinear and also assume that each of $A, B$ and $C$ has at least 3 points. Which of the following statements is always true? (A) There exists a triangle having a vertex from each of $A, B, C$ that does not contain any point of $S$ in its interior. (B) Any triangle having a vertex from each of $A, B, C$ must contain a point of $S$ in its interior. (C) There exists a triangle having a vertex from each of $A, B, C$ that contains all the remaining points of $S$ in its interior. (D) There exist 2 triangles, both having a vertex from each of $A, B, C$ such that the two triangles do not intersect.
$21$. Shubhaangi thinks she may be allergic to Bengal gram and takes a test that is known to give the following results: (i) For people who really do have the allergy, the test says "Yes" $90\%$ of the time. (ii) For people who do not have the allergy, the test says "Yes" $15\%$ of the time. If $2\%$ of the population has the allergy and Shubhaangi's test says "Yes" then the chances that Shubhaangi does really have the allergy are (A) $1 / 9$ (B) $6 / 55$ (C) $1 / 11$
$22$. If $\sin \left(\tan ^{-1}(x)\right)=\cot \left(\sin ^{-1}\left(\sqrt{\frac{13}{17}}\right)\right)$ then $x$ is (A) $\frac{4}{17}$ (B) $\frac{2}{3}$ (C) $\sqrt{\frac{17^{2}-13^{2}}{17^{2}+13^{2}}}$ (D) $\sqrt{\frac{17^{2}-13^{2}}{17 \times 13}}$
$23$.
If the word PERMUTE is permuted in all possible ways and the different resulting words are written down in alphabetical order (also known as dictionary order), irrespective of whether the word has meaning or not, then the $720^{\text{th}}$ word would be: (A) EEMPRTU (B) EUTRPME (C) UTRPMEE (D) MEET-PUR.
$24$. The points $(4,7,-1), (1,2,-1), (-1,-2,-1)$ and $(2,3,-1)$ in $\mathbb{R}^{3}$ are the vertices of a (A) rectangle which is not a square. (B) rhombus. (C) parallelogram which is not a rectangle. (D) trapezium which is not a parallelogram.
$25$. Let $f(x), g(x)$ be functions on the real line $\mathbb{R}$ such that both $f(x)+g(x)$ and $f(x) g(x)$ are differentiable. Which of the following is FALSE? (A) $f(x)^{2}+g(x)^{2}$ is necessarily differentiable. (B) $f(x)$ is differentiable if and only if $g(x)$ is differentiable. (C) $f(x)$ and $g(x)$ are necessarily continuous. (D) If $f(x)>g(x)$ for all $x \in \mathbb{R},$ then $f(x)$ is differentiable.
$26$. Let $S$ be the set consisting of all those real numbers that can be written as $p-2a$ where $p$ and $a$ are the perimeter and area of a right-angled triangle having base length $1$. Then $S$ is (A) $(2, \infty)$ (B) $(1, \infty)$ (C) $(0, \infty)$ (D) the real line $\mathbb{R}$.
$27$. Let $S=\{1,2, \ldots, n\}$. For any non-empty subset $A$ of $S$, let $l(A)$ denote the largest number in $A$. If $f(n)=\sum_{A \subseteq S} l(A),$ that is, $f(n)$ is the sum of the numbers $l(A)$ while $A$ ranges over all the non-empty subsets of $S$, then $f(n)$ is (A) $2^{n}(n+1)$ (B) $2^{n}(n+1)-1$ (C) $2^{n}(n-1)$ (D) $2^{n}(n-1)+1$
$28$. The area of the region in the plane $\mathbb{R}^{2}$ given by points $(x, y)$ satisfying $|y| \leq 1$ and $x^{2}+y^{2} \leq 2$ is (A) $\pi+1$ (B) $2 \pi-2$ (C) $\pi+2$ (D) $2 \pi-1$
$29$.
Let $n$ be a positive integer and $t \in (0,1)$. Then $\sum_{r=0}^{n} r {n \choose r} t^{r}(1-t)^{n-r}$ equals (A) $n t$ (B) $(n-1)(1-t)$ (C) $n t+(n-1)(1-t)$ (D) $\left(n^{2}-2 n+2\right) t$
$30$. For any real number $x,$ let $[x]$ be the greatest integer $m$ such that $m \leq x$. Then the number of points of discontinuity of the function $g(x)=\left[x^{2}-2\right]$ on the interval $(-3,3)$ is (A) $5$ (B) $9$ (C) $13$ (D) $16$
(Created by students). Please suggest changes in the comment section.
1. B 2. B 3. A 4. D 5. C 6. A 7. B 8. A 9. A 10. A 11. D 12. A 13. C 14. C 15. A 16. C 17. B 18. B 19. A 20. A 21. B 22. B 23. B 24. C 25. D 26. A 27. D 28. C 29. A 30. D
5 replies on "ISI Entrance 2020 Problems and Solutions – B.Stat & B.Math"
Samprit Chakraborty says: HELLO SIR I THINK 13 SHOULD BE D. NONE. Take f(x) = x+1 and g(x) = x^2
Shruti bansal says: Pls check that for ur assumed function the condition of f'(t) >= g'(t) is not satisfied for all t belonging to M
Dipak Kumar Das says: What will be the cut off for B.math 2020
SUDIP MONDAL says: 25 - C: take the case f(x) = 1 for x >= 0 and -1 for x < 0, g(x) = -f(x)
Soumyadeep mandal says: I think you are right. I wrongly chose option D.
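Three of the keyed answers above lend themselves to a quick numerical sanity check. The following script is an illustrative addition (not part of the original post); it verifies the integral part in Q7, the circular-deletion survivor in Q9, and the 2020th term in Q14.

```python
import math

# Q7: integral part of sum_{n=2}^{9999} 1/sqrt(n)
q7 = math.floor(sum(1 / math.sqrt(n) for n in range(2, 10000)))

# Q9: numbers 1..128 in a circle; delete 2, skip one, delete the next, ...
nums = list(range(1, 129))
i = 1                        # index of the first number to delete (the number 2)
while len(nums) > 1:
    del nums[i]
    i = (i + 1) % len(nums)  # skip one survivor, delete the number after it
q9 = nums[0]

# Q14: 2020th term of 1, 2, 2, 3, 3, 3, ... (the value k appears k times)
k, written = 0, 0
while written < 2020:
    k += 1
    written += k
q14 = k

print(q7, q9, q14)  # agrees with the key: 7.B (197), 9.A (1), 14.C (64)
```

The results match the student-created answer key above.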
Invariant Models for Causal Transfer Learning (1507.05333) Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, Jonas Peters Sept. 24, 2018 stat.ML Methods of transfer learning try to combine knowledge from several related tasks (or domains) to improve performance on a test task. Inspired by causal methodology, we relax the usual covariate shift assumption and assume that it holds true for a subset of predictor variables: the conditional distribution of the target variable given this subset of predictors is invariant over all tasks. We show how this assumption can be motivated from ideas in the field of causality. We focus on the problem of Domain Generalization, in which no examples from the test task are observed. We prove that in an adversarial setting using this subset for prediction is optimal in Domain Generalization; we further provide examples, in which the tasks are sufficiently diverse and the estimator therefore outperforms pooling the data, even on average. If examples from the test task are available, we also provide a method to transfer knowledge from the training tasks and exploit all available features for prediction. However, we provide no guarantees for this method. We introduce a practical method which allows for automatic inference of the above subset and provide corresponding code. We present results on synthetic data sets and a gene deletion data set. Theoretical Aspects of Cyclic Structural Causal Models (1611.06221) Stephan Bongers, Jonas Peters, Bernhard Schölkopf, Joris M. Mooij Aug. 5, 2018 cs.AI, stat.ME, cs.LG Structural causal models (SCMs), also known as (non-parametric) structural equation models (SEMs), are widely used for causal modeling purposes. A large body of theoretical results is available for the special case in which cycles are absent (i.e., acyclic SCMs, also known as recursive SEMs). However, in many application domains cycles are abundantly present, for example in the form of feedback loops. 
In this paper, we provide a general and rigorous theory of cyclic SCMs. The paper consists of two parts: the first part gives a rigorous treatment of structural causal models, dealing with measure-theoretic and other complications that arise in the presence of cycles. In contrast with the acyclic case, in cyclic SCMs solutions may no longer exist, or if they exist, they may no longer be unique, or even measurable in general. We give several sufficient and necessary conditions for the existence of (unique) measurable solutions. We show how causal reasoning proceeds in these models and how this differs from the acyclic case. Moreover, we give an overview of the Markov properties that hold for cyclic SCMs. In the second part, we address the question of how one can marginalize an SCM (possibly with cycles) to a subset of the endogenous variables. We show that under a certain condition, one can effectively remove a subset of the endogenous variables from the model, leading to a more parsimonious marginal SCM that preserves the causal and counterfactual semantics of the original SCM on the remaining variables. Moreover, we show how the marginalization relates to the latent projection and to latent confounders, i.e. latent common causes. Differentially Private Database Release via Kernel Mean Embeddings (1710.01641) Matej Balog, Ilya Tolstikhin, Bernhard Schölkopf May 31, 2018 stat.ML We lay theoretical foundations for new database release mechanisms that allow third-parties to construct consistent estimators of population statistics, while ensuring that the privacy of each individual contributing to the database is protected. The proposed framework rests on two main ideas. First, releasing (an estimate of) the kernel mean embedding of the data generating random variable instead of the database itself still allows third-parties to construct consistent estimators of a wide class of population statistics. 
Second, the algorithm can satisfy the definition of differential privacy by basing the released kernel mean embedding on entirely synthetic data points, while controlling accuracy through the metric available in a Reproducing Kernel Hilbert Space. We describe two instantiations of the proposed framework, suitable under different scenarios, and prove theoretical results guaranteeing differential privacy of the resulting algorithms and the consistency of estimators constructed from their outputs. Automatic Estimation of Modulation Transfer Functions (1805.01872) Matthias Bauer, Valentin Volchkov, Michael Hirsch, Bernhard Schölkopf May 4, 2018 cs.CV, stat.ML The modulation transfer function (MTF) is widely used to characterise the performance of optical systems. Measuring it is costly and it is thus rarely available for a given lens specimen. Instead, MTFs based on simulations or, at best, MTFs measured on other specimens of the same lens are used. Fortunately, images recorded through an optical system contain ample information about its MTF, only that it is confounded with the statistics of the images. This work presents a method to estimate the MTF of camera lens systems directly from photographs, without the need for expensive equipment. We use a custom grid display to accurately measure the point response of lenses to acquire ground truth training data. We then use the same lenses to record natural images and employ a data-driven supervised learning approach using a convolutional neural network to estimate the MTF on small image patches, aggregating the information into MTF charts over the entire field of view. It generalises to unseen lenses and can be applied for single photographs, with the performance improving if multiple photographs are available. 
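The kernel mean embedding used in the differential-privacy framework above is, empirically, just an average of kernel feature maps, and the RKHS metric between two such embeddings is the (squared) maximum mean discrepancy. A minimal NumPy sketch with a Gaussian kernel — the bandwidth choice and the biased V-statistic estimator are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gram(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # biased estimator of the squared RKHS distance between the
    # empirical kernel mean embeddings of samples X and Y
    return (gram(X, X, gamma).mean() + gram(Y, Y, gamma).mean()
            - 2 * gram(X, Y, gamma).mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
Y = rng.standard_normal((200, 2))        # drawn from the same distribution as X
Z = rng.standard_normal((200, 2)) + 3.0  # shifted distribution
print(mmd2(X, X), mmd2(X, Y), mmd2(X, Z))
```

A third party holding only the (synthetic-point) embedding of a database could compare it to other samples exactly this way, which is the sense in which the released embedding still supports consistent estimation.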
Clustering Meets Implicit Generative Models (1804.11130) Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf April 30, 2018 cs.AI, cs.LG, stat.ML Clustering is a cornerstone of unsupervised learning which can be thought as disentangling multiple generative mechanisms underlying the data. In this paper we introduce an algorithmic framework to train mixtures of implicit generative models which we particularize for variational autoencoders. Relying on an additional set of discriminators, we propose a competitive procedure in which the models only need to approximate the portion of the data distribution from which they can produce realistic samples. As a byproduct, each model is simpler to train, and a clustering interpretation arises naturally from the partitioning of the training points among the models. We empirically show that our approach splits the training distribution in a reasonable way and increases the quality of the generated samples. Structural causal models for macro-variables in time-series (1804.03911) Dominik Janzing, Paul Rubenstein, Bernhard Schölkopf April 11, 2018 math.ST, stat.TH We consider a bivariate time series $(X_t,Y_t)$ that is given by a simple linear autoregressive model. Assuming that the equations describing each variable as a linear combination of past values are considered structural equations, there is a clear meaning of how intervening on one particular $X_t$ influences $Y_{t'}$ at later times $t'>t$. In the present work, we describe conditions under which one can define a causal model between variables that are coarse-grained in time, thus admitting statements like `setting $X$ to $x$ changes $Y$ in a certain way' without referring to specific time instances. We show that particularly simple statements follow in the frequency domain, thus providing meaning to interventions on frequencies. 
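The notion of intervening on a particular $X_t$ in a bivariate linear autoregression, as in the preceding abstract, can be made concrete in a few lines. The coefficients below are illustrative choices, not taken from the paper:

```python
import numpy as np

def simulate(T=200, do_x=None, seed=0):
    # X drives Y: x_t = a*x_{t-1} + noise,  y_t = b*y_{t-1} + c*x_{t-1} + noise
    rng = np.random.default_rng(seed)
    a, b, c = 0.5, 0.5, 1.0
    x, y = np.zeros(T), np.zeros(T)
    ex, ey = rng.standard_normal(T), rng.standard_normal(T)
    for t in range(1, T):
        x[t] = a * x[t - 1] + ex[t]
        if do_x is not None and t == do_x[0]:
            x[t] = do_x[1]  # intervention do(X_t = value): overwrite the structural equation
        y[t] = b * y[t - 1] + c * x[t - 1] + ey[t]
    return x, y

x0, y0 = simulate()
x1, y1 = simulate(do_x=(100, 5.0))
# intervening on X at t = 100 leaves Y unchanged up to t = 100
# and changes Y from t = 101 onwards, as the structural reading prescribes
```

Coarse-graining such a process in time, as the abstract discusses, amounts to asking when statements like this survive after the specific index $t$ has been averaged away.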
Revisiting First-Order Convex Optimization Over Linear Spaces (1803.09539) Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi April 5, 2018 math.OC, cs.LG, stat.ML Two popular examples of first-order optimization methods over linear spaces are coordinate descent and matching pursuit algorithms, with their randomized variants. While the former targets the optimization by moving along coordinates, the latter considers a generalized notion of directions. Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives. As a byproduct of our affine invariant analysis of matching pursuit, our rates for steepest coordinate descent are the tightest known. Furthermore, we show the first accelerated convergence rate $\mathcal{O}(1/t^2)$ for matching pursuit on convex objectives. Coordination via predictive assistants from a game-theoretic view (1803.06247) Philipp Geiger, Justus Winkelmann, Claudius Proissl, Michel Besserve, Bernhard Schölkopf March 16, 2018 cs.GT, stat.ML We study machine learning-based assistants that support coordination between humans in congested facilities via congestion forecasts. In our theoretical analysis, we use game theory to study how an assistant's forecast that influences the outcome relates to Nash equilibria, and how they can be reached quickly in congestion game-like settings. Using information theory, we investigate approximations to given social choice functions under privacy constraints w.r.t. assistants. And we study dynamics and training for a specific exponential smoothing-based assistant via a linear dynamical systems and causal analysis. 
We report experiments conducted on a real congested cafeteria with about 400 daily customers, where we evaluate this assistant and prediction baselines to gain further insight. Experimental and causal view on information integration in autonomous agents (1606.04250) Philipp Geiger, Katja Hofmann, Bernhard Schölkopf March 13, 2018 cs.AI The amount of digitally available but heterogeneous information about the world is remarkable, and new technologies such as self-driving cars, smart homes, or the internet of things may further increase it. In this paper we present preliminary ideas about certain aspects of the problem of how such heterogeneous information can be harnessed by autonomous agents. After discussing potentials and limitations of some existing approaches, we investigate how experiments can help to obtain a better understanding of the problem. Specifically, we present a simple agent that integrates video data from a different agent, and implement and evaluate a version of it on the novel experimentation platform Malmo. The focus of a second investigation is on how information about the hardware of different agents, the agents' sensory data, and causal information can be utilized for knowledge transfer between agents and subsequently more data-efficient decision making. Finally, we discuss potential future steps w.r.t. theory and experimentation, and formulate open questions. Adversarial Extreme Multi-label Classification (1803.01570) Rohit Babbar, Bernhard Schölkopf March 5, 2018 cs.LG, stat.ML The goal in extreme multi-label classification is to learn a classifier which can assign a small subset of relevant labels to an instance from an extremely large set of target labels. Datasets in extreme classification exhibit a long tail of labels which have a small number of positive training instances.
In this work, we pose the learning task in extreme classification with a large number of tail-labels as learning in the presence of adversarial perturbations. This view motivates a robust optimization framework and equivalence to a corresponding regularized objective. Under the proposed robustness framework, we demonstrate the efficacy of Hamming loss for tail-label detection in extreme classification. The equivalent regularized objective, in combination with proximal gradient based optimization, performs better than state-of-the-art methods on propensity scored versions of precision@k and nDCG@k (up to 20% relative improvement over PFastreXML - a leading tree-based approach and 60% relative improvement over SLEEC - a leading label-embedding approach). Furthermore, we also highlight the sub-optimality of a sparse solver in a widely used package for large-scale linear classification, which is interesting in its own right. We also investigate the spectral properties of label graphs for providing novel insights towards understanding the conditions governing the performance of the Hamming loss based one-vs-rest scheme vis-à-vis label embedding methods. Tempered Adversarial Networks (1802.04374) Mehdi S. M. Sajjadi, Bernhard Schölkopf March 1, 2018 cs.CR, cs.LG, stat.ML Generative adversarial networks (GANs) have been shown to produce realistic samples from high-dimensional distributions, but training them is considered hard. A possible explanation for training instabilities is the inherent imbalance between the networks: While the discriminator is trained directly on both real and fake samples, the generator only has control over the fake samples it produces since the real data distribution is fixed by the choice of a given dataset. We propose a simple modification that gives the generator control over the real samples, which leads to a tempered learning process for both generator and discriminator.
The real data distribution passes through a lens before being revealed to the discriminator, balancing the generator and discriminator by gradually revealing more detailed features necessary to produce high-quality results. The proposed module automatically adjusts the learning process to the current strength of the networks, yet is generic and easy to add to any GAN variant. In a number of experiments, we show that this can improve quality, stability and/or convergence speed across a range of different GAN architectures (DCGAN, LSGAN, WGAN-GP). Analysis of Cause-Effect Inference via Regression Errors (1802.06698) Patrick Blöbaum, Dominik Janzing, Takashi Washio, Shohei Shimizu, Bernhard Schölkopf Feb. 19, 2018 cs.AI We address the problem of inferring the causal relation between two variables by comparing the least-squares errors of the predictions in both possible causal directions. Under the assumption of an independence between the function relating cause and effect, the conditional noise distribution, and the distribution of the cause, we show that the errors are smaller in causal direction if both variables are equally scaled and the causal relation is close to deterministic. Based on this, we provide an easily applicable algorithm that only requires a regression in both possible causal directions and a comparison of the errors. The performance of the algorithm is compared with different related causal inference methods in various artificial and real-world data sets. Learning Independent Causal Mechanisms (1712.00961) Giambattista Parascandolo, Niki Kilbertus, Mateo Rojas-Carulla, Bernhard Schölkopf Feb. 19, 2018 cs.LG, stat.ML Statistical learning relies upon data sampled from a distribution, and we usually do not care what actually generated it in the first place. From the point of view of causal modeling, the structure of each distribution is induced by physical mechanisms that give rise to dependencies between observables. 
Mechanisms, however, can be meaningful autonomous modules of generative models that make sense beyond a particular entailed data distribution, lending themselves to transfer between problems. We develop an algorithm to recover a set of independent (inverse) mechanisms from a set of transformed data points. The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization. We analyze the proposed method in a series of experiments on image data. Each expert learns to map a subset of the transformed data back to a reference distribution. The learned mechanisms generalize to novel domains. We discuss implications for transfer learning and links to recent trends in generative modeling. Adversarial Vulnerability of Neural Networks Increases With Input Dimension (1802.01421) Carl-Johann Simon-Gabriel, Yann Ollivier, Bernhard Schölkopf, Léon Bottou, David Lopez-Paz Feb. 5, 2018 cs.CV, cs.LG, stat.ML Over the past four years, neural networks have proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when seen as a function of the inputs. For most current network architectures, we prove that the $\ell_1$-norm of these gradients grows as the square root of the input-size. These nets therefore become increasingly vulnerable with growing image size. Over the course of our analysis we rediscover and generalize double-backpropagation, a technique that penalizes large gradients in the loss surface to reduce adversarial vulnerability and increase generalization performance. We show that this regularization-scheme is equivalent at first order to training with adversarial noise. Finally, we demonstrate that replacing strided by average-pooling layers decreases adversarial vulnerability. 
Our proofs rely on the network's weight-distribution at initialization, but extensive experiments confirm their conclusions after training. Avoiding Discrimination through Causal Reasoning (1706.02744) Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf Jan. 21, 2018 cs.LG, cs.CY, stat.ML Recent work on fairness in machine learning has focused on various statistical discrimination criteria and how they trade off. Most of these criteria are observational: They depend only on the joint distribution of predictor, protected attribute, features, and outcome. While convenient to work with, observational criteria have severe inherent limitations that prevent them from resolving matters of fairness conclusively. Going beyond observational criteria, we frame the problem of discrimination based on protected attributes in the language of causal reasoning. This viewpoint shifts attention from "What is the right fairness criterion?" to "What do we want to assume about the causal data generating process?" Through the lens of causality, we make several contributions. First, we crisply articulate why and when observational criteria fail, thus formalizing what was before a matter of opinion. Second, our approach exposes previously ignored subtleties and why they are fundamental to the problem. Finally, we put forward natural causal non-discrimination criteria and develop algorithms that satisfy them. A pixel-level model for event discovery in time-domain imaging (1710.02428) Dun Wang, David W. Hogg, Daniel Foreman-Mackey, Bernhard Schölkopf Oct. 9, 2017 astro-ph.IM Difference imaging or image subtraction is a method that measures differential photometry by matching the pointing and point-spread function (PSF) between image frames. It is used for the detection of time-variable phenomena. 
Here we present a new category of method, CPM Difference Imaging, in which differences are not measured between matched images but instead between image frames and a data-driven predictive model that has been designed only to predict the pointing, PSF, and detector effects but not astronomical variability. In CPM Difference Imaging each pixel is modelled by the Causal Pixel Model (CPM) originally built for modeling Kepler data, in which pixel values are predicted by a linear combination of other pixels at the same epoch but far enough away such that these pixels are causally disconnected, astrophysically. It does not require that the user have any explicit model or description of the pointing or point-spread function of any of the images. Its principal drawback is that, in its current form, it requires an imaging campaign with many epochs and fairly stable telescope pointing. The method is applied to simulated data and also the K2 Campaign 9 microlensing data. We show that CPM Difference Imaging can detect variable objects and produce precise differential photometry in a crowded field. CPM Difference Imaging is capable of producing image differences at nearly photon-noise precision. Learning Blind Motion Deblurring (1708.04208) Patrick Wieschollek, Michael Hirsch, Bernhard Schölkopf, Hendrik P.A. Lensch Aug. 14, 2017 cs.CV As handheld video cameras are now commonplace and available in every smartphone, images and videos can be recorded almost everywhere at anytime. However, taking a quick shot frequently yields a blurry result due to unwanted camera shake during recording or moving objects in the scene. Removing these artifacts from the blurry recordings is a highly ill-posed problem as neither the sharp image nor the motion blur kernel is known. Propagating information between multiple consecutive blurry observations can help restore the desired sharp image or video.
Solutions for blind deconvolution based on neural networks rely on a massive amount of ground-truth data which is hard to acquire. In this work, we propose an efficient approach to produce a significant amount of realistic training data and introduce a novel recurrent network architecture to deblur frames taking temporal information into account, which can efficiently handle arbitrary spatial and temporal input sizes. We demonstrate the versatility of our approach in a comprehensive comparison on a number of challenging real-world examples. EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis (1612.07919) Mehdi S. M. Sajjadi, Bernhard Schölkopf, Michael Hirsch July 30, 2017 cs.AI, cs.CV, stat.ML Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks. Causal Consistency of Structural Equation Models (1707.00819) Paul K. Rubenstein, Sebastian Weichwald, Stephan Bongers, Joris M.
Mooij, Dominik Janzing, Moritz Grosse-Wentrup, Bernhard Schölkopf July 4, 2017 cs.AI, stat.ME, cs.LG, stat.ML Complex systems can be modelled at various levels of detail. Ideally, causal models of the same system should be consistent with one another in the sense that they agree in their predictions of the effects of interventions. We formalise this notion of consistency in the case of Structural Equation Models (SEMs) by introducing exact transformations between SEMs. This provides a general language to consider, for instance, the different levels of description in the following three scenarios: (a) models with large numbers of variables versus models in which the 'irrelevant' or unobservable variables have been marginalised out; (b) micro-level models versus macro-level models in which the macro-variables are aggregate features of the micro-variables; (c) dynamical time series models versus models of their stationary behaviour. Our analysis stresses the importance of well specified interventions in the causal modelling process and sheds light on the interpretation of cyclic SEMs. Discriminative k-shot learning using probabilistic models (1706.00326) Matthias Bauer, Mateo Rojas-Carulla, Jakub Bartłomiej Świątkowski, Bernhard Schölkopf, Richard E. Turner June 1, 2017 cs.LG, stat.ML This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach not only leverages the feature-based representation learned by a neural network from the initial task (representational transfer), but also information about the form of the classes (concept transfer). The concept information is encapsulated in a probabilistic model for the final layer weights of the neural network which then acts as a prior when probabilistic k-shot learning is performed.
Surprisingly, simple probabilistic models and inference schemes outperform many existing k-shot learning approaches and compare favourably with the state-of-the-art method in terms of error-rate. The new probabilistic methods are also able to accurately model uncertainty, leading to well calibrated classifiers, and they are easily extensible and flexible, unlike many recent approaches to k-shot learning. Interpolated Policy Gradient: Merging On-Policy and Off-Policy Gradient Estimation for Deep Reinforcement Learning (1706.00387) Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Bernhard Schölkopf, Sergey Levine June 1, 2017 cs.AI, cs.LG, cs.RO Off-policy model-free deep reinforcement learning methods using previously collected data can improve sample efficiency over on-policy policy gradient techniques. On the other hand, on-policy algorithms are often more stable and easier to use. This paper examines, both theoretically and empirically, approaches to merging on- and off-policy updates for deep reinforcement learning. Theoretical results show that off-policy updates with a value function estimator can be interpolated with on-policy policy gradient updates whilst still satisfying performance bounds. Our analysis uses control variate methods to produce a family of policy gradient algorithms, with several recently proposed algorithms being special cases of this family. We then provide an empirical comparison of these techniques with the remaining algorithmic details fixed, and show how different mixing of off-policy gradient estimates with on-policy samples contribute to improvements in empirical performance. The final algorithm provides a generalization and unification of existing deep policy gradient techniques, has theoretical guarantees on the bias introduced by off-policy updates, and improves on the state-of-the-art model-free deep RL methods on a number of OpenAI Gym continuous control benchmarks. 
AdaGAN: Boosting Generative Models (1701.02386) Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schölkopf May 24, 2017 cs.LG, stat.ML Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. Local Group Invariant Representations via Orbit Embeddings (1612.01988) Anant Raj, Abhishek Kumar, Youssef Mroueh, P. Thomas Fletcher, Bernhard Schölkopf Invariance to nuisance transformations is one of the desirable properties of effective representations. We consider transformations that form a group and propose an approach based on kernel methods to derive local group invariant representations. Locality is achieved by defining a suitable probability distribution over the group which in turn induces distributions in the input feature space. We learn a decision function over these distributions by appealing to the powerful framework of kernel methods and generate local invariant random feature maps via kernel approximations. We show uniform convergence bounds for kernel approximation and provide excess risk bounds for learning with these features.
We evaluate our method on three real datasets, including Rotated MNIST and CIFAR-10, and observe that it outperforms competing kernel based approaches. The proposed method also outperforms deep CNN on Rotated-MNIST and performs comparably to the recently proposed group-equivariant CNN. Annealed Generative Adversarial Networks (1705.07505) Arash Mehrjou, Bernhard Schölkopf, Saeed Saremi We introduce a novel framework for adversarial training where the target distribution is annealed between the uniform distribution and the data distribution. We posited a conjecture that learning under continuous annealing in the nonparametric regime is stable irrespective of the divergence measures in the objective function and proposed an algorithm, dubbed ß-GAN, in corollary. In this framework, the fact that the initial support of the generative network is the whole ambient space combined with annealing are key to balancing the minimax game. In our experiments on synthetic data, MNIST, and CelebA, ß-GAN with a fixed annealing schedule was stable and did not suffer from mode collapse. Personalized Brain-Computer Interface Models for Motor Rehabilitation (1705.03259) Anastasia-Atalanti Mastakouri, Sebastian Weichwald, Ozan Özdenizci, Timm Meyer, Bernhard Schölkopf, Moritz Grosse-Wentrup May 9, 2017 cs.HC We propose to fuse two currently separate research lines on novel therapies for stroke rehabilitation: brain-computer interface (BCI) training and transcranial electrical stimulation (TES). Specifically, we show that BCI technology can be used to learn personalized decoding models that relate the global configuration of brain rhythms in individual subjects (as measured by EEG) to their motor performance during 3D reaching movements. We demonstrate that our models capture substantial across-subject heterogeneity, and argue that this heterogeneity is a likely cause of limited effect sizes observed in TES for enhancing motor performance.
We conclude by discussing how our personalized models can be used to derive optimal TES parameters, e.g., stimulation site and frequency, for individual patients.
How to limit the proportion of a Force-sensitive population?

I'm making a world where a small part of the population are like Star Wars Force sensitives. They have high status in my world, as the sensitivity gives them certain advantages over ordinary people. A trained sensitive could defeat several Special Forces-type soldiers, though a dozen would probably kill him. Sensitives are over-represented at the top of society. Being born sensitive grants you a chance to become part of the elite. Having a sensitive among your family, friends and neighbors carries enormous prestige. It's the same as with aristocracy two centuries ago and movie/sports stars now.

A child born of two sensitives has a much larger chance of being a sensitive than a child of two ordinary citizens, though sensitivity is the exception, not the rule. Most children born to sensitive parents are not sensitive themselves, and sometimes a strong sensitive is born to normal parents. Sensitivity is a weakly heritable trait.

The country runs an extensive system for finding and identifying sensitive individuals, regardless of their class. Every sensitive must be trained, whether they or their parents like it or not. Those who run away are considered fugitives; those who help them are considered traitors. On one hand, an ordinary family could quickly rise in status for having a sensitive child, or even better several of them; on the other hand, once-preeminent families wither away for failing to do so.

The problem I face is that I want to limit the number of sensitives to a small part of the population; something like 1 in 10,000 would suffice. The question is: how to reconcile small numbers of sensitives with sensitivity being a hereditary trait?

The ideas that don't work in my setting are below:

High mortality rate, due to wars and infighting. However, female sensitives don't usually go to war unless there is an emergency. Therefore they could simply repopulate the pool.
Caste system where noble bloodlines keep the right genes among themselves. The problem is that sensitivity is a badly needed trait for the society. The country wants to have more sensitives, as they are very handy in wars with its neighbors. Being the ruling caste doesn't mean anything if your country is destroyed. The society doesn't care if your father is Michael Phelps or Joe Schmo: either you can swim fast or you can't. If you have potential you must join; it's not a destiny you can choose.

Reduced fertility. For women this might work, but the problem is the men, who could sire hundreds of children every year. Besides, even among sensitive couples the chance of having sensitive children is low. They might have one or none. This is an important part of my story, with preeminent families falling from grace when they are unable to produce sensitive offspring, similar to how a family could be devastated in ancient China when it was unable to have a member who passed the imperial exam and became a government official.

biology magic genetics

Saren

Hello @saren and welcome to Worldbuilding. At present it is a little bit hard to figure out what your actual question is. Here is what I normally recommend: structure a question in the form premise, problem, query. OK, you have a premise; this is all fine and well. But what is the actual problem that you are facing? What is it that has gotten you stuck in your creative process? Please define this. And once you have defined the problem, make the query: the who/what/when/where/how question that, if answered, solves your problem. Click "edit" and try this, all right? :) – MichaelK Mar 20 '17 at 11:04

Well the thing is: you have essentially answered your own question. There you have not just one but three workable solutions to your problem. :) – MichaelK Mar 20 '17 at 12:57

Weakness to disease for all the sensitives? It's a common thing in nature.
People of African descent have a weakness to sickle cell disease but lower mortality in the first 6 months of their life. – Mormacil Mar 20 '17 at 13:44

@Saren: You'd need to reduce it to a lot less than that! While siring hundreds might be theoretically possible, it's wildly unlikely in practice. In fact, if you look at historical nobilities, they typically have relatively few recognised children and probably a number of illegitimate offspring. Meanwhile, deaths in battle are rare among the nobility: they have the best armour, the best weapons, the best mounts, and guards to protect them if they fall. – Jack Aidley Mar 20 '17 at 14:30

This question is being discussed on meta: worldbuilding.meta.stackexchange.com/q/4719/28 – Monica Cellio♦ Mar 21 '17 at 3:17

Since sensitivity is somewhat heritable, how do you prevent it from spreading through the general population when you want sensitives to be as rare as 1 in 10,000?

Being blonde is heritable. You are much more likely to be blonde if both your parents are blonde, but there's a small chance a blonde boy or girl is born to non-blonde parents. You don't have to do anything about it; it's just how genetics works. In fact, you do want it to be spread among the general population, otherwise the probability of a sensitive being born in the general population would be zero. The genes carrying sensitivity to the force are "in the wild", but they are recessive, showing up in something like 1 in 10,000 births, though way more probable if both parents carry the gene.

Edit: about expanding the sensitive population

Evolution's ways are twisted and intertwined. In theory, any change that is beneficial for a species should grant a reproductive advantage, so it's more probable that its offspring survive while other individuals' offspring die, until this beneficial trait becomes commonplace.
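The recessive-gene arithmetic in this answer can be sanity-checked with a few lines of Hardy–Weinberg bookkeeping. This is a rough sketch of my own (the function names are invented for illustration, not code from any answer): an allele frequency of 1% yields exactly the 1-in-10,000 sensitives the question asks for, with roughly 2% of the population as silent carriers, and random mating without selection leaves the allele frequency where it started, so the sensitive fraction does not grow on its own.

```python
# Hardy-Weinberg sketch for a recessive "sensitivity" allele.
import random

def hardy_weinberg(q):
    """Genotype frequencies for a recessive allele at frequency q."""
    p = 1.0 - q
    return {"non-carrier": p * p, "carrier": 2 * p * q, "sensitive": q * q}

freqs = hardy_weinberg(0.01)
print(f"sensitives: 1 in {1 / freqs['sensitive']:.0f}")      # 1 in 10,000
print(f"carriers: {freqs['carrier']:.2%} of the population")  # ~2%

def next_generation_q(q, n=200_000, rng=random.Random(42)):
    """Allele frequency after one round of random mating, no selection.

    Each of n children draws one allele from each parent's gamete pool,
    where an allele is the sensitive variant with probability q.
    """
    draws = sum((rng.random() < q) + (rng.random() < q) for _ in range(n))
    return draws / (2 * n)

q = 0.01
for _ in range(5):
    q = next_generation_q(q)
print(f"allele frequency after 5 generations: {q:.4f}")  # stays near 0.01
```

Only selection (differential survival or reproduction) or deliberate breeding changes q between generations; random mating alone just reshuffles genotypes, which is why the rate can sit at a steady state indefinitely.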
However, that only works when there's huge pressure on the survivability of said species. For instance, let's look at giraffes. A long, long time ago, giraffes had necks like horses or zebras. Some of them had longer necks, some had shorter necks, but it wasn't an advantage per se. Then some kind of global crisis happened (a glaciation, a change in the rain patterns, whatever...), all the food was gone, and 99% of giraffes died, short- and long-necked alike. There was little to no grass at all, but giraffes can eat leaves too, so they resorted to them; very soon all the lower-hanging leaves were gone as well. And then, only then, having a longer neck became an advantage important enough that 99.9% of short-necked giraffes died while "only" 99.1% of long-necked giraffes died. The survivors were nearly all long-necked giraffes, and their offspring had even longer necks. In just a few dozen generations, horse-like giraffes turned into today's giraffes.

Without that kind of pressure, though, evolution is much more subtle or maybe even stalls; we don't know for sure. In our world, being filthy rich is an obvious advantage, but rich people don't have more children than poor people (on the contrary, I would say). Unless you design a world where the non-sensitive humans are doomed to go extinct due to an external cause, I don't think natural evolution would make this trait universal.

Now, if we are talking about selective breeding, that's an entirely different question...

Rekesoft

This works for me, but wouldn't the number of sensitives grow over time if the trait is very useful? – Saren Mar 20 '17 at 12:52

@Saren Don't accept an answer so early. Give it a day or two and you might get more and better responses. People are less likely to look at questions with accepted answers.
– kingledion Mar 20 '17 at 13:04

Your answer seems the best, though I think the proportion of sensitives would grow over time. However, if the growth is very, very slow nobody would notice, and events like being forced to send women sensitives into battle, where they could die, could kill that growth and keep the percentage in check. I will wait a day; if nothing better appears I will accept your answer. – Saren Mar 20 '17 at 13:04

@Saren With evolution, "useful" means "increased chance of having children who live until fertile age". Even with "force" being valued by society, it isn't necessarily associated with having more children unless something like polygamy for sensitives is enforced. It is actually reasonable for the sensitives to have fewer opportunities and less time for having a family. – Ville Niemi Mar 21 '17 at 8:38

@Saren: You say you feel that the proportion of sensitives would grow over time. It won't. Even if you start with a population that consists entirely of sensitives, the second generation will already have far fewer sensitives than the first, the third generation fewer still, until finally the percentage of sensitives reaches a steady state. Variables such as the percentage of sensitives who prefer other sensitives as sexual partners, and whether male sensitives are in demand among women, will only boost that steady state somewhat. – ArjunShankar Mar 21 '17 at 17:08

You can borrow an idea from Flatland and enforce reduced fertility as a consequence of this force. Force-bearing individuals will have a lower chance of being fertile, thus reducing the number of newborns potentially bearing the same trait.

Flatlandia? Isn't that the book/animation about higher dimensions? – Mormacil Mar 20 '17 at 11:27

Yes,
and the circular classes there are fertility limited. – L.Dutch♦ Mar 20 '17 at 12:02

I believe the English edition is called "Flatland". – NonlinearFruit Mar 20 '17 at 19:05

@NonlinearFruit, you are right, my bad. Fixed. – L.Dutch♦ Mar 20 '17 at 19:47

There are many ways to do this, so the following are broad categories:

Biological Reason for Low Incidence Rate: There is some genetic detail of the force that can't be easily passed on. Maybe sensitivity is a non-inheritable genetic mutation. Maybe it requires an additional chromosome, similar to a favorable Down syndrome or Turner syndrome. Maybe a mother, especially one who is not force sensitive, will be damaged by a force-sensitive fetus, resulting in a miscarriage or maternal death. There are many ideas for this, some interesting and some limiting storywise.

Lack of Reproduction Among the Upper Class: I don't think this takes much explanation, as it is a real phenomenon. Rich families often produce fewer children than individuals with less wealth. Explain this in your world however you like: they don't want to sacrifice their me-time raising them; private schooling (force training) is expensive; they are too busy working since they have the hard ruling-party jobs; their religion dissuades attachment or sex. This won't help a specific noble family, however, unless they wait too long to start trying.

Culling: It is dark but good for story. Someone seeks out and kills younglings. Maybe it is a secret organization. Maybe it is public knowledge. Be creative as to why. Maybe they aren't killed but kidnapped... how is that for a story?

Other Removal from the Gene Pool: Force users are likely to become infertile (or specifically unable to bring force-sensitive children to sexual maturity). This would be best if it has nothing to do with the force but is a result of inbreeding to try and produce force-sensitive children.
Accidental Self-Injury: Force users can do amazing things, but they don't realize the downside. Some force-sensitive children experience SIDS as they use their powers on themselves, without control, as infants. Older ones playing with their abilities don't know how limited their self-control is... until it is too late. Maybe all of them incur some degree of physical damage to their bodies through the use of their powers, and that leads to chronic illness or acute fatal symptoms a fraction of the time.

Asymmetric Heritage: This is different from 1 but similar. Midoclorians (I don't care enough to check spelling) are inspired by mitochondria. These are passed on only from mother to child. Maybe only the mother can pass on the force, so the father's procreation rate doesn't matter. Not knowing this, the matriarch of the family is the result of adultery/secret adoption and, as they bring in force-sensitive male suitors, nothing helps. The remaining force-sensitive men obviously can't do anything, as much as they get to try, unless they mate with a force-sensitive woman, who can't just be paid to act as livestock for breeding.

My current favorite is number 5.

kaine

Here are some possibilities. They each work for very specific "worlds":

Force use is not genetic. There are "force spirits" that are born into people. The number of force spirits is either fixed or grows at a specific rate.

Force use requires more than one gene change to produce results. It takes a specific combination of these genes to produce a force user. Also, if the combination isn't just right, it results in a terminal mutation. That would make force users appear to be randomly generated, because any time you had a population with almost the right set of genes, most of them would miscarry.

Most babies that can use the force die from it. Maybe the fetus reads too many minds and dies from shock; maybe it tries TK (as it kicks and moves its arms) and causes cellular damage while still in the womb.
This would result in only the force users who are too weak to manifest on their own surviving.

Personally, I favor the second option.

ShadoCat

This is like the Chinese Crested dog breed and the hairless gene: the hairless gene is dominant, but an embryo with two hairless genes is non-viable, so it's not possible to have a "pure" hairless dog; as such, a mating of two hairless dogs can always produce some hairy offspring. – Blake Walsh Mar 21 '17 at 22:40

@BlakeWalsh, That's really good. I hadn't thought of that angle in my answer. You should expand that into a full answer. – ShadoCat Mar 21 '17 at 23:37

@BlakeWalsh Sickle cell works the same way in people. It's actually mildly beneficial to have the trait recessively in (I think) malaria-prone regions, but it's close to fatal to have two copies of the sickle cell gene. – Mathily Mar 28 '17 at 21:25

@Mathily, yes. With one gene, malaria causes the red blood cell to deform and be removed from your system. With two copies of the gene, red blood cells deform spontaneously and can clog arteries, causing strokes and heart attacks. – ShadoCat Mar 28 '17 at 21:54

You are thinking about this right.

Untrained sensitivity: in the low parts of society, no one would know you are or can be sensitive. Without training it wouldn't be an advantage, so there is no evolutionary pressure for this trait. This would mean the trait may reasonably be expected to spread or not, just like other minor traits, such as different eye colors. If it's not helping you to survive or procreate, it's moot, evolutionarily. This opens up a place for dark cults, hidden shrines, rogue teachers and similar plot hooks. Emergency testing and teaching to get an army, at the cost of social crisis.

Someone untrained who can use it anyway would be a danger to himself. You can push someone, hard, but if you don't know how to stabilize your own body at the same time, you're breaking your spine.
You can jump 5 meters high, but you don't know how to land. And so on. If you are dead or severely injured, you don't procreate, and this would make force sensitivity a trait purged out of low class people, if evolution is allowed to work. Taking talented children into a higher caste and leaving their families behind. Just like Anakin was taken to a life of luxury and his mother left to die as a slave, and no one seemed to be disturbed. This would leave no sensitive parents in the social classes you don't want to have many sensitive children. This works well for force users, magic users, psi users, X-Men style mutants in a caste society, and so on. Mołot $\begingroup$ I don't see how this answers the question. $\endgroup$ – kingledion Mar 20 '17 at 13:05 $\begingroup$ @kingledion Why? All points show reasons why there wouldn't be many force users in low castes. Being unable to use it without training, killing yourself if using it without training, and being removed from the low caste if you are a force user all cause a low percentage of users in the low class, don't they? $\endgroup$ – Mołot Mar 20 '17 at 13:11 $\begingroup$ I'm just saying, if I can't see how you are answering the question, other people might not either. In that case, maybe edit your question to be more clear. I don't think you are connecting the dots between the three things you mention, and the reasons why that would limit the population. $\endgroup$ – kingledion Mar 20 '17 at 15:30 $\begingroup$ Most people wouldn't say Anakin was taken to a life of luxury, what with the strict asceticism and stoicism the Jedi Order demands. Not to mention the way Jedi must constantly risk their life and cannot retire. Though yes, forcing the force-sensitives into a life of chastity like the Jedi would limit their numbers. $\endgroup$ – EldritchWarlord Mar 20 '17 at 15:42 $\begingroup$ There's an extensive system in place for recruiting sensitive children regardless of their birth.
If Anakin was born in my world he would be quickly found and put into training while his mother would become upstanding citizen and live a life in luxury. Watto would not have any say in this. The state would compensate him for his slaves, for whatever value it seems fair, something like eminent domain in our world. $\endgroup$ – Saren Mar 20 '17 at 15:56 Magic has its own rules The Force is magic. You're going to be breaking the laws of scientific reality all over the place in order to have the Force at all, so why do you feel the need to tie the inheritance of your magic to genetics? Just have it behave how you want it to behave, it doesn't need explaining and it is highly unlikely that trying to tie it to a genetic cause will make it more satisfying. Jack AidleyJack Aidley How about noise? If the force is a field that force users are sensitive to, the number of people who are trained to manipulate it will cause a gratuitous amount of noise in 'the force' thus reducing the effectiveness of any one 'force user'. This would mean that it is in the best interest of the top of society, who are force sensitive and politically powerful, to limit the number of other people trained to effectively use the force in order to maximise their own power. This could potentially take the form of an oppressive dictatorship that ruthlessly crushes all other potential force sensitives, ensuring one or two force users remain supreme, or perhaps a quasi-religious military organisation that indoctrinates potential force users into a strict 'use only when necessary' policy. Joe BloggsJoe Bloggs $\begingroup$ Very interesting idea though I don't feel that it fits well with rest of my setting. $\endgroup$ – Saren Mar 20 '17 at 12:59 Since the "Force" is already a mystic element, you can simply combine a mystic element with the inheritable trait. The Force is a pool of energy. 
This energy is naturally generated by sentient life though rapidly disperses into the universe at large resulting in this energy being distributed relatively uniformly. The rough likelihood of a latent Force Sensitive person being able to manipulate this energy is proportionally related to the overall sentient population and inversely proportional (with a much stronger weighting) to the total population of active Force Users. This results in a rough ratio of 1 Force User per 10,000 sentient beings. A child of one or more Force Users (should the child be Force Sensitive), has a small (though not insignificant) increase in the likelihood of becoming a full fledged Force User. Children, in general (should they be Force Sensitive) are significantly more likely than an older Force Sensitive to develop into a Force User, should a "slot" become available. It is not unheard of for older people to spontaneously developing into Force Users, but it is rare. Note that should a large number of Force Users be killed, this method of generation will result in a similar number of Force Users swiftly developing, though a majority of them will likely be children and youth. Michael RichardsonMichael Richardson You might treat sensitivity like a theory concerning autism. Make it require both a genetic disposition coupled with some sort of external stimuli. You can adjust the incidence by making the external stimuli to a greater or lesser degree known and possibly harmful. For Example, once genetic disposition is determined, the mother must take some sort of catalyst that has a 70% chance of being fatal to the child, but only a 10% chance of triggering the change that would cause sensitivity. As an alternate method, make the genetic disposition a typical genetic regressive trait, but levels of sensitivity vary greatly. You might have 1 in 4 with a genetic marker for sensitivity, but only one in a 1000 has a worthwhile level of talent. 
The ones with a useful level of talent are the ones sought out, and you might have a pseudo underclass of those who didn't quite make the cut. Either way, genetics is complicated enough to support your basis for a sensitive meritocracy. You can look at History, and in spite of the nobilities attempts at breeding, you only ended up with a good king every once in a while. The Empires of Alexander the Great and Genghis Kahn only survived until their deaths and fell apart during the next generation. Paul TIKIPaul TIKI Reduced fertility. For woman this might work, but the problem are men who could sire hundreds of children every year. Besides even among sensitive couples the chance to have a sensitive children is low. You could posit that one factor that increases the chances of a Force-sensitive child to be born, rather than a normal one, is Force attunement between the parents. Which is more likely to grow with time and acquaintance. Without that, the genes are there, but the power doesn't awaken sufficiently. After several centuries of experimentation, it has then been found that to maximize the chances of an ESP birth, both parents would need to enter seclusion together for several months before conception and remain there for two or three months after the birth. At the very least they must avoid any intimacy with other ESP-endowed people, and to be more sure (anyone might have a slight attunement to the Force, and slightly decrease your attunement with your partner), with anyone else. When scrupulously following this regime, the chances of a Force birth increase from very little to perhaps one in ten. This has the possibly interesting side effect that people with no attunement whatsoever and as little Force genes as possible would be highly requested for the role of servants, as they would bring the minimal disruption to a secluded household. 
Another side effect is that a family - unless large enough - would have to choose between delving into high caste politics, and interacting daily with the Court, or have its fertile members secluded to churn out more children, thus losing influence. To cull the noble offspring, consider that a half-Forced boy or girl hasn't all that great a future to look at; so the temptation might exist to try and awaken the Force with some dangerous ordeal, with a ten percent chance of awakening the Force, and a fifty percent chance of ending up dead (I half-remember something of this kind in MZB's Darkover series - some dangerous way of awakening one's laran. I might misremember, though). LSerniLSerni The world is not static. Right "now", there is only 1 in 10000 Force Sensitive (FS), but it wasn't always so. And it will not always stay so. One possibility is that FS is a recent development, and ten thousand years ago there were none. Then somebody got lucky in the genetic/environmental lottery and the first Jedi was. They had more children than average, and those children had more children than average and so on, but it still a rare trait. In the future it will be more common. The other possibility is that they used to be more common, but for some reason most of them died. Maybe they were killed as witches, maybe they lost a civil war. Again, they will be more common in the future... unless there is a new witch hunt. (The above is my unique contribution, below I repeat points made by others to make this a complete answer) Weak inheritance You said that FS parents had an increased but still low probability of getting FS children. This can be achieved in two ways. The first is making the genetic basis very complex. Maybe you need one of two genes from group A, one of three from group B as well as one particular gene C. All of these genes are somewhat rare. Parents are likely to not carry more genes than needed, and can fail to pass them along in several ways. 
People having more than the minimum number of genes can be stronger FS, but not dramatically so. The other way is to make the genes only part of the equation. Having the right genes only gives you a chance, in addition you must get some boost from the environment. Maybe during pregnancy, maybe early childhood. E.g. a near death experience can "wake you up". Note that either way, weak inheritance will make evolution move even slower, making my first point more valid. Who cares anyway. The world is as it is, and the people in the story doesn't know why. The readers doesn't need to know why either. Nor does the writer. Stig HemmerStig Hemmer A Caste system, probably religious in origin. Sensitive children are forced into service. Perhaps the only way out is marrying into a noble family. Another way is to make the sensitivity come from a mutation first found among the nobility or even royal bloodlines. They simply kept it among themselves by inbreeding. You could even again use a caste system and reproducing with a lower caste is a sin for the sensitives. You could have some hidden romances throughout the ages that allow you for some very rare sensitives among the lower castes. Still caste breeding can be a very rigid system. It was very effective in India. We can still see the effects of caste divides in DNA these days. There was just that little contact between castes. For population reduction I'd go with a wasting disease that specifically targets sensitives or at least affects them much stronger. Perhaps regular people would only be carriers. This again could force a societal divide between the sensitives and the rest. MormacilMormacil $\begingroup$ I have something similar, sensitive woman are limited to marrying only sensitive men to increase the chance of having sensitive offspring. But castes do not exist, it doesn't matter if your father is Michael Jordan/Lebron James, you could score 25PPG or you can't. 
$\endgroup$ – Saren Mar 20 '17 at 12:57 You ask specifically about genetic inheritance. This means that a lot of the suggestions about alternative explanations are incorrect, especially the handwavium of "it's magic". If it is a simple hereditary trait, then it would end up a lot more common. Thus it is more likely for the Sensitivity to be a genetically complex trait, perhaps even bound to the specific sex chromosomes (making especially men (Y chromosome) or especially women (X chromosome) to be more likely to be Sensitive). Another way to look at it is by making Sensitivity be a sliding scale, like intelligence. To simplify, you could say everyone has 20 genes in their DNA structure that might or might not influence Sensitivity. If the gene has the Sensitive variant, the person becomes more Sensitive. On low levels this translate to good intuition, luck, athletic ability, etc. It all depends on which specific Sensitive genes are active for that person. To be actually Force Sensitive to a level that it can be trained, you would need to have 60% of these Sensitive genes activated. More activation, of course, results in a stronger Gift in the Force. But alas, the problem then comes for the family. Even if both parents have a large amount of active genes, these are recessive and unless both have the same ones, this will not inherit all the time. But their children will still be gifted above average, even if they are not actually Force Sensitives. At the same time, with all these genes flowing through the population, giving ever so slight advantages to people here and there.. it remains active and wanted within the population. Often people with similar genes come together due to similar interests and talents. And chances of their children having enough activation to count as Sensitive increases with each generation as like seeks like, without this becoming an actual certainty. We can then look at simple statistics and chance. 
If the amount of genes decide the chance of the child being Force Sensitive, then in the limited pool of FS people, it is noticeable not enough to get consistent results. Unless perhaps the exact genes are known and a breeding program is started. In the general population a chance of 0.001% is not much, but if you have a million babies born to people with the latent genes.. then 0.001% is still going to be a fair amount of Sensitive babies (0.001% of 1 million = 10). Finally, besides the presence of genes, there is also the activation of genes. If the genes never activate due to environmental influences, then one can have 100% Sensitive genes, yet not display any of it due to the genes being dormant because the body did not perceive a reason to activate these genes yet. Viruses interact with genetic activation, and could be a cause why the richer families are not specifically more successful in getting Sensitives. Make Force sensitivity a partially inherited trait, and give complete inheritance of the same genes a downside Over the course of evolution, purely beneficial traits tend to become more common in a population. Rare traits are sometimes controlled by multiple alleles where partial inheritance is beneficial but complete inheritance is detrimental, thereby ensuring that the trait remains rare but does not disappear. Maybe partial inheritance of Force sensitivity genes gives special powers, but complete inheritance of the same genes has a high chance of causing insanity due to too many psychic voices. Perhaps full inheritors tend to die in utero, effectively reducing the fertility rates of inbred families of sensitives. It doesn't have to be strictly single-gene Mendelian inheritance, but rather a higher chance of detrimental traits that increases over multiple generations of sensitive inbreeding. 
This will ensure that the genes remain "spread out" among the population and you don't wind up with "noble" families that keep all of the special genes for themselves. (Or there might be some families that do, but they will tend to have problems.) IndigoFenix So you need to be trained as well as having the natural ability. You could make it so people with the ability (without the training) don't even realise they can do it. In fact, special techniques and equipment which are tightly controlled are needed to detect force users. That way, the elite can enforce whatever number of users they like. $\begingroup$ You could even adopt the idea of a "critical window" for learning, so training must begin when the children are very young to work. $\endgroup$ – Jack Aidley Mar 20 '17 at 15:49 $\begingroup$ This is counter to what the question is asking. The elite don't want to limit or enforce the numbers, as I read it. They want to have as many as possible. I read the question to ask: so what principle or situation can balance this? $\endgroup$ – Loduwijk Mar 20 '17 at 17:20 I would suggest the combination of one and two. As you said, in times of great conflict force-sensitive females can also be drafted into the army. Just add a great series of large conflicts (World War 1 and World War 2 scale) that force the nations of the world to put every force-sensitive available on the front lines. Combine this with tying force sensitivity to ownership of land and wealth, and your force-sensitives have an incentive to keep the force sensitive population low. The more force-sensitives there are, the more land and wealth needs to be distributed among them, and the less each one has. This is why the nobility would often disinherit all but the first born: if they divided the lands equally, then each generation would have less. Bryan McClure $\begingroup$ Two doesn't work because families have no incentives to limit the number of sensitives.
If the Johnsons have 4 sensitive sons and they are strong/cunning/lucky/etc. enough to survive the training, war & internal politics, they would have 4 bread-winners. On the other hand, if the Smiths have only 1, he could be weak/foolish/unlucky/etc. and die, and they would lose their stipends & privileges. The Johnsons would get 4 palaces while the Smiths would get an apartment as a consolatory reward. $\endgroup$ – Saren Mar 20 '17 at 13:52 $\begingroup$ @Saren Yes, they do. A nobleman's wealth is not dependent on his Force sensitivity; it depends on his status as a nobleman and the land and privileges that come with it. That status is determined by Sensitivity; he doesn't even have to be a good Force user. So there's no reason for a family to have more than one or two Force users. Noblemen don't work for money; they receive taxes. They're not breadwinners. $\endgroup$ – Bryan McClure Mar 20 '17 at 15:05 $\begingroup$ It seems my nobility comparison doesn't fit. Having powerful sensitives in your family increases your status. The more of them you have in your pedigree, the closer they are to you and the stronger they are, the greater the positive effect on your status. To compare them with dogs, it looks for working lines, not show lines. The sensitive have status by how they perform, not by who their great-grandparent is. Oscar Robertson being your grand uncle is nice; being first cousin of Curry, Durant & James is much better. Even if you don't play basketball people will assume you have it in your blood. $\endgroup$ – Saren Mar 20 '17 at 15:32 Because the gene combination that activates those powers is all recessive. It's why the upper castes inbreed to improve their chances of having force sensitive children -- which in turn leads to all the other problems of inbreeding, resulting in higher mortality and deformity. nzaman $\begingroup$ Recessive genes are, almost always, broken genes. Non-functional variants of genes where the dominant form is able to produce function at reduced dosage.
It seems extremely unlike that "force sensitivity" could be the result of a broken gene. In fact, the idea of "force sensitivity" as any single gene seems pretty far fetched. Furthermore, inbreeding has to be really very extreme to produce large changes in mortality, and since force sensitivity is such a positive trait it seems unlikely it could overwhelm it. $\endgroup$ – Jack Aidley Mar 20 '17 at 15:47 $\begingroup$ @JackAidley: Note the word "combination". Imagine a dozen allele pairs, where it's nearly impossible for all of them to be recessive, and the dominant form doesn't really do much. You'd expect that the recessive (r-r) form should do nothing either, except the combination of those specific r-r pairs unlocks (literally) force abilities, where even one D-r out of the twelve pairs would not. One might even combine this with other recessive traits, e.g., only redheads can use the force. $\endgroup$ – nzaman Mar 20 '17 at 18:20 $\begingroup$ In your case, the children of two force-sensitives (i.e. two persons with only recessive alleles) would automatically also have only recessive alleles, i.e. also be force-sensitive. (In the original example of mendel, wrinkled peas only produce wrinkled peas.) Which is not what is wanted in the OP. I would more think of a combination of multiple genes, where most of them are dominant, but with some fertility reducing side-effects so they become rare. $\endgroup$ – Paŭlo Ebermann Mar 20 '17 at 23:14 Nothing. Your rules already allow for a limit proportion of force-sensitive population, as long as your numbers are right. Kind of. Proposition 1 : If the number of special children born of each special parent is less than 0.5, then the majority of special children will be born to non-special parents in an equilibrium situation. Proof: Since we have an equilibrium situation, the number of special people $S$ must remain the same from one generation to the next. 
However, if the number of special children born of each special parent is less than 0.5, then there are less than $0.5S$ special children born of special parents. The rest must thus come from normal parents. Proposition 2: If we let the number of special children born of each special parent be $s_s<1$, the number of special children born of each normal parent be $s_n$, and the ratio of special people all people be $S$, then in equilibrium situation, \begin{align} S = s_s\cdot S + s_n(1-S) \end{align} This proposition just states that the population of special people is a constant over a generation. In other words, the number of special people is equal to the number of special children born from special parents plus the number of special people born from normal parents. Now, if $S<< 1$, we have roughly \begin{align} S &= s_s\cdot S + s_n\\ s_n &= S - s_s \cdot S\\ & = S(1-s_s) \end{align} So, what does this mean? Well, it means if we want the ratio of special people to be $0.0001$, and we want each special parent to have 0.5 special children, then each normal parent must have $0.0001\cdot (1-0.5) = 0.00005$ special children. So, let us set up a scenario where that happens. Consider 2 special people who have children. Since they are of the elite class, maybe they will have a lot of children. Let us say, on average, they have 5 children. Our number tells us that 2 special people should have 1 special children. So the chance that a child is special, given that both parents are special is $20\%$. Consider 2 normal people who have children. Perhaps they only have 2 children, since they are not elite. On average, 2 normal people should have $0.0001$ special children. So the chance that a specific child of theirs is special is $0.00005$. Finally, let us consider a special parent with a normal parent. Let us say they have 4 children on average. They should have roughly $0.5$ special children. So each child has a $12.5\%$ chance of being special. 
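The equilibrium bookkeeping above can be checked with a short script (a minimal sketch in Python; `S`, the family sizes, and the 0.5-special-children figure are just the numbers assumed in this answer, not anything canonical):

```python
# Equilibrium model from the answer: the special fraction S stays constant
# across generations when S = s_s*S + s_n*(1 - S); for S << 1 this gives
# s_n ≈ S*(1 - s_s).

S = 0.0001   # assumed ratio of special people to all people
s_s = 0.5    # assumed special children per special parent

s_n = S * (1 - s_s)   # special children each normal parent must have
print(s_n)            # 5e-05

# Per-child chance of being special for the three pairings in the answer:
p_both_special = 1.0 / 5     # 2 special parents, 5 children, 1 special child
p_both_normal = 2 * s_n / 2  # 2 normal parents, 2 children, 0.0001 special
p_mixed = 0.5 / 4            # mixed pair, 4 children, 0.5 special children

print(p_both_special, p_both_normal, p_mixed)   # 0.2 5e-05 0.125
```

With these numbers, exactly half of each generation's special people are born to normal parents, which is the point of Proposition 1.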
This example, of course, might not fit the genetic model. However, you can change these numbers a bit to fit the genetic model, and ensure that each special parent has 0.5 special children on average, and each normal parent has 0.00005 special children on average. Fluidized Pigeon ReactorFluidized Pigeon Reactor This is inspired by Stig Hemmer's answer: It isn't just genetic Even if the society had technology at our level, they wouldn't necessarily know who would have the force. There are aspects of gene regulation that are influenced by a myriad of environmental factors, and not just the presence of some genes in your DNA. Things like DNA methylation and the necessity of trace elements/vitamins as well as the randomness inherent in the development of everyone's brain means there is a lot of post birth randomness to play with. Think of it like intelligence, it seems to be at least sort of heritable, but the much bigger impacts are environmental. Maybe the force is like being an outstanding genius. We could all learn to use the force a tiny bit, most of us top out at being able to move something that weighs an ounce, but there are a few of us who are profoundly gifted. Those who show the gift (like musical prodigies) are identified easily at a young age a shepherded into training programs to refine and build on their talent. Like music (and the force) it requires a great deal of training to build on the raw talent we identify in the occasional youth to create an adult who has real power. This would create a situation where there would probably be a lot of middlingly powerful people who might be able to pick up the remote, or even a baby, under ideal conditions, but who don't have the focus and talent to use their skill 'in the real world'. MathilyMathily $\begingroup$ I concur with this and i raise it, i think the force-sensitiveness is not genetic AT ALL. 
Skywalker's family was strong in it not because of genes but because of their presence in the axis of galactic events. $\endgroup$ – ThreeLifes Mar 29 '17 at 11:43 Not sure how broadly you need to limit the population, but if it's planet-local, you can induce conditions so that they're just not around. Most of them are usually off on missions saving the galaxy, negotiating truces, etc. Just like Star Wars Jedi Knights. Or have sensitivity be determined not by one trait, but by a combination of rare traits. Like black hair and blue eyes. asgawegasgaweg $\begingroup$ Hi, welcome to worldbuilding. It sounds like you've got an idea for an answer but it needs a little more fleshing out to fully address this question. $\endgroup$ – Lio Elbammalf Mar 20 '17 at 14:40 Go the Warhammer 40K route, scour the lands for "Psykers", humans with psionic potential and indoctrinate them via rigorous examinations to test their power and abilities, those who pass the stringent trials without death are indoctrinated some more before being put into a task force to find more psykers, while failures are placed on floating despot "black ships" to channel their energy to keep the god-emperor alive. of course this all centers around indoctrination. Crashie-JCrashie-J Easy. Cultivate a jedi religious or quasi-religious monastic order that separates the force sensitives from the rest of society and emphasizes the virtues of emotional moderation and asceticism. Your jedi order members will be trained to a high degree of capability and also be discouraged from 'disruptive' emotional distractions like marriages and girlfriends. If you make it an order that is not tied to a particular nation, but a world-wide system that emphasizes being apolitical and subordination to secular governments, it should be able to spread. Contributing a son or daughter to the order conveys an impressive amount of prestige, of course. 
Adam Miller If sensitivity is a single recessive gene, then you have to have it from both parents for it to show. If having a single gene is somewhat selected against, then the occurrence in the general population is low. So this becomes a gene like Tay-Sachs disease or Huntington's chorea. This, however, would make all the offspring of two sensitives be sensitive. Blue eyed parents have blue eyed kids. Brown eyed parents can carry the blue eyed gene masked, or be dual brown eyed. (Eye colour is a bit more complex. There are ways that blue eyed parents can have brown eyed kids, but it is rare.) Suppose that having a doubled sensitive gene makes you sterile. So there will be NO offspring from a sensitive? Ok. Let's make it more complicated. Dominance of insensitive is not complete. Children who are genetically Is (Insensitive from one parent, sensitive from the other) are trainable, can handle small tasks, but are not Jedi material. If not trained, they have oddities -- lucky at dice, unusually good marksmen, but not really remarkable. Let's make it a bit more complex. Being sensitive to the force is polygenetic. Suppose that it is 4 genes, all recessive. If you have none of the genes you have the sensitivity of a rock. Any one of the genes gives you some slight sensitivity, any 2 give you four times as much, any 3 nine times as much, and all 4 sixteen times as much. If each of these has an occurrence in the general population of 1 in 100, then 1% of the population has a trace, 1 in 10,000 has more than a trace, 1 in a million has 3 genes and is of Radagast the Brown capability (borrowing a metaphor), and all 4 genes is 1 in a hundred million, which allows you to use LS monogrammed handkerchiefs. That is still 10 per billion, or about 70 on planet earth. Adjust the occurrence of the genes to suit. This sort of thing allows you to create "Witches of Karres" universes where a people has self selected for these genes.
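The back-of-the-envelope figures above can be computed exactly with a small binomial sketch (Python; the 1-in-100 per-gene frequency and the 4-gene model are this answer's assumptions — note that the exact 3-gene figure picks up a factor of C(4,3)=4 that the rough "1 in a million" estimate drops):

```python
from math import comb

p = 0.01   # assumed chance of carrying any one sensitivity gene
n = 4      # number of independent sensitivity genes in this model

# Probability of carrying exactly k of the n genes
for k in range(n + 1):
    print(k, comb(n, k) * p**k * (1 - p)**(n - k))

# The full four-gene combination: p**4 = 1e-8, the "1 in a hundred
# million" figure; at ~7 billion people that is roughly 70 individuals.
print(p**4 * 7e9)   # ~70
```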
If you want to mix it up more, you can have each gene give specific traits/perceptions. (E.g. s1 gives perception of what isn't here. s2 gives you some perception of what isn't yet. s1 and s2 reinforce, giving you greater distance/time sensitivity. s3 is the ability to fine tune and discriminate. s4 gives you the feel of the galactic overmind, or gives you the ability to cloak your own footprint on the force.) Whatever. Add more genes but make some less probable. Make some lethal when combined. E.g. s5 reinforced with s6 makes you incredibly empathic, but so much so that if someone is killed near you, you may die from the empathic shock. Training affects all aspects. I think that there is a mistake in thinking that there are genetic or biological factors at all in force sensitiveness. First of all, the force is in all things; even Han Solo has shown remarkable speed and accuracy when shooting from the hip. Hutts and Neimoidians are believed to be naturally resistant to mental tricks, but this could be a product of their culture, which makes them wary of "things that look too good to be true" or "conveniently simple". So it would be possible for any individual to achieve a level of understanding and limited use of the force simply by training. Skywalker's family is riddled with internal emotional conflict and external galaxy-shaping conflict. That could very well be why they are strong in the force, and not their genes. I personally think that the force would regulate how many individuals achieve this kind of higher understanding of the force, and when a sufficient number is achieved there simply won't appear any more, no matter the trainings or breeding programmes. ThreeLifes
Preprint ARTICLE | doi:10.20944/preprints201703.0208.v1 Closed Expressions of the Fibonacci Polynomials in Terms of Tridiagonal Determinants Feng Qi, Jing-Lin Wang, Bai-Ni Guo Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: closed expression; Fibonacci number; Fibonacci polynomial; tridiagonal determinant; Hessenberg determinant Online: 28 March 2017 (03:11:06 CEST) In the paper, the authors find a new closed expression for the Fibonacci polynomials and, consequently, for the Fibonacci numbers, in terms of a tridiagonal determinant. Closed Forms for Derangement Numbers in Terms of the Hessenberg Determinants Feng Qi, Jiao-Lian Zhao, Bai-Ni Guo Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: derangement number; closed form; Hessenberg determinant; tridiagonal determinant; generating function; recurrence relation; derivative Online: 11 October 2016 (10:53:07 CEST) In the paper, the authors find closed forms for derangement numbers in terms of the Hessenberg determinants, discover a recurrence relation of derangement numbers, present a formula for any higher order derivative of the exponential generating function of derangement numbers, and compute some related Hessenberg and tridiagonal determinants. Does Ethiopian Competitive in Export of Coffee so far and What Determines it? Evidence from Revealed Comparative Advantage and Autoregressive Distributed Lag Model Diriba Hordofa Subject: Social Sciences, Accounting Keywords: ARDL; Coffee; Competitiveness; Determinant; RCA Online: 2 April 2021 (11:32:05 CEST) As an export commodity, the coffee industry contributes to the economies of both exporting and importing countries. The aim of the study is to examine the competitiveness and determinants of coffee export in Ethiopia using observations over the period 1990–2018.
To measure the level of comparative advantage and competitiveness, the Revealed Comparative Advantage (RCA) and Symmetric Revealed Comparative Advantage indices were employed, respectively. To capture the determinants of coffee export, an ARDL model with the bounds-testing co-integration approach was employed to investigate the long-run association between Ethiopia's total coffee export in bags (60 kg each) and domestic coffee production, world coffee price, real exchange rate, FDI, world coffee production, and the price ratio. Even though Ethiopia has a comparative advantage in the export of coffee, its share in the international market is low and not in line with the RCA. The bounds-testing co-integration results confirmed the existence of a long-run relationship between Ethiopia's total coffee export and its independent variables. The analysis pointed out that, in the long run, domestic coffee production, world price, and the real exchange rate positively and significantly affect total coffee export, whereas FDI, the price ratio, and world coffee production have negative and significant effects. In the short run, Ethiopian total coffee export is a positive and significant function of domestic coffee production and the real exchange rate, a positive but insignificant function of the level of RCA and the world price, and a negative function of FDI, the price ratio, and world coffee production. The error correction coefficient (ECM(-1)) was negative and significant, with a value of 134.4%, implying that the adjustment made each year would return the system to its long-run equilibrium after about 1.3 years. The policy implication calls for addressing the combined effects of the policy setting, institutions, and market failures to avoid adverse effects on the sector.
Determinants of Pesticide Application in Nepalese Vegetable Farming: An Empirical Analysis using Multivariate Probit Model Arun GC, Kiran Ghimire Subject: Social Sciences, Econometrics & Statistics Keywords: Pesticides, Vegetable, Nepal, Determinant, Multivariate Probit Online: 29 November 2017 (13:27:57 CET) Pesticides are currently a global core concern: they are a boon to farmers against increasing disease and pest pressure, while, simultaneously, pesticide residues are a major anxiety regarding human health. For that reason, identification and determination of the factors affecting the application of pesticides are essential. To identify and evaluate the determinants of pesticide application in Nepal, a household survey of 300 households was carried out and an empirical analysis was done using a multivariate probit model. Powder and liquid forms of pesticides in vegetable farming, for the summer and winter seasons, were assigned as the outcome variables. Likewise, socio-economic, demographic, farm-level, and perception data were considered as explanatory variables. Use of chemical fertilizers, age and gender of the head of household, household size, and access to weather information were found to be the most influential factors. Moreover, forms of pesticides and growing seasons were found to be complementary to each other. Therefore, policy options should be devised to balance the needs of farmers and the health of consumers.
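The reason for preferring a multivariate probit over separate univariate probits can be illustrated with a tiny simulation of its data-generating process. Everything below (the coefficients, the error correlation, the variable labels) is invented for illustration; it is not the study's data or estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# One shared explanatory variable (think: intensity of chemical fertilizer use).
x = rng.normal(size=n)

# Latent utilities for two binary pesticide-use outcomes with correlated errors;
# this joint error structure is exactly what a multivariate probit models.
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
y1 = (0.8 * x + eps[:, 0] > 0).astype(int)  # e.g. powder-pesticide use
y2 = (0.5 * x + eps[:, 1] > 0).astype(int)  # e.g. liquid-pesticide use

# The outcomes stay correlated even after conditioning on x, through rho --
# the dependence that a pair of separate probits would miss.
print(np.corrcoef(y1, y2)[0, 1])  # clearly positive residual correlation
```

Fitting the two equations independently would recover the slopes but discard rho, which is the "complementarity" between outcomes the abstract refers to.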
The Sharp Bound of the Hankel Determinant of the Third Kind for Starlike Functions with Real Coefficients Oh Sang Kwon, Young Jae Sim Subject: Mathematics & Computer Science, Analysis Keywords: starlike functions; Hankel determinant; Carathéodory functions; Schwarz functions Online: 17 July 2019 (08:41:14 CEST) Let ${\mathcal{SR}}^*$ be the class of starlike functions with real coefficients, i.e., the class of analytic functions $f$ which satisfy the conditions $f(0)=0=f'(0)-1$ and $\operatorname{Re}\{zf'(z)/f(z)\}>0$ for $z\in\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}$, and for which $a_n:=f^{(n)}(0)/n!$ is real for all $n\in\mathbb{N}$. In the present paper, the sharp estimates of the third Hankel determinant $H_{3,1}$ over the class ${\mathcal{SR}}^*$ are computed. Preprint SHORT NOTE | doi:10.20944/preprints201610.0034.v1 A Determinantal Expression and a Recurrence Relation for the Euler Polynomials Feng Qi, Bai-Ni Guo Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: determinantal expression; recurrence relation; Euler polynomial; Euler number; Hessenberg determinant In the paper, by a very simple approach, the authors establish an expression in terms of a lower Hessenberg determinant for the Euler polynomials. By the determinantal expression, they find a recurrence relation for the Euler polynomials, and they also derive the corresponding expression and recurrence relation for the Euler numbers. A Conjecture on the Solution Existence of the Navier-Stokes Equation Bohua Sun Subject: Physical Sciences, Fluids & Plasmas Keywords: solution existence condition; the Navier-Stokes equations; velocity gradient; tensor determinant Online: 9 July 2020 (12:25:22 CEST) For the solution existence condition of the Navier-Stokes equation, we propose a conjecture as follows: "\emph{The Navier-Stokes equation has a solution if and only if the determinant of the flow velocity gradient is not zero, namely $\det (\bm \nabla \bm v)\neq 0$.}"
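The tridiagonal-determinant expression for Fibonacci numbers announced in the first preprint above can be checked numerically. The matrix used below is a standard construction assumed here for illustration, not necessarily the exact matrix of that paper: 1 on the diagonal, 1 on the superdiagonal, and -1 on the subdiagonal. Cofactor expansion along the last row gives $D_n = D_{n-1} + D_{n-2}$ with $D_1 = 1$ and $D_2 = 2$, so $D_n = F_{n+1}$:

```python
import numpy as np

def fib_det(n):
    """Determinant of the n x n tridiagonal matrix with 1 on the diagonal,
    1 on the superdiagonal and -1 on the subdiagonal; equals F_{n+1}."""
    m = np.eye(n)
    for k in range(n - 1):
        m[k, k + 1] = 1.0   # superdiagonal
        m[k + 1, k] = -1.0  # subdiagonal
    return round(np.linalg.det(m))

# Cofactor expansion along the last row yields the Fibonacci recurrence.
print([fib_det(n) for n in range(1, 9)])  # [1, 2, 3, 5, 8, 13, 21, 34]
```

The printed values are the Fibonacci numbers $F_2$ through $F_9$.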
Modeling and Simulation of a Metal Oxide Lightning Surge Arrester for 132kV Overhead Transmission Lines Aziz Ullah Khan CECOS University of Information Technology and Emerging Sciences Peshawar, Peshawar 25100, Pakistan Corresponding Author Email: [email protected] EJEE | https://doi.org/10.18280/ejee.224-510 This paper demonstrated the design of a Metal Oxide Surge Arrester for a 132 kV system with a rated voltage of 120 kV according to specifications.
The study model was chosen to be the Pinceti model, which is a derivation of the IEEE standard model of lightning arrester design. The design specifications for the lab tests on the 120kV rated arrester for ZnO material were obtained from the catalogue of Ohio Brass Pvt. Ltd. The parameters for the lumped components were derived from the manufacturer's data sheet, while the non-linear characteristic was derived from curve fitting based on the Pinceti curves provided in the literature, using the Matlab Curve Fitting Tool. The design was simulated on the EMTP-RV commercial software and the results before as well as after optimization are presented. A cross comparison with the manufacturer's data results in 1.113% relative error, which is competitive with similar designs for different rated and system voltages in the literature. The study presents an improved model of a metal oxide arrester for a 132kV system, with its lumped and exponential parameters presented in detail. metal oxide surge arrester, lightning surge arrester, simulation, MOV, residual voltage, over-voltage, CFOV, EMTP-RV Power system outages are affected by many direct and indirect factors, of which lightning is the most important one. Lightning causes disruption in the power distribution system that directly or indirectly affects the end consumer. Lightning occurs because of friction between oppositely charged clouds. Lightning strikes usually target conductive material as well as electrically charged regions of the earth, such as power transmission systems, antennas, and other conductive surfaces [1]. It is a natural phenomenon in which millions of volts and thousands of amperes of current strike the stratosphere or ground where conductive or electrical equipment is present. This phenomenon produces a breakage in the insulating layers of air between the clouds and the earth, so a visible flash of light is seen [2]. The waveform of a strike for 8/20 µs is shown below: Figure 1.
Standard waveform of a lightning strike Decades of observation of lightning strikes, together with lab experiments using Tesla coils and artificially produced lightning, led to the above waveform, which roughly represents the type of waveform possible in nature. Figure 1 shows the main categorization of a lightning strike, which reaches its peak value in 8 microseconds and then drops to half its peak value in 20 microseconds [3]. An underground transmission system is completely insulated below the layers of soil, while an overhead transmission system is exposed to air as well as lightning strikes. In third world countries, overhead distribution systems are a common practice. For a lightning strike to damage a distribution system, it does not need to touch the transmission system physically. All it has to do is strike in the vicinity of the transmission system. Such a lightning strike in the vicinity of a distribution system typically induces about 300 thousand volts in the nearby electrical transmission lines. There is no way of safeguarding the distribution system or the nearby transmission lines from such a high voltage. Typical protective techniques such as compensators are not enough to safeguard the nearby transmission lines from the effects of an induced voltage of this order [4]. In a normal overhead transmission system, insulator strings are attached as a safety measure that can absorb voltage swells up to specific limits. This limit is called the Critical Flashover Voltage (CFOV) rating. CFOV is defined as "the maximum value of the absorption power of the insulator string in a voltage swell phenomenon in the distribution system". If this rating is exceeded, the transmission system can be damaged by the voltage swell, resulting in damage to very expensive equipment in the nearby substation.
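The 8/20 µs shape described above (rise to peak in about 8 µs, decay to half the peak by about 20 µs) is often sketched as a double-exponential impulse. The time constants below are illustrative choices made here for the sketch, not the EMTP-RV source parameters used later in this study:

```python
import numpy as np

def surge(t, i_peak=10e3, tau_tail=20e-6, tau_front=3e-6):
    """Double-exponential approximation of an impulse current waveform.
    tau_tail and tau_front are illustrative time constants chosen so the
    wave rises within a few microseconds and decays over tens of them."""
    raw = np.exp(-t / tau_tail) - np.exp(-t / tau_front)
    return i_peak * raw / raw.max()  # normalize so the peak equals i_peak

t = np.linspace(0.0, 50e-6, 5001)
i = surge(t)
t_peak = t[np.argmax(i)]
print(t_peak * 1e6)  # peak occurs a few microseconds after onset
```

Standards-grade waveshapes tune the constants so the virtual front time is exactly 8 µs and the time to half-value is 20 µs; the sketch only reproduces the qualitative shape.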
The phenomenon of voltages exceeding the critical flashover voltage rating of the insulator string is called a flashover. Research is carried out worldwide to test this threshold limit of an insulator string, or of other insulating equipment, by creating an artificial flashover. Table 1. Arrester models. IEEE model: $L_1 = 15d/n\ [\mu H]$, $R_1 = 65d/n\ [\Omega]$, $L_0 = 0.2d/n\ [\mu H]$, $R_0 = 100d/n\ [\Omega]$, $C = 100n/d\ [pF]$, where $d$ is the estimated height of the arrester in meters and $n$ is the number of parallel columns of MO in the arrester. Pinceti model: $L_1 = \frac{1}{4}\bigl(\frac{U_{r1/T2}-U_{r8/20}}{U_{r8/20}}\bigr)U_r\ [\mu H]$, $L_0 = \frac{1}{12}\bigl(\frac{U_{r1/T2}-U_{r8/20}}{U_{r8/20}}\bigr)U_r\ [\mu H]$, $R_0 = 1\ \mathrm{M}\Omega$, where $U_r$ is the rated voltage, $U_{r1/T2}$ is the residual voltage at a 10 kA fast-front current surge of shape $1/T_2\ \mu s$, and $U_{r8/20}$ is the residual voltage at a current surge of $8/20\ \mu s$ shape. Fernandez-Diaz model: $L_1 = \frac{2}{5}\frac{U_{r8/T2}-U_{ss}}{U_{r8/T2}}U_r\ [\mu H]$, $C = \frac{1}{55}\frac{U_{r8/T2}-U_{ss}}{U_{r8/T2}}U_r\ [pF]$, where $U_r$ is the rated voltage, $U_{r8/20}$ is the residual voltage at a 10 kA current surge of $8/20\ \mu s$ shape in kV, and $U_{ss}$ is the residual voltage at $500\ \mu s$ or $30/70\ \mu s$, in kV. In the past few decades, transmission line arresters (TLAs) have shown promising results for protection of distribution systems at the substation as well as the transmission line level [3, 5, 6].
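As a quick numerical sketch of the Pinceti formulas in Table 1, the snippet below plugs in the 120 kV rating together with the 322 kV and 291 kV residual-voltage figures quoted later in this paper; pairing those two figures with $U_{r1/T2}$ and $U_{r8/20}$ is an assumption made here purely for illustration:

```python
def pinceti_inductances(u_r, u_r1_t2, u_r8_20):
    """Pinceti lumped inductances in microhenries.
    u_r: rated voltage (kV); u_r1_t2: residual voltage at a 10 kA fast-front
    surge (kV); u_r8_20: residual voltage at a 10 kA, 8/20 us surge (kV)."""
    k = (u_r1_t2 - u_r8_20) / u_r8_20
    l1 = 0.25 * k * u_r           # L1 [uH]
    l0 = (1.0 / 12.0) * k * u_r   # L0 [uH], always L1/3 by construction
    return l0, l1

# Illustrative values: Ur = 120 kV, residual voltages 322 kV and 291 kV.
l0, l1 = pinceti_inductances(120.0, 322.0, 291.0)
print(l0, l1)  # roughly 1.07 uH and 3.20 uH
```

Note that this back-of-envelope $L_1$ is of the same order as the tuned value of 2.905 reported in Section 3.6, which is what makes the Pinceti estimate a reasonable starting point before fine-tuning.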
Transmission line arresters are specifically designed electrical circuits that can absorb electrical power above a certain rating, up to a certain limit. Most of the absorbed power is diverted to the nearby ground level and is discarded from the distribution system safely, while the remainder is dissipated as heat. Table 1 describes the surge arrester models. Overvoltages arise both naturally and from system faults in the electrical power system. They are classified into two classes: switching over-voltages and lightning over-voltages. When designing the extra-high-voltage parameters of a power distribution system, switching over-voltages must be taken into account. For instance, designing an arrester for an extremely high-voltage line requires the 1/2 μs front-of-wave (FOW) switching surge analysis [7]. Of specific interest is the lightning strike that occurs in the vicinity of, or directly on, a substation or a part of a power distribution system. Lightning strikes are typically in the range of millions of volts, with thousands of amperes of current. A direct hit on a transmission system causes drastic damage to the equipment if the system is unprotected. However, the lightning strike need not hit the transmission system directly; a nearby hit can also cause the collapse of the insulating air field and induce voltages and currents in the transmission system. The induced voltages can reach as high as 300 kilovolts, with currents as large as 10000 amperes [3]. In the research [8], the authors proposed an Ultra High Voltage TLA, which was tested on an 828kV overhead transmission line system. The design was tested in parallel with the existing fixed arrester. The tests were carried out in collaboration with an energy supplier company, and the arrester proved its working for the said purpose. However, TL protection parameters were not calculated in that paper.
A study in Malaysia was conducted recently on the design and selection of a surge arrester for a 500kV overhead transmission line system. It concluded that a 132kV arrester with suitable modifications can be made to work for the 500kV line, and that the surge arrester must be installed in order to protect the line [9]. This paper presents a detailed study of the working of a transmission line arrester installed on an overhead transmission line system for protection of the nearby substation as well as the transmission system. Designing a metal oxide surge arrester without a spark gap is a difficult task due to its dynamic behavior. Many researchers have followed a modern evolutionary-algorithm approach to find the best possible set of values of the arrester for a given rated voltage. However, as proposed by IEEE, the trial-and-error method of deriving the design parameters is a simple approach before fine-tuning them. The methodology is therefore: i. Derivation of the nonlinear resistor values for a Pinceti model by curve fitting in Matlab for the raw model of the arrester. ii. Feeding these design values to EMTP-RV for dynamic adjustment, utilizing the ZnO Fitter Routine (a form of automatic dynamic adjustment function) [10]. iii. Testing, evaluating, and cross-validating the results of each fitting run against commercially expected waveforms, before tuning again. iv. If the error is less than 11% for sufficiently many experiments, stopping the routine and plotting results; otherwise going back to the first step. v. Cross-comparing with the latest journal results on Metal Oxide arrester design errors for 132 kV overhead transmission line systems. The methodology is also shown in Figure 2 as a flow chart for better realization: Figure 2. Flow chart of the research method 3.
Results and Analysis 3.1 Simulation results The simulation scenario chosen in this study has been the focus of many researchers throughout the last decade, and the studies that have followed the same model are cited as Refs. [11-15]. The usual analysis requirement demands the elimination of every possible transient effect from the transmission line, especially the tower containing the arrester. This is necessary for correct justification of the analysis and results of the arrester model. Figure 3 shows a simulation environment of twelve towers with a 2-kilometer river crossing and 30 kilometers to the source generator. A 30-kilometer distance from the generator is taken in order to avoid travelling-wave effects on the arrester. Each consecutive tower is placed 300 meters from the previous one. Twelve consecutive towers are placed so that the reduction of lightning surges can be observed if required. Figure 3. The simulation scenario for this study 3.2 Lightning model simulation The lightning surge has been designed on a previously proposed model; the time series of the lightning voltage surge and the current surge are shown in Figures 4 and 5, respectively: Figure 4. Max value of the lightning source Figure 5. Lightning current waveform (8/20 μs) showing peak value of ~10kA The maximum value of the lightning strike is about 1 MV on an 8/20 μs waveform. 3.3 Curve fitting results in Matlab The Matlab Curve Fitting Toolbox is applied to the values obtained from the IEEE group for A0 and A1, yielding equations from which the dependent value for any independent input can be found. The graphs shown in Figures 6 and 7, for A0 and A1 respectively, are generated with the general model power2 with 95% confidence bounds. Figure 6.
Curve fitting result for A0 The curve-fitting routine run for A0 gives a binomial power fit with the following equations. General model Power2: f(x) = a*x^b + c (1). Coefficients (with 95% confidence bounds): a = 11.62 (4.257, 18.98), b = 0.428 (0.2639, 0.5922), c = 112.8 (105.2, 120.4). Goodness of fit: SSE: 28.9, R-square: 0.982, Adjusted R-square: 0.978, RMSE: 1.792. Similarly, for A1: a = 114.8 (-403.7, 633.3), b = 0.04436 (-0.1503, 0.2391), c = -11.61 (-528.7, 505.5), SSE: 31.24, R-square: 0.9582, Adjusted R-square: 0.9489. 3.4 Residual voltages The time series of the residual voltage data is shown in Figure 8: Figure 8. Residual voltage in arrester with peak value shown The residual voltage test shows a peak value of 194kV, which is lower than the maximum residual voltage obtained from the manufacturer specification, 291kV, and satisfies the model requirements. 3.5 Footing currents Footing current is the term used for the current flowing in the low-impedance resistor at the base of the tower. It provides a highly conductive path for the current appearing on the arrester in the event of a lightning strike. Typical values of this resistor lie between 20 and 25 ohms. Figure 9 shows a discharge current maximum value of 4.79kA, which passes through the footing resistor and drops to 0 amperes at about 25µs, which shows the stability of the arrester. Figure 9. Footing current of the arrester showing peak value 3.6 Simulation results after parameter adjustment For a 20 μs simulation duration, after adjusting the parameter L1 from its default value to 2.905 over multiple runs of the simulation, the resulting waveform shows the promising results given below: Figure 10. Residual voltage on the designed arrester Figure 10 above shows a peak residual voltage of 294.239 kV, which is up to the standards, and the consistency is assured as well. 3.7 Design parameters summary The arrester design parameters are tabulated in Table 2: Table 2.
Designed parameters of the arrester 23Ω 3.8 Analysis The analysis of the designed arrester is done visually from the graphs, considering the various requirements of the manufacturer as well as the design criteria. However, part of the analysis is done mathematically to prove the arrester's efficacy at the desired voltage rating. For this study, the following parameters are analyzed mathematically to justify the design. 3.8.1 Lightning impulse withstand voltage – LIWV The lightning impulse withstand voltage is defined by the following equation: $\mathrm{LIWV} = \mathrm{BIL}/V_{rPeak}$, giving $\mathrm{LIWV}_{8/20} = 550\ \mathrm{kV}/322\ \mathrm{kV} = 1.708$ (3). 322kV is the peak residual voltage from the manufacturer specification table provided in Table 3. The higher this value, the better the insulation level; so the LIWV value is satisfactory. 3.8.2 Energy absorption capability The instantaneous energy absorption at 0.5 μs is calculated as $E = Pt = VIt = 294239 \times 5000 \times 0.5 \times 10^{-6} = 735.59$ Joules at 10 kA in 0.5 μs. This value of energy is also in agreement with, and in fact improved over, previous designs. The total energy absorbed in 20 μs of the lightning strike is calculated as: Total Energy $= \int_{1009}^{9950} VI\,\Delta t\,dx = \int_{1009}^{9950} \vert -7786.95x+294239 \vert\,dx\,\Delta t = 378869094867.025\,\Delta t = 378869094867.025 \times 20 \times 10^{-6}$ Joules, so Total Energy $= 7.577$ MegaJoules (4). This value is greater than the required minimum of 5 kJ, which justifies the energy absorption capability of the designed arrester. 3.8.3 Relative error Referring to the manufacturer's data sheet reproduced in Table 3: Table 3.
Ohio Brass EP series 120kV arrester for a 132kV system — Continuous Voltage: 120 kV; 1/2µs @10kA Res Voltage: 98kV; 8/20µs @10kA Res Voltage: 322kV. The relative error is defined as: $\epsilon=\frac{V_{S}-V_{d}}{V_{d}} \times 100\%$ (5). In this study: $\epsilon=\frac{294.239\,kV-291\,kV}{291\,kV} \times 100\% = 1.113\%$. This value is targeted to fall between 0 and 4%, and the resulting value is quite good, as expected, and agrees with the manufacturer's specs; hence the designed arrester is improved in its relative error, LIWV, as well as its energy absorption capability. This paper demonstrated the design of a Metal Oxide Surge Arrester for a 132 kV system with a rated voltage of 120 kV according to specifications. The study model was chosen to be the Pinceti model, which is a derivation of the IEEE standard model of lightning arrester design. The design specifications for the lab tests on the 120kV rated arrester for ZnO material were obtained from the catalogue of Ohio Brass Pvt. Ltd. The parameters for the lumped components were derived from the manufacturer's data sheet, while the non-linear characteristic was derived from curve fitting based on the Pinceti curves provided in the literature, using the Matlab Curve Fitting Tool. The design was simulated on the EMTP-RV commercial software and the results before as well as after optimization are presented here. A comparison with recent publications has not been done because a similar study could not be found in recent years. However, a cross comparison with the manufacturer's data results in 1.113% error, which is competitive with similar designs for different rated and system voltages in the literature. The total energy absorption capability of the arrester is 7.577 MegaJoules, which is above the required minimum value of 5 MJ. Hence the design is satisfied.
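The three closing figures of merit, Eqs. (3)-(5), can be reproduced directly from the numbers quoted in this section. The integrand in Eq. (4) is the $|VI|$ line stated in the text; since it is linear over the integration interval, trapezoidal integration reproduces the quoted integral essentially exactly:

```python
import numpy as np

BIL_KV = 550.0          # basic insulation level
V_RES_PEAK_KV = 322.0   # peak residual voltage from the datasheet (Table 3)
V_SIM_KV = 294.239      # simulated peak residual voltage (Figure 10)
V_DATA_KV = 291.0       # datasheet residual voltage used in Eq. (5)

# Eq. (3): lightning impulse withstand margin.
liwv = BIL_KV / V_RES_PEAK_KV

# Eq. (4): total absorbed energy; trapezoidal rule on the quoted |VI| line.
x = np.linspace(1009.0, 9950.0, 2001)
y = np.abs(-7786.95 * x + 294239.0)
integral = float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x)))
total_energy_j = integral * 20e-6

# Eq. (5): relative error against the datasheet residual voltage, in percent.
rel_err = (V_SIM_KV - V_DATA_KV) / V_DATA_KV * 100.0

print(round(liwv, 3), round(total_energy_j / 1e6, 3), round(rel_err, 3))
# 1.708 7.577 1.113
```

The three printed values match Eq. (3), Eq. (4) in megajoules, and Eq. (5), confirming the arithmetic of the analysis section.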
The author of this study presents an improved model of a metal oxide arrester for a 132kV system, with its lumped and exponential parameters presented in detail. The improved design works in agreement with the manufacturer's desired configuration, for which the lumped parameters are determined. The process is elaborated and explained in detail. Nomenclature: BIL basic insulation level; C capacitance, F; CFOV critical flashover voltage; d estimated height of the arrester, m; EMTP electromagnetic transient program; FOW front-of-wave; I current, A; $L_{0}, L_{1}$ inductance, H; LIWV lightning impulse withstand voltage; n number of parallel columns of MO in the arrester; P power, W; $R_{0}, R_{1}$ resistance, Ω; RMSE root mean square error; SSE sum of squares error; t time, sec; TLAs transmission line arresters; $U_{r}$ rated voltage, V; $U_{r1/T2}$ residual voltage at 10 kA fast front current surge, V; $U_{r8/20}$ residual voltage at 10 kA current surge with 8/20 μs shape, V; V voltage, V; $V_{rPeak}$ peak residual voltage, V. [1] Ritenour, A.E., Morton, M.J., McManus, J.G., Barillo, D.J., Cancio, L.C. (2008). Lightning injury: A review. Burns, 34(5): 585-594. https://doi.org/10.1016/j.burns.2007.11.006 [2] Uman, M.A. (1994). Natural lightning. IEEE Transactions on Industry Applications, 30(3): 785-790. https://doi.org/10.1109/28.293729 [3] Abdel-Salam, M., Ahmed, N.A., Elhamd, I.S. (2005). Varistor as a surge protection device for electronic equipments. 2004 IEEE International Conference on Industrial Technology, IEEE ICIT '04, Hammamet, Tunisia, pp. 688-694. https://doi.org/10.1109/ICIT.2004.1490158 [4] Rakov, V.A., Rachidi, F. (2009). Overview of recent progress in lightning research and lightning protection. IEEE Transactions on Electromagnetic Compatibility, 51(3): 428-442. https://doi.org/10.1109/TEMC.2009.2019267 [5] Surtees, A.J. (2011). A review of IEC 62305-4 protection against lightning Part 4: Electrical and electronic systems within structures.
2011 7th Asia-Pacific International Conference on Lightning, Chengdu, pp. 478-481. https://doi.org/10.1109/APL.2011.6110170 [6] Flauzino, R.A., Moraes, L.A., Araújo, M.A., Altafim, R.A.C., Batista, O.E. (2015). Practical methodology for modeling and simulation of a lightning protection system using metal-oxide surge arresters for distribution lines. Electric Power Systems Research, 118: 47-54. https://doi.org/10.1016/j.epsr.2014.07.017 [7] Ali, S.A. (2013). Design of lightning arresters for electrical power systems protection. Advances in Electrical and Electronic Engineering, 11(6): 433-442. https://doi.org/10.15598/aeee.v11i6.661 [8] Latiff, N.A.A., Illias, H.A., Bakar, A.H.A., Dabbak, S.Z.A. (2018). Measurement and modelling of leakage current behaviour in ZnO surge arresters under various applied voltage amplitudes and pollution conditions. Energies, 11(4): 875. https://doi.org/10.3390/en11040875 [9] Islam, M.Z., Rashed, M.R., Yusuf, M.S.U. (2018). ATP-EMTP modeling and performance test of different type lightning arrester on 132kv overhead transmission tower. 3rd International Conference on Electrical Information and Communication Technology, 7-9: 1-6. https://doi.org/10.1109/EICT.2017.8275172 [10] Li, H.J., Birlasekaran, S., Choi, S.S. (2002). A parameter identification technique for metal-oxide surge arrester models. IEEE Transactions on Power Delivery, 17(3): 736-741. https://doi.org/10.1109/TPWRD.2002.1022797 [11] Hassan, N.H.N., Abu Bakar, A.H., Illias, H.A., Abd Halim, S., Mokhlis, H., Terzija, V. (2019). Analysis of discharge energy on surge arrester configurations in 132kV double circuit transmission lines. Measurement: Journal of the International Measurement Confederation, 139: 103-111. https://doi.org/10.1016/j.measurement.2019.02.088 [12] Martinez-Velasco, J.A., Castro-Aranda, F. (2005). Modeling of Overhead Transmission Lines for Lightning Studies. International Conference on Power Systems Transients IPST'05 in Montreal Canada, 18(1): 1-6.
[13] Derafshi Beigvand, S., Moradi, M. (2013). Comparison between different installation locations of surge arresters at transmission line using EMTP-RV. 28th International Power System Conference (PSC 2013), pp. 1-6. [14] Martinez-velasco, J.A., Castro-aranda, F. (2018). Assessment of the Lightning Flashover Rate of a Shielded Transmission Line Protected by Surge Arresters. Presented at the International Conference on Power Systems Transients (IPST'07) in Lyon. [15] Corellas, G. (2016). Transient overvoltages in gas insulated systems. UNIVERSITÀ DEGLI STUDI DI PADOVA Dipartimento di Ingegneria Industriale. http://tesi.cab.unipd.it/53422/1/Corellas_Giulia_tesi.pdf.
Iterated identities and iterational depth of groups Anna Erschler 1, Département de Mathématiques et Applications, École Normale Supérieure, 45 Rue d'Ulm, 75005 Paris, France Journal of Modern Dynamics, 2015, 9: 257-284. doi: 10.3934/jmd.2015.9.257 Received September 2014 Revised July 2015 Published September 2015 Given a word $w$ on $n$ letters, we study groups which satisfy the ``iterated identity'' $w$, meaning that for all $x_1, \dots, x_n$ there exists $N$ such that the $N$-th iteration of $w$ of Engel type, applied to $x_1, \dots, x_n$, is equal to the identity. We define bounded groups and groups which are multiscale with respect to identities. This notion of being multiscale can be viewed as a self-similarity condition for the set of identities satisfied by a group. In contrast with torsion groups and Engel groups, groups which are multiscale with respect to identities appear among finitely generated elementary amenable groups. We prove that any polycyclic group, as well as any metabelian group, is bounded, and we compute the iterational depth for various wreath products. We study the set of iterated identities satisfied by a given group, which is not necessarily a subgroup of a free group and not necessarily invariant under conjugation, in contrast with usual identities. Finally, we discuss another notion of iterated identities of groups, which we call solvability type iterated identities, and its relation to elementary classes of varieties of groups. Keywords: nilpotent groups, group identity, amenable groups, metabelian groups, Engel groups, elementary amenable groups, verbal dynamics, wreath products, universal law, solvable groups, polycyclic groups, Grigorchuk group, verbal map, torsion groups.
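As an illustration of the "Engel type" iteration mentioned in the abstract, the classical Engel condition can be written with iterated commutators (a standard definition recalled here for orientation, not a quotation from the paper):

```latex
\[
[x,{}_{1}\,y] := [x,y] = x^{-1}y^{-1}xy, \qquad
[x,{}_{n+1}\,y] := \bigl[\,[x,{}_{n}\,y],\,y\,\bigr].
\]
A group $G$ is an Engel group if for all $x,y \in G$ there exists
$N = N(x,y)$ such that $[x,{}_{N}\,y] = e$.
\]
The paper's iterated identities generalize this from the commutator word
$[x,y]$ to an arbitrary word $w$ on $n$ letters.
```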
Mathematics Subject Classification: Primary: 20E10, 20F69; Secondary: 20E22, 43A07, 20F16, 20F18, 20F19, 20F45, 20F5. Citation: Anna Erschler. Iterated identities and iterational depth of groups. Journal of Modern Dynamics, 2015, 9: 257-284. doi: 10.3934/jmd.2015.9.257
Google Scholar C. Chou, Elementary amenable groups, Illinois J. Math., 24 (1980), 396-407. Google Scholar E. S. Golod, Some problems of Burnside type, (Russian) in Proc. Internat. Congr. Math. (Moscow, 1966), Izdat. "Mir'', Moscow, 1968, 284-289. Google Scholar R. I. Grigorčuk, On Burnside's problem on periodic groups, (Russian) Funktsional. Anal. i Prilozhen., 14 (1980), 53-54. Google Scholar R. I. Grigorchuk, Branch groups, (Russian) Mat. Zametki, 67 (2000), 852-858; translation in Math. Notes, 67 (2000), 718-723. doi: 10.1007/BF02675625. Google Scholar L. Bartholdi and R. I. Grigorchuk, On parabolic subgroups and Hecke algebras of some fractal groups, Serdica Math. J., 28 (2002), 47-90. Google Scholar M. Gromov, Hyperbolic groups, in Essays in Group Theory, Math. Sci. Res. Inst. Publ., 8, Springer, New York, 1987, 75-263. doi: 10.1007/978-1-4613-9586-7_3. Google Scholar N. Gupta and S. Sidki, On the Burnside problem for periodic groups, Math. Z., 182 (1983), 385-388. doi: 10.1007/BF01179757. Google Scholar R. Guralnick, E. Plotkin and A. Shalev, Burnside-type problems related to solvability, Internat. J. Algebra Comput., 17 (2007), 1033-1048. doi: 10.1142/S0218196707003962. Google Scholar P. Hall, Finiteness conditions for soluble groups, Proc. London Math. Soc. (3), 4 (1954), 419-436. Google Scholar P. Hall, The Edmonton notes on nilpotent groups, Queen Mary College Mathematics Notes, Mathematics Department, Queen Mary College, London, 1969. Google Scholar W. Magnus, Beziehungen zwischen Gruppen und Idealen in einem speziellen Ring, (German) Math. Ann., 111 (1935), 259-280. doi: 10.1007/BF01472217. Google Scholar V. Nekrashevych, Self-Similar Groups, Mathematical Surveys and Monographs, 117, American Mathematical Society, Providence, RI, 2005. doi: 10.1090/surv/117. Google Scholar D. V. Osin, Elementary classes of groups, (Russian) Mat. Zametki, 72 (2002), 84-93; translation in Math. Notes, 72 (2002), 75-82. doi: 10.1023/A:1019869105364. Google Scholar E. L. 
Pervova, Everywhere dense subgroups of a group of tree automorphisms, (Russian) Tr. Mat. Inst. Steklova, 231 (2000), Din. Sist., Avtom. i Beskon. Gruppy, 356-367; translation in Proc. Steklov Inst. Math., (2000), 339-350. Google Scholar B. I. Plotkin, Notes on Engel groups and Engel elements in groups. Some generalizations, Izv. Ural. Gos. Univ. Mat. Mekh., 7(36) (2005), 153-166, 192-193. Google Scholar E. Ribnere, Sequences of words characterizing finite solvable groups, Monatsh. Math., 157 (2009), 387-401. doi: 10.1007/s00605-008-0034-6. Google Scholar J. G. Thompson, Nonsolvable finite groups all of whose local subgroups are solvable, Bull. Amer. Math. Soc., 74 (1968), 383-437. doi: 10.1090/S0002-9904-1968-11953-6. Google Scholar J. G. Thompson, Nonsolvable finite groups all of whose local subgroups are solvable. IV, V, VI, Pacific J. Math., 48 (1973), 511-592, ibid. 50 (1974), 215-297, ibid. 51 (1974), 573-630. doi: 10.2140/pjm.1973.48.511. Google Scholar J. S. Wilson, Two-generator conditions for residually finite groups, Bull. London Math. Soc., 23 (1991), 239-248. doi: 10.1112/blms/23.3.239. Google Scholar E. I. Zel'manov, Solution of the restricted Burnside problem for $2$-groups, (Russian) Mat. Sb., 182 (1991), 568-592; translation in Math. USSR-Sb., 72 (1992), 543-565. Google Scholar Elon Lindenstrauss. Pointwise theorems for amenable groups. Electronic Research Announcements, 1999, 5: 82-90. A. Yu. Ol'shanskii and M. V. Sapir. Non-amenable finitely presented torsion-by-cyclic groups. Electronic Research Announcements, 2001, 7: 63-71. Eldho K. Thomas, Nadya Markin, Frédérique Oggier. On Abelian group representability of finite groups. Advances in Mathematics of Communications, 2014, 8 (2) : 139-152. doi: 10.3934/amc.2014.8.139 Nir Avni. Spectral and mixing properties of actions of amenable groups. Electronic Research Announcements, 2005, 11: 57-63. Michel Coornaert, Fabrice Krieger. Mean topological dimension for actions of discrete amenable groups. 
Discrete & Continuous Dynamical Systems, 2005, 13 (3) : 779-793. doi: 10.3934/dcds.2005.13.779 Benjamin Hellouin de Menibus, Hugo Maturana Cornejo. Necessary conditions for tiling finitely generated amenable groups. Discrete & Continuous Dynamical Systems, 2020, 40 (4) : 2335-2346. doi: 10.3934/dcds.2020116 Steven T. Piantadosi. Symbolic dynamics on free groups. Discrete & Continuous Dynamical Systems, 2008, 20 (3) : 725-738. doi: 10.3934/dcds.2008.20.725 Ludovic Rifford. Ricci curvatures in Carnot groups. Mathematical Control & Related Fields, 2013, 3 (4) : 467-487. doi: 10.3934/mcrf.2013.3.467 Sergei V. Ivanov. On aspherical presentations of groups. Electronic Research Announcements, 1998, 4: 109-114. Benjamin Weiss. Entropy and actions of sofic groups. Discrete & Continuous Dynamical Systems - B, 2015, 20 (10) : 3375-3383. doi: 10.3934/dcdsb.2015.20.3375 Neal Koblitz, Alfred Menezes. Another look at generic groups. Advances in Mathematics of Communications, 2007, 1 (1) : 13-28. doi: 10.3934/amc.2007.1.13 Robert McOwen, Peter Topalov. Groups of asymptotic diffeomorphisms. Discrete & Continuous Dynamical Systems, 2016, 36 (11) : 6331-6377. doi: 10.3934/dcds.2016075 Hans Ulrich Besche, Bettina Eick and E. A. O'Brien. The groups of order at most 2000. Electronic Research Announcements, 2001, 7: 1-4. Światosław R. Gal, Jarek Kędra. On distortion in groups of homeomorphisms. Journal of Modern Dynamics, 2011, 5 (3) : 609-622. doi: 10.3934/jmd.2011.5.609 Marc Peigné. On some exotic Schottky groups. Discrete & Continuous Dynamical Systems, 2011, 31 (2) : 559-579. doi: 10.3934/dcds.2011.31.559 Paul Skerritt, Cornelia Vizman. Dual pairs for matrix groups. Journal of Geometric Mechanics, 2019, 11 (2) : 255-275. doi: 10.3934/jgm.2019014 Uri Bader, Alex Furman. Boundaries, Weyl groups, and Superrigidity. Electronic Research Announcements, 2012, 19: 41-48. doi: 10.3934/era.2012.19.41 Javier Pérez Álvarez. Invariant structures on Lie groups. 
Journal of Geometric Mechanics, 2020, 12 (2) : 141-148. doi: 10.3934/jgm.2020007 Martin Kassabov. Symmetric groups and expanders. Electronic Research Announcements, 2005, 11: 47-56. André Caldas, Mauro Patrão. Entropy of endomorphisms of Lie groups. Discrete & Continuous Dynamical Systems, 2013, 33 (4) : 1351-1363. doi: 10.3934/dcds.2013.33.1351 Anna Erschler
September 2019, 24(9): 5121-5148. doi: 10.3934/dcdsb.2019046

Long term behavior of stochastic discrete complex Ginzburg-Landau equations with time delays in weighted spaces

Dingshi Li 1, Lin Shi 1 and Xiaohu Wang 2
1 School of Mathematics, Southwest Jiaotong University, Chengdu, Sichuan 610031, China
2 Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, China
* Corresponding author: Xiaohu Wang, [email protected]

Received September 2018, Published February 2019

Fund Project: This work was supported by NSFC (11331007, 11601446, 11701475 and 11871049) and Excellent Youth Scholars of Sichuan University (2016SCU04A15)

In this paper, we investigate the long term behavior of the solutions to a class of stochastic discrete complex Ginzburg-Landau equations with time-varying delays and driven by multiplicative white noise. We first prove the existence and uniqueness of a random attractor in a weighted space for these equations. Then, we analyze the upper semicontinuity of the random attractors as the time delay approaches zero.

Keywords: Discrete complex Ginzburg-Landau equation, delay, random attractor, weighted spaces.

Mathematics Subject Classification: Primary: 35B40; Secondary: 35B41, 37L30.

Citation: Dingshi Li, Lin Shi, Xiaohu Wang. Long term behavior of stochastic discrete complex Ginzburg-Landau equations with time delays in weighted spaces. Discrete & Continuous Dynamical Systems - B, 2019, 24 (9): 5121-5148. doi: 10.3934/dcdsb.2019046
Uspekhi Matematicheskikh Nauk
Uspekhi Mat. Nauk, 1980, Volume 35, Issue 2(212)

Cluster expansions in lattice models of statistical physics and the quantum theory of fields (V. A. Malyshev), 3
A survey of Linnik's large sieve and the density theory of zeros of $L$-functions (A. F. Lavrik), 55
The actions of groups and Lie algebras on non-commutative rings (V. K. Kharchenko), 67
On the sums of trigonometric series (I. N. Pak), 91
Asymptotic representation of orthogonal polynomials (B. L. Golinskii), 145
David Aleksandrovich Kveselava (obituary) (N. P. Vekua, A. A. Dorodnitsyn, V. D. Kupradze, M. A. Lavrent'ev, G. S. Litvinchuk, T. A. Èbanoidze), 197

In the Moscow Mathematical Society. Communications of the Moscow Mathematical Society:
A Cantor limit set (Yu. S. Barkovskii, G. M. Levin), 201
The theory of residues in commutative algebras (M. M. Vinogradov), 203
The Poincaré polynomial of the space of form-residues on a quasi-homogeneous complete intersection (V. V. Goryunov), 205
Mobility and extension of fundamental sequences (S. Kotanov), 207
Necessary optimality conditions in smoothly-convex problems with operator constraints (L. I. Krechetov), 209
The absence of $L_2$-solutions for periodic partial differential equations (P. A. Kuchment), 211
The boundary of a set of stable matrices (L. V. Levantovskii), 213
An averaging principle and a theorem on large deviations for a family of extensions of a $Y$-flow (V. B. Minasyan), 215
Classification of flags of foliations (N. M. Mishachev), 217
Summability to $+\infty$ of Haar and Walsh series (N. B. Pogosyan), 219
The theory of Jordan algebras with a minimum condition (A. M. Slin'ko), 221
Rings whose quotient rings are all semi-injective (A. A. Tuganbaev), 223
Cohomology of groups and algebras of flows (B. L. Feigin), 225
A bound for the measure of the mutual transcendence of values of $E$-functions connected by arbitrary algebraic equations over $\mathbb{C}(z)$ (A. B. Shidlovskii), 227
Extensions of algebraic groups that are transitive on projective varieties (M. T. Èl'baradi), 229

Mathematical Events in the USSR:
Konstantin Ivanovich Babenko (on his sixtieth birthday) (L. R. Volevich, G. P. Voskresenskii, A. V. Zabrodin, A. N. Kolmogorov, O. A. Oleinik, V. M. Tikhomirov), 231
Nikolai Aleksandrovich Shanin (on his sixtieth birthday) (S. Yu. Maslov, Yu. V. Matiyasevich, G. E. Mints, V. P. Orevkov, A. O. Slisenko), 241
Georgii Dmitrievich Suvorov (on his sixtieth birthday) (P. P. Belinskii, V. I. Belyi, V. Ya. Gutlyanskii, M. A. Lavrent'ev, B. V. Shabat), 247
Sessions of the Petrovskii Seminar on differential equations and problems of mathematical physics (N. V. Krylov, M. V. Safonov, V. P. Maslov, L. A. Bunimovich, Ya. G. Sinai, S. M. Kozlov, V. E. Zakharov, A. G. Aslanyan, D. G. Vasil'ev, V. B. Lidskii, R. I. Nigmatulin, V. M. Petkov), 251
The Sixth Soviet–Czechoslovak Meeting on Applications of Methods of Function Theory and Functional Analysis to the Equations of Mathematical Physics and Computational Mathematics (J. Brilla, V. N. Maslennikova, V. S. Sarkisyan), 257

Reviews and Bibliography:
New books on mathematics (L. P. Kalinina), 263

Correction to the paper: "The problem of mass transfer with a discontinuous cost function and a mass statement of the duality problem for convex extremal problems" (V. L. Levin, A. A. Milyutin), 275
Stacked generative adversarial networks for image compositing

Bing Yu ORCID: orcid.org/0000-0002-1697-60891,2, Youdong Ding1,2, Zhifeng Xie1,2 & Dongjin Huang1,2

Perfect image compositing can harmonize the appearance between the foreground and background effectively so that the composite result looks seamless and natural. However, traditional convolutional neural network (CNN)-based methods often fail to yield highly realistic composite results because they depend too heavily on scene parsing while ignoring the semantic and structural coherence between foreground and background. In this paper, we propose a framework to solve this problem by training a stacked generative adversarial network with attention guidance, which can efficiently create a high-resolution, realistic-looking composite. To this end, we develop a diverse adversarial loss in addition to perceptual and guidance losses to train the proposed generative network. Moreover, we construct a multi-scenario dataset for high-resolution image compositing, which contains high-quality images with different styles and object masks. Experiments on synthesized and real images demonstrate the efficiency and effectiveness of our network in producing seamless, natural, and realistic results. Ablation studies show that our proposed network improves the visual quality of composite results compared with existing methods.

Image compositing is a fundamental technique in image editing that focuses on seamlessly integrating the foreground region of a source image into another target background. Ideally, a seamless composite can trick humans into believing that it is not a fake image. However, as shown in Fig. 1a, differences in appearance between the foreground and background, including illumination, lighting, white balance, and shading, severely reduce the fidelity of image composition.
Therefore, to achieve highly realistic compositing, it is necessary to eliminate differences in appearance between the original foreground region and the target background as much as possible.

Fig. 1. Comparison of compositing methods on a real cut-and-paste image. a Cut-and-paste. b MVCC [1]. c RC [2]. d DIH [3]. e DPH [4]. f Our proposed network

Early techniques performed gradient-domain blending [1, 5] or alpha matting [6] operations to refine the foreground region for seamless compositing. However, as shown in Fig. 1b, they ignored some essential consistency constraints; thus, their composite results often appear unrealistic. Subsequently, some harmonization methods [7, 8] attempted to yield seamless and realistic results by transferring the visual appearance, texture, and even noise patterns between images before gradient-domain compositing [5]. Unfortunately, they did not take into account global semantic and structure information and produced unrealistic composite results when the foreground region and target background were very different.

As a powerful learning method, the deep neural network has been successfully applied to various fields of image processing, including image compositing. However, traditional convolutional neural network (CNN)-based methods [2–4, 9] are still tentative and imperfect for high-fidelity compositing. As shown in Fig. 1c, the realism CNN method [2] generates composite results with unsatisfactory appearance through simple color parameter optimization. Deep image harmonization [3] was subsequently able to capture both context and semantic information from images through a joint training scheme in which the scene parsing decoders can control semantic organization and generate sharp content efficiently during compositing. However, if scene understanding fails, this method cannot produce a realistic composite result. As shown in Fig. 1d, due to some semantic errors, the composite effects of the deep image harmonization method are not sufficiently harmonized between the foreground and background. In addition, as shown in Fig. 1e, the recent deep painterly harmonization method [4] does not work well for adjusting the appearance of natural images.

Recently, several generative adversarial networks (GANs) [10, 11] have been introduced to achieve image compositing. Although these GAN models have the ability to harmonize composite images, they cannot solve all compositing issues, including appearance artifacts and high resolution. In particular, for high-resolution compositing, the GAN models that take context encoders as the generative network can only output composite results at a low resolution of 64×64. Thus, they cannot directly generate high-resolution composites, and gradient-based optimization is needed as a post-processing step to create high-resolution images.

In this paper, we propose a stacked generative adversarial network that can create realistic-looking image composites through an end-to-end network. As shown in Fig. 2, our model includes two generators, three discriminators, and multiple loss terms. The inputs to this network are a cut-and-paste composite image and its corresponding mask, and the output is a harmonized high-resolution composite result. Our new model constructs stacked generators and discriminators to harmonize the composite image and to determine whether a composite image looks realistic and natural. The generators are essential components for both training and testing, while the discriminators are auxiliary components used only during training. Furthermore, after building a multi-scenario high-resolution dataset, our network achieves stable training and faster convergence via a three-step scheme: (1) train generator G1 and all discriminators; (2) fix the parameters of generator G1 and train generator G2; and (3) jointly fine-tune the whole network.
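The three-step scheme can be sketched in pseudocode. The `Module` class and the `trainable` flags below are illustrative placeholders (the actual networks, losses, and optimizers are not shown); the sketch only records which components would receive gradient updates at each stage.

```python
# Schematic of the three-stage training scheme; Module is a stand-in for a
# trainable network, not the paper's actual implementation.
class Module:
    def __init__(self, name):
        self.name = name
        self.trainable = True

def trainable_modules(modules):
    """Names of the modules an optimizer would update at this stage."""
    return [m.name for m in modules if m.trainable]

G1, G2 = Module("G1"), Module("G2")
D1, D2, D3 = Module("D1"), Module("D2"), Module("D3")
all_modules = [G1, G2, D1, D2, D3]

# Stage 1: train generator G1 together with all discriminators.
G2.trainable = False
stage1 = trainable_modules(all_modules)

# Stage 2: freeze G1's parameters and train G2.
G1.trainable, G2.trainable = False, True
stage2 = trainable_modules(all_modules)

# Stage 3: jointly fine-tune the whole network.
for m in all_modules:
    m.trainable = True
stage3 = trainable_modules(all_modules)

print(stage1)  # ['G1', 'D1', 'D2', 'D3']
print(stage2)  # ['G2', 'D1', 'D2', 'D3']
print(stage3)  # ['G1', 'G2', 'D1', 'D2', 'D3']
```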
Fig. 2. Overview of the proposed stacked generative adversarial network. It consists of two generators and three discriminators: the output feature map of G1 is concatenated with the input image and serves as the input of G2. The discriminators D1 and D2 have an identical network architecture but operate at different image scales; D2 and D3 operate at the same image scale but have distinct architectures

Briefly, to reduce appearance differences between the foreground and background, our end-to-end network fully considers the texture and structure information of the background while effectively preserving the semantic consistency of the composite result. As shown in Fig. 1f, some appearance artifacts (e.g., illumination, contrast, noise, and texture) can be effectively eliminated by our new model; thus, the composite result is seamless and realistic. This paper makes four main contributions, summarized as follows:

- We propose a novel stacked generative adversarial network for high-resolution image compositing. It explores a cascaded attention-guided generation strategy and aims to achieve a realistic-looking composite result. Unlike the state-of-the-art GAN-based methods, our network generates a harmonized image in an end-to-end manner.
- We introduce the shift-connection layer [12] to the image compositing task. This layer can exploit long-range and multilevel dependencies across different features to guide generation, improving the structure and texture consistency of the composite image. By doing so, we combine the advantages of learning-based and exemplar-based methods and obtain a more realistic composite result compared with the state-of-the-art methods.
- We propose a specialized discriminator for high-resolution image compositing that employs diverse adversarial strategies at different scales to strengthen its ability to discriminate details.
- We build a multi-scenario dataset for high-resolution image compositing that mainly contains indoor and outdoor scenes with different styles. To our knowledge, this is the first high-resolution publicly available dataset for image compositing.

The organization of this paper is as follows. Section 2 briefly reviews the existing relevant works. Section 3 describes the proposed network and implementation details. Section 4 verifies the proposed method through a number of comparisons and describes ablation studies. Section 5 briefly summarizes this work and discusses possible future work.

In this section, we briefly introduce three subdomains, namely, image compositing, learning-based image editing, and image synthesis using GANs, with particular attention to related works.

Image compositing

Gradient-domain compositing [1, 5] can adjust the foreground region and the background region to be consistent in terms of illumination by blending the transition region between them. To make the composite image look more realistic, Sunkavalli et al. [7] proposed transferring the appearance of the target image to the source image before blending them. Darabi et al. [8] proposed combining Poisson blending [5] with patch-based synthesis in a unified framework (image melding) to produce a realistic composite. To avoid inconsistent colors and sacrificing texture sharpness, Darabi et al.'s work introduced an extra gradual transition operation between the foreground and background. Xue et al. [13] proposed using statistics and machine learning to study the realism of composites. Recently, deep neural networks have further improved image realism by learning context and semantic information. Zhu et al. [2] proposed a CNN-based model to distinguish composite images from realistic photographs. Tsai et al. [9] proposed using a scene parsing deep network to replace the sky background in a given photograph.
These authors further proposed an end-to-end CNN method [3] for image appearance harmonization that could automatically learn both the context and semantic information of the input image and could be trained for both compositing and scene parsing tasks. Wu et al. [11] proposed a two-step method for high-resolution image compositing by combining Wasserstein GAN with multiscale gradient-based methods. Tan et al. [14] proposed a model that learns to predict foreground objects from source images before dealing with appearance compatibility. In contrast to the abovementioned methods, our GAN-based model can take into account the advantages of both exemplar-based and learning-based methods for high-fidelity image compositing.

Learning-based image editing

Many researchers have leveraged deep learning for image editing with the goal of modifying an image using given image pairs as training data. Zhang et al. [15] proposed a CNN-based image colorization method in which a color recommender system was used to help users interactively use the trained model to translate a gray image to a color image. Wang et al. [16] proposed a learning-based image super-resolution method that uses an improved deep CNN to reconstruct a high-resolution image from a given low-resolution image. A deep reinforcement learning-based image enhancement method was proposed by Park et al. [17] that used the MIT-Adobe FiveK dataset [18] to model the stepwise nature of the human retouching process. Yan et al. [12] introduced a novel image inpainting model that uses an attention-guided U-net [19] as the generator to fill in marked missing regions with suitable structure and texture. Our method shares a similar concept with learning-based methods and incorporates the advantages of multiple editing models to propose a novel trainable GAN architecture for image compositing.
Image synthesis using GANs

While GANs [20] can generate photorealistic images from random noise, the generated results might not be in accordance with the user's requirements. It is worth emphasizing some recent works on deep image synthesis using GANs. Conditional GANs [21, 22] are new models that generate images based on particular inputs other than simple noise, thus providing user-controllable results. Isola et al. [23] proposed a pix2pix method that explores conditional GANs to translate semantic label maps into photorealistic images. To solve the pix2pix model's unstable performance during adversarial training for high-resolution synthesis tasks, Wang et al. [24] synthesized 2048×1024 resolution realistic-looking photos through a robust training objective together with coarse-to-fine generators and multiscale discriminators. Recently, Xian et al. [25] introduced local texture loss to train a generative adversarial network that can take texture patches and sketches as inputs and output a shoe or bag. Our method is inspired by the above successful work and is within the framework of image-to-image translation GANs. With our adversarial training objective as well as stacked generators and diverse discriminators, we can not only realize automatic image compositing but also achieve better results compared to existing methods.

Proposed method

In this section, we first introduce the attention-guided cascaded generative network and multiple losses. We then describe the training scheme that jointly fine-tunes all the networks together after two separate training processes. Finally, we introduce the multi-scenario synthesized dataset collection method.

Stacked generators

Given a source image ysrc and a target image ytrg, the cut-and-paste composite image y can be given as follows:
$$ y = y_{src}\odot M + y_{trg}\odot (1 - M) $$
where ⊙ is element-wise multiplication.
M is a binary mask corresponding to the foreground region with a value of 1 and 0 for the background region. Our goal is to generate a natural-looking composite result \(\hat {y}\) in which the contents are the same as the cut-and-paste input but the appearance is more natural and realistic. Similar to the pix2pix network [23], our generator is based on the U-net architecture and leverages skip connections between each layer of the encoder and the corresponding layer of the decoder. This architecture maintains the texture and details of the image that are lost during the compression process in the encoder, which is important for image compositing [3] and other image editing tasks [26, 27]. Given a U-net of n layers, we denote Φl(y) as the encoder feature of the lth layer and Φn−l(y) as the decoder feature of the (n−l)th layer. In addition, we denote Ψl(M) as a binary mask corresponding to the foreground region in both the encoder feature Φl(y) and the decoder feature Φn−l(y). Ψl(M) is computed by an extra network that has the same architecture as the U-net encoder but with a network width of 1. The pix2pix framework is designed to generate low-resolution images; if applied directly to 512×512 resolution image synthesis, we find that the training is unstable and the generated results are unsatisfactory. Since stacked networks can be competent for high-resolution image synthesis because of their progressive refinement capability [26, 28, 29], we introduce this concept to our compositing task. Our network consists of two generators in which the second one is stacked upon the first. We call the first generator G1 and the second generator G2. Given a cut-and-paste image y, generator G1 is trained to produce a first feature map G1(y). Then, G1(y) is concatenated with the original image y and serves as the input for the second generator G2. The generator is given by the tuple G={G1,G2}, as shown in Fig. 2.
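The cut-and-paste composition above is a simple masked blend. A minimal NumPy sketch (the array shapes and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def cut_and_paste(y_src, y_trg, mask):
    """Compose y = y_src ⊙ M + y_trg ⊙ (1 − M).

    y_src, y_trg: float arrays of shape (H, W, C).
    mask: binary array of shape (H, W, 1); 1 marks the pasted foreground.
    """
    return y_src * mask + y_trg * (1.0 - mask)

# Toy example: paste the left half of a "source" onto a "target".
H = W = 4
y_src = np.ones((H, W, 3))      # all-white source
y_trg = np.zeros((H, W, 3))     # all-black target
mask = np.zeros((H, W, 1))
mask[:, : W // 2] = 1.0         # foreground = left half
y = cut_and_paste(y_src, y_trg, mask)
```

The single-channel mask broadcasts over the color channels, so the same function works for grayscale or RGB inputs.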
The detailed architecture of the stacked generators is listed in Table 1.

Table 1 The architecture of the G1/G2 network. "IN" represents InstanceNorm, "LReLU" represents Leaky ReLU activation, "Conv."/"DeConv." denotes a convolutional/transposed convolutional layer with a kernel size of 4, "st" means stride, "Concat" explains the skip connections, "Guidance" means the guidance loss operation, and "Shift" means the shift-connection operation. The different layers of G1 and G2 are listed separately.

Attention guidance compositing

As a state-of-the-art appearance compositing method, the deep image harmonization method [3] adjusts the masked parts conditioned on their surroundings. However, we have found that this method can produce a distorted appearance or structural inconsistency between the foreground and background when the appearance of particular scenes is improperly remembered due to the limitation of training samples. In contrast, as a traditional compositing method, the image melding method [8] uses exemplar-based synthesis to smoothly transform from the source region to the target region, which avoids an obviously inconsistent appearance. This suggests that matching by patches might lead to a more harmonious result. Motivated by these observations, our network takes into account the advantages of learning-based and exemplar-based methods for image compositing. We introduce the shift-connection attention layer [12] in our generators, which can guide the generator to obtain global semantic and structural information, improving the structure and texture consistency of the result. Formally, let Ω be the foreground region and \(\overline{\Omega}\) be the background region.
For each (Φn−l(y))p with location p∈Ω, its nearest neighbor search in (Φl(y))q (location \(q \in \overline{\Omega}\)) can be independently defined as [12]:
$$ q^{*}(p)= \mathop{\text{arg\,max}}\limits_{q \in \overline{\Omega}}\frac{\left< (\Phi_{n-l}(y))_{p}, (\Phi_{l}(y))_{q}\right>}{\left\| (\Phi_{n-l}(y))_{p} \right\|_{2} \left\|(\Phi_{l}(y))_{q}\right\|_{2}} $$
and the shift vector is obtained by [12]:
$$ u_{p} = q^{*}(p)-p $$
Then, we spatially rearrange the encoder feature (Φl(y))q according to the shift vector to obtain a new estimate [12]:
$$ (\Phi^{\text{shift}}_{n-l}(y))_{p} = (\Phi_{l}(y))_{p+u_{p}} $$
The shift-connection layer takes Φl(y), Φn−l(y), and Ψl(M) as inputs and outputs a new shift-connection feature \(\Phi^{\text{shift}}_{n-l}(y)\). The layer is embedded in the decoders of both G1 and G2 to guide generation. On the one hand, the layer can thus use the information from the background region of the feature to generate new appearances in the foreground region. On the other hand, the layer also helps to model global dependencies across generated regions, ensuring that the details at each location are carefully coordinated with the details at a distance.

Training losses

The choice of GAN discriminator is especially important for learning-based high-resolution image editing tasks. To obtain realistic-looking generated results, multiple discriminators at different image scales [24] or different image patches [30] have been proposed. Considering that the shape and size of the foreground region in the cut-and-paste image are arbitrary and the resolution of the generating task is high, our compositing network constructs three diverse PatchGAN discriminators [22, 23]. The discriminators receive the generated composite or the ground truth at different scales and attempt to classify the content as either "real" or "fake." We denote the discriminators as D1, D2, and D3. The discriminator is given by the tuple D={D1,D2,D3}, as shown in Fig.
2. Specifically, the generated and real high-resolution images are downsampled by a factor of 2 to obtain image pyramids of 2 scales. Then, D1 is trained to differentiate real and generated images at the finest scale, and D2 and D3 are both trained to differentiate images at the coarsest scale. The detailed architecture of the discriminators is presented in Table 2. The discriminators D1 and D2 have identical network architectures, while D3 differs from them. With the discriminators, our adversarial loss is defined as:
$$ \mathcal{L}_{\text{adv}} = \underset{G}{\text{min}}\underset{D_{1},D_{2},D_{3}}{\text{max}}\sum_{k=1,2,3}\mathcal{L}_{GAN}(G,D_{k}) $$
where k indexes the discriminators.

Table 2 The architecture of the D1/D2/D3 network. Annotations are the same as in Table 1. The different layers of D1, D2, and D3 are listed separately.

The objective function \(\mathcal {L}_{GAN}(G,D_{k})\) is given by:
$$ \mathcal{L}_{GAN}(G,D_{k})=E_{x_{k} \sim p_{\text{data}}(x_{k})}[\log D_{k}(x_{k})]+E_{y_{k} \sim p_{\text{data}}(y_{k})}[\log (1-D_{k}(G(y_{k})))] $$
where yk is a cut-and-paste image and xk is the corresponding ground truth image. Specifically, y1 and x1 correspond to the finest scale, and y2, y3 and x2, x3 correspond to the coarsest scale. \(E_{x_{k} \sim p_{\text {data}}(x_{k})}\) represents the mathematical expectation of \(\log D_{k}(x_{k})\), where xk follows the probability distribution pdata(xk). \(E_{y_{k} \sim p_{\text {data}}(y_{k})}\) represents the mathematical expectation of \(\log (1-D_{k}(G(y_{k})))\), where yk follows the probability distribution pdata(yk). Recent GAN methods [25, 27] have found it effective to combine the adversarial loss with additional loss terms. First, we choose to use the traditional L2 pixel loss to stabilize the training.
It is defined as the mean squared error (MSE) between a generated image and its reference image:
$$ \mathcal{L}_{L2} = \left\|G(y)-x\right\|^{2}_{2} $$
where G(y) is the output of a given cut-and-paste composite using a generator and x is the corresponding ground truth. Next, we further include the perceptual loss term, which is used in various editing tasks, such as image inpainting [27] and image super-resolution [31]. Given a cut-and-paste input, we would like the composite result to look realistic and the foreground and background regions to be compatible. The features extracted from the middle layers of a pretrained very deep network represent high-level semantic perception. We define the perceptual loss using an activation layer of the VGG-19 [32] network pretrained on the ImageNet dataset [33]. The loss is defined as the MSE between the feature representations of a generated image and its ground truth:
$$ \mathcal{L}_{\text{per}} = \left\|\phi(G(y))-\phi(x)\right\|^{2}_{2} $$
where ϕ(·) is the activation map of the selected layer. Our final loss term is used to encourage the compositing network to focus on the masked foreground region. We use the guidance loss on the decoder feature of the U-net proposed by Yan et al. [12]. It is defined as the MSE between the masked feature representations:
$$ \mathcal{L}_{\text{gui}} = \sum_{j=1,2}\left\|(\Psi_{l}(M)\odot \Phi^{j}_{n-l}(y))-(\Psi_{l}(M)\odot \Phi^{j}_{l}(x))\right\|^{2}_{2} $$
where j is the generator index, \(\Phi ^{j}_{n-l}(y)\) is the decoder feature of the cut-and-paste input on the (n−l)th layer for G1 or G2, and \(\Phi ^{j}_{l}(x)\) is the encoder feature of the ground truth on the lth layer. Note that the guidance loss is only applied to the decoder feature maps of the (n−3)th layer for G1 and G2 in our method.
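Stepping back to the shift-connection layer of the attention guidance subsection: its three equations (the cosine-similarity search for q*(p), the shift vector up, and the rearrangement) can be sketched in NumPy as follows. This is a brute-force, single-feature-map sketch under assumed shapes, not the batched GPU layer used in the network:

```python
import numpy as np

def shift_connection(phi_dec, phi_enc, mask, eps=1e-8):
    """For each foreground location p, copy the background encoder feature
    whose direction best matches the decoder feature phi_dec[p].

    phi_dec, phi_enc: feature maps of shape (H, W, C).
    mask: (H, W) binary array, 1 = foreground (Ω), 0 = background (Ω̄).
    Returns the shifted feature map; background locations stay zero.
    """
    fg = np.argwhere(mask == 1)                     # locations p ∈ Ω
    bg = np.argwhere(mask == 0)                     # candidates q ∈ Ω̄
    bg_feats = phi_enc[bg[:, 0], bg[:, 1]]          # (Nb, C)
    bg_unit = bg_feats / (np.linalg.norm(bg_feats, axis=1, keepdims=True) + eps)
    shifted = np.zeros_like(phi_enc)
    for (i, j) in fg:
        v = phi_dec[i, j]
        v_unit = v / (np.linalg.norm(v) + eps)
        q = bg[np.argmax(bg_unit @ v_unit)]         # q*(p): max cosine similarity
        u = q - np.array([i, j])                    # shift vector u_p
        shifted[i, j] = phi_enc[i + u[0], j + u[1]] # (Φ_l(y))_{p+u_p}
    return shifted
```

The sketch is O(|Ω|·|Ω̄|) per feature map; practical implementations express the cosine search as a convolution with normalized background patches.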
Our combined loss is defined as the sum of all the above loss functions:
$$ \mathcal{L}_{\text{total}} = w_{\text{adv}}\mathcal{L}_{\text{adv}} + w_{L2}\mathcal{L}_{L2} + w_{\text{per}}\mathcal{L}_{\text{per}} + w_{\text{gui}}\mathcal{L}_{\text{gui}} $$
where wadv, wL2, wper, and wgui are the weight parameters for the adversarial, L2, perceptual, and guidance losses, respectively. During the training, the three discriminators are trained to distinguish the generated results from the ground truth, while the stacked compositing networks are trained to fool the discriminators. Since the high-resolution image compositing task itself is very challenging, we need to train the network carefully to make it converge. The training procedure is divided into three phases. First, generator G1 and discriminator D are trained for \(T_{G_{1}}\) epochs. Then, generator G1 is fixed, and generator G2 is trained from scratch jointly with discriminator D for \(T_{G_{2}}\) epochs. Finally, generator G1, generator G2, and discriminator D are trained jointly until the end of the training. An overview of the training procedure is shown in Algorithm 1. In all experiments, we set the weights wadv=0.002, wL2=1, wper=0.01, and wgui=1. Our network is optimized using the Adam algorithm [34] with a learning rate of 0.0002. We train our models at an input resolution of 512×512, and the batch size is 1. Data augmentation, such as cropping, is also adopted during training.

Synthetic datasets

Data acquisition is the foundation of a successful training network. In our experiment, a masked image pair containing the cut-and-paste input and the composite result is required as the input and ground truth for the network. However, there are currently no public datasets for our task. To solve this problem, we selected two public datasets (MIT-Adobe FiveK [18] and Archive of Many Outdoor Scenes (AMOS) [35]) to create our multi-scenario training dataset for compositing through appearance editing.
Two different processes are described in Fig. 3.

Data acquisition methods for our multi-scenario dataset. a MIT-Adobe FiveK. b AMOS

MIT-Adobe FiveK consists of 5000 raw images, each of which is paired with five retouched images produced in Adobe Lightroom by 5 trained photographers, A/B/C/D/E. The 6 editions of the same image have different styles. We randomly select one of the 6 versions of an image as the target image and then randomly select one of the remaining 5 versions as the source image. Therefore, there are 30 sets of 5000 target-source paired images (i.e., 150,000 paired images). To create more foreground objects or scenes from the source image, we manually annotate multiple object-level masks for each image in the dataset using the LabelMe annotation tool [36]. When generating input data, we first randomly select a mask and manually segment a region from the source image. Then, we crop this segmented region and overlay it on the target image (i.e., the ground truth image) to generate the cut-and-paste composite. We reserve 109 images (i.e., 3270 masked paired images) for testing, and the model is trained on the remaining 4891 images (i.e., 146,730 paired images). To cover richer object categories and scene styles, we use images from outdoor webcams, which are captured at the same location but change dramatically with lighting, weather, and season. We construct the compositing dataset using sequences from 92 webcams (the webcam numbers are the same as in the famous Transient Attributes Database [37]) selected from AMOS by color transfer. First, given a target image from a camera sequence, we pick 20–30 other images captured by the same camera at other times as transfer reference images. Second, instead of using the simple color and illumination histogram statistics method of Tsai et al. [3], we use a patch-based matching method [38] to transfer the appearance between two images with similar content.
In this way, we produce 20–30 images of different styles from the given target image while maintaining the same content and scene. Third, for each camera sequence, we repeat the above steps to select 3–10 target images and produce multiple images of different styles. Fourth, all original targets and color transfer results are manually reviewed to ensure that there are no artifacts or noise. Fifth, we obtain multiple object-level masks for each target image using the LabelMe tool. We use the original target image as the ground truth and crop a segmented foreground from its corresponding produced image in a different style to overlay on the original image. We reserve 1365 masked paired images from 7 webcams for testing and train the model on the remaining 21,658 paired images from the other 85 webcams. To distinguish them from the original datasets, we call our compositing datasets FiveK and AMOS in the following experimental discussion.

In this section, we first describe the experimental setup. We then provide comparisons of synthesized images and real images with several metric methods, including user studies. Finally, we conduct five ablation studies on our network design. Our model is implemented on PyTorch v0.3.1, CUDNN v7.0.5, and CUDA v9.0 and runs on hardware with an NVIDIA TITAN X GPU (12 GB). We separately train and test on our two synthesized datasets. Since the GAN loss curve does not reveal much information when training image-to-image translation GANs [23], we check whether the training has converged by observing the L2 and perceptual loss curves. On the one hand, the L2 term can reflect how close the results are to the ground truth images at the pixel level. On the other hand, the perceptual term can reflect the perceptual similarity between generated images and ground truth images. Figures 4 and 5 show the L2 and perceptual loss convergence curves of different training phases on the two datasets, respectively.
For FiveK, we set \(T_{G_{1}}=6\) (880,380 iterations), \(T_{G_{2}}=1\) (146,730 iterations), and TG=3 (440,190 iterations). For AMOS, we set \(T_{G_{1}}=16\) (346,528 iterations), \(T_{G_{2}}=10\) (216,580 iterations), and TG=30 (649,740 iterations). For each dataset, the training takes approximately 3 weeks. Compositing a single cut-and-paste image of 512×512 takes less than 0.7 s.

Training convergence curves of L2 loss. a FiveK. b AMOS

Training convergence curves of perceptual loss. a FiveK. b AMOS

Comparison with existing methods

For synthesized images, we compare our results with MVCC [1], IM [8], DIH [3], and GP [11] at 512×512 resolution. For DIH [3] and GP [11], we use the pretrained models provided by the authors. Note that DIH [3] uses a combination of three public datasets, including MIT-Adobe FiveK, to train the model, and GP [11] uses the Transient Attributes Database as the training dataset. The images shown in Figs. 6 and 7 are taken from the FiveK and AMOS test datasets. Although the foreground appearances of the MVCC results are well blended using mean-value coordinates, some obvious artifacts can be found, as shown in Figs. 6c and 7c. The IM results show no significant improvement in visual appearance. DIH is effective in semantic compositing, and the visual appearance of its results is better than that of MVCC and IM. However, the boundary between the foreground and background of the DIH results is not seamless enough, and there are obvious jagged edges. In addition, DIH models the dependencies between the scene semantics and its surface appearance, but these kinds of semantic priors do not always work well; for example, the yellow flower foreground in the composite of the third row of Fig. 6e is adjusted to green, and the result is far from the ground truth (GT).
GP adopts a multistage scheme to combine a deep network and Poisson blending, but its GAN model generates poor results at low resolution and leads to incorrect enlargement in the subsequent high-resolution optimization step, resulting in unrealistic images, as shown in Fig. 7e. Overall, the proposed method performs favorably in generating realistic, seamless, and harmonious images. The foreground appearance of our results is the most consistent with the corresponding background.

Example results on the synthesized FiveK dataset. a GT. b Cut-and-paste. c MVCC [1]. d IM [8]. e DIH [3]. f Our proposed network. Our composite results obtained the highest PSNR scores.

Example results on the synthesized AMOS dataset. a GT. b Cut-and-paste. c MVCC [1]. d IM [8]. e GP [11]. f Our proposed network. Our composite results obtained the highest PSNR scores.

In addition, we use three quantitative criteria to evaluate the proposed and other methods. First, the peak signal-to-noise ratio (PSNR), which is used by Tsai et al. [3], can reflect how close a result is to the GT. Second, the structural similarity index (SSIM) attempts to quantify the visibility of structural differences between the result and the GT. Third, the learned perceptual image patch similarity (LPIPS) [39], which agrees surprisingly well with human judgment, is used to assess the perceptual similarity between two images. Note that unlike PSNR and SSIM, smaller LPIPS values mean greater perceptual similarity. Tables 3 and 4 show the quantitative scores between the GT and the composite results for FiveK and AMOS, respectively. The scores are calculated as the mean values over a random subset of 300 images selected from each of the two test datasets. Our proposed image compositing network performs better than the other methods in terms of the PSNR, SSIM, and LPIPS metrics.
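Of the three metrics, PSNR has a simple closed form. A minimal sketch for 8-bit images (not the authors' evaluation code; SSIM and LPIPS require reference implementations such as scikit-image and the lpips package):

```python
import numpy as np

def psnr(result, gt, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to GT."""
    mse = np.mean((result.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a uniform error of 16 gray levels gives MSE = 256,
# i.e., PSNR = 10·log10(255² / 256) ≈ 24.05 dB.
gt = np.zeros((8, 8), dtype=np.uint8)
result = np.full((8, 8), 16, dtype=np.uint8)
```

Casting to float64 before subtraction avoids uint8 wraparound, a common source of inflated PSNR values.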
Table 3 Comparisons of methods on the FiveK test dataset

Table 4 Comparisons of methods on the AMOS test dataset

For real images, we compare our results with MVCC [1], IM [8], RC [2], GP [11], and DIH [3]. To demonstrate that the models trained on our multi-scenario dataset can be generalized to real cut-and-paste composite images, we created a test set of 30 high-resolution real composite images and combined it with 50 public high-quality images collected by Xue et al. [13] and Tsai et al. [3], resulting in a real cut-and-paste composite set that contains 80 images. Since Xue et al.'s statistical method has no public code, our results are not compared with it. Figure 8 shows some experimental comparisons selected from the real composite set. MVCC and IM can resolve some of the inconsistencies between the two parts of the input, but the results are not satisfactory. RC's realism prediction model is effective in handling easily distinguishable cut-and-paste composite input; nevertheless, it generates unsatisfactory results, especially in the transition region between the foreground and background (e.g., there are distinct jagged outlines at the boundary of the foreground in the results of the second, third, and fifth rows). GP generates visually poor results. For DIH, because the model utilizes semantic information to adjust the cut-and-paste input, it is limited by the training dataset. If the scene semantics are incorrectly judged, this will lead to unrealistic outputs (e.g., the fourth, sixth, and eighth rows). Compared with the others, the proposed model can better predict the global structure and thus maintain the consistency of the context, resulting in realistic-looking composites.

Example results on real images. a Cut-and-paste. b Mask. c MVCC [1]. d IM [8]. e RC [2]. f GP [11]. g DIH [3].
h Our proposed network.

Figure 9 illustrates one example where the same foreground (i.e., the zebra) is copied to different backgrounds (i.e., a street in dim light and a zebra herd in the sun). For RC, the discriminative model cannot correctly predict the degree of perceived visual realism of the given inputs, so the appearance of the foreground is almost never adjusted. For DIH, regardless of the scene, the context-aware encoder-decoder recovers the fur color constrained by the trained prior knowledge, producing almost invariable results. In contrast to the two methods mentioned above, with the proposed network, the foregrounds can be adjusted according to the surrounding scene and luminance.

Real example with the same foreground and different backgrounds. a Cut-and-paste. b Partial enlarged details in a. c RC [2]. d Partial enlarged details in c. e DIH [3]. f Partial enlarged details in e. g Our proposed network. h Partial enlarged details in g.

To better understand the performance of our method, we conducted quantitative assessment studies with users, similar to Tsai et al. [9]. Participants were shown an input cut-and-paste composite and six results from MVCC, IM, RC, GP, DIH, and the proposed method. Each participant was asked to rate each group according to the realistic nature of the images using a 5-point Likert scale (1 for worst, 5 for best). We asked 20 users to provide feedback on 30 tuples of images selected from our real cut-and-paste composite set. The average scores for the individual images in the evaluation set are shown in Fig. 10. Most of our scores are above 3.0. Our scores outperform MVCC in 80%, IM in 80%, RC in 80%, GP in 100%, and DIH in 73% of the cases.

Average evaluation scores for each image, sorted by our score. The proposed method performs better than the others in most cases.

Ablation studies

The main differences between our compositing method and other methods are the stacked generative adversarial network architecture and the combined loss function.
Thus, five groups of experiments on the FiveK dataset were conducted to analyze the effect of stacked generators, diverse discriminators, shift-connection operations, perceptual loss, and guidance loss on the composite results. Table 3 shows that the proposed network achieved better scores in terms of the PSNR, SSIM, and LPIPS metrics compared to the other five strategies. To evaluate the effectiveness of stacked generators for high-resolution compositing, we trained our network without using generator G2. The number of training epochs was constrained to be the same as for the original model. As shown in Fig. 11, the results generated by a single (non-stacked) generator may not be satisfactory and have obvious artifacts. In addition, the consistent improvement in the quantitative assessment scores of our models clearly demonstrates the benefits of the cascaded refinement approach.

Effect of stacked generators. a GT. b Cut-and-paste [PSNR = 26.19 dB, LPIPS = 0.0750]. c Ours (w/o G2) [PSNR = 30.91 dB, LPIPS = 0.0986]. d Ours [PSNR = 32.48 dB, LPIPS = 0.0387]. e GT. f Cut-and-paste [PSNR = 19.00 dB, LPIPS = 0.06326]. g Ours (w/o G2) [PSNR = 33.30 dB, LPIPS = 0.0428]. h Ours [PSNR = 35.00 dB, LPIPS = 0.0274]

To evaluate the effectiveness of our specialized discriminator for high-resolution compositing, we trained our network only with discriminator D1. Visually, as shown in Fig. 12, we observed that the model using the combination of three diverse PatchGAN discriminators could reduce artifacts and improve appearance in terms of realism.

Effect of diverse discriminators. a GT. b Cut-and-paste [PSNR = 23.06 dB, LPIPS = 0.0669]. c Ours (w/ D1 only) [PSNR = 24.75 dB, LPIPS = 0.0518]. d Ours [PSNR = 28.15 dB, LPIPS = 0.0444]. e GT. f Cut-and-paste [PSNR = 20.27 dB, LPIPS = 0.0873]. g Ours (w/ D1 only) [PSNR = 28.66 dB, LPIPS = 0.0685]. h Ours [PSNR = 32.10 dB, LPIPS = 0.0495]

We trained a model without using the shift-connection layer. As shown in Fig.
13, the operation helps to obtain representations for the foreground (i.e., the man or the flower) from the background region, resulting in composites with consistent regions.

Effect of the shift-connection layer. a GT. b Cut-and-paste [PSNR = 27.17 dB, LPIPS = 0.0592]. c Ours (w/o Shift) [PSNR = 29.59 dB, LPIPS = 0.0548]. d Ours [PSNR = 33.82 dB, LPIPS = 0.03167]. e GT. f Cut-and-paste [PSNR = 28.40 dB, LPIPS = 0.0422]. g Ours (w/o Shift) [PSNR = 29.90 dB, LPIPS = 0.0402]. h Ours [PSNR = 32.71 dB, LPIPS = 0.0257]

We trained a model without the perceptual loss. As shown in Fig. 14, the composites generated by the model without \(\mathcal {L}_{\text {per}}\) have ghosting. In addition, the significant advantage in LPIPS scores for the model with \(\mathcal {L}_{\text {per}}\) shows that the perceptual loss can greatly improve visual perception.

Effect of perceptual loss. a GT. b Cut-and-paste [PSNR = 33.82 dB, LPIPS = 0.0196]. c Ours (w/o \(\mathcal {L}_{\text {per}}\)) [PSNR = 31.64 dB, LPIPS = 0.0580]. d Ours [PSNR = 35.83 dB, LPIPS = 0.0167]. e GT. f Cut-and-paste [PSNR = 31.66 dB, LPIPS = 0.0100]. g Ours (w/o \(\mathcal {L}_{\text {per}}\)) [PSNR = 34.33 dB, LPIPS = 0.0464]. h Ours [PSNR = 36.37 dB, LPIPS = 0.009]

We trained a model without the guidance loss. As shown in Fig. 15, the guidance loss is helpful in preserving a better visual appearance. We observed that the color and luminance of foregrounds with \(\mathcal {L}_{\text {gui}}\) were closer to the GT.

Effect of guidance loss. a GT. b Cut-and-paste [PSNR = 28.65 dB, LPIPS = 0.0257]. c Ours (w/o \(\mathcal {L}_{\text {gui}}\)) [PSNR = 27.98 dB, LPIPS = 0.0281]. d Ours [PSNR = 29.86 dB, LPIPS = 0.0248]. e GT. f Cut-and-paste [PSNR = 17.12 dB, LPIPS = 0.0921]. g Ours (w/o \(\mathcal {L}_{\text {gui}}\)) [PSNR = 21.72 dB, LPIPS = 0.0424]. h Ours [PSNR = 33.19 dB, LPIPS = 0.0247]

Our model trained on the proposed multi-scenario dataset can handle the composition of high-resolution real cut-and-paste images in most cases.
However, if the input image differs significantly from the training data, the model may still fail. Figure 16 shows two of our failure cases, in which the appearance of the foregrounds and backgrounds is not sufficiently natural and harmonious.

Fig. 16 Failure cases. a Cut-and-paste. b Ours. c Cut-and-paste. d Ours

In this paper, we proposed a stacked GAN method for high-resolution image compositing. Given a cut-and-paste composite, the proposed network adjusts the foreground appearance and outputs a harmonized image that looks realistic. We have shown that stacked generators, diverse discriminators, and multiple loss constraints together make it possible to train a well-performing model. In addition, we demonstrated that our network can be trained in three steps to achieve stable training and faster convergence. Our method uses a cascaded attention guidance generation strategy and generates more harmonious and consistent results than state-of-the-art methods. Future studies will focus on improving the speed of high-resolution compositing with the proposed network and expanding the training dataset.

The datasets for high-resolution image compositing generated during the current study are available in the Baidu Cloud repository, https://pan.baidu.com/s/1WmJ5P7ToSeA9FS4vgmMfaA (download password: 1111).

Abbreviations:
AMOS: Archive of many outdoor scenes
PSNR: Peak signal-to-noise ratio
SSIM: Structural similarity index
LPIPS: Learned perceptual image patch similarity
MVCC: Mean-value coordinates cloning
RC: Realism CNN
DIH: Deep image harmonization
Deep painterly harmonization
Image melding
GP: Gaussian-Poisson

The authors thank the editor and anonymous reviewers. This work was supported by the National Natural Science Foundation of China (61303093, 61402278) and the Shanghai Natural Science Foundation (19ZR1419100).

Shanghai Film Academy, Shanghai University, Yanchang Road, Shanghai, 200072, China
Shanghai Engineering Research Center of Motion Picture Special Effects, Shanghai University, Yanchang Road, Shanghai, 200072, China
Bing Yu, Youdong Ding, Zhifeng Xie & Dongjin Huang

All authors took part in the discussion of the work described in this manuscript. YB wrote the first version of the manuscript. DY, XZ, and HD performed part of the experiments. All authors read and approved the final manuscript.

Bing Yu received his Ph.D. degree in digital media technology from Shanghai University, Shanghai, China, in 2020.
He received his BS degree in computer science and technology from Zhengzhou University, Zhengzhou, China, in 2011, and his MS degree in computer science and technology from North Minzu University, Yinchuan, China, in 2015. He is now a lecturer with Shanghai University, Shanghai, China. His current research interests include deep learning and image processing.

Youdong Ding received his Ph.D. degree in mathematics from the University of Science and Technology of China, Hefei, China, in 1997. He was a postdoctoral researcher at the Department of Mathematics of Fudan University, Shanghai, China, from 1997 to 1999. He is now a professor with Shanghai University, Shanghai, China. His research interests are computer graphics, image processing, and digital media technology.

Zhifeng Xie received his Ph.D. degree in computer application technology from Shanghai Jiao Tong University, Shanghai, China, in 2013. He was a research assistant at the Department of Computer Science, City University of Hong Kong, Hong Kong, China, in 2011. He is now an associate professor with Shanghai University, Shanghai, China. His research interests include image/video editing, computer graphics, and digital media technology.

Dongjin Huang received his Ph.D. degree in computer application technology from Shanghai University, Shanghai, China, in 2011. He was a postdoctoral researcher with the University of Bradford, UK, from 2012 to 2013. He is now an assistant professor with Shanghai University, Shanghai, China. His research interests are augmented reality, computer vision, and computer graphics.

Correspondence to Bing Yu.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Yu, B., Ding, Y., Xie, Z. et al. Stacked generative adversarial networks for image compositing. J Image Video Proc. 2021, 10 (2021). https://doi.org/10.1186/s13640-021-00550-w

Keywords: Generative adversarial networks; Deep neural network
DOI: 10.1142/S0217984910025280

Failed theories of superconductivity
J. Schmalian, arXiv: History and Philosophy of Physics (2010)

Almost half a century passed between the discovery of superconductivity by Kamerlingh Onnes and the theoretical explanation of the phenomenon by Bardeen, Cooper and Schrieffer. During the intervening years the brightest minds in theoretical physics tried and failed to develop a microscopic understanding of the effect. A summary of some of those unsuccessful attempts to understand superconductivity not only demonstrates the extraordinary achievement made by formulating the BCS theory, but also…

Related and citing papers:

- Superconductivity—A Challenge to Modern Physics (C. Joas, G. Waysand). The discovery of superconductivity could not have happened without the liquefaction of helium by the Dutch physicist Heike Kamerlingh Onnes in 1908, which allowed physicists to reach temperatures…
- The Physics of Cold in the Cold War—"On-Line Computing" Between the ICBM Program and Superconductivity (J. Knolle, C. Joas). Superconductivity—the loss of resistance in various materials close to absolute zero temperature—was a hot topic after World War II. Advances in nuclear reactor technology led to the discovery of the…
- The Path to Type-II Superconductivity (R. Huebener). Following the discovery of superconductivity by Heike Kamerlingh Onnes in 1911, research concentrated on the electric conductivity of the materials investigated. Then, it was Max von Laue who in the…
- Fluctuating superconductivity and pair-density wave order in the cuprate superconductors (Jonatan Wårdh). High-temperature superconductors are some of nature's most enigmatic materials. Besides carrying a supercurrent, these materials manifest a range of electronic and structural orders. A state of…
- Cold numbers: Superconducting supercomputers and presumptive anomaly (N. Liso, G. Filatrella, D. Gagliardi, C. Napoli; Industrial and Corporate Change). In February 2014 Time magazine announced to the world that the first quantum computer had been put in use. One key component of this computer is the "Josephson-junction," a superconducting device,…
- Electrical Resistance of Superconductors (F. Lacy). A theoretical model has been created to explain why the electrical resistance of superconductors reaches zero. Although a description of electron behavior in solids requires some knowledge of quantum…
- Stabilization of phenomenon and meaning (J. Potters; European Journal for Philosophy of Science). In recent years, the use of historical cases in philosophy of science has become a proper topic of reflection. In this article I will contribute to this research by means of a discussion of one very…
- Emergence and Reductionism: an awkward Baconian alliance (P. Coleman). This article discusses the relationship between emergence and reductionism from the perspective of a condensed matter physicist. Reductionism and emergence play an intertwined role in the everyday…
- Understanding the Behavior of Superconductors by Analyzing Permittivity (F. Lacy). A superconductor has the ability to conduct electricity perfectly and exclude magnetic fields from its interior. In order to understand electromagnetic characteristics of superconductors, their…
- Kamerlingh Onnes and the discovery of superconductivity (P. Meijer). Many papers begin with the statement that Kamerlingh Onnes discovered superconductivity in 1911; one wonders what urged him to do the experiment that led to this discovery. Superconductivity was…
- John Bardeen and the Theory of Superconductivity: A Late Revision of a Homework Assignment for J. M. Luttinger (L. Hoddeson). This account of the history of the Bardeen–Cooper–Schrieffer theory of superconductivity has its roots in an assignment that Quin Luttinger designed for me in 1963 when I was his graduate student. It…
- Antiferromagnetic spin fluctuation and superconductivity (T. Moriya, K. Ueda). In this review, we summarize the present status of theories of the spin fluctuation (SF) mechanism in explaining anomalous or non-Fermi liquid behaviours and unconventional superconductivity (SC) in…
- Superconductivity and Spin Fluctuations (D. Scalapino). The organizers of the Memorial Session for Herman Rietschel asked that I review some of the history of the interplay of superconductivity and spin fluctuations. Initially, Berk and Schrieffer showed…
- On the Problem of the Molecular Theory of Superconductivity (F. London). The electrodynamics and thermodynamics of the superconducting state entail quite definite consequences with regard to the stability character of the supercurrents. In contrast to a recent attempt of…
- Einstein and the Early Theory of Superconductivity, 1919–1922 (T. Sauer). Einstein's early thoughts about superconductivity are discussed as a case study of how theoretical physics reacts to experimental findings that are incompatible with established theoretical notions.…
- The development of the quantum-mechanical electron theory of metals: 1928–1933 (L. Hoddeson, G. Baym, M. Eckert). We trace the fundamental developments and events, in their intellectual as well as institutional settings, of the emergence of the quantum-mechanical electron theory of metals from 1928 to 1933. This…
- Superfluidity and Superconductivity (R. Feynman). I am sorry that Professor Landau was unable to come for two personal reasons. The first is that I have worked on this problem of helium upon which he has also done so much, and I would have liked…
- Experimental evidence for Fröhlich superconductivity in high magnetic fields (N. Harrison, C. Mielke, J. Singleton, J. Brooks, M. Tokumoto). Resistivity and irreversible magnetization data taken within the high magnetic field CDWx phase of the quasi-two-dimensional organic metal α-(BEDT-TTF)2KHg(SCN)4 are shown to be consistent with a…
- A theoretical description of the new phases of liquid He-3 (A. Leggett). This paper reviews the theory of anisotropic superfluid phases and its application to the new A and B phases of liquid $^{3}\mathrm{He}$. It is tutorial in nature and advanced formal techniques are…
Adventures in Computation

Batch Multivalid Conformal Prediction

Our new paper gives very simple algorithms that promise "multivalid" conformal prediction sets for exchangeable data. This means they are valid not just marginally, but also conditionally on (intersecting!) group membership, and in a threshold-calibrated manner. I'll explain!

Instead of making point predictions, we can quantify uncertainty by producing "prediction sets" --- sets of labels that contain the true label with (say) 90% probability. The problem is, in a k-label prediction problem, there are $2^k$ possible prediction sets. The curse of dimensionality!

One of the great ideas of conformal prediction is that if we can find a good "non-conformity score" s(x,y) telling us how unusual a label y seems for features x, we can focus on a 1-parameter family of prediction sets $P(x, t) = \{y : s(x,y) < t\}$. Now the problem is just to find $t$. The usual recipe in split conformal prediction is to use a holdout set of points (x,y) to find a $t$ such that $\Pr[s(x,y) \leq t] = 0.9$. Then over the randomness of new examples (x,y), we have that $\Pr[y \in P(x,t)] = 0.9$. This is a -marginal- guarantee: the randomness is over both x and y.

Suppose we have a bunch of groups (subsets of the feature space) g that we think are prediction-relevant --- g could record, e.g., demographic or other attributes of people. We might want to promise $\Pr[y \in P(x,t) | x \in g] = 0.9$. Vanilla split conformal doesn't promise this. If the groups g are disjoint, you could use a different threshold $t_g$ for each group --- but what if a single example can be a member of multiple groups? You can be conservative and use the largest threshold $t_g$ among all groups g that x is a member of, but this will over-cover.

The first insight here is that it no longer suffices to find a single threshold $t$ --- we need to find a function f mapping examples to thresholds, and to consider prediction sets $P(x,f(x)) =\{y : s(x,y) < f(x)\}$.
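The conservative per-group fix described above is easy to picture in code (a toy sketch of our own --- the function names and data layout are illustrative, not from the paper's repository): compute a conformal threshold per group on the calibration scores, then give each point the largest threshold among its groups.

```python
import math

def group_thresholds(scores, memberships, groups, q=0.9):
    """One conformal threshold per group: the ceil((n_g + 1) * q)-th smallest
    calibration score among points belonging to that group."""
    thresholds = {}
    for g in groups:
        s = sorted(scores[i] for i in range(len(scores)) if g in memberships[i])
        k = min(len(s) - 1, math.ceil((len(s) + 1) * q) - 1)
        thresholds[g] = s[k]
    return thresholds

def conservative_threshold(x_groups, thresholds):
    """Take the largest threshold among the groups x belongs to: coverage is
    guaranteed for every group, but intersections tend to be over-covered."""
    return max(thresholds[g] for g in x_groups)

scores = [i / 10 for i in range(1, 11)]               # calibration scores 0.1 .. 1.0
memberships = [{"young"}] * 5 + [{"young", "urban"}] * 5
t = group_thresholds(scores, memberships, {"young", "urban"})
```

A point in both groups then gets `conservative_threshold({"young", "urban"}, t)`, the max of the two group thresholds --- valid, but wasteful, which is what motivates learning a threshold function f instead.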
The problem is to use a calibration set to train the function f. Our first algorithm is super simple: given any set of (intersecting!) groups G, it trains f such that for every $g \in G$, new examples satisfy $\Pr[y \in P(x,f(x)) \mid x \in g] = 0.9$. How? f just minimizes pinball loss over linear combinations of the group indicator functions for $g \in G$.

Now that we are using different thresholds f(x) for different x, you might worry that the threshold f(x) itself is correlated with coverage. To make sure it's not, we can also ask for threshold calibration: $\Pr[y \in P(x,f(x)) \mid x \in g, f(x) = t] = 0.9$ for all $g \in G$ and all t.

Our second algorithm trains f so that it has both group-conditional and threshold-calibrated coverage --- what we call "full multivalid" coverage. It is also simple: it iteratively finds pairs $(g,t)$ on which multivalid coverage is empirically violated, and corrects the violations. This is the batch analogue of what we did in our NeurIPS 2022 paper in the sequential setting, which I wrote about here: https://aaronsadventures.blogspot.com/2022/06/practical-robust-and-equitable.html The sequential setting is more difficult in many respects (no need to assume exchangeable data!), but it requires labels at test time. Our new algorithms don't.

Both algorithms are very performant, taking a couple of seconds to train on thousands of points. Our first algorithm gets nearly perfect group-conditional coverage on real datasets, and our second is never off by more than 1%, both improving significantly on baselines. Our second algorithm gets better threshold calibration than our first (and compared to baselines), as expected. But perhaps surprisingly, our first algorithm performs quite well on calibration tests --- significantly beating baselines --- despite having no formal calibration guarantees.

Our techniques come from the algorithmic fairness literature --- we train f to satisfy quantile analogues of multicalibration and multi-accuracy.
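The pinball-loss step behind the first algorithm can be sketched as plain subgradient descent over the group-indicator weights. This is an illustrative toy with made-up hyperparameters, not the paper's actual optimizer:

```python
import random

def train_group_thresholds(scores, memberships, groups, q=0.9,
                           lr=0.01, epochs=300):
    """Fit f(x) = sum of w[g] over x's groups by minimizing pinball loss at
    level q, so that roughly a q-fraction of calibration scores fall below
    f(x) within each group."""
    w = {g: 0.0 for g in groups}
    idx = list(range(len(scores)))
    for _ in range(epochs):
        random.shuffle(idx)
        for i in idx:
            f = sum(w[g] for g in memberships[i])
            # Subgradient of q*(s-f)^+ + (1-q)*(f-s)^+ with respect to f.
            grad = (1 - q) if f >= scores[i] else -q
            for g in memberships[i]:
                w[g] -= lr * grad
    return w

random.seed(0)
scores = [i / 100 for i in range(100)]
memberships = [{"all"}] * 100
w = train_group_thresholds(scores, memberships, {"all"})
# With a single group this reduces to ordinary quantile estimation, so
# w["all"] should land near the 0.9-quantile of the scores.
```

With intersecting groups the learned weights add up, so an example belonging to several groups gets a threshold tailored to all of them at once --- exactly what the conservative per-group max rule cannot do.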
If you haven't been paying attention to algorithmic fairness, maybe you should start --- there is interesting stuff going on there! Check out, e.g., the Simons Collaboration on Algorithmic Fairness.

This is joint work with the excellent Chris Jung, Georgy Noarov, and Ramya Ramalingam. Our paper is here: https://arxiv.org/abs/2209.15145 and our code is here: https://github.com/ProgBelarus/BatchMultivalidConformal

Posted by Aaron at 3:07 PM

Practical, Robust, and Equitable Uncertainty Estimation

This is a post about a new paper that is joint work with Bastani, Gupta, Jung, Noarov, and Ramalingam. The paper is here: https://arxiv.org/abs/2206.01067 and here is a recording of a recent talk I gave about it at the Simons Foundation: https://www.simonsfoundation.org/event/robust-and-equitable-uncertainty-estimation/ . This is cross-posted to the TOC4Fairness Blog (and this work comes out of the Simons Collaboration on the Theory of Algorithmic Fairness).

Machine learning is really good at making point predictions --- but it sometimes makes mistakes. How should we think about which predictions we should trust? In other words, what is the right way to think about the uncertainty of particular predictions? Together with Osbert Bastani, Varun Gupta, Chris Jung, Georgy Noarov, and Ramya Ramalingam, we have some new work I'm really excited about.

A natural way to quantify uncertainty is to predict a set of labels rather than a single one. Pick a degree of certainty --- say 90%. For every prediction we make, we'd like to return the smallest set of labels that is guaranteed to contain the true label 90% of the time. These are "prediction sets", and they quantify uncertainty in a natural way: ideally, we will be sure about the correct label, and the prediction set will contain only a single label (the prediction we are certain about).
But the larger our prediction set, the greater our uncertainty, and the contents of the prediction set let us know what exactly the model is uncertain about.

An example of prediction sets for ImageNet. This example comes from a nice recent paper by Angelopoulos, Bates, Malik, and Jordan: https://arxiv.org/abs/2009.14193

But how can we do this? Conformal prediction provides a particularly simple way. Here is an outline of the vanilla version of conformal prediction (there are plenty of variants):

Step 1: Pick a (non)conformity score to measure how different a label y is from a prediction f(x). E.g., for a regression model we could choose $s(x,y) = |f(x)-y|$ --- but lots of interesting work has been done recently to develop much fancier ones. A lot of the art of conformal prediction is in finding a good score function.

Step 2: Find a threshold $\tau$ such that for a new example $(x,y)$, $\Pr[s(x,y) \leq \tau] = 0.9$. An easy way to do this is using a holdout set.

Step 3: On a new example $x$, given a point prediction $f(x)$, produce the prediction set $P(x) = \{y : s(x,y) \leq \tau\}$.

That's it! Nice and simple. Check out this recent survey by Angelopoulos and Bates for an accessible introduction to conformal prediction.

But a few things could go wrong. First, the technique of using a holdout set only works if the data is i.i.d., or more generally exchangeable --- i.e., the data distribution should be permutation invariant. But maybe it's coming from some changing distribution. If the distribution has changed in an expected and well-behaved way, there are some fixes that let you apply the same framework, but if not you are likely in trouble.

A joke about non-exchangeable data

Second, an average over everyone might not be what you care about.
If we are in a personalized medicine setting, you might care about the reliability of predictions not just overall, but for women with a family history of diabetes and egg allergies --- or whatever else you think is medically relevant about you as an individual.

This is the problem that we want to solve: how to give prediction sets that cover their label 90% of the time even if we make no assumptions at all about the data generating process, and even if we care about coverage conditional on arbitrary intersecting subsets of the data.

We want stronger guarantees in another way too. If you think about our goal, there is a way to cheat: 90% of the time, predict the (trivial) set of all labels. 10% of the time predict the empty set. This covers the real label 90% of the time, but is completely uninformative. To avoid this "solution", we also ask that our predictions be threshold calibrated. Remember our prediction sets have the form $P_t(x) = \{y : s(x,y) \leq \tau_t\}$. Now the threshold $\tau_t$ might be different every day. But we want 90% coverage even conditional on the value of $\tau_t$. This rules out cheating.

Remarkably (I think!), for every set of groups specified ahead of time, we're able to guarantee that even if the data is generated by an adversary, our empirical coverage converges to 90% at the statistically optimal rate. Here is what that means: pick a threshold $\tau$ and group $G$. Consider all $n_{\tau,G}$ rounds in which the example $x$ was in $G$, and in which we predicted threshold $\tau$. We promise that on this set, we cover 90% $\pm$ $1/\sqrt{n_{\tau,G}}$ of the labels. This is the best you could do even with a known distribution.

The best thing is that the algorithm is super simple and practical. We had a paper last year that showed how to do much of this in theory --- but the algorithm from that paper was not easily implementable (it involved solving an exponentially large linear program with a separation oracle).
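The "cheating" failure mode above, and why threshold calibration rules it out, is easy to see in a small audit (an illustrative sketch of our own, not the paper's algorithm): tabulate empirical coverage separately for each (group, threshold) pair.

```python
from collections import defaultdict

def coverage_by_group_and_threshold(records):
    """Each record is (groups, threshold, covered): the groups the example
    belonged to, the threshold used, and whether the prediction set
    contained the true label.  Returns empirical coverage per (g, t) cell."""
    counts = defaultdict(lambda: [0, 0])  # (g, t) -> [covered, total]
    for groups, t, covered in records:
        for g in groups:
            counts[(g, t)][0] += int(covered)
            counts[(g, t)][1] += 1
    return {cell: c / n for cell, (c, n) in counts.items()}

# The cheat: the full label set 90% of the time, the empty set 10% of the time.
# Marginal coverage is 0.9, but conditioned on the threshold it is 1.0 or 0.0.
records = [({"all"}, "full", True)] * 9 + [({"all"}, "empty", False)]
print(coverage_by_group_and_threshold(records))
# → {('all', 'full'): 1.0, ('all', 'empty'): 0.0}
```

A threshold-calibrated predictor must show roughly 90% in every populated cell of this table, which the cheat visibly fails.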
But here is our new algorithm --- it only involves doing a small amount of arithmetic for each prediction:

[Algorithm listing in the paper.]

So we're able to implement it and run a bunch of experiments. You can read about them in detail in the paper, but the upshot is that our new method is competitive with split conformal prediction even on "its own turf" --- i.e., when the data really is drawn i.i.d. and we only care about marginal coverage --- and really excels when the data comes from a more complicated source, or when we measure group-conditional coverage, which traditional methods tend to have much more trouble with. We run experiments on regression and classification tasks, on exchangeable data, under distribution shift, on real time series data, and on adversarial data orderings.

Even when the data is i.i.d. and we only care about marginal coverage, our method has an important advantage over split conformal prediction --- since we don't need to preserve exchangeability, we can use all of the data to train the underlying model, whereas split conformal prediction needs to reserve some fraction of it for a holdout set. The result is faster learning for our method, which results in smaller/more accurate prediction sets even without the complicating factors of groupwise coverage, threshold calibration, or adversarial data!

Posted by Aaron at 9:00 AM

FORC 2021 Call for Papers

Reminder to anyone who has forgotten about FORC 2021 --- it's a very nice venue --- and also a nice place to highlight recent work that is published or submitted elsewhere, via the non-archival track.

Symposium on Foundations of Responsible Computing (FORC) 2021
Call for Papers - Deadline February 15, 2021 AOE (anywhere on Earth)

The second annual Symposium on Foundations of Responsible Computing (FORC) is planned to be held on June 9-11, 2021, *online*. FORC is a forum for mathematically rigorous research in computation and society writ large.
The Symposium aims to catalyze the formation of a community supportive of the application of theoretical computer science, statistics, economics, and other relevant analytical fields to problems of pressing and anticipated societal concern.

Topics that fall in scope include, but are not restricted to: formal approaches to privacy, including differential privacy; theoretical approaches to fairness in machine learning, including the investigation of definitions, algorithms and lower bounds, tradeoffs, and economic incentives; computational and mathematical social choice (including apportionment and redistricting); theoretical foundations of sustainability; mechanism design for social good; mathematical approaches to bridging computer science, law and ethics; and theory related to modeling and mitigating the spread of epidemics. The Program Committee also warmly welcomes mathematically rigorous work on societal problems that have not traditionally received attention in the theoretical computer science literature. Whatever the topic, submitted papers should communicate their contributions towards responsible computing, broadly construed.

The symposium itself will feature a mixture of talks by authors of accepted papers and invited talks. At least one author of each accepted paper should be present at the symposium to present the work (with an option for virtual attendance, as needed).

Dual Submission Policy. Authors must indicate at the time of submission whether they are submitting to the archival-option track or the non-archival track.

* For submissions to the non-archival track, it is permitted to submit papers that have appeared in a peer-reviewed conference or journal since the last FORC. It is also permitted to simultaneously or subsequently submit substantially similar work to another conference or to a journal. Accepted papers in the non-archival track will receive talks at the symposium and will appear as one-page abstracts on the symposium website.
They will not appear in the proceedings.

* For submissions to the archival-option track, papers that are substantially similar to papers that have been previously published, accepted for publication, or submitted in parallel to other peer-reviewed conferences with proceedings may not be submitted. Also, submissions that are substantially similar to papers that are already published in a journal at the time of submission may not be submitted to the archival-option track. Accepted papers in the archival-option track will receive talks at the symposium. Authors of papers accepted to the archival-option track will be given the option to choose whether to convert to a one-page abstract (which will not appear in the proceedings) or publish a 10-page version of their paper in the proceedings. The proceedings of FORC 2021 will be published by LIPIcs.

Authors are also responsible for ensuring that submitting to FORC would not be in violation of other journals' or conferences' submission policies. PC members and reviewers will be aware during the review process of whether papers have been submitted as archival-option or non-archival. The PC reserves the right to hold non-archival papers to a different standard than archival-option papers.

Submission Instructions.

* Authors should upload a PDF of the paper here: https://easychair.org/conferences/?conf=forc2021.
* A footnote on the title of the paper should indicate whether the paper is a submission to the archival-option track or the non-archival track. Submissions to the non-archival track should also indicate in this footnote any archival venues (conferences or journals) at which the paper has appeared, a link to the publication, and the date on which it was published.
* The font size should be at least 11 point and the format should be single-column.
* Author names and affiliations should appear at the top of the paper (reviewing for FORC is single, not double blind).
* Beyond these, there are no formatting or length requirements, but reviewers will only be asked to read the first 10 pages of the submission. It is the authors' responsibility that the main results of the paper and their significance be clearly stated within the first 10 pages. For both the archival-option track and the non-archival track, submissions should include proofs of all central claims, and the committee will put a premium on writing that conveys clearly and in the simplest possible way what the paper is accomplishing.
* Authors are free to post their submissions on arXiv or other online repositories.

All questions about submissions should be emailed to the PC chair, Katrina Ligett, at [email protected]

FORC Steering Committee: Avrim Blum, Cynthia Dwork, Shafi Goldwasser, Sampath Kannan, Jon Kleinberg, Kobbi Nissim, Toni Pitassi, Omer Reingold, Guy Rothblum, Salvatore Ruggieri, Salil Vadhan, Adrian Weller

FORC 2021 Program Committee: Borja Balle, Raef Bassily, Mark Bun, Elisa Celis, Aloni Cohen, Moon Duchin, Vitaly Feldman, Kira Goldner, Krishna Gummadi, Swati Gupta, Gautam Kamath, Michael Kearns, Scott Kominers, Himabindu Lakkaraju, Katrina Ligett (chair), Jamie Morgenstern, Seth Neel, Kunal Talwar

Submission deadline: February 15, 2021 AOE (anywhere on Earth)
Author notification: March 31, 2021
Conference: June 9-11, 2021

How to Estimate the Uncertainty of Predictions

This is a post about a new paper Online Multivalid Learning: Means, Moments, and Prediction Intervals, that is joint work with Varun Gupta, Christopher Jung, Georgy Noarov, and Mallesh Pai. It is cross-posted to the new TOC4Fairness blog. For those that prefer watching to reading, here is a recording of a talk I gave on this paper. Suppose you go and train the latest, greatest machine learning architecture to predict something important.
Say (to pick an example entirely out of thin air) you are in the midst of a pandemic, and want to predict the severity of patients' symptoms in 2 days' time, so as to triage scarce medical resources. Since you will be using these predictions to make decisions, you would like them to be accurate in various ways: for example, at the very least, you will want your predictions to be calibrated, and you may also want to be able to accurately quantify the uncertainty of your predictions (say with 95% prediction intervals). It is a fast-moving situation, and data is coming in dynamically --- and you need to make decisions as you go. What can you do? The first thing you might do is ask on twitter! What you will find is that the standard tool for quantifying uncertainty in settings like this is conformal prediction. The conformal prediction literature has a number of elegant techniques for endowing arbitrary point prediction methods with marginal prediction intervals: i.e. intervals $(\ell(x), u(x))$ such that over the randomness of some data distribution over labelled examples $(x,y)$: $\Pr_{(x,y)}\left[y \in [\ell(x), u(x)]\right] \approx 0.95$ These would be 95% marginal prediction intervals --- but in general you could pick your favorite coverage probability $1-\delta$. Conformal prediction has a lot going for it --- its tools are very general and flexible, and lead to practical algorithms. But it also has two well-known shortcomings:

Strong Assumptions. Like many tools from statistics and machine learning, conformal prediction methods require that the future look like the past. In particular, they require that the data be drawn i.i.d. from some distribution --- or at least be exchangeable (i.e. their distribution should be invariant to permutation). This is sometimes the case --- but it often is not.
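To make the conformal construction concrete before turning to its shortcomings, here is a minimal sketch of a basic split-conformal interval for an arbitrary point predictor. This is an illustration only, not code from the paper or from any particular conformal library; the predictor `f` and the data are hypothetical stand-ins, and the i.i.d. assumption is exactly the one being critiqued:

```python
import numpy as np

def split_conformal_interval(f, X_cal, y_cal, x_new, delta=0.05):
    """Marginal (1 - delta) prediction interval via split conformal.

    Nonconformity scores are absolute residuals on a held-out
    calibration set; the interval half-width is their
    ceil((n + 1) * (1 - delta))-th smallest value.
    """
    scores = np.abs(y_cal - f(X_cal))            # nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - delta)))      # conservative rank
    q = np.sort(scores)[min(k, n) - 1]           # k-th smallest score
    pred = f(x_new)
    return pred - q, pred + q

# Toy check: even a deliberately biased point predictor ends up with
# roughly 95% *marginal* coverage on fresh i.i.d. data.
rng = np.random.default_rng(0)
X = rng.normal(size=5000)
y = 2 * X + rng.normal(size=5000)
f = lambda x: 2 * x + 0.3                        # miscalibrated predictor
lo, hi = split_conformal_interval(f, X[:2500], y[:2500], X[2500:])
coverage = np.mean((y[2500:] >= lo) & (y[2500:] <= hi))
```

Note that `coverage` here is an average over the whole test population, which is precisely the weakness discussed next.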
In our pandemic scenario, the distribution on patient features might quickly change in unexpected ways as the disease moves between different populations, as might the relationship between features and outcomes, as treatments advance. In other settings in which consequential decisions are being made about people --- like lending and hiring decisions --- people might intentionally manipulate their features in response to the predictive algorithms you deploy, in an attempt to get the outcome they want. Or you might be trying to predict outcomes in time series data, in which there are explicit dependencies across time. In all of these scenarios, exchangeability is violated.

Weak Guarantees. Marginal coverage guarantees are averages over people. 95% marginal coverage means that the true label falls within the predicted interval for 95% of people. It need not mean anything for people like you. For example, if you are part of a demographic group that makes up less than 5% of the population, it is entirely consistent with the guarantees of a 95% marginal prediction interval that labels for people from your demographic group fall outside of their intervals 100% of the time. This can be both an accuracy and a fairness concern --- marginal prediction works well for "typical" members of a population, but not necessarily for everyone else. What kinds of improvements might we hope for? Let's start with how to strengthen the guarantee:

Multivalidity. Ideally, we would want conditional guarantees --- i.e. the promise that for every $x$, we would have $\Pr_{y}\left[y \in [\ell(x), u(x)] | x \right] \approx 0.95$. In other words, that somehow for each individual, the prediction interval was valid for them specifically, over the "unrealized" (or unmeasured) randomness of the world. Of course this is too much to hope for. In a rich feature space, we have likely never seen anyone exactly like you before (i.e. with your feature vector $x$).
So strictly speaking, we have no information at all about your conditional label distribution. We still have to average over people. But we don't have to average over everybody. An important idea that has been investigated in several different contexts in recent years in the theory literature on fairness is that we might articulate a very rich collection of (generally intersecting) demographic groups $G$ corresponding to relevant subsets of the data domain, and ask for things that we care about to hold true as averaged over any group $S \in G$ in the collection. In the case of prediction intervals, this would correspond to asking that, simultaneously for every demographic group $S \in G$, $\Pr_{(x,y)}\left[y \in [\ell(x), u(x)] | x \in S \right] \approx 0.95$. Note here that an individual might be a member of many different demographic groups, and can interpret the guarantees of their prediction interval as averages over any of those demographic groups, at their option. This is what we can achieve --- at least for any such group that isn't too small. And what kinds of assumptions do we need?

Adversarial Data. Actually, it's not clear that we need any! Many learning problems which initially appear to require distributional assumptions turn out to be solvable even in the worst case over data sequences --- i.e. even if a clever adversary, with full knowledge of your algorithm, and with the intent only to sabotage your learning guarantees, is allowed to adaptively choose data to present to your algorithm. This is the case for calibrated weather prediction, as well as general contextual prediction. It turns out to be the case for us as well. Instead of promising coverage probabilities of $1-\delta + O(1/T)$ after $T$ rounds on the underlying distribution, as conformal prediction is able to (for us there is no underlying distribution), we offer empirical coverage rates of $1-\delta \pm O(1/\sqrt{T})$.
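Group-wise empirical coverage of this kind can be audited directly from a transcript of predictions and outcomes. A minimal sketch (the groups, intervals, and data below are hypothetical illustrations, not anything from the paper), showing how an interval that covers well on average can still fail badly on a small subgroup:

```python
import numpy as np

def empirical_group_coverage(lo, hi, y, groups):
    """Empirical coverage rate within each (possibly intersecting) group.

    lo, hi, y: arrays over T rounds; groups: dict name -> boolean mask.
    Returns, for each nonempty group, the fraction of its rounds whose
    label landed inside the predicted interval.
    """
    covered = (y >= lo) & (y <= hi)
    return {name: covered[mask].mean()
            for name, mask in groups.items() if mask.any()}

# Hypothetical transcript: intervals tuned to the majority population
# achieve high coverage overall while covering a ~4% minority group
# essentially never -- the failure mode of purely marginal guarantees.
rng = np.random.default_rng(1)
T = 10000
minority = rng.random(T) < 0.04
y = np.where(minority, 10 + rng.normal(size=T), rng.normal(size=T))
lo, hi = np.full(T, -2.5), np.full(T, 2.5)
cov = empirical_group_coverage(lo, hi, y, {
    "everyone": np.ones(T, dtype=bool),
    "minority": minority,
})
```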
This kind of guarantee is quite similar to what conformal prediction guarantees about empirical coverage.

More Generally. Our techniques are not specific to prediction intervals. We can do the same thing for predicting label means, and predicting variances of the residuals of arbitrary prediction methods. For mean prediction, this corresponds to an algorithm for providing multi-calibrated predictions in the sense of Hebert-Johnson et al., in an online adversarial environment. For variances and other higher moments, it corresponds to an online algorithm for making mean-conditioned moment multicalibrated predictions in the sense of Jung et al.

Techniques. At the risk of boring my one stubbornly remaining reader, let me say a few words about how we do it. We generalize an idea that dates back to an argument that Fudenberg and Levine first made in 1995 --- and is closely related to an earlier, beautiful argument by Sergiu Hart --- but that I just learned about this summer, and thought was just amazing. It applies broadly to solving any prediction task that would be easy, if only you were facing a known data distribution. This is the case for us. If, for each arriving patient at our hospital, a wizard told us their "true" distribution over outcome severity, we could easily make calibrated predictions by always predicting the mean of this distribution --- and we could similarly read off correct 95% coverage intervals from the CDF of the distribution. So what? That's not the situation we are in, of course. Absent a wizard, we first need to commit to some learning algorithm, and only then will the adversary decide what data to show us. But let's put our game theory hats on. Suppose we've been making predictions for a while. We can write down some measure of our error so far --- say the maximum, over all demographic groups in $G$, of the deviation of our empirical coverage so far from our 95% coverage target.
For the next round, define a zero-sum game, in which we (the learner) want to minimize the increase in this measure of error, and the adversary wants to maximize it. The defining feature of zero-sum games is that how well you can do in them is independent of which player has to announce their distribution on play first --- this is the celebrated Minimax Theorem. So to evaluate how well the learner could do in this game, we can think about the situation involving a Wizard above, in which for each arriving person, before we have to make a prediction for them, we get to observe their true label distribution. Of course in this scenario we can do well, because for all of our goals, our measure of success is based on how well our predictions match observed properties of these distributions. The Minimax theorem tells us that (at least in principle --- it doesn't give us the algorithm), there must therefore also be a learning algorithm that can do just as well, but against an adversary. The minimax argument is slick, but non-constructive. To actually pin down a concrete algorithm, we need to solve for the equilibrium in the corresponding game. That's what we spend much of the paper doing, for each of the prediction tasks that we study. For multicalibration, we get a simple, elementary algorithm --- but for the prediction interval problem, although we get a polynomial time algorithm, it involves solving a linear program with a separation oracle at each round. Finding more efficient and practical ways to do this strikes me as an important problem. Finally, I had more fun writing this paper --- learning about old techniques from the game theoretic calibration literature --- than I've had in a while. I hope a few people enjoy reading it!

No Regret Algorithms from the Min Max Theorem

The existence of no-regret learning algorithms can be used to prove von Neumann's min-max theorem.
This argument is originally due to Freund and Schapire, and I teach it to my undergraduates in my algorithmic game theory class. The min-max theorem also can be used to prove the existence of no-regret learning algorithms. Here is a constructive version of the argument (constructive in that in the resulting algorithm, you only need to solve polynomially sized zero-sum games, so you can do it via linear programming). Recall the setting. Play proceeds in rounds $t \in \{1,\ldots,T\}$. On each day $t$, the learner chooses one of $k$ actions $i_t \in \{1,\ldots,k\}$, and the adversary chooses a loss vector $\ell^t \in [0,1]^k$. The learner incurs loss $\ell^t_{i_t}$, corresponding to the action he chose. At the end of the interaction, the regret of the learner is defined to be the difference between the cumulative loss he incurred and the cumulative loss of the best fixed action (played consistently) in hindsight: $$\textrm{Regret}_T = \max_j \left(\sum_{t=1}^T \left(\ell^t_{i_t} - \ell^t_j\right)\right)$$ A classical and remarkable result is that there exist algorithms that can guarantee that regret grows only sublinearly with time: $\textrm{Regret}_T = O(\sqrt{T})$. Let's prove this. Define the non-negative portion of our cumulative regret with respect to action $j$ up until day $d$ as: $$V_d^j = \left(\sum_{t=1}^d\left(\ell^t_{i_t} - \ell^t_j\right)\right)^+$$ and our additional regret at day $d+1$ with respect to action $j$ as: $$r_{j}^{d+1} = \ell_{i_{d+1}}^{d+1} - \ell^{d+1}_j$$ Observe that if $V_{d}^j \geq 1$ then $V_{d+1}^j = V_d^j + r_j^{d+1}$.
Define a surrogate loss function as our squared cumulative regrets, summed over all actions: $$L_d = \sum_{j=1}^k (V_d^j)^2$$ Observe that we can write the expected gain in our loss on day $d+1$, conditioned on the history thus far: $$\mathbb{E}[L_{d+1} - L_d] \leq \sum_{j : V_d^j \geq 1} \mathbb{E}\left[(V_d^j+r_j^{d+1})^2 - (V_d^j)^2 \right] + 3k$$ $$= \sum_{j : V_d^j \geq 1} \left(2V_d^j \mathbb{E}[r_{j}^{d+1}] + \mathbb{E}[(r_{j}^{d+1})^2]\right) + 3k $$ $$\leq \sum_{j=1}^k \left(2V_d^j \mathbb{E}[r_{j}^{d+1}]\right) + 4k$$ where the expectations are taken over the randomness of both the learner and the adversary in round $d+1$. Now consider a zero-sum game played between the learner and the adversary in which the learner is the minimization player, the adversary is the maximization player, and the utility function is $$u(i_{d+1}, \ell^{d+1}) = \sum_{j=1}^k \left(2V_d^j \mathbb{E}[r_{j}^{d+1}]\right)$$ The min-max theorem says that the learner can guarantee the same payoff for herself in the following two scenarios: 1) The learner first has to commit to playing a distribution $p_{d+1}$ over actions $i$, and then the adversary gets to best respond by picking the worst possible loss vectors, or 2) The adversary has to first commit to a distribution over loss vectors $\ell$ and then the learner gets the benefit of picking the best action $i_{d+1}$ to respond with. Scenario 1) is the scenario our learner finds herself in, when playing against an adaptive adversary. But 2) is much easier to analyze. If the adversary first commits to a distribution over loss vectors $\ell^{d+1}$, the learner can always choose action $i_{d+1} = \arg\min_j \mathbb{E}[\ell^{d+1}_j]$, which guarantees that $\mathbb{E}[r_{j}^{d+1}] \leq 0$, which in turn guarantees that the value of the game $ \sum_{j=1}^k \left(2V_d^j \mathbb{E}[r_{j}^{d+1}]\right) \leq 0$.
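As an aside, for this particular game the learner's good distribution can even be written in closed form: playing each action with probability proportional to $V_d^j$ makes $\sum_j 2V_d^j\,\mathbb{E}[r_j^{d+1}]$ exactly zero for every loss vector, which is essentially the regret-matching strategy of Hart and Mas-Colell. A minimal simulation sketch of that learner (an illustration I'm adding here, not code from the post):

```python
import numpy as np

def regret_matching(loss_matrix, rng):
    """Play each action prop. to its positive cumulative regret.

    loss_matrix: (T, k) array of per-round losses in [0, 1].
    Returns the final regret max_j sum_t (loss[t, i_t] - loss[t, j]).
    """
    T, k = loss_matrix.shape
    cum_regret = np.zeros(k)
    for t in range(T):
        V = np.maximum(cum_regret, 0.0)      # positive-part regrets V_d^j
        if V.sum() > 0:
            p = V / V.sum()                  # regret-matching distribution
        else:
            p = np.full(k, 1.0 / k)          # arbitrary when all regrets <= 0
        i = rng.choice(k, p=p)
        cum_regret += loss_matrix[t, i] - loss_matrix[t]
    return cum_regret.max()

rng = np.random.default_rng(0)
k, T = 3, 2000
losses = rng.random((T, k))                  # stand-in for an adversary
regret = regret_matching(losses, rng)        # should be O(sqrt(k * T))
```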
Hence, the min-max theorem tells us that the learner always has a distribution over actions $p_{d+1}$ that guarantees that $\mathbb{E}[L_{d+1} - L_d] \leq 4k$, even in the worst case over loss functions. If the learner always plays according to this distribution, then by a telescoping sum, we have that: $$\mathbb{E}[L_T] \leq 4kT.$$ We therefore have by Jensen's inequality that: $$\mathbb{E}[\max_j V_T^j] \leq \sqrt{\mathbb{E}[\max_j (V_T^j)^2]}\leq \sqrt{\mathbb{E}\left[\sum_{j=1}^k (V_T^j)^2\right]} \leq 2\sqrt{kT}.$$

Posted by Aaron at 10:16 AM

Moment Multicalibration for Uncertainty Estimation

This blog post is about a new paper that I'm excited about, which is joint work with Chris Jung, Changhwa Lee, Mallesh Pai, and Ricky Vohra. If you prefer watching talks, you can watch one I gave to the Wharton statistics department here. Suppose you are diagnosed with hypertension, and your doctor recommends that you take a certain drug to lower your blood pressure. The latest research, she tells you, finds that the drug lowers diastolic blood pressure by an average of 10 mm Hg. You remember your statistics class from college, and so you ask about confidence intervals. She looks up the paper, and tells you that it reports a 95% confidence interval of [5, 15]. How should you interpret this? What you might naively hope is that [5, 15] represents a conditional prediction interval. If you have some set of observable features $x$, and a label $y$ (in this case corresponding to your decrease in diastolic blood pressure after taking the drug), a 95% conditional prediction interval would promise that: $$\Pr_y [y \in [5, 15] | x] \geq 0.95$$ In other words, a conditional prediction interval would promise that given all of your observed features, over the unrealized/unmeasured randomness of the world, there is a 95% chance that your diastolic blood pressure will decrease by between 5 and 15 points.
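If the conditional label distribution were actually known, reading off such an interval would be trivial: take its 2.5% and 97.5% quantiles. A toy sketch using Python's standard library, under the purely illustrative assumption that the conditional drop in blood pressure is Gaussian with mean 10 and standard deviation 2.55 (numbers chosen only so that the quantiles reproduce the [5, 15] interval above):

```python
from statistics import NormalDist

def conditional_interval(dist, delta=0.05):
    """Central (1 - delta) interval read off a known label distribution."""
    return dist.inv_cdf(delta / 2), dist.inv_cdf(1 - delta / 2)

# Hypothetical conditional distribution of the blood-pressure drop given x.
drop_given_x = NormalDist(mu=10, sigma=2.55)
lo, hi = conditional_interval(drop_given_x)   # roughly (5.0, 15.0)
```

The whole difficulty, of course, is that this conditional distribution is exactly what we never get to observe.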
But if you think about it, coming up with a conditional prediction interval is essentially impossible in a rich feature space. If $x$ contains lots of information about you, then probably there was nobody in the original study population that exactly matched your set of features $x$, and so we have no information at all about the conditional distribution on $y$ given $x$ --- i.e. no samples at all from the distribution over which our coverage probability supposedly holds! So how can you expect any sort of promise at all? There are two typical ways around this difficulty. The first is to make heroic assumptions about the data generation process. For example, if we assume that the world looks like an ordinary least squares model, and that there is a linear relationship between $y$ and $x$, then we can form a confidence region around the parameters of the model, and from that derive prediction intervals. But these prediction intervals are not valid if the model fails to hold, which it inevitably will. The second is to give up on conditional prediction intervals, and instead give marginal prediction intervals. This is what the conformal prediction literature aims to do. A marginal prediction interval looks quite similar to a conditional prediction interval (at least syntactically), and promises: $$\Pr_{(x,y)} [y \in [5, 15] ] \geq 0.95$$ Rather than conditioning on your features $x$, a marginal prediction interval averages over all people, and promises that 95% of people who take the drug have their diastolic blood pressure lowered by between 5 and 15 points. But the semantics of this promise are quite different than that of a conditional prediction interval. Because the average is now taken over a large, heterogeneous population, very little is promised to you. For example, it might be that for patients in your demographic group (e.g. 
middle aged women with Sephardic Jewish ancestry and a family history of diabetes) that the drug is actually expected to raise blood pressure rather than lower it. Because this subgroup represents less than 5% of the population, it is entirely consistent with the marginal prediction interval being correct. Of course, if you are lucky, then perhaps someone has conducted a study of people from this demographic group and has computed marginal prediction intervals over it! But what if there are multiple different groups that you are a member of, over which the results seem to conflict? For example, you might also have a low BMI value and have unusually good cholesterol readings --- features of a group for which the drug works unusually well. Which uncertainty estimate should you trust, if you are a member of both groups? These concerns actually arise already when we think about the semantics of mean estimations ("the expected drop in blood pressure amongst patients who take this drug is 10 mm Hg"). Ideally, if you were a patient with features $x$, then 10 would be an estimate of $\mathbb{E}[y | x]$. But just as with uncertainty estimation, in a large feature space, we typically have no information about the distribution on $y$ conditional on $x$ (because we have never met anyone exactly like you before), and so instead what we have is just an estimate of $\mathbb{E}[y]$ --- i.e. averaging over people. If you have a method of making predictions $f(x)$ as a function of features $x$, then a standard performance metric is calibration --- which informally asks that for every prediction $p$, amongst all people for whom we predicted $f(x) = p$, the average of the realized labels $y$ should be $p$. Again, estimates of this form promise little to individuals, because they are averages over a large and heterogeneous population. Several years ago, Hebert-Johnson et al. 
proposed a nice way to interpolate between the (impossible) ideal of offering conditional mean predictions $f(x) = \mathbb{E}[y | x]$, and the weak guarantee of merely offering calibrated predictions $f$. Roughly speaking, they proposed to specify a very large collection of potentially intersecting groups $G$ (representing e.g. demographic groups like Sephardic Jewish women with a family history of diabetes, and hypertensive patients with low cholesterol and BMI values, etc.) and to ask that a trained predictor be simultaneously calibrated on each sufficiently large group in $G$. They showed how to accomplish this using a polynomially sized sample from the underlying distribution, with polynomial running time overhead, on top of the cost of solving learning problems over $G$. In our paper, we --- roughly speaking --- show how to accomplish the same thing, but for variances and other higher moments, in addition to just means. And our "multicalibrated moment estimates" can be used to construct prediction intervals in exactly the same way that real moments of the conditional label distribution could be used. If you used the real (unknown) label distribution moments, you would have gotten conditional prediction intervals. If you use our multi-calibrated moments, you get marginal prediction intervals that are simultaneously valid as averaged over each of the groups in $G$. So, for example, our hypertensive patient above could interpret her prediction interval --- if it was constructed from multicalibrated moment estimates computed from her features --- as an average over each of the demographic groups that she is a member of (so long as they are contained within $G$), and all of those interpretations would be simultaneously valid. I'll leave the details to the paper --- including what exactly we mean by "moment multicalibration".
I'll just note that a major difficulty is that variances and higher moments --- unlike expectations --- do not combine linearly, so it is no longer sensible to ask that "amongst all people for whom we predicted variance $v$, the true variance should be $v$" --- because even the true conditional label variances do not satisfy this property. But it is sensible to ask that a pair of mean and moment predictions be calibrated in this way: "amongst all people for whom we predicted mean $\mu$ and variance $v$, the true mean should be $\mu$ and the true variance should be $v$." This is what we call "mean-conditioned moment calibration", and it is satisfied by the true distributional moments. The paper is here: Moment Multicalibration for Uncertainty Estimation.

TCS Visioning Workshop — Call for Participation

Reposting from here: https://thmatters.wordpress.com/2020/06/05/tcs-visioning-workshop-call-for-participation/

The CATCS will be hosting a virtual "Visioning Workshop" the week of July 20 in order to identify broad research themes within theoretical computer science (TCS) that have potential for a major impact in the future. The goals are similar to the workshop of the same name in 2008: to package these themes in a way that can be consumed by the general public, which we would deliver primarily to the Computing Community Consortium and others (e.g. funding agencies) to help them advocate for TCS. While participation in the workshop is primarily through invitation, we have a few slots available for the broader community. If you are interested in participating, please see details of the application process below. The workshop will be organized according to area-wise breakout groups. Each breakout group will have 1-2 leads. Breakout groups will meet for 4-5 hours spread across several days and will be tasked with brainstorming ideas and preparing materials related to their topic.
Leads are further expected to participate in plenary sessions held on Monday July 20 and Friday July 24 (4-5 hrs of additional time) where these materials will be discussed. If you are interested in participating in the workshop, please fill out this Google form by Monday June 15. On this form, applicants are asked to contribute one or two major results in the last 10 years whose significance can be explained in layperson terms, and one or two major challenges for theory whose significance can be explained in layperson terms. These descriptions can be very brief. We will just use them to select participants and create breakout groups.
A particle image velocimetry study of dual-rotor counter-rotating wind turbine near wake
Journal of Visualization, Issue 3/2020, 31-03-2020 | Regular Paper | Open Access
Eloise O. Hollands, Chuangxin He, Lian Gan

The global wind industry is expected to surpass generation of 60 GW in 2020, and reach a total of 840 GW by 2022 (Global Wind Energy Council 2017). These values alone highlight the significance of improvements to wind turbine efficiency. An increase as small as 5% could result in an additional 42 GW of clean, emissions-free wind power by the year 2020. The majority of wind turbines are of the single-rotor (SRWT) type. Current SRWTs are limited by three main factors: the inherent Betz limit, root losses and wake interactions. The Betz limit defines the theoretical maximum efficiency of a horizontal-axis single-rotor wind turbine, stating that no more than 59.3% of the kinetic energy from wind may be converted into useful mechanical energy to turn the rotor. Root losses are the result of thick and thus aerodynamically poor turbine blade roots required to withstand large structural loads; a loss in power generation of up to 5% is estimated for horizontal-axis wind turbines due to the increased structural integrity required at the root (Sharma and Frere 2010).
Wake interactions are relevant specifically to wind farms featuring a configuration of many wind turbines as opposed to isolated use. The problem is that a wake, after passing through a turbine, expands, superimposes and impinges upon downstream turbines, negatively affecting the downwind turbine's ability to extract energy from it. The combined effect of these three limitations results in an efficiency of approximately 10% to 30% for conventional SRWTs. In the context of a wind farm, it is desired that the wake passing through a turbine recovers quickly, so as not to inhibit the functionality of the downstream turbine and to reduce possible harmful resonance on the blades. However, there are several factors that prohibit the recovery of a wake, the most detrimental of which is the phenomenon of tip vortices. The pressure difference responsible for generating lift at an airfoil surface also causes vortices emanating from the blade tips, whose pathlines are helical in nature due to the rotation of the turbine blades (Sherry et al. 2013). As the speed of the turbine blade tips is significantly higher than the incoming flow speed, the distance between the spirals of the tip vortices is very small, meaning they can be approximated as a very turbulent cylindrical shear layer separating the wake, containing slow-moving air, from the surrounding fluid at ambient conditions (Gomez-Elviraa et al. 2005). This shear layer essentially acts as a barrier, delaying wake re-energisation via mixing with ambient air. Furthermore, it is suggested by Bartl (2011) that the swirling motion leaving the turbine rotor could excite an eigenfrequency of the blades of the downstream turbine, leading to material fatigue. Consequently, a method of dissipating these vortices as soon as possible is desired.
This evident need for improvement sparked relatively new research into other, unconventional wind turbine designs, such as the horizontal dual-rotor wind turbine (DRWT). DRWTs are characterised by two rotors mounted on a single tower, to both capture additional energy otherwise missed by a single rotor and, more importantly, to improve the characteristics of the wake. Ozbay (2014) conducted wind tunnel experiments and compared SRWT and DRWT for both co-rotating and counter-rotating configurations. The study found that the dissipation of the vortex-induced shear layer is highly dependent on the turbulence kinetic energy (TKE) available for turbulent mixing. This need for increased TKE as an input to the wake is met by the DRWT, experimentally validated by incorporating a smaller auxiliary rotor half the size of the main rotor (Wang et al. 2018). That study found improved recovery in the wake of a DRWT compared to that of a SRWT, due to the interaction and consequent dissipation of separately emanating tip vortices. The measurements also concluded that although the highest turbulence production rate was seen behind the co-rotating DRWT, it suffered the largest velocity deficit and was unable to utilise the swirling velocity induced by the upwind rotor as the counter-rotating DRWT could. The counter-rotating and co-rotating DRWT saw power enhancements of 7.2% and 1.8%, respectively, providing confidence in the decision to go for the former configuration. Herzog et al. (2010) compared, both numerically and experimentally, a SRWT with a counter-rotating DRWT in which both rotors were of the same diameter. In an effort to more accurately simulate the free stream operation of both turbine configurations, the blockage effects of the wind tunnel used were numerically studied, along with practical measurement of the drag coefficient. It was concluded that an increase in power output of 9% was achieved when compared to the SRWT. Lee et al.
(2012) studied the effects of design parameters on the aerodynamic performance of a counter-rotating DRWT and concluded that for optimal system performance, the secondary rotor should be about 40%–50% the size of the larger rotor, depending on the pitch of each rotor. This also agreed with field testing (Jung et al. 2005). The optimal power generation was later further studied parametrically (Rosenberg et al. 2014), which however found that the auxiliary rotor placed upwind of the existing rotor should have a diameter 25% the size of the latter, combined with a separation of 2D (D being the diameter of the main rotor) and a tip speed ratio of 6, to yield optimal system performance. Although these studies align closely with the intentions of the current study, their primary focus was on the power output of a standalone system, while the wake characteristics, which are crucial for eventual large-scale implementation involving multiple DRWT systems, remain not well understood. This study aims to focus on the near wake behind a counter-rotating DRWT. The dependence of the helical vortex wake decay on the size of the smaller auxiliary rotor is of special interest. The primary advantage of this configuration is twofold: the smaller rotor placed upstream aims to capture the energy lost at the root of the main turbine more economically than using a same-size rotor, while the opposite rotating direction is potentially beneficial to counteract the swirl, or at least accelerate the swirl decay, behind the DRWT system.

2 Experimental set-up

2.1 Turbine model and its operation

A scaled turbine tower and nacelle were manufactured for use in a wind tunnel of 0.5 m working section. The turbine tower is 225 mm long and extended vertically downwards from the ceiling, placing the nacelle at the centre of the testing section. The main rotor diameter is 180 mm, sufficiently small so as to avoid flow interference with the wind tunnel wall.
The rotors embodied a NACA 4415 aerofoil profile, one of the most common and broadly used aerodynamic shapes for wind turbine blades, and were 3D printed in FullCure®720 using an Eden Object 500 printer. This produced a fairly accurate blade shape and a good surface finish, ready to be spray-painted matte black straight after curing. The largest rotor provided the base model, of which the two auxiliary rotors were simply scaled versions at factors of 0.8 and 0.5 to ensure aerodynamic similarity. Three turbine configurations were studied: an SRWT comprising the main rotor only, a DRWT that modifies the SRWT model to include a 144-mm-diameter rotor (80%) placed 60 mm (the length of the hub) upstream, and a second DRWT that utilises the smaller 90-mm-diameter rotor (50%); see Fig. 1a. A smaller rotor size was difficult to achieve due to the 3D printer resolution as well as the material strength. DRWTs having auxiliary rotors larger than 80% of the main rotor would not be economically suitable in practice compared to two separate single-rotor wind turbines; hence, they are not studied.
Fig. 1 a Rotor blades, b model circuit diagram, c, d experimental set-up photograph and schematic including wind tunnel working section dimensions
In the two DRWT configurations, the smaller auxiliary rotor was a mirror of the main rotor and was placed upwind, aiming to capture the energy at the root part of the main rotor installed at the downstream side of the hub. As the superiority of the counter-rotating DRWT over the co-rotating DRWT and SRWT in power generation has been justified (Ozbay 2014 ; Wang et al. 2018 among others), only the counter-rotating condition is investigated in this study. Hereafter, DRWT refers to the counter-rotating configuration for short. The current SRWT configuration is different from a conventional one, where the main rotor is installed on the windward side of the hub. The current SRWT configuration ensures a direct comparison to the two DRWT configurations.
The conventional SRWT and the DRWT configuration with an auxiliary rotor of the same size as the main rotor have been well-studied; therefore, they are not repeated here. In the context of this investigation, it is desirable that the results are independent of the Reynolds number (Re). Unsurprisingly, most wind tunnel experiments are conducted at lower Re than real operating conditions. Existing research into the Re dependence of turbulence statistics in the wake of turbines states that mean velocity and turbulence intensity, both of which are to be studied here, become independent of Re for \({\mathrm{Re}} \gtrsim 9.3 \times 10^{4}\) (Chamorro et al. 2012 ). Here, Re is defined as $$\begin{aligned} {\mathrm{Re}}=\frac{U_{\infty }D}{\nu } \end{aligned}$$ where \(U_{\infty }\) is the free stream wind velocity at hub centre height, set to be \(8\,{\mathrm{ms}}^{-1}\); D is the characteristic length scale of the flow, taken as the diameter of the main rotor; and \(\nu\) is the kinematic viscosity of air at 20 \(^{\circ }\)C. This gives \({\mathrm{Re}}\approx 9.5\times 10^4\), satisfying the Re independence criterion. The tip speed ratio \(\varLambda\) is an important factor of wind turbine design (Yurdusev et al. 2006 ) which quantifies the power generation capability and consequent efficiency of a turbine. It is defined as the ratio between the blade tip speed and the free stream velocity, viz. \(\varLambda =\varOmega L/U_{\infty }\), where \(\varOmega\) is the turbine rotational speed and L is the length of the blade. According to Ragheb and Ragheb ( 2011 ), the optimal \(\varLambda\) is \(\varLambda _{\mathrm{opt}}=4\pi /m\), where \(m=3\) is the number of blades in this study. A turbine with too low a \(\varLambda\) fails to capture energy from the wind; conversely, a rotor with too large a \(\varLambda\) acts more as a barrier to the incoming air. It is shown by Siddiqui et al.
( 2017 ) that \(\varLambda\) greatly impacts properties of the wake, namely velocity, vorticity and flow streamline trend, rendering it a parameter that must remain constant throughout the experimentation. However, the \(\varOmega\) required for \(\varLambda _{\mathrm{opt}}\) on the 50% auxiliary rotor was too high to be feasible; for this reason, a slightly lower value of \(\varLambda =3.46\) was chosen and was set to be the same for all the rotors. At this value, the theoretical power coefficient for a turbine with three blades of NACA 4415 profile, defined as the ratio of electrical power output to wind power input, is 0.4196 (Yurdusev et al. 2006 ), well within the average range of 0.2–0.45. An MFA Como drill motor and a low inertia solar motor were used to control the rotor rotational speeds. Each of them was connected to a rotor using a push-fit fastener and powered by a 5 V source. The connecting wires were wrapped around the turbine tower, properly secured so as not to alter its aerodynamic properties, and fed out of the working section. The rotors were powered to rotate in the direction they would naturally do in a free stream, as dictated by their blade profiles. As shown in Fig. 1b, adjustable resistors were used to tune the rotational velocity until the desired value was reached, measured using a strobe light. With the accuracy provided by the strobe, the tip speed ratio varied by at most \(\pm\,0.5\%\), deeming the set-up suitably reliable. Table 1 details the working conditions of the rotors, where B is the auxiliary rotor upstream of the main rotor A.
Table 1 Working conditions for each configuration with reference to Fig. 1b (columns: Rotor A ø (mm), Rotor B ø (mm), \(\varOmega\) (RPM); rows: SRWT, DRWT(L), DRWT(S))
For the two DRWT configurations, only the \(\varOmega\) of the auxiliary rotor is listed, as the \(\varOmega\) for the main rotor is the same as that in the SRWT configuration.
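As a quick sanity check (ours, not part of the original study), the Reynolds number and the rotor speeds implied by the fixed tip speed ratio can be recomputed from the values quoted above; the viscosity value is an assumption for air at 20 °C:

```python
import math

# Hedged sanity check of the quoted operating conditions (not the authors' code).
U_inf = 8.0        # free stream velocity at hub height, m/s
D_main = 0.180     # main rotor diameter, m
nu = 1.5e-5        # assumed kinematic viscosity of air at ~20 deg C, m^2/s
Lam = 3.46         # tip speed ratio, common to all rotors

Re = U_inf * D_main / nu
print(f"Re = {Re:.2e}")   # ~9.6e4, above the ~9.3e4 independence threshold

# Omega = Lam * U_inf / L for blade length L = D/2; the 50% rotor needs the
# highest speed, which is why Lambda_opt = 4*pi/3 was infeasible for it.
for name, D in [("main 180 mm", 0.180), ("aux 144 mm", 0.144), ("aux 90 mm", 0.090)]:
    omega = Lam * U_inf / (D / 2)                      # rad/s
    print(f"{name}: {omega:.0f} rad/s = {omega * 60 / (2 * math.pi):.0f} RPM")
```

This also illustrates why the same \(\varLambda\) forces the smaller auxiliary rotors to spin proportionally faster than the main rotor.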
L and S denote the large and small auxiliary rotor, respectively.
2.2 PIV measurements
Two-dimensional particle image velocimetry (PIV) measurements were performed to investigate the near wake flow structure. The flow was seeded with oil droplets of diameter \(\approx 1\,\upmu\)m produced by an atomiser; they are small enough to follow the motion of the flow, yet large enough for PIV cross-correlation at the set field of view (FOV). Particles were distributed homogeneously in the FOV plane, illuminated by a 120 mJ per pulse, 15 Hz double-headed Nd:YAG laser, which fired from far downstream of the flow as shown in Fig. 1c. The laser sheet was set to \(\approx 3\) mm thick to account for the out-of-plane velocity component due to blade rotation. The PIV \(\Delta t\) was set to 30 \(\upmu\)s for an FOV size of about 190 mm \(\times\) 240 mm in the x– y plane, where x is the streamwise direction and y is along the rotor radius (vertical) direction. The origin was set at the leeward centre of the main rotor; see Fig. 1b. The FOV was offset in the y direction to be optimised for the part of the wake away from the tower. A low-speed SensiCam camera was used as the imaging tool, sampling at a rate of four image pairs per second and facing normal to the FOV in an enclosure. Figure 1c, d shows the image and the schematic of the set-up, respectively. In total, 1000 pairs of images were taken for each of the configurations defined in Table 1. The particle images were processed by LaVision®Davis 7.0, with a \(32\times 32\) pixel interrogation window and \(50\%\) overlap. This gives a spatial resolution of \(\approx 3\) mm in terms of vector spacing. The instantaneous velocity in the x and y directions, respectively, is written as \((u,v)=\left( U+u',V+v'\right)\), where ( U, V) is the ensemble averaged (time mean) velocity and \((u',v')\) is the fluctuating component.
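As a minimal sketch of the decomposition just defined (synthetic fields stand in for the PIV data; the array names and sizes are illustrative, not from the study), the mean, fluctuating parts and the in-plane TKE used later in Sect. 3.3 can be computed as:

```python
import numpy as np

# Reynolds decomposition (u, v) = (U + u', V + v') on a stack of synthetic
# snapshots; shapes mimic 1000 PIV vector fields on a ny x nx grid.
rng = np.random.default_rng(0)
n_snap, ny, nx = 1000, 64, 80
u = 8.0 + 0.5 * rng.standard_normal((n_snap, ny, nx))   # synthetic streamwise fields
v = 0.1 * rng.standard_normal((n_snap, ny, nx))         # synthetic vertical fields

U, V = u.mean(axis=0), v.mean(axis=0)    # ensemble (time mean) velocity
u_f, v_f = u - U, v - V                  # fluctuating components u', v'

# In-plane TKE as defined in Sect. 3.3: 0.5 * (u'u' + v'v')
tke = 0.5 * ((u_f ** 2).mean(axis=0) + (v_f ** 2).mean(axis=0))
print(tke.shape)   # (64, 80)
```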
3 Results and discussion
3.1 Mean wake
Figure 2 illustrates the mean streamwise velocity U distribution over the region \(-\,0.4<x/D<0.9\). Similar to most wind turbine wake studies, we focus on the part of the wake away from the tower. Note that in our study, the tower was installed upside down, due to physical constraints of the facility (Fig. 1c).
Fig. 2 Contour of the mean streamwise velocity. a SRWT, b DRWT(S), c DRWT(L). The positions of the rotors are labelled, with the white arrows indicating the rotational direction (viewing from downstream for the main rotor and from upstream for the auxiliary rotors, so they counter-rotate)
As expected, a significant velocity deficit can be seen in the wakes behind all three configurations, indicating that a large amount of the incoming flow's kinetic energy is consumed. The DRWT(L), featuring the larger auxiliary rotor, shows the greatest velocity deficit, closely followed by the DRWT(S); both have a greater deficit than the SRWT at the end of the FOV. This is confirmed by the U distribution along the radial direction, as shown in Fig. 3a. It shows that the free stream velocity \(U_{\infty }\) is fully recovered at \(|y/D|=0.6\) by \(x=0.9D\). Figure 2 shows that the velocity deficit area is roughly cone shaped and gradually expands, starting at the main rotor tips. The slight overshoot of U ( \(>U_{\infty }\)) at \(y/D=-0.6\) in Fig. 3a is because of the induced velocity caused by the three winding helical vortex cores originating from the main rotor blades, which will be investigated later. The width of the wake is very similar for the three configurations, with the wake of DRWT(L) marginally wider. This is consistent with the findings of Ozbay ( 2014 ), who also found the wake width behind an SRWT and a DRWT to be almost identical and consequently dependent on the size of the main rotor, which was also kept constant there.
This establishes that it is not the wake shape that changes with the addition of an auxiliary rotor, but the characteristics within it, further supported visually by Fig. 2.
Fig. 3 a Dependence of axial velocity on the radial distance from the hub centre at \(x=0.9D\), b dependence of the axial velocity on the axial distance at hub centre height
The fact that an obviously quicker velocity recovery is seen in the SRWT for \(-\,0.5\lesssim y/D\lesssim -0.2\) (Fig. 3a) confirms that the auxiliary rotors of the DRWTs capture a large proportion of the kinetic energy of the flow at the main rotor root region, otherwise missed by the SRWT. This is consistent with existing knowledge that DRWTs are able to yield a higher power output than conventional horizontal-axis SRWTs (Ozbay 2014 ; Herzog et al. 2010 ). Over the same region, Fig. 2 shows that DRWT(S) has the highest density of contour lines, followed closely by DRWT(L). This can also be inferred from \(\partial {U}/\partial {y}\), derivable from Fig. 3a. It suggests that the velocity gradient in the radial direction, and therefore the shear strength, is larger for the DRWTs when compared to the SRWT. Ozbay ( 2014 ) demonstrated that the presence of high shear is prone to flow instability and hence promotes turbulent mixing. This turbulent mixing is desired to break down the cone-shaped shear layer caused by tip vortices, as discussed in Sect. 1, in order to accelerate the process of wake recovery. Figure 3b shows the U deficit with axial distance at hub height. The profiles differ significantly within the region \(0<x/D<0.3\), wherein the auxiliary rotors of the DRWTs result in a larger velocity deficit in the immediate near wake. Beyond this distance, the U behaviours are very similar among the three. By the end of the measurement range, U at hub height continues dropping, but it can be expected to eventually recover to \(U_{\infty }\) as the wake dissipates.
Whether or not the U distributions of the three remain similar further downstream requires further investigation. Figure 3b also shows that U of the DRWT(L) is consistently lower than that of the other two configurations, suggesting the former is slightly superior at extracting energy from the wind. This agrees with the findings of Jung et al. ( 2005 ), who stated that using a secondary rotor \(\approx 50\%\) the size of the main rotor sees the best performance in the context of power output, yielding the highest power coefficient.
3.2 Phase-averaged wake
As the rotation rate of the rotors is fixed, it is expected that the wake behind all the configurations manifests a periodic feature, at a frequency of \(m\varOmega /(2\pi )\). Since the samples were acquired at a fixed low frequency in a statistically independent way, without phase locking by an external phase indicator, the phase of the wake is resolved using the snapshot-based proper orthogonal decomposition (POD) (Berkooz et al. 1993 ). POD is a suitable but not unique tool to extract coherent flow structures from both turbulent velocity data, e.g. Wang et al. ( 2020 ), and passive scalar data, e.g. He and Liu ( 2017 ). [Other techniques, e.g. the wavelet transform (Fijimoto and Rinoshika 2017 ), might also be suitable for similar purposes under special circumstances.] Briefly speaking, for a set of N snapshots of the fluctuation components, \((u',v')\), an auto-covariance matrix M is constructed; solving the standard eigenvalue problem produces N eigenvalues \(\lambda _{i}\) and N eigenvectors \(A_i\): $$\begin{aligned} MA _{i}=\left( {\hat{U}}^T{\hat{U}}\right) A_{i}=\lambda _{i}A_{i}, \end{aligned}$$ where \({\hat{U}}=[u'_1\ldots u'_N,v'_1\ldots v'_N]\), combining both velocity components. The associated POD mode \(\varPhi _{i}\) can be calculated as: $$\begin{aligned} \varPhi _i=\frac{\sum _{n=1}^NA_{i,n}(u'_n,v'_n)}{\Vert \sum _{n=1}^NA_{i,n}(u'_n,v'_n) \Vert }, \; i=1,2,\ldots N.
\end{aligned}$$ The eigenvalue \(\lambda _{i}\) reflects the contribution of mode \(\varPhi _{i}\) to the total fluctuating energy of the flow. The instantaneous velocity field can then be represented as a sum of orthogonal modal contributions: $$\begin{aligned} (u,v)=(U,V) + \sum _{i=1}^N a_{i}\varPhi _{i}, \end{aligned}$$ where \(a_{i}\) is the coefficient obtained by projecting the instantaneous velocity fields on the POD basis, that is, $$\begin{aligned} a_i=[\varPhi _1\;\varPhi _2\ldots \;\varPhi _N]^T(u'_i,v'_i). \end{aligned}$$ The ranking of the modal energy contribution, viz. the \(\lambda _i\) percentage, is given in Fig. 4a. It shows that \(\lambda _1\) and \(\lambda _2\), corresponding to \(\varPhi _1\) and \(\varPhi _2\), in total contribute 3.2%, 3.3% and 3.5% of the energy for the SRWT, DRWT(S) and DRWT(L) configurations, respectively, which are all similarly small. Higher modes contribute less than 1.5% each. This means that the wake behind all three turbine configurations is very turbulent, and the energy contained in the coherent structures is relatively small. The contributions of \(\lambda _1\)– \(\lambda _4\) for the two DRWTs are similar, and larger than those of the SRWT. From \(\lambda _5\) onwards, all the configurations become very similar in \(\lambda _i\). The SRWT gains a small fraction back at higher modes \(i\gtrsim 25\), which are unimportant due to their incoherence.
Fig. 4 POD analysis of the DRWT(S) dataset. a Percentage of the POD mode energy to the total energy. Mode zero, viz. the energy of the mean flow fields, is excluded. Legends follow Fig. 3, b projection of the normalised coefficients of the first two modes \(a_1/\sqrt{2\lambda _1}\) and \(a_2/\sqrt{2\lambda _2}\) from each snapshot on to the polar coordinates.
The red lines mark the \(\pm\,10^{\circ }\) bin size for phase averaging, c the vorticity \(\omega _M\) contours of the first two modes \(\varPhi _1\) and \(\varPhi _2\), overlaid with the corresponding modal velocity vectors, d phase-averaged vorticity contour from the snapshots falling inside the bin in ( b), e the corresponding phase-averaged swirling strength \(\lambda _{ci}\)
Figure 4b illustrates the projection of the (normalised) \(a_1\) and \(a_2\) on to the polar coordinates, with the associated modes \(\varPhi _1\) and \(\varPhi _2\) presented in (c), in terms of vorticity derived from the modal velocity. It is evident from (c) that the first two modes reflect a periodic vortex shedding pattern, and this is confirmed by the rather homogeneous angular distribution in (b). This suggests that \(a_{1}=\sqrt{2\lambda _{1}}\sin (\phi )\) and \(a_{2}=\sqrt{2\lambda _{2}}\cos (\phi )\), with \(\phi\) representing the vortex shedding phase angle (Oudheusden et al. 2005 ). Phase averaging can then be done by defining a sample bin size, in this study \(\pm\,10^{\circ }\), to ensure a sample size of \(55\pm 5\) in each bin (phase). An example of a bin centred at an arbitrary phase is shown in (b). At this phase, the averaged vorticity field ( \(\omega\)), using the raw instantaneous velocity ( u, v), is shown in (d) and the contour of the swirling strength \(\lambda _{ci}\) is shown in (e). \(\lambda _{ci}\) is the imaginary part of the complex eigenvalue of the (phase-averaged) velocity gradient tensor, which provides a measure of the swirl strength, allowing shear layers to be excluded from vortex detection (Zhou et al. 1999 ). The coherent vortices shed from the main rotor tips are clearly shown in Fig. 4d after phase averaging. Also seen is an area of vorticity originating from the small auxiliary rotor. This vorticity, in the form of a shear layer, fails to form any coherent vortex packets due to the interference of the main rotor downstream.
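The snapshot POD and phase-binning procedure described above can be sketched as follows. The snapshots here are synthetic, with an embedded periodic pattern of known phase; the variable names and scalings are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Snapshot POD (Berkooz et al. 1993) on synthetic data; the phase is then
# recovered from the first two coefficients, as in Oudheusden et al. (2005),
# and binned for phase averaging.
rng = np.random.default_rng(1)
n_snap, n_pts = 200, 500
phase = rng.uniform(0, 2 * np.pi, n_snap)               # hidden shedding phase
p1, p2 = rng.standard_normal((2, 2 * n_pts))            # two spatial patterns
U_hat = (np.sin(phase)[:, None] * p1 + np.cos(phase)[:, None] * p2
         + 0.05 * rng.standard_normal((n_snap, 2 * n_pts)))  # rows: stacked (u', v')

# Auto-covariance matrix and its eigen-decomposition (snapshot method)
lam, A = np.linalg.eigh(U_hat @ U_hat.T)
order = np.argsort(lam)[::-1]                           # rank modes by energy
lam, A = lam[order], A[:, order]

Phi = A.T @ U_hat                                       # modes: weighted snapshot sums
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)       # normalise each mode
a = U_hat @ Phi.T                                       # coefficients by projection

# Phase angle from the first two (normalised) coefficients
phi = np.arctan2(a[:, 0] / np.sqrt(2 * lam[0]), a[:, 1] / np.sqrt(2 * lam[1]))

# Phase average over a +/- 10 degree bin centred at an arbitrary phase
in_bin = np.abs(np.angle(np.exp(1j * (phi - np.pi / 4)))) < np.deg2rad(10)
phase_avg = U_hat[in_bin].mean(axis=0)
```

The recovered phase is defined up to a constant offset, which is irrelevant for phase averaging since only snapshots of the same relative phase need to be grouped.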
At the same tip speed ratio \(\varLambda\), the auxiliary rotor and the main rotor rotate at different rates, and therefore have different vortex shedding frequencies captured in the FOV. It is confirmed in (e) that no strong swirl is observed in this area, while clear swirling vortex cores can be seen downstream of the main rotor. POD analysis of the sub-region excluding the main rotor vortices also does not show evidence of periodic shedding from the auxiliary rotor (figure not shown). Figure 5 shows three successive wake phases from \(\phi =\pi /4\) behind all three configurations. As the phase angle increases, the tip vortices shed from one main rotor blade can be seen to align sequentially with the vortices shed from the other two blades of the same rotor. The distance between neighbouring vortices is found to be fairly constant among the three configurations and over the entire FOV, at about 0.15 D. Since the vortices are convected downstream by the local velocity and the rotation rate of the main rotor remains constant, this suggests that the auxiliary rotor does not impact the local mean velocity in the wake, in agreement with Fig. 2.
Fig. 5 Phase-averaged vorticity contour for three successive phase angles behind all three turbine configurations. From left to right: SRWT, DRWT(S), DRWT(L)
The trace of the turbine root vortices is also observable in the wake behind the SRWT, but not in a coherent pattern in-phase with the tip vortices. This might be because of the particular blade shape used, which inhibits coherent vortex shedding in the root part. In comparison, the influence of the smaller auxiliary rotor is clearly seen in the wake of DRWT(S), where stronger vorticity is seen, as highlighted by the blue box. This shear layer is also reflected in the U profile at the end of the FOV in Fig. 2a, where a 'step' is seen for \(0.4\lesssim |y/D|\lesssim 0.5\).
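The 2D swirling-strength criterion used for the vortex detection above (Zhou et al. 1999) can be sketched as follows; the function and the Gaussian test vortex are our illustrative assumptions, not the study's code:

```python
import numpy as np

# lambda_ci: imaginary part of the complex eigenvalues of the in-plane velocity
# gradient tensor [[du/dx, du/dy], [dv/dx, dv/dy]]; non-zero only where the
# local flow swirls, so shear layers are excluded from vortex detection.
def swirling_strength(U, V, dx, dy):
    dudy, dudx = np.gradient(U, dy, dx)    # axis 0 is y, axis 1 is x
    dvdy, dvdx = np.gradient(V, dy, dx)
    trace = dudx + dvdy
    det = dudx * dvdy - dudy * dvdx
    disc = (trace / 2.0) ** 2 - det        # eigenvalues are complex when disc < 0
    return np.sqrt(np.maximum(-disc, 0.0))

# Gaussian test vortex: swirl should be detected at the core only
y, x = np.mgrid[-1:1:101j, -1:1:101j]
r2 = x ** 2 + y ** 2
U, V = -y * np.exp(-r2 / 0.1), x * np.exp(-r2 / 0.1)
lam_ci = swirling_strength(U, V, x[0, 1] - x[0, 0], y[1, 0] - y[0, 0])
print(lam_ci.max())   # > 0 at the vortex core
```

In practice, this would be applied to the phase-averaged velocity fields, and the vortex centre taken as the \(\lambda_{ci}\)-weighted centroid as done in Sect. 3.2.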
This shear layer has the same sense of direction as the main rotor tip vortices in the x– y plane, but should have an opposite swirl direction to the main rotor helical wake due to the counter-rotating auxiliary rotor. This could be beneficial as it counteracts the main rotor tip wake swirl. The strength of this shear layer is appreciably lower in DRWT(L). This is because the vortices shed by the larger auxiliary rotor are entrained into the main rotor vortices, attributed to their closer radial distance. This is also the reason for the stronger vortices (in both size and \(\omega\) magnitude) behind DRWT(L). The vortex interaction also tends to distort the shape of the vortices for \(x>0.7D\). The negative \(\omega\) is negligible and hence not included in Fig. 5. Note that although the auxiliary rotor rotates in the opposite direction to the main rotor, because of its mirrored blade geometry, the two rotors shed vortices in the same sense in the x– y plane. It is by now clear that the main rotor tip vortices still play a dominant role, acting as a barrier preventing wake re-energisation. Their evolution is further analysed next. The vortex centre trajectory is shown in Fig. 6a. The vortex centre is found by the \(\lambda _{ci}\) weighted centroid; see Fig. 4e. The trajectories behind all three configurations are very similar, with a very small rate of expansion, weakly increasing from SRWT and DRWT(S) to DRWT(L), under the influence of the auxiliary rotor vortices.
Fig. 6 a The trajectory of the main tip vortex centre, b dependence of the main rotor tip vortex circulation \(\varGamma\) on the streamwise distance. Legends follow Fig. 3, c dependence of the vorticity at vortex centroids on the streamwise distance.
The solid lines are the least squares exponential fit to the data points
Figure 6b displays the decay of the circulation \(\varGamma\) of the tip vortex packets, where \(\varGamma =\int _S\omega \;{\mathrm {d}}s\), with S denoting the vortex packet area based on a universal threshold \(\omega L/U_{\infty }=0.4\), about \(6\%\) of the peak vorticity value in Fig. 5. The \(\varGamma\) decay can be well-described by an exponential function \(\varGamma =\varGamma _0 \exp \left[ -\alpha _{\varGamma } (x/D)\right]\). At the main rotor tips \(x=0\), \(\varGamma _0/LU_{\infty }=0.11, 0.087\) and 0.141 for SRWT, DRWT(S) and DRWT(L), respectively. Consistent with Fig. 5, DRWT(L) has the strongest vortices in the wake, due to the entrained auxiliary rotor vortices and also the weaker vorticity connected with the auxiliary vortex shear region. Interestingly, DRWT(S) has the vortices of the lowest \(\varGamma\), even lower than the SRWT. Close examination of Fig. 5 suggests that the influence of the smaller auxiliary rotor is to take the background vorticity near the main rotor vortices away from them and deliver it to the root vortex area. The \(\varGamma\) decay rates are found to be similar among the three, with \(\alpha _{\varGamma }=0.20, 0.23\) and 0.24, respectively; in particular, for the two DRWTs the decay rates are nearly identical. This suggests that the size of the auxiliary rotor does not have a large impact on the tip vortex decay rate. However, compared to the SRWT, incorporation of the auxiliary rotors does increase it, very weakly, due to the vortex interaction. Similarly, the peak vorticity at the vortex centroids also displays an exponential decay, described by \(\omega _p=\omega _0\exp \left[ -\alpha _{\omega } (x/D)\right]\); see Fig. 6c. The decay rate \(\alpha _{\omega }=0.60, 0.51\) and 1.33 for SRWT, DRWT(S) and DRWT(L), respectively. It is clear that DRWT(L) sees the most rapid \(\omega _p\) decay, while that for SRWT and DRWT(S) is similar.
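The least squares exponential fits reported above reduce to a straight-line fit in log space. The sketch below uses synthetic, noise-free data mimicking the quoted SRWT coefficients rather than the actual measurements:

```python
import numpy as np

# Fit Gamma = Gamma_0 * exp(-alpha * x/D) via linear least squares on ln(Gamma):
# ln(Gamma) = ln(Gamma_0) - alpha * (x/D) is a straight line in log space.
def fit_exponential(x_over_D, gamma):
    slope, intercept = np.polyfit(x_over_D, np.log(gamma), 1)
    return np.exp(intercept), -slope       # (Gamma_0, alpha)

x = np.linspace(0.1, 0.9, 20)              # streamwise positions x/D
gamma = 0.11 * np.exp(-0.20 * x)           # synthetic "SRWT-like" decay
g0, alpha = fit_exponential(x, gamma)
print(round(g0, 4), round(alpha, 4))       # 0.11 0.2
```

The same fit applied to the peak vorticity \(\omega_p\) gives \(\alpha_{\omega}\), from which the vortex size growth rate \(\alpha_r=(\alpha_{\omega}-\alpha_{\varGamma})/2\) follows under the Gaussian-profile assumption discussed next.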
If the \(\omega\) profile of the vortices is assumed to be close to Gaussian, it is possible to deduce the x dependence of the characteristic vortex size r by combining the \(\varGamma\) and \(\omega _p\) behaviour. That is, \(r\sim \exp \left( \alpha _r x\right)\), where \(\alpha _r=(\alpha _{\omega }-\alpha _{\varGamma })/2=0.2, 0.14\) and 0.55. This means that r gradually increases due to vorticity diffusion, and this rate is the fastest for DRWT(L). At \(x=0\), extrapolation of the exponential relations finds \(\omega _{0}L/U_{\infty }=5.6, 6.1\) and 9.0 for the three configurations, respectively.
3.3 Turbulence kinetic energy
Finally, we take a look at the fluctuating velocities. Without knowing the out-of-plane velocity component w, TKE in this study is defined as $$\begin{aligned} {\mathrm {TKE}}=\frac{1}{2}\left(\overline{u^{'}u^{'}} + \overline{v^{'}v^{'}}\right), \end{aligned}$$ where \(\overline{u^{'}u^{'}}\) and \(\overline{v^{'}v^{'}}\) are the normal stresses in the x and y directions, respectively. Figure 7 depicts the TKE contour for all three turbine configurations. Consistent with the finding that the wake width is mainly dependent on the vortex core trajectory, and is therefore very similar behind the three turbines, the TKE distribution patterns are also similar. Very close to the main rotor surface, the TKE intensity behind the SRWT appears higher than behind the two DRWTs, where part of the free stream wind energy is absorbed by the auxiliary rotors. In DRWT(S), higher TKE intensity can vaguely be seen just above and below the vortex core trajectory, while in DRWT(L) higher TKE can only be seen below. This is in line with the visualisation shown in Fig. 5, where the auxiliary rotor vortices are entrained into the main rotor vortices behind DRWT(L).
Fig. 7 Contour of TKE overlaid with the vortex core trajectory taken from Fig. 6a.
a SRWT, b DRWT(S) and c DRWT(L)
Figure 8 demonstrates the change of the fluctuating velocity root mean square, \(u(\mathrm{rms})\), along the vortex centroid trajectory, where \(u(\mathrm{rms})=\sqrt{\mathrm {TKE}}\). Up to the end of the FOV, the magnitude and the decay rate of \(u(\mathrm{rms})\) are found to be similar among all three configurations. For \(x>0.5D\), \(u(\mathrm{rms})\) is the highest for DRWT(L) and lowest for DRWT(S). This is in agreement with the evolution of the \(\varGamma\) magnitude shown in Fig. 6b. The fluctuating velocity \((u',v')\) is obtained by subtracting the time mean from the instantaneous velocities, and hence consists of both a coherent mean (for periodic flows) and random turbulence. High random turbulence intensity contributes to the dissipation of the helical wake and consequent wake re-energisation, making it, in appropriate regions, a desirable quantity for this application. The phase-averaged vortices discussed in Sect. 3.2 are the coherent mean, which contributes significantly to the fluctuating velocities. The \(u(\mathrm{rms})\) intensity around the vortex cores is thus a manifestation of the circulation \(\varGamma\) of the vortex packets. The similar \(u(\mathrm{rms})\) decay rates for \(x>0.5D\) cannot be fitted with a simple exponential function; however, they are consistent with the decay rate of \(\varGamma\) in Fig. 6b.
Fig. 8 Fluctuating velocity rms along the vortex core trajectory. Solid lines are the least squares fits. Legends follow Fig. 3
4 Conclusions
The near wake velocity field behind three turbine configurations was experimentally studied in order to evaluate the impact of an additional counter-rotating auxiliary rotor on the conventional single-rotor wind turbine wake. The two auxiliary rotors were of 80% and 50% scale of the main rotor, installed upwind of the main rotor, aiming to capture the energy lost in the root part of the latter. All the rotors were tested at a constant tip speed ratio of 3.46.
We focused on the wake region within one main rotor diameter behind the turbines. Characteristics of the mean velocity field, phase-averaged vortices and TKE were studied using 2D PIV. The following conclusions may be established:
Incorporating auxiliary rotors induces a greater mean velocity deficit, with DRWT(L) marginally larger than DRWT(S), meaning that more wind energy was utilised with an auxiliary rotor.
The size of the auxiliary rotor does not impact very much on the width of the wake, which is primarily determined by the trajectory of the vortices shed by the main rotor installed at the downwind side, in agreement with Ozbay ( 2014 ).
The vortices shed by the 50% scale auxiliary rotor leave a shear layer behind, without coherent structures surviving the main rotor interference. Those shed by the 80% scale rotor are entrained into the main rotor vortices. The DRWT(L) has the strongest main rotor tip vortices because of the entrainment of the auxiliary rotor vortices. Although it also experiences, very obviously, the most rapid decay of vorticity strength at the vortex core centroids, the decay rate of its vortex circulation is only marginally larger than the other two configurations. The decay of peak vorticity and vortex circulation is found to be exponential for all three configurations.
In line with the circulation behaviour, DRWT(L) sees the strongest TKE intensity at the vortex core trajectory, if only slightly, but the decay rate is similar to the other two configurations.
Overall, the two DRWTs do not show a significant difference in near wake characteristics. DRWT(S), with the 50% scaled auxiliary rotor, seems to work better owing to its weaker vortex circulation and TKE along the vortex trajectory. Although it results in the strongest shear layer behind the mid-span of the main rotor, this wake is believed to be beneficial because of its opposite swirl direction (counter-rotating), which tends to counteract the main rotor wake.
This is largely consistent with the findings of Jung et al. ( 2005 ), who found that an auxiliary rotor 40–50% the size of the main rotor sees the best performance in the context of power output, when compared to rotors of other sizes.
The authors would like to thank Mr. Lincoln Greatrick and Mr. Robbie Grout for their earlier contribution to the work, and the UK EPSRC (EP/P004377/1) for the financial support.
References
Bartl J (2011) Wake measurements behind an array of two model wind turbines. Master's thesis, Norwegian University of Science and Technology, Department of Energy and Process Engineering
Berkooz G, Holmes P, Lumley L (1993) The proper orthogonal decomposition in the analysis of turbulent flows. Annu Rev Fluid Mech 25:539
Chamorro L, Arndt R, Sotiropoulos F (2012) Reynolds number dependence of turbulence statistics in the wake of wind turbines. Wind Energy 15(5):733
Fijimoto S, Rinoshika A (2017) Multi-scale analysis on wake structures of asymmetric cylinders with different aspect ratios. J Vis 20(3):519–533
Global Wind Energy Council (2017) Global Wind Report 2017
Gomez-Elviraa R, Crespob A, Migoyab E, Manuelb F, Hernandezc J (2005) Anisotropy of turbulence in wind turbine wakes. J Wind Eng Ind Aerodyn 93:797
He C, Liu Y (2017) Proper orthogonal decomposition of time-resolved LIF visualisation: scalar mixing in a round jet. J Vis 20(4):789–815
Herzog R, Schaffarczyk A, Wacinski A, Zurcher O (2010) In: European wind energy conference EWEC
Jung S, No T, Ryu K (2005) Aerodynamic performance prediction of a 30 kW counter-rotating wind turbine system. Renew Energy 30(5):631
Lee S, Hogeon K, Soogab L, Son E (2012) Effects of design parameters on aerodynamic performance of a counter-rotating wind turbine. Renew Energy 42:140
Oudheusden B, Scarano F, Hinsberg N, Watt D (2005) Phase-resolved characterization of vortex shedding in the near wake of a square-section cylinder at incidence. Exp Fluids 39:86
Ozbay A (2014) An experimental investigation on wind turbine aeromechanics and wake interferences among multiple wind turbines. Master's thesis, Iowa State University
Ragheb M, Ragheb A (2011) Wind turbines theory—the Betz equation and optimal rotor tip speed ratio. In: Fundamental and advanced topics in wind power. Intech. https://doi.org/10.5772/21398
Rosenberg A, Selvaraj S, Sharma A (2014) A novel dual-rotor turbine for increased wind energy capture. J Phys Conf Ser 524:012078
Sharma A, Frere A (2010) Diagnosis of aerodynamic losses in the root region of a horizontal axis wind turbine. Technical report, General Electric Global Research Center Internal Report
Sherry M, Sheridan J, Jacono D (2013) Characterisation of a horizontal axis wind turbine tip and root vortices. Technical report 2. Springer, Berlin
Siddiqui M, Rasheed A, Kvamsdal T, Tabib M (2017) Influence of tip speed ratio on wake flow characteristics utilizing fully resolved CFD methodology. J Phys 854:012043
Wang Z, Ozbay A, Tian W, Hu H (2018) An experimental study on the aerodynamic performances and wake characteristics of an innovative dual-rotor wind turbine. Energy 15(147):94
Wang Q, Gan L, Xu S, Zhou Y (2020) Vortex evolution in the near wake behind polygonal cylinders. Exp Therm Fluid Sci 110:109940
Yurdusev M, Ata R, Cetin N (2006) Assessment of optimum tip speed ratio in wind turbines using artificial neural networks. Energy 31(12):2153
Energy 31(12):2153 CrossRef go back to reference Zhou J, Adrian R, Balachandar S, Kendall T (1999) Mechanisms for generating coherent packets of hairpin vortices in channel flow. J Fluid Mech 387:353 MathSciNetCrossRef Zhou J, Adrian R, Balachandar S, Kendall T (1999) Mechanisms for generating coherent packets of hairpin vortices in channel flow. J Fluid Mech 387:353 MathSciNetCrossRef Eloise O. Hollands Chuangxin He Lian Gan Journal of Visualization Electronic ISSN: 1875-8975 Other articles of this Issue 3/2020 Go to the issue Regular Paper On the application of non-standard rainbow schlieren technique upon supersonic jets Visual interactive exploration and clustering of brain fiber tracts VEGA: visual comparison of phylogenetic trees for evolutionary genome analysis (ChinaVis 2019) Visualization of dispersed phase in the carrier phase with lattice Boltzmann method through high- and low-resolution observations Experimental study on fluid selection for a stable Taylor cone formation via micro-PIV measurement Novel jet impingement atomization by synchronizing the sweeping motion of the fluidic oscillators Neuer Inhalt/© ITandMEDIA
Number - Proportion

a - answer, s - solution, v - video, d - discussion

Write a proportional statement with and without a constant of proportionality for the following

a) $y$ is directly proportional to $x$
a $y\propto x$, $y=kx$
Question ID: 10070010010. Question by: ada. Answer by: ada.
b) $m$ is directly proportional to $n$
a $m\propto n$, $m=kn$
c) $a$ is directly proportional to the square of $b$
a $a\propto b^2$, $a=kb^2$
d) $p$ is directly proportional to the cube of $q$
a $p\propto q^3$, $p=kq^3$
e) $u$ is directly proportional to the root of $v$
a $u\propto \sqrt v$, $u=k\sqrt v$
f) $g$ is directly proportional to the cube root of $h$
a $g\propto \sqrt[3] h$, $g=k\sqrt[3] h$
g) $c$ is directly proportional to the $5$th power of $d$
a $c\propto d^5$, $c=kd^5$
h) $s$ is directly proportional to the $7$th root of $t$
a $s\propto \sqrt[7] t$, $s=k\sqrt[7] t$

By finding the constant of proportionality, form an equation linking the variables

a) $y$ is directly proportional to $x$. When $y=10, x=5$.
a $y=2x$
$\begin{align}y&\propto x\\y&=kx\\\text{when }y=10,\,x=5\,\,\,\,\,\,\,10&=k\times5\\\frac{10}{5}&=k\\k&=2\\\text{therefore }y&=2x\end{align}$
Solution by: ada
b) $a$ is directly proportional to $b$. When $a=20, b=4$.
a $a=5b$
$\begin{align}a&\propto b\\a&=kb\\\text{when }a=20,\,b=4\,\,\,\,\,\,\,20&=k\times4\\\frac{20}{4}&=k\\k&=5\\\text{therefore }a&=5b\end{align}$
c) $y$ is directly proportional to the square of $x$. When $y=2, x=1$.
a $y=2x^2$
$\begin{align}y&\propto x^2\\y&=kx^2\\\text{when }y=2,\,x=1\,\,\,\,\,\,\,2&=k\times1^2\\k&=2\\\text{therefore }y&=2x^2\end{align}$
d) $f$ is directly proportional to the square of $g$. When $f=45, g=3$.
a $f=5g^2$
$\begin{align}f&\propto g^2\\f&=kg^2\\\text{when }f=45,\,g=3\,\,\,\,\,\,\,45&=k\times3^2\\45&=k\times9\\\frac{45}{9}&=k\\k&=5\\\text{therefore }f&=5g^2\end{align}$
e) $y$ is directly proportional to the cube of $x$. When $y=24, x=2$.
a $y=3x^3$
f) $m$ is directly proportional to the cube of $n$. When $m=3, n=3$.
a $m=\frac19n^3$
$\begin{align}m&\propto n^3\\m&=kn^3\\\text{when }m=3,\,n=3\,\,\,\,\,\,\,3&=k\times3^3\\3&=k\times27\\\frac{3}{27}&=k\\k&=\frac19\\\text{therefore }m&=\frac19n^3\end{align}$
g) $y$ is directly proportional to the square of $x$. When $y=2, x=4$.
a $y=\frac18x^2$
$\begin{align}y&\propto x^2\\y&=kx^2\\\text{when }y=2,\,x=4\,\,\,\,\,\,\,2&=k\times4^2\\2&=k\times16\\\frac{2}{16}&=k\\k&=\frac18\\\text{therefore }y&=\frac18x^2\end{align}$
h) $y$ is directly proportional to the square root of $x$. When $y=10, x=4$.
a $y=5\sqrt x$
$\begin{align}y&\propto\sqrt x\\y&=k\sqrt x\\\text{when }y=10,\,x=4\,\,\,\,\,\,\,10&=k\times\sqrt4\\10&=k\times2\\\frac{10}{2}&=k\\k&=5\\\text{therefore }y&=5\sqrt x\end{align}$
i) $h$ is directly proportional to the square root of $g$. When $h=5, g=25$.
a $h=\sqrt g$
$\begin{align}h&\propto\sqrt g\\h&=k\sqrt g\\5&=k\sqrt{25}\\k&=1\\h&=\sqrt g\end{align}$
Solution by: Aaron
j) $c$ is directly proportional to the cube root of $d$. When $c=2, d=64$.
a $c=\frac12\sqrt[3] d$
k) $w$ is directly proportional to the $5$th root of $z$. When $w=3, z=32$.
a $w=\frac32\sqrt[5] z$

By finding the constant of proportionality, answer the following questions

a) $y$ is directly proportional to $x$. When $y=6, x=2$. Find $y$ when $x=10$
a $y=30$
$\begin{align}y&\propto x\\y&=kx\\\text{when }y=6,\,x=2\,\,\,\,\,\,\,6&=k\times2\\\frac{6}{2}&=k\\k&=3\\\text{therefore }y&=3x\\x=10\,\,\,\,\,\,\,y&=3\times10\\y&=30\end{align}$
b) $y$ is directly proportional to $x$. When $y=2, x=20$. Find $y$ when $x=150$
a $y=15$
c) $a$ is directly proportional to $b$. When $a=3, b=8$.
Find $b$ when $a=5$
a $b=\frac{40}{3}$ or $b=13\frac13$
$\begin{align}a&\propto b\\a&=kb\\\text{when }a=3,\,b=8\,\,\,\,\,\,\,3&=k\times8\\k&=\frac38\\\text{therefore }a&=\frac38b\\a=5\,\,\,\,\,\,\,5&=\frac38\times b\\5\div\frac38&=b\\5\times\frac83&=b\\b&=\frac{40}{3}\end{align}$
d) $y$ is directly proportional to the square of $x$. When $y=16, x=4$. Find $y$ when $x=9$
a $y=81$
e) $r$ is directly proportional to the square of $s$. When $r=15, s=5$. Find $r$ when $s=15$
a $r=135$
f) $p$ is directly proportional to the cube of $q$. When $p=9, q=3$. Find $q$ when $p=243$
a $q=9$
$\begin{align}p&\propto q^3\\p&=kq^3\\\text{when }p=9,\,q=3\,\,\,\,\,\,\,9&=k\times3^3\\9&=k\times27\\\frac{9}{27}&=k\\k&=\frac13\\\text{therefore }p&=\frac13q^3\\p=243\,\,\,\,\,\,\,243&=\frac13\times q^3\\243\times3&=q^3\\729&=q^3\\\sqrt[3]{729}&=q\\q&=9\end{align}$
g) $y$ is directly proportional to the square root of $x$. When $y=18, x=36$. Find $y$ when $x=64$
a $y=24$
h) $y$ is directly proportional to the cube root of $x$. When $y=1, x=27$. Find $y$ when $x=64$
a $y=\frac43$ or $y=1\frac13$

a) $y$ is inversely proportional to $x$
a $y\propto \frac1x$, $y=\frac kx$
b) $P$ is inversely proportional to $Q$
a $P\propto \frac1Q$, $P=\frac kQ$
c) $y$ is inversely proportional to the square of $x$
a $y\propto \frac{1}{x^2}$, $y=\frac {k}{x^2}$
d) $a$ is inversely proportional to the cube of $b$
a $a\propto \frac{1}{b^3}$, $a=\frac {k}{b^3}$
e) $h$ is inversely proportional to the square root of $g$
a $h\propto \frac{1}{\sqrt g}$, $h=\frac {k}{\sqrt g}$
f) $E$ is inversely proportional to the cube root of $F$
a $E\propto \frac{1}{\sqrt[3] F}$, $E=\frac {k}{\sqrt[3] F}$
g) $y$ is inversely proportional to the fourth power of $x$
a $y\propto \frac{1}{x^4}$, $y=\frac {k}{x^4}$
h) $s$ is inversely proportional to the ninth root of $t$
a $s\propto \frac{1}{\sqrt[9] t}$, $s=\frac {k}{\sqrt[9] t}$

a) $y$ is inversely proportional to $x$. When $y=\frac{2}{5}, x=5$.
a $y=\frac2x$
$\begin{align}y&\propto \frac{1}{x}\\y&=\frac{k}{x}\\\text{when }y=\frac25,\,x=5\,\,\,\,\,\,\,\frac25&=\frac k5\\\frac{2}{5}\times5&=k\\k&=2\\\text{therefore }y&=\frac2x\end{align}$
b) $A$ is inversely proportional to $B$. When $A=10, B=\frac12$.
a $A=\frac5B$
$\begin{align}A&\propto \frac{1}{B}\\A&=\frac{k}{B}\\\text{when }A=10,\,B=\frac12\,\,\,\,\,\,\,10&=\frac {k}{\frac12}\\10\times\frac12&=k\\k&=5\\\text{therefore }A&=\frac5B\end{align}$
c) $g$ is inversely proportional to the square of $h$. When $g=3, h=2$.
a $g=\frac{12}{h^2}$
$\begin{align}g&\propto \frac{1}{h^2}\\g&=\frac{k}{h^2}\\\text{when }g=3,\,h=2\,\,\,\,\,\,\,3&=\frac {k}{2^2}\\3\times2^2&=k\\3\times4&=k\\k&=12\\\text{therefore }g&=\frac{12}{h^2}\end{align}$
d) $y$ is inversely proportional to the square of $x$. When $y=18, x=\frac13$.
a $y=\frac{2}{x^2}$
e) $P$ is inversely proportional to the cube of $Q$. When $P=\frac34, Q=2$.
a $P=\frac{6}{Q^3}$
f) $S$ is inversely proportional to the square root of $T$. When $S=5, T=16$.
a $S=\frac{20}{\sqrt T}$
g) $u$ is inversely proportional to the cube root of $w$. When $u=1, w=27$.
a $u=\frac{3}{\sqrt[3] w}$
h) $Y$ is inversely proportional to the sixth root of $X$. When $Y=\frac{1}{6}, X=64$.
a $Y=\frac{\frac13}{\sqrt[6] X}$ or $Y=\frac{1}{3\sqrt[6]X}$

a) $y$ is inversely proportional to $x$. When $y=4, x=3$. Find $y$ when $x=4$
a $y=3$
$\begin{align}y&\propto \frac1x\\y&=\frac kx\\\text{when }y=4,\,x=3\,\,\,\,\,\,\,4&=\frac k3\\4\times3&=k\\k&=12\\\text{therefore }y&=\frac{12}{x}\\x=4\,\,\,\,\,\,\,y&=\frac{12}{4}\\y&=3\end{align}$
b) $C$ is inversely proportional to $D$. When $C=\frac12, D=4$. Find $C$ when $D=7$
a $C=\frac27$
$\begin{align}C&\propto \frac1D\\C&=\frac kD\\\text{when }C=\frac12,\,D=4\,\,\,\,\,\,\,\frac12&=\frac k4\\\frac12\times4&=k\\k&=2\\\text{therefore }C&=\frac{2}{D}\\D=7,\,\,\,\,\,\,\,C&=\frac{2}{7}\end{align}$
c) $y$ is inversely proportional to the square of $x$. When $y=3, x=4$.
Find $y$ when $x=2$
a $y=12$
$\begin{align}y&\propto \frac{1}{x^2}\\y&=\frac{k}{x^2}\\\text{when }y=3,\,x=4\,\,\,\,\,\,\,3&=\frac {k}{4^2}\\3\times4^2&=k\\3\times16&=k\\k&=48\\\text{therefore }y&=\frac{48}{x^2}\\x=2,\,\,\,\,\,\,\,y&=\frac{48}{2^2}\\y&=\frac{48}{4}\\y&=12\end{align}$
d) $a$ is inversely proportional to the cube of $b$. When $a=6, b=1$. Find $a$ when $b=2$
a $a=\frac34$
e) $g$ is inversely proportional to the square root of $h$. When $g=4, h=49$. Find $g$ when $h=16$
a $g=7$
f) $R$ is inversely proportional to the cube root of $S$. When $R=1, S=8$. Find $R$ when $S=27$
a $R=\frac23$
g) $f$ is inversely proportional to the cube of $g$. When $f=\frac12, g=2$. Find $g$ when $f=32$
a $g=\frac12$
$\begin{align}f&\propto \frac{1}{g^3}\\f&=\frac{k}{g^3}\\\text{when }f=\frac12,\,g=2\,\,\,\,\,\,\,\frac12&=\frac {k}{2^3}\\\frac12\times2^3&=k\\\frac12\times8&=k\\k&=4\\\text{therefore }f&=\frac{4}{g^3}\\f=32,\,\,\,\,\,\,\,32&=\frac{4}{g^3}\\32\times g^3&=4\\g^3&=\frac{4}{32}\\g^3&=\frac18\\g&=\sqrt[3]{\frac18}\\g&=\frac12\end{align}$
h) $W$ is inversely proportional to the square root of $X$. When $W=\sqrt8, X=2$. Find $W$ when $X=4$
a $W=2$

By forming an equation with a constant of proportionality, answer the following

a) $£5$ can be exchanged for $\$7$. Find how much $£20$ can buy in $\$$.
a $\$28$
Let P be the amount of £ and let D be the amount of $\$$. $£$ are directly proportional to $\$$.
$\begin{align}P&\propto D\\P&=kD\\\text{when }P=5,\,D=7\,\,\,\,\,\,\,5&=k\times7\\k&=\frac57\\\text{therefore }P&=\frac{5}{7}D\\P=20,\,\,\,\,\,\,\,20&=\frac{5}{7}D\\20\div\frac57&=D\\20\times\frac75&=D\\D&=28\end{align}$
So $£20$ can buy $\$28$.
b) $5$ people can dig $8$ holes in an hour. They all work at the same rate. How many complete holes can $12$ people dig in an hour?
a $19$ complete holes (rounded from $19.2$ holes)
c) The $y$ coordinate on a graph is directly proportional to the square of the $x$ coordinate. The point $(2,20)$ lies on the graph.
Find the equation of the graph and hence find the $y$ coordinate when $x=5$.
a $y=5x^2$ and $y=125$
Let $y$ be the $y$ coordinate and let $x$ be the $x$ coordinate.
$\begin{align}y&\propto x^2\\y&=kx^2\\\text{when }y=20,\,x=2\,\,\,\,\,\,\,20&=k\times2^2\\20\div2^2&=k\\20\div4&=k\\k&=5\\\text{therefore the equation of the graph is }\,\,\,y&=5x^2\\x=5,\,\,\,\,\,\,\,y&=5\times5^2\\y&=5\times25\\y&=125\end{align}$
So the $y$ coordinate is $125$ when $x=5$.
d) The amount of flour put into a bread mix is directly proportional to the cube root of the volume of the bread, once baked. When I put in $40$g of flour, I get a bread with volume $8000$cm$^3$. How much flour is needed to make a bread with volume $10,000$cm$^3$? Give your answer to the nearest gram.
a $43$g to the nearest gram.
Let $F$ be the amount of flour and let $V$ be the volume of bread, once baked.
$\begin{align}F&\propto \sqrt[3]V\\F&=k\sqrt[3]V\\\text{when }F=40,\,V=8000\,\,\,\,\,\,\,40&=k\times\sqrt[3]{8000}\\40&=k\times20\\40\div20&=k\\k&=2\\\text{therefore }\,\,\,F&=2\sqrt[3]V\\V=10,000\,,\,\,\,\,\,\,\,F&=2\times\sqrt[3]{10,000}\\F&=43.08869\dots\\F&=43\text{ to nearest gram}\end{align}$
So the amount of flour needed to make a bread with volume $10,000$cm$^3$ is $43$g to the nearest gram.
e) Gravitational force is inversely proportional to the square of the distance from the centre of the Earth. At $6000$km from the centre of the Earth (so on the Earth's surface) a person feels a gravitational force of $500$N (Newtons, which is a measure of force). How much gravitational force do they feel $90,000$km away?
a $G=2\frac29$N
Let $D$ be the distance to the centre of the Earth and let $G$ be the gravitational force.
$\begin{align}G&\propto \frac{1}{D^2}\\G&=\frac{k}{D^2}\\\text{when }D=6000,\,G=500\,\,\,\,\,\,\,500&=\frac{k}{6000^2}\\500\times6000^2&=k\\500\times36000000&=k\\k&=18000000000 \text{ or }1.8\times10^{10}\\\text{therefore }\,\,\,G&=\frac{1.8\times10^{10}}{D^2}\\D=90,000\,,\,\,\,\,\,\,\,G&=\frac{1.8\times10^{10}}{90,000^2}\\G&=2\frac29\end{align}$
So the gravitational force felt at $90,000$km is $2\frac29$N.
© ADA Maths 2019
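Every worked solution on this page follows the same two steps: substitute the known pair to find the constant of proportionality $k$, then use $k$ to evaluate the unknown. A minimal Python sketch of that procedure (the function names and the power parameter `n` are our own framing, not the worksheet's):

```python
def constant_of_proportionality(y, x, n=1):
    """Solve y = k * x**n for k from one known (x, y) pair."""
    return y / x ** n

def predict(k, x, n=1):
    """Evaluate y = k * x**n once k is known."""
    return k * x ** n

# y is directly proportional to x; y = 6 when x = 2, so k = 3 and y = 3x.
k = constant_of_proportionality(6, 2)
y_at_10 = predict(k, 10)  # 30, as in the worked answer

# f is directly proportional to the square of g; f = 45 when g = 3, so f = 5g^2.
k2 = constant_of_proportionality(45, 3, n=2)
```

Square and cube roots fit the same pattern with fractional powers, e.g. `n=1/2` for "directly proportional to the square root".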
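Inverse proportion works the same way with $y=k/x^n$, and the gravitational force question can be checked exactly with rational arithmetic. A short sketch using Python's `fractions` module (the helper name is ours, not the worksheet's):

```python
from fractions import Fraction

def inverse_constant(y, x, n=1):
    """Solve y = k / x**n for k from one known (x, y) pair."""
    return Fraction(y) * Fraction(x) ** n

# G = k / D^2 with G = 500 N at D = 6000 km from the Earth's centre.
k = inverse_constant(500, 6000, n=2)   # 1.8 x 10^10
G = k / Fraction(90_000) ** 2          # force at 90,000 km
print(G)  # 20/9, i.e. 2 2/9 N
```

Using `Fraction` rather than floats reproduces the exact answer $2\frac29$ N with no rounding.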
Networks & Heterogeneous Media, September 2019, 14(3): 471-487. doi: 10.3934/nhm.2019019

On the local and global existence of solutions to 1d transport equations with nonlocal velocity

Hantaek Bae (Department of Mathematical Sciences, Ulsan National Institute of Science and Technology (UNIST), Ulsan, Republic of Korea), Rafael Granero-Belinchón (Departamento de Matemáticas, Estadística y Computación, Universidad de Cantabria, Santander, Spain), and Omar Lazar (Departamento de Análisis Matemático & IMUS, Universidad de Sevilla, Sevilla, Spain)

Received June 2018; Revised February 2019; Published May 2019

We consider the 1D transport equation with nonlocal velocity field:
$$\begin{split} &\theta_t+u\theta_x+\nu \Lambda^{\gamma}\theta = 0, \\ & u = \mathcal{N}(\theta), \end{split}$$
where $\mathcal{N}$ is a nonlocal operator and $\Lambda^{\gamma}$ is a Fourier multiplier defined by $\widehat{\Lambda^{\gamma} f}(\xi) = |\xi|^{\gamma}\widehat{f}(\xi)$. In this paper, we show the existence of solutions of this model locally and globally in time for various types of nonlocal operators.

Keywords: Fluid equations, 1D models, global weak solution.
Mathematics Subject Classification: Primary: 35A01; Secondary: 35D30, 35Q35, 35Q86.
Citation: Hantaek Bae, Rafael Granero-Belinchón, Omar Lazar. On the local and global existence of solutions to 1d transport equations with nonlocal velocity. Networks & Heterogeneous Media, 2019, 14 (3) : 471-487. doi: 10.3934/nhm.2019019
Hybridized weak Galerkin finite element methods for Brinkman equations. Electronic Research Archive, , () : -. doi: 10.3934/era.2020126 2019 Impact Factor: 1.053 PDF downloads (131) Hantaek Bae Rafael Granero-Belinchón Omar Lazar
CommonCrawl
Rock, paper, scissors, lizard, Spock
By Gianluigi Filippelli on Saturday, February 28, 2015
http://t.co/a0l4FP6ftF Goodbye #LeonardNimoy, #Spock from #StarTrek by SciFiCat

One popular five-weapon expansion is "rock-paper-scissors-lizard-Spock", invented by Sam Kass and Karen Bryla, which adds "Spock" and "lizard" to the standard three choices. "Spock" is signified with the Star Trek Vulcan salute, while "lizard" is shown by forming the hand into a sock-puppet-like mouth. Spock smashes scissors and vaporizes rock; he is poisoned by lizard and disproven by paper. Lizard poisons Spock and eats paper; it is crushed by rock and decapitated by scissors. This variant was mentioned in a 2005 article in The Times and was later the subject of a 2008 episode of the American sitcom The Big Bang Theory.

The majority of such proposed generalizations are isomorphic to a simple game of modulo arithmetic, where half the differences are wins for player one. For instance, rock-paper-scissors-Spock-lizard (note the different order of the last two moves) may be modeled as a game in which each player picks a number from one to five. Subtract the number chosen by player two from the number chosen by player one, and then take the remainder modulo 5 of the result. Player one is the victor if the difference is one or three, and player two is the victor if the difference is two or four. If the difference is zero, the game is a tie. Alternatively, the rankings in rock-paper-scissors-Spock-lizard may be modeled by comparing the parity of the two choices: if it is the same (two odd-numbered moves or two even-numbered ones), the lower number wins, while if they are different (one odd and one even), the higher wins.
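As a sketch, the modulo-5 rule above fits in a few lines of Python (the 1-5 numbering follows the rock-paper-scissors-Spock-lizard order used in the post):

```python
# Rock-paper-scissors-Spock-lizard as modular arithmetic: number the moves
# 1..5 in that order; player one wins when (p1 - p2) mod 5 is 1 or 3,
# player two when it is 2 or 4, and 0 is a tie.
MOVES = {1: "rock", 2: "paper", 3: "scissors", 4: "Spock", 5: "lizard"}

def winner(p1, p2):
    d = (p1 - p2) % 5
    if d == 0:
        return "tie"
    return "player one" if d in (1, 3) else "player two"

print(winner(4, 3))  # Spock smashes scissors -> player one
print(winner(5, 4))  # lizard poisons Spock   -> player one
print(winner(1, 2))  # paper covers rock      -> player two
```

Checking a few pairings against the verbal rules (Spock smashes scissors, lizard poisons Spock, paper covers rock) confirms the numbering is consistent.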
Using this algorithm, additional moves can easily be added two at a time while keeping the game balanced:
Declare a move N+1 (where N is the original total of moves) that beats all existing odd-numbered moves and loses to the others (for example, the rock (#1), scissors (#3), and lizard (#5) could fall into the German well (#6), while the paper (#2) covers it and Spock (#4) manipulates it).
Declare another move N+2 with the reverse property (such as a plant (#7) that grows through the paper (#2), poisons Spock (#4), and grows through the well (#6), while being damaged by the rock (#1), scissors (#3), and lizard (#5)).
(via en.wiki)
Labels: big bang theory, leonard nimoy, mathematics, spock, star trek

By Gianluigi Filippelli on Sunday, February 22, 2015
#moon #astronomy #NASA #video #PinkFloyd
The first photo of the lunar far side, taken by the Soviet spacecraft Luna 3 on Oct. 7, 1959 - via Universe Today
Labels: moon, pink floyd, video, youtube

What Einstein thought about Galilei
about #AlbertEinstein #GalileoGalilei
Galileo's Dialogue Concerning the Two Chief World Systems is a mine of information for anyone interested in the cultural history of the Western world and its influence upon economic and political development. (...) To begin with, the Dialogue gives an extremely lively and persuasive exposition of the then prevailing views on the structure of the cosmos in the large. The naïve picture of the earth as a flat disc, combined with obscure ideas about star-filled space and the motions of the celestial bodies, prevalent in the early Middle Ages, represented a deterioration of the much earlier conceptions of the Greeks, and in particular of Aristotle's ideas and Ptolemy's consistent spatial concept of the celestial bodies and their motions. (...) In advocating and fighting for the Copernican theory Galileo was not only motivated by a striving to simplify the representation of the celestial motions.
His aim was to substitute for a petrified and barren system of ideas the unbiased and strenuous quest for a deeper and more consistent comprehension of the physical and astronomical facts. The form of dialogue used in his work may be partly due to Plato's shining example; it enabled Galileo to apply his extraordinary literary talent to the sharp and vivid confrontation of opinion. To be sure, he wanted to avoid an open commitment in these controversial questions that would have delivered him to destruction by the Inquisition. Galileo had, in fact, been expressly forbidden to advocate the Copernican theory. Apart from its revolutionary factual content the Dialogue represents a down-right roguish attempt to comply with this order in appearance and yet in fact to disregard it. Unfortunately, it turned out that the Holy Inquisition was unable to appreciate adequately such subtle humor. (...) It is difficult to us today to appreciate the imaginative power made manifest in the precise formulation of the concept of acceleration and in the recognition of its physical significance. Once the conception of the center of the universe had, with good reason, been rejected, the idea of the immovable earth, and, generally, of an exceptional role of the earth, was deprived of its justification (...) (...) Galileo takes great pains to demonstrate that the hypothesis of the rotation and revolution of the earth is not refuted by the fact that we do not observe any mechanical effects of these motions. Strictly speaking, such a demonstration was impossible because a complete theory of mechanics was lacking. I think it is just in the struggle with this problem that Galileo's originality is demonstrated with particular force. Galileo is, of course, also concerned to show that the fixed stars are too remote for parallaxes produced by the yearly motion of the earth to be detectable with the measuring instruments of his time. 
This investigation also is ingenious, notwithstanding its primitiveness. It was Galileo's longing for a mechanical proof of the motion of the earth which misled him into formulating a wrong theory of the tides. The fascinating arguments in the last conversation would hardly have been accepted as proofs by Galileo, had his temperament not got the better of him. It is hard for me to resist the temptation to deal with this subject more fully. The leitmotif which I recognize in Galileo's work is the passionate fight against any kind of dogma based on authority. Only experience and careful reflection are accepted by him as criteria of truth. Nowadays it is hard for us to grasp how sinister and revolutionary such an attitude appeared at Galileo's time, when merely to doubt the truth of opinions which had no basis but authority was considered a capital crime and punished accordingly. Actually we are by no means so far removed from such a situation even today as many of us would like to flatter ourselves; but in theory, at least, the principle of unbiased thought has won out, and most people are willing to pay lip service to this principle. It has often been maintained that Galileo became the father of modern science by replacing the speculative, deductive method with the empirical, experimental method. I believe, however, that this interpretation would not stand close scrutiny. There is no empirical method without speculative concepts and systems; and there is no speculative thinking whose concepts do not reveal, on closer investigation, the empirical material from which they stem. To put into sharp contrast the empirical and the deductive attitude is misleading, and was entirely foreign to Galileo. Actually it was not until the nineteenth century that logical (mathematical) systems whose structures were completely independent of any empirical content had been cleanly extracted. 
Moreover, the experimental methods at Galileo's disposal were so imperfect that only the boldest speculation could possibly bridge the gaps between the empirical data. (For example, there existed no means to measure times shorter than a second). The antithesis Empiricism vs. Rationalism does not appear as a controversial point in Galileo's work. Galileo opposes the deductive methods of Aristotle and his adherents only when he considers their premises arbitrary or untenable, and he does not rebuke his opponents for the mere fact of using deductive methods. In the first dialogue, he emphasizes in several passages that according to Aristotle, too, even the most plausible deduction must be put aside if it is incompatible with empirical findings. And on the other hand, Galileo himself makes considerable use of logical deduction. His endeavors are not so much directed at "factual knowledge" as at "comprehension". But to comprehend is essentially to draw conclusions from an already accepted logical system.
(from the foreword to Dialogue Concerning the Two Chief World Systems: Ptolemaic and Copernican (1953), Einstein Archives 1-174 - via Open Parachute)
On the Italian physicist, Galileo Galilei and the impossible biomechanics of giants is an interesting read.
Labels: albert einstein, galileo galilei

The mathematics of love
#ValentinesDay #mathematics
\[\left (x^2 + \frac{9}{4} y^2 + z^2 - 1 \right )^3 - x^2 z^3 - \frac{9}{200} y^2 z^3 = 0\]
via Wikipedia
Labels: mathematics, st valentine

Black holes and revelations: their large interiors
By Gianluigi Filippelli on Friday, February 13, 2015
about #blackhole #cosmology #arXiv #abstract #CarloRovelli
The 3d volume inside a spherical black hole can be defined by extending an intrinsic flat-spacetime characterization of the volume inside a 2-sphere. For a collapsed object, the volume grows with time since the collapse, reaching a simple asymptotic form, which has a compelling geometrical interpretation.
Perhaps surprisingly, it is large. The result may have relevance for the discussion on the information paradox.
Marios Christodoulou & Carlo Rovelli (2014). How big is a black hole?, arXiv: http://arxiv.org/abs/1411.2854v2

A sphere $S$ on the event horizon bounds a spacelike hypersurface, a large portion of which coincides with an $r$ = constant hypersurface. We show this hypersurface with one dimension suppressed, and cut in the middle, omitting the long cylindrical part which gives the main contribution to its volume. We also illustrate the argument showing that most of the volume is contained in a region out of causal contact with matter that has advanced far into the black hole.
Ingemar Bengtsson & Emma Jakobsson (2015). Black holes: Their large interiors, arXiv: http://arxiv.org/abs/1502.01907v1
Labels: abstract, arxiv, astronomy, black hole, cosmology

Black holes and revelations: the seeds of the galaxies
about #blackhole #astronomy #arXiv #abstract
The centre of the Milky Way - via NASA
In this paper we present a new scenario where massive Primordial Black Holes (PBH) are produced from the collapse of large curvature perturbations generated during a mild waterfall phase of hybrid inflation. We determine the values of the inflaton potential parameters leading to a PBH mass spectrum peaking on planetary-like masses at matter-radiation equality and producing abundances comparable to those of Dark Matter today, while the matter power spectrum on scales probed by CMB anisotropies agrees with Planck data. These PBH could have acquired large stellar masses today, via merging, and the model passes both the constraints from CMB distortions and micro-lensing. This scenario is supported by Chandra observations of numerous BH candidates in the central region of Andromeda. Moreover, the tail of the PBH mass distribution could be responsible for the seeds of supermassive black holes at the center of galaxies, as well as for ultra-luminous X-ray sources.
We find that our effective hybrid potential can originate e.g. from D-term inflation with a Fayet-Iliopoulos term of the order of the Planck scale but sub-planckian values of the inflaton field. Finally, we discuss the implications of quantum diffusion at the instability point of the potential, able to generate a swiss-cheese like structure of the Universe, eventually leading to apparent accelerated cosmic expansion.
Sébastien Clesse & Juan García-Bellido (2015). Massive Primordial Black Holes from Hybrid Inflation as Dark Matter and the seeds of Galaxies, arXiv: http://arxiv.org/abs/1501.07565v1
Labels: abstract, arxiv, black hole

A probabilistic approach to the prime numbers distribution
By Gianluigi Filippelli on Tuesday, February 03, 2015
by @ulaulaman about #prime_numbers #arXiv #mathematics
The prime number theorem describes the asymptotic behaviour of the prime-counting function. The first statement of the theorem was given by Euler in 1737 (pdf):
The sum of the series of the reciprocals of the prime numbers, \[\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \frac{1}{13} + \cdots\] is infinitely large, but it is infinitely many times less than the sum of the harmonic series, \[1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots\] Furthermore, the sum of the former series is like the logarithm of the sum of the latter series.
After Euler, the best-known attempt to evaluate the distribution of the prime numbers was made by Bernhard Riemann, and the most recent was posted on arXiv a few days ago:
Labels: abstract, arxiv, mathematics, prime numbers
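Euler's claim is easy to probe numerically: the partial sums of $1/p$ track the logarithm of the partial harmonic sums. A quick sketch with a simple sieve (the cutoff $10^6$ is an arbitrary choice):

```python
# Euler's observation: the sum of 1/p over primes up to N behaves like the
# logarithm of the harmonic sum H_N (both grow, the former like log log N).
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

N = 10**6
prime_sum = sum(1.0 / p for p in primes_up_to(N))      # sum of 1/p, p prime <= N
harmonic = sum(1.0 / k for k in range(1, N + 1))       # H_N
print(prime_sum, math.log(harmonic))                   # comparable values, ~2.9 vs ~2.7
```

The two numbers agree to within a constant offset (Mertens' theorem makes this precise), which is exactly the qualitative statement Euler made.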
Seeing Using Gravitational Waves [duplicate]
Could a species evolve a biological gravitational wave detector? 6 answers

I know that humans' eyes are adapted to detect some electromagnetic waves, and their ears are able to detect sound waves. What I would like to ask is: Is it plausible for a species to biologically detect gravitational waves? My first idea is that they would probably have to evolve near a black-hole-rich area and not be fried by radiation.
science-based reality-check black-holes gravitational-waves
HDE 226868♦ Generic User
marked as duplicate by JDługosz Dec 23 '16 at 0:09

Do they have to evolve, or can they be engineered, which would introduce more possibilities? – Zxyrra Dec 3 '16 at 22:37
@Zxyrra They probably can be engineered, but my preference is evolution. – Generic User Dec 4 '16 at 23:45
@GenericUser be sure to read the answers at the older post, too! – JDługosz Dec 23 '16 at 0:14

The strain, $h$, of a gravitational wave is $$h\sim\frac{1}{R}\frac{GM}{c^2}\left(\frac{v}{c}\right)^2$$ to a relatively decent degree of accuracy. Here, $R$ is the distance to the source, $M$ is the combined mass of the black holes, and $v$ is the orbital speed of the binary. If we gratuitously assume that $v\sim0.1c$ and $M\sim50M_{\odot}$, then at one astronomical unit (AU) away, the same distance Earth is from the Sun, we find that $h\sim5\times10^{-9}$. In order to detect the waves, the species' "eyes" would need to detect a change of a few nanometres in a meter-long object - and that's with them living dangerously close to the black hole binary! Maybe extremely small lifeforms - bacteria, perhaps - would notice such a change, but macroscopic, human-sized lifeforms would not.
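Plugging the stated parameters into the strain formula takes only a few lines; a sketch with standard physical constants (assumed here, not given in the post):

```python
# Order-of-magnitude strain h ~ (1/R) * (GM/c^2) * (v/c)^2 for a compact
# binary, evaluated with the parameters quoted above.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def strain(M, R, v_over_c):
    """Rough quadrupole estimate of the dimensionless strain."""
    return (G * M / c**2) / R * v_over_c**2

h = strain(50 * M_sun, AU, 0.1)
print(f"h ~ {h:.1e}")   # h ~ 4.9e-09
```

Multiplying $h$ by a 1 m baseline gives the length change the hypothetical "eye" would have to resolve.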
Now, this $h$ is still greater than the strain measured by LIGO, which was on the order of $10^{-21}$, so in general it should be easier to measure. However, it seems highly improbable that the species would evolve mini laser interferometers as eyes. It's true that the orbital speed of the black holes would increase as they begin to coalesce. Abbott et al. (2016) - the discovery paper announcing LIGO's observations, published earlier this year - showed a maximum speed of about $0.6c$ (see Fig. 2). Given that $h\propto v^2$, this increases $h$ by a factor of about 36 (also, $M\sim70M_{\odot}$ for the LIGO binary) - a sizeable boost, but it lasts for only a very short amount of time before the source stops producing gravitational waves (post-coalescence).

If we put these aliens near an area full of black hole binaries - never mind the absurdity involved in the formation of such a cluster, or the likely instabilities that could break it apart - we have the problem that different binaries could interfere with one another. Gravitational waves are . . . well, waves, and so are subject to constructive and destructive interference. A simple metric for the expansion and contraction of spacetime by a point source of gravitational waves coming along the $z$-axis is $$ds^2=dt^2-\left[(1+2H(t,z))dx^2+(1-2H(t,z))dy^2+dz^2\right]$$ where $$H(t,z)=h\cos\left[2\pi f(t-z/v)\right]$$ for frequency $f$ and speed $v$ (which should be $c$). This should make the actual wave nature of gravitational waves clearer. If we have two sources of gravitational waves aligned along the same axis, then they can either increase the strength of the signal or significantly decrease it, depending on their respective strains and the values of their $t-z/v$. More complex interactions occur when different waves are at odd angles to one another.
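The constructive/destructive point can be illustrated with the cosine form of $H(t,z)$ at a fixed point; a toy superposition with made-up amplitudes:

```python
import math

# Two monochromatic strain signals h_i(t) = h0_i * cos(2*pi*f*t + phase),
# evaluated along the same axis at z = 0. Aligned phases add; a half-cycle
# phase offset cancels. Amplitudes and frequency are illustrative only.
def h(t, h0, f, phase=0.0):
    return h0 * math.cos(2 * math.pi * f * t + phase)

f = 100.0  # Hz, within LIGO's sensitive band
t = 0.0
aligned = h(t, 1e-21, f) + h(t, 1e-21, f)                 # constructive
opposed = h(t, 1e-21, f) + h(t, 1e-21, f, phase=math.pi)  # destructive
print(aligned, opposed)   # doubled amplitude vs. (numerically) zero
```

With many binaries at random phases and angles, the net strain at the creature's location would fluctuate between such extremes, which is the interference problem described above.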
I realize that I should apologize to the curious reader who may want to know more, because I looked at my notes and used non-standard notation. If you look at most papers on the subject, you'll see the quantity I referred to as $h$ denoted by $h_0$, and the quantity I referred to as $H(t,z)$ denoted by $h(t)$, where we set $z=0$. Therefore, you'd really see the equations $$h_0\sim\frac{1}{R}\frac{GM}{c^2}\left(\frac{v}{c}\right)^2,\quad h(t)=h_0\cos\left(2\pi ft\right)$$ for the case of a binary system spiraling together slowly. The first one might be written in terms of the orbital frequency of the system, but I prefer using this form for an easy approximation.
HDE 226868♦
Wow, that was fast. It's very detailed too. This will probably be the answer if none that are better appear by Wednesday. – Generic User Dec 3 '16 at 15:23

Is it plausible for a species to biologically detect gravitational waves? Basically, no. The Laser Interferometer Gravitational-wave Observatory (LIGO) relies on some of the most high-precision technology developed by the human species. Take a bow, human species! You're truly brilliant to have created such a wonderful instrument.
LIGO's Extreme Engineering
LIGO exemplifies extreme engineering and technology. LIGO consists of: two "blind" L-shaped detectors with 4 km long vacuum chambers... built 3000 kilometers apart and operating in unison... to measure a motion 10,000 times smaller than an atomic nucleus (the smallest measurement ever attempted by science)... caused by the most violent and cataclysmic events in the Universe... occurring millions or billions of light years away! A few of LIGO's most remarkable engineering facts are listed below.
Most sensitive: LIGO is designed to detect a change in distance between its mirrors 1/10,000th the width of a proton! This is equivalent to measuring the distance to the nearest star to an accuracy smaller than the width of a human hair!
World's second-largest vacuum chambers: Encapsulating 10,000 m3 (350,000 ft3), each vacuum chamber encloses as much volume as 11 Boeing 747-400 commercial airliners. The air removed from each of LIGO's vacuum chambers could inflate two-and-a-half MILLION footballs, or 1.8 million soccer balls! LIGO's vacuum volume is surpassed only by the Large Hadron Collider in Switzerland.
Ultra-high vacuum: The pressure inside LIGO's vacuum tubes is one-trillionth of an 'atmosphere' (in scientific terms, that's $10^{-9}$ torr). It took 40 days (1100 hours) to remove all 10,000 m3 (353,000 ft3) of air and other residual gases from each of LIGO's vacuum tubes to reach an air pressure one-trillionth that at sea level.
Air pressure on the vacuum tubes: 155-million kg (341-million pounds) of air press down on each 4 km length of vacuum tube. Remarkably, the steel tubes that hold all that air at bay are only 3 mm (0.12 inches) thick.
Curvature of the Earth: LIGO's arms are so long that the curvature of the Earth is a measurable 1 meter (vertical) over the 4 km length of each arm. The most precise concrete pouring and leveling imaginable was required to counteract this curvature and ensure that LIGO's vacuum chambers were "flat" and level. Without this work, LIGO's lasers would hit the end of each arm 1 m above the mirrors they are supposed to bounce off of!
Source: https://www.ligo.caltech.edu/page/facts
There are two main factors that mitigate against any organism developing gravitational-wave perception. Firstly, it would need to be surrounded by a continuous illumination of colliding black holes. Secondly, LIGO-like receptors would be incredibly complicated, difficult to construct, and plain too big. And this overlooks the need to perceive changes in size on the order of the thickness of a human hair over a length equivalent to a light year. A tad bit tricky? More like lots tricky.
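The quoted Earth-curvature figure is easy to check with the sagitta approximation, drop ≈ L²/2R (the mean Earth radius here is an assumed standard value):

```python
# Vertical drop of a level 4 km baseline due to Earth's curvature,
# using the small-angle sagitta approximation L^2 / (2 R_earth).
L = 4_000.0          # LIGO arm length, metres
R_earth = 6.371e6    # mean Earth radius, metres

drop = L**2 / (2 * R_earth)
print(f"{drop:.2f} m")   # 1.26 m
```

So "a measurable 1 meter" is, if anything, a slight understatement: the drop over each arm is about 1.26 m.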
Perhaps an organism the size of a small planet, living in a cosmic environment extremely rich in colliding black holes, might remotely conceivably need to "see" gravitational waves, but it might be able to develop other sensory systems that could do the same job with less trouble and difficulty. Pity really. Gravitational-wave perceiving critters would be so cool!!
edited Dec 4 '16 at 4:18
a4android

It's already been explained that it would be very difficult (likely impossible) for humans to develop biological receptors for gravitational waves due to sensitivity requirements. However, even if such receptors existed, there is no simple means of focusing gravitational waves. Remember, your eye includes a lens in addition to rods and cones.
How to Focus Gravitational Waves
As of now the only known way to focus gravitational waves would be gravitational lensing: using the gravitational field of a massive object to bend light rays. The same is true for gravitational radiation. LIGO and other detectors do not focus the gravitational waves, but act simply as a single pixel in a camera. Directionality is then determined by triangulation using multiple detectors. So the best we can do currently is mimic ears, in the sense that we can point vaguely in the direction we detected the gravitational wave from. More detectors means better localization of the source; we know where the "light" is coming from but not what the "light source" looks like. So you would need to hand-wave a means of focusing gravitational waves (exotic matter, etc.), since the mass necessary for a gravitational lens is likely a deal breaker.
Any other way?
If you are dead-set on seeing using gravitational waves, and wish to hand-wave the biological gravitational-wave receptors, the only way around hand-waving gravitational lenses would be gravitational-wave interferometry, the gravitational equivalent of radio interferometry.
ALMA's website has a good, though admittedly quite technical, description of how high-resolution images are made from arrays of telescopes (detectors).
Snyder005
December 2018, 23(10): 4541-4555. doi: 10.3934/dcdsb.2018175
Identification of generic stable dynamical systems taking a nonlinear differential approach
Mahdi Khajeh Salehani 1,2
School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, P.O. Box: 14155-6455, Tehran, Iran
School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran, Iran
* Corresponding author's e-mail address: [email protected] (M. Khajeh Salehani)
Received September 2017; Revised December 2017; Published May 2018
Fund Project: This work was supported in part by a grant from the Institute for Research in Fundamental Sciences (IPM) [No. 95510037]

Identifying new stable dynamical systems, such as generic stable mechanical or electrical control systems, requires searching for the system parameters that give rise to such systems. In this paper, a systematic approach to constructing generic stable dynamical systems is proposed. Our approach is based on a simple identification method in which we intervene directly in the dynamics of the system by considering a continuous $1$-parameter family of system parameters, parametrized by a positive real variable $\ell$, and then identify the desired parameters - those that yield a generic stable dynamical system - by analyzing the solutions of a special system of nonlinear functional-differential equations associated with the $\ell$-varying parameters. We have also investigated the reliability and capability of the proposed approach.
To illustrate the utility of our result and as some applications of the nonlinear differential approach proposed in this paper, we conclude with considering a class of coupled spring-mass-dashpot systems, as well as the compartmental systems - the latter of which provide a mathematical model for many complex biological and physical processes having several distinct but interacting phases. Keywords: Generic stable dynamical system, nonlinear differential approach, monic characteristic polynomial, Routh-Hurwitz criterion, Hardy-Hutchinson criterion. Mathematics Subject Classification: Primary: 34D20, 37C75; Secondary: 65L03, 93D05. Citation: Mahdi Khajeh Salehani. Identification of generic stable dynamical systems taking a nonlinear differential approach. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4541-4555. doi: 10.3934/dcdsb.2018175 K. Alexis, G. Nikolakopoulos and A. Tzes, Design and experimental verification of a constrained finite time optimal control scheme for the attitude control of a quadrotor helicopter subject to wind gusts, In: Proc. IEEE Int. Conf. Robot. Autom., Anchorage, Alaska, USA, (2010), 1636-1641.Google Scholar H. Bilharz, Bemerkung zu einem Satze von Hurwitz, Zeitschrift f${\rm{\ddot{u}}}$r Angewandte Mathematik und Mechanik, 24 (1944), 77-82. doi: 10.1002/zamm.19440240205. Google Scholar D. Cabecinhas, R. Naldi, L. Marconi, C. Silvestre and R. Cunha, Robust take-off and landing for a quadrotor vehicle, In: Proc. IEEE Int. Conf. Robot. Autom., Anchorage, Alaska, USA, (2010), 1630-1635. doi: 10.1109/ROBOT.2010.5509430. Google Scholar F. Calogero, Nonlinear differential algorithm to compute all the zeros of a generic polynomial, J. Math. Physics, 57 (2016), 083508, 3pp. doi: 10.1063/1.4960821. Google Scholar F. Calogero, Comment on "Nonlinear differential algorithm to compute all the zeros of a generic polynomial", J. Math. Physics, 57 (2016), 104101, 4pp. doi: 10.1063/1.4965441. Google Scholar A. 
Figure 1. Schematic of a coupled spring-mass-dashpot system
Figure 2. Schematic of an open compartmental system
definitionS of a topological space (reference)

I know of at least 6 different ways to define a topological space: with open sets, with closed sets, with nets, with neighbourhoods, with Kuratowski's closure operator, and with his interior operator. I have a vague idea of how to work with the non-standard ones, but I am not sure about all the equivalences, and I do not have much reference material. The last book where I looked for the Kuratowski operators left me unsatisfied. (Moreover, it claimed that it is possible to define a topology using a boundary operator, overlooking the difficulties.) I am asking for all the possible definitions of a topological space, or at least a reference to where to find them. Thank you. general-topology reference-request definition Lolman

I think you probably don't have to ask about the first four. Presumably equivalence of the Kuratowski operator definitions with any of the first four would be enough for you, wouldn't it? – Danu Apr 27 '17 at 9:22

You can also use filter convergence and even boundary-operators. – Henno Brandsma Apr 27 '17 at 11:37

"all the possible definitions of a topological space" is likely an infinite set. Even if you restrict your attention to ones that have appeared in the literature somewhere or other (e.g. Ralph Kopperman's paper "All topologies come from generalized metrics") it will be a fairly large list. – John Coleman Apr 27 '17 at 15:06

Willard's General Topology has equivalences of the 6 that you listed. Some books also include an equivalence based on the frontier [= boundary] operator, but I'm not home now where all my books are. If you have access to a university library, then browse through the general topology texts on the shelves. The following two papers are among a couple I know of right now (I could probably dig up more if I was at home where all my stuff is) that include equivalences not in Willard's book.
José Ribeiro de Albuquerque (1910-1991), La notion de «frontière» en topologie [The notion of «frontier» in topology], Portugaliae Mathematica 2 #1 (1941), 280-289. Miron Zarycki (1899-1961), Quelques notions fondamentales de l'analysis situs au point de vue de l'algèbre de la logique [Some fundamental notions of topology from the point of view of the algebra of logic], Fundamenta Mathematicae 9 (1927), 3-15. (translation of 3 sentences near the beginning of the paper) In the present Note I consider some analogous systems of axioms for some other fundamental notions of topology, namely for the notions of exterior, of interior, of frontier and of border. I prove the equivalence of these systems to that of Mr. Kuratowski and I deduce some theorems concerning the fundamental properties of the mentioned notions. I wish to thank Mr. Kuratowski for his valuable advice concerning the final editing of this article. (ADDED NEXT DAY) This morning, while at home where all my math stuff is, I looked for some more references and found the following. I didn't bother with references for characterizations in terms of the interior operator (or nets, or neighborhoods, etc.) because these are quite common and in a lot of topology texts. Alexander Abian (1923-1999), The derived set axioms for topology, Mathematica (Cluj) 12(35) #2 (1970), 213-215. Abian shows that a topology for a set X can be characterized by a function $D:P(X) \rightarrow P(X)$ that simultaneously satisfies all of the following: (1) $D(\emptyset) = \emptyset;$ (2) For each $A,B \in P(X)$ we have $D(A \cup B) = D(A) \cup D(B);$ (3) For each $A \in P(X)$ we have: $D(A \cup D(A)) \subseteq A \cup D(A);$ (4) For each $x \in X$ we have $x \notin D(\{x\}).$ Shair Ahmad (1934- ), On the derived set operator (conference abstract #2), American Mathematical Monthly 71 #8 (October 1964), 956. 
Abstract of a talk given at the annual spring meeting of the Minnesota Section of the Mathematical Association of America, College of St. Thomas (St. Paul, Minnesota), 9 May 1964: The four axioms for a derived set operator as given by [Frank Reese] Harvey are shown to be equivalent to three somewhat simpler axioms. A slight modification of these axioms renders them absolutely independent. Kenneth Albert Henry Gravett (??-1966), A characterization of frontier, Proceedings of the Cambridge Philosophical Society 52 #1 (January 1956), 152-153. Frank Reese Harvey (1941- ), The derived set operator, American Mathematical Monthly 70 #10 (December 1963), 1085-1086. A topology for a set X can be characterized by a function $D:P(X) \rightarrow P(X)$ that simultaneously satisfies all of the following: (1) $D(\emptyset) = \emptyset;$ (2) For each $A,B \in P(X)$ we have $D(A \cup B) = D(A) \cup D(B);$ (3) For each $A \in P(X)$ we have: $x \in D(A)$ if and only if $x \in D(A - \{x\});$ (4) For each $A \in P(X)$ we have $D(A \cup D(A)) \subseteq A \cup D(A).$ Denis Arthur Higgs (1932-2011), Iterating the derived set function, American Mathematical Monthly 90 #10 (December 1983), 693-697. A characterization for the topology on a set in terms of the derived set operator is given on p. 694. Hellen Frances Cullen (1919-2007), Introduction to General Topology, D. C. Heath and Company, 1968, xii + 427 pages. Several alternative characterizations of a topological space are given in the subsection Extended and Conventional Definitions of a Topological Space on pp. 22-25, including the derived set operator. [Note: Her term "cotopology" refers to the collection of closed sets in a topological space.] James Dugundji (1919-1985), Topology, Allyn and Bacon Series in Advanced Mathematics, Allyn and Bacon, 1966, xvi + 447 pages. A characterization of the topology on a set in terms of the derived set operator is given on p. 73.
Michael Caesar Gemignani (1938 - ), Elementary Topology, Addison-Wesley Publishing Company, 1967, xi + 258 pages. [The 2nd edition was published by Addison-Wesley Publishing Company in 1972 (xi + 270 pages), and the 2nd edition was reprinted by Dover Publications in 1990.] Exercise #5 on p. 59 (1972 2nd edition): Try to find a method for specifying a topology on a set $X$ by specifying Fr $A$ for each* $A \subset X.$ Do likewise for Ext. [The frontier (Fr) of a set $A$ is defined to be the intersection of the closure of $A$ and the closure of $X-A.$ The exterior (Ext) of $A$ is defined to be the complement of the closure of $A.$] There are no answers or hints for the exercises, and no references are given for this particular exercise. Wolfgang Joseph Thron (1918-2001), Topological Structures, Holt, Rinehart and Winston, 1966, xii + 240 pages. Ramaswamy Vaidyanathaswamy (1894-1960), Set Topology, 2nd edition, Chelsea Publishing Company, 1960, viii + 305 pages. [Reprinted by Dover Publications in 1999.] On p. 58 (Example 13) three properties of the boundary operator are stated (hint provided) to characterize a topology on a set, where the boundary of $A$ is defined to be the set of all points in $A$ that do not belong to the interior of $A.$ On p. 58 (Example 15) four properties of the frontier operator are stated to characterize a topology on a set, where the frontier of $A$ is defined to be the union of the boundary of $A$ and the boundary of $A'$ $(A'$ is the derived set of $A).$ On p. 58 (Example 16) four properties of a certain $2$-variable frontier operator $F:P(X) \times P(X) \rightarrow P(X),$ defined by $F(A,B) = (A \cap \overline{B}) \cup (\overline{A} \cap B),$ are stated to characterize a topology on a set. On p. 59 (Example 20) four properties of the exterior operator are stated to characterize a topology on a set, where the exterior of $A$ is defined to be the interior of $X-A.$ Incidentally, all three of these characterizations can be found on pp. 
58-59 of the first edition of Vaidyanathaswamy's book [Treatise on Set Topology. Part I, Indian Mathematical Society, Madras, 1947, vi + 306 pages], but I did not see any references to relevant literature in either the 1947 edition or the 1960 edition. Dave L. Renfro

I was reading Willard's book these past days, where for the first time I found the axioms for the topology through nets, and was pleased to find the characterizations I know presented together in one book, even though not with many explicit examples of extensions and usage. The French papers left me stunned by the "categorical" notation for the subsets of the space, but given time I think even without the language I will be able to read them. Given time I will look into all the papers, and as soon as I can I will accept the answer! :) THANK YOU. – Lolman Apr 30 '17 at 10:40

Dave, your reference books are really hard to find. But I made some good progress. Now I just need to understand and find a duality for the derived set. Thank you. – Lolman May 11 '17 at 8:08
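To complement the references, here is a small self-contained sketch of how one of these equivalences works in practice. The three-element example topology is my own made-up illustration, not taken from any of the papers above: it builds the closure operator of a topology, verifies Kuratowski's four closure axioms, and then recovers the open sets as complements of the fixed points of the operator.

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
# A made-up topology on X: contains X and the empty set, and is
# closed under unions and finite intersections.
opens = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def closure(a):
    # cl(A) = smallest closed set containing A; closed sets are the
    # complements of the open sets.
    closed = [X - u for u in opens]
    return frozenset.intersection(*[c for c in closed if a <= c])

# Kuratowski's four closure axioms.
assert closure(frozenset()) == frozenset()                    # cl(empty) = empty
for a in powerset(X):
    assert a <= closure(a)                                    # A subset of cl(A)
    assert closure(closure(a)) == closure(a)                  # cl(cl(A)) = cl(A)
    for b in powerset(X):
        assert closure(a | b) == closure(a) | closure(b)      # cl(A u B) = cl(A) u cl(B)

# Going back: the open sets are exactly the complements of the fixed points of cl.
recovered = {X - a for a in powerset(X) if closure(a) == a}
assert recovered == opens
print("axioms verified; recovered opens:", sorted(map(sorted, recovered)))
```

Running the same loop over the interior operator $X - \mathrm{cl}(X - A)$ would check the dual interior axioms in the same way.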
Comparing different topologies

I have to solve the following: For every function $f\in C[0,1]$, $\varepsilon>0$ and every finite set $A$, the set $U(f,A,\varepsilon)$ is defined by $U(f,A,\varepsilon)=\{g\in C[0,1]:\forall x\in A\; |f(x)-g(x)|<\varepsilon\}$, and the family of those sets forms a base of some topology $\mathcal{T}$ on $C[0,1]$. If $\mathcal{U}$ is the topology induced by the metric $d_\infty$ and $\mathcal{M}$ the topology induced by $d_1$, compare the topologies $\mathcal{T}$, $\mathcal{U}$, $\mathcal{M}$. (By $d_\infty$ and $d_1$ we denote $d_\infty(f,g)=\max_{x\in[0,1]}|f(x)-g(x)|$ and $d_1(f,g)=\int_0^1 |f(x)-g(x)|dx$.)

So, the set $U(f,A,\varepsilon)$ contains the continuous functions whose graphs pass through a finite set of $\varepsilon$-balls (whose centers are points from $A$). We have to check two properties of a base for the topology $\mathcal{T}$ (first, that the sets $U(f,A,\varepsilon)$ cover $C[0,1]$, and second, the condition on the intersection of two base sets, which is the one I'm having a problem with). Also, I have got that $\mathcal{M}\subset\mathcal{U}$ (it is easy to check that $d_1(f,g)\leq d_\infty(f,g)$) and those topologies are not equivalent, and $\mathcal{U}\subset\mathcal{T}$. But, what happens with $\mathcal{M}$ and $\mathcal{T}$? Also, are the topologies $\mathcal{T}$ and $\mathcal{U}$ equivalent? Detailed explanations are welcome. Thanks in advance. general-topology metric-spaces alans

Do you mean $$U(f,A,\epsilon) = \{g\in C[0,1] : |f(x) - g(x)| < \epsilon \quad\forall x \in A\}$$ – Prahlad Vaidyanathan Nov 12 '13 at 9:35

Yes, that was a typo. – alans Nov 12 '13 at 9:36

What does compare precisely mean -- decide whether or not equivalent? – Rasmus Nov 12 '13 at 9:52

A partial answer that should help you a little - Comparing $\mathcal{T}$ and $\mathcal{M}$ : a) Consider $V:= U(0,\{1\}, 1/2) \in \mathcal{T}$, I claim that $V \notin \mathcal{M}$ : Define $f \in V$ to be the constant function $f \equiv 1/4$.
Now I claim that for any $\delta > 0$, there is a $g \in C[0,1]$ such that $$ d_1(f,g) < \delta \text{ and } g \notin V $$ For this you can think of a picture of a function $g$ such that $$ g(1) = 1, \text{ and } g(x) = 1/4\quad\forall x < 1-\delta $$ and $g$ describes a thin triangle between $x = 1-\delta$ and $x=1$. Then, $$ d_1(f,g) = \int_{1-\delta}^1 |g(x) - 1/4|dx \leq \int_{1-\delta}^1 (1-1/4)dx = \frac{3\delta}{4} < \delta $$ Hence, $d_1(f,g) < \delta$. However, $$ |g(1) - 0| = 1 > 1/2 \Rightarrow g \notin V $$ b) Consider $W := \{g \in C[0,1] : \int |g(x)|dx < 1\} \in \mathcal{M}$, I claim that $W \notin \mathcal{T}$ : Choose $f \equiv 1/2$. For any finite subset $A \subset [0,1]$ and any $\delta > 0$, we can construct a function $g \in C[0,1]$ such that $$ g(x) = 1/2 \quad\forall x \in A, \text{ but } \int |g(x)|dx > 1 $$ In fact, just take the smallest $x_0 \in A$, and build a really large triangle from $(0,2)$ to $(x_0,1/2)$. Now let $g$ be the hypotenuse of that triangle for $x \leq x_0$ and $g(x) = 1/2$ for all $x > x_0$. Hence, $\mathcal{T}$ and $\mathcal{M}$ are not related by inclusion. – Prahlad Vaidyanathan
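As a numerical sanity check of part (a) above (this sketch is mine, not part of the answer; the value $\delta = 0.1$ is an arbitrary choice), the thin-triangle function $g$ really is $d_1$-close to $f \equiv 1/4$ while violating the condition defining $V$ at $x = 1$:

```python
import numpy as np

delta = 0.1                      # any 0 < delta works; 0.1 is just for illustration
x = np.linspace(0.0, 1.0, 100001)

f = np.full_like(x, 0.25)        # f = 1/4, an element of V = U(0, {1}, 1/2)
# g agrees with f for x < 1 - delta, then rises linearly (the "thin triangle")
# up to g(1) = 1.
g = np.where(x < 1 - delta, 0.25, 0.25 + 0.75 * (x - (1 - delta)) / delta)

# d_1(f, g) = integral of |f - g|, computed with the trapezoid rule by hand.
h = np.abs(f - g)
d1 = float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(x)))

assert d1 < delta                # g is d_1-close to f (the exact value is 3*delta/8) ...
assert abs(g[-1] - 0) > 0.5      # ... yet |g(1) - 0| = 1 > 1/2, so g is not in V
print(f"d_1(f, g) = {d1:.4f} < delta = {delta}")
```

The exact integral is the area of the triangle, $3\delta/8$, comfortably below the answer's cruder bound of $3\delta/4$.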
Technical Thoughts on Polling Methods

Hi Folks, just a quickie on how polls could be tallied with significantly reduced noise. I'd be very interested in hearing your thoughts (especially if your thoughts consist of: this is already done, dummy, and it's called X). Oh, and apologies to those on mobile devices for which the LaTeX parsing seems not to be working.

I've been thinking that an effective way to poll people who've participated in past elections is to ask them who they voted for (or whether they voted) in the last election and then to ask who or which party they'll vote for now. Say the last race was a nearly 50-50 split. If $f_{AB} \ll 1$ is the fraction of A voters switching parties ($A\rightarrow B$) and $f_{BA} \ll 1$ the fraction going from $B\rightarrow A$, then the vote share for party A will be: $$p_A=0.5\times (1-f_{AB}+f_{BA})$$ But the uncertainty* in this will be: $$\sigma_A=\sqrt{\sigma_{AB}^2+\sigma_{BA}^2}$$ $$\sigma_{AB}^2\simeq \frac{0.5\times f_{AB}(1-f_{AB})}{N}$$ So, taking $1-f_{AB}\simeq 1$ we get: $$\sigma_A\simeq\sqrt{\frac{f_{AB}+f_{BA}}{2N}}$$ This is to be compared with the normal error bars for simply asking "who will you vote for?" which yields $$\sigma_{A,traditional}=\frac{0.5}{\sqrt{N}}$$ So consider a district which went exactly 50-50 in the 2016 election, but in which approximately 4% of Trump voters now feel regrets and would switch to voting Dem (and no Clinton voters switch). This means that a perfect poll would produce a 52-48 result, in favor of the Dem. The error bars are reduced by approximately a factor of: $$\frac{\sigma_A}{\sigma_{A,traditional}}\simeq \sqrt{2(f_{AB}+f_{BA})}=\sqrt{0.08}\simeq 0.28$$ Put another way, you'd get roughly the same errorbars by interviewing 80 people under the new approach as you would with interviewing 1000 people under the old. Or consider a 400 person survey. Under the traditional approach, your formal uncertainty would be $\sigma_A=2.5\%$.
Under the new approach, you'd expect (with 4% of Republican "switchers") about $\sim 8$ of them to tell you that they will switch. Those are the only ones you're looking for. In this case, you'd get a formal error of only $\sim 0.7\%$. You'd get a similar result by including the third option, in both the previous-election and next-election questions, of whether they'd voted or intend to vote at all. Now, before you jump in with every conceivable objection, I realize that a major issue is that people may simply lie about who they voted for (because of virtue signalling, or to skew the results). This, indeed, was the fatal flaw of the notorious LA Times poll from the last election (which used a panel, missed by 5 points, and predicted a significant popular-vote victory for Trump). For instance, if the probability that someone falsely says they had previously voted for $A$ is $L_A$, and for $B$ is $L_B$, then $L_A -L_B$ would produce the same effect as $f_{BA}-f_{AB}$ on the calculation. It's possible that you'd need to correct for that using some sort of Bayesian prior, but at the moment, I don't have deep thoughts about how that would be done. But there are advantages to this approach as well, besides reducing the formal errorbars. Since election polling is essentially looking for changes in behavior at or near the margins, this approach is much better focused on those changes. What's more, it's less sensitive to poor sampling across parties. Suppose you inadvertently poll too many Dems, for instance. Traditional polling would over-estimate the Dem result in the next election, but this approach won't.

* Most reporting gives the "margin of error" (MOE), which is $2\sigma$, corresponding to a 95% likelihood range.

Polls Technical
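Since the setup above is easy to simulate, here is a minimal Monte Carlo sketch (the variable names and parameter choices are mine) of the idealized scenario: a 50-50 district, 4% of past-B voters switching to A, truthful respondents, and the previous election result known exactly:

```python
import numpy as np

rng = np.random.default_rng(42)
N, trials = 400, 20000
f_BA = 0.04                      # 4% of past-B voters switch to A; no A -> B switchers

past_a = rng.random((trials, N)) < 0.5                 # 50-50 split last election
switch = (~past_a) & (rng.random((trials, N)) < f_BA)  # the B -> A switchers
now_a = past_a | switch                                # current A voters

trad = now_a.mean(axis=1)        # traditional estimator: "who will you vote for?"
new = 0.5 + switch.mean(axis=1)  # anchor at the known 50-50 result, count switchers

print(f"true p_A = {0.5 * (1 + f_BA):.3f}")
print(f"traditional poll: mean {trad.mean():.4f}, std {trad.std():.4f}")
print(f"switcher-based:   mean {new.mean():.4f}, std {new.std():.4f}")
print(f"std ratio = {new.std() / trad.std():.2f}")
```

Both estimators are unbiased here, and the switcher-based one has a spread several times smaller at the same $N$, which is the heart of the argument; lying about past votes, as discussed above, would bias it.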
The earliest designs for thermal-spectrum molten-salt breeder reactors using thorium were conceived before graphite was known to be a suitable moderator material. They primarily relied on the moderating properties of the salt itself, which were poorer than graphite's. Because of this limitation they could not achieve a truly thermal-neutron spectrum, and this led to higher fuel inventories. The absorption of neutrons in the salt constituents (lithium, beryllium, and fluorine) is rather low, but not nearly so low as in graphite, so there were larger neutronic losses to the salt itself. The barrier material between the core and blanket salts was metallic, and this structure would have a limited lifetime, if it was feasible at all. In-pile experiments with graphite in the early 1960s established its chemical and neutronic stability with the fuel and blanket salts, allowing graphite-moderated reactors to begin to be considered in future designs. The first fruit of this realization was the decision to manufacture unclad graphite moderator elements for the Molten-Salt Reactor Experiment, which had been approved for construction in 1960. Figure 1: A 1958 concept for a two-fluid molten-salt breeder reactor. ORNL-3708, February 1964-July 1964 In a 1964 progress report (ORNL-3708), the researchers of the MSRP put forward a concept for a thorium breeder reactor that first used a prismatic core design, where the prismatic structures consisted of graphite extrusions. Their primary motivation for attempting the challenging goal of the breeder was the unforgiving nature of resource calculations for reactors that fell much short of breeding.
Realization of a system that makes full use of the potential energy in thorium to produce cheap electricity is the primary mission of reactor development at the Oak Ridge National Laboratory. That system must be an efficient breeder system. An advanced converter may be a worthwhile step in the development, but an advanced converter does not reach the goal. No matter how good the conversion ratio, if it is significantly less than 1, the amount of uranium that must be mined to make up the deficit in fissionable material is greater than the amount of thorium that must be mined to compensate for the thorium converted to 233U and burned. For example, if the conversion ratio is 0.90, the 235U from 20 tons of natural uranium will be burned with each ton of thorium consumed. Even with a conversion ratio of 0.99, the 235U from 2 tons of uranium must be supplied with each ton of thorium. Figure 2: The reactor cell elevation view from ORNL-3708 for a molten-salt breeder reactor. From ORNL-3708, page 11. Their concept for the breeder reactor, shown in Figure 2, used the graphite prisms as the flow channels for the fuel salt. Each graphite prisms had a square cross-section at the bottom and top of the structure, which was shaped into a circular cross-section in the "core" region between the ends. A central circular channel was bored into each graphite prism. Blanket salt was allowed to flow into interstitial regions between the graphite blocks. At the top and bottom of each set of graphite blocks, flexible piping redirected the upcoming flow from one block into the downgoing flow into an adjacent block. The graphite blocks were arranged on a square pitch (distance between centers) of 8 inches. The complication of that arrangement can be seen in Figure 3. Figure 3: The graphite core module from ORNL-3708 for a molten-salt breeder reactor. From ORNL-3708, page 12. The overall average core power density was 40 kW/liter and the core consisted of 324 prismatic elements. 
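The makeup-uranium arithmetic in the passage quoted above is easy to reproduce with one added assumption: that roughly 0.5% of natural uranium's mass is recoverable as fissile 235U after enrichment (natural uranium is about 0.71% 235U, less tails losses). That 0.5% figure is my assumption, chosen to match the quoted numbers; it does not appear in the report. A minimal sketch:

```python
# Makeup fissile needed per ton of thorium consumed, for a converter reactor.
# Assumption (mine, chosen to reproduce the quoted figures): about 0.5% of
# natural uranium's mass is recoverable as fissile U-235 after enrichment.
U235_PER_TON_NATU = 0.005   # tons of recoverable U-235 per ton of natural uranium

def natural_uranium_needed(conversion_ratio, thorium_tons=1.0):
    """Tons of natural uranium mined to cover the fissile deficit."""
    deficit = (1.0 - conversion_ratio) * thorium_tons   # tons of fissile not bred
    return deficit / U235_PER_TON_NATU

for cr in (0.90, 0.99):
    print(f"CR = {cr:.2f}: {natural_uranium_needed(cr):.0f} tons of natural U per ton of Th")
```

At a conversion ratio of exactly 1.0 the makeup drops to zero, which is the whole point of insisting on a true breeder.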
Fuel salt entered the reactor at 1125°F and left at 1300°F, for a ΔT across the reactor of 175°F. The proposed coolant was a fluoride salt mixture (LiF-NaF-KF) that would enter the primary heat exchanger at 950°F and exit at 1100°F. This led to an anticipated thermal efficiency of 42-45%. The design power of the reactor was 1000 MWe from a core power of 2400 MWt. The reactor vessel was to be constructed of a modified Hastelloy-N alloy and would be 240 inches in outer diameter. A graphite reflector 12 inches thick lined the interior of the reactor vessel. No special structure was described to achieve a transition from the prismatic structure of the core to the annular structure of the reflector and core vessel. There was simply a gap between the prismatic core and the reflector that would be filled by blanket fluid, as can be seen in Figure 2. This design proposed to process the reactor's fuel inventory in 10-60 days and the blanket inventory every 35-200 days. The anticipated breeding ratio was 1.08, but this had not been rigorously verified. Based on a fissile inventory of 880-1220 kg, the reactor achieved a specific fissile inventory of 0.88 to 1.22 kg of fissile for each electrical megawatt generated. ORNL-3936, September 1965-February 1966 Studies of a thorium breeder reactor began in earnest in September 1965, following the successful completion and startup of the Molten-Salt Reactor Experiment. In the MSRP semiannual report of February 1966 (ORNL-3936), a new reactor core design was put forward that showed a number of design innovations. The reactor was sized for 1000 MWe from 2220 MWt, assuming a nearly 45% efficient steam turbine power conversion system and subtracting for station loads. The average core power density had been doubled to 80 kW/liter as well, which led to a more compact reactor vessel 120 inches in outer diameter and 150 inches high, shown in Figure 4. 
Figure 4: An elevation view of the reactor vessel from ORNL-3996 for a molten-salt breeder reactor. The central region consists of hexagonal graphite prisms arranged on a triangular pitch. The transition region of the reactor is empty, and the reflector consists of a graphite annulus. From ORNL-3996, page 36.

The reactor core was also prismatic, but now the up-and-down flow of fuel salt through the core would be confined to a single "fuel" cell rather than passing from one channel to another, as shown in Figure 5. The graphite prisms were hexagonal in cross-section rather than square and set on a triangular pitch of 4.8 inches, with 534 in total. Salt flowed upward through each prism through eight small circular channels, and then was turned at the top of the prism by a plug structure and flowed downward through a central circular bore. This strategy eliminated the external plumbing of the ORNL-3708 core module design, internalizing the flow into a single graphite prism. Blanket salt filled the interstices between the prismatic structures, and the nominal core composition was 75% graphite, 18% fuel salt, and 7% blanket salt by volume.

Figure 5: A single graphite "fuel cell" from ORNL-3936 for a molten-salt breeder reactor. Fuel salt flowed upward from the entrance plenum through eight channels at 45-degree angles to one another, then downward through the central channel to the exit plenum. From ORNL-3936, page 178.

The prismatic core structure was set into the reactor vessel, mounted on top of a plenum structure. Inside the plenum structure, the exit plenum was fully enclosed within the entrance plenum. Each "fuel cell" was connected to each plenum by graphite-to-metal transition sleeves. The fuel cells were anchored at only one end to permit axial movement due to thermal expansion. There was no transition region between the prismatic core and the annular 3-inch graphite reflector immediately inside the reactor vessel. This can be seen in Figures 4 and 6.
Figure 6: A cross-sectional view of the reactor vessel from ORNL-3996 for a molten-salt breeder reactor. The central region consists of hexagonal graphite prisms arranged on a triangular pitch. The transition region of the reactor is empty, and the reflector consists of a graphite annulus. From ORNL-3996, page 35.

Fuel salt entered the reactor at 1000°F and left at 1300°F, for a ΔT of 300°F. A coolant mixture of sodium fluoroborate and sodium fluoride (39-61 mole%) was proposed that would enter the primary heat exchanger at 850°F and exit at 1125°F. The blanket salt was held to a smaller temperature range, entering the reactor at 1150°F and leaving at 1250°F (ΔT = 100°F). Fuel would be processed every 47 days and blanket every 23 days. The total fuel volume in the primary circuit and processing system was 717 ft3 and the total blanket volume was 3383 ft3. With the high core power density, this design projected a fissile inventory of only 769 kg, leading to a specific fissile inventory of 0.77 kg/MWe. The high core power density diminished the breeding ratio to 1.049, most likely due to neutron loss to protactinium. The fertile inventory of 260,000 kg of thorium was over 300 times greater than the fissile inventory.

The next semiannual report in August 1966 (ORNL-4037) proposed a substantial change to the design approach, employing four 250-MWe modules rather than a single 1000-MWe reactor. There was also discussion of how the removal of protactinium in the blanket (rather than waiting for the protactinium to decay to uranium) would improve overall performance.

The February 1967 semiannual report (ORNL-4119) adopted the modular 250-MWe reactors (each with 556 MW of thermal power) as the baseline approach. The average power density of the core was reduced to 40 kW/liter. The graphite "fuel cell" that made up the primary structure of the core was modified from a hexagonal cross-section to a circular cross-section, such that the fuel cell was essentially a long graphite cylinder.
The fuel cells were enlarged to 5 inches in diameter and their number reduced to 336. In their center was a circular bore 1.5 inches in diameter, surrounded by three 7/8-in diameter holes for the upward flow from the entrance plenum, as shown in Figure 7.

Figure 7: A single graphite "fuel cell" from ORNL-4119 for a molten-salt breeder reactor. Fuel salt flowed upward from the entrance plenum through three channels at 120-degree angles to one another, then downward through the central channel to the exit plenum. From ORNL-4119, page 184.

Temperature ranges for fuel, blanket, and coolant were all retained from the previous design. The plenum structures evolved but remained similar in concept, with a central feed and drain rather than an offset design. A transition structure was first proposed in this design, employing graphite spheres 4 inches in diameter to fill the region between the prismatic core and the annular graphite reflector. Each graphite ball had holes in it so that blanket salt occupied 60% of the volume of the transition region. The reflector's thickness was increased to 6 inches. These changes can be seen in Figure 8.

Figure 8: An elevation view of the reactor vessel from ORNL-4119 for a molten-salt breeder reactor. The central region consists of cylindrical graphite prisms arranged on a triangular pitch. The transition region of the reactor consists of loose graphite spheres with blanket salt between them, and the reactor is lined with a 4-inch graphite reflector. From ORNL-4119, page 183.

Fuel and blanket processing times remained largely the same, but the breeding ratio improved to 1.07 due to the lower core power density of the design. A control rod was envisioned for the central position in the core. The control rod would not be a rod in the conventional sense; rather, there would be a hollow graphite cylinder 5 inches in diameter, the same as the other fuel cells, equipped with a gas inlet at the top.
The hollow graphite cylinder would naturally fill with blanket salt, which was a neutron absorber and would be particularly effective in this central region of the reactor. Gas pressure would be used to set the height of the blanket salt column anywhere within this cell, and this column height would be used for control of the reactor. If the gas were expelled, the column would fill with blanket salt, introducing negative reactivity and tending towards reactor shutdown. An alternative approach would be to position a graphite rod 5 inches or less in diameter in this central position and to actuate it from above. By pushing the rod down into the blanket, positive reactivity would be introduced, and by withdrawing the rod and allowing the volume to fill with blanket fluid, negative reactivity would be introduced. Since the blanket salt was more dense than graphite, a loss of actuation capability would cause the graphite rod to float upward in the reactor, ideally leading to a reduction in reactivity.

ORNL-4191, March 1967-August 1967

The next semiannual report from August 1967 (ORNL-4191) was the last to feature the two-fluid design. Because design work shifted to the one-fluid core design, the two-fluid design described in ORNL-4191 became the "reference" two-fluid design, and was written up in greater detail in the final report on two-fluid reactor design efforts, ORNL-4528, Two-Fluid Molten-Salt Breeder Reactor Design Study, published in August 1970. The "reference" two-fluid design described in ORNL-4191 and ORNL-4528 reduced the core power density further, from 40 kW/liter to 20 kW/liter, in order to improve graphite lifetime in the core. There were also significant changes to the graphite "fuel cells". The geometry was again changed back to a hexagonal cross-section, with an inner diameter (flat-to-flat distance) of 5.375 inches. A single circular channel 2-23/32 inches in diameter was to be bored down the centerline of each graphite prism.
Inside that channel would be another cylindrical graphite channel, 3/4 inch in thickness and 2-1/4 inches in diameter. Salt would flow upward on the outside of this "recursive" channel and then flow downward on the inside, as shown in Figure 9. This technique eliminated the need to drill multiple passages down the length of the graphite fuel cells, which could lead to a substantial improvement in manufacturability. The concentric flow channels were generated by placing the inner cylindrical graphite structure in the bore of the larger hexagonal graphite prism. This simple approach to a "recursive" fuel channel has been mimicked in our newer designs.

Figure 9: A single graphite "fuel cell" from ORNL-4191 for a molten-salt breeder reactor. Fuel salt flowed upward from the entrance plenum along the outside of a recursive cylindrical sleeve, then downward through the central channel of that sleeve to the exit plenum. From ORNL-4191, page 74.

The prismatic pattern of core structures continued across the entire cross section of the core, but comprised three distinct regions, which can be seen in the core cutaway view shown in Figure 12. The central region of the reactor was composed of the hexagonal "fuel cells", in which the fuel salt flowed upward through a central bore and then downward through a concentric graphite sleeve inside the bore. There were to be 420 of these structures in the reactor vessel, with blanket salt filling the interstitial space between them. Surrounding the "core" of hexagonal graphite prisms was a "blanket" constructed of simple graphite cylinders, open at both ends, providing for the flow of blanket salt and whose geometry achieved the desired graphite-to-salt volume ratio in the blanket. There were to be 252 of these graphite cylinders, including one in the central location of the reactor that would act as a position for a control rod. The outermost region was the "reflector", consisting of solid graphite cylinders.
Some of these cylinders were to be trimmed in order to fit them into the core. This trimming apparently constituted the transition between the prismatic pattern and the metallic reactor vessel.

Figure 10: An elevation view of the reactor vessel from ORNL-4191 for a molten-salt breeder reactor. The central region consists of hexagonal graphite prisms arranged on a triangular pitch. The transition region of the reactor consists of cylindrical graphite tubes open at each end and allowed to fill with blanket salt. The reflector consists of solid graphite cylinders. From ORNL-4191, page 73.

Figure 11: A cross-sectional view of the reactor vessel from ORNL-4191 for a molten-salt breeder reactor. The central region consists of hexagonal graphite prisms arranged on a triangular pitch. The transition region of the reactor consists of cylindrical graphite tubes open at each end and allowed to fill with blanket salt. The reflector consists of solid graphite cylinders. From ORNL-4191, page 72.

Figure 12: A cutaway view of the reactor vessel and core described in ORNL-4191. Different graphite regions are depicted in different colors. The core region is depicted in dark blue, the blanket region in green, and the reflector region in dark gray. The prismatic core transitions to the annular shape of the reactor vessel by trimming some of the graphite prisms of the reflector section. Image generated by Flibe Energy.

The manner in which the graphite fuel cells were to be attached to the entrance and exit plena constituted a challenge to the design. The outer graphite structure was to be brazed to a metallic section at the bottom end, and this metallic section would then be welded into the fuel entrance plenum. The inner, "recursive" graphite channel would be fitted to another metallic section using a sliding fit, and then that metallic section would be welded to the exit plenum.
Leak-tightness between the exit stream and the entrance stream was not crucial, since both streams were of the same composition, differing only in temperature. The challenge of the design came about in the event that one of the fuel cells needed to be replaced. With the hexagonal geometry of the fuel cells and the very small interstitial space for blanket salt flow between them, there was no room for a long-handled tool to be inserted from the top of the reactor down to the area where the fuel cells connected to the entrance and exit plena. The only option would be to remove the central channel (a simple graphite annular cylinder) and then to remove fuel cells, one after another, until the defective fuel cell was reached. This would likely involve removing many good fuel cells in order to replace a single defective one. Returning, repairing, and rewelding each of these fuel cells remotely was also a very conceptually challenging task: each fuel cell would require two welds.

Figure 13: Detail of the blanket fluid inlet to the two-fluid reactor vessel, also showing the fuel salt flow patterns inside the graphite channels.

The salt flow patterns in the entrance and exit plena were also never described in detail, but their simple depiction suggests that they may have concealed many challenges. Since the distance from the large entrance channel to the individual small entrance channels of each fuel cell is different, the pressure drop that could be expected would be highly variable, favoring the fuel cells nearer to the entrance and exit at the expense of those furthest away. It would be desirable to shape the flow through the plena in a way that creates a uniform pressure drop across each fuel cell, but there is no indication that this was attempted in any of the ORNL two-fluid designs.
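The maldistribution concern can be illustrated with a toy hydraulic model. Everything here is hypothetical — the lengths, the laminar-flow assumption, and the resistance model are ours, not ORNL's:

```python
# Toy illustration of flow maldistribution among parallel fuel cells
# fed from a common plenum. All numbers are hypothetical. Assume
# laminar flow, so each path's flow is inversely proportional to its
# hydraulic resistance, and take resistance proportional to total
# path length (plenum run to the cell, plus the cell itself).
CELL_LENGTH = 10.0                    # arbitrary units, same for every cell
plenum_runs = [1.0, 3.0, 5.0, 7.0]    # inlet-to-cell distance along plenum

resistances = [CELL_LENGTH + run for run in plenum_runs]
# Equal pressure drop across every parallel path => Q_i proportional to 1/R_i
raw = [1.0 / r for r in resistances]
total = sum(raw)
shares = [q / total for q in raw]

for run, share in zip(plenum_runs, shares):
    print(f"plenum run {run:>4}: {share:.1%} of total flow")
# The cell nearest the inlet receives the largest share, and the
# farthest cell the smallest - the imbalance described in the text.
```

Shaping the plenum (tapering its cross-section, or orificing each cell inlet) is the usual way to flatten this distribution; the ORNL reports do not indicate that either was attempted.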
The severe challenge of replacing defective fuel cells led ORNL designers to consider instead the replacement of the entire reactor core vessel, with all its graphite internal structures, rather than to face the challenge of individually replacing defective fuel cells. But this led to great uncertainty: what would the replacement frequency of defective fuel cells be? They had assessed the core replacement frequency due to graphite dimensional change under fast neutron flux and had not found it economically excessive, but the uncertainty associated with defective fuel cell replacement was deemed unacceptable. This was the central reason that consideration of one-fluid core designs began.

The importance of removing a defective reactor core influenced the design of the entire reactor module construction. A separate spent reactor vessel cell, labeled "Hot Storage", was included as a place for the spent reactor vessel to cool down after removal and before disassembly, shown in Figure 14. Considerations of disassembly were not included in the final report on the two-fluid design.

Figure 14: A plan view of the entire plant described in ORNL-4191.

Four of the two-fluid reactor designs are compared in Table 1. They show the evolution in core power density from one design to another. One can also observe the changes in transition zone concepts as that issue came into focus. Changes in breeding ratio also accompanied changes in core power density. Later improvements in chemical removal techniques for protactinium would help decouple those effects from one another.
Table 1: Comparison of Two-Fluid Breeder Reactor Designs

[latex mode=1]
\begin{table}[htp]
\footnotesize
\begin{tabular}{lp{2.5cm}p{2.5cm}p{2.5cm}p{2.5cm}}
\midrule
& ORNL-3708 & ORNL-3936 & ORNL-4119 & ORNL-4191\\
& July 1964 & February 1966 & February 1967 & August 1967\\
Materials\\
\hspace{0.25cm}Fuel salt & LiF-BeF$_2$-UF$_4$\\
\hspace{0.25cm}Composition, mole\% & 63-36.6-0.4 & 63.6-36.2-0.22 & 63.6-36.2-0.22 & 68.5-31.3-0.2\\
\hspace{0.25cm}Blanket salt & LiF-BeF$_2$-ThF$_4$\\
\hspace{0.25cm}Composition, mole\% & 67-18-15 & 71-2-27 & 71-2-27 & 71-2-27\\
\hspace{0.25cm}Coolant fluid & LiF-NaF-KF & NaBF$_4$-NaF\\
\hspace{0.25cm}Composition, mole\% & not specified & 38.9-61.1 & 38.9-61.1 & 92-8\\
\hspace{0.25cm}Moderator & graphite\\
\hspace{0.25cm}Structural alloy & Hastelloy-N\\
Reactor vessel\\
\hspace{0.25cm}Design thermal power, MW & 2400 & 2220 & 556 & 556\\
\hspace{0.25cm}Average power density, kW/L & 40 & 80 & 39 & 20\\
\hspace{0.25cm}Outer diameter, in & 240 & 120 & 144 & 168\\
\hspace{0.25cm}Vessel height, in & 300 & 150 & 204 & 234\\
\hspace{0.25cm}Prismatic arrangement & square & triangular & triangular & triangular\\
\hspace{0.25cm}Prismatic pitch, in & 8.0 & 4.8 & 5.0 & 5-9/16\\
\hspace{0.25cm}Number of core elements & 324 & 534 & 336 & 420\\
\hspace{0.25cm}Fuel cell X-S geometry & circular & hexagonal & circular & hexagonal\\
\hspace{0.25cm}Transition region & none & none & graphite spheres & trimmed prisms\\
\hspace{0.25cm}Reflector thickness, in & 12 & 3 & 6 & 6\\
Temperatures, °F\\
\hspace{0.25cm}Fuel inlet/outlet & 1125/1300 & 1000/1300 & 1000/1300 & 1000/1300\\
\hspace{0.25cm}Blanket inlet/outlet & not specified & 1150/1250 & 1150/1250 & 1150/1250\\
\hspace{0.25cm}Coolant inlet/outlet & 950/1100 & 850/1125 & 850/1125 & 850/1125\\
Processing\\
\hspace{0.25cm}Total fuel volume, ft$^3$ & not specified & 717 & 231.3 & 355\\
\hspace{0.25cm}Total blanket volume, ft$^3$ & not specified & 3383 & 1061 & 520\\
\hspace{0.25cm}Fuel processing rate, days & 10-60 & 47 & 58 & 60\\
\hspace{0.25cm}Blanket processing rate, days & 35-200 & 23 & 22 & 3\\
\hspace{0.25cm}Fissile inventory, kg & 880-1220 & 769 & 218 & 314\\
\hspace{0.25cm}Fertile inventory, kg & 270,000 & 260,000 & 43,000 & 54,000\\
\hspace{0.25cm}Specific power, MWt/kg & 2.72-1.97 & 2.89 & 2.55 & 1.8\\
\hspace{0.25cm}Specific inventory, kg/MWe & 0.88-1.22 & 0.77 & 0.87 & 1.26\\
\hspace{0.25cm}Breeding ratio & 1.08-1.03 & 1.049 & 1.07 & 1.06\\
Net thermal efficiency & 42-45\% & 44.9\% & 44.9\% & 44.9\%\\
Reactor electrical power, MW & 1000 & 1000 & 250 & 250\\
\end{tabular}
\end{table}
[/latex]

ORNL-3708 salt composition data from Table 1, pg 4. ORNL-3936 performance data largely from Table 6.1, pgs 180-181 and Table 6.7, pg 188, also from ORNL-3996, Table 3.1, pgs 37-38 and Table 3.1, pg 41. ORNL-4119 performance data largely from Table 9.1, pg 176. ORNL-4191 performance data largely from Table 5.2, pg 75. Some values for the ORNL-4191 design were taken from the later summary report ORNL-4528.

Thorium Energy by Kirk Sorensen
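The derived rows of Table 1 (specific power and specific inventory) follow directly from the stated thermal powers, electrical ratings, and fissile inventories. A quick cross-check, with values transcribed from the table:

```python
# Cross-check of the derived columns in Table 1. Values are transcribed
# from the table: specific power is thermal power per kg of fissile,
# specific inventory is fissile mass per electrical megawatt.
designs = {
    "ORNL-3936": {"mwt": 2220, "mwe": 1000, "fissile_kg": 769},
    "ORNL-4119": {"mwt": 556,  "mwe": 250,  "fissile_kg": 218},
    "ORNL-4191": {"mwt": 556,  "mwe": 250,  "fissile_kg": 314},
}

for name, d in designs.items():
    specific_power = d["mwt"] / d["fissile_kg"]        # MWt per kg fissile
    specific_inventory = d["fissile_kg"] / d["mwe"]    # kg fissile per MWe
    print(f"{name}: {specific_power:.2f} MWt/kg, "
          f"{specific_inventory:.2f} kg/MWe")
```

The computed values agree with the table to its stated rounding (e.g., 2220/769 = 2.89 MWt/kg and 769/1000 = 0.77 kg/MWe for ORNL-3936).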
Find the equation of a cot, sec and cosec curve

Level 8 - NCEA Level 3

We have previously looked at how to sketch the graphs of secant, cosecant and cotangent curves given their equations. We are now going to look at the reverse of this - how to recover the equation of a secant, cosecant or cotangent curve given its graph. To do so, we first want to identify which of the three functions is most appropriate for the given curve.

A graph of $y=\sec x$

A graph of $y=\csc x$

Notice that the graphs of $y=\sec x$ and $y=\csc x$ both have local minima and maxima, but no points of inflection. They differ by a phase shift (a horizontal translation): $y=\sec x$ has a local minimum on the $y$-axis, while $y=\csc x$ has an asymptote along the $y$-axis.
A graph of $y=\cot x$

On the other hand, the graph of $y=\cot x$ has points of inflection, but no local minima or maxima.

Once we have identified the most appropriate base function, we look at the features of the graph in order to determine the particular coefficients of the equation. These features include the location of the asymptotes, the points of inflection or local minima and maxima, and the median value of the function. Let's go through an example.

Consider the graph below.

The graph of an unknown function.

We can immediately see that this curve has the shape of a secant or cosecant function. In particular it has an asymptote along the $y$-axis, which matches the graph of $y=\csc x$. So we can write this function in the form $y=a\csc\left(b\left(x-c\right)\right)+d$.

To determine the values of the coefficients, recall that:

the value of $a$ represents a vertical dilation,
the value of $b$ represents a horizontal dilation, and is related to the period of the function,
the value of $c$ represents a phase shift (a horizontal translation), and
the value of $d$ represents a vertical translation.

Now, we chose to use $y=\csc x$ as our base function because the feature on the $y$-axis matches up (an asymptote in this case). This means that there is no phase shift, and so we have that $c=0$.

Also, we can see that the median value of this function is $y=0$ (halfway between the local minima and maxima), which also matches the median value of $y=\csc x$. This means that there is no vertical translation, and so we have that $d=0$.

Looking at the graph of $y=\csc x$, we can see that the local minima and maxima are $1$ unit above or below the median value respectively. On our graph, however, these points are $3$ units above or below the median value instead. So there is a vertical dilation by a factor of $\frac{3}{1}=3$, and so we have that $a=3$.
The graph has a median value of $y=0$ and a vertical dilation factor of $3$.

Finally, the period of our function is $\pi$, while the graph of $y=\csc x$ has a period of $2\pi$. From this we get the relation $\frac{2\pi}{b}=\text{Period}=\pi$, and so we have that $b=2$.

Putting this together, the equation of our graph is $y=3\csc\left(2x\right)$.

Note that we could have also expressed this function in the form $y=3\sec\left(2\left(x-\frac{\pi}{4}\right)\right)=3\sec\left(2x-\frac{\pi}{2}\right)$, using $y=\sec x$ as the base function. Observe that this form of the equation has a phase shift, since the graph did not line up with that of $y=\sec x$ on the $y$-axis.

To determine the equation of a function from its graph, first determine the most appropriate base type of function (that most closely resembles the graph). Then, look at the features of the graph and compare them to the base function to determine any:

Vertical dilation
Horizontal dilation (which is related to the period for trigonometric functions)
Vertical translation
Horizontal translation (called a phase shift for trigonometric functions)

Finally, write down the equation in the form $y=af\left(b\left(x-c\right)\right)+d$, where $f\left(x\right)$ is the base function.

Careful! Since trigonometric functions are periodic, we can express them in infinitely many equivalent ways by changing the phase shift by a multiple of the period. For example, we determined the equation of the graph above to be $y=3\csc\left(2x\right)$. We could also represent this same graph by the equation $y=3\csc\left(2\left(x-\pi\right)\right)$ or $y=3\csc\left(2\left(x+7\pi\right)\right)$ or with a phase shift of any other multiple of the period, $\pi$. It is typical to write the equation with a phase shift as close to zero as possible.

Consider the graph below.
What is the equation of the asymptote shown?

Which key feature occurs at the point where $x=0$?

A local minimum.
A local maximum.
An asymptote.
A point of inflection.

What is the period of this function?

Write down the equation of this function in the form $y=a\sec\left(bx\right)$, $y=a\csc\left(bx\right)$ or $y=a\cot\left(bx\right)$.

Which key feature occurs at the point where $x=\frac{\pi}{6}$?

What is the median value of this function?

Write down the equation of this function in the form $y=a\sec\left(b\left(x-c\right)\right)+d$, where $-\pi\le c\le\pi$.

Display and interpret the graphs of functions with the graphs of their inverse and/or reciprocal functions
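The recovery procedure described above reduces to a little arithmetic: read $d$ from the median value, $a$ from the distance to the nearest local extrema, and $b$ from the period. A short sketch using the worked example's values, which also confirms the sec/csc equivalence noted earlier:

```python
import math

def recover_params(branch_max, branch_min, period):
    """Recover a, b, d for y = a*csc(b*x) + d from graph features:
    the local extrema nearest the median, and the period.
    (c is 0 here because the graph's feature lines up on the y-axis.)
    """
    d = (branch_max + branch_min) / 2   # median value
    a = (branch_max - branch_min) / 2   # vertical dilation
    b = 2 * math.pi / period            # from Period = 2*pi/b
    return a, b, d

# The worked example: branch extrema at +/-3, period pi.
a, b, d = recover_params(3, -3, math.pi)
print(a, b, d)   # a = 3, b = 2, d = 0, i.e. y = 3*csc(2x)

# The same curve written with sec as the base function:
# 3*csc(2x) == 3*sec(2(x - pi/4)) wherever both are defined.
for x in (0.3, 1.1, 2.0):
    lhs = 3 / math.sin(2 * x)
    rhs = 3 / math.cos(2 * (x - math.pi / 4))
    assert math.isclose(lhs, rhs)
```

The equivalence holds because $\cos\left(2x-\frac{\pi}{2}\right)=\sin 2x$, which is exactly the phase-shift relationship between the two forms.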
Based on the idea and the provided source code of Andrej Karpathy (arxiv-sanity)

Observation of Anisotropy of TeV Cosmic Rays with Two Years of HAWC (1805.01847)

A.U. Abeysekara, R. Alfaro, C. Alvarez, J.D. Alvarez, R. Arceo, J.C. Arteaga-Velázquez, D. Avila Rojas, H.A. Ayala Solares, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, A. Bernal, J. Braun, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, C. De León, E. De la Fuente, R. Diaz Hernandez, S. Dichiara, B.L. Dingus, M.A. DuVernois, J.C. Díaz-Vélez, K. Engel, D.W. Fiorino, N. Fraija, J.A. García-González, F. Garfias, A. González Muñoz, M.M. González, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, S. Hernandez, B. Hona, F. Hueyotl-Zahuantitla, C.M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, A. Lara, R.J. Lauer, W.H. Lee, H. León Vargas, A.L. Longinotti, G. Luis-Raya, R. Luna-García, D. López-Cámara, R. López-Coto, K. Malone, S.S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, A. Nayerhoda, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E.G. Pérez-Pérez, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, M. Seglar Arroyo, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, G. Vianello, L. Villaseñor, T. Weisgarber, F. Werner, S. Westerhoff, J. Wood, T. Yapici, A. Zepeda, H. Zhou

May 7, 2018 astro-ph.HE

After two years of operation, the High-Altitude Water Cherenkov (HAWC) Observatory has analyzed the TeV cosmic-ray sky over an energy range between $2.0$ and $72.8$ TeV.
The HAWC detector is a ground-based air-shower array located at high altitude in the state of Puebla, Mexico. Using 300 light-tight water tanks, it collects the Cherenkov light from the particles of extensive air showers from primary gamma rays and cosmic rays. This detection method allows for uninterrupted observation of the entire overhead sky (2~sr instantaneous, 8.5~sr integrated) in the energy range from a few TeV to hundreds of TeV. Like other detectors in the northern and southern hemispheres, HAWC observes an energy-dependent anisotropy in the arrival direction distribution of cosmic rays. The observed cosmic-ray anisotropy is dominated by a dipole moment with phase $\alpha\approx40^{\circ}$ and amplitude that slowly rises in relative intensity from $8\times10^{-4}$ at 2 TeV to $14\times10^{-4}$ around 30.3 TeV, above which the dipole decreases in strength. A significant large-scale ($>60^{\circ}$ in angular extent) signal is also observed in the quadrupole and octupole moments, and significant small-scale features are also present, with locations and shapes consistent with previous observations. Compared to previous measurements in this energy range, the HAWC cosmic-ray sky maps improve on the energy resolution and fit precision of the anisotropy. These data can be used in an effort to better constrain local cosmic-ray accelerators and the intervening magnetic fields.

Constraining the $\bar{p}/p$ Ratio in TeV Cosmic Rays with Observations of the Moon Shadow by HAWC (1802.08913)

A.U. Abeysekara, A. Albert, R. Alfaro, C. Alvarez, R. Arceo, J.C. Arteaga-Velázquez, D. Avila Rojas, H.A. Ayala Solares, E. Belmont-Moreno, S.Y. BenZvi, J. Braun, C. Brisbois, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, C. De León, E. De la Fuente, R. Diaz Hernandez, S. Dichiara, B.L. Dingus, M.A. DuVernois, R.W. Ellsworth, K. Engel, O. Enríquez-Rivera, H. Fleischhack, N. Fraija, A. Galván-Gámez, J.A.
García-González, A. González Muñoz, M.M. González, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, S. Hernandez, B. Hona, F. Hueyotl-Zahuantitla, C.M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, A. Lara, W.H. Lee, H. León Vargas, J.T. Linnemann, A.L. Longinotti, G. Luis-Raya, R. Luna-García, R. López-Coto, K. Malone, S.S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, A. Nayerhoda, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E.G. Pérez-Pérez, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, M. Seglar Arroyo, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, L. Villaseñor, T. Weisgarber, S. Westerhoff, J. Wood, T. Yapici, G.B. Yodh, A. Zepeda, H. Zhou, J.D. Alvarez

April 22, 2018 astro-ph.IM, astro-ph.HE

An indirect measurement of the antiproton flux in cosmic rays is possible as the particles undergo deflection by the geomagnetic field. This effect can be measured by studying the deficit in the flux, or shadow, created by the Moon as it absorbs cosmic rays that are headed towards the Earth. The shadow is displaced from the actual position of the Moon due to geomagnetic deflection, which is a function of the energy and charge of the cosmic rays. The displacement provides a natural tool for momentum/charge discrimination that can be used to study the composition of cosmic rays. Using 33 months of data comprising more than 80 billion cosmic rays measured by the High Altitude Water Cherenkov (HAWC) observatory, we have analyzed the Moon shadow to search for TeV antiprotons in cosmic rays.
We present our first upper limits on the $\bar{p}/p$ fraction, which in the absence of any direct measurements, provide the tightest available constraints of $\sim1\%$ on the antiproton fraction for energies between 1 and 10 TeV. Science with e-ASTROGAM (A space mission for MeV-GeV gamma-ray astrophysics) (1711.01265) A. De Angelis, V. Tatischeff, I. A. Grenier, J. McEnery, M. Mallamaci, M. Tavani, U. Oberlack, L. Hanlon, R. Walter, A. Argan, P. Von Ballmoos, A. Bulgarelli, A. Bykov, M. Hernanz, G. Kanbach, I. Kuvvetli, M. Pearce, A. Zdziarski, J. Conrad, G. Ghisellini, A. Harding, J. Isern, M. Leising, F. Longo, G. Madejski, M. Martinez, M. N. Mazziotta, J. M. Paredes, M. Pohl, R. Rando, M. Razzano, A. Aboudan, M. Ackermann, A. Addazi, M. Ajello, C. Albertus, J. M. Alvarez, G. Ambrosi, S. Anton, L. A. Antonelli, A. Babic, B. Baibussinov, M. Balbo, L. Baldini, S. Balman, C. Bambi, U. Barres de Almeida, J. A. Barrio, R. Bartels, D. Bastieri, W. Bednarek, D. Bernard, E. Bernardini, T. Bernasconi, B. Bertucci, A. Biland, E. Bissaldi, M. Boettcher, V. Bonvicini, V. Bosch Ramon, E. Bottacini, V. Bozhilov, T. Bretz, M. Branchesi, V. Brdar, T. Bringmann, A. Brogna, C. Budtz Jorgensen, G. Busetto, S. Buson, M. Busso, A. Caccianiga, S. Camera, R. Campana, P. Caraveo, M. Cardillo, P. Carlson, S. Celestin, M. Cermeno, A. Chen, C. C Cheung, E. Churazov, S. Ciprini, A. Coc, S. Colafrancesco, A. Coleiro, W. Collmar, P. Coppi, R. Curado da Silva, S. Cutini, F. DAmmando, B. De Lotto, D. de Martino, A. De Rosa, M. Del Santo, L. Delgado, R. Diehl, S. Dietrich, A. D. Dolgov, A. Dominguez, D. Dominis Prester, I. Donnarumma, D. Dorner, M. Doro, M. Dutra, D. Elsaesser, M. Fabrizio, A. FernandezBarral, V. Fioretti, L. Foffano, V. Formato, N. Fornengo, L. Foschini, A. Franceschini, A. Franckowiak, S. Funk, F. Fuschino, D. Gaggero, G. Galanti, F. Gargano, D. Gasparrini, R. Gehrz, P. Giammaria, N. Giglietto, P. Giommi, F. Giordano, M. Giroletti, G. Ghirlanda, N. Godinovic, C. Gouiffes, J. 
E. Grove, C. Hamadache, D. H. Hartmann, M. Hayashida, A. Hryczuk, P. Jean, T. Johnson, J. Jose, S. Kaufmann, B. Khelifi, J. Kiener, J. Knodlseder, M. Kole, J. Kopp, V. Kozhuharov, C. Labanti, S. Lalkovski, P. Laurent, O. Limousin, M. Linares, E. Lindfors, M. Lindner, J. Liu, S. Lombardi, F. Loparco, R. LopezCoto, M. Lopez Moya, B. Lott, P. Lubrano, D. Malyshev, N. Mankuzhiyil, K. Mannheim, M. J. Marcha, A. Marciano, B. Marcote, M. Mariotti, M. Marisaldi, S. McBreen, S. Mereghetti, A. Merle, R. Mignani, G. Minervini, A. Moiseev, A. Morselli, F. Moura, K. Nakazawa, L. Nava, D. Nieto, M. Orienti, M. Orio, E. Orlando, P. Orleanski, S. Paiano, R. Paoletti, A. Papitto, M. Pasquato, B. Patricelli, M. A. PerezGarcia, M. Persic, G. Piano, A. Pichel, M. Pimenta, C. Pittori, T. Porter, J. Poutanen, E. Prandini, N. Prantzos, N. Produit, S. Profumo, F. S. Queiroz, S. Raino, A. Raklev, M. Regis, I. Reichardt, Y. Rephaeli, J. Rico, W. Rodejohann, G. Rodriguez Fernandez, M. Roncadelli, L. Roso, A. Rovero, R. Ruffini, G. Sala, M. A. SanchezConde, A. Santangelo, P. Saz Parkinson, T. Sbarrato, A. Shearer, R. Shellard, K. Short, T. Siegert, C. Siqueira, P. Spinelli, A. Stamerra, S. Starrfield, A. Strong, I. Strumke, F. Tavecchio, R. Taverna, T. Terzic, D. J. Thompson, O. Tibolla, D. F. Torres, R. Turolla, A. Ulyanov, A. Ursi, A. Vacchi, J. Van den Abeele, G. Vankova Kirilovai, C. Venter, F. Verrecchia, P. Vincent, X. Wang, C. Weniger, X. Wu, G. Zaharijas, L. Zampieri, S. Zane, S. Zimmer, A. Zoglauer, the eASTROGAM collaboration April 5, 2018 hep-ex, astro-ph.SR, astro-ph.IM, astro-ph.HE e-ASTROGAM (enhanced ASTROGAM) is a breakthrough Observatory space mission, with a detector composed by a Silicon tracker, a calorimeter, and an anticoincidence system, dedicated to the study of the non-thermal Universe in the photon energy range from 0.3 MeV to 3 GeV - the lower energy limit can be pushed to energies as low as 150 keV for the tracker, and to 30 keV for calorimetric detection. 
The mission is based on an advanced space-proven detector technology, with unprecedented sensitivity, angular and energy resolution, combined with polarimetric capability. Thanks to its performance in the MeV-GeV domain, substantially improving its predecessors, e-ASTROGAM will open a new window on the non-thermal Universe, making pioneering observations of the most powerful Galactic and extragalactic sources, elucidating the nature of their relativistic outflows and their effects on the surroundings. With a line sensitivity in the MeV energy range one to two orders of magnitude better than previous generation instruments, e-ASTROGAM will determine the origin of key isotopes fundamental for the understanding of supernova explosion and the chemical evolution of our Galaxy. The mission will provide unique data of significant interest to a broad astronomical community, complementary to powerful observatories such as LIGO-Virgo-GEO600-KAGRA, SKA, ALMA, E-ELT, TMT, LSST, JWST, Athena, CTA, IceCube, KM3NeT, and LISA. Search for Dark Matter Gamma-ray Emission from the Andromeda Galaxy with the High-Altitude Water Cherenkov Observatory (1804.00628) A. Albert, R. Alfaro, C. Alvarez, J.D. Alvarez, R. Arceo, J.C. Arteaga-Velázquez, D. Avila Rojas, H.A. Ayala Solares, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, A. Bernal, C. Brisbois, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, C. De León, S. Dichiara, B.L. Dingus, M.A. DuVernois, J.C. Díaz-Vélez, C. Eckner, K. Engel, O. Enríquez-Rivera, C. Espinoza, D.W. Fiorino, N. Fraija, A. Galván-Gámez, J.A. García-González, F. Garfias, A. González Muñoz, M.M. González, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, S. Hernandez, A. Hernandez-Almada, B. Hona, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, G.J. Kunde, D. Lennarz, H. León Vargas, J.T. Linnemann, A.L. Longinotti, G. Luis Raya, R. Luna-García, K. Malone, S.S. Marinelli, O. 
Martinez, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, A. Nayerhoda, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E.G. Pérez-Pérez, Z. Ren, C.D. Rho, C. Riviére, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, M. Seglar Arroyo, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T.N. Ukwatta, L. Villaseñor, T. Weisgarber, S. Westerhoff, J. Wood, T. Yapici, G. Zaharijas, A. Zepeda, H. Zhou April 3, 2018 astro-ph.HE The Andromeda Galaxy (M31) is a nearby ($\sim$780 kpc) galaxy similar to our own Milky Way. Observational evidence suggests that it resides in a large halo of dark matter (DM), making it a good target for DM searches. We present a search for gamma rays from M31 using 1017 days of data from the High Altitude Water Cherenkov (HAWC) Observatory. With its wide field of view and constant monitoring, HAWC is well-suited to search for DM in extended targets like M31. No DM annihilation or decay signal was detected for DM masses from 1 to 100 TeV in the $b\bar{b}$, $t\bar{t}$, $\tau^{+}\tau^{-}$, $\mu^{+}\mu^{-}$, and $W^{+}W^{-}$ channels. Therefore we present limits on those processes. Our limits nicely complement the existing body of DM limits from other targets and instruments. Specifically, the DM decay limits from our benchmark model are the most constraining for DM masses from 25 TeV to 100 TeV in the $b\bar{b}$, $t\bar{t}$, and $\mu^{+}\mu^{-}$ channels. In addition to DM-specific limits, we also calculate general gamma-ray flux limits for M31 in 5 energy bins from 1 TeV to 100 TeV. Cosmic Ray Origin: beyond the Standard Model(s). The case of Pulsar Wind Nebulae and Unidentified very high energy gamma-ray sources (1802.03764) O. Tibolla Feb. 11, 2018 astro-ph.HE The riddle of the origin of Cosmic Rays has been open for a century.
Recently we obtained experimental proof of hadronic acceleration in Supernova Remnants; however, new questions have arisen and no final answer has been provided so far. Gamma-ray observations above 100 MeV reveal the sites of cosmic-ray acceleration to energies where they are unaffected by solar modulation. In recent years knowledge in this field of research has increased widely; however, almost 50% of the TeV (> 10^12 eV) Galactic sources are still unidentified. At GeV (> 10^9 eV) energies, 67% of EGRET sources were unidentified, and the newer generation of gamma-ray satellites gives the same result: at low Galactic latitudes (b < 10 deg), 62% of the Fermi LAT detected sources have no formal counterpart. Hence, understanding the high-energy unidentified sources will be a crucial brick in solving the whole riddle of the origin of Cosmic Rays. Several examples will be shown, underlining the importance of the so-called "dark sources". Both theoretical aspects (with particular emphasis on the so-called Ancient Pulsar Wind Nebulae scenario) and their observational proofs will be discussed. A Search for Dark Matter in the Galactic Halo with HAWC (1710.10288) A. U. Abeysekara, A. M. Albert, R. Alfaro, C. Alvarez, J. D. Álvarez, R. Arceo, J. C. Arteaga-Velázquez, D. Avila Rojas, H. A. Ayala Solares, A. Becerril, E. Belmont-Moreno, S. Y. BenZvi, A. Bernal, C. Brisbois, K. S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, C. De León, E. De la Fuente, R. Diaz Hernandez, B. L. Dingus, M. A. DuVernois, J. C. Díaz-Vélez, K. Engel, O. Enríquez-Rivera, D. W. Fiorino, H. Fleischhack, N. Fraija, J. A. García-González, F. Garfias, A. González Muñoz, M. M. González, J. A. Goodman, Z. Hampel-Arias, J. P. Harding, S. Hernandez, A. Hernandez-Almada, F. Hueyotl-Zahuantitla, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, R. J. Lauer, W. H. Lee, D. Lennarz, H. León Vargas, J. T. Linnemann, A. L. Longinotti, G.
Luis-Raya, R. Luna-García, R. López-Coto, K. Malone, S. S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, J. A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M. U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E. G. Pérez-Pérez, Z. Ren, C. D. Rho, N. L. Rodd, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, B. R. Safdi, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, G. Sinnis, A. J. Smith, R. W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T. N. Ukwatta, G. Vianello, L. Villaseñor, T. Weisgarber, S. Westerhoff, I. G. Wisher, J. Wood, T. Yapici, G. B. Yodh, P. W. Younk, A. Zepeda, H. Zhou Nov. 3, 2017 hep-ph, astro-ph.CO, astro-ph.HE The High Altitude Water Cherenkov (HAWC) gamma-ray observatory is a wide field-of-view observatory sensitive to 500 GeV - 100 TeV gamma rays and cosmic rays. With its observations over 2/3 of the sky every day, the HAWC observatory is sensitive to a wide variety of astrophysical sources, including possible gamma rays from dark matter. Dark matter annihilation and decay in the Milky Way Galaxy should produce gamma-ray signals across many degrees on the sky. The HAWC instantaneous field-of-view of 2 sr enables observations of extended regions on the sky, such as those from dark matter in the Galactic halo. Here we show limits on the dark matter annihilation cross-section and decay lifetime from HAWC observations of the Galactic halo with 15 months of data. These are some of the most robust limits on TeV and PeV dark matter, largely insensitive to the dark matter morphology. These limits begin to constrain models in which PeV IceCube neutrinos are explained by dark matter which primarily decays into hadrons. All-particle cosmic ray energy spectrum measured by the HAWC experiment from 10 to 500 TeV (1710.00890) HAWC Collaboration: R. Alfaro, J.C. Arteaga-Velázquez, A.S. Barber, C. Brisbois, A. Carramiñana, S. Coutiño de León, R. Diaz Hernandez, J.C. 
Díaz-Vélez, D.W. Fiorino, J.A. García-González, Z. Hampel-Arias, F. Hueyotl-Zahuantitla, A. Iriarte, R.J. Lauer, A.L. Longinotti, D. López-Cámara, O. Martinez, H. Martínez-Huerta, E. Moreno, M.U. Nisa, E.G. Pérez-Pérez, D. Rosa-González, F. Salesa Greus, A.J. Smith, O. Tibolla, L. Villaseñor, T. Yapici Instituto de Física, Universidad Nacional Autónoma de México, Universidad Autónoma de Chiapas, Tuxtla Gutiérrez Chiapas, Mexico Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Mexico Department of Physics, Michigan Technological University, Houghton, MI, USA Department of Physics, Astronomy, University of Utah, Salt Lake City, UT, USA Department of Physics, Astronomy, University of Rochester, Rochester, NY, USA Instituto Nacional de Astrofísica, Óptica y Electrónica, Tonantzintla, Puebla, Mexico Instytut Fizyki Jadrowej im Henryka Niewodniczanskiego Polskiej Akademii Nauk, Krakow, Poland Max-Planck Institute for Nuclear Physics, Heidelberg, Germany Facultad de Ciencias Físico Matemáticas, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico Departamento de Física, Centro Universitario de Ciencias Exactas e Ingenierías, Universidad de Guadalajara, Guadalajara, Mexico Instituto de Astronomía, Universidad Nacional Autónoma de México, Mexico City, Mexico Physics Division, Los Alamos National Laboratory, Los Alamos, NM, USA Department of Physics, University of Wisconsin-Madison, Madison, WI, USA School of Physics, Astronomy, Computational Sciences, George Mason University, Fairfax, VA, USA Instituto de Geofísica, Universidad Nacional Autónoma de México, Mexico City, Mexico Department of Physics, University of Maryland, College Park, MD, USA NASA Marshall Space Flight Center, Astrophysics Office, Huntsville, AL, USA Department of Physics, Astronomy, University of New Mexico, Albuquerque, NM, USA School of Physics, Center for Relativistic Astrophysics - Georgia Institute of Technology, Atlanta, GA, USA Department of Physics, Astronomy, Michigan State
University, East Lansing, MI, USA Universidad Politecnica de Pachuca, Pachuca, Hidalgo, Mexico Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City, Mexico Cátedras Conacyt|Instituto de Astronomía, Universidad Nacional Autónoma de México, Mexico City, Mexico Department of Physics, Pennsylvania State University, University Park, PA, USA Physics Department, Centro de Investigacion y de Estudios Avanzados del IPN, Mexico City, Mexico Universidad Autónoma del Estado de Hidalgo, Pachuca, Mexico Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Mexico City, Mexico Santa Cruz Institute for Particle Physics, University of California, Santa Cruz, Santa Cruz, CA, USA Nov. 1, 2017 astro-ph.HE We report on the measurement of the all-particle cosmic ray energy spectrum with the High Altitude Water Cherenkov (HAWC) Observatory in the energy range 10 to 500 TeV. HAWC is a ground based air-shower array deployed on the slopes of Volcan Sierra Negra in the state of Puebla, Mexico, and is sensitive to gamma rays and cosmic rays at TeV energies. The data used in this work were taken from 234 days between June 2016 and February 2017. The primary cosmic-ray energy is determined with a maximum likelihood approach using the particle density as a function of distance to the shower core. Introducing quality cuts to isolate events with shower cores landing on the array, the reconstructed energy distribution is unfolded iteratively. The measured all-particle spectrum is consistent with a broken power law with an index of $-2.49\pm0.01$ prior to a break at $(45.7\pm0.1)$ TeV, followed by an index of $-2.71\pm0.01$. The spectrum also represents a single measurement that spans the energy range between direct detection and ground based experiments. As a verification of the detector response, the energy scale and angular resolution are validated by observation of the cosmic ray Moon shadow's dependence on energy.
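The broken power law quoted in the abstract above can be written as a small numeric sketch. This is illustrative only: the indices and break energy are the best-fit values from the text, while the normalization `F0` is an arbitrary assumption (the absolute flux normalization is not quoted here).

```python
# Sketch of the broken power-law shape reported above (assumption: F0 is an
# arbitrary normalization; indices and break energy are the quoted best fits).
def broken_power_law(E_TeV, F0=1.0, E_break=45.7, idx_low=-2.49, idx_high=-2.71):
    """Differential flux shape dN/dE, normalized to F0 at the break energy."""
    if E_TeV <= E_break:
        return F0 * (E_TeV / E_break) ** idx_low
    # Normalizing both branches at E_break keeps the spectrum continuous there.
    return F0 * (E_TeV / E_break) ** idx_high

print(broken_power_law(45.7))                            # 1.0 at the break
print(broken_power_law(10.0) > broken_power_law(100.0))  # True: falling spectrum
```

Above the break the spectrum steepens from index -2.49 to -2.71, so the flux falls off slightly faster than a single power law extrapolated from lower energies would predict.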
HAWC Contributions to the 35th International Cosmic Ray Conference (ICRC2017) (1708.02572) A. U. Abeysekara, A. Albert, R. Alfaro, C. Alvarez, J. D. Álvarez, R. Arceo, J. C. Arteaga-Velázquez, D. Avila Rojas, H. A. Ayala Solares, A. S. Barber, J. Becerra Gonzalez, A. Becerril, E. Belmont-Moreno, S. Y. BenZvi, D. Berley, A. Bernal, C. Brisbois, K. S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, E. De la Fuente, C. De León, T. DeYoung, R. Diaz Hernandez, L. Diaz-Cruz, J. C. Díaz-Vélez, S. Dichiara, B. L. Dingus, M. A. DuVernois, R. W. Ellsworth, K. Engel, O. Enriquez-Rivera, B. Fick, D. W. Fiorino, H. Fleischhack, J. L. Flores, N. Fraija, J. A. García-González, J. L. Garcia-Luna, G. Garcia-Torales, F. Garfias, M. Gerhardt, M. M. González, A. González Muñoz, J. A. Goodman, M. Gussert, Z. Hampel-Arias, J. P. Harding, S. Hernandez, A. Hernandez-Almada, J. Hinton, B. Hona, C. M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, D. Kieda, G. J. Kunde, A. Lara, R. J. Lauer, W. H. Lee, D. Lennarz, H. León Vargas, J. T. Linnemann, A. L. Longinotti, M. Longo Proper, R. López-Coto, G. Luis Raya, R. Luna-García, K. Malone, V. Marandon, S. S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J. A. J. Matthews, J. McEnery, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M. U. Nisa, R. Noriega-Papaqui, R. Pelayo, E. G. Pérez-Pérez, J. Pretz, Z. Ren, C. D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, J. Ryan, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. Sinnis, A. J. Smith, A. W. Smith, R. W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T. N. Ukwatta, G. Vianello, T. Weisgarber, S. Westerhoff, J. Wood, T. Yapici, G. B. Yodh, P. W. Younk, A. Zepeda, H. Zhou Aug. 
18, 2017 hep-ex, astro-ph.HE List of proceedings from the HAWC Collaboration presented at the 35th International Cosmic Ray Conference, 12 July - 20 July 2017, Bexco, Busan, Korea. Multiwavelength follow-up of a rare IceCube neutrino multiplet (1702.06131) M. G. Aartsen, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. Ahrens, I. Al Samarai, D. Altmann, K. Andeen, T. Anderson, I. Ansseau, G. Anton, M. Archinger, C. Argüelles, J. Auffenberg, S. Axani, X. Bai, S. W. Barwick, V. Baum, R. Bay, J. J. Beatty, J. Becker Tjus, K.-H. Becker, S. BenZvi, D. Berley, E. Bernardini, A. Bernhard, D. Z. Besson, G. Binder, D. Bindig, E. Blaufuss, S. Blot, C. Bohm, M. Börner, F. Bos, D. Bose, S. Böser, O. Botner, J. Braun, L. Brayeur, H.-P. Bretz, S. Bron, A. Burgman, T. Carver, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, G. H. Collin, J. M. Conrad, D. F. Cowen, R. Cross, M. Day, J. P. A. M. de André, C. De Clercq, E. del Pino Rosendo, H. Dembinski, S. De Ridder, P. Desiati, K. D. de Vries, G. de Wasseige, M. de With, T. DeYoung, V. di Lorenzo, H. Dujmovic, J. P. Dumm, M. Dunkman, B. Eberhardt, T. Ehrhardt, B. Eichmann, P. Eller, S. Euler, P. A. Evenson, S. Fahey, A. R. Fazely, J. Feintzeig, J. Felde, K. Filimonov, C. Finley, S. Flis, C.-C. Fösig, A. Franckowiak, E. Friedman, T. Fuchs, T. K. Gaisser, J. Gallagher, L. Gerhardt, K. Ghorbani, W. Giang, L. Gladstone, T. Glauch, T. Glüsenkamp, A. Goldschmidt, J. G. Gonzalez, D. Grant, Z. Griffith, C. Haack, A. Hallgren, F. Halzen, E. Hansen, T. Hansmann, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G. C. Hill, K. D. Hoffman, R. Hoffmann, K. Hoshina, F. Huang, M. Huber, K. Hultqvist, S. In, A. Ishihara, E. Jacobi, G. S. Japaridze, M. Jeong, K. Jero, B. J. P. Jones, W. Kang, A. Kappes, T. Karg, A. Karle, U. Katz, M. Kauer, A. Keivani, J. L. Kelley, A. Kheirandish, J. Kim, M. Kim, T. Kintscher, J. Kiryluk, T. Kittler, S. R. Klein, G. Kohnen, R. Koirala, H. 
Kolanoski, R. Konietz, L. Köpke, C. Kopper, S. Kopper, D. J. Koskinen, M. Kowalski, K. Krings, M. Kroll, G. Krückl, C. Krüger, J. Kunnen, S. Kunwar, N. Kurahashi, T. Kuwabara, A. Kyriacou, M. Labare, J. L. Lanfranchi, M. J. Larson, F. Lauber, M. Lesiak-Bzdak, M. Leuermann, L. Lu, J. Lünemann, J. Madsen, G. Maggi, K. B. M. Mahn, S. Mancina, M. Mandelartz, R. Maruyama, K. Mase, R. Maunu, F. McNally, K. Meagher, M. Medici, M. Meier, T. Menne, G. Merino, T. Meures, S. Miarecki, J. Micallef, G. Momenté, T. Montaruli, M. Moulai, R. Nahnhauer, U. Naumann, G. Neer, H. Niederhausen, S. C. Nowicki, D. R. Nygren, A. Obertacke Pollmann, A. Olivas, A. O'Murchadha, T. Palczewski, H. Pandya, D. V. Pankova, P. Peiffer, Ö. Penek, J. A. Pepper, C. Pérez de los Heros, D. Pieloth, E. Pinat, P. B. Price, G. T. Przybylski, M. Quinnan, C. Raab, L. Rädel, M. Rameez, K. Rawlins, R. Reimann, B. Relethford, M. Relich, E. Resconi, W. Rhode, M. Richman, B. Riedel, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, D. Rysewyk, L. Sabbatini, S. E. Sanchez Herrera, A. Sandrock, J. Sandroos, S. Sarkar, K. Satalecka, P. Schlunder, T. Schmidt, S. Schoenen, S. Schöneberg, L. Schumacher, D. Seckel, S. Seunarine, D. Soldin, M. Song, G. M. Spiczak, C. Spiering, J. Stachurska, T. Stanev, A. Stasik, J. Stettner, A. Steuer, T. Stezelberger, R. G. Stokstad, A. Stößl, R. Ström, N. L. Strotjohann, G. W. Sullivan, M. Sutherland, H. Taavola, I. Taboada, J. Tatar, F. Tenholt, S. Ter-Antonyan, A. Terliuk, G. Tešić, S. Tilav, P. A. Toale, M. N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, C. F. Tung, A. Turcati, E. Unger, M. Usner, J. Vandenbroucke, N. van Eijndhoven, S. Vanheule, M. van Rossem, J. van Santen, M. Vehring, M. Voge, E. Vogel, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, N. Wandkowsky, A. Waza, Ch. Weaver, M. J. Weiss, C. Wendt, S. Westerhoff, B. J. Whelan, S. Wickmann, K. Wiebe, C. H. Wiebusch, L. Wille, D. R. Williams, L. Wills, M. Wolf, T. R. Wood, E. Woolsey, K. Woschnagg, D. L. Xu, X. W. 
Xu, Y. Xu, J. P. Yanez, G. Yodh, S. Yoshida, M. Zoll, K. Z. Stanek, B. J. Shappee, C. S. Kochanek, T. W.-S. Holoien, J. L. Prieto, D. B. Fox, J. J. DeLaunay, C. F. Turley, S. D. Barthelmy, A. Y. Lien, P. Mészáros, K. Murase, D. Kocevski, R. Buehler, M. Giomi, J. L. Racusin, A. Albert, R. Alfaro, C. Alvarez, J. D. Álvarez, R. Arceo, J. C. Arteaga-Velázquez, H. A. Ayala Solares, A. S. Barber, N. Baustista-Elivar A. Becerril, E. Belmont-Moreno, A. Bernal, C. Brisbois, K. S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, S. Coutiño de León, E. de la Fuente, C. De León, R. Diaz Hernandez, J. C. Díaz-Vélez, B. L. Dingus, M. A. DuVernois, R. W. Ellsworth, K. Engel, D. W. Fiorino, N. Fraija, J. A. García-González, M. Gerhardt, A. González Muñoz, M. M. González, J. A. Goodman, Z. Hampel-Arias, J. P. Harding, S. Hernandez, C. M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, A. Lara, R. J. Lauer, W. H. Lee, D. Lennarz, H. León Vargas, J. T. Linnemann, G. Luis Raya, R. Luna-García, R. López-Coto, K. Malone, S. S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J. A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M. U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E. G. Pérez-Pérez, Z. Ren, C. D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. Sinnis, A. J. Smith, R. W. Springer, P. Surajbali, O. Tibolla, K. Tollefson, I. Torres, T. N. Ukwatta, L. Villaseñor, T. Weisgarber, I. G. Wisher, J. Wood, T. Yapici, A. Zepeda, H. Zhou, I. Arcavi, G. Hosseinzadeh, D. A. Howell, S. Valenti, C. McCully, V. M. Lipunov, E. S. Gorbovskoy, N. V. Tiurina, P. V. Balanutsa, A. S. Kuznetsov, V. G. Kornilov, V. Chazov, N. M. Budnev, O. A. Gress, K. I. Ivanov, A. G. Tlatov, R. Rebolo Lopez, M. Serra-Ricart, P. A. Evans, J. A. Kennea, N. Gehrels, J. P. Osborne, K. L. Page, A. U. Abeysekara, A. Archer, W. 
Benbow, R. Bird, T. Brantseg, V. Bugaev, J. V Cardenzana, M. P. Connolly, W. Cui, A. Falcone, Q. Feng, J. P. Finley, H. Fleischhack, L. Fortson, A. Furniss, S. Griffin, J. Grube, M. Hütten, O. Hervet, J. Holder, G. Hughes, T. B. Humensky, C. A. Johnson, P. Kaaret, P. Kar, N. Kelley-Hoskins, M. Kertzman, M. Krause, S. Kumar, M. J. Lang, T. T.Y. Lin, S. McArthur, P. Moriarty, R. Mukherjee, D. Nieto, R. A. Ong, A. N. Otte, M. Pohl, A. Popkow, E. Pueschel, J. Quinn, K. Ragan, P. T. Reynolds, G. T. Richards, E. Roache, C. Rulten, I. Sadeh, M. Santander, G. H. Sembroski, D. Staszak, S. Trépanier, J. Tyler, S. P. Wakely, A. Weinstein, P. Wilcox, A. Wilhelm, D. A. Williams, B. Zitzer, E. Bellm, Z. Cano, A. Gal-Yam, D. A. Kann, E. O. Ofek, M. Rigault, M. Soumagnac Aug. 11, 2017 astro-ph.HE On February 17 2016, the IceCube real-time neutrino search identified, for the first time, three muon neutrino candidates arriving within 100 s of one another, consistent with coming from the same point in the sky. Such a triplet is expected once every 13.7 years as a random coincidence of background events. However, considering the lifetime of the follow-up program the probability of detecting at least one triplet from atmospheric background is 32%. Follow-up observatories were notified in order to search for an electromagnetic counterpart. Observations were obtained by Swift's X-ray telescope, by ASAS-SN, LCO and MASTER at optical wavelengths, and by VERITAS in the very-high-energy gamma-ray regime. Moreover, the Swift BAT serendipitously observed the location 100 s after the first neutrino was detected, and data from the Fermi LAT and HAWC were analyzed. We present details of the neutrino triplet and the follow-up observations. No likely electromagnetic counterpart was detected, and we discuss the implications of these constraints on candidate neutrino sources such as gamma-ray bursts, core-collapse supernovae and active galactic nucleus flares. 
This study illustrates the potential of and challenges for future follow-up campaigns. Search for very-high-energy emission from Gamma-ray Bursts using the first 18 months of data from the HAWC Gamma-ray Observatory (1705.01551) The HAWC collaboration: R. Alfaro, C. Alvarez, J.D. Álvarez, R. Arceo, J.C. Arteaga-Velázquez, D. Avila Rojas, H.A. Ayala Solares, A.S. Barber, N. Bautista-Elivar, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, A. Bernal, J. Braun, C. Brisbois, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, E. de la Fuente, C. De León, T. DeYoung, R. Diaz Hernandez, B.L. Dingus, M.A. DuVernois, J.C. Díaz-Vélez, R.W. Ellsworth, K. Engel, D.W. Fiorino, N. Fraija, J.A. García-González, F. Garfias, M. Gerhardt, A. González Muñoz, M.M. González, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, A. Hernandez-Almada, S. Hernandez, B. Hona, C.M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, D. Kieda, R.J. Lauer, W. H. Lee, D. Lennarz, H. León Vargas, J.T. Linnemann, A.L. Longinotti, G. Luis Raya, R. Luna-García, K. Malone, S.S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, R. Noriega-Papaqui, R. Pelayo, E.G. Pérez-Pérez, J. Pretz, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T.N. Ukwatta, G. Vianello, T. Weisgarber, S. Westerhoff, J. Wood, T. Yapici, P.W. Younk, A. Zepeda, H. Zhou Aug. 4, 2017 astro-ph.HE The High Altitude Water Cherenkov (HAWC) Gamma-ray Observatory is an extensive air shower detector operating in central Mexico, which has recently completed its first two years of full operations. 
For a burst like GRB 130427A, at a redshift of 0.34 and with a high-energy component following a power law with index -1.66, if that component is extended to higher energies with no cut-off other than from extragalactic background light attenuation, HAWC would observe gamma rays with a peak energy of $\sim$300 GeV. This paper reports the results of HAWC observations of 64 gamma-ray bursts (GRBs) detected by $\mathit{Swift}$ and $\mathit{Fermi}$, including three GRBs that were also detected by the Large Area Telescope ($\mathit{Fermi}$-LAT). An ON/OFF analysis method is employed, searching on the time scale given by the observed light curve at keV-MeV energies and also on extended time scales. For all GRBs and time scales, no statistically significant excess of counts is found and upper limits on the number of gamma rays and the gamma-ray flux are calculated. GRB 170206A, the third brightest short GRB detected by the Gamma-ray Burst Monitor on board the $\mathit{Fermi}$ satellite ($\mathit{Fermi}$-GBM) and also detected by the LAT, occurred very close to zenith. The LAT measurements can neither exclude the presence of a synchrotron self-Compton (SSC) component nor constrain its spectrum. Instead, the HAWC upper limits constrain the expected cut-off in an additional high-energy component to be less than $100~\rm{GeV}$ for reasonable assumptions about the energetics and redshift of the burst. Dark Matter Limits From Dwarf Spheroidal Galaxies with The HAWC Gamma-Ray Observatory (1706.01277) A. Albert, R. Alfaro, C. Alvarez, J.D. Álvarez, R. Arceo, J.C. Arteaga-Velázquez, D. Avila Rojas, H.A. Ayala Solares, N. Bautista-Elivar, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, A. Bernal, J. Braun, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, M. Castillo, U. Cotti, C. De León, E. De la Fuente, R. Diaz Hernandez, B.L. Dingus, M.A. DuVernois, J.C. Díaz-Vélez, R.W. Ellsworth, D.W. Fiorino, N. Fraija, J.A. García-González, F. Garfias, M.M. González, J.A. Goodman, J.P.
Harding, S. Hernandez, A. Hernandez-Almada, A. Iriarte, V. Joshi, S. Kaufmann, D. Kieda, R.J. Lauer, D. Lennarz, H. León Vargas, J.T. Linnemann, A.L. Longinotti, G. Luis Raya, R. López-Coto, K. Malone, S.S. Marinelli, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E.G. Pérez-Pérez, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, E. Ruiz-Velasco, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T. Weisgarber, T. Yapici, H. Zhou June 5, 2017 hep-ph, astro-ph.HE The High Altitude Water Cherenkov (HAWC) gamma-ray observatory is a wide field of view observatory sensitive to 500 GeV - 100 TeV gamma rays and cosmic rays. It can also perform diverse indirect searches for dark matter (DM) annihilation and decay. Among the most promising targets for the indirect detection of dark matter are dwarf spheroidal galaxies. These objects are expected to have few astrophysical sources of gamma rays but high dark matter content, making them ideal candidates for an indirect dark matter detection with gamma rays. Here we present individual limits on the annihilation cross section and decay lifetime for 15 dwarf spheroidal galaxies within the HAWC field-of-view, as well as their combined limit. These are the first limits on the annihilation cross section and decay lifetime using data collected with HAWC. The HAWC real-time flare monitor for rapid detection of transient events (1704.07411) A.U. Abeysekara, R. Alfaro, C. Alvarez, J.D. Álvarez, R. Arceo, J.C. Arteaga-Velázquez, D. Avila Rojas, H.A. Ayala Solares, A.S. Barber, N. Bautista-Elivar, J. Becerra Gonzalez, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, A. Bernal, J. Braun, C. Brisbois, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. 
Cotzomi, S. Coutiño de León, E. De la Fuente, C. De León, J.C. Díaz-Vélez, B.L. Dingus, M.A. DuVernois, R.W. Ellsworth, K. Engel, D.W. Fiorino, N. Fraija, J.A. García-González, F. Garfias, M. Gerhardt, M.M. González, A. González Muñoz, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, S. Hernandez, A. Hernandez-Almada, B. Hona, C.M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, D. Kieda, R.J. Lauer, W.H. Lee, D. Lennarz, H. León Vargas, J.T. Linnemann, A.L. Longinotti, D. López-Cámara, R. López-Coto, G. Luis Raya, R. Luna-García, K. Malone, S.S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, R. Pelayo, E.G. Pérez-Pérez, J. Pretz, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T.N. Ukwatta, G. Vianello, T. Weisgarber, S. Westerhoff, I.G. Wisher, J. Wood, T. Yapici, P.W. Younk, A. Zepeda, H. Zhou June 1, 2017 astro-ph.HE We present the development of a real-time flare monitor for the High Altitude Water Cherenkov (HAWC) observatory. The flare monitor has been fully operational since 2017 January and is designed to detect very high energy (VHE; $E\gtrsim100$ GeV) transient events from blazars on time scales lasting from 2 minutes to 10 hours in order to facilitate multiwavelength and multimessenger studies. These flares provide information for investigations into the mechanisms that power the blazars' relativistic jets and accelerate particles within them, and they may also serve as probes of the populations of particles and fields in intergalactic space. 
To date, the detection of blazar flares in the VHE range has relied primarily on pointed observations by imaging atmospheric Cherenkov telescopes. The recently completed HAWC observatory offers the opportunity to study VHE flares in survey mode, scanning 2/3 of the entire sky every day with a field of view of $\sim$1.8 steradians. In this work, we report on the sensitivity of the HAWC real-time flare monitor and demonstrate its capabilities via the detection of three high-confidence VHE events in the blazars Markarian 421 and Markarian 501. Search for Very High Energy Gamma Rays from the Northern $\textit{Fermi}$ Bubble Region with HAWC (1703.01344) A.U. Abeysekara, A. Albert, R. Alfaro, C. Alvarez, J.D. Álvarez, R. Arceo, J.C. Arteaga-Velázquez, H.A. Ayala Solares, A.S. Barber, N. Bautista-Elivar, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, D. Berley, J. Braun, C. Brisbois, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, C. De León, E. De la Fuente, R. Diaz Hernandez, B.L. Dingus, M.A. DuVernois, J.C. Díaz-Vélez, R.W. Ellsworth, K. Engel, B. Fick, D.W. Fiorino, H.Fleischhack, N. Fraija, J.A. García-González, F. Garfias, M. Gerhardt, A. González Muñoz, M.M. González, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, S. Hernandez, A. Hernandez-Almada, J. Hinton, B. Hona, C.M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, D.Kieda, A. Lara, R.J. Lauer, W.H. Lee, D. Lennarz, H. León Vargas, J.T. Linnemann, A.L. Longinotti, G. Luis Raya, R. Luna-García, R. López-Coto, K. Malone, S.S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E.G. Pérez-Pérez, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. 
Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T.N. Ukwatta, G. Vianello, T. Weisgarber, S. Westerhoff, I.G. Wisher, J. Wood, T. Yapici, G.B. Yodh, A. Zepeda, H. Zhou May 24, 2017 astro-ph.HE We present a search for very high energy gamma-ray emission from the Northern $\textit{Fermi}$ Bubble region using data collected with the High Altitude Water Cherenkov (HAWC) gamma-ray observatory. The data set comprises 290 days of observations. No significant excess is observed in the Northern $\textit{Fermi}$ Bubble region, hence upper limits above $1\,\text{TeV}$ are calculated. The upper limits are between $3\times 10^{-7}\,\text{GeV}\, \text{cm}^{-2}\, \text{s}^{-1}\,\text{sr}^{-1}$ and $4\times 10^{-8}\,\text{GeV}\,\text{cm}^{-2}\,\text{s}^{-1}\,\text{sr}^{-1}$. The upper limits disfavor a proton injection spectrum that extends beyond $100\,\text{TeV}$ without being suppressed. They also disfavor a hadronic injection spectrum derived from neutrino measurements. Daily monitoring of TeV gamma-ray emission from Mrk 421, Mrk 501, and the Crab Nebula with HAWC (1703.06968) A.U. Abeysekara, A. Albert, R. Alfaro, C. Alvarez, J.D. Álvarez, R. Arceo, J.C. Arteaga-Velázquez, D. Avila Rojas, H.A. Ayala Solares, A.S. Barber, N. Bautista-Elivar, J. Becerra Gonzalez, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, A. Bernal, J. Braun, C. Brisbois, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, C. De León, E. De la Fuente, R. Diaz Hernandez, B.L. Dingus, M.A. DuVernois, J.C. Díaz-Vélez, R.W. Ellsworth, K. Engel, D.W. Fiorino, N. Fraija, J.A. García-González, F. Garfias, M. Gerhardt, A. González Muñoz, M.M. González, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, S. Hernandez, A. Hernandez-Almada, B. Hona, C.M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, D. Kieda, A. Lara, R.J. Lauer, W.H. Lee, D. Lennarz, H. León Vargas, J.T. Linnemann, A.L.
Longinotti, G. Luis Raya, R. Luna-García, R. López-Coto, K. Malone, S.S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, J. Pretz, E.G. Pérez-Pérez, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T.N. Ukwatta, G. Vianello, T. Weisgarber, S. Westerhoff, I.G. Wisher, J. Wood, T. Yapici, P.W. Younk, A. Zepeda, H. Zhou We present results from daily monitoring of gamma rays in the energy range $\sim0.5$ to $\sim100$ TeV with the first 17 months of data from the High Altitude Water Cherenkov (HAWC) Observatory. Its wide field of view of 2 steradians and duty cycle of $>95$% are unique features compared to other TeV observatories that allow us to observe every source that transits over HAWC for up to $\sim6$ hours each sidereal day. This regular sampling yields unprecedented light curves from unbiased measurements that are independent of seasons or weather conditions. For the Crab Nebula, our reference source, we find no variability in the TeV band. Our main focus is the study of the TeV blazars Markarian (Mrk) 421 and Mrk 501. A spectral fit for Mrk 421 yields a power-law index $\Gamma=2.21 \pm0.14_{\mathrm{stat}}\pm0.20_{\mathrm{sys}}$ and an exponential cut-off $E_0=5.4 \pm 1.1_{\mathrm{stat}}\pm 1.0_{\mathrm{sys}}$ TeV. For Mrk 501, we find an index $\Gamma=1.60\pm 0.30_{\mathrm{stat}} \pm 0.20_{\mathrm{sys}}$ and exponential cut-off $E_0=5.7\pm 1.6_{\mathrm{stat}} \pm 1.0_{\mathrm{sys}}$ TeV. The light curves for both sources show clear variability, and a Bayesian analysis is applied to identify changes between flux states. The highest per-transit fluxes observed from Mrk 421 exceed the Crab Nebula flux by a factor of approximately five.
For Mrk 501, several transits show fluxes in excess of three times the Crab Nebula flux. In a comparison to lower energy gamma-ray and X-ray monitoring data with comparable sampling, we cannot identify clear counterparts for the most significant flaring features observed by HAWC. Properties of flat-spectrum radio-loud Narrow-Line Seyfert 1 Galaxies (1409.3716) L. Foschini, M. Berton, A. Caccianiga, S. Ciroi, V. Cracco, B. M. Peterson, E. Angelakis, V. Braito, L. Fuhrmann, L. Gallo, D. Grupe, E. Järvelä, S. Kaufmann, S. Komossa, Y. Y. Kovalev, A. Lähteenmäki, M. M. Lisakov, M. L. Lister, S. Mathur, J. L. Richards, P. Romano, A. Sievers, G. Tagliaferri, J. Tammi, O. Tibolla, M. Tornikoski, S. Vercellone, G. La Mura, L. Maraschi, P. Rafanelli We have conducted a multiwavelength survey of 42 radio-loud narrow-line Seyfert 1 galaxies (RLNLS1s), selected by searching among all the known sources of this type and omitting those with steep radio spectra. We analyse data from radio frequencies to X-rays, and supplement these with information available from online catalogs and the literature in order to cover the full electromagnetic spectrum. This is the largest known multiwavelength survey for this type of source. We detected 90% of the sources in X-rays and found 17% at gamma rays. Extreme variability at high energies was also found, down to timescales as short as hours. In some sources, dramatic spectral and flux changes suggest interplay between a relativistic jet and the accretion disk. The estimated masses of the central black holes are in the range $\sim 10^{6-8}M_{\odot}$, smaller than those of blazars, while the accretion luminosities span a range from $\sim 0.01$ to $\sim 0.49$ times the Eddington limit, similar to those of quasars. The distribution of the calculated jet power spans a range from $\sim 10^{42.6}$ to $\sim 10^{45.6}$ erg s$^{-1}$, generally lower than quasars and BL Lac objects, but partially overlapping with the latter.
Once normalised by the mass of the central black holes, the jet powers of the three types of active galactic nuclei are consistent with one another, indicating the scalability of the jet. Despite the observational differences, the central engine of RLNLS1s is apparently quite similar to that of blazars. The historical difficulties in finding radio-loud narrow-line Seyfert 1 galaxies might be due to their low power and to intermittent jet activity. The 2HWC HAWC Observatory Gamma Ray Catalog (1702.02992) A.U. Abeysekara, A. Albert, R. Alfaro, C. Alvarez, J.D. Álvarez, R. Arceo, J.C. Arteaga-Velázquez, H.A. Ayala Solares, A.S. Barber, N. Bautista-Elivar, J. Becerra Gonzalez, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, D. Berley, A. Bernal, J. Braun, C. Brisbois, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, E. de la Fuente, C. De León, R. Diaz Hernandez, B.L. Dingus, M.A. DuVernois, J.C. Díaz-Vélez, R.W. Ellsworth, K. Engel, D.W. Fiorino, N. Fraija, J.A. García-González, F. Garfias, M. Gerhardt, A. González Muñoz, M.M. González, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, S. Hernandez, A. Hernandez-Almada, J. Hinton, C.M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, D. Kieda, A. Lara, R.J. Lauer, W.H. Lee, D. Lennarz, H. León Vargas, J.T. Linnemann, A.L. Longinotti, G. Luis Raya, R. Luna-García, R. López-Coto, K. Malone, S.S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E.G. Pérez-Pérez, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T.N. Ukwatta, G. Vianello, L. Villaseñor, T.
Weisgarber, S. Westerhoff, I.G. Wisher, J. Wood, T. Yapici, P.W. Younk, A. Zepeda, H. Zhou Feb. 9, 2017 astro-ph.HE We present the first catalog of TeV gamma-ray sources realized with the recently completed High Altitude Water Cherenkov Observatory (HAWC). It is the most sensitive wide field-of-view TeV telescope currently in operation, with a 1-year survey sensitivity of ~5-10% of the flux of the Crab Nebula. With an instantaneous field of view >1.5 sr and >90% duty cycle, it continuously surveys and monitors the sky for gamma-ray energies between hundreds of GeV and tens of TeV. HAWC is located in Mexico at a latitude of 19 degrees North and was completed in March 2015. Here, we present the 2HWC catalog, which is the result of the first source search realized with the complete HAWC detector. It was realized with 507 days of data and represents the most sensitive TeV survey to date for such a large fraction of the sky. A total of 39 sources were detected, with an expected contamination of 0.5 due to background fluctuations. Out of these sources, 16 are more than one degree away from any previously reported TeV source. The source list, including the position measurement, spectrum measurement, and uncertainties, is reported. Seven of the detected sources may be associated with pulsar wind nebulae, two with supernova remnants, two with blazars, and the remaining 23 have no firm identification yet. Observation of the Crab Nebula with the HAWC Gamma-Ray Observatory (1701.01778) A.U. Abeysekara, A. Albert, R. Alfaro, C. Alvarez, J.D. Álvarez, R. Arceo, J.C. Arteaga-Velázquez, H.A. Ayala Solares, A.S. Barber, N. Bautista-Elivar, A. Becerril, E. Belmont-Moreno, S.Y. BenZvi, D. Berley, J. Braun, C. Brisbois, K.S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, E. de la Fuente, C. De León, T. DeYoung, B.L. Dingus, M.A. DuVernois, J.C. Díaz-Vélez, R.W. Ellsworth, D.W. Fiorino, N. Fraija, J.A. García-González, M.
Gerhardt, A. González Muñoz, M.M. González, J.A. Goodman, Z. Hampel-Arias, J.P. Harding, S. Hernandez, A. Hernandez-Almada, J. Hinton, C.M. Hui, P. Hüntemeyer, A. Iriarte, A. Jardin-Blicq, V. Joshi, S. Kaufmann, D. Kieda, A. Lara, R.J. Lauer, W.H. Lee, D. Lennarz, H. León Vargas, J.T. Linnemann, A.L. Longinotti, G. Luis Raya, R. Luna-García, R. López-Coto, K. Malone, S.S. Marinelli, O. Martinez, I. Martinez-Castellanos, J. Martínez-Castro, H. Martínez-Huerta, J.A. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M.U. Nisa, R. Noriega-Papaqui, R. Pelayo, J. Pretz, E.G. Pérez-Pérez, Z. Ren, C.D. Rho, C. Rivière, D. Rosa-González, M. Rosenberg, E. Ruiz-Velasco, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, H. Schoorlemmer, G. Sinnis, A.J. Smith, R.W. Springer, P. Surajbali, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T.N. Ukwatta, L. Villaseñor, T. Weisgarber, S. Westerhoff, I.G. Wisher, J. Wood, T. Yapici, G.B. Yodh, P.W. Younk, A. Zepeda, H. Zhou Jan. 6, 2017 astro-ph.HE The Crab Nebula is the brightest TeV gamma-ray source in the sky and has been used for the past 25 years as a reference source in TeV astronomy, for calibration and verification of new TeV instruments. The High Altitude Water Cherenkov Observatory (HAWC), completed in early 2015, has been used to observe the Crab Nebula at high significance across nearly the full spectrum of energies to which HAWC is sensitive. HAWC is unique for its wide field-of-view, nearly 2 sr at any instant, and its high-energy reach, up to 100 TeV. HAWC's sensitivity improves with the gamma-ray energy. Above $\sim$1 TeV the sensitivity is driven by the best background rejection and angular resolution ever achieved for a wide-field ground array. We present a time-integrated analysis of the Crab using 507 live days of HAWC data from 2014 November to 2016 June.
The spectrum of the Crab is fit to a function of the form $\phi(E) = \phi_0 (E/E_{0})^{-\alpha -\beta\cdot{\rm{ln}}(E/E_{0})}$. The data are well fit with values of $\alpha=2.63\pm0.03$, $\beta=0.15\pm0.03$, and log$_{10}(\phi_0~{\rm{cm}^2}~{\rm{s}}~{\rm{TeV}})=-12.60\pm0.02$ when $E_{0}$ is fixed at 7 TeV and the fit applies between 1 and 37 TeV. The systematic errors in this HAWC measurement are discussed and estimated to be $\pm$50\% in the photon flux between 1 and 37 TeV. Confirmation of the Crab flux serves to establish the HAWC instrument's sensitivity for surveys of the sky. The HAWC survey will exceed the sensitivity of current-generation observatories and open a new view of 2/3 of the sky above 10 TeV. Prototype muon detectors for the AMIGA component of the Pierre Auger Observatory (1605.01625) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, A.M. Botti, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J.
de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, A. Fuster, F. Gallo, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. 
Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, M.A. Muller, G. Müller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. 
Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello May 12, 2016 hep-ex, physics.ins-det AMIGA (Auger Muons and Infill for the Ground Array) is an upgrade of the Pierre Auger Observatory to extend its range of detection and to directly measure the muon content of the particle showers. It consists of an infill of surface water-Cherenkov detectors accompanied by buried scintillator detectors used for muon counting. The main objectives of the AMIGA engineering array, referred to as the Unitary Cell, are to identify and resolve all engineering issues as well as to understand the muon-number counting uncertainties related to the design of the detector. The mechanical design, fabrication and deployment processes of the muon counters of the Unitary Cell are described in this document. These muon counter modules comprise sealed PVC casings containing plastic scintillation bars, wavelength-shifter optical fibers, 64-pixel photomultiplier tubes, and acquisition electronics. The modules are buried approximately 2.25 m below ground level in order to minimize contamination from electromagnetic shower particles. The mechanical setup, which allows access to the electronics for maintenance, is also described in addition to tests of the modules' response and integrity.
The completed Unitary Cell has measured a number of air showers, and a first analysis of a sample event is included here. Broad-band properties of flat-spectrum radio-loud Narrow-Line Seyfert 1 galaxies (1602.08227) Feb. 26, 2016 astro-ph.GA, astro-ph.HE We report on recent updates of the broad-band properties of radio-loud narrow-line Seyfert 1 galaxies. Multiwavelength survey of a sample of flat-spectrum radio-loud narrow-line Seyfert 1 galaxies (1512.00192) Dec. 1, 2015 astro-ph.GA, astro-ph.HE We report on a multiwavelength survey of a sample of 42 flat-spectrum radio-loud narrow-line Seyfert 1 galaxies (RLNLS1s). This is the largest known sample of this type of active galactic nucleus (AGN) to date. We found that 17% of sources were detected at high-energy gamma rays (E>100 MeV), and 90% at X-rays (0.3-10 keV). The masses of the central black holes are in the range $\sim 10^{6-8}M_{\odot}$, smaller than the values of blazars. The disk luminosities are about 1-49% of the Eddington value, with one outlier at 0.3%, comparable with the luminosities observed in flat-spectrum radio quasars (FSRQs). The jet powers are $\sim 10^{42-46}$ erg s$^{-1}$, comparable with BL Lac Objects, yet relatively smaller than FSRQs. However, once renormalized by the mass of the central black hole, the jet powers of RLNLS1s, BL Lacs, and FSRQs are consistent with each other, indicating the scalability of the jets. We found episodes of extreme variability at high energies on time scales of hours. In some cases, dramatic spectral and flux changes are interpreted as the interplay between the relativistic jet and the accretion disk. We conclude that, despite the distinct observational properties, the central engines of RLNLS1s are similar to those of blazars. Pierre Auger Observatory and Telescope Array: Joint Contributions to the 34th International Cosmic Ray Conference (ICRC 2015) (1511.02103) Telescope Array Collaboration: R.U. Abbasi, M. Abe, T. Abu-Zayyad, M. Allen, R. Azuma, E.
Barcikowski, J.W. Belz, D.R. Bergman, S.A. Blake, R. Cady, M.J. Chae, B.G. Cheon, J. Chiba, M. Chikawa, W.R. Cho, T. Fujii, M. Fukushima, T. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. Hibino, K. Honda, D. Ikeda, N. Inoue, T. Ishii, R. Ishimori, H. Ito, D. Ivanov, C.C.H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H.B. Kim, J.H. Kim, J.H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y.J. Kwon, J. Lan, S.I. Lim, J.P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J.N. Matthews, M. Minamino, Y. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. Nakamura, T. Nonaka, A. Nozato, S. Ogio, J. Ogura, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I.H. Park, M.S. Pshirkov, D.C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, L.M. Scott, P.D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B.K. Shin, H.S. Shin, J.D. Smith, P. Sokolsky, R.W. Springer, B.T. Stokes, S.R. Stratton, T.A. Stroman, T. Suzawa, M. Takamura, M. Takeda, R. Takeishi, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S.B. Thomas, G.B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, T. Wong, R. Yamane, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel, Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. 
Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. 
Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pȩkala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. 
Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, C. Welling, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello Joint contributions of the Pierre Auger Collaboration and the Telescope Array Collaboration to the 34th International Cosmic Ray Conference, 30 July - 6 August 2015, The Hague, The Netherlands. The IceCube Neutrino Observatory, the Pierre Auger Observatory and the Telescope Array: Joint Contribution to the 34th International Cosmic Ray Conference (ICRC 2015) (1511.02109) IceCube Collaboration: M.G. Aartsen, K. Abraham, M. Ackermann, J. Adams, J.A. Aguilar, M. Ahlers, M. Ahrens, D. Altmann, T. Anderson, I. Ansseau, M. Archinger, C. Arguelles, T.C. Arlen, J. Auffenberg, X. Bai, S.W. Barwick, V. Baum, R. Bay, J.J. Beatty, J. Becker Tjus, K.-H. Becker, E. Beiser, S. BenZvi, P. Berghaus, D. Berley, E. Bernardini, A. Bernhard, D.Z. Besson, G. Binder, D. Bindig, M. Bissok, E. Blaufuss, J. 
Blumenthal, D.J. Boersma, C. Bohm, M. Börner, F. Bos, D. Bose, S. Böser, O. Botner, J. Braun, L. Brayeur, H.-P. Bretz, N. Buzinsky, J. Casey, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, D.F. Cowen, A.H. Cruz Silva, J. Daughhetee, J.C. Davis, M. Day, J.P.A.M. de André, C. De Clercq, E. del Pino Rosendo, H. Dembinski, S. De Ridder, P. Desiati, K.D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J.C. Díaz-Vélez, V. di Lorenzo, J.P. Dumm, M. Dunkman, R. Eagan, B. Eberhardt, T. Ehrhardt, B. Eichmann, S. Euler, P.A. Evenson, O. Fadiran, S. Fahey, A.R. Fazely, A. Fedynitch, J. Feintzeig, J. Felde, K. Filimonov, C. Finley, T. Fischer-Wasels, S. Flis, C.-C. Fösig, T. Fuchs, T.K. Gaisser, R. Gaior, J. Gallagher, L. Gerhardt, K. Ghorbani, D. Gier, L. Gladstone, M. Glagla, T. Glüsenkamp, A. Goldschmidt, G. Golup, J.G. Gonzalez, D. Góra, D. Grant, J.C. Groh, A. Groß, C. Ha, C. Haack, A. Haj Ismail, A. Hallgren, F. Halzen, B. Hansmann, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, D. Hellwig, S. Hickford, J. Hignight, G.C. Hill, K.D. Hoffman, R. Hoffmann, K. Holzapfel, A. Homeier, K. Hoshina, F. Huang, M. Huber, W. Huelsnitz, P.O. Hulth, K. Hultqvist, S. In, A. Ishihara, E. Jacobi, G.S. Japaridze, K. Jero, M. Jurkovic, B. Kaminsky, A. Kappes, T. Karg, A. Karle, M. Kauer, A. Keivani, J.L. Kelley, J. Kemp, A. Kheirandish, J. Kiryluk, J. Kläs, S.R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, R. Konietz, A. Koob, L. Köpke, C. Kopper, S. Kopper, D.J. Koskinen, M. Kowalski, K. Krings, G. Kroll, M. Kroll, J. Kunnen, N. Kurahashi, T. Kuwabara, M. Labare, J.L. Lanfranchi, M.J. Larson, M. Lesiak-Bzdak, M. Leuermann, J. Leuner, L. Lu, J. Lünemann, J. Madsen, G. Maggi, K.B.M. Mahn, R. Maruyama, K. Mase, H.S. Matis, R. Maunu, F. McNally, K. Meagher, M. Medici, A. Meli, T. Menne, G. Merino, T. Meures, S. Miarecki, E. Middell, E. Middlemas, L. Mohrmann, T. Montaruli, R. Morse, R. Nahnhauer, U. Naumann, G. Neer, H. Niederhausen, S.C. 
Nowicki, D.R. Nygren, A. Obertacke, A. Olivas, A. Omairat, A. O'Murchadha, T. Palczewski, H. Pandya, L. Paul, J.A. Pepper, C. Pérez de los Heros, C. Pfendner, D. Pieloth, E. Pinat, J. Posselt, P.B. Price, G.T. Przybylski, J. Pütz, M. Quinnan, C. Raab, L. Rädel, M. Rameez, K. Rawlins, R. Reimann, M. Relich, E. Resconi, W. Rhode, M. Richman, S. Richter, B. Riedel, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, S.M. Saba, L. Sabbatini, H.-G. Sander, A. Sandrock, J. Sandroos, S. Sarkar, K. Schatto, F. Scheriau, M. Schimp, T. Schmidt, M. Schmitz, S. Schoenen, S. Schöneberg, A. Schönwald, L. Schulte, D. Seckel, S. Seunarine, R. Shanidze, M.W.E. Smith, D. Soldin, M. Song, G.M. Spiczak, C. Spiering, M. Stahlberg, M. Stamatikos, T. Stanev, N.A. Stanisha, A. Stasik, T. Stezelberger, R.G. Stokstad, A. Stößl, R. Ström, N.L. Strotjohann, G. W. Sullivan, M. Sutherland, H. Taavola, I. Taboada, S. Ter-Antonyan, A. Terliuk, G. Tešić, S. Tilav, P.A. Toale, M.N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, A. Turcati, E. Unger, M. Usner, S. Vallecorsa, J. Vandenbroucke, N. van Eijndhoven, S. Vanheule, J. van Santen, J. Veenkamp, M. Vehring, M. Voge, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, N. Wandkowsky, Ch. Weaver, C. Wendt, S. Westerhoff, B.J. Whelan, N. Whitehorn, C. Wichary, K. Wiebe, C.H. Wiebusch, L. Wille, D.R. Williams, H. Wissing, M. Wolf, T.R. Wood, K. Woschnagg, D.L. Xu, X.W. Xu, Y. Xu, J.P. Yanez, G. Yodh, S. Yoshida, M. Zoll, Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. 
Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hervé, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. 
Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pȩkala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. 
Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, C. Welling, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello, Telescope Array Collaboration: R.U. Abbasi, M. Abe, T. Abu-Zayyad, M. Allen, R. Azuma, E. Barcikowski, J.W. Belz, D.R. Bergman, S.A. Blake, R. Cady, M.J. Chae, B.G. Cheon, J. Chiba, M. Chikawa, W.R. Cho, T. Fujii, M. Fukushima, T. Goto, W. Hanlon, Y. Hayashi, N. Hayashida, K. Hibino, K. Honda, D. Ikeda, N. Inoue, T. Ishii, R. Ishimori, H. Ito, D. Ivanov, C.C.H. Jui, K. Kadota, F. Kakimoto, O. Kalashev, K. Kasahara, H. Kawai, S. Kawakami, S. Kawana, K. Kawata, E. Kido, H.B. Kim, J.H. Kim, J.H. Kim, S. Kitamura, Y. Kitamura, V. Kuzmin, Y.J. Kwon, J. Lan, S.I. Lim, J.P. Lundquist, K. Machida, K. Martens, T. Matsuda, T. Matsuyama, J.N. Matthews, M. Minamino, Y. Mukai, I. Myers, K. Nagasawa, S. Nagataki, T. 
Nakamura, T. Nonaka, A. Nozato, S. Ogio, J. Ogura, M. Ohnishi, H. Ohoka, K. Oki, T. Okuda, M. Ono, A. Oshima, S. Ozawa, I.H. Park, M.S. Pshirkov, D.C. Rodriguez, G. Rubtsov, D. Ryu, H. Sagawa, N. Sakurai, L.M. Scott, P.D. Shah, F. Shibata, T. Shibata, H. Shimodaira, B.K. Shin, H.S. Shin, J.D. Smith, P. Sokolsky, R.W. Springer, B.T. Stokes, S.R. Stratton, T.A. Stroman, T. Suzawa, M. Takamura, M. Takeda, R. Takeishi, A. Taketa, M. Takita, Y. Tameda, H. Tanaka, K. Tanaka, M. Tanaka, S.B. Thomas, G.B. Thomson, P. Tinyakov, I. Tkachev, H. Tokuno, T. Tomida, S. Troitsky, Y. Tsunesada, K. Tsutsumi, Y. Uchihori, S. Udo, F. Urban, G. Vasiloff, T. Wong, R. Yamane, H. Yamaoka, K. Yamazaki, J. Yang, K. Yashiro, Y. Yoneda, S. Yoshida, H. Yoshii, R. Zollinger, Z. Zundel Nov. 6, 2015 hep-ex, astro-ph.IM, astro-ph.HE We have conducted three searches for correlations between ultra-high energy cosmic rays detected by the Telescope Array and the Pierre Auger Observatory, and high-energy neutrino candidate events from IceCube. Two cross-correlation analyses with UHECRs are done: one with 39 cascades from the IceCube `high-energy starting events' sample and the other with 16 high-energy `track events'. The angular separation between the arrival directions of neutrinos and UHECRs is scanned over. The same events are also used in a separate search using a maximum likelihood approach, after the neutrino arrival directions are stacked. To estimate the significance we assume UHECR magnetic deflections to be inversely proportional to their energy, with values $3^\circ$, $6^\circ$ and $9^\circ$ at 100 EeV to allow for the uncertainties on the magnetic field strength and UHECR charge. A similar analysis is performed on stacked UHECR arrival directions and the IceCube sample of through-going muon track events which were optimized for neutrino point-source searches. HAWC Collaboration: A. U. Abeysekara, R. Alfaro, C. Alvarez, J. D. Álvarez, R. Arceo, J. C. Arteaga-Velázquez, H. A. 
Ayala Solares, A. S. Barber, B. M. Baughman, N. Bautista-Elivar, J. Becerra Gonzalez, A. Becerril, E. Belmont, S. Y. BenZvi, D. Berley, A. Bernal, J. Braun, K. S. Caballero-Mora, T. Capistrán, A. Carramiñana, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, E. de la Fuente, C. De León, T. DeYoung, R. Diaz Hernandez, L. Diaz-Cruz, J. C. Díaz-Vélez, B. L. Dingus, M. A. DuVernois, R. W. Ellsworth, K. Engel, O. Enriquez-Rivera, B. Fick, D. W. Fiorino, J. L. Flores, N. Fraija, G. Garcia-Torales, F. Garfias, M. M. González, J. A. Goodman, M. Gussert, Z. Hampel-Arias, P. Hansen, J. Patrick Harding, S. Hernandez, C. M. Hui, P. Hüntemeyer, A. Imran, A. Iriarte, P. Karn, D. Kieda, G. J. Kunde, A. Lara, R. J. Lauer, W. H. Lee, D. Lennarz, H. León Vargas, J. T. Linnemann, M. Longo Proper, G. Luis Raya, R. Luna-García, K. Malone, A. Marinelli, S. S. Marinelli, H. Martinez, O. Martinez, J. Martínez-Castro, J. A. J. Matthews, J. McEnery, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, M. Un Nisa, R. Noriega-Papaqui, T. Oceguera-Becerra, B. Patricelli, R. Pelayo, E. G. Pérez-Pérez, J. Pretz, Z. Ren, C. D. Rho, C. Rivière, D. Rosa-González, J. Ryan, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, G. Sinnis, A. J. Smith, A. W. Smith, K. Sparks Woodle, R. W. Springer, I. Taboada, O. Tibolla, P. A. Toale, K. Tollefson, I. Torres, T. N. Ukwatta, L. Villaseñor, T. Weisgarber, S. Westerhoff, I. G. Wisher, J. Wood, T. Yapici, G. B. Yodh, P. W. Younk, A. Zepeda, H. Zhou Oct. 8, 2015 hep-ex, astro-ph.HE List of proceedings from the HAWC Collaboration presented at the 34th International Cosmic Ray Conference, 30 July - 6 August 2015, The Hague, The Netherlands. Search for TeV Gamma-Ray Emission from Point-like Sources in the Inner Galactic Plane with a Partial Configuration of the HAWC Observatory (1509.05401) A. U. Abeysekara, R. Alfaro, C. Alvarez, J. D. Álvarez, R. Arceo, J. C. Arteaga-Velázquez, H. A. Ayala Solares, A. S. Barber, B. M. Baughman, N. 
Bautista-Elivar, A.D. Becerril Reyes, E. Belmont, S. Y. BenZvi, A. Bernal, J. Braun, K. S. Caballero-Mora, T. Capistrán, A. Carramiñana, S. Casanova, M. Castillo, U. Cotti, J. Cotzomi, S. Coutiño de León, E. de la Fuente, C. De León, T. DeYoung, B. L. Dingus, M. A. DuVernois, R. W. Ellsworth, O. Enriquez-Rivera, D. W. Fiorino, N. Fraija, F. Garfias, M. M. González, J. A. Goodman, M. Gussert, Z. Hampel-Arias, J. P. Harding, S. Hernandez, P. Hüntemeyer, C. M. Hui, A. Imran, A. Iriarte, P. Karn, D. Kieda, A. Lara, R. J. Lauer, W. H. Lee, D. Lennarz, H. León Vargas, J. T. Linnemann, M. Longo, G. Luis Raya, K. Malone, A. Marinelli, S. S. Marinelli, H. Martinez, O. Martinez, J. Martínez-Castro, J. A. J. Matthews, P. Miranda-Romagnoli, E. Moreno, M. Mostafá, L. Nellen, M. Newbold, R. Noriega-Papaqui, B. Patricelli, R. Pelayo, E. G. Pérez-Pérez, J. Pretz, Z. Ren, C. Rivière, D. Rosa-González, H. Salazar, F. Salesa Greus, A. Sandoval, M. Schneider, G. Sinnis, A. J. Smith, K. Sparks Woodle, R. W. Springer, I. Taboada, O. Tibolla, K. Tollefson, I. Torres, T. N. Ukwatta, L. Villaseñor, K. Vrabel, T. Weisgarber, S. Westerhoff, I. G. Wisher, J. Wood, T. Yapici, G. B. Yodh, P. W. Younk, D. Zaborov, A. Zepeda, H. Zhou Sept. 17, 2015 astro-ph.HE A survey of the inner Galaxy region of Galactic longitude l in [+15, +50] degree and latitude b in [-4,+4] degree is performed using one-third of the High Altitude Water Cherenkov (HAWC) Observatory operated during its construction phase. To address the ambiguities arising from unresolved sources in the data, we use a maximum likelihood technique to identify point source candidates. Ten sources and candidate sources are identified in this analysis. Eight of these are associated with known TeV sources but not all have differential fluxes compatible with previous measurements. Three sources are detected with significances $>5\,\sigma$ after accounting for statistical trials, and are associated with known TeV sources. 
The Pierre Auger Observatory: Contributions to the 34th International Cosmic Ray Conference (ICRC 2015) (1509.03732) The Pierre Auger Collaboration: A. Aab, P. Abreu, M. Aglietta, E.J. Ahn, I. Al Samarai, I.F.M. Albuquerque, I. Allekotte, P. Allison, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, R. Alves Batista, M. Ambrosio, A. Aminaei, G.A. Anastasi, L. Anchordoqui, S. Andringa, C. Aramo, F. Arqueros, N. Arsene, H. Asorey, P. Assis, J. Aublin, G. Avila, N. Awal, A.M. Badescu, C. Baus, J.J. Beatty, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, P. Billoir, S.G. Blaess, A. Blanco, M. Blanco, J. Blazek, C. Bleve, H. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, N. Borodai, J. Brack, I. Brancus, T. Bretz, A. Bridgeman, P. Brogueira, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, B. Caccianiga, L. Caccianiga, M. Candusso, L. Caramete, R. Caruso, A. Castellina, G. Cataldi, L. Cazon, R. Cester, A.G. Chavez, A. Chiavassa, J.A. Chinellato, J. Chudoba, M. Cilmo, R.W. Clay, G. Cocciolo, R. Colalillo, A. Coleman, L. Collica, M.R. Coluccia, R. Conceição, F. Contreras, M.J. Cooper, A. Cordier, S. Coutu, C.E. Covault, J. Cronin, R. Dallier, B. Daniel, S. Dasso, K. Daumiller, B.R. Dawson, R.M. de Almeida, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, V. de Souza, L. del Peral, O. Deligny, N. Dhital, C. Di Giulio, A. Di Matteo, J.C. Diaz, M.L. Díaz Castro, F. Diogo, C. Dobrigkeit, W. Docters, J.C. D'Olivo, A. Dorofeev, Q. Dorosti Hasankiadeh, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, M. Erdmann, M. Erfani, C.O. Escobar, J. Espadanal, A. Etchegoyen, H. Falcke, K. Fang, G. Farrar, A.C. Fauth, N. Fazzini, A.P. Ferguson, B. Fick, J.M. Figueira, A. Filevich, A. Filipčič, O. Fratu, M.M. Freire, T. Fujii, B. García, D. García-Gámez, D. Garcia-Pinto, F. Gate, H. Gemmeke, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, D. Głas, C. Glaser, H. Glass, G. Golup, M. 
Gómez Berisso, P.F. Gómez Vitale, N. González, B. Gookin, J. Gordon, A. Gorgi, P. Gorham, P. Gouffon, N. Griffith, A.F. Grillo, T.D. Grubb, F. Guarino, G.P. Guedes, M.R. Hampel, P. Hansen, D. Harari, T.A. Harrison, S. Hartmann, J.L. Harton, A. Haungs, T. Hebbeker, D. Heck, P. Heimann, A.E. Hérve, G.C. Hill, C. Hojvat, N. Hollon, E. Holt, P. Homola, J.R. Hörandel, P. Horvath, M. Hrabovský, D. Huber, T. Huege, A. Insolia, P.G. Isar, I. Jandt, S. Jansen, C. Jarne, J.A. Johnsen, M. Josebachuili, A. Kääpä, O. Kambeitz, K.H. Kampert, P. Kasper, I. Katkov, B. Keilhauer, E. Kemp, R.M. Kieckhafer, H.O. Klages, M. Kleifges, J. Kleinfeller, R. Krause, N. Krohm, D. Kuempel, G. Kukec Mezek, N. Kunka, A.W. Kuotb Awad, D. LaHurd, L. Latronico, R. Lauer, M. Lauscher, P. Lautridou, S. Le Coz, D. Lebrun, P. Lebrun, M.A. Leigui de Oliveira, A. Letessier-Selvon, I. Lhenry-Yvon, K. Link, L. Lopes, R. López, A. López Casado, K. Louedec, A. Lucero, M. Malacari, M. Mallamaci, J. Maller, D. Mandat, P. Mantsch, A.G. Mariazzi, V. Marin, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, D. Martraire, J.J. Masías Meza, H.J. Mathes, S. Mathys, J. Matthews, J.A.J. Matthews, G. Matthiae, D. Maurizio, E. Mayotte, P.O. Mazur, C. Medina, G. Medina-Tanco, R. Meissner, V.B.B. Mello, D. Melo, A. Menshikov, S. Messina, M.I. Micheletti, L. Middendorf, I.A. Minaya, L. Miramonti, B. Mitrica, L. Molina-Bueno, S. Mollerach, F. Montanet, C. Morello, M. Mostafá, C.A. Moura, G. Müller, M.A. Muller, S. Müller, S. Navas, P. Necesal, L. Nellen, A. Nelles, J. Neuser, P.H. Nguyen, M. Niculescu-Oglinzanu, M. Niechciol, L. Niemietz, T. Niggemann, D. Nitz, D. Nosek, V. Novotny, L. Nožka, L.A. Núñez, L. Ochilo, F. Oikonomou, A. Olinto, N. Pacheco, D. Pakk Selmi-Dei, M. Palatka, J. Pallotta, P. Papenbreer, G. Parente, A. Parra, T. Paul, M. Pech, J. Pękala, R. Pelayo, I.M. Pepe, L. Perrone, E. Petermann, C. Peters, S. Petrera, Y. Petrov, J. Phuntsok, R. Piegaia, T. Pierog, P. Pieroni, M. Pimenta, V. 
Pirronello, M. Platino, M. Plum, A. Porcelli, C. Porowski, R.R. Prado, P. Privitera, M. Prouza, E.J. Quel, S. Querchfeld, S. Quinn, J. Rautenberg, O. Ravel, D. Ravignani, D. Reinert, B. Revenu, J. Ridky, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.D. Rodríguez-Frías, D. Rogozin, J. Rosado, M. Roth, E. Roulet, A.C. Rovero, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, A. Saleh, F. Salesa Greus, G. Salina, J.D. Sanabria Gomez, F. Sánchez, P. Sanchez-Lucas, E.M. Santos, E. Santos, F. Sarazin, B. Sarkar, R. Sarmento, C. Sarmiento-Cano, R. Sato, C. Scarso, M. Schauer, V. Scherini, H. Schieler, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F.G. Schröder, A. Schulz, J. Schulz, J. Schumacher, S.J. Sciutto, A. Segreto, M. Settimo, A. Shadkam, R.C. Shellard, G. Sigl, O. Sima, A. Śmiałkowski, R. Šmída, G.R. Snow, P. Sommers, S. Sonntag, J. Sorokin, R. Squartini, Y.N. Srivastava, D. Stanca, S. Stanič, J. Stapleton, J. Stasielak, M. Stephan, A. Stutz, F. Suarez, M. Suarez Durán, T. Suomijärvi, A.D. Supanitsky, M.S. Sutherland, J. Swain, Z. Szadkowski, O.A. Taborda, A. Tapia, A. Tepe, V.M. Theodoro, O. Tibolla, C. Timmermans, C.J. Todero Peixoto, G. Toma, L. Tomankova, B. Tomé, A. Tonachini, G. Torralba Elipe, D. Torres Machado, P. Travnicek, M. Trini, R. Ulrich, M. Unger, M. Urban, J.F. Valdés Galicia, I. Valiño, L. Valore, G. van Aar, P. van Bodegom, A.M. van den Berg, S. van Velzen, A. van Vliet, E. Varela, B. Vargas Cárdenas, G. Varner, R. Vasquez, J.R. Vázquez, R.A. Vázquez, D. Veberič, V. Verzi, J. Vicha, M. Videla, L. Villaseñor, B. Vlcek, S. Vorobiov, H. Wahlberg, O. Wainberg, D. Walz, A.A. Watson, M. Weber, K. Weidenhaupt, A. Weindl, C. Welling, F. Werner, A. Widom, L. Wiencke, H. Wilczyński, T. Winchen, D. Wittkowski, B. Wundheiler, S. Wykes, L. Yang, T. Yapici, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, A. Zepeda, B. Zimmermann, M. Ziolkowski, F. Zuccarello Sept. 
12, 2015 astro-ph.IM, astro-ph.HE Contributions of the Pierre Auger Collaboration to the 34th International Cosmic Ray Conference, 30 July - 6 August 2015, The Hague, The Netherlands
arXiv:2209.14119v1 (math.RA — Rings and Algebras)
[Submitted on 28 Sep 2022 (this version), latest version 13 Jan 2023 (v5)]
Title: A new proof of the Pythagorean Theorem inspired by novel characterizations of unital algebras
Authors: Fred Greensite
Abstract: A new proof of the Pythagorean Theorem is presented, based on George Birkhoff's version of the postulates of Euclidean geometry as incorporating $\mathbb{R}$. The proof is inspired by a novel characterization of unital associative algebras having $\mathbb{R}^n$ as the vector space of elements.
Comments: This is one of the four papers into which arXiv:2207.11358 has now been split
Subjects: Rings and Algebras (math.RA)
Cite as: arXiv:2209.14119 [math.RA] (or arXiv:2209.14119v1 [math.RA] for this version)
From: Fred Greensite
Version history: [v1] Wed, 28 Sep 2022 14:15:35 UTC (12 KB); [v2] Tue, 18 Oct 2022 15:20:17 UTC (12 KB); [v3] Fri, 2 Dec 2022 19:22:14 UTC (20 KB); [v4] Tue, 10 Jan 2023 16:46:51 UTC (20 KB); [v5] Fri, 13 Jan 2023 17:44:17 UTC (20 KB)
Estimates on trajectories in a closed set with corners for $(t,x)$ dependent data
Piernicola Bettiol (Laboratoire de Mathematiques, Université de Bretagne Occidentale, 6 Avenue Victor Le Gorgeu, 29200 Brest, France) and Richard Vinter (Department of Electrical and Electronic Engineering, Imperial College London, SW7 2BT)
Mathematical Control & Related Fields, September 2013, 3(3): 245-267. doi: 10.3934/mcrf.2013.3.245
Received November 2012; Revised February 2013; Published September 2013
Estimates on the distance of a given process from the set of processes that satisfy a specified state constraint, in terms of the state constraint violation, are important analytical tools in state constrained optimal control theory; they have been employed to ensure the validity of the Maximum Principle in normal form, to establish regularity properties of the value function, to justify interpreting the value function as a unique solution of the Hamilton-Jacobi equation, and for other purposes. A range of estimates is required, which differ according to the metrics used to measure the `distance' and the modulus $\theta(h)$ of state constraint violation $h$ in terms of which the estimates are expressed. Recent research has shown that simple linear estimates are valid when the state constraint set $A$ has smooth boundary, but do not generalize to a setting in which the boundary of $A$ has corners. Indeed, for a velocity set $F$ which does not depend on $(t,x)$ and for state constraints taking the form of the intersection of two closed half-spaces (the simplest case of a boundary with corners), the best distance estimate we can hope for, involving the $W^{1,1}$ metric on state trajectories, is a super-linear estimate expressed in terms of the $h|\log(h)|$ modulus.
But distance estimates involving the $h|\log (h)|$ modulus are not in general valid when the velocity set $F(.,x)$ is required merely to be continuous, while not even distance estimates involving the weaker, Hölder modulus $h^{\alpha}$ (with $\alpha$ arbitrarily small) are in general valid when $F(.,x)$ is allowed to be discontinuous. This paper concerns the validity of distance estimates when the velocity set $F(t,x)$ is $(t,x)$-dependent and satisfies standard hypotheses (linear growth, Lipschitz $x$-dependence and an inward pointing condition). Hypotheses are identified for the validity of distance estimates, involving both the $h|\log(h)|$ and linear moduli, within the framework of control systems described by a controlled differential equation and state constraint sets having a functional inequality representation.
Keywords: Control systems, optimal control, state constraints.
Mathematics Subject Classification: Primary: 93C10, 93C15; Secondary: 34H05, 49K1.
Citation: Piernicola Bettiol, Richard Vinter. Estimates on trajectories in a closed set with corners for $(t,x)$ dependent data. Mathematical Control & Related Fields, 2013, 3 (3) : 245-267. doi: 10.3934/mcrf.2013.3.245
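As a quick numerical illustration of the ordering of the moduli discussed in the abstract — the linear modulus $h$, the super-linear modulus $h|\log(h)|$, and the Hölder modulus $h^{\alpha}$ — the following sketch (my illustration only, not part of the paper) evaluates all three for small constraint violations $h$:

```python
import math

def linear(h):
    # linear modulus: theta(h) = h
    return h

def superlinear(h):
    # super-linear modulus: theta(h) = h * |log h|
    return h * abs(math.log(h))

def hoelder(h, alpha=0.5):
    # Hoelder modulus: theta(h) = h^alpha, with 0 < alpha < 1
    return h ** alpha

# For small h (h < 1/e), the three moduli are strictly ordered:
#   h  <  h*|log h|  <  h^alpha,
# so an h|log h| estimate is weaker than a linear one, but still
# stronger than any Hoelder estimate with alpha < 1.
for h in (1e-2, 1e-4, 1e-8):
    assert linear(h) < superlinear(h) < hoelder(h)
    print(f"h={h:.0e}:  h={linear(h):.2e}  h|log h|={superlinear(h):.2e}  h^0.5={hoelder(h):.2e}")
```

This makes concrete why a $h|\log(h)|$ estimate is called "super-linear": it vanishes as $h \to 0$, but more slowly than $h$ itself.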
Relation of cubical structures in M-theory, in Heterotic string theory (and maybe in F-theory)?

It is "well known" that on the one hand there is a "cubical line bundle" governing the fine structure of the Chern-Simons term in 11-dimensional supergravity/M-theory; on the other hand just such "cubical lines" on elliptic curves induce the elliptic cohomology refinement of the partition function of the heterotic string. I provide a review of that with pointers to the literature below, to be self-contained. First though my question: it is natural to speculate that these two "cubical structures" are in fact "the same", or at least closely related. In fact it seems to me that standard F-theory lore gives a way to relate them in some detail (this, too, I spell out below). But I am not really sure yet about the full story. My question is: has this or anything like this been considered/worked out anywhere? Here now more details on, and pointers to, what I have in mind:

Cubical Structure in M-Theory

It is well known that when the higher Chern-Simons term in 11-dimensional supergravity is compactified on a 4-sphere to yield the 7-dimensional Chern-Simons theory which inside AdS7/CFT6 is dual to the M5-brane 6d (2,0)-superconformal QFT, the cup product square in ordinary differential cohomology that enters its definition is to receive a quadratic refinement.
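To fix terminology (a standard definition, added here for self-containedness, in the conventions of Hopkins-Singer): a quadratic refinement of a symmetric pairing $b(-,-)$ — here the cup-product pairing entering the 7d Chern-Simons term — is a function $q$ satisfying

```latex
% Quadratic refinement q of a symmetric pairing b(-,-),
% here the transgressed cup-product pairing:
\[
  q(x + y) \;-\; q(x) \;-\; q(y) \;+\; q(0)
  \;=\;
  b(x,y),
  \qquad
  b(x,y) \;=\; \int \hat x \cup \hat y .
\]
```

so that $q$ is "quadratic" in the same sense that a quadratic form refines its associated bilinear form.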
This was originally argued in (Witten 97) and then formalized and proven in (Hopkins-Singer 02).

What though is the situation up in 11 dimensions, before compactifying to 7 dimensions? In (DFM 03, section 9) it is claimed that the full 11-dimensional Chern-Simons term evaluated on the supergravity C-field (with its flux quantization correction, see there) indeed carries a cubic refinement. More precisely, and slightly paraphrasing: the transgression \(\int_X \mathrm{CS}_{11}(\hat C)\) of the 11-dimensional Chern-Simons term of 11d SuGra to 10d spacetime X, which is a complex line bundle on the moduli space CField(X) of supergravity C-fields \(\hat C\), is claimed to be such that its "cubical line" \(\Theta^3\left(\int_X \mathrm{CS}_{11}(\hat C)\right)\) (in the notation at cubical structure on a line bundle) is the line bundle on the space of triples of C-field configurations given by the transgression of the three-fold cup product in ordinary differential cohomology:

\(\Theta^3\left(\int_X \mathrm{CS}_{11}(\hat -)\right) \simeq \int_X (\hat -)_1 \cup (\hat -)_2 \cup (\hat-)_3\)

In the context of "F-theory compactifications" of M-theory, one considers C-fields on an elliptic fibration which are "factorizable fluxes", in that their underlying cocycle \(\hat C\) in ordinary differential cohomology is the cup product of a cocycle \(\hat C_{fib}\) on the fiber with one \(\hat C_{b}\) on the base:

\(\hat C = \hat C_{b} \cup \hat C_{fib}\)

In approaches like (GKP 12 (around p. 19), KMW 12) the C-field is factored as a cup product of a degree-2 cocycle on the elliptic fiber with a degree-2 class on the Calabi-Yau base. This makes the component of the C-field on the elliptic fiber a complex line bundle (with connection). Notice that the space of complex line bundles on an elliptic curve is dual to the elliptic curve itself.

On the other hand in e.g.
(DFM 03, p.38) the factorization is taken to be that of two degree-3 cocycles on the base (which are then identified with the combined degree-3 RR-field/B-field flux coupled to the (p,q)-string) with, respectively, the two canonical degree-1 cocycles \(\hat t_i\) on the elliptic fiber, given by the two canonical coordinate functions \(t_i\) (speaking of a framed elliptic curve). In this case the fiber-component of the supergravity C-field "is" the elliptic curve fiber:

\(\hat C = \hat B_{NS} \cup \hat t_1 + \hat B_{RR}\cup \hat t_2\)

or equivalently: each point in the moduli space of H-flux in 10d induces an identification of the G-flux with the elliptic curve this way.

This is maybe noteworthy in that when the C-field is identified with the compactification elliptic curve in this way, then the formula for \(\Theta^3\left(\int_X \mathrm{CS}_{11}(\hat C)\right)\) as above is exactly that appearing in the definition of a cubical structure on a line bundle over an elliptic curve. But a "cubical" trivialization of \(\Theta^3(\mathcal{O}(-\{0\}))\) over a given elliptic curve is what in (Hopkins 02, AHS01) is used to induce the sigma-orientation of the corresponding elliptic cohomology theory, and in totality the string-orientation of tmf. But that is the refinement of the Witten genus, hence of the partition function of the heterotic string.

Now, by the above M-theoretic equivalence, the cubical trivialization is also given by a trivialization of the topological class of the C-field. This is one way of obtaining (or is at least closely related to) the trivialization of the anomaly line bundle which "sets the quantum integrand" of M-theory.
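For reference, the \(\Theta^3\)-construction invoked here can be spelled out explicitly. The following is a sketch of the standard "theorem of the cube" definition for a line bundle \(L\) over an elliptic curve \(E\) (as in Breen's work and in AHS01; the sign convention on the exponents varies between references):

```latex
% Cubical line of a line bundle L --> E: a line bundle over E^3
% whose fiber over a triple (x,y,z) is the alternating tensor product
% of the fibers of L over all partial sums of the arguments:
\Theta^3(L)_{(x,y,z)}
  \;=\;
  L_{0}
  \otimes L_{x+y} \otimes L_{y+z} \otimes L_{x+z}
  \otimes L_{x}^{-1} \otimes L_{y}^{-1} \otimes L_{z}^{-1}
  \otimes L_{x+y+z}^{-1} \,.

% A cubical structure is a trivializing section s of Theta^3(L)
% which is rigid and symmetric,
s(0,0,0) = 1,
\qquad
s\bigl(x_{\sigma(1)}, x_{\sigma(2)}, x_{\sigma(3)}\bigr) = s(x_1, x_2, x_3)
\quad \text{for all permutations } \sigma,
% and which in addition satisfies a cocycle condition expressing
% compatibility with the group law in each variable.
```

In this notation, the DFM claim above says that the trilinear "top piece" of the Chern-Simons line \(\int_X \mathrm{CS}_{11}\) is the transgressed triple cup product, which is exactly the shape of \(\Theta^3\) of a line bundle over the elliptic curve once the C-field is identified with the curve as described.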
So there is a curious coincidence of concepts here, which might want to become a precise identification: on the one hand there is naturally a cubical structure on the Chern-Simons line bundle over the moduli space of supergravity C-fields, which for F-theory compactifications and factorizable flux configurations induces in particular a cubical structure on a line bundle over the compactification elliptic curve. On the other hand, the latter is exactly the structure that enters the refined construction of the Witten genus via the string orientation of tmf.

Has this been related further anywhere?

string-theory type-iib-string-theory heterotic-string-theory m-theory f-theory

asked May 13, 2014 in Theoretical Physics by Urs Schreiber (6,095 points) [ revision history ] retagged May 21, 2014 by dimension10