Some invariant solutions and conservation laws of a type of long-water wave system
Xiangzhi Zhang ORCID: orcid.org/0000-0003-3900-57921 &
Yufeng Zhang1
We propose a generalized long-water wave system that reduces to the standard water wave system. We also obtain the Lax pair and symmetries of the generalized shallow-water wave system and single out some of their similarity reductions, group-invariant solutions, and series solutions. We further investigate the corresponding self-adjointness and the conservation laws of the generalized system.
The classical dispersiveless long wave equations
$$ \textstyle\begin{cases} u_{t}+uu_{x}+h_{x}=0, \\ h_{t}+(uh)_{x}=0, \end{cases} $$
have a number of dispersive generalizations [1]. Kupershmidt [2] considered the following extension of (1):
$$ \textstyle\begin{cases} u_{t}=(\frac{1}{2}u^{2}+h-\beta u_{x})_{x}, \\ h_{t}=(uh+\alpha u_{xx}-\beta h_{x})_{x}, \end{cases} $$
where α, β are arbitrary constants. The invertible change of variables \(u=\bar{u}\), \(h=\bar{h}+\gamma \bar{u}_{x}\) turns (2) into
$$ \textstyle\begin{cases} \bar{u}_{t}=(\frac{1}{2}\bar{u}^{2}+\bar{h}+\mu \bar{u}_{x})_{x}, \\ \bar{h}_{t}=(\bar{u}\bar{h}-\mu \bar{h}_{x})_{x},\mu =\gamma +\beta = \pm \sqrt{\alpha +\beta ^{2}}. \end{cases} $$
Broer [1] derived system (2) for \(\alpha =\frac{1}{3}, \beta =0\), in which case it becomes the proper Boussinesq equation. In terms of the potential \(\varphi :u=\varphi _{x}\), system (2) was derived by Kaup [3]. Later, Matveev and Yavor [4] found a large class of almost periodic solutions by algebro-geometric methods. Li, Ma, and Zhang [5] used a scaling transformation to transfer a nonlinear long wave equation of Boussinesq class to the Broer–Kaup (BK) system, a type of long water wave equations:
$$ \textstyle\begin{cases} v_{t}=\frac{1}{2}(v^{2}+2w-v_{x})_{x}, \\ w_{t}=(vw+\frac{1}{2}w_{x})_{x}. \end{cases} $$
Furthermore, some exact solutions and Darboux transformations of (1) were obtained by applying the Lax-pair method. Following [5], we can study the similarity reductions, exact solutions, and conservation laws of the Boussinesq system through the transformation
$$ v=-u,w=\xi +1+\frac{v_{x}}{2}, $$
that is, we can transform the BK system to the Boussinesq system
$$ \textstyle\begin{cases} \xi _{t}+[(1+\xi )u]_{x}=-\frac{1}{4}u_{xxx}, \\ u_{t}+uu_{x}+\xi _{x}=0, \end{cases} $$
where ξ is the elevation of the water wave and u is the surface velocity of water along the x-direction. Hence the results of this paper have a clear physical meaning.
In the paper, we construct a generalized BK system as follows:
$$ \textstyle\begin{cases} v_{t}=\frac{\alpha }{2}(v_{x}-v^{2}-2w)_{x}-\beta v_{x}, \\ w_{t}=-\frac{\alpha }{2}(w_{x}+2wv)_{x}-\beta w_{x}, \end{cases} $$
where α, β are constants, so that some symmetries of (2) can be produced by the symmetry group method [6]. It follows that some similarity solutions, group-invariant solutions, and series solutions are produced. In addition, Ibragimov and Avdonina [7] showed how to apply the symmetries of differential equations to study the self-adjointness and conservation laws. Thus we follow this approach to investigate the quasiself-adjointness and conservation laws of the generalized BK system (2).
Integrability of (2)
Consider the spectral problem
$$ \varphi _{x}=U\varphi ,\qquad \varphi _{t}=V\varphi , $$
with
$$\begin{aligned}& U=\begin{pmatrix} -\lambda +\frac{v}{2} & 1 \\ -w & \lambda -\frac{v}{2} \end{pmatrix}, \\& V=\begin{pmatrix} \alpha \lambda ^{2}+\beta \lambda +\frac{\alpha }{4}v_{x}-\frac{\alpha }{4}v^{2}-\frac{\beta }{2}v & -\alpha \lambda -\frac{\alpha }{2}v- \beta \\ \alpha w\lambda +\frac{\alpha }{2}w_{x}+\frac{\alpha }{2}wv+\beta w &- \alpha \lambda ^{2}-\beta \lambda -\frac{\alpha }{4}v_{x}+\frac{\alpha }{4}v^{2}+\frac{\beta }{2}v \end{pmatrix}. \end{aligned}$$
Then the compatibility condition of (3)
$$ U_{t}-V_{x}+UV-VU=0 $$
yields the generalized BK system (2), which can be directly verified. Hence the generalized BK system (2) is Lax integrable. Using (3), we can get some Darboux transformations for deducing solutions of the system. Here we omit them.
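The direct verification can also be done symbolically. The following SymPy sketch (our addition, not part of the original derivation) substitutes the evolution equations (2) into the zero-curvature condition above and checks that it vanishes identically:

```python
# SymPy sketch (our addition): substitute the generalized BK system (2)
# into the zero-curvature condition U_t - V_x + UV - VU = 0.
import sympy as sp

x, t, lam, alpha, beta = sp.symbols('x t lambda alpha beta')
v = sp.Function('v')(x, t)
w = sp.Function('w')(x, t)

U = sp.Matrix([[-lam + v/2, 1],
               [-w, lam - v/2]])
V11 = alpha*lam**2 + beta*lam + alpha/4*v.diff(x) - alpha/4*v**2 - beta/2*v
V = sp.Matrix([[V11, -alpha*lam - alpha/2*v - beta],
               [alpha*w*lam + alpha/2*w.diff(x) + alpha/2*w*v + beta*w, -V11]])

Z = U.diff(t) - V.diff(x) + U*V - V*U
# the evolution equations of system (2)
vt = alpha/2*(v.diff(x) - v**2 - 2*w).diff(x) - beta*v.diff(x)
wt = -alpha/2*(w.diff(x) + 2*w*v).diff(x) - beta*w.diff(x)
Z = Z.subs({sp.Derivative(v, t): vt, sp.Derivative(w, t): wt})
print(sp.simplify(Z))  # expected output: the 2x2 zero matrix
```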
Similarity solutions and group-invariant solutions
Applying the Lie symmetry analysis, we can get the symmetry of system (2):
$$ X_{1}=\frac{\partial }{\partial t},\qquad X_{2}=\frac{\partial }{ \partial x},\qquad X_{3}= \biggl( \frac{1}{2}x+\beta t \biggr)\frac{ \partial }{\partial x}+t\frac{\partial }{\partial t}- \frac{1}{2}v\frac{ \partial }{\partial v}-w\frac{\partial }{\partial w}. $$
The vector field \(X_{3}\) has the following characteristic equation:
$$ \frac{dt}{t}=\frac{dx}{\frac{1}{2}x+\beta t}= \frac{dv}{-\frac{1}{2}v}= \frac{dw}{-w}, $$
which gives rise to
$$ \biggl(\beta t+\frac{1}{2}x\biggr)\,dt-t\,dx=0. $$
An integrating factor of (6) is given by
$$ \mu =e^{-\int \frac{3}{2t}\,dt}=t^{-\frac{3}{2}}, $$
which transforms (6) into the exact equation
$$ \beta t^{-\frac{1}{2}}\,dt+d \bigl(-t^{-\frac{1}{2}}x \bigr)=0, $$
from which we have the invariant variable \(\xi =2\beta t^{\frac{1}{2}}-t ^{-\frac{1}{2}}x\). In terms of Eq. (5), we have the formal invariants
$$ v=t^{-\frac{1}{2}}f(\xi ),\qquad w=t^{-1}g(\xi ), $$
where \(f(\xi )\) and \(g(\xi )\) are arbitrary smooth functions of ξ. Substituting (7) into system (2) yields the ordinary differential system
$$ \textstyle\begin{cases} -\frac{1}{2}f(\xi )+\frac{1}{2}xt^{-\frac{1}{2}}f'(\xi )=\frac{\alpha }{2}[f''(\xi )+2f(\xi )f'(\xi )+2g'(\xi )], \\ -g(\xi )+\frac{1}{2}xt^{-\frac{1}{2}}g'(\xi )=-\frac{\alpha }{2}[g''( \xi )-2g'(\xi )f(\xi )-2g(\xi )f'(\xi )]. \end{cases} $$
Let \(\beta =0\). Then system (8) reduces to
$$ \textstyle\begin{cases} f(\tau )+\tau f'(\tau )=-\alpha [f''(\tau )+2f(\tau )f'(\tau )+2g'( \tau )], \\ g(\tau )+\frac{1}{2}\tau g'(\tau )=\frac{\alpha }{2}[g''(\tau )-2g'( \tau )f(\tau )-2g(\tau )f'(\tau )], \end{cases} $$
where \(\tau =-t^{-\frac{1}{2}}x\), which is a reduction of ξ. In fact, system (9) is an ordinary differential system corresponding to the BK system (1).
The group-invariant transformations of the generalized BK system are as follows:
$$ \textstyle\begin{cases} g_{1}:(x,t,v,w)\rightarrow (x,t+\epsilon ,v,w), \\ g_{2}:(x,t,v,w)\rightarrow (x+\epsilon ,t,v,w), \\ g_{3}:(x,t,v,w)\rightarrow (2\beta te^{\epsilon }+(x-2\beta t)e^{ \frac{1}{2}\epsilon },te^{\epsilon },e^{-\frac{1}{2}\epsilon }v,we ^{-\epsilon }). \end{cases} $$
In what follows, we consider traveling-wave solutions. Set \(v=V(\rho )\), \(w=W(\rho )\), \(\rho =x+lt\). Then (2) becomes
$$ \textstyle\begin{cases} lV'=\frac{\alpha }{2}(V''-2VV'-2W')-\beta V', \\ lW'=-\frac{\alpha }{2}(W''+2W'V+2WV')-\beta W', \end{cases} $$
from which we have
$$ \textstyle\begin{cases} lV-\frac{\alpha }{2}(V'-V^{2}-2W)+\beta V=c_{1}, \\ lW+\frac{\alpha }{2}(W'+2WV)+\beta W=c_{2}. \end{cases} $$
A special solution to (11) is given by
$$ V=\frac{1}{c+\rho },\qquad W=-\frac{1}{(c+\rho )^{2}} $$
in the case of \(l=-\beta \), \(c_{1}=c_{2}=0\). Hence we get a set of solutions to the generalized BK system (2):
$$ v=\frac{1}{c+x-\beta t},\qquad w=-\frac{1}{(c+x-\beta t)^{2}}. $$
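This can be confirmed symbolically; the short SymPy check below (our addition) substitutes the solution above into both equations of the generalized BK system (2):

```python
# SymPy check (our addition) of the traveling-wave solution of (2).
import sympy as sp

x, t, c, alpha, beta = sp.symbols('x t c alpha beta')
v = 1/(c + x - beta*t)
w = -1/(c + x - beta*t)**2

res1 = v.diff(t) - (alpha/2*(v.diff(x) - v**2 - 2*w).diff(x) - beta*v.diff(x))
res2 = w.diff(t) - (-alpha/2*(w.diff(x) + 2*w*v).diff(x) - beta*w.diff(x))
print(sp.simplify(res1), sp.simplify(res2))  # 0 0
```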
Applying the group-invariant transformation (10), we can deduce some other new solutions to system (2):
$$ \textstyle\begin{cases} g_{1}:\quad v=\frac{1}{c+x-\beta (t+\epsilon )},\qquad w=-\frac{1}{[c+x- \beta (t+\epsilon )]^{2}}, \\ g_{2}:\quad v=\frac{1}{c+x-\beta t+\epsilon },\qquad w=-\frac{1}{(c+x-\beta t+ \epsilon )^{2}}, \\ g_{3}:\quad v=\frac{e^{-\frac{1}{2}\epsilon }}{c+\beta te^{\epsilon }+(x-2 \beta t)e^{\frac{1}{2}\epsilon }},\qquad w=-\frac{e^{-\epsilon }}{[c+ \beta te^{\epsilon }+(x-2\beta t)e^{\frac{1}{2}\epsilon }]^{2}}. \end{cases} $$
Taking \(\beta =0\), we can obtain group-invariant solutions to the BK system (1). In particular, we can get the series solutions to the BK system. Indeed, let
$$ f(\tau )=\sum_{n=0}^{\infty }c_{n} \tau ^{n},\qquad g(\tau )=\sum_{m=0} ^{\infty }d_{m}\tau ^{m}, $$
Substituting these series into (9), we have
$$\begin{aligned}& c_{0}+\sum_{n=1}^{\infty }c_{n}\tau ^{n}+\tau c_{1}+\sum_{n=1}^{\infty }(n+1)c_{n+1}\tau ^{n+1} \\& \quad =-\alpha \Biggl[2c_{2}+\sum_{n=1}^{\infty }(n+2) (n+1)c_{n+2}\tau ^{n} +2\Biggl(c_{0}+\sum_{n=1}^{\infty }c_{n}\tau ^{n}\Biggr) \Biggl(c_{1}+\sum_{n=1}^{\infty }(n+1)c_{n+1}\tau ^{n}\Biggr) \\& \qquad {}+2d_{1}+2\sum_{m=1}^{\infty }(m+1)d_{m+1}\tau ^{m}\Biggr], \\& d_{0}+\sum_{m=1}^{\infty }d_{m}\tau ^{m}+\frac{1}{2}\tau d_{1}+\frac{1}{2}\sum_{m=1}^{\infty }(m+1)d_{m+1}\tau ^{m+1} \\& \quad =\frac{\alpha }{2}\Biggl[2d_{2}+\sum_{m=1}^{\infty }(m+2) (m+1)d_{m+2}\tau ^{m} -2\Biggl(c_{0}+\sum_{n=1}^{\infty }c_{n}\tau ^{n}\Biggr) \Biggl(d_{1}+\sum_{m=1}^{\infty }(m+1)d_{m+1}\tau ^{m}\Biggr) \\& \qquad {}-2\Biggl(d_{0}+\sum_{m=1}^{\infty }d_{m}\tau ^{m}\Biggr) \Biggl(c_{1}+\sum_{n=1}^{\infty }(n+1)c_{n+1}\tau ^{n}\Biggr)\Biggr], \end{aligned}$$
from which we infer that
$$\begin{aligned}& c_{2}=-c_{0}c_{1}-\frac{1}{2\alpha }c_{0}-d_{1}, \\& d_{2}=c_{0}d_{1}+d_{0}c_{1}+ \frac{1}{\alpha }d_{0}, \\& c_{3}=-\frac{1}{3\alpha }c_{1}-\frac{2}{3}c_{0}c_{2}-\frac{1}{3}c_{1}^{2}-\frac{2}{3}d_{2}, \\& d_{3}=\frac{1}{2\alpha }d_{1}+\frac{2}{3}c_{0}d_{2}+\frac{2}{3}c_{1}d_{1}+\frac{2}{3}d_{0}c_{2}, \\& \cdots \cdots , \\& c_{n+2}=-\frac{1}{\alpha (n+1)(n+2)}\Biggl[(n+1)c_{n}+2\alpha \sum_{k=0}^{n}(n+1-k)c_{k}c_{n+1-k}+2\alpha (n+1)d_{n+1}\Biggr], \\& d_{n+2}=\frac{1}{(n+1)(n+2)}\Biggl[\frac{n+2}{\alpha }d_{n}+2\sum_{k=0}^{n}(n+1-k) \bigl(c_{k}d_{n+1-k}+d_{k}c_{n+1-k}\bigr)\Biggr], \end{aligned}$$
where \(c_{0}\), \(d_{0}\), \(c_{1}\), \(d_{1}\) are arbitrary parameters. Inserting these expressions into (14), we get the series solutions of the BK system. The second equation of system (9) can be reduced to
$$ g''(\tau )-\frac{1}{\alpha } \tau g'(\tau )-\frac{2}{\alpha }g(\tau )=0 $$
under the condition
$$ (fg)'=0\quad \Rightarrow\quad fg=c. $$
Once a solution of (15) is obtained, we can get \(f(\tau )\) from (16). If \(g_{1}(\tau )\) is a known solution of (15), we set \(g(\tau )=u(\tau )g_{1}(\tau )\); once \(u(\tau )\) is determined, the solution \(g(\tau )\) of Eq. (15) follows. It is easy to see that
$$ g''(\tau )=g_{1}( \tau )u''(\tau )+2u'(\tau )g_{1}'(\tau )+u(\tau )g _{1}''( \tau ). $$
Substituting (17) into Eq. (15) yields
$$\begin{aligned}& g_{1}(\tau )u''(\tau )+ \biggl(2g_{1}'(\tau )-\frac{1}{\alpha } \tau g_{1}(\tau ) \biggr)u'(\tau )+ \biggl(g_{1}''( \tau )-\frac{1}{ \alpha }\tau g_{1}'(\tau )- \frac{2}{\alpha } g_{1}(\tau ) \biggr)u( \tau )=0. \end{aligned}$$
Since \(g_{1}(\tau )\) solves (15), that is,
$$ g_{1}''(\tau )-\frac{1}{\alpha } \tau g_{1}'(\tau )-\frac{2}{\alpha }g _{1}(\tau )=0, $$
the equation for \(u(\tau )\) reduces to
$$ g_{1}(\tau )u''(\tau )+ \biggl(2g_{1}'(\tau )-\frac{1}{\alpha }\tau g _{1}(\tau ) \biggr)u'(\tau )=0. $$
Assume that \(u'(\tau )=z(\tau )\). Then Eq. (18) becomes
$$ g_{1}(\tau )z'(\tau )+ \biggl(2g_{1}'( \tau )-\frac{1}{\alpha }\tau g _{1}(\tau ) \biggr)z(\tau )=0, $$
which has the solution
$$ z=\frac{c}{g_{1}^{2}(\tau )}e^{\int \frac{1}{\alpha }\tau\, d\tau }=\frac{c}{g _{1}^{2}(\tau )}e^{\frac{1}{2\alpha }\tau ^{2}}, $$
where c is a constant. Thus we have
$$\begin{aligned}& \begin{gathered} u(\tau )=c \int^{\tau }\frac{1}{g_{1}^{2}(\tau )}e^{\frac{\tau ^{2}}{2 \alpha }}\,d \tau +\bar{c}, \\ g(\tau )=g_{1}(\tau ) \biggl[c \int^{\tau } \frac{1}{g_{1}^{2}(\tau )}e^{\frac{\tau ^{2}}{2\alpha }}\,d \tau +\bar{c} \biggr]. \end{gathered} \end{aligned}$$
Substituting (19) into Eq. (16), we can get \(f(\tau )\). Thus a class of special solutions to system (9) is obtained.
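The reduction of order can likewise be verified symbolically. In the SymPy sketch below (our addition), \(g_{1}\) is left as an unspecified solution of (15), and its second derivative is eliminated using (15) itself:

```python
# SymPy sketch (our addition): g = g1*u with u from (19) satisfies (15)
# whenever g1 does.
import sympy as sp

tau, alpha, c, cbar = sp.symbols('tau alpha c cbar')
g1 = sp.Function('g1')(tau)

u = c*sp.Integral(sp.exp(tau**2/(2*alpha))/g1**2, tau) + cbar
g = g1*u
res = g.diff(tau, 2) - tau/alpha*g.diff(tau) - 2/alpha*g
# impose (15) on g1: g1'' = (tau/alpha) g1' + (2/alpha) g1
res = res.subs(sp.Derivative(g1, (tau, 2)),
               tau/alpha*g1.diff(tau) + 2/alpha*g1)
print(sp.simplify(sp.expand(res)))  # 0
```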
The self-adjointness of system (2)
Ibragimov [8] introduced the related notions of strict self-adjointness, nonlinear self-adjointness, and quasiself-adjointness. Let us recall them.
Let H be a Hilbert space with scalar product \((u,v)\). The adjoint operator \(F^{*}\) of a linear operator F is defined by
$$ (Fu,v)=\bigl(u,F^{*}v\bigr),\quad u,v\in H. $$
A special Hilbert space is
$$ H=\biggl\{ f: \int _{R^{n}} \bigl\vert f(x) \bigr\vert ^{2}\,dx< \infty \biggr\} $$
along with an inner product
$$ (u,v)= \int _{R^{n}} u(x)v(x)\,dx. $$
Let F be a linear differential operator in H whose action on the function u is expressed by \(F[u]\). Then Eq. (20) becomes
$$ \bigl(F[u],v\bigr)=\bigl(u,F^{*}[v]\bigr), $$
which means that
$$ vF[u]-uF^{*}[v]=D_{i}\bigl(\xi ^{i}\bigr), $$
where \(D_{i}=\frac{\partial }{\partial x^{i}}+u_{i}^{\alpha } \partial _{u^{\alpha }}+u_{ij}^{\alpha }\partial _{u_{j}^{\alpha }}+ \cdots \) .
Consider the differential equations
$$ F_{\alpha }(x,u,u_{x_{i}},u_{x_{i}x_{j}}, \dots )=0,\quad \alpha =1, \dots ,m, $$
where \(u=(u^{1},\ldots ,u^{m})\). The adjoint equations to (22) are as follows:
$$ F_{\alpha }^{*}(x,u,v,u_{x_{i}},v_{x_{i}}, \ldots)=0,\quad \alpha =1,\ldots,m, $$
with \(F_{\alpha }^{*}=\frac{\delta \varphi }{\delta u^{\alpha }}\), where the formal Lagrangian φ for (22) and the variational derivative are defined by
$$\begin{aligned}& \begin{gathered} \mathcal{\varphi }=v^{\beta }F_{\beta }=:\sum _{\beta =1}^{m}v^{\beta }F_{\beta }, \\ \frac{\delta }{\delta u^{\alpha }}=\frac{\partial }{\partial u^{ \alpha }}+\sum_{j=1}^{\infty }(-1)^{j}D_{i_{1}} \cdots D_{i_{j}}\frac{ \partial }{\partial u_{i_{1}\cdots i_{j}}^{\alpha }}. \end{gathered} \end{aligned}$$
Definition 1 ([7, 8])
The differential equations (22) are said to be strictly self-adjoint if their adjoint equations (23) are equivalent to (22) upon the substitution \(v=u\). That is, the equation
$$ F^{*}(x,u,u,u_{x_{i}},u_{x_{i}},\ldots )=\lambda F(x,u,u_{x},\ldots ) $$
holds with a coefficient λ.
Definition 2 ([8])
If, upon a substitution
$$ v=\varphi (u), $$
the adjoint equations (23) become (22), then system (22) is called quasiself-adjoint.
Similarly, if there is a substitution
$$ v=\varphi (x,u)\neq 0, $$
such that (26) solves the adjoint equations (23) for all solutions of (22), then we call system (22) nonlinearly self-adjoint, that is, we have the following equations:
$$ F_{\alpha }^{*}(x,u,\varphi ,\ldots )= \lambda _{\alpha }^{\beta }F_{ \beta }(x,u,\ldots ). $$
It is easy to see that strictly self-adjoint and quasiself-adjoint equations are both particular cases of nonlinearly self-adjoint equations.
For the generalized BK system (2), denoted by
$$ \textstyle\begin{cases} F=v_{t}-\frac{\alpha }{2}(v_{x}-v^{2}-2w)_{x}+\beta v_{x}, \\ G=w_{t}+\frac{\alpha }{2}(w_{x}+2wv)_{x}+\beta w_{x}, \end{cases} $$
the formal Lagrangian \(\mathcal{L}\) can be written as \(\mathcal{L}=pF+qG\), and the adjoint system of (2) is as follows:
$$ \textstyle\begin{cases} \frac{\delta \mathcal{L}}{\delta v}=-p_{t}-\frac{\alpha }{2}p_{xx}-\alpha vp_{x}-\alpha wq_{x}-\beta p_{x}=0, \\ \frac{\delta \mathcal{L}}{\delta w}=-q_{t}+\frac{\alpha }{2}q_{xx}-\alpha p_{x}-\alpha vq_{x}-\beta q_{x}=0. \end{cases} $$
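The variational derivatives above can be generated mechanically. The following SymPy sketch (our addition) builds the formal Lagrangian \(\mathcal{L}=pF+qG\) and derives the adjoint system (28), assuming SymPy's euler_equations handles the second-order derivatives appearing in \(\mathcal{L}\) (it does in current releases):

```python
# SymPy sketch (our addition): derive the adjoint system (28) as the
# variational derivatives of the formal Lagrangian L = p*F + q*G.
import sympy as sp
from sympy.calculus.euler import euler_equations

x, t, alpha, beta = sp.symbols('x t alpha beta')
v, w, p, q = [sp.Function(n)(x, t) for n in ('v', 'w', 'p', 'q')]

F = v.diff(t) - alpha/2*(v.diff(x) - v**2 - 2*w).diff(x) + beta*v.diff(x)
G = w.diff(t) + alpha/2*(w.diff(x) + 2*w*v).diff(x) + beta*w.diff(x)
L = p*F + q*G

# delta L / delta v and delta L / delta w give the two equations of (28)
for eq in euler_equations(L, [v, w], [x, t]):
    print(sp.simplify(eq))
```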
Setting \(p=\varphi (v,w)\) and \(q=\psi (v,w)\) and substituting them into the adjoint system (28), we have
$$ \frac{\delta \mathcal{\mathcal{L}}}{\delta v}\biggm|_{p=\varphi ,q=\psi }= \lambda _{1}F+\mu _{1}G,\qquad \frac{\delta \mathcal{\mathcal{L}}}{\delta w}\biggm|_{ p=\varphi , q=\psi }= \lambda _{2}F+\mu _{2}G, $$
where \(\lambda _{1}\), \(\lambda _{2}\), \(\mu _{1}\), \(\mu _{2}\) are undetermined functions. It is easy to get
$$ \textstyle\begin{cases} p_{t}=\varphi _{v}v_{t}+\varphi _{w}w_{t},\qquad p_{x}=\varphi _{v}v_{x}+ \varphi _{w}w_{x}, \\ p_{xx}=\varphi _{vv}v_{x}^{2}+2\varphi _{vw}v_{x}w_{x}+\varphi _{ww}w _{x}^{2}+\varphi _{v}v_{xx}+\varphi _{w}w_{xx}, \\ q_{t}=\psi _{v}v_{t}+\psi _{w}w_{t},\qquad q_{x}=\psi _{v}v_{x}+\psi _{w}w _{x}, \\ q_{xx}=\psi _{vv}v_{x}^{2}+2\psi _{vw}v_{x}w_{x}+\psi _{ww}w_{x}^{2}+\psi _{v}v_{xx}+\psi _{w}w_{xx}. \end{cases} $$
Inserting all these results into (29) yields that
$$ \lambda _{1}=\mu _{1}=\lambda _{2}=\mu _{2}=0. $$
Therefore, for all solutions of system (2), (28) holds. Thus system (2) is nonlinearly self-adjoint.
Another expression of system (2) and some properties
Consider the change of variables
$$ v(x,t)=V \biggl(x,\frac{\alpha }{2}t \biggr)-\frac{\beta }{\alpha }, \qquad w(x,t)=W \biggl(x,\frac{\alpha }{2}t \biggr). $$
Then system (2) becomes
$$ \textstyle\begin{cases} V_{t}=V_{xx}-2VV_{x}-2W_{x}, \\ W_{t}=-W_{xx}-2W_{x}V-2WV_{x}, \end{cases} $$
which has the infinitesimal symmetries
$$ X=(2c_{1}t+c_{2})\partial _{t}+(c_{1}x+c_{3}t+c_{4}) \partial _{x}+\biggl(\frac{1}{2}c_{3}-c_{1}V \biggr)\partial _{V}-2c_{1}W\partial _{W}, $$
where \(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{4}\) are constants. Obviously, when \(c_{1}=c_{2}=c_{3}=0\) and \(c_{4}=1\), we get \(X_{1}=\partial _{x}\). When \(c_{1}=c_{3}=c_{4}=0\) and \(c_{2}=1\), we have \(X_{2}=\partial _{t}\). When \(c_{2}=c_{3}=c_{4}=0\) and \(c_{1}=1\), we find \(X_{3}=2t\partial _{t}+x \partial _{x}-V\partial _{V}-2W\partial _{W}\); \(X_{i}\) \((i=1,2,3)\) all are particular cases of X.
Next, we consider the characteristic equation of X so that we can obtain the similarity reductions of system (30). The characteristic equation of X reads as
$$ \frac{dt}{2c_{1}t+c_{2}}=\frac{dx}{c_{1}x+c_{3}t+c_{4}}= \frac{dV}{\frac{1}{2}c_{3}-c_{1}V}=\frac{dW}{-2c_{1}W}. $$
Case 1: \(c_{1}=1\). Solving (31), we obtain the invariants
$$ \xi =\frac{x-c_{3}t+c_{4}-c_{2}c_{3}}{\sqrt{2t+c_{2}}},\qquad V= \frac{1}{2}c_{3}+ \frac{f(\xi )}{\sqrt{2t+c_{2}}},\qquad W=\frac{g( \xi )}{2(2t+c_{2})}. $$
System (30) reduces to
$$ \textstyle\begin{cases} -f(\xi )-\xi f'(\xi )+2f(\xi )f'(\xi )-f''(\xi )+g'(\xi )=0, \\ -2g(\xi )-\xi g'(\xi )+g''(\xi )+2g(\xi )f'(\xi )+2g'(\xi )f(\xi )=0. \end{cases} $$
Case 2: \(c_{1}=c_{2}=0\). Equation (31) becomes
$$ \frac{dt}{0}=\frac{dx}{c_{3}t+c_{4}}=\frac{dV}{\frac{1}{2}c_{3}}= \frac{dW}{0}. $$
We take
$$ \xi =t,\qquad W=W(t),\qquad V=\frac{c_{3}x}{2(c_{3}t+c_{4})}- \frac{1}{2}c_{3}f(t). $$
Then system (30) reduces to
$$ \textstyle\begin{cases} c_{3}\xi f'(\xi )+c_{3}f(\xi )+c_{4}f'(\xi )=0, \\ c_{3}\xi W'(\xi )+c_{4}W'(\xi )+c_{3}W(\xi )=0. \end{cases} $$
The two equations are in fact the same.
Case 3: \(c_{1}=0\), \(c_{2}\neq0\). Equation (31) reduces to
$$ \frac{dt}{c_{2}}=\frac{dx}{c_{3}t+c_{4}}=\frac{dV}{\frac{1}{2}c_{3}}= \frac{dW}{0}. $$
We choose
$$ \xi =c_{2}x-\frac{1}{2}c_{3}t^{2}-c_{4}t, \qquad V= \frac{c_{3}t}{2c_{2}}+\frac{f(\xi )}{c_{2}},\qquad W=g(\xi ). $$
Thus system (30) turns into
$$ \textstyle\begin{cases} -2c_{4}f'(\xi )+c_{3}-2c_{2}^{2}f''(\xi )+4f(\xi )f'(\xi )+4c_{2}^{2}g'( \xi )=0, \\ -c_{4}g'(\xi )+c_{2}^{2}g''(\xi )+2g'(\xi )f(\xi )+2g(\xi )f'(\xi )=0. \end{cases} $$
System (35) has the particular solutions
$$ f(\xi )=\frac{1}{2}\xi ^{2}-\xi ^{-1},\qquad g(\xi )=c\xi , $$
where ξ satisfies the constraint
$$ \xi ^{3}-\frac{3}{2}\xi ^{2}+c-2=0. $$
Thus from (32) we get a set of new solutions of system (2):
$$ \textstyle\begin{cases} v(x,t)=\frac{1}{2}c_{3}+\frac{1}{\sqrt{\alpha t+c_{2}}}(\frac{1}{2}\frac{(x-c _{3}t+c_{4}-c_{2}c_{3})^{2}}{\alpha t+c_{2}}-\frac{\sqrt{\alpha t+c_{2}}}{x-c _{3}t+c_{4}-c_{2}c_{3}})-\frac{\beta }{\alpha }, \\ w(x,t)=\frac{c}{2}\frac{x-c_{3}t+c_{4}-c_{2}c_{3}}{(\alpha t+c_{2})^{ \frac{3}{2}}}. \end{cases} $$
In what follows, we consider the series solutions of (33). Let
$$ f(\xi )=\sum_{i=0}^{\infty }a_{i} \xi ^{i}, \qquad g(\xi )=\sum_{i=0}^{\infty }b_{i} \xi ^{i} $$
Substituting these series into system (33), we infer that
$$\begin{aligned}& \textstyle\begin{cases} -a_{0}+2a_{0}a_{1}-2a_{2}+b_{1}=0, \\ -2a_{1}+4a_{0}a_{2}+2a_{1}^{2}-6a_{3}+2b_{2}=0, \\ -3a_{2}+2(3a_{0}a_{3}+3a_{1}a_{2})-12a_{4}+3b_{3}=0, \\ \cdots \end{cases}\displaystyle \\& -(n+1)a_{n}+2\sum_{i+j=n}a_{i}(j+1)a_{j+1}-(n+1) (n+2)a_{n+2}+(n+1)b _{n+1}=0, \end{aligned}$$
$$\begin{aligned}& \textstyle\begin{cases} -2b_{0}+2b_{2}+2b_{0}a_{1}+2b_{1}a_{0}=0, \\ -3b_{1}+6b_{3}+2(2b_{0}a_{2}+b_{1}a_{1})+2(b_{1}a_{1}+2b_{2}a_{0})=0, \\ -4b_{2}+12b_{4}+2(3b_{0}a_{3}+2b_{1}a_{2}+a_{1}b_{2})+2(b_{1}a_{2}+2b _{2}a_{1}+3b_{3}a_{0})=0, \\ \cdots \end{cases}\displaystyle \\& -(n+2)b_{n}+(n+1) (n+2)b_{n+2}+2\sum_{i+j=n}b_{i}(j+1)a_{j+1} +2 \sum_{i+j=n}a_{i}(j+1)b_{j+1}=0, \end{aligned}$$
from which we get
$$ \textstyle\begin{cases} a_{2}=-\frac{1}{2}a_{0}+a_{0}a_{1}+\frac{1}{2}b_{1}, \\ b_{2}=b_{0}-a_{1}b_{0}-a_{0}b_{1}, \\ a_{3}=\frac{1}{3}(-a_{1}+2a_{0}a_{2}+a_{1}^{2}+b_{2}), \\ b_{3}=\frac{1}{2}b_{1}-\frac{1}{3}(a_{1}b_{1}+2a_{2}b_{0})- \frac{1}{3}(b_{1}a_{1}+2b_{2}a_{0}), \\ \cdots \end{cases} $$
where \(a_{0}\), \(b_{0}\), \(a_{1}\), \(b_{1}\) are arbitrary parameters. Thus we obtain the following formal series solutions of system (33):
$$\begin{aligned}& f(\xi )=a_{0}+a_{1}\xi + \biggl(-\frac{1}{2}a_{0}+a_{0}a_{1}+ \frac{1}{2}b_{1} \biggr)\xi ^{2} + \frac{1}{3}\bigl(-a_{1}+2a_{0}a_{2}+a_{1}^{2}+b_{2} \bigr)\xi ^{3}+\sum_{i=4}^{\infty }a _{i}\xi ^{i}, \end{aligned}$$
$$\begin{aligned}& g(\xi )=b_{0}+b_{1}\xi +(b_{0}-a_{1}b_{0}-a_{0}b_{1}) \xi ^{2} \\& \hphantom{g(\xi )=}{}+\biggl[ \frac{1}{2}b_{1}- \frac{1}{3}(2a_{1}b_{1}+2a_{2}b_{0}+2b_{2}a_{0}) \biggr]\xi ^{3}+\sum_{i=4}^{\infty }b_{i} \xi ^{i}, \end{aligned}$$
where \(a_{i},b_{i}\) (\(i=4,5,\ldots \)) satisfy (36) and (37). Substituting (38) and (39) into (32), we can get the series solutions of the generalized BK system.
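For concreteness, the recursions (36) and (37) can be iterated numerically; the short sketch below (our addition) does so with arbitrary illustrative rational seeds for \(a_{0}\), \(a_{1}\), \(b_{0}\), \(b_{1}\):

```python
# Sketch (our addition): generate the series coefficients from the
# recursions (36)-(37); the seed values are illustrative.
from fractions import Fraction as F

a = {0: F(1, 2), 1: F(1, 3)}
b = {0: F(1, 4), 1: F(1, 5)}

for n in range(6):
    s_aa = sum(a[i] * (n - i + 1) * a[n - i + 1] for i in range(n + 1))
    s_ba = sum(b[i] * (n - i + 1) * a[n - i + 1] for i in range(n + 1))
    s_ab = sum(a[i] * (n - i + 1) * b[n - i + 1] for i in range(n + 1))
    a[n + 2] = (-(n + 1) * a[n] + 2 * s_aa + (n + 1) * b[n + 1]) / ((n + 1) * (n + 2))
    b[n + 2] = ((n + 2) * b[n] - 2 * s_ba - 2 * s_ab) / ((n + 1) * (n + 2))

print(a[2], b[2], a[3], b[3])  # agree with the closed forms above
```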
Next, we consider the solutions to system (34). Since both equations of (34) coincide, it is easy to see that
$$ g(\xi )=f(\xi )=\frac{\hat{c}}{\xi +\frac{c_{4}}{c_{3}}}, $$
where ĉ is an integration constant.
System (35) is solvable similarly to system (33), and we omit the computations.
Conservation laws
In this section, we consider the conservation laws of the generalized BK system by using the method in [7, 8]. From the identity
$$ X+D_{i}\bigl(\xi ^{i}\bigr)=W^{\alpha } \frac{\delta }{\delta u^{\alpha }}+D_{i}N ^{i} $$
we find that
$$ X(\mathcal{L})+D_{i}\bigl(\xi ^{i} \bigr)\mathcal{L}=W^{\alpha }\frac{\delta \mathcal{L}}{\delta u^{\alpha }}+D_{i} \bigl[N^{i}(\mathcal{L})\bigr], $$
$$ \textstyle\begin{cases} X=\xi ^{i}\partial _{x_{i}}+\eta ^{\alpha }\frac{\partial }{\partial u ^{\alpha }}+\zeta _{i}^{\alpha }\frac{\partial }{\partial u_{i}^{\alpha }}+\cdots , \\ N^{i}=\xi ^{i}+W^{\alpha }\frac{\delta }{\delta u_{i}^{\alpha }}+\sum_{s=1}^{\infty }D_{i_{1}}\cdots D_{i_{s}}\bigl(W^{\alpha }\bigr)\frac{\delta }{ \delta u_{ii_{1}\cdots i_{s}}^{\alpha }},\quad i=1,2,\ldots ,n, \\ W^{\alpha }=\eta ^{\alpha }-\xi ^{j}u_{j}^{\alpha },\quad \alpha =1,\ldots ,m, \end{cases} $$
and \(\mathcal{L}\) is the formal Lagrangian, which satisfies the Euler–Lagrange equations
$$ \frac{\delta \mathcal{L}}{\delta u^{\alpha }}=0,\quad \alpha =1,\ldots ,m. $$
Since system (28) holds, we can investigate the conservation laws by using (41), where the components of the conservation laws are the following:
$$ C^{i}=N^{i}(\mathcal{L}),\quad i=1,\ldots ,n, $$
which satisfy the conservation equations
$$ D_{i}\bigl(C^{i}\bigr)_{(22)}=0. $$
For \(X_{1}=\frac{\partial }{\partial x}\), we find that
$$ W^{1,1}=-v_{x},\qquad W^{1,2}=-w_{x}. $$
Substituting (44) into (42) yields
$$ \textstyle\begin{cases} C_{v}^{1}=-\alpha vv_{x}(p+q)-(\alpha +\beta )v_{x}p-\alpha qwv_{x}- \beta qv_{x}+\frac{\alpha }{2}v_{x}(q_{x}-p_{x}) \\ \hphantom{C_{v}^{1}=}{}+\frac{\alpha }{2}pv_{xx}-\frac{\alpha }{2}qv_{xx}, \\ C_{w}^{1}=-\alpha pvw_{x}-\beta pw_{x}-\alpha qww_{x}-\alpha pw_{x}- \alpha qvw_{x} \\ \hphantom{C_{w}^{1}=}{}-\beta qw_{x}-\frac{\alpha }{2}p_{x}w_{x}+\frac{\alpha }{2}w_{x}q_{x}+\frac{ \alpha }{2}pw_{xx}-\frac{\alpha }{2}qw_{xx}. \end{cases} $$
For \(X_{2}=\frac{\partial }{\partial t}\), we get
$$ \textstyle\begin{cases} C_{v}^{2}=-v_{t}(p+q)=-(p+q)[-\beta v_{x}+\frac{\alpha }{2}(v_{x}-v ^{2}-2w)_{x}], \\ C_{w}^{2}=(p+q)[\beta w_{x}+\frac{\alpha }{2}(w_{x}+2wv)_{x}]. \end{cases} $$
For \(X_{3}=(\frac{1}{2}x+\beta t)\partial _{x}+t\partial _{t}- \frac{1}{2}v\partial _{v}-w\partial _{w}\), we infer
$$ \textstyle\begin{cases} W^{3,1}=-\frac{1}{2}v-tv_{t}-(\frac{1}{2}x+\beta t)v_{x}, \\ W^{3,2}=-w-tw_{t}-(\frac{1}{2}x+\beta t)w_{x}, \\ C_{v}^{3}=[-\frac{1}{2}v-tv_{t}-(\frac{1}{2}x+\beta t)v_{x}](\alpha pv+ \beta p+2wq+\alpha p+\alpha qv+\beta q) \\ \hphantom{C_{v}^{3}=}{}+\frac{\alpha }{2}(p+q)[-\frac{1}{2}v_{x}-tv_{xt}-\frac{1}{2}v_{x}-( \frac{1}{2}x+\beta t)v_{xx}], \\ C_{w}^{3}=[-w-tw_{t}-(\frac{1}{2}x+\beta t)w_{x}](\alpha pv+\beta p+2wq+ \alpha p+\alpha qv+\beta q) \\ \hphantom{C_{w}^{3}=}{}+\frac{\alpha }{2}(p+q)[-w_{x}-tw_{xt}-\frac{1}{2}w_{x}-(\frac{1}{2}x+ \beta t)w_{xx}], \end{cases} $$
where \(v_{t}\), \(w_{t}\) are given by system (2).
Anco and Bluman [9] proposed a method for constructing conservation laws of differential equations, which uses a formula that directly generates the conservation laws and does not require the system to have a Lagrangian formulation, in contrast to Noether's theorem, which requires a Lagrangian. They adopted the linearized equations and the adjoint equations of the original differential equations to study conservation laws. Essentially, the algorithm presented by Ibragimov et al. is the same as that of Anco and Bluman. Besides, Anco [10] also gave some comments on the work of Ibragimov.
In this paper, we have investigated various similarity reductions and exact solutions of the generalized BK system and its conservation laws by Lie group analysis. We have pointed out that the standard BK system is only a particular case of the generalized BK system (2) when \(\alpha =-1\) and \(\beta =0\). In addition, Lou [11, 12] applied the symmetry group method to study some coherent solutions of nonlocal KdV systems and primary branch solutions of a first-order autonomous system. We hope to extend these methods to the systems presented in this paper in forthcoming work. In addition, Ma [13] obtained some new conservation laws of some discrete evolution equations by symmetries and adjoint symmetries. Zhang et al. [14, 15] considered symmetry properties of some fractional equations. Therefore it is an open problem how to find the fractional systems that correspond to the systems presented in this paper and how to solve them. Besides, Liu, Zhang, and Zhou [16] constructed the fractional Volterra hierarchy, gave a definition of the hierarchy in terms of Lax pair and Hamiltonian formalisms, and constructed its tau functions and multisoliton solutions. Bridgman, Hereman, Quispel, and van der Kamp [17] and El-Nabulsi [18] studied the peakon and Toda lattice. The approaches adopted in [16–18] can lead us to investigate some related properties of the generalized BK system presented in this paper. These questions will be discussed in the future.
Broer, L.J.F.: Approximate equations for long water waves. Appl. Sci. Res. 31, 377–395 (1975)
Kupershmidt, B.A.: Mathematics of dispersive water waves. Commun. Math. Phys. 99, 51–73 (1985)
Kaup, D.J.: A higher-order water-wave equation and the method for solving it. Prog. Theor. Phys. 54, 396–408 (1975)
Matveev, V.B., Yavor, M.I.: Solutions presque périodiques et a N-solitons de l'équation hydrodynamique non linéaire de Kaup. Ann. Inst. Henri Poincaré A XXXI (1), 25–41 (1979)
Li, Y.S., Ma, W.X., Zhang, J.E.: Darboux transformations of classical Boussinesq system and its new solutions. Phys. Lett. A 275, 60–66 (2000)
Olver, P.J.: Applications of Lie Groups to Differential Equations. Springer, New York (1993)
Ibragimov, N.H., Avdonina, E.D.: Nonlinear self-adjointness, conservation laws, and the construction of solutions of partial differential equations using conservation laws. Russ. Math. Surv. 68, 889–921 (2013)
Ibragimov, N.H.: Nonlinear self-adjointness in constructing conservation laws, pp. 1–104 (2011). arXiv:1109.1728v1 [math-ph]
Anco, S., Bluman, G.: Direct construction of conservation laws from field equations. Phys. Rev. Lett. 78, 2869–2873 (1997)
Anco, S.: On the incompleteness of Ibragimov's conservation law theorem and its equivalence to a standard formula using symmetries and adjoint-symmetries. Symmetry 9, 33 (2017)
Lou, S.Y., Huang, F.: Alice–Bob physics: coherent solutions of nonlocal KdV systems. Sci. Rep. 7, 1–13 (2016)
Lou, S.Y., Yao, R.X.: Invariant functions, symmetries and primary branch solutions of first order autonomous systems. Commun. Theor. Phys. 68, 21–28 (2017)
Ma, W.X.: Conservation laws of discrete evolution equations by symmetries and adjoint symmetries. Symmetry 7, 714–725 (2015)
Zhang, Y.F., et al.: Symmetry properties and explicit solutions of some nonlinear differential and fractional equations. Appl. Math. Comput. 337, 408–418 (2018)
Zhang, X.Z., Zhang, Y.F.: Some similarity solutions and numerical solutions to the time-fractional Burgers system. Symmetry 11, 112 (2019). https://doi.org/10.3390/sym11010112
Liu, S.Q., Zhang, Y., Zhou, C.: Fractional Volterra hierarchy. Lett. Math. Phys. 108, 261–283 (2018)
Bridgman, T.J., Hereman, W., Quispel, G.R.W., van der Kamp, P.: Symbolic computation of Lax pairs of partial difference equations using consistency around the cube. Found. Comput. Math. 13, 517–544 (2012)
El-Nabulsi, R.A.: Non-standard higher-order G-strand partial differential equations on matrix Lie algebra. J. Niger. Math. Soc. 36, 101–112 (2017)
The authors wish to thank the anonymous referees for their valuable suggestions.
This work is supported by the Fundamental Research Funds for the Central University (No. 2017XKZD11).
School of Mathematics, China University of Mining and Technology, Xuzhou, P.R. China
Xiangzhi Zhang & Yufeng Zhang
Xiangzhi Zhang
Yufeng Zhang
The authors declare that the study was realized in collaboration with the same responsibility. Both authors read and approved the final manuscript.
Correspondence to Xiangzhi Zhang.
Zhang, X., Zhang, Y. Some invariant solutions and conservation laws of a type of long-water wave system. Adv Differ Equ 2019, 496 (2019). https://doi.org/10.1186/s13662-019-2422-8
PACS Codes
05.45.Yv
02.30.Jr
02.30.Ik
Similarity solution
Conservation law
Spin and charge drift-diffusion in ultra-scaled MRAM cells
Simone Fiorentini1,2,
Mario Bendra1,2,
Johannes Ender1,2,
Roberto L. de Orio1,2,
Wolfgang Goes3,
Siegfried Selberherr2 &
Viktor Sverdlov1,2
Scientific Reports volume 12, Article number: 20958 (2022)
Electronic and spintronic devices
Magnetic devices
Spintronics
Designing advanced single-digit shape-anisotropy MRAM cells requires an accurate evaluation of spin currents and torques in magnetic tunnel junctions (MTJs) with elongated free and reference layers. For this purpose, we extended the analysis approach successfully used in nanoscale metallic spin valves to MTJs by introducing proper boundary conditions for the spin currents at the tunnel barrier interfaces, and by employing a conductivity locally dependent on the angle between the magnetization vectors for the charge current. The experimentally measured voltage and angle dependencies of the torques acting on the free layer are thereby accurately reproduced. The switching behavior of ultra-scaled MRAM cells is in agreement with recent experiments on shape-anisotropy MTJs. Using our extended approach is absolutely essential to accurately capture the interplay of the Slonczewski and Zhang-Li torque contributions acting on a textured magnetization in composite free layers with the inclusion of several MgO barriers.
The ever-improving semiconductor industry has relied, in recent years, on the down-scaling of its components. The presence of leakage currents has, however, caused an increase of the stand-by power consumption in traditional volatile memories like SRAM and DRAM1. Nonvolatile components would eliminate any stand-by power usage. Emerging nonvolatile spin-transfer torque (STT) magnetoresistive random access memory (MRAM) offers high speed and endurance and is attractive for stand-alone2, embedded automotive3, MCU, and IoT4 applications, as well as frame buffer memory5 and slow SRAM6.

The core of an STT-MRAM cell consists of a magnetic tunnel junction (MTJ), cf. Fig. 1a, with two ferromagnetic layers separated by an oxide tunnel barrier (TB). The reference layer (RL) is fixed either by a proper choice of materials or by antiferromagnetic pinning, while the magnetization of the free layer (FL) can be reversed. When the magnetization vectors in the two layers are parallel (P), the resistance is lower than in the anti-parallel (AP) state, providing a way to store binary information. The percentage difference between the two resistance states is labeled the tunneling magnetoresistance (TMR) ratio. In STT-MRAM, switching between the two stable configurations is achieved by running an electric current through the structure. The spin polarization of the RL generates a spin current which, when entering the free layer, acts on the magnetization via the exchange interaction. When the magnetization vectors are not aligned, conservation of angular momentum causes the transverse spin current to be quickly absorbed, generating the spin-transfer torque7,8.

Employing CoFeB for the ferromagnetic layers and MgO for the oxide layers allows TMR values of up to 600% to be reached9. CoFeB and MgO also possess suitable properties for the fabrication of MTJs with perpendicular magnetic anisotropy (PMA), which present better thermal stability, better scalability, and a lower switching current10. In order to increase the interface PMA provided by the MgO tunneling layer, the FL is often interfaced with a second MgO layer11. Recently, more advanced structures were proposed to boost the PMA even further, either by introducing more MgO layers in the FL or by using the shape anisotropy of elongated FLs12, while also improving scalability thanks to a reduced diameter. Accurate simulation tools can provide valuable support in the design of these ultra-scaled MRAM cells, cf. Fig. 1b. In order to model such devices, it is paramount to generalize the traditional Slonczewski13 approach for the torque computation, applicable only to thin FLs, to incorporate normal metal buffers or MgO barriers between multiple CoFeB free layers, as well as the barrier between the RL and FL, and the torques coming from magnetization textures or domain walls, which can be generated in elongated FLs.

In this work, we present an extension of the drift-diffusion formalism for the computation of the torque in the presence of MTJs in the structure. The model is implemented in a finite element (FE) solver based on open-source software. We show how the proposed approach is able to reproduce the expected properties of the STT observed in MTJs. Moreover, we show that the STT contribution and the one coming from magnetization gradients in the bulk of the magnetic layers are non-additive, so that a unified treatment of the two contributions is necessary in order to describe the torque acting in ultra-scaled MRAM devices.
Finally, we present switching simulations carried out with the described approach. The parameters employed for all the simulations, unless specified differently in the text, are summarized in the supplementary material available online, together with the weak formulation employed by the FE solver.
(a) MTJ structure with non-uniform magnetization configuration. The structure is composed of a reference layer (dark red), a tunnel barrier (green), a free layer (yellow), and two non-magnetic contacts (light blue). The arrows represent the magnetization orientation. (b) Model examples of elongated ultra-scaled MRAM cells, with single (top) or composite (middle and bottom) free layer.
In micromagnetic simulations, the magnetization dynamics is described by the Landau-Lifshitz-Gilbert equation:
$$\begin{aligned} \frac{\partial \textbf{m}}{\partial t} = -\gamma \mu _0 \textbf{m}\times \mathbf {H_{eff}}+\alpha \textbf{m}\times \frac{\partial \textbf{m}}{\partial t}+\frac{1}{M_S}\mathbf {T_S} \end{aligned}$$
\(\textbf{m}\) is a unit vector pointing in the magnetization direction, \(\gamma \) is the gyromagnetic ratio, \(\mu _0\) is the magnetic permeability, \(\alpha \) is the Gilbert damping constant, \(M_S\) is the saturation magnetization, \(\mathbf {H_{eff}}\) is an effective field containing the contribution of external field, exchange interaction, and demagnetizing field, and \(\mathbf {T_S}\) is the STT term. We implemented the equation in a Finite Element (FE) solver based on the Open Source library MFEM14. The contribution of the demagnetizing field is evaluated only on the disconnected magnetic domain by using a hybrid approach combining the boundary element method and the FE method15. A complete description of the torque term, which allows to include all physical phenomena responsible for proper ultra-scaled MRAM operation, can be obtained by computing the non-equilibrium spin accumulation. For this purpose, the drift-diffusion (DD) formalism has already been successfully applied in a spin-valve structure with a non-magnetic spacer layer16,17,18. The drift-diffusion equations for charge and spin current density are19:
$$\begin{aligned} \mathbf {J_C}= & {} \sigma \textbf{E} + \beta _D D_e\frac{e}{\mu _B} \left[ \left( \nabla \textbf{S}\right) ^T\textbf{m}\right] \end{aligned}$$
$$\begin{aligned} \mathbf {\overline{J_S}}= & {} -\frac{\mu _B}{e} \beta _\sigma \sigma \textbf{m} \otimes \textbf{E}-D_e\nabla \textbf{S} \end{aligned}$$
\(\mu _B\) is the Bohr magneton, e is the electron charge, \(\beta _\sigma \) and \(\beta _D\) are polarization parameters, \(D_e\) is the electron diffusion coefficient, and \(\textbf{E}\) is the electric field. \(\mathbf {J_C}\) is the charge current density, \(\mathbf {\overline{J_S}}\) is the spin polarization current density tensor, where the components \(J_{S,ij}\) indicate the flow of the i-th component of spin polarization in the j-th direction, \(\nabla \cdot \mathbf {\overline{J_S}}\) is the divergence of \(\mathbf {\overline{J_S}}\) with components \(\left( \nabla \cdot \mathbf {\overline{J_S}}\right) _i = \sum _j \frac{\partial J_{S,ij}}{\partial x_j}\), and \(\nabla \textbf{S}\) is the vector gradient of \(\textbf{S}\), with components \(\left( \nabla \textbf{S}\right) _{ij} = \frac{\partial S_i}{\partial x_j}\). The term \(\left( \nabla \textbf{S}\right) ^T\textbf{m}\) is a vector with components \(\left( \left( \nabla \textbf{S}\right) ^T\textbf{m}\right) _i = \sum _j \frac{\partial S_j}{\partial x_i}m_j\). \(\mathbf {\overline{J_S}}\) will be referred to as spin current density in the rest of the paper, and can be converted to the usual units by multiplying by \(\hbar /(2\mu _B)\). By inserting the expression for \(\textbf{E}\) obtained from (2a) in (2b), one obtains the expression for the spin current density:
$$\begin{aligned} \mathbf {\overline{J_S}}= & {} -\frac{\mu _B}{e} \beta _\sigma \textbf{m} \otimes \left( \mathbf {J_C}-\beta _D D_e\frac{e}{\mu _B} \left[ \left( \nabla \textbf{S}\right) ^T\textbf{m}\right] \right) -D_e\nabla \textbf{S} \end{aligned}$$
The spin accumulation in the steady-state can then be obtained readily:
$$\begin{aligned}{} & {} -\nabla \cdot \mathbf {\overline{J_S}}-D_e\frac{\textbf{S}}{\lambda _{sf}^2}-\mathbf {T_S}=\textbf{0} \end{aligned}$$
$$\begin{aligned}{} & {} \quad \mathbf {T_S}=-\frac{D_e}{\lambda _{J}^2}\textbf{m}\times \textbf{S}-\frac{D_e}{\lambda _{\varphi }^2}\textbf{m}\times \left( \textbf{m}\times \textbf{S}\right) \end{aligned}$$
\(\lambda _{sf}\) is the spin-flip length, \(\lambda _J\) is the spin exchange length, and \(\lambda _{\varphi }\) is the spin dephasing length. The term \(\mathbf {T_S}\) is the same one entering (1), as it describes the transfer of angular momentum between the magnetization \(\textbf{m}\) and the spin accumulation \(\textbf{S}\).
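To illustrate the roles of the three lengths, the following sketch (our addition, a plain 1-D finite-difference toy model rather than the 3-D FE implementation described in this work) solves the steady-state equation (4a) with the torque (4b) for a uniform magnetization along z, with a mostly transverse spin accumulation prescribed at the left boundary; the decay lengths are the values quoted later in the paper:

```python
# Toy 1-D sketch (our addition) of the steady-state spin accumulation.
# With J_S = -D_e grad(S), Eq. (4a) reduces to
# S'' - S/l_sf^2 + (1/l_J^2) m x S + (1/l_phi^2) m x (m x S) = 0.
import numpy as np

lam_sf, lam_J, lam_phi = 10e-9, 1e-9, 0.4e-9   # lengths used in the paper
L, N = 5e-9, 500
dx = L / (N - 1)
m = np.array([0.0, 0.0, 1.0])

mx = np.array([[0.0, -m[2], m[1]],
               [m[2], 0.0, -m[0]],
               [-m[1], m[0], 0.0]])            # matrix form of (m x .)
M = mx / lam_J**2 + (mx @ mx) / lam_phi**2     # so that T_S = -D_e * M @ S

A = np.zeros((3 * N, 3 * N))
b = np.zeros(3 * N)
I3 = np.eye(3)
for i in range(1, N - 1):
    s = slice(3 * i, 3 * i + 3)
    A[s, 3 * (i - 1):3 * i] += I3 / dx**2
    A[s, 3 * (i + 1):3 * (i + 2)] += I3 / dx**2
    A[s, s] += -2 * I3 / dx**2 - I3 / lam_sf**2 + M

# Dirichlet data: mostly transverse spin accumulation injected at x = 0
A[0:3, 0:3] = I3; b[0:3] = [1.0, 0.0, 0.1]     # arbitrary units
A[-3:, -3:] = I3                               # S(L) = 0

S = np.linalg.solve(A, b).reshape(N, 3)
print(S[0], S[N // 4], S[N // 2])  # S_x, S_y decay fast; S_z with lam_sf
```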
As the DD approach only accounts for semi-classical transport properties, it must be supplemented with appropriate conditions for the TB to account for the dependence of the torque on the tunneling process across the MTJ.
Model extension to include MTJ properties
Through the NEGF formalism, it is possible to compute expressions for the charge and spin current flowing through the TB of an MTJ20. Such expressions can be simplified to include the most prominent characteristics of the transport in a few polarization parameters21,22:
$$\begin{aligned}{} & {} J_C^{TB} \backsim J_0(V)\,\left( 1+P_{RL}\,P_{FL}\,\cos \theta \right) \end{aligned}$$
$$\begin{aligned}{} & {} J_{S,x}^{TB} \backsim -\frac{{a_{mx}}\,P_{RL}+{a_{mx}}\,P_{FL}\,\cos \theta }{ 1+P_{RL}\,P_{FL}\,\cos \theta }\,\frac{\hbar }{2e}\,J_C^{TB} \end{aligned}$$
$$\begin{aligned}{} & {} J_{S,y}^{TB} \backsim -\frac{1/2\,\left( P_{RL}\,P_{RL}^\eta -P_{FL}\,P_{FL}^\eta \right) \,\sin \theta }{ 1+P_{RL}\,P_{FL}\,\cos \theta }\,\frac{\hbar }{2e}\,J_C^{TB} \end{aligned}$$
$$\begin{aligned}{} & {} J_{S,z}^{TB} \backsim -\frac{{a_{mx}}\,P_{FL}\,\sin \theta }{ 1+P_{RL}\,P_{FL}\,\cos \theta }\,\frac{\hbar }{2e}\,J_C^{TB} \end{aligned}$$
\(J_0(V)\) contains the voltage-dependent portion of the current density, \(P_{RL}\) and \(P_{FL}\) are the in-plane Slonczewski polarization parameters, \(P_{RL}^\eta \) and \(P_{FL}^\eta \) are out-of-plane polarization parameters, and \({a_{mx}}\) describes the influence of the interface spin-mixing conductance on the transmitted in-plane spin current. The given expressions consider the RL magnetization pointing in the x-direction, and the FL magnetization lying in the xz-plane, at an angle \(\theta \) with respect to the RL one.
We extended the DD approach to be able to include the above equations for the current flowing through the MTJ. We modeled the TB as a poor conductor with a local conductivity depending on the relative orientation of the magnetization23. The TB conductivity expression is:
$$\begin{aligned} \sigma \left( \mathbf {m_{RL}},\,\mathbf {m_{FL}}\right) = \sigma _0 \left( 1+ \left( P_{FL}\,P_{RL}\right) \mathbf {m_{RL}}\cdot \mathbf {m_{FL}}\right) \end{aligned}$$
\(\sigma _0=(\sigma _P+\sigma _{AP})/2\) is the angle independent portion of the conductivity, \(\sigma _{P(AP)}\) is the conductivity in the parallel (anti-parallel) state, and \(\mathbf {m_{RL(FL)}}\) is the magnetization of the RL(FL) close to the interface. It is a manifestation of Ohm's law relating the voltage and the charge current through a structure with many transversal modes24. Computing the TMR from (6) gives back the Julliere expression25:
$$\begin{aligned} TMR = \frac{G_P-G_{AP}}{G_{AP}} = \frac{2\,P_{RL}\,P_{FL}}{1-P_{RL}\,P_{FL}} \end{aligned}$$
\(G_{P(AP)}\) is the conductance in the parallel (anti-parallel) state. To compute the current density, we solve:
$$\begin{aligned} -\nabla \cdot \left( \sigma \nabla V\right)= & {} 0 \end{aligned}$$
$$\begin{aligned} \mathbf {J_C}= & {} \sigma \nabla V \end{aligned}$$
V is the electrical potential, and \(\sigma \) is described by (6) in the tunnel barrier. Figure 2 shows the redistribution of the current density in an MTJ at a fixed voltage for the FL magnetization configuration shown in Fig. 1a. The structure has a diameter of 40 nm, the FL and RL are 2 nm thick, the TB is 1 nm thick, and the NM contacts are 50 nm thick. The current density is larger in the center, where the FL magnetization is parallel to that of the RL and the magnetization-dependent conductivity is the highest. The difference between the lowest and highest current density values is dictated by TMR \(\backsim \) 200%.
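A minimal numerical sketch (our addition) of the conductivity model (6) and the resulting Julliere TMR (7) reads as follows; the polarization value is illustrative, chosen to give a TMR close to 100%:

```python
# Sketch (our addition) of the angle-dependent TB conductivity (6) and
# the Julliere TMR (7).
import numpy as np

def tb_conductivity(m_rl, m_fl, sigma_0, p_rl, p_fl):
    """Local tunnel-barrier conductivity, Eq. (6)."""
    return sigma_0 * (1.0 + p_rl * p_fl * np.dot(m_rl, m_fl))

p = 0.58                                   # P_RL = P_FL, illustrative
tmr = 2 * p * p / (1 - p * p)
print(f"Julliere TMR = {100 * tmr:.0f}%")

m = np.array([1.0, 0.0, 0.0])
g_p = tb_conductivity(m, m, 1.0, p, p)     # parallel state
g_ap = tb_conductivity(m, -m, 1.0, p, p)   # anti-parallel state
print(g_p / g_ap, 1.0 + tmr)               # identical by construction
```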
Current density in an MTJ biased under a constant voltage for the non-uniform FL magnetization configuration sketched in Fig. 1a. The center planes are at the TB interface, the side planes are in the NM contacts.
Angular dependence of the damping-like torque acting on a semi-infinite FL based on the DD equations (scatter plot) and on the Slonczewski expression13 (dash-dotted line).
Spin accumulation (a) and torque (b) computed with the spin-current boundary condition (9) for an MTJ with semi-infinite ferromagnetic layers. Magnetization is along x in the RL and along z in the FL. The three curves represent x-, y-, and z-components of the computed spin accumulation and spin torque, respectively, along an axis going through the center of the structure. Brown vectors report the magnetization direction in both ferromagnetic layers.
While it is possible to use (3) and (4a) together with (6) to mimic the torque magnitude expected in an MTJ by tuning the tunnel barrier parameters23, some of the torque properties are not reproduced in this way. In Fig. 3, the angular dependence of the torque acting on a semi-infinite FL is compared with the Slonczewski expression13. The results show a clear deviation of the DD results from the expected ones. Therefore, the spin current part of (5) must also be accounted for. The traditional FE approach applied to the DD equations16 enforces the spin current and the spin accumulation to be continuous through all the interfaces. In order to include the spin current from equation (5) in the model, we take the diffusion coefficient of the TB to be low, proportionally to the conductivity, and apply the following expression as a boundary condition for both the RL|TB and TB|FL interfaces:
$$\begin{aligned} \mathbf {J_S^{TB}} = -\frac{\mu _B}{e} \, \frac{\mathbf {J_C^{TB}}\cdot \textbf{n}}{1 + P_{RL}\,P_{FL}\,\mathbf {m_{RL}}\cdot \mathbf {m_{FL}}} \left[ {a_{mx}}\,P_{RL}\,\mathbf {m_{RL}} + {a_{mx}}\,P_{FL}\,\mathbf {m_{FL}} + 1/2\,\left( P_{RL}\,P_{RL}^\eta - P_{FL}\,P_{FL}^\eta \right) \,\mathbf {m_{RL}}\times \mathbf {m_{FL}}\right] \end{aligned}$$
\(\mathbf {J_C^{TB}}\) is the electric current density at the interface, \(\textbf{n}\) is the interface normal, and \(\mathbf {m_{RL(FL)}}\) is the unit magnetization vector of the RL(FL) at the interface. Doing this, we fix the spin current to the value prescribed by (9), when \(\mathbf {J_C}\) flows through the MTJ. This is the key to describe the spin current and the spin accumulation in the RL and FL of an MTJ. Employing this approach gives the opportunity to describe the spin and charge transport coupled to the magnetization in arbitrary stacks of MTJs and metallic spin valves with a unified LLG-DD approach, and it allows to compute a fully three-dimensional solution in the presence of non-uniform magnetization configurations.
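For reference, a direct transcription of the boundary condition (9) is sketched below (our addition); the prefactor \(\mu_B/e\) is folded into pref, and the example parameter values are those used later for the comparison with experiment:

```python
# Sketch (our addition) of the interface spin-current boundary
# condition (9).
import numpy as np

def spin_current_tb(m_rl, m_fl, jc_n, p_rl, p_fl, p_rl_eta, p_fl_eta,
                    a_mx, pref=1.0):
    """Spin current fixed at the RL|TB and TB|FL interfaces, Eq. (9);
    jc_n is the charge current density J_C . n at the interface."""
    denom = 1.0 + p_rl * p_fl * np.dot(m_rl, m_fl)
    vec = (a_mx * p_rl * m_rl + a_mx * p_fl * m_fl
           + 0.5 * (p_rl * p_rl_eta - p_fl * p_fl_eta) * np.cross(m_rl, m_fl))
    return -pref * jc_n / denom * vec

# RL along x, FL along z (the configuration analyzed in Fig. 4)
js = spin_current_tb(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                     jc_n=1.0, p_rl=0.66, p_fl=0.66,
                     p_rl_eta=0.11, p_fl_eta=0.11, a_mx=0.36)
print(js)
```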
Our approach is applied to analyze spin accumulation and torque in a structure with semi-infinite ferromagnetic leads separated by a 1 nm thick tunnel junction, for uniform magnetization along x in the RL and along z in the FL. The results are shown in Fig. 4a and b. To evaluate \(\mathbf {J_S^{TB}}\) with (9) at every boundary point of the RL|TB interface, with a magnetization value \(\mathbf {m_{RL}}\), the solver looks for the closest point on the opposite TB|FL interface and uses its corresponding \(\mathbf {m_{FL}}\) value. The same procedure is carried out for the TB|FL interface. The transverse spin dephasing length is \(\lambda _\varphi =0.4\) nm, the exchange length is \(\lambda _J=1\) nm, and the spin-flip length is \(\lambda _{sf}=10\) nm. The short value of the dephasing length is employed to guarantee the fast absorption of the transverse components of the spin accumulation near the interface26, as expected in the presence of strong ferromagnets13,27. The boundary condition imposed by (9) creates a jump between the values of the spin accumulation components parallel to the magnetization at the left and right interface of the TB. This is the manifestation of the MTJ polarization effects on the spin current28. The transverse spin accumulation is quickly absorbed, so that the torques are acting near the interfaces. We note that computing the spin accumulation in the whole structure gives the torque acting in all the ferromagnetic layers from a unified expression.
Figure 5 shows the angular dependence of the damping-like torque with the inclusion of the spin current boundary condition, for semi-infinite ferromagnetic layers. The typical sinusoidal dependence13,21 of the torque acting on the FL in an MTJ is now reproduced exactly, for various values of the RL|TB interface spin polarization. The structure is biased by a fixed voltage, so that the torque is independent of the TB|FL polarization, and only depends on the value of the RL|TB one.
Angular dependence of the damping-like torque computed with the spin-current boundary conditions, for semi-infinite FL and RL. Dash-dotted lines represent the dependence described by the Slonczewski expression. The expected sinusoidal angular dependence of an MTJ is reproduced, for several values of the RL spin polarization parameter.
Dependence of both resistance (a) and damping-like (DL) and field-like (FL) torques (b) on the bias voltage, compared with experimental results29.
The implementation discussed until now produces a linear dependence of the torques on the bias voltage, with a vanishing damping-like component for \(P_{RL}^\eta =P_{FL}^\eta \). Fabricated MRAM devices usually exhibit clear non-linearity in the observed bias dependence of both the torques and the TMR29,30. As a way to account for this non-linearity, bias dependence can be included in the polarization parameters \(P_{RL}\) and \(P_{FL}\). It can be postulated as31:
$$\begin{aligned} P_{RL}(V)=\frac{1}{1+P_0\,\exp (V/V_0)}, \,\,\, P_{FL}(V)=P_{RL}(-V) \end{aligned}$$
V is the voltage drop across the TB, \(P_0\) can be extracted from the TMR at zero bias, and \(V_0\) from the high bias behavior. A comparison of both TMR and torque results with experimental ones29 is reported in Fig. 6a and b, showing a good agreement. The results were obtained for \(P_{RL}(0)=P_{FL}(0)=0.66\), \(P_{RL}^\eta =P_{FL}^\eta =0.11\), \({a_{mx}}=0.36\), \(V_0=0.65\) V, and \(\sigma _0\) extracted from the anti-parallel resistance \(R_{AP}=294\) \(\Omega \) of the experimental structure, possessing a surface area of \(70~\text {nm}\times 250~\text {nm}\)29. Additional bias dependence features could be included by having \(\sigma _0\) also depend on the applied bias voltage32.
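A small sketch (our addition) of the bias-dependent polarizations (10) and the resulting TMR via the Julliere expression (7); \(P_0\) is fixed here by the chosen zero-bias polarization, and \(V_0\) is the value quoted above:

```python
# Sketch (our addition) of the bias-dependent polarizations (10).
import numpy as np

P_zero, V0 = 0.66, 0.65            # P_RL(0) = P_FL(0) = 0.66, V0 = 0.65 V
P0 = 1.0 / P_zero - 1.0            # ensures P_RL(0) = P_zero

def p_rl(v):
    return 1.0 / (1.0 + P0 * np.exp(v / V0))

def p_fl(v):
    return p_rl(-v)

for v in (0.0, 0.3, 0.6):          # TMR decreases with bias, cf. Fig. 6a
    tmr = 2 * p_rl(v) * p_fl(v) / (1 - p_rl(v) * p_fl(v))
    print(f"V = {v:.1f} V: TMR = {100 * tmr:.0f}%")
```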
GMR effect in spin-valves
While the proposed approach is able to compute both the TMR and torque in an MTJ, in ultra-scaled devices non-magnetic spacer layers can also be used to split the FL into two parts and avoid the formation of magnetization textures or domain walls. In a spin-valve with a metallic spacer layer, it is the giant magnetoresistance (GMR) effect which causes the resistance of the structure to depend on the relative angle between the magnetization vectors. Such an effect can be accounted for by taking the magnetization-dependent contribution in (2a) into account when computing the current density. By taking \(\nabla \cdot \mathbf {J_C}=0\) (in the absence of current sources) and \(\textbf{E}=-\nabla V\) in (2a), one obtains the equation for the electrical potential:
$$\begin{aligned} -\nabla \cdot \left( \sigma \nabla V\right) = -\beta _D\,D_e\,\frac{e}{\mu _B}\nabla \cdot \left[ \left( \nabla \textbf{S}\right) ^T\textbf{m}\right] \end{aligned}$$
The additional right-hand side term depends on the spin accumulation, which in turn depends on the current density. In order to compute a solution which takes the interdependence into account, we iterate over the solution of (11) and (4a), until a convergence threshold is reached. This approach can be directly used for the FE implementation of the two separate equations, and does not require additional care for the inclusion of the boundary condition (9) in a coupled system of equations.
(a) Angular dependence of the total current in a spin-valve structure with metallic spacer, computed with the iterative approach for various values of the convergence parameter \(\epsilon \). (b) Angular dependence of the total current in an MTJ, computed using both the direct and iterative approach.
The iterative solution is computed as follows:
1. We obtain a first estimate \(\mathbf {S_0}\) of the spin accumulation by solving (8a) and (4a) with the spin current density taken from (3).
2. We use \(\mathbf {S_0}\) to compute the electrical potential from (11).
3. This potential is then used to obtain an updated solution \(\mathbf {S_1}\) from (4a), with the spin current density now described by (2b).
4. Steps 2 and 3 are iterated until the solver reaches the convergence criterion
$$\begin{aligned} \frac{\Vert \mathbf {S_n}\Vert _{L2}-\Vert \mathbf {S_{n-1}}\Vert _{L2}}{\Vert \mathbf {S_n}\Vert _{L2}} < \epsilon \end{aligned}$$
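Schematically, the loop can be written as follows (our sketch; solve_potential and solve_spin_accumulation are placeholders for the FE solves of (11) and (4a), not actual MFEM calls):

```python
# Schematic fixed-point loop (our addition) mirroring steps 1-4 above.
import numpy as np

def iterate_gmr(solve_potential, solve_spin_accumulation,
                eps=0.01, max_iter=50):
    S = solve_spin_accumulation(V=None)     # step 1: first estimate S_0
    for n in range(1, max_iter + 1):
        V = solve_potential(S)              # step 2: potential from (11)
        S_new = solve_spin_accumulation(V)  # step 3: update S via (2b), (4a)
        # step 4: relative change of the L2 norm, criterion (12)
        rel = abs(np.linalg.norm(S_new) - np.linalg.norm(S)) \
              / np.linalg.norm(S_new)
        S = S_new
        if rel < eps:
            break
    return V, S, n
```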
Figure 7a shows the obtained dependence of the total current density on the relative angle between the magnetization vectors in the FL and RL, for several values of \(\epsilon \). The solution is computed in the structure in Fig. 1a, with the middle layer treated as a non-magnetic metallic spacer, for an applied voltage of \(-0.2\) V. The dashed lines represent a fit carried out using equation (13)33.
$$\begin{aligned} I(\theta ) = \frac{V}{R_P}\,\frac{1+\chi \,\cos ^2 \theta }{1+GMR+(\chi -GMR)\,\cos ^2 \theta } \end{aligned}$$
V is the applied bias voltage, \(R_P\) is the resistance in the parallel state, and \(\chi \) and GMR are used as fitting parameters. The obtained GMR is \(\backsim 11\%\), with the results obtained using \(\epsilon =1\%\) converging fast (\(n\le 3\)) and giving a good approximation.
Figure 7b reports the current dependence obtained by considering a tunneling middle layer. The data were computed both by using the direct solution of equations (2b), (4), (8) and the iterative solution described in this section. The fitting can be performed by using (6) as the angular dependence expression. As the iterative solver always converges for \(n=1\) and the results are indistinguishable from the direct solution, these findings confirm that the latter can be safely employed for all structures only containing MTJs.
Torques in elongated ultra-scaled devices
In the presence of elongated FLs like the ones in Fig. 1b, the switching of the whole layer at the same time is not guaranteed: a domain wall or magnetization textures can be generated, with their propagation through the FL affecting the switching behavior. In this case, the additional spin torques created by the presence of magnetization gradients in the bulk of the ferromagnetic layers must be taken into account. These torques are modeled by the Zhang and Li (ZL)34 equation. We generalized the ZL torques to include \(\lambda _\varphi \) using the expression:
$$\begin{aligned} \mathbf {T_{ZL}} = -\frac{\mu _B}{e} \, \frac{\beta }{1+(\epsilon +\epsilon ')^2} \, \left( \left( 1+\epsilon '\,\left( \epsilon +\epsilon '\right) \right) \, \textbf{m}\times \left[ \textbf{m}\times \left( \mathbf {J_C}\cdot \nabla \right) \textbf{m} \right] - \epsilon \, \textbf{m} \times \left( \mathbf {J_C}\cdot \nabla \right) \textbf{m} \right) \end{aligned}$$
\(\epsilon = \left( \lambda _J / \lambda _{sf}\right) ^2\) and \(\epsilon ' = \left( \lambda _J / \lambda _\varphi \right) ^2\). Such an expression can be derived from the spin accumulation equation by taking \(\nabla \textbf{S}=0\), and it is strictly valid only when the change of magnetization in space happens over length scales longer than \(\lambda _{sf}\). To test this assumption, we consider the magnetization profile shown in Fig. 8a and compute the torque for \(\lambda _{sf}=10\) nm, \(\lambda _{J}=1\) nm, \(\lambda _{\varphi }=5\) nm. Figure 8b demonstrates that, for a magnetization profile width of \(\backsim 100\) nm, \(\mathbf {T_S}\) is well reproduced with (14). However, if the width of the magnetization profile is reduced to \(\backsim 3\) nm, the spin accumulation gradients neglected in (14) affect the result, and a large deviation of \(\mathbf {T_S}\) from \(\mathbf {T_{ZL}}\) is observed, especially for the field-like torque (y-component), as shown in Fig. 9a. However, the presence of a short spin dephasing length, \(\lambda _\varphi =0.4\) nm, guarantees the fast absorption of the transverse spin, and a good agreement between \(\mathbf {T_S}\) and \(\mathbf {T_{ZL}}\) is recovered, cf. Fig. 9b.
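For a one-dimensional texture, the generalized ZL expression (14) can be evaluated directly; the sketch below (our addition) does so for a texture rotating from +z to -x, as in Fig. 8a, with an illustrative current density, polarization factor \(\beta\), and prefactor:

```python
# Sketch (our addition) evaluating the generalized ZL torque (14) on a
# 1-D texture, with J_C along x.
import numpy as np

lam_sf, lam_J, lam_phi = 10e-9, 1e-9, 0.4e-9
eps = (lam_J / lam_sf)**2
epsp = (lam_J / lam_phi)**2

def t_zl(m, dx, jc_x, beta_zl=1.0, pref=1.0):
    """Zhang-Li torque (14) for m(x) sampled on a uniform grid, shape (N, 3)."""
    jgm = jc_x * np.gradient(m, dx, axis=0)   # (J_C . grad) m
    mxjgm = np.cross(m, jgm)                  # m x (J_C . grad) m
    mxmxjgm = np.cross(m, mxjgm)              # m x [m x (J_C . grad) m]
    fac = pref * beta_zl / (1.0 + (eps + epsp)**2)
    return -fac * ((1.0 + epsp * (eps + epsp)) * mxmxjgm - eps * mxjgm)

N, L = 200, 3e-9                              # texture width ~ 3 nm
theta = np.linspace(0.0, np.pi / 2, N)
m = np.stack([-np.sin(theta), np.zeros(N), np.cos(theta)], axis=1)
T = t_zl(m, L / (N - 1), jc_x=1.0)
print(T[N // 2])
```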
(a) Non-uniform magnetization texture with the magnetization orientation changing from z to -x. (b) Comparison of the spin torque \(\mathbf {T_S}\) to the Zhang-Li torque \(\mathbf {T_{ZL}}\) for a magnetization texture longer than \(\lambda _{sf}\), for \(\lambda _{J}=1\) nm and \(\lambda _{\varphi }=5\) nm. The two approaches are in good agreement.
In MRAM cells with elongated FLs, the MTJ and ZL torque contributions act simultaneously in the presence of magnetization textures and domain walls in the bulk of the layer. We compute the torque in an experimental MTJ structure12 with a 5 nm RL, a 0.9 nm TB, and an elongated 15 nm FL, with a magnetization profile in the FL similar to the one shown in Fig. 8a: the magnetization vector goes from the z-direction to the -x-direction over the length of the layer. The magnetization in the RL is pointing towards the x-direction. The solution is computed with the same parameters as the ones employed for Fig. 9b.
Comparison of the spin torque \(\mathbf {T_S}\) to the Zhang-Li torque \(\mathbf {T_{ZL}}\) for a magnetization texture shorter than \(\lambda _{sf}\), for \(\lambda _{J}=1\) nm and \(\lambda _{\varphi } = 5\) nm in (a) and \(\lambda _{\varphi }=0.4\) nm in (b). The shorter dephasing length takes the role of quickly absorbing the transverse spin accumulation components, so that the agreement between the two approaches is recovered.
(a) Torques computed for an MRAM cell with elongated RL and FL and a magnetization profile in the FL similar to the one of Fig. 8a. The brown vectors indicate the magnetization direction in the RL and in two parts of the FL. (b) Close-up of the spin torque \(\mathbf {T_S}\) compared to the Zhang-Li torque \(\mathbf {T_{ZL}}\). The presence of the MTJ influences also the bulk portion of the torque, making the unified approach the most suitable for dealing with ultra-scaled MTJs with elongated ferromagnetic layers.
The torque \(\mathbf {T_S}\) acting in the FL for this magnetization profile is shown in Fig. 10a. Both the interface contribution from the TB and the bulk ZL contribution are present. In Fig. 10b we show a close-up of the bulk portion of \(\mathbf {T_S}\), compared with the ZL torque \(\mathbf {T_{ZL}}\) computed in the FL for the same magnetization configuration. The comparison reveals a substantial difference between the torques obtained with our model and the traditional approach, where the ZL torque is simply added to the Slonczewski term, even in the presence of a short spin dephasing length. Our approach clearly demonstrates that, in an MTJ with elongated ferromagnetic layers, the Slonczewski and ZL torques are not independent: the presence of the TB also generates a spin accumulation component parallel to the magnetization, whose decay is dictated by \(\lambda _{sf}\), cf. Fig. 4a. This component interacts with the magnetization texture, modifying the ZL torque contribution. A unified treatment of the MTJ polarization process and FL magnetization texture is thus required to accurately describe the torque and switching in ultra-scaled MRAM.
Finally, we investigate the magnetization behavior during switching in ultra-scaled MRAM cells with a diameter of 2.3 nm, recently demonstrated experimentally12. The values of the resistance-area product (RA) and the TMR in the simulated structures are 2 \(\Omega \,\mu \text {m}^2\) and 100%, respectively. In Fig. 11 the behavior of the top cell of Fig. 1b, with a single FL of 10 nm length, capped by an MgO TB separating it from the non-magnetic contact, is presented, under a bias voltage of 1.5 V. The thicknesses of the RL, TBs, and non-magnetic contacts are 5 nm, 0.9 nm, and 50 nm, respectively. The magnetization of the RL is in the positive x-direction. The magnetization of the FL is tilted \(5^\circ \) away from the perfect P or AP orientation, to emulate the destabilizing effect of a non-zero temperature on the system. The precise value employed for the tilting angle only affects the duration of the incubation period before the start of the switching process, and does not change the overall behavior of the magnetization reversal. The value of the bias voltage, while being sufficient to achieve switching for the AP to P scenario, is not enough to reverse the magnetization from P to AP. The additional stability of the parallel configuration comes from the stray field contribution of the RL, which favors it. Due to the presence of a stronger spin accumulation component parallel to the magnetization at the TB interface in the P state, the interaction of the Slonczewski and Zhang-Li torque contributions quickly generates a texture in the magnetization, whose average x-component slightly deviates from the starting configuration, as evidenced by the dip in the plot during the first nanoseconds of the simulation. Despite this, the overall torque is not strong enough to overcome the perpendicular anisotropy. We carried out additional simulations with bias values from 2 to 4 V, presented in Fig. 12, showing how an increased bias voltage, which entails an increased value of the torque, is able to achieve switching for both configurations. Moreover, we investigated the switching behavior of structures with FL thickness of 5 nm and 7.5 nm. The results are presented in Fig. 13. A shorter layer possesses a reduced energy barrier separating the two magnetization configurations35, so that the speed of AP to P switching is improved, and P to AP switching is achieved in the case of the 5 nm layer.
Switching results for a structure with an elongated FL, for both the AP to P and P to AP scenarios, under a bias voltage of 1.5 V.
Switching results for a structure with an elongated FL under increasing bias voltage values, for both the AP to P and P to AP scenarios.
Switching results for a structure with an elongated FL for several lengths of the FL.
Switching results for a structure with composite FL, for both the AP to P and P to AP scenarios, under a bias voltage of 1.5 V.
The switching performance can be improved by employing a structure where the FL is split into two parts of 5 nm length by an additional MgO layer in the middle, presented in the middle of Fig. 1b. The addition of an MgO layer boosts the stability of the composite FL because of an increased interface anisotropy contribution, while the two parts of the FL have a preferred aligned configuration because of the stray field they exert on one another12. We employed our approach to carry out switching simulations in such a structure, under a bias voltage of 1.5 V, presented in Fig. 14. The resulting plot evidences how the switching process is overall faster in the composite structure as compared to the one with a single elongated layer, and that P to AP switching is achieved for lower values of the bias voltage. Figure 15 shows how the improved performance of the second structure comes from the composite nature of the FL, allowing for the different sections of the FL layer to be switched one at a time. In the AP configuration, the RL exerts a torque on the first part of the FL (FL1) to push it in the positive x-direction, parallel to it. At the same time, the second part of the FL (FL2) also exerts a torque on FL1 to push the magnetization to the positive x-direction, so that it is anti-parallel to FL2. Both torques' contributions act in the same direction, causing FL1 to switch first and fast. At the same time, the torque acting from FL1 towards FL2 favors the two magnetization vectors to be parallel, keeping FL2 in its original orientation. However, after the magnetization of FL1 has switched, the torque acting on FL2 changes its sign, forcing it to switch. As the torque acts only from FL1, the magnitude is smaller than that acting in the first part of the switching process, resulting in a slower reversal of the FL2 magnetization. The three stages of AP to P switching are showcased in Fig. 15a. When going from P to AP, the opposite process happens. The torque acting from FL2 on FL1 is opposite to that from the RL, while the torque acting from FL1 on FL2 is favoring magnetization reversal, so that FL2 switches first. As only the torque from FL1 is acting, the switching time of FL2 is relatively slow. After FL2 has switched, the torque contributions from FL2 and the RL act on FL1 in the same direction, completing the switching fast. The three stages of P to AP switching are shown in Fig. 15b. The obtained switching time and the applied bias voltage agree well with the experimentally reported results12, and show how our approach can be employed to investigate the switching behavior of MRAM devices.
Switching stages of an ultra-scaled STT-MRAM cell with composite free layer, showcasing how the different parts of the FL switch one at a time. The RL is the first section on the left of the structure, while the second and third sections are the two parts of the FL (from left to right, FL1 and FL2, respectively). AP to P switching is presented in (a), while P to AP switching in (b).
In order to further analyze the performance of a composite FL, we performed simulations to investigate the behavior of a structure with three FL segments of 3.5 nm length each, to keep the same aspect ratio of the whole FL. The structure is reported on the bottom of Fig. 1b. All the segments are separated by 0.9 nm thick MgO layers. The results for both AP to P and P to AP switching are reported in Fig. 16, for a bias voltage of 1.5 V. The switching process is qualitatively similar to the one of the structure with two FL segments, with the three sections of the FL switching one at a time. In the AP configuration, the torque coming from the RL and FL2 causes the fast switching of FL1 to the positive x-direction. At this point, the torque coming from FL1 and the third part of the FL (FL3) causes FL2 to also switch fast towards the positive x-direction. Finally, as only the torque coming from FL2 acts on FL3, the latter has a slower magnetization reversal which completes the switching process. When going from P to AP, as is the case for the structure with two FL segments, the opposite process happens. The torques acting on FL1 and FL2 from the adjacent layers compensate each other, and only the torque acting from FL2 on FL3 is able to cause the magnetization reversal of the latter. At this point, the torque acting from FL1 and FL3 on FL2 becomes additive, so that FL2 switches faster. This is finally followed by the fast switching of FL1, as the torque contributions coming from the RL and FL2 push its magnetization towards the negative x-direction. As shown in Fig. 16, the complete switching process is faster in the structure with three FL segments, for both AP to P and P to AP realizations. This indicates that increasing the number of segments provides an advantage in terms of switching time and bias, and the multiple magnetization states reached during the switching process make these structures promising candidates as multi-bit memory cells.
Switching results for a structure with three FL segments, for both the AP to P and P to AP scenarios, under a bias voltage of 1.5 V (solid line). The magnetization trajectories are compared to the ones obtained in the structure with two FL segments (dash-dotted lines).
We presented a modeling approach to accurately describe the charge and spin currents, the torques, and the magnetization dynamics in ultra-scaled MRAM cells consisting of several elongated pieces of ferromagnets separated by multiple tunnel barriers. We showed how the fully 3D spin and charge drift-diffusion equations can be supplied with appropriate conditions at the tunneling layer to reproduce the TMR effect as well as the angular and voltage dependence of the torque expected in MTJs. We reported how an iterative solution of the charge and spin accumulation equations can be employed to account for the GMR effect. The advantage of the proposed approach is the possibility of computing all the torque contributions from a unified expression, so that the interactions between them can be evaluated, and the torque acting in the presence of multiple layers of varying thickness is automatically accounted for, even for non-uniform magnetization distributions. We demonstrated that the Slonczewski and Zhang and Li torques are not additive and must be derived from the spin accumulation to account for their interplay and correctly describe the torques on textured magnetization in elongated FLs with several MgO TBs. Finally, we applied the presented method to switching simulations of MRAM cells with elongated and composite FLs. The obtained results validate the use of the proposed simulation approach as support for the design of advanced ultra-scaled MRAM cells.
Hanyu, T. et al. Standby-power-free integrated circuits using MTJ-based VLSI computing. Proc. IEEE 104, 1844–1863. https://doi.org/10.1109/JPROC.2016.2574939 (2016).
Aggarwal, S. et al. Demonstration of a reliable 1 Gb standalone spin-transfer torque MRAM for industrial applications. In Proceedings of the IEDM Conference 2.1.1–2.1.4 https://doi.org/10.1109/IEDM19573.2019.8993516 (2019).
Naik, V. B. et al. JEDEC-qualified highly reliable 22nm FD-SOI embedded MRAM for low-power industrial-grade, and extended performance towards automotive-grade-1 applications. In Proceedings of the IEDM Conference 11.3.1–11.3.4 https://doi.org/10.1109/IEDM13553.2020.9371935 (2020).
Shih, Y.-C. et al. A reflow-capable, embedded 8Mb STT-MRAM macro with 9ns read access time in 16nm FinFET logic CMOS process. In Proceedings of the IEDM Conference 11.4.1–11.4.4 https://doi.org/10.1109/IEDM13553.2020.9372115 (2020).
Han, S. H. et al. 28 nm 0.08 mm\(^{2}\)/Mb embedded MRAM for frame buffer memory. In Proceedings of the IEDM Conference 11.2.1–11.2.4 https://doi.org/10.1109/IEDM13553.2020.9372040 (2020).
Alzate, J. G. et al. 2 Mb array-level demonstration of STT-MRAM process and performance towards L4 cache applications. In Proceedings of the IEDM Conference 2.4.1–2.4.4 https://doi.org/10.1109/IEDM19573.2019.8993474 (2019).
Slonczewski, J. C. Current-driven excitation of magnetic multilayers. J. Magn. Magn. Mater. 159, L1–L7. https://doi.org/10.1016/0304-8853(96)00062-5 (1996).
Berger, L. Emission of spin waves by a magnetic multilayer traversed by a current. Phys. Rev. B 54, 9353–9358. https://doi.org/10.1103/PhysRevB.54.9353 (1996).
Ikeda, S. et al. Tunnel magnetoresistance of 604% at 300 K by suppression of Ta diffusion in CoFeB/MgO/CoFeB pseudo-spin-valves annealed at high temperature. Appl. Phys. Lett. 93, 082508. https://doi.org/10.1063/1.2976435 (2008).
Tudu, B. & Tiwari, A. Recent developments in perpendicular magnetic anisotropy thin films for data storage applications. Vacuum 146, 329–341. https://doi.org/10.1016/j.vacuum.2017.01.031 (2017).
Sato, H. et al. MgO/CoFeB/Ta/CoFeB/MgO recording structure in magnetic tunnel junctions with perpendicular easy axis. IEEE Trans. Magn. 49, 4437–4440. https://doi.org/10.1109/TMAG.2013.2251326 (2013).
Jinnai, B. et al. High-performance shape-anisotropy magnetic tunnel junctions down to 2.3 nm. In Proceedings of the IEDM Conference 24.6.1–24.6.4 https://doi.org/10.1109/IEDM13553.2020.9371972 (2020).
Slonczewski, J. C. Currents, torques, and polarization factors in magnetic tunnel junctions. Phys. Rev. B 71, 024411. https://doi.org/10.1103/PhysRevB.71.024411 (2005).
Anderson, R. et al. MFEM: A modular finite element library. Comp. Math. Appl. https://doi.org/10.1016/j.camwa.2020.06.009 (2020).
Ender, J. et al. Efficient demagnetizing field calculation for disconnected complex geometries in STT-MRAM cells. In Proceedings of the SISPAD Conference 213–216 https://doi.org/10.23919/SISPAD49475.2020.9241662 (2020).
Abert, C. et al. A three-dimensional spin-diffusion model for micromagnetics. Sci. Rep. 5, 14855. https://doi.org/10.1038/srep14855 (2015).
Abert, C. et al. A self-consistent spin-diffusion model for micromagnetics. Sci. Rep. 6, 16. https://doi.org/10.1038/s41598-016-0019-y (2016).
Lepadatu, S. Unified treatment of spin torques using a coupled magnetisation dynamics and three-dimensional spin current solver. Sci. Rep. 7, 12937. https://doi.org/10.1038/s41598-017-13181-x (2017).
Zhang, S., Levy, P. M. & Fert, A. Mechanisms of spin-polarized current-driven magnetization switching. Phys. Rev. Lett. 88, 236601. https://doi.org/10.1103/PhysRevLett.88.236601 (2002).
Theodonis, I., Kioussis, N., Kalitsov, A., Chshiev, M. & Butler, W. H. Anomalous bias dependence of spin torque in magnetic tunnel junctions. Phys. Rev. Lett. 97, 237205. https://doi.org/10.1103/PhysRevLett.97.237205 (2006).
Chshiev, M. et al. Analytical description of ballistic spin currents and torques in magnetic tunnel junctions. Phys. Rev. B 92, 104422. https://doi.org/10.1103/PhysRevB.92.104422 (2015).
Camsari, K. Y., Ganguly, S., Datta, D. & Datta, S. Physics-based factorization of magnetic tunnel junctions for modeling and circuit simulation. In Proceedings of the IEDM Conference 35.6.1–35.6.4 https://doi.org/10.1109/IEDM.2014.7047177 (2014).
Fiorentini, S. et al. Coupled spin and charge drift-diffusion approach applied to magnetic tunnel junctions. Solid State Electron. 186, 108103. https://doi.org/10.1016/j.sse.2021.108103 (2021).
Petitjean, C., Luc, D. & Waintal, X. Unified drift-diffusion theory for transverse spin currents in spin valves, domain walls, and other textured magnets. Phys. Rev. Lett. 109, 117204. https://doi.org/10.1103/PhysRevLett.109.117204 (2012).
Julliere, M. Tunneling between ferromagnetic films. Phys. Lett. A 54, 225–226. https://doi.org/10.1016/0375-9601(75)90174-7 (1975).
Haney, P., Lee, H. W., Lee, K. J., Manchon, A. & Stiles, M. Current induced torques and interfacial spin-orbit coupling: Semiclassical modeling. Phys. Rev. B https://doi.org/10.1103/PhysRevB.87.174411 (2013).
Brataas, A., Bauer, G. E. & Kelly, P. J. Non-collinear magnetoelectronics. Phys. Rep. 427, 157–255. https://doi.org/10.1016/j.physrep.2006.01.001 (2006).
Fabian, J., Matos-Abiague, A., Ertler, C., Stano, P. & Zutic, I. Semiconductor spintronics. Acta Phys. Slovaca https://doi.org/10.2478/v10155-010-0086-8 (2007).
Kubota, H. et al. Quantitative measurement of voltage dependence of spin-transfer torque in MgO-based magnetic tunnel junctions. Nat. Phys. 4, 37–41. https://doi.org/10.1038/nphys784 (2008).
Tiwari, D., Sharma, R., Heinonen, O. G., Åkerman, J. & Muduli, P. K. Influence of MgO barrier quality on spin-transfer torque in magnetic tunnel junctions. Appl. Phys. Lett. 112, 022406. https://doi.org/10.1063/1.5005893 (2018).
Torunbalci, M. M., Upadhyaya, P., Bhave, S. A. & Camsari, K. Y. Modular compact modeling of MTJ devices. IEEE Trans. Electron Devices 65, 4628–4634. https://doi.org/10.1109/TED.2018.2863538 (2018).
Ji, Y., Liu, J. & Yang, C. Novel modeling and dynamic simulation of magnetic tunnel junctions for spintronic sensor development. J. Phys. D Appl. Phys. 50, 025005. https://doi.org/10.1088/1361-6463/50/2/025005 (2016).
Slonczewski, J. C. Currents and torques in metallic magnetic multilayers. J. Magn. Magn. Mat. 247, 324–338. https://doi.org/10.1016/S0304-8853(02)00291-3 (2002).
Zhang, S. & Li, Z. Roles of nonequilibrium conduction electrons on the magnetization dynamics of ferromagnets. Phys. Rev. Lett. 93, 127204. https://doi.org/10.1103/PhysRevLett.93.127204 (2004).
Ikeda, S. et al. A perpendicular-anisotropy CoFeB-MgO magnetic tunnel junction. Nat. Mater. 9, 721–724. https://doi.org/10.1038/nmat2804 (2010).
The financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the TU Wien Bibliothek through its Open Access Funding Program is gratefully acknowledged.
Christian Doppler Laboratory for Nonvolatile Magnetoresistive Memory and Logic, Vienna, Austria
Simone Fiorentini, Mario Bendra, Johannes Ender, Roberto L. de Orio & Viktor Sverdlov
Institute for Microelectronics, TU Wien, Gußhausstraße 27–29/E360, 1040, Vienna, Austria
Simone Fiorentini, Mario Bendra, Johannes Ender, Roberto L. de Orio, Siegfried Selberherr & Viktor Sverdlov
Silvaco Europe Ltd., Cambridge, UK
Wolfgang Goes
S.F. implemented and extended the drift-diffusion equations in the FE solver and performed the respective simulations, M.B. performed the micromagnetic simulations, J.E. and R.O. implemented the LLG equation in the FE solver, W.G. helped with the development of the software, S.S. and V.S. guided the research carried out for the paper. All authors contributed to the manuscript preparation.
Correspondence to Simone Fiorentini.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fiorentini, S., Bendra, M., Ender, J. et al. Spin and charge drift-diffusion in ultra-scaled MRAM cells. Sci Rep 12, 20958 (2022). https://doi.org/10.1038/s41598-022-25586-4
Studying slippage on pushing applications with snake robots
Fabian Reyes ORCID: orcid.org/0000-0001-6455-64871 &
Shugen Ma1
In this paper, a framework for analyzing the motion resulting from the interaction between a snake robot and an object is shown. Metrics are derived to study the motion of the object and robot, showing that the addition of passive wheels to the snake robot helps to minimize slippage. However, the passive wheels do not have a significant impact on the force exerted onto the object. This puts snake robots in a framework similar to that of robotic arms, while considering special properties exclusive to snake robots (e.g., the lack of a fixed base, interaction with the environment through friction). It is also shown that the configuration (shape) of the snake robot, parameterized with the polar coordinates of the robot's COM, plays an important role in the interaction with the object. Two examples, a snake robot with two joints and another with three joints, are studied to show the applicability of the model.
Robots that are capable of locomotion in unstructured conditions are necessary for realistic applications. However, locomotion alone may not be sufficient when more dexterous interaction with the environment is needed. Therefore, robotic systems with capability to locomote and also interact dexterously with their surroundings are desirable, and indeed a natural extension of robotics research.
Snake robots have shown promise regarding locomotion [1]. Locomotion in planar environments has been probably the main topic of research for snake robots [2,3,4] and has been extended to motion in planar slopes [5, 6], motion in 3D-space [7, 8], and more broad studies on locomotion [9]. An interesting idea that combines locomotion and interaction with the environment, called obstacle-aided locomotion (OAL), has been proposed in [10] where obstacles in the environment are used as auxiliary sources for propulsion or to avoid jamming.
Although snake robots could excel in locomotion, it is not clear if they can be used to interact with the environment (or an object) dexterously. Its structure resembles a robotic manipulator, but there are key differences that have not been fully addressed in previous research (c.f. Fig. 1).
Thought experiment. General scenario of a snake robot contacting an object. a The snake robot contacts an object while it may also be contacting the environment either with its belly (friction) or pushing against a wall, for example. b The snake robot may be able to move the object. c The object may be very heavy and the snake robot will move around the object
The lack of a fixed-base makes it difficult for a snake robot to manipulate an object as dexterously as a robotic arm. Another difference is that a snake robot has contact with the environment through friction at several points of its body. Additionally, mass becomes a very important parameter to study. Unlike research regarding robotic arms where it is assumed that the arm can lift the object and it is a matter of choosing an optimal input, snake robots may not be able to move the object due to its inertial properties.
Because the kinematic structure of a snake robot resembles a robotic arm, papers that deal with similar (but not exactly the same) situations can be found in existing literature. In [11] a hyper-redundant serial robot was considered, and both locomotion and manipulation of an object were addressed. However, the analysis was purely kinematic, assuming a fixed-base robotic system; in other words, there was no force analysis showing the conditions for feasibility of the problem. In [12], the duality between locomotion and manipulation of a snake robot was considered under the assumption that the snake robot can be treated similarly to a robotic arm with a fixed base when manipulating an object. This was achieved by making the first link of the snake robot behave similarly to a fixed base (due to its shape and mass), but the results cannot be extended to the case of a general snake robot. The problem of analyzing and controlling a snake robot under these conditions has been reported in [13], where it is shown that several assumptions made in previously published literature are not enough to guarantee accurate control of a planar snake robot with frictional contacts with the ground.
The main objective of this paper is to study the resulting interaction between a snake robot and an object, when the task is to push the object. We consider this to be a prelude to more interesting interactions like grasping or dexterous manipulation. However, it is important to understand the basics of the interaction first. This is an extension of previously published work [14, 15] where a more complete mathematical modeling of the problem has been presented. In [15], the optimal configurations of the snake robot to maximize the force exerted onto the object have been presented.
This paper focuses on the motion of the system, rather than forces. The main motivation for this study is that, as presented in [13], calculating an optimal input instantaneously (i.e., at one instant of time) is not enough to accurately control the system. We conjecture that understanding how the system will behave over time is also important. In other words, if the task is to push an object, then the motion of the object must be maximized, while the motion of the snake robot minimized.
Throughout this paper, there are several assumptions that have to be made because of the complexity of the problem:
All bodies in the system are rigid.
Contacts between the snake robot and objects or the environment (except the ground) are considered frictionless point contacts.
The snake robot has passive wheels or any other mechanical means to achieve anisotropic friction between the robot's belly and the ground.
There is only one constraint per link of the snake robot.
The operational space is a plane (embedded in a full 3D space).
The snake robot has only one contact point with the object to be manipulated.
Assumption 1 allows us to have a clear mathematical model of the problem without making assumptions about the compliance of the bodies, which may be unrealistic to know a priori in real-life situations. Although this assumption may be relaxed by considering some sort of virtual compliance at the contacts (e.g., [16, 17]), that does not necessarily imply more realistic or correct results; in particular, it may lead to a stiff system of differential equations and several other problems. As presented in [14], a snake robot may have too many contacts with the environment, leading to a statically indeterminate system [18]. In order to ensure Assumption 4, we assume that the constraints from passive wheels are removed when that link is contacting an object or a wall, for example. This can be done by lifting the links [19] or with retractable passive wheels. Assumptions 1 through 4 allow us to consider that all constraint forces are linearly independent, so that a unique solution can be found. Assumption 5 is made in order to limit the number of parameters and obtain clear, meaningful results, which would be difficult in full 3-dimensional space, since the problem presented in this paper is still very broad and, as discussed in this section, has not been clearly defined before. However, the models presented in this paper are based on spatial vectors [20], which are trivial to extend from 2D to 3D, so in the future more results can be obtained for more specific tasks. Assumption 6 is made because it is not the intent of this paper to study any type of grasp closure or dexterous manipulation, but to understand the interaction itself first.
The paper is organized as follows. In "Mathematical background" section, the necessary mathematical background to understand this paper is presented along references necessary to develop the concepts further. "Motion of the system due to the interaction" section is the main body of the paper; the modeling of the system is presented, and metrics and quantities mentioned in this section are derived. In "Results" section, a specific example is studied to show the application of the proposed metrics. "Discussion and future applications" section includes several comments regarding the scope and limitations of the results presented in this paper. The paper concludes with some remarks in "Conclusion" section.
Mathematical background
In this section, we give a very brief introduction to the mathematical topics necessary to understand this paper; we recommend [20,21,22] for a more detailed treatment. In particular, the foundations of the model used in this paper have been presented in [15], and readers are encouraged to consult this reference for a more detailed treatment of snake robots in the framework of articulated bodies. As stressed in previous research, it is important to guarantee invariance of metrics in order for the results to be meaningful: not only to avoid inconsistency of units, but also so that the metrics are invariant to a change of coordinates. To derive metrics with these properties, we employ dual vectors [23] and basic differential geometry [21].
Differential geometry: twists, wrenches, and metrics
A twist (concatenation of linear and angular velocity) \({\vec { \varvec{\upsilon }} \in {\text{M}}^{n}}\) can be expressed w.r.t. a covariant basis \({\varvec{e} = [\vec {\varvec{e}}_{1}, \ldots , \vec {\varvec{e}}_{n}]^{\mathrm{T}}}\) as \({\vec { \varvec{\upsilon }} = \varvec{e}^{\mathrm{T}} \varvec{\upsilon }}\). The element \(\varvec{\upsilon }\in {\mathfrak{R}}^{n}\) can be interpreted as the (vector of) contravariant components of \(\vec {\varvec{\upsilon }}\). A wrench (concatenation of linear force and torque) \({\vec { \varvec{f}} \in \text{ F }^{n}}\) can be expressed w.r.t. a contravariant basis \({\varvec{e}^{*} = [\vec {\varvec{e}}_{1}^{*},\ldots ,\vec {\varvec{e}}^{*}_{n}]^{\mathrm{T}}}\) as \({\vec {\varvec{f}} = \varvec{e}^{*^{\mathrm{T}}} \varvec{f}^{*}}\) and \(\varvec{f}^{*} \in {\mathfrak{R}}^{n}\) can be interpreted as the covariant components of \(\vec {\varvec{f}}\).
It is important to notice that both bases \(\varvec{e}\) and \(\varvec{e}^{*}\) may not be orthogonal, so the common definition of inner product (e.g., \(\varvec{\upsilon } \cdot \varvec{\upsilon } = \varvec{\upsilon }^{\mathrm{T}} \varvec{\upsilon }\)) would give incorrect results. Let us denote the metric tensor of the covariant basis as \(\varvec{I} = \varvec{e} \varvec{e}^{\mathrm{T}}\) and its inverse by \(\varvec{I}^{-1}\). The (squared) length of a twist and wrench is an invariant quantity and can be obtained using the scalar product \(\{\circ \}\) while taking into account the metric tensor as
$$\begin{aligned} ||\vec {\varvec{\upsilon }}||^{2}= \vec {\varvec{\upsilon }} \circ \vec {\varvec{\upsilon }} = \varvec{\upsilon }^{\mathrm{T}} \varvec{e} \varvec{e}^{\mathrm{T}} \varvec{\upsilon } = \varvec{\upsilon }^{\mathrm{T}} \varvec{I} \varvec{\upsilon }, \end{aligned}$$
$$\begin{aligned} ||\vec {\varvec{f}}||^{2}= \vec {\varvec{f}} \circ \vec {\varvec{f}} = \varvec{f}^{*^{\mathrm{T}}} \varvec{e}^{*} \varvec{e}^{*^{\mathrm{T}}} \varvec{f}^{*} = \varvec{f}^{*^{\mathrm{T}}} \varvec{I}^{-1} \varvec{f}^{*}. \end{aligned}$$
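A short numerical sketch of these invariant lengths may be helpful. It uses an illustrative diagonal metric tensor for a planar rigid body, with mass and moment of inertia chosen arbitrarily:

```python
# Sketch: invariant squared lengths of a twist and a wrench using the metric
# tensor I of the (possibly non-orthogonal) basis. Values are illustrative.
import numpy as np

m, J = 1.0, 0.05                     # mass (kg) and moment of inertia (kg m^2)
I = np.diag([m, m, J])               # metric tensor for planar twists
I_inv = np.linalg.inv(I)

v = np.array([0.2, -0.1, 0.5])       # twist components (vx, vy, omega)
f = np.array([1.0, 0.0, 0.3])        # wrench components (fx, fy, tau)

norm_v_sq = v @ I @ v                # ||v||^2 = v^T I v
norm_f_sq = f @ I_inv @ f            # ||f||^2 = f^T I^{-1} f
print(norm_v_sq, norm_f_sq)          # note: v.dot(v) would NOT be invariant
```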
Unconstrained model of the snake robot
The snake robot can be modeled as a series of rigid links connected by revolute joints. All joints have their axes parallel to each other; therefore, the snake robot is constrained to move on a plane (but is unconstrained in any other way). The kinematic model of a snake robot is similar to an open-chain robotic manipulator (c.f. Fig. 2). The model has been previously studied in [13, 15, 24, 25].
Kinematic model. The generalized coordinates of the snake robot and its COM are shown. Also, the interaction between the snake robot and object with its corresponding contact force can be seen \(\varvec{f}_{c}^{*}\)
The snake robot has a total of \({n_{s} \in \mathbb {N}}\) degrees of freedom (DOFs), and its generalized coordinates are encapsulated in the vector \(\varvec{q}_{s}(t) \in {\mathfrak{R}}^{n_{s}}\). The snake robot has \(n_{\ell }=n_{a}+1\) links, each with mass \(m_{i}\), where \(n_{a}\) denotes the number of actuated joints.
The Jacobian for the ith link is a mapping from the vector of generalized velocities \(\dot{\varvec{q}}_{s}\) to the twist \(\varvec{\upsilon }_{i} \in {\mathfrak{R}}^{3}\) of the link and is denoted as \(\varvec{J}_{i} \in {\mathfrak{R}}^{3 \times n_{s}}\)
$$\begin{aligned} \varvec{\upsilon }_{i} = \varvec{J}_{i} \dot{\varvec{q}}_{s}. \end{aligned}$$
The equations of motion of the snake robot can be presented in the canonical form
$$\begin{aligned} \varvec{e}^{*^{\mathrm{T}}} \left[ \varvec{M}_{s} \ddot{\varvec{q}}_{s} + \varvec{h}_{s}^{*} \right] = \varvec{e}^{*^{\mathrm{T}}} \left[ \varvec{B} \varvec{\tau }_{\mathrm{act}}^{*} + \varvec{\tau }_{\mathrm{ext}}^{*} \right] \end{aligned}$$
where \({\varvec{M}_{s} (\varvec{q}_{s}) \in {\mathfrak{R}}^{n_{s} \times n_{s}}}\) is the inertia matrix of the snake robot (a symmetric positive definite (PD) matrix), \(\varvec{h}_{s}^{*} (\varvec{q}_{s}, \dot{\varvec{q}}_{s})\in {\mathfrak{R}}^{n_{s} \times 1}\) contains Coriolis and centripetal effects, and \(\varvec{\tau }_{\mathrm{ext}}^{*} (\varvec{q}, \dot{\varvec{q}}) \in {\mathfrak{R}}^{n_{s} \times 1}\) is a vector of torques produced by external forces (e.g., kinetic friction). The matrix \({\varvec{B} \in {\mathfrak{R}}^{n_{s} \times n_{a}}}\) defined as
$$\begin{aligned} \varvec{B} := \left[ \begin{array}{c} \varvec{0}_{3 \times n_{a}}\\ \varvec{1}_{n_{a} \times n_{a}} \end{array}\right] , \end{aligned}$$
is a matrix that projects the vector of input forces \(\varvec{\tau }_{\mathrm{act}}^{*}\) into the space of generalized forces. The matrix \(\varvec{1}\) denotes the identity matrix of appropriate dimensions.
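As an illustration of (5), a planar snake robot with \(n_{a}\) actuated joints has \(n_{s} = n_{a} + 3\) generalized coordinates (three for the unactuated base pose), and \(\varvec{B}\) simply pads the joint torques with zeros. A minimal sketch, with an illustrative torque vector, reads:

```python
# Sketch: input projection matrix B of Eq. (5) for a planar snake robot with
# n_a actuated joints and n_s = n_a + 3 generalized coordinates.
import numpy as np

n_a = 2
B = np.vstack([np.zeros((3, n_a)), np.eye(n_a)])

tau_act = np.array([0.1, -0.2])      # joint torques (illustrative)
print(B @ tau_act)                   # generalized forces: zeros on the 3 base DOFs
```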
Unconstrained model of an object
A rigid body is able to move in its operational space with dimensions \(n_{\mathrm{op}},\) and its equations of motion can be compactly written as
$$\begin{aligned} \varvec{e}^{*^{\mathrm{T}}} \left[ \varvec{I}_{\mathrm{obj}} \varvec{a}_{\mathrm{obj}} + \varvec{p}_{\mathrm{obj}}^{*} \right] = \varvec{e}^{*^{\mathrm{T}}} \left[ \varvec{f}_{\mathrm{obj}}^{*} \right] , \end{aligned}$$
where \({\varvec{a}_{\mathrm{obj}}}\), \({\varvec{p}_{\mathrm{obj}}^{*}}\), and \({\varvec{f}_{\mathrm{obj}}^{*} \in {\mathfrak{R}}^{n_{\mathrm{op}}}}\) denote the acceleration, velocity-produced terms, and total wrench acting on the body, respectively. If the body is constrained to move in a plane (but unconstrained in any other way), it will have three DOFs (i.e., \(n_{\mathrm{op}} = 3\)). \({\varvec{I}_{\mathrm{obj}} \in {\mathfrak{R}}^{n_{\mathrm{op}} \times n_{\mathrm{op}}}}\) denotes the inertia tensor of the rigid body. The mass of the object \(m_{\mathrm{obj}}\) will be denoted as a multiple of the mass of a link of the snake robot as \(m_{\mathrm{obj}} = \kappa m_{i}\). In other words, \(\kappa \) is a proportionality coefficient relating the masses of interest.
If all links of the snake robot have the same mass m, then the inertia matrix of the snake robot can be factored as \(\varvec{M}_{s} := m \bar{\varvec{M}}\), and the inertia matrix of an object as \(\varvec{I}_{\mathrm{obj}} = m_{\mathrm{obj}} \bar{\varvec{I}}_{\mathrm{obj}}\), where the new inertia matrices \(\bar{\varvec{M}}\) and \(\bar{\varvec{I}}_{\mathrm{obj}}\) correspond to inertia matrices with unitary mass.
Summary of constraints
The interaction between a snake robot and an external object creates a set of forces between them that avoid penetration (also called kinematic constraints or non-penetrability constraints [21, 26, 27]). Additionally, the (static) friction forces between the belly of the robot and the ground can also be modeled as constraint forces (bounded by their friction limit). Assuming there are \(n_{c} \in \mathbb {N}\) constraint forces in total, the constraint forces \({\varvec{f}_{c}^{*} \in {\mathfrak{R}}^{n_{c}} }\) span the constrained subspace
$$\begin{aligned} \mathcal {C} = \lbrace \varvec{f}_{c}^{*} : \varvec{f}_{c}^{*} = \varvec{T} \varvec{\lambda }^{*} \rbrace , \end{aligned}$$
where the matrix \(\varvec{T} \in {\mathfrak{R}}^{n_{\mathrm{op}} n_{c} \times n_{c}}\) is a matrix spanning the constraint forces on the operational space [21], and \(\varvec{\lambda }^{*} \in {\mathfrak{R}}^{n_{c}}\) contains the magnitude of the constraint forces (in the context of optimization this vector is usually called the Lagrange multipliers [21, 26]).
To facilitate the coupling between the snake robot and environment/object(s), it is useful to put together all the constraints in vector/matrix form. All the constraints can be put together into the following form
$$\begin{aligned} \varvec{A} \left[ \begin{array}{c} \dot{\varvec{q}}_{s} \\ \varvec{\upsilon }_{\mathrm{obj}} \end{array}\right] \geqslant \varvec{0} \end{aligned}$$
where \(\varvec{A} \in {\mathfrak{R}}^{n_{c} \times n}\), with \(n = n_{s} + n_{\mathrm{op}}\), is called the constraint matrix and takes the following form
$$\begin{aligned} \varvec{A} = \left[ \begin{array}{ll} -\varvec{J}_{s}&\varvec{G}^{\mathrm{T}} \end{array}\right] , \end{aligned}$$
where \(\varvec{J}_{s} \in {\mathfrak{R}}^{n_{c} \times n_{s}}\) is called the robot Jacobian (also called hand Jacobian [26, 27]) which projects the vector of generalized velocities of the snake robot onto the constrained subspace
$$\begin{aligned} \varvec{J}_{s} = \left[ \begin{array}{ccc} \varvec{T}_{1}^{\mathrm{T}} &{} \cdots &{} \varvec{0}\\ \varvec{0} &{} \ddots &{} \varvec{0}\\ \varvec{0} &{} \cdots &{} \varvec{T}_{n_{c}}^{\mathrm{T}} \end{array}\right] \left[ \begin{array}{c} \varvec{J}_{*}\\ \vdots \\ \varvec{J}_{*} \end{array}\right] , \end{aligned}$$
where \(\varvec{T}_{k}\) spans the constrained space for the kth constraint and \(\varvec{J}_{*}\) denotes the Jacobian corresponding to the link under that constraint, without any specific ordering. The matrix \({\varvec{G} \in {\mathfrak{R}}^{n_{\mathrm{op}} \times n_{c}}}\) is usually referred to as Grasp Matrix and its transpose is a mapping from the motion space of the object to the constrained subspace; it can be constructed in a similar manner to the robot Jacobian. The constraint forces projected back onto the snake robot and object are
$$\begin{aligned} \left[ \begin{array}{c} \varvec{\tau }_{c}^{*} \\ \varvec{f}_{c}^{*} \end{array}\right] = \left[ \begin{array}{c}\varvec{-} \varvec{J}_{s}^{\mathrm{T}} \\ \varvec{G}\end{array}\right] \varvec{\lambda }^{*} = \varvec{A}^{\mathrm{T}} \varvec{\lambda }^{*}, \end{aligned}$$
where \(\varvec{\tau }_{c}^{*} \in {\mathfrak{R}}^{n_{s}}\) is simply the projection of the constraint reaction force \(- \varvec{f}_{c}^{*}\) onto the space of generalized forces of the snake robot.
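The assembly of \(\varvec{A}\) and the back-projection (11) can be verified numerically. The following sketch uses random matrices of illustrative dimensions in place of the actual Jacobian and grasp matrix:

```python
# Sketch: constraint matrix A = [-J_s  G^T] of Eq. (9) and the projection of
# the constraint magnitudes lambda* back onto robot and object, Eq. (11).
# J_s and G are random placeholders with illustrative dimensions.
import numpy as np

n_s, n_op, n_c = 5, 3, 2
rng = np.random.default_rng(0)
J_s = rng.standard_normal((n_c, n_s))    # robot (hand) Jacobian
G   = rng.standard_normal((n_op, n_c))   # grasp matrix

A = np.hstack([-J_s, G.T])               # shape (n_c, n_s + n_op)

lam = np.array([1.0, 0.5])               # constraint-force magnitudes
tau_c = -J_s.T @ lam                     # reaction on the robot's joints
f_c   =  G @ lam                         # wrench on the object
assert np.allclose(A.T @ lam, np.concatenate([tau_c, f_c]))
```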
Motion of the system due to the interaction
Now that it is assumed that the snake robot is touching at least one object, the new coupled equations of motion can be written as
$$\begin{aligned} \varvec{e}^{*T} \left[ \varvec{I} \varvec{a} + \varvec{p}^{*} \right]= \varvec{e}^{*T} \left[ \varvec{f}^{*} + \varvec{A}^{\mathrm{T}} \varvec{\lambda }^{*} \right] ,\end{aligned}$$
$$\begin{aligned} \varvec{I}&= \left[ \begin{array}{ll} \varvec{M}_{s} &{} \varvec{0} \\ \varvec{0} &{} \varvec{I}_{\mathrm{obj}} \end{array}\right] \qquad \varvec{a} = \left[ \begin{array}{l} \ddot{\varvec{q}}_{s} \\ \varvec{a}_{\mathrm{obj}} \end{array}\right] \\ \varvec{p}^{*}&= \left[ \begin{array}{l} \varvec{h}_{s}^{*} \\ \varvec{p}_{\mathrm{obj}}^{*} \end{array}\right] \qquad \varvec{f}^{*} = \left[ \begin{array}{l} \varvec{B} \varvec{\tau }_{\mathrm{act}}^{*} \\ \varvec{f}_{\mathrm{obj}}^{*} \end{array}\right] \end{aligned}$$
along the constraints
$$\begin{aligned} \varvec{A} \varvec{a} + \dot{\varvec{A}} \varvec{\upsilon }\geqslant \varvec{0}, \end{aligned}$$
where equality holds for constraints imposed by friction. Equation (13) is the derivative of (8). As discussed in [21], both holonomic and non-holonomic constraints can be obtained in this uniform manner at the acceleration level.
The change from the contravariant basis \(\varvec{e}^{*}\) to the basis of the constrained space \(\varvec{e}_{c}^{*}\) can be obtained by using the projector \({{}^{\varvec{e}_{c}^{*}} \varvec{\Phi }_{\varvec{e}^{*}}: {\mathfrak{R}}^{n} \rightarrow {\mathfrak{R}}^{n_{c}}}\) [15, 21] defined as
$$\begin{aligned} {}^{\varvec{e}_{c}^{*}} \varvec{\Phi }_{\varvec{e}^{*}} := (\varvec{A} \varvec{I}^{-1} \varvec{A}^{\mathrm{T}})^{-1} \varvec{A} \varvec{I}^{-1} = \varvec{G}_{c}^{-1} \varvec{A} \varvec{I}^{-1}. \end{aligned}$$
The positive semidefinite (PSD) matrix \({\varvec{G}_{c}:=(\varvec{A} \varvec{I}^{-1} \varvec{A}^{\mathrm{T}})}\) represents the metric tensor for the basis \(\varvec{e}_{c}^{*}\) and has full rank if all constraints are linearly independent. (Additional comments regarding the rank of this metric tensor are located in "Discussion and future applications" section). The mapping (14) can be interpreted as the left pseudo inverse of the matrix \(\varvec{A}^{\mathrm{T}}\) as
$$\begin{aligned} \varvec{A}^{T^{\dagger }} :=(\varvec{A} \varvec{I}^{-1} \varvec{A}^{\mathrm{T}})^{-1} \varvec{A} \varvec{I}^{-1} \rightarrow \varvec{A}^{T^{\dagger }} \varvec{A}^{\mathrm{T}} = \varvec{1}. \end{aligned}$$
The constraint forces \(\vec {\varvec{\lambda }}=\varvec{e}_{c}^{*T} \varvec{\lambda }^{*}\) can be obtained by projecting the equations of motion (12) onto the constrained subspaces using the projector (14) and taking into account the constraint (13) as
$$\begin{aligned} \varvec{e}_{c}^{*T} \varvec{\lambda }^{*} \geqslant \varvec{e}_{c}^{*T} \left[ - \varvec{A}^{T^{\dagger }} \varvec{f}^{*} + (\varvec{A} \varvec{I}^{-1} \varvec{A}^{\mathrm{T}})^{-1} (\varvec{A} \varvec{I}^{-1} \varvec{p}^{*} - \dot{\varvec{A}} \varvec{\upsilon }) \right] . \end{aligned}$$
The right-hand side (RHS) of (16) has two terms. The first term depends purely on the set of forces exerted onto the system, either by the actuators of the snake robot or an external wrench exerted onto the object. The second term includes terms produced by velocity and will vanish if the system starts from an equilibrium configuration. This (affine) system of equations is usually interpreted as a force ellipsoid [27,28,29], and it maps a quadratic region in the input space \(\varvec{f}^{*}\) to an ellipsoid in the output space of constraint forces \(\varvec{\lambda }^{*}\), while the velocity-produced terms will shift the origin of such ellipsoid. In this paper, it is assumed the system starts from equilibrium so that the following linear mapping can be defined
$$\begin{aligned} \varvec{\lambda }^{*} = - {}^{\varvec{e}_{c}^{*}} \varvec{\Phi }_{\varvec{e}^{*}} \varvec{f}^{*}. \end{aligned}$$
The obtained constraint forces (due to the inputs in the system) can be substituted back into the equations of motion (12). The resulting acceleration of the system is
$$\begin{aligned} \varvec{e}^{\mathrm{T}} \varvec{a} = \varvec{e}^{T} \left[ \varvec{I}^{-1} ({}^{\varvec{e}_{c}^{*\perp }} \varvec{\Phi }_{\varvec{e}^{*}}) \varvec{f}^{*} \right] , \end{aligned}$$
where the new projector \({}^{\varvec{e}_{c}^{*\perp }} \varvec{\Phi }_{\varvec{e}^{*}}: {\mathfrak{R}}^{n} \rightarrow {\mathfrak{R}}^{n}\) defined as
$$\begin{aligned} {}^{\varvec{e}_{c}^{*\perp }} \varvec{\Phi }_{\varvec{e}^{*}} := \varvec{1} - \varvec{A}^{\mathrm{T}} \varvec{A}^{T^{\dagger }} \end{aligned}$$
is a projector from input wrenches \(\varvec{e}^{*T} \varvec{f}^{*}\) to the space orthogonal to the constrained space \(\mathcal {C}\), but with coordinates expressed with respect to the original basis \(\varvec{e}^{*}\). The extra inverse inertia \(\varvec{I}^{-1}\) transforms to coordinates in \(\varvec{e}\) basis (i.e., transforms from wrenches to twists).
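A compact numerical sketch of this projection pipeline is given below: it builds the projector (14), obtains the constraint forces (17), and verifies that the resulting accelerations (18) are consistent with (bilateral) constraints. All system matrices are random, illustrative stand-ins.

```python
# Sketch: projector (14), constraint forces (17), and accelerations (18) for
# a system starting from equilibrium. Matrices are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_s, n_op, n_c = 5, 3, 2
n = n_s + n_op

I = np.eye(n)                            # block-diagonal inertia [M_s, I_obj]
I_inv = np.linalg.inv(I)
A = rng.standard_normal((n_c, n))        # constraint matrix [-J_s  G^T]
f = rng.standard_normal(n)               # inputs [B tau_act; f_obj]

G_c = A @ I_inv @ A.T                    # metric tensor of the constrained basis
Phi = np.linalg.solve(G_c, A @ I_inv)    # projector (14) = A^{T dagger}
lam = -Phi @ f                           # constraint forces, Eq. (17)

a = I_inv @ (np.eye(n) - A.T @ Phi) @ f  # accelerations, Eqs. (18)-(19)
assert np.allclose(A @ a, 0.0, atol=1e-10)   # constraints are respected
```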
By solving for \(\ddot{\varvec{q}}_{s}\) and \(\varvec{a}_{\mathrm{obj}}\), the motion of the snake robot and object can be obtained as
$$\begin{aligned} \ddot{\varvec{q}}_{s}= \left( \varvec{}^{\ddot{\varvec{q}}_{s}} \varvec{\Phi }_{\varvec{\tau }_{\mathrm{act}}^{*}} \right) \varvec{\tau }_{\mathrm{act}}^{*} + \left( \varvec{}^{\ddot{\varvec{q}}_{s}} \varvec{\Phi }_{\varvec{f}_{\mathrm{obj}}^{*}} \right) \varvec{f}_{\mathrm{obj}}^{*},\end{aligned}$$
$$\begin{aligned} \varvec{a}_{\mathrm{obj}}= \left( \varvec{}^{\varvec{a}_{\mathrm{obj}}} \varvec{\Phi }_{\varvec{\tau }_{\mathrm{act}}^{*}} \right) \varvec{\tau }_{\mathrm{act}}^{*} + \left( \varvec{}^{\varvec{a}_{\mathrm{obj}}} \varvec{\Phi }_{\varvec{f}_{\mathrm{obj}}^{*}} \right) \varvec{f}_{\mathrm{obj}}^{*}, \end{aligned}$$
where the auxiliary mappings \({\varvec{}^{\ddot{\varvec{q}}_{s}} \varvec{\Phi }_{\varvec{\tau }_{\mathrm{act}}^{*}}: {\mathfrak{R}}^{n_{a}} \rightarrow {\mathfrak{R}}^{n_{s}}}\), \({\varvec{}^{\ddot{\varvec{q}}_{s}} \varvec{\Phi }_{\varvec{f}_{\mathrm{obj}}^{*}}: {\mathfrak{R}}^{n_{\mathrm{op}}} \rightarrow {\mathfrak{R}}^{n_{s}}}\),
\({\varvec{}^{\varvec{a}_{\mathrm{obj}}} \varvec{\Phi }_{\varvec{\tau }_{\mathrm{act}}^{*}}: {\mathfrak{R}}^{n_{a}} \rightarrow {\mathfrak{R}}^{n_{\mathrm{op}}}}\), and \({\varvec{}^{\varvec{a}_{\mathrm{obj}}} \varvec{\Phi }_{\varvec{f}_{\mathrm{obj}}^{*}} : {\mathfrak{R}}^{n_{\mathrm{op}}} \rightarrow {\mathfrak{R}}^{n_{\mathrm{op}}}}\) can be defined as
$$\begin{aligned} \varvec{}^{\ddot{\varvec{q}}_{s}} \varvec{\Phi }_{\varvec{\tau }_{\mathrm{act}}^{*}}:= \frac{1}{m} \bar{\varvec{M}}_{s}^{-1} \left( \varvec{1} - \varvec{J}_{s}^{\mathrm{T}} \left( \varvec{G}_{c}^{-1} \right) \varvec{J}_{s} \bar{\varvec{M}}_{s}^{-1} \right) \varvec{B} \end{aligned}$$
$$\begin{aligned} \varvec{}^{\ddot{\varvec{q}}_{s}} \varvec{\Phi }_{\varvec{f}_{\mathrm{obj}}^{*}}:= \frac{1}{\kappa m} \bar{\varvec{M}}_{s}^{-1} \varvec{J}_{s}^{\mathrm{T}} \left( \varvec{G}_{c}^{-1} \right) \varvec{G}^{\mathrm{T}} \bar{\varvec{I}}_{\mathrm{obj}}^{-1} \end{aligned}$$
$$\begin{aligned} \varvec{}^{\varvec{a}_{\mathrm{obj}}} \varvec{\Phi }_{\varvec{\tau }_{\mathrm{act}}^{*}}:= \frac{1}{\kappa m} \bar{\varvec{I}}_{\mathrm{obj}}^{-1} \varvec{G} \left( \varvec{G}_{c}^{-1} \right) \varvec{J}_{s} \bar{\varvec{M}}_{s}^{-1} \varvec{B} \end{aligned}$$
$$\begin{aligned} \varvec{}^{\varvec{a}_{\mathrm{obj}}} \varvec{\Phi }_{\varvec{f}_{\mathrm{obj}}^{*}}:= \frac{1}{\kappa m} \bar{\varvec{I}}_{\mathrm{obj}}^{-1} \left( \varvec{1}-\frac{1}{\kappa } \varvec{G} \left( \varvec{G}_{c}^{-1} \right) \varvec{G}^{\mathrm{T}} \bar{\varvec{I}}_{\mathrm{obj}}^{-1} \right) \end{aligned}$$
The (squared) length of the accelerations of the snake robot \(||\vec {\ddot{\varvec{q}}}_{s}||^{2}\) or object \(||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}\) can be obtained in an invariant way by considering its metric tensors, where the total expression can be divided into three terms as
$$\begin{aligned} ||\vec {\ddot{\varvec{q}}}_{s}||^{2}= \varvec{\tau }_{\mathrm{act}}^{*T} \left( \varvec{\Xi }_{\varvec{\tau }_{\mathrm{act}}^{*}} \right) \varvec{\tau }_{\mathrm{act}}^{*} + \varvec{f}_{\mathrm{obj}}^{*T} \left( \varvec{\Xi }_{\varvec{f}_{\mathrm{obj}}^{*}} \right) \varvec{f}_{\mathrm{obj}}^{*} + \varvec{\tau }_{\mathrm{act}}^{*T} \left( {}_{\varvec{\tau }_{\mathrm{act}}^{*}} \varvec{\Xi }_{\varvec{f}_{\mathrm{obj}}^{*}} \right) \varvec{f}_{\mathrm{obj}}^{*} \end{aligned}$$
$$\begin{aligned} ||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}= \varvec{\tau }_{\mathrm{act}}^{*T} \left( \varvec{\Omega }_{\varvec{\tau }_{\mathrm{act}}^{*}} \right) \varvec{\tau }_{\mathrm{act}}^{*} + \varvec{f}_{\mathrm{obj}}^{*T} \left( \varvec{\Omega }_{\varvec{f}_{\mathrm{obj}}^{*}} \right) \varvec{f}_{\mathrm{obj}}^{*} + \varvec{\tau }_{\mathrm{act}}^{*T} \left( {}_{\varvec{\tau }_{\mathrm{act}}^{*}} \varvec{\Omega }_{\varvec{f}_{\mathrm{obj}}^{*}} \right) \varvec{f}_{\mathrm{obj}}^{*}. \end{aligned}$$
We will concentrate on the contributions of the inputs of the snake robot. It can be verified that the auxiliary mappings \(\varvec{\Xi }_{\varvec{\tau }_{\mathrm{act}}^{*}}\) and \(\varvec{\Omega }_{\varvec{\tau }_{\mathrm{act}}^{*}}\), after some manipulation, can be defined as
$$\begin{aligned} \varvec{\Xi }_{\varvec{\tau }_{\mathrm{act}}^{*}}:= \frac{1}{m} \varvec{B}^{\mathrm{T}} \left( \varvec{1}- \hat{\varvec{J}}_{s}^{\mathrm{T}} \right) ^{\mathrm{T}} \bar{\varvec{M}}_{s}^{-1} \left( \varvec{1}- \hat{\varvec{J}}_{s}^{\mathrm{T}} \right) \varvec{B} \end{aligned}$$
$$\begin{aligned} \varvec{\Omega }_{\varvec{\tau }_{\mathrm{act}}^{*}}:= \frac{1}{\kappa m} \varvec{B}^{\mathrm{T}} \bar{\varvec{M}}_{s}^{-1} \varvec{J}_{s}^{\mathrm{T}} \varvec{G}_{c}^{-1} \varvec{G}^{\mathrm{T}} \bar{\varvec{I}}_{\mathrm{obj}}^{-1} \varvec{G} \varvec{G}_{c}^{-1} \varvec{J}_{s} \bar{\varvec{M}}_{s}^{-1} \varvec{B} \end{aligned}$$
where the auxiliary term
$$\begin{aligned} \hat{\varvec{J}}_{s}^{\mathrm{T}} := \varvec{J}_{s}^{\mathrm{T}} \varvec{G}_{c}^{-1} \varvec{J}_{s} \bar{\varvec{M}}_{s}^{-1} \end{aligned}$$
has been introduced for a more compact notation and any further simplification has been omitted for simplicity's sake. However, the linear relationship w.r.t. the masses of the system becomes evident.
Slippage ratio
As stated in "Background" section, it is an important problem to predict motion, and not only forces, in order to understand the interaction between the snake robot and object and to accomplish a task. If the task is to manipulate an object, then it is desirable to maximize the motion of the object \(||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}\) while minimizing the slippage of the snake robot \(||\vec {\ddot{\varvec{q}}}_{s}||^{2}\). On the other hand, a snake robot could locomote using the environment as a source of propulsive forces or as a support, similar to the idea of climbing [30,31,32]. This case more closely resembles a walking robot, where contact with the environment is necessary for the robot to move. To the best of our knowledge, this distinction has not been studied with snake robots. To analyze this, we propose the ratio of accelerations
$$\begin{aligned} sr := \frac{||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}}{||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}+||\vec {\ddot{\varvec{q}}}_{s}||^{2}}, \end{aligned}$$
and call it slippage ratio which is a dimensionless scalar quantity bounded as \({sr \in [0,1]}\). Using this ratio, we can analyze the following three general situations:
\(\hbox {sr} \rightarrow 1\) which implies that the acceleration of the snake robot is minimal (\({||\vec {\ddot{\varvec{q}}}_{s}||^{2} \ll ||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}}\) or \({||\vec {\ddot{\varvec{q}}}_{s}||^{2} \approx 0}\)).
\(\hbox {sr} \approx 0.5\) which implies a similar magnitude of acceleration for the two subsystems (\({||\vec {\ddot{\varvec{q}}}_{s}||^{2} \approx ||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}}\)).
\(\hbox {sr} \rightarrow 0\) which implies that the magnitude of the acceleration of the object is minimal (\({||\vec {\ddot{\varvec{q}}}_{s}||^{2} \gg ||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}}\) or \({||\vec {\varvec{a}}_{\mathrm{obj}}||^{2} \approx 0}\)).
This quantity can be seen as the ratio between a desired output and the total output. By analyzing the slippage ratio sr, given a configuration and input, we can understand better the behavior of the system.
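The slippage ratio is straightforward to evaluate once the two invariant norms are available. A minimal sketch, with illustrative metric tensors and accelerations, reads:

```python
# Sketch: slippage ratio (30) computed from the invariant acceleration norms.
import numpy as np

def slippage_ratio(a_obj, I_obj, qdd_s, M_s):
    # sr = ||a_obj||^2 / (||a_obj||^2 + ||qdd_s||^2), each norm taken with
    # its own metric tensor.
    n_obj = a_obj @ I_obj @ a_obj
    n_rob = qdd_s @ M_s @ qdd_s
    return n_obj / (n_obj + n_rob)

I_obj, M_s = np.eye(3), np.eye(5)             # illustrative metric tensors
a_obj = np.array([0.01, 0.0, 0.0])            # object barely accelerates
qdd_s = np.array([0.5, 0.2, -0.1, 0.3, 0.0])  # robot slips
print(slippage_ratio(a_obj, I_obj, qdd_s, M_s))   # close to 0
```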
Polar coordinates of the COM of the snake robot
In order to compare snake robots with different number of joints, it is necessary to parameterize the configuration of the robot with a set of parameters in common. The idea of using the polar coordinates of the COM of the snake robot w.r.t. the contact point with the object has been introduced in [15] and further explored in [24]. The (unsigned) distance from the COM of the snake robot to the contact point is denoted by |COM|, and the angle between this vector and the link contacting the object is denoted as \(\angle {COM}\). These quantities can be seen in Fig. 3.
Three scenarios. Snake robot with 2 joints contacting an object. a The friction between the snake robot and the ground is negligible. b The snake robot has passive wheels (represented by the black rectangles) with a limit surface for the friction force. c Ideal passive wheels are assumed
To study the interaction between snake robot and object, we can apply the framework proposed in this paper while changing the number and type of constraints and studying the resulting accelerations of the system. In general, we propose three different scenarios depending on the type of constraints present on the system as follows:
Scenario 1: The snake robot is in contact with an object but unconstrained in any other way. The friction between the snake robot and ground is negligible.
Scenario 2: The snake robot contacts one object and has passive wheels in all other links. The friction between the passive wheels and ground is bounded by its limit surface.
Scenario 3: The snake robot is contacting one object and has passive wheels in all other links. The passive wheels impose (unbounded and bilateral) non-holonomic constraints.
In other words, we will change the properties of the interaction between the snake robot and the environment (ground) and then analyze the resulting acceleration of the object \(||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}\), of the snake robot \(||\vec {\ddot{\varvec{q}}}_{s}||^{2}\) and the slippage ratio sr as a result. Scenario 1 allows us to consider only the inertial properties of the system. Scenario 2, on the other hand, allows us to study the effect that passive wheels have on the system, but with a bounded friction coefficient \(\mu _{s}\). Scenario 3 considers ideal passive wheels and could be considered as the extreme case when \(\mu _{s} \rightarrow \infty \). This is the most common model used for studying locomotion of snake robots.
The norms (26), (27) and slippage ratio (30) have to be studied over regions of the input space on a specific kth configuration. The inputs are restricted to the quadratic region of the input
$$\begin{aligned} \bar{\tau } = \{ \varvec{\tau }_{\mathrm{act}}^{*}: \varvec{\tau }_{\mathrm{act}}^{*T} \left( \bar{\varvec{M}}_{s}^{-1}\right) \varvec{\tau }_{\mathrm{act}}^{*} \leqslant 1 \} \end{aligned}$$
which will be, in general, an ellipsoid and not a unit sphere as is usually considered (i.e., \(||\vec {\varvec{\tau }}_{\mathrm{act}}||^{2} \leqslant 1\) is not the same as \(\varvec{\tau }_{\mathrm{act}}^{*T} \varvec{\tau }_{\mathrm{act}}^{*} \leqslant 1\)).
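In practice, evaluating the metrics over (31) requires sampling torques from this ellipsoidal region rather than from a unit sphere. One way to do this, sketched below for an illustrative 2-joint inertia matrix, is to map uniform samples of the unit ball through the Cholesky factor of \(\bar{\varvec{M}}_{s}\):

```python
# Sketch: sampling the quadratic input region (31), tau^T Mbar^{-1} tau <= 1.
# With Mbar = L L^T (Cholesky), tau = L u satisfies the bound iff u^T u <= 1.
import numpy as np

rng = np.random.default_rng(2)
Mbar = np.array([[2.0, 0.3],
                 [0.3, 1.0]])            # illustrative unit-mass inertia matrix
L = np.linalg.cholesky(Mbar)

# Uniform samples of the 2D unit ball: random direction, radius ~ sqrt(U).
u = rng.standard_normal((1000, 2))
u /= np.linalg.norm(u, axis=1, keepdims=True)
u *= np.sqrt(rng.uniform(size=(1000, 1)))
tau = u @ L.T                            # samples of the torque ellipsoid

Minv = np.linalg.inv(Mbar)
assert np.all(np.einsum('ij,jk,ik->i', tau, Minv, tau) <= 1 + 1e-9)
```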
Case study 1: Snake robot with two joints
In order to show more specific and qualitative results, we apply the mappings and study a snake robot with two joints (c.f. Fig. 3). The small number of joints allows us to show graphically the magnitude of the studied norms as a function of the joint torques. First, we study a snake robot with the parameters described in Table 1. The snake robot is contacting an object with its tail (first link) and the contact occurs at the middle of the link (c.f. Fig. 3). The angles of the joints are varied in the range \([-135^{\circ }, 135^{\circ }]\) every \(10^{\circ }\) (784 configurations in total), and the metrics (26), (27), and (30) are calculated within the quadratic region (31).
Table 1 Parameters of the simulation for case study 1
One example configuration can be seen in Fig. 4, where it is assumed that the object has a hundred times the mass of a link of the robot (i.e., \(\kappa =100\)). The first, second, and third columns represent the three scenarios depicted in Fig. 3, respectively. A lighter color represents a higher value of the depicted norm. Figure 4a shows the magnitude of the acceleration of the object \(||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}\). It can be seen that it barely changes regardless of the scenario (i.e., whether or not the snake robot has passive wheels, the object will accelerate the same given the same input). Figure 4b shows the magnitude of the acceleration of the snake robot \(||\vec {\ddot{\varvec{q}}}_{s}||^{2}\). This shows clearly that, even if the object's acceleration is similar for all three scenarios, the behavior of the snake robot changes. The addition of passive wheels (second and third columns) increases the area where the snake robot's slippage is minimal. Without passive wheels, the snake robot will slip in almost any direction of the input space.
Norms of motion of the system. The first, second, and third column represent scenario 1, scenario 2, and scenario 3, respectively. A higher value represents more power transmitted to the respective motion. The configuration of the robot is \(\varvec{q}_{s} = \{ 0,0,0,-135^{\circ },-135^{\circ }\}\). a Acceleration of the object. b Acceleration of the snake robot. c Slippage ratio. Several values of \(\kappa \) are shown
The slippage ratio (30) gives quantitative information about the movement of the system and can be studied in the same way as the previous norms. Figure 4c shows the value of the slippage ratio for all three scenarios with several values of \(\kappa \) for one configuration. It can be seen that \(\hbox {sr} \rightarrow 0\) in the region where there is no contact with the object (i.e., the snake robot can move freely and therefore \(||\vec {\varvec{a}}_{\mathrm{obj}}||^{2} \rightarrow 0\)). It is interesting to see that in all three scenarios it is always possible to make the object move. However, Scenario 2 (non-ideal passive wheels) has a limited region where the slippage ratio is high, compared to Scenario 3, where a whole region seems to give high values of the slippage ratio. These regions in the input space are highlighted in Fig. 4b, c. Regions where the slippage of the snake robot is minimized tend to have a higher slippage ratio.
Case study 2: Snake robot with three joints
The proposed framework and metrics can be applied to a snake robot with any number of joints. In this section, a snake robot with three joints is studied. However, studying the three-dimensional input space could be cumbersome. Instead, the norms \(||\vec {\varvec{\lambda }}||^{2}\), \(||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}\) and slippage ratio (30) are studied as a function of the polar coordinates of the COM of the snake robot \((|COM|,\angle COM)\) w.r.t. the contact point, as discussed in previous sections.
Figure 5a reports the result for the norm of constraint forces \(||\vec {\varvec{\lambda }}||^{2}\). The polar plots show the results for scenarios 1, 2, and 3, respectively. The higher the value, the bigger the constraint forces. It can be seen that in scenario 1 (negligible friction) there is a clear trend for configurations with the COM of the snake robot at angles \(90^{\circ }\) and \(-90^{\circ }\) to have a higher impact on the wrench applied to the object. Although scenarios 2 and 3 report a higher norm of the constraint forces, this is due to the addition of passive wheels. From this figure alone, it is not possible to ascertain the impact on the object.
Norms studied over all configurations of the snake robot. a Constraint force (from left to right: scenarios 1, 2, and 3). b Acceleration of the object (from left to right: scenarios 1, 2, and 3). c Slippage ratio (from left to right: scenarios 1 and 2)
Figure 5b reports the result for the norm of the object's acceleration \(||\vec {\varvec{a}}_{\mathrm{obj}}||^{2}\). The polar plots report the results for scenarios 1, 2, and 3, respectively. It can be seen that the addition of passive wheels (even ideal ones) has little impact on the acceleration of the object. However, the configuration of the snake robot, parametrized with the polar coordinates of its COM, has a clear and meaningful impact on the acceleration of the object.
Although basic intuition would suggest that the addition of constraints (i.e., passive wheels) should have an impact on the force applied to the object (through \(\vec {\varvec{\lambda }}\)), and consequently on its acceleration \(\vec {\varvec{a}}_{\mathrm{obj}}\), this study shows that this is not the case (at least, not that simply).
An important addition of this paper w.r.t. [15, 24] is the study of the slippage ratio sr. By studying the relationship between motions of both systems (snake robot and object), we can understand how the additional constraints have an impact on the system. A snake robot without passive wheels will slip as it pushes the object. Therefore, minimizing this motion while keeping a steady force on the object (and therefore producing an acceleration) is desirable. Figure 5c shows the result of the slippage ratio (30). It can be seen that in Scenario 1 (without passive wheels) the same trend as with the object's acceleration appears. However, passive wheels (even non-ideal ones) have a big impact on the slippage ratio (take notice of the change of scale).
To show more clearly the impact of the configuration of the snake robot on the acceleration and slippage of the system, Fig. 6 shows the best and worst configurations for the acceleration of the object (Fig. 6a) and for the slippage ratio (Fig. 6b). The results are summarized in Table 2. It can be seen that passive wheels (Scenarios 2 and 3) have little impact on the acceleration, but a significant one on decreasing the slippage of the snake robot (\(\hbox {sr} \rightarrow 1\)).
Representative configurations chosen among the best and worst configurations of the snake robot. a Acceleration of the object. b Slippage ratio
Table 2 Results of the norms
Discussion and future applications
Several assumptions have been made in this line of research, especially the number of contacts considered between objects, and their rigidity. This is because we are interested in giving a solution that is mathematically rigorous while guaranteeing uniqueness of the solution. Including more contact points means losing this in favor of robustness. For example, a penalty method (a.k.a. virtual springs) or barrier functions may be considered, which is common for whole-arm manipulation (WAM) tasks. Although our assumptions are restrictive, they allow us to give a solid foundation for the research. Other models or considerations can be used for more realistic scenarios, but the rigid-body assumption used here allows for a clear basis for comparison. Considering the gaps in knowledge regarding snake robots (as highlighted in the "Background" section), we consider the model and results presented to be useful for moving research forward.
The holonomic constraints (e.g., constraints due to joints of the snake robot) are already encoded in the kinematic model of the snake robot presented in "Mathematical background" section. These holonomic constraints are described by the use of the robot's Geometric Jacobians. Further distinction between holonomic and non-holonomic constraints is not necessary, as both can be expressed in the same unified manner (13), as mentioned in [21, 33].
The metrics and general framework presented can be used to analyze more complex tasks involving snake robots (or similar robotic systems) interacting with an object or the environment. This gives an opportunity to study both snake robots and robotic arms in the same framework, since the analysis is similar to the often-used force/manipulability ellipsoids used to study robotic arms or hands [34].
However, the metrics presented in previous research do not consider the motion of the robot itself, since a fixed base was always assumed. The framework presented in this paper can be applied to other mobile systems in a more complete manner than reported in the literature [28, 29]. More specifically, the analysis presented extends the concept of force or manipulability ellipsoids [26, 34] from the case of a fixed-base robot with an end-effector to a mobile robot without an end-effector. For a given task and configuration, an analysis can be carried out to find the optimal input (vector of joint torques) to minimize or maximize slippage of the system.
A few conclusions can be drawn from analyzing norms (26) and (27) on the input space (cf. Fig. 4). In all scenarios, \(\tau _{2}\), the joint furthest from the object, has almost no effect on the acceleration of the object. However, the addition of passive wheels helps to anchor the snake robot and couples the effect of \(\tau _{2}\) on the system.
In this paper, a modeling and analysis framework for snake robots in contact with an external body has been presented. Results show that the addition of passive wheels has little effect on the wrench applied to the object and therefore, little change in its acceleration. However, the passive wheels do have an effect on the motion of the robot itself. In other words, under certain conditions the slippage of the robot can be minimized while pushing the object. This could be beneficial for pushing or manipulation tasks. To the best of our knowledge, this problem has not been fully studied with snake robots.
Liljebäck P, Pettersen KY, Stavdahl Ø, Gravdahl JT. A review on modelling, implementation, and control of snake robots. Robot Auton Syst. 2012;60(1):29–40.
Ma S. Analysis of creeping locomotion of a snake-like robot. Adv Robot. 2001;15(2):205–24.
Saito M, Fukaya M, Iwasaki T. Serpentine locomotion with robotic snakes. IEEE Control Syst Mag. 2002;22(1):64–81.
Liljebäck P, Pettersen KY, Stavdahl Ø, Gravdahl JT. A simplified model of planar snake robot locomotion. In: Proceedings of IEEE/RSJ international conference on intelligent robots and systems (IROS 2010); 2010. p. 2868–75.
Ma S, Tadokoro N. Analysis of creeping locomotion of a snake-like robot on a slope. Auton Robots. 2006;20(1):15–23.
Hatton RL, Choset H. Sidewinding on slopes. In: Proceedings of IEEE international conference on robotics and automation (ICRA 2010). IEEE; 2010. p. 691–96.
Ma S, Ohmameuda Y, Inoue K. Dynamic analysis of 3-dimensional snake robots. In: Proceedings of IEEE international conference on intelligent robots and systems (IROS 2004), vol. 1. IEEE; 2004. p. 767–72.
Liljebäck P, Stavdahl Ø, Pettersen KY. Modular pneumatic snake robot: 3d modelling, implementation and control. Model Identif Control. 2008;29(1):21–8.
Hatton RL, Knepper RA, Choset H, Rollinson D, Gong C, Galceran E. Snakes on a plan: toward combining planning and control. In: Proceedings of IEEE international conference robotics and automation (ICRA 2013); 2013. p. 5174–81.
Liljebäck P, Pettersen KY, Stavdahl Ø, Gravdahl JT. Hybrid modelling and control of obstacle-aided snake robot locomotion. IEEE Trans Robot. 2010;26(5):781–99.
Chirikjian GS, Burdick JW. The kinematics of hyper-redundant robot locomotion. IEEE Trans Robot Autom. 1995;11(6):781–93.
Wang Z, Ma S, Li B, Wang Y. A unified dynamic model for locomotion and manipulation of a snake-like robot based on differential geometry. Sci China Inf Sci. 2011;54(2):318–33.
Reyes F, Ma S. Using a planar snake robot as a robotic arm taking into account the lack of a fixed base: Feasible region. In: Proceedings of IEEE international conference on intelligent robots and systems (IROS 2015); 2015. p. 956–62.
Reyes F, Ma S. Modeling of snake robots oriented towards grasping and interaction with the environment. In: Proceedings of international conference on real-time computing and robotics (RCAR 2015); 2015.
Reyes F, Ma S. Snake robots in contact with the environment: Influence of the configuration on the applied wrench. In: Proceedings of IEEE international conference on intelligent robots and systems (IROS 2016); 2016. p. 3854–59.
Prattichizzo D, Bicchi A. Dynamic analysis of mobility and graspability of general manipulation systems. IEEE Trans Robot Autom. 1998;14(2):241–58.
Nguyen V.-D. Constructing stable grasps in 3d. In: Proceedings of IEEE international conference on robotics and automation (ICRA 1987), vol. 4. IEEE; 1987. p. 234–39.
Bicchi A, Melchiorri C, Balluchi D. On the mobility and manipulability of general multiple limb robots. IEEE Trans Robot Autom. 1995;11(2):215–28.
Matsuno F, Sato H. Trajectory tracking control of snake robots based on dynamic model. In: Proceedings of IEEE international conference on robotics and automation (ICRA 2005); 2005. p. 3029–34.
Featherstone R. Rigid body dynamics algorithms. New York: Springer; 2014.
Blajer W. A geometrical interpretation and uniform matrix formulation of multibody system dynamics. J Appl Math Mech/Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM). 2001;81(4):247–59.
Grinfeld P. Introduction to tensor analysis and the calculus of moving surfaces. London: Springer; 2013.
Featherstone R. Rigid body dynamics algorithms. New York: Springer. http://link.springer.com/10.1007/978-1-4899-7560-7. Accessed 2015-10-29.
Reyes F, Ma S. Snake robots in contact with the environment—influence of the friction on the applied wrench. In: Proceedings of IEEE international conference on intelligent robots and systems (IROS 2017); 2017.
Reyes F, Ma S. Studying slippage on pushing applications with snake robots. In: Proceedings of international conference on real-time computing and robotics (RCAR 2017); 2017.
Murray RM, Sastry SS, Li Z. A mathematical introduction to robotic manipulation. 1st ed. Boca Raton: CRC Press, Inc.; 1994.
Siciliano B, Khatib O, editors. Springer handbook of robotics. New York: Springer; 2008.
Wen JTY, Wilfinger LS. Kinematic manipulability of general constrained rigid multibody systems. IEEE Trans Robot Autom. 1999;15(3):558–67. https://doi.org/10.1109/70.768187.
Bayle B, Fourquet JY, Renaud M. Manipulability analysis for mobile manipulators. In: Proceedings of IEEE international conference on robotics and automation (ICRA 2001), vol. 2; 2001. p. 1251–56. https://doi.org/10.1109/ROBOT.2001.932782.
Kuwada A, Wakimoto S, Suzumori K, Adomi Y. Automatic pipe negotiation control for snake-like robot. In: Proceedings of IEEE international conference on advanced intelligent mechatronics (AIM 2008). IEEE; 2008. p. 558–63.
Shapiro A, Greenfield A, Choset H. Frictional compliance model development and experiments for snake robot climbing. In: Proceedings of IEEE international conference on robotics and automation (ICRA 2007). IEEE; 2007. p. 574–579.
Greenfield A, Rizzi AA, Choset H. Dynamic ambiguities in frictional rigid-body systems with application to climbing via bracing. In: Proceedings of IEEE international conference on robotics and automation (ICRA 2005). IEEE; 2005. p. 1947–52.
Udwadia FE, Kalaba RE. A new perspective on constrained motion. Proc. Math. Phys. Sci. 1992;439(1906):407–10.
Siciliano B, Sciavicco L, Villani L, Oriolo G. Robotics: modelling, planning and control. 1st ed. London: Springer; 2008.
FR conducted mathematical analysis, programming to perform the simulations, and wrote the manuscript. SM supervised the research. Both authors read and approved the final manuscript.
This study was in part supported by the "Strategic Research Foundation Grant-aided Project for Private Universities (2013–2017)" from the Ministry of Education, Culture, Sports, Science and Technology, Japan, and by R-GIRO (Ritsumeikan Global Innovation Research Organization). We acknowledge and thank their support.
Department of Robotics, Ritsumeikan University, Kusatsu, Shiga, 525-8577, Japan
Fabian Reyes & Shugen Ma
Correspondence to Shugen Ma.
Reyes, F., Ma, S. Studying slippage on pushing applications with snake robots. Robot. Biomim. 4, 9 (2017). https://doi.org/10.1186/s40638-017-0065-3
Received: 08 September 2017
A competitive capital market is important to society because it directs resources towards projects that
Are valued more highly than their cost
A legal system that protects private property and enforces contracts in an even-handed manner helps promote economic growth because it
Provides people with a strong incentive to supply others with things that they value at an economical price
In a market economy, persons undertaking an investment project must
Either personally supply the required funds or convince other investors to do so
Private entrepreneurs are likely to make better investment decisions than central planners because
Entrepreneurs who make mistakes must bear the costs of the mistakes personally
The policies of Fannie Mae and Freddie Mac from 1995 to 2008 encouraged mortgage originators to
Loosen lending standards and extend more loans to low-income and subprime borrowers
Fannie Mae and Freddie Mac held a competitive advantage over other lenders primarily because
They could borrow funds more cheaply than other lenders because their bonds were perceived to be backed by the federal government
What must a nation do to achieve and sustain a high rate of economic growth?
Have a mechanism capable of attracting savings and channeling them into wealth creating projects
Why would you question the validity of the statement "Because Walmart is a low-wage firm, it exploits its workers"?
No one is required to work for Walmart and therefore it must attract workers by paying them more attractive wages than they would earn elsewhere
What contributed to the soaring housing prices of 2002 to 2005?
Regulations designed to make housing more affordable increased the demand for housing and drove housing prices upward
As compared to a barter economy, money of stable and predictable value will
Reduce transaction costs and increase the volume of trade
What is the best definition of money?
Anything that widely serves as a medium of exchange, unit of account, and store of value
What reduces the purchasing power of the dollar?
When the inflation rate is high and volatile
Investment is riskier and gains from trade are reduced
The most frequent reason for printing more money
Is the existence of an unbalanced budget
If the Fed is going to create an environment for economic progress, it should focus on
Keeping interest rates low
Most economists believe the severity and duration of the Great Depression was primarily the result of
Manipulation of the money supply by the Fed
How do government bailouts impact the economy?
They hurt the economy by rewarding reckless private and state spending thereby encouraging more of it
Whereas John Maynard Keynes argued that markets were inherently unstable, necessitating government intervention, Friedrich Hayek maintained
The economy was far too complex for experts and government planners to manage
The Smoot–Hawley tariff was passed in order to
Force Americans to buy more American goods which theoretically would increase jobs
What was a result of the Agricultural Adjustment Act, the National Industrial Recovery Act, and the numerous other programs introduced as part of the New Deal?
A business environment of uncertainty that reduced output and investment
Erdös-Heilbronn problem
Let $ G $ be an Abelian group and let $ A \subset G $. For $ k \in \mathbf N $, let
$$ k \wedge A = \left \{ {\sum _ {x \in X } x } : {X \subset A \textrm{ and } \left | X \right | = k } \right \} . $$
(Here, $ | A | $ denotes the cardinality of a set $ A $.) Let $ p $ be a prime number and let $ A \subset \mathbf Z/p \mathbf Z $. It was conjectured by P. Erdös and H. Heilbronn that $ | {2 \wedge A } | \geq \min ( p,2 | A | - 3 ) $.
This conjecture, mentioned in [a5], was first proved in [a3], using linear algebra. As a consequence of the lower bound on the degree of the minimal polynomial of the Grassmann derivative, the following theorem is true [a3]: Let $ p $ be a prime number and let $ A \subset \mathbf Z/ {p \mathbf Z } $. Then
$$ \left | {k \wedge A } \right | \geq \min ( p,k ( \left | A \right | - k ) + 1 ) . $$
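For small primes, the theorem can be checked exhaustively; a brute-force sketch in Python:

```python
from itertools import combinations

def restricted_sumset_size(A, k, p):
    """|k ^ A|: the number of distinct sums of k-element subsets of A mod p."""
    return len({sum(X) % p for X in combinations(A, k)})

# Exhaustive check of |k ^ A| >= min(p, k(|A| - k) + 1) for a small prime.
p = 11
for size in range(1, p + 1):
    for A in combinations(range(p), size):
        for k in range(1, size + 1):
            assert restricted_sumset_size(A, k, p) >= min(p, k * (size - k) + 1)
print("bound verified for p =", p)
```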
Applying this theorem with $ k = 2 $, one obtains the Erdös–Heilbronn conjecture mentioned above. A generalization of the theorem has been obtained in [a2]. Presently (1996), almost nothing is known for composite numbers. The following conjecture is proposed here: Let $ n $ be a composite number (cf. also Prime number) and let $ A \subset \mathbf Z/ {n \mathbf Z } $ be such that $ | A | \geq k - 1 + { {( n - 1 ) } / k } $. Then $ 0 \in j \wedge A $ for some $ 1 \leq j \leq k $.
For a prime number $ n $, the above conjecture is an easy consequence of the theorem above. Some applications to integer subset sums are contained in [a6]. Along the same lines, the conjecture has several implications. In particular, for $ k = 3 $ one finds: Let $ A \subset \{ 1 \dots n \} $ be such that $ | A | \geq 5 + { {( n - 1 ) } / 3 } $. Then there is a $ B \subset A $ such that $ 3 \leq | B | \leq 6 $ and $ \sum _ {x \in B } x = 2n $.
This was conjectured partially by Erdös and R. Graham [a5] and follows easily by applying the conjecture twice, after adding $ 0 $.
[a1] N. Alon, "Subset sums" J. Number Th. , 27 (1987) pp. 196–205
[a2] N. Alon, M.B. Nathanson, I.Z. Ruzsa, "The polynomial method and restricted sums of congruence classes" J. Number Th. (to appear)
[a3] J.A. Dias da Silva, Y.O. Hamidoune, "Cyclic subspaces of Grassmann derivations" Bull. London Math. Soc. , 26 (1994) pp. 140–146
[a4] P. Erdös, H. Heilbronn, "On the addition of residue classes mod $p$" Acta Arith. , 9 (1964) pp. 149–159
[a5] P. Erdös, R. Graham, "Old and new problems and results in combinatorial number theory" L'Enseign. Math. (1980) pp. 1–128
[a6] Y.O. Hamidoune, "The representation of some integers as a subset sum" Bull. London Math. Soc. , 26 (1994) pp. 557–563
[a7] H.B. Mann, "Addition theorems" , R.E. Krieger (1976) (Edition: Second)
Determinant of a block matrix [duplicate]
How can one prove this equality for a block matrix: $$\det\begin{bmatrix}A & C\\ 0 & B\end{bmatrix}=\det(A)\det(B)$$
I tried to use a proof by induction, but I'm stuck on it. Is there a simpler method? Thanks for any help.
linear-algebra determinant
possible duplicate of Determinant of a block lower triangular matrix and The determinant of block triangular matrix as product of determinants of diagonal blocks. – user1551 Oct 11 '13 at 12:07
There is a deeper truth to this than just the proof itself. While a matrix doesn't have the commutative multiplication of a field, most linear algebra results can be rephrased to not assume multiplicative commutativity. Therefore any matrix can be thought of as a 2x2 matrix of matrices, and most results follow inductively from the 2x2 case. – DanielV Oct 11 '13 at 12:18
Hint: See Determinants of Block Matrices.
Eleven-Eleven
That does not even indirectly (through the link) give a proof. It is just an indication, and you could have copied that here (with reference to the source if you think it really necessary). – Marc van Leeuwen Oct 11 '13 at 11:32
I was trying to guide him somewhere. I will put that in the comment section next time. Thanks for the feedback. – Eleven-Eleven Oct 11 '13 at 15:16
Other answers suggest quite elementary proofs, and I upvoted one of them. However, I want to propose a technically easier, but less elementary proof.
If you're familiar with it, you can use QR decomposition. Let
$$A = Q_A R_A, \quad B = Q_B R_B$$
be QR decompositions of $A$ and $B$. Then
\begin{align*} \det \begin{bmatrix} A & C \\ 0 & B \end{bmatrix} &= \det \begin{bmatrix} Q_A R_A & Q_A Q_A^T C \\ 0 & Q_B R_B \end{bmatrix} = \det \left( \begin{bmatrix} Q_A \\ & Q_B \end{bmatrix} \begin{bmatrix} R_A & Q_A^T C \\ 0 & R_B \end{bmatrix} \right) \\ &= \det \begin{bmatrix} Q_A \\ & Q_B \end{bmatrix} \det \begin{bmatrix} R_A & Q_A^T C \\ 0 & R_B \end{bmatrix} = \det Q \det R, \end{align*}
where $$Q := \begin{bmatrix} Q_A \\ & Q_B \end{bmatrix}, \quad R := \begin{bmatrix} R_A & Q_A^T C \\ 0 & R_B \end{bmatrix}.$$
Notice that $R$ is (upper) triangular, so its determinant is equal to the product of its diagonal elements, so
$$\det R = \det \begin{bmatrix} R_A & 0 \\ 0 & R_B \end{bmatrix}.$$
Combining what we have,
\begin{align*} \det \begin{bmatrix} A & C \\ 0 & B \end{bmatrix} &= \det Q \det R = \det \begin{bmatrix} Q_A \\ & Q_B \end{bmatrix} \det \begin{bmatrix} R_A \\ & R_B \end{bmatrix} \\ &= \det Q_A \det Q_B \det R_A \det R_B = \det (Q_AR_A) \det (Q_B R_B) \\ &= \det A \det B. \end{align*}
Notice that this is far from an elementary proof. It uses the QR decomposition, a formula for the determinant of block diagonal matrices, a formula for the determinant of triangular matrices, and block multiplication of matrices.
Vedran Šego
I personally don't like too much a proof that works in a much more restricted setting (real matrices) than the result that is being proved requires (here that is matrices over a commutative ring), even if one should be primarily interested in that restricted setting. That is because such a proof suggests a relation that just isn't there. – Marc van Leeuwen Oct 11 '13 at 11:29
Gaussian elimination definitely does not work for matrices over arbitrary commutative rings. – Marc van Leeuwen Oct 11 '13 at 13:16
But the QR-decomposition is just a special case of the Iwasawa decomposition, so it works at least in somewhat more generality than is suggested... – Vincent May 14 '15 at 14:03
If you prove it for real-valued matrices, you've proved it for matrices with values in an arbitrary commutative unital ring. LHS minus RHS of the desired identity is an integer-coefficient polynomial $F$ in the entries of $A,B,C$. We've proved that $F$ evaluates to 0 at any point of $\mathbb{R}^N$. Thus, $F$ is 0 as an element of $\mathbb{Z}[x_1,\ldots, x_N]$. So it evaluates to 0 at any point of $R^N$ for any commutative unital ring $R$. – Dmitri Gekhtman Aug 24 '16 at 6:43
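Setting aside the discussion of general rings, the identity is at least easy to sanity-check numerically over the reals; a small NumPy sketch (a check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, n))
M = np.block([[A, C], [np.zeros((n, m)), B]])
# det of the block upper triangular matrix equals det(A) * det(B)
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(B))
```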
Since I have criticised some other answers, let me be fair and give my take on this. The result to use is just the Leibniz formula defining the determinant (for once, use the definition!): $$ \def\sg{\operatorname{sgn}} \det(M) = \sum_{\sigma \in S_n} \sg(\sigma) \prod_{i = 1}^n M_{i,\sigma(i)}. $$ Now if $M$ is the matrix of the question, and its block $A$ has size $k\times k$, then by the block form $M_{i,j}=0$ whenever $j\leq k<i$ (lower left hand block). So we can drop all permutations from the sum for which $\sigma(i)\leq k$ for any $i\in\{k+1,\ldots,n\}$. But that means that for $\sigma$ to survive, the $n-k$ values $\sigma(i)$ for $i\in T=\{k+1,\ldots,n\}$ must all be in the $(n-k)$-element set $T$, and they must of course also be distinct: they form a permutation of the elements of $T$. So $\sigma$ permutes the elements of $T$ among each other, and then necessarily also the elements of the complement $\{1,\ldots,k\}$ of $T$. These permutations of subsets are independent, so the surviving permutations are naturally in bijection with the Cartesian product of the permutations of $\{1,\ldots,k\}$ and those of $T$. Also the sign of the combination of two such permutations is the product of the signs of the individual permutations. All in all one gets $$ \begin{align} \det(M)&=\sum_{\pi \in S_k} \sum_{\tau\in S_T} \sg(\pi)\sg(\tau) \left(\prod_{i = 1}^k M_{i,\pi(i)}\right)\prod_{i \in T}M_{i,\tau(i)} \\&=\left(\sum_{\pi \in S_k}\sg(\pi)\prod_{i = 1}^k M_{i,\pi(i)}\right)\left(\sum_{\tau\in S_T}\sg(\tau)\prod_{i \in T}M_{i,\tau(i)}\right) \\&=\det (A)\det(B). \end{align} $$
Marc van Leeuwen
How would this work if the matrix of zeros was in the lower right? – user342914 Oct 25 '16 at 18:59
@user342914 A matrix of the form $(\begin{smallmatrix}A&B\\C&0\end{smallmatrix})$ with $B$ and $C$ square has a determinant which is $\pm\det(B)\det(C)$, where the sign is that of the permutation of columns that transforms the matrix to the form $(\begin{smallmatrix}B&A\\0&C\end{smallmatrix})$ (it is $-1$ if and only if both $B$ and $C$ have an odd size). It might be more natural to have $A$ and $0$ square, but in that case there is no obvious formula. – Marc van Leeuwen Oct 26 '16 at 6:48
Hint: Use Laplace expansion to get what you need.
Тимофей Ломоносов
I don't see how I can use the Laplace expansion, and why the matrix $C$ doesn't affect the result. – user66407 Oct 11 '13 at 9:38
Select the last $k$ rows of the matrix, where $k$ is the number of rows of the $0$-part. Which minors of $k$-th order are not equal to 0? What are their algebraic complements? – Тимофей Ломоносов Oct 11 '13 at 9:46
The link you provided only does Laplace expansion for one row at a time. So at least you might want to state what Laplace expansion of $k$ rows at a time is exactly. – Marc van Leeuwen Oct 11 '13 at 12:06
Journal of the Association for Research in Otolaryngology
March 2007, Volume 8, Issue 1, pp 84–104
A Dual-Process Integrator–Resonator Model of the Electrically Stimulated Human Auditory Nerve
Olivier Macherey
Robert P. Carlyon
Astrid van Wieringen
A phenomenological dual-process model of the electrically stimulated human auditory nerve is presented and compared to threshold and loudness data from cochlear implant users. The auditory nerve is modeled as two parallel processes derived from linearized equations of conductance-based models. The first process is an integrator, which dominates stimulation for short-phase duration biphasic pulses and high-frequency sinusoidal stimuli. It has a relatively short time constant (0.094 ms) arising from the passive properties of the membrane. The second process is a resonator, which induces nonmonotonic functions of threshold vs frequency with minima around 80 Hz. The ion channel responsible for this trend has a relatively large relaxation time constant of about 1 ms. Membrane noise is modeled as a Gaussian noise, and loudness sensation is assumed to relate to the probability of firing of a neuron during a 20-ms rectangular window. Experimental psychophysical results obtained in seven previously published studies can be interpreted with this model. The model also provides a physiologically based account of the nonmonotonic threshold vs frequency functions observed in biphasic and sinusoidal stimulation, the large threshold decrease obtained with biphasic pulses having a relatively long inter-phase gap and the effects of asymmetric pulses.
auditory nerve electrical stimulation computational model threshold nonmonotonicities subthreshold resonance subthreshold oscillations
During the past two decades, a number of computational models of the electrically stimulated auditory nerve (AN) were developed. These include biophysical (Colombo and Parkins 1987; Finley et al. 1990; Frijns et al. 1996; Cartee 2000; Matsuoka et al. 2001; Rattay et al. 2001; Morse and Evans 2003) and phenomenological models (Shannon 1989; Bruce et al. 1999b; Rubinstein et al. 2001; McKay et al. 2003; Xu and Collins 2005; Carlyon et al. 2005). The development of computational models may guide our understanding of the AN properties and help to design future coding strategies for cochlear implants (CIs).
On the one hand, biophysical models are typically conductance-based and aim to describe the kinetics of ion channels using the formalism of Hodgkin and Huxley (1952). Most of these models are derived from cold-temperature animals such as the squid (Hodgkin and Huxley 1952) or the toad (Frankenhaeuser and Huxley 1964) and need further adjustments to match the properties of the human AN (Rattay et al. 2001). The gating processes need to be accelerated to account for the temperature difference, and conductance values need to be corrected to account for the higher channel density, thereby leading to relatively short time constants. Cartee (2000) studied the summation and refractory properties of the AN using several conductance-based models. She determined thresholds for a single pseudomonophasic pulse and for a pair of pulses separated by an interpulse interval. She showed that the threshold for the pulse pair was lower than for the single pulse for intervals smaller than 500 μs, irrespective of the model and of the temperature value (varying from 20 to 39°C), and that this difference vanished for longer intervals. Similarly, the introduction of an inter-phase gap (IPG) between the two phases of a biphasic (BP) pulse was shown to reduce threshold of excitation in animal and computational models (van den Honert and Mortimer 1979; Shepherd and Javel 1999). However, this threshold drop remained approximately constant when the IPG was increased beyond 100 μs. This is at odds with a recent study of Carlyon et al. (2005), who reported that thresholds of cochlear implantees for BP pulses continued to drop as the IPG increased up to 4.9 ms, the longest value tested. They showed that this drop was not due to a release of refractoriness at levels central to the AN but rather to a specific process at the level of the cochlea/AN. This may involve ion channels with higher time constants, which were not taken into account in previously published biophysical models of the AN. The exploration of individual ion channels in conductance-based models can be complicated because of the strong nonlinearities and the numerous parameters they use.
On the other hand, phenomenological models are based on experimental results and aim to describe trends of data using general and simple mathematical laws. They have proved, in some cases, to be accurate predictors of psychophysical results. Shannon (1989) proposed a dual-process model that could predict a wide range of data acquired for biphasic and sinusoidal stimuli. However, Carlyon et al. (2005) pointed out that this model could not account for the effects of introducing an IPG between the two phases of a BP pulse. In this same study, Carlyon et al. introduced a linear filter model derived from behavioral thresholds for sinusoidal stimuli, which could account for the effects of IPG. Since then, their model has successfully predicted cochlear implantees' thresholds for a wide range of stimulus waveforms (Macherey et al. 2006; van Wieringen et al. 2006). Despite using a few parameters and predicting a large set of data, the physiological significance of this model is not straightforward, and its predictions are restricted to the threshold level and cannot account for loudness growth.
The aim of the present study is to develop a stochastic, phenomenological model of single-channel stimulation that can account for thresholds and loudness data of CI users subjected to a variety of stimulus waveforms. As the model of Carlyon et al. (2005) suggests a linear process underlying threshold levels in the AN, we believe that the linearization of conductance-based models' equations can provide a useful tool to study the dynamic of the AN membrane. It may also, in return, provide a basis to develop more realistic conductance-based models of the human AN. Linearization of conductance-based models is valid in the subthreshold domain, where the variations of the transmembrane potential remain small (Mauro et al. 1970). Different subthreshold behaviors can be obtained depending on the characteristics of the neuron. Spiking neurons can be divided in two major classes based on their mechanisms of excitability as they go from quiescence to periodical firing. Type I neurons act as integrators of incoming signals; type II neurons act as resonators and show a peak response at a specific frequency (Izhikevich 2001; St Hilaire and Longtin 2004). This classification is based on the dynamics properties of a neuron and should not be confused with the anatomical classification of primary auditory neurons. These two types of neurons can also exhibit different behaviors in response to a subthreshold step-current: the transmembrane potential of type I neuron exponentially converges to the holding voltage whereas type II neurons show, in some cases, damped oscillations (Richardson et al. 2003). Based on the results from previous psychophysical experiments with CI users, we hypothesize in this study that primary auditory neurons can exhibit these two types of behavior. First, nonmonotonic curves of threshold vs rate were observed in some cochlear implantees subjected to BP pulses having relatively long-phase durations (1 or 2 ms) (Shannon 1985; Pfingst et al. 1996) and to sinusoidal stimulation (Shannon 1985; Pfingst 1988; Miller et al. 1999a). In these studies, a threshold minimum was typically reached between 70 and 100 Hz. This frequency preference resembles the fundamental property of resonator-like neurons (type II). Frequency resonance was observed in a number of real and modeled neurons (reviewed in Hutcheon and Yarom 2000) and we will show that nonmonotonic functions obtained in CI listeners are consistent with the operation of such a mechanism. Second, at stimulation frequencies higher than 300 Hz for sinusoidal stimulation and also for biphasic stimulation with relatively short-phase durations (<500 μs), the neural membrane is believed to act as a leaky integrator of charge, similar to type I neurons (Moon et al. 1993).
Simple linear models can exhibit either integrative or resonant properties. The "leaky integrate-and-fire" model (Gerstner and Kistler 2002) is the most common and simplest integrator model. Izhikevich (2001) proposed an analog of it, which shows a resonance and which he termed the "resonate-and-fire" model. Similarly, Richardson et al. (2003) introduced the "generalized integrate-and-fire" model, which could also, in some cases, exhibit frequency resonance and subthreshold damped oscillations. These models can be obtained via linearization and simplification of conductance-based models' equations. They aim to describe the subthreshold deviations of the transmembrane potential from the resting potential and assume that an action potential is initiated whenever the membrane voltage crosses a certain threshold. The "Methods" section of this paper presents a dual-process model, which uses both a linear integrator and linear resonator neurons. Although the characterization of the processes involves a large number of parameters, in the "Results" section, the values of these parameters are kept fixed while we compare the model predictions to the results of psychophysical experiments employing a wide range of pulse shapes.
Description of the model
The present model includes four stages (Fig. 1): stimulation, subthreshold behavior, neural activation, and central integration.
Schematic representation of the stochastic model.
We assume that a population of N f neural fibers close to the stimulating electrode is being driven by a current I Stim. The spatial spread of current is not modeled and we simply consider a uniform stimulation. No hypothesis concerning the mode of stimulation (monopolar or bipolar) is being made. A sampling frequency of 250 kHz is used for computation except for pulsatile stimuli with short-phase durations (<30 μs) where 500 kHz is used. The stimulus duration is always 100 ms unless otherwise stated.
Subthreshold behavior
We assume that the population of stimulated neurons is divided into two classes. Some are integrators and some are resonators. Although we assume in this study that two populations of neurons are modeled, the model can also be seen as describing the same population of neurons stimulated at distinct spatial locations or submitted to different patterns of excitation. The proportion of integrator neurons (λ) is assumed to be 0.5, the same as the proportion of resonator neurons (1−λ). We model the integrator as an RC circuit (conductance G 0 and capacity C 0) and the resonator as the circuit shown in Figure 1 (bottom part), consisting of an inductance L 1 and conductance g 1, in parallel to a capacity C 1 and shunted by a conductance G 1. These two models are the simplest linearized models that can exhibit integrative and resonant properties, respectively, and their mathematical description can be found in previous studies (Gerstner and Kistler 2002; Richardson et al. 2003; Brunel et al. 2003). However, for means of completeness, the main properties of the two filters are given in "Appendix 1." The outputs of the filters V int and V res represent the subthreshold deviations of the transmembrane potential from the resting potential for the integrator and resonator neurons, respectively.
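To make the two processes concrete, the following discrete-time simulation sketches their subthreshold responses to an arbitrary current input. The state-space form of the resonator is one plausible reduction of the circuit of Figure 1 (a generalized integrate-and-fire form in the spirit of Richardson et al. 2003); the values of α and β below are illustrative choices that place the damped oscillation near 80 Hz and are not the fitted parameters of Table 1, while the time constants follow the values quoted in the text (τ 0 = 0.094 ms, τ 1 = 1.04 ms):

```python
import numpy as np

def simulate_membranes(I, dt=4e-6, tau0=0.094e-3, tau1=1.04e-3,
                       alpha=1.0, beta=0.284, C0=1.0, C1=1.0):
    """Forward-Euler simulation of the two linearized subthreshold processes.

    Integrator (RC circuit):       C0 dV/dt = -(C0/tau0) V + I
    Resonator (one slow variable): C1 dV/dt = -(C1/tau1)(alpha V + beta w) + I
                                   tau1 dw/dt = V - w
    With these illustrative values the resonator eigenvalues are roughly
    -961 +/- 512i rad/s, i.e., a damped oscillation near 81 Hz.
    """
    V_int = np.zeros(len(I))
    V_res = np.zeros(len(I))
    w = 0.0
    for k in range(1, len(I)):
        V_int[k] = V_int[k-1] + dt * (-V_int[k-1] / tau0 + I[k-1] / C0)
        dV = -(alpha * V_res[k-1] + beta * w) / tau1 + I[k-1] / C1
        w += dt * (V_res[k-1] - w) / tau1
        V_res[k] = V_res[k-1] + dt * dV
    return V_int, V_res
```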
Neural activation
The subthreshold membrane potentials V int and V res are full-wave rectified to account for both polarities of the stimulus. In monopolar mode, a negative and a positive current input induce a depolarization and a hyperpolarization, respectively, of the fibers close to the active electrode. In addition, they also induce a hyperpolarization and a depolarization, respectively, of the same fibers at a location remote from the electrode (Rattay 1989). In bipolar mode, a negative current input depolarizes the fibers close to the active electrode and hyperpolarizes the fibers close to the return electrode. That is presumably why, irrespective of the stimulation mode, both polarities of a pulse can evoke neural spikes (Miller et al. 1999c). The full-wave rectification therefore assumes that both polarities of an alternating polarity stimulus are equally effective. We model membrane noise V noise as a Gaussian noise with amplitude distribution N[0, s²] that changes its value every 4 μs. Spike initiation is assumed to occur whenever the transmembrane potential (V int + V noise or V res + V noise) crosses a threshold potential V thr, which is supposed to be constant across fibers. The probability of firing at a given discrete time k (k = 1...n; n being the number of samples contained in the stimulus) is thus given by:
$$ P_{\text{int}}(k) = \frac{1}{2}\left(1 + \text{erf}\left(\frac{\frac{\left|V_{\text{int}}(k)\right|}{V_{\text{thr}}} - 1}{\sqrt{2}\,\text{RS}}\right)\right) \quad\text{and}\quad P_{\text{res}}(k) = \frac{1}{2}\left(1 + \text{erf}\left(\frac{\frac{\left|V_{\text{res}}(k)\right|}{V_{\text{thr}}} - 1}{\sqrt{2}\,\text{RS}}\right)\right) $$
for the integrator and resonator units, respectively. Here, RS is the "relative spread" as defined by Verveen (1961) and erf is the error function.
$$ \text{RS} = \frac{s}{V_{\text{thr}}} $$
$$ \text{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\,\mathrm{d}t $$
These formulas are typical of a Bernoulli process and have already been derived in Bruce et al. (1999a).
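Equation 1 translates directly into code. The sketch below uses the convention C 1 × V thr = 1 adopted in the paper (so V thr = 1 when C 1 = 1) and the RS value of 0.18 used later for the stochastic model:

```python
import numpy as np
from scipy.special import erf

def firing_probability(V, V_thr=1.0, RS=0.18):
    """Per-sample firing probability of Eq. (1):
    P(k) = 0.5 * (1 + erf((|V(k)| / V_thr - 1) / (sqrt(2) * RS)))."""
    return 0.5 * (1.0 + erf((np.abs(V) / V_thr - 1.0) / (np.sqrt(2.0) * RS)))
```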
Central integration
Loudness is commonly believed to be integrated at a level central to the AN. Physiologically, this integration phenomenon probably relates to the integration of neural activity. Central neurons may fire only if they receive a sufficient number of input spikes (Middlebrooks 2004). We hypothesize here that the loudness of a stimulus relates to the number of spikes initiated at the AN level within a certain temporal window. We use a series of 20-ms rectangular windows W i (i = 1...M, M being the number of windows contained in the stimulus), with a 0.5-ms step increment, and integrate the probability of firing of the two processes (integrator and resonator) across each window. Loudness perception is assumed to relate to the maximum firing probability during any of the temporal integration windows. For the sake of simplicity, the case of repetitive firing is not considered and refractory effects are not modeled. The probability of firing $P^{W_i}_{\text{firing}}$ of a fiber during the ith temporal window is given by:
$$ P^{W_i}_{\text{firing}} = \lambda P^{W_i}_{\text{int}} + (1 - \lambda) P^{W_i}_{\text{res}} \quad\text{with}\quad \begin{cases} P^{W_i}_{\text{int}} = 1 - \prod_{k \in W_i} \left(1 - P_{\text{int}}(k)\right) \\[1mm] P^{W_i}_{\text{res}} = 1 - \prod_{k \in W_i} \left(1 - P_{\text{res}}(k)\right) \end{cases} $$
$P^{W_i}_{\text{int}}$ and $P^{W_i}_{\text{res}}$ are the firing probabilities of the integrator and resonator neurons, respectively, during W i. For most of the conditions, the sampling frequency equals the rate of noise variations (250 kHz). In these cases, the window W i simply contains all the samples. For short-phase duration stimuli (<30 μs), because the sampling frequency is 500 kHz, the V int and V res vectors are first downsampled to the rate of noise variations prior to performing the central integration of Eq. 4.
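A sketch of this windowed integration (Eq. 4), accumulating the products 1 − P in log space for numerical stability:

```python
import numpy as np

def max_window_firing_probability(P_int, P_res, dt=4e-6, window=20e-3,
                                  step=0.5e-3, lam=0.5):
    """Maximum of Eq. (4) over sliding 20-ms rectangular windows
    (0.5-ms step), given per-sample firing probabilities at the noise rate."""
    n_win = int(round(window / dt))
    n_step = int(round(step / dt))
    log_sur_int = np.log1p(-np.clip(P_int, 0.0, 1.0 - 1e-12))  # log(1 - P)
    log_sur_res = np.log1p(-np.clip(P_res, 0.0, 1.0 - 1e-12))
    best = 0.0
    for start in range(0, len(P_int) - n_win + 1, n_step):
        p_i = 1.0 - np.exp(log_sur_int[start:start + n_win].sum())
        p_r = 1.0 - np.exp(log_sur_res[start:start + n_win].sum())
        best = max(best, lam * p_i + (1.0 - lam) * p_r)
    return best
```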
Threshold detection and most comfortable level estimation
Two different cases are studied: deterministic (V noise = 0) and stochastic (V noise ≠ 0) cases. For the deterministic model, all resonator neurons fire once V res exceeds V thr. Similarly, all integrator neurons fire once V int exceeds V thr. We assume that threshold is reached when one of the two potentials V int or V res exceeds V thr. The level needed to reach threshold is therefore simply inversely proportional to the maximum of V res and V int after full-wave rectification. For the stochastic model, we determine thresholds using an analytical technique derived from signal detection theory and described in Bruce et al. (1999b). Let a sequence of independent random variables (RVs) X j represent the firing state of each neuron during the ith temporal window (j = 1...N f). As we do not consider the case of repetitive spiking, each X j represents the binary firing state of the jth fiber (X j = 0 if the neuron did not fire at all during the window and X j = 1 if it did). The output of the model X is the sum of the X j and is itself a RV that can be well approximated either by a Poisson distribution or a normal distribution depending on the value of its mean (Bruce et al. 1999a).
The probability of obtaining m spikes (m between 0 and N f) during the window W i is given by:
$$ P_X(X = m) = P_X(m) = \begin{cases} e^{-\mu} \dfrac{\mu^m}{m!} & \text{if } \mu \leqslant 15 \\[2mm] \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(m - \mu)^2}{2\sigma^2}} & \text{if } \mu > 15 \end{cases} $$
Here, μ and σ are the mean and standard deviation of X.
$$ \mu = N_{\text{f}}\, P^{W_i}_{\text{firing}} $$
$$ \sigma^2 = \lambda N_{\text{f}}\, P^{W_i}_{\text{int}} \left(1 - P^{W_i}_{\text{int}}\right) + (1 - \lambda) N_{\text{f}}\, P^{W_i}_{\text{res}} \left(1 - P^{W_i}_{\text{res}}\right) $$
We can then determine the probability of correct detection Pr of the signal in a two-interval forced-choice task in a way identical to Bruce et al. (1999b). Consider two RVs X[I Stim] and X[0] that describe the number of discharges during one of the temporal integration windows in response to a stimulus of amplitude I Stim and 0, respectively. The probability Pr of choosing correctly the signal of amplitude I Stim is equal to the probability that more spikes are initiated in response to I Stim plus the probability of making a correct guess.
$$ \Pr\left[I_{\text{Stim}}\right] = \sum_{m=0}^{N_{\text{f}}} \left( P_{X[0]}(m) \sum_{l=m+1}^{N_{\text{f}}} P_{X[I_{\text{Stim}}]}(l) \right) + \frac{1}{2} \sum_{m=0}^{N_{\text{f}}} P_{X[0]}(m)\, P_{X[I_{\text{Stim}}]}(m) $$
By increasing the amplitude of the stimulus I Stim, we obtain a psychometric function that rises from 50% (chance level) to 100%. Behavioral thresholds are often measured using a two-down, one-up procedure, which converges toward the 70.71% correct level (Levitt 1971), and we will always use this criterion for the stochastic threshold predictions presented in the following sections. Practically, the amplitude needed to reach threshold is adaptively tracked. The algorithm stops when the probability Pr reaches 70.71% with an error less than $10^{-5}$.
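The detection stage (Eqs. 5–8) can be sketched as follows. For brevity, the normal branch below uses a single binomial variance rather than the mixed two-process variance of Eq. 7; that simplification is ours:

```python
import numpy as np
from scipy.stats import norm, poisson

def prob_correct_2ifc(p_signal, p_noise, N_f=10_000):
    """Probability of correct detection in a 2IFC task (Eq. 8), using the
    Poisson/normal approximations of Eq. (5) for the spike-count pmfs."""
    m = np.arange(N_f + 1)

    def pmf(p):
        mu = N_f * p
        if mu <= 15:
            return poisson.pmf(m, mu)
        sigma = np.sqrt(max(N_f * p * (1.0 - p), 1e-12))
        return norm.pdf(m, mu, sigma)

    P1, P0 = pmf(p_signal), pmf(p_noise)
    suffix = np.cumsum(P1[::-1])[::-1]             # suffix[m] = P(X1 >= m)
    tail = np.concatenate((suffix[1:], [0.0]))     # tail[m]   = P(X1 > m)
    return float(np.sum(P0 * tail) + 0.5 * np.sum(P0 * P1))
```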
Most comfortable level (MCL) is assumed to correspond to a certain number of spikes elicited during the central integration window. Several spike counts will be studied in the "Results" section. As for threshold estimation, the amplitude of the stimulus that produces a desired number of spikes is adaptively tracked.
Parameters fitting
Absolute thresholds and MCLs can vary greatly among CI listeners. These differences may partly relate to neural survival, electrode placement, and geometry. The present model is designed and will only be used to make relative predictions, i.e., comparison of thresholds and MCLs in decibels between several stimuli. As described in "Appendix 1," the transfer function of the integrator can be expressed as a function of a capacity C 0 and a time constant τ 0. Similarly, the transfer function of the resonator can be expressed as a function of a capacity C 1, a time constant τ 1, and two dimensionless parameters α and β. We define δ as the ratio of the two capacities $\delta = C_1 / C_0$. The transfer function of the integrator can now be expressed as a function of C 1, δ, and τ 0. Because the two subthreshold processes are linear, the threshold or MCL difference in decibels between two arbitrary stimuli is a function of τ 0, τ 1, α, β, and δ and does not depend on the value of C 1 or V thr. The two parameters V thr and C 1 can be merged into a single variable (the product C 1 × V thr), which defines an absolute reference. Practically, the product C 1 × V thr is set to 1 to perform the computation. Then, for each set of data, the model predictions are adjusted (by vertical translation) to match one of the data points.
Deterministic model
The deterministic model uses five parameters (three from the resonator process, two from the integrator process). The three parameters τ 1, α, and β of the resonator process are determined by assuming that:
There is a frequency resonance at 80 Hz (this is the approximate value for which a minimum threshold is observed in some subjects subjected to sinusoidal stimulation; Pfingst 1988).
The amplitude of the complex impedance of the resonator is 1 dB larger at 100 Hz than at 50 Hz and 14 dB larger at 100 Hz than at 200 Hz (these are the mean threshold differences determined by Miller et al. (1999a) using monopolar sinusoidal stimulation).
Under these conditions, the model exhibits subthreshold oscillations after a step-current input. The frequency of these damped oscillations is 81.5 Hz (cf. Richardson et al. 2003 for calculation).
The two parameters τ 0 and δ of the integrator unit are adjusted so that:
The slope of the strength duration function (threshold vs phase duration function for single BP pulses) of the integrator has a mean decrease of 3.6 dB per doubling of phase duration from 12.5 to 400 μs (following Moon et al. 1993; cf. Fig. 2a).
The strength-duration functions of the two processes are equal at 500 μs. This is the approximate value at which a change of slope is observed in psychophysical data (Shannon 1985; Moon et al. 1993). This change of slope is assumed to mark the point where the two processes contribute equally to loudness perception. For phase durations below 500 μs the integrator process dominates, and for phase durations above 500 μs the resonator process dominates.
(a) Deterministic thresholds of the integrator and resonator processes as a function of phase duration. (b) Strength duration functions for different relative spread (RS) values for the stochastic model. The deterministic case corresponds to V noise = 0.
Under these conditions, the slope of the strength duration function of the two processes combined has a mean decrease of 5.6 dB per doubling of phase duration from 500 to 8000 μs (similar to the mean value of 5.7 dB obtained by Moon et al. for durations greater than 500 μs; cf. Fig. 2a). The numerical values of these parameters are provided in Table 1. In addition, a sensitivity analysis of these same parameters is given in "Appendix 2."
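The integrator part of these slopes can be reproduced with a few lines of code. The sketch below computes deterministic thresholds as the inverse peak response of a leaky integrator to single biphasic pulses, using τ 0 = 95 μs (the value quoted later in the Discussion); the discrete filter and sampling step are assumptions, and the printed mean slope comes out close to the 3.6 dB per doubling quoted above.

```python
import numpy as np

fs = 1e7                  # 0.1-us time step, fine enough for a 12.5-us phase
tau0 = 95e-6              # integrator time constant quoted in the Discussion

def integrator_peak(pw_s):
    """Peak of the leaky-integrator response to a unit biphasic pulse."""
    n = int(round(pw_s * fs))
    stim = np.r_[np.ones(n), -np.ones(n), np.zeros(10 * n)]
    v, peak = 0.0, 0.0
    for x in stim:        # forward Euler for dv/dt = (x - v) / tau0
        v += (x - v) / (fs * tau0)
        peak = max(peak, abs(v))
    return peak

pws = np.array([12.5, 25, 50, 100, 200, 400]) * 1e-6
thr = np.array([1.0 / integrator_peak(pw) for pw in pws])   # threshold ~ 1/peak
slopes_db = 20 * np.log10(thr[1:] / thr[:-1])               # dB per doubling
print(f"mean slope: {slopes_db.mean():.1f} dB per doubling of phase duration")
```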
Table 1. Numerical values of the deterministic and stochastic model parameters (recoverable entries: τ 1 = 1.04 ms; a dimensionless parameter value of −0.746; the central window duration).
Stochastic model
The stochastic model uses six parameters (the same five parameters as the deterministic model plus the RS value). Figure 2b shows the influence of RS on the strength-duration function for a single BP pulse, as predicted by the stochastic model, which combines both processes. As RS is increased, the mean slopes become steeper than in the deterministic case. Consider the case RS = 0.18. For durations shorter than 500 μs, the integrator process dominates and the mean threshold decrease is less than 6 dB per doubling of phase duration. For durations longer than 500 μs, the resonator process dominates and the threshold decrease is more than 6 dB per doubling of phase duration. Although these slopes are steeper than those obtained by Moon et al. (1993) for single pulses presented in bipolar mode, we will see in the "Results" section that they are still consistent with data obtained by Shannon (1989) in monopolar mode using 10-pps BP stimuli. In the following sections, the predictions of the stochastic model are always obtained using a fixed RS of 0.18. N f is assumed to be 10,000, the same number used in the model of Bruce et al. (1999b). To determine MCL, we calculate the current needed to evoke 100 or 1,000 spikes during at least one integration window, which is equivalent to finding the current that leads to a maximal P firing of 0.01 or 0.1, respectively. We use these two spike counts because they give reasonable dynamic range values (the difference between MCL and threshold) compared to those commonly observed in CI stimulation.
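A minimal numerical sketch of the MCL procedure follows. It assumes, as in the text, that each of the N f = 10,000 fibers fires independently according to a Gaussian threshold-crossing rule with RS = 0.18; the mapping from current to peak membrane voltage is a placeholder, and the spike counts of 100 and 1,000 translate into maximal single-fiber firing probabilities of 0.01 and 0.1.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

N_F, RS, V_THR = 10_000, 0.18, 1.0   # fibers, relative spread, C1*Vthr = 1

def p_firing(v_peak):
    """Single-fiber firing probability for a deterministic peak voltage,
    assuming Gaussian membrane noise with sigma = RS * V_thr."""
    return norm.cdf((v_peak - V_THR) / (RS * V_THR))

def v_peak(current, gain=0.8):
    """Placeholder linear mapping from stimulus current to the peak
    voltage within the 20-ms window (the paper derives it from the filters)."""
    return gain * current

def mcl_current(target_spikes):
    """Current whose maximal firing probability evokes `target_spikes`
    expected spikes among the N_f fibers, i.e. P_firing = target / N_f."""
    p_target = target_spikes / N_F       # 0.01 for 100 spikes, 0.1 for 1,000
    return brentq(lambda i: p_firing(v_peak(i)) - p_target, 0.0, 10.0)

for spikes in (100, 1_000):
    print(f"MCL criterion {spikes} spikes -> current {mcl_current(spikes):.3f} (a.u.)")
```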
Stimuli
Model simulations were performed for different stimulus waveforms. The results were compared to the experimental data obtained with CI subjects described in several reports (Shannon 1985; Miller et al. 1997; McKay and Henshall 2003; Carlyon et al. 2005; Macherey et al. 2006; van Wieringen et al. 2006). An overview of the pulsatile stimuli is provided in Figure 3. Predictions of thresholds and/or MCLs were made for the following stimuli (a construction sketch for a few of these waveforms follows the list):
BP pulses with an IPG of zero,
Alternating biphasic (ALT-BP) pulses in which the leading polarity alternates from pulse to pulse,
Pseudomonophasic (PS) pulses, which consist of a short phase immediately followed by an opposite-polarity phase that is lower in amplitude and eight times longer than the first,
Alternating pseudomonophasic (ALT-PS) pulses,
Delayed pseudomonophasic (DPS) pulses, which are identical to PS except that the long/low phase is delayed to be midway between two subsequent pulses,
Alternating delayed pseudomonophasic (ALT-DPS) pulses,
BP pulses with an IPG (BP + IPG) longer than in (a),
An alternating polarity version of BP + IPG (ALT-BP + IPG),
Alternating monophasic (ALT-M) pulses, which are identical to BP pulses except that, again, the second phase is delayed to be midway between two subsequent pulses, and
BP pulses with an IPG where two subsequent phases have the same polarity (ALT-BP-SAME + IPG).
Overview of stimulus waveforms used for model predictions. PW denotes the phase duration of the pulses.
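As a rough illustration of how such waveforms can be generated as model inputs, the sketch below builds three of the Figure 3 shapes on a common time grid. The 1/8 amplitude of the PS long phase and all timing details are assumptions (consistent with charge balance), not values read from the figure, and the pulse-to-pulse alternation of leading polarity is omitted for brevity.

```python
import numpy as np

def pulse_train(shape, pw_s, rate_pps, dur_s=0.02, fs=1e6):
    """Build a few of the Fig. 3 waveforms by tiling one period.
    Sketch only: amplitudes are normalized and the ALT-* polarity
    alternation across pulses is not implemented."""
    period = int(fs / rate_pps)
    n = int(round(pw_s * fs))
    p = np.zeros(period)
    if shape == "BP":                    # two equal opposite phases, no IPG
        p[:n], p[n:2 * n] = -1.0, 1.0
    elif shape == "PS":                  # short/high phase then long/low phase
        p[:n] = -1.0
        p[n:9 * n] = 1.0 / 8.0           # 8x longer at 1/8 amplitude (assumed)
    elif shape == "ALT_M":               # second phase midway to the next pulse
        p[:n] = -1.0
        mid = period // 2
        p[mid:mid + n] = 1.0
    else:
        raise ValueError(shape)
    return np.tile(p, int(dur_s * fs) // period)

bp = pulse_train("BP", 97e-6, 198)       # 97-us phases at 198 pps
alt_m = pulse_train("ALT_M", 97e-6, 99)  # 99 pps, as in Fig. 8
```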
Validity of the linearity assumption
As already pointed out in the "Introduction," linearized equations of conductance-based models are known to provide a good approximation of the neural response in the subthreshold regime (Mauro et al. 1970). However, as the transmembrane potential approaches threshold, nonlinearities become more prominent, thereby questioning the validity of the linearity assumption. The construction of the model presented in the "Methods" section is based on the assumption that linearized equations of conductance-based models can still provide a good approximation of the original equations at threshold level. We want to test this hypothesis by comparing the threshold predictions of the Hodgkin and Huxley (HH) model of the giant squid axon to its linearized version. The HH model is chosen because its linear behavior is well known (Mauro et al. 1970; Koch 1984) and because it shows a subthreshold resonance.
The predictions for the HH model are made using the softcell package software (Weiss 2000). We use the original parameters of HH at a temperature of 6.3°C. Action potentials are detected using the method described in Phan et al. (1994), which tracks the membrane model's ionic gating events. The stimuli are 100-ms sinusoids and the stimulus frequency is varied from 10 Hz to 4 kHz. For each frequency, the amplitude is raised in steps of 0.2 dB. Threshold is assumed to be crossed when at least one action potential is detected. We compare these predictions to those obtained from the equations of the linearized squid axon membrane (cf. Fig. 19b in Mauro et al. 1970). For the linear model, threshold is assumed to be inversely proportional to the maximal amplitude of the model's response to a unitary input.
The predictions of the two models are illustrated in Figure 4. Both functions are U-shaped with minima around 60 Hz for the HH model (consistent with a previous study by French 1984) and 67 Hz for the linearized model (consistent with Koch 1984). These minima correspond to the resonance frequency of the HH model, which is a type II model. Although the slopes of the linear model predictions are shallower than those of the HH model, the patterns remain comparable and the linear model appears to provide a good estimate of the threshold trend near the resonance frequency. In the following paragraphs, the model described in the "Methods" section is used to make predictions of psychophysical results obtained in CI stimulation.
Threshold predictions of the HH model of the squid axon and of its linearized version at 6.3°C.
Sinusoidal stimulation
Threshold vs frequency functions were measured in CI users for sinusoidal stimuli (Shannon 1983; Shannon 1985; Pfingst 1988). Thresholds are typically constant or slightly decrease with increases in frequency up to about 100 Hz. As already pointed out in the "Introduction," some subjects show a threshold minimum between 70 and 100 Hz. Thresholds then increase at a rate of +15 dB per doubling of frequency from 100 to 250 Hz and at a rate of +3 dB per doubling of frequency for frequencies higher than 300 Hz. Figure 5 shows a summary plot of data (from Pfingst 1988) together with the predictions of the stochastic and deterministic models. As expected, the model predictions show a minimum at 80 Hz, the resonance frequency. For frequencies lower than 250 Hz, the resonator process dominates. For higher frequencies, both processes contribute to threshold (because threshold for the resonator process increases by +6 dB per doubling of frequency and threshold for the integrator process remains constant, the overall slope is approximately +3 dB per doubling of frequency). For frequencies higher than 1,000 Hz, the slope starts to be steeper as it gets closer to the cut-off frequency (1,700 Hz) of the integrator process.
Threshold for sinusoidal stimulation as a function of frequency: summary of behavioral data (replotted from Pfingst 1988) and deterministic and stochastic model predictions. The models' references are arbitrarily chosen.
Symmetric biphasic pulses
Shannon (1985) measured thresholds for BP pulse-train stimuli (condition Fig. 3a) and found that at short phase durations (<500 μs), thresholds were constant for rates up to 100 pps and then decreased by approximately 3 dB per octave. For long phase durations (>500 μs), thresholds first decreased with increases in rate up to about 100 pps and then increased, leading to nonmonotonic threshold vs rate functions. Similar nonmonotonic functions were obtained by Pfingst et al. (1996) for phase durations of 1,000 and 2,000 μs. The results of a typical subject (subject EHT from Shannon 1985) are shown in Figure 6a, together with the predictions of the stochastic (Fig. 6b) and deterministic (Fig. 6c) models. The deterministic predictions (for the two processes combined) show nonmonotonicities for relatively long phase durations (>500 μs). The minimum is reached when the stimulus coincides with the subthreshold oscillations of the membrane. This is demonstrated in Figure 7, where the output voltages of the resonator filter are shown for a 2,000-μs phase duration BP stimulus at three different rates. At 50 pps (Fig. 7a), the onset of the second pulse occurs when the transmembrane potential is close to the resting potential, so the amplitude of the response to the second pulse is similar to that of the response to the first. At 100 pps (Fig. 7b), the second pulse coincides with a time when the membrane is already depolarized, so the response to the second pulse reaches a greater amplitude than the first. The opposite phenomenon occurs at 200 pps (Fig. 7c), where the onset of the second pulse coincides with a hyperpolarization of the membrane. The presence of noise amplifies the nonmonotonicities, as shown by the predictions of the stochastic model (Fig. 6b). Also, the central integration window induces a threshold drop at high rates: as we used a 20-ms window, threshold starts to drop at about 50 pps, because more and more pulses fall into the window and increase the probability of threshold crossing. The predictions of the stochastic model provide a good match to the threshold data. In the same study, Shannon (1985) showed that the subjects' dynamic range increased with phase duration (ranging from 5–10 dB for a 100-μs phase to 30–35 dB for an 8,000-μs phase), mainly because of a slow growth of loudness just above threshold. The stochastic model predicts this trend but underestimates the size of the increase: the predicted dynamic range increases from 5.9 to 12.4 dB for an MCL criterion of 100 spikes and from 8.4 to 15.5 dB for a criterion of 1,000 spikes.
Summary of thresholds for biphasic stimulation as a function of phase duration and pulse rate. (a) Subject EHT (replotted from Shannon 1985). (b) Stochastic model predictions. (c) Deterministic model predictions. The model reference is chosen to match the BP threshold data at 100-μs phase duration and 10-pps rate.
Output of the resonator process (V res) for a 2,000-μs phase duration BP pulse-train stimulus at three different rates. For illustration purposes, C 1 is set to 1 μF.
Asymmetric biphasic pulses
In a previous study (Macherey et al. 2006), we have measured thresholds and MCLs for a variety of pulse shapes, including asymmetric stimuli in bipolar and monopolar mode. We showed thresholds to decrease by 0 to 3 dB when using 100-pps PS stimuli (condition Fig. 3c) compared to a "standard" BP stimulus. A much larger decrease was found using stimuli with a relatively long IPG such as DPS (Fig. 3e) or ALT-M (Fig. 3i).
Mean thresholds and MCLs (Macherey et al. 2006) for BP, ALT-PS (Fig. 3d), and ALT-M stimuli are illustrated together with the stochastic model predictions in Figure 8a. The phase duration was 97 μs for the three stimuli, the rate was 198 pps for BP and ALT-PS and 99 pps for ALT-M, and the electrode configuration was bipolar. The stochastic model accounts for the decrease in threshold and MCL. The mean number of discharges as a function of stimulus level is shown in Figure 8b for the three pulse shapes. ALT-M gives the lowest threshold because of the domination of the resonator process. This is shown more clearly in Figure 9, where the stimulus waveforms and the integrator and resonator outputs of the model are illustrated. The amplitudes of the integrator responses are similar for the three pulse shapes, unlike those of the resonator responses. For BP, the amplitude of the resonator output remains low because the second phase counteracts the effect of the first. For ALT-M, the first phase induces a depolarization of the membrane; during the IPG the transmembrane potential starts to oscillate, and the second phase of the pulse then hyperpolarizes the membrane, which is already hyperpolarized at that time. The voltage oscillations are therefore amplified, because the membrane potential is driven at a frequency more or less coinciding with that of the subthreshold oscillations. Additional predictions were made for other pulse shapes published in Macherey et al. (2006); Figure 10a and b summarize these results, showing the mean data together with the model predictions. The stochastic model can account for the general trend of the data at both short and long phase durations and at low and high rates.
(a) Comparison of mean data from Macherey et al. (2006) and stochastic model predictions. The model reference is chosen to match the mean BP threshold data. (b) Mean number of spikes elicited during the analysis window and standard deviations as a function of stimulation level for the three waveforms BP, ALT-PS, and ALT-M. The phase duration is 97 μs. The rate is 198 pps for BP and ALT-PS and 99 pps for ALT-M.
Stimulation current (top row), output of the integrator (middle row), and output of the resonator (bottom row) for the BP, ALT-PS, and ALT-M stimuli (same parameters as in Fig. 8). For illustration purposes, C 1 is set to 1 μF.
FIG. 10. Mean data from Macherey et al. (2006) and stochastic model predictions. (a) 97-μs phase, 99-pps pulses. (b) 22-μs phase, 813-pps pulses. The ALT-DPS "short/high only" and "long/low only" conditions correspond to the ALT-DPS stimulus with the long/low phases and the short/high phases removed, respectively. The model reference is the same for subpanels (a) and (b) and is chosen to match the mean BP (99-pps rate, 97-μs phase) threshold data.
Biphasic pulses with an inter-phase gap
The effects of IPG on thresholds and MCLs of CI users were presented in two previous publications (McKay and Henshall 2003; Carlyon et al. 2005).
First, McKay and Henshall found thresholds and MCLs to decrease with increases in IPG up to 100 μs, the longest value tested (condition Fig. 3g). This effect was greater at threshold than at MCL, greater at the shorter phase duration (26 μs vs 52 μs), and not significantly different for the two rate values tested (1,000 and 4,000 pps). Their results, together with the stochastic model predictions, are illustrated in Figure 11. In addition to the two spike counts used previously (100 and 1,000) to determine MCL, a third count of 10 spikes is studied. The stochastic model can account for the different observations at threshold but underestimates the current difference at MCL. In addition, an increase in the spike count for MCL estimation leads to a reduced effect of IPG on the predictions.
Effects of IPG on threshold and MCL of BP stimuli. For three different conditions of phase duration (PW in microseconds) and rate (in pulses per second), the bars illustrate the mean current difference (in decibels) needed to maintain a constant loudness when the IPG varies from 8.4 to 45 μs and from 45 to 100 μs. Mean data (bars) and standard deviations from McKay and Henshall (2003) and stochastic model predictions of threshold (asterisks) and MCLs (using three different spike counts).
Second, Carlyon et al. (2005) showed the effects of IPG to extend over several milliseconds using 100-pps stimuli. They also demonstrated that this effect depended on whether the polarity of the two phases of the pulse was the same (condition Fig. 3h) or opposite (Fig. 3j). When they were the same, thresholds slightly increased with IPG whereas in the opposite case, thresholds continued to drop as IPG increased up to 4.9 ms. The mean results and standard deviations of their four subjects are illustrated together with the stochastic model predictions for these two conditions (Fig. 12). The trends are well predicted by the model, which leads to considerable threshold reductions when two subsequent phases of opposite polarity are separated by a relatively long IPG (5 ms).
Effects of IPG on the ALT-BP + IPG and ALT-BP-SAME + IPG conditions: mean data of four subjects from Carlyon et al. (2005) and stochastic model predictions.
We have also studied the effects of rate on ALT-M and BP thresholds and MCLs in cochlear implantees (van Wieringen et al. 2006). For BP stimuli, thresholds decreased with increases in rate, whereas for ALT-M, thresholds first increased from 100 to 250 pps and then decreased. The predictions of the model, together with the mean data of two subjects, are illustrated for BP and ALT-M in Figure 13. The bell shape of the threshold function for the ALT-M stimulus (Fig. 13a) is also observed in the model results (Fig. 13b, c). This is because for BP, the integrator process dominates over the whole frequency range, whereas for ALT-M at low rates, the frequency of alternation approaches the subthreshold oscillation frequency of the resonator process, so that thresholds are very low. As the rate increases, the resonator process is no longer driven at this preferred frequency and its threshold increases. At higher rates, threshold decreases again because of the central integration effect. MCLs show the same pattern but with smoother variations. Finally, the model also accounts for the increase in dynamic range with increases in rate, as observed in another study (Kreft et al. 2004).
Effects of rate on thresholds (open symbols) and MCLs (filled symbols) for BP and ALT-M stimuli with a 97-μs phase duration. (a) Mean results of two subjects (from van Wieringen et al. 2006). (b) Stochastic model predictions with a spike count of 100 used for MCL. (c) Stochastic model predictions with a spike count of 1,000 used for MCL. To allow comparison with the original data illustrated in van Wieringen et al. (2006), ALT-M is plotted as a function of twice the pulse rate. The model reference is chosen to match the BP threshold data at 200 pps.
Resonance in neurons
Whereas leaky integration of charge is commonly accepted as the main process underlying biphasic threshold levels at short phase durations (Moon et al. 1993), the nonmonotonicities of the threshold vs rate function observed for long-phase duration pulses have remained difficult to explain. In the present study, we have shown that a resonant process can account for these nonmonotonicities and for those observed with sinusoidal stimulation in some CI subjects. Moreover, it can account for the decrease in threshold with increases in IPG and for the increase in threshold with increasing frequency in ALT-M stimulation. Clopton et al. (1983) already hypothesized that nonmonotonic functions observed in animal models may result from a type of resonance in auditory neurons similar to what is observed in experimental and modeling studies of the squid axon (Guttman and Hachmeister 1971). The HH model is a type II model (Izhikevich 2001) and, at the temperature of the squid, does exhibit a frequency resonance around 60 Hz (Fig. 4). However, using the appropriate parameter corrections to account for the higher temperature and higher channel density of the human AN (Rattay et al. 2001), the time constants are smaller and the resonance frequency much higher. Consequently, the voltage-gated ion channels of the original HH model are not sufficient to explain the nonmonotonic trends observed in the human AN. The ion channel responsible for the nonmonotonicities exhibited by the present model has a relatively slow relaxation time constant of about 1 ms and induces subthreshold resonance and damped oscillations. Such an ion channel is not included in previously published conductance-based models of the AN, and further investigations are needed to determine whether its existence is realistic. A large range of ion channels can exhibit resonant behavior (Hutcheon and Yarom 2000; Richardson et al. 2003). Two possible candidates are (1) the hyperpolarization-activated current (I h) and (2) a combination of slow potassium and persistent sodium channels. First, the I h channel is known to have long time constants and has already been found in mammalian spiral ganglion cells (Chen 1997; Mo and Davis 1997). Second, McIntyre et al. (2002) included slow potassium and persistent sodium channels in a model of mammalian motor nerve fibers and showed them to be responsible for the depolarizing afterpotentials, suggesting that they may play a significant role in mammals. Also, Longnion and Rubinstein (2006) recently implemented slow potassium channels in their stochastic AN model. The identification of a potential resonant ion channel in the AN is, however, beyond the scope of this study, and the present model should only be considered a tentative, physiologically based explanation of the nonmonotonic threshold functions obtained with CI users and of the effects of pulse shape on thresholds and MCLs.
Other hypotheses have already been proposed in previous articles to interpret nonmonotonic trends observed in CI sinusoidal and pulsatile stimulation (Shannon 1983; Shannon 1985; Pfingst et al. 1996; Miller et al. 1997) and they will be discussed in the following paragraphs. We will refer to the "descending arm" of the threshold function for the decrease in threshold with increases in frequency up to about 70–100 Hz in biphasic or sinusoidal stimulation and to the "ascending arm" of the threshold function for the increase in threshold with increases in frequency above 70–100 Hz in biphasic, sinusoidal, or ALT-M stimulation.
Peripheral and central processes
The first hypothesis that has to be discussed is whether the descending and ascending arms of the threshold functions are the result of a specific process at the level of the AN. Nonmonotonic threshold vs frequency functions were also obtained in central auditory neurons of mammals (Clopton et al. 1983) and avians (Schwarz et al. 1993; Strohmann et al. 1995). So the nonmonotonicities observed in psychophysical experiments may result from a frequency selectivity of neurons central to the AN and not, as assumed in the present study, from a process at the AN site.
The descending arm of the threshold function may relate to temporal integration at a location central to the AN: if the rate increases, more pulses fall within a certain central integration window (Middlebrooks 2004). It is not clear, however, why this phenomenon would depend on the phase duration, showing steeper slopes for long-phase duration BP pulses. The present model provides an explanation for this trend. Part of the decrease comes from the central integration window, but the slope of the decrease also depends on the phase duration because, as the rate increases, it gets closer and closer to the subthreshold oscillation frequency of the resonator process. The effect is therefore larger when the resonator process dominates, i.e., at phase durations greater than 500 μs for BP stimuli.
Two observations suggest that the mechanism responsible for the ascending arm of the threshold functions is located at the AN site. First, Carlyon et al. (2005) showed that the decrease in threshold with increases in IPG up to 4.9 ms occurred only when the IPG was varied between two phases of opposite polarity and not when they were of the same polarity (cf. Fig. 12). They interpreted these findings as evidence of a mechanism at the level of the cochlea/AN rather than a release of refractoriness at a more central level. Second, Zeng et al. (2000) found that thresholds for sinusoidal stimuli could be lowered if a subthreshold noise was added to the stimulus. They interpreted this result as a demonstration of stochastic resonance in the AN and showed that the threshold shift in the presence of noise depended on the sinusoidal frequency, being maximal around 100 Hz (the lowest frequency tested) and decreasing with increasing frequency. In the same study, Zeng et al. performed the same experiment with brainstem implantees (where the AN is bypassed) and, interestingly, did not find the same frequency dependence. This suggests that the frequency dependence arises at least partly from a process at the AN site and is not purely central. What they interpreted as a stochastic resonance effect may in fact relate to the enhancement of the response of resonator neurons stimulated close to their resonant frequency. This alternative explanation is supported by a report of Richardson et al. (2003), who studied the response of a simulated neuron with subthreshold resonance to a sinusoidal stimulus in the background of a white-noise source. They showed that when the noise was sufficiently strong to cause the neuron to fire irregularly, input frequencies close to the subthreshold resonance frequency were the most amplified.
Additional potential mechanisms
Potential mechanisms responsible for the ascending arm in biphasic stimulation with long-phase durations were reviewed in two different studies (Pfingst et al. 1996; Miller et al. 1997). They include refractoriness, accommodation, and residual potential effects.
Refractoriness is a plausible mechanism because, as the time between two subsequent pulses is increased, the neurons that have fired after the first pulse become more likely to fire again as they progressively come out of their refractory period. As refractory effects are believed to occur at pulse separations up to 6 ms (Miller et al. 1997), this may partly explain the ascending arm of the threshold function for BP stimuli and also the decrease in threshold with increases in IPG. If this were the only explanation, however, the effect would be expected to be larger, or at least equal, at high levels of stimulation, where more fibers are excited, than at threshold. However, in a recent study using ALT-M pulses (van Wieringen et al. 2006; cf. Fig. 13), we found the opposite trend: the slope of the ascending arm was steeper at threshold than at MCL, and some subjects did not even show any ascending arm at MCL. So refractoriness effects are unlikely to be fully responsible for the ascending arm of the threshold function. Also, Miller et al. (1997) obtained different threshold vs pulse-separation functions in nonhuman primates depending on whether the leading polarity of subsequent pulses alternated or not (conditions Fig. 3a, b). First, threshold functions for the BP shape (Fig. 14a, squares) at relatively long phase durations (2 ms) were nonmonotonic, similar to what is found in humans. Second, the thresholds for the ALT-BP condition (circles) were similar to the thresholds for the BP condition at long pulse separations (low rate) but were lower at shorter separations and did not show the nonmonotonic pattern. They suggested that this pattern may involve refractory effects in a polarity-segregated neuron array. Our model provides an alternative explanation for this trend (Fig. 14b). Although the slopes are steeper for the model predictions, the relative trends remain similar; slopes of behavioral threshold functions are typically steeper for humans than for other species (Miller et al. 1999a, b).
(a) Psychophysical thresholds of a macaque monkey (from Miller et al. 1997) for the BP and ALT-BP stimuli. Thresholds were obtained for 20-pulses stimuli, thereby leading to a covariation of stimulus duration with pulse separation. (b) Stochastic model predictions. The model reference is chosen to match the mean BP threshold data at a 0.2-ms pulse separation.
Accommodation effects can occur after long subthreshold depolarizations because of the inactivation of sodium channels. Pfingst et al. (1996) suggested that this phenomenon may explain why the ascending arm in biphasic stimulation is observed at long-phase durations and not at shorter ones. However, the threshold increase with increasing rate observed in ALT-M stimulation (van Wieringen et al. 2006) was still evident at short-phase durations (25 μs), suggesting that long-phase durations are not necessary to produce the ascending arm.
Residual potential effects were investigated with CI users in two different studies using BP pulses with phase durations shorter than 50 μs (Eddington et al. 1994; de Balthasar et al. 2003). de Balthasar et al. (2003) measured the threshold of a BP pulse-train probe that was interleaved with a subthreshold BP pulse-train masker presented on an adjacent channel. When the leading polarities of the masker and probe were opposite, the probe threshold was lower than its unmasked threshold for delays between masker offset and probe onset shorter than 150 μs. They observed the opposite trend when the leading polarities were identical. Eddington et al. (1994) found similar results with single pulses having opposite leading polarities for delays up to approximately 400 μs. However, when the leading polarity was the same, the masked threshold remained about 1 dB lower than the unmasked threshold, and this difference persisted for delays up to 800 μs (the largest value tested). Although this last observation is not consistent with a residual potential summation, the other trends suggest that the neural membrane potential needs a finite time after the offset of a BP pulse to return to its resting value. The associated time constant (between 31 and 40 μs as calculated by de Balthasar et al.) is of the same order of magnitude as the time constant of the integrator process of our model (95 μs). At relatively short phase durations, as used in those two studies, our model predicts that the integrator process dominates, leading to fast recovery to rest. However, the model also predicts that long-phase duration (>500 μs) or long-IPG BP pulses would produce extended residual potential effects because the (slower) resonator process would dominate.
Comparison to other phenomenological models and limitations
The construction of the present model was inspired from three previously published phenomenological models (Shannon 1989; Bruce et al. 1999a, b; Carlyon et al. 2005).
As in Shannon's (1989) model, the present model uses dual processes. The resonator can be related to the "compressive" process of Shannon's model and the integrator to its "envelope" process. The main difference is that the subthreshold processes of the present model are linear, whereas Shannon used nonlinear power-law transformations. Moreover, as pointed out in the "Introduction," Shannon's model cannot account for the effects of IPG whereas the present model can. Shannon (1989) suggested that his compressive process may relate to spiral ganglion cell survival. The somas of spiral ganglion cells are larger in humans than in other mammals (Rattay et al. 2001) and may involve longer time constants than those typically observed in single-cell recordings (Shepherd and Javel 1999).
In CI stimulation, the variance in response of the AN to electrical stimuli is believed to be essentially due to membrane noise (Matsuoka et al. 2001). Bruce et al. (1999a, b) developed a simple stochastic model that assumed a perfect integration of charge. Their model can account for the effects of rate and phase duration in biphasic stimulation. However, only the cathodic phase of the pulse is assumed to be effective, which makes it unable to predict the effects of IPG or asymmetric pulses. Bruce et al. (1999b) used a RS value that was dependent on the phase duration of the stimulus. Although no physiological observations contradict this dependence for biphasic stimuli, it is known that the RS does not depend on the pulse duration for monophasic stimulation (Rubinstein 1995). RS was only shown to depend on the interpulse interval of the stimulus (Matsuoka et al. 2001). The present model uses a constant value for RS although this value (0.18) is much larger than what is typically measured in single-unit recordings of the cat (about 0.06; Miller et al. 1999c). This difference may partly come from the fact that every fiber in our model has the same threshold and that the current spread is uniform. Xu and Collins (2005) demonstrated that the effective RS (taking into account the entire neural population) was larger when individual fiber thresholds were uniformly distributed from −5 to +5 dB than when they were constant.
The present model gives threshold predictions similar to those of the model of Carlyon et al. (2005). Our resonator and integrator filters can be viewed as a decomposition of Carlyon's lowpass filter into two separate processes. However, their model cannot predict loudness growth, and predicting MCLs of CI users would probably require the implementation of a second filter, thus multiplying the number of variables.
A 20-ms rectangular central integration window was used in the present model. Other, probably more realistic window shapes, as suggested by studies with normal-hearing listeners (Moore et al. 1988), should be considered in the future. Carlyon et al. (2005) used a Hanning window with a total duration of 20 ms; McKay et al. (2003) and Moore et al. (1996) used a window with exponential decays having an equivalent rectangular duration of 7 ms. Our model cannot account for the decrease in threshold observed at long stimulus durations, which may be due to more central processes and effectively modeled by additional mechanisms such as "multiple looks" (Viemeister and Wakefield 1991; Donaldson et al. 1997). It is, however, interesting to note that Moon et al. (1993) observed that, at 100 pps, the decrease in threshold with increases in stimulus duration depended on the phase duration: they found this decrease to be larger for a 1,536-μs phase than for a 96-μs phase. This may be due to the resonator process, which would, at 100 pps, amplify the amplitude of the response to the long-phase duration (1,536 μs) biphasic pulses but not to the shorter ones. Xu and Collins (2004) implemented a multiple-looks approach in their stochastic AN model and compared its predictions to those of a long-term integration (100-ms duration) approach. They showed the multiple-looks model to predict more trends of psychophysical data than the long-term integration model. They also found that, in the case of a small number of stimulated fibers (N f = 100), the multiple-looks model predicted a nonmonotonic threshold vs rate function for biphasic stimuli.
In a previous psychophysical study, we found MCLs of anodic-first PS stimuli to be higher than cathodic ones in monopolar mode (Macherey et al. 2006). The full-wave rectification used in our model makes it unable to predict such effects of leading polarity. Rattay (1989) studied the effects of electrode geometry on neural activation and showed that neural fibers were more symmetrically excited by bipolar electrodes than by monopolar ones. Therefore, our full-wave rectification hypothesis is probably a better approximation for bipolar mode than for monopolar mode, where it overestimates the contribution of one polarity over the other. A model of spatial excitation using, e.g., the activating function of Rattay (1989) may help to explore polarity effects and their dependence on the electrode-coupling mode.
Another limitation of the model lies in its inability to predict refractory effects. Neural refractoriness was not modeled and may not be the main determinant of loudness perception in single-channel stimulation. One reason could be that CIs operate at low levels of discharge probability, as previously suggested by Bruce et al. (1999a) and assumed in the present study. Another reason may be that, for many of the manipulations used here, refractory effects are more or less equivalent across conditions and do not strongly affect the ability of the model to account for the data.
We would like to thank Serge Meimon for helpful discussions, Christopher Long for comments on a previous version of the manuscript, and two anonymous reviewers for their very constructive criticisms and suggestions. This work was supported by the Fonds voor Wetenschappelijk Onderzoek (FWO)-Vlaanderen (FWO G.0233.01) and the Research Council of the Katholieke Universiteit Leuven (OT/03/58).
Linearization of conductance-based models
The method of conductance-based model linearization is well known and has been described in detail in several studies (Mauro et al. 1970; Richardson et al. 2003; Brunel et al. 2003). Consider a single-compartment neuron with n voltage-gated ion channels (Fig. 15a). The transmembrane potential is described by the following equation:
$$ C\frac{\mathrm{d}V}{\mathrm{d}t} = g_{\text{leak}}\left(E_{\text{leak}} - V\right) - \sum_{j=1}^{n} I_{j} + I_{\text{Stim}}(t) $$
Each of the n ionic currents I j is a function of the transmembrane potential V, the conductance \( \bar{g}_j \), the reversal potential E j, and of m activation and/or inactivation variables x k (k = 1, ..., m). Under the assumption of small voltage variations of V, this equation can be linearized around the resting potential of the membrane. This results in the equivalent electrical circuit shown in Figure 15b. G is the sum of all the steady-state conductances; each g k measures the strength of the steady-state current-flow change due to a variation of x k; and each \( \tau_k = L_k g_k \) is the relaxation time constant of the x k variable. The model can be reduced further by comparing the time constants of the different currents and keeping only the significant ones.
Linearization and possible simplifications of a conductance-based model.
In the extreme case, without any phenomenological inductance, the system can be reduced to a simple RC circuit where only the passive properties of the neural membrane are considered (Fig. 15c). The amplitude and the phase of the complex impedance of this filter are given by:
$$ \left\{ \begin{aligned} |Z_{0}| &= \frac{\tau_{0}}{C_{0}}\sqrt{\frac{1}{1+\tau_{0}^{2}\omega^{2}}} \\ \phi_{0} &= \arctan\left(-\frac{1}{\tau_{0}\omega}\right) \end{aligned} \right. \quad \text{with} \quad \tau_{0} = \frac{C_{0}}{G_{0}} $$
In the case of only one ionic current, the equivalent circuit contains a single phenomenological inductance branch (Fig. 15d). The amplitude and the phase of the complex impedance of this filter are given by:
$$ \left\{ \begin{aligned} |Z_{1}| &= \frac{\tau_{1}}{C_{1}}\sqrt{\frac{1+\tau_{1}^{2}\omega^{2}}{\left(\alpha+\beta-\tau_{1}^{2}\omega^{2}\right)^{2}+\tau_{1}^{2}\omega^{2}\left(1+\alpha\right)^{2}}} \\ \phi_{1} &= \arctan\left(\tau_{1}\omega\,\frac{\beta-\left(1+\tau_{1}^{2}\omega^{2}\right)}{\beta+\alpha\left(1+\tau_{1}^{2}\omega^{2}\right)}\right) \end{aligned} \right. \quad \text{with} \quad \left\{ \begin{aligned} \tau_{1} &= L_{1}g_{1} \\ \alpha &= \frac{\tau_{1}G_{1}}{C_{1}} \\ \beta &= \frac{\tau_{1}g_{1}}{C_{1}} \end{aligned} \right. $$
This case corresponds to the generalized integrate-and-fire model with two variables introduced by Richardson et al. (2003). They presented a detailed mathematical analysis of this filter and derived several properties. The values of the dimensionless parameters α and β determine whether the neuron is a type I (integrator) or a type II (resonator) neuron. The relaxation time constant of the ion channel is τ 1. Under certain conditions expounded in Richardson et al. (2003), the model shows a subthreshold resonance and/or damped subthreshold oscillations.
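To make these formulas concrete, the sketch below evaluates |Z 0| and |Z 1| over frequency and locates the resonance peak numerically; in the deterministic model, sinusoidal thresholds are proportional to 1/|Z|. Here τ 1 = 1.04 ms is the fitted value, while α = −0.746 and β = 1.046 are the same assumed values as before.

```python
import numpy as np

tau0, tau1 = 95e-6, 1.04e-3       # integrator / resonator time constants
alpha, beta = -0.746, 1.046       # assumed resonator parameters (see text)
f = np.logspace(0, 4, 2000)       # 1 Hz to 10 kHz
w = 2 * np.pi * f

z0 = tau0 * np.sqrt(1.0 / (1.0 + (tau0 * w) ** 2))            # with C0 = 1
u = (tau1 * w) ** 2
z1 = tau1 * np.sqrt((1.0 + u) /                               # with C1 = 1
                    ((alpha + beta - u) ** 2 + u * (1.0 + alpha) ** 2))

print(f"resonance frequency of |Z1|: {f[np.argmax(z1)]:.1f} Hz")
# Deterministic sinusoidal thresholds (up to a constant): 1/z0 and 1/z1.
```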
Sensitivity analysis of the model parameters
In the "Methods" section, the parameters of the integrator and resonator processes are fitted using threshold data of CI users. We investigate in this section the consequences of a variation of these parameters. As shown in Eqs. 10 and 11, the response of the resonator neuron is entirely defined by four parameters: the capacitance C 1, the time constant τ 1, and two dimensionless parameters α and β. Similarly, the integrator process is entirely defined by two parameters C 0 and τ 0. It was shown that the membrane potential is stable only if α > −1 and α + β > 0 (Richardson et al. 2003) and we perform the sensitivity analysis within those bounds. We study the effects of varying each of the three parameters (τ 1, α, and β) on the threshold predictions of the resonator process for sinusoidal stimulation (deterministic case). For the study of τ 1, we keep α, β, and C 1 fixed and use five different values of τ 1: the value used throughout the paper 1.04 ms, the same value increased by 10%, increased by 20%, decreased by 10%, and decreased by 20%. An identical analysis is performed for α and β. The results are illustrated in Figure 16. Changing τ 1 (Fig. 16a) produces a shift in the resonance frequency but does not change the overall shape of the function. The resonance is shaped by the two dimensionless parameters α and β. A decrease in α (Fig. 16b) induces steeper slopes and a higher quality factor. A change in β (Fig. 16c) produces changes essentially for frequencies lower than the resonance frequency. Increasing β leads to a steeper slope at low frequency whereas decreasing it moves the system closer to its nonresonant mode. Similarly, a variation in τ 0 produces a shift in the cut-off frequency of the integrator process (Fig. 16a). The effect of the ratio of capacitance \( \delta = {C_{1} } \mathord{\left/ {\vphantom {{C_{1} } {C_{0} }}} \right. \kern-\nulldelimiterspace} {C_{0} } \) is trivial as it determines the vertical distance between the resonator and integrator predictions.
Deterministic thresholds of the resonator-alone and integrator-alone processes for sinusoidal stimulation. (a) Effects of a variation of the time constant values of the integrator (τ 0) and resonator (τ 1) processes on their respective thresholds. (b) Effects of a variation of α on the resonator thresholds. (c) Effects of a variation of β on the resonator thresholds. For all parameters, the variations range from −20% to +20% in steps of 10% relative to the values used for predictions (cf. Table 1).
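The parameter sweep of Figure 16 reduces to a small loop over the impedance formula. The sketch below varies each resonator parameter by ±10% and ±20% around the same partly assumed baseline and replots the deterministic sinusoidal threshold curve 1/|Z 1|; all varied values stay within the stability bounds α > −1 and α + β > 0.

```python
import numpy as np
import matplotlib.pyplot as plt

BASE = dict(tau1=1.04e-3, alpha=-0.746, beta=1.046)   # baseline partly assumed

def z1(f, tau1, alpha, beta):
    u = (tau1 * 2 * np.pi * f) ** 2
    return tau1 * np.sqrt((1 + u) / ((alpha + beta - u) ** 2 + u * (1 + alpha) ** 2))

f = np.logspace(1, 3, 500)
for name in ("tau1", "alpha", "beta"):
    for rel in (-0.2, -0.1, 0.0, 0.1, 0.2):           # -20% to +20% in 10% steps
        p = dict(BASE, **{name: BASE[name] * (1 + rel)})
        # all varied values respect alpha > -1 and alpha + beta > 0
        plt.loglog(f, 1.0 / z1(f, **p), lw=0.5, label=f"{name} {rel:+.0%}")
plt.xlabel("frequency (Hz)"); plt.ylabel("relative threshold (a.u.)"); plt.show()
```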
Bruce IC, White MW, Irlicht LS, O'Leary SJ, Dynes S, Javel E, Clark GM. A stochastic model of the electrically stimulated auditory nerve: Single-pulse response. IEEE Trans. Biomed. Eng. 46:617–629, 1999a.
Bruce IC, White MW, Irlicht LS, O'Leary SJ, Clark GM. The effects of stochastic neural activity in a model predicting intensity perception with cochlear implants: Low-rate stimulation. IEEE Trans. Biomed. Eng. 46:1393–1404, 1999b.
Brunel N, Hakim V, Richardson MEJ. Firing-rate resonance in a generalized integrate-and-fire neuron with subthreshold resonance. Phys. Rev. E 67:051916, 2003.
Carlyon RP, van Wieringen A, Deeks JM, Long CJ, Lyzenga J, Wouters J. Effect of inter-phase gap on the sensitivity of cochlear implant users to electrical stimulation. Hear. Res. 205:210–224, 2005.
Cartee LA. Evaluation of a model of the cochlear neural membrane. II: Comparison of model and physiological measures of membrane properties measured in response to intrameatal electrical stimulation. Hear. Res. 146:153–166, 2000.
Chen C. Hyperpolarization-activated current (Ih) in primary auditory neurons. Hear. Res. 110:179–190, 1997.
Clopton BM, Spelman FA, Glass I, Pfingst BE, Miller JM, Lawrence PD, Dean DP. Neural encoding of electrical signals. In: Cochlear Prostheses. New York, Annals of the New York Academy of Sciences, pp. 146–158, 1983.
Colombo J, Parkins CW. A model of electrical excitation of the mammalian auditory-nerve neuron. Hear. Res. 31:287–311, 1987.
de Balthasar C, Boex C, Cosendai G, Valentini G, Sigrist A, Pelizzone M. Channel interactions with high-rate biphasic electrical stimulation in cochlear implant subjects. Hear. Res. 182:77–87, 2003.
Donaldson GS, Viemeister NF, Nelson DA. Psychometric functions and temporal integration in electric hearing. J. Acoust. Soc. Am. 101:3706–3721, 1997.
Eddington DK, Noel VA, Rabinowitz WM, Svirsky MA, Tierney J, Zissman MA. Speech processors for auditory prostheses. Eighth quarterly progress report, NIH contract N01-DC-2-2402, 1994.
Finley CC, Wilson BS, White MW. Models of neural responsiveness to electrical stimulation. In: Miller JM and Spelman FA (eds) Cochlear Implants: Models of the Electrically Stimulated Ear. New York, Springer-Verlag, pp. 55–96, 1990.
Frankenhaeuser B, Huxley AF. The action potential in the myelinated nerve fiber of Xenopus laevis as computed on the basis of voltage clamp data. J. Physiol. 171:302–315, 1964.
French AS. The frequency response function and sinusoidal threshold properties of the Hodgkin–Huxley model of action potential encoding. Biol. Cybern. 49:169–174, 1984.
Frijns JHM, de Snoo SL, ten Kate JH. Spatial selectivity in a rotationally symmetric model of the electrically stimulated cochlea. Hear. Res. 95:33–48, 1996.
Gerstner W, Kistler WM. Formal spiking neuron models. In: Spiking Neuron Models. Cambridge, Cambridge University Press, pp. 93–145, 2002.
Guttman R, Hachmeister L. Effect of calcium, temperature, and polarizing currents upon alternating current excitation of space-clamped squid axons. J. Gen. Physiol. 58:304–321, 1971.
Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117:500–544, 1952.
Hutcheon B, Yarom Y. Resonance, oscillation and the intrinsic frequency preferences of neurons. Trends Neurosci. 23:216–222, 2000.
Izhikevich EM. Resonate-and-fire neurons. Neural Netw. 14:883–894, 2001.
Koch C. Cable theory in neurons with active, linearized membranes. Biol. Cybern. 50:15–33, 1984.
Kreft HA, Donaldson GS, Nelson DA. Effects of pulse rate on threshold and dynamic range in Clarion cochlear-implant users. J. Acoust. Soc. Am. 115:1885–1888, 2004.
Levitt H. Transformed up-down methods in psychoacoustics. J. Acoust. Soc. Am. 49:467–477, 1971.
Longnion JK, Rubinstein JT. A biophysical population model of an auditory nerve: Response to electrically-encoded speech. Abstracts of the 29th Midwinter Research Meeting, Association for Research in Otolaryngology, p. 219, 2006.
Macherey O, van Wieringen A, Carlyon RP, Deeks JM, Wouters J. Asymmetric pulses in cochlear implants: Effects of pulse shape, polarity and rate. J. Assoc. Res. Otolaryngol. 7:253–266, 2006.
Matsuoka AJ, Rubinstein JT, Abbas PJ, Miller CA. The effects of interpulse interval on stochastic properties of electrical stimulation: Models and measurements. IEEE Trans. Biomed. Eng. 48:416–424, 2001.
Mauro A, Conti F, Dodge F, Schor R. Subthreshold behavior and phenomenological impedance of the squid giant axon. J. Gen. Physiol. 55:497–523, 1970.
McIntyre CC, Richardson AG, Grill WM. Modeling the excitability of mammalian nerve fibers: Influence of afterpotentials on the recovery cycle. J. Neurophysiol. 87:995–1006, 2002.
McKay CM, Henshall KR. The perceptual effects of interphase gap duration in cochlear implant stimulation. Hear. Res. 181:94–99, 2003.
McKay CM, Henshall KR, Farrell RJ, McDermott HJ. A practical method of predicting the loudness of complex electrical stimuli. J. Acoust. Soc. Am. 113:2054–2063, 2003.
Middlebrooks JC. Effects of cochlear-implant pulse rate and inter-channel timing on channel interactions and thresholds. J. Acoust. Soc. Am. 116:452–468, 2004.
Miller AL, Morris DJ, Pfingst BE. Interactions between pulse separation and pulse polarity order in cochlear implants. Hear. Res. 109:21–33, 1997.
Miller AL, Smith DW, Pfingst BE. Across-species comparisons of psychophysical detection thresholds for electrical stimulation of the cochlea: I. Sinusoidal stimuli. Hear. Res. 134:89–104, 1999a.
Miller AL, Smith DW, Pfingst BE. Across-species comparisons of psychophysical detection thresholds for electrical stimulation of the cochlea: II. Strength–duration functions for single, biphasic pulses. Hear. Res. 135:47–55, 1999b.
Miller CA, Abbas PJ, Robinson BK, Rubinstein JT, Matsuoka AJ. Electrically evoked single-fiber action potentials from cat: Responses to monopolar, monophasic stimulation. Hear. Res. 130:197–218, 1999c.
Mo Z, Davis RL. Heterogeneous voltage dependence of inward rectifier currents in spiral ganglion neurons. J. Neurophysiol. 78:3019–3027, 1997.
Moon AK, Zwolan TA, Pfingst BE. Effects of phase duration on detection of electrical stimulation of the human cochlea. Hear. Res. 67:166–178, 1993.
Moore BCJ, Glasberg BR, Plack CJ, Biswas AK. The shape of the ear's temporal window. J. Acoust. Soc. Am. 83:1102–1116, 1988.
Moore BCJ, Peters RW, Glasberg BR. Detection of decrements and increments in sinusoids at high overall levels. J. Acoust. Soc. Am. 99:3669–3677, 1996.
Morse RP, Evans EF. The sciatic nerve of the toad Xenopus laevis as a physiological model of the human cochlear nerve. Hear. Res. 182:97–118, 2003.
Pfingst BE. Comparisons of psychophysical and neurophysiological studies of cochlear implants. Hear. Res. 34:243–251, 1988.
Pfingst BE, Holloway LA, Razzaque SA. Effects of pulse separation on detection thresholds for electrical stimulation of the human cochlea. Hear. Res. 98:77–92, 1996.
Phan TT, White MW, Finley CC, Cartee LA. Neural membrane model responses to sinusoidal electrical stimuli. In: Hochmair-Desoyer IJ and Hochmair ES (eds) Advances in Cochlear Implants. Wien, Manz, pp. 342–347, 1994.
Rattay F. Analysis of models for extracellular fiber stimulation. IEEE Trans. Biomed. Eng. 36:676–682, 1989.
Rattay F, Lutter P, Felix H. A model of the electrically excited human cochlear neuron. I. Contribution of neural substructures to the generation and propagation of spikes. Hear. Res. 153:43–63, 2001.
Richardson MJ, Brunel N, Hakim V. From subthreshold to firing-rate resonance. J. Neurophysiol. 89:2538–2554, 2003.
Rubinstein JT. Threshold fluctuations in an N-sodium channel model of the node of Ranvier. Biophys. J. 68:779–785, 1995.
Rubinstein JT, Miller CA, Mino H, Abbas PJ. Analysis of monophasic and biphasic electrical stimulation of nerve. IEEE Trans. Biomed. Eng. 48:1065–1070, 2001.
Schwarz DW, Dezso A, Neufeld PR. Frequency selectivity of central auditory neurons without inner ear. Acta Otolaryngol. 113:266–270, 1993.
Shannon RV. Multichannel electrical stimulation of the auditory nerve in man. I. Basic psychophysics. Hear. Res. 11:157–189, 1983.
Shannon RV. Threshold and loudness functions for pulsatile stimulation of cochlear implants. Hear. Res. 18:135–143, 1985.
Shannon RV. A model of threshold for pulsatile electrical stimulation of cochlear implants. Hear. Res. 40:197–204, 1989.
Shepherd RK, Javel E. Electrical stimulation of the auditory nerve: II. Effect of stimulus waveshape on single fibre response properties. Hear. Res. 130:171–188, 1999.
St-Hilaire M, Longtin A. Comparison of coding capabilities of type I and type II neurons. J. Comput. Neurosci. 16:299–313, 2004.
Strohmann B, Schwarz DW, Puil E. Electrical resonances in central auditory neurons. Acta Otolaryngol. 115:168–172, 1995.
van den Honert C, Mortimer JT. The response of the myelinated nerve fiber to short duration biphasic stimulating currents. Ann. Biomed. Eng. 7:117–125, 1979.
van Wieringen A, Carlyon RP, Macherey O, Wouters J. Effects of pulse rate on thresholds and loudness of biphasic and alternating monophasic pulse trains in electrical hearing. Hear. Res. 220:49–60, 2006.
Verveen AA. Fluctuation in excitability. Ph.D. dissertation, University of Amsterdam, Amsterdam, The Netherlands, 1961.
Viemeister NF, Wakefield GH. Temporal integration and multiple looks. J. Acoust. Soc. Am. 90:858–865, 1991.
Weiss TF. Cellular biophysics: Teaching and learning with computer simulations. http://umech.mit.edu/weiss/announce.html, 2000.
Xu Y, Collins LM. Predicting the threshold of pulse-train electrical stimuli using a stochastic auditory nerve model: The effects of stimulus noise. IEEE Trans. Biomed. Eng. 51:590–603, 2004.
Xu Y, Collins LM. Predicting dynamic range and intensity discrimination for electrical pulse-train stimuli using a stochastic auditory nerve model: The effects of stimulus noise. IEEE Trans. Biomed. Eng. 52:1040–1049, 2005.
Zeng FG, Fu QJ, Morse R. Human hearing enhanced by noise. Brain Res. 869:251–255, 2000.
© Association for Research in Otolaryngology 2007
1.ExpORL, Department of NeurosciencesKatholieke Universiteit LeuvenLeuvenBelgium
2.Cognition and Brain Sciences UnitMedical Research Council (MRC)CambridgeUK
Macherey, O., Carlyon, R.P., van Wieringen, A. et al. JARO (2007) 8: 84. https://doi.org/10.1007/s10162-006-0066-3
February 2021, 4(1): 15-30. doi: 10.3934/mfc.2020023
On approximation to discrete q-derivatives of functions via q-Bernstein-Schurer operators
Harun Karsli
Bolu Abant Izzet Baysal University, Faculty of Science and Arts, Department of Mathematics, 14030, Golkoy-Bolu, Turkey
* Corresponding author: Harun Karsli
Received: April 2020. Revised: September 2020. Published: February 2021 (early access November 2020).
In the present paper, we investigate the pointwise approximation properties of the $q$-analogue of the Bernstein-Schurer operators and estimate the rate of pointwise convergence of these operators to functions $ f $ whose $q$-derivatives are of bounded variation on the interval $ [0,1+p] $. We give an estimate for the rate of convergence of the operator $ \left( B_{n,p,q}f\right) $ at those points $ x $ at which the one-sided $q$-derivatives $ D_{q}^{+}f(x) $ and $ D_{q}^{-}f(x) $ exist. We also prove that the operators $ \left( B_{n,p,q}f\right) (x) $ converge to the limit $ f(x) $. As a continuation of the author's recent study [12], which dealt with the pointwise approximation of the $q$-Bernstein-Durrmeyer operators at such points, this study extends that work to the approximation by the $q$-analogue of the Schurer-type operators in the space $ D_{q}BV $.
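For readers who wish to experiment numerically, the sketch below evaluates a q-Bernstein-Schurer operator. The basis functions follow the standard construction of Muraru (2011) with nodes $[k]_q/[n]_q$ and degree $n+p$; this exact form is an assumption to be checked against the definitions in the paper, and the recursion used for the Gaussian binomial coefficient is one of the standard identities.

```python
from functools import lru_cache

def q_int(k, q):
    """q-integer [k]_q = 1 + q + ... + q^(k-1)."""
    return k if q == 1 else (1 - q**k) / (1 - q)

@lru_cache(maxsize=None)
def q_binom(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q via Pascal-type recursion."""
    if k < 0 or k > n:
        return 0.0
    if k == 0 or k == n:
        return 1.0
    return q_binom(n - 1, k - 1, q) + q**k * q_binom(n - 1, k, q)

def bernstein_schurer(f, n, p, q, x):
    """(B_{n,p,q} f)(x): assumed form with nodes [k]_q/[n]_q, degree n + p."""
    total = 0.0
    for k in range(n + p + 1):
        prod = 1.0
        for s in range(n + p - k):       # product of the (1 - q^s x) factors
            prod *= 1 - q**s * x
        total += f(q_int(k, q) / q_int(n, q)) * q_binom(n + p, k, q) * x**k * prod
    return total

# Example: approximate f(x) = |x - 1/2| with n = 50, p = 2, q = 0.98 at x = 0.5.
print(bernstein_schurer(lambda t: abs(t - 0.5), 50, 2, 0.98, 0.5))
```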
Keywords: q-Bernstein-Schurer operators, pointwise approximation, right and left q-derivatives, convergence rate, bounded variation.
Mathematics Subject Classification: Primary: 41A25; Secondary: 41A35.
Citation: Harun Karsli. On approximation to discrete q-derivatives of functions via q-Bernstein-Schurer operators. Mathematical Foundations of Computing, 2021, 4 (1) : 15-30. doi: 10.3934/mfc.2020023
[1] T. Acar and A. Aral, On pointwise convergence of q-Bernstein operators and their q-derivatives, Numer. Funct. Anal. Optim., 36 (2015), 287-304. doi: 10.1080/01630563.2014.970646.
[2] A.-M. Acu, C. V. Muraru, D. F. Sofonea and V. A. Radu, Some approximation properties of a Durrmeyer variant of q-Bernstein-Schurer operators, Math. Methods Appl. Sci., 39 (2016), 5636-5650. doi: 10.1002/mma.3949.
[3] A. Aral, V. Gupta and R. P. Agarwal, Applications of q-Calculus in Operator Theory, Springer, New York, 2013. doi: 10.1007/978-1-4614-6946-9.
[4] R. Bojanić and F. Chêng, Rate of convergence of Bernstein polynomials for functions with derivatives of bounded variation, J. Math. Anal. Appl., 141 (1989), 136-151. doi: 10.1016/0022-247X(89)90211-4.
[5] R. Bojanić and F. Cheng, Rate of convergence of Hermite-Fejér polynomials for functions with derivatives of bounded variation, Acta Math. Hungar., 59 (1992), 91-102. doi: 10.1007/BF00052094.
[6] R. Bojanić and M. Vuilleumier, On the rate of convergence of Fourier-Legendre series of functions of bounded variation, J. Approx. Theory, 31 (1981), 67-79. doi: 10.1016/0021-9045(81)90031-9.
[7] F. H. Chêng, On the rate of convergence of Bernstein polynomials of functions of bounded variation, J. Approx. Theory, 39 (1983), 259-274. doi: 10.1016/0021-9045(83)90098-9.
[8] R. J. Finkelstein, q-uncertainty relations, Internat. J. Modern Phys. A, 13 (1998), 1795-1803. doi: 10.1142/S0217751X98000780.
[9] C.-L. Ho, On the use of Mellin transform to a class of q-difference-differential equations, Phys. Lett. A, 268 (2000), 217-223. doi: 10.1016/S0375-9601(00)00191-2.
[10] F. H. Jackson, On q-definite integrals, Quart. J. Pure Appl. Math., 41 (1910), 193-203.
[11] V. Kac and P. Cheung, Quantum Calculus, Universitext, Springer-Verlag, New York, 2002. doi: 10.1007/978-1-4613-0071-7.
[12] H. Karsli, Some approximation properties of q-Bernstein-Durrmeyer operators, Tbilisi Math. J., 12 (2019), 189-204. doi: 10.32513/tbilisi/1578020576.
[13] H. Karsli and V. Gupta, Some approximation properties of q-Chlodowsky operators, Appl. Math. Comput., 195 (2008), 220-229. doi: 10.1016/j.amc.2007.04.085.
[14] D. Levi, J. Negro and M. A. del Olmo, Discrete q-derivatives and symmetries of q-difference equations, J. Phys. A, 37 (2004), 3459-3473. doi: 10.1088/0305-4470/37/10/010.
[15] A. Lupaş, A q-analogue of the Bernstein operator, in Seminar on Numerical and Statistical Calculus, Univ. "Babeş-Bolyai", Cluj-Napoca, 1987, 85–92.
[16] K. Mezlini and N. Bettaibi, Generalized discrete q-Hermite I polynomials and q-deformed oscillator, Acta Math. Sci. Ser. B (Engl. Ed.), 38 (2018), 1411-1426. doi: 10.1016/S0252-9602(18)30822-1.
[17] C.-V. Muraru, Note on q-Bernstein-Schurer operators, Stud. Univ. Babeş-Bolyai Math., 56 (2011), 489–495.
[18] G. M. Phillips, Bernstein polynomials based on the q-integers, Ann. Numer. Math., 4 (1997), 511-518.
[19] G. M. Phillips, On generalized Bernstein polynomials, in Numerical Analysis, World Sci. Publ., River Edge, NJ, 1996, 263–269. doi: 10.1142/9789812812872_0018.
[20] M.-Y. Ren and X.-M. Zeng, On statistical approximation properties of modified q-Bernstein-Schurer operators, Bull. Korean Math. Soc., 50 (2013), 1145-1156. doi: 10.4134/BKMS.2013.50.4.1145.
[21] J. Thomae, Beiträge zur Theorie der durch die Heinische Reihe: Darstellbaren Functionen, J. Reine Angew. Math., 70 (1869), 258-281. doi: 10.1515/crll.1869.70.258.
Altered functional organization within the insular cortex in adult males with high-functioning autism spectrum disorder: evidence from connectivity-based parcellation
Takashi Yamada†1,2, Takashi Itahashi†1, Motoaki Nakamura1,3, Hiromi Watanabe1, Miho Kuroda1,4,5, Haruhisa Ohta1, Chieko Kanai1, Nobumasa Kato1 and Ryu-ichiro Hashimoto1,2,6,7 (corresponding author)
Molecular Autism 2016, 7:41 (Brain, Cognition and Behavior)
Received: 6 April 2016
Accepted: 27 September 2016
Published: 5 October 2016
The insular cortex comprises multiple functionally differentiated sub-regions, each of which has different patterns of connectivity with other brain regions. Such diverse connectivity patterns are thought to underlie a wide range of insular functions, including cognitive, affective, and sensorimotor processing, many of which are abnormal in autism spectrum disorder (ASD). Although past neuroimaging studies of ASD have shown structural and functional abnormalities in the insula, possible alterations in the sub-regional organization of the insula and the functional characteristics of each sub-region have not been examined in the ASD brain.
Resting-state functional magnetic resonance imaging (rs-fMRI) data were acquired from 36 adult males with ASD and 38 matched typically developed (TD) controls. A data-driven clustering analysis was applied to rs-fMRI data of voxels in the left and right insula to automatically group voxels with similar intrinsic connectivity pattern into a cluster. After determining the optimal number of clusters based on information theoretic measures of variation of information and mutual information, functional parcellation patterns in both the left and the right insula were compared between the TD and ASD groups. Furthermore, functional profiles of each sub-region were meta-analytically decoded using Neurosynth and were compared between the groups.
We observed notable alterations in the anterior sector of the left insula and the middle ventral sub-region of the right insula in the ASD brain. Meta-analytic decoding revealed that whereas the anterior sector of the left insula contained two functionally differentiated sub-regions for cognitive, sensorimotor, and emotional/affective functions in the TD brain, only a single functional cluster for cognitive and sensorimotor functions was identified in the anterior sector in the ASD brain. In the right insula, the middle ventral sub-region, which is primarily specialized for sensory- and auditory-related functions, showed a significant volumetric increase in the ASD brain compared with the TD brain.
The results indicate an altered organization of sub-regions in specific parts of the left and right insula of the ASD brain. The alterations in the left and right insula may constitute neural substrates underlying abnormalities in emotional/affective and sensory functions in ASD.
Keywords: Resting-state functional magnetic resonance imaging; Connectivity-based functional parcellation
Autism spectrum disorder (ASD) has been increasingly conceptualized as a disease of large-scale brain networks [1]. Among multiple nodes that constitute the brain's networks, several brain regions have emerged as key structures that particularly contribute to abnormal functionalities in the ASD brain. The insular cortex is one such brain region, whose structural and functional abnormalities are frequently reported in the neuroimaging literature of ASD. Structurally, alterations of the gray matter (GM) volume have been identified in the anterior and posterior parts of the right insula in adult ASD [2–4]. Functionally, a comprehensive meta-analysis of functional imaging studies has revealed hypoactivation in the right anterior insula during various social tasks including face recognition and mentalizing [5]. Regarding connectivity, previous resting-state fMRI studies of adolescent and adult ASD have shown reduced functional connectivity (FC) of the anterior, middle, and posterior insula with distant brain regions, including the amygdala and the somatosensory cortex [6–8]. Convergent evidence from these structural, functional, and FC studies strongly prompts more detailed investigations of the insula to advance our understanding of the neural substrates of ASD.
Recent progress in anatomical studies has enabled more finely grained investigations into the neural organization of the insula. Traditionally, the insula has been divided into the following three regions: (1) anterior agranular, (2) posterior granular, and (3) precentral dysgranular [9–11]. However, a recent histological study has identified at least three distinct regions within the posterior insula alone [12], indicating the presence of a more fine-grained subdivision of this brain region. Similarly, FC analysis using fMRI data has led to recent progress in understanding the functional anatomy of the insular cortex. The rationale behind the FC approach is that the connectivity pattern of a brain region is a significant determinant of its functional role [13, 14]. FC patterns of the insular cortex have been mainly investigated using resting-state FC analyses or meta-analyses of co-activation patterns obtained in task-based fMRI studies. Using data-driven clustering methods in which voxels with similar FC patterns were grouped into a functional unit (parcel), Cauda and colleagues have identified two sub-regions of the ventral-anterior and dorsal-posterior insula based on resting-state FC [15] and co-activation patterns obtained in task-based fMRI studies [16]. On the other hand, a more recent study using data-driven clustering analysis of resting-state FC data supported the tripartite subdivision into dorsal-anterior, ventral-anterior, and posterior regions [17]. Based on the fact that each sub-region has a distinct FC pattern, this study used a meta-analytic co-activation tool (Neurosynth) and succeeded in decoding the functional profiles of insular sub-regions using reverse inference from the FC patterns seeded from each sub-region [18]. These studies demonstrate the utility of data-driven FC-based clustering methods in the study of insular functional organization, particularly when they are used in combination with meta-analytical tools such as the Neurosynth framework.
The aforementioned studies reporting a functional parcellation of the insula confirm the traditional cytoarchitecture-based subdivision scheme of this brain region. However, given more recent evidence indicating an even finer subdivision within the posterior insula, the data-driven FC clustering approach is expected to lead to a more finely grained functional parcellation of this brain region. Indeed, Kelly and colleagues gradually increased the number of parcels in the insula starting at two and examined changes in functional parcellation patterns using resting-state FC data, gray matter structural images, and a task-based fMRI co-activation map [19]. They observed consistent patterns of fine-scale functional parcellation across these three different modalities when using up to nine parcels. Comparing parcellation patterns at multiple scale levels (coarse vs. fine subdivisions) revealed that insular sub-regions are organized in a pseudo-hierarchical fashion, such that an increment in the number of subdivisions results in a finer division of an already existing cluster rather than generating a completely new parcellation pattern. These results indicate that the insular cortex is organized at multiple spatial scales and that finer parcellation patterns may offer additional insights into the functional organization of this structure.
In the present fMRI study, we examined the functional organization of the insula in adult males with ASD by applying the data-driven clustering method to resting-state FC data. Although localized alterations in insular gray matter volume, cortical activation, and FC have previously been reported in neuroimaging studies of ASD [6–8], to our knowledge, no study has examined the possibility of an altered organization of functionally heterogeneous insular sub-regions in the ASD brain. In this study, we first examined whether the functional organization of the insula is altered in subjects with ASD compared with typically developed (TD) controls. When we detected significant alterations, we then attempted to identify functions of the affected sub-regions by applying a meta-analytical decoding tool [18] to FC patterns of insular sub-regions. In this series of analyses, we needed to determine the optimal number of clusters for parcellation. Given previous findings indicating multiple spatial scales of insular organization [19], we expected that finer parcellation schemes would be more sensitive to subtle but significant functional alteration than coarser subdivisions adopted by traditional bi- or tripartite schemes.
Thirty-six high-functioning adult males with ASD and 38 age-matched TD males participated in this study. The diagnostic procedure to identify patients with ASD was the same as in our previous studies [20–22]. Briefly, an experienced psychiatrist and a clinical psychologist independently interviewed the patients (together with their caregivers when available) for approximately 3 h regarding their developmental history, present illness, life history, and family history. The diagnosis of ASD was made only when there was a consensus between the psychiatrist and clinical psychologist based on the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV-TR) (American Psychiatric Association, 2000). In addition, the diagnosis was reconfirmed after a follow-up period of at least 2 months. TD subjects were recruited through advertisements and acquaintances. None of the TD participants reported any severe medical problems or a neurological or psychiatric history. All participants except six TD individuals completed the Japanese version of the autism spectrum quotient (AQ) test [23]. Table 1 shows detailed demographic information for the study participants.
Table 1. The demographic data for the participants (group values for age, IQ, and the AQ total and subscale scores, including social reciprocity; the numeric entries are not reproduced here). Note: WAIS-III or WAIS-R was administered to all participants with ASD, and the IQ score was estimated for all TD participants based on the JART. The AQ score was collected from 32 TD participants and all participants with ASD.
We confirmed that all of the participants were right-handed using the Edinburgh Handedness Inventory [24]. The intelligence quotient (IQ) scores of all participants with ASD were evaluated using either the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) or the WAIS-Revised (WAIS-R), while those of TD subjects were estimated using a Japanese version of the National Adult Reading Test (JART) [25]. There were no significant differences between the two groups in age or IQ scores (both p > 0.1). Out of 36 participants with ASD, 24 individuals underwent either (1) the Diagnostic Interview for Social and Communication Disorders (DISCO) only (n = 5), (2) the Autism Diagnostic Observation Schedule (ADOS) only (n = 11), or (3) both (n = 8). The remaining 12 ASD participants did not undergo either of the auxiliary diagnostic tools (DISCO or ADOS). All five participants who underwent only the DISCO satisfied the DISCO diagnostic criteria for ASD. Out of the 11 participants who underwent only the ADOS, ten satisfied the ADOS diagnostic criteria for ASD, while one participant had a total ADOS score that was lower than the cut-off score (6 < 7); however, the same subject satisfied the cut-off scores for the "communication" and "social reciprocity" subscales of the ADOS ("communication": 2 ≥ 2, "social reciprocity": 4 ≥ 4), and therefore he was regarded as an individual with ASD. All eight participants who underwent both the DISCO and the ADOS satisfied the criteria for ASD on both measures. All participants provided written informed consent. This study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the Ethics Committee of the Faculty of Medicine of Showa University.
All scans were acquired using a 1.5T GE Signa system (General Electric, Milwaukee, WI, USA). The resting-state functional images were acquired using a gradient echo-planar imaging sequence (in-plane resolution: 3.4375 × 3.4375 mm, echo time (TE): 40 ms, repetition time (TR): 2000 ms, flip angle: 90°, slice thickness: 4 mm with a 1-mm slice gap, matrix size: 64 × 64, 27 axial slices). Two-hundred and eight volumes were acquired in a single run, and the first four volumes were discarded to allow for T1 equilibration. We also obtained a high-resolution T1-weighted spoiled gradient recalled 3D MRI image (in-plane resolution: 0.9375 × 0.9375 mm, 1.4 mm slice thickness, TR: 25 ms, TE: 9.2 ms, matrix size: 256 × 256, 128 sagittal slices). Each participant was instructed to lie still with his eyes closed and to not think of anything in particular, yet stay awake in the dim scanner room.
Except for a few specific steps performed with Analysis of Functional NeuroImages (AFNI) [26], we mainly used the Statistical Parametric Mapping software (SPM 8) (Wellcome Department of Cognitive Neurology, London, UK) to preprocess the resting-state functional magnetic resonance imaging (rs-fMRI) data. Unless otherwise specified, rs-fMRI data were preprocessed using functions implemented in the SPM software. First, functional images were adjusted for slice timing and then were corrected for head motion. None of the participants moved translationally more than 2 mm (x, y, and z) or rotated more than 2° (roll, pitch, and yaw) in relation to the first volume. We next skull-stripped the functional images using 3dAutomask and 3dcalc and despiked the time series using 3dDespike in AFNI to account for the impact of outliers. The functional images were then co-registered to each participant's T1 image and normalized to the Montreal Neurological Institute (MNI) template, resampled to a resolution of 2 × 2 × 2 mm. Finally, the images were smoothed using a 6-mm Gaussian kernel.
We calculated covariates from the segmented white matter and cerebrospinal fluid using the "aCompCor" method [27] in order to remove artefactual components from the fMRI time series. These covariates, together with 12 head motion parameters (6 motion parameters and their first-order temporal derivatives), were then regressed out from the smoothed rs-fMRI data. To further remove possible effects of sub-millimeter motion on estimations of functional connectivity, we applied the scrubbing method in the following manner [28, 29]: (1) the framewise displacement (FD) and frame-by-frame signal intensity change (DVARS) indices were calculated immediately after head motion correction; (2) we regarded a volume as a motion-contaminated volume either when its FD value was greater than 0.5 mm or when its DVARS value was greater than the 75th percentile plus 1.5 times the interquartile range; (3) the signal values of the motion-contaminated volumes were interpolated by applying the cubic spline function [28]; (4) a band-pass filter (0.009–0.08 Hz) was applied to further reduce the effects of low-frequency drifts and high-frequency physiological noise; (5) finally, the interpolated volumes were deleted. There were no significant differences between the two groups either in the mean (± standard deviation [SD]) FD (0.115 ± 0.042 in TD, 0.107 ± 0.049 in ASD; p = 0.495) or in the mean (± SD) DVARS (TD: 12.94 ± 2.05, ASD: 12.86 ± 1.83, p = 0.863). The number of deleted volumes after the scrubbing procedures was 5.24 ± 4.90 for the TD group and 4.25 ± 4.23 for the ASD group. No significant difference was found between the groups (p = 0.36).
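To make the flagging criteria in steps (1)–(2) concrete, the following minimal NumPy sketch computes FD and DVARS and flags motion-contaminated volumes. The helper names are ours, and the exact FD/DVARS variants (rotations converted to millimeters on a 50-mm sphere; DVARS as the root mean square of the frame-to-frame signal change) are common conventions assumed here rather than details stated in the text:

```python
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """FD per volume: sum of absolute backward differences of the six
    realignment parameters; rotations (radians) are converted to arc
    length in mm on a sphere of the given radius (assumed 50 mm)."""
    params = motion.copy()            # motion: (T, 6) translations mm, rotations rad
    params[:, 3:] *= radius           # rotations -> mm
    diffs = np.abs(np.diff(params, axis=0))
    return np.concatenate([[0.0], diffs.sum(axis=1)])

def dvars(data):
    """DVARS per volume: RMS over voxels of the frame-to-frame change.
    data: (T, V) array of voxel time series."""
    diffs = np.diff(data, axis=0)
    return np.concatenate([[0.0], np.sqrt((diffs ** 2).mean(axis=1))])

def motion_flags(motion, data, fd_thresh=0.5):
    """Flag volumes per the criteria in the text: FD > 0.5 mm, or
    DVARS above the 75th percentile plus 1.5 times the IQR."""
    fd = framewise_displacement(motion)
    dv = dvars(data)
    q1, q3 = np.percentile(dv, [25, 75])
    return (fd > fd_thresh) | (dv > q3 + 1.5 * (q3 - q1))
```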
Figure 1 illustrates the processing scheme for the connectivity-based parcellation of the insula. Briefly, we generated a group of functional connectivity maps for each participant using every voxel in the insula as a seed. We then applied a clustering method to the resulting groups of connectivity maps for the left and the right insula separately and identified a set of voxel clusters such that each voxel in the same cluster had a similar connectivity pattern. After calculating the cluster map for each individual, we again applied the clustering method to matrices that represent patterns of similarity of clustering either within the TD or the ASD group to obtain separate group-level functional parcellation maps for the TD and ASD groups.
The procedure for the functional connectivity-based parcellation of the insula. We first identified voxels in the left and right insula of each participant and generated a set of functional connectivity maps by correlating the resting-state time-series of each voxel with voxels in a whole gray matter mask (excluding the insula) for each hemisphere. Following a Fisher's z-transformation of the functional connectivity maps, we constructed the individual-level similarity matrix using eta-squared, which is a measure of similarity between a pair of functional connectivity maps (see the "Connectivity-based functional parcellation" section). We applied the spectral clustering algorithm to the set of individual-level similarity matrices in order to cluster voxels with similar time-series of the resting-state signal fluctuations. For group-level analysis, we first calculated a binary adjacency matrix for each participant. Adjacency matrices of all participants were averaged separately for TD and ASD individuals to generate a group-level similarity matrix. Finally, we applied the spectral clustering algorithm to the group-level similarity matrix to assign one of the k clustering labels to each voxel (see the "Connectivity-based functional parcellation" section).
Specifically, we first identified all of the voxels included in the insula as follows: (1) voxels in the standard spaces of the left and the right insula were defined using the Harvard-Oxford probabilistic atlas thresholded at 25 % probability; (2) voxels in the insula from both hemispheres were further constrained by conjunction with a study-specific GM mask, which represented voxels that survived the threshold of 40 % GM probability in every participant. This method of generating the study-specific GM masks was based on previous studies [30, 31]. This procedure resulted in the inclusion of 1161 voxels in the left insula and 1179 voxels in the right insula.
Next, we generated a group of voxel-based functional connectivity maps by calculating temporal correlations between time series of each individual voxel in the insula and those of all voxels within the study-specific GM mask, except for the voxels in the insula. This procedure resulted in the generation of 1161 maps for the left insula and 1179 maps for the right insula in each participant. Connectivity maps of the Pearson's correlation coefficients were then converted to Z-scores using the Fisher's r-to-z transformation.
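As an illustration of this step, a vectorized sketch of the voxelwise correlation maps and the Fisher r-to-z transform might look as follows (the function name and array layout are our assumptions; time series are taken to be fully preprocessed and non-constant):

```python
import numpy as np

def seed_fc_maps(insula_ts, gm_ts):
    """Correlate each insular voxel with every GM voxel outside the
    insula, then Fisher r-to-z transform the resulting maps.

    insula_ts : (T, N_ins) array of insular voxel time series.
    gm_ts     : (T, N_gm) array of the remaining GM voxel time series.
    Returns an (N_ins, N_gm) array of z-valued connectivity maps.
    """
    a = (insula_ts - insula_ts.mean(0)) / insula_ts.std(0)  # z-score columns
    b = (gm_ts - gm_ts.mean(0)) / gm_ts.std(0)
    r = a.T @ b / len(a)                  # all pairwise Pearson correlations
    return np.arctanh(np.clip(r, -0.999999, 0.999999))      # Fisher z
```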
For the functional parcellation, we evaluated the similarities of functional connectivity patterns among the 1161 maps for the left insula and the 1179 maps for the right insula. Following previously described methods [19, 32–34], we calculated the eta-squared value as a measure of similarity between a pair of connectivity maps as follows:
$$ \text{eta squared} = 1 - \frac{\sum_{i=1}^{n}\left[(a_{i}-m_{i})^{2}+(b_{i}-m_{i})^{2}\right]}{\sum_{i=1}^{n}\left[(a_{i}-X)^{2}+(b_{i}-X)^{2}\right]}, $$
where $a_{i}$ and $b_{i}$ are the Z-scored connectivity values at voxel $i$ in connectivity maps $a$ and $b$, respectively, $m_{i}$ is the mean of the two connectivity map values at voxel $i$, and $X$ is the mean over all voxels of the two connectivity maps. Using the eta-squared values between all pairs of connectivity maps, we constructed the individual-level similarity matrix for the left and the right insula separately (a 1161 × 1161 matrix for the left insula and a 1179 × 1179 matrix for the right insula). After determining the optimal number of clusters (k, see the "Estimation of the optimal number of clusters" section), we applied the spectral clustering algorithm to each of the individual-level similarity matrices and parcellated either the left or the right insula into k clusters based on similarities in functional connectivity patterns.
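A direct NumPy transcription of this similarity measure (a minimal sketch; the function name is ours) is:

```python
import numpy as np

def eta_squared(a, b):
    """Similarity between two connectivity maps a and b (1-D arrays of
    z values): 1 minus the ratio of within-pair dispersion around the
    voxelwise means m_i to total dispersion around the grand mean X."""
    m = (a + b) / 2.0                       # voxelwise mean of the two maps
    grand = np.concatenate([a, b]).mean()   # grand mean X over both maps
    num = ((a - m) ** 2 + (b - m) ** 2).sum()
    den = ((a - grand) ** 2 + (b - grand) ** 2).sum()
    return 1.0 - num / den
```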
After performing functional parcellation at the individual-level, we next performed group-level functional parcellation as follows [33]: (1) we generated a binary adjacency matrix whose value was 1 if a pair of voxels belonged to the same cluster and zero otherwise for each participant, (2) we then generated a group-level similarity matrix by averaging the adjacency matrices of all individuals within each group, and (3) we applied the spectral clustering algorithm to the group-level similarity matrix to assign one of the k clustering labels to each voxel.
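The group-level step can be sketched as follows, using scikit-learn's SpectralClustering as a stand-in for the spectral clustering algorithm (the authors' exact implementation is not specified in the text, so this is an assumption):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def group_parcellation(individual_labels, k=8):
    """individual_labels: list of (N_vox,) integer label arrays, one per
    subject. Builds each subject's binary co-assignment (adjacency)
    matrix, averages them within the group, and re-clusters the averaged
    similarity matrix to obtain group-level labels."""
    n = len(individual_labels[0])
    sim = np.zeros((n, n))
    for lab in individual_labels:
        sim += (lab[:, None] == lab[None, :])   # 1 where two voxels share a cluster
    sim /= len(individual_labels)               # group-level similarity matrix
    sc = SpectralClustering(n_clusters=k, affinity='precomputed',
                            random_state=0)
    return sc.fit_predict(sim)
```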
The spectral clustering algorithm arbitrarily assigns one of the k clustering labels to each voxel. To compare the parcellation results of the two groups, we first fixed the configuration of labels in the TD group as a reference. Then, for each possible instantiation of the k labels in the ASD group, we calculated the ratio of voxels having the same label in the two groups to those having different labels. We selected the configuration of labels that maximized this ratio.
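Because this matching step only needs the label configuration that maximizes agreement, the exhaustive search over all k! configurations can equivalently be solved with the Hungarian algorithm on the k × k confusion matrix; a sketch under that assumption (names ours):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_labels(ref_labels, labels, k=8):
    """Relabel `labels` (integers 0..k-1) so that agreement with
    `ref_labels` is maximal; maximizing the trace of the confusion
    matrix gives the same optimum as the exhaustive search."""
    confusion = np.zeros((k, k), dtype=int)
    for r, l in zip(ref_labels, labels):
        confusion[r, l] += 1
    _, col = linear_sum_assignment(-confusion)   # maximize total overlap
    mapping = {old: new for new, old in enumerate(col)}
    return np.array([mapping[l] for l in labels])
```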
Estimation of the optimal number of clusters
Before applying the spectral clustering method, we determined the optimal number of clusters (k) using two complementary indices that evaluate the goodness of clustering solutions: variation of information (VI) and mutual information (MI) [35]. VI is an information-theoretic measure that quantifies the information lost and gained in changing clustering solution A to clustering solution B. This measure can thus be used as an index of dissimilarity [19]. On the other hand, MI is an information-theoretic measure that represents the similarity between clustering solutions. We examined the VI and MI values for k ranging from 2 to 10, for the left and the right insula and for the TD and ASD groups separately, and selected the optimal k value. A k value with a low VI and a high MI indicates a good solution in terms of similarity.
To calculate VI and MI, we used a split-half procedure, as described previously [36, 37]. First, in order to determine an optimal k value not biased toward the TD or ASD groups, all of the participants were randomly assigned to one of two groups (group A and group B). We then applied the aforementioned spectral clustering algorithm to each group for each k (k = 2, 3,…, 10) and obtained group-level parcellation maps for each group. Finally, we compared the clustering of the two groups by calculating VI and MI for each k. We repeated this procedure 100 times, as in previous studies. A previous functional connectivity-based parcellation study on the insula demonstrated that the stability of the solutions substantially decreases when the number of clusters (k) exceeds 10 [19]. Therefore, we explored k values ranging from 2 to 10 in the present study.
In order to determine the optimal k value, we first identified a set of "good solutions" using VI and MI independently and then determined the optimal values using the logical product of the two solutions. Since VI is a measure of dissimilarity, we identified points (k values) of "local minimal" VI as good solutions. Similarly, since MI is a measure of similarity, we identified points (k values) of "local maximal" MI as good solutions. Here, a local minimum of VI is defined as a point (k value) whose value is significantly smaller than the values at its two adjacent points. The local maximum MI value was determined in a similar manner. We determined the optimal k value as the point where local minimal VI and local maximal MI values converge.
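For concreteness, MI and VI for two parcellations of the same voxels can be computed from the standard definitions, with VI(A, B) = H(A) + H(B) − 2I(A; B); a minimal sketch (function names ours, labels assumed to be non-negative integers):

```python
import numpy as np

def entropy(labels):
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def mutual_information(a, b):
    """I(A;B) from two integer label vectors over the same voxels."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        joint[i, j] += 1
    joint /= len(a)                       # joint label distribution
    pa, pb = joint.sum(1), joint.sum(0)   # marginals
    nz = joint > 0
    return (joint[nz] * np.log(joint[nz] / np.outer(pa, pb)[nz])).sum()

def variation_of_information(a, b):
    """VI(A,B) = H(A) + H(B) - 2 I(A;B); zero for identical parcellations."""
    return entropy(a) + entropy(b) - 2 * mutual_information(a, b)
```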
We also performed another split-half procedure within the TD group by randomly assigning participants with TD into one of two groups (group 1 or group 2) while changing k from 2 to 10. We did this to ensure that the optimal k value determined by our procedure was not biased due to contamination from the ASD group.
Comparison of functional parcellation patterns between the TD and ASD groups
After the parcellation of the insula in each participant, we examined between-group differences in the volumes of specific sub-regions by performing a permutation test, as previously described [33]. Briefly, we randomly assigned participants to one of the two groups by permuting the diagnostic labels of all of the participants and then applied our clustering algorithm to each of the permuted groups. The sub-regions in each group were re-labeled following the aforementioned procedure, which used the labels from the original TD group as references (see the "Connectivity-based functional parcellation" section). This procedure was repeated 5000 times to obtain the null distribution of volumetric differences between the groups. We deemed between-group volumetric differences statistically significant when the difference between the TD and ASD groups fell above the 95th percentile of the null distribution.
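The structure of this permutation test can be sketched as follows; here region_volume is a hypothetical helper (not from the original text) that re-runs the group-level parcellation and relabeling on a given list of subjects and returns the voxel count of the sub-region of interest:

```python
import numpy as np

def volume_permutation_test(all_subjects, n_td, region_volume,
                            n_perm=5000, seed=0):
    """One-sided permutation p-value for an ASD > TD volume difference.
    `all_subjects` lists TD subjects first, then ASD subjects."""
    rng = np.random.default_rng(seed)
    observed = (region_volume(all_subjects[n_td:])      # ASD volume
                - region_volume(all_subjects[:n_td]))   # TD volume
    null = []
    for _ in range(n_perm):
        perm = rng.permutation(len(all_subjects))       # shuffle diagnostic labels
        g1 = [all_subjects[i] for i in perm[:n_td]]
        g2 = [all_subjects[i] for i in perm[n_td:]]
        null.append(region_volume(g2) - region_volume(g1))
    # significant when the observed difference exceeds the 95th percentile
    return (np.array(null) >= observed).mean()
```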
Meta-analytic decoding of the sub-regions' connectivity maps
In addition to analyzing sub-regional volumes, we also examined whether there are significant alterations in functional characteristics of insular sub-regions in ASD. We thus examined the functional profiles of each sub-region in the TD and ASD groups separately using a recently developed meta-analytic tool called "Neurosynth" [18]. Given a pattern of functional connectivity as an input, the tool allows us to decode the relevant psychological and physiological functions (e.g., "cognitive control," "emotion," and "perception") strongly associated with that pattern.
As a first step, we performed functional connectivity analysis using each of the insular sub-regions as a region of interest (ROI). We generated maps of the Pearson's correlation coefficients between the averaged time series of a sub-region and those of all voxels within the GM mask for each individual. After transformation into z-maps, the maps of all participants in the TD and ASD groups were separately subjected to a one-sample t test using age as a nuisance covariate to obtain the connectivity pattern at the group level. The resulting t-statistic maps were converted to z-statistic maps and then fed into Neurosynth as inputs for meta-analytic decoding.
In order to characterize the functional properties of each functional connectivity map, we determined the top 5 terms that showed the strongest correlations with each functional connectivity map. Related terms (e.g., "emotion" and "emotional") were merged into a single term of the base form. This procedure yielded a total of 14 terms, as some of the same terms were repeatedly selected across all sub-regions for both hemispheres. Finally, we drew a radar chart consisting of these 14 terms to visually assess the potential psychological and physiological functions of each sub-region as functional fingerprints. To visualize the similarities (and dissimilarities) of these functional fingerprints, we constructed a 14-dimensional feature vector whose elements were the correlation coefficients of the 14 terms for each sub-region. We then calculated the Pearson's correlation coefficient between pairs of feature vectors to represent the similarities between sub-regions in terms of functional fingerprints.
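Comparing fingerprints then reduces to correlating two 14-dimensional vectors; a trivial sketch (variable names hypothetical):

```python
import numpy as np

def fingerprint_similarity(fp_a, fp_b):
    """Pearson correlation between two 14-term functional fingerprints
    (vectors of Neurosynth decoding correlations, one value per term)."""
    return np.corrcoef(fp_a, fp_b)[0, 1]

# e.g., comparing a hypothetical ASD anterior fingerprint with TD AD/AV ones:
# r_ad = fingerprint_similarity(fp_asd_anterior, fp_td_ad)
# r_av = fingerprint_similarity(fp_asd_anterior, fp_td_av)
```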
Connectivity-based parcellation of the intracalcarine cortex as a control region
The functional parcellations of the insula resulted in different patterns between the TD and ASD groups (see the "Visual inspection of functional connectivity-based parcellation" section). To examine whether group differences in the parcellation patterns have any regional selectivity, we applied the same clustering analysis to the intracalcarine cortex as a control region.
The intracalcarine cortex as defined in the Harvard-Oxford probabilistic atlas (thresholded at 25 % probability) roughly corresponds to the primary visual cortex and its neighboring regions. Given the evidence of alterations in vision-related functions in some ASD individuals (e.g., enhanced visual search ability [38, 39]), it may not be possible to strongly assert that functional organizations within the lower-order visual cortices are unaltered in the ASD brain. However, we propose that the intracalcarine cortex may serve as a control for the insula for the following reasons: (1) the region is thought to be dedicated to visual processing (in contrast to the multiple distinct functions of the insula) and (2) the size of the region (1405 voxels) is comparable to those of the left and right insula (left: 1161 voxels, right: 1179 voxels).
We estimated the optimal cluster number in the intracalcarine cortex using MI and VI in the same manner as that for the left and right insula (see the "Estimation of the optimal number of clusters" section). After determining the optimal number of clusters of this region, the parcellation patterns were visually inspected and compared between groups. Furthermore, a possible volumetric alteration was examined using a permutation test for each sub-region (see the "Comparison of functional parcellation patterns between the TD and ASD groups" section).
Assessment for the replicability of functional parcellation patterns
To assess the stability of the functional parcellation patterns, we performed a replication analysis using subsets of the data. First, we divided the participants of each group into six folds (in the case of the ASD group, 6 participants per fold [36 participants in total]). The stability of the parcellation pattern was examined using the leave-one-fold-out method. In other words, we applied the clustering method to the data of the remaining five folds and repeated the procedure six times until each of the six folds had been excluded once. This procedure resulted in six parcellation patterns. We then examined the similarity between the parcellation pattern using all participants in the group and each of the six parcellation patterns that were generated using the leave-one-fold-out method, as sketched below. We used the MI as an index of the similarity of two patterns.
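A sketch of this leave-one-fold-out check, with parcellate a hypothetical wrapper around the group-level clustering and MI computed with scikit-learn:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def fold_stability(subjects, parcellate, n_folds=6):
    """MI between the parcellation from all subjects and each parcellation
    recomputed with one fold held out. `parcellate(subject_list)` is
    assumed to return a group-level label array over the same voxels."""
    full = parcellate(subjects)
    folds = np.array_split(np.arange(len(subjects)), n_folds)
    scores = []
    for held_out in folds:
        keep = [s for i, s in enumerate(subjects) if i not in held_out]
        scores.append(mutual_info_score(full, parcellate(keep)))
    return float(np.mean(scores)), scores
```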
Optimal number of clusters
Figure 2 shows the means and standard errors of the mean (SEM) of VI and MI for the left and the right insula for each clustering solution. In the left insula, VI had local minima at k = 5 and 8 (VI: 0.167 ± 0.0031 [mean ± SEM] for k = 5 and VI: 0.151 ± 0.0026 for k = 8), while MI had a local maximum at k = 8 (MI: 0.740 ± 0.0046) (Fig. 2a). Following the criteria for the selection of the optimal number of clusters using VI and MI (see the "Estimation of optimal number of clusters" section), we determined an optimal k of 8 for the left insula. In the right insula, VI had local minima at k = 7 and 8 (VI: 0.154 ± 0.0024 for k = 7 and VI: 0.155 ± 0.0022 for k = 8), while MI had a local maximum at k = 8 (MI: 0.732 ± 0.0039) (Fig. 2b). We determined an optimal k of 8 for the right insula using the information that we obtained from VI and MI.
Determination of the optimal number of clusters based on VI and MI. The VI and MI values are shown for every clustering solution for k values ranging from 2 to 10 for each side of the insula (a Left insula, b Right insula). Arrows indicate either local minima of VI or local maxima of MI. Dashed lines denote the optimal number of solutions as determined using both VI and MI. The error bars denote standard errors of the mean for 100 repetitions of the split-half procedure (see the "Estimation of the optimal number of clusters" section). n.s. indicates no statistically significant difference between points
Visual inspection of functional connectivity-based parcellation
Figure 3 illustrates the functional parcellation patterns of the left and right insula in the TD and ASD groups when k = 8. We named the eight sub-regions based on their locations along the anterior-posterior and dorsal-ventral axes. Figure 4 shows a magnified picture of the parcellation pattern of the left insula of the TD group (Fig. 3a). We first divided the whole region into anterior, middle, posterior, and posterior-most sectors along the anterior-posterior axis (Fig. 4a). Each of the anterior and posterior sectors was further divided into dorsal and ventral sub-regions. The three sub-regions in the middle sector were labeled as the middle dorsal, central, and middle ventral sub-regions (Fig. 4b). We thus obtained eight sub-regions as follows: (1) anterior sector: anterior dorsal (AD) and ventral (AV) sub-regions; (2) middle sector: middle dorsal (MD), central (C), and middle ventral (MV) sub-regions; (3) posterior sector: posterior dorsal (PD) and ventral (PV) sub-regions; and (4) posterior-most sub-region. Note that this labeling is similarly applicable to the parcellation pattern in the right insula (Fig. 3b). We also note that our parcellation patterns were highly similar to previously identified patterns based on resting-state FC studies of a neurotypical population [19]. Most notably, the parcellation patterns in our study and those in the previous study were consistent in that the anterior and posterior sectors were divided into two sub-regions and the middle sector was divided into three sub-regions.
The patterns of functional parcellation in the left and right insula in TD and ASD (a Left insula of TD, b Right insula of TD, c Left insula of ASD, d Right insula of ASD). Each figure is presented in sagittal and magnified sagittal views. The color of each insular sub-region reflects the color of the corresponding sub-region in the TD and ASD groups
Spatial configurations and labels for the insular sub-regions. The parcellation pattern of the left insula of the TD group is magnified. We divided the whole region into anterior, middle, posterior, and posterior-most sectors along the anterior-posterior axis (a). Further subdivision along the dorsal-ventral axis in each sector resulted in eight sub-regions in total (b): (1) anterior sector: anterior dorsal (AD) and ventral (AV) sub-regions (green and cyan), (2) middle sector: middle dorsal (MD), central (C), and middle ventral (MV) sub-regions (brown, yellow, and blue), (3) posterior sector: posterior dorsal (PD) and ventral (PV) sub-regions (magenta and orange), and (4) posterior-most sub-region (purple). D dorsal, V ventral, A anterior, P posterior
In comparison to the parcellation patterns of the TD group, we observed notable localized alterations in both the left and the right insula in the ASD group. Specifically, we observed that the left anterior sector, which was divided into two sub-regions in the TD group, contained only a single sub-region in the ASD group (Fig. 3c). Moreover, the left PV sub-region, which was a single region in the TD group, was divided into two sub-regions in the ASD group. On the other hand, the parcellation patterns of the middle and posterior-most sectors in the ASD group were largely unchanged from those of the TD group. In the right insula, the MV sub-region was expanded and the central sub-region was shifted in the posterior direction in the ASD group (Fig. 3d). Aside from localized alterations, the parcellation patterns in the anterior and posterior-most sectors were largely the same in the two groups. These observations indicate that the organization of functional sub-regions may be altered in specific parts of the left and right insula in ASD.
Meta-analytical decoding of the left insular sub-regions
We investigated the functional profiles of the anterior sub-regions and the PV sub-region in the left insula, where the functional parcellations were visually different between the TD and ASD groups. First, we performed a meta-analytic decoding of the functions of the two anterior sub-regions (AD and AV) in the TD group and of the anterior sub-region in the ASD group. The results are presented as a radar chart in Fig. 5. The functional profile of the anterior sub-region in the ASD group showed an almost identical pattern to that of the AD sub-region in the TD group (r = 0.97). Although the profile of the ASD anterior sub-region also significantly correlated with that of the AV sub-region in the TD group (r = 0.86), the profile of the anterior sub-region in the ASD group was clearly more similar to that of the AD sub-region in the TD group. Therefore, the alterations of the anterior sector in the ASD group may be characterized as the absence of the AV sub-region together with the volumetric expansion of the AD sub-region. The volumetric increase was statistically confirmed by the permutation test (ASD anterior sub-region > TD AD sub-region, p = 0.024) (Additional file 1: Figure S1a). When the functional profiles were compared between the AD and AV sub-regions, the AV sub-region was more strongly associated with terms related to affect and emotion, such as "reward," "fear," "anxiety," and "affective," whereas the AD sub-region was preferentially associated with cognitive terms ("control" and "inhibition") and sensorimotor terms ("sensorimotor," "motor," "tactile," and "somatosensory").
The meta-analytic decoding of sub-regions in the left anterior sector and posterior ventral sub-regions. a The radar chart that shows the correlation of the left anterior ventral (AV) and anterior dorsal (AD) sub-regions in the TD group and the anterior sector in the ASD group with the 14 terms of interest. Note that the profile of the ASD anterior sector is more similar to that of the AD sub-region than to that of the AV sub-region in the TD group. b The radar chart that shows the correlation of the left posterior ventral (PV) sub-region in the TD group and the two parcels within the PV sub-region (PV1 and PV2) with the 14 terms of interest. The profiles of these three sub-regions were highly similar
We next performed a meta-analytic decoding of the function of the PV sub-region in the TD group and functions of the two PV sub-regions in the ASD group. For simplicity, we will label the anterior portion of the PV sub-region in the ASD group as PV1 and the posterior PV as PV2. When comparing the PV sub-region in the TD group with the two PV sub-regions in the ASD group, we observed almost identical functional profiles among these sub-regions (TD PV and ASD PV1: r = 0.96; TD PV and ASD PV2: r = 0.95; ASD PV1 and ASD PV2: r = 1.00).
Meta-analytical decoding of the right insular sub-regions
We next examined functional alterations in the right insular sub-regions of the ASD group. We previously noted the visual observation suggesting the extension of the MV sub-region into the more dorsal part of the insula (see the "Visual inspection of functional connectivity-based parcellation" section). We statistically confirmed this observation by using the permutation test and finding a significant group difference (ASD > TD, p = 0.003) (Additional file 1: Figure S1b). Meta-analytic decoding of the MV sub-region revealed that the region was strongly associated with terms related to perceptual processes such as "pain," "somatosensory," "auditory," "speech," and "heat," and that such functional profiles were almost identical between the TD and ASD groups (r = 0.99) (Fig. 6a). In addition to the spatial expansion of the MV sub-region, our visual inspection indicated that the position of the C sub-region had shifted in the posterior direction in the ASD group. In order to examine possible functional alterations in this region, we performed meta-analytic decoding of the C sub-region in the two groups and found highly dissimilar patterns (r = 0.45). However, we observed that the C sub-region in the ASD group was functionally similar to the PV sub-region in the TD group (r = 0.98), which was strongly associated with terms largely related to perceptual ("somatosensory," "tactile," and "auditory") and motor processes. The results are consistent with the observation of the spatial shift of the C sub-region toward the posterior direction in the ASD group.
The meta-analytic decoding in the right middle ventral, central, and posterior ventral sub-regions. a The radar chart that shows the correlation of the middle ventral (MV) sub-region in both groups with the 14 terms of interest. b The radar chart shows the correlation of the central (C) and posterior ventral (PV) sub-regions in both groups with the 14 terms of interest
Group differences in individual variability in insula parcellation
Our analysis showed significant differences in functional parcellation patterns between the TD and ASD groups. However, recent studies have suggested that the ASD brain is highly idiosyncratic, indicating that each individual ASD brain may be greatly discrepant from the group mean [40]. Therefore, it may be argued that the inter-individual variability in the insula parcellation patterns is larger in the ASD group than in the TD group and that such enhanced variability might have played a role in the altered parcellation pattern in ASD. To test this possibility, we calculated the MIs between each individual adjacency matrix of the ASD participants and the group-level similarity matrix (see the "Connectivity-based functional parcellation" section) of the ASD group when the cluster number (k) was optimally set to 8 (eight sub-regions). Similarly, the MIs were also calculated for the TD individuals using the TD group-level similarity matrix. If the parcellation pattern in each individual brain is more idiosyncratic in the ASD group, then the MI should be significantly lower in the ASD group than in the TD group. However, we did not observe a significant difference in MIs between the ASD and TD groups in either the left or right insula (left insula: t = −1.54, p = 0.13, right insula: t = 0.038, p = 0.97).
We also reasoned that the degree of idiosyncrasy in the insular parcellation pattern of an individual might be associated with his or her severity of the ASD symptoms. To explore this possibility, we first evaluated the degree of idiosyncrasy using the MI values between the adjacency matrix of each ASD individual and the group-level similarity matrix of ASD individuals. These MIs represented the discrepancy from the mean ASD parcellation pattern. Then we computed the correlation between the AQ scores and the MI values, but the correlation was not significant (left insula: r = 0.03, p = 0.85, right insula: r = −0.02, p = 0.90). We also calculated the individual MIs using the group-level similarity matrix of TD individuals, as indices for the discrepancy from the typical (mean TD) parcellation pattern. However, no significant correlation with the total AQ score was found in either the left or right insula (left insula: r = 0.04, p = 0.83, right insula: r = −0.02, p = 0.90).
Functional parcellation in the intracalcarine cortex
The same clustering methods as those used for the insula were applied to the control region of the intracalcarine cortex. MI reached the local maximum at k = 7, 8, 9, and 10, and VI reached the local minimum at k = 6 and 7 (Additional file 2: Figure S2). Following the same criteria as that for the insula, we determined an optimal k of 7. Based on visual inspection, the functional parcellation patterns at k = 7 were highly comparable between the TD and ASD groups (Additional file 3: Figure S3). Permutation tests for the volume of each sub-region revealed no significant group differences in any of the seven sub-regions (light blue region: p = 0.36, green region: p = 0.67, orange region: p = 0.55, yellow region: p = 0.58, magenta region: p = 0.55, blue region: p = 0.52, violet region: p = 0.31). The comparable parcellation pattern between groups in the control region indicates that group differences that were identified by our connectivity-based parcellation may be selective to parts of brain regions, including the left and right insula.
Replicability of the functional parcellation patterns in the insula
We examined the replicability of the functional parcellation patterns using the leave-one-fold-out method (six folds in each group) (see the "Assessment for the replicability of functional parcellation patterns" section). The means and SEM of MIs between the parcellation pattern for all participants in a group and each of the six parcellation patterns that were generated using the leave-one-fold-out method were as follows: TD: 0.88 ± 0.02 (mean ± SEM), ASD: 0.81 ± 0.01 for the left insula; TD: 0.90 ± 0.02, ASD: 0.79 ± 0.01 for the right insula. These high MI values indicate good stability for the parcellation patterns of the left and right insula in both groups.
We also examined the replicability of the group differences in the sub-regional volumes that were identified in the left and right insula. In the left insula, the anterior sub-region of the ASD group was significantly larger than the AD sub-region of the TD group (see the "Meta-analytical decoding of the left insular sub-regions" section). In the right insula, the MV sub-region was significantly larger for the ASD group than the TD group (see the "Meta-analytical decoding of the right insular sub-regions" section). For these two sub-regions, we subtracted the volume of the ASD group from that of the TD group for each of the six leave-one-fold-out steps. Because the aim of this analysis was to test whether the subtracted value is significantly lower than 0, we performed a one-sample, one-tailed t test using the six samples. As a result, we observed a significant difference for both sub-regions (the left sub-region: t = −2.88, p = 0.017; the right sub-region: t = −2.64, p = 0.023). Therefore, the group differences in the sub-regional volumes were also replicable.
In this study, we performed an automatic functional parcellation of the insular cortex in adult males with ASD by adopting a data-driven clustering method based on resting-state FC patterns of voxels. In comparison with the matched TD brain, we observed notable alterations in specific sub-regions in the left and right insular cortices of the ASD brain. These alterations were localized to the anterior sub-regions of the left insula and the middle ventral sub-region of the right insula. In the left insula, whereas the anterior sector contained two functionally differentiated sub-regions in the TD brain, only a single functional cluster was identified in the anterior sector in the ASD group. No clear group difference was observed in the control region of the intracalcarine cortex using the same resting-state FC-based parcellation method. Additional analyses using subsets of the entire dataset confirmed the high stability of the insular parcellation patterns in both groups. Meta-analytical decoding revealed the absence of a sub-region specialized for emotional and affective functions in the anterior sector in the ASD brain. The middle ventral sub-region of the right insula, which is primarily specialized for sensory and auditory-related functions, extended into the more dorsal parts of the insula, and its volume was significantly enlarged compared with the same region in the TD brain. We thus observe alterations in the functional organization of the left and right insular sub-regions, which may underlie abnormalities in the emotional/affective and sensory domains in ASD.
The left insula lacks an emotion-related sector in ASD
We found a notable difference in the anterior sector of the left insula when we carried out a group comparison of functional parcellation patterns. Specifically, whereas the anterior sector was parceled into dorsal and ventral sub-regions in the TD group, only a single cluster was identified in the anterior sector in the ASD group. Analysis of the functional profiles of the two sub-regions indicates a functional differentiation in the TD brain, such that the ventral sub-region is characterized by selective involvement in emotional and affective processes, fear, anxiety, and reward, whereas the dorsal sub-region is more strongly involved in cognitive and sensorimotor functions compared with the ventral sub-region. On the other hand, the functional profile of the single anterior cluster in the ASD brain closely matched that of the dorsal rather than the ventral sub-region in the TD brain. Therefore, we report that the left anterior insula in ASD is functionally altered and is characterized by the absence of the sub-region for emotional and affective functions.
It is notable that such an alteration in functional parcellation was observed in the left insula. Based on anatomical evidence of left-to-right asymmetry in peripheral autonomic efferent neurons and homeostatic afferent neurons, as well as a review of neuroimaging literature, an influential model of the insula posits that the left insula is associated with parasympathetic ("affiliative") functions, while the right side is associated with sympathetic ("aroused") functions. This idea provides the neurobiological foundation for the functional laterality of emotion. According to this model, the left anterior insula is more strongly involved in "positive (energy enrichment)" emotions. Consistent with this view, previous fMRI studies of neurotypical populations demonstrated the left-lateralized activation of the anterior insula during the viewing of pleased facial expressions [41] and when mothers viewed their own child [42] and experienced maternal and romantic love [43]. These observations indicate a dichotomy of emotion, so that the left anterior insula may be viewed as critical for group-oriented (affiliative) emotions as opposed to individual-oriented emotions.
Our observation of the absence of the anterior sub-region for emotional and affective functions in ASD may predict abnormalities in group-oriented emotions. Consistent with this, a recent model of ASD focuses on the concept of "social motivation" and proposes that the core problems of ASD may be explained by extremely diminished social motivation [44]. According to previous surveys, half of the adults with ASD report having no particular friends [45] and score significantly lower on items in the friendship questionnaire that concern attitudes toward interpersonal relationships, such as pleasure in close friendship and interest in people [46]. Therefore, the functional alteration in the anterior sector of the left insula may provide a neurological basis for the lack of group-oriented emotion and behavior in ASD.
The right insula has an enlarged sensory and auditory-related section in ASD
We identified a significant alteration of functional parcellation patterns in the right insula. Specifically, we observed an increase in the volume of the middle ventral sub-region, which is specialized for auditory-related and other sensory functions. Convergent evidence from anatomical, electrophysiological, brain lesion, and neuroimaging studies indicates that the insular cortex is involved in multimodal sensory functions, including somatosensory, olfactory, visceral, and auditory functions [47]. In particular, previous reports have highlighted a critical role for the insula in various aspects of auditory function, including auditory attention, music, the processing of novel auditory stimuli, temporal processing, and visual-auditory integration [47]. Our analyses of the functional profiles of insular sub-regions indicate that the middle ventral sub-region is specifically associated with such auditory and sensory functions.
Given the functional profiles of the above insular sub-regions, it is plausible that the volumetric expansion of the right middle ventral sub-region is associated with sensory abnormalities in ASD, including those in the auditory domain. Indeed, atypical sensory reactivity, including altered auditory sensitivity, has recently been recognized as a core symptom of ASD under the new diagnostic criteria of DSM-5 [48]. In addition to evidence for auditory hypersensitivity revealed by clinical questionnaires [49, 50], recent behavioral experiments have revealed a more detailed picture of the altered auditory processing in ASD. Although individuals with ASD have been shown to be impaired in some auditory functions, such as auditory attention reorienting and the modulation of auditory perception [51], it is notable that they outperform TD subjects in a number of auditory functions, such as auditory stream segregation [52] and the perception of pitch [53] and time [54]. Abnormalities in sensations other than audition have also been reported in the previous literature [55, 56]. Although we acknowledge the need for further investigation to establish the functional significance of the volumetric enlargement of the right middle ventral sub-region, our observations raise the interesting possibility that this alteration may constitute part of the neural mechanisms underlying altered sensation in ASD.
We acknowledge a few limitations of the present study. Firstly, the present study restricted recruitment to adult males. This restriction was made because several studies have reported effects of age [57] and sex [58] on functional connectivity, at least for neurotypical populations. Given the possibility that age and sex may be critical factors for the etiological and phenotypic heterogeneity of ASD [59], we need to carry out further studies targeting the specific populations excluded from the present study.
Secondly, we determined the optimal cluster number k by the combined use of VI and MI. However, the choice of the optimal k may be an inherent problem in any data-driven clustering analysis, given that a gold standard for the selection of this value has not yet been established. Indeed, previous studies have adopted a variety of methods for determining the optimal k value, such as a combination of percent agreement and VI [19], VI only [36], a validation indicator [17], and Dice's coefficient between test and re-test data [33]. Among all of the possible measures, we selected MI and VI, because the combined use of the two measures allows for the estimation of three critical features of clustering solutions: similarity, dissimilarity, and stability. Although we acknowledge some uncertainty in our analysis, we note that our method unequivocally determined the optimal k for each hemisphere and that k = 8 was optimal even when only the TD data were used (Additional file 4: Figure S4). These results indicate the validity and robustness of our method, at least for the current dataset.
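To make the "similarity" and "dissimilarity" criteria concrete, the following is a minimal sketch, not the authors' code, of computing MI and VI (Meilă 2007) between two hard parcellations of the same voxels, such as those obtained from split-half data; the labels below are illustrative.

```python
# Minimal sketch (assumed implementation, not the study's pipeline): MI and
# VI between two clustering solutions over the same set of voxels.
# VI(A, B) = H(A) + H(B) - 2 * MI(A, B); lower VI means more similar.
import numpy as np

def entropy(labels):
    """Shannon entropy (in nats) of a hard cluster assignment."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def mutual_information(a, b):
    """MI between two labelings a, b of the same voxels."""
    n, mi = len(a), 0.0
    for i in np.unique(a):
        for j in np.unique(b):
            p_ij = np.sum((a == i) & (b == j)) / n
            if p_ij > 0:
                mi += p_ij * np.log(p_ij / (np.mean(a == i) * np.mean(b == j)))
    return mi

def variation_of_information(a, b):
    return entropy(a) + entropy(b) - 2.0 * mutual_information(a, b)

# Hypothetical split-half labelings for a candidate k = 8 solution.
rng = np.random.default_rng(0)
half1 = rng.integers(0, 8, size=1000)
noise = rng.integers(0, 8, size=1000)
half2 = np.where(rng.random(1000) < 0.9, half1, noise)  # mostly consistent
print(mutual_information(half1, half2), variation_of_information(half1, half2))
```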
Thirdly, in contrast to the task-related fMRI design, it may be argued that a resting-state FC study alone does not allow us to directly test hypotheses about the relationship between particular brain regions and their functions. To overcome this general limitation of resting-state FC studies, we adopted the meta-analytical decoding method of Neurosynth, which uses meta-analysis of co-activation data in the fMRI literature and performs reverse inference regarding the functions of the sub-region of interest from an FC pattern seeded in that sub-region. Although this approach has been gaining credibility in recent fMRI studies [17, 60], we still acknowledge the need for hypothesis-driven approaches, such as task-related fMRI studies, to establish the links between alterations in specific insular sub-regions and abnormal functions in the ASD brain. The present findings based on data-driven approaches are expected to provide specific hypotheses for designing such hypothesis-driven studies in the future.
Fourthly, only one control region was used to show that the functional alterations that we observed in the organization of the insula are not present in functionally intact brain regions in individuals with ASD. Although this remains a limitation of the study, at least in the intracalcarine cortex we were able to show that the functional parcellation pattern in the ASD group was highly comparable to that of the TD group, both on visual inspection and in the volumes of the corresponding sub-regions, as assessed using a permutation test. Although more control regions may be needed, using the intracalcarine cortex as a control region strengthens the specificity of our findings for the insula.
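For illustration, here is a minimal sketch, not the study's actual procedure, of a label-permutation test for a between-group volume difference. Note one simplifying assumption: the real analysis re-derives the parcellation for each permuted group labeling (Additional file 1: Figure S1), whereas the sketch below assumes per-subject sub-region volumes are already available.

```python
# Minimal sketch (assumption: per-subject sub-region volumes are available).
import numpy as np

def permutation_test(vol_td, vol_asd, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([vol_td, vol_asd])
    n_td = len(vol_td)
    observed = np.mean(vol_td) - np.mean(vol_asd)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = np.mean(perm[:n_td]) - np.mean(perm[n_td:])
    p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p

# Hypothetical volumes (in voxels) for the right middle ventral sub-region.
td = np.array([210, 195, 220, 205, 200, 215], dtype=float)
asd = np.array([260, 245, 255, 270, 240, 250], dtype=float)
print(permutation_test(td, asd))
```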
To conclude, the present study used data-driven clustering analysis of multi-voxel resting-state FC data to reveal altered functional parcellation in the insular cortex of the ASD brain. With the combined use of meta-analytic decoding analysis, we observed localized alterations in sub-regional organization on each side of the insula in ASD. In the left insula, two anterior sub-regions were merged into a single sub-region in which selective involvement in affective and emotional functions was absent. In the right insula, the middle ventral sub-region for auditory-related and sensory functions was significantly increased in volume. Such alterations in the functional organization of the insular cortex may constitute neural abnormalities that underlie emotional/affective and sensory problems in ASD.
AD: Anterior dorsal
AFNI: Analysis of Functional NeuroImages
AQ: Autism spectrum quotient
AV: Anterior ventral
DISCO: Diagnostic Interview for Social and Communicative Disorders
DVARS: D referring to the temporal derivative and VARS referring to the root mean squared variance
FC: Functional connectivity
FD: Framewise displacement
JART: Japanese version of the National Adult Reading Test
MD: Middle dorsal
MV: Middle ventral
PD: Posterior dorsal
PV: Posterior ventral
rs-fMRI: Resting-state functional magnetic resonance imaging
SPM: Statistical Parametric Mapping
TE: Echo time
TD: Typically developed
TR: Repetition time
VI: Variation of information
WAIS: Wechsler Adult Intelligence Scale
This study is the result of "Development of BMI Technologies for Clinical Application" carried out under the Strategic Research Program for Brain Sciences by the Ministry of Education, Culture, Sports, Science and Technology of Japan. This work was also supported by the Japan Society for the Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (C) (16K10236 to T.Y.), a Grant-in-Aid for Young Scientists (B) (16K17363 to T.I.), and a Grant-in-Aid for Scientific Research on Innovative Areas (23118003; Adolescent Mind & Self-Regulation to R.H.) from the Ministry of Education, Culture, Sports, Science, and Technology of Japan.
TY and TI conceived of the study, performed the analysis, and drafted and revised the manuscript. TY, TI, MN, HW, HO, CK, and RH participated in the data acquisition. MK, TY, MN, HW, HO, and CK were in charge of the clinical assessment. NK participated in the design of the study and supervised the study. MN, MK, HW, HO, CK, and NK revised the manuscript for important intellectual content. RH participated in its design, coordination, and analysis and helped to write and revise the manuscript. All authors read and approved the final manuscript.
Additional file 1: Figure S1. The statistical test for between-group difference in the sub-regional volume using the permutation procedure. The histograms illustrate the null distribution of volume differences between groups (in voxels) induced by the permutation procedure in the left anterior dorsal (AD) sub-region (the anterior sector in ASD) (a) and in the right middle ventral (MV) sub-region (b). The red dashed vertical line indicates the observed volume differences between correctly labeled TD and ASD groups. The negative values indicate that the sub-region is smaller in TD brain compared to the corresponding sub-region in ASD. (PDF 650 kb)
Additional file 2: Figure S2. Determination of the optimal number of clusters based on VI and MI in intracalcarine cortex. The intracalcarine cortex was selected as a control region. The VI and MI values are shown for every clustering solution for k values ranging from 2 to 10. Arrows indicate either local minima of VI or local maxima of MI. Dashed lines denote the optimal number of solutions as determined using both VI and MI. The error bars denote standard errors of the mean for 100 repetitions of the split-half procedure (see the "Estimation of the optimal number of clusters" section). "n.s." indicates no statistically significant difference between points. (PDF 334 kb)
Additional file 3: Figure S3. The patterns of functional parcellation in the intracalcarine cortex in TD and ASD. Figures show the parcellation patterns at the optimal cluster number (k) of 7. Each figure is presented in axial and magnified axial views. The color of each intracalcarine sub-region reflects the color of the corresponding sub-region in the TD and ASD groups. Note the highly comparable parcellation patterns between groups. (PDF 355 kb)
Additional file 4: Figure S4. VI and MI values for clustering solutions (k = 2 to 10) using only TD participants. The dashed line denotes the optimal number of solutions based on the same criteria as in Fig. 2. The error bars are standard errors of the mean. Each kind of arrow points to the optimal number of solutions based on the "similarity" and "dissimilarity" criteria. "n.s." indicates that the MI or VI values are not significantly different between the 2 bracketed points. (PDF 1125 kb)
Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-karasuyama, Setagaya-ku, Tokyo, Japan
ATR Brain Information Communication Research Laboratory Group, 2-2-2 Hikaridai, Seika-cho, Sorakugun, Kyoto, Japan
Kinko Hospital, Kanagawa Psychiatric Center, 2-5-1 Serigaya, Yokohama, Kanagawa, Japan
Child Mental Health-care Center, Fukushima University, 1 Kanayagawa, Fukushima-shi, Fukushima, Japan
Department of Child Neuropsychiatry, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongou, Bunkyo-ku, Tokyo, Japan
Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo, Japan
Research Center for Language, Brain and Genetics, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo, Japan
Menon V. Large-scale brain networks and psychopathology: a unifying triple network model. Trends Cogn Sci. 2011;15:483–506.
Cauda F, Geda E, Sacco K, D'Agata F, Duca S, Geminiani G, Keller R. Grey matter abnormality in autism spectrum disorder: an activation likelihood estimation meta-analysis study. J Neurol Neurosurg Psychiatry. 2011;82:1304–13.
Ecker C, Suckling J, Deoni SC, Lombardo MV, Bullmore ET, Baron-Cohen S, Catani M, Jezzard P, Barnes A, Bailey AJ, et al. Brain anatomy and its relationship to behavior in adults with autism spectrum disorder: a multicenter magnetic resonance imaging study. Arch Gen Psychiatry. 2012;69:195–209.
Kosaka H, Omori M, Munesue T, Ishitobi M, Matsumura Y, Takahashi T, Narita K, Murata T, Saito DN, Uchiyama H, et al. Smaller insula and inferior frontal volumes in young adults with pervasive developmental disorders. Neuroimage. 2010;50:1357–63.
Di Martino A, Ross K, Uddin LQ, Sklar AB, Castellanos FX, Milham MP. Functional brain correlates of social and nonsocial processes in autism spectrum disorders: an activation likelihood estimation meta-analysis. Biol Psychiatry. 2009;65:63–74.
Di Martino A, Yan CG, Li Q, Denio E, Castellanos FX, Alaerts K, Anderson JS, Assaf M, Bookheimer SY, Dapretto M, et al. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism. Mol Psychiatry. 2014;19:659–67.
Ebisch SJ, Gallese V, Willems RM, Mantini D, Groen WB, Romani GL, Buitelaar JK, Bekkering H. Altered intrinsic functional connectivity of anterior and posterior insula regions in high-functioning participants with autism spectrum disorder. Hum Brain Mapp. 2011;32:1013–28.
von dem Hagen EA, Stoyanova RS, Baron-Cohen S, Calder AJ. Reduced functional connectivity within and between 'social' resting state networks in autism spectrum conditions. Soc Cogn Affect Neurosci. 2013;8:694–701.
Augustine JR. The insular lobe in primates including humans. Neurol Res. 1985;7:2–10.
Augustine JR. Circuitry and functional aspects of the insular lobe in primates including humans. Brain Res Brain Res Rev. 1996;22:229–44.
Mesulam MM, Mufson EJ. Insula of the old world monkey. I. Architectonics in the insulo-orbito-temporal component of the paralimbic brain. J Comp Neurol. 1982;212:1–22.
Kurth F, Eickhoff SB, Schleicher A, Hoemke L, Zilles K, Amunts K. Cytoarchitecture and probabilistic maps of the human posterior insular cortex. Cereb Cortex. 2010;20:1448–61.
Passingham RE, Stephan KE, Kotter R. The anatomical basis of functional localization in the cortex. Nat Rev Neurosci. 2002;3:606–16.
Uddin LQ, Kinnison J, Pessoa L, Anderson ML. Beyond the tripartite cognition-emotion-interoception model of the human insular cortex. J Cogn Neurosci. 2014;26:16–27.
Cauda F, D'Agata F, Sacco K, Duca S, Geminiani G, Vercelli A. Functional connectivity of the insula in the resting brain. Neuroimage. 2011;55:8–23.
Cauda F, Costa T, Torta DM, Sacco K, D'Agata F, Duca S, Geminiani G, Fox PT, Vercelli A. Meta-analytic clustering of the insular cortex: characterizing the meta-analytic connectivity of the insula when involved in active tasks. Neuroimage. 2012;62:343–55.
Chang LJ, Yarkoni T, Khaw MW, Sanfey AG. Decoding the role of the insula in human cognition: functional parcellation and large-scale reverse inference. Cereb Cortex. 2013;23:739–49.
Yarkoni T, Poldrack RA, Nichols TE, Van Essen DC, Wager TD. Large-scale automated synthesis of human functional neuroimaging data. Nat Methods. 2011;8:665–70.
Kelly C, Toro R, Di Martino A, Cox CL, Bellec P, Castellanos FX, Milham MP. A convergent functional architecture of the insula emerges across imaging modalities. Neuroimage. 2012;61:1129–42.
Itahashi T, Yamada T, Watanabe H, Nakamura M, Jimbo D, Shioda S, Toriizuka K, Kato N, Hashimoto R. Altered network topologies and hub organization in adults with autism: a resting-state fMRI study. PLoS One. 2014;9:e94115.
Watanabe H, Nakamura M, Ohno T, Itahashi T, Tanaka E, Ohta H, Yamada T, Kanai C, Iwanami A, Kato N, Hashimoto R. Altered orbitofrontal sulcogyral patterns in adult males with high-functioning autism spectrum disorders. Soc Cogn Affect Neurosci. 2014;9:520–8.
Yamada T, Ohta H, Watanabe H, Kanai C, Tani M, Ohno T, Takayama Y, Iwanami A, Kato N, Hashimoto R. Functional alterations in neural substrates of geometric reasoning in adults with high-functioning autism. PLoS One. 2012;7:e43220.
Wakabayashi A, Baron-Cohen S, Wheelwright S, Tojo Y. The autism-spectrum quotient (AQ) in Japan: a cross-cultural comparison. J Autism Dev Disord. 2006;36:263–70.
Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113.
Matsuoka K, Uno M, Kasai K, Koyama K, Kim Y. Estimation of premorbid IQ in individuals with Alzheimer's disease using Japanese ideographic script (Kanji) compound words: Japanese version of National Adult Reading Test. Psychiatry Clin Neurosci. 2006;60:332–9.
Cox RW. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput Biomed Res. 1996;29:162–73.
Behzadi Y, Restom K, Liau J, Liu TT. A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. Neuroimage. 2007;37:90–101.
Carp J. Optimizing the order of operations for movement scrubbing: comment on Power et al. Neuroimage. 2013;76:436–8.
Power JD, Barnes KA, Snyder AZ, Schlaggar BL, Petersen SE. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. Neuroimage. 2012;59:2142–54.
Di Martino A, Zuo XN, Kelly C, Grzadzinski R, Mennes M, Schvarcz A, Rodman J, Lord C, Castellanos FX, Milham MP. Shared and distinct intrinsic functional network centrality in autism and attention-deficit/hyperactivity disorder. Biol Psychiatry. 2013;74:623–32.
Zuo XN, Ehmke R, Mennes M, Imperati D, Castellanos FX, Sporns O, Milham MP. Network centrality in the human functional connectome. Cereb Cortex. 2012;22:1862–75.
Barnes KA, Nelson SM, Cohen AL, Power JD, Coalson RS, Miezin FM, Vogel AC, Dubis JW, Church JA, Petersen SE, Schlaggar BL. Parcellation in left lateral parietal cortex is similar in adults and children. Cereb Cortex. 2012;22:1148–58.
Nebel MB, Joel SE, Muschelli J, Barber AD, Caffo BS, Pekar JJ, Mostofsky SH. Disruption of functional organization within the primary motor cortex in children with autism. Hum Brain Mapp. 2014;35:567–80.
Nelson SM, Cohen AL, Power JD, Wig GS, Miezin FM, Wheeler ME, Velanova K, Donaldson DI, Phillips JS, Schlaggar BL, Petersen SE. A parcellation scheme for human left lateral parietal cortex. Neuron. 2010;67:156–70.
Meilă M. Comparing clusterings—an information based distance. J Multivar Anal. 2007;98:873–95.
Kahnt T, Chang LJ, Park SQ, Heinzle J, Haynes JD. Connectivity-based parcellation of the human orbitofrontal cortex. J Neurosci. 2012;32:6240–50.
Kelly C, Uddin LQ, Shehzad Z, Margulies DS, Castellanos FX, Milham MP, Petrides M. Broca's region: linking human brain functional connectivity data and non-human primate tracing anatomy studies. Eur J Neurosci. 2010;32:383–98.
Dakin S, Frith U. Vagaries of visual perception in autism. Neuron. 2005;48:497–507.
Gliga T, Bedford R, Charman T, Johnson MH, Team B. Enhanced visual search in infancy predicts emerging autism symptoms. Curr Biol. 2015;25:1727–30.
Jabbi M, Swart M, Keysers C. Empathy for positive and negative emotions in the gustatory cortex. Neuroimage. 2007;34:1744–53.
Leibenluft E, Gobbini MI, Harrison T, Haxby JV. Mothers' neural activation in response to pictures of their children and other children. Biol Psychiatry. 2004;56:225–32.
Bartels A, Zeki S. The neural correlates of maternal and romantic love. Neuroimage. 2004;21:1155–66.
Chevallier C, Kohls G, Troiani V, Brodkin ES, Schultz RT. The social motivation theory of autism. Trends Cogn Sci. 2012;16:231–9.
Howlin P, Goode S, Hutton J, Rutter M. Adult outcome for children with autism. J Child Psychol Psychiatry. 2004;45:212–29.
Baron-Cohen S, Wheelwright S. The Friendship Questionnaire: an investigation of adults with Asperger syndrome or high-functioning autism, and normal sex differences. J Autism Dev Disord. 2003;33:509–17.
Bamiou D-E, Musiek FE, Luxon LM. The insula (Island of Reil) and its role in auditory processing. Brain Res Rev. 2003;42:143–54.
American Psychiatric Association. Diagnostic and statistical manual of mental disorders. 5th ed. Washington: American Psychiatric Publisher; 2013.
Takayama Y, Hashimoto R, Tani M, Kanai C, Yamada T, Watanabe H, Ono T, Kato N, Iwanami A. Standardization of the Japanese version of the Glasgow Sensory Questionnaire (GSQ). Res Autism Spectr Disord. 2014;8:347–53.
Tavassoli T, Hoekstra RA, Baron-Cohen S. The Sensory Perception Quotient (SPQ): development and validation of a new sensory questionnaire for adults with and without autism. Mol Autism. 2014;5:29.
Orekhova EV, Stroganova TA. Arousal and attention re-orienting in autism spectrum disorders: evidence from auditory event-related potentials. Front Hum Neurosci. 2014;8:34.
Lin IF, Yamada T, Komine Y, Kato N, Kashino M. Enhanced segregation of concurrent sounds with similar spectral uncertainties in individuals with autism spectrum disorder. Sci Rep. 2015;5:10524.
Bonnel A, Mottron L, Peretz I, Trudel M, Gallun E, Bonnel A-M. Enhanced pitch sensitivity in individuals with autism: a signal detection analysis. J Cogn Neurosci. 2003;2:226–35.
Stewart ME, Griffiths TD, Grube M. Autistic traits and enhanced perceptual representation of pitch and time. J Autism Dev Disord. 2015. Epub ahead of print. PMID: 26189179.
Duerden EG, Taylor MJ, Lee M, McGrath PA, Davis KD, Roberts SW. Decreased sensitivity to thermal stimuli in adolescents with autism spectrum disorder: relation to symptomatology and cognitive ability. J Pain. 2015;16:463–71.
Yasuda Y, Hashimoto R, Nakae A, Kang H, Ohi K, Yamamori H, Fujimoto M, Hagihira S, Takeda M. Sensory cognitive abnormalities of pain in autism spectrum disorder: a case-control study. Ann Gen Psychiatry. 2016;15:8.
Dosenbach NU, Nardos B, Cohen AL, Fair DA, Power JD, Church JA, Nelson SM, Wig GS, Vogel AC, Lessov-Schlaggar CN, et al. Prediction of individual brain maturity using fMRI. Science. 2010;329:1358–61.
Filippi M, Valsasina P, Misci P, Falini A, Comi G, Rocca MA. The organization of intrinsic brain activity differs between genders: a resting-state fMRI study in a large cohort of young healthy subjects. Hum Brain Mapp. 2013;34:1330–43.
Ecker C, Murphy D. Neuroimaging in autism—from basic science to translational research. Nat Rev Neurol. 2014;10:82–91.
Pauli WM, O'Reilly RC, Tal Y, Wager TD. Regional specialization within the human striatum for diverse psychological functions. Proc Natl Acad Sci U S A. 2016;113:1907–12.
May 2019, 18(3): 1303-1332. doi: 10.3934/cpaa.2019063
Solvability of nonlocal systems related to peridynamics
Moritz Kassmann 1, Tadele Mengesha 2, and James Scott 2
Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, D-33501 Bielefeld, Germany
Department of Mathematics, The University of Tennessee Knoxville, 227 Ayres Hall, 1403 Circle Drive, Knoxville, TN 37996, USA
Received May 2018; Revised May 2018; Published November 2018
Fund Project: M. Kassmann acknowledges the support of the German Science Foundation through CRC 1283. T. Mengesha and J. Scott acknowledge the support of the U.S. NSF under grant DMS-1615726.
In this work, we study the Dirichlet problem associated with a strongly coupled system of nonlocal equations. The system of equations comes from a linearization of a model of peridynamics, a nonlocal model of elasticity. It is a nonlocal analogue of the Navier-Lamé system of classical elasticity. The leading operator is an integro-differential operator characterized by a distinctive matrix kernel that couples differences of components of a vector field. The paper's main contributions are proving well-posedness of the system of equations and demonstrating optimal local Sobolev regularity of solutions. We apply Hilbert space techniques to prove well-posedness. The result holds for systems associated with kernels that give rise to non-symmetric bilinear forms. The regularity result holds for systems with symmetric kernels that may be supported only on a cone. For some specific kernels, the associated energy spaces are shown to coincide with standard fractional Sobolev spaces.
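For orientation, a representative form of such an operator, stated here as a common choice in the linearized bond-based peridynamic literature rather than the paper's exact definition, is $ \mathcal{L}\mathbf{u}(x) = \mathrm{p.v.}\int_{\mathbb{R}^{d}} \rho(y-x)\, \frac{(y-x)\otimes (y-x)}{|y-x|^{2}}\, (\mathbf{u}(y)-\mathbf{u}(x))\, dy $, where the nonnegative kernel $ \rho $ may be non-symmetric or supported only on a cone; the projection matrix $ (y-x)\otimes (y-x)/|y-x|^{2} $ is what couples differences of the components of the vector field $ \mathbf{u} $.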
Keywords: Nonlocal coupled system, peridynamics, fractional Korn's inequality, well posedness, regularity.
Mathematics Subject Classification: Primary: 45F15, 45E10, 35B65; Secondary: 74Bxx.
Citation: Moritz Kassmann, Tadele Mengesha, James Scott. Solvability of nonlocal systems related to peridynamics. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1303-1332. doi: 10.3934/cpaa.2019063
Saturday–Tuesday, February 13–16, 2010; Washington, DC
Session Q10: Plasma Physics
Sponsoring Units: DPP
Chair: James Drake, University of Maryland
Room: Maryland B
Q10.00001: Well Confined High Density Plasmas as Neutron Sources
A. Bianchi, B. Coppi
The physics of high density plasmas ($n_{0}\simeq5\times10^{14}-10^{15}$ cm$^{-3}$) that can be well confined in high magnetic field, compact machines, and that can be developed into interesting neutron sources is discussed. Ignitor [1], a machine following a line of which Alcator was the prototype, that has been conceived and designed in order to demonstrate ignition of a D-T burning plasma, can produce up to $3\times10^{19}$ n/sec, although with too low a duty cycle. Therefore, a non-igniting, differently conceived device with an adequate duty cycle is being analyzed. An important element for this is the development of cables involving the recently discovered MgB$_{2}$ superconducting material, for which the He-gas cryogenic system designed for Ignitor can be adopted. The two largest poloidal (vertical) field coils for Ignitor are in fact designed with this kind of cable. We propose extending the adoption of this material to other magnet systems through a hybrid solution, in contrast to the pure copper solution adopted for Ignitor, taking advantage of the higher current densities that MgB$_{2}$ can sustain and of the structural characteristics of the relevant cables. [1] B. Coppi, et al., Paper FT/P3-23 (Publ. I.A.E.A., Vienna 2008)
Q10.00002: The Effect of SF$_{6}$ dilution in an Argon plasma
Sudip Koirala, Matt Gordon
Plasma etching is widely used in semiconductor industries. There have been extensive studies of the dilution of rare gases; however, few studies address the dilution of electronegative gases. In this work, the SF$_{6}$ content is varied from 5% to 60% in an Ar plasma in a deep reactive ion etching system. A Langmuir probe is used to measure electron temperature (T$_{e}$), electron density (n$_{e}$), and the electron energy distribution function (eedf). T$_{e}$ decreases monotonically with increasing SF$_{6}$ at first, and then increases for SF$_{6}$ content greater than 20%. This increase is attributed to the loss of low energy electrons in attachment and of high energy electrons in excitation and ionization. As the content of SF$_{6}$ is increased above 20%, the dissociation of SF$_{6}$ increases and most of the low energy electrons are lost in attachment; hence the average electron temperature increases. n$_{e}$ decreases by an order of magnitude as the SF$_{6}$ dilution is increased from 5% to 60%. The eedf shows that the distribution shifts toward higher energies with increasing SF$_{6}$ content, because of the depletion of low energy electrons.
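As background to how an electron temperature can be reduced from a measured eedf, here is a minimal sketch (an assumed post-processing step, not the authors' analysis) computing the effective temperature $T_{eff} = (2/3)\langle\varepsilon\rangle$ from a distribution tabulated on an energy grid; the Maxwellian test distribution is hypothetical.

```python
# Minimal sketch: effective electron temperature from a measured EEDF,
# using T_eff = (2/3) * <energy>. Units: energies in eV, T_eff in eV.
import numpy as np

def effective_te(energy_eV, eedf):
    """energy_eV: energy grid; eedf: distribution values (arbitrary units)."""
    norm = np.trapz(eedf, energy_eV)
    mean_energy = np.trapz(energy_eV * eedf, energy_eV) / norm
    return (2.0 / 3.0) * mean_energy

# Sanity check with a hypothetical Maxwellian EEDF at T = 2 eV:
# f(e) ~ sqrt(e) * exp(-e / T), for which <e> = 1.5 T, so T_eff ~ 2 eV.
e = np.linspace(0.01, 40.0, 2000)
f = np.sqrt(e) * np.exp(-e / 2.0)
print(effective_te(e, f))
```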
Q10.00003: Recent Discoveries on the Plasma Environment of Mars as seen by the Radar Sounder on MARS EXPRESS Spacecraft
Firdevs Duru, Donald A. Gurnett, David D. Morgan
Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS), a low-frequency radar on Mars Express, is designed to study the subsurface and ionosphere of Mars. Here, we give an overview of the plasma environment of Mars as seen by MARSIS. With MARSIS, it is possible to obtain electron densities from both remote sounding and local electron plasma oscillations. Remote sounding of the ionosphere revealed several types of echoes, including oblique echoes that arise from upward bulges in the ionosphere in regions where the crustal magnetic field of Mars is strong and nearly vertical. The observed electron density profiles are in agreement with the Chapman photo-equilibrium model. Local density data revealed steep, transient electron density gradients similar to the ionopause commonly observed at Venus. They also showed that, at altitudes above 300 km, the electron density on the dayside is almost constant within a given altitude range and increases exponentially with increasing altitude at a fixed solar zenith angle range.
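For reference, the Chapman photo-equilibrium profile invoked above is commonly written (in its standard Chapman-$\alpha $ form, given here as textbook background rather than the exact parameterization used for MARSIS) as $ n_{e}(z) = n_{0}\exp\{\frac{1}{2}[1 - z' - \sec\chi\, e^{-z'}]\} $ with $ z' = (z - z_{0})/H $, where $ n_{0} $ and $ z_{0} $ are the peak electron density and peak altitude, $ H $ is the neutral scale height, and $ \chi $ is the solar zenith angle.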
Q10.00004: Passive, precise plasma jet experiments on the sky
Philipp Kronberg
The typical plasma parameter space has been established for the most luminous, collimated jets in the Universe. They are magnetically dominated energy pipes produced by super-massive black holes, with energy flows in excess of $\sim $10$^{42}$ erg s$^{-1}$, over supra-galaxy scales. I discuss these jets with examples, and conclude that all current radio telescopes fall short of the resolution needed to provide the important plasma diagnostics in these systems. The solution is within technological reach if the full imaging resolution of the Enhanced VLA (EVLA) were increased from the current 35 km to a few hundred km. This can be achieved with additional telescopes (ca. 6) in the State of New Mexico. The cost of doing this, approximately $200M, is modest when matched against the potential benefits to plasma and fusion science.
Q10.00005: Observational signatures of sub-Larmor scale magnetic fields in astrophysical objects and HEAD lab experiments
An extensive body of studies indicates that small-scale (sub-Larmor-scale) magnetic turbulence is produced at relativistic shocks, in reconnection events, and in other high-energy density environments. Here we present a general description of radiation produced by relativistic electrons moving in such fields and stress its non-synchrotron spectral characteristics. We illustrate the results with spectral data from gamma-ray burst observations.
Q10.00006: Stimulated Brillouin Scattering from OMEGA gas-filled hohlraums and NIF hohlraums with gold-boron layers
Richard Berger, L. Divol, D. Froula, S. Glenzer, J. Kline, P. Michel, D. Callahan, D. Hinkel, R. London, N. Meezan, L. Suter, E. Williams
The long laser pulse length required to achieve ignition on the National Ignition Facility (NIF) creates a long scalelength, hot, high-Z plasma inside the hohlraum, from which stimulated Brillouin scatter (SBS) is predicted to be greater than 10%. We predicted that adding $\sim $40% boron to a thin layer of the high-Z wall reduces the predicted SBS to less than 1%. In the past few years, a number of experiments at the OMEGA laser facility have tested elements of the physics of SBS in gold-boron plasmas and the modeling tools. The damping rates for plasmas with various gold-boron mixtures were duplicated with mixtures of CO$_{2}$ and hydrocarbon gases. Use of the rad-hydro code HYDRA for bulk plasma parameters and the paraxial wave solver pF3d allowed the measured levels of stimulated Brillouin backscatter in the OMEGA experiments to be predicted in advance of the experiments. Although the SBS increases with the average gain as expected, closer examination shows that, for the same gain, plasmas with very weakly damped ion acoustic waves Brillouin-scatter light more strongly than plasmas with more strongly damped ion acoustic waves. The pF3d simulations also show this behavior. SBS from NIF hohlraums with gold-boron layers will be presented.
Q10.00007: Numerical Simulations of Pair Production by Ultraintense Lasers
Edison Liang, Alexander Henderson, Pablo Yepes, Hui Chen, Scott Wilks
Using a combination of particle-in-cell plasma kinetic codes and the CERN GEANT4 code for pair production, we systematically study pair production by ultraintense lasers irradiating gold targets. We will present results for the pair production yield and spectra as a function of laser and target parameters, and compare simulation results with recent data from Titan and other laser experiments. Using these results, we will design future experiments to optimize the pair yield and pair density. Potential applications to both laboratory astrophysics and high density positronium physics will be discussed.
Q10.00008: Monte Carlo Mathematical Modeling and Analysis of Optogalvanic Waveforms for 1s$_{5}$-2p$_{j}$ (j = 7,8,9) Transitions of Neon in a Hollow Cathode Discharge
Kayode Ogungbemi, Xianming Han, Prabhakar Misra
The laser optogalvanic (OG) waveforms associated with the 1s$_{5}$ -- 2p$_{j}$ (j = 7,8,9) transitions of neon in a hollow cathode discharge lamp have been investigated as a function of discharge current (2.0 -- 19.0 mA). We have refined a mathematical model for determining the amplitudes, decay constants, and time constants associated with these transitions. Monte Carlo least-squares fitting of these waveforms has helped determine the decay rate constants (a$_{i}$), exponential rates (b$_{i}$), and time constant ($\tau $) parameters associated with the evolution of the OG signals. In our investigation of the 1s$_{5}$ -- 2p$_{j}$ (j = 7,8,9) optogalvanic transitions of neon, we have measured the intensity of each transition (3.65*10$^{-28}$, 1.43*10$^{-27}$ and 5.82*10$^{-27}$ cm$^{-1}$/mole-cm$^{-2}$, respectively), which in turn has provided insight into the excitation temperature of the plasma (estimated to be 2847$\pm $285 K). The population distribution of the excited neon atoms in the pertinent energy levels has also been estimated using the Heisenberg Uncertainty Principle.
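To illustrate the fitting strategy described above, here is a minimal sketch, with illustrative parameter ranges rather than values from the study, of Monte Carlo least-squares fitting of a waveform modeled as a sum of two decaying exponentials, s(t) = a$_{1}$ exp(-b$_{1}$ t) + a$_{2}$ exp(-b$_{2}$ t), where random restarts of the initial guess supply the Monte Carlo element.

```python
# Minimal sketch: random-restart (Monte Carlo) least-squares fit of an
# optogalvanic waveform modeled as a sum of two damped exponentials.
import numpy as np
from scipy.optimize import least_squares

def model(p, t):
    a1, b1, a2, b2 = p
    return a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)

def fit_waveform(t, signal, n_restarts=200, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        # Illustrative ranges: amplitudes in [-1, 1], rates in (0.1, 10].
        x0 = np.array([rng.uniform(-1, 1), rng.uniform(0.1, 10),
                       rng.uniform(-1, 1), rng.uniform(0.1, 10)])
        fit = least_squares(lambda p: model(p, t) - signal, x0)
        if best is None or fit.cost < best.cost:
            best = fit
    return best.x, best.cost

# Hypothetical waveform: known parameters plus noise, then recovered by fit.
t = np.linspace(0.0, 5.0, 400)
truth = np.array([0.8, 1.5, -0.4, 4.0])
noisy = model(truth, t) + 0.01 * np.random.default_rng(1).normal(size=t.size)
params, cost = fit_waveform(t, noisy)
print(params)
```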
Anthony Suen
Department of Mathematics and Information Technology, The Education University of Hong Kong, 10 Lo Ping Road, Tai Po, New Territories, Hong Kong, China
* Corresponding author: Anthony Suen
We study the low-energy solutions to the 3D compressible Navier-Stokes-Poisson equations. We first obtain the existence of smooth solutions with small $ L^2 $-norm and essentially bounded densities. No smallness assumption is imposed on the $ H^4 $-norm of the initial data. Using a compactness argument, we further obtain the existence of weak solutions that may have discontinuities across hypersurfaces in $ \mathbb R^3 $. We also provide a blow-up criterion for solutions in terms of the $ L^\infty $-norm of the density.
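For reference, the system in question takes the following standard isentropic form (up to the normalization and sign conventions of the Poisson coupling, which vary across the literature): $ \rho_{t} + \operatorname{div}(\rho u) = 0 $, $ (\rho u)_{t} + \operatorname{div}(\rho u \otimes u) + \nabla P(\rho) = \mu\Delta u + (\mu + \lambda)\nabla\operatorname{div} u + \rho\nabla\Phi $, $ \Delta\Phi = \rho - \bar{\rho} $, where $ \rho $ is the density, $ u $ the velocity, $ P(\rho) $ the pressure, $ \mu, \lambda $ the viscosity constants, and $ \Phi $ the self-consistent potential.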
Keywords: Navier-Stokes-Poisson equations, compressible flow, blow-up criteria.
Mathematics Subject Classification: Primary: 35Q30; Secondary: 76N10.
Citation: Anthony Suen. Existence and a blow-up criterion of solution to the 3D compressible Navier-Stokes-Poisson equations with finite energy. Discrete & Continuous Dynamical Systems - A, 2020, 40 (3) : 1775-1798. doi: 10.3934/dcds.2020093
|
CommonCrawl
|
Thermal maturity structures in an accretionary wedge by a numerical simulation
Ayumu Miyakawa (ORCID: orcid.org/0000-0001-8089-6406), Masataka Kinoshita, Yohei Hamada & Makoto Otsubo
Progress in Earth and Planetary Science volume 6, Article number: 8 (2019)
This study investigates the thermal maturity structure of an accretionary wedge, together with the thermal history of sediments during wedge formation, using a numerical simulation. The thermal maturity, described in terms of vitrinite reflectance, is determined from the temperature and the duration of exposure along the particle trajectories within the accretionary wedge. The study reveals variability in thermal maturity even among sediments that originate at the same initial depth and under identical thermal conditions. We propose two end-member pathways of sediment movement during wedge growth: a shallow, low-thermal-maturity pathway and a deep, high-thermal-maturity pathway. Sediments on the shallow pathway, which move into the shallow portion of the wedge during accretion, rarely experience high temperatures, so their thermal maturity remains low. In contrast, sediments subducted into the deep portion of the wedge experience high temperatures and acquire high thermal maturity along the deep pathway. At the same time, geological deformation events, such as faulting, create steps in thermal maturity. A small step is formed by frontal thrusting and can be preserved along the shallow, low-thermal-maturity pathway; along the deep, high-thermal-maturity pathway, however, the step is overprinted and disappears. A large step in thermal maturity is formed by long-term displacement along an out-of-sequence thrust (OOST) in the deep portion of the wedge.
Previous studies have shown that the relative timing of peak heating within an accretionary wedge, particularly with respect to specific stages of deformation, can be ascertained by determining whether any discordance exists between megascopic structural geometries and paleothermal gradients. Thermal history (e.g., peak heating and its duration) is recorded in organic material in the form of thermal maturity. Joint observations of recorded thermal maturity and geological structures (e.g., intrusions and faults) have improved our understanding of the formation of geological structures and associated igneous activity (Underwood et al. 1992), as well as of the duration of thrusting (e.g., Sakaguchi 1996; Yamamoto et al. 2017). To interpret such observations, various conceptual models have been proposed for the relation between recorded thermal maturity and geological structures (e.g., Underwood et al. 1993). The temporal development of thermal maturity and its relationship to the associated structures in accretionary wedges has been investigated to date (Barr and Dahlen 1989; Barr et al. 1991; Beyssac et al. 2007; Chen et al. 2017).
The difficulty in constraining the temporal development of thermal maturity and the associated structures stems from the fact that thermal maturity integrates the temperature experienced and its duration throughout structural deformation. The thermal maturity recorded in vitrinite reflectance or in the Raman spectrum of carbonaceous materials (e.g., Beyssac et al. 2007; Jehlička et al. 2003; Sakaguchi 1996; Sweeney and Burnham 1990; Yamamoto et al. 2017), which are the indices most frequently used to reconstruct thermal history, reflects the entire history of the material (i.e., temperature and exposure time). A given index value can therefore result from either a high temperature sustained briefly or a lower temperature sustained for a long time. Thus, the trajectory of the sediments cannot be uniquely inferred from observed thermal maturity indices alone, because of this tradeoff between exposure temperature and duration.
In this study, the trajectories of sediments and their thermal histories during wedge formation were examined using a numerical simulation. Previous studies have investigated particle trajectories in wedges (e.g., Willett et al. 1993; Beaumont et al. 1999; Konstantinovskaia and Malavieille 2005; Stockmal et al. 2007; Mulugeta and Koyi 1992; Naylor and Sinclair 2007; Wenk and Huhn 2013). Here, we obtained the particle trajectories using the distinct element method (DEM) (Cundall and Strack 1979). The temperature within the accretionary wedge was also calculated in order to evaluate the relation between the sediment trajectories and thermal maturity. This approach provides a spatiotemporal framework for deformation and thermal maturity within the accretionary wedge. The results can be compared with natural systems to understand the thermal evolution across a wedge. Additionally, the established framework should improve understanding of the relation between thermal history, as recorded by thermal maturity, and the associated structures in natural accretionary wedges.
Methods/Experimental
Overview of the simulation
The geological structure and thermal maturity are modeled by combining a DEM geodynamic simulation, which reproduces the structural evolution of the accretionary wedge, with a thermal model. The thermal structure is reconstructed from thermal conditions observed in natural accretionary wedges. The particle tracks obtained from the DEM simulation imitate the paths of sediments in the accretionary wedge, and the simulated particles traveling within the wedge sample the temperatures derived from the thermal structure. The thermal history of each particle can therefore be obtained. Vitrinite reflectance is used as the index of thermal maturity, which integrates temperature and its duration.
Reconstruction of the accretionary wedge using the DEM method
We employed the DEM geodynamic modeling method to reconstruct the geological structure of the accretionary wedge. Several studies have reconstructed accretionary wedges using the DEM method (e.g., Morgan 2015; Burbidge and Braun 2002; Naylor and Sinclair 2007; Wenk and Huhn 2013; Yamada et al. 2006; Yamada et al. 2014; Miyakawa et al. 2016). The modeling process in this study follows the method given by Miyakawa et al. (2010) in which an accretionary wedge containing an out-of-sequence thrust (OOST) was reconstructed by increasing the basal friction in the model. OOSTs represent large displacements in the wedge, and several geological surveys found remarkable gaps in thermal maturity across OOSTs (e.g., Yamamoto et al. 2017). Consequently, it is expected that the numerical simulation will show remarkable variations in the thermal maturity structure across the OOST.
We used the model to reproduce the evolution of the accretionary wedge based on both the input of sediments to the trench and the deformation of sediments that form the body of the wedge. The initial condition of the model represents coherent sediments on the oceanic plate (Fig. 1). The thickness of the sediments is set to 1000 m. We employed two sets of particle parameters in the DEM simulation (Table 1). The upper part of the sediment comprises strong particles (particle A) that represent sediments, whereas the lower part comprises a thin layer (100 m) of weak particles (particle B) that represent a weak décollement layer. The particle radii range over 12–30 m for particle A and 6–10 m for particle B. We did not consider interparticle bonding in our simulation. By setting weak particles, we reproduced a décollement zone several particles thick. Rigid walls confine the model boundaries, and the friction between the side walls and the particles is the same as the interparticle friction of particle A. A moving wall compresses the sediments to reproduce the accretion process. The displacement rate was set small enough (0.009 m per calculation cycle) to approximate the deformation as quasi-static. The timescale of a calculation cycle is explained in the next section. A basal slit (60 m in height; see Fig. 1) was used to generate a detachment horizon within the thin bottom décollement layer.
Setup of the numerical simulations. The upper layer (100–1000 m) comprises particle A, whereas the bottom layer (0–100 m) comprises particle B. The left side wall pushes the particles toward the right at a small displacement rate (0.009 m per calculation time step). The moving wall has a slit (0–60 m) to produce a detachment (décollement) within the bottom layer. The shear drag factor of particle B increases from the left-hand side to the right-hand side in response to the displacement of the moving wall (modified after Miyakawa et al. 2010)
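To make this initial geometry concrete, the following Python sketch lays out the two particle layers with the thicknesses and radius ranges quoted above. It is only a schematic stand-in for the PFC2D setup: the domain length, the naive row packing, and the random seeds are assumptions, not values from the original model.

```python
import random

DOMAIN_LENGTH = 30000.0   # model length [m] (assumed; not stated in the text)
SEDIMENT_TOP = 1000.0     # total sediment thickness [m]
DECOLLEMENT_TOP = 100.0   # top of the weak basal layer [m]

def make_layer(radius_range, z_min, z_max, seed):
    """Naive row-by-row packing of circular particles.

    Each row uses one randomly drawn radius from radius_range. Real DEM
    packings are denser and stress-equilibrated; this only schematizes
    the two-layer geometry (particle A above particle B).
    """
    rng = random.Random(seed)
    particles, z = [], z_min
    while z < z_max:
        r = rng.uniform(*radius_range)
        x = r
        while x < DOMAIN_LENGTH:
            particles.append({"x": x, "z": z + r, "r": r})
            x += 2.0 * r
        z += 2.0 * r
    return particles

# Particle B: weak décollement layer (0-100 m), radii 6-10 m.
layer_b = make_layer((6.0, 10.0), 0.0, DECOLLEMENT_TOP, seed=1)
# Particle A: overlying sediments (100-1000 m), radii 12-30 m.
layer_a = make_layer((12.0, 30.0), DECOLLEMENT_TOP, SEDIMENT_TOP, seed=2)
print(len(layer_a), "A particles,", len(layer_b), "B particles")
```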
Table 1 Parameters of the particles used in the DEM simulation
In this study, the OOST within an accretionary wedge was formed by increasing the basal strength of the model (Miyakawa et al. 2010). The increase in the basal strength was intended to simulate the varying pore pressure distribution (Bangs et al. 2004) and/or material properties (Hyndman and Wang 1993), which control the strength of the décollement. The pore pressure distribution and material properties were affected by the alterations in various parameters such as the temperature, overburden thickness, porosity, and development of faults and fractures that act as fluid conduits. It is difficult to consider these parameters directly while performing the DEM simulation. Therefore, we increased the basal strength of the particles in the thin bottom layer. We gradually increased the interparticle friction of the particles in the thin bottom layer from left to right. The rate of increase was a function of the amount of shortening (see Miyakawa et al. 2010 for details). The function maintained a constant distance between the trench position and the strength increase zone. The functional increase in the strength of the particles models the loss of pore pressure and the hardening of the material along the décollement as a function of the evolution of the accretionary wedge. The simulation was performed using the two-dimensional particle flow code (PFC2D) software (ITASCA Corp., Minneapolis, USA).
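The exact ramp function appears in Miyakawa et al. (2010); the sketch below only illustrates the idea of a basal friction increase that tracks the trench at a fixed distance, so the strength-increase zone migrates as the wedge grows. All numerical values (friction coefficients, offset, ramp width) are illustrative placeholders, not the published parameters.

```python
def basal_friction(x, trench_x, mu_weak=0.1, mu_strong=0.6,
                   offset=5000.0, ramp_width=3000.0):
    """Interparticle friction of the bottom (particle B) layer at position x.

    A hypothetical linear ramp: weak seaward of the strength-increase zone,
    strong landward of it. The zone starts a fixed distance `offset`
    landward of the trench, so it keeps a constant distance from the trench
    as the wedge grows (all numerical values are assumptions).
    """
    ramp_start = trench_x - offset          # landward edge tracks the trench
    s = (ramp_start - x) / ramp_width       # > 0 landward of the ramp start
    s = min(max(s, 0.0), 1.0)               # clamp to [0, 1]
    return mu_weak + s * (mu_strong - mu_weak)

# Example: friction profile at a stage where the trench sits at x = 20 km.
for x in (22000.0, 16000.0, 14000.0, 10000.0):
    print(x, round(basal_friction(x, trench_x=20000.0), 3))
```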
Timescale conversion from the DEM model to the thermal model
The definition of the timescale is critical to bridge the gap between the deformation rate in the DEM simulation and the heating rate obtained from the thermal structure. The computational cost of the DEM is too high to simulate deformation over natural geological timescales (i.e., thousands or millions of years). The timescale of the DEM simulation was therefore set only to satisfy quasi-static deformation conditions that imitate long-term geological deformation. Consequently, the timescale of a calculation step in the DEM simulation cannot be converted directly to the timescale of the thermal model. To solve this problem, we introduced a normalized timescale based on the displacement of the moving wall, which controls the deformation rate.
The shortening rate observed in a natural subduction zone and the shortening rate in the model were equated through a timescale normalized by the displacement of the moving wall. Wedge deformation in nature and in the numerical simulation is driven by shortening at the plate subduction zone and by the moving wall, respectively. The shortening rate at the plate subduction zone is 4 cm/year in the Nankai Trough, southwest Japan (Seno et al. 1993), whereas the shortening rate of the moving wall is 0.9 cm per step in our simulation. To calibrate the model shortening rate against nature, the timescale must therefore be 0.225 years/step, and this value is used hereafter for the heating duration in the DEM model. This normalized timescale reconciles the deformation rate of the DEM simulation with the heating duration of the thermal model by matching the rates observed in nature.
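This calibration reduces to a one-line rate ratio; the snippet below reproduces the 0.225 years/step figure from the two rates quoted above.

```python
# Calibrate the DEM step to geological time via the shortening rate.
NATURAL_SHORTENING = 4.0   # cm/year, Nankai Trough (Seno et al. 1993)
WALL_DISPLACEMENT = 0.9    # cm/step (0.009 m per DEM calculation step)

years_per_step = WALL_DISPLACEMENT / NATURAL_SHORTENING
print(years_per_step)      # -> 0.225 years per calculation step
```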
Thermal model
The thermal structure was reconstructed with reference to the thermal conditions of natural accretionary wedges. Heat-flow data collected along individual marine transects show that geothermal gradients decrease considerably in magnitude with arcward distance from the subduction front, and that this thermal structure develops through the combined effects of the cooling of the slab as it subducts and the thickening of the deformed sediments in the overlying accretionary wedge (Underwood et al. 1993). For example, Harris et al. (2013) reported a thermal gradient of 70–90 °C/km for the incoming sediments and 40–50 °C/km at the wedge edge in the Kumano area of the Nankai Trough, southwest Japan. In this study, therefore, a high thermal gradient was assigned to the incoming sediments and a low thermal gradient to the wedge, with the trench marking the boundary between the two. The position of the trench, however, shifts each time a new frontal thrust forms, so the thermal gradient would not follow such jumps precisely. Because the trench migrates intermittently rightward (i.e., seaward) as new frontal thrusts form (Miyakawa et al. 2010), we smoothed the trench position by interpolating between the tips of successive frontal thrusts. The thermal gradient of the incoming sediments was 90 °C/km on the right (i.e., seaward) side of the trench. The thermal gradient of the wedge was 50 °C/km on the left (i.e., landward) side of the final wedge after 22,500 m of shortening (Additional file 1: Table S1). The thermal gradient between the incoming sediments and the wedge was interpolated linearly between these two end-member gradients (Fig. 2, Additional file 1: Figure S2).
The thermal gradient models for the calculation of the temperature of each particle. The horizontal axis depicts the distance from the left side wall (c.a., moving backstop), whereas the vertical axis depicts the thermal gradient. The thermal gradient is linearly interpolated from the input sediments at 90 °C/km to the landward end at 50 °C/km in the final condition. The final thermal gradient model after 22,500 m of shortening (a). The thermal gradient model for different amounts of shortening, i.e., for 0 m (c.a., initial condition) (b), 9000 m (c), and 18,000 m (d)
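A minimal sketch of this gradient model is given below. It assumes a 0 °C surface (seafloor) temperature, which the text does not state, and uses placeholder positions for the trench and the landward reference point.

```python
def thermal_gradient(x, trench_x, landward_x,
                     grad_sea=90.0, grad_land=50.0):
    """Geothermal gradient [deg C/km] at horizontal position x [m].

    90 C/km seaward of the (interpolated) trench, 50 C/km at the
    landward reference position, linear in between.
    """
    if x >= trench_x:                    # incoming sediments (seaward)
        return grad_sea
    if x <= landward_x:                  # landward end of the final wedge
        return grad_land
    frac = (x - landward_x) / (trench_x - landward_x)
    return grad_land + frac * (grad_sea - grad_land)

def particle_temperature(depth_m, gradient_c_per_km, surface_temp_c=0.0):
    """Temperature from burial depth and local gradient (0 C surface assumed)."""
    return surface_temp_c + gradient_c_per_km * depth_m / 1000.0

# Example: a particle buried 2 km deep, halfway between the reference points.
g = thermal_gradient(x=12500.0, trench_x=20000.0, landward_x=5000.0)
print(g, particle_temperature(2000.0, g))   # 70.0 C/km -> 140.0 C
```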
The temperature of each particle was calculated according to its buried depth from the surface and thermal gradient at that point. We calculated and updated the temperature of the particles after every 10,000 timesteps (i.e., 2250 years). The particles were preheated to the initial temperatures that were recorded at the locations of the incoming sediments for 1,650,000 timesteps (i.e., 371,250 years) to duplicate the heating process that occurred after sedimentation and to prevent rapid spikes in the temperature of the numerical simulation.
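Both bookkeeping intervals follow directly from the normalized timescale, as the short check below makes explicit.

```python
YEARS_PER_STEP = 0.225

update_interval_steps = 10_000   # temperature-update cadence
preheat_steps = 1_650_000        # preheating before accretion begins

print(update_interval_steps * YEARS_PER_STEP)  # 2250.0 years between updates
print(preheat_steps * YEARS_PER_STEP)          # 371250.0 years of preheating
```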
The thermal maturity of each particle is expressed as vitrinite reflectance, reproducing the profiles observed in natural accretionary wedges. Sweeney and Burnham (1990) constructed a kinetic model (Easy%Ro) employing a series of parallel first-order chemical reactions that describe the progress of vitrinite thermal maturation. Suzuki et al. (1993) proposed a simplified kinetic model based on a single activation energy, which is more suitable for numerical treatment (SIMPLE-Ro). In their model, the vitrinite reflectance, Ro, is expressed as a function of the fraction of reacted material, Fc, as follows:
$$ \mathrm{Ro}=\exp \left[\ln \left(\mathrm{Ro}_{0}\right)+3.7\,\mathrm{Fc}\right], $$
where Ro0 is the initial reflectance of vitrinite. Fc can be written as follows:
$$ \mathrm{Fc}=0.85-0.85\exp \left[-A\,\Delta t\exp \left(-E/RT\right)\right], $$
where A is a frequency factor ($1.0\times 10^{13}$ /s), R is the gas constant, Δt is the duration of the calculation interval, and E is the activation energy, which can be represented as:
$$ E=40.7\ln \left(\mathrm{Ro}\right)+227\left[\mathrm{kJ}/\mathrm{mol}\right]. $$
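A minimal incremental implementation of these SIMPLE-Ro relations is sketched below. The initial reflectance Ro0 is an assumed placeholder (the text does not quote a value), and the per-interval update uses the standard first-order form, so Fc relaxes toward its 0.85 ceiling from its current value rather than from zero.

```python
import math

A_FREQ = 1.0e13       # frequency factor A [1/s]
R_GAS = 8.314e-3      # gas constant [kJ/(mol K)]
RO_INITIAL = 0.2      # assumed initial vitrinite reflectance Ro0 [%]

def advance_ro(ro, temp_c, dt_seconds):
    """Advance vitrinite reflectance over one heating interval.

    Implements the SIMPLE-Ro relations quoted above (Suzuki et al. 1993):
    E from the current Ro, a first-order rate k = A exp(-E/RT), and the
    reacted fraction Fc relaxing toward its ceiling of 0.85.
    """
    temp_k = temp_c + 273.15
    e_act = 40.7 * math.log(ro) + 227.0              # E [kJ/mol]
    k = A_FREQ * math.exp(-e_act / (R_GAS * temp_k)) # rate [1/s]
    fc = (math.log(ro) - math.log(RO_INITIAL)) / 3.7 # invert Ro(Fc)
    fc = 0.85 - (0.85 - fc) * math.exp(-k * dt_seconds)
    return math.exp(math.log(RO_INITIAL) + 3.7 * fc)

# Integrate a particle's thermal history in 2250-year intervals
# (the temperature-update cadence used in the simulation).
SECONDS_PER_YEAR = 3.156e7
dt = 2250.0 * SECONDS_PER_YEAR
ro = RO_INITIAL
for temp_c in (60.0,) * 44 + (90.0,) * 44 + (120.0,) * 44:
    ro = advance_ro(ro, temp_c, dt)
print(round(ro, 3))   # maturity after ~300 kyr of staged heating
```

Because E is re-evaluated from the current Ro at every interval, the reaction self-retards as maturity rises; this is the tradeoff noted in the Background, whereby a given Ro can be reached by long exposure at moderate temperature or brief exposure at high temperature.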
Formation of an accretionary wedge and fault activity
The formation process of an accretionary wedge, including faulting, was reconstructed through the DEM numerical simulation (Fig. 3 and Additional file 2: Movie S1). The incoming sediments were scraped off at the moving wall and deformed, forming reverse faults. The thickness of the wedge increased in step with the shortening.
The sequential simulation results and the relative displacement of particles toward the moving wall over the most recent 90 m of shortening, at stages from 0 to 22,500 m of total shortening. The colored particles on the simulation results are the target particles used for monitoring depth, temperature, and vitrinite reflectance (Fig. 4). The trajectories of the particles are also depicted. The solid black lines in a and white lines in b are fore thrusts, and the dashed black lines in a and white lines in b are back thrusts. The thick solid black line in a and white line in b depict the OOST. The black arrows represent displacement boundaries associated with active faulting
Movie S1. Evolution of the structure of the accretionary wedge and the distribution of the displacement of each particle (AVI 4329 kb)
Fault activity varied throughout the simulation. Active faults were identified by discontinuities in particle displacement over 90 m of shortening (Fig. 3). The most active fault was the frontal thrust, which separated the incoming sediment from the accretionary wedge at the trench. The activity of a frontal thrust decreased when a new frontal thrust formed farther seaward (i.e., to the right). The faults within the accretionary wedge were at times reactivated during the shortening process, although their displacements were smaller than that of the active frontal thrust. The most remarkable reactivated fault was the OOST. Motion along the OOST continued for a long period, and, consequently, the displacement along the OOST was large.
Particle tracks and their depth and thermal history
The trajectories of particles vary as a function of their burial depth and their position relative to active thrust faults (e.g., Konstantinovskaia and Malavieille 2005). Consequently, the thermal history of each particle differs. Trajectories can be traced by tracking the positions of target particles at every step. We set four tracking particles in the model (Fig. 3). The initial depths of these tracking particles in the incoming sediment were approximately the same (about 500 m). Therefore, all trajectories overlapped until the sediment was accreted by the formation of a frontal thrust in the vicinity of the target particles. Once accreted into the wedge, the trajectories of the tracking particles diverged onto different pathways. The difference between the pathways is reflected in the different final burial depths of the tracking particles (Fig. 4). Consequently, the particle temperatures and Ro% values record variable thermal histories despite originating from an identical initial temperature condition.
The depth (a), temperature (b), and vitrinite reflectance (c) vs. the time of the target particles. The colors of the lines correspond to the colors of the particle in Fig. 3
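The bookkeeping for these histories can be sketched as below. The `get_particle_depth` accessor is a hypothetical stand-in for the DEM output (in practice the positions would be read from the PFC2D results), and the synthetic burial curve exists only so the sketch runs on its own.

```python
YEARS_PER_STEP = 0.225
RECORD_EVERY = 10_000     # steps between records (2250 years)

def get_particle_depth(pid, step):
    """Hypothetical stand-in for the DEM output: burial depth [m].

    A toy monotonic burial curve replaces the real tracked positions
    read from the PFC2D results.
    """
    return 500.0 + 0.005 * step * (1 + pid) / 4.0

def track_particles(pids, n_steps, gradient_c_per_km=70.0):
    """Record (time [yr], depth [m], temperature [C]) for tracked particles."""
    history = {pid: [] for pid in pids}
    for step in range(0, n_steps + 1, RECORD_EVERY):
        t_years = step * YEARS_PER_STEP
        for pid in pids:
            depth = get_particle_depth(pid, step)
            temp = gradient_c_per_km * depth / 1000.0  # 0 C surface assumed
            history[pid].append((t_years, depth, temp))
    return history

hist = track_particles(pids=(0, 1, 2, 3), n_steps=200_000)
print(hist[0][-1])   # final record of the shallowest tracked particle
print(hist[3][-1])   # final record of the deepest tracked particle
```

In the full workflow, each recorded temperature would feed the SIMPLE-Ro update sketched in the previous subsection, yielding the Ro% histories plotted in Fig. 4.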
The temperature and Ro% of every particle were calculated along the trajectory of each particle in the manner shown above. Finally, we obtained the temperature and vitrinite reflectance structures of the wedge (Fig. 5 and Additional file 3: Movie S2). The thermal maturity structure, as elucidated by vitrinite reflectance (Ro%), was spatiotemporally discontinuous despite the continuous thermal structure (Fig. 5). The discontinuity in thermal maturity was particularly evident along the frontal thrust and the OOST. The step in thermal maturity along the frontal thrust, especially at depth, disappeared with the growth of the wedge. The most remarkable step was observed along the OOST.
The sequential temperature structure and vitrinite reflectance structure at every 4500 m of shortening, from 0 to 22,500 m of total shortening. The colored particles with white edges on the simulation results are the target particles used for monitoring depth, temperature, and vitrinite reflectance (Fig. 4)
Movie S2. Evolution of the temperature and thermal structure within the wedge. (AVI 4780 kb)
The effects of the order of magnitude of the normalized timescale and of variations in the thermal gradient model are examined in Additional file 1: Figures S1, S2, and S3. The qualitative thermal and thermal maturity structures are independent of these differences, although the absolute values of temperature and thermal maturity change.
Profiles of the thermal maturity
The spatial variation of the steps in thermal maturity is observed along both horizontal and vertical profiles (Fig. 6). In general, thermal maturity increases landward and with increasing depth. This trend originates from the thickening of the wedge and the correspondingly higher temperatures experienced at depth. The large steps in thermal maturity coincide with fault development in both the horizontal and vertical profiles.
The geological and thermal maturity structure after 22,500 m of total shortening. The geological structure (a). The thermal maturity structure as the vitrinite reflectance structure (b). The horizontal profiles of the vitrinite reflectance (c). The vertical profiles of the vitrinite reflectance (d). The positions of the profile lines are displayed on the geological structure and the thermal maturity structure. The solid black lines in a and white lines in b are fore thrusts, and the dashed black lines in a and white lines in b are back thrusts. The thick solid black line in a and white line in b depict the OOST. The gray shades in c and d depict the positions of the faults crossing the profile lines
Comparison between the conceptual model and the natural accretionary wedge
The relation between the recorded thermal maturities and the geological structures in the numerical simulation reproduces the conceptual model of Underwood et al. (1993). Thermal maturity inversion, observed as a step in thermal maturity in our simulation, occurs in an accretionary wedge when the hanging wall strata are uplifted and thrust over the less mature strata of the footwall (Fig. 6B in Underwood et al. 1993). This type of thermal maturity inversion is observed in our numerical simulation, particularly across the frontal thrust and the OOST (Fig. 6).
We compared the results of the numerical simulation with observations of a natural accretionary wedge, namely the young and unmetamorphosed complexes of the Miura and Boso peninsulas of central Japan (Yamamoto 2006; Yamamoto et al. 2017). The Early Miocene and Late Miocene to Pliocene accretionary complexes exposed in this area still retain 30–50% of their initial porosity and show low P-wave velocity structures (Yamamoto 2006). The observed maximum paleotemperatures of the Miura–Boso accretionary wedge range from 0 to 150 °C (Yamamoto et al. 2017). Higher maximum paleotemperatures are restricted to the western (i.e., landward) part of the Early Miocene Hota accretionary complex, indicating a spatial difference in slip upon the OOST (Yamamoto et al. 2017). The range of maximum paleotemperatures observed in the Miura–Boso accretionary wedge is consistent with that obtained from the numerical simulation (Fig. 5). Specific variations in the maximum paleotemperature, namely the landward increase and the large steps across large OOSTs, are also consistent with the numerical simulations. These results and observations lead to the conclusion that the thermal maturity in the numerical simulation is highly representative of that in the natural accretionary wedge. Our results also show that both the presence and the absence of steps in thermal maturity observed in a natural accretionary wedge should be interpreted with higher-temperature overprinting and fault reactivation in mind.
Thermal history of the sediments and the spatiotemporal evolution of the thermal maturity
The simulation results describe the thermal history of the sediments and the spatiotemporal evolution of the thermal maturity structures associated with wedge growth. The thermal history is controlled by two components: subduction and faulting. Subduction and faulting associated with wedge growth cause the overall maturation of the vitrinite, the formation of steps in thermal maturity, the disappearance of steps, and the reformation of steps. Hence, the thermal maturity and its spatiotemporal evolution are tied to the stage of wedge growth.
Incoming sediment
The incoming sediments, which are pre-subduction sediments, are heated under a high thermal gradient on the oceanic plate. The deeper sediments show higher vitrinite reflectance (Ro%) than the shallower sediments because of the high thermal gradient at this stage. Hence, vitrinite reflectance is identical across sediments at similar depths. The maximum vitrinite reflectance reached in the input sediments at depth was, however, lower than that in the deeper part of the inner wedge, because the input sediments (~ 1 km) are thinner than the wedge (~ 4 km).
Frontal thrust region
A step in thermal maturity formed in the frontal thrust region, where the incoming sediment was scraped off by frontal thrust formation. The displacement along the frontal thrust offsets the horizontally layered thermal maturity structure. The thickness at the toe of the wedge was, however, still less than that of the landward inner wedge. Hence, both the vitrinite reflectance values and the step across the frontal thrust were still small.
Shallow part of the accretionary wedge
The shallow portion of the accreted sediments does not subduct and remains at shallow depth. These sediments rarely experience high temperatures, so their vitrinite reflectance is rarely overprinted. The shallow sediments therefore retain the vitrinite reflectance of the incoming sediments and the thermal maturity step acquired during frontal thrusting. This preservation enables us to observe low vitrinite reflectance values and small thermal maturity steps in the shallow part of the wedge. Recent ocean drilling has revealed low-grade metamorphism and low vitrinite reflectance, along with small thermal maturity steps, in the shallow portion of the wedge in the Nankai Trough, southwest Japan (Fukuchi et al. 2017).
Deep part of the accretionary wedge
The deep portion of the accreted sediments subducts and deepens as the wedge grows. These sediments experience higher temperatures than the incoming sediments and the toe of the wedge, and consequently show high vitrinite reflectance values. The important point is that, although the present frontal thrust at the toe is active, the paleo-frontal thrusts are already inactive at this stage. Therefore, the higher vitrinite reflectance overprints the existing thermal maturity structure, and the steps in thermal maturity gradually disappear even where thrust faults are present (e.g., lower shaded zone in Fig. 6d). This absence of steps in vitrinite reflectance along the faults is attributed to overprinting by the higher vitrinite reflectance associated with subduction. In fact, faults without steps in vitrinite reflectance have been observed in natural wedges (e.g., Kitamura et al. 2005).
Reactivation of the fault and OOST
Steps in thermal maturity are recreated by fault reactivation in the landward, inner wedge. The reactivation of faults there is clearly observed as gaps in displacement (black arrows in Fig. 3). These displacement gaps along the faults rebuild the steps in the vitrinite reflectance profiles (Fig. 6). The most remarkable reactivated fault in the inner wedge is the OOST; the large displacement along the OOST formed the large steps in vitrinite reflectance. Similar large steps across OOSTs have also been reported in natural accretionary wedges (e.g., Yamamoto et al. 2017) (Fig. 7).
A comparison of the numerical model and the natural accretionary wedge. The geological structure (a) and the horizontal profiles of the vitrinite reflectance (b) of the numerical simulation (Fig. 6). Schematic cross-section of the Miura–Boso subduction margin showing maximum paleo-temperatures deduced from the Boso Peninsula (c, d) (modified after Yamamoto et al. 2017)
Summary of the spatiotemporal evolution of the thermal maturity
Two end-member thermal maturity pathways were revealed by the simulation: a deep, high-thermal-maturity pathway and a shallow, low-thermal-maturity pathway (Fig. 8). The thickness of the wedge increased with wedge growth, and sediments subducting into the deeper part of the wedge experienced increasingly high temperatures and attained high thermal maturity. In contrast, sediments in the shallow part of the wedge remained at low temperatures, so their thermal maturity also remained low.
Schematic of an accretionary wedge with two main sediment flow pathways: a a shallow and low thermal maturity pathway and a deep high thermal maturity path and b vertical profiles of vitrinite reflectance in an accretionary wedge. The steps in vitrinite reflectance across the fault are associated with fault formation and reactivation, as depicted with arrows in II and V. The step in vitrinite reflectance across the fault is preserved in the shallow part of the wedge, as depicted by an arrow in III. The disappearance of the step with high temperatures and the overprint that depicts higher vitrinite maturity than that obtained in previous stages are illustrated using an arrow in IV
The steps in thermal maturity are caused by fault displacement (Fig. 8). The initial step formed through displacement on the frontal thrust. This step disappears by overprinting of high-grade vitrinite reflectance along the deep pathway, whereas it is preserved along the shallow pathway. Steps at high thermal maturity levels formed during fault reactivation or OOST formation in the inner wedge.
The disappearance of steps and the overprinting of high vitrinite reflectance are important features that may occur in a natural accretionary wedge (Fig. 8). A large fault lacking a thermal maturity step has been reported, even though it is inferred to be a seismogenic fault from the presence of pseudotachylyte (Kitamura et al. 2005). One proposed interpretation is that the large fault was parallel to the isotherms. In subduction zones, the plate boundary décollement dips gently, almost parallel to the isotherms (Underwood et al. 1989), so displacement along the décollement does not cause a thermal contrast between the hanging wall and the footwall. The lack of a thermal step across the Minami–Awa fault in southwest Japan was consistent with the expected thermal structure of a plate boundary décollement (Kitamura et al. 2005). However, our results suggest another explanation for the absence of thermal maturity steps across faults. Although a step may have formed during the large fault displacement, which may have produced pseudotachylyte during slip, the strata were subducted after the fault became inactive and subsequently experienced higher temperatures. At that point, both the hanging wall and footwall strata acquired high-grade vitrinite reflectance, and the step in vitrinite reflectance disappeared. Such high-grade vitrinite reflectance overprints may be caused not only by subduction but also by external factors such as igneous activity (Sakaguchi 1996; Underwood et al. 1992).
Limitations of assumptions and model simplifications
The simplifying assumptions of the model should be made more realistic in future work. Our model does not account for heat conduction; instead, we assume a constant thermal gradient at a given depth and a steady-state temperature distribution. Therefore, our model cannot capture convective heat transport linked to faulting and fluid flow within the wedge. Consequently, steps in temperature across faults and heterogeneous variations in thermal gradients in the real system are not reproduced. The boundary conditions of our model should also be improved to reproduce the complex natural conditions of accretionary wedges. For example, other studies suggest that an output of material (subducting sediments) through a type of "subduction channel" is responsible for major OOST faults in a wedge (e.g., Kukowski et al. 1994 and Gutscher et al. 1998). Several boundary conditions should be tested in future work, as our model represents just one example of OOST formation.
The deformational and thermal events were concurrent and affected each other in the natural environment. With syn-metamorphic faulting, the foot wall will be subjected to the effects of conductive heat transfer across the fault surface, and the thermal maturity indicators will be reset (Underwood et al. 1993). Therefore, thermomechanical simulation techniques (e.g., Gerya and Stöckhert 2006) may enable a considerably more detailed examination. The effect of thermal fluid flow or a local geological event, such as the encroachment of magmatic intrusions, which were not taken into account in our model, may also be important explanatory variables of the local thermal history.
We modeled a simple geological and thermal maturity structure by coupling a wedge deformation model with a thermal structure model. The results of the numerical simulation agree with observations from natural accretionary wedges. The simulation results describe the thermal history of the sediments and the spatiotemporal evolution of the thermal maturity structures associated with wedge growth. Two end-member thermal maturity pathways are depicted: a deep, high-thermal-maturity pathway and a shallow, low-thermal-maturity pathway. The subduction and faulting associated with wedge growth control the overall vitrinite maturity as well as the formation, decay, and reformation of steps. Our results show that both the presence and the absence of steps in thermal maturity should be considered when examining higher-temperature overprinting and fault reactivation. The simplifying assumptions of the model should be made more realistic in future work.
DEM:
Distinct element method
OOST:
Out-of-sequence thrust
Bangs NL, Shipley TH, Gulick SP, Moore GF, Kuromoto S, Nakamura Y (2004) Evolution of the Nankai Trough décollement from the trench into the seismogenic zone: inferences from three-dimensional seismic reflection imaging. Geology 32:273–276
Barr TD, Dahlen FA (1989) Brittle frictional mountain building: 2. Thermal structure and heat budget. J Geophys Res Solid Earth 94(B4):3923–3947
Barr TD, Dahlen FA, McPhail DC (1991) Brittle frictional mountain building 3. Low-grade metamorphism. J Geophys Res Solid Earth 96(B6):10319–10338
Beaumont C, Ellis S, Pfiffner A (1999) Dynamics of sediment subduction-accretion at convergent margins: short-term modes, long-term deformation, and tectonic implications. J Geophys Res Solid Earth 104(B8):17573–17601
Beyssac O, Simoes M, Avouac JP, Farley KA, Chen YG, Chan YC, Goffé B (2007) Late Cenozoic metamorphic evolution and exhumation of Taiwan. Tectonics 26(6)
Burbidge DR, Braun J (2002) Numerical models of the evolution of accretionary wedges and fold-and-thrust belts using the distinct-element method. Geophys J Int 148(3):542–561
Chen WS, Chung SL, Chou HY, Zugeerbai Z, Shao WY, Lee YH (2017) A reinterpretation of the metamorphic Yuli belt: evidence for a middle-late Miocene accretionary prism in eastern Taiwan. Tectonics 36(2):188–206
Cundall PA, Strack OD (1979) A discrete numerical model for granular assemblies. Geotechnique 29:47–65
Fukuchi R, Yamaguchi A, Yamamoto Y, Ashi J (2017) Paleothermal structure of the Nankai inner accretionary wedge estimated from vitrinite reflectance of cuttings. Geochem Geophys Geosyst 18:3185–3196
Gerya T, Stöckhert B (2006) Two-dimensional numerical modeling of tectonic and metamorphic histories at active continental margins. Int J of Earth Sci 95:250–274
Gutscher MA, Kukowski N, Malavieille J, Lallemand S (1998) Episodic imbricate thrusting and underthrusting: analog experiments and mechanical analysis applied to the Alaskan accretionary wedge. J Geophys Res Solid Earth 103(B5):10161–10176
Harris R, Yamano M, Kinoshita M, Spinelli G, Hamamoto H, Ashi J (2013) A synthesis of heat flow determinations and thermal modeling along the Nankai Trough, Japan. J of Geophys Res Solid Earth 118:2687–2702
Hyndman RD, Wang K (1993) Thermal constraints on the zone of major thrust earthquake failure: the Cascadia subduction zone. J of Geophys Res Solid Earth 98:2039–2060
Jehlička J, Urban O, Pokorný J (2003) Raman spectroscopy of carbon and solid bitumens in sedimentary and metamorphic rocks. Spectrochim Acta A Mol Biomol Spectrosc 59(10):2341–2352
Kitamura Y, Sato K, Ikesawa E, Ikehara-Ohmori K, Kimura G, Kondo H, Ujie K, Onishi CT, Kawabata K, Hashimoto Y, Masago H, Mukoyoshi H (2005) Mélange and its seismogenic roof décollement: a plate boundary fault rock in the subduction zone—an example from the Shimanto Belt, Japan. Tectonics 24:TC5012. https://doi.org/10.1029/2004TC001635
Konstantinovskaia E, Malavieille J (2005) Erosion and exhumation in accretionary orogens: experimental and geological approaches. Geochem Geophys Geosyst 6(2)
Kukowski N, Von Huene R, Malavieille J, Lallemand SE (1994) Sediment accretion against a buttress beneath the Peruvian continental margin at 12 S as simulated with sandbox modeling. Geol Rundsch 83(4):822–831
Miyakawa A, Yamada Y, Matsuoka T (2010) Effect of increased shear stress along a plate boundary fault on the formation of an out-of-sequence thrust and a break in surface slope within an accretionary wedge, based on numerical simulations. Tectonophysics 484:127–138
Miyakawa A, Yamada Y, Otsubo M (2016) Stress changes in an accretionary wedge related to the displacement of an out-of-sequence thrust in a numerical simulation. Island Arc 25:433–435
Morgan JK (2015) Effects of cohesion on the structural and mechanical evolution of fold and thrust belts and contractional wedges: discrete element simulations. J Geophys Res Solid Earth 120(5):3870–3896
Mulugeta G, Koyi H (1992) Episodic accretion and strain partitioning in a model sand wedge. Tectonophysics 202:319–333
Naylor M, Sinclair HD (2007) Punctuated thrust deformation in the context of doubly vergent thrust wedges: implications for the localization of uplift and exhumation. Geology 35:559–562
Sakaguchi A (1996) High paleogeothermal gradient with ridge subduction beneath the Cretaceous Shimanto accretionary prism, southwest Japan. Geology 24:795–798
Seno T, Stein S, Gripp AE (1993) A model for the motion of the Philippine Sea plate consistent with NUVEL-1 and geological data. J of Geophys Res Solid Earth 98:17941–17948
Stockmal GS, Beaumont C, Nguyen M, Lee B (2007) Mechanics of thin-skinned fold-and-thrust belts: insights from numerical models. Geol Soc Am Spec Pap 433:63–98
Suzuki N, Matsubayashi H, Waples DW (1993) A simpler kinetic model of vitrinite reflectance. AAPG Bull 77:1502–1508
Sweeney JJ, Burnham AK (1990) Evaluation of a simple model of vitrinite reflectance based on chemical kinetics (1). AAPG Bull 74:1559–1570
Underwood MB, Hibbard JP, DiTullio L (1993) Geologic summary and conceptual framework for the study of thermal maturity within the Eocene-Miocene Shimanto Belt, Shikoku, Japan. Geo Soc of Am Special Papers 273:1–24
Underwood MB, Laughland MM, Byrne T, Hibbard JP, DiTullio L (1992) Thermal evolution of the Tertiary Shimanto Belt, Muroto Peninsula, Shikoku, Japan. Island Arc 1:116–132
Underwood MB, Laughland MM, Wiley TJ, Howell DG (1989) Thermal maturity and organic geochemistry of the Kandik basin region, east-central Alaska. US Geological Survey Open-File Report 89–353, 41 p
Wenk L, Huhn K (2013) The influence of an embedded viscoelastic–plastic layer on kinematics and mass transport pattern within accretionary wedges. Tectonophysics 608:653–666
Willett S, Beaumont C, Fullsack P (1993) Mechanical model for the tectonics of doubly vergent compressional orogens. Geology 21(4):371–374
Yamada Y, Baba K, Matsuoka T (2006) Analogue and numerical modeling of accretionary prisms with a décollement in sediments. Geol Soc Lond, Spec Publ 253:169–183
Yamada Y, Baba K, Miyakawa A, Matsuoka T (2014) Granular experiments of thrust wedges: insights relevant to methane hydrate exploration at the Nankai accretionary prism. Mar Pet Geol 51:34–48
Yamamoto Y (2006) Systematic variation of shear-induced physical properties and fabrics in the Miura–Boso accretionary prism: the earliest processes during off-scraping. Earth Planet Sci Lett 244:270–284
Yamamoto Y, Hamada Y, Kamiya N, Ojima T, Chiyonobu S, Saito S (2017) Geothermal structure of the Miura–Boso plate subduction margin, central Japan. Tectonophysics 710:81–87
We are grateful to S. Tonai (Kochi University), R. Fukuchi (JAMSTEC), and H. Hara (Geological Survey of Japan). We also thank two anonymous reviewers for providing constructive reviews that greatly improved the manuscript.
A.M. was supported by JSPS KAKENHI Grant Number JP17H05321.
The datasets in the current study are available from the corresponding author on request.
Geological Survey of Japan, AIST, AIST Tsukuba Central 7, Higashi-1-1-1, Tsukuba, Ibaraki Pref., 305-8567, Japan
Ayumu Miyakawa & Makoto Otsubo
Earthquake Research Institute, University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan
Masataka Kinoshita
Kochi Institute for Core Sample Research, Japan Agency for Marine-Earth Science and Technology, 200 Monobe Otsu, Nankoku City, Kochi, 783-8502, Japan
Yohei Hamada
AM conducted the numerical simulation and designed the study. MK proposed the topic and investigated the thermal model. YH carried out the experimental study. MO conceived the thermal maturity calculation. All authors read and approved the final manuscript.
Correspondence to Ayumu Miyakawa.
Additional file 1:
Supporting information. Figure S1. Variation in the temperature structures and the thermal maturity structures as a function of the distribution of the vitrinite reflectance based on the difference of the normalized timescales of short (0.0225 year/step), moderate (0.225 year/step), and long (2.25 year/step) timescale models. Figure S2. Variation in the thermal gradient models (Table S1.). Table S1. Thermal gradient models. Figure S3. Variation of the temperature structures and the thermal maturity structures as a distribution of the vitrinite reflectance that was derived using different thermal gradient models: HH, HL, LH, and LL models. (DOCX 3942 kb)
Miyakawa, A., Kinoshita, M., Hamada, Y. et al. Thermal maturity structures in an accretionary wedge by a numerical simulation. Prog Earth Planet Sci 6, 8 (2019). https://doi.org/10.1186/s40645-018-0252-z
Accretionary wedge
Thermal maturity structure
Vitrinite reflectance
October 2013, 33(10): 4473-4495. doi: 10.3934/dcds.2013.33.4473
Dispersive estimates for matrix Schrödinger operators in dimension two
M. Burak Erdoǧan 1, and William R. Green 2,
Department of Mathematics, University of Illinois, Urbana, IL 61801, United States
Department of Mathematics, Rose-Hulman Institute of Technology, Terre Haute, IN 47803, United States
Received November 2012 Revised February 2013 Published April 2013
We consider the non-selfadjoint operator \[ \mathcal{H} = \left[\begin{array}{cc} -\Delta + \mu-V_1 & -V_2\\ V_2 & \Delta - \mu + V_1 \end{array} \right] \] where $\mu>0$ and $V_1,V_2$ are real-valued decaying potentials. Such operators arise when linearizing a focusing NLS equation around a standing wave. Under natural spectral assumptions we obtain $L^1(\mathbb{R}^2)\times L^1(\mathbb{R}^2)\to L^\infty(\mathbb{R}^2)\times L^\infty(\mathbb{R}^2)$ dispersive decay estimates for the evolution $e^{it\mathcal{H}}P_{ac}$. We also obtain the following weighted estimate: $$ \|w^{-1} e^{it\mathcal{H}}P_{ac}f\|_{L^\infty(\mathbb{R}^2)\times L^\infty(\mathbb{R}^2)} \lesssim \frac{1}{|t|\log^2(|t|)} \|w f\|_{L^1(\mathbb{R}^2)\times L^1(\mathbb{R}^2)}, \qquad |t| >2, $$ with $w(x)=\log^2(2+|x|)$.
Keywords: dispersive estimates, asymptotic stability, solitons, weighted estimates, matrix Schrödinger operators.
Mathematics Subject Classification: Primary: 35J10, 35Q4.
Citation: M. Burak Erdoǧan, William R. Green. Dispersive estimates for matrix Schrödinger operators in dimension two. Discrete & Continuous Dynamical Systems - A, 2013, 33 (10) : 4473-4495. doi: 10.3934/dcds.2013.33.4473
Alexander Komech, Elena Kopylova, David Stuart. On asymptotic stability of solitons in a nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1063-1079. doi: 10.3934/cpaa.2012.11.1063
Youngwoo Koh, Ihyeok Seo. Strichartz estimates for Schrödinger equations in weighted $L^2$ spaces and their applications. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9) : 4877-4906. doi: 10.3934/dcds.2017210
Roberta Bosi, Jean Dolbeault, Maria J. Esteban. Estimates for the optimal constants in multipolar Hardy inequalities for Schrödinger and Dirac operators. Communications on Pure & Applied Analysis, 2008, 7 (3) : 533-562. doi: 10.3934/cpaa.2008.7.533
Younghun Hong. Strichartz estimates for $N$-body Schrödinger operators with small potential interactions. Discrete & Continuous Dynamical Systems - A, 2017, 37 (10) : 5355-5365. doi: 10.3934/dcds.2017233
Michael Goldberg. Strichartz estimates for Schrödinger operators with a non-smooth magnetic potential. Discrete & Continuous Dynamical Systems - A, 2011, 31 (1) : 109-118. doi: 10.3934/dcds.2011.31.109
Jeremy L. Marzuola. Dispersive estimates using scattering theory for matrix Hamiltonian equations. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 995-1035. doi: 10.3934/dcds.2011.30.995
David Cruz-Uribe, SFO, José María Martell, Carlos Pérez. Sharp weighted estimates for approximating dyadic operators. Electronic Research Announcements, 2010, 17: 12-19. doi: 10.3934/era.2010.17.12
Leyter Potenciano-Machado, Alberto Ruiz. Stability estimates for a magnetic Schrödinger operator with partial data. Inverse Problems & Imaging, 2018, 12 (6) : 1309-1342. doi: 10.3934/ipi.2018055
Yonggeun Cho, Tohru Ozawa, Suxia Xia. Remarks on some dispersive estimates. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1121-1128. doi: 10.3934/cpaa.2011.10.1121
Fabio Nicola. Remarks on dispersive estimates and curvature. Communications on Pure & Applied Analysis, 2007, 6 (1) : 203-212. doi: 10.3934/cpaa.2007.6.203
Mouhamed Moustapha Fall. Regularity estimates for nonlocal Schrödinger equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (3) : 1405-1456. doi: 10.3934/dcds.2019061
Jin-Cheng Jiang, Chengbo Wang, Xin Yu. Generalized and weighted Strichartz estimates. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1723-1752. doi: 10.3934/cpaa.2012.11.1723
Chu-Hee Cho, Youngwoo Koh, Ihyeok Seo. On inhomogeneous Strichartz estimates for fractional Schrödinger equations and their applications. Discrete & Continuous Dynamical Systems - A, 2016, 36 (4) : 1905-1926. doi: 10.3934/dcds.2016.36.1905
Benjamin Dodson. Improved almost Morawetz estimates for the cubic nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2011, 10 (1) : 127-140. doi: 10.3934/cpaa.2011.10.127
Ihyeok Seo. Carleman estimates for the Schrödinger operator and applications to unique continuation. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1013-1036. doi: 10.3934/cpaa.2012.11.1013
Felipe Hernandez. A decomposition for the Schrödinger equation with applications to bilinear and multilinear estimates. Communications on Pure & Applied Analysis, 2018, 17 (2) : 627-646. doi: 10.3934/cpaa.2018034
Zhong Wang. Stability of Hasimoto solitons in energy space for a fourth order nonlinear Schrödinger type equation. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7) : 4091-4108. doi: 10.3934/dcds.2017174
Jan Boman, Vladimir Sharafutdinov. Stability estimates in tensor tomography. Inverse Problems & Imaging, 2018, 12 (5) : 1245-1262. doi: 10.3934/ipi.2018052
Felipe Alvarez, Juan Peypouquet. Asymptotic equivalence and Kobayashi-type estimates for nonautonomous monotone operators in Banach spaces. Discrete & Continuous Dynamical Systems - A, 2009, 25 (4) : 1109-1128. doi: 10.3934/dcds.2009.25.1109
Tetsu Mizumachi, Dmitry Pelinovsky. On the asymptotic stability of localized modes in the discrete nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - S, 2012, 5 (5) : 971-987. doi: 10.3934/dcdss.2012.5.971
M. Burak Erdoǧan William R. Green
|
CommonCrawl
|
63rd Annual Meeting of the APS Division of Fluid Dynamics
Sunday–Tuesday, November 21–23, 2010; Long Beach, California
Session RA: Turbulence Theory III
Chair: Carl H. Gibson, University of California, San Diego
Room: Long Beach Convention Center 101A
RA.00001: Effect of scalar-field boundary conditions on the Markovian properties of passive scalar increments
Jason Lepore, Laurent Mydlarski
Lepore and Mydlarski (2009, {\it Phys. Rev. Lett.}, {\bf 103}, 034501) recently investigated the influence of the scalar-field boundary conditions on the inertial-convective-range scaling exponents of the high-order passive scalar structure functions ($\xi_n$). The latter was accomplished by injecting the scalar field into the flow (i.e., the turbulent wake of a circular cylinder) using two different scalar injection methods: (i) heating the cylinder, and (ii) using a ``mandoline''. The authors concluded that all previous estimates of $\xi_{n}$ are sensitive to the scalar-field boundary conditions, given the finite Reynolds numbers of the flows under consideration, and therefore do not constitute a universal measure of the internal intermittency of the passive scalar field. The present work examines the Markovian properties of passive scalar increments, and their dependence on the scalar injection method, to provide additional insight into the small-scale structure of the turbulent passive scalar. In particular, the current research examines the relationship between the high-order terms of the Kramers-Moyal expansion and the internal intermittency of the passive scalar field.
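As a concrete illustration of the Kramers-Moyal approach mentioned above, the sketch below estimates the first two conditional (drift and diffusion) coefficients of scalar increments between two separation scales from a single scalar series. It is a minimal sketch under simplifying assumptions: the function name, the binning, and the sample-count threshold are illustrative choices, not the authors' actual analysis, which also involves higher-order terms and explicit Markov tests.

```python
import numpy as np

def km_coefficients(theta, r1, r2, nbins=40, min_count=50):
    """Estimate the first two Kramers-Moyal coefficients D1 (drift) and
    D2 (diffusion) of scalar increments between scales r2 > r1, by
    conditioning the scale-to-scale change on the large-scale increment.
    Illustrative sketch only; names and thresholds are assumptions."""
    dz1 = theta[r1:] - theta[:-r1]        # increments at the smaller scale r1
    dz2 = theta[r2:] - theta[:-r2]        # increments at the larger scale r2
    n = min(len(dz1), len(dz2))
    dz1, dz2, dr = dz1[:n], dz2[:n], r2 - r1
    edges = np.linspace(dz2.min(), dz2.max(), nbins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    D1 = np.full(nbins, np.nan)
    D2 = np.full(nbins, np.nan)
    for i in range(nbins):
        mask = (dz2 >= edges[i]) & (dz2 < edges[i + 1])
        if mask.sum() >= min_count:       # require enough samples per bin
            step = dz1[mask] - dz2[mask]  # change when moving to scale r1
            D1[i] = step.mean() / dr
            D2[i] = (step ** 2).mean() / (2.0 * dr)
    return centers, D1, D2
```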
RA.00002: Tomographic Particle-Image Velocimetry To Investigate Dissipation Elements
Lisa Schaefer, Uwe Dierksheide, Wolfgang Schroeder
A new method to describe the nature of turbulence has been proposed by Wang and Peters (JFM 2006). Based on fluctuating scalar fields, local minimum and maximum points are determined via gradient trajectories starting from every grid point in the direction of the steepest ascending and descending gradients. Then, so-called dissipation elements are identified as the region of all the grid points whose trajectories share the same pair of minimum and maximum points. The statistical properties of these space-filling elements are evaluated focusing on the linear distance and the scalar difference between their extrema. The procedure is also applied to various DNS fields using u', v', w', and k' as scalar fields (Wang and Peters JFM 2008). In this spirit, dissipation elements are derived from experimental 3D velocity data of a fully developed turbulent channel flow gained by Tomographic Particle-Image Velocimetry. The statistical results, inter alia, regarding the distribution of the element length are compared to those from the DNS.
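To make the element-identification procedure concrete, here is a minimal toy version in two dimensions: every grid point is walked uphill and downhill along discrete steepest gradients, and points sharing the same (minimum, maximum) pair receive the same element label. This assumes 8-neighbor moves on the raw grid with no trajectory interpolation, so it is only a sketch of the Wang and Peters construction, not the pipeline used for the DNS or PIV data.

```python
import numpy as np

def dissipation_elements(phi):
    """Label dissipation elements of a 2-D scalar field phi: each grid point
    gets the id of the (local minimum, local maximum) pair reached by
    discrete steepest descent and ascent from that point."""
    ny, nx = phi.shape
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1),
               (-1, -1), (-1, 1), (1, -1), (1, 1)]

    def walk(j, i, sign):
        # Follow the steepest gradient (+1: ascent, -1: descent) until
        # no neighbor improves sign*phi, i.e., a local extremum is reached.
        while True:
            best, arg = sign * phi[j, i], (j, i)
            for dj, di in offsets:
                jj, ii = j + dj, i + di
                if 0 <= jj < ny and 0 <= ii < nx and sign * phi[jj, ii] > best:
                    best, arg = sign * phi[jj, ii], (jj, ii)
            if arg == (j, i):
                return arg
            j, i = arg

    labels = {}                      # (min point, max point) -> element id
    element = np.empty((ny, nx), dtype=int)
    for j in range(ny):
        for i in range(nx):
            key = (walk(j, i, -1), walk(j, i, +1))
            element[j, i] = labels.setdefault(key, len(labels))
    return element, labels           # element lengths and scalar differences follow from labels
```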
RA.00003: Forced Turbulence, Multiscale Dynamics, and Variational Principles
Haris J. Catrakis
We consider theoretically fundamental aspects of forced and unforced turbulence, with emphasis on the multiscale properties of turbulent level crossings and on their connections to variational principles. The connection between power spectral exponents and level crossing scales is explored in forced as well as unforced turbulence. Also, the connection between variational principles and the behavior of level crossing scales is investigated in both forced and unforced turbulence. In addition, we test our theoretical considerations using computations and visualizations.
RA.00004: Recent Analytical and Numerical Results for The Navier-Stokes-Voigt Model and Related Models
Adam Larios, Edriss Titi, Mark Petersen, Beth Wingate
The equations which govern the motions of fluids are notoriously difficult to handle both mathematically and computationally. Recently, a new approach to these equations, known as the Voigt-regularization, has been investigated as both a numerical and analytical regularization for the 3D Navier-Stokes equations, the Euler equations, and related fluid models. This inviscid regularization is related to the alpha-models of turbulent flow; however, it overcomes many of the problems present in those models. I will discuss recent work on the Voigt-regularization, as well as a new criterion for the finite-time blow-up of the Euler equations based on their Voigt-regularization. Time permitting, I will discuss some numerical results, as well as applications of this technique to the Magnetohydrodynamic (MHD) equations and various equations of ocean dynamics.
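For orientation, the three-dimensional Navier-Stokes-Voigt system referred to in this abstract is commonly written as

$$ \partial _{t}\bigl (u-\alpha ^{2}\Delta u\bigr )+(u\cdot \nabla )u+\nabla p=\nu \Delta u+f, \qquad \nabla \cdot u=0, $$

where $\alpha >0$ is the regularization length scale; setting $\alpha =0$ formally recovers the Navier-Stokes equations, and taking $\nu =0$ yields the inviscid Euler-Voigt regularization. This display is added for reference and does not appear in the original abstract.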
RA.00005: Is drift-wave turbulence intermittent from a Lagrangian point of view?
Kai Schneider, Benjamin Kadoch, Wouter Bos
Lagrangian velocity statistics of dissipative drift-wave turbulence are investigated by means of direct numerical simulation in the context of the Hasegawa-Wakatani model. For large values of the adiabaticity (or small collisionality), the probability density function of the Lagrangian acceleration shows exponential tails, as opposed to the stretched exponential or algebraic tails, generally observed for the highly intermittent acceleration of Navier-Stokes turbulence. This exponential distribution is shown to be a robust feature independent of the Reynolds number. For small adiabaticity, algebraic tails are observed, suggesting the strong influence of point-vortex-like dynamics on the acceleration. A causal connection is found between the shape of the probability density function and the auto-correlation of the norm of the acceleration. For further details we refer to Bos et al., Physica D 239, 2010 and Kadoch et al., Phys. Rev. Lett., 2010, in press.
RA.00006: Maximum Enstrophy Growth in Burgers Equation
Diego Ayala, Bartosz Protas
The regularity of solutions of the Navier--Stokes equation is controlled by the boundedness of the enstrophy $\mathcal{E}$. The best estimate for its rate of growth is $d\mathcal{E}/dt \leq C\mathcal{E}^{3}$, for $C>0$, leading to the possibility of a finite--time blow--up when straightforward time integration is used. Recent numerical evidence by Lu \& Doering (2008) supports the sharpness of the instantaneous estimate. Thus, the central question is how to extend the instantaneous estimate to a finite--time estimate in a way that will incorporate the dynamics imposed by the PDE. We state the problem of saturation of finite--time estimates for the enstrophy growth as a PDE--constrained optimization problem, using the Burgers equation as a ``toy model''. The following problem is solved numerically: \begin{displaymath} \max_{\phi}[\mathcal{E}(T) - \mathcal{E}(0)]\quad\mbox{subject to}\quad\mathcal{E}(0) = \mathcal{E}_0 \end{displaymath} where $\phi$ represents the initial data for the Burgers equation, for a wide range of values of $T>0$ and $\mathcal{E}_0$, finding that the maximum enstrophy growth in finite time scales as $\mathcal{E}^{\alpha}_0$ with $\alpha\approx 3/2$, an exponent different from $\alpha = 3$ obtained by analytic means.
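The following sketch illustrates only the forward part of such a computation: it integrates the viscous Burgers equation pseudo-spectrally and records the enstrophy $\mathcal{E}(t)$, so that the growth $\mathcal{E}(T)-\mathcal{E}(0)$ can be measured for a given initial condition. The parameters, resolution, and time stepper are illustrative placeholders; the adjoint-based optimization over the initial data described in the abstract is not reproduced here.

```python
import numpy as np

def burgers_enstrophy(u0, nu=1e-3, dt=1e-4, steps=2000, L=2 * np.pi):
    """Integrate the periodic viscous Burgers equation u_t + u u_x = nu u_xx
    with a pseudo-spectral RK4 scheme and record the enstrophy
    E(t) = 0.5 * integral of u_x^2 dx at every step."""
    n = len(u0)
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral derivative factors

    def rhs(u):
        uh = np.fft.fft(u)
        ux = np.real(np.fft.ifft(k * uh))
        uxx = np.real(np.fft.ifft(k ** 2 * uh))
        return -u * ux + nu * uxx

    def enstrophy(u):
        ux = np.real(np.fft.ifft(k * np.fft.fft(u)))
        return 0.5 * np.sum(ux ** 2) * (L / n)

    u, E = u0.copy(), [enstrophy(u0)]
    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        E.append(enstrophy(u))
    return u, np.array(E)   # E[-1] - E[0] is the finite-time enstrophy growth
```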
RA.00007: Turbulence in more than two and less than three dimensions
Antonio Celani, Dario Vincenzi, Stefano Musacchio
We investigate the behavior of turbulent systems in geometries with one compactified dimension. A novel phenomenological scenario dominated by the splitting of the turbulent cascade emerges both from the theoretical analysis of passive scalar turbulence and from direct numerical simulations of Navier-Stokes turbulence. (Phys. Rev. Lett. 104, 184506 (2010), J. Stat. Phys. 138, 579-597 (2010))
The Three-Body Problem :
Author - Cixin Liu
Publisher - not listed
Pages - 416
Detail - 1967: Ye Wenjie witnesses Red Guards beat her father to death during China's Cultural Revolution. This singular event will shape not only the rest of her life but also the future of mankind. Four decades later, Beijing police ask nanotech engineer Wang Miao to infiltrate a secretive cabal of scientists after a spate of inexplicable suicides. Wang's investigation will lead him to a mysterious online game and immerse him in a virtual world ruled by the intractable and unpredictable interaction of its three suns. This is the Three-Body Problem and it is the key to everything: the key to the scientists' deaths, the key to a conspiracy that spans light-years and the key to the extinction-level threat humanity now faces.
Remembrance of Earth's Past : The Three-Body Trilogy (The Three-Body Problem, The Dark Forest, Death's End)
Publisher - Tor Books
Pages - 1472
ISBN - 076539748X
Detail - This discounted ebundle of the Three-Body Trilogy includes: The Three-Body Problem, The Dark Forest, Death's End "Wildly imaginative, really interesting." —President Barack Obama The Three-Body trilogy by New York Times bestseller Cixin Liu keeps you riveted with high-octane action, political intrigue, and unexpected twists in this saga of first contact with the extraterrestrial Trisolaris. The Three-Body Problem — An alien civilization on the brink of destruction captures the signal and plans to invade Earth. Meanwhile, on Earth, different camps start forming, planning to either welcome the superior beings and help them take over a world seen as corrupt, or to fight against the invasion. The Dark Forest — In The Dark Forest, the aliens' human collaborators may have been defeated, but the presence of the sophons, the subatomic particles that allow Trisolaris instant access to all human information remains. Humanity responds with the Wallfacer Project, a daring plan that grants four men enormous resources to design secret strategies, hidden through deceit and misdirection from Earth and Trisolaris alike. Three of the Wallfacers are influential statesmen and scientists, but the fourth is a total unknown. Luo Ji, an unambitious Chinese astronomer and sociologist, is baffled by his new status. All he knows is that he's the one Wallfacer that Trisolaris wants dead. Death's End — Half a century after the Doomsday Battle, Cheng Xin, an aerospace engineer from the early 21st century, awakens from hibernation in this new age. She brings with her knowledge of a long-forgotten program dating from the beginning of the Trisolar Crisis, and her very presence may upset the delicate balance between two worlds. Will humanity reach for the stars or die in its cradle? Other Books by Cixin Liu (Translated to English) The Remembrance of Earth's Past The Three-Body Problem The Dark Forest Death's End Other Books Ball Lightning At the Publisher's request, this title is being sold without Digital Rights Management Software (DRM) applied.
Poincaré and the Three Body Problem :
Author - June Barrow-Green
Publisher - American Mathematical Soc.
Detail - The idea of chaos figures prominently in mathematics today. It arose in the work of one of the greatest mathematicians of the late 19th century, Henri Poincaré, on a problem in celestial mechanics: the three body problem. This ancient problem - to describe the paths of three bodies in mutual gravitational interaction - is one of those which is simple to pose but impossible to solve precisely. Poincaré's famous memoir on the three body problem arose from his entry in the competition celebrating the 60th birthday of King Oscar of Sweden and Norway. His essay won the prize and was set up in print as a paper in Acta Mathematica when it was found to contain a deep and critical error. In correcting this error Poincaré discovered mathematical chaos, as is now clear from Barrow-Green's pioneering study of a copy of the original memoir annotated by Poincaré himself, recently discovered in the Institut Mittag-Leffler in Stockholm. "Poincaré and the Three Body Problem" opens with a discussion of the development of the three body problem itself and Poincaré's related earlier work. The book also contains intriguing insights into the contemporary European mathematical community revealed by the workings of the competition. After an account of the discovery of the error and a detailed comparative study of both the original memoir and its rewritten version, the book concludes with an account of the final memoir's reception, influence and impact, and an examination of Poincaré's subsequent highly influential work in celestial mechanics.
The Three-Body Problem :
Author - Mauri Valtonen
Publisher - Cambridge University Press
Detail - This book surveys statistical and perturbation methods for the solution of the general three body problem.
The Redemption of Time :
Author - Baoshu
Publisher - Head of Zeus Ltd
ISBN - 1788542193
Detail - Published with the blessing of Cixin Liu, The Redemption of Time extends the astonishing universe conjured by the Three-Body Trilogy. Death is no release for Yun Tianming – merely the first step on a journey that will place him on the frontline of a war that has raged since the beginning of time. At the end of the fourth year of the Crisis Era, Yun Tianming died. He was flash frozen, put aboard a spacecraft and launched on a trajectory to intercept the Trisolaran First Fleet. It was a desperate plan, a Trojan gambit almost certain to fail. But there was an infinitesimal chance that the aliens would find rebooting a human irresistible, and that someday, somehow, Tianming might relay valuable information back to Earth. And so he did. But not before he betrayed humanity. Now, after millennia in exile, Tianming has a final chance at redemption. A being calling itself The Spirit has recruited him to help wage war against a foe that threatens the existence of the entire universe. A challenge he will accept, but this time Tianming refuses to be a mere pawn... He has his own plans. Published with the blessing of Cixin Liu, The Redemption of Time extends the astonishing universe conjured by the Three-Body Trilogy. You'll discover why the universe is a 'dark forest', and for the first time, you'll come face-to-face with a Trisolaran...
The Three Body Problem :
Author - Catherine Shaw
Publisher - Allison & Busby
Detail - Cambridge, 1888. When schoolmistress Vanessa Duncan learns of a murder at St John's College, little does she know that she will become deeply entangled in the mystery. Dr Geoffrey Akers, Fellow in Pure Mathematics, has been found dead, struck down by a violent blow to the head. What could provoke such a brutal act? Vanessa, finding herself in amongst Cambridge's brightest scholarly minds, discovers that the motive may lie in mathematics itself. Drawn closer to the case by a blossoming friendship with mathematician Arthur Weatherburn, Vanessa begins to investigate. When she learns of Sir Isaac Newton's elusive 'n-body problem' and the prestigious prize offered to anyone with a solution, things begin to make sense. But with further deaths occurring and the threat of an innocent man being condemned, Vanessa must hurry with her calculations...
The Dark Forest :
Death's End :
Publisher - Macmillan
Detail - With The Three-Body Problem, English-speaking readers got their first chance to experience the multiple-award-winning and bestselling Three-Body Trilogy by China's most beloved science fiction author, Cixin Liu. Three-Body was released to great acclaim including coverage in The New York Times and The Wall Street Journal. It was also named a finalist for the Nebula Award, making it the first translated novel to be nominated for a major SF award since Italo Calvino's Invisible Cities in 1976. Now this epic trilogy concludes with Death's End. Half a century after the Doomsday Battle, the uneasy balance of Dark Forest Deterrence keeps the Trisolaran invaders at bay. Earth enjoys unprecedented prosperity due to the infusion of Trisolaran knowledge. With human science advancing daily and the Trisolarans adopting Earth culture, it seems that the two civilizations will soon be able to co-exist peacefully as equals without the terrible threat of mutually assured annihilation. But the peace has also made humanity complacent. Cheng Xin, an aerospace engineer from the early 21st century, awakens from hibernation in this new age. She brings with her knowledge of a long-forgotten program dating from the beginning of the Trisolar Crisis, and her very presence may upset the delicate balance between two worlds. Will humanity reach for the stars or die in its cradle?
The Wall of Storms :
Author - Ken Liu
Publisher - Simon and Schuster
Detail - In the much-anticipated sequel to the "magnificent fantasy epic" (NPR) Grace of Kings, Emperor Kuni Garu is faced with the invasion of an invincible army in his kingdom and must quickly find a way to defeat the intruders. Kuni Garu, now known as Emperor Ragin, runs the archipelago kingdom of Dara, but struggles to maintain progress while serving the demands of the people and his vision. Then an unexpected invading force from the Lyucu empire in the far distant west comes to the shores of Dara—and chaos results. But Emperor Kuni cannot go and lead his kingdom against the threat himself with his recently healed empire fraying at the seams, so he sends the only people he trusts to be Dara's savvy and cunning hopes against the invincible invaders: his children, now grown and ready to make their mark on history.
Three-body Problem Set :
Publisher - Remembrance of Earth's Past
The Integral Manifolds of the Three Body Problem :
Author - Christopher Keil McCord
Pages - 91
Detail - The phase space of the spatial three-body problem is an open subset in ${\mathbb R}^{18}$. Holding the ten classical integrals of energy, center of mass, linear and angular momentum fixed defines an eight dimensional submanifold. For fixed nonzero angular momentum, the topology of this manifold depends only on the energy. This volume computes the homology of this manifold for all energy values. This table of homology shows that for negative energy, the integral manifolds undergo seven bifurcations. Four of these are the well-known bifurcations due to central configurations, and three are due to 'critical points at infinity'. This disproves Birkhoff's conjecture that the bifurcations occur only at central configurations.
The Three-Body Problem :
Author - C. Marchal
Publisher - Elsevier
Detail - Recent research on the theory of perturbations, the analytical approach and the quantitative analysis of the three-body problem have reached a high degree of perfection. The use of electronics has aided developments in quantitative analysis and has helped to disclose the extreme complexity of the set of solutions. This accelerated progress has given new orientation and impetus to the qualitative analysis that is so complementary to the quantitative analysis. The book begins with the various formulations of the three-body problem, the main classical results and the important questions and conjectures involved in this subject. The main part of the book describes the remarkable progress achieved in qualitative analysis which has shed new light on the three-body problem. It deals with questions such as escapes, captures, periodic orbits, stability, chaotic motions, Arnold diffusion, etc. The most recent tests of escape have yielded very impressive results and border very close on the true limits of escape, showing the domain of bounded motions to be much smaller than was expected. An entirely new picture of the three-body problem is emerging, and the book reports on this recent progress. The structure of the solutions for the three-body problem lead to a general conjecture governing the picture of solutions for all Hamiltonian problems. The periodic, quasi-periodic and almost-periodic solutions form the basis for the set of solutions and separate the chaotic solutions from the open solutions.
Cultural significance of the flora of a tropical dry forest in the Doche vereda (Villavieja, Huila, Colombia)
Jeison Herley Rosero-Toro1,
Luz Piedad Romero-Duque1,
Dídac Santos-Fita ORCID: orcid.org/0000-0001-7347-84762 &
Felipe Ruan-Soto3
In Colombia, ethnobotanical studies regarding plant cultural significance (CS) in tropical dry forests are scarce and mainly focused on the Caribbean region. Different authors have indicated that the plants with the most uses are those of greater cultural importance. Additionally, gender differences in knowledge of and interest in natural resources have been widely recorded. This study evaluated the cultural significance of plants in the Doche community, in the Department of Huila. Furthermore, it evaluates the richness of plant knowledge among local inhabitants, seeking to test the hypotheses that the CS of plants positively correlates with the number of uses people report, and that there are significant differences in the richness of ethnobotanical knowledge between men and women in this community.
The ethnobotanical categories "food," "condiment," "economy," "fodder," "firewood," "timber," "medicine," and "others" were established to carry out semi-structured interviews, social cartography, and ethnobotanical walks. The frequency of mention was calculated as a measure of CS. The richness of knowledge of each collaborator was obtained. Non-parametric tests were performed to determine whether differences existed in the numbers of mentioned species between genders and between ethnobotanical categories. Finally, Pearson correlation tests determined the relationship between CS and the number of ethnobotanical categories.
A hundred useful species were registered in crops and forests. The most abundant categories were medicinal (45 species), firewood (30), and fodder (28). The most culturally significant species according to frequency of mention were Pseudosamanea guachapele, Guazuma ulmifolia, Manihot esculenta, and Musa balbisiana. The species with the most registered uses (five) were Guazuma ulmifolia and Gliricidia sepium. We found a correlation between CS and the number of uses per ethnobotanical category, but no significant difference between genders regarding ethnobotanical knowledge.
Frequency of mention provides relevant information about the CS of species. Furthermore, it helps establish the sustainable use of tropical dry forests without loss of resources, based on strategies designed within the Doche community and grounded in its ethnobotanical knowledge. We found that the number of uses of a plant is correlated with its degree of cultural importance. On the other hand, no significant differences were found between genders regarding ethnobotanical knowledge; that is, both men and women have similar roles in the community, which allows them to recognize the same uses per species.
The services provided by ecosystems are valued differently by different actors according to socio-ecological contexts and cultural and economic interests [1, 2]. The measure of this value can be ecologic, economic, and socio-cultural [3]. This last category has become a relevant tool to learn of the significance and the benefits that ecosystems provide to human communities, resulting in a combination of social perception and the capacity of an ecosystem to satisfy the needs of human groups [4,5,6]. While this information does not necessarily mean a monetary estimation of the services, it does reflect the relevance of the services provided [4]. Thus, cultural significance can be a valuable tool for such purposes of evaluating ecosystem services [7].
The cultural significance of a species has been defined by Hunn [8] as the value or role it has within a particular community; this includes species with high and low relevance for a social group, and it may vary according to the use and appreciation of a species by people [9, 10]. Research on cultural significance has drawn on different approaches and methods: participant techniques, group interviews, community workshops, participant observation, academic and community group opinions, and others [11, 12]. From quantitative ethnobotany, numerous ways to evaluate the significance of a particular taxon have been proposed, the most popular of which are those based on informant consensus [13, 14]. The most popular indicators in these cases include frequency of mention [15].
Different ethnobotanical studies have speculated about the features that highly culturally significant plant species must share. Aspects such as availability [16,17,18], features of their biological cycle [19], or other specific features such as biomass and size [20], to mention a few, have been explored as factors that might explain cultural significance. For some researchers, the more uses a plant has (either as food, construction material, medicine, religious objects, or any other), the greater its degree of cultural significance [16, 21, 22]. Therefore, the sum of uses is a very widespread method, which allows researchers to quickly quantify the importance of species [22].
On the other hand, indexes like the Knowledge Richness Index have been used to evaluate the degree of knowledge a user has about the possibilities of their useful flora and whether significant differences exist in the knowledge of different socio-demographic sectors [23, 24], given that clearly different social groups have different roles that could affect the amount and quality of knowledge of useful flora [24,25,26]. For gender, different authors have reported that preferences on plant species, as well as the general interest in natural resources, may be different in men and women and, therefore, both have different priorities in the management of natural resources [27]. Particularly from ecofeminism, it is postulated that women, through their daily activities, have a more intense bond with their environment, which makes them carriers of a special interest in the conservation of nature and gives them extensive knowledge about the natural resources that surround their communities [28].
In Colombia, ethnobotanical studies have dealt with inventories of useful flora [29,30,31] and agro-biodiversity in traditional production systems [32, 33]. Studies of ethnobotany in tropical dry forests are scarce and have been focused on the Caribbean region, mainly reporting inventories of uses and vernacular names for useful plants [34,35,36]. Thus far, research in the department of Huila has been mainly centered on analyzing the functional and nutritional properties of Passifloraceae [37], as well as identifying the non-timber forest resources with the greatest potential for medicine and commerce in the mid and lower basins of Las Ceibas river [38]. In the Doche vereda, where this study was carried out, around 35.3% of the tropical dry forest remains [39]. The strong pressure put on this ecosystem through timber, fodder, and firewood extraction, along with the agricultural activities supporting economic and subsistence needs, has made ecosystem services less available and accessible to the community. Furthermore, no studies have evaluated the most culturally significant species in this community, the features these plants share, or whether knowledge differs between men and women.
In this context the following questions arise: What are the used species in the Doche community with the greater cultural significance (CS)? Can the number of uses a plant has be a factor that explains its CS? And, is the richness of knowledge about these species different for men and women? This study aims to determine the CS of different used plants in the Doche community in the Huila Department in Colombia. Furthermore, it evaluates the richness of plant knowledge among its inhabitants, testing the hypothesis that the CS of plants positively correlates to the number of uses they are given and that there are significant differences on the richness of ethnobotanical knowledge between men and women in this community.
This study was carried out in the Doche vereda, old Doche Hacienda located in the eastern portion of the municipality of Villavieja (Huila, Colombia) located at 3° 17′ 5.07″ North and 75° 3' 25.11″ West (Fig. 1), with an extent of 3870.6 ha [40]. The area has a transitional climate spanning from warm-dry to warm-very dry; the mean monthly temperature is 28 °C with scarce rains. The vegetation is that of a tropical dry forest, where average annual temperature is ≥ 25 °C, annual rains span from 700 to 2000 mm, and there are three or more dry months during the year (rain < 100 mm/month) [41].
Location of Doche vereda (Villavieja, Huila, Colombia). Image by Trejo-Rangel (2017)
Fandiño and Wyngaarden [40] registered a population of 79 inhabitants, and according to the Development Plan for Villavieja [42], 18 residences can be found there. According to estimations in the field, there are around 60 people currently living in Doche. Agriculture and stockbreeding grew following the creation of an irrigation system, but these activities have since decreased. Because of this situation, people have migrated to other regions within this department. The residents of this area depend on the sale of rice, bananas, sweet potato, and cocoa, and, to a lesser degree, on goat and sheep stockbreeding for family sustenance.
Between December 2015 and August 2016, we worked with 18 people (seven men and 11 women) belonging to 12 families. The community authorized the use of all data obtained via the proposed methodologies through prior, free, and informed consent. The participants were selected following three criteria: (a) being current residents of Doche, (b) carrying out activities related to natural resource use in the study area, and (c) having time availability to participate in the project.
To register the cultural value of useful flora in the tropical dry forest, eight ethnobotanical categories were established; these were "food" (cultivated and wild edible species), "condiment" (species used as additional ingredients to prepare food), "economic" (species generating an income by being sold), "fodder" (species used as livestock food), "firewood" (species used for heating to cook food), "timber" (species used for construction to make beams, fences, or other things), "medicine" (species that prevent or cure ailments in humans), and "others" (useful species that are not included in the aforementioned categories). Based on these categories, the semi-structured interviews were carried out [43] among participants to recognize species of cultural significance.
To establish the cultural significance (CS) of plant species, those interviewed listed the most important species in each ethnobotanical category and their frequency of being mentioned was calculated [44,45,46] by adding the number of times each species was mentioned [47, 48].
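A minimal sketch of this tally, assuming hypothetical free-list data keyed by informant and category (the informant codes and species names below are illustrative, not field records), could look as follows:

```python
from collections import Counter

# Hypothetical free-list data: species mentioned per informant and category
# (informant codes and species names are illustrative, not field data).
interviews = {
    "MI06": {"fodder": ["guacimo", "leucaena"], "firewood": ["payande"]},
    "IG16": {"fodder": ["guacimo"], "timber": ["cedro", "matarraton"]},
}

mentions = Counter()
for informant, categories in interviews.items():
    # Count each species at most once per informant, so the maximum possible
    # frequency equals the number of informants interviewed.
    cited = {sp for species in categories.values() for sp in species}
    mentions.update(cited)

for species, freq in mentions.most_common():
    print(species, freq)   # frequency of mention, used here as the CS measure
```

Counting each species at most once per informant caps the frequency at the number of informants, consistent with the maximum of 18 mentions reported in the Results for 18 collaborators.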
Additionally, the Knowledge Richness Index (RQZ) was calculated to estimate the richness in the knowledge of each person about the uses of plants in their region [23]. For accomplishing such purpose, the following equation was used:
$$ \mathrm{RQZ}=\frac{\mathrm{EU}}{\mathrm{maximum\ EU\ value}} $$
In this equation, EU = the number of useful species to make up for services reported by a participant and maximum EU value = total number of useful species needed to make up for reported services in the region by all participants. The value of this index varies between zero and one, where one represents the maximum knowledge of useful plants in the region.
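As a worked illustration, a small helper computing RQZ from per-informant species sets might look like this (the names and data are hypothetical):

```python
def rqz(per_informant_species):
    """RQZ = EU / maximum EU value, where EU is the number of useful species
    an informant reported and the maximum is the total number of useful
    species reported by all participants together."""
    all_species = set().union(*per_informant_species.values())
    max_eu = len(all_species)
    return {person: len(species) / max_eu
            for person, species in per_informant_species.items()}

# Example with hypothetical data:
# rqz({"MI06": {"guacimo", "yuca"}, "IG16": {"guacimo"}})
```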
The results from the interviews were analyzed with non-parametric Mann-Whitney U tests [49] to determine whether differences exist in the number of mentioned species between genders and between ethnobotanical categories. Likewise, Pearson correlations were calculated to determine whether species with a greater CS (obtained through the frequency of mention) were also those included in a greater number of ethnobotanical categories. These statistical analyses were performed using Minitab 16. Additionally, a Principal Coordinates Analysis (PCO) was carried out to determine the similarity of the reported useful flora between interviewed people and ethnobotanical categories. This analysis was carried out using version 2.11 of the Numerical Taxonomy and Multivariate Analysis System (NTSYSPC) software [50].
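A comparable analysis can be reproduced with open tools; the sketch below runs a Mann-Whitney U test and a Pearson correlation with SciPy on made-up numbers of the same shape as the study's data (the arrays are illustrative placeholders, not the interview results):

```python
import numpy as np
from scipy import stats

# Hypothetical per-informant counts of mentioned species, split by gender.
men = np.array([18, 22, 30, 15, 27, 20, 33])
women = np.array([25, 19, 28, 31, 17, 24, 29, 21, 35, 16, 23])
u_stat, p_gender = stats.mannwhitneyu(men, women, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_gender:.4f}")

# CS (frequency of mention) against the number of ethnobotanical categories
# per species; placeholder values loosely echoing the reported frequencies.
freq_mention = np.array([18, 17, 16, 16, 15, 15, 9, 6])
n_categories = np.array([3, 5, 2, 2, 3, 3, 2, 1])
r, p_corr = stats.pearsonr(freq_mention, n_categories)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")
```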
Finally, a social cartography workshop was carried out [51] to find the areas from which plant species are extracted. To record the vernacular names of the known vegetation, ethnobotanical walks were taken in the usage areas in order to identify useful species [51]. Plant material that was not identified in the field was collected and photographed as botanical records. The determination of plants was carried out in the biological collection at the Universidad de Ciencias Aplicadas y Ambientales (UDCA) in Bogota, using the TROPICOS platform to corroborate current scientific names. These specimens were not included in the biological collection because they lacked the minimal required features, such as flowers and/or fruits.
We found that the areas in which species are used by the Doche community are croplands and cattle areas up to a hectare in extension. These are located in tree-covered savannah and the banks of Cabrera river (Fig. 2a, b), and the wooded area spanning from the tree-covered savannah to the conservation area called Cerro Saltarén, where logging and herding activities take place (Fig. 2c, d). In croplands, 58 plant species are used, while 34 species come from forests; eight species can be found in both environments (Appendix).
Areas in which vegetable species are used in the Doche community (Villavieja, Huila), both croplands and forests. a Location of the cropland along the banks of the Cabrera river and b in the tree-covered Savannah. The forest area is located between the c tree-covered Savannah and d Cerro Saltarén (photos by J. H. Rosero-Toro)
Regarding access to the utilized flora, it is currently allowed to log only two individuals of wild timber species per family each year, while no limitations are set for firewood species. Approximately 75% of the families interviewed extract timber once or twice a week, while 25% of them seek this resource three or four times a week. Fodder material is extracted daily by 35% of the families, while 47% of them extract it once or twice a week, and 18% only three to four times a week. In contrast, the interviewed population showed no knowledge of the magnitude of use of species in the food, condiment, economy, medicine, and others categories.
In all, 100 species of used flora were recorded. These belong to 94 genera and 39 families; four species were not taxonomically identified since they were not found in the study area (see Appendix). The most used family was Fabaceae, which included 13 genera and 15 used species, while 24 families included only one used species each. The timber category groups 20 species. Of these, the interviewed population prefers cedro (Cedrela odorata) and coyo (Trichilia sp.); however, these species have a low availability of individuals, so their use is restricted. Thus, species growing on croplands such as bili bil (Guarea guidonia), cachimbo (Erythrina fusca), and matarratón (Gliricidia sepium) are often used instead.
Meanwhile, the firewood category grouped 30 species: varazón (Casearia tremula), balso (Heliocarpus americanus), siete cueros (Machaerium sp.), and tachuelo (Zanthoxylum sp.) are exclusive to this category. Other species of interest were amargoso (Aspidodera cuspa), payandé (Pithecellobium dulce), cachimbo, matarratón, guácimo (Guazuma ulmifolia), and cuchiyuyo (Trichanthera gigantea) (see Appendix). This category shares 13 species with fodder, which contains 28 species. In cropfields, both cultivated species (leucaena and caña) and wild species typical of tropical dry forests (pela and amargoso) are found.
The medicine category grouped the most species, 45, which are used for 32 different treatments. We found that a single species can be used for more than one treatment: such is the case of sábila (Aloe vera), consuelda (Pseudelephantopus spiralis), and chisaca (Tridax procumbens). The most used parts of the plant were twigs (52%) and leaves (26%). The most frequent forms of use were aromatic (57%) and poultice (15%), while the most frequent forms of application were oral (73%), external (15%), and in baths (6%) (Table 1).
Table 1 Ailment, form of preparation, and form of application of useful medicinal plants identified by inhabitants of the Doche vereda (Villavieja, Huila)
In the food category, 20 species were registered. Of these, piñuela (Bromelia pinguin) and cabeza de negro (Melocactus curvispinus) are not cultivated. Thirteen of these are also included in the economy category, among them yuca (Manihot esculenta) and plátano (Musa balbisiana). Commerce of these species is carried out within the community of Doche vereda and in the municipality of Villavieja. The category others grouped 15 species, such as nim (Azadirachta indica), gomo (Cordia dentata), and cruceto (Randia aculeata). These species have diverse uses, such as insecticide, mouse poison, and home utensils. The category with the smallest number of species is condiment, which includes five cultivated species (see Appendix).
Species with the greatest cultural significance according to their frequency of mention were Pseudosamanea guachapele (mentioned 18 times), Guazuma ulmifolia (17), Manihot esculenta and Musa balbisiana (16), and Acacia farnesiana and Pithecellobium dulce (15).
Species with the most uses were Guazuma ulmifolia and Gliricidia sepium each with five registered uses. Other multiple-use species are Trichanthera gigantea, Annona muricata, Cordia dentata and Theobroma cacao, each with four uses and, after these, 15 species with three uses, 26 species with two, and 53 species with only one registered use (see Appendix).
The Pearson correlation coefficient showed a moderately positive relationship (r = 0.64; p < 0.001) between cultural importance and the number of ethnobotanical categories.
On the other hand, the Principal Coordinates Analysis (PCO) separated the interviewees into two groups. Most of the collaborators fell in group A, while group B only included two interviewed persons (JT05, MI06) (Fig. 3). Furthermore, the PCO per ethnobotanical category also formed two groups (Fig. 3b); principal coordinate 1 discriminates the "timber", "firewood", "fodder", and "others" categories, grouping them apart from the "economic" and "food" categories.
Principal Coordinates Analysis (PCO) a per interviewed person in the community of Doche vereda (Villavieja, Huila) and b per ethnobotanical category. Names of the interviewed population correspond to a code and an interview number
The greatest Knowledge Richness (RQZ) was registered in three individuals (MI06, IG16, and JT05), each reaching a value of 0.35 (Table 2). No significant differences between genders were found for RQZ values (p = 0.6184). Similarly, no significant differences were found between genders within ethnobotanical categories (economy, p = 0.4011; fodder, p = 0.2152; firewood, p = 0.3813; timber, p = 0.4876; condiment, p = 0.1872; food, p = 0.3133; medicinal, p = 0.8904; and others, p = 0.5442).
Table 2 Number of uses per ethnobotanical category and Knowledge Richness Index (RQZ) of the useful flora per interviewee of the Doche vereda (Villavieja, Huila)
Of the useful flora in the Doche vereda, the family Fabaceae has the greatest diversity of genera and species, as has previously been found in tropical dry forests of Huila [39, 52,53,54], in the Caribbean, and in the dry valley of the Magdalena river [55, 56]. Plants from this family are considered pioneers in the lowlands of the Neotropics, tropical dry forests, and arid and semi-arid zones [57, 58]. The use of legumes has physiological advantages, since they produce favorable habitats for the establishment of other species [59], have potential as fodder, and are a relevant alternative in the management of areas and soil restoration [29, 52].
Regarding usage areas, the relevance of cropfields proved to be related to covering the immediate needs of the local population, since these areas offer greater availability of, and easier access to, resources [49, 60]. Because of climate conditions in the area, farming these species guarantees resources that are relevant to the community for their role in covering basic needs. The relevance of cropland in peasant communities has been documented by Zuluaga and Ramírez [33], who found that these spaces contain and preserve a high agrodiversity, as well as accumulated knowledge that is a product of the experience of peasants adapting to production systems. These authors also reported 64 species in croplands, a number similar to that found in this work. Regarding the use of species within forests, we found a lower use than that reported for tropical dry forests of the Atlantic, Bolivar, Sucre, and Cesar [36, 43, 56]. This is explained by the subsistence needs of the Doche community, where cultivating species that guarantee certain services such as food, medicine, and fodder, so that they are available yearlong, is a priority.
When we analyzed by ethnobotanical category, we found 20 species within the timber category, which is fewer than what was reported in communities from the Complejo Ciénaga de Zapatosa and in Bailadores, Venezuela [34, 61], but similar to findings by Sanchez et al. [62], who reported 23 species, of which only one (Maclura tinctoria) is cited in this work. The low availability of woody species, resulting from the low rate at which this resource is produced in tropical dry forests (50% slower than in tropical humid forests), has caused the substitution of these resources with cultivated species [63].
Quiroz-Carranza and Orellana [64] indicated the use of 41 species for firewood, which is more than we found in this study. The lower diversity of used species may be due to the preferences of inhabitants of the Doche community when selecting vegetation to use as firewood. The use of leucaena, guanabano, guacimo, and payandé is shared, since these species grow easily in these dry ecosystems and have been reported to be preferred by peasant communities [65]. The recognition of firewood-bearing species can aid the preservation of tropical dry forests by pointing to strategies that guarantee the availability of this resource and thus protect the species. The establishment of multiple-use croplands close to the households [66], the extraction of dry wood from the forest rather than the felling of live trees [64], and the design of stoves that increase the efficiency of firewood and keep spaces free of smoke are some of the suggested strategies to diminish the extraction of this resource from forests and allow its efficient use [67].
The fodder category grouped 28 species, more than what was reported by other studies [34, 35, 56, 68]. The number of species used as fodder in Doche is influenced by the cultivated plants and those collected from forests. Because of the climate conditions of the area, the community has implemented the cultivation of fodder species to avoid the displacement of cattle towards the higher areas of the forest. The use of guacimo [34], cují [62], payandé, and leucaena [69] as fodder species has been previously reported. Additionally, according to studies of the nutritional value of these plants to ruminants, the use of matarratón, yuca, and leucaena coincides with this study [70].
The medicinal category was the most important in the Doche community. When comparing the number of medicinal species reported in previous research in tropical dry forests, we found that the Atlantic, Bolivar, and Cesar have a lower use of species [35, 56], as is the case in the peasant community of Santa Catalina de Chongoyape in Peru [71]. Differences in the number of medicinal species are associated with resource availability, species uses, and the significance a plant has in the community [72].
The main ailments treated with these plants in the study area are epidemiologically frequent diseases in warm zones [73]. For the treatment of illness, those interviewed considered twigs to be the most effective, which coincides with research showing they contain a relatively high concentration of active substances and secondary metabolites [74, 75], particularly in the bark [76]. Although it is well known that other portions of plants contain a much larger concentration of metabolites, twigs are used instead of floral organs and fruits because the latter have a low availability throughout the year, as an adaptation strategy to the tropical dry forest [77, 78]. The use of twigs for medicinal purposes has already been documented in previous studies by Carrillo-Rosario and Moreno [79], Giraldo et al. [80], and Jaramillo et al. [81].
The food category grouped 20 species, which is less than what has been reported in other studies in tropical dry forests [34,35,36]. The use of tropical dry forest edible species in Ecuador was equally low (13 species), while in Mexico, Martínez-Pérez et al. [82] reported 51 species, far surpassing what was found in this study. The apparently small number of used species might be due to the fact that all plants in this category are cultivated. Considering the climate conditions of the area and the need for this resource among its inhabitants, the community is focused on cultivating a small number of species in crop fields.
Out of 20 food species, 13 are also included in the economy category. These are sold in Doche and the populated center of Villavieja. The sale of plant species generates an income that allows communities to obtain other basic needs [33], and it supports programs for the sustainable use of ecosystems, from which conservation strategies emerge [83]. Such was the case of two communities in the Chietla municipality in Puebla, Mexico, where a socio-economic and ecological valuation of the useful flora was generated in order to establish conservation priorities [82].
On the other hand, when we compared the plants grouped in the others category, we did not find coincidences with the flora reported in other studies. Despite this, the relevance of shade plants is widely recognized, as is their multiple use as timber and fodder [34, 35].
Meanwhile, the condiment category reported the lowest number of species: five. The community does not utilize diverse spices to season foods, and all condiment plants are cultivated. This is also reported in communities from the Perijá Mountains, where two species are registered; of these, ají coincides with our findings [35]. In the tropical region of Cesar, Colombia, the same number of species is reported, and the use of cilantro (Coriandrum sativum) and ají (Capsicum annuum) coincides with this study [34].
According to the results from the Pearson correlation test, a significant relationship was found between the number of uses per ethnobotanical category and cultural significance (see Appendix). Thus, species with the highest cultural significance will be those with the most different uses. To some authors, the frequency of mention is a very effective indicator for the evaluation of cultural importance, mainly because it is a quick technique that is relatively easy to carry out; however, it does not say much about the particular importance of the different species [84]. On the other hand, the literature points out that the more uses a plant has (food, medicine, fuel, or any other category), the greater its importance [21, 22]. Different ethnobotanical studies have indicated that the sum of uses can be considered an indicator directly related to the cultural importance of plants, and a quick tool that provides quantitative data to evaluate this phenomenon [16]. Thus, based on the evidence, we can see how the number of uses of a plant is correlated with its degree of cultural importance. This allows us to recognize the relevance of these species and opens the possibility of generating management and preservation strategies in different ecosystems in future studies [85]. Additionally, it would be recommendable to use standardized ethnobotanical categories to allow comparisons between studies and study sites [86].
On the other hand, the Principal Coordinates Analysis per ethnobotanical category showed the relationship between species and ethnobotanical categories. The first principal coordinate grouped species considered edible by the community that also provide families with income ("food" + "economy"; see Fig. 3b). Meanwhile, the second principal coordinate discriminated a greater number of categories, all of which share similar uses and are recognized by most of the interviewed population, such as timber species that can also be fodder for goats and sheep. Accordingly, a single species may be used in more than one category. This has been noted before in the study by Cárdenas and Ramírez [29]; however, to avoid bias in value allocation, it is recommended to cite a species once per category instead of once per use given. This was done in this study according to proposals by Marín-Corba et al. [86] and Sánchez et al. [87].
The degree of knowledge of the useful species among collaborators in the Doche community, as measured through the Knowledge Richness Index, shows no significant differences between genders. According to different authors, there are differences between men and women in knowledge about natural resources [27]. For Tuñón [88], the differences between the uses, access, and control that men and women have of their natural resources are evident. It can even be expected that women hold greater knowledge. To Sánchez-Núñez and Espinosa [89], women have a detailed knowledge about the natural environment that places them in a preponderant position in the administration of community natural resources. However, in the present study there seems to be no gender difference in the richness of knowledge. Although the level of plant knowledge varies, each person holds a portion of the "total" knowledge, and it can change according to necessities and priorities in each community, as Castellanos [23] concludes in his study of the Cane river basin in Iguaque (Boyacá). Predominant economic activities, urbanization, individual roles, and cultural diversity are among the factors influencing how much communities know of their ecosystems and how they use them [61, 90].
Similarly, no significant differences were found between genders within ethnobotanical categories. Communities in greater contact with their ecosystems tend to report the same species [85]. Differences in knowledge and perception of natural resources between men and women have been partially explained as a consequence of the sexual division of labor in traditional societies [27]. Nevertheless, in the Doche community, both men and women carry out similar activities, which is evidenced in this study by their reporting the same useful species. This coincides with reports from Canales et al. [91], who indicated that the number of known plants is related not to the area people inhabit, nor to gender, schooling, occupation, or place of origin, but rather to the role each person plays in a community and the activities they carry out in it. However, certain tendencies can be observed in some categories: for example, medicinal species are more readily recognized by women [26, 46], as are food species [92], while men have deeper knowledge of species used in construction and species that are sold [25]. Voeks and Leony [93] report that women from a rural community in the state of Bahia, Brazil, are significantly better informed than men about the names and medicinal properties of plants. Although our data did not support the hypothesis that there is a difference in the degree of knowledge between genders, within particular categories, such as medicinal plants, we found evidence that this difference does appear. According to the results of the PCO per category, a consensus was found in the information about useful plants, which would suggest homogeneity and preservation of traditional knowledge, contrasting with findings by Albuquerque et al. [13] and Lastres et al. [60], who report traditional knowledge to be dispersed among the people, which might lead to its eventual loss.
Finally, the relevance of culturally significant species has led to the recognition of resource availability and of the knowledge communities have of plants [81, 94]. Additionally, strategies for a better use of ecosystems can be put into practice considering the most relevant species [34]. Cultural valuation should identify, recognize, and accept changes in preferences and the dynamic way in which communities learn, given that people are constantly modifying their ecosystems in search of optimal benefits [95]. This is evident in Doche, a community that has modified its agricultural and livestock-breeding practices to guarantee the long-term availability of the resources their environment provides. Furthermore, people in this community have received training to generate new strategies to utilize their ecosystems, as well as to regulate the use of resources such as the wood that is extracted from the forest.
The usage strategies developed by the people of Doche to thrive in their dry ecosystem lead us to the conclusion that knowledge of the cultural value of plant species is fundamental to pursuing forest preservation without ceasing to use natural resources. The agreements on internal rules to control the extraction of timber, and the cultivation of species for this purpose, are strategies that this community has developed to preserve resources while still covering its basic needs. The establishment of croplands, the limitation of livestock breeding within the forests, and the diversification of species have further contributed to the regeneration of the tropical dry forest.
The cultural valuation measured through frequency of mention allowed us to recognize the cultural significance of species of the tropical dry forest; however, the importance of these species was explained by diverse factors such as the number of uses per ethnobotanical category, availability, and access to the resource. Furthermore, the recognition of ethnobotanical uses by gender showed that in the Doche community men and women know the same species; both genders participate in agricultural endeavors, the collection of wood, herding, the production of food, and selling, as well as in group activities carried out in the community. This has led to a homogeneously distributed and well-preserved ethnobotanical knowledge.
The community is more invested in preserving species with a higher cultural significance because these provide it with basic resources for subsistence. Each person holds a portion of the general knowledge, and this is modified according to immediate needs, as well as the availability of and access to resources. Therein lies the relevance of recognizing useful species, use areas, and the socio-ecological relationships between a population and its ecosystem. Knowledge of useful flora and its cultural valuation represents a relevant step towards the preservation of the tropical dry forest, one of the most fragmented ecosystems in Colombia. The participation of communities in the preservation of this ecosystem is fundamental for strategies that guarantee its long-term conservation and the services it provides to the population.
Ar:
Ba:
Bt:
Co:
Ec:
FA: Form of application
Fd:
Fi:
Fl:
FP: Form of preparation
Ju:
Le:
N/I: No preparation
PCO: Principal Coordinates Analysis
RQZ: Knowledge Richness Index
No common name
Tw:
Costanza R, Farber S. Introduction to the special issue on the dynamics and value of ecosystem services: integrating economic and ecological perspectives. Ecol Econom. 2002; https://doi.org/10.1016/S0921-8009(02)00087-3.
Cowling RM, Egoh B, Knight AT, O'Farrell PJ, Reyers B, Rouget M, et al. An operational model for mainstreaming ecosystem services for implementation. Proc Natl Acad Sci U S A. 2008; https://doi.org/10.1073/pnas.0706559105.
De Groot RS, Wilson MA, Boumans RMJ. A typology for the classification, description and valuation of ecosystem functions, goods and services. Ecol Econom. 2002; https://doi.org/10.1016/S0921-8009(02)00089-7.
Laterra P, Castellarini F, Orúe ME. ECOSER: Un protocolo para la evaluación biofísica de servicios ecosistémicos y la integración con su valor social. In: Laterra P, Jobbágy EG, Paruelo JM, editors. Valoración de Servicios Ecosistémicos: conceptos, herramientas y aplicaciones para el ordenamiento territorial. Buenos Aires: Instituto Nacional de Tecnología Agropecuaria; 2011. p. 359–89.
Bennett EM, Cramer W, Begossi A, Cundill G, Díaz S, Egoh BN, et al. Linking biodiversity, ecosystem services, and human well-being: three challenges for designing research for sustainability. Curr Opin Environ Sustain. 2015; https://doi.org/10.1016/j.cosust.2015.03.007.
Briceño J, Iniguez-Gallardo V, Ravera F. Factores que influyen en la apreciación de servicios ecosistémicos de los bosques secos del sur del Ecuador. Revista Ecosistemas. 2016; https://doi.org/10.7818/ECOS.2016.25-2.06.
García del Valle YG, Naranjo EJ, Caballero J, Martorell C, Ruan-Soto F, Enríquez PL. Cultural significance of wild mammals in mayan and mestizo communities of the Lacandon Rainforest, Chiapas, Mexico. J Ethnobiol Ethnomed. 2015; https://doi.org/10.1186/s13002-015-0021-7.
Hunn ES. The utilitarian factor in folk biological classification. Am Anthropol. 1982;84:830–47.
González-Insuasti MS, Caballero J. Managing plant resources: how intensive can it be? Hum Ecol. 2007;35:303–14.
Bravo-Avilés D. Relación entre la importancia cultural y atributos ecológicos en tres especies de cactáceas en la mixteca poblana. Tesis de Maestría. Universidad Autónoma Metropolitana, Iztapalapa, Mexico. 2011. http://148.206.53.84/tesiuami/UAMI15438.pdf
Vilardy SP, González JA, Martín-López B, Oteros-Rozas E. Los servicios de los ecosistemas de la Reserva de Biosfera Ciénaga Grande de Santa Marta. Revista Iberoamericana de Economía Ecológica. 2012;19:66–83.
Infante-Ramírez KD, Arce-Ibarra AM. Percepción local de los servicios ecológicos y de bienestar de la selva de la zona maya en Quintana Roo, México. Boletín del Instituto de Geografía. 2015;86:67–81.
Albuquerque UP, Lucena RFP, Monteiro JM, Florentino ATN, Almeida CFCBR. Evaluating two quantitative ethnobotanical techniques. Ethnobot Res Appl. 2006;4:51–60.
Tardío J, Pardo-de-Santayana M. Cultural importance indices: a comparative analysis based on the useful wild plants of Southern Cantabria (Northern Spain). Econ Bot. 2008;62:24–39.
Weller SC, Romney AK. Systematic data collection, vol. 10. Newbury Park: Sage publications; 1988.
Phillips OL, Gentry AH. The useful plants of Tambopata Peru: I: statistical hypotheses tests with a new quantitative technique. Econ Bot. 1993;47:15–32.
La Torre-Cuadros MDLA, Islebe GA. Traditional ecological knowledge and use of vegetation in southeastern Mexico: a case study from Solferino, Quintana Roo. Biodivers Conserv. 2003;12:2455–76.
Lucena RFP, Lima Araújo E, Albuquerque UP. Does the local availability of woody Caatinga plants (Northeastern Brazil) explain their use value? Econ Bot. 2007;61:347–61.
González-Insuasti MS, Martorell C, Caballero J. Factors that influence the intensity of non-agricultural management of plant resources. Agrofor Syst. 2008;74:1–15.
Casas A, Caballero J. Traditional management and morphological variation in Leucaena esculenta (Fabaceae: Mimosoideae) in the Mixtec Region of Guerrero, Mexico. Econ Bot. 1996;50:167–81.
Turner NJ. The importance of a rose: evaluating the cultural significance of plants in Thompson and Lillooet Interior Salish. Am Anthropol. 1988;90:272–90.
Phillips OL. Some quantitative methods for analysing ethnobotanical knowledge. In: Alexiades MN, editor. Selected guidelines for ethnobotanical research: a field manual. New York: The New York Botanical Garden; 1996. p. 171–97.
Castellanos LI. Conocimiento etnobotánico, patrones de uso y manejo de plantas útiles en la cuenca del río Cane-Iguaque (Boyacá-Colombia): una aproximación desde los sistemas de uso de la biodiversidad. Ambiente Sociedade. 2011;14:45–75.
Medellín SG, Barrientos L, del Amo Rodríguez S, Almaguer P, Mora SG. Uso de la flora tradicional de la Reserva de la Biosfera El Cielo. Tamaulipas Investigación y Ciencia. 2016;24:32–8.
León M, Cueva P, Aguirre Z, Kvist L. Composición florística, estructura, endemismo y etnobotánica del bosque nativo "El Colorado", en el cantón Puyango, provincia de Loja. Lyonia. 2006;10:105–15.
Vázquez B, Martínez B, Aliphat MM, Aguilar A. Uso y conocimiento de plantas medicinales por hombres y mujeres en dos localidades indígenas en Coyomeapan, Puebla, México. Interciencia. 2011;36:493–9.
Camou A, Reyes-García V, Martínez-Ramos M, Casas A. Knowledge and use value of plant species in a Rarámuri community: a gender perspective for conservation. Hum Ecol. 2007; https://doi.org/10.1007/s10745-007-9152-3.
Velázquez-Gutiérrez M. Hacia la construcción de la sustentabilidad social: ambiente, relaciones de género y unidades domésticas. In: Tuñón E, editor. Género y Medio ambiente. Mexico: Ecosur-Semarnat-Plaza y Valdés; 2003. p. 79–106.
Cárdenas D, Ramírez-A JG. Plantas útiles y su incorporación a los sistemas productivos del departamento del Guaviare (Amazonía Colombiana). Caldasia. 2004;26:95–110.
Pino N, Valois H. Ethnobotany of four black communities of the municipality of Quibdó, Chocó-Colombia. Lyonia. 2004;7:59–68.
Estupiñán-González AC, Jiménez-Escobar ND. Uso de las plantas por grupos campesinos en la franja tropical del Parque Nacional Natural Paramillo (Córdoba, Colombia). Caldasia. 2010;32:21–38.
Jiménez-Escobar ND, Albuquerque UP, Rangel-Ch JO. Huertos familiares en la bahía de Cispatá, Córdoba, Colombia. Bonplandia. 2011;20:309–28.
Zuluaga GP, Ramírez LA. Uso, manejo y conservación de la agrobiodiversidad por comunidades campesinas afrocolombianas en el municipio de Nuquí, Colombia. Etnobiología. 2015;13:5–18.
Cruz MP, Estupiñán AC, Jiménez-Escobar ND, Sánchez N, Galeano G, Linares E. Etnobotánica de la región tropical del Cesar, Complejo Ciénaga de Zapatosa. In: Rangel-Ch JO, editor. Colombia diversidad Biótica VIII: media y baja montaña de la serranía de Perijá. Bogotá: Universidad Nacional de Colombia; 2009. p. 417–47.
Jiménez-Escobar ND, Estupiñán-González AC, Sánchez N, Garzón C. Etnobotánica de la media montaña de la Serranía del Perijá. In: Rangel-Ch JO, editor. Colombia diversidad biótica, media y baja montaña de la serranía del Perijá. Bogotá: Universidad Nacional de Colombia, Instituto de Ciencias Naturales, CORPOCESAR-REVIVE; 2009. p. 393–416.
Barrios-Paternina E, Mercado-Gómez J. Plantas útiles del corregimiento Santa Inés y la vereda San Felipe (San Marcos, Sucre, Colombia). Revista Ciencia en Desarrollo. 2014;5:131–44.
Carvajal LM, Turbay S, Álvarez LM, Rodríguez A, Álvarez M, Bonilla K, Restrepo S, Parra M. Propiedades funcionales y nutricionales de seis especies de pasifloras del departamento del Huila. Caldasia. 2014;36:1–15.
Fajardo SV. Estudio etnobotánico para la identificación del recurso forestal no maderable con mayor potencial medicinal y comercial en la cuenca media y baja del río Las Ceibas en Neiva, Colombia. Entornos. 2015;27:13–25.
Romero-Duque L, Batista-Morales MF, Vargas JA, Jaramillo VJ, Balvanera P, Mocaleano AM. Diversidad y servicios ecosistémicos del Bosque Tropical seco de la cuenca Alta del río Magdalena. Bogotá: Universidad de Ciencias Aplicadas y Ambientales; 2016.
Fandiño M, Wyngaarden W. Zonificación para el manejo del Parque Natural Regional de la Tatacoa. Informe Final de los convenios 300 y 279 de 2009. Neiva: Grupo ARCO; 2010.
Sánchez-Azofeifa GA, Quesada M, Rodríguez JP, Nassar JM, Stoner KE, Castillo A, et al. Research priorities for Neotropical dry forests. Biotropica. 2005;37:477–85.
Plan de desarrollo de Villavieja 2012–2015. Villavieja ¡Unidos por el cambio!; 2012. http://www.villavieja-huila.gov.co/Nuestros_planes.shtml?apc=gbxx-1-&x=2629980.
Guber R. La etnografía: método, campo y reflexividad. Bogotá: Grupo editorial Norma; 2001.
Heinrich M, Ankli A, Frei B, Weimann C, Sticher O. Medicinal plants in Mexico: healers' consensus and cultural importance. Soc Sci Med. 1998;47:1859–71.
Montoya EA. Aprovechamiento de los hongos silvestres comestibles en el volcán la Malinche, Tlaxcala. Tesis de Doctorado. Posgrado en ciencias biológicas, Facultad de ciencias. Universidad Nacional Autónoma de México, Mexico; 2005. http://www.remeri.org.mx/portal/REMERI.jsp?id=oai:tesis.dgbiblio.unam.mx:000345191
Hernández T, Canales M, Caballero J, Durán Á, Lira R. Análisis cuantitativo del conocimiento tradicional sobre plantas utilizadas para el tratamiento de enfermedades gastrointestinales en Zapotitlán de las Salinas, Puebla, México. Interciencia. 2005;30:529–35.
Agelet A, Vallés J. Studies on pharmaceutical ethnobotany in the region of Pallars (Pyrenees, Catalonia, Iberian Peninsula). Part I. General results and new or very rare medicinal plants. J Ethnopharmacol. 2001; https://doi.org/10.1016/S0378-8741(01)00262-8.
Burrola-Aguilar C, Montiel O, Garibay-Orijel R, Zizumbo-Villarreal L. Conocimiento tradicional y aprovechamiento de los hongos comestibles silvestres en la región de Amanalco, Estado de México. Revista mexicana de micología. 2012;35:1–16.
Zar JH. Biostatistical analysis. 2nd ed. New Jersey: Prentice Hall; 1984.
Rohlf FJ. NTSYS-pc: numerical taxonomy and multivariate analysis system, v. 2.01. Setauket, New York: Applied Biostatistics; 2000.
Geilfus F. 80 herramientas para el desarrollo participativo. San José, Costa Rica: Instituto Interamericano de Cooperación para la Agricultura; 2002.
Figueroa Y, Galeano G. Lista comentada de las plantas vasculares del enclave seco interandino de La Tatacoa (Huila, Colombia). Caldasia. 2007;29:263–81.
Llanos F. Flora del desierto de la Tatacoa Municipio de Villavieja (Huila) Colombia. Neiva: Universidad Surcolombiana; 2010.
Olaya A, Gutiérrez GA, editors. La Tribuna, reserva natural en zona petrolera del norte del Huila. 1rd ed. Neiva: Grupo de Investigación Ecosistemas Surcolombianos (ECOSURC), Universidad Surcolombiana; 2014.
Mendoza-C H. Estructura y riqueza florística del bosque seco tropical en la región Caribe y el valle del río Magdalena, Colombia. Caldasia. 1999;21:70–94.
Rodríguez G, Banda-R K, Reyes SP, Estupiñan A. Lista comentada de las plantas vasculares de bosques secos prioritarios para la conservación en los departamentos de Atlántico y Bolívar (Caribe colombiano). Biota Colombiana. 2012;13:7–39.
Fajardo L, Gonzales V, Nassar J, Lacabana P, Portillo CA, Carrasquel F, et al. Tropical dry forests of Venezuela: characterization and current conservation status. Biotropica. 2005;37:531–46.
Albesiano S, Rangel-Ch JO. Estructura del cañón del río Chicamocha, 500–1200 m; Santander-Colombia: Una herramienta para la conservación. Caldasia. 2006;28:307–25.
Larrea-Alcázar DM, López RP, Barrientos D. The nurse-plant effect of Prosopis flexuosa D.C. (Leg-Mim) in a dry valley of the Bolivian Andes. Ecotropicos. 2005;18:89–95.
Lastres M, Ruiz-Zapata T, Castro M, Torrecilla P, Lapp M, Hernández-Chong L, et al. Conocimiento y uso de las plantas medicinales de la comunidad Valle de la Cruz, estado Aragua. Pittieria. 2015;39:59–89.
Aranguren A. Plantas útiles empleadas por los campesinos de la región de Bailadores. Venezuela Boletín Antropológico. 2005;23:139–65.
Sánchez O, Kvist LP, Aguirre Z. Bosques secos en Ecuador y sus plantas útiles. In: Moraes-RM, Øllgaard B, Kvist LP, Borchsenius F, Balslev H, editors. Botánica económica de los Andes Centrales. Bolivia: Universidad Mayor de San Andrés; 2006. p. 188–204.
Valencia-Duarte J, Trujillo Ortiz LN, Vargas RO. Dinámica de la vegetación en un enclave semiárido del río Chicamocha, Colombia. Biota Colombiana. 2012;3:40–65.
Quiroz-Carranza J, Orellana R. Uso y manejo de leña combustible en viviendas de seis localidades de Yucatán, México. Madera y bosques. 2010;16:47–67.
Couttolenc-Brenis E, Cruz-Rodríguez JA, Cedillo E, Musálem MA. Uso local y potencial de las especies arbóreas en camarón de Tejeda, Veracruz. Revista Chapingo. Serie ciencias forestales y del ambiente. 2005;11:445–50.
May T. Plantas preferidas para leña en la zona de bosque seco de Pedro Santana y Bánica, República Dominicana. Ambiente y Desarrollo. 2013;17:71–85.
Boy E, Bruce N, Smith KR, Hernandez R. Fuel efficiency of an improved wood-burning stove in rural Guatemala: implications for health, environment and development. Energy for sustainable development. 2000;4:23–31.
Zamora P, Flores JS, Ruenes R. Flora útil y su manejo en el cono sur del estado de Yucatán. México Polibotánica. 2009;28:227–50.
Flores JS, Bautista F. Knowledge of the Yucatec Maya in seasonal tropical forest management: the forage plants. Revista Mexicana de Biodiversidad. 2012;83:503–18.
Cáceres O, González E. Valor nutritivo de árboles, arbustos y otras plantas forrajeras para los rumiantes. Pastos y Forrajes. 2002;25:15–20.
Lerner-Martínez T, Ceroni A, González CE. Etnobotánica de la comunidad campesina "Santa Catalina de Chongoyape" en el Bosque seco del área de conservación privada Chaparrí-Lambayeque. Ecología Aplicada. 2003;2:14–20.
Bermúdez A, Oliveira-Miranda MA, Velázquez D. La investigación etnobotánica sobre plantas medicinales: una revisión de sus objetivos y enfoques actuales. Interciencia. 2005;30:453–9.
Bermúdez A, Velázquez D. Etnobotánica médica de una comunidad campesina del estado Trujillo, Venezuela: un estudio preliminar usando técnicas cuantitativas. Rev Fac Farm. 2002;44:2–6.
Scarpa GF. Medicinal plants used by the Criollos of Northwestern Argentine Chaco. J Ethnopharmacol. 2004; https://doi.org/10.1016/j.jep.2003.12.003.
Henao J, Muñoz LJ, Ríos VE, Padilla L, Giraldo GA. Evaluación de la actividad antimicrobiana de los extractos de la planta Lippia origanoides HBK cultivada en el Departamento del Quindío. Rev Invest Univ Quindio. 2009;19:159–64.
Fomogne-Fodjoa MCY, Ndintehb DT, Olivierc DK, Kempgensa P, van Vuurenc S, Krausea RWM. Secondary metabolites from Tetracera potatoria stem bark with antimycobacterial activity. J Ethnopharmacol. 2017; https://doi.org/10.1016/j.jep.2016.11.027.
Borchert R. Soil and stem water storage determine phenology and distribution of tropical dry forest trees. Ecology. 1994;75:1437–49.
Pizano C, García H, editors. El Bosque Seco Tropical en Colombia. Bogotá: Instituto de Investigación de Recursos Biológicos Alexander von Humboldt; 2014.
Carrillo-Rosario T, Moreno G. Importancia de las plantas medicinales en el autocuidado de la salud en tres caseríos de Santa Ana Trujillo, Venezuela. Rev Fac Farm. 2006;48:21–8.
Giraldo D, Baquero E, Bermúdez A, Oliveira-Miranda MA. Caracterización del comercio de plantas medicinales en los mercados populares de Caracas, Venezuela. Acta Botanica Venezuélica. 2009;32:267–301.
Jaramillo MA, Castro M, Ruiz-Zapata T, Lastres M, Torrecilla P, Lapp M, et al. Estudio etnobotánico de plantas medicinales en la comunidad campesina de Pelelojo, municipio Urdaneta, estado Aragua, Venezuela. Ernstia. 2014;24:85–110.
Martínez-Pérez A, López PA, Gil-Muñoz A, Cuevas-Sánchez JA. Plantas silvestres útiles y prioritarias identificadas en la Mixteca Poblana, México. Acta Bot Mex. 2012;98:73–98.
Ticktin T, de la Peña G, Ilsley C, Dalle S, Johns T. Participatory ethnoecological research for conservation: lessons from case studies in Mesoamerica. In: Stepp JR, Wyndham ES, Zarger RK, editors. Ethnobiology and biocultural diversity: Proceedings of the Seventh International Congress of Ethnobiology. University of Georgia Press; 2002. p. 575–584.
Garibay-Orijel R, Caballero J, Estrada-Torres A, Cifuentes J. Understanding cultural significance, the edible mushrooms case. J Ethnobiol Ethnomed. 2007; https://doi.org/10.1186/1746-4269-3-4.
Narváez-Eraso MT. Usos de la biodiversidad del resguardo indígena de Chiles-Nariño. Revista Criterios. 2010;81:91.
Marín-Corba C, Cárdenas-L D, Suárez-Suárez S. Utilidad del valor de uso en etnobotánica. Estudio en el departamento de Putumayo (Colombia). Caldasia. 2005;27:89–101.
Sánchez M., Duque A, Miraña P, Miraña E, Miraña J. Valoración del uso no comercial del bosque - Métodos en Etnobotánica Cuantitativa. In: Duivenvoorden JF, Balslev H, Cavelier J, Grandez C, Tuomisto H, Valencia, R., editors. Evaluación de recursos vegetales no maderables en la Amazonía noroccidental. Amsterdam: IBED, Universiteit van Amsterdam; 2001.
Tuñón E. Género y Medio ambiente. México: Ecosur-Semarnat-Plaza y Valdés; 2003.
Sánchez-Núñez E, Espinosa G. Mujeres indígenas y medio ambiente: una reflexión desde la región de la mariposa monarca. In: Tuñón E, editor. Género y Medio ambiente. México: Ecosur-Semarnat-Plaza y Valdés; 2003. p. 129–44.
Hurtado R, Moraes R. Comparación del uso de plantas por dos comunidades campesinas del bosque tucumano-boliviano de Vallegrande (Santa Cruz, Bolivia). Ecología en Bolivia. 2010;45:20–54.
Canales M, Hernández T, Caballero J, Romo de Vivar A, Durán A, Lira R. Análisis cuantitativo del conocimiento tradicional de las plantas medicinales en San Rafael, Coxcatlán, Valle de Tehuacán-Cuicatlán, Puebla, México. Acta Bot Mex. 2006;75:21–43.
Arango S. Estudios etnobotánicos en los Andes Centrales (Colombia): Distribución del conocimiento del uso de las plantas según características de los informantes. Lyonia. 2004;7:89–104.
Voeks RA, Leony A. Forgetting the Forest: assessing medicinal plant erosion in eastern Brazil. Econ Bot. 2004;58:S294–306.
Gómez-Beloz A. Plant use knowledge of the Winikina Warao: the case for questionnaires in ethnobotany. Econ Bot. 2002;56:231–41.
Kumar M, Kumar P. Valuation of the ecosystem services: a psycho-cultural perspective. Ecol Econ. 2008; https://doi.org/10.1016/j.ecolecon.2007.05.008.
We sincerely thank all residents of Doche vereda for their hospitality and disposition to collaborate in this study. Finally, we thank Marisa Ordaz for translating this manuscript to English.
This study was funded by the COLCIENCIAS Program: "Formación de capital humano de alto nivel para el departamento del Huila", Colombia.
All data collected during the field surveys are included in the manuscript.
Universidad de Ciencias Aplicadas y Ambientales, 222 St. 55-37, E-111166, Bogotá, Colombia
Jeison Herley Rosero-Toro & Luz Piedad Romero-Duque
Asociación Etnobiológica Mexicana A.C., Calle Profesor Felipe W. Mijangos, Colonia 12 de Junio, E-29243, San Cristóbal de Las Casas, Chiapas, Mexico
Dídac Santos-Fita
Centro de Investigaciones Multidisciplinarias sobre Chiapas y la Frontera Sur, UNAM, Calle María Adelina Flores 34-A, Barrio Guadalupe, CP 29230, San Cristóbal de Las Casas, Chiapas, Mexico
Felipe Ruan-Soto
JRT wrote early drafts of the research design and the manuscript and did the fieldwork. DSF and LRD reviewed and improved the proposal and the manuscript. Finally, FRS participated in the "Discussion" section and reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Dídac Santos-Fita.
We obtained permission from each informant before conducting the interview. All the participants were anonymized and so their personal details are not disclosed in this paper.
Table 3 Species used by the Doche community (Villavieja, Huila)
Rosero-Toro, J.H., Romero-Duque, L.P., Santos-Fita, D. et al. Cultural significance of the flora of a tropical dry forest in the Doche vereda (Villavieja, Huila, Colombia). J Ethnobiology Ethnomedicine 14, 22 (2018). https://doi.org/10.1186/s13002-018-0220-0
Cultural significance
Use and management
Tropical dry forest
International Journal of Concrete Structures and Materials
Micro and Nano Engineered High Volume Ultrafine Fly Ash Cement Composite with and without Additives
R. Roychand, S. De Silva, D. Law & S. Setunge
International Journal of Concrete Structures and Materials volume 10, pages 113–124 (2016)
This paper presents the effect of silica fume and nano silica, used individually and in combination with a set accelerator and/or hydrated lime, on the properties of class F high volume ultrafine fly ash (HV-UFFA) cement composites, in which 80 % of cement (OPC) is replaced. Compressive strength tests, along with thermogravimetric analysis, X-ray diffraction and scanning electron microscopy, were undertaken to study the effect of the various constituents on the physico-chemical behaviour of the blended composites. The results show that silica fume, when used in combination with the set accelerator and hydrated lime in HV-UFFA cement mortar, improves its 7 and 28 day strengths by 273 and 413 %, respectively, compared to the binary blended cement fly ash mortar. On the contrary, when nano silica is used in combination with the set accelerator and hydrated lime in HV-UFFA cement mortar, the disjoining pressure in conjunction with the self-desiccation effect induces high early-age micro cracking, hindering the development of compressive strength. However, when nano silica is used without the additives, it improves the 7 and 28 day strengths of HV-UFFA cement mortar by 918 and 567 %, respectively, and the compressive strengths are comparable to that of OPC.
Globally, of the total fly ash (FA) production of 620–660 million tons per year, only about 53.5 % is currently being utilized, and the remainder ends up in landfills (Heidrich et al. 2013). FA producers have to spend large amounts of money on the safe disposal of this unutilized industrial by-product (About Coal Ash—CCP FAQs. American Coal Ash Association 2014). This poses a big challenge to both the fly ash producers and the research community: to narrow the gap between production and utilization as far as possible.
FA has good pozzolanic properties and can be used as a supplementary cementitious material (SCM). In addition, the production of cement (OPC) is responsible for 5–7 % of global greenhouse gas emissions (Benhelal et al. 2013). Therefore, using fly ash as a cement replacement material provides a dual benefit: (i) it helps in increasing the use of this industrial by-product, and (ii) it assists in cutting down the emissions associated with cement production. Moreover, it not only enhances the properties of fresh concrete, such as workability (Sata et al. 2007; Liu 2010), but also improves its mechanical and durability properties, such as greater long-term strength (Hansen 1990; Sivasundaram et al. 1990), lower shrinkage (Atis 2003; Nakarai and Ishida 2008), lower water absorption (Malhotra and Mehta 2002; Şahmaran et al. 2009), reduction in chloride permeability (Nagataki and Ohga 1992; Dinakar et al. 2008), increased resistance to sulphate attack (Structure et al. 1986; Turanli et al. 2005), low heat of hydration (Turanli et al. 2005; Kasai et al. 1983) and reduction in alkali–aggregate reactivity (Turanli et al. 2005; Pepper and Mather 1959; Islam 2014). Although fly ash has many advantages when used as a cement replacement material, it has one major disadvantage: low reactivity (Liu 2010; Şahmaran et al. 2009). Therefore, the percentage of fly ash used as a cement replacement material in a mix design must be chosen judiciously. Numerous research studies have been conducted in the past to address its low reactivity and to improve the strength of fly ash blended mixes. Researchers have looked at the effect of particle size (Paya et al. 1995; Chindaprasirt et al. 2005) and the use of hydrated lime (Şahmaran et al. 2009; Barbhuiya et al. 2009), silica fume (Barbhuiya et al. 2009; Sellevold and Radjy 1983; El-Chabib and Syed 2012; Rashad 2014) and metakaolin (Wei et al. 2007; Reis and Camões 2011) to improve the mechanical properties of fly ash blended mixes.
Paya et al. (1995) investigated the effect of reduction in particle size on the reactivity of fly ash blended cement concrete. They found that a linear relation exists between particle size and compressive strength in fly ash blended concrete: the compressive strength of fly ash concrete increased with decreasing particle size (Paya et al. 1995; Chindaprasirt et al. 2005; Erdoğdu and Türker 1998; Li and Wu 2005). Hill et al. (1994) studied the effect of a calcium nitrate set accelerator (SA) on the hydration reaction of fly ash. They found that calcium nitrate considerably accelerated the hydration reaction of fly ash, resulting in an improvement in its setting time and compressive strength. Studies show that hydrated lime (HL) considerably improves the reactivity of fly ash. It accelerates the hydration reaction, resulting in a significant improvement in the compressive strength of high volume fly ash (HVFA) concrete (Barbhuiya et al. 2009; Jayakumar and Abdullahi 2011). Barbhuiya et al. (2009) and Rashad (2014) studied the effect of silica fume (SF) on the mechanical properties of HVFA concrete. They found that the addition of silica fume considerably improved the 7 and 28 day strengths. Moreover, silica fume performed considerably better than hydrated lime in improving the early-age strength of fly ash concrete (Barbhuiya et al. 2009).
Recently, amorphous nano silica (nS) has been gaining the widespread attention of the research community due to its nano-sized particles and very high amorphous SiO2 content (Björnström et al. 2004; Jo et al. 2007; Zhang and Islam 2012; Hou et al. 2012; Singh et al. 2015). It not only takes part in the hydration reaction to provide additional C–S–H but also accelerates the hydration process (Björnström et al. 2004; Zhang and Islam 2012; Hou et al. 2012). Hou et al. (2012) investigated the effect of 0, 2.25 and 5 % colloidal nano silica (CNS) on a 60 % FA blended cement composite. They found that the compressive strength of the mortar samples at 7 and 28 days increased with the increasing amount of CNS, but at 3 months no significant difference in the respective strength results was observed. The maximum increase in strength that they observed with 5 % CNS compared to 0 % CNS at 7 and 28 days was approximately 60 and 33 %, respectively. Shaikh et al. (2014) studied the effect of 2 % nano silica on a 68 % FA blended cement composite and found that there was no difference in strength at 7 days, but at 28 days there was a 56 % increase in strength compared to the mix not containing nano silica.
Based on the review of past literature, there appears to be limited or no research available on the effect of silica fume and nano silica in combination with hydrated lime and a set accelerator on the properties of class F high volume fly ash (HVFA) cement composites. In addition, a significant variation in the effect of nano silica on the percentage strength improvement of high volume fly ash cement composites has been observed (Zhang and Islam 2012; Hou et al. 2012; Shaikh et al. 2014). Very few studies report the properties of class F high volume fly ash cement composites replacing 80 % of cement (Liu 2010; Huang et al. 2013). With the successful replacement of 80 % of cement with supplementary cementitious materials, predominantly containing fly ash, the cement industry could significantly reduce its CO2 emissions, and at the same time the gap between fly ash production and utilisation could be narrowed. Therefore, this study was undertaken to investigate the effect of SF and nS in combination with HL and SA on the physico-chemical behaviour of a class F high volume ultrafine fly ash (HV-UFFA) cement composite, replacing 80 % of cement. Compressive strength tests have been undertaken to identify the mechanical properties of the blended mortar samples. In addition, thermogravimetric analysis (TGA), X-ray diffraction (XRD) and scanning electron microscopy (SEM) have been undertaken to identify the formation of various hydrates and to understand the morphological changes occurring in the cement matrix due to the addition of the various materials under study.
Materials and Experimental Procedure
Materials and Mix Design
The materials used in this study were: ordinary Portland cement; a non-chloride calcium nitrate and sodium thiocyanate based set accelerator, "Pozzolith NC 534", with a water content of 51 %; a polycarboxylic ether based superplasticizer, "Glenium 79", with a water content of 55 %; class F low-calcium ultrafine fly ash; hydrated lime; densified silica fume (SF); and powdered nano silica. Raw fly ash with a mean particle size of 15 µm was ground in a micronizer to produce ultrafine fly ash (UFFA) with a mean particle size of 8.1 µm. The chemical compositions of OPC, FA, SF, HL and nS are presented in Table 1. The particle size distributions of OPC, UFFA, HL and SF were obtained using a laser diffraction particle size analyser ("Malvern Mastersizer 3000") and are presented in Table 2.
Table 1 Chemical composition of OPC, HL, FA, SF and nS.
Table 2 Particle size distribution of OPC, UFFA, RHL, HL, SF and NS.
XRD spectra of OPC, HL, FA, SF and nS are shown in Fig. 1. The predominant crystalline phases of OPC are calcium silicates (C2S, C3S), calcium aluminate (C3A), calcium alumino ferrite (C4AF) and gypsum (G). Hydrated lime shows a sharp calcite peak at 29.4° 2-theta in addition to Ca(OH)2 peaks. The crystalline content of FA was mainly quartz and mullite. SF and nS show broad peaks centred around 22° and 22.5° 2-theta, respectively, a typical characteristic of amorphous silica.
XRD spectra of a OPC, b HL, c FA, d SF and e NS.
Table 3 summarizes the various mortar mix designs tested. All binder proportions are expressed as percentages by mass of the total binding material. TGA, XRD and SEM samples were prepared with the binder paste only. The amount of superplasticizer was adjusted to obtain an approximately similar consistency for self-consolidation of the mortar. The water contents of the SP and SA were included in the total w/b ratio.
Table 3 Mortar mix designs.
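As a minimal sketch of the water bookkeeping stated above, assuming hypothetical dosages (the actual mix quantities are in Table 3), the total w/b ratio including the admixture water can be computed as:

```python
# A minimal sketch of how the total w/b ratio can account for the water carried
# by the superplasticizer (55 % water) and set accelerator (51 % water).
# All dosages below are hypothetical, not the paper's mix designs.
def total_wb(binder_g, added_water_g, sp_g=0.0, sa_g=0.0,
             sp_water_frac=0.55, sa_water_frac=0.51):
    """Return the water-to-binder ratio including admixture water."""
    water = added_water_g + sp_g * sp_water_frac + sa_g * sa_water_frac
    return water / binder_g

# e.g. 500 g binder, 180 g mixing water, 5 g SP, 13.75 g SA
print(round(total_wb(500, 180, sp_g=5, sa_g=13.75), 3))  # ~0.38
```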
Sample Preparation, Curing & Testing
Compressive Strength of Mortar
All the raw materials were dry mixed at low speed in the mortar mixer for 1 min to obtain a homogeneous mix. Then water, SP and SA (as required) were added and mixed for 3 min, followed by a final high-speed mixing for another 1 min. Mortar samples were then poured into 50 mm × 50 mm × 50 mm steel moulds. The samples were covered with a plastic sheet, cured at room temperature for 24 h, de-moulded (except CF and S1, which were de-moulded after 48 h and 36 h, respectively, because of low early strength) and then further cured in saturated lime water at room temperature as per ASTM C109 until the time of testing. The samples were taken out of the lime water, wiped and surface dried after 1, 7 and 28 days of curing. Compressive strength of the mortar samples was measured as per ASTM C109 using a 300 kN Tecnotest mortar strength testing machine. For every mix at each age, three replicates were tested at a loading rate of 0.36 MPa/s.
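A small sketch of the cube-strength arithmetic implied by this procedure: the stress is the peak load over the 50 mm × 50 mm loaded face, and the 0.36 MPa/s stress rate corresponds to a fixed load rate on that face (the peak load used here is hypothetical):

```python
# A minimal sketch of the ASTM C109 cube-strength arithmetic described above.
area_mm2 = 50 * 50                            # loaded face of a 50 mm cube
peak_load_kN = 85.0                           # hypothetical failure load
strength_MPa = peak_load_kN * 1e3 / area_mm2  # N / mm^2 = MPa
load_rate_kN_per_s = 0.36 * area_mm2 / 1e3    # 0.36 MPa/s applied over 2500 mm^2
print(f"{strength_MPa:.1f} MPa, loading at {load_rate_kN_per_s:.2f} kN/s")
# -> 34.0 MPa, loading at 0.90 kN/s
```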
TGA, XRD and SEM of Hardened Binder Paste
The mixing procedure and curing method were kept the same for the TGA, XRD and SEM binder paste samples. For TGA and XRD, the samples were ground and sieved through a 63 µm sieve. To stop hydration and remove physically bound water, the solvent exchange method was adopted using acetone. 100 mL of acetone was added to 30 g of the sieved sample in a plastic bottle and mixed vigorously by hand for about 3 min. Excess acetone was drained off and the process was repeated. The samples were then dried overnight in an oven at 40 °C. The dried samples were collected and stored in a sealed plastic container until the time of testing.
TGA was conducted using a PerkinElmer STA 6000 thermal analyser in a nitrogen environment with a flow rate of 19.8 mL min−1. 10–20 mg of powdered sample was heated from 40 to 550 °C at a heating rate of 10 °C min−1.
XRD was conducted using a Bruker AXS D4 Endeavour system using Cu-Kα radiation operated at 40 kV and 40 mA and a Lynxeye linear strip detector. Samples were tested between 5° and 55° 2-theta (2θ) with a step size of 0.02° and the counting time per step was 5 s.
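For reference, a quick sketch of the scan-time arithmetic these settings imply (nominal counting time only; a Lynxeye linear strip detector acquires many steps in parallel, so the actual wall-clock time is much shorter):

```python
# A minimal sketch of the XRD scan arithmetic: 5-55 degrees 2-theta,
# 0.02 degree steps, 5 s counting time per step.
steps = round((55 - 5) / 0.02)          # 2500 steps
total_s = steps * 5                     # nominal counting time
print(steps, round(total_s / 3600, 2))  # 2500 steps, ~3.47 h nominal
```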
For SEM, a small thin section of the hardened paste was cut out from the internal part of the specimen. It was then embedded in epoxy, ground, polished, mounted on a steel stub and gold coated. FEI Quanta 200 SEM was used to study the microstructure of the hardened paste samples. The accelerating voltage of the beam was 20 kV and the electron images were acquired at 10 mm working distance and 5000× magnification.
Compressive Strength
The effect of silica fume, set accelerator and hydrated lime on HV-UFFA mortar is shown in Fig. 2a. By partially replacing UFFA with SF in S1, there was a 55 % increase in 7 day strength, which further increased to 116 % at 28 days, compared to that of CF. This is attributed to the effect of SF on the strength development of the HVFA cement composite: silica fume, because of its amorphous character and large surface area, possesses high pozzolanic activity and readily reacts with the available Ca(OH)2 to form additional calcium silicate hydrate, resulting in the improvement of strength. Similar results have been reported by Barbhuiya et al. (2009). With the combination of silica fume and set accelerator in Mix S2, the 7 and 28 day strengths improved by 124 and 316 %, respectively, compared to those of Mix CF. This is attributed to the accelerating effect of SA on the pozzolanic reaction of the blended mix, which is discussed in detail in Sect. 3.2. By combining silica fume and hydrated lime in mix S3, the strength improvement was approximately the same as that of S2. The performance of a 5 % hydrated lime addition was thus equivalent to that of the set accelerator at 27.5 mL kg−1 of binder in improving the compressive strength of the silica fume modified HV-UFFA cement composite. When both SA and HL were added to the silica fume modified HV-UFFA in Mix S4, the 7 and 28 day strengths improved by 273 and 413 %, respectively. The combined acceleration effect imparted by SA and HL significantly improved the compressive strength of S4 compared to that of CF. This shows that the combined effect of SF, SA and HL is significantly more effective than their individual additions in improving the compressive strength of the HV-UFFA cement composite.
Compressive strength of mortar samples (a) containing silica fume (b) containing nano silica at 1, 7 & 28 days of curing.
Figure 2b demonstrates the effect of nano silica, set accelerator and hydrated lime on HV-UFFA mortar. By partially replacing UFFA with nS, there was a significant improvement in the 1 day strength of mix N1 compared to that of CF, and the 7 and 28 day strengths increased by 918 and 567 %, respectively. This significant improvement in strength is attributed to the amorphous character and very high surface area of nano silica, which readily reacts with the available Ca(OH)2 to form additional calcium silicate hydrate. Comparing the effect of silica fume in S1 to that of nano silica in N1, nano silica performed significantly better than silica fume in improving the compressive strength. Though both SF and nS are amorphous in nature, the average particle size of nano silica is approximately 425 times smaller than that of SF. This extremely small particle size provides a very high surface area, which accelerates the pozzolanic reaction, resulting in a much higher compressive strength than with SF. When the set accelerator was combined with nano silica in Mix N2, there was a further 30 % improvement in 1 day strength, but the 7 and 28 day strengths decreased by approximately 14 and 11 %, respectively, compared to N1. This shows that though the set accelerator improves the pozzolanic reaction, which results in a higher 1 day strength, it has a negative impact on the 7 and 28 day strengths. In comparison, the set accelerator had a positive impact on the silica fume modified HV-UFFA at all ages. By combining nano silica and hydrated lime in mix N3, the strength improvement was approximately the same as that of N2. Similarly, no difference between the SA and HL treatments was observed for the silica fume modified HV-UFFA mixes S2 and S3. When both SA and HL were added to the nano silica modified HV-UFFA in Mix N4, though the 1 day strength improved by 48 %, the 7 and 28 day strengths decreased by 34 and 18 %, respectively. The combined effect of SA and HL was positive at 1 day and was higher than their individual effects. But at 7 and 28 days the combination of SA and HL proved detrimental to the strength development of HV-UFFA modified with nS and was worse than their individual effects.
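All percentage comparisons in this section are changes relative to a reference mix; a minimal sketch of that arithmetic, with hypothetical strength values chosen only to reproduce one of the reported figures:

```python
# A minimal sketch of the relative-strength arithmetic used throughout this
# section. The MPa values are hypothetical, picked to reproduce ~918 %.
def pct_change(mix_MPa, control_MPa):
    """Percentage change of a mix relative to the control mix."""
    return (mix_MPa - control_MPa) / control_MPa * 100

# e.g. hypothetical 7 day strengths of N1 (50.9 MPa) vs CF (5.0 MPa)
print(f"{pct_change(50.9, 5.0):.0f} %")  # -> 918 %
```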
TGA, XRD and SEM Analysis
Derivative thermogravimetric (DTG) curves were plotted from the thermogravimetric (TG) data to identify the exact boundaries of the calcium hydroxide (CH) mass loss at various ages of curing. Figure 3a shows typical TG and DTG curves with an identifiable CH endotherm. The onset and endset points of the CH mass loss, identified with the help of the derivative curve, are marked with dotted lines. Figure 3b shows typical TG and DTG curves with no identifiable CH endotherm. The absence of a CH endothermic peak shows that the residual CH content at a particular age of curing was too small (if any) to be detected by TGA.
a Typical TG and DTG curves with identifiable CH endotherm. b Typical TG and DTG curves with no identifiable CH endotherm. Note Calcium silicate hydrate (C–S–H), Calcium alumino silicate hydrate (C–A-S–H), Calcium aluminate hydrate (C–A–H), Aluminate ferrite monosulphate (AFm), Calcium hydroxide (CH).
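A minimal sketch of how a DTG curve can be derived from TG data and the CH endotherm located, as described above; the TG trace here is synthetic, not the paper's measurement:

```python
# A minimal sketch: numerically differentiate a TG curve to get DTG, then
# bound the Ca(OH)2 mass-loss step by the strongest local DTG minimum.
import numpy as np

T = np.linspace(40, 550, 511)                                     # temperature, deg C
tg = 100 - 0.01 * (T - 40) - 1.5 / (1 + np.exp(-(T - 450) / 8))   # synthetic mass %

dtg = np.gradient(tg, T)                     # d(mass)/dT, the DTG curve
win = (T > 380) & (T < 520)                  # search window around the CH step
peak_T = T[win][np.argmin(dtg[win])]         # steepest loss ~ CH endotherm centre
print(f"CH endotherm centred near {peak_T:.0f} C")
```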
CH (residual) in Eq. (1) represents the residual CH content at a particular age of curing, expressed as a percentage of the mass of the dry sample at 500 °C (M500) (De Weerdt et al. 2011). CH (normalised) in Eq. (2) denotes the normalised CH content per g of cement for mixes to which no hydrated lime powder has been added. CH (normalised) in Eq. (3) is modified to account for the addition of hydrated lime powder.
$$ \mathrm{CH}\;(\mathrm{residual}) = \frac{M^{S}_{H_2O\,CH} \times \frac{74}{18}}{M_{500}} \times 100\;[\%] \qquad (1) $$

$$ \mathrm{CH}\;(\mathrm{normalised}) = \frac{M^{S}_{H_2O\,CH} \times \frac{74}{18}}{M_{500}} \times \frac{1}{0.2} \times 100\;[\%] \qquad (2) $$

(without additional hydrated lime)

$$ \mathrm{CH}\;(\mathrm{normalised}) = \frac{\left(M^{S}_{H_2O\,CH} \times \frac{74}{18}\right) - \left(M^{HL}_{H_2O\,CH} \times \frac{74}{18}\right) \times HL_{\%}}{M_{500}} \times \frac{1}{0.2} \times 100\;[\%] \qquad (3) $$

(with additional hydrated lime)
\( M^{S}_{H_2O\,CH} \) is the mass loss due to the dehydroxylation of the portlandite present in the hydrated cement paste. The fraction \( \frac{74}{18} \) converts CH-bound water into CH mass, where 74 is the molar mass of Ca(OH)2 and 18 is the molar mass of H2O. The fraction \( \frac{1}{0.2} \) divides by the OPC fraction of the binder (20 %) to normalise the value to per g of OPC. \( M^{HL}_{H_2O\,CH} \) is the mass loss due to the dehydroxylation of the pure Ca(OH)2 present in the raw hydrated lime, since part of the raw HL had been converted to CaCO3 by exposure to atmospheric CO2; the pure Ca(OH)2 content was 81 % of the raw HL sample. \( HL_{\%} \) represents the percentage of hydrated lime powder added to the sample. TG analyses of the silica fume and nano silica modified HV-UFFA pastes cured for 1, 7 and 28 days are shown in Table 4a, b, respectively. Negative CH (normalised) values show that, in addition to the CH released by the OPC, part of the externally added hydrated lime powder has also been consumed by the pozzolanic reaction.
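A minimal sketch implementing Eqs. (1)–(3) as defined above; the mass-loss inputs are hypothetical percentages of the dry mass at 500 °C:

```python
# A minimal sketch of Eqs. (1)-(3). Inputs are mass losses expressed as
# percentages of the dry mass at 500 C, so M500 = 100 here (hypothetical data).
def ch_residual(m_h2o_ch, m500):
    return (m_h2o_ch * 74 / 18) / m500 * 100                      # Eq. (1)

def ch_normalised(m_h2o_ch, m500, opc_frac=0.2,
                  m_h2o_ch_hl=0.0, hl_frac=0.0):
    """Eq. (2) when hl_frac = 0; Eq. (3) when hydrated lime was added."""
    ch = m_h2o_ch * 74 / 18 - (m_h2o_ch_hl * 74 / 18) * hl_frac
    return ch / m500 / opc_frac * 100                             # per g of OPC

print(round(ch_residual(0.9, 100.0), 2))        # 3.7 % residual CH, Eq. (1)
print(round(ch_normalised(0.9, 100.0), 2))      # 18.5 %, Eq. (2)
# Eq. (3): raw HL is 81 % Ca(OH)2, whose dehydroxylation loss is
# 81 * 18/74 ~ 19.7 % of the raw HL mass; 5 % HL was added.
print(round(ch_normalised(0.9, 100.0, m_h2o_ch_hl=19.7, hl_frac=0.05), 2))
# -> -1.75, a negative value as discussed in the text
```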
Table 4 (a) Thermogravimetric analysis of silica fume modified samples at 1, 7 and 28 days of curing. (b) Thermogravimetric analysis of nano silica modified samples at 1, 7 and 28 days of curing.
XRD analyses of the silica fume and nano silica modified HV-UFFA pastes cured for 1, 7 and 28 days are shown in Figs. 4a and 5a, respectively. Their SEM images at 28 days of curing are presented in Figs. 4b and 5b, respectively. Though C–S–H is considered the main contributor to the compressive strength of mortar/concrete, its near-amorphous nature makes it hard to identify through XRD analysis. Since portlandite is a good indicator of the progress of the hydration/pozzolanic reactions, the XRD data are presented from 5° to 25° 2-theta for clarity in presenting the important hydration products. The various phases observed in the XRD patterns were ettringite (E), AFmss, aluminate ferrite monosulphate, most likely a solid solution of hemicarbonate and OH−-substituted monosulphate (Matschei et al. 2007), denoted as (A), hemicarbonate (Hc), calcium aluminium iron oxide carbonate hydroxide hydrate (Fc), monocarbonate (Mc), di-calcium aluminate hydrate (D), mullite (M), portlandite (P), quartz (Q) and calcite (C).
a XRD spectra of silica fume modified samples at 1, 7 and 28 days of curing. b SEM images of silica fume modified samples at 28 days of curing.
a XRD spectra of nano silica modified samples at 1, 7 and 28 days of curing. b SEM images of nano silica modified samples at 28 days of curing.
The increase in the pozzolanic reaction due to the partial replacement of fly ash with silica fume in S1 can be noticed in the reduction of the CH (normalised) value in the TGA data and of the portlandite peak in the XRD spectra at both 7 and 28 days of curing. This shows that portlandite consumption increased due to the presence of SF, which is amorphous in nature and has a considerably higher surface area than FA. The SEM image of S1 shows a denser C–S–H gel with finer cracks, resulting in a stronger cement matrix than that of CF. This was reflected in the corresponding increase in the compressive strength results of S1 compared to CF.
With the addition of SA in mix S2, there was a small reduction in the CH (normalised) value at 1 day, which reduced considerably at 7 and 28 days of curing, compared to S1. Similar observations were seen in the reduction of the portlandite peaks of S2 in the XRD spectra. This increase in the consumption of portlandite is attributed to the accelerating effect of SA on the pozzolanic reaction of the blended mix. The set accelerator used was calcium nitrate and sodium thiocyanate based. Calcium nitrate accelerates the setting time and moderately accelerates hardening, whereas sodium thiocyanate accelerates the strength gain (Paillère 1994). As reported by Rettvin and Dalen, cited in Cabrera and Rivera-Villarreal (1999), when calcium nitrate was used in combination with sodium thiocyanate, calcium nitrate started the hydration process earlier, which was then hastened by sodium thiocyanate. The SEM image of S2 shows an increase in the density and width of cracks in the cement matrix. The XRD spectra of S2 show a considerable increase in the intensity of ettringite peaks at 7 and 28 days compared to S1. The growth of ettringite crystals in the smaller-diameter pores refined by silica fume (Zhang and Gjørv 1991) exerts the highest expansive pressure on the pore walls, promoting the development of cracks in the cement matrix (Scherer 1999). Though, based on the increase in ettringite content in S2 and the findings of Scherer (1999), ettringite may have been instrumental in the development of cracks, no ettringite crystals were found within the cracks when observed through the electron microscope. The addition of SA significantly increases the pore solution concentration of calcium ions (Ca2+, CaOH+), which induces a high degree of supersaturation of portlandite (Nonat 2000) that leads to an increase in disjoining pressure, resulting in the development of micro cracks (Beltzung et al. 2001). Beltzung et al. (2001), in their research on the influence of Ca(OH)2 on shrinkage stresses, observed that samples containing a high residual portlandite content (portlandite available in pore water after the pozzolanic reaction and ettringite formation) showed higher shrinkage stresses, resulting from high disjoining pressure, compared to those having low residual portlandite. They also found that specimens in which the portlandite produced in the system was progressively consumed during secondary pozzolanic and ettringite reactions showed very low shrinkage stresses. They concluded that with the use of low-alkali cements a significant reduction in the shrinkage stresses originating from the disjoining pressure can be achieved. Tazawa and Miyazawa (1993) and Persson (1997) studied the relationship of the self-desiccation effect with autogenous shrinkage. They reported that self-desiccation (reduction in internal relative humidity) at a particular age of curing increased with the decrease in w/c ratio, resulting in an increase in autogenous shrinkage. Tazawa and Miyazawa (1993) also reported that the resultant increase in autogenous shrinkage leads to early-age cracking. Therefore, the increase in the development of micro cracks in S2 could be attributed to the combined effect of disjoining pressure and the self-desiccation effect.
By combining silica fume and hydrated lime in mix S3, there was a considerable reduction in the CH (normalised) values at 1 and 7 days of curing, compared to S1. This shows that the addition of hydrated lime powder significantly increased the pozzolanic reaction of the blended mix. The TGA data and the XRD spectra of S3 show a significant increase in CH (residual) content and portlandite peak intensity, respectively, at all curing ages, compared to S1. This could be associated with the fact that the additional hydrated lime increased the portlandite content beyond what was required to react with the amorphous silica present in SF and FA at 1 and 7 days of curing. At 7 days there was a considerable reduction in the portlandite peak of S3, which further reduced significantly by 28 days, compared to that of S1, showing an increase in the pozzolanic reaction. Comparing the consumption of portlandite by S3 with that of S2 at 1, 7 and 28 days of curing, S3 showed a higher pozzolanic activity than S2. But the compressive strength results show no difference between S3 and S2 at any curing age. The SEM image of S3 shows wider cracks than that of S2. This again shows that with an increase in the pore solution concentration of Ca2+ ions and a decrease in relative internal humidity, there is a corresponding increase in the development of micro cracks. The wider the cracks, the deeper they are, indicating that a greater volume of the strength-forming C–S–H gel was weakened in S3, thereby counterbalancing the strength gained from the increase in C–S–H gel.
When both SA and HL were added to the silica fume modified HV-UFFA in Mix S4, there was a significant improvement in the consumption of portlandite at all curing ages, as seen from both the TGA data and the XRD spectra. Comparing the TGA data and the XRD spectra of S4 with those of S3, the consumption of portlandite was significantly higher in S4 at all curing ages, due to the combined accelerating effect of the SA and the HL. This increase in pozzolanic activity was reflected in the compressive strength results of S4, which had the highest strength among all SF modified mixes. This shows that the combined effect of SA and HL provides the best performance in improving the pozzolanic reaction of SF modified HV-UFFA compared to their respective individual effects. The SEM image of S4 shows a further increase in crack width compared to that of S3. This reinforces our finding that there is a strong correlation between the combined effect of an increase in portlandite content and a reduction in relative internal humidity, and the resulting increase in crack formation. In spite of the increase in micro cracking in S4, the increase in the production of C–S–H gel from the accelerated pozzolanic reaction counterbalanced the effect of increased cracking, thereby improving its strength compared to that of S1, S2 and S3. It is to be noted that the internal humidity can be increased, without altering the w/c ratio or hampering the compressive strength results, by means of internal curing, as reported by Bentz et al. (2010), though this is beyond the scope of the present work.
When the fly ash was partially replaced with nS in mix N1, there was a significant reduction in the CH (normalised) value and the corresponding intensity of the portlandite peak at 1 day, compared to CF. At 7 and 28 days of curing, no identifiable portlandite content was observed in either the TGA or the XRD data of N1. This shows that with the addition of nS, the consumption of portlandite in the pozzolanic reaction increased significantly, because of its highly amorphous nature and significantly higher surface area than that of FA. The SEM image of N1 shows a very dense C–S–H gel with a significant reduction in micro cracking, compared to that of CF, resulting in a stronger cement matrix. Since a significant amount of the portlandite produced by the OPC was consumed at day 1, and its production at later ages was progressively being consumed due to the accelerated pozzolanic reaction, the resulting micro cracking due to the disjoining pressure (Beltzung et al. 2001) was considerably reduced. Moreover, the self-desiccation effect of the cement matrix alone had a minimal impact on the propagation of micro cracking, resulting in a significant reduction in crack formation. The combined effect of the increased pozzolanic reaction and the highly dense cement matrix was reflected in the corresponding increase in the compressive strength of N1 compared to CF at all curing ages. Comparing the effect of silica fume in S1 to that of nano silica in N1, nano silica performed considerably better than SF in improving the pozzolanic reaction and densifying the cement matrix. Though both SF and nS are amorphous in nature, the significantly higher surface area of nS was the main driving force in accelerating the pozzolanic reaction and densifying the cement matrix.
With the addition of SA in mix N2, though the pore solution concentration of Ca2+ ions from the portlandite increases (Nonat 2000), a considerable reduction in the portlandite content was observed in both the TGA and the XRD data, compared to that of N1. There was no identifiable CH (residual) content nor any noticeable portlandite peak intensity in N2 at any curing age. This increase in portlandite consumption at 1 day shows that the pozzolanic reaction of N2 improved considerably with the addition of SA, and the further release of portlandite from the OPC at later ages was progressively being consumed. This considerable improvement in the pozzolanic reaction of N2 resulted in a 30 % increase in its compressive strength compared to that of N1 at 1 day of curing. But the 7 and 28 day strengths decreased by 14 and 11 %, respectively. The SEM image of N2 shows increased micro cracking compared to that of N1. Though the SA considerably increases the pore solution concentration of portlandite, it was progressively being consumed. Therefore, the occurrence of micro cracks was most likely due to the self-desiccation effect, which increases with the increase in the hydration/pozzolanic reaction (Persson 1997). Since the contribution of fly ash to the production of C–S–H gel is minimal at early age, i.e. before 7 days of curing, the skeleton structure of the cement matrix produced primarily from the pozzolanic reaction of nano silica is probably not strong enough to resist internal stresses, resulting in the early development of micro cracks. This weakening of the cement matrix at early age, in spite of the increased pozzolanic reaction, resulted in the reduction of the 7 and 28 day compressive strengths of N2 compared to those of N1.
By adding HL to the nS modified HV-UFFA in mix N3, the intensity of the portlandite peak and the CH (residual) content increased considerably compared to those of N1 at 1 day of curing. However, when compared with S3, the intensity of its portlandite peak and its CH (normalised) content were significantly lower. This shows that though the addition of HL significantly increased the portlandite content, the nano silica present in N3 consumed a large amount of it due to the accelerated pozzolanic reaction at 1 day of curing. There was no identifiable portlandite content observed in either the TGA or the XRD data of N3 at later ages of curing, showing that the remaining portlandite was consumed by the 7th day and any further release of portlandite from OPC was progressively consumed. Though the increase in pozzolanic activity resulted in a 20 % increase in the 1 day strength of N3 compared to N1, the 7 and 28 day strengths showed reductions of 16 and 9 % respectively. The SEM image of N3 showed an increase in crack formation compared to that of N1. Therefore the decrease in 7 and 28 day strengths, in spite of the increase in pozzolanic reaction, could be attributed to the combined effect of the high disjoining pressure caused by the high portlandite content at 1 day and the self-desiccation effect due to the decrease in relative humidity. The structural weakness so introduced in the skeleton of the cement matrix at early age negatively impacted its strength development.
When both SA and HL were added together to the nano silica modified HV-UFFA in mix N4, the consumption of portlandite increased further, as can be seen in the considerable reduction in the CH (normalised) content and the intensity of the portlandite peak at 1 day of curing, compared to that of N3. This shows that the combination of nS, SA and HL provided the highest acceleration in the pozzolanic reaction of the HV-UFFA cement composite compared to their individual effects. The compressive strength results show a further 23 % increase in the 1 day strength of N4 compared to that of N3, but the 7 and 28 day strengths show further decreases of 21 and 9 % respectively. The SEM image of N4 shows a further increase in crack width compared to those of N2 and N3. This increase in crack width could be attributed to the combined effect of increased disjoining pressure and decreased internal relative humidity at early ages of curing. Since the intensity of the portlandite peak was lower than that of N1 at 1 day of curing, the effect of disjoining pressure was probably small. But with the increase in the pozzolanic reaction, the self-desiccation effect coupled with the increase in disjoining pressure could have aggravated the development of micro-cracks. The decrease in the 7 and 28 day strengths of N4, in spite of a significant increase in its pozzolanic reaction compared to those of N1, N2 and N3, shows that the formation of micro-cracks at early age is a major deteriorating factor affecting the development of its compressive strength.
Based on the findings of this research, the following conclusions can be drawn:
Silica fume, when used in conjunction with SA or HL, considerably improves the pozzolanic reaction of the HV-UFFA cement composite, resulting in an improvement in its compressive strength. When both SA and HL are used in conjunction with SF, the combined effect provides the best performance in accelerating the pozzolanic reaction, resulting in a significant improvement in compressive strength.
The use of silica fume considerably reduces the development of micro-cracks, but when it is combined with SA or HL the formation of micro-cracks increases due to the combined effects of disjoining pressure and self-desiccation. The highest stresses are induced when both SA and HL are combined with SF, resulting in a significant increase in crack formation. However, this mix presented the best compressive strength because the increase in the production of C–S–H gel from the accelerated pozzolanic reaction counterbalanced the effect of increased cracking, thereby improving its compressive strength.
Nano silica, when used in conjunction with SA or HL, considerably improves the pozzolanic reaction of the HV-UFFA cement composite, resulting in an improvement in its 1 day compressive strength. However, the formation of micro-cracks due to the disjoining pressure and the self-desiccation effect hinders the development of its later age strengths, in spite of the increase in pozzolanic reaction. When both SA and HL are used in conjunction with nS, the combined effect further accelerates the pozzolanic reaction, considerably improving the 1 day strength. At later ages, however, their combined effect significantly increases the formation of early age micro-cracks, considerably hindering the development of the 7 and 28 day strengths in spite of the increase in pozzolanic reaction.
Ultra-fine fly ash, when combined with nano silica, can help achieve 80 % replacement of cement while maintaining mechanical properties comparable to those of OPC.
A limitation of this work is that it does not address the control of the micro-cracking induced by the self-desiccation effect and the disjoining pressure; this will form part of future studies. If the formation of micro-cracks can be controlled in SA and HL blended HV-UFFA cement composites modified with SF and nS, there is great potential to tap the benefits of their accelerated pozzolanic reaction to further improve their compressive strength.
Though nano silica presents great potential for the production of highly environmentally friendly cementitious materials, its high cost limits its immediate application in the construction industry. However, recent advances in research on the production of amorphous nano silica (Lazaro et al. 2012; Lazaro Garcia et al. 2014) have paved the way for a cost-effective mass production method, which brings its application in the construction industry within reach (Quercia and Brouwers 2010).
The authors greatly appreciate the scientific and technical support provided by the RMIT Microscopy & Microanalysis Facility (RMMF), at RMIT University. The authors would like to thank Cement Australia for providing the material support to carry out this research.
About Coal Ash: CCP FAQs (2014). American Coal Ash Association, Coal Combustion Products - Frequently Asked Questions.
Atis, C. D. (2003). High-volume fly ash concrete with high strength and low drying shrinkage. Journal of Materials in Civil Engineering, 15(2), 153–156.
Barbhuiya, S., Gbagbo, J., Russell, M., & Basheer, P. (2009). Properties of fly ash concrete modified with hydrated lime and silica fume. Construction and Building Materials, 23(10), 3233–3239.
Beltzung, F., Wittmann, F., & Holzer, L. (2001). Influence of composition of pore solution on drying shrinkage. In F.-J. Ulm, Z. P. Bažant, & F. H. Wittmann (Eds.), Creep, shrinkage and durability mechanics of concrete and other quasi-brittle materials. Elsevier Science Ltd.
Benhelal, E., Zahedi, G., Shamsaei, E., & Bahadori, A. (2013). Global strategies and potentials to curb CO2 emissions in cement industry. Journal of Cleaner Production, 51, 142–161.
Bentz, D. P., & Weiss, W. J. (2011). Internal curing: A 2010 state-of-the-art review. US Department of Commerce, National Institute of Standards and Technology.
Björnström, J., Martinelli, A., Matic, A., Börjesson, L., & Panas, I. (2004). Accelerating effects of colloidal nano-silica for beneficial calcium–silicate–hydrate formation in cement. Chemical Physics Letters, 392(1), 242–248.
Cabrera, J. G., & Rivera-Villarreal, R. (1999). PRO 5: International RILEM Conference on the Role of Admixtures in High Performance Concrete. RILEM.
Chindaprasirt, P., Jaturapitakkul, C., & Sinsiri, T. (2005). Effect of fly ash fineness on compressive strength and pore size of blended cement paste. Cement & Concrete Composites, 27(4), 425–428.
De Weerdt, K., Haha, M. B., Le Saout, G., Kjellsen, K. O., Justnes, H., & Lothenbach, B. (2011). Hydration mechanisms of ternary Portland cements containing limestone powder and fly ash. Cement and Concrete Research, 41(3), 279–291.
Dinakar, P., Babu, K., & Santhanam, M. (2008). Durability properties of high volume fly ash self compacting concretes. Cement & Concrete Composites, 30(10), 880–886.
El-Chabib, H., & Syed, A. (2012). Properties of self-consolidating concrete made with high volumes of supplementary cementitious materials. Journal of Materials in Civil Engineering, 25(11), 1579–1586.
Erdoğdu, K., & Türker, P. (1998). Effects of fly ash particle size on strength of Portland cement fly ash mortars. Cement and Concrete Research, 28(9), 1217–1222.
Hansen, T. C. (1990). Long-term strength of high fly ash concretes. Cement and Concrete Research, 20(2), 193–196.
Heidrich, C., Feuerborn, H.-J., & Weir, A. (2013). Coal combustion products: A global perspective. WOCA.
Hill, R. L. (1994). The study of hydration of fly ash in the presence of calcium nitrate and calcium formate, University of North Texas, Denton, TX.
Hou, P., Wang, K., Qian, J., Kawashima, S., Kong, D., & Shah, S. P. (2012). Effects of colloidal nanoSiO2 on fly ash hydration. Cement & Concrete Composites, 34(10), 1095–1103.
Huang, C.-H., Lin, S.-K., Chang, C.-S., & Chen, H.-J. (2013). Mix proportions and mechanical properties of concrete containing very high-volume of Class F fly ash. Construction and Building Materials, 46, 71–78.
Islam, M. S. (2014). Comparison of ASR mitigation methodologies. International Journal of Concrete Structures and Materials, 8(4), 315–326.
Jayakumar, M., & Abdullahi, M. S. (2011). Experimental study on sustainable concrete with the mixture of low calcium fly ash and lime as a partial replacement of cement. Advanced Materials Research, 250, 307–312.
Jo, B.-W., Kim, C.-H., Tae, G.-H., & Park, J.-B. (2007). Characteristics of cement mortar with nano-SiO2 particles. Construction and Building Materials, 21(6), 1351–1355.
Kasai, Y., Matsui, I., Fukushima, Y., & Kamohara, H. (1983). Air permeability and carbonation of blended cement mortars. ACI Special Publication, p. 79.
Lazaro, A., Brouwers, H., Quercia, G., & Geus, J. (2012). The properties of amorphous nano-silica synthesized by the dissolution of olivine. Chemical Engineering Journal, 211, 112–121.
Lazaro Garcia, A. A., Quercia, G. G., & Brouwers, H. (2014). Synthesis of nano-silica at low temperatures and its application in concrete. In Proceedings of the International Conference Non-Traditional Cement & Concrete V, June 16–19, 2014, Brno, Czech Republic.
Li, G., & Wu, X. (2005). Influence of fly ash and its mean particle size on certain engineering properties of cement composite mortars. Cement and Concrete Research, 35(6), 1128–1134.
Liu, M. (2010). Self-compacting concrete with different levels of pulverized fuel ash. Construction and Building Materials, 24(7), 1245–1252.
Malhotra, V. M., & Mehta, P. K. (2002). High-performance, high-volume fly ash concrete: Materials, mixture proportioning, properties, construction practice, and case histories. Supplementary Cementing Materials for Sustainable Development.
Matschei, T., Lothenbach, B., & Glasser, F. (2007). The AFm phase in Portland cement. Cement and Concrete Research, 37(2), 118–130.
Mehta, P. K. (1986). Concrete: Structure, properties and materials.
Nagataki, S., & Ohga, H. (1992). Combined effect of carbonation and chloride on corrosion of reinforcement in fly ash concrete. ACI Special Publication.
Nakarai, K., & Ishida, T. (2009). Numerical evaluation of influence of pozzolanic materials on shrinkage based on moisture state and pore structure. In Creep, Shrinkage and Durability Mechanics of Concrete and Concrete Structures, Two Volume Set: Proceedings of the CONCREEP 8 conference, Ise-Shima, Japan. CRC Press.
Nonat, A. (2000). PRO 13: 2nd International RILEM Symposium on Hydration and Setting-Why Does Cement Set? An interdisciplinary approach: RILEM Publications.
Paillère, A. M. (1994). Application of admixtures in concrete: CRC Press.
Paya, J., Monzo, J., Peris-Mora, E., Borrachero, M., Tercero, R., & Pinillos, C. (1995). Early-strength development of Portland cement mortars containing air classified fly ashes. Cement and Concrete Research, 25(2), 449–456.
Pepper, L., & Mather, B. (1959). Effectiveness of mineral admixtures in preventing excessive expansion of concrete due to alkali-aggregate reaction. American Society for Testing and Materials Proceedings.
Persson, B. (1997). Self-desiccation and its importance in concrete technology. Materials and Structures, 30(5), 293–305.
Quercia, G., & Brouwers, H. (2010). Application of nano-silica (nS) in concrete mixtures. In 8th fib PhD symposium in Kgs Lyngby, Denmark, 2010 (pp. 431–436).
Rashad, A. M., Seleem, H. E.-D. H., & Shaheen, A. F. (2014). Effect of silica fume and slag on compressive strength and abrasion resistance of HVFA concrete. International Journal of Concrete Structures and Materials, 8(1), 69–81.
Reis, R., & Camões, A. (2011). Eco-efficient ternary mixtures incorporating fly ash and metakaolin. In International Conference on Sustainability of Constructions – Towards a Better Built Environment. Proceedings of the Final Conference of COST Action C25, Feb 3–5, 2011, University of Innsbruck, Austria.
Şahmaran, M., Yaman, İ. Ö., & Tokyay, M. (2009). Transport and mechanical properties of self consolidating concrete with high volume fly ash. Cement & Concrete Composites, 31(2), 99–106.
Sata, V., Jaturapitakkul, C., & Kiattikomol, K. (2007). Influence of pozzolan from various by-product materials on mechanical properties of high-strength concrete. Construction and Building Materials, 21(7), 1589–1598.
Scherer, G. W. (1999). Crystallization in pores. Cement and Concrete Research, 29(8), 1347–1358.
Sellevold, E., & Radjy, F. (1983). Condensed silica fume (microsilica) in concrete: Water demand and strength development. ACI Special Publication, p. 79.
Shaikh, F., Supit, S., & Sarker, P. (2014). A study on the effect of nano silica on compressive strength of high volume fly ash mortars and concretes. Materials and Design, 60, 433–442.
Singh, L. P., Goel, A., Bhattachharyya, S. K., Ahalawat, S., Sharma, U., & Mishra, G. (2015). Effect of Morphology and Dispersibility of Silica Nanoparticles on the Mechanical Behaviour of Cement Mortar. International Journal of Concrete Structures and Materials, 9, 1–11.
Sivasundaram, V., Carette, G., & Malhotra, V. (1990). Long-term strength development of high-volume fly ash concrete. Cement & Concrete Composites, 12(4), 263–270.
Tazawa, E., & Miyazawa, S. (1993). Autogenous shrinkage of concrete and its importance in concrete technology. In RILEM Proceedings (p. 159). Chapman & Hall.
Turanli, L., Uzal, B., & Bektas, F. (2005). Effect of large amounts of natural pozzolan addition on properties of blended cements. Cement and Concrete Research, 35(6), 1106–1111.
Wei, X., Zhu, H., Li, G., Zhang, C., & Xiao, L. (2007). Properties of high volume fly ash concrete compensated by metakaolin or silica fume. Journal of Wuhan University of Technology-Mater Science, 22(4), 728–732.
Zhang, M.-H., & Gjørv, O. E. (1991). Effect of silica fume on pore structure and chloride diffusivity of low porosity cement pastes. Cement and Concrete Research, 21(6), 1006–1014.
Zhang, M.-H., & Islam, J. (2012). Use of nano-silica to reduce setting time and increase early strength of concretes with high volumes of fly ash or slag. Construction and Building Materials, 29, 573–580.
School of Civil, Environmental and Chemical Engineering, RMIT University, Melbourne, VIC, Australia
R. Roychand, S. De Silva, D. Law & S. Setunge
Correspondence to R. Roychand.
Roychand, R., De Silva, S., Law, D. et al. Micro and Nano Engineered High Volume Ultrafine Fly Ash Cement Composite with and without Additives. Int J Concr Struct Mater 10, 113–124 (2016). https://doi.org/10.1007/s40069-015-0122-7
Accepted: 20 December 2015
Issue Date: March 2016
Keywords: silica fume, fly ash
[Submitted on 22 Aug 2020 (v1), revised 26 Sep 2022 (this version, v3), latest version 28 Sep 2022 (v4)]
Title: Automorphy of mod 2 Galois representations associated to the quintic Dwork family and reciprocity of some quintic trinomials
Authors: Nobuo Tsuzuki, Takuya Yamauchi
Abstract: In this paper, we determine mod $2$ Galois representations $\overline{\rho}_{\psi,2}:G_K:={\rm Gal}(\overline{K}/K)\longrightarrow {\rm GSp}_4(\mathbb{F}_2)$ associated to the mirror motives of rank 4 with pure weight 3 coming from the Dwork quintic family $$X^5_0+X^5_1+X^5_2+X^5_3+X^5_4-5\psi X_0X_1X_2X_3X_4=0,\ \psi\in K$$ defined over a number field $K$ under the irreducibility condition of the quintic trinomial $f_\psi$ below. Applying this result, when $K=F$ is a totally real field, for some at most quadratic totally real extension $M/F$, we prove that $\overline{\rho}_{\psi,2}|_{G_M}$ is associated to a Hilbert-Siegel modular Hecke eigen cusp form for ${\rm GSp}_4(\mathbb{A}_M)$ of parallel weight three.
In the course of the proof, we observe that the image of such a mod $2$ representation is governed by reciprocity of the quintic trinomial $$f_\psi(x)=4x^5-5\psi x^4+1,\ \psi\in K$$ whose decomposition field is generically of type $S_5$, the symmetric group on five letters. This enables us to use results on the modularity of 2-dimensional, totally odd Artin representations of ${\rm Gal}(\overline{F}/F)$ due to Shu Sasaki and several Langlands functorial lifts for Hilbert cusp forms. This guarantees the existence of a desired Hilbert-Siegel modular cusp form of parallel weight three matching the Hodge type of the compatible system in question. A twisted version is also discussed and is related to general quintic trinomials.
Comments: 33 pages. The claims of the main results and their proofs are slightly modified. Section 8 is added to discuss a twisted version.
From: Takuya Yamauchi
[v1] Sat, 22 Aug 2020 15:14:27 UTC (33 KB)
[v2] Fri, 11 Sep 2020 06:42:11 UTC (34 KB)
[v3] Mon, 26 Sep 2022 23:17:17 UTC (37 KB)
[v4] Wed, 28 Sep 2022 10:57:33 UTC (37 KB)
June 2013, 33(6): 2253-2270. doi: 10.3934/dcds.2013.33.2253
Symbolic extensions for partially hyperbolic dynamical systems with 2-dimensional center bundle
David Burguet 1 and Todd Fisher 2
LPMA Université Paris 6, 4 Place Jussieu, 75252 Paris Cedex 05, France
Department of Mathematics, Brigham Young University, Provo, UT 84602
Received: July 2011. Revised: July 2012. Published: December 2012.
We relate the symbolic extension entropy of a partially hyperbolic dynamical system to the entropy appearing at small scales in local center manifolds. In particular, we prove the existence of symbolic extensions for $\mathcal{C}^2$ partially hyperbolic diffeomorphisms with a $2$-dimensional center bundle.
Keywords: Symbolic extensions, partial hyperbolicity.
Mathematics Subject Classification: 37C05, 37C40, 37A35, 37D30, 37B10.
Citation: David Burguet, Todd Fisher. Symbolic extensions for partially hyperbolic dynamical systems with 2-dimensional center bundle. Discrete & Continuous Dynamical Systems, 2013, 33 (6) : 2253-2270. doi: 10.3934/dcds.2013.33.2253
Effect of the resin viscosity on the writing properties of two-photon polymerization
T. Zandrini,1 N. Liaros,2 L. J. Jiang,3 Y. F. Lu,3 J. T. Fourkas,2,4 R. Osellame,1,7 and T. Baldacchini5,6
1Istituto di Fotonica e Nanotecnologie del CNR, Dipartimento di Fisica del Politecnico di Milano, 20133 Milano, Italy
2Department of Chemistry and Biochemistry, University of Maryland, College Park 20742, USA
3Department of Electrical Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
4Institute for Physical Sciences & Technology, University of Maryland, College Park 20742, USA
5Schmid College of Science and Technology, Chapman University, Orange, CA 92866, USA
6 [email protected]
7 [email protected]
Y. F. Lu https://orcid.org/0000-0002-5942-1999
R. Osellame https://orcid.org/0000-0002-4457-9902
T. Zandrini, N. Liaros, L. J. Jiang, Y. F. Lu, J. T. Fourkas, R. Osellame, and T. Baldacchini, "Effect of the resin viscosity on the writing properties of two-photon polymerization," Opt. Mater. Express 9, 2601-2616 (2019)
Original Manuscript: April 15, 2019
Revised Manuscript: May 5, 2019
Manuscript Accepted: May 5, 2019
While the role of resin viscosity has been studied extensively for stereolithography, where a very low viscosity material is preferred, a comparable study for its microscale counterpart, two-photon polymerization, is still lacking. In the present work, we fill this gap by directly correlating the properties of features produced by two-photon polymerization with the viscosity of four acrylate materials, prepared by mixing two monomers of very different viscosity with a constant quantity of photoinitiator. Linewidth, polymerization and damage thresholds, dynamic range, and fabrication resolution are the objects of investigation in our experiments.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Stereolithography (SL) and two-photon polymerization (TPP) are additive manufacturing processes employed in the fabrication of three-dimensional parts [1,2]. While the first one produces structures with dimensions that span from meters to centimeters and with feature sizes ranging from hundreds of microns to tens of microns, the latter one builds micro- and mesostructures with feature sizes as small as hundreds of nanometers. Although several advances have been implemented over the years to increase the throughput of these techniques [3,4], both SL and TPP are still considered slow manufacturing processes since they essentially make one part at a time. Therefore, SL is applied mostly in the production of objects that have complex geometries and require customization or in applications where prototyping is needed [5]. Alternatively, TPP has grown to be a unique and powerful microfabrication tool that enables researchers to investigate optical, mechanical, and biological phenomena at the microscale with unparalleled spatial accuracy [6–9].
Apart from their contrasting fabrication scales, the main difference between SL and TPP is the process they use to create three-dimensional structures. In SL, a photosensitive material (resin) is hardened when exposed to a focused beam of UV light. Since light absorption occurs in this case through a linear process, three-dimensional parts are formed in SL using a layer-by-layer approach. Specifically, a lifting platform is used to create a thin liquid film within the resin pool; laser scanning begins at the platform-resin interface and then continues over each polymerized layer as the platform is lifted until the entire three-dimensional part is formed. In TPP, instead, a NIR ultrashort pulsed laser is used to initiate the photopolymerization of the resin through nonlinear optical absorption. In this case, light-matter interaction can be confined within sub-femtoliter volumes (voxels), allowing "true" three-dimensional writing. TPP microstructures are made directly in the volume of the resin by scanning the laser beam and/or moving the sample following three-dimensional patterns. Because of the microscopic scale of the parts made by TPP, fabrication typically begins or ends at the resin-substrate interface. In this way, the parts, being anchored to the substrate, can be easily retrieved after the development step. Recent studies have revealed that the order of optical nonlinearity in TPP depends greatly on the photoinitiator and the conditions used for light excitation [10]. Therefore, the acronym TPP is not formally correct, since it describes only one of the multiphoton processes that can cause photopolymerization. Nonetheless, we will continue to refer to this high precision three-dimensional printing technology as two-photon polymerization for simplicity (some of the other names used to describe TPP are multiphoton lithography, 2-photon polymerization (2PP), multiphoton polymerization (MPP), and 3D direct laser writing).
One of the consequences of the different methods used in SL and TPP for creating three-dimensional parts is the viscosity of the resin. While low viscosity resins are preferred in SL, resins in TPP are often very viscous [11,12]. This asymmetry originates from purely practical reasons. In SL, the photopolymerization of a new layer begins with a recoating step. A thin film of the resin (which defines the axial resolution of SL) must be deposited on top of the last polymerized layer before photopolymerization can carry on. Low viscosity resins reduce the time needed to accomplish this step, hence decreasing fabrication time. Furthermore, low viscosity resins permit self-leveling layers. In TPP, samples are often placed on high-performance linear stages that move at high speed around a fixed laser beam. This is especially true when TPP is used to make smooth and continuous millimeter-sized parts. High viscosity resins are then preferable for avoiding adverse drag effects on the sample when making complex patterns, caused by the stage accelerations and decelerations at turning points. Additionally, highly viscous resins simplify the preparation of the sample, which requires small quantities of material to be positioned close to or in contact with the front lens of a high numerical aperture objective. Popular organic-inorganic hybrid TPP resins, for example, are highly viscous or in a semi-solid state when used at room temperature [13,14].
Because of their fast curing speed, acrylic monomers and oligomers are commonly used in SL and TPP resins [15]. The ability of these molecules to create highly cross-linked networks is key for the fabrication of self-sustained and readily assembled three-dimensional structures [16]. Moreover, since acrylic monomers are used extensively in several industries, they are inexpensive, easily available, and can be found in a wide assortment of functionalities and sizes.
In order to be used successfully in SL, acrylic-based resins and their corresponding polymers must exhibit a series of physical and chemical properties. For example, the resin viscosity and wetting behavior are critical in SL [17]. The first is preferably less than a few hundred mPa·s for the reasons explained earlier, while the latter requires low surface tension when in contact with the polymer surface in order to ensure proper coating. High tensile strength and low volume shrinkage are polymer requirements to produce a stiff material capable of supporting its shape and maintaining dimensional accuracy of the photocured product. The combination of these properties makes the preparation of a resin for SL challenging [18]. For instance, low molecular weight bifunctional acrylic monomers are excellent in promoting low volume shrinkage and decreasing the overall resin viscosity, but they tend to create polymers with low tensile strengths. The addition of small quantities of tri-, tetra-, and penta-acrylic monomers to the resin compensates for this issue by producing dense and rigid polymer networks optimal for SL. Unfortunately, as their concentration is increased, the resin becomes too viscous and the polymer too brittle to be used effectively. To solve these problems, SL resins typically consist of a mixture of low molecular weight diacrylate monomers, highly branched acrylate monomers, and high molecular weight acrylate oligomers based on urethane or epoxy backbones; a successful SL resin is obtained when these ingredients are mixed in the right proportions, a goal typically achieved by experimenting with different mixture ratios and testing the resin and polymer properties [19].
Although TPP has been used with a variety of resin chemistries in order to obtain polymers for various applications, resins based on acrylates have received the most attention [13,20–28]. Some of the first demonstrations of TPP were performed using commercially available acrylic resins or by mixing commercially available acrylic monomers [29,30]. Subsequent works committed to improving the performance of TPP resins were centered on the synthesis and characterization of photoinitiators with large optical nonlinearities, so as to initiate radical polymerization of acrylates more efficiently than traditional UV photoinitiators [31]. Then, hybrid organic-inorganic resins were developed that improved the ease with which sturdy and precise three-dimensional microstructures can be fabricated by TPP with, among other things, minimal volume shrinkage [13,32]. The organic part of these hybrid resins still relies on acrylic moieties that are used to induce the material's final cross-linking by means of radical polymerization.
SL was developed almost twenty years before TPP was first demonstrated. Furthermore, because of its manufacturing length scale, SL almost immediately found applications that have demanded intense R&D efforts [33]. Therefore, it is not surprising to find a large number of published works on the design, preparation, and characterization of resins for SL [1,34–37]. On the other hand, TPP research has been focused mostly on the development of methods for improving writing resolution, increasing writing speed and overall part size, and on studying a wide variety of natural phenomena at a length scale that would be otherwise impossible [6,8,38–41]. Although these efforts have delivered impressive results, they have also left a lack of systematic investigation of the effects of the resins' chemical and physical properties on the TPP process.
For example, in SL the viscosity of the resin is adjusted in a way to minimize recoating time between each polymerized layer. Accordingly, the UV curing of resins made by mixing varying concentrations of acrylic monomers with different viscosities has been studied extensively. In particular, the photopolymerization kinetics of these mixtures have been investigated as a function of viscosity [42–44]. Although it was found that the rate of polymerization for these systems depends on many factors, such as crosslinking density, number of functionalities, reactivity, and hydrogen bonding, it was also found that a major influence comes from the resin viscosity. As the viscosity is increased, the polymerization goes faster because termination reactions become diffusion-limited. At even higher viscosities, the rate of polymerization starts to decrease since propagation reactions become diffusion-limited as well. Thus, the most reactive composition is frequently obtained by mixing high and low viscosity monomers at specific molar ratios. Moreover, the writing characteristics of SL have been studied as a function of the resin viscosity [45]. At low irradiation times, low viscosity resins produce the smallest polymerization depth. This is no longer the case at higher irradiation times, indicating that in these exposure conditions other factors besides the resin viscosity must be taken into consideration to explain the observed polymerization depth. It is noteworthy that a similar trend was observed also for the polymerization width in SL.
By contrast, the effect of the resin viscosity in TPP has not yet been fully investigated. With the exception of the following studies, indeed, the scientific literature on this topic is still lacking. By taking advantage of the scattering signal generated by the interaction of a probe beam with a single TPP voxel, Wegener and collaborators were able to monitor the polymerization kinetics of a series of acrylic-based resins with excellent spatial and temporal resolution [46]. When compared to the kinetics of similar materials polymerized with UV light, this study has shown that there are substantial differences: the polymerization rate is several orders of magnitude faster in TPP than in UV curing, and radical quenching through oxygen has a large effect on the termination processes of TPP. The study revealed that the viscosity of the resin contributes greatly to the kinetics of TPP, most probably by reducing oxygen diffusion. In a work by Kawata and collaborators, TPP was performed at different temperatures by placing the sample on a temperature-controlled block [47]. The temperature of the resin was varied from -60˚C to 80˚C, which consequently corresponds to a change in the resin viscosity. The width of the polymerized voxels was measured within this temperature range. It was found that the voxel becomes smaller as the temperature decreases (viscosity increases) and as the temperature increases (viscosity decreases), reaching the largest dimension at around room temperature. The magnitude of this effect was observed to depend on the exposure time. TPP writing in the presence of a radical quencher was studied by Farsari and collaborators [48]. The findings showed how both linewidth and writing resolution could be minimized if the range of action of the radicals generated in and around the voxel is made smaller. Thus, the diffusion of the quencher in the resin plays a fundamental role in scavenging radicals and in determining the fastest writing speed achievable in this process. The viscosity of the resin is evidently a major factor in the design of such an approach for performing high-resolution TPP writing. Finally, a recent work by Mendonça and collaborators has investigated the linewidths of TPP microstructures using acrylic-based resins made by mixing different concentrations of two monomers [49]. The authors of this study found that the writing linewidth decreased substantially as the concentration of one of the monomers increased, which coincided with an increase of the resin viscosity. This trend is in agreement with the observation made by Kawata and collaborators.
The effect of the resin's viscosity on the photochemistry of SL, and on several practical aspects involved in the fabrication of three-dimensional parts using this technique, is well-known [17,50]. Considering that the materials used in TPP are often similar to the ones used in SL, and that various researchers have already encountered the effect of the resin's viscosity in TPP, it is surprising that a systematic study on this subject is not yet present in the TPP literature. Hence, in this work, we aim to fill this gap by reporting the results of a TPP study using a series of resins with viscosities varying over a range of three orders of magnitude. To be able to interpret the data as deriving mostly from the large variation in viscosity, the resins are prepared by mixing two acrylates in different molar proportions. Writing linewidth and fabrication dynamic range are measured and conspicuous differences are noted. A qualitative investigation of the effect of the resin viscosity on the writing resolution is provided as well.
As described elsewhere, the formation of the polymerized voxel in TPP can be thought of as the result of two interaction volumes [32]. One, technical, depends principally on the hardware employed in performing TPP (i.e. positioning system accuracy and repeatability, laser stability, quality of optics, and effective damping systems); the other, chemical, is determined by a complex interplay of several factors, such as the nature and concentration of the photoinitiator and the kinetics of the polymerization. The latter is influenced by the viscosity of the resin, which controls, among other things, the diffusion of both radicals and radical scavengers. Since the room temperature viscosity of the resins used in this study varies by more than three orders of magnitude, the viscosity of the resin is a dominant factor that influences not only the rate of polymerization but also the properties and sizes of the written microstructures. We believe this study complements our understanding of TPP by directly linking various TPP writing characteristics to the resin viscosity.
2. Methods and materials
TPP writing linewidth and threshold experiments are performed using the output of a Ti:sapphire laser delivering 100 fs pulses at a repetition rate of 80 MHz (MaiTai, Spectra-Physics). The excitation wavelength chosen for these experiments is centered at 775 nm. The laser beam is focused on the sample by means of a 63x 1.4 NA oil immersion microscope objective (Zeiss, Plan-Apochromat). During microfabrication, the excitation focal point is kept fixed, while the sample is moved in predetermined geometries with the aid of a computer-controlled, motorized, three-axis translational stage assembly (XMS, Newport Corp.). Laser exposure is controlled by a mechanical shutter (Uniblitz); the action of the shutter is synchronized with the sample motion to avoid under- and overexposure. The average laser power delivered to the sample is controlled by a polarizer and a half-wave plate, with the latter mounted on a manual rotational stage. TPP is monitored in real time by means of optical microscopy in transmission mode, an imaging functionality added to the writing setup.
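As a rough guide to the exposure conditions these parameters imply, the short Python sketch below converts the pulse energies used in this work into average powers and approximate peak intensities at the focus. The Gaussian-optics formulas are standard textbook estimates, not values taken from the paper, so the intensities are only order-of-magnitude figures.

# Order-of-magnitude estimate of the focal conditions; all formulas are
# standard diffraction/Gaussian-beam approximations (assumptions, not the
# authors' calibration). Pulse energies span the range quoted in the text.
import math

wavelength = 775e-9       # m, Ti:sapphire center wavelength
NA = 1.4                  # oil-immersion objective numerical aperture
pulse_duration = 100e-15  # s
rep_rate = 80e6           # Hz

spot_diameter = 1.22 * wavelength / NA            # Rayleigh-type estimate, m
spot_area = math.pi * (spot_diameter / 2.0) ** 2  # m^2

for pulse_energy in (0.03e-9, 0.45e-9):           # J/pulse
    avg_power = pulse_energy * rep_rate           # W
    peak_power = pulse_energy / pulse_duration    # W
    peak_intensity = peak_power / spot_area       # W/m^2
    print(f"E = {pulse_energy * 1e9:.2f} nJ -> "
          f"P_avg = {avg_power * 1e3:.1f} mW, "
          f"I_peak ~ {peak_intensity * 1e-16:.2f} TW/cm^2")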
Test samples for measuring writing linewidths are made at the substrate-resin interface in the form of 15 µm long lines. Each line is written using a single laser pass. A series of parallel lines is written at different z-positions using a constant offset of 200 nm. In this series, there are lines that are truncated by the substrate for more than half of their length, and lines that are barely attached to the substrate and hence have fallen over. The line used for extrapolating the width of the written feature is the one that appears just before the fallen line. We choose this method to minimize the contribution that volume shrinkage can add to the measurement of TPP linewidth [51]. Each line is written at a constant velocity of 50 µm/s and the laser energy (as measured before the microscope objective) is varied between 0.03 nJ/pulse and 0.45 nJ/pulse. At each laser energy, the line test is repeated at least five times, and the average linewidth is reported. Following the development step, the samples are characterized using an FE-SEM after being coated with a thin layer (∼ 5 nm) of Au.
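The geometry of this ascending-line test can be summarized in a few lines of code. The following sketch generates the toolpath for one test column under the assumptions stated in the comments; the lateral pitch, number of lines, and starting depth are illustrative choices, not the authors' actual control parameters.

# Hypothetical toolpath for the ascending-line linewidth test: 15 um lines,
# each raised 200 nm above the previous one so the series crosses the
# substrate-resin interface. PITCH, N_LINES, and z_start are assumptions.
LINE_LENGTH = 15.0   # um
Z_STEP = 0.2         # um, vertical offset between consecutive lines
N_LINES = 12         # enough lines to bracket the interface (assumed)
PITCH = 5.0          # um, lateral spacing between lines (assumed)

def ascending_line_test(z_start=-1.0):
    """Yield (start, end) points of each line in one test column."""
    for i in range(N_LINES):
        y = i * PITCH
        z = z_start + i * Z_STEP   # lines rise through the interface
        yield (0.0, y, z), (LINE_LENGTH, y, z)

for start, end in ascending_line_test():
    print(f"write line {start} -> {end} at 50 um/s")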
Polymerization (Eth) and damage (Edamage) energy thresholds are determined by writing 20 µm long lines in the bulk of the resin at a constant speed of 50 µm/s and varying the laser energy per pulse. We define Eth as the lowest pulse energy that yields a visible change of the resin into a polymer by means of transmission light microscopy. Similarly, Edamage is defined as the highest energy per pulse that can be used before cavitation within the resin begins to occur. As the laser pulse energy surpasses Edamage, the resin boils forming bubbles around the focal point that are easy to spot by transmission light microscopy. Each reported value of Eth and Edamage is the average of twenty measurements.
TPP writing resolution experiments are made using a somewhat different system. The setup is based on the second harmonic of a femtosecond fiber laser (Toptica FemtoFiber Pro NIR) at 780 nm, with pulses of 100 fs and a repetition rate of 80 MHz. The laser average power is controlled through a polarizer and a half-wave plate mounted on a motorized rotation stage (Aerotech, MPS50GR). The laser beam is focused through a 100x 1.4 NA oil immersion microscope objective (Zeiss, Plan-Apochromat), which can be moved along the vertical axis by a counterbalanced linear stage (Aerotech, ANT130-LZS) to translate the laser focus along the beam direction. The substrate with the resin is moved in the horizontal plane by a 2-axis nanopositioning stage (Aerotech ANT95XY). The test samples used to characterize TPP writing resolution consist of arrays of woodpile microstructures. Woodpiles are made by stacking layers composed of parallel rods. Each layer is perpendicular to the previous one, and shifted laterally by half the rod separation distance from the nearest parallel layer. The woodpile layers are separated vertically by 500 nm. The lateral rod separation is varied between 0.2 µm and 2.57 µm at each pulse energy. Woodpiles are fabricated at a constant velocity of 100 µm/s, and with laser pulse energies ranging from 0.075 nJ to 0.375 nJ (as measured before the microscope objective). The samples are then developed and dried; subsequently, they are characterized by optical microscopy in transmission mode.
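The woodpile geometry just described maps directly onto a simple generator. The sketch below builds the rod endpoints for one woodpile under the stated rules (alternating orthogonal layers, a half-period shift between nearest parallel layers, 500 nm vertical pitch); the lateral footprint and layer count are assumed values for illustration.

# Minimal woodpile generator following the geometry described in the text.
# SIZE and N_LAYERS are illustrative assumptions.
SIZE = 10.0   # um, lateral extent of the woodpile (assumed)
DZ = 0.5      # um, vertical spacing between layers
N_LAYERS = 8  # assumed

def woodpile(rod_separation):
    """Return a list of ((x0, y0, z), (x1, y1, z)) rod endpoints."""
    rods = []
    n_rods = int(SIZE / rod_separation) + 1
    for layer in range(N_LAYERS):
        z = layer * DZ
        # every second parallel layer is offset by half the rod separation
        offset = rod_separation / 2.0 if (layer // 2) % 2 else 0.0
        for i in range(n_rods):
            p = i * rod_separation + offset
            if layer % 2 == 0:   # rods run along x
                rods.append(((0.0, p, z), (SIZE, p, z)))
            else:                # rods run along y
                rods.append(((p, 0.0, z), (p, SIZE, z)))
    return rods

print(len(woodpile(rod_separation=1.0)), "rods")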
In this study we investigate the TPP writing behavior of four resins, A–D. They are made by mixing various concentrations of two acrylic monomers and the same amount of a photoinitiator. The two monomers are dipentaerythritol pentaacrylate (DPEPA, Sartomer) and 1,3-butylene glycol diacrylate (BGDA, Sartomer). The photoinitiator is phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide (BAPO, Sigma-Aldrich). The molecular structures of the monomers and photoinitiator are shown in Fig. 1. Besides presenting different functionalities (5 vs. 2), the two monomers feel quite different when handled, since one is much more viscous than the other. Specifically, the room temperature viscosities of DPEPA and BGDA are 13,600 mPa·s and 6 mPa·s, respectively. The compositions of the resins are summarized in Table 1, alongside their corresponding viscosities measured at room temperature. The molar ratio between the carbon-carbon double bonds and the photoinitiator molecules in resins A, B, C, and D is constant. All materials are used as received without any further purification. Prior to use, the resins are mixed thoroughly until homogeneous solutions are obtained. Samples are prepared by depositing a drop of the resin in use on top of a 150 µm thick cover glass. The cover glass is then positioned on top of the motion system with the side opposite to the resin facing the incoming focused laser. After completion of the writing process, the unsolidified part of the sample assembly is washed away in an ethanol bath, revealing the desired microstructure patterns on the glass substrate.
Fig. 1. Molecular structures of (a) monomer DPEPA, (b) monomer BGDA, and (c) photoinitiator BAPO used to make the four resins used in this study. (d) 2-BIT data for a resin containing 0.15 wt% BAPO. The dashed line is the result that would be expected for 2-photon absorption as a reference. The error bars are based on standard deviations from multiple measurements.
Table 1. Composition of the resins investigated in this study with the corresponding viscosity measured at room temperature.
All resins are heated for 1 minute in a 90 °C oven, and then they are blended at 1850 rpm for 5 minutes using a centrifugal mixer. This procedure is repeated until the resins appear homogeneous. Viscosity measurements are performed at 22.8 °C using a DV-2+ Brookfield viscometer. Because of the wide range of viscosity between the samples used in this study, the viscosity of Resin A is measured using an LV-4 spindle, while the viscosity of Resins B, C, and D are measured using an enhanced UL adapter. The errors in the viscosity measurements for Resins A, B, C, and D are 4%, 1%, 0.4%, and 0.1%, respectively.
The effective nonlinear absorption of a resin containing BAPO (0.15 wt% BAPO in SR399) is measured using the two-beam initiation threshold (2-BIT) technique, a detailed description of which is presented elsewhere [52]. In brief, this technique involves combining two interleaved, spatially overlapped pulse trains to expose the resin. The average power of one pulse train required to reach the polymerization threshold P1 is measured as a function of the average power of the other pulse train P2, and a plot of P1 versus P2 is used to determine the effective order of nonlinear absorption in the resin. For each set of powers, the polymerization thresholds were measured by creating sets of lines at a constant distance above the coverslip surface at a stage velocity of 20 µm/s. Representative 2-BIT data obtained for this system at a wavelength of 777.5 nm (chosen to be between the center wavelengths of the two laser systems used to obtain the other data in this manuscript) are shown in Fig. 1d. The average value of the exponent derived from multiple 2-BIT experiments is 2.01 ± 0.05, indicating that initiation at this wavelength is a 2-photon process.
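To make the 2-BIT analysis concrete, the sketch below recovers the effective nonlinear order from a set of (P2, P1) threshold pairs. It assumes the dose-additivity model commonly used for interleaved, temporally non-overlapping pulse trains, in which P1^N + P2^N is constant at threshold; the data points are synthetic, generated for N = 2, and are not the measured values behind Fig. 1d.

# Sketch of the 2-BIT order extraction under an assumed dose-additivity
# model (P1**N + P2**N = const at threshold). Synthetic N = 2 data.
import numpy as np

P_th = 10.0                      # single-beam threshold power (arb. units)
P2 = np.linspace(0.0, 9.0, 10)   # second-beam powers
P1 = np.sqrt(P_th**2 - P2**2)    # exact N = 2 threshold curve

# scan N and pick the value that makes P1**N + P2**N most nearly constant
orders = np.linspace(1.0, 4.0, 301)
spread = [np.std(P1**n + P2**n) / np.mean(P1**n + P2**n) for n in orders]
n_fit = orders[int(np.argmin(spread))]
print(f"effective nonlinear order N = {n_fit:.2f} (expected 2.00)")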
3. Results and discussions
The exposure curves of resins employed in TPP are quite nonlinear, giving rise to a light intensity threshold below which polymerization does not occur [53]. It is the presence of this intensity threshold that permits the formation of voxels with dimensions considerably smaller than the wavelength of light used for fabrication. For example, by adjusting the intensity of the excitation laser barely above the polymerization threshold, voxels with lateral dimensions smaller than 100 nm can be obtained with a typical exposure wavelength of 800 nm [54]. TPP resins also display a light intensity threshold above which boiling occurs [24]. While the polymerization threshold depends on the characteristics of the photoinitiator and its concentration, the damage threshold is an intrinsic property of the monomer and oligomer mixture that constitutes the bulk of the resin. The window between these two laser intensity thresholds defines the writing dynamic range available for making three-dimensional structures by TPP. Resins with large dynamic ranges are preferable for two reasons. The first is that such resins support the use of voxels with a wide variety of dimensions, which is obviously advantageous when structures with different overall sizes are demanded [55]. The second is that high-speed writing is facilitated when resins with large dynamic ranges are used [39].
In this study we compare the writing performances of four resins; hence we limit ourselves to interpreting the laser energy thresholds Eth and Edamage measured at a constant velocity instead of the absolute values of the resins' laser intensity thresholds. The laser energy thresholds for resins A, B, C, and D are compiled in Table 2, where the dynamic ranges are also listed, computed using the expression $\left( \frac{E_{damage} - E_{th}}{E_{damage}} \right) \times 100$. It is important to remember that the energy damage thresholds presented in this work do not take proximity effects into consideration. It is well-known that high feature densities in TPP writing produce energy damage thresholds that can be considerably lower than the energy damage thresholds found when writing isolated features [56].
Table 2. Polymerization (Eth) and damage (Edamage) energy thresholds for the resins considered in this study. The dynamic range for each resin is listed as well.
Polymerization energy thresholds grow monotonically as the viscosity of the resins decreases. Eth goes from 0.04 nJ/pulse to 0.19 nJ/pulse as the viscosity of the resin is lowered from η = 10,400 mPa·s to η = 15 mPa·s. At a laser repetition rate of 80 MHz, this corresponds to a change in the average power needed to initiate TPP from 3.4 mW to 15.5 mW. By contrast, the damage energy thresholds do not diverge much as the resin viscosity changes. Edamage remains constant at around 0.5 nJ/pulse (40 mW) across the four resins. Consequently, we observe the largest dynamic range in the resin with the highest viscosity. A plot of the measured dynamic ranges of A, B, C, and D as a function of the resin viscosity is shown in Fig. 2 and illustrates this point clearly. As the viscosity of the resin becomes three orders of magnitude larger, the dynamic range increases from 64% to 92%. High-speed (mm/s to cm/s) TPP writing is typically performed by using a photoinitiator with a large nonlinear cross-section [39]. This has the effect of widening the dynamic range by essentially lowering Eth. The data in Fig. 2 suggest that an additional approach to safely performing high-speed TPP is to increase the resin viscosity.
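The arithmetic behind these numbers is straightforward and can be checked with a few lines of code. The snippet below converts the threshold pulse energies quoted above into average powers at 80 MHz and evaluates the dynamic-range expression; only Resins A and D are included because only their Eth values appear in this paragraph, and small differences from the quoted 3.4 mW and 64% reflect rounding of the tabulated thresholds.

# Check of the dynamic-range arithmetic, (E_damage - E_th)/E_damage x 100,
# and the pulse-energy -> average-power conversion at 80 MHz. Values are
# the rounded thresholds quoted in the text.
REP_RATE = 80e6  # Hz

resins = {        # (E_th, E_damage) in nJ/pulse
    "A": (0.04, 0.50),
    "D": (0.19, 0.50),
}

for name, (e_th, e_dmg) in resins.items():
    dyn_range = (e_dmg - e_th) / e_dmg * 100.0
    p_avg_mw = e_th * 1e-9 * REP_RATE * 1e3
    print(f"Resin {name}: P_avg at threshold = {p_avg_mw:.1f} mW, "
          f"dynamic range = {dyn_range:.0f}%")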
Fig. 2. Dynamic ranges of the four investigated resins as a function of their viscosities. The blue, red, green, and orange colors represent resins A, B, C, and D, respectively.
An example of the tests used to extrapolate TPP writing linewidths is shown in the FE-SEM image of Fig. 3(a). Here, five arrays of lines made under the same experimental conditions are visible. In each column, the focal spot axial position changes gradually. The lines at the top are the most truncated by the glass substrate, while the lines at the bottom are barely attached to it and hence fall over during the development step. The lines that appear straight just before the fallen ones are used for measuring linewidths.
Fig. 3. Scanning electron microscopy images of samples used to measure writing linewidths. (a) In this overview, five arrays of ascending lines are written using the same experimental conditions in Resin D. The polymerized lines used for linewidth measurements are highlighted in yellow. (b) High-magnification image of a line written in Resin A using an energy per pulse of 0.29 nJ at a velocity of 50 µm/s. (c) High-magnification image of a line written in Resin D using an energy per pulse of 0.30 nJ at a velocity of 50 µm/s. The scale bars are 50 µm, 3 µm, and 2 µm in (a), (b), and (c), respectively.
The complete set of linewidth measurements versus laser pulse energy is shown in Fig. 4 for all four resins. Several striking differences are observed as the viscosity is decreased from Resin A to Resin D. This plot confirms the energy threshold results discussed earlier. The processing window over which written TPP microstructures survive the development step with their physical integrity intact decreases slowly as the viscosity decreases from Resin A to Resin C, and then becomes considerably smaller for the least viscous Resin D. The smallest measured linewidth differs only slightly among the four resins, with an average value of around 360 nm. By contrast, the largest measured linewidth falls within a wide range of values as the resin viscosity is changed. Maximum linewidths are indeed 2.98 µm, 2.31 µm, 1.42 µm, and 0.68 µm for Resins A, B, C, and D, respectively. The large difference in linewidth written under the same experimental conditions as the resin viscosity is changed is clearly shown in Figs. 3(b) and 3(c), where SEM images of two lines are displayed. The line in 3(b) was made using Resin A at a writing speed of 50 µm/s and a laser pulse energy of 0.29 nJ. The line in 3(c) was made using Resin D at a writing speed of 50 µm/s and a laser pulse energy of 0.30 nJ. As the viscosity of the resin diminishes from 10,400 mPa·s to 15 mPa·s, the corresponding linewidth goes from 2.3 µm to 0.4 µm. Lastly, the rate of linewidth variation with increasing laser pulse energy differs considerably among the four resins. Specifically, a fixed change in laser pulse energy produces a larger linewidth change in the more viscous resins than in the less viscous ones. Within the available writing processing window, for example, linewidths created in Resin A change from 0.41 µm to 2.98 µm, a change of almost 90% relative to the maximum linewidth; linewidths created in Resin D instead change from 0.33 µm to 0.68 µm, a change of roughly 50%. The rate of linewidth variation with laser pulse energy for the various resins can be quantified by performing a linear regression of the central data shown in Fig. 4. This analysis produces values of 7.3 µm/nJ, 5.9 µm/nJ, 3.3 µm/nJ, and 1.74 µm/nJ for Resins A, B, C, and D, respectively. The largest rate of linewidth change is observed for the resin with a viscosity of 10,400 mPa·s, while the slowest is observed for the resin with a viscosity of 15 mPa·s. It is worth noting that the results described so far and shown in Figs. 2 and 4 were reproduced using two different experimental setups. Specifically, when Resins A, B, C, and D were used in the system employed for studying writing resolution (see Section 2), the same trends were observed.
Fig. 4. Experimental values of TPP writing linewidths as a function of the laser energy per pulse. Data collected for Resins A, B, C, and D are presented together. All structures used for creating this plot were made at a writing speed of 50 µm/s.
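The rates quoted above come from straight-line fits to the central portion of the data in Fig. 4. A minimal sketch of such a fit is shown below; the energy and linewidth arrays are illustrative placeholders, not the measured data:

```python
# Minimal sketch of the linear regression used to extract the linewidth
# growth rate (um/nJ) from linewidth-vs-pulse-energy data. The arrays
# below are illustrative placeholders, not the measured data of Fig. 4.
import numpy as np

pulse_energy_nJ = np.array([0.15, 0.20, 0.25, 0.30, 0.35])  # placeholder
linewidth_um   = np.array([0.60, 0.95, 1.30, 1.70, 2.05])   # placeholder

slope, intercept = np.polyfit(pulse_energy_nJ, linewidth_um, 1)
print(f"linewidth growth rate = {slope:.2f} um/nJ")  # cf. the 1.74-7.3 um/nJ range
```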
When a simple threshold model is used to describe the growth of TPP features, the linewidth is predicted to follow the square root of the natural logarithm of the ratio between the laser intensity and the polymerization intensity threshold [57]. Noticeably, our data in Fig. 4 do not correlate well with this behavior but instead follow a more linear trend. We believe this discrepancy from the threshold model is due to the shape of the focal spot, which at high laser pulse energy can no longer be considered an ideal Gaussian [51]. Furthermore, it is likely that the smallest achievable linewidths could not be measured, since the mechanical properties of the polymers produced at these low laser pulse energies might be too weak to survive the development step intact. This is especially true for the less viscous resins, where the concentration of the cross-linker is low (Table 1).
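For reference, the sketch below evaluates one common form of the threshold-model linewidth (after [57]); the spot-size prefactor w0 and the threshold energy are arbitrary illustrative values, not fitted parameters:

```python
# Sketch of the threshold-model linewidth, d(E) ~ w0 * sqrt(ln(E / E_th)),
# a functional form often used to describe TPP feature growth (cf. [57]).
# w0 and E_th below are arbitrary illustrative values, not fitted ones.
import numpy as np

w0 = 0.5      # effective spot-size prefactor (um), illustrative
E_th = 0.10   # polymerization threshold (nJ), illustrative

E = np.linspace(0.12, 0.40, 8)          # pulse energies above threshold
d = w0 * np.sqrt(np.log(E / E_th))      # predicted linewidths (um)
print(np.round(d, 2))  # saturating growth, unlike the linear trend in Fig. 4
```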
The chemistry involved in the photopolymerization of resins made with multifunctional acrylic monomers is a complex interplay between several reactions: initiation, propagation, and termination [58]. The kinetics of these reactions and the properties of the polymers produced depend on many parameters, such as monomer functionality/size/structure, type and concentration of photoinitiator, and, in resins made with more than one monomer, monomer molar and reactivity ratios [59]. The complexity of the photopolymerization of these materials also originates from the fact that the reactions involved occur almost simultaneously in a medium that is rapidly changing. In the case of negative-tone resins employed in SL or TPP, that change is from liquid to solid, and it is localized within small and well-defined volumes. Moreover, radical chain polymerizations are sensitive to the presence of inhibitors and retarders, which have critical consequences on the polymerization kinetics [60]. Since the resins in our study are prepared and used at ambient pressure, they contain a certain amount of dissolved O2 (∼10⁻³ M) [61]. Oxygen is known to be a strong inhibitor, as demonstrated by its large z values (ratio of the rate constants for inhibition and propagation) [62]. Specifically, oxygen interferes with the chain growth mechanism by forming peroxy radicals, which do not react rapidly with acrylates [63]. Thus, oxygen has the overall effect of reducing polymerization efficiency by scavenging radicals and effectively terminating them [64]. Hence its action cannot be ignored when analyzing results of TPP experiments such as the ones presented in this study.
For the reasons conveyed so far, comprehensive models for describing and predicting the photopolymerization of multifunctional acrylic monomers are challenging to design. In TPP, this task is rendered even more arduous by the nonlinear order of light absorption and by parasitic processes that can be induced when using high repetition-rate fs pulsed lasers. It is not our task to develop such a model in this work; nonetheless, we think that the results presented up to now can be attributed in great part to the effect the resin viscosity has on molecular diffusion.
Typical values for the propagation (kp) and termination (kt) rate constants of acrylic monomers are 10³ L mol⁻¹ s⁻¹ and 10⁷ L mol⁻¹ s⁻¹, respectively [65]. The rate constant of radical scavenging by O2 (ko) is 10⁸ L mol⁻¹ s⁻¹ [61]. A specific reaction becomes diffusion-limited when its rate coefficient is larger than the diffusion rate coefficient kdiff. The latter can be estimated as a function of viscosity using the following equation
$$k_{diff} = \frac{8000\,RT}{3\eta} \qquad (1)$$
where R is the gas constant (J mol⁻¹ K⁻¹), T is the temperature (K), and η is the viscosity (Pa·s) [43]. The constant in the numerator allows values of kdiff to be obtained directly in units of L mol⁻¹ s⁻¹. Table 3 compiles kdiff for Resins A, B, C, and D. As the viscosity decreases, kdiff increases from 6.25·10⁵ L mol⁻¹ s⁻¹ to 4.33·10⁸ L mol⁻¹ s⁻¹. Thus, within this viscosity range, propagation is never diffusion-controlled, while radical scavenging by O2 is always diffusion-controlled. Interestingly, termination is diffusion-controlled only for Resin A.
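As a cross-check, Eq. (1) can be evaluated directly. The sketch below assumes room temperature (T = 293 K, which reproduces the quoted coefficients) and uses the viscosities given in the text for Resins A, B, and D (the full set appears in Table 3):

```python
# Evaluation of Eq. (1), k_diff = 8000*R*T / (3*eta), with eta in Pa.s,
# giving k_diff in L mol^-1 s^-1. T = 293 K is assumed here (it reproduces
# the coefficients quoted in the text). kp, kt, and ko are the typical
# literature values quoted above [61,65]; a reaction is diffusion-limited
# when its rate constant exceeds k_diff.
R = 8.314   # gas constant (J mol^-1 K^-1)
T = 293.0   # temperature (K), assumed

kp, kt, ko = 1e3, 1e7, 1e8  # propagation, termination, O2 scavenging

def k_diff(eta_Pa_s):
    return 8000.0 * R * T / (3.0 * eta_Pa_s)

for resin, eta in [("A", 10.4), ("B", 0.116), ("D", 0.015)]:
    print(f"Resin {resin}: k_diff = {k_diff(eta):.2e} L mol^-1 s^-1")
```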
Table 3. Diffusion rate coefficients calculated using Eq. (1) for the resins studied in this work. Viscosities of the resins in Pa·s are shown as well.
In light of these considerations, we can now attempt to explain the experimental results presented in this study. The observed variation in the dynamic range (Fig. 2) is caused by Eth becoming smaller as the resin viscosity increases (Table 2). For Resin A, this is caused by the limited mobility of O2 in scavenging radicals and of radicals in finding pathways to terminate propagation. The result is that, for identical exposure times, the laser pulse energy required to start and sustain polymerization by TPP is lower than in less viscous resins. As the viscosity decreases (Resins B, C, and D), termination processes are no longer diffusion-limited. Radical scavenging by O2 then becomes the primary source of dynamic range variation as the viscosity changes. As the viscosity decreases, O2 mobility is higher; thus, its ability to remove radicals from polymer propagation increases as well. Consequently, Eth is larger in less viscous resins: more photons are needed to generate enough radicals to sustain polymerization of TPP structures.
The results of TPP linewidths (Figs. 3 and 4) can be analyzed using similar arguments. Under the same dose, the most viscous resin produces the largest linewidth. The reduced action of O2 and the lower probability of termination events, both due to limited mobility, allow a larger build-up of radicals that consequently produce a larger polymer. Likewise, the same logic can describe the different magnitude of linewidth change observed as the laser pulse energy is increased.
So far, in the description of the results, we have neglected the possibility that the resin temperature in the focal volume increases during photopolymerization. Although under some experimental conditions localized heat accumulation may occur [66,67], it has been shown that the temperature change in the focal volume of a TPP experiment during laser irradiation amounts to only a few kelvin when excitation is performed with parameters almost identical to the ones used in this study (laser repetition rate, pulse width, wavelength, pulse energy, focusing lens, scan velocity) [68]. This observation, together with the data published by Kawata and collaborators [47], gives us confidence that the results presented in this work are due predominantly to the large variation in the resins' viscosities.
In many applications, the ability to write features close together while leaving a gap between them is of fundamental importance [69]. For example, many types of three-dimensional photonic crystals rely on this fabrication capability of TPP to work in different regions of the EM spectrum [70]. To evaluate how resin viscosity influences writing resolution in TPP, arrays of woodpile microstructures were written using Resins A and B. Images of these arrays recorded in reflection mode using an optical microscope are shown in Fig. 5. The laser energy per pulse is varied from 0.075 nJ to 0.375 nJ in incremental steps of 0.025 nJ, while the line separation between the rods of the woodpile is increased from 0.4 µm to 2.6 µm in increments of around 200 nm. The same experimental conditions are used for both arrays. Depending on the linewidth of the rods and on the separation chosen between rods, the woodpile microstructure displays different colors under white light illumination. We assign the presence of colors in the reflectivity images to well-formed structures with separated rods. Woodpile microstructures that appear white have rods so close together that they merge with each other. As the viscosity of the resin decreases from 10,400 mPa·s to 116 mPa·s, the number of colored microstructures increases. This observation is highlighted in Fig. 5 with a red outline. This result indicates that TPP writing resolution is favored in less viscous resins. It agrees with other reports in which quenchers are added to the resin mixture to compete with photopolymerization, thus improving writing resolution [48,71]. We achieve (qualitatively) the same result by lowering the resin viscosity so as to increase the mobility of O2. The writing speed determines the time that elapses between writing two adjacent rods; during this time, the scavenging action of O2 is fundamental in diminishing the probability that overlapping exposure regions convert interstitial resin into polymer. Under the experimental conditions employed in this study (100 µm/s), viscosity obviously plays an important role in defining writing resolution.
Fig. 5. Optical microscopy images of arrays of woodpile microstructures fabricated by TPP using Resin A and Resin B. Identical experimental conditions are used in both resins. The woodpile microstructures in each array have different lattice parameters and thus, exhibit different structural colors; this is achieved by varying the line separation (top to bottom) and the laser pulse energy (left to right). A red outline delineates the woodpile microstructures that present well-separated individual rods.
As the resin viscosity decreases, the mechanical properties of the written structures are affected as well. Resins C and D, for example, did not produce well-defined three-dimensional microstructures such as the woodpiles shown in Fig. 5. This is most probably due to the higher percentages of the linear monomer BGDA in Resins C and D, which result in polymers that are less cross-linked than those produced by Resins A and B.
The image of the microstructure made with Resin A in Fig. 5 presents a large section of the processed substrate with burnt or exploded woodpiles. This observation reflects a well-known fact in TPP: the damage threshold of TPP microstructures depends on the proximity between adjacent features. The Edamage of closely spaced or overlapping features is lower than the Edamage of the corresponding features written spatially apart from each other. It has been demonstrated that this lowering of Edamage is caused by an increase in the single-photon absorptivity of the cured resin [56]. The geometry of the written microstructure and the writing speed both influence the proximity effect on Edamage. The result in Fig. 5 shows that viscosity plays an important role as well. Not only are lower-viscosity resins preferable for writing microstructures with better resolution, but they also diminish the possibility of destroying the microstructure itself by exceeding Edamage.
In this paper, we showed how viscosity influences the formation of lines in two-photon polymerization in four different acrylate resins, with viscosities ranging from more than 10 Pa·s to 15 mPa·s. While in stereolithography low viscosity is the standard, in two-photon polymerization it is desirable for achieving higher resolution and for reducing the probability of damaged microstructures, even though it does not heavily affect the damage threshold for single lines. On the other hand, when a large dynamic range is needed, highly viscous resins allow a wide variation of the usable pulse energy thanks to their lower polymerization threshold; this makes it possible to control and increase linewidth with more freedom, to increase fabrication velocity, and to achieve a more robust fabrication process. These results can be understood by considering the effects of quencher diffusion in the different materials.
National Science Foundation (NSF) (CMMI-1449309); Horizon 2020 Framework Programme (H2020) (754467).
NL and JTF acknowledge support from the National Science Foundation, Grant CMMI-1449309. R.O. and T.Z. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC-PoC NICHOIDS, grant agreement No. 754467). T.B. thanks Prof. Ewa Andrzejewska and Prof. Arnaud Bertsch for useful discussions during the preparation of the manuscript.
1. S. C. Ligon, R. Liska, J. Stampfl, M. Gurr, and R. Mulhaupt, "Polymers for 3D Printing and Customized Additive Manufacturing," Chem. Rev. 117(15), 10212–10290 (2017).
2. C. N. LaFratta, J. T. Fourkas, T. Baldacchini, and R. A. Farrer, "Multiphoton fabrication," Angew. Chem., Int. Ed. 46(33), 6238–6258 (2007).
3. B. E. Kelly, I. Bhattacharya, H. Heidari, M. Shusteff, C. M. Spadaccini, and H. K. Taylor, "Volumetric additive manufacturing via tomographic reconstruction," Science 363(6431), 1075–1079 (2019).
4. C. N. LaFratta and L. J. Li, Making Two-Photon Polymerization Faster (William Andrew Inc, 2016).
5. P. J. Bártolo, Stereolithography: Materials, Processes and Applications (Springer-Verlag Berlin, 2011).
6. M. Malinauskas, A. Zukauskas, S. Hasegawa, Y. Hayasaki, V. Mizeikis, R. Buividas, and S. Juodkazis, "Ultrafast laser processing of materials: from science to industry," Light: Sci. Appl. 5(8), e16133 (2016).
7. M. Malinauskas, M. Farsari, A. Piskarskas, and S. Juodkazis, "Ultrafast laser nanostructuring of photopolymers: A decade of advances," Phys. Rep. 533(1), 1–31 (2013).
8. J. Fischer and M. Wegener, "Three-dimensional optical laser lithography beyond the diffraction limit," Laser Photonics Rev. 7(1), 22–44 (2013).
9. A. K. Nguyen and R. J. Narayan, "Two-photon polymerization for biological applications," Mater. Today 20(6), 314–322 (2017).
10. N. Liaros and J. T. Fourkas, "The Characterization of Absorptive Nonlinearities," Laser Photonics Rev. 11(5), 1700106 (2017).
11. J. Li, Y. H. Cui, K. Qin, J. C. Yu, C. Guo, J. Q. Yang, C. C. Zhang, D. D. Jiang, and X. Wang, "Synthesis and properties of a low-viscosity UV-curable oligomer for three-dimensional printing," Polym. Bull. 73(2), 571–585 (2016).
12. M. Malinauskas, A. Zukauskas, V. Purlys, K. Belazaras, A. Momot, D. Paipulas, R. Gadonas, A. Piskarskas, H. Gilbergs, A. Gaidukeviciute, I. Sakellari, M. Farsari, and S. Juodkazis, "Femtosecond laser polymerization of hybrid/integrated micro-optical elements and their characterization," J. Opt. 12(12), 124010 (2010).
13. A. Ovsianikov, J. Viertl, B. Chichkov, M. Oubaha, B. MacCraith, I. Sakellari, A. Giakoumaki, D. Gray, M. Vamvakaki, M. Farsari, and C. Fotakis, "Ultra-Low Shrinkage Hybrid Photosensitive Material for Two-Photon Polymerization Microfabrication," Acs Nano 2(11), 2257–2262 (2008).
14. F. Burmeister, S. Steenhusen, R. Houbertz, U. D. Zeitner, S. Nolte, and A. Tunnermann, "Materials and technologies for fabrication of three-dimensional microstructures with sub-100 nm feature sizes by two-photon polymerization," J. Laser Appl. 24(4), 042014 (2012).
15. C. Decker, T. N. T. Viet, D. Decker, and E. Weber-Koehl, "UV-radiation curing of acrylate/epoxide systems," Polymer 42(13), 5531–5541 (2001).
16. C. Decker and K. Moussa, "Photopolymerization of mutifunctional monomers in condensed phase," J. Appl. Polym. Sci. 34(4), 1603–1618 (1987).
17. X. Q. Zhang, Y. Xu, L. Li, B. Yan, J. J. Bao, and A. M. Zhang, "Acrylate-based photosensitive resin for stereolithographic three-dimensional printing," J. Appl. Polym. Sci. 136(21), 47487 (2019).
18. J. W. Stansbury and M. J. Idacavage, "3D printing with polymers: Challenges among expanding options and opportunities," Dent. Mater. 32(1), 54–64 (2016).
19. S. Corbel, O. Dufaud, and T. Roques-Carmes, Materials for Stereolithography (Springer-Verlag Berlin, 2011).
20. W. H. Teh, U. Durig, U. Drechsler, C. G. Smith, and H. J. Guntherodt, "Effect of low numerical-aperture femtosecond two-photon absorption on (SU-8) resist for ultrahigh-aspect-ratio microstereolithography," J. Appl. Phys. 97(5), 054907 (2005).
21. C. A. Coenjarts and C. K. Ober, "Two-photon three-dimensional microfabrication of poly(dimethylsiloxane) elastomers," Chem. Mater. 16(26), 5556–5558 (2004).
22. J. Torgersen, X. H. Qin, Z. Q. Li, A. Ovsianikov, R. Liska, and J. Stampfl, "Hydrogels for Two-Photon Polymerization: A Toolbox for Mimicking the Extracellular Matrix," Adv. Funct. Mater. 23(36), 4542–4554 (2013).
23. B. Kaehr, N. Ertas, R. Nielson, R. Allen, R. T. Hill, M. Plenert, and J. B. Shear, "Direct-write fabrication of functional protein matrixes using a low-cost Q-switched laser," Anal. Chem. 78(9), 3198–3202 (2006).
24. T. Baldacchini, C. N. LaFratta, R. A. Farrer, M. C. Teich, B. E. A. Saleh, M. J. Naughton, and J. T. Fourkas, "Acrylic-based resin with favorable properties for three-dimensional two-photon polymerization," J. Appl. Phys. 95(11), 6072–6076 (2004).
25. L. H. Nguyen, M. Straub, and M. Gu, "Acrylate-based photopolymer for two-photon microfabrication and photonic applications," Adv. Funct. Mater. 15(2), 209–216 (2005).
26. D. S. Correa, M. R. Cardoso, V. Tribuzi, L. Misoguti, and C. R. Mendonca, "Femtosecond Laser in Polymeric Materials: Microfabrication of Doped Structures and Micromachining," IEEE J. Sel. Top. Quantum Electron. 18(1), 176–186 (2012).
27. V. Tribuzi, D. S. Correa, W. Avansi, C. Ribeiro, E. Longo, and C. R. Mendonca, "Indirect doping of microstructures fabricated by two-photon polymerization with gold nanoparticles," Opt. Express 20(19), 21107–21113 (2012).
28. X. M. Duan, H. B. Sun, K. Kaneko, and S. Kawata, "Two-photon polymerization of metal ions doped acrylate monomers and oligomers for three-dimensional structure fabrication," Thin Solid Films 453-454, 518–521 (2004).
29. S. Maruo, O. Nakamura, and S. Kawata, "Three-dimensional microfabrication with two-photon-absorbed photopolymerization," Opt. Lett. 22(2), 132–134 (1997).
30. S. Kawata, H. B. Sun, T. Tanaka, and K. Takada, "Finer features for functional microdevices - Micromachines can be created with higher resolution using two-photon absorption," Nature 412(6848), 697–698 (2001).
31. B. H. Cumpston, S. P. Ananthavel, S. Barlow, D. L. Dyer, J. E. Ehrlich, L. L. Erskine, A. A. Heikal, S. M. Kuebler, I. Y. S. Lee, D. McCord-Maughon, J. Q. Qin, H. Rockel, M. Rumi, X. L. Wu, S. R. Marder, and J. W. Perry, "Two-photon polymerization initiators for three-dimensional optical data storage and microfabrication," Nature 398(6722), 51–54 (1999).
32. H. Ruth, S. Sonke, S. Thomas, and S. Gerhard, Two-Photon Polymerization of Inorganic-Organic Hybrid Polymers as Scalable Technology Using Ultra-Short Laser Pulses (Intech Europe, 2010).
33. P. F. Jacobs, Rapid Prototyping & Manufacturing: fundamentals of stereolithography (Society of Manufacturing Engineers, 1992).
34. R. Hague, S. Mansour, N. Saleh, and R. Harris, "Materials analysis of stereolithography resins for use in Rapid Manufacturing," J. Mater. Sci. 39(7), 2457–2464 (2004).
35. M. Gurr, D. Hofmann, M. Ehm, Y. Thomann, R. Kubler, and R. Mulhaupt, "Acrylic nanocomposite resins for use in stereolithography and structural light modulation based rapid prototyping and rapid manufacturing technologies," Adv. Funct. Mater. 18(16), 2390–2397 (2008).
36. F. P. W. Melchels, J. Feijen, and D. W. Grijpma, "A review on stereolithography and its applications in biomedical engineering," Biomaterials 31(24), 6121–6130 (2010).
37. R. Liska, M. Schuster, R. Infuhr, C. Tureeek, C. Fritscher, B. Seidl, V. Schmidt, L. Kuna, A. Haase, F. Varga, H. Lichtenegger, and J. Stampfl, "Photopolymers for rapid prototyping," JCT Res. 4(4), 505–510 (2007).
38. L. J. Li, R. R. Gattass, E. Gershgoren, H. Hwang, and J. T. Fourkas, "Achieving lambda/20 Resolution by One-Color Initiation and Deactivation of Polymerization," Science 324(5929), 910–913 (2009).
39. Z. Q. Li, N. Pucher, K. Cicha, J. Torgersen, S. C. Ligon, A. Ajami, W. Husinsky, A. Rosspeintner, E. Vauthey, S. Naumov, T. Scherzer, J. Stampfl, and R. Liska, "A Straightforward Synthesis and Structure-Activity Relationship of Highly Efficient Initiators for Two-Photon Polymerization," Macromolecules 46(2), 352–361 (2013).
40. C. N. LaFratta, L. J. Li, and J. T. Fourkas, "Soft-lithographic replication of 3D microstructures with closed loops," Proc. Natl. Acad. Sci. U. S. A. 103(23), 8589–8594 (2006).
41. L. Jonusauskas, S. Juodkazis, and M. Malinauskas, "Optical 3D printing: bridging the gaps in the mesoscale," J. Opt. 20(5), 053001 (2018).
42. A. Marcinkowska and E. Andrzejewska, "Viscosity Effects in the Photopolymerization of Two-Monomer Systems," J. Appl. Polym. Sci. 116(1), 280–287 (2010).
43. E. Andrzejewska and A. Marcinkowska, "New Aspects of Viscosity Effects on the Photopolymerization Kinetics of the 2,2-Bis 4-(2-hydroxymethacryloxypropoxy) pheny1 propane/Triethylene Glycol Dimethacrylate Monomer System," J. Appl. Polym. Sci. 110(5), 2780–2786 (2008).
44. M. Y. Zakharina, V. B. Fedoseev, Y. V. Chechet, S. A. Chesnokov, and A. S. Shaplov, "Effect of Viscosity of Dimethacrylate Ester-Based Compositions on the Kinetics of Their Photopolymerization in Presence of o-Quinone Photoinitiators," Polym. Sci., Ser. B 59(6), 665–673 (2017).
45. S. Zissi, A. Bertsch, J.-Y. Jezequel, S. Corbel, D. J. Lougnot, and J. C. Andre, "Stereolithography and microtechniques," Microsyst. Technol. 2(2), 97–102 (1996).
46. J. B. Mueller, J. Fischer, F. Mayer, M. Kadic, and M. Wegener, "Polymerization Kinetics in Three-Dimensional Direct Laser Writing," Adv. Mater. 26(38), 6566–6571 (2014).
47. K. Takada, K. Kaneko, Y. D. Li, S. Kawata, Q. D. Chen, and H. B. Sun, "Temperature effects on pinpoint photopolymerization and polymerized micronanostructures," Appl. Phys. Lett. 92(4), 041902 (2008).
48. I. Sakellari, E. Kabouraki, D. Gray, V. Purlys, C. Fotakis, A. Pikulin, N. Bityurin, M. Vamvakaki, and M. Farsari, "Diffusion-Assisted High-Resolution Direct Femtosecond Laser Writing," Acs Nano 6(3), 2302–2311 (2012).
49. P. M. Consoli, A. J. G. Otuka, D. T. Balogh, and C. R. Mendonca, "Feature size reduction in two-photon polymerization by optimizing resin composition," J. Polym. Sci., Part B: Polym. Phys. 56(16), 1158–1163 (2018).
50. C. Decker, B. Elzaouk, and D. Decker, "Kinetic study of ultrafast photopolymerization reactions," J. Macromol. Sci., Part A: Pure Appl. Chem. 33(2), 173–190 (1996).
51. J. Fischer, J. B. Mueller, J. Kaschke, T. J. A. Wolf, A. N. Unterreiner, and M. Wegener, "Three-dimensional multi-photon direct laser writing with variable repetition rate," Opt. Express 21(22), 26244–26260 (2013).
52. Z. Tomova, N. Liaros, S. A. G. Razo, S. M. Wolf, and J. T. Fourkas, "In situ measurement of the effective nonlinear absorption order in multiphoton photoresists," Laser Photonics Rev. 10(5), 849–854 (2016).
53. J. T. Fourkas, Fundamentals of Two-Photon Fabrication (William Andrew Inc, 2016).
54. S. Juodkazis, V. Mizeikis, K. K. Seet, M. Miwa, and H. Misawa, "Two-photon lithography of nanorods in SU-8 photoresist," Nanotechnology 16(6), 846–849 (2005).
55. L. Jonusauskas, S. Rekstyte, and M. Malinauskas, "Augmentation of direct laser writing fabrication throughput for three-dimensional structures by varying focusing conditions," Opt. Eng. 53(12), 125102 (2014).
56. S. K. Saha, C. Divin, J. A. Cuadra, and R. M. Panas, "Effect of Proximity of Features on the Damage Threshold During Submicron Additive Manufacturing Via Two-Photon Polymerization," J. Micro- Nano-Manuf. 5(3), 031002 (2017).
57. J. Serbin, A. Egbert, A. Ostendorf, B. N. Chichkov, R. Houbertz, G. Domann, J. Schulz, C. Cronauer, L. Frohlich, and M. Popall, "Femtosecond laser-induced two-photon polymerization of inorganic-organic hybrid materials for applications in photonics," Opt. Lett. 28(5), 301–303 (2003).
58. C. Decker, "Photoinitiated crosslinking polymerisation," Prog. Polym. Sci. 21(4), 593–650 (1996).
59. E. Andrzejewska, "Photopolymerization kinetics of multifunctional monomers," Prog. Polym. Sci. 26(4), 605–665 (2001).
60. F. Tudos and T. Foldesberezsnich, "Free-radical polymerization - inhibition and retardation," Prog. Polym. Sci. 14(6), 717–761 (1989).
61. C. Decker and A. D. Jenkins, "Kinetic approach of O2 inhibition in ultraviolet-induced and laser-induced polymerizations," Macromolecules 18(6), 1241–1244 (1985).
62. G. Odian, "Radical chain polymerization," in Principles of Polymerization (John Wiley & Sons, 1991), pp. 262–266.
63. K. Studer, C. Decker, E. Beck, and R. Schwalm, "Overcoming oxygen inhibition in UV-curing of acrylate coatings by carbon dioxide inerting, Part I," Prog. Org. Coat. 48(1), 92–100 (2003).
64. A. K. O'Brien and C. N. Bowman, "Impact of oxygen on photopolymerization kinetics and polymer structure," Macromolecules 39(7), 2501–2506 (2006).
65. S. Beuermann and M. Buback, "Rate coefficients of free-radical polymerization deduced from pulsed laser experiments," Prog. Polym. Sci. 27(2), 191–254 (2002).
66. S. Rekstyte, T. Jonavicius, D. Gailevicius, M. Malinauskas, V. Mizeikis, E. G. Gamaly, and S. Juodkazis, "Nanoscale Precision of 3D Polymerization via Polarization Control," Adv. Opt. Mater. 4(8), 1209–1214 (2016).
67. T. Baldacchini, S. Snider, and R. Zadoyan, "Two-photon polymerization with variable repetition rate bursts of femtosecond laser pulses," Opt. Express 20(28), 29890–29899 (2012).
68. J. B. Mueller, J. Fischer, Y. J. Mange, T. Nann, and M. Wegener, "In-situ local temperature measurement during three-dimensional direct laser writing," Appl. Phys. Lett. 103(12), 123107 (2013).
69. G. de Miguel, G. Vicidomini, B. Harke, and A. Diaspro, Linewidth and Writing Resolution (William Andrew Inc, 2016).
70. H. B. Sun, S. Matsuo, and H. Misawa, "Three-dimensional photonic crystal structures achieved with two-photon-absorption photopolymerization of resin," Appl. Phys. Lett. 74(6), 786–788 (1999).
71. W. E. Lu, X. Z. Dong, W. Q. Chen, Z. S. Zhao, and X. M. Duan, "Novel photoinitiator with a radical quenching moiety for confining radical diffusion in two-photon induced photopolymerization," J. Mater. Chem. 21(15), 5650–5659 (2011).
npj digital medicine
The effectiveness of public health advertisements to promote health: a randomized-controlled trial on 794,000 participants
Elad Yom-Tov, Jinia Shembekar, Sarah Barclay & Peter Muennig
npj Digital Medicine volume 1, Article number: 24 (2018)
An Author Correction to this article was published on 16 August 2018
As public health advertisements move online, it becomes possible to run inexpensive randomized-controlled trials (RCTs) thereof. Here we report the results of an online RCT to improve food choices and integrate exercise into the daily activities of internet users. People searching for pre-specified terms were randomized to receive one of several professionally developed campaign advertisements or the "status quo" (the ads that would otherwise have been served). For 1 month pre-intervention and 1 month post-intervention, their searches for health-promoting goods or services were recorded. Our results show that 48% of people who were exposed to the ads made subsequent searches for weight loss information, compared with 32% of those in the control group, a 50% increase. The advertisements varied in efficacy. However, the effectiveness of the advertisements may be greatly improved by targeting individuals based on their lifestyle preferences and/or sociodemographic characteristics, which together explain 49% of the variation in response to the ads. These results demonstrate that online advertisements hold promise as a mechanism for changing population health behaviors. They also provide researchers with powerful ways to measure and improve the effectiveness of online public health interventions. Finally, we show that corporations that use these sophisticated tools to promote unhealthy products can potentially be outbid and outmaneuvered.
Hundreds of millions of dollars are spent on traditional public health advertisements annually.1,2,3,4,5,6,7 In theory, public health advertising can save money and lives by encouraging behaviors that prevent disease before it happens.8 While the objective of public health advertising investments (e.g., encouraging people to quit smoking) differs from that of private advertisers (encouraging people to purchase a good or service), the central idea is the same: to change behaviors.
Before online advertising, it was only possible to empirically test public health campaigns by randomizing small numbers of participants and examining a few outcome measures.1,2 This made it difficult to test to whom different forms of advertisement are best targeted.3,4,5,6
Humans vary greatly with respect to both their biology and their beliefs. Medical researchers use predictive analytics to mine databases of genetic information in order to target treatments to individuals who are more likely to respond to them. Similarly, private advertisers use predictive analytics to mine multiple sources of sociodemographic and behavioral data to better target individual consumers with the goal of changing their behavior. However, precision public health interventions have largely sat on the sidelines both due to the large sums of money required for targeted advertising and due to ethical concerns.
Ethical concerns arise for a number of reasons. First, participant data are collected without informed consent.9 Second, many in public health feel uncomfortable with the idea of manipulating individual behaviors, preferring instead to work with anonymous means to attempt to change behavior more generically.10,11 Such concerns have largely pre-empted the use of precision public health advertising, leaving only private firms to employ these tools.
In the private sector, Google, Microsoft, Facebook, and other internet-based companies provide online services for free in exchange for the information that drives precision advertising using "big data analytics". Online ads targeted using data analytics can influence emotions and behaviors.10,12,13
First, advertisers can make educated guesses, or run small-scale tests, about who might respond most to a given advertisement based on common search terms by topic. Then, advertisements can be randomized to be shown to users of search engines who search for such terms. Randomization provides a "gold standard" test of efficacy. Randomization can also provide causal information on how different sub-groups (e.g., young women) respond to an advertisement relative to others. Information on the experimental responses of different "archetypes" of individuals can then be re-tested with newer, more effective advertisements. This incremental approach of targeting, refining, and testing has the power to produce online ads that affect beliefs and behaviors.
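As a concrete illustration of the randomization step, the toy sketch below assigns each user deterministically but pseudo-randomly to one of several ad arms or a status-quo control. The hashing scheme, arm names, and equal allocation are hypothetical illustrations, not the mechanism of any actual ad platform:

```python
# Toy sketch of randomizing search users into ad arms. The hash-based
# assignment, arm names, and equal arm sizes are hypothetical, not the
# actual ad-platform mechanism used in such studies.
import hashlib

ARMS = ["status_quo"] + [f"campaign_ad_{i}" for i in range(1, 11)]  # 10 ads + control

def assign_arm(user_id: str) -> str:
    """Deterministic pseudo-random assignment keyed on a stable user id."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return ARMS[int(digest, 16) % len(ARMS)]

print(assign_arm("anonymous-user-0001"))
```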
Big data companies—such as Facebook, Google, and Microsoft—conduct tens of thousands of randomized-controlled trials (RCTs) on their users every year.14 These results are invariably kept inside these companies, but the general process for evaluating advertisement efficacy is likely similar across companies.
Search advertisements are typically presented as textual advertisements that appear on a search results page, coupled with a click-through link to the advertiser's site. More advanced versions include images in addition to (or instead of) the text. While it is rare for users to click on ads, online display advertisements have been shown to increase sales both online and in brick-and-mortar stores.15 Display and search ads are believed to produce similar impacts, and about 75% of this increase in traffic in the cited study came from those who never clicked through.
In this paper, we take the reader through this advertising process and conduct, to our knowledge, the first fully randomized online public health communications campaign that tracks not only the click response to ads but also the searches made before and after advertisement display. Professionals from J. Walter Thompson (JWT), a leading advertising firm in New York City, developed a series of online ads aimed at improving the exercise and eating behaviors of search engine users ("users") who are overweight. We then experimentally tested these ads using a series of 10 RCTs, each for a different textual advertisement paired with unique click-through content. Finally, we explored the impact of the ads on changing health behaviors as measured by future health-promotion searches.
Descriptive analysis
During the month of the RCT, the experimental ads were shown 265,279 times and clicked 1024 times. Of these displays, the ads were shown 3108 times to 2996 users whose queries could be tracked before and after exposure. Additionally, during the month of the RCT, a total of 505,693 non-exposed users made at least one query of the kind that triggered a campaign ad in the treatment population.
The majority of users were between the ages of 35 and 64, and females were more likely to see the ads than males (Supplemental Fig. 1). Of those over the age of 65, male and female users were about equal in number. A total of 36 tracked users clicked on the ads.
The tracked users and users who were not tracked were both successfully randomly assigned (Table 1).
Table 1 Percentage of users by age group and by gender among all people exposed to the ads and among the tracked population
The click rate is congruent with the click rates of other advertising campaigns.16 As shown in Supplemental Fig. 1, females were more likely to use terms that triggered the campaign ads, but there was a trend toward males having a higher click-through rate.
Exposure to textual ads and future target searches
A model predicting future target searches from prior interest in target searches and from exposure to campaign ads reaches an R² of 0.314 (p < 10⁻¹⁰). As shown in Table 2, prior interest in target searches increases the likelihood of future target searches by 52% (slope = 0.52, standard error [SE] = 0.001; p < 10⁻¹⁰). However, exposure to campaign ads significantly increases the likelihood of future target searches by 15% (slope = 0.15, SE = 0.01; p < 10⁻¹⁰), especially in the absence of prior target searches. Stated differently, 48% of people who were exposed to the ads made future target searches, compared to 32% of the controls (a 50% increase). This difference is even greater in the population that did not have past target searches (30% vs. 15%).
Table 2 Model of the likelihood that a user will make future target searches
We constructed a model to predict future target searches in the treatment population. Using only respondent characteristics (both behavioral and demographic) produced a model with an R² of 0.414. Using only previous query topics produced an R² of 0.410. When both were included, the R² was 0.491.
Cox proportional hazards analyses
Table 3 shows the hazard ratios for the likelihood of future target searches for the sociodemographic and contextual characteristics of the ads. Recall that we correct p-values for the number of comparisons within each category; we discuss statistically significant results here. While the number of previous searches for keywords is associated with a very slight change in the HR for future keyword searches, the average person tends to make a large number of searches. None of the ads was markedly more effective than the others in evoking a future keyword search. However, exposure to more than one ad increased the chance of a keyword search by 11% (HR = 1.11; 95% CI = 1.03, 1.20). Females were much less likely than males to perform a future search for keywords when exposed to an advertisement (HR = 0.84; 95% CI = 0.76, 0.91).
Table 3 Cox proportional hazards ratios (HR) and 95% confidence intervals (CI) associated with searcher characteristics
We randomized Bing users to receive a professionally designed public health advertisement or to receive control (status quo) advertisements. We found that people who view online health promotion advertisements are much more likely to perform searches related to health promotion than those who were assigned "status quo" advertisements. The experimental effect sizes were large: 48% of those exposed to the text messages (and, in some cases, the landing pages) performed future health-related searches, while only 32% of the matched control group performed such searches.
At the population level, searching for specific health behaviors is associated with performing these behaviors in the physical world.17 For example, the number of people searching for information about cannabis is highly correlated with the known number of users of cannabis,18 the number of people searching for specific medicines corresponds to the number of prescriptions sold,19 and the distribution of birth events, as inferred from search queries, is extremely well aligned with the distribution provided by the Centers for Disease Control.20 We show that it is possible to alter the behavior of those with enough interest to conduct a search online, and show that it is possible to test such behavioral changes experimentally. With online advertisements, it is no longer necessary to stab in the dark with public health advertisement design. Nor is it necessary to guess who will respond to those advertisements. Rather, it is possible to systematically target users with advertisements to which they are most susceptible, thereby eliciting behavior change. Our identification strategies can, in theory, be used to continuously refine, randomize, and test the targeting algorithms on different user types. For instance, it is not only possible to target based on the users' age, race, and location, but also on their characteristics as defined by their internet searches, shopping preferences, and even email content. The targeting algorithms can use the information to be "stepped up" until there is evidence that the user changes his or her behaviors.
Our study was susceptible to a few limitations, including those typically inherent to RCTs. Perhaps the most important limitation is external validity, since we ran the campaign on only one platform. Second, it is difficult to quantify the impact of the counterfactual advertisements that were shown to control users. The counterfactual could be health promoting (e.g., gym memberships), neutral (e.g., vitamin supplements), or negative (e.g., unhealthy foods or products targeted toward high-risk groups). It is therefore possible that an ad with no efficacy could appear efficacious if the bulk of counterfactual advertisements discouraged future keyword searches.
Experimental offline advertisements have shown that it is possible to motivate health behavior change with traditional advertising modalities, such as associating behaviors with those of desirable social groups.2 The only published RCT of online health promotion advertisements we are aware of demonstrated that different audiences respond very differently to a given advertisement.16 For example, empowering advertisements were generally more effective at inducing future searches on smoking cessation than those that emphasized the negative health impacts of smoking, but this varied dramatically by the demographic characteristics of the viewers.
This also raises significant concerns for health departments and other agencies that could greatly benefit from access to the data underlying the health advertisements. Most large information technology firms provide services to users for free in exchange for access to user data. To best understand how to target users and change their behavior, it would be useful for agencies to have access to identified data that could be linked across multiple sources of big data. Like academic institutions, most public agencies require approvals that are difficult to obtain, in part because institutional review boards have not adjusted to the nuances of big data research and in part because there is no clear opt-in for users of online services. Clearly, cooperation between ethicists, big data companies, governmental bodies, and academia has great potential to advance population health. We show that it is not only technically possible to launch an online campaign that effectively improves health behaviors, but also that corporations that promote an unhealthy diet or a sedentary lifestyle can potentially be outbid or outmaneuvered.
Our RCT was conducted by Microsoft during April 2017 using the Bing Ads system. In this system, advertisers bid to place the ads when specific keywords are searched by users of the Bing search engine. Internet users of the Bing search engine who were logged into a Microsoft account and searching for pre-specified keywords were selected for this study. Eligible users were randomly exposed to JWT ads (treatment) or any other ads served up by the system (control). We then followed both the treatment and control users' future search queries, and retrospectively examined past queries to build and interpret predictive models.
This study was approved by the Microsoft Institutional Review Board and was declared exempt by the Columbia University Institutional Review Board under the understanding that the Columbia University researchers would not have access to the data in any form other than the tabular results presented in this paper, and further that they would not seek funding for the study.
This trial was registered on February 2018 with ClinicalTrials.gov, registration number NCT03439553.
User selection
Those who are motivated to change their behaviors are more likely to do so. As a result, advertisers often attempt to target individuals with some motivation to change. In this study, we attempt to improve the viewer's diets and increase their levels of physical activity. The goal of the user selection process was therefore to identify individuals who were motivated for behavioral change due to social stigma or disease, and then present an advertisement that suggests a behavioral change that is within reach given their lifestyle. We therefore selected users who used search terms associated with social stigma or diseases related to poor diet or low levels of exercise.
The Bing Ads system is designed to randomize advertisements for experimental purposes. In this study, we selected users for inclusion if they (1) were using the Bing search engine; (2) were logged into a Microsoft account; and (3) typed any of the following combinations of terms:
(Weight, Overweight, Obesity, lose weight) AND (<none>, Hypertension, High cholesterol, High blood pressure, Exercise, Diabetes, bullying)
Hypercholesterolemia, Fat, BMI, Body fat, Big gut, Big and tall clothing, Easy exercises, Healthy diet, Easy workout, Plus size, Weight loss pill, Diet pill, Weight loss surgery, XXL
The vast majority of Bing search engine users who typed the above keywords (n = 283,716) were excluded pre-randomization because they lacked a Microsoft account (the account is needed for user demographic data and analytics). Users who had a Microsoft account were further analyzed (Fig. 1). Among users with a Microsoft account, those with incomplete demographic data (age, gender, zip code) were also excluded, leaving 2996 treated participants and 505,693 control participants. The CONSORT diagram (Fig. 1) and the age and gender characteristics of the users (Table 1) show no threats to internal validity. There were no statistically significant differences in the demographic characteristics of the treatment and control users (χ² test, p > 0.05).
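As an illustration of this balance check, the following sketch runs a chi-square test of independence on an age-group-by-arm contingency table. This is not the study's code, and the counts are made-up placeholders.

```python
# Minimal sketch (not the authors' code): checking covariate balance between
# treatment and control arms with a chi-square test, as described above.
# The counts below are hypothetical placeholders, not the study's data.
from scipy.stats import chi2_contingency

# Rows: age groups; columns: (treatment, control) user counts (hypothetical).
table = [
    [120, 19500],    # 18-24
    [410, 71200],    # 25-34
    [980, 160300],   # 35-49
    [890, 152000],   # 50-64
    [596, 102693],   # 65+
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# p > 0.05 would be consistent with successful randomization on age.
```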
CONSORT 2010 flow diagram. CONSORT flow diagram template courtesy of http://www.consort-statement.org/consort-statement/flow-diagram
In addition to the above criteria for inclusion of users, campaign ads also had to competitively bid for an ad slot on the search results page. Keyword demand differs across advertisers, and so does the associated maximum bid for each keyword. To account for the differences in keyword demand and have a similar baseline for all keywords, we set the Bing Ads system to automatically adjust the bids for each of the campaign terms listed above to be high enough for our ads to be as competitive as control ads (i.e., those of other advertisers), but no more than US$1 per click.
User characterization
We extracted all queries made on Bing by treatment and control users in our trial, from 1 month before the first advertisement was shown through until 1 month after the last ad was shown.
For each query, we registered an anonymized user identifier, the time of the query, the US county from which the user made the query, and the text of the query. Each query was further classified (using a proprietary classifier developed by Microsoft) into one or more of approximately 60 topical categories. These categories encompassed broad topics, such as commerce, travel, and health.
Users exposed to ads were further characterized by their self-reported age, gender, and county-level poverty, as inferred from the county from which they made the query.21
JWT developed both the textual ads initially displayed to treatment users and the content on the landing page shown when the user clicked on a textual link. The advertisements were grounded in the Fogg Behavior Model.22,23 In this model, three elements must come together at the same time: motivation to change, ability to change, and a trigger for change. The ads were designed to be "hot triggers"23 that prime highly motivated users with content that is easy and actionable, in order to nudge a behavior change toward more positive health habits.
All treatment users who entered the keywords above were exposed to the textual ads. The associated landing pages contained information on how the subject might improve health behaviors through nutrition or exercise; however, the vast majority of users viewed only the textual advertisements. The JWT text ads fell into three categories: (1) suggestive of healthy behavior change ("Laugh your calories off", "Chores: the new workout", "Your kids are an exercise", "'Swalty' snacks are best"); (2) related to nutrition or exercise, but not to behavior change ("Burn calories sitting", "Lose weight watching TV", "Pimp up your snack"); or (3) unrelated to both behavior change and nutrition or exercise ("Find a hairy partner"). Each textual ad was designed to motivate users to click on the ad. Both the textual and click-through advertisements were explicitly designed to avoid stigmatizing obesity.
The landing page ads focused solely on nudging users toward changing their behaviors with suggestions for incorporating small amounts of exercise or easy dietary changes into day-to-day activities. These were accompanied by an animated image meant to reinforce the message of the advertisement (see the Supplemental Appendix). Users were also provided links to additional content developed by professional health organizations or the Centers for Disease Control and Prevention if they wanted more information.
Outcomes and predictor variables
Our primary outcome measure was the likelihood of a future search using a set of pre-specified keywords. These keywords were selected by identifying common weight-related search terms among Bing users. The terms fell into categories that suggest that the subject either (1) desires a deeper understanding of obesity (fat; nutrition; calories; body mass index; BMI; body weight; body mass) or (2) wishes to change their behavior (weight loss; weight watcher; weightwatcher; losing weight; and lose weight).
We were interested in exploring differences in outcome measures for treated and control users overall, by demographic characteristics, and by advertisement characteristics (content, placement, etc.). We were also interested in building predictive analytics that could identify which user types are most likely to respond to a given advertisement.
We used the following covariates to operationalize demographic and advertisement characteristics:
Past user behavior:
Number of past searches by the user
Number of past target searches by the user
Number of past ads shown to the user
User demographics:
Age group (categorized into six groups: 13–17, 18–24, 25–34, 35–49, 50–64, or 65+ years)
Gender (female or male)
Advertisement information:
Hour of the day ad shown (integer between 0 and 23 h)
Advertisement title (categorized into 10 groups)
Was the ad clicked? (yes/no)
Search page number on which the ad is displayed (integer between 1 and 100)
Search page position (indicator variable for whether the ad was placed on the top or the right-hand side of the search page)
Given the large sample size, we specified an effect size of greater than or equal to 10% to be meaningful.
We explored the likelihood of future target searches given users' exposure to ads, controlling for past searches. We used ordinary least squares regression to model the association between variables:
$$y = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2 + \ldots + \alpha_N x_N,$$
where y is an indicator of future searches and x1, …, xN are the predictors of the model.
Because previous searches predict the probability of subsequent searches, we were also interested in the interaction between prior searching and treatment: that is, the product of the indicator for having conducted a previous search and the indicator for being in the treatment group.
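A minimal sketch of this linear probability model, including the prior-search-by-treatment interaction, might look as follows in Python with statsmodels. The simulated data, coefficient values, and variable names are illustrative assumptions, not the study's code or data.

```python
# Minimal sketch (assumptions, not the study's code): an OLS linear
# probability model of future target searches with a prior x treatment
# interaction term. Data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000
treated = rng.integers(0, 2, n)            # 1 = exposed to campaign ads
prior = rng.integers(0, 2, n)              # 1 = made a past target search
# Simulated outcome, roughly mimicking the reported effect directions.
p_future = 0.15 + 0.17 * prior + 0.15 * treated
y = (rng.random(n) < p_future).astype(float)

X = sm.add_constant(np.column_stack([treated, prior, treated * prior]))
fit = sm.OLS(y, X).fit()
print(fit.summary(xname=["const", "treated", "prior", "treated_x_prior"]))
```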
We then developed a predictive model. In this model, each user was profiled prior to the ad campaign with respect to demographic characteristics and previous topical searches. With respect to topical searches, we explored whether the user had performed searches that relate to one of 60 pre-specified categories of interest. These included broader topics, such as shopping, travel, and health. By adding a term to the above equation that includes previous searches in each of these categories, it becomes possible to examine the influence of inclusion of the topic on the model's predictive value, as measured by the model's goodness-of-fit (R2).
These models included all 10 of the covariates listed above. These covariates are used for predictive purposes, and regression is conducted on a cohort that has already been randomized. This way, it becomes possible to make predictions based on treatment response when only treatment status introduces non-random variation.
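The model-comparison step described above can be sketched as fitting nested OLS models and comparing their R² values. Again, the feature stand-ins and simulated data below are assumptions for illustration only; the real model used the study's demographic variables and ~60 topic flags.

```python
# Minimal sketch: comparing goodness-of-fit (R^2) of nested OLS models built
# from demographics only, topic history only, and both (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5_000
demo = rng.normal(size=(n, 2))                   # stand-ins for age, gender
topics = rng.integers(0, 2, (n, 5)).astype(float)  # 5 of the ~60 topic flags
y = demo @ np.array([0.3, -0.2]) + 0.1 * topics.sum(axis=1) + rng.normal(size=n)

for name, X in [("demographics only", demo),
                ("topics only", topics),
                ("both", np.hstack([demo, topics]))]:
    r2 = sm.OLS(y, sm.add_constant(X)).fit().rsquared
    print(f"{name}: R^2 = {r2:.3f}")
```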
Next, we used Cox proportional hazards models and explored 32 predictors of future searches:
$$\mathrm{HR} = \exp\left(X_1\beta_1 + \ldots + X_N\beta_N\right),$$
where HR is the hazard ratio, X the predictors of the model, and β their corresponding model coefficients.
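A hedged sketch of such a fit, using the lifelines library with illustrative column names and simulated data (not the study's), might read:

```python
# Minimal sketch (assumptions, not the study's code): a Cox proportional
# hazards model for time to a future target search, fit with lifelines.
# Column names are illustrative stand-ins for the predictors above.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "days_to_search": rng.exponential(20.0, n) + 0.01,  # follow-up time
    "searched": rng.integers(0, 2, n),                  # event indicator
    "n_ads_seen": rng.integers(1, 4, n),
    "female": rng.integers(0, 2, n),
    "past_searches": rng.poisson(30, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_search", event_col="searched")
cph.print_summary()   # hazard ratios are exp(coef)
```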
The predictors fell into broad categories of user and exposure characteristics: previous searches, exposure to our advertisements, advertisement placement characteristics, age, gender, and poverty. We used Bonferroni correction for the number of categorical variables within each of these broader categories. We examined the HR for future searches for various user and ad characteristics.
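The within-category correction can be applied, for example, with statsmodels' multiple-testing helper; the p-values below are placeholders, not the study's results.

```python
# Minimal sketch: Bonferroni-adjusting p-values within one category of
# predictors (placeholder p-values, not the study's).
from statsmodels.stats.multitest import multipletests

pvals = [0.012, 0.048, 0.300]  # e.g., three ad-placement predictors
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(list(p_adj), list(reject))
```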
In a secondary analysis (see Supplementary Materials), users meeting the inclusion criteria were propensity-score matched to unexposed users on age, gender, and zip code, and analyzed using the characteristics above. This low-noise, smaller-sample analysis makes it possible to obtain very conservative assurances that differences by treatment status are statistically significant, rather than relying on clinically meaningful effect sizes (as in the parent analysis).
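A minimal propensity-score-matching sketch under these assumptions (logistic-regression scores and 1-to-1 nearest-neighbor matching, with simulated covariates standing in for age, gender, and location) might be:

```python
# Minimal sketch (assumptions, not the study's code): 1-to-1 propensity
# score matching of exposed users to unexposed users.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X_treat = rng.normal(size=(300, 3))     # covariates of exposed users
X_ctrl = rng.normal(size=(5000, 3))     # covariates of unexposed users

X = np.vstack([X_treat, X_ctrl])
z = np.r_[np.ones(len(X_treat)), np.zeros(len(X_ctrl))]
scores = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]

ps_treat = scores[: len(X_treat)].reshape(-1, 1)
ps_ctrl = scores[len(X_treat):].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(ps_ctrl)
_, idx = nn.kneighbors(ps_treat)        # matched control index per treated user
print(idx[:10].ravel())
```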
The data that support the findings of this study are available from Microsoft, but restrictions apply to the availability of the data. Specifically, all aggregate advertising data are available from the authors on reasonable request. Individual-level search data are available from the authors on reasonable request and with permission of Microsoft.
In the original version of the published Article, there was an error in the caption to Table 1 which stated "None of the differences are statistically significant (χ2, two-sided, p > 0.05)". This has been changed to "The 18–24 year old are over-represented in the all user treatment population, while the 50–64 year old are underrepresented in both tracked and all user population, p-values were <0.05 for age groups and gender." This has been corrected in the HTML and PDF version of the Article.
Atlantis, E., Salmon, J. & Bauman, A. Acute effects of advertisements on children's choices, preferences, and ratings of liking for physical activities and sedentary behaviours: a randomised controlled pilot study. J. Sci. Med. Sport 11, 553–557 (2008).
Berger, J. & Rand, L. Shifting signals to help health: using identity signaling to reduce risky health behaviors. J. Consum. Res. 35, 509–518 (2008).
Snyder, L. B. Health communication campaigns and their impact on behavior. J. Nutr. Educ. Behav. 39, S32–S40 (2007).
Snyder, L. B. et al. A meta-analysis of the effect of mediated health communication campaigns on behavior change in the United States. J. Health Commun. 9, 71–96 (2004).
Witte, K. & Allen, M. A meta-analysis of fear appeals: Implications for effective public health campaigns. Health Educ. Behav. 27, 591–615 (2000).
Nutbeam, D. Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century. Health Promot. Int. 15, 259–267 (2000).
Mathieson, S. A. DH doubled ad spending to £60m. The Guardian https://www.theguardian.com/healthcare-network/2011/jan/13/department-health-doubled-advertising-spending-60m (2017).
Rice, R. E. & Atkin, C. K. Public Communication Campaigns. (Sage, Thousand Oaks, CA, 2012).
Grady, C. Enduring and emerging challenges of informed consent. N. Engl. J. Med. 372, 855–862 (2015).
Kramer, A. D., Guillory, J. E. & Hancock, J. T. Experimental evidence of massive-scale emotional contagion through social networks. Proc. Natl. Acad. Sci. USA 111, 8788–8790 (2014).
Zuboff, S. Big other: surveillance capitalism and the prospects of an information civilization. J. Inform. Technol. 30, 75–89 (2015).
Andreu-Perez, J., Poon, C. C., Merrifield, R. D., Wong, S. T. & Yang, G.-Z. Big data for health. IEEE J. Biomed. Health 19, 1193–1208 (2015).
Ruggeri, K., Yoon, H., Kácha, O., van der Linden, S. & Muennig, P. Policy and population behavior in the age of Big Data. Curr. Opin. Behav. Sci. 18, 1–6 (2017).
Kohavi, R., Crook, T., Longbotham, R. Online Experimentation at Microsoft. Third Workshop on Data Mining Case Studies and Practice Prize. Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining (Association of Computing Machinery (ACM), San Jose, CA, 2009).
Lewis, R. A. & Reiley, D. H. Online ads and offline sales: measuring the effects of retail advertising via a controlled experiment on Yahoo! QME-Quant. Mark. Econ. 12, 235–266 (2014).
Yom-Tov, E., Muennig, P. & El-Sayed, A. M. Web-based antismoking advertising to promote smoking cessation: a randomized controlled trial. J. Med. Internet Res. 8, e306 (2016).
Yom-Tov, E. Crowdsourced Health: How What You Do on the Internet Will Improve Medicine (MIT Press, Cambridge, MA, 2016).
Yom-Tov, E. & Lev-Ran, S. Adverse reactions associated with cannabis consumption as evident from search engine queries. JMIR Public Health Surveill. 3, e77 (2017).
Yom-Tov, E. & Gabrilovich, E. Postmarket drug surveillance without trial costs: discovery of adverse drug reactions through large-scale analysis of web search queries. J. Med. Internet Res. 15, e124 (2013).
Fourney, A., White, R. W., Horvitz, E. Exploring time-dependent concerns about pregnancy and childbirth from search logs. 33rd Annual ACM Conference on Human Factors in Computing Systems, 737–746 (Seoul, Republic of Korea, 2015).
US Bureau of the Census. Census 2010. http://www.census.gov/main/www/cen2000.html (2010).
Fogg, B. J. Fogg Behavior Model. http://www-personal.umich.edu/~mrother/KATA_Files/FBM.pdf (The author, 2007).
Fogg, B. J. A behavior model for persuasive design. Proceedings of the 4th International Conference on Persuasive Technology, Claremont, CA (Association of Computing Machinery (ACM), New York, NY, 2009).
The authors wish to thank Nicholas Orsini, Zeynep Cingir, Javier Pinol, Yudi Rojas, Gustavo Tezza, Valerie O'Bert, Vaibhav Bhanot, and Pritika Mathur for their help in designing the ads.
Microsoft Research Israel, 13 Shenkar st., 46875, Herzeliya, Israel
Elad Yom-Tov
J. Walter Thompson, 466 Lexington Avenue, New York, NY, 10017, USA
Jinia Shembekar & Sarah Barclay
Global Research Analytics for Population Health and the Department of Health Policy and Management, Mailman School of Public Health, Columbia University, 722 West 168th St., New York, NY, 10032, USA
Peter Muennig
P.M. devised the study. J.S. and S.B. designed the ads and landing pages. All authors decided on the keywords. E.Y.T. ran the advertising campaign, collected the data, and analyzed it. All authors were involved in writing the paper. This work was carried out as part of the authors' salaried employment, with no specific funding.
Correspondence to Elad Yom-Tov.
E.Y.T. is an employee of Microsoft, owner of the Bing search engine. The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Yom-Tov, E., Shembekar, J., Barclay, S. et al. The effectiveness of public health advertisements to promote health: a randomized-controlled trial on 794,000 participants. npj Digital Med 1, 24 (2018). https://doi.org/10.1038/s41746-018-0031-7
Revised: 30 March 2018
Editorial Summary
Online advertising: healthier ads promote healthier living
People who see specific health-promoting messages after searching online for weight-related terms are more likely to subsequently search for information on weight loss interventions. A team led by Elad Yom-Tov from Microsoft Research Israel in Herzeliya conducted a randomized trial involving 794,000 users of the Bing search engine who queried terms related to weight, diet, and exercise. Randomly chosen subjects were shown advertisements designed to promote healthy living, while all other users were shown standard ads. The researchers found that 48% of people exposed to the health-promoting advertisements made searches within the next month for weight loss information, compared with only 32% of those in the control group. The findings suggest that targeted online messaging can help change population health behaviors.
March 2021, 11(1): 119-141. doi: 10.3934/mcrf.2020030
Optimal design problems governed by the nonlocal $ p $-Laplacian equation
Fuensanta Andrés 1, Julio Muñoz 1,2 and Jesús Rosado 1,2
1. Universidad de Castilla-La Mancha, Departamento de Matemáticas and Escuela de Ingeniería Industrial y Aerospacial, Avenida Carlos Ⅲ s/n, Real Fábrica de Armas, 45071 Toledo, Spain
2. Universidad de Castilla-La Mancha, Departamento de Matemáticas and Facultad de CC del Medioambiente y Bioquímica, Avenida Carlos Ⅲ s/n, Real Fábrica de Armas, 45071 Toledo, Spain
Received June 2019; Revised March 2020; Published June 2020
In the present work, a nonlocal optimal design model is considered as an approximation of the corresponding classical or local optimal design problem. The new model is driven by the nonlocal $ p $-Laplacian equation, the design is the diffusion coefficient, and the cost functional belongs to a broad class of nonlocal integral functionals. The purpose of this paper is to prove the existence of an optimal design for the new model. This work is complemented by showing that the nonlocal $ p $-Laplacian state equation converges, in the limit, to the corresponding local problem. Also, as in the paper by F. Andrés and J. Muñoz [J. Math. Anal. Appl. 429:288–310], the convergence of the nonlocal optimal design problem toward the local version is studied. This task is successfully performed in two different cases: when the cost to minimize is the compliance functional, and when an additional nonlocal constraint on the design is assumed.
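For orientation, a common form of the nonlocal $ p $-Laplacian state equation in this literature (see, e.g., Andreu-Vaillo et al. below) is

$$-2\int_{\Omega \cup \Omega_J} h(x')\, J(x - x')\, \lvert u(x') - u(x)\rvert^{p-2}\, \bigl(u(x') - u(x)\bigr)\, dx' = f(x), \qquad x \in \Omega,$$

where $ J $ is a nonnegative, compactly supported kernel, $ h $ is the design (diffusion) coefficient, and $ \Omega_J $ is a nonlocal boundary collar on which the state is prescribed. The exact kernel, constants, and boundary conventions used in this paper may differ, so this display is illustrative only.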
Keywords: Approximation of partial differential equations, optimal control, nonlocal elliptic equations, G-convergence, p-Laplacian.
Mathematics Subject Classification: Primary: 49J55, 35D99; Secondary: 35J92, 49J45.
Citation: Fuensanta Andrés, Julio Muñoz, Jesús Rosado. Optimal design problems governed by the nonlocal $ p $-Laplacian equation. Mathematical Control & Related Fields, 2021, 11 (1) : 119-141. doi: 10.3934/mcrf.2020030
G. Allaire, Shape Optimization by the Homogenization Method, Springer Verlag, New York, 2002. Google Scholar
B. Aksoylu and T. Mengesha, Results on nonlocal boundary value problems, Numer. Funct. Anal. Optim., 31 (2010), 1301-1317. doi: 10.1080/01630563.2010.519136. Google Scholar
F. Andrés and J. Muñoz, A type of nonlocal elliptic problem: Existence and approximation through a Galerkin-Fourier method, SIAM J. Math. Anal., 47 (2015), 498-525. doi: 10.1137/140963066. Google Scholar
F. Andrés and J. Muñoz, Nonlocal optimal design: A new perspective about the approximation of solutions in optimal design, J. Math. Anal. Appl., 429 (2015), 288-310. doi: 10.1016/j.jmaa.2015.04.026. Google Scholar
F. Andrés and J. Muñoz, On the convergence of a class of nonlocal elliptic equations and related optimal design problems, J. Optim. Theory Appl., 172 (2017), 33-55. doi: 10.1007/s10957-016-1021-z. Google Scholar
F. Andreu-Vaillo, J. M. Mazón, J. D. Rossi and J. J. Toledo-Melero, Nonlocal Diffusion Problems, Mathematical Surveys and Monographs, 165, American Mathematical Society, Providence, 2010. doi: 10.1090/surv/165. Google Scholar
F. Andreu, J. D. Rossi and J. J. Toledo-Melero, Local and nonlocal weighted p-Laplacian evolution equations with Neumann boundary conditions, Publ. Mat., 55 (2011), 27-66. doi: 10.5565/PUBLMAT_55111_03. Google Scholar
O. Bakunin, Turbulence and Diffusion: Scaling Versus Equations, 1st edition, Springer Verlag, Berlin, 2008. doi: 10.1007/978-3-540-68222-6. Google Scholar
J. C. Bellido and A. Egrafov, A simple characterization of $H$-convergence for a class of nonlocal problems, Revista Matemática Complutense, (2020). doi: 10.1007/s13163-020-00349-9. Google Scholar
J. C. Bellido and C. Mora-Corral, Existence for nonlocal variational problems in peridynamics, SIAM J. Math. Anal., 46 (2014), 890-916. doi: 10.1137/130911548. Google Scholar
J. Fernández-Bonder, A. Ritorto and A. M. Salort, $H$-convergence result for nonlocal elliptic-type problems via Tartar's method, SIAM J. Math. Anal., 49 (2017), 2387-2408. doi: 10.1137/16M1080215. Google Scholar
J. Fernández-Bonder and J. F. Spedaletti, Some nonlocal optimal design problems, J. Math. Anal. Appl., 459 (2018), 906-931. doi: 10.1016/j.jmaa.2017.11.015. Google Scholar
M. Bonforte, Y. Sire and J. L. Vázquez, Existence, uniqueness and asymptotic behavior for fractional porous medium equations on bounded domains, Discrete Contin. Dyn. Syst., 35 (2015), 5725-5767. doi: 10.3934/dcds.2015.35.5725. Google Scholar
J. Bourgain, H. Brezis and P. Mironescu, Another look at Sobolev spaces, in Optimal Control and Partial Differential Equations (a volume in honour of A. Benssoussan's 60th birthday) (Eds. J. L. Menaldi et al.), IOS, Amsterdam, (2001), 439–455. Google Scholar
C. Bucur and E. Valdinoci, Nonlocal Diffusion and Applications, Lecture Notes of the Unione Matematica Italiana, 20, Springer International Publisher, 2016. doi: 10.1007/978-3-319-28739-3. Google Scholar
B. A. Carreras, V. E. Lynch and G. M. Zaslavsky, Anomalous diffusion and exit time distribution of particle tracer in plasma turbulence models, Phys. Plasmas, 8 (2001), 113-147. Google Scholar
J. Cea and K. Malanowski, An example of a max-min problem in partial differential equations, SIAM J. Control, 8 (1970), 305-316. doi: 10.1137/0308021. Google Scholar
A. Cherkaev and R. Kohn, Topics in Mathematical Modeling of Composite Materials, Birkhäuser Boston, Inc., Boston, MA, 1997. doi: 10.1007/978-3-319-97184-1. Google Scholar
M. Chipot, Elliptic Equations: An Introductory Course, Birkhäuser, 2009. doi: 10.1007/978-3-7643-9982-5. Google Scholar
M. C. Delfour and J. P. Zolésio, Shapes and Geometries: Metrics, Analysis, Differential Calculus, Advances in design and control, 22, SIAM, 2011. doi: 10.1137/1.9780898719826. Google Scholar
M. D'Elia and M. Gunzburger, Optimal distributed control of nonlocal steady diffusion problems, SIAM. J. Control Optim., 52 (2014), 243-273. doi: 10.1137/120897857. Google Scholar
M. D'Elia and M. Gunzburger, Identification of the diffusion parameter in nonlocal steady diffusion problems, Appl. Math. Optim., 73 (2016), 227-249. doi: 10.1007/s00245-015-9300-x. Google Scholar
M. D'Elia, Q. Du and M. Gunzburger, Recent progress in mathematical and computational aspects of peridynamics, in Handbook of Nonlocal Continuum Mechanics for Materials and Structures (Ed. G. Voyiadjis), Springer, (2018), 1–26. doi: 10.1007/978-3-319-22977-5_30-1. Google Scholar
E. Di Nezza, G. Palatucci and E. Valdinoci, Hitchhiker's guide to the fractional Sobolev spaces, Bull. Sci. math., 136 (2012), 521-573. doi: 10.1016/j.bulsci.2011.12.004. Google Scholar
Q. Du, M. D. Gunzburger, R. B. Lehoucq and K. Zhou, Analysis and approximation of nonlocal diffusion problems with volume constraints, SIAM Rev., 54 (2012), 667-696. doi: 10.1137/110833294. Google Scholar
M. Felsinger, M. Kassmann and P. Voigt, The Dirichlet problem for nonlocal operators, Mathematische Zeitschrift, 279 (2015), 779-809. doi: 10.1007/s00209-014-1394-3. Google Scholar
C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, Springer Series in Synergetics, 3rd edition, Springer-Verlag, Berlin, 2004. Google Scholar
M. Gunzburger, N. Jiang and F. Xu, Analysis and approximation of a fractional Laplacian-based closure model for turbulent flows and its connection to Richardson pair dispersion, Comput. Math. with Appl., 75 (2018), 1973-2001. doi: 10.1016/j.camwa.2017.06.035. Google Scholar
O. Hernández-Lerma and J. B. Lasserre, Fatou's lemma and Lebesgue's convergence theorem for measures, J. Appl. Math. Stochastic Anal., 13 (2000), 137-146. doi: 10.1155/S1048953300000150. Google Scholar
B. Hinds and P. Radu, Dirichlet's principle and wellposedness of solutions for a nonlocal p-Laplacian system, Appl. Math. Comput., 219 (2012), 1411-1419. doi: 10.1016/j.amc.2012.07.045. Google Scholar
E. Lindgren and P. Lindqvist, Fractional eigenvalues, Calc. Var. Partial Differential Equations, 49 (2014), 795-826. doi: 10.1007/s00526-013-0600-1. Google Scholar
J. M. Mazón, J. D. Rossi and J. J. Toledo-Melero, Fractional $p$-Laplacian evolution equations, J. Math. Pures Appl., 105 (2016), 810-844. doi: 10.1016/j.matpur.2016.02.004. Google Scholar
T. Mengesha and Q. Du, Characterization of function spaces of vector fields and an application in nonlinear peridynamics, Nonlinear Anal., 140 (2016), 82-111. doi: 10.1016/j.na.2016.02.024. Google Scholar
T. Mengesha and Q. Du, On the variational limit of a class of nonlocal functionals related to peridynamics, Nonlinearity, 28 (2015), 3999-4035. doi: 10.1088/0951-7715/28/11/3999. Google Scholar
R. Metzler and J. Klafter, The random walk's guide to anomalous diffusion: a fractional dynamics approach, Phys. Rep., 339 (2000), 1-77. doi: 10.1016/S0370-1573(00)00070-3. Google Scholar
R. Metzler and J. Klafter, The restaurant at the end of the random walk: Recent developments in the description of anomalous transport by fractional dynamics, J. Phys. A: Math. Gen., 37 (2004), 161-208. Google Scholar
J. Muñoz, Generalized Ponce's inequality, preprint, arXiv: 1909.04146v2. Google Scholar
S. P. Neuman and D. M. Tartakovsky, Perspective on theories of non-Fickian transport in heterogeneous media, Adv. in Water Resources, 32 (2009), 670-680. doi: 10.1016/j.advwatres.2008.08.005. Google Scholar
A. C. Ponce, An estimate in the spirit of Poincaré's inequality, J. Eur. Math. Soc. (JEMS), 6 (2004), 1-15. doi: 10.4171/JEMS/1. Google Scholar
A. C. Ponce, A new approach to Sobolev spaces and connections to Γ-convergence, Calc. Var. Partial Differential Equations, 19 (2004), 229-255. doi: 10.1007/s00526-003-0195-z. Google Scholar
F. Riesz and B. Sz.-Nagy, Functional Analysis, Dover Publications, New York, 1990. Google Scholar
H. L. Royden, Real Analysis, 3rd edition, Macmillan Publishing Company, New York, 1988. Google Scholar
M. F. Shlesinger, B. J. West and J. Klafter, Lévy dynamics of enhanced diffsion: Application to turbulence, Phys Rev. Lett., 58 (1987), 1100-1103. doi: 10.1103/PhysRevLett.58.1100. Google Scholar
J. L. Vázquez, Nonlinear diffusion with fractional Laplacian operators, in Nonlinear Partial Differential Equations: The Abel Symposium 2010 (eds. H. Holden, K. H. Karlsen), Springer, (2012), 271–298. doi: 10.1007/978-3-642-25361-4_15. Google Scholar
J. L. Vázquez, Recent progress in the theory of nonlinear diffusion with fractional Laplacian operators, Discrete Contin. Dyn. Syst. Ser. S, 7 (2014), 857-885. doi: 10.3934/dcdss.2014.7.857. Google Scholar
J. L. Vázquez, The mathematical theories of diffusion: Nonlinear and fractional diffusion, in Nonlocal and Nonlinear Diffusions and Interactions: New Methods and Directions (eds. M. Bonforte, G. Grillo), Lecture Notes in Mathematics, 2186, Springer Cham, (2017), 205–278. doi: 10.1007/978-3-319-61494-6_5. Google Scholar
K. Zhou and Q. Du, Mathematical and numerical analysis of linear peridynamic models with nonlocal boundary conditions, SIAM J. Numer. Anal., 48 (2010), 1759-1780. doi: 10.1137/090781267. Google Scholar
Several new classes of (balanced) Boolean functions with few Walsh transform values
Tingting Pang 1, Nian Li 1,*, Li Zhang 2 and Xiangyong Zeng 1
Wuhan Maritime Communication Research Institute, Wuhan 430079, China
* Corresponding author: Nian Li
Received February 2020; Revised April 2020; Published July 2020
Fund Project: This work was supported by the National Natural Science Foundation of China (Nos. 61702166, 61761166010) and Major Technological Innovation Special Project of Hubei Province (No. 2019ACA144)
Three classes of (balanced) Boolean functions with few Walsh transform values, derived from bent functions, Gold functions, and the product of linearized polynomials, are obtained in this paper. Further, the value distributions of their Walsh transforms are also determined, by virtue of the properties of bent functions, the Walsh transform property of Gold functions, and the $ k $-tuple balance property of trace functions, respectively.
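For context, the Walsh transform of a Boolean function $ f $ on $ \mathbb{F}_{2^n} $ is conventionally defined (up to a normalization that may differ from the paper's) by

$$W_f(\lambda) = \sum_{x \in \mathbb{F}_{2^n}} (-1)^{f(x) + \mathrm{Tr}_1^n(\lambda x)}, \qquad \lambda \in \mathbb{F}_{2^n},$$

so that $ f $ is balanced exactly when $ W_f(0) = 0 $, and bent (for even $ n $) exactly when $ W_f(\lambda) = \pm 2^{n/2} $ for every $ \lambda $.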
Keywords: Walsh transform, Bent function, Gold function, linearized polynomial, balanced function.
Mathematics Subject Classification: Primary: 94A60, 11T23; Secondary: 06E30.
Citation: Tingting Pang, Nian Li, Li Zhang, Xiangyong Zeng. Several new classes of (balanced) Boolean functions with few Walsh transform values. Advances in Mathematics of Communications, doi: 10.3934/amc.2020095
Applied Numerical Algorithms Group: Publications
Mark Adams, Stephen Cornford, Daniel Martin, Peter McCorquodale, "Composite matrix construction for structured grid adaptive mesh refinement", Computer Physics Communications, November 2019, 244:35-39, doi: 10.1016/j.cpc.2019.07.006
Weiqun Zhang, Ann Almgren, Vince Beckner, John Bell, Johannes Blaschke, Cy Chan, Marcus Day, Brian Friesen, Kevin Gott, Daniel Graves, Max P. Katz, Andrew Myers, Tan Nguyen, Andrew Nonaka, Michele Rosso, Samuel Williams, Michael Zingale, "AMReX: a framework for block-structured adaptive mesh refinement", Journal of Open Source Software, May 2019, doi: 10.21105/joss.01370
Boris Lo, Phillip Colella, "An Adaptive Local Discrete Convolution Method for the Numerical Solution of Maxwell's Equations", Communications in Applied Mathematics and Computational Science, April 26, 2019, 14:105-119, doi: DOI: 10.2140/camcos.2019.14.105
Sergi Molins, David Trebotich, Bhavna Arora, Carl Steefel, Hang Deng, "Multi-scale Model of Reactive Transport in Fractured Media: Diffusion Limitations on Rates", Transport in Porous Media, March 20, 2019, 128:701-721, doi: 10.1007/s11242-019-01266-2
Daniel F. Martin, Stephen L. Cornford, Antony J. Payne, "Millennial‐scale Vulnerability of the Antarctic Ice Sheet to Regional Ice Shelf Collapse", Geophysical Research Letters, January 9, 2019, doi: 10.1029/2018gl081229
The Antarctic Ice Sheet (AIS) remains the largest uncertainty in projections of future sea level rise. A likely climate‐driven vulnerability of the AIS is thinning of floating ice shelves resulting from surface‐melt‐driven hydrofracture or incursion of relatively warm water into subshelf ocean cavities. The resulting melting, weakening, and potential ice‐shelf collapse reduces shelf buttressing effects. Upstream ice flow accelerates, causing thinning, grounding‐line retreat, and potential ice sheet collapse. While high‐resolution projections have been performed for localized Antarctic regions, full‐continent simulations have typically been limited to low‐resolution models. Here we quantify the vulnerability of the entire present‐day AIS to regional ice‐shelf collapse on millennial timescales treating relevant ice flow dynamics at the necessary ∼1km resolution. Collapse of any of the ice shelves dynamically connected to the West Antarctic Ice Sheet (WAIS) is sufficient to trigger ice sheet collapse in marine‐grounded portions of the WAIS. Vulnerability elsewhere appears limited to localized responses.
Plain Language Summary:
The biggest uncertainty in near-future sea level rise (SLR) comes from the Antarctic Ice Sheet. Antarctic ice flows in relatively fast-moving ice streams. At the ocean, ice flows into enormous floating ice shelves which push back on their feeder ice streams, buttressing them and slowing their flow. Melting and loss of ice shelves due to climate changes can result in faster-flowing, thinning and retreating ice, leading to accelerated rates of global sea level rise. To learn where Antarctica is vulnerable to ice-shelf loss, we divided it into 14 sectors, applied extreme melting to each sector's floating ice shelves in turn, then ran our ice flow model 1000 years into the future for each case. We found three levels of vulnerability. The greatest vulnerability came from attacking any of the three ice shelves connected to West Antarctica, where much of the ice sits on bedrock lying below sea level. Those dramatic responses contributed around 2 m of sea level rise. The second level came from four other sectors, each with a contribution between 0.5–1 m. The remaining sectors produced little to no contribution. We examined combinations of sectors, determining that sectors behave independently of each other for at least a century.
Chris Kavouklis, Phillip Colella, "Computation of volume potentials on structured grids with the method of local corrections", Communications in Applied Mathematics and Computational Science, October 31, 2018, 14:1-32, doi: 10.2140/camcos.2019.14.1
Hang Deng, Sergi Molins, David Trebotich, Carl Steefel, Donald DePaolo, "Pore-scale numerical investigation of the impacts of surface roughness: Up-scaling of reaction rates in rough fractures", Geochimica et Cosmochimica Acta, October 15, 2018, 239:374-389, doi: 10.1016/j.gca.2018.08.005
Blake Barker, Rose Nguyen, Björn Sandsted, Nathaniel Ventura, Colin Wahl, "Computing Evans functions numerically via boundary-value problems", Physica D: Nonlinear Phenomena, March 15, 2018, 367:1-10, doi: https://doi.org/10.1016/j.physd.2017.12.002
M. S. Waibel, C. L. Hulbe, C. S. Jackson, D. F. Martin, "Rate of Mass Loss Across the Instability Threshold for Thwaites Glacier Determines Rate of Mass Loss for Entire Basin", Geophysical Research Letters, February 19, 2018, 45:809-816, doi: 10.1002/2017GL076470
Nishant Nangia, Hans Johansen, Neelesh A. Patankar, Amneet Pal Singh Bhalla, "A moving control volume approach to computing hydrodynamic forces and torques on immersed bodies", Journal of Computational Physics, June 29, 2017, doi: 10.1016/j.jcp.2017.06.047
Dharshi Devendran, Daniel T. Graves, Hans Johansen, Terry Ligocki, "A Fourth Order Cartesian Grid Embedded Boundary Method for Poisson's Equation", Communications in Applied Mathematics and Computational Science, edited by Silvio Levy, May 12, 2017, 12:51-79, doi: 10.2140/camcos.2017.12.51
Sergi Molins, David Trebotich, Gregory H. Miller, Carl I. Steefel, "Mineralogical and transport controls on the evolution of porous media texture using direct numerical simulation", Water Resources Research, April 7, 2017, doi: 10.1002/2016WR020323
Protonu Basu, Samuel Williams, Brian Van Straalen, Leonid Oliker, Phillip Colella, Mary Hall, "Compiler-Based Code Generation and Autotuning for Geometric Multigrid on GPU-Accelerated Supercomputers", Parallel Computing (PARCO), April 2017, doi: 10.1016/j.parco.2017.04.002
Saverio E Spagnolie, Colin Wahl, Joseph Lukasik, Jean-Luc Thiffeault, "Microorganism billiards", Physica D: Nonlinear Phenomena, February 15, 2017, 341:33 - 44, doi: https://doi.org/10.1016/j.physd.2016.09.010
Jared O. Ferguson, Christiane Jablonowski, Hans Johansen, Peter McCorquodale, Phillip Colella, Paul A. Ullrich, "Analyzing the adaptive mesh refinement (AMR) characteristics of a high-order 2D cubed-sphere shallow-water model", Mon. Wea. Rev., November 9, 2016, 144:4641–4666, doi: 10.1175/MWR-D-16-0197.1
S.L. Cornford, D.F.Martin, V. Lee, A.J. Payne, E.G. Ng, "Adaptive mesh refinement versus subgrid friction interpolation in simulations of Antarctic ice dynamics", Annals of Glaciology, September 2016, 57 (73), doi: 10.1017/aog.2016.13
Boris Lo, Victor Minden, Phillip Colella, "A real-space Green's function method for the numerical solution of Maxwell's equations", Communications in Applied Mathematics and Computational Science, August 11, 2016, 11.2:143-170, doi: 10.2140/camcos.2016.11.143
Xylar S. Asay-Davis, Stephen L. Cornford, Gaël Durand, Benjamin K. Galton-Fenzi, Rupert M. Gladstone, G. Hilmar Gudmundsson, Tore Hattermann, David M. Holland, Denise Holland, Paul R. Holland, Daniel F. Martin, Pierre Mathiot, Frank Pattyn, Hélène Seroussi, "Experimental design for three interrelated marine ice sheet and ocean model intercomparison projects: MISMIP v. 3 (MISMIP +), ISOMIP v. 2 (ISOMIP +) and MISOMIP v. 1 (MISOMIP1)", Geoscientific Model Development, July 2016, 9(7), doi: 10.5194/gmd-9-2471-2016
Andrew Myers, Phillip Colella, Brian Van Straalen, "A 4th-Order Particle-in-Cell Method with Phase-Space Remapping for the Vlasov-Poisson Equation", submitted to SISC, February 1, 2016,
Andrew Myers, Phillip Colella, Brian Van Straalen, "The Convergence of Particle-in-Cell Schemes for Cosmological Dark Matter Simulations", The Astrophysical Journal, Volume 816, Issue 2, article id. 56, 2016,
Stephen M. Guzik, Xinfeng Gao, Landon D. Owen, Peter McCorquodale, Phillip Colella, "A freestream-preserving fourth-order finite-volume method in mapped coordinates with adaptive-mesh refinement", Computers & Fluids, December 21, 2015, 123:202–217, doi: 10.1016/j.compfluid.2015.10.001
S. L. Cornford, D. F. Martin, A. J. Payne, E. G. Ng, A. M. Le Brocq, R. M. Gladstone, T. L. Edwards, S. R. Shannon, C. Agosta, M. R. van den Broeke, H. H. Hellmer, G. Krinner, S. R. M. Ligtenberg, R. Timmermann, D. G. Vaughan, "Century-scale simulations of the response of the West Antarctic Ice Sheet to a warming climate", The Cryosphere, August 18, 2015, doi: 10.5194/tc-9-1579-2015, 2015
P. McCorquodale, P.A. Ullrich, H. Johansen, P. Colella, "An adaptive multiblock high-order finite-volume method for solving the shallow-water equations on the sphere", Comm. App. Math. and Comp. Sci., 2015, 10:121-162, doi: 10.2140/camcos.2015.10.121
P. McCorquodale, M.R. Dorr, J.A.F. Hittinger, P. Colella, "High-order finite-volume methods for hyperbolic conservation laws on mapped multiblock grids", J. Comput. Phys., May 1, 2015, 288:181-195, doi: 10.1016/j.jcp.2015.01.006
D. Devendran, D. T. Graves, H. Johansen, "A higher-order finite-volume discretization method for Poisson's equation in cut cell geometries", submitted to SIAM Journal on Scientific Computing (preprint on arxiv), 2015,
David Trebotich, Daniel T. Graves, "An Adaptive Finite Volume Method for the Incompressible Navier-Stokes Equations in Complex Geometries", Communications in Applied Mathematics and Computational Science, January 15, 2015, 10-1:43-82, doi: 10.2140/camcos.2015.10.43
Download File: camcos-v10-n1-p03-s3.pdf (pdf: 9.1 MB)
A Chien, P Balaji, P Beckman, N Dun, A Fang, H Fujita, K Iskra, Z Rubenstein, Z Zheng, R Schreiber, others, "Versioned Distributed Arrays for Resilience in Scientific Applications: Global View Resilience", Journal of Computational Science, 2015,
David Trebotich, Mark F. Adams, Sergi Molins, Carl I. Steefel, Chaopeng Shen, "High-Resolution Simulation of Pore-Scale Reactive Transport Processes Associated with Carbon Sequestration", Computing in Science and Engineering, December 2014, 16:22-31, doi: 10.1109/MCSE.2014.77
Download File: CISE-16-06-Trebotichappeared.pdf (pdf: 2.7 MB)
Sergi Molins, David Trebotich, Li Yang, Jonathan B. Ajo-Franklin, Terry J. Ligocki, Chaopeng Shen and Carl Steefel, "Pore-Scale Controls on Calcite Dissolution Rates from Flow-through Laboratory and Numerical Experiments", Environmental Science and Technology, May 27, 2014, 48:7453-7460, doi: 10.1021/es5013438
Download File: MolinsETALappearedonline2014-06-09.pdf (pdf: 2.4 MB)
S.M. Guzik, T.H. Weisgraber, P. Colella, B.J. Alder, "Interpolation Methods and the Accuracy of Lattice-Boltzmann Mesh Refinement", Journal of Computational Physics, February 15, 2014, 259:461-487, doi: 10.1016/j.jcp.2013.11.037
Anshu Dubey, Ann Almgren, John Bell, Martin Berzins, Steve Brandt, Greg Bryan, Phillip Colella, Daniel Graves, Michael Lijewski, Frank Löffler, others, "A survey of high level frameworks in block-structured adaptive mesh refinement packages", Journal of Parallel and Distributed Computing, 2014, 74:3217-3227, doi: 10.1016/j.jpdc.2014.07.001
Daniel T. Graves, Phillip Colella, David Modiano, Jeffrey Johnson, Bjorn Sjogreen, Xinfeng Gao, "A Cartesian Grid Embedded Boundary Method for the Compressible Navier Stokes Equations", Communications in Applied Mathematics and Computational Science, December 23, 2013,
Download File: gravesetal.pdf (pdf: 964 KB)
In this paper, we present an unsplit method for the time-dependent compressible Navier-Stokes equations in two and three dimensions. We use a conservative, second-order Godunov algorithm. We use a Cartesian grid, embedded boundary method to resolve complex boundaries. We solve for viscous and conductive terms with a second-order semi-implicit algorithm. We demonstrate second-order accuracy in solutions of smooth problems in smooth geometries and demonstrate robust behavior for strongly discontinuous initial conditions in complex geometries.
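As a concrete point of reference for the "second-order semi-implicit algorithm" for the viscous and conductive terms mentioned above, the following minimal Python sketch advances the 1D heat equation u_t = nu*u_xx with Crank-Nicolson steps. It is a generic textbook scheme under assumed homogeneous Dirichlet boundaries, not the paper's embedded-boundary implementation; the function and parameter names are illustrative.

import numpy as np

def crank_nicolson_step(u, nu, dx, dt):
    # One second-order semi-implicit (Crank-Nicolson) step for u_t = nu * u_xx
    # at interior points, with homogeneous Dirichlet boundary conditions.
    n = u.size
    r = nu * dt / (2.0 * dx ** 2)
    # Tridiagonal Laplacian assembled densely for clarity; a production code
    # would use a banded or multigrid solver instead.
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    A = np.eye(n) - r * L  # implicit half of the diffusion operator
    B = np.eye(n) + r * L  # explicit half
    return np.linalg.solve(A, B @ u)

# Usage: diffuse a narrow Gaussian pulse for ten steps.
x = np.linspace(0.0, 1.0, 101)[1:-1]
u = np.exp(-200.0 * (x - 0.5) ** 2)
for _ in range(10):
    u = crank_nicolson_step(u, nu=0.01, dx=x[1] - x[0], dt=1.0e-3)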
C. Steefel, S. Molins, D. Trebotich, "Pore scale processes associated with subsurface CO2 injection and sequestration", Reviews in Mineralogy and Geochemistry, November 1, 2013,
Download File: SteefelMolinsTrebotich2013.pdf (pdf: 5.9 MB)
Frank Pattyn, Laura Perichon, Gaël Durand, Lionel Favier, Olivier Gagliardini, Richard C.A. Hindmarsh, Thomas Zwinger, Torsten Albrecht, Stephen Cornford, David Docquier, Johannes J. Fürst, Daniel Goldberg, G. Hilmar Gudmundsson, Angelika Humbert, Moritz Hütten, Philippe Huybrechts, Guillaume Jouvet, Thomas Kleiner, Eric Larour, Daniel Martin, Mathieu Morlighem, Anthony J. Payne, David Pollard, Martin Rückamp, Oleg Rybak, Hélène Seroussi, Malte Thoma, Nina Wilkens, "Grounding-line migration in plan-view marine ice-sheet models: results of the ice2sea MISMIP3d intercomparison", Journal of Glaciology, 2013, 59:410-422, doi: 10.3189/2013JoG12J129
S.L. Cornford, D.F. Martin, D.T. Graves, D.F. Ranken, A.M. Le Brocq, R.M. Gladstone, A.J. Payne, E.G. Ng, W.H. Lipscomb, "Adaptive mesh, finite volume modeling of marine ice sheets", Journal of Computational Physics, 232(1):529-549, 2013,
Download File: cornfordmartinJCP2012.pdf (pdf: 1 MB)
A. Dubey, B. Van Straalen, "Experiences from software engineering of large scale AMR multiphysics code frameworks", arXiv preprint arXiv:1309.1781, January 1, 2013, doi: 10.5334/jors.am
B. Kallemov, G. H. Miller, S. Mitran and D. Trebotich, "Calculation of Viscoelastic Bead-Rod Flow Mediated by a Homogenized Kinetic Scale with Holonomic Constraints", Molecular Simulation, 2012, doi: 10.1080/08927022.2011.654206
Download File: molsim2012.pdf (pdf: 204 KB)
G. H. Miller and D. Trebotich, "An Embedded Boundary Method for the Navier-Stokes Equations on a Time-Dependent Domain", Communications in Applied Mathematics and Computational Science, 7(1):1-31, 2012,
Download File: camcos-v7-n1-p01-p.pdf (pdf: 1.2 MB)
S. Molins, D. Trebotich, C. I. Steefel and C. Shen, "An Investigation of the Effect of Pore Scale Flow on Average Geochemical Reaction Rates Using Direct Numerical Simulation", Water Resour. Res., 48(3) W03527, 2012, doi: 10.1029/2011WR011404
Download File: wrcr13375.pdf (pdf: 1.4 MB)
P.S. Li, D.F. Martin, R.I. Klein, and C.F. McKee, "A Stable, Accurate Methodology for High Mach Number, Strong Magnetic Field MHD Turbulence with Adaptive Mesh Refinement: Resolution and Refinement Studies", The Astrophysical Journal Supplement Series, 2012,
Download File: LiMartinKleinMcKee.pdf (pdf: 1.5 MB)
B. Wang, G.H. Miller, and P. Colella, "A Particle-in-Cell Method with Adaptive Phase-Space Remapping for Kinetic Plasmas", SIAM J. Sci. Comput, 2011,
F. Miniati and D.F. Martin, "Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM", The Astrophysical Journal Supplement Series, 195(1):5, 2011,
B. Kallemov, G. H. Miller, S. Mitran and D. Trebotich, "Multiscale Rheology: New Results for the Kinetic Scale", NSTI-Nanotech 2011, Vol. 2, pp. 575-578 (2011), 2011,
Download File: Nanotech2011.pdf (pdf: 203 KB)
B. Kallemov, G. H. Miller and D. Trebotich, "A Higher-Order Accurate Fluid-Particle Algorithm for Polymer Flows", Molecular Simulation, 37(8):738-745 (2011), 2011,
Download File: MolSimPaper2011.pdf (pdf: 175 KB)
P. McCorquodale and P. Colella, "A High-Order Finite-Volume Method for Conservation Laws on Locally Refined Grids", Communications in Applied Mathematics and Computational Science, Vol. 6 (2011), No. 1, 1-25, 2011,
P. Colella, M.R. Dorr, J.A.F. Hittinger, and D.F. Martin, "High-Order Finite-Volume Methods in Mapped Coordinates", Journal of Computational Physics, 230(8):2952-2976 (2011), 2011,
Download File: HOFiniteVolume2010.pdf (pdf: 1.1 MB)
Prateek Sharma, Phillip Colella, and Daniel F. Martin, "Numerical Implementation of Streaming Down the Gradient: Application to Fluid Modeling of Cosmic Rays", SIAM Journal on Scientific Computing , Vol 32(6), 3564-3583, 2010,
Download File: crstrsiamPS.pdf (pdf: 1.1 MB)
Q. Zhang, H. Johansen and P. Colella, "A Fourth-Order Accurate Finite-Volume Method with Structured Adaptive Mesh Refinement for Solving the Advection-Diffusion Equation", SIAM Journal on Scientific Computing, Vol. 34, No. 2 (2012), B179, doi: 10.1137/110820105, 2010,
Download File: O4AdvDiff.pdf (pdf: 599 KB)
R.K. Crockett, P. Colella, and D.T. Graves, "A Cartesian Grid Embedded Boundary Method for Solving the Poisson and Heat Equations with Discontinuous Coefficients in Three Dimensions", Journal of Computational Physics , 230(7):2451-2469, 2010,
Download File: YJCPH3372.pdf (pdf: 1008 KB)
Caroline Gatti-Bono, Phillip Colella and David Trebotich, "A Second-Order Accurate Conservative Front-Tracking Method in One Dimension", SIAM J. Sci. Comput., 31(6):4795-4813, 2010,
Download File: BonoSISC.pdf (pdf: 414 KB)
A. Nonaka, D. Trebotich, G. H. Miller, D. T. Graves, and P. Colella, "A Higher-Order Upwind Method for Viscoelastic Flow", Comm. App. Math. and Comp. Sci., 4(1):57-83, 2009,
Download File: nonakaetal.pdf (pdf: 709 KB)
B. Kallemov, G. H. Miller and D. Trebotich, "A Duhamel Approach for the Langevin Equations with Holonomic Constraints", Molecular Simulation, 35(6):440-447, 2009,
Download File: MolSim.pdf (pdf: 186 KB)
D. Trebotich, G. H. Miller and M. D. Bybee, "A Penalty Method to Model Particle Interactions in DNA-laden Flows", J. Nanosci. Nanotechnol., 8(7):3749-3756, 2008,
Download File: JNNpreprint.pdf (pdf: 269 KB)
Colella, P. and Sekora, M. D., "A Limiter for PPM that Preserves Accuracy at Smooth Extrema", Submitted to Journal of Computational Physics, 2008,
Download File: ColellaSekora.pdf (pdf: 111 KB)
Martin, D.F., Colella, P., and Graves, D.T., "A Cell-Centered Adaptive Projection Method for the Incompressible Navier-Stokes Equations in Three Dimensions", Journal of Computational Physics Vol 227 (2008) pp. 1863-1886., 2008, LBNL 62025,
Download File: martinColellaGraves2008.pdf (pdf: 3.1 MB)
D. T. Graves, D Trebotich, G. H. Miller, P. Colella, "An Efficient Solver for the Equations of Resistive MHD with Spatially-Varying Resistivity", Journal of Computational Physics Vol 227 (2008) pp.4797-4804., 2008,
Download File: gravesTrebMillerColella2008.pdf (pdf: 155 KB)
Miniati, F. and Colella, P., "A Modified higher-Order Godunov's Scheme for Stiff Source Conservative Hydrodynamics", Journal of Computational Physics Vol. 224 (2007), pp. 519-538., 2008, LBNL 59902,
Download File: JCPMiniatiColellaJun2007.pdf (pdf: 585 KB)
Katherine Yelick, Paul Hilfinger, Susan Graham, Dan Bonachea, Jimmy Su, Amir Kamil, Kaushik Datta, Phillip Colella, and Tong Wen, "Parallel Languages and Compilers: Perspective from the Titanium Experience", The International Journal Of High Performance Computing Applications, August 1, 2007, 21(3):266-290, doi: 10.1177/1094342007078449
We describe the rationale behind the design of key features of Titanium—an explicitly parallel dialect of Java™ for high-performance scientific programming—and our experiences in building applications with the language. Specifically, we address Titanium's Partitioned Global Address Space model, SPMD parallelism support, multi-dimensional arrays and array-index calculus, memory management, immutable classes (class-like types that are value types rather than reference types), operator overloading, and generic programming. We provide an overview of the Titanium compiler implementation, covering various parallel analyses and optimizations, Titanium runtime technology and the GASNet network communication layer. We summarize results and lessons learned from implementing the NAS parallel benchmarks, elliptic and hyperbolic solvers using Adaptive Mesh Refinement, and several applications of the Immersed Boundary method.
D. Trebotich, "Toward a Solution to the High Weissenberg Number Problem", Proc. Appl. Math. Mech. 7(1):2100073-2100074, 2007,
Download File: HWNPPAMM2007.pdf (pdf: 160 KB)
D. Trebotich, "Simulation of Biological Flow and Transport in Complex Geometries using Embedded Boundary / Volume-of-Fluid Methods", Journal of Physics: Conference Series 78 (2007) 012076, 2007,
Download File: SciDAC2007.pdf (pdf: 453 KB)
G. H. Miller and D. Trebotich, "Toward a Mesoscale Model for the Dynamics of Polymer Solutions", J. Comput. Theoret. Nanosci. 4(4):797-801, 2007,
Download File: JCTNpreprint.pdf (pdf: 271 KB)
D. Trebotich, G. H. Miller and M. D. Bybee, "A Hard Constraint Algorithm to Model Particle Interactions in DNA-laden Flows", Nanoscale and Microscale Thermophysical Engineering 11(1):121-128, 2007,
Download File: NMTE.pdf (pdf: 132 KB)
Miniati, F. and Colella, P., "Block Structured Adaptive Mesh and Time Refinement for Hybrid, Hyperbolic + N-body Systems", Journal of Computational Physics Vol. 227 (2007), pp. 400-430., 2007,
Download File: JCPMiniatiColellaNov2007.pdf (pdf: 1.1 MB)
McCorquodale, P., Colella, P., Balls, G.T., and Baden, S.B., "A Local Corrections Algorithm for Solving Poisson's Equation in Three Dimensions", Communications in Applied Mathematics and Computational Science Vol. 2, No. 1 (2007), pp. 57-81., 2007,
Gatti-Bono, C., Colella, P., "An Anelastic Allspeed Projection Method for Gravitationally Stratified Flows", J. Comput. Phys. Vol. 216 (2006), pp. 589-615, 2006, LBNL 57158,
Download File: B114.pdf (pdf: 632 KB)
Colella, P., Graves, D.T., Keen, B.J., Modiano, D., "A Cartesian Grid Embedded Boundary Method for Hyperbolic Conservation Laws", Journal of Computational Physics. Vol. 211 (2006), pp. 347-366., 2006, LBNL 56420,
D. Trebotich, "Modeling Complex Biological Flows in Multi-Scale Systems Using the APDEC Framework", Journal of Physics: Conference Series 46 (2006) 316-321., 2006,
Martin, D.F., Colella, P., Anghel, M., Alexander, F., "Adaptive Mesh Refinement for Multiscale Nonequilibrium Physics", Computing in Science and Engineering Vol.7 N.3 (2005), pp. 24-31, 2005,
Barad, M., Colella, P., "A Fourth-Order Accurate Local Refinement Method for Poisson's Equation", J. Comput. Phys. Vol.209 (2005), pp. 1-18, 2005,
Trebotich, D., Colella, P., Miller, G.H., "A Stable and Convergent Scheme for Viscoelastic Flow in Contraction Channels", J. Comput. Phys. Vol.205 (2005), pp. 315-342, 2005,
Crockett, R.K., Colella, P., Fisher, R., Klein, R.I., McKee, C.F., "An Unsplit, Cell-Centered Godunov Method for Ideal MHD", J. Comput. Phys. Vol.203 (2005), pp. 422-448, 2005,
Vay, J.L., Colella, P., Friedman, A., Grote, D.P., McCorquodale, P., Serafini, D.B., "Implementations of Mesh Refinement Schemes for Particle-in-Cell Plasma Simulations", Computer Physics Communications Vol.164 (2004), pp. 297-305, 2004,
Samtaney, R., Colella, P., Jardin, S.C., Martin, D.F., "3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks", Computer Physics Communications Vol.164 (2004), pp. 220-228, 2004,
Vay, J.L., Colella, P., Kwan, J.W., McCorquodale, P., Serafini, D.B., Friedman, A., Grote, D.P., Westenskow, G., Adam, J.C., "Application of Adaptive Mesh Refinement to Particle-in-Cell Simulations of Plasmas and Beams", Physics of Plasmas Vol.11 (2004), pp. 2928-2934, 2004,
Miller, G.H., "Minimal Rotationally Invariant Bases for Hyperelasticity", SIAM J. Appl. Math Vol.64, No. 6 (2004), pp. 2050-2075, 2004,
Download File: Miller-2004.pdf (pdf: 289 KB)
McCorquodale, P., Colella, P., Grote, D., Vay, J.L., "A Node-Centered Local Refinement Algorithm for Poisson's Equation in Complex Geometries", J. Comput. Phys. Vol.201 (2004), pp. 34-60, 2004,
Miller, G.H., "An Iterative Riemann Solver for Systems of Hyperbolic Conservation Laws, with Application to Hyperelastic Solid Mechanics", J. Comp. Physics Vol.193 (2003), pp. 198-225, 2003,
Miller, G.H., Colella, P., "A Conservative Three-Dimensional Eulerian Method for Coupled Solid-Fluid Shock Capturing", J. Comput. Phys. Vol.183 (2002), pp. 26, 2002,
Balls, G., Colella, P., "A Finite Difference Domain Decomposition Method Using Local Corrections for the Solution of Poissons's Equation", J. Comput. Phys. Vol.180 (2002), pp. 25, 2002,
Vay, J.L., Colella, P., McCorquodale, P., Van Straalen, B., Friedman, A., Grote, D.P., "Mesh Refinement for Particle-in-Cell Plasma Simulations: Applications to and Benefits for Heavy Ion Fusion", Laser and Particle Beams. Vol.20 N.4 (2002), pp. 569-575, 2002,
McCorquodale, P., Colella, P., Johansen, H., "A Cartesian Grid Embedded Boundary Method for the Heat Equation on Irregular Domains", J. Comput. Phys. Vol.173 (2001), pp. 620-635, 2001,
Miller, G.H., Colella, P., "A Higher-Order Godunov Method for Elastic-Plastic Flow in Solids", J. Comput. Phys. Vol.167 (2001), pp. 131, 2001,
Trebotich, D., P. Colella, P., "A Projection Method for Incompressible Viscous Flow on Moving Quadrilateral Grids", J. Comput. Phys. Vol.166 (2001), pp. 191-217, 2001,
Propp, R., Colella, P., Crutchfield, W.Y., Day, M.S., "A Numerical Model for Trickle-Bed Reactors", J. Comput. Phys., 2000, 165:311-333,
Martin D., Colella, P., "A Cell-Centered Adaptive Projection Method for the Incompressible Euler Equations", J. Comput. Phys. Vol.163 (2000), pp. 271-312, 2000,
Download File: A144.pdf (pdf: 1.6 MB)
Colella, P., Dorr, M.R., Wake, D.D., "A Conservative Finite Difference Method for the Numerical Solution of Plasma Fluid Equations", J. Comput. Phys. Vol.149 (1999), pp. 168-193, 1999,
Colella, P., Dorr, M.R., Wake, D.D., "Numerical Solution of Plasma Fluid Equations Using Locally Refined Grids", J. Comput. Phys. Vol.152 (1999), pp. 550-583, 1999,
Colella, P., Pao, K., "A Projection Method for Low Speed Flows", J. Comput. Phys. Vol.149 (1999), pp. 245-269, 1999,
Howell, L.H., Pember, R.B., Colella, P., Fiveland, W.A., Jessee, J.P., "A Conservative Adaptive-Mesh Algorithm for Unsteady, Combined-Mode Heat Transfer Using the Discrete Ordinates Method", Numerical Heat Transfer, Part B: Fundamentals , Vol.35, (1999), pp. 407-430, 1999,
Colella, P., Trebotich, D., "Numerical Simulation of Incompressible Viscous flow in Deforming Domains", Proceedings of the National Academy of Sciences of the United States of America Vol.96, (1999), pp. 5378-5381, 1999,
Sussman, M.M., Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H., Welcome, M., "An Adaptive Level Set Approach for Incompressible Two-Phase Flows", J. Comp. Phys. Vol.148, pp. 81-124, 1999, LBNL 40327,
Download File: paper.ps.gz (gz: 577 KB)
Johansen, H., Colella, P., "A Cartesian Grid Embedded Boundary Method for Poisson's Equation on Irregular Domains", J. Comput. Physics, Vol.147, No.1, pp. 60-85, November 1998,
Vegas-Landeau, M.A., Propp, R., Patzek, T.W., Colella, P., "A Sequential Semi-Implicit Algorithm for Computing Discontinuous Flows in Porous Media", SPE Journal, June 1998,
Jessee, J.P., Fiveland, W.A., Howell, L.H., Colella, P., Pember, R., "An Adaptive Mesh Refinement Algorithm for the Radiative Transport Equation", J. Comput. Phys. Vol.139, (1998), pp. 380-398, 1998,
Pember, R.P., Howell, L.H., Bell, J.B., Colella, P., Crutchfield, W.Y., Fiveland, W.A., Jessee, J.P., "An Adaptive Projection Method For Unsteady Low-Mach Number Combustion", Comb. Sci. Tech., 1998, 140:123-168,
Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H., Welcome, M.L., "A Conservative Adaptive Projection Method for the Variable Density Incompressible Navier-Stokes Equations", J. Comp. Phys., 1998, 142:1-46, LBNL 39075,
Helmsen, J., Colella, P., Puckett, E.G., "Non-Convex Profile Evolution in Two Dimensions Using Volume of Fluids", 1997, LBNL 40693,
Almgren, A.S., Bell, J.B., Colella, P., Marthaler, T., "A Cartesian Grid Projection Method for the Incompressible Euler Equations in Complex Geometries", SIAM J. Sci. Comp., 1997, 18(5):1289-1309,
Pember, R.B., Bell, J.B., Colella, P., Crutchfield, W.Y., Welcome, M.L., "An Adaptive Cartesian Grid Method for Unsteady Compressible Flow in Irregular Regions", J. Comp. Phys., 1995, 120(2):278-304,
Hilditch, J., Colella, P., "A Front Tracking Method for Compressible Flames in One Dimension", SIAM Journal on Scientific Computing, Vol.16 No.4 (1995), pp. 755-772, 1995,
Chien, K.Y., Ferguson, R.E., Kuhl, A.L., Glaz, H.M., Colella, P., "Inviscid Dynamics of Two-Dimensional Shear Layers", International Journal of Computational Fluid Dynamics, Vol.5 No.1-2, pp. 59+, 1995,
Collins, J.P., Colella, P., Glaz, H.M., "An Implicit-Explicit Eulerian Godunov Scheme for Compressible Flow", J. Comp. Phys., Vol.116 No.2, pp. 195-211, 1995,
Almgren, A.S., Buttke, T., Colella, P., "A Fast Adaptive Vortex Method In Three Dimensions", J. Comp. Phys., Vol.113 No.2, pp. 177-200, 1994,
Zachary, A.L., Malagoli, A., Colella, P., "A Higher-Order Godunov Method for Multidimensional Ideal Magnetohydrodynamics", SIAM Journal on Scientific Computing, Vol.15 No.2, pp. 263-284, 1994,
Liu, J.C., Colella, P., Peterson, P.F., Schrock, V.E., "Modeling Supersonic Flows Through a Gas-Continuous 2-Fluid Medium", Nuclear Engineering and Design, Vol.146 No.1-3, pp. 337-348, 1994,
Klein, R.I., McKee, C.F., Colella, P., "On the Hydrodynamic Interaction of Shock Waves with Interstellar Clouds .1. Nonradiative Shocks in Small Clouds", Astrophysical Journal, Vol.420 No.1, pp. 213-236, 1994,
Download File: A126.pdf (pdf: 3 MB)
Chen, X.M., Schrock, V.E., Peterson, P.F., Colella, P., "Gas Dynamics in the Central Cavity of the Hylife-II Reactor", Fusion Technology, Vol.21 No.3, pp. 1520-1524., 1992,
Zachary, A.L., Colella, P., "A Higher-Order Godunov Method for the Equations of Ideal Magnetohydrodynamics", Journal of Computational Physics, Vol.99 No.2, pp. 341-347., 1992,
Henderson L.F., Colella P., Puckett E.G., "On the Refraction of Shock Waves at a Slow Fast Gas Interface", Journal of Fluid Mechanics, Vol.224 MAR:1+., 1991,
Download File: A211991.pdf (pdf: 1.7 MB)
Trangenstein J.A., Colella P., "A Higher-Order Godunov Method for Modeling Finite Deformation in Elastic-Plastic Solids", Communications on Pure and Applied Mathematics, Vol.44 No.1, pp. 41-100, 1991,
Download File: A1201991.pdf (pdf: 2.1 MB)
Kuhl, A.L., Ferguson, R.E., Chien, K.Y., Glowacki, W., Collins, P., Glaz, H., Colella, P., "Turbulent Wall Jet in a Mach Reflection Flow", Progress in Aeronautics and Astronautics, Vol.13, pp. 201-232, 1990,
Colella, P., Henderson, L.F., "The von Neumann Paradox for the Diffraction of Weak Shock Waves", Journal of Fluid Mechanics, Vol.213, pp. 71-94., 1990,
Colella, P., "Multidimensional Upwind Methods for Hyperbolic Conservation Laws", J. Comp. Phys., Vol.87 No.1, pp. 171-200, 1990,
Bell, J.B., Colella, P., Glaz, H.M., "A Second-Order Projection Method for the Incompressible Navier Stokes Equations", J. Comp. Phys., Vol.85 No.2, pp. 257-283, 1989,
Bell, J.B., Colella, P., Trangenstein, J.A., "Higher Order Godunov Methods for General Systems of Hyperbolic Conservation Laws", Journal of Computational Physics, Vol.82 No.2, pp. 362-397, 1989,
Berger M.J., Colella P., "Local Adaptive Mesh Refinement for Shock Hydrodynamics", J. Comp. Phys., Vol.82 No.1, pp. 64-84, 1989,
Sakurai, A., Henderson, L.F., Takayama, K., Walenta, Z., Colella P., "On the Von Neumann Paradox of Weak Mach Reflection", Fluid Dynamics Research, Vol.4, pp. 333-346, 1989,
Download File: A1161989.pdf (pdf: 848 KB)
Glaz, H.M., Colella, P., Collins, J.P., Ferguson, R.E., "Nonequilibrium Effects in Oblique Shock-Wave Reflection", AIAA Journal, Vol.26, pp. 698-705., 1988,
Hilfinger, P.N., Colella P., "FIDIL: A Language for Scientific Programming", Symbolic Computation: Applications to Scientific Computing, SIAM Frontiers in Applied Mathematics, Vol.5, pp. 97-138, 1988,
Download File: B231988.pdf (pdf: 2 MB)
Fryxell, B.A., Woodward, P.R., Colella, P., Winkler, K.H., "An Implicit-Explicit Hybrid Method for Lagrangian Hydrodynamics", Journal of Computational Physics, Vol.62, pp. 283-310., 1986,
Download File: A191986.pdf (pdf: 508 KB)
Glaz, H.M., Colella P., Glass, I.I., Deschambault, R.L., "Mach Reflection from an HE-Driven Blast Wave", Progress in Aeronautics and Astronautics, Vol.106, pp. 388-421., 1986,
Colella, P., Majda, A., Roytburd, V., "Theoretical and Numerical Structure for Reacting Shock Waves", SIAM Journal of Sci. Stat. Computing, Vol.7 No.4, pp. 1059-1080, 1986,
Colella, P., "A Direct Eulerian MUSCL Scheme for Gas Dynamics", SIAM Journal for Sci. Stat. Computing, Vol.6 No.1, pp. 104-117., 1985,
Download File: A16.pdf (pdf: 716 KB)
Colella, P., Glaz, H.M., "Efficient Solution Algorithms for the Riemann Problem for Real Gases", J. Comp. Phys., Vol.59 No.2, pp. 264-289., 1985,
Eidelman, S., Colella P., Shreeve, R.P., "Application of the Godunov Method and its Second-Order Extension to Cascade Flow Modeling", AIAA Journal, Vol.22 No.11 (1984), pp. 1609-1615, 1984,
Download File: A15.pdf (pdf: 1.1 MB)
Woodward, P.R., Colella, P., "The Numerical Simulation of Two-Dimensional Fluid Flow with Strong Shocks", J. Comp. Phys., Vol.54 No.1 (1984), pp. 115-173, 1984,
Download File: A31984.pdf (pdf: 3.4 MB)
Colella P., Woodward, P.R., "The Piecewise Parabolic Method (PPM) for Gas-Dynamical Simulations", J. Comp. Phys., Vol.54 No.1 (1984), pp. 174-201, 1984,
Colella P., "Glimm's Method for Gas Dynamics", SIAM Journal for Sci. Stat. Computing, Vol.3 No.1, pp. 76-110, 1982,
Colella P., Lanford, O.E., "Sample Field Behavior for the Free Markov Random Field", 1973,
Download File: B21.pdf (pdf: 903 KB)
Tuowen Zhao, Mary Hall, Samuel Williams, Hans Johansen, "Exploiting Reuse and Vectorization in Blocked Stencil Computations on CPUs and GPUs", Supercomputing (SC), November 2019,
Download File: SC19-VectorScatter-final.pdf (pdf: 1019 KB)
Tuowen Zhao, Samuel Williams, Mary Hall, Hans Johansen, "Delivering Performance Portable Stencil Computations on CPUs and GPUs Using Bricks", International Workshop on Performance, Portability and Productivity in HPC (P3HPC), November 2018,
Download File: p3hpc-bricks-final.pdf (pdf: 1.3 MB)
Tuowen Zhao, Mary Hall, Protonu Basu, Samuel Williams, Hans Johansen, "SIMD code generation for stencils on brick decompositions", Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), February 2018,
Bryce Adelstein Lelbach, Hans Johansen, Samuel Williams, "Simultaneously Solving Swarms of Small Sparse Systems on SIMD Silicon", Parallel and Distributed Scientific and Engineering Computing (PDSEC), June 2017,
Bin Dong, Suren Byna, Kesheng Wu, Prabhat, Hans Johansen, Jeffrey N. Johnson, and Noel Keen, "Data Elevator: Low-contention Data Movement in Hierarchical Storage System", The 23rd annual IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC) (Acceptance rate: 25%), December 19, 2016,
Download File: 201612-DataElevator-HiPC2016-Bin-Byna.pdf (pdf: 765 KB)
Anshu Dubey, Hajime Fujita, Daniel T. Graves, Andrew Chien, Devesh Tiwari, "Granularity and the Cost of Error Recovery in Resilient AMR Scientific Applications", SuperComputing 2016, August 10, 2016,
Dharshi Devendran, Suren Byna, Bin Dong, Brian van Straalen, Hans Johansen, Noel Keen, and Nagiza Samatova, "Collective I/O Optimizations for Adaptive Mesh Refinement Data Writes on Lustre File System", Cray User Group (CUG) 2016, May 10, 2016,
Xiaocheng Zou, David Boyuka, Dhara Desai, Martin, Suren Byna, Kesheng Wu, Kushal, Bin Dong, Wenzhao Zhang, Houjun Tang, Dharshi Devendran, David Trebotich, Scott, Hans Johansen, Nagiza Samatova, "AMR-aware In Situ Indexing and Scalable Querying", The 24th High Performance Computing Symposium (HPC), January 1, 2016,
Andrey Ovsyannikov, Melissa Romanus, Brian Van Straalen, Gunther H. Weber, David Trebotich, "Scientific Workflows at DataWarp-Speed: Accelerated Data-Intensive Science using NERSC's Burst Buffer", Proceedings of the 1st Joint International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems, IEEE Press, 2016, 1-6, doi: 10.1109/PDSW-DISCS.2016.005
Anshu Dubey, Daniel T. Graves, "A Design Proposal for a Next Generation Scientific Software Framework", EuroPar 2015, July 31, 2015,
Download File: framework.pdf (pdf: 774 KB)
Protonu Basu, Samuel Williams, Brian Van Straalen, Mary Hall, Leonid Oliker, Phillip Colella, "Compiler-Directed Transformation for Higher-Order Stencils", International Parallel and Distributed Processing Symposium (IPDPS), May 2015,
Download File: ipdps15CHiLL.pdf (pdf: 1.8 MB)
Xiaocheng Zou, Kesheng Wu, David A. Boyuka, Daniel F. Martin, Suren Byna, Houjun Tang, Kushal Bansal, Terry J. Ligocki, Hans Johansen, and Nagiza F. Samatova, "Parallel In Situ Detection of Connected Components in Adaptive Mesh Refinement Data", Proceedings of the Cluster, Cloud and Grid Computing (CCGrid) 2015, 2015,
J. Ferguson, C. Jablonowski, H. Johansen, R. English, P. McCorquodale, P. Colella, J. Benedict, W. Collins, J. Johnson, P. Ullrich, "Assessing Grid Refinement Strategies in the Chombo Adaptive Mesh Refinement Model", AGU Fall Meeting, San Francisco, CA, December 15, 2014,
Yu Jung Lo, Samuel Williams, Brian Van Straalen, Terry J. Ligocki, Matthew J. Cordery, Leonid Oliker, Mary W. Hall, "Roofline Model Toolkit: A Practical Tool for Architectural and Program Analysis", Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS), November 2014, doi: 10.1007/978-3-319-17248-4_7
Download File: PMBS14-Roofline.pdf (pdf: 340 KB)
Protonu Basu, Samuel Williams, Brian Van Straalen, Leonid Oliker, Mary Hall, "Converting Stencils to Accumulations for Communication-Avoiding Optimization in Geometric Multigrid", Workshop on Stencil Computations (WOSC), October 2014,
Download File: wosc14chill.pdf (pdf: 973 KB)
Gunther H. Weber, Hans Johansen, Daniel T. Graves, Terry J. Ligocki, "Simulating Urban Environments for Energy Analysis", Proceedings Visualization in Environmental Sciences (EnvirVis), 2014, LBNL 6652E,
Samuel Williams, Mike Lijewski, Ann Almgren, Brian Van Straalen, Erin Carson, Nicholas Knight, James Demmel, "s-step Krylov subspace methods as bottom solvers for geometric multigrid", Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, January 2014, 1149--1158, doi: 10.1109/IPDPS.2014.119
Download File: ipdps14cabicgstabfinal.pdf (pdf: 943 KB)
Download File: ipdps14CABiCGStabtalk.pdf (pdf: 944 KB)
Protonu Basu, Anand Venkat, Mary Hall, Samuel Williams, Brian Van Straalen, Leonid Oliker, "Compiler generation and autotuning of communication-avoiding operators for geometric multigrid", 20th International Conference on High Performance Computing (HiPC), December 2013, 452--461,
Download File: hipc13chill.pdf (pdf: 989 KB)
P. Basu, A. Venkat, M. Hall, S. Williams, B. Van Straalen, L. Oliker, "Compiler Generation and Autotuning of Communication-Avoiding Operators for Geometric Multigrid", Workshop on Stencil Computations (WOSC), 2013,
Christopher D. Krieger, Michelle Mills Strout, Catherine Olschanowsky, Andrew Stone, Stephen Guzik, Xinfeng Gao, Carlo Bertolli, Paul H.J. Kelly, Gihan Mudalige, Brian Van Straalen, Sam Williams, "Loop chaining: A programming abstraction for balancing locality and parallelism", Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2013 IEEE 27th International, May 2013, 375--384, doi: 10.1109/IPDPSW.2013.68
S. Williams, D. Kalamkar, A. Singh, A. Deshpande, B. Van Straalen, M. Smelyanskiy, A. Almgren, P. Dubey, J. Shalf, L. Oliker, "Optimization of Geometric Multigrid for Emerging Multi- and Manycore Processors", Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (SC), November 2012, doi: 10.1109/SC.2012.85
Download File: sc12-mg.pdf (pdf: 808 KB)
Download File: sc12mgtalk.pdf (pdf: 1.9 MB)
S. Guzik, P. McCorquodale, P. Colella, "A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement", 50th AIAA Aerospace Sciences Meeting Nashville, TN, 2012,
Download File: AIAAASM2012Guzik1140057v2.pdf (pdf: 1.3 MB)
Ushizima, D.M., Weber, G.H., Ajo-Franklin, J., Kim, Y., Macdowell, A., Morozov, D., Nico, P., Parkinson, D., Trebotich, D., Wan, J., and Bethel, E.W., "Analysis and visualization for multiscale control of geologic CO2", Journal of Physics: Conference Series, Proceedings of SciDAC 2011, Denver, CO, USA, July 2011,
Chaopeng Shen, David Trebotich, Sergi Molins, Daniel T. Graves, Brian Van Straalen, Terry Ligocki, Carl I. Steefel, "High performance computations of subsurface reactive transport processes at the pore scale", Proceedings of SciDAC, 2011,
Download File: SciDAC2011sim.pdf (pdf: 1.1 MB)
B. Van Straalen, P. Colella, D. T. Graves, N. Keen, "Petascale Block-Structured AMR Applications Without Distributed Meta-data", Euro-Par 2011 Parallel Processing - 17th International Conference, Euro-Par 2011, August 29 - September 2, 2011, Proceedings, Part II. Lecture Notes in Computer Science 6853 Springer 2011, ISBN 978-3-642-23396-8, Bordeaux, France, 2011,
Download File: EuroPar2011bvs.pdf (pdf: 400 KB)
Deines, E., Weber, G.H., Garth, C., Van Straalen, B., Borovikov, S., Martin, D.F., and Joy, K.I., "On the computation of integral curves in adaptive mesh refinement vector fields", Proceedings of Dagstuhl Seminar on Scientific Visualization 2009, Schloss Dagstuhl, 2011, 2:73-91, LBNL 4972E,
Download File: 7.pdf (pdf: 799 KB)
G. H. Weber, S. Ahern, E.W. Bethel, S. Borovikov, H.R. Childs, E. Deines, C. Garth, H. Hagen, B. Hamann, K.I. Joy, D. Martin, J. Meredith, Prabhat, D. Pugmire, O. Rübel, B. Van Straalen and K. Wu, "Recent Advances in VisIt: AMR Streamlines and Query-Driven Visualization", Numerical Modeling of Space Plasma Flows: Astronum-2009 (Astronomical Society of the Pacific Conference Series), 2010, 429:329-334, LBNL 3185E,
Download File: LBNL-3185E.pdf (pdf: 2.1 MB)
B. Kallemov, G. H. Miller and D. Trebotich, "Numerical Simulation of Polymer Flow in Microfluidic Devices", 009 Proceedings of the Fourth SIAM Conference on Mathematics for Industry (MI09) pp. 93-98, 2010,
Download File: siammi09011kallemovb.pdf (pdf: 862 KB)
R.H. Cohen, J. Compton, M. Dorr, J. Hittinger, W.M Nevins, T.D. Rognlien, Z.Q. Xu, P. Colella, and D. Martin, "Testing and Plans for the COGENT edge kinetic code", (abstract) submitted to Sherwood 2010, 2010,
Download File: Sherwood2010Abstract.pdf (pdf: 32 KB)
P. Colella, M. Dorr, J. Hittinger, D.F. Martin, and P. McCorquodale, "High-Order Finite-Volume Adaptive Methods on Locally Rectangular Grids", 2009 J. Phys.: Conf. Ser. 180 012010, 2009,
P. Colella, M. Dorr, J. Hittinger, P.W. McCorquodale, and D.F. Martin, "High-Order Finite-Volume Methods on Locally-Structured Grids", Numerical Modeling of Space Plasma Flows: Astronum 2008 -- Proceedings of the 3rd International Conference, June 8-13, 2008, St John, U.S. Virgin Islands, 2009, pp. 207-216, 2009,
Download File: Astronum2008MappedPaper.pdf (pdf: 1 MB)
B.V. Straalen, J. Shalf, T. Ligocki, N. Keen, and W. Yang, "Scalability Challenges for Massively Parallel AMR Applications", 23rd IEEE International Symposium on Parallel and Distributed Processing, 2009,
Download File: ipdps09finalcertified.pdf (pdf: 366 KB)
G.H. Weber, V. Beckner, H. Childs, T. Ligocki, M. Miller, B. van Straalen, E.W. Bethel, "Visualization of Scalar Adaptive Mesh Refinement Data", Numerical Modeling of Space Plasma Flows: Astronum-2007 (Astronomical Society of the Pacific Conference Series), April 2008, 385:309-320, LBNL 220E,
Download File: LBNL-220E.pdf (pdf: 1.5 MB)
P. Colella, D. Graves, T. Ligocki, D. Trebotich and B.V. Straalen, "Embedded Boundary Algorithms and Software for Partial Differential Equations", 2008 J. Phys.: Conf. Ser. 125 012084, 2008,
Download File: SciDAC2008-EBAlgor.pdf (pdf: 972 KB)
D. Trebotich, B.V. Straalen, D. Graves and P. Colella, "Performance of Embedded Boundary Methods for CFD with Complex Geometry", 2008 J. Phys.: Conf. Ser. 125 012083, 2008,
Download File: SciDAC2008-EBPerform.pdf (pdf: 167 KB)
D. Trebotich and G. H. Miller, "Simulation of Flow and Transport at the Micro (Pore) Scale", Proceedings of the 2nd International Conference on Porous Media and its Applications in Science and Engineering, ICPM2 June 17-21, Kauai, Hawaii, USA, 2007,
Download File: Kauaiporousmedia.pdf (pdf: 198 KB)
Phillip Colella, John Bell, Noel Keen, Terry Ligocki, Michael Lijewski, Brian van Straalen, "Performance and Scaling of Locally-Structured Grid Methods for Partial Differential Equations", presented at SciDAC 2007 Annual Meeting, 2007,
Download File: AMRPerformance.pdf (pdf: 386 KB)
Martin, D.F., Colella, P., and Keen, N., "An Incompressible Navier-Stokes with Particles Algorithm and Parallel Implementation", in A. Deane, G. Brenner, A. Ecer, D. Emerson, J. McDonough, J. Periaux, N. Satofuka, and D. Tromeur-Dervout (Eds.), Parallel Computational Fluid Dynamics: Theory and Applications, Proceedings of the 2005 International Conference on Parallel Computational Fluid Dynamics, May 24-27, College Park, MD, USA, Elsevier (2006), pp. 461-468, 2006, LBNL 58787,
Download File: MartinColellaKeen.pdf (pdf: 93 KB)
McCorquodale, P., Colella, P., Balls, G., Baden, S.B., "A Scalable Parallel Poisson Solver with Infinite-Domain Boundary Conditions", Proceedings of the 7th Workshop on High Performance Scientific and Engineering Computing, Oslo, Norway, June 2005,
Download File: HPSEC05.pdf (pdf: 141 KB)
Wen, T., Colella, P., "Adaptive Mesh Refinement in Titanium", Proceedings of the International Parallel and Distributed Processing Symposium, Denver, Colorado, April 2005,
Horst Simon, William Kramer, William Saphir, John Shalf, David Bailey, Leonid Oliker, Michael Banda, C. William McCurdy, John Hules, Andrew Canning, Marc Day, Philip Colella, David Serafini, Michael Wehner, Peter Nugent, "Science-Driven System Architecture: A New Process for Leadership Class Computing", Journal of the Earth Simulator, Volume 2., 2005, LBNL 56545,
Download File: JES-SDSA.pdf (pdf: 110 KB)
Trebotich, D., Colella, P., Miller, G.H., Nonaka, A., Marshall, T., Gulati, S., Liepmann, D., "A Numerical Algorithm for Complex Biological Flow in Irregular Microdevice Geometries", Technical Proceedings of the 2004 Nanotechnology Conference and Trade Show Vol.2 (2004), pp. 470-473, 2004,
Balls, G.T., Baden, S.B., Colella, P., "SCALLOP: A Highly Scalable Parallel Poisson Solver in Three Dimensions", Proceedings, SC'03, Phoenix, Arizona, November, 2003, November 2003,
Gunther H. Weber, Oliver Kreylos, Terry J. Ligocki, John Shalf, Hans Hagen, Bernd Hamann, Ken I. Joy, Kwan-Liu Ma, "High-quality Volume Rendering of Adaptive Mesh Refinement Data", VMV, 2001, 121-128,
Colella, P., Graves, D.T., Modiano, D., Puckett, E.G., Sussman, M., "An Embedded Boundary / Volume of Fluid Method for Free Surface Flows in Irregular Geometries", ASME Paper FEDSM99-7108, in Proceedings of the 3rd ASME/JSME Joint Fluids Engineering Conference, 18-23 July, San Francisco, CA, 1999,
Nelson, E.S., Colella, P., "Parametric Study of Reactive Melt Infiltration", R.M. Sullivan, N.J. Salamon, M. Keyhani, and S. White, eds. "Application of porous media methods for engineered materials" AMD-Vol 233, pp 1-11. American Society of Mechanical Engineers (1999). (Presented at the 1999 ASME International Mechanical Engineer, 1999,
Yelick, K., Semenzato, L., Pike, G., Miyamoto, C., Liblit, B., Krishnamurthy, A., Hilfinger, P., Graham, S., Gay, D., Colella, P., Aiken, A., "Titanium: A High-Performance Java Dialect", ACM 1998 workshop on Java for high-performance computing, Stanford, CA, February 1998,
Kevin Long, Brian Van Straalen, "PDESolve: an object-oriented PDE analysis environment", Object Oriented Methods for Interoperable Scientific and Engineering Computing: Proceedings of the 1998 SIAM Workshop, 1998, 99:225,
Dudek, S., Colella, P., "Steady-State Solution-Adaptive Euler Computations on Structured Grids", AIAA paper 98-0543, AIAA Aerospace Sciences meeting, Reno, NV, January 1998,
Dudek, S., Colella, P., "A Godunov Steady-State Solver for Structured Grids", AIAA Aerospace Sciences meeting, AIAA paper 97-0875, Reno, NV, January 1997,
Hilditch, J., Colella, P., "A Projection Method for Low-Mach Number Fast Chemistry Reacting Flow", AIAA Aerospace Sciences meeting, AIAA paper 97-0263, Reno, NV, January 1997,
J.A. Greenough, J.B. Bell, P. Colella, E.G. Puckett, "A Numerical Study of Shock-Induced Mixing of a Helium Cylinder: Comparison with Experiment", Proceedings of the 20th International Symposium on Shock Waves, 1997,
Jesse, J.P., Howell, L.H., Fiveland, W.A., Colella, P., Pember, R.B., "An Adaptive Mesh Refinement Algorithm for the Discrete Ordinates Methods", Proceedings, ASME 1996 National Heat Transfer Conference, August 1996,
Mark. M. Sussman, Ann S. Almgren, John B. Bell, Phillip Colella, Louis H. Howell, Michael Welcome, "An Adaptive Level Set Approach for Incompressible Two-Phase Flows", Proceedings of the ASME Fluids Engineering Summer Meeting: Forum on Advances in Numerical Modeling of Free Surface and Interface Fluid Dynamics, July 1996,
Helmsen, J., Puckett, E.G., Colella, P., Dorr, M., "Two New Methods for Simulating Photolithography Development in 3D", Proceedings of the SPIE - The International Society for Optical Engineering Optical Microlithography IX, Santa Clara, CA, March 1996,
Almgren, A.S., Bell, J.B., Colella, P., Marthaler, T., "A Cell-Centered Cartesian Grid Projection Method for the Incompressible Euler Equations in Complex Geometries", AIAA paper 95-1924, Proceedings, AIAA 12th Computational Fluid Dynamics Conference, San Diego, CA, June 1995,
Steinthorsson, E., Modiano, D., Crutchfield, W.Y., Bell, J.B., Colella, P., "An Adaptive Semi-Implicit Scheme for Simulations of Unsteady Viscous Compressible Flow", AIAA Paper 95-1727-CP, in Proceedings of the 12th AIAA CFD Conference, June 1995,
Bell, J.B., Colella, P., Greenough, J.A., Marcus, D.L., "A Multi-Fluid Algorithm for Compressible, Reacting Flow", AIAA 95-1720, 12th AIAA Computational Fluid Dynamics Conference, San Diego, CA, June 1995,
Greenough, J.A., Beckner, V., Pember, R.B., Crutchfield, W.Y., Bell, J.B.,Colella, P., "An Adaptive Multifluid Interface-Capturing Method for Compressible Flow in Complex Geometries", AIAA-95-1718, Proceedings of 26th AIAA Fluid Dynamics Conference, San Diego, CA, June 1995,
Ann S. Almgren, John B. Bell, Phillip Colella, Louis H. Howell, Michael Welcome, "A High-Resolution Adaptive Projection Method for Regional Atmospheric Modeling", Proceedings of the NGEMCOM Conference sponsored by the U.S. EPA, August 7-9, Bay City, MI, 1995,
Richard B. Pember, Ann S. Almgren, John B. Bell, Phillip Colella, Louis Howell, and Mindy Lai, "A Higher-Order Projection Method for the Simulation of Unsteady Turbulent Nonpremixed Combustion in an Industrial Burner", Proceedings of the 8th International Symposium on Transport Phenomena in Combustion, July 16-20, San Francisco, CA, 1995,
Download File: paper95.ps.gz (gz: 62 KB)
Download File: abstract95.ps.gz (gz: 9.6 KB)
Download File: preprint.ps.gz (gz: 101 KB)
R.B. Pember, A.S. Almgren, W.Y. Crutchfield, L.H. Howell, J.B. Bell, P. Colella, and V.E. Beckner, "An Embedded Boundary Method for the Modeling of Unsteady Combustion in an Industrial Gas-Fired Furnace", WSS/CI 95F-165, 1995 Fall Meeting of the Western United States Section of the Combustion Institute, Stanford University, October 30-31, 1995,
Download File: paper1995.ps.gz (gz: 201 KB)
Download File: abs95.ps.gz (gz: 18 KB)
P. Colella and W.Y. Crutchfield, "A Parallel Adaptive Mesh Refinement Algorithm on the C-90", Energy Research Power Users Symposium, July 12, 1994,
Ann S. Almgren, John B. Bell, Louis H. Howell and Phillip Colella, "An Adaptive Projection Method for the Incompressible Navier-Stokes Equations", Proceedings of the 14th IMACS World Congress, July 11-15, pp. 537-540, Atlanta, Georgia, 1994,
Lai, M., Colella, P., Bell, J., "A Projection Method for Combustion in the Zero Mach Number Limit", AIAA paper 93-3369, Proceedings of the AIAA 11th Computational Fluid Dynamics Conference, Orlando, FL, July 1993,
Pember, R.B., Bell, J.B., Colella, P., Crutchfield, W.Y., Welcome, M.L., "Adaptive Cartesian Grid Methods for Representing Geometry in Inviscid Compressible Flow", Proceedings of the 11th AAIA CFD Conference, Orlando, Florida, July 1993,
Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H., "An Adaptive Projection Method for the Incompressible Euler Equations", Proceedings of the AIAA 11th Computational Fluid Dynamics Conference, Orlando, FL, July 1993,
Chien, KY., Ferguson, R.E., Kuhl, A.L., Glaz, H.M., Colella P., "Inviscid Dynamics of Two-Dimensional Shear Layers", Proceedings, 22nd AIAA Fluid Dynamics, Plasma Dynamics, and Lasers Conference, Honolulu, Hawaii, June 1991,
Download File: June1991.pdf (pdf: 1.4 MB)
Bell, J.B., Colella P., Welcome, M., "Conservative Front-Tracking for Inviscid Compressible Flow", Proceedings, 10th AIAA Computational Fluid Dynamics Conference, pp. 814-822., Honolulu, Hawaii, June 1991,
Bell, J.B., Colella P., Howell, L., "An Efficient Second-Order Projection Method for Viscous Incompressible Flow", Proceedings, 10th AIAA Computational Fluid Dynamics Conference, pp. 360-367., Honolulu, Hawaii, June 1991,
Colella, P., Henderson, L.F., Puckett, E.G., "A Numerical Study of Shock Wave Refractions at a Gas Interface", Proceedings, 9th AIAA Computational Fluid Dynamics Conference, pp. 426-439, Buffalo, NY, 1989,
Download File: A271989.pdf (pdf: 1022 KB)
Bell, J.B., Colella P., Trangenstein, J. A., Welcome, M., "Adaptive Mesh Refinement on Moving Quadrilateral Grids", Proceedings, 9th AIAA Computational Fluid Dynamics Conference, pp. 471-479, Buffalo, NY, 1989,
Download File: April1989.pdf (pdf: 868 KB)
Bell, J.B., Colella, P. Trangenstein, J.A., Welcome, M., "Godunov Methods and Adaptive Algorithms for Unsteady Fluid Dynamics", Proceedings, 11th International Conference on Numerical Methods in Fluid Dynamics, Springer Lecture Notes in Physics Vol.323, pp. 137-141, Williamsburg, Virginia, June 1988,
Download File: June1988.pdf (pdf: 438 KB)
Bell, J.B., Colella P., Trangenstein, J., Welcome, M., "Adaptive Methods for High Mach Number Reacting Flow", Proceedings, AIAA 8th Computational Fluid Dynamics Conference, pp. 717-725, Honolulu, Hawaii, June 1987,
Glaz, H.M., Colella, P., Collins, J.P., Ferguson, R.E., "High Resolution Calculations of Unsteady, Two-Dimensional Non-Equilibrium Gas Dynamics with Experimental Comparisons", AIAA paper 87-1293, Proceedings, AIAA 8th Computational Fluid Dynamics Conference, Honolulu, Hawaii, June 1987,
Bell, J.B., Colella P., Glaz, H.M., "A Second Order Projection Method for Viscous Incompressible Flow", AIAA paper 87-1176-CP, Proceedings, AIAA 8th Computational Fluid Dynamics Conference, pp. 789-794, Honolulu, Hawaii, June 1987,
Glaz, H.M., Colella P., Glass, I.I., Deschambault, R.L., "A Numerical Study of Oblique Shock-Wave Reflections with Experimental Comparisons", Proceedings, Royal Society of London A, Vol. 398, pp. 117-140, 1985,
Colella P., Glaz, H.M., "Numerical Calculation of Complex Shock Reflections in Gases", Proceedings, 9th International Conference on Numerical Methods in Fluid Dynamics, Saclay, France, June, 1984, Springer Lecture Notes in Physics, Vol.218, pp. 154-158, 1984,
Colella P., Glaz, H.M., "Numerical Modelling of Inviscid Shocked Flows of Real Gases", Proceedings, 8th International Conference on Numerical Methods in Fluid Dynamics,Springer Lecture Notes in Physics, Vol.170, pp. 175-182, Aachen, Germany, June 1982,
Woodward, P.R., Colella P., "High Resolution Difference Schemes for Compressible Gas Dynamics", Proceedings, 7th International Conference on Numerical Methods in Fluid Dynamics, Stanford, CA, June, 1980, Springer Lecture Notes in Physics, Vol.142, pp. 434-441, June 1980,
B. Van Straalen, D. Trebotich, A. Ovsyannikov and D.T. Graves, "Scalable Structured Adaptive Mesh Refinement with Complex Geometry", Exascale Scientific Applications: Programming Approaches for Scalability, Performance, and Portability, edited by Straatsma, T., Antypas, K., Williams, T., (Chapman and Hall/CRC: November 9, 2017)
Katherine Yelick, Susan Graham, Paul Hilfinger, Dan Bonachea, Jimmy Su, Amir Kamil, Kaushik Datta, Phillip Colella, Tong Wen, "Titanium", Encyclopedia of Parallel Computing, edited by David Padua, (Springer: 2011) Pages: 2049-2055 doi: 10.1007/978-0-387-09766-4
Presentation/Talk
Daniel Martin, Modeling Antarctic Ice Sheet Dynamics using Adaptive Mesh Refinement, 2019 SIAM Conference on Computational Science and Engineering, February 26, 2019,
Download File: Martin-CSE19-final.pdf (pdf: 3.6 MB)
Dan Martin, Brent Minchew, Stephen Price, Esmond Ng, Modeling Marine Ice Cliff Instability: Higher resolution leads to lower impact, AGU Fall Meeting, December 12, 2018,
Download File: Martin-AGU-2018-1.pdf (pdf: 2.8 MB)
Dan Martin, Ice sheet model-dependence of persistent ice-cliff formation, European Geosciences Union General Assembly 2018, April 11, 2018,
Download File: Martin-EGU-2018-final.pdf (pdf: 2.8 MB)
Daniel Martin, Stephen Cornford, Antony Payne, Millennial-Scale Vulnerability of the Antarctic Ice Sheet to localized subshelf warm-water forcing, International Symposium on Polar Ice, Polar Climate, Polar Change, August 18, 2017,
Download File: Martin-IGS-2017.pdf (pdf: 6.9 MB)
Samuel Williams, Mark Adams, Brian Van Straalen, Performance Portability in Hybrid and Heterogeneous Multigrid Solvers, Copper Mountain, March 2016,
Download File: CU16SWWilliams.pptx (pptx: 1 MB)
Daniel Martin, Xylar Asay-Davis, Stephen Cornford, Stephen Price, Esmond Ng, William Collins, A Tale of Two Forcings: Present-Day Coupled Antarctic Ice-sheet/Southern Ocean dynamics using the POPSICLES model., European Geosciences Union General Assembly 2015, April 16, 2015,
Download File: Martin-EGU-2015.pdf (pdf: 5.3 MB)
Daniel F. Martin, Response of the Antarctic Ice Sheet to Ocean Forcing using the POPSICLES Coupled Ice sheet-ocean model, Joint Land Ice Working Group/Polar Climate Working Group Meeting, Boulder, CO, February 3, 2015,
Download File: Martin-LIWG-2015-final.pdf (pdf: 4.3 MB)
Daniel Martin, Xylar Asay-Davis, Stephen Price, Stephen Cornford, Esmond Ng, William Collins, Response of the Antarctic ice sheet to ocean forcing using the POPSICLES coupled ice sheet - ocean model, Twenty-first Annual WAIS Workshop, September 25, 2014,
Esmond Ng, Katherine J. Evans, Peter Caldwell, Forrest M. Hoffman, Charles Jackson, Kerstin Van Dam, Ruby Leung, Daniel F. Martin, George Ostrouchov, Raymond Tuminaro, Paul Ullrich, Stefan Wild, Samuel Williams, "Advances in Cross-Cutting Ideas for Computational Climate Science (AXICCS)", January 2017, doi: 10.2172/1341564
Download File: AXICCS-Report.pdf (pdf: 4 MB)
Dharshi Devendran, Daniel T. Graves, Hans Johansen, "A Hybrid Multigrid Algorithm for Poisson's equation using an Adaptive, Fourth Order Treatment of Cut Cells", LBNL Report Number: LBNL-1004329, November 11, 2014,
Download File: multigrid.pdf (pdf: 221 KB)
Mark F. Adams, Jed Brown, John Shalf, Brian Van Straalen, Erich Strohmaier, Samuel Williams, "HPGMG 1.0: A Benchmark for Ranking High Performance Computing Systems", LBNL Technical Report, 2014, LBNL 6630E,
Download File: hpgmg.pdf (pdf: 183 KB)
Samuel Williams, Dhiraj D. Kalamkar, Amik Singh, Anand M. Deshpande, Brian Van Straalen, Mikhail Smelyanskiy, Ann Almgren, Pradeep Dubey, John Shalf, Leonid Oliker, "Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark", December 2012, LBNL 6676E,
Download File: miniGMGLBNL-6676E.pdf (pdf: 906 KB)
Brian Van Straalen, David Trebotich, Terry Ligocki, Daniel T. Graves, Phillip Colella, Michael Barad, "An Adaptive Cartesian Grid Embedded Boundary Method for the Incompressible Navier Stokes Equations in Complex Geometry", LBNL Report Number: LBNL-1003767, 2012,
Download File: paper5.pdf (pdf: 360 KB)
We present a second-order accurate projection method to solve the incompressible Navier-Stokes equations on irregular domains in two and three dimensions. We use a finite-volume discretization obtained from intersecting the irregular domain boundary with a Cartesian grid. We address the small-cell stability problem associated with such methods by hybridizing a conservative discretization of the advective terms with a stable, nonconservative discretization at irregular control volumes, and redistributing the difference to nearby cells. Our projection is based upon a finite-volume discretization of Poisson's equation. We use a second-order, $L^\infty$-stable algorithm to advance in time. Block structured local refinement is applied in space. The resulting method is second-order accurate in $L^1$ for smooth problems. We demonstrate the method on benchmark problems for flow past a cylinder in 2D and a sphere in 3D as well as flows in 3D geometries obtained from image data.
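The projection step described in this abstract is easiest to see in its simplest setting: decompose a velocity field into divergence-free and gradient parts by solving a Poisson equation for a potential, then subtract the potential's gradient. The Python sketch below does this spectrally on a fully periodic 2D grid; it is a simplified stand-in under assumed periodic boundaries, not the finite-volume, embedded-boundary projection of the report.

import numpy as np

def project_divergence_free(u, v, dx):
    # Solve lap(phi) = div(u, v) spectrally on a periodic grid, then return
    # (u, v) - grad(phi), which is divergence-free by construction.
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * kx * uh + 1j * ky * vh
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0       # avoid 0/0; the mean mode carries no divergence
    phi_h = -div_h / k2  # lap(phi) = div becomes -k2 * phi_h = div_h in Fourier space
    phi_h[0, 0] = 0.0
    u_new = np.real(np.fft.ifft2(uh - 1j * kx * phi_h))
    v_new = np.real(np.fft.ifft2(vh - 1j * ky * phi_h))
    return u_new, v_new

# Usage: contaminate a divergence-free field with a gradient part, then project.
n, dx = 64, 1.0 / 64
g = np.arange(n) * dx
X, Y = np.meshgrid(g, g, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y) + 0.3 * np.sin(2 * np.pi * X)
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
u, v = project_divergence_free(u, v, dx)

In the report itself, the same decomposition is realized with a finite-volume discretization of Poisson's equation rather than FFTs.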
M. Christen, N. Keen, T. Ligocki, L. Oliker, J. Shalf, B. van Straalen, S. Williams, "Automatic Thread-Level Parallelization in the Chombo AMR Library", LBNL Technical Report, 2011, LBNL 5109E,
Ryne, R., Abell, D., Adelmann, A., Admundson, J., Bohn, C., Cary, J., Colella, P., Dechow, D., Decyk, V., Dragt, A., Gerber, R., Habib, S., Higdon, D., Katsouleas, T., Ta, K.L., McCorquodale, P., Mihalcea, D., Mitchell, C., Mori, W., Mottershead, C.T., Neri, F., Pogorelov, I., Quiang, J., Samulyak, R., Serafini, D., Shalf, J., Siegerist, C., Spentzouris, P., Stoltz, P., Terzic, B., Venturini, M., Walstrom, P., "SciDAC Advances and Applications in Computational Beam Dynamics", June 2005, LBNL 58243,
Gatti-Bono, C., Colella, P., "A Filtering Method for Gravitationally Stratified Flows", 2005, LBNL 57161,
Simon, H., Kramer, W., Saphir, W., Shalf, J., Bailey, D., Oliker, L., Banda, M., McCurdy, C.W., Hules, J., Canning, A., Day, M., Colella, P., Serafini, D., Wehner, M., Nugent, P., "National Facility for Advanced Computational Science: A Sustainable Path to Scientific Discovery", April 2004, LBNL 5500,
Download File: PUB-5500.pdf (pdf: 1.8 MB)
Colella, P., Graves, D.T., Greenough, J.A., "A Second-Order Method for Interface Reconstruction in Orthogonal Coordinate Systems", January 2002, LBNL 45244,
Modiano, D., Colella, P., "A Higher-Order Embedded Boundary Method for Time-Dependent Simulation of Hyperbolic Conservation Laws", ASME paper FEDSM00-11220, to appear in Proceedings of the ASME 2000 Fluids Engineering Division Summer Meeting, 2000, LBNL 45239,
Download File: LBNL-45239.ps.gz (gz: 549 KB)
M.S. Day, P. Colella, M. Lijewski, C.A. Rendleman and D.L. Marcus, "Embedded Boundary Algorithms for Solving the Poisson Equation on Complex Domains", 1998, LBNL 41811,
Download File: dclrm.ps.gz (gz: 2.6 MB)
Tallio, K.V., Colella, P., "A Multifluid CFD Turbulent Entrainment Combustion Model: Formulation and One-Dimensional Results", Society of Automotive Engineers Fuels and Lubricants Meeting, November 1997, LBNL 40806,
Martin, D.F., Cartwright, K.L., "Solving Poisson's Equation using Adaptive Mesh Refinement", U.C. Berkeley Electronics Research Laboratory report No. UCB/ERL M96/66, October 19, 1996,
Download File: MartinCartwright.pdf (pdf: 150 KB)
R.B. Pember, P. Colella, L.H. Howell, A.S. Almgren, J.B. Bell, W.Y. Crutchfield, V.E. Beckner, K.C. Kaufman, W.A. Fiveland, and J.P. Jessee, "The Modeling of a Laboratory Natural Gas-Fired Furnace with a Higher-Order Projection Method for Unsteady Combustion", UCRL-JC123244, February 1996,
Download File: paper96.ps.gz (gz: 136 KB)
Download File: abstract.ps.gz (gz: 25 KB)
Download File: talk.ps.gz (gz: 261 KB)
Steinthorsson, E., Modiano, D., Colella, P., "Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems", NASA technical memorandum 106704, ICOMP report no. 94-17, 1994,
Hilfinger, P.N., Colella, P., "FIDIL Reference Manual", UC Berkeley Computer Science Division Report, UCB/CSD-93-759, May 1993,
Download File: B17.pdf (pdf: 1.7 MB)
Chern, I.L., Colella P., "A Conservative Front Tracking Method for Hyperbolic Conservation Laws", Lawrence Livermore National Laboratory Report UCRL-97200, July 1987,
Download File: B141987.pdf (pdf: 1.9 MB)
A Projection Method for Incompressible Viscous Flow on a Deformable Domain, Trebotich, D.P., 1998,
Download File: thesisTrebotichUCB1998.pdf (pdf: 3.5 MB)
An Adaptive Cell-Centered Projection Method for the Incompressible Euler Equations, Martin, D.F., 1998,
Cartesian Grid Embedded Boundary Finite Difference Methods for Elliptic and Parabolic Partial Differential Equations on Irregular Domains, Johansen, H., 1997,
Download File: HansJohansenThesis1997.pdf (pdf: 8.8 MB)
An Approximate Projection Method Suitable for the Modeling of Rapidly Rotating Flows, Graves, D.T., 1996,
Download File: GravesThesis.pdf (pdf: 38 MB)
Daniel F Martin, Xylar Asay-Davis, Jan De Rydt, "Sensitivity of Ice-Ocean coupling to interactions with subglacial hydrology", AGU 2018 Ocean Sciences Meeting, February 14, 2018,
Download File: Martin-OS2018.pdf (pdf: 1.6 MB)
E.G. Ng, D.F. Martin, X. S. Asay-Davis , S.F. Price , W.D. Collins, "High-resolution coupled ice sheet-ocean modeling using the POPSICLES model", American Geophysical Union Fall Meeting, December 17, 2014,
Download File: Ng-AGU2014.pdf (pdf: 815 KB)
D.F. Martin, X.S. Asay-Davis, S.F. Price, S.L. Cornford, M. Maltrud, E.G. Ng, W.D. Collins, "Response of the Antarctic ice sheet to ocean forcing using the POPSICLES coupled ice sheet-ocean model", American Geophysical Union Fall Meeting, December 17, 2014,
Download File: Martin-AGU2014.pdf (pdf: 1000 KB)
"New Model Will Help Predict Stability of CO2 Reservoirs", June 1, 2014,
Massively-Parallel Simulations Verify Carbon Dioxide Sequestration Experiments, FY15 DOE ASCR Budget Request to Congress, May 1, 2014,
A Mignone, C Zanni, P Tzeferacos, B van Straalen, P Colella, G Bodo, "The PLUTO code for adaptive mesh computations in astrophysical fluid dynamics", The Astrophysical Journal Supplement Series, Pages: 7, 2012,
Comparing different supervised machine learning algorithms for disease prediction
Shahadat Uddin ORCID: orcid.org/0000-0003-0091-69191,
Arif Khan1,2,
Md Ekramul Hossain1 &
Mohammad Ali Moni3
BMC Medical Informatics and Decision Making volume 19, Article number: 281 (2019) Cite this article
Supervised machine learning algorithms have been a dominant method in the data mining field. Disease prediction using health data has recently shown a potential application area for these methods. This study aims to identify the key trends among different types of supervised machine learning algorithms, and their performance and usage for disease risk prediction.
In this study, extensive research efforts were made to identify those studies that applied more than one supervised machine learning algorithm on single disease prediction. Two databases (i.e., Scopus and PubMed) were searched for different types of search items. Thus, we selected 48 articles in total for the comparison among variants supervised machine learning algorithms for disease prediction.
We found that the Support Vector Machine (SVM) algorithm is applied most frequently (in 29 studies) followed by the Naïve Bayes algorithm (in 23 studies). However, the Random Forest (RF) algorithm showed superior accuracy comparatively. Of the 17 studies where it was applied, RF showed the highest accuracy in 9 of them, i.e., 53%. This was followed by SVM which topped in 41% of the studies it was considered.
This study provides a wide overview of the relative performance of different variants of supervised machine learning algorithms for disease prediction. This important information of relative performance can be used to aid researchers in the selection of an appropriate supervised machine learning algorithm for their studies.
Machine learning algorithms employ a variety of statistical, probabilistic and optimisation methods to learn from past experience and detect useful patterns from large, unstructured and complex datasets [1]. These algorithms have a wide range of applications, including automated text categorisation [2], network intrusion detection [3], junk e-mail filtering [4], detection of credit card fraud [5], customer purchase behaviour detection [6], optimising manufacturing process [7] and disease modelling [8]. Most of these applications have been implemented using supervised variants [4, 5, 8] of the machine learning algorithms rather than unsupervised ones. In the supervised variant, a prediction model is developed by learning a dataset where the label is known and accordingly the outcome of unlabelled examples can be predicted [9].
The scope of this research is primarily the performance analysis of disease prediction approaches using different variants of supervised machine learning algorithms. Disease prediction and, in a broader context, medical informatics have gained significant attention from the data science research community in recent years. This is primarily due to the wide adoption of computer-based technology in the health sector in different forms (e.g., electronic health records and administrative data) and the subsequent availability of large health databases for researchers. These electronic data are being utilised in a wide range of healthcare research areas such as the analysis of healthcare utilisation [10], measuring the performance of a hospital care network [11], exploring patterns and cost of care [12], developing disease risk prediction models [13, 14], chronic disease surveillance [15], and comparing disease prevalence and drug outcomes [16]. Our research focuses on disease risk prediction models involving machine learning algorithms (e.g., support vector machine, logistic regression and artificial neural network), specifically supervised learning algorithms. Models based on these algorithms use labelled data of patients for training [8, 17, 18]. For the test set, patients are classified into several groups such as low risk and high risk.
Given the growing applicability and effectiveness of supervised machine learning algorithms on predictive disease modelling, the breadth of research in this area still appears to be progressing. Specifically, we found little research that makes a comprehensive review of published articles employing different supervised learning algorithms for disease prediction. Therefore, this research aims to identify key trends among different types of supervised machine learning algorithms, their performance accuracies and the types of diseases being studied. In addition, the advantages and limitations of different supervised machine learning algorithms are summarised. The results of this study will help scholars to better understand current trends and hotspots of disease prediction models using supervised machine learning algorithms and formulate their research goals accordingly.
In making comparisons among different supervised machine learning algorithms, this study reviewed, by following the PRISMA guidelines [19], existing studies from the literature that used such algorithms for disease prediction. More specifically, this article considered only those studies that used more than one supervised machine learning algorithm for a single disease prediction in the same research setting. This made the principal contribution of this study (i.e., comparison among different supervised machine learning algorithms) more accurate and comprehensive since the comparison of the performance of a single algorithm across different study settings can be biased and generate erroneous results [20].
Traditionally, standard statistical methods and doctor's intuition, knowledge and experience had been used for prognosis and disease risk prediction. This practice often leads to unwanted biases, errors and high expenses, and negatively affects the quality of service provided to patients [21]. With the increasing availability of electronic health data, more robust and advanced computational approaches such as machine learning have become more practical to apply and explore in disease prediction area. In the literature, most of the related studies utilised one or more machine learning algorithms for a particular disease prediction. For this reason, the performance comparison of different supervised machine learning algorithms for disease prediction is the primary focus of this study.
In the following sections, we discuss different variants of supervised machine learning algorithm, followed by presenting the methods of this study. In the subsequent sections, we present the results and discussion of the study.
Supervised machine learning algorithm
At its most basic sense, machine learning uses programmed algorithms that learn and optimise their operations by analysing input data to make predictions within an acceptable range. As new data are fed in, these algorithms tend to make more accurate predictions. Although there are some variations of how to group machine learning algorithms, they can be divided into three broad categories according to their purposes and the way the underlying machine is being taught: supervised, unsupervised and semi-supervised.
In supervised machine learning algorithms, a labelled training dataset is used first to train the underlying algorithm. This trained algorithm is then applied to the unlabelled test dataset to categorise its instances into similar groups. Using an abstract dataset for three diabetic patients, Fig. 1 illustrates how supervised machine learning algorithms work to categorise diabetic and non-diabetic patients. Supervised learning algorithms are well suited to two types of problems: classification problems and regression problems. In classification problems, the underlying output variable is discrete and is categorised into different groups or categories, such as 'red' or 'black', or 'diabetic' and 'non-diabetic'. In regression problems, the corresponding output variable is a real value, such as the risk of developing cardiovascular disease for an individual. In the following subsections, we briefly describe the commonly used supervised machine learning algorithms for disease prediction.
An illustration of how supervised machine learning algorithms work to categorise diabetic and non-diabetic patients based on abstract data
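As a minimal illustration of this train-then-predict workflow, the sketch below uses scikit-learn [35] on a synthetic dataset; the dataset, classifier choice and all parameter values are illustrative assumptions, not taken from any of the reviewed studies.

```python
# A minimal sketch of the supervised workflow in Fig. 1: train a classifier
# on labelled examples, then predict labels for held-out (unlabelled) examples.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a labelled patient dataset (features plus a binary
# label such as diabetic / non-diabetic); all values here are illustrative.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GaussianNB().fit(X_train, y_train)   # learn from the labelled training set
print(model.predict(X_test[:5]))             # predicted classes for unseen instances
print(model.score(X_test, y_test))           # fraction predicted correctly
```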
Logistic regression (LR) is a powerful and well-established method for supervised classification [22]. It can be considered an extension of ordinary regression and can model only a dichotomous variable, which usually represents the occurrence or non-occurrence of an event. LR helps in finding the probability that a new instance belongs to a certain class. Since the output is a probability, the outcome lies between 0 and 1. Therefore, to use LR as a binary classifier, a threshold needs to be assigned to differentiate two classes. For example, a probability value higher than 0.50 for an input instance will classify it as 'class A'; otherwise, 'class B'. The LR model can be generalised to model a categorical variable with more than two values. This generalised version of LR is known as multinomial logistic regression.
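A hedged sketch of the thresholding step described above, using scikit-learn's LogisticRegression; the one-feature dataset and the 0.5 cut-off are illustrative assumptions.

```python
# LR outputs a probability in (0, 1); a threshold (here 0.5) turns it into
# a binary class decision, as described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])  # one illustrative feature
y = np.array([0, 0, 0, 1, 1, 1])                          # e.g. non-diabetic / diabetic

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba([[3.5]])[0, 1]   # P(class 1) for a new instance
label = int(proba > 0.5)                   # apply the 0.5 decision threshold
print(proba, label)
```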
Support vector machine
The support vector machine (SVM) algorithm can classify both linear and non-linear data. It first maps each data item into an n-dimensional feature space, where n is the number of features. It then identifies the hyperplane that separates the data items into two classes while maximising the marginal distance for both classes and minimising the classification errors [23]. The marginal distance for a class is the distance between the decision hyperplane and its nearest instance that is a member of that class. More formally, each data point is plotted first as a point in an n-dimensional space (where n is the number of features) with the value of each feature being the value of a specific coordinate. To perform the classification, we then need to find the hyperplane that differentiates the two classes by the maximum margin. Figure 2 provides a simplified illustration of an SVM classifier.
A simplified illustration of how the support vector machine works. The SVM has identified a hyperplane (actually a line) which maximises the separation between the 'star' and 'circle' classes
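The sketch below fits a linear SVM on a small, linearly separable toy dataset (an assumption made purely for illustration) and inspects the support vectors that define the margin, mirroring Fig. 2.

```python
# A linear SVM finds the maximum-margin hyperplane; the support vectors are
# the training instances closest to that hyperplane.
import numpy as np
from sklearn import svm

X = np.array([[0, 0], [1, 1], [2, 2], [8, 8], [9, 9], [10, 10]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = svm.SVC(kernel="linear").fit(X, y)
print(clf.support_vectors_)    # the instances that define the margin
print(clf.predict([[4, 4]]))   # classify a new point
```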
Decision tree (DT) is one of the earliest and most prominent machine learning algorithms. A decision tree models the decision logic, i.e., tests and corresponding outcomes for classifying data items, in a tree-like structure. The nodes of a DT normally have multiple levels, where the first or top-most node is called the root node. All internal nodes (i.e., nodes having at least one child) represent tests on input variables or attributes. Depending on the test outcome, the classification algorithm branches towards the appropriate child node, where the process of testing and branching repeats until it reaches the leaf node [24]. The leaf or terminal nodes correspond to the decision outcomes. DTs have been found easy to interpret and quick to learn, and are a common component of many medical diagnostic protocols [25]. When traversing the tree for the classification of a sample, the outcomes of all tests at each node along the path provide sufficient information to conjecture about its class. An illustration of a DT with its elements and rules is depicted in Fig. 3.
An illustration of a Decision tree. Each variable (C1, C2, and C3) is represented by a circle and the decision outcomes (Class A and Class B) are shown by rectangles. In order to successfully classify a sample to a class, each branch is labelled with either 'True' or 'False' based on the outcome value from the test of its ancestor node
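A brief sketch of a decision tree learner; the Iris dataset and the depth limit are illustrative choices, and export_text prints the learned test at each internal node, mirroring the structure of Fig. 3.

```python
# Fit a shallow decision tree and print its learned decision logic
# (root node, internal tests, leaf decisions).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))   # text rendering of the tests and outcomes
```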
A random forest (RF) is an ensemble classifier consisting of many DTs, similar to the way a forest is a collection of many trees [26]. DTs that are grown very deep often cause overfitting of the training data, resulting in a high variation in classification outcome for a small change in the input data. They are very sensitive to their training data, which makes them error-prone on the test dataset. The different DTs of an RF are trained using different parts of the training dataset. To classify a new sample, the input vector of that sample is passed down each DT of the forest. Each DT then considers a different part of that input vector and gives a classification outcome. The forest then chooses the classification having the most 'votes' (for a discrete classification outcome) or the average over all trees in the forest (for a numeric classification outcome). Since the RF algorithm considers the outcomes from many different DTs, it can reduce the variance that results from the consideration of a single DT for the same dataset. Figure 4 shows an illustration of the RF algorithm.
An illustration of a Random forest which consists of three different decision trees. Each of those three decision trees was trained using a random subset of the training data
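A minimal random forest sketch; the synthetic data and the choice of 100 trees are illustrative assumptions. predict returns the majority vote and predict_proba the vote fractions across the trees.

```python
# A forest of many decision trees; each tree votes and the forest returns
# the majority class, which reduces single-tree variance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict(X[:3]))        # majority vote over 100 trees
print(rf.predict_proba(X[:3]))  # vote fractions per class
```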
Naïve Bayes
Naïve Bayes (NB) is a classification technique based on Bayes' theorem [27]. This theorem can describe the probability of an event based on the prior knowledge of conditions related to that event. This classifier assumes that a particular feature in a class is not directly related to any other feature, although features for that class could have interdependence among themselves [28]. By considering the task of classifying a new object (white circle) to either the 'green' class or the 'red' class, Fig. 5 provides an illustration of how the NB technique works. According to this figure, it is reasonable to believe that any new object is twice as likely to have 'green' membership rather than 'red' since there are twice as many 'green' objects (40) as 'red'. In the Bayesian analysis, this belief is known as the prior probability. Therefore, the prior probabilities of 'green' and 'red' are 0.67 (40 ÷ 60) and 0.33 (20 ÷ 60), respectively. Now to classify the 'white' object, we need to draw a circle around this object which encompasses several points (the number is chosen beforehand) irrespective of their class labels. Four points (three 'red' and one 'green') were considered in this figure. Thus, the likelihood of 'white' given 'green' is 0.025 (1 ÷ 40) and the likelihood of 'white' given 'red' is 0.15 (3 ÷ 20). Although the prior probability indicates that the new 'white' object is more likely to have 'green' membership, the likelihood shows that it is more likely to be in the 'red' class. In the Bayesian analysis, the final classifier is produced by combining both sources of information (i.e., prior probability and likelihood value). The 'multiplication' function is used to combine these two types of information and the product is called the 'posterior' probability. Finally, the posterior probability of 'white' being 'green' is 0.017 (0.67 × 0.025) and the posterior probability of 'white' being 'red' is 0.049 (0.33 × 0.15). Thus, the new 'white' object should be classified as a member of the 'red' class according to the NB technique.
An illustration of the Naïve Bayes algorithm. The 'white' circle is the new sample instance which needs to be classified either to 'red' class or 'green' class
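The arithmetic of this worked example can be reproduced directly; the snippet below simply restates the prior, likelihood and posterior values given above in plain Python.

```python
# Combine prior and likelihood by multiplication to get the (unnormalised)
# posterior, exactly as in the worked example above.
prior_green, prior_red = 40 / 60, 20 / 60            # 0.67 and 0.33
lik_green, lik_red = 1 / 40, 3 / 20                  # neighbours inside the circle

post_green = prior_green * lik_green                 # approx 0.017
post_red = prior_red * lik_red                       # approx 0.049
print("red" if post_red > post_green else "green")   # -> "red"
```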
K-nearest neighbour
The K-nearest neighbour (KNN) algorithm is one of the simplest and earliest classification algorithms [29]. It can be thought of as a simpler version of an NB classifier. Unlike the NB technique, the KNN algorithm does not require probability values to be considered. The 'K' in the KNN algorithm is the number of nearest neighbours considered to take a 'vote' from. The selection of different values for 'K' can generate different classification results for the same sample object. Figure 6 shows an illustration of how the KNN works to classify a new object. For K = 3, the new object (star) is classified as 'black'; however, it is classified as 'red' when K = 5.
A simplified illustration of the K-nearest neighbour algorithm. When K = 3, the sample object ('star') is classified as 'black' since it gets more 'vote' from the 'black' class. However, for K = 5 the same sample object is classified as 'red' since it now gets more 'vote' from the 'red' class
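The K = 3 versus K = 5 behaviour of Fig. 6 can be reproduced with a small contrived dataset; the points below are an illustrative assumption chosen so that the vote flips between the two values of K.

```python
# The same query point is classified differently for K = 3 and K = 5:
# its 3 nearest neighbours are mostly 'black', its 5 nearest mostly 'red'.
from sklearn.neighbors import KNeighborsClassifier

X = [[1.0], [-1.1], [1.2], [-1.3], [1.4], [2.5], [-2.6]]
y = ["black", "black", "red", "red", "red", "black", "black"]

for k in (3, 5):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    print(k, knn.predict([[0.0]]))   # K=3 -> 'black', K=5 -> 'red'
```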
Artificial neural network
Artificial neural networks (ANNs) are a set of machine learning algorithms inspired by the functioning of the neural networks of the human brain. They were first proposed by McCulloch and Pitts [30] and later popularised by the works of Rumelhart et al. in the 1980s [31]. In the biological brain, neurons are connected to each other through multiple axon junctions, forming a graph-like architecture. These interconnections can be rewired (e.g., through neuroplasticity), which helps the brain to adapt, process and store information. Likewise, ANN algorithms can be represented as an interconnected group of nodes. The output of one node goes as input to another node for subsequent processing according to the interconnections. Nodes are normally grouped into a matrix called a layer depending on the transformation they perform. Apart from the input and output layers, there can be one or more hidden layers in an ANN framework. Nodes and edges have weights that enable the adjustment of signal strengths, which can be amplified or weakened through repeated training. Based on the training and the subsequent adaptation of the matrices and the node and edge weights, ANNs can make predictions for the test data. Figure 7 shows an illustration of an ANN (with two hidden layers) with its interconnected group of nodes.
An illustration of the artificial neural network structure with two hidden layers. The arrows connect the output of nodes from one layer to the input of nodes of another layer
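A minimal sketch of a feed-forward ANN with two hidden layers, as in Fig. 7; the layer sizes and the synthetic data are illustrative assumptions, and scikit-learn's MLPClassifier stands in for the many ANN variants used in the reviewed studies.

```python
# A multilayer perceptron with two hidden layers; node and edge weights are
# adjusted through repeated training (backpropagation).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                    random_state=0).fit(X, y)
print(ann.predict(X[:5]))   # predictions after training
```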
Data source and data extraction
Extensive research efforts were made to identify articles employing more than one supervised machine learning algorithm for disease prediction. Two databases were searched (October 2018): Scopus and PubMed. Scopus is an online bibliometric database developed by Elsevier. It has been chosen because of its high level of accuracy and consistency [32]. PubMed is a free publication search engine and incorporates citation information mostly for biomedical and life science literature. It comprises more than 28 million citations from MEDLINE, life science journals and online books [33]. MEDLINE is a bibliographic database that includes bibliographic information for articles from academic journals covering medicine, nursing, pharmacy, dentistry, veterinary medicine, and health care [33].
A comprehensive search strategy was followed to find out all related articles. The search terms that were used in this search strategy were:
"disease prediction" AND "machine learning";
"disease prediction" AND "data mining";
"disease risk prediction" AND "machine learning"; and
"disease risk prediction" AND "data mining".
In the scientific literature, the generic name "machine learning" is often used for both "supervised" and "unsupervised" machine learning algorithms. On the other hand, there is a close relationship between the terms "machine learning" and "data mining", with the latter commonly used in place of the former [34]. For these reasons, we used both "machine learning" and "data mining" in the search terms, although the focus of this study is on supervised machine learning algorithms. The four search items were then used to launch searches on the titles, abstracts and keywords of articles in both Scopus and PubMed. This resulted in 305 and 83 articles from Scopus and PubMed, respectively. After combining these two lists of articles and removing the articles written in languages other than English, we found 336 unique articles.
Since the aim of this study was to compare the performance of different supervised machine learning algorithms, the next step was to select the articles from these 336 which used more than one supervised machine learning algorithm for disease prediction. For this reason, we wrote a computer program in the Python programming language [35] which checked for the presence of the names of more than one supervised machine learning algorithm in the title, abstract and keyword list of each of the 336 articles. It found 55 articles that used more than one supervised machine learning algorithm for the prediction of different diseases. Out of the remaining 281 articles, only 155 used one of the seven supervised machine learning algorithms considered in this study. The remaining 126 used either other machine learning algorithms (e.g., unsupervised or semi-supervised) or data mining methods other than machine learning ones. ANN was found most frequently (30.32%) in the 155 articles, followed by Naïve Bayes (19.35%).
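The screening program itself was not published; the following is a hedged reconstruction of its core check (the alias lists are our own assumptions), flagging an article when more than one of the seven algorithms is mentioned in its title, abstract or keywords.

```python
# Hedged reconstruction of the screening step: count how many of the seven
# algorithm families appear in an article's title + abstract + keywords.
ALGORITHM_NAMES = [
    ["logistic regression"],
    ["support vector machine", "svm"],
    ["decision tree"],
    ["random forest"],
    ["naive bayes", "naïve bayes"],
    ["k-nearest neighbour", "k-nearest neighbor", "knn"],
    ["artificial neural network", "neural network", "ann"],
]

def uses_multiple_algorithms(text: str) -> bool:
    """True if more than one distinct algorithm family is mentioned."""
    text = text.lower()
    hits = sum(any(alias in text for alias in names) for names in ALGORITHM_NAMES)
    return hits > 1

print(uses_multiple_algorithms("Comparing SVM and random forest for diabetes"))  # True
```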
The next step was the manual inspection of all recovered articles. We noticed that four groups of authors reported their study results in two publication outlets (i.e., book chapter, conference or journal) using the same or different titles. For these four publications, we considered the most recent one. We further excluded another three articles, since the reported prediction accuracies for all supervised machine learning algorithms used in those articles were the same. For each of the remaining 48 articles, the performance outcomes of the supervised machine learning algorithms that were used for disease prediction were gathered. Two diseases were predicted in one article [17], and two algorithms were found to show the best accuracy outcomes for a disease in one article [36]. In that article, five different algorithms were used for prediction analysis. The number of publications per year is depicted in Fig. 8. The overall data collection procedure, along with the number of articles selected for different diseases, is shown in Fig. 9.
Number of articles published in different years
The overall data collection procedure. It also shows the number of articles considered for each disease
Figure 10 shows a comparison of the composition of the initially selected 329 articles with respect to the seven supervised machine learning algorithms considered in this study. ANN shows the highest percentage difference (i.e., 16%) between the 48 articles selected for this study and the 155 articles that used only one supervised machine learning algorithm for disease prediction, followed by LR. The remaining five supervised machine learning algorithms show a percentage difference between 1% and 5%.
Composition of initially selected 329 articles with respect to the seven supervised learning algorithms
Classifier performance index
The diagnostic ability of classifiers has usually been determined by the confusion matrix and the receiver operating characteristic (ROC) curve [37]. In the machine learning research domain, the confusion matrix is also known as error or contingency matrix. The basic framework of the confusion matrix has been provided in Fig. 11a. In this framework, true positives (TP) are the positive cases where the classifier correctly identified them. Similarly, true negatives (TN) are the negative cases where the classifier correctly identified them. False positives (FP) are the negative cases where the classifier incorrectly identified them as positive and the false negatives (FN) are the positive cases where the classifier incorrectly identified them as negative. The following measures, which are based on the confusion matrix, are commonly used to analyse the performance of classifiers, including those that are based on supervised machine learning algorithms.
a The basic framework of the confusion matrix; and (b) A presentation of the ROC curve
$$ \text{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN} \qquad F_{1}\ \text{score}=\frac{2\times TP}{2\times TP+FN+FP} $$
$$ \text{Precision}=\frac{TP}{TP+FP} \qquad \text{Sensitivity}=\text{Recall}=\text{True positive rate}=\frac{TP}{TP+FN} $$
$$ \text{Specificity}=\frac{TN}{TN+FP} \qquad \text{False positive rate}=\frac{FP}{FP+TN} $$
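These definitions translate directly into code; the helper below is a plain restatement of the formulas above, evaluated on illustrative counts.

```python
# Direct translation of the confusion-matrix measures defined above.
def confusion_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # = recall = true positive rate
        "specificity": tn / (tn + fp),
        "fpr":         fp / (fp + tn),   # false positive rate
        "f1":          2 * tp / (2 * tp + fn + fp),
    }

print(confusion_metrics(tp=40, tn=45, fp=5, fn=10))   # illustrative counts
```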
An ROC is one of the fundamental tools for diagnostic test evaluation and is created by plotting the true positive rate against the false positive rate at various threshold settings [37]. The area under the ROC curve (AUC) is also commonly used to determine the predictability of a classifier. A higher AUC value represents the superiority of a classifier and vice versa. Figure 11b illustrates a presentation of three ROC curves based on an abstract dataset. The area under the blue ROC curve is half of the shaded rectangle. Thus, the AUC value for this blue ROC curve is 0.5. Due to the coverage of a larger area, the AUC value for the red ROC curve is higher than that of the black ROC curve. Hence, the classifier that produced the red ROC curve shows higher predictive accuracy compared with the other two classifiers that generated the blue and black ROC curves.
There are a few other measures that are also used to assess the performance of different classifiers. One such measure is the root mean square error (RMSE). For different pairs of actual and predicted values, RMSE is the square root of the mean of all squared errors, where an error is the difference between an actual value and its corresponding predicted value. Another such measure is the mean absolute error (MAE), which represents the mean of the absolute differences between actual values and their corresponding predicted values.
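For completeness, a sketch of the same measures via scikit-learn's metrics module; the label and score vectors are illustrative, and RMSE is taken as the square root of the mean squared error.

```python
# AUC, RMSE and MAE via scikit-learn on illustrative vectors.
from sklearn.metrics import roc_auc_score, mean_squared_error, mean_absolute_error

y_true  = [0, 0, 1, 1]           # actual labels / values (illustrative)
y_score = [0.1, 0.4, 0.35, 0.8]  # predicted probabilities, used for AUC
y_pred  = [0.2, 0.1, 0.6, 0.9]   # predicted values, used for the error measures

print(roc_auc_score(y_true, y_score))             # area under the ROC curve
print(mean_squared_error(y_true, y_pred) ** 0.5)  # RMSE
print(mean_absolute_error(y_true, y_pred))        # MAE
```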
The final dataset contained 48 articles, each of which implemented more than one variant of supervised machine learning algorithms for a single disease prediction. All implemented variants were already discussed in the methods section as well as the more frequently used performance measures. Based on these, we reviewed the finally selected 48 articles in terms of the methods used, performance measures as well as the disease they targeted.
In Table 1, the names and references of the diseases and the corresponding supervised machine learning algorithms used to predict them are presented. For each of the disease models, the best performing algorithm is also described in this table. This study considered 48 articles, which in total made predictions for 49 diseases or conditions (one article predicted two diseases [17]). For these 49 diseases, 50 algorithms were found to show superior accuracy, since for one disease two algorithms (out of 5) showed the same, highest accuracy [36]. The advantages and limitations of different supervised machine learning algorithms are shown in Table 2.
Table 1 Summary of all references
Table 2 Advantages and limitations of different supervised machine learning algorithms
The comparison of the usage frequency and accuracy of different supervised learning algorithms is shown in Table 3. It is observed that SVM has been used most frequently (for 29 out of the 49 diseases that were predicted). This is followed by NB, which has been used in 23 articles. Although RF was used the second fewest number of times, it showed the highest percentage (i.e., 53%) of superior accuracy, followed by SVM (i.e., 41%).
Table 3 Comparison of usage frequency and accuracy of different supervised machine learning algorithms
In Table 4, the performance comparison of different supervised machine learning algorithms for the most frequently modelled diseases is shown. It is observed that SVM showed the superior accuracy most often for three diseases (heart disease, diabetes and Parkinson's disease). For breast cancer, ANN showed the superior accuracy most often.
Table 4 Comparison of the performance of different supervised machine learning algorithms based on different criteria
A close investigation of Table 1 reveals an interesting result regarding the performance of different supervised learning algorithms, which is also reported in Table 4. Considering only those articles that used clinical and demographic data (15 articles) reveals that DT showed the superior result most often (6 times). Interestingly, SVM was found only once to show the superior result, although it showed the superior accuracy most often for heart disease, diabetes and Parkinson's disease (Table 4). In the other 33 articles, which used research data other than the 'clinical and demographic' type, SVM and RF were found to show the superior accuracy most often (12 times) and second most often (7 times), respectively. In articles where 10-fold and 5-fold validation methods were used, SVM was found to show the superior accuracy most often (5 and 3 times, respectively). On the other hand, in articles where no validation method was used, ANN was found most often to show the superior accuracy. Figure 12 further illustrates the superior performance of SVM. Performance statistics from Table 4 have been used in a normalised way to draw these two graphs. Figure 12a illustrates the ROC graph for the four diseases (i.e., heart disease, diabetes, breast cancer and Parkinson's disease) under the 'disease names that were modelled' criterion. The ROC graph based on the 'validation method followed' criterion is presented in Fig. 12b.
Illustration of the superior performance of the Support vector machine using ROC graphs (based on the data from Table 4) – (a) for disease names that were modelled; and (b) for validation methods that were followed
To avoid the risk of selection bias, we extracted from the literature those articles that used more than one supervised machine learning algorithm. The same supervised learning algorithm can generate different results across various study settings, so a performance comparison between two supervised learning algorithms could generate imprecise results if they were employed in different studies separately. On the other hand, the results of this study could suffer from a variable selection bias arising from the individual articles considered in this study. These articles used different variables or measures for disease prediction, and we noticed that their authors did not consider all available variables from the corresponding research datasets. The inclusion of a new variable could improve the accuracy of an underperforming algorithm considered in the underlying study, and vice versa. This is one of the limitations of this study. Another limitation is that we considered a broader-level classification of supervised machine learning algorithms to make a comparison among them for disease prediction. We did not consider any sub-classifications or variants of any of the algorithms considered in this study. For example, we did not make any performance comparison between least-square and sparse SVMs; instead, we considered them under the SVM algorithm. A third limitation of this study is that, when comparing multiple supervised machine learning algorithms, we did not consider the hyperparameters that were chosen in the different articles. It has been argued that the same machine learning algorithm can generate different accuracy results for the same dataset with the selection of different values for the underlying hyperparameters [81, 82]. The selection of different kernels for support vector machines can result in a variation in accuracy outcomes for the same dataset. Similarly, a random forest could generate different results, while splitting a node, as the number of decision trees within the underlying forest changes.
This research attempted to study comparative performances of different supervised machine learning algorithms in disease prediction. Since clinical data and research scope varies widely between disease prediction studies, a comparison was only possible when a common benchmark on the dataset and scope is established. Therefore, we only chose studies that implemented multiple machine learning methods on the same data and disease prediction for comparison. Regardless of the variations on frequency and performances, the results show the potential of these families of algorithms in the disease prediction.
The data used in this study can be extracted from the online databases described above. The details of this extraction are described within the manuscript.
AUC:
Area under the ROC curve
DT:
Decision tree
FN:
False negative
KNN:
K-nearest neighbour
LR:
Logistic regression
MAE:
Mean absolute error
RF:
Random forest
RMSE:
Root mean square error
ROC:
Receiver operating characteristic
SVM:
Support vector machine
TN:
True negative
TP:
True positive
Mitchell TM. Machine learning. Boston, MA: WCB/McGraw-Hill; 1997.
Sebastiani F. Machine learning in automated text categorization. ACM Comput Surveys (CSUR). 2002;34(1):1–47.
Sinclair C, Pierce L, Matzner S. An application of machine learning to network intrusion detection. In: Computer Security Applications Conference, 1999. (ACSAC'99) Proceedings. 15th Annual; 1999. p. 371–7. IEEE.
Sahami M, Dumais S, Heckerman D, Horvitz E. A Bayesian approach to filtering junk e-mail. In: Learning for Text Categorization: Papers from the 1998 workshop, vol. 62; 1998. p. 98–105. Madison, Wisconsin.
Aleskerov E, Freisleben B, Rao B. Cardwatch: A neural network based database mining system for credit card fraud detection. In: Computational Intelligence for Financial Engineering (CIFEr), 1997., Proceedings of the IEEE/IAFE 1997; 1997. p. 220–6. IEEE.
Kim E, Kim W, Lee Y. Combination of multiple classifiers for the customer's purchase behavior prediction. Decis Support Syst. 2003;34(2):167–75.
Mahadevan S, Theocharous G. "Optimizing Production Manufacturing Using Reinforcement Learning," in FLAIRS Conference; 1998. p. 372–7.
Yao D, Yang J, Zhan X. A novel method for disease prediction: hybrid of random forest and multivariate adaptive regression splines. J Comput. 2013;8(1):170–7.
Michalski RS, Carbonell JG, Mitchell TM. Machine learning: an artificial intelligence approach. Springer Science & Business Media; 2013.
Culler SD, Parchman ML, Przybylski M. Factors related to potentially preventable hospitalizations among the elderly. Med Care. 1998;1:804–17.
Uddin MS, Hossain L. Social networks enabled coordination model for cost Management of Patient Hospital Admissions. J Healthc Qual. 2011;33(5):37–48.
Lee PP, et al. Cost of patients with primary open-angle glaucoma: a retrospective study of commercial insurance claims data. Ophthalmology. 2007;114(7):1241–7.
Davis DA, Chawla NV, Christakis NA, Barabási A-L. Time to CARE: a collaborative engine for practical disease prediction. Data Min Knowl Disc. 2010;20(3):388–415.
McCormick T, Rudin C, Madigan D. A hierarchical model for association rule mining of sequential events: an approach to automated medical symptom prediction; 2011.
Yiannakoulias N, Schopflocher D, Svenson L. Using administrative data to understand the geography of case ascertainment. Chron Dis Can. 2009;30(1):20–8.
Fisher ES, Malenka DJ, Wennberg JE, Roos NP. Technology assessment using insurance claims: example of prostatectomy. Int J Technol Assess Health Care. 1990;6(02):194–202.
Farran B, Channanath AM, Behbehani K, Thanaraj TA. Predictive models to assess risk of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation using national health data from Kuwait-a cohort study. BMJ Open. 2013;3(5):e002457.
Ahmad LG, Eshlaghy A, Poorebrahimi A, Ebrahimi M, Razavi A. Using three machine learning techniques for predicting breast cancer recurrence. J Health Med Inform. 2013;4(124):3.
Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.
Demšar J. Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res. 2006;7:1–30.
Palaniappan S, Awang R. Intelligent heart disease prediction system using data mining techniques. In: Computer Systems and Applications, 2008. AICCSA 2008. IEEE/ACS International Conference on; 2008. p. 108–15. IEEE.
Hosmer Jr DW, Lemeshow S, Sturdivant RX. Applied logistic regression. Wiley; 2013.
Joachims T. Making large-scale SVM learning practical. SFB 475: Komplexitätsreduktion Multivariaten Datenstrukturen, Univ. Dortmund, Dortmund, Tech. Rep. 1998. p. 28.
Quinlan JR. Induction of decision trees. Mach Learn. 1986;1(1):81–106.
Cruz JA, Wishart DS. Applications of machine learning in cancer prediction and prognosis. Cancer Informat. 2006;2:59–77.
Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
Lindley DV. Fiducial distributions and Bayes' theorem. J Royal Stat Soc. Series B (Methodological). 1958;1:102–7.
Rish I. An empirical study of the naive Bayes classifier. In: IJCAI 2001 workshop on empirical methods in artificial intelligence, vol. 3, no. 22; 2001. p. 41–6. IBM New York.
Cover T, Hart P. Nearest neighbor pattern classification. IEEE Trans Inf Theory. 1967;13(1):21–7.
McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys. 1943;5(4):115–33.
Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Nature. 1986;323(6088):533.
Falagas ME, Pitsouni EI, Malietzis GA, Pappas G. Comparison of PubMed, Scopus, web of science, and Google scholar: strengths and weaknesses. FASEB J. 2008;22(2):338–42.
PubMed. (2018). https://www.ncbi.nlm.nih.gov/pubmed/.
Kavakiotis I, Tsave O, Salifoglou A, Maglaveras N, Vlahavas I, Chouvarda I. Machine learning and data mining methods in diabetes research. Comput Struct Biotechnol J. 2017;15:104–16.
Pedregosa F, et al. Scikit-learn: Machine learning in Python. J Mach Learn Res. 2011;12:2825–30.
Borah MS, Bhuyan BP, Pathak MS, Bhattacharya P. Machine learning in predicting hemoglobin variants. Int J Mach Learn Comput. 2018;8(2):140–3.
Fawcett T. An introduction to ROC analysis. Pattern Recogn Lett. 2006;27(8):861–74.
Aneja S, Lal S. Effective asthma disease prediction using naive Bayes—Neural network fusion technique. In: International Conference on Parallel, Distributed and Grid Computing (PDGC); 2014. p. 137–40. IEEE.
Ayer T, Chhatwal J, Alagoz O, Kahn CE Jr, Woods RW, Burnside ES. Comparison of logistic regression and artificial neural network models in breast cancer risk estimation. Radiographics. 2010;30(1):13–22.
Lundin M, Lundin J, Burke H, Toikkanen S, Pylkkänen L, Joensuu H. Artificial neural networks applied to survival prediction in breast cancer. Oncology. 1999;57(4):281–6.
Delen D, Walker G, Kadam A. Predicting breast cancer survivability: a comparison of three data mining methods. Artif Intell Med. 2005;34(2):113–27.
Chen M, Hao Y, Hwang K, Wang L, Wang L. Disease prediction by machine learning over big data from healthcare communities. IEEE Access. 2017;5:8869–79.
Cai L, Wu H, Li D, Zhou K, Zou F. Type 2 diabetes biomarkers of human gut microbiota selected via iterative sure independent screening method. PLoS One. 2015;10(10):e0140827.
Malik S, Khadgawat R, Anand S, Gupta S. Non-invasive detection of fasting blood glucose level via electrochemical measurement of saliva. SpringerPlus. 2016;5(1):701.
Mani S, Chen Y, Elasy T, Clayton W, Denny J. Type 2 diabetes risk forecasting from EMR data using machine learning. In: AMIA annual symposium proceedings, vol. 2012; 2012. p. 606. American Medical Informatics Association.
Tapak L, Mahjub H, Hamidi O, Poorolajal J. Real-data comparison of data mining methods in prediction of diabetes in Iran. Healthc Inform Res. 2013;19(3):177–85.
Sisodia D, Sisodia DS. Prediction of diabetes using classification algorithms. Procedia Comput Sci. 2018;132:1578–85.
Yang J, Yao D, Zhan X, Zhan X. Predicting disease risks using feature selection based on random forest and support vector machine. In: International Symposium on Bioinformatics Research and Applications; 2014. p. 1–11. Springer.
Juhola M, Joutsijoki H, Penttinen K, Aalto-Setälä K. Detection of genetic cardiac diseases by Ca 2+ transient profiles using machine learning methods. Sci Rep. 2018;8(1):9355.
Long NC, Meesad P, Unger H. A highly accurate firefly based algorithm for heart disease prediction. Expert Syst Appl. 2015;42(21):8221–31.
Jin B, Che C, Liu Z, Zhang S, Yin X, Wei X. Predicting the risk of heart failure with ehr sequential data modeling. IEEE Access. 2018;6:9256–61.
Puyalnithi T, Viswanatham VM. Preliminary cardiac disease risk prediction based on medical and behavioural data set using supervised machine learning techniques. Indian J Sci Technol. 2016;9(31):1–5.
Forssen H, et al. Evaluation of Machine Learning Methods to Predict Coronary Artery Disease Using Metabolomic Data. Stud Health Technol Inform. 2017;235: IOS Press:111–5.
Tang Z-H, Liu J, Zeng F, Li Z, Yu X, Zhou L. Comparison of prediction model for cardiovascular autonomic dysfunction using artificial neural network and logistic regression analysis. PLoS One. 2013;8(8):e70571.
Toshniwal D, Goel B, Sharma H. Multistage Classification for Cardiovascular Disease Risk Prediction. In: International Conference on Big Data Analytics; 2015. p. 258–66. Springer.
Alonso DH, Wernick MN, Yang Y, Germano G, Berman DS, Slomka P. Prediction of cardiac death after adenosine myocardial perfusion SPECT based on machine learning. J Nucl Cardiol. 2018;1:1–9.
Mustaqeem A, Anwar SM, Majid M, Khan AR. Wrapper method for feature selection to classify cardiac arrhythmia. In: Engineering in Medicine and Biology Society (EMBC), 39th Annual International Conference of the IEEE; 2017. p. 3656–9. IEEE.
Mansoor H, Elgendy IY, Segal R, Bavry AA, Bian J. Risk prediction model for in-hospital mortality in women with ST-elevation myocardial infarction: a machine learning approach. Heart Lung. 2017;46(6):405–11.
Kim J, Lee J, Lee Y. Data-mining-based coronary heart disease risk prediction model using fuzzy logic and decision tree. Healthc Inform Res. 2015;21(3):167–74.
Taslimitehrani V, Dong G, Pereira NL, Panahiazar M, Pathak J. Developing EHR-driven heart failure risk prediction models using CPXR (log) with the probabilistic loss function. J Biomed Inform. 2016;60:260–9.
Anbarasi M, Anupriya E, Iyengar N. Enhanced prediction of heart disease with feature subset selection using genetic algorithm. Int J Eng Sci Technol. 2010;2(10):5370–6.
Bhatla N, Jyoti K. An analysis of heart disease prediction using different data mining techniques. Int J Eng. 2012;1(8):1–4.
Thenmozhi K, Deepika P. Heart disease prediction using classification with different decision tree techniques. Int J Eng Res Gen Sci. 2014;2(6):6–11.
Tamilarasi R, Porkodi DR. A study and analysis of disease prediction techniques in data mining for healthcare. Int J Emerg Res Manag Technoly ISSN. 2015;1:2278–9359.
Marikani T, Shyamala K. Prediction of heart disease using supervised learning algorithms. Int J Comput Appl. 2017;165(5):41–4.
Lu P, et al. Research on improved depth belief network-based prediction of cardiovascular diseases. J Healthc Eng. 2018;2018:1–9.
Khateeb N, Usman M. Efficient Heart Disease Prediction System using K-Nearest Neighbor Classification Technique. In: Proceedings of the International Conference on Big Data and Internet of Thing; 2017. p. 21–6. ACM.
Patel SB, Yadav PK, Shukla DD. Predict the diagnosis of heart disease patients using classification mining techniques. IOSR J Agri Vet Sci (IOSR-JAVS). 2013;4(2):61–4.
Venkatalakshmi B, Shivsankar M. Heart disease diagnosis using predictive data mining. Int J Innovative Res Sci Eng Technol. 2014;3(3):1873–7.
Ani R, Sasi G, Sankar UR, Deepa O. Decision support system for diagnosis and prediction of chronic renal failure using random subspace classification. In: Advances in Computing, Communications and Informatics (ICACCI), 2016 International Conference on; 2016. p. 1287–92. IEEE.
Islam MM, Wu CC, Poly TN, Yang HC, Li YC. Applications of Machine Learning in Fatty Live Disease Prediction. In: 40th Medical Informatics in Europe Conference, MIE 2018; 2018. p. 166–70. IOS Press.
Lynch CM, et al. Prediction of lung cancer patient survival via supervised machine learning classification techniques. Int J Med Inform. 2017;108:1–8.
Chen C-Y, Su C-H, Chung I-F, Pal NR. Prediction of mammalian microRNA binding sites using random forests. In: System Science and Engineering (ICSSE), 2012 International Conference on; 2012. p. 91–5. IEEE.
Eskidere Ö, Ertaş F, Hanilçi C. A comparison of regression methods for remote tracking of Parkinson's disease progression. Expert Syst Appl. 2012;39(5):5523–8.
Chen H-L, et al. An efficient diagnosis system for detection of Parkinson's disease using fuzzy k-nearest neighbor approach. Expert Syst Appl. 2013;40(1):263–71.
Behroozi M, Sami A. A multiple-classifier framework for Parkinson's disease detection based on various vocal tests. Int J Telemed Appl. 2016;2016:1–9.
Hussain L, et al. Prostate cancer detection using machine learning techniques by employing combination of features extracting strategies. Cancer Biomarkers. 2018;21(2):393–413.
Zupan B, DemšAr J, Kattan MW, Beck JR, Bratko I. Machine learning for survival analysis: a case study on recurrence of prostate cancer. Artif Intell Med. 2000;20(1):59–75.
Hung C-Y, Chen W-C, Lai P-T, Lin C-H, Lee C-C. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database. In: Engineering in Medicine and Biology Society (EMBC), 2017 39th Annual International Conference of the IEEE, vol. 1; 2017. p. 3110–3. IEEE.
Atlas L, et al. A performance comparison of trained multilayer perceptrons and trained classification trees. Proc IEEE. 1990;78(10):1614–9.
Lucic M, Kurach K, Michalski M, Bousquet O, Gelly S. Are GANs created equal? a large-scale study. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems; 2018. p. 698–707. Curran Associates Inc.
Levy O, Goldberg Y, Dagan I. Improving distributional similarity with lessons learned from word embeddings. Trans Assoc Comput Linguistics. 2015;3:211–25.
This study did not receive any funding.
Complex Systems Research Group, Faculty of Engineering, The University of Sydney, Room 524, SIT Building (J12), Darlington, NSW, 2008, Australia
Shahadat Uddin, Arif Khan & Md Ekramul Hossain
Health Market Quality Research Stream, Capital Markets CRC, Level 3, 55 Harrington Street, Sydney, NSW, Australia
Arif Khan
Faculty of Medicine and Health, School of Medical Sciences, The University of Sydney, Camperdown, NSW, 2006, Australia
Mohammad Ali Moni
SU: Originator of the idea, data analysis and writing. AK: Data analysis and writing. MEH: Data analysis and writing. MAM: Data analysis and critical review of the manuscript. All authors have read and approved the manuscript.
Correspondence to Shahadat Uddin.
The authors declare that they do not have any competing interests.
Uddin, S., Khan, A., Hossain, M. et al. Comparing different supervised machine learning algorithms for disease prediction. BMC Med Inform Decis Mak 19, 281 (2019) doi:10.1186/s12911-019-1004-8
Medical data
Disease prediction
Standards, technology, machine learning, and modeling
Recent questions without an upvoted answer
In the circuit shown, the diodes are ideal, the inductance is small, and $I_{0} \neq 0$. Which one of the following statements is true? $D_{1}$ conducts for greater than $180^{\circ}$ and $D_{2}$ conducts for greater than $180^{\circ}$. $D_{2}$ conducts for more than ... $180^{\circ}$. $D_{1}$ conducts for more than $180^{\circ}$ and $D_{2}$ conducts for $180^{\circ}$.
A three-phase voltage source inverter with ideal devices operating in $180^{\circ}$ conduction mode is feeding a balanced star-connected resistive load. The $DC$ voltage input is $V_{dc}$. The peak of the fundamental component of the phase voltage is $\frac{V_{dc}}{\pi} \\$ $\frac{2V_{dc}}{\pi} \\$ $\frac{3V_{dc}}{\pi} \\$ $\frac{4V_{dc}}{\pi} $
A $3$-phase, $4$-pole, $400$V, $50$Hz squirrel-cage induction motor is operating at a slip of $0.02$. The speed of the rotor flux in mechanical rad/sec, sensed by a stationary observer, is closest to $1500$ $1470$ $157$ $154$
The figure shows the per-phase representation of a phase-shifting transformer connected between buses $1$ and $2$, where $\alpha$ is a complex number with non-zero real and imaginary parts. For the given circuit, $Y_{bus}$ and $Z_{bus}$ are bus admittance matrix and ... is unsymmetric. $Y_{bus}$ is unsymmetric and $Z_{bus}$ is symmetric. Both $Y_{bus}$ and $Z_{bus}$ are unsymmetric.
The figure below shows the circuit diagram of a controlled rectifier supplied from a $230$V, $50$Hz, $1$-phase voltage source and a $10:1$ ideal transformer. Assume that all devices are ideal. The firing angles of the thyristors $T_{1}$ and $T_{2}$ are $90^{\circ}$ and $270^{\circ}$, respectively. The RMS value of the current through diode $D_{3}$ in amperes is _______.
Assume that at a traffic junction, the cycle of the traffic signal lights is $2$ minutes of green (vehicle does not stop) and $3$ minutes of red (vehicle stops). Consider that the arrival time of vehicles at the junction is uniformly distributed over the $5$ minute cycle. The expected waiting time (in minutes) for the vehicle at the junction is ________.
Consider a function $f(x, y, z)$ given by $f(x, y, z)=(x^{2}+y^{2}-2z^{2})(y^{2}+z^{2})$ The partial derivative of this function with respect to $x$ at the point, $x=2, y=1$ and $z=3$ is _______.
Let $x$ and $y$ be integers satisfying the following equations $2x^{2}+y^{2}=34$ $x+2y=11$ The value of $(x+y)$ is _______.
Evaluation of field visit planning heuristics during rapid needs assessment in an uncertain post-disaster environment
S.I.: Design and Management of Humanitarian Supply Chains
Mohammadmehdi Hakimifar ORCID: orcid.org/0000-0001-7121-70291,
Burcu Balcik2,
Christian Fikar3,4,
Vera Hemmelmayr1 &
Tina Wakolbinger1
Annals of Operations Research (2021)
A Rapid Needs Assessment process is carried out immediately after the onset of a disaster to investigate the disaster's impact on affected communities, usually through field visits. Reviewing practical humanitarian guidelines reveals that there is a great need for decision support for field visit planning in order to utilize resources more efficiently at the time of great need. Furthermore, in practice, there is a tendency to use simple methods, rather than advanced solution methodologies and software; this is due to the lack of available computational tools and resources on the ground, the lack of experienced technical staff, and the chaotic nature of the post-disaster environment. We present simple heuristic algorithms inspired by the general procedure explained in practical humanitarian guidelines for the site selection and routing decisions of the assessment teams while planning and executing the field visits. By simple, we mean methods that can be implemented by practitioners in the field using primary resources such as a paper map of the area and accessible software (e.g., Microsoft Excel). We test the performance of the proposed heuristic algorithms within a simulation environment, which enables us to incorporate various uncertain aspects of the post-disaster environment in the field, ranging from travel time and community assessment time to accessibility of sites and availability of community groups. We assess the performance of the proposed heuristics based on real-world data from the 2011 Van earthquake in Turkey. Our results show that selecting sites based on an approximate knowledge of community groups' existence leads to significantly better results than selecting sites randomly. In addition, updating initial routes while receiving more information also positively affects the performance of the field visit plan and leads to higher coverage of community groups than an alternative strategy where inaccessible sites and unavailable community groups are simply skipped and the initial plan is followed. Uncertainties in travel time and community assessment time adversely affect the community group coverage. In general, the performance of more sophisticated methods requiring more information deteriorates more than the performance of simple methods when the level of uncertainty increases.
After the occurrence of a sudden-onset disaster, humanitarian aid agencies need to make key decisions on how to respond and how to help affected people. Before making response decisions, humanitarian organizations quickly assess the needs of affected people, which, in humanitarian practice, is called the Rapid Needs Assessment (RNA) (IFRC 2008). The RNA starts immediately after a disaster strikes and has to be completed within a few days to quickly evaluate the disaster impact and population needs (IFRC 2008; Arii 2013). Without a successful needs assessment, humanitarian agencies may fail to satisfy needs effectively, which not only wastes precious resources at a time of great need, but can also lead to a further burden on authorities and affected people (de Goyet et al. 1991; Arii 2013). For instance, in the aftermath of the 1988 Armenian earthquake, the lack of a proper needs assessment has been mentioned as one of the main reasons for the mismatch between demand and supply of medical items sent by international organizations (Hairapetian et al. 1990; Lillibridge et al. 1993).
The RNA process begins with a preliminary review of secondary information which is collected from various sources such as national institutions, NGOs, United Nations agencies, satellite images, aerial photography and media including social media (IFRC 2008; ACAPS 2011b; IASC 2012; ACAPS 2014). After reviewing this secondary information, humanitarian agencies need to plan field visits in order to (i) confirm assumptions, initial impressions and predictions; (ii) receive more information on uncertain issues; and (iii) obtain beneficiary perspectives related to their priority needs (ACAPS 2011b). Rapid assessment via field visits includes interviews with affected community groups and direct observations of affected sites (IFRC 2008; ACAPS 2011b). The assessment is conducted by experts, who are familiar with the local area and have specialties such as public health, epidemiology, nutrition, logistics and shelter (ACAPS 2011b; Arii 2013).
Planning the field visits plays a significant role in achieving a successful assessment. One of the key decisions that influences the quality of this planning is deciding which sites to visit. Site selection processes aim to achieve acceptable coverage of various community groups. Due to time and resource restrictions during the RNA stage, it is normally neither feasible nor desirable to evaluate the entire affected region. Consequently, a sample must be drawn (ACAPS 2011b). Sampling methods are applied in practice to choose a limited number of sites to visit, which allows assessment teams to observe and compare the post-disaster conditions of different community groups such as displaced persons, host communities, and returnees (IFRC 2008; ACAPS 2011b). Limited time and resources usually do not permit statistically representative sampling at the household or individual level; therefore, a sample of sites representing the community level must be drawn (IASC 2012). Selecting which sites to visit may significantly affect the time spent on the RNA.
Besides site selection, routing decisions, which involve determining the order of site visits, can also affect the efficiency of the field visit plan. The importance of reducing travel time by planning routes has been emphasized in practical resources (Garfield 2011; Benini 2012). Savings in travel time can improve the quality of assessment by providing the opportunity to spend more time at each site and/or to increase the number of sites to visit (Benini 2012). Despite existing optimization approaches in the academic literature, humanitarian organizations may have difficulties applying these methods in the field (Gralla and Goentzel 2018). Reviews of practical humanitarian resources, as well as interviews with practitioners, show that, in practice, the tendency is to use simple methods such as greedy heuristics for determining vehicle routes in the field, rather than advanced solution methodologies and software (Gralla and Goentzel 2018). This is mainly due to the lack of available computational tools and resources on the ground, the lack of experienced technical staff, and the chaotic nature of the post-disaster environment.
While advances in technologies such as satellite data and drone images can assist humanitarian organizations in obtaining timely and accurate information about the physical impact of a disaster in the affected region, the availability of these technologies is a matter of concern due to their costs and possible disruptions in IT infrastructure after the disaster strikes (EPRS 2019). Besides, in the RNA stage, it is necessary to conduct interviews with the affected community groups, and it is usually a challenging task for humanitarian agencies to know their exact location. Therefore, both site selection and routing decisions during the RNA stage may be made in a highly uncertain post-disaster environment. Given the difficulties in accessing technological advances, evaluation of the uncertain factors can assist decision-makers in better utilizing these tools. These uncertainties largely stem from: (i) transportation network disruptions, including link capacity, reliability and availability; (ii) safety and security concerns in affected regions; and (iii) ambiguities with respect to the existence or availability of a certain community group in a specific region and their willingness to be interviewed (IFRC 2008; Garfield 2011; ACAPS 2011b; Liberatore et al. 2013; Arii 2013). In this paper, we use the term inaccessibility to refer to the cases when a site in the field visit plan turns out not to be accessible for assessment teams for various reasons such as security issues or road blockage. The term unavailability refers to the cases when the assumption regarding the existence or availability of a specific community group at a specific site turns out to be inaccurate. This happens when community groups are displaced, or they are unwilling to assist in collecting information (IFRC 2008). While planning the field visits in the RNA stage, "where overall needs are urgent, widespread and unmet, it is justifiable to focus on accessible areas" (IASC 2012, p. 7). However, sometimes information regarding inaccessibility or unavailability is revealed only when assessment teams travel through the region (ACAPS 2011b). In fact, while visiting affected sites, the assessment teams may receive updated information regarding inaccessibility and/or unavailability. In such cases, they usually follow a pre-defined set of rules to react appropriately (ACAPS 2011b). That is, they need to decide how to update their original plan in order to obtain better assessment results within the restricted time limit. Therefore, while planning the field visits in the RNA stage, the uncertainties related to the accessibility of sites and the existence of community groups at the visited sites must be taken into account.
For field visit planning, humanitarian organizations, depending on the availability of information and required resources, can use different pre-defined rules in case of inaccessibility/unavailability as well as different methods for site selection and routing decisions. Different combinations of methods and pre-defined rules provide a list of options for planning the field visit; we refer to each such combination as a heuristic. The term heuristic in our study describes the approach followed to make decisions. That is, each heuristic represents a combined set of methods for making site selection and routing decisions and rules to follow in case of inaccessibility/unavailability. Depending on which methods and pre-defined rules are considered, the heuristics can vary in terms of required resources and information. For instance, regarding resources, applying a simple routing method requires more accessible tools and software than an advanced optimization procedure. Likewise, concerning the required information, selecting sites randomly requires less information than selecting sites based on the location of target community groups. This paper investigates the following research question:
RQ1. How do different heuristics, which are developed based on simple rules and methods applied by field visit teams that conduct humanitarian needs assessment, perform in post-disaster settings characterized by uncertainty with respect to travel times, assessment times, site accessibility, and availability of communities?
We provide a list of heuristics, including simple methods and pre-defined rules, that can be applied while planning the field visit in the RNA stage under uncertainty, evaluate their performance and provide an overview for decision-makers to be able to compare them in various scenarios. The terms "easy" and "simple" are both subjective and need to be clarified within the scope of this study. We consider methods as simple or easy when they have the following two characteristics: First, practitioners should be able to implement them in the field using primary resources such as a paper map of the affected area and accessible software (e.g., Microsoft Excel). Second, these methods should follow the general principles mentioned in practical reports for field visit planning during the RNA stage. When practitioners observe that applying simple algorithms in practice can improve their field visit planning, they may recognize the need for further improvements. We believe that optimization models have a great potential to assist in decision-making processes, provided that practitioners recognize the need for these models and the required computational tools and resources are available at the time of planning. Accordingly, we briefly show in Sect. 5.3.3 how optimization procedures can further improve the results.
As a testing environment to evaluate the performance of the heuristics, we incorporate them into a simulation model. Simulation models in general aim to analyze, evaluate and compare the performance of different options that differ in relation to various parameters (Lund et al. 2017). This is in line with the main objective of this study, which is not to provide one optimal solution but to help decision-makers compare the performance of a variety of heuristics in different settings. Moreover, simulation models enable us to incorporate various uncertain factors of the post-disaster environment in a reasonable amount of computational time. We compare the performance of different heuristics based on metrics that generally focus on achieving higher coverage of various community groups within time and resource limitations. We perform a numerical analysis based on a case study of the 2011 Van (Turkey) earthquake. We observe that updating the routes based on pre-defined rules positively affects the performance of the field visit plan and leads to higher coverage of community groups in comparison to an alternative strategy where inaccessible sites and unavailable community groups are simply skipped and the initial plan is followed. In addition, we see that selecting sites based on an approximate knowledge of community groups' locations leads to significantly better results than selecting sites randomly. Our results show that uncertainties in travel time and community assessment time adversely affect the heuristics' performance in terms of coverage ratio, no matter which heuristic we use; however, the impact is not the same on all heuristics. The results of more sophisticated heuristics requiring more data deteriorate more when the level of uncertainty increases.
The paper is structured as follows: Sect. 2 provides an overview of related works. Section 3 describes the decision making environment and Sect. 4 presents an overview of heuristics. In Sect. 5, we present computational results. Finally, the conclusion and future research directions are presented in Sect. 6.
Related literature
Transportation planning for needs assessment processes has recently attracted attention in the field of optimization. Huang et al. (2013) consider the routing of post-disaster assessment teams. They construct routes for assessment teams to visit all communities in the affected regions. This model may be appropriate for the detailed assessment stage, where time allows visits to all sites. However, in the RNA stage it is usually only possible to visit a subset of sites. Oruc and Kara (2018) propose a bi-objective optimization model that provides damage assessment of both population centers and road segments with aerial and ground vehicles. Balcik (2017) presents a mixed-integer model for the proposed "Selective Assessment Routing Problem" (SARP), which simultaneously addresses site selection and routing decisions and supports the RNA process that involves the purposive sampling method, a method that only selects those sites that carry certain characteristics. Balcik and Yanıkoğlu (2020) take the study further by considering travel time as an uncertain parameter in post-disaster networks and present a robust optimization model to address the uncertainty. The objective function in Balcik (2017) is maximizing the minimum coverage ratio achieved across the community groups, where the coverage ratio for a group is calculated by dividing the number of times that the group is covered by the total number of sites in the network with that group. As an alternative objective function, Pamukcu and Balcik (2020) specify coverage targets in advance, and the objective is to ensure covering all community groups in minimum duration. Bruni et al. (2020) approach the post-disaster assessment operations from a customer-centric perspective by including a service level constraint that guarantees a given coverage level with the objective of minimizing the total latency. They consider travel time uncertainty and address this uncertainty through a mean-risk approach. Li et al. (2020) propose a bi-objective model addressing both the RNA stage and the detailed needs assessment stage to balance the contradictory objectives of the two stages. The objective of the RNA stage is, similar to Balcik (2017), maximizing the minimum coverage ratio achieved among community groups, and the second objective is minimizing the maximum assessment time of all assessment teams. There is a stream of literature focusing on damage assessment using unmanned aerial vehicles (UAVs), which shows similarities to the needs assessment routing problem (e.g., Zhu et al. 2019, 2020; Glock and Meyer 2020). The main difference is that damage assessment studies focus mainly on settings where UAVs' high-quality pictures can meet the assessment purposes, and there is no possibility or necessity to conduct interviews with the community groups. Both of the literature streams mentioned above belong to the family of Team Orienteering Problems (Chao et al. 1996), as both problems address site selection and vehicle routing decisions. The goal of both problems is maximizing the benefits collected from the visited nodes and constructing efficient routes.
One of the main criticisms of using optimization models is their limited applicability in practice (Altay and Green III 2006; Galindo and Batta 2013; Anaya-Arenas et al. 2014; Gralla and Goentzel 2018). Difficulties in accessing data, required computing time and resources, lack of contextualization, poor problem definition, complexity of the approach and lack of trust in its conclusions by humanitarian organizations are the main barriers that limit the possibility of using optimization models in practice (de la Torre et al. 2012; IFRC 2013; Kunz et al. 2017; Gralla and Goentzel 2018). Nevertheless, according to Gralla and Goentzel (2018), in order to improve the current dependence on "error-prone" and "by hand" planning methods, there is still a great need for decision support in practice. Developing "easy-to-understand" and "easy-to-apply" heuristics has been mentioned as an effective way to improve transportation planning by building trust between humanitarian logisticians and academic researchers as well as reducing implementation challenges (Gralla and Goentzel 2018). Some researchers have focused on developing simple heuristics that can be easily implemented in practice to support routing decisions in various non-profit settings. For instance, Bartholdi III et al. (1983) present a heuristic vehicle-routing strategy for delivering prepared meals to people who are unable to shop or cook for themselves. Knott (1988) suggests a simple heuristic based on methods used in practice by experienced field officers for scheduling emergency relief management vehicles. In a more recent study, Gralla and Goentzel (2018) develop simple and practice-driven heuristic algorithms for planning and prioritization of vehicles to transport humanitarian aid to affected communities based on their observational study on planning practices currently in use in the humanitarian sector. The authors compare the solutions of heuristics to each other and to those of a mixed-integer linear program to identify the strengths and weaknesses of each approach.
The general approach in this study is similar to that of Gralla and Goentzel (2018); that is, we also develop simple practice-driven heuristics to support RNA operations and compare their performance to each other and with a modified version of the optimization model presented in Balcik (2017). The modification to the original SARP model is due to the fact that we want to keep assumptions consistent between all heuristics that we test in this paper. The main modification concerns the uncertainty about the existence of community groups. In the original SARP, the existence of a community group is known in advance. In the modified SARP, we consider the expected value of visiting the community group. Furthermore, in this study we divide sites into clusters and add a new constraint to the original SARP to ensure that we visit each community group a certain number of times within each cluster. Note that although we make the above changes, the main focus of this paper is not to extend Balcik (2017) but to take the optimization approach in that work as a basis for comparison with the heuristic algorithms inspired by practical humanitarian resources. Balcik (2017) was closest to the assumptions of our heuristic algorithms and required the fewest changes. The practice-driven heuristics are inspired by the main assumptions, principles and procedures that are described in practical humanitarian resources and guidelines regarding field visit planning for the RNA stage (e.g., IFRC 2008; ACAPS 2011b; IASC 2012; ACAPS 2013, 2014; USAID 2014). Our study differs from Gralla and Goentzel (2018) in two main aspects: (i) we incorporate different heuristics and an optimization model into a simulation model to facilitate evaluating their performance in an uncertain environment, and (ii) we focus on RNA operations rather than the delivery of relief items. We elaborate more on these two subjects in the following paragraphs.
Practical studies and guidelines provide general and conceptual principles for the RNA processes which are mostly open to interpretation. Regarding the site selection decisions, available practical reports almost unanimously highlight the importance of using purposive sampling. IASC (2012) emphasizes the fact that in the response phase of a disaster due to time, access and logistics constraints, assessing needs at household or individual levels is often unrealistic and it is more reasonable to collect information at community level. They also emphasize the importance of purposive sampling by mentioning that limited time normally does not permit random or statistically representative sampling. Therefore a sample of sites which represent a cross-section of typical regions and affected populations must be drawn (IASC 2012). IASC (2012) also states that the size of the selected sites is determined by the availability of resources (staff, time and logistics), the geographic spread of the disaster and the heterogeneity/homogeneity of the community groups. Similarly, IFRC (2008) declares that if the affected sites differ significantly, it is beneficial to select a variety of sites reflecting different characteristics (e.g., ethnicity, economics, town/village, etc.). ACAPS (2011b) focuses specifically on the purposive sampling method and provides a case study to guide how to select relevant community groups and identify the most appropriate sites to assess. Routing decisions are the other important decisions which need to be taken during the RNA stage. Although the importance of saving in travel time using routing methods has been highlighted in some practical resources (e.g., Garfield 2011; Benini 2012), the details of applied or suggested methods for routing decisions have not been discussed in detail (IFRC 2008; ACAPS 2011b; IASC 2012; ACAPS 2013, 2014).
As stated in Sect. 1, practical studies mention different uncertain factors which assessment teams might encounter while planning and also during the RNA stage (IASC 2000; Darcy and Hofmann 2003; ACAPS 2011a, b). We use a simulation model to deal with these uncertainties. Simulation models are powerful tools to evaluate a set of predefined options, especially in situations with a high level of uncertainty (Liberatore et al. 2013; Davidson and Nozick 2018). Furthermore, simulation models often have an excellent capability of providing a graphical user interface that can facilitate applying these models as a decision support tool and improve understanding of the underlying problem settings. There is a growing body of simulation-based decision support tools that focuses on supporting various humanitarian operations (e.g., Yu et al. 2014; Fikar et al. 2016, 2018). Mishra et al. (2019) present a review of simulation models developed as analytical tools for different stages of disaster relief operations. Within our proposed simulation model, we develop easy-to-apply heuristic algorithms based on practical guidelines for planning field visits during the RNA stage. We also incorporate a modified version of the MIP model presented by Balcik (2017) in our model. We compare the solutions of the proposed heuristics to each other and to those of the MIP model to provide an overview for decision-makers to be able to compare them in various scenarios and see the trade-offs.
In summary, we explore evidence from practice and formulate heuristic algorithms, motivated by humanitarian reports, for field visit planning during the RNA stage. The importance of evidence-based research has been highlighted in the humanitarian literature (e.g., de Vries and Van Wassenhove 2017; Besiou and Van Wassenhove 2020). In this regard, we consider a wide range of uncertainties, including the accessibility of sites, availability of community groups, travel time, and assessment time. Our proposed decision-making environment assists humanitarian organizations in investigating the trade-offs between different heuristics and deciding on the most suitable choice.
Decision making environment
RNA starts immediately after a disaster strikes and is often completed within a few days. Assessment teams visit a number of sites in the affected areas to evaluate and compare the impact of the disaster on different community groups. The number of sites is limited since assessments must be completed quickly.
Reviewing practical studies shows that, in general, a field visit plan requires input from various information sources. This information is based on secondary data (e.g., sources from governments, NGOs, United Nations agencies, satellite images, aerial photography, and media including social media) and available resources such as logistics, staff, and time (ACAPS 2014). The main information is summarized in Table 1. The better the quality of available information, the higher the quality of the assessment plan. The quality of the assessment plan is higher when the assessment teams can increase the number and diversity of visited community groups within the time and resource limitations. Figure 1 shows the main inputs of field visit planning, ranging from the transport network, community groups and their possible locations to the number of teams and the total available time. Below we briefly explain each of the inputs mentioned in Fig. 1:
Inputs and outputs of a field visit planning model for RNA
Target community groups refer to different groups of the population that have been affected by a disaster in very different ways and have different needs (ACAPS 2011b). These could be various sub-groups of the population (e.g., refugees vs. residents), different vulnerable groups (e.g., disabled, food insecure, unemployed) and different demographic groups (e.g., women vs. men or elderly vs. youth) (ACAPS 2011b; IASC 2012). The set of target community groups is denoted by G and indexed by \(g \in G\) in this work.
Sites refer to geographical locations where assessment teams can find target community groups. In the RNA stage, sites generally refer to cities, towns, and villages (ACAPS 2011b). Different districts, neighborhoods, or individual houses are considered as sites during detailed assessment (Waring et al. 2002). Let N represent the set of sites in the affected region. Each assessment team departs from the origin node \(\{0\}\) and returns to the origin node after completing all site visits. Let \(N_0 = N \cup \{0\}\).
Approximate mapping of community groups (availability) points out the fact that humanitarian agencies are not always sure about the existence of a community group within a specific site due to lack of accurate secondary information and breakdown of established information and communication technology infrastructure (ACAPS 2011b). Recent developments in technology can help humanitarian agencies to gather more accurate information in the planning phase. For example, Nagendra et al. (2020) show how a satellite data analytics platform was adopted to identify the locations that needed high-priority rescue support. ACAPS (2011b) uses terms such as "we assume" or "we have good reason to believe" to show the possibility of the existence of a community group in a specific site. In Sect. 4.1, we explain how we can map these verbal terms onto numeric probabilities.
Clusters refer to a group of sites that share the same characteristics. Geographical or disaster impact features can be used as stratification factors for making clusters (ACAPS 2011b). Assessment teams are interested in comparing the situation of community groups among different clusters. For example, they might be interested in evaluating the needs of disabled people (as a community group) in both urban and rural areas (as clusters) or needs of refugee people (as a community group) in directly affected areas with indirectly affected areas (as clusters). The set of clusters is denoted by C and indexed by \(c \in C\). It is worth mentioning that clusters may have different priority levels (IFRC 2008; ACAPS 2011b). For instance, in case of an earthquake, humanitarian organizations may define clusters based on the distance from the earthquake's epicenter. In such a case, they may give more priority to clusters that are closer to the epicenter and select a larger portion (or percentage) of sites to visit from these clusters.
Total available time The RNA operations must be completed quickly (e.g., within 3 days based on Arii (2013)). Depending on the severity, extent, and scope of a disaster, decision-makers decide on the total available time, which is denoted by \(T_{max}\).
Number of assessment teams refers to the available number of assessment teams. These teams consist of experts familiar with the local area and specialties such as public health, epidemiology, nutrition, logistics, and shelter (ACAPS 2011b; Arii 2013). The set of teams is denoted by K and indexed by \(k \in K\).
Community assessment time By using secondary data and previous experiences of the assessment teams, an estimation of time for assessing one community group at a site, which mainly consists of conducting interviews and direct observation, is determined (Garfield 2011). The time for assessing community groups is an uncertain parameter and can deviate from its nominal value. One reason for increased assessment time is a phenomenon called assessment fatigue, which may happen if different humanitarian agencies assess a community group many times (IFRC 2008). In this situation, people are frustrated and unwilling to answer the interview questions which are mostly similar to the questions that the other agencies have already asked. Site assessment time refers to the total time spent at each site for assessing its existing community groups. Estimated site assessment time is represented by \(s_i\), which is calculated by the number of community groups at a site multiplied by the nominal value of assessing one community group.
Travel time Travel time between sites is calculated using the available information on the road conditions and damage to the infrastructure (Garfield 2011). Travel time is another uncertain parameter in the RNA stage and can increase due to reasons such as network and infrastructure disruptions. The nominal value of travel time between nodes is represented by \(t_{ij}\).
We assume that travel time and community assessment time can both increase by up to a fraction of their nominal values. We denote the level of increase by U. For example, when U (the uncertainty level) is 0.2, travel time and community assessment time can increase by up to 20 percent of their nominal values. In practice, little or no data is available for uncertain parameters. Therefore, to model the probability distribution of both travel time and community assessment time, we consider a triangular distribution (left triangular, since the values can only increase), which is often used when the shape of the distribution is only vaguely known (Stein and Keblis 2009; Fairchild et al. 2016). The parameters of a triangular distribution are the lower limit (minimum), the best guess (mode), and the upper limit (maximum); in a left triangular distribution, the minimum is equal to the mode.
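To make this concrete, the following minimal Python sketch shows how a left-triangular realization of travel or assessment time could be drawn; the function name and interface are ours for illustration (the paper's own implementation used AnyLogic).

```python
import random

def realized_duration(nominal: float, U: float) -> float:
    """Sample a left-triangular realization of an uncertain duration.
    Since the duration can only increase, the minimum and the mode both
    equal the nominal value, and the maximum is (1 + U) * nominal."""
    return random.triangular(nominal, (1.0 + U) * nominal, nominal)

# Example: a 45-minute community assessment with uncertainty level U = 0.2
# yields realizations in [45, 54] minutes, most likely close to 45.
sample = realized_duration(45.0, 0.2)
```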
Site accessibility While planning the field visit using secondary information, assessment teams exclude sites that they already know to be inaccessible (i.e., sites for which they have accurate information); however, they may also encounter inaccessibility during their field trip (IASC 2000; IFRC 2008). The assumption in this study is that the assessment teams realize this once they get close enough to the inaccessible site (e.g., through information from local people or direct observation). In such situations, the assessment teams usually follow a pre-defined set of rules to update their original plan (ACAPS 2011b). In Sect. 4.3, we introduce two pre-defined rules for updating routes.
The aspects described above characterize the decision-making environment and the information required while planning field visits during the RNA stage. To plan a field visit during the RNA stage, two main decisions must be made: (i) site selection: deciding which sites to visit, and (ii) routing: deciding in which order to visit the selected sites and how to update the planned route. The main goal of field visit planning during the RNA stage is to visit different community groups in different clusters as much as possible and in a balanced way, considering the time and resource limits. The main performance measure in this study is the concept of Coverage Ratio (CR) of community groups. CR is calculated as the number of times a specific community group is visited divided by the total expected number of times this community group exists in the network. For example, if a community group is visited twice and is expected to exist in 10 different sites in the whole network, the CR of this community group is 0.2. The higher the CRs of all community groups and, preferably, the closer their values to each other, the better the performance of a heuristic (in Sect. 5.2, we present different KPIs that stem from the concept of CR). Below, we present and explain different heuristics consisting of simple methods inspired by practical reports for both site selection and routing decisions.
Table 1 Main factors for planning the field visit during the RNA stage
Overview of heuristics
In this section, we present our proposed heuristics for planning the field visit during the RNA stage. These heuristics include a set of methods for making site selection and routing decisions as well as pre-defined rules to follow in case of site inaccessibility and group unavailability. Figure 2 presents the different methods and pre-defined rules considered in this study, and Table 2 shows the list of four heuristics that consist of different combinations of these methods and pre-defined rules. The heuristics are sorted by level of simplicity (Heuristic A being the simplest).
Methods for site selection and routing and pre-defined rules in case of inaccessibility and unavailability
Site selection methods
Random site selection The main assumption in this method is that assessment teams do not have access to information regarding the location of community groups, or they do not have time to gather and analyze this information using secondary data. Therefore, sites are selected randomly. This method is the simplest approach for selecting sites that we found in practical resources (IFRC 2008). It is typically used when humanitarian organizations assume sites are similar in terms of existing community groups.
The selection process is shown in Algorithm 1. First, we randomly select \(f_c\) sites from each cluster, where \(f_c\) is determined by experts and represents their preferred number of sites to be visited from each cluster. For simplicity, in our algorithm, we set \(f_c\) as a fixed percentage of sites from each cluster (e.g., 30 percent). Then, we assign the selected sites to teams using the Sweep-NN algorithm (see Sect. 4.2 for the steps of this algorithm) and construct |K| routes (the number of available teams). To respect the available resources (total available time), we calculate the travel times and site assessment times required to complete each route. If the time to complete a specific route exceeds the total available time (\(T_{max}\)), we randomly remove sites from this route until it becomes feasible.
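A minimal Python sketch of this procedure is given below; the data layout (`clusters`, `f_c`) and the callables `build_routes` (e.g., Sweep-NN, Sect. 4.2) and `route_duration` are hypothetical placeholders rather than the paper's actual code.

```python
import random

def random_site_selection(clusters, f_c, build_routes, route_duration, T_max):
    """Sketch of Algorithm 1 (random site selection). `clusters` maps a
    cluster id to its site list, `f_c` gives the number of sites to draw
    per cluster, `build_routes` assigns the selected sites to the |K|
    teams, and `route_duration` returns a route's total travel plus
    site assessment time."""
    selected = []
    for c, sites in clusters.items():
        selected += random.sample(sites, min(f_c[c], len(sites)))
    routes = build_routes(selected)
    for route in routes:
        # Resource check: randomly drop sites until the route fits T_max.
        while route and route_duration(route) > T_max:
            route.remove(random.choice(route))
    return routes
```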
Table 2 List of heuristics for field visit planning during the RNA stage
Community-based site selection In this method, we assume that while selecting sites, decision-makers have at least an approximate knowledge about where the community groups exist and they make their site selection decision based on this information. This method, which we call community-based site selection, is adopted from the general procedure explained in ACAPS (2011b) and, in general, requires more information compared to the previous one. ACAPS (2011b) places a great emphasis on visiting different community groups and considers the following general factors while selecting sites: (i) sample richness (i.e., observing each community group at least once within each cluster is important); (ii) collecting adequate information (i.e., observing a community group at multiple sites); and (iii) efficiency (e.g., visiting a site that involves multiple community groups may be beneficial).
Another important factor that is highlighted in ACAPS (2011b) is the uncertainty concerning the existence of a community group at a specific site due to the lack of information. We mentioned in Sect. 3 how humanitarian agencies use verbal terms to express the possibility of the presence of a community group at a particular site. Translating or mapping these verbal terms onto numeric probabilities in the planning stage is challenging. We assume these verbal terms can be mapped onto numeric probabilities using methods suggested in the literature, such as Barnes (2016) or Kent (1964). For example, according to Barnes (2016), we can associate terms such as "almost certain", "extremely likely" or "highly likely" with a probability of 0.9, and terms such as "very unlikely" or "highly unlikely" with 0.1. After these mappings, we assume that the availability of each group at each site is an independent Bernoulli trial with the parameter resulting from the verbal terms. We then let \(\alpha _{ig}\) represent the probability of visiting community group g at site i. Note that the expected value of community group g being at site i is also \(\alpha _{ig}\).
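The sketch below illustrates, in Python, one possible mapping of verbal terms to probabilities and the resulting Bernoulli realization of group availability; the dictionary values follow the spirit of Barnes (2016), but the exact entries are illustrative assumptions.

```python
import random

# Assumed mapping of verbal likelihood terms to probabilities; the
# exact values below are illustrative, not prescribed by the paper.
VERBAL_TO_PROB = {
    "almost certain": 0.9,
    "highly likely": 0.9,
    "likely": 0.7,
    "even chance": 0.5,
    "unlikely": 0.3,
    "highly unlikely": 0.1,
}

def group_present(alpha_ig: float) -> bool:
    """One independent Bernoulli trial: does group g turn out to be
    available at site i when the team arrives?"""
    return random.random() < alpha_ig

# Example: a group described as "highly unlikely" at a site is realized
# as present in roughly 10 percent of simulation runs.
```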
In community-based site selection, we consider the main criteria mentioned in ACAPS (2011b) for site selection. More specifically, we select sites that ensure visiting a minimum target number of each community group within each cluster (\(l_{gc}\)), determined by decision-makers. Also, to take efficiency into account, we first try to meet the minimum number by selecting sites that have a higher possibility of visiting community groups, to save resources such as time and assessment teams (see Algorithm 2).
The resource checking process is similar to the one in random site selection. The only difference is that when a route is infeasible and we need to remove one or more sites from it, we start with the site whose removal causes the least decrease in the CRs of community groups. Using CR helps us avoid removing a site that hosts a community group existing in only a limited number of areas.
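A greedy Python sketch of the community-based selection step is shown below; the feasibility trimming described above is omitted for brevity, and the data layout (`alpha[i][g]`, `l[(g, c)]`) is an assumed structure rather than the paper's implementation.

```python
def community_based_selection(clusters, alpha, l, groups):
    """Sketch of Algorithm 2 (community-based site selection).
    `alpha[i][g]` is the probability that group g exists at site i and
    `l[(g, c)]` is the minimum target for group g in cluster c."""
    selected = []
    for c, sites in clusters.items():
        remaining = {g: l[(g, c)] for g in groups}   # unmet targets
        candidates = list(sites)
        while candidates and any(v > 0 for v in remaining.values()):
            # Efficiency: prefer the site with the largest expected
            # coverage of the community groups that are still unmet.
            best = max(candidates,
                       key=lambda i: sum(alpha[i][g] for g in groups
                                         if remaining[g] > 0))
            candidates.remove(best)
            selected.append(best)
            for g in groups:
                remaining[g] -= alpha[best][g]
    return selected
```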
Routing method
The routing algorithm we use determines the sequencing of visits to the selected sites, which helps assessment teams utilize limited resources efficiently by reducing travel time. The application of routing methods has been emphasized in practical studies (Garfield 2011; Benini 2012). Nevertheless, due to the limitations mentioned in Sect. 1, in practice the tendency is to use simpler methods for determining vehicle routes in the field, rather than advanced solution methodologies and software (Gralla and Goentzel 2018). In the following section, we introduce an easy-to-apply method from the literature for generating routes that does not require sophisticated resources.
Sweep-NN algorithm This algorithm is one of the simplest methods for solving the capacitated vehicle routing problem (Gillett and Miller 1974; Nurcahyo et al. 2002). It consists of two stages: (i) clustering and (ii) routing. Clustering starts with the unassigned node with the smallest angle with respect to the depot and assigns it to vehicle k. The sweeping for each team continues until M (the total number of selected sites divided by the number of teams) sites are assigned. At the routing stage, a solution to the traveling salesman problem (TSP) is required to construct routes. Following the easy-to-apply approach in this paper, we consider the Nearest Neighbor (NN) algorithm for solving the TSP in each cluster. Algorithm 3 presents the steps of the Sweep-NN method.
Sweep clustering and routing
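The following Python sketch illustrates one plausible reading of the Sweep-NN method, assuming planar site coordinates and a travel-time function `dist`; angle handling and tie-breaking details may differ from the original algorithm.

```python
import math

def sweep_nn(depot, sites, coords, K, dist):
    """Sketch of the Sweep-NN method (Algorithm 3). Sweep: order sites
    by polar angle around the depot and cut the ordering into |K|
    groups of at most M sites; NN: visit each group in
    nearest-neighbour order starting from the depot."""
    def angle(i):
        return math.atan2(coords[i][1] - coords[depot][1],
                          coords[i][0] - coords[depot][0])

    ordered = sorted(sites, key=angle)
    M = math.ceil(len(ordered) / K)
    groups = [ordered[k * M:(k + 1) * M] for k in range(K)]

    routes = []
    for group in groups:
        route, current, unvisited = [], depot, set(group)
        while unvisited:
            nxt = min(unvisited, key=lambda j: dist(current, j))
            route.append(nxt)
            unvisited.remove(nxt)
            current = nxt
        routes.append(route)  # each team returns to the depot afterwards
    return routes
```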
Pre-defined rules in case of site inaccessibility
When the assessment teams start traveling in the field based on their original planned route, they might encounter site inaccessibility and need to update their route accordingly. In our simulation algorithm, we assume that once a team has finished assessing one site, it learns whether the next site on its planned route is accessible. After realizing the site is inaccessible, the assessment teams need to decide how to react. Below, we suggest two rules to follow in case of inaccessibility:
Skip In this rule, once the assessment teams realize that the next site on their plan is not accessible, they skip that site and continue with their original plan. That is, they travel to the next node in the original plan. We assume assessment teams can find an alternative route to the next node.
Replace In this rule, the assessment team replaces the inaccessible site with another site within a specific radius (r). The inaccessible site is replaced by a site that is similar to it, where similarity is calculated as the total absolute difference between the expected values of visiting each community group (\(\alpha _{ig}\)) at the inaccessible site and at the sites within radius r around it. Please see Algorithm 4 for the details of this rule.
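A minimal Python sketch of the Replace rule might look as follows; `unvisited`, `alpha`, and `dist` are assumed inputs, and returning `None` signals falling back to simply skipping the site.

```python
def replace_inaccessible(blocked, unvisited, alpha, groups, dist, r):
    """Sketch of the Replace rule (Algorithm 4): among unvisited sites
    within radius r of the inaccessible site, return the most similar
    one, where similarity is the total absolute difference of the
    expected community-group presences."""
    nearby = [i for i in unvisited if dist(blocked, i) <= r]
    if not nearby:
        return None  # no candidate nearby; fall back to skipping
    return min(nearby,
               key=lambda i: sum(abs(alpha[blocked][g] - alpha[i][g])
                                 for g in groups))
```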
Pre-defined rules in case of group unavailability
The assessment teams may also decide to update their route when they enter a site and realize that their target community group(s) do not exist at that site. Since it is suggested in practical studies that "the assessment teams should respect a pre-defined set of rules to replace communities that turn out to be inaccessible or irrelevant" (ACAPS 2011b, p. 8), we present the following two rules that might be reasonable in case of unavailability of community groups.
Skip In this rule, the assessment teams stick to their original planned routes and do not update their route based on the number of community groups that have been successfully visited during their trip.
Insert The assumption in this rule is that the assessment teams keep track of the number of visits to each community group during their trip. Then, periodically (e.g., after visiting every \(\rho \) sites), they compare this number with what they expected to visit in their original plan. If the gap between the expected and realized number of visits is larger than a threshold (\(\tau \)), they insert a site that has the highest possibility of visiting the community group with the largest gap. The inserted site is chosen within a specific radius (r) around the current location. Please see Algorithm 5 for the detailed procedure.
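The Insert rule could be sketched in Python as below; the bookkeeping dictionaries `visited` and `expected` are assumed structures, and the check would be triggered after every \(\rho \) visited sites.

```python
def insert_site(visited, expected, current, unvisited, alpha, dist, r, tau):
    """Sketch of the Insert rule (Algorithm 5): if the group with the
    largest shortfall between expected and realized visits exceeds the
    threshold tau, insert the nearby site (within radius r) most likely
    to host that group."""
    gaps = {g: expected[g] - visited[g] for g in expected}
    g_star = max(gaps, key=gaps.get)
    if gaps[g_star] <= tau:
        return None  # the plan is on track; keep the original route
    nearby = [i for i in unvisited if dist(current, i) <= r]
    return max(nearby, key=lambda i: alpha[i][g_star]) if nearby else None
```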
To facilitate investigating the trade-offs between using different heuristics, we incorporate all heuristics into a simulation model. A high-level process overview of the steps performed in the simulation is provided in Fig. 4. Pre-simulation computations refer to the determination of sites and original routes, which then feed into the simulation run where the realization of the uncertain parameters occurs. Then, based on the pre-defined rules, the original routes get updated. Note that the need for simulation (or more specifically the need for updating routes) is due to uncertainties which resolve over the assessment horizon. Otherwise, there would be no need for simulation and updating the initially constructed routes. This is the first study that considers a wide range of uncertainties, including the accessibility of sites, availability of community groups, travel time, and assessment time within a new problem environment capable of comparing different field visit planning methods for site selection and routing decisions, and pre-defined rules for updating routes. This new decision-making environment allows humanitarian organizations, depending on the specific setting of a disaster, to investigate the trade-off between using different heuristics and decide on the most suitable choice.
A high-level evaluation process overview of the steps performed for each heuristic
Computational results
We evaluate the performance of the presented heuristics using the case study network in Balcik (2017), which focuses on the affected towns and villages after the 2011 earthquake in Van, Turkey. This last-mile network was introduced in Noyan et al. (2016). We vary critical parameters of the problem instance, such as the number of teams, total available time, level of uncertainty, and allowed radius for detours, and analyze the performance of the heuristics across all instances. In Sect. 5.1, we briefly describe the case study and provide the parameters and assumptions considered. Key performance indicators and numerical results and analyses are provided in Sects. 5.2 and 5.3.
This case study focuses on 93 affected sites after the 2011 earthquake in Van. A case study can be applied to capture the conditions generated by a disaster and evaluate the performance of the disaster management system (Rodríguez-Espíndola et al. 2018). Ketokivi and Choi (2014) state that case studies can be used for theory generation, theory testing or theory elaboration. Based on this categorization, the case study in this paper is classified as theory testing, since it aims to test the performance of the proposed heuristics for field visit planning during the RNA stage.
This case study is a good example to test our proposed heuristics for the following reasons. First, in the 2011 Van earthquake, the scale of the disaster and the number of affected sites were so large that RNA operations had to be conducted. Teams of the Turkish Red Crescent (TRC) that immediately arrived in the city from agency offices located in Van were responsible for the assessment operations (AFAD 2020; Kizilay 2021). This is very important because when the number of sites is limited, there is usually no need for site selection (sampling), and the assessment teams are able to visit all sites. Furthermore, the affected area in the 2011 Van earthquake was diverse in terms of geographical aspects (e.g., elevation, proximity to Lake Van) and demographic differences (e.g., population classifications and vulnerable groups). This diversity highlights the importance of purposive sampling, which is applied when the affected sites differ significantly, and it is beneficial to select a variety of sites reflecting different aspects (IFRC 2008).
According to the Disaster and Emergency Management Presidency of Turkey (AFAD), this earthquake killed 604 people, left 200,000 people homeless and in need, and caused damage to more than 11,000 buildings in the region, out of which more than 6,000 were found to be uninhabitable (AFAD 2020). The case study in Balcik (2017) provides information regarding the transport network, geographical characteristics of the affected sites (e.g., elevation and proximity to the lake), disaster impact (proximity to the epicenter), and demographic information. Note that in this paper the existence of community groups is uncertain and determined after the occurrence of the disaster, which differs from Balcik (2017), where the existence of community groups is known in advance. Moreover, the concepts of clustering and inaccessibility of sites are specific to the case study in this paper. Other information, regarding the considered community groups, their possible locations, and clustering factors, was created for this study and is explained below.
Target community groups We consider the following three community groups:
\(g_1\) - Internally displaced people Those who were forced to leave their homes due to reasons such as damaged buildings and fear of aftershocks.
\(g_2\) - Injured people Those who require medical attention in the immediate aftermath of a disaster.
\(g_3\) - Disabled people Those who usually need special attention during disaster relief and cannot move easily.
These community groups were chosen based on a review of reports and studies that describe the situation right after the 2011 Van earthquake (Zare and Nazmazar 2013; Platt and Drinkwater 2016), as well as practical reports that define the most critically (and most likely) affected groups after the occurrence of an earthquake (ACAPS 2011b).
Mapping of community groups As mentioned earlier, determining the existence of community groups within sites involves uncertainty due to the lack of precise information in the early stages after a disaster strikes. Secondary data is one of the main sources for approximating the likelihood of finding a specific community group at a specific site. This approximation yields the parameter \(\alpha _{ig}\) for the Bernoulli trial, which represents the probability of group g being available at site i. Table 3 presents the generated parameters for the Bernoulli trial (i.e., Bernoulli (\(\alpha _{ig}\))). For \(g_1\) and \(g_2\) (displaced and injured people), we assume that proximity to the epicenter of the earthquake increases the damage to buildings and consequently increases the chance of finding more displaced and injured people. For \(g_3\) (disabled people), we use the demographic aspects of the region provided by Balcik (2017), which categorize the population of disabled people into the three groups of low, medium and high.
Table 3 Parameters for Bernoulli trial used to represent availability of community groups
Clusters The affected sites are dispersed across a rural region, with site populations between 112 and 20,000. Other characteristics of the affected region, such as geographic aspects and disaster impact, can be used as stratification factors for making clusters. Deciding how detailed the stratification must be depends on how differently the disaster impacts the region. We consider four clusters based on the available data regarding the geographical characteristics and the impact of the 2011 Van earthquake. Table 4 shows the factors and number of sites considered for each cluster, and Fig. 5 illustrates the case study network.
Table 4 Clusters considered for the 2011 earthquake in Van, Turkey
The case study network representing affected sites and clusters considered for the Van earthquake; adapted from Noyan et al. (2016)
Community assessment time and travel time As indicated in Sect. 3, travel time and community assessment time are both considered uncertain parameters and can increase up to a certain fraction of their nominal values (i.e., (1+U)\(\times \) nominal value). Therefore, when we change the level of uncertainty (i.e., set different values of U), this refers only to the deviation in the nominal values of travel time and community assessment time. The nominal values of road travel times for this network are obtained from Noyan et al. (2016). The nominal value for assessing each community is taken to be 45 minutes (based on the approximate time provided in Garfield (2011)). We consider a left triangular distribution to generate the realizations of both community assessment time and travel time in our simulation model. The parameters of a left triangular distribution are the lower limit (minimum), the best guess (mode) and the upper limit (maximum), where the minimum is equal to the mode. Figure 6 represents the components of a left triangular distribution.
Components of a left triangular distribution for generating the realization of travel time and community assessment time
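Because the minimum coincides with the mode, the density decreases linearly from the nominal value to the maximum, so the CDF can be inverted in closed form: with lower limit \(a\) (the nominal value) and upper limit \(b = (1+U)\times a\), the CDF is \(F(x) = 1 - ((b-x)/(b-a))^2\), giving the draw \(x = b - (b-a)\sqrt{1-u}\) for \(u \sim \text{Uniform}(0,1)\). The following sketch assumes inverse-transform sampling; the paper does not state which sampling routine was actually used.

```java
import java.util.Random;

/** Illustrative sketch: inverse-CDF draw from a left triangular distribution
 *  whose minimum equals its mode (the nominal value) and whose maximum is
 *  (1 + U) * nominal, as used for travel and community assessment times. */
public class LeftTriangular {

    public static double sample(double nominal, double uncertaintyLevel, Random rng) {
        double min = nominal;                              // lower limit = mode
        double max = (1.0 + uncertaintyLevel) * nominal;   // upper limit
        // For mode = min, F(x) = 1 - ((max - x)/(max - min))^2; invert F at u.
        return max - (max - min) * Math.sqrt(1.0 - rng.nextDouble());
    }
}
```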
Site accessibility We assume the chance of facing inaccessibility increases as the assessment team gets closer to the earthquake epicenter. Therefore, the probability of facing inaccessibility is defined analogously to the availability of community groups \(g_1\) and \(g_2\), i.e., as a Bernoulli trial with the parameters provided in Table 3.
Table 5 provides the other parameters considered. In total, we obtained 2,520 instances.
Table 5 Other parameters
As mentioned earlier, the main performance measures in this study are based on the CR of community groups. The CR of each community group in one simulation run is calculated with the following formula:
$$\begin{aligned} CR_g = \frac{\text {number of times group } g \text { is visited}}{\text {total expected number of times group } g \text { exists in the network } \left( \sum \nolimits _{i=1}^{N}{\alpha _{ig}}\right) } \end{aligned}$$
Using the concept of CR, we define the following KPIs:
Average Coverage Ratio (ACR): ACR is the average of coverage ratios of community groups (\(g_1, g_2\), and \(g_3\)) in one simulation run.
Minimum Coverage Ratio (MCR): MCR is the minimum of coverage ratios of community groups (\(g_1, g_2\) and \(g_3\)) in one simulation run.
Coefficient of Variation of Coverage Ratios (CVCR): CVCR is the coefficient of variation of the coverage ratios (the ratio of their standard deviation to their average) in one simulation run, given by the formula below:
$$\begin{aligned} CVCR = \frac{SD(CR_{g_1},CR_{g_2},CR_{g_3})}{ACR} \end{aligned}$$
Note that since the network is divided into different clusters, the CR and, correspondingly, the other KPIs can be calculated for each cluster separately. We calculate CVCR per cluster, which can be used to show how balanced the CRs are within each cluster.
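The KPI definitions translate directly into code. A minimal sketch follows; note that we assume the sample standard deviation in CVCR, since the paper does not specify which variant is used.

```java
/** Illustrative sketch: ACR, MCR, and CVCR from the per-group coverage
 *  ratios of one simulation run, e.g., cr = {CR_g1, CR_g2, CR_g3}. */
public class CoverageKpis {

    public static double acr(double[] cr) {
        double sum = 0.0;
        for (double c : cr) sum += c;
        return sum / cr.length;                      // average coverage ratio
    }

    public static double mcr(double[] cr) {
        double min = Double.POSITIVE_INFINITY;
        for (double c : cr) min = Math.min(min, c);
        return min;                                  // minimum coverage ratio
    }

    public static double cvcr(double[] cr) {
        double mean = acr(cr), ss = 0.0;
        for (double c : cr) ss += (c - mean) * (c - mean);
        double sd = Math.sqrt(ss / (cr.length - 1)); // sample standard deviation
        return sd / mean;                            // coefficient of variation
    }
}
```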
Each instance is run with 300 replications, and the average results of ACR, MCR, and CVCR are reported. Simulation algorithms were implemented in AnyLogic 7.2.0, and all test runs were performed on an Intel Core i5-5300U CPU with 12.0 GB RAM under MS-Windows 10. The optimization model (Appendix A) is coded in Java, and CPLEX 12.6.1 is used to solve the instances.
We divide the computational results into three sections. In Sect. 5.3.1, we analyze the effect of the site selection methods by comparing the performance of Heuristics A and B, which apply different site selection methods but do not update routes. To analyze the impact of different route update rules in case of inaccessibility and unavailability, in Sect. 5.3.2 we compare Heuristics C and D, which implement the same site selection method but different rules for updating the routes. In Sect. 5.3.3, we compare the performance of our best heuristic with the solution obtained by an exact optimization procedure.
Heuristic A vs Heuristic B (random vs community-based site selection)
In random site selection (Heuristic A), the assumption is that decision-makers do not have enough time to gather information or lack access to data regarding the location of community groups. In community-based site selection (Heuristic B), decision-makers spend some time before starting the assessment analyzing secondary data, which yields approximate information about the location of community groups; based on this information, they decide where to go in order to cover more community groups. Table 6 shows that Heuristic B performs better than Heuristic A in terms of ACR (on average 32 percent improvement) and MCR (on average 38 percent improvement), meaning teams can visit a higher number of community groups with the same amount of resources. Nevertheless, as Fig. 7 shows, this improvement decreases when the level of uncertainty (U) with respect to travel time and community assessment time increases. That is, while planning the field visit, where time is of vital importance, even if the assessment teams spend time gathering more information regarding the existence of community groups, other uncertainties such as road conditions can erode the expected gains.
Table 6 Evaluation of the performance of Heuristics A and B
Moreover, in addition to achieving higher coverage ratios, Heuristic B has a lower CVCR than Heuristic A. That means that, if we assume community groups have the same priority for the assessment teams, Heuristic B provides more balanced results than Heuristic A, both across the whole network and within every cluster.
Heuristic C vs Heuristic D (Replace-Skip vs Replace-Insert)
In Heuristics C and D, the assessment teams update their pre-planned routes. In Heuristic C, the assessment teams react to site inaccessibility (see Algorithm 4 for the procedure), i.e., where possible, an inaccessible site is replaced with another if it falls within a specified radius (r) from the inaccessible site. In Heuristic D, in addition to inaccessibility, the assessment teams react to community group unavailability such that they insert suitable site(s) in their original plan (see Algorithm 5 for the procedure).
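As an illustration of the replacement step in Heuristic C, the sketch below selects the nearest accessible, unvisited site within radius r of the inaccessible one. Since Algorithm 4 is not reproduced here, the nearest-site tie-breaking rule is our assumption, and all helper inputs are hypothetical.

```java
import java.util.List;

/** Illustrative sketch of the Replace-Skip rule: when a planned site turns
 *  out to be inaccessible, look for a replacement within radius r; if none
 *  exists, the site is simply skipped. All inputs are assumed helpers. */
public class ReplaceSkip {

    /** Returns the index of the replacement site, or null to skip. */
    public static Integer findReplacement(int blockedSite, List<Integer> unvisited,
                                          boolean[] accessible, double[][] dist,
                                          double radius) {
        Integer best = null;
        for (int j : unvisited) {
            if (!accessible[j] || dist[blockedSite][j] > radius) continue;
            if (best == null || dist[blockedSite][j] < dist[blockedSite][best]) {
                best = j;   // keep the nearest accessible candidate so far
            }
        }
        return best;
    }
}
```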
Table 7 presents the solutions of Heuristics C and D. The first observation is that, in general, these two heuristics achieve better coverage than the two heuristics discussed previously, which shows that updating the routes improves the results. This stems from the fact that when assessment teams face inaccessibility and unavailability and do not replace or insert sites, their field visit plan, in many cases, finishes before the total available time period (i.e., \(T_{max}\)) has elapsed. Heuristics C and D utilize this extra time.
Impact of uncertainty level on ACR for Heuristics A and B
Moreover, as observed in Table 7, at lower uncertainty levels (less than 0.2), Heuristic D performs better than Heuristic C in terms of ACR and MCR. CVCRs are also slightly lower for Heuristic D. That means that updating planned routes based on the number of visited community groups has a positive effect on coverage ratios at lower uncertainty levels. Note that when we increase the threshold level (i.e., \(\tau =2\) or more), there is no significant difference between Heuristics C and D, since it is rare for teams to fall so far behind the original plan.
At higher uncertainty levels, we see an impact similar to that discussed in Sect. 5.3.1, i.e., the advantage of Heuristic D over Heuristic C decreases as the uncertainty level increases (see Fig. 8). In other words, at higher uncertainty levels, using more complex methods does not lead to a significant improvement in the results.
Another observation concerns the effect of the allowed radius for a detour (i.e., r) on ACR and MCR. Increasing r expands the set of options to choose from when the assessment teams need to find a replacement, but it also causes more deviation from the originally planned route. The outcome of these two opposing effects is presented in Table 7. We see that increasing r, which steadily increases the total distance traveled, does not necessarily improve ACR and MCR and in some cases even leads to lower values. For example, in Heuristic D, for instances with U = 0, |K| = 3, \(T_{max}\) = 40, \(\tau \) = 1 and \(\rho \) = 2, when r varies from 10 to 40 km, the total distance traveled increases from 479 to 559 km, but the ACR decreases from 0.241 to 0.231 and the MCR from 0.203 to 0.197.
Table 7 Evaluation of the performance of Heuristics C and D
Impact of uncertainty and radius on ACR in Heuristics C and D
Improvement in pre-simulation computations
In this section, we investigate how we can further improve the results, provided that more sophisticated computational tools and resources are available to select sites and determine routes in the initial planning phase. In Fig. 4, we showed that site selection and routing decisions are calculated separately in pre-simulation computations using the proposed heuristics. For a similar problem, Balcik (2017) proposed a MIP model called the Selective Assessment Routing Problem (SARP), in which the site selection and routing decisions are made in an integrated manner. The SARP considers a coverage-type objective to ensure balanced coverage of the community groups, achieved by maximizing the minimum coverage ratio across the community groups. The main constraints of the SARP limit the number of routes to the available number of assessment teams and ensure that each route is completed within the allowed duration.
Table 8 Evaluation of the performance of Heuristics D and D*
We use a modified version of the original SARP, feed its results into our simulation model, and compare them with our best heuristic (i.e., Heuristic D). We call the new heuristic Heuristic D*. The original SARP model is modified so that it adheres to the same assumptions as the other heuristics. The main modifications are as follows (see Appendix A for the modified SARP formulation):
Uncertainty with respect to the existence of community groups: In the original SARP, the existence of community groups is known in advance; therefore, \(\alpha _{ig}\) is either 0 or 1. In the modified SARP, \(\alpha _{ig}\) represents the expected value of visiting community group g at site i.
Minimum target number of visits to each community group within each cluster: We add a new constraint to the original SARP to ensure that each community group is visited a certain number of times (\(l_{gc}\)) within each cluster.
This comparison helps decision-makers see to what extent they could improve their plan if more advanced computational resources were available. Table 8 presents the comparison of Heuristic D with Heuristic D*. We observe that at lower uncertainty levels, Heuristic D* performs better than Heuristic D. However, the advantage of Heuristic D* decreases when the level of uncertainty increases; in other words, the results of Heuristic D* deteriorate faster as the uncertainty level rises. This deterioration can be seen in Fig. 9. We also see that the average values of ACR and MCR of Heuristic D* (0.527 and 0.454, respectively) are relatively close to the average values of ACR and MCR of Heuristic D (0.456 and 0.391, respectively). Note that in this section we only consider a network of 30 nodes (clusters 1 and 2). The reason for selecting this subset of affected sites is the limitation we face in solving the MIP optimally.
Gralla and Goentzel (2018) recommend that practice-driven heuristics be used as a planning approach when optimization is not feasible. Our results also support this idea. As it is based on practice-driven methods and rules, Heuristic D provides reasonable results compared to Heuristic D* while being easier to implement in practice. However, the main challenge of our practice-driven heuristics is their greedy nature: they do not usually produce an optimal solution, but they may find solutions that approximate a globally optimal solution in a reasonable amount of time.
In this study, we provide methods and pre-defined rules to assist and improve the decision-making process in the field visit planning of the RNA stage of humanitarian relief, where assessment teams aim to choose and visit sites that involve different community groups. Reviewing practical reports shows that time and resources are limited, which hampers the use of sophisticated decision-making methods. For this purpose, we developed different heuristic algorithms, inspired by practical humanitarian reports, that are easily implementable in practice. Each combination of methods and rules for site selection and routing decisions was named a heuristic. We evaluated the performance of these heuristics by varying the critical parameters and enhanced our best heuristic by using an optimization approach. The inherent uncertainty in various input data, such as travel time, community assessment time, inaccessibility of sites, and unavailability of community groups, is an important yet challenging factor in planning the field visit. We incorporated these uncertain parameters within a simulation model, which enabled us to analyze our results under different levels of uncertainty and in a reasonable amount of computational time. Our work consists of two central parts:
Impact of uncertainty level on ACR for Heuristics D and D*
First, there is a great need for decision support in practice, given the current use of less systematic procedures in the field. In this situation, even applying simple methods, which can be implemented using accessible software (e.g., Microsoft Excel) and a paper map of the field, can improve the assessment plan significantly by increasing the accuracy of the plan and the efficiency of the available resources. Using these methods in practice, and letting practitioners observe their performance, paves the way for decision-makers to trust more complex optimization-based approaches.
Second, we incorporated all the heuristics in a simulation model. Designing this model was in line with the primary aim of this paper, which was to show the trade-off between using different methods and having different levels of resources, such as time and the number of assessment teams. We investigated the impact of various uncertain factors, ranging from accessibility of sites and availability of community groups to travel time and assessment time, and showed how much these uncertainties can degrade the results in terms of community coverage ratios.
Managerial implications
As indicated by the results of the computational experiments, choosing which heuristic to follow for field visit planning has a substantial impact on achieving higher community group coverage. Selecting sites based on approximate knowledge of the existence of community groups leads to significantly better results than selecting sites randomly. This, of course, comes at the cost of gathering more information related to the location of community groups. Further, updating the original routes in case of inaccessibility and unavailability also improves the performance of the field visit plan. While updating the routes, it is important to take the allowed radius into account: increasing the radius increases the total distance traveled, but beyond a certain limit this extra distance does not necessarily increase coverage ratios and, in some cases, even decreases them.
Another important factor is uncertainty in travel time and community assessment time. These uncertainties adversely affect the results, no matter which heuristic is applied. However, the effects are not the same across heuristics. We see that, in general, the results of more sophisticated heuristics, i.e., those requiring more information for planning the field visit, deteriorate more when the level of uncertainty increases. This becomes especially important when deciding whether to use a more advanced heuristic instead of a simpler one in a highly uncertain environment, where in the end it will not provide significantly different results.
Social implications
The number of natural disasters is growing due to the increase in extreme weather events such as storms and floods (Hoeppe 2016). Furthermore, the negative impacts of geophysical events such as earthquakes and tsunamis have been increasing due to socio-economic and demographic factors such as population growth and urbanization (Hoeppe 2016). Considering this devastating impact of disasters, enhancing the performance of humanitarian operations is becoming increasingly important. After the occurrence of a disaster, humanitarian agencies conduct various operations to assist the affected people. While there exists a large body of literature on last mile distribution problems, needs assessment operations have received little attention (Pamukcu and Balcik 2020). In fact, most studies focusing on relief distribution assume that the needs of different affected people are already known or can be estimated, and needs assessment operations are not specifically addressed (de la Torre et al. 2012). Providing timely and accurate information at the RNA stage is of vital importance for the success of disaster response, matching needs with the available resources effectively. A successful needs assessment also saves precious resources at a time of great need. This paper improves disaster response by analyzing the performance of existing methods in the literature to assist decision-makers in selecting the most suitable heuristic at the RNA stage. Improving disaster response contributes to using the limited financial resources more efficiently and effectively; therefore, help can be provided to more people.
Research implications
Disasters are hard to anticipate with respect to their occurrence and consequences. Thus, humanitarian organizations often have to make decisions and plan their operations in a highly uncertain environment (Liberatore et al. 2013). The RNA stage, which should be carried out immediately after the onset of a disaster to investigate the disaster's impact on affected communities, also involves various uncertain factors. In this paper, we evaluated the impact of various uncertain factors of the RNA stage within a simulation environment. These factors were accessibility of sites, availability of community groups, travel time, and assessment time. The proposed evaluation environment showed that these uncertainties significantly impact the field visit plan, i.e., site selection and routing decisions. Most of the studies addressing the RNA stage assume a deterministic environment. To the best of our knowledge, Balcik and Yanıkoğlu (2020) is the only study in the RNA literature that considers travel time as an uncertain factor. Our numerical results reveal that optimization models do have the potential to improve decision-making, but they depend heavily on the quality of the input data. We showed that adopting a deterministic model such as the modified SARP cannot effectively address a highly uncertain environment. The humanitarian context has unique characteristics, and it is not easy to adapt a model to its conditions; moreover, the adapted model should be carefully evaluated. On the one hand, there is criticism that current models are too complicated for practitioners. On the other hand, current models need to incorporate important real-world considerations such as a wide range of uncertain factors.
One of the main limitations of this study is that we evaluate the proposed methods' performance based on a single case study specific to an earthquake setting. Also, in the present case study, we assume the likelihood of finding displaced and injured people is correlated with the location of the earthquake's epicenter, i.e., these groups are more available at sites closer to the epicenter; this assumption might not hold in other settings. Furthermore, in our simulation algorithm, we assume that once the assessment teams finalize the assessment of one site, they learn whether the next site on their planned route is accessible. This assumption reflects a pessimistic approach, in which the communication infrastructure is impacted by the disaster and the assessment teams receive updated information only when they get closer to the affected area, observe the situation, and talk to local people. However, the assessment teams may receive updated information at other points of the assessment, e.g., at the beginning of planning or while traveling in the field.
Needs assessment operations have not been studied extensively in the humanitarian logistics field. This study was an initial step toward providing an evaluation of various methods for decision-makers, and there are several avenues for future research. First, in this paper we assumed assessment is done by a single organization. Future research can address this by considering a cooperative multi-sector assessment with other agencies, in which agencies share resources and information. Furthermore, the heuristics we developed are based on, and limited to, the general principles of reports available from humanitarian agencies. Therefore, future studies, depending on the availability of information, can formulate other heuristics and compare the results. This can also go in the direction of considering other disaster settings; for instance, needs assessment for a flood might differ from that for an earthquake. Finally, we showed that deterministic models cannot address the inherent uncertainties well, and models that account for these circumstances need to be developed. Balcik and Yanıkoğlu (2020) is a good first step toward addressing uncertainty: it considers travel time as an uncertain parameter in post-disaster networks and presents a robust optimization model to tackle the uncertainty. Nevertheless, other uncertain factors, such as inaccessibility of sites and unavailability of community groups, need to be investigated further.
ACAPS (2011a). Joint rapid assessment of the northern governorates of Yemen. https://www.humanitarianresponse.info/sites/www.humanitarianresponse.info/files/assessments/ACAPS_JRA%20ERG%20Consortion_Sept2011.pdf. Accessed 28 June 2020.
ACAPS (2011b). Technical brief: Purposive sampling and site selection in phase 2. https://www.humanitarianresponse.info/sites/www.humanitarianresponse.info/files/documents/files/Purposive_Sampling_Site_Selection_ACAPS.pdf. Accessed 28 June 2020.
ACAPS (2013). The good enough guide to needs assessment. https://reliefweb.int/sites/reliefweb.int/files/resources/h-humanitarian-needs-assessment-the-good-enough-guide.pdf. Accessed 28 June 2020.
ACAPS (2014). Secondary data review: Sudden onset natural disasters. https://resourcecentre.savethechildren.net/node/13734/pdf/secondary_data_review-sudden_onset_natural_disasters_may_2014.pdf. Accessed 28 June 2020.
AFAD (2020). https://en.afad.gov.tr/disaster-report---van-earthquake. Accessed 28 June 2020.
Altay, N., & Green, W. G., III. (2006). OR/MS research in disaster operations management. European Journal of Operational Research, 175(1), 475–493.
Anaya-Arenas, A. M., Renaud, J., & Ruiz, A. (2014). Relief distribution networks: A systematic review. Annals of Operations Research, 223(1), 53–79.
Arii, M. (2013). Rapid assessment in disasters. Japan Medical Association Journal, 56(1), 19–24.
Balcik, B. (2017). Site selection and vehicle routing for post-disaster rapid needs assessment. Transportation Research Part E: Logistics and Transportation Review, 101, 30–58.
Balcik, B., & Yanıkoğlu, I. (2020). A robust optimization approach for humanitarian needs assessment planning under travel time uncertainty. European Journal of Operational Research, 282(1), 40–57.
Barnes, A. (2016). Making intelligence analysis more intelligent: Using numeric probabilities. Intelligence and National Security, 31(3), 327–344.
Bartholdi, J. J., III., Platzman, L. K., Collins, R. L., & Warden, W. H., III. (1983). A minimal technology routing system for meals on wheels. Interfaces, 13(3), 1–8.
Benini, A. (2012). A computer simulation of needs assessments in disasters. https://www.acaps.org/sites/acaps/files/resources/files/a_computer_simulation_of_needs_assessments_in_disasters-the_impact_of_sample_size_logistical_difficulty_and_measurement_error_november_2012.pdf. Accessed 28 June 2020.
Besiou, M., & Van Wassenhove, L. N. (2020). Humanitarian operations: A world of opportunity for relevant and impactful research. Manufacturing and Service Operations Management, 22(1), 135–145.
Bruni, M., Khodaparasti, S., & Beraldi, P. (2020). The selective minimum latency problem under travel time variability: An application to post-disaster assessment operations. Omega, 92, 102154.
Chao, I.-M., Golden, B. L., & Wasil, E. A. (1996). The team orienteering problem. European Journal of Operational Research, 88(3), 464–474.
Darcy, J. & Hofmann, C. (2003). According to need?: needs assessment and decision-making in the humanitarian sector. https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/285.pdf. Accessed 28 June 2020.
Davidson, R. A., & Nozick, L. K. (2018). Computer simulation and optimization. In Handbook of disaster research (pp. 331–356). Springer.
de Goyet, V., Bittner, P., et al. (1991). Should disaster relief strike: be prepared! World Health.
de la Torre, L. E., Dolinskaya, I. S., & Smilowitz, K. R. (2012). Disaster relief routing: Integrating research and practice. Socio-Economic Planning Sciences, 46(1), 88–97.
de Vries, H., & Van Wassenhove, L. N. (2017). Evidence-based vehicle planning for humanitarian field operations.
EPRS (2019). Technological innovation for humanitarian aid and assistance. https://www.europarl.europa.eu/RegData/etudes/STUD/2019/634411/EPRS_STU(2019)634411_EN.pdf. Accessed 14 April 2021.
Fairchild, K. W., Misra, L., & Shi, Y. (2016). Using triangular distribution for business and finance simulations in excel. Journal of Financial Education, 42(3–4), 313–336.
Fikar, C., Gronalt, M., & Hirsch, P. (2016). A decision support system for coordinated disaster relief distribution. Expert Systems with Applications, 57, 104–116.
Fikar, C., Hirsch, P., & Nolz, P. C. (2018). Agent-based simulation optimization for dynamic disaster relief distribution. CEJOR, 26(2), 423–442.
Galindo, G., & Batta, R. (2013). Review of recent developments in OR/MS research in disaster operations management. European Journal of Operational Research, 230(2), 201–211.
Garfield, R. (2011). Common needs assessments and humanitarian action. https://www.files.ethz.ch/isn/128805/networkpaper069.pdf. Accessed 28 June 2020.
Gillett, B. E., & Miller, L. R. (1974). A heuristic algorithm for the vehicle-dispatch problem. Operations Research, 22(2), 340–349.
Glock, K., & Meyer, A. (2020). Mission planning for emergency rapid mapping with drones. Transportation Science, 54(2), 534–560.
Gralla, E., & Goentzel, J. (2018). Humanitarian transportation planning: Evaluation of practice-based heuristics and recommendations for improvement. European Journal of Operational Research, 269(2), 436–450.
Hairapetian, A., Alexanian, A., Férir, M., Agoudjian, V., Schmets, G., Dallemagne, G., et al. (1990). Drug supply in the aftermath of the 1988 Armenian earthquake. The Lancet, 335(8702), 1388–1390.
Hoeppe, P. (2016). Trends in Weather Related Disasters-Consequences for Insurers and Society. Weather and Climate Extremes, 11, 70–79.
Huang, M., Smilowitz, K. R., & Balcik, B. (2013). A continuous approximation approach for assessment routing in disaster relief. Transportation Research Part B: Methodological, 50, 20–41.
IASC (2000). Initial rapid assessment (IRA): guidance notes. https://www.unscn.org/web/archives_resources/files/IRA_guidance_note.pdf. Accessed 28 June 2020.
IASC (2012). Multi-cluster/sector initial rapid assessment (MIRA). https://www.unocha.org/sites/dms/CAP/mira_final_version2012.pdf. Accessed 28 June 2020.
IFRC (2008). Guidelines for assessment in emergencies, Geneva, Switzerland. https://www.icrc.org/en/doc/assets/files/publications/icrc-002-118009.pdf. Accessed 28 June 2020.
IFRC (2013). World disasters report: Focus on technology and the future of humanitarian action: International Federation of Red Cross and Red Crescent Societies. https://www.ifrc.org/PageFiles/134658/WDR%202013%20complete.pdf. Accessed 28 June 2020.
Kent, S. (1964). Words of estimative probability. https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/sherman-kent-and-the-board-of-national-estimates-collected-essays/6words.html. Accessed 29 June 2020.
Ketokivi, M., & Choi, T. (2014). Renaissance of case research as a scientific method. Journal of Operations Management, 32(5), 232–240.
Kizilay (2021). https://www.kizilay.org.tr/Upload/Dokuman/Dosya/1353075061_web.xVan_Faaliyet_Raporu.Son.pdf. Accessed 05 February 2021.
Knott, R. (1988). Vehicle scheduling for emergency relief management: A knowledge-based approach. Disasters, 12(4), 285–293.
Kunz, N., Van Wassenhove, L. N., Besiou, M., Hambye, C., & Kovacs, G. (2017). Relevance of humanitarian logistics research: Best practices and way forward. International Journal of Operations and Production Management, 37(11), 1585–1599.
Li, X., Liu, X., Ma, H., & Hu, S. (2020). Integrated routing optimization for post-disaster rapid-detailed need assessment. International Journal of General Systems, 49(5), 521–545.
Liberatore, F., Pizarro, C., de Blas, C. S., Ortuño, M., & Vitoriano, B. (2013). Uncertainty in humanitarian logistics for disaster management. a review. In Decision aid models for Disaster Management and Emergencies (pp. 45–74). Springer.
Lillibridge, S. R., Noji, E. K., & Burkle, F. M., Jr. (1993). Disaster assessment: The emergency health evaluation of a population affected by a disaster. Annals of Emergency Medicine, 22(11), 1715–1720.
Lund, H., Arler, F., Østergaard, P., Hvelplund, F., Connolly, D., Mathiesen, B., & Karnøe, P. (2017). Simulation versus optimisation: Theoretical positions in energy system modelling. Energies, 10(7), 840.
Mishra, D., Kumar, S., & Hassini, E. (2019). Current trends in disaster management simulation modelling research. Annals of Operations Research, 283(1), 1387–1411.
Nagendra, N. P., Narayanamurthy, G., & Moser, R. (2020). Management of humanitarian relief operations using satellite big data analytics: The case of kerala floods. In Annals of operations research (pp. 1–26).
Noyan, N., Balcik, B., & Atakan, S. (2016). A stochastic optimization model for designing last mile relief networks. Transportation Science, 50(3), 1092–1113.
Nurcahyo, G. W., Alias, R. A., Shamsuddin, S. M., & Sap, M. N. M. (2002). Sweep algorithm in vehicle routing problem for public transport. Jurnal Antarabangsa Teknologi Maklumat, 2, 51–64.
Oruc, B. E., & Kara, B. Y. (2018). Post-disaster assessment routing problem. Transportation Research Part B: Methodological, 116, 76–102.
Pamukcu, D., & Balcik, B. (2020). A multi-cover routing problem for planning rapid needs assessment under different information-sharing settings. OR Spectrum, 42(1), 1–42.
Platt, S., & Drinkwater, B. D. (2016). Post-earthquake decision making in Turkey: Studies of Van and Izmir. International Journal of Disaster Risk Reduction, 17, 220–237.
Rodríguez-Espíndola, O., Albores, P., & Brewster, C. (2018). Decision-making and operations in disasters: Challenges and opportunities. International Journal of Operations & Production Management, 2, 9–64.
Stein, W. E., & Keblis, M. F. (2009). A new method to simulate the triangular distribution. Mathematical and Computer Modelling, 49(5–6), 1143–1147.
USAID (2014). A rapid needs assessment guide: For education in countries affected by crisis and conflict. https://www.usaid.gov/sites/default/files/documents/2155/USAID%20RNAG%20FINAL.pdf. Accessed 28 June 2020.
Waring, S. C., Reynolds, K. M., D'Souza, G., & Arafat, R. R. (2002). Rapid assessment of household needs in the houston area after tropical storm Allison. In Disaster management and response: DMR: an official publication of the Emergency Nurses Association (pp. 3–9).
Yu, J., Pande, A., Nezamuddin, N., Dixit, V., & Edwards, F. (2014). Routing strategies for emergency management decision support systems during evacuation. Journal of Transportation Safety and Security, 6(3), 257–273.
Zare, M., & Nazmazar, B. (2013). Van, Turkey earthquake of 23 October 2011, Mw 7.2: An overview on disaster management. Iranian Journal of Public Health, 42(2), 134.
Zhu, M., Du, X., Zhang, X., Luo, H., & Wang, G. (2019). Multi-uav rapid-assessment task-assignment problem in a post-earthquake scenario. IEEE access, 7, 74542–74557.
Zhu, M., Zhang, X., Luo, H., Wang, G., & Zhang, B. (2020). Optimization dubins path of multiple uavs for post-earthquake rapid-assessment. Applied Sciences, 10(4), 1388.
Open access funding provided by Vienna University of Economics and Business (WU).
Institute for Transport and Logistics Management, WU (Vienna University of Economics and Business), Welthandelsplatz 1, 1020, Vienna, Austria
Mohammadmehdi Hakimifar, Vera Hemmelmayr & Tina Wakolbinger
Industrial Engineering Department, Ozyegin University, Istanbul, Turkey
Burcu Balcik
Institute for Production Management, WU (Vienna University of Economics and Business), Welthandelsplatz 1, 1020, Vienna, Austria
Christian Fikar
Chair of Food Supply Chain Management, Faculty of Life Sciences, University of Bayreuth, Fritz-Hornschuch-Straße 13, 95326, Kulmbach, Germany
Correspondence to Mohammadmehdi Hakimifar.
Appendix A Integrated site selection and routing: Modified SARP Model
Balcik (2017) proposed the Selective Assessment Routing Problem (SARP), a mathematical formulation for a purposive sampling strategy. The SARP determines site selection and vehicle routing decisions simultaneously. It considers a coverage-type objective in order to ensure balanced coverage of the selected community groups, achieved by defining an objective that maximizes the minimum coverage ratio across community groups. The purpose of this objective is to ensure that each community group is observed at least once and, if the total available time permits, observed multiple times. See Balcik (2017) for detailed information.
Below, we present a modified version of the SARP. The modified SARP differs from the original in two ways. First, in the modified version, the existence of community groups (\(\alpha _{ig}\)) is assumed to be uncertain. Second, the network in the modified SARP is divided into a number of clusters, and each community group must be visited at least \(l_{gc}\) times in each cluster; to ensure this, we add constraint (8) to the original formulation.
The following notation is used to formulate the modified SARP Model:
Sets/indices
N = set of affected sites; nodes are indexed by i, j \(\in \) \(N_0\)
\(N_{0}\) = N \(\cup \) \(\{0\}\) where \(\{0\}\) is the depot
K = set of assessment teams indexed by k \(\in \) K
G = set of community groups indexed by g \(\in \) G
C = set of clusters indexed by c \(\in \) C
Parameters
\(\alpha _{ig}\) = expected value of visiting group g when we visit node i
\(\tau _{g}\) = sum of the expected values of visiting group g in the whole network
\(l_{gc}\) = target number of visiting community group g within cluster c
\(\beta _{ic}\) = 1 if node i belongs to the cluster c, and 0 otherwise
\(t_{ij}\) = travel time between nodes i and j
\(s_{i}\) = estimated assessment time at site i
\(T_{max}\) = total available time for each team
The decisions to be made are represented by the following sets of variables:
Decision Variables
\(x_{ijk}\) = 1 if team k visits site j after site i, and 0 otherwise
\(y_{ik}\) = 1 if team k visits site i, and 0 otherwise
\(u_i\) = sequence in which site i is visited
Z = minimum expected coverage ratio
Mathematical formulation
$$\begin{aligned}&{\text {maximize}}&Z,&\end{aligned}$$
$$\begin{aligned}&\text {s.t.}&Z \le \sum _{i \in N}\sum _ {k \in K} \alpha _{ig}y_{ik}/\tau _g&\forall g \in G,&\end{aligned}$$
$$\begin{aligned}&&\sum _{j\in N_0} x_{ijk}= y_{ik}&\forall i\in N_0,\forall k \in K,&\end{aligned}$$
$$\begin{aligned}&&\sum _{j\in N_0} x_{jik}= y_{ik}&\forall i\in N_0,\forall k \in K,&\end{aligned}$$
$$\begin{aligned}&&\sum _{k\in K} y_{ik}\le 1&\forall i\in N,&\end{aligned}$$
$$\begin{aligned}&&\sum _{k\in K} y_{0k}\le |K| ,&\end{aligned}$$
$$\begin{aligned}&&\sum _{i \in N_0}\sum _ {j \in N_0} (t_{ij}+s_i)x_{ijk}\le T_{max}&\forall k \in K,&\end{aligned}$$
$$\begin{aligned}&&\sum _{i \in N}\sum _ {k \in K} \alpha _{ig}\beta _{ic}y_{ik}\ge l_{gc}&\forall c \in C,\forall g \in G,&\end{aligned}$$
$$\begin{aligned}&&u_i-u_j+Nx_{ijk}\le N-1&\forall i \in N,\forall j \in N(i\ne j),\forall k\in K,&\end{aligned}$$
$$\begin{aligned}&&Z \ge 0,&\end{aligned}$$
$$\begin{aligned}&&u_i \ge 0&\forall i \in N,&\end{aligned}$$
$$\begin{aligned}&&x_{ijk} \in \{0,1\}&\forall i \in N_0,\forall j \in N_0,\forall k\in K,&\end{aligned}$$
$$\begin{aligned}&&y_{ik} \in \{0,1\}&\forall i \in N_0,\forall k\in K.&\end{aligned}$$
The objective function (1) maximizes the minimum coverage ratio, which is defined by constraint (2). Constraints (3) and (4) ensure that an arc enters and leaves the depot and each selected site. Constraint (5) guarantees that each site is visited at most once. Constraint (6) limits the number of routes to the available number of assessment teams. Constraint (7) ensures that each route is completed within the allowed duration. Constraint (8) ensures that the number of selected sites to be visited within each cluster is at least equal to the minimum expected target number. Constraint (9) eliminates subtours. Constraints (10)–(13) define the domains of the variables.
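For readers wishing to reproduce the model, the partial sketch below expresses the objective (1) and constraints (2) and (7) with the CPLEX Java API used in this study. The array names, the convention that node 0 is the depot, and the omission of the remaining constraints are our simplifications, not the authors' code.

```java
import ilog.concert.IloException;
import ilog.concert.IloIntVar;
import ilog.concert.IloLinearNumExpr;
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

/** Partial sketch of the modified SARP: objective (1), coverage
 *  constraint (2), and route-duration constraint (7) only. */
public class SarpSketch {

    // n0 = |N_0| nodes (node 0 is the depot), K teams, G community groups.
    void build(int n0, int K, int G, double[][] alpha, double[] tau,
               double[][] t, double[] s, double tMax) throws IloException {
        IloCplex cplex = new IloCplex();
        IloNumVar z = cplex.numVar(0, Double.MAX_VALUE, "Z");
        IloIntVar[][][] x = new IloIntVar[n0][n0][K];
        IloIntVar[][] y = new IloIntVar[n0][K];
        for (int i = 0; i < n0; i++)
            for (int k = 0; k < K; k++) {
                y[i][k] = cplex.boolVar("y_" + i + "_" + k);
                for (int j = 0; j < n0; j++)
                    x[i][j][k] = cplex.boolVar("x_" + i + "_" + j + "_" + k);
            }
        cplex.addMaximize(z);                              // objective (1)
        for (int g = 0; g < G; g++) {                      // constraint (2)
            IloLinearNumExpr coverage = cplex.linearNumExpr();
            for (int i = 1; i < n0; i++)                   // affected sites only
                for (int k = 0; k < K; k++)
                    coverage.addTerm(alpha[i][g] / tau[g], y[i][k]);
            cplex.addLe(z, coverage);
        }
        for (int k = 0; k < K; k++) {                      // constraint (7)
            IloLinearNumExpr duration = cplex.linearNumExpr();
            for (int i = 0; i < n0; i++)
                for (int j = 0; j < n0; j++)
                    duration.addTerm(t[i][j] + s[i], x[i][j][k]);
            cplex.addLe(duration, tMax);
        }
    }
}
```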
Hakimifar, M., Balcik, B., Fikar, C. et al. Evaluation of field visit planning heuristics during rapid needs assessment in an uncertain post-disaster environment. Ann Oper Res (2021). https://doi.org/10.1007/s10479-021-04274-y
Rapid needs assessment
Selective routing
Synergistic power of genomic selection, assisted reproductive technologies, and gene editing to drive genetic improvement of cattle
Maci L. Mueller ORCID: orcid.org/0000-0002-5470-68211 &
Alison L. Van Eenennaam ORCID: orcid.org/0000-0003-1562-162X1
CABI Agriculture and Bioscience volume 3, Article number: 13 (2022)
Genetic improvement of cattle around the globe has been, and will continue to be, an important driver of animal agriculture sustainability. There are several reproductive and molecular biotechnologies that are used in genetic improvement of cattle, and their impact on the rate of genetic progress is maximized when combined synergistically in a structured breeding program with a clear breeding objective. One of the most recently developed and increasingly popular tools, gene editing, allows animal breeders to precisely add, delete, or replace letters in the genetic code so as to influence a specific trait of interest (e.g., disease resistance), in as little as one generation. However, for gene editing to be an important factor for genetic improvement, it must integrate smoothly into conventional cattle breeding programs to maintain or accelerate rates of genetic gain. This review first summarizes the current state of key reproductive and molecular biotechnologies available for the genetic improvement of cattle, and then discusses potential strategies for effectively incorporating gene editing into cattle genetic improvement programs and methods for disseminating traits improved via gene editing. Moreover, it examines how genetic improvement strategies, including the use of gene editing, will differ depending on the cattle industry sector (i.e., dairy or beef), and the region of the world in which they are being deployed.
Genetic improvement is a powerful tool for improving animal agriculture sustainability because the results are permanent and cumulative. Unlike nutritional and animal health interventions, which require continuous inputs, genetic improvements made in one generation are passed on to the next. Moreover, genetic solutions for animal health and welfare issues often require less labor and fewer material inputs than chemical or mechanical methods. For example, polled, or hornless, genetics can eliminate the need for physical dehorning of animals, which is undertaken to ensure both worker and animal safety. This can save livestock producers both time and money, in addition to addressing an animal welfare concern (Gottardo et al. 2011; Thompson et al. 2017).
Sustainable agriculture and increased production efficiency go hand-in-hand. Efficiency is defined as achieving maximum productivity with minimum waste, or in other words, producing more product with the same or even fewer resources. Livestock genetic improvement programs, beginning with selective breeding using statistical prediction methods, such as estimated breeding values (EBVs), and more recently genomic selection (GS), in combination with assisted reproductive technologies (ART) have enabled more accurate selection and intense utilization of genetically superior parents for the next generation to accelerate rates of genetic gain. Genetic gain is the amount of increased performance, or the improvement in average genetic value, in a population that is achieved annually through selection. Increased animal performance based on genetic improvement results in more product produced per animal, so fewer animals are required to meet the same amount of demand, which reduces the environmental impact per unit of livestock product. Therefore, increasing rates of genetic gain can improve livestock production efficiency and ultimately the sustainability of animal agriculture.
The power and scale of genetic improvement is well-illustrated by the increased efficiency of the United States (U.S.) dairy cattle population from 1944 to today, which now produces over 80% more milk with 65% fewer cows. This was enabled by a more than four-fold increase in milk production per cow, from 2000 kg/cow in 1944 to 10,000 kg/cow in 2017 (Capper and Cady 2019; Capper et al. 2009). It is estimated that approximately 50% of the increased productivity per animal observed can be attributed solely to the increased rate of genetic gain obtained by the widespread use of artificial insemination (AI) over natural service breeding alone (Bertolini and Bertolini, 2009). Overall, the dramatic decrease in the number of dairy cows (25.6 million to 9 million) required to meet the demand, due to increased productivity per animal largely from improved genetics, reduced the current environmental impact of a glass of milk to approximately one third of that associated with the same glass of milk in 1944 (Capper and Cady 2019; Capper et al. 2009).
In livestock breeding programs, the breeder's equation is used to measure the rate of genetic gain (ΔG) towards the breeding objective of a given production system. It consists of four components: \(\Delta G= \frac{i\times r\times{\upsigma }_{\mathrm{A}}}{L}\), where i is selection intensity (how extensively the most elite animals are used as parents of the next generation); r is selection accuracy (how well the EBV represents the true breeding value of selection candidates); σA is genetic diversity (as measured by the additive genetic standard deviation of the population); and L is the generation interval (interval length calculated as the average age of parents when progeny are born) (Lush 1937).
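As a purely illustrative numerical example (the values below are hypothetical and chosen only to show the mechanics of the equation): with selection intensity \(i = 2.0\), accuracy \(r = 0.8\), additive genetic standard deviation \({\upsigma }_{\mathrm{A}} = 1.5\) trait units, and generation interval \(L = 5\) years,

$$\begin{aligned} \Delta G= \frac{i\times r\times {\upsigma }_{\mathrm{A}}}{L} = \frac{2.0\times 0.8\times 1.5}{5} = 0.48 \text{ trait units per year.} \end{aligned}$$

Halving the generation interval to 2.5 years while holding the other components fixed would double the annual gain to 0.96 units, which is precisely the lever exploited by combining GS with ART to breed from accurately selected young animals.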
Strategies to improve rates of genetic gain in a population involve increasing the components of the breeder's equation in the numerator and decreasing the denominator, or generation interval. It is important to note that the foundation of genetic improvement is a well-structured breeding program with a clear breeding objective, and routine recording of pedigree and performance information on the population under selection. Genomic information can additionally improve the accuracy of the relationship matrix compared to pedigree information alone. Within a structured breeding program, reproductive and molecular biotechnologies, such as ART and GS, can be applied to further accelerate rates of genetic gain by influencing one or more of the components of the breeder's equation.
To increase selection intensity, ART [e.g., AI and embryo transfer (ET)] have been incorporated into cattle breeding schemes. Concurrently, the development of high-throughput genotyping of single nucleotide polymorphisms (SNPs) has enabled GS to predict the genetic merit of an animal based on its DNA (Meuwissen et al. 2001). Using GS has both improved the accuracy of selection and reduced the generation interval. Additionally, GS can provide information on traits that are recorded late in life, or that are difficult or expensive to record (García-Ruiz et al. 2016; Hayes et al. 2013; Meuwissen et al. 2013). Moreover, the benefits of each of these tools, GS and ART, are maximized when they are used synergistically to accurately select young animals, which can markedly reduce the generation interval and ultimately accelerate genetic gain (Fig. 1) (Kadarmideen et al. 2015; Loi et al. 2016).
Schematic illustrating the synergistic relationships between genomic selection (GS), assisted reproductive technologies (ART), and gene editing for the genetic improvement of cattle. The foundation of genetic improvement is a well-structured breeding program with a clear breeding objective. Within a structured breeding program, reproductive and molecular biotechnologies, such as ART and GS, can be applied to further improve rates of genetic gain by affecting one or more of the components of the breeder's equation (Lush 1937): (1) increase selection intensity (i), (2) increase selection accuracy (r), (3) decrease the generation interval (L), and (4) increase genetic variation (σA)
Genome or gene editing (GnEd) is one of the most recently developed tools for genetic improvement. This advanced biotechnology allows animal breeders to very precisely target the addition, deletion, or replacement of base pairs in the genetic code to influence traits of interest. Specifically, GnEd refers to the use of site-directed nucleases (i.e., nucleic acid cleaving enzymes) to precisely introduce double stranded breaks (DSB) in the DNA at a targeted location in the genome (Gaj et al. 2013). When the cell attempts to repair the DSB, it can result in the disruption (knockout) of a gene, or if a donor repair nucleic acid template is provided, the insertion (knock-in) of an allele or gene from the same species (intraspecies or cisgenic) or possibly a different species (interspecies or transgenic).
In cattle breeding programs, GnEd offers promising opportunities to introduce useful genetic variation from one breed of cattle to another in the absence of undesired linkage drag, or even beneficial traits from different species. Currently, GnEd research in cattle has focused on and is well-suited for improving monogenic, or Mendelian, traits. Mendelian traits are controlled by one to a few loci that each have large effects, and most are qualitative traits, such as horned/polled or coat color. Although, there are a few known single genes that have large effects on important quantitative traits. For example, a naturally occurring mutation in the myostatin (MSTN) gene present in some cattle breeds like Belgian Blue, results in a substantial increase in the quantitative trait, muscle yield (Kambadur et al. 1997; McPherron and Lee 1997). If GnEd is used to target a gene that has a large effect on a quantitative trait, like MSTN, then GnEd has the potential to increase genetic variation of that trait in the population, thus accelerating the rate of genetic gain. It should be noted that complete MSTN knockouts have also resulted in increased birth weights, which can cause dystocia issues (i.e., calving difficulties), so more precise MSTN mutations will likely be required for practical applications of this target (Proudfoot et al. 2015).
However, most of the traits that animal breeders want to improve are polygenic and quantitative (e.g., marbling, growth, feed efficiency, etc.). For these traits, quantitative genetics and GS have been, and will continue to be, the major driver for genetic improvement. Additionally, GnEd in livestock is only possible through the use of ART. Therefore, the potential of GnEd can only be fully realized when used in conjunction with ART and GS in a structured breeding program with a clear breeding objective to accelerate genetic gain by concurrently altering multiple components of the breeder's equation (Fig. 1) (Bishop and Van Eenennaam 2020; Jenko et al. 2015; McLean et al. 2020; Van Eenennaam 2017).
Given that there are a wide variety of tools for genetic improvement in cattle, this review first summarizes the current state of key reproductive and molecular biotechnologies, and then discusses their synergistic potential when employed jointly. There is a primary focus on how the increasingly popular modern biotechnology, GnEd, is being used for genetic improvement of cattle and strategies for effectively incorporating it into existing cattle breeding programs. Moreover, we discuss how genetic improvement strategies, including the use of GnEd, will differ depending on the cattle industry sector (i.e., dairy or beef) being targeted, and the region of the world in which they are being deployed.
Considerations for genetic improvement of cattle in beef versus dairy systems
Advanced reproductive and molecular biotechnologies are often easier to implement cost-effectively in the breeding pyramid of vertically integrated, "high-input" (intensive) industries. In such systems, external inputs such as supplementary feeds, veterinary medicines, and ART are relatively easy to obtain and widely used. Additionally, in vertically integrated programs, the return on investment in performance recording of each nucleus animal can be recouped through thousands, or even millions, of genetic descendants (Van Eenennaam et al. 2014).
Compared to other livestock species, cattle have a long generation interval and low fecundity, which slows genetic progress. Nevertheless, the dairy industry was well-positioned for rapid adoption of GS due to its industry-wide selection goal (e.g., Lifetime Net Merit, NM$ in the U.S.), widespread use of AI, large number of high accuracy AI sires, primary use of purebred animals (e.g., Holstein), extensive and uniform phenotype data collection, and central evaluation program to receive genotypes. Moreover, large breeding organizations were willing to fund genotyping because they received a clear cost savings in terms of identifying AI sires at a young age (< 1 year-old) compared to previous progeny testing schemes (> 5 years-old) (Wiggans et al. 2017).
In contrast, genetic progress in beef cattle selection programs has been slower, and industry-wide rates of genetic gain lag well below what is possible (Banks 2005). This is due to a multitude of factors, including the difficulty of developing an industry-wide breeding objective, in large part because of industry segmentation. The beef industry has a large number of ranches/decision makers raising animals in very diverse environments, and selection decisions are made at the seedstock level without good linkages to performance metrics in the commercial cow-calf, feedlot, or processing sectors. The beef industry also comprises multiple breeds and breed associations all collecting separate data, has limited to no data recording on several economically relevant traits (e.g., female reproduction and feed efficiency), has lower producer adoption of economic indexes, and makes limited use of AI (Van Eenennaam et al. 2014). Moreover, a large proportion of the world's beef cattle are located in tropical and subtropical environments, which requires additional traits, such as tolerance or resistance to environmental stressors, to be included in the breeding objective; such traits are typically very difficult or expensive to record for genetic improvement purposes and may have antagonistic relationships with productive attributes.
Genomic selection (GS) opportunities
The development of high-throughput genotyping of SNPs enabled approaches that predict an animal's genetic merit based on its DNA (Meuwissen et al. 2001). In GS, SNP effects are estimated from genotyped individuals that are phenotyped for the characteristics of interest (i.e., the training population); genomic estimated breeding values (GEBVs) can then be predicted for any genotyped individual using only its SNP genotypes and the estimated SNP effects. GS has been used in cattle to improve the accuracy of selection, reduce the generation interval, and provide useful information on traits that would otherwise be difficult to measure (García-Ruiz et al. 2016; Meuwissen et al. 2013). Genetic improvement in cattle, using GS for hard-to-measure traits like feed efficiency, cow longevity, and fertility, has the potential to reduce the environmental footprint per unit of production (Barwick et al. 2019; Fennessy et al. 2019; Hayes et al. 2013; Pryce and Haile-Mariam 2020; Quinton et al. 2018).
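Conceptually, once SNP effects have been estimated in the training population, prediction for a newly genotyped animal is a weighted sum. A minimal sketch under a simple additive SNP model follows; the names and dimensions are illustrative, and real genomic evaluations involve considerably more machinery.

```java
/** Illustrative sketch: a genomic estimated breeding value (GEBV) as the
 *  sum of allele dosages weighted by estimated SNP effects. */
public class GenomicPrediction {

    /** genotype[k] is the allele dosage (0, 1, or 2) at SNP k;
     *  snpEffect[k] is the effect estimated from the training population. */
    public static double gebv(int[] genotype, double[] snpEffect) {
        double sum = 0.0;
        for (int k = 0; k < genotype.length; k++) {
            sum += genotype[k] * snpEffect[k];
        }
        return sum;
    }
}
```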
Furthermore, improving efficiency of cattle production through exploitation of genomics can be considered a public good (Berry et al. 2016). For example, in Ireland this concept has been recognized by public support of genotyping cattle to facilitate GS. In 2016, a multibreed genomic evaluation in beef cattle was launched and a monetary incentive was provided for beef producers to genotype females, more extensively phenotype females, and to retain genomically tested high-index females as herd replacements to increase the efficiency of the national herd. To date, the Irish Cattle Breeding Federation (ICBF) has genotyped almost 2 million animals. This program provided the data required to validate that higher maternal index females, on average, calved for the first time at a younger age, had shorter calving intervals, survived longer, and were also expected have a lower mature weight. An accelerated rate of genetic gain in the Irish maternal index was observed following the deployment of genotyping incentives and genomic predictions (Twomey et al. 2020). All of these improvements would be expected to reduce the environmental impact per unit of beef production in this system.
Assisted reproductive technologies (ART) adoption
ART is the term used to describe treatments and procedures which involve the manipulation of reproductive cycles, gametes, or embryos. In cattle breeding schemes, ART including AI, cryopreservation of sperm or embryos, estrus synchronization, multiple ovulation ET (MOET), ovum-pick up (OPU) and in vitro embryo production (IVP), sex determination of sperm or embryos, and nuclear transfer (NT) have been incorporated to increase selection intensity, which can accelerate rates of genetic gain. Globally, the most widely used ART in cattle is AI (Baruselli et al. 2018). AI allows females around the world to be inseminated by genetically superior bulls via cryopreserved semen, which increases the selection intensity of males and thus accelerates rates of genetic gain. India, which is the country with the second largest number of cattle in the world in 2019 (193 million head, not including over 110 million Buffalo, Mithun and Yak), behind Brazil (215 million head), currently has the world's largest AI infrastructure. This consists of 49 semen stations producing 66.8 million doses of frozen semen annually. Additionally, there has been an increase in the uptake of sexed semen in India to reduce the number of male calves born into dairy herds (Ojango et al. 2016).
In some countries, the adoption of AI has been markedly skewed towards the dairy sector. For instance, while AI has been widely adopted by the U.S. dairy industry (> 80%) (Capper and Cady 2019; VanRaden 2007), to date it has seen limited uptake in the U.S. beef industry (USDA 2020). Only 12% of U.S. beef producers report using AI, and even fewer (7%) use estrus synchronization. In 2017, this resulted in less than 10% of all females being bred via AI. A larger portion of beef heifers (19%) were bred via AI compared to only 7% of cows (USDA 2020). Additionally, in Northern Australia, which accounts for over 50% of Australia's total beef cattle population, it is estimated that AI is used by less than 1% of breeding herds (MLA 2015). This low adoption rate in the beef industry is largely due to the difficulty in extensive systems of identifying females in estrus and constraining them to allow AI (USDA 2020).
To eliminate the burden and challenge of estrus detection, timed AI (TAI) was developed (Pursley et al. 1995). Additionally, TAI allows anestrous cows to be inseminated and has enabled conception to be more clustered to the beginning of the breeding season, thus increasing the reproductive and productive efficiency of farms (Baruselli et al. 2018). South America has widely adopted TAI. In 2017, more than 15 million breeding females were inseminated using TAI in Brazil, Argentina and Uruguay (Mapletoft et al. 2018). Specifically in Brazil, the widespread adoption of TAI resulted in a remarkable 220% increase in the Brazilian market for bovine semen units, from 7 million doses in 2002 to 15.5 million in 2018 (Fig. 2) (Baruselli et al. 2019). Over this time period (2002–2018), the percentage of female cattle in Brazil that were inseminated using AI more than doubled from 5.8% to 13.1%, totaling approximately 9.5 million head (13.6% of beef and 10.8% of dairy). Importantly, the large majority (86.3%) of these inseminations were via TAI (Fig. 2). Overall, it has been estimated that TAI returns more than half a billion U.S. dollars per year to the Brazilian beef production chain due to genetic improvement in economically important traits, such as growth and carcass merit, as compared to natural service (Baruselli et al. 2018).
Comparison of timed artificial insemination (TAI) and artificial insemination with estrous detection in cattle in Brazil from 2002 to 2018. Reproduced from Baruselli et al. (2019) under a CC-BY license
While AI and TAI enable increased selection intensity of males, MOET and OPU-IVP have allowed for increased selection intensity of females. In livestock, ET is the process of placing an embryo (usually at day 7 of development) into the uterus of a synchronized (in estrus 7 days prior to the transfer) recipient female that is typically not related to the embryo. Additionally, the development of synchronization techniques for timed embryo transfer (TET) has significantly increased the number of recipients suitable for receiving an embryo (Nasser et al. 2004). Historically, most embryos for ET were produced through MOET, also known as "flushing" or in vivo production. In a MOET program, a genetically superior donor female is typically superovulated prior to AI, and the resulting embryos are then flushed from the uterus of the donor (i.e., genetic dam) 7 days after AI. Alternatively, embryos can be generated via IVP. In an IVP program, unfertilized oocytes are collected from the donor cow's ovaries by transvaginal, ultrasound-guided needle aspiration of multiple follicles per ovary, also known as OPU. The collected oocytes then undergo in vitro maturation (IVM), followed by in vitro fertilization (IVF) and then in vitro culture (IVC) for 7 days until they reach the blastocyst stage and are ready for cryopreservation or ET. IVP is advantageous because donors can be collected repeatedly for most of the year, even while pregnant, thereby keeping them in synchrony with an annual calving cycle. Furthermore, in a process known as juvenile in vitro ET (JIVET), oocytes for IVP can also be collected from prepubertal heifers (< 7 months old), albeit with decreasing embryo development rates at younger ages (Brogliatti and Adams 1996; Duby et al. 1996; Torres et al. 2014). Using JIVET could decrease the female generation interval to one year (Duby et al. 1996; Granleese et al. 2015).
Globally, the number of IVP embryos has increased dramatically over time (Fig. 3). This increase has occurred predominantly in North and South America and to a lesser extent in Europe, with almost no uptake of this technology in Asia and Africa. Worldwide, more than one million bovine IVP embryos were produced in 2018 and 742,908 were transferred, of which more than 50% were transferred in South America. In Brazil specifically, over 270,000 IVP embryos were transferred in 2018. Baruselli et al. (2019) concluded that the uptake of reproductive biotechnologies in Brazil "increases productivity per unit of land and significantly contributes to improve the efficiency of livestock. Therefore, with the intensification of the use of reproductive biotechnologies it is possible to enhance production with reduced environmental impact." The challenge to continued adoption of these technologies is, according to these authors, dependent on an increase in extension services for producers and specialists, development of more efficient/cost-effective products and practical protocols, increased integration between universities, research institutes, veterinarians and industry, and market demand for the production of animal protein with higher quality, efficiency and environmental and economic sustainability (Baruselli et al. 2019).
Number of in vitro produced (IVP) bovine embryos from 2000 to 2019, by continent. Data from IETS (2000–2019) Data Retrieval Committee Reports
Another way to increase selection intensity is through embryo multiplication procedures, including embryo splitting and cloning by embryonic cell NT (ECNT) (Heyman et al. 1998; Lopes et al. 2001). Alternatively, adult somatic cell NT (SCNT) cloning can be used to multiply unique genotypes (Oback and Wells 2003; Wilmut et al. 1997). Unfortunately, due to faulty or incomplete epigenetic reprogramming of the donor cell genome, SCNT cloning often results in high rates of pregnancy loss and can also negatively affect the viability of live-born calves (Akagi et al. 2013; Galli and Lazzari 2021; Keefer 2015). Therefore, SCNT cloning is primarily used for research or to produce "back-ups" of individual animals with unique genetic features (Bousquet and Blondin 2004; Loi et al. 2016). On the other hand, ECNT cloning has been shown to greatly reduce the incidence and severity of abnormal phenotypes compared to somatic clones, but has limited multiplication potential due to the small number of embryonic cells, or blastomeres (Heyman et al. 2002; McLean et al. 2020; Misica-Turner et al. 2007).
One advanced reproductive biotechnology that has been invaluable for rodent and primate research, but until recently was not available for livestock species, is embryonic stem cell (ESC) technology (Blomberg and Telugu 2012; Evans and Kaufman 1981; Ezashi et al. 2016; Li et al. 2008; Soto and Ross 2016). ESCs are derived from the inner cell mass (ICM) of preimplantation blastocysts. The ICM is the tight cluster of cells inside a blastocyst that will eventually give rise to the definitive structures of the fetus. ESCs are a unique cell type because they are self-renewing (able to replicate indefinitely) and pluripotent, meaning they can differentiate into all three primary germ layers: ectoderm, endoderm, and mesoderm (Wu and Belmonte 2015; Ying et al. 2008). Given that ESCs are derived from pre-implantation embryos, they could provide a potentially unlimited source of elite genetics from the next generation of animals for multiplication, which could further increase the selection intensity of both males and females in livestock production.
Unfortunately, derivation and stable propagation of pluripotent ESCs from domestic ungulates, including cattle, has been challenging (Blomberg and Telugu 2012; Ezashi et al. 2016; Soto and Ross 2016). Although there have been reports of the development of bovine ESC lines, they did not pass the standard pluripotency tests (i.e., in vitro embryoid body formation, in vivo teratoma assay, and/or chimera formation). Moreover, they showed poor derivation efficiencies, limited proliferation capacities, and loss of pluripotency markers after extended passaging (Kim et al. 2017; Saito et al. 1992). Consequently, cattle research has been limited to investigation of induced pluripotent stem cells (iPSCs), which can be derived from the epigenetic reprogramming of somatic cells (Heo et al. 2015; Kawaguchi et al. 2015).
However, in 2018, after decades of research, Bogliotti et al. (2018) reported the successful derivation of pluripotent bovine ESCs with stable morphology, transcriptome, karyotype, population-doubling time, pluripotency marker gene expression, and epigenetic features. Moreover, the authors reported that stable bovine ESCs could be established quickly (in 3–4 weeks) and simply propagated by trypsin treatment (Bogliotti et al. 2018). More recently, Zhao et al. (2021) reported the successful derivation of another type of bovine pluripotent stem cell, expanded potential stem cells (EPSCs). Currently, the production of a live calf from ESCs would require NT using an ESC as the nuclear donor. Experiments have shown that ESC-NT results in similar blastocyst development rates to SCNT, but there could potentially be higher pregnancy rates and fewer offspring abnormalities (Bogliotti et al. 2018; McLean et al. 2020; Zhao et al. 2021).
In the future, ESCs could enable in vitro breeding (IVB) schemes, which could drastically decrease the generation interval (Goszczynski et al. 2018). IVB would involve repeated cycles of deriving gametes (i.e., sperm and eggs) in culture from ESCs and performing IVF (Goszczynski et al. 2018). In mice, ESCs have been induced in culture to become primordial germ cell-like cells (PGCLCs) and subsequently induced to form gametes. Furthermore, these in vitro gametes have successfully produced live, fertile offspring (Hayashi et al. 2011, 2012; Ishikura et al. 2016; Yoshino et al. 2021). Although bovine PGCLCs have yet to be produced, the ability to derive bovine ESCs now makes this strategy possible (Goszczynski et al. 2018). However, IVB will only be a useful tool to improve genetic gain if combined with GS (see discussion below).
Synergistic power of GS & ART
When GS and ART are used concurrently, the benefits of each act synergistically to accurately select genetically superior, young animals, thereby substantially reducing the generation interval and accelerating rates of genetic gain (Fig. 1) (Granleese et al. 2015; Kadarmideen et al. 2015; Loi et al. 2016). For example, GS can be used to accurately select high-genetic-merit young donor females for MOET or IVP and bulls for semen collection. The embryos produced from these matings will also have high genetic merit. However, due to Mendelian sampling variance, not all full-sibling embryos have the same genetic merit and there is a large cost and natural resource drain in gestating ET calves of unknown genetic merit only to later cull the genetically inferior animals (Segelke et al. 2014). Therefore, methods to produce and identify genetically superior embryos before ET have been highly sought after.
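To illustrate the Mendelian sampling variance argument, the following minimal Python sketch (our illustration, not a model from the cited studies) draws full-sibling embryo breeding values as the parent average plus a Mendelian sampling deviation, which under the standard infinitesimal model (ignoring inbreeding) has variance equal to half the additive genetic variance:

```python
import numpy as np

rng = np.random.default_rng(42)

def full_sib_embryo_values(sire_ebv, dam_ebv, sigma_a, n_embryos):
    """Simulate breeding values of full-sib embryos.

    Each embryo = parent average + Mendelian sampling deviation,
    which under the infinitesimal model is ~ N(0, sigma_a**2 / 2).
    """
    parent_avg = 0.5 * (sire_ebv + dam_ebv)
    mendelian = rng.normal(0.0, np.sqrt(sigma_a**2 / 2), size=n_embryos)
    return parent_avg + mendelian

# Hypothetical numbers: elite sire and donor dam, trait sd of 1 unit
embryos = full_sib_embryo_values(sire_ebv=2.0, dam_ebv=1.5, sigma_a=1.0, n_embryos=10)
print(np.round(np.sort(embryos), 2))  # full sibs span a wide range of merit
```

Even with identical elite parents, the simulated full sibs span a wide range of merit; this within-family variation is exactly what embryo-stage selection aims to exploit.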
The idea of combining GS with the manipulation of sex cells and embryos to accelerate genetic gain, coined "velogenetics," was first proposed by Georges and Massey (1991). Briefly, velogenetics is a breeding scheme based on the collection of fetal oocytes for IVP followed by genomic testing of the resulting embryos, with the possibility of reducing the generation interval to 3–6 months (Georges and Massey 1991). Although this scheme would provide a substantial decrease in the generation interval, the low efficiency and practical complications of having to slaughter the dam for fetal collection have inhibited further development of this specific scheme (Chohan and Hunter 2004; Figueiredo et al. 1993). However, alternative approaches with the same goals have been developed.
Genomic screening of embryos (GSE), sometimes referred to as embryo genotyping, is the process of genotyping cells collected from a biopsy of a preimplantation embryo (i.e., before ET into a recipient female). GSE can be used to predict an embryo's genetic merit so that only the embryos with the highest genetic merit are used for ET. Moreover, since a larger number of embryos can be generated via IVP compared to live-born animals, GSE can be used to select a small number of animals (in their embryo stage) from a large pool of candidates for ET, which will further increase the selection intensity (Fisher et al. 2012; Kadarmideen et al. 2015; Yudin et al. 2016). Although GSE holds great potential, there are currently several technical limitations to overcome.
There is an inverse relationship between the viability of a biopsied embryo and the ability to obtain DNA sufficient for genotyping (Ponsart et al. 2013). DNA extracted from embryo biopsies can be used for genetic diagnosis [i.e., genotyping of a few specific loci via polymerase chain reaction (PCR)], for GS, or a combination of both. DNA from one to several biopsied cells has been used successfully for genetic diagnosis (primarily, sex identification) of preimplantation bovine embryos (Cenariu et al. 2012; de Sousa et al. 2017; Ponsart et al. 2013; Tominaga and Hamada 2004). Moreover, de Sousa et al. (2017) took biopsies of a limited number of cells (10–20 blastomeres) from the trophectoderm of both in vivo derived and IVP bovine embryos on day 7 of development. They demonstrated that the biopsies were sufficient for embryo sexing via PCR and that there was no significant (P > 0.05) difference in day-60 pregnancy rates between fresh-transfer biopsied embryos and control, non-biopsied embryos. It is important to note that this study did not investigate pregnancy rates of embryos that had been both biopsied and cryopreserved. Due to the limited amount of time between being able to biopsy an embryo and needing to transfer the fresh embryo (i.e., both on day 7 of IVP development), the ability to cryopreserve biopsied embryos will likely be a critical process for applying GSE on a commercial scale (Mullaart and Wells 2018).
While embryo biopsies for sex determination have been routinely used in ET programs (Bondioli 1992; Lopes et al. 2001; Ponsart et al. 2013), GS of embryos has been limited since a much larger number of cells (minimum of 30–40 cells) must be biopsied and genotyped to make accurate selection decisions (Fisher et al. 2012; Ponsart et al. 2013). Although taking a biopsy of more than ~ 20 cells will drastically decrease embryo viability, alternatives to generate a sufficient amount of DNA for GS from only a small number of biopsied cells have been investigated, such as growing biopsied cells in culture (Ramos-Ibeas et al. 2014; Shojaei Saadi et al. 2014), and using whole genome amplification of biopsied cells in combination with imputation from known parental and population genotypes (Allan 2019; Lauri et al. 2013; Shojaei Saadi et al. 2014).
An adaptation of traditional GSE was developed by Kasinathan et al. (2015) to genomically screen unborn bovine fetuses rather than embryos. Their strategy utilized multiple ETs and subsequent embryo flushing (21–26-day fetuses) to generate fetal fibroblast lines. DNA was extracted from the fibroblast lines for GS and the resulting GEBVs for NM$ (U.S. dairy) were used to select the line with the highest genetic merit. Cells from the selected elite fibroblast line were used as donor cells for SCNT cloning. Following ET of the cloned embryos, five healthy calves with elite dairy genetics were born (Kasinathan et al. 2015). While this scheme overcomes the challenges of taking embryo biopsies for GS, it still relies on the inefficient process of SCNT cloning to produce live offspring (Akagi et al. 2013; Keefer 2015).
Bovine pluripotent stem cells (Bogliotti et al. 2018; Zhao et al. 2021) have the potential to open a whole new avenue for GSE. Given that pluripotent stem cells are self-replicating, a sufficient amount of DNA could be extracted without harming the viability of the remaining stem cells, which would allow for the use of GS to determine the genetic merit of each line. The genetically superior stem cell lines could then be used for ECNT, similar to the Kasinathan et al. (2015) method. Alternatively, the genetically superior stem cell lines could be differentiated in vitro (as described above) to produce gametes, which would enable IVB schemes (Goszczynski et al. 2018). Goszczynski et al. (2018) anticipate that one round of IVB could be completed in 3–4 months, which would drastically reduce the generation interval. These authors estimate that in the same time that it takes a GS program to obtain its first generation (2.5 years), an IVB program would instead allow 10 generations of mating and selection, ultimately enabling substantial genetic improvements to be made in a short amount of time (Goszczynski et al. 2018).
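The generation-count claim follows from simple arithmetic on the cycle lengths quoted above; a minimal sketch (assuming the optimistic 3-month end of the 3–4 month IVB round) is:

```python
# One conventional GS generation (Goszczynski et al. 2018): 2.5 years
months_available = 2.5 * 12

# Assumed length of one IVB round (lower end of the quoted 3-4 months)
ivb_round_months = 3.0

print(int(months_available // ivb_round_months))  # -> 10 IVB generations
```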
Gene editing (GnEd) potential
A potentially ground-breaking tool for genetic improvement is GnEd, which offers promising opportunities to inactivate targeted gene function (i.e., knock out genes), knock in genes from other species, and achieve intraspecies allele introgression in the absence of undesired linkage drag. GnEd refers to the use of site-directed nucleases to precisely introduce double-strand breaks (DSBs) at predetermined locations in the genome (Gaj et al. 2013). Cells have evolved two primary pathways to repair DSBs: non-homologous end joining (NHEJ) and homology-directed repair (HDR). The underlying principle is that the cell's endogenous repair factors identify and congregate at the site of the DSB to repair the DNA in an efficient manner.
When using the NHEJ pathway, the cell's natural DNA repair pathway fuses the broken DNA ends back together through blunt-end ligation. NHEJ is referred to as "non-homologous" because the ligation occurs without the use of a homologous nucleic acid template (e.g., sister chromatid) (Moore and Haber 1996). Consequently, this pathway is error-prone and often introduces variable-length insertion and deletion mutations (indels) at the DSB site (Sander and Joung 2014). In other words, the NHEJ pathway allows for the efficient disruption or knockout of a gene by targeting breaks to the coding region of the gene, where indels can result in frameshift or nonsense mutations.
On the other hand, the cell can use the HDR pathway if a nucleic acid donor template is provided. HDR templates can be designed to include desired modifications between regions of homology flanking either side of the targeted DSB, and templates are generally provided to the cell in the form of single-stranded or double-stranded DNA. The cell's DNA repair enzymes can use the template as a model for precise repair by homologous recombination. The HDR pathway can be used to introduce, or knock-in, a range of gene edits, from point mutations to allelic substitutions, to entire transgenes (Sander and Joung 2014). However, in most cell types a lower frequency of HDR than NHEJ has been observed (Sonoda et al. 2006).
There are currently three primary site-directed nucleases used for GnEd in livestock: (1) zinc finger nucleases (ZFN); (2) transcription activator-like effector nucleases (TALENs); and (3) clustered regularly interspaced short palindromic repeats and CRISPR-associated protein 9 (CRISPR/Cas9). Since 2012, all three GnEd systems have been used to perform both gene knockouts and knock-ins in livestock cells and zygotes (Bishop and Van Eenennaam 2020; Tait-Burkard et al. 2018; Tan et al. 2016). Most recently, the high efficiency, technical simplicity of design, and cost-effectiveness of the CRISPR/Cas9 system has greatly advanced the potential for GnEd in livestock (Petersen 2017).
GnEd experiments in cattle have primarily focused on three main areas of improvement: (1) animal health and welfare, (2) product yield or quality, and (3) reproduction or novel breeding schemes (Table 1). All three of these areas are highly aligned with the goals of conventional breeding programs (Rexroad et al. 2019; Tait-Burkard et al. 2018; Van Eenennaam 2017).
Table 1. Publications using gene editing in cattle for agricultural applications, grouped by category of genetic improvement goals.
In particular, a highly anticipated application of GnEd in livestock is to enable breeders to tackle animal health and welfare issues at a genetic level in a way that is either not currently possible, or would result in decreased rates of genetic gain, if pursued through conventional breeding. For example, GnEd enabled Wu et al. (2015) and Gao et al. (2017) to precisely insert genes from other species (mouse Sp110 (SP110 Nuclear Body Protein) and human NRAMP1 (Natural Resistance-Associated Macrophage Protein 1), respectively) into an intergenic region of the bovine genome to decrease susceptibility to tuberculosis. This scientific feat would not have been possible through conventional breeding methods alone. GnEd has also enabled researchers to replicate, in Angus cattle, a beneficial mutation in the prolactin receptor (PRLR) gene that was first found in Senepol cattle and is hypothesized to result in a SLICK phenotype (i.e., short, sleek hair coat), thereby increasing thermotolerance (Rodriguez-Villamil et al. 2021). Although the Senepol PRLR mutation could be introgressed into another breed, such as Angus, through conventional breeding methods alone, the process would require multiple generations of backcrossing to restore genetic merit to pre-introgression levels, due to linkage drag (Tan et al. 2012). In a species like cattle, with a long generation interval, backcrossing is a time-consuming and expensive process (Gaspa et al. 2015; Visscher et al. 1996). Additionally, it is important to note that genetic solutions for animal health and welfare issues are often more sustainable and require less labor for livestock producers than chemical or mechanical methods (e.g., polled genetics versus dehorning) (Gottardo et al. 2011; Thompson et al. 2017). It is also anticipated that GnEd could be used to repair defective genes, such as recessive lethal or heritable disease variations, in high genetic merit animals (Ikeda et al. 2017; Ishino et al. 2018).
Overall, the potential for GnEd to improve livestock sustainability is clearly evident. This is illustrated by the 2018 National Academies of Sciences, Engineering, and Medicine (NASEM) study, "Science Breakthroughs 2030: A Strategy for Food and Agricultural Research," which identified "the ability to carry out routine gene editing of agriculturally important organisms" as one of the five most promising scientific breakthroughs possible in the next decade to increase the U.S. food and agriculture system's sustainability, competitiveness, and resilience (NASEM 2018). However, strategies for routinely incorporating GnEd into existing animal breeding programs, especially for species with long generation intervals, like cattle, are less evident.
ART enables production of live GnEd offspring
For GnEd to be an important factor for genetic improvement, it must reliably edit the germline of breeding stock, so the edits can be passed on to the next generation. To date, it has been challenging to produce live, homozygous, non-mosaic, GnEd offspring. There are currently two primary methods to generate GnEd bovine embryos and each has associated tradeoffs (Fig. 4).
Schematic showing the number of steps required to produce live, homozygous, non-mosaic, GnEd livestock (maroon calf) through either somatic cell nuclear transfer (SCNT) cloning (tan arrows) or zygote microinjection (light purple arrows). Both methods include gamete collection and maturation, introduction of the gene-editing (GnEd) reagents, and transfer of embryos into synchronized recipients (surrogate dams). For the SCNT cloning approach (tan arrows) GnEd reagents are introduced into a somatic cell line and then SCNT cloning is used to produce embryos for transfer. The GnEd cell line can be screened before cloning to ensure production of a homozygous, non-mosaic animal. For the zygote microinjection approach (light purple arrows) GnEd reagents are introduced directly into a zygote via cytoplasmic injection or electroporation. GnEd of zygotes can result in mosaic offspring, which requires subsequent breeding to produce first heterozygous and ultimately homozygous GnEd offspring. Therefore, gene editing of zygotes may require more steps to produce a homozygous, non-mosaic, GnEd animal, as indicated by the increased number of light purple arrows (7) compared to the number of tan arrows (3). Reproduced from Bishop and Van Eenennaam (2020) under a CC-BY license
One option is to introduce the GnEd reagents (e.g., CRISPR/Cas9) into a somatic cell line and subsequently clone the cell line by SCNT to produce embryos. Thus far, SCNT has been the primary method for producing GnEd livestock because the clonal colony growth of cell lines provides large amounts of DNA that can be sequenced to confirm and isolate cells carrying the desired edit, ensuring that only animals with the intended edits are produced. However, as previously discussed, SCNT cloning often results in high rates of pregnancy loss and can also negatively affect the viability of live-born calves (Akagi et al. 2013; Keefer 2015). Additionally, unless a scheme similar to Kasinathan et al. (2015) is used, adult somatic cloning increases the generation interval by one generation (equivalent to two years in cattle), compared to ET of in vivo derived or IVP embryos.
Alternatively, GnEd reagents can be introduced directly into the cytoplasm of an IVP zygote, typically via microinjection or, more recently, electroporation (Lin and Van Eenennaam 2021; McLean et al. 2020). GnEd of zygotes is an attractive option because it avoids the inefficiencies associated with SCNT cloning, does not increase the generation interval because the GnEd process occurs in the next generation of animals, and allows for the introduction of GnEd reagents into a genetically diverse population of foundation animals, as each zygote will produce a genetically distinct animal, in contrast to animals derived from a clonal cell line. However, characterizing GnEd zygotes is difficult due to the challenges of GSE discussed above. Specifically, a major challenge associated with GnEd of zygotes is the production of mosaic animals (Bishop and Van Eenennaam 2020; Hennig et al. 2020; McLean et al. 2020). Mosaicism arises from mutations that occur after DNA replication (van Echten-Arends et al. 2011), resulting in one individual having two or more different genotypes. It is important to keep in mind that many livestock GnEd applications require homozygous modifications (i.e., two copies) to ensure inheritance of one copy in the F1 generation (Bishop and Van Eenennaam 2020). Therefore, mosaic GnEd animals will often require time-consuming and expensive subsequent crossbreeding to ultimately produce homozygous edited offspring (Fig. 4).
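A back-of-the-envelope calculation shows why homozygous, non-mosaic founders matter for inheritance; in this sketch the mosaic case assumes, purely for illustration, that a fraction m of the founder's germ cells carry one edited allele:

```python
def p_offspring_edited(founder_type, m=0.3):
    """Probability an offspring inherits at least one edited allele,
    assuming the other parent is unedited."""
    if founder_type == "homozygous":
        return 1.0      # every gamete carries the edit
    if founder_type == "heterozygous":
        return 0.5      # half of gametes carry the edit
    if founder_type == "mosaic":
        return 0.5 * m  # assumed: fraction m of germ cells are heterozygous-edited
    raise ValueError(founder_type)

for f in ("homozygous", "heterozygous", "mosaic"):
    print(f, p_offspring_edited(f))  # mosaic with m=0.3 -> only 15% of offspring edited
```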
Regardless of the method used to generate GnEd bovine embryos, ET into synchronized recipient females is a crucial step in producing live GnEd offspring (Fig. 4). Therefore, GnEd in mammalian livestock species is currently reliant on the use of ART (i.e., IVP or SCNT to produce GnEd embryos, and ET to produce live, GnEd offspring).
Synergistic strategies for incorporating GnEd into livestock breeding programs: Simulations
To be an effective tool for genetic improvement, GnEd must integrate smoothly into existing cattle breeding programs (Bishop and Van Eenennaam 2020). GnEd has not yet been applied at commercial scale, so strategies for incorporating it into livestock breeding programs have primarily been modeled via computer simulation.
One of the first simulation studies to explore the potential of combining GnEd with GS in a livestock breeding program was by Jenko et al. (2015). Although GnEd is currently being used to improve monogenic traits, Jenko et al. (2015) modeled a hypothetical breeding scheme of GS supplemented with promotion of alleles by GnEd (PAGE) to improve a quantitative trait and compared the results to a baseline scenario of using GS alone. In the PAGE scheme, the top sires (5, 10, or 25) based on their true breeding values (i.e., GS with perfect accuracy) were selected and then edited at 1–100 loci. They found that using GS + PAGE to edit 20 loci in 25 sires doubled the rate of genetic gain compared to using GS alone. It is important to note that this simulation assumed a quantitative trait with 10,000 known quantitative trait nucleotides (QTN), but identifying such QTN is not a trivial exercise and to date relatively few QTN with large effects on quantitative traits have been identified (Georges et al. 2019).
Bastiaansen et al. (2018) modeled GnEd of a monogenic trait at the zygote stage in a generic livestock population combined with GS for a quantitative trait (i.e., index-based selection). In this simulation, zygotes from either 0, 10, or 100% of matings from genomically selected elite parents were edited for the desired monogenic trait. Additionally, due to the low efficiencies of GnEd reported in the literature (Tan et al. 2016), they modeled various GnEd success and embryo survival rates. When they modeled 100% GnEd efficiency and embryo survival, they observed a strong favorable impact of GnEd on decreasing the time to fixation for the desired allele (four-fold faster), compared to GS alone. However, when they modeled a 4% GnEd efficiency, the number of GnEd procedures needed increased by 72% and the selection response for the polygenic trait decreased eight-fold, compared to the 100% efficiency model (Bastiaansen et al. 2018). As discussed previously, GnEd of zygotes is typically not 100% efficient and mosaic animals are common (Hennig et al. 2020; McLean et al. 2020). Therefore, in a commercial setting GnEd embryos will likely need to be biopsied before ET to confirm the desired change and avoid transferring embryos without the desired edit(s). Moreover, the current technical limitations of embryo biopsying will need to be overcome not only to identify embryos with the intended edit(s), but also to use GS to select embryos with superior genetic merit to increase rates of genetic gain.
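To make the efficiency penalty concrete, the expected number of zygote procedures needed to obtain a target number of surviving, correctly edited embryos is roughly target / (editing success rate × embryo survival rate). The sketch below uses illustrative rates loosely echoing the 100% and 4% scenarios of Bastiaansen et al. (2018); the survival figure is our assumption, not theirs:

```python
import math

def zygotes_needed(target_edited, edit_rate, survival_rate):
    """Expected number of zygote procedures to yield `target_edited`
    surviving, correctly edited embryos (independent events assumed)."""
    return math.ceil(target_edited / (edit_rate * survival_rate))

print(zygotes_needed(100, edit_rate=1.00, survival_rate=1.00))  # 100
print(zygotes_needed(100, edit_rate=0.04, survival_rate=0.50))  # 5000
```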
Van Eenennaam (2017) proposed a scheme where GnEd could be incorporated as an added step to the Kasinathan et al. (2015) elite cattle production system (Fig. 5). This approach was modeled to introduce a beneficial, monogenic, dominant allele (i.e., the POLLED Celtic allele (PC)) into the U.S. dairy cattle (Mueller et al. 2019) and northern Australian beef cattle populations (Mueller et al. 2021). In these simulations, fetal tissue from the next generation of yet-to-be-born bulls was genomically screened and selected, edited, and then successfully cloned such that this production system added 3–5 months to produce a homozygous GnEd bull (Fig. 5).
Production of high genetic merit calves using a range of biotechnologies and showing where gene editing might fit into the process. Blue ribbons represent elite genetics. Modified from Van Eenennaam (2017) and reproduced from Mueller et al. (2021) under a CC-BY license
Mueller et al. (2019) modelled the U.S. dairy population and found that the use of GnEd was the most effective way to increase the frequency of the desired PC allele while minimizing detrimental effects on inbreeding and the rate of genetic gain based on an economic selection index (NM$). They observed that editing only the top 1% of bull calves per year based on their index value, while placing moderate selection pressure on the polled phenotype, was sufficient to maintain the same or a better rate of genetic gain compared to conventional selection on genetic merit alone, while significantly increasing the PC allele frequency to greater than 90% (Mueller et al. 2019). Additionally, both Bastiaansen et al. (2018) and Mueller et al. (2019) found that GnEd reduced long-term inbreeding levels in scenarios that placed moderate to strong selection emphasis on the monogenic trait of interest (e.g., polled) compared to conventional breeding alone. Importantly, Mueller et al. (2019) modeled conventional breeding to represent the widespread use of AI in the U.S. dairy population (i.e., a maximum of 5000 (5%) matings/bull/year) (Capper and Cady 2019; Capper et al. 2009; García-Ruiz et al. 2016; VanRaden 2007), so a single dairy sire was able to have a large impact on the whole population. Therefore, only a small number of elite, GnEd polled dairy sires were needed to see population-level results (Mueller et al. 2019).
In contrast, AI is rarely used in northern Australian beef cattle breeding herds (< 1%) (MLA 2015), so Mueller et al. (2021) modeled all matings via natural service (i.e., a maximum of 35 matings/bull/year). The natural mating limits prevented individual GnEd beef bulls from having an extensive impact on the whole population. Consequently, editing only the top 1% of seedstock beef bull calves per year in mating schemes that placed moderate to strong selection on polled resulted in significantly slower rates of genetic gain as compared to conventional selection based on genetic merit alone. However, they did find that if the proportion of edited animals was increased to the top 10% of seedstock beef bull calves per year, then similar rates of genetic gain could be achieved compared to conventional selection on genetic merit alone. In all scenarios, regardless of whether GnEd was applied, the population inbreeding level never exceeded 1%. This level of inbreeding has been found to have relatively minor effects on traits of economic or biological significance in tropical beef cattle (Burrow 1998). This simulation study modeled solely natural mating because ARTs are currently scarcely used in this beef cattle population (MLA 2015). However, the authors explain that, "this is unlikely to be the situation with valuable GnEd bulls. It is more probable that a high-genetic-merit homozygous polled sire would be used for AI or IVP followed by ET, in the seedstock sector. This system would amplify the reach of each GnEd bull using well-proven ART and enable these bulls to produce hundreds or even thousands of progeny, and thus have a greater impact on the whole population."
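The difference in dissemination power between the two modeled populations reduces to arithmetic on the mating caps (5000 matings/bull/year for AI dairy sires vs. 35 for natural-service beef bulls); the cow population size below is hypothetical:

```python
def bulls_required(cows_to_mate, matings_per_bull_year):
    """Bulls needed to cover a cow population in one year, given a mating cap."""
    return -(-cows_to_mate // matings_per_bull_year)  # ceiling division

cows = 1_000_000  # hypothetical breeding population
print("AI sires needed:             ", bulls_required(cows, 5000))  # 200
print("Natural-service bulls needed:", bulls_required(cows, 35))    # 28572
```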
Although Mueller et al. (2021) modeled a northern Australian beef cattle population, many findings are also applicable to the global beef industry and the situation in many developing countries (Baruselli et al. 2019; MLA 2015; Ojango et al. 2016a, b; Setiana et al. 2020; USDA 2020). AI is logistically challenging to implement for both smallholder farms in developing countries (e.g., lack of AI technicians and difficulties transporting cryopreserved semen) and often for commercial-scale extensive beef operations in developed countries (e.g., additional labor required to identify females in estrus and constrain them to perform AI). Therefore, a large number of GnEd natural service bulls would currently be needed to broadly disseminate GnEd traits globally in systems that have limited adoption of ARTs.
Surrogate sires to disseminate GnEd traits
A potential alternative to AI that could be enabled through GnEd is a concept called surrogate sires. Surrogate sires would be host bulls that carry germ cells from more genetically elite donor sires and would be able to pass on these desirable donor genetics through natural mating to improve production efficiency (Gottardo et al. 2019). Additionally, surrogate sire technology could potentially provide an efficient means for the distribution of traits that have been improved through GnEd (McFarlane et al. 2019).
It is anticipated that surrogate sire technology could be realized through germline complementation, which consists of using donor cells from one genetic background to complement or replace the germline of an otherwise sterile host of a different genetic background (Giassetti et al. 2019; Richardson et al. 2009). Germline complementation requires two components: (1) a host that lacks its own germline, but otherwise has normal gonadal development (e.g., intact reproductive tract), and (2) donor cells that are capable of becoming gametes (Fig. 6).
Schematic of potential surrogate sire production systems. Grey represents steps to generate the host animal. Green and blue represent potential alternative sources and steps for generating donor cells. Light purple represents the germline complementation steps and dark purple/maroon represents the resulting final surrogate sire product. Key differences are that in the green path (A), germline complementation would take place in a live juvenile or adult animal and the host would be non-mosaic. In contrast, in the blue path (B), germline complementation would take place at the embryo stage and the resulting host could be mosaic. Blue ribbons represent elite genetics and scissors represent steps that require (solid fill) gene editing or where gene editing could potentially be introduced (outline only). PGCLC: primordial germ cell-like cells, ESC: embryonic stem cell
One method to generate germline-deficient hosts is via treatment with chemotoxic drugs (e.g., busulfan) or local irradiation, but these methods are not efficient in livestock because they either fail to completely eliminate the endogenous germline, or the treatment has undesirable side effects on animal health (Giassetti et al. 2019). A promising alternative is to use GnEd to knockout a gene (e.g., NANOS2 or DAZL) in a zygote that is necessary for that animal's own germ cell production (Ciccarelli et al. 2020; McLean et al. 2021; Miao et al. 2019; Park et al. 2017; Taylor et al. 2017).
Donor cells could be blastomeres (i.e., embryo cells) or stem cells, as reviewed by Bishop and Van Eenennaam (2020) and McLean et al. (2020). Potential sources of germline competent stem cells are ESCs, iPSCs, or spermatogonial stem cells (SSCs), which can be isolated from mature or juvenile testes (Ciccarelli et al. 2020; Giassetti et al. 2019). Additionally, ESCs or iPSCs could possibly be induced in culture to become PGCLCs (Hayashi et al. 2011). Stem cells provide several advantages over blastomeres, as an embryo has a limited number of blastomeres and therefore a limited amount of genomic screening and multiplication potential (McLean et al. 2020). In contrast, stem cells are self-replicating so they can provide a potentially unlimited supply of donor cells. Additionally, stem cells could be GnEd in culture, possibly multiple times sequentially, and then DNA could be extracted without harming the viability of the remaining stem cells to both confirm the intended gene edit was made and to use GS to determine the genetic merit of each line. This scheme would be especially useful when applied to ESCs, which represent the next generation, to overcome the current challenges associated with GSE and to avoid the mosaicism issues currently associated with zygote GnEd.
The process of germline complementation (i.e., combining donor cells with a host) can occur at different stages of a host animal's development, depending on the donor cell source (Fig. 6). If the donor cells are SSCs or PGCLCs, then they can be injected into a juvenile or adult host's germline-deficient gonad (Fig. 6A). SSC transfer has been demonstrated in pigs and goats and represents germline cloning of the current generation of sires (Ciccarelli et al. 2020; Park et al. 2017). In contrast, PGCLCs derived from ESCs would represent germline cloning of the next generation, since the donor cells would originate from an unborn 7-day-old embryo. Alternatively, donor blastomeres or ESCs, which both represent the next generation, could be combined with the host at the developing embryo stage (Fig. 6B) (Ideta et al. 2016; McLean et al. 2020).
Irrespective of the production method, surrogate sires could unlock an opportunity to both accelerate rates of genetic gain and widely distribute traits improved via GnEd. The selection of only elite males for donor cells would increase selection intensity. Additionally, since the use of surrogate sires will not require any additional labor for commercial producers, there could be widespread adoption of this technology, which would dramatically reduce the lag in genetic merit that typically exists between the seedstock sector and the commercial sector. For example, Gottardo et al. (2019) performed simulations to develop and test a strategy for exploiting surrogate sire technology in a pig breeding program. Their model projected that using surrogate sire technology in the swine industry would significantly raise the genetic merit of commercial sires by closing the typical 4-year genetic lag (difference in genetic mean between the nucleus and commercial populations), resulting in as much as 6.5 to 9.2 years' worth of genetic gain as compared to a conventional pig breeding program (Gottardo et al. 2019; Visscher et al. 2000).
Considerations for incorporation of records from animals produced using advanced reproductive or molecular biotechnologies into National Cattle Genetic Evaluations
Currently, an important question is how to best accommodate animals produced using advanced reproductive and/or molecular biotechnology, and their progeny, into genetic evaluations. In the U.S., the majority of genetic evaluations for beef cattle are carried out by breed associations following the industry-standard Beef Improvement Federation (BIF) guidelines (BIF 2021d; Van Eenennaam 2019). U.S. dairy cattle genetic evaluations were previously performed by the U.S. Department of Agriculture-Agricultural Research Service-Animal Genomics and Improvement Laboratory (USDA-ARS-AGIL) and are currently performed by the Council on Dairy Cattle Breeding (CDCB). Additionally, the International Committee for Animal Recording (ICAR), an international non-governmental organization (NGO), provides guidelines, standards, and certification for animal identification, animal recording, and animal evaluation.
Records from animals resulting from ART
For animals resulting from MOET, BIF recommends that all observations, or phenotypic information, for traits that do not have maternal effects be used in genetic evaluations and that observations "for traits that have maternal effects, be used in genetic evaluations as long as the recipient dams' ages (heifer, 1st parity, or multiparity) and approximate breed compositions are available" (BIF 2021b). Additionally, "BIF recommends that embryo stage (1–9) and grade (1–3) and whether frozen, split, sexed, or genotyped be recorded and submitted to breed association or other recording organization" and that, "when sufficient information becomes available, genetic evaluation models for MOET calves include effects of fresh versus frozen and of biopsied (sexed and/or genotyped) or not" (BIF 2021b). However, due to historic concerns about large offspring syndrome, BIF does not recommend using phenotypic observations from animals resulting from IVP in genetic evaluations (BIF 2021b; Thallman and Snider 2021). Nevertheless, BIF does recommend that observations on all ET calves (i.e., resulting from MOET or IVP) be recorded and submitted to breed associations or other recording organizations, along with the form of technology used and other pertinent details related to producing the ET calves (BIF 2021b), so that this information could eventually be used in analyses that would enable records from IVP-produced beef cattle to be included in future genetic evaluations (Thallman and Snider 2021). In contrast, phenotypic observations from animals resulting from both MOET and IVP are included in dairy cattle genetic evaluations. For dairy animals known to be produced by ET (both MOET and IVP), production records (e.g., lactation records) are included in genetic evaluations, but fertility and calving data (e.g., stillbirth records) are excluded from genetic evaluations of those traits because they do not represent "normal" expressions of fertility (personal communication, John B. Cole).
Regarding animals resulting from NT, due to concerns about large offspring syndrome and abnormal clone syndrome, BIF recommends not using phenotypic observations from these animals in genetic evaluations (BIF 2021b; Thallman and Snider 2021), but also recognizes that "there are instances where genetically identical animals are in the pedigree (i.e. identical twins and clones)." In these cases where genetically identical animals exist in the pedigree, BIF recommends that, "for purposes of routine genetic evaluation, each set of genetically identical individuals is assigned a common identifier, so they have identical expected progeny differences (EPDs)," and recommends that, "they should also be assigned different permanent identification numbers" (BIF 2021c). An EPD, which is the standard term used in the U.S. beef industry, is a predictor of the genetic merit of an animal's progeny and is equal to half of the animal's EBV. Data from clones are handled similarly for dairy genetic evaluations, where each clone receives a unique permanent identification number and an individual evaluation, but the same predicted transmitting ability (PTA) is distributed for all clones from the same donor (personal communication, John B. Cole). A PTA, which is the standard term used in the U.S. dairy industry, is a predictor of the genetic merit of an animal's progeny and is equal to half of the animal's EBV.
ICAR recommends that detailed data be recorded at all steps of embryo production (e.g., embryo stage, embryo grade, and whether frozen, split, sexed, or genotyped) and that this information be submitted to breed associations or other recording organizations. ICAR is working to develop standardized codes for identifying features of embryos (e.g., sex, NT, IVP, etc.). Additionally, ICAR advises having parentage verification for animals resulting from ET (ICAR 2017, 2019).
Records from animals resulting from GnEd
Given that all GnEd animals are currently produced via SCNT or IVP, the phenotypic observations of the resulting animals would, under current recommendations, be excluded from beef genetic evaluations, but could potentially be included in dairy genetic evaluations (BIF 2021b; Thallman and Snider 2021). ICAR recommends that "breed Associations should check the rules of their countries with regard to allowing GnEd animals in the herd book," and "if an animal has been GnEd it should be recorded against the animal when registered and should appear on the Zootechnical Certificate" (ICAR 2019). Additionally, BIF has developed more detailed guidelines for what data should be required from GnEd animals for breed association registration (BIF 2021a). Recently, two major beef breed associations, the American Angus Association (AAA) and the Red Angus Association of America (RAAA), adopted bylaws regarding the registration requirements for GnEd founders (GEF) and descendants (GED) (AAA 2021; RAAA 2021). Moreover, in September of 2021 the RAAA became the first breed association to announce that "they will provide herdbook registry of Red Angus animals carrying GnEd traits for heat tolerance and coat color" (RAAA 2021).
Moving forward, the GED will eventually enter genetic evaluations, and the method for inclusion of these phenotypic records may differ depending on the type of trait affected by the GnEd (Thallman and Snider 2021). Most GnEd targeting qualitative traits (e.g., horned/polled or coat color) would have no influence on genetic evaluations. In contrast, GnEd targeting quantitative traits (e.g., muscle yield or disease resistance) could have a major impact on the genetic evaluations of close relatives. Thallman and Snider (2021) state that "gene editing directly violates fundamental assumptions of traditional (non-genomic) genetic evaluation." However, they also point out that, fortunately, it will likely be easier to accommodate GnEd in genomic evaluation models (e.g., Single Step), and that research will be needed to determine the best way to include these records in different genomic models (Thallman and Snider 2021).
Records from surrogate sires
Based on currently proposed methods, surrogate sires will also be produced using IVP to generate the germline knockout host for germline complementation (Fig. 6). Therefore, based on current BIF guidelines, phenotypic observations on surrogate sires would also be excluded from beef genetic evaluations (BIF 2021b). However, phenotypes recorded on the somatic host are unrelated to the genetic merit of the donor germline, and therefore should not be included in the genetic merit estimate calculations associated with the donor. It should be noted that GnEd, homozygous NANOS2 knockout females are expected to be fertile, so when crossed with a GnEd, heterozygous NANOS2 knockout, fertile male, this mating would be expected to produce 50% homozygous NANOS2 knockout, infertile male offspring, even in the absence of IVP or other ARTs (Park et al. 2017). Similar to animals resulting from ET, it will be useful to record as much information as possible on all contributing factors to the surrogate sire embryo (i.e., sire and dam of the host embryo, identification and genomic information of the germline donor source, ET recipient identification, and details on the production process). Regarding progeny of the surrogate sires, they should be genotyped to confirm inheritance of the germline donor's DNA. Once paternal inheritance is confirmed, these progeny could potentially be handled similarly to those of clones (BIF 2021), where all offspring data are attributed to the original germline donor and the progeny would all share a common identifier, but also be assigned unique permanent identification.
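The 50% figure for the NANOS2 cross follows from Mendelian segregation and can be checked with a small enumeration (a sketch; "-" denotes the knockout allele):

```python
from itertools import product

# Dam is NANOS2 -/- (fertile); sire is NANOS2 +/- (fertile)
dam_gametes = ["-", "-"]
sire_gametes = ["+", "-"]

offspring = [tuple(sorted(g)) for g in product(dam_gametes, sire_gametes)]
frac_ko = offspring.count(("-", "-")) / len(offspring)
print(f"homozygous knockout offspring: {frac_ko:.0%}")  # 50%; KO males are infertile hosts
```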
Considerations for genetic improvement of cattle in developing countries
Cattle are raised in more than 200 countries around the world in almost all climatic zones, with the exception of high elevations, and they have been bred for adaptations to heat, cold, humidity, extreme diet, water scarcity, mountainous terrain, dry environments, and for general hardiness. In 2019, the Food and Agriculture Organization of the United Nations (FAO) estimated global cattle numbers at 1.511 billion head (FAOSTAT 2020). Across the globe and between individual producers, there is a wide gap in production efficiency, which results in considerable variation, even up to a 50-fold difference, in the environmental impact of producing the same product (Herrero et al. 2013; Poore and Nemecek 2018). This production efficiency gap is especially large between developed countries and developing, or Low-to-Middle-Income, Countries (LMIC).
For example, while global beef production is currently split evenly between developed (49%) and developing (51%) countries, the environmental impact of production is not (FAO 2021b). Presently, LMIC contribute the majority of global ruminant greenhouse gas emissions (75%) and house 76% of the global cattle herd (FAO 2021a; Herrero et al. 2013). It is important to note that in the 1990s the African continent became the region of the world with the largest number of cattle and is now collectively home to 361 million cattle. This exceeds the 215 million cattle located in Brazil, the individual country with the largest cattle population (and the #3 beef producer), and is more than triple the number of cattle in the U.S. (94.8 million head; the #1 beef producer). Ethiopia alone has 63 million cattle, the most of any African country, followed by Sudan and Chad at 31 million head each. In 2019, the African continent accounted for 24% of the global cattle population, but only 10% of global beef production (FAO 2021a, b).
Considering that 81% of the additional beef production expected by 2029 is predicted to occur in the developing countries and regions of Argentina, Brazil, China, Pakistan, and Sub-Saharan Africa, this production efficiency gap is a crucial challenge for global cattle production sustainability. For example, Chang et al. (2021) estimated that improving livestock production efficiencies in the 10 countries with the largest emission reduction potential (i.e., where current production efficiency is low, resulting in a high emission intensity per kg protein, and a large increase in livestock production is projected) could contribute 60%–65% of the global reduction in livestock emissions by 2050 (compared to a baseline where emission intensities are held constant in the future). Chang et al. (2021) determined that the 10 countries with the largest emission reduction potential were in Africa (Madagascar, Morocco, Niger, South Africa, Tanzania), Asia (China, India, Iran, Turkey) and South America (Brazil).
It is important to keep in mind that beyond meat and milk, cattle also produce fibers, hides, skins, fertilizer, and fuel, are used for transportation and draft power, serve ecological roles, and, particularly in Africa and parts of Asia, also serve socio-economic (e.g., asset building in the form of stock accumulation) and cultural (e.g., religious worship in India and Lobola, or 'bride price', in parts of Africa) purposes. Therefore, careful consideration of livelihood concerns will be required when implementing production efficiency improvements. Van Eenennaam and Werth (2021) explain, "any proposed strategies for boosting the efficiency of cattle production need to consider these broader concerns, and also the fact that access to technologies may be more limited in some settings, often because of factors such as inaccessibility, unaffordability, lack of relevant knowledge, and/or of organizational capacity." Although some LMIC, like Brazil, have successfully implemented ART on a large commercial scale, not all genetic improvement tools or strategies have translated as easily to other developing countries.
In LMIC, genetic progress can be frustrated by poor infrastructure and ecological and financial challenges (Mapiye et al. 2018; Nyamushamba et al. 2017). For example, in South Africa, it is difficult to develop genetic tools such as EBVs for smallholder farmers due to small herds, incomplete data recording for most traits, a lack of parentage recording, insufficient contemporary groups, and lack of organizational capacity (van Marle-Köster and Visser 2018). In a survey of 62 market-oriented smallholder beef farmers in South Africa, 77% of the farmers reported that they were constrained by cattle breeding challenges including a shortage of breeding bulls (12%), lack of enclosed breeding pens (46%), and poor breeding management skills (29%) (Mapiye et al. 2018). Additionally, a number of non-scientific challenges also face emerging market-orientated cattle farmers, including land access and ownership issues, and access to financial support and markets (Khapayi and Celliers 2016; Mapiye et al. 2018). These studies suggest that providing South African smallholder farmers with superior genetic material for genetic improvement of their livestock will require different approaches than have been used to implement traditional genetic evaluation programs (van Marle-Köster and Visser 2018). Community-based breeding programs have seen the most success, especially when they "are based on the breeding goals of smallholder farmers, there are strong market incentives for improved animal productivity, and strong support services such as extension and veterinary services" (de Haas et al. 2016).
The Consultative Group on International Agricultural Research (CGIAR) implemented a collaborative research program to observe, survey, and compare the dairy value chains in Tanzania and Kenya (East Africa), India (South Asia) and Nicaragua (Latin America) (Ojango et al. 2016). In these countries a large number of smallholder farmers who operate mixed crop–livestock production systems play a significant role in dairy production. CGIAR chose to include countries in multiple regions in order to allow for comparisons and cross-system learning that would support development of lessons, methodologies, and technologies of wide applicability (ILRI et al. 2011). This analysis revealed significant productivity gaps, especially between large and small-scale producers, and identified genetic and reproductive biotechnologies that hold promise for the advancement of global development goals in these countries (ILRI et al. 2011).
Among these four countries, Ojango et al. (2016) observed that Kenya was the only country that had a national animal recording system where pedigree and performance recording is conducted. Although open to all producers, the system is primarily used by the large-scale dairy producers in high-input systems where purebred cattle are common. At the time, only 2.5% of the national dairy herd was accounted for in the national animal recording program. This low participation rate is a major obstacle because, as discussed previously, the foundation of genetic improvement is a well-structured breeding program with a clear breeding objective.
Crossbreeding is a more common practice within the smaller-scale livestock production enterprises in both Kenya and Tanzania, where the majority of the smallholder farmers have fewer than five cows. However, indiscriminate or uncontrolled crossbreeding can lead to the demise of indigenous breeds (van Marle-Köster and Visser 2018). For instance, unstructured crossbreeding programs in Africa have produced non-descript crossbred cattle that now constitute more than two thirds of the smallholder herd (Scholtz et al. 2008). It has been suggested that structured breeding programs for African indigenous livestock should be developed (Mwai et al. 2015), informed by knowledge of the population structure and genetic diversity of these breeds (Nyamushamba et al. 2017). Such developments should include active farmer participation in the selection of superior indigenous sires based on local breeding objectives, using a community-based breeding program model (Mapiye et al. 2019).
The CGIAR study found that AI was the most widely used reproductive biotechnology in all four countries, especially in large-scale dairy systems. However, it has proven more difficult to successfully implement in smallholder cattle production systems in developing countries due to logistical and institutional challenges (Ojango et al. 2016).
In other LMIC, crossbreeding via AI has been used to try to intensify the beef cattle sector, with limited success. For example, in Indonesia in the 1980s, the government promoted the AI of local Ongole cattle with Simmental and Limousin semen to produce more productive F1 animals. In this country of 270 million people and 17 million cattle, 90% of cattle production comes from smallholder farming systems, with about 6.5 million farmers living in rural areas. These crossbred animals were not supported with better feed and health services, which limited their potential, and the cattle keeping systems did not become more efficient through crossbreeding (Agus and Mastuti Widi 2018). More recently, a program whose name translates as "a cow must be pregnant" was launched in 2016, with a target of 4 million productive cows inseminated to produce 3 million calves; this time the program was supported by improved feed, provided through the planting of improved pastures and legumes, and by the provision of health services. A report on the success of this program details some of the problems encountered in getting frozen semen to remote locations, difficulty in getting cattle into the right body condition score to be reproductively cycling, and lack of farming experience (Setiawan 2018). Additionally, in a survey conducted in another region of Indonesia, adoption of AI was found to be inversely correlated with farmer age and the cost of AI (Setiana et al. 2020).
In recent years, genomics has started to be used to identify animals that have both enhanced productivity and adaptation to African conditions (Marshall et al. 2019; van Marle-Köster and Visser 2018). Crossbred animals that retain some of the resilience of indigenous breeds while being more productive can improve production efficiency. In a case study of dairy production in Senegal, crossbred cattle (indigenous zebu × Bos taurus dairy breeds), identified by genomics and kept under better management, produced up to 7.5-fold higher milk yields, eightfold higher household profit, and threefold lower greenhouse gas emission intensity, per cow per annum, compared with indigenous zebu kept under poorer management, for a typical herd size of eight animals (Salmon et al. 2018). Nevertheless, there are glaring disparities in the implementation of GS in LMIC, and even among small breeds in the developed world. GS is not a scale-neutral technology: it advantages large breeds and genetic providers over small ones, and it is difficult to implement in the absence of structured breeding programs with sufficiently sized genotyped and phenotyped reference populations. Therefore, more investment in data recording and structured breeding programs, linked to multiplication and delivery systems that can operate at scale, will be needed for genetic and genomic technologies to deliver sustained benefits in LMIC cattle production systems.
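The dependence of GS on reference population size can be made concrete with a commonly used deterministic approximation for the accuracy of genomic prediction (in the style of Daetwyler and colleagues); the formula and parameter values in the minimal sketch below are illustrative assumptions, not results from the studies cited above.

```python
# Minimal sketch: expected accuracy of genomic prediction as a function of
# reference population size, using the approximation
#     r = sqrt(N * h2 / (N * h2 + Me))
# where N  = number of genotyped and phenotyped reference animals,
#       h2 = trait heritability,
#       Me = effective number of independent chromosome segments.
# All parameter values are hypothetical and chosen only for illustration.
import math

def gs_accuracy(n_reference: int, h2: float, m_e: float) -> float:
    """Approximate accuracy of genomic estimated breeding values (GEBV)."""
    return math.sqrt(n_reference * h2 / (n_reference * h2 + m_e))

h2 = 0.3      # moderately heritable trait (assumed)
m_e = 1000.0  # effective number of chromosome segments (assumed)

for n in (500, 5_000, 50_000):
    print(f"N = {n:>6,}: accuracy ≈ {gs_accuracy(n, h2, m_e):.2f}")
```

Under these assumed values, accuracy rises from roughly 0.36 with 500 reference animals to about 0.97 with 50,000, which is the quantitative reason small populations without recording infrastructure struggle to benefit from GS.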
Additionally, genomics can provide information on important traits of indigenous breeds. For example, it is well known that African cattle have improved thermotolerance and an increased ability to regulate their body temperature (Kim et al. 2017a). It has been suggested that the greatest benefit of genomics to smallholder farmers might be the characterization of traits present in adapted native breeds, such as drought tolerance, resistance to ticks and tick-borne diseases, thermotolerance, and resistance to trypanosomosis (Kim et al. 2017b; Nyamushamba et al. 2017). Other potential genotype-derived information includes the breed composition of the animal, which may be particularly useful in devising structured crossbreeding strategies (Kuehn et al. 2011; Marshall et al. 2019; Ojango et al. 2014).
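As a toy illustration of genotype-based breed composition estimation, loosely in the spirit of Kuehn et al. (2011), the sketch below regresses an individual's allele dosages on reference-breed allele frequencies; all data are simulated, and the unconstrained least-squares fit is a crude stand-in for the constrained methods used in practice.

```python
# Toy sketch: estimate the breed composition of a crossbred animal from SNP
# genotypes by regressing allele dosages on reference-breed allele
# frequencies. All data are simulated; real analyses use constrained
# regression (proportions in [0, 1] summing to 1) over curated SNP panels.
import numpy as np

rng = np.random.default_rng(42)
n_snp = 2000

# Hypothetical reference allele frequencies for two breeds.
freq_a = rng.uniform(0.05, 0.95, n_snp)
freq_b = rng.uniform(0.05, 0.95, n_snp)

# Simulate a crossbred animal that is truly 75% breed A and 25% breed B.
true_prop_a = 0.75
p_ind = true_prop_a * freq_a + (1 - true_prop_a) * freq_b
dosage = rng.binomial(2, p_ind) / 2.0  # observed allele dosage, scaled to [0, 1]

# Unconstrained least squares, then clip and renormalize the coefficients.
X = np.column_stack([freq_a, freq_b])
coef, *_ = np.linalg.lstsq(X, dosage, rcond=None)
prop = np.clip(coef, 0.0, None)
prop /= prop.sum()
print(f"Estimated composition: breed A = {prop[0]:.2f}, breed B = {prop[1]:.2f}")
```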
GnEd could potentially be a useful tool for genetic improvement of cattle in LMIC because it can efficiently introduce useful Mendelian traits from other breeds into existing, locally adapted breeds, rather than having to introgress useful alleles via crossbreeding. Additionally, GnEd could be used to introduce novel beneficial traits (e.g., disease resistance), possibly from different species. In Africa, a particular focus has been placed on using GnEd to combat animal disease (Karavolias et al. 2021). One approach is to gene edit virulence genes of parasites, such as Theileria parva, to weaken the pathogen so that it could be used to develop a more effective vaccine against East Coast Fever, a disease estimated to kill one cow every 30 seconds across a dozen African countries (Enahoro et al. 2019; Karembu 2021). Alternatively, GnEd could be used to introduce disease resistance into indigenous breeds of cattle. For example, the Apolipoprotein L1 (ApoL1) gene has been found to confer resistance to trypanosomiasis in primates (O'Toole et al. 2017), and African researchers are currently working to use the CRISPR/Cas9 system to knock in ApoL1 in an indigenous goat breed (Karembu 2021). If successful, this GnEd scheme could also be used to combat the devastating disease of trypanosomiasis in cattle.
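For readers unfamiliar with how CRISPR/Cas9 targets are chosen, the generic sketch below shows only the very first step of guide design, scanning a sequence for 20-nt protospacers adjacent to an SpCas9 "NGG" PAM; the demo sequence is invented (it is not from ApoL1 or any real locus), and real guide design additionally scores on-target activity and genome-wide off-targets.

```python
# Generic first step of CRISPR/Cas9 guide design: scan a target sequence
# (forward strand only, for brevity) for 20-nt protospacers immediately
# followed by the "NGG" PAM recognized by SpCas9.
import re

def find_spcas9_sites(seq: str):
    """Yield (position, protospacer, pam) for each candidate site."""
    seq = seq.upper()
    # A lookahead allows overlapping candidate sites to be reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        yield m.start(), m.group(1), m.group(2)

demo = "ATGCCGTACGTTAGCAGGTACCGATCGATTACGCTAGCTAGGCTAGCATCGGCTAGCTAAGG"
for pos, protospacer, pam in find_spcas9_sites(demo):
    print(f"pos {pos:2d}  guide {protospacer}  PAM {pam}")
```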
It is important to keep in mind that the effective and efficient use of GnEd will require infrastructure to perform ART, both to facilitate the production of edited animals and to disseminate improved traits. To accelerate rates of genetic gain, a structured breeding program, ideally including GS, should be used to ensure that the best (i.e., highest genetic merit) animals are put forward as parents of the next generation and/or as selection candidates. This alone would improve production and accelerate genetic improvement, even in the absence of GnEd. Additionally, surrogate sires distributing elite, locally adapted genetics, with or without useful GnEd traits, could provide a workable approach for the more widespread dissemination of improved genetics via natural service.
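The way ART and GS jointly accelerate genetic gain can be illustrated numerically with the breeder's equation, ΔG per year = (i × r × σA) / L, which the concluding section below also invokes; the equation is standard, but the parameter values in this sketch are hypothetical and chosen only to show the trade-off.

```python
# Minimal sketch of the breeder's equation,
#     annual genetic gain = (i * r * sigma_A) / L,
# where i = selection intensity, r = accuracy of selection,
# sigma_A = additive genetic standard deviation, L = generation interval (yr).
# Parameter values are hypothetical and chosen only for illustration.

def annual_gain(i: float, r: float, sigma_a: float, gen_interval: float) -> float:
    return i * r * sigma_a / gen_interval

SIGMA_A = 1.0  # express gain in additive genetic SD units

# Conventional progeny testing: high accuracy, but a long generation interval.
conventional = annual_gain(i=1.5, r=0.80, sigma_a=SIGMA_A, gen_interval=6.0)

# GS combined with ART on young animals: somewhat lower GEBV accuracy, but a
# much shorter generation interval and higher selection intensity.
gs_plus_art = annual_gain(i=2.0, r=0.70, sigma_a=SIGMA_A, gen_interval=2.5)

print(f"Conventional progeny testing: {conventional:.2f} genetic SD / year")
print(f"GS + ART:                     {gs_plus_art:.2f} genetic SD / year")
```

Under these assumed values, shortening the generation interval and raising selection intensity more than offset the drop in accuracy (0.56 vs. 0.20 genetic SD per year), which is the arithmetic behind combining GS, ART, and GnEd within a single breeding scheme.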
Regulatory considerations for tools for the genetic improvement of cattle
Regulation of GS
Animals produced from conventional breeding methods are routinely evaluated by breeders for changes in productivity, reproductive efficiency, response to disease, and quality characteristics. However, they are not subject to regulatory approval, other than the general requirement that it is illegal to sell unsafe food irrespective of the breeding method used to produce it. Regulatory agencies do not evaluate new conventionally bred varieties or breeds for health and environmental safety or approve their sale prior to commercial release; nor are such animals evaluated for unintended effects at the molecular level. There are more than 86.5 million known genetic variants between different breeds of cattle, including 2.5 million insertions and deletions of one or more base pairs of DNA, and 84 million single nucleotide variants (Hayes and Daetwyler 2019). Genetic variation per se does not pose a unique hazard as it relates to food safety (Van Eenennaam et al. 2019). These variations fuel genetic improvement programs and drive GS, which was rapidly adopted in livestock breeding programs globally in the absence of any specific regulatory oversight, approvals, or public controversy.
Regulation of cloning
In North America, South America, and New Zealand, cloning for agricultural purposes is not legally restricted (Table 2). Additionally, both the U.S. Food and Drug Administration (FDA), in 2008, and the European Food Safety Authority (EFSA), in 2012, concluded that products derived from animal clones are not different from those derived from non-cloned animals. However, in the European Union (EU), food derived from animal clones falls under the 'Novel Foods Regulation' as food derived from animals obtained by non-traditional breeding practices. Current regulation in the EU has placed a ban on food products from animal clones due, among other things, to ethical considerations regarding animal welfare. This ban does not cover products from their progeny, which are considered indistinguishable from traditionally bred livestock (van der Berg et al. 2019). Currently, no company in Europe is contemplating bringing products derived from animal clones, or their offspring, to market (Galli and Lazzari 2021). In contrast, several companies in other parts of the world now specialize in cloning farm animals (van der Berg et al. 2019). A Supply Chain Management Program to identify cloned livestock in the U.S. was set up by Viagen and Trans Ova in 2007 and run from 2008 until 2012; however, according to these companies, no other cloning companies showed interest in participating, and the program was never accessed by industry (van der Berg et al. 2019).
Table 2 Regulation of animal cloning, transgenesis and gene editing in livestock in the main countries exporting beef to the European Union (EU)
Regulation of genetic engineering (Transgenesis)
Genetically engineered (GE) or transgenic cattle have been around since the 1990s, but none have ever been successfully commercialized for food or feed production. In 2008, the Codex Alimentarius Commission published guidelines for the safety assessment of foods derived from recombinant DNA (rDNA) animals (FAO/WHO 2008). An "rDNA animal" is defined as "an animal in which the genetic material has been changed through in vitro nucleic acid techniques, including rDNA and direct injection of nucleic acid into cells or organelles." The guidelines recommend evaluations of product composition and animal health as essential steps in ensuring the safety of food derived from rDNA animals. Only a single GE food animal application has ever been sold for food consumption, the fast-growing AquAdvantage salmon, and even then, only in Canada and the U.S.; the regulatory approval process for this product took over 20 years and several million dollars (Van Eenennaam et al. 2021). A second GE food animal approval, for an Alpha-gal (galactose-α-1,3-galactose) knockout "GalSafe" pig, was announced by the FDA in 2020, for a line of pigs first reported in the literature in 2003 (Phelps et al. 2003). This pig was developed using a traditional gene knockout approach and carries a plasmid (pPL657) rDNA construct disrupting the Alpha-gal gene, along with the neomycin phosphotransferase (nptII) selection marker gene, in its genome. The approval allows a single swine farm to produce a maximum of 1,000 GalSafe® pigs annually, raised in the absence of aminoglycosides such as neomycin, to produce meat that is safe for people with Alpha-gal syndrome as well as porcine-based materials for human medical products.
Regulation of GnEd
The regulatory picture for GnEd is currently mixed (Table 2). Argentina was the first country to publish its proposed approach to the regulation of GnEd organisms. The trigger for regulation is whether animals carry a "novel combination of genetic material" (i.e., are transgenic). Those that do will be considered a "GMO" (Genetically Modified Organism) under Argentine law, and those that do not will not trigger additional regulatory oversight (Whelan and Lema 2015). The Argentine regulation calls for GnEd plants and animals to be presented to the biosafety commission in order to establish, on a case-by-case basis, whether each is a GMO. An interesting aspect of this regulation is the opportunity to present projects at the "design stage," whereby the commission issues a preliminary opinion based on the expected outcome of the project. Later, when the plants or animals have been obtained and fully characterized, applicants must present a follow-up report that will be used to establish a final decision. That determination is based mostly on any changes present in the genome of the product intended to be sold commercially.
Conversely, in the EU, New Zealand, and the U.S., GnEd is being treated as equivalent to GE, with implications for global trade.
The Department of Biotechnology in India published draft guidelines for GnEd regulation in 2020. These guidelines propose a tiered approach depending upon the characteristics of the end product, but they include requirements for quite extensive characterization of trait efficacy and phenotypic equivalence of GnEd organisms; these requirements are triggered solely by the use of GnEd and are not applied to plants and animals resulting from conventional breeding.
To date, no African nation has passed regulations for GnEd animals, but, as in India, proposed guidelines are being drafted in many countries. Kenya has begun drafting guidelines to regulate GnEd products, using the Argentinean approach as a model. The draft guidelines define what needs to be regulated, what is partially regulated, and what is not regulated at all. Kenya's National Biosafety Authority (NBA) has approved, at the research level, six genome editing applications in agriculture, including one focused on making pigs resistant to African swine fever. Other applications include improving banana and yam to resist two destructive plant viruses.
The FDA's decision to regulate GnEd animals (or, more correctly, the intentional alterations in the genome of animals) as new animal drugs, irrespective of product risk, was made in the absence of public discourse. Similarly, the European Court of Justice's decision that genome edited organisms would be subject to the full range of testing and regulation as if they were transgenic, according to the EU Directive, was made without engagement with the public, effectively side-stepping any process of wider societal discussion (Bruce and Bruce 2019). In considering this decision, these authors wrote, "regulation sets bounds to what can be done, who can do it and under what conditions can things be done. But if there has been no discussion with the public, this could be argued to be a case where regulation has been socially premature, and not done on behalf of the society."
Interestingly, following the United Kingdom's (UK) departure from the EU, the UK government's Department for Environment, Food and Rural Affairs (DEFRA) held a public consultation in 2021 on whether GnEd technology should be regulated in the same way as GE when it yields a result that could have been produced by conventional breeding. Following this consultation, it was determined that UK plant researchers planning to conduct field trials of GnEd plants no longer need to submit risk assessments to DEFRA, but UK research involving GnEd animals will continue to be regulated as before to ensure animal-welfare standards are met (Ledford 2021).
While a highly precautionary regulatory approach may be of little consequence in food-secure developed regions like North America and the EU, such an approach is likely to hinder the adoption of GnEd in some of the LMIC that could most benefit from targeted applications, such as disease-resistant livestock. For resource-poor Africa, responding to the promises and challenges of GnEd is likely to be complex, not least because most countries lack the capacity for regulatory oversight. Additionally, if GnEd livestock are not required to undergo unique regulatory approval in some parts of the world, they will not necessarily be segregated from conventionally bred animals, and there will often be no way to uniquely detect the products derived from them, especially if the genetic alteration already exists in the target population. This is somewhat analogous to the situation for clones, where there is no molecular way to differentiate or track the products of a clone as compared with those of its progenitor.
Public perception of GnEd
In countries where food security is not a priority, consumer acceptance of GnEd animals is expected to be lower, especially for applications offering economic advantages mainly to the livestock producer. Bruce and Bruce (2019) considered two examples of GnEd in livestock, hornless cattle and disease-resistant pigs, from the perspective of Responsible Research and Innovation (RRI). They suggested that the public's knowledge gap regarding current practices in livestock agriculture could lead to unexpected outcomes from public consultations. For example, if an argument is made for using GnEd to introduce the POLLED allele, the advantage of polled cattle might not be immediately obvious to those not versed in agricultural practice, and more generally "the need for dehorning may be considered shocking by some publics" (Bruce and Bruce 2019).
A 2017 public consultation performed by the UK Royal Society found that GnEd animal applications targeting reduced antibiotic use, greenhouse gas emissions, and zoonotic disease transmission were all deemed acceptable (van Mil et al. 2017). However, a major preoccupation of participants in this consultation was to ensure that GnEd was used to address inequality. The participants were particularly concerned about who owns the technology, who gets rich from its use, and whether it could be used to unfairly obtain monopoly power (van Mil et al. 2017). This raises interesting questions regarding whether the GnEd regulatory approaches proposed in the U.S. and EU are fit for purpose (Van Eenennaam et al. 2019), as they advantage large companies and incentivize intellectual property protection, the latter of which may prove disruptive to the cattle breeding industry (Bruce 2017).
Evidence from Mora et al. (2012) suggested that, if geographic differences are considered, consumers' acceptance of GE animals would be higher in developing countries, where the requirement for enhanced food production might be met by application of this technology (Van Eenennaam and Young 2018). Historically, the debates around GE crops in Africa have been dominated by a few elite scientists or largely international NGOs, leading to a polarization that bypassed the farmers most directly affected by decisions. Roughly 65% of Africa's population relies on smallholder farming, and these farms are not highly productive. To date, only eight African countries have commercialized GE crops: Burkina Faso, Eswatini, Ethiopia, Kenya, Malawi, Nigeria, Sudan, and South Africa, mostly insect-resistant Bt cotton and, recently, Bt cowpea in Nigeria. Kenya, Nigeria, and Eswatini are leading agricultural GnEd research, as they see its potential to increase farmers' income in Africa. As of yet, there is little research specifically gauging the acceptability of GnEd livestock in LMIC, especially among the smallholder farmers and livestock keepers who would be most affected by any decisions around the technology.
Conclusions

Genetic improvement of livestock around the globe has been, and will continue to be, an important driver of the sustainability of animal agriculture. Livestock genetic improvement programs, beginning with selective breeding using statistical prediction methods (e.g., EBVs) and more recently GS, in combination with ART, have enabled more accurate selection and intense utilization of genetically superior parents for the next generation to accelerate rates of genetic gain. Most recently, the ability to use GnEd to inactivate targeted gene function (i.e., knockout genes), knock in genes, or achieve allele introgression in the absence of undesired linkage drag offers promising opportunities to introduce useful genetic variation into livestock breeding programs.

GnEd experiments in cattle have primarily focused on three main areas of improvement: (1) animal health and welfare, (2) product yield or quality, and (3) reproduction or novel breeding schemes, all areas that are highly aligned with the goals of conventional breeding programs. Presently, GnEd is well suited for introgressing alleles affecting typically qualitative, Mendelian traits at a more rapid pace than is possible using conventional selection alone. However, most of the traits that animal breeders seek to improve are polygenic and quantitative. Additionally, GnEd in livestock is only possible through the use of ART. Therefore, for GnEd to be an effective tool for genetic change, it will need to integrate seamlessly into a structured breeding program with a clear breeding objective and ideally be used in conjunction with ART and GS to accelerate genetic gain by simultaneously altering multiple components of the breeder's equation.

To accomplish this, several GnEd schemes have been modeled for livestock populations. The most efficient schemes have relied heavily on widespread adoption of ART, especially commercial-sector use of AI. Considering the currently limited adoption of AI around the world, and specifically in the commercial beef industry, novel breeding schemes, such as GnEd applied to surrogate sire production, will likely be required to widely disseminate desired traits improved via GnEd. The lack of global regulatory harmonization around GnEd animals and products from these animals, including semen and embryos, will pose challenges for global trade and for aspects of traceability in both animal breeding and the food chain.
Abbreviations

AAA:
American Angus Association
ABCZ:
Brazilian Zebu Cattle Association
AI:
Artificial insemination
ApoL1:
Apolipoprotein L1
BIF:
Beef Improvement Federation
BLG:
Beta-lactoglobulin
BRD:
Bovine respiratory disease
BSE:
Bovine spongiform encephalopathy
CAS:
Southern Agricultural Council
CDCB:
Council on Dairy Cattle Breeding
CGIAR:
Consultative Group on International Agricultural Research
CONABIA:
National Advisory Commission on Agricultural Biotechnology
CPI:
Cytoplasmic injection
CRISPR/Cas9:
Clustered regularly interspaced short palindromic repeats and associated protein 9
CSN2:
Beta-casein
CTNBio:
National Technical Biosafety Commission
DAZL:
Deleted in AZoospermia Like
DEFRA:
Department for Environment, Food and Rural Affairs
DSB:
Double-strand breaks
FAO:
Food and Agriculture Organization of the United Nations
FDA:
U.S. Food and Drug Administration
FD&C Act:
Food, Drug and Cosmetic Act
EBV:
Estimated breeding value
ECNT:
Embryonic cell nuclear transfer
EFSA:
European Food Safety Authority
EMBRAPA:
Brazilian Agriculture and Livestock Research Enterprise
EP:
Electroporation
EPSC:
Expanded potential stem cells
EPD:
Expected progeny difference
ESC:
Embryonic stem cells
ET:
Embryo transfer
JIVET:
Juvenile in vitro ET
GE:
Genetically engineered
GEBV:
Genomic estimated breeding values
GED:
Gene edited descendants
GEF:
Gene edited founders
GMO:
Genetically Modified Organism
GnEd:
Gene editing
GS:
Genomic selection
GSE:
Genomic screening of embryos
HDR:
Homology-directed repair
IARS:
Isoleucyl-tRNA synthetase
ICAR:
International Committee for Animal Recording
ICBF:
Irish Cattle Breeding Federation
ICM:
Inner cell mass
ILRI:
International Livestock Research Institute
iPSC:
Induced pluripotent stem cells
ITGB2:
Integrin subunit beta 2
IVB:
In vitro Breeding
IVP:
In vitro Embryo production
IVC:
In vitro Culture
IVF:
In vitro Fertilization
IVM:
In vitro Maturation
LacS:
Sulfolobus solfataricus beta-glycosidase
LMIC:
Low-to-Middle-Income Countries
MLA:
Meat and Livestock Australia
MOET:
Multiple ovulation embryo transfer
MSTN:
Myostatin
NANOS2:
Nanos C2HC-Type Zinc Finger 2
NASEM:
National Academies of Sciences, Engineering, and Medicine
NEPA:
National Environmental Policy Act
NGO:
Non-governmental organization
NHEJ:
Non-homologous end joining
NM$:
Lifetime Net Merit selection index
NT:
Nuclear transfer
OGTR:
Office of the Gene Technology Regulator
OPU:
Ovum pick-up
PAGE:
Promotion of alleles by gene editing
PC:
POLLED, Celtic allele
PCR:
Polymerase chain reaction
PGCLC:
Primordial germ cell-like cells
PMEL:
Premelanosomal protein gene
PRLR:
Prolactin receptor
PRNP:
Prion protein
PTA:
Predicted transmitting ability
QTN:
Quantitative trait nucleotides
RAAA:
Red Angus Association of America
rDNA:
Recombinant DNA
SNP:
Single nucleotide polymorphisms
SCNT:
Somatic cell nuclear transfer
SRY:
Sex determining region Y protein
SSC:
Spermatogonial stem cells
TAI:
Timed artificial insemination
TET:
Timed embryo transfer
TALEN:
Transcription activator-like effector nucleases
U.S.:
United States
USDA:
United States Department of Agriculture
USDA-ARS-AGIL:
United States Department of Agriculture-Agricultural Research Service-Animal Genomics and Improvement Laboratory
ZFN:
Zinc finger nucleases
References

American Angus Association (AAA). Breeder's Reference Guide - Part 2: Association Rules, 104-f. St. Joseph, MO; 2021. https://www.angus.org/pub/brg_part2.pdf. Accessed 10 Oct 2021.
Agus A, Mastuti Widi TS. Current situation and future prospects for beef cattle production in Indonesia — A review. Asian-Australas J Anim Sci. 2018;31(7):976–83. https://doi.org/10.5713/ajas.18.0233.
Akagi S, Geshi M, Nagai T. Recent progress in bovine somatic cell nuclear transfer. Anim Sci J. 2013;84(3):191–9. https://doi.org/10.1111/asj.12035.
Allan MF. Past, Present and Future of Genetic Embryo Testing in Cattle. Beef Improvement Federation (BIF) Research Symposium and Convention; Brookings, SD, June 18–21, 2019.
Banks R. Challenges with investing in genetic improvement for the Australian extensive livestock industries. Aust J Exp Agric. 2005;45(8):1033–9.
Baruselli PS, Catussi BLC, Abreu LÂ, Elliff FM, Silva LG, Batista EOS. Challenges to increase the AI and ET markets in Brazil. Anim Reprod. 2019;16:364–75.
Baruselli PS, Ferreira RM, Sá Filho MF, Bó GA. Review: Using artificial insemination v natural service in beef herds. Animal. 2018;12(1):s45–52. https://doi.org/10.1017/S175173111800054X.
Barwick SA, Henzell AL, Herd RM, Walmsley BJ, Arthur PF. Methods and consequences of including reduction in greenhouse gas emission in beef cattle multiple-trait selection. Genet Sel Evol. 2019;51(1):18. https://doi.org/10.1186/s12711-019-0459-5.
Bastiaansen JWM, Bovenhuis H, Groenen MAM, Megens H-J, Mulder HA. The impact of genome editing on the introduction of monogenic traits in livestock. Genet Sel Evol. 2018;50(1):18. https://doi.org/10.1186/s12711-018-0389-7.
Berry DP, Garcia JF, Garrick DJ. Development and implementation of genomic predictions in beef cattle. Anim Front. 2016;6(1):32–8. https://doi.org/10.2527/af.2016-0005.
Bertolini M, Bertolini L. Advances in reproductive technologies in cattle: from artificial insemination to cloning. Revis Fac Med Vet Zootecnia. 2009;56(3):184–94.
Bevacqua RJ, Fernandez-Martín R, Savy V, Canel NG, Gismondi MI, Kues WA, et al. Efficient edition of the bovine PRNP prion gene in somatic cells and IVF embryos using the CRISPR/Cas9 system. Theriogenology. 2016;86(8):1886-96.e1. https://doi.org/10.1016/j.theriogenology.2016.06.010.
BIFa. Data From Gene Edited Animals. Beef Improvement Federation (BIF) Guidelines Wiki; 2021a. http://guidelines.beefimprovement.org/index.php?title=Data_From_Gene_Edited_Animals&oldid=2474. Accessed 15 Oct 2021.
BIFb. Embryo Transfer (ET): Data Collection And Utilization. Beef Improvement Federation (BIF) Guidelines Wiki; 2021b. http://guidelines.beefimprovement.org/index.php/Embryo_Transfer_(ET):_Data_Collection_And_Utilization. Accessed 15 Oct 2021.
BIFc. Expected Progeny Difference. Beef Improvement Federation (BIF) Guidelines Wiki; 2021c. http://guidelines.beefimprovement.org/index.php/Expected_Progeny_Difference. Accessed 15 Oct 2021.
BIFd. Guidelines for Uniform Beef Improvement Programs. Beef Improvement Federation (BIF) Guidelines Wiki; 2021d. http://guidelines.beefimprovement.org/index.php/Guidelines_for_Uniform_Beef_Improvement_Programs. Accessed 15 Oct 2021.
Bishop TF, Van Eenennaam AL. Genome editing approaches to augment livestock breeding programs. J Exp Biol. 2020;223(1): 207159. https://doi.org/10.1242/jeb.207159.
Blomberg LA, Telugu B. Twenty years of embryonic stem cell research in farm animals. Reprod Domest Anim. 2012;47:80–5. https://doi.org/10.1111/j.1439-0531.2012.02059.x.
Bogliotti YS, Wu J, Vilarino M, Okamura D, Soto DA, Zhong C, et al. Efficient derivation of stable primed pluripotent embryonic stem cells from bovine blastocysts. Proc Natl Acad Sci. 2018;115(9):2090–5. https://doi.org/10.1073/pnas.1716161115.
Bondioli KR. Embryo sexing: a review of current techniques and their potential for commercial amdication in livestock production. J Anim Sci. 1992;70(2):19–29. https://doi.org/10.2527/1992.70suppl_219x.
Bousquet D, Blondin P. Review: potential uses of cloning in breeding schemes: dairy cattle. Cloning Stem Cells. 2004;6(2):190–7. https://doi.org/10.1089/1536230041372373.
Brogliatti GM, Adams GP. Ultrasound-guided transvaginal oocyte collection in prepubertal calves. Theriogenology. 1996;45(6):1163–76. https://doi.org/10.1016/0093-691X(96)00072-6.
Bruce A. Genome edited animals: Learning from GM crops? Transgenic Res. 2017;26(3):385–98. https://doi.org/10.1007/s11248-017-0017-2.
Bruce A, Bruce D. Genome editing and responsible innovation, can they be reconciled? J Agric Environ Ethics. 2019;32(5):769–88. https://doi.org/10.1007/s10806-019-09789-w.
Burrow HM. The effects of inbreeding on productive and adaptive traits and temperament of tropical beef cattle. Livest Prod Sci. 1998;55(3):227–43. https://doi.org/10.1016/S0301-6226(98)00139-0.
Capper JL, Cady RA. The effects of improved performance in the US dairy cattle industry on environmental impacts between 2007 and 2017. J Anim Sci. 2019;98:1. https://doi.org/10.1093/jas/skz291.
Capper JL, Cady RA, Bauman DE. The environmental impact of dairy production: 1944 compared with 2007. J Anim Sci. 2009;87(6):2160–7. https://doi.org/10.2527/jas.2009-1781.
Carlson DF, Lancto CA, Zang B, Kim E-S, Walton M, Oldeschulte D, et al. Production of hornless dairy cattle from genome-edited cell lines. Nat Biotechnol. 2016;34(5):479–81. https://doi.org/10.1038/nbt.3560.
Carlson DF, Tan W, Lillico SG, Stverakova D, Proudfoot C, Christian M, et al. Efficient TALEN-mediated gene knockout in livestock. Proc Natl Acad Sci. 2012;109(43):17382–7. https://doi.org/10.1073/pnas.1211446109.
Cenariu M, Pall E, Cernea C, Groza I. Evaluation of bovine embryo biopsy techniques according to their ability to preserve embryo viability. J Biomed Biotechnol. 2012;2012: 541384. https://doi.org/10.1155/2012/541384.
Chang J, Peng S, Yin Y, Ciais P, Havlik P, Herrero M. The key role of production efficiency changes in livestock methane emission mitigation. AGU Advances. 2021;2(2): e2021000391. https://doi.org/10.1029/2021AV000391.
Chohan KR, Hunter AG. In vitro maturation, fertilization and early cleavage rates of bovine fetal oocytes. Theriogenology. 2004;61(2–3):373–80. https://doi.org/10.1016/S0093-691X(03)00220-6.
Choi W, Kim E, Yum S-Y, Lee C, Lee J, Moon J, et al. Efficient PRNP deletion in bovine genome using gene-editing technologies in bovine cells. Prion. 2015;9(4):278–91. https://doi.org/10.1080/19336896.2015.1071459.
Ciccarelli M, Giassetti MI, Miao D, Oatley MJ, Robbins C, Lopez-Biladeau B, et al. Donor-derived spermatogenesis following stem cell transplantation in sterile NANOS2 knockout males. Proc Natl Acad Sci. 2020;117(39):24195–204. https://doi.org/10.1073/pnas.2010102117.
de Haas Y, Davis S, Reisinger A, Richards MB, Difford G, Lassen J. Practice Brief: Improved ruminant genetics: implementation guidance for policymakers and investors. Global Alliance for Climate-Smart Agriculture; 2016. https://globalresearchalliance.org/wp-content/uploads/2018/02/CSA-Practice-Brief_Animal-Breeding-Sept-2016.pdf.
de Sousa RV, da Silva Cardoso CR, Butzke G, Dode MAN, Rumpf R, Franco MM. Biopsy of bovine embryos produced in vivo and in vitro does not affect pregnancy rates. Theriogenology. 2017;90:25–31. https://doi.org/10.1016/j.theriogenology.2016.11.003.
Duby RT, Damiani P, Looney CR, Fissore RA, Robl JM. Prepuberal calves as oocyte donors: Promises and problems. Theriogenology. 1996;45(1):121–30. https://doi.org/10.1016/0093-691X(95)00361-B.
Enahoro D, Herrero M, Johnson N. Promising options for improving livestock production and productivity in developing countries. Nairobi, Kenya: ILRI: ILRI Project Report; 2019. https://hdl.handle.net/10568/105759.
Evans MJ, Kaufman MH. Establishment in culture of pluripotential cells from mouse embryos. Nature. 1981;292(5819):154–6. https://doi.org/10.1038/292154a0.
Ezashi T, Yuan Y, Roberts RM. Pluripotent stem cells from domesticated mammals. Annu Rev Anim Biosci. 2016;4:223–53.
Food and Agriculture Organization of the United Nations. FAOSTAT Statistical Database-Production-Live Animals. Rome, Italy; 2021a. http://www.fao.org/faostat/en/?#data/QA. Accessed 10 Mar 2021.
Food and Agriculture Organization of the United Nations. FAOSTAT Statistical Database-Production-Livestock Primary. Rome, Italy; 2021b. http://www.fao.org/faostat/en/?#data/QL. Accessed 10 Mar 2021.
Food and Agriculture Organization of the United Nations/World Health Organization. Guideline for the Conduct of Food Safety Assessment of Foods Derived From Recombinant-DNA Animals CAC/CL 68–2008. Rome, Italy; 2008. https://www.who.int/docs/default-source/food-safety/food-genetically-modified/cxg-068e.pdf?sfvrsn=c9de948e_2. Accessed 30 Oct 2021.
Fennessy P, Byrne T, Proctor L, Amer P. The potential impact of breeding strategies to reduce methane output from beef cattle. Anim Prod Sci. 2019;59(9):1598–610.
Figueiredo JR, Hulshof SCJ, Van den Hurk R, Ectors FJ, Fontes RS, Nusgens B, et al. Development of a combined new mechanical and enzymatic method for the isolation of intact preantral follicles from fetal, calf and adult bovine ovaries. Theriogenology. 1993;40(4):789–99. https://doi.org/10.1016/0093-691X(93)90214-P.
Fisher P, DL H, et al. Potential for genomic selection of bovine embryos. In: Proceedings of the New Zealand Society of Animal Production; Christchurch: New Zealand Society of Animal Production.
Gaj T, Gersbach CA, Barbas CF. ZFN, TALEN and CRISPR/Cas-based methods for genome engineering. Trends Biotechnol. 2013;31(7):397–405. https://doi.org/10.1016/j.tibtech.2013.04.004.
Galli C, Lazzari G. 25th ANNIVERSARY OF CLONING BY SOMATIC-CELL NUCLEAR TRANSFER: Current applications of SCNT in advanced breeding and genome editing in livestock. Reproduction. 2021;162(1):F23–32. https://doi.org/10.1530/rep-21-0006.
Gao Y, Wu H, Wang Y, Liu X, Chen L, Li Q, et al. Single Cas9 nickase induced generation of NRAMP1 knockin cattle with reduced off-target effects. Genome Biol. 2017;18(1):13. https://doi.org/10.1186/s13059-016-1144-4.
García-Ruiz A, Cole JB, VanRaden PM, Wiggans GR, Ruiz-López FJ, Van Tassell CP. Changes in genetic selection differentials and generation intervals in US Holstein dairy cattle as a result of genomic selection. Proc Natl Acad Sci. 2016;113(28):E3995–4004. https://doi.org/10.1073/pnas.1519061113.
Gaspa G, Veerkamp RF, Calus MPL, Windig JJ. Assessment of genomic selection for introgression of polledness into Holstein Friesian cattle by simulation. Livest Sci. 2015;179:86–95. https://doi.org/10.1016/j.livsci.2015.05.020.
Georges M, Charlier C, Hayes B. Harnessing genomic information for livestock improvement. Nat Rev Genet. 2019;20(3):135–56. https://doi.org/10.1038/s41576-018-0082-2.
Georges M, Massey JM. Velogenetics, or the synergistic use of marker assisted selection and germ-line manipulation. Theriogenology. 1991;35(1):151–9. https://doi.org/10.1016/0093-691X(91)90154-6.
Giassetti MI, Ciccarelli M, Oatley JM. Spermatogonial stem cell transplantation: insights and outlook for domestic animals. Annu Rev Anim Biosci. 2019;7(1):385–401. https://doi.org/10.1146/annurev-animal-020518-115239.
Goszczynski DE, Cheng H, Demyda-Peyrás S, Medrano JF, Wu J, Ross PJ. In vitro breeding: application of embryonic stem cells to animal production. Biol Reprod. 2018;100(4):885–95. https://doi.org/10.1093/biolre/ioy256.
Gottardo F, Nalon E, Contiero B, Normando S, Dalvit P, Cozzi G. The dehorning of dairy calves: Practices and opinions of 639 farmers. J Dairy Sci. 2011;94(11):5724–34. https://doi.org/10.3168/jds.2011-4443.
Gottardo P, Gorjanc G, Battagin M, Gaynor RC, Jenko J, Ros-Freixedes R, et al. A strategy to exploit surrogate sire technology in livestock breeding programs. Genes Genome Genet. 2019;9(1):203–15. https://doi.org/10.1534/g3.118.200890.
Granleese T, Clark SA, Swan AA, van der Werf JHJ. Increased genetic gains in sheep, beef and dairy breeding programs from using female reproductive technologies combined with optimal contribution selection and genomic breeding values. Genet Sel Evol. 2015;47(1):70. https://doi.org/10.1186/s12711-015-0151-3.
Hayashi K, Ogushi S, Kurimoto K, Shimamoto S, Ohta H, Saitou M. Offspring from oocytes derived from in vitro primordial germ cell-like cells in mice. Science. 2012;338(6109):971–5. https://doi.org/10.1126/science.1226889.
Hayashi K, Ohta H, Kurimoto K, Aramaki S, Saitou M. Reconstitution of the mouse germ cell specification pathway in culture by pluripotent stem cells. Cell. 2011;146(4):519–32. https://doi.org/10.1016/j.cell.2011.06.052.
Hayes BJ, Daetwyler HD. 1000 Bull genomes project to map simple and complex genetic traits in cattle: applications and outcomes. Annu Rev Anim Biosci. 2019;7(1):89–102. https://doi.org/10.1146/annurev-animal-020518-115024.
Hayes BJ, Lewin HA, Goddard ME. The future of livestock breeding: genomic selection for efficiency, reduced emissions intensity, and adaptation. Trends Genet. 2013;29(4):206–14. https://doi.org/10.1016/j.tig.2012.11.009.
Hennig SL, McNabb BR, Trott JF, Van Eenennaam AL, Murray JD. LincRNA#1 knockout does not affect polled phenotype in cattle. Sci Rep. 2021;8:90.
Hennig SL, Owen JR, Lin JC, McNabb BR, Van Eenennaam AL, Murray JD. Can CRISPR-mediated deletions result in a polled phenotype in Cattle? Sci Rep. 2021;3:78.
Hennig SL, Owen JR, Lin JC, Young AE, Ross PJ, Van Eenennaam AL, et al. Evaluation of mutation rates, mosaicism and off target mutations when injecting Cas9 mRNA or protein for genome editing of bovine embryos. Sci Rep. 2020;10(1):22309. https://doi.org/10.1038/s41598-020-78264-8.
Heo YT, Xiaoyuan Q, Nan XY, Soonbong B, Hwan C, Nam-Hyung K, et al. CRISPR/Cas9 nuclease-mediated gene knock-in in bovine-induced pluripotent cells. Stem Cells Develop. 2015;24(3):393–402. https://doi.org/10.1089/scd.2014.0278.
Herrero M, Havlík P, Valin H, Notenbaert A, Rufino MC, Thornton PK, et al. Biomass use, production, feed efficiencies, and greenhouse gas emissions from global livestock systems. Proc Natl Acad Sci. 2013;110(52):20888–93. https://doi.org/10.1073/pnas.1308149110.
Heyman Y, Chavatte-Palmer P, LeBourhis D, Camous S, Vignon X, Renard JP. Frequency and occurrence of late-gestation losses from cattle cloned embryos. Biol Reprod. 2002;66(1):6–13. https://doi.org/10.1095/biolreprod66.1.6.
Heyman Y, Vignon X, Chesné P, Le Bourhis D, Marchal J, Renard J-P. Cloning in cattle: from embryo splitting to somatic nuclear transfer. Reprod Nutr Dev. 1998;38(6):595–603.
International Committee for Animal Recording (ICAR). ICAR Guidelines: Section 6 - AI and ET. St. Joseph, MO; 2017. https://www.icar.org/Guidelines/06-AI-and-ET.pdf. Accessed 10 Oct 2021.
International Committee for Animal Recording (ICAR). ICAR Guidelines: Section 18 - Breed Associations. St. Joseph, MO; 2019. https://www.icar.org/Guidelines/18-Breed-Associations.pdf. Accessed 10 Oct 2021.
Ideta A, Yamashita S, Seki-Soma M, Yamaguchi R, Chiba S, Komaki H, et al. Generation of exogenous germ cells in the ovaries of sterile NANOS3-null beef cattle. Sci Rep. 2016;6:24983. https://doi.org/10.1038/srep24983.
IETS. Data Retrieval Committee Reports. International Embryo Transfer Society (IETS). 2000–2019. https://www.iets.org/Committees/Data-Retrieval-Committee.
Ikeda M, Matsuyama S, Akagi S, Ohkoshi K, Nakamura S, Minabe S, et al. Correction of a disease mutation using CRISPR/Cas9-assisted genome editing in Japanese Black Cattle. Sci Rep. 2017;7(1):17827. https://doi.org/10.1038/s41598-017-17968-w.
Ishikura Y, Yabuta Y, Ohta H, Hayashi K, Nakamura T, Okamoto I, et al. In vitro derivation and propagation of spermatogonial stem cell activity from mouse pluripotent stem cells. Cell Rep. 2016;17(10):2789–804. https://doi.org/10.1016/j.celrep.2016.11.026.
Ishino T, Hashimoto M, Amagasa M, Saito N, Dochi O, Kirisawa R, et al. Establishment of protocol for preparation of gene-edited bovine ear-derived fibroblasts for somatic cell nuclear transplantation. Biomed Res. 2018;39(2):95–104. https://doi.org/10.2220/biomedres.39.95.
Jenko J, Gorjanc G, Cleveland MA, Varshney RK, Whitelaw CBA, Woolliams JA, et al. Potential of promotion of alleles by genome editing to improve quantitative traits in livestock breeding programs. Genet Sel Evol. 2015;47(1):55. https://doi.org/10.1186/s12711-015-0135-3.
Kadarmideen HN, Mazzoni G, Watanabe YF, Strøbech L, Baruselli PS, Meirelles FV, et al. Genomic selection of in vitro produced and somatic cell nuclear transfer embryos for rapid genetic improvement in cattle production. Anim Reprod Sci. 2015;12(3):8.
Kambadur R, Sharma M, Smith TP, Bass JJ. Mutations in myostatin (GDF8) in double-muscled Belgian Blue and Piedmontese cattle. Genome Res. 1997;7(9):910–6. https://doi.org/10.1101/gr.7.9.910.
Karavolias NG, Horner W, Abugu MN, Evanega SN. Application of gene editing for climate change in agriculture. Front Sustain Food Syst. 2021;5:296. https://doi.org/10.3389/fsufs.2021.685801.
Karembu M. Genome Editing in Africa's Agriculture 2021: An Early Take-of. Nairobi, Kenya: International Service for the Acquisition of Agri-biotech Applications (ISAAA AfriCenter); 2021. http://africenter.isaaa.org/wp-content/uploads/2021/04/GENOME-EDITING-IN-AFRICA-FINAL.pdf
Kasinathan P, Wei H, Xiang T, Molina JA, Metzger J, Broek D, et al. Acceleration of genetic gain in cattle by reduction of generation interval. Sci Rep. 2015;5:8674. https://doi.org/10.1038/srep08674.
Kawaguchi T, Tsukiyama T, Kimura K, Matsuyama S, Minami N, Yamada M, et al. Generation of Naïve Bovine induced pluripotent stem cells using PiggyBac transposition of doxycycline-inducible transcription factors. PLoS ONE. 2015;10(8): e0135403. https://doi.org/10.1371/journal.pone.0135403.
Keefer CL. Artificial cloning of domestic animals. Proc Natl Acad Sci. 2015;112(29):8874–8. https://doi.org/10.1073/pnas.1501718112.
Khapayi M, Celliers PR. Factors limiting and preventing emerging farmers to progress to commercial agricultural farming in the King William's Town area of the Eastern Cape Province, South Africa. South Afr J Agric Extension. 2016;44:25–41.
Kim D, Jung Y-G, Roh S. Microarray analysis of embryo-derived bovine pluripotent cells: The vulnerable state of bovine embryonic stem cells. PLoS ONE. 2017;12(3):e0173278.
Kim J, Hanotte O, Mwai OA, Dessie T, Bashir S, Diallo B, et al. The genome landscape of indigenous African cattle. Genome Biol. 2017;18(1):34. https://doi.org/10.1186/s13059-017-1153-y.
Kuehn LA, Keele JW, Bennett GL, McDaneld TG, Smith TPL, Snelling WM, et al. Predicting breed composition using breed frequencies of 50,000 markers from the US Meat Animal Research Center 2,000 Bull Project 1,2. J Anim Sci. 2011;89(6):1742–50. https://doi.org/10.2527/jas.2010-3530.
Laible G, Cole S-A, Brophy B, Wei J, Leath S, Jivanji S, et al. Holstein Friesian dairy cattle edited for diluted coat color as adaptation to climate change. bioRxiv. 2020. https://doi.org/10.1101/2020.09.15.298950.
Lauri A, Lazzari G, Galli C, Lagutina I, Genzini E, Braga F, et al. Assessment of MDA efficiency for genotyping using cloned embryo biopsies. Genomics. 2013;101(1):24–9. https://doi.org/10.1016/j.ygeno.2012.09.002.
Ledford H. New rules will make UK gene-edited crop research easier. Nat News. 2021. https://doi.org/10.1038/d41586-021-01572-0.
Li P, Tong C, Mehrian-Shai R, Jia L, Wu N, Yan Y, et al. Germline competent embryonic stem cells derived from rat blastocysts. Cell. 2008;135(7):1299–310. https://doi.org/10.1016/j.cell.2008.12.006.
Lin JC, Van Eenennaam AL. Electroporation-mediated genome editing of livestock zygotes. Front Genet. 2021;12:546. https://doi.org/10.3389/fgene.2021.648482.
Liu X, Wang Y, Guo W, Chang B, Liu J, Guo Z, et al. Zinc-finger nickase-mediated insertion of the lysostaphin gene into the beta-casein locus in cloned cows. Nat Commun. 2013;4(1):2565. https://doi.org/10.1038/ncomms3565.
Liu X, Wang Y, Tian Y, Yu Y, Gao M, Hu G, et al. Generation of mastitis resistance in cows by targeting human lysozyme gene to β-casein locus using zinc-finger nucleases. Proc R Soc Biol Sci. 2014;281(1780):20133368. https://doi.org/10.1098/rspb.2013.3368.
Loi P, Toschi P, Zacchini F, Ptak G, Scapolo PA, Capra E, et al. Synergies between assisted reproduction technologies and functional genomics. Genet Sel Evol. 2016;48(1):53. https://doi.org/10.1186/s12711-016-0231-z.
Lopes RFF, Forell F, Oliveira ATD, Rodrigues JL. Splitting and biopsy for bovine embryo sexing under field conditions. Theriogenology. 2001;56(9):1383–92. https://doi.org/10.1016/S0093-691X(01)00641-0.
Luo J, Song Z, Yu S, Cui D, Wang B, Ding F, et al. Efficient Generation of Myostatin (MSTN) biallelic mutations in cattle using zinc finger nucleases. PLoS ONE. 2014;9(4): e95225. https://doi.org/10.1371/journal.pone.0095225.
Lush JL. Animal Breeding Plans. Ames, IA: Collegiate Press, Inc.; 1937.
Mapiye C, Chikwanha OC, Chimonyo M, Dzama K. Strategies for sustainable use of indigenous cattle genetic resources in Southern Africa. Diversity. 2019;11(11):214.
Mapiye O, Makombe G, Mapiye C, Dzama K. Limitations and prospects of improving beef cattle production in the smallholder sector: a case of Limpopo Province. South Africa Trop Anim Health Prod. 2018;50(7):1711–25. https://doi.org/10.1007/s11250-018-1632-5.
Mapletoft RJ, Bó GA, Baruselli PS, Menchaca A, Sartori R. Evolution of knowledge on ovarian physiology and its contribution to the widespread application of reproductive biotechnologies in South American cattle. Animal Reprod (AR). 2018;15(Supplement 1):1003–14.
Marshall K, Gibson JP, Mwai O, Mwacharo JM, Haile A, Getachew T, et al. Livestock genomics for developing countries – African examples in practice. Front Genet. 2019;10:297. https://doi.org/10.3389/fgene.2019.00297.
McFarlane GR, Salvesen HA, Sternberg A, Lillico SG. On-farm livestock genome editing using cutting edge reproductive technologies. Front Sustain Food Syst. 2019;3:106. https://doi.org/10.3389/fsufs.2019.00106.
McLean Z, Oback B, Laible G. Embryo-mediated genome editing for accelerated genetic improvement of livestock. Front Agric Sci Eng. 2020;7(2):148–60. https://doi.org/10.15302/j-fase-2019305.
McLean ZL, Appleby SJ, Wei J, Snell RG, Oback B. Testes of DAZL null neonatal sheep lack prospermatogonia but maintain normal somatic cell morphology and marker expression. Mol Reprod Dev. 2021;88(1):3–14. https://doi.org/10.1002/mrd.23443.
McPherron AC, Lee S-J. Double muscling in cattle due to mutations in the myostatin gene. Proc Natl Acad Sci. 1997;94(23):12457–61. https://doi.org/10.1073/pnas.94.23.12457.
Meuwissen T, Hayes B, Goddard M. Accelerating improvement of livestock with genomic selection. Annu Rev Anim Biosci. 2013;1(1):221–37. https://doi.org/10.1146/annurev-animal-031412-103705.
Meuwissen THE, Hayes BJ, Goddard ME. Prediction of total genetic value using genome-wide dense marker maps. Genetics. 2001;157(4):1819–29.
Miao D, Giassetti MI, Ciccarelli M, Lopez-Biladeau B, Oatley JM. Simplified pipelines for genetic engineering of mammalian embryos by CRISPR-Cas9 electroporation. Biol Reprod. 2019;101(1):177–87. https://doi.org/10.1093/biolre/ioz075.
Misica-Turner PM, Oback FC, Eichenlaub M, Wells DN, Oback B. Aggregating embryonic but not somatic nuclear transfer embryos increases cloning efficiency in cattle1. Biol Reprod. 2007;76(2):268–78. https://doi.org/10.1095/biolreprod.106.050922.
Meat and Livestock Australia Limited (MLA). Strategies to increase the adoption of AI in northern Australian tropical beef genotype herds. North Sydney, NSW; 2015. http://www.animalwelfarestandards.net.au/. Accessed 4 Apr 2019.
Moore JK, Haber JE. Cell cycle and genetic requirements of two pathways of nonhomologous end-joining repair of double-strand breaks in Saccharomyces cerevisiae. Mol Cell Biol. 1996;16(5):2164–73. https://doi.org/10.1128/mcb.16.5.2164.
Mora C, Menozzi D, Kleter G, Aramyan L, Valeeva N, Zimmermann KL, et al. Factors affecting the adoption of genetically modified animals in the food and pharmaceutical chains. Bio-based Appl Econ. 2012;1:3. https://doi.org/10.13128/BAE-11706.
Mueller ML, Cole JB, Connors NK, Johnston DJ, Randhawa IAS, Van Eenennaam AL. Comparison of Gene Editing Versus Conventional Breeding to Introgress the POLLED Allele Into the Tropically Adapted Australian Beef Cattle Population. Front Genet. 2021;12:68. https://doi.org/10.3389/fgene.2021.593154.
Mueller ML, Cole JB, Sonstegard TS, Van Eenennaam AL. Comparison of gene editing versus conventional breeding to introgress the POLLED allele into the US dairy cattle population. J Dairy Sci. 2019;102(5):4215–26. https://doi.org/10.3168/jds.2018-15892.
Mullaart E, Wells D. Embryo Biopsies for Genomic Selection. In: Niemann H, Wrenzycki C, editors. Animal Biotechnology 2: Emerging Breeding Technologies. Cham: Springer International Publishing; 2018. p. 81–94.
Mwai O, Hanotte O, Kwon YJ, Cho S. African indigenous cattle: unique genetic resources in a rapidly changing world. Asian-Australas J Anim Sci. 2015;28(7):911–21. https://doi.org/10.5713/ajas.15.0002R.
Namula Z, Wittayarat M, Hirata M, Hirano T, Nguyen NT, Le QA, et al. Genome mutation after the introduction of the gene editing by electroporation of Cas9 protein (GEEP) system into bovine putative zygotes. In Vitro Cell Dev Biol Anim. 2019;55(8):598–603. https://doi.org/10.1007/s11626-019-00385-w.
NASEM. Science Breakthroughs to Advance Food and Agricultural Research by 2030. Washington, DC: National Academies of Sciences, Engineering, and Medicine: The National Academies Press; 2018. p. 200. https://www.nap.edu/catalog/25059/science-breakthroughs-to-advance-food-and-agricultural-research-by-2030.
Nasser LF, Reis EL, Oliveira MA, Bó GA, Baruselli PS. Comparison of four synchronization protocols for fixed-time bovine embryo transfer in Bos indicus x Bos taurus recipients. Theriogenology. 2004;62(9):1577–84. https://doi.org/10.1016/j.theriogenology.2004.03.013.
Nyamushamba GB, Mapiye C, Tada O, Halimani TE, Muchenje V. Conservation of indigenous cattle genetic resources in Southern Africa's smallholder areas: turning threats into opportunities—a review. Asian-Australas J Anim Sci. 2017;30(5):603–21. https://doi.org/10.5713/ajas.16.0024.
O'Toole JF, Bruggeman LA, Madhavan S, Sedor JR. The cell biology of APOL1. Semin Nephrol. 2017;37(6):538–45. https://doi.org/10.1016/j.semnephrol.2017.07.007.
Oback B, Wells DN. Cloning Cattle. Cloning Stem Cells. 2003;5(4):243–56. https://doi.org/10.1089/153623003772032763.
Ojango JM, Marete A, Mujibi F, Rao E, Poole EJ, Rege J, et al. A novel use of high density SNP assays to optimize choice of different crossbred dairy cattle genotypes in smallholder systems in East Africa. American Society of Animal Science; 2014.
Ojango JM, Wasike C, Enahoro DK, Okeyo Mwai A. Dairy production systems and the adoption of genetic and breeding technologies in Tanzania, Kenya, India and Nicaragua. Animal Genetic Resources. 2016.
Owen JR, Hennig SL, McNabb BR, Mansour TA, Smith JM, Lin JC, et al. One-step generation of a targeted knock-in calf using the CRISPR-Cas9 system in bovine zygotes. BMC Genomics. 2021;22(1):118. https://doi.org/10.1186/s12864-021-07418-3.
Park K-E, Foster Frey J, Waters J, Simpson SG, Coutu C, Plummer S, et al. One-Step Homology Mediated CRISPR-Cas editing in zygotes for generating genome edited cattle. CRISPR J. 2020;3(6):523–34. https://doi.org/10.1089/crispr.2020.0047.
Park K-E, Kaucher AV, Powell A, Waqas MS, Sandmaier SES, Oatley MJ, et al. Generation of germline ablated male pigs by CRISPR/Cas9 editing of the NANOS2 gene. Sci Rep. 2017;7:40176. https://doi.org/10.1038/srep40176.
Petersen B. Basics of genome editing technology and its application in livestock species. Reprod Domest Anim. 2017;52(S3):4–13. https://doi.org/10.1111/rda.13012.
Phelps CJ, Koike C, Vaught TD, Boone J, Wells KD, Chen SH, et al. Production of α1,3-Galactosyltransferase–Deficient Pigs. Science. 2003;299(5605):411–4. https://doi.org/10.1126/science.1078942.
Ponsart C, Le Bourhis D, Knijn H, Fritz S, Guyader-Joly C, Otter T, et al. Reproductive technologies and genomic selection in dairy cattle. Reprod Fertil Dev. 2013;26(1):12–21. https://doi.org/10.1071/RD13328.
Poore J, Nemecek T. Reducing food's environmental impacts through producers and consumers. Science. 2018;360(6392):987–92. https://doi.org/10.1126/science.aaq0216.
Proudfoot C, Carlson DF, Huddart R, Long CR, Pryor JH, King TJ, et al. Genome edited sheep and cattle. Transgenic Res. 2015;24(1):147–53. https://doi.org/10.1007/s11248-014-9832-x.
Pryce JE, Haile-Mariam M. Symposium review: Genomic selection for reducing environmental impact and adapting to climate change. J Dairy Sci. 2020;103(6):5366–75. https://doi.org/10.3168/jds.2019-17732.
Pursley JR, Mee MO, Wiltbank MC. Synchronization of ovulation in dairy cows using PGF2alpha and GnRH. Theriogenology. 1995;44(7):915–23. https://doi.org/10.1016/0093-691x(95)00279-h.
Quinton CD, Hely FS, Amer PR, Byrne TJ, Cromie AR. Prediction of effects of beef selection indexes on greenhouse gas emissions. Animal. 2018;12(5):889–97. https://doi.org/10.1017/s1751731117002373.
RAAA. Beef breed approves gene-edited traits for animal registration. BEEF Magazine. 2021.
Red Angus Association of America (RAAA). Rules & Regulations, F-3. Commerce City, CO; 2021. https://redangus.org/wp-content/uploads/2021/06/Rules-and-Regulations-6-1-21current.pdf. Accessed 10 Oct 2021.
Ramos-Ibeas P, Calle A, Pericuesta E, Laguna-Barraza R, Moros-Mora R, Lopera-Vásquez R, et al. An efficient system to establish biopsy-derived trophoblastic cell lines from bovine embryos. Biol Reprod. 2014;91(1):15.
Rexroad C, Vallet J, Matukumalli LK, Reecy J, Bickhart D, Blackburn H, et al. Genome to Phenome: Improving Animal Health, Production, and Well-Being – A New USDA Blueprint for Animal Genome Research 2018–2027. Front Genetics. 2019;10:327. https://doi.org/10.3389/fgene.2019.00327.
Richardson TE, Chapman KM, Dann CT, Hammer RE, Hamra FK. Sterile testis complementation with spermatogonial lines restores fertility to DAZL-deficient rats and maximizes donor germline transmission. PLoS ONE. 2009;4(7): e6308. https://doi.org/10.1371/journal.pone.0006308.
Rodriguez-Villamil P, Ongaratto FL, Bostrom JR, Larson S, Sonstegard T. Generation of SLICK beef cattle by embryo microinjection: a case report. Reprod Fertil Dev. 2021;33(2):114. https://doi.org/10.1071/RDv33n2Ab13.
Saito S, Strelchenko N, Niemann H. Bovine embryonic stem cell-like cell lines cultured over several passages. Rouxs Arch Dev Biol. 1992;201(3):134–41. https://doi.org/10.1007/bf00188711.
Salmon GR, Marshall K, Tebug SF, Missohou A, Robinson TP, MacLeod M. The greenhouse gas abatement potential of productivity improving measures applied to cattle systems in a developing region. Animal. 2018;12(4):844–52. https://doi.org/10.1017/S1751731117002294.
Sander JD, Joung JK. CRISPR-Cas systems for editing, regulating and targeting genomes. Nat Biotechnol. 2014;32:347. https://doi.org/10.1038/nbt.2842.
This work was supported by the National Institute for Food and Agriculture (NIFA), National Needs Graduate and Postgraduate Fellowship (No. 2017-38420-26790) and Predoctoral Fellowship (No. 2021-67034-35150) from the U.S. Department of Agriculture (USDA).
Department of Animal Science, University of California, Davis, CA, USA
Maci L. Mueller & Alison L. Van Eenennaam
MM performed the literature review and wrote the first draft of the manuscript, with input from AVE. Both authors read and approved the final manuscript.
MM is an Animal Biology Ph.D. candidate working in the laboratory of AVE. AVE is a Professor of Cooperative Extension in Animal Biotechnology and Genomics in the Department of Animal Science at the University of California, Davis.
Correspondence to Alison L. Van Eenennaam.
Mueller, M.L., Van Eenennaam, A.L. Synergistic power of genomic selection, assisted reproductive technologies, and gene editing to drive genetic improvement of cattle. CABI Agric Biosci 3, 13 (2022). https://doi.org/10.1186/s43170-022-00080-z
Journal of Nanostructure in Chemistry
pp 1–6
Simple chemistry drives controlled synthesis of platinum nanocrystal to micron size
Tahoora Tajerian
Mehrdad Monsefi
Alan Rowan
In this research, a high-yield, homogeneous, and fast bottom-up wet-chemical method was carried out for the synthesis of platinum nanocrystals (Pt-NC) and of particles up to micron size (Pt-M). After synthesis of the particles, the surface plasmon resonance (SPR), ultraviolet (UV) spectrum, size, shape, and composition were measured for each set. Platinum nanocrystals were obtained by preventing the particles from growing directly after the reduction of Pt4+ to Pt0. The Pt-NC were stabilized by electrostatic repulsion between nanocrystals surrounded by an ionic surfactant. The final size of the Pt-NC was found to be 3.8 ± 0.72 nm. Before sonication treatment, a particle size of 705.2 ± 80.3 nm was achieved; after sonication, the particle size increased to 1046.1 ± 199 nm. Particles were formed in a controllable way, homogeneous and monodisperse in size and shape. It was confirmed that sonication did not alter the peak wavelengths, only the sharpness of the spectrum. The suggested synthesis method enabled cost-effective, concrete control over the size, shape, concentration, and time of the synthesis.
Keywords: Platinum nanocrystal · Platinum microparticles · PVP · Capping agent
Bulk platinum is well known for its highly catalytic and photocatalytic properties [1, 2]. Moreover, it has novel applications including microelectronics, magnetic materials, catalysis, photocatalysis, and plasmonic-based electrochemical imaging (SPR based) [3, 4, 5]. Nevertheless, the use of platinum micro- and nanoparticles, especially in catalysis, is particularly interesting due to their reduced cost and greater surface area with respect to bulk platinum. Thanks to its excellent properties, platinum has found application as a cathode in low-temperature solid oxide fuel cells (< 500 °C), as an electrocatalyst in pure hydrogen production, as an anode in direct ethanol fuel cells, and as an electro-oxidizer in biosensors [6]. Recent applications of platinum require it to be used in the form of nano thin films, micro-particles and nanoparticles. Besides platinum nanocrystals (Pt-NC), micron-sized platinum particles (Pt-M) are suitable alternatives for developing new generations of fuel cells because they are commercially available, suggest alternative energy sources, and show electrochemical activity while functioning over a wide range of operating temperatures [6, 7]. Surface plasmon resonance (SPR) is the dipolar excitation of an entire particle, between the free electrons and the positively charged lattice, induced by an incoming electromagnetic (EM) wave. Although the plasmonic peak of platinum is not highly active, due to one empty space in the d-band and one in the s-band [8], investigating the plasmonic spectra of Pt-NC and Pt-M might be useful, mainly because of their anisotropic nature [9]. The first form of platinum particles, i.e., platinum nanocrystals (Pt-NC), offers quantum size effects and a confined size in the range of 1–10 nm, which enables them to be applied as catalysts in quantum physics and chemistry [8, 9, 10]. The second is platinum microparticles (Pt-M), for whose synthesis four advantages have been suggested: first, the particles are monodisperse, even though they are larger than the Pt-NC set; second, the size barrier from nanometer to micrometer is overcome; third, Pt-M are synthesized in a one-step fast reduction, rather than a multi-step process which would be more complex and time-consuming; fourth, Pt-M characterization has not been extensively reported in the literature, so it could introduce new opportunities in relevant research areas [3, 11, 12, 13, 14, 15, 16, 17, 18, 19]. A fundamental challenge is to improve the reproducibility of the synthesis process so that a consistent particle size can be obtained. Studying the details of Pt-M synthesis seems more useful than that of other types of platinum nanoparticles, e.g., Pt-NC, in terms of the vast opportunities to investigate the formation mechanism. In the current study we aim to propose a straightforward, single-stage chemistry using a homogeneous nucleation method to synthesize platinum particles of nano and micron size.
Preparation of platinum nanoparticles was done in two stages, based on the following reagents: hexachloroplatinic (IV) acid (H2PtCl6·6H2O; Merck KGaA), polyvinylpyrrolidone (PVP; Sigma–Aldrich), sodium borohydride (NaBH4; Fluka), and cetyltrimethylammonium bromide (CTAB; TCI Europe N.V.). All the chemicals were used as received from the suppliers, without any additional purification. Water was purified with a Millipore Milli-Q system (MQ water, 18.1 MΩ). Glassware used in the syntheses was cleaned with a mixture of hydrochloric acid and nitric acid (HCl:HNO3, 3:1). Sonication was carried out with an ultrasonic bath (Branson Ultrasonic Cleaner 2520E-DTH). Centrifugation was performed with a Scan Speed 1730R centrifuge. In the first stage, for the formation of platinum nanocrystals (Pt-NC), 1 mL of 20 mM platinum solution was added to 1 mL of 1 mM PVP. Then 500 μL of 0.05 mM pre-cooled sodium borohydride was added and stirred. While stirring, 750 μL of 0.1 M CTAB was added drop-wise. Subsequently, the mixture was kept sufficiently long at 26.5 °C to prevent crystallization of the CTAB. To purify the obtained product, centrifugation was done twice at 1300 rpm. The reaction mixture was stirred vigorously for a short period of time. In the second stage, the formation of platinum micron-sized particles (Pt-M) occurred by transferring 1 mL of 20 mM platinum solution to a falcon tube containing a stirrer bar. Next, 1 mL of 1 mM PVP solution was added as described earlier. Then the mixture was stirred and incubated. After stirring, 500 μL of pre-cooled sodium borohydride was added. Then the mixture was stirred gently. The purification stage lasted a few seconds before performing sonication. Transmission electron microscopy (JEOL 1010 TEM) was used to investigate the 2D morphology and size of the particles. A UV–visible spectrophotometer (Varian Cary 50 Conc.) was used for measuring the nanoparticle absorption spectra. We used a nanoparticle size and concentration analyzer (NanoSight NS500) to analyze the presence, size distribution, and concentration of nanoparticles in liquid. However, as consistent data for the Pt-M size could not be obtained (the setup is restricted in measuring larger particles, > 700 nm), we excluded this set of data.
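As a rough sanity check on the protocol's mixing ratios, the final concentration of each reagent in the Pt-NC batch follows from simple dilution bookkeeping (C_final = C_stock × V_added / V_total). The short Python sketch below uses only the volumes and stock concentrations stated above; it is an editorial illustration, not part of the original work.

```python
# Final reagent concentrations in the Pt-NC batch, from the protocol above.
stocks_mM = {"H2PtCl6": 20.0, "PVP": 1.0, "NaBH4": 0.05, "CTAB": 100.0}
volumes_mL = {"H2PtCl6": 1.0, "PVP": 1.0, "NaBH4": 0.5, "CTAB": 0.75}

total_mL = sum(volumes_mL.values())   # 3.25 mL in total
for reagent, c_stock in stocks_mM.items():
    c_final = c_stock * volumes_mL[reagent] / total_mL  # C1*V1 / V_total
    print(f"{reagent}: {c_final:.3f} mM in {total_mL:.2f} mL")
```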
Figure 1 shows a schematic mechanism for the possible formation of the CTAB micelles around Pt-NC. The reduction of platinum starts with the combination of a platinum metallic salt precursor, a reducing agent, and a stabilizer. Different types of platinum salt precursors have been used for the reduction of platinum, such as potassium tetrachloroplatinate(II) (K2PtCl4) and chloroplatinic acid hexahydrate (H2PtCl6·6H2O). The main reduction reaction based on potassium tetrachloroplatinate(II) is presented in formula (1) [20]:
Schematic representation of the synthesis of Pt-NC and Pt-M size
$$2\text{NaBH}_4 + \text{K}_2\text{PtCl}_4 + 6\text{H}_2\text{O} \to \text{Pt} + 2\text{NaCl} + 2\text{H}_3\text{BO}_3 + 2\text{KCl} + 7\text{H}_2.$$
The H2PtCl6 salt precursor was freshly used in our research instead of K2PtCl4, and better results were obtained; however, the main working mechanisms were the same. The former salt precursor (K2PtCl4) needed aging time; therefore, in the case of any failure plenty of time could be lost. Nevertheless, while the final product was pure platinum, using different salts as precursors would have led to different types of particles [21]. The reaction, in terms of electron donation, can be interpreted as formula (2) [22]:
$$\text{BH}_4^- + [\text{PtCl}_6]^{2-} + 8\text{OH}^- \to \text{H}_2\text{BO}_3^- + \text{Pt}^0 + 5\text{H}_2\text{O} + 2\text{Cl}^- + 8e^-.$$
As shown in reaction (2), sodium borohydride directly reduces the platinum complex and turns Pt4+ into Pt0. This stage was clearly visible because of the distinct change of the solution color from pale yellow to very dark brown.
Later, cetyltrimethylammonium bromide (CTAB) was introduced as an ionic surfactant to control the size and shape of the platinum nanoparticles, as shown in Fig. 1. The presence of the non-ionic surfactant polyvinylpyrrolidone (PVP) prevented aggregation of the particles after the reduction of platinum [23]. In fact, since they were characterized by an electrostatic charge, these compounds enabled electrostatic repulsion between particles and inhibited their further growth [24, 25].
Figure 2a shows the final color of the solution, which was dark brown. A transmission electron microscopy (TEM) image of the Pt-NC is shown in Fig. 2b. To avoid a thick layer of excess surfactant on the TEM grid surface, centrifugation of the solution was necessary prior to further investigation of each type of synthesized particle in the TEM. A thorough statistical investigation was necessary due to the limitations of the TEM resolution; the statistical analysis was performed manually by exploring eight different regions of a TEM grid to get a clear idea of the size range, as shown in Fig. 3a. The average measured diameter of the particles was found to be 3.85 ± 0.72 nm, as shown in Fig. 3a. The surface plasmon resonance (SPR) of the Pt-NC was at 230 nm, as shown in Fig. 3b.
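The 3.85 ± 0.72 nm figure is a mean ± sample standard deviation over manually measured particle diameters pooled from the eight grid regions. A minimal sketch of that computation follows; the diameter values are hypothetical placeholders, not the authors' raw data.

```python
import statistics

# Hypothetical per-particle diameters (nm) read off TEM images;
# the study pooled manual measurements from eight regions of one grid.
diameters_nm = [3.1, 3.6, 4.2, 3.9, 4.6, 3.3, 4.0, 3.8, 4.4, 3.5]

mean = statistics.mean(diameters_nm)
sd = statistics.stdev(diameters_nm)   # sample standard deviation
print(f"mean diameter = {mean:.2f} +/- {sd:.2f} nm")
```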
a Final color of the Pt-NC stored in cuvette and b TEM image of Pt-NC
a Size distribution histograms of Pt-NC and b the plasmonic peak of Pt-NC
The reduction mechanism of the platinum micro-sized particles was based on the presence of the platinum salt (H2PtCl6·6H2O), the strong reducing agent sodium borohydride (NaBH4), and polyvinylpyrrolidone (PVP) as the capping agent. The distinction in this case was that there were no other types of reagents as for Pt-NC; consequently, the particle size became greater. Although there is not much literature available about the formation mechanism of Pt-M particles, it can be inferred from the previously described formation mechanism of the Pt-NC particles by reactions (1) and (2). After the introduction of the reducing agent to the reaction solution, the platinum complex was directly reduced, resulting in a spherical particle shape. In the second stage, the particle solution underwent a sonication treatment. Thanks to the extreme heat and pressure generated by the ultrasound waves (T = 5000 K, P = 1000 atm), the nanosized particles grew in size up to 1 μm in diameter, most likely due to an ultrasound-induced Ostwald-ripening phenomenon [26], which re-dissolves the smallest unstable particles in favor of the biggest ones, contributing to their growth. As previously mentioned, the final color of the solution was very dark brown, as shown in Fig. 2a.
TEM images of the Pt-M particles before and after sonication are shown in Fig. 4a, b, confirming the growth of the platinum particles to micron size. The distribution of Pt-M sizes before and after sonication is shown in Fig. 4c, d, respectively. The mean diameter of the particles before sonication was 700 nm, while after sonication it grew up to a micron (~ 1000 nm). Additionally, as previously reported for Pt-NC, the plasmonic peak of platinum nanoparticles does not shift enormously with respect to different sizes and morphologies.
a TEM image of Pt-M before sonication, b TEM image of Pt-M after 10 s sonication, c Size distribution histograms of Pt-M before sonication, and d Size distribution histograms of Pt-M after sonication
The surface plasmon resonance (SPR) of Pt-M before and after sonication is shown in Fig. 5a, b, respectively. Before the sonication treatment, the absorption spectra of Pt-M showed two major peaks, at 230 nm and 240 nm, and one minor peak at 250 nm. After the sonication treatment, the particles were kept at room temperature for a while, and then the spectrum was measured. The peaks at 230 nm and 240 nm decreased in absorbance. Although there was a clear shift in the size of the particles, the spectrum of Pt-M did not shift significantly after sonication. This was also verified by our final experimental results for the Pt-M, where the particles grew from nano to micron size. It is worth mentioning that the sharpness of the plasmonic spectrum decreased significantly after the sonication treatment. This can be due to the consumption of smaller particles during the formation of large particles.
The plasmonic peak of Pt-M a before sonication, and b after sonication
In this work, the synthesis of micron-sized platinum particles was done in two steps. Platinum nanocrystals (Pt-NC) were fabricated with a platinum salt, the strong reducing agent NaBH4, and capping agents: PVP and CTAB as ionic surfactant. The repulsion force caused by like charges around the nanoparticles did not let the particles grow larger, and they stayed in crystal form. The exact control of the particle size and morphology of the Pt-NCs by the addition of an ionic surfactant was the key factor of their fabrication. A main advantage of the Pt-NC synthesis was the high yield and high surface-to-volume ratio for catalytic activities. The platinum micro-sized (Pt-M) particles were formed using the platinum salt (H2PtCl6·6H2O), sodium borohydride (NaBH4), and PVP as the capping agent. The reduction occurred immediately after introduction of the strong reducing agent and formed spherical particles. After reduction, sonication was performed on the particles; it is believed that Ostwald ripening occurred, in which it is energetically favorable for smaller particles to dissolve and adsorb onto larger particles. This work demonstrated a close match of the theoretical and practical data to former literature regarding Pt nanocrystals. Investigation of the explicit mechanism of Pt-M formation and determination of the reproducibility of the particle synthesis can be done in the near future.
Attilio, S., Karen, R.W., Oleg, S.A., Michael, D.A.: Synthesis and characterization of Pt clusters in aqueous solutions. J. Catal. 257, 5–15 (2008)
Yoo, E., Okata, T., Kohyama, M., Nakamura, J., Honma, I.: Enhanced electrocatalytic activity of Pt subnanoclusters on graphene nanosheet surface. Nano Lett. 9(6), 2255–2259 (2009)
Liang, W., Yusuke, Y.: Facile synthesis of three-dimensional dendritic platinum nanoelectrocatalyst. Chem. Mater. 21, 3562–3569 (2009)
Liz-Marzan, L.M.: Nanometals: formation and color. Mater. Today 7(2), 26–31 (2004)
Herbert, W., Rocha, T.C.R., Wallace, C.N., Leandro, M.S., Marcelo, K., Daniel, Z.: Chemical synthesis and structural characterization of highly disordered Ni colloidal nanoparticles. ACS Nano 2, 1313–1319 (2008)
Ji, S., Chang, I., Cho, G.Y., Lee, Y.H., Shim, J.H., Cha, S.W.: Application of dense nano-thin platinum films for low-temperature solid oxide fuel cells by atomic layer deposition. Int. J. Hydrog. Energy 39, 12402–12408 (2014)
Feng, Y., Bu, L., Guo, S., Guo, J., Huang, X.: 3-D platinum-lead nanowire networks as highly efficient ethylene glycol oxidation electrocatalysts. Small 12(33), 4464–4470 (2016)
Svetlana, V.B., Ghasemi, H., Chen, G.: Plasmonic materials for energy: from physics to applications. Mater. Today 16(10), 375–386 (2013)
Xu, J., Fu, G., Tang, Y., Zhou, Y., Chen, Y., Lu, T.: One-pot synthesis of three-dimensional platinum nanochain networks as stable and active electrocatalysts for oxygen reduction reactions. J. Mater. Chem. 22, 13585–13590 (2012)
Christoph, L., Zhe, Y., Igor, Z., Bengt, K.: Plasmonic properties of supported Pt and Pd nanostructures. Nano Lett. 6(4), 833–838 (2006)
Xiong, Y., Xia, Y.: Shape-controlled synthesis of metal nanostructures: the case of palladium. Adv. Mater. 19, 3385–3391 (2007)
Sara, E.S., Younan, X.: Pushing nanocrystal synthesis toward nanomanufacturing. ACS Nano 3(1), 10–15 (2009)
Byungkwon, L., Majiong, J., Jing, T., Pedro, H.C., Yimei, Z., Younan, X.: Shape-controlled synthesis of Pd nanocrystals in aqueous solutions. Adv. Funct. Mater. 19, 189–200 (2009)
Peng, Z., Yang, H.: Designer platinum nanoparticles: control of shape, composition in alloy, nanostructure and electrocatalytic property. Nano Today 4(2), 143–164 (2009)
Cheong, S., Watt, J., Ingham, B., Tony, M.F., Tilley, R.D.: In situ and ex situ studies of platinum nanocrystals: growth and evolution in solution. J. Am. Chem. Soc. 131(40), 14590–14595 (2009)
Wiley, B.J., Chen, Y., McLellan, J.M., Xiong, Y., Li, Z., Ginger, D., Xia, Y.: Synthesis and optical properties of silver nanobars and nanorice. Nano Lett. 7(4), 1032–1036 (2007)
Qu, W.L., Wang, Zh.B., Gao, Y., Deng, Ch., Wang, R.H., Zhao, L., Sui, X.L.: WO3/C supported Pd catalysts for formic acid electro-oxidation activity. Int. J. Hydrog. Energy 43, 407–416 (2018)
Qu, W.L., Gu, D.M., Wang, Zh.B., Zhang, J.J.: High stability and high activity Pd/ITO-CNTs electrocatalyst for direct formic acid fuel cell. Electrochim. Acta 137, 676–684 (2014)
Qu, W.L., Wang, Zh.B., Jiang, Zh.Zh., Gu, D.M., Yin, G.P.: Investigation on performance of Pd/Al2O3-C catalyst synthesized by microwave assisted polyol process for electrooxidation of formic acid. RSC Adv. 2(1), 344–350 (2012)
Liang, W., Chunping, H., Yoshihiro, N., Yoshitaka, T., Yusuke, Y.: On the role of ascorbic acid in the synthesis of single-crystal hyperbranched platinum nanostructures. Cryst. Growth Des. 10, 3454–3460 (2010)
Wilson, D.A., Nolte, R.J., van Hest, J.C.: Autonomous movement of platinum-loaded stomatocytes. Nat. Chem. 4(4), 268–274 (2012)
Minh, D.P., Oudart, Y., Baubet, B., Verdon, C., Thomazeau, C.: Nanostructured heterogeneous catalysts: well defined platinum nanoparticles supported on alumina. Preparation, characterization, and application to the selective hydrogenation of buta-1,3-diene. Oil Gas Sci. Technol. 64, 697–706 (2009)
Teranishi, T., Hosoe, M., Tanaka, T., Miyake, M.: Size control of monodispersed Pt nanoparticles and their 2D organization by electrophoretic deposition. J. Phys. Chem. B 103(19), 3818–3827 (1999)
Berhault, G., Bausach, M., Bisson, L., Becerra, L., Thomazeau, C., Uzio, D.: Seed-mediated synthesis of Pd nanocrystals: factors influencing a kinetic- or thermodynamic-controlled growth regime. J. Phys. Chem. C 111(16), 5915–5925 (2007)
Konsolakis, M., Yentekakis, I.V., Palermo, A., Lambert, R.M.: Optimal promotion by rubidium of the CO + NO reaction over Pt/γ-Al2O3 catalysts. Appl. Catal. B Environ. 33(4), 293–302 (2002)
Nguyen, N.T.K., Maclean, N., Mahiddine, S.: Mechanisms of nucleation and growth of nanoparticles in solution. Chem. Rev. 114(15), 7610–7630 (2014)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Department of Physics, Karaj Branch, Islamic Azad University, Karaj, Iran
2. Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, Australia
Tajerian, T., Monsefi, M. & Rowan, A. J Nanostruct Chem (2019). https://doi.org/10.1007/s40097-019-0310-0
How can I estimate how many people are living in a specific territory?
In my own medieval fantasy setting, I have different countries that all have different populations. I admit that I have no idea how many people should live in X or Y. Just using the numbers for a whole country is not really precise. I try to compare them to real medieval countries, but I'm unsure of what criteria are best to use.
Resources worth mentioning:
Medieval demographics made easy
Welsh Piper demographic guide
(they are not wrong but they have limited information.)
Historical Statistics of the World Economy: 1–2008 AD by Angus Maddison et al.
Data from the Worldmapper by the University of Sheffield
Clarifications:
There is magic in the world but it's low magic. It means people can't use it to farm. It would be a waste.
I'm trying to find numbers for a stable and sustainable population, not decimated, not starving or booming. This also means that the land has been settled for quite some time.
About the available land: It is different everywhere and it clearly influences the population density of large areas. But if we take a lot of different territories with different percentages of available land, we should have an average density. Then, we could apply a modifier depending on whether the land is flat or mountainous.
Urbanization is another factor that will influence the density but it is not the most important.
Sedentary vs nomadic: I know that the population density is lower with nomadic people. There is a question about this here.
Hypotheses:
The higher the temperature, the higher the population density, as long as there is sufficient water to grow crops.
The higher the precipitation (or the water available), the higher the population density. Past a certain threshold, precipitation stops having an impact on the population density.
So, I was wondering: is there a way to accurately estimate the population of a country in a medieval or Renaissance era?
medieval-europe demographics renaissance
Vincent
Is there any particular reason why you need to mimic Medieval Indian metrics so closely? Because you're asking about accurately estimating the population of a country, but I assume that means the fictional country, otherwise this could be off-topic. I think I have a few things to add, I'll dig around.
– mechalynx
If you want data to test out your hypothesis on, there's nrcs.usda.gov/wps/portal/nrcs/detail/soils/use/worldsoils/… it's all derived so there's higher quality data elsewhere. At least you don't have to hunt for everything in the same projection though. (Note: Not too old, just for testing).
India was just an example. I'm interested in India only if I can extrapolate the results to use in a fantasy world. But yea, if I want to have the demographics of Uttar Pradesh during the 11th century, I'm better off asking on History.SE. I'm mostly interested in establishing a link between climate and population density, or another variable linked with population density if there are any interesting studies on this.
Possible duplicate of, but definitely related to: worldbuilding.stackexchange.com/questions/1084/…
Also, "countries" are a somewhat artificial concept to begin with. In a situation like you describe I believe they would be separated by natural borders, but that doesn't mean everything within those natural borders forms a single country as we think of it today. Maybe it would help you to look at this in terms of settlements which have some (likely fairly small) amount of trade with each other, and spread those out at reasonable distances in a world, letting countries "just happen" from the geological features of your world, rather than designing countries first?
It really comes down to two factors: waterways and open land.
TL;DR: More waterways mean more cities and more people; more open land means more farmers and fewer people.
The above summary pretty much says it all, but I obviously have to expand on just where the connection lies. I think I can cite some pretty good examples to support my point, but if I've made any logical or (even worse) factual errors, I'd be obliged if anyone could point them out.
Humans like to settle where there are good natural resources nearby that can aid them. If these resources are plentiful enough, more humans will move in. The settlement becomes a village, then a town, and finally a city. What are these so-called "resources" that I've been talking about? Well, they can be a wide variety of things - open land, good sources of natural food and water, good ways for transportation, etc. One resource that combines all of these is a waterway. It could be a river, a stream, an ocean - anything you can dream up.
What are the benefits of a waterway? Well, a waterway satisfies a few of humanity's simplest needs:
The obvious thing that any marine environment can provide is a (fairly) good source of food. Fish, crab, lobster, eel, and a whole bunch of other delicacies. Rivers are particularly good because animals use them a lot to travel. Salmon famously use them to get upstream to spawn. Crabs may live in the shallows. And there are other animals that like to eat these aquatic animals. Otters, bears, and a whole host of other carnivores. Herbivores, too, like to come to rivers to drink. If you live in the American Northeast, just think of the venison. . .
There are, for the vegans, other options. Plants need water to live, and so if you're really out of food, you can always grab a few berries off a bush. But have your friend try them first. That could really save your life. . . Other plants, too (depending on the climate) may grow near rivers.
Chances are, humans are going to need to go to other places outside the settlement - wars, trade, family reunions with the in-laws, etc. Waterways provide a great mode of transportation. You can't exactly use a boat in the middle of a plain, can you? If you're going downstream, you have a source of transportation that requires little effort. Upstream does require some effort (e.g. sails or rowers), but it's still an improvement over trekking miles and miles with a donkey and a cart.
Rivers and oceans easily bolster trade. There's a reason that the term "port city" is so ubiquitous. Back in the Middle Ages (and today), port cities were a dime (or shilling, rupee, guinea, yen, etc.) a dozen. Some of the bigger ones include London, Liverpool, Rotterdam, etc. More trade means a healthier economy, and more available resources.
Yep, waterways can help agriculture near cities. Even if land isn't directly near the river/ocean/whatever, canals can be built to help with irrigation. A farming economy can exist near an urban area, drawing people even closer to cities. This fell apart with the rise of suburbs, but the Middle Ages saw many serfs and peasants working and living near large population centers.
Open land, however, also draws people. Sure, Mesopotamia was the poster child for settlements by the water, but it wouldn't have succeeded without agriculture - which resulted from a lot of open land. Plains are helpful, as are valleys - which are often created by glaciers, which eventually melt to become rivers. Wherever it is, open land draws people. There are two primary uses for it:
Grazing and raising livestock
Okay, back to farming. It's hard to grow corn in the Himalayas. Just think about that sentence for a while. Crops are incredibly important to a civilization, and so humans will also settle where there is room to grow food. The Three Sisters (corn, squash, and beans) were important to the indigenous people of North America. They could be grown in a variety of regions, from rocky New England to the sunny Midwest. Sure, they needed a certain climate to thrive in, but open land was another factor. Can you grow corn in the Rocky Mountains or in the eastern woodlands? I didn't think so. So open land definitely draws people.
I hope that satisfied all the vegans out there, because they aren't going to like this next bit. The other good thing about open land is that animals like it, too. The tribes of the North American plains used it to their advantage by hunting buffalo. Later on, cattle ranchers drove out the tribes and used the land to raise cattle. Both groups were drawn by the allure of open land and the possibilities it held. Why waste your time staring at corn kernels when you can just go out and kill a buffalo?
How does this relate to you question?
I went off on quite a tangent there, and I did it to try to show how important waterways and open land are to civilizations. I emphasized them because the rest of my answer depends upon those two factors, and those two factors alone.
Calculating population
Here's the bit you have to wake up for. I'll start out by counting rural farmers and ranchers. In medieval Europe, many peasants worked as serfs, working on a lord's land. In fact, a large portion of the population lived in rural communities:
The High Middle Ages saw an expansion of population. The estimated population of Europe grew from 35 to 80 million between 1000 and 1347, although the exact causes remain unclear: improved agricultural techniques, the decline of slaveholding, a more clement climate and the lack of invasion have all been suggested. As much as 90 per cent of the European population remained rural peasants.
This suggests about 8 million people living in "suburban" areas (i.e. small towns and villages) and cities.
Let's say a lord owns $a$ acres of land. On each acre he might have $80$ serfs working on it. So to calculate the population of a region, you would simply do $$a \text { acres} \times \frac{80 \text { serfs}}{\text { acre}}$$ Let's assume that all the open land in a region is used for this type of agriculture (which is likely the case). So you can simply use the above formula to calculate the population.
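A direct translation of that formula into a small Python helper. The 80 serfs/acre density is the figure used above; a comment below suggests it may really be a per-square-mile number (80 / 640 per acre), so it's left as a tunable parameter rather than a fact:

```python
def rural_population(arable_acres, serfs_per_acre=80.0):
    """The answer's formula: acres x serfs per acre.

    80 serfs/acre is the figure used above; treat it as an assumption,
    since it may really be a per-square-mile density (80 / 640 per acre).
    """
    return arable_acres * serfs_per_acre

print(rural_population(5_000))            # 400,000 at 80 per acre
print(rural_population(5_000, 80 / 640))  # 625 at 80 per square mile
```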
What about cities? There's no easy formula for this; you'll just have to do estimation. You'll have more cities if
You country has a long coastline (or any coastline at all)
Your country has a lot of waterways
So western China might not have a lot of cities, while eastern China will.
I'd estimate perhaps 1.5 cities per major river, and 10 cities per length of the east coast of the United States (2,000 miles, give or take). From here,
The dimensions of the Western European cities were too small. Usually, their population numbers from 1000 up to 3-5 thousand people. Even in XIV-XV century, the cities with 20-30 thousand inhabitants were considered large. Only a few very large cities have a population of more than 80-100 thousand (Paris, Milan, Venice, Florence, Cordoba, Seville).
A region like Western Europe could, perhaps, have a half-dozen of these large cities, with perhaps 20 others of 5,000 people or more. Let's estimate a population of roughly 800,000 people in European cities during the High Middle Ages - 1% of Europe's population.
Contrary to what I had originally hypothesized, cities were not a huge part of the population of Europe; they held perhaps only a few percent of the population. Most people were rural farmers, living in densities of roughly 80 people per acre. If you know what fraction of your country is either arable land or land that can be made arable by magical means, you can figure out what the rural population is. From there, you can either use the rule of thumb that 90% of people were peasants, or simply sprinkle in a half-dozen 80-100,000-person cities per continent, with maybe 20 or so at 5,000 or more.
HDE 226868♦
how do I know how many serfs are working on each acre?
about the cities, they are usually located along rivers but having more rivers does not give more cities. Or maybe more cities but they are smaller.
@Vincent I would think that a river would draw people in, but I can see your logic. Rest assured, I'm working on an edit.
@Vincent I have some better numbers. You were right about cities not being a huge factor in population.
@HDE226868 I don't believe an acre is the same as a square mile.
– trichoplax
There are many variables involved when one seriously tries to model the cause and effect of populations, and there is no answer that can simply be expressed as something like:
population = function of (climate, food-creation-technologies[x], medicine, land available, cultural effects[y], previous-population[age][gender][fertility][z]...)
Not only is accurate population simulation very complex, but the population level in any area is caused by the actual history and its many details, starting with what the population was before the time you are interested in, what they actually do, etc. For example, looking at medieval England, there were great swings in population related to periodic famines and plagues.
There are however some interesting immediate limits such as how much food, water, and shelter is available, and how predictable that is. So, especially for a fictional world, I would say it does make a lot of sense to try various views on what the limits might be on the population, especially in terms of where they get their food, and how much that can be based on the sources and labor available.
I think a very reasonable and time-saving approach would be to go over to History.SE and ask about population demographics in a region/time you feel is similar to your setting. You might want to study what the contributing factors are, and adjust if your world is different.
On the other hand, if you are making a simulation, then of course you will be interested in more of the actual cause and effect of the various details, rather than the resulting population and a description of conditions.
Both simulation effects and historical populations are estimates. Historical estimates often have a wide range, and change as new theories or historians come and go.
Dronz
A possible answer:
Using the Köppen classification of climates, I tried to see if I could set a specific population density for each climate, and I think I managed to get some numbers.
The data for each area needed to be evaluated. For example: Jiangsu, Shandong and Uttar Pradesh all have very high densities. This is mainly because they are very flat and almost all the space is used to grow food. That gives them a 20% or 30% bonus compared to other, less fortunate regions with the same climate. Having a lot of data helps to figure out where the marginal values are.
My main sources are mentioned in the question, and some of my other statistics include numbers from the game Victoria 2 by Paradox. The game studio did research about the era and tried to be as accurate as possible. It's not foolproof but it's better than nothing. Using Maddison's numbers, I see that the population was multiplied by 2 or 3 between 1500 and 1836.
Factors other than the climate to take in consideration:
These values are averages and suppose a good deal of fertile land but also some areas unsuitable for farming. If the area is hilly, reduce the population density, but increase it if it's mostly flat.
These values suppose that the country has enjoyed several decades of stability to allow the population to reach a certain level. The density is relatively high but sustainable.
Climates that have a dry summer will have lower population density because their crops have less water to grow.
The population from the Victorian era is 2 to 3 times higher than the population at the end of the medieval era.
High urbanization increases the population density, maybe by 20%.
A non-sedentary lifestyle usually means a lower population density. Most temperate and humid climates are probably inhabited by sedentary people, but nomadism is very common in arid and semi-arid climates and in some cold climates. There, the possibilities for agriculture are more limited and the densities are low even with farming. The actual densities are 10 to 100 times lower; I don't know exactly.
Trading: if a country is wealthy enough, it can import food from elsewhere.
The results are classified by density of population per km2 in decreasing order; a short script turning these figures into a rough population estimate follows the list:
30 to 40: BWh, but the density falls to around 0.2 without water
My main source of information was Egypt. Around 1500, the population was about 4 million people on an area of 1,000,000 km2. Since the population only lives on 6% of the land, the real density is around 66 people per km2. But it's flat and very urbanized, so I lowered the values. This climate is the hottest and can therefore give a very good farming output with sufficient water.
30 to 35: Cwa, Cfa, and BSh if water is available, but 5 without water.
These areas include mostly regions of central and eastern China, but also Japan and some regions in Europe such as Montenegro. They pretty much all have high densities, are well developed, and tend to be pretty flat too. These are subtropical climates with almost no winter. Some areas can grow crops almost all year long.
20 to 25: Cwb, Cfb, Dwa, Dfa
These two groups of climates don't have much in common. Cb climates are well documented since they are the most common European climates. Thus, I just had to figure out what the average was. Belgium and Italy have higher densities because they are more urbanized.
15 to 20: Am, Af, Aw
It is pretty much an estimation of the average. It is usually lower than that but never higher.
10 to 15: Csa, Csb, Dwb, Dfb, Dsa, Dsb
Cs and Db had a lot of information, and I managed to find information on Turkey regarding the Ds climates. This is the average.
5 to 10: Cwc, Cfc
I just have the numbers for Cfc but I extrapolated the results for Cwc.
4 to 6: BSk
The cold steppes are usually pretty dry. Farming is possible but most of the population will be nomadic. If the population is only nomadic, divide the population by 10 or more.
0.5 to 1: Dwc, Dfc, Dsc
These areas are not well suited for agriculture. The hottest parts might be acceptable for farming but the population is scattered.
0.25 to 0.5: BWk
The cold desert is very dry and not suitable for farming except in some rare areas, because rivers are also rare.
0.01: Dwd, Dfd, Dsd and ET (tundra)
Even nomads find that this is a harsh climate. Still, some might live here.
0: EF (ice cap)
Nobody can live here because it's always frozen.
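To make the table directly usable, here is a small Python sketch that maps each Köppen code to the midpoint of the density range above and applies the modifiers from the factor list (terrain, urbanization, nomadism). The desert/steppe entries (BWh, BSh) assume irrigation water is available, and the 0.8 hilly-terrain factor is an illustrative placeholder since no exact number is given above.

```python
# Rough population estimator from the Koppen density table above.
# Densities are midpoints of the listed ranges, in people per km^2.
DENSITY = {
    "BWh": 35.0,  "Cwa": 32.5, "Cfa": 32.5, "BSh": 32.5,
    "Cwb": 22.5,  "Cfb": 22.5, "Dwa": 22.5, "Dfa": 22.5,
    "Am": 17.5,   "Af": 17.5,  "Aw": 17.5,
    "Csa": 12.5,  "Csb": 12.5, "Dwb": 12.5, "Dfb": 12.5,
    "Dsa": 12.5,  "Dsb": 12.5,
    "Cwc": 7.5,   "Cfc": 7.5,  "BSk": 5.0,
    "Dwc": 0.75,  "Dfc": 0.75, "Dsc": 0.75, "BWk": 0.375,
    "Dwd": 0.01,  "Dfd": 0.01, "Dsd": 0.01, "ET": 0.01, "EF": 0.0,
}

def estimate_population(area_km2, climate, flat=True,
                        urbanized=False, nomadic=False):
    d = DENSITY[climate]
    if not flat:
        d *= 0.8    # hilly terrain: reduce density (illustrative factor)
    if urbanized:
        d *= 1.2    # high urbanization: +20%, per the factor list
    if nomadic:
        d /= 10.0   # nomadic population: ~10x (or more) lower density
    return area_km2 * d

# Example: a flat, urbanized Cfa kingdom of 200,000 km^2.
print(f"{estimate_population(200_000, 'Cfa', urbanized=True):,.0f} people")
# -> 7,800,000 people
```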
How is an empty set a member of this power set?
I'm reading the section on power sets in Book of Proof, and the chapter includes this statement (Example 1.4 #13) of what isn't included in the power set:
$$P(\{1,\{1,2\}\})=\{\emptyset,\{\{1\}\},\{\{1,2\}\},\{\emptyset,\{1,2\}\}\}\ \ldots\ \text{wrong because }\{\{1\}\}\not\subseteq\{1,\{1,2\}\}$$
I understand that $\{\{1\}\}\not\subseteq\{1,\{1,2\}\}$, but why is the last element, $\{\emptyset,\{1,2\}\}$, in the power set if the empty set is not an element of the original set?
discrete-mathematics elementary-set-theory
kas
Why do you think $\{\emptyset,\{1,2\}\} \in P(\{1,\{1,2\}\})$? The statement absolutely never makes this claim. It isn't true. So why do you think the statement is claiming such? – fleablood Sep 7 '16 at 20:29
The empty set is a subset of every set. The power set is the set of all subsets. So the empty set is a member of every power set. – Doug M Sep 7 '16 at 20:31
@DougM That's not what the OP asked. The OP asked about $\{\emptyset, \{1,2\}\}$. That is not an element of the power set because (as the OP correctly argued) the empty set is not a member of the original set (and hence can not be a member of a subset). In short, the OP is 100% correct. But the author of the book never claimed it was. – fleablood Sep 7 '16 at 20:38
To be thorough. $\emptyset \in P$. $\{\{1\}\} \not \in P$. $\{\{1,2\}\} \in P$. $\{\emptyset,\{1,2\}\} \not \in P$. And finally $\{1,\{1,2\}\} \in P$ but $\{1,\{1,2\}\}$ wasn't listed in the set claimed to be the power set. So that set is not the Power set for three reasons. The book gave one. The op gave another. I gave a third.... and maybe I missed a 4th.... who knows.... – fleablood Sep 7 '16 at 20:43
Yes, a fourth would be $\{1\} \in P$ which wasn't listed in the set. Basically the given set over bracketed almost consistently. – fleablood Sep 7 '16 at 20:46
Suppose that $A$ is the set $\{0,1,2\}$; then the claim $A=\{3,4,5\}$ is wrong because $3\notin A$.
It's true that neither $4$ nor $5$ are elements of $A$ as well, but one counterexample is enough to disprove a statement.
In a nutshell, you're right, but so is the book. Both statements are valid counterexamples, but one is enough.
It isn't: $$ \mathscr P(\{1,\{1,2\}\})=\{\emptyset,\{1\},\{\{1,2\}\},\{1,\{1,2\}\}\}. $$ If the author didn't comment on why $\{\emptyset,\{1,2\}\}$ is not in the set, it is likely because the author intended to comment on a particular reason why the proposed power set wasn't the correct power set. That is, he decided to be explicit about why $\{\{1\}\}$ isn't in the power set. Perhaps to be more complete, the author could have commented on why $\{\emptyset,\{1,2\}\}$ is also not in the power set, but it only takes one counterexample to do the job.
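For readers who like to verify such things mechanically, here is a small Python check (not from the book); the inner set $\{1,2\}$ is modelled as a frozenset so it can be an element of another set:

```python
from itertools import combinations

inner = frozenset({1, 2})   # the element {1, 2}, frozen so it is hashable
S = {1, inner}              # S = {1, {1, 2}}

# Enumerate the power set of S: all subsets of every size (order may vary).
power_set = [set(c) for r in range(len(S) + 1) for c in combinations(S, r)]
print(power_set)  # [set(), {1}, {frozenset({1, 2})}, {1, frozenset({1, 2})}]

# {{1}} is not a subset of S, because {1} is not an element of S:
print({frozenset({1})} <= S)        # False
# and neither is {emptyset, {1, 2}}, because the empty set is not in S:
print({frozenset(), inner} <= S)    # False
```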
Alex OrtizAlex Ortiz
I disagree that it is necessarily a typo. – Asaf Karagila♦ Sep 7 '16 at 20:30
@AsafKaragila good point. – Alex Ortiz Sep 7 '16 at 20:30
It's not a typo. In showing {{1}} was not a subset the author didn't have to say anything about {emptyset,{1,2}} and the author didn't say anything about {emptyset,{1,2}}. If the author had said anything, it'd be that it is not in the power set either. – fleablood Sep 7 '16 at 20:31
@fleablood noted. Thanks. – Alex Ortiz Sep 7 '16 at 20:32
It's not a "typo" but it is an omission. However it is an omission that the author was allowed to omit. – fleablood Sep 7 '16 at 20:47
Clinical outcome and isolated pathogens among neonates with sepsis in Democratic Republic of the Congo: a cross-sectional study
Gabriel Kambale Bunduki1,2 and
Yaw Adu-Sarkodie2
Neonatal sepsis still remains a significant cause of morbidity and mortality in developing countries. The prediction of the neonatal sepsis outcome depends on the anticipation from the clinical history, suspicion from clinical findings and confirmation by laboratory tests. This study aimed to determine the clinical outcome and isolated pathogens among neonates with sepsis in Butembo, Democratic Republic of the Congo.
The most frequent bacteria related to a poor outcome were Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa and Klebsiella spp. Most of the isolated bacteria were found to be hospital-acquired. Therefore, adherence to infection prevention and control measures would reduce the rate of neonatal sepsis in our setting. The empiric antibiotic treatment should cover the spectrum of bacteria responsible for neonatal sepsis in Butembo, DRC.
Keywords: Clinical outcome · Neonatal sepsis · Butembo
Neonatal sepsis constitutes a major health concern during the first 4 weeks of life [1]. It is a major cause of death, accounting for about 26% of neonatal deaths worldwide [1]. In the Democratic Republic of the Congo (DRC), sepsis accounts for 16% of all causes of neonatal death, behind prematurity (34.7%) and birth asphyxia and trauma (28.6%), respectively [2]. The mortality rate of neonatal sepsis is evaluated based on the sepsis definition used. When considering all bacteraemic infections, the reported mortality rate in neonatal sepsis varies from 10 to 40% [3]. The mortality may vary according to the onset of signs and symptoms of sepsis.
Depending on the onset of clinical symptoms, neonatal sepsis is classified into early-onset neonatal sepsis (EoNS), which occurs within 72 h of life, and late-onset neonatal sepsis (LoNS), which occurs beyond 72 h of life. The infectious source for EoNS is most probably the maternal genital tract, while LoNS is usually a nosocomial infection resulting from complications of intensive neonatal care, or a community-acquired infection [4, 5].
Risk factors for acquiring neonatal sepsis include maternal factors and factors related to the neonate and the care provided to him [6]. These factors, combined with the pathogens responsible for sepsis, may predict the outcome of the sick neonate. Despite efforts and interventions supplied by the government of the DRC, neonatal sepsis still has a non-negligible poor outcome. Knowledge of the pathogens related to the poor outcome of neonates with sepsis in Butembo is not documented.
Anticipation from the clinical history, suspicion from clinical findings, and confirmation by laboratory tests are essential for predicting the outcome of a neonate with sepsis [7]. Therefore, this study aimed to determine the clinical outcome and isolated pathogens among neonates with sepsis in Butembo, DRC.
Operational definition
Since there is no consensus on the definition of sepsis 3.0 [8] in the paediatric population, in this study we used the sepsis 2.0 definition of the International Paediatric Sepsis Consensus Criteria (IPSC) for neonatal sepsis. It defines neonatal sepsis as a clinical syndrome characterized by systemic signs and a positive culture during the first 4 weeks of life [9]. Neonates meeting the above definition were considered neonates with confirmed sepsis, while those with more than two clinical manifestations of a systemic infection but with a negative culture were considered probable or suspected cases of sepsis. A negative culture does not exclude sepsis in neonates.
In this study, a good outcome was considered when the neonate improved after completion of treatment without complications such as shock, meningitis, seizure, blindness, or deafness. Meanwhile, a poor outcome was considered when the neonate had not improved after completion of treatment, presented with complications, was referred to other hospitals, or died.
Study design and setting
A prospective cross-sectional study was conducted at three hospitals in Butembo between September and November 2018. The health facilities in which the study was conducted were selected according to their hierarchy (general hospital, general referral hospital, and teaching hospital). Butembo is a commercial city located in North Kivu, in the eastern part of the DRC, a region that has faced a humanitarian crisis for several years and the ongoing Ebola outbreak. Mother and sick neonate pairs constituted the study population. The sick neonates were those with sepsis and those with a history of maternal infection. Neonates suspected of sepsis who died immediately or upon arrival, before blood samples were taken, were excluded from the study. Neonates with congenital malformations or dysmorphic features, those diagnosed with malaria parasitaemia, those from HIV-positive mothers, those under antibiotic therapy, those above 28 days of life, and those whose parents or guardians did not consent to participate in the study were also excluded.
The estimation of the sample size (N) was based on the prevalence (P) reported in a previous survey report [10], using Fischer's formula with a maximum error of 5% (d) and a standard normal deviate of 1.96 for a 95% confidence interval (CI).
$$N = \frac{Z^{2} \times P \times (1 - P)}{d^{2}}$$
Therefore, the sample size was 207 neonates. By adding 10% of margin for non-respondents, the final sample size was 228 neonates.
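The arithmetic can be reproduced directly. The prevalence P from the cited survey [10] is not restated in the text, but back-solving the formula with N = 207 suggests a value of about 16%, which is assumed in this illustrative sketch:

```python
import math

def sample_size(p, d=0.05, z=1.96):
    """Fischer's formula: N = Z^2 * P * (1 - P) / d^2, rounded up."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

p = 0.16                      # assumed prevalence from the cited survey [10]
n = sample_size(p)            # 207
n_final = math.ceil(n * 1.1)  # add 10% for non-respondents -> 228
print(n, n_final)
```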
Sampling procedures and processing
Structured pre-tested questionnaires were used to collect demographic and clinical information from the mothers and sick neonates. Data collected from the mother included: age, educational level, employment, marital status, gravidity and parity, antenatal care attendance, genitourinary infection during the pregnancy, maternal fever during labour, membrane rupture time, stained and foul-smelling amniotic liquid, and number of vaginal examinations. Data collected from the neonate included: the gestational age, mode of delivery, gender, birth weight, Apgar score at the first minute, insertion of an umbilical catheter, mechanical ventilation, the age of the neonate at the time of suspicion of sepsis and/or signs of sepsis, and the outcome (good or poor). The sepsis signs and symptoms considered in this study included fever, hypothermia, jaundice, difficulty in suckling, tachypnea, bradypnea, tachycardia, bradycardia, vomiting, irritability, lethargy, grunting, cyanosis, pallor, convulsion, and septic rash. Each neonate was followed up until discharge from the hospital.
Approximately 1.5 to 2 mL of blood was collected using an aseptic technique and sent to the Central Research Laboratory of the "Université Catholique du Graben" for culture. The culture and identification of the pathogens were done by the methods described by Koneman [11]. Briefly, the sample was aseptically inoculated in brain heart infusion broth and incubated at 35–37 °C for observation of microbial growth. At the same time, subcultures were done on enriched media (blood agar, chocolate agar, and MacConkey agar). Identification was done using colony morphological characteristics, Gram staining, and biochemical tests.
All the data were analysed using SPSS software version 22. Proportions were used for categorical variables. Associations between the outcome and the independent exposure variables were assessed. The independent variables included socio-demographic and clinical data collected from the mother and the neonate. Significance tests of proportions were done using the Chi-square test and odds ratios (OR). Two-tailed P-values were considered statistically significant if ≤ 0.05 within a 95% CI.
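The analysis itself was run in SPSS; as a hedged sketch, the same Chi-square test and odds ratio can be reproduced in Python on a 2 × 2 table, here filled with the EoNS/LoNS outcome counts reported in the Results below (17/91 vs. 31/137 poor outcomes).

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2 x 2 table: rows = EoNS, LoNS; columns = poor outcome, good outcome.
# Counts are taken from the Results section (17/91 and 31/137 poor outcomes).
table = np.array([[17, 91 - 17],
                  [31, 137 - 31]])

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, OR = {odds_ratio:.2f}")  # p > 0.05, as reported
```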
From September to November 2018, 228 neonates were recruited. Among them, 69 (30.3%) had a positive blood culture. A poor outcome among all the recruited neonates was observed in 48 (21.1%) cases. Of the 69 neonates who had a positive blood culture, 20 (29.0%) had a poor outcome. Bacteria related to a poor outcome were (in order of frequency) Staphylococcus aureus, Escherichia coli, Pseudomonas aeruginosa, Klebsiella spp., Acinetobacter spp., coagulase negative Staphylococci, Streptococcus pneumoniae, Enterococcus spp., and Enterobacter spp. (Table 1).
[Table 1. Distribution of pathogens isolated according to the outcome and the time of infection onset. Columns: isolated pathogen, outcome, infection onset; the listed isolates include Acinetobacter spp., Citrobacter spp., Enterobacter spp., Enterococcus spp., Klebsiella spp., P. aeruginosa, S. agalactiae, and S. pneumoniae.]
The distribution of isolates by time of infection onset shows a predominance of bacteria in late-onset neonatal sepsis (LoNS) over early-onset neonatal sepsis (EoNS). The prevalent isolates in LoNS are Staphylococcus aureus, followed by coagulase negative Staphylococci (CoNS), Escherichia coli, and Pseudomonas aeruginosa. Meanwhile, Streptococcus agalactiae (Group B Streptococci) is prevalent in EoNS, followed by Staphylococcus aureus, Escherichia coli, and Klebsiella pneumoniae (Table 1).
The outcome of neonatal sepsis by time of infection onset is summarized in Table 2. Among the 91 (39.9%) neonates with EoNS, 17 (18.7%) had a poor outcome, while among the 137 (60.1%) neonates with LoNS, 31 (22.6%) had a poor outcome. The difference was not statistically significant (P > 0.05).
[Table 2. The outcome of neonatal sepsis by time of infection onset (EoNS vs. LoNS); data columns as summarized in the text above.]
None of the studied maternal and neonatal risk factors or clinical signs was statistically related to a poor outcome (Additional file 1).
The findings of this study showed that most of the bacteria related to a poor outcome in neonates with sepsis are largely the same as those responsible for LoNS. This means that they are most probably hospital-acquired infections (HAI). HAI bacteria may be more virulent and difficult to treat, and therefore responsible for a poor outcome. These findings are similar to those of Mhada et al. in Tanzania [12], who showed that bacteria related to a poor outcome of neonatal sepsis are hospital-acquired and mostly found in neonates with LoNS. S. aureus can be transmitted from health care providers and relatives to the newborn [13, 14]. CoNS has been reported to be a leading cause of neonatal sepsis in Egypt [15], as in this study. The isolation of CoNS is usually regarded as contamination but, where it has been proved pathogenic, the source of infection is medical devices, and it is seen more often in LoNS. Streptococcus agalactiae is among the bacteria that may be acquired from the maternal vagina [16]; it is mostly isolated in EoNS, as in this study. In developed countries, following GBS prophylaxis, Escherichia coli has been reported as a frequent isolate responsible for neonatal sepsis [17]. In developing countries, E. coli has also been identified among the most frequent causative bacteria, with an infection rate varying from 15.7 to 77.1% [18, 19].
The diversity of the major bacteria responsible for neonatal sepsis may be due to the fact that the bacterial spectrum varies from one region to another [16, 20]. Other factors, such as the study setting and population and adherence to hand hygiene practices, may explain the observed variation.
A poor outcome among neonates with sepsis was observed more often in LoNS, although the difference was not statistically significant. Similar findings have been demonstrated in other studies [13, 21, 22]. The prolonged use of invasive catheters and parenteral nutrition, respiratory infections, and cardiovascular diseases are factors that fuel the high rate of LoNS [23, 24].
None of the maternal and neonatal risk factors or neonatal signs was statistically related to a poor outcome of sepsis. This may be explained by the fact that none of them, once present, changes the outcome of sepsis. The cause of sepsis and its management, including the related supportive care, would rather determine the outcome of neonatal sepsis.
According to our findings, the clinical outcome of neonatal sepsis in Butembo was not satisfactory, and none of the risk factors was found to be significant. Therefore, health personnel should improve their care-giving skills and hospitals should acquire advanced equipment. Aseptic measures should be applied when invasive procedures are performed. Implementation of infection prevention and control measures should be promoted in order to avoid HAI. Empiric antibiotic therapy should cover the spectrum of organisms responsible for neonatal sepsis in our study area.
Since this was a hospital-based study, neonates with signs and symptoms of sepsis who were not brought to the hospital could have been missed. The short study period and the small sample size could have introduced a selection bias, and there was no long-term follow-up. A larger-scale prospective study with adjustment for confounding factors should be conducted.
APGAR:
appearance pulse grimace activity respiration
CoNS:
coagulase negative Staphylococci
DHS:
Demographic Health Survey
DRC:
Democratic Republic of the Congo
IPSC:
International Paediatric Sepsis Consensus Criteria
MDG:
Millennium Development Goal
PROM:
premature rupture of membrane
The authors are thankful to all the people who helped during data collection and laboratory work.
This study was funded by the Else-Kröner-Fresenius Stiftung via the German registered NGO förderverein UNIKIN (fUNIKIN) (http://www.foerderverein-uni-kinshasa.de). GKB has benefited from these funds through the BEBUC scholarship system. The funding body covered all costs related to data collection but had no role in study design, analysis, interpretation of data, or writing of the manuscript.
The author GKB conceived the study, designed the study, collected and analysed the data and drafted the manuscript. The author YAS coordinated the study and revised the manuscript for critically important intellectual content. All authors read and approved the final manuscript.
Ethical clearance was obtained from the Ethical Committee of North-Kivu (Decision No 011-18/08/2018, Protocol No 005/TEN/2018). Permission was also sought from the administration of the respective hospitals. Written informed consent was obtained from the neonates' mothers before participation in the study.
13104_2019_4346_MOESM1_ESM.docx Additional file 1: Table S1. Maternal related risk factors that predisposed to a poor outcome of neonatal sepsis. Table S2. Neonatal related risk factors that predisposed to neonatal sepsis. Table S3. Clinical features of sepsis predicting a poor outcome.
Department of Infectious Diseases, Faculty of Medicine, Université Catholique du Graben, PO.Box 29, Butembo, North-Kivu, Democratic Republic of the Congo
Department of Clinical Microbiology, School of Medical Sciences, College of Medicine, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
Lawn JE, Cousens S, Zupan J. 4 million neonatal deaths: when? where? why? Lancet. 2005;365:891–900.
UNICEF. Maternal and newborn disparities, Democratic Republic of Congo. Key facts. 2015.
Khalid N. Neonatal infection. In: McIntosh N, Helms P, Smyth R, editors. Forfar and Arneil textbook of pediatrics. 6th ed. Philadelphia: Churchill Livingston; 2003. p. 336–43.
Shrestha S, Shrestha NC, Dongol Singh S, Shrestha RPB, Kayestha S, Shrestha M, Thakur NK. Bacterial isolates and its antibiotics susceptibility in NICU. Kathmandu Univ Med J. 2013;41(1):66–70.
Nayak S, Rai R, Kumar VK, Sanjeev H, Pai A, Ganesh HR. Distribution of microorganisms in neonatal sepsis and antimicrobial susceptibility patterns in a tertiary care hospital. Arch Med Health Sci. 2014;2:136–9.
Tewabe T, Mohammed S, Tilahun Y, Melaku B, Fenta M, et al. Clinical outcome and risk factors of neonatal sepsis among neonates in Felege Hiwot referral hospital, Bahir Dar, Amhara Regional State, North West Ethiopia 2016: a retrospective chart review. BMC Res Notes. 2017;10:265.
Finer N. Neonatal sepsis. San Diego J Pediatr Neonatol. 2003;15(5):855–67.
Singer M, Deutschman CS, Seymour CW, Shankar-Hari M, Annane D, et al. The third international consensus definitions for sepsis and septic shock (sepsis-3). JAMA. 2016;315:801–10.
Goldstein B, Giroir B, Randolph A. International pediatric sepsis consensus conference: definitions for sepsis and organ dysfunction in pediatrics. Pediatr Crit Care Med. 2005;6:2–8.
WHO-MCEE estimates for child causes of death, 2000–2015. http://www.who.int/healthinfo/global_burden_disease/en/. Accessed 8 Mar 2016.
Winn WC, Allen SD, Janda WN, Koneman E, Procop G, Schreckenberger P, Woods G. Koneman's color atlas and textbook of diagnostic microbiology. 6th ed. Philadelphia: Lippincott; 2006.
Mhada TV, Frederick F, Matee MI, Massawe A. Neonatal sepsis at Muhimbili National Hospital, Dar es Salaam, Tanzania; aetiology, antimicrobial sensitivity pattern and clinical outcome. BMC Public Health. 2012;12:904.
Yadav NS, Sharma S, Chaudhary DK, Panthi P, Pokhrel P, Shrestha A, et al. Bacteriological profile of neonatal sepsis and antibiotic susceptibility pattern of isolates admitted at Kanti Children's Hospital, Kathmandu, Nepal. BMC Res Notes. 2018;11:301.
Kayenge N, Kamugisha E, Mwizamholya DL, Jeremiah S, Mshana SE. Predictors of positive blood culture and deaths among neonates with suspected neonatal sepsis in a tertiary hospital, Mwanza-Tanzania. BMC Pediatr. 2010;10:39.
Moore KL, Kainer MA, Badrawi N, Afifi S, Wasfy M, Bashir M, et al. Neonatal sepsis in Egypt associated with bacterial contamination of glucose-containing intravenous fluids. Pediatr Infect Dis J. 2005;24(7):590–4.
Shrestha RK, Rai SK, Khanal LK, Mandal PK. Bacteriological study of neonatal sepsis and antibiotic susceptibility pattern of isolates in Kathmandu, Nepal. Nepal Med Coll J. 2013;15(1):71–3.
Bhat RY, Lewis LES, Vandana KE. Bacterial isolates of early-onset neonatal sepsis and their antibiotic susceptibility pattern between 1998 and 2004: an audit from a center in India. Italian J Pediatr. 2011;37:32.
Kuruvilla KA, Pillai S, Jesudason M, Jana AK. Bacterial profile of sepsis in a neonatal unit in south India. Indian Pediatr. 1998;35:851–8.
Aurangzeb B, Hameed A. Neonatal sepsis in hospital-born babies: bacterial isolates and antibiotic susceptibility patterns. J Coll Physicians Surg Pak. 2003;13:629–32.
Gebrehiwot A, Lakew W, Moges F, Moges B, Anagaw B, Yismaw G, Nega T, Unakal C, Kassu A. Bacterial profile and drug susceptibility pattern of neonatal sepsis in Gondar University Hospital, Gondar northwest Ethiopia. Der Pharmacia Lettre. 2012;4(6):1811–6.
Mohsen L, Ramy N, Saied D, Akmal D, Salama N, Haleim MMA, et al. Emerging antimicrobial resistance in early and late-onset neonatal sepsis. Antimicrob Resist Infect Control. 2017;6:63.
Motara F, Ballot DE, Perovic O. Epidemiology of neonatal sepsis at Johannesburg hospital. S Afr J Epidemiol Infect. 2005;20(3):90–3.
Tröger B, Göpel W, Faust K, Müller T, Jorch G, Felderhoff-Müsser U, et al. Risk for late-onset blood-culture proven sepsis in very-low-birth-weight infants born small for gestational age: a large multicenter study from the German Neonatal Network. Pediatr Infect Dis J. 2014;33:238–43.
Tsai MH, Hsu JF, Chu SM, Lien R, Huang HR, Chiang MC, et al. Incidence, clinical characteristics, and risk factors for adverse outcome in neonates with late-onset sepsis. Pediatr Infect Dis J. 2014;33:e7–13.
Full paper
Comparison between IRI and preliminary Swarm Langmuir probe measurements during the St. Patrick storm period
Alessio Pignalberi1,
Michael Pezzopane2 (corresponding author),
Roberta Tozzi2,
Paola De Michelis2 and
Igino Coco3
Earth, Planets and Space 2016, 68:93
Received: 30 December 2015
Accepted: 6 May 2016
Preliminary Swarm Langmuir probe measurements recorded during March 2015, a period of time including the St. Patrick storm, are considered. Specifically, six time periods are identified: two quiet periods before the onset of the storm, two periods including the main phase of the storm, and two periods during the recovery phase of the storm. Swarm electron density values are then compared with the corresponding output given by the International Reference Ionosphere (IRI) model, according to its three different options for modelling the topside ionosphere. Since the Swarm electron density measurements are still undergoing a thorough validation, a comparison with IRI in terms of absolute values would not have been appropriate. Hence, the similarity of trends embedded in the Swarm and IRI time series is investigated in terms of the Pearson correlation coefficient. The analysis shows that the electron density representations made by Swarm and IRI are different for both quiet and disturbed periods, independently of the chosen topside model option. The main differences between the trends modelled by IRI and those observed by Swarm emerge especially at equatorial latitudes, and at northern high latitudes during the main and recovery phases of the storm. Moreover, very low values of electron density, even lower than 2 × 10⁴ cm⁻³, were simultaneously recorded in the evening sector by Swarm satellites at equatorial latitudes during quiet periods, and at magnetic latitudes of about ±60° during disturbed periods. The obtained results are an example of the capability of Swarm data to generate an additional valuable dataset to properly model the topside ionosphere.
IRI model
Swarm data
Topside electron density
St. Patrick storm
At the end of 2013, the European Space Agency (ESA) launched the three-satellite Swarm constellation. Among the three satellites, two [Alpha (A) and Charlie (C)] are orbiting the Earth side-by-side at the same altitude of about 460 km, while the third [Bravo (B)] is flying about 60 km above. They are all equipped with identical instruments consisting of high-resolution sensors for measurements of both geomagnetic and electric fields, as well as plasma density. Besides the new generation instruments, the revolution introduced by this mission is in its geometrical configuration. For instance, satellites A and C allow performing differential investigations taking advantage of the proximity of the two satellites, while satellite B, whose orbital plane gets farther from that of the other two satellites, will allow spanning a wider local time window with consequent implications, for instance, for the Space Weather community (Friis-Christensen et al. 2006).
Here, we are interested mainly in the measurements made by the electric field instrument (EFI) comprising two thermal ion imagers (TIIs) and two Langmuir probes (LPs). The former measures the three-dimensional ion distribution, the latter the spacecraft potential, plasma density, and electron temperature, both at 2 Hz rate. In particular, we will analyze preliminary measurements of electron density (Ne) recorded by the Swarm constellation during March 2015, a period of time including the so-called St. Patrick storm. This storm, which was classified as severe and for which the Kp index reached the maximum value of 8, is the most intense observed during solar cycle 24. At ground observatories the sudden storm commencement was observed at around 04:45 Universal Time (UT) of 17 March 2015 with the arrival at the Earth of a coronal mass ejection. Figure 1 shows the temporal trend (from 9 to 25 March 2015) of the south component of the interplanetary magnetic field (IMF) and of some geomagnetic indices (Dst, AE, and ap) describing the global level of the Earth's magnetic disturbance. The maximum intensity of the storm was reached at around 23:00 UT of 17 March and was characterized by a minimum value of the Dst index of −223 nT. Some details on the complex structure of this storm can be found in Kamide and Kusano (2015) and Cherniak et al. (2015).
Bz component of the IMF and magnetic activity indices during the St. Patrick magnetic storm. From top to bottom: interplanetary magnetic field Bz component and Dst, AE, and 3-hourly ap magnetic indices
The vertical electron density profile is the most representative feature of the ionospheric plasma, and its reconstruction is essential for studies concerning ionospheric physics and for space weather purposes. Ground-based ionosondes can measure only the bottomside of this profile, up to the height of the F2-layer peak, that is, the absolute maximum of the ionospheric electron density. With regard to the topside, from the F2-peak height to higher altitudes, ground-based ionosondes can provide only an estimation, based on bottomside measurements (Reinisch and Huang 2001; Huang and Reinisch 2001). Measuring the topside ionosphere requires an ionosonde onboard a satellite sounding from above the F2-peak. Only a few missions from the sixties to the eighties, such as Alouette-1 and Alouette-2, ISIS-1 and ISIS-2, and Intercosmos 19, have provided sets of topside ionospheric data, but with a limited spatial coverage; moreover, only a small percentage of the total soundings was processed into electron density profiles (Huang et al. 2002). This lack of experimental topside ionospheric data (Benson et al. 1998) significantly limits the efforts to study and model this ionospheric region as a function of altitude and geographical location as well as of diurnal, seasonal, and solar activity variations. Hence, even though, early in 2014, the International Reference Ionosphere (IRI) model was officially recognized as the international standard for the specification of the ionosphere by the International Standardization Organization (ISO) (Bilitza and Reinisch 2015), its topside profile often does not represent properly the real features of the ionosphere. A thorough description of these shortcomings and the corresponding efforts done to improve the model were published by Bilitza et al. (2006) and are briefly summarized later in the section devoted to the illustration of the IRI topside options.
Swarm satellites fly right in the topside ionosphere. In this paper, preliminary Swarm Ne measurements recorded during six time periods of March 2015 are considered and compared with the corresponding output given by the IRI model (Bilitza and Reinisch 2008; Bilitza et al. 2014). These time intervals are chosen so as to have two quiet periods before the onset of the storm, two periods including the main phase of the storm, and two periods during the recovery phase of the storm.
The Swarm Ne measurements, although preliminary and under validation, are nowadays considered reliable by the reference community. For example, Pedatella et al. (2015) showed a comparison between Swarm densities and those inferred from COSMIC radio occultation measurements and found a very good agreement: overall, Swarm measurements show a slight underestimation of the ionospheric electron density, varying between 8 and 15 % depending on latitude and local time.
Nevertheless, since the Swarm data validation is still ongoing, a comparison with IRI in terms of absolute values would not have been suitable. So, a correlation analysis was carried out to evaluate the trends embedded in the Swarm and IRI time series. The corresponding results show that the representations made by Swarm and IRI are quite different for both quiet and disturbed periods, independently of the chosen IRI topside option.
As already mentioned, the data used consist of electron density measurements made onboard the three satellites of the Swarm constellation during six selected time periods between 9 and 25 March 2015, a time window including the so-called St. Patrick magnetic storm. During this time window the only available data are those from the Preliminary Plasma Dataset prepared by the Swedish Institute for Space Science (IRF) at Uppsala (Knudsen et al. 2015). Specifically, only data with a quality flag value lower than 256 were considered (Knudsen et al. 2015).
The purpose of our investigation is to compare the Swarm and IRI electron density representations for both disturbed and quiet magnetic conditions. For this reason, we chose two quiet periods before the onset of the St. Patrick storm, which we refer to as pre-storm time intervals (P1 and P2), two periods characterized by significantly low values of the Dst index and including the main phase of the storm, which we refer to as main phase periods (M1 and M2), and two periods (R1 and R2) during the recovery phase of the storm. Detailed information on the bounds of the selected periods is summarized in Table 1, together with the range of variability and average values of the Dst and AE indices in each period.
[Table 1. Details on the selected time periods (P1, P2, M1, M2, R1, R2), with start/end day and time in UT, and the corresponding level of magnetic activity, expressed through the [min, max] ranges and average values of the Dst (nT) and AE geomagnetic indices. Due to gaps in the 20 March Swarm A and B data, the R1 time periods differ from satellite to satellite (R1 (A), R1 (B), R1 (C)).]
The pre-storm periods P1 and P2 were chosen according to simultaneously low values of both the Dst and AE indices, in order to be reasonably confident that the magnetic activity was low at all latitudes. In fact, the well-known Dst index represents the disturbance observed on the ground at low and mid-latitudes produced by the ring current, the partial ring current, and the magnetopause and magnetotail currents during magnetic storms. By contrast, the AE index indicates the total intensity of the auroral electrojets and is used to represent the disturbance observed at high latitudes due to geomagnetic substorms. Consequently, when Dst is low, AE is not necessarily low as well. The main phase periods M1 and M2 correspond to the growth of the ring current up to its maximum intensity and up to its initial decay, respectively. Values of the Dst index during M2 still correspond to a significant perturbation of several tens of nanoteslas in the horizontal component of the geomagnetic field, as observed at ground observatories. With regard to the recovery phase periods, R1 is selected midway, in terms of the Dst index, between the main phase and quiet conditions, while during R2 quiet conditions are almost achieved.
In order to compare the 2 Hz Swarm Ne measurements with the Ne values provided by the IRI model at the same time and location, the Swarm data are resampled, actually decimated, keeping 1 out of every 9 measurements, which corresponds to a sampling of 4.5 s. This value follows from the fact that the IRI temporal step is expressed in tenths of an hour and, at the same time, has to be a multiple of 0.5 s, the Swarm sampling interval. The smallest value matching these two constraints is the decimal temporal increment of 0.00125 h, which corresponds exactly to 4.5 s. The other temporal steps that are multiples of 0.5 s and lower than 4.5 s would give rise to recurring decimal temporal increments, which would result in an inaccurate analysis.
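A minimal sketch of this decimation, assuming t (in seconds) and ne stand for a hypothetical 2 Hz Swarm series:

```python
import numpy as np

# Hypothetical 2 Hz Swarm series: one sample every 0.5 s.
t = np.arange(0.0, 60.0, 0.5)                               # time in seconds
ne = np.random.default_rng(0).lognormal(11.0, 0.3, t.size)  # stand-in densities

# Keep 1 sample out of every 9: 9 * 0.5 s = 4.5 s = 0.00125 h, which matches
# IRI's tenth-of-an-hour time format without recurring decimals.
t_dec, ne_dec = t[::9], ne[::9]
assert np.allclose(np.diff(t_dec), 4.5)
```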
Data from each period shown in Table 1 are grouped according to magnetic local time (MLT) sectors and magnetic latitude bands. The partition into MLT takes into account that, for each Swarm orbit, half of the measurements are taken in the morning sector (descending phase of the satellite orbit) and half in the evening sector (ascending phase of the satellite orbit). Dividing the data in this way, we distinguish between the different dynamics characterizing the morning and evening ionospheric sectors, especially at low and equatorial latitudes, which are characterized by the fountain effect (Davies 1990; Kelley 2009). Since Swarm satellites move along near-polar orbits, MLTs are clustered around the morning and evening sectors and partially spread over the entire 24 h MLT range at the poles. So, to consider disjoint sets of measurements, the MLT ranges considered for the descending and ascending phases are 04–12 and 16–24 h. Within these intervals, over 99 % of the measurements taken between magnetic latitudes of 60°S and 60°N fall in the range 06–09 h for Swarm A and C and in the range 08–11 h for Swarm B for the morning sector, and in the range 18–21 h for Swarm A and C and in the range 20–23 h for Swarm B for the evening sector. Differently, at latitudes higher than 60° the percentage of measurements taken in the morning and in the evening sectors decreases to around 60 %, since the orbits are not exactly polar. Figure 2 shows the overall distribution of measurements in MLT without distinguishing between high and low/mid-latitudes.
Electron density value availability for Swarm A and B. Histograms of available electron density measurements as a function of MLT, for P1, M1, and R1, for Swarm A and B. Due to the geometry of the Swarm constellation, the MLT distribution of Swarm C is identical to that of Swarm A
The reason for the splitting into magnetic latitude bands is much the same as that for the partition in MLT. In fact, most of the processes occurring in the ionosphere have a marked magnetic latitudinal dependence (Davies 1990; Kelley 2009). So, we converted geographical coordinates into quasi-dipole coordinates (Emmert et al. 2010) and considered the following magnetic latitude bands: between −90° and −60° (SP, south pole), between −60° and −30° (SM, south mid), between −30° and 30° (EQ, equator), between 30° and 60° (NM, north mid), and between 60° and 90° (NP, north pole). The limits of these bands were chosen also on the basis of the magnetic latitude distribution of the Swarm Ne measurements. Two examples are shown in Fig. 3 for Swarm A, during the quiet period P1 for measurements recorded in the morning sector, and during the perturbed period M2 for measurements in the evening sector.
Swarm A electron density during periods P1 and M2 in the morning and evening sectors. Electron density as measured by Swarm A during (a) the pre-storm period P1 in the morning sector (04–12 MLT) and (b) the main phase period M2 in the evening sector (16–24 MLT)
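A minimal sketch of this grouping, assuming the quasi-dipole magnetic latitude and the MLT of each sample are already available (the band edges and sector ranges follow the text; the function name is ours):

```python
import numpy as np

BAND_EDGES = [-90, -60, -30, 30, 60, 90]
BAND_NAMES = ["SP", "SM", "EQ", "NM", "NP"]

def classify(mag_lat, mlt):
    """Return (band, sector); sector is None outside the 04-12 / 16-24 MLT ranges."""
    band = BAND_NAMES[np.digitize(mag_lat, BAND_EDGES[1:-1])]
    if 4 <= mlt < 12:
        sector = "morning"   # descending phase of the orbit
    elif 16 <= mlt < 24:
        sector = "evening"   # ascending phase of the orbit
    else:
        sector = None
    return band, sector

print(classify(-75.0, 20.5))  # ('SP', 'evening')
```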
IRI model: topside electron density and storm options
Many studies have noted disagreements between the IRI topside modelling (hereafter called IRI-2001) and measurements (Bilitza 2001; Bilitza et al. 2006). IRI-2001 tends to overestimate the electron densities in the upper topside (from about 500 km above the F-peak upward), reaching a factor of about 3 at 1000 km above the ionospheric peak. To address this limitation, two new options were introduced in IRI-2007 (Bilitza and Reinisch 2008). The first option (hereafter called IRI-2001corr) is a correction factor for the 2001 model, based on over 150,000 topside profiles from Alouette-1, Alouette-2, ISIS-1, and ISIS-2, and varying with altitude, modified dip latitude, and local time (Bilitza 2004). The application of this factor helped, for instance, in reducing the discrepancies found by Jee et al. (2005), who compared IRI-2001 with TOPEX measurements.
The second option (hereafter called IRI-NeQuick) is the NeQuick topside model (Radicella and Leitinger 2001; Coisson et al. 2006). This model is based on a semi-Epstein layer function, governed by an empirical shape parameter k, whose analytical relationship was first calculated by using TEC data and ionosonde data recorded respectively at Florence and Rome, Italy (Radicella and Zhang 1995), and subsequently updated by using ISIS-2 topside profiles (Coisson et al. 2006). Comparisons with TOPEX data have shown that IRI-NeQuick provides an improvement with respect to IRI-2001 predictions (Coisson et al. 2004).
Since the IRI-2001 version, a storm option, namely a correction factor for disturbed conditions, has also been included (Fuller-Rowell et al. 2000; Bilitza 2001; Araujo-Pradere et al. 2002). This option consists of an empirical ionospheric storm-time correction model that scales the quiet-time F region critical frequency (foF2) to account for storm-time changes in the ionosphere. IRI uses the 3-hourly ap index for the description of magnetic storm effects, and the storm model option is driven by a new index based on the integral of the ap index over the previous 33 h, weighted by a filter obtained by the method of singular value decomposition. The storm option gives reliable results at mid-latitudes during summer and equinox but, during winter and near the equator, the model does not significantly improve the IRI representation.
It is worth highlighting that the IRI storm model option was implemented mostly to represent the mid-latitude F2 peak density variations for disturbed conditions. Anyhow, switching this option on clearly influences the whole electron density profile over the entire terrestrial globe. With regard to this, Fig. 4 displays six global maps of the following percent relative difference
$$\left[ \frac{\left( \text{IRI-NeQuick} \right)_{\text{StOFF}} - \left( \text{IRI-NeQuick} \right)_{\text{StON}}}{\left( \text{IRI-NeQuick} \right)_{\text{StOFF}}} \right] \times 100$$
between the electron densities given by IRI-NeQuick with the storm option off (StOFF) and those given by IRI-NeQuick with the storm option on (StON), on 17 March 2015 at 00, 03, 06, 09, 15, and 23 UT, at 460 km of altitude, that is, the orbital altitude of Swarm A and C. It is evident that at 00 UT, before the beginning of the St. Patrick storm, the two representations are identical, with the corresponding percent difference equal to 0 % everywhere; on the contrary, when the storm is ongoing, differences between StON and StOFF appear and become more and more significant.
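As a small illustrative sketch of Eq. (1), where ne_stoff and ne_ston stand for hypothetical arrays of IRI-NeQuick densities computed with the storm option off and on:

```python
import numpy as np

def storm_percent_difference(ne_stoff, ne_ston):
    """Percent relative difference of Eq. (1): zero where the storm option
    leaves the modelled electron density unchanged."""
    ne_stoff = np.asarray(ne_stoff, dtype=float)
    ne_ston = np.asarray(ne_ston, dtype=float)
    return (ne_stoff - ne_ston) / ne_stoff * 100.0
```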
Percent relative difference between the IRI model StOFF and StON outputs on 17 March 2015. Electron density percent relative difference according to Eq. (1) on 17 March 2015 at 00, 03, 06, 09, 15, and 23 UT, at 460 km of altitude. Coordinates are geographical. Bold lines represent the magnetic parallels at −60°, −30°, 0°, 30°, and 60°. Due to the large difference between Ne values for different times, it is not possible to use the same color scale for all plots
In this work, the IRI model is used to estimate Ne at the same time and location (geographical latitude and longitude, altitude) of the Swarm measurements falling in the six selected periods listed in Table 1. In detail, we used the URSI coefficients, according to the three topside options (IRI-2001, IRI-2001corr, NeQuick), and with the storm option on (StON).
Among all the plots that were obtained, only a few are shown as representative in Figs. 5, 6, 7, 8, 9, 10, 11, and 12. These figures compare, for morning (Figs. 5, 6, 7, 8) and evening (Figs. 9, 10, 11, 12) sectors, electron densities measured by Swarm A with the corresponding ones calculated by IRI-NeQuick(StON), and electron densities measured by Swarm B with the corresponding ones calculated by IRI-2001corr(StON), for periods P1, M1, M2, and R1. With regard to these figures, it is worth noting that showing Swarm A measurements only with the output given by the NeQuick option, and Swarm B measurements only with the output given by the IRI-2001corr option, does not mean that the other topside options were not considered to perform the comparison. This way to proceed was imposed only by the fact that it was clearly not possible to show for each time period listed in Table 1 the corresponding output given by each of the three IRI topside options. Moreover, at Swarm altitudes, the three IRI topside options give a very similar ionospheric representation, and that's why we chose to represent different IRI topside options for Swarm A and B.
Electron densities measured by Swarm satellites and IRI for period P1, for the morning sector. Electron densities measured by Swarm A (top-right panels) and Swarm B (bottom-right panels), and the corresponding ones calculated by IRI-NeQuick(StON) (top-left panels) and IRI-2001corr(StON) (bottom-left panels), for the period P1, for the morning sector. Magnetic latitude bands between −60° and 60° are plotted in a Gall stereographic projection, while the high-latitude bands are plotted in an orthographic projection (on the left the north pole, on the right the south pole). Coordinates are geographical, and bold lines in both the Gall stereographic and polar orthographic projections represent magnetic parallels drawn with a 30° step. Due to the large difference between the Ne values measured by Swarm A and B and those estimated by IRI, it is not possible to draw the values in the Gall stereographic projections with the same color scale
Electron densities measured by Swarm satellites and IRI for period M1, for the morning sector. Same as Fig. 5, but for M1 morning sector
Electron densities measured by Swarm satellites and IRI for period R1, for the morning sector. Same as Fig. 5, but for R1 morning sector
Electron densities measured by Swarm satellites and IRI for period P1, for the evening sector. Same as Fig. 5, but for P1 evening sector
Electron densities measured by Swarm satellites and IRI for period M1, for the evening sector. Same as Fig. 5, but for M1 evening sector
Electron densities measured by Swarm satellites and IRI for period R1, for the evening sector. Same as Fig. 5, but for R1 evening sector
In order to assess quantitatively the behavior of the different IRI topside models, for each selected period, and for each model, the Pearson correlation coefficient between Swarm and IRI time series was calculated, according to the following formula
$$\rho_{X,Y} = \frac{\operatorname{cov}(X,Y)}{\sigma_{X} \sigma_{Y}} = \frac{E\left[ \left( X - E(X) \right)\left( Y - E(Y) \right) \right]}{\sigma_{X} \sigma_{Y}},$$
where cov() is the covariance between the variables X and Y, σX and σY are the corresponding standard deviations, and E() represents the expected value.
We chose this approach because the Swarm Langmuir probe data are still undergoing a thorough validation, and hence a comparison in terms of absolute values would not have been appropriate. On the contrary, the value of the Pearson coefficient gives an idea of the similarity of the trends embedded in the IRI and Swarm time series.
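A minimal sketch of this per-band correlation, assuming swarm_ne and iri_ne are hypothetical co-located electron density series and bands holds the corresponding magnetic latitude band labels:

```python
import numpy as np
from scipy.stats import pearsonr

def band_correlations(swarm_ne, iri_ne, bands):
    """Pearson correlation between co-located Swarm and IRI electron
    densities, computed separately for each magnetic latitude band."""
    swarm_ne, iri_ne, bands = map(np.asarray, (swarm_ne, iri_ne, bands))
    return {band: pearsonr(swarm_ne[bands == band], iri_ne[bands == band])[0]
            for band in np.unique(bands)}
```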
Figures 13 and 14 show the average of the Pearson coefficients calculated for Swarm A and C, and the Pearson coefficients calculated for Swarm B, by considering all three IRI topside options (IRI-2001, IRI-2001corr, NeQuick), respectively, for the morning and evening sectors, and for each magnetic latitude band. The correlations for Swarm A and C were averaged since the obtained results are practically identical. Figure 15 displays instead the magnetic latitude dependence of the Pearson coefficients shown in Figs. 13 and 14.
Correlation analysis between Swarm A and C and IRI values. Average of the Pearson coefficients calculated for Swarm A and C, for morning (left panels) and evening (right panels) sectors, for each magnetic latitude band (NP, NM, EQ, SM, SP), by considering all three IRI topside models: IRI-2001 (top panels), IRI-2001corr (middle panels), and IRI-NeQuick (bottom panels)
Correlation analysis between Swarm B and IRI values. Pearson coefficients calculated for Swarm B, for morning (left panels) and evening (right panels) sectors, for each magnetic latitude band (NP, NM, EQ, SM, SP), by considering all three IRI topside models: IRI-2001 (top panels), IRI-2001corr (middle panels), and IRI-NeQuick (bottom panels)
Magnetic latitude dependence of the Pearson coefficients calculated between Swarm satellites and IRI values. Average of the Pearson coefficients calculated for Swarm A and C (left panels), and Pearson coefficients calculated for Swarm B (right panels), for each magnetic latitude band (from top to bottom: NP, NM, EQ, SM, and SP), by considering all three IRI topside models (IRI-2001, IRI-2001corr, IRI-NeQuick), for morning and evening sectors
In the analysis we have done, we noted that the Swarm data were sporadically characterized by very low values of electron density, even lower than 2 × 10⁴ cm⁻³. To assess the reliability of these values, we checked whether they were recorded simultaneously by Swarm A and C, and we found that these values were indeed seen by both satellites. As expected, these unusually low values are not reproduced by the IRI model. As an example, Fig. 16 shows where Swarm A (the same holds for Swarm C) recorded these values for periods P1, M2, and R1, for the evening sector. For the morning sector, these low values are practically absent.
Very low values of electron density measured by Swarm A for the evening sector. Swarm A electron density values (in red) lower than 2 × 10⁴ cm⁻³, for periods P1 (top panel), M2 (middle panel), and R1 (bottom panel), for the evening sector. Coordinates are geographical, and bold lines represent magnetic parallels drawn with a 30° step
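A minimal sketch of the plausibility check described above, assuming ne_a and ne_c are hypothetical time-aligned electron density series from Swarm A and C:

```python
import numpy as np

def coincident_low_ne(ne_a, ne_c, threshold=2.0e4):
    """Boolean mask of samples where both time-aligned Swarm A and C
    electron density series (in cm^-3) drop below the threshold."""
    return (np.asarray(ne_a) < threshold) & (np.asarray(ne_c) < threshold)
```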
Before getting to the heart of the discussion of the results, we want to draw attention to the fact that, in this section, each time we talk generically about Swarm, we refer to all satellites (Swarm A, B, and C), and each time we talk about Swarm A, due to their proximity, we are implicitly talking also about Swarm C. Moreover, if we look carefully at Figs. 13, 14, and 15, we realize that: (a) the differences between the correlation coefficients of IRI-2001 and those of IRI-2001corr are minimal; (b) even though the correlation coefficients of IRI-NeQuick can differ from those of IRI-2001 and IRI-2001corr, the corresponding trend is, however, somewhat similar. So, henceforward, when we talk about IRI, we mean that the same is valid for all three corresponding topside models.
Looking at Figs. 5, 6, 7, 8, 9, 10, 11, and 12, several interesting features measured by the Swarm satellites, and differences between these and IRI, emerge. Below, we first discuss the results for the morning sectors of Figs. 5, 6, 7, and 8, and then the results for the corresponding evening sectors of Figs. 9, 10, 11, and 12.
Concerning the period P1 (the same holds for period P2), for the morning sector (Fig. 5), the equatorial band shows for both IRI and Swarm the usual pattern characterized by a maximum of electron density along the magnetic equator (e.g., Balan and Bailey 1995). Nevertheless, some differences in electron density intensity appear: Swarm A measures Ne values lower than those calculated by IRI, while the contrary holds for Swarm B. This dissimilarity could be related to the local time shift characterizing the two satellites (see Fig. 2) but, more likely, is due to their different orbital altitudes, a fact that, once accurate measurements are available, will turn out to be really useful for obtaining new insights into the topside plasma scale height, which is so important for reliably modelling the topside profile. In Fig. 5, as reported also by several authors (Sagawa et al. 2005; Immel et al. 2006; Liu et al. 2010; Lühr et al. 2012; Xiong and Lühr 2014), a wave-3 longitudinal modulation is discernible, more evident for Swarm B than for Swarm A; IRI succeeds in catching this feature only when modelling the same times and locations of Swarm B. Figure 5 also shows, for Swarm A, a general underestimation made by IRI in the southern part of the Atlantic Ocean. Concerning the polar regions, the values measured by the Swarm satellites are quite different from those given by IRI and, with regard to this, the most striking feature is the very low values of the correlation coefficients characterizing the northern polar region for Swarm B (Fig. 14).
Concerning periods M1/M2, for the morning sector (Figs. 6, 7), IRI still models an equatorial pattern characterized by a maximum centered on the magnetic equator, while Swarm measures a double-crest pattern in the west longitude sector of the globe, which is unusual for these local times. In fact, at these local times, the zonal electric field is westward and gives rise to a reverse fountain causing an increase of electron density around the magnetic equator, according to the mechanism proposed by Balan and Bailey (1995). The double-crest pattern measured by Swarm can be ascribed to an early fountain effect caused by ionospheric electric fields and currents that, at low and mid-latitudes during geomagnetically disturbed periods, can significantly differ from their quiet-day patterns, due to the concurrent action of two mechanisms: the magnetospheric dynamo and the ionospheric disturbance dynamo (Blanc and Richmond 1980). Dynamic interactions between the solar wind and the magnetosphere are the source of the magnetospheric dynamo. This gives rise to electrical currents which, along with their associated electric fields [called penetrating interplanetary electric fields (IEFs)], can penetrate to lower latitudes through the conducting ionosphere (Fejer and Scherliess 1995; Fejer et al. 2008; Huang et al. 2007; Zhao et al. 2008). The second mechanism is instead generated by an energy input to the thermosphere that alters the global thermospheric circulation, modifying the electric fields and currents that are generated by the ionospheric wind dynamo action during quiet conditions at low and mid-latitudes (Fejer et al. 2008; Nicolls et al. 2006).
Specifically, Fejer et al. (2008) showed that, during equinox and for geomagnetically disturbed periods, the equatorial drifts ascribable to the magnetospheric dynamo are upward from about 07 to 23 LT, while those due to the ionospheric dynamo are upward between 21 and 16 LT, with the amplitudes of the daytime drifts (between 07 and 16 LT) significantly lower than the nighttime ones (between 21 and 06 LT).
This further supports the idea that the double-crest pattern measured by Swarm in Figs. 6 and 7 is due to a combined effect of IEFs and the ionospheric disturbance wind dynamo, with a definitely smaller contribution from the latter, thus causing an inversion of the usual dynamo zonal electric field from westward to eastward. In particular, during the M1 period, two crests of electron density well beyond the magnetic parallels at 30° and −30° are observed in the Atlantic Ocean sector, suggesting also the occurrence of a "super-fountain effect" (Balan et al. 2010; Zong et al. 2010). It is interesting to note how the double-crest pattern in the M2 period is still recorded by Swarm A and not by Swarm B, suggesting that, in the 2 h of MLT difference characterizing the two satellites, the plasma fountain turned from direct back to reverse.
The polar patterns given by IRI are also different from those measured by Swarm, showing a general overestimation of Ne. This feature is confirmed by the low values of the correlation coefficients characterizing the polar bands (Figs. 13, 14), especially for Swarm B.
Regarding the period R1 (the same holds for R2), for the morning sector (Fig. 8), Swarm comes back to the usual pattern characterized by a maximum centered on the magnetic equator, as indeed modelled by IRI; however, along a Pacific Ocean sector, Swarm measures Ne values higher than the IRI ones, while the remaining values are lower than those modelled by IRI. Again, the correlation coefficients of the polar regions are the lowest ones, confirming that also for the R1 period the Swarm and IRI patterns are different.
Concerning the period P1 (the same holds for the period P2), for the evening sector (Fig. 9), the equatorial band shows for both IRI and Swarm the usual electron density double-crest pattern around the magnetic equator (e.g., Balan and Bailey 1995). The values measured by Swarm, which show a maximum over the South American sector, are, however, higher than those given as output by IRI. Moreover, the IRI values are significantly asymmetric, with the southern crest higher than the northern one. On the contrary, the Swarm satellites measure two crests that are very similar, with only a slight difference over South America, where Swarm presents the maximum of Ne and the northern crest is more intense than the southern one. As for the morning sector, also for the evening sector the northern polar region is characterized by the lowest values of the correlation coefficients (Figs. 13, 14).
With reference to periods M1/M2, for the evening sector (Figs. 10, 11), the usual electron density double-crest pattern around the magnetic equator is still shown by both IRI and Swarm, even though the crests measured by Swarm are noticeably narrower than those modelled by IRI. Also during the main phase of the storm, the values measured by Swarm, which again present a maximum over the South American sector, are higher than those modelled by IRI. Moreover, as for the period P1, IRI still models electron density crests that are significantly asymmetric, with the southern crest notably more intense than the northern one. The same is not true for the electron density crests measured by Swarm, which appear quite symmetric. During the main phase of the storm, the polar patterns characterizing IRI and Swarm are again noticeably different, especially for the period M2; the most striking feature is the very low values of the correlation coefficient associated with the northern polar region.
Concerning the period R1 (the same holds for the period R2), for the evening sector (Fig. 12), the morphology of all the latitude bands is very similar to that of periods M1/M2. The only difference is that the maximum values of Ne measured by Swarm are now spread over a wider longitude sector, including also the Atlantic Ocean.
In summary, Figs. 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15 show that, from a morphological point of view, the electron density patterns measured by Swarm and those modelled by IRI are different, especially during the main phase of the storm, for the morning sector, when Swarm highlights an unusual double-crest pattern. As a consequence, the correlation coefficients between IRI and Swarm for all the magnetic latitude bands are somewhat low, mainly in the periods M1, M2, and R1. In general, the correlation coefficients of the mid-latitude regions are higher than those of the equatorial and polar regions, confirming a well-known feature of IRI, namely that its predictions are less accurate at equatorial and auroral latitudes (Bilitza and Reinisch 2008).
The correlation coefficients of the northern polar region deserve a special mention, because they are often very low (even negative at times), for both the morning and evening sectors, and, although less evidently, the same happens for the southern polar region. This result is most likely caused by the high-latitude current systems, which are activated during disturbed magnetic periods. In fact, the lowest values of correlation are found mainly in the periods M2 and R1. During these periods the ap index, which is the magnetic index used by IRI, is not high (ap < 50), especially when compared with that relative to the M1 period (ap > 150). Nevertheless, the values of the AE index clearly show an intense global electrojet activity in the auroral zone during both the main and the recovery phases of the storm; this means that from 18 to 22 March 2015 the auroral regions were characterized by an intense substorm activity, with a consequent enhancement of the auroral electrojet systems. This may explain the considerable difference obtained between the Swarm measurements and the IRI modelled values not only in the polar regions, but also between the morning and evening sectors. Indeed, the spatial distribution of the polar ionospheric convection and current systems is not uniform at high latitudes, showing a greater intensity in the evening sector than in the morning one. Moreover, during this particular geomagnetic event, a difference in current intensities may also have characterized the two hemispheres. This hypothesis is consistent with the results reported by Cherniak et al. (2015) who, during the St. Patrick geomagnetic storm, found hemispheric asymmetries in both the intensity and the spatial structure of ionospheric irregularities.
An interesting feature shown by the Swarm measurements during the analyzed periods is represented by the very low values of Ne in the evening sector, as displayed in Fig. 16. The most striking aspect of Fig. 16 is that for quiet periods these values are clustered along the magnetic equator, while during disturbed periods they are grouped at magnetic latitudes of about ±60°, with those of the recovery phase period sparser than those of the main phase period.
The clusters along the magnetic equator can be interpreted as equatorial plasma bubbles. In fact, near sunset, plasma densities and dynamo electric fields in the E region decrease, causing a weakening of the equatorial anomaly. At the same time, however, at this local time (corresponding to the evening sector here considered), a dynamo develops in the F region, and polarization charges within conductivity gradients at the terminator surface enhance the eastward electric field after sunset, giving rise to a pre-reversal increase of the equatorial fountain (Woodman 1970). Hence, in these hours, a rapid uplifting of the plasma in the F region and a general steepening of the bottomside gradient lead to the Rayleigh–Taylor instability, which allows plasma density irregularities to form. These irregularities can grow to become large ionospheric depletions that are usually called equatorial plasma bubbles (e.g., Whalen 2000). The fact that very low values of Ne are detected along the magnetic equator only during quiet conditions could be an additional confirmation that ionospheric irregularities can be either inhibited or triggered during disturbed periods, possibly depending on the phase of the storm and the local time of occurrence of the maximum Dst excursion (Aarons 1991; Alfonsi et al. 2013; Dabas et al. 2003).
Concerning the very low values of Ne measured by Swarm at magnetic latitudes of about ±60°, these are interpreted as the mid-latitude ionospheric trough, a latitudinally narrow and longitudinally extended depletion in the electron distribution, located equatorward of the auroral oval and regularly detected in the evening and night hours (Moffett and Quegan 1983). The ionospheric trough, characterized by very low values of Ne, is so well detected under disturbed conditions by the Swarm satellites because, as shown by Krankowski et al. (2009), it depends significantly on the geomagnetic activity. In fact, under disturbed conditions, the ionospheric trough tends to exhibit much lower values of electron density than under quiet conditions. This is confirmed by what is shown in Fig. 16. In some sense, this Ne decrease of the ionospheric trough significantly simplifies its detection in both hemispheres by the Swarm satellites. As expected, this feature is not modelled by IRI because, at present, the model has difficulties in reproducing auroral boundaries as well as density and temperature features related to these boundaries, such as the subauroral density trough (Bilitza et al. 2014).
This work represents further evidence that topside ionosphere modelling, especially during magnetically disturbed periods, is still a challenge. In fact, even though they are preliminary, the Swarm electron density data considered in this study, measured during March 2015 and including the St. Patrick storm, showed patterns that are at the moment difficult to model. Specifically, the analysis we have done, based on the Pearson correlation coefficient, showed that, independently of the chosen topside option (IRI-2001, IRI-2001corr, NeQuick), the trends embedded in the Swarm and IRI time series are fairly different. In particular, the analysis did not single out a topside option that behaves definitely better than the others.
For the sake of correctness, it is worth remembering that the IRI model works best when considering long series of monthly median values, while in this work the IRI model was compared directly with plasma measurements over a limited period of time. So, to fully confirm the results described here, longer series of monthly median values should be considered. At the same time, however, we would like to stress that, while this might be possible for quiet periods, it would become difficult for disturbed conditions, for which the calculation of monthly median values does not make sense. On the other hand, given that the IRI model has a "storm" routine capable of changing the output of the model for disturbed conditions, the results shown here, although based on a limited series of data, have their own validity.
With regard to topside modelling, in situ measurements of the tenuous electron plasma density around the Earth carried out by the Swarm constellation can be extremely valuable. In fact, once accurate and calibrated measurements are available, the peculiar configuration of the Swarm satellites will allow new insights into the topside plasma scale height, a parameter of crucial significance for reliably modelling the topside profile.
EFI:
Electric field instrument
EQ:
equator
ESA:
European Space Agency
IEF:
interplanetary electric field
IMF:
interplanetary magnetic field
IRF:
Swedish Institute for Space Science
IRI:
International Reference Ionosphere
ISO:
International Standardization Organization
LP:
Langmuir probe
MLT:
magnetic local time
NM:
north mid
NP:
north pole
SM:
south mid
SP:
south pole
TII:
thermal ion imager
URSI:
International Union of Radio Science
UT:
Universal Time
AP made all the analyses needed to compare Swarm and IRI trends. MP conceived and coordinated the study and discussed the results. RT participated in designing the study, prepared the Swarm dataset, and helped to draft the manuscript. PDM participated in the discussion concerning the influence of geomagnetic storms and substorms on the geospace and helped to draft the manuscript. IC actively participated in the discussion concerning the reliability of Swarm Langmuir probe measurements. All authors read and approved the final manuscript.
We thank all members of the ESA Swarm team for their precious work, which is the basis for our investigations; in particular, we thank Dr. Stephan Buchert of Swedish Institute of Space Physics (IRF) for having made available the Langmuir Probes Preliminary Plasma Dataset, freely accessible at https://earth.esa.int/web/guest/swarm/data-access. We acknowledge use of NASA/GSFC's Space Physics Data Facility's CDAWeb service, and OMNI data that were obtained at http://cdaweb.gsfc.nasa.gov/. We acknowledge the IRI Working Group for their valuable work and for making the IRI code freely available at http://irimodel.org/.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Dipartimento di Fisica e Astronomia, Università di Bologna "Alma Mater Studiorum", Viale Carlo Berti Pichat 6/2, 40127 Bologna, Italy
Istituto Nazionale di Geofisica e Vulcanologia, Via di Vigna Murata 605, 00143 Rome, Italy
Serco Italia S.P.A., Via Sciadonna 24/26, 00044 Frascati, RM, Italy
Discriminating seismic events using 1D and 2D CNNs: applications to volcanic and tectonic datasets
Masaru Nakano ORCID: orcid.org/0000-0003-0278-02421 &
Daisuke Sugiyama2
Detecting seismic events, discriminating between different event types, and picking P- and S-wave arrival times are fundamental but laborious tasks in seismology. In response to the ever-increasing volume of seismic observational data, machine learning (ML) methods have been applied to try to resolve these issues. Although it is straightforward to input standard (time-domain) seismic waveforms into ML models, many studies have used time–frequency-domain representations because the frequency components may be effective for discriminating events. However, detailed comparisons of the performances of these two methods are lacking. In this study, we compared the performances of 1D and 2D convolutional neural networks (CNNs) in discriminating events in datasets from two different tectonic settings: tectonic tremor and ordinary earthquakes observed at the Nankai trough, and eruption signals and other volcanic earthquakes at Sakurajima volcano. We found that the 1D and 2D CNNs performed similarly in these applications. Half of the misclassified events were misassigned the same labels in both CNNs, implying that the CNNs learned similar features inherent to the input signals and thus misclassified them similarly. Because the first convolutional layer of a 1D CNN applies a set of finite impulse response (FIR) filters to the input seismograms, these filters are thought to extract signals effective for discriminating events in the first step. Therefore, because our application was the discrimination of signals dominated by low- and high-frequency components, we tested which frequency components were effective for signal discriminations based on the filter responses alone. We found that the FIR filters comprised high-pass and low-pass filters with cut-off frequencies around 7–9 Hz, frequencies at which the magnitude relations of the input signal classes change. This difference in the power of high- and low-frequency components proved essential for correct signal classifications in our dataset.
Event identification and phase picking are the most fundamental tasks in seismic event monitoring, but are also quite laborious. Because of the ever-increasing volume of seismic observational data from dense arrays covering entire nations and their surrounding oceans, such as USArray (Kerr 2013) and Japan's MOWLAS (Aoi et al. 2020), these tasks now require automated systems. Various machine learning (ML) methods have been applied to resolve this issue (e.g., Dowla et al. 1990; Wang and Teng 1995; Del Pezzo et al. 2003; Scarpetta et al. 2005; Kong et al. 2016), of which convolutional neural networks (CNNs; e.g., LeCun et al. 2015) have frequently been used for seismic signal discriminations and phase picking (e.g., Perol et al. 2018; Ross et al. 2018a, b; Sugiyama et al. 2021). CNNs have been applied to slow earthquakes in subduction zones (Nakano et al. 2019; Takahashi et al. 2021) and volcanic signals (Canário et al. 2020). Recent studies have combined CNNs with other methods to improve the accuracy and efficiency of these tasks (Mousavi et al. 2019, 2020; Soto and Schurr 2021).
Whereas it is straightforward to use waveform traces as the input for ML applications to seismic data (e.g., Perol et al. 2018; Ross et al. 2018a, b), many studies use time–frequency-domain representations (running spectra, wavelet transforms) to classify signals (e.g., Dowla et al. 1990; Wang and Teng 1995; Yoon et al. 2015; Holtzman et al. 2018; Mousavi et al. 2019; Dokht et al. 2019; Nakano et al. 2019; Rouet-Leduc et al. 2020; Takahashi et al. 2021). Because the spectral characteristics of seismic signals depend on their source locations and source mechanisms, the time–frequency-domain representation is expected to improve ML performances. Several studies (Nakano et al. 2019; Rouet-Leduc et al. 2020; Takahashi et al. 2021) have discriminated low-frequency (2–8 Hz; e.g., Obara 2002) earthquakes in subduction zones from ordinary earthquakes dominated by higher frequency components. Similarly, volcanic earthquakes can also be classified by their dominant frequencies. Iguchi (1994) classified volcanic earthquakes at Sakurajima volcano into 4 groups: A-type, BH-type, BL-type, and explosion. Of those, A-type earthquakes result from fault ruptures within the edifice and are dominated by energy at 10–20 Hz, whereas the other types are dominated by lower frequency components. Chouet (1996) classified volcanic earthquakes as either volcano-tectonic (VT) associated with shear failures and dominated by higher frequency components, or long-period (LP) events dominated by frequency components lower than several hertz, and Scarpetta et al. (2005) and Canário et al. (2020) developed methods to classify volcanic earthquakes by focusing on differences in their frequency characteristics. Despite the importance of efficiently classifying these signals using automated ML systems for monitoring seismic and volcanic activity, detailed comparisons of the performances of ML methods using time–frequency-domain data to those using time-domain data are lacking.
Canário et al. (2020) made such a comparison, and found that the performances were mostly similar. However, their study was based only on 10 frequency components after performing a wavelet transform of the time–frequency-domain representations. CNN performance might depend on the specific selection of frequency components. In this study, we compared the performances of 1D (time-domain) and 2D (time–frequency-domain) CNNs by using the same time-window width, number of data points, and frequency components in the running spectral images (2D) as those of the 1D waveform traces. We used two datasets from different tectonic settings, one from the Nankai trough and one from Sakurajima volcano, to gauge whether CNN performance depended on the input dataset. Based on our results, we discuss the possibility of determining the most useful frequency components solely using 1D CNNs.
Methods and data
We compared the performances of 1D and 2D CNNs in discriminating different seismic and volcanic signals; these CNNs use waveform traces and running spectral images, respectively, as input data. For the 2D CNN, we used the model developed in our previous study (Nakano et al. 2019), in which running spectral images of 64 × 64 pixels are processed using two sets of convolutional and max pooling layers, and then passed into two fully connected layers to classify signals into three classes: earthquake, tectonic tremor, or noise. The dimension of the convolutional kernel (also called convolutional filter) was 5 × 5, and the stride of the max pooling layer was 2 × 2. We adopted the rectified linear unit (e.g., Fukushima 1980) as an activation function. The first and second convolutional layers had 32 and 16 channels, respectively (Additional file 1: Table S1), and, respectively, used 64 and 28 frequency components, corresponding to the number of pixels on the vertical axis. The lengths of the first and second fully connected layers were 20 and 3, respectively (see details in Nakano et al. 2019).
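For concreteness, a minimal sketch of this 2D architecture follows. PyTorch is our assumption (the paper does not name its framework), and so are the padding choices, which we set so that the frequency axis has 64 bins in the first convolutional layer and 28 after the second convolution, matching the counts stated above.

```python
import torch
import torch.nn as nn

class CNN2D(nn.Module):
    """Sketch of the 2D CNN: 64x64 running-spectral image -> 3 classes."""

    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),  # keeps 64 frequency bins
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(32, 16, kernel_size=5),            # no padding: 32 -> 28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 14 * 14, 20),
            nn.ReLU(),
            nn.Linear(20, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x))

# logits = CNN2D()(torch.randn(8, 1, 64, 64))  # -> shape (8, 3)
```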
For comparison, we constructed a 1D CNN by replacing the 2D convolutional and pooling layers with two sets of 1D convolutional and pooling layers each (i.e., four sets of convolutional and max pooling layers), and the output was passed into two fully connected layers (Additional file 1: Table S2). The length of the convolutional kernel was 5, and the stride of the max pooling layer was 2, matching the 2D CNN described above. Again, we adopted the rectified linear unit as an activation function. We tested three models with different channel numbers in each layer (Additional file 1: Table S2). In Model 1, the first and second convolutional layers each had 64 channels, corresponding to the number of frequency components in the first layer of the 2D CNN, and the third and fourth layers each had 28 channels, as in the second layer of the 2D CNN. In Model 2, the first layer had 16 channels, and the number of channels was doubled in each subsequent layer, reaching 128 in the fourth layer, as in the model of Ross et al. (2018b). Model 3 was the same as Model 2, but the first layer had only two channels. The number of trainable parameters for each model is summarized in Additional file 1: Table S3.
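A corresponding sketch of 1D CNN model 1, under the same assumptions (PyTorch, with 'same' padding chosen by us so that each pooling layer exactly halves the 4096-sample trace):

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """One 1D convolution (kernel length 5) + ReLU + max pooling (stride 2)."""
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.MaxPool1d(2),
    )

class CNN1DModel1(nn.Module):
    """Sketch of 1D CNN model 1: 4096-sample waveform trace -> 3 classes."""

    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, 64),   # 4096 -> 2048
            conv_block(64, 64),  # 2048 -> 1024
            conv_block(64, 28),  # 1024 -> 512
            conv_block(28, 28),  # 512  -> 256
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 256, 20),
            nn.ReLU(),
            nn.Linear(20, n_classes),
        )

    def forward(self, x):  # x: (batch, 1, 4096)
        return self.classifier(self.features(x))
```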
To prepare the data for input into the 1D CNN, we decimated the seismic records to 25 Hz and trimmed the data so that the signal to be classified fit within a 163.84-s time window. The waveforms were then normalized to the absolute maximum value in each trace. Each waveform trace had 4096 data points, and the Nyquist frequency was 12.5 Hz. A running spectral image corresponding to the waveform trace was created by computing the amplitude spectra of the short-term Fourier transforms of 5.12 s (128-point) half-overlapping time windows of the decimated continuous waveform, and then cutting them to the same time window to obtain images of 64 × 64 pixels. These running spectral images included signals between 0.2 and 12.5 Hz within the 163.84-s time window. Each image was normalized to the maximum value and used as the input to the 2D CNN. The waveform traces and running spectral images prepared in this way have a common number of data points and contain frequency components up to 12.5 Hz, but the running spectral images do not contain phase information or frequency components below 0.2 Hz.
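A sketch of this preprocessing in Python follows; the paper specifies only the numerical parameters, so the particular SciPy calls and the exact trimming are our assumptions.

```python
import numpy as np
from scipy.signal import stft

def preprocess(trace_25hz):
    """trace_25hz: 4096 samples (163.84 s at 25 Hz), already decimated.

    Returns the normalized waveform (1D input) and a 64x64
    running-spectral image (2D input)."""
    x = trace_25hz / np.max(np.abs(trace_25hz))  # normalize the waveform

    # 5.12-s (128-point), half-overlapping short-term Fourier transforms
    f, t, Z = stft(x, fs=25.0, nperseg=128, noverlap=64)
    spec = np.abs(Z)

    # drop the DC bin and trim to 64x64 (~0.2-12.5 Hz within the window)
    img = spec[1:65, :64]
    img = img / img.max()  # normalize the image
    return x, img
```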
To evaluate whether CNN performance depends on the input dataset, we used two catalogs from different tectonic settings. The first was the catalog of shallow tectonic tremor at the Nankai trough subduction zone used by Nakano et al. (2019) (Additional file 1: Fig. S1), and the other comprised volcanic earthquakes excited by summit explosions at Sakurajima volcano, based on the eruption catalog developed by the Kagoshima Meteorological Office of the Japan Meteorological Agency (JMA) (Additional file 1: Fig. S2).
For the Nankai catalog, we created the input waveforms and running spectral images from DONET three-component broadband seismometer records (Kaneda et al. 2015; Kawaguchi et al. 2015; Aoi et al. 2020) using time windows centered on the origin times of local earthquakes and tectonic tremor events. Noise data were created with the same start time as that of Nakano et al. (2019). From the catalog, we randomly selected events in each signal class and split the dataset into 70% for training, 15% for validation, and 15% for test. In Nakano et al. (2019), input data with low signal-to-noise ratios (S/N) were removed by visual inspection. Here, we evaluated the data quality based on the S/N computed from the peak absolute amplitude and the standard deviation during the first 10 s of the waveform traces, band-passed between 1 and 10 Hz. Signals with S/N in the top 80% were used for local earthquakes and tectonic tremor events, and those in the bottom 80% were used for noise. The criteria for signal selection were the same for the waveform and running spectrum datasets and they included the same data. Figure 1a and b shows representative input waveforms and their corresponding running spectral images, respectively, for ordinary earthquakes, tectonic tremor events, and noise. Compared to local earthquakes, tectonic tremor signals were dominated by lower frequency components and had longer durations. The number of waveforms in each dataset is listed in Table 1.
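The S/N criterion is simple to state in code; this is a sketch under our assumptions (a zero-phase Butterworth band-pass, since the paper does not name the filter), with the first 10 s at 25 Hz corresponding to 250 samples:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def snr(trace, fs=25.0):
    """Peak absolute amplitude divided by the standard deviation of the
    first 10 s of the trace, after band-passing between 1 and 10 Hz."""
    b, a = butter(4, [1.0, 10.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, trace)
    return np.max(np.abs(x)) / np.std(x[: int(10 * fs)])
```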
Representative waveforms and running spectral images used as input data for CNNs. a Waveforms and b corresponding running spectral images of a local earthquake, a tectonic tremor event, and noise from the Nankai dataset. c Waveforms and d corresponding running spectral images of a non-eruptive (NER) volcanic event, an eruptive (ER) event, and noise from the Sakurajima dataset. Amplitudes in a, c are normalized to the maximum amplitude
Table 1 Numbers of samples used for the training, validation, and test datasets
For the Sakurajima catalog, we used volcanic earthquakes that occurred in 2015, when eruptive activity was at its highest level in recent years; activity ceased by the end of September. At Sakurajima, A-type volcanic earthquakes result from faulting within the edifice, BH-type from the release of volumetric strain due to magmatic intrusions, BL-type are related to ejections of volcanic bombs and ash from the summit, and explosion earthquakes are excited by violent explosive eruptions (Iguchi 1994). Here, we classified seismic signals as eruptive events (ER, including BL-type and explosion earthquakes), non-eruptive events (NER, including BH- and A-type earthquakes), or noise. The aim of this classification was to develop a telemetric seismic monitoring system for volcanic eruptions. Figure 1c and d shows representative waveforms and their corresponding running spectral images, respectively, for our three signal classifications at Sakurajima. ER events are dominated by lower frequency components like tectonic tremor, whereas NER events contain higher frequency components like local earthquakes (Iguchi 1994). Input data for the Sakurajima catalog were created from the vertical components of seismograms of the Sakurajima observation network operated by JMA. Since the eruption catalog defines event origin times only down to the minute, the exact time that each signal appeared was unknown. Therefore, we created three input waveforms and running spectral images for each ER and NER event by randomly shifting the center of the time window between ± 65.536 s (i.e., 40% of the time window) from the origin time. Noise data were created from randomly selected time windows, excluding the 327.68 s preceding and 1 h following event origin times. We selected enough time windows of noise to roughly match the number of NER events. The quality of each waveform was evaluated based on the same S/N criterion as in the Nankai dataset. In this way, we obtained 14,934, 128,516, and 125,795 waveforms for ER events, NER events, and noise, respectively. In each class, we used events that occurred during February–August as the training dataset, and events occurring in January and September as the validation and test datasets, respectively. The number of waveforms in each dataset varied according to the event type because, compared to NER events, far fewer ER events occurred during the analysis period and because the monthly number of events varied (Table 1).
We trained the 1D and 2D CNN models using the training dataset for 300 epochs. Because the amount of data in each class was quite variable, especially in the Sakurajima dataset, we allowed the duplication of samples from minority classes to match the size of the majority class when subdividing the dataset into minibatches. The parameters of the CNN were trained using a cross-entropy loss function with the Adam optimization algorithm (Kingma and Ba 2015) in minibatches of 64 waveforms. We determined the predicted class as the one with the largest output probability from the CNN. Model performance was evaluated after each epoch using the validation dataset and the balanced accuracy (BACC) metric, defined for a three-class input as:
$$\mathrm{BACC} = \frac{1}{3}\left( \frac{T_{00}}{N_{0}} + \frac{T_{11}}{N_{1}} + \frac{T_{22}}{N_{2}} \right),$$
where \(T_{ii}\) is the number of correctly classified waveforms and \(N_{i}\) is the total number of waveforms in the \(i\)th class. Based on these results, we selected the model with the highest BACC value and applied it to the test dataset to evaluate model performance upon generalization, again based on BACC.
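In code, BACC is simply the mean of the per-class recalls; a minimal NumPy version (ours, for illustration):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, n_classes=3):
    """Mean over classes of (correctly classified / total in class)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == i] == i) for i in range(n_classes)]
    return float(np.mean(recalls))

# balanced_accuracy([0, 0, 1, 2], [0, 1, 1, 2])  # -> (0.5 + 1 + 1) / 3
```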
Figure 2 summarizes the performances of the trained 1D and 2D CNN models when applied to the validation and test datasets (open and closed bars, respectively). Among the 1D models, model 1 achieved the highest BACC value for both the Nankai and Sakurajima validation datasets. The 2D CNN, however, achieved the highest overall BACC value for both validation datasets, indicating that the 2D model using running spectral images performed better than the 1D models using waveform traces. Nevertheless, when generalized to the test dataset, the BACC value of model 1 was very similar to that of the 2D CNN for both datasets. This result implies that, upon generalization, the performances of the 1D and 2D CNNs were comparable.
Balanced accuracy (BACC) scores after training the 1D and 2D CNN models. a Nankai and b Sakurajima datasets. Open bars show BACC scores when applied to the validation dataset and filled bars with corresponding values reported show BACC scores when applied to the test dataset
Table 2 shows the confusion matrices of the 2D and 1D CNN model 1 for the Nankai dataset (those for the Sakurajima dataset are reported in Additional file 1: Table S4). In the Nankai dataset, 94 and 121 samples were misclassified by the 2D and 1D CNNs, respectively; of those, 50 samples from 20 earthquakes or tremor events were misclassified to the same class by both CNNs. In the Sakurajima dataset, 1129 and 1257 samples were misclassified by the 2D and 1D CNNs, respectively; of those, 731 samples from 156 ER/NER events were misclassified to the same class by both CNNs. These results imply that the 2D and 1D CNNs learned similar features inherent to the input signals and misclassified these samples similarly.
Table 2 Confusion matrices for 2D CNN and 1D CNN model 1 to the Nankai trough dataset
Computation times for training the CNN parameters and for classifying one sample are summarized in Additional file 1: Table S5. The training times for the 1D models were three to four times longer than that for the 2D model. Classifying one sample takes about 1 ms, although the 1D models take two to three times longer than the 2D model.
Our results show that the 1D and 2D CNNs performed comparably. Similar results were obtained by Canário et al. (2020), who used only 10 frequency components in their time–frequency-domain data. Although the running spectra (2D) explicitly represent the dominant frequency components and durations characteristic of the signals, the 1D CNN seems to have successfully learned the features necessary for effectively discriminating between the signal classes. Therefore, both time-domain and time–frequency-domain data can be used to classify seismic signals and achieve similar predictive performance. However, running spectral images lose some information, especially the long-period components of seismic data. Very-low-frequency earthquakes occurring at subduction zones (e.g., Ito et al. 2007; Nakano et al. 2018) and very-long-period events at volcanoes (e.g., Chouet 1996) are dominated by signals with periods longer than tens of seconds, which were lost from the running spectra used in this study but retained in the waveform traces. Using 1D CNNs may therefore result in better predictive performance when discriminating these signals. Of course, running spectral images can include long-period components if they are created with a proper time window. Although 1D CNNs take several times longer for signal classifications than 2D CNNs (Additional file 1: Table S5), the computation time of about 1 ms is short enough for real-time applications. Because 1D CNNs do not require preprocessing such as running spectral computations, they are better suited to real-time seismic monitoring than 2D CNNs.
The convolutional layer of a 1D CNN can be written as:
$$u_{i,k} = f\left( b_{k} + \sum_{i'=-m}^{m} \sum_{k'=1}^{n} a_{i',k',k}\, z_{i+i',k'} \right),$$
where \(z_{i,k}\) and \(u_{i,k}\) are the \(i\)th data point of the \(k\)th channel in the input and output of the layer, respectively, \(a_{i',k',k}\) is a kernel applied to the input, \(b_{k}\) is the bias of the \(k\)th channel, \(2m+1\) is the kernel size, \(n\) is the number of channels in the input layer, and \(f(\cdot)\) is an activation function. By setting \(f\left(x\right)=x\), \(b_{k}=0\), and \(n=1\), Eq. (2) simplifies to:
$$u_{i} = \sum_{i'=-m}^{m} a_{i'}\, z_{i+i'},$$
which constitutes a finite impulse response (FIR) filter of length \(2m+1\). Because the weights and biases are tuned for signal classification during training, 1D CNNs can be considered to learn the frequency components that discriminate the input signals.
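Consequently, the amplitude response of each first-layer channel can be read directly from its trained weights; a sketch using `scipy.signal.freqz` (the indexing into the model follows our sketches above and is hypothetical):

```python
import numpy as np
from scipy.signal import freqz

def channel_response(kernel, fs=25.0, n_freq=512):
    """Frequency response of one first-layer convolutional kernel,
    treated as an FIR filter at the 25-Hz input sampling rate."""
    w, h = freqz(np.asarray(kernel), worN=n_freq, fs=fs)
    return w, np.abs(h)  # frequencies in Hz, |H(f)|

# e.g., kernel = model.features[0][0].weight[0, 0].detach().numpy()
```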
In our application, the difference in signal power between the low- and high-frequency components may be key to discriminating between the signals because, in both the Nankai and Sakurajima datasets, the two earthquake classes were dominated by high- or low-frequency components. As shown by Eq. (3), the convolutional layer of a 1D CNN is basically equivalent to a set of FIR filters. In the first convolutional layer, the input seismic waveforms are filtered by the FIR filters, then passed into the following layer after application of the activation function. Therefore, signals in the passband of the FIR filters in the first convolutional layer are used for classifications in the first step. To check this hypothesis, we tested a CNN model with only two channels in the first layer (1D CNN model 3) and computed the FIR filter responses from the channel weights. We note that performance was not significantly degraded for this model (Fig. 2). Figure 3 shows the responses of the FIR filters in the two channels of the first convolutional layer; they were band-rejection filters with a stopband at 4–6 Hz and different amplitude responses for the higher frequency components. This result implies that the differences in the frequency components above and below 4–6 Hz were important for signal discriminations in our dataset.
FIR filter responses in the first convolutional layer and average spectra of each signal class. a, c, e Nankai and b, d, f Sakurajima datasets. FIR filter response curves are computed from the weights of a, b the two channels in the first convolutional layer of 1D CNN model 3 and c, d the 64 channels in the first layer of 1D CNN model 1 (see Table S2). In a, b red and blue lines indicate response curves for the first and second channels, respectively. In c, d response curves for channels with the four largest amplitudes are shown by red lines, and other channels by gray lines. e, f Average spectra of each signal class
To visualize the difference in the frequency components of the input signals, we computed average spectra for each signal class in the Nankai and Sakurajima datasets (Fig. 3e, f) by normalizing the waveforms to their absolute maximum amplitude, performing a Fourier transform, and then averaging the spectra for each class. Tectonic tremor and ER events were dominated by frequency components lower than 5–7 Hz, whereas local earthquakes and NER events were dominated by frequency components above 7 Hz. Noise was dominated by frequencies below 1 Hz and above 10 Hz in the Nankai dataset, whereas in the Sakurajima dataset noise showed lower signal levels than NER events below 6 Hz. Therefore, differences in the signal levels at higher and lower frequencies seem key to accurate signal classifications, as in the frequency scanning method used to discriminate tectonic tremor from ordinary earthquakes (e.g., Katakami et al. 2017; Sit et al. 2012). We confirmed that this property is retained by CNNs using many channels in the first convolutional layer; Fig. 3 shows the FIR filter responses computed for the first convolutional layer (64 channels) of the 1D CNN model 1. Although the FIR filters showed various responses with different passbands and stopbands that covered the entire frequency range, the filters with the largest amplitude responses comprised low- and high-pass filters with cutoffs around 7–9 Hz. Again, the difference between the higher and lower frequency components emerges as fundamental information for signal classification.
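The class-averaged spectra can be computed along the following lines (our sketch of the stated procedure: normalize each trace, take the FFT magnitude, and average within a class):

```python
import numpy as np

def average_spectrum(traces, fs=25.0):
    """traces: (n_events, 4096) array of one signal class.

    Returns FFT frequencies and the class-averaged amplitude spectrum."""
    x = traces / np.max(np.abs(traces), axis=1, keepdims=True)
    spec = np.abs(np.fft.rfft(x, axis=1))
    freq = np.fft.rfftfreq(x.shape[1], d=1.0 / fs)
    return freq, spec.mean(axis=0)
```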
Although the filter responses of the first convolutional layer correspond to the characteristics of the input signal, interpreting deeper layers is not straightforward. If we similarly compute the filter responses of the deeper convolutional layers, we obtain responses only for low-frequency components because the pooling layer downsamples the data. These filters were applied to the input data after non-linear conversions by the activation function and max pooling, and the outputs from different channels in the previous layer were added. Therefore, certain aspects of neural networks remain a 'black-box'.
So far, we have focused on the amplitude responses of the filters, but the phase characteristics of the filters and the appearance of signals in different channels should also play important roles for different applications. For example, Ross et al. (2018b) developed a CNN to estimate P- and S-wave arrival times. Although the dominant frequencies of S-waves may be lower than those of P-waves due to attenuation at higher frequencies, these waves generally have different signal durations and appearances in three-component seismograms that may be important for their identification. Therefore, the correspondence between signal characteristics and FIR filter responses may be limited to applications such as in this study.
We used the same Nankai dataset and 2D CNN structure as in our previous study (Nakano et al. 2019), but the performance upon generalization was lower in this study: the earlier study achieved an accuracy of 0.995, whereas we obtained BACC = 0.965. This difference is because Nakano et al. (2019) fixed the CNN model and used the same dataset for validation and test, resulting in generally better performance. When applied to the validation dataset in this study, the model attained BACC = 0.981 (Fig. 2a). In addition, Nakano et al. (2019) removed low-quality data by visual inspection, whereas we removed low-quality data based on the S/N computed from the waveform traces. We note that most of our misclassified data had small S/N. Some local earthquake data that were misclassified as tectonic tremor were dominated by lower frequency components at all stations; these signal characteristics may be due to source properties or path effects. Such data might have been noticed and removed by visual inspection, but not by the S/N criterion used in this study. However, visual inspection is not practical in automated seismic monitoring systems, and the performance attained in this study, based on objectively selected data, should be realistic.
The BACC for the Sakurajima dataset was slightly lower than that for the Nankai dataset. When applied to the validation dataset for Sakurajima, the model attained BACC = 0.943. Misclassifications mostly occurred between NER and ER events; noise was rarely misclassified, and other events were rarely misclassified as noise (Additional file 1: Table S4). Misclassified NER events had stronger low-frequency components than other events in the same class, i.e., they had signal characteristics similar to ER events. It is possible that seismic signals from small, uncatalogued eruptions were included in the NER dataset, or that the NER dataset was more variable because of the variety of possible source processes and path effects at volcanoes. Future work should therefore seek higher-performance CNN models to resolve this problem.
We compared the performances of 1D and 2D CNNs (using waveform traces and running spectral images as input data, respectively) in classifying seismic signals from two different tectonic settings: tectonic tremor vs. ordinary earthquakes at Nankai trough and eruptive vs. non-eruptive volcanic earthquakes at Sakurajima. In both applications, the 1D and 2D CNNs performed similarly, indicating that the data preprocessing to produce the time–frequency-domain representation is not necessary to achieve high performance signal discriminations. We cannot exclude the possibility that different results will be obtained for different datasets or when other network structures are used because the behaviors of neural networks remain difficult to predict.
Because the signals in our datasets were dominated by high- and low-frequency components, we were able to determine which frequency components were effective for signal classifications using the 1D CNN. The first convolutional layer applies a set of FIR filters to the input seismogram and the filtered seismograms are passed into deeper layers, meaning that the passband frequencies of the filters are effective for signal discrimination. Our application to the Nankai and Sakurajima datasets showed that the filters constituted low- and high-pass filters with cut-off frequencies around 7–9 Hz: tectonic tremor events (Nankai) and eruptive earthquakes (Sakurajima) were dominated by lower frequencies, and local earthquakes (Nankai) and non-eruptive earthquakes (Sakurajima) by higher frequencies.
These results may help to reveal useful characteristics of input signal classes and to improve the performance of future signal discrimination models.
Seismic waveform data from DONET and the JMA observation network at Sakurajima are available at http://www.hinet.bosai.go.jp/?LANG=en. The Sakurajima eruption catalog of the Kagoshima Meteorological Office of JMA is available at https://www.jma-net.go.jp/kagoshima/vol/kazan_top.html.
ML:
Machine learning
VT:
Volcano-tectonic
LP:
Long-period
JMA:
Japan Meteorological Agency
S/N:
Signal-to-noise ratios
ER:
Eruptive
NER:
Non-eruptive
BACC:
Balanced accuracy
FIR:
Finite impulse response
Aoi S, Asano Y, Kunugi T, Kimura T, Uehira K, Takahashi N, Ueda H, Shiomi K, Matsumoto T, Fujiwara H (2020) MOWLAS: NIED observation network for earthquake, tsunami and volcano. Earth Planet Space 72:126. https://doi.org/10.1186/s40623-020-01250-x
Canário JP, Mello R, Curilemc M, Huenupan F, Rios R (2020) In-depth comparison of deep artificial neural network architectures on seismic events classification. J Volcanol Geotherm Res 401:106881. https://doi.org/10.1016/j.jvolgeores.2020.106881
Chouet BA (1996) Long-period volcano seismicity: its source and use in eruption forecasting. Nature 380:309–316
Del Pezzo E, Esposito A, Giudicepietro F, Marinaro M, Martini M, Scarpetta S (2003) Discrimination of earthquakes and underwater explosions using neural networks. Bull Seismo Soc Am 93:215–223
Dokht RMH, Kao H, Visser R, Smith B (2019) Seismic event and phase detection using time–frequency representation and convolutional neural networks. Seismo Res Lett 90(2A):481–490. https://doi.org/10.1785/0220180308
Dowla FU, Taylor SR, Anderson RW (1990) Seismic discrimination with artificial neural networks: preliminary results with regional spectral data. Bull Seismo Soc Am 80:1346–1373
Fukushima K (1980) Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern 36:193–202. https://doi.org/10.1007/BF00344251
Holtzman BK, Paté A, Paisley J, Waldhauser F, Repetto D (2018) Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field. Sci Adv 4:eaao2929. https://doi.org/10.1126/sciadv.aao2929
Iguchi M (1994) A vertical expansion source model for the mechanisms of earthquakes originated in the magma conduit of an andesitic volcano: Sakurajima, Japan. Bull Volcanol Soc Japan 39:49–67
Ito Y, Obara K, Shiomi K, Sekine S, Hirose H (2007) Slow earthquakes coincident with episodic tremors and slow slip events. Science 315:503–506. https://doi.org/10.1126/science.1134454
Kaneda Y, Kawaguchi K, Araki E, Matsumoto H, Nakamura T, Kamiya S, Ariyoshi K, Hori T, Baba T, Takahashi N (2015) Development and application of an advanced ocean floor network system for megathrust earthquakes and tsunamis. In: Paolo F, Laura B, Angelo DS (eds) Seafloor observatories. Springer, Berlin, pp 643–662 (10.1007/978-3-642-11374-1_25)
Katakami S, Yamashita Y, Yakihara H, Shimizu H, Ito Y, Ohta K (2017) Tidal response in shallow tectonic tremors. Geophys Res Lett 44:9699–9706. https://doi.org/10.1002/2017GL074060
Kawaguchi K, Kaneko S, Nishida T, Komine T (2015) Construction of the DONET real-time seafloor observatory for earthquakes and tsunami monitoring. In: Paolo F, Laura B, Angelo DS (eds) Seafloor observatories. Springer, Berlin, pp 211–228 (10.1007/978-3-642-11374-1_10)
Kerr RA (2013) Geophysical exploration linking deep earth and backyard geology. Science 340:1283–1285. https://doi.org/10.1126/science.340.6138.1283
Kingma DP, Ba J (2015) Adam: a method for stochastic optimization. In: Proceedings of the 3rd international conference on learning representations (ICLR2015). Hilton San Diego Resort-Spa, San Diego, 7–9 May 2015, Accessed October 2018. Available at http://arxiv.org/abs/1412.6980.
Kong Q, Allen RM, Schreier L, Kwon YW (2016) MyShake: a smartphone seismic network for earthquake early warning and beyond. Sci Adv 2:e1501055. https://doi.org/10.1126/sciadv.1501055
LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444. https://doi.org/10.1038/nature14539
Mousavi SM, Zhu W, Sheng Y, Beroza GC (2019) CRED: a deep residual network of convolutional and recurrent units for earthquake signal detection. Sci Rep 9:10267. https://doi.org/10.1038/s41598-019-45748-1
Mousavi SM, Ellsworth WL, Zhu W, Chuang LY, Beroza GC (2020) Earthquake transformer—an attentive deeplearning model for simultaneous earthquake detection and phase picking. Nat Comm 11:3952. https://doi.org/10.1038/s41467-020-17591-w
Nakano M, Hori T, Araki E, Kodaira S, Ide S (2018) Shallow very-low-frequency earthquakes accompany slow slip events in the Nankai subduction zone. Nat Comm 9:984. https://doi.org/10.1038/s41467-018-03431-5
Nakano M, Sugiyama D, Hori T, Kuwatani T, Tsuboi S (2019) Discrimination of seismic signals from earthquakes and tectonic tremor by applying a convolutional neural network to running spectral images. Seism Res Lett 90(2A):530–538. https://doi.org/10.1785/0220180279
National Research Institute for Earth Science and Disaster Resilience (2019) NIED MOWLAS, National Research Institute for Earth Science and Disaster Resilience. https://doi.org/10.17598/NIED.0009
Obara K (2002) Nonvolcanic deep tremor associated with subduction in Southwest Japan. Science 296:1679–1681. https://doi.org/10.1126/science.1070378
Perol T, Gharbi M, Denolle M (2018) Convolutional neural network for earthquake detection and location. Sci Adv 4:e1700578. https://doi.org/10.1126/sciadv.1700578
Ross ZE, Meier MA, Hauksson E (2018a) P wave arrival picking and first-motion polarity determination with deep learning. J Geophys Res 123:5120–5129. https://doi.org/10.1029/2017JB015251
Ross ZE, Meier MA, Hauksson E, Heaton TH (2018b) Generalized seismic phase detection with deep learning. Bull Seism Soc Am 108:2894–2901. https://doi.org/10.1785/0120180080
Rouet-Leduc B, Hulbert C, McBrearty IW, Johnson PA (2020) Probing slow earthquakes with deep learning. Geophys Res Lett 47:e2019GL085870. https://doi.org/10.1029/2019GL085870
Scarpetta S, Giudicepietro F, Ezin EC, Petrosino S, Del Pezzo E, Martini M, Marinaro M (2005) Automatic classification of seismic signals at Mt. Vesuvius Volcano, Italy, using neural networks. Bull Seismo Soc Am 95:185–196. https://doi.org/10.1785/0120030075
Sit S, Brudzinski M, Kao H (2012) Detecting tectonic tremor through frequency scanning at a single station: application to the Cascadia margin. Earth Planet Sci Lett 353–354:134–144. https://doi.org/10.1016/j.epsl.2012.08.002
Soto H, Schurr B (2021) DeepPhasePick: a method for detecting and picking seismic phases from local earthquakes based on highly optimized convolutional and recurrent deep neural networks. Geophys J Int 227:1268–1294. https://doi.org/10.1093/gji/ggab266
Sugiyama D, Tsuboi S, Yukutake Y (2021) Application of deep learning-based neural networks using theoretical seismograms as training data for locating earthquakes in the Hakone volcanic region, Japan. Earth Planet Space 73:135. https://doi.org/10.1186/s40623-021-01461-w
Takahashi H, Tateiwa K, Yano K, Kano M (2021) A convolutional neural network-based classification of local earthquakes and tectonic tremors in Sanriku-oki, Japan, using S-net data. Earth Planet Space 73:186. https://doi.org/10.1186/s40623-021-01524-y
Wang J, Teng TL (1995) Artificial neural network-based seismic detector. Bull Seismo Soc Am 85:308–319
Wessel P, Smith WHF (1998) New, improved version of generic mapping tools released. Eos 79:579
Yoon CE, O'Reilly O, Bergen KJ, Beroza CG (2015) Earthquake detection through computationally efficient similarity search. Sci Adv 1:e1501057. https://doi.org/10.1126/sciadv.1501057
We used waveform data from DONET and the Japan Meteorological Agency (JMA) observation network at Sakurajima (National Research Institute for Earth Science and Disaster Resilience 2019). We also used the Sakurajima eruption catalog created by the Kagoshima Meteorological Office of JMA. All figures were drawn using Generic Mapping Tools (Wessel and Smith 1998). We thank two anonymous reviewers and the editor N. Uchida for careful review and constructive comments, which have improved the manuscript.
This study was supported by JSPS KAKENHI Grant Number JP19K04050 and JP21H05205 (to MN).
Institute for Marine Geodynamics (IMG), Japan Agency for Marine-Earth Science and Technology, 2-15, Natsushima-cho, Yokosuka, Kanagawa, 237-0061, Japan
Masaru Nakano
Research Institute for Value-Added Information Generation (VAiG), Japan Agency for Marine-Earth Science and Technology, 3173‑25 Showa‑machi, Kanazawa‑ku, Yokohama, 236-0001, Japan
Daisuke Sugiyama
MN designed this paper and performed the analysis. DS developed the computation code for the CNNs. All authors read and approved the final manuscript.
Correspondence to Masaru Nakano.
The authors declare no conflicts of interest associated with this manuscript.
Additional file1
: Fig. S1 Map showing the distribution of DONET stations (gray triangles), hypocenters of regular earthquakes (circles), and slow earthquakes (inverted triangles) used in this study. Fig. S2 Map showing the distribution of JMA seismic stations (gray triangles) and location of earthquakes in 2015 determined by JMA (orange dots). Red triangles are the locations of summit craters. Table S1 Architecture of the 2D CNN model. Table S2 Architectures of the 1D CNN models. Table S3 Number of trainable parameters. Table S4 Confusion matrices for 2D CNN and1D CNN model 1 to the Sakurajima dataset. Table S5 Computation times for training and classifications.
Nakano, M., Sugiyama, D. Discriminating seismic events using 1D and 2D CNNs: applications to volcanic and tectonic datasets. Earth Planets Space 74, 134 (2022). https://doi.org/10.1186/s40623-022-01696-1
Sakurajima
Explosion earthquake
Nankai trough
Slow earthquake
Limei Dai
School of Mathematics and Information Science, Weifang University, Weifang 261061, China
Fund Project: The author is supported by Shandong Provincial Natural Science Foundation, China (ZR2018LA006)
In this paper, we consider the augmented Hessian equations $ S_k^{\frac{1}{k}}[D^2u+\sigma(x)I] = f(u) $ in $ \mathbb{R}^{n} $ or $ \mathbb{R}^{n}_+ $. We first give a necessary and sufficient condition for the existence of classical subsolutions to the equations in $ \mathbb{R}^{n} $ for $ \sigma(x) = \alpha $, which is an extended Keller-Osserman condition. We then obtain the nonexistence of positive viscosity subsolutions of the equations in $ \mathbb{R}^{n} $ or $ \mathbb{R}^{n}_+ $ for $ f(u) = u^p $ with $ p>1 $.
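For orientation (our addition, not part of the abstract): in the classical setting of $ \Delta u \geq f(u) $ with $ f $ continuous, positive, and nondecreasing, the Keller-Osserman result states that entire subsolutions exist if and only if $$ \int_{1}^{\infty}\frac{dt}{\sqrt{F(t)}}=\infty, \qquad F(t)=\int_{0}^{t}f(s)\,ds; $$ the condition referred to here extends a criterion of this type to the augmented Hessian operator $ S_k^{\frac{1}{k}}[D^2u+\alpha I] $.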
Keywords: Augmented Hessian equations, subsolutions, Keller-Osserman condition.
Mathematics Subject Classification: 35J60, 35D40.
Citation: Limei Dai. Existence and nonexistence of subsolutions for augmented Hessian equations. Discrete & Continuous Dynamical Systems - A, 2020, 40 (1) : 579-596. doi: 10.3934/dcds.2020023
|
CommonCrawl
|
Reconciling Continuous and Discrete Complex Domains
Having taken some courses in Systems and Stability dealing mostly with continuous time signals, I am accustomed to thinking about the Laplace and Fourier transforms dealing with complex and purely imaginary exponential signals, respectively. For example, if I transformed $x(t)$ and evaluated its magnitude and phase at $0.5+0.5j$, I would be looking at the response of the system having injected it with $\exp(0.5t)\exp(j0.5\pi t)$.
Recently, I have been working in the discrete domains of the DTFT (continuous in the transform domain), DFT, and Z. Here, we have complex exponentials mapping to the unit circle. Now I am struggling at a high level to reconcile these continuous and discrete domains. If I take the DTFT of a discrete time signal and evaluate it for some input $\exp(0.5 j \hat{\omega} \pi n)$, where $n$ is some index and $\hat{\omega}$ is a relative frequency based on the sampling rate of the continuous time signal, I end up somewhere on the unit circle (not on the imaginary axis) even though the signal is a purely imaginary exponential.
The questions I have are
1.) How does a complex signal (real and imaginary) $x[n] = \exp(0.5n)\cos(j 0.5 \pi \hat{\omega} n)$ map into the DTFT/DFT and Z frequency domains? I am guessing the real part somehow does not map to the DTFT/DFT but does to Z?
2.) Why do we need the concept of a multidimensional space for these discrete frequency signals (DTFT/DFT) when the imaginary axis (Fourier domain) worked fine for continuous signals? Why do we need a unit circle that repeats when we could map the same function to a straight line that repeats every $-\pi$ to $\pi$?
discrete-signals continuous-signals frequency-response
Jack Frye
Simple answer: that's how the math works out.
Better answer: in the discrete domain, frequency is periodic with the sample rate. If you sample at 48 kHz, the range of frequencies that you can represent goes from -24 kHz to +24 kHz. Let's say you have a 100 kHz signal. You can sample this at 48 kHz, but it would look exactly like a 4 kHz signal. In fact the two are indistinguishable, so it makes no sense to treat them differently.
That's why a circle makes a lot of sense here. Frequency is represented as the angle of a point on the circle as seen from the origin. If you go around the circle an integer number of times, you end up exactly where you started.
So in the continuous domain 4 kHz and 100 kHz are different points. In the discrete domain (when sampled at 48 kHz) they are not.
Hilmar
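A quick numerical check of the aliasing claim above (a minimal numpy sketch, not part of the original answer):

import numpy as np

fs = 48_000.0                 # sample rate in Hz
n = np.arange(16)             # a few sample indices
x_100k = np.cos(2 * np.pi * 100_000.0 * n / fs)   # 100 kHz tone, sampled
x_4k = np.cos(2 * np.pi * 4_000.0 * n / fs)       # 4 kHz tone, sampled
print(np.allclose(x_100k, x_4k))                  # True: sample-for-sample identical

The two sampled sequences are exactly equal because 100 kHz and 4 kHz differ by an integer multiple of the 48 kHz sample rate.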
Consider the following $$x(t)=\cos(2\pi f t) $$
$$\frac{d}{dt} x(t)= -2\pi f \sin(2\pi f t) $$ you can take the derivative with respect to time. In the $s$ domain, $sX(s)$ corresponds to the time derivative of $x(t)$ (with zero initial conditions).
For discrete time, $$ x[n]=\cos(2\pi f n) $$ $n$ is a discrete variable $$ \frac{d}{dn} x[n]= \lim_{\Delta \rightarrow 0} \frac{x[n+\Delta]-x[n]} {\Delta} \quad \text{is nonsense} $$ Instead we have the $z$ domain where $zX(z)$ corresponds to $x[n+1]$.
Since poles and zeros can lie at arbitrary locations (not restricted to $s=j\omega$ or to $z$ on the unit circle), using the $s$ domain just doesn't work.
The Z Transform is a special case of the Laurent Series
https://en.wikipedia.org/wiki/Laurent_series
from the first paragraph:
In mathematics, the Laurent series of a complex function f(z) is a representation of that function as a power series which includes terms of negative degree. It may be used to express complex functions in cases where a Taylor series expansion cannot be applied.
Since $(d^k /dn^k)\, x[n]$ would be necessary for a Taylor series, and those derivatives don't exist, we use the Z transform.
But wait, isn't $e^{-sT}$ a delay operator in the $s$ plane? Yes it is, so we can describe signals like $y(t)= a_0 x(t) + a_1x(t-T) + a_2 x(t-2T)$ in the $s$ domain as $(a_0+ e^{-sT}a_1 + e^{-2sT}a_2) X(s)$. The problem is that an LTI continuous time filter is described as a (finite) ratio of polynomials in $s$.
$$ H(s)=\frac{a_0 + a_1 s + a_2 s^2 +\dots + a_n s^{n}}{b_0 + b_1 s + b_2 s^2 +\dots + b_m s^{m}} $$ The transfer function for $e^{-sT}$ has infinitely many terms. This can be approximated by a Padé expansion, and often is in some applications, but you can see that the $s$ plane gets very complicated and is only approximate. $e^{sT}$ and its approximations do come in handy when relating $s$ to $z$.
Analogue computing: fun with differential equations
Solving differential equations instantaneously, using some electrical components and an oscilloscope
by Bernd Ulmann. Published on 13 March 2016.
When it comes to differential equations, things start to get pretty complicated—or at least that's what it looks like. When I studied mathematics, lectures on differential equations were considered to be amongst the hardest and most abstract of all and, to be honest, I feared them because they really were incredibly formalistic and dry. This is a pity as differential equations make nature tick and there are few things more fascinating than them.
When asked about solving differential equations, most people tend to think of a plethora of complex numerical techniques, such as Euler's algorithm, Runge–Kutta or Heun's method, but few people think of using physical phenomena to tackle them, representing the equation to be solved by interconnecting various mechanical or electrical components in the right way. Before the arrival of high-performance stored-program digital computers, however, this was the main means of solving highly complicated problems and spawned the development of analogue computers.
Analogies and analogue computers
When faced with a problem to solve, there are two approaches we could take. The first is to recreate a scaled model of the problem to be investigated, based on exactly the same physical principles as the full size version. This is often done in, for example, structural analysis: Antoni Gaudí first used strings and weights to build a smaller model of his Church of Colònia Güell near Barcelona to help him determine whether it was stable. Similar techniques have been used from the Gothic period well into the 20th century, when a textile fabric was used to design the roof structure for the Olympic stadium in Munich.
Gaudí's structural analysis model of the Colònia Güell.
As powerful as this approach is (as another example, think of using soap films when determining a minimal surface), it is quite limited in its application as you are restricted to the same physical principles as those in the full problem. This is where the second technique comes into play: comparing the potentially very complex system under study to a different, but behaviourally similar, physical system. In other words, this similar, probably simpler, physical system is an analogy of the first: hence the creation and naming of analogue computers—computers that are able to study one phenomenon by using another, such as looking at the behaviour of a mechanical oscillator by using an electronic model.
Analogue computers
Analogue computers are powerful computing devices consisting of a collection of computing elements, each of which has some inputs and outputs and performs a specific operation such as addition, integration (a basic operation on such a machine!) or multiplication. These elements can then be interconnected freely to form a model, an analogue, of the problem that is to be solved.
The various computing elements can be based on a variety of different physical principles: in the past there have been mechanical, hydraulic, pneumatic, optical, and electronic analogue computers. Leaving aside the Antikythera mechanism—which is the earliest known example of a working analogue computer, used by the ancient Greeks to predict astronomical positions and eclipses—the idea of general purpose analogue computers was developed by William Thomson, better known as Lord Kelvin, when his brother, James Thomson, developed a mechanical integrator mechanism (previously also developed by Johann Martin Hermann in 1814).
Lord Kelvin realised that, given some abstract computing elements, it is possible to solve differential equations using machines: a truly trailblazing achievement. Let us try to solve the differential equation representing simple harmonic motion (perhaps of a mechanical oscillator!),
\begin{align}
\frac{\mathrm{d}^2y}{\mathrm{d}t^2}+\omega^2y=0,\tag{1}
\end{align}
by means of a clever setup consisting of integrators and other devices and using the technique developed by Lord Kelvin in 1876.
We can write (1) more compactly as $\ddot{y}+\omega^2y=0$, where the dots over the variables denote time derivatives. To simplify things a bit we will also assume that $\omega^2=1$. Hence we rearrange (1) so that the highest derivative is isolated on one side of the equation, yielding
\begin{align}
\ddot{y}=-y.\tag{2}
\end{align}
Let us now assume that we already know what $\ddot{y}$ is (a blatant lie, at least for the moment). If we have some device capable of integration it would be easy to generate $\dot{y}=\int\ddot{y}\ \text{d}t+c_0$ and from that $y=\int\dot{y}\ \text{d}t+c_1$, with some constants $c_0$ and $c_1$.
Using a second type of computing element that allows us to change signs, it is therefore possible to derive $-y$ from $\ddot{y}$ by means of three interconnected computing elements (two integrators and a sign changer). Obviously, this is just the right hand side of (2), which is equal to $\ddot{y}$, assumed known at the beginning. Now Kelvin's genius came to the fore: we can set up a feedback circuit by feeding the first integrator in our setup with the output of the sign changing unit at the end. This is shown below in an abstract (standard) notation: this is how programs for analogue computers are written down.
The basic circuit for solving $\ddot{y}=-y$. From left to right we have two integrators and a summer (with each component inverting the sign).
The two triangular elements with the rectangles on their left denote integrators; while the single triangle on the right is a summer. It should be noted that for technical reasons all of these computing elements perform an implicit change of sign, so the leftmost integrator actually yields $-\dot{y}$ instead of $\dot{y}$ as in our thought experiment above, while the summer with the one input $y$ yields $-y$.
However, if one sets up the two integrators and a summer as demonstrated above, the system would just sit there and do nothing, yielding the constant zero function as a solution of the differential equation (2): not an incorrect solution, but utterly boring.
This is where $c_0$ and $c_1$ come into play: these are the initial conditions for the integrators. Let us assume that $c_0=1$ and $c_1=0$, ie the leftmost integrator starts with the value $1$ at its output, which feeds into the second integrator, which in turn feeds the sign changing summer, which then feeds the first integrator. This will result in a cosine signal at the output of the first integrator and a minus sine function at the output of the second one, perfectly matching the analytic solution of (2). Such initial conditions are normally shown as being fed into the top of the rectangular part of an integrator symbol, but we have omitted this in our diagrams.
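If you would like to see this feedback trick in action without any hardware, the following is a minimal Python sketch (our addition, not from the original article) that steps the two-integrator loop in discrete time, ignoring the implicit sign changes that the real computing elements perform:

import numpy as np

# Discrete-time emulation of Kelvin's feedback loop for y'' = -y.
# Two integrators in series; the loop is closed by feeding -y back
# into the first one.
dt = 1e-3
steps = 10_000
ydot, y = 1.0, 0.0           # initial conditions c0 = 1, c1 = 0
out = np.empty((steps, 2))
for i in range(steps):
    out[i] = ydot, y         # ydot tracks cos(t), y tracks sin(t)
    ydot += -y * dt          # first integrator, fed with -y from the loop
    y += ydot * dt           # second integrator

With these initial conditions the loop locks onto the sine/cosine pair, matching the analytic solution of (2).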
Setup for the predator-prey simulation.
So if we have some computing elements, we have seen that we can arrange them to create an abstract model of a differential equation, giving us some form of specialised computer: an analogue computer! The implementation of these computing elements could be done in different ways: time integration, for example, could be done by using the integrand to control the flow of water into a bottle, or to charge a capacitor, or we could build some other intricate mechanical system. Some of the most important observations to make are the following:
Analogue computers are programmed not in an algorithmic fashion but by actually interconnecting their individual computing elements in a suitable way. Thus they do not need any program memory; in fact, there is no "memory" in the traditional sense at all.
What makes an analogue computer "analogue" is the fact that it is set up to be an analogy of some problem readily described by differential equations or systems of them. Even digital circuits qualify as analogue computers and are known as Digital Differential Analysers (DDA).
Programming an analogue computer is quite simple (although there are some pitfalls that are beyond the scope of this article). One just pretends that the highest derivative in an equation is known and generates all the other terms from this highest derivative by applying integration, summation, multiplication, etc until the right-hand side of the equation being studied is obtained, with the result then fed into the first integrator.
As a remark it should be noted that Kelvin's feedback technique, as it is known, can also be applied to traditional stored-program digital computers.
Examples of analogue computers
Analogue computers were the workhorses of computing from the 1940s to the mid-1980s when they were finally superseded by cheap and (somewhat) powerful stored-program digital computers. Thus without them, the incredible advances in aviation, space flight, engineering and industrial processes after the Second World War would have been impossible. A typical analogue computer of the 1960s was the Telefunken RA 770, shown below.
The Telefunken RA 770 analogue computer.
The most prominent feature of such a machine is the patch field, which is on the far right of the picture above. Here all of the inputs and outputs of the literally hundreds of individual computing elements are brought together. Using (shielded) patch cords, these computing elements are connected to each other, setting up the desired model. In the middle are the manual controls (start/stop a computation, set parameter values, etc) and an oscilloscope to display the results as curves. On the upper far left is a digital extension that allows us to set up things like iterative operations, where one part of the computer generates initial conditions for another part. Below left are eight function generators, which can be manually set to generate rather arbitrary functions by a polygonal approximation.
Let us now look at a somewhat more complex programming example: the investigation of a predator-prey model as described by Alfred James Lotka in 1925 and then Vito Volterra in 1926. This consists of a closed ecosystem with only two species, foxes and rabbits, and an unlimited food supply for the rabbits. Rabbits are removed from the system by being eaten by the foxes—without this mechanism their population would just grow exponentially. Foxes, on the other hand, need rabbits for food, or they would die of starvation. This system can be modelled by two coupled differential equations with $r$ and $f$ denoting the number of rabbits and foxes respectively:
\begin{align}
\dot{r}&=\alpha_1r-\alpha_2rf\tag{3}\\
\dot{f}&=-\beta_1f+\beta_2rf\tag{4}
\end{align}
The change in the rabbit population, $\dot{r}$, involves the fertility rate $\alpha_1$ and the amount of rabbits that are killed by foxes, denoted by $\alpha_2rf$. The change in the fox population, $\dot{f}$, looks quite similar but with different signs. While the rabbit population would grow in the absence of predators due to the unlimited food supply, the fox population would die out when there are no rabbits and thus no food, hence the term
$-\beta_1f$. The second term, $\beta_2 r f$, describes the increase in the fox population due to rabbits being caught for food.
The left panel computes $-r$, the right computes $-f$.
Equations (3) and (4) can now easily be set up on an analogue computer by creating two circuits, as shown in the diagrams above. The circuit for (3) has two inputs: an initial condition $r_0$ representing the initial size of the rabbit population, and the value $r f$ which is not yet available. The second circuit looks similar with an initial fox population of $f_0$ (please keep in mind that integrators and summers both perform a change of sign that can be used to simplify the circuits a bit, thus saving us from having to use two summers).
All that is necessary now is a multiplier to generate $r f$ from the outputs $-r$ and $-f$ of these two circuits. This product is then fed back into the circuits, thereby creating the feedback loop of this simple ecosystem. The setup of this circuit on a classical desktop analogue computer weighs in at 105 kg and requires quite a stable desk!
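If you would like to play with equations (3) and (4) without 105 kg of hardware, here is a minimal forward-Euler sketch in Python (our addition; the rate constants and initial populations below are illustrative and not taken from the article):

import numpy as np

# Forward-Euler integration of the Lotka-Volterra system (3)-(4).
a1, a2, b1, b2 = 1.0, 0.1, 1.5, 0.075   # illustrative rate constants
dt, steps = 1e-3, 50_000
r, f = 10.0, 5.0                        # initial populations r0, f0
history = np.empty((steps, 2))
for k in range(steps):
    history[k] = r, f
    dr = (a1 * r - a2 * r * f) * dt     # rabbits: births minus predation
    df = (-b1 * f + b2 * r * f) * dt    # foxes: starvation plus predation gain
    r, f = r + dr, f + df
# history[:, 0] (rabbits) and history[:, 1] (foxes) oscillate out of phase,
# as in the oscilloscope traces shown below.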
Results of the predator-prey simulation. Prey are on the top and predators on the bottom.
One of the most fascinating properties of an analogue computer is its extremely high degree of interactivity: one can change values just by turning the dial of a potentiometer while a simulation is running and the effects are instantaneously visible. It is not only easy to get a "feeling" for the properties of some differential equations, it is also incredibly addictive, as the following quote from John H McLeod and Suzette McLeod shows:
"An analogue computer is a thing of beauty and a joy forever."
Analogue computers in the future
After these two simple examples, a question arises: "What does the future hold for analogue computers? Aren't they beasts of the past?" Far from it! Even—and especially—today there is a plethora of applications for analogue computers where their particular strengths can be of great benefit. For example, electronic analogue computers yield more instructions per second per watt than most other devices and hence are ideally suited for low power applications, such as in medicine. They also offer an extremely high degree of parallelisation, with all of the computing elements working in parallel with no need for explicit synchronisation or critical code sections. The speed at which computations are run can be changed by changing the capacitance of the capacitors that perform the integration (indeed, many classical analogue computers even had a button labelled "$10\times$", which switched all integration capacitors to a second set that had a tenth of the original capacity, yielding a computation speed that was ten times higher). On top of this, and especially important today, they are more or less impossible to hack as they have no stored programs.
A modern incarnation of an analogue computer still under development is shown in the header of the article. In contrast to historic machines it is highly modular and can be expanded from a minimal system with two chassis to several racks full of computing elements.
When Lord Kelvin first came up with analogue computing, little did he know the incredible amount of progress in science and technology that his idea would make possible, nor the longevity of his idea even today in an era of supercomputers and vast numerical computations.
Bernd Ulmann
Bernd Ulmann is professor for Business Informatics at the FOM University of Applied Sciences for Economics and Management in Frankfurt-am-Main, Germany. His primary interest is analogue computing in the 21st century. If you would like to know more about analogue computing, visit analogmuseum.org, and have fun with differential equations.
====== Symplectic Structure ======

<tabbox Intuitive>

<blockquote>Our everyday world is ruled by Euclidean geometry (and by its extension, Riemannian geometry); we can measure distances in it, and velocities. Far away from our daily experience, and much more subtle, is the mechanical phase space world, in which live all the phenomena related to the simultaneous consideration of position and variation of position; a deep understanding of this world requires recourse to a somewhat counter-intuitive geometry, the symplectic geometry of Hamiltonian mechanics. Symplectic geometry is highly counter-intuitive; the notion of length does not make sense there, while the notion of area does. This "areal" nature of symplectic geometry, which was not realized until very recently, has led to unexpected mathematical developments, starting in the mid 1980's with Gromov's discovery of a "non-squeezing" phenomenon which is reminiscent of the quantum uncertainty principle—but in a totally classical setting! <cite>[[https://www.univie.ac.at/nuhag-php/bibtex/open_files/7041_PhysRepSubmissionGossonLuef.pdf|Symplectic Capacities and the Geometry of Uncertainty]] by Maurice de Gosson et al.</cite></blockquote>

  * [[https://www.quantamagazine.org/the-fight-to-fix-symplectic-geometry-20170209?utm_content=buffer1e5eb&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer|A Fight to Fix Geometry's Foundations]] by Kevin Hartnett

<tabbox Concrete>

<blockquote>A simple way of putting it is that a two-form is a way of measuring area in multivariable calculus. I believe the significance for physics boils down to the following: it turns out that a two-form is precisely what is required to translate an energy functional on phase space (a Hamiltonian) into a flow (a vector field). [See Wikipedia for how the translation goes, or read Arnold's book Mathematical Methods of Classical Mechanics, or a similar reference.] The flow describes time evolution of the system; the equations which define it are Hamilton's equations. One property these flows have is that they preserve the symplectic form; this is just a formal consequence of the recipe for going from Hamiltonian to flow using the form. So, having contemplated momentum, here we find ourselves able to describe how systems evolve using the phase space T*M, where not only is there an extremely natural extra structure (the canonical symplectic form), but also that structure happens to be preserved by the physical evolution of the system. That's pretty nice! Even better, this is a good way of expressing conservation laws. When physical evolution preserves something, that's a conservation law. So in some sense, "conservation of symplectic form" is the second most basic conservation law. (The most basic is conservation of energy, which is essentially the definition of the Hamiltonian flow.) You can use conservation of symplectic form to prove the existence of other conserved quantities when your system is invariant under symmetries (this is Noether's theorem, which can also be proved in other ways, I think, but they probably boil down to the same argument ultimately). <cite>http://qr.ae/TUTIn9</cite></blockquote>

----

  * [[http://math.mit.edu/~cohn/Thoughts/symplectic.html|Why symplectic geometry is the natural setting for classical mechanics]] by Henry Cohn
  * Chapter "[[https://link.springer.com/chapter/10.1007%2F978-1-4612-0189-2_3#page-1|Phase Spaces of Mechanical Systems are Symplectic Manifolds]]" in the book Symmetry in Mechanics by Stephanie Frank Singer
  * See also Chapter 1 in Principles of Newtonian and Quantum Mechanics by Gosson

<tabbox Abstract>

<blockquote>"The symplectic geometry arises from the understanding of the fact that the transformations of the phase flows of the dynamical systems of classical mechanics and of variational calculus (and hence also of optimal control theory) belong to a narrower class of diffeomorphisms of the phase space than the incompressible ones. Namely, they preserve the so-called symplectic structure of the phase space—a closed nondegenerate differential two-form. This form can be integrated along two-dimensional surfaces in the phase space. The integral, which is called the Poincaré integral invariant, is preserved by the phase flows of Hamilton dynamical systems. The diffeomorphisms preserving the symplectic structure—they are called symplectomorphisms—form a group and have peculiar geometrical and topological properties. For instance, they preserve the natural volume element of the phase space (the exterior power of the symplectic structure 2-form) and hence cannot have attractors." <cite>[[http://aip.scitation.org/doi/pdf/10.1063/1.533315|Symplectic geometry and topology]] by V. I. Arnold</cite></blockquote>

--> What's the relation to the symplectic groups?#

  * See http://math.ucr.edu/home/baez/symplectic.html

<--

<tabbox Why is it interesting?>

<blockquote>As each skylark must display its comb, so every branch of mathematics must finally display symplectization. In mathematics there exist operations on different levels: functions acting on numbers, operators acting on functions, functors acting on operators, and so on. Symplectization belongs to the small set of highest level operations, acting not on details (functions, operators, functors), but on all the mathematics at once. <cite>Catastrophe Theory, by V. Arnold</cite></blockquote>

<blockquote>The word symplectic was coined by Hermann Weyl in his famous treatise The Classical Groups [...] Weyl devoted very little space to the symplectic group; it was then a rather baffling oddity which presumably existed for some purpose, though it was not clear what. **Now we know: the purpose is dynamics.** In ordinary euclidean geometry the central concept is distance. To capture the notion of distance algebraically we use the inner (or scalar) product $x.y$ of two vectors $x$ and $y$. [...] All the basic concepts of euclidean geometry can be obtained from the inner product. [...] The inner product is a bilinear form - the terms look like $x_i y_j$. Replacing it with other bilinear forms creates new kinds of geometry. Symplectic geometry corresponds to the form $x_1 y_2 - x_2 y_1$, which is the area of the parallelogram formed by the vectors $x$ and $y$. [...] The symplectic form provides the plane with a new kind of geometry, in which every vector has length zero and is at right angles to itself. [...] Can such bizarre geometries be of practical relevance? Indeed they can: they are the geometries of classical mechanics. In Hamilton's formalism, mechanical systems are described by the position coordinates $q_1,\ldots,q_n$, momentum coordinates $p_1,\ldots,p_n$, and a function $H$ of these coordinates (nowadays called the hamiltonian) which can be thought of as the total energy. Newton's equations of motion take the elegant form $dq/dt=\partial H/\partial p$, $dp/dt= -\partial H/\partial q$. When solving Hamilton's equations it is often useful to change coordinates. But if the position coordinates are transformed in some way, then the corresponding momenta must be transformed consistently. Pursuing this idea, it turns out that such transformations have to be the symplectic analogies of rigid euclidean motions. **The natural coordinate changes in dynamics are symplectic.** This is a consequence of the asymmetry in Hamilton's equations, whereby $dq/dt$ is plus $\partial H/\partial p$, but $dp/dt$ is minus $\partial H/\partial q$, that minus sign again. <cite>https://www.nature.com/nature/journal/v329/n6134/pdf/329017a0.pdf</cite></blockquote>

<blockquote>I've tried to show you that the symplectic structure on the phase spaces of classical mechanics, and the lesser-known but utterly analogous one on the phase spaces of thermodynamics, is a natural outgrowth of utterly trivial reflections on the process of minimizing or maximizing a function S on a manifold Q. The first derivative test tells us to look for points with $$d S = 0$$ while the commutativity of partial derivatives says that $$d^2 S = 0$$ everywhere—and this gives Hamilton's equations and the [[formulas:maxwell_relations|Maxwell relations]]. <cite>https://johncarlosbaez.wordpress.com/2012/01/23/classical-mechanics-versus-thermodynamics-part-2/</cite></blockquote>

<blockquote>Hamilton's equations push us toward the viewpoint where $p$ and $q$ have equal status as coordinates on the phase space $X$. Soon, we'll drop the requirement that $X\subseteq T^\ast Q$ where $Q$ is a configuration space. $X$ will just be a manifold equipped with enough structure to write down Hamilton's equations starting from any $H \colon X\rightarrow\mathbb{R}$. The coordinate-free description of this structure is the major 20th century contribution to mechanics: a symplectic structure. This is important. You might have some particles moving on a manifold like $S^3$, which is not symplectic. So the Hamiltonian mechanics point of view says that the abstract manifold that you are really interested in is something different: it must be a symplectic manifold. That's the phase space $X$. <cite>[[http://math.ucr.edu/home/baez/classical/texfiles/2005/book/classical.pdf|Lectures on Classical Mechanics]] by J. Baez</cite></blockquote>

<blockquote>The mathematical structure underlying both classical and quantum dynamical behaviour arises from symplectic geometry. It turns out that, in the quantum case, the symplectic geometry is non-commutative, while in the classical case, it is commutative. <cite>https://arxiv.org/pdf/1602.06071.pdf</cite></blockquote>

**Further Reading:**

  * [[http://www.pims.math.ca/~gotay/Symplectization(E).pdf|THE SYMPLECTIZATION OF SCIENCE]] by Mark J. Gotay et al.

</tabbox>
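A tiny numerical illustration of "conservation of symplectic form" (an addition to this page, not part of the quoted sources): for the harmonic oscillator Hamiltonian $H=(p^2+q^2)/2$, the symplectic Euler update preserves area in the $(q,p)$ plane exactly (its Jacobian determinant is 1), which is why its energy stays bounded where plain forward Euler spirals outward.

<code python>
# Minimal sketch: symplectic Euler for H = (p^2 + q^2)/2.
# Each step is an area-preserving map of the (q, p) phase plane.
q, p, dt = 1.0, 0.0, 0.01
for _ in range(100_000):
    p -= q * dt    # dp/dt = -dH/dq
    q += p * dt    # dq/dt =  dH/dp, using the updated p
# q*q + p*p remains close to its initial value for all time,
# whereas plain forward Euler would make it grow without bound.
</code>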
EURASIP Journal on Advances in Signal Processing
December 2018, 2018:5
Adaptive independent sticky MCMC algorithms
Luca Martino
Roberto Casarin
Fabrizio Leisen
David Luengo
First Online: 11 January 2018
Part of the following topical collections:
Advanced Computational Methods for Bayesian Signal Processing
Monte Carlo methods have become essential tools to solve complex Bayesian inference problems in different fields, such as computational statistics, machine learning, and statistical signal processing. In this work, we introduce a novel class of adaptive Monte Carlo methods, called adaptive independent sticky Markov Chain Monte Carlo (MCMC) algorithms, to sample efficiently from any bounded target probability density function (pdf). The new class of algorithms employs adaptive non-parametric proposal densities, which become closer and closer to the target as the number of iterations increases. The proposal pdf is built using interpolation procedures based on a set of support points which is constructed iteratively from previously drawn samples. The algorithm's efficiency is ensured by a test that supervises the evolution of the set of support points. This extra stage controls the computational cost and the convergence of the proposal density to the target. Each part of the novel family of algorithms is discussed and several examples of specific methods are provided. Although the novel algorithms are presented for univariate target densities, we show how they can be easily extended to the multivariate context by embedding them within a Gibbs-type sampler or the hit and run algorithm. The ergodicity is ensured and discussed. An overview of the related works in the literature is also provided, emphasizing that several well-known existing methods (like the adaptive rejection Metropolis sampling (ARMS) scheme) are encompassed by the new class of algorithms proposed here. Eight numerical examples (including the inference of the hyper-parameters of Gaussian processes, widely used in machine learning for signal processing applications) illustrate the efficiency of sticky schemes, both as stand-alone methods to sample from complicated one-dimensional pdfs and within Gibbs samplers in order to draw from multi-dimensional target distributions.
Bayesian inference · Monte Carlo methods · Adaptive Markov chain Monte Carlo (MCMC) · Adaptive rejection Metropolis sampling (ARMS) · Gibbs sampling · Metropolis-within-Gibbs · Hit and run algorithm
Markov chain Monte Carlo (MCMC) methods [1, 2] are very important tools for Bayesian inference and numerical approximation, which are widely employed in signal processing [3, 4, 5, 6, 7] and other related fields [1, 8]. A crucial issue in MCMC is the choice of a proposal probability density function (pdf), as this can strongly affect the mixing of the MCMC chain when the target pdf has a complex structure, e.g., multimodality and/or heavy tails. Thus, in the last decade, a remarkable stream of literature focuses on adaptive proposal pdfs, which allow for self-tuning procedures of the MCMC algorithms, flexible movements within the state space, and improved acceptance rates [9, 10].
Adaptive MCMC algorithms are used in many statistical applications and several schemes have been proposed in the literature [8, 9, 10, 11]. There are two main families of methods: parametric and non-parametric. The first strategy consists in adapting the parameters of a parametric proposal pdf according to the past values of the chain [10]. However, even if the parameters are perfectly adapted, a discrepancy between the target and the proposal pdfs remains. A second strategy attempts to adapt the entire shape of the proposal density using non-parametric procedures [12, 13]. Most authors have paid more attention to the first family, designing local adaptive random-walk algorithms [9, 10], due to the difficulty of approximating the full target distribution by non-parametric schemes with any degree of generality.
In this work, we describe a general framework to design suitable adaptive MCMC algorithms with non-parametric proposal densities. After describing the different building blocks and the general features of the novel class, we introduce two specific algorithms. Firstly, we describe the adaptive independent sticky Metropolis (AISM) algorithm to draw efficiently from any bounded univariate target distribution. Then, we also propose a more efficient scheme that is based on the multiple try Metropolis (MTM) algorithm: the adaptive independent sticky Multiple Try Metropolis (AISMTM) method. The ergodicity of the adaptive sticky MCMC methods is ensured and discussed. The underlying theoretical support is based on the approach introduced in [14]. The new schemes are particularly suitable for sampling from complicated full-conditional pdfs within a Gibbs sampler [5, 6, 7].
Moreover, the new class of methods encompasses different well-known algorithms available in literature: the griddy Gibbs sampler [15], the adaptive rejection Metropolis sampler (ARMS) [12, 16], and the independent doubly adaptive Metropolis sampler (IA2RMS) [13, 17]. Other related or similar approaches are also discussed in Section 6. The main contributions of this paper are the following:
A very general framework that allows practitioners to design proper adaptive MCMC methods by employing a non-parametric proposal.
Two algorithms (AISM and AISMTM) that can be used off-the-shelf in signal processing applications.
An exhaustive overview of the related algorithms proposed in the literature, showing that several well-known methods (such as ARMS) are encompassed by the proposed framework.
A theoretical analysis of the AISM algorithm, proving its ergodicity and the convergence of the adaptive proposal to the target.
The structure of the paper is the following. Section 2 introduces the generalities of the class of sticky MCMC methods and the AISM scheme. Sections 3 and 4 present the general properties, altogether with specific examples, of the proposal constructions and the update control tests. Section 5 introduces some theoretical results. Section 6 discusses several related works and highlights some specific techniques belonging to the class of sticky methods. Section 7 introduces the AISMTM method. Section 8 describes the range of applicability of the proposed framework, including its use within other Monte Carlo methods (like the Gibbs sampler or the hit and run algorithm) to sample from multivariate distributions. Eight numerical examples (including the inference of hyper-parameters of Gaussian processes) are then provided in Section 9. Finally, Section 10 contains some conclusions and possible future lines.
2 Adaptive independent sticky MCMC algorithms
Let \(\widetilde {\pi }(x) \propto \pi (x)>0\), with \(x\in \mathcal {X}\subseteq \mathbb {R}\), be a bounded target density known up to a normalizing constant, \(c_{\pi }=\int _{\mathcal {X}} \pi (x) dx\), from which direct sampling is unfeasible. In order to draw from it, we employ an MCMC algorithm with an independent adaptive proposal,
$$ \widetilde{q}_{t}(x|\mathcal{S}_{t}) \propto q_{t}(x|\mathcal{S}_{t})>0, \quad x\in \mathcal{X}, $$
where \(t\) is the iteration index of the corresponding MCMC algorithm, and \(\mathcal{S}_{t}=\{s_{1},\ldots,s_{m_{t}}\}\) with \(m_t>0\) is the set of support points used for building \(\widetilde{q}_{t}\). At the \(t\)-th iteration, an adaptive independent sticky MCMC method is conceptually formed by three stages (see Fig. 1):
Construction of the non-parametric proposal: given the nodes in \(\mathcal{S}_{t}\), the function \(q_t\) is built using a suitable non-parametric procedure that provides a function which is closer and closer to the target as the number of points \(m_t\) increases. Section 3 describes the general properties that must be fulfilled by a suitable proposal construction, as well as specific procedures to build this proposal.
MCMC stage: some MCMC method is applied, employing \(\widetilde{q}_{t}(x|\mathcal{S}_{t})\) as (part of the) proposal pdf. This stage produces the next state of the chain, \(x_{t+1}\), and an auxiliary variable \(z\) (see Tables 1 and 4), used in the following update stage.
Update stage: a statistical test on the auxiliary variable \(z\) is performed in order to decide whether to increase the number of points in \(\mathcal{S}_{t}\) or not, defining a new support set, \(\mathcal{S}_{t+1}\), which is used to construct the proposal at the next iteration. The update stage controls the computational cost and ensures the ergodicity of the generated chain (see Appendix A). Section 4 is devoted to the design of different suitable update rules.
Adaptive independent sticky Metropolis (AISM)
Graphical representation of a generic adaptive independent sticky MCMC algorithm
In the following section, we describe the simplest possible sticky method, obtained by using the MH algorithm, whereas in Section 7 we consider a more sophisticated technique that employs the MTM scheme.
2.1 Adaptive independent sticky Metropolis
The simplest adaptive independent sticky method is the adaptive independent sticky Metropolis (AISM) technique, outlined in Table 1. In this case, the proposal pdf \(\widetilde {q}_{t}(x|\mathcal {S}_{t})\) changes along the iterations (see step 1 of Table 1) following an adaptation scheme that relies upon a suitable interpolation given the set of support points \(\mathcal {S}_{t}\) (see Section 3). Step 3 of Table 1 applies a statistical control to update the set \(\mathcal {S}_{t}\). The point z, rejected at the current iteration of the algorithm in the MH test, is added to \(\mathcal {S}_{t}\) with probability
$$ P_{a}(z)=\eta_{t}(z,d_{t}(z)), $$
where \(\eta_{t}(z,d): \mathcal{X}\times \mathbb{R}^{+}\rightarrow [0,1]\) is an increasing test function w.r.t. the variable \(d\), such that \(\eta_t(z,0)=0\), and \(d=d_{t}(z)=\left|\pi(z)-q_{t}(z|\mathcal{S}_{t})\right|\) is the point distance between \(\pi\) and \(q_t\) at \(z\). The rationale behind this test is to use information from the target density in order to include in the support set only those points where the proposal pdf differs substantially from the target value at \(z\). Note that, since \(z\) is always different from the current state (i.e., \(z\neq x_t\) for all \(t\)), the proposal pdf is independent of the current state according to Holden's definition [14], and thus the theoretical analysis is greatly simplified.
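To make the overall flow concrete, the following is a minimal, illustrative Python sketch of the AISM loop (a simplification of Table 1, not the paper's reference implementation). It employs the PWC construction of Eq. (3) in Section 3 without the exponential tails, so the mass of the target is assumed to be essentially contained between the extreme support points, and the update test of Eq. (5) in Section 4; all function names and parameter values are ours.

import numpy as np

def build_pwc(S, pi):
    # Piecewise-constant proposal of Eq. (3) on [s_1, s_m]: sorted edges,
    # unnormalized heights per interval, and interval probabilities.
    S = np.sort(np.asarray(S))
    heights = np.maximum(pi(S[:-1]), pi(S[1:]))
    areas = heights * np.diff(S)
    return S, heights, areas / areas.sum()

def eval_pwc(x, S, heights):
    # Unnormalized proposal value at x (x assumed inside [s_1, s_m]).
    i = np.searchsorted(S, x, side="right") - 1
    return heights[np.clip(i, 0, len(heights) - 1)]

def aism(pi, S0, T=5000, beta=5.0, seed=0):
    rng = np.random.default_rng(seed)
    S = list(S0)
    x = float(np.mean(S0))                        # arbitrary initial state
    chain = np.empty(T)
    for t in range(T):
        edges, h, probs = build_pwc(S, pi)
        i = rng.choice(len(probs), p=probs)       # draw x' ~ q_t
        xp = rng.uniform(edges[i], edges[i + 1])
        # MH acceptance for an independent proposal (normalization cancels)
        alpha = min(1.0, pi(xp) * eval_pwc(x, edges, h) / (pi(x) * h[i]))
        if rng.uniform() <= alpha:
            x, z = xp, x                          # accept; z = abandoned state
        else:
            z = xp                                # reject; z = candidate
        d = abs(pi(z) - eval_pwc(z, edges, h))
        if rng.uniform() <= 1.0 - np.exp(-beta * d):   # update test, Eq. (5)
            S.append(z)
        chain[t] = x
    return chain, np.sort(S)

# Example: bimodal target, initial support points bracketing both modes.
target = lambda x: np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-2.0 * (x + 2.0) ** 2)
samples, support = aism(target, S0=[-8.0, -2.0, 0.0, 2.0, 8.0])

The essential AISM ingredient is visible in the loop: the auxiliary point \(z\) (the rejected candidate, or the abandoned previous state) feeds the update test, so that support points accumulate precisely where \(q_t\) still differs from \(\pi\).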
3 Construction of the sticky proposals
There are many alternatives available for the construction of a suitable sticky proposal (SP). However, in order to be able to provide some theoretical results in Section 5, let us define precisely what we understand here by a sticky proposal.
(Valid Adaptive Proposal) Let us consider a target density, \(\widetilde{\pi}(x)\propto \pi(x)>0\) for any \(x \in \mathcal{X} \subseteq \mathbb{R}\) (the target's support), and a set of \(m_{t}=|\mathcal{S}_{t}|\) support points, \(\mathcal{S}_{t}=\{s_{1},\ldots,s_{m_{t}}\}\) with \(s_{i}\in \mathcal{X}\) for all \(i=1,\ldots,m_t\). An adaptive proposal built using \(\mathcal{S}_{t}\) via some non-parametric interpolation approach is considered valid if the following four properties are satisfied:
The proposal function is positive, i.e., \(q_{t}(x|\mathcal {S}_{t})>0\) for all \(x\in \mathcal {X}\) and for all possible sets \(\mathcal {S}_{t}\) with \(t\in \mathbb {N}\).
Samples can be drawn directly and easily from the resulting proposal, \(\widetilde {q}_{t}(x|\mathcal {S}_{t})\propto q_{t}(x|\mathcal {S}_{t})\), using some exact sampling procedure.
For any bounded target, π(x), the resulting proposal function, \(q_{t}(x|\mathcal {S}_{t})\), is also bounded. Furthermore, defining \(\mathcal {I}_{t} = (s_{1},s_{m_{t}}]\), we have
$$ \max_{x \in \mathcal{I}_{t}} q_{t}(x|\mathcal{S}_{t}) \le \max_{x \in \mathcal{I}_{t}} \pi(x). $$
The proposal function, \(q_{t}(x|\mathcal {S}_{t})\), has heavier tails than the target, i.e., defining \(\mathcal {I}_{t}^{c} = (-\infty,s_{1}] \cup (s_{m_{t}},\infty)\), we have
$$ q_{t}(x|\mathcal{S}_{t}) \ge \pi(x) \qquad \forall x \in \mathcal{I}_{t}^{c}. $$
Condition 1 guarantees that the function \(q_{t}(x|\mathcal {S}_{t})\) leads to a valid pdf, \(\widetilde {q}_{t}(x|\mathcal {S}_{t})\), that covers the entire support of the target.
Condition 2 is required from a practical point of view to obtain efficient algorithms. Finally, conditions 3 and 4 are required by the proofs of Theorems 3 and 1, respectively, and also make sense from a practical point of view: if the target is bounded, we would expect the proposal learnt from it to be also bounded and this proposal should be heavier tailed than the target in order to avoid under-sampling the tails. Now we can define precisely what we understand by a "sticky" proposal.
(Sticky Proposal (SP)) Let us consider a valid proposal pdf according to Definition 1. Let us assume also that the i-th support point is distributed according to \(p_i(x)\) (i.e., \(s_i \sim p_i(x)\)) such that \(p_i(x)>0\) for any \(x \in \mathcal{X}\) and \(i=1,\ldots,m_t\). Then, a sticky proposal is any valid proposal pdf s.t. the L1 distance between \(q_t(x)\) and \(\pi(x)\) vanishes to zero when the number of support points increases, i.e., if \(m_t \to \infty\),
$$\begin{array}{*{20}l} D_{1}(\pi,q_{t}) = \|\pi- q_{t} \|_{1} & = \int_{\mathcal{X}}|\pi(z)-q_{t}(z|\mathcal{S}_{t})|dz \\ & = \int_{\mathcal{X}} d_{t}(z) dz \to 0, \end{array} $$
where \(d_{t}(z) = |\pi(z)-q_{t}(z|\mathcal{S}_{t})|\) is the L1 distance between \(\pi(x)\) and \(q_t(x)\) evaluated at \(z \in \mathcal{X}\), and (2) implies almost everywhere (a.k.a., almost surely) convergence of \(q_t(x)\) to \(\pi(x)\).
In the following, we provide some examples of constructions that fulfill all the conditions in Definitions 1 and 2. All of them approximate the target pdf by interpolating points that belong to the graph of the target function π.
3.1 Examples of constructions
Given \(\mathcal{S}_{t}=\{s_{1},\ldots,s_{m_{t}}\}\) at the t-th iteration, let us define a sequence of \(m_t+1\) intervals: \(\mathcal{I}_{0}=(-\infty,s_{1}]\), \(\mathcal{I}_{j}=(s_{j},s_{j+1}]\) for \(j=1,\ldots,m_{t}-1\), and \(\mathcal{I}_{m_{t}}=(s_{m_{t}},+\infty)\). The simplest possible procedure uses piecewise constant (uniform) pieces in \(\mathcal{I}_{i}\), \(1\le i\le m_t-1\), with two exponential tails in the first and last intervals [13, 15, 18]. Mathematically,
$$\begin{array}{*{20}l} q_{t}(x|\mathcal{S}_{t})=\left\{ \begin{array}{lll} \!\!E_{0}(x), & x \in \mathcal{I}_{0},\\ \!\!\max\left\{\pi(s_{i}),\pi(s_{i+1}) \right\}, & x \in \mathcal{I}_{i},\\ \!\!E_{m_{t}}(x), & x \in \mathcal{I}_{m_{t}}, \end{array}\right. \end{array} $$
where \(1\le i\le m_t-1\) and \(E_{0}(x)\), \(E_{m_{t}}(x)\) represent two exponential pieces. These two exponential tails can be obtained simply by constructing two straight lines in the log-domain, as shown in [12, 13, 19]. For instance, defining \(V(x)=\log[\pi(x)]\), we can build the straight line \(w_{0}(x)\) passing through the points \((s_{1},V(s_{1}))\) and \((s_{2},V(s_{2}))\), and the straight line \(w_{m_{t}}(x)\) passing through the points \((s_{m_{t}-1},V(s_{m_{t}-1}))\) and \((s_{m_{t}},V(s_{m_{t}}))\). Hence, the proposal function is defined as \(E_{0}(x)=\exp(w_{0}(x))\) for \(x\in \mathcal{I}_{0}\) and \(E_{m_{t}}(x)=\exp\left(w_{m_{t}}(x)\right)\) for \(x\in \mathcal{I}_{m_{t}}\). Other kinds of tails can be built, e.g., using Pareto pieces as shown in Appendix E.2 (Heavy tails). Alternatively, we can use piecewise linear pieces [20]. The basic idea is to build straight lines, \(L_{i,i+1}(x)\), passing through the points \((s_{i},\pi(s_{i}))\) and \((s_{i+1},\pi(s_{i+1}))\) for \(i=1,\ldots,m_{t}-1\), and two exponential pieces, \(E_{0}(x)\) and \(E_{m_{t}}(x)\), for the tails:
$$\begin{array}{*{20}l} q_{t}(x|\mathcal{S}_{t})=\left\{ \begin{array}{lll} \!\!E_{0}(x), & \quad x \in \mathcal{I}_{0},\\ \!\!L_{i,i+1}(x), & \quad x \in \mathcal{I}_{i}, \\ \!\!E_{m_{t}}(x), & \quad x \in \mathcal{I}_{m_{t}},\\ \end{array}\right. \end{array} $$
with \(i=1,\ldots,m_{t}-1\). Note that drawing samples from these trapezoidal pdfs inside \(\mathcal{I}_{i}=(s_{i},s_{i+1}]\) is straightforward [20, 21]. Figure 2 shows examples of the construction of \(q_{t}(x|\mathcal{S}_{t})\) using Eq. (3) or (4) with different numbers of points, \(m_t=6,8,9,11\). See Appendix A for further considerations.
Examples of the proposal construction \(q_t\) considering a bimodal target \(\pi\), using the procedures described in Eq. (3) for a–d and in Eq. (4) for e–h, with \(m_t=6,8,9,11\) support points, respectively
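As an illustration of the statement above that sampling from the trapezoidal pieces of Eq. (4) is straightforward, the following Python sketch draws one sample from a single piece by inverting its cdf (the helper name and interface are ours, not taken from [20, 21]):

import numpy as np

def sample_trapezoid(a, b, ha, hb, rng):
    # One sample from the linear density of Eq. (4) on (a, b], where
    # ha = pi(a) and hb = pi(b) are the endpoint heights.
    u = rng.uniform()
    w = b - a
    if np.isclose(ha, hb):
        return a + u * w                   # degenerate case: uniform piece
    area = 0.5 * (ha + hb) * w             # total (unnormalized) mass
    c = (hb - ha) / w                      # slope of the density
    # Invert CDF(t) = ha*t + 0.5*c*t^2 = u*area for t = x - a (positive root)
    t = (-ha + np.sqrt(ha * ha + 2.0 * c * u * area)) / c
    return a + t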
A more sophisticated and costly construction has been proposed for the ARMS method in [12]. However, note that this construction does not fulfill Condition 3 in Definition 1. A similar construction based on B-spline interpolation methods has been proposed in [22, 23] to build a non-adaptive random walk proposal pdf for an MH algorithm. Other alternative procedures can also be found in the literature [13, 16, 18, 19, 20].
4 Update of the set of support points
In AISM, a suitable choice of the function \(\eta_t(z,d)\) is required. Although more general functions could be employed, we concentrate on test functions that fulfill the conditions provided in the following definition.
(Test Function) Let us denote the L1 distance between the target and the proposal at the t-th iteration, for any \(z \in \mathcal{X}\), as \(d=d_t(z)=|\pi(z)-q_t(z)|\). A valid test function, \(\eta_t(z,d)\), is any function that fulfills all of the following properties:
\(\eta _{t}(z,d): \mathcal {X}\times \mathbb {R}^{+}\rightarrow [0,1]\).
\(\eta_t(z,0)=0\) for all \(z\in \mathcal{X}\) and \(t\in \mathbb{N}\).
\(\lim \limits _{d\rightarrow \infty }\eta _{t}(z,d)=1\) for all \(z\in \mathcal {X}\) and \(t\in \mathbb {N}\).
\(\eta_t(z,d)\) is a strictly increasing function w.r.t. \(d\), i.e., \(\eta_t(z,d_2)>\eta_t(z,d_1)\) for any \(d_2>d_1\).
The first condition ensures that we obtain a valid probability for the addition of new support points, \(P_a(z)=\eta_t(z,d)\), whereas the remaining three conditions imply that support points are more likely to be added in those areas where the proposal is further away from the target, with a non-null probability of adding new points in places where \(d>0\). In particular, Condition 4 is required by several theoretical results provided in the Appendix. However, update rules that do not fulfill this condition can also be useful, as discussed in the following. Figure 3 depicts an example of the function \(\eta_t\) when \(\eta_t(z,d)=\eta_t(d)\). Note that, for a given value of \(z\), \(\eta_t\) satisfies all the properties of a cumulative distribution function (cdf) associated to a positive random variable. Therefore, any pdf for positive random variables can be used to define a valid test function \(\eta_t\) through its corresponding cdf.
Graphical representation of the underlying idea behind the update control test. For simplicity, in this figure, we have assumed \(\eta_t(z,d)=\eta_t(d)\). As the proposal function \(q_t\) becomes closer and closer to \(\pi\), the probability of adding a new node to \(\mathcal{S}_{t}\) decreases
4.1 Examples of update rules
Below, we provide three different possible update rules. First of all, we consider the simplest case: \(\eta_t(z,d)=\eta_t(d)\). As a first example, we propose
$$ \eta_{t}(d)=1-e^{-\beta d}, $$
where β>0 is a constant parameter. Note that this is the cdf associated to an exponential random variable.
A second possibility is
$$\begin{array}{*{20}l} \eta_{t}(d)= \left\{ \begin{array}{lcc} \!\!1,\quad \text{if}\quad d> {\varepsilon_{t}}, \\ \!\!0,\quad \text{if}\quad d\leq {\varepsilon_{t}}; \\ \end{array} \right. \end{array} $$
where \(0<\varepsilon_t<M_\pi\), with \(M_{\pi}=\max\limits_{z\in \mathcal{X}}\{\pi(z)\}\), is some appropriate time-varying threshold that can either follow some user pre-defined rule or be updated automatically. Alternatively, we can also set this threshold to a fixed value, \(\varepsilon_t=\varepsilon\), as done in the simulations. In this case, setting \(\varepsilon\ge M_\pi\) implies that the update of \(\mathcal{S}_{t}\) never happens (i.e., new support points are never added to the support set), whereas candidate nodes would be incorporated to \(\mathcal{S}_{t}\) almost surely by setting \(\varepsilon\to 0\). For any other value of \(\varepsilon\) (i.e., \(0<\varepsilon<M_\pi\)), the adaptation would eventually stop and no support points would be added after some random number of iterations. Note that this update rule does not fulfill Condition 4 in Definition 3, implying that some of the theoretical results of Section 5 (e.g., Conjecture 1) are not applicable. However, we have included it here because it is a very simple rule that has shown a good performance in practice and can be useful to limit the number of support points by using a fixed value of \(\varepsilon\). Finally, note also that Eq. (6) corresponds to the cdf associated to a Dirac's delta located at \(\varepsilon_t\).
A third alternative is
$$ \begin{aligned} &\eta_{t}(z,d)=\frac{d}{\max\{\pi(z),q_{t}(z|\mathcal{S}_{t})\}}. \end{aligned} $$
for \(z \in \mathcal {X}\) and \(0 \le d \le \max \{\pi (z),q_{t}(z|\mathcal {S}_{t})\}\), since
$$\begin{array}{@{}rcl@{}} d &=&|\pi(z)-q_{t}(z|\mathcal{S}_{t})|, \\ &=&\max\{\pi(z),q_{t}(z|\mathcal{S}_{t})\}- \min\{\pi(z),q_{t}(z|\mathcal{S}_{t})\},\\ &\le& \max\{\pi(z),q_{t}(z|\mathcal{S}_{t})\}, \end{array} $$
This rule appears in other related algorithms, as discussed in Section 6.1. Furthermore, it corresponds to the cdf of a uniform random variable defined in the interval \([0,\max\{\pi(z),q_{t}(z|\mathcal{S}_{t})\}]\). Hence, for a given value of \(z\), the update test can be implemented as follows: (a) draw a sample \(v'\) uniformly distributed in the interval \(\left[0,\max\{\pi(z),q_{t}(z|\mathcal{S}_{t})\}\right]\); (b) if \(v'\le d_t(z)\), add \(z\) to the set of support points. A graphical representation of this rule is given in Fig. 4, whereas Table 2 summarizes all the previously described update rules.
Graphical interpretation of the third rule in Eq. (7) for the update control test. Given a point \(z\), this test can be implemented as follows: (1) draw a sample \(v' \sim \mathcal{U}([0,\max\{\pi(z),q_{t}(z|\mathcal{S}_{t})\}])\), (2) then if \(v'\le d_t(z)\), add \(z\) to the set of support points, i.e., \(\mathcal{S}_{t+1}=\mathcal{S}_{t} \cup \{z\}\). a The interval \([0,\max\{\pi(z),q_{t}(z|\mathcal{S}_{t})\}]\) and the distance \(d_t(z)\). b The case when \(v'\le d_t(z)\), so that the point is incorporated in the set of support points, whereas c illustrates the case when \(v'>d_t(z)\); hence, \(\mathcal{S}_{t+1}=\mathcal{S}_{t}\). Note that as the proposal function \(q_t\) becomes closer and closer to \(\pi\) (i.e., \(d_t(z)\) decreases for any \(z\)), the probability of adding a new node to \(\mathcal{S}_{t}\) decreases
Examples of test function \(\eta_t(z,d)\) for different update rules (recall that \(d=d_{t}(z)=|q_{t}(z|\mathcal{S}_{t})-\pi(z)|\))
\(\eta_t(d)=1-e^{-\beta d}\)
\(\eta _{t}(d)=\left \{ \begin {array}{ll} 1, \text {if} ~d> \varepsilon _{t},\\ 0, \text {if}~ d\leq \varepsilon _{t} \end {array} \right.\)
\(\eta _{t}(z,d)=\frac {d}{\max \{\pi (z),q_{t}(z|\mathcal {S}_{t})\}}\)
In the first and second cases, we have \(\eta_t(z,d)=\eta_t(d)\)
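For concreteness, the three rules of Table 2 can be coded as acceptance probabilities \(P_a(z)\) in a few lines of Python (a sketch with our own naming, where pi_z and q_z stand for the precomputed values \(\pi(z)\) and \(q_t(z|\mathcal{S}_t)\)):

import numpy as np

def eta_exponential(pi_z, q_z, beta=5.0):
    # Rule 1, Eq. (5): cdf of an exponential random variable
    return 1.0 - np.exp(-beta * abs(pi_z - q_z))

def eta_threshold(pi_z, q_z, eps=0.1):
    # Rule 2, Eq. (6): cdf of a Dirac delta at eps (adds z iff d > eps)
    return 1.0 if abs(pi_z - q_z) > eps else 0.0

def eta_uniform(pi_z, q_z):
    # Rule 3, Eq. (7): cdf of a uniform on [0, max(pi_z, q_z)]
    return abs(pi_z - q_z) / max(pi_z, q_z)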
5 Theoretical results
In this section, we provide some theoretical results regarding the ergodicity of the proposed approach, the convergence of a sticky proposal to the target, and the expected growth of the number of support points of the proposal. First of all, regarding the ergodicity of the AISM, we have the following theorem.
(Ergodicity of AISM) Let \(x_1,x_2,\ldots,x_{T-1}\) be the set of states generated by the AISM algorithm in Table 1, using a valid adaptive proposal function, \(\widetilde{q}_{t}(x|\mathcal{S}_{t}) = \frac{1}{c_{t}} q_{t}(x|\mathcal{S}_{t})\), constructed according to Definition 1, and a test rule fulfilling the conditions in Definition 3. The pdf of \(x_t\), \(p_t(x)\), converges geometrically in total variation (TV) norm to the target, \(\widetilde{\pi}(x) = \frac{1}{c_{\pi}} \pi(x)\), i.e.,
$$ \|p_{t}(x)-\widetilde{\pi}(x)\|_{TV} \le 2 \prod\limits_{\ell=1}^{t}{(1-a_{\ell})}, $$
where
$$ a_{\ell} = \min\left\{1,\frac{c_{\pi}}{c_{\ell}} \min_{x \in \mathcal{X}}\left\{\frac{q_{\ell}(x|\mathcal{S}_{\ell})}{\pi(x)}\right\}\right\}, $$
with \(c_{\pi}\) and \(c_{\ell}\) denoting the normalizing constants of π(x) and \(q_{\ell }(x|\mathcal {S}_{\ell })\), respectively.
See Appendix A. □
Theorem 1 ensures that the pdf of the states of the Markov chain becomes closer and closer to the target pdf as t increases, since 0≤1−a t ≤1 and thus the product in the right hand side of (9) is a decreasing function of t. This theorem is a direct consequence of Theorem 2 in [14], and ensures the ergodicity of the proposed adaptive MCMC approach. Regarding the convergence of a sticky proposal to the target, we consider the following conjecture.
Conjecture 1
(Convergence of SP to the target) Let \(\widetilde {\pi }(x) = \frac {1}{c_{\pi }} \pi (x)\) be a continuous and bounded target pdf that has bounded first and second derivatives for all \(x \in \mathcal {X}\). Let \(\widetilde {q}_{t}(x|\mathcal {S}_{t}) = \frac {1}{c_{t}} q_{t}(x|\mathcal {S}_{t})\) be a sticky proposal pdf, constructed according to Definition 1 by using either a piecewise constant (PWC) or piecewise linear (PWL) approximation (given by Eqs. (3) and (4), respectively). Let us also assume that the support points have been obtained by applying a test rule according to Definition 3 within the AISM algorithm described in Table 1. Then, it is reasonable to assume that \(q_{t}(x|\mathcal {S}_{t})\) converges in L1 distance to π(x) as t increases (i.e., as the number of support points grows), that is, as t→∞,
$$ D_{1}(\pi,q_{t}) = \|\pi- q_{t} \|_{1} = \int_{\mathcal{X}}|\pi(z)-q_{t}(z|\mathcal{S}_{t})|dz \to 0. $$
An intuitive argumentation is provided in Appendix A.
Note that Conjecture 1 essentially shows that the "sticky" condition is fulfilled for PWC and PWL proposals and continuous, bounded targets with bounded first and second derivatives. Note also that this conjecture implies that \(q_{t}(x|\mathcal {S}_{t}) \to \pi (x)\) almost everywhere. Combining Theorem 1 and Conjecture 1 we get the following corollary.
Corollary 2
Let x1,x2,…,xT−1 be the set of states generated by the AISM algorithm in Table 1, using either a PWC or a PWL sticky proposal function, \(\widetilde {q}_{t}(x|\mathcal {S}_{t}) = \frac {1}{c_{t}} q_{t}(x|\mathcal {S}_{t})\), constructed according to Definition 2 and a test rule fulfilling the conditions in Definition 3. Let \(\widetilde {\pi }(x) = \frac {1}{c_{\pi }} \pi (x)\) be a continuous and bounded target pdf that has bounded first and second derivatives for all \(x \in \mathcal {X}\). Then,
$$ \|\pi(x)-q_{t}(x)\|_{TV} \to 0 \qquad \text{as} \quad t \to \infty. $$
By Theorem 1 we have
$$ \|p_{t}(x)-\widetilde{\pi}(x)\|_{TV} \le 2 \prod\limits_{\ell=1}^{t}{(1-a_{\ell})}, $$
with the a ℓ given by Eq. (10). Now, since \(q_{\ell }(x|\mathcal {S}_{\ell }) \to \pi (x)\) almost everywhere by Conjecture 1, we have c ℓ →c π and thus a ℓ →1 as ℓ→∞. Consequently, ∥π(x)−q t (x)∥ TV →0 as t→∞. □
Finally, we also have a bound on the expected growth of the number of support points, as provided by the following theorem.
Theorem 3
(Expected rate of growth of the number of support points) Let d t (z)=|π(z)−q t (z)| denote the absolute difference between the bounded target, π(x), and an arbitrary sticky proposal function, q t (x), constructed according to Definition 2. Let also η t (z,d)=η t (d) be an acceptance function that only depends on z through d=d t (z) and fulfills the conditions in Definition 3. The expected probability of adding a new support point in the AISM algorithm of Table 1 at the t-th iteration is
$$ E\left[P_{a}|x_{t-1}, \mathcal{S}_{t}\right] \le \eta_{t}(d_{t}(x_{t-1}) + C \cdot D_{1}(\pi,q_{t})), $$
where \(D_{1}(\pi, q_{t}) = \int _{\mathcal {X}}{d_{t}(z) dz}\), and \(C = \max _{z\in \mathcal {X}} \widetilde {q}_{t}(z|\mathcal {S}_{t})\) is a constant that depends on the sticky proposal used. Furthermore, under the conditions of Conjecture 1, \(E[P_{a}|x_{t-1}, \mathcal {S}_{t}] \to 0\) as t→∞.
See Appendix C.1. □
Theorem 3 sets a bound on the expected probability of adding new support points, and thus on the expected rate of growth of the number of support points. Furthermore, under certain smoothness assumptions for the target (i.e., that π(x) is twice continuously differentiable), it also guarantees that this expectation tends to zero as the number of iterations increases, hence implying that fewer points are added as the algorithm evolves. Finally, note that Theorem 3 has been derived only for η t (z,d)=η t (d). However, under certain mild assumptions, it can be easily extended to more general test functions, as stated in the following corollary.
Corollary 4
Let \(\eta _{t}(z,d_{t}(z)) = \eta _{t}(\widetilde {d}_{t}(z))\), where \(\widetilde {d}_{t}(z)=\widetilde {d}_{t}(\pi (z),q_{t}(z))\) is some valid semi-metric and \(\eta _{t}(\widetilde {d}_{t}(z))\) is a concave function of \(\widetilde {d}_{t}(z)\). Then, if the rest of the conditions in Theorem 3 are satisfied, the expected probability of adding a new support point in the AISM algorithm of Table 1 at the t-th iteration is
$$ E[P_{a}|x_{t-1}, \mathcal{S}_{t}] \le \eta_{t}\left(\widetilde{d}_{t}(x_{t-1}) + C \cdot \widetilde{D}_{1}(\pi,q_{t})\right), $$
where \(\widetilde {D}_{1}(\pi, q_{t}) = \int _{\mathcal {X}}{\widetilde {d}_{t}(z) dz}\) and \(C = \max _{z\in \mathcal {X}} \widetilde {q}_{t}(z|\mathcal {S}_{t})\). Furthermore, under the conditions of Conjecture 1, \(E\left [P_{a}|x_{t-1}, \mathcal {S}_{t}\right ] \to 0\) as t→∞.
Note that Corollary 4 allows us to extend the results of Theorem 3 to update rule 3, which corresponds to \(\eta _{t}(z,d_{t}(z)) = \widetilde {d}_{t}(z)\) with \(\widetilde {d}_{t}(z) = \frac {d}{\max \{\pi (z),q_{t}(z|\mathcal {S}_{t})\}}\) and \(d=d_{t}(z)=|\pi(z)-q_{t}(z|\mathcal{S}_{t})|\).
6 Related works
6.1 Other examples of sticky MCMC methods
The novel class of adaptive independent MCMC methods encompasses several existing algorithms already available in the literature, as shown in Table 3. We denote the proposal pdf employed in these methods as p t (x) and, for simplicity, we have removed the dependence on \(\mathcal {S}_{t}\) in the function q t (x). The Griddy Gibbs Sampler [15] builds a proposal pdf as in Eq. (3), which is never adapted later. ARMS [12] and IA2RMS [13] use as proposal density
$$p_{t}(x)\propto\min\left\{q_{t}(x), \pi(x)\right\}, $$
where q t (x) is built using different alternative methods [12, 13, 16, 18]. Note that it is possible to draw easily from p t (x)∝ min{q t (x),π(x)} using the rejection sampling principle [24, 25], i.e., using the following procedure (in order to draw one sample x a ):
1. Draw \(x'\sim {\widetilde q}_{t}(x) \propto q_{t}(x)\) and \(u' \sim \mathcal {U}([0,1]\!)\).
2. If \(u'\leq \frac {\pi (x')}{q_{t}(x')}\), then set x a =x′.
3. Otherwise, if \(u' > \frac {\pi (x')}{q_{t}(x')}\), repeat from step 1.

The accepted sample x a has pdf p t (x)∝ min{q t (x),π(x)}. Moreover, ARMS adds new points to \(\mathcal {S}_{t}\) using the update Rule 3, only when q t (z)≥π(z), so that
$$P_{a}(z)=1-\frac{\pi(z)}{q_{t}(z)} $$
Otherwise, if q t (z)<π(z), ARMS does not add new nodes (see the discussion in [13] about the issues in ARMS mixing). Then, the update rule for ARMS can be written as
$$\begin{array}{@{}rcl@{}} P_{a}(z)=\max\left[1-\frac{\pi(z)}{q_{t}(z)},0\right]. \end{array} $$
Furthermore, the double update check used in IA2RMS coincides exactly with Rule 3 when
$$p_{t}(x)\propto\min\{q_{t}(x), \pi(x)\} $$
is employed as the proposal pdf. Finally, note that ARMS and IA2RMS contain ARS in [19] as a special case when q t (x)≥π(x), \(\forall x\in \mathcal {X}\) and \(\forall t\in \mathbb {N}\). Hence, ARS can also be considered a special case of the new class of algorithms.

Special cases of sticky MCMC algorithms (Table 3):

- Griddy Gibbs [15]: proposal pdf \(p_{t}(x)=\widetilde {q}_{t}(x)\); proposal construction as in Eq. (3); update rule: never update, i.e., Rule 2 with ε=∞, so that P a (z)=0 for all z.
- ARMS [12]: proposal pdf p t (x)∝ min{q t (x),π(x)}; proposal construction as in [12], [16]; update rule: Rule 3 if q t (z)≥π(z), and no update if q t (z)<π(z), i.e., \(P_{a}(z)=\max \left [1-\frac {\pi (z)}{q_{t}(z)},0\right ]\).
- IA2RMS [13]: proposal pdf p t (x)∝ min{q t (x),π(x)}; proposal construction as in Eqs. (3)-(4), [13]; update rule: Rule 3.

The ARS method in [19] is a special case of ARMS and IA2RMS, so that ARS can also be considered as belonging to the new class of techniques.
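For illustration, a minimal Python sketch of the rejection procedure above; the helper names are hypothetical, and we assume the unnormalized target π and the sticky proposal q t can both be evaluated, with q t also being easy to sample.

```python
import numpy as np

def sample_min_proposal(pi, q_eval, q_sample, rng=np.random.default_rng()):
    """Draw one sample x_a from p_t(x) ∝ min{q_t(x), pi(x)} by rejection
    sampling, as used by ARMS and IA2RMS. `pi` and `q_eval` evaluate the
    unnormalized target and proposal; `q_sample` draws from q_t."""
    while True:
        x = q_sample(rng)              # step 1: x' ~ q_t(x)
        u = rng.uniform()              #         u' ~ U([0, 1])
        if u <= pi(x) / q_eval(x):     # step 2: accept -> x_a = x'
            return x                   # step 3: otherwise repeat from step 1
```

Since a candidate x′ is kept with probability min{1, π(x′)/q t (x′)}, the accepted sample indeed has density proportional to q t (x) min{1, π(x)/q t (x)} = min{q t (x), π(x)}.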
6.2 Related algorithms
Other related methods, using non-parametric proposals, can be found in the literature. Samplers for drawing from univariate pdfs, using similar proposal constructions, have been proposed in [20], but the sequence of adaptive proposals does not converge to the target. Interpolation procedures for building the proposal pdf are also employed in [22, 23]. The authors in [22, 23] suggest building the proposal by B-spline procedures. However, in this case, the resulting proposal is of random-walk type (not independent) and the resulting algorithm is not adaptive. Furthermore, the shape of the proposal does not converge to the shape of the target; only local approximations via B-spline interpolation are obtained. The methods [12, 13, 15] are included in the sticky class of algorithms, as pointed out in Section 6.1. In [16], the authors suggest an alternative proposal construction consisting of second-order polynomial pieces, to be used within the ARMS structure [12].
The adaptive rejection sampling (ARS) method [19, 26] is not an MCMC technique, but it is strongly related to the sticky approach, since it also employs an adaptive non-parametric proposal pdf. ARS needs to be able to build a proposal such that q t (x)≥π(x), \(\forall x\in \mathcal {X}\) and \(\forall t\in \mathbb {N}\). This is possible only when additional requirements on the target are assumed (for instance, log-concavity). For this reason, several extensions of the standard ARS have also been proposed [25, 27, 28], for tackling wider classes of target distributions. In [29], the non-parametric proposal is still adapted, but in this case the number of support points remains constant, fixed in advance by the user. Different non-parametric construction procedures addressing multivariate distributions have also been presented [21, 30, 31].
Other techniques have been developed to be applied specifically in the Monte Carlo-within-Gibbs scenario, when it is not possible to draw directly from the full-conditional pdfs. In [32], an importance sampling approximation of the univariate target pdf is employed, and a resampling step is performed in order to provide an "approximate" sample from the full-conditional. In [18], the authors suggest a non-adaptive strategy for building a suitable non-parametric proposal via interpolation. In this work, the interpolation procedure is first performed using a huge number of nodes, and then many of them are discarded according to a suitable criterion. Several other alternatives involving MH-type algorithms have been used for sampling efficiently from the full-conditional pdfs within a Gibbs sampler [5, 6, 7, 15, 33, 34, 35].
7 Adaptive independent sticky MTM
In this section, we consider an alternative MCMC structure for the second stage described in Section 2: using a multiple-try Metropolis (MTM) approach [36, 37]. The resulting technique, Adaptive Independent Sticky MTM (AISMTM), is an extension of AISM that considers multiple candidates as possible new states at each iteration. This improves the ability of the chain to explore the state space [37]. At iteration t, AISMTM builds the proposal density \(q_{t}(x|\mathcal {S}_{t}) \) (step 1 of Table 4) using the current set of support points \(\mathcal {S}_{t}\). Let x t =x be the current state of the chain and xj′ (j=1,…,M) a set of i.i.d. candidates simulated from \(q_{t}(x|\mathcal {S}_{t})\) (see step 2 of Table 4). Note that AISMTM uses an independent proposal [2], just like AISM. As a consequence, the auxiliary points in step 2.3 of Table 4 can be set deterministically ([1], pp. 119-120), [37].
Adaptive independent sticky Multiple-try Metropolis
In step 2, a sample x′ is selected among the set of candidates {x1′,…,xM′}, with probability proportional to the importance sampling weights,
$$w_{t}\left(x_{j}'\right)=\frac{\pi\left(x_{j}'\right)}{q_{t}\left(x_{j}'|\mathcal{S}_{t}\right)}, \qquad \forall j \in \{1,\ldots,M\}. $$
The selected candidate is then accepted or rejected according to the acceptance probability α given in step 2. Finally, step 3 updates the set \(\mathcal {S}_{t}\), including a new point
$$z'\in\mathcal{Z}=\{z_{1},\dots,z_{M}\}, $$
with probability P a (z′)=η t (z′,d t (z′)). Note that \(x_{t}\notin \mathcal {Z}\), and thus AISMTM is an independent MCMC algorithm according to Holden's definition [14]. For the sake of simplicity, we only consider the case where a single point can be added to \(\mathcal {S}_{t}\) at each iteration. However, this update step can be easily extended to allow for more than one sample to be included into the set of support points. Note also that AISMTM becomes AISM for M=1.
AISMTM provides a better choice of the new support points than AISM (see Section 9). The price to pay for this increased efficiency is the higher computational cost per iteration. However, since the proposal quickly approaches the target, it is possible to design strategies with a decreasing number of tries (M1≥M2≥⋯≥M t ≥⋯≥M T ) in order to reduce the computational cost.
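As an illustration of steps 1 and 2, the following sketch outlines one AISMTM iteration. The acceptance probability follows the standard independent MTM rule with importance weights (a reasonable reading of Table 4, which is not reproduced here), and all function names are ours.

```python
import numpy as np

def aismtm_step(x_prev, pi, q_eval, q_sample, M, rng=np.random.default_rng()):
    """One (schematic) AISMTM iteration: draw M i.i.d. candidates from the
    sticky proposal, select one proportionally to the weights w = pi/q, and
    accept it with the independent MTM acceptance probability. Returns the
    next state and the set Z of auxiliary (rejected) candidates for step 3."""
    cands = np.array([q_sample(rng) for _ in range(M)])
    w = np.array([pi(x) / q_eval(x) for x in cands])   # w_t(x_j')
    j = rng.choice(M, p=w / w.sum())                   # select x' prop. to w
    x_new = cands[j]
    # Independent proposal: the reference set can be fixed deterministically
    # as the rejected candidates plus the current state x_prev.
    w_ref = w.copy()
    w_ref[j] = pi(x_prev) / q_eval(x_prev)
    alpha = min(1.0, w.sum() / w_ref.sum())            # MTM acceptance prob.
    x_next = x_new if rng.uniform() < alpha else x_prev
    Z = np.delete(cands, j)                            # auxiliary points
    return x_next, Z
```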
7.1 Update rules for AISMTM
The update rules presented above require changes that take into account the multiple samples available, when used in AISMTM. As an example, let us consider the update scheme in Eq. (7). Considering for simplicity that only a single point can be incorporated into \(\mathcal {S}_{t}\), the update step for \(\mathcal {S}_{t}\) can be split in two parts: choose a "bad" point in \(\mathcal {Z}=\{z_{1},\dots,z_{M}\}\) and then test whether it should be added or not. Thus, first a z′=z i is selected among the samples in \(\mathcal {Z}\) with probability proportional to
$$ \begin{aligned} \varphi_{t}(z_{i})&=\max\left\{w_{t}(z_{i}),\frac{1}{w_{t}(z_{i})}\right\} \\ &= \frac{\max\{\pi(z_{i}),q_{t}(z_{i}|\mathcal{S}_{t})\}}{\min\{\pi(z_{i}),q_{t}(z_{i}|\mathcal{S}_{t})\}}, \\ &= \frac{d_{t}(z_{i})}{\min\{\pi(z_{i}),q_{t}(z_{i}|\mathcal{S}_{t})\}}+1, \end{aligned} $$
for i=1,…,M.8 This step selects (with high probability) a sample where the proposal value is far from the target. Then, the point z′ is included in \(\mathcal {S}_{t}\) with probability
$$\begin{array}{@{}rcl@{}} P_{a}(z')=\eta_{t}(z',d_{t}(z'))&=&1-\frac{1}{\varphi_{t}(z')}, \\ &=&\frac{d_{t}(z')}{\max\{\pi(z'),q_{t}(z'|\mathcal{S}_{t})\}}, \end{array} $$
exactly as in Eq. (7). Therefore, the probability of adding a point z i to \(\mathcal {S}_{t}\) is
$$\begin{array}{@{}rcl@{}} P_{\mathcal{Z}}(z_{i})&=&\frac{\varphi_{t}(z_{i})}{\sum_{j=1}^{M}\varphi_{t}(z_{j})}\eta_{t}(z_{i},d_{t}(z_{i})), \\ &=&\frac{\varphi_{t}(z_{i})P_{a}(z_{i})}{\sum_{j=1}^{M}\varphi_{t}(z_{j})}=\frac{\varphi_{t}(z_{i})-1}{\sum_{j=1}^{M}\varphi_{t}(z_{j})}, \end{array} $$
that is a probability mass function defined over M+1 elements: z1,…, z M and the event {no addition} that, for simplicity, we denote with the empty set symbol ∅. Thus, the update rule in step 3 of Table 4 can be rewritten as a unique step,
$$ \mathcal{S}_{t+1}=\left\{ \begin{array}{ll} \mathcal{S}_{t}\cup \{z_{1}\}, &\text{with prob.} \ P_{\mathcal{Z}}(z_{1})=\frac{\varphi_{t}\left(z_{1}\right)-1}{\sum_{j=1}^{M}\varphi_{t}\left(z_{j}\right)},\\ &\vdots \\ \mathcal{S}_{t}\cup \{z_{M}\}, &\text{with prob.} \ P_{\mathcal{Z}}(z_{M})=\frac{\varphi_{t}\left(z_{M}\right)-1}{\sum_{j=1}^{M}\varphi_{t}\left(z_{j}\right)},\\ \mathcal{S}_{t}, &\text{with prob.} \ P_{\mathcal{Z}}(\emptyset)=\frac{M}{\sum_{j=1}^{M}\varphi_{t}\left(z_{j}\right)},\\ \end{array} \right. $$
where we have used \(1-\sum _{i=1}^{M} P_{\mathcal {Z}}(z_{i})=\frac {M}{\sum _{j=1}^{M}\varphi _{t}\left (z_{j}\right)}\).
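The combined selection/test step above can be implemented as a single draw over the M+1 possible outcomes; a minimal sketch (names ours), assuming π and q t can be evaluated at the auxiliary points:

```python
import numpy as np

def aismtm_update(S, Z, pi, q_eval, rng=np.random.default_rng()):
    """Update the support set S_t: add at most one auxiliary point z_i with
    probability P_Z(z_i) = (phi_i - 1) / sum_j phi_j, and leave S_t unchanged
    with probability M / sum_j phi_j (the 'no addition' event)."""
    phi = np.array([max(pi(z), q_eval(z)) / min(pi(z), q_eval(z)) for z in Z])
    M = len(Z)
    probs = np.append((phi - 1.0) / phi.sum(), M / phi.sum())  # sums to 1
    k = rng.choice(M + 1, p=probs)
    if k < M:                                  # a point was selected and accepted
        S = np.sort(np.append(S, Z[k]))
    return S
```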
8 Range of applicability and multivariate generation
The range of applicability of the sticky MCMC methods is briefly discussed below. On the one hand, sticky MCMC methods can be employed as stand-alone algorithms. Indeed, in many applications, it is necessary to draw samples from complicated univariate target pdfs (for example, in signal processing; see [38]). In this case, the sticky schemes provide virtually independent samples (i.e., with correlation close to zero) very efficiently. It is also important to remark that AISM and AISMTM automatically provide an estimation of the normalizing constant of the target (a.k.a. marginal likelihood or Bayesian evidence), since, with a suitable choice of the update test, the proposal approaches the target pdf almost everywhere. This is usually a hard task using MCMC methods [1, 2, 11].
AISM and AISMTM can also be applied directly to draw from a multivariate distribution if a suitable construction procedure for the multivariate sticky proposal is designed (e.g., see [30, 31, 39, 40] and ([21], Chapter 11)). However, devising and implementing such procedures in high-dimensional state spaces are not easy tasks. Therefore, in this paper, we focus on the use of the sticky schemes within other Monte Carlo techniques (such as Gibbs sampling or the hit and run algorithm) to draw from multivariate densities. More generally, Bayesian inference often requires drawing samples from complicated multivariate posterior pdfs, \(\widetilde {\pi }(\mathbf {x}|\mathbf {y})\) with
$$\mathbf{x}=[x_{1},\ldots,x_{L}] \in \mathbb{R}^{L}, \quad L>1. $$
For instance, this happens in blind equalization and source separation, or spectral analysis [3, 4]. For simplicity, in the following we denote the target pdf as \(\widetilde {\pi }(\mathbf {x})\). When direct sampling from \(\widetilde {\pi }(\mathbf {x})\) in the space \(\mathbb {R}^{L}\) is unfeasible, a common approach is the use of Gibbs-type samplers [2]. These methods split the complex sampling problem into simpler univariate cases. Below we briefly summarize some well-known Gibbs-type algorithms.
Gibbs sampling. Let us denote as x(0) a randomly chosen starting point. At iteration k≥1, a Gibbs sampler obtains the ℓ-th component (ℓ=1,…,L) of x, x ℓ , drawing from the full conditional \(\widetilde {\pi }_{\ell }\left (x|\mathbf {x}_{1:\ell -1}^{(k)}, \mathbf {x}_{\ell +1:L}^{(k-1)}\right)\) given all the information available, namely:
Draw \(x_{\ell }^{(k)} \sim \widetilde {\pi }_{\ell }\left (x|\mathbf {x}_{1:\ell -1}^{(k)}, \mathbf {x}_{\ell +1:L}^{(k-1)}\right)\) for ℓ=1,…,L.
Set \(\mathbf {x}^{(k)}=\left [x_{1}^{(k)},\ldots,x_{L}^{(k)}\right ]^{\top }\).
The steps above are repeated for k=1,…,N G , where N G is the total number of Gibbs iterations. However, even sampling from \(\widetilde {\pi }_{\ell }\) can often be complicated. In some specific situations, rejection samplers [41, 42, 43, 44, 45] and their adaptive versions, adaptive rejection sampling (ARS) algorithms, are employed to generate (one) sample from \(\widetilde {\pi }_{\ell }\) [12, 19, 25, 27, 28, 29, 40, 46, 47]. The ARS algorithms are very appealing techniques since they construct a non-parametric proposal in order to mimic the shape of the target pdf, yielding in general excellent performance (i.e., independent samples from \(\widetilde {\pi }_{\ell }\) with a high acceptance rate). However, their range of application is limited to some specific classes of densities [19, 47].
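For reference, the generic Gibbs loop just described can be sketched as follows, where the within-Gibbs univariate sampler (direct sampling, ARS, or a sticky scheme run for T internal steps) is abstracted behind a single hypothetical function:

```python
import numpy as np

def gibbs_sampler(x0, draw_full_conditional, N_G):
    """Generic Gibbs sampler: at each iteration k, update every component l by
    drawing from its full-conditional given the most recent values of the
    other components. `draw_full_conditional(l, x)` returns one sample from
    pi_l(x_l | x_{1:l-1}, x_{l+1:L})."""
    x = np.array(x0, dtype=float)
    L = len(x)
    chain = np.empty((N_G, L))
    for k in range(N_G):
        for l in range(L):
            x[l] = draw_full_conditional(l, x)   # within-Gibbs univariate draw
        chain[k] = x                             # store x^(k)
    return chain
```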
More generally, when it is impossible to draw directly from a full-conditional pdf \(\widetilde {\pi }_{\ell }\) (and no rejection sampler can be applied), an additional MCMC sampler is required in order to draw from \(\widetilde {\pi }_{\ell }\) [33]. Thus, in many practical scenarios, we have an MCMC (e.g., an MH sampler) inside another MCMC scheme (i.e., the Gibbs sampler). In the so-called MH-within-Gibbs approach, only one MH step is often performed within each Gibbs iteration, in order to draw from each complicated full-conditional. This hybrid approach preserves the ergodicity of the Gibbs sampler and provides good performance in many cases. On the other hand, several authors have noticed that using a single MH step for the internal MCMC is not always the best solution in terms of performance (cf. [48]). Other approximate approaches have also been proposed, considering the application of importance sampling within the Gibbs sampler [32].
Using a larger number of iterations for the MH algorithm increases the probability of overcoming the "burn-in" period, so that the last sample is (approximately) distributed as the full-conditional [33, 34, 35]. Thus, this case is closer to the ideal situation, i.e., sampling directly from the full-conditional pdf. However, unless the proposal is very well tailored to the target, a properly designed adaptive MCMC algorithm should provide less correlated samples than a standard MH algorithm. Several more sophisticated (adaptive or not) MH schemes for the application "within-Gibbs" have been proposed in the literature [12, 13, 16, 18, 20, 23, 49, 50]. In general, these techniques employ a non-parametric proposal pdf in the same fashion as the ARS schemes (and as the sticky MCMC methods). It is important to remark that performing more steps of a standard or adaptive MH within a Gibbs sampler can provide better performance than running a longer Gibbs chain applying only one MH step (see, e.g., [12, 13, 16, 17]).
Recycling Gibbs sampling. Recently, an alternative Gibbs scheme, called the Recycling Gibbs (RG) sampler, has been proposed in the literature [51]. The combined use of RG with a sticky algorithm is particularly interesting since RG recycles and employs all the samples drawn from each full-conditional pdf in the final estimators. Clearly, this scheme fits especially well with an adaptive sticky MCMC algorithm, where several MCMC steps are performed for each full-conditional pdf.
Hit and Run. The Gibbs sampler only allows movements along the axes. In certain scenarios, e.g., when the variables x ℓ are highly correlated, this can be an important limitation that slows down the convergence of the chain to the stationary distribution. The Hit and Run sampler is a valid alternative. Starting from x(0), at the k-th iteration, it applies the following steps:
Choose uniformly a direction d(k) in \(\mathbb {R}^{L}\). For instance, this can be done by drawing L samples v ℓ from a standard Gaussian \(\mathcal {N}(0,1)\) and setting
$$ \mathbf{d}^{(k)}=\frac{\mathbf{v}}{\sqrt{\mathbf{v}\mathbf{v}^{\top}} }, $$
where v=[v1,…,v L ].
Set x(k)=x(k−1)+λ(k)d(k) where λ(k) is drawn from the univariate pdf
$$p(\lambda)\propto \widetilde{\pi}\left(\mathbf{x}^{\left(k-1\right)}+\lambda \mathbf{d}^{(k)}\right), $$
where \(\widetilde {\pi }\left (\mathbf {x}^{(k-1)}+\lambda \mathbf {d}^{(k)}\right)\) is a slice of the target pdf along the direction d(k).
Also in this case, we need to be able to draw from the univariate pdf p(λ) using either some direct sampling technique or another Monte Carlo method (e.g., see [50]).
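A minimal sketch of one Hit and Run iteration, with the univariate draw from p(λ) delegated to an arbitrary one-dimensional sampler (e.g., a sticky MCMC scheme); all names are illustrative:

```python
import numpy as np

def hit_and_run_step(x, pi, sample_1d, rng=np.random.default_rng()):
    """One Hit and Run iteration: choose a uniformly random direction d on the
    unit sphere, draw lambda from the univariate slice
    p(lambda) ∝ pi(x + lambda * d), and move to x + lambda * d.
    `sample_1d(pdf)` is any univariate sampler (e.g., AISM)."""
    v = rng.standard_normal(x.size)              # v_l ~ N(0, 1)
    d = v / np.sqrt(v @ v)                       # d^(k), uniform on the sphere
    slice_pdf = lambda lam: pi(x + lam * d)      # unnormalized pdf of lambda
    lam = sample_1d(slice_pdf)                   # univariate draw along d^(k)
    return x + lam * d
```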
There are several methods similar to the Hit and Run where drawing from a univariate pdf is required; for instance, the most popular one is the Adaptive Direction Sampling [52].
Sampling from univariate pdfs is also required inside other types of MCMC methods. For instance, this is the case of exchange-type MCMC algorithms [53] for handling models with intractable partition functions. In this case, efficient techniques for generating artificial observations are needed. Techniques which generalize the ARS method, using non-parametric proposals, have been applied for this purpose (see [54]).
9 Numerical simulations
In this section, we provide several numerical results comparing the sticky methods with several well-known MCMC schemes, such as the ARMS technique [12], the adaptive MH method in [10], and the slice sampler [55].9 The first two experiments (which can be easily reproduced by interested users) correspond to bi-modal one-dimensional and two-dimensional targets, respectively, and are used as benchmarks to compare different variants of the AISM and AISMTM methods with other techniques. They allow us to show the benefits of the non-parametric proposal construction, even in these two simple experiments. Then, in the third example, we approximate the hyper-parameters of a Gaussian process (GP) [56], which is often used for regression purposes in machine learning for signal processing applications.
9.1 Multimodal target distribution
We study the ability of different algorithms to simulate multimodal densities (which are clearly non-log-concave). As an example, we consider a mixture of Gaussians as target density,
$$ \widetilde{\pi}(x) = 0.5\mathcal{N}(x;7,1)+0.5\mathcal{N}(x;-7,0.1), $$
where \(\mathcal {N}\left (x;\mu,\sigma ^{2}\right)\) denotes the normal distribution with mean μ and variance σ2. The two modes are so separated that ordinary MCMC methods fail to visit one of the modes or remain indefinitely trapped in one of them. The goal is to approximate the expected value of the target (E[X]=0 with \(X\sim \widetilde {\pi }(x)\)) via Monte Carlo. We test the ARMS method [12] and the proposed AISM and AISMTM algorithms. For AISM and AISMTM, we consider different construction procedures for the proposal pdf:
P1: the construction given in [12] formed by exponential pieces, specifically designed for ARMS.
P2: alternative construction formed by exponential pieces obtained by a linear interpolation in the log-pdf domain, given in [13].
P3: the construction using uniform pieces in Eq. (3).
P4: the construction using linear pieces in Eq. (4).
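To illustrate the kind of construction used by P3-P4, the following sketch builds a simple piecewise linear proposal by interpolating the target values at the support points. This is one plausible variant of Eq. (4): the actual constructions in Eqs. (3)-(4), including the treatment of the tails, are defined in the paper and may differ.

```python
import numpy as np

def pwl_proposal(S, pi):
    """Sketch of a PWL (P4-style) proposal built from the support set S:
    inside [s_1, s_m] it linearly interpolates the target values pi(s_i)
    (tails omitted). Returns an evaluator and a sampler for the
    unnormalized q_t(x | S_t)."""
    S = np.sort(np.asarray(S, dtype=float))
    pi_S = np.array([pi(s) for s in S])
    q_eval = lambda x: np.interp(x, S, pi_S)       # unnormalized q_t(x | S_t)
    # Trapezoid areas -> probabilities of selecting each linear piece.
    areas = 0.5 * (pi_S[1:] + pi_S[:-1]) * np.diff(S)
    def q_sample(rng=np.random.default_rng()):
        i = rng.choice(len(areas), p=areas / areas.sum())
        a, b, fa, fb = S[i], S[i + 1], pi_S[i], pi_S[i + 1]
        u = rng.uniform()
        if np.isclose(fa, fb):                     # flat piece: uniform draw
            return a + u * (b - a)
        # Invert the trapezoid cdf: solve fa*t + c*t^2 = u*area_i for t = x - a.
        c = 0.5 * (fb - fa) / (b - a)
        t = (-fa + np.sqrt(fa**2 + 4 * c * u * areas[i])) / (2 * c)
        return a + t
    return q_eval, q_sample
```

Sampling first picks a linear piece proportionally to its area and then inverts the cdf of the trapezoid; this keeps both evaluation and sampling of q t cheap, which is the main practical appeal of the PWC/PWL constructions.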
Furthermore, for AISM and AISMTM, we consider the Update Rule 1 (R1) with different values of the parameter β, the Update Rule 2 (R2) with different values of the parameter ε, and the Update Rule 3 (R3) for the inclusion of a new node in the set \(\mathcal {S}_{t}\) (see Section 4). More specifically, we first test AISM and AISMTM with all the construction procedures P1, P2, P3, and P4 jointly with the rule R3. Then, we test AISM with the construction P4 and the update test R2 with ε∈{0.005,0.01,0.1,0.2}. For Rule 1 we consider β∈{0.3,0.5,0.7,2,3,4}. All the algorithms start with \(\mathcal {S}_{0}=\{-10,-8,5,10\}\) and initial state x0=−6.6. For AISMTM, we have set M∈{10,50}. For each independent run, we perform T=5000 iterations of the chain.
The results given in Table 5 are the averages over 2000 runs, without removing any sample to account for the initial burn-in period. Table 5 shows the Mean Square Error (MSE) in the estimation of E[X], the auto-correlation function ρ(τ) at different lags, τ∈{1,10,50} (normalized, i.e., ρ(0)=1), the approximated effective sample size (ESS) of the produced chain ([57], Chapter 4)
$$ ESS\approx\frac{T}{1+2\sum_{\tau=1}^{\infty} \rho(\tau)}, $$
(clearly, ESS≤T), the final number of support points m T , and the computing time normalized with respect to the time spent by ARMS [12]. For simplicity, in Table 5, we have reported only the case of R2 with ε∈{0.005,0.01}; however, other results are shown in Fig. 5.
(Ex-Sect-9.1). Table 5 reports, for each algorithm, the mean square error (MSE), the autocorrelation ρ(τ) at lags τ∈{1,10,50}, the effective sample size (ESS), the final number of support points (m T ), and the computing time normalized w.r.t. ARMS (Time). The compared schemes are ARMS [12], AISM-P1-R3, AISMTM-P1-R3 (with M=10 and M=50), AISM-P4-R2 (with ε=0.01 and ε=0.005), and AISM-P4-R1 (with β=0.3, β=0.7, and β=2).
(Ex-Sect-9.1). Evolution of the number of support points m t and average acceptance probability (AAP), as function of t=1,…,T for AISM, for different constructions, and update rule R2 with ε=0.005 (square), ε=0.01 (cross), ε=0.1 (triangle) and ε=0.2 (circle). Moreover, in a–d the evolution of m t of AISM with the update rule R3 is also shown with solid line. Note that the range of values in a–d is different. (e)-(f)-(g)-(h) Acceptance Rate as function of the iteration t
AISM and AISMTM outperform ARMS, providing a smaller MSE and correlation (both close to zero). This is because ARMS does not allow a complete adaptation of the proposal pdf, as highlighted in [13]. The adaptation in AISM and AISMTM provides a better approximation of the target than ARMS, as also indicated by the ESS, which is substantially higher for the proposed methods. ARMS is in general slower than AISM for two main reasons. Firstly, the construction P1 (used by ARMS) is more costly since it requires the computation of several intersection points [12]; this is not required for the procedures P2, P3, and P4. Secondly, the effective number of iterations in ARMS is higher than T=5000 (the averaged value is ≈5057.83) due to the discarded samples in the rejection step (in this case, the chain is not moved forward).
Figure 6a–d depicts the averaged autocorrelation function ρ(τ) for τ=1,…,100 for the different techniques and constructions. Figure 6e–h shows the average acceptance probability (AAP; the value of α of the MH-type techniques) of accepting a new state as function of the iteration t. We can see that, with AISM and AISMTM, the AAP approaches 1 since q t becomes closer and closer to π. Figure 7 shows the evolution of the number of support points, m t , as function of t=1,…,T=5000, again for the different techniques and constructions. Note that, with AISMTM and P3-P4, the AAP approaches 1 so quickly and the correlation is so small (virtually zero) that it is difficult to recognize the corresponding curves, which are almost constant and close to one or zero, respectively. The constructions P3 and P4 provide the best results. In this experiment, P4 seems to provide the best compromise between performance and computational cost. We also test AISM with update R2 for different values of ε (and different constructions). The number of nodes m t and the AAP as function of t for these cases are shown in Fig. 5. These figures and the results given in Table 5 show that AISM-P4-R2 provides extremely good performance with a small computational cost (e.g., the final number of points is only m T ≈43 with ε=0.005). This shows that the update rule R2 is a very promising choice given the obtained results. Moreover, we can observe that the update rule R1 is very parsimonious in adding new points, even considering a wide range of values of β, from 0.3 to 4. The results with R1 are also good, so that this rule seems to be a robust and interesting alternative to R2 (which seems more dependent on the choice of its parameter ε). Finally, Fig. 8 shows the histograms of the 5000 samples obtained by one run of AISM-P3-R1 with β=0.1 and β=3. The target pdf is depicted in solid line and the final constructed proposal pdf is shown in dashed line.
(Ex-Sect-9.1). (a)-(b)-(c)-(d) Autocorrelation Function ρ(τ) at lags from 1 to 100 and (e)-(f)-(g)-(h) Averaged Acceptance Probability (AAP) as function of t, for the different methods. In each plot: P1 (solid line), P2 (dashed-dotted line), P3 (dotted line), and P4 (dashed line). Note the different range of values of ρ(τ)
(Ex-Sect-9.1). (a)-(b)-(c)-(d) Evolution of the number of support points m t as function of t=1,…,T, for the different methods. In each plot: construction P1 (solid line), P2 (dashed-dotted line), P3 (dotted line) and P4 (dashed line)
(Ex-Sect-9.1). a Histogram of the 5000 samples obtained by one run of AISM-P3-R1 with β=0.1 (28 final points). b Histogram of the 5000 samples obtained by one run of AISM-P3-R1 with β=3 (79 final points). The target pdf, \(\widetilde {\pi }(x)\), is depicted in solid line and the final construction proposal pdf, \(\widetilde {q}_{T}(x)\), is shown in dashed line
9.2 Missing mode experiment
Let us consider again the previous bimodal target pdf,
$$\widetilde{\pi}(x) = 0.5\mathcal{N}(x;7,1)+0.5\mathcal{N}(x;-7,0.1), $$
shown in Fig. 8. Here, we consider a bad choice of the initial support points, such as \(\mathcal {S}_{0}=\{5,6,10\}\) cutting out one of the two modes (we consider that no information about the range of the target pdf is provided). We test the robust implementation described in Appendix E.1 Mixture of proposal densities, i.e., we employ the proposal density defined
$$\begin{array}{@{}rcl@{}} \widetilde{q}(x)=\alpha_{t} \widetilde{q}_{1}(x)+(1-\alpha_{t})\widetilde{q}_{2}(x|\mathcal{S}_{t}), \end{array} $$
where \( \widetilde {q}_{1}(x)=\mathcal {N}\left (x;0,\sigma _{p}^{2}\right)\) and \(\widetilde {q}_{2}(x|\mathcal {S}_{t})\) is a sticky proposal constructed using the procedure P3 in Eq. (3) (we use the update rule 1 with β=0.1). We consider the most defensive strategy, defining α t =α0=0.5 for all t. We test σ p ∈{2,3,8,10}. We compute the mean absolute error (MAE) in estimating the variance Var[X]=49.55, where \(X\sim \widetilde {\pi }(x)\), with different MCMC methods generating chains of length \(T=10^{4}\). We compare this Robust AISM-P3-R1 scheme with a standard MH method using \(\widetilde {q}_{1}(x)\) as proposal pdf and with the Adaptive MH technique where the scale parameter \(\sigma _{p}^{(t)}\) is adapted online [10] (starting with \(\sigma _{p}^{(0)}\in \{2,3,8,10\}\)). The results, averaged over \(10^{3}\) independent runs, are given in Table 6.
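Sampling from the defensive mixture above is straightforward; a minimal sketch (names ours):

```python
import numpy as np

def sample_defensive_mixture(q1_sample, q2_sample, alpha_t=0.5,
                             rng=np.random.default_rng()):
    """Draw from q(x) = alpha_t * q1(x) + (1 - alpha_t) * q2(x | S_t): with
    probability alpha_t sample the fixed 'defensive' component q1 (e.g., a
    wide Gaussian N(0, sigma_p^2)), otherwise the adaptive sticky component.
    Note that evaluating the mixture density requires evaluating both
    components."""
    if rng.uniform() < alpha_t:
        return q1_sample(rng)     # fixed wide component protects missed modes
    return q2_sample(rng)         # sticky component q2(x | S_t)
```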
(Ex-Sect-9.2). Table 6 reports the mean absolute error (MAE) in the estimation of Var[X]=49.55, for the different techniques (Standard MH, Adaptive MH, and Robust AISM) and different scale parameters σ p ∈{2,3,8,10} (\(T=10^{4}\))
9.3 Heavy-tailed target distribution
In this section, we test the AISM method for drawing from a target with heavy tails. We show that the sticky MCMC schemes can be applied in this scenario, even using a proposal pdf with exponential (i.e., "light") tails. However, we recall that an alternative construction of the tails is always possible, as suggested in Appendix E.2 (Heavy tails), using Pareto tails, for instance. More specifically, we consider the Lévy density, i.e.,
$$ {\bar \pi}(x)\propto \pi(x)=\frac{1}{(x-\lambda)^{3/2}}\exp\left(-\frac{\nu}{2(x-\lambda)}\right), $$
∀x≥λ. Given a random variable \(X \sim {\bar \pi }(x)\), we have that E[X]=∞ and Var[X]=∞ due to the heavy-tail of the Lévy distribution. However, the normalizing constant, \(\frac {1}{c_{\pi }}\), such that \({\bar \pi }(x) = \frac {1}{c_{\pi }} \pi (x)\) integrates to one, can be determined analytically, and is given by \(\frac {1}{c_{\pi }} = \sqrt {\frac {\nu }{2\pi }}\).
Our goal is estimating the normalizing constant \(\frac {1}{c_{\pi }}\) via Monte Carlo simulation, when λ=0 and ν=2. In general, it is difficult to estimate a normalizing constant using MCMC outputs [2, 58, 59]. However, in the sticky MCMC algorithms (with update rules as R1 and R3 in Table 2), the normalizing constant of the adaptive non-parametric proposal approaches the normalizing constant of the target. We compare AISM-P4-R3 and different Multiple-try Metropolis (MTM) schemes. For the MTM schemes, we use the following procedure: given the MTM outputs obtained in one run, we use these samples as nodes, then construct the approximated function using the construction P4 (considering these nodes), and finally compute the normalizing constant of this approximated function. Note that we use the same construction procedure P4, for a fair comparison.
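The estimator just described reduces to integrating the (unnormalized) final P4 approximation. A minimal sketch using a simple quadrature over a grid (for a piecewise linear function evaluated at its nodes, the trapezoidal rule is exact), assuming q T can be evaluated pointwise:

```python
import numpy as np

def estimate_norm_const(q_eval, x_grid):
    """Estimate c_pi = int pi(x) dx by integrating the final unnormalized
    sticky proposal q_T(x), which approaches pi(x) almost everywhere.
    The inverse of the returned value estimates 1/c_pi."""
    q_vals = np.array([q_eval(x) for x in x_grid])
    return np.trapz(q_vals, x_grid)           # trapezoidal quadrature
```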
For AISM, we start with only m0=3 support points, \(\mathcal {S}_{0}=\{s_{1}=0,s_{2},s_{3}\}\), where two nodes are randomly chosen at each run, i.e., \(s_{2},s_{3} \sim \mathcal {U}([1,10])\) with s2<s3. We also test three different MTM techniques, two of them using an independent proposal pdf (MTM-ind) and the last one a random walk proposal pdf (MTM-rw). For the MTM schemes, we set M=1000 tries and importance weights designed again to choose the best candidate in each step [37]. We set T=5000 for all the methods. Note that the total number of target evaluations E of AISM is only E=T=5000, whereas \(E=MT=5\cdot 10^{6}\) for the MTM-ind schemes and \(E=2MT=10^{7}\) for the MTM-rw algorithm (see [37] for further details). For the MTM-ind methods, we use an independent proposal \(\widetilde {q}(x)\propto \exp (-(x-\mu)^{2}/(2\sigma ^{2}))\) with μ∈{10,100} and σ2=2500. In MTM-rw, we have a random walk proposal \(\widetilde {q}(x|x_{t-1})\propto \exp \left (-(x-x_{t-1})^{2}\left /\left (2\sigma ^{2}\right)\right.\!\right)\) with σ2=2500. Note that we need to choose large values of σ2 due to the heavy-tailed nature of the target.
The results, averaged over 2000 runs, are summarized in Table 7. Note that the real value of \(\frac {1}{c_{\pi }}\) when ν=2 is \(\frac {1}{\sqrt {\pi }}=0.5642\). AISM-P4-R3 provides better results than all of the MTM approaches tested, with only a fraction of their computational cost. Furthermore, AISM-P4-R3 avoids the critical issue of parameter selection (selecting a small value of σ2 in this case can easily lead to very poor performance).
Table 7 reports the estimation of the normalizing constant \(\frac {1}{c_{\pi }}=0.5642\) for the Lévy distribution (T=5000). The compared schemes are AISM-P4-R3 (with E=T=5000 target evaluations), MTM-ind (\(E=MT=5\cdot 10^{6}\)), and MTM-rw (\(E=2MT=10^{7}\))
9.4 Sticky MCMC methods within Gibbs sampling
9.4.1 Example 1: comparing different MCMC-within-Gibbs schemes
In this example we show that, even in a simple bivariate scenario, AISM schemes can be useful within a Gibbs sampler. Let us consider the bimodal target density
$$ \widetilde{\pi}(x_{1},x_{2})\propto \exp\left(-\frac{(x_{1}^{2}-A+Bx_{2})^{2}}{4}-\frac{x_{1}^{2}}{2\sigma_{1}^{2}}-\frac{x_{2}^{2}}{2\sigma_{2}^{2}}\right), $$
with A=16, \(B=10^{-2}\), and \(\sigma _{1}^{2}=\sigma _{2}^{2}=\frac {10^{4}}{2}\). Densities with this non-linear analytic form have been used in the literature (cf. [10]) to compare the performance of different Monte Carlo algorithms. We apply N G steps of a Gibbs sampler to draw from \(\widetilde {\pi }(x_{1},x_{2})\), using ARMS [12], AISM-P4-R3, and AISMTM-P4-R3 within the Gibbs sampler to generate samples from the full-conditionals, starting always with the initial support set \(\mathcal {S}_{0}=\{-10, -6, -4.3, 0, 3.2, 3.8, 4.3, 7, 10\}\). From each full-conditional pdf, we draw T samples and take the last one as the output from the Gibbs sampler. We also apply a standard MH algorithm with a random walk proposal \(q\left (x_{\ell,t}|x_{\ell,t-1}\right) \propto \exp \left (-(x_{\ell,t}-x_{\ell,t-1})^{2}\left /\left (2\sigma _{p}^{2}\right)\right.\!\right)\) for ℓ∈{1,2}, σ p ∈{1,2,10}, 1≤t≤T. Furthermore, we test an adaptive parametric approach (as suggested in [8]). Specifically, we apply the adaptive MH method in [10] where the scale parameter of q(xℓ,t|xℓ,t−1) is adapted online, i.e., σp,t varies with t (we set σp,0=3). We also consider the application of the slice sampler [55] and the Hamiltonian Monte Carlo (HMC) method [60]. For the standard MH and the slice samplers we have used the functions mhsample.m and slicesample.m directly provided by MATLAB. For HMC, we consider the code provided in [61] with ε d =0.01 as discretization parameter and L=1 as length of the trajectory.10 We recall that a preliminary code of AISM is also available at the Matlab-FileExchange webpage.
(Ex-Sect-9.4.1). Table 8 reports the mean absolute error (MAE) in the estimation of four statistics (first component) and the normalized computing time. Columns include the average MAE and the MAE of each statistic (mean, variance, skewness, and kurtosis); rows include ARMS, AISM-P4, AISMTM-P4 (with M=5 and M=10), and MH (with σ p ∈{1,2,10}).
We consider two initializations for all the methods-within-Gibbs: (In1) \(x_{\ell,0}^{(k)}=1\) for all k; (In2) \(x_{\ell,0}^{(1)}=1\) and \(x_{\ell,0}^{(k)}=x_{\ell,T}^{(k-1)}\) for k=2,…,N G . We use all the samples to estimate four statistics that involve the first four moments of the target: mean, variance, skewness, and kurtosis. Table 8 provides the mean absolute error (MAE; averaged over 500 independent runs) for each of the four statistics estimated, and the time required by the Gibbs sampler (normalized by considering 1.0 to be the time required by ARMS with T=50).
All the techniques are used within a Gibbs sampler: N G is the number of iterations of the Gibbs sampler, whereas T is the number of iterations of the technique within Gibbs (so that T×N G is the global number of MCMC iterations). The best results (in each column and in each panel) are highlighted in italics
The results are provided in Table 8. First of all, we notice that AISM outperforms ARMS and the slice sampler for all values of T and N G , in terms of performance and computational time. Regarding the use of the MH algorithm within Gibbs, the results depend largely on the choice of the variance of the proposal, \(\sigma _{p}^{2}\), and the initialization, showing the need for adaptive MCMC strategies. For a fixed value of T×N G , the AISM schemes provide results close to the smallest averaged MAE for In1 and the best results for In2 with a slight increase in the computing time, w.r.t. the standard MH algorithm. Finally, Table 8 shows the advantage of the non-parametric adaptive independent sticky approach w.r.t. the parametric adaptive approach [8, 10].
9.4.2 Example 2: comparison with an ideal Gibbs sampler
The ideal scenario for the Gibbs sampling scheme is that we are able to draw samples from the full-conditional pdfs (using a transformation or a direct method). In this section, we compare the performance of MH and AISM-within-Gibbs schemes with the ideal case. Let us consider two Gaussian full-conditional densities,
$$\begin{array}{*{20}l} \widetilde{\pi}_{1}(x_{1}|x_{2}) & \propto \exp\left(-\frac{(x_{1}-0.5x_{2})^{2}}{2\xi_{1}^{2}}\right), \\ \widetilde{\pi}_{2}(x_{2}|x_{1}) & \propto \exp\left(-\frac{(x_{2}-0.5x_{1})^{2}}{2\xi_{2}^{2}}\right), \end{array} $$
with ξ1=1 and ξ2=0.2. The joint pdf is a bivariate Gaussian pdf with mean vector μ=[0,0]⊤ and covariance matrix Σ=[1.08 0.54; 0.54 0.31]. We apply a Gibbs sampler with N G iterations to estimate both the mean and the covariance of the joint pdf. Then, we calculate the average MSE in the estimation of all the elements in μ and Σ, averaged over 2000 independent runs. We use this simple case, where we can draw directly from the full-conditionals, to check the performance of MH and AISM-P3-R3 within Gibbs as a function of T and N G . For the MH scheme, we use a Gaussian random walk proposal, \(\widetilde {q}\left (x_{\ell,t}^{(k)}\left |x_{\ell,t-1}^{(k)}\right.\right) \propto \exp \left (\left.-\left (x_{\ell,t}^{(k)}-0.5x_{\ell,t-1}^{(k)}\right)^{2}\right /\left (2\sigma _{p}^{2}\right)\right)\) for ℓ∈{1,2}, 1≤t≤T and 1≤k≤N G . For AISM-P3-R3, we start with \(\mathcal {S}_{0}=\{-2,0,2\}\).
We set \(N_{G}=10^{3}\) and \(x_{\ell,0}^{(i)}=1\) (both for MH and AISM-P3-R3), and increase the value of T. The results can be seen in Fig. 9. AISM-within-Gibbs easily reaches the same performance as the ideal case (sampling directly from the full conditionals) even for small values of T, whereas the MH-within-Gibbs needs a substantially larger value of T (up to T=500 for σ p =0.1) to attain a similar performance. Note the importance of using a proper parameter σ p to attain good performance, which again shows the importance of employing an adaptive technique within Gibbs.
MSE as function of the number of iterations in the internal chain (T), with N G =1000. The constant dashed line is the MSE (≈0.0012) obtained by drawing directly from the full-conditionals (ideal Gibbs scenario)
9.5 Sticky MCMC methods within Recycling Gibbs sampling
In this section, we test the sticky MCMC methods within the Recycling Gibbs (RG) sampling scheme, where the intermediate samples drawn from each full-conditional pdf are used in the final estimator [51]. We consider a simple numerical simulation (easily reproducible by any practitioner) involving a bi-dimensional target pdf
$${\widetilde \pi}(x_{1},x_{2})\propto \exp\left(-\frac{\left(x_{1}^{2}-\mu_{1}\right)^{2}}{2\delta_{1}^{2}}-\frac{\left(x_{2}-\mu_{2}\right)^{2}}{2\delta_{2}^{2}}\right), $$
where μ1=4, μ2=1, \(\delta _{1}=\sqrt {\frac {5}{2}}\) and δ2=1. Note that \({\widetilde \pi }(x_{1},x_{2})\) is bimodal and is not Gaussian. The goal is to approximate via Monte Carlo the expected value, \(\mathbb {E}[\mathbf {X}]\) where \(\mathbf {X}=\left [X_{1},X_{2}\right ] \sim {\widetilde \pi }(x_{1},x_{2})\).
We test different Gibbs techniques: the MH [2] and AISM-P3-R3 algorithms (with update rule 3 and the proposal construction in Eq. (3)), within the Standard Gibbs (SG) and within the RG sampling schemes. For the MH method, we use a Gaussian random walk proposal,
$$q\left(\left.x_{\ell,t}^{(k)}\right|x_{\ell,t-1}^{(k)}\right) \propto \exp\left(-\frac{\left(x_{\ell,t}^{(k)}-x_{\ell,t-1}^{(k)}\right)^{2}}{2\sigma^{2}}\right), $$
for σ>0, ℓ∈{1,2}, 1≤k≤N G and 1≤t≤T. We set \(x_{\ell,0}^{(1)}=1\) and \(x_{\ell,0}^{(k)}=x_{\ell,T}^{(k-1)}\) for k=2,…,N G , for all schemes.
9.5.1 Optimal scale parameter for MH
First of all, we obtain the MSE in the estimation of E[X] for different values of the σ parameter for MH-within-SG (with T=1 and N G =1000). Figure 10a shows the results averaged over \(10^{5}\) independent runs. The performance of the Standard Gibbs (SG) sampler depends strongly on the choice of σ of the internal MH method. We can observe that there exists an optimal value σ∗≈3. This shows the need for an adaptive scheme for drawing from the full-conditional pdfs. In the following, we compare the performance of AISM with the performance of this optimized MH using the optimal scale parameter σ∗=3, in order to show the capability of the non-parametric adaptation employed in AISM, with respect to a standard adaptation procedure [10].
(Ex-Sect-9.5). a MSE (log-scale) as function of σ for MH-within-SG (T=1 and N G =1000). b MSE (log-scale) as function of T for different MCMC-within-Gibbs schemes (we keep fixed N G =1000). Note the MH is using the optimal scale value σ∗=3 for the (Gaussian) parametric proposal density
9.5.2 Comparison among different schemes
For AISM-P3-R3, we start with the set of support points \(\mathcal {S}_{0}=\{ -10,-6,-2,2,6,10\}\). We have averaged the MSE values over \(10^{5}\) independent runs for each Gibbs scheme.
In Fig. 10b (represented in log-scale), we fix N G =1000 and vary T. As T grows, when a standard Gibbs (SG) sampler is used, the curves show a horizontal asymptote, since the internal chains converge after some value T≥T∗. Considering an RG scheme, the increase of T yields lower MSE, since now we recycle the internal samples. Figure 10b shows the advantage of using AISM-R3-P3 even when compared with the optimized MH method. The advantage of AISM-R3-P3 is clearer with small T values (10<T<30; recall that in this experiment N G =1000 is kept fixed). The performance of AISM-R3-P3 and the optimized MH (within Gibbs) becomes more similar as T increases. This is due to the fact that, in this case, with a high enough value of T, the MH chain is able to overcome its burn-in period and eventually converges.
9.6 Tuning of the hyper-parameters of a Gaussian process (GP)
9.6.1 Exponential Power kernel function
Let us assume we observe the data pairs \(\{y_{j},\mathbf {z}_{j}\}_{j=1}^{P}\), with \(y_{j} \in \mathbb {R}\) and \(\mathbf {z}_{j} \in \mathbb {R}^{d_{Z}}\), and denote the corresponding vectors y=[y1,…,y P ] and Z=[z1,…,z P ]. We address the regression problem of inferring the hidden function y=f(z), linking the variables y and z. For this goal, we assume the model
$$ y=f(\mathbf{z})+e, $$
where e∼N(e;0,σ2). For simplicity, we set d Z =1. We consider f to be a Gaussian process (GP) [56], i.e., we assume a GP prior over f, so f∼GP(μ(z),κ(z,r)), where μ(z)=0 and the kernel function is
$$ \kappa(z,r)=\exp\left(-\frac{|z-r|^{\beta}}{2\delta^{2}}\right), \qquad \beta,\delta> 0. $$
Therefore, the vector f=[f(z1),…,f(z P )] is distributed as \(p(\mathbf {f}|\mathbf {Z},\kappa,\beta,\delta)=\mathcal {N}(\mathbf {f};\mathbf {0},\mathbf {K})\), where 0 is a 1×P vector and K is the P×P matrix with entries \([\mathbf{K}]_{i,j}:=\kappa(z_{i},z_{j})\) for all i,j=1,…,P, and we have expressed explicitly the dependence on the choice of the kernel family κ in Eq. (22). Moreover, we denote the hyper-parameters of the model as θ=[θ1=σ,θ2=β,θ3=δ], i.e., the standard deviation of the observation noise and the two parameters of the kernel κ(z,r). We assume a prior with independent truncated positive Gaussian components for the hyper-parameters, \(p(\boldsymbol {\theta })=p(\sigma,\beta,\delta)=\mathcal {N}(\sigma ;0,5) \mathcal {N}(\beta ;0,5) \mathcal {N}(\delta ;0,5) \mathbb {I}_{\sigma }\mathbb {I}_{\beta }\mathbb {I}_{\delta }\), where \(\mathbb {I}_{v}=1\) if v>0, and \(\mathbb {I}_{v}=0\) if v≤0. To simplify the expression of the posterior pdf, let us focus on the filtering problem and the tuning of the parameters, i.e., we wish to infer f and θ. Hence, the posterior pdf is given by
$$ p(\mathbf{f},{\boldsymbol{\theta}}|\mathbf{y}, \mathbf{Z}, \kappa)=\frac{p(\mathbf{y}|\mathbf{f},\mathbf{Z},{\boldsymbol{\theta}}, \kappa)p(\mathbf{f}|\mathbf{Z},{\boldsymbol{\theta}},\kappa) p({\boldsymbol{\theta}})}{p(\mathbf{y}|\mathbf{Z},\kappa)}, $$
with \(p(\mathbf {y}|\mathbf {f},\mathbf {Z},{\boldsymbol {\theta }}, \kappa)=\mathcal {N}\left (\mathbf {y};\mathbf {f},\sigma ^{2} \mathbf {I}\right)\) and \(p(\mathbf {f}|\mathbf {y}, \mathbf {Z},{\boldsymbol {\theta }}, \kappa) =\mathcal {N}(\mathbf {f};{\boldsymbol {\mu }}_{p}, {\boldsymbol {\Sigma }}_{p})\), with mean μ p =K(K+σ2I)−1y⊤ and covariance matrix Σ p =K−K(K+σ2I)−1K⊤, representing the solution of the GP given the specific choice of the hyper-parameters θ. The marginal posterior of the hyper-parameters [56] is
$$ p({\boldsymbol{\theta}}|\mathbf{y}, \mathbf{Z}, \kappa)=\int p(\mathbf{f},{\boldsymbol{\theta}}|\mathbf{y}, \mathbf{Z}, \kappa) d\mathbf{f}=\frac{p(\mathbf{y}|\mathbf{Z},{\boldsymbol{\theta}}, \kappa)p({\boldsymbol{\theta}})}{p(\mathbf{y}|\mathbf{Z},\kappa)}. $$
where the marginal likelihood is
$$ p(\mathbf{y}|\mathbf{Z},{\boldsymbol{\theta}}, \kappa)=\int p(\mathbf{y}|\mathbf{Z},\mathbf{f},{\boldsymbol{\theta}}, \kappa)p(\mathbf{f}|\mathbf{Z},{\boldsymbol{\theta}},\kappa) d\mathbf{f}. $$
Hence, the log-marginal posterior is
$$ \log \left[p({\boldsymbol{\theta}}|\mathbf{y}, \mathbf{Z}, \kappa)\right] \propto -\frac{1}{2} \mathbf{y} \left(\mathbf{K}+\sigma^{2} \mathbf{I}\right)^{-1} \mathbf{y}^{\top} -\frac{1}{2} \log\left[\det\left(\mathbf{K}+\sigma^{2} \mathbf{I}\right)\right] -\frac{1}{10} \sum_{i=1}^{3} \theta_{i}^{2}, $$
for θ1,θ2,θ3>0, where clearly K depends on θ1=σ, θ2=β, and θ3=δ.11 We apply a Gibbs sampler to draw from p(θ|y,Z,κ). We fix Z=[−10:0.1:10] (i.e., a grid between −10 and 10 with step 0.1); hence, P=201, and the data y are artificially generated according to the model (21) considering the values θ∗=[σ∗=1,β∗=0.5,δ∗=3]. We average the results using \(10^{3}\) independent runs. At each run, we generate new data y according to the model with θ∗ and run the Gibbs sampler in order to approximate p(θ|y,Z,κ), considering N G =2000 samples (without removing any burn-in period). We approximate the expected value of the posterior, \( \widehat {{\boldsymbol {\theta }}}\approx E_{p}[{\boldsymbol {\theta }}]\), using these N G samples and compare it with θ∗ (with a large enough number of data, the latter can be considered the ground truth). For drawing from the full-conditional pdfs, we set T=10, employ a standard MH with a Gaussian random walk proposal \(q(x_{\ell,t}|x_{\ell,t-1}) \propto \exp \left (-(x_{\ell,t}-x_{\ell,t-1})^{2}/\left (2\sigma _{p}^{2}\right)\right)\) for ℓ∈{1,2,3}, and test different values of σ p ∈{1,2,3}. Moreover, we apply AISM-P4-R3 with T=10 and the initial support points \(\mathcal {S}_{0}=\{0.01, 0.2, 0.5,1,2,4,7,10\}\). We also test the IA2RMS method [13], which is a special case of the AISM technique (see Section 6.1). For IA2RMS, we use the construction procedure P4 as in AISM (both methods employ the update rule R3). The initialization for all techniques is \(x_{\ell,0}^{(1)}=1\) and \(x_{\ell,0}^{(k)}=x_{\ell,T}^{(k-1)}\) for ℓ=1,2,3 and k=2,…,N G . The mean square error (MSE) in the estimation of θ∗, averaged over \(10^{3}\) runs, is shown in Table 9. AISM outperforms the MH methods. IA2RMS provides better results than AISM since it uses a better equivalent proposal, p t (x)∝ min{q t (x),π(x)}. However, IA2RMS is slower than AISM due to its rejection step (necessary in order to produce samples from this equivalent proposal). Finally, Table 10 shows the MSE in the estimation of the hyper-parameters θ∗ employing a Riemann quadrature, i.e., using a grid approximation of \([0,A]^{3}\) with A=100 and step ε g ∈{0.1,0.2,0.5,1,2} (note this method excludes the possibility that the hyper-parameters are greater than A). The computing times are normalized w.r.t. the time spent by MH in Tables 9 and 10.
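For concreteness, a sketch of the log-marginal posterior above as it would be evaluated inside each full-conditional; this is a direct transcription of the formula (variable names ours), with the truncated Gaussian priors contributing the −(1/10)Σθ i 2 term.

```python
import numpy as np

def log_marginal_posterior(theta, y, z):
    """Log-posterior of theta = [sigma, beta, delta] (up to an additive
    constant), for the exponential power kernel
    kappa(z, r) = exp(-|z - r|^beta / (2 delta^2))."""
    sigma, beta, delta = theta
    if sigma <= 0 or beta <= 0 or delta <= 0:
        return -np.inf                                 # prior support: theta_i > 0
    K = np.exp(-np.abs(z[:, None] - z[None, :]) ** beta / (2 * delta**2))
    A = K + sigma**2 * np.eye(len(z))
    sign, logdet = np.linalg.slogdet(A)                # A is positive definite
    return (-0.5 * y @ np.linalg.solve(A, y)           # -1/2 y (K + s^2 I)^{-1} y^T
            - 0.5 * logdet                             # -1/2 log det(K + s^2 I)
            - 0.1 * np.sum(np.asarray(theta) ** 2))    # prior: -(1/10) sum theta_i^2
```

Each full-conditional used within the Gibbs sampler then fixes two components of θ and treats this function, as a function of the remaining component, as the univariate log-target.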
(Ex-Sect-9.6.1). Table 9 reports the MSE in the estimation of the hyper-parameters θ∗ with N G =2000, for MH (with σ p ∈{1,2,3}), AISM-P4-R3, and IA2RMS-P4.
Note that IA2RMS is a special case of AISM which employs the equivalent proposal p t (x)∝ min{q t (x),π(x)}, and the rule R3 (see Section 6.1). In IA2RMS, we have used the construction procedure P4 in order to build q t (x). The computing times are normalized w.r.t. the time spent by MH
(Ex-Sect-9.6.1). Table 10 reports the MSE in the estimation of the hyper-parameters θ∗ employing a Riemann quadrature, i.e., using a grid approximation of \([0,100]^{3}\) with step ε g ∈{0.1,0.2,0.5,1,2}.
The computing times are normalized w.r.t. the time spent by MH in Table 9
9.6.2 Automatic Relevant Determination kernel function
Here we consider the estimation of the hyper-parameters of the Automatic Relevance Determination (ARD) covariance ([62], Chapter 6). Let us assume again the P observed data pairs \(\{y_{j},\mathbf {z}_{j}\}_{j=1}^{P}\), with \(y_{j}\in \mathbb {R}\) and
$$\mathbf{z}_{j}=\left[z_{j,1},z_{j,2},\ldots,z_{j,d_{Z}}\right]^{\top}\in \mathbb{R}^{d_{Z}}, $$
where d Z is the dimension of the input features. We also denote the corresponding P×1 output vector as y=[y1,…,y P ]⊤ and the d Z ×P input matrix Z=[z1,…,z P ]. We again address the regression problem of inferring the unknown function f which links the variables y and z. Thus, the assumed model is y=f(z)+e, where e∼N(e;0,σ2), and f(z) is a realization of a Gaussian process (GP) [56]. Hence, \(f(\mathbf {z}) \sim \mathcal {GP}(\mu (\mathbf {z}),\kappa (\mathbf {z},\mathbf {r}))\), where μ(z)=0, \(\mathbf {z},\mathbf {r} \in \mathbb {R}^{d_{Z}}\), and we consider the ARD kernel function
$$ \kappa(\mathbf{z},\mathbf{r})=\exp\left(-\sum\limits_{\ell=1}^{d_{Z}}\frac{(z_{\ell}-r_{\ell})^{2}}{2\delta_{\ell}^{2}}\right), \quad \text{with} \ \delta_{\ell}> 0, $$
for ℓ=1,…,d Z . Note that we have a different hyper-parameter δ ℓ for each input component z ℓ ; hence, we also define \({\boldsymbol {\delta }}=\delta _{1:d_{Z}}=[\delta _{1},\ldots,\delta _{d_{Z}}]\). Unlike in the previous section, note that here β is assumed known (β=2). This type of kernel function is often employed to perform an automatic relevance determination (ARD) of the input components with respect to the output variable ([62], Chapter 6). Namely, using ARD allows us to infer the relative importance of the different components of the inputs: a small value of δ ℓ means that a variation of the ℓ-th component z ℓ has a larger impact on the output, while a high value of δ ℓ indicates virtual independence between the ℓ-th component and the output. Therefore, the complete vector containing all the hyper-parameters of the model is
$$\begin{array}{@{}rcl@{}} {\boldsymbol{\theta}}&=&\left[\theta_{1:d_{Z}}=\delta_{1:d_{Z}},\theta_{d_{Z}+1}=\sigma\right], \\ {\boldsymbol{\theta}}&=&\left[{\boldsymbol{\delta}}, \sigma\right] \in \mathbb{R}^{d_{Z}+1}, \end{array} $$
i.e., all the parameters of the kernel function in Eq. (22) and standard deviation σ of the observation noise. We assume \(p({\boldsymbol {\theta }})=\prod _{\ell =1}^{d_{Z}+1}\frac {1}{\theta _{\ell }^{\alpha }}\mathbb {I}_{\theta _{\ell }}\) where α=1.3, \(\mathbb {I}_{v}=1\) if v>0, and \(\mathbb {I}_{v}=0\) if v≤0. We desire to compute the expected value \({\mathbb E}[{\boldsymbol {\Theta }}]\) with Θ∼p(θ|y,Z,κ), via Monte Carlo quadrature.
More specifically, we apply AISM-P4-R3 within-Gibbs (with \(\mathcal {S}_{0}=\{0.01,0.5,1,2,5,8,10,15\}\)) and the Single Component Adaptive Metropolis (SCAM) algorithm [63] within-Gibbs to draw from π(θ)∝p(θ|y,Z,κ). Note that the dimension of the problem is D=d Z +1, since \({\boldsymbol {\theta }}\in \mathbb {R}^{D}\). For SCAM, we use the Gaussian random walk proposal \(q(x_{\ell,t}|x_{\ell,t-1}) \propto \exp \left (-(x_{\ell,t}-x_{\ell,t-1})^{2}/\left (2\gamma _{\ell,t}^{2}\right)\right)\). In SCAM, the scale parameters γℓ,t are adapted (one for each component) considering all the previous corresponding samples (starting with γℓ,0=1).
We generated the P=500 pairs of data, \(\{y_{j},\mathbf {z}_{j}\}_{j=1}^{P}\), drawing \(\mathbf {z}_{j}\sim \mathcal {U}\left ([0,10]^{d_{Z}}\right)\) and y j according to the model in Eq. (21). We considered d Z ∈{1,3,5,7,9}, so that D∈{2,4,6,8,10}, and set \(\sigma ^{*}=\frac {1}{2}\) and \(\delta _{\ell }^{*}=2\), ∀ℓ, in all the experiments (recall that θ∗=[δ∗,σ∗]). We consider θ∗ as the ground truth and compute the MSE obtained by the different Monte Carlo techniques.
We have averaged the results over \(10^{3}\) independent runs. We consider N G =1000 and T=20 for both schemes, AISM-within-Gibbs and SCAM-within-Gibbs. The results are provided in Table 11. We can see that AISM-P4-R3 provides the best performance, and that the difference increases with the dimension D=d Z +1 of the problem.
Table 11 (Ex-Sect-9.6.2). MSE for different techniques and different dimensions D = d_Z + 1 of the inference problem (number of hyper-parameters), with T = 20 and N_G = 1000 for both schemes. Rows: SCAM within-Gibbs; AISM-P4-R3 within-Gibbs. Columns: D = 2 through D = 10. [Numerical entries not recoverable from the source.]
10 Conclusions
In this work, we have introduced a new class of adaptive MCMC algorithms for general-purpose stochastic simulation. We have discussed the general features of the novel family, describing the different parts that form a generic sticky adaptive MCMC algorithm. The proposal density used in the new class is adapted on-line and constructed by employing non-parametric procedures. The name "sticky" reflects the fact that the proposal pdf becomes progressively more and more similar to the target; that is, a complete adaptation of the shape of the proposal is obtained (unlike with parametric proposals). The role of the update control test for the inclusion of new support points has been investigated. The design of this test is extremely important, since it controls the trade-off between the computational cost and the efficiency of the resulting algorithm. Moreover, we have discussed how the combined design of a suitable proposal construction and a proper update test ensures the ergodicity of the generated chain.
Two specific sticky schemes, AISM and AISMTM, have been proposed and tested exhaustively in different numerical simulations. The numerical results show the efficiency of the proposed algorithms with respect to other state-of-the-art adaptive MCMC methods. Furthermore, we have shown that other well-known algorithms already introduced in the literature are encompassed by the novel class of methods proposed here. A detailed description of the related works in the literature and their range of applicability is also provided, which is particularly useful for interested practitioners and researchers. The novel methods can be applied either as stand-alone algorithms or within any Monte Carlo approach that requires sampling from univariate densities (e.g., the Gibbs sampler, the hit-and-run algorithm or adaptive direction sampling). A promising future line is designing suitable constructions of the proposal density to allow direct sampling from multivariate target distributions (similarly to [21, 30, 31, 39, 40]). However, we remark that the structure of the novel class of methods is valid regardless of the dimension of the target.
11 Appendix A: Proof of Theorem 1
Note that Eq. (9) in Theorem 1 is a direct consequence of Theorem 2 in [14], which requires \(x_{t} \sim q(x|\mathcal {S}_{t})\) to be independent of the current state x_{t−1} and the satisfaction of the strong Doeblin condition. Regarding the first issue, x_t is independent of x_{t−1} by construction of the algorithm, so we only need to focus on the second one. The strong Doeblin condition is satisfied if, given a proposal pdf \(\widetilde {q}_{t}(x|\mathcal {S}_{t}) = \frac {1}{c_{t}} q_{t}(x|\mathcal {S}_{t})\) and a target \(\widetilde {\pi }(x) = \frac {1}{c_{\pi }} \pi (x)\) with support \(\mathcal {X} \subseteq \mathbb {R}\), there exists some a_t ∈ (0,1] such that, for all \(x \in \mathcal {X}\) and \(t \in \mathbb {N}\),
$$ \frac{1}{a_{t}}\widetilde{q}_{t}(x|\mathcal{S}_{t})\geq \widetilde{\pi}(x). $$
First of all, note that Eq. (28) can be rewritten as
$$ a_{t} \le \frac{c_{\pi}}{c_{t}} \frac{q_{t}(x|\mathcal{S}_{t})}{\pi(x)} \qquad \forall x \in \mathcal{X} \quad \text{and} \quad \forall t \in \mathbb{N}. $$
Then, note also that
$$\begin{array}{*{20}l} \frac{c_{\pi}}{c_{t}} \frac{q_{t}(x|\mathcal{S}_{t})}{\pi(x)} & \ge \frac{c_{\pi}}{c_{t}} \min_{x \in \mathcal{X}}\left\{\frac{q_{t}(x|\mathcal{S}_{t})}{\pi(x)}\right\} \\ & \ge \min\left\{1,\frac{c_{\pi}}{c_{t}} \min_{x \in \mathcal{X}}\left\{\frac{q_{t}(x|\mathcal{S}_{t})}{\pi(x)}\right\}\right\}, \end{array} $$
where the last inequality is due to the fact that min{1,x}≤x. Therefore, a possible value of a t that allows us to satisfy Eq. (29) is
$$ a_{t} = \min\left\{1,\frac{c_{\pi}}{c_{t}} \min_{x \in \mathcal{X}}\left\{\frac{q_{t}(x|\mathcal{S}_{t})}{\pi(x)}\right\}\right\}. $$
From Eq. (30) it is clear that a_t ≤ 1, so all that remains to be shown is that a_t > 0. Let us recall that \(\mathcal {I}_{t} = (s_{1},s_{m_{t}}]\), where s_1 and \(s_{m_{t}}\) are the smallest and largest support points in \(\mathcal {S}_{t} = \{s_{1}, \ldots, s_{m_{t}}\}\), respectively. Then, since \(q_{t}(x|\mathcal {S}_{t}) > 0\) for all \(x \in \mathcal {X}\) (condition 1 in Definition 1) and \(t \in \mathbb {N}\), and π(x) is assumed to be bounded, we have
$$ \min\left\{1,\frac{c_{\pi}}{c_{t}} \min_{x \in \mathcal{I}_{t}}\left\{\frac{q_{t}(x|\mathcal{S}_{t})}{\pi(x)}\right\}\right\} > 0. $$
And regarding the tails, note that \(q_{t}(x|\mathcal {S}_{t})\) must be uniformly heavier tailed by construction (condition 4 in Definition 1),12 so \(q_{t}(x|\mathcal {S}_{t}) \ge \pi (x)\) for all \(x \in \mathcal {I}_{t}^{c} = (-\infty,s_{1}] \cup (s_{m_{t}},\infty)\) and we also have
$$ \min\left\{1,\frac{c_{\pi}}{c_{t}} \min_{x \in \mathcal{I}_{t}^{c}}\left\{\frac{q_{t}(x|\mathcal{S}_{t})}{\pi(x)}\right\}\right\} > 0. $$
Therefore, we conclude that 0 < a_t ≤ 1, so the strong Doeblin condition is satisfied and all the conditions of Theorem 2 in [14] are fulfilled.
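The Doeblin constant of Eq. (30) can also be checked numerically on a grid. The snippet below is a self-contained toy example (a Gaussian target with a heavier-tailed Laplace surrogate standing in for the sticky proposal; both choices are ours, purely for illustration):

```python
import numpy as np

# Grid evaluation of a_t = min{1, (c_pi/c_t) * min_x q_t(x)/pi(x)}, Eq. (30).
x = np.linspace(-20.0, 20.0, 200001)
pi_u = np.exp(-0.5 * x ** 2)     # unnormalized target (Gaussian)
q_u = np.exp(-np.abs(x))         # unnormalized proposal (Laplace, heavier tails)
c_pi = np.trapz(pi_u, x)         # normalizing constant of the target
c_t = np.trapz(q_u, x)           # normalizing constant of the proposal
a_t = min(1.0, (c_pi / c_t) * np.min(q_u / pi_u))
print(a_t)   # strictly positive and <= 1, as the strong Doeblin condition requires
```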
12 Appendix B: Argumentation for Conjecture 1
Let us define \(\mathcal {I}_{t} = (s_{1}, s_{m_{t}}]\) and \(\mathcal {I}_{t}^{c} = (-\infty, s_{1}] \cup (s_{m_{t}}, \infty)\), where s_1 and \(s_{m_{t}}\) are the smallest and largest points of the set of support points at time step t, \(\mathcal {S}_{t}=\{s_{1},\ldots,s_{m_{t}}\}\) with \(s_{1}<\ldots <s_{m_{t}}\). Then, the L1 distance between the target and the proposal can be expressed as \(D_{1}(\pi,q_{t}) = D_{\mathcal {I}_{t}}(\pi,q_{t}) + D_{\mathcal {I}_{t}^{c}}(\pi,q_{t})\), where \(D_{\mathcal {I}_{t}}(\pi,q_{t}) = \int _{\mathcal {I}_{t}}{d_{t}(x)\ dx}\) and \(D_{\mathcal {I}_{t}^{c}}(\pi,q_{t}) = \int _{\mathcal {I}_{t}^{c}}{d_{t}(x)\ dx}\), with d_t(x) = |π(x) − q_t(x)|. Let us focus first on \(D_{\mathcal {I}_{t}}(\pi,q_{t})\). Since q_t(x) is constructed as a piecewise polynomial approximation on the intervals \(\mathcal {I}_{t,i} = (s_{i},s_{i+1}]\),
$$ D_{\mathcal{I}_{t}}(\pi,q_{t}) = \sum_{i=1}^{m_{t}-1} D_{\mathcal{I}_{t,i}}(\pi,q_{t}), $$
where
$$ D_{\mathcal{I}_{t,i}}(\pi,q_{t}) = \int_{\mathcal{I}_{t,i}} d_{t}(x)\ dx $$
is the L1 distance between the target and the proposal in the i-th interval. Now, using Theorem 3.1.1 in [65], we can easily bound d_t(x) for the ℓ-th order interpolation polynomial (with ℓ ∈ {0,1} in this case) used within the i-th interval. For ℓ = 0, assuming without loss of generality that π(s_i) ≥ π(s_{i+1}) (and thus \(q_{t}(x)=\pi (s_{i})\ \forall x \in \mathcal {I}_{t,i}\)),13
$$\begin{array}{*{20}l} d_{t}(x) & = |\pi(x) - q_{t}(x)| \\ & = |x-s_{i}| |\dot{\pi}(\xi)| \\ & \le (s_{i+1}-s_{i}) \max_{x \in \mathcal{I}_{t,i}} |\dot{\pi}(x)| < \infty, \end{array} $$
where \(\dot {\pi }(\xi)\) denotes the first derivative of π(x) evaluated at x = ξ, with ξ ∈ (s_i, s_{i+1}] some point inside the interval whose value depends on x, s_i and π(x); this bound is finite since we assume that the first derivative of π(x) is bounded. Therefore, for the PWC approximation we have
$$\begin{array}{*{20}l} D_{\mathcal{I}_{t}}(\pi,q_{t}) & \le \sum_{i=1}^{m_{t}-1}{(s_{i+1}-s_{i})^{2} \max_{x \in \mathcal{I}_{t,i}} |\dot{\pi}(x)|} \\ & \le \max_{x \in \mathcal{I}_{t}}|\dot{\pi}(x)| \cdot \sum_{i=1}^{m_{t}-1}{(s_{i+1}-s_{i})^{2}} < \infty. \end{array} $$
Similarly, for ℓ=1 we have
$$\begin{array}{*{20}l} d_{t}(x) & = |\pi(x) - q_{t}(x)| \\ & = \frac{|x-s_{i}| |x-s_{i+1}|}{2} |\ddot{\pi}(\xi)| \\ & \le \frac{(s_{i+1}-s_{i})^{2}}{2} \max_{x \in \mathcal{I}_{t,i}} |\ddot{\pi}(x)| < \infty, \end{array} $$
where \(\ddot {\pi }(\xi)\) denotes the second derivative of π(x) evaluated at x=ξ, ξ∈(s i ,si+1] is some point inside the interval, and this bound is again finite since we assume that the second derivative of π(x) is also bounded. And the L1 distance for the PWL approximation can thus be bounded as
$$\begin{array}{*{20}l} D_{\mathcal{I}_{t}}(\pi,q_{t}) & \le \sum_{i=1}^{m_{t}-1}{\frac{(s_{i+1}-s_{i})^{3}}{2} \max_{x \in \mathcal{I}_{t,i}} |\ddot{\pi}(x)|} \\ & \le \frac{1}{2} \max_{x \in \mathcal{I}_{t}}|\ddot{\pi}(x)| \cdot \sum_{i=1}^{m_{t}-1}{(s_{i+1}-s_{i})^{3}} < \infty. \end{array} $$
Note that the two cases can be summarized in a single expression:
$$ D_{\mathcal{I}_{t}}(\pi,q_{t}) \le L_{t}^{(\ell)}, $$
where
$$ L_{t}^{(\ell)} = C_{t}^{(\ell)} \cdot \sum\limits_{i=1}^{m_{t}-1}{(s_{i+1}-s_{i})^{\ell+2}}, $$
with \(C_{t}^{(0)} = \max _{x \in \mathcal {I}_{t}}|\dot {\pi }(x)|\) and \(C_{t}^{(1)} = \frac {1}{2} \max _{x \in \mathcal {I}_{t}}|\ddot {\pi }(x)|\).
Now, let us assume that a new point, \(s' \in \mathcal {I}_{t,k} = \left [s_{k},s_{k+1}\right ]\) for 1 ≤ k ≤ m_t − 1, is added at some iteration t′ > t using the mechanism described in the AISM algorithm (see Table 1), and that no other points have been incorporated into the support set for t+1,…,t′−1. In this case, the construction of the proposal function changes only inside the interval \(\mathcal {I}_{t,k}\), which now splits into \(\mathcal {I}_{t',k}=[s_{k},s']\) and \(\mathcal {I}_{t',k+1}=[s',s_{k+1}]\). Then, the new bound for the distance inside \(\mathcal {I}_{t'} = \mathcal {I}_{t}\) is \(D_{\mathcal {I}_{t'}}(\pi,q_{t'}) \le L_{t'}^{(\ell)}\), with
$$\begin{array}{*{20}l} L_{t'}^{(\ell)} = & \,L_{t}^{(\ell)} + C_{t}^{(\ell)} \left[\left(s'-s_{k}\right)^{\ell+2} + \left(s_{k+1}-s'\right)^{\ell+2}\right. \\ &\left. - (s_{k+1}-s_{k})^{\ell+2}\right] < L_{t}^{(\ell)}, \end{array} $$
where the last inequality follows from the binomial theorem, which implies that A^n + B^n < (A+B)^n for any A, B > 0 and n > 1, applied here with n = ℓ+2, A = s′ − s_k > 0 and B = s_{k+1} − s′ > 0. Hence, the bound in Eq. (36) can never increase when a new support point is incorporated and indeed tends to decrease as new points are added to the support set.
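This monotone decrease can be checked numerically. The snippet below (a self-contained illustrative sketch with arbitrary support points, not code from the paper) evaluates the sum \(\sum_i (s_{i+1}-s_{i})^{\ell+2}\) appearing in \(L_t^{(\ell)}\) before and after inserting a point:

```python
import numpy as np

def interval_sum(s, ell):
    """Sum_i (s_{i+1} - s_i)^(ell+2) appearing in L_t^{(ell)},
    up to the constant C_t^{(ell)}."""
    s = np.sort(np.asarray(s, dtype=float))
    return np.sum(np.diff(s) ** (ell + 2))

# Adding a support point s' inside (s_k, s_{k+1}) can only decrease the sum,
# by A^n + B^n < (A + B)^n for n > 1 (support points chosen arbitrarily here).
s_t = [0.0, 1.0, 3.0, 6.0]
s_new = sorted(s_t + [2.0])
for ell in (0, 1):                      # PWC and PWL constructions
    print(ell, interval_sum(s_t, ell), interval_sum(s_new, ell))
# ell = 0: 14.0 -> 12.0;  ell = 1: 36.0 -> 30.0
```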
Note that we could still have \(L_{t}^{(\ell)} \to K > 0\) as t → ∞. However, the conditions of Definition 1 ensure that the support of the proposal always contains the support of the target (i.e., \(q_{t}(x|\mathcal {S}_{t})>0\) whenever π(x) > 0 for any t and \(\mathcal {S}_{t}\)) and that it has uniformly heavier tails (implying that \(q_{t}(x|\mathcal {S}_{t}) \to 0\) slower than π(x) as x → ±∞). Consequently, support points can be added anywhere inside the support of the target, \(\mathcal {X} \subseteq \mathbb {R}\). This implies that \(L_{t}^{(\ell)} \to 0\) as t → ∞, since (s_{i+1} − s_i) → 0 as more points are added inside \(\mathcal {I}_{t}\), and thus also \(D_{\mathcal {I}_{t}}(\pi,q_{t}) \to 0\) as t → ∞. Let us focus now on \(D_{\mathcal {I}_{t}^{c}}(\pi,q_{t})\). Assume, without loss of generality, that a new point s′ ∈ (−∞, s_1]14 is added at some iteration t′ > t using the mechanism described in the AISM algorithm (see Table 1), and that no other points have been incorporated into the support set for t+1,…,t′−1. In this case, it is clear that the distance in the tails decreases (i.e., \(D_{\mathcal {I}_{t'}^{c}}(\pi,q_{t'}) < D_{\mathcal {I}_{t}^{c}}(\pi,q_{t})\)) at the expense of increasing the distance in the central part of the target (i.e., \(D_{\mathcal {I}_{t'}}(\pi,q_{t'}) > D_{\mathcal {I}_{t}}(\pi,q_{t})\)). However, even if this leads to a momentary increase in the overall distance, we still have \(D_{\mathcal {I}_{t'}}(\pi,q_{t'}) \to 0\) as t′ → ∞ as long as new support points can be added inside \(\mathcal {I}_{t'}\), something which is guaranteed by the AISM algorithm. Finally, there is always a non-null probability of incorporating points in the tails,15 which implies that \(D_{\mathcal {I}_{t}^{c}}(\pi,q_{t}) \to 0\) as t → ∞, since \(\mathcal {I}_{t}^{c}\) becomes smaller and smaller as t increases.
Therefore, we can guarantee that using the AISM algorithm in Table 1, with a valid proposal that fulfills Definition 1 and an acceptance rule according to Definition 3, we obtain a sticky proposal that fulfills Definition 2.
13 Appendix C: Support points
In this appendix we provide the proofs of Theorem 3 and Corollary 4, which bound the expected growth of the number of support points.
13.1 C.1 Proof of Theorem 3
Given the support set \(\mathcal {S}_{t}\) and the state xt−1, the expected probability of adding a new point to \(\mathcal {S}_{t}\) at the t-th iteration is given by
$$\begin{array}{*{20}l} E\left[P_{a}(z)|x_{t-1}, \mathcal{S}_{t}\right]& = \int_{\mathcal{X}} P_{a}(z) p_{t}(z|x_{t-1},\mathcal{S}_{t})\ dz, \\ & = \int_{\mathcal{X}} \eta_{t}(z,d_{t}(z)) p_{t}(z|x_{t-1},\mathcal{S}_{t})\ dz, \end{array} $$
where \(d_{t}(z)=\left |\pi (z)-q_{t}(z|\mathcal {S}_{t})\right |\) and
$${} p_{t}(z|x_{t-1},\mathcal{S}_{t}) = \int_{\mathcal{X}}{p_{t}\left(z|x',x_{t-1},\mathcal{S}_{t}\right) p_{t}\left(x'|x_{t-1},\mathcal{S}_{t}\right)\ dx'}, $$
represents the kernel function of AISM given xt−1 and \(\mathcal {S}_{t}\). Since candidate points \(x' \in \mathcal {X}\) are directly drawn from the proposal pdf, we have \(p_{t}\left (x'|x_{t-1},\mathcal {S}_{t}\right) = \widetilde {q}_{t}\left (x'|\mathcal {S}_{t}\right)\), and from the structure of the AISM in Table 1 it is straightforward to see that
$$\begin{array}{*{20}l} p_{t}(z|x',x_{t-1},\mathcal{S}_{t}) = & \alpha(x_{t-1},x') \delta(z-x_{t-1}) \\ & + \left[1-\alpha(x_{t-1},x')\right] \delta(z-x'), \end{array} $$
where \(\alpha (x_{t-1},x') =\min \left [1,\frac {\pi (x')q_{t}(x_{t-1}|\mathcal {S}_{t})}{\pi (x_{t-1})q_{t}(x'|\mathcal {S}_{t})}\right ]\). Inserting these two expressions in Eq. (38), the kernel function of AISM becomes
$$\begin{array}{*{20}l} p_{t}(z|x_{t-1},\mathcal{S}_{t}) = & \left[\int_{\mathcal{X}} \alpha(x_{t-1},x') \ \widetilde{q}_{t}(x'|\mathcal{S}_{t})\ dx' \right] \\ & \times \delta(z-x_{t-1}) \\ & + \left[1-\alpha(x_{t-1},z)\right]\ \widetilde{q}_{t}(z|\mathcal{S}_{t}) \end{array} $$
Let us recall now the integral form of Jensen's inequality for a concave function φ(x) with support \(\mathcal {X} \subseteq \mathbb {R}\) [66]:
$$ \int_{\mathcal{X}}{\varphi(x) f(x)\ dx} \le \varphi\left(\int_{\mathcal{X}}{x f(x)\ dx}\right), $$
which is valid for any non-negative function f(x) such that \(\int _{\mathcal {X}}{f(x)\ dx}=1\). Then, since we assume that η t (z,d)=η t (d), η t (d) is a concave function of d by condition 4 of Definition 3, and \(\int _{\mathcal {X}} p_{t}(z|x_{t-1},\mathcal {S}_{t}) dz=1\), we have
$$\begin{array}{*{20}l} E[P_{a}(z)|x_{t-1}, \mathcal{S}_{t}] & = \int_{\mathcal{X}} \eta_{t}(d_{t}(z)) p_{t}(z|x_{t-1},\mathcal{S}_{t})\ dz \\ & \le \eta_{t}\left(E\left[d_{t}(z)|x_{t-1},\mathcal{S}_{t}\right]\right), \end{array} $$
$$\begin{array}{*{20}l} {} E\left[d_{t}(z)|x_{t-1},\mathcal{S}_{t}\right] &= \int_{\mathcal{X}} d_{t}(z) p_{t}(z|x_{t-1},\mathcal{S}_{t})\ dz \\ &= \left[\int_{\mathcal{X}} \alpha(x_{t-1},x') \ \widetilde{q}_{t}(x'|\mathcal{S}_{t})\ dx' \right] d_{t}(x_{t-1}) \\ & \quad+ \int_{\mathcal{X}} \Big[1-\alpha(x_{t-1},z)\Big]\ d_{t}(z) \widetilde{q}_{t}(z|\mathcal{S}_{t})\ dz, \end{array} $$
where we have used (39) to obtain the final expression in (41). Now, for the first term on the right-hand side of (41), note that \(\left [\int _{\mathcal {X}} \alpha (x_{t-1},x') \ \widetilde {q}_{t}(x'|\mathcal {S}_{t})\ dx' \right ] \le 1\), since 0 ≤ α(x_{t−1},x′) ≤ 1 and \(\int _{\mathcal {X}}{\widetilde {q}_{t}(x'|\mathcal {S}_{t})\ dx'} = 1\), while for the second term we have
$$\begin{array}{*{20}l} &\int_{\mathcal{X}} \Big[1-\alpha(x_{t-1},z)\Big]\ d_{t}(z) \widetilde{q}_{t}(z|\mathcal{S}_{t})\ dz \\ & \le \int_{\mathcal{X}} d_{t}(z) \widetilde{q}_{t}(z|\mathcal{S}_{t})\ dz \\ & \le C \cdot D_{1}(\pi,q_{t}), \end{array} $$
where we recall that \(D_{1}(\pi,q_{t}) = \int _{\mathcal {X}}{d_{t}(z)\ dz} = \int _{\mathcal {X}}{|\pi (z)-q_{t}(z|\mathcal {S}_{t})|\ dz}\) and \(C = \max _{z\in \mathcal {X}} \widetilde {q}_{t}(z|\mathcal {S}_{t}) < \infty \), since we have assumed that π(x) is bounded and thus, by condition 4 in Definition 1, \(\widetilde {q}_{t}(z|\mathcal {S}_{t})\) is also bounded. Therefore, we obtain
$$ E[d_{t}(z)|x_{t-1},\mathcal{S}_{t}] \le d_{t}(x_{t-1}) + C \cdot D_{1}(\pi, q_{t}), $$
and inserting (42) into (40) we obtain the following bound for the expected probability of adding a support point at the t-th iteration:
$$ E[P_{a}(z)|x_{t-1}, \mathcal{S}_{t}] \leq \eta_{t}\Big(d_{t}(x_{t-1}) + C\cdot D_{1}(\pi, q_{t})\Big). $$
Finally, noting that C < ∞, that both d_t(x_{t−1}) → 0 and D_1(π, q_t) → 0 as t → ∞ by Conjecture 1, and that η_t(0) = 0 by condition 2 in Definition 3, we have \(E[P_{a}(z)|x_{t-1}, \mathcal {S}_{t}]\to 0\) as t → ∞.
13.2 C.2 Proof of Corollary 4
First of all, recall that a semi-metric fulfills all the properties of a metric except for the triangle inequality. Therefore, we have \(\widetilde {d}_{t}(\pi (z),q_{t}(z)) \ge 0\), \(\widetilde {d}_{t}(\pi (z),q_{t}(z)) = 0 \iff \pi (z) = q_{t}(z)\) and \(\widetilde {d}_{t}(\pi (z),q_{t}(z)) = \widetilde {d}_{t}(q_{t}(z),\pi (z))\). Now, from the proof of Theorem 3 (see Appendix C.1) we can see that η t is not used until Eq. (40). Since \(\eta _{t}(\widetilde {d}_{t}(z))\) is a concave function of \(\widetilde {d}_{t}(z)\), we can still use Jensen's inequality and this equation becomes
$$E[P_{a}(z)|x_{t-1},\mathcal{S}_{t}] \le \eta_{t}\Big(E[\widetilde{d}_{t}(z)|x_{t-1},\mathcal{S}_{t}]\Big), $$
where, following the same procedure as in Appendix C.1 (which is still valid due to the fact that \(\widetilde {d}_{t}(\pi (z),q_{t}(z))\) is a semi-metric), the term inside η t can be now bounded by
$$E\big[\widetilde{d}_{t}(z)|x_{t-1},\mathcal{S}_{t}\big] \le \widetilde{d}_{t}(x_{t-1}) + C \cdot \widetilde{D}_{t}(\pi, q_{t}), $$
with \(\widetilde {D}_{t}(\pi, q_{t}) = \int _{\mathcal {X}}{\widetilde {d}_{t}(z)\ dz}\). Therefore, we have
$$E\left[P_{a}(z)|x_{t-1},\mathcal{S}_{t}\right] \le \eta_{t}\left(\widetilde{d}_{t}(x_{t-1}) + C \cdot \widetilde{D}_{t}(\pi, q_{t})\right), $$
with \(E[P_{a}(z)|x_{t-1},\mathcal {S}_{t}] \to 0\) as t→∞ under the conditions of Conjecture 1.
14 Appendix D: Variate generation
The proposal density \(\widetilde {q}_{t}(x|\mathcal {S}_{t}) \propto q_{t}(x|\mathcal {S}_{t})\), built using one of the interpolation procedures in Section 3.1, is composed of m_t + 1 pieces (including the two tails). More specifically, the function \(q_{t}(x|\mathcal {S}_{t})\) can be seen as a finite mixture,
$$ \widetilde{q}_{t}(x|\mathcal{S}_{t}) = \sum_{i=0}^{m_{t}} \eta_{i} \phi_{i}(x), $$
with \(\sum _{i=0}^{m_{t}} \eta _{i}=1\), where ϕ_i(x) is a linear pdf or a uniform pdf (depending on the employed construction; see Eqs. (3)–(4)) defined on the interval \(\mathcal {I}_{i}\), and ϕ_i(x) = 0 for \(x \notin \mathcal {I}_{i}\). The tails, ϕ_0(x) and \(\phi _{m_{t}}(x)\), are truncated exponential pdfs (or Pareto tails; see Appendix E.2). Hence, in order to draw a sample from \(\widetilde {q}_{t}(x|\mathcal {S}_{t}) \propto q_{t}(x|\mathcal {S}_{t})\), it is necessary to perform the following steps (a minimal code sketch is given after the list):
1. Compute the area A_i below each piece composing \(q_{t}(x|\mathcal {S}_{t})\), i = 0,…,m_t. This is straightforward for the construction procedures in Eqs. (3)–(4), since the function \(q_{t}(x|\mathcal {S}_{t})\) is formed by linear or constant pieces, so it can easily be done analytically. Moreover, since the tails are exponential functions, the areas A_0 and \(A_{m_{t}}\) can also be computed analytically. Then, we need to normalize them,
$$ \eta_{i} = \frac{A_{i}}{\sum_{j=0}^{m_{t}} A_{j}}, \quad \text{for} \quad i=0,\ldots, m_{t}. $$
2. Choose a piece (i.e., an index j* ∈ {0,…,m_t}) according to the weights η_i, i = 0,…,m_t.
3. Given the index j*, draw a sample x′ in the interval \(\mathcal {I}_{j^{*}}\) with pdf \(\phi _{j^{*}}(x)\), i.e., \(x' \sim \phi _{j^{*}}(x)\).
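A minimal Python sketch of these three steps (our own schematic, not code from the paper; the piece areas and the per-piece samplers are assumed to be available analytically, as discussed above):

```python
import numpy as np

def sample_piecewise(pieces, rng):
    """Draw one sample from a proposal given as a list of pieces.
    Each piece is a dict with its area 'A' and a per-piece sampler 'draw'."""
    A = np.array([p["A"] for p in pieces])
    eta = A / A.sum()                       # step 1: normalized weights eta_i
    j = rng.choice(len(pieces), p=eta)      # step 2: pick a piece index j*
    return pieces[j]["draw"](rng)           # step 3: x' ~ phi_{j*}(x)

# Toy example: a uniform piece on (0, 1] and a right exponential tail on (1, inf).
rng = np.random.default_rng(0)
pieces = [
    {"A": 2.0, "draw": lambda r: r.uniform(0.0, 1.0)},
    {"A": 0.5, "draw": lambda r: 1.0 + r.exponential(1.0)},
]
x = sample_piecewise(pieces, rng)
```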
15 Appendix E: Robust algorithms
In this appendix, we briefly discuss how to increase the robustness of the method, both with respect to a bad choice of the initial set \(\mathcal {S}_{0}\) (e.g., when information about the range of the target pdf is not available) and w.r.t. the heavy tails that appear in many target pdfs.
15.1 E.1 Mixture of proposal densities
Let us define a proposal density as the two-component mixture (the displayed equation is reconstructed from the surrounding text, where it was lost)
$$ \widetilde{q}_{t}(x|\mathcal{S}_{t}) = \alpha_{t}\, \widetilde{q}_{1}(x) + (1-\alpha_{t})\, \widetilde{q}_{2}(x|\mathcal{S}_{t}), \qquad \alpha_{t} \in (0,1), $$
where \(\widetilde {q}_{2}(x|\mathcal {S}_{t})\) is a sticky proposal pdf built as described in Section 3, and \(\widetilde {q}_{1}(x)\) is a generic proposal pdf with an explorative task. The explorative behavior of \(\widetilde {q}_{1}\) can be controlled by its scale parameter. The weight α_t can be kept constant, α_t = α_0 = 0.5 for all t (the most defensive strategy), or it can be decreased with the iteration t, i.e., α_t → 0 as t → ∞. The joint adaptation of the weight α_t, the scale parameter of \(\widetilde {q}_{1}\), and the sticky proposal \(\widetilde {q}_{2}\) needs and deserves additional study.
15.2 E.2 Heavy tails
The choice of the tails of the proposal is important for two reasons: (a) to accelerate the convergence of the chain to the target (especially for heavy-tailed target distributions); and (b) to increase the robustness of the method with respect to the initial choice of the set \(\mathcal {S}_{0}\). Indeed, constructing tails with a larger area below them can often reduce the dependence on the specific choice of the initial support points. Several heavy-tailed constructions are possible. For instance, here we propose to use Pareto pieces, which have the following analytic form:
$$\begin{array}{*{20}l} q_{t}(x|\mathcal{S}_{t}) & = e^{\rho_{0}} \frac{1}{|x- \mu_{0}|^{\gamma_{0}}}, \quad \forall x\in \mathcal{I}_{0}, \\ q_{t}(x|\mathcal{S}_{t}) & = e^{\rho_{m_{t}}} \frac{1}{|x- \mu_{m_{t}}|^{\gamma_{m_{t}}}}, \quad \forall x\in \mathcal{I}_{m_{t}}, \end{array} $$
with γ_j > 1 for j ∈ {0, m_t}. In the log-domain, this results in
$$\begin{array}{*{20}l} w_{0}(x) & = \rho_{0}-\gamma_{0}\log(|x-\mu_{0}|), \quad \text{for } x\in \mathcal{I}_{0}, \\ w_{m_{t}}(x) & = \rho_{m_{t}}-\gamma_{m_{t}}\log(|x-\mu_{m_{t}}|), \quad \text{for } x\in \mathcal{I}_{m_{t}}, \end{array} $$
i.e., \(q_{t}(x|\mathcal {S}_{t})=\exp \left (w_{i}(x)\right)\) with i ∈ {0, m_t}. Let us denote V(x) = log[π(x)]. Fixing the parameters μ_j, j ∈ {0, m_t}, the remaining parameters ρ_j and γ_j are set so as to satisfy the passing conditions through the points (s_1, V(s_1)) and (s_2, V(s_2)), and through the points \((s_{m_{t}-1},V(s_{m_{t}-1}))\) and \((s_{m_{t}},V(s_{m_{t}}))\), respectively. The parameters μ_j can be arbitrarily chosen by the user, as long as they fulfill the following inequalities:
$$ \mu_{0}>s_{2}, \quad \mu_{m_{t}}<s_{m_{t}-1}. $$
Values of μ_j such that μ_0 ≈ s_2 and \(\mu _{m_{t}}\approx s_{m_{t}-1}\) yield small values of γ_j (close to 1) and, as a consequence, fatter tails. Larger differences |μ_0 − s_2| and \(|\mu _{m_{t}}- s_{m_{t}-1}|\) yield γ_j → +∞, i.e., lighter tails. Note that we can compute analytically the integral of q_t(x) over \(\mathcal {I}_{0}\) and \(\mathcal {I}_{m_{t}}\):
$$\begin{array}{*{20}l} A_{0} & = \frac{e^{\rho_{0}}}{\gamma_{0}-1} \frac{1}{(\mu_{0}- s_{1})^{\gamma_{0}-1}}, \\ A_{m_{t}} & = \frac{e^{\rho_{m_{t}}}}{\gamma_{m_{t}}-1} \frac{1}{(s_{m_{t}}- \mu_{m_{t}})^{\gamma_{m_{t}}-1}}. \end{array} $$
Moreover, we can also draw samples easily from each Pareto tail using the inversion method [2].
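For illustration, the inversion step for the left tail can be sketched as follows (our own derivation from the tail density above; the parameter values in the example are arbitrary):

```python
import numpy as np

def sample_left_pareto_tail(s1, mu0, gamma0, rng):
    """Inversion sampling from the left Pareto tail
    q(x) proportional to |x - mu0|^(-gamma0) on (-inf, s1], with mu0 > s1
    and gamma0 > 1. The normalized CDF is
    F(x) = ((mu0 - x)/(mu0 - s1))^(1 - gamma0), so inverting F(x) = u gives
    x = mu0 - (mu0 - s1) * u^(-1/(gamma0 - 1))."""
    u = 1.0 - rng.uniform(0.0, 1.0)   # u in (0, 1], avoids u = 0
    return mu0 - (mu0 - s1) * u ** (-1.0 / (gamma0 - 1.0))

rng = np.random.default_rng(1)
x = sample_left_pareto_tail(s1=-1.0, mu0=0.5, gamma0=2.5, rng=rng)  # x <= -1
```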
The adjective "sticky" highlights the ability of the proposed schemes to generate a sequence of proposal densities that progressively "stick" to the target.
The purpose of this work is to provide a family of methods applicable to a wide range of signal processing problems. A generic Matlab code (not focusing on any specific application) is provided at http://www.lucamartino.altervista.org/STICKY.zip.
A preliminary version of this work was published in [64]. With respect to that paper, the following major changes have been made: we discuss exhaustively the general structure of the new family (not just a particular algorithm); we perform a complete theoretical analysis of the AISM algorithm; we extend substantially the discussion of related works; we introduce the AISMTM algorithm; we show how sticky methods can be used to sample from multivariate pdfs by embedding them within a Gibbs sampler or the hit-and-run algorithm; and we provide additional numerical simulations (including comparisons with other benchmark sampling algorithms and the estimation of the hyper-parameters of a Gaussian process).
For simplicity, we assume that π(x) is bounded. However, the case of unbounded target pdfs can also be tackled by designing a suitable proposal construction that takes into account the vertical asymptotes of the target function. Similarly, we consider a target function defined in a continuous space \(\mathcal {X}\) for the sake of simplicity, although the support domain could also be discrete.
Note that any other MCMC technique could be used.
Note that \(d_{t}(z) \le \max \{\pi (z), q_{t}(z|\mathcal {S}_{t})\} \le M_{\pi }\), since \(M_{t}=\max \limits _{z\in \mathcal {X}} q_{t}(z|\mathcal {S}_{t})\le M_{\pi }\) for all of the constructions described in Section 3 for the proposal function. Therefore, all thresholds ε_t ≥ M_π lead to equivalent update rules.
Regarding the definition of ε_t, this threshold should decrease over time (to guarantee that new support points can always be added), but not too fast (to avoid adding too many points and thus increasing the computational cost). Selecting the optimal threshold can be very challenging, but many of the rules used in the area of stochastic filtering for the update parameter could be applied here. For instance, good update rules could be ε_t = κM_π·e^{−γt} or \(\varepsilon _{t} = \frac {\kappa M_{\pi }}{t+1}\) for appropriate values of 0 < κ < 1 and γ > 0. Exploring this issue is beyond the scope of this paper, but we plan to address it in future work.
We have used the equality \(d_{t}(z_{i})=|\pi (z_{i})-q_{t}(z_{i}|\mathcal {S}_{t})|=\max \{\pi (z_{i}),q_{t}(z_{i}|\mathcal {S}_{t})\}-\min \{\pi (z_{i}),q_{t}(z_{i}|\mathcal {S}_{t})\}\).
Preliminary Matlab code for the AISM algorithm, with the constructions described in Section 3.1 and the update control rule R3, is provided at https://www.mathworks.com/matlabcentral/fileexchange/54701-adaptive-independent-sticky-metropolis–aism–algorithm.
Other related codes can be also found at http://mc-stan.org.
Recall that if θ1θ2θ3≤0 then p(θ|y,Z,κ)=0.
Note that we can always guarantee that \(q_{t}(x|\mathcal {S}_{t})\) is heavier tailed than π(x) by using an appropriate construction for the tails of the proposal, as discussed in Section 3 and Appendix E.2.
If we consider the complementary case (i.e., π(si+1)≥π(s i ) and thus \(q_{t}(x)=\pi (s_{i+1})\ \forall x \in \mathcal {I}_{t,i}\)) we obtain exactly the same bound following an identical procedure.
The same conclusion is obtained if we consider a point \(s' \in (s_{m_{t}},\infty)\).
Note that the proposals are assumed to be uniformly heavier tailed than the target by Condition 4 of Definition 1. Therefore, we can guarantee that enough candidate samples are generated in the tails.
This work has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO) through the MIMOD-PLC (TEC2015-64835-C3-3-R) and KERMES (TEC2016-81900-REDT/AEI) projects; by the Italian Ministry of Education, University and Research (MIUR); by PRIN 2010-11 grant; and by the European Union (Seventh Framework Programme FP7/2007-2013) under grant agreement no:630677.
All the authors have participated in writing the manuscript and have revised the final version. All authors read and approved the final manuscript.
JS Liu, Monte Carlo Strategies in Scientific Computing (Springer-Verlag, 2004).
CP Robert, G Casella, Monte Carlo Statistical Methods (Springer, 2004).
WJ Fitzgerald, Markov chain Monte Carlo methods with applications to signal processing. Signal Process. 81, 3–18 (2001).
A Doucet, X Wang, Monte Carlo methods for signal processing: a review in the statistical signal processing context. IEEE Signal Process. Mag. 22, 152–170 (2005).
M Davy, C Doncarli, JY Tourneret, Classification of chirp signals using hierarchical Bayesian learning and MCMC methods. IEEE Trans. Signal Process. 50, 377–388 (2002).
N Dobigeon, JY Tourneret, CI Chang, Semi-supervised linear spectral unmixing using a hierarchical Bayesian model for hyperspectral imagery. IEEE Trans. Signal Process. 56, 2684–2695 (2008).
T Elguebaly, N Bouguila, Bayesian learning of finite generalized Gaussian mixture models on images. Signal Process. 91, 801–820 (2011).
GO Roberts, JS Rosenthal, Examples of adaptive MCMC. J. Comput. Graph. Stat. 18, 349–367 (2009).
C Andrieu, J Thoms, A tutorial on adaptive MCMC. Stat. Comput. 18, 343–373 (2008).
H Haario, E Saksman, J Tamminen, An adaptive Metropolis algorithm. Bernoulli 7, 223–242 (2001).
F Liang, C Liu, R Carroll, Advanced Markov Chain Monte Carlo Methods: Learning from Past Samples (Wiley Series in Computational Statistics, England, 2010).
WR Gilks, NG Best, KKC Tan, Adaptive rejection Metropolis sampling within Gibbs sampling. Appl. Stat. 44, 455–472 (1995).
L Martino, J Read, D Luengo, Independent doubly adaptive rejection Metropolis sampling within Gibbs sampling. IEEE Trans. Signal Process. 63, 3123–3138 (2015).
L Holden, R Hauge, M Holden, Adaptive independent Metropolis–Hastings. Ann. Appl. Probab. 19, 395–413 (2009).
C Ritter, MA Tanner, Facilitating the Gibbs sampler: the Gibbs stopper and the griddy-Gibbs sampler. J. Am. Stat. Assoc. 87, 861–868 (1992).
R Meyer, B Cai, F Perron, Adaptive rejection Metropolis sampling using Lagrange interpolation polynomials of degree 2. Comput. Stat. Data Anal. 52, 3408–3423 (2008).
L Martino, J Read, D Luengo, Independent doubly adaptive rejection Metropolis sampling, in IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (2014).
L Martino, H Yang, D Luengo, J Kanniainen, J Corander, A fast universal self-tuned sampler within Gibbs sampling. Digital Signal Process. 47, 68–83 (2015).
WR Gilks, P Wild, Adaptive rejection sampling for Gibbs sampling. Appl. Stat. 41, 337–348 (1992).
B Cai, R Meyer, F Perron, Metropolis–Hastings algorithms with adaptive proposals. Stat. Comput. 18, 421–433 (2008).
W Hörmann, J Leydold, G Derflinger, Automatic Nonuniform Random Variate Generation (Springer, 2003).
G Krzykowski, W Mackowiak, Metropolis–Hastings simulation method with spline proposal kernel, in An Isaac Newton Institute Workshop (2006).
W Shao, G Guo, F Meng, S Jia, An efficient proposal distribution for Metropolis–Hastings using a B-splines technique. Comput. Stat. Data Anal. 53, 465–478 (2013).
L Tierney, Markov chains for exploring posterior distributions. Ann. Stat. 22, 1701–1728 (1994).
L Martino, J Míguez, Generalized rejection sampling schemes and applications in signal processing. Signal Process. 90, 2981–2995 (2010).
WR Gilks, Derivative-free adaptive rejection sampling for Gibbs sampling. Bayesian Stat. 4, 641–649 (1992).
D Görür, YW Teh, Concave convex adaptive rejection sampling. J. Comput. Graph. Stat. 20, 670–691 (2011).
W Hörmann, A rejection technique for sampling from T-concave distributions. ACM Trans. Math. Softw. 21, 182–193 (1995).
L Martino, F Louzada, Adaptive rejection sampling with fixed number of nodes. Commun. Stat. Simul. Comput., 1–11 (2017). doi:10.1080/03610918.2017.1395039.
J Leydold, A rejection technique for sampling from log-concave multivariate distributions. ACM Trans. Model. Comput. Simul. 8, 254–280 (1998).
J Leydold, W Hörmann, A sweep plane algorithm for generating random tuples in simple polytopes. Math. Comput. 67, 1617–1635 (1998).
KR Koch, Gibbs sampler by sampling-importance-resampling. J. Geodesy 81, 581–591 (2007).
AE Gelfand, TM Lee, Discussion on the meeting on the Gibbs sampler and other Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B 55, 72–73 (1993).
C Fox, A Gibbs sampler for conductivity imaging and other inverse problems. Proc. SPIE Image Reconstruction from Incomplete Data VII 8500, 1–6 (2012).
P Müller, A generic approach to posterior integration and Gibbs sampling. Technical Report 91-09 (Department of Statistics, Purdue University, 1991).
JS Liu, F Liang, WH Wong, The multiple-try method and local optimization in Metropolis sampling. J. Am. Stat. Assoc. 95, 121–134 (2000).
L Martino, J Read, On the flexibility of the design of multiple try Metropolis schemes. Comput. Stat. 28, 2797–2823 (2013).
D Luengo, L Martino, Almost rejectionless sampling from Nakagami-m distributions (m ≥ 1). IET Electron. Lett. 48, 1559–1561 (2012).
R Karawatzki, The multivariate Ahrens sampling method. Technical Report 30, Department of Statistics and Mathematics (2006).
W Hörmann, A universal generator for bivariate log-concave distributions. Computing 52, 89–96 (1995).
BS Caffo, JG Booth, AC Davison, Empirical supremum rejection sampling. Biometrika 89, 745–754 (2002).
W Hörmann, A note on the performance of the Ahrens algorithm. Computing 69, 83–89 (2002).
W Hörmann, J Leydold, G Derflinger, Inverse transformed density rejection for unbounded monotone densities. Research Report Series, Department of Statistics and Mathematics (Vienna University, 2007).
G Marrelec, H Benali, Automated rejection sampling from product of distributions. Comput. Stat. 19, 301–315 (2004).
H Tanizaki, On the nonlinear and non-normal filter using rejection sampling. IEEE Trans. Autom. Control 44, 314–319 (1999).
M Evans, T Swartz, Random variate generation using concavity properties of transformed densities. J. Comput. Graph. Stat. 7, 514–528 (1998).
L Martino, J Míguez, A generalization of the adaptive rejection sampling algorithm. Stat. Comput. 21, 633–647 (2011).
M Brewer, C Aitken, Discussion on the meeting on the Gibbs sampler and other Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B 55, 69–70 (1993).
F Lucka, Fast Gibbs sampling for high-dimensional Bayesian inversion. arXiv:1602.08595 (2016).
H Zhang, Y Wu, L Cheng, I Kim, Hit and run ARMS: adaptive rejection Metropolis sampling with hit and run random direction. J. Stat. Comput. Simul. 86, 973–985 (2016).
L Martino, V Elvira, G Camps-Valls, Recycling Gibbs sampling, in 25th European Signal Processing Conference (EUSIPCO), 181–185 (2017).
WR Gilks, GO Roberts, EI George, Adaptive direction sampling. The Statistician 43, 179–189 (1994).
I Murray, Z Ghahramani, DJC MacKay, MCMC for doubly-intractable distributions, in Proceedings of the 22nd Annual Conference on Uncertainty in Artificial Intelligence (UAI-06), 359–366 (2006).
D Rohde, J Corcoran, MCMC methods for univariate exponential family models with intractable normalization constants, in 2014 IEEE Workshop on Statistical Signal Processing (SSP), 356–359 (2014).
RM Neal, Slice sampling. Ann. Stat. 31, 705–767 (2003).
CE Rasmussen, CKI Williams, Gaussian Processes for Machine Learning (MIT Press, 2006).
D Gamerman, Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (Chapman and Hall/CRC, 1997).
BP Carlin, S Chib, Bayesian model choice via Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B (Methodological) 57(3), 473–484 (1995).
S Chib, I Jeliazkov, Marginal likelihood from the Metropolis–Hastings output. J. Am. Stat. Assoc. 96, 270–281 (2001).
R Neal, Chapter 5, in Handbook of Markov Chain Monte Carlo, ed. by S Brooks, A Gelman, G Jones, X-L Meng (Chapman and Hall/CRC Press, 2011).
IT Nabney, Netlab: Algorithms for Pattern Recognition (Springer, 2008).
C Bishop, Pattern Recognition and Machine Learning (Springer, 2006).
H Haario, E Saksman, J Tamminen, Component-wise adaptation for high dimensional MCMC. Comput. Stat. 20, 265–273 (2005).
L Martino, R Casarin, D Luengo, Sticky proposal densities for adaptive MCMC methods, in IEEE Workshop on Statistical Signal Processing (SSP) (2016).
PJ Davis, Interpolation and Approximation (Courier Corporation, 1975).
GH Hardy, JE Littlewood, G Pólya, Inequalities (Cambridge Univ. Press, 1952).
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Image Processing Lab., University of Valencia, Valencia, Spain
2. Department of Economics, University Ca' Foscari of Venice, Venice, Italy
3. School of Mathematics, Statistics and Actuarial Sciences, University of Kent, Canterbury, UK
4. Department of Signal Theory and Communications, Universidad Politécnica de Madrid, Madrid, Spain
Martino, L., Casarin, R., Leisen, F. et al. EURASIP J. Adv. Signal Process. (2018) 2018: 5. https://doi.org/10.1186/s13634-017-0524-6
Accepted 20 December 2017
First Online 11 January 2018
Acceleration of 60 MeV proton beams in the commissioning experiment of the SULF-10 PW laser
Published online by Cambridge University Press: 03 August 2022
A. X. Li
State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; ShanghaiTech University, Shanghai 201210, China; Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
C. Y. Qin
State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
H. Zhang*
State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; CAS Center for Excellence in Ultra-intense Laser Science, Shanghai 201800, China
S. Li
State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
L. L. Fan
Q. S. Wang
State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China
T. J. Xu
N. W. Wang
L. H. Yu
Y. Xu
Y. Q. Liu
C. Wang
X. L. Wang
Z. X. Zhang
X. Y. Liu
P. L. Bai
Z. B. Gan
X. B. Zhang
X. B. Wang
C. Fan
Y. J. Sun
Y. H. Tang
B. Yao
X. Y. Liang
Y. X. Leng
B. F. Shen*
State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; Department of Physics, Shanghai Normal University, Shanghai 200234, China
L. L. Ji*
R. X. Li*
State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China; ShanghaiTech University, Shanghai 201210, China; CAS Center for Excellence in Ultra-intense Laser Science, Shanghai 201800, China
Z. Z. Xu
Correspondence to: H. Zhang, B. F. Shen, L. L. Ji and R. X. Li, State Key Laboratory of High Field Laser Physics, Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China. Email: [email protected] (H. Zhang); [email protected] (B. F. Shen); [email protected] (L. L. Ji); [email protected] (R. X. Li)
We report the experimental results of the commissioning phase of the 10 PW laser beamline of the Shanghai Superintense Ultrafast Laser Facility (SULF). The peak power reaches 2.4 PW on target without the last amplifying section during the experiment. The laser energy of 72 ± 9 J is directed to a focal spot of approximately 6 μm diameter (full width at half maximum) in a 30 fs pulse duration, yielding a focused peak intensity of around 2.0 × 10²¹ W/cm². The first laser-proton acceleration experiment is performed using plain copper and plastic targets. High-energy proton beams with maximum cut-off energy up to 62.5 MeV are achieved using copper foils at the optimum target thickness of 4 μm via target normal sheath acceleration. For plastic targets tens of nanometers thick, the proton cut-off energy is approximately 20 MeV, showing ring-like or filamented density distributions. These experimental results reflect the capabilities of the SULF-10 PW beamline, in particular both the ultrahigh intensity and the relatively good beam contrast. Further optimization of these key parameters is underway, where peak laser intensities of 10²²–10²³ W/cm² are anticipated to support various experiments on extreme field physics.
Keywords: high-energy proton source; laser–plasma interaction; ultraintense lasers
High Power Laser Science and Engineering, Volume 10, 2022, e26
DOI: https://doi.org/10.1017/hpl.2022.17
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© The Author(s), 2022. Published by Cambridge University Press in association with Chinese Laser Press
Chirped-pulse amplification (CPA) technology has significantly advanced the development of ultrashort ultraintense lasers in the past few decades[1–3]. Today, nearly one hundred 100 TW systems are operating, with about 20 systems at the PW level existing or under construction[3], pushing laser intensities beyond the relativistic threshold (about 10¹⁸ W/cm² for laser wavelengths of ~1 μm). Unprecedented extreme physical conditions can be created in laboratories[4–6], which strongly motivate the studies of laser-driven particle acceleration[4,7,8], X/gamma-ray radiation[9–11], laboratory astrophysics[6,12,13], laser-driven nuclear physics[14], etc. On the other hand, the rising interest in strong-field quantum electrodynamics calls for lasers with even higher intensities (10²²–10²³ W/cm²). Such a quest has been supported by several projects aiming to reach 10 PW-level outputs, such as ELI[15], Vulcan-10 PW[16], Apollon-10 PW[17] and SULF-10 PW[18]. The first 100 PW-level laser facility under construction is the Station of Extreme Light Science (SEL)[19], while several others are also under consideration (ELI-200 PW, XCELS, Nexawatt, Gekko EXA, Rochester, etc.).
The Shanghai Superintense Ultrafast Laser Facility (SULF) is the first 10 PW-class laser facility in China, which was proposed and constructed by the Shanghai Institute of Optics and Fine Mechanics (SIOM) in July 2016. Figure 1 shows the layout of the SULF laser facility[20]. The SULF laser employs a typical CPA Ti:sapphire scheme and contains two high-intensity laser beamlines: SULF-10 PW, operating at a repetition rate of one shot per 3 minutes[21], and SULF-1 PW, operating at a repetition rate of 0.1 Hz[22]. In 2018, the SULF-10 PW beamline realized an output peak power of up to 10.3 PW (after compression), with 339 J output pulse energy (compressor transmission efficiency of 64%) compressed to a 21 fs pulse duration[18]. This peak power was further increased to 12.9 PW in 2019[21]. The physical experimental areas in SULF include the three research platforms of dynamics of materials under extreme conditions (DMEC), ultrafast sub-atomic physics (USAP) and big molecule dynamics and extreme-fast chemistry (MODEC).
Figure 1 The layout of the SULF laser facility[20].
The commissioning experiment of the SULF-10 PW beamline was carried out on the USAP platform, focusing on laser-proton acceleration using plain Cu and plastic targets. The peak power reaches 2.4 PW without the last amplifying section, corresponding to a laser energy of 72 ± 9 J, a focal spot size of approximately 6 μm diameter (full width at half maximum, FWHM) and a 30 fs pulse duration. Together these yield a focused peak intensity of around 2.0 × 10²¹ W/cm². We obtained a proton beam with cut-off energy up to 62.5 MeV using the Cu target at the optimum target thickness of 4 μm. For much thinner targets (tens of nanometers), the proton cut-off energy declines to 20 MeV and ring-like or filamented structures appear in the density distribution. The results obtained from laser–foil interaction directly illustrate the current capabilities of the SULF-10 PW beamline.
2 The current status of the experimental area and SULF-10 PW beamline
2.1 Experimental area in USAP
As seen in Figure 1, the SULF-10 PW laser beam passes through seven multi-pass amplifiers on the first floor of the SULF building and is then transmitted to floor B2 through a transmission pipeline. Following that, the amplified beam is further expanded and image-relayed into the compressor cavity. Behind the compressor chamber are the beam-quality improvement chamber and the experimental chamber (USAP). Figure 2 shows a photograph of the experimental area in USAP located on floor B2. The beam-quality improvement chamber is specially designed to host a deformable mirror (DM) and plasma mirrors (PMs) that further improve the beam quality and contrast of the laser. The experimental chamber for laser–matter interaction comprises two vacuum cavities: the larger one for short-focal-length experiments, such as laser-ion acceleration, and the smaller one for experiments requiring a long focal length, such as laser wakefield acceleration of electrons. The internal size of both the beam-quality improvement and the short-focal-length chambers is 4.5 m × 3.5 m × 2.0 m, and it takes about 1 hour for the vacuum system to pump them from standard atmospheric pressure down to 10⁻⁴–10⁻³ Pa. An important feature of the USAP platform is that it allows users to simultaneously employ both the SULF-10 PW and SULF-1 PW beamlines, for either pump-probe or laser-electron scattering experiments. In the commissioning experiment on the SULF-10 PW beamline, only the off-axis parabola (OAP) of short focal length and the DM in the beam-quality improvement chamber were employed, without introducing the PMs into the laser path.
Figure 2 The experimental area in USAP.
2.2 Laser parameters of the SULF-10 PW beamline
In the commissioning stage, the output energy of the SULF-10 PW laser before the compressor is measured to be 110 ± 13 J with six multi-pass amplifiers (the last one was switched off). The energy transmission efficiency from the compressor to the target is 66%, resulting in the on-target energy of 72 ± 9 J. The near-field profile of the final output laser is elliptic, 470 mm × 430 mm along the horizontal and vertical directions, respectively. The measured modulation of the near-field beam is about 1.8, mainly due to the modulation of the pump laser beam from the main amplifier.
Decreasing the size of the focal spot is an efficient method to increase the laser intensity. However, the large-aperture optical elements implemented in the SULF-10 PW system inevitably increase the wavefront aberrations. Here, double DMs with different actuator densities are cascaded to optimize the wavefront aberrations and, hence, the focal intensity[23]. The first adaptive-optics (AO) correction system is placed at the output of the sixth amplifier; its DM has a diameter of 130 mm with 64 mechanical actuators. A second AO correction loop is installed using a larger DM with 520 mm diameter and 121 mechanical actuators, placed at the output of the compressor (in the beam-quality improvement chamber). The wavefront-sampling beam is exported out of the experimental chamber along a light path built after an OAP. The focal spot is magnified 10 times and then monitored online by a low-noise charge-coupled device (CCD). An OAP with 2000 mm focal length is used for laser-proton acceleration, corresponding to an effective f-number of 4.4. Figure 3(a) shows the typical focal laser intensity distribution after the correction. The FWHM size of the focal spot is 6.28 μm × 5.92 μm, containing 24% of the total laser energy.
Figure 3 Laser parameters of the SULF-10 PW beamline for the commissioning experiment. (a) The typical focal spot of the laser after the correction of the double-DM system, measured using a low-noise CCD via the light path built after an f/4.4 OAP. (b) The typical pulse duration of the compressed pulse, measured by a Fastlite Wizzler instrument. The temporal contrast on (c) the nanosecond scale, measured by a photodiode with a stack of neutral attenuators, and (d) the picosecond scale, measured by a third-order cross-correlator. The red arrow indicates the saturated peak of the laser pulse.
The measured spectral width of the output pulse is ±40 nm (FWHM) at 800 nm central wavelength. The laser beam is compressed by a four-grating compressor, and the pulse duration is measured using a Fastlite Wizzler instrument. Figure 3(b) shows that the typical pulse duration is about 30 fs (FWHM), resulting in an output peak power of 2.4 PW. These data indicate that the focal peak intensity reaches 2.0 × 10²¹ W/cm².
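These quoted figures are mutually consistent, as the following back-of-the-envelope check shows (the numbers come from the text; the simple FWHM-ellipse estimate is our own):

```python
import math

# Rough consistency check of the quoted on-target parameters.
E = 72.0                       # on-target energy [J]
tau = 30e-15                   # FWHM pulse duration [s]
frac = 0.24                    # energy fraction within the FWHM focal spot
a, b = 6.28e-4, 5.92e-4        # FWHM spot axes [cm]

P_peak = E / tau               # ~2.4e15 W = 2.4 PW
area = math.pi * a * b / 4.0   # area of the FWHM ellipse [cm^2]
I_peak = frac * P_peak / area  # ~2.0e21 W/cm^2
print(f"P = {P_peak:.2e} W, I = {I_peak:.2e} W/cm^2")
```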
A key parameter for laser–solid interaction at relativistic intensities is the temporal contrast of the laser pulse. Pre-pulses with intensity above 10¹⁰ W/cm²[24] would ionize the target, introducing low-density pre-plasmas in front of the target, which could enhance proton acceleration[25,26]. However, if the pre-plasma induced by the pre-pulse-driven shock appears at the target rear, proton acceleration will be restricted[27]. In order to improve the temporal contrast of the SULF-10 PW beamline, a combination of cross-polarized wave generation (XPWG) and femtosecond optical parametric amplification (OPA) techniques is implemented at the front end[28]. Along with further optimization[29], the contrast ratio at 50 ps before the main pulse is measured to be around 1.7 × 10⁻⁹. Meanwhile, two Pockels cells are installed after the first amplifier to increase the nanosecond temporal contrast. Here, the contrast evolution on the nanosecond scale was measured by a combination of an oscilloscope and a photodiode at the output of the compressor, as shown in Figure 3(c). It can be seen that the amplified spontaneous emission (ASE) noise level is better than 10⁻⁹ (limited by the photodiode). An intense pre-pulse is also seen at –3 ns with a contrast of approximately 4.3 × 10⁻⁹, for reasons that are still under investigation. The temporal contrast on the picosecond scale was measured by a commercial third-order cross-correlator (Amplitude, Sequoia). The contrast curve within –420 ps before the main pulse is illustrated in Figure 3(d), showing a pedestal around 10⁻¹¹ that starts rising from –50 ps. In addition, three pre-pulses appear at –360, –100 and –60 ps with contrast ratios of approximately 10⁻⁹, due to multiple reflections off the optical components in the amplifiers (a detailed analysis of the pre-pulses will be presented in another paper). Considering the laser intensity of 10²¹ W/cm², these pre-pulses reach intensities of 10¹² W/cm², sufficiently strong to trigger material ionization, and could induce pre-plasmas at the target rear. PMs are to be installed in the near future to improve the performance of laser–foil interactions.
3 Experimental results
3.1 Experimental setup
A sketch of the experimental setup is shown in Figure 4. The p-polarized laser pulse is focused by the f/4.4 OAP mirror onto the target at an incident angle of 15°. In this run, the target table accommodates seven planar foils (more if necessary). Specially designed stacks of radiochromic films (RCFs) and BAS-SR image plates (IPs), located 6.3 cm behind the target rear, are used to measure the profile and energy spectrum of protons and electrons, respectively. The stacks measure 50 mm × 50 mm, have a 3-mm-diameter hole in the center to let protons pass through, and are wrapped in 15-μm-thick Al foil to shield against debris. Copper and aluminum sheets of different thickness are inserted between the RCFs and SR-IPs to attenuate the proton and electron energy, respectively. Due to limited space, only two stacks are used simultaneously in an experiment.
Figure 4 The sketch of the experimental setup. The specially designed stacks of radiochromic films and BAS-SR image plates are used to measure the profile and energy spectrum of protons and electrons. The stacks and targets can move along the y direction. Two Thomson parabola spectrometers are used to detect the ion spectra in the target normal direction and the laser direction. Each spectrometer holds six BAS-TR image plates at a time.
Two types of Thomson parabola (TP) spectrometers (TP1 and TP2) are used to detect the proton energy spectrum, as shown in Figure 4. TP1, composed of a 1.0 T magnetic field over a 5-cm length and a pair of 15-cm-long copper electrodes charged up to 10 kV, is placed 87.8 cm away from the target along the target normal direction. The diameter of TP1's pinhole is 150 μm, corresponding to a solid angle of 2.3 × 10⁻⁸ sr. The energy resolution of TP1 is 0.4 MeV at 100 MeV, with a low-energy threshold of 3.5 MeV. The other, higher-resolution TP spectrometer, TP2, placed 80 cm away from the target along the laser direction, is switched on when the RCFs and IPs are not in use. TP2 employs a magnetic field of 1.7 T over a 5-cm length and a pair of 35-cm-long electrodes charged up to 10 kV. It has a pinhole of 200 μm diameter, corresponding to a 4.9 × 10⁻⁸ sr solid angle. The energy resolution reaches 0.13 MeV at 100 MeV and the low-energy threshold is 9.2 MeV. From Figure 4, one should note that ions cannot be detected by TP2 if the stacks move in. In both TPs, BAS-TR IPs are placed on a turntable holder that can rotate 360° in the horizontal plane, allowing six successive measurements without interruption. Both TPs are surrounded by a lead shield to reduce signal noise during experiments. For future development, an online detector, a microchannel plate (MCP) with a fluorescent screen[30], will be installed to improve the diagnostic efficiency.
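The quoted solid angles follow directly from the pinhole geometry. As a quick sanity check, the sketch below evaluates the small-angle approximation Ω ≈ A/L² for a circular pinhole of area A at distance L, using the diameters and distances given above.

```python
import math

def pinhole_solid_angle(diameter_um: float, distance_cm: float) -> float:
    """Solid angle (sr) of a circular pinhole, small-angle approximation."""
    area_m2 = math.pi * (diameter_um * 1e-6 / 2.0) ** 2
    return area_m2 / (distance_cm * 1e-2) ** 2

print(f"TP1: {pinhole_solid_angle(150.0, 87.8):.1e} sr")  # ~2.3e-8 sr
print(f"TP2: {pinhole_solid_angle(200.0, 80.0):.1e} sr")  # ~4.9e-8 sr
```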
3.2 Acceleration of a 60 MeV proton beam via target normal sheath acceleration
The maximum energy of protons accelerated by intense ultrafast lasers is mainly determined by, though not limited to, the laser intensity, pulse duration and pulse contrast ratio, and is thus considered an important benchmark of a laser facility's capabilities. The most widely studied mechanisms for laser-driven ion acceleration are target normal sheath acceleration (TNSA)[31,32] and radiation pressure acceleration (RPA)[33,34], which require different laser and target parameters. In the commissioning experiment, under the current conditions of the SULF-10 PW described above, we focus on TNSA using micrometer-thick Cu foils.
The proton cut-off energy as a function of target thickness is shown in Figure 5(a), measured by TP1 along the target normal direction and by both TP2 and RCF stacks along the laser propagation direction. The target thickness l of the Cu foil varies from 1 to 10 μm. It can be clearly seen that in both directions the proton cut-off energy increases as the foil thickness increases from 1 to 4 μm, and then decreases at larger thicknesses, corresponding to an optimum value at 4 μm. This trend agrees with previously reported results for TNSA-produced proton beams, where the effects of electron reflux and pre-pulse-induced plasma expansion at the target rear side result in an optimum target thickness for proton acceleration[35]. One notices that the proton energies along the target normal direction are much higher than those along the laser direction for all target thicknesses. Typical proton spectra for various target thicknesses are illustrated in Figure 5(b), showing the broad-energy-spread distribution.
Figure 5 (a) The proton cut-off energy as a function of the target thickness of the plain Cu foils measured by TP1 in the target normal direction (red squares) and by both TP2 and RCF stacks in the laser propagation direction (blue circles), where the red and blue lines represent the average proton energy over two to three shots. The vertical error bars for some data are defined by the energy interval between adjacent RCF layers. (b) Typical proton spectra for five target thicknesses of l = 1 μm (black line), 2 μm (blue line), 4 μm (red line), 7 μm (magenta line) and 10 μm (cyan line) in the target normal direction, respectively. The proton energy spectrum for l = 4 μm (dashed red line) in the laser direction is also included in (b). (c), (d) The raw IP data of TP1 and TP2 for the best result of proton acceleration from a shot on a 4-μm Cu foil, where the inset in (c) is a magnified image of the ion trace in the high-energy region.
For 4-μm-thick foils, the average cut-off energy of protons is 60 MeV according to data from three shots, the highest of which reaches 62.5 MeV. This is among the state-of-the-art results in proton acceleration using femtosecond lasers according to previous reports[31,36]. Figures 5(c) and 5(d) show the raw IP data of TP1 and TP2 for the best case with the 4-μm Cu target, from which the proton energy spectra are extracted and presented in Figure 5(b). Note that both the cut-off energy and the particle number of protons in the target normal direction are much higher than those measured along the laser propagation direction. The cut-off energies are 42.5 MeV (laser direction) and 62.5 MeV (target normal direction), respectively. From the proton spectrum of the best shot (see Figure 5(b)), the number of protons with energies above 3.5 MeV is estimated to reach 2.4 × 10¹², corresponding to a 1.5% energy conversion efficiency (assuming the proton beam has a divergence half-angle of 10°).
Typical proton profiles from three shots on Cu targets of l = 1, 4 and 10 μm are shown in Figure 6(a) at selected layers of the RCF stacks, with the highest detected energies of 44.3, 52.1 and 32.2 MeV, respectively. These values are slightly lower than the actual ones according to the TP1 signal, which gives cut-off energies of 47.3, 58.9 and 36.3 MeV for the same shots. This is mainly due not only to the large energy interval between adjacent RCF layers but also to the use of HD-V2 type RCFs, which cannot detect the low-density protons in the high-energy region. It can be seen from Figure 6(a) that the proton signal monotonically decreases at higher energies. Figure 6(b) shows the corresponding divergence angle of the proton beam as a function of energy. Protons are more collimated at higher energies. The minimum divergences measured for l = 1, 4 and 10 μm are 8.4°, 7.6° and 12.5°, respectively. This is a typical feature of the TNSA regime, in contrast to RPA-produced protons, where the beam is more divergent at higher proton energies[37,38]. From the density profiles of protons above 32.2 MeV, it can be seen that the center of the proton beams is not exactly aligned with the target normal direction, but shifts slightly toward the laser direction in the cases of l = 1 and 4 μm (see Figure 6(a)). This is mainly due to bending of the target surface induced by the laser pre-pulse before the main pulse arrives[39].
Figure 6 (a) Typical proton profiles from three shots on Cu targets of l = 1, 4 and 10 μm at selected layers of RCF stacks corresponding to the proton energies of 11.6, 23.8, 32.2, 44.3 and 52.1 MeV, respectively. The target normal direction (0°) and laser direction (15°) are illustrated by dashed blue and red lines for 11.6 and 32.2 MeV. (b) Divergence angles of protons at different energies for l = 1 μm (blue circles), 4 μm (red squares) and 10 μm (black triangles).
The electron number distributions are also measured using IP stacks that record electrons of energy over 11.8, 14.2, 17.2, 20.2 and 23.7 MeV (electrons with energies lower than 11 MeV are stopped by the RCF stacks), respectively. As shown in Figure 7(a), the emitting direction of the electrons also shifts slightly toward the laser direction (>0°), similar to the proton beam (see Figure 6(a)). From the total electron signal within each IP shown in Figure 7(a), the electron numbers within the four energy intervals of 11.8–14.2, 14.2–17.2, 17.2–20.2 and 20.2–23.7 MeV are obtained, and the processed spectrum is displayed in Figure 7(b). The fitting curve of the electron spectrum indicates an electron temperature $T_{\mathrm{e}}$ of 7.6 MeV, which agrees reasonably well with the theoretical value of 8.6 MeV following the ponderomotive scaling law $T_{\mathrm{e}}/(m_{\mathrm{e}}c^2)=\frac{\omega_0}{2\pi}\int_0^{2\pi/\omega_0}\sqrt{1+a_0^2\sin^2(\omega_0 t)}\,\mathrm{d}t-1$ [40], where $m_{\mathrm{e}}$, $c$, $a_0$ and $\omega_0$ are the electron mass, light speed, peak normalized vector potential and angular frequency of the laser pulse, respectively (the laser intensity for this shot is 1.7 × 10²¹ W/cm²). The features mentioned above suggest that TNSA is dominant for the given laser intensity and micrometer-thick Cu foils. Both the 60 MeV proton beams obtained over several shots and the electron temperature reflect the current capability of the SULF-10 PW beamline to provide laser beams with ultrahigh intensity, ultrashort duration and relatively high contrast.
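The 8.6 MeV figure can be reproduced by evaluating the cycle average in the scaling law numerically. Below is a minimal sketch in Python; it assumes a Ti:sapphire central wavelength of 0.8 μm and the common estimate a₀ ≈ 0.85·√(I/10¹⁸ W cm⁻²)·λ[μm] for linear polarization, neither of which is stated explicitly in the text.

```python
import numpy as np

# Cycle-averaged ponderomotive electron temperature (scaling law of Ref. [40]).
I = 1.7e21                 # W/cm^2, laser intensity quoted for this shot
lam_um = 0.8               # um, assumed Ti:sapphire central wavelength
a0 = 0.85 * np.sqrt(I / 1e18) * lam_um   # peak normalized vector potential

# Average relativistic gamma factor over one optical cycle (phase = w0 * t).
phase = np.linspace(0.0, 2.0 * np.pi, 200001)
gamma_avg = np.mean(np.sqrt(1.0 + a0**2 * np.sin(phase)**2))

Te_MeV = 0.511 * (gamma_avg - 1.0)       # m_e c^2 = 0.511 MeV
print(f"a0 = {a0:.1f}, T_e = {Te_MeV:.1f} MeV")  # ~8.6 MeV
```

With a₀ ≈ 28 the cycle average gives T_e ≈ 8.6 MeV, consistent with the theoretical value quoted above and close to the measured 7.6 MeV.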
Figure 7 (a) Electron number distribution measured using IP stacks for electron energies greater than 11.8, 14.2, 17.2, 20.2 and 23.7 MeV, from the same shot on a 4-μm-thick Cu target, as illustrated in Figure 6. (b) The processed electron spectrum, where the dashed line represents the fitting curve.
3.3 Proton acceleration using nanometer-thick targets
Laser-driven proton acceleration using nanometer-thick plastic (CH) foils is also investigated here. The profiles of proton beams at selected energies for CH foils with target thicknesses of l = 30, 40 and 70 nm are exhibited in Figure 8. The highest proton energies shown by the RCF are smaller than those measured by TP1, which gives cut-off energies of 19.2, 20.0 and 19.4 MeV for l = 30, 40 and 70 nm, respectively. Clear ring-like profiles appear for l = 30 and 40 nm on all RCF layers, probably induced by the relativistic transparency effect in the nanometer-target regime[37]. In both cases, the divergence angles of protons remain almost unchanged at different energies and the center of the proton beams is well aligned along the target normal direction, as shown in Figures 8(a) and 8(b), in contrast to the TNSA case using micrometer-thick targets[39] and the RPA case[37]. Such properties indicate that protons are not effectively accelerated, since ionization and pre-expansion of nanometer-thick targets driven by pre-pulses may lead to relativistically transparent plasma[41].
Figure 8 Proton beam profiles for plain CH targets with three different thicknesses of (a1)–(a4) 30 nm, (b1)–(b4) 40 nm and (c1)–(c4) 70 nm, at selected proton energies of 4.8, 7.2, 11.6 and 15.9 MeV, respectively. The dashed lines in blue and red represent the target normal direction (0°) and laser direction (15°), respectively.
A filamented structure emerges when the target thickness increases to 70 nm (see Figures 8(c1)–8(c4)), which is possibly associated with Weibel instability[42] or wrinkles on the target surface[43]. The divergence angles of protons become smaller for more energetic protons, while the center of the proton beam profile concentrates mainly near the laser propagation axis. This is an obvious sign that the plasma is still opaque rather than transparent. Considering the use of linearly polarized lasers and the oblique incidence angle of 15°, proton acceleration in the l = 70 nm case may be dominated by a hybrid scheme in which both the hole-boring stage[44] of RPA and TNSA play important roles. The results with nanometer-thick targets show that the laser contrast of the SULF-10 PW beamline is not yet sufficient to drive more efficient acceleration schemes, such as RPA[33,34] and acceleration using structured targets[45,46]. Proton energies beyond 100 MeV are expected after further optimization of the temporal contrast and focal spot of the SULF-10 PW beamline.
4 Conclusions and perspectives
A commissioning experiment of the SULF-10 PW beamline has been carried out, focusing on laser-proton acceleration with plain Cu and plastic targets. The SULF-10 PW laser beamline can currently provide 2.4 PW peak power on target. A high-energy proton beam with a maximum cut-off energy of 62.5 MeV was achieved with Cu foils at the optimum target thickness of 4 μm via TNSA, approaching the requirement for proton tumor therapy[47]. For plastic targets of tens-of-nanometer thickness, the proton profiles show apparent ring-like or filamented structures. These experimental results illustrate the current status of the SULF-10 PW beamline.
The on-target peak power of the SULF-10 PW beamline will be increased to 10 PW after maintenance of the pump sources in the final amplifier. Further optimization of the laser intensity and contrast is under way, using a smaller f-number OAP and a conventional PM. In the near future, the peak laser intensity is expected to reach 10²²–10²³ W/cm², providing strong support for research in strong-field physics.
This work was supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB16), the National Natural Science Foundation of China (Nos. 11875307, 11935008, 11804348, 11705260, 11905278 and 11975302) and the Youth Innovation Promotion Association of Chinese Academy of Sciences (No. 2021242).
References

1. Strickland, D. and Mourou, G., Opt. Commun. 56, 219 (1985).
2. Danson, C. N., Haefner, C., Bromage, J., Butcher, T., Chanteloup, J.-C. F., Chowdhury, E. A., Galvanauskas, A., Gizzi, L. A., Hein, J., Hillier, D. I., Hopps, N. W., Kato, Y., Khazanov, E. A., Kodama, R., Korn, G., Li, R., Li, Y., Limpert, J., Ma, J., Nam, C. H., Neely, D., Papadopoulos, D., Penman, R. R., Qian, L., Rocca, J. J., Shaykin, A. A., Siders, C. W., Spindloe, C., Szatmári, S., Trines, R. M. G. M., Zhu, J., Zhu, P., and Zuegel, J. D., High Power Laser Sci. Eng. 7, e54 (2019).
3. Mourou, G., Rev. Mod. Phys. 91, 030501 (2019).
4. Macchi, A., Borghesi, M., and Passoni, M., Rev. Mod. Phys. 85, 751 (2013).
5. Di Piazza, A., Müller, C., Hatsagortsyan, K. Z., and Keitel, C. H., Rev. Mod. Phys. 84, 1177 (2012).
6. Remington, B. A., Drake, R. P., and Ryutov, D. D., Rev. Mod. Phys. 78, 755 (2006).
7. Geddes, C. G., Toth, C. S., Van Tilborg, J., Esarey, E., Schroeder, C. B., Bruhwiler, D., Nieter, C., Cary, J., and Leemans, W. P., Nature 431, 538 (2004).
8. Gonsalves, A. J., Nakamura, K., Daniels, J., Benedetti, C., Pieronek, C., de Raadt, T. C. H., Steinke, S., Bin, J. H., Bulanov, S. S., van Tilborg, J., Geddes, C. G. R., Schroeder, C. B., Toth, C., Esarey, E., Swanson, K., Fan-Chiang, L., Bagdasarov, G., Bobrova, N., Gasilov, V., Korn, G., Sasorov, P., and Leemans, W. P., Phys. Rev. Lett. 122, 084801 (2019).
9. Yan, W., Fruhling, C., Golovin, G., Haden, D., Luo, J., Zhang, P., Zhao, B., Zhang, J., Liu, C., Chen, M., Chen, S., Banerjee, S., and Umstadter, D., Nat. Photonics 11, 514 (2017).
10. Powers, N. D., Ghebregziabher, I., Golovin, G., Liu, C., Chen, S., Banerjee, S., Zhang, J., and Umstadter, D. P., Nat. Photonics 8, 28 (2013).
11. Rousse, A., Phuoc, K. T., Shah, R., Pukhov, A., Lefebvre, E., Malka, V., Kiselev, S., Burgy, F., Rousseau, J. P., Umstadter, D., and Hulin, D., Phys. Rev. Lett. 93, 135005 (2004).
12. Fujioka, S., Takabe, H., Yamamoto, N., Salzmann, D., Wang, F., Nishimura, H., Li, Y., Dong, Q., Wang, S., Zhang, Y., Rhee, Y.-J., Lee, Y.-W., Han, J.-M., Tanabe, M., Fujiwara, T., Nakabayashi, Y., Zhao, G., Zhang, J., and Mima, K., Nat. Phys. 5, 821 (2009).
13. Huntington, C. M., Fiuza, F., Ross, J. S., Zylstra, A. B., Drake, R. P., Froula, D. H., Gregori, G., Kugland, N. L., Kuranz, C. C., Levy, M. C., Li, C. K., Meinecke, J., Morita, T., Petrasso, R., Plechaty, C., Remington, B. A., Ryutov, D. D., Sakawa, Y., Spitkovsky, A., Takabe, H., and Park, H. S., Nat. Phys. 11, 173 (2015).
14. Zhang, H., Lu, H., Li, S., Xu, Y., Guo, X., Leng, Y., Liu, J., Shen, B., Li, R., and Xu, Z., Appl. Phys. Express 7, 026401 (2014).
15. Clarkson, W. A., Shori, R. K., Lureau, F., Laux, S., Casagrande, O., Chalus, O., Pellegrina, A., Matras, G., Radier, C., Rey, G., Ricaud, S., Herriot, S., Jougla, P., Charbonneau, M., Duvochelle, P. A., and Simon-Boisson, C., Proc. SPIE 9726, 972613 (2016).
16. Hernandez-Gomez, C., Blake, S. P., Chekhlov, O., Clarke, R. J., Dunne, A. M., Galimberti, M., Hancock, S., Heathcote, R., Holligan, P., Lyachev, A., Matousek, P., Musgrave, I. O., Neely, D., Norreys, P. A., Ross, I., Tang, Y., Winstone, T. B., Wyborn, B. E., and Collier, J., J. Phys. Conf. Ser. 244, 032006 (2010).
17. Hein, J., Le Garrec, B., Papadopoulos, D. N., Le Blanc, C., Zou, J. P., Chériaux, G., Georges, P., Druon, F., Martin, L., Fréneaux, L., Beluze, A., Lebas, N., Mathieu, F., and Audebert, P., Proc. SPIE 10238, 102380Q (2017).
18. Li, W., Gan, Z., Yu, L., Wang, C., Liu, Y., Guo, Z., Xu, L., Xu, M., Hang, Y., Xu, Y., Wang, J., Huang, P., Cao, H., Yao, B., Zhang, X., Chen, L., Tang, Y., Li, S., Liu, X., Li, S., He, M., Yin, D., Liang, X., Leng, Y., Li, R., and Xu, Z., Opt. Lett. 43, 5681 (2018).
19. Shen, B., Bu, Z., Xu, J., Xu, T., Ji, L., Li, R., and Xu, Z., Plasma Phys. Control. Fusion 60, 044002 (2018).
20. Leng, Y. X., Chin. J. Nat. 40, 400 (2018).
21. Liang, X., Leng, Y., Li, R., and Xu, Z., in High-brightness Sources and Light-driven Interactions Congress (OSA, 2020), paper HTh2B.2.
22. Zhang, Z., Wu, F., Hu, J., Yang, X., Gui, J., Ji, P., Liu, X., Wang, C., Liu, Y., Lu, X., Xu, Y., Leng, Y., Li, R., and Xu, Z., High Power Laser Sci. Eng. 8, e4 (2020).
23. Guo, Z., Yu, L., Wang, J., Wang, C., Liu, Y., Gan, Z., Li, W., Leng, Y., Liang, X., and Li, R., Opt. Express 26, 26776 (2018).
24. Mainfray, G. and Manus, G., Rep. Prog. Phys. 54, 1333 (1991).
25. Wang, D., Shou, Y., Wang, P., Liu, J., Li, C., Gong, Z., Hu, R., Ma, W., and Yan, X., Sci. Rep. 8, 2536 (2018).
26. Gizzi, L. A., Boella, E., Labate, L., Baffigi, F., Bilbao, P. J., Brandi, F., Cristoforetti, G., Fazzi, A., Fulgentini, L., Giove, D., Koester, P., Palla, D., and Tomassini, P., Sci. Rep. 11, 13728 (2021).
27. Mackinnon, A. J., Borghesi, M., Hatchett, S., Key, M. H., Patel, P. K., Campbell, H., Schiavi, A., Snavely, R., Wilks, S. C., and Willi, O., Phys. Rev. Lett. 86, 1769 (2001).
28. Yu, L., Xu, Y., Liu, Y., Li, Y., Li, S., Liu, Z., Li, W., Wu, F., Yang, X., Yang, Y., Wang, C., Lu, X., Leng, Y., Li, R., and Xu, Z., Opt. Express 26, 2625 (2018).
29. Yu, L., Xu, Y., Li, S., Liu, Y., Hu, J., Wu, F., Yang, X., Zhang, Z., Wu, Y., Bai, P., Wang, X., Lu, X., Leng, Y., Li, R., and Xu, Z., Opt. Express 27, 8683 (2019).
30. Prasad, R., Abicht, F., Borghesi, M., Braenzel, J., Nickles, P. V., Priebe, G., Schnürer, M., and Ter-Avetisyan, S., Rev. Sci. Instrum. 84, 053302 (2013).
31. Burdonov, K., Fazzini, A., Lelasseux, V., Albrecht, J., Antici, P., Ayoul, Y., Beluze, A., Cavanna, D., Ceccotti, T., Chabanis, M., Chaleil, A., Chen, S. N., Chen, Z., Consoli, F., Cuciuc, M., Davoine, X., Delaneau, J. P., d'Humières, E., Dubois, J. L., Evrard, C., Filippov, E., Freneaux, A., Forestier-Colleoni, P., Gremillet, L., Horny, V., Lancia, L., Lecherbourg, L., Lebas, N., Leblanc, A., Ma, W., Martin, L., Negoita, F., Paillard, J. L., Papadopoulos, D., Perez, F., Pikuz, S., Qi, G., Quéré, F., Ranc, L., Söderström, P. A., Scisciò, M., Sun, S., Vallières, S., Wang, P., Yao, W., Mathieu, F., Audebert, P., and Fuchs, J., Matter Radiat. Extremes 6, 064402 (2021).
32. Snavely, R. A., Key, M. H., Hatchett, S. P., Cowan, T. E., Roth, M., Phillips, T. W., Stoyer, M. A., Henry, E. A., Sangster, T. C., Singh, M. S., Wilks, S. C., MacKinnon, A., Offenberger, A., Pennington, D. M., Yasuike, K., Langdon, A. B., Lasinsk, B. F., Johnson, J., Perry, M. D., and Campbell, E. M., Phys. Rev. Lett. 85, 2945 (2000).
33. Esirkepov, T., Borghesi, M., Bulanov, S. V., Mourou, G., and Tajima, T., Phys. Rev. Lett. 92, 175003 (2004).
34. Kim, I. J., Pae, K. H., Choi, W., Lee, C.-L., Kim, H. T., Singhal, H., Sung, J. H., Lee, S. K., Lee, H. W., Nickles, P. V., Jeong, T. M., Kim, C. M., and Nam, C. H., Phys. Plasmas 23, 070701 (2016).
35. Kaluza, M., Schreiber, J., Santala, M. I., Tsakiris, G. D., Eidmann, K., Meyer-Ter-Vehn, J., and Witte, K. J., Phys. Rev. Lett. 93, 045003 (2004).
36. Ziegler, T., Albach, D., Bernert, C., Bock, S., Brack, F. E., Cowan, T. E., Dover, N. P., Garten, M., Gaus, L., Gebhardt, R., Goethel, I., Helbig, U., Irman, A., Kiriyama, H., Kluge, T., Kon, A., Kraft, S., Kroll, F., Loeser, M., Metzkes-Ng, J., Nishiuchi, M., Obst-Huebl, L., Puschel, T., Rehwald, M., Schlenvoigt, H. P., Schramm, U., and Zeil, K., Sci. Rep. 11, 7338 (2021).
37. Gonzalez-Izquierdo, B., King, M., Gray, R. J., Wilson, R., Dance, R. J., Powell, H., Maclellan, D. A., McCreadie, J., Butler, N. M. H., Hawkes, S., Green, J. S., Murphy, C. D., Stockhausen, L. C., Carroll, D. C., Booth, N., Scott, G. G., Borghesi, M., Neely, D., and McKenna, P., Nat. Commun. 7, 12891 (2016).
38. Carroll, D. C., McKenna, P., Lundh, O., Lindau, F., Wahlstrom, C. G., Bandyopadhyay, S., Pepler, D., Neely, D., Kar, S., Simpson, P. T., Markey, K., Zepf, M., Bellei, C., Evans, R. G., Redaelli, R., Batani, D., Xu, M. H., and Li, Y. T., Phys. Rev. E 76, 065401(R) (2007).
39. Wang, W. P., Shen, B. F., Zhang, H., Xu, Y., Li, Y. Y., Lu, X. M., Wang, C., Liu, Y. Q., Lu, J. X., Shi, Y., Leng, Y. X., Liang, X. Y., Li, R. X., Wang, N. Y., and Xu, Z. Z., Appl. Phys. Lett. 102, 224101 (2013).
40. Wilks, S. C., Kruer, W. L., Tabak, M., and Langdon, A. B., Phys. Rev. Lett. 69, 1383 (1992).
41. Lefebvre, E. and Bonnaud, G., Phys. Rev. Lett. 74, 2002 (1995).
42. Gode, S., Rodel, C., Zeil, K., Mishra, R., Gauthier, M., Brack, F. E., Kluge, T., MacDonald, M. J., Metzkes, J., Obst, L., Rehwald, M., Ruyer, C., Schlenvoigt, H. P., Schumaker, W., Sommer, P., Cowan, T. E., Schramm, U., Glenzer, S., and Fiuza, F., Phys. Rev. Lett. 118, 194801 (2017).
43. Roth, M., Plasma Phys. Control. Fusion 44, B99 (2002).
44. Psikal, J. and Matys, M., Plasma Phys. Control. Fusion 60, 044003 (2018).
45. Bailly-Grandvaux, M., Kawahito, D., McGuffey, C., Strehlow, J., Edghill, B., Wei, M. S., Alexander, N., Haid, A., Brabetz, C., Bagnoud, V., Hollinger, R., Capeluto, M. G., Rocca, J. J., and Beg, F. N., Phys. Rev. E 102, 021201(R) (2020).
46. Qin, C., Zhang, H., Li, S., Wang, N., Li, A., Fan, L., Lu, X., Li, J., Xu, R., Wang, C., Liang, X., Leng, Y., Shen, B., Ji, L., and Li, R., Commun. Phys. 5, 124 (2022).
47. Bulanov, S. V., Esirkepov, T. Z., Khoroshkov, V. S., Kuznetsov, A. V., and Pegoraro, F., Phys. Lett. A 299, 240 (2002).
Roles of peritoneal clearance and residual kidney removal in control of uric acid in patients on peritoneal dialysis
Xi Xiao1,2,
Hongjian Ye1,2,
Chunyan Yi1,2,
Jianxiong Lin1,2,
Yuan Peng1,2,
Xuan Huang1,2,
Meiju Wu1,2,
Haishan Wu1,2,
Haiping Mao1,2,
Xueqing Yu1,2 &
Xiao Yang1,2
There have been few systematic studies regarding clearance of uric acid (UA) in patients undergoing peritoneal dialysis (PD). This study investigated peritoneal UA removal and its influencing factors in patients undergoing PD.
This cross-sectional study enrolled patients who underwent peritoneal equilibration test and assessment of Kt/V from April 1, 2018 to August 31, 2019. Demographic data and clinical and laboratory parameters were collected, including UA levels in dialysate, blood, and urine.
In total, 180 prevalent patients undergoing PD (52.8% men) were included. Compared with the normal serum UA (SUA) group, the hyperuricemia group showed significantly lower peritoneal UA clearance (39.1 ± 6.2 vs. 42.0 ± 8.0 L/week/1.73 m²; P = 0.008). Furthermore, higher transporters (high or high-average) exhibited greater peritoneal UA clearance than lower transporters (low or low-average) (42.0 ± 7.0 vs. 36.4 ± 5.6 L/week/1.73 m²; P < 0.001). Among widely used solute removal indicators, peritoneal creatinine clearance showed the best performance for prediction of higher peritoneal UA clearance in receiver operating characteristic curve analysis [area under curve (AUC) 0.96; 95% confidence interval (CI) 0.93–0.99]. Peritoneal UA clearance was independently associated with continuous SUA [standardized coefficient (β) − 0.32; 95% CI − 6.42 to − 0.75] and hyperuricemia [odds ratio (OR) 0.86; 95% CI 0.76–0.98] status only in patients with lower (≤2.74 mL/min/1.73 m²) measured glomerular filtration rate (mGFR). In those patients with lower mGFR, lower albumin level (β − 0.24; 95% CI − 7.26 to − 0.99), lower body mass index (β − 0.29; 95% CI − 0.98 to − 0.24), higher transporter status (β 0.24; 95% CI 0.72–5.88) and greater dialysis dose (β 0.24; 95% CI 0.26–3.12) were independently associated with continuous peritoneal UA clearance. Furthermore, each 1 kg/m² decrease in body mass index (OR 0.79; 95% CI 0.63–0.99), each 1 g/dL decrease in albumin level (OR 0.08; 95% CI 0.01–0.47), and each 0.1% increase in average glucose concentration in dialysate (OR 1.56; 95% CI 1.11–2.19) were associated with greater peritoneal UA clearance (> 39.8 L/week/1.73 m²).
For patients undergoing PD with worse residual kidney function, peritoneal clearance dominated SUA balance. Increasing the dialysis dose or the average glucose concentration in dialysate may aid in controlling hyperuricemia in lower transporters.
Uric acid (UA; 2,6,8-trihydroxypurine; C₅H₄N₄O₃), the end-product of endogenous and dietary purine metabolism, is a weak diprotic acid with two dissociable protons (pKa1 = 5.4 and pKa2 = 10.3) [1]. At a physiological pH of 7.4, 98% of UA exists as monosodium urate in the extracellular milieu [2]. UA is poorly soluble in aqueous media and cannot move freely through the cell membrane; it is therefore excreted mainly by means of UA transporters, located chiefly in the kidney and intestines. Reportedly, approximately 70% of UA is excreted by the kidney, while 30% is excreted by the gastrointestinal tract [3, 4]. Because of the kidney's important role in excreting UA and maintaining UA balance in the internal environment, nearly 90% of hyperuricemia cases are caused by impaired renal UA excretion [5]. Accordingly, hyperuricemia is common in patients with chronic kidney disease; these patients exhibit a fivefold greater prevalence of hyperuricemia than patients with normal renal function [6].
Peritoneal dialysis (PD), a widely used dialysis modality, is becoming increasingly important in renal replacement therapy for patients with end-stage renal disease because of its cost-effectiveness and related improvements in technique and patient survival [7]. The prevalence of hyperuricemia increases as renal function declines, ranging from 40 to 70% in patients with chronic kidney disease stages 1–5 [8,9,10]. In patients receiving dialysis, the prevalence reportedly increases with dialysis vintage and is similar between patients undergoing hemodialysis and those undergoing PD [9, 11]. The effect of SUA on prognosis among patients undergoing dialysis is controversial. Most studies of patients undergoing hemodialysis showed that a lower SUA level was a risk factor for mortality [12,13,14]. However, a higher SUA level was shown to be independently associated with mortality in patients undergoing PD [15,16,17], though some studies revealed no association [13, 18]; notably, one study recently showed an inverse association [19]. These inconsistent results between hemodialysis and PD therapies are reportedly related in part to the kinetics of UA clearance in each dialysis regimen [2]. To the best of our knowledge, there have been relatively few studies regarding UA clearance, especially in patients undergoing PD. In addition to the effects of UA-lowering agents and optimization of dietary and lifestyle factors [20], dialysis therapy itself plays a role in SUA control in patients undergoing PD [21]. However, the relative roles of peritoneal UA clearance and residual renal removal in achieving adequate SUA homeostasis have not been studied. Here, we systematically investigated the contribution of peritoneal UA clearance relative to residual kidney function and identified the relevant modifiable factors of the dialysis prescription in patients undergoing PD.
This single-center cross-sectional study enrolled patients who had undergone a peritoneal equilibration test (PET) and assessment of Kt/V in our PD center from April 1, 2018 to August 31, 2019. The inclusion criteria were age ≥18 years and initiation of PD therapy at least 1 month before the PET and Kt/V tests. Patients were excluded if they had taken UA-lowering agents within 1 month before the PET and Kt/V tests, had transferred from long-term hemodialysis (i.e., longer than 3 months), had undergone failed renal transplantation, or had malignant tumors. All enrolled patients used standard lactate-glucose peritoneal dialysate (1.5%, 2.5%, or 4.25% dextrose; Baxter, Guangzhou, China). Relevant clinical parameters were measured in the clinical laboratory of the First Affiliated Hospital of Sun Yat-sen University using standard methods. All patients provided written informed consent to participate. The study was performed in accordance with the ethical principles of the Declaration of Helsinki and was approved by the Clinical Research Ethics Committee of the First Affiliated Hospital of Sun Yat-sen University.
Demographic data were collected, including age, sex, body mass index (BMI), diabetes status, cardiovascular disease status, and primary kidney disease. Data from the first PET and Kt/V tests during the study period were collected. The PD-related data included dialysis vintage, dialysis dose, average glucose concentration in dialysate, measured glomerular filtration rate (mGFR), Kt/V, weekly total creatinine clearance (CCL), 24-h residual urine volume, normalized protein catabolic rate and standard PET data. The standard PET data comprised the urea, creatinine, and UA levels in dialysate, blood, and urine samples with 2 L of 2.5% dextrose dialysate dwelling for 0, 2 or 4 h; 0 h was the time point in the PET at which all of the 2 L of dialysate had flowed into the abdominal cavity, and the duration of this process was recorded. Patients undergoing PD were classified as high, high-average, low-average, or low transporters in accordance with Twardowski's criteria [22]. Clinical parameters included blood pressure, hemoglobin, neutrophil/lymphocyte ratio, high-sensitivity C-reactive protein, serum albumin, prealbumin, corrected calcium, phosphorus, total cholesterol, triglyceride, serum urea nitrogen, creatinine, SUA and intact parathyroid hormone. Medication history was also collected, using the follow-up records of patients who regularly visited our PD center for assessment and therapeutic regimen adjustment at 1–3-month intervals. Cardiovascular disease was defined as current or prior angina, myocardial infarction, congestive heart failure, cerebrovascular events, or peripheral vascular disease [23]. The Charlson comorbidity score was used to evaluate the comorbidities of enrolled patients [24]. Men with SUA > 420 μmol/L or women with SUA > 360 μmol/L were regarded as hyperuricemic. The data for mGFR, Kt/V, CCL and normalized protein catabolic rate were obtained using PD Adequest software 2.0 (Baxter, Deerfield, IL, USA). Body surface area (BSA) was calculated using the well-known DuBois & DuBois formula [25]. UA clearance was calculated using the following formulae:
$$ \text{Renal UA clearance (L/week/1.73 m}^2) = \frac{\text{UA}_{\text{urine}}\ (\mu\text{mol/L}) \times 24\,\text{h urine output (L)} \times 7 \times 1.73\ \text{m}^2}{\text{SUA}\ (\mu\text{mol/L}) \times \text{BSA (m}^2)} $$

$$ \text{Peritoneal UA clearance (L/week/1.73 m}^2) = \frac{\text{UA}_{\text{dialysate}}\ (\mu\text{mol/L}) \times 24\,\text{h dialysate output (L)} \times 7 \times 1.73\ \text{m}^2}{\text{SUA}\ (\mu\text{mol/L}) \times \text{BSA (m}^2)} $$

$$ \text{Total UA clearance} = \text{Renal UA clearance} + \text{Peritoneal UA clearance} $$
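As a concrete illustration of this arithmetic, the sketch below implements the two clearance formulae together with the DuBois & DuBois BSA formula cited above. The input values are made-up examples for illustration only, not patient data from the study.

```python
def dubois_bsa(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2) by the DuBois & DuBois formula [25]."""
    return 0.007184 * weight_kg**0.425 * height_cm**0.725

def weekly_clearance(fluid_ua_umol_l: float, fluid_volume_l: float,
                     serum_ua_umol_l: float, bsa_m2: float) -> float:
    """Weekly UA clearance normalized to 1.73 m^2 (L/week/1.73 m^2).
    The fluid is either 24 h urine or 24 h drained dialysate."""
    return fluid_ua_umol_l * fluid_volume_l * 7 * 1.73 / (serum_ua_umol_l * bsa_m2)

# Illustrative inputs: SUA near the cohort mean (410 umol/L), hypothetical
# 24 h urine and dialysate collections.
bsa = dubois_bsa(weight_kg=60.0, height_cm=165.0)
renal = weekly_clearance(1500.0, 0.8, 410.0, bsa)        # residual urine
peritoneal = weekly_clearance(250.0, 10.0, 410.0, bsa)   # drained dialysate
total = renal + peritoneal
print(f"renal {renal:.1f} + peritoneal {peritoneal:.1f} "
      f"= total {total:.1f} L/week/1.73 m^2")
```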
Enrolled patients were divided into two groups according to the median peritoneal UA clearance. Data are presented as mean ± standard deviation for normally distributed continuous variables, median (interquartile range) for non-normally distributed continuous variables, and frequency and percentage for categorical variables. Differences between the lower and higher peritoneal UA clearance groups were analyzed using independent-samples t-tests for normally distributed continuous variables, the Mann–Whitney U test for non-normally distributed continuous variables, and chi-squared tests for categorical variables. Pearson or Spearman rank correlation tests were used to evaluate correlations between normally or non-normally distributed variables, respectively. Multiple linear regression and binary logistic regression were performed to explore the independent influencing factors of continuous and categorical SUA and peritoneal UA clearance in the total, lower (≤2.74 mL/min/1.73 m²) and higher (> 2.74 mL/min/1.73 m²) mGFR groups, respectively. After excluding potential effects of multicollinearity, variables that were significant in univariate analysis (P < 0.05) and those with clinical relevance were entered into the final model. The performance of small solute removal indicators for prediction of higher peritoneal UA clearance was tested using area under the receiver operating characteristic curve analysis. Two-sided P values < 0.05 were regarded as statistically significant. All statistical analyses were conducted in SPSS Statistics software (version 20.0; IBM Corp., Armonk, NY, USA).
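For readers who want to reproduce the ROC step outside SPSS, the sketch below shows the equivalent analysis in Python with scikit-learn: the binary label is membership in the higher peritoneal UA clearance group (split at the median), and the predictor is one candidate solute removal indicator. The data here are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 180  # same cohort size as the study

# Synthetic indicator and outcome, loosely mimicking the reported scales.
peritoneal_ccl = rng.normal(45.0, 10.0, n)                    # L/week/1.73 m^2
ua_clearance = 0.8 * peritoneal_ccl + rng.normal(0.0, 3.0, n)

# Label: higher peritoneal UA clearance group (above the median).
higher_group = (ua_clearance > np.median(ua_clearance)).astype(int)
print(f"AUC: {roc_auc_score(higher_group, peritoneal_ccl):.2f}")
```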
As shown in Fig. 1, 180 patients were included in this study (mean age, 45.0 ± 13.4 years; 52.8% men; 13.3% with diabetes). Primary kidney diseases included chronic glomerulonephritis (67.2%), diabetic nephropathy (8.9%), hypertensive lesions (7.8%), and others (16.1%). The mean SUA level was 410 ± 72 μmol/L; 15.0% of patients had used diuretics within 1 month before the PET and Kt/V tests performed at enrollment. Patients with higher peritoneal UA clearance were older, more often women, and had lower BMI, serum albumin, and SUA levels (Table 1). PD-related data are shown in Table 2. Overall, the patients had a median dialysis vintage of 1.6 (1.4–19.8) months and a mean peritoneal UA clearance of 40.2 ± 7.1 L/week/1.73 m². Patients with higher peritoneal UA clearance had longer PD vintage, as well as higher average glucose concentration in dialysate, dialysis dose, total Kt/V, peritoneal Kt/V, and peritoneal CCL; they also had lower residual renal Kt/V, renal CCL, residual urine volume, and mGFR. Notably, this group contained a larger proportion of high transporters and a smaller proportion of low-average transporters.
The flow chart of the enrollment process for patients undergoing PD in the study. HD, hemodialysis; PD, peritoneal dialysis; PET, peritoneal equilibration test; UA, uric acid
Table 1 The demographic characteristics of enrolled patients in the study
Table 2 The PD-related information of patients
Relationships between peritoneal UA clearance and peritoneal transport characteristics
Distributions of peritoneal UA clearance according to peritoneal transport characteristics are shown in Fig. 2a; peritoneal UA clearance increased progressively with the peritoneal transport rate. Notably, higher transporters (high or high-average) exhibited significantly greater peritoneal UA clearance than lower transporters (low-average or low) (42.0 ± 7.0 vs. 36.4 ± 5.6 L/week/1.73 m²; P < 0.001). As shown in Fig. 2b, the 4 h dialysate-to-plasma (D/P) UA was strongly correlated with the 4 h D/P creatinine (r = 0.97; P < 0.001). Moreover, the correlations were similar between 4 h D/P UA and peritoneal UA clearance (r = 0.47; P < 0.001) and between 4 h D/P creatinine and peritoneal UA clearance (r = 0.46; P < 0.001) (Fig. 2c and d). Among widely used small solute removal indicators, peritoneal CCL showed the best performance in receiver operating characteristic curve analysis [area under curve (AUC), 0.96; 95% confidence interval (CI), 0.93–0.99; P < 0.001] for prediction of higher peritoneal UA clearance (Fig. 3).
The effects of peritoneal transport characteristics on peritoneal UA clearance. a Distribution of peritoneal UA clearance according to peritoneal transport characteristics. b Correlation between 4 h D/P creatinine and 4 h D/P UA. c Correlation between 4 h D/P UA and peritoneal UA clearance. d Correlation between 4 h D/P creatinine and peritoneal UA clearance. D/P, dialysate to plasma; UA, uric acid
The performance of different small solute removal indicators for prediction of higher peritoneal UA clearance (> 39.8 L/week/1.73 m²) in receiver operating characteristic curve analysis. CCL, creatinine clearance; CI, confidence interval; D/P, dialysate to plasma; UA, uric acid
Relationships of peritoneal UA removal with SUA in patients undergoing PD
The distribution of SUA in patients undergoing PD is shown in Fig. 4a. The average mass transfers of urea, creatinine and UA with 2 L of 2.5% dextrose dialysate for dwell times of 0 and 4 h among all patients undergoing PD are described in Fig. 4b. As for the small molecules urea and creatinine, the peritoneal mass transfer of UA declined markedly as dwell time increased. Whereas the average UA mass transfer at a 4 h dwell time was positively correlated with SUA (r = 0.55; P < 0.001), peritoneal UA clearance was negatively correlated with SUA (r = − 0.25; P = 0.001) (Fig. 4c and d). Compared with the normal SUA group, the hyperuricemia group showed significantly lower peritoneal UA clearance (39.1 ± 6.2 vs. 42.0 ± 8.0 L/week/1.73 m²; P = 0.008). Further multiple linear regression and binary logistic regression analyses (Table 3) revealed that peritoneal UA clearance was independently associated with continuous SUA (β − 0.32; 95% CI − 6.42 to − 0.75; P = 0.01) and hyperuricemia status (OR 0.86; 95% CI 0.76–0.98; P = 0.02) only in patients undergoing PD who had lower mGFR.
The correlation between peritoneal UA clearance and SUA. a Distribution of SUA in PD patients enrolled. b Average dialytic mass transfer of urea, creatinine and UA with 2 L of 2.5% glucose-based dialysate for dwell times of 0 and 4 h. c Correlation between the 4 h UA mass transfer and SUA. d Correlation between peritoneal UA clearance and SUA. ***P < 0.001, 0 h vs 4 h. SUA, serum uric acid; UA, uric acid
Table 3 The relationships between peritoneal UA clearance and SUA in linear regression and logistic regression models in total, lower and higher mGFR patients, respectively
Independent factors influencing peritoneal UA clearance
As shown in Table 4, after adjusting for relevant demographic and PD-related variables in the multiple linear regression model, serum albumin level (β − 0.24; 95% CI − 7.26 to − 0.99; P = 0.01) and BMI (β − 0.29; 95% CI − 0.98 to − 0.24; P = 0.001) were both negatively associated with peritoneal UA clearance, while higher transporter status (β 0.24; 95% CI 0.72–5.88; P = 0.01) and dialysis dose (β 0.24; 95% CI 0.26–3.12; P = 0.02) were positively associated with peritoneal UA clearance in the lower mGFR group. Similarly, binary logistic regression analysis revealed that each 0.1% increase in average glucose concentration in dialysate (OR 1.56; 95% CI 1.11–2.19; P = 0.01), each 1 g/dL decrease in albumin level (OR 0.08; 95% CI 0.01–0.47; P = 0.006), and each 1 kg/m² decrease in BMI (OR 0.79; 95% CI 0.63–0.99; P = 0.04) were independently associated with greater peritoneal UA clearance (> 39.8 L/week/1.73 m²) (Table 5).
Table 4 Associated factors of peritoneal UA clearance in multiple linear regression in total, lower and higher mGFR patients, respectively
Table 5 Independent determinants of higher peritoneal UA clearance (> 39.8 L/week/1.73 m²) in binary logistic regression in total, lower and higher mGFR patients, respectively
The results of this study showed that peritoneal UA removal played a significant role in SUA control. Moreover, lower albumin and BMI, higher peritoneal transporter status, greater dialysis dose, and higher glucose concentration in dialysate were independently associated with greater peritoneal UA clearance in patients undergoing PD who had worse residual kidney function.
To the best of our knowledge, this is the first systematic analysis of UA clearance and its independent influencing factors in patients undergoing PD. As a small molecular solute, the vast majority of UA is present in ionized form; ≤5% of circulating UA is bound to albumin [4]. Because UA is highly hydrophilic and has a sieving coefficient of 1.01, it diffuses easily through the dialysis membrane and is presumed to be sufficiently cleared by PD therapy [2, 9]. A previous study showed that UA clearance was inversely proportional to the PD dwell time; specifically, the average UA mass transfers for dwell times of 0–1 h, 1–4 h and 4–8 h with 2 L of 1.5% dialysate were 49.8 ± 3.9, 16.1 ± 1.0 and 8.3 ± 0.6 mg/h/1.73 m², respectively [26]. Similarly, we found a remarkable reduction in UA mass transfer, from 72.9 ± 41.1 to 23.9 ± 5.6 mg/h/1.73 m², as the dwell time of 2 L of 2.5% dialysate increased from 0 to 4 h. In addition, we found an average peritoneal UA clearance of 40.2 ± 7.1 L/week/1.73 m² in patients undergoing PD.
In the present study, peritoneal UA clearance was significantly greater in higher transporters than in lower transporters, as classified by the 4 h D/P creatinine; moreover, the 4 h D/P UA was strongly correlated with the 4 h D/P creatinine. Further analysis revealed similar correlations between 4 h D/P UA and peritoneal UA clearance and between 4 h D/P creatinine and peritoneal UA clearance. Moreover, receiver operating characteristic curve analysis revealed that, among widely used solute removal indicators, peritoneal CCL showed the best performance for prediction of higher peritoneal UA clearance. These results illustrate that membrane characteristics, assessed in terms of creatinine transport, can be used to determine UA transport status. This similarity is presumably because the molecular weight of UA (168 Da) is near that of creatinine (113 Da); in addition, little circulating UA is bound to albumin or affected by an electrochemical gradient, whereas serum phosphorus is affected in this manner [27]. The present study also revealed that evaluating peritoneal UA clearance solely in terms of the most frequently used indicator of peritoneal adequacy (i.e., Kt/V) may not be sufficiently accurate. Peritoneal CCL may be a more reliable index for assessing the adequacy of UA clearance. Adjusting the dialysis prescription for better PD-related UA removal on the basis of peritoneal CCL, rather than the widely used Kt/V, is presumably more appropriate, particularly for lower transporters with hyperuricemia.
A negative correlation was observed between peritoneal UA clearance and SUA in the present study; further multiple linear and logistic regression analyses suggested that greater peritoneal UA clearance was significantly associated with lower SUA only in patients undergoing PD who had relatively low mGFR. This suggests that the kidney still plays an indispensable role in removing excess SUA in patients with residual kidney function, and that the importance of peritoneal UA clearance gradually becomes evident as residual kidney removal declines. Therefore, the high SUA in patients undergoing PD who have poor residual kidney function may be partially caused by inadequate UA removal during PD. In the present study, we found that lower BMI and albumin level, higher transporter status, greater dialysis dose, and higher glucose concentration in dialysate were significantly associated with greater peritoneal UA clearance in the lower mGFR group. BMI is a body composition parameter that was strongly correlated with BSA in this study (data not shown); accordingly, patients undergoing PD who had lower BMI may exhibit greater peritoneal UA removal after adjustment for their relatively lower BSA. In addition, the serum albumin level was associated with both continuous peritoneal UA clearance and the higher peritoneal UA clearance category in patients with worse renal function. Previous studies have shown negative correlations between peritoneal albumin loss and serum albumin level in patients undergoing PD [26, 28]. Furthermore, peritoneal albumin loss was demonstrated to be positively associated with peritoneal CCL in a cross-sectional study of 351 patients undergoing PD [29]. Therefore, a potential mechanism underlying the negative association between peritoneal UA clearance and serum albumin level is as follows: greater peritoneal UA clearance itself indicates greater removal of albumin through the peritoneum, which lowers circulating albumin reserves, primarily because albumin synthesis cannot fully compensate for peritoneal albumin loss [26]. Therefore, peritoneal albumin loss should be considered when optimizing the dialysis prescription for efficient solute removal. However, it remains unclear whether there is a causal relationship between lower albumin level and greater peritoneal UA clearance.
Notably, few studies have explored PD-related factors associated with greater peritoneal UA clearance, particularly in relation to residual kidney function; average UA clearance in PD has been shown to be positively proportional to the exchange volume and flow rate [21]. In the present study, patients who were higher transporters exhibited greater peritoneal UA removal than patients who were lower transporters. Apart from the non-modifiable peritoneal membrane characteristics, modifiable dialysis dose factors significantly increased PD-related UA removal as well. In patients with worse residual kidney function, the average glucose concentration in dialysate tended to be associated with greater peritoneal UA removal, although this was not statistically significant in linear regression (β 0.19; 95% CI − 0.004 to 0.88; P = 0.052); however, the effect was statistically significant in logistic regression (OR 1.56; 95% CI 1.11–2.19; P = 0.01). Therefore, the positive effect of the glucose concentration in dialysate may have been masked by the relatively small sample size after grouping based on residual kidney function.
There were some limitations in our study. First, a cross-sectional observational study can assess only associations, not causal relationships. Second, this study did not explore the effects of different PD modalities (e.g., continuous cyclic peritoneal dialysis, automated peritoneal dialysis) or different exchange flow rates on peritoneal UA clearance. Third, the dialysis vintages of enrolled patients were relatively short, which may have introduced bias from inadequate and unstable dialysis. Fourth, only a small number of patients lacked residual kidney function; therefore, residual kidney function was classified on the basis of the median mGFR, rather than the clinical standards of oliguria or anuria. Despite these limitations, to the best of our knowledge, this was the first study to systematically explore the contributions of peritoneal UA clearance and residual kidney removal, and to identify independent factors that influence peritoneal UA clearance. We concurrently collected common small solute removal indicators for comparison and analysis, which provides useful guidance for optimizing the prescription to achieve better SUA control in patients undergoing PD, especially those with worse residual kidney function. Moreover, we excluded patients undergoing PD who had a history of taking UA-lowering agents, which enabled us to study UA clearance by the PD regimen more specifically.
In summary, UA removal in patients undergoing PD was found to rely more on peritoneal clearance, especially in patients with relatively worse residual kidney function. Peritoneal CCL may be an optimal indicator for assessing UA removal during PD because of the similar removal characteristics of creatinine and UA through the dialysis membrane. For patients with unsatisfactory residual kidney function, increasing the dialysis dose or the average glucose concentration in dialysate may aid in controlling hyperuricemia, particularly in patients who are lower transporters.
Abbreviations

BSA: Body surface area
CCL: Creatinine clearance
CI: Confidence interval
D/P: Dialysate to plasma
mGFR: Measured glomerular filtration rate
PET: Peritoneal equilibration test
SUA: Serum uric acid
UA: Uric acid

References
We are very grateful to the doctors and nurses in our PD centers for their earnest work in clinical evaluation and data collection. We thank Ryan Chastain-Gross, Ph.D., from Liwen Bianji, Edanz Group China (www.liwenbianji.cn/ac), for editing the English text of a draft of this manuscript.
This work was supported by the Natural Science Foundation of China (Grant no. 81774069, 81570614), the National Key Research and Development Program (Grant no. 2016YFC0906101), the Program of the Ministry of Health of China (201502023), the Guangdong Science Foundation of China (Grant 2017A050503003, 2017B020227006), the Foundation of Guangdong Key Laboratory of Nephrology (Grant no. 2017B030314019), and the Guangzhou Committee of Science and Technology, China (201704020167). This funding supported the data collection, management, and analysis, as well as the manuscript editing and processing charges for publication.
Department of Nephrology, The First Affiliated Hospital, Sun Yat-sen University, 58th, Zhongshan Road II, Guangzhou, 510080, China
Xi Xiao, Hongjian Ye, Chunyan Yi, Jianxiong Lin, Yuan Peng, Xuan Huang, Meiju Wu, Haishan Wu, Haiping Mao, Xueqing Yu & Xiao Yang
Key Laboratory of Nephrology, Committee of Health and Guangdong Province, Guangzhou, 510080, China
XY1 and XY2 designed the research; HY and XX conducted the research; CY, JL, and YP collected data; XH, MW, and HW analyzed the data; HM and XY1 interpreted the findings; XX and XY2 wrote the paper; XY2 had primary responsibility for the whole content and final approval of the version to be published. All authors read and approved the final manuscript.
Correspondence to Xiao Yang.
The study conformed to the ethical principles of the Declaration of Helsinki and was approved by the Clinical Research Ethics Committee of the First Affiliated Hospital of Sun Yat-sen University. All patients signed informed consent.
Xiao, X., Ye, H., Yi, C. et al. Roles of peritoneal clearance and residual kidney removal in control of uric acid in patients on peritoneal dialysis. BMC Nephrol 21, 148 (2020). https://doi.org/10.1186/s12882-020-01800-1
Residual kidney function
Isomorphisms of some algebras of analytic functions of bounded type on Banach spaces
S.I. Halushchak, Vasyl Stefanyk Precarpathian National University
Keywords: homogeneous polynomials on Banach spaces; symmetric analytic functions; spectra of algebras of analytic functions
The theory of analytic functions is an important branch of nonlinear functional analysis. Many modern investigations study topological algebras of analytic functions and the spectra of such algebras. In this work we investigate the properties of topological algebras of entire functions generated by countable sets of homogeneous polynomials on complex Banach spaces.
Let $X$ and $Y$ be complex Banach spaces. Let $\mathbb{A}= \{A_1, A_2, \ldots, A_n, \ldots\}$ and $\mathbb{P}=\{P_1, P_2, \ldots, P_n, \ldots\}$ be sequences of continuous, algebraically independent homogeneous polynomials on the spaces $X$ and $Y$, respectively, such that $\|A_n\|_1=\|P_n\|_1=1$ and $\deg A_n=\deg P_n=n,$ $n\in \mathbb{N}.$ We consider the subalgebras $H_{b\mathbb{A}}(X)$ and $H_{b\mathbb{P}}(Y)$ of the Fr\'{e}chet algebras $H_b(X)$ and $H_b(Y)$ of entire functions of bounded type, generated by the sets $\mathbb{A}$ and $\mathbb{P}$, respectively. It is easy to see that $H_{b\mathbb{A}}(X)$ and $H_{b\mathbb{P}}(Y)$ are Fr\'{e}chet algebras as well.
In this paper we investigate conditions under which the topological algebras $H_{b\mathbb{A}}(X)$ and $H_{b\mathbb{P}}(Y)$ are isomorphic. We also present some applications for algebras of symmetric analytic functions of bounded type. In particular, we consider the subalgebra $H_{bs}(L_{\infty})$ of entire functions of bounded type on $L_{\infty}[0,1]$ which are symmetric, i.e., invariant with respect to measurable bijections of $[0,1]$ that preserve the measure. We prove that $H_{bs}(L_{\infty})$ is isomorphic to the algebra of all entire functions of bounded type generated by a countable set of homogeneous polynomials on the complex Banach space $\ell_{\infty}.$
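For illustration (a standard example from the literature on symmetric analytic functions, not stated explicitly in this abstract): on $L_{\infty}[0,1]$ the power polynomials
$$R_n(x)=\int_0^1 \bigl(x(t)\bigr)^n\,dt, \qquad n\in\mathbb{N},$$
are $n$-homogeneous and symmetric, and families of this kind are natural candidates for the countable generating sets considered here.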
Halushchak S. Isomorphisms of some algebras of analytic functions of bounded type on Banach spaces. Mat. Stud. [Internet]. 2021 Oct. 23 [cited 2022 Jan. 20];56(1):107-12. Available from: http://www.matstud.org.ua/ojs/index.php/matstud/article/view/196
Copyright (c) 2021 S. I. Halushchak
Automatic sleep staging using ear-EEG
Kaare B. Mikkelsen ORCID: orcid.org/0000-0002-7360-86291,
David Bové Villadsen1,
Marit Otto2 &
Preben Kidmose1
Sleep and sleep quality assessment by means of sleep stage analysis is important for both scientific and clinical applications. Unfortunately, the presently preferred method, polysomnography (PSG), requires considerable expert assistance and significantly affects the sleep of the person under observation. A reliable, accurate and mobile alternative to the PSG would make sleep information much more readily available in a wide range of medical circumstances.
New method
Using an already proven method, ear-EEG, in which electrodes are placed inside the concha and ear canal, we measure cerebral activity and automatically score sleep into up to five stages. These results are compared to manual scoring by trained clinicians, based on a simultaneously recorded PSG.
The correspondence between manually scored sleep, based on the PSG, and the automatic labelling, based on ear-EEG data, was evaluated using Cohen's kappa coefficient. Kappa values are in the range 0.5–0.8, making ear-EEG relevant for both scientific and clinical applications. Furthermore, a sleep-wake classifier with leave-one-out cross validation yielded specificity of 0.94 and sensitivity of 0.52 for the sleep stage.
Comparison with existing method(s)
Ear-EEG based scoring has clear advantages when compared to both the PSG and other mobile solutions, such as actigraphs. It is far more mobile, and potentially cheaper than the PSG, and the information on sleep stages is far superior to a wrist-based actigraph, or other devices based solely on body movement.
This study shows that ear-EEG recordings carry information about sleep stages, and indicates that automatic sleep staging based on ear-EEG can classify sleep stages with a level of accuracy that makes it relevant for both scientific and clinical sleep assessment.
Sleep [1] and the quality of sleep has a decisive influence on general health [2,3,4], and sleep deprivation is known to have a negative impact on overall feeling of well-being, and on cognitive performance such as attention and memory [5]. However, sleep quality is difficult to measure, and the current gold standard, polysomnography (PSG) [6] requires expert assistance and expensive equipment. Moreover, characterizing sleep by means of conventional PSG equipment will inevitably have a negative impact on the sleep, and thereby bias the sleep quality assessment. Because of the need for professional assistance in PSG acquisition, and because of the laborious process to evaluate PSG data, sleep assessment is in most cases limited to a single or a few nights of sleep.
Due to these circumstances, there is an ongoing effort to explore other options for high-quality sleep monitoring [7, 8]. A very promising candidate in this field is ear-EEG [9], due to its potential portability and the fact that it conveys much of the same information as the PSG, namely EEG data [10]. It is likely that the ear-EEG technology will have a much lower impact on the quality of sleep, giving a more accurate picture of the sleep, and also be suitable for sleep assessment over longer periods of time. Recently, the feasibility of ear-EEG for sleep assessment has been studied in a few exploratory papers [11,12,13], all indicating that ear-EEG is a very promising candidate.
This paper is based on a new dataset comprising nine healthy subjects recorded simultaneously with both PSG and ear-EEG for one night. This is significantly more sleep data than in previous studies. Trained clinicians manually scored the sleep following the guidelines of the American Academy of Sleep Medicine (AASM) [14]. The sleep staging based on ear-EEG used an automatic approach, in which a statistical classifier was trained on the labels from the manual scoring (for other examples of this, see [15,16,17]). Automatic sleep staging was chosen for two reasons: (i) there was no established methodology for sleep staging based on ear-EEG, while the machine learning approach provided rigorous and unbiased sleep staging; and (ii) the question of whether a given method can also be used without manual scoring is important whenever wearable devices for long-term monitoring are discussed.
In the "Results" section below, additional support for this reasoning is presented, based on waveforms.
Research subjects
For this study, nine healthy subjects were recruited, aged 26–44, three of whom were female. Measurements were all conducted in the same way: subjects first had a partial PSG [consisting of six-channel EEG, electrooculography (EOG), and electromyography (EMG) on the chin] mounted by a professional at a local sleep clinic. Subsequently, the subject was transported to our laboratory, where the ear-EEG was mounted.
The subjects went home and slept with the equipment (both PSG and ear-EEG) for the night, and removed it themselves in the morning. The subjects were instructed to keep a cursory diary of the night, detailing comfort and whether the ear-EEG ear plugs stayed in during the night.
EEG hardware
The ear plugs used in this study were shaped very similarly to those used in [18], with the difference that the plugs here were made from soft silicone, and the electrodes were solid silver buttons soldered to copper wires. See Fig. 1 for an example of a left-ear plug. Before insertion, the outer ears were cleaned using skin preparation gel (NuPrep, Weaver and Company, USA) and electrode gel (Ten20, Weaver and Company, USA) was applied to the electrodes. Ear-EEG electrodes were ELA, ELB, ELE, ELI, ELG, ELK, ERA, ERB, ERE, ERI, ERG, ERK, as defined in [19].
Example left-ear ear plug with silver electrodes.
As described in [18], ear-EEG electrodes were validated by measuring the auditory steady state responses (ASSR) using 40 Hz amplitude modulated white noise, which was performed while the subject was still in the laboratory. All electrodes (including ear-EEG) were connected to the same amplifier (Natus xltek, Natus Medical Incorporated, USA), and ear-EEG electrodes were Cz-referenced during the recording. The PSG consisted of two EOG electrodes, two chin EMG electrodes and 8 scalp electrodes (O1, O2, C3, C4, A1, A2, F3, F4 by the 10–20 naming convention). The data was sampled at 200 Hz.
Sleep scoring
Manual scoring
All PSG-measurements were scored by trained experts at the local sleep clinic, according to the AASM guidelines [14]. The scorers did not use the ear-EEG data in any way, and did not receive any special instructions regarding this data. Scoring was done based on 30-s non-overlapping epochs, such that each epoch was assigned a label from the set: W, REM, NREM1, NREM2, NREM3. We direct the reader to the established sleep literature (such as [14]) for a discussion of these labels.
Automatic scoring
To investigate the hypothesis that ear-EEG data can be used for sleep scoring, machine learning was used to train an automatic classifier to mimic the scoring of the sleep experts. The analysis pipe line used for this is described below.
Channel rejection
Even though the ear-EEG electrodes were qualified in the lab by measuring an ASSR, it was found in the analysis of the sleep EEG that some of the ear-EEG channels were noisy. This was probably due to a deterioration in the electrode-body contact from the time when the subject left the lab until they went to bed. The deterioration may also be related to deformation of the ear when the subject laid their head on the pillow. Because of this deterioration, it was necessary to perform a channel rejection prior to the analysis of the data. This was done in the following way:
All intra-ear derivations were calculated, and the power in the 10–35 Hz frequency band was calculated. If \(p_{ij}\) is the power calculated for the derivation consisting of channels i and j, let \(m_i=\text {median} \left( \{p_{ij} \}_{j}\right)\). Electrode i was then rejected if \(m_i>5\cdot 10^{-12}\,\text {V}^2/\text {Hz}\). This uses the fact that a high-impedance electrode will tend to have much more high-frequency noise, and that this will be the case for all derivations that it takes part in. Elegantly, it does not require a simultaneous 'ground truth' electrode, such as a scalp measurement, to determine good and bad electrodes. The value of \(5\cdot 10^{-12}\,\text {V}^2/\text {Hz}\) was determined by observing which value cleanly separated the electrodes into two groups, commensurate with the knowledge from the ASSR measurements and the subject diaries (for instance, one subject reported having removed one ear plug entirely before falling asleep). See Appendix A for a visualization of this separation. In total, 14 electrodes were rejected out of a possible 72, resulting in a rejection rate of 19%.
We note that the band-pass filtering of 10–35 Hz was only chosen and performed for the sake of this channel rejection. The non-filtered data set was passed to the next stage of the analysis, as described below.
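As an illustration of this rejection rule, here is a minimal Python sketch (ours, not the authors' code). It assumes `eeg` is a channels-by-samples NumPy array from one ear, sampled at 200 Hz; the 10–35 Hz band, the median over derivations, and the \(5\cdot 10^{-12}\,\text {V}^2/\text {Hz}\) threshold follow the text, while the Welch settings here are illustrative.

```python
import numpy as np
from scipy.signal import welch

FS = 200.0         # sampling rate stated in the text
THRESHOLD = 5e-12  # V^2/Hz, the rejection threshold from the text

def band_power(x, fs=FS, lo=10.0, hi=35.0):
    """Mean power spectral density of x in the [lo, hi] Hz band."""
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def rejected_channels(eeg):
    """Return indices i whose median derivation power m_i exceeds the threshold."""
    n = eeg.shape[0]
    bad = []
    for i in range(n):
        # p_ij for every intra-ear derivation involving channel i
        powers = [band_power(eeg[i] - eeg[j]) for j in range(n) if j != i]
        if np.median(powers) > THRESHOLD:  # m_i from the text
            bad.append(i)
    return bad
```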
The eight ear-EEG channels were distilled into three derivations (\(\left\langle \cdot \right\rangle\) denotes average):
$$\begin{aligned} \text {L-R: }&\left\langle ELA, ELB, ELE, ELI, ELG, ELK \right\rangle \\&{\qquad \qquad \qquad } -\left\langle ERA, ERB, ERE, ERI, ERG, ERK \right\rangle \\ \text {L: }&\left\langle ELA, ELB \right\rangle -\left\langle ELE, ELI, ELG, ELK \right\rangle \\ \text {R: }&\left\langle ERA, ERB \right\rangle -\left\langle ERE, ERI, ERG, ERK \right\rangle \end{aligned}$$
Note that the L and R channels describe the potential differences between the concha and ear-canal electrodes in each ear. If an electrode was marked as bad, it was excluded from the averages. If this meant that one of the derivations could not be calculated (for instance, if both ELA and ELB were missing), that derivation was substituted with a copy of one of the others. This was only done in the case of subject 5, who was missing data from the right ear plug.
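A hedged sketch of these derivations (illustrative, not the authors' code): `chans` maps electrode names to 1-D NumPy arrays, rejected electrodes are simply absent, and a missing derivation is substituted with a copy of another, as described above.

```python
import numpy as np

LEFT = ["ELA", "ELB", "ELE", "ELI", "ELG", "ELK"]
RIGHT = ["ERA", "ERB", "ERE", "ERI", "ERG", "ERK"]

def mean_of(chans, names):
    present = [chans[n] for n in names if n in chans]
    return np.mean(present, axis=0) if present else None

def derivations(chans):
    halves = {
        "L-R": (mean_of(chans, LEFT), mean_of(chans, RIGHT)),
        "L": (mean_of(chans, LEFT[:2]), mean_of(chans, LEFT[2:])),    # concha - canal
        "R": (mean_of(chans, RIGHT[:2]), mean_of(chans, RIGHT[2:])),  # concha - canal
    }
    out = {k: a - b for k, (a, b) in halves.items()
           if a is not None and b is not None}
    for k in halves:  # substitute a copy if a derivation could not be calculated
        if k not in out:
            out[k] = next(iter(out.values())).copy()
    return out
```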
When choosing features, we were inspired by [15] and chose the list of features shown in Table 1. Of these, a subset were not used by [15] and are described in Appendix B. In general, the time and frequency domain features were based on a 2–32 Hz bandpass-filtered signal, while the passbands for EOG and EMG features were 0.5–30 and 32–80 Hz, respectively. A 50 Hz notch filter was also applied. All frequency domain features were based on power spectrum estimates using Welch's algorithm with a segment length of 2 s, 1 s overlap, and a Hanning window applied to each segment.
It is important to stress that the EOG and EMG proxy features discussed in this paper were extracted entirely from ear-EEG data—no EOG or EMG electrodes were used in the analysis. This was to distill as much information about EOG and EMG variation as possible from the ear-EEG data.
Table 1 Features used in this study
All 33 features were calculated for each of the three derivations. As described in Appendix B, an attempt was made to reduce the number of features. However, this did not yield satisfactory results, and instead all 99 features were used in the study.
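To make the Welch settings concrete, below is an illustrative Python sketch of two of the features (F8 and F11 from Appendix B); the band edges and Welch parameters are from the text, while the function names are ours.

```python
import numpy as np
from scipy.signal import welch

FS = 200.0

def welch_psd(x, fs=FS):
    # 2 s segments, 1 s overlap, Hanning window, as stated in the text
    return welch(x, fs=fs, window="hann", nperseg=int(2 * fs), noverlap=int(fs))

def band(f, p, lo, hi):
    sel = (f >= lo) & (f <= hi)
    return np.trapz(p[sel], f[sel])

def emg_power(x):                # F8: total power in the 32-80 Hz band
    f, p = welch_psd(x)
    return band(f, p, 32, 80)

def slow_eye_movement_power(x):  # F11: 0.5-2 Hz power relative to 0.5-30 Hz power
    f, p = welch_psd(x)
    return band(f, p, 0.5, 2) / band(f, p, 0.5, 30)
```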
Classifier training
Type of classifier
We used ensembles of decision trees, called a 'random forest' [20], with each ensemble consisting of 100 trees. The implementation was the 'fitensemble' function in MATLAB 2015b, using the 'Bag' algorithm. This means that each decision tree is trained on a resampling of the original training set with the same number of elements (but with duplicates allowed), and each tree has a minimum leaf size of 1. For each tree, splitting is done such that the Gini coefficient [21] is optimized, and continues until all leaves (subgroups) are either homogeneous or have the minimum leaf size.
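For readers working in Python, a rough analogue of this MATLAB ensemble is sketched below using scikit-learn. This is an approximation of the described setup (100 bagged Gini trees, minimum leaf size 1, bootstrap resamples of the full training set), not a reimplementation of the authors' code; minor behavioural differences between the toolboxes are possible.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

clf = BaggingClassifier(
    DecisionTreeClassifier(criterion="gini", min_samples_leaf=1),
    n_estimators=100,  # 100 trees per ensemble
    bootstrap=True,    # resample the training set with replacement
    max_samples=1.0,   # each resample has as many elements as the original
)
# clf.fit(X_train, y_train); predicted = clf.predict(X_test)
```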
Cross validation
We explored three different ways to select test and training data for the classifier (described graphically in Fig. 2):
Leave-one-out Data was partitioned into nine subsets, each subset corresponding to a single subject. Thus the classifier had not seen any data from the person on which it was tested.
Total All epochs from all subjects were pooled and partitioned into 20 subsets. A classifier was trained on 19 subsets and tested on the last subset. Cross-validation was performed over all 20 combinations.
Individual Same as 'Total', but only done on data from a single subject, which was split into ten subsets. Thus, there were 90 different test sets.
The three validation schemes each provide a different perspective on the sleep staging performance and the applicability of the method.
Graphical illustration of the three cross validation methods. The 'Leave-one-out' method uses all epochs from eight subjects for training, and all epochs from the remaining subject for testing. This is done for all nine subjects (ninefold cross validation). In the 'Total' method, all epochs from all subjects are pooled together and partitioned into 20 subsets. The classifier is trained on 19 subsets and tested on one subset. This is done for all 20 combinations of subsets (20-fold cross validation). In the 'Individual' method the classifier is subject specific. For each subject, the epochs are partitioned into ten subsets, the classifier is trained on nine subsets and tested on one subset. Thereby the algorithm is validated ten times on each of the nine subjects (90-fold cross validation)
'Individual' is intended as a model of the scenario in which users have personal models/classifiers created. This builds on the assumption that measurements from one night will have similar characteristics to those from a different night, which seems reasonable given the literature [22,23,24]. As shown in Fig. 2, test and training data were only picked from the same subject. Of course, as part of the calculation of the population Cohen's kappa value, all data was eventually used as test data (each test having its own training data).
In 'Leave-one-out', a pre-trained classifier was applied to data from a new subject, which is probably the most relevant scenario. However, in this study we only had nine subjects, which is likely much too low for any given subject to be well represented by the remainder of the population.
Therefore, we have included 'Total', which represents the scenario where the pool of subjects is very large, in which case all normal sleep phenotypes are assumed represented in the training data. In the limit of a very large subject group, it is expected that 'Leave-one-out' and 'Total' would converge, to a result in-between the results reported here. However, to achieve this would likely require a substantial number of subjects.
During the analysis, we found that it is very important in 'Total' and 'Individual' that the test sets each form contiguous subsets of the data. If each subset were instead selected at random, most test epochs would have neighboring training epochs on both sides. This in turn would give the classifier access to the correct label for epochs extremely similar to the test epoch, preventing proper generalization and leading to overfitting. We will return briefly to the discussion of this 'neighbor effect' later in the paper.
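A minimal sketch of the contiguous partitioning used here (our illustration): splitting the time-ordered epochs into contiguous blocks keeps each test epoch's immediate neighbors out of the training set, avoiding the 'neighbor effect'.

```python
import numpy as np

def contiguous_folds(n_epochs, n_folds=20):
    """Yield (train_idx, test_idx) pairs whose test sets are contiguous blocks."""
    idx = np.arange(n_epochs)  # epochs in recording order
    for test_idx in np.array_split(idx, n_folds):
        yield np.setdiff1d(idx, test_idx), test_idx
```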
To evaluate the agreement between the expert labels and the output of the classifiers, Cohen's kappa coefficient [25] was calculated for each of the three cross-validation methods.
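For completeness, Cohen's kappa written out explicitly (a sketch equivalent to, e.g., sklearn.metrics.cohen_kappa_score): \(\kappa =(p_o-p_e)/(1-p_e)\), where \(p_o\) is the observed agreement and \(p_e\) the agreement expected by chance from the label marginals.

```python
import numpy as np

def cohens_kappa(manual, automatic):
    manual, automatic = np.asarray(manual), np.asarray(automatic)
    labels = np.union1d(manual, automatic)
    p_o = np.mean(manual == automatic)  # observed agreement
    p_e = sum(np.mean(manual == c) * np.mean(automatic == c) for c in labels)
    return (p_o - p_e) / (1 - p_e)
```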
All nine subjects managed to fall asleep wearing the PSG and ear-EEG equipment. One subject (number 5) reported having removed the right ear plug before falling asleep. When asked to judge their quality of sleep between the categories unchanged–worse–much worse, one subject reported "unchanged", five reported "worse", and three felt they slept much worse than usual. The subjects were not asked to describe whether their discomfort was caused by the ear-EEG device, the PSG, or both. The subjects slept (or attempted to sleep) between 2.4 and 9.6 h with the equipment on, an average of 6.9 h. This means that, in total, 61.8 h of sleep were recorded and scored by the sleep scorer, resulting in 7411 30-s epochs. Table 2 shows the number of usable electrodes and scored epochs for each subject.
In the analysis below, the one-eared subject was not removed; instead, all three derivations were identical for that subject.
A first comparison
Figure 3 shows characteristics of conventional EEG and ear-EEG during sleep. Figure 3a shows power spectra for REM, NREM2, and NREM3 for two scalp derivations and a left-right ear-EEG derivation. A large degree of similarity is observed for the scalp and ear derivations; in particular, REM and NREM spectra are clearly separated for all three derivations. Figure 3b shows characteristic sleep events (sleep spindle and K-complex) for the same two scalp derivations and the left-right ear-EEG derivation. Clear similarities in the waveforms are observed across all three derivations.
Comparisons of conventional EEG and ear-EEG. It is observed that there are clear spectral similarities between the NREM stages for scalp and ear electrodes, and that the spectral distributions are clearly distinctive from the spectral distributions of the REM stages. However, it is also seen that while much of the sleep information is inherited by the ear-EEG, the stage signatures are not completely identical, leading to our use of automatic classifiers. a Spectral power distribution in the three major sleep stages. Normalized to have similar power for the top half of the spectrum. b Sleep events shown for different electrode pairs, simultaneously recorded. Data has been subjected to a [1; 99] Hz band-pass filter, as well as a 50 Hz notch filter
However, despite these similarities it cannot in general be assumed that sleep stage signatures will be exactly equal in conventional EEG and ear-EEG. Further, it should be stressed that not all sleep events are as clearly visible in ear-EEG as those shown. As was mentioned in the introduction, this is part of the reason why a machine learning approach is suitable for this study. More precisely, while we deem it likely that sleep experts could be trained, with some level of success, to score sleep based on ear-EEG data, it would likely require a significant amount of retraining, not suitable for this study.
Classification results
Figure 4 shows kappa values (\(\kappa\)) for the three modes of cross validation and for 5, 3, and 2-stage classification (the stages in the last two being W-REM-NREM and W-Sleep, respectively). Results for 3 and 2-stage classification were simply obtained by relabelling the 5-stage results, so the classifiers were not retrained. Regarding the percentagewise agreement, it is noteworthy that manual scorers have been shown to have an average agreement of \(82.6\%\) [26], while actigraphs using 2 stages have an agreement rate of 83.9–96.5% with PSGs [6].
Cohen's kappa (\(\kappa\)) for different numbers of stages and different ways to cross validate. For comparison, the percentage of correctly labeled stages is also shown. Not surprisingly, we see that \(\kappa\) increases when the number of stages decreases, and when the classifier has more prior information about the subject(s). In all cases, for calculation of both kappa and accuracy, all epochs have been pooled together before calculation, equivalent to taking a population average weighted by number of subject epochs
For comparison, our classifier performs somewhat worse than the ones presented in [15] (\(\kappa \approx 0.85\)) and [16] (correlation coefficient \(\approx 0.84\)), though their studies did use scalp electrodes instead of ear-EEG.
When comparing the numbers shown in Fig. 4 to those found in other studies, it is valuable to keep in mind that the 'neighbor effect' stemming from scattered test data, as was discussed above (see "Cross validation" section), may not always be accounted for in the literature. In our case, using scattered test data increased the percentagewise agreement between manual and automatic labels by an average of 6 percentage points across 'Total' and 'Individual'.
Figure 5 shows sleep staging traces for subject 7, using the 'Individual' cross-validation method. We see that generally the transitions between stable stages are accurately predicted.
Comparison of sleep staging results from manual (top) and automatic (bottom) staging, for one subject, using a classifier only trained on data from the same subject. REM stages have been highlighted in red, as per usual convention
Figure 6 shows the confusion matrices for the three cross validation schemes. The most difficult state to identify is NREM 1, likely stemming from the fact that there are very few examples of this (only \(7\%\) of epochs). However, NREM 2 and NREM 3 are identified very well, even for 'Leave-one-out' cross validation.
Confusion matrices for the three types of cross validation and three model complexities. Colors match those of Fig. 4. An extra column of sensitivities and specificities is given for each matrix. We direct the reader to Fig. 4, right axis, for average accuracies. As indicated by the legend, rows correspond to manual labels, columns to automatically generated labels
Table 2 shows the \(\kappa\) values for all subjects, for each method of cross validation. It is interesting to note that subject 5 was not always the worst performing subject, despite the fact that only data from one ear piece was available from this subject.
Table 2 \(\kappa\) values for each subject, for all methods of cross-validation
We have seen that ear-EEG as a platform for automatic sleep staging has definite merit, especially if problems related to inter-subject variability can be addressed. Compared with other studies [15, 27, 28], the subject cohort in this study is rather small at only nine individuals. However by resampling the cohort, it is possible to estimate the classifier performance for larger cohorts; following the procedure outlined in [29] we find that a cohort size of 30 would likely have increased the 5-stage 'Leave-one-out'-\(\kappa\) to 0.5.
An intriguing question which was not addressed here is intra-subject variability. In other words, how well does a classifier trained on data from Tuesday perform on data from Wednesday? It seems safe to say that it will at the very least be comparable to the 'Leave-one-out'-scheme described here, but possibly much closer to the 'Individual' scheme. Based on studies concerning individual differences in physiological measures during sleep [22,23,24], it seems likely that intra-subject variability will be low. In this scenario, one could imagine uses where a single night (possibly just a day-time nap) with both PSG and ear-EEG could be used to calibrate a classifier to each individual user. One example could be a clinical setting where the usual one night of PSG could be supplemented with a longer ear-EEG study spanning several weeks or more.
All data in this study was obtained from healthy individuals, and thus the study does not provide any information as to how ear-EEG would perform in the presence of pathology. However, given the demonstrated ability of ear-EEG to reliably classify sleep staging, it is likely that a specialist could utilize the technology to detect abnormal sleep.
A surprising issue during the study was that of user comfort. As soon as user discomfort was reported, a parallel investigation was initiated into possible remedies. These will be applied in a future study, where we expect the level of discomfort to be substantially reduced.
An additional benefit of the ear-EEG platform is the ease with which the electrodes remain attached to the skin. Whereas conventional electrodes need adhesives and/or mechanical support to ensure a reliable contact, ear-EEG benefits from the precise fit of the ear piece within the outer ear, largely retaining the connection through geometry alone.
The study makes the valuable contribution of having more participants than previous ear-EEG sleep studies, as well as being the first study to make a quantitative comparison to simultaneously recorded PSG.
Through the machine learning approach, the study amply demonstrates that ear-EEG contains sleep-relevant data, in line with previously published studies. However, the need for a comfortable sleep-monitoring solution is also highlighted. We are convinced, based on developments taking place after this study was conducted, that the comfort problems discussed here will be solved in future studies.
In summary, we consider the findings of this study very positive regarding the continued development of ear-EEG as a mobile sleep staging platform.
Sleep monitoring with ear-EEG will be particularly interesting in cases where it is relevant to monitor sleep over extended periods of time. In such cases automatic sleep staging turns out to be even more important and is probably a necessity. The findings in this study are also very positive in this regard.
In future studies, it would be interesting to add additional ways to compare measurements, for instance one in which the training and test sets were matched according to age and gender. This would most likely require a substantially larger pool of subjects.
Rechtschaffen A, Kales A. A manual of standardized terminology, techniques and scoring system for sleep stages of human subjects, No. 204. Washington, DC: National Institutes of Health publication; 1968.
Lamberg L. Promoting adequate sleep finds a place on the public health agenda. JAMA. 2004;291(20):2415.
Taheri S. The link between short sleep duration and obesity: we should recommend more sleep to prevent obesity. Arch Dis Child. 2006;91(11):881–4.
Smaldone A, Honig JC, Byrne MW. Sleepless in America: inadequate sleep and relationships to health and well-being of our nation's children. Pediatrics. 2007;119(Supplement 1):29–37.
Stickgold R. Sleep-dependent memory consolidation. Nature. 2005;437(7063):1272–8.
Van de Water ATM, Holmes A, Hurley DA. Objective measurements of sleep for non-laboratory settings as alternatives to polysomnography—a systematic review. J Sleep Res. 2011;20(1pt2):183–200.
Redmond SJ, de Chazal P, O'Brien C, Ryan S, McNicholas WT, Heneghan C. Sleep staging using cardiorespiratory signals. Somnologie-Schlafforschung und Schlafmedizin. 2007;11(4):245–56.
Kortelainen JM, Mendez MO, Bianchi AM, Matteucci M, Cerutti S. Sleep staging based on signals acquired through bed sensor. IEEE Trans Inf Technol Biomed. 2010;14(3):776–85.
Kidmose P, Looney D, Ungstrup M, Lind M, Mandic DP. A study of evoked potentials from ear-EEG. IEEE Trans Biomed Eng. 2013;60(10):2824–30.
Mikkelsen K, Kidmose P, Hansen LK. On the keyhole hypothesis: high mutual information between ear and scalp EEG. Front Hum Neurosci. 2017;11:341. doi:10.3389/fnhum.2017.00341.
Zibrandtsen I, Kidmose P, Otto M, Ibsen J, Kjaer TW. Case comparison of sleep features from ear-EEG and scalp-EEG. Sleep Sci. 2016;9(2):69–72. doi:10.1016/j.slsci.2016.05.006.
Stochholm A, Mikkelsen K, Kidmose P. Automatic sleep stage classification using ear-EEG. In: 2016 38th annual international conference of the IEEE engineering in medicine and biology society (EMBC). New York: IEEE; 2016. p. 4751–4. doi:10.1109/embc.2016.7591789.
Looney D, Goverdovsky V, Rosenzweig I, Morrell MJ, Mandic DP. A wearable in-ear encephalography sensor for monitoring sleep: preliminary observations from Nap studies. Ann Am Thorac Soc. 2016.
Berry RB, Brooks R, Gamaldo CE, Harding SM, Lloyd RM, Marcus CL, Vaughn BV. The AASM manual for the scoring of sleep and associated events: rules, terminology and technical specifications, version 2.1. Darien: American Academy of Sleep Medicine; 2014.
Koley B, Dey D. An ensemble system for automatic sleep stage classification using single channel EEG signal. Comput Biol Med. 2012;42(12):1186–95.
Doroshenkov L, Konyshev V, Selishchev S. Classification of human sleep stages based on EEG processing using hidden Markov models. Biomed Eng. 2007;41(1):25–8.
Acharya UR, Bhat S, Faust O, Adeli H, Chua EC-PC, Lim WJEJ, Koh JEWE. Nonlinear dynamics measures for automated EEG-based sleep stage detection. Eur Neurol. 2015;74(5–6):268–87.
Mikkelsen KB, Kappel SL, Mandic DP, Kidmose P. EEG recorded from the ear: characterizing the ear-EEG method. Front Neurosci. 2015;9:438. doi:10.3389/fnins.2015.00438.
Kidmose P, Looney D, Mandic DP. Auditory evoked responses from Ear-EEG recordings. In: Proc. of the 2012 annual international conference of the IEEE engineering in medicine and biology society (EMBC). New York: IEEE; 2012. p. 586–9. doi:10.1109/embc.2012.6345999.
Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
Ceriani L, Verme P. The origins of the Gini index: extracts from variabilità e mutabilità (1912) by Corrado Gini. J Econ Inequal. 2012;10(3):421–43.
Buckelmüller J, Landolt H-PP, Stassen HH, Achermann P. Trait-like individual differences in the human sleep electroencephalogram. Neuroscience. 2006;138(1):351–6.
Tucker AM, Dinges DF, Van Dongen HPA. Trait interindividual differences in the sleep physiology of healthy young adults. J Sleep Res. 2007;16(2):170–80.
Chua EC, Yeo SC, Lee IT, Tan LC, Lau P, Tan SS, Mien IH, Gooley JJ. Individual differences in physiologic measures are stable across repeated exposures to total sleep deprivation. Physiol Rep. 2014;2(9):12129.
McHugh ML. Interrater reliability: the kappa statistic. Biochem Med. 2012;22(3):276–82.
Rosenberg RS, Van Hout S. The American academy of sleep medicine inter-scorer reliability program: sleep stage scoring. J Clin Sleep Med. 2013;9(1):81–7.
Shambroom JR, Fábregas SE, Johnstone J. Validation of an automated wireless system to monitor sleep in healthy adults. J Sleep Res. 2012;21(2):221–30.
Stepnowsky C, Levendowski D, Popovic D, Ayappa I, Rapoport DM. Scoring accuracy of automated sleep staging from a bipolar electroocular recording compared to manual scoring by multiple raters. Sleep Med. 2013;14(11):1199–207.
Figueroa R, Treitler QZ, Kandula S, Ngo L. Predicting sample size required for classification performance. BMC Med Inform Decis Mak. 2012;12(1):8.
Guyon I, Elisseeff A. An introduction to variable and feature selection. J Mach Learn Res Spec Issue Var Feature Sel. 2003;3:1157–82.
Zhang Y, Zhang X, Liu W, Luo Y, Yu E, Zou K, Liu X. Automatic sleep staging using multi-dimensional feature extraction and multi-kernel fuzzy support vector machine. J Healthc Eng. 2014;5(4):505–20.
Imtiaz S, Rodriguez-Villegas E. A low computational cost algorithm for REM sleep detection using single channel EEG. Ann Biomed Eng. 2014;42(11):2344–59.
Huupponen E, Gómez-Herrero G, Saastamoinen A, Värri A, Hasan J, Himanen S-LL. Development and comparison of four sleep spindle detection methods. Artif Intell Med. 2007;40(3):157–70.
Lajnef T, Chaibi S, Eichenlaub J-B, Ruby PM, Aguera P-E, Samet M, Kachouri A, Jerbi K. Sleep spindle and K-complex detection using tunable Q-factor wavelet transform and morphological component analysis. Front Hum Neurosci. 2015;9.
Duman F, Erdamar A, Erogul O, Telatar Z, Yetkin S. Efficient sleep spindle detection algorithm with decision tree. Expert Syst Appl. 2009;36(6):9980–5.
KBM was responsible for writing the manuscript. KBM, DBV and PK were responsible for data analysis and algorithmic development. DBV and MO were responsible for planning and carrying out the measurements and clinical scoring. PK was responsible for overall planning of the study. All authors read and approved the final manuscript.
All sleep measurements described in this paper are available for download at URL: http://www.eareeg.org/SleepData_2017/sleep.zip. Bad channels have been marked by 'NaN' values. The code used for automatic sleep scoring is available from the corresponding author (KM) upon request.
At the time of their initial briefing, all study participants were informed of the likelihood that the data would be part of a publication.
All subjects gave informed consent, and were briefed both verbally and in written form before their ear impressions were taken, in accordance with the regulations of the local ethics committee (Committee on Health Research Ethics, Jutland Central Region).
The work presented here was funded by Project 1311-00009B of the Danish strategic research council, and 110-2013-1 of the Danish national advanced technology foundation.
Department of Engineering, Aarhus University, Finlandsgade 22, 8200, Aarhus N, Denmark
Kaare B. Mikkelsen
, David Bové Villadsen
& Preben Kidmose
Department of Clinical Medicine, Aarhus University, Nørrebrogade 44, 8000, Aarhus C, Denmark
Marit Otto
Correspondence to Kaare B. Mikkelsen.
Appendix A: Channel rejection
To further elaborate on the choice of electrode rejection criteria, Fig. 7 shows \(m_i\) for all channels. In the plot, electrodes that were expected to be bad, either due to lost connection during the recording (an ear plug removed, for instance) or due to poor results in the initial ASSR test, have been marked in red. We see that the simple rejection criterion employed finds almost all of these electrodes, and we consider this method both a more reproducible and a more scientifically sound approach.
The justification for the chosen threshold for electrode rejection. Each marker corresponds to an electrode, red markers show electrodes which, based on either ASSR measurements or visual inspection, were deemed unsuitable. We see that rejecting all electrodes above this limit corresponds quite well to the initial, more loosely defined criteria
Appendix B: Additional feature discussion
Feature elimination
Feature elimination was attempted, by systematically removing one feature, and evaluating classifier performance, looping over all features. After each loop, the feature whose absence was the least detrimental to classifier performance was removed for the remainder of the analysis. In this way, the pool of features was gradually shrunk [30].
The data material in this test consisted of all subjects pooled (called 'Total' above), and classifier performance was evaluated by partitioning the data pool into 20 equal parts and iteratively training a classifier using 95% of the data as training data and the remaining 5% as test data. Finally, Cohen's \(\kappa\) was calculated based on the combined classifier results.
Figure 8 shows the maximal \(\kappa\) as a function of number of features. We see a general trend that best performance is achieved somewhere between 20 and 60 features. However, after further analysis, we have discovered that the precise set and number of features depends intimately not only on which subjects are included in the pool, but also on how that pool is partitioned into 20 chunks. In other words, either the same, somewhat arbitrary choice of features is used, based on one chosen partitioning, in which case there is a risk of introducing a bias in the classifier (whichever representative subset of the data is chosen for selecting features, that subset may be overfitted when it is later used as test data), or there will be no clear indication of which set of features others should use. In the latter case, we would still have to present 99 features to our readers, and would have achieved no improvement in either classifier performance or readability. In future studies, we aim to have sufficient data to set aside a dedicated validation data set for determining feature selection, without the risk of overfitting. For now, we have chosen not to perform feature selection, but instead keep all 99 features.
Results from feature rejection. Features were gradually removed from the pool, in order to increase \(\kappa\). At each step, the highest achieved value was recorded. Note the overall very low variation in \(\kappa\)
Below is given a detailed description of those features used which were not included in [15].
F7: Correlation coefficient between channels The only feature requiring multiple channels in its definition. Since there are three EEG derivations, this feature was simply calculated as the correlation coefficient between the i'th and (i+1)'th derivations, making sure that all pairs of derivations are evaluated.
F8: EMG power Total power in the [32, 80] Hz band.
F9: Minimal EMG power Each epoch was split into ten segments. EMG power was calculated for each segment, and the lowest of these ten values was recorded.
F10: Relative EMG burst amplitude Maximum EMG signal amplitude divided by F9.
F11: Slow eye movement power Power in the [0.5, 2] Hz band relative to full power in the [0.5, 30] Hz band. Inspired by Zhang et al. [31].
F12: Rapid eye movement power Power in the [2, 5] Hz band relative to full power in the [0.5, 30] Hz band. Inspired by Zhang et al. [31].
F26: Mean spectral edge frequency difference Taken from [32].
F29: Spindle probability Letting \(P(x-y)\) be the set of power estimates for frequencies in the x to y Hz band, this feature is calculated as \(\max (P(11-16))/(\left\langle P(4-10)\right\rangle + \left\langle P(20-32) \right\rangle)\), and is inspired by Huupponen et al. [33] ("sigma index"); a code sketch of this feature follows this list.
F30: Frequency stationarity For each epoch, the Welch algorithm calculates power spectra for 31 segments. F30 calculates the average Pearson correlation between these 31 spectra.
F31: Lowest adj. frequency similarity Using the same correlations as in F30, F31 is the lowest recorded correlation between neighboring segments.
F32: Largest CWT value A continuous Wavelet Transform of the filtered EEG-signal is computed, using a complex frequency B-spline as wavelet. The wavelet has a support of 0.5 s. Inspired by Lajnef et al. [34].
F33: Longest sleep spindle The signal was bandpass filtered to a band of 11–16 Hz, and the Teager energy operator (TEO) was applied to it (see [35]). At the same time, a short term Fourier transform (STFT) was applied to the unfiltered signal, and the power in the 12–14 Hz band relative to average power in 4–32 Hz band was computed (excluding 12–14 Hz). Finally, signal segments in which F32 > 15, TEO > 0.5 and STFT power > 0.3 were assumed to be sleep spindles. The maximal length of observed spindles constituted F33. This was inspired by Duman et al. [35].
As to the remaining features, the exact mapping is: (letting KN be the N'th feature from [15]): F1:K3, F2:K4, F3: K5, F4: K7, F5: K8, F6:K9, F13:K14, F14:K15, F15:K16, F16:K17, F17:K18, F18:K19, F19:K20, F20:K21, F21:K22, F22:K23, F23:K24, F24:K25, F25:K27, F27:K26, F28: K28.
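As an example of how the spectral features above translate into code, here is a hedged Python sketch of F29 (the "sigma index"); the band edges follow the feature description, while the Welch settings repeat those from the Features section.

```python
import numpy as np
from scipy.signal import welch

def sigma_index(x, fs=200.0):  # F29
    f, p = welch(x, fs=fs, window="hann", nperseg=int(2 * fs), noverlap=int(fs))
    in_band = lambda lo, hi: p[(f >= lo) & (f <= hi)]
    # max spindle-band power over the summed mean powers of the flanking bands
    return in_band(11, 16).max() / (in_band(4, 10).mean() + in_band(20, 32).mean())
```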
Mikkelsen, K.B., Villadsen, D.B., Otto, M. et al. Automatic sleep staging using ear-EEG. BioMed Eng OnLine 16, 111 (2017) doi:10.1186/s12938-017-0400-5
Ear-EEG
Mobile EEG
|
CommonCrawl
|
Gypsum-DL: an open-source program for preparing small-molecule libraries for structure-based virtual screening
Patrick J. Ropp1,
Jacob O. Spiegel1,
Jennifer L. Walker1,
Harrison Green1,
Guillermo A. Morales2,3,
Katherine A. Milliken1,
John J. Ringe1 &
Jacob D. Durrant ORCID: orcid.org/0000-0002-5808-40971
Journal of Cheminformatics volume 11, Article number: 34 (2019)
Computational techniques such as structure-based virtual screening require carefully prepared 3D models of potential small-molecule ligands. Though powerful, existing commercial programs for virtual-library preparation have restrictive and/or expensive licenses. Freely available alternatives, though often effective, do not fully account for all possible ionization, tautomeric, and ring-conformational variants. We here present Gypsum-DL, a free, robust open-source program that addresses these challenges. As input, Gypsum-DL accepts virtual compound libraries in SMILES or flat SDF formats. For each molecule in the virtual library, it enumerates appropriate ionization, tautomeric, chiral, cis/trans isomeric, and ring-conformational forms. As output, Gypsum-DL produces an SDF file containing each molecular form, with 3D coordinates assigned. To demonstrate its utility, we processed 1558 molecules taken from the NCI Diversity Set VI and 56,608 molecules taken from a Distributed Drug Discovery (D3) combinatorial virtual library. We also used 4463 high-quality protein–ligand complexes from the PDBBind database to show that Gypsum-DL processing can improve virtual-screening pose prediction. Gypsum-DL is available free of charge under the terms of the Apache License, Version 2.0.
Structure-based virtual screening (VS) is a powerful tool for pharmacological and basic-science research [1, 2]. In a successful VS campaign, a docking program poses small-molecule models within a protein binding pocket, and a scoring function estimates binding affinities. Experimentalists then test the top-scoring compounds to verify binding. Hit rates are often better than those obtained through high-throughput screening alone [2].
The first and foundational step in a VS workflow is pose prediction. Accurate prediction depends on high-quality 3D models of both protein receptor(s) and potential small-molecule ligands. Small-molecule databases often store compounds in formats that include only atom-type and bond information (e.g., SMILES). Furthermore, database entries typically describe only one ionization or tautomeric state per molecule, and they may lack information about chirality and cis/trans isomerization.
Though effective, available commercial and open-source programs for processing and converting these simple representations into fully enumerated 3D models have their drawbacks. Commercial programs such as OpenEye's OMEGA/QUACPAC [3, 4] and Schrödinger's LigPrep (Schrödinger, LLC) have restrictive licenses and can be expensive. While OpenEye does offer a free academic license, that license imposes substantial commercialization and intellectual-property restrictions. License eligibility is also regularly re-evaluated, making long-term access uncertain. And workflows that incorporate commercial tools cannot typically be freely distributed.
Free alternatives include Frog2 [5] and Balloon [6, 7]. Frog2 [5] is an open-source, web-based program that requires no installation. Users must first upload their compounds in SMILES format to the RPBS Web portal (http://bioserv.rpbs.univ-paris-diderot.fr/services/Frog2/) [8, 9]. The Frog2 server then assigns 3D coordinates and provides a downloadable file containing the results. In contrast, Balloon [6, 7] is a command-line program that can be easily and freely incorporated into larger workflows. Though Balloon is free, the source code is not publicly available, and the program does not account for alternate chiral and cis/trans isomeric forms. Additionally, Frog2 and Balloon ignore alternate ionization and tautomeric forms; sometimes miss low-energy, non-aromatic ring conformations; and generate excess rotamers beyond those needed for flexible-ligand docking.
The popular open-source cheminformatics package Open Babel [10] also includes several executable files that can perform key small-molecule preparation steps. For example, the obabel executable accepts a -p (pH) parameter that ionizes molecules as appropriate for a user-specified pH. The obabel –gen3D (generate 3D coordinates) parameter also converts molecular representations to 3D models that can be further optimized with obminimize. But it is difficult to generate alternate tautomeric, chiral, and cis/trans isomeric forms using Open Babel's command-line interface. Advanced users/programmers must implement these features separately using Open Babel's programming API. Open Babel is also released under a copyleft license (GNU General Public License, version 2), which requires that any derivate works also be copyleft.
To address the limitations of existing commercial and open-source packages, we here present Gypsum-DL, a free, open-source program for preparing small-molecule libraries. Beyond simply assigning 3D coordinates, Gypsum-DL outputs molecular models with varying ionization, tautomeric, and isomeric states. Protein binding pockets often stabilize these alternate forms, even if their prevalence is low in bulk solution. Gypsum-DL also generates models with alternate non-aromatic ring conformations. Considering alternate ring conformations is critical given that most flexible-ligand docking programs (e.g., AutoDock Vina [12]) do not account for all possible ligand ring geometries during the docking process itself.
We use 4463 high-quality protein–ligand complexes from the PDBBind database (http://www.pdbbind.org.cn/) [13, 14] to show that Gypsum-DL processing can improve VS pose prediction. To further show utility, we also use Gypsum-DL to process two virtual molecular libraries: (1) the NCI Diversity Set VI, a set of freely available compounds provided by the National Cancer Institute (1558 molecules); and (2) N-acylated unnatural amino acids enumerated using the accessible chemical reaction schemes developed by the Distributed Drug Discovery (D3) initiative (56,608 molecules) [15,16,17,18]. These virtual libraries are available free of charge for use in VS projects.
Gypsum-DL will be a helpful tool for those engaged in both basic-science and drug-discovery research. A copy is available at http://durrantlab.com/gypsum-dl/, released under the terms of the Apache License, Version 2.0.
The Gypsum-DL algorithm
Gypsum-DL uses RDKit (http://www.rdkit.org), MolVS 0.1.1 (https://molvs.readthedocs.io), and Dimorphite-DL 1.0 [11] to convert small-molecule representations (SMILES strings or flat SDF files) into 3D models (Fig. 1) [19]. Each output SDF file includes fields that describe the steps used to generate the corresponding model. Gypsum-DL also leverages multiple processors, if available, to speed the conversion of large virtual libraries. Command-line flags allow the user to precisely control all aspects of the program, though the default parameters should serve most use cases.
The Gypsum-DL workflow. a Gypsum-DL prepares a virtual small-molecule library by desalting the input compounds and considering alternate ionization, tautomeric, chiral, and cis/trans isomeric states. It then converts all variants to 3D, accounting for alternate ring conformations where appropriate. b Illustrative examples of each Gypsum-DL step
Desalting
Gypsum-DL first removes any salts present in the user-specified virtual compound library. Molecular representations (e.g., SMILES) often include the primary molecule together with accompanying counterions. Gypsum-DL retains only the largest fragment, the presumed compound of interest.
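A minimal RDKit sketch of this step (Gypsum-DL's internal implementation may differ in detail): keep only the largest fragment of each input.

```python
from rdkit import Chem

def desalt(smiles):
    """Return the largest fragment (by heavy-atom count) of a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    frags = Chem.GetMolFrags(mol, asMols=True)
    return max(frags, key=lambda m: m.GetNumHeavyAtoms())

print(Chem.MolToSmiles(desalt("CC(=O)[O-].[Na+]")))  # acetate kept, sodium dropped
```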
Ionization
Gypsum-DL uses the Dimorphite-DL 1.0 algorithm [11] to generate models with different ionization states. It considers a user-defined range of pH values (6.4–8.4 by default) rather than a single (e.g., physiological) pH. Separate models are created for each identified state.
Given the computational demands of high-throughput VS, it is important to limit the number of ionization forms considered. To eliminate highly charged forms that are unlikely to be physiologically relevant, Gypsum-DL first identifies the generated ionization form with a formal charge that is closest to zero. It eliminates any additional ionization forms whose formal charges deviate from that baseline by 3 e or more.
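The pruning rule might look roughly as follows, assuming the ionization variants are available as SMILES strings; the helper name and threshold handling are illustrative, not Gypsum-DL's exact code:

```python
from rdkit import Chem

def prune_by_formal_charge(smiles_variants, max_deviation=3):
    """Drop ionization variants whose formal charge deviates from the
    most-neutral variant by max_deviation e or more."""
    charges = [Chem.GetFormalCharge(Chem.MolFromSmiles(s))
               for s in smiles_variants]
    baseline = min(charges, key=abs)  # formal charge closest to zero
    return [s for s, c in zip(smiles_variants, charges)
            if abs(c - baseline) < max_deviation]

# Both forms survive here: their charges (0 and -1) differ by less than 3 e.
print(prune_by_formal_charge(["CC(=O)O", "CC(=O)[O-]"]))
```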
Tautomeric forms
Many compounds readily interconvert between tautomeric states as protons and electrons shift among atoms. Gypsum-DL uses MolVS 0.1.1 to enumerate all possible tautomers. It discards tautomers that alter the number of aromatic rings (i.e., by breaking ring aromaticity) or the number of chiral centers. Separate models are created for each identified tautomeric form.
MolVS occasionally produces particularly improbable tautomeric forms. Gypsum-DL maintains a list of substructures associated with these forms and automatically eliminates any matching models. For example, though Gypsum-DL does consider keto–enol tautomerism, it does not permit enol forms that result in terminal alkenes. It also eliminates compounds with geminal vinyl diols, which are improbable tautomers of carboxylic acids. Any form with a carbanion is also eliminated, as are tautomers that disrupt existing aromaticity.
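A minimal sketch of such a substructure filter, using hypothetical SMARTS patterns for two of the cases mentioned above (Gypsum-DL's actual filter list is longer and defined in its source code):

```python
from rdkit import Chem

# Hypothetical SMARTS patterns for improbable tautomeric forms.
BAD_SUBSTRUCTURES = [
    Chem.MolFromSmarts("C=C([OH])[OH]"),  # geminal vinyl diol
    Chem.MolFromSmarts("[C-]"),           # carbanion
]

def passes_tautomer_filter(mol):
    """Reject tautomers that match any known-improbable substructure."""
    return not any(mol.HasSubstructMatch(p) for p in BAD_SUBSTRUCTURES)

print(passes_tautomer_filter(Chem.MolFromSmiles("CC(=O)C")))  # True (acetone)
```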
Unspecified chiral centers and cis/trans double-bond isomerization
Many virtual-library databases do not fully specify all compound chiral centers. Gypsum-DL systematically generates alternate chiral species by varying each unspecified chiral center in each input molecule; explicitly defined chiral centers remain unchanged. Similarly, virtual-library databases often include compounds with unspecified double-bond isomerization. Gypsum-DL likewise enumerates the alternate cis/trans isomers as needed.
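RDKit, on which Gypsum-DL is built, exposes this kind of enumeration directly; its stereoisomer enumerator covers both unassigned chiral centers and unassigned double bonds. A minimal sketch:

```python
from rdkit import Chem
from rdkit.Chem.EnumerateStereoisomers import (
    EnumerateStereoisomers, StereoEnumerationOptions)

mol = Chem.MolFromSmiles("FC(Cl)Br")  # one unspecified chiral center
opts = StereoEnumerationOptions(onlyUnassigned=True)  # keep defined centers
isomers = [Chem.MolToSmiles(m)
           for m in EnumerateStereoisomers(mol, options=opts)]
print(isomers)  # the (R) and (S) forms
```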
We note that MolVS removes double-bond cis/trans stereochemistry if any derived tautomeric form changes the double bond to a single bond. Some compounds may thus end up with unspecified double bonds, even if the input molecular representation explicitly specifies isomerization. This behavior is intentional, though it may surprise some users. Gypsum-DL enumerates both the cis and trans isomers in such cases.
Alternate conformations of non-aromatic rings
To sample different small-molecule conformers, flexible-ligand docking programs permit virtual rotations about single bonds during the docking process. But they often treat rings as rigid, even if those rings include single bonds. Transitions between different ring conformations (e.g., the boat and chair conformations of cyclohexane) are thus ignored. Gypsum-DL addresses this shortcoming by generating separate models with distinct low-energy ring conformations.
Gypsum-DL first generates multiple 3D models of each input molecule using the Experimental Torsion with Knowledge Distance Geometry (ETKDG) method (version 2 if available, or version 1 otherwise) [20]. These initial models are then optimized using the Universal Force Field (UFF) [21]. Though this optimization step is computationally expensive, it encourages 3D ring conformers that closely correspond to discrete energy minima. For a given compound with R non-aromatic rings, there are thus M optimized 3D models (see Fig. 2a, where R = 2 and M = 4).
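The embed-and-optimize step might look roughly as follows in RDKit; the molecule and conformer count are illustrative:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# A compound with R = 2 non-aromatic rings (two linked cyclohexanes).
mol = Chem.AddHs(Chem.MolFromSmiles("C1CCCCC1C1CCCCC1"))
params = AllChem.ETKDGv2()  # ETKDG version 2, where available
conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=10, params=params)
# Returns one (not_converged, energy) tuple per embedded conformer.
results = AllChem.UFFOptimizeMoleculeConfs(mol)
```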
Fig. 2 A schematic of the Gypsum-DL algorithm for generating ring-conformational forms. a Create multiple 3D variants using ETKDG and UFF optimization. b Extract the rings. c Collect the coordinates of the ring atoms. d Construct ring fingerprints by calculating the RMSD between each ring and the corresponding ring of the first model. e, f Use k-means clustering to identify unique ring fingerprints. The small circles on the graphs represent fingerprints, the larger dashed circles represent clusters, and the black circles represent the most central fingerprint of each cluster. g The central fingerprints correspond to geometrically unique models
For reference, we assign an index, m, to each of the M models. We also assign an index, r, to each of the R non-aromatic rings, and we say that the rth ring contains \(A_{r}\) atoms (Fig. 2b). There are thus M total conformations of the rth ring, one corresponding to each modeled compound. The rth ring of the mth model refers to a specific ring. To describe the geometry of that ring, Gypsum-DL places the 3D coordinates, \((a_{x}, a_{y}, a_{z})\), of its \(A_{r}\) constituent atoms into an ordered list, \(c_{m,r}\) (Fig. 2c), where
$$c_{m,r} = \left\{ \left( a_{x}, a_{y}, a_{z} \right) \mid a \in \mathbb{N},\; a \le A_{r} \right\}$$
To describe the collective ring-conformational geometry of the mth model, Gypsum-DL collects the geometries of the R associated rings, \(c_{m,r}\), into another ordered list, \(s_{m}\) (Fig. 2c):
$$s_{m} = \left\{ c_{m,r} \mid r \in \mathbb{N},\; r \le R \right\}$$
To quantify how much a given 3D model's ring conformations collectively differ from those of the first model, Gypsum-DL generates an R-dimensional "ring-conformation fingerprint," \(f_{m}\) (Fig. 2d), for each of the \(s_{m}\) lists:
$$f_{m} = \left\{ \text{RMSD}\left( c_{m,r}, c_{1,r} \right) \mid r \in \mathbb{N},\; r \le R \right\}$$
where the function \(\text{RMSD}(c_{1}, c_{2})\) is the minimum root-mean-square deviation (RMSD) between coordinate set \(c_{1}\) and coordinate set \(c_{2}\) when \(c_{1}\) is allowed to freely rotate and translate.
The first fingerprint (\(f_{1}\)) is thus an R-dimensional zero vector because the conformations of the first-model rings are identical to themselves. Subsequent \(f_{m}\) are R-dimensional vectors whose entries represent the extent to which the conformation of the corresponding ring differs from that of the same ring in the first model.
Among the M models generated, some may have very similar ring-conformational states. To eliminate this redundancy, Gypsum-DL uses k-means clustering [22] to cluster the set of all \(f_{m}\) into at most max_variants_per_compound groups, where max_variants_per_compound is a user parameter (default: 5; Fig. 2e, f). Only the 3D models corresponding to the most central \(f_{m}\) of each cluster are retained (Fig. 2g).
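A sketch of the clustering step using SciPy's k-means implementation; the fingerprint values and cluster count are illustrative, and Gypsum-DL's own code may differ in detail:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Toy fingerprints: M = 6 models x R = 2 rings; each entry is the RMSD of
# a ring to the corresponding ring of the first model (values illustrative).
fps = np.array([[0.0, 0.0], [0.1, 0.2], [1.4, 0.1],
                [1.5, 0.2], [0.2, 1.3], [0.1, 1.4]])
k = 3  # at most max_variants_per_compound clusters (default 5)
centroids, labels = kmeans2(fps, k, minit="++")
# Retain the model whose fingerprint is most central within each cluster.
keep = [int(np.where(labels == i)[0][np.argmin(
            np.linalg.norm(fps[labels == i] - centroids[i], axis=1))])
        for i in np.unique(labels)]
print(sorted(keep))
```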
Controlling the combinatorial explosion
Gypsum-DL accounts for alternate ionization, tautomeric, chiral, isomeric, and ring-conformational states. From an algorithmic perspective, each of these five states is independent. It is thus possible to generate an intractable number of models per input molecule. For example, consider a molecule with two variants for each state. Accounting for all possible forms would require \(2^{5} = 32\) models per molecule. Performing a VS of 10,000 compounds would thus require 320,000 separate dockings.
To prevent this combinatorial explosion, after each step Gypsum-DL prunes the growing set of enumerated forms associated with each input molecule. It first randomly selects m × t variants from all the forms generated, where m is the max_variants_per_compound user parameter (default: 5) and t is the thoroughness user parameter (a scaling factor, default: 3). It then uses ETKDG [20] to generate a 3D conformer for each of the selected m × t variants. The energies of these conformers are evaluated using the UFF [21]. To reduce computational cost, Gypsum-DL generally performs this evaluation without geometry optimization (except for compounds with non-aromatic rings, see above). It ultimately retains only the m compounds from the m × t variants with the best predicted energies.
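The prune step can be sketched as follows; the helper name is hypothetical and error handling is kept minimal:

```python
import random
from rdkit.Chem import AllChem

def prune_variants(mols, m=5, t=3):
    """Keep the m lowest-energy variants of m*t randomly sampled ones,
    scoring each with a single-point UFF energy (no geometry optimization)."""
    pool = random.sample(mols, min(m * t, len(mols)))
    scored = []
    for mol in pool:
        mol = AllChem.AddHs(mol)
        if AllChem.EmbedMolecule(mol, params=AllChem.ETKDGv2()) != 0:
            continue  # embedding failed for this variant
        energy = AllChem.UFFGetMoleculeForceField(mol).CalcEnergy()
        scored.append((energy, mol))
    return [mol for _, mol in sorted(scored, key=lambda s: s[0])[:m]]
```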
Final geometry optimization and output
As a final step, Gypsum-DL uses the UFF to optimize the geometries of any remaining compounds that have not already been optimized (i.e., any compound that was not already optimized in the ring-conformation step). It saves the resulting 3D models to SDF file(s) in a user-specified directory. The user can also instruct the program to additionally save conformers in the PDB format. Optional output to an HTML file allows the user to quickly visualize the generated structures in 2D.
Gypsum-DL and VS pose prediction
To assess the impact of Gypsum-DL processing on VS pose prediction, we compiled a benchmark library of protein–ligand complexes. We first downloaded the 4463 high-quality complexes included in the PDBBind refined set [13, 14]. We removed complexes whose ligands had molecular weights greater than 500 Daltons, contained amino acids, contained multiple residues (e.g., peptides), included improper atom names (e.g., "furan"), and/or had ligand files that did not match the corresponding entries in the Protein Data Bank (https://www.rcsb.org/) [23]. After filtering, 3177 complexes containing 2438 unique ligands remained.
For each ligand, we downloaded the corresponding SMILES string from the Protein Data Bank [23]. We neutralized the charge of each SMILES representation to the extent possible and removed all information about chirality and cis/trans isomerism. We converted these processed SMILES strings to 3D models using both Open Babel 2.3.2 and a late-stage beta version of Gypsum-DL that did not differ substantially from the final published version. For Open Babel, we used only the -d (delete hydrogens), -h (add hydrogens), and --gen3D (generate 3D coordinates) flags, in that order, to standardize hydrogen atoms and generate 3D coordinates. For Gypsum-DL, we used the following ligand-processing parameters: min_ph 6.4, max_ph 8.4, pka_precision 1.0, thoroughness 3, and max_variants_per_compound 5. We then converted all protein-receptor and small-molecule models to the PDBQT format using MGLTools 1.5.6 [24].
For each complex, we defined a docking box that entirely encompassed the corresponding crystallographic ligand, with 5 Å margins in all directions. We then used AutoDock Vina 1.1.2 [12] to dock the Open-Babel and Gypsum-DL small-molecule models into the corresponding docking boxes. The default AutoDock Vina parameters were used, except we increased the exhaustiveness parameter to 100.
To judge pose-prediction accuracy, we first used our Scoria Python library [25] to remove the hydrogen atoms from all docked compounds. We then used obrms [10], an Open-Babel utility program, to calculate the RMSDs between the non-hydrogen atom positions of each top-scoring Vina pose and those of the corresponding crystallographic ligand. The obrms approach accounts for equivalent moiety conformations (e.g., symmetric ring flips) by considering atom connectivity. We discarded an additional 26 protein–ligand complexes because obrms determined inconsistent connectivities for the crystallographic pose versus the docked Open-Babel and/or Gypsum-DL poses. The size of the final benchmark library of protein–ligand complexes was thus 3151.
Unlike Open Babel, Gypsum-DL often generates multiple variants of each compound with differing ionization, tautomeric, chiral, cis/trans isomeric, and ring-conformational states. For comparison purposes, we selected the Gypsum-DL variant with the lowest RMSD to the crystallographic pose.
Enumerating the Distributed Drug Discovery (D3) library for Gypsum-DL testing
To test Gypsum-DL's ability to process a large virtual molecular library, we enumerated 56,608 N-acylated unnatural amino acids. We first used ChemDraw Ultra 12.0 (CambridgeSoft, 2010) to create 2D representations of the 84 alkyl-halide, 16 Michael-acceptor, and 100 carboxylic-acid building blocks described in Ref. [16]. Some building blocks were racemic, so we expanded this initial set to include all associated enantiomers. We then used MarvinSketch 16.6.13, 2016, ChemAxon (http://www.chemaxon.com) to create the multi-step reaction schemes required to enumerate a 2D virtual library from these building blocks.
A detailed description of the reactions has been published previously [16]. We selected them in part because they are central to the highly successful undergraduate curriculum developed by the Distributed Drug Discovery (D3) initiative [16]. In brief, we first created reaction schemes to alkylate polymer-bound benzophenone-imine glycine at the carbonyl α-carbon. For alkylation using alkyl halides, we created two reaction schemes to generate products with (S) and (R) stereochemistry, respectively (Fig. 3a). For alkylation using Michael acceptors, we created eight reaction schemes to enumerate all possible diastereomers (Fig. 3b). We used Reactor 16.6.13, 2016, ChemAxon (http://www.chemaxon.com) to apply these reaction schemes to our library of alkyl-halide and Michael-acceptor building blocks.
Fig. 3 Simplified representations of the reaction schemes used to enumerate the D3 library. The spheres represent the bound polymer. a Alkylation of the polymer-bound benzophenone-imine glycine using the alkyl-halide building blocks. b Alkylation using the Michael-acceptor building blocks. c Deprotecting the benzophenone protecting group. d Acylation using the carboxylic-acid building blocks. e Cleaving the polymer
We next created a reaction scheme to deprotect the benzophenone protecting groups of the alkyl-halide- and Michael-reaction products, yielding amino-free intermediates (Fig. 3c). These intermediates were then subjected to an acylation scheme (Fig. 3d), which reacted the free amino groups with each of our carboxylic-acid building blocks. This reaction added another point of diversity, ultimately yielding polymer-bound alkylated-acylated glycine products. An additional reaction scheme served to cleave the polymer (Fig. 3e). We also used reaction schemes to remove additional protecting groups (e.g., Fmoc, tert-butyl, Boc), which the final products had inherited from some of the original building blocks (not shown).
Using Gypsum-DL to process the D3 and NCI molecular libraries
We used Gypsum-DL to process both the enumerated D3 library and the NCI Diversity Set VI, a set of freely available compounds provided by the National Cancer Institute. We used the following Gypsum-DL parameters to process these libraries: min_ph 7.4, max_ph 7.4, pka_precision 1.0, thoroughness 3, and max_variants_per_compound 5. In the case of the D3 library, we additionally set the skip_making_tautomers parameter to true. Both libraries are available free of charge in the SDF format from http://durrantlab.com/gypsum-dl/ for use in VS projects.
Gypsum-DL is an open-source Python program. It requires the third-party Python libraries RDKit, NumPy [26], and SciPy [27], which must be installed separately. To ease installation, we recommend the popular Anaconda Python platform with its convenient conda package manager. Users who wish to run Gypsum-DL using the Message Passing Interface (MPI) standard must also install the mpi4py package [28,29,30]. Gypsum-DL also relies on the MolVS library (MIT License). We have included a copy of MolVS 0.1.1 with the Gypsum-DL source code, so no additional installation is needed.
Platform testing
We have tested Gypsum-DL on several operating systems, using several versions of Python, RDKit, NumPy, SciPy, and mpi4py (Table 1). We expect it will run in many other environments as well. We note that the multiprocessing feature is not available on Microsoft Windows.
Table 1 Computational environments used for Gypsum-DL testing
Gypsum-DL can take advantage of multiple processors, if available. The user-defined job_manager parameter determines whether the program runs in "serial," "multiprocessing," or "mpi" mode. In serial mode, Gypsum-DL uses only one processor to prepare each small molecule sequentially. This mode is ideal when processing only a few compounds or when using Gypsum-DL in low-resource environments. It is also the only mode available on the Windows operating system.
In multiprocessing mode, Gypsum-DL uses multiple processors on the same computer to speed small-molecule preparation. Its dynamic load-balancing approach distributes small-molecule representations (e.g., SMILES strings) to various processors as they become available. Running in parallel, each processor independently prepares its assigned representations. Figure 4a shows benchmark run times performed on a 24-core Skylake processor using a late-stage beta version of Gypsum-DL that did not differ substantially from our final published version.
Fig. 4 Gypsum-DL benchmarks. a Run times on a single compute node, using multiprocessing mode (1000 input SMILES strings). b Run times on multiple compute nodes, using mpi mode (20,000 input SMILES strings). All benchmarks were performed in triplicate. Error bars represent standard deviations
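The dynamic load balancing described above maps naturally onto Python's standard library. The following sketch (with a placeholder preparation function) illustrates the pattern, not Gypsum-DL's actual code:

```python
import multiprocessing

def prepare(smiles):
    # Placeholder for the full preparation pipeline (desalting, ionization,
    # tautomers, 3D embedding, ...); here it simply echoes the input.
    return smiles

if __name__ == "__main__":
    smiles_list = ["CCO", "c1ccccc1", "CC(=O)O"]
    with multiprocessing.Pool(processes=4) as pool:
        # imap_unordered hands each molecule to the next free worker,
        # giving the dynamic load balancing described above.
        results = list(pool.imap_unordered(prepare, smiles_list))
    print(results)
```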
In mpi mode, Gypsum-DL distributes small-molecule preparation across multiple computers. Its static load-balancing approach splits the array of input small molecules into chunks that can each be handled concurrently on a different computer (i.e., node). This mode is ideal for use on high-performance computing clusters, where separate computers are networked together to enable calculations on a much larger scale. To leverage this setup, Gypsum-DL uses the Message Passing Interface (MPI) to control parallel communications between nodes. The user must separately install the mpi4py Python package [28,29,30] to use Gypsum-DL in mpi mode. We benchmarked the same beta version of Gypsum-DL on a computing cluster provided by the University of Pittsburgh's Center for Research Computing (CRC, Fig. 4b). The CRC provides MPI-enabled compute nodes with 28-core Broadwell Processors, networked using Intel's Omni-Path communication architecture. Note that the benchmarks shown in Fig. 4b were run on 20,000 input SMILES strings, vs. 1000 in Fig. 4a.
Comments on scalability
In theory, processing an entire virtual library should be embarrassingly parallel. But in practice two factors prevent perfectly linear scalability. First, in mpi mode Gypsum-DL uses static rather than dynamic load balancing. It assigns each input representation (e.g., SMILES string) to a processor before execution begins. If the number of inputs is divisible by the number of processors, each processor is tasked with handling the same number of inputs. Otherwise, Gypsum-DL distributes the inputs as evenly as possible. Each processor then independently and concurrently prepares its portion of the input virtual library, without requiring synchronization or memory sharing. Once all processors have finished, the main process collects the results. Static load balancing minimizes the required communication between nodes, but it can lead to computational inefficiency. If by random chance a given processor is assigned many time-consuming molecular representations, other processors may run idle while waiting for it to finish. Increasing the number of representations assigned to each processor can reduce the chances of highly unbalanced assignments.
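A minimal sketch of such static chunking (not Gypsum-DL's exact code):

```python
def split_evenly(items, n_procs):
    """Split items into n_procs chunks whose sizes differ by at most one."""
    q, r = divmod(len(items), n_procs)
    chunks, start = [], 0
    for i in range(n_procs):
        size = q + (1 if i < r else 0)  # the first r chunks get one extra item
        chunks.append(items[start:start + size])
        start += size
    return chunks

print(split_evenly(list(range(10)), 3))  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```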
Second, in both multiprocessing and mpi modes, some tasks cannot be parallelized. For example, the main process must send input data to each processor/node and collect the results when finished. Furthermore, Gypsum-DL also spawns a separate Python interpreter on each processor to handle the assigned input. The fixed time required to start up and shut down each interpreter also impacts scalability. Increasing the time spent processing molecular representations relative to the communication/startup/shutdown times (again, by increasing the number of representations assigned to each processor) thus improves scaling.
In summary, using more processors can drastically reduce the total run time (Fig. 4). But as the input data is divided among more and more processors, the number of molecular representations handled per processor begins to drop. As with most large-scale parallel calculations, users must strike a balance between short run times and computational efficiency.
Gypsum-DL improves pose-prediction accuracy
To test Gypsum-DL's impact on the accuracy of VS pose prediction, we considered 3151 protein–ligand complexes taken from the PDBBind database [13, 14]. Both Gypsum-DL and Open Babel 2.3.2 were separately used to prepare 3D models of the 3151 ligands from the corresponding SMILES strings. In the case of Open Babel, we intentionally generated electrically neutral models (i.e., we omitted the Open-Babel -p flag) so as to better judge the impact of Gypsum-DL's ionization feature on pose accuracy [11]. We docked both the Gypsum-DL-prepared and Open-Babel-prepared molecules into their corresponding protein receptors using AutoDock Vina 1.1.2 [12].
When we used Gypsum-DL, 71.4% of the 3151 ligands had RMSDs from the crystallographic pose that were less than 3.0 Å (mean: 2.37 Å; standard deviation: 2.03 Å). The same was true of 53.0% of the Open-Babel-processed molecules (mean: 3.40 Å; standard deviation: 2.51 Å). An F-test led us to reject the hypothesis that the variances of the Gypsum-DL and Open-Babel RMSDs were equal (p = 0.00). A subsequent two-tailed t test (assuming unequal variances) led us to reject the hypothesis that the Gypsum-DL and Open-Babel RMSDs had the same mean (p = 0.00). These results suggest that accounting for multiple ionization, tautomeric, chiral, cis/trans isomeric, and ring-conformational forms can improve pose-prediction accuracy.
As an example of the advantages of Gypsum-DL processing, consider folic acid bound to the human folate receptor beta (PDB ID: 4KMZ [31]; Fig. 5a). In this test case, the RMSD between the Gypsum-DL-prepared and crystallographic poses was only 0.76 Å (Fig. 5b, c). In contrast, the RMSD between the Open-Babel-prepared and crystallographic poses was 11.42 Å (Fig. 5a). Visual inspection of the docked molecules, together with structural analysis using BINANA 1.2.0, a program that automates the detection of key protein/ligand interactions [32], provides insight into why Gypsum-DL performed better. Gypsum-DL deprotonated the two folate carboxylate groups, allowing them to form strong electrostatic interactions with R152. In contrast, we did not instruct Open Babel to consider pH, so it protonated these carboxylate groups (Fig. 5a).
Fig. 5 An illustration of the crystallographic and docked poses of folic acid bound to the human folate receptor beta. The carbon atoms of the protein, the crystallographic ligand, the docked Gypsum-DL compound, and the docked Open-Babel compound are shown in green, yellow, gray, and pink, respectively. a The region of the pocket near R152. b The Gypsum-DL-prepared compound forms additional interactions with other protein residues (in green). Possible hydrogen bonds are shown as dotted lines. c The crystallographic pose, shown for reference. Image generated using BlendMol [33]
The input SMILES string represented folate in the favored 2-aminopteridin-4(1H)-one (keto) form. Open Babel does not generate alternate tautomeric states and so used this same form. Interestingly, Gypsum-DL selected the enol tautomer, 2-aminopteridin-4-ol. And yet in the accurate Gypsum-DL pose, the enol hydroxyl group may form hydrogen bonds with S190 and/or R119 (Fig. 5b, dotted lines).
Gypsum-DL also protonated one of the 2-aminopteridin-4-ol nitrogen atoms. While unusual, the resulting positive charge enables electrostatic interactions with D97 and may further strengthen the π-π stacking with W187 and Y101 by adding a cation-π interaction. Gypsum-DL's protonated secondary amine may also form cation-π interactions with W187 and Y76 (Fig. 5b). It is admittedly unclear to what extent the Gypsum-DL enol tautomer and protonated amines are physiologically relevant, but these states may have contributed to the improved pose prediction in this case.
Sample libraries for download
We used Gypsum-DL to process two small-molecule libraries. Both are available free of charge from http://durrantlab.com/gypsum-dl/ for use in VS projects. We first obtained a copy of the NCI Diversity Set VI in flat SDF format from the National Cancer Institute (NCI). This library includes compounds that have been carefully selected so as to have diverse pharmacophores and favorable chemical properties (e.g., high molecular rigidity, few rotatable bonds, limited chiral centers, etc.). The NCI provides relatively pure samples (≥ 90% by LC/Mass Spectrometry) free of charge upon request. Gypsum-DL produced 5996 3D models from 1558 input NCI structures (pH 7.4, 3.8 models per input molecule). Eight bridged compounds could not be processed. We confirmed that alternate ionization, tautomeric, isomeric, and ring-conformer forms were among the output structures.
We also processed a large library of 56,608 N-acylated unnatural amino acids. We first generated 2D representations of these molecules by reacting alkyl-halide, Michael-acceptor, and carboxylic-acid building blocks in silico, using chemical reactions developed by the Distributed Drug Discovery (D3) initiative [15,16,17,18, 34]. D3 is an educational program started at Indiana University–Purdue University Indianapolis in 2003. Its well-documented, combinatorial solid-phase synthetic procedures enable students—including undergraduates—to synthesize diverse compounds in a classroom-laboratory setting. Candidate ligands identified in VS of these compounds can thus be easily synthesized and experimentally tested. Gypsum-DL successfully processed all 56,608 input compounds, producing 148,240 3D models (pH 7.4, 2.6 models per input molecule).
To avoid amide-iminol tautomerization, we intentionally instructed Gypsum-DL to skip tautomer enumeration for the D3 compound set. The iminol form is rare in solution, though it is occasionally chemically relevant [35]. It is reasonable to consider the iminol tautomer if virtual-library compounds contain only occasional amide moieties (e.g., the NCI set). But every compound in the D3 library contains an amide moiety. Modelling the iminol tautomer would have needlessly expanded the library's size, adding to the computational cost of any subsequent VS.
Comparison with other programs
Frog2 [5] is another open-source program for preparing small-molecule libraries. Its easy-to-use web interface is among its many strengths. This web-based approach arguably makes Frog2 more user friendly than Gypsum-DL. However, Gypsum-DL does offer some key capabilities that Frog2 lacks (Table 2). For example, Frog2 uses the Open Babel cheminformatics toolkit [10] to add hydrogen atoms to input molecules, but it does not consider alternate ionization states per a user-specified pH range. In contrast, Gypsum-DL uses the Dimorphite-DL algorithm [11] to predict ionization states. To illustrate the usefulness of this feature, we submitted oseltamivir carboxylate, an influenza neuraminidase inhibitor, to the Frog2 (v2.14) server (Fig. 6a). Frog2 protonated the carboxylic-acid moiety, despite the fact that it is largely deprotonated at physiological pH. In contrast, Gypsum-DL appropriately deprotonated the carboxylate group. Deprotonation is critical in this case, as neuraminidase-oseltamivir binding is governed largely by arginine-carboxylate electrostatic interactions that require a charged (deprotonated) carboxylate moiety [36].
Table 2 The available features of several stand-alone programs for converting molecular representations into 3D models
Fig. 6 Example program output. Gypsum-DL outputs a deprotonated oseltamivir carboxylate, b both the ketone and enol forms of butan-2-one, c both the (R) and (S) enantiomers of bromochlorofluoroiodomethane, d both the E and Z isomers of 1-bromo-2-chloro-2-fluoro-1-iodoethene, e the twist-boat conformation of cis-1,4-di-tert-butylcyclohexane, and f only one rotamer of propan-1-ol. Image generated using BlendMol [33]
Frog2 is similarly limited in its ability to generate alternate tautomeric forms. To illustrate, we submitted butan-2-one, a ketone, to the Frog2 server (Fig. 6b). The server correctly returned a 3D model of the ketone, but it did not identify the alternate enol form, but-2-en-2-ol. In contrast, both the keto and enol forms were present among the Gypsum-DL-generated 3D models.
Both Frog2 and Gypsum-DL performed comparably at enumerating unspecified chiral centers and cis/trans isomers. Both generated (R) and (S) enantiomers of bromochlorofluoroiodomethane when the input SMILES did not specify chirality (Fig. 6c). And both generated the E and Z isomers of 1-bromo-2-chloro-2-fluoro-1-iodoethene when given an ambiguous SMILES as input (Fig. 6d).
Gypsum-DL takes a more thorough albeit computationally expensive approach when generating alternate non-aromatic-ring conformations. Frog2 initially uses DG-AMMOS [37] to generate ring conformations, but the algorithm ultimately keeps rings rigid and considers only dihedral variations [5]. In contrast, Gypsum-DL uses geometry optimization and clustering to identify distinct ring conformations. To illustrate the advantages of the Gypsum-DL approach, consider cis-1,4-di-tert-butylcyclohexane (Fig. 6e). In the twist-boat conformation, both of the tert-butyl groups assume equatorial positions. The energy difference between the chair and twist-boat conformations of this compound is thus unusually small [38]. Frog2 generated models in only the chair conformation. Gypsum-DL generated models in the more stable twist-boat conformation.
An additional Gypsum-DL feature is advantageous in some VS contexts. Recall that many docking programs sample non-ring, single-bond torsions during the docking process itself. It is therefore computationally inefficient to dock otherwise identical models that differ only in their non-ring torsions. Frog2 generates these redundant models, but Gypsum-DL does not. As an illustration, consider propan-1-ol (Fig. 6f). Frog2 generated three redundant conformational isomers of propan-1-ol. In contrast, Gypsum-DL generated only one. We recommend Frog2 in those cases that require a more complete torsion library (e.g., 3D- or 4D-QSAR [39], pharmacophore modelling [40], etc.). Gypsum-DL is arguably better suited for use with flexible-ligand docking programs such as AutoDock Vina [12].
Balloon [6, 7] is a free command-line program that targets more advanced users. We tested Balloon on the Linux platform only, as we could not run any of the provided binaries on macOS Mojave. Balloon 1.6.7 is in many ways similar to Frog2 (Table 2). We applied the program to the same test molecules described above. Like Frog2, Balloon does not consider alternate ionization or tautomeric states (e.g., it protonated the oseltamivir carboxylate group and failed to identify the enol form of butan-2-one; Fig. 6a, b). While Balloon's ring-generation algorithm produced the correct twist-boat conformation of cis-1,4-di-tert-butylcyclohexane (Fig. 6e), the tert-butyl moieties were not quite as equatorial as those of the Gypsum-DL model. Balloon also tends to generate redundant conformational isomers (e.g., it produced two propan-1-ol models; Fig. 6f). On the other hand, like Gypsum-DL and Frog2, Balloon does successfully enumerate unspecified chiral centers (e.g., bromochlorofluoroiodomethane; Fig. 6c) and cis/trans isomers (e.g., 1-bromo-2-chloro-2-fluoro-1-iodoethene; Fig. 6d).
Free and open-source cheminformatics toolkits such as RDKit and Open Babel [10] target the most advanced users. These toolkits provide building blocks that programmers can assemble into more complex cheminformatics workflows. The RDKit and Open-Babel Python bindings are particularly useful for this purpose. Gypsum-DL is built on RDKit and RDKit-powered software (Dimorphite-DL 1.0 [11] and MolVS 0.1.1).
We built Gypsum-DL on RDKit, MolVS, and Dimorphite-DL rather than Open Babel in part because these packages have more permissive software licenses (BSD, MIT, and Apache version 2, respectively). Permissive licenses encourage broad adoption by allowing users to incorporate software into their own projects without having to adopt the same license. In contrast, Open Babel is released under a copyleft license (GNU General Public License, version 2), which requires that any derivative works also be copyleft. We note also that Gypsum-DL's use of the Dimorphite-DL algorithm has several advantages over Open Babel's approach (see Ref. [11] for details).
Given the role that structure-based VS plays in modern drug discovery, effective techniques for generating 3D small-molecule structures are critical. Gypsum-DL is a free, open-source program that performs this important conversion to 3D. To minimize computational costs without sacrificing accuracy, we have designed Gypsum-DL to be highly parallel and computationally efficient. It can be easily incorporated into cheminformatic and drug-discovery workflows. Gypsum-DL's easy-to-use command-line interface and default parameters make it accessible to intermediate users. Additional functionality and customizability allow advanced users to control more nuanced program parameters such as the thoroughness of conformer sampling, the pH range, and the output-file format.
Though a powerful tool, Gypsum-DL does have its limitations. For example, it often fails to identify low-energy conformations for large macrocycles. Gypsum-DL uses the ETKDG algorithm [20] to generate initial 3D models for subsequent UFF-based geometry optimization. ETKDG assigns macrocycle torsions based on acyclic-bond torsion patterns derived from experiment. We expect that future versions of the ETKDG algorithm will assign macrocycle torsions using the proper experimentally derived macrocycle torsion patterns. In the meantime, Gypsum-DL still generates valid, geometry-optimized macrocycle models, though the output conformations sometimes differ substantially from the most energetically favorable minima.
Future efforts will also include building a graphical user interface and/or web application to better accommodate the needs of researchers who are less familiar with the command line. We also hope to enable Windows multiprocessing in a future release. These current limitations aside, we believe Gypsum-DL will be a useful tool for researchers interested in structure-based, computer-aided drug discovery. A copy can be downloaded free of charge from http://durrantlab.com/gypsum-dl/.
Availability and requirements
Project name: Gypsum-DL.
Project home page: http://durrantlab.com/gypsum-dl/.
Operating systems: Windows, macOS, Linux.
Programming language: Python 2/3.
Other requirements: RDKit, NumPy, SciPy, mpi4py (optional).
License: Apache License, Version 2.0.
Lionta E, Spyrou G, Vassilatis DK, Cournia Z (2014) Structure-based virtual screening for drug discovery: principles, applications and recent advances. Curr Top Med Chem 14:1923–1938
Tanrikulu Y, Kruger B, Proschak E (2013) The holistic integration of virtual screening in drug discovery. Drug Discov Today 18:358–364. https://doi.org/10.1016/j.drudis.2013.01.007
Hawkins PC, Nicholls A (2012) Conformer generation with OMEGA: learning from the data set and the analysis of failures. J Chem Inf Model 52:2919–2936. https://doi.org/10.1021/ci300314k
Hawkins PC, Skillman AG, Warren GL, Ellingson BA, Stahl MT (2010) Conformer generation with OMEGA: algorithm and validation using high quality structures from the Protein Databank and Cambridge Structural Database. J Chem Inf Model 50:572–584. https://doi.org/10.1021/ci100031x
Miteva MA, Guyon F, Tuffery P (2010) Frog2: efficient 3D conformation ensemble generator for small compounds. Nucleic Acids Res 38:W622–W627. https://doi.org/10.1093/nar/gkq325
Vainio MJ, Johnson MS (2007) Generating conformer ensembles using a multiobjective genetic algorithm. J Chem Inf Model 47:2462–2474. https://doi.org/10.1021/ci6005646
Puranen JS, Vainio MJ, Johnson MS (2010) Accurate conformation-dependent molecular electrostatic potentials for high-throughput in silico drug discovery. J Comput Chem 31:1722–1732. https://doi.org/10.1002/jcc.21460
Alland C et al (2005) RPBS: a web resource for structural bioinformatics. Nucleic Acids Res 33:W44–W49. https://doi.org/10.1093/nar/gki477
Neron B et al (2009) Mobyle: a new full web bioinformatics framework. Bioinformatics 25:3005–3011. https://doi.org/10.1093/bioinformatics/btp493
O'Boyle NM, Banck M, James CA, Morley C, Vandermeersch T, Hutchison GR (2011) Open Babel: an open chemical toolbox. J Cheminf 3:33. https://doi.org/10.1186/1758-2946-3-33
Ropp PJ, Kaminsky JC, Yablonski S, Durrant JD (2019) Dimorphite-DL: an open-source program for enumerating the ionization states of drug-like small molecules. J Cheminform 11:14. https://doi.org/10.1186/s13321-019-0336-9
Trott O, Olson AJ (2009) AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization, and multithreading. J Comput Chem 31:455–461. https://doi.org/10.1002/jcc.21334
Wang R, Fang X, Lu Y, Wang S (2004) The PDBbind database: collection of binding affinities for protein-ligand complexes with known three-dimensional structures. J Med Chem 47:2977–2980. https://doi.org/10.1021/jm030580l
Wang R, Fang X, Lu Y, Yang CY, Wang S (2005) The PDBbind database: methodologies and updates. J Med Chem 48:4111–4119. https://doi.org/10.1021/jm048957q
Scott WL, O'Donnell MJ (2009) Distributed drug discovery, part 1: linking academia and combinatorial chemistry to find drug leads for developing world diseases. J Comb Chem 11:3–13. https://doi.org/10.1021/cc800183m
Scott WL et al (2009) Distributed drug discovery, part 2: global rehearsal of alkylating agents for the synthesis of resin-bound unnatural amino acids and virtual D(3) catalog construction. J Comb Chem 11:14–33. https://doi.org/10.1021/cc800184v
Scott WL et al (2009) Distributed drug discovery, part 3: using D(3) methodology to synthesize analogs of an anti-melanoma compound. J Comb Chem 11:34–43. https://doi.org/10.1021/cc800185z
Abraham MM, Denton RE, Harper RW, Scott WL, O'Donnell MJ, Durrant JD (2017) Documenting and harnessing the biological potential of molecules in distributed drug discovery (D3) virtual catalogs. Chem Biol Drug Des 90:909–918. https://doi.org/10.1111/cbdd.13012
Ghosh AK et al (2017) Design and development of highly potent HIV-1 protease inhibitors with a crown-like oxotricyclic core as the P2-ligand to combat multidrug-resistant HIV variants. J Med Chem 60:4267–4278. https://doi.org/10.1021/acs.jmedchem.7b00172
Riniker S, Landrum GA (2015) Better informed distance geometry: using what we know to improve conformation generation. J Chem Inf Model 55:2562–2574. https://doi.org/10.1021/acs.jcim.5b00654
Rappé AK, Casewit CJ, Colwell KS, Goddard Iii WA, Skiff WM (1992) UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations. J Am Chem Soc 114:10024–10035
Hartigan JA, Wong MA (1979) Algorithm AS 136: a k-means clustering algorithm. J R Stat Soc Ser C (Appl Stat) 28:100–108
Berman HM et al (2000) The Protein Data Bank. Nucleic Acids Res 28:235–242. https://doi.org/10.1093/nar/28.1.235
Morris GM, Huey R, Lindstrom W, Sanner MF, Belew RK, Goodsell DS, Olson AJ (2009) AutoDock4 and AutoDockTools4: automated docking with selective receptor flexibility. J Comput Chem 30:2785–2791. https://doi.org/10.1002/jcc.21256
Ropp P, Friedman A, Durrant JD (2017) Scoria: a Python module for manipulating 3D molecular data. J Cheminform 9:52–58. https://doi.org/10.1186/s13321-017-0237-8
Oliphant TE (2006) Guide to NumPy. Brigham Young University, Provo
Jones E, Oliphant T, Peterson P et al (2001) SciPy: Open Source Scientific Tools for Python, 0.11.0 edn
Dalcin LD, Paz RR, Kler PA, Cosimo A (2011) Parallel distributed computing using Python. Adv Water Resour 34:1124–1139
Dalcin L, Paz R, Storti M, D'Elia J (2008) MPI for Python: performance improvements and MPI-2 extensions. J Parallel Distrib Comput 68:655–662. https://doi.org/10.1016/j.jpdc.2007.09.005
Dalcin L, Paz R, Storti M (2005) MPI for Python. J Parallel Distrib Comput 65:1108–1115. https://doi.org/10.1016/j.jpdc.2005.03.010
Wibowo AS et al (2013) Structures of human folate receptors reveal biological trafficking states and diversity in folate and antifolate recognition. Proc Natl Acad Sci USA 110:15180–15188. https://doi.org/10.1073/pnas.1308827110
Durrant JD, McCammon JA (2011) BINANA: a novel algorithm for ligand-binding characterization. J Mol Graph Model 29:888–893. https://doi.org/10.1016/j.jmgm.2011.01.004
Durrant J (2018) Blendmol: advanced macromolecular visualization in blender. Bioinformatics. https://doi.org/10.1093/bioinformatics/bty968
Scott WL et al (2015) Distributed drug discovery: advancing chemical education through contextualized combinatorial solid-phase organic laboratories. J Chem Educ 92:819–826
Fairlie DP, Woon TC, Wickramasinghe WA, Willis AC (1994) Amide-iminol tautomerism: effect of metalation. Inorg Chem 33:6425–6428
Armstrong KA, Tidor B, Cheng AC (2006) Optimal charges in lead progression: a structure-based neuraminidase case study. J Med Chem 49:2470–2477. https://doi.org/10.1021/jm051105l
Lagorce D, Pencheva T, Villoutreix BO, Miteva MA (2009) DG-AMMOS: a new tool to generate 3d conformation of small molecules using distance geometry and automated molecular mechanics optimization for in silico screening. BMC Chem Biol 9:6. https://doi.org/10.1186/1472-6769-9-6
Gill G, Pawar DM, Noe EA (2005) Conformational study of cis-1,4-di-tert-butylcyclohexane by dynamic NMR spectroscopy and computational methods. Observation of chair and twist-boat conformations. J Org Chem 70:10726–10731. https://doi.org/10.1021/jo051654z
Shim J, Mackerell AD Jr (2011) Computational ligand-based rational design: role of conformational sampling and force fields in model development. MedChemComm 2:356–370. https://doi.org/10.1039/C1MD00044F
Sen D, Chatterjee TK (2013) Pharmacophore modeling and 3D quantitative structure-activity relationship analysis of febrifugine analogues as potent antimalarial agent. J Adv Pharm Technol Res 4:50–60. https://doi.org/10.4103/2231-4040.107501
We thank professors William L. Scott and Martin J. O'Donnell of the Distributed Drug Discovery (D3) Program for helpful discussions.
This work was supported by a computer-allocation grant from the University of Pittsburgh's Center for Research Computing (CRC) to JDD. The CRC played no role in the design of this study; the collection, analysis, and interpretation of the data; or the writing of the manuscript.
Patrick J. Ropp and Jacob O. Spiegel should be regarded as joint first authors.
Department of Biological Sciences, University of Pittsburgh, Pittsburgh, PA, 15260, USA
Patrick J. Ropp, Jacob O. Spiegel, Jennifer L. Walker, Harrison Green, Katherine A. Milliken, John J. Ringe & Jacob D. Durrant
Department of Chemistry and Chemical Biology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, 46202, USA
Guillermo A. Morales
Innoventyx, LLC, Oro Valley, AZ, 85737, USA
PJR, JOS, GAM, and JDD designed the study. PJR, JOS, HG, and JDD contributed to the Gypsum-DL codebase. JOS, JLW, HG, GAM, KAM, JJR, and JDD contributed to the text, tables, and/or figures. All authors read and approved the final manuscript.
Correspondence to Jacob D. Durrant.
Ropp, P.J., Spiegel, J.O., Walker, J.L. et al. Gypsum-DL: an open-source program for preparing small-molecule libraries for structure-based virtual screening. J Cheminform 11, 34 (2019). https://doi.org/10.1186/s13321-019-0358-3
Small-molecule libraries
Virtual screening
Computer-aided drug discovery
3D structure generation
RetSynth: determining all optimal and sub-optimal synthetic pathways that facilitate synthesis of target compounds in chassis organisms
Leanne S. Whitmore1,
Bernard Nguyen1,
Ali Pinar1,
Anthe George1 &
Corey M. Hudson ORCID: orcid.org/0000-0003-4796-538X1
The efficient biological production of industrially and economically important compounds is a challenging problem. Brute-force determination of the optimal pathways to efficient production of a target chemical in a chassis organism is computationally intractable. Many current methods provide a single solution to this problem, but fail to provide all optimal pathways, optional sub-optimal solutions or hybrid biological/non-biological solutions.
Here we present RetSynth, software with a novel algorithm for determining all optimal biological pathways given a starting biological chassis and target chemical. By dynamically selecting constraints, the number of potential pathways scales by the number of fully independent pathways and not by the number of overall reactions or size of the metabolic network. This feature allows all optimal pathways to be determined for a large number of chemicals and for a large corpus of potential chassis organisms. Additionally, this software contains other features including the ability to collect data from metabolic repositories, perform flux balance analysis, and to view optimal pathways identified by our algorithm using a built-in visualization module. This software also identifies sub-optimal pathways and allows incorporation of non-biological chemical reactions, which may be performed after metabolic production of precursor molecules.
The novel algorithm designed for RetSynth streamlines an arduous and complex process in metabolic engineering. Our stand-alone software allows the identification of candidate optimal and additional sub-optimal pathways, and provides the user with necessary ranking criteria such as target yield to decide which route to select for target production. Furthermore, the ability to incorporate non-biological reactions into the final steps allows determination of pathways to production for targets that cannot be solely produced biologically. With this comprehensive suite of features RetSynth exceeds any open-source software or webservice currently available for identifying optimal pathways for target production.
The biological production of compounds for industrial applications is an interesting and complex problem. From the perspective of biological retrosynthesis, there are essentially two challenges: 1) identifying new enzymes to perform difficult and/or important chemical reactions, and 2) determining the optimal (minimal) number of gene additions required to convert an industrial organism into one capable of successfully producing a compound of interest. There is a growing body of literature for solving the first problem, and recent work on polyketide design has demonstrated considerable success [1]. This paper is focused on the second problem, which we argue is essentially a routing challenge. Identifying the minimal number of gene additions (herein referred to as an optimal pathway) has cost- and time-saving benefits in downstream production. Producing a compound of interest (hereafter x) that is not native to an organism requires determining the reaction (and corresponding enzyme/gene) additions necessary to produce x. Without complex routing algorithms, the number of possible optimal pathways grows exponentially with the pathway length. As new biological reactions enter the literature and become available for synthetic addition, the optimal pathways may fork down completely different routes. Furthermore, there may be scenarios where the yield of a given compound is optimized but the number of gene additions is sub-optimal (pathways with a greater number of gene/enzyme additions than the minimum). These all represent distinct challenges in determining pathways to production.
Reaction additions and subsequent optimal pathways can be determined, inefficiently, by one-by-one addition of non-native reactions to a stoichiometric matrix for a chassis organism, followed by flux balance analysis (FBA) to determine whether the compound is produced without interfering with biomass production. FBA is a tool widely used in predicting genome-scale metabolic behavior [2]. It is principally used for its ease of setup and efficient optimal search. At a minimum, FBA requires a stoichiometric matrix (S) that is complete with regard to the available reactions and compounds for a given organism. The reactions are conventionally tied to a set of explicit enzymes and transporters. FBA uses linear programming, requiring an objective function (Z), to solve for the metabolism of interest. This may involve minimization of input, maximization of output, or other constraints [3].
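For concreteness, a minimal FBA run with CobraPy (the library RetSynth uses for this purpose, as described below); the model file name and objective reaction are illustrative:

```python
import cobra

# Load a genome-scale model from SBML; the file name is illustrative and
# assumes a local copy of the E. coli core model.
model = cobra.io.read_sbml_model("e_coli_core.xml")
# The objective function Z: here, maximize the core biomass reaction.
model.objective = "BIOMASS_Ecoli_core_w_GAM"
solution = model.optimize()
print(solution.objective_value)
```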
Given k reactions that produce x, the naive approach to adding new reactions is to test each of the k reactions in the database to see whether x is produced given the compounds available from FBA. This requires querying each of the k reactions. If there is a single-step solution, it solves in FBA(k) time. Where there are no single-step solutions, the problem explodes exponentially. A two-step solution requires not just the k reactions, but all reactions that produce precursors to the k reactions. If the average number of reactions producing a given compound is \(\overline{g}\), the number of pathways that must be tested for a y-step solution in the worst case is \(\text{FBA}(\overline{g}^{y})\).
RetSynth overcomes this naive and inefficient method of identifying solutions, particularly in the worst case, using constraint-based mixed-integer linear programming (MILP). Given a database of known biological and chemical reactions and a genome-scale metabolic model, which RetSynth can construct from numerous metabolic repositories of known enzymatic and chemical transformations, all optimal genetic additions required to produce a given compound of interest can be determined. The MILP minimizes an objective value that represents the number of steps in the pathway. While selecting pathways based on the number of reaction steps does not account for other issues in synthetic pathways (such as enzyme efficiency, enzyme or compound toxicity, or target yield), it is an ideal starting method for identifying synthetic pathways, as minimizing the alterations made to a chassis organism is likely to lessen the above-mentioned issues as well as be more cost-effective. Additionally, by resetting the weights of reactions in the optimal pathway, RetSynth automatically finds novel sub-optimal pathways, thereby providing alternative pathways that may have better target yield or fewer toxicity problems. This can be performed iteratively to determine all sub-optimal pathways up to a specific path length.
Herein we describe the algorithm developed as part of RetSynth to efficiently provide solutions for targeted compound production. Subsequently, RetSynth can determine which pathway will produce the highest yield of a target compound using FBA. With this comprehensive suite of features, RetSynth is an efficient tool for identifying optimal solutions to target compound synthesis. Additionally, we compare RetSynth's performance to that of other tools that can find optimal pathways to target compound production, such as OptStrain [4], MetaRoute [5], GEM-Path [6], ReBIT [7], RetroPath [8], and RouteSearch [9]. RetSynth outperformed these tools in overall capabilities, including identifying more optimal and sub-optimal pathways, evaluating pathway efficiencies using FBA, the number of metabolic repositories it can compile into a single concise metabolic database, and the time necessary to identify optimal and sub-optimal pathways. Identification of sub-optimal pathways allows the user more pathway choices than other algorithms currently provide, while not producing an overwhelming number of solutions. The ability to provide both optimal and sub-optimal solutions is unique to RetSynth and to our knowledge does not currently exist in other available tools.
RetSynth includes a comprehensive suite of features necessary for complete implementation of the software. To find pathways, RetSynth requires a metabolic database of reaction (with corresponding catalytic gene/enzyme information) and compound information. RetSynth can construct this database from a number of metabolic repositories, including PATRIC [10, 11], KBase [12], MetaCyc [13], KEGG (Kyoto Encyclopedia of Genes and Genomes) [14], MINE (Metabolic In-Silico Network Expansion database) [15], the ATLAS of Biochemistry [16], and SPRESI [17]. Additionally, users can add individual reactions to the database; these may be newly discovered reactions from the literature or proprietary reactions. Combining biological and chemical reaction repositories into one database allows RetSynth to construct a comprehensive and concise metabolic database. To rank discovered pathways based on target yield in a chassis organism, RetSynth uses CobraPy [18] to perform FBA. The results are conveniently rendered with a visualization module, allowing the user to quickly interpret them. RetSynth is a stand-alone software package, built with PyInstaller, that requires neither a webservice nor MATLAB; it is written entirely in Python except for two required non-Python dependencies, the GNU Linear Programming Kit (http://www.gnu.org/software/glpk) and libSBML [19]. Finally, we have built an easy-to-use graphical user interface to make RetSynth usable by everyone.
RetSynth algorithm
The algorithm described below was developed for the RetSynth software to rapidly and efficiently identify all optimal pathways to target compound production in a specified chassis organism. Optimal pathways can then be ranked based on their ability to produce the highest yields of a compound by evaluating flux through each candidate pathway.
To identify optimal pathways, we constructed a MILP:
$$\begin{array}{*{20}l} & \text{minimize} \qquad z=\mathbf{t}^{\mathrm{T}} \mathbf{x}\\ & \text{s.t.} \qquad \qquad \,\,\mathbf{Cx = d}, \\ & \text{and} \qquad \qquad \mathbf{x} \in \text{\{0,1\}}^{m}, \end{array} $$
where the entire RetSynth metabolic database is represented by a stoichiometric matrix C with dimensions m molecules × n reactions. x is a vector of n variables representing the presence or absence (1 or 0) of each reaction in an optimal path. In Cx = d, d is a vector of length m that sets bounds on metabolite availability, depending on whether the molecule is a native metabolite of the chassis organism (n), which is unconstrained; a non-native metabolite (w), which is constrained so that any molecule consumed in the optimal path must also be produced by a reaction in the optimal path; or the target molecule (g), which must be produced by a variable (2).
$$ \begin{aligned} n = \left[\begin{array}{l} \infty \\ \infty \\ \vdots \\ \infty\\ \end{array}\right] w = \left[\begin{array}{l} \geq 0 \\ \geq 0 \\ \vdots \\ \geq 0\\ \end{array}\right] g = \left[\begin{array}{l} 1 \\ \end{array}\right] d = \left[\begin{array}{l} n \\ w \\ g \\ \end{array}\right] \end{aligned} $$
The objective function is set to minimize the number of variables (reactions) needed to produce the target compound. The objective function weights are distributed based on whether the variables (reactions) are native (I, vector of weights for native variables) or not native (E, vector of weights for non-native variables) (3).
$$ \begin{aligned} I = \left[\begin{array}{l} 0 \\ 0 \\ \vdots \\ 0\\ \end{array}\right] E = \left[\begin{array}{l} 1 \\ 1 \\ \vdots \\ 1\\ \end{array}\right] t = \left[\begin{array}{l} I \\ E \\ \end{array}\right] \end{aligned} $$
To identify all the optimal pathways, a penalty function is added to variables that are already identified as part of an optimal pathway, forcing the algorithm to seek an alternative optimal pathway. To implement this algorithm, let \(S_{v}\) be the total set of variables and \(S^{*}_{v}\) the subset of variables in an optimal pathway. We compute the penalty such that any optimal pathway of the modified problem remains an optimal pathway of the original problem, that is, \(\mathbf{t}^{\mathrm{T}}\mathbf{x} < \beta^{*}(1+1/(2\beta^{*})) < \beta^{*}+1\), where \(\beta^{*}\) is the number of reaction steps in the optimal pathway.
Here we illustrate how variables are weighted given that they are in an identified optimal pathway \(S^{*}_{v}\). Assume the jth variable is part of an optimal pathway but is not included in \(S^{*}_{v}\). Then we have \(t_{j}=1\). The weights in t for the other \(\beta^{*}-1\) variables that are part of the optimal pathway are \(1+1/(2\beta^{*})\). Altogether, the optimal pathway value for the modified problem will be \(\beta^{*}+1/2-1/(2\beta^{*})\). The algorithm terminates only after the objective function value of the modified problem reaches \(\beta^{*}(1+1/(2\beta^{*}))\), which is higher than the value of the pathway that includes the jth variable (Algorithm 1). This leads to a contradiction and proves that our algorithm includes all variables that are part of an optimal pathway.
Sub-optimal length pathway enumeration
RetSynth is able to find pathways that are not only optimal, but pathways of length up to \(\beta^{*}+k\), where k is a user-set parameter indicating the level of sub-optimal pathways to be identified. This involves adding constraints to (1) that prevent any of the initial optimal pathways from being rediscovered, forcing the algorithm to seek the next-best pathway. For each initial optimal pathway, a constraint is added:
$$ \begin{aligned} Y = \left[\begin{array}{l} 0 \\ 0 \\ \vdots \\ 0\\ \end{array}\right] O = \left[\begin{array}{l} 1 \\ 1 \\ \vdots \\ 1\\ \end{array}\right] P = \left[\begin{array}{l} Y \\ O \\ \end{array}\right] \end{aligned} $$
where Y are variables that are not part of a given optimal pathway and O are variables in an optimal pathway \(S^{*}_{v}\). Combining vectors Y and O results in vector P (4). Constraints are set so that the combination of reactions in the optimal pathway cannot be identified as a solution. With the new constraints the metabolic system is:
$$\begin{array}{ll} \text{minimize} & z=\mathbf{t}^{\mathrm{T}} \mathbf{x} \\ \text{s.t.} & \mathbf{Cx} = \mathbf{d}, \\ & \text{for each } \beta^{*} \text{ in optimal solutions: } \mathbf{P}^{\mathrm{T}} \mathbf{x} \leq \beta^{*}-1 \\ \text{and} & \mathbf{x} \in \{0,1\}^{n} \end{array} $$
Adding these constraints forces the algorithm to seek the next best sub-optimal pathway (5). At each level, \(k\) constraints are added to prevent the algorithm from rediscovering previous levels of optimal or sub-optimal pathways. For each level of \(k\), algorithm (1) is applied to identify all sub-optimal pathways at that level, except that (5) is solved in place of (1). A sketch of such an integer-cut constraint follows below.
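Each such integer cut can be added to the PuLP sketch from above in one line; \(\mathbf{P}^{\mathrm{T}}\mathbf{x}\) counts how many reactions of a stored optimal pathway are reused, and capping it at \(\beta^{*}-1\) excludes exactly that pathway.

```python
import pulp

def add_integer_cut(prob, x, optimal_pathway):
    """Implements P^T x <= beta* - 1 for one stored optimal pathway:
    the beta* reactions of that pathway may never all be selected again."""
    prob += (pulp.lpSum(x[r] for r in optimal_pathway)
             <= len(optimal_pathway) - 1)
```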
After all optimal and sub-optimal solutions are identified, pathways are integrated into an FBA model for the chassis organism and FBA is run optimizing growth (the biomass reaction) and production of the target compound [2, 18].
Enumerating and backtracking all solutions
The new set \(S_{v}^{*}\) is typically much smaller than \(S_v\) and drastically reduces the search space for enumerating all optimal solutions. To track optimal paths, we define a directed graph \(G=(V,E)\) with two types of nodes: \(V=V_c \cup V_p\) with \(V_c \cap V_p = \emptyset\). The process nodes \(V_p\) represent the enzymes selected in the previous section, whereas the compound nodes \(V_c\) represent all compounds that are inputs to the processes. Directed edges represent the input/output relationships between compounds and processes. Backtracking starts with the target compound \(x\). Step 1 determines the processes in \(V_p\) that produce \(x\) and connects a directed edge between those nodes and \(x\); these nodes are then removed from \(V_p\). Step 2 determines the compounds that serve as inputs to the removed nodes and adds them to \(V_c\). If \(V_p\) is not empty, step 1 is repeated for each node added to \(V_c\). This process repeats until \(V_p\) is empty, yielding a directed dependency graph \(G\) of all pathways from native metabolism to \(x\). A sketch of this backtracking loop is given below.
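A minimal version of this backtracking loop, assuming hypothetical producers/inputs_of lookup tables derived from the selected process nodes, might look as follows.

```python
from collections import deque

def backtrack(target, producers, inputs_of):
    """Build the directed dependency graph G for a target compound.
    producers : dict {compound: iterable of processes producing it}
    inputs_of : dict {process: iterable of its input compounds}
    Returns the edge list of G = (Vc U Vp, E)."""
    edges = []
    seen_compounds = {target}
    seen_processes = set()           # mirrors removal of handled nodes from Vp
    queue = deque([target])          # compounds whose producers remain to visit
    while queue:
        compound = queue.popleft()
        for proc in producers.get(compound, ()):   # step 1: producing processes
            if proc in seen_processes:
                continue
            seen_processes.add(proc)
            edges.append((proc, compound))         # process -> compound edge
            for c in inputs_of[proc]:              # step 2: input compounds
                edges.append((c, proc))            # compound -> process edge
                if c not in seen_compounds:
                    seen_compounds.add(c)
                    queue.append(c)
    return edges
```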
Given a compound of interest and a dependency graph G, a connected subgraph that includes the node for the compound of interest and at least one predecessor node for each compound node describes a feasible solution to the problem. Symmetrically, any feasible solution is a subgraph that satisfies these conditions. Subsequently, such a subgraph with minimum number of process nodes defines an optimal solution.
Validating RetSynth
Using metabolic networks from KBase and data from the MetaCyc metabolic repository, RetSynth was used to identify optimal pathways for compounds which already have experimentally tested synthetic pathways in Escherichia coli. Comparing model results to experimentally validated pathways demonstrates that RetSynth can generate practical candidate pathways for compound synthesis.
2-propanol has previously been produced in Escherichia coli JM109 grown on LB media. Enzymes were added to E. coli to convert the native precursor acetyl-CoA into 2-propanol [20]. These conversions include acetyl-CoA to acetoacetyl-CoA, acetoacetyl-CoA to acetoacetate, acetoacetate to acetone, and finally acetone to 2-propanol. The enzymes thiolase, CoA-transferase, acetoacetate decarboxylase, and alcohol dehydrogenase were added to Escherichia coli JM109 to facilitate these reactions. For RetSynth, the chassis organism Escherichia coli strain K-12 M1655 was used because a metabolic model for strain JM109 was not freely available. The optimal pathway identified by RetSynth consisted of the catalytic conversions of acetoacetate to acetone and acetone to 2-propanol (catalyzed by acetoacetate decarboxylase and alcohol dehydrogenase, respectively) (Fig. 1a). Though shorter, because the Escherichia coli K-12 M1655 strain natively has acetoacetate (which must be synthetically produced in Escherichia coli JM109), RetSynth's optimal pathway follows the overall production route shown by Jojima et al. to be effective in producing 2-propanol [20].
To produce 1-butanol in Escherichia coli BW25113 on M9 media, Atsumi et al. added a synthetic pathway consisting of three enzymatic conversions starting with the conversion of 2-ketobutyrate to 2-oxovalerate [21]. Because 2-ketobutyrate is a rare metabolite in Escherichia coli BW25113, the authors added an overexpressed leuABCD pathway to increase yields of this precursor. Subsequently, 2-oxovalerate is converted to butanal by pyruvate decarboxylase and then to butanol by alcohol dehydrogenase. Using the standard BW25113 metabolic model retrieved from the KBase repository, RetSynth was unable to identify this pathway because the model did not contain a reaction for 2-oxovalerate synthesis. The absence of this metabolite in the model is unsurprising, as the natural yield of the precursor is minimal in Escherichia coli [21]. However, with RetSynth it is simple to manually add this pathway to the model, as Atsumi et al. did to increase production of 2-oxovalerate. Once the leuABCD pathway was added, RetSynth identified the same pathway as was published by Atsumi et al. (Fig. 1b).
RetSynth Validation. Optimal pathways identified by RetSynth for 2-propanol (a), butanol (b) and 3-methylbutanol (c). Red indicates compound targets, magenta indicates native compounds to Escherichia coli K-12 M1655 or BW25113
Our third validation example was to find the optimal pathway for production of 3-methylbutanol in Escherichia coli strain BW25113. Our pathway converted the native metabolite 2-keto-4-methylpentanoate to 3-methylbutanal and subsequently produced 3-methylbutanol via the added enzymes pyruvate decarboxylase and alcohol dehydrogenase (Fig. 1c). This matches the synthetic pathway used by [21] to produce 3-methylbutanol.
Optimal and sub-optimal pathways for MetaCyc compounds in Escherichia coli K-12 M1655
The power of RetSynth lies in its ability to quickly identify optimal and sub-optimal pathways for a large set of target compounds. To illustrate this strength, a database was constructed consisting of a KBase metabolic network for Escherichia coli K-12 M1655 and MetaCyc reaction information. For every compound in the MetaCyc repository that was not native to Escherichia coli K-12 M1655, RetSynth identified an optimal pathway along with two levels of sub-optimal pathways (pathways that require more than the minimal number of gene additions; specifically, the second- and third-best numbers of gene/reaction additions).
Of the 15,706 MetaCyc compounds that were not native to Escherichia coli K-12 M1655, we found synthetic pathways for 3462 compounds. Optimal and sub-optimal pathways for methyl acetate and pterostilbene, both of which have economic value, are illustrated in Fig. 2. For methyl acetate, which is commonly used in paints and nail polish, an optimal pathway and two levels of sub-optimal pathways were identified for production in Escherichia coli. The optimal pathway synthesizes acetone from the native compound acetoacetate and subsequently converts acetone to methyl acetate (Fig. 2a). The last step of the optimal pathway is thus shared among all candidate pathways. The two level-one sub-optimal pathways comprise the conversion of the native compound farnesyl diphosphate to acetone and the conversion of methylglyoxal to acetone through two enzymatic steps. The level-two sub-optimal pathway synthesizes 2-methylpropanal-oxime from the native compound valine, followed by three enzymatic conversions to produce acetone. The second target compound, pterostilbene, which has been shown to have health benefits such as lowering cholesterol and glucose levels [22], can be synthesized in Escherichia coli through the identified optimal pathway, which consists of four enzymatic conversions starting with the native compound tyrosine, or through the level-one sub-optimal pathway, which has five enzymatic conversions starting with phenylalanine (Fig. 2b). A level-two sub-optimal pathway could not be identified for this compound. Theoretical yields were predicted using RetSynth's FBA module to be 0.24 and 0.02 (mol/mol of glucose) for methyl acetate and pterostilbene, respectively. These compounds are just two examples of the 3462 compounds for which we were able to quickly and efficiently discover optimal and sub-optimal pathways.
Optimal and sub-optimal pathways. Optimal and sub-optimal pathways identified by RetSynth for methyl acetate (a), and pterostilbene (b). Red indicates compound targets, magenta indicates native compounds to Escherichia coli K-12 M1655
Of the 3462 targets, 513 compounds had optimal and sub-optimal level-one and level-two pathways, 1125 compounds had optimal and sub-optimal level-one pathways, and the remaining 1824 compounds had only optimal pathways. The average number of pathways identified per compound was 7, and the average time required to calculate all pathways for a compound was 8 minutes (Fig. 3). Some compounds significantly exceeded the average time, owing to the process of eliminating cyclic pathways. When a cyclic pathway is identified, constraints must be added to the MILP to prevent the pathway from being identified as a viable route to production (Additional file 1). The MILP is then resolved to calculate an alternative pathway. Thus, compounds with multiple cyclic pathways dramatically increase the time required to find optimal routes to production.
Optimal and sub-optimal pathways. Number of pathways versus time for each target compound. Red dashed lines indicate the averages on the Y and X axis. Colors indicate whether optimal and sub-optimal (level 1 and 2) pathways (yellow), optimal and sub-optimal (level 1) pathways (teal) or optimal pathways only (purple) could be identified for each compound
Using the RetSynth results for the 3462 target compounds, we can identify the reaction/enzyme that is common to the highest number of them; the corresponding gene would be an advantageous addition to cultured strains of Escherichia coli. To identify which reaction/enzyme would make an optimal genetic modification (i.e., one leading to the production of the highest number of downstream targets, given that subsequent genetic modifications were made), we counted, for each reaction/enzyme, the number of compounds for which it was the first step in an optimal or sub-optimal pathway. Each reaction/enzyme was counted only once per compound, even if it appeared in multiple optimal and/or sub-optimal pathways. Of the 766 enzymes that were the first step in optimal and/or sub-optimal pathways, we identified 24 enzymes that were in 50 or more compound production pathways (Fig. 4a). The top four reactions/enzymes, found in the pathways of more than 100 target compounds, are illustrated in Fig. 4b, c, d, and e. Enzymes 1.1.1.222 and 1.1.1.237 are hydroxyphenylpyruvate reductases, which catalyze the reactions in Fig. 4b and c, respectively, and are natively found in Solenostemon scutellarioides. The remaining two enzymes, 4.3.1.23 and 4.3.1.24 (tyrosine ammonia-lyase and phenylalanine ammonia-lyase, respectively), catalyze the reactions in Fig. 4d and e. These enzymes are natively found in Rhodotorula glutinis and Ustilago maydis, respectively. Additionally, enzyme 4.3.1.25, found in Rhodotorula glutinis, can catalyze both of these reactions. By identifying enzyme additions that appear in the highest number of target compound production pathways, RetSynth can guide and enhance the development of efficient chassis organisms for optimal production of all types of economically and industrially valuable target compounds.
Optimal enzyme/gene addition. a Depicts the number of compounds for which each enzyme is in an optimal or sub-optimal pathway (only enzymes that are in 50 or more compound pathways are shown). b, c, d, e Are the reactions catalyzed by the top four enzymes in the highest number of compound pathways
Biological and chemical hybrid pathways for target compound production
In addition to identifying biological optimal and sub-optimal pathways, RetSynth can incorporate strictly synthetic chemistry reaction repositories, such as SPRESI, which contains thousands of chemical reactions, into its metabolic database. By integrating SPRESI into RetSynth's MetaCyc and KBase database, pathways that use both biological and chemical reactions to produce target compounds (termed hybrid pathways) can be discovered. With the addition of SPRESI, 413 more target compound production pathways were identified. The hybrid pathway for production of benzene in Escherichia coli K-12 M1655 (Fig. 5) consists of the enzymatic conversion of the native compound 4-aminobenzoic acid to phenylamine (predicted theoretical yield of 0.24 mol/mol glucose), which can subsequently be chemically converted into benzene [23]. Benzene is an important precursor for the production of other high-value compounds. The ability to build a hybrid database greatly expands RetSynth's capacity to find pathways to production of many target compounds that would otherwise not be possible.
Optimal pathway for benzene production. Hybrid pathway including biological and chemical reactions necessary to produce benzene. Red indicates compound targets, magenta indicates native compounds to Escherichia coli K-12 M1655
Benchmarking RetSynth to other pathway identifying tools
A number of other tools can find synthetic pathways for target compounds; however, none of them encompasses all of the features of RetSynth (Table 1). We compared RetSynth to other tools by benchmarking features such as the number of pathways found for each target compound, predicted yield of each target (where applicable), and the time required to obtain results.
Table 1 Comparison of different software
OptStrain
OptStrain uses mixed integer linear programming (an optimization-based framework) to find stoichiometrically balanced pathways that produce a target compound in a specified chassis organism [4]. The design flow for this software follows three main steps: 1) generation of a metabolic database filled with stoichiometrically balanced reactions from four metabolic repositories (KEGG, EMP (Enzyme and Metabolic Pathways), MetaCyc, and UM-BBD (University of Minnesota Biocatalyst/Biodegradation database)), 2) calculation of the maximum theoretical yield of the target compound with no restriction on whether native or non-native reactions are used, and 3) identification of the pathway that minimizes the number of non-native reactions and maximizes theoretical yield. Additionally, OptStrain identifies alternative pathways that meet both criteria of minimizing non-native reactions and maximizing theoretical yield. Because the software is no longer supported, a direct comparison with RetSynth could not be performed. However, there are several key differences between the two tools. RetSynth gives the user direct control over the pathways identified, specifically the level of sub-optimal pathways to find, and does not tie them directly to the yield of the target compound, which ultimately results in a more comprehensive list of synthetic pathways to evaluate. The user also has more freedom to add a variety of reaction and compound types to the RetSynth database, including those from the literature that are not yet in a repository, as well as chemical reactions. Integrating chemical reactions into the database also permits the user to identify hybrid pathways (containing both biological and chemical reactions). Because not all targets can be produced biologically, this gives the user more pathways than would otherwise have been achievable using OptStrain. Additionally, the overall usability of RetSynth far surpasses OptStrain's, primarily because RetSynth has an easy-to-use graphical user interface and is a stand-alone software package, precluding the need for any knowledge of programming or command-line usage. Overall, these features make RetSynth a more comprehensive and functional tool than what OptStrain currently provides.
GEM-Path
The GEM-Path algorithm uses several different techniques to design pathways for target compound production in a chassis organism [6]. The algorithm uses 443 reactions pulled from the BRENDA and KEGG repositories to identify pathways in Escherichia coli. The 443 reactions were methodically classified into three categories: 1) reactions that use no co-substrates or co-factors, 2) reactions that are anabolic conversions (merging the substrate with a co-substrate), and 3) reactions that are catabolic conversions, in which the substrate breaks down into the corresponding product and co-product. Additionally, a thermodynamic analysis calculating ΔG (kJ/mol) was performed for each reaction, as was a promiscuity analysis (determining whether an enzyme could accept multiple substrates). GEM-Path then implements a pathway predictor algorithm, which works by 1) designating a target compound and setting predictor constraints (maximal pathway length, metabolites to compute at each iteration, thermodynamic threshold, and reaction promiscuity threshold), 2) applying reactions to the target in a retrosynthetic manner to generate the corresponding substrates, and 3) checking whether each substrate matches a compound in the Escherichia coli metabolome. If a pathway is found, FBA is run to validate production.
GEM-Path is not available for public use, and there are other differences between the two tools. GEM-Path integrates more detailed reaction parameters when predicting a pathway (i.e., ΔG and promiscuity) than RetSynth uses to identify optimal solutions. This makes GEM-Path's metabolic database substantially smaller than RetSynth's, and it therefore misses many synthetic pathway opportunities. Additionally, GEM-Path's algorithm does not allow multiple pathways per target to be identified, limiting the potential pathways provided to the researcher.
MetaRoute
MetaRoute is a web-based tool that finds pathways between two specified compounds using a graph-based search algorithm [5]. Specifically, this tool uses Eppstein's k-shortest-path algorithm to find the shortest distance between two nodes in a graph. The graph representing a metabolic network was built by 1) using pre-calculated and concise atom-mapping rules, in which two successive reactions are represented by a single edge, 2) removing irrelevant reaction conversions (e.g., glucose 6-phosphate to ATP to AMP), and 3) using an updated weighting schema that decreases the weights on edges through frequently used metabolites, which traditionally had higher weights. The graph of reactions and compounds that MetaRoute uses was built from several metabolic repositories, including BN++ (a biological information system), BNDB (a biochemical network database), and KEGG. There are several key differences between this web-based tool and RetSynth, one being that a source compound must be specified instead of a chassis organism, which limits the number of pathways that can be discovered. While a user could perform a pathway search between every internal chassis compound and the target, obtaining all optimal pathways this way would take an extraordinary amount of time and would require the user to sort through the pathways to identify the best route. Additionally, MetaRoute cannot find sub-optimal pathways or evaluate the effectiveness of pathways through FBA. RetSynth's capabilities far exceed MetaRoute's, including being a stand-alone software package that does not require a web service.
RouteSearch
RouteSearch is a module of the Pathway Tools software that uses the EcoCyc and MetaCyc databases for synthetic pathway identification [9]. This tool applies a branch-and-bound search over atom-mapping rules to find optimal pathways between a set of starting compounds (or a specified source compound) and a target compound. Users can specify the weights (costs) of identifying pathways with reactions native to the chassis organism and those external to the organism. Additionally, multiple optimal pathways, as well as higher-cost or longer sub-optimal pathways, can be identified by RouteSearch. The user must specify how many pathways they want to examine; if there are fewer optimal pathways than the user specified, RouteSearch returns longer (sub-optimal) pathways. When identifying pathways with RouteSearch via the BioCyc web browser, a set of source compounds can be used to find pathways to an individual target compound, and a number of external bacterial organisms can be set by the user in which to search for optimal pathways. When all bacterial organisms are used, however, RouteSearch freezes and is unusable. In addition to the web browser, RouteSearch can be used through the Pathway Tools software suite, which allows all MetaCyc reactions to be loaded quickly and efficiently. When RouteSearch is used through Pathway Tools, however, only a single source compound can be set, and optimal pathways cannot be identified from an entire set of source compounds. Thus, a search for optimal and sub-optimal pathways using all native chassis-organism metabolites cannot be achieved rapidly or efficiently. While RouteSearch can perform functions similar to RetSynth's, the usability and system-wide analysis that RetSynth provides cannot be matched.
Retrobiosynthesis
Retrobiosynthesis is a synthetic biology tool that builds novel synthetic pathways for compound production. The tool, developed by the Swiss Federal Institute of Technology [24], first implements a network generation algorithm that compiles a list of all theoretically possible enzymatic transformations. A pathway reconstruction algorithm, using either graph-based search or optimization-based methods, then builds all possible pathways from a source compound to a target. After these algorithms are run, reduction steps are taken to decrease the amount of information; these include 1) sorting through the list of possible enzymatic transformations and comparing known versus novel ones using repositories such as KEGG, MetaCyc, and ChEBI, and 2) sifting through the pathways and selecting some based on thermodynamic feasibility, the number of enzymatic transformations in a pathway, and maximum target yield.
Although the Retrobiosynthesis tool performs many of the same functions as RetSynth, and can predict novel enzymatic transformations, its availability to independent researchers is limited: it requires setting up a collaboration with the Swiss Federal Institute of Technology and having them run the analysis. Retrobiosynthesis also requires designation of a source compound, making it likely that identifying all pathways to a target in a chassis organism would require a large amount of time, although we could not test this because we do not have access to the tool. RetSynth is a stand-alone software package with a graphical user interface that researchers can download and use independently, making pathway identification less reliant on the developers. Overall, RetSynth is quicker and easier for researchers to use to find optimal pathways.
RetroPath
RetroPath is a synthetic pathway finding tool used to identify pathways between a set of source compounds and a target compound [8]. RetroPath uses a database of external metabolic reactions (named RetroRules) constructed from reaction information collected from BNICE, Simpheny, KEGG, Reactome, Rhea, and MetaCyc. Reactions are represented by reaction SMARTS, which enables prediction of potential novel enzymatic transformations. Pathways between source and target compounds are calculated by identifying the shortest hyperpath in a larger weighted hypergraph (constructed from the database of external reactions) using the FindPath algorithm [25, 26].
To compare synthetic pathways between RetSynth and RetroPath, we first retrieved the reaction SMARTS available for the MetaCyc repository from the RetroRules full database (https://retrorules.org/). A RetSynth database was then built to match the reactions in the RetroPath MetaCyc reaction rules database so that an equal comparison between the tools could be run. Additional RetroPath parameters, namely the maximum and minimum diameter and the maximum molecular weight for source compounds, were kept at their default values of 1000, 0, and 1000, respectively. Diameter is a measure of the depth and detail of the molecular reaction signatures (reaction SMARTS) used to identify pathways in RetroPath: the larger the diameter, the more detailed and strict the reaction SMARTS are, and the less able they are to predict novel reactions. Because RetSynth cannot predict novel reactions, and we wanted a strict comparison between the two tools, the maximum diameter of 1000 keeps the reaction SMARTS sufficiently strict to prevent novel reactions from being identified by RetroPath. Additionally, the source compounds (metabolites native to Escherichia coli K-12 M1655) were the same for the two tools. Using RetroPath, run on the KNIME analytics platform with a pathway limit of 10 reaction steps (matching the default pathway limit of RetSynth), we attempted to identify pathways for all MetaCyc compounds not in Escherichia coli. This query, however, was too large for RetroPath to handle, so RetroPath was instead employed to find pathways for a smaller set of target compounds comprising methyl acetate, pterostilbene (Fig. 2), 2-propanol, butanol, sabinene, 2-methylbutanal, and isobutanol. With this smaller database, RetSynth was able to identify pathways for all compounds in the set, while RetroPath was only able to find optimal and sub-optimal pathways for 2-methylbutanal, isobutanol, and 2-propanol (Fig. 6).
RetSynth vs RetroPath2.0. Optimal and sub-optimal pathways identified by RetSynth and RetroPath for 2-propanol (a), 2-methylbutanal (b) and isobutanol (c). Red indicates compound targets, magenta indicates native compounds to Escherichia coli K-12 M1655
RetSynth and RetroPath were each able to identify 3 pathways for production of 2-propanol in Escherichia coli (Fig. 6a). The pathways identified by the tools consisted of 1) the conversion of the native compound farnesyl diphosphate to 2-propanol in 3 enzymatic conversions, 2) the conversion of the native compound acetoacetate to 2-propanol in 2 enzymatic conversions, and 3) the conversion of methylglyoxal to 2-propanol in 3 enzymatic conversions. Both tools were also able to find synthetic pathways for 2-methylbutanal (Fig. 6b). RetSynth found 3 pathways, all of which contained 2 enzymatic steps. All of these pathways produce the intermediate 3-methyl-2-oxopentanoate (which is subsequently converted to 2-methylbutanal) from 3 different native compounds: 2-methylbutanoyl-CoA, isoleucine, and 3-methyl-2-oxobutanoate. RetroPath was only able to identify one pathway, the conversion of isoleucine to 3-methyl-2-oxopentanoate and then to 2-methylbutanal. Finally, for isobutanol, 3 pathways of almost identical enzymatic conversions were found by RetroPath and RetSynth (Fig. 6c). Both identified the 3-step pathway that takes valine and produces isobutanol, as well as a 2-step pathway that takes 3-methyl-2-oxobutanoate and produces isobutanol. The final pathway of 3 enzymatic conversion steps starts again with the native compound 3-methyl-2-oxobutanoate and transforms it into isobutanoyl-CoA, then into isobutanal, and subsequently isobutanol. The second step is catalyzed by EC 1.2.1.10 in RetSynth and EC 3.6.1.- in RetroPath2.0. The removal of CoA from a substrate is represented by a general reaction in RetroPath, so the corresponding enzyme is less specific than the one given by RetSynth.
Overall, RetSynth was able to identify pathways for a larger set of compounds than RetroPath. Additionally, RetSynth's supplementary capabilities, including predicting theoretical yields for target compounds and incorporating chemical reactions into the database of external reactions, make it highly versatile for individual user needs. RetSynth can be easily run through its graphical user interface and can use multiple processors, enabling quick identification of synthetic pathways for large sets of target compounds. Currently, RetSynth can only generate pathways from known enzymatic transformations, whereas RetroPath's database of reaction SMARTS allows it to predict novel enzyme transformations. While this RetroPath feature undoubtedly has advantages for discovering production pathways, the goal of RetSynth is to provide the most feasible pathways for target production; using known reactions ultimately makes the pathways provided by RetSynth more likely to be functional. Furthermore, because RetSynth is a stand-alone software package, it is extremely easy to use and does not require downloading any outside software. Currently, RetroPath is used through KNIME, whose installation and usage can be challenging. All of these features enable RetSynth to perform more comprehensive and system-wide metabolic studies than is currently possible with other tools.
RetSynth graphical user interface mode
In addition to RetSynth's command-line interface, a simple graphical user interface (GUI) is available for both MacOS and Windows (Fig. 7). The GUI, which was built with the Python package Tkinter, provides the same options as the command-line interface, including designating a target compound and chassis organism, selecting the level of sub-optimal pathways to identify, predicting maximum theoretical yield using FBA, and generating a new custom database from the metabolic repositories PATRIC, MetaCyc, and/or KEGG. To save the user time, a basic default database is included with the application, allowing users to identify pathways in Escherichia coli. The application outputs all pathway information as figures and text/excel files to the user's desktop or a user-specified directory. The GUI enables RetSynth to be used by a broader user base than other currently available tools.
RetSynth Application. A graphical user interface for RetSynth
RetSynth is an open-source, stand-alone software tool for identifying optimal and sub-optimal pathways for biological, chemical, and hybrid production of target chemicals. Additionally, RetSynth is able to rank pathways based on maximum theoretical yield, calculated by flux balance analysis. Our tool exceeds the capabilities of other currently available software: it includes a graphical user interface that allows scientists without a programming background to use it, supports the addition of new and proprietary biological reactions as well as synthetic chemical databases, efficiently identifies optimal and sub-optimal pathways, and produces clear pathway images via our visualization module for quick interpretation of results.
Project name: RetSynth
Project home page: https://github.com/sandialabs/RetSynth
Operating system(s): Mac, Windows and Linux
Programming language: Python and Java
Other requirements: GNU Linear Programming Kit (v4.64), libSBML
License: BSD 2-clause license
All software and data are available at https://github.com/sandialabs/RetSynth.
EMP:
Enzyme and metabolic pathways
FBA:
Flux balance analysis
KEGG:
Kyoto Encyclopedia of Genes and Genomes
MILP:
Mixed integer linear program
MINE:
Metabolic in-silico network expansion
UM-BBD:
University of Minnesota Biocatalyst/Biodegradation database
Eng CH, Backman TWH, Bailey CB, Magnan C, García Martín H, Katz L, Baldi P, Keasling JD. ClusterCAD: a computational platform for type I modular polyketide synthase design. Nucleic Acids Res. 2017; 46(D1):509–15.
Orth JD, Thiele I, Palsson BO. What is flux balance analysis? Nat Biotechnol. 2010; 28(3):245–8.
Thiele I, Palsson BO. A protocol for generating a high-quality genome-scale metabolic reconstruction. Nat Protoc. 2010; 5(1):93–121.
Pharkya P, Burgard AP, Maranas CD. OptStrain: a computational framework for redesign of microbial production systems. Genome Res. 2004; 14(11):2367–76.
Blum T, Kohlbacher O. MetaRoute: fast search for relevant metabolic routes for interactive network navigation and visualization. Bioinformatics. 2008; 24(18):2108–9.
Campodonico MA, Andrews BA, Asenjo JA, Palsson BO, Feist AM. Generation of an atlas for commodity chemical production in Escherichia coli and a novel pathway prediction algorithm, GEM-path. Metab Eng. 2014; 25:140–58.
Prather KL, Martin CH. De novo biosynthetic pathways: rational design of microbial chemical factories. Curr Opin Biotechnol. 2008; 19(5):468–74.
Delepine B, Duigou T, Carbonell P, Faulon JL. RetroPath2.0: A retrosynthesis workflow for metabolic engineers. Metab Eng. 2018; 45:158–70.
Latendresse M, Krummenacker M, Karp PD. Optimal metabolic route search based on atom mappings. Bioinformatics. 2014; 30(14):2043–50.
Wattam AR, Davis JJ, Assaf R, Boisvert S, Brettin T, Bun C, Conrad N, Dietrich EM, Disz T, Gabbard JL, Gerdes S, Henry CS, Kenyon RW, Machi D, Mao C, Nordberg EK, Olsen GJ, Murphy-Olson DE, Olson R, Overbeek R, Parrello B, Pusch GD, Shukla M, Vonstein V, Warren A, Xia FF, Yoo H, Stevens RL. Improvements to PATRIC, the all-bacterial Bioinformatics Database and Analysis Resource Center. Nucleic Acids Res. 2017; 45(D1):535–42.
Mundy M, Mendes-Soares H, Chia N. Mackinac: a bridge between ModelSEED and COBRApy to generate and analyze genome-scale metabolic models. Bioinformatics. 2017; 33(15):2416–8.
Allen B, Drake M, Harris N, Sullivan T. Using KBase to Assemble and Annotate Prokaryotic Genomes. Curr Protoc Microbiol. 2017; 46:1.13.1–1.13.18.
Caspi R, Billington R, Fulcher CA, Keseler IM, Kothari A, Krummenacker M, Latendresse M, Midford PE, Ong Q, Ong WK, Paley S, Subhraveti P, Karp PD. The MetaCyc database of metabolic pathways and enzymes. Nucleic Acids Res. 2018; 46(D1):633–9.
Kanehisa M, Goto S. KEGG: Kyoto Encyclopedia of Genes and Genomes. Nucleic Acids Res. 2000; 28(1):27–30.
Jeffryes JG, Colastani RL, Elbadawi-Sidhu M, Kind T, Niehaus TD, Broadbelt LJ, Hanson AD, Fiehn O, Tyo KE, Henry CS. MINES: open access databases of computationally predicted enzyme promiscuity products for untargeted metabolomics. J Cheminform. 2015; 7:44.
Hadadi N, Hafner J, Shajkofci A, Zisaki A, Hatzimanikatis V. Atlas of biochemistry: A repository of all possible biochemical reactions for synthetic biology and metabolic engineering studies. ACS Synth Biol. 2016; 5(10):1155–66.
Roth DL. SPRESIweb 2.1, a selective chemical synthesis and reaction database. J Chem Inf Model. 2005; 45(5):1470–3.
Ebrahim A, Lerman JA, Palsson BO, Hyduke DR. COBRApy: Constraints-based reconstruction and analysis for Python. BMC Syst Biol. 2013; 7:74.
Bornstein BJ, Keating SM, Jouraku A, Hucka M. LibSBML: an API library for SBML. Bioinformatics. 2008; 24(6):880–1.
Jojima T, Inui M, Yukawa H. Production of isopropanol by metabolically engineered Escherichia coli. Appl Microbiol Biotechnol. 2008; 77(6):1219–24.
Atsumi S, Hanai T, Liao JC. Non-fermentative pathways for synthesis of branched-chain higher alcohols as biofuels. Nature. 2008; 451(7174):86–9.
McCormack D, McFadden D. A review of pterostilbene antioxidant activity and disease modification. Oxid Med Cell Longev. 2013; 2013:575482.
Itoh T, Nagata K, Matsuya Y, Miyazaki M, Ohsawa A. Reaction of nitric oxide with amines. J Org Chem. 1997; 62(11):3582–5.
Hadadi N, Hatzimanikatis V. Design of computational retrobiosynthesis tools for the design of de novo synthetic pathways. Curr Opin Chem Biol. 2015; 28:99–104.
Carbonell P, Planson AG, Fichera D, Faulon JL. A retrosynthetic biology approach to metabolic pathway design for therapeutic production. BMC Syst Biol. 2011; 5:122.
Carbonell P, Fichera D, Pandit SB, Faulon JL. Enumerating metabolic pathways for the production of heterologous target chemicals in chassis organisms. BMC Syst Biol. 2012; 6:10.
Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
This research was conducted as part of the Co-Optimization of Fuels and Engines (Co-Optima) project sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), Bioenergy Technologies and Vehicle Technologies Offices. Co-Optima is a collaborative project of multiple national laboratories initiated to simultaneously accelerate the introduction of affordable, scalable, and sustainable biofuels and high-efficiency, low-emission vehicle engines. The DOE was not involved in the design of study, collection, analysis, interpretation of data or the writing of this manuscript.
Sandia National Laboratories, East Avenue, Livermore, 94550, USA
Leanne S. Whitmore, Bernard Nguyen, Ali Pinar, Anthe George & Corey M. Hudson
LSW designed the study, wrote software, conducted experiments and wrote the manuscript. BN designed the GUI and wrote software, AP wrote the proofs, AG designed the study and helped write the manuscript, CMH wrote and designed software, and wrote the manuscript. All authors have read and approved this manuscript for publication.
Correspondence to Corey M. Hudson.
Supplementary Methods-Preventing Cyclic pathways from being identified as viable routes. Outlines how software prevents pathways with cycles from being identified. (DOCX 3 kb)
Whitmore, L.S., Nguyen, B., Pinar, A. et al. RetSynth: determining all optimal and sub-optimal synthetic pathways that facilitate synthesis of target compounds in chassis organisms. BMC Bioinformatics 20, 461 (2019). https://doi.org/10.1186/s12859-019-3025-9
Keywords: Mixed integer linear programming; Metabolic engineering
Inactivity periods and postural change speed can explain atypical postural change patterns of Caenorhabditis elegans mutants
Tsukasa Fukunaga ORCID: orcid.org/0000-0003-4442-60491,2,3 &
Wataru Iwasaki1,4,5
With rapid advances in genome sequencing and editing technologies, systematic and quantitative analysis of animal behavior is expected to be another key to facilitating data-driven behavioral genetics. The nematode Caenorhabditis elegans is a model organism in this field. Several video-tracking systems are available for automatically recording behavioral data for the nematode, but computational methods for analyzing these data are still under development.
In this study, we applied the Gaussian mixture model-based binning method to time-series postural data for 322 C. elegans strains. We revealed that the occurrence patterns of the postural states and the transition patterns among these states are correlated, as expected, and that this relationship must be taken into account to identify strains with atypical behaviors that differ from those of wild type. Based on this observation, we identified several strains that exhibit atypical transition patterns that cannot be fully explained by their occurrence patterns of postural states. Surprisingly, we found that two simple factors, overall acceleration of postural movement and elimination of inactivity periods, explained the behavioral characteristics of strains with very atypical transition patterns; therefore, computational analysis of animal behavior must be accompanied by evaluation of the effects of these simple factors. Finally, we found that the npr-1 and npr-3 mutants have similar behavioral patterns that were not predictable by sequence homology, showing that our data-driven approach can reveal the functions of genes that have not yet been characterized.
We propose that elimination of inactivity periods and overall acceleration of postural change speed can explain behavioral phenotypes of strains with very atypical postural transition patterns. Our methods and results constitute guidelines for effectively finding strains that show "truly" interesting behaviors and systematically uncovering novel gene functions by bioimage-informatic approaches.
While recent advances in DNA sequencing technology have greatly facilitated genomic analysis, quantitative and reproducible analysis of animal behavior is expected to further promote data-driven behavioral genetics [1–4]. Caenorhabditis elegans is a model organism for which various research resources are available, including a high-quality genome sequence, highly curated and integrated databases, and a complete neuronal wiring diagram [5–7]. Several systems that automatically track and video-record individual worms are already available for ethological studies [8–13]. Some of these systems record not only movement trajectories but also time-series postural images of individual worms. Although these trajectories and postures are not independent from each other [14], perturbations at the molecular and cellular levels influence the latter more directly than the former.
Computational methods for analyzing C. elegans time-series postural data are still under development. A classic approach is to search given datasets for predefined postural patterns or behavioral parameters; however, such an approach suffers from a lack of objectivity or the ability to identify novel characteristics [15]. A more systematic approach is to use unsupervised machine learning to find frequently appearing stretches, or "behavioral motifs", de novo within time-series postures. Using this approach, Brown et al. analyzed 7708 movies of 307 mutant strains and detected 2223 C. elegans behavioral motifs [16]. A feature vector for each individual worm was then calculated based on the detected behavioral motifs, and clustering of the strains using these feature vectors successfully grouped mutant strains in which the responsible genes have related biological functions [16]. Szigeti et al. developed another method for finding behavioral motifs based on spline mixture models and identified motifs corresponding to turning or passive behaviors [17].
An alternative systematic approach for analyzing time-series postural data is to quantify transition frequencies between "postural states". In this approach, worm postures are clustered based on similarities between postures and the obtained clusters are defined as postural states. Whereas the behavioral motif approach detects atypical behaviors as continuous stretches, the postural state approach detects those that rather reflect worms' prompt reaction, which might reflect their decision-making criteria, for instance. As a pioneering work, Schwarz et al. used K-means clustering to bin worm postures, and observed condition-specific state transition patterns [18]. However, the factors underlying atypical worm postural movement patterns were not sufficiently dissected, particularly because postures and transition patterns between them should not be independent from each other.
Here, we applied the Gaussian Mixture Model (GMM)-based binning method [19] to time-series postural data for 322 C. elegans strains to quantify their transition frequencies between postural states, and revealed that the occurrence patterns of the postural states and the transition patterns among these states are correlated, as expected. In addition, we discovered several strains that exhibit atypical transition patterns that cannot be fully explained by their occurrence patterns of postural states. We also propose that elimination of inactivity periods, in which the postural change speed is nearly zero, and overall acceleration of postural change speed can explain the behavioral phenotypes of strains with very atypical transition patterns.
Dataset preparation
The original dataset was obtained from the C. elegans behavioral database [20] and consisted of data from 9975 hermaphroditic individual worms of 338 strains freely crawling on agar plate surfaces with food. The 338 strains comprised 21 wild-type (including N2) and 317 N2-derived mutant strains. To concisely represent their postures, we adopted four-dimensional eigenworm vectors [20, 21] that were pre-calculated in the original dataset. In brief, an eigenworm vector was calculated from each image frame of a video-recorded individual worm as follows. First, the midline of the worm body was obtained by image processing, and 48 angles were measured at regular intervals along the midline (Fig. 1). Second, the 48 angles were normalized to obtain a mean value of zero to ignore the general orientation of the body. Third, the normalized 48 values were projected onto four dimensions that were defined by four eigenvectors explaining 92% of the overall variability of worm postures [16, 21]. Such an eigenvector representation of animal shapes is widely accepted for analyzing animal behavior [17, 22].
Measurement and eigenworm representation of worm postures. The left panel shows a picture of a wild-type N2 worm; its contour and midline are highlighted. This picture was taken from the C. elegans behavioral database [20]. In total, 48 angles were measured at regular intervals along the midline and projected onto the four-dimensional eigenworm space after normalization
To ensure that the data were consistent, we excluded data from any individual worm whose video length was not between 890 and 910 seconds or whose eigenworm vectors were missing in more than 40% of the entire frames (typically because of video-tracking failure). Missing eigenworm vectors in the remaining dataset were linearly interpolated using values from the two immediately flanking frames (see Additional file 1: Figure S1A for the proportions of such "gap" frames). Because various frame rates were used in the original dataset (Additional file 1: Figure S1B), we downsampled all data to five frames per second. Finally, by excluding data for any strain for which less than five different individuals were available, we obtained time-series eigenworm vector data for 8769 individual worms from 322 strains (20 wild-type and 302 mutant strains).
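The filtering and resampling steps can be sketched with NumPy and pandas as below; the exact interpolation and downsampling routines of the pipeline are described only at this level of detail, so the helper function and its defaults are illustrative.

```python
import numpy as np
import pandas as pd

def preprocess(eigenworms, fps, target_fps=5.0):
    """Filter and resample one individual's eigenworm series (sketch).

    eigenworms : (frames, 4) array, with NaN rows where tracking failed
    fps        : original frame rate of the recording
    Returns None if the individual should be excluded (>40% gap frames).
    """
    gap_fraction = np.isnan(eigenworms).any(axis=1).mean()
    if gap_fraction > 0.4:
        return None
    # Linearly interpolate missing frames from their flanking values
    filled = pd.DataFrame(eigenworms).interpolate(limit_direction="both")
    # Downsample to the target frame rate by keeping every step-th frame
    step = max(1, int(round(fps / target_fps)))
    return filled.to_numpy()[::step]
```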
Probabilistic binning of C. elegans postures into postural states
To represent any eigenworm vector by discrete postural states in a probabilistic manner, we used a GMM-based binning method [19]. This method represents each four-dimensional eigenworm vector by a probabilistic mixture of multiple Gaussian distributions. First, because the total number of image frames in the entire dataset was too large, we randomly sampled 1% of the frames (i.e., 385,790 frames) for parameter estimation. Second, we plotted the eigenworm vectors of all sampled frames and fit the four-dimensional GMM to the pooled distribution consisting of 385,790 data points. The GMM parameters were estimated by the factorized asymptotic Bayes (FAB) algorithm [23]. The FAB algorithm is similar to the conventional expectation-maximization algorithm for fitting GMMs [19] but allows automatic estimation of the number of mixture components based on the factorized information criterion (FIC) [23]. Unlike conventional information criteria such as the Akaike information criterion and the Bayesian information criterion, FIC can be applied to the inference of mixture models with theoretical justification. The FAB algorithm eliminates components whose mixture ratios are smaller than a given threshold \(\varepsilon\) after the E-step, which is modified from that of the conventional expectation-maximization algorithm. The source code of the FAB-GMM algorithm is available at https://github.com/fukunagatsu/FAB-GMM. We set \(\varepsilon = 0.01, 0.005\), or \(0.001\), using initial parameter sets estimated by the K-means++ algorithm [24] with \(K = 100, 200\), or \(1000\), respectively (these \(K\) values are the maximum numbers of states for each \(\varepsilon\)). Third, after convergence of the FAB algorithm, each Gaussian distribution component was regarded as a postural state. We obtained 44, 95, and 459 postural states when \(\varepsilon\) was 0.01, 0.005, and 0.001, respectively. Finally, for each frame (including the remaining 99%), the responsibility of each Gaussian distribution component for explaining its eigenworm vector was calculated using the estimated parameters. As a result, a posture of any individual \(i\) at any frame \(f\) was represented by a \(K\)-dimensional nonnegative vector \(\mathbf{r}_{i,f} = (r_{i,f,1}, r_{i,f,2}, \ldots, r_{i,f,K})^{\mathrm{T}}\), where \(K\) is the number of postural states, each element represents the responsibility of the corresponding state, and \(\sum_{k=1}^{K} r_{i,f,k} = 1\).
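For readers who want to reproduce the responsibility vectors without the FAB machinery, scikit-learn's plain GaussianMixture with the component count that FAB selected (K = 95 for ε = 0.005) is a reasonable stand-in; unlike FAB, it does not prune components automatically. The data below are random placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
pooled = rng.standard_normal((20000, 4))   # placeholder for the ~385,790
                                           # sampled eigenworm vectors

gmm = GaussianMixture(n_components=95, covariance_type="full",
                      random_state=0).fit(pooled)

# Responsibilities r_{i,f,k} for one individual's frames: each row is a
# probability vector over the K postural states (rows sum to 1)
frames = rng.standard_normal((4500, 4))    # placeholder: ~900 s at 5 fps
R = gmm.predict_proba(frames)
assert np.allclose(R.sum(axis=1), 1.0)
```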
For comparison, we also adopted K-means clustering, which bins any eigenworm vector to a single state deterministically rather than probabilistically. First, the same set of 1% of frames was selected and their eigenworm vectors were plotted in the four-dimensional space in the same manner. Second, K-means clustering was applied to the pooled distribution. The model parameters were estimated by the Lloyd algorithm [25] with initial parameters estimated by the K-means++ algorithm [24], where \(K\) was set to 90 (a parameter used in a previous study [18]) or to 44, 95, or 459 (the numbers of states estimated by the GMM-based method earlier). Third, after convergence of the Lloyd algorithm, the centroid of each cluster was regarded as a postural state. Finally, each frame (including the remaining 99%) was binned to the closest postural state according to its eigenworm vector. Note that any worm posture was again represented by a vector \(\mathbf{r}_{i,f}\) as in the GMM-based method, but here it was an integer vector (i.e., only one of its elements was 1 and the others were 0).
Evaluation of binning methods
In this work, we assumed that the postures of individual worms belonging to the same strain should be statistically more similar than the postures of worms belonging to different strains. Thus, if worm postures are represented more properly by the postural states, the relative state occurrence frequencies of an individual \(i\), \(\mathbf{r}_{i} = \frac{1}{F}\sum_{f=1}^{F} \mathbf{r}_{i,f}\), where \(F\) is the number of frames, are expected to be more similar between individuals of the same strain than between those of different strains. The GMM-based method and K-means clustering, with three and four different parameters, respectively, were compared based on this rationale.
Let \(S_i\) be a set of individuals that belong to the same strain as \(i\), excluding \(i\) itself, and \(\overline{S}_i\) be a set of randomly selected individuals such that \(S_i \cap \overline{S}_i = \emptyset\), \(i \notin \overline{S}_i\), and \(|S_i| = |\overline{S}_i|\), where \(|S|\) denotes the number of individuals belonging to \(S\). For every individual \(i\), we calculated the mean divergences of the relative state occurrence frequencies against \(S_i\) and \(\overline{S}_i\) as follows:
$$\Delta_{\text{intra}} \mathbf{r}_{i} = \frac{1}{|S_{i}|} \sum_{j \in S_{i}} d(\mathbf{r}_{i}, \mathbf{r}_{j}) $$
$$\Delta_{\text{inter}} \mathbf{r}_{i} = \frac{1}{|\overline{S}_{i}|} \sum_{j \in \overline{S}_{i}} d(\mathbf{r}_{i}, \mathbf{r}_{j}) $$
where \(d\) is the Jensen-Shannon divergence, which is a measure of divergence between two probability distributions [26]. Note that \(\mathbf{r}_i\) is a normalized vector and can be regarded as a probability distribution. Then, for each strain, we tested the hypothesis that the \(\Delta_{\text{intra}} \mathbf{r}_i\) of all individuals belonging to that strain are statistically smaller than their \(\Delta_{\text{inter}} \mathbf{r}_i\) using a one-sided Wilcoxon-Mann-Whitney test. The test was repeated for all strains, and the Benjamini-Hochberg approach was used to control the false-discovery rate of multiple testing (\(q<0.05\)) [27].
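With SciPy, the divergences and the per-strain test can be sketched as follows. Note that scipy.spatial.distance.jensenshannon returns the square root of the divergence, hence the squaring; the Benjamini-Hochberg step (e.g., statsmodels' multipletests) is omitted, and the function names are our own.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import mannwhitneyu

def mean_divergence(r_i, others):
    """Mean Jensen-Shannon divergence of one individual's occurrence vector
    r_i against a set of other individuals (Delta_intra or Delta_inter)."""
    return float(np.mean([jensenshannon(r_i, r_j) ** 2 for r_j in others]))

def strain_pvalue(delta_intra, delta_inter):
    """One-sided Wilcoxon-Mann-Whitney test that intra-strain divergences
    are smaller than inter-strain divergences."""
    return mannwhitneyu(delta_intra, delta_inter, alternative="less").pvalue
```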
Discovery of strains showing wild-type N2-like postures but atypical transition patterns
For each individual \(i\), the relative transition frequencies between postural states were represented by a \(K \times K\) matrix \(T_i\) whose element representing the transition from state \(k\) to state \(l\) is
$$T_{i,k,l} = \frac{1}{F-1}\sum_{f=1}^{F-1} r_{i,f,k}\, r_{i,f+1,l} $$
For each strain \(S\), the relative state occurrence frequency \(\mathbf{r}_i\) and relative state transition frequency \(T_i\) were averaged over its individuals to obtain \(\mathbf{r}_S\) and \(T_S\), respectively. With the exception of the wild-type N2 strain, we calculated the divergences of each strain from wild-type N2 as follows:
$$\Delta_{\text{N2}} \mathbf{r}_{S} = d(\mathbf{r}_{S}, \mathbf{r}_{\text{N2}}) $$
$$\Delta_{\text{N2}} T_{S} = d(T_{S}, T_{\text{N2}}) $$
where \(d\) is the Jensen-Shannon divergence. Note that \(T_i\) can also be regarded as a probability distribution. For example, a large \(\Delta_{\text{N2}} T_S\) indicates that strain \(S\) has a state transition pattern that is very different from that of wild-type N2.
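Both quantities reduce to a few NumPy operations on an individual's responsibility matrix; this sketch again squares SciPy's Jensen-Shannon distance to recover the divergence, and the names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def transition_matrix(R):
    """T_i from a responsibility matrix R of shape (F, K):
    T[k, l] = (1/(F-1)) * sum_f r_{f,k} * r_{f+1,l}."""
    F = R.shape[0]
    return R[:-1].T @ R[1:] / (F - 1)

def divergence_from_n2(T_S, T_N2):
    """Delta_N2 T_S: Jensen-Shannon divergence between flattened transition
    matrices (all elements of each matrix sum to 1, so it is a distribution)."""
    return jensenshannon(T_S.ravel(), T_N2.ravel()) ** 2
```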
Next, we conducted linear regression to investigate the relationship between the dependent variable \(\Delta_{\text{N2}} T_S\) and the explanatory variable \(\Delta_{\text{N2}} \mathbf{r}_S\). We then detected strains showing atypical transition patterns using standardized residuals (Z-values) from the estimated linear model. To control the false-discovery rate of multiple testing, we used the Benjamini-Hochberg approach (\(q<0.05\)).
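A minimal version of this residual screen, using ordinary least squares and ignoring the leverage correction of proper studentized residuals, could look like this (names are hypothetical):

```python
import numpy as np

def z_values(delta_r, delta_T):
    """Fit Delta_N2 T_S ~ Delta_N2 r_S by least squares and return
    standardized residuals (a simple approximation that ignores leverage)."""
    X = np.column_stack([np.ones_like(delta_r), delta_r])
    coef, *_ = np.linalg.lstsq(X, delta_T, rcond=None)
    resid = delta_T - X @ coef
    return resid / resid.std(ddof=2)
```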
Analysis of factors underlying atypical state transition patterns
To reveal the factors underlying the atypical transition patterns, we created artificial N2 strains in silico by modifying the eigenworm data for the wild-type N2 strain and determined whether these artificial N2 strains reproduced the atypical state transition frequencies of the strains that showed atypical transition patterns. Specifically, we focused on the effects of eliminating inactivity periods and accelerating the average postural change speed. To remove inactivity periods from wild-type N2, we excluded any frame for which the Euclidean distance between the eigenworm vectors of that frame and the previous frame was smaller than a threshold \(\alpha\). To change the postural change speed as a whole, we removed frames at regular intervals to simulate the movement of \(\beta\)-times-accelerated wild-type N2. When \(\beta=1.5\), for every three consecutive frames, the eigenworm vectors of the second and third frames were replaced with their averaged vector. When \(\beta=2\), every second frame was removed. Because all strains that showed atypical transition patterns were faster than wild-type N2 on average, only acceleration (not deceleration) was considered here. For each strain \(S\), the parameters \(\alpha\) and \(\beta\) were selected to minimize
$$D_{\text{eigenworm speed}} = \int_{x}|F_{\text{aN2}}(x)-F_{S}(x)| dx $$
where \(F_{\text{aN2}}\) and \(F_S\) are the cumulative distributions of the instantaneous postural change speed (the Euclidean distance between eigenworm vectors of adjacent frames) of the artificial N2 strain and strain \(S\), respectively.
Then, we calculated \(\mathbf{r}_{\text{aN2}}\) and \(T_{\text{aN2}}\), which are the relative state occurrence frequency and relative state transition frequency, respectively, of the artificial N2 strain. To determine whether the artificial N2 strain reproduced the behavioral characteristics of strain \(S\), we calculated \(\Delta_{\text{aN2}} \mathbf{r}_S\) and \(\Delta_{\text{aN2}} T_S\). In addition, we calculated the standardized residual of these values based on the previously estimated linear model; we call this standardized residual \(Z_a\).
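The two manipulations and the distribution-matching criterion can be sketched as follows; the uniform-grid approximation of the integral and all names are our own illustrative choices.

```python
import numpy as np

def artificial_n2(eigen, alpha, beta):
    """Modify a wild-type N2 eigenworm series (frames x 4 array):
    drop inactivity frames (step < alpha), then accelerate by beta."""
    step = np.linalg.norm(np.diff(eigen, axis=0), axis=1)
    eigen = eigen[np.concatenate([[True], step >= alpha])]
    if beta == 2:                      # 2x: remove every second frame
        eigen = eigen[::2]
    elif beta == 1.5:                  # 1.5x: merge frames 2 and 3 of each triple
        n = len(eigen) // 3 * 3
        trip = eigen[:n].reshape(-1, 3, eigen.shape[1])
        out = np.empty((2 * trip.shape[0], eigen.shape[1]))
        out[0::2] = trip[:, 0]
        out[1::2] = (trip[:, 1] + trip[:, 2]) / 2
        eigen = out
    return eigen

def speed_distance(speeds_a, speeds_b, grid):
    """Approximate D_eigenworm_speed = integral |F_aN2(x) - F_S(x)| dx
    from empirical cumulative distributions on a uniform speed grid."""
    fa = np.searchsorted(np.sort(speeds_a), grid) / len(speeds_a)
    fb = np.searchsorted(np.sort(speeds_b), grid) / len(speeds_b)
    return np.sum(np.abs(fa - fb)) * (grid[1] - grid[0])
```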
Evaluation of binning methods and parameters
The postures of each of the 8769 individual worms belonging to 322 strains were represented by time-series four-dimensional eigenworm vectors. Every eigenworm vector was binned to postural states by the GMM-based method and by K-means clustering, with three and four parameters, respectively. We then determined whether the relative state occurrence frequencies were more similar between worms of the same strain than between worms of different strains. The number of strains for which the null hypothesis of no difference was rejected (Benjamini-Hochberg's \(q<0.05\)) is shown in Table 1. Overall, the GMM-based method detected greater numbers of such strains than K-means clustering, indicating that less postural information was lost during binning in the former than in the latter. Although the parameter selection did not have a strong impact on the results, \(K=95\) and \(\varepsilon=0.005\) were the best parameters for the K-means clustering and GMM-based methods, respectively. Among the 213 strains that exhibited significance by K-means clustering with \(K=95\), only ten were missed by the GMM-based method with \(\varepsilon=0.005\) (Additional file 1: Figure S2). Therefore, the GMM-based method with \(\varepsilon=0.005\) was adopted for postural state binning in the following analyses.
Table 1 Evaluation of binning methods and parameters
Strong relationships between postural state occurrences and transitions
After the binning of eigenworm vectors into the postural states, the relative state occurrence frequency r_S and relative state transition frequency T_S of each strain S were calculated. Figure 2 shows their divergences from those of the wild-type N2 strain, where large Δ_N2 r_S and Δ_N2 T_S indicate that strain S displays postures and transition patterns that are very different from those of wild-type N2. We clearly observed a general trend of a positive linear correlation between the two divergence values (the adjusted R-squared value was 0.96). This likely reflects the fact that the use of different postures naturally leads to the use of different transition patterns. Note that Δ_N2 T_S − Δ_N2 r_S ≥ 0 (the proof is provided in Additional file 1).
Divergences of postural state occurrence and transition frequencies of 321 non-wild-type N2 strains. The x- and y-axes represent Δ_N2 r_S and Δ_N2 T_S, respectively
For example, three mutant strains, unc-103, unc-1(e1598), and unc-77(e625), exhibited the largest divergences of both values from wild-type N2 (Fig. 2). The unc-103 gene encodes an ether-a-go-go-related K+ channel homolog, and the strain in which this gene has a gain-of-function mutation has been reported to show extremely lethargic behavior [28]. The unc-1(e1598) strain is a mutant of a stomatin-like-protein gene and has also been reported to show very slow behavior [29]. The large deviations of the postural state occurrence and transition frequencies of these two mutant strains likely reflect their exceptionally inactive phenotypes. The unc-77(e625) strain features a gain-of-function mutation of a subunit gene of a voltage-insensitive cation leak channel and exhibits coiled postures [29]. To reveal which postures are specific to this strain, we calculated the fold change between r_unc-77(e625),k and r_N2,k for each postural state k and detected over-represented and under-represented postures in unc-77(e625) (Fig. 3; a sketch of this computation follows the figure caption below). These results showed that the unc-77(e625) strain tends to take more "C-shaped" but fewer "S-shaped" postures compared to the wild-type N2 strain.
Top five over-represented and under-represented postures in unc-77(e625). Each posture was reconstructed from the mean value of the corresponding postural state. Red and blue represent over-represented and under-represented postures in unc-77(e625), respectively
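A sketch of the fold-change computation, assuming the relative frequencies are stored as NumPy arrays; the log2 scale and the pseudo-count are our assumptions, not stated in the paper.

```python
import numpy as np

def posture_fold_changes(r_strain, r_n2, eps=1e-9):
    """log2 fold change of each postural state's relative frequency in a
    strain versus wild-type N2; positive = over-represented."""
    return np.log2((r_strain + eps) / (r_n2 + eps))

# e.g., with hypothetical arrays r_unc77 and r_n2 over all states k:
# fc = posture_fold_changes(r_unc77, r_n2)
# over, under = np.argsort(fc)[-5:][::-1], np.argsort(fc)[:5]
```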
Identification of strains exhibiting atypical transition patterns
As shown in Fig. 2, although most strains strongly followed the positive linear correlation trend, several strains did not. We identified seven strains exhibiting atypical transition patterns that deviated significantly from expectation (q<0.05; left side of Fig. 2). Only these seven strains showed Z-values larger than 3.0 (Table 2, Fig. 4).
Histogram of Z-values of 321 non-wild-type N2 strains
Table 2 Strains exhibiting wild-type N2-like postures but atypical transition patterns
The two strains with the largest Z-values, npr-1 and npr-3, are mutants of neuropeptide receptor (npr) genes. As NPR-1 is known to be expressed in ventral nerve cord motor neurons, it is reasonable that these neuropeptide receptor genes have roles in controlling postural movement [30, 31]. However, notably, other npr mutant strains did not show large Z-values, even though npr-1 and npr-3 are neither the closest paralogs in the npr gene family nor highly identical in sequence (amino-acid sequence identity = 25.5%) (Additional file 1: Figure S3). Because r_S and T_S of these two strains were the most similar to each other among all strains (i.e., the differences in r_S and T_S between the two strains were the smallest among every strain pair that contains either the npr-1 or npr-3 strain), the npr-1 and npr-3 genes were suggested to have closely related functions at the behavioral level regardless of their different evolutionary origins at the sequence level.
The egl-30 and eat-16 genes encode components of heterotrimeric G-protein signaling pathways. Loss of EGL-30 function depresses the behavioral activity of C. elegans, whereas EAT-16 negatively regulates EGL-30 [32, 33]. Because the egl-30 and eat-16 mutant strains in this study have gain- (ep271) and loss-of-function alleles (sa609), respectively [33, 34], their similar, active behavioral phenotypes are consistent with previous reports. Indeed, r S and T S from the egl-30 and eat-16 strains were most similar to each other.
lon-2 encodes a glypican-family protein of heparan sulfate proteoglycans, and its mutant was reported to have a longer body than that of wild-type N2 [35]. A previous study reported that lon-2 was one of the worst-fit mutants in the eigenworm representation [16]. Although it is not clear why Δ_N2 r_S of lon-2 is not very large, the poor fitting of the eigenworm representation may have resulted in atypical transition patterns of this strain.
ED3017 and JU258 are non-N2 wild-type strains. C. elegans population genomics studies revealed that N2 strains acquired gain-of-function mutations in npr-1 during laboratory domestication [36, 37], and ED3017 and JU258 have a lower activity allele in npr-1. The large Z-values of these two strains may be caused by this low npr-1 activity.
Figure 5a presents the distributions of the instantaneous postural change speed of wild-type N2 and the six strains that exhibited atypical transition patterns. Note that lon-2 was excluded here because the earlier eigenworm-representation stage could be problematic for this strain. Overall, all six strains exhibited faster postural change speeds than those of wild-type N2. Only wild-type N2 had a mode of the postural change speed at approximately 0.1 (Fig. 5b), at which four of the other strains also had small "shoulders" (Fig. 5a). Such a small speed value indicates that the individuals are in inactivity periods, which may correspond to quiescent worm behavior [38]. We also observed several strains that retain this mode of postural change speed at approximately 0.1 but have distribution shapes different from that of wild-type N2 (e.g., unc-43 and C52B9.11; Additional file 1: Figure S4).
Distributions of instantaneous speed of postural change. a The distributions of wild-type N2, npr-1, npr-3, egl-30, eat-16, ED3017, and JU258. The y-axis represents density. b A histogram that magnifies around the mode of the wild-type N2 distribution. c The distributions of npr-1 and the artificial N2 strain whose postural change speed resembles that of npr-1
On the basis of these observations, we investigated whether artificial elimination of the inactivity periods and overall acceleration of the postural change speed of wild-type N2 could reproduce the state transition patterns of these six strains without significantly altering the state occurrence frequencies. In time-series sequence representations of postural states, inactivity periods are represented by stretches of identical or similar state(s). Because inactivity periods likely do not occur only at specific postural states (the Jensen-Shannon divergence of relative state occurrence frequencies between frames whose postural change speed was less than and greater than 0.3 was 0.0161 for wild-type N2), the elimination of inactivity periods would change the state transition frequencies while modestly preserving the state occurrence frequencies. For the acceleration of postural change speed, two-fold acceleration of a state sequence AABBCCAA... into ABCA... as a whole will likewise change the state transition frequencies while preserving the state occurrence frequencies.
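The AABBCCAA... example can be checked directly; the following toy computation (ours, not the authors') shows that two-fold frame removal leaves the state occurrence frequencies untouched while removing most self-transitions.

```python
from collections import Counter

def rel_freqs(counter):
    total = sum(counter.values())
    return {k: round(c / total, 3) for k, c in counter.items()}

seq = "AABBCCAA" * 100                   # state sequence with inactivity-like runs
fast = seq[::2]                          # two-fold acceleration by frame removal

print(rel_freqs(Counter(seq)), rel_freqs(Counter(fast)))   # occurrences unchanged
print(rel_freqs(Counter(zip(seq, seq[1:]))))                # many self-transitions
print(rel_freqs(Counter(zip(fast, fast[1:]))))              # self-transitions mostly gone
```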
Artificial N2 strain data were created in silico by removing inactivity periods with threshold α and accelerating β-fold as a whole from the time-series eigenworm data for wild-type N2. For each of the six strains, we chose the best parameters from α=0.3,0.4,…,1.0 and β=1.5,2.0 by examining how well the postural change speed distributions of the artificial N2 strains fit those of each of the six strains (Fig. 5c, Additional file 1: Figure S5, and Additional file 1: Table S1). Finally, we investigated whether the artificial N2 strains reproduced not only the distributions of the instantaneous postural change speed but also the postural state occurrence and transition frequencies of the six strains. All Z_a values became substantially smaller than the original Z-values and decreased to a level that was not significantly different from expectation (Table 3). In other words, the atypical state transition frequencies of these six strains can be explained almost entirely by the lack of inactivity periods and the overall acceleration of the postural change speed.
Table 3 Reproduction of atypical state transition frequencies by artificially modified N2
In this study, we used the GMM-based method for probabilistically binning worm postures into a finite number of postural states and revealed an apparent relationship between the postures and transition patterns of C. elegans strains. The superior binning performance of the GMM-based method reflects the fact that the time-series postures of any individual are distributed along a single trajectory in the four-dimensional eigenworm space, because the postures of consecutive frames should be similar to each other. Thus, a worm must adopt intermediate postures while changing its posture from one postural state to another. Deterministic binning of such intermediate postures inevitably loses information or introduces noise into the representation of worm behavior. The case of the lon-2 strain in this study also indicates the importance of preserving information during the computational analysis of animal behavior, although the difficulty in this case occurred during the eigenworm representation. The strong relationship between the postural state occurrence and transition frequencies offers two important suggestions for worm postural movement analysis: a significant part of postural movement variation can be evaluated solely by examining postures without temporal information, and the effects of using different postures must be taken into account in postural movement analysis.
Several strains that exhibited atypical transition patterns among postural states were identified. Surprisingly, for the six strains that showed the most atypical postural movement, merely eliminating the inactivity periods and accelerating the postural change speed as a whole nearly reproduced their atypical transition patterns. While quantification of the transition frequencies between postural states is a powerful approach for computationally analyzing animal behavior, our results demonstrate that even very atypical state transition patterns can result from simple factors. Analyses of inactivity periods and postural change speeds both require consideration of time duration; the compression of state time duration abolishes their effects in the analysis [18]. To effectively detect strains that show truly interesting behavior, e.g., strains whose neural circuits encode special decision-making criteria, computational analyses of animal behavior must be accompanied by evaluation of the effects of more "trivial" factors such as overall change in speed (of course, we note that these trivial factors themselves would also provide many insights into worm behavior). The C. elegans behavioral database also contains various behavioral data such as dorsal/ventral orientations, velocities, and trajectories during worm movement. Using these additional datasets, we may dissect factors that underlie interesting phenotypes more deeply, for example, effects of dorsal/ventral biases in postural change patterns and/or relationships between postural change patterns and movement trajectories.
Our analysis also revealed that the npr-1 and npr-3 genes have closely related functions that were unpredictable by sequence homology, the most basic principle in this genomic era. Many studies have conducted functional analyses of npr-1 [31, 39–41], but few studies have focused on npr-3 [42]. Therefore, we envision that existing knowledge about npr-1 will substantially accelerate future functional analyses of npr-3 based on the present result.
In this study, divergence of the state occurrence and transition frequencies from the wild-type N2 strain was examined. Although this would make sense for the analysis of N2-derived mutant strains, comparison among different wild-type strains can also be done by selecting another strain as a reference. We expect that the linear correlation trend between the state occurrence and transition frequencies will be recovered regardless of the reference strain choice; however, for example, it would also be of interest to select strains that have specific evolutionary context or strains that show characteristic behavior (such as ED3017 or JU258).
Although more than a decade has passed since the genomes of many model organisms were sequenced, significant numbers of genes remain functionally uncharacterized. Systematically deciphering their functions beyond straightforward sequence homology analysis is one of the most important goals in computational biology today, where an advantage of bioimage informatics for functional analysis is the ability of this method to directly evaluate phenotypes [43–45]. Finally, it should be noted that genome-editing technologies are enabling rapid construction of genetically engineered animal strains [46]. Bioimage-informatic analysis of their behaviors will, for example, contribute to the identification of novel genes responsible for neurological disease. We emphasize that further development of computational methods and accumulation of technical knowledge will be critical to promote this emerging field.
FAB: Factorized asymptotic Bayes
FIC: Factorized information criteria
GMM: Gaussian mixture model
Dankert H, Wang L, Hoopfer ED, Anderson DJ, Perona P. Automated monitoring and analysis of social behavior in Drosophila. Nat Methods. 2009; 6:297–303.
Anderson DJ, Perona P. Toward a science of computational ethology. Neuron. 2014; 84:18–31.
Pérez-Escudero A, Vicente-Page J, Hinz RC, Arganda S, de Polavieja GG. idTracker: tracking individuals in a group by automatic identification of unmarked animals. Nat Methods. 2014; 11:743–8.
Fukunaga T, Kubota S, Oda S, Iwasaki W. GroupTracker: video tracking system for multiple animals under severe occlusion. Comput Biol Chem. 2015; 57:39–45.
The C. elegans Sequencing Consortium. Genome sequence of the nematode C. elegans: a platform for investigating biology. Science. 1998; 282:2012–8.
Harris TW, Baran J, Bieri T, Cabunoc A, Chan J, Chen WJ, Davis P, Done J, Grove C, Howe K, et al. WormBase 2014: new views of curated biology. Nucleic Acids Res. 2014; 42:789–93.
White J, Southgate E, Thomson J, Brenner S. The structure of the nervous system of the nematode Caenorhabditis elegans: the mind of a worm. Philos Trans R Soc B: Biol Sci. 1986; 314:1–340.
Baek JH, Cosman P, Feng Z, Silver J, Schafer WR. Using machine vision to analyze and classify Caenorhabditis elegans behavioral phenotypes quantitatively. J Neurosci Methods. 2002; 118(1):9–21.
Ramot D, Johnson BE, Berry Jr TL, Carnell L, Goodman MB. The Parallel Worm Tracker: a platform for measuring average speed and drug-induced paralysis in nematodes. PLOS ONE. 2008; 3(5):2208.
Swierczek NA, Giles AC, Rankin CH, Kerr RA. High-throughput behavioral analysis in C. elegans. Nat Methods. 2011; 8(7):592–8.
Cronin CJ, Mendel JE, Mukhtar S, Kim YM, Stirbl RC, Bruck J, Sternberg PW. An automated system for measuring parameters of nematode sinusoidal movement. BMC Genet. 2005; 6(1):5.
Nagy S, Wright C, Tramm N, Labello N, Burov S, Biron D. A longitudinal study of Caenorhabditis elegans larvae reveals a novel locomotion switch, regulated by Gαs signaling. eLife. 2013; 2:00782.
Nagy S, Goessling M, Amit Y, Biron D. A generative statistical algorithm for automatic detection of complex postures. PLOS Comput Biol. 2015; 11(10):1004517.
Stephens GJ, Johnson-Kerner B, Bialek W, Ryu WS. From modes to movement in the behavior of Caenorhabditis elegans. PLOS ONE. 2010; 5(11):13914.
Dell AI, Bender JA, Branson K, Couzin ID, de Polavieja GG, Noldus LP, Pérez-Escudero A, Perona P, Straw AD, Wikelski M, et al. Automated image-based tracking and its application in ecology. Trends Ecol Evol. 2014; 29:417–28.
Brown AE, Yemini EI, Grundy LJ, Jucikas T, Schafer WR. A dictionary of behavioral motifs reveals clusters of genes affecting Caenorhabditis elegans locomotion. Proc Natl Acad Sci. 2013; 110(2):791–6.
Szigeti B, Deogade A, Webb B. Searching for motifs in the behaviour of larval Drosophila melanogaster and Caenorhabditis elegans reveals continuity between behavioural states. J R Soc Interface. 2015; 12(113):20150899.
Schwarz RF, Branicky R, Grundy LJ, Schafer WR, Brown AE. Changes in postural syntax characterize sensory modulation and natural variation of C. elegans locomotion. PLOS Comput Biol. 2015; 11(8):1004322.
Bishop CM. Pattern Recognition and Machine Learning, Vol. 1. New York: Springer; 2006.
Yemini E, Jucikas T, Grundy LJ, Brown AE, Schafer WR. A database of Caenorhabditis elegans behavioral phenotypes. Nat Methods. 2013; 10(9):877–9.
Stephens GJ, Johnson-Kerner B, Bialek W, Ryu WS. Dimensionality and dynamics in the behavior of C. elegans. PLOS Comput Biol. 2008; 4(4):1000028.
Girdhar K, Gruebele M, Chemla YR. The behavioral space of zebrafish locomotion and its neural network analog. PLOS ONE. 2015; 10(7):0128668.
Fujimaki R, Morinaga S. Factorized asymptotic Bayesian inference for mixture modeling. International Conference on Artificial Intelligence and Statistics. 2012:400–408.
Arthur D, Vassilvitskii S. K-means++: the advantages of careful seeding. ACM-SIAM Symposium on Discrete Algorithms. 2007:1027–1035.
Lloyd S. Least squares quantization in PCM. IEEE Trans Inf Theory. 1982; 28(2):129–37.
Lin J. Divergence measures based on the Shannon entropy. IEEE Trans Inf Theory. 1991; 37(1):145–51.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B Methodol. 1995; 57:289–300.
Park EC, Horvitz HR. Mutations with dominant effects on the behavior and morphology of the nematode Caenorhabditis elegans. Genetics. 1986; 113(4):821–52.
Huang KM, Cosman P, Schafer WR. Machine vision based detection of omega bends and reversals in C. elegans. J Neurosci Methods. 2006; 158(2):323–36.
Wang Q, Wadsworth WG. The C domain of netrin UNC-6 silences calcium/calmodulin-dependent protein kinase-and diacylglycerol-dependent axon branching in Caenorhabditis elegans. J Neurosci. 2002; 22(6):2274–82.
Coates JC, de Bono M. Antagonistic pathways in neurons exposed to body fluid regulate social feeding in Caenorhabditis elegans. Nature. 2002; 419(6910):925–9.
Brundage L, Avery L, Katz A, Kim UJ, Mendel JE, Sternberg PW, Simon MI. Mutations in a C. elegans Gqα gene disrupt movement, egg laying, and viability. Neuron. 1996; 16(5):999–1009.
Hajdu-Cronin YM, Chen WJ, Patikoglou G, Koelle MR, Sternberg PW. Antagonism between Goα and Gqα in Caenorhabditis elegans: the RGS protein EAT-16 is necessary for Goα signaling and regulates Gqα activity. Gene Dev. 1999; 13(14):1780–93.
Fitzgerald K, Tertyshnikova S, Moore L, Bjerke L, Burley B, Cao J, Carroll P, Choy R, Doberstein S, Dubaquie Y, et al. Chemical genetics reveals an RGS/G-protein role in the action of a compound. PLOS Genet. 2006; 2(4):57.
Gumienny TL, MacNeil LT, Wang H, de Bono M, Wrana JL, Padgett RW. Glypican LON-2 is a conserved negative regulator of bmp-like signaling in Caenorhabditis elegans. Curr Biol. 2007; 17(2):159–64.
Rockman MV, Kruglyak L. Recombinational landscape and population genomics of Caenorhabditis elegans. PLOS Genet. 2009; 5(3):1000419.
Weber KP, De S, Kozarewa I, Turner DJ, Babu MM, de Bono M. Whole genome sequencing highlights genetic changes associated with laboratory domestication of C. elegans. PLOS ONE. 2010; 5(11):13922.
Gallagher T, Bjorness T, Greene R, You YJ, Avery L. The geometry of locomotive behavioral states in C. elegans. PLOS ONE. 2013; 8(3):59865.
De Bono M, Bargmann CI. Natural variation in a neuropeptide Y receptor homolog modifies social behavior and food response in C. elegans. Cell. 1998; 94(5):679–89.
Choi S, Chatzigeorgiou M, Taylor KP, Schafer WR, Kaplan JM. Analysis of NPR-1 reveals a circuit mechanism for behavioral quiescence in C. elegans. Neuron. 2013; 78(5):869–80.
Cheung BH, Cohen M, Rogers C, Albayram O, de Bono M. Experience-dependent modulation of C. elegans behavior by ambient oxygen. Curr Biol. 2005; 15(10):905–17.
Kubiak TM, Larsen MJ, Zantello MR, Bowman JW, Nulf SC, Lowery DE. Functional annotation of the putative orphan Caenorhabditis elegans G-protein-coupled receptor C10C6.2 as a FLP15 peptide receptor. J Biol Chem. 2003; 278(43):42115–20.
Ohya Y, Sese J, Yukawa M, Sano F, Nakatani Y, Saito TL, Saka A, Fukuda T, Ishihara S, Oka S, et al. High-dimensional and large-scale phenotyping of yeast mutants. Proc Natl Acad Sci. 2005; 102(52):19015–20.
Houle D, Govindaraju DR, Omholt S. Phenomics: the next challenge. Nat Rev Genet. 2010; 11(12):855–66.
Yu H, Aleman-Meza B, Gharib S, Labocha MK, Cronin CJ, Sternberg PW, Zhong W. Systematic profiling of Caenorhabditis elegans locomotive behaviors reveals additional components in G-protein Gαq signaling. Proc Natl Acad Sci. 2013; 110(29):11940–5.
Friedland AE, Tzur YB, Esvelt KM, Colaiácovo MP, Church GM, Calarco JA. Heritable genome editing in C. elegans via a CRISPR-Cas9 system. Nat Methods. 2013; 10:741–3.
We thank Haruka Ozaki and Hirotaka Matsumoto for critically reading the manuscript.
This study was supported by the Japan Society for the Promotion of Science [grant numbers 15J07635, 16J00129 and 16H06154], the CREST Program from the Japan Science and Technology Agency, and the Canon Foundation.
Worm posture dataset are available in the C. elegans behavioral database [20]. In addition, the source codes of FAB-GMM algorithm can be downloaded from https://github.com/fukunagatsu/FAB-GMM.
TF and WI designed the project. TF performed the analyses. TF and WI wrote the paper. Both authors read and approved the final manuscript.
Department of Computational Biology and Medical Science, Graduate School of Frontier Sciences, The University of Tokyo, Chiba, 277-8568, Japan
Tsukasa Fukunaga & Wataru Iwasaki
Faculty of Science and Engineering, Waseda University, Tokyo, 169-0072, Japan
Tsukasa Fukunaga
Research Fellow of Japan Society for the Promotion of Science, Tokyo, Japan
Atmosphere and Ocean Research Institute, The University of Tokyo, Chiba, 277-8564, Japan
Wataru Iwasaki
Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, 113-0032, Japan
Correspondence to Tsukasa Fukunaga or Wataru Iwasaki.
Additional file
Additional file 1
Supplementary materials. This file includes additional texts, figures and tables not shown in the manuscript. (PDF 477 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Fukunaga, T., Iwasaki, W. Inactivity periods and postural change speed can explain atypical postural change patterns of Caenorhabditis elegans mutants. BMC Bioinformatics 18, 46 (2017). https://doi.org/10.1186/s12859-016-1408-8
Computational ethology
Eigenworms
Worm posture analysis
Transition patterns of postures
Imaging, image analysis and data visualization
Volume 16, No 6 (2020), pp. 1459 - 1478
Sushil Kumar Singh*, Abir El Azzaoui*, Mikail Mohammed Salim*, and Jong Hyuk Park*
Quantum Communication Technology for Future ICT – Review
Abstract: In the last few years, quantum communication technology and services have been developed for various advanced applications to secure the sharing of information from one device to another. In today's classical commercial medium, several Internet of Things (IoT) devices are connected through information communication technology (ICT) and can communicate information through quantum systems. Digital communications for future networks face various challenges, including data traffic, low latency, deployment of high broadband, security, and privacy. Quantum communication, quantum sensors, and quantum computing are solutions to address these issues. The secure transaction of data is the foremost essential need of smart advanced applications in the future. In this paper, we propose a quantum communication model system for future ICT and its methodological flow. We show how to use blockchain in quantum computing and quantum cryptography to provide security and privacy in recent information sharing. We also discuss the latest global research trends for quantum communication technology in several countries, including the United States, Canada, the United Kingdom, Korea, and others. Finally, we discuss some open research challenges for quantum communication technology in various areas, including the quantum Internet and quantum computing.
Keywords: Computing Security and Privacy , Quantum , Communication , Sensor , Smart Applications
Werner Heisenberg, in 1925, described quantum physics as a physical theory presenting a mathematical description of the interaction of matter and energy. Quantum mechanics, a subset of quantum physics, defines foundational subatomic behavior, in which the exact location of a subatomic particle cannot be observed directly. It details how the universe functions at a scale smaller than an atom, whereas classical physics describes the elements of nature at a more macroscopic level. Particles possess wavelike properties, and their behavior is described using the wave function and the Schrödinger equation. Several new and distinct fields have been derived from quantum theory, such as quantum chemistry, quantum field theory, and quantum information science and technology.
Quantum information theory (QIT) is an amalgamation of several concepts from computer science, classical information theory, and quantum mechanics, including mathematical physics, quantum statistical physics, and probability theory. The primary purpose of study in QIT is to accomplish tasks using quantum mechanical systems so as to achieve efficient storage and transmission of information using the quantum mechanical properties of physical systems [1]. Information theory relies on probability theory to understand the mathematical limitations of communication and security. It utilizes quantum mechanics to determine information-processing limits such as secret key agreement and the preservation of quantum states.
Quantum information theory is the central pillar of quantum computers, which are currently being developed at a rapid pace [2]. As one of the prominent research institutions in the quantum computing area, Google has reached quantum supremacy with its "Sycamore" quantum computer, which reportedly contains 53 qubits and was able to solve a complex computation in 200 seconds; the same mathematical puzzle would take over 10,000 years to solve using today's most powerful classical computer. IBM also created a global community of researchers and companies called the "IBM Q Network" to work together on the advancement and development of quantum-information-related areas. Other high-tech companies are developing their services and preparing their classical models to shift to quantum models as soon as quantum computers are available.
The future information communication technology (ICT) will surely rely on quantum communication technologies (QCT), which are built on the laws of quantum physics to secure data communication; thus, preparing for this new upcoming area is significant. In the future ICT, computers are not the only beneficiaries of quantum technology; our communication will also shift to quantum. Instead of the classical Internet, the quantum Internet is viewed as the new channel of communication. Recent studies currently focus more on the quantum Internet and quantum teleportation, as these are the most mature of such technologies at present. The most utilized quantum Internet application is quantum key distribution (QKD), which is used to secure communication between the sender and receiver, as it is based on the laws of quantum mechanics. QKD's security guarantees the high privacy of the future quantum Internet, where not only can data be shared securely, but multiple quantum devices can also be grouped in the cloud to share huge computational power.
The quantum Internet, however, cannot do away with the classical Internet yet. To send data on the quantum Internet, we send photons whose qubit states encode the data. These photons travel via a fiber-optic channel, albeit the distance they can cross is very limited (under 300 km). If a photon travels farther than this distance, it risks being lost, and it could take billions of years to recover it. A photon also risks being destroyed when measured, which leads to data loss. The fragile characteristics of photons are what make QKD and the quantum Internet very secure; it is, however, the same reason that creates a burden on integrating quantum communication into today's ICT scenarios.
To fix this dilemma, researchers proposed utilizing a quantum repeater, which acts as a middle point between the sender and receiver. The quantum repeater is entangled with the sender and receiver at the same time and stores their qubit states in its memory; it receives from the sender the photon with the original information state, measures it, and sends it to the final receiver. This method has been used for years now; however, it is not a perfect solution. The quantum repeater requires a huge quantum memory to store qubit states and consumes considerable power to execute all of these steps. Moreover, the quantum repeater must be a very trusted node, as we send it all the messages or data at once.
To this end, we propose in this paper the use of quantum machines with a single qubit as quantum chain repeaters. We divide the data into multiple qubits and send every qubit of information via a different quantum machine in the quantum machine chain (QMC) at different time slots. Every quantum machine in the QMC has to deal with only a single qubit and registers only the time-stamp, not the whole qubit state, in its memory, which consumes less time and demands less memory and computational power. These quantum machines can be any devices in the future ICT; devices connected to a quantum cloud can benefit from quantum-computer-like power without creating a complex system.
The main contributions of our paper are as follows:
We discuss quantum technologies for future ICT, such as QKD and blockchain-based quantum cryptography.
We explain the main concepts and features of QIT, quantum computers, and quantum Internet in detail.
We depict some of the recent state-of-the-art research and project trends and areas worldwide about quantum computers and quantum Internet.
We explain our proposition overview of a QMC in future ICT and discuss its phases.
We describe some of the main open research challenges in the areas of the quantum Internet and quantum computing as quantum communication technologies.
The rest of the paper is organized as follows. In Section 2, we define the main foundations of the quantum computer and the quantum Internet and depict the related technologies for future ICT. Section 3 presents the recent research advances in the area of the quantum Internet around the globe. In Section 4, we demonstrate our proposition of a QMC, discuss the main components of our model, and clarify some of the leading open research challenges of the area. We conclude our work in Section 5.
2. Quantum Technologies for Future ICT
In this section, we discuss quantum technologies for future ICT. It is categorized into three subsections, including the foundation of quantum computers and quantum physics, quantum cryptography, and blockchain-based quantum computing.
2.1 Foundation of Quantum Computers and Quantum Physics
The term quantum computing was first proposed in 1980 by the mathematician Yuri Manin [3], who discussed the idea of quantum computation in his book. Subsequently, physicist Feynman [4] recorded an exponential slowdown in efficiency while simulating a quantum physical system of $R$ particles using ordinary computers; simulating a classical physical system on the same computer, however, can be done without polynomial slowdown. The rationalization of this phenomenon is that classical physics describes the size of a system of $R$ particles linearly, while quantum physics describes it exponentially. Based on this observation, Feynman [4] suggested building a computer based on the laws of quantum physics. Classical computers and quantum computers are based on different laws and designed to achieve different tasks. Using transistors, a classical computer is capable of processing information and calculation based on finite combinations of binary digits (bits) denoted as 0 and 1. A quantum computer, however, is based on quantum mechanical states of elementary particles, such as the internal angular momentum, denoted the spin. Quantum computers also have features different from those of classical computers; we note those elements as follows:
Qubit: The term qubit was first introduced in 1995 by physicist Schumacher [5]; the proposed theorem states that the von Neumann entropy S of the density operator describing a quantum state can be perfectly represented by the spin of particles, where the spin serves as a signal and was denoted in the paper as the quantum bit. To understand the qubit, we denote it as a mathematical object with unique characteristics. Similar to a classical bit, which has two states, 0 or 1, a qubit also has states, denoted $|0\rangle$ and $|1\rangle$, with "$|\,\rangle$" known as the Dirac notation. While a classical bit can be either in state 0 or 1, a qubit has, however, the possibility to be in states other than $|0\rangle$ or $|1\rangle$; it is also possible to create a linear combination of states, which is known in quantum theory as superposition. A state in quantum information is denoted $|\psi\rangle$ and can be represented by the following formula:
$$|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$$
where $\alpha$ and $\beta$ are two complex numbers. The state of a qubit can be represented as a unit vector in a two-dimensional complex vector space in which the states $|0\rangle$ and $|1\rangle$ form an orthonormal basis. Unlike classical bits, we cannot examine a qubit to read off its state; rather, we determine it based on its coefficients $\alpha$ and $\beta$. At measurement, the outcome is 0 with probability $|\alpha|^{2}$ or 1 with probability $|\beta|^{2}$, with $|\alpha|^{2}+|\beta|^{2}=1$ by the law of probability. In order to visualize the concept of qubits, the previous formula can be rewritten as follows:
$$|\psi\rangle=\cos\frac{\theta}{2}|0\rangle+e^{i\alpha}\sin\frac{\theta}{2}|1\rangle$$
where $\theta$ and $\alpha$ represent a point on the unit three-dimensional sphere, which provides a conceptual way of visualizing the state of a qubit, as shown in Fig. 1 (a numerical sketch follows the figure caption below).
Visual representation of a qubit state.
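To make the relationship between amplitudes and measurement probabilities concrete, here is a minimal NumPy sketch (ours, for illustration); the phase angle is written as phi in the code to avoid clashing with the amplitude α above.

```python
import numpy as np

rng = np.random.default_rng(1)

# A qubit state |psi> = a|0> + b|1> with |a|^2 + |b|^2 = 1
theta, phi = np.pi / 3, np.pi / 4          # a point on the Bloch sphere
a = np.cos(theta / 2)
b = np.exp(1j * phi) * np.sin(theta / 2)

p0, p1 = abs(a) ** 2, abs(b) ** 2          # measurement probabilities
assert np.isclose(p0 + p1, 1.0)

# Simulated measurements collapse the state to |0> or |1>
outcomes = rng.choice([0, 1], size=10000, p=[p0, p1])
print(f"P(0) = {p0:.3f}, empirical = {np.mean(outcomes == 0):.3f}")
```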
Entanglement: Einstein et al. [6] published a paper in 1935 stating that in a spatially separated quantum system, a unique and nonclassical correlation is observed. The authors called this action "spooky action at a distance". This action means that two spatially separated particles can only be described with reference to each other; it was later called quantum entanglement or the EPR paradox. Given this definition, if two particles are entangled and then separated, the measurement of one particle spontaneously influences the other particle's state. Quantum entanglement serves as the main characteristic of quantum computers, as it is used to realize quantum teleportation.
Quantum Teleportation: One famous demonstration of quantum entanglement is quantum teleportation; it provides a way of transmitting qubits without physically transferring the particle storing the qubit [7]. Using a Bell-state measurement (BSM) and an EPR pair shared between source and destination, we can transmit a quantum state between two spatially separated quantum devices. As Fig. 2 depicts, to send information from Lab 1 to Lab 2, we must create two entangled particles (an EPR pair), P1 and P2, each attributed to one Lab, respectively. In Lab 1, a BSM is performed on particle P1 and the qubit state $|\psi\rangle$. The result of the measurement is sent through a classical channel to Lab 2 in the form of two bits with four possibilities. Upon reception, Lab 2 processes the result against the pre-entangled particle P2, and with that it can retrieve the original state of P1 and the qubit $|\psi\rangle$ sent by Lab 1 (a state-vector sketch of this protocol follows Fig. 2 below). We must note here that the original particles are both destroyed upon measurement; thus, in order to send another qubit of information, we need to construct a new EPR pair and distribute it between the sender and receiver.
Quantum Repeater: Transferring qubits and quantum information over long distances requires fiber-optic networks [8]. Due to the fragile state of photons, however, they cannot be distributed over long-distance channels without being lost. Moreover, it can take years just to detect a single lost photon, which would dismantle the concept and characteristics of quantum communication. As a quantum approach to this dilemma, a repeater can be used. A quantum repeater is a complex system with high performance levels that stores a quantum entanglement state, purifies it, and swaps it in a very organized architecture [9].
Visual concept of quantum teleportation.
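The protocol of Fig. 2 can be verified with a small state-vector simulation. The following sketch (our illustration, not from the paper) enumerates all four of Alice's measurement outcomes and checks that Bob's X/Z corrections recover $|\psi\rangle$ in every case.

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# State to teleport: |psi> = a|0> + b|1> (any normalized amplitudes)
a, b = 0.6, 0.8j
psi = np.array([a, b], dtype=complex)

# Qubit 0 holds |psi>; qubits 1 and 2 share the EPR pair (|00>+|11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)               # 8 amplitudes, qubit 0 most significant

# Alice's Bell-state measurement circuit: CNOT(0 -> 1), then H on qubit 0
cnot01 = np.zeros((8, 8), dtype=complex)
for i in range(8):
    q0, q1, q2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    j = (q0 << 2) | ((q1 ^ q0) << 1) | q2
    cnot01[j, i] = 1
state = kron(H, I2, I2) @ (cnot01 @ state)

# Enumerate the four possible classical outcomes (m0, m1) Alice can send
for m0 in (0, 1):
    for m1 in (0, 1):
        bob = np.array([state[(m0 << 2) | (m1 << 1) | q] for q in (0, 1)])
        bob /= np.linalg.norm(bob)       # post-measurement state of qubit 2
        # Bob's correction: apply X if m1 = 1, then Z if m0 = 1
        fixed = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob
        assert np.allclose(fixed, psi)
print("|psi> recovered by Bob for all four measurement outcomes")
```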
A quantum computer, using qubits, is potentially and theoretically capable of outperforming a classical computer in terms of capacity and computation. Moreover, using the entanglement and teleportation aspects, we can create a safe and high-power quantum network sharing multiple quantum computers over a quantum Internet layer. The future of ICT is based on the development of quantum computers and the quantum Internet.
2.2 Quantum Cryptography
Cryptography is the method of preserving information by converting plain-text data into unintelligible text data. It is a process of storing and sharing transaction data in a specific form so that only those for whom it is intended can read and process it. The enhancement of quantum technologies starts a new era for cryptography, and ICT with the latest possibilities is rapidly rising [10-12]. During the last three decades, quantum communication has been the most rapidly developing field, combining quantum sensors, quantum computing, quantum physics, and information theory. This extended version of cryptography is known as quantum cryptography or quantum encryption. The basic quantum cryptography functionality is shown in Fig. 3. It applies quantum mechanics principles to encrypt messages and follows various security properties, including confidentiality, integrity, non-repudiation, and authentication [13-15]. For a better understanding of the functionality of transmitting transactions, quantum cryptography is categorized into multiple sub-fields, such as quantum key distribution (QKD), quantum random number generation (QRNG), quantum digital signatures (QDS), and quantum computation (QC), as follows:
Quantum key distribution: Given the need for secure data communication, encryption and decryption play a central part because they protect against exposure to attacks or hackers. The integrity of data communication depends on symmetric cryptography, with its private and public keys. Thus, secure communication in the network is based on key distribution: the process of transferring keys between the sender and recipient to secure communication in the system [16,17]. Traditional key distribution methods face various challenges, including security threatened by weak random number generators, the need for high-power CPUs, unmanaged unknown attacks, and more. To effectively address these challenges, QKD is utilized; it follows quantum properties for communicating secret information. It facilitates the continuous generation and sharing of truly random one-time-pad keys for the highest security requirements and follows quantum mechanical properties. The working process of QKD has three points, which are the following:
A quantum channel, either free space or enabled fiber, sends quantum light states between sender and recipient. This channel does not need to be secure.
A public authenticated channel performs the post-processing steps and yields a genuinely secret key between the sender and receiver. Photons work as the private or secret key.
Key distribution comprises the rules and regulations that utilize quantum characteristics to secure communication by identifying eavesdropping and estimating lost or intercepted information in the network system.
With the help of continuous error correction and post-processing steps, information leakage and error bits are reduced. Traditional fiber-based QKD has been demonstrated over distances of a few hundred kilometers, but recent QKD is distributing photons over 1,000-km distances with emerging technologies. These technologies employ powerful, deterministic, efficient light sources, high-speed data transmission, low-cost photon detection, more reliable quantum memories, and quantum repeaters. (A toy simulation of the QKD sifting step follows Fig. 3 below.)
Illustration of quantum cryptography functionality.
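As an illustration of the QKD working process above (ours, not from the paper), the following BB84-style simulation shows the sifting step: when the sender's and receiver's bases match, the sifted bits agree, whereas an eavesdropper measuring in random bases would introduce detectable errors.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000

# Sender: random bits and random bases (0 = rectilinear, 1 = diagonal)
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Receiver: random measurement bases
bob_bases = rng.integers(0, 2, n)

# If bases match, Bob reads the bit correctly; otherwise his result is random
same_basis = alice_bases == bob_bases
bob_bits = np.where(same_basis, alice_bits, rng.integers(0, 2, n))

# Sifting: keep only positions where bases matched (~50% of photons)
key_a = alice_bits[same_basis]
key_b = bob_bits[same_basis]
print(f"sifted key length: {key_a.size}, errors: {np.sum(key_a != key_b)}")
# An intercept-resend eavesdropper would corrupt ~25% of the sifted bits,
# which Alice and Bob detect by publicly comparing a random sample of the key.
```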
Quantum random number generation: Random number generation is an essential security element for securing private information [18-20]; it has a vital role in various applications, such as asymmetric cryptography and secret sharing. Generally, computer systems rely on deterministic methods such as PRNGs (pseudo-random number generators). This process generates randomness, but it is not very secure because it uses deterministic algorithms. QRNG is utilized to address the above issue and provides more security in advanced applications. QRNG is a quantum physical process that is radically probabilistic and constructs true randomness. It is categorized based on the quantum channel's inner workings, modeled and controlled to produce irregular randomness.
Quantum digital signature: A digital signature is a mathematical technique for modern communication to validate authenticity and integrity by preventing masquerading. Traditional digital signature schemes (TDSS) need pairs of public and private keys for generating hash functions, in which signatures are based on the message bits: a secret or private key signs the message, which is then verified with the sender's public key. The hash function in TDSS is only computationally secure; it can easily be broken by the latest technological systems, such as quantum computers. Thus, nowadays, QDS is deployed to mitigate this issue. It is based on quantum mechanics to provide secure communication. In this signature, the sender signs a message with quantum states, following the properties of quantum functions and multiport optical systems. Multiport optical systems are used to overcome the quantum memory problem and are utilized in asymmetric quantum cryptography.
Quantum computation: As we know, numerous essential aspects of communication security rely on encryption and public-key cryptography, which are necessary for electronic business and for protecting confidential automated information. Thus, computation over messages is required for secure communication in advanced applications. QC is the process of computing over messages with a hash function and quantum-based computing devices. These devices follow the properties of quantum mechanics and use qubits, in which a single qubit can encode more than two states. The comparison of quantum cryptography and post-quantum cryptography is shown in Table 1.
Table 1. Comparison of quantum cryptography and post-quantum cryptography

Channel: Quantum cryptography requires a special channel, such as a fiber-based or line-of-sight channel; post-quantum cryptography needs no special channel.

Algorithm needs: In quantum cryptography, QKD has used classical symmetric algorithms such as AES and RSA for bulk data sharing or communication; post-quantum cryptography uses larger keys than the RSA and AES algorithms.

Computational assumption: The computational assumption of quantum cryptography relies on the hardness of factoring; that of post-quantum cryptography relies on the test of time.

Definition: Quantum cryptography follows the properties of quantum mechanics and optics for security; post-quantum cryptography is a new set of rules and regulations for classical algorithms.
2.3 Blockchain-based Quantum Computing
Blockchain networks secure user records and data, such as financial records stored in blocks as transactions, using immutable ledgers supported by cryptographic methods such as digital signatures. Breaking the computationally complex mathematical problems protecting data stored in blocks requires considerable power. Quantum computers pose a severe threat to the mining process of blocks, which is essential to growing the blockchain network: attackers could mine blocks with considerably higher computing power, resulting in a much higher network hash rate than that of average users. Attacks such as 51% attacks are simpler to execute using quantum computers, allowing malicious attackers to steal and manipulate stored data.
QKD-based authentication is essential to secure data in the quantum period. It requires the sender and receiver to share quantum states of light across fiber or free-space quantum channels. Kiktenko et al. [21] proposed a blockchain protocol combining Byzantine fault tolerance without digital signatures and QKD for secure authentication. The protocol consists of two layers: the first layer is a QKD network permitting secure transmission of keys for each pair of nodes, and the second layer transmits messages secured by Toeplitz hashing using the private keys received in the first layer. Blocks are created in a decentralized manner using the broadcast protocol [22], which allows managing pair-based group authentication assuming the number of dishonest nodes is below one-third.
The protocol is applied to each unconfirmed transaction at a periodic interval of 10 minutes, pairwise, to prevent data manipulation by a corrupted node. Forking in the blockchain network is prevented by approving authorized transactions based on timestamps and forming a common node. QKD is used only for generating the private keys, while data are transmitted using the broadcast protocol. Experimental analysis using an urban fiber QKD network between three nodes (A, B, C) shows successful legitimate transactions; an unauthorized block with illegitimate transactions attempting to perform a double-spending attack is successfully blocked.
Quantum computers have successfully broken the current security protocols of the blockchain network [23,24]. Several recent studies have proposed modifications of blockchain technology [21,25] to secure it against quantum attacks; however, these are not considered reliable due to newly proposed quantum algorithms [26-28] that threaten such security measures. An ideal approach to secure blockchain against quantum attacks is to merge quantum entanglement with the blockchain architecture. Rajan and Visser [29] proposed a quantum blockchain method in which timestamped blocks and hash functions are linked with a temporal Greenberger–Horne–Zeilinger (GHZ) state of photons that do not coexist at the same time. Using superdense coding, the quantum blockchain replaces the spatially entangled Bell states of the conventional protocol with temporal ones:
$$\left|\beta_{xy}\right\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle|y\rangle+(-1)^{x}|1\rangle|\bar{y}\rangle\right)$$
Here, $xy$ represents two classical bits: 00, 01, 10, and 11. Every block in the traditional blockchain is transformed, using a temporal Bell state, into a quantum block. The creation of the first block is represented at t=0, and r represents each record:
$$\left|\beta_{r_{1} r_{2}}\right\rangle^{0,\tau}=\frac{1}{\sqrt{2}}\left(\left|0^{0}\right\rangle\left|r_{2}^{\tau}\right\rangle+(-1)^{r_{1}}\left|1^{0}\right\rangle\left|\overline{r_{2}^{\tau}}\right\rangle\right)$$
Entanglement between the two quantum bits (qubits) exists initially at $t=\tau$. A new photon that did not exist earlier is created as the entangled counterpart of the first qubit. The conversion of the blockchain into temporal Bell states is as follows:
$$\left|\beta_{00}\right\rangle^{0,\tau},\quad\left|\beta_{10}\right\rangle^{\tau,2\tau},\quad\left|\beta_{11}\right\rangle^{2\tau,3\tau}$$
In the proposed quantum blockchain, an attacker's attempt to modify a block's contents or tamper with the photons results in the entire malicious block being destroyed. In standard blockchain technology, only the forward blocks are rejected. Since all previous photons are removed, an attacker cannot even access the last photon.
Traditional blockchain technology implements elliptic curve cryptography or Rivest–Shamir–Adleman (RSA) to create digital signatures and secure blocks from attackers, relying on mathematical complexity [25]. Factoring large composite numbers into two prime factors provides the complexity of RSA; however, quantum computers possess the computational capacity to solve difficult problems that would take hundreds of years on a standard computer. Quantum-powered computers, due to high speedups in computation, can break RSA, DSA, and elliptic curve cryptography. One of the two popular quantum algorithms, Shor's algorithm, breaks RSA encryption due to its high efficiency in factoring large numbers; its high execution speed compared to existing algorithms stems from its running time being polynomial in the input length. To determine the prime factors of an odd integer N, we choose x, a co-prime of N. The order r relates x to N according to:
$$x^{r} \bmod N = 1$$
The order r then yields the factors via the greatest common divisor [30]; finding r efficiently is possible only by using quantum computers, rendering even a 4096-bit RSA key breakable:
$$\gcd\left(x^{\frac{r}{2}} \pm 1,\; N\right)$$
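The classical post-processing of Shor's algorithm can be illustrated on a toy modulus (this sketch is ours; the quantum part, finding r efficiently, is replaced here by brute force):

```python
from math import gcd

def order(x, N):
    """Brute-force the multiplicative order r of x mod N.
    (Shor's algorithm finds r efficiently on a quantum computer;
    here we iterate classically for a toy modulus.)"""
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

N, x = 15, 2                    # toy example: factor N = 15 with co-prime x = 2
r = order(x, N)                 # r = 4, which is even, as required
assert r % 2 == 0
p = gcd(x ** (r // 2) - 1, N)   # gcd(3, 15) = 3
q = gcd(x ** (r // 2) + 1, N)   # gcd(5, 15) = 5
print(f"order r = {r}; factors of {N}: {p} x {q}")
```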
Grover's algorithm attacks blockchain security using two methods: locating hash collisions to replace blocks without affecting the integrity of the blockchain network, and influencing the chain's integrity by increasing the creation of nonces to the level where chains of records are recreated using modified hashes. The speed of Grover's algorithm is $O(\sqrt{N})$, compared to $O(N)$ for classical algorithms. The increase in speed allows the algorithm to break a hashing function and insert a modified block into the blockchain network. An attacker can potentially create multiple blocks in negligible time, allowing them to take control of the entire network. The fastest-growing chain in the network is decided to be the main chain, effectively allowing the attacker to rewrite transactions and initiate double-spending in cryptocurrency-based blockchain networks.
Quantum computing has grown in strides in recent years, with organizations such as Google and IBM developing their own quantum systems. Google's Sycamore system computes complex mathematical problems in 200 seconds using 53 qubits, whereas today's supercomputers would require a minimum of 10,000 years. IBM's Q Network allows various companies and academic institutions to improve and advance quantum algorithms using the open-source Qiskit programming framework. Recent advances in quantum technology have prompted researchers to develop new algorithms to secure blockchain networks and counter any future quantum-based attacks. Proven approaches that secure blockchain networks against quantum-based attacks include quantum entanglement, lattice-based cryptography, and QKD. We discussed earlier the use of quantum entanglement in blockchain networks to secure data stored in blocks, by Rajan and Visser [29], and we now present lattice-based cryptography and QKD for securing blockchain networks.
A lattice is generally defined as a collection of points in n-dimensional space with a cyclic composition. A basis of the lattice L is $B=(b_{1}, b_{2}, \ldots, b_{n})$, and different bases can represent the same lattice. For a group of independent vectors $b_{i}$, the lattice formed by them is as follows:
$$\mathcal{L}\left(b_{1}, b_{2}, \ldots, b_{n}\right)=\left\{\sum_{i=1}^{n} x_{i} b_{i} : x_{i} \in \mathbb{Z},\; b_{i} \in \mathbb{R}^{m}\right\}$$
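A small NumPy sketch (ours) illustrates the "different bases, same lattice" point: two bases related by a unimodular matrix generate identical lattices.

```python
import numpy as np

# Rows of B are the basis vectors b1, b2 of a 2-D lattice
B1 = np.array([[2.0, 1.0],
               [1.0, 3.0]])
B2 = np.stack([B1[0], B1[0] + B1[1]])     # another basis: (b1, b1 + b2)

# Lattice points are all integer combinations x1*b1 + x2*b2
pts = [x1 * B1[0] + x2 * B1[1] for x1 in range(-2, 3) for x2 in range(-2, 3)]

# B2 = U @ B1 with U unimodular (|det U| = 1), so both bases span the same lattice
U = np.linalg.solve(B1.T, B2.T).T
assert np.isclose(abs(np.linalg.det(U)), 1.0)
```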
Lattice-based algorithms are cited and suggested by several recent studies due to their resistance to the attacks that defeat elliptic curve cryptography in blockchain networks. Torres et al. [31] proposed a one-time linkable ring signature (L2RS) relying on lattice-based cryptography, enabling verification of multiple signatures created by an identical signatory. The L2RS assures a privacy-preserving protocol for cryptocurrencies and presents a foundation for block building and the homomorphic assurance fundamental to securing post-quantum confidential transactions. Gao et al. [32] proposed a signature scheme relying on the lattice algorithm to produce secret keys using stochastic values. A post-quantum blockchain is designed by first signing the message using a preimage sampling algorithm and, second, using a double signature to reduce the correlation between the message and the signature. Analysis of the signature scheme showed resistance to quantum computing attacks.
To secure keys, QKD relies on exchanging cryptographic keys using individual photons, where each photon carries a single bit of data as either 0 or 1. The theory of quantum physics states that each photon's value is based on its spin and polarization, i.e., the photon's state. QKD destroys the block in the blockchain network if an attacker attempts to modify or read block contents. A laser at the sender's end produces a sequence of single photons, each in a defined state of polarization, i.e., vertical or horizontal. Additionally, the sender cannot create a duplicate photon with the same state of polarization. The photon receiver measures the state of the photons to assure that the sender is a secure and authorized user. By the Heisenberg uncertainty principle, QKD prevents an attacker from simultaneously determining a quantum particle's position and velocity.
3. Global Research Trends for QCT
This section discusses recent global research trends in QCT for various advanced fields, including the electronics market, semiconductor testing, energy storage, and the Internet, across multiple countries such as the United States, the United Kingdom, Korea, Canada, and China. These countries are running various projects based on quantum cryptography and providing secure communication in advanced industries.
In North America, the United States has allocated US$1.2 billion for quantum research as part of the National Quantum Initiative Act. The Act focuses on building research and development centers that aim to collaborate with academia, industry, and the government to accelerate quantum research progress. Research focuses on developing quantum processors that enable further computing applications; quantum clocks for precise timekeeping and for maintaining communications in GPS-denied conditions during warfare incidents; and the study of gravity using quantum information theory. Research also covers quantum-resistant cryptography for the post-quantum era, such as new optimizations using novel algorithms and cybersecurity systems [33].
Canada has invested more than US$1 billion in the past decade for research and development in quantum computing technology. It ranks 5th in the world for patents filed in the field of quantum computing. Research focuses on quantum information processing, metrology, communications, cryptography, and networks. In collaboration with the Canadian Space Agency, Canada's government, with funding of US$80.9 million, is actively researching quantum key distribution to enhance secure and encrypted communication in space and protect digital communication. Quantum sensors are recognized as an important research area to help the country extract oil in an environmentally friendly manner [34].
Germany has pledged €2 billion, the highest in Europe, to promote quantum computing research from its COVID recovery fund and to catch up with countries such as the United States and China, which have filed 500 and 200 patents, respectively [35]. The increase in quantum technology investment follows a government decision in 2018 to invest €650 million. The German research minister, Anja Karliczek, announced the building of an experimental Q System One quantum computer in collaboration with IBM near Stuttgart by 2021. The Fraunhofer Gesellschaft, Europe's leading applied research institute, works with IBM to develop new quantum technology, application scenarios, and new algorithms [36]. As part of the United Kingdom's National Quantum Technologies Programme, UK Research and Innovation (UKRI) aims to establish the National Quantum Computing Centre (NQCC) at the Harwell Campus in Oxfordshire by 2025. The NQCC will invest £95 million across multiple workstreams. The research projects include a 100+ qubit Noisy Intermediate-Scale Quantum hardware platform; quantum software, algorithm, and application development; and high-performing, scalable qubit technology. Participants include multiple stakeholders from government, business organizations, and academic research [37]. On March 24, 2020, the National Cyber Security Center released a whitepaper on quantum-safe cryptography, highlighting the best mitigation methods against quantum computers and suggesting reduced reliance on asymmetric cryptography due to its vulnerability to quantum attacks [38].
In Asia, China is leading research in quantum technology to build computers that outperform the computational power of existing systems and sensors that can see through smog and around corners [39]. Research focused on developing QKD's industrial applications was initiated by the National Development and Reform Commission and the Chinese Academy of Sciences between 2011 and 2015, with an investment of US$490 million. Since 2016, research pushed by both the central and local governments has centered on quantum communication, computation, and metrology. In 2017, the focus shifted to building a national standard for quantum cryptography [40]. Research on satellite-based quantum communication proved a success with the launch of the satellite Micius: quantum cryptographic keys were distributed between ground stations in Vienna and Beijing, facilitating a secure virtual meeting between academics from Austria and China [41].
Quantum research in Japan on quantum information processing, metrology, and sensing is funded by the Japan Science and Technology Agency (JST) and the Japan Society for the Promotion of Science (JSPS), while quantum communication and cryptography are funded by the National Institute of Information and Communications Technology (NICT). Between 2001 and 2015, QKD research conducted in collaboration between industry and universities resulted in high-speed QKD systems operating at a 1-GHz repetition rate, known as the Tokyo QKD network. The research focus has since expanded to secure cryptographic applications such as TV conferencing, IP routers, and smartphone systems. Between 2003 and 2010, the JST funded numerous projects in quantum information processing with photonic qubits, superconducting qubits, quantum information processing by entangled photons, optical lattice clocks, and quantum simulation tools [42].
Several mobile companies in Korea, including SKT, KT, Samsung, and LG, provide telecommunication services and electronic business for industry and consumers. KT and Samsung Electronics, however, have already started developing technologies for quantum-based communication with quantum computers [43], so Korea's e-market is moving into the quantum industry, followed by SKT. The telecommunication company KT is set to establish a quantum information research center, known as the Korea Advanced Nano Fab Center, with the Korea Institute of Science and Technology (KIST). KIST and KT operate this research center and plan to concentrate their capabilities on secure communication with quantum computers. SAIT (Samsung Advanced Institute of Technology) has already completed the Global Research Outreach (GRO) project, which depends on quantum computers. Samsung now aims to develop error-free, highly effective, more secure, modern qubit equipment and algorithms. In 2014, the South Korean government established medium- and long-term promotion strategies for quantum information communication.
The government has joined the race in next-generation ICT development, including quantum computing. It is investing 44.5 billion Korean won over the next 5 years to enhance computational performance and enable secure information sharing worldwide using quantum computers and quantum mechanics [44]. By developing key technologies for quantum computing, the government plans to demonstrate an effective five-qubit quantum computer system with more than 90% security by 2023. Market Research Media in Korea estimates that the global markets for quantum cryptography communication and quantum computers will be worth more than US$23.2 billion (26 trillion Korean won) in 2025 [44]. The Korean government will also invest 13.4 billion Korean won in next-generation ICT technology, including ultra-high-computing data, computer software, intelligent systems, human-computer interaction, and quantum computing.
4. A Quantum Communication Model
Based on current research trends, researchers are focusing on QKD as one of the most applicable techniques today and the leading application for the quantum Internet [45], which enables secure remote communication between two or more parties based on the laws of quantum mechanics. This section proposes QMC, a model that uses small, relatively restricted devices, compared with a normal quantum repeater, to send a message through the quantum Internet between two quantum computers separated by at least 300 km.
4.1 Proposed Quantum Communication Model System
The quantum Internet, unlike the classical Internet, will theoretically support and develop several applications, including secure access to quantum computers from relatively restricted devices, clock synchronization, and other scientific applications in physics, medicine, and astronomy. Transferring qubits between quantum computers in the quantum Internet layer, however, is not a straightforward task. Due to the physical nature of photons and particles, they cannot be entangled perfectly over distances beyond 300 km. Progress has nonetheless been significant in recent years: researchers in China successfully measured, over 900 trials, two entangled particles separated by more than 1,400 km, using a satellite as a quantum repeater. This is a huge step toward the quantum Internet in future ICT. Nonetheless, the quintessential architecture and design of quantum repeaters are quite complex. Relying on quantum entanglement distribution, quantum repeaters require two-way communication between the sender and receiver to apply the BSM, and a large quantum memory to save the particles' state [46]; moreover, the quantum information must all pass through the same repeater, which must be a trusted node, otherwise security and privacy concerns arise.
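The 300-km figure is easy to motivate with a back-of-the-envelope calculation of our own (the 0.2 dB/km attenuation is a typical value for standard telecom fiber, not a number from this paper):

```python
# Photon survival probability in optical fiber with ~0.2 dB/km attenuation:
# transmittance = 10^(-alpha * L / 10).
alpha_db_per_km = 0.2
for distance_km in (50, 100, 300, 1_000):
    transmittance = 10 ** (-alpha_db_per_km * distance_km / 10)
    print(f"{distance_km:>5} km: photon survival probability = {transmittance:.1e}")
# At 300 km only about 1 photon in a million survives, and at 1,000 km
# direct transmission is hopeless; hence repeaters or satellite links.
```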
To this end, we propose in this paper a one-way single-qubit transmission to send quantum information from one quantum computer to another, as shown in Fig. 4. To illustrate the proposal, consider the case where a quantum computer wants to send a message to another quantum computer located more than 300 km away. The first quantum computer, QC1, generates the message and encodes it into qubits; every single qubit is sent through the QMC. The QMC is a group of quantum machines, each with at least 1 qubit. Unlike quantum repeaters, these machines do not require large quantum memory or computational power. QMCs are distributed around the future smart city; they could be phones, base stations, or personal computers connected to quantum computers in the cloud. QC1 encodes each qubit of the message into a Bell measurement and sends it through signals to the first quantum machine in the QMC, chosen by proximity, with a timestamp to record the time slot of each message.
Model overview of quantum machine chain.
The quantum machine that receives the signals re-encodes them and checks whether the destination QC2 (receiver) is close by (less than 50 km). If so, the signals, along with the timestamp, are sent directly to the receiver. If not, they are sent to the next quantum machine, and so on until they reach the receiver. If a quantum machine is busy and cannot receive the signals, the next available nearby quantum machine is solicited instead. Moreover, if a quantum machine already carried a bit in the previous time slot for the same quantum computer, it cannot be solicited again, and the signals move directly to the next quantum machine. After receiving all the signals, the receiver end (QC2) decodes them based on their timestamps, starting with the oldest signals, to retrieve the original message.
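The forwarding rules above amount to a small routing procedure. The sketch below is our own illustration of that logic (the machine names, distances, and data structures are assumptions for demonstration, not part of the paper); it also shows the receiver-side reassembly by timestamp described above and in Section 4.2.

```python
from dataclasses import dataclass, field

@dataclass
class QuantumMachine:
    name: str
    distance_to_receiver_km: float
    busy: bool = False
    served: set = field(default_factory=set)  # (sender, time_slot) pairs already carried

def forward(machines, sender, time_slot, reach_km=50.0):
    """Decide where a signal goes next, following the rules in the text."""
    for m in sorted(machines, key=lambda m: m.distance_to_receiver_km):
        if m.busy:
            continue  # busy machine: solicit the next available one
        if (sender, time_slot - 1) in m.served:
            continue  # carried a bit for this sender in the previous slot: skip
        if m.distance_to_receiver_km < reach_km:
            return "QC2"  # receiver is close enough: deliver directly
        m.served.add((sender, time_slot))
        return m.name     # otherwise hop to this machine and repeat

def reassemble(signals):
    """Receiver side: order qubit payloads by timestamp, oldest first."""
    return [payload for timestamp, payload in sorted(signals)]

chain = [QuantumMachine("QM1", 220.0),
         QuantumMachine("QM2", 120.0, busy=True),
         QuantumMachine("QM3", 40.0)]
print(forward(chain, sender="QC1", time_slot=3))    # -> 'QC2' (QM3 is within 50 km)
print(reassemble([(2, "b"), (1, "a"), (3, "c")]))   # -> ['a', 'b', 'c']
```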
4.2 Methodological Flow of Proposed Model System
To understand the flow of the proposed model, we refer to Fig. 5. Here we use the notion of photonic tree clusters presented by Borregaard et al. [47]. QC1 encodes its message into multiple qubits. Each qubit is encoded using BSM with the root spin qubit of the photonic tree cluster. The encoded qubit is sent to the next quantum machine, where it is re-encoded using BSM. The re-encoding is done between the first-level photonic qubit and the next new photonic tree. The photons are then sent either to the next quantum machine, where the same phases are repeated, or directly to QC2 (the receiver). The receiver decodes the qubit by measuring the received photon tree. An overview of the tree-cluster scheme is shown in Fig. 5.
Tree-cluster scheme's overview.
Methodological flow of the proposed model system.
A timestamp is attached every time a qubit is sent from one quantum machine to another to keep track of qubit order. The encoding, re-encoding, and decoding phases fall outside the scope of this study; they will be covered in detail in our future work. After receiving the qubits, the receiver organizes them based on their history and timestamps, from the oldest qubit to the most recently received, and decodes them to retrieve the message. Fig. 6 depicts the proposed model's phases in detail as a methodological flow.
4.3 Discussion and Open Research Challenges
Quantum computers and processors will very soon become our daily reality and replace classical computers. This industry is advancing rapidly, especially with the efforts of tech giants such as Google and IBM to develop quantum computers. Moreover, hosting a quantum computer in the cloud can facilitate the task. In future smart cities, nearly all IoT devices and machines with only a 1-qubit processor will be able to use the full power of, and benefit from, a highly developed quantum computer hosted in the cloud layer. To realize this, however, we will need a quantum network known as the quantum Internet. Countries worldwide, including the United States, European countries, and China, are now engaging in quantum communication research. The quantum Internet will enable highly private networks in which devices and machines built upon the rules of quantum mechanics can communicate and share information securely using QKD. Moreover, based on the quantum Internet, a quantum computer can be hosted in the cloud and used by several machines with lower quantum processing capability (at least 1 qubit).
Due to quantum state fragility, however, two qubits cannot be entangled over long distances, which is the reason for using quantum repeaters. As explained previously, a quantum repeater can entangle the sender's state with the receiver's state; it acts as a middle point to transfer the information. Nonetheless, quantum repeaters require large quantum memory and a powerful quantum processor to save the quantum state and re-encode it, which creates a serious dilemma: with this method, building a scalable quantum network drives up the cost and hardware requirements. To this end, we propose in this paper a QMC model that can replace quantum repeaters. The main purpose is to lower the cost of creating the quantum Internet and scalable communication for future ICT.
QMC relies on dividing the encoded message into several qubits, sent to multiple quantum machines with relatively smaller quantum processors compared with quantum repeaters. It does not require large quantum memory, as each machine deals with only one qubit of information. The model uses timestamps to record the history of each qubit and reorganize the message later at the receiver side. Our future work will focus on the encoding, re-encoding, and decoding phases, which we intend to explain in detail while proving our proposal's performance against other related works.
Science and technology have achieved much in the field of quantum computers and the quantum Internet. In future ICT, and due to the heterogeneous nature of future smart cities [48-50], quantum computers will be hosted in the cloud rather than on local machines. This gives relatively smaller and restricted devices access to the quantum cloud, where they can benefit from its computational power to execute complex tasks. The quantum Internet and quantum computers will improve and empower smart cities and be the main pillars of future ICT. This is, however, not yet an easy task: quantum computers and the quantum Internet still face multiple challenges, as shown in Fig. 7.
Limited Resources: Quantum repeaters require multiple systems available for widespread public usage with sufficient processing power to forward a single qubit to other devices. The most powerful quantum computer built by IBM processes 65 qubits, although by 2023 IBM expects to build a quantum computer capable of processing 1,000 qubits. Current classical systems operate well at room temperature, whereas current quantum computers require near-zero temperatures maintained by cooling systems, confining them to laboratories.
Open research challenges for quantum communication.
High Error Rate: Enhancing the performance of ion-trap computers requires increasing the gates' laser intensity, which exposes qubits to environmental factors such as electromagnetic waves and temperature variations and results in decoherence, i.e., loss of data from the qubit to the environment. An error rate of 10^(-6) per gate can be achieved by placing ions in small holes or pits, preventing unwanted transformations. Furthermore, fault-tolerance schemes using error-correcting algorithms can tolerate error probability rates of 10^(-6), which is adequately below the accuracy threshold; a rough numerical illustration is given after this list of challenges.
Decoherence: Quantum computers rely on the quantum principles of superposition and entanglement and utilize quantum states. Decoherence is an open research challenge for quantum communication technology in future ICT because quantum states are more vulnerable to errors than classical bits during communication. Decoherence occurs when the environment interacts with the qubits, changing their quantum states and losing or corrupting the information in the quantum computer. Various factors generate decoherence, including radiation from warm objects, collisions between qubits, changing electric and magnetic fields, and the collapse of wave functions in quantum mechanics. It therefore remains an open issue for the practical implementation of quantum computers.
Quantum State Fragility: This is another open challenge for quantum communication in future ICT. As noted, quantum computers use quantum state values (superpositions of 0 and 1) as qubits. Qubit states are incredibly fragile compared with classical bits because they are affected by the outside environment, electric and magnetic fields, wave functions, and radiation from objects. Under these influences, quantum states may change, meaning the original information is also changed or lost in quantum computers communicating over future ICT. Quantum state fragility is thus a crucial open research issue for secure transmission in advanced applications.
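As a rough numerical illustration of the error-rate discussion above (our own back-of-the-envelope sketch, not a result from the cited literature), treating gate errors as independent gives a success probability of (1 - p)^N for an N-gate circuit:

```python
# Probability that a circuit of n_gates runs with no gate error,
# assuming independent errors at rate p per gate.
p = 1e-6
for n_gates in (1_000, 1_000_000, 10_000_000):
    p_success = (1 - p) ** n_gates
    print(f"{n_gates:>10,} gates: P(no error) = {p_success:.4f}")
# Even at p = 1e-6, a million-gate circuit succeeds only ~37% of the time
# without error correction, hence the need for fault-tolerant schemes.
```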
This paper reviewed quantum communication technologies for future ICT and proposed a quantum communication model system based on quantum machines to create a scalable quantum Internet network. We discussed all phases of the quantum machine chain in futuristic communications. We showed how blockchain can be used with quantum computing to provide secret sharing of data between parties with the help of quantum computers. We also discussed the latest global research trends in quantum communication technology across several countries, including the United States, Canada, the United Kingdom, Korea, and others. Finally, we discussed some open research challenges for quantum communication technology and provided a comparison table of quantum communication cryptography and post-quantum cryptography.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. NRF-2019R1A2B5B01070416).
Sushil Kumar Singh
He received his M.Tech. degree in Computer Science and Engineering from Uttarakhand Technical University, Dehradun, India, in 2018. He also received an M.E. degree in Information Technology from Karnataka State University, Mysore, India, in 2011. Currently, he is pursuing his Ph.D. degree under the supervision of Prof. Jong Hyuk Park at the Ubiquitous Computing Security (UCS) Lab, Seoul National University of Science and Technology, Seoul, South Korea. He has more than 9 years of teaching experience in the field of computer science. His current research interests include blockchain, artificial intelligence, big data, and the Internet of Things. He is a reviewer of the IEEE Systems Journal, FGCS, Computer Networks, HCIS, JIPS, and others.
Abir El Azzaoui
She received the B.S. degree in computer science from the University of Picardie Jules Verne, Amiens, France. She graduated from the National School of Higher Education Hassan II in the Development of Information Systems, Marrakech, Morocco. She is currently pursuing a master's degree in computer science and engineering with the Ubiquitous Computing Security (UCS) Laboratory, Seoul National University of Science and Technology, Seoul, South Korea, under the supervision of Prof. Jong Hyuk Park. Her current research interests include blockchain, Internet of Things (IoT) security, and post-quantum cryptography. She is also a reviewer for IEEE Access. She has received the Quarterly Franklin Membership from the London Journal of Engineering Research (LJER), London, UK.
Mikail Mohammed Salim
He received his bachelor's degree in Computer Applications from Bangalore University, Bangalore, India, in May 2011. He also received his Post Graduate Diploma in Management from Integrated Learning in Management, Greater Noida, India, in 2014. Currently, he is pursuing his combined Master's and Ph.D. degree under the supervision of Prof. Jong Hyuk Park at the UCS Lab, Seoul National University of Science and Technology, Seoul, South Korea. He has 5 years of experience working as a marketing and project manager designing web services for clients. His research interests include IoT and 5G network security. He is a reviewer of The Journal of Supercomputing and Human-centric Computing and Information Sciences.
James J. (Jong Hyuk) Park
He received Ph.D. degrees from the Graduate School of Information Security, Korea University, Korea, and the Graduate School of Human Sciences, Waseda University, Japan. Dr. Park served as a research scientist at the R&D Institute, Hanwha S&C Co. Ltd., Korea, from December 2002 to July 2007, and as a professor at the Department of Computer Science and Engineering, Kyungnam University, Korea, from September 2007 to August 2009. He is currently a professor at the Department of Computer Science and Engineering and the Department of Interdisciplinary Bio IT Materials, Seoul National University of Science and Technology (SeoulTech), Korea. Dr. Park has published about 200 research papers in international journals and conferences. He has also served as the chair, program committee chair, or organizing committee chair at many international conferences and workshops. He is a founding steering chair of various international conferences including MUE, FutureTech, CSA, and UCAWSN. He is editor-in-chief of Human-centric Computing and Information Sciences (HCIS) by Springer, The Journal of Information Processing Systems (JIPS) by KIPS, and the Journal of Convergence (JoC) by KIPS CSWRG. He is also an associate editor or editor of fourteen international journals, including eight journals indexed by SCI(E). In addition, he has served as a guest editor for various international journals for publishers such as Springer, Elsevier, Wiley, Oxford University Press, Hindawi, Emerald, and Inderscience. Dr. Park's research interests include security and digital forensics, human-centric ubiquitous computing, vehicular cloud computing, context awareness, secure communications, and multimedia services. He has received "best paper" awards from the ISA-08 and ITCS-11 conferences and "outstanding leadership" awards from IEEE HPCC-09, ICA3PP-10, IEEE ISPA-11, and PDCAT-11. Furthermore, he received an "outstanding research" award from SeoulTech in 2014. He is a member of the IEEE, IEEE Computer Society, KIPS, and KMMS.
1 N. Datta, "Course 9 - Quantum entropy and quantum information," Les Houches, vol. 83, pp. 395-466, 2006.
2 E. G. Rieffel, W. H. Polak, Quantum Computing: A Gentle Introduction. Cambridge, MA: MIT Press, 2011.
3 Y. I. Manin, Computable and Uncomputable. Moscow, Russia: Sovetskoye Radio, 1980.
4 R. P. Feynman, "Simulating physics with computers," International Journal of Theoretical Physics, vol. 21, pp. 467-488, 1982.
5 B. Schumacher, "Quantum coding," Physical Review A, vol. 51, no. 4, pp. 2738-2747, 1995.
6 A. Einstein, B. Podolsky, N. Rosen, "Can quantum-mechanical description of physical reality be considered complete?," Physical Review, vol. 47, no. 10, pp. 777-780, 1935.
7 M. Pant, H. Krovi, D. Towsley, L. Tassiulas, L. Jiang, P. Basu, D. Englund, S. Guha, "Routing entanglement in the quantum internet," npj Quantum Information, vol. 5, no. 25, 2019.
8 Quantum Flagship, 2020 (Online). Available: https://qt.eu/discover-quantum/underlying-principles/quantum-repeaters/
9 B. Zhao, M. Muller, K. Hammerer, P. Zoller, "Efficient quantum repeater based on deterministic Rydberg gates," Physical Review A, vol. 81, no. 5, 2010.
10 Z. Dou, G. Xu, X. B. Chen, J. Li, M. Naseri, "Rational non-hierarchical quantum state sharing protocol," Computers, Materials & Continua, vol. 58, no. 2, pp. 335-347, 2019.
11 Y. Sun, Y. Chen, H. Ahmad, Z. Wei, "An asymmetric controlled bidirectional quantum state transmission protocol," Computers, Materials & Continua, vol. 59, no. 1, pp. 215-227, 2019.
12 Y. Chang, S. Zhang, L. Yani, G. Han, H. Song, Y. Zhang, X. Li, Q. Wang, "A quantum authorization management protocol based on EPR-pairs," Computers, Materials & Continua, vol. 59, no. 3, pp. 1005-1014, 2019.
13 W. Liu, Y. Xu, J. C. Yang, W. Yu, L. Chi, "Privacy-preserving quantum two-party geometric intersection," Computers, Materials & Continua, vol. 60, no. 3, pp. 1237-1250, 2019.
14 C. Li, G. Xu, Y. Chen, H. Ahmad, J. Li, "A new anti-quantum proxy blind signature for blockchain-enabled Internet of Things," Computers, Materials & Continua, vol. 61, no. 2, pp. 711-726, 2019.
15 J. C. S. Sicato, S. K. Singh, S. Rathore, J. H. Park, "A comprehensive analyses of intrusion detection system for IoT environment," Journal of Information Processing Systems, vol. 16, no. 4, pp. 975-990, 2020.
16 A. El Azzaoui, S. K. Singh, Y. Pan, J. H. Park, "Block5gintell: blockchain for AI-enabled 5G networks," IEEE Access, vol. 8, pp. 145918-145935, 2020.
17 Y. Lee, S. Rathore, J. H. Park, J. H. Park, "A blockchain-based smart home gateway architecture for preventing data forgery," Human-centric Computing and Information Sciences, vol. 10, no. 9, 2020.
18 K. Gafurov, T. M. Chung, "Comprehensive survey on Internet of Things, architecture, security aspects, applications, related technologies, economic perspective, and future directions," Journal of Information Processing Systems, vol. 15, no. 4, pp. 797-819, 2019.
19 V. Mohammadi, A. M. Rahmani, A. M. Darwesh, A. Sahafi, "Trust-based recommendation systems in Internet of Things: a systematic literature review," Human-centric Computing and Information Sciences, vol. 9, no. 21, 2019.
20 Y. Kim, 2017 (Online). Available: https://english.etnews.com/20170615200001
21 E. O. Kiktenko, N. O. Pozhar, M. N. Anufriev, A. S. Trushechkin, R. R. Yunusov, Y. V. Kurochkin, A. I. Lvovsky, A. K. Fedorov, "Quantum-secured blockchain," Quantum Science and Technology, vol. 3, no. 3, 2018.
22 D. Malkhi, Concurrency: The Works of Leslie Lamport. San Rafael, CA: ACM, 2019.
23 D. Aggarwal, G. K. Brennen, T. Lee, M. Santha, M. Tomamichel, 2017 (Online). Available: https://arxiv.org/abs/1710.10377
24 S. King, S. Nadal, 2012 (Online). Available: https://decred.org/research/king2012.pdf
25 J. H. Witte, 2016 (Online). Available: https://arxiv.org/abs/1612.06244
26 D. McMahon, Quantum Computing Explained. Hoboken, NJ: John Wiley & Sons, 2007.
27 A. Montanaro, "Quantum algorithms: an overview," npj Quantum Information, vol. 2, no. 1, pp. 1-8, 2016.
28 W. Zeng, B. Johnson, R. Smith, N. Rubin, M. Reagor, C. Ryan, C. Rigetti, "First quantum computers need smart software," Nature News, vol. 549, no. 7671, pp. 149-151, 2017.
29 D. Rajan, M. Visser, "Quantum blockchain using entanglement in time," Quantum Reports, vol. 1, no. 1, pp. 3-11, 2019.
30 A. E. Azzaoui, J. H. Park, "Post-quantum blockchain for a scalable smart city," Journal of Internet Technology, vol. 21, no. 4, pp. 1171-1178, 2020.
31 W. A. A. Torres, R. Steinfeld, A. Sakzad, J. K. Liu, V. Kuchta, N. Bhattacharjee, M. H. Au, J. Cheng, "Post-quantum one-time linkable ring signature and application to ring confidential transactions in blockchain (Lattice RingCT v1.0)," in Information Security and Privacy. Cham, Switzerland: Springer, pp. 558-576, 2018.
32 Y. L. Gao, X. B. Chen, Y. L. Chen, Y. Sun, X. X. Niu, Y. X. Yang, "A secure cryptocurrency scheme based on post-quantum blockchain," IEEE Access, vol. 6, pp. 27205-27213, 2018. doi: 10.1109/ACCESS.2018.2827203
33 National Science and Technology Council, 2018 (Online). Available: https://www.whitehouse.gov/wp-content/uploads/2018/09/National-Strategic-Overview-for-Quantum-Information-Science.pdf
34 B. Sussman, P. Corkum, A. Blais, D. Cory, A. Damascelli, "Quantum Canada," Quantum Science and Technology, vol. 4, no. 2, 2019.
35 Inside Quantum Technology, 2020 (Online). Available: https://www.insidequantumtechnology.com/news/germanys-billions-in-funding-for-quantum-computing-reflects-its-push-for-technological-sovereignty-european-self-reliance/
36 J. Eitner, 2020 (Online). Available: https://www.fraunhofer.de/en/press/research-news/2020/march/ibm-and-fraunhofer-bring-quantum-computin-to-germany.html
37 National Quantum Computing Centre (Online). Available: https://www.ukri.org/about-us/nqcc/
38 National Cyber Security Center, 2016 (Online). Available: https://www.ncsc.gov.uk/whitepaper/quantum-safe-cryptography
39 J. Whalen, 2019 (Online). Available: https://www.washingtonpost.com/business/2019/08/18/quantum-revolution-is-coming-chinese-scientists-are-forefront/
40 Q. Zhang, F. Xu, L. Li, N. L. Liu, J. W. Pan, "Quantum information research in China," Quantum Science and Technology, vol. 4, no. 4, 2019.
41 H. Siljak, 2020 (Online). Available: https://theconversation.com/chinas-quantum-satellite-enables-first-totally-secure-long-range-messages-140803
42 Y. Yamamoto, M. Sasaki, H. Takesue, "Quantum information science and technology in Japan," Quantum Science and Technology, vol. 4, no. 2, 2019.
43 Korea-EU Research Centre, 2019 (Online). Available: https://k-erc.eu/korea-rd-research-trends-and-results/korea-starts-five-year-development-program-for-quantum-computing-technology/
44 A. S. Cacciapuoti, M. Caleffi, F. Tafuri, F. S. Cataliotti, S. Gherardini, G. Bianchi, "Quantum internet: networking challenges in distributed quantum computing," IEEE Network, vol. 34, no. 1, pp. 137-143, 2019.
45 S. Wehner, D. Elkouss, R. Hanson, "Quantum internet: a vision for the road ahead," Science, vol. 362, no. 6412, 2018. doi: 10.1126/science.aam9288
46 W. J. Munro, K. Azuma, K. Tamaki, K. Nemoto, "Inside quantum repeaters," IEEE Journal of Selected Topics in Quantum Electronics, vol. 21, no. 3, pp. 78-90, 2015.
47 J. Borregaard, H. Pichler, T. Schroder, M. D. Lukin, P. Lodahl, A. S. Sorensen, "One-way quantum repeater based on near-deterministic photon-emitter interfaces," Physical Review X, vol. 10, no. 2, 2020.
48 S. K. Singh, N. Rastogi, "Role of cyber cell to handle cyber crime within the public and private sector: an Indian case study," in Proceedings of 2018 3rd International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU), Bhimtal, India, 2018, pp. 1-6.
49 S. K. Singh, Y. S. Jeong, J. H. Park, "A deep learning-based IoT-oriented infrastructure for secure smart city," Sustainable Cities and Society, vol. 60, no. 102252, 2020.
50 S. K. Singh, S. Rathore, J. H. Park, "Blockiotintelligence: a blockchain-enabled intelligent IoT architecture with artificial intelligence," Future Generation Computer Systems, vol. 110, pp. 721-743, 2020.
Received: October 20 2020
Revision received: November 24 2020
Accepted: November 24 2020
Published (Print): December 31 2020
Published (Electronic): December 31 2020
Corresponding Author: Jong Hyuk Park* , [email protected]
Sushil Kumar Singh*, Dept. of Computer Science and Engineering, Seoul National University of Science & Technology (SeoulTech), Seoul, Korea, [email protected]
Abir El Azzaoui*, Dept. of Computer Science and Engineering, Seoul National University of Science & Technology (SeoulTech), Seoul, Korea, [email protected]
Mikail Mohammed Salim*, Dept. of Computer Science and Engineering, Seoul National University of Science & Technology (SeoulTech), Seoul, Korea, [email protected]
Jong Hyuk Park*, Dept. of Computer Science and Engineering, Seoul National University of Science & Technology (SeoulTech), Seoul, Korea, [email protected]
Calculus exercises: part II
2 Definite integral
3 Integration
4 Applications of integrals
5 Parametric curves
6 Several variables
Example. Find $\sqrt{4.01}$ with accuracy at least $.001$. We are to approximate with Taylor polynomials the function $f(x)=x^{1/2}$ around the point $a=4$.
First, we consider the constant approximation $T_0$ which produces the value of $\sqrt{4}=2$ as a substitute for $\sqrt{4.01}$. We compute: $$f'(x)=\frac{1}{2}x^{-1/2}.$$ We estimate this function on the interval $[4,4.01]$. Find $K_1$ such that $$\frac{1}{2}x^{-1/2}\le K_1$$ for all $x$ in $[4,4.01]$. Since $f'$ is decreasing, its maximum is at $x=4$ with the max value $f'(4)=\frac{1}{4}$. This is the best estimate, and we choose this number as $K_1$. Therefore, we consider $T_0$ with the error estimate: $$E_0=|T_0(x)-f(x)|\le K_1\frac{|x-4|}{1!}=\frac{1}{4}|x-4|.$$ Specifically, for $x=4.01$, we have: $$E_0\le \frac{1}{4}|4.01-4|=\frac{.01}{4}=.0025.$$ Not good enough!
We proceed to the linear approximation $T_1$. We compute: $$f' '(x)=\left(\frac{1}{2}x^{-1/2}\right)'=-\frac{1}{4x^{3/2}}.$$ We find $K_2$ from the estimate we need: $$\frac{1}{4x^{3/2}}\le K_2$$ for all $x$ in $[4,4.01]$. Since $|f' '|$ is decreasing, its max is at $x=4$. Therefore, we choose $$K_2=\frac{1}{4\cdot 4^{3/2}}=\frac{1}{4\cdot 8}=\frac{1}{32}.$$ Then, we have by the theorem: $$E_1\le \frac{1}{32}\frac{|x-4|^2}{2!}.$$ Specifically, for $x=4.01$, the accuracy estimate is $$E_1\le \frac{1}{32\cdot 2}.01^2=\frac{.0001}{64}.$$ Definitely better than necessary!
Thus the answer, with this degree of accuracy, is $$\sqrt{4.01}=2.0025\pm .001.$$ $\square$
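As a quick numerical sanity check (our own sketch, not part of the original exercise), the two approximations and their error bounds can be verified directly:

```python
import math

x, a = 4.01, 4.0
true_value = math.sqrt(x)

T0 = 2.0                     # constant approximation T_0(x) = f(4)
T1 = 2.0 + 0.25 * (x - a)    # linear approximation, using f'(4) = 1/4

print(f"true value : {true_value:.7f}")
print(f"T0 error   : {abs(T0 - true_value):.7f}   (bound 0.0025)")
print(f"T1 = {T1}, error: {abs(T1 - true_value):.2e}   (bound {0.01**2 / 64:.2e})")
# T0's error ~2.50e-3 barely meets its bound; T1's error ~1.56e-6 is far
# below the required accuracy of 0.001, exactly as the estimate predicted.
```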
Definite integral
(a) State the Fundamental Theorem of Calculus. (b) Use part (a) to evaluate
$$\int_{-1}^1 \sin \frac{x}{3} \, dx.$$
$$\frac{d}{dx}\int_0^{x}e^{t^2}\, dt.$$
(a) Make a sketch of the left-end Riemann sums for $\int_0^1\sqrt{x}\, dx$ with $n=4$ intervals. (b) State the algebraic properties of the Riemann integral.
Given $f(x)=x^{2}+1,$ write (but do not evaluate) the Riemann sum for the integral of $f$ from $-1$ to $2$ with $n=6$ and left ends as sample points. Make a sketch.
Provide the definition of the definite integral via its Riemann sums. Make a sketch.
The Fundamental Theorem of Calculus includes the formula $\int _a^bf(x)\, dx=F(b)-F(a)$. (a) State the whole theorem. (b) Provide definitions of the items appearing in the formula.
(a) State the definition of the definite integral $\int _a^bf(x)\, dx$ and illustrate the construction with a sketch. (b) Use the definition to justify that $$\int _a^b cf(x)\, dx=c\int _a^b f(x)\, dx$$ for a constant $c$.
Suppose $$\int_0^1f\, dx=2,\ \int_0^4f\, dx=0,\ \int_1^2f\, dx=2.$$ Find
$$\int_1^3f\, dx,\ \int_0^1(f(x)+3)\, dx,\ \int_2^4f\, dx.$$
Suppose a function is defined by: $$F(x)=\int_2^xf\, dx.$$ Find, in terms of $F$, the following:
$$\int_0^4f\,dx,\ \int_1^2f\,dx,\ \int_0^{-1}f\,dx,\ \int_1^2(f(x)-1)\,dx.$$
Evaluate the Riemann sum of $f$ below on the interval $[-1,1.5]$ with $n=5$. What are its sample points? What does it estimate?
Write (don't evaluate) the left-end Riemann sum of the integral $\int _0^5 f(x)\, dx$ for function $f$ shown below with $n=5$ intervals.
Write the formula and illustrate with a sketch the left-end Riemann sum $L_4$ of the integral $\int _1^3 f(x)\, dx$ for the function plotted above.
Write the mid-point Riemann sum that approximates the integral $\int _0^1 \sin x\, dx$ within $.01$.
Set up the Riemann sum for the area of the circle of radius $R$ as the area between two curves, provide an illustration and the integral formula. Evaluate for extra 5 points.
Let $I=\int_ 2^8f\, dx$. (a) Use the graph of $y=f(x)$ below to estimate $L_{4},\ M_{4},\ R_{4}$. (b) Compare them to $I$.
Complete the following statements:
$(f(x)\cdot x^2)'=f'(x)\cdot x^2+...$;
$\int x^{-1}\, dx=... $;
$\int f'(x)\, dx=...$;
$\int u\, dv=uv... $;
$u=\cos t \ \Longrightarrow\ du=...$.
Suppose that $F$ is an antiderivative of a differentiable function $f$. If $F$ is increasing on $[a,b]$, what can you say about $f$?
Execute the following substitution in the integral (don't evaluate the resulting integral):
$$\int \sqrt{\cos x+\sin x}\, dx,\quad u=\sin x.$$
Suppose $s(t)$ represents the position of a particle at time $t$ and $v(t)$ its velocity. If $v(t)=\sin t-\cos t$ and the initial position is $s(0)=0,$ find the position $s(1).$
$$\int e^{3x}\, dx.$$
$$\int_1^2(e^{x}+\sqrt {x}+x^{-1})\, dx.$$
$$\int e^{x^2}2x\, dx.$$
$$\int 2x\sin 5x\, dx.$$
Evaluate $$\int_1^3 e^{t+1}\, dx.$$ Hint: watch the variables.
Calculate:
$$\int \left(e^{\sin x^2+77}\right)' \, dx.$$
Evaluate the integral by substitution
$$\int xe^{x^{2}}\, dx.$$
Find all antiderivatives of the following function: $f(x)=e^{-x}$.
Find the antiderivative $F$ of the function $f(x)=3x^{2}-1$ satisfying the initial condition $F(1)=0$.
Evaluate the integral
$$\int_0^1x^3\, dx.$$
Evaluate:
$$\int x^2\, dx - \int x^2\, dx.$$
$$\int x^{-2}\, dx - \int x^{-2}\, dx.$$
Integrate by parts:
$$\int 3x e^{-x}\, dx.$$
Use the table of integrals to evaluate:
$$\int \sin^{-1}{2x}\, dx.$$
$$\int_0^{1} \frac{1}{2x}\, dx.$$
Use substitution to evaluate the integral:
$$\int _0^{\pi} \sin x \ \cos^2 x \, dx.$$
$$\int x(\ln x)^2\, dx.$$
$$\int _0^1 \frac{1}{\sqrt{4-x^2}}\, dx.$$
$$\int x^2(\sqrt{x^2-4}-\sqrt{x^2+9})\, dx.$$
$$\int x \sin x\, dx.$$
Use substitution $u=1+x^2$ to evaluate the integral
$$\int \sqrt{1+x^2} x^5\, dx.$$
Evaluate the improper integral:
$$\int_1^{\infty} \frac{1}{2x}\, dx.$$
Find the antiderivative $F$ of the function $f(x)=e^{x}+x$ satisfying the initial condition $F(0)=1$.
Applications of integrals
The region bounded by the graphs of $y=\sqrt{x},\ y=0,$ and $x=1$ is revolved about the $x$-axis. Find the surface area of the solid generated.
A chord of a circle is a straight line segment whose end-points lie on the circle. Find the average length of a chord perpendicular to the diameter. What about parallel?
Find the average length of a segment in a square parallel to (a) the base, (b) the diagonal.
Find (by integration) the length of a circle of radius $r$.
Find the area enclosed by the curves below:
Find the area of the region bounded by $y=x^2-1$ and $y=3$.
Suppose $f$ is an integrable function. (a) Show that if $f$ is odd, then $\int_{-a}^af\, dx=0$. (b) Suggest a related formula for an even $f$.
Find the centroid of the region bounded by the curves $y=x^2,\ y=1$.
Find the $x$-coordinate of the center of mass of the region between $y=x^2$ and $y=x^3$.
Find the volume of a right circular cone of radius $R$ and height $h$ by any method you like.
Compute the average area of the cross section of the sphere of radius $1$.
Find the center of mass of the region below $y=2x$ for $0 \leq x \leq 1$.
The volume of a solid is the integral of the areas of its cross-sections. Explain and justify using Riemann sums.
The region bounded by the graphs of $y=x^{2}+1,\ y=0,\ x=0$ and $x=1$ is revolved about the $x$-axis. Find the volume of the solid generated.
The region bounded by the graphs of $y=x^{2}+1,\ y=0,\ x=0,$ and $x=1$ is revolved about the $y$-axis. Find the volume of the solid generated.
An aquarium $2$ m long, $1$ m wide, and $1$ m deep is full of water. Find the work needed to pump half of the water out of the aquarium (the density of water is $1000$ kg/m$^{3}$).
Find the area of the surface of revolution around the $x$-axis obtained from $y=\sqrt{x},\ 4\le x\le 9$.
Find the centroid of the region bounded by the curves $y=4-x^{2},\ y=x+2$.
Find the area of the region bounded by $y=x^{2}-1$ and $y=3$.
Find the area under the graph of the function $f(x)=e^x$ from $x=-1$ to $x=1$.
Find the average value of the function $f(x)=2x^2-3$ on the interval $[1,3]$.
Find the area of the region bounded by $y=\sqrt{x}$, the $x$-axis, and the lines $x=1$ and $x=4$.
Parametric curves
Describe the motion of a particle with position $(x,y)$, where $$x=2+t\cos t,\ y=1+t\sin t,$$ as $t$ varies within $[0,\infty )$.
Suppose the parametric curve is given by \[x=\cos3t,\ y=2\sin t.\] Set up, but do not evaluate, the integrals that represent (a) the arc-length of the curve, (b) the area of the surface obtained by rotating the curve about the $x$-axis.
Suppose curve $C$ is the graph of function $y=f(x)$. (a) Find a parametric representation of $C$. (b) Find a parametric representation of $C$ that goes from right to left.
Find all points on the curve \[x=\cos3t,\ y=2\sin t\] where the tangent is either horizontal or vertical.
Sketch the following parametric curve: $$x=|\cos t|,\ y=|\sin t|,\ -\infty <t <+\infty.$$ Describe the curve and the motion.
Sketch the following parametric curves:
$x(t)=\frac{1}{t},\ y(t)=\sin t,\ t>0$;
$x = \cos t,\ y= 2$;
$x=1/t,\ y=1/t^{2},\ t>0$.
(1) Sketch the parametric curve $x=\cos t,\ y=\sin 2t$. (2) The curve intersects itself. Find the angle of this intersection.
Find an equation of the spiral converging to the origin as below:
Plot this entire parametric curve: $x=\sin t,\ y=\cos 2t$.
Find a parametric representation of a curve similar to the one below, a spiral wrapping around a circle. What about one that is wrapping from the inside? (no proof necessary):
Given a parametric curve $x=\sin t,\ y=t^2$. Find the line(s) tangent to the curve at the origin.
Find a parametric representation of a curve that looks like the figure eight or a flower (no proof necessary).
Several variables
Draw a few level curves of the function $f(x,y)=x^{2}+y$.
The graph of function $y=g(x)$ of one variable is shown below. Suppose now that $z=f(x,y)=g(x)$ is a function of two variables, which depends only on $x$, given by the same formula. Find all points where the gradient of $f$ is equal to $0$.
Find all critical points of the function $f(x,y)=2x^3-6x+y^2-2y+7$.
Sketch the contour (level) curves of the function shown below, along with points $A,B,C,D$, on the $xy$-plane:
Sketch the level curves of the function $f(x,y)=2xy+1$ for the following values of $z=-1,0,1,2.$
Show that the limit doesn't exist:
$$\lim_{(x,y)\to (0,0)}\frac{xy}{x^{2}+y^{2}}.$$
Draw the contour map (level curves) of the function $f(x,y)=e^{y/x}$. Explain what the level curves are.
Sketch the graph of a function of two variables $z=f(x,y)$ the derivatives of which have the following signs:
$$f_x>0,\ f_{xx}>0,\ f_y<0,\ f_{yy}<0.$$
The graph of a function of two variables $z=f(x,y)$ is given below along with four points on the graph. Sketch the gradient for each on a separate $xy$-plane:
Find the gradient of the function $f(x,y)=x^2y^{-3}$ at the point $(1,1)$. Use this information to sketch the graph of $f$ in the vicinity of this point. Explain.
The graph of a function of two variables $z=f(x,y)$ is given below along with four points on the graph. Provide the signs (positive or negative) of the partial derivatives of $f$ at these points. For example, $\frac{\partial f}{\partial x}<0$ at point $A$.
Make a sketch of contour (level) curves for the following function:
The wave heights $h$ in the open sea depend on the speed $v$ of the wind and the length of time $t$ that the wind has been blowing at that speed. Values of the function $h=f(v,t)$ are recorded in the table below. Estimate the rate of change of $h$ with respect to $v$ when $v=40$ and $t=15$. Show your computations.
$$\begin{array}{c|ccc} v\backslash t &15 &20 &25\\ \hline 30 &16 &17 &18\\ 40 &25 &28 &31\\ 50 &36 &40 &45 \end{array}$$
The contour (level) curves for a function are given below. They are equally spaced. Sketch a possible graph that produced it and describe it.
Draw the contour map (level curves) of the following function of two variables:
$g(x,y)=\ln(x+y)$;
$f(u,v)=uv$;
$h(x,y)=2x-3y+7$;
$z=x^2+y^2$.
$$\begin{array}{c|ccccccccc} & 1&2&3&4\\ \hline f_x &+&+&-&+\\ f_{xx}&-&+&+&-\\ f_y &-&-&+&+\\ f_{yy}&-&+&-&+ \end{array}$$
The graph of a function of two variables $z=f(x,y)$ is given below along with a point on the graph: 1. A, 2. B, 3. C, 4. D. Determine the signs of the derivatives $f_x,f_{xx},f_y,f_{yy}$ at that point:
Estimate the coefficients of the Taylor polynomial $T_1$ of order $1$ centered at $a=1$ of the function $f$ shown above. Provide a formula for this $T_1$. What about $T_2$; what is the sign of the coefficient of the next term that appears in $T_2$?
What degree Taylor polynomial would one need to approximate $e^{.01}$ within $.001$? (Answers may vary and yours doesn't have to be perfect, but it has to be justified.)
(a) State the definition of absolute convergence. (b) Give an example of a series that converges but not absolutely.
What degree Taylor polynomial would one need to approximate $\sin (-.01)$ within $.001$? Explain the formula: $$E_n \le K_{n+1} \frac{|x-a|^{n+1}}{(n+1)!}$$ and why you can choose $K_{n+1}=1$.
Find the interval of convergence of the series:
$$\sum \frac{(x-2)^n}{n}.$$
Explain how functions are represented by power series and how they both are differentiated. Demonstrate on $f(x)=e^x$.
Find the Taylor polynomial of degree $4$ that would help to approximate $e^{1.01}$.
(a) State the definition of the sum of a series. (b) Use (a) to prove the Sum Rule.
Find the sum of the series
$$\sum _{n=0}^{\infty} \frac{(-1)^n+2}{3^n}.$$
Test the following series for convergence (including absolute/conditional):
$$\sum \frac{(-1)^{n-1}}{(1.1)^n}.$$
Find the radius and the interval of convergence of the series
$$\sum \frac{2(x+1)^n}{n^2}.$$
Find the Taylor series centered at $a=1$ of the function $f(x)=x^4$.
Apply the Integral Test to show that the $p$-series with $p=1/3$ diverges.
$$\sum (-1)^{2n}\frac{1}{n^n}.$$
$$\sum \frac{n^{1/2}}{n^2-1}.$$
Find the radius and the interval of convergence of the series:
$$\sum \frac{(x-1)^n}{\sqrt{n}2^n}.$$
Find the Taylor polynomial $T_{2}(x)$ of order $2$ centered at $a=\pi $ of the function $f(x)=\sin ^{2}x.$
Influence of adherend properties on the strength of adhesively bonded joints
Mariana D. Banea
Journal: MRS Bulletin / Volume 44 / Issue 8 / August 2019
Print publication: August 2019
Advanced lightweight materials, including high-strength steels, aluminum, magnesium, plastics, and reinforced polymer composites, are increasingly used in industry. Combinations of mixed materials are becoming commonplace in the design of structures. Adhesives can be used to join a wide range and combinations of materials. However, joining of materials depends on their specific characteristics. The choice of adherend material is one particular and important parameter that influences adhesively bonded joint performance, and its effect should be taken into consideration in the design of adhesive joints. This article overviews experimental and modeling investigations on the influence of adherend properties on the strength of adhesively bonded joints.
Advances in dissimilar metals joining through temperature control of friction stir welding
Kenneth Ross, Md. Reza-E-Rabby, Martin McDonnell, Scott A. Whalen
Lightweighting of vehicles and portable structures is an important undertaking. Multimaterial design is required to achieve conflicting design targets such as cost, stiffness, and weight. Friction stir welding (FSW) variants, such as friction stir dovetailing and friction stir scribe, are enabling technologies for joining of dissimilar metals. This article discusses how FSW variants are capable of joining aluminum to steel in particular. The characteristics of metallurgical bonding at the dissimilar materials interface are strongly affected by weld temperature. Control of FSW process temperature enables metallurgical bonding with suppressed formation of intermetallics at the dissimilar materials interface, resulting in improved mechanical properties relative to competing techniques. Temperature control is thus a powerful tool for process development and ensuring weld quality of dissimilar materials welds.
Enabling sustainable transportation through joining of dissimilar lightweight materials
Sarah Kleinbaum, Cindy Jiang, Steve Logan
The transportation sector is the largest contributor to greenhouse gas emissions in the United States. One method being used to reduce greenhouse emissions related to the transportation sector is improving vehicle fuel efficiency through mass reduction. Reducing the mass of on-highway passenger vehicles by 10% can result in vehicle fuel economy improvements of as much as 6–8% if the powertrain is downsized to maintain equivalent performance. Some of the materials being investigated and implemented to reduce passenger vehicle mass include advanced high-strength steel, aluminum, magnesium, and polymer composites. Additionally, multimaterial structures that allow for optimal combinations of lightweight materials to achieve maximum weight reduction with lowest cost and best structural performance have recently become of particular interest. However, assembling multimaterial structures can be challenging due to differences in melting temperature and coefficient of thermal expansion of different materials, as well as formation of intermetallic compounds and galvanic corrosion potential. Joining technologies for lightweight multimaterial structures must address these challenges to be successful. This article highlights advances made in five different joining techniques: nondestructive evaluation of resistance spot-welded aluminum to steel, modeling of structural adhesives, temperature control of friction stir welds, ultrasonic welding of magnesium, and vapor foil actuation welding.
An answer to Furstenberg's problem on topological disjointness
Topological dynamics
Connections with other structures, applications
WEN HUANG, SONG SHAO, XIANGDONG YE
Journal: Ergodic Theory and Dynamical Systems / Volume 40 / Issue 9 / September 2020
Published online by Cambridge University Press: 10 April 2019, pp. 2467-2481
In this paper we give an answer to Furstenberg's problem on topological disjointness. Namely, we show that a transitive system $(X,T)$ is disjoint from all minimal systems if and only if $(X,T)$ is weakly mixing and there is some countable dense subset $D$ of $X$ such that for any minimal system $(Y,S)$, any point $y\in Y$ and any open neighbourhood $V$ of $y$, and for any non-empty open subset $U\subset X$, there is $x\in D\cap U$ such that $\{n\in \mathbb{Z}_{+}:T^{n}x\in U,S^{n}y\in V\}$ is syndetic. Some characterization for the general case is also given. By way of application we show that if a transitive system $(X,T)$ is disjoint from all minimal systems, then so are $(X^{n},T^{(n)})$ and $(X,T^{n})$ for any $n\in \mathbb{N}$. It turns out that a transitive system $(X,T)$ is disjoint from all minimal systems if and only if the hyperspace system $(K(X),T_{K})$ is disjoint from all minimal systems.
Improving the interface adherence at sealings in solid oxide cell stacks
Ilaria Ritucci, Ragnar Kiebach, Belma Talic, Li Han, Philipp Zielke, Peter V. Hendriksen, Henrik L. Frandsen
Journal: Journal of Materials Research / Volume 34 / Issue 7 / 15 April 2019
Published online by Cambridge University Press: 08 February 2019, pp. 1167-1178
Thermal cycling of planar solid oxide cell (SOC) stacks can lead to failure due to thermal stresses arising from differences in thermal expansion of the stack's materials. The interfaces between the cell, interconnect, and sealing are particularly critical. Hence, understanding possible failure mechanisms at the interfaces and developing robust sealing concepts are important for stack reliability. In this work, the mechanical performance of interfaces in the sealing region of SOC stacks is studied. Joints comprising Crofer22APU (preoxidized or coated with MnCo2O4 or Al2O3) are sealed using V11 glass. The fracture energy of the joints is measured, and the fractured interfaces are analyzed using microscopy. The results show that choosing the right coating solution would increase the fracture energy of the sealing area by more than 70%. We demonstrate that the test methodology could also be used to test the adhesion of thin coatings on metallic substrates.
Interfacial Reaction Mechanism between Molten Ag-Cu-Based Active Brazing Alloys and Untreated or Pre-Oxidized PLS-SiC
J. López-Cuevas, J.C. Rendón-Angeles, J.L. Rodríguez-Galicia, C.A. Gutiérrez-Chavarría
Journal: MRS Advances / Volume 4 / Issue 57-58 / 2019
Based on wettability and reaction interfaces previously reported, as well as on thermodynamic considerations, a likely mechanism has been proposed for the chemical interaction taking place at the metal/ceramic interface during wettability experiments carried out by the so-called "sessile drop" method. The experiments involved three Ag-Cu-based brazing alloys [Cusil (Ag-28wt.%Cu), Cusil-ABA (Ag-34.6wt.%Cu-1.58wt.%Ti) and Incusil-ABA (Ag-26.6wt.%Cu-12.4wt.%In-0.89wt.%Ti)] and as-polished and pre-oxidized pressureless-sintered silicon carbide (PLS-SiC), with a total holding time of 90 minutes at 850 °C, under a Zr sponge-gettered vacuum of 10^-4/10^-5 Torr.
A Practical Procedure for Measuring Contact Angles in Wettability Studies by the Sessile Drop Method
J. López-Cuevas, M.I. Pech-Canul, J.L. Rodríguez-Galicia, J.C. Rendón-Angeles
Published online by Cambridge University Press: 07 October 2019, pp. 3143-3152
An old procedure used to carry out a graphical derivation of curves, which is based on the optical properties of plane mirrors, has been adapted for the measurement of the contact angle (θ) formed between a liquid drop and a flat solid substrate in wettability experiments carried out by the so-called "sessile drop" method. The method was tested for mercury on soda-lime glass at room temperature in air as well as for Cusil (Ag-28wt.%Cu) and Incusil-ABA (Ag-27wt.%Cu-12wt.%In-2wt.%Ti) brazing alloys on pressureless-sintered silicon carbide (PLS-SiC) at 850 °C, under a vacuum of 10^-4/10^-5 Torr. The proposed method is fast, simple and accurate enough from high (∼140°) to relatively low (∼10°) contact angles. Although the proposed method has been tested for metal-ceramic systems, it is of general application, so that it would be useful for any liquid-solid system. The method is applicable for any temperature, pressure and atmospheric experimental conditions employed, as well as for any chemical composition of liquid and solid. It is also useful for both low and high contact angles, as well as for reactive and non-reactive systems, as long as a photograph of a liquid drop resting on a flat solid surface is available for the studied system.
A review on friction-based joining of dissimilar aluminum–steel joints
Kush P. Mehta
Journal: Journal of Materials Research / Volume 34 / Issue 1 / 14 January 2019
Published online by Cambridge University Press: 17 October 2018, pp. 78-96
Print publication: 14 January 2019
This article showcases details on enumerative information of dissimilar aluminum (Al) to steel welds manufactured using different friction-based welding processes with an emphasis on the description of the manufacturing process, influence of parameters, microstructural variations, formation of intermetallic compounds (IMCs), and variations in mechanical properties. Friction-based welding processes such as friction welding, friction stir welding, hybrid friction stir welding, friction stir spot welding, friction stir spot fusion welding, friction stir scribe welding, friction stir brazing, friction melt bonding, friction stir dovetailing, friction bit joining, friction stir extrusion, and friction stir assisted diffusion welding are analyzed for the formation of dissimilar Al–steel joints. It can be summarized that friction-based joining processes have great potential to obtain sound Al–steel joints. The amount of frictional heat applied decides the type and volume fraction of IMCs that subsequently affects mechanical joint properties. Process variations and novel process parameters can enhance joint properties.
Experimental analysis and thermodynamic calculations of an additively manufactured functionally graded material of V to Invar 36
Lourdes D. Bobbio, Brandon Bocklund, Richard Otis, John Paul Borgonia, Robert Peter Dillon, Andrew A. Shapiro, Bryan McEnerney, Zi-Kui Liu, Allison M. Beese
Journal: Journal of Materials Research / Volume 33 / Issue 11 / 13 June 2018
Print publication: 13 June 2018
Functionally graded materials (FGMs) in which the elemental composition intentionally varies with position can be fabricated using directed energy deposition additive manufacturing (AM). This work examines an FGM that is linearly graded from V to Invar 36 (64 wt% Fe, 36 wt% Ni). This FGM cracked during fabrication, indicating the formation of detrimental phases. The microstructure, composition, phases, and microhardness of the gradient zone were analyzed experimentally. The phase composition as a function of chemistry was predicted through thermodynamic calculations. It was determined that a significant amount of the intermetallic σ-FeV phase formed within the gradient zone. When the σ phase constituted the majority phase, catastrophic cracking occurred. The approach presented illustrates the suitability of using equilibrium thermodynamic calculations for the prediction of phase formation in FGMs made by AM despite the nonequilibrium conditions in AM, providing a route for the computationally informed design of FGMs.
Investigation on the multi-pass gas tungsten arc welded Bi-metallic combination between nickel-based superalloy and Ti-stabilized austenitic stainless steel
Sumitra Sharma, Ravindra V. Taiwade, Himanshu Vashishtha
Journal: Journal of Materials Research / Volume 32 / Issue 16 / 28 August 2017
Published online by Cambridge University Press: 22 June 2017, pp. 3055-3065
Print publication: 28 August 2017
The present study addressed the weldability of a dissimilar combination of Hastelloy C-276 and Type 321 austenitic stainless steel (ASS) used for manufacturing high-temperature equipment in nuclear power plants. Investigating the microstructural evolution across the different welding passes, and its subsequent effect on mechanical properties and corrosion resistance, aids understanding and paves the way for wider industrial application of such dissimilar joints. The problem of segregation associated with the multi-pass gas tungsten arc welding process was also investigated systematically. The fusion zone microstructures exhibited a transition from a columnar to an equiaxed dendritic structure with varying passes. Topologically close-packed (TCP) phases (such as P and μ) were observed in the fusion zone as well as at the weld interface of Hastelloy C-276. A polarization test was performed to evaluate corrosion resistance; the results indicated that the Cr- and Mo-depleted zones formed around the TCP phases might be responsible for the decreased Epit value of the fusion zone. The novelty of this work is to explore the possibility of substituting the expensive Hastelloy C-276 with a cost-effective Ti-stabilized Type 321 ASS.
Novel Superconducting Joints for Persistent Mode Magnet Applications
Tayebeh Mousavi, William Darby, Canan Aksoy, Timothy Davies, Greg Brittles, Chris Grovenor, Susannah Speller
Journal: MRS Advances / Volume 1 / Issue 51 / 2016
Persistent current joints are a critical component of commercial superconducting magnets. The standard jointing method widely used in the magnet industry for technological low temperature superconducting wires such as NbTi and Nb3Sn wires uses a superconducting solder (e.g. PbBi). In these joints the physical and superconducting properties of the solder materials inevitably play an important role in the overall performance of the joint. Key requirements for superconducting solders include low melting point to prevent degradation of the superconducting filaments during joining, good wettability of the superconducting filaments, suitable liquid phase viscosity, and finally adequate superconducting properties to enable sufficient supercurrent to pass through the joint under typical operating conditions (typically at 4.2 K in a field of 1 T for an MRI magnet). PbBi solder satisfies all these criteria, but restrictions on the use of lead in the magnet industry are expected in the relatively near future, so new lead-free jointing techniques need to be developed.
One approach is the development of superconducting lead-free solder materials. In our work, we have focused on the In-Sn system and on ternary systems involving In and Sn as two of the elements. Thermodynamic modelling has been used to produce ternary phase diagrams of potential alloy systems, and various formulations have been fabricated in order to explore how microstructure and phase chemistry influence the superconducting properties of the solders. Alternative approaches to fabricating lead-free joints, including spot welding and cold-pressing, have also been investigated. These methods have the potential advantage of achieving direct NbTi-NbTi joints with no intermediate, lower-performance superconducting material. The spot welding method produced joints with the best superconducting performance, significantly better than the currently used PbBi solder, but the lack of reproducibility of this technique may be a problem from an industrial point of view.
Additive manufacturing of materials: Opportunities and challenges
S.S. Babu, L. Love, R. Dehoff, W. Peter, T.R. Watkins, S. Pannala
Journal: MRS Bulletin / Volume 40 / Issue 12 / December 2015
Published online by Cambridge University Press: 27 November 2015, pp. 1154-1161
Additive manufacturing (also known as 3D printing) is considered a disruptive technology for producing components with topologically optimized complex geometries as well as functionalities that are not achievable by traditional methods. The realization of the full potential of 3D printing is stifled by a lack of computational design tools, generic material feedstocks, techniques for monitoring thermomechanical processes under in situ conditions, and especially methods for minimizing anisotropic static and dynamic properties brought about by microstructural heterogeneity. This article discusses the role of interdisciplinary research involving robotics and automation, process control, multiscale characterization of microstructure and properties, and high-performance computational tools to address each of these challenges. Emerging pathways to scale up additive manufacturing of structural materials to large sizes (>1 m) and higher productivities (5–20 kg/h) while maintaining mechanical performance and geometrical flexibility are also discussed.
Three-dimensional integration: An industry perspective
Subramanian S. Iyer
Journal: MRS Bulletin / Volume 40 / Issue 3 / March 2015
The field of electronics packaging is undergoing a significant transition to accommodate the slowing down of lithographically driven semiconductor scaling. Three-dimensional (3D) integration is an important component of this transition and promises to revolutionize the way chips are assembled and interconnected in a subsystem. In this article, we develop the key attributes of 3D integration, the enablers and the challenges that need to be overcome before widespread acceptance by industry. While we are already seeing the proliferation of applications in the memory subsystem, the best is yet to come with the heterogeneous integration of a diverse set of technologies, the mixing of lithographic nodes and an economic argument for its implementation based on overall system function, and cost rather than a narrow component-based analysis. Finally, an extension to monolithic 3D integration promises even further benefits.
Microstructural Effects between AHSS Dissimilar Joints Using MIG and TIG Welding Process
G.Y. Pérez Medina, M. Padovani, M. Merlin, A.F. Miranda Pérez, F.A. Reyes Valdés
Journal: MRS Online Proceedings Library Archive / Volume 1766 / 2015
Published online by Cambridge University Press: 11 May 2015, pp. 29-35
Gas tungsten arc welding-tungsten inert gas (GTAW-TIG) is presented in the literature as an alternative for joining high strength low alloy steels; this study compares the gas metal arc welding-metal inert gas (GMAW-MIG) and GTAW welding processes. The aim of this study is to characterize the microstructure of dissimilar transformation induced plasticity (TRIP) and martensitic steel joints welded by the GMAW and GTAW processes. It was found that the GMAW process led to relatively high hardness in the HAZ of the TRIP steel, indicating that the resultant microstructure was martensite. In the fusion zone (FZ), a mixture of phases consisting of bainite, ferrite, and small areas of martensite was present. Similar phase mixtures were found in the FZ of the GTAW process. The presence of these phase mixtures did not cause mechanical degradation: when the GTAW samples were tested in lap shear tensile testing, fracture occurred in the heat affected zone. These results benefit lightweight design, since the autogenous process adds no filler weight while producing a high-quality bead with homogeneous mechanical properties and a ductile fracture-surface morphology. Scanning electron microscopy (SEM) of the specimens provided evidence of the ductile morphology.
The tensile and impact resistance properties of accumulative roll bonded Al6061 and AZ31 alloy plates
M. Ali Sarigecili, Hasan H. Saygili, Benat Kockar
Journal: Journal of Materials Research / Volume 29 / Issue 10 / 28 May 2014
Print publication: 28 May 2014
Al6061 and AZ31 plates were processed using the accumulative roll bonding (ARB) method for up to two passes to produce laminated composites. The sandwich stacks of Al6061/AZ31/Al6061 were held at 450 °C for 10 min in a cubical furnace and rolled together with a 50% reduction in one pass. Microstructural investigations were performed using optical and scanning electron microscopes. The interface structure and the mechanical and drop impact properties of the laminated composites after the first and second passes were investigated and compared with those of Al6061 and AZ31 alloy plates. It was found that Al6061 improved the elongation-to-failure of AZ31 after the first ARB pass and the drop impact properties of AZ31 after the first and second passes. However, the elongation to failure under uniaxial tensile loading decreased as the number of passes increased, owing to the formation of brittle intermetallics at the nonuniform Al6061/AZ31 interfaces.
Effect of the Heat Input in the Mechanical and Metallurgical Properties of Welds on AHSS Transformed Induced Plasticity Steel Joined with GMAW Processes in the Automotive Industry
Victor Lopez, Arturo Reyes, Patricia Zambrano
Published online by Cambridge University Press: 24 February 2014, imrc2013-s5c-o009
The effect of heat input on the mechanical and metallurgical properties of welds has been investigated in the heat affected zone (HAZ) of joints made by gas metal arc welding (GMAW), using normal production welding parameters. The thermal effect in the HAZ is important for optimizing the welding parameters used when welding transformation induced plasticity (TRIP) steels, because it strongly influences the mechanical and metallurgical properties of the weld. In this work, three samples were welded at high, average, and low heat inputs, varying the welding parameters to obtain different levels of thermal affectation and to investigate the variations in the different parts of the joint (weld, HAZ, and base metal) caused by the heat applied during the welding process. Mechanical properties were evaluated by tension, microhardness, and fatigue testing; metallurgical evaluation employed optical metallography, scanning electron microscopy (SEM), fractography, and X-ray diffraction (XRD). The results show that the tensile properties decrease as the heat input increases and that the microhardness exhibits a softening zone of lower hardness in the HAZ; the fatigue life was similar for all heat inputs at high stress levels, with differences appearing only at low stress. Metallographic evaluation shows ferrite, bainite-martensite, and retained austenite; the fractography analysis exhibits ductile fracture in all cases; and the volume fraction of retained austenite in the HAZ of the welds increases with increasing heat input into the base metal due to the thermal effect.
Development of iron-base composite materials with high thermal conductivity for DEMO
H. Homma, N. Hashimoto, S. Ohnuki
Published online by Cambridge University Press: 14 January 2014, mrsf13-1645-ee06-03
One of the critical issues for development of the nuclear fusion demonstration reactor (DEMO) is the high heat flux on heat-resistant equipment, especially the blanket and divertor. Materials for such equipment require relatively high thermal conductivities. In this study, we developed iron-based composite materials with carbon nanotubes (CNTs) and copper, which have high thermal diffusivities, by means of hot pressing (HP) and spark plasma sintering (SPS).
The thermal diffusivity of the iron/CNT composites was not high enough compared with that of pure iron, while the iron/copper composite showed a relatively high thermal diffusivity under the joining conditions. One reason the thermal diffusivity was not improved could be that the CNTs were not mono-dispersed, owing to the formation of carbides in the matrix.
An investigation of metallurgical bonding in Al–7Si/gray iron bimetal composites
Yang Liu, Xiufang Bian, Jianfei Yang, Kai Zhang, Le Feng, Chuncheng Yang
Journal: Journal of Materials Research / Volume 28 / Issue 22 / 28 November 2013
Print publication: 28 November 2013
Al–7Si/gray iron bimetal composites with sound metallurgical bonding were obtained by a gravity die casting process. Surface treatments of the gray iron specimens, including fluxing and hot dipping, were applied to form a complete metallurgical bonding layer at the Al–7Si/gray iron interface. In addition, the effect of Mn in the dipping bath on the microstructure of the Al–7Si/gray iron interfacial bond zone was studied in Al–7Si alloys containing five different levels of Mn, ranging from 0 to 5 wt%. Microstructure analysis indicates that the addition of Mn to the dipping bath can eliminate the harmful needle-like phase (β-Al5FeSi) when the Mn content is no less than 1.5 wt%, and it also plays an important role in facilitating the growth of the intermetallic phases [α-Al15(FexMn1−x)3Si2] and the metallurgical bonding layer. The sound metallurgical bonding formed at the Al–7Si/gray iron interface is attributed to the combined effect of the surface treatments and the selection of the Mn content.
A novel polymer technology for underfill
Osamu Suzuki, Toshiyuki Sato, Paul Czubarow, David Son
Published online by Cambridge University Press: 30 July 2012, mrss12-1428-c07-01
Capillary type underfill is still the mainstream underfill for mass production flip chip applications. Flip chip packages are migrating to ultra low-k, Pb-free, 3D and fine pitch packages. Underfill selection is becoming more critical. This paper discusses the performance and potential of underfills using a novel organic-inorganic hybrid polymer technology.
Compared to eutectic and high-lead solders, tin-silver-copper solder has a lower CTE, higher elasticity, and greater brittleness. In light of these properties, it is generally better to select a high-Tg, lower-CTE underfill to prevent bump fatigue during reliability testing. Given the brittleness of the low-k dielectric layers of flip chips, the destruction of low-k layers by stress inside flip chip packages has become a major issue. Underfills for low-k packages should have low stress, and the warpage should be small. As the low-k trend expands, underfills will be required to impose even less stress. Low-Tg underfills show lower warpage. New chemical technologies, specifically organic-inorganic hybrid polymer compounds, have been developed to address the needs of underfills for low-k/Pb-free flip chip packages. The organic-inorganic hybrid polymer provides excellent cure properties that enable a balanced combination of low stress and good bump protection. The material properties of the underfill were characterized using differential scanning calorimetry (DSC), thermo-mechanical analysis (TMA), and dynamic mechanical analysis (DMA). A daisy-chained test vehicle was used for reliability testing. A detailed study is presented on the underfill properties and reliability data, as well as finite element modeling results.
Precipitates formation and its impact in friction stir welded and post-heat-treated Inconel 718 alloy
Kuk Hyun Song, Han Sol Kim, Won Yong Kim
Published online by Cambridge University Press: 23 August 2011, mrss11-1363-rr05-17
This work investigated the formation of precipitates, such as MC carbides and intermetallic compounds, in friction stir welded and post-heat-treated Inconel 718 alloy. Furthermore, the microstructural and mechanical properties of the welds and the post-heat-treated material were evaluated to identify the effect of the precipitates formed during post-heat-treatment. Friction stir welding (FSW) was performed at a rotation speed of 200 rpm and a welding speed of 150 mm/min; heat treatment was performed after welding at 720 °C for 8 hours in vacuum. As a result, the grains were notably refined by FSW, from 5–20 μm in the base material to 1–3 μm in the stir zone, accompanied by dynamic recrystallization, which enhanced the mechanical properties relative to the base material. In particular, applying heat treatment after FSW further improved the mechanical properties of the welds: the microhardness and tensile strength increased by more than 50% and 40%, respectively, compared to FSW alone.
Updates and Lessons from AI Forecasting
Aug 18, 2021
Earlier this year, my research group commissioned 6 questions for professional forecasters to predict about AI. Broadly speaking, 2 were on geopolitical aspects of AI and 4 were on future capabilities:
Geopolitical:
How much larger or smaller will the largest Chinese ML experiment be compared to the largest U.S. ML experiment, as measured by amount of compute used?
How much computing power will have been used by the largest non-incumbent (OpenAI, Google, DeepMind, FB, Microsoft), non-Chinese organization?
Future capabilities:
What will SOTA (state-of-the-art accuracy) be on the MATH dataset?
What will SOTA be on the Massive Multitask dataset (a broad measure of specialized subject knowledge, based on high school, college, and professional exams)?
What will be the best adversarially robust accuracy on CIFAR-10?
What will SOTA be on Something Something v2? (A video recognition dataset)
Forecasters output a probability distribution over outcomes for 2022, 2023, 2024, and 2025. They have financial incentives to produce accurate forecasts; the rewards total $5k per question ($30k total) and payoffs are (close to) a proper scoring rule, meaning forecasters are rewarded for outputting calibrated probabilities.
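As a concrete illustration of what "proper" means here, the quadratic (Brier) score is one standard proper scoring rule; the sketch below is generic and is not Hypermind's actual payout formula. Under a proper rule, reporting your true beliefs maximizes your expected reward:

```python
import numpy as np

def brier_score(probs: np.ndarray, outcome: int) -> float:
    """Quadratic (Brier) score: lower is better. Because it is a proper
    scoring rule, the forecast minimizing expected score is your true belief."""
    onehot = np.zeros(len(probs))
    onehot[outcome] = 1.0
    return float(np.sum((probs - onehot) ** 2))

# If outcomes truly occur with probability 0.7, the calibrated forecast
# (0.7, 0.3) beats both a hedged and an overconfident report in expectation.
p_true = np.array([0.7, 0.3])
for guess in [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([1.0, 0.0])]:
    expected = sum(p_true[o] * brier_score(guess, o) for o in range(2))
    print(guess, round(expected, 2))  # 0.42, 0.5, 0.6
```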
Depending on who you are, you might have any of several questions:
What the heck is a professional forecaster?
Has this sort of thing been done before?
What do the forecasts say?
Why did we choose these questions?
What lessons did we learn?
You're in luck, because I'm going to answer each of these in the following sections! Feel free to skim to the ones that interest you the most.
And before going into detail, here were my biggest takeaways from doing this:
Projected progress on math and on broad specialized knowledge are both faster than I would have expected. I now expect more progress in AI over the next 4 years than I did previously.
The relative dominance of the U.S. vs. China is uncertain to an unsettling degree. Forecasters are close to 50-50 on who will have more compute directed towards AI, although they do at least expect it to be within a factor of 10 either way.
It's difficult to come up with forecasts that reliably track what you intuitively care about. Organizations might stop reporting compute estimates for competitive reasons, which would confound both of the geopolitical metrics. They might similarly stop publishing the SOTA performance of their best models, or do it on a lag, which could confound the other metrics as well. I discuss these and other issues in the "Lessons learned" section.
Professional forecasting seems really valuable and underincentivized. (On that note, I'm interested in hiring forecasting consultants for my lab--please e-mail me if you're interested!)
Acknowledgments. The particular questions were designed by my students Alex Wei, Collin Burns, Jean-Stanislas Denain, and Dan Hendrycks. Open Philanthropy provided the funding for the forecasts, and Hypermind ran the forecasting competition and constructed the aggregate summaries that you see below. Several people provided useful feedback on this post, especially Luke Muehlhauser and Emile Servan-Schreiber.
What is a professional forecaster? Has this been done before?
Professional forecasters are individuals, or often teams, who make money by placing accurate predictions in prediction markets or forecasting competitions. A good popular treatment of this is Philip Tetlock's book Superforecasting, but the basic idea is that there are a number of general tools and skills that can improve prediction ability and forecasters who practice these usually outperform even domain experts (though most strong forecasters have some technical background and will often read up on the domain they are predicting in). Historically, many forecasts were about geopolitical events (perhaps reflecting government funding interest), but there have been recent forecasting competitions about Covid-19 and the future of food, among others.
At this point, you might be skeptical. Isn't predicting the future really hard, and basically impossible? An important thing to realize here is that forecasters usually output probabilities over outcomes, rather than a single number. So while I probably can't tell you what US GDP will be in 2025, I can give you a probability distribution. I'm personally pretty confident it will be more than $700 billion and less than $700 trillion (it's currently $21 trillion), although a professional forecaster would do much better than that.
There are a couple other important points here. The first is that forecasters' probability distributions are often significantly wider than the sorts of things you'd see pundits on TV say (if they even bother to venture a range rather than a single number). This reflects the future actually being quite uncertain, but even a wide range can be informative, and sometimes I see forecasted ranges that are a lot narrower than I expected.
The other point is that most forecasts are for at most a year or two into the future. Recently there have been some experimental attempts to forecast out to 2030, but I'm not sure we can say yet how successful they were. Our own forecasts go out to 2025, so we aren't as ambitious as the 2030 experiments, but we're still avant-garde compared to the traditional 1-2 year window. If you're interested in what we currently know about the feasibility of long-range forecasting, I recommend this detailed blog post by Luke Muehlhauser.
So, to summarize, a professional forecaster is someone who is paid to make accurate probabilistic forecasts about the future. Relative to pundits, they express significantly more uncertainty. The moniker "professional" might be a misnomer, since most income comes from prizes and I'd guess that most forecasters have a day job that produces most of their income. I'd personally love to live in a world with truly professional forecasters who could fully specialize in this important skill.
Other forecasting competitions. Broadly, there are all sorts of forecasting competitions, often hosted on Hypermind, Metaculus, or Good Judgment. There are also prediction markets (e.g. PredictIt), which are a bit different but also incentivize accurate predictions. Specifically on AI, Metaculus had a recent AI prediction tournament, and Hypermind ran the same questions on their own platform (AI2023, AI2030). I'll discuss below how some of our questions relate to the AI2023 tournament in particular.
What the forecasts say
Here are the point estimate forecasts put together into a single chart (expert-level is approximated as ~90%):
The MATH and Multitask results were the most interesting to me, as they predict rapid progress starting from a low present-day baseline. I'll discuss these in detail in the following subsections, and then summarize the other tasks and forecasts.
To get a sense of the uncertainty spread, I've also included aggregate results below (for 2025) on each of the 6 questions; you can find the results for other years here. The aggregate combines all crowd forecasts but places higher weight on forecasters with a good track record.
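To illustrate the flavor of such an aggregate, here is a minimal track-record-weighted pooling sketch; the actual weighting scheme Hypermind uses is not described here, so treat the weights and layout as hypothetical:

```python
import numpy as np

def weighted_aggregate(forecasts: np.ndarray, skill: np.ndarray) -> np.ndarray:
    """Linear opinion pool: each row of `forecasts` is one forecaster's
    probability distribution over outcomes; `skill` holds nonnegative
    track-record weights. The result is still a probability distribution."""
    w = skill / skill.sum()
    return w @ forecasts

forecasts = np.array([[0.2, 0.5, 0.3],   # forecaster A
                      [0.1, 0.6, 0.3],   # forecaster B
                      [0.4, 0.4, 0.2]])  # forecaster C
skill = np.array([2.0, 3.0, 1.0])        # hypothetical track records
print(weighted_aggregate(forecasts, skill))  # -> [0.183 0.533 0.283] (approx.)
```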
MATH

The MATH dataset consists of competition math problems for high school students. A Berkeley PhD student got in the ~75% range, while an IMO gold medalist got ~90%, but probably would have gotten 100% without arithmetic errors. The questions are free-response and not multiple-choice, and can contain answers such as $\frac{1 + \sqrt{2}}{2}$.
Current performance on this dataset is quite low--6.9%--and I expected this task to be quite hard for ML models in the near future. However, forecasters predict more than 50% accuracy* by 2025! This was a big update for me. (*More specifically, their median estimate is 52%; the confidence range is ~40% to 60%, but this is potentially artificially narrow due to some restrictions on how forecasts could be input into the platform.)
To get some flavor, here are 5 randomly selected problems from the "Counting and Probability" category of the benchmark:
5 white balls and $k$ black balls are placed into a bin. Two of the balls are drawn at random. The probability that one of the drawn balls is white and the other is black is $\frac{10}{21}$. Find the smallest possible value of $k$.
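For a sense of the difficulty, here is a worked solution (added for illustration; it is not part of the dataset): one white and one black ball can be drawn in $5k$ ways out of $\binom{5+k}{2}$ equally likely pairs, so

$$\frac{10k}{(5+k)(4+k)} = \frac{10}{21} \implies k^2 + 9k + 20 = 21k \implies (k-2)(k-10) = 0,$$

giving $k = 2$ or $k = 10$; the smallest possible value is $k = 2$.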
Here are 5 randomly selected problems from the "Intermediate Algebra" category (I skipped one that involved a diagram):
Suppose that $x$, $y$, and $z$ satisfy the equations $xyz = 4$, $x^3 + y^3 + z^3 = 4$, $xy^2 + x^2 y + xz^2 + x^2 z + yz^2 + y^2 z = 12$. Calculate the value of $xy + yz + zx$.
If $|z| = 1$, express $\overline{z}$ as a simplified fraction in terms of $z$.
In the coordinate plane, the graph of $|x + y - 1| + \big||x| - x\big| + \big||x - 1| + x - 1\big| = 0$ is a certain curve. Find the length of this curve.
Let $\alpha$, $\beta$, $\gamma$, and $\delta$ be the roots of $x^4 + kx^2 + 90x - 2009 = 0$. If $\alpha \beta = 49$, find $k$.
Let $\tau = \frac{1 + \sqrt{5}}{2}$, the golden ratio. Then $\frac{1}{\tau} + \frac{1}{\tau^2} + \frac{1}{\tau^3} + \dotsb = \tau^n$ for some integer $n$. Find $n$.
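The last problem above has a short solution (again added for illustration, not part of the dataset): since $\tau^2 = \tau + 1$, we have $\tau(\tau - 1) = 1$, i.e., $1/\tau = \tau - 1$, so the geometric series sums to

$$\frac{1/\tau}{1 - 1/\tau} = \frac{1}{\tau - 1} = \tau = \tau^1,$$

hence $n = 1$.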
You can see all of the questions at this git repo.
If I imagine an ML system getting more than half of these questions right, I would be pretty impressed. If they got 80% right, I would be super-impressed. The forecasts themselves predict accelerating progress through 2025 (21% in 2023, then 31% in 2024 and 52% in 2025), so 80% by 2028 or so is consistent with the predicted trend. This still just seems wild to me and I'm really curious how the forecasters are reasoning about this.
Multitask
The Massive Multitask dataset also consists of exam questions, but this time they are a range of high school, college, and professional exams on 57 different subjects, and these are multiple choice (4 answer choices total). Here are five example questions:
(Jurisprudence) Which position does Rawls claim is the least likely to be adopted by the POP (people in the original position)?
(A) The POP would choose equality above liberty.
(B) The POP would opt for the 'maximin' strategy.
(C) The POP would opt for the 'difference principle.'
(D) The POP would reject the 'system of natural liberty.'
(Philosophy) According to Moore's "ideal utilitarianism," the right action is the one that brings about the greatest amount of:
(A) pleasure. (B) happiness. (C) good. (D) virtue.
(College Medicine) In a genetic test of a newborn, a rare genetic disorder is found that has X-linked recessive transmission. Which of the following statements is likely true regarding the pedigree of this disorder?
(A) All descendants on the maternal side will have the disorder.
(B) Females will be approximately twice as affected as males in this family.
(C) All daughters of an affected male will be affected.
(D) There will be equal distribution of males and females affected.
(Conceptual Physics) A model airplane flies slower when flying into the wind and faster with wind at its back. When launched at right angles to the wind, a cross wind, its groundspeed compared with flying in still air is
(A) the same (B) greater (C) less (D) either greater or less depending on wind speed
(High School Statistics) Jonathan obtained a score of 80 on a statistics exam, placing him at the 90th percentile. Suppose five points are added to everyone's score. Jonathan's new score will be at the
(A) 80th percentile.
(B) 85th percentile.
(C) 90th percentile.
(D) 95th percentile.
Compared to MATH, these involve significantly less reasoning but more world knowledge. I don't know the answers to these questions (except the last one), but I think I could figure them out with access to Google. In that sense, it would be less mind-blowing if an ML system did well on this task, although it would be accomplishing an intellectual feat that I'd guess very few humans could accomplish unaided.
The actual forecast is that ML systems will be around 75% on this by 2025 (range is roughly 70-85, with some right-tailed uncertainty). I don't find this as impressive/wild as the MATH forecast, but it's still pretty impressive.
My overall take from this task and the previous one is that forecasters are pretty confident that we won't have the singularity before 2025, but at the same time there will be demonstrated progress in ML that I would expect to convince a significant fraction of skeptics (in the sense that it will look untenable to hold positions that "Deep learning can't do X").
Finally, to give an example of some of the harder types of questions (albeit not randomly selected), here are two from Professional Law and College Physics:
(College Physics) One end of a Nichrome wire of length 2L and cross-sectional area A is attached to an end of another Nichrome wire of length L and cross-sectional area 2A. If the free end of the longer wire is at an electric potential of 8.0 volts, and the free end of the shorter wire is at an electric potential of 1.0 volt, the potential at the junction of the two wires is most nearly equal to
(A) 2.4 V (B) 3.3 V (C) 4.5 V (D) 5.7 V
(Professional Law) The night before his bar examination, the examinee's next-door neighbor was having a party. The music from the neighbor's home was so loud that the examinee couldn't fall asleep. The examinee called the neighbor and asked her to please keep the noise down. The neighbor then abruptly hung up. Angered, the examinee went into his closet and got a gun. He went outside and fired a bullet through the neighbor's living room window. Not intending to shoot anyone, the examinee fired his gun at such an angle that the bullet would hit the ceiling. He merely wanted to cause some damage to the neighbor's home to relieve his angry rage. The bullet, however, ricocheted off the ceiling and struck a partygoer in the back, killing him. The jurisdiction makes it a misdemeanor to discharge a firearm in public. The examinee will most likely be found guilty for which of the following crimes in connection to the death of the partygoer?
(A) Murder (B) Involuntary manslaughter (C) Voluntary manslaughter (D) Discharge of a firearm in public
You can view all the questions at this git repo.
The other four questions weren't quite as surprising, so I'll go through them more quickly.
SOTA robustness: The forecasts expect consistent progress at ~7% per year. In retrospect this one was probably not too hard to get just from trend extrapolation. (SOTA was 44% in 2018 and 66% in 2021, with smooth-ish progress in-between.)
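In that spirit, the sketch below shows the naive linear extrapolation from the two endpoints quoted above; it illustrates the trend-extrapolation point, not the forecasters' actual method, and the line must of course flatten before reaching 100%:

```python
import numpy as np

# SOTA adversarially robust accuracy on CIFAR-10, as quoted in the text
years = np.array([2018, 2021])
acc = np.array([44.0, 66.0])

slope, intercept = np.polyfit(years, acc, 1)  # slope ~7.3 points/year
for year in range(2022, 2026):
    print(year, round(slope * year + intercept, 1))  # 73.3, 80.7, 88.0, 95.3
```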
US vs. China: Forecasters have significant uncertainty in both directions, skewed towards the US being ahead in the next 2 years and China after that (seemingly mainly due to heavier-tailed uncertainty), but either one could be ahead and up to 10x the other. One challenge in interpreting this is that either country might stop publishing compute results if they view it as a competitive advantage in national security (or individual companies might do the same for competitive reasons).
Incumbents vs. rest of field: forecasters expect newcomers to increase size by ~10x per year for the next 4 years, with a central estimate of 21 EF-days in 2023. Note the AI2023 results predict the largest experiment by anyone (not just newcomers) to be 261 EF-days in 2023, so this expects newcomers to be ~10x behind the incumbents, but only 1 year behind. This is also an example where forecasters have significant uncertainty--newcomers in 2023 could easily be in single-digit EF-days, or at 75 EF-days. In retrospect I wish I had included Anthropic on the list, as they are a new "big-compute" org that could be driving some fraction of the results, and who I wouldn't have intended to count as a newcomer (since they already exist).
Video understanding: Forecasters expect us to hit 88% accuracy (range: ~82%-95%) in 2025. In addition, they expect accuracy to increase at roughly 5%/year (though this presumably has to level off soon after 2025). This is faster than ImageNet, which has only been increasing at roughly 2%/year. In retrospect this was an "easy" prediction in the sense that accuracy has increased by 14% from Jan'18 to Jan'21 (close to 5%/year), but it is also "bold" in the sense that progress since Jan'19 has been minimal. (Apparently forecasters are more inclined to average over the longest available time window.) In terms of implications, video recognition is one of the last remaining "instinctive" modalities that humans are very good at, other than physical tasks (grasping, locomotion, etc.). It looks like we'll be pretty good at a "basic" version of it by 2025, for a task that I'd intuitively rate as less complex than ImageNet but about as complex as CIFAR-100. Based on vision and language I expect an additional 4-5 years to master the "full" version of the task, so expect ML to have mostly mastered video by 2030. As before, this simultaneously argues against "the singularity is near" but for "surprisingly fast, highly impactful progress".
Why we chose these questions
We liked the AI2023 questions (the previous prediction contest), but felt there were a couple categories that were missing. One was geopolitical (the first 2 questions), but the other one was benchmarks that would be highly informative about progress. The AI2023 challenge includes forecasts about a number of benchmarks, e.g. Pascal, Cityscape, few-shot on Mini-ImageNet, etc. But there aren't ones where, if you told me we'd have a ton of progress on them by 2025, it would update my model of the world significantly. This is because the tasks included in AI2023 are mostly in the regime where NNs do reasonably well and I expect gradual progress to continue. (I would have been surprised by the few-shot Mini-ImageNet numbers 3 years ago, but not since GPT-3 showed that few-shot works well at scale).
It's not so surprising that the AI2023 benchmarks were primarily ones that ML already does well on, because most ML benchmarks are created to be plausibly tractable. To enable more interesting forecasts, we created our own "hard" benchmarks where significant progress would be surprising. This was the motivation behind the MATH and Multitask datasets (we created both of these ourselves). As mentioned, I was pretty surprised by how optimistic forecasters were on both tasks, which updated me downward a bit on the task difficulty but also upward on how much progress we should expect in the next 4 years.
The other two benchmarks already existed but were carefully chosen. Robust accuracy on CIFAR was based on the premise that adversarial robustness is really hard and we haven't seen much progress--perhaps it's a particularly difficult challenge, which would be worrying if we care about the safety of AI systems. Forecasters instead predicted steady progress, but in retrospect I could have seen this myself. Even though adversarial robustness "feels" hard (perhaps because I work on it and spend a lot of time trying to make it work better), the actual year-on-year numbers showed a pretty clear 7%/year improvement.
The last task, video recognition, is an area that not many people work in currently, as it seems challenging compared to images (perhaps due to hardware constraints). But it sounds like we should expect steady progress on it in the coming years.
Lessons learned

It can sometimes be surprisingly difficult to formalize questions that track an intuitive quantity you care about.
For instance, we initially wanted to include questions about the economic impacts of AI, but were unable to. We wanted to ask, for example, "How much private vs. public investment will there be in AI?" But this runs into the question of what counts as investment--do we count something like applying data science to agriculture? If you look at most metrics that you'd hope track this quantity, they include all sorts of weird things like that, and the weird things probably dominate the metric. We ran into similar issues for indicators of AI-based automation--e.g., do industrial robots on assembly lines count, even if they don't use much AI? For many economic variables, short-term effects may also distort results (investment might drop because of a pandemic or other shock).
There were other cases where we did construct a question, but had to be careful about framing. We initially considered using parameters rather than compute for the two geopolitical questions, but it's possible to achieve really high parameter counts in silly ways and some organizations might even do so for publicity (indeed we think this is already happening to some extent). Compute is harder to fake in the same way.
As discussed above, secrecy could cloud many of the metrics we used. Some organizations might not publish compute numbers for competitive reasons, and the same could be true of SOTA results on leaderboards. This is more likely if AI heats up significantly, so unfortunately I expect forecasts to be least reliable when we need them most. We could potentially get around this issue by interrogating forecasters' actual reasoning, rather than just the final output.
I also came to appreciate the value of doing lots of legwork to create a good forecasting target. The MATH dataset obviously was a lot of work to assemble, but I'm really glad we did because it created the single biggest update for me. I think future forecasting efforts should more strongly consider this lever.
Finally, even while often expressing significant uncertainty, forecasters can make bold predictions. I'm still surprised that forecasters predicted 52% on MATH, when current accuracy is 7% (!). My estimate would have had high uncertainty, but I'm not sure the top end of my range would have included 50%. I assume the forecasters are right and not me, but I'm really curious how they got their numbers.
Because of the possibility of such surprising results, forecasting seems really valuable. I hope that there's significant future investment in this area. Every organization that's serious about the future should have a resident or consultant forecaster. I am putting my money where my mouth is and currently hiring forecasting consultants for my research group; please e-mail me if this sounds interesting to you.
npj Climate and Atmospheric Science
Comparing deuterium excess to large-scale precipitation recycling models in the tropics
Stephen Cropper1,
Kurt Solander1,
Brent D. Newman1,
Obbe A. Tuinenburg2,
Arie Staal ORCID: orcid.org/0000-0001-5409-14362,
Jolanda J. E. Theeuwen ORCID: orcid.org/0000-0002-7505-02942,3 &
Chonggang Xu1
npj Climate and Atmospheric Science volume 4, Article number: 60 (2021)
Precipitation recycling is essential to sustaining regional ecosystems and water supplies, and it is impacted by land development and climate change. This is especially true in the tropics, where dense vegetation greatly influences recycling. Unfortunately, large-scale models of recycling often exhibit high uncertainty, complicating efforts to estimate recycling. Here, we examine how deuterium excess (d-excess), a stable-isotope quantity sensitive to recycling effects, acts as an observational proxy for recycling. While past studies have connected variability in d-excess to precipitation origins at local or regional scales, our study leverages >3000 precipitation isotope samples to quantitatively compare d-excess against three contemporary recycling models across the global tropics. Using rank-correlation, we find statistically significant agreement (\(\bar \tau = 0.52\) to \(0.70\)) between tropical d-excess and recycling that is strongly mediated by seasonal precipitation, vegetation density, and scale mismatch. Our results detail the complex relationship between d-excess and precipitation recycling, suggesting avenues for further investigation.
Growing water scarcity, changing terrestrial hydrologic cycles, and their impacts on ecological function have generated considerable interest in the origins of precipitation1,2. Such interest is particularly strong for the tropics, where local (recycled) water significantly impacts precipitation. For instance, it is estimated that 25–35% of rainfall in the Amazon basin, and >50% in the Congo basin, comes from nearby land evapotranspiration3,4. These levels of recycling are caused not only by the high evapotranspiration rates that result from tropical forests, but also are influenced by regional atmospheric circulation patterns and topography5. Climate change and deforestation are linked to rainfall reductions over the Asia monsoon region and the Amazon6,7,8, and the tropical climate has changed dramatically over time9,10,11. Accordingly, as policymakers seek to forecast water availability, accurate recycling models are crucial for informed decision making.
Precipitation recycling refers to the mechanism by which evapotranspiration from some pre-defined source is later re-precipitated within the very same source region12. Despite its importance, precipitation recycling is challenging to measure directly. While precipitation stable isotopes have been a recent focus of efforts to measure recycling because they exist as natural tracers in precipitation samples, they typically are not collected for global studies and can be influenced by many environmental effects that may obscure the relationship between isotope measurements and recycling13.
In response, recent research efforts have focused on developing models of precipitation recycling which are not subject to these drawbacks. The Mass Balance model, which focuses on the moisture fluxes in and out of a pre-defined region, is one such example. The precipitation recycling metric produced by the Mass Balance model is the regional recycling ratio (RRR), defined as the fraction of locally sourced precipitation occurring over a pre-defined region. Originally developed to study the hydrologic cycle of Russian river basins12, the RRR has since been applied worldwide to characterize recycling phenomena from several large river basins including the Amazon and the Mississippi14,15. While the RRR is computationally efficient, it fails to consider short-timescale processes critical to moisture tracking, such as diurnal timescales16. Moreover, the RRR is influenced by both region size and shape14, making irregular boundaries such as coastlines or political borders particularly challenging to evaluate. Responding to these shortcomings, modelers developed alternative methods to instead compute precipitation recycling by tracking moisture teleconnections16,17,18. In their estimates, these particle-tracking models consider moisture originating from any land evaporation source, not just from within a pre-defined region. In addition, they are forced using reanalysis data at a sub-daily resolution, thus capturing diurnal patterns. However, these reanalysis products, which resolve precipitation, evaporation, and wind fields, are subject to any biases that exist in such data19,20. The resulting computed entity in these models is known as the Land Recycling Ratio (LRR), which is the percentage of precipitation in each region sourced from land evaporation.
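For intuition about what the Mass Balance approach computes, a commonly used simplified form of the regional recycling ratio under a well-mixed atmosphere is

$$\mathrm{RRR} = \frac{P_{\mathrm{local}}}{P_{\mathrm{total}}} \approx \frac{E\,A}{E\,A + F_{\mathrm{in}}},$$

where \(E\) is the mean evapotranspiration rate over the region, \(A\) is its area, and \(F_{\mathrm{in}}\) is the horizontal moisture flux entering through its boundary. This bulk expression is shown only for orientation (the Mass Balance model evaluated here uses a full gridded formulation), but its explicit dependence on \(A\) makes the scale sensitivity of the RRR noted above plain.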
Given the growing diversity in approaches to characterizing precipitation recycling, a means by which to constrain and evaluate recycling models has become increasingly desirable, especially since precipitation recycling is difficult to measure at global scales21,22,23. A leading candidate to provide this observational value is a precipitation isotope quantity known as deuterium excess (d-excess)24, computed using the following equation:
$$\text{d-excess}\ (‰) = \delta^{2}H - 8 \cdot \delta^{18}O$$
where \(\delta^{2}H\) and \(\delta^{18}O\) are the deuterium and oxygen-18 contents measured from a water sample24. Precipitation d-excess is known to correlate with land recycling because of kinetic fractionation that modifies isotopic signatures after evaporation and condensation25,26,27,28. However, obstacles limit the direct application of d-excess to constraining large-scale modeled recycling estimates. For instance, ground-based precipitation isotope observations have historically been sparse29. Moreover, non-recycling influences on d-excess, such as sub-cloud evaporation (the re-evaporation of falling precipitation)21, may influence the degree of model-observation agreement.
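The definition is trivial to compute; a one-line translation of the equation above into code (the example values are illustrative only, not measurements from this study):

```python
def d_excess(delta_2H: float, delta_18O: float) -> float:
    """Deuterium excess (per mil), following the definition above:
    d-excess = delta2H - 8 * delta18O."""
    return delta_2H - 8.0 * delta_18O

# Illustrative rain sample: delta2H = -20 permil, delta18O = -3.5 permil
print(d_excess(-20.0, -3.5))  # -> 8.0 permil
```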
Even while facing these barriers, some case studies have succeeded in relating recycling phenomena to seasonal d-excess dynamics30,31,32. It is in the context of these encouraging results that in this investigation, we leverage extensive tropical isotope data to attempt a more spatially comprehensive analysis by quantitatively measuring spatio-temporal agreement between models of precipitation recycling and observations of d-excess across the entire tropics. We also explore the effects of observation frequency, vegetation density, and seasonal precipitation patterns on our results. We hypothesize that (1) recycling ratios generated from a representative model should correlate with the spatial and temporal variations in d-excess (H1), given the sensitivity of d-excess to the phase changes that occur during precipitation recycling; (2) the particle tracking models will better predict d-excess by capturing a more representative set of land evaporation sources (H2) because unlike the Mass Balance method (RRR), the particle tracking approach to computing the recycling ratio (LRR) theoretically accounts for all global sources of land evaporation; and (3) increased spatial coverage of d-excess sampling data will improve model-observation agreement (H3), as the mismatch in scales between point-like d-excess observations and large-scale recycling model estimates is resolved by averaging over larger datasets.
Spatiotemporal intercomparison of recycling models
Before introducing d-excess as an observational proxy for recycling, we first compared the three precipitation recycling models considered in this study, Mass Balance (RRR)14, WAM-2layers (LRR)17, and UTrack (LRR)18, across the Köppen-Geiger (KG) climate subzones that occur in the tropics33. The mean recycling ratios for each model were first plotted side by side, showing that recycling ratios are generally higher for LRR than RRR (Fig. 1). We anticipated this pattern since the LRR is defined to account for all land evaporation, while the RRR accounts for land evaporation solely within a bounded region. Furthermore, we evaluated spatio-temporal agreement among different models using Kendall's rank correlation coefficient (\(\tau = 1\) is a perfect monotonic correlation; \(\tau = - 1\) is a perfect anti-correlation), which is helpful in cases where a linear relationship is not guaranteed34. Model agreement between particle tracking and Mass Balance models was modest (WAM-2layers \(\bar \tau = 0.37\), Fig. 2a; UTrack \(\bar \tau = 0.27\), Fig. 2b), although results showed higher levels of spatial agreement between WAM-2layers and UTrack particle tracking models (\(\bar \tau = 0.67\), Fig. 2c). We largely expected this behavior, since the two models computing the LRR should agree more often with each other than with model predictions of the RRR. All models demonstrated high time-series correlation across most of East Africa but showed differences in West Africa, where the particle tracking models displayed anti-correlated behavior against the Mass Balance model.
Fig. 1: Mean annual recycling ratios.
The mean annual recycling ratio is visualized from 2002 to 2018 for UTrack (a), WAM-2layers (b), and Mass Balance (c) within the tropical Köppen–Geiger climate zones. Identical color scales are used for each plot, but the range of the Mass Balance color gradient is reduced from 0–1 to 0–0.05 due to the generally smaller values taken on by the regional recycling ratio (RRR) in comparison to the land recycling ratios (LRRs).
Fig. 2: Model intercomparison of moisture recycling estimates.
The scatter plots shown above represent the grid-by-grid comparison of mean annual estimates of the recycling ratio for the entire 2002–2018 analysis period, where the black dashed line indicates a 1:1 correspondence. The right panel includes two plots per model comparison: the difference in mean annual estimates for the two models (top) and corresponding rank-based time-series cross-correlation between model estimates (bottom). Panel (a) shows WAM-2layers vs. Mass Balance, panel (b) shows UTrack vs. Mass Balance, and panel (c) shows UTrack vs. WAM-2layers.
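The rank-correlation metric used throughout these comparisons is straightforward to reproduce; here is a minimal sketch with SciPy, using synthetic series in place of the gridded model and observation time series analyzed in this study:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
recycling = rng.random(204)                    # 204 monthly estimates, 2002-2018
d_excess = recycling + 0.3 * rng.random(204)   # noisy, monotonically related proxy

tau, p_value = kendalltau(recycling, d_excess)
print(f"tau = {tau:.2f}, p = {p_value:.3g}")   # tau near 1 => strong rank agreement
```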
WAM-2layers and UTrack, the two models that track water vapor tracers, had the highest level of inter-model agreement (\(\bar \tau = 0.67\)). The LRRs reported in WAM-2layers were on average higher than UTrack by \(5.4 \pm 0.1\!\%\). Furthermore, while WAM-2layers estimated greater recycling levels in Central America and Asia, UTrack predicted higher recycling in Central Africa and South America (Fig. 2c). These discrepancies could be due to differences in numerical integration techniques or vertical model resolutions, both of which have previously been linked to divergent outcomes in model outputs18. For example, while the UTrack model is a Lagrangian model that is forced with reanalysis data for 25 vertical atmospheric layers, the WAM-2layers model is an Eulerian model forced with data for two layers. The model forcing data also differ in their horizontal resolutions, where the UTrack model uses ERA535 climate forcing data at a 0.25-degree resolution to drive its model, while WAM-2layers uses ERA-Interim36 at a 1.5-degree resolution. The ERA-Interim reanalysis has been found in regional studies to exhibit greater biases in tropical precipitation than ERA5, which is a surface flux that directly impacts measurements of precipitation recycling20. These different features potentially compound to produce divergent model results.
All models were highly correlated in small landmass regions surrounded by water, such as the Philippines and Central America, likely because in this regime, the LRR closely approximates the RRR due to the lack of nearby land evaporation sources. While the LRR can be impacted by moisture parcels traveling from distant land evaporation sources, the RRR is strictly dependent on local gridded moisture fluxes. As a result, the particle tracking and Mass Balance model are less related across larger landmasses such as South America and Africa, meaning that the behavior of the RRR diverged from that of the LRR within inland continental regions.
Climatological comparisons of model recycling ratios to deuterium excess
To test whether d-excess would be correlated with model recycling estimates (H1) and whether particle tracking models would see a stronger correlation with d-excess (H2), we analyzed their climatological signals within each tropical KG subzone. As expected, we found statistically significant (\(\alpha = 0.05\)) model-observation agreement in several climatological comparisons, including the tropics overall (Fig. 3a) for WAM-2layers (\(\bar \tau = 0.55\)) and Mass Balance (\(\bar \tau = 0.61\)); the Af subzone for UTrack (\(\bar \tau = 0.52\), Fig. 3b); and the Am subzone for Mass Balance (\(\bar \tau = 0.70\), Fig. 3c). However, no model achieved a statistically significant correlation in the Aw or As subzones (Fig. 3d, e). In fact, the mean rank correlations reported for the climatological analyses done in the Aw and As subzones were mostly lower than the mean rank correlations reported for other regions. One possible explanation for the lower agreement in the Aw and As subzones is the strong seasonal precipitation signal in these regions leading to higher divergence of d-excess and recycling across the annual precipitation cycle, especially during the dry seasons. Accordingly, we found support for hypothesis H1 within Af and Am, but not for Aw or As.
Fig. 3: Climatological cross-comparison of model recycling ratios against observational d-excess values across the tropical region.
The mean d-excess value across all stations and years (brown) is compared against the corresponding recycling ratio estimates given by UTrack, WAM-2layers, and Mass Balance models (blue). From top to bottom, the results are arranged by row according to their Köppen–Geiger subzone (A, Af, Am, Aw, As), and the results for each recycling model are given in a new column (UTrack, WAM-2layers, Mass Balance). Kendall's Tau rank cross-correlation coefficient is computed between climatological signals of recycling ratio and d-excess. The expected correlation is reported (\(\bar \tau\)) as well as the 95% confidence interval.
In addition to finding variability in model-observation agreement between climate subzones, we also found that recycling models demonstrated different strengths in correlation for the same region. Somewhat surprisingly, the Mass Balance model sometimes showed higher or equal climatological agreement when compared to WAM-2layers or UTrack for a given subzone. For example, in the tropics overall (Fig. 3a), WAM-2layers agreement (\(\bar \tau = 0.55\)) was similar to Mass Balance (\(\bar \tau = 0.61\)). It also was the highest correlated model in Am (\(\bar \tau = 0.70\), Fig. 3c), although the UTrack model was more highly correlated with Af (\(\bar \tau = 0.52\), Fig. 3b). Seemingly, while the RRR does not account for distant land evaporation, it still reports relatively strong agreement with d-excess when compared to the particle tracking results, calling hypothesis H2 into question and perhaps indicating that this process does not play an important role in precipitation recycling for some regions.
Finally, we examined how other non-recycling influences might impact climatological agreement beyond the amount effect. For example, canopy interactions with falling precipitation can change isotopic compositions of precipitation before it reaches the GNIP sampling station37,38,39. Accordingly, dense canopies surrounding d-excess observation sites can further de-couple stable isotope signatures from land and ocean evaporation. To study this effect, we removed observations from regions with outlier vegetation coverage, which we consider for this study to be greater than the 90th percentile value of total leaf area index (TLAI) for all points sampled (>8.6 m2/m2), from the climatological analysis of the overall tropics (Fig. 4). We find that this intervention improved agreement for both UTrack (\(\bar \tau = 0.48\), Fig. 4b) and WAM-2layers (\(\bar \tau = 0.58\), Fig. 4b), and all three models showed statistically significant agreement with d-excess signal. Beyond vegetation, we also looked for evidence that ENSO or extreme precipitation events might impact our climatology-based analysis, both of which are known to influence isotopic composition and its correspondence with recycling40,41. However, neither uniformly improved climatological agreement for our tropical analysis. Accordingly, the robust correspondence between d-excess and recycling ratios across several models and tropical regions supports hypothesis H1, but confounding factors such as vegetation coverage and seasonal precipitation are shown to further mediate this agreement at regional to local scales.
Fig. 4: Climatological comparison with dense vegetation outliers removed.
The overall tropical climatological comparison from Fig. 3a is replicated in panel (a). Panel (b) shows an identical analysis, but where observations which are associated with the >90th percentile of total leaf area index (TLAI) are removed from the analysis.
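As a sketch of this percentile-based screening step (the table and column names below are hypothetical; the study's actual analysis code is not reproduced here):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical observation table: one row per station-month, with a total
# leaf area index (TLAI) built from ERA5 high- + low-vegetation components.
obs = pd.DataFrame({
    "d_excess": rng.normal(10.0, 2.0, 500),
    "tlai": rng.gamma(4.0, 1.5, 500),
})

# Drop observations whose TLAI exceeds the 90th percentile of sampled points,
# then proceed with the usual climatological comparison on `filtered`.
tlai_cutoff = obs["tlai"].quantile(0.90)
filtered = obs[obs["tlai"] <= tlai_cutoff]
print(f"kept {len(filtered)} of {len(obs)} observations (TLAI cutoff = {tlai_cutoff:.1f})")
```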
More stations increase model-observation agreement
To further test hypothesis H3, we set out to understand how spatial coverage (i.e., the number of isotope stations sampled in each month) could influence model-observation agreement. First, we computed the rank correlation coefficient between the co-located mean recycling ratio and d-excess values for all three models and all 204 months in the analysis period. Then, we systematically excluded months from the analysis that fell below a targeted minimum station sample size and re-computed the correlation coefficient. Without filtering out low-coverage months, only UTrack (\(\bar \tau = 0.17\), Fig. 5a) and WAM-2layers (\(\bar \tau = 0.14\), Fig. 5b) demonstrated a statistically significant relationship between monthly mean recycling estimates and d-excess. However, with a station sample size filter in place, the Mass Balance model also achieved statistically significant agreement with the d-excess data, starting at a minimum station sample size of 7 (Fig. 5c).
Fig. 5: Station sample size analysis.
Kendall's Tau rank correlation is computed between mean d-excess and precipitation recycling estimates across all tropical stations and for each recycling model for the monthly time-series from 2002 to 2018. The average correlation coefficient (black solid line) and corresponding 95% confidence interval (solid gray fill) are constructed for each minimum station sample size value. The corresponding sample size is reported (brown dashed line) for each analysis.
Overall, we found that greater station sample sizes generally corresponded to increased model-observation agreement, supporting hypothesis H3. This was especially true for the Mass Balance approach, as the addition of a sample size filter made the results statistically significant, with moderately correlated behavior comparable to UTrack and WAM-2layers. However, for minimum station sample sizes of \(n \,>\, 10\), the total number of months included in the analysis dropped precipitously, eventually incorporating just a small fraction (<25%) of the original total. In this regime, the uncertainty in rank correlation estimates grew substantially, masking any underlying trend. Therefore, we stopped the station sample size analysis for each model when the results were no longer statistically significant (\(p\, >\, 0.05\)). More d-excess station observations would be needed to better understand how model-observation agreement changes at large station sample sizes and to reduce statistical uncertainty.
This study leveraged recently available datasets to test the relationship between observed d-excess signal and estimated precipitation recycling ratios across the tropics. We compared three contemporary recycling models with different methodologies and showed how their estimates of precipitation recycling varied, in part due to the uncertainty propagated by measurements and because of differences in their computational methods. We then used a climatological analysis to evaluate the extent to which d-excess observations aligned with predicted recycling ratios for each model and tropical climate subzone.
We found strong agreement between d-excess measurements and model-generated recycling ratios for a wide range of models and tropical regions, which supports our hypothesis that d-excess signals track recycling behavior (H1). Given the promising outcomes of our approach, future studies may find it instructive to incorporate d-excess into next-generation recycling models or Earth System Models that include water isotopes42, or to use it as a benchmark to evaluate model performance. However, our analysis highlights two possible influences on model-observation agreement that might complicate the direct application of d-excess to modeled recycling values at global scales: (1) a lack of sufficient isotope data and (2) a robust seasonal precipitation pattern in certain Köppen–Geiger climate subzones. Within the tropics, the first of these reasons is most likely to be influential within the As subzone, where measurements are significantly less abundant (\(n = 233\)) than in the other three subzones: Af (\(n = 891\)), Am (\(n = 779\)), and Aw (\(n = 1498\)). Large sample sizes could also contribute to the higher agreement found in the overall tropics comparisons, which showed strong model-observation correlations for multiple models. However, a lack of data is unlikely to be the complete story, since the Aw subzone demonstrated no statistically significant model-observation agreement despite having a larger sample size than any other region.
Accordingly, we identify seasonal precipitation patterns as another likely mediating factor due to the higher potential for sub-cloud evaporation effects. Sub-cloud evaporation is an atmospheric process whereby falling raindrops partially re-evaporate before reaching the ground. While sub-cloud evaporation can occur over sub-daily timescales, its cumulative impact is detected in our analysis at the monthly timescales at which the GNIP stations measure. With increasing precipitation levels, the d-excess measured at ground level will approach the d-excess as measured at the cloud base and thus be less influenced by sub-cloud evaporation—a phenomenon that has been termed the amount effect21. Therefore, with lower precipitation, d-excess becomes less related to the recycling signal. The amount effect could explain why climate subzones with extended dry periods, such as Aw and As, generally show poor model-observation agreement, while subzones with intermittent (Am) or non-existent (Af) dry periods demonstrated stronger correlations. While our goal in this study was to compare observation data directly to recycling estimates over a large region such as the tropics, future work could involve using a correction procedure on d-excess data to further probe the influence of sub-cloud evaporation on model-observation agreement at finer spatial scales in climates with a distinct dry season30.
In addition to seasonal precipitation, we identify vegetation interactions with precipitation as a further influence on our climatological analysis. Dense vegetation coverage is known to influence the isotopic composition of incoming rainfall. Local d-excess values are influenced by amplified rates of interception and transpiration in the forest canopy, while rates of evaporation at the soil surface are simultaneously suppressed37,38,39. Furthermore, forest vegetation can be a significant driver of local precipitation, even potentially initiating a wet season43. These mechanisms serve to further confound the relationship between precipitation recycling and d-excess. This is reflected by the fact that removing outlier observations corresponding to particularly dense canopies improved our climatological analysis. A similar analysis restricting the dataset based on ENSO influences (removing months associated with SOI > 1.5 and SOI > 3.0) or extreme precipitation (removing observations associated with >90th percentile monthly precipitation rates) did not clearly improve climatological agreement, even though both have been identified as potential mediators of precipitation isotope composition40,41. This could be because the strength of these effects depends greatly on regional conditions, along with interannual variability in the case of ENSO, which would be less influential at the global, climatological scales of our analysis. For example, in one region, an ENSO event could dramatically change precipitation, yet in another year with the same magnitude ENSO effect, the impact on the same region may be much smaller. Furthermore, our study includes a substantial number of stations in the African tropics, which are farther away from the source region of ENSO (the tropical Pacific).
The climatological analysis was also instructive in testing hypothesis H2, that particle tracking models demonstrate stronger agreement with d-excess observations than the Mass Balance model. Differences between the LRR generated with WAM-2layers and UTrack may result from UTrack's finer spatial resolution or from the different numerical integration approaches used by the two models. They could also be due to UTrack's reliance on ERA5, which has been shown to exhibit less bias in some tropical regions than ERA-Interim, used by WAM-2layers20. Although the UTrack model performed best in the Af subzone (\(\bar \tau = 0.52\), Fig. 3b), we found that in the Am subzone, the Mass Balance model showed a higher correlation with d-excess (\(\bar \tau = 0.70\), Fig. 3c) than either particle tracking model. In addition, for the tropics overall (Fig. 3a), the Mass Balance model exhibited similar correlation with d-excess (\(\bar \tau = 0.61\)) to another particle tracking model, WAM-2layers (\(\bar \tau = 0.55\)). These results offer evidence against H2, since the performance of the particle tracking models was not clearly superior. Such evidence suggests that distant land evaporation may have less impact on d-excess than land evaporation from more proximal sources. Specifically, moisture parcels that undergo longer transit paths may have more opportunities to experience phase changes and other environmental effects that could alter the isotopic composition of precipitation and reduce the relative influence of land recycling on d-excess values. For example, there would be more opportunities to mix with moisture sourced from evaporation off vegetation canopies, urban areas, lakes, or other open-water surfaces, which does not cause the same isotopic fractionation as traditional land evaporation. This theory could be further tested by assessing model-measurement agreement as a function of source area, but such an analysis would require investigation at a much finer spatial scale (e.g., sub-kilometer) than what we present here and is thus beyond the scope of this work.
Finally, the station sample size analysis demonstrated support for hypothesis H3, since increased station sample sizes generally improved model-observation agreement. This result can be explained by confounding factors that influence precipitation isotope composition at local scales. The International Atomic Energy Agency (IAEA) Global Network of Isotopes in Precipitation (GNIP) standards suggest that measurement stations should be placed as far away as possible from trees and buildings29. This guidance is intended to reduce the likelihood that incoming precipitation will be intercepted on its way to the detector, which would introduce bias into the collected sample. Even so, these effects can be hard to eliminate completely, especially for locations with particularly dense vegetation coverage like those in the tropics. In contrast, recycling models report estimates at the grid scale, which are not subject to local interception effects. Accordingly, by averaging across all stations in a climate subzone for a given month, the differences between models and observations were diminished, and a more robust spatio-temporal pattern of agreement emerged. A similar result has been found in the past when comparing global models of precipitation isotope quantities to GNIP data44. Nevertheless, because existing isotope measurement stations remain sparse, the effectiveness of such comparisons is currently limited to well-studied regions where estimates from many stations (\(n \,>\, 10\)) and months (\(n \,>\, 50\)) are available. Additional isotope station measurements would be ideal to explore how sample size impacts model-observation agreement in greater detail.
In summary, our study serves to further illustrate the relationship between precipitation recycling models and d-excess observations by studying their correlation under a variety of conditions. We found that although these relationships were statistically significant in several analyses, strong seasonal precipitation patterns and observation-model mismatches in scale reduce agreement. To further increase the utility of d-excess in evaluating recycling, we encourage the addition of precipitation isotope data to publicly available databases, especially in under-sampled tropical regions such as the As climate subzone and the Southern Hemisphere tropics. A denser observation network will enable researchers to continue to probe the sources of uncertainty in d-excess and recycling ratios and how they vary with environmental conditions that impact precipitation and scale. We also encourage more regional or local studies to examine the impact of smaller-scale influences on model-measurement correspondence, such as extreme precipitation events and ENSO signals, which are more pronounced at these resolutions.
Defining the tropics
For the purposes of this study, we constrained our analysis to the humid tropics, as defined according to the Köppen–Geiger classification zones: rainforest (Af), monsoon (Am), winter dry (Aw), and summer dry (As). These climate zones are based on mean climate conditions from 1951 to 200033. A map of the tropics, along with the location of each precipitation isotope sample point and the four climate zones, is shown in Supplementary Fig. 1.
Generating mass balance recycling estimates
The first modeled recycling estimate in this study was derived using the grid-based Mass Balance method14. The Mass Balance equation used to calculate the regional recycling ratio (RRR) is shown in Eq. 2:
$$RRR = \left(1 + \frac{2F^{+}}{ET\,A}\right)^{-1}$$
where ET is the evapotranspiration flux (kg m−2 s−1), A is the area of the region (m2), and F+ is the moisture influx (kg s−1), which is calculated using Eq. 3:
$$F^{+} = -\int_{\lambda_{in}} \left(\vec{Q} \cdot \hat{n}\right)\, d\lambda_{in}$$
where \(\lambda _{in}\) represents the boundary of interest, \(\vec Q\) is the moisture flux field (kg m−1 s−1), and \(\hat n\) is the normal unit vector (unitless)13.
The moisture flux field was taken from the ERA5 reanalysis product35. In this dataset, the monthly mean vertical integral of eastward water vapor flux and the monthly mean vertical integral of northward water vapor flux were used as the zonal and meridional components, respectively, of the moisture flux field, interpolated at a regular grid resolution of 0.25 degrees. Using this field, the moisture influx was then computed at a 1-degree grid resolution by evaluating the line integral in Eq. 3. This result was then used in Eq. 2, along with evapotranspiration data from ERA5, to compute each region's recycling ratio. The result was a collection of monthly mean regional recycling ratios for each 1-degree grid cell across the humid tropics, from January 2002 to December 2018.
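As a minimal numerical sketch of Eqs. 2 and 3 for a single grid cell (all input values and names below are hypothetical, and the boundary integral is collapsed to four edge-midpoint samples rather than the full 0.25-degree discretization):

```python
# Hypothetical values for one ~1-degree grid cell near the equator.
lx = ly = 1.11e5   # edge lengths (m); ~111 km per degree
area = lx * ly     # cell area A (m^2)
et = 3.5e-5        # evapotranspiration flux ET (kg m^-2 s^-1), ~3 mm/day

# (Q . n_hat) sampled at each edge midpoint, with n_hat the OUTWARD unit
# normal: negative values mean moisture flowing INTO the cell.
q_dot_n = {"west": -2.0e2, "east": 1.5e2, "south": -0.5e2, "north": 0.8e2}
edge_len = {"west": ly, "east": ly, "south": lx, "north": lx}

# Eq. 3: the influx F+ integrates only the inflow portions of the boundary.
f_plus = sum(-qn * edge_len[e] for e, qn in q_dot_n.items() if qn < 0.0)

# Eq. 2: regional recycling ratio for this cell.
rrr = (1.0 + 2.0 * f_plus / (et * area)) ** -1.0
print(f"F+ = {f_plus:.3e} kg/s, RRR = {rrr:.4f}")
```

For a region this small, the influx term dominates and the RRR is correspondingly tiny, consistent with recycling ratios growing with the size of the region considered.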
Generating WAM-2layers recycling estimates
Recently, researchers forced the WAM-2layers model with data from the ERA-Interim reanalysis36 to generate a database of 1.5 × 1.5-degree moisture teleconnection estimates between 79.5 degrees N and 79.5 degrees S latitude, enabling a monthly determination of the fate of evaporation originating from land cells45. The WAM-2layers model is an Eulerian tracking model in which the atmosphere is described with two vertical layers to account for vertical shear17. The dataset is generated with 3-hourly input data of evaporation and precipitation and 6-hourly input data of the wind components, surface pressure, total water column, total water vapor column, and the vertical integrals (all in the eastward and northward directions) of the water vapor flux, cloud liquid water flux, and cloud frozen water flux45. The database provides interannual estimates of the distribution of precipitation sources associated with each basin. To compute the land recycling ratio (\(LRR_{WAM\text{-}2layers}\)) over a given 1.5-degree grid cell, the contributions to precipitation in the grid cell of interest from each basin (\(P_i n_i\)) were summed and divided by the total precipitation as observed in ERA-Interim (\(P_{tot}\)). The monthly mean LRR was computed for each month between 2002 and 2018 using Eq. 4:
$$LRR_{WAM\text{-}2layers} = \frac{1}{P_{tot}} \sum_{\text{all sources}} P_i n_i$$
where \(P_{tot}\) is the total precipitation observed in the region, and \(P_i n_i\) is the component of precipitation in the region that can be attributed to an arbitrary basin; these components are summed over all possible land evaporation sources. While the other two recycling estimates were generated using data from ERA5, it has been shown that evaporation and precipitation values are similar between the ERA5 and ERA-Interim reanalysis products45. Therefore, WAM-2layers should be comparable to the UTrack and Mass Balance approaches, which both rely on ERA5 for input data.
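In code, Eq. 4 reduces to a sum of basin contributions divided by total precipitation (the numbers below are invented; real \(P_i n_i\) terms would come from the published teleconnection database):

```python
# Hypothetical monthly precipitation contributions (mm) to one 1.5-degree
# grid cell, attributed to individual land evaporation source basins (P_i n_i).
p_from_land_sources = [12.4, 3.1, 0.9, 7.6]
p_total = 95.0   # total monthly precipitation P_tot (mm) from the reanalysis

# Eq. 4: land recycling ratio as the land-sourced fraction of precipitation.
lrr = sum(p_from_land_sources) / p_total
print(f"LRR = {lrr:.3f}")   # ~0.25 here: a quarter of the rain is land-sourced
```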
Generating UTrack recycling estimates
Like WAM-2layers, the UTrack model has been used to produce public databases of moisture teleconnections46. However, the UTrack model uses Lagrangian, instead of Eulerian, numerical integration and divides the vertical component of transport into more atmospheric layers (25 layers) than WAM-2layers. The dataset is generated with hourly input data of evaporation, total precipitation, wind components, specific humidity, and total precipitable water, all with a spatial resolution of 0.25 × 0.25 degrees. Analogous to the WAM-2layers approach, UTrack computes the LRR by adding all global contributions of land evaporation to precipitation over a selected region, then dividing this sum by the total monthly precipitation. The model was run using ERA5 reanalysis data, and the monthly mean land recycling ratio (\(LRR_{UTrack}\)) was reported on a 1.0-degree grid spanning 2002 to 2018. The results and model settings specific to this run are included as a supplement to this paper.
Monthly deuterium excess measurements
To compute d-excess, this study used water isotope measurements from the International Atomic Energy Agency's Global Network of Isotopes in Precipitation (GNIP) database29. From 2002 to 2018, 90 stations reported monthly measurements within the A Köppen–Geiger (KG) tropical climate zone. The breakdown of station locations by KG subzone was 22 stations in Af, 41 stations in Aw, 23 stations in Am, and 4 stations in As. Typically, stations provided measurements of \(\delta ^2H\) and \(\delta ^{18}O\), which were collected once per month and are plotted in Supplementary Fig. 2. The resulting mean water line, 10.39, was slightly higher than the global mean water line of 10, as expected for the tropics47. The amount effect was also clearly observed in the GNIP sample, where a scatter plot shows the characteristic asymptote of d-excess towards the meteoric water line as precipitation increases (Supplementary Fig. 3). These isotope quantities were used in Eq. 1 to calculate d-excess. To generate monthly estimates of d-excess, precipitation isotope measurements were averaged across every station that reported an estimate of monthly isotope concentrations for a given collection period. This process was repeated for all 204 months between 2002 and 2018. To justify the use of this dataset in our study, we assessed whether the climate and geography of the isotope stations were representative of the global tropics. We compared their geographical distribution (Supplementary Fig. 4) and found that certain geographical regions, such as the As climate subzone and the Southern Hemisphere tropics, are under-represented in the dataset. We also probed the representativity of our sample dataset with respect to surface fluxes, such as precipitation and evapotranspiration rates. We find that our sample generally overrepresents locations with high-magnitude precipitation and evapotranspiration, but that the distribution of our sample overlaps considerably with the tropics overall (Supplementary Fig. 5). While the sample median rates for evapotranspiration and precipitation differ from the tropics, their values lie firmly within the interquartile range for the overall region. Furthermore, the 10th and 90th percentile whiskers for both precipitation and ET match closely between the sample locations and the tropics, suggesting that there is significant overlap between the typical values in both datasets. Finally, we tested the correspondence between GNIP measurements of monthly precipitation and co-located ERA5 estimates of monthly precipitation for each observation in the dataset. While we find relatively high error between individual GNIP observations and their ERA5 estimates (Supplementary Fig. 6a), the overall distribution of their values is highly similar (Supplementary Fig. 6b), suggesting that the datasets are congruent at large (global) spatial scales.
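Equation 1, referenced above, is the standard Dansgaard definition of deuterium excess, \(d\text{-excess} = \delta^{2}H - 8\,\delta^{18}O\). A minimal sketch of computing it and averaging across reporting stations per month (the records below are invented, not GNIP data):

```python
import pandas as pd

# Hypothetical GNIP-like records: one row per station per collection month.
gnip = pd.DataFrame({
    "station": ["A", "B", "A", "B"],
    "month":   ["2002-01", "2002-01", "2002-02", "2002-02"],
    "d2H":     [-12.0, -35.0, -8.0, -28.0],   # delta-2H (per mil)
    "d18O":    [-2.8, -5.6, -2.3, -4.9],      # delta-18O (per mil)
})

# Eq. 1 (Dansgaard): d-excess = delta-2H - 8 * delta-18O.
gnip["d_excess"] = gnip["d2H"] - 8.0 * gnip["d18O"]

# Monthly mean d-excess across all stations reporting in that month.
monthly_d_excess = gnip.groupby("month")["d_excess"].mean()
print(monthly_d_excess)
```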
Model intercomparison
The model intercomparison study is designed to assess the extent to which a given pair of recycling models agree in their estimates of the recycling ratio. Each model time-series pair is compared via cross-correlation using Kendall's Tau rank correlation coefficient as a comparison metric. A rank correlation metric was chosen because the expected relationship between models is sometimes unclear. For example, the relationship between RRR and LRR is not obviously linear, so Pearson's coefficient would not be appropriate. Similarly, while d-excess would differentiate between a parcel of moisture that evaporates from land twice versus only once, the recycling ratio makes no such distinction. For each 0.5-degree grid cell with estimates recorded from both models, there were 204 months of data from 2002 through 2018. The estimates from each month were compared in time using Kendall's Tau as described in Eq. 5:
$$\tau = \frac{2(c - d)}{n(n - 1)}$$
where \(\tau\) is the rank correlation coefficient, c is the number of concordant pairs of observations, d is the number of discordant pairs of observations, and n is the total number of observation pairs34.
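A direct \(O(n^2)\) pair-counting sketch of Eq. 5 follows (ties are ignored here; in practice a library routine such as scipy.stats.kendalltau handles them):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's Tau by counting concordant/discordant pairs (Eq. 5; ties ignored)."""
    c = d = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            c += 1   # concordant: both series rank the pair the same way
        elif s < 0:
            d += 1   # discordant: the two series disagree on the ordering
    n = len(x)
    return 2.0 * (c - d) / (n * (n - 1))

# Toy example: a recycling-ratio series against a d-excess series.
rr = [0.21, 0.25, 0.19, 0.30, 0.28]
de = [9.8, 11.2, 9.1, 12.0, 10.5]
print(kendall_tau(rr, de))   # +1 would mean identical rank ordering
```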
Model results were also compared using a grid-by-grid difference in mean annual recycling estimates to compute the relative bias within each model pair. These results were visualized using a scatterplot, with a rank correlation coefficient used to show the general agreement between model pairs.
Climatological analysis
The climatological analysis was designed to test hypotheses H1 and H2, which both predict the extent to which model-generated recycling ratios correlate with ground-based d-excess measurements. For each month (January 2002 to December 2018), any d-excess observations reported within the analysis region of interest are paired with the co-located monthly recycling estimate for the same month and year. Any coastal d-excess measurements that do not have a corresponding recycling estimate are omitted from the analysis. After all observations of d-excess are accounted for, the paired data are averaged according to calendar month, from January to December. The averaging process produces two climatological signals for comparison, which are specific to the region and recycling model used in the procedure. Originally, the raw signals were smoothed using a three-month moving window designed to reduce inhomogeneities that might disturb the physical climate signal48,49, but this was found to have limited benefit. The climatological analysis procedure was repeated for all three recycling models and for each climate subzone within the tropics (A, Af, Am, Aw, As). The 95% confidence interval of Kendall's Tau rank cross-correlation coefficient between the climatological signals was also computed for each signal pair that was generated. Finally, we repeated our climatological analysis once more in the overall tropics, this time removing all observations that corresponded to a total leaf area index in the 90th percentile or higher for the population of GNIP sampling sites. The total leaf area index was computed by retrieving from ERA5 the high-vegetation and low-vegetation leaf area indices co-located at each GNIP sample site and adding the two components together.
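A condensed pandas sketch of this pairing-and-averaging procedure (synthetic paired data; in the study, pairs come from co-locating GNIP observations with model grid cells):

```python
import pandas as pd
from scipy.stats import kendalltau

# Hypothetical paired records: each d-excess observation already matched with
# the co-located recycling-ratio estimate for the same month and year.
records = [
    {"year": y, "month": m,
     "d_excess": 10.0 + 0.2 * m,   # invented seasonal cycle
     "rr":       0.20 + 0.01 * m}
    for y in range(2002, 2006) for m in range(1, 13)
]
pairs = pd.DataFrame(records)

# Average by calendar month (Jan..Dec) to form the two climatological signals.
clim = pairs.groupby("month")[["d_excess", "rr"]].mean()

# Kendall's Tau rank cross-correlation between the 12-point climatologies.
tau, p_value = kendalltau(clim["d_excess"], clim["rr"])
print(f"tau = {tau:.2f}, p = {p_value:.4f}")
```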
Station sample size analysis
The station sample size analysis was designed to test hypothesis H3, which predicts that an increase in the spatial coverage of d-excess observations will improve the overall agreement between model-generated recycling ratios and d-excess values. First, for each of the 204 months from 2002 through 2018, the mean d-excess from all reporting stations was computed. Then, the recycling ratios for the grid cells co-located with the d-excess measurement stations were likewise aggregated. These monthly time series were compared by constructing a 95% confidence interval of Kendall's Tau rank cross-correlation. The procedure was then repeated for each minimum station sample size N, such that only months during the analysis period in which at least N stations reported a d-excess measurement were included in the cross-correlation computation.
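The month-filtering loop can be sketched as follows (synthetic monthly aggregates; the confidence intervals the study builds around \(\tau\) are omitted for brevity):

```python
import numpy as np
import pandas as pd
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# Hypothetical monthly aggregates over 204 months: mean d-excess, mean
# co-located recycling ratio, and the number of stations reporting that month.
months = pd.DataFrame({
    "d_excess":   rng.normal(10.0, 1.5, 204),
    "rr":         rng.uniform(0.1, 0.5, 204),
    "n_stations": rng.integers(1, 25, 204),
})

# Re-compute the rank correlation for each minimum station sample size N,
# keeping only months in which at least N stations reported.
for n_min in range(1, 13):
    kept = months[months["n_stations"] >= n_min]
    tau, p = kendalltau(kept["d_excess"], kept["rr"])
    print(f"N >= {n_min:2d}: {len(kept):3d} months, tau = {tau:+.2f}, p = {p:.2f}")
```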
Code base
All analysis code was written in Python50 using the standard Anaconda Distribution51 and Jupyter Notebook52. Kendall's Tau statistics were computed using the "correlation" library53.
All relevant data are available from the authors. Global Köppen–Geiger climate data are available for download at http://koeppen-geiger.vu-wien.ac.at. The WAM-2layers data are available at https://doi.org/10.1594/PANGAEA.908705 (ref. 54). The UTrack data have been uploaded to the figshare repository in conjunction with this article at https://doi.org/10.6084/m9.figshare.16641361. The ERA5 reanalysis is available at the Copernicus Climate Change Service (C3S) Climate Data Store, https://cds.climate.copernicus.eu.
The code generated and/or analyzed during the current study is available from the corresponding author on reasonable request.
Entekhabi, D., Rodriguez-Iturbe, I. & Bras, R. L. Variability in large-scale water balance with land surface-atmosphere interaction. J. Clim. 5, 798–813 (1992).
Yao, J. et al. Climatic and associated atmospheric water cycle changes over the Xinjiang, China. J. Hydrol. 585, 124823 (2020).
Eltahir, E. A. B. & Bras, R. L. Precipitation recycling in the Amazon basin. Q. J. R. Meteorol. Soc. 120, 861–880 (1994).
Sorí, R., Nieto, R., Vicente-Serrano, S. M., Drumond, A. & Gimeno, L. A Lagrangian perspective of the hydrological cycle in the Congo River basin. Earth Syst. Dyn. 8, 653–675 (2017).
Staal, A. et al. Forest-rainfall cascades buffer against drought across the Amazon. Nat. Clim. Change 8, 539–543 (2018).
Paul, S. et al. Weakening of Indian summer monsoon rainfall due to changes in land use land cover. Sci. Rep. 6, 32177 (2016).
Spracklen, D., Arnold, S. & Taylor, C. Observations of increased tropical rainfall preceded by air passage over forests. Nature 489, 282–285 (2012).
Zemp, D. C. et al. On the importance of cascading moisture recycling in South America. Atmos. Chem. Phys. 14, 13337–13359 (2014).
Gimeno, L., Nieto, R. & Sorí, R. The growing importance of oceanic moisture sources for continental precipitation. npj Clim. Atmos. Sci. 3, 27 (2020).
Herrmann, S. M., Brandt, M., Rasmussen, K. & Fensholt, R. Accelerating land cover change in West Africa over four decades as population pressure increased. Commun. Earth Environ. 1, 53 (2020).
Khanna, J., Medvigy, D., Fueglistaler, S. & Walko, R. Regional dry-season climate changes due to three decades of Amazonian deforestation. Nat. Clim. Change 7, 200–204 (2017).
Budyko, M. I. Climate and life. (Academic Press, New York, 1974).
Froelich, K., Gibson, J. J. & Aggarwal, P. K. Deuterium excess in precipitation and its climatological significance. Study Environ. Change Using Isot. Tech., CS Pap. Ser. 13, 54–65 (2002).
Brubaker, K. L., Entekhabi, D. & Eagleson, P. S. Estimation of continental precipitation recycling. J. Clim. 6, 1077–1089 (1993).
Bosilovich, M. G. & Schubert, S. D. Precipitation recycling over the central United States diagnosed from the GEOS-1 data assimilation system. J. Hydrometeorol. 2, 26–35 (2001).
Bosilovich, M. G. & Schubert, S. D. Water vapor tracers as diagnostics of the regional hydrologic cycle. J. Hydrometeorol. 3, 149–165 (2002).
van der Ent, R. J., Wang-Erlandsson, L., Keys, P. W. & Savenije, H. H. G. Contrasting roles of interception and transpiration in the hydrological cycle – Part 2: Moisture recycling. Earth Syst. Dyn. 5, 471–489 (2014).
Tuinenburg, O. A. & Staal, A. Tracking the global flows of atmospheric moisture and associated uncertainties. Hydrol. Earth Syst. Sci. 24, 2419–2435 (2020).
Jian, Q. et al. Evaluation of the ERA5 reanalysis precipitation dataset over Chinese Mainland. J. Hydrol. 595, 125660 (2021).
Gleixner, S., Demissie, T. & Diro, G. T. Did ERA5 improve temperature and precipitation analysis over East Africa? Atmosphere 11, 996 (2020).
Galewsky, J. et al. Stable isotopes in atmospheric water vapor and applications to the hydrologic cycle. Rev. Geophys. 54, 809–865 (2016).
Guswa, A. et al. Advancing ecohydrology in the 21st century: a convergence of opportunities. Ecohydrology 13, e2208 (2020).
Gimeno, L. et al. Recent progress on the sources of continental precipitation as revealed by moisture transport analysis. Earth Sci. Rev. 201, 103070 (2020).
Dansgaard, W. Stable isotopes in precipitation. Tellus A: Dyn. Meteorol. Oceanogr. 16, 436–468 (1964).
Salati, E., Attilio, D. O., Matsui, E. & Gat, J. R. Recycling of water in the Amazon Basin: an Isotopic Study. Water Resour. Res. 15, 1250–1258 (1979).
Gat, J. R. & Matsui, E. Atmospheric water balance in the Amazon basin: an isotopic evapotranspiration model. J. Geophys. Res.: Atmos. 96, 13179–13188 (1991).
Victoria, R. L., Martinelli, L. A., Mortatti, J. & Richey, J. Mechanisms of water recycling in the Amazon Basin: Isotopic Insights. AMBIO 20, 384–387 (1991).
Mathieu, R. & Bariac, T. A numerical model for the simulation of stable isotope profiles in drying soils. J. Geophys. Res.: Atmos. 101, 12685–12696 (1996).
IAEA/WMO. Global Network of Isotopes in Precipitation. The GNIP Database. https://nucleus.iaea.org/wiser (2020).
Kong, Y., Pang, Z. & Froehlich, K. Quantifying recycled moisture fraction in precipitation of an arid region using deuterium excess. Tellus B: Chem. Phys. Meteorol. 65, 19251 (2013).
Juhlke, T. R. et al. Assessing moisture sources of precipitation in the Western Pamir Mountains (Tajikistan, Central Asia) using deuterium excess. Tellus B: Chem. Phys. Meteorol. 71, 1445379 (2019).
Edirisinghe, E. A. N. V., Pitawala, H. M. T. G. A., Dharmagunawardhane, H. A. & Wijayawardane, R. L. Spatial and temporal variation in the stable isotope composition (δ18O and δ2H) of rain across the tropical island of Sri Lanka. Isot. Environ. Health Stud. 53, 628–645 (2017).
Kottek, M., Grieser, J., Beck, C., Rudolf, B. & Rubel, F. World Map of the Köppen-Geiger climate classification updated. Meteorologische Z. 15, 259–263 (2006).
Kendall, M. A new measure of rank correlation. Biometrika 30, 81–93 (1938).
Hersbach, H. et al. The ERA5 global reanalysis. Q. J. R. Meteorol. Soc. 146, 1999–2049 (2020).
Dee, D. P. et al. The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Q. J. R. Meteorol. Soc. 137, 553–597 (2011).
Song, X. et al. Relationships between precipitation, soil water and groundwater at Chongling catchment with the typical vegetation cover in the Taihang mountainous region, China. Environ. Earth Sci. 62, 787–796 (2011).
Zhang, W., An, S., Xu, Z., Cui, J. & Xu, Q. The impact of vegetation and soil on runoff regulation in headwater streams on the east Qinghai–Tibet Plateau, China. CATENA 87, 182–189 (2011).
Zhai, L., Wang, X., Wang, P., Miralles‐Wilhelm, F. & Sternberg, L. Vegetation and location of water inflow affect evaporation in a subtropical wetland as indicated by the deuterium excess method. Ecohydrology 12, e2082 (2019).
Tharammal, T., Govindasamay, B. & Noone, D. Impact of deep convection on the isotopic amount effect in tropical precipitation. J. Geophys. Res.: Atmos. 122, 1505–1523 (2017).
Panarello, H. O. & Dapeña, C. Large scale meteorological phenomena, ENSO and ITCZ, define the Paraná River isotope composition. J. Hydrol. 365, 105–112 (2009).
Brady, E. et al. The connected Isotopic Water cycle in the Community Earth System Model Version 1. J. Adv. Model. Earth Syst. 11, 2547–2566 (2019).
Wright, J. S. et al. Rainforest-initiated wet season onset over the southern Amazon. PNAS 114, 8481–8486 (2017).
Dee, S., Noone, D., Buenning, N., Emile-Geay, J. & Zhou, Y. SPEEDY-IER: a fast atmospheric GCM with water isotope physics. J. Geophys. Res.: Atmos. 120, 73–91 (2014).
Link, A., van der Ent, R., Berger, M., Eisner, S. & Finkbeiner, M. The fate of land evaporation – a global dataset. Earth Syst. Sci. Data 12, 1897–1912 (2020).
Tuinenburg, O. A., Theeuwen, J. J. E. & Staal, A. High-resolution global atmospheric moisture connections from evaporation to precipitation. Earth Syst. Sci. Data 12, 3177–3188 (2020).
Landwehr, J. M. & Coplen, T. B. Line-conditioned excess: a new method for characterizing stable hydrogen and oxygen isotope ratios in hydrologic systems. Isotopes in Environmental Studies Aquatic Forum, 132–135. http://www-pub.iaea.org/MTCD/publications/PDF/CSP_26_web.pdf (2004).
Auer, I. et al. A new instrumental precipitation dataset for the greater alpine region for the period 1800–2002. Int. J. Climatol. 25, 139–166 (2005).
Brunetti, M., Maugeri, M., Monti, F. & Nanni, T. Temperature and precipitation variability in Italy in the last two centuries from homogenised instrumental time series. Int. J. Climatol. 26, 345–381 (2006).
Van Rossum, G. & Fred, L. D. Python 3 Reference Manual (CreateSpace, Scotts Valley, 2009).
Anaconda. https://anaconda.com/ (2016).
Project Jupyter. https://jupyter.org/ (2020).
Weng, X. Correlation. https://github.com/XiangwenWang/correlation (2020).
Link, A., van der Ent, R., Berger, M., Eisner, S. & Finkbeiner, M. The fate of land evaporation - A global dataset. PANGAEA. https://doi.org/10.1594/PANGAEA.908705 (2019).
We would like to thank Jiamin Li and Chenghai Wang (Lanzhou University); Kevin Trenberth (NCAR); Kaye Brubaker (University of Maryland); Michael Bosilovich (NASA); Ruud van der Ent (Delft University of Technology), Yanlong Kong (Chinese Academy of Sciences), and Tobias Juhlke (Friedrich-Alexander-University of Erlangen-Nürnberg) for their helpful comments, and Nicolette Gonzales for her work on the isotope data set. This work was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS) under the Science Undergraduate Laboratory Internships Program (SULI). This material is based upon work supported as part of the Next Generation Ecosystem Experiments-Tropics (NGEE-Tropics) funded by the U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research. The work of JJET was performed in the cooperation framework of Wetsus, European Centre of Excellence for Sustainable Water Technology (www.wetsus.eu). Wetsus is co-funded by the Dutch Ministry of Economic Affairs and Climate Policy, the Northern Netherlands Provinces, and the Province of Fryslân. The authors would like to thank the participants of the Natural Water Production theme for their financial support. Arie Staal acknowledges support from the Talent Program grant VI.Veni.202.170 by the Dutch Research Council (NWO). Obbe A. Tuinenburg acknowledges support from the research program Innovational Research Incentives Scheme Veni (016.veni.171.019), funded by the Dutch Research Council.
Earth and Environmental Sciences, Los Alamos National Laboratory, Los Alamos, NM, USA
Stephen Cropper, Kurt Solander, Brent D. Newman & Chonggang Xu
Copernicus Institute of Sustainable Development, Utrecht University, Utrecht, The Netherlands
Obbe A. Tuinenburg, Arie Staal & Jolanda J. E. Theeuwen
Wetsus, European Centre of Excellence for Sustainable Water Technology, Leeuwarden, The Netherlands
Jolanda J. E. Theeuwen
S.C., K.S., and B.N. designed the study, S.C. and K.S. did the analysis, and S.C. led the writing of the paper. J.T., A.S., and O.T. provided the land recycling data from the UTrack model. All authors contributed to the editing of the paper and discussion of the results.
Correspondence to Stephen Cropper.
Cropper, S., Solander, K., Newman, B.D. et al. Comparing deuterium excess to large-scale precipitation recycling models in the tropics. npj Clim Atmos Sci 4, 60 (2021). https://doi.org/10.1038/s41612-021-00217-3
A dataset of hourly sea surface temperature from drifting buoys
Data Descriptor
Shane Elipot1,
Adam Sykulski2,
Rick Lumpkin ORCID: orcid.org/0000-0002-6690-17043,
Luca Centurioni4 &
Mayra Pazos3
Scientific Data volume 9, Article number: 567 (2022) Cite this article
A dataset of sea surface temperature (SST) estimates is generated from the temperature observations of surface drifting buoys of NOAA's Global Drifter Program. Estimates of SST at regular hourly time steps along drifter trajectories are obtained by fitting to observations a mathematical model representing simultaneously SST diurnal variability with three harmonics of the daily frequency, and SST low-frequency variability with a first degree polynomial. Subsequent estimates of non-diurnal SST, diurnal SST anomalies, and total SST as their sum, are provided with their respective standard uncertainties. This Lagrangian SST dataset has been developed to match the existing and on-going hourly dataset of position and velocity from the Global Drifter Program.
Measurement(s) temperature of sea surface
Technology Type(s) Thermistor Device
Factor Type(s) Latitude • Longitude • Time
Sample Characteristic - Environment Ocean
Sample Characteristic - Location Global
Background & Summary
The Global Drifter Program (GDP) funded by the U.S. National Oceanic and Atmospheric Administration (NOAA) maintains an array of satellite-tracked water-following drifting buoys, hereafter referred to as drifters, designed to acquire in situ observations of near-surface ocean current, sea surface temperature (SST), and atmospheric sea level pressure1. The requirement of the Global Ocean Observing System (GOOS) to achieve a nominal 5° × 5° coverage of the world's ocean has been fulfilled since September 2005 with a pool of 1250 drifters2. In near-real time, drifter locations and sensor data are relayed to the World Meteorological Organization's Global Telecommunication System (WMO GTS), contributing to the collection of critical information needed for the World Weather Watch programme. Drifter data are also harvested by various national and international projects and organizations which aim at assembling in situ SST observations to produce quality-controlled and reformatted datasets for scientific analysis, climate monitoring, and calibration and validation of satellite-based SST observations. In delayed time, the GDP maintains the historical database of drifter data and metadata, and delivers regular updates of drifter data products of surface currents and SST following quality control and estimation procedures. The historical observations, with the earliest ones from 1979, have been processed in incremental steps to generate a 6-hour joint dataset of drifter position, velocity, and SST estimates, along with their uncertainty estimates3,4. Because the frequency of drifter observations has increased since the onset of the array, an hourly product of drifter velocity estimates with uncertainties has been generated since 2016, following a new estimation methodology5. This paper describes the methods that have now been devised to generate a new dataset of SST estimates at hourly time steps along drifters' trajectories, aimed at accompanying the on-going hourly drifter velocity dataset5. The dataset of drifter position and velocity estimates augmented with SST estimates is now version 2.00 of the "Hourly location, current velocity, and temperature collected from Global Drifter Program drifters world-wide" dataset6. A summary of the products generated by the GDP is contained in Table 1.
Table 1 Table of location, velocity, and temperature data products and availability from the Global Drifter Program (GDP) as described in Hansen and Poulain (1996), Elipot et al. (2016), and this paper which defines three levels of data processing.
Hourly estimates of SST along drifters' trajectories are ultimately obtained from in situ sea water temperature observations. Estimates are obtained by least squares fitting a mathematical model of SST temporal evolution to temporally-uneven SST observations. The adopted fitting method is an adaptation of the locally weighted scatterplot smoothing method, known as LOWESS7. The method operates in an iterative manner in order to gradually reduce originally-uniform weights given to observations, eventually rejecting observations diagnosed as outliers. The method first generates SST estimates at the original times of the drifter SST sensor observations, and second generates SST estimates at regular top-of-hour times that typically do not coincide with the observation times. After fitting the mathematical model, the local error variance of the assumed observational process is estimated by summing the variance of the residuals from the fit and an ad hoc term aimed at taking into account the quantization error arising from temperature sensor resolution8. The ultimately chosen mathematical SST model is the sum of a polynomial function of order one, meant to capture non-diurnal variability, and the sum of three pairs of cosine and sine functions at harmonic frequencies of the diurnal frequency, meant to capture diurnal variability. The error variance estimates are subsequently propagated through the least squares method to derive standard uncertainties of the model parameters and of the SST estimates. The parameters of the model and of the fitting method have been chosen by analyzing two limited subsets of the global observational dataset. The choices made aim at minimizing both the mean square error calculated from the residuals of the fits and the estimates of the error variance of the observational process model. Ultimately, the variables added to the existing hourly dataset of drifter positions and velocities consist of SST diurnal hourly estimates, SST non-diurnal hourly estimates, and total SST hourly estimates. Each of these estimates is accompanied by its respective standard error estimate and a specifically devised quality flag. Global statistics from our estimates indicate that the square root of the typical error variance is 0.031 °C for drogued drifters and 0.036 °C for undrogued drifters, and that the typical standard error for SST estimates is 0.016 °C for drogued drifters and 0.019 °C for undrogued drifters. There exist, however, marked geographical differences for these values across the world's ocean. The magnitudes of these uncertainties are an order of magnitude smaller than previously estimated measurement uncertainties for drifting buoys9 because of differences in methods.
The methods described in this paper define four levels of SST data denoted Level-0,1,2,3, as explained in the following sections:
Level-0 corresponds to the original, temporally unevenly distributed data as reported by the SST sensor and transmitted to the GDP DAC by the Service Argos system or by the Lagrangian Drifter Laboratory at the Scripps Institution of Oceanography;
Level-1 data result from the application of initial processing and quality-controls to Level-0 data;
Level-2 corresponds to SST estimates at the same unevenly distributed times as Level-1;
Level-3 corresponds to SST estimates at a regular hourly interval, at the top of each hour. Level-3 estimates are obtained at the same times as the position and velocity estimates for drifters of the GDP hourly dataset, release 1.04c5. The dataset of drifter position and velocity estimates augmented with SST estimates is release 2.00.
This paper describes the derivations of the Level-1, 2, and 3 datasets and announces the release of the Level-3 data as a product.
Data collation
In its basic configuration, a standard SVP drifter (from the Surface Velocity Program of the World Ocean Circulation Experiment) is composed of a surface float tethered to a holey-sock "drogue", or sea anchor, centered at 15 m depth when deployed. As a result, the surface displacement of the float tracked by satellites is predominantly representative of the oceanic velocity at 15 m10. With time, a drifter can lose its drogue and become "undrogued"2, but it still continues to transmit its position and its sensor data until it dies11. In addition to the standard SVP configuration, a number of drifters are equipped with additional sensors such as a barometer for sea-level atmospheric pressure12 or a conductivity sensor to measure salinity13 (see https://www.aoml.noaa.gov/phod/dac/deployed.html for the historical deployment log of the GDP). Yet, all drifters are equipped with a temperature sensor attached to the surface float and located at about 18 cm depth when the float is at rest. Despite being an environmental variable of climate importance, sea surface temperature does not have a unique definition, and the depth at which a measurement is taken is crucial to interpreting its value and variability. The definitions for near-surface seawater temperature from the Group for High Resolution Sea Surface Temperature (GHRSST, https://www.ghrsst.org/ghrsst-data-services/products/) would suggest calling the temperature data from surface drifters "observations of sea water temperature at a depth of 18 cm". In the rest of this paper, for simplicity, we will refer to temperature observations from drifters, as well as temperature estimates derived from these, as SST data.
At the onset of the GDP in 1979, drifters were tracked by Argos, which is both a positioning system and a data transmission system. At the beginning of the program, battery power and money were conserved by sampling locations and transmitting data using 1/3 and 2/3 duty-cycle schemes. As an example, data were transmitted for one day, followed by two days of no transmission, or data were transmitted for 8 hours, followed by 16 hours of no transmission14. Since 2000, this sampling scheme has been abandoned thanks to increased battery life and other technological advancements. At the same time, the number of operational satellites of the Argos constellation increased with time, so that the typical time interval between two consecutive Argos fixes reduced to between 1 and 2 hours15. However, stemming from the original sampling pattern, the GDP has continued to routinely process and interpolate the location fixes and temperature observations to produce drifter locations and temperature estimates continuously along trajectories at 6-hour intervals. The general method of interpolation, called kriging3, provides an estimate of location, or of SST, at a given time as a weighted linear combination of observations close in time (the five previous ones and the five subsequent ones in this case). Finding the optimal set of weights involves assuming a mathematical expression for a so-called structure function, which is half of the variance of observation differences as a function of temporal lag. For the 6-hourly GDP product, structure functions for location or temperature are fitted regionally and in discrete time periods to observations. As such, the kriging implementations for location and temperature differ because of the structure function employed, and estimates of location and SST are independent from each other in the sense that no location information is used to estimate SST and vice versa. Drifter velocities are subsequently computed from the 6-hour locations by 12-hour central differencing3. The GDP started to phase out the Argos positioning system in 2014 in favor of the Global Positioning System (GPS). This system provides locations with estimated O(10)-meter scale accuracy5 that are relayed almost instantly via the Iridium Satellite Communication system, along with sensor data, at regular temporal intervals (typically hourly, but not always), in contrast to Argos locations and transmissions. At the time of writing, the transition to GPS tracking and Iridium transmission is complete. A few drifters of the array were equipped with GPS receivers that transmitted their data via the Argos system5.
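To make the kriging step concrete, here is a minimal ordinary-kriging sketch in time, assuming a hypothetical exponential structure function (the GDP's regionally fitted structure functions are not reproduced here):

```python
import numpy as np

def gamma(dt, sill=0.5, scale=12.0):
    """Hypothetical exponential structure function of temporal lag (hours)."""
    return sill * (1.0 - np.exp(-np.abs(dt) / scale))

# Times (hours, relative to the estimation time) of the five previous and
# five subsequent SST observations used in the weighted combination.
t_obs = np.array([-14.0, -11.0, -7.5, -4.0, -1.5, 2.0, 5.0, 8.5, 11.0, 13.5])

# Ordinary kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma_0; 1],
# where Gamma holds pairwise structure-function values and mu is a
# Lagrange multiplier enforcing that the weights sum to one.
n = len(t_obs)
A = np.ones((n + 1, n + 1))
A[:n, :n] = gamma(t_obs[:, None] - t_obs[None, :])
A[n, n] = 0.0
b = np.append(gamma(t_obs - 0.0), 1.0)   # estimate at relative time 0
w = np.linalg.solve(A, b)[:n]

sst_obs = np.array([26.1, 26.0, 25.9, 26.2, 26.3, 26.4, 26.2, 26.1, 26.0, 25.9])
print("weights:", np.round(w, 3))
print("kriged SST estimate:", round(float(w @ sst_obs), 3))
```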
All transmitted locations and sensor data from Argos-tracked drifters were collected by Collecte Localisation Satellite (CLS), which relayed them, first in near-real time to the WMO GTS, and second to the GDP Data Assembly Center (DAC) located at the NOAA Atlantic Oceanographic and Meteorological Laboratory (AOML) in Miami, Florida. The Argos data received by the DAC are organized in messages, each associated with an Argos localization from a single Argos satellite pass, with a time stamp contained within the 10 to 20 min duration of that pass16,17. Each message may contain one or more sets of sensor observations, each set having its own sensor time which differs from the localization time, but is typically within plus or minus the pass' duration. Sometimes, an Argos message does not contain a location and a location time but nevertheless contains some sets of observations. In this case, the DAC assigns the location and time of the previous message to these observations. The sensor observations are subsequently processed by the DAC as follows. In the case of a message containing multiple and distinct sets of observations, the median values of the observation times and of the SST observations are retained for that message. For some drifters with specific sampling configurations, observations may explicitly include an age, which is a time interval that needs to be subtracted from the nominal observation time to obtain the true time of observations. Next, the DAC reorganizes Argos data as a row file, one per drifter, with each row containing in its columns Argos location (latitude and longitude), Argos location time, observation time, and observation data (SST and other sensors). Observation data are originally in sensor counts but are decoded and converted throughout this process to physical units according to sensor equations found within each drifter specification sheet. Note that because several Argos satellites can be within the view of a single drifter at the same time, it is possible for the same set of observations to be transmitted by a drifter to different satellites, and to be eventually repeated in the dataset collated by the DAC, but with different locations and location times. As a result of the disconnection between Argos localization and acquisition of observations, there is no strict temporal coordination of location data and sensor data for Argos-tracked drifters.
For the modern GDP drifters that relay their data through the Iridium Satellite System, the geographical location from a GPS receiver is treated as another sensor variable, like the sensor SST, and as such is not subject to the semi-aleatory transmission schedule of the Argos system. The data are transmitted in Short Burst Data (SBD) format, which contains a number of parameters that depends on the type of drifter and the manufacturer, but typically includes date and time and sensor data, including GPS when available. GPS location times and sensor data times are therefore concurrent. If a GPS position is not available at an observation time, the previous position is reported with a recorded time delay. Depending on a drifter's firmware, the GPS location sampling interval may differ from the sensor data sampling intervals. Drifter locations and sensor data are relayed in near-real time to the GDP Data Processing Center (DPC) located at the Lagrangian Drifter Laboratory (LDL) of the Scripps Institution of Oceanography and, from there, are sent to the WMO GTS after decoding the GPS and sensor data according to manufacturers' specification sheets. The drifter messages decoded by the LDL are also made available as text files to the DAC at AOML for inclusion in the GDP database. All data relayed by CLS and the GDP DPC, collated by the GDP DAC, form what we call here Level-0 data.
Pre-processing and initial quality controls
Out of the Level-0 data, we consider for the initial release of this new augmented product the SST observations from 20-Dec-1978 02:00:00 to 06-Jul-2020 22:59:31, which totals 285,886,818 data tuples of SST values and observation times. We apply a number of pre-processing and quality control procedures to these data to form the Level-1 data. These initial procedures, as well as all subsequent estimation methods, are applied to all drifters irrespective of their tracking and data transmission systems. As we saw for Argos drifters, the nominal sampling patterns for drifter SST and location acquisition are essentially independent. Note that at this stage, as described in the previous section, an approximate or "raw" geographical location with varied, or unknown, uncertainty is associated with a SST data point.
The GDP DPC and DAC harvest drifter deployment sheets filled out at sea by operators, as well as conduct a number of diagnostics based on location and sensor data, in order to maintain a directory file at the DAC. This directory file lists the dates, times, and locations of trajectory starts, the dates, times, and locations of trajectory ends (i.e. drifter deaths), and the estimated dates and times of drogue losses11,18. The start and death dates and times are used to truncate if needed the SST time series for data points before oceanic deployment and after oceanic death ("post-death" sensor data may exist in the transmitted data for example if a drifter had been picked up by a vessel or run aground but continued to transmit its sensor data). Next, we further determine blocks of time for which SST observations are deemed valid by applying a quality-control step that has been in place as part of the production of the 6-hourly SST dataset. The NOAA Optimum Interpolation (OI) SST V219 at monthly time steps is used to calculate a monthly climatology, which is subsequently interpolated to the raw locations associated with the drifter SST observations. These interpolated values are then visually compared to drifter SST observations to determine a first and a last "good" SST observation per drifter trajectory, based on an expert human assessment. The corresponding dates and times of these two points are recorded in a dedicated master file for all drifters. Another master file is maintained by the DAC that lists periods of time for which it appears that an SST sensor failed temporarily, also based on the comparison to climatological values. These potential periods of sensor failure, and the periods before and after the first and last good points, are subsequently discarded from the SST observation time series. Note that this comparison to a climatology is not used to remove seemingly outlying individual points, but rather to determine blocks of invalid SST observations. After this stage, we remove data records that contain fill values for missing data points. Next, some drifters have two or more SST observations at the same time; in that case, all observations are kept and will be processed to obtain Level-2 estimates at those same times. Next, we identify a number of drifter SST time series with only a single point, which are removed, and constant-value time series, which are also removed. In the end, the final Level-1 dataset consists of 197,916,695 tuples of SST and time data originating from 24,597 drifter trajectories.
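Several of these screening steps reduce to simple per-drifter filters; a compressed sketch follows (all thresholds, fill values, and column names are hypothetical):

```python
import pandas as pd

# Hypothetical raw records for one drifter.
df = pd.DataFrame({
    "time": pd.to_datetime(["2010-01-01 00:10", "2010-01-01 01:05",
                            "2010-01-01 01:05", "2010-01-01 01:40"]),
    "sst":  [25.4, -999.0, 25.5, 25.6],   # -999.0 stands in for a fill value
})

deploy_time = pd.Timestamp("2010-01-01 00:00")
death_time  = pd.Timestamp("2010-01-01 02:00")

# Truncate to the deployment-to-death window, then drop fill values.
df = df[(df["time"] >= deploy_time) & (df["time"] <= death_time)]
df = df[df["sst"] != -999.0]

# Duplicate-time observations are deliberately kept (each will later receive
# its own estimate); single-point or constant-value series are dropped.
if len(df) <= 1 or df["sst"].nunique() == 1:
    df = df.iloc[0:0]
print(df)
```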
As a result of the differing technologies of the data transmission systems (Argos and Iridium), of the number of different drifter manufacturers for the GDP, and of changing firmwares with time, the Level-1 dataset of time series of SST observations is heterogeneous in its sampling intervals and apparent levels of noisiness. Two-dimensional histograms of occurrences of time differences and absolute temperature differences between two subsequent data points for all trajectories (Fig. 1) generally indicate that larger absolute SST differences are found for smaller time differences, for both Iridium and Argos drifters. For Iridium drifters, time differences are concentrated around multiples of one hour or 30 min, but with deviations from these because of possible delays of GPS signal acquisitions compared to a specified schedule. The distribution of time differences for Argos drifters is more continuous but exhibits local peaks near one hour and 101 min, the latter corresponding to the orbital period for an Argos satellite5. Even when considering the distribution of the median of time differences (or sampling interval) per trajectory, the Argos drifters exhibit a much more varied set of values compared to the Iridium drifters (Fig. 2).
Distribution of time differences and absolute temperature differences between two consecutive SST observations of the Level-1 data. (a) Two-dimensional histogram for Iridium drifters, and (b) Two-dimensional histogram for Argos drifters. Only values of Δt less than 24 hours and values of ΔSST less than 40 °C are shown.
Histograms in 3-minute bins of median SST temporal sampling intervals per drifter trajectory for Level-1 data (24,597 SST time series from 4,495 Iridium drifters and 20,102 Argos drifters). Only values smaller than 10 hours are displayed. 105 Argos time series have a median sampling interval larger than 10 hours.
Model of SST temporal evolution
We seek to obtain SST estimates by fitting a local temporal model to the temperature data acquired along drifter trajectories. In situ observations of SST from drifting or moored platforms, as well as remote sensing observations, suggest that two types of temporal variability typically co-exist: a relatively fast evolution on a diurnal time scale, sometimes referred to as a diurnal warming, and a relatively slower background evolution. For this background evolution (also referred to as non-diurnal in the rest of this manuscript), there is a priori no expectation of a dominant physical process acting at all times and all places. As such, it is reasonable to model this evolution with a local polynomial model as an approximation of a Taylor series expansion of an unknown underlying function20. For the diurnal evolution, we follow a number of previous studies21,22,23,24 and model this evolution as the sum of cosine functions with fundamental frequency ω = 2π radians per day. In contrast to some previous studies however, our diurnal model is exactly periodic in the sense that it is locally zero-mean. A mean SST value and a possible difference of SST between the beginning and the end of a diurnal period will be both captured by the background non-diurnal polynomial model (as it will be at least of order 1). In addition, the amplitudes and phases of each of the cosine functions contributing to the diurnal model are not constant within a day, but rather vary locally since they are fitted at every time step using data within a sliding window centered on that time step. This diurnal model allows us to accommodate various environmental conditions (e.g. momentum and heat fluxes) affecting the shape of the diurnal signal in time and space as a drifter is advected by ocean currents. Note that since the diurnal SST estimate is locally zero-mean and does not represent solely a diurnal warming, the contemporaneous non-diurnal SST estimate differs from what is called a foundation temperature, that is a temperature free of diurnal temperature variability. In other words, the non-diurnal SST estimate typically contains the local mean of the SST diurnal variability.
In summary, the complete SST model is the sum of a polynomial sP of order P, and a sum sD of N cosine functions at harmonic frequencies of the diurnal frequency ω = 2π radians per day. Next, we consider that a number of drifter SST observations si, found in the temporal vicinity of a single observation sk at time tk, are generated by the process described by the equation
$${s}_{i}={s}_{m}({t}_{i};{t}_{k})+{\sigma }_{i}{\varepsilon }_{i},$$
where \({s}_{m}({t}_{i};{t}_{k})\) is the SST model and the noise model, εi, is zero-mean, has unit variance [\(E({\varepsilon }_{i})=0\), \({\rm{Var}}({\varepsilon }_{i})=1\)], and is independent of the noise at other times. The noise is locally scaled by σi, the square root of the error variance of the observations \({\sigma }_{i}^{2}\), which is conditional on time ti and will be estimated a posteriori.
The temporal evolution model is
$${s}_{m}({t}_{i};{t}_{k})={s}_{P}({t}_{i};{t}_{k})+{s}_{D}({t}_{i};{t}_{k})$$
$$=\mathop{\sum }\limits_{p=0}^{P}{s}_{p,k}{({t}_{i}-{t}_{k})}^{p}+\mathop{\sum }\limits_{n=1}^{N}{A}_{n,k}\,{\rm{\cos }}\;\left[n\omega ({t}_{i}-{t}_{k})+{\phi }_{n,k}\right]$$
$$=\mathop{\sum }\limits_{p=0}^{P}{s}_{p,k}{({t}_{i}-{t}_{k})}^{p}+\mathop{\sum }\limits_{n=1}^{N}\left[{\alpha }_{n,k}\;{\rm{\cos }}\;n\omega ({t}_{i}-{t}_{k})+{\beta }_{n,k}\,{\rm{\sin }}\;n\omega ({t}_{i}-{t}_{k})\right],$$
where
$${\alpha }_{n,k}={A}_{n,k}\,{\rm{\cos }}\;{\phi }_{n,k},$$
$${\beta }_{n,k}=-{A}_{n,k}\,{\rm{\sin }}\;{\phi }_{n,k}.$$
The last form (4) of the model shows that the P + 1 + 2N parameters of this model can be estimated by forming a linear system of equations. Ultimately, once the model parameters are estimated, the SST estimate itself at time tk is evaluated by setting t = tk in (4) to obtain:
$${\widehat{s}}_{m,k}\equiv {s}_{m}({t}_{k};{t}_{k})={s}_{0,k}+\mathop{\sum }\limits_{n=1}^{N}{\alpha }_{n,k},$$
which involves only N + 1 parameters of the P + 1 + 2N estimated parameters. The other N + P parameters nevertheless provide further physical information such as the SST tendency for the non-diurnal evolution (e.g. \({s}_{1,k}=\partial {s}_{P}({t}_{k};{t}_{k})/\partial t\) if P ≥ 1) or the phase and amplitude of the diurnal harmonics:
$${\phi }_{n,k}={\rm{\arctan }}\left(\frac{-{\beta }_{n,k}}{{\alpha }_{n,k}}\right),$$
$${A}_{n,k}=\frac{{\alpha }_{n,k}}{{\rm{\cos }}\;{\phi }_{n,k}}.$$
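For concreteness, a minimal sketch (our own naming, not the production code of the DPC) of recovering each harmonic's amplitude and phase from the fitted coefficients; numpy.arctan2 resolves the quadrant ambiguity that a plain arctangent of the ratio would leave:

```python
import numpy as np

def amplitude_phase(alpha_nk, beta_nk):
    """Recover A_{n,k} and phi_{n,k} from the fitted cosine/sine
    coefficients alpha_{n,k} = A cos(phi) and beta_{n,k} = -A sin(phi)."""
    phi = np.arctan2(-beta_nk, alpha_nk)  # quadrant-safe version of arctan(-beta/alpha)
    A = np.hypot(alpha_nk, beta_nk)       # equals alpha/cos(phi) when cos(phi) != 0
    return A, phi
```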
Ultimately, we will select P = 1 and N = 3 on the basis of the analyses of two subsets of surface drifters, as explained in the section Model selection. As explained further in the next section, the model is first fitted at all original observation times of a drifter trajectory in an iterative manner, in order to gradually adjust the weight of the data in the estimation as well as to identify outlier data points. After a given number of iterations, the model is ultimately fitted once at regular, top-of-the-hour, times that do not typically coincide with the original times.
Estimation of model parameters and SST
The devised method to estimate SST continuously along a drifter trajectory is adapted from the method known as locally weighted scatterplot smoothing, or LOWESS7. This method is iterative, and thus robust to outlying data points, which are commonly observed in SST time series from surface drifters (see an example in Fig. 3). Our method goes as follows. For a given SST time series from a single drifter, for each SST observation sk at time tk, we compute by weighted least squares the P + 1 + 2N parameters of the model sm that minimize
$$\mathop{\sum }\limits_{i=1}^{K}{\left[{s}_{i}-{s}_{m}({t}_{i};{t}_{k})\right]}^{2}{K}_{{h}_{k},i},$$
where \({K}_{{h}_{k},i}\) is a set of weights given by
$${K}_{{h}_{k},i}=K\left(\frac{{t}_{i}-{t}_{k}}{{h}_{k}}\right),$$
with K the tricube kernel function7:
$$K(\tau )={\left(1-| \tau {| }^{3}\right)}^{3}{I}_{\left[-1,1\right]}(\tau ),\quad {\rm{with}}\,\;{I}_{[-1,1]}(\tau )=\left\{\begin{array}{ll}1, & | \tau | \le 1\\ 0, & | \tau | > 1.\end{array}\right.$$
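As an illustration, a minimal NumPy sketch of the kernel weights of Eq. (11); the function names are ours, and t, t_k, and h_k are assumed to be expressed in consistent time units:

```python
import numpy as np

def tricube(tau):
    """Tricube kernel K(tau) = (1 - |tau|^3)^3 on [-1, 1], zero elsewhere."""
    tau = np.asarray(tau, dtype=float)
    out = (1.0 - np.abs(tau) ** 3) ** 3
    return np.where(np.abs(tau) <= 1.0, out, 0.0)

def kernel_weights(t, t_k, h_k):
    """Weights K_{h_k,i} for observation times t around estimation time t_k."""
    return tricube((np.asarray(t, dtype=float) - t_k) / h_k)
```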
Time series of SST data for GDP drifter ID 55366 (WMO number 3100541). This drifter was built by Pacific Gyre and is of the Surface Velocity Program (SVP) type, tracked by the Argos positioning system. The median time interval between SST observations for this drifter is about 52 min. The SST equation for this drifter is SST(°C) = 0.05 × n − 5.00 where n is a 10-bit sensor count. This equation defines the data resolution (0.05 °C) as well as the minimum value (0.05 × 0 − 5.00 = −5.00 °C) and maximum value (0.05 × (2¹⁰ − 1) − 5.00 = 46.15 °C) that should be returned by the temperature sensor, indicated by the horizontal dashed lines on this figure. The Level-1 data (sk) are indicated by gray dots. The circled dots are the data points that are ultimately down-weighted to zero by the iterative estimation method, and thus flagged as outliers (δk = 0).
In (11), hk is called the bandwidth of the kernel K, that is, the half-width of the temporal window around the observation time tk within which the weights \({K}_{{h}_{k},i}\) are different from zero. The least squares calculation therefore involves practically only those data points with non-zero weights. Because the complete model sm includes a diurnal oscillation, we initially set hk = 1 day for all points, but this value is automatically and gradually increased as needed in 1-hour steps, in order to include more data points and ensure that the least squares system of equations is not underdetermined, up to a maximum value of 2 days. If not enough data points are available within the temporal window of maximum length, then no SST estimate is obtained. For the data selected to match the GDP hourly dataset version 1.04c (released in February 2021, with data through June 2020), fewer than 0.4% of the data points require a half-bandwidth longer than 1 day.
Using matrix notation for convenience, the minimization problem (10) can be written
$$\mathop{{\rm{\min }}}\limits_{{\boldsymbol{\beta }}}{({\bf{s}}-{\bf{X\beta }})}^{T}{\bf{W}}({\bf{s}}-{\bf{X\beta }}),$$
with solution
$$\widehat{{\boldsymbol{\beta }}}={({{\bf{X}}}^{T}{\bf{WX}})}^{-1}{{\bf{X}}}^{T}{\bf{Ws}}.$$
In (13), X is the design matrix for linear model (4):
$${\bf{X}}=\left[\begin{array}{lll}{{\bf{X}}}_{1} & {{\bf{X}}}_{2} & {{\bf{X}}}_{3}\end{array}\right],$$
$${{\bf{X}}}_{1}=\left[\begin{array}{llll}1 & ({t}_{1}-{t}_{k}) & \cdots & {({t}_{1}-{t}_{k})}^{P}\\ \vdots & \vdots & & \vdots \\ 1 & ({t}_{i}-{t}_{k}) & \cdots & {({t}_{i}-{t}_{k})}^{P}\\ \vdots & \vdots & & \vdots \\ 1 & ({t}_{K}-{t}_{k}) & \cdots & {({t}_{K}-{t}_{k})}^{P}\end{array}\right],$$
$${{\bf{X}}}_{2}=\left[\begin{array}{llll}{\rm{\cos }}\{\omega ({t}_{1}-{t}_{k})\} & {\rm{\cos }}\{2\omega ({t}_{1}-{t}_{k})\} & \cdots & {\rm{\cos }}\{N\omega ({t}_{1}-{t}_{k})\}\\ \vdots & \vdots & & \vdots \\ {\rm{\cos }}\{\omega ({t}_{i}-{t}_{k})\} & {\rm{\cos }}\{2\omega ({t}_{i}-{t}_{k})\} & \cdots & {\rm{\cos }}\{N\omega ({t}_{i}-{t}_{k})\}\\ \vdots & \vdots & & \vdots \\ {\rm{\cos }}\{\omega ({t}_{K}-{t}_{k})\} & {\rm{\cos }}\{2\omega ({t}_{K}-{t}_{k})\} & \cdots & {\rm{\cos }}\{N\omega ({t}_{K}-{t}_{k})\}\end{array}\right],$$
$${{\bf{X}}}_{3}=\left[\begin{array}{llll}{\rm{\sin }}\{\omega ({t}_{1}-{t}_{k})\} & {\rm{\sin }}\{2\omega ({t}_{1}-{t}_{k})\} & \cdots & {\rm{\sin }}\{N\omega ({t}_{1}-{t}_{k})\}\\ \vdots & \vdots & & \vdots \\ {\rm{\sin }}\{\omega ({t}_{i}-{t}_{k})\} & {\rm{\sin }}\{2\omega ({t}_{i}-{t}_{k})\} & \cdots & {\rm{\sin }}\{N\omega ({t}_{i}-{t}_{k})\}\\ \vdots & \vdots & & \vdots \\ {\rm{\sin }}\{\omega ({t}_{K}-{t}_{k})\} & {\rm{\sin }}\{2\omega ({t}_{K}-{t}_{k})\} & \cdots & {\rm{\sin }}\{N\omega ({t}_{K}-{t}_{k})\}\end{array}\right].$$
The weight matrix W is defined by
$${\bf{W}}={\rm{diag}}\left\{{K}_{{h}_{k},i}\right\},$$
and s and β are the vector of data points and the vector of dimension P + 1 + 2N of parameters to be estimated, respectively:
$${\bf{s}}=\left[\begin{array}{l}{s}_{1}\\ \vdots \\ {s}_{K}\end{array}\right],\quad {\boldsymbol{\beta }}=\left[\begin{array}{l}{s}_{0,k}\\ \vdots \\ {s}_{P,k}\\ {\alpha }_{1,k}\\ \vdots \\ {\alpha }_{N,k}\\ {\beta }_{1,k}\\ \vdots \\ {\beta }_{N,k}\end{array}\right].$$
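For concreteness, a sketch (our own naming) of building the design matrix and solving (13) by the normal equations, with times assumed in days so that ω = 2π radians per day; a production implementation would also handle rank-deficient windows:

```python
import numpy as np

OMEGA = 2.0 * np.pi  # diurnal frequency, radians per day

def design_matrix(t, t_k, P=1, N=3):
    """Design matrix X = [X1 X2 X3] of the temporal model (4), times t in days."""
    dt = np.asarray(t, dtype=float) - t_k
    X1 = np.column_stack([dt ** p for p in range(P + 1)])                    # polynomial block
    X2 = np.column_stack([np.cos(n * OMEGA * dt) for n in range(1, N + 1)])  # cosine block
    X3 = np.column_stack([np.sin(n * OMEGA * dt) for n in range(1, N + 1)])  # sine block
    return np.hstack([X1, X2, X3])

def wls_fit(t, s, t_k, w, P=1, N=3):
    """Weighted least squares estimate beta_hat = (X^T W X)^{-1} X^T W s,
    with W = diag(w); returns the parameter vector and the design matrix."""
    X = design_matrix(t, t_k, P, N)
    Xw = X * np.asarray(w, dtype=float)[:, None]   # rows of X scaled by the weights
    beta_hat = np.linalg.solve(Xw.T @ X, Xw.T @ np.asarray(s, dtype=float))
    return beta_hat, X
```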
Next, following the initial iteration of estimating the model parameters and calculating the corresponding SST estimates at all times tk, we consider the residuals at all observation times:
$${r}_{k}={s}_{k}-{\widehat{s}}_{m,k},$$
and compute M, the median value of the distribution of their absolute values. A set of robust weights is next calculated as
$${\delta }_{k}=B\left(\frac{{r}_{k}}{DM}\right),$$
where
$$B(t)={\left(1-| t{| }^{2}\right)}^{2}{I}_{[-1,1]}(t)$$
is the biweight kernel function7 and D is a real factor to be determined.
The next step of the method consists in iterating the weighted least squares estimation of all parameters of the model at all times tk, but this time using modified weights \({\delta }_{i}{K}_{{h}_{k},i}\) instead of \({K}_{{h}_{k},i}\) in (10). How many data points are down-weighted depends on the coefficient D in the denominator of (22), which is typically set to 6 in the original LOWESS method7 but here is set to 14, as discussed in the section Model selection. The number of iterations is chosen here to be three after the initial least squares estimation without modified weights. The modified weights can effectively become zero when δk = 0, that is, when the absolute value of a residual is larger than D times M for a given SST time series associated with one drifter trajectory. This implies that such data points are ultimately not used for any estimation, but SST values at the corresponding times are nevertheless obtained using all available non-zero-weighted data points within the temporal window centered on any of these points. This method effectively flags as outliers some of the Level-1 SST data points, as a "de-spiking" procedure would do, for example by applying a median filter3. An example of flagged outliers in a Level-1 drifter SST time series is shown in Fig. 3. One implicit assumption of using (22) to modify the weights of the data is that all residuals from a given drifter SST time series originate from a common distribution, or equivalently that the statistics of the observations are constant within a given trajectory. This assumption may be violated if an entire drifter trajectory is long enough to experience changes in environmental conditions, or if the characteristics of the SST sensor change in an undetected fashion. An illustration of a Level-2 estimation step is provided in Fig. 4.
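A minimal sketch (our naming) of this robust reweighting step; the guard against a zero median absolute residual is our own addition for degenerate, perfectly fitted records:

```python
import numpy as np

def biweight(t):
    """Biweight kernel B(t) = (1 - |t|^2)^2 on [-1, 1], zero elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1.0, (1.0 - t ** 2) ** 2, 0.0)

def robust_weights(residuals, D=14.0):
    """Robust weights delta_k = B(r_k / (D M)), with M the median absolute
    residual over the whole trajectory; delta_k = 0 flags an outlier."""
    r = np.asarray(residuals, dtype=float)
    M = np.median(np.abs(r))
    if M == 0.0:
        return np.ones_like(r)   # degenerate case: all residuals are zero
    return biweight(r / (D * M))
```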
Time series of SST estimates for GDP drifter ID 55366 (WMO number 3100541) between 2005/9/1 and 2005/9/10. Top panel: Black dots are the original SST data (Level-1, sk) and circles are the data points down-weighted to zero (δk = 0). The blue dots with vertical lines show the total SST estimates and their plus or minus two standard errors (\({\widehat{s}}_{m}\pm 2{\widehat{\sigma }}_{m}\)). The red dots and vertical lines show the non-diurnal SST estimates and their plus or minus two standard errors (\({\widehat{s}}_{P}\pm 2{\widehat{\sigma }}_{P}\)). The blue estimates are the sum of the red estimates and the purple estimates shown in the lower panel of the figure. Lower panel: Black dots show SST data minus the non-diurnal SST estimate (\({s}_{k}-{\widehat{s}}_{P}\)). The purple dots and vertical lines show the corresponding diurnal SST estimates and their plus or minus two standard errors (\({\widehat{s}}_{D}\pm 2{\widehat{\sigma }}_{D}\)).
Finally, as the last step of the method, the SST model (4) with the same number of parameters is fitted to the data, but at times tk corresponding to the top of the hour UTC (00:00, 01:00, etc.), in one iteration with weights \({\delta }_{i}{K}_{{h}_{k},i}\), where the δi are those calculated prior to the last iteration at the original data times. As before, the bandwidth is set to 1 day but is allowed to increase in increments of one hour, up to two days, to make sure the linear estimation problem is not underdetermined. This last step generates the final Level-3 data product. An illustration of Level-3 estimated data is provided in Fig. 5.
Time series of continuous hourly SST estimates for GDP drifter ID 55366 (WMO number 3100541) between 2005/9/1 and 2005/9/10. Top panel: Black dots show the original SST data (Level-1, sk). The blue line and shaded region show continuously the hourly total SST estimates and twice their standard errors (\({\widehat{s}}_{m}\pm 2{\widehat{\sigma }}_{m}\)). The red line and shaded region show continuously the hourly non-diurnal SST estimates and twice their standard errors (\({\widehat{s}}_{P}\pm 2{\widehat{\sigma }}_{P}\)). The blue line is the sum of the red line and the purple line shown in the lower panel. Lower panel: Black dots show SST data minus the non-diurnal SST estimate (\({s}_{k}-{\widehat{s}}_{P}\)). The purple line and shaded region show continuously the hourly diurnal SST estimates and twice their standard errors (\({\widehat{s}}_{D}\pm 2{\widehat{\sigma }}_{D}\)).
Error variance estimates and uncertainty estimates
As part of the method, we quantify the uncertainties of the parameter estimates and thus the uncertainties of the diurnal SST estimates, of the non-diurnal SST estimates, and of the total SST estimates. Formally, the covariance matrix of the weighted least squares solution at time tk is
$${{\bf{C}}}_{\beta }\equiv {\rm{Var}}\left(\widehat{\beta }\right)={({{\bf{X}}}^{T}{{\bf{W}}}^{* }{\bf{X}})}^{-1}({{\bf{X}}}^{T}{{\bf{W}}}^{* }\Sigma {{\bf{W}}}^{* }{\bf{X}}){({{\bf{X}}}^{T}{{\bf{W}}}^{* }{\bf{X}})}^{-1},$$
where W* is the weight matrix containing in its diagonal the modified weights \({\delta }_{i}{K}_{{h}_{k}}({t}_{i}-{t}_{k})\) from the penultimate iteration of the least squares estimation, and Σ is the unknown covariance matrix of the observation errors from the process model (1). In order to proceed, we assume local homoscedasticity and that the errors are independent which results in \({\boldsymbol{\Sigma }}={\sigma }^{2}({t}_{k}){\bf{I}}\), where the local error variance \({\sigma }^{2}({t}_{k})\) is unknown and needs to be estimated. In the case of a local polynomial regression of order P, it is recommended20 to re-conduct a polynomial fit of order P + 2 and to estimate the error variance from the residuals of that fit. In our case, which is not a sole polynomial regression since model (2) also includes trigonometric functions, the optimal course of action is unclear. Yet, to proceed, we classically calculate a first estimate of the error variance from the normalized weighted residual sum of squares:
$${\widehat{\sigma }}_{1}^{2}({t}_{k})=\frac{{\left({\bf{s}}-{\bf{X}}\widehat{\beta }\right)}^{T}{{\bf{W}}}^{* }\left({\bf{s}}-{\bf{X}}\widehat{\beta }\right)}{{\rm{tr}}\left\{{{\bf{W}}}^{* }-{{\bf{W}}}^{* }{\bf{X}}{({{\bf{X}}}^{T}{{\bf{W}}}^{* }{\bf{X}})}^{-1}{{\bf{X}}}^{T}{{\bf{W}}}^{* }\right\}}$$
$$=\frac{{\Sigma }_{i}{\left[{s}_{i}-{\widehat{s}}_{m}({t}_{i};{t}_{k})\right]}^{2}{\delta }_{i}{K}_{{h}_{k}}({t}_{i}-{t}_{k})}{\nu }.$$
The denominator of (25), referred to as ν in (26), is the effective number of degrees of freedom for the residuals for weighted least squares20. For ordinary least squares, ν would simply be the number of data points used to calculate \(\widehat{\beta }\) minus the number of parameters of the model (P + 1 + 2N), but for weighted least squares cases, ν is smaller.
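As a sketch (our naming), Eqs. (25) and (26) can be evaluated directly from the design matrix X and the modified weights within one estimation window; the dense linear algebra used here assumes window-sized matrices:

```python
import numpy as np

def error_variance(s, X, w_star, beta_hat):
    """First error variance estimate sigma_1^2 (Eq. 25) and the effective
    degrees of freedom nu (Eq. 26), with w_star the modified weights
    delta_i * K_{h_k,i} from the penultimate iteration."""
    s = np.asarray(s, dtype=float)
    w = np.asarray(w_star, dtype=float)
    r = s - X @ beta_hat
    rss = np.sum(w * r ** 2)                      # weighted residual sum of squares
    Xw = X * w[:, None]
    # trace of W* X (X^T W* X)^{-1} X^T W*, the weighted "hat"-type term
    hat_trace = np.trace(Xw @ np.linalg.solve(Xw.T @ X, Xw.T))
    nu = np.sum(w) - hat_trace
    return rss / nu, nu
```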
The majority of drifters from the GDP database are equipped with temperature sensors returning a bit count n used to calculate SST following the sensor equation:
$$SST=an+b,$$
where a is the resolution of the temperature sensor. As such, the Level-1 data are rounded according to the resolution of the instrument recording. In the signal processing literature this is known as quantization, and it has the effect of removing high-resolution information in the data. As a result, the estimated error variance [\({\widehat{\sigma }}_{1}^{2}({t}_{k})\), (25)] should be increased to reflect the additional uncertainty created through quantization, as this information cannot be recovered. In the extreme case that the input data take the same value within a full window length, the increase to the error variance is \({a}^{2}/12\), following from the properties of the uniform distribution8. As a result, adjusting for resolution, our total error variance is
$${\widehat{\sigma }}^{2}({t}_{k})={\widehat{\sigma }}_{1}^{2}({t}_{k})+\frac{{a}^{2}}{12}.$$
This adjustment is conservative, in that the effect of resolution on the error variance will decrease as the input values have more variance8. For simplicity, we use the conservative adjustment proposed above as this ensures the reported standard errors always include the resolution effect which should not be ignored.
For about 85% of the drifters, representing 83% of the Level-2 estimates, the resolution a can be obtained from the individual specification sheets provided by the manufacturers. We identify in this way 179 different resolutions, ranging from 0.00260877 °C to 0.17 °C. The three most common resolutions are 0.01 °C, 0.05 °C, and 0.08 °C. For the remaining 15% of the drifters, some have an SST equation which is not a linear function of a single sensor bit count, so the impact of the quantization error cannot be simply modeled as in (28). Some other drifters have an unknown resolution because of the lack of available metadata. For these drifters, we estimate the resolution from the data as follows: we consider the time series of absolute SST temporal differences, bin these differences in 0.001 °C bins, and assign the resolution to the most common value that is not zero. In this way, the three most commonly estimated resolutions are 0.05 °C, 0.08 °C, and 0.043 °C. This method is successful in 92% of cases when tested on the data of the drifters for which the resolution is known from the metadata. The overall distribution of all resolution values, as well as their temporal distribution, are illustrated in Fig. 6.
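A sketch of this fallback estimate (our naming): bin the non-zero absolute first differences of an SST record at 0.001 °C and return the most common bin value:

```python
import numpy as np

def estimate_resolution(sst, bin_width=0.001):
    """Estimate the sensor resolution (degC) from the data, as the most
    common non-zero absolute SST difference binned at bin_width."""
    d = np.abs(np.diff(np.asarray(sst, dtype=float)))
    d = d[d > 0.0]                                  # ignore zero differences
    bins = np.round(d / bin_width).astype(int)      # 0.001 degC bins
    values, counts = np.unique(bins, return_counts=True)
    return values[np.argmax(counts)] * bin_width
```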
Distributions of drifter SST sensor resolutions. (a) Temporal distribution of drifter SST resolution in the GDP database from February 1979 to July 2020. The blue points correspond to resolutions a obtained from the drifter metadata from the SST equation: SST(°C) = a × n + b where n is a bit sensor count. The red points correspond to drifters for which the resolution is not available from the metadata and was estimated directly from the data (see text). (b) Histogram of drifter SST resolution values in 0.001 °C bins. The red bars correspond to the estimated resolution values and the gray bars correspond to all values. Note that the horizontal axis is on a log scale. The three most common resolution values in the dataset are, in order: 0.05, 0.01, and 0.08 °C.
We found it necessary to consider the resolution error for two reasons. First, since we have allowed our estimation algorithm to obtain a solution with as few data points as the number of model parameters to be estimated, and because of numerical precision errors, we find a small number of instances (0.33% of the Level-2 data) for which ν, and therefore the first estimated error variance \({\widehat{\sigma }}_{1}^{2}({t}_{k})\), is small and negative. These instances are resolved by adding the resolution error. Second, in some other instances (0.22% of the Level-2 results, see Fig. 7), we find that the residuals, and hence the estimated error variance and the parameter uncertainties, are locally zero within numerical precision despite an ample number of data points available for the estimation. This occurs when the Level-1 SST data do not change in value within the estimation window, for reasons which are not clear. Once again, these instances are resolved by adding the resolution error, resulting in more realistic error estimates. Nevertheless, these two instances define two populations of the results that are clearly separated within a two-dimensional histogram of \({\widehat{\sigma }}_{1}^{2}({t}_{k})\) and ν (Fig. 7). As such, we can flag these results using an empirical and ad-hoc condition:
$${{\rm{\log }}}_{10}{\left[{\widehat{\sigma }}_{1}^{2}\right]}^{1/2} < -\frac{1}{2}{{\rm{\log }}}_{10}| \nu | -10.$$
Two-dimensional histogram of the effective degrees of freedom for the residuals (ν) and estimates of the error variance neglecting the resolution error variance [\({\widehat{\sigma }}_{1}^{2}({t}_{k})\)] for Level-2 data results. The two populations found below the black dashed line (\({{\rm{\log }}}_{10}{[{\widehat{\sigma }}_{1}^{2}]}^{1/2} < -\frac{1}{2}{{\rm{\log }}}_{10}| \nu | -10\)) correspond to failed estimations of the error variance and are flagged with quality flag 1. The left population below the dashed line (0.33% of the data) corresponds to negative estimated variances (see text). The right population below the dashed line (0.22% of the data) corresponds to uncharacteristically flat SST records leading to unrealistic near-zero estimated error variances. The peak of the distribution is found within the upper-right population for ν ≈ 22.4 and \({[{\widehat{\sigma }}_{1}^{2}({t}_{k})]}^{1/2}\approx 0.028\) °C.
The final error variance estimate \({\widehat{\sigma }}^{2}({t}_{k})\) (28) is a function of time tk and specific to a drifter because of the sensor resolution. This estimate is subsequently used to calculate an estimate of the local covariance matrix of the observations \(\widehat{{\boldsymbol{\Sigma }}}={\widehat{\sigma }}^{2}({t}_{k}){\bf{I}}\) and to calculate the covariance matrix Cβ [expression (24)].
From the expression for the SST estimate (7), its variance is
$${\sigma }_{m}^{2}\equiv {\rm{Var}}\left[{s}_{m}({t}_{k};\,{t}_{k})\right]={\rm{Var}}\left[{s}_{0,k}+{\Sigma }_{n=1}^{N}{\alpha }_{n,k}\right]$$
$$=\;{\rm{Var}}\left[{s}_{0,k}\right]+2{\rm{Cov}}\left[\left({s}_{0,k}\right),\left({\Sigma }_{n=1}^{N}{\alpha }_{n,k}\right)\right]+{\rm{Var}}\left[{\Sigma }_{n=1}^{N}{\alpha }_{n,k}\right]$$
$$=\;{\sigma }_{P}^{2}+2{\rm{Cov}}\left[\left({s}_{0,k}\right),\left({\Sigma }_{n=1}^{N}{\alpha }_{n,k}\right)\right]+{\sigma }_{D}^{2}.$$
This last expression describes how the variance of the total SST estimate (\({\sigma }_{m}^{2}\)) is the sum of the variance of the non-diurnal SST estimate (\({\sigma }_{P}^{2}\), containing one term), of the variance of the diurnal estimate (\({\sigma }_{D}^{2}\), containing \({N}^{2}\) terms), and of 2N additional cross-covariance terms. The \({(N+1)}^{2}\) needed terms to estimate \({\sigma }_{m}^{2}\), \({\sigma }_{P}^{2}\), and \({\sigma }_{D}^{2}\) are extracted and summed appropriately from the calculated covariance matrix Cβ [expression (24)] at each time step. The square root of each of these three estimated variances, referred to subsequently as \({\widehat{\sigma }}_{m}\), \({\widehat{\sigma }}_{P}\), and \({\widehat{\sigma }}_{D}\), defines the standard errors, or standard uncertainties, of the three SST estimates. Illustrations of estimated square roots of error variances and SST uncertainties are provided in Figs. 4, 5, and 8. These figures, and Fig. 8 in particular, illustrate that the uncertainty estimates are temporally correlated for each individual drifter. Because the estimation procedure takes place within a sliding window, one should expect correlation at least among uncertainty estimates separated in time by less than the total length of the window (or twice the bandwidth hk, typically 2 days). Additional discussions of error variance and uncertainty estimates are provided in the section Interpretation of uncertainty estimates.
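For illustration, assuming the parameter ordering of the vector β given earlier (s_{0,k}..s_{P,k}, α_{1,k}..α_{N,k}, β_{1,k}..β_{N,k}), the three variances can be assembled from Cβ as follows (a sketch, our naming):

```python
import numpy as np

def sst_variances(C_beta, P=1, N=3):
    """Total, non-diurnal, and diurnal SST estimate variances from the
    parameter covariance matrix C_beta (Eq. 24)."""
    i0 = 0                               # index of s_{0,k}
    ia = np.arange(P + 1, P + 1 + N)     # indices of alpha_{1,k}..alpha_{N,k}
    var_P = C_beta[i0, i0]                          # sigma_P^2: one term
    var_D = C_beta[np.ix_(ia, ia)].sum()            # sigma_D^2: N^2 terms
    cross = 2.0 * C_beta[i0, ia].sum()              # 2N cross-covariance terms
    return var_P + cross + var_D, var_P, var_D      # sigma_m^2, sigma_P^2, sigma_D^2
```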
Time series for GDP drifter ID 55366 (WMO ID 3100541) between 2005/9/1 and 2005/9/10 of SST standard error estimates (\({\widehat{\sigma }}_{m}\)), non-diurnal SST standard error estimates (\({\widehat{\sigma }}_{P}\)), diurnal SST standard error estimates (\({\widehat{\sigma }}_{D}\)), square root of error variance estimates from residuals [\(\sqrt{{\widehat{\sigma }}_{1}^{2}}\), Eq. (25)], square root of total error variance estimates [\(\sqrt{{\widehat{\sigma }}^{2}}\), Eq. (28)], and absolute residuals (\(| {\widehat{s}}_{m}-{s}_{k}| \)). The curves for \(\sqrt{{\widehat{\sigma }}_{1}^{2}}\) and \(\sqrt{{\widehat{\sigma }}^{2}}\) are most often indistinguishable except around 09/03 and 09/09-09/10.
Model selection
In order to fit the total SST model to the data, choices need to be made for the order P of the polynomial of the non-diurnal model and the number N of harmonics of the diurnal model. We consider a total of 14 models with P varying between 0 and 3, and N between 2 and 6 (Table 2). We test and assess the performances of the models by fitting them to two limited subsets of GDP drifters as it would be computationally prohibitive to conduct tests on the entire Level-1 data.
Table 2 Table of temporal models considered for SST as a function of polynomial order and number of diurnal harmonics.
The first subset is from the Salinity Processes in the Upper Ocean Regional Study (SPURS) in the subtropical North Atlantic13,25. The drifters released as part of SPURS were manufactured by Pacific Gyre Inc. but differed from standard SVP-type drifters of the GDP10. Instead of a temperature sensor on their buoys, these drifters were equipped with an unpumped Sea-Bird Electronics SBE37-SI MicroCAT CTD placed underneath the surface buoy with its sensors located at a depth of 50 cm. The MicroCAT instruments were set to acquire conductivity and temperature at 30-min intervals by sampling once a minute for 5 min and averaging the values. According to the manufacturer, the initial accuracy and resolution of the temperature sensor are 0.002 °C and 0.0001 °C respectively, but the data transmitted and relayed to the DAC exhibit a resolution of 0.01 °C. For this study, we select 80 drifters which generated temperature data (considered to be SST observations) for time periods spanning between 29 and 660 days. These drifters transmitted their locations and sensor data via the Argos satellite system, including their position data from GPS receivers. These GPS data were previously used as a test set to devise the methodology being used to generate the global dataset of hourly position and velocity for the GDP5. However, here, the original Argos message data files for these drifters are re-processed to eliminate redundant and corrupted data by taking into consideration a previously ignored checksum flag indicating the integrity of Argos data transmissions. Next, the SST time series are further truncated to match the beginnings and ends of the regular hourly time series of position and velocity for these drifters, as well as truncated for their first and last good data points as diagnosed by the QC procedures of the DAC. The resulting dataset consists of 80 time series of SST at uneven temporal intervals (multiples of 30 minutes), totalling nearly 1.26 M data points over 29,018 drifter days.
The second subset of drifter SST data, hereafter referred to as the "test" subset, is built from the global database by selecting at random 7 drifters within each of the fourteen 10° latitude bands between 70°S and 70°N with an average SST temporal sampling interval of between 50 and 70 minutes, resulting in a total of 98 individual SST time series which are further truncated in time for deployment times etc. The resulting dataset consists of 98 time series of SST at uneven temporal intervals, totalling 697,045 data points over 29,408 drifter days. These test drifters constitute a limited subset but represent a variety of drifter types deployed between years 2000 and 2019. Fifty of them are drifters with a barometer (SVPB type), and 48 are standard SVP drifters. Fifty-seven of them were Iridium drifters and 41 were Argos drifters. The test drifters were built by a variety of manufacturers: 18 by DBi, 19 by Metocean, 9 by Clearwater, 30 by Pacific Gyre, 18 by Scripps Institution of Oceanography, 2 by Technocean, 1 by Marlin-Yug, and 1 by NKE. Finally, the stated resolutions of their SST sensors as specified by their respective specification sheets were varied: 0.01 °C for 68 of them, 0.05 °C for 20, 0.04 °C for 2, 0.043 °C for 2, 0.04329 °C for 1, 0.04343 °C for 1, 0.08 °C for 1, and unknown for 3 of them.
We proceed to fit the 14 models listed in Table 2 to the SPURS and test subsets of drifter SST time series, and subsequently consider two statistics calculated for each time series. The first statistic is the weighted root mean square error (WRMSE) calculated from the residuals of a given fit. For this calculation, the weights are the robust weights calculated by the algorithm described previously after the penultimate iteration (that is, the weights calculated before the last estimation at the original times), but with a further normalization to ensure that the weights sum to one:
$${\rm{WRMSE}}={\left[\mathop{\sum }\limits_{k=1}^{M}{w}_{k}{({s}_{k}-{\widehat{s}}_{m,k})}^{2}\right]}^{1/2},$$
$${w}_{k}=\frac{{\delta }_{k}{K}_{k,k}}{{\sum }_{i=1}^{M}{\delta }_{i}{K}_{i,i}}.$$
The second statistic considered is the square root of the weighted median of the error variance estimates: after the final iteration, we consider the error variance estimates \({\widehat{\sigma }}^{2}({t}_{k})\) given by (28) and using the weights defined by (34), we calculate the weighted median defined as the value \({\widehat{\sigma }}^{2}({t}_{n})\) such that
$$\mathop{\sum }\limits_{k=1}^{n-1}{w}_{k}\le 1/2\,{\rm{and}}\;\mathop{\sum }\limits_{k=n+1}^{M}{w}_{k}\le 1/2.$$
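A minimal sketch of these two summary statistics (our naming), with the weights normalized to sum to one as stated above:

```python
import numpy as np

def wrmse(s, s_hat, w):
    """Weighted root mean square error of the residuals s - s_hat."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return np.sqrt(np.sum(w * (np.asarray(s) - np.asarray(s_hat)) ** 2))

def weighted_median(x, w):
    """Weighted median: smallest x_n whose cumulative weight reaches 1/2."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    order = np.argsort(x)
    cdf = np.cumsum(w[order]) / w.sum()
    return x[order][np.searchsorted(cdf, 0.5)]
```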
For these two statistics, using weighted calculations effectively filters out the outlier data points diagnosed by the method (those for which wk = 0). We also tried non-weighted calculations that include all data points: besides shifting the numerical values of the results in the direction of worsened performances, this did not change the relative performances of the models nor our overall conclusions and model selection choice.
We find that varying the number of parameters of either the diurnal model or the non-diurnal model affects the two statistics differently. We present the results in Fig. 9, displaying the WRMSE and the square root of the weighted median error variance, both in units of degrees Celsius. The figures display scatter plots of the two statistics averaged over each of the subsets, along with ellipses representing the 95% confidence intervals for the means in order to illustrate the scatter of the results. Not surprisingly, the scatter of the results is relatively smaller for the SPURS subset, for which the SST records have the same nominal characteristics, compared to the test subset composed of heterogeneous records.
Summary statistics for the 14 models listed in Table 2 for (a) the subset of 80 drifters from the SPURS experiment and (b) the subset of 98 "test" drifters selected from the global database. Colored dots with numbers indicate the average values of the square root of the weighted median of the error variance (horizontal axis) versus the average values of the weighted root mean square error (WRMSE, vertical axis). Ellipses correspond to 95% confidence intervals across ensemble statistics. Note the different axis ranges between panels (a) and (b). The black dotted line indicates the slope-1 intercept-0 curve.
For both data sets, we find that for a fixed number of harmonics of the diurnal model, increasing the polynomial order of the non-diurnal model (going right through the columns of Table 2) reduces the error variance with little change to the WRMSE. The most dramatic reduction occurs when going from models for which P = 0 (models 1, 2, and 3) to models for which P = 1 (models 4 and higher), for which the square root of the weighted median error variance is at least approximately halved. Conversely, for a fixed order of the polynomial non-diurnal model, we find that increasing the number of harmonics (going down the rows of Table 2) decreases the WRMSE with little change to the error variance. Considering these two general tendencies together, as well as the scatter of the results as depicted by the ellipses, we find that model 5 (P = 1 and N = 3) provides a good balance between the two statistics. Further, we find that from model 5, no significant improvement is obtained for either the WRMSE or the error variance by increasing the polynomial order from 1 to 2 (going to model 8), and no significant improvement is obtained for the error variance by increasing the number of harmonics from 3 to 4 (going to model 6). Significant improvements are obtained for both statistics by increasing both the polynomial order from 1 to 2 and the number of harmonics from 3 to 4 (going from model 5 to model 9) for the SPURS subset but not for the test subset, which is expected to be representative of a much greater fraction of the total data. We also tested models with P = 1 and N = 5, 6 (models 13 and 14) but these, while reducing significantly the WRMSE relative to model 5, did not reduce significantly the error variance, and started to show larger error variances for the test subset. As a result, model 5 is our final choice of model to be fitted to the entire SST drifter dataset to generate the Level-2 and Level-3 datasets.
We now discuss briefly the choice of the bandwidth length [hk, (11)] and the choice of the factor D for the robust weights [see (22)]. The sensitivity of the results to these choices is summarized in Fig. 10 for model 5 only. The choice of hk technically implies that data points within a 2hk window centered on the estimation time are considered [Eqs. (10) and (11)]. Yet, because the weighting window is not uniform but a tricube kernel, the effective number of degrees of freedom used for each estimation is closer to the number of data points one would find in a uniform window of length hk. Here, our choice hk = 1 day is based on observations that the characteristics of diurnal SST oscillations change on a daily time scale26. Nevertheless, we examine the summary statistics for model 5 for hk varying between 0.25 and 1.25 days at 0.25-day intervals for the test subset (Fig. 10). We find that decreasing hk to less than 1 day significantly decreases the error variance yet does not decrease the WRMSE, and thus does not overall improve the performances of model 5. In contrast, we find that increasing hk to 1.25 days significantly increases the WRMSE and increases the error variance. The results for the SPURS subset are similar (not shown). These overall results therefore suggest that hk = 1 day is an appropriate choice for the bandwidth.
Panel (a) Summary statistics for the 14 models for the test set of drifters as in Fig. 9(b). Here are also shown the results of varying the D factor in Eq. (22) from 4 to 20 in increments of 2 for model 5 (black dots and ellipses) and the results of varying the bandwidth parameter hk from 0.25 days to 1.25 days in increments of 0.25 (white dots and ellipses). The white ellipse around the black dot for model 5 corresponds to hk = 1 and D = 14 as in Fig. 9. Panel (b) Ensemble averages of the fraction of data points not labeled as outliers as a function of factor D. The shading indicates plus or minus one standard deviation around the ensemble averages. The vertical dotted line indicates D = 14 ultimately chosen here.
The choice of the factor D in the denominator of the biweight kernel for calculating the robust weights [Eq. (22)] effectively sets the threshold for labeling data points as outliers. Our final choice of D = 14 is compared to alternatively choosing D between 4 and 20 at intervals of 2. The sensitivity of the summary statistics to the value of D is displayed in Fig. 10 for the test subset. We find that varying D has a modest impact on the performances of model 5, and only for D less than 6 does model 5 exhibit significantly better WRMSE, but no better error standard deviation. We also consider the ensemble average of the fraction of data points not labeled as outliers as a function of the choice of D (Fig. 10), and find that this fraction starts to decrease strongly as D decreases from 8. In the original LOWESS method7, D is set to 6 without justification, and such a choice in our case would result in around 10% of the data points labeled as outliers. In the end, we settled on D = 14, which results in only between 1% and 4% of the data points being labeled as outliers, but maintains approximately the performance of model 5 compared to D = 6. The results are similar for the SPURS subset (not shown).
Quality indication
The Level-3 data product is intended to provide SST estimates contemporaneous to the estimated positions and velocities of drifters at hourly top-of-the-hour times from the hourly GDP dataset version 1.04c5, which with SST now included we shall call version 2.00. Since the sampling of SST sensors onboard drifters can be independent from the positioning, it sometimes occurs that our methodology is able to provide an SST estimate at times when no location estimate is available. Since there is little use for an SST estimate with no associated location estimate, these are not included in the Level-3 data product (see Table 1).
We devise three different quality indication flag schemes, one for each component of SST (non-diurnal, diurnal, total), with flag values ranging from 0 (worst) to 5 (best), with the intention of characterizing an increasing level of correctness. For all three schemes, when no SST estimate could be obtained from the methodology (for example, for lack of enough Level-1 data within the sliding temporal window), or when SST data were simply not transmitted by a drifter (for example, because of a faulty sensor), the estimate is assigned quality flag 0 (and the NetCDF file contains a standard filling value). When an SST estimate could be obtained but not an SST uncertainty estimate, the estimate is assigned quality flag 1 (and the NetCDF file contains a filling value for the uncertainty estimate).
For higher flag values, the schemes for the non-diurnal SST estimates and for the total SST estimates are the same, as illustrated in Fig. 11. When an SST estimate and an uncertainty estimate both exist, the quality flag is based on the relative position of the interval formed by the SST estimate plus or minus its standard error estimate with respect to the [−2,50] °C range of physically-acceptable temperature values27. If the estimated interval is completely contained within this range, the assigned quality flag is the highest, 5. If one or both end points of the interval are located outside of the range but the SST estimate is inside the range, the assigned quality flag is 4. If the SST estimate is outside the range but one of the end points of the interval is within the range, the assigned quality flag is 3. Finally, if the interval is located completely outside the physical range, then the quality flag is 2. For analyses of total and non-diurnal SST, only estimates with quality flags 4 and 5 should be utilized. Estimates with quality flags 1, 2, and 3 are suspect and should not be used: they suggest that the corresponding true SST values are located outside of the physically-acceptable range. These estimates are nevertheless retained in the dataset with their distinct flags for traceability of the methods.
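The flag logic just described can be written compactly; the sketch below (our naming) uses NaN for missing estimates and the [−2, 50] °C physical range:

```python
import math

def sst_quality_flag(s_hat, sigma_hat, lo=-2.0, hi=50.0):
    """Quality flag for a total or non-diurnal SST estimate, based on the
    interval [s_hat - sigma_hat, s_hat + sigma_hat] and the physical range."""
    if s_hat is None or math.isnan(s_hat):
        return 0                                  # no SST estimate
    if sigma_hat is None or math.isnan(sigma_hat):
        return 1                                  # no uncertainty estimate
    low_in = lo <= s_hat - sigma_hat <= hi
    high_in = lo <= s_hat + sigma_hat <= hi
    if low_in and high_in:
        return 5                                  # interval fully inside the range
    if lo <= s_hat <= hi:
        return 4                                  # estimate inside, an endpoint outside
    if low_in or high_in:
        return 3                                  # estimate outside, one endpoint inside
    return 2                                      # interval fully outside the range
```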
Illustration of the quality flag determination scheme for time series of non-diurnal and total SST estimates. Flag 0 indicates both missing SST estimates (\(\widehat{s}\)) and uncertainty estimates (\(\widehat{\sigma }\)). Flag 1 indicates a missing or failed uncertainty estimate only. Flags 2, 3, 4, and 5 are based on the values of \(\widehat{s}\) and \(\widehat{s}\pm \widehat{\sigma }\) with respect to the range of acceptable temperature values ([−2,50]°C).
The quality flag scheme for the diurnal SST estimates differs from the scheme described above because a diurnal SST estimate is an anomaly around zero for which a range of physically plausible values is not straightforward to define. A climatology of SST diurnal variability24, constructed by fitting a model to temperature observations from drifters within zonal bands, by seasons, and by environmental categories (clear or cloudy sky, wind speed), provides amplitudes of SST diurnal anomalies no larger than 2.4 °C (from observations) or 0.689 °C (from modeled values). Yet, diurnal warming as large as 6.6 °C has been detected in coastal regions28. As a consequence, rather than defining here an acceptable amplitude threshold for diurnal SST anomalies, we consider three criteria for the quality flag of a diurnal SST estimate:
(1) Is the absolute value of the diurnal anomaly estimate strictly larger than its standard error estimate?
(2) Is the standard error estimate for the diurnal estimate smaller than 1 °C?
(3) Were more than 24 Level-1 data points used to obtain the estimate?
As illustrated in Fig. 12, criteria (1) and (2) define specific sub-regions in the parameter space defined by the absolute value of the diurnal estimates and the value of the standard error estimate of the diurnal estimate. In contrast, criterion (3) does not strictly define a sub-region in that parameter space, but rather an average region which can be visualized by mapping in that space the average number of data points used for the estimations. On average, estimates obtained with 24 data points or more are found in the region of parameter space where the diurnal anomaly estimates are smaller than 10 °C and the standard errors of the estimates are most often smaller than 1 °C. In conclusion, we use the three criteria listed above to define mutually exclusive quality flags as follows: a quality flag 5 indicates that all criteria (1), (2), and (3) are fulfilled; a quality flag 4 indicates that (1) and (2) are fulfilled but not (3); a quality flag 3 indicates that (1) is fulfilled but not (2) nor (3); and quality flag 2 indicates that none are fulfilled. For analyses of diurnal SST, we recommend utilizing only estimates with quality flag 5. Further, a user may want to discard diurnal SST estimates for which the corresponding total and non-diurnal SST estimates both have quality flags less than 4. This occurs for less than 0.03% of the diurnal SST estimates with quality flag 5.
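A sketch of the resulting flag cascade (our naming; flags 0 and 1 for missing estimates or uncertainties are assigned as in the other schemes and omitted here):

```python
def diurnal_quality_flag(s_D, sigma_D, n_points):
    """Diurnal SST quality flag from the three criteria above:
    (1) |s_D| > sigma_D, (2) sigma_D < 1 degC, (3) more than 24 points used."""
    c1 = abs(s_D) > sigma_D
    c2 = sigma_D < 1.0
    c3 = n_points > 24
    if c1 and c2 and c3:
        return 5          # all three criteria fulfilled
    if c1 and c2:
        return 4          # (1) and (2) fulfilled but not (3)
    if c1:
        return 3          # only (1) fulfilled
    return 2              # (1) not fulfilled
```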
(a) Two-dimensional histogram of absolute diurnal SST estimates (\({\widehat{s}}_{D}\)) and standard error estimates for diurnal estimates (\({\widehat{\sigma }}_{D}\)) for Level-3 data. The dotted line corresponds to the slope 1 line (\({\widehat{\sigma }}_{D}={\widehat{s}}_{D}\)). (b) Average number of Level-1 data points used for estimating SST mapped onto the two-dimensional distribution shown in the top panel. The black contour corresponds to 24 data points on average.
The inventory of Level-3 estimates for each type (total, non-diurnal, and diurnal) and each quality flag class (0 to 5) is provided in Table 3. The number of position and velocity estimates for the GDP hourly dataset version 2.00 is 165,754,333 from 17,324 individual drifter trajectories. Of this target number, 95.59% have quality flag 5 for the total SST estimates, 95.60% have quality flag 5 for the non-diurnal SST estimates, but only 75.58% have quality flag 5 for the diurnal SST estimates. Note that estimates of total SST and non-diurnal SST with quality flag 3 or 2 are outside the physically-acceptable range of values and should be used and interpreted with extreme caution. We assessed that diurnal SST estimates with quality flag 5 are plausible, but we could not conclude the same for lesser quality flags.
Table 3 Inventory of quality flags for Level-3 estimates.
Interpretation of uncertainty estimates
In order to interpret our uncertainty estimates, we examine the distribution of the residuals of all model fits, normalized by their associated estimates of error standard deviations. This constitutes an assessment of the distribution of the error term εi of the process model (1):
$${\widehat{\varepsilon }}_{k}=\frac{{s}_{k}-{\widehat{s}}_{m,k}}{\widehat{\sigma }({t}_{k})}.$$
The results are shown in Fig. 13 for both the SPURS and test drifter subsets. For both sets, the distributions are never Gaussian for any of the models. The distributions are nearly centered but exhibit central peaks narrower than those of Gaussian distributions with the same means and standard deviations (only comparisons to model 5 are shown). We observe that increasing the number of harmonics of the diurnal oscillation model consistently renders the peak of the residual distribution narrower and higher, and the tails slightly lighter. The opposite is true when increasing the order of the non-diurnal polynomial model, while the distributions remain non-Gaussian in the sense of exhibiting a higher kurtosis. We find that a t location-scale distribution (also known as a non-standardized Student's t distribution), previously used to model Argos location errors5, is a better fit to the observed distributions than Gaussian distributions, yet still does not completely capture their shapes (not shown). An implication of the non-Gaussianity of the normalized residuals is that the error term εi of the process model (1) is also not Gaussian-distributed. As a result, a classic least squares estimation of the parameters of the models would tend to give too much weight to outliers in the data. Fortunately, we are applying an iterative least squares estimation method based on the LOWESS7 which is expected to temper such outliers, but the exact impact on the estimation is difficult to quantify here.
Probability density function (PDF) estimates of normalized residuals following the fitting of the 14 models for (a) the subset of 80 drifters from the SPURS experiment and (b) the subset of 98 "test" drifters selected from the global database. The PDFs are estimated using an Epanechnikov kernel20 at 0.01 resolution using only residuals with non-zero final robust weights. In each panel, the thin dashed black lines indicate the 16-th and 84-th percentiles of the distribution of residuals for model 5 whereas the thick dashed black lines indicate the 2.5-th and 97.5-th percentiles. The gray curve in each panel corresponds to the fit to a normal distribution for the residuals for model 5 and the gray dashed vertical lines indicate the mean plus or minus 1 (thin line) and 1.96 (thick line) standard deviation and therefore correspond to the 2.5-, 16-, 84-, and 97.5-th percentiles of that fitted normal distribution.
Nevertheless, a further implication of the non-Gaussianity is that caution should be taken when interpreting the standard errors for the SST estimates described above: whereas for Gaussian-distributed errors one standard error can be used to calculate a 68% confidence interval for an estimate, in our case a standard error represents an interval encompassing more probable values of the true unknown value of a quantity, and thus a more conservative confidence interval. As shown in Fig. 13 for model 5, the 16-th and 84-th percentiles, encompassing 68% of the residual distribution, define an interval narrower than the interval defined by plus or minus one sample standard deviation around the sample mean. Plus or minus one standard deviation actually encompasses approximately 78% of the distribution of the residuals for model 5 (and approximately the same percentage for the other models, not shown). In other words, the standard error for our estimates can be interpreted as being representative of a 78% confidence interval rather than a 68% confidence interval. In contrast, the 2.5-th and 97.5-th percentiles, encompassing 95% of the distribution, define an interval slightly wider but close to the one defined by plus or minus 1.96 sample standard deviations around the sample mean, which encompasses approximately 94% of the distribution of the residuals for model 5 (and approximately the same percentage for the other models, not shown). In conclusion, considering 1.96 standard errors to quantify uncertainty in this case happens to represent an approximate 95% confidence interval, as would be the case if the errors were Gaussian-distributed. Note that for the Level-3 hourly product (Table 1), the uncertainty estimates provided for location and velocity are 95% confidence intervals, whereas for SST estimates the uncertainty estimates are standard error estimates.
Global characteristics of error variance estimates and uncertainty estimates
In Fig. 14, we examine the distribution of error variance estimates from residuals [\({\widehat{\sigma }}_{1}^{2}\), Eq. (25)], not including data points for which the error estimation failed, and the distribution of total error variance estimates incorporating the resolution error variance [\({\widehat{\sigma }}^{2}\), Eq. (28)] for Level-3 estimates. We show the distributions for Level-3 estimates only because the ones for Level-2 estimates are extremely similar. We also report some statistics rounded to the nearest 0.001 in Table 4, which differ by no more than 0.001 °C between Level-2 and Level-3 estimates. Based on the distributions in Fig. 14, we assess that the mode value, or most probable value, of the square root of the error variance estimates from residuals is 0.020 °C for drogued drifters, but is 50% larger, at 0.030 °C, for undrogued drifters. Over all data, the mode value is 0.026 °C. Further, we assess that the mode value of the square root of the error variance estimates incorporating the resolution error variance is 0.031 °C for drogued drifters and 0.036 °C for undrogued drifters. Over all data, the mode value of the square root of the total error variance estimates is 0.033 °C. Median values of each of these variables are typically higher by a few 1/1000-th of a degree (see Table 4): the overall median value of the square root of the total error variance estimates is 0.036 °C. The distribution of the total error variance is however not unimodal (Fig. 14, right) because the resolution error variance is dominated by a few discrete values (Fig. 6).
Distribution of error variance estimates from residuals [left, \({\widehat{\sigma }}_{1}^{2}\), Eq. (25)] and total error variance estimates incorporating the resolution error [right, \({\widehat{\sigma }}^{2}\), Eq. (28)] for Level-3 data. The histograms of the decimal logarithm of the square root of the estimates are displayed. Mode values at the peak of the distributions and 50-th percentile values are listed in Table 4.
Table 4 Statistics of error variance estimates from residuals [\({\widehat{\sigma }}_{1}^{2}\), Eq. (25)] and final error variance estimates incorporating the resolution error [\({\widehat{\sigma }}^{2}\), Eq. (28)].
The error variance estimates are, however, very heterogeneous in space, which is revealed when these estimates are averaged in half-degree geographical bins (Fig. 15, top). The spatial distribution of the error variance estimates is clearly related to ocean surface dynamics: they are found to be the highest in regions of high surface kinetic energy such as western boundary currents and equatorial regions29, but are also relatively high at mid-latitudes within regions of high wind stress variability. The largest mean error variance estimates are found on average within the Agulhas Retroflection region in the Indian Ocean and north of the Gulf Stream in the North Atlantic Ocean. The geographical distribution of error variance estimates suggests that the temporal evolution model (2) might be improved by allowing the order of the polynomial sP to change spatially, in order to reduce the error variance estimates in these regions. Such a spatially-dependent model is beyond the scope of the present work but may be investigated in the future.
Top: Square root of total error variance estimates \(\left[{(\widehat{{\sigma }^{2}})}^{1/2}\right]\) averaged in half-degree spatial bins. Middle: Non-diurnal SST uncertainty estimates (\({\widehat{\sigma }}_{P}\)) averaged in half-degree spatial bins. Bottom: Diurnal SST uncertainty estimates (\({\widehat{\sigma }}_{D}\)) averaged in half-degree spatial bins. The maps are obtained with Level-3 data with quality flags 5 for all estimates. In all three panels the units are decimal logarithm of degrees Celsius.
Whereas an error variance estimate provides a local quantification of the magnitude of the background noise, an uncertainty estimate for SST [Eq. (24)] provides a statistical characterization of the distance between an SST estimate and the true, but unknown, SST value. In Fig. 16, we examine the distributions of standard error estimates for the non-diurnal SST estimates, the diurnal SST estimates, and the total SST estimates for Level-3 data, and we report overall statistics rounded to the nearest 0.001 in Table 5. The results for Level-2 data are extremely similar and their distributions are not shown. Overall, the uncertainty estimates for diurnal SST estimates are a factor of 2 to 3 larger than the uncertainty estimates for non-diurnal estimates. In turn, the uncertainty estimates for total SST estimates are larger than the uncertainty estimates for diurnal SST estimates, but by no more than a few 1/1000-th of a degree. The most probable value of the uncertainty estimate for non-diurnal SST estimates is 0.006 °C for all data. For drogued drifters only it is 0.005 °C, and for undrogued drifters only it is 0.006 °C. The most probable value of the uncertainty estimate for diurnal SST estimates is 0.016 °C for all data. For drogued drifters only it is 0.015 °C, and for undrogued drifters only it is 0.017 °C. The most probable value of the uncertainty estimate for total SST estimates is 0.018 °C for all data. For drogued drifters only it is slightly smaller, at 0.016 °C, and for undrogued drifters it is slightly higher, at 0.019 °C. The spatial distribution of the uncertainty estimates follows closely the spatial distribution of the error variance estimates (Fig. 15, middle and bottom). The spatial distribution of the uncertainty estimates for total SST estimates is not shown as it is extremely similar to the spatial distribution of the uncertainty estimates for diurnal SST estimates. The fact that the maps of averaged uncertainty estimates exhibit spatially coherent features related to geophysical variability implies that uncertainty estimates between drifters may also be correlated in relation to the geographical distance separating them.
Probability density function (PDF) estimates of standard error estimates for the non-diurnal (\({\widehat{\sigma }}_{P}\)), diurnal (\({\widehat{\sigma }}_{D}\)), and total (\({\widehat{\sigma }}_{m}\)) SST estimates, separated between data from drogued and undrogued drifters. The normalized histograms of the decimal logarithm of the estimates are displayed. Mode values at the peak of the distributions and 50-th percentile values are listed in Table 5.
Table 5 Statistics of SST standard uncertainty estimates.
The overall statistics of uncertainty estimates for SST estimates (Table 5) are an order of magnitude smaller than previously estimated measurement uncertainties for drifting buoys9. Such uncertainty estimates range between 0.1 °C and 0.7 °C and are typically based on analyses of collocated SST observations from drifting buoys, ships, and satellites30,31,32,33. These uncertainty estimates encompass not only the instrumental error of the drifter SST sensors but also the spatial and temporal differences between the different measurands that are targeted by the different observational platforms, such as a SST satellite's ground footprint versus a pointwise drifter measurement. Here, our uncertainty estimates represent only random sources of instrumental and communication noise, as well as sub-hourly unresolved geophysical variability. These uncertainties are on the order of 1/100-th of a Kelvin, rather than on the order of 1/10-th of a Kelvin, because they are not estimated from a single observation, but rather benefit from time series of observations that typically provide around 22 effective degrees of freedom over a 2-day observational estimation window (see mode value of ν in Fig. 7). As a result, sources of instrumental and geophysical random noise are averaged downward in our estimates. What our uncertainty estimates are not able to capture is any drifter-specific original bias of a SST sensor, which may, or may not, have evolved in time since manufacture and deployment (i.e. a sensor drift)34.
Data Records
The Level-3 estimates of total SST, non-diurnal SST, diurnal SST, and each of their respective standard error estimates, along with quality flag variables for each of the three SST estimates, are distributed as part of the hourly drifter dataset of the GDP5, now in its version 2.00 with the addition of these SST estimates. The dataset, assembled as a contiguous ragged array in a single file, is officially available from the NOAA National Center for Environmental Information (NCEI) as a data collection6 called "Hourly location, current velocity, and temperature collected from Global Drifter Program drifters world-wide" and accessible at https://doi.org/10.25921/x46c-3620. The original and future releases (also called accessions) of this collection can be accessed and downloaded through the "Lineage" tab of the landing page. Future releases of the dataset, scheduled twice a year, will add estimates of position, velocity, and SST variables as they become available from the GDP.
The data are also available via the ERDDAP server of the NOAA Observing System Monitoring Center at http://osmc.noaa.gov/erddap/tabledap/gdp_hourly_velocities.html where subsets of the data can be selected according to a number of temporal and spatial criteria.
Table 6 lists the names of the variables included in the NetCDF files, including the new SST-related variables. Usage of this SST data product in combination with any of the position and/or velocity data5 for release 2.00 or subsequent releases must cite this present paper as well as the original 2016 paper describing the hourly position and velocity dataset (Elipot et al.5).
Table 6 Data record information for the Level-3 data product and details of variables in drifter NetCDF files.
Technical Validation
The spatial and temporal distributions of the Level-3 hourly SST estimates are displayed in Fig. 17 in order to verify the technical quality of the product. The map of spatial data density is the result of historical deployments and the efforts of the GDP to fulfill the requirement of the array, and of the patterns of convergence and divergence of the near-surface oceanic circulation2,11. The temporal histogram of SST estimates closely follows the distribution of hourly position and velocity estimates, showing the maturity of the array at the beginning of 2006 as well as the drop in the amount of data between 2011 and 2014 caused by an unfortunately large number of short-lived instruments. To support the technical validation of the new SST dataset, we compute the mean and standard deviation of SST estimates globally within 0.5° × 0.5° geographical bins (Fig. 18). The mean total SST map exhibits the expected meridional gradients as well as the west-east asymmetries within each ocean basin. As also expected, the standard deviation map of total SST estimates exhibits larger values within regions of higher surface kinetic energy such as western boundary current regions35, but also within the mid-latitude regions where high variability of air-sea fluxes is expected to enhance SST variance. The map of diurnal SST standard deviation exhibits different patterns resulting from the competing effects of the spatial pattern of solar heating increasing diurnal variability, and the spatial pattern of wind speed decreasing diurnal variability. At the scales displayed here, the maps of mean and standard deviation of non-diurnal SST estimates (not shown) are indistinguishable from the maps for the total SST estimates. The map of mean diurnal SST estimates (not shown) is approximately zero everywhere, as expected from the model of temporal SST evolution used to derive this product.
Top panel: Spatial distribution of Level-3 total SST estimates expressed as a density per (50 km)2 in half-degree spatial bins. Only quality flag 5 data are counted. Bottom panel: temporal distribution of Level-3 total (\({\widehat{s}}_{m}\)) and diurnal (\({\widehat{s}}_{D}\)) SST estimates in 10-day bins from 03-Oct-1987 13:00:00 to 30-Jun-2020 23:00:00. The temporal distribution of the matching position and velocity hourly dataset5 release 2.00 is also displayed. The temporal distribution of non-diurnal SST estimates is not displayed as it would be indistinguishable from that of the total SST estimates.
Top: Level-3 total SST estimates averaged in half-degree spatial bins. Middle: standard deviation of Level-3 total SST estimates in half-degree spatial bins. Bottom: standard deviation of Level-3 diurnal SST estimates in half-degree spatial bins. Only quality flag 5 estimates for each respective variable are used to produce these maps.
We proceed to verify the consistency of the drifter hourly SST estimates against the gridded, multi-sensor, interpolated SST Climate Change Initiative data product (ESA SST CCI Analysis v2.1, hereafter CCI), available from 1981 to 201636. The CCI product is generated by combining measurements of infrared radiance from two suites of radiometers on multiple satellites, eventually aggregated and gap-filled on a daily 0.05° grid27. The CCI product provides SST estimates representative of daily mean values at a depth of 20 cm. These estimates are obtained by converting the instantaneous skin SST captured by satellite measurements at various times throughout a day to the closest of 10:30 or 22:30 local mean solar time, using a one-dimensional turbulence closure model driven by atmospheric fluxes. Arguably, this depth conversion makes the CCI estimates comparable to the drifter SST estimates, yet differences are expected to remain between the two estimates because of the drifter SST sub-daily temporal variability, mostly associated with the diurnal cycle. We conduct the comparison by interpolating bilinearly in space the gridded values of the CCI product for a given day onto all drifter hourly geographical locations of that same day (from 00:00 to 23:00). SST estimates derived from satellite measurements are not independent from in situ measurements, including from drifters, because these are generally used for calibration purposes. Yet, the ESA SST CCI Analysis v2.1 is supposed to achieve a "high degree of independence" from in situ observations from 1995 onward27, so we limit our comparison to the 1995 to 2016 time period. We choose to compare only drifter total and non-diurnal SST estimates with quality flag 5 to successfully interpolated values of the CCI product, that is, when no land pixel or pixel with non-zero sea ice concentration was involved in the bilinear interpolation. Two-dimensional histograms of drifter SST estimates versus their corresponding CCI interpolated values for nearly 122 M data pairs (Fig. 19a,b) suggest qualitatively a very good consistency between the two datasets. Next, we examine difference statistics between the two datasets in order to conduct a more quantitative comparison, interpolating the evaluated standard uncertainty of the CCI product at the drifter locations in order to assess the difference statistics.
Comparison between the drifter hourly SST dataset and the ESA SST CCI Analysis v2.1 product27,36. (a) Two-dimensional histogram in 0.05 °C bins of drifter total SST estimates (\({\widehat{s}}_{m}\)) versus the interpolated CCI values. (b) Same as in (a) but for the drifter non-diurnal SST estimates (\({\widehat{s}}_{P}\)). (c) Normalized histograms of absolute differences between drifter estimates and CCI values. (d) Average differences between non-diurnal SST estimates (\({\widehat{s}}_{P}\)) and CCI values as a function of time and latitude.
We calculate the difference statistics \({d}_{m}={\widehat{s}}_{m}-{s}_{cci}\) and \({d}_{P}={\widehat{s}}_{P}-{s}_{cci}\), where scci is the interpolated CCI SST value, and the uncertainties corresponding to these differences, i.e. \({\left({\widehat{\sigma }}_{m}^{2}+{\sigma }_{cci}^{2}\right)}^{1/2}\) and \({\left({\widehat{\sigma }}_{P}^{2}+{\sigma }_{cci}^{2}\right)}^{1/2}\), where σcci is the interpolated CCI SST uncertainty value. Over all estimates, we find that the mean and median values of dm are 0.048 °C and 0.031 °C, respectively, and that the mean and median values of dP are 0.047 °C and 0.037 °C, respectively, suggesting a global positive bias of the drifter estimates compared to the CCI estimates. We next conduct a 95% confidence level two-tailed test by determining the number of instances for which the absolute values of the difference statistics are smaller than 1.96 times the difference uncertainties. We thus find that 85.5% of the differences with the drifter total SST estimates are not statistically significant, whereas 88.1% of the differences with the drifter non-diurnal SST estimates are not significant, overall suggesting a high level of consistency between the two products. Because the CCI product does not resolve diurnal variability, it is expected that the consistency between the two datasets would be better for the drifter non-diurnal estimates than for the drifter total estimates, as evidenced here by the two-tailed test results. However, the proportion of statistically significant differences for the non-diurnal drifter SST estimates is still over twice the expectation from a 95% confidence level test, at 11.9% compared to 5%. This excess proportion may be due to the incorrect assumption of Gaussian-distributed errors made in our two-tailed test, where heavier-tailed errors are expected in practice (see section Interpretation of uncertainty estimates and Fig. 14), to stochastic variability unresolved by the uncertainty estimates of either data product, to a loss of spatial resolution of the CCI analysis product because of its mapping, or to a global bias as revealed by the mean difference values reported above. The most probable absolute difference between the two products, as indicated by the maximum of the distribution of the absolute difference statistics (Fig. 19c), is 0.250 °C with the drifter total SST estimates, and 0.223 °C with the drifter non-diurnal SST estimates. These mode values are roughly consistent with a linear sum of the stated global uncertainty for the CCI product27 (0.18 °C), plus the typical uncertainty values of the drifter estimates (0.018 °C and 0.007 °C, see Table 5), plus potential global biases (0.048 °C and 0.047 °C), amounting to 0.246 °C and 0.234 °C for the total SST estimates and the non-diurnal SST estimates, respectively.
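As a minimal sketch of this significance test (the vectors below are synthetic stand-ins; names and values are illustrative and not taken from the published Matlab code):

```r
# Synthetic stand-ins for the ~122 M matched drifter/CCI pairs
set.seed(1)
n         <- 1e5
s_cci     <- runif(n, 0, 30)                     # interpolated CCI SST (deg C)
sigma_cci <- rep(0.18, n)                        # stated CCI uncertainty
s_m       <- s_cci + 0.048 + rnorm(n, sd = 0.1)  # drifter total SST estimates
sigma_m   <- rep(0.018, n)                       # typical drifter uncertainty

d_m     <- s_m - s_cci                     # difference statistic
sigma_d <- sqrt(sigma_m^2 + sigma_cci^2)   # combined standard uncertainty
mean(abs(d_m) < 1.96 * sigma_d)            # fraction of non-significant differences
```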
While it is beyond the scope of this paper to systematically investigate the potential sources of differences between our drifter hourly SST dataset and the CCI product (or other satellite-based products), we further examine the differences between the drifter non-diurnal SST estimates and the CCI values as a function of latitude and time in 10-day intervals from 1996 to 2016 (Fig. 19d). This analysis is consistent with a similar analysis conducted previously27, but provides more details and is performed with our Level-3 hourly drifter product. Our results show that the drifter non-diurnal estimates generally exhibit a positive bias compared to the CCI product, except between approximately 15° and 45°, N or S, where the differences exhibit alternating signs with an annual periodicity propagating poleward. These propagating differences might be related to inadequacies of the turbulent closure model used to convert satellite-measured skin SST to CCI depth SST, or to an inability to accurately represent seasonal processes in either the model or its forcing fields. We also examine the dependency of differences on local mean solar time (Fig. 20). We find that the differences between drifter total SST estimates and CCI values as a function of local mean solar time follow a distribution pattern consistent with the typical SST diurnal cycle24: the drifter total SST estimates generally capture a higher temperature between the hours of 10:30 and 22:30, but a lower temperature between 22:30 and 10:30 the next day (panel a). In contrast, the distribution of differences between the drifter non-diurnal SST estimates and CCI values does not appreciably exhibit a dependency on local mean solar time, as expected, but only an overall positive bias (panel b).
(a) Two-dimensional histogram of drifter total SST estimates minus CCI values (\({\widehat{s}}_{m}\)-CCI) versus local mean solar time. (b) Same as in (a) but for drifter non-diurnal SST estimates (\({\widehat{s}}_{P}\)-CCI). In both panels the vertical dotted lines indicate 10:30 and 22:30 local mean solar times.
In the NetCDF file, all SST estimates are provided to three decimal places, with the last digit rounded to the nearest 0.001. Total SST estimates are the sum of the non-diurnal and diurnal SST estimates, but because of rounding, for about 44% of values the total SST value read from the NetCDF files differs from the sum of the non-diurnal and diurnal SST values.
The uncertainty estimates are also provided to three decimal places, but with the last digit rounded "up" (i.e. towards infinity) to the nearest 0.001. The reason for rounding up the uncertainty estimates is to prevent reporting null uncertainties for the 13,502 non-diurnal SST estimates for which the calculated uncertainty is smaller than 0.001 but larger than 0.0001. Rounding up uncertainties is acceptable as it provides more conservative uncertainties, while only increasing their values by typically 6% for the non-diurnal SST uncertainties and by typically 2% for the diurnal and total SST uncertainties.
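For illustration, the two rounding conventions correspond to the following (a sketch in R; the values are made up):

```r
x <- c(0.0178, 0.00042)   # an SST estimate and a very small uncertainty (deg C)
round(x, 3)               # nearest-0.001 rounding: 0.018, 0.000 (a null value)
ceiling(x * 1000) / 1000  # rounding up:            0.018, 0.001 (never null)
```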
Instructions on how to read the data file, as well as examples of typical Lagrangian analyses, can be found in a publicly-accessible Python-based Jupyter Notebook at https://github.com/Cloud-Drift/earthcube-meeting-2022.
Code availability
The Matlab software associated with this manuscript is licensed under the MIT license, published on GitHub at https://github.com/selipot/sst-drift.git, and archived on Zenodo37. This software allows the user to fit model (2) to temperature observations and derive the resulting SST estimates and their uncertainties. Input arguments to the model fitting function include an arbitrary order for the background non-diurnal SST model and arbitrary frequencies for the diurnal oscillatory model. A sample of Level-1 data from drifter AOML ID 55366 is provided in order to test the routines and produce figures similar to Figs. 4 and 5. Alternatively, the main code can also generate stochastic data for testing purposes.
Centurioni, L. et al. Global in situ Observations of Essential Climate and Ocean Variables at the Air–Sea Interface. Front. Mar. Sci. 6, 1–23, https://doi.org/10.3389/fmars.2019.00419 (2019).
Lumpkin, R., Centurioni, L. & Perez, R. C. Fulfilling Observing System Implementation Requirements with the Global Drifter Array. J. Atmos. Ocean. Technol. 33, 685–695, https://doi.org/10.1175/JTECH-D-15-0255.1 (2016).
Hansen, D. V. & Poulain, P.-M. Quality Control and Interpolations of WOCE-TOGA Drifter Data. J. Atmos. Ocean. Technol. 13, 900–909, https://doi.org/10.1175/1520-0426(1996)013<0900:QCAIOW>2.0.CO;2 (1996).
Lumpkin, R. & Centurioni, L. NOAA Global Drifter Program quality-controlled 6-hour interpolated data from ocean surface drifting buoys. NOAA National Centers for Environmental Information. https://doi.org/10.25921/7ntx-z961 (2019).
Elipot, S. et al. A global surface drifter data set at hourly resolution. J. Geophys. Res. Ocean. 121, 2937–2966, https://doi.org/10.1002/2016JC011716 (2016).
Elipot, S., Sykulski, A., Lumpkin, R., Centurioni, L. & Pazos, M. Hourly location, current velocity, and temperature collected from Global Drifter Program drifters world-wide. Accession 0248584 v1.1. NOAA National Centers for Environmental Information. https://doi.org/10.25921/x46c-3620 (2022).
Cleveland, W. S. Robust Locally Weighted Regression and Smoothing Scatterplots. J. Am. Stat. Assoc. 74, 829, https://doi.org/10.2307/2286407 (1979).
Chiorboli, G. Uncertainty of mean value and variance obtained from quantized data. IEEE Trans. Instrum. Meas. 52, 1273–1278, https://doi.org/10.1109/TIM.2003.816820 (2003).
Kennedy, J. J. A review of uncertainty in in situ measurements and data sets of sea surface temperature. Rev. Geophys. 52, 1–32, https://doi.org/10.1002/2013RG000434 (2014).
Lumpkin, R., Özgökmen, T. & Centurioni, L. Advances in the Application of Surface Drifters. Ann. Rev. Mar. Sci. 9, 59–81, https://doi.org/10.1146/annurev-marine-010816-060641 (2017).
Lumpkin, R., Maximenko, N. & Pazos, M. Evaluating Where and Why Drifters Die. J. Atmos. Ocean. Technol. 29, 300–308, https://doi.org/10.1175/JTECH-D-11-00100.1 (2012).
Centurioni, L., Horányi, A., Cardinali, C., Charpentier, E. & Lumpkin, R. A global ocean observing system for measuring sea level atmospheric pressure: Effects and impacts on numerical weather prediction. Bull. Am. Meteorol. Soc. 98, 231–238, https://doi.org/10.1175/BAMS-D-15-00080.1 (2017).
Centurioni, L. R. et al. Sea surface salinity observations with lagrangian drifters in the tropical North Atlantic during SPURS: Circulation, fluxes, and comparisons with remotely sensed salinity from aquarius. Oceanography 28, 96–105, https://doi.org/10.5670/oceanog.2015.08 (2015).
Hansen, D. V. & Herman, A. Temporal Sampling Requirements for Surface Drifting Buoys in the Tropical Pacific. J. Atmos. Ocean. Technol. 6, 599–607, https://doi.org/10.1175/1520-0426(1989)006<0599:TSRFSD>2.0.CO;2 (1989).
Elipot, S. & Lumpkin, R. Spectral description of oceanic near-surface variability. Geophys. Res. Lett. 35, L05606, https://doi.org/10.1029/2007GL032874 (2008).
Argos User's Manual. http://www.argos-system.org/manual/, Copyright 2007–2015 CLS (2015).
Lopez, R., Malarde, J.-P., Royer, F. & Gaspar, P. Improving Argos Doppler Location Using Multiple-Model Kalman Filtering. IEEE Trans. Geosci. Remote Sens. 52, 4744–4755, https://doi.org/10.1109/TGRS.2013.2284293 (2014).
Lumpkin, R. et al. Removing Spurious Low-Frequency Variability in Drifter Velocities. J. Atmos. Ocean. Technol. 30, 353–360, https://doi.org/10.1175/JTECH-D-12-00139.1 (2013).
Huang, B. et al. Improvements of the Daily Optimum Interpolation Sea Surface Temperature (DOISST) Version 2.1. J. Clim. 34, 2923–2939, https://doi.org/10.1175/JCLI-D-20-0166.1 (2021).
Fan, J. & Gijbels, I. Local Polynomial Modelling and Its Applications, 1st edn (Routledge, 2018).
Gentemann, C. L. Diurnal signals in satellite sea surface temperature measurements. Geophys. Res. Lett. 30, 1140, https://doi.org/10.1029/2002GL016291 (2003).
Kennedy, J. J., Brohan, P. & Tett, S. F. B. A global climatology of the diurnal variations in sea-surface temperature and implications for MSU temperature trends. Geophys. Res. Lett. 34, 1–5, https://doi.org/10.1029/2006GL028920 (2007).
Lindfors, A. V., Mackenzie, I. A., Tett, S. F. B. & Shi, L. Climatological Diurnal Cycles in Clear-Sky Brightness Temperatures from the High-Resolution Infrared Radiation Sounder (HIRS). J. Atmos. Ocean. Technol. 28, 1199–1205, https://doi.org/10.1175/JTECH-D-11-00093.1 (2011).
Morak-Bozzo, S., Merchant, C. J., Kent, E. C., Berry, D. I. & Carella, G. Climatological diurnal variability in sea surface temperature characterized from drifting buoy data. Geosci. Data J. 3, 20–28, https://doi.org/10.1002/gdj3.35 (2016).
Hormann, V., Centurioni, L. R. & Reverdin, G. Evaluation of Drifter Salinities in the Subtropical North Atlantic. J. Atmos. Ocean. Technol. 32, 185–192, https://doi.org/10.1175/JTECH-D-14-00179.1 (2015).
Kawai, Y. & Wada, A. Diurnal sea surface temperature variation and its impact on the atmosphere and ocean: A review. J. Oceanogr. 63, 721–744, https://doi.org/10.1007/s10872-007-0063-0 (2007).
Merchant, C. J. et al. Satellite-based time-series of sea-surface temperature since 1981 for climate applications. Sci. Data 6, 223, https://doi.org/10.1038/s41597-019-0236-x (2019).
Flament, P., Firing, J., Sawyer, M. & Trefois, C. Amplitude and Horizontal Structure of a Large Diurnal Sea Surface Warming Event during the Coastal Ocean Dynamics Experiment. J. Phys. Oceanogr. 24, 124–139, https://doi.org/10.1175/1520-0485(1994)024<0124:AAHSOA>2.0.CO;2 (1994).
Laurindo, L. C., Mariano, A. J. & Lumpkin, R. An improved near-surface velocity climatology for the global ocean from drifter observations. Deep Sea Res. Part I Oceanogr. Res. Pap. 124, 73–92, https://doi.org/10.1016/j.dsr.2017.04.009 (2017).
Emery, W. J., Baldwin, D. J., Schlüssel, P. & Reynolds, R. W. Accuracy of in situ sea surface temperatures used to calibrate infrared satellite measurements. J. Geophys. Res. Ocean. 106, 2387–2405, https://doi.org/10.1029/2000JC000246 (2001).
O'Carroll, A. G., Eyre, J. R. & Saunders, R. W. Three-Way Error Analysis between AATSR, AMSR-E, and In Situ Sea Surface Temperature Observations. J. Atmos. Ocean. Technol. 25, 1197–1207, https://doi.org/10.1175/2007JTECHO542.1 (2008).
Xu, F. & Ignatov, A. Evaluation of in situ sea surface temperatures for use in the calibration and validation of satellite retrievals. J. Geophys. Res. Ocean. 115, 1–18, https://doi.org/10.1029/2010JC006129 (2010).
Merchant, C. J. et al. A 20year independent record of sea surface temperature for climate from Along-Track Scanning Radiometers. J. Geophys. Res. Ocean. 117, 1–18, https://doi.org/10.1029/2012JC008400 (2012).
Poli, P. et al. The Copernicus Surface Velocity Platform drifter with Barometer and Reference Sensor for Temperature (SVP-BRST): genesis, design, and initial results. Ocean Sci. 15, 199–214, https://doi.org/10.5194/os-15-199-2019 (2019).
Lumpkin, R. & Johnson, G. C. Global ocean surface velocities from drifters: Mean, variance, El Niño-Southern Oscillation response, and seasonal cycle. J. Geophys. Res. Ocean. 118, 2992–3006, https://doi.org/10.1002/jgrc.20210 (2013).
Good, S., Embury, O., Bulgin, C. & Mittaz, J. Centre for Environmental Data Analysis (CEDA). https://doi.org/10.5285/62c0f97b1eac4e0197a674870afe1ee6 (2019).
Elipot, S., Sykulski, A. & Lumpkin, R. selipot/sst-drift: v1.0.0. Zenodo https://doi.org/10.5281/zenodo.5705442 (2021).
This research was carried out in part under the auspices of the Cooperative Institute for Marine and Atmospheric Studies (CIMAS), a Cooperative Institute of the University of Miami and the National Oceanic and Atmospheric Administration, cooperative agreement #NA20OAR4320472. This research was also supported by the US National Science Foundation under EarthCube Capabilities Grant No. 2126413. A. M. Sykulski was funded by the UK Engineering and Physical Sciences Research Council Grant EP/R01860X/1. R. Lumpkin was supported by NOAA's Global Ocean and Monitoring Program and the Atlantic Oceanographic and Meteorological Laboratory. L. Centurioni was supported by NOAA's grant NA20OAR4320278 "The Global Drifter program". The authors thank Sofia Olhede for her advice on some of the statistical aspects of this work, and Bertrand Dano and Philippe Miron for generating the final NetCDF file for distribution.
Rosenstiel School of Marine, Atmospheric, and Earth Science, University of Miami, Miami, FL, 33149, USA
Shane Elipot
Lancaster University, Department of Mathematics and Statistics, Lancaster, LA1 4YW, UK
Adam Sykulski
NOAA Atlantic Oceanographic and Meteorological Laboratory, Miami, FL, 33149, USA
Rick Lumpkin & Mayra Pazos
Lagrangian Drifter Laboratory, Scripps Institution of Oceanography, University of California San Diego, San Diego, CA, 92103, USA
Luca Centurioni
S. Elipot, A. Sykulski, and R. Lumpkin conceived the dataset and designed the associated methods. M. Pazos assembled and conducted the quality control of the drifter data forming the basis for the Level-1 data. All authors reviewed the manuscript.
Correspondence to Shane Elipot.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Elipot, S., Sykulski, A., Lumpkin, R. et al. A dataset of hourly sea surface temperature from drifting buoys. Sci Data 9, 567 (2022). https://doi.org/10.1038/s41597-022-01670-2
December 2019, 9(4): 673-696. doi: 10.3934/mcrf.2019046
Backward uniqueness results for some parabolic equations in an infinite rod
Jérémi Dardé and Sylvain Ervedoza
Institut de Mathématiques de Toulouse, UMR 5219, Université de Toulouse, CNRS, UPS, IMT, F-31062 Toulouse Cedex 9, France
* Corresponding author: Sylvain Ervedoza
Received June 2018 Revised August 2019 Published November 2019
Fund Project: The first author is partially supported by IFSMACS ANR-15-CE40-0010 of the French National Research Agency (ANR) and both authors are supported by the CIMI Labex, Toulouse, France, under grant ANR-11-LABX-0040-CIMI.
The goal of this article is to provide backward uniqueness results for several models of parabolic equations set on the half line, namely the heat equation, and the heat equation with a quadratic potential or with a purely imaginary quadratic potential, with non-homogeneous boundary conditions. Such results can thus also be interpreted as a strong lack of controllability on the half line, as they show that only the trivial initial datum can be steered to zero. Our results are based on the explicit knowledge of the kernel of each equation, and on standard arguments from complex analysis, namely the Phragmén-Lindelöf principle.
Keywords: Backward uniqueness, parabolic equation, unbounded domain, Fourier analysis, Phragmén-Lindelöf principle.
Mathematics Subject Classification: 35A08, 35B37, 35B53, 35K08, 93C20.
Citation: Jérémi Dardé, Sylvain Ervedoza. Backward uniqueness results for some parabolic equations in an infinite rod. Mathematical Control & Related Fields, 2019, 9 (4) : 673-696. doi: 10.3934/mcrf.2019046
Evaluating antimalarial efficacy in single-armed and comparative drug trials using competing risk survival analysis: a simulation study
Prabin Dahal ORCID: orcid.org/0000-0002-2158-846X1,2,
Philippe J. Guerin1,2,
Ric N. Price1,2,3,
Julie A. Simpson4 &
Kasia Stepniewska1,2
BMC Medical Research Methodology volume 19, Article number: 107 (2019)
Antimalarial efficacy studies in patients with uncomplicated Plasmodium falciparum are confounded by new infections, which are competing risk events since they can potentially preclude a recrudescent event (the primary endpoint of interest). The current WHO guidelines recommend censoring competing risk events when deriving antimalarial efficacy. We investigated the impact of considering a new infection as a competing risk event on the estimation of antimalarial efficacy in single-armed and comparative drug trials using two simulation studies.
The first simulation study explored differences in the estimates of treatment failure for areas of varying transmission intensities, using the complement of the Kaplan-Meier (K-M) estimate and the Cumulative Incidence Function (CIF). The second simulation study extended this to a comparative drug efficacy trial, comparing the K-M curves using the log-rank test and the equality of the CIFs using Gray's k-sample test.
The complement of the K-M approach produced larger estimates of cumulative treatment failure than the CIF method; the magnitude of the difference was correlated with the observed proportions of new infection and recrudescence. When the drug efficacy was 90%, the absolute overestimation in failure was 0.3% in areas of low transmission, rising to 3.1% in high transmission settings. In the scenario most likely to be observed in a comparative trial of antimalarials, where a new drug regimen is associated with an increased (or decreased) rate of both recrudescences and new infections compared to an existing drug, the log-rank test was found to be more powerful for detecting treatment differences than Gray's k-sample test.
The CIF approach should be considered for deriving estimates of antimalarial efficacy in high transmission areas or for failing drugs. For comparative studies of antimalarial treatments, researchers need to select the statistical test best suited to whether the rate or the cumulative risk of recrudescence is the outcome of interest, and consider the potentially differing prophylactic periods of the antimalarials being compared.
The primary endpoint in clinical studies of uncomplicated Plasmodium falciparum malaria is the occurrence of recrudescent parasitaemia, defined as a recurrence due to the same parasite that caused the original infection. Parasite recurrence due to a heterologous parasite, which can be either a new infection with P. falciparum or another Plasmodium species, can potentially preclude the occurrence of recrudescence and therefore constitutes a competing risk event [1, 2]. Such a scenario can occur when the parasite load of a newly acquired infection (regardless of the species or strain) outnumbers and outcompetes the low-level parasitaemia of an existing infection. A recrudescence can also be precluded when the new infection is due to a more resistant parasite strain than the existing susceptible parasite. These scenarios further depend on the inoculum density and the multiplication rates (efficiency) of the newly emergent infection and of the existing recrudescent parasites.
Despite advancements in statistical methods for analysing time-to-event outcomes [1,2,3,4,5,6,7], competing risk events are often ignored in the medical literature. Recent reviews have pointed out that a vast majority of studies published in high-impact medical journals are susceptible to competing risk biases [8,9,10], and malaria is no exception. Kaplan-Meier (K-M) survival analysis (\( {\widehat{S}}_{KM}(t) \)) is currently recommended by the World Health Organization (WHO) for deriving antimalarial efficacy [11, 12]. Commonly, the complement of the K-M estimate (\( {\widehat{F}}_{KM}(t)=1-{\widehat{S}}_{KM}(t) \)) is reported, as the WHO recommends replacing a first-line treatment with an alternative regimen if the derived estimate of cumulative failure exceeds 10% [12].
The complement of the K-M estimate provides an estimate of the marginal risk (of recrudescence), i.e. the risk of recrudescence in a setting where new infections do not occur. However, this would only be realised if all enrolled participants were admitted to a hospital setting where it is not possible to receive another mosquito bite, and thus a new infection. In practice, antimalarial trials are almost invariably conducted in endemic settings where new infections occur frequently and can be observed in as many as 50% of cases [13]. The Cumulative Incidence Function (CIF) estimator proposed by Kalbfleisch and Prentice provides an alternative approach to estimating the cumulative failure by accounting for such competing risk events [14]. Several studies have compared the cumulative failure estimates derived by the complement of the K-M method against the CIF estimator and have reported that the K-M approach leads to an overestimation of cumulative failure in the presence of competing risk events [9, 15,16,17,18].
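To make the contrast concrete, let \(t_j\) denote the ordered recurrence times, \(n_j\) the number at risk just before \(t_j\), \(d_{1j}\) the number of recrudescences at \(t_j\), and \(\widehat{S}(t)\) the all-cause K-M survivor function. In this notation (a standard formulation, not reproduced from [14]), the two estimators of the cumulative risk of recrudescence are

$$ {\widehat{F}}_{KM}(t)=1-\prod_{t_j\le t}\left(1-\frac{d_{1j}}{n_j}\right),\qquad {\widehat{F}}_{CIF}(t)=\sum_{t_j\le t}\widehat{S}\left(t_{j-1}\right)\frac{d_{1j}}{n_j}, $$

where the K-M product treats new infections as censoring events. Because \(\widehat{S}(t_{j-1})\) is depleted by both recrudescences and new infections, \({\widehat{F}}_{CIF}(t)\le {\widehat{F}}_{KM}(t)\), with equality only when no competing events occur.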
The presence of competing risk events has further implications in comparative studies. Comparative antimalarial studies utilise the log-rank test for comparing the efficacy of two drugs. The log-rank test is essentially a comparison of the underlying cause-specific hazard rates between two groups [19] (see Additional file 1, Section 1 for definitions). In the absence of competing risk events, there is a one-to-one correspondence between the cause-specific hazard rate and the cumulative risk. This means that any inference drawn upon the hazard function holds equivalently for the survival function and the cumulative risk. However, in the presence of competing risk events, this one-to-one relationship no longer holds [20]. In such a scenario, inferences drawn using the log-rank test for comparing the equality of cause-specific hazard rates may not be valid when the interest is in comparing the cumulative risk of failure at time t. An alternative approach, which compares the difference in cumulative risks between two groups while accounting for competing risk events, is Gray's k-sample test [21]. This is the usual log-rank test where the cause-specific hazard function is replaced by the hazard of the sub-distribution [22].
To date, there has been no comprehensive investigation of how new infections impact the analysis and interpretation of efficacy data in antimalarial trials of uncomplicated P. falciparum malaria. This simulation study aimed to address this gap, with two specific objectives:
To quantify the magnitude of overestimation in cumulative risk of treatment failure derived by the complement of the Kaplan-Meier approach compared to the Cumulative Incidence Function in a single-armed antimalarial trial, and
To quantify the influence of new infections on the comparative efficacy between antimalarial drugs, by comparing two statistical tests, the log-rank test and Gray's k-sample test
Two simulation studies were carried out to explore the utility of competing risk survival analysis in single-armed and comparative antimalarial drug trials. The generation of survival data is common to both studies and is described first.
Generation of survival data
The times to parasitic recurrence were simulated from baseline hazard functions reflective of the underlying biological mechanisms of recrudescence and new infection (Fig. 1). The hazard functions were derived from individual patient outcome data from 15 studies with 4122 children aged less than 5 years treated with the antimalarial regimen dihydroartemisinin-piperaquine (DP). The studies analysed had an average efficacy of 95% in a sensitive parasite population. Fractional polynomials were used to capture the non-monotonic relationship between the log of the cumulative hazard and time to recrudescence (new infection) in order to generate survival data (manuscript currently in preparation). We then varied the intercept parameters in these two functions to explore the specific scenarios outlined in simulation studies I and II. The following cumulative baseline hazard (CBH) functions (on the log scale) were used for the generation of time to recrudescence (rc), and time to new infection (ni), respectively:
$$ \ln \left( CBH{(t)}_{rc}\right)={\beta}_0-63.6284\times \left\{\ln {(t)}^{-1}-0.2849\right\}-0.3800\times \left\{\ln {(t)}^2-12.3188\right\} $$
$$ \ln \left( CBH{(t)}_{ni}\right)={\alpha}_0+9501.2150\times \left\{\ln {(t)}^{-2}-0.0858\right\}-31651.33\times \left\{\ln {(t)}^{-2}\times \ln \left(\ln (t)\right)-0.1054\right\}+29340.83\times \left\{\ln {(t)}^{-2}\times \ln {\left(\ln (t)\right)}^2-0.1294\right\}-12690.51\times \left\{\ln {(t)}^{-2}\times \ln {\left(\ln (t)\right)}^3-0.1588\right\} $$
The instantaneous hazard, cumulative hazard and survival function used in simulation study I. Cumulative baseline hazard for recrudescence and new infection (top panel), respective baseline hazard function (middle panel) and survival function (bottom panel) used for generating time to recrudescence and new infection for simulation Study-I. The middle panel is the numerical derivative of the equation used for the top panel. Note that y-axes are on different scales for each plot
The parameters β0 and α0 represent the intercepts and were varied to achieve the desired proportions of recrudescence and new infection.
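To illustrate how event times can be drawn from these cumulative hazards, the sketch below inverts Eq. (1) by inverse-transform sampling. The authors generated the data with the survsim package in Stata (see below); this R version is an assumed equivalent, shown for the recrudescence process only with the base-case intercept.

```r
# Cumulative baseline hazard of recrudescence, Eq. (1)
cbh_rc <- function(t, b0 = -3.7092) {
  exp(b0 - 63.6284 * (log(t)^(-1) - 0.2849) - 0.3800 * (log(t)^2 - 12.3188))
}

# Inverse-transform sampling: solve CBH(t) = -log(U), U ~ Uniform(0, 1);
# draws with no solution before day 63 are administratively censored (NA)
sim_time_rc <- function(n, b0 = -3.7092, tmax = 63) {
  sapply(-log(runif(n)), function(h) {
    if (h >= cbh_rc(tmax, b0)) return(NA_real_)
    uniroot(function(t) cbh_rc(t, b0) - h, interval = c(1.01, tmax))$root
  })
}

set.seed(1)
t1 <- sim_time_rc(1000)
mean(!is.na(t1))  # proportion with a recrudescence by day 63 (about 5-6%)
```

The analogous generator for new infections would use Eq. (2), after which the minimum-time and weekly-rounding rules of the simulation protocols below apply.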
Simulation study I: aim, design and setting
The first simulation study aimed at quantifying the magnitude of overestimation in cumulative risk of treatment failure derived by the complement of the Kaplan-Meier method compared to the Cumulative Incidence Function in a single-armed antimalarial trial.
The following combinations of parasitic recurrence proportions were generated: recrudescence (5, 10, and 15%) and new infection (< 10%, 10–20%, 20–40% and > 40%). The base case simulation of 5% recrudescence represents the scenario of high efficacy currently observed with the artemisinin combination therapies in Africa [23,24,25]. The scenarios of 10 and 15% recrudescence represent the situations likely to be observed when antimalarial drug resistance worsens, as has now been observed for some antimalarials in Cambodia and Vietnam [26,27,28]. New infection proportions of < 10%, 10–20%, 20–40% and > 40% progressively represent areas of very low, low, moderate and high malaria transmission. Standard sample size calculations are not relevant for the methodological comparisons, as the aim was to compare the derived estimates of cumulative risk of treatment failure from the two methods. Trials of sample size 100, 200, 500 and 1000 patients were simulated; sample sizes of 100 and 200 were chosen to reflect the scenarios frequently observed in antimalarial studies.
The following steps describe the simulation protocol:
Simulate time to recrudescence (t1) using eq. (1). The parameter β0 was varied to achieve the desired proportion of recrudescence:
β0 = − 3.7092 for approximately 5% recrudescence by day 63 (base case scenario for recrudescence)
β0 = − 3.0160 for approximately 10% recrudescence by day 63
Simulate time to new infections (t2) using eq. (2). The parameter α0 was varied in order to achieve the desired proportion of new infections:
α0 = − 5.6004 for approximately < 10% new infection by day 63
α0 = − 3.9909 for approximately 10–20% new infection by day 63
α0 = − 2.8924 for approximately > 40% new infection by day 63
Since early recurrences are very unlikely in patients with adequate drug exposure [25, 29], the minimum time was set to day 14 and administrative censoring was applied on the last scheduled follow-up visit (day 63). For simplicity, no losses to follow-up were assumed.
For each individual, the observed time (t) was defined as the minimum of the simulated time to recrudescence (t1) and new infection (t2).
$$ t=\min \left({t}_1,{t}_2\right) $$
The final observed time was rounded to the nearest weekly visit day (7, 14, 21 and so on), reflective of the antimalarial follow-up design. The observed event corresponded to the event with the minimum time, t; otherwise administrative censoring was applied on day 63.
For each simulated dataset, the cumulative probability of failure was estimated on days 28, 42 and 63 using the 1 minus K-M method and the CIF. New infections were censored on the day of occurrence in the 1-K-M analysis and were kept as a separate category of competing risk event when estimating the CIF.
The absolute and relative differences between the two estimators derived in step (vi) were calculated.
For each scenario, steps (i)-(vii) were repeated 1000 times using an acceptance sampling procedure where only datasets fulfilling the study criteria were kept (e.g. 5% recrudescence, < 10% new infection). Studies where 4–6%, 9–11% and 14–16% of recrudescences were observed were defined to have 5, 10 and 15% recrudescence, respectively. In order to achieve the desired proportion of recrudescences (approximately 5, 10 and 15%), this required a large number of simulation runs, and the first 1000 datasets fulfilling the criteria were kept for analysis.
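As a sketch of steps (vi)-(vii) on a single dataset, the following R code contrasts the two estimators; the data frame sim is a toy stand-in for the output of steps (i)-(v), not the authors' code.

```r
library(survival)
library(cmprsk)

# Toy stand-in for one simulated dataset: weekly visit times, and event coded
# 0 = administratively censored, 1 = recrudescence, 2 = new infection
set.seed(2)
sim <- data.frame(time  = sample(seq(14, 63, by = 7), 200, replace = TRUE),
                  event = sample(0:2, 200, replace = TRUE, prob = c(.6, .1, .3)))
sim$time[sim$event == 0] <- 63

days <- c(28, 42, 63)

# 1 minus K-M: new infections (event == 2) censored at their day of occurrence
km   <- survfit(Surv(time, event == 1) ~ 1, data = sim)
f_km <- 1 - summary(km, times = days)$surv

# CIF: new infection kept as a separate competing risk event
cif   <- cuminc(ftime = sim$time, fstatus = sim$event, cencode = 0)
f_cif <- timepoints(cif, times = days)$est["1 1", ]  # cause 1 = recrudescence

f_km - f_cif  # absolute difference between the two estimators (step vii)
```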
Simulation study II: aim, design and setting
The second simulation study aimed to quantify the influence of new infections on the comparative efficacy between antimalarial drugs, by comparing two statistical tests, the log-rank test and Gray's k-sample test.
Let drug A be the current first-line treatment and drug B be a new antimalarial drug under investigation. The interest is in establishing whether drugs A and B differ in terms of their effect on recrudescence. The aim of the simulation was to present the results from the log-rank test for comparing the equality of the K-M curves of drug efficacy and from Gray's k-sample test for comparing the cumulative risks of recrudescence for drugs A and B at day 63. For the log-rank test, new infections were censored at the time of recurrence.
Let \( {\lambda}_1^A(t) \) be the cause-specific hazard function of recrudescence for drug A and \( {\lambda}_1^B(t) \) the corresponding cause-specific hazard function for drug B at time t. The null hypothesis under consideration for the log-rank test is H0:
$$ {H}_0:{\lambda}_1^A(t)={\lambda}_1^B(t) $$
Let \( {F}_1^A(t) \) and \( {F}_1^B(t) \) be the CIFs of recrudescence for drug A and drug B, respectively, at time t. The null hypothesis under consideration for Gray's k-sample test is I0:
$$ {I}_0:{F}_1^A(t)={F}_1^B(t) $$
The following hazard ratio \( \left({\theta}_{rc}=\frac{{\lambda}_1^B(t)}{{\lambda}_1^A(t)}\right) \) of recrudescence (RC) for drug B relative to drug A was assumed (note that 2.72 ≈ e and 0.37 ≈ 1/e, i.e. log-hazard ratios of +1 and −1):
θrc= 1.00 drug B has the same effect on RC as drug A
θrc = 2.72 drug B is associated with increased hazard of RC compared to drug A
θrc = 0.37 drug B is associated with decreased hazard of RC compared to drug A
Similarly, the following hazard ratio (θni) of new infection (NI) for drug B relative to drug A was assumed:
θni= 1.00 drug B has the same effect on NI as drug A
θni= 2.72 drug B is associated with increased hazard of NI compared to drug A
θni= 0.37 drug B is associated with decreased hazard of NI compared to drug A
θni= 1.00 represents a null scenario, θni = 2.72 represents a scenario where the new drug has a shorter terminal elimination half-life compared to the existing drug and thus exerts a shorter prophylactic effect, while θni = 0.37 represents a scenario where the new drug is associated with a longer post-treatment prophylaxis than the reference drug.
Nine different possible scenarios for drugs A and B were explored in this study (Table 1, Fig. 2). Some of these scenarios might not be plausible in antimalarial studies and were kept for completeness, as they might be applicable to other therapeutic interventions [30]. For antimalarial studies, we consider the scenarios in which drug B, compared to drug A, exerts a unidirectional effect, i.e. is associated with an increased (or decreased) risk of both recrudescence and new infection, to be the most likely. Similarly, a partially null scenario can be expected in antimalarial trials. For example, when a drug A with a short half-life is compared with a drug B with a long half-life, then despite similar efficacy, more new infections can be expected with drug A (Scenario 1B in Table 1).
Table 1 Different scenarios for comparing two drug regimens (drug B compared against drug A) in simulation study II
Fig. 2 The baseline hazard functions for recrudescence and new infection used for simulation study II. Top panel (recrudescence); bottom (new infection). Drug A (orange) is the reference arm and its hazard function for recrudescence and new infection is kept constant across all the simulation scenarios studied. Drug B (green) is a new regimen which is being compared against drug A. Scenario 1 (1A, 1B and 1C) is the null scenario where there is no difference in the hazard function of recrudescence between these two drugs. In scenario 2, the two regimens have the same hazard function for new infection, but drug B has either an increased or a decreased hazard of recrudescence with respect to drug A. In scenario 3, the two drugs differ in terms of both recrudescence and new infection
Since this simulation was set up to evaluate the type I error when comparing the two drugs, the number of patients needed per arm to detect a difference of a given log-hazard ratio was calculated. A sample size of 500 patients per arm was found to be adequate across all the simulation scenarios studied, assuming 80% power for three different log-hazard ratios (Additional file 1, Section 2). However, as for simulation study I, we repeated the simulation for n = 100, 200, 500 and 1000 subjects/arm for completeness.
The following steps describe the simulation protocol for each scenario:
(i) For each drug arm, time to recrudescence (t1) was simulated for 500 hypothetical patients using eq. (1). Since drug A is the reference treatment, its intercept parameter was held constant at − 3.7092 for all the simulation scenarios. The intercept parameter for drug B was varied to simulate the scenario of a null effect (− 3.7092), an increased effect (− 2.7092) or a decreased effect (− 4.7092) of drug B on recrudescence relative to drug A; note that the ± 1 shifts in the intercept correspond, on the log-hazard scale, to the assumed hazard ratios e^1 ≈ 2.72 and e^−1 ≈ 0.37. The corresponding hazard functions for the different scenarios studied are presented in Fig. 2.
(ii) For each drug arm, time to new infection (t2) was simulated for 500 patients using eq. (2). Since drug A is the reference treatment, its intercept parameter was held constant at − 2.8924 for all the simulation scenarios. The intercept parameter for drug B was varied to simulate the scenario of a null effect (− 2.8924), an increased effect (− 1.8924) or a decreased effect (− 3.8924) of drug B on new infection relative to drug A. The corresponding hazard functions for the different scenarios studied are presented in Fig. 2.
(iii)–(v) Repeat steps (iii)–(v) as outlined in simulation study I.
(vi) The difference between drugs A and B in terms of cumulative recrudescence was tested using the log-rank test at day 63, censoring the new infections. The equality of the CIFs for the two regimens was tested using Gray's k-sample test, where a new infection was considered a competing risk event. P-values and the associated chi-squared test statistics were extracted. The hazard ratio for drug A relative to drug B was estimated using the Cox regression model.
(vii) The above simulations were repeated 1000 times and the proportion of times the derived p-value from the log-rank test and Gray's k-sample test was less than 0.05 was calculated. This equals the estimated probability of rejecting the null hypothesis that there is no difference between the two treatment regimens in terms of the risk of recrudescence.
The time to recrudescence and new infection were generated using the survsim package in Stata [31] (See Additional file 1, Section 3 for Stata codes). The log-rank test was carried out using the survdiff function in the survival package and Gray's k-sample test was performed using the cuminc function in the cmprsk package in R software (Version 3.2.4) [32].
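To make steps (vi)–(vii) concrete, a minimal R sketch using the packages named above is given below. The data frame dat, with columns time, event (0 = censored, 1 = recrudescence, 2 = new infection) and arm (A or B), is an assumed layout, not the authors' code.

library(survival)   # survdiff: log-rank test; coxph: Cox regression
library(cmprsk)     # cuminc: Gray's k-sample test

dat$rc <- as.integer(dat$event == 1)               # censor new infections
lr <- survdiff(Surv(time, rc) ~ arm, data = dat)   # log-rank test
p_logrank <- pchisq(lr$chisq, df = 1, lower.tail = FALSE)

gr <- cuminc(ftime = dat$time, fstatus = dat$event,   # new infection (code 2)
             group = dat$arm, cencode = 0)            # treated as competing risk
p_gray <- gr$Tests["1", "pv"]                         # Gray's test, recrudescence

hr <- exp(coef(coxph(Surv(time, rc) ~ arm, data = dat)))  # HR of RC, B vs A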
Results
Simulation study I
The findings of this simulation study are presented in Figs. 3 and 4, and Table 2. The 1 minus K-M method overestimated the cumulative failure in all the scenarios studied. The magnitude of the overestimation increased with i) increasing proportion of new infections, ii) increasing proportion of recrudescences, and iii) the study follow-up duration (Fig. 3).
Fig. 3 Overestimation of failure using the K-M method compared to the CIF in simulation study I (n = 500 subjects). The overestimation \( \left({\widehat{F}}_{KM}(t)-{\widehat{F}}_{CIF}(t)\right) \) of cumulative recrudescence by the K-M method. Each panel represents a different underlying drug efficacy on average (~ 5, 10 and 15% recrudescence observed) in a study with a sample size of 500 subjects/trial. The results are presented from 1000 independent simulation runs. The variation in absolute overestimation within each boxplot is due to the varying proportion of new infection observed within the simulation scenario. Within each panel, the colours indicate different simulated proportions of new infections: < 10% new infections (grey), 10–20% new infections (blue), 20–40% new infections (green) and > 40% new infections (orange), representing areas of progressively increasing malaria transmission from very low to very high
Fig. 4 Cumulative failure estimates by study follow-up using extreme examples from simulation study I (n = 500 subjects). The figure shows the derived cumulative estimate of recrudescence in three cases from simulation study I where the maximum difference was observed between 1-(K-M) and CIF for 5, 10 and 15% recrudescence respectively in the areas of very high transmission (> 40% new infections). The absolute difference between the two estimators was 1.8, 3.1 and 4.3% on day 63 respectively for 5, 10 and 15% recrudescence. These three cases are the extreme cases presented in Fig. 3 for the scenarios where > 40% new infections were observed
Table 2 Absolute overestimation in cumulative recrudescence by Kaplan-Meier (K-M) method compared to Cumulative Incidence Function (CIF) in simulation study I (n = 500 subjects)
In the areas of low transmission (< 10% observed new infection), the maximum overestimation in the derived cumulative risk of recrudescence on day 63 was 0.16% when the drug exhibited 95% efficacy (base case scenario); however, as the drug efficacy fell to 85%, the difference in estimates increased to 0.46%. In the high transmission areas (> 40% new infections), the maximum absolute overestimation by the 1-KM method was 1.75% for the base case simulation, and this rose to 3.13 and 4.30% when the drug efficacy declined to 90 and 85% respectively (Table 2, Fig. 4).
The results when expressed on relative scale exhibited the same trend and conclusion as observed on the absolute scale (Additional file 1, Section 4). The results remained unaffected when the simulation was repeated with sample sizes of n = 100, 200, and 1000 patients (Additional file 1, Section 4).
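For a single simulated dataset, the overestimation \( {\widehat{F}}_{KM}(t)-{\widehat{F}}_{CIF}(t) \) on day 63 can be computed along the following lines (a sketch under the same column conventions as above, not the authors' code):

library(survival)
library(cmprsk)

km <- survfit(Surv(time, event == 1) ~ 1, data = dat)     # new infections censored
F_km <- 1 - summary(km, times = 63, extend = TRUE)$surv   # 1 minus K-M on day 63

ci <- cuminc(ftime = dat$time, fstatus = dat$event, cencode = 0)
F_cif <- timepoints(ci, times = 63)$est["1 1", 1]         # CIF of recrudescence
                                                          # ("1 1" = group 1, cause 1)
overestimation <- F_km - F_cif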
Simulation study II
For each simulated dataset, the hazard ratio of recrudescence and new infection (for drug B relative to drug A) was estimated using the Cox model with treatment group as a covariate. The distribution of hazard ratios from 1000 simulations is presented in Fig. 5. Table 3 presents the results for the different scenarios considered with sample size of 500 patients per arm, which had at least 80% power to detect the desired hazard ratio for recrudescence between the two drugs across all the scenarios studied.
Fig. 5 Distribution of simulated hazard ratios (n = 500 subjects) in simulation study II. The scatterplot of estimated hazard ratios for recrudescence and new infection for drug B relative to drug A from 1000 simulation runs. The median and interquartile range are shown. The centre green dot depicts the true hazard ratio which was used to simulate the respective datasets (1, 2.72 or 0.37). RC = recrudescence, NI = new infection. The description of each individual scenario is provided in Table 1
Table 3 Probability of rejecting the null hypothesis at two sided 0.05 level (n = 500 subjects per arm) in simulation study II
No difference in recrudescence
In the null situation (Scenario 1A), where it was postulated that there was no difference in the risk of recrudescence and the risk of new infection between the two drug regimens, both tests achieved their correct size (α), i.e. the rejection rate was close to the nominal 5%, as expected. Despite there being no difference between the two drugs for either event (as the respective hazard functions for recrudescence and new infection were identical for both drugs), stochastic variation will lead to a rejection of the null hypothesis approximately 5% of the time even though it is true. In the partially null scenario 1C, i.e. drug B had the same effect on recrudescence as drug A but was associated with a decreased hazard of new infection, both tests achieved their correct α. In the partially null Scenario 1B, where drug B was associated with an increased risk of new infection by a hazard ratio of 2.72, the log-rank test correctly achieved its nominal size (5% rejection), but Gray's k-sample test led to a slightly higher rejection rate (11.9%).
Drug A and B have the same post-treatment prophylaxis
When there was no difference between drug A and drug B in terms of their post-treatment prophylaxis, but drug B was associated with increased recrudescence with a hazard ratio of 2.72 (Scenario 2A), both tests had a similar rejection probability. The median proportion of recrudescence observed in this scenario was 6.5% for drug B compared to 2.5% for drug A. In Scenario 2B, where drug B decreased recrudescence relative to drug A (hazard ratio = 0.37), both tests led to rejection of the null hypothesis 80% of the time.
The most relevant and biologically plausible scenario in an antimalarial trial occurs when a new treatment exerts a unidirectional effect on recrudescence and new infection (compared to the reference drug), corresponding to Scenarios 3A and 3D. In Scenario 3A, where drug B was associated with an approximately 2-fold increase in both recrudescence and new infection compared to drug A, the log-rank test appeared to be the more powerful of the two approaches, with a rejection probability of 99% compared to 90% with Gray's k-sample test. In Scenario 3D, where drug B was associated with a median reduction in recrudescence and new infection of approximately 60%, the log-rank test again proved to be superior, rejecting the null hypothesis of no difference (between drug A and drug B) 82.8% of the time compared to 71.3% for Gray's k-sample test (Fig. 6, Panel D). The most interesting difference was observed when drug B exerted a differential effect on recrudescence and new infection, i.e. reduced recrudescence but increased new infection compared to drug A (Scenario 3C). In this situation, Gray's k-sample test appeared to be the more powerful of the two tests (Fig. 6, Panel C). In Scenario 3B, where drug B was associated with increased recrudescence but reduced new infection, the results of the two tests were again very similar.
Fig. 6 Ratio of recrudescence and new infection in simulation study II (n = 500 subjects/arm). The ratio of recrudescence for drug B relative to drug A plotted against the ratio of new infection for drug B relative to drug A for 1000 simulated datasets
Assumption of proportional hazards
In the simulation scenarios studied, the assumption of proportional hazards was violated in 5.4% (490/9000) of the simulated datasets for the comparison of recrudescence, and in 4.5% (407/9000) for new infection. The violation of this assumption did not seem to affect the results of the tests, as the proportion of times the assumption was violated was similar across the different scenarios (Additional file 1, Section 5). Increasing the number of simulation runs from 1000 to 10,000 did not change the results (Table 3, results from 10,000 simulation runs shown in parentheses). However, there were small variations in the results when the simulation was repeated with different sample sizes (Table 4).
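One common way to check the proportional hazards assumption per dataset (not necessarily the method used here) is a score test based on scaled Schoenfeld residuals, sketched below under the same column conventions as before:

library(survival)

fit <- coxph(Surv(time, event == 1) ~ arm, data = dat)   # recrudescence, NI censored
ph <- cox.zph(fit)                                       # test for non-proportionality
violated <- ph$table["arm", "p"] < 0.05                  # flag violating datasets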
Table 4 Probability of rejecting the null hypothesis at two sided 0.05 level for different sample sizes in simulation study II
Impact of sample size
In studies with n = 100 and 200 (which were known to be under-powered from the sample size calculations), both tests achieved their nominal 5% level, i.e. a rejection probability close to 5%, for Scenario 1 (Table 4). In Scenarios 2 and 3, where the hazard ratio for recrudescence between the two drugs was 2.72 or 0.37, the rejection probability did not reach the required level of 0.8.
As expected, when the sample size was increased to 1000 patients per arm, both tests achieved their nominal size in the null scenario, with the exception of Gray's k-sample test for Scenario 1B, which rejected the null hypothesis 21.7% of the time despite there being no difference between the two drugs. In this scenario, the influence of sample size was apparent, as the rejection probability using Gray's k-sample test progressively increased with increasing study sample size. Both tests rejected the null hypothesis in nearly all simulations for Scenarios 2 and 3.
Discussion
Competing risk survival analysis is increasingly being used in the medical and statistical literature [8, 33]. However, this approach remains novel in the context of antimalarial research [34]. The K-M method is the currently recommended approach for deriving antimalarial drug efficacy for uncomplicated P. falciparum malaria. Theoretically, the K-M method overestimates the cumulative incidence of recrudescence in the presence of new infection [17]. The magnitude of this overestimation is currently not documented and the implications for comparative efficacy studies are unknown. In order to fill this research gap, we carried out two simulation studies using biologically plausible survival functions consistent with the underlying pharmacokinetic profiles of the antimalarial drugs.
The first simulation study quantified the degree of overestimation in the cumulative incidence of recrudescence using the naïve 1 minus K-M method compared to the CIF in a single-armed antimalarial trial. The magnitude of the overestimation was found to increase with increasing proportion of recrudescence, proportion of new infection and study follow-up duration, a finding consistent with the statistical and medical literature [16, 17]. The simulation study suggested that the estimates from the two approaches differed by less than 0.1% for most of the scenarios presented in Table 2; such differences are unlikely to have clinical consequences. In a scenario which reflected the current observations of drug efficacy with artemisinin combination therapies (> 95%), the overestimation was negligible in areas of low transmission intensity, i.e. new infections lower than 10% (Table 2). For high transmission areas, it reached a maximum of 1.75%. However, we have also clearly identified several scenarios where the two methods lead to substantially different estimates. The magnitude of the overestimation was greatly increased when antimalarial drug efficacy began to decline. At 90% drug efficacy, the absolute deviation in derived estimates reached a maximum of 0.27% in areas of low transmission and 3.13% in high transmission areas. When the efficacy fell to the low level of 85%, the overestimation reached 4.30% in areas of high transmission. Conversely, in antimalarial studies additional treatment is administered as soon as a recurrent parasitaemia is detected. When that recurrence is a new infection masking an existing low-density parasitaemia of the original infection (a recrudescence), the additional antimalarial drugs prevent the potential recrudescence from ever being observed, leading to an underestimation of failure. Taken together, our results highlight that estimation of drug failure in areas of high transmission requires careful attention and that the CIF provides an alternative approach for deriving the failure estimates.
The second simulation study explored the results from the log-rank test for comparing the cause-specific hazard rates and Gray's k-sample test for comparing the cumulative incidences in comparative drug trials. A total of nine different hypothetical scenarios for how a new drug B might affect recrudescence and new infection compared to an existing drug A were explored (Table 1). There were contrasting differences in two out of the nine scenarios. When drug B, compared to drug A, was associated with an increased (or decreased) risk of both recrudescence and new infection, we found that the log-rank test was more powerful than Gray's k-sample test for detecting differences between the two treatments. However, when drug B had a higher risk of recrudescence and a lower risk of new infection (or vice versa) compared to drug A, then Gray's k-sample test was more powerful in detecting the differences between the two drugs in terms of the primary endpoint (Table 3). This finding is consistent with the results reported by two previous simulation studies in the statistical literature [18, 30]. However, it must be stressed that the latter scenario is less likely to be observed within the context of comparing antimalarial regimens in a real-life situation.
Our simulation study has a number of methodological limitations. First, times to recrudescence and new infection were generated assuming independence. While this greatly simplified the simulation settings, the assumption is unlikely to be verifiable, and carrying out simulation studies accounting for correlation between recrudescence and new infection was beyond the scope of this work. Second, we assumed no losses to follow-up for simplicity. A loss to follow-up of approximately 20% is anticipated in antimalarial studies, and this can be incorporated in future simulation studies. Third, when simulating time to recrudescence, we used rejection sampling and kept the first 1000 datasets with 4–6%, 9–11% and 14–16% recrudescence for the scenarios of 5, 10 and 15% recrudescence, respectively. This approach might have led to less variability between the 1000 simulated datasets. Fourth, in simulation study II, we simulated data based on reference drug A assuming low failure in areas of low transmission (2.5% recrudescence and 21.4% new infections). Hence, the generalisability of the results for comparative studies in areas with different transmission settings might be limited. Finally, this manuscript has focused on the point estimation of the derived failure estimates; however, we would like to emphasise that the uncertainty around the point estimates (the associated 95% confidence intervals) should be given the same importance as the point estimates themselves.
Our results have important clinical consequences. The current WHO strategy for monitoring and evaluation of antimalarial drug efficacy uses a series of threshold-based approaches. For new drugs to be eligible for introduction as a first line treatment, derived failure estimates should be less than 5%, and for current first line treatments, the failure estimates should not exceed 10% [35]. The results presented in Fig. 4 highlight the implications for drug policy when the derived estimates are at the cusp of these thresholds. The derived estimate of cumulative failure was greater than 5% (Fig. 4a) and 10% (Fig. 4b) when the K-M method was used, but remained below 5 and 10% respectively when using the competing risk survival analysis approach, i.e. the CIF. This highlights that ignoring the competing risk of new infections can result in potentially misleading conclusions being drawn from a clinical study, particularly in high transmission settings where a large fraction of patients may develop new infections during the follow-up period, thus confounding the derived efficacy estimates. Similarly, the effect of competing events has implications not only for standalone trials but also for comparative drug trials, particularly when the partner components of the artemisinin combination therapies are eliminated at different rates. For example, lumefantrine, the partner drug in artemether-lumefantrine (AL), has an elimination half-life of 4 days and hence almost all antimalarial activity is sub-therapeutic within 16 days [36]. Conversely, the elimination half-life of piperaquine (the partner drug in dihydroartemisinin-piperaquine (DP)) is four weeks, and it exerts a prolonged post-treatment prophylaxis, reducing the risk of recurrent infections for up to 42 days [36]. Hence, the observed proportion of competing risk events is expected to be significantly lower following DP compared to AL, especially in areas of high transmission. When a large fraction of patients develop new infections, fewer patients are available from which recrudescences can be observed. Hence, it is important that the proportion of competing risk events be taken into consideration when comparing two regimens with different pharmacological properties.
There is an ongoing debate in medical and statistical literature regarding the choice of the method for comparing treatment regimens in the presence of competing risk events [19, 30, 37,38,39]. It is increasingly being advocated that if the research interest is in understanding the biological mechanism of how a treatment affects hazard rate, the log-rank test is considered the appropriate method. However, when the interest is in comparison of overall risk i.e. if individuals receiving a particular drug are more likely to experience recrudescence, the comparison of CIF through Gray's k-sample test is considered appropriate [17, 40, 41]. Many authors advocate presenting results of both these approaches to provide a complete biological understanding of the treatment on different endpoints [17, 42]. It is important that researchers are aware that the choice of the analytical method in the presence of competing risk events should be guided by the research question of interest.
Conclusions
Our simulation study showed that the 1 minus K-M method led to an overestimation of cumulative antimalarial treatment failure compared to the CIF, and the degree of overestimation was far greater in high transmission areas. In areas where a large proportion of recurrences are attributable to new infections, the CIF should be considered as an alternative approach for deriving failure estimates in antimalarial studies. For comparative studies of antimalarial treatments, the choice of the statistical test should be guided by whether the rate or the cumulative risk of recrudescence is the outcome of interest.
Abbreviations
\( {\widehat{S}}_{KM}(t) \) :
Kaplan-Meier estimates of drug efficacy at time t
\( {\widehat{F}}_{KM}(t) \) :
The complement of the Kaplan-Meier estimate \( \left[1-{\widehat{S}}_{KM}(t)\right] \)
CBH:
Cumulative baseline hazard
CIF:
Cumulative Incidence Function
HR:
Hazard ratio
K-M:
Kaplan-Meier
NI:
New infection
RC:
Recrudescence
References
Prentice RL, Kalbfleisch JD, Peterson AV, Flournoy N, Farewell VT, Breslow NE. The analysis of failure times in the presence of competing risks. Biometrics. 1978;34:541–54.
Wolbers M, Koller MT, Stel VS, Schaer B, Jager KJ, Leffondre K, et al. Competing risks analyses: objectives and approaches. Eur Heart J. 2014;35:2936–41.
Blower S, Bernoulli D. An attempt at a new analysis of the mortality caused by smallpox and of the advantages of inoculation to prevent it. Rev Med Virol. 2004;14:275–88.
Fix E, Neyman J. A simple stochastic model of recovery, relapse, death and loss of patients. Hum Biol. 1951;23:205–41.
Cornfield J. The estimation of the probability of developing a disease in the presence of competing risks. Am J Public Health. 1957;47:601–7.
Chiang CL. Introduction to stochastic processes in biostatistics. New York, USA: Wiley; 1968.
Kalbfleisch JD, Prentice RL. The statistical analysis of failure time data; 2002.
Koller MT, Raatz H, Steyerberg EW, Wolbers M. Competing risks and the clinical community: irrelevance or ignorance. Stat Med. 2012;31:1089–97.
Van Walraven C, McAlister FA. Competing risk bias was common in Kaplan-Meier risk estimates published in prominent medical journals. J Clin Epidemiol. 2016;69:170–3.
Austin PC, Fine JP. Accounting for competing risks in randomized controlled trials : a review and recommendations for improvement. Stat Med. 2017;36:1203–9.
World Health Organization. Assessment and monitoring of antimalarial drug efficacy for the treatment of uncomplicated falciparum malaria. Geneva, Switzerland; 2003.
World Health Organization. Methods for surveillance of antimalarial drug efficacy. Geneva, Switzerland; 2009.
Yeka A, Banek K, Bakyaita N, Staedke SG, Kamya MR, Talisuna A, et al. Artemisinin versus nonartemisinin combination therapy for uncomplicated malaria: randomized clinical trials from four sites in Uganda. PLoS Med. 2005;2:0654–62.
Kalbfleisch JD, Prentice RL. Competing risks and multistate models. In: The statistical analysis of failure time data. 2nd ed. New York, USA: John Wiley and Sons Inc; 2002. p. 247–77.
Southern DA, Faris PD, Brant R, Galbraith PD, Norris CM, Knudtson ML, et al. Kaplan-Meier methods yielded misleading results in competing risk scenarios. J Clin Epidemiol. 2006;59:1110–4.
Lacny S, Wilson T, Clement F, Roberts DJ, Faris PD, Ghali WA, et al. Kaplan-Meier survival analysis overestimates the risk of revision arthroplasty: a meta-analysis. Clin Orthop Relat Res. 2015;473:3431–42.
Gooley TA, Leisenring W, Crowley J, Storer BE. Estimation of failure probabilities in the presence of competing risks: new representations of old estimators. Stat Med. 1999;18:695–706.
Varadhan R, Weiss CO, Segal JB, Wu AW, Scharfstein D, Boyd C. Evaluating health outcomes in the presence of competing risks: a review of statistical methods and clinical applications. Med Care. 2010;48(6 Suppl):S96–105.
Bajorunaite R, Klein JP. Comparison of failure probabilities in the presence of competing risks. J Stat Comput Simul. 2008;78:951–66.
Andersen PK, Geskus RB, De witte T, Putter H. Competing risks in epidemiology: possibilities and pitfalls. Int J Epidemiol. 2012;41:861–70.
Gray RJ. A class of K-sample tests for comparing the cumulative incidence of a competing risk. Ann Stat. 1988;16:1141–54.
Klein JP. Competing risks. Wiley Interdisciplinary Reviews: Computational Statistics. 2010;2:333–9.
Worldwide Antimalarial Resistance Network (WWARN) AL Dose Impact Study Group. The effect of dose on the antimalarial efficacy of artemether–lumefantrine: a systematic review and pooled analysis of individual patient data. Lancet Infect Dis. 2015;15:692–702.
The WorldWide Antimalarial Resistance Network (WWARN) AS-AQ Study Group. The effect of dosing strategies on the therapeutic efficacy of artesunate-amodiaquine for uncomplicated malaria: a meta-analysis of individual patient data. BMC Med. 2015;13:66.
The WorldWide Antimalarial Resistance Network (WWARN) DP Study Group. The effect of dosing regimens on the antimalarial efficacy of Dihydroartemisinin-Piperaquine: a pooled analysis of individual patient data. PLoS Med. 2013;10:1–17.
Leang R, Barrette A, Bouth DM, Menard D, Abdur R, Duong S, et al. Efficacy of dihydroartemisinin-piperaquine for treatment of uncomplicated plasmodium falciparum and plasmodium vivax in Cambodia, 2008 to 2010. Antimicrob Agents Chemother. 2013;57:818–26.
Saunders DL, Vanachayangkul P, Lon C. Dihydroartemisinin–Piperaquine Failure in Cambodia. N Engl J Med. 2014;371:484–5.
Phuc BQ, Rasmussen C, Duong TT, Dong LT, Loi MA, Tarning J, et al. Treatment failure of Dihydroartemisinin/Piperaquine for plasmodium falciparum malaria, Vietnam. Emerg Infect Dis. 2017;23:715–7.
WorldWide Antimalarial Resistance Network (WWARN) Lumefantrine PK/PD Study Group. Artemether-lumefantrine treatment of uncomplicated plasmodium falciparum malaria: a systematic review and meta-analysis of day 7 lumefantrine concentrations and therapeutic response using individual patient data. BMC Med. 2015;13:227.
Williamson PR, Kolamunnage-Dona R, Tudur Smith C. The influence of competing-risks setting on the choice of hypothesis test for treatment effect. Biostatistics. 2007;8:689–94.
Crowther MJ, Lambert PC. Simulating biologically plausible complex survival data. Stat Med. 2013.
R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2017. https://www.r-project.org/.
Austin PC, Lee DS, Fine JP. Introduction to the analysis of survival data in the presence of competing risks. Circulation. 2016;133:601–9.
Dahal P, Simpson JA, Dorsey G, Guérin PJ, Price RN, Stepniewska K. Statistical methods to derive efficacy estimates of anti-malarials for uncomplicated plasmodium falciparum malaria: pitfalls and challenges. Malar J. 2017;16:430.
World Health Organization. Responding to antimalarial drug resistance. In: World Health Organization. 2017. http://www.who.int/malaria/areas/drug_resistance/overview/en/. Accessed 5 Dec 2017.
World Health Organization. Guidelines for the treatment of malaria: third edition. Geneva, Switzerland; 2015.
Freidlin B, Korn EL. Testing treatment effects in the presence of competing risks. Stat Med. 2005;24:1703–12.
Dignam JJ, Kocherginsky MN. Choice and interpretation of statistical tests used when competing risks are present. J Clin Oncol. 2008;26:4027–34.
Rotolo F, Michiels S. Testing the treatment effect on competing causes of death in oncology clinical trials. BMC Med Res Methodol. 2014;14:1–11.
Pintilie M. Analysing and interpreting competing risk data. Stat Med. 2007;26:1360–7.
Tai B-C, Wee J, Machin D. Analysis and design of randomised clinical trials involving competing risks endpoints. Trials. 2011;12:127.
Latouche A, Allignol A, Beyersmann J, Labopin M, Fine JP. A competing risks analysis should report results on all cause-specific hazards and cumulative incidence functions. J Clin Epidemiol. 2013;66:648–53.
Acknowledgements
We thank Dr. Marcel Wolbers for several helpful discussions on the topic and Prof. Sir Nick J White for his astute clinical acumen.
Funding
PD is funded by Tropical Network Fund, Centre for Tropical Medicine and Global Health, Nuffield Department of Clinical Medicine, University of Oxford. The WorldWide Antimalarial Resistance Network (PD, KS, RNP, and PJG) is funded by a Bill and Melinda Gates Foundation grant and the ExxonMobil Foundation. JAS is an Australian National Health and Medical Research Council Senior Research Fellow (1104975). RNP is a Wellcome Trust Senior Fellow in Clinical Science (200909). This work was supported in part by the Australian Centre of Research Excellence on Malaria Elimination (ID# 1134989). The funders did not participate in the study development, the writing of the paper, decision to publish, or preparation of the manuscript.
Availability of data and materials
The data generated and analysed for this study are available from the corresponding author on reasonable request.
WorldWide Antimalarial Resistance Network (WWARN), Oxford, UK
Prabin Dahal, Philippe J. Guerin, Ric N. Price & Kasia Stepniewska
Centre for Tropical Medicine and Global Health, Nuffield Department of Clinical Medicine, University of Oxford, Oxford, UK
Global and Tropical Health Division, Menzies School of Health Research and Charles Darwin University, Darwin, Australia
Ric N. Price
Centre for Epidemiology and Biostatistics, Melbourne School of Population and Global Health, The University of Melbourne, Melbourne, Australia
Julie A. Simpson
Prabin Dahal
Philippe J. Guerin
Kasia Stepniewska
PD, PJG, RNP, JAS and KS conceived the idea and wrote the first draft of the manuscript. PD, JAS and KS designed the simulation study. PD performed all the simulations. All authors read and approved the final version.
Correspondence to Prabin Dahal.
Additional file 1: Additional text and results (DOCX 130 kb)
Dahal, P., Guerin, P.J., Price, R.N. et al. Evaluating antimalarial efficacy in single-armed and comparative drug trials using competing risk survival analysis: a simulation study. BMC Med Res Methodol 19, 107 (2019). https://doi.org/10.1186/s12874-019-0748-2
Competing risk events
Convergence rates in homogenization of p-Laplace equations
Jie Zhao1 &
Juan Wang1
This paper is concerned with the homogenization of p-Laplace equations with rapidly oscillating periodic coefficients. The main difficulty of this work stems from the nonlinear structure of the p-Laplace equations themselves. Utilizing the layer and co-layer type estimates as well as homogenization techniques, we establish the desired error estimates. As a consequence, we obtain rates of convergence for solutions in \(W_{0}^{1,p}\) as well as \(L^{p}\). Meanwhile, our convergence rate results do not involve higher derivatives. This may be viewed as rather surprising. The novelty of this work is that it provides a new analysis method in quantitative homogenization.
In this paper, we shall establish the rates of convergence for p-Laplace equations with rapidly oscillating periodic coefficients. More precisely, let Ω be a bounded Lipschitz domain in \(\mathbb{R} ^{n}\). Suppose that \(u_{\varepsilon }\in W^{1,p}(\varOmega )\), for any \(1\leq p<\infty \), is a weak solution to the following problem:
$$ \textstyle\begin{cases} L_{\varepsilon }u_{\varepsilon }=-\operatorname{div} (A(x/\varepsilon ) \vert \triangledown u_{\varepsilon } \vert ^{p-2}\triangledown u_{\varepsilon } ) =F& \mbox{in } \varOmega , \\ u_{\varepsilon } =f& \mbox{on }\partial \varOmega . \end{cases} $$
Throughout this paper, the summation convention is used. We assume that the matrix \(A(y)=(a_{ij}(y))\) with \(1\leq i\), \(j\leq n\), is real, bounded measurable, and satisfies the following conditions.
Periodicity conditions: for any \(y\in \mathbb{R}^{n}\) and \(Y=[0,1)^{n}\simeq \mathbb{R}^{n}/\mathbb{Z}^{n}\),
$$ A(y+Y)=A(y). $$
Coerciveness and growth conditions: there exists a \(\lambda >0\), for any \(y\in \mathbb{R}^{n}\) and \(\xi ,\xi '\in \mathbb{R}^{n}\),
$$\begin{aligned} \lambda \bigl( \vert \xi \vert + \bigl\vert \xi ' \bigr\vert \bigr)^{p-2} \bigl\vert \xi -\xi ' \bigr\vert ^{2}&\leq \bigl\langle A(y) \vert \xi \vert ^{p-2}\xi -A(y) \bigl\vert \xi ' \bigr\vert ^{p-2}\xi ',\xi -\xi '\bigr\rangle \\ &\leq \frac{1}{ \lambda }\bigl( \vert \xi \vert + \bigl\vert \xi ' \bigr\vert \bigr)^{p-2} \bigl\vert \xi -\xi ' \bigr\vert ^{2}. \end{aligned}$$
Smoothness conditions: with \(1/p+1/p'=1\),
$$ F\in W^{-1,p'}(\varOmega ), \qquad f\in W^{1,p}( \partial \varOmega ). $$
It is well known that the solution \(u_{\varepsilon }\rightharpoonup u _{0}\) weakly in \(W^{1,p}(\varOmega )\), as \(\varepsilon \rightarrow 0\), where \(u_{0}\) is the solution to the homogenized problem
$$ \textstyle\begin{cases} L_{0}u_{0} =-\operatorname{div} (Q \vert \triangledown u_{0} \vert ^{p-2} \triangledown u_{0} ) =F & \mbox{in } \varOmega , \\ u_{0} =f& \mbox{on }\partial \varOmega . \end{cases} $$
Here Q is a constant matrix, defined by
$$ Q= \int _{Y} \bigl[A(y) \bigl\vert \triangledown \chi (y)+1 \bigr\vert ^{p-2}\bigl(\triangledown \chi (y)+1\bigr) \bigr]\,dy, $$
where the corrector \(\chi (y)\) satisfies the following cell problem:
$$ \textstyle\begin{cases} \operatorname{div} [A(y) \vert \triangledown \chi (y)+1 \vert ^{p-2}(\triangledown \chi +1) ]=0 & \mbox{in } Y, \\ \int _{Y}\chi (y)\,dy=0. \end{cases} $$
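For orientation, when \(p=2\) the factor \(|\triangledown \chi +1|^{p-2}\) equals 1, so the cell problem reduces to the classical linear corrector equation \(\operatorname{div} [A(y)(\triangledown \chi (y)+1) ]=0\) in Y, and Q becomes the usual homogenized coefficient \(Q=\int _{Y}A(y)(\triangledown \chi (y)+1)\,dy\).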
Recently, much classical work has been published on the convergence of solutions for linear operators in homogenization in various settings. In 2011, Gérard and Masmoudi [4] obtained the \(L^{2}\) convergence rate for the boundary layers Neumann problems. In 2012, Kenig, Lin and Shen [7] obtained \(L^{2}\) as well as \(H^{\frac{1}{2}}\) convergence rates for elliptic oscillating operators. In 2013, Aleksanyan, Shahgholian and Sjölin [1, 2] proved pointwise as well as \(L^{p}\) estimates for fixed operators and oscillating Dirichlet boundary data. In 2014, Kenig, Lin and Shen [8] established \(W^{k,p}\) convergence rates via the asymptotic behavior of the Green or Neumann functions. In 2015, the first author [24] obtained pointwise as well as \(W^{1,p}\) convergence rates for fixed operators and oscillating Neumann boundary data. In 2015, Gu [5] also proved convergence rates in \(L^{2}\) and \(H^{1}\) for linear Stokes systems. In 2016, Shen [18] proved the \(L^{2}\) convergence rate for mixed Dirichlet–Neumann boundary value problems. In 2018, Niu and Xu [11] obtained the \(L^{2}\) convergence rate for 2mth-order equations with periodically oscillating coefficients.
The case of nonlinear operators in homogenization has also been studied extensively. Piat and Defranceschi [15] obtained weak convergence in \(W^{1,p}\) for quasi-linear monotone operators. Pastukhova [14] considered nonlinear equations of monotone type with multiscale coefficients and established the \(L^{2}\) convergence rate. Recently, Wang, Xu and Zhao [21] studied quasilinear elliptic equations and obtained an error estimate in \(L^{2}\). We refer the reader to [3, 6, 10, 16, 23] and the references therein for more results on nonlinear problems in homogenization.
The motivation for this paper comes from the problems raised by Wang, Xu and Zhao in [22] for p-Laplace type equations. The aim of the paper is to obtain accurate convergence rates of solutions for the classical p-Laplace equations with rapidly oscillating periodic coefficients. Thanks to the layer and co-layer type estimates, we can handle the different ingredients in the integral by energy methods. Similar procedures may be found in [9] or [12], where they were used to analyze the spatial and mechanical properties of solutions reflecting the microstructure of the materials.
The following are the main results of this paper.
Let Ω be a bounded Lipschitz domain in \(\mathbb{R}^{n}\). Suppose that \(u_{\varepsilon }\in W^{1,p}(\varOmega )\) and \(u_{0}\in W^{1,q}(\varOmega )\), with \(q>p\geq 4\), are the weak solutions of the problems (1.1) and (1.5), respectively. Then, under the assumptions (1.2)–(1.4), there exists a constant C such that
$$ \bigl\Vert u_{\varepsilon }-u_{0}-\varepsilon \chi T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\Vert _{W_{0}^{1,p}(\varOmega )}\leq C\varepsilon ^{\frac{1}{p}-\frac{1}{q}} \Vert \triangledown u_{0} \Vert _{L^{q}(\varOmega )}, $$
where \(T_{\varepsilon }\) is the smoothing operator, and \(\eta _{\varepsilon }\) is a cut-off function.
Under the same conditions as in Theorem 1, there exists a constant C, with some \(q>p\geq 4\), such that
$$ \Vert u_{\varepsilon }-u_{0} \Vert _{L^{p}(\varOmega )}\leq C \varepsilon ^{\frac{1}{p}-\frac{1}{q}} \Vert \triangledown u_{0} \Vert _{L^{q}(\varOmega )}. $$
The astute reader may have already noticed that our convergence rate result in Theorem 1 does not involve higher derivatives. This may be viewed as rather surprising, even in the linear case. The novelty of this work is that it provides a new analysis method, resting on the layer and co-layer type estimates, in quantitative homogenization. To the best of our knowledge, there are few contributions in the field concerning p-Laplace equations in homogenization.
The rest of the paper is organized as follows. Section 2 contains basic notation and useful propositions which play crucial roles in obtaining the convergence rates. In Sect. 3, we show that the solution \(u_{\varepsilon }\) of the p-Laplace equation converges to the solution \(u_{0}\) of the corresponding homogenized problem; this is based on the energy method as well as homogenization tools.
We begin by specifying our notations.
Let \(B_{r}(x)\) denote the open ball with center x and radius r. We write \(\varOmega _{\varepsilon }=\{x\in \varOmega : \operatorname{dist}(x,\partial \varOmega )> \varepsilon \}\) and call it the co-layer part of Ω; the associated layer part is \(\varOmega \setminus \varOmega _{\varepsilon }\). Let \(\eta _{\varepsilon }\in C_{0}^{\infty }( \varOmega )\) be a cut-off function satisfying \(\eta _{\varepsilon }=1\) in \(\varOmega _{\varepsilon }\), \(\eta _{\varepsilon }=0\) outside \(\varOmega _{\varepsilon /2}\), and \(|\triangledown \eta _{\varepsilon }|\leq C/ \varepsilon \). Throughout the paper, we use C to denote a positive constant which may vary from line to line.
Proposition 2.1
Let \(F=(F_{1},F_{2},\ldots ,F_{n})\in L^{p}(Y)\). Suppose that \(\int _{Y}F_{j}(y)\,dy=0\) and \(\operatorname{div}F(y)=0\) in Y. Then there exists \(\varPhi _{ij}\in W^{1,p}(Y)\) such that \(\varPhi _{ij}=-\varPhi _{ji}\) and \(F_{j}=\frac{\partial \varPhi _{ij}}{\partial y_{i}}\).
This proposition is well known. It is called the technique of flux correctors. The linear operator case is well known (see for example [7], Lemma 3.1). Let \(f_{j}\in W^{2,p}(Y)\) be the solution to the cell problem \(\triangle f_{j}=F_{j}\) in Y. Then we could define \(\varPhi _{ij}(y)=\frac{\partial }{\partial y_{i}}[f_{j}(y)]-\frac{\partial }{\partial y_{j}}[f_{i}(y)]\). From an energy estimate, we may get the desired properties.
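For completeness, here is a brief check that this construction works, assuming (as is standard) that \(f_{j}\) is the Y-periodic solution, which exists since \(\int _{Y}F_{j}(y)\,dy=0\). The function \(h=\operatorname{div}f\) is Y-periodic, has mean zero over Y, and satisfies \(\triangle h=\operatorname{div}F=0\), so \(h\equiv 0\). Therefore

$$ \frac{\partial \varPhi _{ij}}{\partial y_{i}}=\frac{\partial }{\partial y_{i}} \biggl(\frac{\partial f_{j}}{\partial y_{i}}-\frac{\partial f_{i}}{\partial y_{j}} \biggr)=\triangle f_{j}-\frac{\partial }{\partial y_{j}}(\operatorname{div}f)=F_{j}, $$

and the antisymmetry \(\varPhi _{ij}=-\varPhi _{ji}\) holds by construction.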
Recently, the smoothing operator was introduced by Suslina in [19, 20]. Meanwhile, the use of the smoothing operator to obtain error estimates was first established by Shen in [17]. Next, we introduce the smoothing operator and its properties. This work extends the use of the smoothing operator to the case of p-Laplace equations, which is of independent interest.
Fix \(\psi \in C^{\infty }_{0}(B_{1}(0))\) such that \(\psi \geq 0\) and \(\int _{\mathbb{R}^{n}}\psi \,dx=1\). Define the operator \(T_{\varepsilon }\) on \(L^{p}(\mathbb{R}^{n})\) as
$$ T_{\varepsilon }(u) (x)=u\ast \psi _{\varepsilon }= \int _{\mathbb{R}^{n}}u(x-y) \psi _{\varepsilon }(y)\,dy, $$
where \(\psi _{\varepsilon }(x)=\varepsilon ^{-n}\psi (x/\varepsilon )\). We call it the smoothing operator.
Proposition 2.3
Let \(u_{0}\in W^{1,p}(\varOmega )\) and let \(f\in L^{p}(Y)\) be a periodic function, for some \(1< p<\infty \). Then we have
$$ \bigl\Vert f(\cdot /\varepsilon )T_{\varepsilon }(u_{0}) \bigr\Vert _{L^{p}(\varOmega )}\leq C \Vert f \Vert _{L^{p}(Y)} \Vert u_{0} \Vert _{L^{p}(\varOmega )} $$
and

$$ \bigl\Vert u_{0}-T_{\varepsilon }(u_{0}) \bigr\Vert _{L^{p}(\varOmega _{\varepsilon })} \leq C\varepsilon \Vert \triangledown u_{0} \Vert _{L^{p}(\varOmega )}. $$
These estimates could be proved by Fubini's theorem and Hölder's inequality. We refer the reader to [13, 17] or [18] for the detailed proof.
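For the reader's convenience, here is a minimal sketch of the first estimate, with \(u_{0}\) extended by zero to \(\mathbb{R}^{n}\). Since \(\psi _{\varepsilon }\,dy\) is a probability measure, Jensen's inequality gives \(|T_{\varepsilon }(u_{0})(x)|^{p}\leq \int _{\mathbb{R}^{n}}|u_{0}(y)|^{p}\psi _{\varepsilon }(x-y)\,dy\), and Fubini's theorem then yields

$$ \int _{\mathbb{R}^{n}} \bigl\vert f(x/\varepsilon ) \bigr\vert ^{p} \bigl\vert T_{\varepsilon }(u_{0}) (x) \bigr\vert ^{p}\,dx\leq \Vert \psi \Vert _{\infty } \int _{\mathbb{R}^{n}} \bigl\vert u_{0}(y) \bigr\vert ^{p} \varepsilon ^{-n} \int _{ \vert x-y \vert \leq \varepsilon } \bigl\vert f(x/\varepsilon ) \bigr\vert ^{p}\,dx\,dy. $$

After substituting \(z=x/\varepsilon \), the inner integral equals \(\int _{|z-y/\varepsilon |\leq 1}|f(z)|^{p}\,dz\leq C\|f\|^{p}_{L^{p}(Y)}\) by periodicity, since a unit ball meets only a bounded number of period cells.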
The main interest of the present work is to find a new approach to analyzing the error estimates for homogenization problems. Fortunately, the co-layer and layer type estimates provide such a new way to derive rates of convergence.
Proposition 2.4
(Co-layer and layer type estimates)
If \(u_{0}\in W^{1,q}( \varOmega )\) for some \(q>p>1\), then we have the estimates
$$\begin{aligned}& \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u _{0} \vert ^{p}\,dx\leq C\varepsilon ^{1-\frac{p}{q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p}{q}}, \\& \int _{\varOmega _{\varepsilon }} \bigl\vert \triangledown ^{2} u_{0} \bigr\vert ^{2} \vert \triangledown u_{0} \vert ^{p-2}\,dx\leq C \varepsilon ^{-1-\frac{p}{q}} \biggl( \int _{\varOmega } \vert \triangledown u _{0} \vert ^{q}\,dx \biggr)^{\frac{p}{q}}, \end{aligned}$$
$$ \int _{\varOmega _{\varepsilon }} \bigl\vert \triangledown ^{2} u_{0} \bigr\vert ^{p} \,dx \leq C\varepsilon ^{1-\frac{p}{q}-p} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p}{q}}. $$
These estimates play crucial roles in obtaining the convergence rates in the present paper, and they do not involve higher derivatives. They can be derived from regularity estimates and may be found in [22].
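For instance, the first (layer) estimate is immediate from Hölder's inequality with exponent \(q/p\) and the fact that \(|\varOmega \setminus \varOmega _{\varepsilon }|\leq C\varepsilon \) for a bounded Lipschitz domain:

$$ \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u_{0} \vert ^{p}\,dx\leq \vert \varOmega \setminus \varOmega _{\varepsilon } \vert ^{1-\frac{p}{q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p}{q}}\leq C\varepsilon ^{1-\frac{p}{q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p}{q}}. $$

The remaining two bounds additionally rely on interior regularity estimates for \(u_{0}\) on \(\varOmega _{\varepsilon }\).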
Proofs of theorems
The goal of this section is to establish \(W_{0}^{1,p}\) and \(L^{p}\) convergence rates of solutions for the p-Laplace equations in homogenization.
Set the first-order approximation term
$$ v_{\varepsilon }= u_{0}+\varepsilon \chi T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}). $$
We find that
$$ \triangledown v_{\varepsilon }=\triangledown u_{0}+ \triangledown \chi T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0})+ \varepsilon \chi T_{\varepsilon }\bigl(\eta _{\varepsilon }\triangledown ^{2} u_{0}\bigr)+ \varepsilon \chi T_{\varepsilon }( \triangledown \eta _{\varepsilon } \triangledown u_{0}). $$
In view of the fact that, for any \(\varphi \in C_{0}^{\infty }(\varOmega )\),
$$ \int _{\varOmega }A(x/\varepsilon ) \vert \triangledown u_{\varepsilon } \vert ^{p-2} \triangledown u_{\varepsilon } \cdot \triangledown \varphi \,dx= \int _{\varOmega }Q \vert \triangledown u_{0} \vert ^{p-2}\triangledown u_{0} \cdot \triangledown \varphi \,dx, $$

we obtain
$$\begin{aligned}& \int _{\varOmega } \bigl[A(x/\varepsilon ) \vert \triangledown u_{\varepsilon } \vert ^{p-2} \triangledown u_{\varepsilon } -A(x/\varepsilon ) \vert \triangledown v_{ \varepsilon } \vert ^{p-2}\triangledown v_{\varepsilon } \bigr] \cdot \triangledown \varphi \,dx \\& \quad = \int _{\varOmega } \bigl[Q \vert \triangledown u_{0} \vert ^{p-2}\triangledown u_{0}-Q \bigl\vert T _{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr\vert ^{p-2}T_{ \varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr]\cdot \triangledown \varphi \,dx \\& \qquad {}+ \int _{\varOmega } \bigl[Q \bigl\vert T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr\vert ^{p-2}T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0})-A(x/ \varepsilon ) \bigl\vert T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr\vert ^{p-2} \\& \qquad {}\cdot T _{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \vert \triangledown \chi +1 \vert ^{p-2}(\triangledown \chi +1) \bigr]\cdot \triangledown \varphi \,dx \\& \qquad {}+ \int _{\varOmega } \bigl[A(x/\varepsilon ) \bigl\vert T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert ^{p-2}T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \vert \triangledown \chi +1 \vert ^{p-2}( \triangledown \chi +1) \\& \qquad {}-A(x/\varepsilon ) \vert \triangledown v_{\varepsilon } \vert ^{p-2}\triangledown v_{\varepsilon } \bigr] \cdot \triangledown \varphi \,dx \\& \quad \doteq I_{1}+I_{2}+I_{3}. \end{aligned}$$
To estimate \(I_{1}\), we use Proposition 2.3 and Proposition 2.4 (the co-layer and layer type estimates), which show that
$$\begin{aligned} \vert I_{1} \vert \leq& C \int _{\varOmega } \bigl\vert \triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \bigl( \vert \triangledown u_{0} \vert + \bigl\vert T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \bigr)^{p-2}\cdot \vert \triangledown \varphi \vert \,dx \\ \leq& C \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u _{0} \vert ^{p-1} \vert \triangledown \varphi \vert \,dx+C \int _{\varOmega _{\varepsilon }} \bigl\vert \triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \vert \triangledown u_{0} \vert ^{p-2} \vert \triangledown \varphi \vert \,dx \\ \leq& C \biggl( \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u_{0} \vert ^{p}\,dx \biggr)^{1-\frac{1}{p}}+C \int _{\varOmega _{\varepsilon }} \bigl\vert \eta _{\varepsilon }\triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \vert \triangledown u_{0} \vert ^{p-2} \vert \triangledown \varphi \vert \,dx \\ \leq& C \biggl( \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u_{0} \vert ^{p}\,dx \biggr)^{1-\frac{1}{p}}+C \biggl( \int _{\varOmega _{\varepsilon }} \bigl\vert \eta _{\varepsilon }\triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert ^{\frac{p}{p-1}} \vert \triangledown u_{0} \vert ^{\frac{p(p-2)}{p-1}}\,dx \biggr)^{1-\frac{1}{p}} \\ \leq& C \biggl( \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u_{0} \vert ^{p}\,dx \biggr)^{1-\frac{1}{p}}+C\varepsilon \biggl( \int _{ \varOmega _{\varepsilon }} \bigl\vert \triangledown ^{2} u_{0} \bigr\vert ^{\frac{p}{p-1}} \vert \triangledown u_{0} \vert ^{\frac{p(p-2)}{p-1}}\,dx \biggr)^{1-1/p} \\ \leq& C \biggl( \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u_{0} \vert ^{p}\,dx \biggr)^{1-\frac{1}{p}}+C\varepsilon \biggl( \int _{ \varOmega _{\varepsilon }} \bigl\vert \triangledown ^{2} u_{0} \bigr\vert ^{2} \vert \triangledown u _{0} \vert ^{p-2}\,dx \biggr)^{\frac{1}{2}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{p}\,dx \biggr)^{\frac{p-2}{2p}} \\ \leq& C\varepsilon ^{(1-\frac{1}{p})(1-\frac{p}{q})} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}}+C \varepsilon ^{\frac{1}{2}-\frac{p}{2q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p}{2q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-2}{2q}} \\ \leq& C\varepsilon ^{\frac{1}{2}(1-\frac{p}{q})} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}} \end{aligned}$$
for some \(q>p\geq 2\), where we have used the Hölder inequality.
Next, we shall estimate \(I_{2}\). Let
$$ F(y,\xi )=Q \bigl\vert T_{\varepsilon }(\xi ) \bigr\vert ^{p-2}T_{\varepsilon }(\xi )-A(y) \bigl\vert T _{\varepsilon }( \xi ) \bigr\vert ^{p-2}T_{\varepsilon }(\xi ) \bigl\vert \triangledown \chi (y)+1 \bigr\vert ^{p-2}\bigl( \triangledown \chi (y)+1\bigr). $$
Note that \(F(\cdot ,\xi )\) is periodic in its first variable and satisfies the conditions of Proposition 2.1. Then there exists \(\varPhi (\cdot ,\xi )\) such that \(\varPhi _{ij}=-\varPhi _{ji}\) and
$$ Q \bigl\vert T_{\varepsilon }(\xi ) \bigr\vert ^{p-2}T_{\varepsilon }( \xi )-A(y) \bigl\vert T_{\varepsilon }(\xi ) \bigr\vert ^{p-2}T_{\varepsilon }( \xi ) \bigl\vert \triangledown \chi (y)+1 \bigr\vert ^{p-2}\bigl(\triangledown \chi (y)+1\bigr)=\operatorname{div}_{y} \varPhi (y,\xi ). $$
Thus, it gives
$$\begin{aligned} I_{2} =& \int _{\varOmega }F(x,x/\varepsilon )\cdot \triangledown \varphi \,dx \\ =& \int _{\varOmega }\operatorname{div}_{y} \varPhi (y,\xi ) \cdot \triangledown \varphi \,dx \\ =& \int _{\varOmega }\frac{\partial }{\partial y_{i}} \bigl( \varPhi _{ij}(y, \xi ) \bigr)\cdot \frac{\partial \varphi }{\partial x_{j}}\,dx \\ =&- \int _{\varOmega }\frac{\partial }{\partial x_{i}} \bigl( \varepsilon \varPhi _{ij}(x,x/\varepsilon ) \bigr)\cdot \frac{\partial \varphi }{ \partial x_{j}}\,dx+ \int _{\varOmega }\frac{\partial }{\partial y_{i}} \bigl( \varPhi _{ij}(y,\xi ) \bigr)\cdot \frac{\partial \varphi }{\partial x _{j}}\,dx \\ &{}+ \int _{\varOmega }\varepsilon \varPhi _{ij}(x,x/\varepsilon )\cdot \frac{ \partial \varphi }{\partial x_{i}\partial x_{j}}\,dx \\ =& \int _{\varOmega } \biggl[\frac{\partial }{\partial y_{i}} \bigl( \varPhi _{ij}(y,\xi ) \bigr)-\frac{\partial }{\partial x_{i}} \bigl( \varepsilon \varPhi _{ij}(x,x/\varepsilon ) \bigr) \biggr]\cdot \frac{\partial \varphi }{\partial x_{j}} \,dx, \end{aligned}$$
where we have used the divergence theorem and the antisymmetry of \(\varPhi _{ij}\).
As a result, using Proposition 2.4 again, we get
$$\begin{aligned} \vert I_{2} \vert \leq& C \int _{\varOmega } \bigl\vert \triangledown _{y} \varPhi (y,\xi )-\triangledown _{x} \varPhi (x,x/\varepsilon ) \bigr\vert \cdot \vert \triangledown \varphi \vert \,dx \\ \leq& C\varepsilon \int _{\varOmega _{\varepsilon }} \bigl\vert T_{\varepsilon }\bigl( \triangledown ^{2} u_{0}\bigr) \bigr\vert \cdot \vert \triangledown u_{0} \vert ^{p-2}\cdot \vert \triangledown \varphi \vert \,dx \\ \leq& C\varepsilon \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{p}\,dx \biggr) ^{1-\frac{2}{p}} \biggl( \int _{\varOmega _{\varepsilon }} \bigl\vert T_{\varepsilon }\bigl( \triangledown ^{2} u_{0}\bigr) \bigr\vert ^{p}\,dx \biggr)^{\frac{1}{p}} \\ \leq& C\varepsilon \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr) ^{\frac{p-2}{q}} \biggl( \int _{\varOmega _{\varepsilon }} \bigl\vert \triangledown ^{2} u_{0} \bigr\vert ^{p}\,dx \biggr)^{\frac{1}{p}} \\ \leq& C\varepsilon ^{\frac{1}{p}-\frac{1}{q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}} \end{aligned}$$
for some \(q>p\).
For \(I_{3}\), it follows that
$$\begin{aligned}& \vert I_{3} \vert \\& \quad \leq C \int _{\varOmega } \bigl\vert (\triangledown \chi +1)T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0})- \triangledown v_{\varepsilon } \bigr\vert \bigl( \bigl\vert (\triangledown \chi +1)T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr\vert + \vert \triangledown v_{\varepsilon } \vert \bigr) ^{p-2}\cdot \vert \triangledown \varphi \vert \,dx \\& \quad \leq C \int _{\varOmega } \bigl\vert \triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \bigl( \vert \triangledown u_{0} \vert + \bigl\vert (\triangledown \chi +1) T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \bigr)^{p-2}\cdot \vert \triangledown \varphi \vert \,dx \\& \qquad {} +C \int _{\varOmega } \bigl\vert \triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \\& \qquad {}\cdot \bigl( \bigl\vert \triangledown \chi T_{\varepsilon }(\eta _{\varepsilon } \triangledown u_{0}) \bigr\vert + \bigl\vert \varepsilon \chi T_{\varepsilon }\bigl(\eta _{\varepsilon }\triangledown ^{2} u_{0}\bigr) \bigr\vert + \bigl\vert \varepsilon \chi T_{\varepsilon }(\triangledown \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \bigr)^{p-2} \vert \triangledown \varphi \vert \,dx \\& \qquad {}+ C \int _{\varOmega } \bigl\vert \varepsilon \chi T_{\varepsilon } \bigl(\eta _{\varepsilon }\triangledown ^{2} u_{0} \bigr)+\varepsilon \chi T_{\varepsilon }(\triangledown \eta _{\varepsilon } \triangledown u_{0}) \bigr\vert \bigl( \bigl\vert (\triangledown \chi +1)T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr\vert + \vert \triangledown v_{\varepsilon } \vert \bigr)^{p-2} \\& \qquad {}\cdot \vert \triangledown \varphi \vert \,dx \\& \quad \doteq I_{31}+I_{32}+I_{33}. \end{aligned}$$
Here, we divide the estimate into three ingredients.
Similar to the estimate of \(I_{1}\), we have
$$\begin{aligned} \vert I_{31} \vert \leq& C \biggl( \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u_{0} \vert ^{p}\,dx \biggr)^{1-\frac{1}{p}}+C\varepsilon \biggl( \int _{ \varOmega _{\varepsilon }} \bigl\vert \triangledown ^{2} u_{0} \bigr\vert ^{2} \vert \triangledown u _{0} \vert ^{p-2}\,dx \biggr)^{\frac{1}{2}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{p}\,dx \biggr)^{\frac{p-2}{2p}} \\ \leq& C\varepsilon ^{\frac{1}{2}(1-\frac{p}{q})} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}}. \end{aligned}$$
Next, we proceed to deal with \(I_{32}\):
$$\begin{aligned} \vert I_{32} \vert \leq& C \int _{\varOmega } \bigl\vert \triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \bigl( \bigl\vert T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert + \bigl\vert \varepsilon T_{ \varepsilon }( \triangledown \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \bigr)^{p-2} \vert \triangledown \varphi \vert \,dx \\ &{}+C \int _{\varOmega } \bigl\vert \triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \bigl\vert \varepsilon \chi T_{\varepsilon }\bigl( \eta _{\varepsilon }\triangledown ^{2} u_{0}\bigr) \bigr\vert ^{p-2} \vert \triangledown \varphi \vert \,dx \\ \leq& C \int _{\varOmega \setminus \varOmega _{\varepsilon }} \vert \triangledown u _{0} \vert ^{p-1} \vert \triangledown \varphi \vert \,dx+C \int _{\varOmega _{\varepsilon }} \bigl\vert \triangledown u_{0}-T_{\varepsilon }( \eta _{\varepsilon }\triangledown u_{0}) \bigr\vert \vert \triangledown u_{0} \vert ^{p-2} \vert \triangledown \varphi \vert \,dx \\ &{}+C \int _{\varOmega _{\varepsilon }} \vert \triangledown u_{0} \vert \cdot \bigl\vert \varepsilon \chi T_{\varepsilon }\bigl(\eta _{\varepsilon } \triangledown ^{2} u_{0}\bigr) \bigr\vert ^{p-2} \vert \triangledown \varphi \vert \,dx \\ \leq& C\varepsilon ^{\frac{1}{2}(1-\frac{p}{q})} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}}+ \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{1}{q}} \biggl( \int _{ \varOmega _{\varepsilon }} \bigl\vert \varepsilon \chi T_{\varepsilon } \bigl( \eta _{\varepsilon }\triangledown ^{2} u_{0} \bigr) \bigr\vert ^{p}\,dx \biggr)^{ 1- \frac{2}{p}} \\ \leq& C\varepsilon ^{\frac{1}{2}(1-\frac{p}{q})} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}}+ \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{1}{q}} \varepsilon ^{(1-\frac{p}{q})(1-\frac{2}{p})} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-2}{q}} \\ \leq& C\varepsilon ^{\frac{1}{2}(1-\frac{p}{q})} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}}, \end{aligned}$$
for some \(q>p\geq 4\), where we have used Proposition 2.3 and Proposition 2.4.
Last, it remains to handle \(I_{33}\):
$$\begin{aligned} \vert I_{33} \vert \leq& C\varepsilon \int _{\varOmega _{\varepsilon }} \bigl\vert T_{\varepsilon }\bigl( \triangledown ^{2} u_{0}\bigr)+ T_{\varepsilon }( \triangledown u_{0}) \bigr\vert \cdot \vert \triangledown u_{0} \vert ^{p-2} \vert \triangledown \varphi \vert \,dx \\ &{}+C\varepsilon ^{p-1} \int _{\varOmega _{\varepsilon }} \bigl\vert T_{\varepsilon }\bigl( \triangledown ^{2} u_{0}\bigr)+ T_{\varepsilon }( \triangledown u_{0}) \bigr\vert ^{p-1} \vert \triangledown \varphi \vert \,dx \\ \leq& C\varepsilon \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{p}\,dx \biggr) ^{1-\frac{2}{p}} \biggl( \int _{\varOmega _{\varepsilon }} \bigl\vert T_{\varepsilon }\bigl( \triangledown ^{2} u_{0}\bigr)+ T_{\varepsilon }( \triangledown u_{0}) \bigr\vert ^{p}\,dx \biggr) ^{\frac{1}{p}} \\ &{}+C\varepsilon ^{p-1} \biggl( \int _{\varOmega _{\varepsilon }}\bigl( \bigl\vert T_{\varepsilon }\bigl( \triangledown ^{2} u_{0}\bigr) \bigr\vert + \bigl\vert T_{\varepsilon }(\triangledown u_{0}) \bigr\vert \bigr)^{p}\,dx \biggr) ^{1-\frac{1}{p}} \\ \leq& C\varepsilon \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr) ^{\frac{p-2}{q}} \biggl[ \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr) ^{\frac{1}{q}}+ \biggl( \int _{\varOmega _{\varepsilon }} \bigl\vert \triangledown ^{2} u _{0} \bigr\vert ^{p}\,dx \biggr)^{\frac{1}{p}} \biggr] \\ &{}+C\varepsilon ^{p-1} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr) ^{\frac{p-1}{q}}+C\varepsilon ^{p-1} \biggl( \int _{\varOmega _{\varepsilon }} \bigl\vert \triangledown ^{2} u_{0} \bigr\vert ^{p}\,dx \biggr)^{1-\frac{1}{p}} \\ \leq& C\bigl(\varepsilon +\varepsilon ^{\frac{1}{p}-\frac{1}{q}}+ \varepsilon ^{p-1}+\varepsilon ^{(p-1)(\frac{1}{p}-\frac{1}{q})}\bigr) \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}} \\ \leq& C\varepsilon ^{\frac{1}{p}-\frac{1}{q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}}, \end{aligned}$$
for some \(q>p\geq 4\).
This, together with (3.1)–(3.7), shows that, for some \(q>p\geq 4\),
$$\begin{aligned}& \biggl\vert \int _{\varOmega } \bigl[A(x/\varepsilon ) \vert \triangledown u_{\varepsilon } \vert ^{p-2} \triangledown u_{\varepsilon } -A(x/\varepsilon ) \vert \triangledown v_{ \varepsilon } \vert ^{p-2}\triangledown v_{\varepsilon } \bigr] \cdot \triangledown \varphi \,dx \biggr\vert \\& \quad \leq C \varepsilon ^{\frac{1}{p}-\frac{1}{q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}}. \end{aligned}$$
Then take \(\varphi =v_{\varepsilon }=u_{\varepsilon }-u_{0}-\varepsilon \chi T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0})\), which gives
$$\begin{aligned}& \bigl\Vert \triangledown \bigl[u_{\varepsilon }-u_{0}- \varepsilon \chi T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0}) \bigr] \bigr\Vert ^{p-1}_{L^{p}(\varOmega )} \\& \quad \leq C \biggl\vert \int _{\varOmega } \bigl[A(x/\varepsilon ) \vert \triangledown u_{\varepsilon } \vert ^{p-2} \triangledown u_{\varepsilon } -A(x/\varepsilon ) \vert \triangledown v_{ \varepsilon } \vert ^{p-2}\triangledown v_{\varepsilon } \bigr]\cdot \triangledown \varphi \,dx \biggr\vert \\& \quad \leq C\varepsilon ^{\frac{1}{p}-\frac{1}{q}} \biggl( \int _{\varOmega } \vert \triangledown u_{0} \vert ^{q}\,dx \biggr)^{\frac{p-1}{q}}. \end{aligned}$$
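The first inequality in this display relies on the standard monotonicity of p-Laplace type operators. As a reminder, here is a minimal scalar form (valid for \(p\geq 2\) with a constant \(c>0\) depending only on \(p\); under the ellipticity assumptions on \(A\) imposed earlier in the paper, the same lower bound carries over to the integrand above):

$$\bigl( \vert \xi \vert ^{p-2}\xi - \vert \zeta \vert ^{p-2}\zeta \bigr)\cdot (\xi -\zeta )\geq c \vert \xi -\zeta \vert ^{p}, \quad \xi ,\zeta \in \mathbb{R}^{n}. $$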
Since \(T_{\varepsilon }(\eta _{\varepsilon }\triangledown u_{0})=0\) in \(\varOmega \setminus \varOmega _{\varepsilon }\), combining this estimate with the Poincaré inequality completes the proof of Theorem 1.
It follows from Theorem 1 and Proposition 2.3, together with Minkowski's inequality, that
$$\begin{aligned} \Vert u_{\varepsilon }-u_{0} \Vert _{L^{p}(\varOmega )} \leq& C \varepsilon \bigl\Vert \chi T_{\varepsilon }(\eta _{\varepsilon } \triangledown u_{0}) \bigr\Vert _{L^{p}(\varOmega )}+C \varepsilon ^{\frac{1}{p}-\frac{1}{q}} \Vert \triangledown u_{0} \Vert _{L^{q}(\varOmega )} \\ \leq& C\varepsilon \Vert \triangledown u_{0} \Vert _{L^{p}( \varOmega )}+C\varepsilon ^{\frac{1}{p}-\frac{1}{q}} \Vert \triangledown u_{0} \Vert _{L^{q}(\varOmega )} \\ \leq& C\varepsilon ^{\frac{1}{p}-\frac{1}{q}} \Vert \triangledown u _{0} \Vert _{L^{q}(\varOmega )}, \end{aligned}$$
with \(q>p\geq 4\).
This completes the proof of Theorem 2.
The author would like to thank the reviewers for their valuable comments and helpful suggestions to improve the quality of this paper. The part of this work was done while the author was visiting school of mathematics and applied statistics, University of Wollongong, Australia.
This work has been supported by the Natural Science Foundation of China (No. 11626239), the China Scholarship Council (No. 201708410483), as well as the Foundation of Education Department of Henan Province (No. 18A110037).
College of Science, Zhongyuan University of Technology, Zhengzhou, China
Jie Zhao & Juan Wang
All authors read and approved the submitted manuscript.
Correspondence to Jie Zhao.
Zhao, J., Wang, J.: Convergence rates in homogenization of p-Laplace equations. Bound. Value Probl. 2019, 143 (2019). doi:10.1186/s13661-019-1258-1
MSC: 35J15
Keywords: Convergence rates; p-Laplace equations
DTI-HeNE: a novel method for drug-target interaction prediction based on heterogeneous network embedding
Methodology article
Yang Yue & Shan He (ORCID: orcid.org/0000-0003-1694-1465)
BMC Bioinformatics volume 22, Article number: 418 (2021)
Prediction of drug-target interactions (DTIs) is a critical step in the drug repurposing process, which can effectively reduce the subsequent workload of experimentally verifying the properties of candidate drugs. In recent studies, many machine-learning-based methods have been proposed to discover unknown interactions between drugs and protein targets. A recent trend is to use graph-based machine learning, e.g., graph embedding, to extract features from drug-target networks and then predict new drug-target interactions. However, most graph embedding methods are not specifically designed for DTI prediction; thus, it is difficult for these methods to fully utilize the heterogeneous information of drugs and targets (e.g., the respective vertex features of drugs and targets and the path-based interactive features between drugs and targets).
We propose a DTI prediction method DTI-HeNE (DTI based on Heterogeneous Network Embedding), which is specifically designed to cope with the bipartite DTI relations for generating high-quality embeddings of drug-target pairs. This method splits a heterogeneous DTI network into a bipartite DTI network, multiple drug homogeneous networks and target homogeneous networks, and extracts features from these sub-networks separately to better utilize the characteristics of bipartite DTI relations as well as the auxiliary similarity information related to drugs and targets. The features extracted from each sub-network are integrated using pathway information between these sub-networks to acquire new features, i.e., embedding vectors of drug-target pairs. Finally, these features are fed into a random forest (RF) model to predict novel DTIs.
Our experimental results show that the proposed DTI network embedding method can learn higher-quality features of heterogeneous drug-target interaction networks for the discovery of novel DTIs.
Drug repurposing or repositioning refers to deploying old drugs for new purposes, which holds great promise in the future. That is because developing a new drug is costly and time-consuming [1]. By contrast, drug repurposing, i.e., finding the new use of existing drugs approved by the Food and Drug Administration (FDA) could save time and experimental funds for clinical trials. DTIs prediction based on computational techniques plays an important role in drug repurposing because it requires lower cost and less time, compared with biochemical experimental methods [2,3,4]. With an increasing number of public databases [5], different computational strategies can be more effectively applied for the DTIs prediction. There are two varieties of traditional computational methods: the ligand-based method [6] and the structure-based or docking-based method [7], which can provide relatively accurate DTI predictions. However, the former one has the limitation on predictive performance when few binding ligands are provided for a certain target, while the latter will not be feasible when the three-dimensional (3D) structure of the target is not available [2].
In recent years, machine-learning-based methods have been widely used for the DTIs prediction because they can search more potential targets of existing drugs in the DTIs space. The main assumption of most of these methods is that similar drugs may share similar targets [8]. Based on this assumption, kernel-based methods have been proposed, which essentially map various drug-drug and target-target similarity matrices (i.e., kernels) to DTI labels [9, 10].
A recent trend is graph-based methods: compared with kernel-based methods, they can better describe the interactive relations between drugs and targets through vertices and edges. These methods extract topological features from drug-target interaction networks and process these features for DTI predictions [11]. However, many existing methods cannot account for the distinctive characteristics carried by different types of entities and the complex relations between these entities. Heterogeneous information networks are powerful tools to model the semantic information of such complex data through varieties of vertices and edges [12]. It is natural to use heterogeneous networks to represent the characteristics of drug and target vertices as well as the diverse relations between drugs and targets. After constructing heterogeneous DTI networks, we need network embedding algorithms to extract features, i.e., low-dimensional vector representations of the networks, for downstream machine learning tasks, e.g., link prediction [13, 14].
However, while many homogeneous network embedding algorithms exist and have been applied to DTI predictions, heterogeneous network embedding remains a challenging task due to the various vertex types and the diversity of relations between vertices. Recently, Chen et al. [15] proposed an idea to cope with the heterogeneous network embedding: a heterogeneous network can be decomposed into several sub-networks, and each of them is processed separately. Similarly, a heterogeneous DTI network can be divided into a bipartite DTI network and other auxiliary networks which contain similarity information between the same kind of nodes. Luo et al. [2] proposed an approach named DTINet, which could learn embeddings by the network diffusion algorithm and inductive matrix completion strategy. Based on a heterogeneous network, Thafar MA et al. [16] utilized node2vec [17], graph mining techniques, and drug and target similarities generated by heuristic algorithms for DTI predictions. Peng et al. [18] introduced a random walk with restart (RWR) model, a denoising autoencoder (DAE), and a Convolutional Neural Network (CNN)-based model to extract low-dimensional vectors from heterogeneous networks, and they also used end-to-end graph convolutional networks (GCN) to do the similar work [19].
Although the methods mentioned above achieved promising results, there are still some issues. More specifically, current methods do not explicitly consider the bipartite nature of the drug-target interactions (containing all known DTIs) in a heterogeneous DTI network. Instead, these bipartite drug-target interactions are treated equally with other auxiliary information such as drug-drug and target-target similarity information. Such an indiscriminate treatment of heterogeneous relationships might lead to the suboptimal set of features and will ultimately affect the accuracy of the DTIs prediction.
To address this issue, we propose a novel heterogeneous network embedding method called DTI-HeNE that specially considers the bipartite drug-target relations. Similar to Chen et al. [15], we first decompose a heterogeneous DTI network into a bipartite DTI network and homogeneous drug-drug and target-target similarity networks. The proposed method is a multi-staged embedding method with good interpretability which then employs Bipartite Network Embedding (BiNE) [20] to specifically learn the DTI embeddings from the bipartite DTI network. Next, a path-based method is developed to combine the bipartite DTI embeddings with the homogeneous networks according to the topological information of pathways between sub-networks for creating new embedding representations of all drug-target pairs. Finally, we acquire novel DTIs by running a random forest (RF) model to learn these integrated representations.
Problem formulation
In our study, the DTIs prediction can be formulated as a transductive-learning binary link-prediction task (i.e., discovering novel DTIs within the DTIs space consisting of the fixed drugs and targets in the given dataset; that is, the involved entities do not need to be extended) based on a heterogeneous network, which is divided into a bipartite DTI network as well as drug and target homogeneous networks. More specifically, let \({G}_{b}=(\mathrm{D},\mathrm{ T},\mathrm{ E})\) be a bipartite DTI network, where \(\mathrm{D}=\{{d}^{1}, {d}^{2},\dots ,{d}^{m}\}\) (m refers to the number of drugs in the dataset) and \(\mathrm{T}=\{{t}^{1}, {t}^{2},\dots , {t}^{n}\}\) (n refers to the number of targets in the dataset) denote the sets of drug and target protein nodes, respectively. \(\mathrm{E}\subset \mathrm{D}\times \mathrm{T}\) defines the known edges (interactions) between drugs and targets, and all known edges carry the weight 1. Meanwhile, the homogeneous drug and target networks are defined as the \(m\times m\) matrix (\({G}_{d}\)) and the \(n\times n\) matrix (\({G}_{t}\)), respectively, in which every element indicates the degree of similarity between two drugs or two targets. The higher the value of an element, the higher the similarity between the two corresponding entities. In addition, there is an \(m\times n\) matrix (Y) storing the binary DTI predictions: \({y}^{ij}=1\) indicates that the \({d}^{i}-{t}^{j}\) pair is predicted to have a potential interaction, and \({y}^{ij}=0\) otherwise.
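To fix ideas, these objects can be represented by plain NumPy arrays. The following minimal sketch (all variable names are ours, and the sizes are borrowed from the NR subset described later) only illustrates the shapes involved:

import numpy as np

m, n = 54, 26                    # numbers of drugs and targets (e.g., the NR subset)
G_b = np.zeros((m, n))           # bipartite DTI matrix: weight 1 for every known interaction
G_b[0, 3] = 1.0                  # a hypothetical known pair d^1 - t^4
G_d = np.random.rand(m, m)       # m x m drug-drug similarity matrix (placeholder values)
G_t = np.random.rand(n, n)       # n x n target-target similarity matrix (placeholder values)
Y = np.zeros((m, n), dtype=int)  # binary predictions: y^{ij} = 1 for a predicted interaction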
Furthermore, it is precisely because of this definition of the prediction task (i.e., the involved nodes are fixed) that a transductive-learning-like method can be utilized. Another contributing reason is that directly setting the weight of unknown interactions to 0 may not produce satisfactory performance on datasets with a highly imbalanced ratio between known and unknown samples (e.g., DTI datasets) [21]. Transductive learning allows a method to observe all the data beforehand, including the training and test sets, and potentially exploit structural information in their distribution [22] (so that it can better use the additional information of unknown samples when known interactions are sparse). Compared with inductive learning, which learns a general rule for a task from the information in a dataset, transductive learning is less ambitious and finds a specific solution that is optimal only for the current dataset (i.e., achieving the best performance for the fixed drugs and targets in our case study) [23, 24]; the transductive setup has already been adopted by some DTI prediction approaches [25].
Figure 1 presents the four main steps of the proposed method in our study:
Obtaining drug and target embeddings: a bipartite DTI network is established based on every known \({d}^{i}-{t}^{j}\) pair, and then the BiNE algorithm is performed on the bipartite network to capture the prior high-order similarity information of explicit and implicit transition relationships of all entities in the dataset.
Selection and fusion of homogeneous networks: a heuristic algorithm is applied to screen and integrate multiple drug and target homogeneous networks.
Path-based information integration: in this step, the path-based heterogeneous information is added as the auxiliary information to generate the embedding of every \({d}^{i}-{t}^{j}\) pair.
Novel DTI predictions: a RF classifier is trained to learn the integrated embedding representations for predicting unknown DTIs.
The flowchart of our method. The method integrates three varieties of networks to acquire embeddings of drug-target pairs. The original representations of drug and target nodes are produced by BiNE, and then these representations are augmented using the drug and target homogeneous matrices as well as path-based topological features for predicting DTIs
Learning bipartite DTI embedding
The challenge of learning a bipartite network embedding is how to learn the explicit bipartite relationships between different types of vertices (e.g., DTIs) and the implicit transition relationships between the same types of vertices (e.g., drugs and targets) simultaneously. BiNE addresses this challenge by using a three-part joint optimization framework that assigns each type of relationship a dedicated objective function and an adjustable weight, which produces better vertex embeddings. Specifically, the first part of the framework models the explicit relationships. In order to preserve the information of the observed edges between two different types of nodes (\({u}_{i}\) and \({v}_{j}\)), the KL-divergence is chosen to measure the difference between the joint probability \(P(i,j)\) of the vertices \({u}_{i}\) and \({v}_{j}\) and the joint probability \(\widehat{P}(i,j)\) of their embedding vectors (\(\overrightarrow{{u}_{i}}\) and \(\overrightarrow{{v}_{j}}\)). The objective function, which aims at minimizing the difference between \(P(i,j)\) and \(\widehat{P}(i,j)\), can be defined as follows:
$$\mathrm{minimize }{O}^{1}=KL(P||\widehat{P})=\sum_{{e}_{ij}\in E}P(i,j)\mathrm{log}(\frac{P(i,j)}{\widehat{P}(i,j)})$$
For the sake of explicitly modeling the unobserved but transitive links (implicit transition relationships) between the same type of nodes (i.e., directly modeling that similar drugs/targets could interact with similar targets/drugs in our case study), firstly, BiNE utilizes an idea named Co-HITS [26] to generate two homogeneous networks (matrices) which contain the 2nd-order proximity between the same type of nodes, and then the nodes having at least one weight greater than 0 are selected in the generated matrices. Then the truncated random walks, which are designed to better capture the frequency distribution of nodes, are performed on these two homogeneous networks consisted of selected nodes respectively, to convert the networks into two corpora of vertex sequences. More specifically, during our DTIs prediction process, there are two different types of homogeneous networks being generated. The first type is obtained in the second step of the workflow shown in Fig. 1, which contains the chemical and physical similarity information of drugs and targets and is more widely used by other DTIs prediction methods [27]. For the second type, it is calculated by Co-HITS mentioned above to model the implicit transition relationships, which has the same size as the first type (i.e., drug homogeneous networks: \(m\times m\) matrix, target homogeneous networks: \(n\times n\) matrix), and every element (weight) denotes the implicit transition probability between two drugs/targets. That is, given a \(m\times n\) bipartite DTI matrix \({G}_{b}\), the drug homogeneous network can be represented by a \(m\times m\) matrix \({G}_{b}{G}_{b}^{T}\), and the target homogeneous network is defined as \({G}_{b}^{T}{G}_{b}\), which is a \(n\times n\) matrix. In our task, taking the drug homogeneous matrix as an example (Fig. 2), the entry \({w}_{ij}^{d}\) with a higher value in this matrix can be interpreted as that the \({drug}_{i}\) and \({drug}_{j}\) would share more similar targets, and the similar principle can be applied to the target homogeneous matrix. Such a characteristic is in line with the known assumption – "guilt-by-association" [2]. Thus, the second type of homogeneous networks can carry more interactive information between drugs and between targets, which is helpful to improve the accuracy of the DTIs prediction.
An illustration of the drug homogeneous network generated in BiNE. Assuming that there are only three drugs and two targets in the whole bipartite DTI matrix \({G}_{b}\). When the \(3\times 3\) drug homogeneous matrix is made by multiplying \({G}_{b}\) (the \(3\times 2\) matrix) by \({G}_{b}^{T}\) (the \(2\times 3\) matrix), we can find that, in this \(3\times 3\) matrix, the value between Drug1 and Drug2 is 1, while the value between Drug1 and Drug3 is 0, and these values correspond to the DTI relations in \({G}_{b}\). Specifically, Drug1 and Drug2 share one target (Target2), Drug1 and Drug3 do not share any target, correspondingly, the value of the former drug pair in the drug homogeneous matrix is higher than that of the latter one
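This construction can be reproduced in a few lines of NumPy. The sketch below is our own illustration of the \({G}_{b}{G}_{b}^{T}\) and \({G}_{b}^{T}{G}_{b}\) products (the concrete entries of G_b are one choice consistent with Fig. 2, not taken from BiNE's code):

import numpy as np

# Bipartite DTI matrix: rows = 3 drugs, columns = 2 targets (cf. Fig. 2)
G_b = np.array([[0., 1.],    # Drug1 interacts with Target2
                [1., 1.],    # Drug2 interacts with Target1 and Target2
                [1., 0.]])   # Drug3 interacts with Target1

drug_homog = G_b @ G_b.T     # 3 x 3: entry (i, j) counts the targets shared by drugs i and j
target_homog = G_b.T @ G_b   # 2 x 2: entry (i, j) counts the drugs shared by targets i and j

assert drug_homog[0, 1] == 1  # Drug1 and Drug2 share one target (Target2)
assert drug_homog[0, 2] == 0  # Drug1 and Drug3 share no target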
Next, based on the corpora created by the truncated random walks, the Skip-gram model [28] is used to learn the embeddings of the two types of vertices in the bipartite network (i.e., drug and target embeddings), which makes the embeddings capture more high-order proximity information. Essentially, the purpose of the Skip-gram model is to assign similar embeddings to vertices that co-occur more frequently in the same context of a sequence in the corpora. Intuitively, if the vertices in a corpus sequence are more similar to each other, they are more likely to co-occur in the same context, so they can be allocated more similar embeddings. Thus, we further add a relatively high restart probability (e.g., 0.7) to every step of the truncated random walks. Taking the embedding process of drug nodes as an example: for a truncated random walk starting at a certain drug node, when the next node is to be randomly selected from the set of drug nodes connected to the current drug (two nodes are connected if the value between them in the drug homogeneous network is non-zero), a number between 0 and 1 is drawn at random. If this number is less than the restart probability, the next node becomes the starting node instead. In this way, the drug nodes selected in the current corpus sequence stay closer to the starting node, which can bring higher-quality embeddings for the DTIs prediction; a simplified sketch of this walk-with-restart procedure is given below.
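The sketch assumes the neighbours of each node are pre-collected in a dictionary; the function name, the maximum walk length, and the default restart value of 0.7 are our own choices for illustration, and BiNE's actual walks are additionally truncated probabilistically:

import random

def truncated_walk(start, neighbors, max_len=32, restart_p=0.7):
    # neighbors[v] lists the nodes with a non-zero weight to v in the
    # homogeneous (Co-HITS) network; a restart sends the walk back to
    # `start`, so the sampled contexts stay close to the starting node.
    walk = [start]
    current = start
    while len(walk) < max_len:
        if not neighbors.get(current):
            break
        if random.random() < restart_p:
            current = start                       # restart: jump back to the starting node
        else:
            current = random.choice(neighbors[current])
        walk.append(current)
    return walk

Running such walks from every drug node of the drug homogeneous network (and analogously for targets) yields the two corpora \({D}^{U}\) and \({D}^{V}\) on which the objectives below are trained.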
Therefore, to learn the implicit transition relationships, two objective functions, given in (2)–(3), are needed to maximize the conditional probabilities for high-order proximities on the two corpora, respectively, where \(S\) denotes a vertex sequence containing only \({u}_{i}\) nodes or only \({v}_{j}\) nodes, \({D}^{U}\) and \({D}^{V}\) correspond to the two generated corpora, and \({C}_{S}({u}_{i})\) and \({C}_{S}({v}_{j})\) represent the context vertices of \({u}_{i}\) and \({v}_{j}\) in the sequence \(S\), respectively; the context vertices are the vertices (\(\mathrm{ws}\) in total) before and after \({u}_{i}\) or \({v}_{j}\) in a sequence \(S\). In addition, \(P({u}_{c}|{u}_{i})\) denotes how likely \({u}_{c}\) is to be found in the contexts of \({u}_{i}\), and the analogous meaning applies to \(P({v}_{c}|{v}_{j})\).
$$\mathrm{maximize }{O}^{2}=\prod_{{u}_{i}\in S\wedge S\in {D}^{U}}\prod_{{u}_{c}\in {C}_{S}({u}_{i})}P({u}_{c}|{u}_{i})$$
$$\mathrm{maximize }{O}^{3}=\prod_{{v}_{j}\in S\wedge S\in {D}^{V}}\prod_{{v}_{c}\in {C}_{S}({v}_{j})}P({v}_{c}|{v}_{j})$$
Finally, the three objective functions mentioned above can be integrated into a joint framework to capture the explicit and implicit transition relationships simultaneously. The framework is optimized by the Stochastic Gradient Ascent (SGA) algorithm and can be presented as Eq. (4), where \(\mathrm{\alpha }\), \(\beta \) and \(\gamma \) are adjustable weights controlling the relations between the three components.
$$\mathrm{maximize L}=\mathrm{\alpha log}{O}^{2}+\beta \mathrm{log}{O}^{3}-\gamma {O}^{1}$$
When optimizing Eq. (4) using SGA, in order to save calculation time, negative sampling [29], which approximates the costly denominator of the softmax function by sampling several negative instances, is adopted to learn the embedding vectors. As a result, the whole optimization process in one gradient step is as follows:
Firstly the \(-\gamma {O}^{1}\) part is maximized to update embeddings \(\overrightarrow{{u}_{i}}\) and \(\overrightarrow{{v}_{j}}\) as the Eqs. (5)-(6):
$$\overrightarrow{{u}_{i}}=\overrightarrow{{u}_{i}}+\lambda \{\gamma {w}_{ij}[1-\sigma ({\overrightarrow{{u}_{i}}}^{T}\overrightarrow{{v}_{j}})]\bullet \overrightarrow{{v}_{j}}\}$$
$$\overrightarrow{{v}_{j}}=\overrightarrow{{v}_{j}}+\lambda \{\gamma {w}_{ij}[1-\sigma ({\overrightarrow{{u}_{i}}}^{T}\overrightarrow{{v}_{j}})]\bullet \overrightarrow{{u}_{i}}\}$$
where \(\lambda\) is the learning rate and \({w}_{ij}\) is the weight of edge between \({u}_{i}\) and \({v}_{j}\) (in our study the weight is 1 if there is an edge between \({u}_{i}\) and \({v}_{j}\)). Then, the \(\mathrm{\alpha log}{O}^{2}\) and \(\beta \mathrm{log}{O}^{3}\) parts are maximized separately for further updating the embedding vectors as follows:
$$\overrightarrow{{u}_{i}}=\overrightarrow{{u}_{i}}+\lambda \{\sum_{z\in \{{u}_{c}\}\cup {N}_{S}^{ns}({u}_{i})}\alpha [I\left(z,{u}_{i}\right)-\sigma ({\overrightarrow{{u}_{i}}}^{T}\overrightarrow{{\theta }_{z}})]\bullet \overrightarrow{{\theta }_{z}}\}$$
$$\overrightarrow{{v}_{j}}=\overrightarrow{{v}_{j}}+\lambda \{\sum_{z\in \{{v}_{c}\}\cup {N}_{S}^{ns}({v}_{j})}\beta [I\left(z,{v}_{j}\right)-\sigma ({\overrightarrow{{v}_{j}}}^{T}\overrightarrow{{\vartheta }_{z}})]\bullet \overrightarrow{{\vartheta }_{z}}\}$$
where \({u}_{c}\) and \({v}_{c}\) are the context vertices of \({u}_{i}\) and \({v}_{j}\) separately, \({N}_{S}^{ns}({u}_{i})\) denotes the negative samples (the number is \(\mathrm{ns}\) in total) of \({u}_{i}\) in the sequence \(S\epsilon {D}^{U}\), and the similar meaning can be applied to \({N}_{S}^{ns}({v}_{j})\). \(I\left(z,{u}_{i}\right)\) and \(I\left(z,{v}_{j}\right)\) are indicator functions determining whether vertex \(z\) is the context vertex of \({u}_{i}\) and \({v}_{j}\) respectively (is: 1, not: 0). Besides, \(\sigma\) is the sigmoid function \(1/(1+{e}^{-x})\), and \(\overrightarrow{{\theta }_{z}}\) and \(\overrightarrow{{\vartheta }_{z}}\) are the embeddings of the context vertex of \({u}_{i}\) and \({v}_{j}\) respectively.
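As an illustration, the explicit-relation part of one gradient step (Eqs. (5)–(6)) can be written as follows. This is a minimal NumPy sketch under our own names; the implicit-relation updates (7)–(8) follow the same pattern once the context and negative samples have been drawn:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def explicit_update(u_i, v_j, w_ij, lam=0.01, gamma=1.0):
    # One SGA step on the -gamma * O^1 term for an observed edge (u_i, v_j);
    # both vectors are updated from their old values, as in Eqs. (5)-(6).
    g = gamma * w_ij * (1.0 - sigmoid(u_i @ v_j))  # shared scalar factor
    u_new = u_i + lam * g * v_j                    # Eq. (5)
    v_new = v_j + lam * g * u_i                    # Eq. (6)
    return u_new, v_new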
Furthermore, BiNE is an embedding method that cannot learn well the embeddings of completely isolated nodes, which the truncated random walk cannot reach. However, under our transductive-learning setup, the use of BiNE can be understood from another perspective. More specifically, many methods adopt multiple drug and target similarities (as part of the input features used to generate homogeneous networks), which are pre-calculated over all nodes in the dataset based on certain properties of drugs and targets. By analogy, we can treat BiNE as a similarity generator that takes the drug and target Co-HITS matrices (calculated from the whole bipartite DTI network) as input to pre-calculate another type of drug and target similarity. In this case, the form of this similarity is the embedding score, and the property on which it is based is high-order proximity; moreover, every node in the whole bipartite DTI network of the used datasets has at least one edge, so the truncated random walk can produce every node's (high-order proximity) similarity in advance (i.e., no node is actually isolated during the high-order similarity production).
Composite homogeneous network generation
As for the second step of our workflow, we adopt a heuristic method to screen and combine different homogeneous networks (in matrix form) that contain different drug-drug and target-target similarity information [27]. This method acquires an informative and robust composite homogeneous network by removing redundant information and integrating the retained features. Specifically, we first calculate the entropy of each homogeneous matrix to determine how much information these matrices contain. Second, we delete the homogeneous matrices whose entropy is higher than \(c_{1}\log (k)\), where \(c_{1}\) is a threshold controlling how much information each matrix may contain (heuristically set to 0.7) and \(\log (k)\) denotes the highest entropy among all matrices.
Next, we flatten each matrix and calculate the Euclidean distance (\(d\)) between homogeneous matrices. Then, starting from the matrix with the lowest entropy, we further remove, based on the similarity index \({E}_{s}\) (shown in Eq. (9)), the other matrices whose \({E}_{s}\) with the current matrix is higher than \(c_{2}\) (heuristically set to 0.6); this process is repeated until every matrix has been removed or retained. Finally, the similarity network fusion (SNF) [30] algorithm is adopted to non-linearly fuse the remaining matrices into a composite matrix that carries the necessary information from the different similarity measures.
$${E}_{s}=\frac{1}{1+d}$$
As a result, a drug composite matrix and a target composite matrix are obtained from the multiple drug and target homogeneous matrices, respectively. These two matrices, together with the other matrices mentioned in this section, all belong to the first type of homogeneous network mentioned in the "Learning bipartite DTI embedding" section, whose sizes are \(m\times m\) (for drugs) and \(n\times n\) (for targets), respectively.
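The screening loop can be condensed into a short sketch. This is our own reading of the heuristic: the entropy is computed over a histogram of the matrix weights (one plausible discretization, since the paper does not specify one), and the final SNF fusion is delegated to an external implementation:

import numpy as np

def entropy(mat, bins=50):
    # Shannon entropy of the weight distribution of one similarity matrix
    hist, _ = np.histogram(mat.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def screen(mats, c1=0.7, c2=0.6):
    ents = [entropy(M) for M in mats]
    cap = c1 * max(ents)                   # threshold c1 * log(k), with the max entropy as log(k)
    kept = [M for M, e in zip(mats, ents) if e <= cap]
    kept.sort(key=entropy)                 # start from the matrix with the lowest entropy
    result = []
    for M in kept:
        d = min((np.linalg.norm(M - R) for R in result), default=np.inf)
        if 1.0 / (1.0 + d) <= c2:          # similarity index E_s = 1 / (1 + d), Eq. (9)
            result.append(M)
    return result                          # the retained matrices are then fused with SNF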
Generating new embedding vectors of drug-target pairs
To tackle the problem that some recent embedding-based methods cannot inject the pathway information about drug-target interactions into the embeddings of drug-target pairs (e.g., they simply concatenate the generated drug and target embeddings as the final pair embeddings), we provide a method that draws on path-based information (similar drugs interacting with the same targets, and similar targets sharing the same drugs) to acquire a new embedding for every drug-target pair (i.e., a reconstruction of the DTI relations (network) included in the whole dataset). The intuition behind this idea is that, although the separate drug and target embeddings produced by embedding algorithms carry certain DTI (high-order proximity) information through the learning process, their characterization of DTIs remains insufficient until heterogeneous information (e.g., path-based knowledge) is added. The main calculation steps are illustrated in Fig. 3.
The illustration of the embedding generation process of the \({d}^{i}-{t}^{j}\) pair. Characteristics from three types of sub-networks will be combined to create a new embedding representation. This process will be repeated many times until embeddings of all drug-target pairs in the DTIs space are produced
Specifically, taking the embedding generation process of a \({d}^{i}-{t}^{j}\) pair as an example, first, we obtain the \({d}^{i}\) and \({t}^{j}\) embeddings (\(\overrightarrow{{d}^{i}}\) and \(\overrightarrow{{t}^{j}}\)) produced by BiNE, the bipartite DTI matrix \({G}_{b}\), and the fused drug and target homogeneous matrices mentioned in the "Composite homogeneous network generation" section. Second, we acquire the five nearest drugs of \({d}^{i}\) according to the weights in the drug homogeneous matrix. That is, we find the row corresponding to \({d}^{i}\) in the drug homogeneous matrix, sort the values in that row from large to small, and select the drugs corresponding to the five largest values. In the same way, the five targets with the highest similarity to \({t}^{j}\) can be found.
Third, multiply the embedding vector of \({d}^{i}\) by corresponding weights (i.e., similarities) of selected five nearest drugs in the drug homogeneous matrix respectively, then sum the obtained five products up to acquire a new feature \({d}^{sim\_i}\); the same rule can be applied to the embedding vector of \({t}^{j}\) to acquire a new feature \({t}^{sim\_j}\) (Eqs. (10)-(11)).
$${d}^{sim\_i}=\sum_{{d}^{z}\in {D}^{near}}{w}_{d}^{z}\overrightarrow{{d}^{i}}$$
$${t}^{sim\_j}=\sum_{{t}^{z}\in {T}^{near}}{w}_{t}^{z}\overrightarrow{{t}^{j}}$$
where \({D}^{near}\) and \({T}^{near}\) denote the set of the selected nearest drugs of \({d}^{i}\) and the nearest targets of \({t}^{j}\) separately, \({w}_{d}^{z}\) is the weight between \({d}^{z}\) and \({d}^{i}\) in the drug homogeneous matrix, and the similar meaning can be applied to \({w}_{t}^{z}\). The main purpose in this step is integrating drug-drug and target-target homogeneous matrices (similarity information) into the embedding vectors \({d}^{i}\) and \({t}^{j}\), respectively. In the fourth step, multiply the embedding vector \({t}^{j}\) by weights in \({G}_{b}\) between selected five nearest drugs and \({t}^{j}\) respectively, and then sum the five generated products up for acquiring a new feature \({d}^{path\_i}\). At the same time, we multiply the embedding vector \({d}^{i}\) by weights in \({G}_{b}\) between five selected nearest targets and \({d}^{i}\) respectively, and then sum the obtained products up to create a new feature \({t}^{path\_j}\) (Eqs. (12)-(13)).
$${d}^{path\_i}=\sum_{{d}^{z}\in {D}^{near}}{w}_{{t}^{j}}^{z}\overrightarrow{{t}^{j}}$$
$${t}^{path\_j}=\sum_{{t}^{z}\in {T}^{near}}{w}_{{d}^{i}}^{z}\overrightarrow{{d}^{i}}$$
where \({w}_{{t}^{j}}^{z}\) and \({w}_{{d}^{i}}^{z}\) represent the weight between \({d}^{z}\) and \({t}^{j}\) in \({G}_{b}\) and the weight between \({t}^{z}\) and \({d}^{i}\) in \({G}_{b}\), respectively. In this step, we can model the interactive pathway information about the known interactions between drugs (which are more similar to \({d}^{i}\)) and \({t}^{j}\) as well as the known interactions between \({d}^{i}\) and targets (which are more similar to \({t}^{j}\)). In the fifth step, a new embedding vector \({d}^{part\_i}\) is calculated by summing the vectors \({d}^{sim\_i}\) and \({d}^{path\_i}\) up, and the embedding vector \({t}^{part\_j}\) is formed in a similar way (Eqs. (14)-(15)).
$${d}^{part\_i}={d}^{sim\_i}+{d}^{path\_i}$$
$${t}^{part\_j}={t}^{sim\_j}+{t}^{path\_j}$$
Finally, the \({d}^{part\_i}\) and \({t}^{part\_j}\) can be concatenated to obtain an embedding of the \({d}^{i}-{t}^{j}\) pair, which effectively integrates characteristics from the bipartite DTI network as well as drug and target homogeneous networks. In addition, this calculation process is conducted after the cross-validation (CV) setup.
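Steps two to five translate directly into NumPy. The sketch below mirrors Eqs. (10)–(15) under hypothetical names, assuming the BiNE embeddings are stacked row-wise in D_emb and T_emb and that G_d and G_t are the fused homogeneous matrices:

import numpy as np

def pair_embedding(i, j, D_emb, T_emb, G_d, G_t, G_b, k=5):
    # Embedding of the d^i - t^j pair, following Eqs. (10)-(15)
    near_d = np.argsort(G_d[i])[::-1][:k]       # the k drugs most similar to d^i
    near_t = np.argsort(G_t[j])[::-1][:k]       # the k targets most similar to t^j

    d_sim = np.sum(G_d[i, near_d]) * D_emb[i]   # Eq. (10): similarity-weighted copy of d^i
    t_sim = np.sum(G_t[j, near_t]) * T_emb[j]   # Eq. (11)

    d_path = np.sum(G_b[near_d, j]) * T_emb[j]  # Eq. (12): known links (near drugs, t^j)
    t_path = np.sum(G_b[i, near_t]) * D_emb[i]  # Eq. (13): known links (d^i, near targets)

    return np.concatenate([d_sim + d_path,      # Eq. (14)
                           t_sim + t_path])     # Eq. (15)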
RF-based drug-target interaction predictor
After acquiring embeddings of all drug-target pairs in the dataset, the RF classifier [31] can be used for predicting the DTIs. RF has been proved to perform well in the face of high-dimensional features and be able to deal with overfitting in the case of insufficient training data. More importantly, it can handle the sample-class-imbalance problem efficiently. We implement the RF classifier by using the scikit-learn [32] tool, and the embeddings of drug-target pairs are as the input. The probability of whether each drug-target pair has a potential interaction is then predicted.
In addition, we tune the parameters of the RF classifier for better learning the complex integrated embeddings. The number of estimators is set to 100, the criterion for measuring the quality of a split is the Gini coefficient, and we make the weights of the model inversely proportional to the occurrence frequency of positive (known DTIs) and negative (unknown DTIs) classes based on input labels, to further overcome the challenge of considerable imbalance between the number of known and unknown DTIs.
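With scikit-learn, this configuration amounts to roughly the following (the "balanced" class-weight option is scikit-learn's built-in way of making weights inversely proportional to class frequencies; X_train, y_train, and X_test stand for the pair embeddings and labels produced above):

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=100,          # number of estimators
    criterion="gini",          # Gini coefficient as the split-quality criterion
    class_weight="balanced",   # weights inversely proportional to class frequencies
    random_state=0,            # our addition, for reproducibility
)
rf.fit(X_train, y_train)                  # embeddings of drug-target pairs + binary labels
scores = rf.predict_proba(X_test)[:, 1]   # interaction probability of every test pair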
In this section, we evaluate the predictive performance of the proposed method in two different settings (SD, ST) based on two main datasets. First, we introduce the model parameters, the details of the experimental settings, and the model evaluation metrics. Then, we compare our method with other advanced DTI prediction approaches under the same experimental conditions. Next, we conduct a case study in which unknown DTIs are predicted and the top-scoring results are validated by searching for evidence in multiple reference databases.
In this study, two benchmark datasets are used for establishing the bipartite DTI relations (networks). The first (a gold standard dataset) was collected by Yamanishi et al. [33] and includes four DTI subsets classified by the type of (human) target protein: Enzymes (E, 445 drugs and 664 proteins), Ion Channels (IC, 210 drugs and 204 proteins), G-protein-coupled Receptors (GPCR, 223 drugs and 95 proteins), and Nuclear Receptors (NR, 54 drugs and 26 targets). The second was obtained from Olayan RS et al. [27] and consists of interactions between 1482 FDA-approved drugs and 1408 human target proteins (of multiple categories), acquired from the DrugBank dataset [34]. Furthermore, the proportions of known and unknown interactions in these datasets are shown in Table 1.
Table 1 The proportion of positive and negative samples in each dataset
In the bipartite DTI networks, if there is a known interaction between \({d}^{i}\) and \({t}^{j}\), the corresponding weight is 1, otherwise it is 0 instead.
Besides, the drug-drug and target-target similarities for generating the composite homogeneous network were obtained from Olayan RS et al. [27]. For the first dataset, there are three types of drug similarities (chemical structure fingerprints, drug side-effect profiles, and the Gaussian interaction profile (GIP)) and six for targets (amino acid sequence profiles, various parameterizations of the Mismatch kernel, the Spectrum kernel, target protein functional annotation based on Gene Ontology (GO) terms, proximity in the protein–protein interaction (PPI) network, and the GIP). For the second dataset, there are eight similarities for drugs (molecular fingerprints, drug interaction profiles, side-effect profiles, drug profiles of the anatomical therapeutic class coding system, drug-induced gene expression profiles, drug disease profiles, drug pathway profiles, and the GIP) and six for targets (protein amino acid sequence, protein GO annotations, proximity in the PPI network, the GIP, protein domain profiles, and gene expression similarity profiles of protein-encoding genes). In addition, the weights in each kind of similarity matrix were mapped to the same scale using 0-1 normalization.
Experimental settings, evaluation metrics and model parameters
In order to avoid an overly idealistic assessment, we evaluate the performance of our method (i.e., the quality of the generated embeddings) under two different DTI prediction settings inspired by Pahikkala T et al. [35], which provide different splits of the generated set of drug-target pair embeddings. Following the definition of the settings in Olayan RS et al. [27], the first setting is called the SD task, in which tenfold CV is used and, in each fold, the drug-target pair embeddings corresponding to one tenth of all drugs appear only in the test set. Analogously, in the ST task, the drug-target pairs corresponding to one tenth of all targets appear only in the test set. In addition, the case study mentioned above corresponds to a more realistic scenario for testing the performance of predicting unknown DTIs, in which all known DTIs are added to the training data as auxiliary information to predict unknown DTIs (which are then verified) [27, 36]. More specifically, we first set the labels of all known DTIs to 1, and the labels of the other samples (both drug-target pairs without any interaction and drug-target pairs with undiscovered interactions) in the DTIs space to 0. Then, we randomly divide all drug-target pairs labeled 0 into 10 non-overlapping groups, and in each group all samples labeled 1 are incorporated into the training set. Thus, during the whole predictive process, the RF classifier receives the embeddings of all drug-target pairs labeled 0 and can therefore provide probability scores for all unknown drug-target pairs in the given dataset, so that we can extract predicted novel DTIs from the top-ranked results. Furthermore, since the aim of the case study is to predict potential interactions of unknown DTIs only, it is not necessary to calculate the performance metrics there.
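One way to realize the SD split with standard tooling is scikit-learn's GroupKFold, grouping every drug-target pair by its drug index (for the ST task, group by the target index instead). This is only a sketch of the split logic under our own variable names, not necessarily the authors' exact implementation:

import numpy as np
from sklearn.model_selection import GroupKFold

# X: embeddings of all drug-target pairs; y: binary DTI labels;
# drug_idx[k]: drug index of the k-th pair, so each fold's test set holds
# all pairs of roughly one tenth of the drugs (the SD task)
gkf = GroupKFold(n_splits=10)
for train_idx, test_idx in gkf.split(X, y, groups=drug_idx):
    rf.fit(X[train_idx], y[train_idx])
    fold_scores = rf.predict_proba(X[test_idx])[:, 1]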
As for the SD and ST tasks, we acquire a more reasonable performance estimate by choosing the PR-AUC as the main evaluation metric; it functions well when there are far more negative samples than positive samples in the dataset (Table 1), because it imposes a stricter punishment on false positive (FP) cases [37]. The ROC-AUC is selected as the auxiliary evaluation metric. In each fold of CV, the PR-AUC is obtained by calculating the area under the precision-recall (PR) curve constructed from the predictions of the RF classifier and the corresponding actual labels. Similarly, the ROC-AUC is calculated from the ROC curve, which is plotted from multiple true positive rate (TPR)-false positive rate (FPR) pairs under different threshold settings. The overall PR-AUC and ROC-AUC of the tenfold CV are derived by averaging the values over all folds. The general hyperparameters of our method, tuned by grid search for each dataset, are shown in Table 2. In addition, the dimension of the final embeddings of drug-target pairs is twice that of the embeddings generated by BiNE.
Table 2 Hyperparameters of BiNE for different datasets
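Per fold, both areas can be computed with scikit-learn (average_precision_score gives a standard step-wise estimate of the area under the PR curve; this is our choice of estimator, not necessarily the authors' exact integration scheme):

from sklearn.metrics import average_precision_score, roc_auc_score

pr_auc = average_precision_score(y_true, fold_scores)   # area under the PR curve
roc_auc = roc_auc_score(y_true, fold_scores)            # area under the ROC curve
# the overall scores are the averages of these values over the ten folds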
Comparison with other recent DTI prediction methods
In this section, under the same datasets, evaluation metrics, and prediction tasks (SD and ST), seven advanced methods that can effectively utilize drug-target related knowledge, namely DDR [27], NEDD [38], NRLMFβ [39], DTINet [2], CMF [40], BLM-NII [41], and NetLapRLS [10], are included in the performance comparison, which allows us to compare the proposed method with representative heterogeneous-network-based, matrix-factorization-based, and kernel-based methods. For methods that can only handle a single type of drug and target similarity, such as BLM-NII and NetLapRLS, we use the compound structure similarities (for drugs) and protein sequence similarities (for targets) provided by Yamanishi et al. [33] as the model input. To further demonstrate the effectiveness and feasibility of integrating similarity-based and path-based prior knowledge into the embeddings of drug-target pairs, we add BiNE itself to the comparison. That is, the embedding vector of each drug-target pair is obtained by directly concatenating the corresponding drug and target embeddings produced by BiNE (i.e., without considering any additional prior information). The generated vectors are then fed into an RF classifier identical to the one used in our method to obtain the probability score of every drug-target pair. In addition, we do not consider DTiGEMS+ [16] mentioned above, because it is difficult to evaluate this method and ours simultaneously under the same experimental settings: it requires the same number of positive and negative samples in each fold of a tenfold CV, while in our method the allocation of samples follows the rule of the SD and ST tasks, which results in highly imbalanced samples in the training set.
Tables 3 and 4 show the PR-AUC and ROC-AUC of the methods participating in the SD and ST tasks. In general, based on the main evaluation metrics PR-AUC, our method has overall better performance than the other methods in the both tasks. For the SD task, the PR-AUC achieved by our method increases by 1.2%, 2.6%, 3.2%, 2.8%, and 35.1% on E, IC, GPCR, NR, and DrugBank datasets, respectively, compared with that of the second-best model. For the ST task, the corresponding improvements made by our method are 1.8% (E), 2.8% (IC), 4.2% (GPCR), 13.8% (NR), and − 11.7% (DrugBank), respectively. Meanwhile, under the auxiliary evaluation metrics ROC-AUC, our method is also generally superior to other models.
Table 3 Performance comparison over five datasets in the SD task
Table 4 Performance comparison over five datasets in the ST task
To investigate why our method performed differently in the SD and ST tasks on the DrugBank dataset, we counted the number of targets of every drug (\({N}^{drug}\)) in the SD task (in which data was split according to drugs) and the number of drugs of every target (\({N}^{target}\)) in the ST task (in which data was split according to targets), based on the known DTIs in the DrugBank dataset; we further calculated the mean and variance of \({N}^{drug}\) and \({N}^{target}\). The corresponding values were \(\mathrm{Mean}\left({N}^{drug}\right)=6.67\), \(\mathrm{Var}\left({N}^{drug}\right)=45.30\), \(\mathrm{Mean}\left({N}^{target}\right)=7.02\), and \(\mathrm{Var}\left({N}^{target}\right)=660.80\), respectively. The significant difference between \(\mathrm{Var}\left({N}^{drug}\right)\) and \(\mathrm{Var}\left({N}^{target}\right)\) suggests an explanation: the auxiliary information (i.e., pathway and similarity-based information), \(\mathrm{Mean}\left({N}^{drug}\right)\), and \(\mathrm{Mean}\left({N}^{target}\right)\) are comparable across the two tasks, but our method depends on high-quality bipartite DTI relations to produce embeddings, and the sample variance related to the DTI relations is much larger in the ST task than in the SD task; therefore, our method performs better in the SD task than in the ST task. Meanwhile, DTINet, another heterogeneous network embedding method that also relies on DTIs to generate a projection matrix for DTI predictions, suffers a similarly significant drop in predictive performance (from 0.316 to 0.176 in PR-AUC). In contrast, DDR is not an embedding-based method that needs DTIs, so its performance in the ST task remains stable. This phenomenon also shows that the quality of the bipartite DTI relations plays a significant role in learning embeddings of a heterogeneous DTI network.
In addition, after obtaining the drug and target embeddings, DTINet used inductive matrix completion (IMC) to directly learn from these embeddings and the known DTIs, generating a projection matrix that led to the DTI predictions, and few between-class imbalance learning techniques were adopted. Our method instead utilizes the RF classifier to predict DTIs, which can handle the sample-class-imbalance problem more efficiently. Therefore, in the face of the highly imbalanced samples in the SD and ST tasks, our method outperformed DTINet.
To further prove the capability of the proposed model in a more realistic DTI prediction scenario, we introduce the case study mentioned in the "Experimental settings, evaluation metrics and model parameters" section. Based on the case study, we acquire the drug-target pairs with the highest (top 5) probability scores predicted by the RF classifier on each dataset and search for relevant evidence in six external databases (KEGG (K) [42], DrugBank (D) [34], Matador (M) [43], ChEMBL (C) [44], T3DB (T) [45], and CTD [46]). The DTIs contained in the used datasets were collected before 2008; thus, we can perform the verification using newly updated DTIs in the above databases. The predicted interactions (25 in total) and the corresponding supporting evidence are shown in Table 5.
Table 5 The novel interactions predicted by DTI-HeNE and corresponding evidence provided by external databases
In summary, we found evidence for the majority of the predicted interactions (22 out of 25), and we carried out further research on these predictions. For the drug in a top-scoring drug-target pair, we can usually find evidence that this drug interacts with other targets belonging to the same gene family as the target in that pair. For example, in the GPCR group, the first-ranked prediction indicates a potential interaction between pindolol and ADRA2C. Pindolol is a moderately lipophilic beta blocker (an adrenergic beta-antagonist) [47], and ADRA2C stands for the Alpha-2C adrenergic receptor. It was reported that the gene coding for ADRA2C is associated with beta-blocker response in a group of patients with chronic kidney disease [48]. Meanwhile, we find that ADRA2A and ADRA2B, which are also members of the ADRA gene family, can interact with pindolol (from the Matador database).
There is another instance that can be used to further illustrate such a characteristic of DTI predictions. In the IC group, it was predicted that carbachol could react with CHRNA5 (the top ranked interaction). Carbachol [49] is a slowly hydrolyzed cholinergic agonist and CHRNA5 refers to the neuronal acetylcholine receptor subunit alpha-5. There is a recent drug-repurposing report that carbachol can combine with histamine and dopamine to block the inhibitory effects of benztropine mesylate on mammosphere formation of breast cancer stem cells. During the interaction process, the mRNA expression levels of CHRNA5 were variably altered within different types of tested cells [50]. Furthermore, the interactive information between carbachol and CHRNA2, CHRNA3, CHRNA4, CHRNA6 can be accessed from the Matador dataset.
In this work, we introduced a novel DTI prediction method, DTI-HeNE, which draws on the heterogeneous information from every sub-network of the heterogeneous DTI network to produce high-quality embeddings of drug-target pairs. Under the same experimental settings (SD and ST tasks) and evaluation metrics (PR-AUC, ROC-AUC), we obtained the comparison results shown in Tables 3 and 4. On the current five benchmark datasets, we show that the overall performance of our method is better than that of the other methods involved in the experiment. We consider that the superior performance of DTI-HeNE is attributable to the following two reasons.
The first reason is the use of BiNE: when processing bipartite DTI relations for DTI predictions, in addition to modeling the observed edges between drugs and targets, it is essential to consider the distinctive information of drug and target nodes, respectively. BiNE implements this by separately extracting the implicit transition relationships between drugs and between targets (i.e., acquiring the 2nd-order proximity between the same type of vertices), which provides unique similarity information (e.g., the homogeneous network illustrated in Fig. 2) compared with similarities calculated from domain knowledge. The second reason is that the distinct information of each sub-network of the heterogeneous DTI network is effectively combined using path-based semantic information, as integrating this information through interpretable pathways between the sub-networks contributes to a more explicit description of drug-target associations throughout the DTIs space. For an analogous reason, DDR also achieved great performance by extracting various path-category-based features from a heterogeneous network and combining the generated features into one fixed-length vector (as the representation of one drug-target pair). The advantage of our method is that the high-order prior proximity information of drugs and targets can be fused into the representations of drug-target pairs, and the length of these representations is no longer fixed, so we can flexibly adjust it to meet the needs of specific tasks. These benefits are brought by utilizing an embedding-based algorithm as the backbone to process the heterogeneous DTI network.
When doing the case study, we observed that, for the newly discovered DTIs, it was common to find supporting evidence that targets belonging to the same gene family as the predicted target could interact with the predicted drug. We speculate that this is because we follow the principle "similar drugs may interact with similar targets" in designing the predictive method, which is reflected in the Co-HITS-based homogeneous matrix generation and the drug-target embedding generation. The benefit is that we can forecast unknown DTIs more purposefully and directionally and reduce the probability of misjudgment by using abundant similarity information. However, the scale of the search space in which novel DTIs can be found is also narrowed. That is, if the similarity between the nodes in a certain drug-target pair and the other nodes in the dataset is relatively low, it is less likely that this pair will be predicted to have a potential interaction, even if it actually contains an association. Thus, we plan to explore how to extend our method so that it pays higher attention to drugs that have relatively low similarity to other drugs but are worthy of further study. In addition, the proposed method is an attempt to use a stage-by-stage transductive-learning method for the DTIs prediction; the benefit is that it has better interpretability than many end-to-end methods, as every stage has a clear, concrete meaning in the workflow. However, because our method currently functions in a transductive-like way, it has a higher computational cost than inductive-learning methods: transductive learning can bring higher predictive accuracy on datasets with sparse known interactions by better exploiting the information of unknown samples, but the model has to be re-run whenever new nodes/samples are added to the dataset, whereas inductive learning is not limited to a specific set of drugs and targets. Thus, in the future, we would like to further modify our method to make it suitable for inductive-like DTI prediction tasks.
Furthermore, adapting our algorithm to predict interactions between microRNAs (miRNAs) and small-molecule drugs would be a highly interesting future direction, because an increasing number of studies have found that the abnormal expression of miRNAs is closely connected with many complex human diseases, and small-molecule drugs can treat them by modulating the expression of miRNAs [51]. Similar to general drug-target interaction prediction, accurate prediction of the miRNA targets of small-molecule drugs can be made based on miRNA and small-molecule similarity networks, known miRNA-molecule interactions, and the "guilt-by-association" assumption [52,53,54]; such data are quite similar to the data required by our method. Based on this, we believe that, with proper adjustments and data, DTI-HeNE can be applied to predict the interactions between small-molecule drugs and miRNAs.
In this paper, a novel heterogeneous network embedding method, DTI-HeNE, has been proposed for DTI prediction; it extracts distinct features from every sub-network of the heterogeneous DTI network and concatenates these features using the topological information between the sub-networks. This study has demonstrated the feasibility and practicability of deconstructing the heterogeneous DTI network to capture the complex information it contains and generate high-quality embeddings of drug-target pairs. In addition, we have shown that, after proper adjustments, BiNE can efficiently learn the special bipartite relations contained in drug-target interactions.
Moreover, our method achieved overall higher predictive accuracy than other advanced methods in different experimental scenarios under the same evaluation and verification procedure. In the task of novel DTI prediction, our method can also generate reasonable results with clear directivity. In conclusion, for drug repurposing, the proposed method is an effective and useful tool for identifying new DTIs.
The datasets analyzed during the current study are available in the DDR repository, https://bitbucket.org/RSO24/ddr/. The source codes are publicly available in the GitHub repository, https://github.com/arantir123/DTI-hene/.
DTI:
Drug-target interaction
BiNE:
Bipartite network embedding
RWR:
Random walk with restart
DAE:
Denoising autoencoder
CNN:
Convolutional neural network
GCN:
Graph convolutional network
SGA:
Stochastic gradient ascent
SNF:
Similarity network fusion
IMC:
Inductive matrix completion
GPCR:
G-protein-coupled receptor
NR:
Nuclear receptor
GIP:
Gaussian interaction profile
PPI:
Protein–protein interaction
CV:
Cross-validation
PR:
Precision-recall
TPR:
True positive rate
FPR:
False positive rate
miRNA:
MicroRNA
Manoochehri HE, Nourani M. Drug-target interaction prediction using semi-bipartite graph model and deep learning. BMC Bioinform. 2020;21(4):1–16.
Luo Y, Zhao X, Zhou J, et al. A network integration approach for drug-target interaction prediction and computational drug repositioning from heterogeneous information. Nat Commun. 2017;8(1):1–13.
Chen X, Yan C-C, Zhang X, et al. Drug–target interaction prediction: databases, web servers and computational models. Brief Bioinform. 2016;17(4):696–712.
Wang C-C, Zhao Y, Chen X. Drug-pathway association prediction: from experimental results to computational models. Brief Bioinform. 2021;22(3):bbaa061.
Li J, Zheng S, Chen B, et al. A survey of current trends in computational drug repositioning. Brief Bioinform. 2016;17(1):2–12.
Keiser MJ, Roth BL, Armbruster BN, et al. Relating protein pharmacology by ligand chemistry. Nat Biotechnol. 2007;25(2):197–206.
Donald BR. Algorithms in structural molecular biology. Cambridge: MIT Press; 2011.
Lan W, Wang J, Li M, et al. Predicting drug-target interaction based on sequence and structure information. IFAC PapersOnLine. 2015;48(28):12–6.
Nascimento ACA, Prudêncio RBC, Costa IG. A multiple kernel learning algorithm for drug-target interaction prediction. BMC Bioinform. 2016;17(1):46.
Xia Z, Wu L-Y, Zhou X, et al. Semi-supervised drug-protein interaction prediction from heterogeneous biological spaces. BMC Syst Biol BioMed Cent. 2010;4(2):1–16.
Bleakley K, Yamanishi Y. Supervised prediction of drug–target interactions using bipartite local models. Bioinformatics. 2009;25(18):2397–403.
Zhao Z, Zhang X, Zhou H, et al. HetNERec: heterogeneous network embedding based recommendation. Knowl Based Syst. 2020;204:106218.
Lu Z, Wang Y, Zeng M, et al. HNEDTI: Prediction of drug-target interaction based on heterogeneous network embedding. In: 2019 IEEE international conference on bioinformatics and biomedicine (BIBM). IEEE; 2019. p. 211–4.
Parvizi P, Azuaje F, Theodoratou E, et al. A Network-based embedding method for drug-target interaction prediction. In: 2020 42nd annual international conference of the IEEE engineering in medicine & biology society (EMBC). IEEE; 2020. p. 5304–7.
Chen X, Yu G, Wang J, et al. ActiveHNE: active heterogeneous network embedding. arXiv preprint arXiv:1905.05659. 2019.
Thafar MA, Olayan RS, Ashoor H, et al. DTiGEMS+: drug–target interaction prediction using graph embedding, graph mining, and similarity-based techniques. J Cheminform. 2020;12(1):1–17.
Grover A, Leskovec J. node2vec: scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining; 2016. p. 855–64.
Peng J, Li J, Shang X. A learning-based method for drug-target interaction prediction based on feature representation learning and deep neural network. BMC Bioinform. 2020;21(13):1–13.
Peng J, Wang Y, Guan J, et al. An end-to-end heterogeneous graph representation learning-based framework for drug–target interaction prediction. Brief Bioinform. 2021.
Gao M, Chen L, He X, et al. Bine: bipartite network embedding. In: The 41st international ACM SIGIR conference on research & development in information retrieval; 2018. p. 715–24.
Zhu Q, Luo J, Ding P, et al. GRTR: Drug-disease association prediction based on graph regularized transductive regression on heterogeneous network. In: International symposium on bioinformatics research and applications. Springer; 2018. p. 13–25.
Joachims T. Transductive learning via spectral graph partitioning. In: Proceedings of the 20th international conference on machine learning (ICML-03); 2003. p. 290–7.
Wan S, Mak MW, Kung SY. Transductive learning for multi-label protein subchloroplast localization prediction. IEEE/ACM Trans Comput Biol Bioinf. 2016;14(1):212–24.
Gammerman A, Vovk V, Vapnik V. Learning by transduction. arXiv preprint arXiv:1301.7375. 2013.
Pliakos K, Vens C. Drug-target interaction prediction with tree-ensemble learning and output space reconstruction. BMC Bioinform. 2020;21(1):1–11.
Deng H, Lyu MR, King I. A generalized co-hits algorithm and its application to bipartite graphs. In: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining; 2009. p. 239–48.
Olayan RS, Ashoor H, Bajic VB. DDR: efficient computational method to predict drug–target interactions using graph mining and machine learning approaches. Bioinformatics. 2018;34(7):1164–73.
Mikolov T, Sutskever I, Chen K, et al. Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems; 2013. p. 3111–9.
Yin H, Zou L, Nguyen QVH, et al. Joint event-partner recommendation in event-based social networks. In: 2018 IEEE 34th international conference on data engineering (ICDE). IEEE; 2018. p. 929–40.
Wang B, Mezlini AM, Demir F, et al. Similarity network fusion for aggregating data types on a genomic scale. Nat Methods. 2014;11(3):333.
Ho TK. Random decision forests. In: Proceedings of 3rd international conference on document analysis and recognition. IEEE; 1995. p. 278–82.
Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825–30.
Yamanishi Y, Araki M, Gutteridge A, et al. Prediction of drug–target interaction networks from the integration of chemical and genomic spaces. Bioinformatics. 2008;24(13):i232–40.
Wishart DS, Knox C, Guo A-C, et al. DrugBank: a knowledgebase for drugs, drug actions and drug targets. Nucleic Acids Res. 2008;36(suppl_1):D901–6.
Pahikkala T, Airola A, Pietilä S, et al. Toward more realistic drug–target interaction predictions. Brief Bioinform. 2015;16(2):325–37.
Van Laarhoven T, Nabuurs SB, Marchiori E. Gaussian interaction profile kernels for predicting drug–target interaction. Bioinformatics. 2011;27(21):3036–43.
Davis J, Goadrich M. The relationship between Precision-Recall and ROC curves. In: Proceedings of the 23rd international conference on machine learning; 2006. p. 233–240.
Zhou R, Lu Z, Luo H, et al. NEDD: a network embedding based method for predicting drug-disease associations. BMC Bioinform. 2020;21(13):1–12.
Ban T, Ohue M, Akiyama Y. NRLMFβ: Beta-distribution-rescored neighborhood regularized logistic matrix factorization for improving the performance of drug–target interaction prediction. Biochem Biophys Rep. 2019;18:100615.
Zheng X, Ding H, Mamitsuka H, et al. Collaborative matrix factorization with multiple similarities for predicting drug-target interactions. In: Proceedings of the 19th ACM SIGKDD international conference on knowledge discovery and data mining; 2013. p. 1025–33.
Mei J-P, Kwoh C-K, Yang P, et al. Drug–target interaction prediction by learning from local information and neighbors. Bioinformatics. 2013;29(2):238–45.
Kanehisa M, Goto S. KEGG: kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000;28(1):27–30.
Günther S, Kuhn M, Dunkel M, et al. SuperTarget and Matador: resources for exploring drug-target relationships. Nucleic Acids Res. 2007;36(suppl_1):D919–22.
Gaulton A, Bellis LJ, Bento AP, et al. ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res. 2012;40(D1):D1100–7.
Wishart D, Arndt D, Pon A, et al. T3DB: the toxic exposome database. Nucleic Acids Res. 2015;43(D1):D928–34.
Davis AP, Grondin CJ, Johnson RJ, et al. The comparative toxicogenomics database: update 2017. Nucleic Acids Res. 2017;45(D1):D972–8.
Reynolds JEF. Martindale: the extra pharmacopoeia. London: The Pharmaceutical Press; 1982.
Borro M, Guglielmetti M, Simmaco M, et al. The future of pharmacogenetics in the treatment of migraine. Pharmacogenomics. 2019;20(16):1159–73.
Konopacki J, MacIver MB, Bland BH, et al. Carbachol-induced EEG 'theta' activity in hippocampal brain slices. Brain Res. 1987;405(1):196–8.
Cui J, Hollmén M, Li L, et al. New use of an old drug: inhibition of breast cancer stem cells by benztropine mesylate. Oncotarget. 2017;8(1):1007.
Wang C-C, Chen X, Qu J, et al. RFSMMA: a new computational model to identify and prioritize potential small molecule–mirna associations. J Chem Inf Model. 2019;59(4):1668–79.
Chen X. miREFRWR: a novel disease-related microRNA-environmental factor interactions prediction method. Mol BioSyst. 2016;12(2):624–33.
Chen X, Guan N-N, Sun Y-Z, et al. MicroRNA-small molecule association identification: from experimental results to computational models. Brief Bioinform. 2020;21(1):47–61.
Jamali AA, Kusalik A, Wu F-X. MDIPA: a microRNA–drug interaction prediction approach based on non-negative matrix factorization. Bioinformatics. 2020;36(20):5061–7.
We are grateful to the anonymous reviewers for their constructive comments on the original manuscript.
College of Information and Electrical Engineering, China Agricultural University, Beijing, 100083, China
Yang Yue
Centre for Computational Biology, School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
Shan He
Conception: SH; design of the work: YY; the acquisition, analysis: YY; interpretation of data: YY; the creation of new software used in the work: YY; paper writing: YY, SH. All authors read and approved the final manuscript.
Correspondence to Shan He.
No ethics approval was required for the study.
Yue, Y., He, S. DTI-HeNE: a novel method for drug-target interaction prediction based on heterogeneous network embedding. BMC Bioinformatics 22, 418 (2021). https://doi.org/10.1186/s12859-021-04327-w
Drug-target interaction prediction
Heterogeneous network embedding
Graph mining
Feature fusion
Machine Learning and Artificial Intelligence in Bioinformatics
A BERT-Based Automatic Scoring Model of Korean Language Learners' Essay
Journal of Information Processing Systems. ISSN: 2092-805X. 2022;18(2):282-291
Jung Hee Lee , Ji Su Park and Jin Gon Shon
Corresponding Author: Jin Gon Shon , [email protected]
Jung Hee Lee, Dept. of Korean Language Education as a Second Language, Kyung Hee University, Seoul, Korea, [email protected]
Ji Su Park, Dept. of Computer Science and Engineering, Jeonju University, Jeonju, Korea, [email protected]
Jin Gon Shon, Dept. of Computer Science, Korea National Open University, Seoul, Korea, [email protected]
Received: September 13 2021
Revision received: November 17 2021
Accepted: November 29 2021
Published (Print): April 30 2022
Published (Electronic): April 30 2022
Abstract: This research applies a pre-trained bidirectional encoder representations from transformers (BERT) hand-writing recognition model to predict foreign Korean-language learners' writing scores. A corpus of 586 answers to midterm and final exams written by foreign learners at the Intermediate 1 level was acquired and used for pre-training, resulting in consistent performance, even with small datasets. The test data were pre-processed and fine-tuned, and the results were calculated in the form of a score prediction. The difference between the prediction and actual score was then calculated. An accuracy of 95.8% was demonstrated, indicating that the prediction results were strong overall; hence, the tool is suitable for the automatic scoring of Korean written test answers, including grammatical errors, written by foreigners. These results are particularly meaningful in that the data included written language text produced by foreign learners, not native speakers.
Keywords: Automatic Writing Scoring , Bidirectional Encoder Representations from Transformers , Korean as a Foreign Language , Natural Language Processing
The most important part of descriptive scoring is determining whether the scoring is consistent based on valid criteria [1]. Therefore, if these scoring criteria and processes can be automated, scoring costs and the temporal burdens of large-scale evaluations could be greatly reduced. Additionally, learners could improve their writing skills faster by receiving immediate feedback on their written inputs. To this end, it is necessary to determine whether automatic scoring is possible for Korean learners' descriptive answers containing various errors of form, syntax, and grammar. Research related to automatic scoring capabilities for descriptive Korean-language text-exam responses has developed alongside corresponding natural language processing (NLP)-related technologies [2-4]. A language model performs the allocation and probability determination of word sequences and sentences in NLP [5]. Machine- and deep-learning technologies, such as the embeddings-from-language model, the generative pre-training model, and the bidirectional encoder representations from transformers (BERT), have been developed for NLP tasks [6,7]. Unfortunately, their application to automatic scoring systems is lacking [8].
In this paper, a pre-trained BERT model is used to score foreign Korean-language students' descriptive written test answers, and the results are compared to other language model techniques. BERT's pre-trained model can provide consistent results when trained on a small number of data [6]. Fig. 1 displays the high-level BERT model.
BERT model.
2. Characteristics of Descriptive Answers of Korean Learners
2.1 Intermediate Korean Learner's Language Characteristics
There are 200 training hours for each grade (10 weeks × 20 hours) according to the International Korean Curriculum Standard for regular domestic institutions and the Korean Language Proficiency Test, governed by the National Institute of Korean Language. The data used in this paper correspond to Intermediate 1 learners, whose goals include being able to write about abstract topics concerning familiar social life in a simple structure [9]. At the intermediate level, learners have completed 350–400 hours of Korean language learning and have acquired about 100 grammar items and 1,000 vocabulary words.
These learners can create complete sentences, but there are often errors in word order. Complexity increases when developing various sentence structures related to life experiences. Basic errors that often appear are related to the persistence of the grammar systems applied in their native language. Students have a strong tendency to ask teachers for detailed explanations to help them reconcile the new rules to their old rules. However, in terms of vocabulary learning, students are interested in various media sources for learning and immersion. Furthermore, learners who are not familiar with Chinese characters suddenly realize the need to learn Chinese characters. Many structural vocabulary errors have been catalogued, including errors of derivation, behavior, and instruction [10].
In this study, data are obtained from past processes of teaching Korean to foreign students, where the foreign students are learning Korean. These students must acquire and master morphological, syntactic, and pragmatic language rules for the first time. The focus of this research is not on analyzing the quality of prose. Instead, it is about analyzing language rules in the context of midterm and final exams. This type of writing consists of personal introductions and simple expository writings.
2.2 Test Question Detail
The test questions for intermediate-level learners are not based on proficiency evaluations but on the curricula of educational institutions. Most learners score more than 70 points. For example, a test question may direct the student to write more than 500 characters about a memorable travel account and provide categorical details, inclusively, of destination, transportation, purpose, sights, feelings, events, and reasons using complete sentences.
3. A BERT-Based Automatic Scoring Model
3.1 Model Design
A BERT-Based Automatic Scoring Model consists of three modules: data analysis, data preprocessing, and data learning (Fig. 2).
A BERT-Based Automatic Scoring Model.
Because the actual tests were conducted as written exams in which answers were written on paper, the contents of the answers were entered into a text file that preserved all typos, spaces, non-grammatical items, and symbols as written by the learners. A tag of "unknown" was added if the writing could not be recognized. Items were analyzed based on the ratio of English and special characters, the distribution of answer lengths, and the graded answer data. Factors such as special characters and numbers interfere with learning. The lengths of the answers and the distribution of scores were correlated with the grades.
In the BERT's preprocessing module, sentence tokenization was required. A [CLS] indicator was attached to the beginning of the text token, and a [SEP] was added to the end. In succession, each token was indexed, the length of each sentence was compared to the maximum length, and padding was applied for those not reaching the maximum. Then, an attention mask was generated.
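As a rough sketch, the tokenization and padding steps above can be reproduced with the Hugging Face tokenizer that the study's pipeline builds on; the checkpoint name and the sample sentence below are placeholders, not the study's actual ones.

    from transformers import BertTokenizer

    MAX_LEN = 384  # maximum sequence length used in this study

    # Placeholder checkpoint; the study used a Korean pre-trained BERT model.
    tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

    answer = "제 취미는 여행입니다."  # one transcribed learner answer (placeholder)

    # encode_plus prepends [CLS], appends [SEP], converts tokens to indices,
    # zero-pads up to MAX_LEN, and builds the attention mask in one call.
    encoded = tokenizer.encode_plus(
        answer,
        add_special_tokens=True,
        max_length=MAX_LEN,
        padding="max_length",
        truncation=True,
        return_attention_mask=True,
    )
    input_ids = encoded["input_ids"]            # token indices
    attention_mask = encoded["attention_mask"]  # 1 = real token, 0 = padding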
BERT is a two-way language model in which context is considered based on a transformer model. BERT differs from other models in that it conducts unsupervised pre-learning in both directions. The language model provides the probability distribution for the order of words. Through this two-way learning, performance is improved during the fine-tuning stage using the BERT pre-learning model.
A BERT model requires powerful graphics processing units (GPUs), and ours was implemented using Google Colab's Tesla T4. Our study is based on the Hugging Face model built on Transformers for PyTorch and TensorFlow 2.0. The input required an attention process to improve learning efficiency after fixing the length. In this study, the maximum length was set to 384 based on the average length of the writing test answers and on learning efficiency. The pre-trained BERT model of Oh Yeon-taek used Google's SentencePiece to learn 32,000 vocabulary words from about 180 million sentences of Wikipedia and news data; it achieved 87.8% on KorQuAD [11,12].
3.2 Model Implementation
3.2.1 Data collection and input
The data contained information collected from 2013 to 2017 from the Korean language education institution at K. University in Seoul, Korea. The dataset included 586 answers about "my hobby" and "memorable travel," as obtained from Level-3 (Intermediate 1) writing answers for mid-term and final exams (Table 1). The general criteria of Test of Proficiency in Korean (TOPIK) Level 3 are described as follows [13]:
The individual has no problem doing normal, day-to-day activities and has the basic language skills to use various public facilities and maintain interpersonal relationships. He/she can understand and express him/herself regarding not only familiar and specific topics but also familiar social issues in paragraphs. He/she is also capable of distinguishing the basic features of colloquial language and literary language and can understand and use the two forms of language him/herself.
Distribution of answer sheets by score
3.2.2 Data analysis
From the data, the distribution of English and special characters, the answer length distributions, and the score data were analyzed. In the case of "my hobby," 4.85% of sentences included question marks, 100% included periods, 8.63% included uppercase alpha characters, and 54.72% included numbers. In the case of "memorable travel," 3.72% of sentences included question marks, 100% included periods, 10.23% included uppercase letters, and 80.47% included numbers.
In the case of uppercase characters, proper nouns, such as place names and anthroponyms, were written in English. Factors, such as special characters, numbers, and punctuation, slowed calculation speeds, interfering with learning. The distribution of answer lengths was as follows. Among the "my hobby" answers, the maximum length was 736 words, and the minimum was 57 words. The average was 506.19 words, the standard deviation was 114, and the median was 523. Among the "memorable travel" answers, the maximum length was 833 words, and the minimum was 137 words. The average was 618 words, the standard deviation was 112 words, and the median was 630 words. The length of the answer to "memorable travel" was longer than that of "my hobby." Descriptive statistics are shown in Table 2.
The distribution of answers by score group is as follows. The average score of "my hobby" was 64.3, and that of "memorable travel" was 72.5. The median value was 70.
Given the distributions shown in Tables 2 and 3, the correlation between the length of the answer and the score was then verified; the results are shown in Table 4. For the topic "my hobby," the correlation coefficient between answer length and score was 0.477; for "memorable travel," it was 0.495.
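For concreteness, a length-score correlation of the kind reported in Table 4 can be computed as below; this is a sketch assuming SciPy, with placeholder numbers in place of the study's data.

    from scipy.stats import pearsonr

    lengths = [506, 523, 430, 610, 377]  # answer lengths (placeholder values)
    scores = [70, 75, 55, 80, 60]        # matching scores (placeholder values)

    r, p = pearsonr(lengths, scores)
    print("Pearson's r = %.3f, p = %.3f" % (r, p))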
Descriptive statistics of answers by length
Descriptive statistics of answers by score group
Correlation between the lengths of the answers
3.2.3 Data pre-training
The dataset contained 586 answers from midterm and final exams of foreign Korean-language learners at Level 3 (Intermediate 1) Korean proficiency. The pre-training was carried out as follows. First, the writing test data of third-grade learners were transcribed into a text file. Second, personal information and topics were deleted using the Notepad++ program as a pre-processing process. Third, all personal data, carriage returns, punctuation marks, and special characters were removed. Then, single topics were integrated into single lines to facilitate text processing. Commands and scores were tagged at the ends of the lines to enable automatic processing.
Data processing was conducted based on the Hugging Face model. The use of GPUs is essential to BERT model success. There are several GPU methods, such as utilizing cloud services (e.g., Google Korab) or implementing a local environment. In this paper, Google Colab's Tesla T4 was used.
Data were prepared following transformer installation to load training and testing datasets. The data were divided into 70% versus 30% portions: one for training and the other for testing, respectively. During the verification stage, 10% of the training data were used. The next step was conversion according to the input format of BERT, and the Korean Sentence Splitter library was used to separate the converted content into sentence units. Then, the score labels were separated. The tokenizer step was next, which tokenized the content using BERT's tokenizer. This process converts tokenized sentences into numeric indices, calculates the maximum sequence length of the input token, converts each token into a numeric index, matches the sentence to the maximum sequence length, and fills the deficiencies with padded zeroes.
The next step initializes the attention mask, which sets the attention mask to one without padding and zero with padding. Padding is not performed in the BERT attention layer so that the speed can be improved. After separating the data into training and verification sets, the attention mask is also separated into training and verification. Then, the data are converted into a PyTorch tensor. Then, the batch size is set, and it is entered using PyTorch's DataLoader, where it is tied to the mask and labeled to set the data. During learning, data are imported based on the batch size, and to preprocess extracted review sentences, they are converted according to the input format of BERT; then labels are extracted.
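The batching described above might look like the following in PyTorch; this is a sketch that assumes the padded index sequences, attention masks, and score labels have already been built as Python lists (the variable names are illustrative).

    import torch
    from torch.utils.data import TensorDataset, DataLoader, RandomSampler

    batch_size = 4  # batch size used in this study

    # train_inputs, train_masks, train_labels: padded token indices,
    # attention masks, and score labels produced during tokenization.
    dataset = TensorDataset(torch.tensor(train_inputs),
                            torch.tensor(train_masks),
                            torch.tensor(train_labels))
    loader = DataLoader(dataset, sampler=RandomSampler(dataset),
                        batch_size=batch_size)

    for input_ids, masks, labels in loader:
        # each batch is moved to the GPU before the forward pass
        input_ids, masks, labels = (t.to("cuda") for t in (input_ids, masks, labels))
        # ... forward pass, loss computation, backward pass, optimizer step ...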
3.2.4 Data learning
A bidirectional language model was created using BERT for data learning (Fig. 3). When dealing with text, this kind of model is advantageous because the problem of word ambiguity can be solved, even if the same word has different embedding vector values based on sentence form and location. The classification model leveraged 11 layers and an attention method. The optimizer, learning rate, and epoch value were then set. Next, data training was conducted. For reproduction, the random seed was fixed, and the gradient was initialized. During this process, a loss function was set for accuracy calculation.
Next, the accuracy was obtained by multiplying the value by 100; then, training was performed repeatedly according to the number of epochs. Data were imported repeatedly based on the batch setting size in the DataLoader. Then, the batch was sent to the GPU, where data were extracted, and the loss value was obtained. Finally, the average and total loss values were calculated.
Fine tuning model for automatic scoring.
4. Performance Analysis
4.1 Experimental Environment
The BERT program used in this paper for automatic scoring was implemented using Windows 10 with 16-GB memory, Python 3.6, TensorFlow 2.2, PyCharm, and a Google Colab GPU Tesla T4. The Adam-W optimizer was used with a learning rate of 0.00003 to set up the generative adversarial network (GAN) experiment. The contents related to the experimental environment are shown in Table 5.
Experimental environment
The answers, including scores and labels, were used for automatic scoring. Among the descriptive answers written by foreign learners, a total of 586 answers were used, which included 371 corresponding to "my hobby" and 215 answers under the topic of "memorable travel." The data were divided into 70% versus 30% portions: one for training and the other for testing, respectively. During the verification stage, 10% of the training data were used.
For the WScore model creation, the BERT model utilizing GPU for classification was created. The optimizer was set to AdamW, and the learning rate was set to 0.00003 with an epoch size of 40. The hyperparameters of the BERT-based WScore are shown in Table 6.
Hyperparameters of the BERT-based WScore
4.2 Result of the Experiment
For model learning, the accuracy calculation and loss functions were set to reflect the order of scores. The accuracy was then expressed as a percentage by multiplying the value by 100, and training was performed repeatedly based on the settings. The data loader repeatedly moved data of the respective batch sizes onto the GPU, and the data were extracted from each batch to calculate the average loss value. Through this process, the accuracy was confirmed as follows. As shown in Table 7, the accuracy was 84.49% at one epoch and reached 93.62% by 20 epochs; it was highest at 40 epochs. It was confirmed that the accuracy no longer increased even when the number of epochs was raised to 50 and 100.
Accuracy by epoch
This paper used a BERT transfer language model to apply an automatic scoring standard to Korean-language writing tests for foreign learners and to assess its efficacy. The data consisted of descriptive answers written by foreign learners, labeled with their test scores: 371 answers were assessed under the topic "My hobby" and 215 under the topic "Memorable travel." For the total of 586 answers, the training and testing datasets were divided 70% vs. 30%, and 10% of the training data were used for verification. For the automatic scoring of descriptive answers, the BERT-based WScore model was developed, achieving 95.80% accuracy, which is high compared with the methods assessed in previous studies. The most important part of the proposed descriptive scoring method is scoring consistency, which is judged on valid criteria. Hence, if these scoring criteria and processes can be automated, scoring costs and the temporal burdens of large-scale evaluations can be greatly reduced. This study provides an opportunity to further improve automatic scoring capabilities for analyzing foreign learners' written answers. In the future, the efficiency and consistency of writing test scoring will benefit from establishing large-scale learner datasets and setting standards for Korean language evaluation. However, for the formative-evaluation dimension of writing education, scoring should be centered on student feedback; hence, modified approaches are needed.
Jung Hee Lee
She received the B.S. degree in Korean Language and Literature and the M.S. degree in Korean Language Education, and Ph.D. degree in Korean Language and Literature from Kyung Hee University, Seoul, Korea. She is Professor in the Graduate School of Education at the Kyung Hee University, where she teaches Korean language Education as a Foreign Language. She specializes in Korean language pedagogy and her research interests include Korean language assessment as a foreign Language, Korean Language Curriculum and Material development. She has been researching as Visiting Scholars at Georgetown University at Washington D.C., USA, in 2012.
Ji Su Park
He received his B.S. and M.S. degrees in Computer Science from Korea National Open University, Korea, in 2003 and 2005, respectively, and his Ph.D. degree in Computer Science Education from Korea University in 2013. He is currently a Professor in the Dept. of Computer Science and Engineering at Jeonju University in Korea. His research interests are in mobile grid computing, mobile cloud computing, cloud computing, distributed systems, computer education, and IoT. He is employed as managing & associate editor of Human-centric Computing and Information Sciences (HCIS) by Springer, and of The Journal of Information Processing Systems (JIPS) & KIPS Transactions on Software and Data Engineering by KIPS. He has received "best paper" awards from the CSA2018 conference and "outstanding service" awards from CUTE2019 and BIC2020. He has also served as chair, program committee chair, or organizing committee chair at several international conferences including World IT Congress, MUE, FutureTech, CSA, CUTE, and BIC.
Jin Gon Shon
He received the B.S. degree in Mathematics and the M.S. and Ph.D. degrees in Computer Science from Korea University, Seoul, Korea. Since 1991, he has been with the Department of Computer Science, Korea National Open University (KNOU). He had been researching as Visiting Scholars at State University of New York (SUNY) at Stony Brook, USA, in 1997, at Melbourne University, Australia, in 2004, and Indiana University, USA, in 2013. After serving the Head of Information & Computer Center and the Head of e-Learning Center, Professor Shon had established the Department of e-Learning in KNOU, offering the first master program of e-Learning in Korea, and served as the Chair of the Department until 2010. He had also worked for KNOU as Director of the Digital Media Center, where all of KNOU e-learning contents and TV programs are produced. Since 1991, he has been working as well for the community services, as chairs or members in various committees including a Vice President of Korea Information Processing Systems and a Vice President of e-Learning Society. His research interests are mainly focused on computer networks, modeling & simulation, distributed computing, wireless sensor networks, e-learning, and especially in ITLET (Information Technology for Learning, Education, and Training) as a member of Korean Delegation to ISO/IEC JTC1/SC36 since 2000. He has made presentations in many conferences, and he won a few of Best Paper Awards including the Gold Medal Paper in the 24th AAOU Annual Conference in 2010. He has also published over 40 scholarly articles in the noted journals and written several books on computer science and e-learning.
1 S. H. Ahn, C. S. Kim, "A study on the features of writing rater in TOPIK writing assessment," Journal of Korean Language Education, vol. 28, no. 1, pp. 173-196, 2017.doi:[[[10.18209/iakle.2017.28.1.173]]]
2 S. Hwang, K. Kim, "BERT-based classification model for Korean documents," Journal of Society for e-Business Studies, vol. 25, no. 1, pp. 203-214, 2020.custom:[[[-]]]
3 J. O. Min, J. W. Park, Y. J. Jo, B. G. Lee, "Korean machine reading comprehension for patent consultation using BERT," KIPS Transactions on Software and Data Engineering, vol. 9, no. 4, pp. 145-152, 2020.custom:[[[-]]]
4 C. H. Lee, Y. J. Lee, D. H. Lee, "A study of fine tuning pre-trained Korean BERT for question answering performance development," Journal of Information Technology Services, vol. 19, no. 5, pp. 83-91, 2020.custom:[[[-]]]
5 K. Jiang, X. Lu, "Natural language processing and its applications in machine translation: a diachronic review," in Proceedings of 2020 IEEE 3rd International Conference of Safe Production and Informatization (IICSPI), Chongqing City, China, 2020;pp. 210-214. doi:[[[10.1109/iicspi51290.2020.9332458]]]
6 J. Devlin, M. W. Chang, K. Lee, K. Toutanova, "BERT: pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, MN, 2019;pp. 4171-4186. custom:[[[-]]]
7 D. Alikaniotis, H. Yannakoudakis, M. Rei, "Automatic text scoring using neural networks," in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany, 2016;pp. 715-725. doi:[[[10.18653/v1/p16-1068]]]
8 J. E. Kim, K. Park, J. M. Chae, H. J. Jang, B. W. Kim, S. Y. Jung, "Automatic scoring system for short descriptive answer written in Korean using lexico-semantic pattern," Soft Computing, vol. 22, no. 13, pp. 4241-4249, 2018.doi:[[[10.1007/s00500-017-2772-7]]]
9 National Institute of the Korean Language, Application Research of Korean Language Curriculum, Korea: National Institute of the Korean Language, Seoul, 2017.custom:[[[-]]]
10 J. H. Lee, "A study on error determination standard and classification in Korean education," Journal of Korean Language Education, vol. 13, no. 1, pp. 175-197, 2002.custom:[[[-]]]
11 Y. Oh, 2020 (Online). Available: https://github.com/yeontaek/BERT-Korean-Model
12 H. Lee, J. Yoon, B. Hwang, S. Joe, S. Min, Y. Gwon, "KoreALBERT: pretraining a Lite BERT model for Korean language understanding," in Proceedings of 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 2021;pp. 5551-5557. doi:[[[10.1109/icpr48806.2021.9412023]]]
13 Test of Proficiency in Korean (TOPIK) (Online). Available: https://www.topik.go.kr/HMENU0/HMENU00018.do
Task 1 (my hobby) 5 47 76 62 82 32 30 15 8 12 2 371
Task 2 (memorable travel) 8 48 44 55 35 12 6 4 2 0 1 215
my hobby
memorable travel
Samples 371 215
Mean of length 503 618
Median 523 630
SD 113 112
Minimum 57 137
Maximum 736 833
Skewness -1.03 -1.21
Shapiro-Wilk W 0.946 0.93
Shapiro-Wilk p <0.001 <0.001
Mean of length 64.3 72.5
Median 70 70
SD 19.9 16.7
Minimum 0 0
Skewness -0.747 -0.967
Shapiro-Wilk W 0.935 0.916
Length of answer
my hobby Length of answer - - - -
Score 0.477 (<0.001) - - -
memorable travel Length of answer -0.029 (0.667) 0.034 (0.623) - -
Score -0.071 (0.302) 0.011 (0.868) 0.495 (<0.001) -
Values are presented as Pearson's R (p-value).
Subjects and contents Foreign students 586 participants
Topic 1 (my hobby) 371 writing samples
Topic 2 (memorable travel) 215 writing samples
H/W GPU Colab Tesla T4
OS Window 10
S/W Language Python 3.6
Framework TensorFlow 2.2
optimizer AdamW
max_length 384
batch_size 4
ir 0.00003
epoch 40
epoch 1/40
epoch 20/40
Average training loss 0.07 0.00 0.00
Training epoch took 0:00:22 0:00:15 0:02:16
Accuracy (%) 84.49 93.64 95.80
Validation took 0:00:01 0:00:00 0:00:04
Automatic differentiation
Created: Jan 8, 2021 | Last modified: Jan 8, 2021
Automatic differentiation is a technique for computing the gradient of a function specified by a computer program.
It takes advantage of the fact that any function implemented in a computer program can be decomposed into primitive operations (or else how would the function be implemented in the first place?), which are themselves easy to differentiate and whose derivatives can then be combined to get the derivative of the original function.
For example, suppose our primitive operations are multiplying by a constant, adding a constant and exponentiating by a constant. And the function we want to differentiate is $f(x) = 2 x^3 + 7$. We can write $f(x)$ as a composition of primitive operations. Define $f_1(x) = x + 7$, $f_2(x) = 2x$, $f_3(x) = x^3$. Then $f(x) = f_1(f_2(f_3(x)))$.
We can apply the chain rule to get the derivative. Define $g(x) = f_2(f_3(x))$. The chain rule states that $\frac{\partial}{\partial x} [ f_1(g(x)) ] = f_1'(g(x)) g'(x)$. By the same token, $g'(x) = f_2'(f_3(x)) f_3'(x)$. Putting that together, we have $f'(x) = f_1'(f_2(f_3(x))) f_2'(f_3(x)) f_3'(x)$, which is the derivative of the original function written in terms of the derivatives of the primitive operations.
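Concretely, $f_1'(x) = 1$, $f_2'(x) = 2$ and $f_3'(x) = 3x^2$, so the product collapses to $f'(x) = 1 \cdot 2 \cdot 3x^2 = 6x^2$, matching what we get by differentiating $2x^3 + 7$ directly.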
How do we do this more generally? First, we need a way to represent a function. Supposing that we can decompose a function into primitive operations, then we can represent a function as a computational graph where each node in the graph (a directed acyclic graph) is either a primitive operation or a variable. For example, the computational graph for $f(x, y, z) = (x + y) \cdot z$ looks like:
      *
     / \
    +   z
   / \
  x   y
An automatic differentiator takes the root of a computational graph as input and values for the variable nodes and returns the gradient of the function evaluated at those input values.
How do we compute the gradient using the graph? Even though we don't yet know how to calculate the partial derivative of the root node with respect to one of its grandchildren ($x$ or $y$), we first notice that it's easy to calculate the partial derivative of a node with respect to one of its children, because each parent node is a primitive operation and we know how to calculate derivatives for primitive operations by applying one of a few formulas. For example, relabel the computational graph above with $a = x + y$ and $b = az$:
      b
     / \
    a   z
   / \
  x   y
The partial derivative of the $b$ node with respect to its child node $a$ is just $\frac{\partial}{\partial a} b = \frac{\partial}{\partial a} (a \cdot z) = a \frac{\partial z}{\partial a} + z \frac{\partial a}{\partial a} = z$ according to the product rule of calculus.
We can label each edge with the partial derivative of the parent/destination node with respect to its child/source node. Now, how do we use that information to get the partial derivatives of the root with respect to each of the leaf/variable nodes in the graph? For a given leaf node, it turns out that the sum over all the paths from that leaf to the root of the product of the edges for each path gives you the partial derivative of the root node with respect to the leaf node.
That procedure is just a visual way of describing the multivariate chain rule. Suppose we have a function $f(u_1, u_2, \cdots, u_n)$ where the input variables depend on some other variable $x$ ($f$ should be thought of as the root node in the graph, $u_i$ as intermediate nodes and $x$ as a leaf node), then $\frac{\partial f}{\partial x} = \sum_{i=1}^n \frac{\partial f}{\partial u_i} \frac{\partial u_i}{\partial x}$.
Forward and reverse mode accumulation: The way that we traverse the graph to compute these partial derivatives can dramatically alter the efficiency of the computation. Consider the following graph:
        y
        |
       u_k
        |
       ...
        |
       u_1
     / | | \
   x1  ...  xp
If we want to calculate the gradient of $y$ and we start from the leaf nodes and move up, then we first calculate $\frac{\partial y}{\partial x_1} = \frac{\partial u_1}{\partial x_1} \frac{\partial u_2}{\partial u_1} \dots \frac{\partial u_k}{\partial u_{k-1}} \frac{\partial y}{\partial u_k}$. We then sweep through the graph again to calculate $\frac{\partial y}{\partial x_2} = \frac{\partial u_1}{\partial x_2} \frac{\partial u_2}{\partial u_1} \dots \frac{\partial u_k}{\partial u_{k-1}} \frac{\partial y}{\partial u_k}$. And again for $\frac{\partial y}{\partial x_3}$ and so on. Each time repeating the computation $\frac{\partial u_2}{\partial u_1} \dots \frac{\partial u_k}{\partial u_{k-1}} \frac{\partial y}{\partial u_k}$. This is forward mode accumulation and it requires $k \cdot p$ multiplies to get the gradient.
If we instead start from the top of the graph and move downwards caching the results as we go, then we first calculate $\frac{\partial y}{\partial u_k}$, then $\frac{\partial y}{\partial u_{k-1}}$ as $\frac{\partial y}{\partial u_k} \cdot \frac{\partial u_k}{\partial u_{k-1}}$ using the cached value for the first term and so on down the graph. This is reverse mode accumulation and it only requires $k + p$ multiplies to get the gradient. In the case of gradient descent on the cost function of a neural network with millions of parameters, automatic differentiation with reverse mode accumulation (also called backpropagation) makes optimization feasible.
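To make the traversal concrete, here is a from-scratch sketch of reverse mode over the small $(x + y) \cdot z$ graph above; the class and function names are my own, and the simple stack-based sweep below is only valid for tree-shaped graphs (a general DAG needs a topological ordering).

    class Node:
        def __init__(self, value, children=(), local_grads=()):
            self.value = value
            self.children = children        # input nodes
            self.local_grads = local_grads  # edge labels: d(this)/d(child_i)
            self.grad = 0.0

    def add(a, b):
        return Node(a.value + b.value, (a, b), (1.0, 1.0))

    def mul(a, b):
        return Node(a.value * b.value, (a, b), (b.value, a.value))

    def backward(root):
        # Sweep from the root downwards, multiplying edge labels along each
        # path and summing the products that arrive at each leaf.
        root.grad = 1.0
        stack = [root]
        while stack:
            node = stack.pop()
            for child, g in zip(node.children, node.local_grads):
                child.grad += node.grad * g
                stack.append(child)

    x, y, z = Node(2.0), Node(3.0), Node(4.0)
    b = mul(add(x, y), z)  # b = (x + y) * z
    backward(b)
    print(x.grad, y.grad, z.grad)  # 4.0 4.0 5.0, since db/dx = z and db/dz = x + y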
https://colah.github.io/posts/2015-08-Backprop/
https://www.offconvex.org/2016/12/20/backprop/
https://en.wikipedia.org/wiki/Automatic_differentiation
Genomic insights into plant growth promoting rhizobia capable of enhancing soybean germination under drought stress
Nicholas O. Igiehon1,
Olubukola O. Babalola ORCID: orcid.org/0000-0003-4344-19091 &
Bukola R. Aremu1
The role of soil microorganisms in plant growth, nutrient utilization, drought tolerance as well as biocontrol activity cannot be over-emphasized, especially in this era when food crisis is a global challenge. This research was therefore designed to gain genomic insights into plant growth promoting (PGP) Rhizobium species capable of enhancing soybean (Glycine max L.) seeds germination under drought condition.
Rhizobium sp. strain R1, Rhizobium tropici strain R2, Rhizobium cellulosilyticum strain R3, Rhizobium taibaishanense strain R4 and Ensifer meliloti strain R5 were found to possess the entire PGP traits tested. Specifically, these rhizobial strains were able to solubilize phosphate, produce exopolysaccharide (EPS), 1-aminocyclopropane-1-carboxylate (ACC), siderophore and indole-acetic-acid (IAA). These strains also survived and grew at a temperature of 45 °C and in an acidic condition with a pH 4. Consequently, all the Rhizobium strains enhanced the germination of soybean seeds (PAN 1532 R) under drought condition imposed by 4% poly-ethylene glycol (PEG); nevertheless, Rhizobium sp. strain R1 and R. cellulosilyticum strain R3 inoculations were able to improve seeds germination more than R2, R4 and R5 strains. Thus, genomic insights into Rhizobium sp. strain R1 and R. cellulosilyticum strain R3 revealed the presence of some genes with their respective proteins involved in symbiotic establishment, nitrogen fixation, drought tolerance and plant growth promotion. In particular, exoX, htrA, Nif, nodA, eptA, IAA and siderophore-producing genes were found in the two rhizobial strains.
Therefore, the availability of the whole genome sequences of R1 and R3 strains may further be exploited to comprehend the interaction of drought tolerant rhizobia with soybean and other legumes and the PGP ability of these rhizobial strains can also be harnessed for biotechnological application in the field especially in semiarid and arid regions of the globe.
The symbiotic interaction between leguminous plants and nitrogen (N) fixing bacteria, generally called rhizobia, has been the focus of research for over 12 decades. Recently, 'a renewed interest' in this area of research has been noticed due to its importance in sustainable agriculture, minimizing cost for the agriculturalists, enhancing soil fertility, alleviation of greenhouse-gas emissions [1] and improving plant's tolerance to drought stress [2].
In addition, the role of soil microorganisms in plant growth, nutrient utilization, drought tolerance as well as biocontrol activity is well known and these beneficial microorganisms inhabit the plant rhizosphere. In the rhizosphere, these microorganisms promote plant growth via 'direct and indirect mechanisms' [3]. Additionally, the role of these beneficial microorganisms in biotic and abiotic stresses is gaining relevance and the mechanisms by which they enhance plant tolerance to drought include: Production of ACC deaminase to minimize the quantity of ethylene produced in the roots, microbial exopolysaccharide (EPS), induced systemic resistance and phytohormones production such as indole-3-acetic acid (IAA) [4,5,6,7].
Indeed, plant growth can be regulated by ethylene (C2H4) contents and the biosynthesis of this compound is regulated by biotic and abiotic stressors [8]. In the synthetic pathway of C2H4 in plants, S-adenosyl methionine (S-AdoMet) is transformed to the immediate precursor of C2H4 1-aminocyclopropane-1-carboxylate (ACC) by aminocyclopropane-1-carboxylate synthase (ACS). Under drought stress conditions, plant homeostasis is regulated by C2H4, leading to decrease in shoot and root growth and even seed germination. Plant ACC is confiscated and disintegrated by ACC deaminase-producing rhizobia to release and supply energy and nitrogen. Thus, the disintegration and consequential removal of ACC by rhizobia alleviate the effects of C2H4, thereby minimizing plant stress and enhancing plant growth [9]. Therefore single and dual inoculation of plants with ACC-producing rhizobia can result in improved seed germination even under drought stress conditions. In particular, dual inoculation of ACC deaminase producing Pseudomonas and Bacillus with Mesorhizobium ciceri enhanced seed germination, shoot height, root length and seedling fresh weight of chickpea grown under stressed condition when compared to non-inoculated plants [10].
In addition, drought stress affects water availability to plants, and water availability regulates the production and utilization of polysaccharides by rhizobia [11]. One example of such polysaccharides is exopolysaccharide (EPS); EPS production protects rhizobia from harsh conditions and enhances their survival under them. Amendment of wheat with EPS- and catalase-producing Rhizobium leguminosarum (LR-30), Rhizobium phaseoli (MR-2) and Mesorhizobium ciceri (CR-30 and CR-39) benefited the plant by improving its growth, drought tolerance index and biomass under drought conditions imposed with polyethylene glycol (PEG) 6000 as the drought factor. Thus, there is a further need to examine the effects of new Rhizobium strains on growth parameters (such as percentage seed germination) of other agricultural crops such as soybean (G. max L.) under drought conditions simulated by PEG.
Again, it has been reported that soil bacteria offer benefits to their host plants by suppressing plant pathogens and facilitating nutrient assimilation [4, 12, 13]. In our previous study [2], it was reported that some rhizobacteria mop up the insoluble form of iron from the soil environment and make it available to plants 'with the aid of siderophore' [14] and there is an evidence that some plants can use bacterial iron (III)-siderophore complexes for their growth [15] even though the phytorelevance of these complexes is controversial. On the other hand, the removal of iron from the soil by siderophore-producing rhizobia reduces the bioavailability of iron in the root region and consequentially suppresses the growth of fungal pathogens [16, 17].
Similarly, just like siderophore-producing bacteria, some rhizobia contribute to plant growth by helping to mineralize insoluble phosphate compounds to release phosphorus needed for plant growth [18]. Phosphorus in di-calcium phosphate, hydroxyapatite, rock phosphate and tri-calcium phosphate in soil can be released by phosphate solubilizing bacteria such as Rhizobium, Bacillus, Burkholderia and Agrobacterium while other rhizospheric rhizobia have the ability to produce indole-acetic acid (IAA) which helps in root elongation and production of lateral roots and root hairs involved in nutrient absorption [19]. Elongation and increase in the number of root produced by plants as a result of IAA production can serve as a survival strategy to plants under drought stress condition and may even contribute in some other ways to plant development. It was reported by [20] that the increased production of IAA by Bradyrhizobium japonicum shows that, in addition to plant promotion, the bacterium could have other beneficial traits needed for plants (such as soybean) survival. In short, considering these benefits, the interaction between plants, especially legumes, and rhizobia is key to plant productivity.
Actually, rhizobia-legume symbiotic relationship commences with a molecular dialogue between the partners. The legumes produce flavonoids [21] that elicit the production of Nod factors (lipochitin oligosaccharides), that in turn, stimulate the development of root nodule [22]. Rhizobial species enter and colonize the root nodules where they metamorphose to bacteriods that fix atmospheric N [23]. Admittedly, other bacterial systems are involved in root colonization, efficient nodulation and N-fixation, 'including surface polysaccharide and secretion systems' [1, 23, 24]. These processes in addition to PGP and drought tolerance ability of rhizobia are regulated by myriads of genetic components which can further be exploited to gain insights into legume –rhizobial interactions.
Therefore this study was designed to gain genomic insights into selected PGP rhizobia capable of promoting soybean seed germination under drought stress condition.
Source of rhizobial species used in this study
The rhizobial species used in this study were isolated from Bambara groundnut rhizospheric soil at North-West University campus (25.82080S: 025.61382E), Ngaka Modiri Molema District, Mahikeng, North-West Province, South Africa (Fig. 1), and physicochemical analysis showed that the soil has the following properties: 7.65 pH, 1.62 mg/kg Fe, 24.1 mg/kg Mn, 1.06% organic carbon, 4.01% organic matter, 285 mg/kg K, 397 mg/kg Mg and 0.066% total N. The rhizobial species were sequenced by the Sanger sequencing technique and identified in our previous study (National Centre for Biotechnology Information - NCBI - database) as Rhizobium sp. strain R1 (accession no. MG309875), Rhizobium tropici strain R2 (accession no. MG851722), Rhizobium cellulosilyticum strain R3 (accession no. MG309874), Rhizobium taibaishanense strain R4 (accession no. MG851723) and Ensifer meliloti strain R5 (accession no. MG851724).
Geographical location of Bambara groundnut rhizospheric soil used for rhizobial species isolation. To the left, the upper sketch represents a map of South Africa showing North-West Province (red sketch) and below is a map of North-West Province accommodating a map of Mahikeng (the light-yellow region) which encompasses Ngaka Modiri Molema district (the green spot) the site of North-West University where Bambara groundnut rhizospheric soil was collected for bacterial isolation. To the right, is a sketch showing Bambara groundnut rhizospheric soil sample collection site
ACC deaminase quantification
Rhizobial strains were grown in 5 ml Luria Bertani (LB) at ambient temperature. Then ACC deaminase activity was determined according to method described by [25].
EPS test
First, rhizobial strains were qualitatively screened for exopolysaccharide production according to the method described by [26] with little modifications. Briefly, sterile Whatman filter paper discs (6 mm in diameter) were aseptically placed in Petri dishes containing nutrient agar and 2 μl of freshly grown cultures of each rhizobial species was directly inoculated on the surfaces of the discs in the plates. The nutrient agar used in this study was amended with 10% sucrose adjusted to pH of 5.5 and 7.5. Upon inoculation, plates were incubated at 28 ± 2 °C, 37 °C and 45 °C for 7 days, 2 days and 1 day at the respective temperatures. Then, EPS production was evaluated on the basis of formation of mucoid colonies around the discs.
Alternatively, the quantity of EPS produced was determined according to the method described by [27] with slight modifications. In summary, the four isolates were grown in nutrient broth amended with 5 and 10% PEG 800 to induce drought stress, as well as in nutrient broth lacking PEG 800 (0% PEG). Cultures were incubated in a rotary incubator at room temperature for 4 days and thereafter centrifuged to obtain the supernatant. Three milliliters (3 ml) of cold absolute alcohol was mixed with 5 ml of each rhizobial supernatant and incubated for 12 h at 4 °C. EPS was then obtained by centrifuging the cold alcohol-supernatant mixture at 10000 rpm for 15 min, and the resultant supernatants were discarded. The optical density of the EPS that settled at the bottom of the tubes was determined using a spectrophotometer (ThermoSpectronic, Merck) at 490 nm.
Quantitative determination of siderophore produced by rhizobial species
Siderophore production by the rhizobial species was quantified according to the method described by [28] with slight modification. Briefly, freshly grown rhizobial species were inoculated into King B broth (10 g/l glycerine, 20 g/l peptone, 1.5 g/l MgSO4) and iron-free succinic acid broth (6 g K2HPO4, 3 g KH2PO4, 1 g (NH4)2SO4, 0.2 g MgSO4.7H2O and 4 g succinic acid) in tubes, while controls were amended with chrome azurol S (CAS) solution, and the tubes were incubated at ambient temperature in a shaker incubator at 120 rpm. Rhizobial broths were centrifuged at 10000 rpm for 10 min, and the quantity of siderophore produced was assessed by measuring the optical density of the supernatant at 400 nm.
Indole-acetic-acid (IAA) test
Quantitative measurement of the IAA produced by the rhizobial species was carried out according to the method described by [18] with slight modification. In summary, each rhizobial strain was inoculated into 0.2 L LB broth and incubated in a rotary shaker at ambient temperature for 96 h. One milliliter (1 ml) of the rhizobial broth was centrifuged at 3000 rpm for 30 min, and thereafter 2 ml of the supernatant was mixed with 2 drops of orthophosphoric acid and 4 ml Salkowski reagent. The optical density of the pink broth was read at 530 nm using a spectrophotometer (ThermoSpectronic, Merck), and the actual concentration of IAA produced by the rhizobial species was estimated from a standard IAA curve in the range of 0–120 μg/ml.
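The standard-curve step at the end could be scripted as below; a minimal sketch assuming NumPy, with made-up calibration readings standing in for the study's measured values.

    import numpy as np

    std_conc = np.array([0, 20, 40, 60, 80, 100, 120])             # IAA standards (ug/ml)
    std_od = np.array([0.00, 0.15, 0.31, 0.45, 0.62, 0.77, 0.93])  # placeholder OD530 readings

    # Fit concentration as a linear function of optical density.
    slope, intercept = np.polyfit(std_od, std_conc, 1)

    sample_od = 0.50  # OD530 of a rhizobial supernatant (placeholder)
    print("Estimated IAA: %.1f ug/ml" % (slope * sample_od + intercept))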
Phosphate solubilization test
The phosphate solubilization test was performed as described by [29] with slight modification. Pikovskaya's agar with the following composition per litre was prepared: tricalcium phosphate (5 g), potassium chloride (0.2 g), magnesium sulphate (0.1 g), manganese sulphate (0.0001 g), yeast extract (0.5 g), glucose (10 g), agar (15 g), ammonium sulphate (0.5 g) and ferrous sulphate (0.001 g). The medium was autoclaved at 121 °C for 15 min after its pH was adjusted to 7.0 using a pH meter. The autoclaved medium was poured into Petri dishes and allowed to solidify. Wells of 8 mm in diameter were made in the medium and inoculated with 25 μl of broth culture of each isolate, with three (3) wells per isolate. Plates were incubated at 27 °C for 4 days, and a clear zone around the wells indicated a positive result. Halo-zone diameters were obtained by subtracting the diameter of the wells from the diameter of the clear zones.
Rhizobial growth response to different temperatures
LB broth was prepared according to the manufacturer's guidelines and autoclaved. Five microliters of each rhizobial strain was inoculated into 25 ml of LB broth and gently vortexed. Each rhizobial treatment was replicated 3 times for each temperature. Inoculated broths were incubated at 28, 37 and 45 °C. The optical density (O.D) of the rhizobial cultures was measured using a spectrophotometer at 630 nm on days 4, 8, 12, 16 and 20.
Rhizobial growth response to different pH
LB broth was prepared according to the manufacturer's guidelines, adjusted to acidic (4), neutral (7) and alkaline (10) pH, and autoclaved. Five microliters of each rhizobial strain was inoculated into 25 ml of LB broth and gently vortexed. Each rhizobial treatment was replicated 3 times. Inoculated broths were incubated at 28 °C. The OD of the rhizobial cultures was measured using a spectrophotometer at 630 nm on days 5, 10, 15 and 20.
Bacterial growth and preparation
Three (3) of the rhizobial strains were selected for soybean inoculation. Rhizobial spp. were harvested as described by [30] with slight modification. Freshly grown cultures of the rhizobial spp. were centrifuged at 5000 rpm for 300 s, and the pellets were washed in 0.85% (w/v) normal saline solution and thereafter homogenized in saline solution prior to inoculation.
Seed germination test
The colony counts of the rhizobial spp. were 20 × 10^5 CFU (colony forming units) ml−1 (for R1 strain), 11 × 10^5 CFU ml−1 (for R3 strain) and 21 × 10^5 CFU ml−1 (for R5 strain). Rhizobial suspension (0.5 ml) of each strain was pipetted into Petri dishes containing Whatman filter paper, while 0.5 ml of sterile distilled water was transferred to the non-inoculated (control) plates. Soybean seeds (PAN 1532 R) obtained from the Agricultural Research Council, South Africa were surface sterilized in 75% alcohol and 1% sodium hypochlorite for 600 s and rinsed in sterile distilled water. Then 30 seeds were placed in the Petri dishes containing the inoculated filter papers and 4% PEG, and the plates were gently swirled. Each treatment was done in triplicate. The plates were sealed with Parafilm and incubated for 8 days in a growth chamber (GC-300TL, JEIO TECH, Korea) adjusted to 23/16 °C day/night temperatures with an 8/16 h night/day photoperiod at 10,000 lux. The number of germinated seeds was counted afterwards, and the percentage seed germination rate was estimated using the following formula:
$$ \text{Percentage seed germination}\ (\%) = \frac{n}{N} \times 100 $$
where n is the number of germinated seeds after 8 days and N is the total number of seeds.
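As a quick illustration, the formula above can be computed directly; the seed counts below are invented for the example and are not the study's data.

```python
# Minimal sketch of the germination-rate formula above.
# The counts used here are illustrative, not the study's data.
def germination_rate(germinated: int, total: int) -> float:
    """Percentage seed germination = (n / N) x 100."""
    if total <= 0:
        raise ValueError("total seed count must be positive")
    return germinated / total * 100

# Example: 29 of 30 seeds germinated after 8 days.
print(f"{germination_rate(29, 30):.1f}%")  # -> 96.7%
```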
Deoxyribonucleic acid (DNA) extraction for whole genome sequencing
Fresh cultures of Rhizobium sp. strain R1 and R. cellulosilyticum strain R3 were obtained by taking inocula from 50% glycerol stocks and streaking onto freshly prepared nutrient agar. Plates were incubated at 28 °C for 4 days, and thereafter bacterial DNA was extracted from the fresh isolates using a Zymo DNA extraction kit following the manufacturer's instructions. The purity and concentration were determined by both 1% agarose gel electrophoresis and a NanoDrop spectrophotometer. The DNA extracts were stored at − 20 °C until use. Afterwards, 40 μl of the DNA extract of each bacterium was sent in an ice pack to Molecular Research Laboratory (Mr. DNA), Texas, USA for HiSeq system (Illumina) sequencing.
Sequencing, quality check, trimming and assembly
Following the manufacturer's instructions, DNA libraries were made from 25 to 50 ng of extracted DNA using KAPA HyperPlus kits (Roche). Upon library preparation, DNA concentration was determined using the Qubit® dsDNA HS Assay Kit (Life Technologies) and average library size was evaluated using an Agilent 2100 Bioanalyzer (Agilent Technologies). 'The workflow combines enzymatic steps and employs minimal bead-based cleanups'. DNA samples were enzymatically degraded into dsDNA fragments, and thereafter end repair and A-tailing were performed to obtain 'end-repaired, 5'-phosphorylated, 3'-dA-tailed ds DNA fragments.' Adapter ligation was performed by ligating dsDNA adapters with 3′-dTMP overhangs to the 3′-dA-tailed DNA molecules, and thereafter DNA library amplification was performed using high-fidelity, low-bias polymerase chain reaction (PCR). The DNA libraries were then pooled, diluted to 10.5 pM and 'sequenced paired end for 500 cycles using the HiSeq system (Illumina)'.
Illumina data were extracted and uploaded into Kbase; read quality was assessed using FastQC (v1.0.4), and low-quality sequences and adapters were trimmed off using Trimmomatic [31]. Illumina sequence reads were then de novo assembled using both SPAdes and ARAST to create contigs.
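For readers who prefer a command-line view of this QC-trim-assemble workflow, a rough Python sketch is given below. The file names, adapter file and trimming parameters are assumptions chosen for illustration; the study itself ran these tools through the Kbase platform rather than a local script.

```python
# Rough local sketch of the read QC, trimming and assembly steps
# described above (the study ran these tools via Kbase). File names,
# the adapter file and all parameter values are illustrative
# assumptions, not the authors' actual settings.
import os
import subprocess

reads_1, reads_2 = "R1_reads_1.fastq.gz", "R1_reads_2.fastq.gz"
os.makedirs("qc", exist_ok=True)

# 1. Read-quality report for both read files.
subprocess.run(["fastqc", reads_1, reads_2, "-o", "qc"], check=True)

# 2. Adapter and quality trimming in paired-end mode.
subprocess.run([
    "trimmomatic", "PE", reads_1, reads_2,
    "trim_1P.fq.gz", "trim_1U.fq.gz", "trim_2P.fq.gz", "trim_2U.fq.gz",
    "ILLUMINACLIP:adapters.fa:2:30:10",   # clip adapter sequences
    "SLIDINGWINDOW:4:15", "MINLEN:36",    # quality and length filters
], check=True)

# 3. De novo assembly of the surviving paired reads into contigs.
subprocess.run([
    "spades.py", "-1", "trim_1P.fq.gz", "-2", "trim_2P.fq.gz",
    "-o", "assembly",
], check=True)
```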
The genomes of R1 and R3 strains were annotated using the Kbase Prokka (v1.12) annotation pipeline and the rapid annotation using subsystem technology (RAST) server [32]. The aforementioned systems permit the identification of genes and functional annotations as well as 'manual curation of gene annotations'. They also provide platforms for metabolic reconstruction with the aid of the Kyoto encyclopedia of genes and genomes (KEGG), for sequence comparison using the Basic Local Alignment Search Tool (BLAST), and for functional comparisons using KEGG and/or FIGfam. The data for R1 and R3 were both assigned Bioproject number PRJNA496421, while R1 and R3 were assigned Biosample numbers SAMN10240937 and SAMN10245972, respectively, upon submission to the GenBank database. In addition, R1 has SRA Accession number SRR8060784 and R3 has SRA Accession number SRR8061690.
Data obtained from the plant growth promoting and seed germination tests were analyzed using Microsoft Excel and Statistical Analysis System (SAS) platforms. Analysis of variance (ANOVA) was performed on the data, followed by Duncan's test to determine differences between means, and P < 0.05 was considered significant [33, 34]. With respect to the sequenced data, the mean read length and standard deviation of read length were computed using the Kbase pipeline.
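The ANOVA step can be reproduced in a few lines; the sketch below uses SciPy with invented replicate readings (Duncan's multiple range test has no SciPy implementation, so only the ANOVA stage is shown).

```python
# One-way ANOVA analogous to the SAS analysis described above.
# The three replicate O.D. readings per strain are invented values;
# Duncan's test is not available in SciPy and is omitted here.
from scipy.stats import f_oneway

r1 = [0.71, 0.68, 0.73]   # hypothetical replicate readings, strain R1
r2 = [0.55, 0.58, 0.52]   # strain R2
r3 = [0.61, 0.64, 0.60]   # strain R3

f_stat, p_value = f_oneway(r1, r2, r3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one treatment mean differs significantly (p < 0.05)")
```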
Plant growth promoting traits of rhizobial species
In the present study, the plant growth promoting traits of the rhizobial species were determined.
Aminocyclopropane-1-carboxylate (ACC) production by rhizobial species
With regard to ACC production, R5 strain produced the highest concentration of ACC, followed by R1 strain, while R2 strain produced the lowest concentration of ACC (Fig. 2a) under the stress condition imposed by PEG. These rhizobial strains were further screened for other plant growth promoting traits such as EPS and siderophore production, IAA and phosphate solubilization.
The concentration of a ACC (produced under drought stress induced by − 0.30 MPa PEG), b EPS, c siderophore, d IAA and e diameter of clear (halo) zones produced by rhizobial species. R1 - Rhizobium sp. strain R1, R2 - Rhizobium tropici strain R2, R3 - Rhizobium cellulosilyticum strain R3, R4 - Rhizobium taibaishanense strain R4 and R5 - Ensifer meliloti strain R5. ACC - 1-aminocyclopropane-1-carboxylate, EPS – exopolysaccharide, IAA - Indole-acetic-acid, O.D – optical density. Data represent mean ± SE
EPS production by rhizobial species
In this study, all rhizobial species produced EPS. In particular, the rhizobial species incubated at 37 °C produced EPS at pH 5.5 and 7.5 but R1 and R4 strains did not produce EPS when incubated at 45 °C while R2 and R3 strains produced EPS under all the environmental conditions considered (Table 1). To be specific, among the rhizobial treatments, R1 strain produced the highest concentrations (0.7 and 0.6 O.D respectively) of EPS at 0 and 10% PEG concentrations and R2 produced the highest EPS at 5% PEG concentration (Fig. 2b) followed by R1 and R3 strains.
Table 1 Qualitative response of bacteria towards exopolysaccharide (EPS) assay
Siderophore production by rhizobial species
The ability of rhizobial species to produce siderophore in different media (succinic acid broth and King B broth) showed that all the rhizobial species produced more siderophore in King B broth compared to succinic acid broth (Fig. 2c). R1 strain produced the highest concentration of 0.9 O.D in King B broth while R1 and R2 strains produced more siderophore than R4 and R5 strains in succinic acid broth (Fig. 2c). Conversely, the control treatments amended with CAS solution showed the lowest values for both media.
IAA production by rhizobial species
The ability of rhizobial species to produce IAA under different tryptophan concentrations revealed higher concentrations of IAA production by R1 and R3 strains. In particular, R3 strain produced the highest concentrations of IAA (22.19 and 23.155 μg/ml) at 0.5 and 1 mg/ml of tryptophan respectively, followed by R1 strain, but the lowest concentrations were produced by R5 strain (Fig. 2d).
Phosphate solubilization by rhizobial species
As regards phosphate solubilization, R2 strain comparatively showed a bigger halo-zone in Pikovskaya's agar with a mean diameter of 17.3 mm while R1 strain showed a mean diameter of 16.7 mm. The diameter of the halo-zone produced by R5 strain was lowest in this study (with a mean value of 10.7 mm). Nevertheless, R3 and R4 strains produced halo zones with the same diameter (13.7 mm) (Fig. 2e).
Rhizobial growth response under environments with different temperatures
Considering the response of rhizobial species to different environmental temperatures, we observed that R1 strain showed the highest growth at 28 °C, as depicted by O.D values, throughout the experimental period. At 45 °C, however, rhizobial growth responses fluctuated over the experimental period. As an illustration, R1 strain showed the highest O.D values of 0.4 and 0.3 on days 4 and 8 respectively, but R5 and R4 strains showed the highest growths of 0.564 and 0.7 O.D on days 12 and 16 respectively, while R2 strain had the highest O.D value of 0.98 on day 20. The same pattern of rhizobial growth was observed at 37 °C (Fig. 3a, b, c, d and e).
Bacterial growth response to different environmental temperatures on day a 4, b 8, c 12, d 16 and e 20. R1 - Rhizobium sp. strain R1, R2 - Rhizobium tropici strain R2, R3 - Rhizobium cellulosilyticum strain R3, R4 - Rhizobium taibaishanense strain R4 and R5 - Ensifer meliloti strain R5. Data represent mean ± SE
From the plate count method, R1 (193,333,333.3 CFU/ml) had the highest count, followed by R3 (73,333,333.3 CFU/ml), at 45 °C on day 4 (Fig. 4a). Similarly, R1 had the highest counts on days 8, 12 and 16, while R3 showed the highest growth on day 20 at 45 °C (Fig. 4b, c, d, and e). This further indicates that R1 and R3 are relatively more tolerant to heat.
Bacterial growth response to different environmental temperatures on day a 4, b 8, c 12, d 16 and e 20. R1 - Rhizobium sp. strain R1, R2 - Rhizobium tropici strain R2, R3 - Rhizobium cellulosilyticum strain R3, R4 - Rhizobium taibaishanense strain R4 and R5 - Ensifer meliloti strain R5. Data represent mean ± SE
Rhizobial growth response under environments with different pH
From the spectrophotometric method, R1 strain tended to show better growth at pH 4 at the onset of the rhizobial growth response to pH experiment (Fig. 5a), and its growth later decreased as the experiment progressed. However, R1 strain responded more positively at pH 7 throughout the experimental sampling period (Fig. 5a, b, c and d), whereas R3 grew better at pH 10 on day 5 (Fig. 5a) and R5 strain was more abundant (with a cell biomass of 0.79 O.D) on day 20 under this pH condition (Fig. 5d).
Rhizobial growth response to different environmental pH on day a 5, b 10, c 15 and d 20. R1 - Rhizobium sp. strain R1, R2 - Rhizobium tropici strain R2, R3 - Rhizobium cellulosilyticum strain R3, R4 - Rhizobium taibaishanense strain R4 and R5 - Ensifer meliloti strain R5. Data represent mean ± SE
On the other hand, from the plate count method, R1 (120,000,000 CFU/ml) had the highest count followed by R3 (100,000,000 CFU/ml) at pH 4 on day 5 (Fig. 6a). On the contrary, at pH 10, R3 (146,666,666.7 CFU/ml) had the highest count followed by R1 (93,333,333.33 CFU/ml) (Fig. 6a). Similarly, R3 had the highest counts on days 10 and 15 at both extreme pH values (4 and 10), while R4 and R1 showed the highest growth on day 20 at pH 4 and 10 respectively (Fig. 6b, c, d).
Soybean seed germination
The effects of R1, R2, R3, R4 and R5 inoculation on soybean seed germination under the drought stress condition imposed by 4% PEG revealed that R1 and R3 strains had a better effect on soybean germination, with a percentage seed germination of 97.3% each, when compared to R2, R4 and R5 strains with percentage seed germinations of 94.4, 93.3 and 93.3% respectively. The non-inoculated (control) experiment showed the lowest percentage seed germination of 90% (Fig. 7). Thus, whole genome sequencing was performed for R1 and R3 strains in order to gain genomic insights into some of the functional genes that may be involved in drought tolerance, symbiotic establishment as well as plant survival and growth promotion.
Percentage of soybean seeds inoculated with rhizobial species that germinated in Petri dishes. R1 - Rhizobium sp. strain R1, R3 - Rhizobium cellulosilyticum strain R3 and R5 - Ensifer meliloti strain R5. Data represent mean ± SE
Genomic overview of R1 and R3 strains
Prior to Illumina sequencing, a DNA concentration of 35.6 ng/μL and a DNA library concentration of 81.60 ng/μL with an average library size of 647 bp were obtained for R1 strain, while R3 strain yielded a DNA concentration of 50.2 ng/μL, a final DNA library concentration of 90.40 ng/μL and an average library size of 661 bp (Table 2).
Table 2 DNA final library concentration and average library size
Upon de novo assembly, R1 strain was found to have 17,408,810 reads with a mean read length of 201.15 bp and a total of 5773 contigs. The number of genes predicted was 29,842, with a GC content of 61.91%. An N50 value of 936 was obtained for the scaffold. On the other hand, R3 strain had 17,794,094 reads with a mean read length of 214.18 bp and 129 contigs. The genome size of this strain was 4,114,542 bp with a guanine-cytosine (GC) content of 43.59%. An N50 value of 57,294 was obtained for the scaffold.
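For readers unfamiliar with the N50 statistic quoted above, it can be computed from the contig lengths alone; the sketch below uses toy lengths, not the actual assemblies.

```python
# N50: the contig length L such that contigs of length >= L together
# cover at least half of the total assembly. Toy lengths shown.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

print(n50([100, 400, 500, 700, 1200]))  # -> 700
```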
EPS producing genes
Whole genome sequencing revealed 78 exoX genes in R1 strain and 99 exoX genes in R3 strain; these genes are responsible for the production of exopolysaccharide in the bacterial species. Two (2) of the 78 exoX genes found in R1 strain code for signal transduction histidine-protein kinase BaeS and exodeoxyribonuclease III proteins, with the corresponding baeS_1, 2.7.13.3 and xthA 3.1.11.2 aliases (Table 3). The signal transduction histidine-protein kinase BaeS gene was located between contigs 474 and 762 (Fig. 8a), and the exodeoxyribonuclease III gene between contigs 47 and 389 (Fig. 8b). On the other hand, of the 99 exoX genes found in R3 strain, 2 encode signal transduction histidine-protein kinase ArlS and response regulator aspartate phosphatase J, with arlS 2.7.13.3 and rapJ_2 3.1 aliases respectively (Table 4). The signal transduction histidine-protein kinase ArlS gene was located between contigs 3,808–5,173 (Fig. 10a), while the response regulator aspartate phosphatase J gene was located between contigs 26,133–27,255 (Fig. 10b).
Table 3 Selected stress tolerance, symbiotic and plant growth promoting functional genes found in the genome of Rhizobium sp. strain R1
Feature context of a Signal transduction histidine-protein kinase BaeS b Exodeoxyribonuclease III c Extracellular serine protease d Microbial serine proteinase e Cysteine desulfurase SufS f Cysteine desulfurase IscS g putative MFS-type transporter YcaD and h Riboflavin transporter depicting gene names in the genome map of R1 strain. The blue bars represent the gene locations
Table 4 Selected stress tolerance, symbiotic and plant growth promoting functional genes found in the genome of R. cellulosilyticum strain R3
High-temperature stress response genes
Again, 5 htrA and 6 htrA genes were found in R1 and R3 strains respectively. HtrA genes are involved in tolerance to high temperature, and therefore the survival and growth of R1 and R3 strains observed at 45 °C (Figs. 3a, b, c, d, e & 4a, b, c, d, e) may be due to the high temperature tolerance proteins produced by these microorganisms (Tables 3 and 4).
Notably, 2 of the htrA genes found in R1 strain are responsible for the production of extracellular serine protease and microbial serine proteinase (Table 3), located between contigs 16–271 and 1,372–2,164 respectively (Fig. 8c, d). In the same way, R3 strain had 2 genes coding for serine protease Do-like HtrA (Table 4) but with different contig locations (Fig. 10c, d).
Nitrogen fixing genes
Nitrogen fixing (nif) genes are involved in the conversion of atmospheric N to a form that can be utilized by plants. Two (2) of the nif genes noticed in R1 strain are involved in the production of cysteine desulfurase SufS and cysteine desulfurase IscS, with the corresponding sufS 2.8.1.7 and iscS 2.8.1.7 aliases (Table 3). These genes were located between contigs 11–338 for the cysteine desulfurase SufS gene (Fig. 8e) and contigs 1,202–1,664 for the cysteine desulfurase IscS gene (Fig. 8f). Regarding R3 strain, 2 of its nif genes are involved in the production of cysteine desulfurase IscS and putative cysteine desulfurase NifS proteins (Table 4). These protein-producing genes, with different aliases, also had different contig locations (Fig. 10e, f).
Nodulation genes
Nodulation genes play a key role in nodule formation in plant roots, where Rhizobium species establish symbiosis with host plants. As for R1 strain, 23,297 nodA genes were found, and 2 of the genes encode putative MFS-type transporter YcaD and riboflavin transporter proteins (Table 3), with contig locations between 93–735 and 734–1,091 respectively (Fig. 8g, h). On the contrary, 2 of the 12,242 genes found in R3 strain code for Beta-N acetylglucosaminidase and Teichoic acid poly (ribitol-phosphate) polymerase, situated between contigs 4,582–7,225 and 7,268–9,128 respectively (Fig. 10g, h).
Siderophore-producing genes
At the same time, R1 strain was found to have 13 siderophore-producing genes, and 2 of these genes evidently produce Catecholate siderophore receptor Fiu and 2,3-dihydro-2,3-dihydroxybenzoate dehydrogenase. The Catecholate siderophore receptor Fiu gene had the fiu alias, while the 2,3-dihydro-2,3-dihydroxybenzoate dehydrogenase gene had the dhbA 1.3.1.28 alias (Table 3); they were situated between contigs 662–1,424 (Fig. 9a) and 59–572 (Fig. 9b) within the genome. With respect to R3 strain, it had 33 siderophore-producing genes; 2 of these genes and their respective contig locations within the genome are shown in Table 4 and Fig. 11a, b accordingly.
Feature context of a Catecholate siderophore Receptor Fiu b 2,3-dihydro-2,3-dihydroxybenzoate dehydrogenase c Isoaspartyl peptidase d Isoaspartyl peptidase e UDP-N-acetylmuramate--L-alanyl-gamma-D-glutamyl-meso-2,6-diaminoheptandioate ligase f Phosphoethanolamine transferase EptA depicting gene names in the genome map of R1 strain. The blue bars represent the gene locations
IAA producing genes
Unlike in R3 strain, 1 CDS and 1 IAA-producing gene were found in R1 strain, with the biological function of isoaspartyl peptidase (Table 3); both were located within contigs 536–704 (Fig. 9c, d). R3 strain, however, had 6 IAA-producing genes, and 2 of these genes produce inner membrane protein YiaA and tRNA dimethylallyltransferase proteins, with the corresponding yiaA and miaA 2.5.1.75 aliases. The locations of the respective genes were also different within the genome (Fig. 11c, d). In reality, proteins produced by these genes can be involved in root elongation and lateral root production in plants.
Low-pH stress response genes
As a matter of fact, R1 and R3 strains were also found to possess genes that are involved in tolerance to low pH environments and one of them reported in this study is collectively called eptA. Indeed, R1 strain had 12 eptA genes and 2 of the genes were found to have the biological functions shown in Table 3. R3 strain also had several of the eptA genes and 2 of the genes had heptaprenyl diphosphate synthase component 1 and septation ring formation regulator EzrA biological functions (Table 4). These genes were found to be located within different contigs locations in the genome of R1 (Fig. 9e, f) and R3 (Fig. 11e, f) strains.
In the present study, the plant growth promoting traits of rhizobial species were determined. To be specific, the ACC experiment revealed the production of ACC by the rhizobial species (Fig. 2a); the highest concentration of ACC was produced by R5 strain followed by R1 strain, while the lowest concentration of ACC was produced by R2 strain. The production of ACC by these microorganisms shows that they have the potential to increase plant tolerance to drought stress, since it was reported by [25] that application of 'ACC deaminase-producing microorganisms' into water-stressed soil environments can reduce stress in the local plants by minimizing stress triggered by C2H4. Indeed, a study carried out by [35] showed that ACC deaminase-producing Pseudomonas species partially eliminated the detrimental effects of water stress on pea (Pisum sativum L.) growth and/or productivity.
In a study performed by [27], bacterial tolerance to water stress conditions was characterized by EPS production, and therefore in this study rhizobial tolerance to drought stress was determined by their ability to produce EPS in medium amended with different concentrations of PEG (a drought stress simulant). Based on the qualitative results, we found that environmental stresses such as pH and temperature stimulated the production of EPS. In particular, R2, R3 and R4 strains produced EPS under the different pH and temperature conditions (Table 1). Quantitatively, our findings showed that R1 strain was more effective in EPS production under severe drought conditions (10% PEG), while R2 strain produced more EPS at 5% PEG (Fig. 2b). Other rhizobial species produced EPS at different concentrations in this study. Indeed, EPS production by bacteria protects them from water stress, heavy metals and other environmental stresses [27, 36]; therefore, it is possible for these rhizobial species to survive, multiply and harness other plant growth promoting traits when applied under drought conditions - as evident in the soybean germination experiment (Fig. 7) - even in a complex soil environment in the field.
Indeed, siderophore production by different microorganisms has been reported by many researchers [37, 38]. In the present study, the 'maximum siderophore production' was found in King B broth. These quantitative results for siderophore production further validate the qualitative plate test for siderophore production for these rhizobial species in our previous study (data not shown). However, the results of this study contradict the findings of [28], who reported maximum siderophore production by Pseudomonas species in succinic acid broth.
The rhizobial species used in this study showed other plant growth promoting traits. In particular, all the rhizobial species produced IAA but at different concentrations. This is in agreement with the report that IAA production can differ among different bacterial species, which can be influenced by culture condition, nutrient availability and growth stage [19]. In addition, bacteria from plant rhizosphere are more effective producers of IAA than those from bulk soil [39]. In this study, we observed that R3 strain was more efficient in producing IAA at both concentrations of tryptophan (Fig. 2d) and this could be the reason for the high root biomass observed in soybean treated with this species in our previous study (data not shown), since IAA production has been implicated in root elongation and development of lateral roots [19].
Another strategic mechanism that can be used by rhizospheric bacteria to support the growth of agricultural crops lies in their capacity to solubilize phosphate, and it has been stated that phosphates always occur in bound forms in the soil [27]. Indeed, we found that R2 strain was more effective in solubilizing tri-calcium phosphate in Pikovskaya's agar followed by R1 strain, but R3 and R4 strains showed similar phosphate solubilizing potential while R5 strain demonstrated the least ability to degrade phosphate as shown in Fig. 2e. Thus, these rhizobial species have the tendency to solubilize bound phosphates and make them available for plant uptake in the soil.
Additionally, rhizobial growth responses to different environmental temperatures showed that these species possess the capacity to thrive and survive at a relatively high temperature (45 °C). It has been shown that the optimal growth temperature for many rhizobial species is 25–30 °C [40], which agrees with our findings, since in this study the rhizobial species grew best at 28 °C. The ability of these species to grow and survive at 37 and 45 °C indicates that they may be able to help agricultural crops such as soybean to survive in most tropical countries currently facing drought and/or high temperature problems, and it further contradicts the report of [41, 42] that soybean rhizobial species grow poorly at 40 °C and that none of the species is 'able to grow' at 42 °C. On the contrary, some rhizobial species capable of nodulating common bean (Phaseolus vulgaris) can survive at 47 °C, although they do not have the ability to form nodules at such high temperatures [43]. Other rhizobial species from Phaseolus vulgaris are able to survive and remain infective at 40 °C [41].
Additionally, the ability of the rhizobial species to grow and survive at different pH was examined in this study, since real-life biotechnological application of microbial inoculants under drought conditions in the field would demand that these species possess the capacity to adapt to the pH fluctuations inherent in a complex soil ecosystem. All the rhizobial species in this study were able to grow and survive in acidic (pH 4), neutral (pH 7) and alkaline (pH 10) environments (Fig. 5a, b, c, d & 6a, b, c, d), indicating that these species possibly have broad adaptability with respect to environmental pH. Such a trait can help these microorganisms to function actively (without interference) in their symbiotic interactions with crops, since it was reported by [44] that nodulation of faba bean treated with Rhizobium leguminosarum was inhibited significantly by soil alkalinity.
The use of R1, R2, R3, R4 and R5 strains for in vitro enhancement of soybean germination under the drought condition stimulated by 4% PEG showed that R1 and R3 strains enhanced the germination of soybean more effectively than R2, R4 and R5 strains. This finding is in agreement with the results of [45], who reported that 'ACC deaminase - producing fluorescent pseudomonads' improved canola (Brassica napus L.) seed germination under osmotic stress.
Regarding genomic insights into R1 and R3 strains, annotation of the R3 genome revealed the presence of 99 different genes (exo genes) responsible for EPS production. One of the genes, with the alias yjcG, had the biological function of putative phosphoesterase, while the ArlS gene had the biological function of signal transduction histidine-protein kinase. These EPS genes are known to empower microorganisms to survive under harsh environmental conditions [11]. Although it has been reported that EPS production is a survival strategy needed by microorganisms under drought stress conditions, [1] reported that production of EPS is also connected to acid tolerance. In reality, EPS has been produced by Agrobacterium tumefaciens and S. meliloti in acidic environments [46,47,48].
Under certain circumstances, surface polysaccharides such as EPS, capsular polysaccharides (CPS), β-1,2-glucans and lipopolysaccharides (LPS), which are essential molecules for symbiosis establishment [49, 50], might have other functions such as defense against antimicrobial substances and oxidative stress [51,52,53]. Different types of exo genes have been reported [54], but in this study we report only the exoX genes produced by R1 and R3 strains. It was also reported by [54] that the exoX gene regulates the production of exopolysaccharide (such as succinoglycan) by a 'new Rhizobium meliloti'. The 2 exoX genes reported for R1 strain were located between contigs 474–762 and 47–389 (Fig. 8a, b), with signal transduction histidine-protein kinase BaeS and exodeoxyribonuclease III biological functions respectively (Table 3). Similarly, R3 strain had its 2 exoX genes located between contigs 3,808–5,173 and 26,133–27,255 (Fig. 10a, b), with signal transduction histidine-protein kinase ArlS and response regulator aspartate phosphatase J functions respectively (Table 4).
Feature context of a Signal transduction histidine-protein kinase ArlS b Response regulator aspartate phosphatase J c Serine protease Do-like HtrA d Serine protease Do-like HtrA e Cysteine desulfurase IscS f Putative cysteine desulfurase NifS g Beta-N acetylglucosaminidase h Teichoic acid poly (ribitol-phosphate) polymerase depicting gene names in the genome map of R3 strain. The blue bars represent the gene locations
As previously mentioned, R1 and R3 strains were able to survive and grow at 45 °C (Fig. 3a, b, c, d, e & 4a, b, c, d, e), and this trait is thought to be essential for their success as semiarid and/or arid inoculant species [1, 55, 56]. A number of proteins are induced upon exposure to relatively high temperatures, and these proteins are generally termed heat shock proteins (HSPs). A couple of HSPs were present in R1 strain, and htrA genes were found in both R1 and R3 strains; specifically, 5 htrA and 14 htrA genes were found in R1 and R3 strains respectively. Besides, many different htrA homologues are similarly found in the genomes of other bacteria [57]. However, it was reported that alterations in one 'htrA paralogue' of Brucella abortus and S. meliloti had only a little impact on growth at high temperatures [58, 59], perhaps as a result of functional redundancy. Furthermore, [1] also reported the htrA gene as one of the components of the heat shock response in Rhizobium tropici CIAT 899 and Rhizobium sp. PRF 81. In reality, besides high temperatures, some HSPs also offer protection against other stressful conditions, such as the DnaK machinery, which protects against salt stress [60], and even HtrA, which offers protection against oxidative damage [59]. Among the htrA genes found in the rhizobial strains in this present study are the R1 strain extracellular serine protease and microbial serine proteinase htrA genes (Table 3) and the R3 strain serine protease Do-like htrA genes (Table 4). Again, the htrA genes of both R1 strain (Fig. 8c, d) and R3 strain (Fig. 10c, d) have their unique contig locations within the genome.
R1 and R3 strains harbored nitrogen fixing (nif) genes with 6 different biological functions, and 2 of the genes are presented in Tables 3 and 4 respectively. With respect to location, the iscS_1, 2.8.1.7 gene was located between contigs 38,300–39,443 (Fig. 10e), while nifS was located between contigs 72–336 (Fig. 10f) in R3 strain; the nif genes in R1 were found at contig locations (Fig. 8e, f) different from those of R3 strain. IscS has the biological function of providing sulphur for the synthesis of iron-sulphur clusters in vitro; nevertheless, the in vivo role of IscS in iron-sulphur formation is yet to be established [61]. Studies of the Azotobacter vinelandii nitrogen fixation gene cluster revealed that there are activities that enhance the effectiveness of iron-sulphur cluster assembly [62]. To be specific, study of nifS led to the finding that the protein produced by the IscS gene is a pyridoxal 5′-phosphate-harboring cysteine desulfurase that helps to transfer the sulfur moiety from cysteine to the cysteinyl active site of NifS, leading to the formation of an enzyme-bound persulfide [63]. After reduction and incorporation of an iron source, the sulphur can be released and effectively integrated into the iron-sulphur protein cluster of the nitrogenase enzyme complex [63].
However, R1 strain was found to possess other types of nif genes, such as nifW, nifN and nifQ, with negative (−) strands, and these genes were found to have different biological products (Table 5). R3 strain was also found to possess different nitrogen fixing genes with almost similar products, for instance nifX and nifQ, both with positive (+) strands and with biological products similar to nitrogenase FeMo-cofactor synthesis molybdenum delivery protein NifQ (Table 6).
Table 5 Nif - genes of Rhizobium sp. strain R1
Table 6 Nif – and Nif + genes of R. cellulosilyticum strain R3
Also, we found 23,297 and 12,242 nodA genes in R1 and R3 strains respectively, but only 2 of the genes are reported for each strain in this study. In particular, we observed nodA genes encoding putative MFS-type transporter YcaD and riboflavin transporter in R1 strain (Table 3), and Beta-N acetylglucosaminidase and Teichoic acid poly (ribitol-phosphate) polymerase in R3 strain (Table 4). The putative MFS-type transporter YcaD gene was located between contigs 93–735 (Fig. 8g), the riboflavin transporter gene between contigs 734–1,091 (Fig. 8h), the Beta-N acetylglucosaminidase gene between contigs 4,582–7,225 (Fig. 10g) and the Teichoic acid poly (ribitol-phosphate) polymerase gene between contigs 7,268–9,128 (Fig. 10h). It is common knowledge that nod genes help in the formation of nodules, the site of nitrogen fixation by nitrogen fixing bacteria such as Rhizobium species. Moreover, it was suggested that the kind of 'Nod factor acyl group attached by NodA can contribute to the determination of host range' [1, 64]. The nodA gene was also reported by [1] as one of the nodulation genes found in Rhizobium species.
In addition, R1 strain was noticed to have a gene with a + strand responsible for the production of nodulation protein N (Table 7), while R3 possesses two genes, one with a + strand and another with a − strand, that can produce protein translocase subunit SecD/protein translocase subunit SecF and swarming motility protein SwrC (Table 8).
Table 7 Nod + gene of Rhizobium sp. strain R1
Table 8 Nod − and Nod + gene of R. cellulosilyticum strain R3
R1 and R3 strains were found to have 13 and 33 siderophore-producing genes respectively. To be specific, Catecholate siderophore receptor Fiu and 2,3-dihydro-2,3-dihydroxybenzoate dehydrogenase genes were noticed in R1 strain (Table 3), while putative siderophore transport system permease protein YfiZ and putative siderophore-binding lipoprotein YfiY genes were detected in R3 strain (Table 4). It is presumed that these siderophore protein-producing genes, located between contigs 662–1,424 and 59–572 in R1 strain (Fig. 9a, b) and between contigs 14,769–15,771 and 13,660–14,638 in R3 strain (Fig. 11a, b), could have contributed to the quantitative siderophore production (Fig. 2c) observed in this study, which can help these rhizobial strains to chelate Fe and make it available for plant use. Naturally, the role of siderophore-producing rhizobia under drought conditions is highly appreciated, since it was reported by [65] that plants are highly prone to pathogenic attack under water stress conditions. Siderophore is antimicrobial in nature [2], and thus application of siderophore-producing rhizobia will certainly help to improve plant health under drought stress and consequently increase agricultural productivity [66].
Feature context of a putative siderophore transport system permease protein YfiZ b putative siderophore-binding lipoprotein YfiY c Inner membrane protein YiaA d tRNA dimethylallyltransferase e Heptaprenyl diphosphate synthase component 1 f Septation ring formation regulator EzrA depicting gene names in the genome map of R3. The blue bars represent the gene locations
Furthermore, an IAA gene involved in the production of isoaspartyl peptidase protein was detected in R1 strain (Table 3). Similarly, 2 of the 6 IAA genes found in R3 strain are responsible for the production of inner membrane protein YiaA and tRNA dimethylallyltransferase (Table 4); the locations of the respective genes were between contigs 536–704 for the R1 strain (Fig. 9c, d) and between contigs 11,701–11,965 and 10,955–11,900 for the R3 strain (Fig. 11c, d).
Based on the results of the growth response to different environmental pH, R1 and R3 strains can be considered acid-tolerant strains, since they were able to survive and grow at a low pH of 4. Upon application in the field, these strains may be confronted with acid stress in acidic soils and within the symbiosome. According to [1], the mechanisms involved in rhizobial survival and growth in acidic environments are yet to be understood. EptA genes coding for UDP-N-acetylmuramate-L-alanyl-gamma-D-glutamyl-meso-2,6-diaminoheptandioate ligase and phosphoethanolamine transferase were found in R1 strain. [1] also found genes in R. tropici CIAT 899 and Rhizobium sp. PRF 81 coding for a 'putative lipid A Phosphoethanolamine transferase' similar to that found in R1 strain in this study. In Salmonella typhimurium and Escherichia coli, the eptA gene is activated under mildly acidic conditions, and it was reported that eptA conferred acid tolerance in Shigella flexneri 2a [67]. Also, eptA genes were reported in R. rhizogenes K84, sinorhizobia and agrobacteria, but not in other Rhizobium spp. [1].
It was found that Rhizobium sp. strain R1, Rhizobium tropici strain R2, Rhizobium cellulosilyticum strain R3, Rhizobium taibaishanense strain R4 and Ensifer meliloti strain R5 isolated from the Bambara groundnut rhizosphere possess the PGP traits considered in this study. In particular, these rhizobial strains produced EPS and ACC, and in addition were able to survive and grow at a temperature of 45 °C and in an acidic condition with a pH of 4. Consequently, R1, R3 and R5 strains enhanced the germination of soybean seeds (PAN 1532 R) under the drought condition imposed by 4% PEG; nevertheless, Rhizobium sp. strain R1 and R. cellulosilyticum strain R3 inoculations improved seed germination more than R5. Genomic insights into Rhizobium sp. strain R1 and R. cellulosilyticum strain R3 revealed the presence of some genes, with their respective proteins, involved in symbiotic establishment, drought tolerance and plant growth promotion. In particular, exoX, htrA, nif, nodA, eptA, IAA and siderophore-producing genes were found in the two rhizobial strains. Therefore, the PGP ability of these rhizobial strains can further be harnessed for biotechnological application in the field, especially in the semiarid and arid regions of the globe.
The data for R1 strain are available in the NCBI database under Bioproject number PRJNA496421, Biosample number SAMN10240937 and SRA Accession number SRR8060784. Similarly, data for R3 strain are available in the NCBI database under Bioproject number PRJNA496421, Biosample number SAMN10245972 and SRA Accession number SRR8061690.
ACC:
1-aminocyclopropane-1-carboxylate
ACS:
Aminocyclopropane-1-carboxylate synthase
BLAST:
Basic Local Alignment Search Tool
CAS:
Chrome azurol S
CFU:
Colony forming unit
DNA:
Deoxyribonucleic acid
EPS:
Exopolysaccharide
GC:
Guanine cytosine
HSPs:
Heat shock proteins
IAA:
Indole-acetic-acid
KEGG:
Kyoto encyclopedia of genes and genomes
LB:
Luria Bertani
NCBI:
National Centre for Biotechnology Information
O.D:
Optical density
PCR:
Polymerase chain reaction
PEG:
Poly-ethylene glycol
PGP:
Plant growth promoting
R1:
Rhizobium sp. strain R1
R2:
Rhizobium tropici strain R2
R3:
Rhizobium cellulosilyticum strain R3
R4:
Rhizobium taibaishanense strain R4
R5:
Ensifer meliloti strain R5
RAST:
rapid annotation using subsystem technology
rpm:
Revolution per minute
SAS:
Statistical Analysis System
SE:
Standard error
Ormeño-Orrillo E, Menna P, Almeida LGP, Ollero FJ, Nicolás MF, Rodrigues EP, et al. Genomic basis of broad host range and environmental adaptability of Rhizobium tropici CIAT 899 and Rhizobium sp. PRF 81 which are used in inoculants for common bean (Phaseolus vulgaris L.). BMC Genomics. 2012;13:735.
Igiehon NO, Babalola OO. Rhizosphere microbiome modulators: contributions of nitrogen fixing Bacteria towards sustainable agriculture. Environ Res Public Health. 2018;15:574.
Grover M, Ali SZ, Sandhya V, Rasul A, Venkateswarlu B. Role of microorganisms in adaptation of agriculture crops to abiotic stresses. World J Microbiol Biotechnol. 2011;27:1231–40.
Yang J, Kloepper JW, Ryu C-M. Rhizosphere bacteria help plants tolerate abiotic stress. Trends Plant Sci. 2009;14:1–4.
Dimkpa C, Weinand T, Asch F. Plant–rhizobacteria interactions alleviate abiotic stress conditions. Plant Cell Environ. 2009;32:1682–94.
Maheshwari DK. Bacteria in agrobiology: crop system. Heidelberg: Springer; 2011.
Kim Y-C, Glick BR, Bashan Y, Ryu C-M. Enhancement of plant drought tolerance by microbes. In: Plant responses to drought stress. Berlin, Heidelberge: Springer-Verlag; 2012. p. 383–413.
Hardoim PR, van Overbeek LS, van Elsas JD. Properties of bacterial endophytes and their proposed role in plant growth. Trends Microbiol. 2008;16:463–71.
Glick BR. Modulation of plant ethylene levels by the bacterial enzyme ACC deaminase. FEMS Microbiol Lett. 2005;251:1–7.
Sharma P, Khanna V, Kumari P. Efficacy of aminocyclopropane-1-carboxylic acid (ACC)-deaminase-producing rhizobacteria in ameliorating water stress in chickpea under axenic conditions. Afr J Microbiol Res. 2013;7:5749–57.
Roberson EB, Firestone MK. Relationship between desiccation and exopolysaccharide production in a soil Pseudomonas sp. Appl Environ Microbiol. 1992;58:1284–91.
Mantelin S, Touraine B. Plant growth-promoting bacteria and nitrate availability: impacts on root development and nitrate uptake. J Exp Bot. 2004;55:27–34.
Pérez-Montaño F, Alías-Villegas C, Bellogín R, Del Cerro P, Espuny M, Jiménez-Guerrero I, et al. Plant growth promotion in cereal and leguminous agricultural important plants: from microorganism capacities to crop production. Microbiol Res. 2014;169:325–36.
Tkacz A, Poole P. Role of root microbiota in plant productivity. J Exp Bot. 2015;66:2167–75.
Bar-Ness E, Chen Y, Hadar Y, Marschner H, Römheld V. Siderophores of Pseudomonas putida as an iron source for dicot and monocot plants. Plant Soil. 1991;130:231–41.
Traxler MF, Seyedsayamdost MR, Clardy J, Kolter R. Interspecies modulation of bacterial development through iron competition and siderophore piracy. Mol Microbiol. 2012;86:628–44.
Bal HB, Das S, Dangar TK, Adhya TK. ACC deaminase and IAA producing growth promoting bacteria from the rhizosphere soil of tropical rice plants. J Basic Microbiol. 2013;53:972–84.
Ahmad F, Ahmad I, Khan M. Screening of free-living rhizospheric bacteria for their multiple plant growth promoting activities. Microbiol Res. 2008;163:173–81.
Mohite B. Isolation and characterization of indole acetic acid (IAA) producing bacteria from rhizospheric soil and its effect on plant growth. J Soil Sci Plant Nutr. 2013;13:638–49.
Masciarelli O, Llanes A, Luna V. A new PGPR co-inoculated with Bradyrhizobium japonicum enhances soybean nodulation. Microbiol Res. 2014;169:609–15.
Hungria M, Stacey G. Molecular signals exchanged between host plants and rhizobia: basic aspects and potential application in agriculture. Soil Biol Biochem. 1997;29:819–30.
Oldroyd GE, Downie JA. Coordinating nodule morphogenesis with rhizobial infection in legumes. Annu Rev Plant Biol. 2008;59:519–46.
Perret X, Staehelin C, Broughton WJ. Molecular basis of symbiotic promiscuity. Microbiol Mol Biol Rev. 2000;64:180–201.
Fauvart M, Michiels J. Rhizobial secreted proteins as determinants of host specificity in the Rhizobium–legume symbiosis. FEMS Microbiol Lett. 2008;285:1–9.
Ali SZ, Sandhya V, Rao LV. Isolation and characterization of drought-tolerant ACC deaminase and exopolysaccharide-producing fluorescent Pseudomonas sp. Ann Microbiol. 2014;64:493–502.
Paulo EM, Vasconcelos MP, Oliveira IS, Affe HMJ, Nascimento R, ISd M, et al. An alternative method for screening lactic acid bacteria for the production of exopolysaccharides with rapid confirmation. Food Sci Technol. 2012;32:710–4.
Putrie RFW, Wahyudi AT, Nawangsih AA, Husen E. Screening of rhizobacteria for plant growth promotion and their tolerance to drought stress. Microbiol Indones. 2013;7:2.
Sasirekha B, Srividya S. Siderophore production by Pseudomonas aeruginosa FP6, a biocontrol strain for Rhizoctonia solani and Colletotrichum gloeosporioides causing diseases in chilli. Agric Nat Resour. 2016;50:250–6.
Gaur A. Phosphate solubilizing micro-organisms as biofertilizer. New Delhi: Omega scientific publishers; 1990. p. 16-72.
Prakamhang J, Tittabutr P, Boonkerd N, Teamtisong K, Uchiumi T, Abe M, et al. Proposed some interactions at molecular level of PGPR coinoculated with Bradyrhizobium diazoefficiens USDA110 and B. japonicum THA6 on soybean symbiosis and its potential of field application. Appl Soil Ecol. 2015;85:38–49.
Bolger AM, Lohse M, Usadel B. Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics. 2014;30:2114–20.
Aziz RK, Bartels D, Best AA, DeJongh M, Disz T, Edwards RA, et al. The RAST server: rapid annotations using subsystems technology. BMC Genomics. 2008;9:75.
Igiehon NO. Bioremediation potentials of Heterobasidion annosum 13.12 B and Resinicium bicolor in diesel oil contaminated soil microcosms. J Appl Sci Environ Manag. 2015;19:513–9.
Dytham C. Choosing and using statistics: a biologist's guide. 3rd ed. West Sussex: Wiley; 2011.
Arshad M, Shaharoona B, Mahmood T. Inoculation with Pseudomonas spp. containing ACC-deaminase partially eliminates the effects of drought stress on growth, yield, and ripening of pea (Pisum sativum L.). Pedosphere. 2008;18:611–20.
Ozturk S, Aslim B. Modification of exopolysaccharide composition and production by three cyanobacterial isolates under salt stress. Environ Sci Pollut Res. 2010;17:595–602.
Sayyed R, Badgujar M, Sonawane H, Mhaske M, Chincholkar S. Production of microbial iron chelators (siderophores) by fluorescent Pseudomonads; 2005.
Omidvari M, Sharifi RA, Ahmadzadeh M, Dahaji PA. Role of fluorescent pseudomonads siderophore to increase bean growth factors. J Agric Sci. 2010;2:242.
Sarwar M, Kremer R. Determination of bacterially derived auxins using a microplate method. Lett Appl Microbiol. 1995;20:282–5.
Zhang F, Lynch DH, Smith DL. Impact of low root temperatures in soybean [Glycine max (L.) Merr.] on nodulation and nitrogen fixation. Environ Exp Bot. 1995;35:279–85.
Alexandre A, Oliveira S. Response to temperature stress in rhizobia. Crit Rev Microbiol. 2013;39:219–28.
Chen L, Figueredo A, Villani H, Michajluk J, Hungria M. Diversity and symbiotic effectiveness of rhizobia isolated from field-grown soybean nodules in Paraguay. Biol Fertil Soils. 2002;35:448–57.
Karanja NK, Wood M. Selecting Rhizobium phaseoli strains for use with beans (Phaseolus vulgaris L.) in Kenya: tolerance of high temperature and antibiotic resistance. Plant Soil. 1988;112:15–22.
Abd-Alla MH, El-Enany A-WE, Nafady NA, Khalaf DM, Morsy FM. Synergistic interaction of Rhizobium leguminosarum bv. Viciae and arbuscular mycorrhizal fungi as a plant growth promoting biofertilizers for faba bean (Vicia faba L.) in alkaline soil. Microbiol Res. 2014;169:49–58.
Jalili F, Khavazi K, Pazira E, Nejati A, Rahmani HA, Sadaghiani HR, et al. Isolation and characterization of ACC deaminase-producing fluorescent pseudomonads, to alleviate salinity stress on canola (Brassica napus L.) growth. J Plant Physiol. 2009;166:667–74.
Yuan Z-C, Liu P, Saenkham P, Kerr K, Nester EW. Transcriptome profiling and functional analysis of Agrobacterium tumefaciens reveals a general conserved response to acidic conditions (pH 5.5) and a complex acid-mediated signaling involved in Agrobacterium-plant interactions. J Bacteriol. 2008;190:494–507.
Hellweg C, Pühler A, Weidner S. The time course of the transcriptomic response of Sinorhizobium meliloti 1021 following a shift to acidic pH. BMC Microbiol. 2009;9:37.
Cunningham SD, Munns DN. The correlation between extracellular polysaccharide production and acid tolerance in Rhizobium 1. Soil Sci Soc Am J. 1984;48:1273–6.
Becker A, Fraysse N, Sharypova L. Recent advances in studies on structure and symbiosis-related function of rhizobial K-antigens and lipopolysaccharides. Mol Plant-Microbe Interact. 2005;18:899–905.
Skorupska A, Janczarek M, Marczak M, Mazur A, Król J. Rhizobial exopolysaccharides: genetic control and symbiotic functions. Microb Cell Factories. 2006;5:7.
Ormeño-Orrillo E, Rosenblueth M, Luyten E, Vanderleyden J, Martínez-Romero E. Mutations in lipopolysaccharide biosynthetic genes impair maize rhizosphere and root colonization of Rhizobium tropici CIAT899. Environ Microbiol. 2008;10:1271–84.
D'Haeze W, Holsters M. Surface polysaccharides enable bacteria to evade plant immunity. Trends Microbiol. 2004;12:555–61.
Ingram BO, Sohlenkamp C, Geiger O, Raetz CR. Altered lipid a structures and polymyxin hypersensitivity of Rhizobium etli mutants lacking the LpxE and LpxF phosphatases. Biochim Biophys Acta Mol Cell Biol Lipids. 2010;1801:593–604.
Zhan H, Leigh JA. Two genes that regulate exopolysaccharide production in Rhizobium meliloti. J Bacteriol. 1990;172:5254–9.
Martínez-Romero E, Segovia L, Mercante FM, Franco AA, Graham P, Pardo MA. Rhizobium tropici, a novel species nodulating Phaseolus vulgaris L. beans and Leucaena sp. trees. Int J Syst Evol Microbiol. 1991;41:417–26.
Hungria M, de S Andrade D, de O Chueire LM, Probanza A, Guttierrez-Mañero FJ, Megías M. Isolation and characterization of new efficient and competitive bean (Phaseolus vulgaris L.) rhizobia from Brazil. Soil Biol Biochem. 2000;32:1515–28.
Clausen T, Southan C, Ehrmann M. The HtrA family of proteases: implications for protein composition and cell fate. Mol Cell. 2002;10:443–55.
Glazebrook J, Ichige A, Walker GC. Genetic analysis of Rhizobium meliloti bacA-phoA fusion results in identification of degP: two loci required for symbiosis are closely linked to degP. J Bacteriol. 1996;178:745–52.
Phillips RW, Roop RM. Brucella abortus HtrA functions as an authentic stress response protease but is not required for wild-type virulence in BALB/c mice. Infect Immun. 2001;69:5911–3.
Nogales J, Campos R, BenAbdelkhalek H, Olivares J, Lluch C, Sanjuan J. Rhizobium tropici genes involved in free-living salt tolerance are required for the establishment of efficient nitrogen-fixing symbiosis with Phaseolus vulgaris. Mol Plant-Microbe Interact. 2002;15:225–32.
Schwartz CJ, Djaman O, Imlay JA, Kiley PJ. The cysteine desulfurase, IscS, has a major role in in vivo Fe-S cluster formation in Escherichia coli. Proc Natl Acad Sci. 2000;97:9009–14.
Dean DR, Bolin JT, Zheng L. Nitrogenase metalloclusters: structures, organization, and synthesis. J Bacteriol. 1993;175:6737.
Zheng L, White RH, Cash VL, Dean DR. Mechanism for the desulfurization of L-cysteine catalyzed by the nifS gene product. Biochemistry. 1994;33:4714–20.
Debellé F, Plazanet C, Roche P, Pujol C, Savagnac A, Rosenberg C, et al. The NodA proteins of Rhizobium meliloti and Rhizobium tropici specify the N-acylation of nod factors by different fatty acids. Mol Microbiol. 1996;22:303–14.
Hungria M, Nogueira MA, Araujo RS. Co-inoculation of soybeans and common beans with rhizobia and azospirilla: strategies to improve sustainability. Biol Fertil Soils. 2013;49:791–801.
Igiehon NO, Babalola OO. Below-ground-above-ground plant-microbial interactions: focusing on soybean, Rhizobacteria and mycorrhizal Fungi. Open Microbiol J. 2018;12:261-79.
Martinić M, Hoare A, Contreras I, Álvarez SA. Contribution of the lipopolysaccharide to resistance of Shigella flexneri 2a to extreme acidity. PLoS One. 2011;6:e25557.
This work was financially supported by National Research Foundation, South Africa/ The World Academy of Science African Renaissance grant (UID105466) and National Research Foundation, South Africa grants (UID81192, UID99779, UID95111, and UID104015).
Food Security and Safety Niche, Faculty of Natural and Agricultural Sciences, Private Mail Bag X2046, North-West University, Mmabatho, 2735, South Africa
Nicholas O. Igiehon, Olubukola O. Babalola & Bukola R. Aremu
Nicholas O. Igiehon
Olubukola O. Babalola
Bukola R. Aremu
INO designed and performed the experiment and wrote the article; BOO secured funding, provided academic input in writing the manuscript and thoroughly critiqued the article while INO and ABR did bioinformatics analyses of the genomic data. All authors approved the article for publication.
Correspondence to Olubukola O. Babalola.
This article does not contain any studies with human participants or animals performed by any of the authors.
Igiehon, N.O., Babalola, O.O. & Aremu, B.R. Genomic insights into plant growth promoting rhizobia capable of enhancing soybean germination under drought stress. BMC Microbiol 19, 159 (2019). https://doi.org/10.1186/s12866-019-1536-1
Accepted: 30 June 2019
Nitrogen fixation
Symbiotic establishment
Whole genome sequences
Fundamentals of the Diffraction Grating Spectrometer - Comments
Thread starter Charles Link
Charles Link
In this article we will discuss the fundamentals of the diffraction grating spectrometer. The operation of the instrument is based upon the textbook equations for the far-field interference (Fraunhofer case) that results from a plane wave incident on a diffraction grating. It is rather remarkable how the standard textbook equations can be used to tell nearly everything one needs to know in order to understand the complete operation of the instrument. It is hoped that upon reading this article, the reader will have a good understanding of how a diffraction grating spectrometer works.
For a diffraction grating spectrometer, the grating is the dispersive element instead of a prism. In very simple form, the primary maxima from a diffraction grating for wavelength ## \lambda ## are found at angles ## \theta ## that satisfy ## m \lambda=d \sin{\theta} ##, with ##m=## integer. The result is different...
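As a quick numerical check of ## m \lambda=d \sin{\theta} ##, the short sketch below computes the primary maxima for an assumed 600 line/mm grating and the hydrogen Balmer-alpha line; both values are chosen only for illustration.

```python
# Angles of the primary maxima from m*lambda = d*sin(theta) for an
# assumed 600 line/mm grating and the H-alpha line (both illustrative).
import math

d = 1e-3 / 600              # slit spacing in metres (600 lines/mm)
wavelength = 656.28e-9      # hydrogen Balmer-alpha, metres

m = 1
while m * wavelength / d <= 1:      # orders with |sin(theta)| <= 1
    theta = math.degrees(math.asin(m * wavelength / d))
    print(f"order m = {m}: theta = {theta:.2f} degrees")
    m += 1
```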
neilparker62
Thanks - read your article with interest albeit perhaps needing to do some careful study to properly understand all the equations.
In the Insights article I wrote on the Deuterium Lyman Alpha line, the claimed resolution with a "3 metre vacuum grating spectrograph in fifth order" was:
$$L_\alpha(D)=1215.3378\pm0.00025\ \text{Å}$$
Is this level of resolution achievable with a modern diffraction grating spectrometer and if so why has there been no attempt to repeat Herzberg's measurement (at least not as far as I can make out anyway) ?
Greg Bernhardt
Thanks Charles! It's a welcome addition to your knowledge base!
neilparker62 said:
That kind of resolution would be very difficult to achieve, and I believe it is well beyond the accuracy of any commercially available instrument. To achieve anywhere near this kind of precision would be quite painstaking. I can see why no one has attempted to repeat it to that level of accuracy. (Edit note: these initial comments may be in error. See post 5, etc.) With a commercially available instrument, one would typically measure something like ## \lambda= 1215.3 ## angstroms at first order, and you can get additional precision by doing it at fifth order. In general, the accuracy is limited by the accuracy of the other spectral lines that you use as a standard.

In the Herzberg measurement, he most likely measured it from first principles, i.e., measuring ## \sin{\theta_i} ## and ## \sin{\theta_r} ##, rather than using other spectral lines. Alternatively, he could have calibrated his spectrometer with another source, such as the iron lines that are commonly used as a wavelength standard. I don't have access to any iron calibration standard handbook at present, but if I remember correctly, the precision is perhaps ## \pm 0.001 ## angstroms. @neilparker62 Perhaps you could try to find some info on the currently available precision of these standards.

When working from first principles, besides measuring ## \theta_i ## and ## \theta_r ## very accurately, to get tremendous accuracy you also need to know the dimensions of the grating, i.e. the spacing ## d ## between the lines of the grating, to very high precision.
Charles Link said:
@neilparker62 Perhaps you could try to find some info on the currently available precision of these standards.
Re-Optimized Energy Levels and Ritz Wavelengths of ##^{ 198}##Hg I,
A. Kramida,
J. Res. Natl. Inst. Stand. Technol. 116, 599–619 (2011)
DOI:10.6028/jres.116.008
You can compare and make your own correction for Herzberg's D measurement. In the H-D-T compilation, I used the values from Saloman. This is the update mentioned in the H-D-T compilation.
See page 610 of above reference for the Mercury standard lines used by Herzberg.
Thank you @neilparker62 I need to look these over carefully, but a quick comment: on at least one of them, they are using an FTS (Fourier transform spectrometer), which uses a Michelson-interferometer-based system. In any case, the resolution is quite phenomenal.

Edit: I also see that another of the measurements uses a Fabry-Perot interferometer. The reader may find this write-up that I previously did of interest: https://www.physicsforums.com/insights/fabry-perot-michelson-interferometry-fundamental-approach/
sophiecentaur
they are using an FTS=Fourier transform spectrometer, which uses a Michelson interferometer based system. In any case, the resolution is quite phenomenal.
Frequency measurement and resolution at RF will take you easily to one part in ##10^{11}##. Pretty damn good, eh?
sophiecentaur said:
It appears that the FTS Michelson interferometer may have the best (absolute) resolution/precision for determining the wavelength of isolated and very bright and narrow spectral lines, to be used as wavelength calibration standards. Once these "standard" wavelengths are determined, a diffraction grating spectrometer can be used to accurately measure the wavelengths of other sources, including those that have many spectral lines.

With a diffraction grating spectrometer, when working from first principles (i.e., working with ## d ##, ## \theta_i ##, and ## \theta_r ##, and computing ## \lambda=\frac{d(\sin{\theta_i}+\sin{\theta_r})}{m} ##), the highest (absolute) precision that can be readily achieved may be on the order of ## \pm 0.001 ## angstroms. Using wavelength calibration standards from an FTS, relative precision for the diffraction grating instruments can then be taken to the diffraction limit of resolution, which may be a couple of orders of magnitude smaller.
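To put rough numbers on that error budget, here is a small Python sketch of the first-principles formula above; the grating pitch, angles, and order are made-up illustrative values, not Herzberg's actual setup:

```python
import numpy as np

def grating_wavelength(d_nm, theta_i_deg, theta_r_deg, m):
    """Wavelength from the grating equation m*lambda = d*(sin(theta_i) + sin(theta_r))."""
    ti, tr = np.radians(theta_i_deg), np.radians(theta_r_deg)
    return d_nm * (np.sin(ti) + np.sin(tr)) / m

# Toy numbers: a 600 line/mm grating used in fifth order.
d_nm = 1e6 / 600.0                      # groove spacing in nm
lam = grating_wavelength(d_nm, 10.0, 12.0, 5)

# Sensitivity of lambda to a small angle error, by finite differences:
dtheta = 1.0 / 3600.0                   # one arcsecond, in degrees
dlam = grating_wavelength(d_nm, 10.0 + dtheta, 12.0, 5) - lam
print(f"lambda = {lam:.4f} nm, error per arcsecond in theta_i ~ {dlam*10:.5f} angstroms")
```

Even a one-arcsecond angle error already costs on the order of hundredths of an angstrom here, which illustrates why sub-milliangstrom absolute work is so painstaking.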
In light of the last few posts then, is there any reason in principle why we can't 're-run' Herzberg's measurement with any one - or a combination - of the above-mentioned methods? The impression I have is that this measurement represented some kind of 'pinnacle' in what was achievable with diffraction grating technology. Thereafter laser-based methods took over, leading (for example) to resolution of fine structure lines associated with the Hydrogen/Deuterium Balmer alpha. And also to a very accurate measurement of the "Ground State Lamb Shift" - the original intent of Herzberg's work. Although (for some reason) this term - ie "Ground State Lamb Shift" - seems to have more or less disappeared from the scientific landscape as of about 1995; I'm not sure why.
See the "link" you @neilparker62 provided in another discussion:
https://www.sciencedirect.com/science/article/pii/S0370269319304010#br0020
C.G. Parthey's measurements are referenced in part 6, references [14] and [36].
It appears he recently did very accurate measurements of the 1s-2s transitions in hydrogen and deuterium.
Yes - I was aware of Parthey's measurements. In fact it was only because of comparing these against calculated values (per the Dirac energy formula) that I discovered there were discrepancies, and hence eventually came to learn firstly about fine structure differences (2p 1/2 - 2p 3/2), then about the 'original' Lamb shift (2s - 2p 1/2) and the so-called 'ground state Lamb shift'.
The simplified Dirac energy equation I have used calculates the 1s - 2p 3/2 energy difference, which should differ from an accurately measured value mainly on account of QED contributions - ie the "ground state Lamb shift". Herzberg sought to obtain a value for this 1s Lamb shift by measuring the same transition and subtracting from theory.
In my article on the Deuterium Lyman Alpha line I said I thought it was perhaps a little surprising that, if the Hydrogen and Deuterium 1s - 2s transitions could be measured with such precision, why not also the 1s - 2p 3/2 transitions? Herzberg's 1950s measurement remains the most accurate we have of this particular transition - at least for Deuterium. In his paper he explains why he measured Deuterium and not Hydrogen 1s - 2p 3/2.
Stability analysis of an equation with two delays and application to the production of platelets
Loïs Boullu 1,, , Laurent Pujo-Menjouet 1, and Jacques Bélair 2,
Univ Lyon, Université Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan, 43 blvd. du 11 novembre 1918, F-69622 Villeurbanne cedex, France
Département de Mathématiques et de statistiques de l'Université de Montréal, Pavillon André-Aisenstadt, CP 6128 Succ. centre-ville, Montréal (Québec) H3C 3J7, Canada
* Corresponding author: [email protected]
Received December 2018 Revised March 2019 Published November 2019
Fund Project: LB was supported by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). Also, LB is supported by a grant of Région Rhône-Alpes and benefited of the help of the France Canada Research Fund, of the NSERC and of a support from MITACS. JB acknowledges support from NSERC [Discovery Grant]
We analyze the stability of a differential equation with two delays originating from a model for a population divided into two subpopulations, immature and mature, and we apply this analysis to a model for platelet production. The dynamics of mature individuals is described by the following nonlinear differential equation with two delays: $x'(t) = -\gamma x(t) + g(x(t-\tau_1)) - g(x(t-\tau_1 - \tau_2)) e^{-\gamma \tau_2}$. The method of D-decomposition is used to compute the stability regions for a given equilibrium. The centre manifold theory is used to investigate the steady-state bifurcation and the Hopf bifurcation. Similarly, analysis of the centre manifold associated with a double bifurcation is used to identify a set of parameters such that the solution is a torus in the pseudo-phase space. Finally, the results of the local stability analysis are used to study the impact of an increase of the death rate $\gamma$ or of a decrease of the survival time $\tau_2$ of platelets on the onset of oscillations. We show that the stability is lost through a small decrease of survival time (from 8.4 to 7 days), or through an important increase of the death rate (from 0.05 to 0.625 days$^{-1}$).
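For readers who want to experiment with the model, the delay equation in the abstract can be simulated with a simple history-buffer Euler scheme; the feedback function g and all parameter values in this sketch are illustrative placeholders, not the calibrated values used in the paper:

```python
import numpy as np

# Minimal fixed-step Euler integrator for
#   x'(t) = -gamma*x(t) + g(x(t-tau1)) - g(x(t-tau1-tau2)) * exp(-gamma*tau2)
# g and all parameter values below are illustrative, not the paper's.
gamma, tau1, tau2 = 0.1, 1.0, 8.4
g = lambda x: 2.0 / (1.0 + x**2)        # hypothetical decreasing feedback

dt, T = 0.01, 200.0
n_hist = int(round((tau1 + tau2) / dt)) # history buffer length
x = np.full(int(T / dt) + n_hist, 1.0)  # constant initial history on [-(tau1+tau2), 0]

k1, k2 = int(round(tau1 / dt)), n_hist
for i in range(n_hist, len(x) - 1):
    dx = -gamma * x[i] + g(x[i - k1]) - g(x[i - k2]) * np.exp(-gamma * tau2)
    x[i + 1] = x[i] + dt * dx

print("final values:", x[-5:])          # inspect whether the solution settles or oscillates
```

Varying tau2 or gamma in such a simulation is the numerical analogue of the stability-loss experiments summarized at the end of the abstract.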
Keywords: Platelets, oscillations, stability, two delays, D-decomposition, centre manifold analysis.
Mathematics Subject Classification: Primary: 34K13, 34K18; Secondary: 92D25.
Citation: Loïs Boullu, Laurent Pujo-Menjouet, Jacques Bélair. Stability analysis of an equation with two delays and application to the production of platelets. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020131
J. Bélair and S. A. Campbell, Stability and bifurcations of equilibria in a multiple-delayed differential equation, SIAM Journal on Applied Mathematics, 54 (1994), 1402-1424. doi: 10.1137/S0036139993248853.
J. Bélair and M. C. Mackey, A model for the regulation of mammalian platelet production, Annals of the New York Academy of Sciences, 504 (1987), 280-282. doi: 10.1111/j.1749-6632.1987.tb48740.x.
J. Bélair, M. C. Mackey and J. M. Mahaffy, Age-structured and two delay models for erythropoiesis, Math. Biosciences, 128 (1995), 317-346. doi: 10.1016/0025-5564(94)00078-E.
A. Besse, Modélisation Mathématique de La Leucémie Myéloïde Chronique, Ph.D thesis, Université Claude Bernard Lyon 1, 2017.
L. Boullu, M. Adimy, F. Crauste and L. Pujo-Menjouet, Oscillations and asymptotic convergence for a delay differential equation modeling platelet production, Discrete and Continuous Dynamical Systems Series B, 24 (2019), 2417-2442. doi: 10.3934/dcdsb.2018259.
L. Boullu, L. Pujo-Menjouet and J. H. Wu, A model for megakaryopoiesis with state-dependent delay, SIAM J. Appl. Math., 79 (2019), 1218-1243. doi: 10.1137/18M1201020.
T. C. Busken and J. M. Mahaffy, Regions of stability for a linear differential equation with two rationally dependent delays, Discrete and Continuous Dynamical Systems, 35 (2015), 4955-4986. doi: 10.3934/dcds.2015.35.4955.
S. A. Campbell, Calculating centre manifolds for delay differential equations using MapleTM, Delay Differential Equations, Springer, New York, (2009), 221-244. doi: 10.1007/978-0-387-85595-0_8.
F. J. de Sauvage, K. Carver-Moore, S. M. Luoh, A. Ryan, M. Dowd, D. L. Eaton and M. W. Moore, Physiological regulation of early and late stages of megakaryocytopoiesis by thrombopoietin, The Journal of Experimental Medicine, 183 (1996), 651-656. doi: 10.1084/jem.183.2.651.
H. A. El-Morshedy, G. Röst and A. Ruiz-Herrera, Global dynamics of delay recruitment models with maximized lifespan, Zeitschrift für angewandte Mathematik und Physik, 67 (2016), Art. 56, 15 pp. doi: 10.1007/s00033-016-0644-0.
R. S. Go, Idiopathic cyclic thrombocytopenia, Blood Reviews, 19 (2005), 53-59. doi: 10.1016/j.blre.2004.05.001.
R. Grozovsky, A. J. Begonja, K. F. Liu, G. Visner, J. H. Hartwig, H. Falet and K. M. Hoffmeister, The Ashwell-Morell receptor regulates hepatic thrombopoietin production via JAK2-STAT3 signaling, Nature Medicine, 21 (2015), 47-54. doi: 10.1038/nm.3770.
E. N. Gryazina, The D-decomposition theory, Automation and Remote Control, 65 (2004), 1872-1884. doi: 10.1023/B:AURC.0000049874.93222.2c.
J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Applied Mathematical Sciences, 42. Springer-Verlag, New York, 1983. doi: 10.1007/978-1-4612-1140-2.
K. Kaushansky, Megakaryopoiesis and thrombopoiesis, in Williams Hematology, McGraw-Hill, 9th edition, (2016), 1815-1828.
D. J. Kuter, The biology of thrombopoietin and thrombopoietin receptor agonists, International Journal of Hematology, 98 (2013), 10-23. doi: 10.1007/s12185-013-1382-0.
Y. A. Kuznetsov, Elements of Applied Bifurcation Theory, 2nd edition, Applied Mathematical Sciences, 112. Springer-Verlag, New York, 1998. doi: 10.1007/b98848.
G. P. Langlois, M. Craig, A. R. Humphries, M. C. Mackey, J. M. Mahaffy, J. Bélair, T. Moulin, S. R. Sinclair and L. L. Wang, Normal and pathological dynamics of platelets in humans, Journal of Mathematical Biology, 75 (2017), 1411-1462. doi: 10.1007/s00285-017-1125-6.
J. Li, D. E. van der Wal, G. H. Zhu, M. Xu, I. Yougbare, L. Ma, B. Vadasz, N. Carrim, R. Grozovsky, M. Ruan, L. Y. Zhu, Q. S. Zeng, L. L. Tao, Z.-M. Zhai, J. Peng, M. Hou, V. Leytin, J. Freedman, K. M. Hoffmeister and H. Y. Ni, Desialylation is a mechanism of Fc-independent platelet clearance and a therapeutic target in immune thrombocytopenia, Nature Communications, 6 (2015), 7737. doi: 10.1038/ncomms8737.
J. M. Mahaffy, J. Bélair and M. C. Mackey, Hematopoietic model with moving boundary condition and state dependent delay: Applications in erythropoiesis, Journal of Theoretical Biology, 190 (1998), 135-146. doi: 10.1006/jtbi.1997.0537.
J. M. Mahaffy, K. M. Joiner and P. J. Zak, A geometric analysis of stability regions for a linear differential equation with two delays, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 5 (1995), 779-796. doi: 10.1142/S0218127495000570.
S. E. McKenzie, S. M. Taylor, P. Malladi, H. Yuhan, D. L. Cassel, P. Chien, E. Schwartz, A. D. Schreiber, S. Surrey and M. P. Reilly, The role of the human Fc receptor FcgRIIA in the immune clearance of platelets: A transgenic mouse model, The Journal of Immunology, 162 (1999), 4311-4318.
L. Pitcher, K. Taylor, J. Nichol, D. Selsi, R. Rodwell, J. Marty, D. Taylor, S. Wright, D. Moore, C. Kelly and A. Rentoul, Thrombopoietin measurement in thrombocytosis: Dysregulation and lack of feedback inhibition in essential thrombocythaemia, British Journal of Haematology, 99 (1997), 929-932. doi: 10.1046/j.1365-2141.1997.4633267.x.
H. Y. Shu, L. Wang and J. H. Wu, Global dynamics of Nicholson's blowflies equation revisited: Onset and termination of nonlinear oscillations, J. Differential Equations, 255 (2013), 2565-2586. doi: 10.1016/j.jde.2013.06.020.
H. Y. Shu, L. Wang and J. H. Wu, Bounded global Hopf branches for stage-structured differential equations with unimodal feedback, Nonlinearity, 30 (2017), 943-964. doi: 10.1088/1361-6544/aa5497.
Y. L. Song and J. Jiang, Steady-state, Hopf and steady-state-Hopf bifurcations in delay differential equations with applications to a damped harmonic oscillator with delay feedback, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 22 (2012), 1250286, 31 pp. doi: 10.1142/S0218127412502860.
M.-F. Tsan, Kinetics and distribution of platelets in man, American Journal of Hematology, 17 (1984), 97-104. doi: 10.1002/ajh.2830170114.
Figure 2.1. Region of stability for the null solution of (2.1) with $ \tau = 1 $. The numbers indicate the number of pairs of eigenvalues with positive real parts. The graph is the same for any positive $ \tau $
Figure 2.2. Region of stability for the null solution of (2.1) with $ A = 0.2 $. The numbers indicate the number of pairs of eigenvalues with positive real parts
Figure 2.3. Region of stability for the null solution of (2.1) with $ A = 0.55 $
Figure 2.4. Region of stability for the null solution of (2.1) with A = 1
Figure 4.1. Parametric portraits for the phase portraits near the double Hopf bifurcation (from [1], Figure 3.3)
Figure 4.2. Numerical simulation of (1.1) for $ \tau_2 = 4.75 \times \tau_1 $ in the pseudo-phase plane $ (x(t), x(t-\tau_1-\tau_2)) $, corresponding to the lowest wedge of the $ \mu_1<0 $, $ \mu_2>0 $ quadrant of Figure 4.1. Once the transient dynamic is lost, a stable limit cycle appears
Figure 4.3. Numerical simulation of (1.1) for $ \tau_2 = 4.24 \times \tau_1 $ in the pseudo-phase space $ (x(t), x(t-\tau_1), x(t-\tau_1-\tau_2)) $, corresponding to the lowest interior wedge of the $ \mu_1<0 $, $ \mu_2>0 $ quadrant of Figure 4.1. Once the transient dynamic is lost, a stable torus appears
Figure 4.4. Numerical simulation of (1.1) for $ \tau_2 = 4.24 \times \tau_1 $ in the pseudo-phase plane $ (x(t), x(t-\tau_1-\tau_2)) $, corresponding to the $ \mu_1>0 $, $ \mu_2>0 $ quadrant of Figure 4.1. Once the transient dynamic is lost, a stable limit cycle appears
Figure 5.1. Stability as $ \tau_2 $ or $ \gamma $ are varied and other parameters are fixed. Blue dotted lines represent the values in healthy patients, and red dotted lines mark the limits after which the equilibrium is unstable. We see that when $ \tau_2 $ decreases by one day (to $ \tau_2 = 7.2 $), the system loses its stability. Furthermore, if $ \gamma $ is multiplied by more than 12 (to $ \gamma = 0.625 $), the system also loses its stability
Figure 5.2. The evolution of the platelet count with time (blue line) for different values of $ \tau_2 $ and $ \gamma $, after the transient phase. The green dotted line represents the average platelet count of healthy patients, $ 20 \times 10^9 $ platelets/kg, and the two red dotted lines represent the healthy range of platelet count, $ 11 \times 10^9 $ - $ 32 \times 10^9 $
Constructing subgroups by intersection
Jimeree
I'd like to construct a subgroup of $Sp\left(4,\mathbb{Z}\right)$ of the form:
$$G_0\left(N\right) = M\left(N\right) \cap {Sp}\left(4,\mathbb{Z}\right)$$
where $M\left(N\right)$ is a $4\times4$ matrix over the integer ring with elements that are multiples of the integer $N$. I think I know how to construct such an $M\left(N\right)$ for a given $N$, but how does one then construct such a subgroup $G_0\left(N\right)$? Thanks!
With your definition M(N) is not a group as it does not contain the identity... Do you mean the principal congruence subgroup?
vdelecroix (2015-02-27 16:40:52 -0600)
Sorry, I should have put curly brackets around $M\left(N\right)$, so it's just a matrix. Specifically, I want to construct:
$$G_0\left(N\right) = \left\{ \left( \begin{array}{cccc} \mathbb{Z} & \mathbb{Z} & \mathbb{Z} & N\mathbb{Z} \\ N\mathbb{Z} & \mathbb{Z} & N\mathbb{Z} & N^2 \mathbb{Z} \\ \mathbb{Z} & \mathbb{Z} & \mathbb{Z} & N \mathbb{Z} \\ \mathbb{Z} & \mathbb{Z} & \mathbb{Z} & \mathbb{Z} \end{array} \right) \right\} \cap {Sp}\left(4,\mathbb{Z}\right)$$
We can define congruence subgroups of the modular group in this way, but I want to do the same thing for subgroups of $Sp\left(4,\mathbb{Z}\right)$. Thanks for your help!
Jimeree (2015-02-28 10:25:57 -0600)
The answer really depends on what kind of computations you want to achieve. Could you make that precise in your question? Building such a group in Sage will require a non-trivial amount of work, and the only non-trivial operations you might get will come from the software GAP (which is shipped with Sage and used a lot for everything related to group theory). You should have a look at it.
Okay, thanks for your response! What I really want to look at are the generators for such subgroups.
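In the meantime, a membership test for $G_0(N)$ is easy to write in plain Python (no Sage required). The symplectic form J below is one common convention (Sage's may differ), and the exponent table simply encodes the divisibility pattern displayed above; extracting generators would still be GAP territory, as noted:

```python
import numpy as np

# One standard symplectic form for Sp(4, Z); conventions vary between references.
J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]]).astype(int)

# Exponent of N required to divide entry (i, j), matching the matrix in the question:
N_EXP = [[0, 0, 0, 1],
         [1, 0, 1, 2],
         [0, 0, 0, 1],
         [0, 0, 0, 0]]

def in_G0(M, N):
    """Check that M is symplectic (M^T J M = J) and obeys the divisibility pattern."""
    M = np.asarray(M, dtype=int)
    if not np.array_equal(M.T @ J @ M, J):
        return False
    return all(M[i, j] % (N ** N_EXP[i][j]) == 0
               for i in range(4) for j in range(4))

print(in_G0(np.eye(4, dtype=int), 3))  # the identity: True
```

Filtering candidate matrices through such a test at least lets one experiment with elements of the subgroup before attempting the generator computation in GAP.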
Does a spaceship travelling at near lightspeed see the universe aging slow or fast?
Now I know this probably is well trodden territory, but this question has bugged me for some time, and I couldn't find a similar question in the archives (although plenty about relativistic time dilation and the twin paradox, which is closely related but not quite the same). The theory of special relativity suggests that two objects moving relative to each other (at near light speeds) will each observe the other's clock as running slower. However, both of these reference frames are local reference frames - they do not provide a global perspective of the universe. So I'll give a little background before asking about the scenario, to show the difference between this and previous questions seen here (although I doubt I have been exhaustive in my search here).
At the time of Einstein, no one knew a means of measuring speed relative to the universe that was applicable everywhere in the universe (short of Mach's principle which Einstein supplanted in GR). Neither did they know the universe had a finite age, let alone a means of measuring it. But now we have both - we can measure the age of the universe by the wavelength of the MBR, and we can "in principle" measure the rate of change of the universe by the slow change in the MBR's wavelength. (If we really want to, we can also measure historic elements such as the rate and duration of supernova and the expansion of the universe.) Additionally our velocity relative to the universe can be known by measuring the variation in the MBR due to the Doppler Effect (assuming the MBR is more or less isotropic). That is, in the direction of travel, the MBR would be blue-shifted, while behind, it would be red-shifted, and the speed can be deduced by the magnitude of the wavelength shift. So we now have "in principle" methods of measuring both our velocity with respect to the universe and a form of universal time.
So if an alien is traveling at relativistic speeds relative to the universe, and measuring the MBR and its wavelength change with respect to the internal clock of his ship, will it observe the universe to be aging faster or slower than we would observe?
special-relativity doppler-effect
matscienceman
$\begingroup$ We still don't know of a means of measuring a speed relative to that of the universe. We can blue-shift one part of the sky (while red-shifting the other) by traveling relative to the system in which the CMB dipole disappears. That, however, is not the "special frame of the universe", it's just the frame of the CMB. You could always pick a special frame and then "get into its future faster" by moving relative to it. See e.g. physics.stackexchange.com/questions/25928/… $\endgroup$ – CuriousOne Dec 22 '15 at 5:48
The short answer is that yes, an astronaut moving relative to the cosmic microwave background would measure a shorter time since the Big Bang than an observer stationary wrt to the CMB. However this vague statement needs stating more carefully to make it useful.
If we ignore minor irritants such as inflation and quantum mechanics then the geometry of the expanding universe is described by the FLRW metric:
$$ ds^2 = -c^2dt^2 + a^2(t)d\Sigma^2 $$
For the purposes of this question let's assume the universe is flat, and we'll consider only radial motion. This allows us to simplify the metric to get the expression for the proper time:
$$ d\tau^2 = dt^2 - \frac{a^2(t)}{c^2}dr^2 \tag{1} $$
The quantity $d\tau$ is the proper time, which is equal to the time measured by a clock carried by a freely falling observer. The radius $r$ is measured in comoving units, which aren't the same as the distance measurements we make. The distances we measure are $r$ multiplied by the scale factor $a(t)$, and it's the increase in $a(t)$ with time that we see as the expansion of the universe. In these units stationary observers, i.e. at constant $r$, are moving away from us as the universe expands.
Where the CMB comes in is that the CMB is roughly isotropic for stationary (in comoving coordinates, i.e., constant $r$) observers. NB the CMB does not define the comoving frame; it approximately coincides with the comoving frame because of the way it was created. See Assuming that the Cosmological Principle is correct, does this imply that the universe possess an empirically privileged reference frame? and Is the CMB rest frame special? Where does it come from? for more on this.
Anyhow, if you're a comoving observer then your $r$ coordinate is constant and therefore $dr=0$ and the metric simplifies to:
$$ d\tau^2 = dt^2 $$
And this immediately integrates to give $\tau = t$. The age of the universe $\tau$ is just the time $t$ shown on the clock you've been carrying since the Big Bang. This applies to all comoving observers, so all comoving observers agree on how long it is since the Big Bang.
But suppose you're not a stationary observer. Suppose you are moving at some comoving velocity $v(t)$ so:
$$ \frac{dr}{dt} = v(t) $$
$$ dr = v(t)dt $$
and we can substitute this into our equation (1) above to get:
$$ d\tau^2 = dt^2\left(1 - a^2(t)\frac{v^2(t)}{c^2}\right) $$
Again we get the time since the Big Bang by integrating:
$$ \tau = \int_0^t \sqrt{1 - a^2(t')\frac{v^2(t')}{c^2}} dt' $$
Actually doing the integration is hard even for constant $v$ because $a(t)$ is a complicated function of time, but we don't need to do the integration to see that $\tau < t$. The quantity $a^2(t')v^2(t')/c^2$ is positive because it's a square, and that means the function $\sqrt{1 - a^2(t')v^2(t')/c^2}$ is less than one if $v > 0$. So whatever the precise form of $v(t)$ the integral is always going to give the result:
$$ \tau < t $$
Since comoving observers found $\tau = t$, this means an observer who is moving in comoving coordinates must measure a time since the Big Bang that is less than we measure.
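As a quick numerical illustration of this inequality, one can evaluate the integral for a toy matter-dominated scale factor $a(t) = (t/t_0)^{2/3}$ and a constant comoving velocity; the parameter choices here are arbitrary, picked only so that $a v < c$ throughout:

```python
import numpy as np
from scipy.integrate import quad

# Toy check of tau = integral of sqrt(1 - a(t)^2 v^2 / c^2) dt, with c = 1 and
# t0 = 1 (the present age); v is chosen so that a(t0)*v = 0.5c.
t0, v = 1.0, 0.5
a = lambda t: (t / t0) ** (2.0 / 3.0)

integrand = lambda t: np.sqrt(1.0 - (a(t) * v) ** 2)
tau, _ = quad(integrand, 0.0, t0)
print(f"comoving clock: t = {t0}, moving clock: tau = {tau:.4f}")  # tau < t, as claimed
```

Since the integrand is strictly below one whenever $v > 0$, the moving clock always reads less, whatever the detailed form of $a(t)$.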
John Rennie
A thin supported Pd-Au based membrane for hydrogen generation and purification: A case study
Adolfo Iulianelli* | Yan Huang | Angelo Basile
Institute on Membrane Technology of the Italian National Research Centre, Via P. Bucci Cubo 17/C c/o University of Calabria, Rende (CS) 87036, Italy
Nanjing Tech University, Nanjing 210009, China
[email protected]
In this work, a composite membrane based on a thin Pd-Au metallic layer supported on a ceramic substrate was produced by electroless plating deposition with the intent of generating and, at the same time, purifying hydrogen in a single stage. Permeation tests were performed with pure gases (H2, N2, CO2, CH4) by varying the temperature between 300 °C and 400 °C and the feed pressure from 150 to 250 kPa to evaluate the hydrogen perm-selectivity characteristics of the membrane.
A reference H2/N2 ideal selectivity of around 500 was reached at 400 °C and 50 kPa of transmembrane pressure, and it remained stable for up to 600 h under operation. The presence of defects in the metallic layer negatively affected the membrane performance in terms of H2 perm-selectivity, probably caused by the absence of an intermediate layer, which would have compensated the mechanical stress due to the different thermal dilation coefficients of the metallic layer and the ceramic substrate.
Pd and Pd-Au/Al2O3 membranes, hydrogen separation, H2/N2 selectivity, methane steam reforming
The growing attention towards the utilization of hydrogen as an energy carrier has driven a high demand for hydrogen-permeable membranes as compact devices for hydrogen separation and purification [1]. Several studies in the open literature demonstrated that composite Pd-based membranes are very effective for the aforementioned purposes, particularly when compared to unsupported Pd-based membranes, because of the high cost and mechanical limitations of the latter [2-3]. Palladium and its alloys possess the particular property of being fully hydrogen perm-selective over all other gases, and this has been extensively studied in the last decades [4].
It is well known that hydrogen permeation through palladium-based membranes follows a solution/diffusion mechanism, and many studies were conducted on self-supported thick Pd-based membranes (>5 μm of dense palladium or palladium-alloy layer), because they present full hydrogen perm-selectivity but low permeability, and become more expensive the thicker the membrane is [5]. In order to improve the hydrogen permeance, while reducing the amount of palladium used and, consequently, the membrane cost, in the last twenty years much attention was paid to manufacturing thin palladium and palladium-alloy films supported on porous substrates (both ceramic and metallic) [6-10].
However, the hydrogen transport through a dense layer of palladium or palladium-alloy takes place under a driving force (from a high to a low pressure gas region) in a multi-step mechanism involving: (a) the diffusion of molecular hydrogen on the palladium membrane surface, (b) reversible dissociative adsorption on the palladium surface, (c) dissolution of atomic hydrogen into the metal bulk, (d) diffusion of atomic hydrogen through the bulk metal, (e) association of hydrogen atom on the palladium surface, (f) desorption of molecular hydrogen from the surface, (g) diffusion of molecular hydrogen away from the surface [11].
Commonly, hydrogen permeation through a Pd-based membrane is represented by the equation reported below (Eq. 1):
$J_{H_{2}}=\frac{P\left(p_{hps}^{n}-p_{lps}^{n}\right)}{\delta}$ (1)
JH2 represents the hydrogen flux permeating through the dense layer of palladium or palladium-alloy, P is the hydrogen permeability, δ the thickness of the palladium/palladium-alloy layer, phps and plps the hydrogen partial pressures on the high-pressure (feed) and low-pressure (permeate) sides, respectively, while n is the pressure exponent. The value of n can vary from 0.5 to 1 depending on the rate-determining step among the hydrogen permeation steps reported above. If bulk diffusion through the palladium layer controls the hydrogen permeation mechanism, n is 0.5 (Eq. 1 becomes the Fick-Sieverts law) and, consequently, the Pd-based membrane shows full hydrogen perm-selectivity. On the contrary, if mass transport to or from the surface, dissociative adsorption, or associative desorption is the rate-determining step, n is 1, since these processes depend linearly on the concentration of molecular hydrogen.
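As an illustration of Eq. (1), the sketch below evaluates the flux for n = 0.5 at the feed pressures used in this work; the permeability value is a placeholder order-of-magnitude, not a measured property of this membrane:

```python
import numpy as np

def h2_flux(P, p_feed, p_perm, delta, n=0.5):
    """Hydrogen flux J = P * (p_feed^n - p_perm^n) / delta, as in Eq. (1)."""
    return P * (p_feed**n - p_perm**n) / delta

# Illustrative numbers only: permeability in mol m^-1 s^-1 Pa^-0.5,
# pressures in Pa, thickness in m (8 micrometres, as for this membrane).
P, delta = 1e-8, 8e-6
for p_feed in (150e3, 200e3, 250e3):
    J = h2_flux(P, p_feed, 100e3, delta)
    print(f"p_feed = {p_feed/1e3:.0f} kPa -> J = {J:.3f} mol m^-2 s^-1")
```

The square-root pressure dependence is what distinguishes diffusion-limited (n = 0.5) from surface-limited (n = 1) transport in the fits discussed later.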
Among the many supported palladium-alloy membranes, Pd-Au composite membranes attracted growing interest because gold ensures higher resistance to the catalytic poisoning and corrosive degradation caused by sulfur compounds, globally enhancing the hydrogen permeability over pure palladium (up to 15 % Au content) and reducing the embrittlement phenomenon [12-13]. In early studies, Pd-Au membranes were prepared by expensive metallurgical processes, with a thickness ranging from 25 to 100 μm, and using intermediate layers [14]. More recently, supported thin Pd-Au alloy membranes were prepared by the electroless plating or electroplating technique on different porous substrates, experimentally analyzing the hydrogen perm-selectivity performance and the chemical-physical resistance [15-19]. The intent of this work is to give an overview of Pd-based membranes, paying special attention to the preparation and characterization of composite Pd-alloyed membranes for hydrogen production (in membrane reactor modules), separation and purification.
Figure 1. Metallic layers electrodeposition steps for the Pd-Au/Al2O3 membrane, a) ceramic substrate; b) Pd deposition; c) Au deposition; d) Pd-Au/Al2O3 final membrane
As a case study, a new-generation Pd-Au membrane supported on a ceramic substrate (Figures 1 and 2), prepared without an intermediate layer, is studied by varying pressure and temperature, evaluating their effects on the n-value in the Sieverts equation and on the stability of the membrane, in terms of H2/N2 reference selectivity and hydrogen permeating flux, with respect to thermal cycles.
Figure 2. Cross section of the Pd-Au/Al2O3 membrane (the Pd-Au metallic layer is around 8 μm)
Furthermore, a comparison among our study and the Pd/Pd-alloyed membranes present in literature is also illustrated.
2. Experimental
The case study of this work considers a non-commercial Pd-Au/α-Al2O3 membrane with an average metallic layer of 8 μm, a total length of 7.5 cm (5.0 cm of active length), o.d. 13 mm and i.d. 8 mm. The Pd-Au layer was deposited via the electroless plating technique following a multistage process. When the Pd deposition was completed, the Pd/Al2O3 sample was soaked in water before Au plating. The plating agent comprises 1 g/L HAuCl4·4H2O, 70 g/L Na2EDTA and 250 mL/L NH3·H2O; the reducing agent is a 0.5 mol/L N2H4 solution. The composite membrane was housed in the membrane module, and two graphite gaskets were used to prevent mixing between the permeate and retentate streams. The operating temperature was varied between 300 °C and 400 °C, and the feed pressure between 150 kPa (abs.) and 250 kPa (abs.), while the permeate pressure was kept constant at 100 kPa (abs.) throughout the whole experimental campaign.
The ideal H2 perm-selectivities (Eq. 2) of the supported Pd-Au/α-Al2O3 membrane were experimentally evaluated by permeation tests with pure N2, H2, CH4 and CO2. The volume flow rate of each pure gas permeating through the membrane was measured by means of a bubble-flow meter as an average value of at least 10 experimental points.
$\alpha_{H_{2}/i} = J_{H_{2}}/J_{i}$ (i = N2, CO2, CH4) (2)
where JH2 and Ji are the H2 permeating flux (Eq. 1) and the permeating flux of another pure gas among CO2, N2 and CH4.
An X-ray diffraction (XRD) analysis was carried out on a Siemens D8 Bruker-Axs III diffractometer with Cu-Kα radiation operating at 40 kV and 30 mA.
SEM analyses were done using a Phenom ProX desktop.
XRD patterns of Pd-Au membrane before and after heat treatment are illustrated in Figure 3. The higher intensity of Au signals is because the outer layer of the membrane precursor is Au.
After heat treatment, both palladium and gold with a cubic crystallinity were completely transformed into a Pd-Au alloy with a cubic crystallinity. As shown in Figure 3, the peak of the Pd-Au alloy is closer to that of palladium, probably because of the much larger content of palladium.
Figure 3. XRD Patterns of Pd-Au membrane
The H2 flux permeating through the composite membrane at various temperatures, with the transmembrane pressure varied between 150 and 250 kPa, is reported in Figures 4-6, where the graphical assessment of the n-value indicated that the best fitting is reached at n = 0.5.
Figure 4. Graphical assessment of H2 permeating flux vs transmembrane pressure at 300 °C for the Pd-Au/a-Al2O3 membrane
Consequently, being n=0.5 for all the temperatures investigated in this work, the mechanism regulating the H2 transport through the Pd-Au/Al2O3 membrane can be described by the Fick-Sieverts law (Eq. 3).
$J_{H_{2}}=\frac{P\left(p_{hps}^{0.5}-p_{lps}^{0.5}\right)}{\delta}$ (3)
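The graphical assessment of the n-value can also be done numerically, e.g., by a least-squares fit of the pressure exponent; the data points below are synthetic (generated with n = 0.5) purely to illustrate the procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the pressure exponent n in J = k * (p_feed^n - p_perm^n) from flux/pressure
# data. These "measurements" are synthetic, generated with n = 0.5.
p_perm = 100e3
p_feed = np.array([150e3, 200e3, 250e3])
J_obs = 1.25e-3 * (p_feed**0.5 - p_perm**0.5)

model = lambda p, k, n: k * (p**n - p_perm**n)
(k_fit, n_fit), _ = curve_fit(model, p_feed, J_obs, p0=(1e-3, 0.6))
print(f"fitted n = {n_fit:.3f}")  # ~0.5 -> Sieverts-type, diffusion-limited transport
```

A fitted exponent close to 0.5 supports bulk diffusion as the rate-determining step, whereas a value drifting towards 1 would point to surface-limited transport or to defects.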
The resulting H2/other gas selectivities are then reported in Table 1 (at 400 °C).
Table 1. Perm-selectivity characteristics of the Pd-Au/Al2O3 membrane at 400 °C
As stated above, the hydrogen transport mechanism through the membrane follows the Fick-Sieverts law and, consequently, higher H2 perm-selectivities were expected. However, the low H2 perm-selectivity values reported in Table 1 can be explained by the possible presence of defects in the separative metallic layer, which favored the transport of gases other than hydrogen by a mechanism different from solution/diffusion. The defects in the metallic film could be related to the absence of an intermediate layer, responsible for a non-uniform metallic thickness distribution on the porous ceramic support; this probably allowed local imperfections in the Pd-Au film. It was also reflected in the decreasing trend of the perm-selectivities at higher transmembrane pressures, which favored higher pure gas flow rates through the defects of the metallic layer.
Generally speaking, an intermediate layer plays a crucial role when H2-selective metallic films are deposited on porous metallic supports. Indeed, it acts as a barrier limiting the intermetallic diffusion. On the contrary, for porous ceramic supports, whose roughness should favor a better adhesion of the deposited metallic layer, the intermediate layer is useful for compensating the effect of the different thermal dilatation of the two materials constituting the composite membrane, avoiding the formation of defects and, consequently, membrane failure.
The negative role of the absence of the intermediate layer was checked by analyzing the effects of thermal cycles on the perm-selectivity performance of the composite membrane. After almost 650 h under operation, the membrane module was cooled down to room temperature and heated up once again to 400 °C to observe the effect of the thermal cycle. At this temperature and $\Delta p$ = 50 kPa, a dramatic decrease of the H2/N2 ideal selectivity was observed, inducing us to stop the experimental tests and confirming that the different thermal dilation/contraction coefficients of the Pd-Au layer and the ceramic support were responsible for local cracks in the separative layer, consequently leading to the low H2 perm-selectivity values.
A H2/N2 ideal selectivity around 500 was reached at 400 °C and 50 kPa of transmembrane pressure and it remained stable up to 600 h under operation. The presence of defects on the metallic layer affected the H2 perm-selectivity of the membrane and it was probably caused by the absence of an intermediate layer, useful for compensating the mechanical stress due to different thermal dilation coefficients of the two materials constituting the composite membrane.
[1] Alavi M, Iulianelli A, Rahimpour MR, Eslamloueyan R, De Falco M, Bagnato G. (2017). Basile fixed bed membrane reactors for ultrapure hydrogen production: Modelling approach. Hydrogen Production, Separation and Purification for Energy, Institution Engineering and Technology 231-257.
[2] Iulianelli A, Basile A. (2018). Advances on inorganic membrane reactors for production of hydrogen. Encyclopedia of Sustainability Science and Technology 1-11. https://doi.org/10.1007/978-1-4939-2493-6_948-1
[3] Basile A, Iulianelli A, Tong J. (2015). Single-stage hydrogen production and separation from fossil fuels using micro- and macromembrane reactors. Compendium of Hydrogen Energy 1: 445-468. http://dx.doi.org/10.1016/B978-1-78242-361-4.00015-7
[4] Zornoza B, Casado C, Navajas A. (2015). Advances in hydrogen separation and purification with membrane technology. Palladium membrane technology for hydrogen production, carbon capture and other applications: Principles, energy production and other applications. Woodhead Publishing Series in Energy 167-191.
[5] Paglieri S, Way J. (2002). Innovations in palladium membrane research. Separation and Purification Methods 31(1): 1-169. http://dx.doi.org/10.1081/SPM-120006115
[6] Zhang X, Xiong G, Yang W. (2008). A modified electroless plating technique for thin dense palladium composite membranes with enhanced stability. Journal of Membrane Science 314(1): 67-84. http://dx.doi.org/10.1016/j.memsci.2008.01.051
[7] Jun CS, Lee KH. (2000). Palladium and palladium alloy composite membranes prepared by metal-organic chemical vapor deposition method (cold-wall). Journal of Membrane Science 176(1): 121-130. http://dx.doi.org/10.1016/S0376-7388(00)00438-5
[8] Li H, Caravella A, Xu HY. (2016). Recent progress in Pd-based composite membranes. Journal of Materials Chemistry A 4(37): 14069-14094. http://dx.doi.org/10.1039/C6TA05380G
[9] Ma YH, Mardilovich IP, Engwall EE. (2003). Thin composite palladium and palladium/alloy membranes for hydrogen separation. Annals of the New York Academy of Sciences 984(1): 346-360. http://dx.doi.org/10.1111/j.1749-6632.2003.tb06011.x
[10] Yun S, Ted Oyama S. (2011). Correlations in palladium membranes for hydrogen separation: A review. Journal of Membrane Science 375(1-2): 28-45. http://dx.doi.org/10.1016/j.memsci.2011.03.057
[11] Basile A, Blasi A, Fiorenza G, Iulianelli A, Longo T, Calabrò V. (2011). Membrane and membrane reactor technologies in the treatment of syngas streams produced from gasification processes. in Gasification: Chemistry, Processes and Applications, Michael D. Baker (Ed.), Nova Sci. Pub. 139-174.
[12] Shi L, Goldbach A, Zeng G, Xu H. (2010). Preparation and performance of thin layer PdAu/ceramic composite membranes. International Journal of Hydrogen Energy 35(9): 4201-4208. http://dx.doi.org/10.1016/j.ijhydene.2010.02.048
[13] Way JD, Lusk M, Thoen P. (2008). Sulfur-resistant composite metal membranes. US Patent 2008/0038567, Feb. 14, 2008. https://techportal.eere.energy.gov/application.do/ID=21924
[14] Gade SK, Payzant EA, Park HJ, Thoen PM, Way JD. (2009). The effects of fabrication and annealing on the structure and hydrogen permeation of Pd–Au binary alloy membranes. Journal of Membrane Science 340(1-2): 227-233. http://dx.doi.org/10.1016/j.memsci.2009.05.034
[15] Chen CH, Ma YH. (2010). The effect of H2S on the performance of Pd and Pd/Au composite membrane. Journal of Membrane Science 362(1): 535-544. http://dx.doi.org/10.1016/j.memsci.2010.07.002
[16] Iulianelli A, Alavi M, Bagnato G, Liguori S, Wilcox J, Rahimpour MR, Eslamlouyan R, Anzelmo B, Basile A. (2016). Supported Pd-Au membrane reactor for hydrogen production: Membrane preparation, characterization and testing. Molecules 21(5): 581-594. https://doi.org/10.3390/molecules21050581
[17] Tardini A, Gerboni C, Cornaglia L. (2013). PdAu membranes supported on top of vacuum-assisted ZrO2 modified porous stainless steel substrates. Journal of Membrane Science 428: 1-10. http://dx.doi.org/10.1016/j.memsci.2012.10.029
[18] Lee SW, Oh DK, Park JW, Lee CB, Lee DW, Park JS, Kim SH, Hwang KR. (2015). Effect of a Pt-ZrO2 protection layer on the performance and morphology of Pd-Au alloy membrane during H2S exposure. Journal of Alloys and Compounds 641: 210-215. http://dx.doi.org/10.1016/j.jallcom.2015.03.210
[19] Patki NS, Lundin ST, Way JD. (2018). Apparent activation energy for hydrogen permeation and its relation to the composition of homogeneous PdAu alloy thin-film membranes. Separation and Purification Technology 191: 370-374. http://dx.doi.org/10.1016/j.seppur.2017.09.047
Calculate Determinants of Matrices
Calculate the determinants of the following $n\times n$ matrices.
\[A=\begin{bmatrix}
1 & 0 & 0 & \dots & 0 & 0 &1 \\
1 & 1 & 0 & \dots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \dots & \dots & \ddots & \vdots \\
0 & 0 & 0 &\dots & 1 & 1 & 0\\
0 & 0 & 0 &\dots & 0 & 1 & 1
\end{bmatrix}\]
The entries of $A$ are $1$ at the diagonal entries, the entries just below the diagonal, and the $(1, n)$-entry.
The other entries are zero.
\[B=\begin{bmatrix}
1 & 0 & 0 & \dots & 0 & 0 & -1 \\
-1 & 1 & 0 & \dots & 0 & 0 & 0 \\
0 & -1 & 1 & \dots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 &\dots & -1 & 1 & 0\\
0 & 0 & 0 &\dots & 0 & -1 & 1
\end{bmatrix}\]
The entries of $B$ are $1$ at the diagonal entries.
The entries just below the diagonal and the $(1,n)$-entry are $-1$.
Hint.
Calculate the first row cofactor expansion.
The determinant of a triangular matrix is the product of its diagonal entries.
Apply the cofactor expansion corresponding to the first row. We obtain
\begin{align*}
\det(A)&=
\begin{vmatrix}
1 & 0 & \dots & 0 & 0 & 0 \\
\vdots & \vdots & \dots & \ddots & \vdots & \vdots \\
0 & 0 &\dots & 1 & 1 & 0\\
0 & 0 &\dots & 0 & 1 & 1
\end{vmatrix}
+(-1)^{n+1}
\begin{vmatrix}
1 & 1 & 0 & \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 &\dots & 1 & 1 \\
0 & 0 & 0 &\dots & 0 & 1
\end{vmatrix}
\end{align*}
The two smaller (minor) $(n-1) \times (n-1)$ matrices are both triangular.
Thus we see that
\begin{align*}
\det(A)&=1+(-1)^{n+1} \\
&= \begin{cases}
2 & \text{ if } n \text{ is odd}\\
0 & \text{ if } n \text{ is even}.
\end{cases}
\end{align*}
Next we calculate $\det(B)$. By the first row cofactor expansion, we obtain
\begin{align*}
\det(B)&=
\begin{vmatrix}
1 & 0 & \dots & 0 & 0 & 0 \\
-1 & 1 & \dots & 0 & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 &\dots & -1 & 1 & 0\\
0 & 0 &\dots & 0 & -1 & 1
\end{vmatrix}
+(-1)^{n+1}(-1)
\begin{vmatrix}
-1 & 1 & 0 & \dots & 0 & 0 \\
0 & -1 & 1 & \dots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 &\dots & -1 & 1 \\
0 & 0 & 0 &\dots & 0 & -1
\end{vmatrix}.
\end{align*}
The two minor matrices are both triangular.
All the diagonal entries of the first minor matrix are $1$ and those of the second minor matrix are $-1$.
Thus we have
\begin{align*}
\det(B)&=1+(-1)^{n}(-1)^{n-1}=0.
\end{align*}
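A quick numerical sanity check of both formulas for small $n$ (using NumPy; the matrix constructors are ad hoc helpers for this post):

```python
import numpy as np

def make_A(n):
    """Matrix with 1 on the diagonal, the subdiagonal, and at position (1, n)."""
    A = np.eye(n, dtype=int) + np.eye(n, k=-1, dtype=int)
    A[0, n - 1] = 1
    return A

def make_B(n):
    """Matrix with 1 on the diagonal, -1 on the subdiagonal and at (1, n)."""
    B = np.eye(n, dtype=int) - np.eye(n, k=-1, dtype=int)
    B[0, n - 1] = -1
    return B

for n in range(3, 8):
    dA = round(np.linalg.det(make_A(n)))
    dB = round(np.linalg.det(make_B(n)))
    print(f"n={n}: det(A)={dA} (expected {2 if n % 2 else 0}), det(B)={dB} (expected 0)")
```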
Tags: cofactor expansion, determinant, linear algebra, matrix, minor matrix, triangular matrix
BMC Medical Research Methodology
Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic
Clémence Leyrat1,2,3,4,
Agnès Caille1,2,3,5,
Yohann Foucher6 &
Bruno Giraudeau1,2,3,5
BMC Medical Research Methodology volume 16, Article number: 9 (2016) Cite this article
Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, diagnosing such an imbalance is essential to adjust the statistical analysis if required.
We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied this method to analyze the data from 2 published CRTs.
The proposed method had good performance for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small as compared with the total number of baseline covariates (≥40 % of unbalanced covariates). We also provide a strategy for preselection of the covariates to be included in the PS model to enhance imbalance detection.
The proposed tool could be useful in deciding whether covariate adjustment is required before performing statistical analyses of CRTs.
In cluster randomized trials (CRTs), the units of randomization are not individuals but rather the social units to which the individuals belong [1]. This may challenge the balance between groups in terms of baseline covariates. Indeed, clusters are sometimes randomized before the identification and recruitment of participants, which may jeopardize allocation concealment [2–5]. In their review, Puffer et al. [6] showed that 39 % of the selected CRTs were at risk of confounding bias on individual characteristics. That was also supported by the work of Brierley et al. [7], who found a risk of bias in 40 % of CRTs that did not use prior identification of participants. In addition, the risk of chance imbalances increases when the number of randomized clusters decreases, which is frequent [8, 9].
Some allocation techniques have been proposed to achieve a better baseline balance in CRTs, but they are not always feasible to implement in practice [10]. If imbalance occurs on one or more prognostic factors, the intervention effect estimate may be biased and could compromise the validity of statistical inferences. Identifying baseline imbalance in CRTs is therefore of importance to implement suitable statistical analyses.
In individually randomized trials, statistical testing is not recommended to assess group comparability, because if randomization is properly applied, all observed imbalances will be due to chance [11, 12]. When reporting the results of a randomized trial, the CONSORT statement advises displaying baseline characteristics in a table to gauge group comparability [13]. The same recommendation is given in the CONSORT extension for CRTs, both for individual-level and cluster-level covariates [14]. Fayers and King [15] stated that significance tests "are usually only worth doing if potential violation of the randomisation is suspected". In some CRTs, allocation concealment is impossible (for instance, when participants are recruited after the randomization of clusters and no blinding is possible), and in this case tests may therefore be worthwhile. Nevertheless, Wright et al. [16] showed that about 44.7 % of papers reporting the results of a CRT did not provide a statistical test for covariate balance, and 20 % did not even display a table reporting covariates between groups.
The problem of baseline imbalance observed in some CRTs is close to the imbalance that can occur in observational studies [17]. For these latter studies, several methods exist to assess group comparability at baseline. The methods can be divided into two groups: those that assess covariate balance one covariate at a time, and those that allow a global assessment of the balance on several baseline covariates [18]. Significance testing (based on the t test or χ2 test, for example), the standardized difference [19], the overlapping coefficient [18] and the Kolmogorov-Smirnov [20] or Lévy [21] distances belong to the first group of methods. Belitser [22] found that the standardized difference (see Table 1) had the highest correlation with the bias of the intervention effect estimate. Standardized differences also perform well with small sample sizes [23], so they may have the best performance in detecting baseline imbalance when covariates are considered one by one. Nevertheless, this method does not provide a global overview of the overlap of covariates between groups. Global assessment of imbalance on several covariates simultaneously is of interest in that it allows for capturing the correlations between covariates. For example, consider two quantitative prognostic factors whose effects on the outcome act in the same direction: high values of both covariates lead to a higher risk of the event. If each of these prognostic factors is only slightly unbalanced, a univariate test may not detect any imbalance; however, the impact of both imbalances together may cause an important bias in the intervention effect estimate. Consequently, a global approach is more appropriate in the context of CRTs to handle the complex relationships underlying a potential confounding bias.
Table 1 Standardized differences
Global metrics include the Mahalanobis distance [24], the post-matching c-statistic of the propensity score (PS) model [18] and the L1 measure [25]. Franklin et al. [18] found that the c-statistic of the PS model led to the best prediction of bias for binary, count or continuous outcomes, provided the sample size is large enough. This statistic represents the extent to which covariates can predict intervention allocation. The c-statistic of the PS model has been used to help in the selection of variables to include in the PS model (even if this method is not recommended [26]) but, to our knowledge, has not been used as a tool to detect baseline imbalance.
In this context, we developed a decision rule based on the c-statistic of the PS model and its expected probability distribution to assess baseline imbalance. This method can be viewed as a global statistical test at a 5 % significance level. The basic idea is to use the distribution of the c-statistic in accordance with the characteristics of the CRT (size, number of covariates) to choose the cut-off for the detection of imbalance, rather than using a unique threshold value. It is important to note that the PS model fitted to detect imbalance is different from the model fitted for the statistical analysis of the trial. In both situations, the outcome of the PS model is the treatment allocation; however, in the former, all covariates associated with treatment allocation have to be included in the PS model, whereas in the latter, only covariates linked to both treatment allocation and the outcome need to be accounted for. Indeed, when analyzing the trial, selecting confounders only for a propensity score analysis is desirable [27, 28], whereas such a restriction does not hold for our aim, which is to detect any baseline imbalance and thus obtain a qualitative assessment of the risk of bias in a given CRT. This paper is organized as follows. We first describe two motivating examples of CRTs at risk of confounding bias because clusters were randomized before patients were enrolled. We then give the theoretical background for the PS approach and the c-statistic, followed by the objectives of the present paper and the principle of our method. Then, we give the design and the results of a simulation study assessing the performance of our method, based on the distribution of the c-statistic, to detect baseline imbalance in CRTs. The implications in terms of risk of confounding bias and need for covariate adjustment are then discussed, along with an application of our method to the two motivating examples.
Motivating examples
Example 1: Management of osteoarthritis with a patient-administered assessment tool
The first motivating example was a published CRT using a 2×2 factorial design that aimed to assess the impact of an unsupervised home-based exercise programme, the use of standardized evaluation tools, their combination, or usual care on symptoms (pain, global assessment of disease and physical functioning) in patients with knee and hip osteoarthritis (OA) [29]. A total of 867 rheumatologists were randomized and each had to enrol four patients (three with knee OA and one with hip OA). Thus, rheumatologists were not blinded to intervention allocation. For simplicity, we focus on only one intervention: the use of standardized evaluation tools. In all, 1462 patients received the standardized evaluation tools and 1495 patients received usual care. Twelve covariates were collected at baseline (Table 2). Standardized differences are displayed to assess the balance between arms. Univariate statistical testing showed an imbalance in age, pain, disability (measured by the Western Ontario and McMaster Universities Osteoarthritis Index [WOMAC] physical function subscale) and global assessment of disease at a 5 % significance level. These imbalances correspond to a standardized difference of 7.81 % for age but greater than 10 % for the other variables. Moreover, these variables were known to be strongly associated with the potential outcome of the subjects. Because these variables were associated with whether patients were enrolled into the trial in a given group, they constituted possible confounders. In addition, pain, WOMAC score and global assessment of disease were correlated with each other, with Pearson correlation coefficients in the range [0.38–0.49].
Table 2 Patient baseline characteristics per group in the study on management of osteoarthritis with a standardized evaluation tool (first motivating example)
Example 2: Standardized consultation for patients with osteoarthritis of the knee
The second example was a CRT that evaluated the impact of standardized consultations versus usual care for patients with OA of the knee [30]. It was an open pragmatic CRT in which 198 rheumatologists were randomized, each of whom had to include two consecutive patients who met the inclusion criteria. In total, 154 patients were allocated to standardized consultation and 182 to usual care. Overall, 26 covariates were measured at baseline (Table 3). Statistical testing revealed a significant imbalance in body mass index (BMI), delay in years since the beginning of pain, age at the beginning of pain and the use of non-drug treatments. Moreover, some other variables (weight, pain, Medical Outcomes Study Short Form 12 [SF-12] mental score, and eight concomitant treatments) had a standardized difference greater than the usual threshold of 10 %.
Table 3 Patient baseline characteristics per group in the study on standardized consultation for patients with osteoarthritis of the knee (second motivating example)
Theoretical background
Propensity score theory
The PS theory was initially developed by Rosenbaum and Rubin [31] to overcome the problem of confounding bias in observational studies. The individual PS refers to the probability, for a subject l involved in a study, of receiving the intervention of interest \((T_l=1)\) rather than the control intervention \((T_l=0)\), conditionally on the subject's baseline characteristics \(\boldsymbol{x_l}=(x_{(1)l},\ldots,x_{(r)l})\). The PS is frequently denoted \(e(\boldsymbol{x_l})\) and is defined as \(e(\boldsymbol{x_l})=P(T_l=1|\boldsymbol{x_l})\). The true PS is unknown in practice, but it can be estimated by logistic regression, modeling the probability of receiving the intervention of interest given the r observed covariates as follows:
$$ e(\boldsymbol{x_{l}})=\left\{1+\exp{\left(-\alpha_{0}-\sum_{p=1}^{r}{\alpha_{p}x_{(p)l}}\right)}\right\}^{-1}, $$
(3)
where \(\alpha_0\) is the intercept and \(\alpha_p\) \((p=1,\ldots,r)\) are the regression coefficients.
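In R (the language of the paper's own code), this estimation step amounts to a single logistic regression. A minimal sketch, assuming a hypothetical data frame `dat` whose column `trt` is the 0/1 intervention indicator and whose remaining columns are the baseline covariates:

```r
# Fit the PS model of Eq. (3) by logistic regression; `dat` and `trt`
# are hypothetical names for the trial data and treatment indicator.
ps_model <- glm(trt ~ ., data = dat, family = binomial)
ps <- predict(ps_model, type = "response")  # estimated e(x_l) for each subject
```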
In CRTs, the PS has been studied for the estimation of the intervention effect [27, 28] or to improve randomization [32], but not for the detection of imbalance between groups.
The c-statistic
The c-statistic (concordance statistic) measures the discriminatory capacity of a predictor [33]. It also corresponds to the area under the receiver operating characteristic (ROC) curve, which displays sensitivity as a function of 1-specificity for all the possible thresholds of the predictor [34]. If we consider an intervention allocation (intervention vs. control), the c-statistic is the probability that a subject receiving the intervention has a higher value for the predictor than a subject in the control group [35]. It can be estimated as follows:
$$ \hat{c}=\frac{1}{n_{0}n_{1}}\sum_{i=1}^{n_{0}}\sum_{j=1}^{n_{1}}\mathbb{1}(p_{i}<p_{j}), $$
where \(i=1,\ldots,n_0\) indexes participants in the untreated group, \(j=1,\ldots,n_1\) indexes participants in the treated group, and \(\mathbb{1}(p_i<p_j)\) is a dummy variable equal to 1 if \(p_i<p_j\) and 0 otherwise. The c-statistic takes its values in the range [0.5;1.0], where 0.5 corresponds to a classification that does not outperform chance and 1.0 corresponds to perfect classification. In our situation, the groups are the treatment arms and the predictor is the prediction obtained from the PS model. The c-statistic is often computed with the predictions obtained from a logistic model.
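Continuing the previous sketch, the pairwise estimator above can be computed directly from the fitted PS values (strict inequality, as in the indicator; ties are counted as 0 here):

```r
# Pairwise c-statistic: proportion of (control, treated) pairs with p_i < p_j.
c_statistic <- function(ps, trt) {
  p0 <- ps[trt == 0]        # PS of untreated participants (i = 1..n0)
  p1 <- ps[trt == 1]        # PS of treated participants (j = 1..n1)
  mean(outer(p0, p1, "<"))  # (1 / (n0 * n1)) * sum of indicators
}
c_hat <- c_statistic(ps, trt = dat$trt)
```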
Propensity scores and the c-statistic
In the absence of baseline imbalance, the PS distribution is centered on 0.5 and is similar in each group of the study, because the baseline variables are then independent of the intervention allocation. In other words, the c-statistic of the PS model (3) is close to 0.5. By contrast, if at least one covariate is associated with intervention allocation, the c-statistic will be larger than 0.5. To our knowledge, the c-statistic of the PS model has not been used as a tool to detect baseline imbalance.
Objectives and principles
We developed a method, based on the c-statistic of the PS model, to detect baseline imbalance between groups in CRTs. In practice, this method is a tool to assess the risk of confounding bias and to identify the situations in which suitable statistical methods taking imbalance into account must be implemented. Our method relies on three steps:
(i) the c-statistic is estimated from the data of the CRT for which one wants to assess the baseline balance;
(ii) the 95th percentile of the c-statistic distribution under the hypothesis of no systematic baseline imbalance is determined from simulations with the same number of covariates and the same sample size as in the CRT;
(iii) the statistical decision rule is applied: if the c-statistic estimated in step (i) is above the threshold value obtained in step (ii), then a baseline imbalance is suspected.
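A compact sketch of the three steps, where `simulate_null_crt()` stands for a hypothetical generator producing a dataset of the same size and covariate structure as the trial but with no systematic imbalance (as described in Additional file 1: Appendix A), and `c_statistic()` is the function from the previous sketch:

```r
# Decision rule: compare the observed c-statistic with the 95th percentile
# of its simulated null distribution.
detect_imbalance <- function(dat, n_sim = 1000) {
  fit_c <- function(d) {
    c_statistic(fitted(glm(trt ~ ., data = d, family = binomial)), d$trt)
  }
  c_obs  <- fit_c(dat)                                      # step (i)
  c_null <- replicate(n_sim, fit_c(simulate_null_crt(dat))) # hypothetical generator
  thresh <- unname(quantile(c_null, 0.95))                  # step (ii)
  list(c = c_obs, threshold = thresh,
       imbalance_suspected = c_obs >= thresh)               # step (iii)
}
```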
Because of the use of the 95th percentile of the c-statistic distribution as a threshold, our method is similar to a global statistical test for baseline imbalance at a 5 % significance level. It is important to note that our method focuses only on individual-level characteristics; indeed, in CRTs, clusters are the unit of randomization, and thus any observed imbalance in cluster-level covariates will be due to sampling fluctuations. Applying this method to cluster-level covariates would be similar to baseline testing in individually randomized trials, which is not recommended in practice. Conversely, because participants are not the randomization units in CRTs, confounding bias may affect some trials, leading to systematic imbalances in individual-level variables [17]. Because threshold values differ for each combination of sample size and number of covariates (illustrative results are given in Additional file 1), our approach is more flexible than using a unique threshold for the c-statistic. Indeed, our method uses the empirical distribution of the c-statistic of the PS model given the characteristics of the CRT of interest.
The objectives of the present paper are to assess the performance of this method with a simulation study and to interpret the diagnosis of baseline imbalance in terms of risk of confounding bias and need for covariate adjustment.
Design of the simulation study
We performed a simulation study to assess the performance of the proposed method in detecting baseline imbalance. The determination of threshold values for our method (step (ii) above) is described in Additional file 1: Appendix A.
Data generation
We generated datasets corresponding to CRTs without systematic imbalance and estimated the c-statistic of the PS model for each dataset. The data were generated as follows:
Cluster size: Let us consider a two-parallel-arm CRT in which 2k clusters of mean size m are randomized. We generated cluster sizes, as proposed by Turner et al. [36], from a Poisson distribution with parameter m: \(m_{ij}\sim \mathcal{P}(m)\) \((i=0,1\) the intervention index and \(j=1,\ldots,k\) the cluster index).
Covariates: Let \(\boldsymbol{X}=(X_1,\ldots,X_r)\) be a vector of r randomly generated covariates, among which \(r_c\) are continuous and \(r_b\) are binary \((r_c+r_b=r)\). To generate \(\boldsymbol{X}\), a vector \(\boldsymbol{X_0}\) was first drawn from a multivariate normal distribution \(\mathcal{N}_r(0,\boldsymbol{\Sigma}_{r\times r})\), without loss of generality.
At this stage, we have a matrix \(\boldsymbol{X_0}\) of r continuous balanced covariates measured at baseline. However, this situation does not differ from an individually randomized trial. To fit the situation of a real CRT, we induced an intraclass correlation for the covariates, meaning that subjects belonging to the same cluster had more similar individual characteristics. We randomly drew a cluster effect \(\gamma_{pj}\) \((j=1,\ldots,k)\) for each cluster j and each covariate p \((p=1,\ldots,r)\) from a \(\mathcal{N}(0,0.15)\) distribution, with the constraint \(\sum_{j=1}^{k}\gamma_{pj}=0\) in each arm. The variance parameter of 0.15 for the cluster effect was chosen to obtain intraclass correlation coefficient (ICC) values for the covariates in the range [0.01;0.05]. These values are based on those observed for baseline characteristics in the study of Kul et al. [37]. Then, for each subject in cluster j and each covariate, a random error was drawn from a \(\mathcal{N}(\gamma_{pj},1)\) distribution, and this error term was added to \(\boldsymbol{X_0}\), the initial value of the covariate.
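A minimal R sketch of these generation steps, under stated assumptions: k clusters per arm, mean cluster size m, r covariates; `Sigma` is the covariance matrix built in the next step (a placeholder identity is used here), and the per-arm sum-to-zero constraint is approximated by centering the cluster effects:

```r
library(MASS)  # for mvrnorm
k <- 10; m <- 20; r <- 10
Sigma   <- diag(r)                             # placeholder; see covariance-matrix sketch below
m_ij    <- rpois(2 * k, lambda = m)            # cluster sizes m_ij ~ P(m)
n       <- sum(m_ij)
cluster <- rep(seq_len(2 * k), times = m_ij)   # cluster index of each subject
trt     <- as.integer(cluster > k)             # k clusters per arm
X0      <- mvrnorm(n, mu = rep(0, r), Sigma = Sigma)
gamma   <- matrix(rnorm(2 * k * r, 0, sqrt(0.15)), nrow = 2 * k)  # gamma_pj, variance 0.15
gamma   <- sweep(gamma, 2, colMeans(gamma))    # approximate sum-to-zero constraint
# Error ~ N(gamma_pj, 1), decomposed as cluster effect + standard normal noise:
X       <- X0 + gamma[cluster, ] + matrix(rnorm(n * r), n, r)
```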
Among the r generated covariates, we induced an imbalance on s of them. These covariates were correlated with each other, because such correlations are often observed in clinical trials [38]. Moreover, for each of the s unbalanced covariates, the standardized difference (reflecting the 'size' of the imbalance) depended on the degree of correlation between covariates: two highly correlated covariates had similar standardized differences. To induce the correlations between the standardized differences, the vector \(\boldsymbol{X_0}=(X_1,\ldots,X_r)\) of r covariates was first randomly drawn from a multivariate normal distribution \(\mathcal{N}_r(0,\boldsymbol{\Sigma}_{r\times r})\) with the following covariance matrix:
$$ \boldsymbol{\Sigma_{r\times r}} = \left(\begin{array}{ccccccc} 1 & \sigma_{1,2} & \cdots & \sigma_{1,s} & \sigma_{1,s+1} & \cdots & \sigma_{1,r}\\ \sigma_{2,1} & 1 & \cdots & \sigma_{2,s} & \sigma_{2,s+1} & \cdots & \sigma_{2,r}\\ \vdots & \vdots & \ddots & \vdots & \vdots & \cdots & \vdots\\ \sigma_{s,1} & \sigma_{s,2} & \cdots & 1 & \sigma_{s,s+1} & \cdots & \sigma_{s,r}\\ \sigma_{s+1,1} & \sigma_{s+1,2} & \cdots & \sigma_{s+1,s} & 1 & \cdots & \sigma_{s+1,r} \\ \vdots & \vdots & \cdots & \vdots & \vdots & \ddots & \vdots\\ \sigma_{r,1} & \sigma_{r,2} & \cdots & \sigma_{r,s} & \sigma_{r,s+1} & \cdots & 1\\ \end{array}\right). $$
\(\sigma_{f,g}\) \((f,g=1,\ldots,r,\ f\neq g)\) represents the covariance, and hence the correlation, between covariates \(X_f\) and \(X_g\), because the covariates follow standard normal distributions. The covariance matrix \(\boldsymbol{\Sigma}_{r\times r}\) was a positive definite matrix randomly generated with the R function genPositiveDefMat from the clusterGeneration package. For convenience, we considered the absolute values of the covariance matrix.
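A sketch of this step; the `covMethod` argument is an assumption, since the paper names only the function:

```r
library(clusterGeneration)
# Random positive definite matrix, rescaled to unit variances so that
# covariances equal correlations, then absolute values taken as in the text.
Sigma <- genPositiveDefMat(dim = r, covMethod = "unifcorrmat")$Sigma
Sigma <- abs(cov2cor(Sigma))
```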
Second, the sub-matrix \(\boldsymbol{\Sigma}_{s\times s}\) of \(\boldsymbol{\Sigma}_{r\times r}\) was used to draw the standardized differences of the unbalanced covariates from a multivariate normal distribution. Let Δ and \(s_{\Delta}\) be the mean and the standard deviation, respectively, of the standardized differences of the unbalanced covariates; each of the s standardized differences thus followed a \(\mathcal{N}(\Delta,s_{\Delta}^{2})\) distribution. Because \(\sigma_{f,g}=\rho_{f,g}\,\sigma_{f}\sigma_{g}\), the covariance matrix used to generate the standardized differences was \(\boldsymbol{\Sigma}^{\Delta}_{s\times s}=s_{\Delta}^{2}\,\boldsymbol{\Sigma}_{s\times s}\). The standardized differences \(\boldsymbol{\Delta}=(\Delta_{1},\ldots,\Delta_{s})\) were therefore drawn from a multivariate normal distribution with mean \(\Delta\mathbb{1}_{s}\) and covariance matrix \(\boldsymbol{\Sigma}^{\Delta}_{s\times s}\). Then, for an unbalanced covariate f \((f=1,\ldots,s)\) and a subject l, the covariate's value was \(X_{fl}+\Delta_{f}\times T_{l}\), where \(X_{fl}\) is the value of the f-th covariate for subject l when generating \(\boldsymbol{X_0}\) and \(T_{l}\) is the intervention indicator for subject l, as previously defined.
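Continuing the data-generation sketch, the correlated standardized differences can be drawn and added to the treated subjects' values as follows (Δ is expressed in %, hence the division by 100, mirroring the binary-covariate formula below; taking the first s columns as the unbalanced ones is a simplifying assumption):

```r
s <- 4; Delta_bar <- 10; s_Delta <- 5           # scenario parameters (in %)
Sigma_Delta <- s_Delta^2 * Sigma[1:s, 1:s]      # Sigma^Delta_{s x s}
Delta <- mvrnorm(1, mu = rep(Delta_bar, s), Sigma = Sigma_Delta)
X[, 1:s] <- X[, 1:s] + outer(trt, Delta / 100)  # X_fl + Delta_f * T_l
```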
Finally, \(r_b\) covariates from \(\boldsymbol{X_0}\) were dichotomized by covariate-specific threshold values \(t_p\) \((p=1,\ldots,r_b)\). The thresholds \(t_p\) were fixed a priori to obtain the desired prevalences \(P_p\) of these characteristics, drawn from a uniform distribution on [0.2;0.8]. From \(P_p\), the threshold was \(t_p=\Phi^{-1}(1-P_p)\), where Φ is the cumulative distribution function (CDF) of a standard normal distribution. In this way, the standardized difference for binary covariates could be calculated from the formula in Table 1 with \(\hat{P}_{1b}=\Phi(\Phi^{-1}(1-\hat{P}_{0b})-\Delta/100)\), where \(\hat{P}_{0b}\) and \(\hat{P}_{1b}\) are the observed proportions in the control and intervention arms, respectively.
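A sketch of this dichotomization step, assuming for simplicity that the last \(r_b\) columns of X are the ones to be dichotomized; the commented last line shows the implied intervention-arm proportion:

```r
r_b <- r / 2
bin_cols <- (r - r_b + 1):r
P_p <- runif(r_b, 0.2, 0.8)                    # target prevalences
t_p <- qnorm(1 - P_p)                          # t_p = Phi^{-1}(1 - P_p)
X[, bin_cols] <- 1 * sweep(X[, bin_cols], 2, t_p, ">")
# Implied proportion in the intervention arm, for a control proportion P0
# and a standardized difference Delta (in %):
# P1 <- pnorm(qnorm(1 - P0) - Delta / 100)
```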
Propensity score estimation
The PS was estimated with a logistic model adjusted on the set of generated covariates. A cluster-specific random effect cannot be taken into account in this model because clusters are nested within the intervention arm (subjects from the same cluster received the same intervention). Although this limitation can have an impact when the PS is used to estimate the intervention effect [28], its impact on the performance of imbalance detection is negligible because clusters are the unit of randomization: if clusters were treated as a fixed effect, the cluster effects would be balanced between groups.
Covariate pre-selection
Within the simulation, we also proposed two criteria to select only some covariates among the r generated covariates, in order to assess the efficiency of a more parsimonious model in detecting imbalance; indeed, numerous studies have shown the importance of covariate selection in PS models to avoid over-fitting [39, 40]. Moreover, the presence of a large number of balanced covariates in the PS model can attenuate the importance of a potential global imbalance. A covariate was included in the PS model if it satisfied at least one of the following two criteria:
its standardized difference was ≥5 %
its standardized difference was <5 % but its correlation with at least one covariate with a standardized difference ≥5 % was greater than or equal to 0.2 in absolute value.
These criteria allowed for selecting covariates with more flexibility than with univariate testing. In practice, a covariate is considered unbalanced when its standardized difference is ≥10 % [41], whereas our method was less stringent regarding the number of covariates kept. Moreover, a balanced covariate that is highly correlated with an unbalanced one may have an impact on the c-statistic. This strategy allowed us to assess whether baseline imbalance should be diagnosed from all available baseline covariates.
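A sketch of these two criteria, where `std_diff` (the vector of absolute standardized differences, in %) and `R_mat` (the covariate correlation matrix) are hypothetical inputs computed from the trial data:

```r
# Pre-selection: keep a covariate if its standardized difference is >= 5 %,
# or if it is correlated (|r| >= 0.2) with such a covariate.
pre_select <- function(std_diff, R_mat) {
  big        <- abs(std_diff) >= 5                                 # criterion 1
  correlated <- apply(abs(R_mat[, big, drop = FALSE]) >= 0.2, 1, any)
  big | correlated                                                 # criterion 2
}
keep <- pre_select(std_diff, R_mat)  # columns of X to keep in the PS model
```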
For each studied scenario, the corresponding threshold value for diagnosing baseline imbalance was obtained from simulations with the same simulation parameters but under the hypothesis of no systematic imbalance (i.e., the r generated covariates are balanced). The impact of sample size, number of clusters, number of covariates and trial design (CRT or individually randomized trial) on the c-statistic of the PS model without systematic imbalance was studied beforehand, and the results are presented in Additional file 1: Appendix A.
Results assessment
The results were assessed in terms of the following:
the proportion π of simulated datasets in which the estimated c-statistic was greater than or equal to the threshold value, defined as the 95th percentile of the c-statistic distribution in the absence of systematic baseline imbalance; that is, the proportion of situations in which baseline imbalance was detected according to our proposed rule,
for each unbalanced covariate, the proportion of significant univariate tests at a 5 % significance level. These tests were the adjusted t test and the adjusted chi-square test described in [1], which take the clustering into account.
Studied scenarios
First, we studied 144 scenarios corresponding to the different combinations of the following parameters:
the sample size per arm: n=(100,500). In CRTs, the median number of subjects per arm is 329 (interquartile range [143–866]) [42]. Thus, the chosen values correspond to the situations of a small and an average-sized CRT.
the number of clusters per arm: k=(5,10,50),
the number of covariates: r=(4,10,20) for n=100 and r=(10,20,50) for n=500, corresponding to ratios \(\frac {n}{r}=(25,10,5)\) for n=100 and \(\frac {n}{r}=(50, 25, 10)\) for n=500. We considered \(r_{c}=r_{b}=\frac {r}{2}\).
the number of unbalanced covariates: s was defined such that the percentage of unbalanced covariates among all covariates was 20 % or 40 % (except for the case k=5, m=20 in which 25 % and 50 % of covariates were unbalanced). Thus, s=(2,4) for r=10, s=(4,8) for r=20 and s=(10,20) for r=50. Among unbalanced covariates, \(\frac {s}{2}\) were binary and \(\frac {s}{2}\) were continuous.
the mean (standard deviation) standardized difference for unbalanced covariates: \(\Delta(s_\Delta)\) = 10 % (5 %) or 20 % (10 %).
Second, we studied the performance of our method after covariate selection according to the rule given in the Covariate pre-selection section. We focused on scenarios in which the total number of covariates was ≥20 and the standardized difference for unbalanced covariates was moderate (10 %), corresponding to 36 different scenarios.
In both situations, we performed 5000 simulations.
Results without covariate pre-selection
The results are displayed in Fig. 1. As expected, the imbalance detection rate π (i.e., the proportion of situations in which our method detected imbalance) was higher when the standardized differences for the unbalanced covariates were high (20 %) than when the imbalance was moderate (10 %). Second, imbalance was detected more often when the proportion of unbalanced covariates among all baseline covariates was higher (40 %, or 50 % for k=5 and m=20) than when it was lower (20 %, or 25 % for k=5 and m=20). This result suggests that when there are too many balanced covariates, the information carried by the unbalanced covariates is attenuated.
Fig. 1 Percentage of imbalance detection π as a function of the number of baseline covariates r, the sample size per arm n, the standardized difference (SD) for unbalanced covariates and the percentage of unbalanced covariates 100×s/r. Results were pooled over the number of clusters per arm k. Five thousand simulations were performed per scenario
Moreover, the percentage of imbalance detection was higher with a sample size of n=500 than with n=100. However, the latter corresponds to a small sample size (below the first quartile of the sample size per arm in a review of CRTs). This percentage also increased with the number of covariates. When the percentage of unbalanced covariates remained constant, performance improved as the total number of covariates (and thus the number of unbalanced ones) increased, which suggests that the method captures a global imbalance rather than imbalance on isolated covariates.
Results with covariate pre-selection
For a set of 20 baseline covariates, the average number of covariates retained after pre-selection was 13.5 with 20 % unbalanced covariates and 14.1 with 40 % unbalanced covariates. For a set of 50 baseline covariates, the average number retained was 27.1 and 29.7 with 20 % and 40 % of unbalanced covariates, respectively. This pre-selection mechanism thus retained a large set of covariates, in line with the basic idea of our method.
Moreover, this pre-selection strategy for the covariates led to a systematic improvement in the percentage of imbalance detection for each studied scenario, as displayed in Fig. 2. The relative improvement (defined as the difference in the percentage of imbalance detection with and without selection, divided by the initial percentage) varied from 0.7 % for a scenario in which the percentage of imbalance detection equaled 99.4 % before pre-selection, to 116.4 % for the scenario with the worst performance without covariate pre-selection. However, this strategy was mainly helpful for scenarios in which the initial performance was moderate (about 50 %). Even after an average improvement >100 % for a small sample size (n=100), the performance remained <50 %. Indeed, in these situations the risk of chance imbalance due to sampling fluctuations is high (balance is achieved according to the law of large numbers); thus, threshold values for these trials are large even with no systematic imbalance, and consequently the detection rate is small. Nevertheless, covariate selection increased the detection rate in every scenario, so these results confirm the need for a parsimonious PS model (i.e., one including only a subset of covariates), which can be obtained with our simple and automatic strategy.
Fig. 2 Percentage of imbalance detection π′ after covariate pre-selection as a function of the initial percentage of imbalance detection π. Each point corresponds to a different number of covariates. The gray line is the identity line (first bisector). Five thousand simulations were performed per scenario
From global imbalance to confounding bias
Once an imbalance is detected, further assessment can be conducted to evaluate the risk of confounding bias, that is, whether at least one of the covariates included in the PS model is also associated with the outcome. Such a variable, known as a confounding factor, is associated with both the intervention allocation and the outcome and may lead to a mis-estimation of the intervention effect [43]. Statistical measures of association, as well as the literature on known confounders for a given outcome, can be used to identify them. When confounding bias is suspected, adjustment is required, whereas if the imbalance results from chance, adjustment will only improve the precision of the estimate, at least in linear models [44]. Among the adjustment methods available for CRTs, multivariable regression [45] and PS-based methods [46, 47] are commonly used. However, the best predictive PS model is not the best model to correct imbalance [40]. As compared with a model for imbalance detection, which can involve a large number of covariates, a good PS model for adjustment would include only confounding factors [39]; covariates related only to the intervention increase standard errors without reducing bias [48]. A simulation study showed that discrimination criteria such as the c-statistic, or adequacy tests such as the Hosmer-Lemeshow test, cannot detect the omission of a confounding factor [26]. Consequently, the model built to detect imbalance is not the most appropriate for the statistical analysis.
Figure 3 displays the different steps that help identify the need for covariate adjustment. If patients are identified before cluster randomization and the sample size is large enough, there is no risk of global imbalance or confounding bias, and adjustment is not required. If clusters are randomized after patient recruitment but the sample size is small, there is a risk of chance imbalance. If clusters are randomized before recruitment, there is a risk of systematic bias. In the last two situations, our tool can detect a global imbalance. If such an imbalance is detected, the association between covariates and the outcome must be assessed to identify confounding bias, i.e., the presence of covariates linked to both the intervention and the outcome. When confounders are detected, covariate adjustment is needed to obtain an unbiased estimate of the intervention effect. Otherwise, covariate adjustment will have no impact on the estimate but can increase precision in linear models.
Fig. 3 Steps for bias detection and guidance for covariate adjustment. Our diagnostic tool corresponds to the top part of the graph (part 1), whereas the bottom part (part 2) is a qualitative approach to help perform a covariate adjustment. Part 2 should be interpreted in light of clinical knowledge about potential confounders. *Adjustment on predictors can increase precision in linear models and generally increases power in case of chance imbalance
Results from the two motivating examples
For the two following examples, threshold values to detect baseline imbalance were obtained under the hypothesis of no systematic imbalance, with the same number of covariates (and the same proportion of continuous and binary covariates) and the same sample size as in the original CRT. For covariate generation, we used the observed means (or rates) and standard deviations of the covariates in the control arm and the correlation matrix from each CRT.
Example 1: Management of OA with a patient-administered assessment tool
The PS was estimated with a logistic model adjusted on the 12 covariates displayed in Table 2. The PS distributions by arm are displayed in Additional file 1: Appendix C, Figure 2a. The estimated c-statistic from this model was 0.598. The threshold value under the hypothesis of no systematic baseline imbalance was 0.549, below the estimated c-statistic for the dataset. We also applied our method with the pre-selection strategy for the covariates: seven of the 12 covariates measured at baseline were retained. The estimated c-statistic was then 0.595, and the corresponding threshold value was 0.541. Thus, our method diagnosed a baseline imbalance, with or without covariate selection, highlighting the need for adjusted statistical methods. In previous work [27], we showed a large difference between the intervention effect estimate obtained from a crude (unadjusted) analysis and that obtained with multivariable regression or PS adjustment, which confirmed that the baseline imbalance occurred on confounding factors.
Moreover, the results showed that the covariates significantly associated with the intervention allocation in the PS model were not the same covariates that appeared significantly unbalanced in univariate tests. Indeed, polyarthritis and radiological grade were significant in the PS model at a 5 % significance level, whereas the WOMAC score was no longer significant. These results are explained by the correlation patterns between covariates, which suggests that global approaches to the diagnosis of baseline imbalance may add information on the relationships among covariates that is missed by the univariate approach.
Example 2: Standardized consultation for patients with OA of the knee
The PS model was built from the 26 covariates described in Table 3. The PS distributions in the two arms were not superimposed (see Additional file 1: Appendix C, Figure 2b). However, the estimated c-statistic was 0.684 and the threshold value was 0.696, so the method did not detect imbalance between groups. This situation is close to the case in which the number of unbalanced covariates is small as compared with the total number of covariates, and thus a pre-selection of covariates is needed. Therefore, we applied the selection strategy proposed previously. From Table 3, five covariates had a standardized difference <5 % (physical exercise level [PEL] scale, WOMAC score, global assessment of the disease, current use of SYSADOA and use of walking sticks). We then estimated the correlation matrix (Pearson's correlation coefficients were used for both qualitative and quantitative covariates). Among the five balanced covariates, two showed a correlation >0.2 in absolute value with at least one covariate with a standardized difference >5 %: the WOMAC score was correlated with the SF-12 physical score (r=−0.513), and the PEL was correlated with sex (r=0.289) and the WOMAC score (r=−0.245). Therefore, we removed only the global assessment of the disease, the current use of SYSADOA and the use of walking sticks from the PS model. The estimated c-statistic for the PS model with 23 covariates remained 0.684, but the corresponding threshold value was 0.682. Consequently, after a pre-selection of covariates, a baseline imbalance was detected. This example also showed that our selection method retains a large number of covariates, keeping the advantage of a global method over univariate testing. In the original paper, the authors used an inverse probability of treatment weighting (IPTW) estimator to correct for baseline imbalance.
In this paper, we provide a new tool, based on the c-statistic of the PS model, to detect baseline imbalance in CRTs. This tool performed well for CRTs with a large sample size and a large number of covariates and allowed us to capture global information, in contrast to univariate tests. In the first motivating example, our method revealed a predictor of intervention allocation that univariate methods missed, and it confirmed the presence of imbalance and the need for adjusted statistical methods when estimating the intervention effect. The efficiency of the proposed pre-selection strategy was shown in the second motivating example. Even though only a subset of covariates was retained, the subset was still meaningful for a global approach because the pre-selection method aimed at preserving the correlation patterns between covariates.
In practice, this approach can be viewed as a kind of hypothesis testing because it relies on a "known" probability distribution and uses a threshold value defined according to a significance level (5 % in our study, because we used the 95th percentile of the c-statistic distribution). Of note, we used the 95th percentile of the c-statistic distribution under the hypothesis of no systematic imbalance to allow comparison of the results with classical univariate tests; however, to detect smaller baseline imbalances, smaller percentiles could be adopted, especially in CRTs with a large sample size, for which less chance variation in baseline covariates is expected. Indeed, adjustment on balanced covariates does not have a negative impact comparable to that of omitting an unbalanced risk factor, so the method can afford to be less restrictive with a smaller percentile. Moreover, just as a p-value close to 5 % has to be interpreted carefully in classical tests, an estimated c-statistic close to its threshold value does not necessarily mean that there is no confounding bias (if c < threshold) or that there is a systematic bias (if c > threshold). In these situations, a risk of bias can be suspected, and further consideration of the link between covariates and outcome is needed to assess it. But again, unnecessary adjustment would have a smaller impact on the analysis than the omission of a confounder. Statistical testing is not recommended in individually randomized trials because they are not theoretically prone to confounding bias [11]. However, as previously explained, this assumption does not hold in CRTs that randomize clusters before selecting participants. Therefore, the quantitative approach proposed in this paper could be useful to improve both the reporting of baseline characteristics and the subsequent statistical analysis.
The performance of our method was high for n=500, a sample size close to that observed in practice (the interquartile range of the sample size per arm being [143–866] in a recent systematic review [42]). For n=100, i.e., a value below the first quartile of the observed sample size per arm, performance was low to moderate. In these situations, which are highly prone to chance imbalance, covariate adjustment may be useful even if our method does not conclude that there is baseline imbalance. Our method must be viewed first as a tool to assess the risk of confounding bias and then to help identify CRTs in which adjustment is needed; for small sample sizes, however, covariate adjustment should be systematic, considering the high risk of sampling fluctuations.
A limitation of this tool is its focus on "overt bias" only; that is, it can assess imbalance only on observed characteristics, as defined by Rosenbaum [49]. However, most trials collect information on a large number of baseline covariates, and given the likely associations between covariates, it is unlikely that the observed baseline covariates would be balanced between treatment arms while the unobserved covariates were imbalanced. This would happen only if the observed and unobserved covariates were independent of each other and the association of these variables with the outcome were weak. Moreover, this tool can help only in assessing confounding bias, not selection bias (i.e., differences in characteristics between recruited and non-recruited patients). To detect selection bias, baseline characteristics of patients who were not recruited, such as screening log data, would be necessary, but these data are often unavailable.
Further research is needed to assess the performance of the proposed method in a wider variety of situations. This study focused mainly on individual baseline characteristics: because clusters are the randomization unit, systematic imbalance on cluster-level covariates should not occur provided the randomization method has been implemented correctly with appropriate allocation concealment, but chance imbalance on these covariates may occur. In particular, chance imbalance is likely with only a few randomized clusters, which is frequent: a systematic review showed that the median number of randomized clusters is 34 [9].
To avoid a risk of confounding bias, CRTs should, if possible, be designed to respect the usual chronology of randomized trials (recruitment and then randomization of clusters). However, this is not always feasible in practice, for example when participants are incident cases. When clusters are randomized before participants are recruited, the proposed method is a helpful qualitative tool to assess the risk of bias in CRTs and to provide guidance for covariate adjustment.
Donner A, Klar N. Design and Analysis of Cluster Randomization Trials in Health Research. London: Wiley; 2000.
Torgerson DJ, Torgerson C. Designing Randomised Trials in Health, Education and the Social Sciences: an Introduction. Basingstoke: Palgrave Macmillan; 2008.
Carter B. Cluster size variability and imbalance in cluster randomized controlled trials. Stat Med. 2010; 29(29):2984–93.
Hahn S, Puffer S, Torgerson DJ, Watson J. Methodological bias in cluster randomised trials. BMC Med Res Method. 2005; 5(1):10.
Eldridge S, Ashby D, Bennett C, Wakelin M, Feder G. Internal and external validity of cluster randomised trials: a systematic review of recent trials. BMJ. 2008; 336(7649):876–80.
Puffer S, Torgerson D, Watson J. Evidence for risk of bias in cluster randomised trials: review of recent trials published in three general medical journals. BMJ. 2003; 327(7418):785–9.
Brierley G, Brabyn S, Torgerson D, Watson J. Bias in recruitment to cluster randomized trials: a review of recent publications. J Eval Clin Pract. 2012; 18(4):878–86.
de Hoop E, Teerenstra S, van Gaal BG, Moerbeek M, Borm GF. The "best balance" allocation led to optimal balance in cluster-controlled trials. J Clin Epidemiol. 2012; 65(2):132–7.
Eldridge SM, Ashby D, Feder GS, Rudnicka AR, Ukoumunne OC. Lessons for cluster randomized trials in the twenty-first century: a systematic review of trials in primary care. Clinical Trials (London, England). 2004; 1(1):80–90.
Ivers NM, Halperin IJ, Barnsley J, Grimshaw JM, Shah BR, Tu K, et al.Allocation techniques for balance at baseline in cluster randomized trials: a methodological review. Trials. 2012; 13(1):120.
Stang A, Poole C, Kuss O. The ongoing tyranny of statistical significance testing in biomedical research. Eur J Epidemiol. 2010; 25(4):225–30.
Senn S. Seven myths of randomisation in clinical trials. Stat Med. 2013; 32(9):1439–50.
Moher D, Schulz KF, Altman D. The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials 2001. EXPLORE: J Sci Healing. 2005; 1(1):40–5.
Campbell MK, Elbourne DR, Altman DG. CONSORT statement: extension to cluster randomised trials. BMJ. 2004; 328(7441):702–8.
Fayers PM, King M. A highly significant difference in baseline characteristics: the play of chance or evidence of a more selective game? Qual Life Res. 2008; 17(9):1121–1123.
Wright N, Ivers N, Eldridge S, Taljaard M, Bremner S. A review of the use of covariates in cluster randomized trials uncovers marked discrepancies between guidance and practice. J Clin Epidemiol. 2014. doi:10.1016/j.jclinepi.2014.12.006.
Giraudeau B, Ravaud P. Preventing bias in cluster randomised trials. PLoS Med. 2009; 6(5):1000065.
Franklin JM, Rassen JA, Ackermann D, Bartels DB, Schneeweiss S. Metrics for covariate balance in cohort studies of causal effects. Stat Med. 2013. doi:10.1002/sim.6058.
Rosenbaum PR, Rubin DB. The bias due to incomplete matching. Biometrics. 1985; 41(1):103–16.
Smirnov N. Table for estimating the goodness of fit of empirical distributions. Ann Math Stat. 1948; 19(2):279–81.
Thompson JW. A note on the lévy distance. J Appl Prob. 1975; 12(2):412–4.
Belitser SV, Martens EP, Pestman WR, Groenwold RHH, de Boer A, Klungel OH. Measuring balance and model selection in propensity score methods. Pharmacoepidemiol Drug Saf. 2011; 20(11):1115–1129.
Ali MS, Groenwold RHH, Pestman WR, Belitser SV, Roes KCB, Hoes AW, et al.Propensity score balance measures in pharmacoepidemiology: a simulation study. Pharmacoepidemiol Drug Saf. 2014. doi:10.1002/pds.3574.
Mahalanobis P. On the generalised distance in statistics. Proc Natl Inst Sci India. 1936; 2(1):49–55.
Iacus SM, King G, Porro G. Multivariate matching methods that are monotonic imbalance bounding. J Am Stat Assoc. 2011; 106(493):345–61.
Weitzen S, Lapane KL, Toledano AY, Hume AL, Mor V. Weaknesses of goodness-of-fit tests for evaluating propensity score models: the case of the omitted confounder. Pharmacoepidemiol Drug Saf. 2005; 14(4):227–38.
Leyrat C, Caille A, Donner A, Giraudeau B. Propensity scores used for analysis of cluster randomized trials with selection bias: a simulation study. Stat Med. 2013; 32(19):3357–372.
Leyrat C, Caille A, Donner A, Giraudeau B. Propensity score methods for estimating relative risks in cluster randomized trials with low-incidence binary outcomes and selection bias. Stat Med. 2014. doi:10.1002/sim.6185. PMID: 24771662.
Ravaud P, Giraudeau B, Logeart I, Larguier JS, Rolland D, Treves R, et al.Management of osteoarthritis (OA) with an unsupervised home based exercise programme and/or patient administered assessment tools. a cluster randomised controlled trial with a 2x2 factorial design. Ann Rheum Dis. 2004; 63(6):703–8.
Ravaud P, Flipo RM, Boutron I, Roy C, Mahmoudi A, Giraudeau B, et al.ARTIST (osteoarthritis intervention standardized) study of standardised consultation versus usual care for patients with osteoarthritis of the knee in primary care in france: pragmatic randomised controlled trial. BMJ. 2009; 338:b421.
Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983; 70(1):41–55.
Xu Z, Kalbfleisch JD. Propensity score matching in randomized clinical trials. Biometrics. 2010; 66(3):813–23.
Harrell F. Regression Modeling Strategies : with Applications to Linear Models, Logistic Regression, and Survival Analysis. New York: Springer; 2001.
Altman D. Practical Statistics for Medical Research, 1st edn. London: New York: Chapman and Hall; 1991.
Westreich D, Cole SR, Funk MJ, Brookhart MA, Stürmer T. The role of the c-statistic in variable selection for propensity score models. Pharmacoepidemiol Drug Saf. 2011; 20(3):317–20.
Turner RM, White IR, Croudace T. Analysis of cluster randomized cross-over trial data: a comparison of methods. Stat Med. 2007; 26(2):274–89.
Kul S, Vanhaecht K, Panella M. Intraclass correlation coefficients for cluster randomized trials in care pathways and usual care: hospital treatment for heart failure. BMC health Serv Res. 2014; 14(1):84.
Kimko H, Duffull SB. Simulation for Designing Clinical Trials: A Pharmacokinetic-Pharmacodynamic Modeling Perspective. New York: CRC Press; 2002.
Perkins SM, Tu W, Underhill MG, Zhou XH, Murray MD. The use of propensity scores in pharmacoepidemiologic research. Pharmacoepidemiol Drug Saf. 2000; 9(2):93–101.
Brookhart MA, Schneeweiss S, Rothman KJ, Glynn RJ, Avorn J, Stürmer T. Variable selection for propensity score models. Am J Epidemiol. 2006; 163(12):1149–1156.
Kuss O. The z-difference can be used to measure covariate balance in matched propensity score analyses. J Clin Epidemiol. 2013; 66(11):1302–1307.
Ivers NM, Taljaard M, Dixon S, Bennett C, McRae A, Taleban J, et al. Impact of CONSORT extension for cluster randomised trials on quality of reporting and study methodology: review of random sample of 300 trials, 2000-8. BMJ. 2011; 343:d5886.
Greenland S, Robins JM, Pearl J. Confounding and collapsibility in causal inference. Stat Sci. 1999; 14(1):29–46.
Murray DM, Blitstein JL. Methods to reduce the impact of intraclass correlation in group-randomized trials. Eval Rev. 2003; 27(1):79–103.
Gomes M, Grieve R, Nixon R, Ng ES-W, Carpenter J, Thompson SG. Methods for covariate adjustment in cost-effectiveness analysis that use cluster randomised trials. Health Econ. 2012; 21(9):1101–1118.
van Marwijk HW, Ader H, de Haan M, Beekman A. Primary care management of major depression in patients aged 55 years. Br J Gen Pract. 2008; 58:680–7.
Taft AJ, Small R, Hegarty KL, Watson LF, Gold L, Lumley JA. Mothers' AdvocateS in the community (MOSAIC)–non-professional mentor support to reduce intimate partner violence and depression in mothers: a cluster randomised trial in primary care. BMC public health. 2011; 11:178.
Patrick AR, Schneeweiss S, Brookhart MA, Glynn RJ, Rothman KJ, Avorn J, et al.The implications of propensity score variable selection strategies in pharmacoepidemiology: an empirical illustration. Pharmacoepidemiol Drug Saf. 2011. doi:10.1002/pds.2098.
Rosenbaum PR. Discussing Hidden Bias in Observational Studies. Ann Intern Med. 1991; 115(11):901–5.
Austin PC. Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity-score matched samples. Stat Med. 2009; 28(25):3083–107.
Austin PC. Using the standardized difference to compare the prevalence of a binary variable between two groups in observational research. Commun Stat Simul Comput. 2009; 38(6):1228–1234.
The authors are grateful to Jerry Cottrell, Professor Olivier Chosidow and Professor Philippe Ravaud for granting permission to use their data.
INSERM U1153, Paris, France
Clémence Leyrat, Agnès Caille & Bruno Giraudeau
INSERM CIC 1415, Tours, France
CHRU de Tours, Tours, France
Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, United Kingdom
Université François-Rabelais, PRES Centre-Val de Loire Université, Tours, France
Agnès Caille
SPHERE (EA 4275): Biostatistics, Clinical Research and Subjective Measures in Health Sciences, Université de Nantes, Nantes, France
Yohann Foucher
Correspondence to Clémence Leyrat.
CL, AC and BG conceived the study. CL performed the simulation study and CL, AC, YF and BG interpreted the results. CL, AC, YF and BG drafted the manuscript. All authors read and approved the final manuscript.
Additional file 1
Appendix. Simulation plan and distribution of the c-statistic without baseline imbalance. The R code to compute threshold values is available on request to the corresponding author. (PDF 279 Kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Leyrat, C., Caille, A., Foucher, Y. et al. Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic. BMC Med Res Methodol 16, 9 (2016). https://doi.org/10.1186/s12874-015-0100-4
Accepted: 08 December 2015
Cluster randomized trial
Confounding bias
C-statistic
Baseline imbalance
Data analysis, statistics and modelling
a physics book is on top of a drafting table at 35 degree angle. Find the net external force acting on the book and determing wether the book move or remain at rest on this position. they give you: force of friction 11N, force normal 18N,force of gravity
asked by Grace on January 16, 2011
CP Physics
asked by Ronnie on April 8, 2010
Physics Rolling Motion!!
asked by Pam on April 27, 2016
Please help me! 1) Karen shouts across a canyon and hears an echo 5.7 s later. How wide is the canyon? The speed of sound is 343 m/s. Answer in units of m. 2) The distance between two consecutive nodes of a standing wave is 16.8 cm. The hand generating the
asked by Mikayla on February 15, 2017
block A (mass 2.04kg) rests on a tabletop. It is connected by a horizontal cord passing over a light, frictionless pulley to a hanging block B (mass 3.00kg ). The coefficient of kinetic friction between block A and the tabletop is 0.215. After the blocks
asked by george on October 11, 2012
Inflation is running 2% per year when you deposit $1000 in an account earning interest of 13% per year compounded annually. In constant dollars, how much money will you have two years from now? (Hint: First calculate the value of your account in two years'
asked by liz c on November 13, 2010
asked by Brandon on November 2, 2008
physics ( help)
A large (15kg) holiday decoration is suspended on either side of Hay Street in Perth by two cables stretching between buildings. The length of the first cable is 8m, the second is 6m and the buildings are 12m apart. The 8m rope hangs at an angle 26.4°
asked by H.T on April 7, 2011
in Table E [N] are the names of the students, in Table L [M] are the names of the subjects, in Table T [N, M] are the grades of students for each subject. to find the student with the highest grade in the class "Math" (name of the subject given by keypad)
asked by Elton Xhafa on February 23, 2012
A convex mirror forms an image of half the size of the object. When it is moved 15cm away from the object the size of the image becomes 2/5times that of the object. Find the focal length of mirror
asked by Aashutosh on September 10, 2016
According to a 2008 poll 14% Americans have tattoos. Find the odds that an American has a tattoo? (I got probability of 14/100) 1 - 14/100 then 86/100 X 11/14 ***odds American has a tattoo 7:43*** did I work this problem correctly
asked by Shay on July 8, 2011
Mark hid a $10 bill inside his favorite book. He forgot the pages where he hid it. If the sum of the pages where the bill is hidden is 177, on what pages will Mark find his money?
asked by Bianca on April 5, 2010
In three math tests, you have scored 91,95 and 88 points. You are about to take your next test. Suppose you want to have an average score of at least 90 points after all four tests. Explain a method you could use to find the score you must receive in order
The voltage across the plates of a capacitor at any time t seconds is given by V=Ve -t/CR, where V,C and r are constants. Given V = 300 Volts, C =0.12*10-6 F and R = 4*10 6 ohms Find the initial rate of change of voltage and the rate of change of voltage
asked by lee on June 30, 2015
An experiment consists of drawing one ball from a box containing 5 white balls numbered 1 through 5 and black balls numbered 6 through 10. Find the probability that a randomly selected ball is black and numbered 7.
asked by trinx on December 8, 2014
Engineering/Algebra
A cardboard tube is loaded with a 15.3 N axial load. The tube has an outside diameter of 2.5 cm and a wall thickness of .035 cm. Find the axial stress in the cardboard tube. Is the answer 75 N/cm^2? If so can you show me your way of getting to this answer.
asked by George on November 4, 2008
in a class of 60 students 30 like maths 25 like physics 30 like chemistry. if 10 like both physics and maths 5 like maths and chem and 5 like phy & chem and 3 like all the subjects. find student like none of subject if 100 students were surveyed
asked by Anonymous on June 8, 2016
in a class of 60 students 30 like maths 25 like physics 30 like chemistry. if 10 like both physics and maths 5 like maths and chem and 5 like phy & chem and 3 like all the subjects. find student like none of subject of if 100 students were surveyed
asked by Dp on June 8, 2016
An object with a mass of 34.0 g and a volume of 41.0 + B cm3 is placed in a liquid with a density of 1.25 g/cm3. Find the volume of the part of the object that is NOT submerged in the liquid. Give your answer in cm3 and with 3 significant figures.
asked by Idali on April 3, 2017
The sum of the digits of a three digit number is 20.The middle digit is equal to one fourth the sum of the other two. If the order of the digits is reversed the number increases by 198. Find original number
asked by Rohan on December 3, 2017
The top and bottom flaps in the box are each 4 centimeters by 2 centimeters.The right and left flaps are each 2 centimeters by 2 centimeters. There are flaps at the other end that are identical.Find the total area of carboard used to make the box.
asked by carlos on February 8, 2012
A plane flies 150km due south and then 150km on a bearing of 45 degrees Celsius. With repeat to it initial position, find a. How far south the plane is b. how far east the plane is is c. It bearing d. How far way it is
asked by Isaac on June 7, 2011
Sarah owns a small business. There was a loss of $19 on Thursday and a loss of $12 on Friday. On Saturday there was a loss of $11, and on Sunday there was a profit of $15. Find the total profit or loss for the four days. A. $35 loss B. $19 profit C. $27
asked by Renee :) on April 3, 2018
A2. Reviewed statistics indicated that there is an average of five thefts per day from a large supermarket operating from 08 00 to 19 00 six days a week. Find the probability that there (a) are no thefts in a week. [2] (b) will be ten thefts in any given
asked by MOSES on October 27, 2016
calculus ..>steve
Given a function f(x)2/3x^3+5/2x^2-3x. a) Find i. The inflection point. ii. The y-intercept and x-intercept. b) Sketch the graph of f(x). i have already try it..but i don't understand.. which graph that is true.. the first or second ? and how to calculate
asked by yaya on December 15, 2012
In a class of 50 students, 26 are Democrats, 13 are business majors, and 3 of the business majors are Democrats. If one student is randomly selected from the class, find the probability of choosing a Democrat or a business major.
asked by Amber on November 15, 2014
a person was climbing on a building.he slips 1 metre for every 5 metre he climb up. if the building is 53 metre high and he takes 5 seconds to climb 1 metre,find the total time taken by him to get on the top of the building.
asked by Anonymous on April 9, 2014
Aneeta spends all her pocket money on chocolates(x)and icecream(y). Her utility function is U(x,y)= min(4x,2x+y). Aneeta consumes 15 chocolates and 10 icecreams. The price of chocolates is Rs.10. Find out her pocket money.
asked by swagata on June 12, 2008
suppose there are 2 pieces of the cake left over, but you don't know how many pieces were in the whole cake.Explain how you could find the number of pieces in the whole cake if Taylor told you 1/6 of the cake was left.You may show your work in a drawing.
asked by Derl on April 9, 2014
asked by Victoria on April 15, 2010
A coordinate system (in meters) is constructed on the surface of a pool table, and three objects are placed on the table as follows: a m1 = 1.9-kg object at the origin of the coordinate system, a m2 = 3.4-kg object at (0, 2.0), and a m3 = 4.6-kg object at
asked by Evette on October 8, 2016
The sum of the 4th and 6th terms of an AP is 42. The sum of the 3rd and the 9th term of progression is 52, find the first term and the common difference of the sum of the 1st, 10th terms of the progression
asked by Okoro Jennifer on November 25, 2018
the ratio of the number of students in class A, B and C is 3:7:8. if 10 students leave c and join b , the ratio of the number of students in b and c would be reversed. find the total number of students in classes a,b and c Can I from a equation and solve?
asked by nimal on December 24, 2014
Find digits A and B in the number below so the folling condition are true. The 5-digit number must be divisible by 4. The 5-digit number must be divisible by 9. Digit A cannot be the same as Digit B. 12A3B Explain the steps you followed to solve the
asked by peter on December 21, 2011
Algebra: Simple Interest
Problem: Geoff has $5000 to invest. He wants to earn $568 in interest in one year. He will invest part of the money at 12% and the other part at 10%. How will he invest. +I already know how to do Simplest Interest, I just can't read the problem to find
asked by Milly on November 22, 2010
A professor knows that in her class 37% of the students have passed an Algebra course, 29% have passed an English course, and 21% have passed both courses. If one student is selected at random from this class, find the probability that: c) the student did
asked by Confused on April 24, 2011
Two objects, one of mass 3kg moving at 2m/s, the other of mass 5kg and speed of 2m/s move towards each other and collide in a head on collision. If the collision is perfectly inelastic, find the speed of the objects after the collision.
asked by Brooke on December 21, 2014
Math (Ms.Sue please help!)
asked by Fork-A-Dork on March 15, 2017
Gss Giza , nasarawa state
Q.1.Calculate the length of an are of a circle of radius 6 cm if the angle it subtends at the centre of circle is 6o. Q.2. Find the perimeter of a sector of a circle radius 5.2cm if the angle subtend at the centre is (a)3o (b)6o (c)135.
asked by Husseini Suleiman Akpori on May 9, 2016
D J HIGH SCHOOL SOST GOJAL PAKISTANmaths trignometry
from an observation point the angle of depression of two boats in line with this point are found to 30 degrees and 45 degrees .find the distance between the two boats if the point of observation is 4000 feet high.
asked by tasleem asghar on December 10, 2014
College Algebra
Please explain in detail so I know for next time. Two angles are complementary. The sum of the measure of the first angle and one-fourth the second angle is 62.25 degrees. Find the measures of the angles. What is the measure of the smaller angle? What is
asked by LeAnn/Please help me on November 18, 2009
asked by Kaitlyn on April 28, 2013
Graph the following equations; calculate the slope, x-intercept, and y-intercept, and label the intercepts on the graph. Find the area of a rectangle if three of its corners are (-0.1,-3.4), (-7.9,-3.4), and (-0.1,6.8). one I draw the graph how do I figure
asked by bianca on February 28, 2011
math-please help!
The number of complaints received by a business bureau can be represented by the following probability distribution. Find the expected number of complaints per day. complaints per day -0,1,2,3,4,5 probability-0.01,0.11,0.26,0.28,0.19,0.12 can someone
asked by KiKi on April 25, 2010
A store owner receives 12 computers:nine are model A and the rest are model B.if two computers are sold at random,find the probability that one of each model is sold. (base on probability and combinatorial analysis)
asked by Bravo on March 28, 2016
Mr. digit wanted to assign a number to each student in his class. He could only use the digits 2, 3, and 6 and each student's number had to have three digits. Can he find enough numbers for all 25 students if repeated numbers are allowed? Will there be any
asked by jessica on January 11, 2010
Mr. Digit wanted to assign a number to each student in his class. He could only use the digits 2,3,and 6 and each student's number had to have three digits. Can he find enough numbers for all 25 students if repeated digits are allowed? Will there be any
asked by fko on December 10, 2009
Find the angle Q between the two vectors. u = 10i + 40j v = -3j+8k cos / sqrt1700 sqrt73 cos(-120/sqrt1700 sqrt73) = .9072885394 arc cos (.9072885394) = 24.86 degrees. Is this correct?
asked by Abbey(Please help) on May 4, 2010
1. What paints did the Mughals use to create their paintings? What pigments were used to create these paints? 2. How expensive were these pigments? I haven't been able to find much information regarding the paints or pigments used online, but I would be
asked by Claire on May 18, 2016
the subway train has 18 doors and starts with 15 passenger. if each passenger is equally likely to get off at any station and the passengers leave the train independently, find the probability that 2 or more passengers leave the train through the same door
asked by linn on April 13, 2014
ok this is in my practice book. there are 33 students in the chess club. there are five more boys than girls in the club. write and solve a system of equations to find the number of boys and girls in the chess club?
asked by ariel on December 6, 2011
A plane flies the first half of a 5600 km flight into the wind in 3.5 hours. The return trip, with the same wind, takes 2.5 hours. Find the speed of the wind and the speed of the plane in still air.
asked by Betsy on December 6, 2011
Physics (Please help!!!)
1) An ideal gas occupies a volume of 0.60 m^3 at 5atm and 400K. What volume does it occupy at 4.0atm and a temp of 200K? I know how to find the volume with the m^3 and the atm but it threw me off since it gives the 4.0 atm and the temp 200K.
asked by Hannah on February 4, 2010
A cylinder of radius 14 cm contains water. A metal solid cone of base radius 7 cm and height 18cm is submerged into the water. Find the change in height of the water level in the cylinder
asked by kudu on February 7, 2015
Math algebra
You plan a party and spend $26 on decorations. Each of the tables will have 8 party favors and 8 individual flowers. The flowers cost $2.50 each. Which equation can you use to find the cost of each party favor? *please make an equation*
asked by Mae on June 2, 2014
The region R is a rectangle with vertices P(a,lna), Q(a,0), S(3,0), and T(3,lna), where 1
asked by Anonymous on May 5, 2012
A pendulum bob swings 5.0cm on its first oscillation.On each subsequent oscillation the bob travels 2/5 of the previous distance. Find the total distance the bob travels before coming to rest.
asked by voytek on May 3, 2012
find the rule for the Nth term of the arithmetic sequence. 11/2, 25/6, 17/6, 3/2, 1/6..... If you change the denomators to 6, you should notice the numerators follow the sequence: 33,25,17,9,1,...which is an arithmetic sequence with a common difference of
asked by brandon on April 1, 2007
I used coulomb's law to find the force of attraction between the ionic atoms in BeS and BeO. For BeS, F = -6.85x10^29 J/m For BeO, F = -9.078x10^29 J/m Which has the stronger force of attraction? BeO? What does the "-" sign mean?
asked by jake on September 23, 2010
|
CommonCrawl
|
A novel TIRADS of US classification
Yan Zhuang1,
Cheng Li2,
Zhan Hua2,
Ke Chen1 &
Jiang Li Lin ORCID: orcid.org/0000-0003-2030-96811
BioMedical Engineering OnLine volume 17, Article number: 82 (2018)
Thyroid imaging reporting and data system (TIRADS) provides a risk stratification of thyroid nodules, usually through a score. However, there is no consensus on which version of TIRADS should be used for reporting the results of thyroid ultrasound in the clinic. The objective of this study is to develop a practical TIRADS with which to categorize thyroid nodules and stratify their malignant risk.
A TIRADS scoring system was developed to provide more decision levels than standard scoring. It comprises the selection of ultrasound features (calcification, shape, margin, taller-than-wide, internal echo, and blood flow), the quantization of these features, the setting of feature weights, and the calculation of a comprehensive score. Ultimately, the accuracy of our TIRADS was evaluated by comparison with the results of a current version of TIRADS and of thyroid radiologists in 153 patients who had US-guided fine-needle aspiration biopsy.
Classification results showed that the total accuracy reached 97% (100% for malignant and 95% for benign nodules) in 153 cases (78 benign, 75 malignant). The percentages of malignancy defined in our TIRADS were as follows: TIRADS 2 (0% malignancy), TIRADS 3 (3.6% malignancy), TIRADS 4 (17–75% malignancy), and TIRADS 5 (98% malignancy).
We established a novel TIRADS to predict the malignancy risk of thyroid nodules based on six categories of US features through a scoring system, which includes a standardized vocabulary and score and a quantified risk assessment. The results showed that objective quantitative classification of thyroid nodules by our TIRADS can be useful in guiding management decisions.
The prevalence of thyroid nodules in the population is increasing around the world. In China, the morbidity of thyroid cancer grows gradually from year to year, especially among female patients [1,2,3]. Recent estimates put the incidence of thyroid nodules at nearly 19–67%, and approximately 5–15% of these nodules are found to be malignant [4]. Thyroid ultrasound (US) is a key examination for the management of thyroid nodules: it is easily accessible, noninvasive, cost-effective, and a mandatory step in the diagnosis of thyroid nodules [5]. Thus, it is necessary to standardize terminology and create guidelines to categorize thyroid nodules according to their malignant potential for effective management [6].
Several different thyroid imaging reporting and data system (TIRADS) classifications and recommendations have been proposed [7,8,9,10]. In 2009, Horvath et al. [7] described 10 US patterns of thyroid nodules and divided these nodules into a 5-point TIRADS with malignancy risk. However, their system is difficult to apply because not all thyroid nodules have stereotypic appearances on US. Park et al. [8] provided an equation for predicting the probability of malignancy in thyroid nodules on the basis of 12 US features. This categorization may be difficult to apply in practice because it requires doctors' subjective judgment on suspicious features and complex calculations. Recently, Kwak et al. [9] used multivariate regression analysis and proposed a TIRADS score that refers to five risk features: micro-calcification, irregular shape, taller-than-wide, solidity, and hypoechogenicity. As the number of suspicious US features increased, the risk of malignancy also increased. They developed a 5-grade scale with a score of 2 for benign lesions; 3 for no suspicious features; 4A, 4B, and 4C when there were one, two, and three or four suspicious features, respectively; and 5 when all five risk features were present. This system is convenient for risk stratification and simple to use. However, each US feature in this TIRADS is given the same weight, without consideration of the different probabilities of malignancy associated with each, and the assessment of each feature still depends on the doctor.
It is widely accepted that no single US feature has enough sensitivity and specificity to reliably indicate whether a thyroid nodule is benign or malignant, and many US features of TIRADS show inter- and intra-observer variation, making an accurate diagnosis based on TIRADS difficult. Each US feature has a different effect on the malignancy evaluation of thyroid nodules, but no previous TIRADS assigns reasonable weights to the US features: the certainty of malignancy increases with the number of features present rather than with a comprehensive threshold derived from the US features.
In this study, we established a novel TIRADS that provides many potential decision levels by distinguishing the weights among the features of six categories and quantifying each malignant risk indicator through a TIRADS scoring system. Ultimately, the goal was to obtain an objective and comprehensive evaluation of each thyroid nodule based on our TIRADS.
This paper puts forward six categories of TIRADS features, developed in consultation with clinical experts, as shown in Table 1. The composition feature includes solid, cystic, and mixed [11]. The margin feature is evaluated as ill-defined or microlobulated. The shape of the tumor is quantified by its degree of irregularity. The calcification feature is divided into micro-calcification, macro-calcification, and no-calcification. The distribution of blood flow is characterized as central type, peripheral type, messy type, focal thyroid inferno (Doppler flow covering the entire nodule with little or no flow within the surrounding parenchyma [12]), or no blood flow signal. These features and their manifestations play an important role in predicting benign and malignant thyroid nodules.
Table 1 TIRADS classification features of thyroid ultrasound
Features quantification
It is widely accepted that the internal composition of benign tumors is mainly cystic while that of malignant tumors is mainly solid; mixed tumors with a predominantly solid composition are commonly malignant. The three types of internal composition of thyroid nodules are shown in Fig. 1.
Feature of internal composition for thyroid nodules. Three types of internal components of thyroid nodules, they are a solid, b mixed and c cystic
The composition of the tumor is quantified from the gray-level histogram. First, the gray histogram of the tumor region is computed over the gray levels whose pixel count is nonzero; the numbers of pixels falling in the top 10% and top 5% of the gray-level distribution are counted as N1 and N2, respectively, and N0 denotes the number of pixels with gray level 0. Then the gray variances V1 and V2 of the top 50% and the remaining 50% of the gray-level distribution are computed, together with the pixel count N of the nodule region. The cystic rate CysR is defined as in Formula (1), and the quantitative formula for the composition Com is shown in (2).
$$CysR = \frac{N_1 + N_2}{N}$$
$$Com = \begin{cases} \text{Solid}, & CysR \le 0.02,\ V_1 \le V_2 \\ \text{Cystic}, & CysR \ge 0.3,\ N_1 > N_2,\ N_0 > 0 \\ \text{Mixed}, & \text{otherwise} \end{cases}$$
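As an illustration, the following NumPy sketch implements one plausible reading of Eqs. (1)–(2). The text leaves the histogram quantities ambiguous, so the choices below are assumptions: the 10%/5% bands are measured on the gray-level range (taking "top" as the brightest band), and the median splits the distribution into its two halves for V1 and V2.

```python
import numpy as np

def composition(gray, mask):
    """Classify nodule composition via Eqs. (1)-(2); see the caveats above."""
    g = gray[mask > 0].astype(float)                # gray levels of the nodule region
    n = g.size                                      # N: pixel count of the nodule
    n0 = int(np.sum(g == 0))                        # N0: pixels with gray level 0
    gnz = g[g > 0]                                  # histogram over nonzero gray levels
    lo, hi = gnz.min(), gnz.max()
    n1 = int(np.sum(gnz >= hi - 0.10 * (hi - lo)))  # N1: top-10% gray-level band
    n2 = int(np.sum(gnz >= hi - 0.05 * (hi - lo)))  # N2: top-5% gray-level band
    med = np.median(gnz)
    v1 = gnz[gnz >= med].var()                      # V1: variance of the upper half
    v2 = gnz[gnz < med].var()                       # V2: variance of the lower half
    cys_r = (n1 + n2) / n                           # Eq. (1)
    if cys_r <= 0.02 and v1 <= v2:                  # Eq. (2)
        return "solid"
    if cys_r >= 0.3 and n1 > n2 and n0 > 0:
        return "cystic"
    return "mixed"
```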
For the feature of shape, two parameters, Concavity and Compactness, are extracted automatically to quantify the degree of boundary irregularity. The more irregular the thyroid nodule, the higher the risk of malignancy and the larger the values of these parameters. To obtain the Concavity parameter, a quadratic curve was first fitted to the thyroid nodule boundary by the least squares method, as shown in Fig. 2.
The boundary fitted curve of thyroid nodule. The black solid line is the nodule boundary and the red dashed line is the elliptic curve fitted to this boundary
The fitted curve divides the nodule region into three parts: the concave part, which lies beyond the boundary and within the curve; the overlapping part between the nodule region and the curve; and the convex part, which lies inside the boundary and outside the curve. Concavity is defined as the ratio of the total area of the concave and convex parts to the common area, as shown in Eq. (3).
$$\text{Concavity} = \frac{S_o + S_i}{S_c},$$
wherein $S_o$ and $S_i$ are the areas of the convex and concave parts, respectively, and $S_c$ is the area of the nodule region overlapping with the curve.
The other parameter, Compactness, is defined as the ratio of the square of the perimeter to the area of the thyroid nodule multiplied by 4π, as shown in Formula (4).
$$\text{Compactness} = \frac{L^2}{4\pi \cdot \text{Area}}$$
wherein L is the perimeter, Area is the area of the nodule.
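A short OpenCV sketch of both shape parameters follows. Note one named substitution: cv2.fitEllipse (a least-squares ellipse fit) stands in for the paper's quadratic-curve fit, since Fig. 2 describes the fitted curve as elliptic.

```python
import cv2
import numpy as np

def shape_features(mask):
    """Concavity (Eq. 3) and Compactness (Eq. 4) from a binary nodule mask."""
    mask = (mask > 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)

    box = cv2.fitEllipse(cnt)                  # least-squares ellipse fit
    ell = np.zeros_like(mask)
    cv2.ellipse(ell, box, 1, thickness=-1)     # rasterize the fitted curve

    s_o = np.logical_and(mask == 1, ell == 0).sum()  # convex: inside boundary, outside curve
    s_i = np.logical_and(mask == 0, ell == 1).sum()  # concave: outside boundary, inside curve
    s_c = np.logical_and(mask == 1, ell == 1).sum()  # overlap of nodule and curve
    concavity = (s_o + s_i) / s_c              # Eq. (3)

    area = cv2.contourArea(cnt)
    perim = cv2.arcLength(cnt, closed=True)
    compactness = perim ** 2 / (4 * np.pi * area)    # Eq. (4)
    return concavity, compactness
```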
The quantification of the margin is mainly based on the gray levels just inside and outside the nodule boundary. A 10-pixel disk structuring element is used with erosion and dilation operations on the binary image of the tumor region to obtain band-shaped regions along the boundary, as shown in Fig. 3.
Feature of margin for thyroid nodules. The characteristics of the thyroid margin, a is the original image of thyroid nodule, b is the boundary of the nodule, and c, d are the internal and external bands of the boundary after morphological operation, respectively
Given that the numbers of pixels in the inner and outer bands are denoted n1 and n2, with mean gray values u1 and u2, respectively, the statistical difference between the gray levels of the regions inside and outside the nodule adjacent to the border is measured by the inter-class variance [13], as shown in Formula (5).
$$InterVar = \frac{n_1 (u_1 - u)^2 + n_2 (u_2 - u)^2}{n_1 + n_2}, \quad u = \frac{n_1 u_1 + n_2 u_2}{n_1 + n_2}$$
Next, this quantity is normalized to obtain the average gray-scale difference (mean separability),
$$MeanSep = \frac{InterVar}{TotalVar}$$
wherein TotalVar represents the variance of the gray levels of all the pixels in the banded region inside and outside the boundary.
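A minimal sketch of the margin feature, assuming an OpenCV-style binary mask and its grayscale image; the disk radius of 10 pixels follows the text.

```python
import cv2
import numpy as np

def margin_features(gray, mask, radius=10):
    """InterVar and MeanSep across the nodule border (Eqs. 5-6)."""
    mask = (mask > 0).astype(np.uint8)
    size = 2 * radius + 1
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    inner = mask - cv2.erode(mask, se)         # band just inside the boundary
    outer = cv2.dilate(mask, se) - mask        # band just outside the boundary

    g1 = gray[inner > 0].astype(float)
    g2 = gray[outer > 0].astype(float)
    n1, n2 = g1.size, g2.size
    u1, u2 = g1.mean(), g2.mean()
    u = (n1 * u1 + n2 * u2) / (n1 + n2)
    inter_var = (n1 * (u1 - u) ** 2 + n2 * (u2 - u) ** 2) / (n1 + n2)  # Eq. (5)
    total_var = np.concatenate([g1, g2]).var() # variance over both bands
    return inter_var, inter_var / total_var    # InterVar, MeanSep (Eq. 6)
```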
Calcification
The features of calcification are mainly manifested as macro-calcification, no-calcification, and micro-calcification [14], as shown by the red arrows in Fig. 4. Micro-calcification is recognized as a strong indicator of malignancy, and macro-calcification and no-calcification also carry some potential for malignancy. A deep learning algorithm [15] is used to classify calcification into these three categories; the details are not given here.
Feature of calcification for thyroid nodules. Three types of calcification features of thyroid nodules, they are no-calcification, macro-calcification and micro-calcification, and the calcification area is shown by the red arrow in b, c. a No-calcification, b macro-calcification, c micro-calcification
Taller than wide
Taller-than-wide is another important shape feature, characterized by the aspect ratio (AR), defined as the ratio of depth to width, as shown in Fig. 5. It reflects the growth pattern of the tumor to a certain degree: the greater the AR, the higher the risk of malignancy.
Feature of Taller than wide for thyroid nodules. The red rectangle in the figure is the minimum bounding rectangle of the thyroid nodule boundary, where depth and width are the length and width of the rectangle respectively
Many previous TIRADS studies, however, did not include the feature of blood flow. We believe that color Doppler imaging is crucial to improving the diagnostic accuracy for benign and malignant thyroid nodules, especially the distribution of blood flow [16,17,18,19]. Generally, a central-type blood flow distribution is considered one of the significant malignant features, while focal thyroid inferno is a typical blood flow pattern of benign tumors. We explored the distribution of blood flow of thyroid nodules on color Doppler sonography to cover all possible patterns of blood flow distribution, including weak as well as strong indicators of malignancy. The distribution pattern of blood flow was quantified as central type, peripheral type, focal thyroid inferno [12], messy type, and no blood flow, as shown in Fig. 6.
Thyroid nodules blood flow distribution. The distribution pattern of blood flow was quantified as: central type, peripheral type, focal thyroid inferno, messy type and no blood flow, as shown in figure. a Messy, b central type, c peripheral type, d focal thyroid inferno, e no vascularity
Feature weights
It is widely accepted that each feature plays a different role in the identification of benign and malignant thyroid nodules. We propose the definitions of "benign rate" and "malignant rate" to describe the contribution of each feature based on statistical results, namely, the reference weight. The final weight of each feature is then obtained by combining the reference weight with the experience of thyroid experts.
We used the statistical results for the ultrasound gray-scale features (except cystic features) in literature [9] to obtain the occurrence frequency of each feature over 1658 cases of thyroid nodules. Each feature has a probability of occurrence in both benign and malignant thyroid nodules, as shown in Table 2. The "benign rate" and "malignant rate" defined in this paper show the contribution of each feature to the prediction of benign and malignant tumors, respectively, as shown in Eqs. (7, 8):
$${\text{BenRate}} = \frac{{N_{iB} }}{{N_{B} }} \div \left( {\frac{{N_{iB} }}{{N_{B} }} + \frac{{N_{iM} }}{{N_{M} }}} \right),\quad N_{B} = 1383,\;N_{M} = 275$$
$${\text{MalRate}} = \frac{{N_{iM} }}{{N_{M} }} \div \left( {\frac{{N_{iB} }}{{N_{B} }} + \frac{{N_{iM} }}{{N_{M} }}} \right),\quad N_{B} = 1383,\;N_{M} = 275$$
wherein $N_{iB}$ and $N_{iM}$ (i = 1, 2, 3, …, 10) are the numbers of benign and malignant nodules for each grayscale feature, respectively, and $N_B$ and $N_M$ represent the total numbers of benign and malignant nodules.
Table 2 Gray scale ultrasound features of thyroid nodules
As for the feature of blood flow, the statistics of the blood flow distribution were obtained by counting the Doppler images of the 153 cases of thyroid nodules, followed by the calculation of the benign and malignant rates of the blood flow feature according to Formulas (9, 10); the results are shown in Table 3.
$${\text{DBenRate}} = \frac{{N_{iDB} }}{{N_{DB} }} \div \left( {\frac{{N_{iDB} }}{{N_{DB} }} + \frac{{N_{iDM} }}{{N_{DM} }}} \right),\quad N_{DB} = 78,\;N_{DM} = 75$$
$${\text{DMalRate}} = \frac{{N_{iDM} }}{{N_{DM} }} \div \left( {\frac{{N_{iDB} }}{{N_{DB} }} + \frac{{N_{iDM} }}{{N_{DM} }}} \right),\quad N_{DB} = 78,\;N_{DM} = 75$$
where $N_{iDB}$ and $N_{iDM}$ (i = 1, 2, 3, …, 5) are the numbers of benign and malignant nodules for each distribution type of blood flow, respectively, and $N_{DB}$ and $N_{DM}$ are the total numbers of benign and malignant nodules.
Table 3 Results of blood flow feature statistics of thyroid nodules
The reference weight $W_i$ was obtained from the benign and malignant rates of each feature in Tables 2 and 3, as shown in Formula (11). The final weight of each feature, the malignant score, was derived from the opinions of clinical experts together with the reference weight, as shown in Table 4.
$$W_i = \begin{cases} \frac{R_{iM}}{R_{iB}}, & R_{iB} < R_{iM} \\ 6, & R_{iM}/R_{iB} > 6 \\ \frac{R_{iM}}{R_{iB}}, & R_{iB} > R_{iM} \end{cases} \quad (i = 1, 2, 3, \ldots, 15)$$
wherein $R_{iB}$ and $R_{iM}$ are the benign and malignant rates of each feature in Tables 2 and 3, respectively.
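A minimal sketch of Eqs. (7), (8), and (11) for a single feature; the counts in the usage comment are hypothetical placeholders, not values from Table 2.

```python
def rates_and_weight(n_ib, n_im, n_b, n_m, cap=6.0):
    """Benign/malignant rates (Eqs. 7-8) and reference weight W_i (Eq. 11)."""
    fb = n_ib / n_b                    # frequency of the feature among benign
    fm = n_im / n_m                    # frequency of the feature among malignant
    ben_rate = fb / (fb + fm)          # Eq. (7)
    mal_rate = fm / (fb + fm)          # Eq. (8)
    w = min(mal_rate / ben_rate, cap)  # Eq. (11): rate ratio, capped at 6
    return ben_rate, mal_rate, w

# Hypothetical counts for one feature (with N_B = 1383, N_M = 275 as in the text):
# rates_and_weight(n_ib=120, n_im=140, n_b=1383, n_m=275)
```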
Table 4 Feature weight of TIRADS
Feature scoring
In current TIRADS [6, 20], the malignant risk assessment of thyroid nodules is generally based on the number of malignant features present, which depends on the subjective diagnosis of doctors and does not include the essential feature of blood flow, so the accuracy is limited. We therefore used different scoring methods for the different ultrasound features.
First, the feature scores of composition, calcification, and blood flow distribution were set to their corresponding malignant weights in TIRADS, as shown in the last column of Table 4.
Then, the feature scores of margin and shape were obtained by curve fitting, with the quantified feature parameters as the abscissa and the corresponding malignant weight as the maximum of the ordinate. Take the score of the shape parameter Concavity as an example. The Concavity values of the thyroid nodules were sorted in ascending order as abscissa values. To obtain the feature score corresponding to each parameter value, the maximum, minimum, and average of the Concavity parameter were taken as three abscissa values corresponding to feature scores of 6 (the weight of the shape feature, i.e., the maximum feature score), 0.5, and 3, respectively. A curve was then fitted through these three coordinate pairs, so that the ordinate value, i.e., the feature score, corresponding to each parameter value on the abscissa can be obtained, as shown in Fig. 7, wherein the red "*" and the green "*" represent malignant and benign samples, respectively.
Score curve for the shape feature parameter values of Concavity. The abscissa represents the quantified shape parameter value of Concavity for each thyroid nodule, and the ordinate represents the feature score for each parameter value. Red "*" represents a sample of malignant thyroid nodules, and green "*" is a benign sample
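The text specifies only the three anchor points of the score curve, namely (minimum, 0.5), (mean, 3), and (maximum, 6); the quadratic fit below is our assumption about the curve family, not necessarily the authors' choice.

```python
import numpy as np

def score_curve(values, w=6.0):
    """Map a quantified shape parameter (e.g. Concavity) to a feature score.

    Anchors from the text: (min, 0.5), (mean, w/2), (max, w), with w the
    feature weight; the quadratic through them is an assumption.
    """
    v = np.asarray(values, dtype=float)
    x = np.array([v.min(), v.mean(), v.max()])
    y = np.array([0.5, w / 2.0, w])
    coef = np.polyfit(x, y, 2)          # exact quadratic through the 3 anchors
    return np.clip(np.polyval(coef, v), 0.0, w)
```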
Finally, the feature score of taller-than-wide (ScoreAR) was calculated by combining the aspect ratio parameter with its malignant weight. Instead of simply dividing nodules into two categories, the feature score is refined when the aspect ratio exceeds 1 by combining it with the feature weight, yielding a more accurate malignant assessment, as shown in Formula (12).
$$ScoreAR = \begin{cases} AR, & AR \le 1 \\ (AR - 1) \cdot w, & AR > 1 \end{cases}$$
wherein AR is the value of the aspect ratio parameter and w is the feature weight of taller-than-wide, w = 6, as shown in Table 4.
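A sketch of Eq. (12); for simplicity the AR is computed from an axis-aligned bounding box, whereas Fig. 5 shows a minimum bounding rectangle, so this is an approximation.

```python
import numpy as np

def score_taller_than_wide(mask, w=6.0):
    """ScoreAR of Eq. (12) from a binary nodule mask."""
    ys, xs = np.nonzero(mask)
    depth = ys.max() - ys.min() + 1     # vertical extent of the bounding box
    width = xs.max() - xs.min() + 1     # horizontal extent
    ar = depth / width                  # aspect ratio, per Fig. 5
    return ar if ar <= 1.0 else (ar - 1.0) * w
```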
TIRADS score
After accumulating the scores of the six categories of features (a total of eight feature scores, as shown in Table 6) for each thyroid tumor, a comprehensive TIRADS score for predicting the malignancy of each nodule is obtained, as shown in Formula (13), wherein $s_i$ is the score of each feature.
$$Score = \sum_{i = 1}^{8} s_i$$
Then, the TIRADS scores of the 153 ultrasound images of thyroid nodules were sorted in ascending order. The index of each nodule was taken as the abscissa and its TIRADS score as the ordinate to draw the TIRADS score curve, as shown in Fig. 8.
TIRADS score curve of 153 cases of thyroid nodules. The TIRADS score distribution for each thyroid nodule image, wherein the red dots represent malignant samples, the green dots benign samples, and the black solid line is the curve fitted to the samples' TIRADS scores
The maximum score of the TIRADS proposed in this paper is 26 points (the sum of the largest malignant weights of the feature scores), divided into 52 scoring sub-intervals with a step size of 0.5 points. First, the TIRADS score of each thyroid nodule was obtained by Eq. (13), and the scores of the 153 ultrasound images were sorted in ascending order; then the benign and malignant cases whose TIRADS scores fall within each sub-interval were counted. Based on the malignant risk of the current TIRADS, these 52 sub-intervals were divided into 6 grading intervals, representing levels 2, 3, 4a, 4b, 4c, and 5 of the TIRADS classification. Finally, the numbers of benign and malignant tumors were counted and the risk of malignancy was calculated for each TIRADS level according to Eq. (14).
$$R_i = \frac{n_{iM}}{n_{iM} + n_{iB}}, \quad i = 1, 2, \ldots, 6$$
wherein i is the index of the TIRADS classification interval, and $n_{iB}$ and $n_{iM}$ are the numbers of benign and malignant cases in each interval, respectively.
Therefore, the TIRADS classification method proposed in this paper can assign the corresponding TIRADS grade and malignancy risk based on the TIRADS score of a thyroid nodule ultrasound image.
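A sketch of the per-interval risk computation of Eq. (14); the grouping of the 52 sub-intervals into the six TIRADS levels follows Table 5 and is not reproduced here.

```python
import numpy as np

def interval_risks(scores, labels, step=0.5, max_score=26.0):
    """Malignancy risk per 0.5-point scoring sub-interval (Eq. 14).

    labels: 1 for malignant, 0 for benign.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    edges = np.arange(0.0, max_score + step, step)        # 52 sub-intervals
    idx = np.clip(np.digitize(scores, edges) - 1, 0, len(edges) - 2)
    risks = np.full(len(edges) - 1, np.nan)
    for i in range(len(edges) - 1):
        sel = idx == i
        n_m = int(np.sum(labels[sel] == 1))               # malignant in interval i
        n_b = int(np.sum(labels[sel] == 0))               # benign in interval i
        if n_m + n_b:
            risks[i] = n_m / (n_m + n_b)                  # Eq. (14)
    return risks
```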
The experimental data come from the Department of Ultrasound of the China-Japan Friendship Hospital in Beijing and do not involve patients' personal information.
In this study, 153 cases of thyroid nodules were graded, among which 78 were benign and 75 were malignant. The results of the TIRADS classification are shown in Table 5. Samples of correct TIRADS classification results are shown in Fig. 9, and their specific classification parameters are shown in Table 6.
Table 5 TIRADS score and grading results
Samples of correct classification results for Ultrasound images of thyroid nodules. Each image is labeled with the TIRADS classification in Table 6
Table 6 The specific parameters of the TIRADS classification for each nodule in Fig. 9
The classification of the thyroid nodules based on the TIRADS of literature [9] is presented in Table 7.
Table 7 Classification results of TIRADS in literature [9]
The TI-RADS classification of the thyroid is derived from the BI-RADS classification of the breast. In the 2013 edition of the ACR BI-RADS, lesions are classified into categories 0–6: category 0 for incomplete evaluation; category 1 negative; category 2 benign; category 3 probably benign; category 4 suspicious for malignancy, divided into subtypes 4a, 4b, and 4c; category 5 highly suspicious for malignancy; and category 6 pathologically confirmed malignant lesions. Most authors have used this scheme to classify breast and thyroid lesions in the past 3 years; the results of the malignant risk comparison are shown in Table 8.
Table 8 TIRADS malignant risk comparison results
As can be seen from Table 8, the TIRADS presented in this paper is more in line with the malignancy risk of BI-RADS than the TIRADS of Kwak et al. [9].
To further confirm the classification accuracy for each sample, we compared our results with the reference grade of each sample assigned by experts, as shown in Fig. 10. Among the 78 benign nodules, the grading results of 4 cases did not match the experts, as shown in Fig. 11; the correct classification rate reaches 94.87% for benign nodules and 100% for malignant nodules.
Comparison of TIRADS grading results between our TIRADS and radiologists. The abscissa represents the thyroid nodule samples and the ordinate the TIRADS grade of each sample; blue represents the grading results of this paper and orange the thyroid radiologists' grading results
Examples of wrong classification in our TIRADS. Four thyroid nodule samples whose classification results are inconsistent with the radiologists', labeled nodules (a–d). Each sample includes a grayscale ultrasound image and the corresponding color Doppler image
To find the reasons for the grading deviations, the TIRADS feature parameters and scores of each of the above nodules were analyzed, as shown in Table 9.
Table 9 The specific parameters of the TIRADS classification in Fig. 11
The TIRADS classification of nodule (a) is 4c while the radiologist reference grade is 4a. As can be seen from Table 9, this thyroid tumor has an ill-defined margin and irregular shape, so the malignant scores of the shape parameters (Concavity and Compactness) and the margin parameters (InterVar and MeanSep) are high, in accordance with the weights of margin and shape. Besides, the blood flow feature in our TIRADS is classified as messy, whereas the experts judge that there is no blood flow signal in the nodule. Since the blood flow feature extraction algorithm relies on the position of the nodule boundary, boundary information made inaccurate by blood flow coverage pushes the TIRADS score slightly above its true value.
The TIRADS classification of nodule (b) is 5 while the radiologist reference grade is 4a. Although the tumor is benign, it shows strong malignant features: an ill-defined margin, irregular shape, and micro-calcification. The TIRADS classification of nodule (c) is 3 while the radiologist reference grade is 4a. As can be seen from Table 9, the malignant scores of all parameters of this nodule are unremarkable and the malignant features are not obvious, leading to a low comprehensive TIRADS score. The experts mainly considered a feature not included in the TIRADS classification of this paper: when the nodule protrudes from or breaks through the boundary, the risk of malignancy increases. Nevertheless, the grading result is very close to the radiologists' reference grading.
As for nodule (d), the TIRADS grade of 4c is higher than the radiologist reference grade of 4a. As can be seen in Fig. 11, the features of irregular shape and ill-defined margin are strong malignant indicators; the TIRADS classification is therefore higher than the reference classification because these two feature types (shape and margin) receive high malignant scores, as shown in Table 9. For thyroid nodules (b) and (d), the experts consider the composition to be mixed, while our TIRADS classifies the nodules as solid. Therefore, the accuracy of boundary information and feature extraction still needs further improvement.
In summary, we proposed a novel TIRADS that stratifies thyroid nodules according to the probability of malignancy calculated by a scoring system. Although the usefulness of this category system requires confirmation by a prospective study in a general population, our TIRADS could provide helpful guidance in deciding optimal strategies for the management of thyroid nodules.
Ito M, Chono T, Sekiguchi M, et al. Quantitative evaluation of diagnostic information around the contours in ultrasound images. J Med Ultrason. 2005;32(4):135–44.
Russ G. Risk stratification of thyroid nodules on ultrasonography with the French TI-RADS: description and reflections. Ultrasonography. 2016;35(1):25–38.
Cheng SP. Characterization of thyroid nodules using the proposed thyroid imaging reporting and data system (TI-RADS). Head Neck. 2013;35(4):541–7.
Han XT, Yang Y, Peng B, et al. Thyroid nodule ultrasound image feature extraction technique based on TI-RADS. Comput Sci. 2015;42(S2):126–30.
Duan HM, Zhang TS, et al. Diagnostic value of ultrasound TI-RADS classification of thyroid cancer. Pract Med. 2015;20:3391–4.
Yao JF, Zhang YH, Wang QJ, et al. Diagnostic efficacy of TIRADS classification and routine ultrasound in the qualitative diagnosis of thyroid nodules. J Oncol. 2017;23(4):273–7.
Horvath E, Majlis S, Rossi R, Franco C, Niedmann JP, Castro A, Dominguez M. An ultrasonogram reporting system for thyroid nodules stratifying cancer risk for clinical management. J Clin Endocrinol Metab. 2009;94:1748–51.
Park JY, Lee HJ, Jang HW, et al. A proposal for a thyroid imaging reporting and data system for ultrasound features of thyroid carcinoma. Thyroid. 2009;19(11):1257–64.
Kwak JY, Han KH, Yoon JH, et al. Thyroid imaging reporting and data system for US features of nodules: a step in establishing better stratification of cancer risk. Int J Med Radiol. 2011;260(3):892.
Zhang J, Liu BJ, Xu HX, et al. Prospective validation of an ultrasound-based thyroid imaging reporting and data system (TI-RADS) on 3980 thyroid nodules. Int J Clin Exp Med. 2015;8(4):5911–7.
Zhang ZY, Wan DD, et al. Benign and malignant thyroid nodules identification based on B-mode ultrasonography. Microcomput Appl. 2013;32(2):30–3.
Fu X, Guo L, Zhang H, et al. "Focal thyroid inferno" on color Doppler ultrasonography: a specific feature of focal Hashimoto's thyroiditis. Eur J Radiol. 2012;81(11):3319–25.
Han XT. Ultrasound-based thyroid nodules computer-aided diagnosis method. Southwest Jiao Tong University; 2016.
Bi T, Bai W, Hu B. Study of relationship between thyroid calcification morphology on ultrasound and thyroid carcinoma. Chin J Ultrasound Med. 2016;32(6):481–3.
Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR), Boston, MA, USA, 7–12 June 2015. IEEE; 2015. p. 3431–40.
Palaniappan MK, Aiyappan SK, Ranga U. Role of gray scale, color Doppler and spectral Doppler in differentiation between malignant and benign thyroid nodules. J Clin Diagn Res. 2016;10(8):TC01.
Chammas MC, Moon HJ, Kim EK. Why do we have so many controversies in thyroid nodule Doppler US? Radiology. 2011;259(1):304.
Lacout A, Chevenet C, Salas J, et al. Thyroid Doppler US: tips and tricks. J Med Imaging Radiat Oncol. 2016;60(2):210–5.
Tatar IG, Kurt A, Yilmaz KB, et al. The role of elastosonography, gray-scale and color flow Doppler sonography in prediction of malignancy in thyroid nodules. Radiol Oncol. 2014;48(4):348.
Yang YP, Xu XH. Progress in thyroid ultrasound TI-RADS grading diagnostic criteria. Med Theory Pract. 2014;18:2418–9.
YZ, JLL and KC conceived and designed the experiments; CL and ZH provided image data base, gave conceptual advice and commented on the manuscript; YZ contributed to the experiments and the manuscript preparation; YZ, JLL, KC and CL analyzed the discussions and revised the manuscript. All authors read and approved the final manuscript.
This paper is supported by the National Science Foundation of China Grant No. 81301286, the Ph.D. Programs Foundation of Ministry of Education of China Grant No. 20130181120001, and the Science and Technology Support Project of Sichuan Province Grant No. 2014GZ0005-7. Our images are supported by the Department of Ultrasound, China-Japan Friendship Hospital (Beijing 100029).
For this type of retrospective study, formal consent is not required, and this article does not contain patient data.
National Science Foundation of China (81301286). Ph.D. Programs Foundation of Ministry of Education of China (20130181120001). Science and Technology Support project of Sichuan Province (2014GZ0005-7).
Department of Biomedical Engineering, Sichuan University College of Materials Science and Engineering, Chengdu, 610065, Sichuan, China
Yan Zhuang, Ke Chen & Jiang Li Lin
China-Japan Friendship Hospital, Beijing, 100029, China
Cheng Li & Zhan Hua
Yan Zhuang
Zhan Hua
Ke Chen
Jiang Li Lin
Correspondence to Jiang Li Lin.
Zhuang, Y., Li, C., Hua, Z. et al. A novel TIRADS of US classification. BioMed Eng OnLine 17, 82 (2018). https://doi.org/10.1186/s12938-018-0507-3
TIRADS ultrasound
December 2015, 35(12): 6133-6153. doi: 10.3934/dcds.2015.35.6133
Complexity and regularity of maximal energy domains for the wave equation with fixed initial data
Yannick Privat 1, Emmanuel Trélat 2 and Enrique Zuazua 3
CNRS, Sorbonne Universités, UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France
Université Pierre et Marie Curie (Univ. Paris 6) and Institut Universitaire de France and Team GECO Inria Saclay, CNRS UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris
BCAM - Basque Center for Applied Mathematics, Mazarredo, 14, E-48009 Bilbao-Basque Country
Received September 2013 Revised January 2014 Published May 2015
We consider the homogeneous wave equation on a bounded open connected subset $\Omega$ of $\mathbb{R}^n$. Some initial data being specified, we consider the problem of determining a measurable subset $\omega$ of $\Omega$ maximizing the $L^2$-norm of the restriction of the corresponding solution to $\omega$ over a time interval $[0,T]$, over all possible subsets of $\Omega$ having a certain prescribed measure. We prove that this problem always has at least one solution and that, if the initial data satisfy some analyticity assumptions, then the optimal set is unique and moreover has a finite number of connected components. In contrast, we construct smooth but not analytic initial conditions for which the optimal set is of Cantor type and in particular has an infinite number of connected components.
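For readability, the optimal design problem stated in the abstract can be written compactly as follows (a sketch in notation introduced here: $y$ denotes the solution of the wave equation with the fixed initial data, and $L \in (0,1)$ the prescribed fraction of the measure of $\Omega$):

$$\sup_{\substack{\omega \subset \Omega \ \text{measurable} \\ |\omega| = L\,|\Omega|}} \int_0^T \int_\omega |y(t,x)|^2 \, dx \, dt$$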
Keywords: Wave equation, Fourier series, optimal domain, Cantor set, calculus of variations.
Mathematics Subject Classification: 93B07, 49K20, 49Q10.
Citation: Yannick Privat, Emmanuel Trélat, Enrique Zuazua. Complexity and regularity of maximal energy domains for the wave equation with fixed initial data. Discrete & Continuous Dynamical Systems - A, 2015, 35 (12) : 6133-6153. doi: 10.3934/dcds.2015.35.6133
Yannick Privat Emmanuel Trélat Enrique Zuazua
|
CommonCrawl
|
The purpose of this paper is to explore the trade-offs and synergies of multifunctional cultivated land (MCL) at multiple scales. The study area is Wuhan Metropolitan Area, China. The entropy method and the method of Spearman's rank correlation were employed for the analysis of combined land use/cover data, administrative division data, population data and statistical yearbook data, from the multi-scale perspectives of cities, counties and townships. The results showed that: (1) The multi-functionality of cultivated land had obvious spatial differences and its overall spatial patterns were relatively robust, which did not change very much at the single scale. (2) At each single scale, the MCL's trade-offs and synergies had spatial heterogeneity. (3) Scale effects existed in the MCL's trade-offs and synergies. From the prefecture-level city scale, to the county scale, and to the township scale, the MCL's trade-offs were changed to synergies, and some synergic relationships were enhanced. This article contributes to the literature by deepening the multiscale analysis of trade-offs and synergies of multifunctional cultivated land. The conclusions might provide a basis for helping policy-makers to implement protection measures for the multi-functionality of cultivated land at the right spatial scale, and to promote the higher-level synergies of multifunctional cultivated land to realize its sustainable use.
Key words: sustainable cultivated land use, multifunctional cultivated land, trade-offs and synergies, scale effects, Wuhan Metropolitan Area
YANG Fengyanzi, HU Weiyan. Exploring the Scale Effects of Trade-offs and Synergies of Multifunctional Cultivated Land—Evidence from Wuhan Metropolitan Area[J]. Journal of Resources and Ecology, 2022, 13(6): 1116-1127.
Fig. 1 Location of the Wuhan Metropolitan Area in China
Fig. 2 Technical roadmap of the trade-offs and synergies of multifunctional cultivated land
Table 1 Evaluation system of multifunctional cultivated land in the Wuhan Metropolitan Area

| Function | Evaluation index | Direction | Calculation method** | Weight (city scale) | Weight (county scale) | Weight (township scale) |
| --- | --- | --- | --- | --- | --- | --- |
| F1: Production function | F11: Grain yield (t) | + | $y_{1ij}=Y_{1i}\times \frac{A_{ij}}{S_{i}}$ | 0.0769 | 0.1028 | 0.1102 |
| | F12: Vegetable yield (t) | + | $y_{2ij}=Y_{2i}\times \frac{A_{ij}}{S_{i}}$ | 0.1308 | 0.1388 | 0.2296 |
| | F13: Fruit yield (t) | + | $y_{3ij}=Y_{3i}\times \frac{A_{ij}}{S_{i}}$ | 0.1399 | 0.1347 | 0.2423 |
| | F14: Land reclamation rate (%) | + | Cultivated land area / total land area | 0.0504 | 0.0611 | 0.0592 |
| F2: Ecological function | F21: Fertilizer application rate (t) | - | $y_{4ij}=Y_{4i}\times \frac{A_{ij}}{S_{i}}$ | 0.0278 | 0.0331 | 0.0109 |
| | F22: Carbon fixation and oxygen release (t ha-1) | + | CO2 absorption + O2 release | 0.0786 | 0.1030 | 0.1107 |
| | F23: Habitat fragmentation (/) | - | (Area fragmentation index + distribution fragmentation index) / 2 | 0.0379 | 0.0436 | 0.0326 |
| | F24: Per capita ecological carrying capacity of cultivated land (/)* | + | Per capita cropland resource endowment × cropland yield factor × cropland equilibrium factor | 0.0441 | 0.0438 | — |
| F3: Social function | F31: Food self-sufficiency ratio (%) | + | Food output / (resident population × 400 kg) | 0.0434 | 0.0610 | 0.0772 |
| | F32: Per capita cultivated land area (m2 person-1) | + | Cultivated land area / resident population | 0.0470 | 0.0391 | 0.0567 |
| | F33: Proportion of agricultural output value (%)* | + | Agricultural output value / GDP | 0.0878 | 0.0782 | — |
| | F34: Proportion of employed population in primary industry (%)* | + | Number of employees in primary industry / total number of employees | 0.0547 | — | — |
| F4: Landscape function | F41: Aggregation index (/) | + | Fragstats 4.2 (AI) | 0.0415 | 0.0396 | 0.0233 |
| | F42: Shannon's diversity index (/) | + | Fragstats 4.2 (SHDI) | 0.0527 | 0.0372 | 0.0161 |
| | F43: Contagion index (/) | + | Fragstats 4.2 (CONTAG) | 0.0318 | 0.0393 | 0.0164 |
| | F44: Perimeter area fractal dimension (/) | - | Fragstats 4.2 (PAFRAC) | 0.0548 | 0.0446 | 0.0148 |
Table 2 Data sources and descriptions

| Data | Source | Format |
| --- | --- | --- |
| Land use/cover | Bureau of Natural Resources and Planning of Hubei Province | Vector |
| Administrative division data | NGCC (http://www.ngcc.cn/ngcc/) | Vector |
| Population data | Resource and Environment Science and Data Center (http://www.resdc.cn/) | Raster |
| Socioeconomic statistics | Hubei Provincial Bureau of Statistics (http://tjj.hubei.gov.cn/) | Spreadsheet |
| Agricultural production data | Hubei Provincial Bureau of Statistics (http://tjj.hubei.gov.cn/) | Spreadsheet |
Fig. 3 Multiscale spatial distribution patterns of the single functions of cultivated land in Wuhan Metropolitan Area. Note: Columns 1-4 (a-d) represent the production, ecological, social and landscape functions of cultivated land, respectively; rows 1-3 represent the city, county and township scales, respectively.
Fig. 4 Multiscale spatial distribution pattern of multifunctional cultivated land in Wuhan Metropolitan Area. Note: a-c show the multifunctional degree of cultivated land measured by the weighted comprehensive index method at the city, county and township scales; d-f show the multifunctional degree measured by Simpson's reciprocal index method at the same three scales.
Fig. 5 Scale effect of trade-offs and synergies of multifunctional cultivated land in Wuhan Metropolitan Area. Note: a, b, and c represent the city, county, and township scales, respectively; * and ** indicate significance levels of 0.05 and 0.01, respectively.
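As a quick illustration of how such trade-offs and synergies can be quantified, the sketch below computes Spearman's rank correlation between two cultivated-land function scores. The scores and the use of `scipy` are illustrative assumptions of mine, not the paper's data or code.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical function scores for five spatial units (illustrative only)
production = np.array([0.42, 0.55, 0.31, 0.78, 0.64])
ecological = np.array([0.61, 0.40, 0.72, 0.35, 0.50])

rho, p = spearmanr(production, ecological)
print(rho, p)  # rho < 0 suggests a trade-off, rho > 0 a synergy; p gives significance
```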
Malaysian Journal of Science
Phycoerythrin Production by a Marine Oscillatoria (Cyanophyta)
Chu, Wan-Loy1, Afnani Alwi2, Phang, Siew Moi3.
Malaysian Journal of Science (Volume 21, No. 1 & 2, 2002)
The production of the commercially important pigment phycoerythrin by the marine cyanobacterium $\mathit{Oscillatoria}$ UMACC 216 was investigated. Cultures from different stages of growth (days 2, 4, 6, 8 and 10) were harvested for the determination of phycoerythrin. Cells from the exponential phase contained the highest amounts of phycoerythrin (66.7 mg g$^{-1}$ dry weight). The cultures changed from red to green and then yellow after attaining the stationary phase. A separate batch of cultures was grown at salinities of 5, 10, 15, 20 and 25 (control) parts per thousand (ppt). Cells grown at 15 ppt contained the highest amounts of phycoerythrin (114.7 mg g$^{-1}$ dry weight). The phycoerythrin content of $\mathit{Oscillatoria}$ UMACC 216 was much higher than that reported for other cyanobacteria. Further studies to optimise phycoerythrin production by this alga are worthwhile.
Advances in Continuous and Discrete Models
Theory and Modern Applications
Operators constructed by means of basic sequences and nuclear matrices
Ahmed Morsy1,
Nashat Faried2,
Samy A. Harisa1,2 &
Kottakkaran Sooppy Nisar1
Advances in Difference Equations volume 2019, Article number: 504 (2019)
In this work, we establish an approach to constructing compact operators between arbitrary infinite-dimensional Banach spaces without a Schauder basis. For this purpose, we use a countable number of basic sequences for the sake of verifying the result of Morrell and Retherford. We also use a nuclear operator, represented as an infinite-dimensional matrix defined over the space \(\ell _{1}\) of all absolutely summable sequences. Examples of nuclear operators over the space \(\ell _{1}\) are given and used to construct operators over general Banach spaces with specific approximation numbers.
Introduction and basic definitions
Banach spaces that are separable and reflexive can exist without a Schauder basis, as proved by Enflo in 1973 [11]. However, in 1972, Morrell and Retherford [8] showed that in every infinite-dimensional Banach space, and for any monotonically decreasing sequence of positive numbers \((\lambda _{i})_{i\in N}\) converging to zero, where \(N=\{1,2,3,\ldots \}\), one can construct a weakly square-summable basic sequence whose elements have norms \(\lambda _{i}\).

In 1977, Makarov and Faried [7] showed how to construct compact operators of the form \(\sum_{i\in N} \mu _{i}f_{i}\otimes x_{i}\) between arbitrary infinite-dimensional Banach spaces such that the sequence of approximation numbers has a specific rate of convergence to zero. It was also proved that the operator ideal whose operators have p-summable sequences of approximation numbers is a small ideal; see [4, 10, 11].
In this work, we show how to construct compact operators between arbitrary infinite-dimensional Banach spaces using a countable number of basic sequences and nuclear operators, represented in the form of an infinite-dimensional matrix \((\mu _{ij})_{i,j\in N}\) defined over the space \(\ell _{1}\) of all absolutely summable sequences, which verifies
$$\begin{aligned} \lim_{j}\mu _{ij}=0 \end{aligned}$$
for every \(i\in N\). For such double-summation operators, a choice of matrix elements is more convenient than choosing sequence elements in the case of single-summation operators. Such a construction will help give counterexamples of operators between Banach spaces without a Schauder basis. An upper estimate of the sequence of approximation numbers is given for such double-summation operators. For basic notions and some related results, one can see [1, 6, 9, 13].
The following notations are used throughout this study. The normed space of bounded linear operators from a normed space X into a normed space Y is denoted by \(L(X, Y)\), while the dual space of the normed space X is denoted by \(X^{*}=L(X, R)\), where R is the set of real numbers.
The space \(\{x=(x_{i})_{i=1}^{\infty }:\sum_{i}|x_{i}|^{p} <\infty \}\) of all p-absolutely summable sequences of real numbers is denoted by \(\ell _{p}\) and is equipped with the norm \(\|x\|=(\sum_{i\in N}|x_{i}|^{p})^{\frac{1}{p}}\). The space \(\{x=(x_{i})_{i=1}^{\infty }: \lim x_{i}=0\}\) of all sequences of real numbers convergent to zero is denoted by \(c_{o}\) and is equipped with the norm \(\|x\|=\sup_{i\in N}|x_{i}|\).
Definition 1.1
([12])
A map s, which assigns a unique sequence \(\{s_{r}(T)\}_{r=0}^{ \infty }\) of real numbers to every operator \(T\in {L(X,Y)}\), is called an s-number sequence if the following conditions are verified:
\(\|T\|=s_{0}(T)\geq s_{1}(T)\geq \cdots \geq 0\) for \(T\in L(X,Y)\).
\(s_{r+m}(U+V)\leq s_{r}(U)+s_{m}(V)\) for \(U,V\in L(X,Y)\).
\(s_{r}(UTV)\leq \|U\|s_{r}(T)\|V\|\) for \(V\in L(X_{0},X), T \in L(X,Y)\) and
\(U\in L(Y,Y_{0})\).
\(s_{r}(T)=0\) if and only if \(\operatorname{rank}(T)\leq r\) for \(T\in L(X,Y)\).
\(s_{r}(I_{k})=\bigl\{ \begin{array}{l@{\quad}l} 1, & \text{for }r< k; \\ 0, & \text{for }r\geq k, \end{array} \)
where \(I_{k}\) is the identity operator on Euclidean space \(\ell _{2} ^{k}\).
As examples of s-numbers, we mention the approximation numbers \(\alpha _{r}(T)\), Gelfand numbers \(c_{r}(T)\), Kolmogorov numbers \(d_{r}(T)\), and Tikhomirov numbers \(d_{r}^{*}(T)\), defined by
\(\alpha _{r}(T)=\inf \{\|T-A\|: A\in L(X,Y)\) and \(\operatorname{rank}(A)\leq r\}\). Clearly, we always have \(\|T\|=\alpha _{0}(T)\geq \alpha _{1}(T)\geq \alpha _{2}(T)\geq \cdots \geq 0\).
\(c_{r}(T)=\alpha _{r}(J_{Y}T)\), where \(J_{Y}\) is a metric injection from the space Y into the space \(\ell ^{\infty }( \varLambda )\) of all bounded real-valued functions on a suitable index set Λ.
$$\begin{aligned} d_{r}(T)=\inf_{\operatorname{dim}K\leq r} \sup_{ \Vert x \Vert \leq 1} \inf_{y\in K} \Vert Tx-y \Vert , \end{aligned}$$
where \(K\subseteq Y\).
\(d_{r}^{*}(T)=d_{r}(J_{Y}T)\).
Definition 1.2

An operator \(T\in L(X,Y)\) is nuclear if and only if it can be represented in the form
$$\begin{aligned} T(x)=\sum_{i=1}^{\infty }a_{i}(x)y_{i}, \end{aligned}$$
with \(a_{1}, a_{2},\ldots \in X^{*}\) and \(y_{1}, y_{2}, \ldots \in Y\), such that
$$\begin{aligned} \sum_{i=1}^{\infty } \Vert a_{i} \Vert \Vert y_{i} \Vert < \infty. \end{aligned}$$
On the class \(N(X,Y)\) of all nuclear operators from X into Y, a norm \(\nu (T)\) is defined by
$$\begin{aligned} \nu (T)=\inf \biggl\{ \sum_{i} \Vert a_{i} \Vert \Vert y_{i} \Vert \biggr\} , \end{aligned}$$
where the inf is taken over all possible representations of the operator T.
Basic theorems and technical lemmas
It is well known that an infinite matrix defines a linear continuous operator from the space \(\ell _{1}\) into itself if the \(\ell _{1}\)-norms of its columns are uniformly bounded; see [3, 4, 10].
Lemma 2.1
([11], 6.3.6)
An operator \(T\in L(\ell _{1},\ell _{1})\) is nuclear if and only if there is an infinite matrix \((\sigma _{ik})_{i,k\in N}\) such that
$$\begin{aligned} T(x)= \Biggl(\sum_{k=1}^{\infty }\sigma _{ik}x_{k} \Biggr)_{i=1}^{\infty } \quad\textit{for } x=(x_{k})_{k=1}^{\infty }\in \ell _{1} \end{aligned}$$
and

$$\begin{aligned} \sum_{i=1}^{\infty }\sup_{k} \vert \sigma _{ik} \vert < \infty. \end{aligned}$$

Moreover,

$$\begin{aligned} \nu (T)=\sum_{i=1}^{\infty }\sup _{k} \vert \sigma _{ik} \vert . \end{aligned}$$
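As a quick numerical illustration of this criterion (a sketch of mine, not from the paper), the nuclear norm of a finite matrix truncation can be evaluated directly:

```python
import numpy as np

def nu_l1(sigma):
    """nu(T) = sum_i sup_k |sigma_ik| for a (truncated) matrix acting on l_1,
    as in Lemma 2.1; rows are indexed by i, columns by k."""
    return np.abs(np.asarray(sigma, dtype=float)).max(axis=1).sum()

print(nu_l1([[0.5, 0.25], [0.1, 0.05]]))  # 0.5 + 0.1 = 0.6
```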
Lemma 2.2

([3])

If \((T_{i})_{i=1}^{\infty }\) is an absolutely summable sequence of bounded linear operators, then
$$\begin{aligned} \alpha _{n} \Biggl(\sum_{i=1}^{\infty }T_{i} \Biggr)\leq \inf \Biggl\{ \sum_{i=1}^{\infty } \alpha _{n_{i}}(T_{i}):\sum_{i=1}^{\infty }n_{i}=n \Biggr\} , \end{aligned}$$
where the inf is taken over all possible representations for
$$\begin{aligned} \sum_{i=1}^{\infty }n_{i}=n. \end{aligned}$$
The following is a consequence of Lemma 2 in [2].
Theorem 2.3
Let \((x_{i})_{i=1}^{\infty }\) be a sequence in a Banach space X such that
$$\begin{aligned} \sum_{i=1}^{\infty } \bigl\vert f(x_{i}) \bigr\vert < \infty\quad \textit{for every } f\in X^{*}, \end{aligned}$$
then the series \(\sum_{i=1}^{\infty }\lambda _{i}x_{i}\) converges unconditionally in X for every sequence \((\lambda _{i})_{i=1}^{\infty }\in c_{o}\).
Theorem 2.4

(Morrell and Retherford [8])

Let X be an infinite-dimensional Banach space and let \((\lambda _{i})_{i=1}^{\infty }\in c_{o}\) with \(0<\lambda _{i}<1\). Then there is a basic sequence \((x_{i})_{i=1}^{\infty }\) in X such that \(\|x_{i}\|=\lambda _{i}\) for all \(i=1,2,\ldots \) , which verifies
$$\begin{aligned} \sum_{i=1}^{\infty } \bigl\vert f(x_{i}) \bigr\vert ^{2}\leq \Vert f \Vert ^{2} \quad\textit{for every } f\in X^{*}. \end{aligned}$$
Remark 2.5
Theorem 2.4 is valuable in the case of sequences \((\lambda _{i})_{i=1}^{\infty }\) that converge slowly to zero. Indeed, if \((\lambda _{i})_{i=1}^{\infty }\) converges rapidly to zero then \(\sum_{i=1}^{\infty }\|x_{i}\|<\infty \) and hence one can write
$$\begin{aligned} \sum_{i=1}^{\infty } \bigl\vert f(x_{i}) \bigr\vert ^{2}\leq \sum _{i=1}^{\infty } \Vert f \Vert ^{2} \Vert x_{i} \Vert ^{2}\leq C \Vert f \Vert ^{2} \quad\text{for every } f\in X^{*}. \end{aligned}$$
Theorem 2.6

(Dini's theorem [5])

For a convergent series \(\sum_{i=1}^{\infty }a_{i}\) of positive real numbers, the series
$$\begin{aligned} \sum_{i=1}^{\infty }\frac{a_{i}}{R_{i}^{m}} \quad\textit{is } \textstyle\begin{cases} \textit{convergent} & \textit{for }m< 1; \\ \textit{divergent} & \textit{for }m\geq 1, \end{cases}\displaystyle \end{aligned}$$
where \(R_{i}=\sum_{j=i}^{\infty }a_{j}\) is the remainder of the series \(\sum_{i=1}^{\infty }a_{i}\).
Theorem 2.7

Let X and Y be infinite-dimensional Banach spaces and let \((\lambda _{r})_{r=1}^{\infty }\) be a monotonically decreasing sequence of positive real numbers. Then there is a completely continuous operator \(A\in L(X,Y)\) verifying
$$\begin{aligned} 2^{-4}\lambda _{3r}\leq d_{r}^{*}(A) \leq \alpha _{r}(A)\leq 8\lambda _{r} \quad\textit{for every } r \in \{1,2,\ldots \}. \end{aligned}$$
Lemma 2.8

Let \(\{\xi _{i}\}_{i\in N}\) be a bounded family of real numbers and let \(K\subseteq N\) be an arbitrary subset of indices, where \(\operatorname{card} K\) denotes the number of elements in K. Then
$$\begin{aligned} \sup_{\operatorname{card} K=r+1} \inf_{i\in K}\xi _{i} = \inf_{\operatorname{card} K=r} \sup_{i\notin K} \xi _{i}. \end{aligned}$$
Main results
Proposition 3.1
Let X and Y be infinite-dimensional Banach spaces and let \(M=(\mu _{ij})_{i,j\in N}\) be an infinite matrix verifying that:
\(\lim_{j}\mu _{ij}=0\) for every \(i\in N\).
\(\sum_{i=1}^{\infty }\sup_{j=1}^{\infty } \vert \mu _{ij} \vert <\infty\).
Let \((f_{ij})_{i,j\in N}\) be a matrix of functionals in \(X^{*}\) and \((z_{ij})_{i,j\in N}\) be a matrix of elements in Y that verifies
$$ \sup_{i=1}^{\infty }\sum _{j=1}^{\infty } \bigl\vert f_{ij}(x)F(z_{ij}) \bigr\vert < \infty $$
for every F in \(Y^{*}\) and every x in X. Then the expression
$$\begin{aligned} T(x)=\sum_{i=1}^{\infty }\sum _{j=1}^{\infty }\mu _{ij} f_{ij}(x) z _{ij} \end{aligned}$$
defines a linear continuous operator from X into Y.

Proof

Let

$$\begin{aligned} \lambda _{n}=\sum_{i\geq n}\sup _{j=1}^{\infty } \vert \mu _{ij} \vert , \end{aligned}$$
then from Dini's theorem 2.6 we get
$$\begin{aligned} \sum_{i=1}^{\infty }\frac{\sup_{j=1}^{\infty } \vert \mu _{ij} \vert }{\sqrt{ \lambda _{i}}}< \infty. \end{aligned}$$
From condition (1) and Theorem 2.3, the formula
$$\begin{aligned} T_{i}(x)=\sum_{j=1}^{\infty } \frac{\mu _{ij}}{\sqrt{\lambda _{i}}} f _{ij}(x) z_{ij} \end{aligned} \tag{2}$$
defines a linear continuous operator \(T_{i}\in L(X,Y)\) for every \(i=1,2,\ldots \) .
Now we need to prove the unconditional convergence of the series
$$\begin{aligned} T(x)=\sum_{i=1}^{\infty }\sqrt{\lambda _{i}} T_{i}(x). \end{aligned}$$
In order to do so, it is enough to apply again Theorem 2.3, noting that \(\lambda _{n}\rightarrow 0\) and we only have to verify that
$$\begin{aligned} \sum_{i=1}^{\infty } \bigl\vert g T_{i}(x) \bigr\vert < \infty, \quad\text{for every } g\in Y^{*}. \end{aligned}$$
In fact,
$$\begin{aligned} \sum_{i=1}^{\infty }\sum _{j=1}^{\infty } \biggl\vert \frac{\mu _{ij}}{\sqrt{ \lambda _{i}}} f_{ij}(x) g(z_{ij}) \biggr\vert &\leq \sum _{i=1}^{\infty }\sup_{j=1}^{\infty } \frac{ \vert \mu _{ij} \vert }{\sqrt{\lambda _{i}}}\sum_{j=1}^{ \infty } \bigl\vert f_{ij}(x) g(z_{ij}) \bigr\vert \\ &\leq \sum_{i=1}^{\infty }\sup _{j=1}^{\infty }\frac{ \vert \mu _{ij} \vert }{\sqrt{ \lambda _{i}}} \Biggl[ \sup _{i=1}^{\infty } \sum_{j=1}^{\infty } \bigl\vert f_{ij}(x) g(z_{ij}) \bigr\vert \Biggr]< \infty. \end{aligned}$$
Then the expression

$$\begin{aligned} T(x)=\sum_{i=1}^{\infty }\sum_{j=1}^{\infty }\mu _{ij} f_{ij}(x) z_{ij} \end{aligned}$$

defines a linear continuous operator from X into Y. □
Remark 3.2

From Theorem 2.4, for every \(i=1,2,\ldots \) , there exist a basic sequence of functionals \(\{f_{ij}\}_{j=1}^{\infty }\) in \(X^{*}\) and a basic sequence of elements \(\{z_{ij}\}_{j=1}^{\infty }\) in Y such that
$$\begin{aligned} \sum_{j=1}^{\infty } \bigl\vert f_{ij}(x) \bigr\vert ^{2}\leq \Vert x \Vert ^{2} \quad\text{for every } x\in X \end{aligned}$$
$$\begin{aligned} \sum_{j=1}^{\infty } \bigl\vert F(z_{ij}) \bigr\vert ^{2}\leq \Vert F \Vert ^{2} \quad\text{for every } F\in Y^{*}. \end{aligned}$$
Basic sequences can be found by choosing different sequences \((\lambda _{i})_{i=1}^{\infty }\in c_{o}\) converging to zero, as mentioned in Theorem 2.4, according to their rate of convergence.
As a consequence of Proposition 3.1 and Remark 3.2 we get the following result.
Theorem 3.3

Let X and Y be Banach spaces and let \(\{f_{ij}\}_{j=1}^{\infty }\) and \(\{z_{ij}\}_{j=1}^{\infty }\), where \(i\in N\), be basic sequences in \(X^{*}\) and Y, respectively, verifying the following:

\(\sum_{j=1}^{\infty } \vert f_{ij}(x) \vert ^{2}< \Vert x \Vert ^{2}\) for every \(x\in X\) and \(i\in N\).

\(\sum_{j=1}^{\infty } \vert F(z_{ij}) \vert ^{2}< \Vert F \Vert ^{2}\) for every \(F\in Y^{*}\) and \(i\in N\). Then every nuclear operator
$$\begin{aligned} M=\{\mu _{ij}\}:\ell _{1}\rightarrow \ell _{1}, \quad\textit{with } \lim_{j}\mu _{ij}=0, \end{aligned}$$
defines an operator \(T:X\rightarrow Y\) of the form
$$\begin{aligned} T(x)=\sum_{i=1}^{\infty }\sum _{j=1}^{\infty }\mu _{ij} f_{ij}(x) z _{ij}. \end{aligned}$$
The proof follows directly from Proposition 3.1 and Remark 3.2. □
Theorem 3.4

Let X and Y be infinite-dimensional Banach spaces, let \(\{\mu _{i}\}_{i=1}^{\infty }\) be a sequence of real numbers convergent to zero, and let \(\{f_{i}\}_{i=1}^{\infty }\), \(\{z_{i}\}_{i=1}^{\infty }\) be sequences in \(X^{*}\) and Y, respectively, verifying that
$$\begin{aligned} \sum_{i=1}^{\infty } \bigl\vert f_{i}(x) \bigr\vert ^{2}\leq \Vert x \Vert ^{2} \quad\textit{for every } x\in X, \end{aligned}$$
$$\begin{aligned} \sum_{i=1}^{\infty } \bigl\vert F(z_{i}) \bigr\vert ^{2}\leq \Vert F \Vert ^{2}\quad \textit{for every } F\in Y^{*}. \end{aligned}$$
Then for the operator
$$\begin{aligned} T=\sum_{i=1}^{\infty }\mu _{i} f_{i} \otimes z_{i} \end{aligned}$$
$$\begin{aligned} \alpha _{n}(T)\leq \inf_{\operatorname{card} K\leq n} \sup _{i\notin K} \vert \mu _{i} \vert , \end{aligned}$$
where K is any subset of the index set N with \(\operatorname{card} K \leq n\).
Proof

For every operator \(T\in L(X,Y)\) and every subset of indices \(K\subset N\) with \(\operatorname{card} K\leq n\), we define a finite rank operator
$$\begin{aligned} A_{K}=\sum_{i\in K}\mu _{i} f_{i} \otimes z_{i} \end{aligned}$$
with \(\operatorname{rank}(A_{K})\leq n\). From the definition of approximation numbers we get
$$\begin{aligned} \alpha _{n}(T) &\leq \Vert T-A_{K} \Vert = \biggl\Vert \sum_{i\notin K}\mu _{i} f_{i} \otimes z_{i} \biggr\Vert \\ &= \sup_{ \Vert x \Vert =1} \sup_{ \Vert F \Vert =1} \biggl\vert \sum_{i\notin K}\mu _{i} f_{i}(x) F(z_{i}) \biggr\vert \\ &\leq \sup_{ \Vert x \Vert =1} \sup_{ \Vert F \Vert =1} \sum _{i\notin K} \bigl\vert \mu _{i} f _{i}(x) F(z_{i}) \bigr\vert \\ &\leq \sup_{i\notin K} \vert \mu _{i} \vert \sup_{ \Vert x \Vert =1} \sup_{ \Vert F \Vert =1} \sum _{i\notin K} \bigl\vert f_{i}(x) F(z_{i}) \bigr\vert \\ &\leq \sup_{i\notin K} \vert \mu _{i} \vert . \end{aligned}$$
Since this relation is true for every index subset K with \(\operatorname{card} K\leq n\),
$$\begin{aligned} \alpha _{n}(T)\leq \inf_{\operatorname{card} K\leq n} \sup _{i\notin K} \vert \mu _{i} \vert . \end{aligned}$$
As a consequence of Theorem 3.4 and by using Lemma 2.8, we can get the following similar result:
$$\begin{aligned} \alpha _{n}(T)\leq \sup_{\operatorname{card} K=n+1} \inf _{i\in K} \vert \mu _{i} \vert . \end{aligned}$$
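For a finite family of coefficients, both of these bounds equal the \((n+1)\)-th largest value of \(\vert \mu _{i} \vert \): removing an index set of size n can discard at most the n largest moduli. A small Python sketch of mine, for illustration only:

```python
def rank_n_bound(mu, n):
    """inf over |K| <= n of sup_{i not in K} |mu_i| for a finite family:
    dropping the n largest moduli leaves the (n+1)-th largest as the sup."""
    return sorted((abs(m) for m in mu), reverse=True)[n]

print(rank_n_bound([0.9, -0.5, 0.3, 0.1], 1))  # 0.5
```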
Theorem 3.6

Let X and Y be infinite-dimensional Banach spaces and let \((\mu _{ij})_{i,j\in N}\) be an infinite matrix with linearly independent rows such that the conditions of Proposition 3.1 are verified, and let \(\{f_{ij}\}_{j=1}^{\infty }\), \(\{z_{ij}\}_{j=1}^{\infty }\) for \(i=1,2,\ldots \) , be sequences in \(X^{*}\) and Y, respectively, such that the conditions of Theorem 3.4 are fulfilled for all \(i=1,2,\ldots \) . Then for the operator
$$\begin{aligned} T=\sum_{i=1}^{\infty }\sum _{j=1}^{\infty }\mu _{ij} f_{ij} \otimes z _{ij} \end{aligned}$$
$$\begin{aligned} \alpha _{n}(T)\leq \inf_{\varSigma n_{i}=n} \sum_{i=1}^{\infty } \Bigl\{ \inf _{\operatorname{card} K\leq n_{i}} \sup_{j\notin K} \vert \mu _{ij} \vert \Bigr\} , \end{aligned} \tag{3}$$
where K is a subset of the index set N with \(\operatorname{card} K \leq n_{i}\).
Proof

From Lemma 2.2, Theorem 3.4 and the operator \(T_{i}\) defined by Eq. (2) in the proof of Proposition 3.1, we get
$$\begin{aligned} \alpha _{n}(T)=\alpha _{n}\Biggl(\sum _{i=1}^{\infty }T_{i}\Biggr)\leq \sum _{i=1} ^{\infty }\alpha _{n_{i}}(T_{i}) \leq \sum_{i=1}^{\infty } \inf _{\operatorname{card} K\leq n_{i}} \sup_{j\notin K} \vert \mu _{ij} \vert . \end{aligned}$$
This relation is true for every \(\varSigma n_{i}=n\), which completes the proof. □

In the following, we give two examples of nuclear operators over \(\ell _{1}\) and use them to construct operators over general Banach spaces with specific approximation numbers.
Example 3.7
Consider the operator \(A\in L(c_{0},\ell _{1})\) such that \(A=(a_{ij})_{i,j=1} ^{\infty }\), where
$$\begin{aligned} &a_{ij} =0 \quad\text{for } i\neq j, \\ &a_{ii} =\frac{1}{2^{k}(k+1)^{2}} \quad\text{for } 2^{k} \leq i< 2^{k+1}. \end{aligned}$$
Also, consider \(B\in L(\ell _{1},c_{0})\), such that
$$\begin{aligned} B= \begin{pmatrix} B_{0}&0&0&\cdots \\ 0&B_{1}&0&\cdots \\ 0&0&B_{2}&\cdots \\ \cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot \end{pmatrix}, \end{aligned}$$
$$\begin{aligned} &B_{0} =(1), \\ &B_{k} = \begin{pmatrix} B_{k-1}&B_{k-1} \\ B_{k-1}&-B_{k-1} \end{pmatrix} \quad\text{is a } 2^{k}\times 2^{k} \text{ matrix for } k=1,2,3,\ldots. \end{aligned}$$
Thus we have \(D=AB\in L(\ell _{1},\ell _{1})\), such that
$$\begin{aligned} D= \begin{pmatrix} D_{0}&0&0&\cdots \\ 0&D_{1}&0&\cdots \\ 0&0&D_{2}&\cdots \\ \cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot \\ \cdot &\cdot &\cdot \end{pmatrix}, \end{aligned}$$
$$\begin{aligned} &D_{0} =(1), \\ &D_{k} =\frac{k^{2}}{2(1+k)^{2}} \begin{pmatrix} D_{k-1}&D_{k-1} \\ D_{k-1}&-D_{k-1} \end{pmatrix} \quad\text{is a } 2^{k}\times 2^{k} \text{ matrix for } k=1,2,3,\ldots. \end{aligned}$$
Let \(D=(\mu _{ij})_{i,j=1}^{\infty }\), then this operator has the following properties:
$$\begin{aligned} \sum_{i=1}^{\infty } \vert \mu _{ii} \vert &=1+\biggl(\frac{1}{8}+\frac{1}{8} \biggr)+\biggl( \frac{1}{36}+\frac{1}{36}+\frac{1}{36}+ \frac{1}{36}\biggr)+\biggl(\frac{1}{128}+ \frac{1}{128}+ \cdots \biggr)+\cdots \\ &=\sum_{i=1}^{\infty }\frac{1}{i^{2}}= \frac{\pi ^{2}}{6}. \end{aligned}$$
$$\begin{aligned} \nu (D)=\sum_{i=1}^{\infty }\sup _{j} \vert \mu _{ij} \vert = \frac{\pi ^{2}}{6}< \infty, \end{aligned}$$
then, by Lemma 2.1, D is a nuclear operator.
\(\operatorname{Trace}(D)=1+(\frac{1}{8}-\frac{1}{8})+( \frac{1}{36}-\frac{1}{36}+\frac{1}{36}-\frac{1}{36})+(\frac{1}{128}- \frac{1}{128}+\cdots )+\cdots =1\).
\(D=(\mu _{ij})_{i,j=1}^{\infty }\) has linearly independent rows.
Now, for \(D=(\mu _{ij})_{i,j=1}^{\infty }\) and by using Proposition 3.1 and Theorem 3.6 one can construct an operator \(T\in L(X,Y)\) for any Banach spaces \(X,Y\) of the form
$$\begin{aligned} T=\sum_{i=1}^{\infty }\sum _{j=1}^{\infty }\mu _{ij} f_{ij} \otimes z _{ij}, \end{aligned}$$
where \(\{f_{ij}\}_{i,j=1}^{\infty }\), \(\{z_{ij}\}_{i,j=1}^{\infty }\), are basic sequences in \(X^{*}\) and Y, respectively, such that conditions of Theorem 3.4 are fulfilled for all \(i=1,2,\ldots \) .
Now by applying Eq. (3), one can get
$$\begin{aligned} \alpha _{n}(T)\leq \frac{\pi ^{2}}{6}-\sum _{i=1}^{k+1}\frac{1}{i^{2}} \quad\text{for } n=1,2,3,\ldots \text{ where } 2^{k}\leq n< 2^{k+1}. \end{aligned}$$
Hence, we have
$$\begin{aligned} \lim_{n\rightarrow \infty }\alpha _{n}(T)\leq \frac{\pi ^{2}}{6}-\sum_{i=1}^{\infty } \frac{1}{i^{2}}=0, \end{aligned}$$
which is consistent with the properties of the approximation numbers.
By applying Eq. (3) in the case of \(n=0\), we get
$$\begin{aligned} \alpha _{0}(T)&= \Vert T \Vert \leq 1+\biggl( \frac{1}{8}+\frac{1}{8}\biggr)+\biggl(\frac{1}{36}+ \frac{1}{36}+\frac{1}{36}+\frac{1}{36}\biggr)+\biggl( \frac{1}{128}+\frac{1}{128}+ \cdots \biggr)+\cdots \\ &=\sum_{i=1}^{\infty }\frac{1}{i^{2}}= \frac{\pi ^{2}}{6}. \end{aligned}$$
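The trace and nuclear-norm computations above can be sanity-checked numerically on a finite truncation of D; the following Python sketch (mine, for illustration only) builds the first K diagonal blocks:

```python
import numpy as np

def B(k):
    """Walsh-type block B_k: B_0 = (1), B_k = [[B_{k-1}, B_{k-1}],
    [B_{k-1}, -B_{k-1}]], a 2^k x 2^k matrix with entries +-1."""
    M = np.array([[1.0]])
    for _ in range(k):
        M = np.block([[M, M], [M, -M]])
    return M

def D_truncated(K):
    """Block-diagonal truncation of D = AB using blocks for k = 0..K-1."""
    blocks = [B(k) / (2**k * (k + 1) ** 2) for k in range(K)]
    n = sum(b.shape[0] for b in blocks)
    D = np.zeros((n, n))
    i = 0
    for b in blocks:
        m = b.shape[0]
        D[i:i + m, i:i + m] = b
        i += m
    return D

D = D_truncated(10)
print(np.trace(D))                  # 1.0: only the k = 0 block contributes
print(np.abs(D).max(axis=1).sum())  # ~1.5498, tending to pi^2/6 ~ 1.6449 as K grows
```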
Example 3.8

Consider the operator \(J\in L(\ell _{1},\ell _{1})\) such that \(J=(\lambda _{ij})_{i,j=1}^{\infty }\), where \(\lambda _{ij}= \frac{ij}{2^{i+j}}\); this operator has the following properties:
\(\nu (J)=\sum_{i=1}^{\infty }\sup _{j} \vert \lambda _{ij} \vert =\sum _{i=1}^{\infty }\frac{i}{2^{i}}\sup _{j}(\frac{j}{2^{j}})=1<\infty \); then, by Lemma 2.1, J is a nuclear operator.
\(J=(\lambda _{ij})_{i,j=1}^{\infty }\) has linearly independent rows.
Now, for \(J=(\lambda _{ij})_{i,j=1}^{\infty }\) and by using Proposition 3.1 and Theorem 3.6, one can construct an operator \(T\in L(X,Y)\) for any Banach spaces \(X,Y\) of the form
$$\begin{aligned} T=\sum_{i=1}^{\infty }\sum _{j=1}^{\infty }\lambda _{ij} f_{ij} \otimes z_{ij}, \end{aligned}$$
where \(\{f_{ij}\}_{i,j=1}^{\infty }\) and \(\{z_{ij}\}_{i,j=1}^{\infty }\) are basic sequences in \(X^{*}\) and Y, respectively, such that conditions of Theorem 3.4 are fulfilled for all \(i=1,2,\ldots \) .
Applying Eq. (3) yields
$$\begin{aligned} \alpha _{n}(T)\leq \frac{n+1}{2^{n}}\quad \text{for } n=1,2,3, \ldots. \end{aligned}$$
Thus, we have \((\alpha _{n}(T))_{n=1}^{\infty }\in \ell _{1}\) because
$$\begin{aligned} \sum_{n=1}^{\infty }\alpha _{n}(T)\leq \sum_{n=1}^{\infty } \frac{n+1}{2^{n}}=3< \infty. \end{aligned}$$
Applying Eq. (3) in the case of \(n=0\) yields
$$\begin{aligned} \alpha _{0}(T)= \Vert T \Vert \leq \frac{1}{2}\sum _{i=1}^{\infty } \frac{i}{2^{i}}= \frac{1}{2}\times 2=1, \end{aligned}$$
noting that this is independent of the selection of \(\{f_{ij}\}_{i,j=1} ^{\infty }\) and \(\{z_{ij}\}_{i,j=1}^{\infty }\).
If we choose \(\{f_{ij}\}_{i,j=1}^{\infty }\) and \(\{z_{ij}\}_{i,j=1} ^{\infty }\) such that
$$\begin{aligned} \Vert f_{ij} \Vert = \Vert z_{ij} \Vert = \frac{1}{\sqrt{ij}}, \end{aligned}$$
then we get
$$\begin{aligned} \nu (T)\leq \sum_{i,j=1}^{\infty }\lambda _{ij} \Vert f_{ij} \Vert \Vert z_{ij} \Vert = \sum_{i,j=1}^{\infty }\biggl( \frac{ij}{2^{i+j}}\biggr) \biggl(\frac{1}{ij}\biggr)=1< \infty, \end{aligned}$$
which means that T, in this case, is a nuclear operator.
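Similarly, \(\nu (J)=1\) and the bound \((n+1)/2^{n}\) can be sanity-checked on a truncation; this is an illustrative sketch of mine, not code from the paper:

```python
import numpy as np

N = 60
i = np.arange(1, N + 1, dtype=float)
J = np.outer(i / 2.0**i, i / 2.0**i)  # lambda_ij = i*j / 2^(i+j), truncated

print(J.max(axis=1).sum())            # nu(J) = sum_i sup_j |lambda_ij| ~ 1
for n in (1, 2, 5, 10):               # upper bound alpha_n(T) <= (n+1)/2^n
    print(n, (n + 1) / 2.0**n)
```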
By using nuclear operators defined over \(\ell _{1}\) with a particular representation, one can construct compact operators over general Banach spaces with specific approximation numbers. Such compact operators are constructed using a countable number of basic sequences and nuclear operators. Writing the nuclear operator in matrix form yields double-summation operators, and this double summation gives more freedom in choosing matrix elements than choosing sequence elements does in the case of single-summation operators. Such a construction helps to give counterexamples of operators between Banach spaces without a Schauder basis.
Albiac, F., Kalton, N.J.: Topics in Banach Space Theory. Graduate Texts in Mathematics. Springer, Berlin (2006)
Bessaga, C., Pełczyński, A.: On basis and unconditional convergence of series in Banach spaces. Stud. Math. 17, 151–164 (1958) http://eudml.org/doc/216910
Faried, N., Abd El Kader, Z., Mehanna, A.A.: s-numbers of polynomials of shift operators on \(\ell ^{p}\) spaces \(1\leq p \leq \infty \). J. Egypt. Math. Soc. 1, 31–37 (1993)
Faried, N., Harisa, S.A.: Wide class of Banach spaces in which Grothendieck conjecture holds. Glob. J. Pure Appl. Math. 12(6), 5059–5077 (2016) http://www.ripublication.com/gjpam.htm
Knopp, K.: Theory and Application of Infinite Series. Blackie and Son Limited, London (1951)
Lindenstrauss, J., Tzafriri, L.: Classical Banach Spaces I: Sequence Spaces. Springer, Berlin (1977)
Makarov, B.M., Faried, N.: Some properties of operator ideals constructed by S-numbers, operator theory in functional spaces. In: Theory of Operators in Functional Spaces, pp. 206–211. The Academy of Science Novosibirsk, Russia (1977)
Morrell, J.S., Retherford, J.R.: p-trivial Banach spaces. Studia Math. 43, 1–25 (1972). https://doi.org/10.4064/sm-43-1-1-25
Munoz, F., Oja, E., Pineiro, C.: On α-nuclear operators with applications to vector-valued function spaces. J. Funct. Anal. 269, 2871–2889 (2015). https://doi.org/10.1016/j.jfa.2015.06.002
Pietsch, A.: Nuclear Locally Convex Spaces. Springer, Berlin (1972)
Pietsch, A.: Operator Ideals. North-Holland, Amsterdam (1980)
Pietsch, A.: Eigenvalues and s-Numbers. Akademische-Verlag, Germany (1987)
Reinov, O.I.: On linear operators with s-nuclear adjoints, \(0 < s \leq 1\). J. Math. Anal. Appl. 415, 816–824 (2014). https://doi.org/10.1016/j.jmaa.2014.02.007
The authors would like to thank the reviewers for valuable comments and suggestions which helped improving this work.
This project was supported by the Deanship of scientific research at Prince Sattam Bin Abdulaziz University under the research project 2017/01/7606.
Department of Mathematics, College of Arts and Sciences, Prince Sattam bin Abdulaziz University, Wadi Aldawasir, Kingdom of Saudi Arabia
Ahmed Morsy, Samy A. Harisa & Kottakkaran Sooppy Nisar
Department of Mathematics, Faculty of Science, Ain Shams University, Cairo, Egypt
Nashat Faried & Samy A. Harisa
The authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.
Correspondence to Kottakkaran Sooppy Nisar.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Morsy, A., Faried, N., Harisa, S.A. et al. Operators constructed by means of basic sequences and nuclear matrices. Adv Differ Equ 2019, 504 (2019). https://doi.org/10.1186/s13662-019-2445-1
Keywords: Nuclear operators, s-numbers, Schauder basis, Basic sequence
Study Group: Towards Sound Fresh Re-keying with Hard (Physical) Learning Problems
The first study group on side-channel analysis in the academic year 2016/2017 covered "Towards Sound Fresh Re-keying with Hard (Physical) Learning Problems" by Stefan Dziembowski, Sebastian Faust, Gottfried Herold, Anthony Journault, Daniel Masny and Francois-Xavier Standaert, recently published at Crypto this year. There are two main reasons why I chose to present it: it is close to my personal research and, generally speaking, I like to read about different areas influencing each other. In this particular case, learning problems that are usually associated with post-quantum cryptography (more specifically with lattice-based cryptography) have been used to prove a security result in leakage-resilient cryptography.
In one sentence, and very loosely speaking, the content of the paper covered in the study group can be summarised as: the authors construct a fresh re-keying scheme and show that it is secure under the LPN assumption. The remainder of this blog post is structured as a number of questions and answers on the previous statement that will hopefully clarify and detail its meaning.
Q: what is a fresh re-keying scheme?
A: intuitively, a fresh re-keying scheme is a set of cryptographic algorithms which allows the generation of session keys from a master secret key and some publicly known randomness. A session key is used to encrypt a message and then discarded. The following diagram nicely represents what is going on.
Protecting block ciphers (or other cryptographic primitives) against DPA (differential power analysis) is an expensive procedure that introduces overhead in the design and whose security is often based on heuristic arguments. Such attacks exploit the dependence of multiple power traces on the same secret key to retrieve it. Hence, fresh re-keying schemes represent a solution that tries to structurally avoid the threat by using each session key only once. Both the receiver and the sender can compute the same session key by means of the shared master secret key and some randomness associated with the ciphertext. At that point, the underlying block cipher needs to retain security only against SPA (simple power analysis), while the algorithm by which the session key is computed should resist DPA, to prevent the adversary from gaining information about the master secret key through leakage. The overall scheme is designed to be more efficient than one which applies DPA countermeasures directly to the block cipher, because the GenSK algorithm is built to be easier to protect.
Q: what is the LPN assumption?
A: the Learning Parity with Noise (LPN) problem asks to find a vector $\mathbf{k}\in\mathbb{Z}_2^n$ given query access to an oracle that outputs $$ (\mathbf{r},<\mathbf{r},\mathbf{k}>\oplus e) \in \mathbb{Z}_2^n\times\mathbb{Z}_2 $$ where $\mathbf{r}\in\mathbb{Z}_2^n$ is a uniformly random vector and $e\leftarrow \mathcal{B}_\tau$ is an error bit drawn from the Bernoulli distribution of a fixed parameter $0<\tau<1/2$, i.e. $\mathbb{P}(e=1)=\tau$. The decisional version, instead, asks to distinguish the above distribution from the uniform one on $\mathbb{Z}_2^n\times\mathbb{Z}_2$. The first step in the overall proof is to define a similar version of such a problem, what the author call the Learning Parity with Leakage (LPL) problem, in which everything is the same bar the distribution of the noise, which is now taken to be the Gaussian distribution over the reals. Note that the second component of the LPL distribution is then a real number. It is shown that: $$ \text{LPN is hard} \Rightarrow \text{LPL is hard}. $$
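As a toy illustration (mine, not from the paper), an LPN sample oracle can be sketched in a few lines of Python; numpy's RNG is an illustrative stand-in and the code is of course not cryptographically secure:

```python
import numpy as np

rng = np.random.default_rng()

def lpn_sample(k, tau):
    """One LPN sample (r, <r,k> XOR e): r uniform in Z_2^n,
    e ~ Bernoulli(tau). k is the secret 0/1 vector."""
    r = rng.integers(0, 2, size=len(k))
    e = int(rng.random() < tau)
    return r, (int(r @ k) % 2) ^ e

k = rng.integers(0, 2, size=8)  # toy secret
print(lpn_sample(k, tau=0.1))
```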
Q: what does it mean for a fresh re-keying scheme to be secure?
A: since we deal with an environment where leakage exists, we should specify both what kind of security model the proof follows and what kind of leakage the adversary is allowed to query.
Security can be stated as the indistinguishability game in the above picture. In the real part on the left, the adversary plays with and queries a real oracle, that is to say an oracle that generates all keys and randomness according to the scheme to be secured. Hence also the leakage is computed on the sensitive variables. Instead, the right-hand side depicts the random game: an ideal oracle generates keys at random, which are then passed to a simulator which computes leakage on them. For the security claim to be valid, the (computational) adversary shouldn't distinguish the oracles she is playing with.
The leakage model is the $t$-noisy probing model. If you imagine a circuit, such a model can be depicted as the adversary being allowed to put probes on up to $t$ wires and to read a noisy version of the values being carried on them. In the specific case of their construction, since there isn't a circuit from which the adversary can get leakage from, the authors specify a set of variables on which noisy bit-selector functions are applied.
Q: which is their fresh re-keying scheme?
A: even if the scheme, called $\Pi_{noisy}$ in the paper, is composed of a set of algorithms, I only specify the core of GenSK, which is responsible for generating the session key. I assume the inputs are some randomness $R$ and a set of shares of the master secret key $\{msk_i\}_{i\leq d}$. $$ \begin{align*} 1.\ & u_i \leftarrow R\cdot msk_i \\ 2.\ & u \leftarrow \sum_{i\leq d} u_i \\ 3.\ & sk \leftarrow H(u) \\ \end{align*} $$ The sum in the second line is computed iteratively as $((\dots (u_1 +u_2)+u_3)+\dots )+u_d$ for security reasons, while the $H$ in the third line is a hash function modelled as a random oracle. In the game, the adversary is given $sk$ and $R$ and can query leakage on a certain number of bits of: $msk_i$, $R\cdot msk_i$ and $\sum u_i$. A further step not specified in the previous list is the application of a refreshing algorithm to the shares of the master secret key, in such a way that they look like new shares when the next session key is generated. Finally, the authors prove the following: $$ \text{LPL is hard} \Rightarrow \Pi_{noisy} \text{ is secure}. $$
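To fix ideas, here is a toy Python sketch of the three GenSK steps. It is my own illustration: arithmetic over $\mathbb{Z}_2$, XOR-shared $msk$, and SHA-256 standing in for the random oracle $H$ are assumptions, and the share-refreshing step is omitted.

```python
import hashlib
import numpy as np

rng = np.random.default_rng()

def gen_sk(R, msk_shares):
    """Toy GenSK: u_i = R * msk_i over Z_2, u = iterated sum of the u_i,
    sk = H(u). Illustrative only; not secure or leakage-protected code."""
    u_shares = [(R @ s) % 2 for s in msk_shares]     # step 1
    u = u_shares[0]
    for ui in u_shares[1:]:                          # step 2: iterative sum
        u = (u + ui) % 2
    return hashlib.sha256(u.astype(np.uint8).tobytes()).hexdigest()  # step 3

n, m, d = 16, 16, 3                                  # toy sizes, d shares
msk_shares = [rng.integers(0, 2, size=n) for _ in range(d)]
R = rng.integers(0, 2, size=(m, n))                  # public randomness
print(gen_sk(R, msk_shares))
```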
For many other details, the paper is available on ePrint.
What is SPDZ? Part 2: Circuit Evaluation
This blog post is the second in a series of three in which we describe what SPDZ is.
In this part we discuss how we perform the additions and multiplications of shared secret values as in the SPDZ protocol.
The Preprocessing Model
The preprocessing model is a framework in which we assume the existence of a 'trusted dealer' who distributes 'raw material' to the parties before any circuit evaluation takes place. The reason for doing this is that if we have the 'raw material' already in place, the circuit can be evaluated very efficiently.
We realise this by splitting the protocol into two phases: a 'preprocessing' (or 'offline') phase where we generate the preprocessed data, and an 'online' phase where we evaluate the circuit. (The term 'offline' is a little misleading, since the parties still communicate.)
Evaluating the Circuit
We now suppose that the parties have agreed on some arithmetic circuit they want to evaluate jointly.
Providing input
The first step of the circuit evaluation is to read in values from (some of) the parties. In SPDZ, this means getting each party with input, $P_i$ with $x^i$ (the superscript $i$ is an index to avoid confusion with $x_i$ which is a share of $x$), to secret-share input $x^i$ to the other parties.
To do this, we need to use a share of a random value: each party $P_i$ holds some $r_i$ and the value $r = \sum_i r_i$ is uniformly random and not known by any party. First, every party sends their share $r_j$ of $r$ to party $P_i$. This is called a 'partial opening' because $P_i$ now discovers the value of $r$ (by summing the shares).
Party $P_i$ then broadcasts the value $x^i - r$. Party $P_1$ sets its share of $x^i$ as $x_1^i=r_1 + x^i - r$, and $P_j$ for $j > 1$ sets its share of $x^i$ as $x^i_j=r_j$. (In practice, it is not always $P_1$ who does a different computation.)
Thus we have turned $x^i$ into $\langle x^i \rangle$.
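A toy Python version of this masking step (my own sketch over an illustrative prime field; SPDZ's authenticated shares and MACs are omitted):

```python
import random

P = 2**61 - 1  # illustrative prime modulus

def provide_input(x, r_shares):
    """r_shares: the parties' additive shares of an unknown random r.
    The input owner learns r, broadcasts x - r, and P_1 adjusts its share."""
    r = sum(r_shares) % P                   # partial opening towards the owner
    broadcast = (x - r) % P                 # public value x - r
    shares = list(r_shares)
    shares[0] = (shares[0] + broadcast) % P # P_1 adds the public difference
    return shares                           # now an additive sharing of x

r_shares = [random.randrange(P) for _ in range(3)]
x_shares = provide_input(42, r_shares)
assert sum(x_shares) % P == 42
```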
Suppose we want to compute some arithmetic circuit on our data; that is, some series of additions and multiplications. We have the following very useful properties of an additive sharing scheme:
If we have some secret-shared field elements $\langle a \rangle$ and $\langle b \rangle$, so party $P_i$ has $a_i$ and $b_i$, each party can locally compute $a_i + b_i$, and hence, since $\sum_i a_i + \sum_i b_i = \sum_i (a_i+b_i)$, they obtain a secret sharing $\langle a + b \rangle$ of the value $a+b$.
If each party multiplies its share by some public value $\alpha$, then since $\sum_i \alpha a_i = \alpha \sum_i a_i = \alpha a$ the parties obtain a secret sharing $\langle \alpha a \rangle$ of the value $\alpha a$.
In other words, the secret-sharing is linear; this is good because it means there is no communication cost for doing these operations. Unfortunately, we have to do a bit more work to multiply secret-shared elements.
In 1991, Donald Beaver [2] observed the following: suppose we want to compute $\langle x y \rangle$ given some $\langle x \rangle$ and $\langle y \rangle$, and we already have three secret-shared values, called a 'triple', $\langle a \rangle$, $\langle b \rangle$ and $\langle c \rangle$ such that $c = a b$. Then note that if each party broadcasts $x_i-a_i$ and $y_i - b_i$, then each party $P_i$ can compute $x-a$ and $y-b$ (so these values are publicly known), and hence compute \[ z_i=c_i + (x-a) b_i + (y-b) a_i\] Additionally, one party (chosen arbitrarily) adds on the public value $(x-a)(y-b)$ to their share so that summing all the shares up, the parties get \[\sum_i z_i = c + (x-a)b + (y-b)a + (x-a)(y-b) = xy\] and so they have a secret sharing $\langle z \rangle$ of $xy$.
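The following Python sketch puts the triple trick together. It is an illustration of mine, using the same toy prime field as the input-sharing sketch above; the "openings" are simulated by summing shares locally rather than by real communication.

```python
import random

P = 2**61 - 1  # same illustrative prime modulus as before

def share(v, n=3):
    s = [random.randrange(P) for _ in range(n - 1)]
    return s + [(v - sum(s)) % P]

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply <x> and <y> using a triple (<a>, <b>, <c>) with c = a*b."""
    d = (sum(x_sh) - sum(a_sh)) % P    # opened x - a
    e = (sum(y_sh) - sum(b_sh)) % P    # opened y - b
    z = [(ci + d * bi + e * ai) % P
         for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
    z[0] = (z[0] + d * e) % P          # one party adds the public (x-a)(y-b)
    return z

a, b = random.randrange(P), random.randrange(P)
z_sh = beaver_mul(share(7), share(11), share(a), share(b), share(a * b % P))
assert sum(z_sh) % P == 77
```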
The upshot is that if we have lots of triples, then we can perform many multiplications on secret-shared data and hence we can compute any arithmetic circuit. (Note that a triple cannot be reused because this would reveal information about the secrets we are trying to multiply - i.e. since $x-a$ and $y-b$ are made public in the process above)
Importantly, observe that not only do these triples not depend on the input secret-shared values they are used to multiply, they are also completely independent of the circuit to be evaluated. This means that we can generate these triples at any point prior to evaluating the circuit. The values of $a$, $b$ and $c$ are not known by any parties when generated - each party only knows a share of each of 'some values' for which they are told this relation holds.
Moreover, if these triples are generated in the offline phase at some point before the circuit evaluation, since addition, scalar multiplication and field multiplication are inexpensive in terms of communication and computation, the online phase is both highly efficient and information-theoretically secure.
Combining the above, we have now established roughly how SPDZ evaluates an arithmetic circuit.
Next time: In the final part, we will look at what makes SPDZ different from other MPC protocols which achieve the same thing.
[1] B. Applebaum, Y. Ishai, and E. Kushilevitz. How to garble arithmetic circuits. 52nd FOCS, pp120–129. IEEE Computer Society Press, 2011
[2] D. Beaver. Efficient Multiparty Protocols using Circuit Randomisation. In J. Feigenbaum, editor, CRYPTO, volume 576 of Lecture Notes in Computer Science, pp420-432, Springer, 1992.
[3] R. Bendlin, I. Damgard, C. Orlandi, and S. Zakarias. Semi-homomorphic encryption and multiparty computation. In EUROCRYPT, pp169-188, 2011.
[4] I. Damgard, M. Keller, E. Larraia, V. Pastro, P. Scholl, N. P. Smart. Practical covertly secure MPC for dishonest majority - or: Breaking the SPDZ limits. In ESORICS (2013), J. Crampton, S. Jajodia, and K. Mayes, Eds., vol. 8134 of Lecture Notes in Computer Science, Springer, pp. 1–18.
[5] I. Damgard, V. Pastro, N. P. Smart, and S. Zakarias. Multiparty computation from somewhat homomorphic encryption. In Advances in Cryptology – CRYPTO 2012, volume 7417 of LNCS, pp643–662. Springer, 2012.
[6] I. Damgard and S. Zakarias. Constant-overhead secure computation of boolean circuits using preprocessing. In TCC, pp621-641, 2013.
[7] M. Keller and E. Orsini and P. Scholl. MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer. Cryptology ePrint Archive, Report 2016/505, 2016.
[8] J. Buus Nielsen, P. Nordholt, C. Orlandi, and S. Burra. A new approach to practical active-secure two-party computation. In Reihaneh Safavi-Naini and Ran Canetti, editors, Advances in Cryptology CRYPTO 2012, volume 7417 of Lecture Notes in Computer Science, pp681-700. Springer Berlin Heidelberg, 2012.
[9] A. Shamir. How to Share a Secret. In Communications of the ACM, Volume 22 Issue 11, Nov. 1979, pp612-613.
[10] A. Yao. How to generate and exchange secrets. In SFCS '86 Proceedings of the 27th Annual Symposium on Foundations of Computer Science, pp162–167. IEEE, 1986.
What is SPDZ? Part 1: MPC Circuit Evaluation Overview
This blog post is the first in a series of three in which we look at what MPC circuit evaluation is, an outline of how MPC protocols in the so-called 'preprocessing model' work, and finally the specifics of SPDZ. They will come in weekly instalments.
In this part, we will introduce the idea of MPC circuit evaluation.
If you do research in the field of cryptography, at some point you've quite possibly come across the curiously named SPDZ ('speedz'). The aim of this blog post is to explain what it is and why it's used. In order to keep this post as short and accessible as possible, lots of the details are omitted, and where new concepts are introduced, they are kept superficial.
We start by defining secure multi-party computation (MPC): MPC is a way by which multiple parties can compute some function of their combined secret input without any party revealing anything more to the other parties about their input other than what can be learnt from the output.
Let's make this more concrete: suppose there are two millionaires who want to know which of them has more money without revealing exactly how much money they have. How can they do this? Clearly we can do it with MPC, providing it exists.
Thankfully, MPC does exist. It is used in many different contexts and has various applications, ranging from the 'simple' and specific such as oblivious transfer (more on this later), to the relatively general-purpose functionality of joint circuit computation. SPDZ is an MPC protocol allowing joint computation of arithmetic circuits.
Circuit Garbling vs Secret Sharing
There are two main constructions of MPC protocols for circuit evaluation: circuit garbling and secret sharing.
The answer to the so-called millionaire's problem was first found in the 1980s with Yao's garbled circuits [10]. As circuit garbling is somewhat parallel to the MPC model we work with in SPDZ, we will not discuss it here.
Contrasting this, the SPDZ protocol is a secret-sharing-based MPC protocol.
Secret-Sharing-Based MPC
Whereas circuit garbling involves encrypting and decrypting keys in a specific order to emulate a circuit evaluation (originally a Boolean circuit, but now arithmetic circuits too [1]), SPDZ instead 'secret shares' inputs amongst all parties and uses these shares to evaluate a circuit.
SPDZ is neither the first nor the only secret-sharing-based MPC protocol. Other well known constructions include BDOZ [3], TinyOT [8] and MiniMAC [6]. MASCOT [7] can be seen as an oblivious-transfer-based version of SPDZ. This will be discussed in a little more detail later on.
What is secret sharing?
Suppose I have some field element $a \in \mathbb{F}$ and split it up 'at random' (uniformly) into two pieces, $a = a_1 + a_2$, giving party $P_1$ the value $a_1$ and $P_2$ the value $a_2$. Neither party knows the value $a$, but together they can recover it. We will write $\langle a \rangle$ to mean that the value $a$ is secret-shared between all parties (i.e. for each $i$, party $P_i$ has $a_i$, where $\sum_i a_i = a$).
Of course, there are different ways of secret sharing data (e.g. the analogous multiplicative sharing $a = a_1 \cdot a_2$, and also more complicated schemes like Shamir's [9]), but it turns out that the additive scheme is particularly useful for MPC applications, as we shall see.
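A few lines of Python make the additive scheme and its key property concrete. This is a toy sketch (small field, made-up helper names), not any particular implementation:

```python
import random

P = 101  # a small prime, so all arithmetic is in the field F_101

def share(x, n=3):
    """Split x into n uniformly random additive shares summing to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Linear operations need no communication: each party acts on its own share.
a_sh, b_sh = share(40), share(15)
sum_sh = [(ai + bi) % P for ai, bi in zip(a_sh, b_sh)]  # shares of a + b
scaled_sh = [(3 * ai) % P for ai in a_sh]               # shares of 3 * a
assert reconstruct(sum_sh) == 55 and reconstruct(scaled_sh) == 19  # 120 mod 101
```

Multiplication of two shared values is the one operation that is not local, which is exactly the problem the preprocessed triples solve, as we will see next time.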
The basic overview of secret-sharing MPC of arithmetic circuits (SSMPCoAC?) is the following:
The parties first secret-share their inputs; i.e. input $x^i$ is shared so that $\sum_j x_j^i = x^i$ and party $P_j$ holds $x_j^i$ (and $P_i$ which provides input is included in this sharing, even though it knows the sum).
The parties perform additions and multiplications on these secret values by local computations and communication of certain values (in methods specified below). By construction, the result of performing an operation is automatically shared amongst the parties (i.e. with no further communication or computation).
Finally, the parties 'open' the result of the circuit evaluation. This last step involves each party sending their 'final' share to every other party (and also performing a check that no errors were introduced by the adversary along the way).
These are the steps we follow in a few different MPC circuit evaluation protocols, as we have discussed. The way we compute the circuit differs (slightly) with the protocol.
Next time: In the next part in this series, we will see how to use these secret-shared values to evaluate an arithmetic circuit as in the SPDZ protocol.
Study Group: Crying Wolf: An Empirical Study of SSL Warning Effectiveness
Today's study group was on the now slightly dated 2009 paper 'Crying Wolf: An Empirical Study of SSL Warning Effectiveness' [1], which was published at USENIX. In cryptography research, it is easy to overlook implementation and usability and instead focus on theory. As is succinctly explained in Randall Munroe's well-known comic, the weaknesses in our cryptographic solutions are seldom in the constructions themselves, but in their real-world application.
This paper explores the use and design of the warnings which modern (!) browsers present to a user when SSL certificates cannot be verified, and in particular the user's reaction to them. There is little point in a cryptographically secure system of authentication if the end user ignores warnings and proceeds past them. The authors suggest that when browsers 'cry wolf' upon encountering SSL errors, users become desensitised over time, learn to ignore these warnings, and thus become susceptible to having their data stolen.
(The initiated can skip this.)
SSL stands for Secure Sockets Layer, and is a method by which a client can access a web server securely. The SSL Handshake protocol uses a so-called SSL certificate to verify a server's authenticity to a client. An SSL certificate specifies whom the certificate was issued to, whom it was issued by, the period of validity and the server's public key. (Old SSL protocols have been superseded by TLS, but the principles involved are essentially the same.) At a very high level, the protocol proceeds as follows:
The client sends a 'hello' message to the server, requesting content.
The server sends the client its certificate, which contains its public key.
The client checks that the certificate is valid.
If the check passes, the client generates a session key, encrypts it using the server's public key, and sends this to the server. If the check fails, the client aborts.
The server decrypts the session key using its secret key.
The client and the server can now encrypt all data sent between them using the (symmetric) session key.
What can go wrong?
If the certificate is invalid, the client aborts. The problems this study considers are the following (a short sketch after the list shows how each surfaces in practice):
Expired certificate: the certificate is no longer valid.
Unknown certificate authority: the issuing authority is not known.
Domain mismatch: the domain of the web server and the certificate's listed domain do not match.
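A hypothetical Python sketch, using the standard-library ssl module and the badssl.com test endpoints mentioned at the end of this post, shows how each failure surfaces as a verification error:

```python
import socket
import ssl

ctx = ssl.create_default_context()  # verifies the chain and hostname by default

# One badssl.com endpoint per failure mode discussed above.
for host in ("expired.badssl.com",      # expired certificate
             "self-signed.badssl.com",  # unknown certificate authority
             "wrong.host.badssl.com"):  # domain mismatch
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                print(host, "-> handshake succeeded")
    except ssl.SSLCertVerificationError as err:
        print(host, "->", err.verify_message)
```

Unlike a browser, the library simply aborts; the browser's job, and the subject of this study, is deciding what to show the user instead.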
If one of the above occurs, the web browser will alert the user. The purpose of the study was to assess how effectively browsers convey the severity of the problem to the user: strong warnings in situations where the risks are small teach people to assume that high-risk situations producing the same warning are just as innocuous.
Using a survey, the authors gathered data from 409 web users on their reactions to SSL warnings and their overall comprehension of the risks involved in ignoring them.
They found that context (i.e. the type of website visited) made little difference to whether or not a user would heed the warnings.
According to the data, respondents who understood 'Domain mismatch' and 'Unknown certificate authority' warnings were less likely to proceed than those who did not, whereas those who understood certificate expiry errors were more likely to proceed. In fact, the experimenters found that users consistently rated risk of an expired certificate lower than the other two errors.
The authors additionally report some wonderful responses from users, including:
'I use a Mac, so nothing bad would happen'
'Since I use FreeBSD, rather than Windows, not much [risk]'
'On my Linux box, nothing significantly bad would happen'
A set of 100 participants were asked to use four websites to complete different tasks. One website was a banking website with an invalid certificate, one a library website with an invalid certificate, and two were other sites used as dummies.
The participants were shown either Internet Explorer 7 (IE7), Firefox 2 (FF2), Firefox 3 (FF3), or one of two newly-designed SSL warnings. The IE7 warning is a full-page warning but requires just one click to ignore. The FF2 warning is a pop-up window and also only requires one click to ignore. The first version of the FF3 warning needed 11 steps: 'They made the original version of the warning so difficult for users to override, that only an expert could be likely to figure out how to do it.' The first new design was multi-page and asked users to specify the nature of the website they were visiting, presenting severe warnings for websites requiring a high level of security and milder warnings otherwise. The second new design was similar to the FF3 warning but 'looked more severe'. Images can be found in the paper.
For the library website, the IE7, FF2 and multi-page warnings did little to stop people proceeding compared with the FF3 warning, and the single-page warning performed similarly to the existing warnings.
For the banking website, the two new warnings did prevent people from accessing the website, but no more than the FF3 warning. The new warnings and the FF3 warning outperformed the IE7 and FF2 warnings in preventing people from accessing the website.
In conclusion, the authors say that the average user does not understand the dangers of SSL warnings, and as such the decision of whether or not to proceed should essentially be made for them by the browser in most cases.
More recently, Chrome redesigned its SSL warnings due to the large proportion of users who simply ignored all SSL warnings [2].
To see different SSL warnings in your current browser, visit badssl.com.
[1] Crying Wolf: An Empirical Study of SSL Warning Effectiveness by Joshua Sunshine, Serge Egelman, Hazim Almuhimedi, Neha Atri and Lorrie Faith Cranor. In Proceedings of the 18th USENIX Security Symposium, 2009.
[2] Improving SSL Warnings: Comprehension and Adherence by Adrienne Porter Felt, Alex Ainslie, Robert W. Reeder, Sunny Consolvo, Somas Thyagaraja, Alan Bettes, Helen Harris and Jeff Grimes. In CHI 2015.
Cīrulis, Jānis
Adjoint Semilattice and Minimal Brouwerian Extensions of a Hilbert Algebra. (English). Acta Universitatis Palackianae Olomucensis. Facultas Rerum Naturalium. Mathematica, vol. 51 (2012), issue 2, pp. 41-51
MSC: 03G25, 06A12, 06A15, 08A35 | MR 3058872 | Zbl 06204929
adjoint semilattice; Brouwerian extension; closure endomorphism; compatible meet; filter; Hilbert algebra; implicative semilattice; subtraction
Let $A := (A,\rightarrow ,1)$ be a Hilbert algebra. The monoid of all unary operations on $A$ generated by the operations $\alpha _p\colon x \mapsto (p \rightarrow x)$, which is actually an upper semilattice w.r.t. the pointwise ordering, is called the adjoint semilattice of $A$. This semilattice is isomorphic to the semilattice of finitely generated filters of $A$; it is subtractive (i.e., dually implicative), and its ideal lattice is isomorphic to the filter lattice of $A$. Moreover, the order dual of the adjoint semilattice is a minimal Brouwerian extension of $A$, and the embedding of $A$ into this extension preserves all existing joins and certain "compatible" meets.
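As a concrete illustration (our own choice of example, not from the paper): the Gödel implication on a finite chain is the implication reduct of a Heyting algebra, hence a Hilbert algebra, and its adjoint semilattice can be generated mechanically. A small Python sketch:

```python
# The three-element chain 0 < 1 < 2 with Goedel implication x -> y,
# which equals the top element 2 if x <= y and y otherwise.
A = (0, 1, 2)
TOP = 2

def imp(x, y):
    return TOP if x <= y else y

def alpha(p):
    """The unary operation alpha_p : x |-> (p -> x), encoded as a value table."""
    return tuple(imp(p, x) for x in A)

def compose(f, g):
    return tuple(f[g[x]] for x in A)

# Generate the monoid of operations generated by {alpha_p : p in A}.
ops = {tuple(A)}                      # alpha_TOP is the identity
frontier = {alpha(p) for p in A}
while frontier:
    ops |= frontier
    frontier = {compose(alpha(p), f) for p in A for f in ops} - ops

# Composition is commutative and idempotent here, so (ops, compose) is a
# semilattice -- the adjoint semilattice of this small Hilbert algebra.
assert all(compose(f, g) == compose(g, f) for f in ops for g in ops)
assert all(compose(f, f) == f for f in ops)
print(sorted(ops))  # three operations, one per principal filter of the chain
```

The three resulting operations correspond to the filters generated by 0, 1 and 2 respectively, illustrating the stated isomorphism with the semilattice of finitely generated filters.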
The seminar on mathematical physics will be held on select Mondays and Wednesdays from 12 – 1pm in CMSA Building, 20 Garden Street, Room G10. This year's Seminar will be organized by Artan Sheshmani and Yang Zhou.
The list of speakers for the upcoming academic year will be posted below and updated as details are confirmed. Titles and abstracts for the talks will be added as they are received.
Date Speaker Title/Abstract
9/10/2018 Xiaomeng Xu, MIT Title: Stokes phenomenon, Yang-Baxter equations and Gromov-Witten theory.
Abstract: This talk will include a general introduction to a linear differential system with singularities, and its relation with symplectic geometry, Yang-Baxter equations, quantum groups and 2d topological field theories.
9/17/2018 Gaetan Borot, Max Planck Institute
Title: A generalization of Mirzakhani's identity, and geometric recursion
Abstract: McShane obtained in 1991 an identity expressing the function 1 on the Teichmueller space of the once-punctured torus as a sum over simple closed curves. It was generalized to bordered surfaces of all topologies by Mirzakhani in 2005, from which she deduced a topological recursion for the Weil-Petersson volumes. I will present new identities which represent linear statistics of the simple length spectrum as a sum over homotopy class of pairs of pants in a hyperbolic surface, from which one can deduce a topological recursion for their average over the moduli space. This is an example of application of a geometric recursion developed with Andersen and Orantin.
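For reference, the identity being generalized can be stated in one line; the following is the usual form of McShane's identity (quoted from memory, so treat the normalization as indicative):

```latex
\sum_{\gamma} \frac{1}{1 + e^{\ell_{\gamma}(X)}} \;=\; \frac{1}{2},
```

where the sum runs over all simple closed geodesics $\gamma$ on a hyperbolic once-punctured torus $X$ and $\ell_{\gamma}(X)$ denotes the length of $\gamma$; multiplying both sides by 2 gives the "function 1 on Teichmueller space" formulation used above.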
9/24/2018 Yi Xie, Simons Center Title: sl(3) Khovanov module and the detection of planar theta-graph
Abstract: In this talk we will show that Khovanov's sl(3) link homology together with its module structure can be generalized for spatial webs (bipartite trivalent graphs). We will also introduce a variant called pointed sl(3) Khovanov homology. These two combinatorial invariants are related to Kronheimer-Mrowka's instanton invariants $J^\sharp$ and $I^\sharp$ for spatial webs by two spectral sequences. As an application, we will prove that the sl(3) Khovanov module and pointed sl(3) Khovanov homology both detect the planar theta graph.
10/01/2018 Dori Bejleri, MIT Title: Stable pair compactifications of the moduli space of degree one del Pezzo surfaces via elliptic fibrations
Abstract: A degree one del Pezzo surface is the blowup of P^2 at 8 general points. By the classical Cayley-Bacharach Theorem, there is a unique 9th point whose blowup produces a rational elliptic surface with a section. Via this relationship, we can construct a stable pair compactification of the moduli space of anti-canonically polarized degree one del Pezzo surfaces. The KSBA theory of stable pairs (X,D) is the natural extension to dimension 2 of the Deligne-Mumford-Knudsen theory of stable curves. I will discuss the construction of the space of interest as a limit of a space of weighted stable elliptic surface pairs and explain how it relates to some previous compactifications of the space of degree one del Pezzo surfaces. This is joint work with Kenny Ascher.
10/08/2018 Pei-Ken Hung, MIT Title: The linear stability of the Schwarzschild spacetime in the harmonic gauge: odd part
Abstract: We study the odd solution of the linearized Einstein equation on the Schwarzschild background and in the harmonic gauge. With the aid of Regge-Wheeler quantities, we are able to estimate the odd part of the Lichnerowicz d'Alembertian equation. In particular, we prove the solution decays at rate $\tau^{-1+\delta}$ to a linearized Kerr solution.
10/15/2018 Chris Gerig, Harvard Title: A geometric interpretation of the Seiberg-Witten invariants
Abstract: Whenever the Seiberg-Witten (SW) invariants of a 4-manifold X are defined, there exist certain 2-forms on X which are symplectic away from some circles. When there are no circles, i.e. X is symplectic, Taubes' "SW=Gr" theorem asserts that the SW invariants are equal to well-defined counts of J-holomorphic curves (Taubes' Gromov invariants). In this talk I will describe an extension of Taubes' theorem to non-symplectic X: there are well-defined counts of J-holomorphic curves in the complement of these circles, which recover the SW invariants. This "Gromov invariant" interpretation was originally conjectured by Taubes in 1995. This talk will involve contact forms and spin-c structures.
*Room G02* Sze Ning Mak, Brown Title: Tetrahedral geometry in holoraumy spaces of 4D, $\mathcal{N}=1$ and $\mathcal{N}=2$ minimal supermultiplets
Abstract: In this talk, I will review the supersymmetry algebra. For Lie algebras, the concepts of weights and roots play an important role in the classification of representations. The lack of linear "eigen-equations" in supersymmetry leads to the failure to realize the Jordan-Chevalley decomposition of ordinary Lie algebras on the supersymmetry algebra. Therefore, we introduce the concept "holoraumy" for the 4D, $\mathcal{N}$-extended supersymmetry algebras, which allows us to explore the possible representations of supersymmetric systems of a specified size. The coefficients of the holoraumy tensors for different representations of the same size form a lattice space. For 4D, $\mathcal{N}=1$ minimal supermultiplets (4 bosons + 4 fermions), a tetrahedron is found in a 3D subspace of the 4D lattice parameter space. For 4D, $\mathcal{N}=2$ minimal supermultiplets (8 bosons + 8 fermions), 4 tetrahedrons are found in 4 different 3D subspaces of a 16D lattice parameter space.
10/29/2018 Francois Greer, Simons Center Title: Rigid Varieties with Lagrangian Spheres
Abstract: Let X be a smooth complex projective variety with its induced Kahler structure. If X admits an algebraic degeneration to a nodal variety, then X contains a Lagrangian sphere as the vanishing cycle. Donaldson asked whether the converse holds. We answer this question in the negative by constructing rigid complex threefolds with Lagrangian spheres using Teichmuller curves in genus 2.
11/05/2018 Siqi He, Simons Center Title: The Kapustin-Witten Equations, Opers and Khovanov Homology
Abstract: We will discuss a Witten's gauge theory program to define Jones polynomial and Khovanov homology for knots inside of general 3-manifolds by counting singular solutions to the Kapustin-Witten or Haydys-Witten equations. We will prove that the dimension reduction of the solutions moduli space to the Kapustin-Witten equations can be identified with Beilinson-Drinfeld Opers moduli space. We will also discuss the relationship between the Opers and a symplectic geometry approach to define the Khovanov homology for 3-manifolds. This is joint work with Rafe Mazzeo.
11/12/2018 No Seminar
11/19/2018 Yusuf Barış Kartal, MIT Title: Distinguishing symplectic fillings using dynamics of Fukaya categories
Abstract: The purpose of this talk is to produce examples of symplectic fillings that cannot be distinguished by the dynamical invariants at a geometric level, but that can be distinguished by the dynamics and deformation theory of (wrapped) Fukaya categories. More precisely, given a Weinstein domain $M$ and a compactly supported symplectomorphism $\phi$, one can produce another Weinstein domain $T_\phi$, the open symplectic mapping torus. Its contact boundary is independent of $\phi$ and is the same as the boundary of $T_0\times M$, where $T_0$ is the once-punctured torus. We will outline a method to distinguish $T_\phi$ from $T_0\times M$. This will involve the construction of a mirror-symmetry-inspired algebro-geometric model related to the Tate curve for the Fukaya category of $T_\phi$, and the exploitation of dynamics on these models to distinguish them.
11/26/2018 Charles Doran (filling in for Andreas Malmendier, Utah State, originally scheduled)
Speaker: Charles Doran
Title: Feynman Amplitudes from Calabi-Yau Fibrations
Abstract: This talk is a last-minute replacement for the originally scheduled seminar by Andreas Malmendier.
After briefly reviewing the interpretation of Feynman amplitudes as periods of graph hypersurfaces, we will focus on a class of graphs called the n-loop sunset (or banana) graphs. For these graphs, the underlying geometry consists of very special families of (n-1)-dimensional Calabi-Yau hypersurfaces of degree n+1 in projective n-space. We will present a reformulation using fibrations induced from toric geometry, which implies a simple, iterative construction of the corresponding Feynman integrals to all loop orders. We will then reinterpret the mass-parameter dependence in the case of the 3-loop sunset in terms of moduli of lattice-polarized elliptic fibered K3 surfaces, and describe a method to construct their Picard-Fuchs equations. (As it turns out, the 3-loop sunset K3 surfaces are all specializations of those constructed by Clingher-Malmendier in the originally scheduled talk!) This is joint work with Andrey Novoseltsev and Pierre Vanhove.
Speaker: Andreas Malmendier
Title: (1,2) polarized Kummer surfaces and the CHL string
Abstract: A smooth K3 surface obtained as the blow-up of the quotient of a four-torus by the involution automorphism at all 16 fixed points is called a Kummer surface. A Kummer surface need not be algebraic, just as the original torus need not be. However, algebraic Kummer surfaces obtained from abelian varieties provide a fascinating arena for string compactification, as they are not trivial spaces but are sufficiently simple for one to be able to analyze most of their properties in detail.
In this talk, we give an explicit description of the relation between algebraic Kummer surfaces of Jacobians of genus-two curves with principal polarization and those associated to (1, 2)-polarized abelian surfaces, from three different angles: 1) the birational geometry of quartic surfaces in P^3 using even-eights, 2) elliptic fibrations on K3 surfaces of Picard rank 17 over P^1 using Nikulin involutions, and 3) theta-functions of genus two using two-isogeny. Finally, we will explain how these (1,2)-polarized Kummer surfaces naturally appear as F-theory backgrounds for the so-called CHL string. (This is joint work with Adrian Clingher.)
12/03/2018 Monica Pate, Harvard Title: Gravitational Memory in Higher Dimensions
Abstract: A precise equivalence among Weinberg's soft graviton theorem, supertranslation conservation laws and the gravitational memory effect was previously established in theories of asymptotically flat gravity in four dimensions. Moreover, this triangle of equivalence was proposed to be a universal feature of generic theories of gauge and gravity. In theories of gravity in even dimensions greater than four, I will show that there exists a universal gravitational memory effect which is precisely equivalent to the soft graviton theorem in higher dimensions and a set of conservation laws associated to infinite-dimensional asymptotic symmetries.
12/10/2018 Fenglong You, University of Alberta Title: Relative and orbifold Gromov-Witten theory
Abstract: Given a smooth projective variety X and a smooth divisor D \subset X, one can study the enumerative geometry of counting curves in X with tangency conditions along D. There are two theories associated to it: relative Gromov-Witten invariants of (X,D) and orbifold Gromov-Witten invariants of the r-th root stack X_{D,r}. For sufficiently large r, Abramovich-Cadman-Wise proved that genus zero relative invariants are equal to the genus zero orbifold invariants of root stacks (with a counterexample in genus 1). We prove that higher genus orbifold Gromov-Witten invariants of X_{D,r} are polynomials in r and the constant terms are exactly higher genus relative Gromov-Witten invariants of (X,D). If time permits, I will also talk about further results in genus zero which allow us to study structures of genus zero relative Gromov-Witten theory. This is based on joint work with Hsian-Hua Tseng, Honglu Fan and Longting Wu.
1/28/2019 Per Berglund (University of New Hampshire) Title: A Generalized Construction of Calabi-Yau Manifolds and Mirror Symmetry
Abstract: We extend the construction of Calabi-Yau manifolds to hypersurfaces in non-Fano toric varieties. This provides a generalization of Batyrev's original work, allowing us to construct new pairs of mirror manifolds. In particular, we find novel K3-fibered Calabi-Yau manifolds, relevant for type IIA/heterotic duality in d=4, N=2, string compactifications. We also calculate the three-point functions in the A-model following Morrison-Plesser, and find perfect agreement with the B-model result using the Picard-Fuchs equations on the mirror manifold.
2/4/2019 Netanel (Nati) Rubin-Blaier (Cambridge) Title: Abelian cycles, and homology of symplectomorphism groups
Abstract: Based on work of Kawazumi-Morita, Church-Farb, and N. Salter in the classical case of Riemann surfaces, I will describe a technique which allows one to detect some higher homology classes in the symplectic Torelli group using parametrized Gromov-Witten theory. As an application, we will consider the complete intersection of two quadrics in $P^5$, and produce a non-trivial lower bound for the dimension of the 2nd group homology of the symplectic Torelli group (relative to a fixed line) with rational coefficients.
2/11/2019 Tristan Collins (MIT) Title: Stability and Nonlinear PDE in mirror symmetry
Abstract: A longstanding problem in mirror symmetry has been to understand the relationship between the existence of solutions to certain geometric nonlinear PDEs (the special Lagrangian equation, and the deformed Hermitian-Yang-Mills equation) and algebraic notions of stability, mainly in the sense of Bridgeland. I will discuss progress in this direction through ideas originating in infinite-dimensional GIT. This is joint work with S.-T. Yau.
2/25/2019 Hossein Movasati (IMPA) Title: Modular vector fields
Abstract: Using the notion of infinitesimal variation of Hodge structures I will define an R-variety which generalizes Calabi-Yau and abelian varieties, cubic four, seven and ten folds, etc. Then I will prove a theorem concerning the existence of certain vector fields in the moduli of enhanced R-varieties. These are algebraic incarnation of differential equations of the generating functions of GW invariants (Lian-Yau 1995), Ramanujan's differential equation between Eisenstein series (Darboux 1887, Halphen 1886, Ramanujan 1911), differential equations of Siegel modular forms (Resnikoff 1970, Bertrand-Zudilin 2005).
3/4/2019 Zhenkun Li (MIT) Title: Cobordism and gluing maps in sutured monopoles and applications.
Abstract: The sutured monopole Floer homology was constructed by Kronheimer and Mrowka on balanced sutured manifolds. Floer homologies on closed three manifolds are functors from oriented cobordism category to the category of modules over suitable rings. It is natural to ask whether the sutured monopole Floer homology can be viewed as a functor similarly. In the talk we will answer this question affirmatively.
In order to study the above problem, we will need an important tool called gluing maps. Gluing maps were constructed in the Heegaard Floer theory by Honda, Kazez and Matić, but were previously unknown in the monopole theory. In the talk we will also explain how to construct such gluing maps in monopoles and how to use them to define a minus version of knot monopole Floer homology.
3/11/2019 Yu Pan (MIT) Title: Augmentations and exact Lagrangian cobordisms.
Abstract: Augmentations are tightly connected to embedded exact Lagrangian fillings. However, not all the augmentations of a Legendrian knot come from embedded exact Lagrangian fillings. In this talk, we introduce immersed exact Lagrangian fillings into the picture and show that all the augmentations come from possibly immersed exact Lagrangian fillings. In this way, we realize augmentations, which is an algebraic object, fully geometrically. This is a joint work with Dan Rutherford working in progress.
3/25/2019 Eduardo Gonzalez (UMass Boston) Title: Stratifications in gauged Gromov-Witten theory.
Abstract: Let G be a reductive group and X be a smooth projective G-variety. In classical geometric invariant theory (GIT), there are stratifications of X that can be used to understand the geometry of the GIT quotients X//G and their dependence on choices. In this talk, after introducing basic theory, I will discuss the moduli of gauged maps, their relation to the Gromov-Witten theory of GIT quotients X//G and work in progress regarding stratifications of the moduli space of gauged maps as well as possible applications to quantum K-theory. This is joint work with D. Halpern-Leistner, P. Solis and C. Woodward.
4/1/2019 Athanassios S. Fokas (University of Cambridge) Title: Asymptotics: the unified transform, a new approach to the Lindelöf Hypothesis, and the ultra-relativistic limit of the Minkowskian approximation of general relativity
Abstract: Employing standard, as well as novel, techniques of asymptotics, three different problems will be discussed: (i) the computation of the large-time asymptotics of initial-boundary value problems via the unified transform (also known as the Fokas method, www.wikipedia.org/wiki/Fokas_method) [1]; (ii) the evaluation of the large-t asymptotics to all orders of the Riemann zeta function [2], and the introduction of a new approach to the Lindelöf Hypothesis [3]; (iii) the proof that the ultra-relativistic limit of the Minkowskian approximation of general relativity [4] yields a force with characteristics of the strong force, including confinement and asymptotic freedom [5].
[1] J. Lenells and A. S. Fokas, The Nonlinear Schrödinger Equation with t-Periodic Data: I. Exact Results, Proc. R. Soc. A 471, 20140925 (2015); J. Lenells and A. S. Fokas, The Nonlinear Schrödinger Equation with t-Periodic Data: II. Perturbative Results, Proc. R. Soc. A 471, 20140926 (2015).
[2] A. S. Fokas and J. Lenells, On the Asymptotics to All Orders of the Riemann Zeta Function and of a Two-Parameter Generalization of the Riemann Zeta Function, Mem. Amer. Math. Soc. (to appear).
[3] A. S. Fokas, A Novel Approach to the Lindelöf Hypothesis, Transactions of Mathematics and its Applications (to appear).
[4] L. Blanchet and A. S. Fokas, Equations of Motion of Self-Gravitating N-Body Systems in the First Post-Minkowskian Approximation, Phys. Rev. D 98, 084005 (2018).
[5] A. S. Fokas, Super Relativistic Gravity has Properties Associated with the Strong Force, Eur. Phys. J. C (to appear).
4/8/2019 Yoosik Kim (Boston University) Title: String polytopes and Gelfand-Cetlin polytopes
Abstract: The string polytope was introduced by Littelmann and Berenstein–Zelevinsky as a generalization of the Gelfand-Cetlin polytope in representation theory. For a connected reductive algebraic group $G$ over $\mathbb{C}$ and a dominant integral weight $\lambda$, a choice of a reduced word of the longest element in the Weyl group of $G$ determines a string polytope. Depending on the reduced word of the longest element in the Weyl group, combinatorially distinct string polytopes arise in general. In this talk, I will explain how to classify the string polytopes that are unimodularly equivalent to Gelfand-Cetlin polytopes when $G = \mathrm{SL}_{n+1}(\mathbb{C})$ and $\lambda$ is a regular dominant integral weight. Also, I will explain a conjectural way of obtaining SYZ mirrors respecting a cluster structure invented by Fomin–Zelevinsky. This talk is based on joint work with Yunhyung Cho, Eunjeong Lee, and Kyeong-Dong Park.
Room G02 Junliang Shen (MIT) Title: Perverse sheaves in hyper-Kähler geometry
Abstract: I will discuss the role played by perverse sheaves in the study of topology and geometry of hyper-Kähler manifolds. Motivated by the P=W conjecture, we establish a connection between topology of Lagrangian fibrations and Hodge theory using perverse filtrations. Our method gives new structural results for topology of Lagrangian fibrations associated with hyper-Kähler varieties. If time permits, I will also discuss connections to enumerative geometry of Calabi-Yau 3-folds. Based on joint work with Qizheng Yin.
4/22/2019 Yang Zhou (CMSA) Title: Quasimap wall-crossing for GIT quotients
Abstract: For a large class of GIT quotients X = W//G, Ciocan-Fontanine–Kim–Maulik have developed the theory of epsilon-stable quasimap invariants. They are conjecturally equivalent to the Gromov–Witten invariants of X via explicit wall-crossing formulae, which have been proved in many cases, including targets with a good torus action and complete intersections in a product of projective spaces. In this talk, we will give a proof for all targets in all genera. The main ingredient is the construction of a moduli space with C^* action whose fixed-point loci precisely correspond to the terms in the wall-crossing formulae.
Room G02 Zili Zhang (University of Michigan) Title: P=W, a strange identity for Dynkin diagrams
Abstract: Start with a compact Riemann surface X with marked points and a complex reductive group G. According to Hitchin-Simpson's nonabelian Hodge theory, the pair (X,G) comes with two new complex varieties: the character variety M_B and the Higgs moduli M_D. I will present some aspects of this story and discuss an identity P=W indexed by affine Dynkin diagrams, occurring in the singular cohomology groups of M_D and M_B, where P and W dwell. Based on joint work with Junliang Shen.
5/6/2019 Dennis Borisov (CMSA)
Title: Global shifted potentials for -2-shifted symplectic structures
Abstract: I will explain the notion of shifted symplectic structures due to Pantev, Toën, Vaquié and Vezzosi, and then show that a derived scheme with a -2-shifted symplectic structure can be realized as the critical locus of a globally defined -1-shifted potential.
Joint work with Artan Sheshmani
For a listing of previous Mathematical Physics Seminars, please click here.
To make things more interesting, I think I would like to try randomizing different dosages as well: 12mg, 24mg, and 36mg (1-3 pills); on 5 May 2014, because I wanted to finish up the experiment earlier, I decided to add 2 larger doses of 48 & 60mg (4-5 pills) as options. Then I can include the previous pilot study as 10mg doses, and regress over dose amount.
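A hypothetical Python sketch of how such a schedule and the final regression might look (the day count and the by-hand least-squares fit below are illustrative, not the code actually used for this experiment):

```python
import random

doses = [12, 24, 36, 48, 60]  # mg; pilot-study days enter the data as 10 mg

# Randomized schedule: each experimental day gets a uniformly random dose.
schedule = [random.choice(doses) for _ in range(30)]

def ols(xs, ys):
    """Ordinary least squares by hand: returns (slope, intercept) of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# After the experiment: regress each day's score on that day's dose, e.g.
# scores = [...]; print(ols(schedule, scores))
```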
28,61,36,25,61,57,39,56,23,37,24,50,54,32,50,33,16,42,41,40,34,33,31,65,23,36,29,51,46,31,45,52,30, 50,29,36,57,60,34,48,32,41,48,34,51,40,53,73,56,53,53,57,46,50,35,50,60,62,30,60,48,46,52,60,60,48, 47,34,50,51,45,54,70,48,61,43,53,60,44,57,50,50,52,37,55,40,53,48,50,52,44,50,50,38,43,66,40,24,67, 60,71,54,51,60,41,58,20,28,42,53,59,42,31,60,42,58,36,48,53,46,25,53,57,60,35,46,32,26,68,45,20,51, 56,48,25,62,50,54,47,42,55,39,60,44,32,50,34,60,47,70,68,38,47,48,70,51,42,41,35,36,39,23,50,46,44,56,50,39
(I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I'm shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.)
If you want to focus on boosting your brain power, Lebowitz says you should primarily focus on improving your cardiovascular health, which is "the key to good thinking." For example, high blood pressure and cholesterol, which raise the risk of heart disease, can cause arteries to harden, which can decrease blood flow to the brain. The brain relies on blood to function normally.
Imagine a pill you can take to speed up your thought processes, boost your memory, and make you more productive. If it sounds like the ultimate life hack, you're not alone. There are pills that promise that out there, but whether they work is complicated. Here are the most popular cognitive enhancers available, and what science actually says about them.
Similarly, we could try applying Nick Bostrom's reversal test and ask ourselves, how would we react to a virus which had no effect but to eliminate sleep from alternating nights and double sleep in the intervening nights? We would probably grouch about it for a while and then adapt to our new hedonistic lifestyle of partying or working hard. On the other hand, imagine the virus had the effect of eliminating normal sleep but instead, every 2 minutes, a person would fall asleep for a minute. This would be disastrous! Besides the most immediate problems like safely driving vehicles, how would anything get done? You would hold a meeting and at any point, a third of the participants would be asleep. If the virus made it instead 2 hours on, one hour off, that would be better but still problematic: there would be constant interruptions. And so on, until we reach our present state of 16 hours on, 8 hours off. Given that we rejected all the earlier buffer sizes, one wonders if 16:8 can be defended as uniquely suited to circumstances. Is that optimal? It may be, given the synchronization with the night-day cycle, but I wonder; rush hour alone stands as an argument against synchronized sleep - wouldn't our infrastructure be much cheaper if it only had to handle the average daily load rather than cope with the projected peak loads? Might not a longer cycle be better? The longer the day, the less we are interrupted by sleep; it's a hoary cliche about programmers that they prefer to work in long sustained marathons during long nights rather than sprint occasionally during a distraction-filled day, to the point where some famously adopt a 28 hour day (which evenly divides a week into 6 days). Are there other occupations which would benefit from a 20 hour waking period? Or a 24 hour waking period? We might not know because, without chemical assistance, circadian rhythms would overpower anyone attempting such schedules. It certainly would be nice if one had long time chunks in which one could read a challenging book in one sitting, without heroic arrangements.
Two studies investigated the effects of MPH on reversal learning in simple two-choice tasks (Clatworthy et al., 2009; Dodds et al., 2008). In these tasks, participants begin by choosing one of two stimuli and, after repeated trials with these stimuli, learn that one is usually rewarded and the other is usually not. The rewarded and nonrewarded stimuli are then reversed, and participants must then learn to choose the new rewarded stimulus. Although each of these studies found functional neuroimaging correlates of the effects of MPH on task-related brain activity (increased blood oxygenation level-dependent signal in frontal and striatal regions associated with task performance found by Dodds et al., 2008, using fMRI and increased dopamine release in the striatum as measured by increased raclopride displacement by Clatworthy et al., 2009, using PET), neither found reliable effects on behavioral performance in these tasks. The one significant result concerning purely behavioral measures was Clatworthy et al.'s (2009) finding that participants who scored higher on a self-report personality measure of impulsivity showed more performance enhancement with MPH. MPH's effect on performance in individuals was also related to its effects on individuals' dopamine activity in specific regions of the caudate nucleus.
Our 2nd choice for a Brain and Memory supplement is Clari-T by Life Seasons. We were pleased to see that their formula included 3 of the 5 necessary ingredients Huperzine A, Phosphatidylserine and Bacopin. In addition, we liked that their product came in a vegetable capsule. The product contains silica and rice bran, though, which we are not sure is necessary.
Two additional studies used other spatial working memory tasks. Barch and Carter (2005) required subjects to maintain one of 18 locations on the perimeter of a circle in working memory and then report the name of the letter that appeared there in a similarly arranged circle of letters. d-AMP caused a speeding of responses but no change in accuracy. Fleming et al. (1995) referred to a spatial delay response task, with no further description or citation. They reported no effect of d-AMP in the task except in the zero-delay condition (which presumably places minimal demand on working memory).
Power times prior times benefit minus cost of experimentation: (0.20 \times 0.30 \times 540) - 41 = -9. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8).
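The same arithmetic as a tiny Python helper (a trivial sketch of the formula exactly as used here):

```python
def voi(power, prior, benefit, cost):
    """Expected value of information: the chance the experiment detects a real
    effect (power) times the prior that the effect exists, times the benefit
    of acting on the answer, minus the cost of running the experiment."""
    return power * prior * benefit - cost

print(voi(0.20, 0.30, 540, 41))  # -8.6, i.e. roughly -$9
print(voi(0.40, 0.30, 540, 41))  # 23.8, i.e. +$23.8
```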
Between midnight and 1:36 AM, I do four rounds of n-back: 50/39/30/55%. I then take 1/4th of the pill and have some tea. At roughly 1:30 AM, AngryParsley linked a SF anthology/novel, Fine Structure, which sucked me in for the next 3-4 hours until I finally finished the whole thing. At 5:20 AM, circumstances forced me to go to bed, still having only taken 1/4th of the pill and that determines this particular experiment of sleep; I quickly do some n-back: 29/20/20/54/42. I fall asleep in 13 minutes and sleep for 2:48, for a ZQ of 28 (a full night being ~100). I did not notice anything from that possible modafinil+caffeine interaction. Subjectively upon awakening: I don't feel great, but I don't feel like 2-3 hours of sleep either. N-back at 10 AM after breakfast: 25/54/44/38/33. These are not very impressive, but seem normal despite taking the last armodafinil ~9 hours ago; perhaps the 3 hours were enough. Later that day, at 11:30 PM (just before bed): 26/56/47.
Lebowitz says that if you're purchasing supplements to improve your brain power, you're probably wasting your money. "There is nothing you can buy at your local health food store that will improve your thinking skills," Lebowitz says. So that turmeric latte you've been drinking everyday has no additional brain benefits compared to a regular cup of java.
The soft gels are very small; one needs to be a bit careful - Vitamin D is fat-soluble and overdose starts in the range of 70,000 IU, so it would take at least 14 pills, and it's unclear where problems start with chronic use. Vitamin D, like many supplements, follows a U-shaped response curve (see also Melamed et al 2008 and Durup et al 2012) - too much can be quite as bad as too little. Too little, though, is likely very bad. The previously cited studies with high acute doses worked out to <1,000 IU a day, so they may reassure us about the risks of a large acute dose but not tell us much about smaller chronic doses; the mortality increases due to too-high blood levels begin at ~140nmol/l and reading anecdotes online suggests that 5k IU daily doses tend to put people well below that (around 70-100nmol/l). I probably should get a blood test to be sure, but I have something of a needle phobia.
Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress.
AMP and MPH increase catecholamine activity in different ways. MPH primarily inhibits the reuptake of dopamine by pre-synaptic neurons, thus leaving more dopamine in the synapse and available for interacting with the receptors of the postsynaptic neuron. AMP also affects reuptake, as well as increasing the rate at which neurotransmitter is released from presynaptic neurons (Wilens, 2006). These effects are manifest in the attention systems of the brain, as already mentioned, and in a variety of other systems that depend on catecholaminergic transmission as well, giving rise to other physical and psychological effects. Physical effects include activation of the sympathetic nervous system (i.e., a fight-or-flight response), producing increased heart rate and blood pressure. Psychological effects are mediated by activation of the nucleus accumbens, ventral striatum, and other parts of the brain's reward system, producing feelings of pleasure and the potential for dependence.
Modafinil, sold under the name Provigil, is a stimulant that some have dubbed the "genius pill." It is a wakefulness-promoting agent, often discussed alongside glutamate activators (ampakines). Originally developed as a treatment for narcolepsy and other sleep disorders, physicians are now prescribing it "off-label" to cellists, judges, airline pilots, and scientists to enhance attention, memory and learning. According to Scientific American, "scientific efforts over the past century [to boost intelligence] have revealed a few promising chemicals, but only modafinil has passed rigorous tests of cognitive enhancement." A stimulant, it is a controlled substance with limited availability in the U.S.
Regarding other methods of cognitive enhancement, little systematic research has been done on their prevalence among healthy people for the purpose of cognitive enhancement. One exploratory survey found evidence of modafinil use by people seeking cognitive enhancement (Maher, 2008), and anecdotal reports of this can be found online (e.g., Arrington, 2008; Madrigal, 2008). Whereas TMS requires expensive equipment, tDCS can be implemented with inexpensive and widely available materials, and online chatter indicates that some are experimenting with this method.
Finally, it's not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference).
Adaptogens are plant-derived chemicals whose activity helps the body maintain or regain homeostasis (equilibrium between the body's metabolic processes). Almost without exception, adaptogens are available over-the-counter as dietary supplements, not controlled drugs. Well-known adaptogens include Ginseng, Kava Kava, Passion Flower, St. Johns Wort, and Gotu Kola. Many of these traditional remedies border on being "folk wisdom," and have been in use for hundreds or thousands of years, and are used to treat everything from anxiety and mild depression to low libido. While these smart drugs work in a many different ways (their commonality is their resultant function within the body, not their chemical makeup), it can generally be said that the cognitive boost users receive is mostly a result of fixing an imbalance in people with poor diets, body toxicity, or other metabolic problems, rather than directly promoting the growth of new brain cells or neural connections.
Nootropics (/noʊ.əˈtrɒpɪks/ noh-ə-TROP-iks) (colloquial: smart drugs and cognitive enhancers) are drugs, supplements, and other substances that may improve cognitive function, particularly executive functions, memory, creativity, or motivation, in healthy individuals.[1] While many substances are purported to improve cognition, research is at a preliminary stage as of 2018, and the effects of the majority of these agents are not fully determined.
|
CommonCrawl
|
Distributed mobility management with mobile Host Identity Protocol proxy
Muhana M. Muslam1,
H. Anthony Chan2 &
Neco Ventura3
EURASIP Journal on Wireless Communications and Networking, volume 2017, Article number 71 (2017)
The architectural evolution from hierarchical to flatter networks creates new challenges such as single points of failure and bottlenecks, non-optimal routing paths, scalability problems, and long handover delays. Cellular networks have been hierarchical, so they are largely built on centralized functions on which their handover mechanisms have also been built; these mechanisms need to be redesigned and/or carefully optimized. The mobility extension to the Host Identity Protocol (HIP) proxy, the mobile HIP proxy (MHP), provides seamless and secure handover for the mobile host in hierarchical networks. However, MHP cannot ensure the same handover performance in flatter networks because it also exploits features offered by the hierarchical architecture. This paper extends MHP to the distributed mobile HIP proxy (DMHP). The performance evaluation of DMHP in comparison to MHP and other similar mobility solutions demonstrates that DMHP does indeed perform well in flatter networks. Moreover, DMHP supports both efficient multi-homing and handover management for many mobile hosts moving at the same time to the same new point of attachment.
Cellular network is evolving from a hierarchical to a flatter architecture [1]. The nature of a hierarchical architecture can be harnessed to efficiently and seamlessly support host mobility. This is because it is possible to select/identify between a mobile host (MH) and correspondent host (CH) a functional entity, which can be updated on the MH's current location. Consequently, handover performance will be optimized since the selected entity is topologically closer than the CH to the MH. Unfortunately, that specific aspect of a hierarchical architecture's nature that allows the selection of a central entity for handover optimization is no longer available in a flat architecture where entities are distributed across different networks.
The architectural evolution from hierarchical to flatter networks to handle increased data traffic volumes creates new challenges as identified in [1]. These challenges include single points of failure and bottlenecks, non-optimal routing paths, scalability problems, and long handover delays. Consequently, the handover mechanisms such as [2–4] that have been built based on the centralized mobility function need to be redesigned and/or carefully optimized again.
To support mobility for both Host Identity Protocol (HIP) hosts and non-HIP hosts in hierarchical networks, the HIP proxy and mobile HIP can be integrated into the mobile HIP proxy (MHP). In [3], we presented a preliminary design of the MHP which supported centralized mobility management only. It therefore still has all the drawbacks of centralized mobility management described in [1]. For host mobility support in flat network architectures, an improved design of the MHP that takes the role of a mobility anchor will achieve distributed mobility. This paper introduces such a network-based, distributed mobility management scheme. It distributes mobility anchors, based on an improved design of the MHP, to the access networks to support the IP hosts in all these networks. The proposed distributed mobile HIP proxy (DMHP) enables host mobility in a flat network architecture and addresses the problems of handover delay, scalability, single points of failure, packet loss, and signaling overhead. Further enhancements could be added to DMHP by (1) using an optimization model to provide useful theoretical insights together with a protocol for distributed data query, such as the one presented in [5], and (2) using a distributed online algorithm that employs optimal stopping theory to let nodes make adaptive, online decisions on whether a communication opportunity should be exploited to deliver data packets at each meeting event, as explained in [6]. Although the method in [5] was developed to efficiently support data query in a Mobile Ad hoc Social Network (MASON), many of its ideas could be employed by DMHP (which is designed for infrastructure networks without centralized host mobility support) and may lead to further improvement.
The main contributions in this paper are (1) introducing a network-based distributed mobility management solution for MHs in flat networks, (2) developing an architecture using the advantages of host identity protocol (HIP) for both HIP-enabled MH and non-HIP-enabled MH to support secured mobility and multi-homing, (3) developing mechanisms to enable our proposed solution, DMHP, to manage the handover of several MHs to the same new point of attachment (N-PoA), and (4) qualitatively and quantitatively investigating our proposed solution, DMHP, as well as some widely referenced distributed mobility solutions.
The rest of the paper is organized as follows: Section 2 reviews the related work. Section 3 presents the proposed solution while Section 4 presents the simulation results and performance analysis. Section 5 concludes the paper.
Many mobility solutions have employed the network-based approach to provide mobility support to hosts that lack mobility support capability. For example, Proxy MIPv6 (PMIPv6) [4] extends Mobile IPv6 (MIPv6) [7] to provide network-based mobility support, which is not implemented in the MH protocol stack. However, PMIPv6 relies on the dual role of IP addresses as host identity and locator, and it lacks both scalable mobility support and the extensions required to work in a flat network architecture.
Furthermore, in [8–10], the authors extend PMIPv6 to provide network-based distributed IP mobility management solutions. However, these solutions need to communicate with some central or distributed entity to verify and validate the handed-over mobile hosts, thereby incurring additional delay and signaling. In [10], the authors have proposed a mobility solution, called multiple local mobility anchor (MLMA), which uses PMIPv6 while employing a replication strategy. Since MLMA supports host mobility management in flatter networks, i.e., the same context our proposal targets, further details about MLMA are presented here, and its handover procedure is shown in Fig. 1. As shown in this figure, which explains the MH's handover procedure, the network consists of many access networks represented by access routers (ARs), AR1 to ARn. MLMA replicates PMIPv6's local mobility anchor (LMA) into each of the ARs, AR1 to ARn, as well as into the gateway router (GW).
MH handover procedures using MLMA
In MLMA, suppose the MH performs a handover from the access network through which it established its active session (at AR1, collocated with an LMA) to another access network (at AR2). AR2 detects the attachment of the MH and sends a proxy binding update (PBU) packet to all the ARs and the GW in the network. When the ARs and the GW receive the PBU, they all reply with proxy binding acknowledgment (PBA) packets, one packet from each. MH data traffic routing is thereby improved. However, MLMA has the following shortcomings: (1) a large buffer is needed in each AR collocated with an LMA to maintain a record for every MH in the network whose mobility is managed by MLMA; (2) whenever an MH performs a handover, additional time is needed to search this possibly simple but rather large database of MH records to determine where to deliver packets for each of the MHs in the network; (3) signaling overhead is high, since the new AR of the MH must update all the other ARs and the GW inside the network; and (4) MLMA has no mechanism to support the management of the handover of many MHs to the same new AR.
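A back-of-the-envelope sketch (our own simplification: one PBU to each peer and one PBA back, ignoring retransmissions and tunnel setup) makes shortcoming (3) concrete:

```python
def mlma_signaling(num_ars):
    """Per-handover messages in MLMA: the new AR sends a PBU to every other
    AR and to the GW, and each of those peers replies with a PBA."""
    peers = (num_ars - 1) + 1  # the other ARs plus the gateway
    return 2 * peers

def neighbor_scoped_signaling(num_neighbors=1):
    """A scheme that updates only the previous anchor proxy (or a fixed set
    of neighbors) keeps the per-handover cost independent of network size."""
    return 2 * num_neighbors

for n in (4, 16, 64):
    print(f"{n} ARs: MLMA {mlma_signaling(n)} msgs, "
          f"neighbor-scoped {neighbor_scoped_signaling()} msgs")
```

The point is not the exact constants but the scaling: MLMA's update cost grows linearly with the number of ARs, while a neighbor-scoped update stays constant.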
We have presented in [3] a preliminary mobility management design of MHP. It provides seamless, secure handovers for both HIP-enabled and non-HIP-enabled mobile hosts without unnecessary signaling overheads to the hosts. However, it is only a centralized approach.
The papers [11, 12], and [13] have developed network-based distributed mobility solutions using an ID/locator separation architecture. However, [11] and [12] are only concerned with network mobility, whereas [13] leverages neither network-based mobility support for hosts nor an HIP proxy.
Although the PMIPv6 and MHP achieve a good handover performance in the hierarchical network architecture, there is a need for mobility solutions to respond to the challenges of evolving the network architecture from being hierarchical to being flat.
The preliminary design of the MHP we reported in [3] combines a mobility function with the HIP proxy function. Yet its mobility function does not serve as a mobility anchor but is equivalent to that of a mobile access gateway in PMIPv6. It relies on another centralized entity to serve as a mobility anchor to support mobility. The MHP is therefore a centralized mobility management protocol with the drawbacks described in [1].
A distributed mobility management function is more general and more capable of serving the future mobile internet, which continues to flatten. Compared to our preliminary design, we have enriched the MHP function to act as a mobility anchor, which is collocated at the access router. Distributed mobility management is then supported by these mobility anchors. There is no longer a need to rely on a separate centralized entity, as in [3]; it is now removed. The elimination of a centralized anchor also simplifies signaling.
Network-based distributed mobility management
This section introduces a network-based distributed mobility management solution that distributes the MHPs to the access networks to provide HIP and mobility support to all IP hosts. These mobility management functions of the MHP do not rely on any centralized mobility function such as the local mobility anchor in [2, 4] or the local rendezvous server in [3]. The MHPs are collocated at the access routers, taking advantage of the HIP proxy capability. They enable handover of mobile hosts in the flat network architecture with good performance. They enable an MH, whether or not HIP enabled, to use the same IP address as it changes its points of attachment within the flat network architecture. To support the distributed solution, MHPs provide the following functions, sketched in code after this paragraph: (1) each proxy serves as a local mobility anchor for all connections established through it; (2) each proxy updates its neighbor proxies about the MHs that established their connections through it, so the new proxy can determine the previous proxy from a distributed database; (3) a new proxy serves as a mobile gateway and establishes a channel between the previous and new proxies; and (4) a new proxy sends traffic for connections established through it directly to the CH and sends traffic for connections established through other proxies via the previous proxies.
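The following minimal sketch illustrates these four proxy functions. All names (LocalAnchor, register, learn, route) and data structures are illustrative assumptions, not identifiers from an actual DMHP implementation.

```python
class LocalAnchor:
    """Sketch of an MHP acting as a distributed mobility anchor."""

    def __init__(self, name, neighbors):
        self.name = name
        self.neighbors = neighbors   # neighboring proxies (function 2)
        self.bindings = {}           # HIT -> record (function 1)

    def register(self, hit, mh_ip):
        # Function 1: anchor all connections established through this proxy.
        self.bindings[hit] = {"ip": mh_ip, "anchor": self.name}
        # Function 2: keep the neighbors' distributed database fresh.
        for n in self.neighbors:
            n.learn(hit, self.name)

    def learn(self, hit, anchor_name):
        # Remember which proxy anchors a given MH.
        self.bindings.setdefault(hit, {})["anchor"] = anchor_name

    def route(self, hit, packet):
        # Functions 3 and 4: send directly if anchored here,
        # otherwise tunnel via the previous (anchoring) proxy.
        rec = self.bindings.get(hit)
        if rec is None:
            return "drop"
        if rec.get("anchor") == self.name:
            return f"deliver {packet} directly to CH"
        return f"tunnel {packet} via {rec['anchor']}"
```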
Mobility management architecture
The architecture for network-based distributed mobility management with a mobile HIP proxy is shown in Fig. 2.
Design of network-based distributed mobility management and HIP proxy
The rendezvous server (RVS) [14] together with the DNS enables reachability of an HIP host by maintaining a mapping between the host identity, called HIT, and the IP address of the MH. This design, called distributed mobile HIP proxy, adds a set of collocated mobility and HIP proxy functions at the access router. Like the MHP [3] for the hierarchical network architecture, the mobile HIP proxy performs HIP signaling on behalf of a non-HIP MH so that HIP services can be offered to non-HIP-enabled hosts. It also tracks the movement of the MH and updates the MH binding record if the MH moves away from the network, even while a session is active. The binding information, which is shown in a table in Fig. 2, is managed in the hierarchy DNS-RVS-proxy to enable reachability of an MH that is registered with the mobility-enabled HIP proxy.
Registration and reachability
Before using an HIP service, an HIP host needs to register with the service using the registration mechanism defined in [15]. The registration of an MH, which may either be HIP enabled or not, is illustrated in Fig. 3. This figure illustrates an example flow diagram of DMHP operations for the attachment of an HIP enabled MH and a non-HIP enabled MH.
Attachment detection for an HIP and a non-HIP MH
Upon detection of an MH attachment, the MHP checks whether the MH is HIP enabled or not. If not, the MHP assigns a HIT and returns it to the MH. The MHP uses the HIT, either from the HIP MH or the one assigned to the non-HIP MH, to check whether the MH is registered. If it is not registered, the MHP sends an update message to the RVS, which is the intermediate server for location information between the MHP entities and the DNS servers.
After registration, the mobile HIP proxy contains the binding of the HIT of the MH, HIT (MH), to the IP address of the MH, IP (MH). The RVS contains the binding of the HIT of the MH, HIT (MH), to the IP address of the proxy, IP (proxy). The DNS contains the binding of the HIT of the MH, HIT (MH), to the IP address of the RVS, IP (RVS).
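A toy sketch of this three-level binding hierarchy follows; the dictionaries and the resolve() helper are assumptions for illustration, not part of any HIP library.

```python
# Each level stores one binding, as described above.
dns   = {"HIT(MH)": "IP(RVS)"}    # DNS:   HIT (MH) -> IP (RVS)
rvs   = {"HIT(MH)": "IP(proxy)"}  # RVS:   HIT (MH) -> IP (proxy)
proxy = {"HIT(MH)": "IP(MH)"}     # proxy: HIT (MH) -> IP (MH)

def resolve(hit):
    """Follow the hierarchy DNS -> RVS -> proxy to reach an MH by its HIT."""
    rvs_ip = dns[hit]        # 1. DNS returns the responsible RVS
    proxy_ip = rvs[hit]      # 2. RVS returns the registered proxy
    mh_ip = proxy[hit]       # 3. proxy returns the MH's current address
    return rvs_ip, proxy_ip, mh_ip

print(resolve("HIT(MH)"))    # ('IP(RVS)', 'IP(proxy)', 'IP(MH)')
```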
If the MH has more than one interface, it has to register the physical address of each of its interfaces with its host identity, HIT (MH). For example, if the MH has two interfaces with physical addresses PHYaddr1 and PHYaddr2, then the MH registers its HIT (MH) with both physical addresses, PHYaddr1(MH) and PHYaddr2(MH). This information is registered at the MHP through which the MH is attached during the registration process. It is also accessible from other MHPs to which the MH may move or connect. Therefore, the MH uses its identity, HIT (MH), to find its record and can thus preserve its ongoing communication sessions, even those established via another interface, thereby supporting multi-homing.
Establishing communication sessions
This distributed mobility management design enables data traffic between either an HIP-enabled or a non-HIP-enabled MH and a CH. A Security Association (SA) is set up prior to the transport of data plane traffic. If the MH is an HIP host, the SA terminates at the MH. If the MH is not an HIP host, the SA terminates at the mobile HIP proxy with which the MH is registered.
As in the MHP, two pairs of initiation-response packets (I1, R1 and I2, R2) are exchanged to prepare for an SA establishment. Figure 4 illustrates an example flow diagram of MHP operations in establishing an HIP base exchange (HIPBE) between an HIP-enabled MH and an HIP-enabled CH. In addition, the figure illustrates an example flow diagram of MHP operations in establishing an HIPBE between a non-HIP-enabled MH and an HIP-enabled CH.
HIP SA establishment detection for an HIP and a non-HIP MH
Upon receiving an I1 packet from the CH, the RVS checks if the destination HIT corresponds to that of a registered MH. If so, the I1 packet is forwarded to the registered IP address of the proxy. Upon receiving an I1 packet from the RVS, the mobile HIP proxy checks the destination HIT in the HIP header. If the destination HIT corresponds to that of a registered HIP-enabled MH, the mobile HIP proxy (proxy1) forwards the I1 packet to the MH. The mobile HIP proxy does not store any binding in the case of the HIP MH. The MH will store the binding HIT (CH):IP (CH), and the MH will send the reply R1.
If the destination HIT corresponds to that of a registered MH which is not HIP enabled, the mobile HIP proxy (proxy2) stores the binding HIT (CH):IP (CH). The mobile HIP proxy (proxy2) will send the reply R1 on behalf of the MH.
After the successful exchange of the two initiation-response packet pairs, an HIP SA is established between the initiator and the responder. For data traffic, the HIP proxy (proxy2) uses the HIP SA and ESP to encapsulate/decapsulate non-HIP MH data packets, whereas the HIP MH uses its HIP SA and ESP to process its data. Figure 5 shows how the HIP SA is used based on the traffic type, HIP or IP traffic. In addition, it illustrates an example flow diagram of MHP operations as an MHP receives a packet for and from an MH.
HIP SA for data processing, encapsulation, and decapsulation
When the MHP receives HIP packets destined for one of its MHs, it first checks whether the packets are sent to an HIP or a non-HIP MH. When the MHP receives packets from a non-HIP MH, it first determines whether the packets need HIP services. To achieve this, there are two solutions, sketched in code below: (1) enable the network layer of the MHP to pass the received packets to the HIP layer, which identifies the IP flow to which the received packets belong and accordingly offers HIP services if needed; and (2) add a flag, for example an HIP flag, to the packets of a flow that requires HIP services. The MHP then offers the HIP services if the HIP flag is set to 1.
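The branching described above can be summarized in the following sketch; the packet fields (dst_hit, hip_flag) and the returned action strings are illustrative assumptions.

```python
def handle_packet_to_mh(packet, hip_enabled_hits):
    """Packets destined for an MH: branch on whether the MH is HIP enabled."""
    if packet["dst_hit"] in hip_enabled_hits:
        return "forward HIP packet unchanged; the MH processes its own SA"
    return "terminate SA here: decapsulate ESP on behalf of the non-HIP MH"

def handle_packet_from_non_hip_mh(packet):
    """Solution (2) above: offer HIP services only if the HIP flag is set."""
    if packet.get("hip_flag") == 1:
        return "encapsulate with the established HIP SA and ESP"
    return "forward as plain IP traffic"

# Example use with made-up packets:
print(handle_packet_to_mh({"dst_hit": "HIT(MH1)"}, {"HIT(MH1)"}))
print(handle_packet_from_non_hip_mh({"hip_flag": 1}))
```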
The re-use of the established HIP SA allows the MH to avoid some delay and signaling and thus enables seamless IP handover in a secure way. In addition, the proposed DMHP provides another way to obtain at least some of the necessary security information from a local server while the full authentication is being performed at the original servers, as explained in the HIP RFCs. In this case, with the server acting, for example, as responder in a remote network location, the average inter-domain end-to-end delay is about 110 ms [16], which can lead to a long handover delay.
Figure 6 shows the handover procedure of an MH, either HIP enabled or not, between two wireless access networks belonging to the domain managed by the same GW. The MH is communicating with an HIP-enabled CH (not included in the figure) which lies in a different domain.
Packet flow after MH handover within the IPn domain
The MH may change its point of attachment (PoA) and attach to another mobile HIP proxy (proxy2) under the same GW. During this attachment, the MH presents its HIT and previous IP address to proxy2. Proxy2 then determines the previous proxy, proxy1, from the network prefix of the MH's previous IP and then acts as the HIP proxy and updates the binding record of the MH at proxy1. Communicating with proxy1 allows proxy2 to securely know the context of the established HIP SA.
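The prefix lookup performed by proxy2 can be sketched with Python's standard ipaddress module; the prefix-to-proxy table and the addresses are illustrative assumptions.

```python
import ipaddress

# Assumed mapping from the network prefix advertised by each proxy
# to the proxy's name (one subnet per access router).
prefix_to_proxy = {
    ipaddress.ip_network("2001:db8:1::/48"): "proxy1",
    ipaddress.ip_network("2001:db8:2::/48"): "proxy2",
}

def previous_proxy(previous_mh_ip):
    """Determine the previous proxy from the MH's previous IP address."""
    addr = ipaddress.ip_address(previous_mh_ip)
    for net, proxy in prefix_to_proxy.items():
        if addr in net:
            return proxy
    return None

print(previous_proxy("2001:db8:1::42"))  # -> proxy1
```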
Note that in a secure private network, for a non-HIP MH, HIP communications can be terminated at proxy1 and then exchanged with the MH as IP communications via proxy2. That is, proxy1 performs the HIP proxy functions while proxy2 provides mobility support. The advantages of this approach are the following: (1) a non-HIP MH can move to any mobility-enabled access router and still preserve its active sessions with HIP CHs and (2) it allows load balancing; for example, if a proxy is heavily loaded, it can assign some of the load to other HIP proxies. However, this approach can result in inefficient routing if the distance between proxy1 and proxy2 is large while the distance between the GW and proxy2 is small. In the DMHP, all HIP communications are handled in the new proxy, proxy2. Furthermore, the DMHP can ensure efficient routing and reduces vulnerability between the MH and the proxy.
When the MH performs a handover from the network through which it established its active session, proxy2 detects the attachment of the MH and sends an UPDATE packet (packet1) to proxy1. When proxy2 receives the reply UPDATE packet (packet2) from proxy1, it sends a router advertisement (RA) to the MH. The RA has the same network prefix that the MH used to configure its IP address in the proxy1 subnet. The MH therefore retains the same IP address configuration, so that duplicate address detection (DAD) is not needed. This procedure significantly reduces handover latency, signaling overhead, and packet loss.
Figure 7 shows exchanged messages between entities in a wireless communications system as a non-HIP-enabled MH performs a handover from one access network to another, through which the active session is established.
Handover procedure of a MH using DMHP
When the MH returns to the proxy through which the active session was established, the proxy checks its binding cache to identify the MH and where its active sessions were established. If the sessions were established via this proxy, it updates the record of the MH and starts serving it instead of forwarding to another proxy. It is important to note that the proxy does not send any handover-related signaling, and thus the location update delay is eliminated. Furthermore, there is no need to update the MH record at the RVS, since the MH is still reachable via the proxy registered at the RVS. Unlike [8–10], the DMHP does not incur additional handover delay due to verification and validation of handed-over MHs. Also unlike [12], the DMHP does not incur additional handover delay due to configuration of a new IP address in the same domain and the resulting DAD delay.
So far, we have discussed the handover of a single MH. Suppose, however, that two or more MHs need to hand over to the same new point of attachment, N-PoA. An example scenario is a train carrying many passengers moving from one network to another. If two or more mobile hosts have moved at the same time to the N-PoA, how does the N-PoA handle these MHs, and in which order?
The movement of many mobile hosts at the same time to the same new point of attachment, N-PoA, can affect the handover latency, packet loss, and handover-related messages. Either the newly attached MHs detach from different PoAs (i.e., some MHs come from different PoAs) or all MHs detach from the same PoA. The former case is referred to as case 1 and the latter as case 2. Such concurrent handover of MHs may result in long handover latency and packet loss as well as more handover messages. In this paper, we discuss methods to ensure the efficient management of many MHs that move at the same time to the same new PoA, so that handover performance is maintained. To the best of our knowledge, none of the existing host mobility solutions has addressed this issue.
In Fig. 8, we show case 1 with two MHs, MH1 and MH2, coming from different points of attachment, PoA1 and PoA2, and attaching to the same new point of attachment, N-PoA.
Many MHs attach at the same time to the same N-PoA but coming from different PoAs using DMHP
Furthermore, in Fig. 9, we show case 2 with three MHs, MH1, MH2, and MH3, coming from the same points of attachment, PoA1, and attaching to the same new point of attachment, N-PoA.
Many MHs attach at the same time to the same new PoA using DMHP
In case 1, if many MHs coming from different PoAs have moved to the same N-PoA, the N-PoA first classifies the MHs into different groups based on their old PoAs. The N-PoA sends only one update packet, which we call a group UPDATE packet and denote by GUPDATE packet, for each group and not for each MH. An example of this scenario is shown in Fig. 10. As depicted in the figure, the N-PoA classifies the nine attached MHs into three groups because they come from three different PoAs, PoA1, PoA2, and PoA3. Let us name these groups group1, group2, and group3. Group1 includes two MHs (MH1 and MH2), group2 includes three MHs (MH3, MH4, and MH5), and group3 includes four MHs (MH6, MH7, MH8, and MH9). It is important to note that the number of groups equals the number of MHs if each MH comes from a different PoA, which is the worst case.
Handover procedure of many MHs at the same time from different PoAs to the same new PoA using DMHP
In case 2, if many MHs have moved at approximately the same time to the same N-PoA, the N-PoA builds an aggregated mobility packet, which we denote by AgUPDATE pkt1, and sends it to the old PoA from which the MHs detached. The aggregated UPDATE packet includes the identifiers of all MHs attached to the N-PoA. Sending only one packet (an aggregated UPDATE packet) reduces the signaling overhead and location update latency as well as the packet loss, because a single update packet updates the locations of many MHs instead of a separate update packet for each MH.
Consider the movement of n MHs {MH0, MH1, …, MHn−1} in case 1. In the example of Fig. 10 with n = 9, the N-PoA sends three GUPDATE packets instead of nine UPDATE packets. The first (including the identifiers of MH1 and MH2) is sent to PoA1. The second (including the identifiers of MH3, MH4, and MH5) is sent to PoA2. The third (including the identifiers of MH6, MH7, MH8, and MH9) is sent to PoA3. On reception of each of these UPDATE packets, an acknowledgment packet, which we call GUPDATE packet2, is sent to the N-PoA; specifically, one acknowledgment packet is sent from each of the PoAs, PoA1, PoA2, and PoA3.
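Both cases can be expressed as a small grouping computation; the MH and PoA names below match the example of Fig. 10, while the function name is an illustrative assumption.

```python
from collections import defaultdict

def gupdate_groups(mh_to_old_poa):
    """Case 1: group arriving MHs by old PoA; one GUPDATE per group."""
    groups = defaultdict(list)
    for mh, old_poa in mh_to_old_poa.items():
        groups[old_poa].append(mh)
    return dict(groups)

arrivals = {"MH1": "PoA1", "MH2": "PoA1",
            "MH3": "PoA2", "MH4": "PoA2", "MH5": "PoA2",
            "MH6": "PoA3", "MH7": "PoA3", "MH8": "PoA3", "MH9": "PoA3"}

groups = gupdate_groups(arrivals)
print(len(groups))   # 3 GUPDATE packets instead of 9 individual UPDATEs
# Case 2 is the degenerate case with one group: a single aggregated
# AgUPDATE packet carries the identifiers of all arriving MHs.
```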
When many mobile hosts hand over to the same N-PoA (case 2), the N-PoA must respond to and serve them as quickly as possible. It is inefficient for the N-PoA to send a separate update packet for each MH. The principal reasons are as follows: (1) numerous MHs can come from the same old PoA, as in the train scenario, so it makes sense to send only one update packet for the MHs' location update; and (2) a mechanism is needed to manage the sharing of bandwidth and other resources by multiple MHs handing over at the same time and detaching from different old PoAs.
The N-PoA includes multiple MH identifiers in a single update packet. We term this method multi-update. It can be more efficient than one update packet per MH because a multi-update exchange is faster than multiple separate update packet exchanges. In addition, one multi-update uses significantly fewer signals than multiple update packets.
Simulation and results
Simulation setup
The OMNeT++ v.4 [17], which is an open source network simulator, is used to model the functionality of DMHP.
The simulation environment under which we examined the DMHP consists of two IEEE 802.11b subnetworks with MHPs collocated within the access routers. The two subnetworks partially overlap. A fixed HIP CH (i.e., hipsrv) is placed outside the access network of the MH and runs a UDP application transmitting a data stream at 15 kbps with a packet size of 256 bytes to the MH. These application settings were chosen to represent the configuration of a voice-over-IP application. Although TCP applications are popular, we only evaluate the performance of DMHP for UDP applications because these applications are delay sensitive. The simulation runs for 25,000 s while the MH speed is fixed at 1 m/s as it moves from subnet 1, which is managed by MHP1, to subnet 2, which is managed by MHP2, and vice versa. The simulation parameters of this scenario are described in Table 1.
Table 1 Simulation parameters under which MHP and DMHP are examined
This section presents and analyzes the handover performance results obtained from the MHP and DMHP. The handover delays, packet loss, and signaling overheads are investigated. Also investigated are other factors that affect MH handover performance such as the number of MHs simultaneously performing handover while communicating with different CHs. In addition, end-to-end delays before and after the MH handover are investigated.
Using the abovementioned simulation setup, we examined the DMHP model and recorded and analyzed a hundred handovers. The fluctuation in the handover latency (HOL) of the DMHP and MHP over the first 23 handover (HO) instances is depicted in Fig. 11.
The first 23 handovers for DMHP and MHP
It is important to note that the MHP experiments were conducted in hierarchical networks, whereas the DMHP experiments were conducted in flat networks. It is observed that the DMHP exhibits varying handover latencies, between 0.6 and 1.8 s for the handover from a visited network to the home network and from the home to a visited network, respectively. This is because the DMHP communicates with the PoA of the session, that is, the PoA in the home position when the MH moves from the home to a visited network, to redirect the data traffic via the new PoA, that is, the PoA in the visited network. It is also evident from the measurements presented in the figure that, in the IP handover towards the PoA of the session, the handover delay due to location updates is completely eliminated. This is because in the DMHP, when the MH returns to the PoA of the session, the MHP stops forwarding the data traffic and thus serves as an authoritative MHP. These services are provided for both HIP and non-HIP MHs.
With respect to the handover of many MHs at the same time, the handover latency depends on the number of MHs and number of old PoAs for those MHs. These different situations are explained above and referred to as case 1 (MHs come from different old PoAs) and case 2 (all MHs come from the same old PoA). In case 1, time needed to complete the handover of many MHs can be described by the following equations.
$$ L_{\mathrm{ly3HO}} = L_{\mathrm{loc\_update}}(G_n) + L_{\mathrm{IP\ addr.\ config.}} $$
$$ L_{\mathrm{loc\_update}}(G_n) = L_{\mathrm{GUPDATE\ pkt1}} + L_{\mathrm{GUPDATE\ pkt2}} $$
where $L_{\mathrm{ly3HO}}$ denotes the total latency of the MH's layer-3 handover, $G_n$ the group number (which determines the old PoA of each MH), $L_{\mathrm{loc\_update}}$ the latency of the location update, $L_{\mathrm{IP\ addr.\ config.}}$ the latency of the IP address configuration, $L_{\mathrm{GUPDATE\ pkt1}}$ the latency of the first update packet sent from the new PoA to an old PoA from which one or more MHs detached, and $L_{\mathrm{GUPDATE\ pkt2}}$ the latency of the reply update packet from each old PoA to the new PoA. For simplicity, let us assume that the latencies needed to update the old PoAs are equal. Under this assumption, all the old PoAs of the MHs are updated at approximately the same time. Therefore, the location update latency $L_{\mathrm{loc\_update}}(G_n)$ of DMHP is similar to the location update latency required to manage the handover of only one MH.
In case 2, the time needed to complete the location update of many MHs, coming from the same old PoA and moving to the same N-PoA, using DMHP is similar to the location update latency required to manage handover of only one MH.
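A numerical reading of the equations above, under the stated assumption that all old PoAs are updated in parallel with equal latencies, can look as follows; the millisecond values are placeholders, not measurements from this paper.

```python
L_GUPDATE_PKT1 = 5.0    # ms, new PoA -> old PoA (assumed value)
L_GUPDATE_PKT2 = 5.0    # ms, old PoA -> new PoA reply (assumed value)
L_IP_ADDR_CONFIG = 0.0  # ms, the MH keeps its IP address, so no DAD delay

def layer3_handover_latency():
    # With parallel GUPDATE exchanges per group, the group count drops
    # out: the location update costs the same as for a single MH,
    # which holds for both case 1 and case 2.
    L_loc_update = L_GUPDATE_PKT1 + L_GUPDATE_PKT2
    return L_loc_update + L_IP_ADDR_CONFIG

print(layer3_handover_latency())  # 10.0 ms under these assumptions
```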
Figure 12 illustrates the relationship between the delay caused by the security process with a third party, for example an Authentication, Authorization, and Accounting (AAA) server, and the handover delay of the DMHP and MHP. Every point on the graph represents an average of the MH handovers (layer-2 and layer-3) measured while the MH was moving at a speed of 1 m/s. Like the MHP, the DMHP is not affected by third-party security delay, since the security checks are not performed at the third party, thus avoiding additional delay. The main advantage is that the DMHP achieves this in flat networks, while the MHP achieves it in hierarchical networks.
Impact of AAA server delay on Handover delay of the DMHP and MHP
Figure 13 shows how different MH speeds affect the handover delay of the DMHP. Each point in the graph represents the average of all MH handovers, both from the home network to a visited network (where the MH moves away from the PoA of the active sessions) and vice versa, made within 2,000 s for each MH speed. For example, the number of HOs the MH performs at a speed of 5 m/s is five times the number of handovers it performs at a speed of 1 m/s. Here, we considered the average of all MH handovers for each speed. The measurements at different MH speeds are interesting in that the HO delay at an MH speed of 15 m/s is lower than at 10 m/s. The figure depicts the impact of the different MH speeds on the location update delay (layer-3 handover) for the DMHP when the MH moves from the home to a visited network, i.e., away from the PoA of the active sessions. As shown in the figure, the impact of the MH speed on the handover delay for the DMHP when the MH moves from a visited network to the home network, i.e., towards the PoA of the active sessions, is negligible. This is because when the MH is detected at the PoA of the active sessions, that PoA simply stops forwarding the traffic of the MH via another PoA. In other words, when the MH moves to the PoA of the active sessions, the DMHP is less affected.
Impact of MH speeds on the handover delay for the DMHP
Figure 14 depicts the packet loss of the DMHP and MHP. We measured the loss of data packets of a UDP application in the unidirectional traffic going from the CH to the MH during IP handover. It is important to note that in these measurements, no buffering or forwarding technique is used to mitigate the packet loss. Like the handover delay, the packet loss in DMHP is small when the MH moves towards the PoA of the session and high when the MH moves away from it. To reduce the packet loss of the DMHP to the level of the MHP or below, a buffer at the previous point of attachment can be used; note that the DMHP offers host mobility support in a flat architecture, in which the MHP cannot be used.
The first 70 packet loss for DMHP and MHP
Handover-related messages in the DMHP and MHP are portrayed in Fig. 15. In the DMHP, the MH performed 70 handovers during the 25,000 s simulation time. Thus, Fig. 15 depicts the handover-related messages for the MHP and DMHP over the first 70 handovers. It is evident from the figure that the DMHP outperforms the MHP in handover-related signaling, since the DMHP does not use any handover-related messages when the MH moves to the PoA of the session. It is important to note that the case where the MH performs a handover during active sessions established through different PoAs is not included in this figure.
Handover-related messages of the MHP and DMHP
Furthermore, the signaling overheads of PMIPv6-based distributed mobility management solutions [8, 9], HIPPMIP [2], MHP [3], MLMA [10], and DMHP are described in Table 2. The first row in Table 2 indicates the number of binding update messages when the MH has ongoing communication sessions with one CH. For DMHP, the number of binding update messages when the MH has ongoing sessions with n CHs is the same as when the MH has a session with one CH. Thus, the mobility-related signaling overhead of the DMHP is not affected by an increasing number of CHs with which the MH has ongoing communication sessions. This is because the DMHP updates only the PoA through which the active sessions are established and not the CHs. Furthermore, unlike the distributed mobility solutions in [8–10], the DMHP does not require consultation with any third party on security aspects, as it has self-certifying capabilities at the HIP layer. Moreover, DMHP avoids all signals related to DAD as well as signaling overhead on the HIP MH interface. In MLMA [10], the number of required mobility signals per MH handover is 2·r, where r is the number of ARs plus one GW in the network in which MLMA is used as the host mobility solution.
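The per-handover location-update signaling of MLMA and DMHP can be compared with a short calculation; the AR counts below are example values.

```python
def mlma_signals(num_ars):
    """MLMA: one PBU/PBA pair per AR plus the gateway, i.e., 2*r messages."""
    r = num_ars + 1
    return 2 * r

def dmhp_signals(num_chs=1):
    """DMHP: one UPDATE and one reply with the previous proxy,
    independent of the number of CHs with ongoing sessions."""
    return 2

for ars in (4, 16, 64):
    print(ars, mlma_signals(ars), dmhp_signals())
# e.g., with 64 ARs: 130 messages for MLMA vs. 2 for DMHP
```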
Table 2 Signaling overheads of PMIPv6-based distributed mobility, MHP, MLMA, and DMHP
With respect to the handover of many MHs at the same time, the signaling overhead is also affected by the number of MHs, the number of old PoAs of the MHs, and the mechanism employed to manage the handover of many MHs. The signaling overhead of DMHP for case 1 (MHs come from different old PoAs) is shown in Fig. 16, while the signaling overhead of DMHP for case 2 (all MHs come from the same old PoA) is shown in Fig. 17.
Signaling overhead for many MHs attach at the same time to the same N-PoA but coming from different PoAs using DMHP
Signaling overhead of DMHP for managing handover of many MHs at the same time to the same N-PoA but all coming from the same PoA using DMHP
Figure 16 shows the relationship between the number of mobile hosts (MHs) and the number of update packets needed for handover management of many MHs handing over at the same time. As depicted in the figure, the number of update messages increases with the number of old PoAs from which the MHs come, but is not affected by the number of MHs coming from each old PoA. This is because the DMHP classifies MHs into different groups based on the old PoA of each MH. Thus, the N-PoA exchanges only two update packets with the appropriate old PoA for each group, irrespective of the number of MHs inside each group. For example, if all MHs come from two PoAs, then only four update messages (packets) are needed, two packets per group.
As shown in Fig. 17, only two update packets are required for handover management of many MHs coming from the same PoA and attaching to the same N-PoA.
Figure 18 shows the number of update messages (packets) needed by the DMHP for handover management of many MHs attaching at the same time to the same N-PoA, but each coming from a different PoA. This is the worst case, where the number of groups equals the number of MHs.
Signaling overhead of DMHP for handover management of many MHs attach at the same time to the same N-PoA but each coming from different PoA
The DMHP distributes the proxy functions introduced by the MHP and equips them with additional functions to produce a powerful mobility management solution suitable for a flat network architecture. Thus, the DMHP reduces the air signaling overhead, maintains a stable MH locator even when the MH changes MHPs, and avoids unnecessary signaling overhead over the core network through which established sessions are communicated. Furthermore, the DMHP makes IP handover in a flat architecture transparent to the upper-layer protocols and thus securely preserves the active sessions. Consequently, IP handover with good performance is achieved in flat networks without relying on any centralized mobility entity. The network-based aspect of the DMHP locally manages handover-related packets and packet routing before and after the handover, thus ensuring efficient routing. The HIP aspect, on the other hand, mainly provides its security capabilities and multi-homing, ensured by the HIP secure and permanent host identifier.
In DMHP, distributed entities that provide both mobility management and HIP features to all IP hosts are introduced to achieve MH IP handover with good performance in the flat network architecture. This distributed mobility solution provides a framework for the flat network architecture that supports seamless vertical handover in a secure manner. The DMHP utilizes the benefits of the MHP to achieve its goal. Furthermore, DMHP employs efficient mechanisms to manage the handover of many MHs to the same N-PoA, whether the MHs come from different old PoAs or from the same old PoA. The performance evaluation of the DMHP in comparison to the MHP demonstrates that it does indeed perform well in flat networks, with handover performance similar to that achieved by optimized mobility solutions developed for hierarchical networks.
HA Chan et al., Requirements of Distributed Mobility Management, 2014. Internet Engineering Task Force (IETF) RFC 7333
M Muslam, HA Chan, N Ventura, LA Magagula, Hybrid HIP and PMIPv6 (HIPPMIP) Mobility Management for Handover Performance Optimization (in 6th International Conf. of Wireless and Mobile Communications (ICWMC), Valencia, 2010), pp. 232–237
MM Muslam, HA Chan, LA Magagula, N Ventura, Network-Based Mobility and Host Identity Protocol (IEEE Wireless Communications and Networking Conference (WCNC), Paris, 2012), pp. 2395–2400
S Gundavelli, K Leung, V Devarapalli, K Chowdhury, B Patil, Proxy Mobile IPv6, 2008. IETF RFC 5213
Y Liu, Y Han, Z Yang, H Wu, Efficient data query in intermittently-connected mobile ad hoc social networks. IEEE Trans. Parallel Distrib. Syst. 26(5), 1301–1312 (2015)
L Yang, AMA Elman Bashar, L Fan, W Yu, L Kun, Multi-Copy Data Dissemination with Probabilistic Delay Constraint in Mobile Opportunistic Device-to-Device Networks (IEEE 17th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2016), pp. 1–9
C Perkins, D Johnson, J Arkko, Mobility Support in IPv6, 2011. IETF RFC 6275
F Giust, CJ Bernardos, A De La Oliva, Analytic evaluation and experimental validation of a network-based IPv6 distributed mobility management solution. IEEE Trans. Mob. Comput. 13(11), 2484–2497 (2014)
K Xie, J Lin, L Wu, Design and Implementation of Flow Mobility Based on D-PMIPv6 (IEEE 17th International Conference on Computational Science and Engineering (CSE), Chengdu, 2014), pp. 1344–1349
T Condeixa, S Sargento, Centralized, distributed or replicated IP mobility? IEEE Commun. Lett. 18(2), 376–379 (2014)
Y Kim, H Ko, S Pack, Network Mobility Support in Distributed ID/Locator Separation Architectures, 2014, pp. 521–522. IEEE 11th Consumer Communications and Networking Conference (CCNC)
VP Kafle, Y Fukushima, H Harai, New Mobility Paradigm with ID/Locator Split in the Future Internet, 2014, pp. 163–169. IEEE 11th Consumer Communications and Networking Conference (CCNC)
S-I Choi, S-J Koh, Distributed Mobility Control Schemes in the HIP-Based Mobile Networks, 2014, pp. 269–275. 16th International Conference on Advanced Communication Technology (ICACT)
J Laganier, L Eggert, Host Identity Protocol (HIP) Rendezvous Extension, 2008. RFC 5204
J Laganier, T Koponen, L Eggert, Host Identity Protocol (HIP) Registration Extension, 2008. RFC 5203
C Bovy, H Mertodimedjo, G Hooghiemstra, H Uijterwaal, P Van Mieghem, Analysis of End-to-End Delay Measurements in Internet, 2002. in Proc. of the Passive and Active Measurement Workshop-PAM'2002
OMNet++ open source network simulator. Official website: http://www.omnetpp.org. Accessed 30 Sept 2016
This work is supported in part by Telkom, Nokia Siemens Networks, TeleSciences, and National Research Foundation, South Africa, under the Broadband Center of Excellence program.
This work is an extension of the PhD thesis research of MMM, who designed the proposed protocol and performed the simulations and analyses. HAC and NV participated in technical discussions to provide guidance on the thesis work and its extensions. All authors read and approved the final manuscript.
Department of Information Technology, Al-Imam Muhammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
Muhana M. Muslam
Huawei Technologies, Plano, Texas, USA
H. Anthony Chan
Department of Electrical Engineering, University of Cape Town, Rondebosch, South Africa
Neco Ventura
Correspondence to Muhana M. Muslam.
Muslam, M., Chan, H.A. & Ventura, N. Distributed mobility management with mobile Host Identity Protocol proxy. J Wireless Com Network 2017, 71 (2017). https://doi.org/10.1186/s13638-017-0853-z
Keywords: Distributed mobility, Mobility proxy
Vector calculus simplification in calculation of generalized force
Consider a system of $N$ particles subject to forces $\vec F_i\ (i=1\dots N)$ that derive from a potential $V$. My lecture notes propose a simple proof that
$$Q_j = -\frac{\partial V}{\partial q_j}$$
where the generalized forces are defined as $Q_j = \sum_i \vec F_i\cdot\frac{\partial\vec r_i}{\partial q_j}$. It goes like this:
$$ Q_j = \sum_i \vec F_i\cdot\frac{\partial\vec r_i}{\partial q_j} = -\sum_i\vec\nabla_i V\cdot\frac{\partial\vec r_i}{\partial q_j} = -\frac{\partial V}{\partial q_j} $$
I'm trying to understand the last step in detail, but I get a wrong answer by a factor $N$. For example with two particles, and writing $\vec r_i = (x_i,y_i,z_i)$, I have $$ \begin{aligned} \sum_i\vec\nabla_i V\cdot\frac{\partial\vec r_i}{\partial q_j} &= \vec\nabla_1V\cdot\frac{\partial\vec r_1}{\partial q_j} + \vec\nabla_2V\cdot\frac{\partial\vec r_2}{\partial q_j} \\ &= (\tfrac{\partial V}{\partial x_1}, \tfrac{\partial V}{\partial y_1}, \tfrac{\partial V}{\partial z_1}) \cdot(\tfrac{\partial x_1}{\partial q_j}, \tfrac{\partial y_1}{\partial q_j}, \tfrac{\partial z_1}{\partial q_j}) + (\tfrac{\partial V}{\partial x_2}, \tfrac{\partial V}{\partial y_2}, \tfrac{\partial V}{\partial z_2}) \cdot(\tfrac{\partial x_2}{\partial q_j}, \tfrac{\partial y_2}{\partial q_j}, \tfrac{\partial z_2}{\partial q_j}) \\[1ex] &= \frac{\partial V}{\partial q_j} + \frac{\partial V}{\partial q_j} \\[1ex] &= 2\frac{\partial V}{\partial q_j} \end{aligned} $$ What did I do wrong to get this factor 2?
newtonian-mechanics forces potential-energy differentiation vector-fields
alfba
$\begingroup$ There's already a question on the same result (or very close) but a with a different proof: physics.stackexchange.com/q/271213 . But I'm really trying to understand this proof. $\endgroup$ – alfba Apr 16 '19 at 14:17
$\begingroup$ This is just the chain rule of calculus applied to a function of more than one variable. There is no factor 6. $\endgroup$ – user197851 Apr 16 '19 at 14:40
$\begingroup$ Thanks, that corrects a factor 3 (I edited the question accordingly). But I'm still wrong by a factor $N$ (factor 2 in the example). $\endgroup$ – alfba Apr 16 '19 at 15:03
$\begingroup$ (As Pedro Fernando made clear, I was confused about the $V$ function, it's "just the chain rule" as you say :-). ) $\endgroup$ – alfba Apr 16 '19 at 15:21
The problem is that you are making incorrect use of the chain rule for derivatives. Since $V$ is a function of all $3N$ coordinates, each term $\vec\nabla_i V\cdot\frac{\partial\vec r_i}{\partial q_j}$ is only particle $i$'s contribution to $\frac{\partial V}{\partial q_j}$, not the full derivative. The chain rule for the last step reads
$$\frac{\partial V}{\partial q_{j}}=\sum_{i}\frac{\partial V}{\partial x_{i}}\frac{\partial x_{i}}{\partial q_{j}}=\frac{\partial V}{\partial x_{1}}\frac{\partial x_{1}}{\partial q_{j}}+\frac{\partial V}{\partial y_{1}}\frac{\partial y_{1}}{\partial q_{j}}+\frac{\partial V}{\partial z_{1}}\frac{\partial z_{1}}{\partial q_{j}}+\frac{\partial V}{\partial x_{2}}\frac{\partial x_{2}}{\partial q_{j}}+\frac{\partial V}{\partial y_{2}}\frac{\partial y_{2}}{\partial q_{j}}+\frac{\partial V}{\partial z_{2}}\frac{\partial z_{2}}{\partial q_{j}}$$

(here the sum over $i$ runs over all $3N$ coordinates, so each coordinate contributes exactly once and no factor $N$ appears)
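For anyone who wants to double-check this, here is a quick symbolic verification with SymPy for a made-up two-particle example in one dimension ($x_1 = q$, $x_2 = 2q$, arbitrary $V(x_1, x_2)$); the explicit sum over all coordinates agrees with $\partial V/\partial q$ with no extra factor of $N$:

```python
import sympy as sp

q = sp.symbols('q')
x1, x2 = sp.symbols('x1 x2')
V = x1**2 + x1*x2 + x2**2            # any V(x1, x2)

r1, r2 = q, 2*q                      # hypothetical x_i(q)
V_of_q = V.subs({x1: r1, x2: r2})

lhs = sp.diff(V_of_q, q)             # dV/dq computed directly
rhs = (sp.diff(V, x1).subs({x1: r1, x2: r2}) * sp.diff(r1, q)
       + sp.diff(V, x2).subs({x1: r1, x2: r2}) * sp.diff(r2, q))
print(sp.simplify(lhs - rhs))        # 0: each coordinate contributes once
```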
Pedro Fernando
$\begingroup$ Thanks, I didn't think clearly about $V$ being a 6-variable function. $\endgroup$ – alfba Apr 16 '19 at 15:19
Friction and contact temperature in dry rolling-sliding contacts with MoS2-bonded and a-C:H:Zr DLC coatings
Stefan Hofmann1,
Mustafa Yilmaz1,
Enzo Maier1,
Thomas Lohner1 &
Karsten Stahl1
Gearboxes are usually lubricated with oil or grease to reduce friction and wear and to dissipate heat. However, gearbox applications that cannot be lubricated with oil or grease, for example in the space or food industry, are commonly lubricated with solid lubricants. Especially solid lubricants with a lamellar sliding mechanism like graphite and molybdenum disulfide (MoS2) or diamond-like carbon (DLC) coatings can enable very low coefficients of friction. This study investigates the friction and temperature behavior of surface coatings in rolling-sliding contacts for application in dry-lubricated gears. In an experimental setup on a twin-disk test rig, case-hardened steel 16MnCr5E (AISI5115) is considered as substrate material together with an amorphous, hydrogenated, and metal-containing a-C:H:Zr DLC coating (ZrCg) and a MoS2-bonded coating (MoS2-BoC). The friction curves show reduced coefficients of friction and a significantly increased operating area for both surface coatings. Due to the sufficient electrical insulation of the MoS2-BoC, the application of thin-film temperature measurement, known from lubricated contacts, was successfully transferred to dry rolling-sliding contacts. The results of the contact temperature measurements reveal pronounced thermal insulation with MoS2-BoC, which can interfere with the sliding mechanism of MoS2 through accelerated oxidation. The study shows that the application of dry-lubricated gears under ambient air conditions is challenging, as the tribological and thermal behavior requires tailored surface coatings.
Solid lubricants have become particularly important since the 1960s due to the space industry, since prevailing vacuum conditions obstruct the use of fluid lubrication. Nowadays, dry lubrication is generally also important in hygienic environments and in engineering designs with extreme lightweight or thermal requirements. It can present an ecological alternative to fluid lubricants in suitable applications. In gearboxes, dry lubrication omits the use of expensive seals and drastically reduces no-load power losses (Höhn et al. 2009).
Solid lubricants with a lamellar sliding mechanism like graphite, molybdenum disulfide (MoS2), and tungsten disulfide (WS2) have been shown to reduce friction and wear. However, the tribological performance strongly depends on the environmental conditions ((Gradt and Schneider 2016) and (Banerjee and Chattopadhyay 2014)) and the provision of the solid lubricants in the tribological system (Birkhofer and Kümmerle 2012). Investigations on the lifetime of dry-lubricated bearings by (Schul 1997) show higher friction and wear of bonded compared to sputtered MoS2 coatings. According to (Dienwiebel et al. 2004) and (Li et al. 2017), solid lubricants with a lamellar sliding mechanism can reach the friction regime of superlubricity. (Hirano and Shinjo 1990) introduced this friction regime, which is attributed to coefficients of friction below μ ≤ 0.01. However, the requirements for achieving superlubricity are numerous, and it can only be observed on the microscale (Berman et al. 2018). On a ball-on-disk tribometer under vacuum conditions, sputtered MoS2 enabled coefficients of friction as low as 0.05 (Gradt and Schneider 2016). Investigations by (Gamyula et al. 1984) revealed coefficients of friction below 0.02 for a MoS2-bonded coating on a ball-on-disk tribometer. However, in contrast to graphite, significant relative humidity (RH) under ambient air conditions leads to a sharp increase in friction and wear for MoS2 (Vazirisereshk et al. 2019). Metallic elements like chromium or titanium can be added to increase wear resistance, whereas there is no remarkable influence on the frictional behavior according to (Banerjee and Chattopadhyay 2014) and (Gradt and Schneider 2016). Moreover, high contact temperatures can interfere with the sliding mechanism due to the oxidation of, e.g., MoS2 to MoO3, which leads to a structural change of the lattice-layer structure (Zhang et al. 2011). Investigations by (Xu et al. 2003) reveal increased wear for a MoS2-bonded coating at temperatures above 100 °C compared to room temperature, which was attributed to oxidation of MoS2.
In addition to bonded coatings and pure solid lubricant layers, diamond-like carbon (DLC) coatings can also reduce friction and provide high wear resistance even at high loads. The mechanical and tribological properties of DLC coatings can be strongly influenced by the ratio of sp2- to sp3-bonded carbon, the hydrogen content and doping elements (Donnet and Erdemir 2008). Especially DLC coatings with high proportions of amorphous sp2 carbon bonds and hydrogen (a-C:H) can result in ultra-low friction under dry lubrication. Investigations in high vacuum on a linear reciprocating pin-on-disk tribometer showed coefficients of friction as small as 0.005 for an a-C:H DLC coating (Fontaine et al. 2005). Also (Erdemir and Eryilmaz 2014) observed coefficients of friction in the range of 0.001 in a dry atmosphere on a ball-on-disk tribometer for a highly hydrogenated DLC coating. Such low coefficients of friction are attributed to superlubricity (Hirano and Shinjo 1990). However, the tribological properties of DLC coatings also depend strongly on environmental conditions, where the humidity of the ambient air plays a major role according to (Weihnacht et al. 2012) and (Schultrich and Weihnacht 2008). (Ronkainen et al. 1998) determined coefficients of friction of 0.15 < μ < 0.22 in ambient air with RH = 50 % on a pin-on-disk tribometer for a-C:H and a-C DLC coatings, which are similar to the results from (Donnet et al. 1994) at RH = 40 %. Investigations from (Yilmaz et al. 2018) on a twin-disk test rig under dry lubrication in ambient air showed coefficients of friction in the range of 0.10 < μ < 0.18 for a tetrahedral ta-C DLC coating.
(Grossl 2007) and (Martens 2008) conducted experimental investigations on a gear test rig under dry lubrication in ambient air and showed improved durability of ta-C DLC-coated gears. Uncoated gears demonstrated significant scuffing failure after a short running time. The durability can be further increased by low-loss gear designs (Hinterstoißer et al. 2019).
The aim of this study is the evaluation of operating limits and the characterization of friction and temperature behavior of two different surface coatings for reliable operation of dry lubricated rolling-sliding contacts. Besides an amorphous, hydrogenated, and metal-containing a-C:H:Zr (ZrCg) DLC coating, a MoS2-bonded coating (MoS2-BoC) based on polyimide is used. As the focus is put on transferability of load and kinematic conditions to gears, a twin-disk test rig is used. Besides friction and bulk temperature, the contact temperature is measured for MoS2-BoC using thin-film sensor technology in order to get insight into the shearing mechanism.
Experimental setup
The following sections describe the considered twin-disk test rig, the test disks for friction and contact temperature measurements, the operating conditions, and the thin-film sensor technology.
FZG twin-disk test rig
Figure 1 shows the mechanical layout of the considered FZG twin-disk test rig, which was designed by (Stößel 1971). The description and formulations are mainly adopted from (Lohner et al. 2015) and (Reitschuster et al. 2020). Both test disks are press-fitted onto shafts that can be driven independently by two three-phase motors. Traction drives mounted between the motors and driving shafts allow a continuous variation of speed. The normal force FN in the disk contact is applied by a pneumatic cylinder via the pivot arm where the lower disk is mounted. The upper disk is mounted in a skid, which is attached to the frame by thin steel sheets. The skid is supported laterally by a load cell to ensure that the friction force FR in the disk contact for sliding velocities vg≠ 0 m/s can be measured as a reaction force with hardly any displacement of the skid. Normal force FN, friction force FR, surface velocities v1 and v2, and bulk temperature of the upper disk ϑM are measured. ϑM is recorded by a Pt100 resistance temperature sensor 5 mm below the surface of the disk. The coefficient of friction is calculated according to Eq. (1).
$$ \mu =\frac{F_R}{F_N} $$
Mechanical layout of the FZG twin-disk test rig and geometry of the test disks (Reitschuster et al. 2020)
The sum velocity v∑ is defined as the sum of the surface velocity of the upper disk v1 and the surface velocity of the lower disk v2, whereas the sliding velocity vg is defined as the difference between the surface velocities v1 and v2:
$$ {v}_{\sum }={v}_1+{v}_2\ \mathrm{with}\ {v}_1>{v}_2 $$
$$ {v}_g={v}_1-{v}_2 $$
The slip ratio s is defined as
$$ s=\frac{v_1-{v}_2}{v_1}\cdot 100\% $$
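A minimal sketch of Eqs. (2)-(4) follows; the velocity values are examples, not measurement data.

```python
def disk_kinematics(v1, v2):
    """Sum velocity, sliding velocity, and slip ratio per Eqs. (2)-(4)."""
    assert v1 > v2, "convention: v1 > v2"
    v_sum = v1 + v2              # Eq. (2)
    v_g = v1 - v2                # Eq. (3)
    s = (v1 - v2) / v1 * 100.0   # Eq. (4), slip ratio in %
    return v_sum, v_g, s

print(disk_kinematics(1.25, 0.75))  # (2.0, 0.5, 40.0): v_sum = 2 m/s, s = 40 %
```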
All experiments are carried out under line contact conditions with cylindrical disks having a diameter of 80 mm and a width of 5 mm according to Fig. 1.
Test disks
All test disks were made of case-hardened steel 16MnCr5E (AISI5115) with a surface hardness of 60 ± 2 HRC and a case hardening depth of CHD550HV1 = 0.9 ± 0.2 mm. The preparation of the running surface was adapted to each coating process. ZrCg-coated disks were peripherally ground and polished to an arithmetic mean roughness value Ra ≤ 0.01 μm to ensure high adhesion of the coating-bonding system. The DLC coating graded zirconium carbide (ZrCg) was deposited by middle-frequency magnetron sputtering (mfMS) at the Surface Engineering Institute (IOT), RWTH Aachen University, Aachen, Germany. An industrial coating unit was used with two zirconium targets with a purity > 99.5 % and argon (Ar) and acetylene (C2H2) as process and reactive gas. To avoid annealing effects of the case-hardened steel, the temperature during the physical vapor deposition (PVD) process was limited to 180 °C. Further process parameters can be found in (Bobzin et al. 2015). The coating consists of a crystalline zirconium interlayer on the steel substrate and a graded zirconium carbide layer with increasing portions of carbon. The maximum carbon ratio is reached at the zirconium- and hydrogen-containing top layer with an amorphous structure a-C:H:Zr. Young's modulus E = 110 ± 12 GPa is lower than that of the steel substrate with E = 210 GPa. The total film thickness was tc = 3.2 μm. The MoS2-BoC for friction measurements was applied by spraying at FUCHS LUBRITECH GmbH, Kaiserslautern, Germany, and consists of solid lubricant particles dispersed in an organic binder matrix. In order to increase adhesion between lacquer and substrate, a surface roughness of Ra ≈ 0.5 μm after hardening was chosen before coating, as recommended by (Gamyula et al. 1984) and (Yukhno et al. 2001) for bonded coatings. For the coating process, the disks were heated up to 100 °C, sprayed manually, and tempered in a convection oven for 1 h at 200 °C. A total coating thickness of tc = 15 ± 2 μm was measured with the magnetic induction test method according to DIN EN ISO 2178. For contact temperature measurements, a MoS2-BoC was used that consists of 70 wt.% polyimide as binder and 30 wt.% MoS2 and was applied at INM Saarbrücken, Germany. An identical coating process was applied. The total coating thickness was tc = 25 ± 5 μm. Figure 2 shows a representative uncoated, ZrCg-coated, and MoS2-BoC-coated test disk before the test run. It can be seen that the optical appearance and roughness strongly depend on the preparation and coating method. The mfMS PVD coating technology leads to a slight increase in arithmetic roughness from Ra ≤ 0.01 μm to Ra = 0.03 μm, whereas MoS2-BoC shows a slight increase from Ra ≈ 0.5 μm to Ra = 0.61 μm. The MoS2 solid lubricant particles of the MoS2-BoC are visible due to the colored reflections of the light of the microscope.
Initial optical scans and mean arithmetic roughness of considered test disks
All surface roughness measurements were performed in axial direction by a profile method according to DIN EN ISO 13565-1 to 13565-3 with a measured length of Lt = 4.0 mm and a cut-off wavelength of λc = 0.08 mm and 0.8 mm, respectively.
Thin-film sensors
Besides infrared technology, thin-film sensors enable high-resolution contact temperature measurements. With these, resistance measurements take advantage of the high sensitivity of a suitable sensor material to a change in temperature or pressure during the transit of the thin-film sensor through a tribological contact. Initial pressure measurements in elastohydrodynamic (EHL) contacts on a twin-disk test rig with manganin sensors started in the 1960s in investigations of (Kannel et al. 1965). Since then, numerous authors have carried out contact temperature and pressure measurements in elastohydrodynamic contacts on model test-rigs with thin-film sensor technology, e.g., (Schouten 1973), (Baumann 1987), (Bauerochs 1989), (Kagerer and Königer 1989), and (Mayer 2013). Measurements on gears and bearings can only be found from a few authors like (Kagerer 1991) and (Kühl 1996). Whereas the thin-film sensor technology is highly developed for lubricated contacts with high fluid load portions, measurements in dry lubricated contacts remain unexplored.
Within this study, thin-film sensors made of platinum were used for contact temperature measurements. The sensors were applied on zirconium dioxide (ZrO2) disks to provide electrical insulation. A titanium adhesion layer with a thickness of approximately 40 nm on the ZrO2 disks improved the durability of the thin-film sensor. For both layers (titanium and platinum), ion beam sputtering was used. Figure 3 illustrates the considered sensor geometry, which has been successfully applied for EHL contact temperature measurements ((Kagerer 1991), (Mayer 2013), (Ebner et al. 2020)). The manufacturing process consists of masking with a photolithography technique and sputtering of the titanium bonding layer and the platinum sensor, respectively. The height of the thin-film sensor is 100–150 nm, which results in an ohmic resistance value of RS ≈ 120 Ω. A detailed description of the manufacturing process can be found in (Ebner et al. 2020). The ohmic resistance of platinum shows high sensitivity to temperature and a weak influence of pressure. The ratio between the temperature coefficient αT and the pressure coefficient αP is high, which results in high sensitivity of the measurand compared to the measuring method. Equation (5) shows the correlation of the relative resistance change of platinum ∆RPt/R0,Pt to temperature change (∆T) and pressure change (∆p).
$$ \frac{\Delta {R}_{Pt}}{R_{0, Pt}}={\alpha}_{p, Pt}\cdot \Delta p+{\alpha}_{T, Pt}\cdot \Delta T $$
Thin-film sensor geometry for contact temperature measurements (Ebner et al. 2020)
During the transit of the thin-film sensor through the contact, temperature and pressure changes always occur simultaneously, such that the pressure distribution has to be known to calculate the temperature rise ∆T. For the used platinum thin-film sensor, the temperature coefficient αT, Pt = 1.16∙10−3 1/K is two orders of magnitude higher than the pressure coefficient αp, Pt = − 1.10∙10−5 mm2/N.
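Inverting Eq. (5) for the temperature rise when the local pressure is known can be sketched as follows; the resistance change and pressure in the example are illustrative values, not measurements from this study.

```python
ALPHA_T_PT = 1.16e-3    # 1/K, platinum temperature coefficient
ALPHA_P_PT = -1.10e-5   # mm^2/N, platinum pressure coefficient

def temperature_rise(dR_over_R0, delta_p):
    """Solve dR/R0 = alpha_p * dp + alpha_T * dT for dT (Eq. (5))."""
    return (dR_over_R0 - ALPHA_P_PT * delta_p) / ALPHA_T_PT

# Example: 5 % relative resistance change at 400 N/mm^2 contact pressure
print(round(temperature_rise(0.05, 400.0), 1))  # ~46.9 K
```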
For evaluation of the influence of MoS2-BoC on the contact temperature, Table 1 supplements the material data and thermophysical properties of steel, ZrO2, and polyimide. The latter approximates the properties of the MoS2-BoC. The thermal effusivity e represents the ability to transport heat by conduction and convection (also known as thermal inertia (Ziegltrum et al. 2020)) and is defined by Eq. (6).
$$ e=\sqrt{\rho \cdot {c}_P\cdot \lambda } $$
Table 1 Material data and thermophysical properties (Benford et al. 1999), (Detakta 2015), (Deutsche Edelstahlwerke GmbH 2011), (Oxidkeramik J Cardenas GmbH 2015) (MoS2-BoC values approximated by plain polyimide)
Thereby, ρ describes the density, cp the specific heat capacity, and λ the thermal conductivity of the considered substrate or surface coating.
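Equation (6) can be evaluated for the three materials in Table 1; the property values below are rounded, generic literature values used as placeholders, not the exact table entries, and serve only to illustrate the ranking of thermal effusivities (steel far above polyimide).

```python
from math import sqrt

def effusivity(rho, c_p, lam):
    """Thermal effusivity e = sqrt(rho * c_p * lambda), Eq. (6)."""
    return sqrt(rho * c_p * lam)   # J/(m^2 K s^0.5)

materials = {  # rho [kg/m^3], c_p [J/(kg K)], lambda [W/(m K)] (assumed)
    "16MnCr5E steel": (7760, 460, 44),
    "ZrO2":           (6050, 400, 2.5),
    "polyimide":      (1400, 1100, 0.25),
}
for name, props in materials.items():
    print(f"{name}: e = {effusivity(*props):.0f}")
# The low effusivity of the polyimide-based MoS2-BoC is consistent with
# the pronounced thermal insulation observed in the contact measurements.
```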
Friction curves and bulk temperatures were measured for various operating conditions shown in Table 2. Each friction curve was recorded at a constant normal force of FN = 435 N, which corresponds to a Hertzian pressure of pH = 400 MPa for the contact of uncoated disks. For each considered sum velocity v∑, the slip ratio s was incrementally increased from s = 0 % to s = 50 %. All coefficients of friction μ and bulk temperatures ϑM were recorded in quasi-stationary states characterized by a bulk temperature change ∆ϑM/∆t ≤ 0.5 K/min. After reaching the maximum slip ratio of s = 50 %, the system was left at rest to cool down to room temperature before adjusting the next sum velocity v∑. Besides dry lubrication, reference measurements under oil injection lubrication were conducted with reference ISO VG100 mineral oil with 4 % of sulfur-phosphorus extreme pressure (EP) additive (FVA3A, (Laukotka 2007)). The oil was injected at a temperature ϑoil = 40 °C. To avoid damage, all friction curve measurements were aborted if the coefficient of friction exceeded μ = 0.5, or if the measured bulk temperature exceeded ϑM = 160 °C. Disks with the same surface finish according to Fig. 2 were paired. Each friction curve was measured twice with a new set of disks. The room was conditioned to 20 °C and a relative humidity of RH = 40 − 50 %.
Table 2 Operating conditions of friction and contact temperature measurements. Hertzian pressure calculated for the uncoated system
Contact temperature measurements were performed at constant sum velocity v∑ = 2 m/s, but different loads and slip ratios s, as shown in Table 2. The considered normal forces were FN = {251, 446} N, which correspond to Hertzian pressures of the uncoated contact of pH = {300, 400} MPa. The slip ratio ranged from s = 0 % to s = 30 %. After adjusting each operating point, the disks were brought into contact at the considered normal force FN for about three seconds and six roll-overs from the thin-film sensor were tracked with a digital oscilloscope. This procedure was repeated once to achieve a total of twelve signals for each operating point.
This section presents the experimental results divided into friction curves and contact temperature measurements.
Friction curve measurements
Figure 4 shows the measured coefficients of friction μ and bulk temperatures ϑM over the slip ratio s for a normal force of FN = 435 N and sum velocities v∑ = {1, 2, 4} m/s. Both friction curves of the uncoated polished surface under dry lubrication show a degressive increase of the coefficient of friction for a sum velocity v∑ = 1 m/s up to a slip ratio s = 20 %. A higher slip ratio results in a rapid increase in the coefficient of friction. Thus, the tests were aborted and a stationary value could not be measured. The results correspond to investigations by (Ebner et al. 2018) and (Yilmaz et al. 2018) for uncoated surfaces under dry lubrication. Measured bulk temperatures increase only slightly with increasing slip ratio for the uncoated surface, even after the coefficient of friction had increased rapidly.

Compared to the uncoated polished surfaces, MoS2-BoC demonstrates a significantly increased operating area. The friction curves of MoS2-BoC at v∑ = 1 m/s show a similar profile as with the uncoated polished surface, but decrease at high slip ratios s > 30 %. For sum velocities v∑ = {2, 4} m/s, both friction curves increase rapidly at small slip ratios and then decrease slightly with increasing slip ratio. The decrease of the coefficient of friction with increasing slip ratio can be attributed to the lattice-layer structure of MoS2 (Vazirisereshk et al. 2019). Corresponding bulk temperatures increase with increasing sum velocity and slip ratio due to rising frictional power in the disk contact. The first test run was aborted due to high friction force at a sum velocity v∑ = 4 m/s and a slip ratio s = 50 %; the second test run did not reach the abortion criteria.

The friction curves for DLC coating ZrCg reveal frictional behavior similar to MoS2-BoC. For a sum velocity v∑ = 1 m/s, a sharp increase of the coefficient of friction up to a slip ratio s = 5 % and a moderate decrease for higher slip ratios is observed for the first test run. The second test run shows its maximum coefficient of friction at a slip ratio s = 20 % before it decreases to a similar level as the first test run. At higher sum velocities, the profile of the friction curves reveals a sharp increase to a maximum coefficient of friction at slip ratios of s = 2 % and 5 %, after which a continuous decrease occurs. The decrease of the coefficient of friction with an increasing slip ratio can be attributed to a temperature-induced structural transformation from carbon to graphite (Durst 2008) resulting in a lattice-layer structure. This transformation process is also observed in investigations on the frictional behavior of ta-C coatings by (Yilmaz et al. 2018). The corresponding bulk temperatures increase with increasing sum velocity and slip ratio due to rising frictional power in the contact. The second test run was aborted due to high friction force at a sum velocity v∑ = 4 m/s and a slip ratio s = 50 %.
Measured friction curves and bulk temperatures ϑM at a normal force of FN = 435 N and sum velocities v∑ = {1, 2, 4} m/s
The experimental results under oil injection lubrication with uncoated polished surfaces show significantly lower coefficients of friction and bulk temperatures. Despite the oil injection temperature of ϑoil = 40 °C, the bulk temperatures are lower for every operating condition due to heat convection into the surroundings. Compared to lubricated contacts, dry lubrication shows lower repeatability and higher sensitivity to changes of the operating conditions, as seen in Fig. 4.
Figure 5 shows optical scans and the corresponding mean arithmetic roughness Ra of all investigated surfaces under dry lubrication after the second test run. Note that the abortion limit was reached for the uncoated surface and DLC coating ZrCg, which indicates that the operating area was exceeded, resulting in damage to the investigated surface. Besides the rise in surface roughness Ra, the uncoated surface shows brown discoloration, which may indicate oxidation processes. Similar observations by (Yilmaz et al. 2018) revealed the formation of Fe2O3 on the uncoated steel running surface under dry lubrication through energy-dispersive X-ray spectroscopy (EDX) element analysis. Generally, high friction is associated with severe local temperature developments that can lead to accelerated oxidation processes. MoS2-BoC reveals a partial detachment of the bonded coating as the steel substrate becomes visible, whereas the surface roughness increases slightly to Ra = 1.01 μm. Clear changes can be observed for DLC coating ZrCg in the optical appearance and the increase of surface roughness to Ra = 3.18 μm.
Optical scans and mean arithmetic roughness Ra of test disks after the second test run
Contact temperature measurements
Measurements of contact temperatures were considered for MoS2-BoC. The polyimide matrix and solid lubricant particles are sufficiently electrically insulating not to cause a short circuit between the contact pads of the thin-film sensor. Besides, MoS2-BoC is soft compared to the platinum of the thin-film sensor, which inhibits detachment of the sensor from the sensor disk. Figure 6 shows the contact temperature rise ΔT* above bulk temperature over the dimensionless gap length direction x/bH for MoS2-BoC at normal forces FN = {251, 446} N, sum velocity v∑ = 2 m/s and slip ratios s = {0, 10, 20, 30} %. Thereby, the dimensionless gap length direction represents the sensor position x normalized by the half Hertzian contact width bH in entrainment direction to enable a simpler visualization of different loads. All illustrated temperature profiles represent the mean value of up to twelve roll-overs. The signal scattering for FN = 251 N is less than 1.2 K at a slip ratio s = 20 %. For slip ratios of s = {0, 10} %, the temperature scatters between 0.1 K and 0.2 K. At a normal force of FN = 446 N, the scattering increases to a maximum of 6 K for a slip ratio of s = 30 %. The influence of pressure on the temperature profile was neglected (symbolized by * added to ΔT), which leads to underestimated temperature rises due to the negative pressure coefficient of platinum. As MoS2-BoC with coating thickness tc = 25 ± 5 μm affects the contact pressure, a pressure correction based on pH for the uncoated disk contact cannot be easily applied. Investigations by (Elsharkawy et al. 2006) on the influence of coating Young's modulus and thickness on pressure distribution reveal significantly lower maximum contact pressures for soft coatings. For a ratio of Young's moduli between substrate (Es) and coating (Ec), Es/Ec = 20, and a coating thickness of 20 μm, the maximum contact pressure can be as low as half the value of the uncoated system. However, since no detailed simulations are available for the considered MoS2-BoC, no pressure correction was applied. The comparison of measured contact temperatures ∆T* is limited to the same normal force, although the maximum temperature change due to pressure is estimated to be small as the platinum sensor is much more sensitive to temperature, as described in the "Thin-film sensors" section.
Contact temperature rise ∆T* for normal forces FN = {251, 446} N and slip ratios s = {0, 10, 20, 30} % for MoS2-BoC
At a normal load of FN = 251 N, a maximum temperature rise \( \Delta {T}_{max}^{\ast } \) = 0.5 K for pure rolling (curve a) was measured. This can be attributed to hysteresis friction in the polyimide matrix and microslip. The temperature rise increases from \( \Delta {T}_{max}^{\ast } \) = 3.5 K (curve b) up to \( \Delta {T}_{max}^{\ast } \) = 6.2 K (curve c) for the highest investigated slip ratio s = 20 %. This can be attributed to the increased sliding speed and hence friction power. Note that the coefficient of friction also increases slightly from μ = 0.046 (curve b) to μ = 0.054 (curve c). A similar pattern is observed at a normal load FN = 446 N. The temperature rise ∆T* also rises steadily with increasing slip ratio s. For pure rolling (curve a), the temperature rise is very small. For curve b, the temperature rise is \( \Delta {T}_{max}^{\ast } \) = 3.1 K and for curve c, \( \Delta {T}_{max}^{\ast } \) = 8.0 K. The maximum temperature rise \( \Delta {T}_{max}^{\ast } \) = 20.5 K is observed for curve d at s = 30 %. As for the lower normal force, the coefficient of friction increases slightly from curve b to d, from μ = 0.040 to μ = 0.050 to μ = 0.051.
In the following section, the experimental results are discussed with a focus on the tribological behavior and thermal performance of the considered surface coatings.
Tribological performance of surface coatings
The experimental results in "Friction curve measurements" section under dry lubrication demonstrate a reduction of friction and an increase of the operating area by the considered surface coatings MoS2-BoC and DLC coating ZrCg compared to plain steel contacts. At higher sum velocities v∑, an increase in the slip ratio s can result in reduced friction. The measured coefficients of friction between 0.1 < μ < 0.2 for DLC coating ZrCg correlate with the results from (Erdemir and Eryilmaz 2014) for an a-C:H DLC coating under ambient air conditions. The decrease in friction at higher slip ratios and sum velocities can be attributed to the graphitization of amorphous carbon coatings. (Liu et al. 1996) conducted experiments on a pin-on-disk tribometer with similar loads and sliding speeds as for the friction curves in Fig. 4 and observed a significant reduction of friction, especially at high sliding speeds. According to (Durst 2008), graphitization starts from 300 °C, which can occur locally at asperity contacts with high local pressures and flash temperatures. Also for the MoS2-BoC, the coefficients of friction are in agreement with other investigations. (Xu et al. 2003) performed experiments under ambient air conditions on a ball-on-disk tribometer with MoS2-BoC and epoxy resin as binder. For a relative humidity of RH = 50 % and room temperature (23 °C), coefficients of friction between 0.1 < μ < 0.2 were measured.
The coefficients of friction for contact temperature measurements in the "Contact temperature measurements" section are significantly lower than observed during friction curve measurements with MoS2-BoC in the "Friction curve measurements" section. Although the comparison is restricted with respect to different disk pairings, the testing time has to be considered. While the contact temperature measurements lasted only a few seconds, the total running time for a friction curve measurement can be up to 2 h. (Gradt and Schneider 2016) observed continuously increasing coefficients of friction over running time for MoS2 coatings on a ball-on-disk tribometer. The results of the friction curve measurements may indicate this behavior, since coefficients of friction as low as μ = 0.05 to 0.1 were observed only for low slip ratios at a sum velocity of v∑ = 1 m/s. For the sum velocity v∑ = 1 m/s and the slip ratio s = 10 %, the measured coefficient of friction is μ = 0.09, whereas for the same sliding speed vg at a sum velocity of v∑ = 2 m/s (s = 5 %), the measured mean coefficient of friction is μ = 0.15. This indicates that ongoing interaction in the contact, accompanied by wear in the contact zone, interferes with the sliding mechanism of the MoS2-BoC. Note that the bulk temperatures were also higher during friction curve measurements, which may influence the material properties of the polyimide binder matrix of the MoS2-BoC. Investigations from (Tsai et al. 2003) on the temperature dependency of vapor-deposited polyimide reveal a linear decrease in Young's modulus with increasing temperature. Hence, the increasing bulk temperatures for higher sum velocities and slip ratios may affect the tribological performance significantly.
The recorded coefficients of friction during contact temperature measurements in the "Contact temperature measurements" section also show a decrease with higher load as known for solid lubricants with a lamellar sliding mechanism (Gustavsson et al. 2013). According to (Donnet and Erdemir 2008) and (Kazuhisa 2001), coefficients of friction for compounds with lamellar sliding mechanism correlate to the contact area and shear stresses between the sliding planes. However, there is no linear relationship between load and contact area resulting in decreasing coefficients of friction for higher loads as observed by (Gradt and Schneider 2016) and (Gustavsson et al. 2013).
Thermal performance of surface coatings
The investigations with dry lubrication show significantly higher coefficients of friction than the investigations with oil injection lubrication. This higher friction power, accompanied by poor heat dissipation, results in higher bulk temperatures. The related higher level of contact temperatures under dry lubrication is amplified by thermal insulation effects, as observed in the contact temperature measurements with MoS2-BoC.
To consider the influence of the thermophysical properties of MoS2-BoC (see Table 1) on the contact temperature, the flash temperatures TBl according to Blok (Blok 1937) are calculated by Eq. (7) for the material pairings ZrO2/steel and ZrO2/polyimide. Thereby, polyimide approximates the thermophysical properties of MoS2-BoC. FN represents the normal load, leff the width of the contact, E′ the reduced modulus, and R the radius of relative curvature. Index 1 refers to the upper disk (ZrO2) and index 2 to the lower disk (steel or polyimide).
$$ {T}_{Bl}=\frac{0.62\cdot \mu \cdot {\left(\frac{F_N}{l_{eff}}\right)}^{0.75}\cdot {\left(\frac{E^{\prime }}{R}\right)}^{0.25}\cdot \mid {v}_g\mid }{\sqrt{\lambda_1\cdot {\rho}_1\cdot {c}_{p,1}\cdot {v}_1}+\sqrt{\lambda_2\cdot {\rho}_2\cdot {c}_{p,2}\cdot {v}_2}} $$
As the temperature penetration depth is in the range of only a few micrometers (Habchi 2014; Ziegltrum et al. 2020), the influence of the steel substrate on the flash temperature can be neglected. TBl refers to the maximum contact temperature rise above bulk temperature and can thus be compared to the maximum temperature rise ∆Tmax measured for MoS2-BoC.
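A minimal sketch of Eq. (7) is given below. All input values are placeholder assumptions (contact geometry, reduced modulus, surface velocities for v∑ = 2 m/s and s = 30 %, and rounded thermophysical data) rather than the paper's Table 1/2 values; nevertheless, the strong contrast between the ZrO2/polyimide and ZrO2/steel pairings mirrors the comparison in Fig. 7:

```python
import math

def blok_flash_temperature(mu, f_n, l_eff, e_red, r_red, v1, v2, mat1, mat2):
    """Blok flash temperature, Eq. (7); mat = (lambda, rho, c_p)."""
    v_g = abs(v1 - v2)  # sliding speed
    num = 0.62 * mu * (f_n / l_eff) ** 0.75 * (e_red / r_red) ** 0.25 * v_g
    den = sum(math.sqrt(lam * rho * c_p * v)
              for (lam, rho, c_p), v in ((mat1, v1), (mat2, v2)))
    return num / den

# Placeholder inputs (assumptions, not the paper's data).
zro2 = (2.5, 6050.0, 450.0)    # lambda [W/mK], rho [kg/m^3], c_p [J/kgK]
steel = (42.0, 7850.0, 460.0)
poly = (0.12, 1420.0, 1090.0)  # polyimide as MoS2-BoC proxy
args = dict(mu=0.05, f_n=446.0, l_eff=5e-3, e_red=115e9, r_red=20e-3,
            v1=1.15, v2=0.85)  # v_sum = 2 m/s, s = 30 %
print(f"ZrO2/polyimide: {blok_flash_temperature(mat1=zro2, mat2=poly, **args):5.1f} K")
print(f"ZrO2/steel:     {blok_flash_temperature(mat1=zro2, mat2=steel, **args):5.1f} K")
```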
Figure 7 shows the maximum temperature rise ∆Tmax measured for ZrO2/MoS2-BoC and the flash temperatures TBl calculated for ZrO2/polyimide and ZrO2/steel.
Measured maximum relative temperature rise ∆Tmax under dry lubrication at FN = {251,446} N and v∑ = 2 m/s and calculated flash temperatures
The measured values for ZrO2/MoS2-BoC and the calculated values for ZrO2/polyimide agree, with the largest deviations at the slip ratio of s = 30 % at FN = 446 N. The calculated flash temperatures for ZrO2/steel are clearly lower than for the measured and calculated pair with the considered coating. This clearly indicates thermal insulation effects that are mainly caused by the low thermal effusivity of e = 486 J/(K\( \sqrt{s} \)m2) of the MoS2-BoC compared to steel with e = 12130 J/(K\( \sqrt{s} \)m2). Numerical simulations by (Ziegltrum et al. 2020) reveal thermal insulation effects in lubricated steel/polymer contacts due to the low thermal effusivity of polymers. Hence, especially the high proportion of polyimide binder matrix with its very low thermal conductivity (Benford et al. 1999) favors thermal insulation by MoS2-BoC, since the thermal conductivity of MoS2 is only slightly lower compared to steel (Peng et al. 2016). Thermal insulation has also been observed with coatings under oil lubrication (e.g., (Björling et al. 2014) and (Ebner et al. 2020)). Thereby, in contrast to dry lubricated contacts, thermal insulation can result in a reduction in the coefficient of friction, as the contact viscosity is reduced by high contact temperatures. It should be noted that the absolute contact temperatures measured with MoS2-BoC are low in comparison for the considered operating conditions. This is mainly because the disks are in contact for only approximately twelve roll-overs per operating point in order to avoid detachment and high wear of the platinum thin-film sensor. Hence, the measured bulk temperature is not quasi-stationary as for the friction curve measurements.
Nevertheless, the observed thermal insulation with high contact temperature rises can affect the performance of MoS2-BoC since MoS2 is sensitive to oxidation processes (Zhang et al. 2011). Investigations from (Xu et al. 2003) show significantly increased wear at temperatures above 100 °C for MoS2-bonded coatings. Besides accelerated oxidation, high temperature levels can also result in reduced Young's modulus of the polymer binder matrix (Tsai et al. 2003). For DLC coatings, high contact temperatures can result in increased tribological performance as a structural change (graphitization) from sp3- to sp2-bonded carbon can occur.
It should be noted that no pressure correction was considered for the measured contact temperatures. For this and a detailed interpretation, numerical modelling can be applied to obtain the pressure and temperature distributions. In addition, the influence of the MoS2 particles on the thermophysical properties of MoS2-BoC should be investigated, whereby the thermal conductivity of filled polymers can show an anisotropic behavior (Mamunya et al. 2002).
This study investigated the friction and temperature behavior of rolling-sliding contacts under dry lubrication. Two surface coatings with different coating technologies were considered on a twin-disk test rig. To classify the results, uncoated polished surfaces were tested under dry and oil injection lubrication. The main conclusions of this study with respect to rolling-sliding contacts are:
The bearable load level of uncoated steel surfaces under dry lubrication is very low.
Surface coatings can significantly increase operating areas under dry lubrication.
Friction and bulk temperatures are considerably higher compared to oil lubrication.
Contact temperatures under dry lubrication can be measured by thin-film sensors for bonded coatings if the electrical insulation is sufficient.
Low thermal effusivity of bonded coatings can result in thermal insulation effects that can accelerate oxidation processes.
This study demonstrates the challenge of reliably implementing, e.g., gears with rolling-sliding contacts under dry lubrication in ambient air conditions. Nevertheless, the potential of tailored surface coatings under dry lubrication becomes clear. The lubrication and failure mechanisms have to be investigated further.
Data generated or analyzed during this study are included in this published article.
MoS2: Molybdenum disulfide
DLC: Diamond-like carbon
MoS2-BoC: MoS2-bonded coating
WS2: Tungsten disulfide
RH: Relative humidity
EHL: Elastohydrodynamic lubrication
PVD: Physical vapor deposition
ZrO2: Zirconium dioxide
Banerjee, T., & Chattopadhyay, A. K. (2014). Structural, mechanical and tribological properties of pulsed DC magnetron sputtered TiN-WSx/TiN bilayer coating. Surface & Coatings Technology, 258, 849–860.
Bauerochs, R. (1989). Pressure- and temperature measurements in EHL rolling-sliding contacts (in German). Dissertation, Universität Hannover
Baumann, H. (1987). Measuring surface temperatures between rolling steel cylinders using double layer transducers. Journal of Mechanical Engineering Science, 201(4), 263–270. https://doi.org/10.1243/PIME_PROC_1987_201_119_02.
Benford, D. J., Power, T. J., & Moseley, S. H. (1999). Thermal conductivity of Kapton tape. Cryogenics, 39(1), 93–95. https://doi.org/10.1016/S0011-2275(98)00125-8.
Berman, D., Erdemir, A., & Sumant, A. V. (2018). Approaches for achieving superlubricity in two-dimension materials. ACS Nano, 12(3), 2122–2137. https://doi.org/10.1021/acsnano.7b09046.
Birkhofer, H., & Kümmerle, T. (2012). Feststoffgeschmierte Wälzlager: Einsatz, Grundlagen und Auslegung. Berlin Heidelberg: Springer Vieweg. https://doi.org/10.1007/978-3-642-16797-3.
Björling, M., Larsson, R., & Marklund, P. (2014). The effect of DLC coating thickness on elastohydrodynamic friction. Tribology Letters, 55(2), 353–362. https://doi.org/10.1007/s11249-014-0364-6.
Blok, H. (1937). Theoretical study of temperature rise at surface of actual contact under oiliness lubrication, Proc. Gen. Disc. Lubric (pp. 225–235). London: IME.
Bobzin, K., Brögelmann, T., Stahl, K., Michaelis, K., Mayer, J., & Hinterstoißer, M. (2015). Friction reduction of highly-loaded rolling-sliding contacts by surface modifications under elasto-hydrodynamic lubrication. Wear, 328-329,217–228.
Detakta (2015). Material data sheet of Kapton HN-Polyimid Folie. https://www.detakta.de/fileadmin/datenblaetter/flexibleIsolierstoffe/Kapton.pdf. Accessed 18 Aug 2020
Deutsche Edelstahlwerke GmbH (2011). Material data sheet of 1.7131/1.7139 (16MnCr5/16MnCrS5). Witten. https://www.dew-stahl.com/fileadmin/files/dew-stahl.com/documents/Publikationen/Werkstoffdatenblaetter/Baustahl/1.7131_1.7139_de.pdf. Accessed 12 Dec 2018
Dienwiebel, M., Verhoeven, G. S., Pradeep, N., Frenken, J. W. M., Heimberg, J. A., & Zandbergen, H. W. (2004). Superlubricity of graphite. Physical Review Letters, 92, 126101.
Donnet, C., Belin, M., Auge, J. C., Martin, J. M., Grill, A., & Patel, V. (1994). Tribochemistry of diamond-like carbon coatings in various environments. Surface & Coatings Technology, 68-69, 626–631. https://doi.org/10.1016/0257-8972(94)90228-3.
Donnet, C., & Erdemir, A. (2008). Tribology of diamond-like-carbon films: fundamentals and applications. New York: Springer. https://doi.org/10.1007/978-0-387-49891-1.
Durst, O. (2008). Corrosion and wear properties of new carbon-containing PVD layers (in German). Dissertation, Technische Universität Darmstadt
Ebner, M., Yilmaz, M., Lohner, T., Michaelis, K., Höhn, B.-R., & Stahl, K. (2018). On the effect of starved lubrication on elastohydrodynamic (EHL) line contacts. Tribology International, 118, 515–523.
Ebner, M., Ziegltrum, A., Lohner, T., Michaelis, K., & Stahl, K. (2020). Measurement of EHL temperature by thin film sensors-Thermal insulation effects. Tribology International, 149, 105515. https://doi.org/10.1016/j.triboint.2018.12.015.
Elsharkawy, A. A., Holmes, M. J. A., Evans, H. P., & Snidle, R. W. (2006). Micro-elastohydrodynamic lubrication of coated cylinders using coupled differential deflection method. Proceedings of the Institution of Mechanical Engineers, Part J: Journal of Engineering Tribology, 220, 29–41.
Erdemir, A., & Eryilmaz, O. (2014). Achieving superlubricity in DLC films by controlling bulk, surface and tribochemistry. Friction, 2(2), 140–155. https://doi.org/10.1007/s40544-014-0055-1.
Fontaine, J., Mogne, T., Loubet, J. L., & Belin, M. (2005). Achieving superlow friction with hydrogenated amorphous carbon: some key requirements. Thin Solid Films, 482, 99–108.
Gamyula, G., Dobrovolskaya, G. V., Lebedeva, I. L., & Yukhno, T. P. (1984). General regularities of wear in vacuum for solid film lubricant formulated with lamellar materials.
Gradt, T., & Schneider, T. (2016). Tribological Performance of MoS2 Coatings in various Environments. Lubricants, 4, 32.
Grossl, A. (2007). Influence of PVD-coatings on the flank load carrying capacity and bending strength of case hardened spur gears (in German). Dissertation, Technische Universität München
Gustavsson, F., Svahn, F., Bexell, U., & Jacobson, S. (2013). Nanoparticle based and sputtered WS2 low friction coatings - Differences and similarities with respect to friction mechanism and tribofilm formation. Surface & Coatings Technology, 232, 616–626.
Habchi, W. (2014). A numerical model of the solution of thermal elastohydrodynamic lubrication in coated circular contacts. Tribology International, 73, 57.
Hinterstoißer, M., Sedlmaier, M., Lohner, T., & Stahl, K. (2019). Minimizing load-dependent gear losses. Tribologie und Schmierungstechnik, 66, 15–25.
Hirano, M., & Shinjo, K. (1990). Atomistic locking and friction. Physical Review B, 41, 11837–11851.
Höhn, B.-R., Michaelis, K., & Otto, H.-P. (2009). Minimised gear lubrication by a minimum oil/air flow rate. Wear, 266, 461–467.
Kagerer, E. (1991). Measurement of elastohydrodynamic parameters in highly-loaded disk- and gear contacts (in German). Dissertation, Technische Universität München
Kagerer, E., & Königer, M. E. (1989). Ion beam sputter deposition of thin film sensors for applications in highly loaded contacts. Thin Solid Films, 182, 333–344.
Kannel, J., Bell, J. C., & Allen, C. M. (1965). Methods for determining pressure distributions between lubricated rolling contacts. ASLE Transactions, 8,250–270.
Kazuhisa, M. (2001). Solid Lubricants-Fundamental and applications. Washington: Marcel Dekker Inc.
Kühl, S. (1996). Joint contact resolved measurement of pressure and temperature in the elastohydrodynamic line contact of force-transmitting involute gears (in German). Dissertation, Technische Universität Clausthal
Laukotka, E. (2007). FVA-Reference oil catalog-Reference oil data sheet. FVA Heft 660. Frankfurt/Main: Forschungsvereinigung Antrieb e. V (in German).
Li, H., Wang, J., Gao, S., Chen, Q., Peng, L., Liu, K., & Wei, X. (2017). Superlubricity between MoS2 monolayers. Advanced Materials. 29(27),1701474.
Liu, Y., Erdemir, A., & Meletis, E. I. (1996). An investigation of the relationship between graphitization and frictional behavior of DLC coatings. Surface & Coatings Technology, 86-87, 564–568. https://doi.org/10.1016/S0257-8972(96)03057-5.
Lohner, T., Merz, R., Mayer, J., Michaelis, K., Kopnarski, M., & Stahl, K. (2015). On the effect of plastic deformation (PD) additives in lubricants. Tribologie und Schmierungstechnik, 62(2),13–24.
Mamunya, Y. P., Davydenko, V. V., Pissis, P., & Lebedev, E. V. (2002). Electrical and thermal conductivity of polymers filled with metal powders. European Polymer Journal, 38, 1887–1897.
Martens, S. (2008). Oilfree industrial transmissions - Measures and possibilities to minimize or eliminate conventional lubricants (in German). Dissertation, Technische Universität Dresden
Mayer, J. (2013). Influence of the surface and lubricant on the frictional behavior in EHL contact (in German). Dissertation, Technische Universität München
Oxidkeramik J Cardenas GmbH (2015). Material data sheet. Albershausen
Peng, B., Zhang, H., Shao, H., Xu, Y., Zhang, X., & Zhu, H. (2016). Thermal conductivity of monolayer MoS2, MoSe2 and WS2: interplay of mass effect, interatomic bonding and anharmonicity. RSC Advances, 6, 5767–5773.
Reitschuster, S., Maier, E., Lohner, T., & Stahl, K. (2020). Friction and temperature behavior of lubricated thermoplastic polymer contacts. Lubricants, 8, 67.
Ronkainen, H., Varjus, S., & Holmberg, K. (1998). Friction and wear properties in dry, water- and oil-lubricated DLC against alumina and DLC against steel contacts. Wear, 222, 120–128.
Schouten, M. J. M. (1973). The influence of elastohydrodynamic lubrication on friction, wear and durability of gears (in German). Dissertation, University of Eindhoven
Schul, C. (1997). Einfluß der Baugröße auf die Lebensdauer feststoffgeschmierter Kugellager. Dissertation, Technische Universität Darmstadt
Schultrich, B., & Weihnacht, V. (2008). Tribological behavior of hard and super hard carbon layers. Vakuum in Forschung und Praxis, 20,12–17 (in German).
Stößel, K. (1971). Coefficients of friction under elasto-hydrodynamic conditions (in German). Dissertation, Technische Universität München
Tsai, F., Blanton, T., Harding, D. R., & Chen, S. (2003). Temperature dependence of properties of vapor deposited polyimide. Journal of Applied Physics, 93, 3760–3764.
Vazirisereshk, M. R., Martini, A., Strubbe, D. A., & Baykara, M. Z. (2019). Solid lubrication with MoS2: a review. Lubricants. 7,57.
Weihnacht, V., Makowski, S., Brückner, A., Theiler, G., & Gradt, T. (2012). Tribology and applications of ta-C layers under dry lubrication. Tribologie und Schmierungstechnik (in German). 59(6),41-44.
Xu, J., Zhu, M. H., Zhou, Z. R., Kapsa, P., & Vincent, L. (2003). An investigation on fretting wear life of bonded MoS2 solid lubricant coatings in complex conditions. Wear, 255, 253–258.
Yilmaz, M., Kratzer, D., Lohner, T., Michaelis, K., & Stahl, K. (2018). A study on highly-loaded contacts under dry lubrication for gear applications. Tribology International, 128, 410–420.
Yukhno, T. P., Vvedenskij, Y. V., & Sentyurikhina, L. N. (2001). Low temperature investigations on frictional behavior and wear resistance of solid lubricant coatings. Tribology International, 34, 293–298.
Zhang, H. J., Zhang, Z. Z., & Guo, F. (2011). Studies on the influence of graphite and MoS2 on the tribological behaviors of hybrid PTFE/Nomex fabric composite. Tribology Transactions, 54(3), 417–423. https://doi.org/10.1080/10402004.2011.553027.
Ziegltrum, A., Maier, E., Lohner, T., & Stahl, K. (2020). A numerical study on thermal elastohydrodynamic lubrication of coated polymers. Tribology Letters, 68(2). https://doi.org/10.1007/s11249-020-01309-6.
The presented results are based on the research project STA 1198/17-1 of the priority program SPP2074, supported by the German Research Foundation e.V. (DFG). The authors would like to thank for the sponsorship and support received from the DFG. Open Access funding was enabled and organized by Projekt DEAL.
Gear Research Centre (FZG), Technical University of Munich (TUM), Boltzmannstr. 15, 85748, Garching near Munich, Germany
Stefan Hofmann, Mustafa Yilmaz, Enzo Maier, Thomas Lohner & Karsten Stahl
SH designed the experiments, analyzed the results, and wrote the paper with the support of MY. EM and TL supported the interpretation of the results, participated in the scientific discussions, and revised the paper. KS proofread the paper. All authors have read and agreed to the published version of the manuscript.
Correspondence to Stefan Hofmann.
This paper is an extended version of an abstract published in the 61st German Tribology Conference, Göttingen, Germany, 28–30 September 2020.
Hofmann, S., Yilmaz, M., Maier, E. et al. Friction and contact temperature in dry rolling-sliding contacts with MoS2-bonded and a-C:H:Zr DLC coatings. Int J Mech Mater Eng 16, 9 (2021). https://doi.org/10.1186/s40712-021-00129-3
Dry lubrication
Thin film sensors
Tribology of components in mechanical systems
November 2015, 9(4): 449-469. doi: 10.3934/amc.2015.9.449
The nonassociative algebras used to build fast-decodable space-time block codes
Susanne Pumplün 1, and Andrew Steele 2,
School of Mathematical Sciences, University of Nottingham, University Park, Nottingham NG7 2RD
Flat 203, Wilson Tower, 16 Christian Street, London E1 1AW, United Kingdom
Received March 2014 Revised November 2014 Published November 2015
Let $K/F$ and $K/L$ be two cyclic Galois field extensions and $D=(K/F,\sigma,c)$ a cyclic algebra. Given an invertible element $d\in D$, we present three families of unital nonassociative algebras over $L\cap F$ defined on the direct sum of $n$ copies of $D$. Two of these families appear either explicitly or implicitly in the designs of fast-decodable space-time block codes in papers by Srinath, Rajan, Markin, Oggier, and the authors. We present conditions for the algebras to be division and propose a construction for fully diverse fast decodable space-time block codes of rate-$m$ for $nm$ transmit and $m$ receive antennas. We present a DMT-optimal rate-3 code for 6 transmit and 3 receive antennas which is fast-decodable, with ML-decoding complexity at most $\mathcal{O}(M^{15})$.
Keywords: fast-decodable, nonassociative algebra, space-time block codes, MIMO code, division algebra.
Mathematics Subject Classification: Primary: 17A35, 94B0.
Citation: Susanne Pumplün, Andrew Steele. The nonassociative algebras used to build fast-decodable space-time block codes. Advances in Mathematics of Communications, 2015, 9 (4) : 449-469. doi: 10.3934/amc.2015.9.449
Effects of Dietary Selenium, Sulphur and Copper Levels on Selenium Concentration in the Serum and Liver of Lamb
Netto, Arlindo Saran (Department of Animal Science, Faculty of Animal Science and Food Engineering, University of Sao Paulo) ;
Zanetti, Marcus Antonio (Department of Animal Science, Faculty of Animal Science and Food Engineering, University of Sao Paulo) ;
Correa, Lisia Bertonha (Department of Animal Science, Faculty of Animal Science and Food Engineering, University of Sao Paulo) ;
Del Claro, Gustavo Ribeiro (Department of Animal Science, Faculty of Animal Science and Food Engineering, University of Sao Paulo) ;
Salles, Marcia Saladini Vieira (Regional Poles of technological development of agribusiness - APTA) ;
Vilela, Flavio Garcia (Department of Nutrition and Animal Production, School of Veterinary Medicine and Animal Science, University of Sao Paulo)
Thirty-two lambs were distributed among eight treatments in a $2{\times}2{\times}2$ factorial experiment to compare the effects of two levels of selenium (0.2 and 5 mg/kg dry matter [DM]), sulphur (0.25% and 0.37%), and copper (8 and 25 mg/kg DM) on the selenium concentration in the liver and serum of lambs. A liver biopsy was done on all animals and blood samples were collected from the jugular vein prior to the beginning of the treatments. The blood was sampled every thirty days and the liver was sampled after 90 days, at slaughter. Increasing differences in the serum selenium concentration were noticed during the data collection period; it reached 0.667 mg/L in animals fed 5 mg Se/kg DM with normal sulphur and copper concentrations in their diet. However, a three-way interaction and a reduction of the selenium concentration to 0.483 mg/L were verified when increasing the copper and sulphur levels to 25 ppm and 0.37%, respectively. The liver selenium concentration was also high for diets containing higher selenium concentrations, but the antagonistic effect of the increased copper and sulphur levels remained, due to interactions between these minerals. Therefore, for regions where selenium is scarce, increasing its concentration in animal diets can be an interesting option. For regions with higher levels of selenium, the antagonistic interaction between these three minerals can be exploited by increasing copper and sulphur dietary concentrations, thus preventing possible selenium poisoning.
Selenium;Sulphur;Copper;Nutrition;Sheep
Nanoscale Research Letters
Nano Express
Mapping the structural, electrical, and optical properties of hydrothermally grown phosphorus-doped ZnO nanorods for optoelectronic device applications
Vantari Siva†1, Kwangwook Park†2, Min Seok Kim1, Yeong Jae Kim1, Gil Ju Lee1, Min Jung Kim1 and Young Min Song1
Nanoscale Research Letters 2019, 14:110
Received: 3 January 2019
The phosphorus-doped ZnO nanorods were prepared using a hydrothermal process, and their structural modifications as a function of doping concentration were investigated using X-ray diffraction. The dopant concentration-dependent enhancement in length and diameter of the nanorods established the phosphorus doping in ZnO nanorods. The gradual transformation in the type of conductivity, as observed from the variation of carrier concentration and Hall coefficient, further confirmed the phosphorus doping. The modification of carrier concentration in the ZnO nanorods due to phosphorus doping was understood on the basis of the amphoteric nature of the phosphorus. The ZnO nanorods in the absence of phosphorus showed photoluminescence (PL) in the ultraviolet (UV) and visible regimes. The UV emission, i.e., the near-band-edge emission of ZnO, was found to be red-shifted after the doping of phosphorus, which was attributed to donor-acceptor pair formation. The observed emissions in the visible regime were due to the deep level emissions arising from various defects in ZnO. The Al-doped ZnO seed layer was found to be responsible for the observed near-infrared (NIR) emission. The PL emission in the UV and visible regimes can cover a wide range of applications from biological to optoelectronic devices.
Phosphorus doping
P-type ZnO nanorods
Hydrothermal growth
ZnO is one of the most promising semiconducting materials and has received significant attention owing to its unique and easily tunable physical and chemical properties [1–11]. It is known that ZnO is an intrinsic n-type semiconductor. The p-type conductivity in ZnO plays a key role in the formation of homojunctions, which have several applications including light-emitting diodes [12], electrically pumped random lasers [2], and photodetectors [9]. To date, several attempts have been made to induce p-type conductivity in the ZnO matrix by doping different elements such as antimony (Sb), arsenic (As), nitrogen (N), phosphorus (P), or other elements [2, 5–9]. However, some of these elements fail to induce p-type conductivity as they form deep acceptors and hence are not useful. The apparent bottleneck issues with p-type doping in ZnO are its initial achievement and its reproducibility and stability [7]. Fortunately, the stability/degradation issues can be avoided in the case of phosphorus in ZnO by thermal activation using a rapid thermal annealing process [15]. Furthermore, phosphorus-doped ZnO thin films were found to be stable for up to 16 months under ambient conditions according to Allenic et al. [14]. Therefore, phosphorus is considered one of the most reliable and stable dopants for inducing p-type conductivity in ZnO among those mentioned above. Moreover, phosphorus in ZnO nanostructures was found to trigger oxygen vacancy-related photoluminescence (PL) emission in the visible region [8, 16]. Though there have been several reports on the PL emission of ZnO nanostructures [17–22], a systematic study that covers the luminescence in the three different and important regimes of the electromagnetic spectrum, i.e., the ultraviolet (UV), visible, and near-infrared (NIR) regimes, along with the electrical and structural properties, is quite scarce.
In the present study, we report the successful doping of phosphorus in ZnO nanorods using the hydrothermal method, a cost-effective, scalable, large-area, and low-temperature technique. The phosphorus was found to be amphoteric in nature, which was realized from an unconventional variation of the type of conductivity and carrier concentration as a function of the doping concentration. We further demonstrate PL emission in the UV, visible, and NIR regions by controlled doping of phosphorus in the ZnO nanorods grown on an Al-doped ZnO seed layer. The underlying mechanism of the present findings is discussed on the basis of various defect states in the existing system. The most interesting aspect of the present study is the achievement of emission in two different regimes (UV and visible) in a single system by carefully choosing an appropriate combination of the nanostructures, seed layer, and dopants.
Preparation of Seed Layer
A seed layer of Al-doped ZnO film of approximately 100 nm was grown using radio frequency (RF) sputter deposition with a 2% alumina-doped ZnO target on a set of cleaned quartz substrates (Fig. 1a). The substrates were cleaned in acetone and isopropyl alcohol using ultrasonication, after which the substrates were dried carefully using nitrogen gas. The sputtering of the seed layer was carried out for 40 min using an RF power of 90 W and 60 SCCM of Ar gas flow. The Al-doped ZnO film was chosen as the seed layer due to its better conductivity and higher transmittance compared to a pure ZnO film [23].
Schematic representation of the Al-doped ZnO seed layer (a), growth process of ZnO nanorods (b), and grown ZnO nanorods (c). The XRD patterns (d) of the ZnO nanorods corresponding to varying NH4H2(PO4)2 M ratio. The integrated intensity of (002) peak as functions of NH4H2(PO4)2 M ratio (e)
Growth of ZnO Nanorods
The undoped ZnO nanorods were grown by the hydrothermal method using zinc nitrate hexahydrate (Zn(NO3)2·6H2O, reagent grade, 98%) and hexamethylenetetramine (HMTA, C6H12N4, ≥ 99.0%). A 400 ml solution of 0.06 M zinc nitrate and HMTA was prepared by stirring for 2 h. The phosphorus-doped ZnO nanorods were prepared by adding ammonium dihydrogen phosphate (NH4H2(PO4)2, ≥ 98%) to the above chemicals in the M ratios of 0%, 0.05%, 0.1%, 0.2%, 0.5%, and 1%. The seed layer-deposited quartz substrates were dipped in these beakers and kept in the oven at 90 °C for 10 h (Fig. 1b). Next, these samples were rinsed with deionized water and thoroughly dried with nitrogen gas to arrive at the vertically aligned phosphorus-doped ZnO nanorods by removing the residues (Fig. 1c).
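As a quick consistency check of the stated concentrations, the precursor masses implied by a 0.06 M, 400 ml solution can be computed as below; the assumption that zinc nitrate and HMTA are used equimolar (0.06 M each) is an interpretation of the text, and the molar masses are standard values:

```python
# Precursor masses for a 0.06 M, 400 ml solution (equimolar assumption).
M_ZN_NITRATE_HEXAHYDRATE = 297.49  # g/mol, Zn(NO3)2*6H2O (standard value)
M_HMTA = 140.19                    # g/mol, C6H12N4 (standard value)

conc_mol_per_l = 0.06
volume_l = 0.400
for name, molar_mass in [("Zn(NO3)2*6H2O", M_ZN_NITRATE_HEXAHYDRATE),
                         ("HMTA", M_HMTA)]:
    mass_g = conc_mol_per_l * volume_l * molar_mass
    print(f"{name}: {mass_g:.2f} g")  # ~7.14 g and ~3.36 g
```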
Characterization Methods
The surface morphology of the samples was examined using a scanning electron microscope (SEM). The effect of doping on the structural properties of the samples was investigated using powder mode X-ray diffraction (XRD). Hall-effect measurements were performed on all the samples to determine their type of conductivity, with an applied magnetic field of 0.5 T. The room temperature PL measurements were performed using an excitation wavelength of 266 nm (Nd-YAG pulsed laser) and an incident power of 150 mW.
In order to understand the structural changes due to the incorporation of phosphorus into ZnO nanorods, we performed powder mode XRD measurements, whose plots are presented in Fig. 1d. We note here that the undoped sample shows diffraction peaks at 34.36°, 44.27°, 62.80°, and 72.45° corresponding to the (002), (111), (103), and (004) planes of ZnO, respectively. The peak corresponding to the (002) plane shows the highest intensity, and the peak position does not change regardless of the NH4H2(PO4)2 M ratio and the resulting diameter/length changes of the nanorods. Upon increasing the NH4H2(PO4)2 M ratio, the integrated intensity of the highest intensity peak, i.e., the (002) peak, gradually decreases as shown in Fig. 1e. The only difference between these samples is the variation of the M ratio; hence, this can be attributed to the reduced crystalline nature of the ZnO nanorods [24]. However, one thing to note here is the full width at half maximum (FWHM) of the (002) peak. The FWHM was found to be nearly the same, around 0.25°, irrespective of the NH4H2(PO4)2 M ratio. From this perspective, it is also highly likely that the misalignment of the nanorods along the c-axis can lead to the decrease in the integrated (002) peak intensity. When the M ratio of NH4H2(PO4)2 reached 1%, three additional peaks were observed at angles 31.70°, 36.17°, and 47.50°, which are related to the (100), (101), and (102) peaks of the ZnO crystal, respectively. The appearance of these additional peaks is also in good agreement with the abovementioned claim.
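Although the authors do not report a Scherrer analysis, the stated (002) FWHM of about 0.25° allows a rough lower-bound estimate of the coherent domain size. The sketch below assumes Cu Kα radiation (the wavelength is not stated in the text) and ignores instrumental broadening, so the result is only indicative:

```python
import math

K = 0.9                  # Scherrer shape factor (assumption)
WAVELENGTH_NM = 0.15406  # Cu K-alpha1 (assumption; not stated in the text)

two_theta_deg = 34.36    # (002) peak position from the text
fwhm_deg = 0.25          # reported FWHM

beta = math.radians(fwhm_deg)            # FWHM in radians
theta = math.radians(two_theta_deg / 2)
d_nm = K * WAVELENGTH_NM / (beta * math.cos(theta))
print(f"coherent domain size >= ~{d_nm:.0f} nm")  # ~33 nm (lower bound)
```

That this estimate lies below the SEM diameters of 60–145 nm is expected, since instrumental broadening is not subtracted.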
The top and cross-sectional view SEM images of the undoped and doped (up to 1%) samples are shown in Fig. 2a–f, where a uniform distribution of the hexagonal nanorods can be noticed. As discussed in the above paragraph, the diameter and length of the nanorods were found to increase upon increasing the NH4H2(PO4)2 M ratio, which can be observed in the inset (top view) and right-hand side (cross-sectional view) of each image, respectively. In the undoped sample (0% NH4H2(PO4)2 M ratio), the average diameter of the nanorods was approximately 60 nm, which increased gradually to 145 nm with increasing doping concentration as shown in the insets of Fig. 2a–f. Similarly, the length of the nanorods was also found to increase with doping concentration, albeit more modestly, as shown in the right-hand side of each image. The length and diameter of the nanorods are plotted as functions of the NH4H2(PO4)2 M ratio in Fig. 3a and b, respectively. In the insets of these figures, we show a schematic illustration of the vertically grown ZnO nanorods to indicate their length and diameter. It may be noted that the length of these nanorods increases rapidly from 1.35 μm to 2.5 μm upon increasing the NH4H2(PO4)2 M ratio from 0% to 0.1% and almost saturates beyond this M ratio. A similar trend in the variation of the diameter of the nanorods was noticed (Fig. 3b). The enhanced length and diameter of the nanorods up to 0.1% NH4H2(PO4)2 M ratio is attributed to the larger size of phosphorus compared to the oxygen atoms in ZnO [12, 13, 25]. Beyond the 0.1% M ratio, the behavior of the length and diameter variation can be understood on the basis of the saturation of the solubility limit of the incorporating phosphorus in the ZnO matrix [26]. Since all growth parameters except the doping concentration were kept constant, the continued increase in length and diameter indicates the successful incorporation of phosphorus into the ZnO nanorods [12, 25]. The chemical reactions responsible for the growth of ZnO and the doping of phosphorus into the ZnO crystals can be understood from the following equations [16]:
Top (left) and cross-sectional (right) SEM images of ZnO nanorods corresponding to NH4H2(PO4)2 M ratios of 0% (a), 0.05% (b), 0.1% (c), 0.2% (d), 0.5% (e), and 1.0% (f). The diameter and length of the nanorods increase as functions of the NH4H2(PO4)2 M ratio. The enhancement of the volumetric features of the nanorods is due to the elevated incorporation of phosphorus
a, b Quantitative views of the length and diameter of ZnO nanorods with increasing NH4H2(PO4)2 M ratio, respectively. c–e Changes in the doping concentration, Hall coefficient, and mobility of the nanorods as functions of the NH4H2(PO4)2 M ratio, respectively. The conductivity type changes from n-type to p-type when the NH4H2(PO4)2 M ratio exceeds approximately 0.3%. The decrease in the doping concentration of the nanorods at the 1% NH4H2(PO4)2 M ratio is due to the self-compensation effect beyond the solubility limit of phosphorus in the ZnO nanorods
$$ \mathrm{Zn}{\left({\mathrm{NO}}_3\right)}_2\to {\mathrm{Zn}}^{2+}+2{\mathrm{NO}}_3^{-} $$
$$ {\mathrm{C}}_6{\mathrm{H}}_{12}{\mathrm{N}}_4+10{\mathrm{H}}_2\mathrm{O}\leftrightarrow 6\mathrm{HCHO}+4{{\mathrm{N}\mathrm{H}}_4}^{+}+4{\mathrm{OH}}^{-} $$
$$ {\mathrm{Zn}}^{2+}+2{\mathrm{OH}}^{-}\leftrightarrow \mathrm{Zn}{\left(\mathrm{OH}\right)}_2\to \mathrm{Zn}\mathrm{O}+{\mathrm{H}}_2\mathrm{O} $$
$$ \mathrm{N}{\mathrm{H}}_4{\mathrm{H}}_2\mathrm{P}{\mathrm{O}}_4+2{\mathrm{O}\mathrm{H}}^{-}\to {{\mathrm{NH}}_4}^{+}+2{\mathrm{H}}_2\mathrm{O}+{\mathrm{PO}}_4^{3-} $$
$$ 3{\mathrm{Zn}}^{2+}+2{{\mathrm{PO}}_4}^{3-}\to {\mathrm{Zn}}_3{\left({\mathrm{PO}}_4\right)}_2\downarrow $$
In the hydrothermal process, upon increasing the temperature, the zinc nitrate initially decomposes into Zn2+ and nitrate ions (Eq. (1)). Meanwhile, the chemical reaction between HMTA and water gives rise to formaldehyde, ammonium ions, and hydroxyl ions, as shown above in Eq. (2). These hydroxyl ions react with the Zn2+ ions, leading to the formation of ZnO and H2O (Eq. (3)). In addition, the ammonium dihydrogen phosphate reacts with the hydroxyl ions already present in the beaker, forming phosphate ions along with ammonium ions and water (Eq. (4)). We note here that these phosphate ions can react with the zinc ions to form a zinc phosphate (Zn3(PO4)2) precipitate (Eq. (5)), which is detrimental to the incorporation of phosphorus into the ZnO nanorods [16]. However, zinc nitrate, being a salt of a strong acid and a strong alkali, has the potential to minimize the possibility of zinc phosphate precipitation and hence can increase the probability of successful incorporation of phosphorus into the ZnO nanorods [16]. Phosphorus doping in ZnO nanorods is known to convert their inherent n-type conductivity to p-type [7, 27, 28], which further validates the doping of phosphorus atoms.
Using Hall-effect measurements, we investigated the effect of phosphorus doping on electrical properties such as the conductivity type, carrier concentration, and mobility of the charge carriers. In general, Hall-effect measurement of nanorods and/or nanowires is quite challenging due to their one-dimensional geometry. One-by-one Hall measurements on single nanorods would probably be the most accurate approach; however, that method is mostly applicable to brittle, low-density nanorods or nanowires and requires challenging processing procedures [45]. In the present case, the Hall-effect measurement is enabled by the Al-doped ZnO seed layer beneath the ZnO nanorods, which serves as a conducting medium. Because of the electrical imperfection of the Al-doped ZnO seed layer as a medium for current flow, the measurement may underestimate the actual electrical properties of the ZnO nanorods; nevertheless, the results still show how the NH4H2(PO4)2 M ratio changes these properties. The dependence of the carrier concentration, Hall coefficient, and mobility on the NH4H2(PO4)2 M ratio is illustrated in Fig. 3c, d, and e, respectively. The carrier concentrations for the 0%, 0.05%, 0.1%, 0.2%, 0.5%, and 1% M ratios are −6.1 × 10^15, −4.0 × 10^15, −3.4 × 10^15, 1.6 × 10^15, 7.8 × 10^15, and 1.67 × 10^9 cm^−2, respectively. The negative sign of the carrier concentration in the samples below the 0.2% NH4H2(PO4)2 M ratio indicates their n-type conductivity, and the positive sign in the remaining samples reveals their p-type conductivity. Indeed, ZnO nanorods exhibit intrinsic n-type conductivity due to the presence of oxygen-vacancy-related defects and/or Zn interstitials, although the details remain controversial [7, 27, 28]. With increasing NH4H2(PO4)2 M ratio, however, the ZnO nanorods are gradually transformed into p-type ones through compensation of their intrinsic n-type conductivity. p-type conductivity induced by the incorporation of phosphorus has also been observed in ZnO thin films [29–31]. On the other hand, the nanorods corresponding to the 1% NH4H2(PO4)2 M ratio showed quite different behavior compared with previous reports. As shown in Fig. 3c, the sample corresponding to the 0.5% NH4H2(PO4)2 M ratio showed the highest carrier concentration, around 7.8 × 10^15 cm^−2, which drops abruptly to 1.67 × 10^9 cm^−2 as soon as the NH4H2(PO4)2 M ratio is increased to 1%. We attribute this change to the amphoteric behavior of phosphorus in ZnO [27]: phosphorus acts as either an acceptor or a donor depending on whether it substitutes oxygen sites (PO) or Zn sites (PZn), respectively. It is reported in [27] that the solubility of p-type dopants in ZnO is low. In this regime, phosphorus incorporated in excess of the solubility limit substitutes Zn sites and self-compensates the PO acceptors, and hence the p-type conductivity is lost. The solubility limit of phosphorus is around 10^20 cm^−3 when Zn3P2 is used as the phosphorus source in the ZnO matrix [27]. We cannot state clearly what the solubility limit of phosphorus is when p-type ZnO is grown with NH4H2(PO4)2 via the hydrothermal process, but we believe it should be somewhere around 7.8 × 10^15 cm^−2. It is noteworthy that the carrier concentration can be increased by a post-thermal annealing process, as mentioned in [16]. However, the annealing process changes not only the carrier concentration but also the diameter, length, and density of the nanorods in unexpected ways [16].
Thus, annealing of the nanorods was not considered in the present work. The Hall coefficient (RH) of a semiconductor is given by RH = 1/(nc·e) [32], where nc is the concentration of charge carriers and e is the elementary charge; RH is negative for n-type semiconductors, where the carriers are electrons, and positive for p-type semiconductors, where the carriers are holes. The variation of RH (shown in Fig. 3d) further confirms the transformation of the conductivity of the ZnO nanorods from n-type to p-type. The Hall coefficient and mobility are related by μ = σRH [32], where σ is the electrical conductivity. Since the mobility is directly proportional to the Hall coefficient, the variation of the mobility as a function of the doping concentration follows the shape of the RH curve (as shown in Fig. 3e).
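As a quick numerical illustration of the two relations above, the following minimal Python sketch converts a signed sheet Hall coefficient into a signed sheet carrier concentration and a Hall mobility. This is not code from the paper; the input values are placeholders chosen only to be of the same order as the data in Fig. 3.

```python
# Minimal sketch of the Hall relations R_H = 1/(n_c * e) and mu = sigma * R_H.
E_CHARGE = 1.602e-19  # elementary charge, C

def carrier_concentration(r_hall):
    """Sheet carrier concentration n_c = 1/(R_H * e), in cm^-2 when
    R_H is given in cm^2/C. A negative result indicates electrons
    (n-type), a positive one indicates holes (p-type)."""
    return 1.0 / (r_hall * E_CHARGE)

def hall_mobility(sigma, r_hall):
    """Hall mobility mu = sigma * R_H (sign follows the carrier type)."""
    return sigma * r_hall

# Placeholder inputs, not measured values from this work:
r_hall = -1.0e3   # sheet Hall coefficient, cm^2/C
sigma = 5.0e-4    # sheet conductivity, S per square
print(f"n_c = {carrier_concentration(r_hall):.2e} cm^-2")  # ~ -6.2e15, n-type
print(f"mu  = {hall_mobility(sigma, r_hall):.2e} cm^2/(V s)")
```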
Figure 4a shows the normalized reflectance of the undoped and phosphorus-doped samples, measured in the diffuse reflectance geometry. The sharp fall around 380 nm in the reflectance spectra corresponds to the optical bandgap of the ZnO samples. A tailing of this sharp fall can be noticed after doping, which denotes a change in the optical bandgap due to the doping of phosphorus into the ZnO nanorods. In order to determine the optical bandgap of these samples, we used the Kubelka-Munk (KM) function obtained from the reflectance spectra. The KM function (F(R)) is related to the reflectance by F(R) = (1−R)^2/2R [33], where R is the reflectance of the samples; the corresponding KM function was plotted using the Tauc relation (shown in Fig. 4b). The optical bandgaps of all the samples were estimated from these Tauc plots and are shown in the inset of Fig. 4b. The bandgap of the undoped ZnO sample was found to be 3.28 eV; it decreases to 3.18 eV up to an NH4H2(PO4)2 M ratio of 0.1% and then increases above this concentration, reaching 3.26 eV for the 1% NH4H2(PO4)2 M ratio. We note that the bandgaps of all the samples lie within the range of 3.18 to 3.28 eV. Although the bandgap of the ZnO nanorods obtained from the Tauc plot varies with the NH4H2(PO4)2 M ratio, the Tauc plot is probably not the proper approach for the samples investigated in this article, because it ignores the excitonic effect. To address this issue, we performed PL measurements on all the samples [49].
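To make the Kubelka-Munk/Tauc procedure above concrete, here is a minimal Python sketch assuming a direct allowed transition, so that (F(R)·hν)^2 is plotted against photon energy and the steep region is extrapolated linearly to zero. The reflectance array is synthetic placeholder data, not the measurements of Fig. 4a.

```python
import numpy as np

# Placeholder diffuse-reflectance data: wavelength (nm) and reflectance (0..1).
wavelength = np.linspace(350, 450, 201)
reflectance = 0.05 + 0.85 / (1 + np.exp(-(wavelength - 380) / 4))  # synthetic step

h_nu = 1239.84 / wavelength                       # photon energy in eV
f_r = (1 - reflectance) ** 2 / (2 * reflectance)  # Kubelka-Munk function
tauc = (f_r * h_nu) ** 2                          # direct-allowed Tauc quantity

# Fit a straight line to the steep region of the Tauc curve and
# extrapolate to tauc = 0; the intercept estimates the bandgap.
steep = (tauc > 0.2 * tauc.max()) & (tauc < 0.8 * tauc.max())
slope, intercept = np.polyfit(h_nu[steep], tauc[steep], 1)
e_gap = -intercept / slope
print(f"Estimated optical bandgap: {e_gap:.2f} eV")  # low-3-eV range for this step
```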
Normalized reflectance (a) and corresponding Tauc plots (b) for all the samples (inset: NH4H2(PO4)2 M ratio-dependent variation of optical bandgaps of ZnO nanorods.). c Normalized PL spectra of Al-doped ZnO seed layer, undoped ZnO nanorods, and phosphorus-doped ZnO nanorods. d The PL peak positions of NBE emission as a function of NH4H2(PO4)2 M ratio. e The magnified NIR emission from the Al-doped ZnO seed layer. f The DLE emission peaks of undoped and doped ZnO nanorods samples
Figure 4c shows the normalized PL spectra of the undoped and phosphorus-doped ZnO nanorods as well as of the Al-doped ZnO seed layer. All the spectra consist of two prominent peaks, one in the UV region and the other located in a region covering the visible and NIR regimes. The first peak, in the UV region, is related to the near-band-edge emission (NBE), and the other peak/hump is related to the deep level emission (DLE) in the ZnO nanorods. We note here that the origin of the deep level emission in ZnO is controversial and is expected to arise from various kinds of defects and/or vacancies [34–36]. Therefore, the peaks were deconvoluted carefully by considering the asymmetry in these spectra, as shown in Fig. 4c, which provides an insight into the origin of the observed emissions. The deconvoluted peaks correspond to the UV, violet, yellow, red, and NIR emissions. The UV emission (P1) at ~ 379 nm in the undoped ZnO sample corresponds to its bandgap (as discussed above). This emission represents the characteristic feature of ZnO, which arises due to free excitonic transitions [14]. It is noteworthy that the bandgap obtained by PL is 10 meV smaller than that from the Tauc plot (Fig. 4b). For example, the bandgap of the undoped ZnO nanorods from PL is 3.27 eV, corresponding to the 379-nm emission, whereas that from the Tauc plot is 3.28 eV. This is presumably due to the Stokes shift [48]. As the doping concentration increases from 0 to 1%, this emission undergoes a bathochromic shift from 379 to 384 nm (as shown in Fig. 4d). According to previous reports, phosphorus doping induces an emission at ~ 384 nm due to donor-acceptor pair (DAP) transitions [14, 25]. Therefore, the red shift in the present case can be attributed to phosphorus-induced DAP emission in the ZnO nanorods [8, 14]. It is known that the diameter of the nanorods also affects the emission wavelength, through the dependence of the quasi-Fermi level on the surface-to-volume ratio, and that the shift becomes severe once the diameter exceeds 150 nm [44]. However, the largest diameter of the nanorods investigated here is around 150 nm and the rest are below 150 nm; thus, we rule out the effect of the diameter change. The violet emission (P2), observed at ~ 389 nm in the undoped ZnO nanorods, is due to Zn interstitials; this emission also undergoes a red shift, from 389 to 408 nm, after the doping [37]. The observed yellow emission (P3), within the wavelength range of 574–587 nm, is due to the presence of interstitial oxygen atoms [38, 39]. The presence of excess oxygen or zinc vacancies is responsible for the observed red emission (P4) [40, 41], which covers the wavelength range of 678–729 nm (as shown in Fig. 4c). It may be observed that the full width at half maximum (FWHM) of the yellow and red emissions is much larger than that of the other emissions. We note here that the deconvolution was based solely on the observed asymmetry of the peaks, and these two peaks might in fact consist of one or more subpeaks; therefore, one cannot exclude the possibility of green and orange emissions existing within the aforementioned yellow and red emissions, respectively. On the other hand, the emission (P5) in the NIR region showed no significant change in either peak position or FWHM as a function of doping; its variation lies within the error bars (not shown here).
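The multi-peak deconvolution described above can be sketched as a sum-of-Gaussians fit. The following minimal Python example fits five Gaussian components corresponding to the UV (P1), violet (P2), yellow (P3), red (P4), and NIR (P5) emissions; the spectrum and initial guesses are placeholders built near the reported peak positions, not the data of Fig. 4c.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(x, *params):
    """Sum of Gaussians; params = (amp, center, sigma) per component."""
    y = np.zeros_like(x)
    for amp, cen, sig in zip(params[0::3], params[1::3], params[2::3]):
        y += amp * np.exp(-0.5 * ((x - cen) / sig) ** 2)
    return y

wl = np.linspace(350, 900, 1101)  # wavelength axis, nm
# Initial guesses near the reported peak positions (P1..P5).
p0 = [1.0, 379, 5,    # P1: UV / NBE
      0.5, 389, 8,    # P2: violet (Zn interstitials)
      0.3, 580, 40,   # P3: yellow (interstitial oxygen)
      0.3, 700, 40,   # P4: red (excess oxygen / Zn vacancies)
      0.2, 850, 30]   # P5: NIR (seed layer)

spectrum = multi_gauss(wl, *p0) + np.random.normal(0, 0.01, wl.size)  # placeholder
popt, _ = curve_fit(multi_gauss, wl, spectrum, p0=p0)
for i in range(5):
    amp, cen, sig = popt[3 * i:3 * i + 3]
    print(f"P{i + 1}: center = {cen:.1f} nm, FWHM = {2.3548 * sig:.1f} nm")
```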
We note here that the only common constant factor in all these samples is the seed layer, which is an Al-doped ZnO film in the present case. Moreover, the PL spectrum of the seed layer alone (Fig. 4c, e) does confirm the NIR emission as expected, which may be noticed in Fig. 4e. Furthermore, the PL spectrum of the seed layer shows another emission at 425 nm (Fig. 4c), which is the characteristic NBE emission of Al-doped ZnO [42]. However, the reason for the NIR emission from Al-doped ZnO thin films remains to be understood. It is to be noted that the peak positions of the deep level emissions do not undergo any significant change as a function of doping concentration, whereas the NBE emission does, as shown in Fig. 4f. This persistent peak wavelength, regardless of the NH4H2(PO4)2 M ratio, can be advantageous in designing visible light-emitting devices that utilize the DLE emission. Consider a simple visible light-emitting device composed of phosphorus-doped p-type ZnO nanorods on an n-type substrate, i.e., a p-n junction. In that case, the phosphorus-doped p-type ZnO nanorods must serve not only as the light-emitting medium but also as the electrical carrier-injection medium, and an efficient carrier-injection medium requires, needless to say, heavily doped nanorods. Now assume, in addition, that the DLE emission wavelength of phosphorus-doped ZnO nanorods depended on the phosphorus and/or carrier concentration. The emission would then be pinned to the wavelength of highly phosphorus-doped ZnO nanorods, because the carrier concentration must be kept as high as possible for efficient carrier injection. That wavelength might well not match the target emission wavelength expected from the device, and the device design would fail. In reality, however, the visible DLE emission wavelength of phosphorus-doped ZnO nanorods does not change with carrier concentration, as shown in Fig. 4f. How, then, can the emission wavelength be tuned? Simimol et al. [43] and other reports indicate that annealing ZnO nanorods changes the emission wavelength and hence can serve the purpose of tuning the emission spectrum. In that case, the persistence of the DLE emission wavelength with respect to carrier concentration makes light-emitting device design rather straightforward: one parameter (annealing) controls the emission wavelength, and another (the phosphorus concentration, or NH4H2(PO4)2 M ratio) controls the electrical carrier injection, separately. Such an approach can make phosphorus-doped ZnO nanorods a platform for fabricating made-to-order light-emitting devices in the visible wavelength range by the cheapest route, the hydrothermal process. In addition, the observed emissions span the most important regimes of the electromagnetic spectrum, including the UV and visible ranges, and would therefore be interesting for a broad range of applications from biological to optoelectronic devices.
It is noteworthy, however, that persistent p-type doping in ZnO nanorods as well as thin films remains challenging for practical device applications. In other words, although a p-type conductivity of phosphorus-doped ZnO lasting 16 months is quite persistent [14], it is still not comparable to that of other inorganic crystalline semiconductors such as GaN (gallium nitride), GaAs (gallium arsenide), and InP (indium phosphide). The unstable p-type conductivity originates from intrinsic native defects [46, 47], and further study should address the precise control of these defects.
In summary, p-type conductivity in ZnO nanorods has been effectively accomplished by doping them with phosphorus impurities. The successful doping of phosphorus into the ZnO nanorods enhances their length and diameter. An unusual variation of the carrier concentration, mobility, and Hall coefficient as functions of the NH4H2(PO4)2 M ratio (i.e., the phosphorus concentration) was noticed, which was explained on the basis of the amphoteric nature of phosphorus. These hydrothermally synthesized ZnO nanorods grown on an Al-doped ZnO seed layer were found to show PL emission in three different regimes: UV, visible, and NIR. The observed emissions in the UV, violet, yellow, red, and NIR regimes were attributed to the NBE emission, zinc interstitials, oxygen interstitials, excess oxygen (or zinc vacancies), and the characteristic feature of the Al-doped ZnO seed layer, respectively. Interestingly, the doping of phosphorus into these nanorods led to a change in the UV emission but did not affect the visible and NIR emissions. Such unusual effects of phosphorus incorporation in ZnO can be suitable for various optoelectronic and biological applications.
Vantari Siva and Kwangwook Park contributed equally to this work.
DLE: Deep level emission
HMTA or C6H12N4: Hexamethylenetetramine
KM method: Kubelka-Munk method
NBE: Near-band-edge emission
Nd-YAG: Neodymium-doped yttrium aluminum garnet
NH4H2(PO4)2: Ammonium dihydrogen phosphate
NIR: Near-infrared
PL: Photoluminescence
PO: Phosphorus substituting on oxygen sites in ZnO
PZn: Phosphorus substituting on zinc sites in ZnO
RF: Radio frequency
SEM: Scanning electron microscopy
UV: Ultraviolet
XRD: X-ray diffraction
Zn(NO3)2: Zinc nitrate hexahydrate
ZnO: Zinc oxide
This work was supported by an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (No.2017000709), the Creative Materials Discovery Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017M3D1A1039288), and the National Research Foundation (NRF) of Korea (NRF-2018R1A4A1025623).
All data supporting the conclusions of this article are included within this article.
VS, MSK, YJK, GJL, and MJK carried out the hydrothermal growth and measurements of ZnO nanorods. VS and KP performed the analysis of the measurement data and drafted the manuscript. YMS supervised the whole work and finalized the manuscript. All authors read and approved the final manuscript.
School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
Korea Advanced Nano Fab Center, Suwon, Gyeonggi-do, 16229, Republic of Korea
Djurišić AB, Ng AMC, Chen XY (2010) ZnO nanostructures for optoelectronics: material properties and device applications. Prog Quantum Electron 34:191–259
Chu S, Wang G, Zhou W, Lin Y, Chernyak L, Zhao J, Kong J, Li L, Ren J, Liu J (2011) Electrically pumped waveguide lasing from ZnO nanowires. Nat Nanotech 6:506–510
Fujihara S, Ogawa Y, Kasai A (2004) Tunable visible photoluminescence from ZnO thin films through Mg doping and annealing. Chem Mater 16:2965–2968
Peng SM, Su YK, Ji LW, Wu CZ, Cheng WB, Chao WC (2010) ZnO nanobridge array UV photodetectors. J Phys Chem C 114:3204–3208
Ma Y, Gao Q, Wu GG, Li WC, Gao FB, Yin JZ, Zhang BL, Du GT (2013) Growth and conduction mechanism of as-doped p-type ZnO thin films deposited by MOCVD. Mat Res Bull 48:1239–1243
Das S, Patra S, Kar JP, Roy A, Ray A, Myoung JM (2015) Origin of p-type conductivity for N-doped ZnO nanostructure synthesized by MOCVD method. Mater Lett 161:701–704
Özgür Ü, Alivov YI, Liu C, Teke A, Reshchikov MA, Doğan S, Avrutin V, Cho SJ, Morkoç H (2005) A comprehensive review of ZnO materials and devices. J Appl Phys 98:041301
Hsu CL, Chang SJ, Lin YR, Tsai SY, Chen IC (2005) Vertically well aligned P-doped ZnO nanowires synthesized on ZnO-Ga/glass templates. Chem Commun 0:3571–3573
Leung YH, He ZB, Luo LB, Tsang CHA, Wong NB, Zhang WJ, Lee ST (2010) ZnO nanowires array p-n homojunction and its application as a visible-blind ultraviolet photodetector. Appl Phys Lett 96:053102
Hsu CL, Lin YH, Wang LK, Hsueh TJ, Chang SP, Chang SJ (2017) Tunable UV- and visible-light photoresponse based on p-ZnO nanostructures/n-ZnO/glass peppered with Au nanoparticles. ACS Appl Mater Interfaces 9:14935–14944
Rajagopalan P, Singh V, Palani IA (2018) Enhancement of ZnO-based flexible nano generators via a sol-gel technique for sensing and energy harvesting applications. Nanotechnology 29:105406
Fang X, Li J, Zhao D, Shen D, Li B, Wang X (2009) Phosphorus-doped p-type ZnO nanorods and ZnO nanorod p-n homojunction LED fabricated by hydrothermal method. J Phys Chem C 113:21208–21212
Hwang SH, Moon KJ, Lee TI, Lee W, Myoung JM (2014) Controlling phosphorus doping concentration in ZnO nanorods by low temperature hydrothermal method. Mat Chem Phys 143:600–604
Allenic A, Guo W, Chen YB, Katz MB, Zhao GY, Che Y, Hu Z, Liu B, Zhang SB, Pan X (2007) Amphoteric phosphorus doping for stable p-type ZnO. Adv Mater 19:3333–3337
Kim KK, Kim HS, Hwang DK, Lim JH, Park SJ (2003) Realization of p-type ZnO thin films via phosphorus doping and thermal activation of the dopant. Appl Phys Lett 83:63–65
Wang D, Jiao S, Zhang S, Li H, Gao S, Wang J, Ren J, Yu QJ, Zhang Y (2017) Growth mechanism and optical properties of phosphorus-doped ZnO nanorods synthesized by hydrothermal method. Phys Status Solidi A 214:1600959
Yang J, Lang J, Li C, Yang L, Han Q, Zhang Y, Wang D, Gao M, Liu X (2008) Effects of substrate on morphologies and photoluminescence properties of ZnO nanorods. Appl Surf Sci 255:2500–2503
Zhao D, Zhang X, Dong H, Yang L, Zeng Q, Li J, Cai L, Zhang X, Luan P, Zhang Q, Tu M, Wang S, Zhou W, Xie S (2013) Surface modification effect on photoluminescence of individual ZnO nanorods with different diameters. Nanoscale 5:4443–4448
Liu WJ, Meng XQ, Zheng Y, Xia W (2010) Synthesis and photoluminescence properties of ZnO nanorods and nanotubes. Appl Surf Sci 257:677–679
Amiruddin R, Kumar MCS (2014) Enhanced visible emission from vertically aligned ZnO nanostructures by aqueous chemical growth process. J Lumin 155:149–155
Wu Y, Li J, Ding H, Gao Z, Wu Y, Pan N, Wang X (2015) Negative thermal quenching of photoluminescence in annealed ZnO-Al2O3 core-shell nanorods. Phys Chem Chem Phys 17:5360
Zhu W, Kitamura S, Boffelli M, Marin E, Gaspera ED, Sturaro M, Martucci A, Pezzotti G (2016) Analysis of defect luminescence in Ga-doped ZnO nanoparticles. Phys Chem Chem Phys 18:9586
Bai SN, Tseng TY (2006) Effect of alumina doping on structural, electrical, and optical properties of sputtered ZnO thin films. Thin Sol Films 515:872–875
Qiu Z, Gong H, Yang H, Zhang Z, Han J, Cao B, Nakamura D, Okada T (2015) Phosphorus concentration dependent microstructure and optical property of ZnO nanowires grown by high-pressure pulsed laser deposition. J Phys Chem C 119:4371–4378
Panigrahy B, Aslam M, Misra DS, Ghosh M, Bahadur D (2010) Defect-related emissions and magnetization properties of ZnO nanorods. Adv Funct Mater 20:1161–1165
Biroju RK, Giri PK (2017) Strong visible and near infrared photoluminescence from ZnO nanorods/nanowires grown on single layer graphene studied using sub-band gap excitation. J Appl Phys 122:044302
Xiu FX, Yang Z, Mandalapu LJ, Liu JL, Beyermann WP (2006) P-type ZnO films with solid-source phosphorus doping by molecular-beam epitaxy. Appl Phys Lett 88:052106
Lee WJ, Kang J, Chang J (2006) Defect properties and p-type doping efficiency in phosphorus-doped ZnO. Phys Rev B 73:024117
Lin CC, Chen SY, Cheng SY (2004) Physical characteristics and photoluminescence properties of phosphorous-implanted ZnO thin films. Appl Surf Sci 238:405–409
Yao B, Xie YP, Cong CX, Zhao HJ, Sui YR, Wang T, He Q (2008) Mechanism of p-type conductivity for phosphorus-doped ZnO thin film. J Phys D Appl Phys 42:015407
Jang S, Chen JJ, Kang BS, Ren F, Norton DP, Pearton SJ, Lopata J, Hobson WS (2005) Formation of p-n homojunctions in n-ZnO bulk single crystals by diffusion from a Zn3P2 source. Appl Phys Lett 87:222113
Gao J, Zhao Q, Sun Y, Li G, Zhang J, Yu D (2011) A novel way for synthesizing phosphorus-doped ZnO nanowires. Nanoscale Res Lett 6:45
Kittel C (2004) Introduction to solid state physics, 8th edn. Wiley. https://www.wiley.com/en-us/Introduction+to+Solid+State+Physics%2C+8th+Edition-p-9780471415268
Kubelka P, Munk F (1931) Ein Beitrag zur Optik der Farbanstriche. Z Tech Phys 12:593
Sarkar S, Basak D (2014) Defect mediated highly enhanced ultraviolet emission in P-doped ZnO nanorods. RSC Adv 4:39095
Panigrahy B, Bahadur D (2012) P-type phosphorus doped ZnO nanostructures: an electrical, optical, and magnetic properties study. RSC Adv 2:6222–6227
Cao BQ, Lorenz M, Rahm A, Wenckstern HV, Czekalla C, Lenzner J, Benndorf G, Grundmann M (2007) Phosphorus acceptor doped ZnO nanowires prepared by pulsed-laser deposition. Nanotechnology 18:455707
Lee HB, Ginting RT, Tan ST, Tan CH, Alshanableh A, Oleiwi HF, Yap CC, Jumali MHH, Yahaya M (2016) Controlled defects of fluorine-incorporated ZnO nanorods for photovoltaic enhancement. Sci Rep 6:32645
Wu XL, Siu GG, Fu CL, Ong HC (2001) Photoluminescence and cathodoluminescence studies of stoichiometric and oxygen-deficient ZnO films. Appl Phys Lett 78:2285–2287
Zhong H, Wang J, Pan M, Wang S, Li Z, Xu W, Chen X, Lu W (2006) Preparation and photoluminescence of ZnO nanorods. Chem Phys 97:390–393
Djurišić AB, Leung YH, Tam KH, Hsu YF, Ding L, Ge WK, Zhong YC, Wong KS, Chan WK, Tam HL, Cheah KW, Kwok WM, Phillips DL (2007) Defect emissions in ZnO nanostructures. Nanotechnology 18:095702. https://doi.org/10.1088/0957-4484/18/9/095702
Letailleur AA, Grachev SY, Barthel E, Søndergård E, Nomenyo K, Couteau C, Mc Murtry S, Lérondel G, Charlet E, Peter E (2011) High efficiency white luminescence of alumina doped ZnO. J Lumin 131:2646–2651
Simimol A, Manikandanath NT, Anappara AA, Chowdhury P, Barshilia HC (2014) Tuning of deep level emission in highly oriented electrodeposited ZnO nanorods by post growth annealing treatments. J Appl Phys 116:074309
Wang T, Zhang X, Wen J, Chen T, Ma X, Gao H (2014) Diameter-dependent luminescence properties of ZnO wires by mapping. J Phys D Appl Phys 47:175304
Zhao C, Ebaid M, Zhang H, Priante D, Janjua B, Zhang D, Wei N, Alhamoud A, Shakfa M, Ng T, Ooi B (2018) Quantified hole concentration in AlGaN nanowires for high-performance ultraviolet emitters. Nanoscale 10:15980
Park C, Zhang S, Wei S (2002) Origin of p-type doping difficulty in ZnO: the impurity perspective. Phys Rev B 66:073202
Pearton S, Norton D, Ip K, Heo Y (2004) Recent advances in processing of ZnO. J Vac Sci Technol B 22:932
Shan F, Liu G, Lee W, Shin B (2006) Stokes shift, blue shift and red shift of ZnO-based thin films deposited by pulsed-laser deposition. J Cryst Growth 2:328–333
Nakamura Y, Sano J, Matsushita T, Kiyota Y, Udagawa Y, Kunugita H, Ema K, Kondo T (2015) Exciton and bandgap energies of hybrid perovskite CH3NH3PbI3. International conference on solid state devices and materials, Sapporo, pp 524–525. https://doi.org/10.7567/SSDM.2015.PS-15-1
Tagged: symmetric matrix
Find All Symmetric Matrices satisfying the Equation
Find all $2\times 2$ symmetric matrices $A$ satisfying $A\begin{bmatrix} 1 \\ \end{bmatrix} = \begin{bmatrix} \end{bmatrix}$? Express your solution using free variable(s).
If a Symmetric Matrix is in Reduced Row Echelon Form, then Is it Diagonal?
Recall that a matrix $A$ is symmetric if $A^\trans = A$, where $A^\trans$ is the transpose of $A$.
Is it true that if $A$ is a symmetric matrix and in reduced row echelon form, then $A$ is diagonal? If so, prove it.
Otherwise, provide a counterexample.
Prove that $\mathbf{v} \mathbf{v}^\trans$ is a Symmetric Matrix for any Vector $\mathbf{v}$
Let $\mathbf{v}$ be an $n \times 1$ column vector.
Prove that $\mathbf{v} \mathbf{v}^\trans$ is a symmetric matrix.
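A one-line verification (a standard argument, sketched here rather than the posted solution): using $(XY)^{\trans}=Y^{\trans}X^{\trans}$,
\[(\mathbf{v} \mathbf{v}^{\trans})^{\trans}=(\mathbf{v}^{\trans})^{\trans}\mathbf{v}^{\trans}=\mathbf{v} \mathbf{v}^{\trans}.\]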
Eigenvalues of $2\times 2$ Symmetric Matrices are Real by Considering Characteristic Polynomials
Let $A$ be a $2\times 2$ real symmetric matrix.
Prove that all the eigenvalues of $A$ are real numbers by considering the characteristic polynomial of $A$.
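One standard computation (a sketch, not necessarily the posted solution): write $A=\begin{bmatrix} a & b\\ b & c \end{bmatrix}$ with $a, b, c \in \R$. Then
\[\det(A-\lambda I)=\lambda^2-(a+c)\lambda+(ac-b^2),\] and the discriminant of this quadratic is
\[(a+c)^2-4(ac-b^2)=(a-c)^2+4b^2\geq 0,\] so both roots, hence both eigenvalues of $A$, are real.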
The Inverse Matrix of a Symmetric Matrix whose Diagonal Entries are All Positive
Let $A$ be a real symmetric matrix whose diagonal entries are all positive real numbers.
Is it true that all of the diagonal entries of the inverse matrix $A^{-1}$ are also positive?
If so, prove it. Otherwise, give a counterexample.
The set of $2\times 2$ Symmetric Matrices is a Subspace
Let $V$ be the vector space over $\R$ of all real $2\times 2$ matrices.
Let $W$ be the subset of $V$ consisting of all symmetric matrices.
(a) Prove that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Determine the dimension of $W$.
Linear Algebra Midterm 1 at the Ohio State University (3/3)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).
The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9.
Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix} -3 & -4\\ 8 & 9 \end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix} -1 \\ \end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9.
Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
(e) The vectors
\[\mathbf{v}_1=\begin{bmatrix} \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} \end{bmatrix}\] are linearly independent.
7 Problems on Skew-Symmetric Matrices
Let $A$ and $B$ be $n\times n$ skew-symmetric matrices. Namely $A^{\trans}=-A$ and $B^{\trans}=-B$.
(a) Prove that $A+B$ is skew-symmetric.
(b) Prove that $cA$ is skew-symmetric for any scalar $c$.
(c) Let $P$ be an $m\times n$ matrix. Prove that $P^{\trans}AP$ is skew-symmetric.
(d) Suppose that $A$ is real skew-symmetric. Prove that $iA$ is an Hermitian matrix.
(e) Prove that if $AB=-BA$, then $AB$ is a skew-symmetric matrix.
(f) Let $\mathbf{v}$ be an $n$-dimensional column vector. Prove that $\mathbf{v}^{\trans}A\mathbf{v}=0$. (A sketch is given after this list.)
(g) Suppose that $A$ is a real skew-symmetric matrix and $A^2\mathbf{v}=\mathbf{0}$ for some vector $\mathbf{v}\in \R^n$. Then prove that $A\mathbf{v}=\mathbf{0}$.
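A sketch for part (f) (a standard argument): the scalar $\mathbf{v}^{\trans}A\mathbf{v}$ equals its own transpose, so
\[\mathbf{v}^{\trans}A\mathbf{v}=(\mathbf{v}^{\trans}A\mathbf{v})^{\trans}=\mathbf{v}^{\trans}A^{\trans}\mathbf{v}=-\mathbf{v}^{\trans}A\mathbf{v},\] hence $2\mathbf{v}^{\trans}A\mathbf{v}=0$ and $\mathbf{v}^{\trans}A\mathbf{v}=0$.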
Construction of a Symmetric Matrix whose Inverse Matrix is Itself
Let $\mathbf{v}$ be a nonzero vector in $\R^n$.
Then the dot product $\mathbf{v}\cdot \mathbf{v}=\mathbf{v}^{\trans}\mathbf{v}\neq 0$.
Set $a:=\frac{2}{\mathbf{v}^{\trans}\mathbf{v}}$ and define the $n\times n$ matrix $A$ by
\[A=I-a\mathbf{v}\mathbf{v}^{\trans},\] where $I$ is the $n\times n$ identity matrix.
Prove that $A$ is a symmetric matrix and $AA=I$.
Conclude that the inverse matrix is $A^{-1}=A$.
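The key computation is short (a standard verification): symmetry follows from $(\mathbf{v}\mathbf{v}^{\trans})^{\trans}=\mathbf{v}\mathbf{v}^{\trans}$, and since $a^2(\mathbf{v}^{\trans}\mathbf{v})=\frac{4}{\mathbf{v}^{\trans}\mathbf{v}}=2a$,
\[AA=I-2a\mathbf{v}\mathbf{v}^{\trans}+a^2\mathbf{v}(\mathbf{v}^{\trans}\mathbf{v})\mathbf{v}^{\trans}=I-2a\mathbf{v}\mathbf{v}^{\trans}+2a\mathbf{v}\mathbf{v}^{\trans}=I.\]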
A Symmetric Positive Definite Matrix and An Inner Product on a Vector Space
(a) Suppose that $A$ is an $n\times n$ real symmetric positive definite matrix.
Prove that
\[\langle \mathbf{x}, \mathbf{y}\rangle:=\mathbf{x}^{\trans}A\mathbf{y}\] defines an inner product on the vector space $\R^n$.
(b) Let $A$ be an $n\times n$ real matrix. Suppose that
\[\langle \mathbf{x}, \mathbf{y}\rangle:=\mathbf{x}^{\trans}A\mathbf{y}\] defines an inner product on the vector space $\R^n$.
Prove that $A$ is symmetric and positive definite.
A Positive Definite Matrix Has a Unique Positive Definite Square Root
Prove that a positive definite matrix has a unique positive definite square root.
A Matrix Equation of a Symmetric Matrix and the Limit of its Solution
Let $A$ be a real symmetric $n\times n$ matrix with $0$ as a simple eigenvalue (that is, the algebraic multiplicity of the eigenvalue $0$ is $1$), and let us fix a vector $\mathbf{v}\in \R^n$.
(a) Prove that for sufficiently small positive real $\epsilon$, the equation
\[A\mathbf{x}+\epsilon\mathbf{x}=\mathbf{v}\] has a unique solution $\mathbf{x}=\mathbf{x}(\epsilon) \in \R^n$.
(b) Evaluate
\[\lim_{\epsilon \to 0^+} \epsilon \mathbf{x}(\epsilon)\] in terms of $\mathbf{v}$, the eigenvectors of $A$, and the inner product $\langle\, ,\,\rangle$ on $\R^n$.
(University of California, Berkeley, Linear Algebra Qualifying Exam)
Inequality about Eigenvalue of a Real Symmetric Matrix
Let $A$ be an $n\times n$ real symmetric matrix.
Prove that there exists an eigenvalue $\lambda$ of $A$ such that for any vector $\mathbf{v}\in \R^n$, we have the inequality
\[\mathbf{v}\cdot A\mathbf{v} \leq \lambda \|\mathbf{v}\|^2.\]
If $A^{\trans}A=A$, then $A$ is a Symmetric Idempotent Matrix
Let $A$ be a square matrix such that
\[A^{\trans}A=A,\] where $A^{\trans}$ is the transpose matrix of $A$.
Prove that $A$ is idempotent, that is, $A^2=A$. Also, prove that $A$ is a symmetric matrix.
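A sketch (a standard argument): transposing both sides of $A^{\trans}A=A$ gives
\[A^{\trans}=(A^{\trans}A)^{\trans}=A^{\trans}A=A,\] so $A$ is symmetric; then $A^2=AA=A^{\trans}A=A$, so $A$ is idempotent.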
Express a Hermitian Matrix as a Sum of Real Symmetric Matrix and a Real Skew-Symmetric Matrix
Recall that a complex matrix is called Hermitian if $A^*=A$, where $A^*=\bar{A}^{\trans}$.
Prove that every Hermitian matrix $A$ can be written as the sum
\[A=B+iC,\] where $B$ is a real symmetric matrix and $C$ is a real skew-symmetric matrix.
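One natural construction (a standard choice, sketched under the stated hypothesis $\bar{A}^{\trans}=A$): set
\[B=\frac{1}{2}(A+\bar{A}), \qquad C=\frac{1}{2i}(A-\bar{A}).\] Then $B+iC=A$, both $B$ and $C$ are real, and since $A^{\trans}=\bar{A}$ one checks directly that $B^{\trans}=B$ and $C^{\trans}=-C$.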
Inverse Matrix of Positive-Definite Symmetric Matrix is Positive-Definite
Suppose $A$ is a positive definite symmetric $n\times n$ matrix.
(a) Prove that $A$ is invertible.
(b) Prove that $A^{-1}$ is symmetric.
(c) Prove that $A^{-1}$ is positive-definite.
(MIT, Linear Algebra Exam Problem)
Positive definite Real Symmetric Matrix and its Eigenvalues
A real symmetric $n \times n$ matrix $A$ is called positive definite if
\[\mathbf{x}^{\trans}A\mathbf{x}>0\] for all nonzero vectors $\mathbf{x}$ in $\R^n$.
(a) Prove that the eigenvalues of a real symmetric positive-definite matrix $A$ are all positive. (A sketch is given below.)
(b) Prove that if eigenvalues of a real symmetric matrix $A$ are all positive, then $A$ is positive-definite.
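A sketch for part (a) (a standard argument): if $A\mathbf{x}=\lambda \mathbf{x}$ with $\mathbf{x}\neq \mathbf{0}$, then
\[0<\mathbf{x}^{\trans}A\mathbf{x}=\lambda\,\mathbf{x}^{\trans}\mathbf{x}=\lambda\|\mathbf{x}\|^2,\] and $\|\mathbf{x}\|^2>0$ forces $\lambda>0$. For part (b), expand an arbitrary nonzero $\mathbf{x}$ in an orthonormal eigenbasis of $A$ and use the positivity of all the eigenvalues.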
Quiz 13 (Part 1) Diagonalize a Matrix
Let \[A=\begin{bmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{bmatrix}.\] Determine whether the matrix $A$ is diagonalizable. If it is diagonalizable, then diagonalize $A$.
That is, find a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
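A quick route for this particular matrix (a sketch, not necessarily the posted solution): note that $A=3I-J$, where $J$ is the $3\times 3$ all-ones matrix. The eigenvalues of $J$ are $3, 0, 0$, so the eigenvalues of $A$ are $0, 3, 3$; being real symmetric, $A$ is diagonalizable with $D=\operatorname{diag}(0,3,3)$, and one may take the columns of $S$ to be $(1,1,1)^{\trans}$ together with any two linearly independent vectors orthogonal to it, e.g., $(1,-1,0)^{\trans}$ and $(1,0,-1)^{\trans}$.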
Diagonalizable by an Orthogonal Matrix Implies a Symmetric Matrix
Let $A$ be an $n\times n$ matrix with real number entries.
Show that if $A$ is diagonalizable by an orthogonal matrix, then $A$ is a symmetric matrix.
Eigenvalues of a Hermitian Matrix are Real Numbers
Show that eigenvalues of a Hermitian matrix $A$ are real numbers.
(The Ohio State University Linear Algebra Exam Problem)
Roles of Conceptus Secretory Proteins in Establishment and Maintenance of Pregnancy in Ruminants
Bazer, Fuller W.;Song, Gwon-Hwa;Thatcher, William W. 1
https://doi.org/10.5713/ajas.2011.r.08
Reproduction in ruminant species is a highly complex biological process requiring a dialogue between the developing conceptus (embryo-fetus and associated placental membranes) and the maternal uterus, which must be established during the peri-implantation period for pregnancy recognition signaling and for regulation of gene expression by uterine epithelial and stromal cells. The uterus provides a microenvironment in which molecules secreted by the uterine epithelia and transported into the uterine lumen constitute histotroph, also known as the secretome, which is required for growth and development of the conceptus and for receptivity of the uterus to implantation by the elongating conceptus. Pregnancy recognition signaling is required to sustain the functional lifespan of the corpora lutea for production of progesterone, which is essential for the uterine functions supportive of the implantation and placentation required for successful outcomes of pregnancy. It is within the peri-implantation period that most embryonic deaths occur in ruminants, due to deficiencies attributed to uterine functions or to failure of the conceptus to develop appropriately, signal pregnancy recognition and/or undergo implantation and placentation. The endocrine status of the pregnant ruminant and her nutritional status are critical for successful establishment and maintenance of pregnancy. The challenge is to understand the complexity of the key mechanisms that characterize successful reproduction in humans and animals and to use that knowledge to enhance fertility and reproductive health of ruminant species in livestock enterprises.
Detection of QTL for Carcass Quality on Chromosome 6 by Exploiting Linkage and Linkage Disequilibrium in Hanwoo
Lee, J.H.;Li, Y.;Kim, J.J. 17
https://doi.org/10.5713/ajas.2011.11337
The purpose of this study was to improve the mapping power and resolution for the QTL influencing carcass quality in Hanwoo, which were previously detected on bovine chromosome (BTA) 6. A sample of 427 steers was chosen, the progeny of 45 Korean proven sires in the Hanwoo Improvement Center, Seosan, Korea. The samples were genotyped with the set of 2,535 SNPs on BTA6 that are embedded in the Illumina bovine 50 k chip. A linkage disequilibrium variance component mapping (LDVCM) method, which exploits both linkage between sires and their steers and population-wide linkage disequilibrium, was applied to detect QTL for four carcass quality traits. Fifteen QTL were detected at the 0.1% comparison-wise level, of which five, three, five, and two were associated with carcass weight (CWT), backfat thickness (BFT), longissimus dorsi muscle area (LMA), and marbling score (Marb), respectively. This number of QTL is greater than in our previous results, in which twelve QTL for carcass quality were detected on BTA6 in the same population by applying other linkage disequilibrium mapping approaches. One QTL for LMA was detected in the distal region (110,285,672 to 110,633,096 bp) with the most significant evidence for linkage (p<$10^{-5}$). Another QTL, detected in the proximal region (33,596,515 to 33,897,434 bp), was pleiotropic, i.e., influencing CWT, BFT, and LMA. Our results suggest that the LDVCM is a good alternative method for QTL fine-mapping in the detection and characterization of QTL.
Association of Length of Pregnancy with Other Reproductive Traits in Dairy Cattle
Nogalski, Zenon;Piwczynski, Dariusz 22
The experiment involved observations of 2,514 Holstein-Friesian cows to determine the effects of environmental factors (cow's age, calving season, weight and sex of calves, housing system) and genetic factors on gestation length in dairy cattle and the correlation between gestation length and other reproductive traits (calving ease, stillbirth rates and placental expulsion). Genetic parameters were estimated based on the sires of calved cows (indirect effect) and the sires of live-born calves (direct effect). The following factors were found to contribute to prolonged gestation: increasing cow's age, male fetuses and growing fetus weight. Optimal gestation length was determined in the range of 275-277 days based on calving ease and stillbirth rates. The heritability of gestation length was estimated at 0.201-0.210 by the direct effect and 0.055-0.073 by the indirect effect. The resulting genetic correlations suggest that the efforts to optimize (prolong) gestation length could exert an adverse influence on the breeding value of bulls by increasing perinatal mortality and calving difficulty. The standard errors of the investigated parameters were relatively high, suggesting that any attempts to modify gestation length for the purpose of improving calving ease and reducing stillbirth rates should be introduced with great caution.
Evaluation of Single Nucleotide Polymorphisms (SNPs) Genotyped by the Illumina Bovine SNP50K in Cattle Focusing on Hanwoo Breed
Dadi, Hailu;Kim, Jong-Joo;Yoon, Du-Hak;Kim, Kwan-Suk 28
In the present study, we evaluated the informativeness of SNPs genotyped by the Illumina Bovine SNP50K assay in different cattle breeds. To investigate this on a genome-wide scale, we considered 52,678 SNPs spanning the whole autosomal and X chromosomes in cattle. Our study sample consists of six different cattle breeds. Across the breeds, approximately 72% and 6% of SNPs were found to be polymorphic and fixed or close to fixation in all the breeds, respectively. The variations in the average minor allele frequency (MAF) were significantly different between the breeds studied. The level of average MAF observed in Hanwoo was significantly lower than in the other breeds. The Hanwoo breed also displayed the lowest number of polymorphic SNPs across all the chromosomes. More importantly, this study indicated that the Bovine SNP50K assay will have reduced power for genome-wide association studies in Hanwoo as compared with other cattle breeds. Overall, the Bovine SNP50K assay described in this study offers a useful genotyping platform for mapping quantitative trait loci (QTLs) in these cattle breeds. The assay data represent a vast and generally untapped resource to assist the investigation of complex production traits and the development of marker-assisted selection programs.
Genotype and Allelic Frequencies of a Newly Identified Mutation Causing Blindness in Jordanian Awassi Sheep Flocks
Jawasreh, K.I.Z.;Ababneh, H.;Awawdeh, F.T.;Al-Massad, M.A.;Al-Majali, A.M. 33
A total of 423 blood samples were collected (during 2009 and 2010) from all the ram holdings at three major Jordanian governmental Awassi breeding stations (Al-Khanasry, Al-Mushairfa and Al-Fjaje) and two private flocks. All blood samples were screened for the presence of mutations in the CNGA3 gene (responsible for day blindness in Awassi sheep) using RFLP-PCR. The day blindness mutation was detected in all studied flocks. The overall allele and genotype frequencies of the day blindness mutation across all studied flocks were 0.088 and 17.49%, respectively. The genotype and allele frequencies were higher in the station flocks than in the farmer flocks (0.121 and 24.15% versus 0.012 and 2.32%, respectively). Al-Mushairfa and Al-Khanasry stations had the highest genotype and allele frequencies for the day blindness mutation (27.77%, 30.00% and 0.14, 0.171, respectively). The investigated farmer flocks had low percentages (0.03 and 5.88% at Al-Shoubak, and 0.005 and 1.05% at Al-Karak, for genotype and allele frequencies respectively) compared with the breeding stations. A ram-culling strategy was applied throughout the genotyping period in order to gradually eradicate this newly identified day blindness mutation from the Jordanian breeding stations, since they annually distribute a high percentage of improved rams to farmers' flocks.
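As a numerical illustration of how genotype (carrier) and allele frequencies of the kind reported above are computed from RFLP-PCR genotype counts at a biallelic locus, here is a minimal Python sketch; the counts are hypothetical, not the Awassi data.

```python
def mutation_frequencies(n_normal, n_carrier, n_affected):
    """Carrier genotype frequency (%) and mutant-allele frequency
    from genotype counts at a biallelic locus."""
    n_total = n_normal + n_carrier + n_affected
    carrier_freq = 100.0 * (n_carrier + n_affected) / n_total
    # Each carrier contributes one mutant allele, each affected two.
    allele_freq = (n_carrier + 2 * n_affected) / (2 * n_total)
    return carrier_freq, allele_freq

# Hypothetical counts (wild-type, heterozygous, homozygous mutant):
carrier_pct, q = mutation_frequencies(330, 66, 4)
print(f"carrier genotype frequency = {carrier_pct:.2f}%")
print(f"mutant allele frequency    = {q:.3f}")
```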
Melatonin Induced Changes in Specific Growth Rate, Gonadal Maturity, Lipid and Protein Production in Nile Tilapia Oreochromis niloticus (Linnaeus 1758)
Singh, Ruchi;Singh, A.K.;Tripathi, Madhu 37
We investigated the effect of melatonin (MLT) on the specific growth rate (SGR% $day^{-1}$), condition factor (k), gonado-somatic index (GSI), histological structure of the gonads, and serum as well as gonadal protein and lipid in Nile tilapia Oreochromis niloticus. MLT treatment at a dose of 25 ${\mu}g/L$ for three weeks reduced SGR% $day^{-1}$ ($0.9{\pm}0.04$) compared with the control ($1.23{\pm}0.026$). The GSI value was significantly (p<0.05) reduced to $1.77{\pm}0.253$ from the control value of $2.56{\pm}0.25$. The serum protein level increased from $9.33{\pm}2.90$ mg/ml (control) to $11.67{\pm}1.45$ mg/ml after MLT treatment, while serum triglycerides ($86.16{\pm}1.078$ mg/dl) and cholesterol ($126.66{\pm}0.88$ mg/dl) were depressed compared with the control values of $123.0{\pm}1.23$ mg/dl and $132.0{\pm}1.65$ mg/dl, respectively. The histological structure of the ovary showed small eggs of the early perinucleolus stage after MLT treatment, while the testicular structure of control and MLT-treated fish was more or less similar. It is concluded that exogenous melatonin suppressed SGR% $day^{-1}$, GSI, ovarian cellular activity, and protein and lipid biosynthesis in tilapia, suggesting that melatonin is useful in manipulating gonadal maturity in fishes.
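For clarity, the two growth indices quoted above can be computed as follows; this is a minimal Python sketch using their standard definitions (SGR from initial and final weights, GSI as the gonad-to-body-weight percentage), with placeholder inputs rather than the tilapia data reported above.

```python
import math

def specific_growth_rate(w_initial, w_final, days):
    """SGR (% per day) = 100 * (ln W_final - ln W_initial) / days."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

def gonado_somatic_index(gonad_weight, body_weight):
    """GSI (%) = 100 * gonad weight / body weight."""
    return 100.0 * gonad_weight / body_weight

# Placeholder values (grams, days):
print(f"SGR = {specific_growth_rate(20.0, 24.0, 21):.2f} % per day")
print(f"GSI = {gonado_somatic_index(0.5, 25.0):.2f} %")
```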
Regulation of S100G Expression in the Uterine Endometrium during Early Pregnancy in Pigs
Choi, Yo-Han;Seo, Hee-Won;Shim, Jang-Soo;Kim, Min-Goo;Ka, Hak-Hyun 44
Calcium ions play an important role in the establishment and maintenance of pregnancy, but the molecular and cellular regulatory mechanisms of calcium ion action in the uterine endometrium are not fully understood in pigs. Previously, we have shown that the calcium regulatory molecules transient receptor potential vanilloid type 6 (TRPV6) and calbindin-D9k (S100G) are expressed in the uterine endometrium during the estrous cycle and pregnancy in a pregnancy status- and stage-specific manner, and that estrogen of conceptus origin increases endometrial TRPV6 expression. However, the regulation of S100G expression in the uterine endometrium and the conceptus expression of S100G during early pregnancy had not been determined. Thus, we investigated the regulation of S100G expression by estrogen and interleukin-$1{\beta}$ (IL1B) in the uterine endometrium and conceptus expression of S100G during early pregnancy in pigs. We obtained uterine endometrial tissues on day (D) 12 of the estrous cycle and treated them with combinations of the steroid hormones estradiol-$17{\beta}$ ($E_2$) and progesterone ($P_4$) and increasing doses of IL1B. Real-time RT-PCR analysis showed that $E_2$ and IL1B increased S100G mRNA levels in the uterine endometrium, and conceptuses expressed S100G mRNA during early pregnancy, as determined by RT-PCR analysis. To determine whether endometrial expression of S100G mRNA during the implantation period was affected by the somatic cell nuclear transfer (SCNT) procedure, we compared S100G mRNA levels in the uterine endometrium from gilts with SCNT-derived conceptuses with those from gilts with conceptuses derived from natural mating on D12 of pregnancy. Real-time RT-PCR analysis showed that levels of S100G mRNA in the uterine endometrium from gilts carrying SCNT-derived conceptuses were significantly lower than those from gilts carrying conceptuses derived from natural mating. These results showed that S100G expression in the uterine endometrium was regulated by estrogen and IL1B of conceptus origin and was affected by the SCNT procedure during early pregnancy. This suggests that conceptus signals regulate S100G, an intracellular calcium transport protein, for the establishment of pregnancy in pigs.
Ovarian Response to Different Dose Levels of Follicle Stimulating Hormone (FSH) in Different Genotypes of Bangladeshi Cattle
Ali, M.S.;Khandoker, M.A.M.Y.;Afroz, M.A.;Bhuiyan, A.K.F.H. 52
The experiment was conducted under the Department of Animal Breeding and Genetics, Bangladesh Agricultural University (BAU), Mymensingh from June, 2001 to December, 2005 in two different locations (Central Cattle Breeding and Dairy Farm and Bangladesh Livestock Research Institute in Savar, Dhaka) to observe the ovarian response to different doses of FSH in three different genotypes of cattle: indigenous Local, Pabna cattle and Friesian${\times}$Local cross. The five dose levels used were 200, 240, 280, 320 and 360 mg. In Local, the ovarian responses in terms of corpus luteum (CL), recovered embryo (RE) and transferable embryo (TE) counts were significant at 320, 280 and 280 mg, respectively. In Pabna cattle, the CL, RE and TE counts were significant at 360, 320 and 320 mg, respectively. In the Friesian${\times}$Local cross, the CL, RE and TE counts were significant at 360, 320 and 320 mg, respectively. Excellent-quality embryos showed significantly the highest yield ($1.80{\pm}0.20$) at the 240 and 280 mg FSH levels in the Local genotype. In Pabna cattle, the highest yield ($2.00{\pm}0.32$) was found at the 320 mg FSH level. In Friesian${\times}$Local, the highest yield ($2.20{\pm}0.20$) was found at the 280 mg FSH level.
Effects of Substituting Soybean Meal for Sunflower Cake in the Diet on the Growth and Carcass Traits of Crossbred Boer Goat Kids
Palmieri, Adriana Dantas;Oliveira, Ronaldo Lopes;Ribeiro, Claudio Vaz Di Mambro;Ribeiro, Marinaldo Divino;Ribeiro, Rebeca Dantas Xavier;Leao, Andre Gustavo;Agy, Mariza Sufiana Faharodine Aly;Ribeiro, Ossival Lolato 59
The present study was conducted to determine the best level of substitution of soybean meal by sunflower cake in diets for kids through the evaluation of quantitative carcass traits. Thirty-two Boer${\times}$1/2 NDB (no defined breed) crossbred kids, male, non-castrated, 4 months of age and with an initial body weight of $15{\pm}3.2$ kg, were randomly assigned to individual pens. The treatments contained four substitution levels of soybean meal by sunflower cake (0, 33, 66 and 100% DM). At the end of the experimental period, the animals were slaughtered. There was no influence of the treatments on any of the mean values of the evaluated measures (p>0.05): 21.78 kg (body weight at slaughter), 8.65 kg (hot carcass weight), 8.59 kg (cold carcass weight), 40.27% (hot carcass yield), 39.20% (cold carcass yield), 7.73 $cm^2$ (rib eye area), 46.74 cm (carcass outer length), 45.68 cm (carcass internal length), 36.92 cm (leg length), 26.04 cm (leg perimeter), 48.66 cm (hind perimeter), 58.62 cm (thoracic perimeter), 0.20 (carcass compactness index), 68.48% (total muscle of the leg), 2.79% (total leg fat), 55.19% (subcutaneous leg fat), 28.82% (total bone), 81.66 g (femur weight), 14.88 cm (femur length), 0.38 (leg muscularity index), 2.53 (muscle:bone ratio) and 33.42 (muscle:fat ratio). The substitution of soybean meal by sunflower cake may be recommended up to a level of 100% without alterations to quantitative carcass traits.
Effect of Exogenous Fibrolytic Enzyme Application on the Microbial Attachment and Digestion of Barley Straw In vitro
Wang, Y.;Ramirez-Bribiesca, J.E.;Yanke, L.J.;Tsang, A.;McAllister, T.A. 66
The effects of exogenous fibrolytic enzymes (EFE; a mixture of two preparations from Trichoderma spp., with predominant xylanase and ${\beta}$-glucanase activities, respectively) on colonization and digestion of ground barley straw and alfalfa hay by Fibrobacter succinogenes S85 and Ruminococcus flavefaciens FD1 were studied in vitro. The two levels (28 and 280 ${\mu}g$/ml) of EFE tested and both bacteria were effective at digesting NDF of hay and straw. With both substrates, more NDF hydrolysis (p<0.01) was achieved with EFE alone at 280 than at 28 ${\mu}g$/ml. A synergistic effect (p<0.01) of F. succinogenes S85 and EFE on straw digestion was observed at 28 but not 280 ${\mu}g$/ml of EFE. Strain R. flavefaciens FD1 digested more (p<0.01) hay and straw with higher EFE than with lower or no EFE, but the effect was additive rather than synergistic. Included in the incubation medium, EFE showed potential to improve fibre digestion by cellulolytic ruminal bacteria. In a second batch culture experiment using mixed rumen microbes, DM disappearance (DMD), gas production and incorporation of $^{15}N$ into particle-associated microbial N ($^{15}N$-PAMN) were higher (p<0.001) with ammoniated (5% w/w; AS) than with native (S) ground barley straw. Application of EFE to the straws increased (p<0.001) DMD and gas production at 4 and 12 h, but not at 48 h of the incubation. EFE applied onto S increased (p<0.01) $^{15}N$-PAMN at 4 h only, but EFE on AS increased (p<0.001) $^{15}N$-PAMN at all time points. Prehydrolysis increased (p<0.01) DMD from both S and AS at 4 and 12 h, but reduced (p<0.01) $^{15}N$-PAMN in the early stage (4 h) of the incubation, as compared to non-prehydrolyzed samples. Application of EFE to barley straw increased rumen bacterial colonization of the substrate, but excessive hydrolytic action of EFE prior to incubation decreased it.
Feeding Unprotected CLA Methyl Esters Compared to Sunflower Seeds Increased Milk CLA Level but Inhibited Milk Fat Synthesis in Cows
Dohme-Meier, F.;Bee, G. 75
An experiment was conducted to compare the effect of the same amount of 18:2 offered either as 18:2n-6 or as a mixture of unprotected 18:2c9t11 and 18:2t10c12 on feed intake, milk components as well as plasma and milk fatty acid profile. Fifteen cows were blocked by milk yield and milk fat percentage and within block assigned randomly to 1 of 3 treatments (n = 5). Each cow passed a 12-d adjustment period (AP) on a basal diet. After the AP cows received 1 of 3 supplements during an 18-d experimental period (EP). The supplements contained either 1.0 kg ground sunflower seeds (S), 0.5 kg conjugated linoleic acid (CLA)-oil (C) or 0.75 kg of a mixture of ground sunflower seeds and CLA-oil (2:1; SC). All 3 supplements contained the same amount of 18:2 either as CLA (${\Sigma}18$:2c9t11+18:2t10c12, 1:1) or as 18:2c9c12. During the last 2 d of AP and the last 4 d of EP feed intake and milk yield were recorded daily and milk samples were collected at each milking. Blood samples were collected from the jugular vein on d 11 of AP and d 15 and 18 of EP. The 18:2 intake increased in all treatments from AP to EP. Regardless of the amount of supplemented CLA, the milk fat percentage decreased by 2.35 and 2.10%-units in treatment C and SC, respectively, whereas in the treatment S the decrease was with 0.99%-unit less pronounced. Thus, C and SC cows excreted daily a lower amount of milk fat than S cows. The concentration of trans 18:1 in the plasma and the milk increased from AP to EP and increased with increasing dietary CLA supply. While the concentration of 18:2c9t11 and 18:2t10c12 in the plasma and that of 18:2t10c12 in the milk paralleled dietary supply, the level of 18:2c9t11 in the milk was similar in C and CS but still lower in S. Although the dietary concentration of CLA was highest in treatment C, the partial replacement of CLA by sunflower seeds had a similar inhibitory effect on milk fat synthesis. Comparable 18:2c9t11 levels in the milk in both CLA treatments implies that this isomer is subjected to greater biohydrogenation with increasing supply than 18:2t10c12. The fact that unprotected 18:2t10c12 escaped biohydrogenation in sufficient amounts to affect milk fat synthesis reveals opportunities to develop feeding strategies where reduced milk fat production is desirable or required by the metabolic state of the cow.
Methane Production of Different Forages in In vitro Ruminal Fermentation
Meale, S.J.;Chaves, A.V.;Baah, J.;McAllister, T.A. 86
An in vitro rumen batch culture study was completed to compare effects of common grasses, leguminous shrubs and non-leguminous shrubs used for livestock grazing in Australia and Ghana on $CH_4$ production and fermentation characteristics. Grass species included Andropogon gayanus, Brachiaria ruziziensis and Pennisetum purpureum. Leguminous shrub species included Cajanus cajan, Cratylia argentea, Gliricidia sepium, Leucaena leucocephala and Stylosanthes guianensis, and non-leguminous shrub species included Annona senegalensis, Moringa oleifera, Securinega virosa and Vitellaria paradoxa. Leaves were harvested, dried at $55^{\circ}C$ and ground through a 1 mm screen. Serum bottles containing 500 mg of forage, modified McDougall's buffer and rumen fluid were incubated under anaerobic conditions at $39^{\circ}C$ for 24 h. Samples of each forage type were removed after 0, 2, 6, 12 and 24 h of incubation for determination of cumulative gas production. Methane production, ammonia concentration and proportions of VFA were measured at 24 h. Concentration of aNDF (g/kg DM) ranged from 671 to 713 (grasses), 377 to 590 (leguminous shrubs) and 288 to 517 (non-leguminous shrubs). After 24 h of in vitro incubation, cumulative gas, $CH_4$ production, ammonia concentration, proportion of propionate in VFA and IVDMD differed (p<0.05) within each forage type. B. ruziziensis and G. sepium produced the highest cumulative gas, IVDMD, total VFA and proportion of propionate in VFA, and the lowest A:P ratios, within their forage types. Consequently, these two species produced moderate $CH_4$ emissions without compromising digestion. Grazing of these two species may be a strategy to reduce $CH_4$ emissions; however, further assessment in in vivo trials and at different stages of maturity is recommended.
Effect of Sea Buckthorn Leaves on Inosine Monophosphate and Adenylosuccinate Lyase Gene Expression in Broilers during Heat Stress
Zhao, Wei;Chen, Xin;Yan, Changjiang;Liu, Hongnan;Zhang, Zhihong;Wang, Pengzu;Su, Jie;Li, Yao 92
The trial was conducted to evaluate the effects of sea buckthorn leaves (SBL) on meat flavor in broilers during heat stress. A total of 360 one-day-old Arbor Acre (AA) broilers (male) were randomly allotted to 4 treatments with 6 replicate pens per treatment and 15 birds per pen. The control group was fed a basal diet; experimental groups I, II and III were fed the basal diet supplemented with 0.25%, 0.5% and 1% SBL, respectively. During the 4th week, broilers were exposed to heat stress conditions ($36{\pm}2^{\circ}C$), after which muscle and liver samples were collected. High performance liquid chromatography (HPLC) was performed to measure the content of inosine monophosphate (IMP); Real-Time PCR was performed to determine the expression of the ADSL gene. The results showed that the content of breast muscle IMP of groups I, II and III was significantly increased by 68%, 102% and 103% (p<0.01) compared with the control, respectively; the content of thigh muscle IMP of groups II and III was significantly increased by 56% and 58% (p<0.01), respectively. Additionally, ADSL mRNA expression in groups I, II and III was significantly increased by 80%, 65% and 49% (p<0.01) compared with the control, respectively. The content of IMP and the expression of ADSL mRNA were increased by the basal diet supplemented with SBL; therefore, the decrease in meat flavor caused by heat stress was relieved.
Re-evaluation of the Optimum Dietary Vitamin C Requirement in Juvenile Eel, Anguilla japonica by Using L-ascorbyl-2-monophosphate
Bae, Jun-Young;Park, Gun-Hyun;Yoo, Kwang-Yeol;Lee, Jeong-Yeol;Kim, Dae-Jung;Bai, Sung-Chul C. 98
This study was conducted to re-evaluate the dietary vitamin C requirement in juvenile eel, Anguilla japonica by using L-ascorbyl-2-monophosphate (AMP) as the vitamin C source. Five semi-purified experimental diets were formulated to contain 0 ($AMP_0$), 30 ($AMP_{24}$), 60 ($AMP_{52}$), 120 ($AMP_{108}$) and 1,200 ($AMP_{1137}$) mg AMP $kg^{-1}$ diet on a dry matter basis. Casein and defatted fish meal were used as the main protein sources in the semi-purified experimental diets. After a 4-week conditioning period, fish initially averaging $15{\pm}0.3$ g (mean${\pm}$SD) were randomly distributed to each aquarium as triplicate groups of 20 fish each. One of five experimental diets was fed on a DM basis to fish in three randomly selected aquaria, at a rate of 3% of total body weight, twice a day. At the end of the feeding trial, weight gain (WG) and specific growth rate (SGR) for fish fed $AMP_{52}$ and $AMP_{108}$ were significantly higher than those recorded for fish fed the control diet (p<0.05). Similarly, feed efficiency (FE) and protein efficiency ratio (PER) for fish fed $AMP_{52}$ were significantly higher than those for fish fed the control diet (p<0.05). Broken-line regression analysis on the basis of WG, SGR, FE and PER showed dietary vitamin C requirements of juvenile eel to be 41.1, 41.2, 43.9 and 43.1 (mg $kg^{-1}$ diet), respectively. These results indicated that the dietary vitamin C requirement could range from 41.1 to 43.9 mg $kg^{-1}$ diet in juvenile eel when L-ascorbyl-2-monophosphate was used as the dietary source of vitamin C.
Energy and Standardized Ileal Amino Acid Digestibilities of Chinese Distillers Dried Grains, Produced from Different Regions and Grains Fed to Growing Pigs
Xue, P.C.;Dong, B.;Zang, J.J.;Zhu, Z.P.;Gong, L.M. 104
Two experiments were conducted to determine the digestibility of crude protein (CP), amino acids and energy in three Chinese corn distillers dried grains with solubles (DDGS), one rice DDGS, one American corn DDGS and one American high protein distillers dried grains (HP-DDG). In Exp. 1, the apparent ileal digestibility (AID) and standardized ileal digestibility (SID) of CP and amino acids in the six samples were determined using cannulated barrows (initial BW: $43.3{\pm}1.7$ kg). In Exp. 2, the digestible energy (DE) and metabolizable energy (ME) content of these six samples were determined using crossbred barrows (initial BW: $46.0{\pm}2.5$ kg). The results of the two experiments indicated that Chinese corn DDGS is generally similar to American DDGS in chemical composition, digestibility of amino acids, DE and ME. However, Chinese DDGS had a lower Lys concentration (0.50% vs. 0.74%) and SID Lys (52.3% vs. 57.0%, p<0.01). The DE and ME values in Chinese corn DDGS were 3,427 and 3,306 kcal/kg, respectively. Rice DDGS had DE and ME (3,363 and 3,228 kcal/kg) similar to corn DDGS but a higher Lys concentration (0.64% vs. 0.50%), while the SID of Lys was quite low (61.8%, p<0.01). HP-DDG had high values of SID of Lys, DE and ME (79.8%, 3,899 and 3,746 kcal/kg). In conclusion, except for a lower Lys concentration and availability, the chemical composition, digestibility of amino acids, and DE and ME values in Chinese corn DDGS are similar to American corn DDGS. Additionally, the rice DDGS had lower Lys content and digestible Lys values than corn DDGS. Thirdly, HP-DDG has higher levels of digestible amino acids and energy than DDGS.
Isolation, Screening and Identification of Swine Gut Microbiota with Ochratoxin A Biodegradation Ability
Upadhaya, Santi Devi;Song, Jae-Yong;Park, Min-Ah;Seo, Ja-Kyeom;Yang, Liu;Lee, Chan-Ho;Cho, Kyung-J.;Ha, Jong-K. 114
The potential for ochratoxin A (OTA) degradation by swine intestinal microbiota was assessed in the current study. Intestinal content that was collected aseptically from swine was spiked with 100 ppb OTA and incubated for 6 and 12 h at $39^{\circ}C$. An OTA assay was conducted using the incubated samples, and it was found that 20% of the OTA toxin was detoxified, indicating the presence of microbes capable of OTA degradation. Twenty-eight bacterial species were isolated anaerobically in M 98-5 media and 45 bacterial species were isolated using nutrient broth aerobically. Screening results showed that one anaerobic bacterial isolate, named MM11, detoxified more than 75% of OTA in liquid media. Furthermore, 1.0 ppm OTA was degraded completely after 24 h incubation on a solid 'corn' substrate. The bacterium was identified by 16S rDNA sequencing as having 97% sequence similarity with Eubacterium biforme. The isolation of an OTA-degrading bacterium from the swine natural flora is of great importance for OTA biodegradation and may be a valuable potential source for OTA-degradation enzymes in industrial applications.
Rainfed Areas and Animal Agriculture in Asia: The Wanting Agenda for Transforming Productivity Growth and Rural Poverty
Devendra, C. 122
The importance of rainfed areas and animal agriculture on productivity enhancement and food security for economic rural growth in Asia is discussed in the context of opportunities for increasing potential contribution from them. The extent of the rainfed area of about 223 million hectares and the biophysical attributes are described. They have been variously referred to inter alia as fragile, marginal, dry, waste, problem, threatened, range, less favoured, low potential lands, forests and woodlands, including lowlands and uplands. Of these, the terms less favoured areas (LFAs), and low or high potential are quite widely used. The LFAs are characterised by four key features: i) very variable biophysical elements, notably poor soil quality, rainfall, length of growing season and dry periods, ii) extreme poverty and very poor people who continuously face hunger and vulnerability, iii) presence of large populations of ruminant animals (buffaloes, cattle, goats and sheep), and iv) have had minimum development attention and an unfinished wanting agenda. The rainfed humid/sub-humid areas found mainly in South East Asia (99 million ha), and arid/semi-arid tropical systems found in South Asia (116 million ha) are priority agro-ecological zones (AEZs). In India for example, the ecosystem occupies 68% of the total cultivated area and supports 40% of the human and 65% of the livestock populations. The area also produces 4% of food requirements. The biophysical and typical household characteristics, agricultural diversification, patterns of mixed farming and cropping systems are also described. Concerning animals, their role and economic importance, relevance of ownership, nomadic movements, and more importantly their potential value as the entry point for the development of LFAs is discussed. Two examples of demonstrated success concern increasing buffalo production for milk and their expanded use in semi-arid AEZs in India, and the integration of cattle and goats with oil palm in Malaysia. Revitalised development of the LFAs is justified by the demand for agricultural land to meet human needs e.g. housing, recreation and industrialisation; use of arable land to expand crop production to ceiling levels; increasing and very high animal densities; increased urbanisation and pressure on the use of available land; growing environmental concerns of very intensive crop production e.g. acidification and salinisation with rice cultivation; and human health risks due to expanding peri-urban poultry and pig production. The strategies for promoting productivity growth will require concerted R and D on improved use of LFAs, application of systems perspectives for technology delivery, increased investments, a policy framework and improved farmer-researcher-extension linkages. These challenges and their resolution in rainfed areas can forcefully impact on increased productivity, improved livelihoods and human welfare, and environmental sustainability in the future.
Pork Preference for Consumers in China, Japan and South Korea
Oh, S.H.;See, M.T. 143
Competition in global pork markets has increased as trade barriers have opened as a result of free trade agreements. Japanese consumers prefer both loin and Boston butt, while Chinese consumers prefer pork offal. Imports of frozen pork into China have increased. Japanese consumers consider the origin of pork along with its price when making purchase decisions. While Chinese consumers prefer a strong-tasting pork product, South Korean consumers show very strong preferences for pork that is higher in fat. Therefore, South Korean consumers have a higher demand for pork belly and Boston butt. Consequently, the domestic supply of pork in Korea hardly meets this demand, which means that importation of high-fat parts is inevitable. In Korea there is a lower preference for low-fat parts such as loin, picnic shoulder, and ham. During the economic depression in South Korea there were observable changes in consumer preferences. There remains steep competition among the pork-exporting countries in terms of gaining share in the international pork market. If specific consumer preferences are considered carefully, there is the possibility of increasing the amount of pork exported to these countries.
Chapter 8 Work, Power, Simple Machines
Robert_Duncan56
Work
when a force causes an object to move in the direction of the force.
When is work done?
work is only done if an object moves in the same direction as the force being applied to it.
Calculating Work
W = F x d (in units: N x m = J)
Joule
a unit used to measure work in terms of newtons x meters = a newton meter or Joule
Power
the rate at which energy is transferred, or the rate at which work is done.
Calculating Power
P = W / t (in units: joules divided by seconds = watts)
Watt
a unit used to measure power; J/s = W
Power relationships
The faster work is done, the more power is created or used.
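A quick numeric check of the two formulas above; a minimal sketch, with made-up force, distance, and time values:

```python
# Work:  W = F * d (newtons * meters = joules)
# Power: P = W / t (joules / seconds = watts)

force_n = 50.0     # applied force in newtons (example value)
distance_m = 2.0   # distance moved in the direction of the force, in meters
time_s = 4.0       # time taken, in seconds

work_j = force_n * distance_m  # 100.0 J
power_w = work_j / time_s      # 25.0 W

print(f"W = {work_j} J, P = {power_w} W")
```

Halving the time for the same work doubles the power, which is exactly the relationship stated above.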
Machine
a device that makes work easier by changing the size or direction of a force.
Work input
work you do on a machine
Work output
work the machine does on an object
Benefits of using a machine
Machines allow force to be applied over a greater distance, which means that less force will be needed for the same amount of work.
Force - Distance Trade Off
When a machine changes the size of the force, the distance through which the force is exerted must also change. Force or distance can increase, but both cannot increase. When one increases, the other must decrease.
Machines with a mechanical advantage of 1
some machines change only the direction of the force, not the size of the force or the distance through which the force is exerted.
Mechanical Advantage
the number of times the machine multiplies force. In other words, the mechanical advantage compares the input force with the output force.
Calculating Mechanical Advantage
MA = OF (output force) / IF (input force)
Mechanical Advantage greater than 1
A machine that has a mechanical advantage that is greater than 1 can help move or lift heavy objects because the output force is greater than the input force.
Mechanical Advantage less than 1
A machine that has a mechanical advantage that is less than 1 will reduce the output force but can increase the distance an object moves.
Mechanical Efficiency
A comparison of a machine's work output with the work input. The work output of a machine can never be greater than the work input. In fact, the work output of a machine is always less than the work input due to friction. Mechanical efficiency tells you what percentage of the work input gets converted into work output.
Calculating Mechanical Efficiency
Work Output / Work Input x 100 (always a percentage)
Ideal Machine
a machine that has 100% mechanical efficiency. Ideal machines do not exist, because machines with moving parts always have to overcome friction.
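A small sketch tying the MA and ME formulas together; the force and work numbers are invented for the example:

```python
# Mechanical advantage:  MA = output force / input force
# Mechanical efficiency: ME = (work output / work input) * 100, a percentage

input_force_n = 20.0    # force you apply to the machine (example value)
output_force_n = 60.0   # force the machine applies to the load
work_input_j = 120.0    # work you do on the machine
work_output_j = 90.0    # work the machine does on the object (less, due to friction)

ma = output_force_n / input_force_n      # 3.0 -> the machine multiplies force by 3
me = work_output_j / work_input_j * 100  # 75.0 -> percent of work input converted

print(f"MA = {ma}, ME = {me}%")  # ME < 100% because ideal machines do not exist
```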
Lever
a simple machine that has a bar that pivots at a fixed point, called a fulcrum. Levers are used to apply a force to a load. There are three classes of levers, which are based on the placements of the fulcrum, the load, and the input force.
1st Class Lever
the fulcrum is between the input force and the load... can have a MA of 1, less than 1, or greater than 1, depending on the placement of the fulcrum
1st Class lever where fulcrum is directly between the load and input force
1st class lever with MA of 1 (See Saw)
1st Class lever where fulcrum is closer to the load
always has a MA of greater than 1
1st Class lever where the fulcrum is closer to the input force
always has a MA of less than 1
2nd Class Lever
The load is between the fulcrum and the input force. The lever does not change the direction of the input force. But they allow you to apply less force than the force exerted by the load. Because the output force is greater than the input force, you must exert the input force over a greater distance. (wheelbarrow)
3rd Class Lever
The input force is between the fulcrum and the load. These levers do not change the direction of the input force. In addition, they do not increase the input force. Therefore, the output force is always less than the input force. (your arm)
Pulley
a simple machine that has a grooved wheel that holds a rope or a cable. A load is attached to one end of the rope, and an input force is applied to the other end.
Fixed Pulley
pulley is attached to something that does not move. By using a fixed pulley, you can pull down on the rope to lift the load up. The pulley changes the direction of the force. MA = 1 (flag pole or window blinds)
Movable Pulley
pulleys that are attached to the object being moved. It does not change a force's direction. These pulleys do increase force, but they also increase the distance over which the input force must be exerted. MA = 2
Block and Tackle Pulley
When a fixed pulley and a movable pulley are used together. The mechanical advantage depends on the number of rope segments.
Wheel and Axle
a simple machine consisting of two circular objects of different sizes. (door knobs, steering wheel)
Inclined Plane
a simple machine that is a straight, slanted surface.
The mechanical advantage (MA) of an inclined plane
calculated by dividing the length of the inclined plane by the height to which the load is lifted.
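For example (numbers invented for illustration): a ramp 3 m long that rises 1 m has MA = 3 m / 1 m = 3, so roughly one third the lifting force is needed, applied over three times the distance.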
Wedge
a pair of inclined planes that move. It applies an output force that is greater than your input force, but you apply the input force over a greater distance. (knife, ax, plow, door stop)
MA of a Wedge
found by dividing the length of the wedge by its greatest thickness
Screw
an inclined plane that is wrapped in a spiral around a cylinder
MA of a Screw
the longer the spiral on a screw is and the closer together the threads are, the greater the screw's mechanical advantage is.
Compound or Complex Machines
machines that are made of two or more simple machines.
April 2014, pp. 668-678
On the algebraic independence of generic Painlevé transcendents
MSC 2010: Model theory
MSC 2010: Differential equations in the complex domain
Joel Nagloo (a1) and Anand Pillay (a2)
Department of Pure Mathematics, University of Leeds, UK email [email protected]
Department of Mathematics, University of Notre Dame, IN 46556, USA email [email protected]
DOI: https://doi.org/10.1112/S0010437X13007525
Published online by Cambridge University Press: 10 March 2014
We prove that if $y''=f(y,y',t,\alpha,\beta,\ldots)$ is a generic Painlevé equation from among the classes II, IV and V, and if $y_1,\ldots,y_n$ are distinct solutions, then $\mathrm{tr.deg}(\mathbb{C}(t)(y_1,y'_1,\ldots,y_n,y'_n)/\mathbb{C}(t))=2n$. (This was proved by Nishioka for the single equation $P_{\rm I}$.) For generic Painlevé III and VI, we have a slightly weaker result: $\omega$-categoricity (in the sense of model theory) of the solution space, as described below. The results confirm old beliefs about the Painlevé transcendents.
[Boa06] Boalch, P., The fifty-two icosahedral solutions to Painlevé VI, J. Reine Angew. Math. 596 (2006), 183–214.
[GLS02] Gromak, V., Laine, I. and Shimomura, S., Painlevé differential equations in the complex plane, De Gruyter Studies in Mathematics, vol. 28 (De Gruyter, 2002).
[KLM94] Kitaev, A., Law, C. and McLeod, J., Rational solutions of the fifth Painlevé equation, Differ. Integral Equations 7 (1994), 967–1000.
[LT08] Lisovyy, O. and Tykhyy, Yu., Algebraic solutions of the sixth Painlevé equation, Preprint (2008), arXiv:0809.4873.
[Mar05] Marker, D., Model theory of differential fields, in Model theory of fields, Lecture Notes in Logic, vol. 5, eds Marker, D., Messmer, M. and Pillay, A. (Springer, Berlin–Tokyo, 2005).
[Mur85] Murata, Y., Rational solutions of the second and fourth Painlevé equations, Funkcial. Ekvac. 28 (1985), 1–32.
[Mur95] Murata, Y., Classical solutions of the third Painlevé equation, Nagoya Math. J. 139 (1995), 37–65.
[NP11] Nagloo, R. and Pillay, A., On algebraic relations between solutions of a generic Painlevé equation, Preprint (2011), arXiv:1112.2916.
[Nis04] Nishioka, K., Algebraic independence of Painlevé first transcendents, Funkcial. Ekvac. 47 (2004), 351–360.
[Oka87] Okamoto, K., Studies on the Painlevé equations IV, third Painlevé equation, P III, Funkcial. Ekvac. 30 (1987), 305–332.
[UW97] Umemura, H. and Watanabe, H., Solutions of the second and fourth Painlevé equations, I, Nagoya Math. J. 148 (1997), 151–198.
[Wat98] Watanabe, H., Birational canonical transformations and classical solutions of the sixth Painlevé equation, Ann. Sc. Norm. Super. Pisa Cl. Sci. (4) XXVII (1998), 379–425.
Painlevé equations
algebraic independence
model theory
differentially closed fields
MSC classification
14H05: Algebraic functions; function fields
14H70: Relationships with integrable systems
34M55: Painlevé and other special equations; classification, hierarchies;
03C60: Model-theoretic algebra
Hall Effect Derivation
The Hall effect is the production of a voltage difference (the Hall voltage) across an electrical conductor, transverse to the electric current in the conductor and to an applied magnetic field perpendicular to that current. It is used today as a research tool to probe the movement of charges, their drift velocities and densities, and so on, in materials; an instrument that uses it to measure magnetic field strength is known as a magnetometer.

When no magnetic field is present, the charge carriers follow approximately straight, 'line of sight' paths between collisions with impurities, phonons, etc. When you place a magnet near the plate, its magnetic field exerts a force (the Lorentz force) on the moving charge carriers, which upsets their straight flow. The result is an asymmetric distribution of charge density across the Hall element, arising from a force that is perpendicular to both the 'line of sight' path and the applied magnetic field. The separation of charge sets up a transverse electric field across the specimen, which opposes the migration of further charge, so a steady electric potential is established for as long as the charge is flowing.

(Figure: Hall effect principle – current flowing through a plate.) Consider a conductor carrying a current (I) and placed in a magnetic field (B) along the z-axis. A Hall voltage $(V_H)$ is developed along the y-axis, with electric field intensity $E_H$. In equilibrium, the force on the charge carriers due to the Hall field balances the force due to the magnetic field: $eE_H = Bev$, and since $E_H = V_H/d$, this gives $\frac{eV_H}{d} = Bev$, so $V_H = Bvd$, where $v$ is the drift velocity of the carriers and $d$ is the width of the specimen. The ratio of the transverse field to the longitudinal current density is related to the Hall angle, which measures the average deflection, in radians, that a carrier picks up between collisions.

Because the Hall voltage depends on the field, the effect can be used to measure magnetic fields: if a material with a known density of charge carriers n is placed in a magnetic field and $V_H$ is measured, the field strength can be determined. Hall-effect devices also find applications in position sensing, as they are immune to water, mud, dust, and dirt.
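To finish the derivation, here is the standard textbook completion (a sketch: the symbols $n$ for carrier density, $e$ for carrier charge, $t$ for plate thickness, and $A = dt$ for cross-sectional area follow the usual convention rather than anything defined in the post, and signs are suppressed):

$$I = neAv = ne(dt)v \;\Rightarrow\; v = \frac{I}{nedt}, \qquad V_H = Bvd = \frac{IB}{net}, \qquad R_H \equiv \frac{E_H}{JB} = \frac{1}{ne},$$

where $J = I/A$ is the current density and $R_H$ is the Hall coefficient. Measuring $V_H$, $I$, $B$ and $t$ therefore determines $n$, which is exactly why the effect is useful for characterizing materials.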
How should I introduce the Chain Rule
I'm halfway through my first year of teaching AP Calculus to high school seniors. It's been going generally well, but I'm feeling like I really could have done better getting them into the Chain Rule.
I started with it the same basic way that I did with the Product and Quotient Rules -- showing that the rule worked for elementary polynomials and could save us some calculation time. But, in retrospect, the Chain Rule is such a fundamental part of much of the rest of differentiation that I feel like there could have been more that would help them understand how it works and how the concept ties together.
Is the u-substitution notion a good idea? Our class is generally much more comfortable with the $f'(x)$ notation, and as a result I stayed away from the $\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$ format. Instead, I did a lot of hand-waving around the "inside function" and the "outside function" that hasn't taken hold in all my students as well as I might have hoped.
Any suggestions about what works in your calculus classrooms?
secondary-education calculus curriculum
Matthew Daly
Stretch a rubber band by a factor of 3, then by a factor of 2. What's the total stretch? – Peter Saveliev Dec 24 '19 at 0:50
I suspect students have trouble because they do not really understand function composition in all the contexts we expect they can operate. It's probably useful to give some practice with plain old identification of inside vs. outside function as a prelude to the talk. You know your students better than we do though... – James S. Cook Dec 24 '19 at 1:01
In my answer to Revisiting topics from previous courses I discussed how I approached the chain rule. – Dave L Renfro Dec 24 '19 at 13:47
I would introduce it with linear functions first to see how composition relates to multiplication. – copper.hat Dec 25 '19 at 4:24
Agree that the function composition concept has to be solid separately from the derivative concept. Keep the sample functions simple and use matching variable names. Like y=f(x)=x^2 and z=g(y)=1/y. Easy to see that g(f(x))=g(y=x^2)=1/y=1/(x^2). Once that's solid, then you can show how the Chain Rule gives the right result. – Jeff Y Dec 26 '19 at 20:09
I just start with constant rates of change, where it's pretty blazingly obvious that the chain rule works. E.g., Jane hikes 3 kilometers in an hour, and hiking burns 70 calories per kilometer. At what rate does she burn calories?
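For instance, writing $C$ for calories burned and $k$ for kilometers hiked (variable names chosen here just for the example), the two constant rates simply multiply: $$\frac{dC}{dt}=\frac{dC}{dk}\cdot\frac{dk}{dt}=70\,\frac{\text{cal}}{\text{km}}\times 3\,\frac{\text{km}}{\text{h}}=210\,\frac{\text{cal}}{\text{h}}.$$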
Our class is generally much more comfortable with the $f'(x)$ notation, and as a result I stayed away from the $\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$ format.
This is your opportunity to help them overcome that irrational prejudice by showing them an application where the Leibniz notation is clearly superior. It's not as though learning the Leibniz notation is optional for, e.g., engineering majors.
I also like to do the example of $x=A\cos bt$, where $x$ and $A$ both have units of meters, $t$ is in seconds, and $b$ has units of inverse seconds. I explicitly act this out with a heavy object and elicit the interpretations of $A$ and $b$. Then I take the derivative and intentionaly omit the factor of $b$ coming from the "derivative of the inside stuff." I then point out that the result is obviously wrong, both because it has the wrong units and because it doesn't depend on the frequency, which it should.
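Worked out with the inner factor kept (a routine computation): $$\frac{dx}{dt}=\frac{d}{dt}\bigl(A\cos bt\bigr)=-Ab\sin bt,$$ which has units of $\text{m}\,\text{s}^{-1}$, as a velocity must. Dropping the $b$ leaves $-A\sin bt$, which has units of meters and no dependence on the frequency, both signs that something has gone wrong.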
Ben Crowell
$\begingroup$ "It's not as though learning the Leibniz notation is optional for, e.g., engineering majors." From my personal experience, not learning the different notations for a thing is surprisingly crippling as I move forward to the next level of a subject. @MatthewDaly sticking to a particular notation may provide short-term benefits, but does your students a huge disservice toward their future studies. $\endgroup$ – Him Dec 24 '19 at 15:08
$\begingroup$ @Scott Your experience resonates with me! My education professor really emphasized to us how it important it was to introduce any relevant, meaningful terminology or notation from the get-go. As I recall, the math pedagogy research she was quoting said that it takes seven examples for students to correctly relearn an omitted or misdirected word for a concept. Students learn by creating their own organizational structures in their minds, so if you try to redirect concepts within those structures in your teaching, students will struggle. $\endgroup$ – Eliza Wilson Dec 25 '19 at 7:00
$\begingroup$ +1 for learnLeibniz. It will be crucial when you get to integration and the fundamental theorem of calculus. See math.stackexchange.com/questions/1991575/… $\endgroup$ – Ethan Bolker Dec 25 '19 at 21:44
What is difficult about the chain rule is the function concept, more specifically the composition of functions. Notation that hides or leaves implicit the composition of functions causes a great deal of confusion for students. However, the fundamental issues are not the notation used (all choices are messy to some extent), but what the use of the notation leaves implicit or to be inferred, and the extent to which these expectations are unrealizable by beginning students.
One thing potentially confusing (and I think not just for students) about the Leibniz notation is that in $\tfrac{dy}{dx} = \tfrac{dy}{du}\tfrac{du}{dx}$ it is not clear that $\tfrac{dy}{du}$ and $\tfrac{du}{dx}$ are both viewed as functions of $x$, and that, in the case of $\tfrac{dy}{du}$, this moreover means that this notation really indicates the composition $\tfrac{dy}{du}(u(x))$. That is, the Leibniz notation, at least as commonly used, hides the composition of functions. The notation $\tfrac{dy}{du}$ appears to indicate a function of $u$, and it is implicit from context that this function of $u$ is viewed as a function of $x$. Too much is left implicit, to be inferred from context.
One part of a solution is to make explicit all functional compositions. With the Leibniz notation this can become quite messy, particularly if higher derivatives are involved. For example, $\tfrac{d}{dx}(y \circ u) = (\tfrac{dy}{du}\circ u )\tfrac{du}{dx}$ indicates more clearly the functional compositions that occur, although it still does not indicate the dependence on $x$. Explicitly adding this notation becomes somewhat ugly - $\tfrac{d}{dx}(y \circ u)(x) = (\tfrac{dy}{du}\circ u )(x)\tfrac{du}{dx}(x)$ - but perhaps this is nonetheless preferable to begin. (Once students understand what they are doing, the explicit indication of the functional compositions becomes a bother, and it becomes clarifying to omit it, but at first I think the situation is reversed.) (I am not saying I like any of these notations - on the contrary I generally avoid the Leibniz notation - Also, a cleaner, functorial notation would be something like $D(u^{\ast}y) = u^{\ast}(Dy)D(u)$, where the pullback is defined by $u^{\ast}y = y \circ u$, but such a presentation of the chain rule as a cocycle identity is simply not viable for most students as usually educated.)
One could write alternatively $(y \circ u)^{\prime}(x) = (y^{\prime}\circ u)(x)u^{\prime}(x)$, and this is in many senses easier to read. What can be confusing for the student is that operationally the prime requires taking the derivative with respect to different variables ($x$ in one instance, $u$ in the other). Formally this is not a problem as variable names are really just placeholders that indicate the sequencing of compositions (the derivative is the derivative, whatever one chooses to call the argument), but it can be the essence of the difficulties that students have.
On the other hand, it is also one thing that can be problematic with the Leibniz notation - the Leibniz notation attaches too much significance to variable names. The derivative of $u$ is not the derivative with respect to $x$, it is the derivative of $u$ with respect to its argument, whatever name one gives to that argument. Fixation on variable names and their magical qualities is a quite natural, one might say primitive, human tendency, but it is also part of what needs to be overcome to understand properly the chain rule. Precisely one of the confusing aspects of $\tfrac{dy}{dx} = \tfrac{dy}{du}\tfrac{du}{dx}$ is that, since its right-hand side must be a function of $x$ for the equality to have sense, the expression $\tfrac{dy}{du}$, which the notation apparently indicates is a function of $u$, has to be considered as a function of $u(x)$, that is with $u$ viewed as a function of $x$, and this aspect is hidden notationally, so has to be inferred. For those with experience, the notational hygiene compensates for leaving something implicit, but for students it can be a source of serious confusion.
I think the best tack is to make all of this as explicit as possible (obviously, in a language more accessible to students than that which I am using here), in particular indicating clearly what the difficulties are, where they occur, what is left implicit and what is not, whatever notation one choose to use. Operational rules of thumb that refer to inside and outside function will not help if they are not accompanied by precise explanation that makes clear what they intend to summarize and elide, although of course they can help when students are first sufficiently prepared to properly interpret them (however, in my experience this sort of informal summary works only with the most engaged students).
Diagrams can help. I am not sure how to make decent diagrams in mathjax, so I won't try here, but what I have in mind is a directed graph with three vertices and three arrows. The vertices represent the domains/codomains and the arrows represent the functions. The diagram can be labeled with the variable and function names. What it helps make clear is that $y^{\prime}$ and $y$ have the same domain (it is the codomain of $u$), while $(y \circ u)^{\prime}$ must have the same domain as $y \circ u$. Accompanying computations by such diagrams, and repeating this a fair number of times, can help.
A fundamental example, useful for other reasons, that should be clarifying in the context of the chain rule, is to take the derivative of the sine function viewed as a function of degrees. A student who can do this correctly, and write correctly to what it corresponds in whatever abstract functional notation (Leibniz or otherwise) has understood the chain rule.
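Concretely (a standard computation, not specific to this answer): if $s(x)$ denotes the sine of $x$ degrees, then $s(x)=\sin\bigl(\tfrac{\pi}{180}x\bigr)$, so the chain rule gives $$s'(x)=\frac{\pi}{180}\cos\Bigl(\frac{\pi}{180}x\Bigr),$$ and the point of the exercise is that the slogan "the derivative of sine is cosine" fails without the inner factor $\pi/180$.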
Finally, a reflection. Many of the difficulties students have in calculus reduce to a failure to understand the abstract function concept. This concept is difficult and it is quite modern (In some sense it postdates calculus by one or two centuries). Its difficulty becomes apparent in any context requiring change of variables (chain rule, change of domain in integrals). Much of the problem is that it is often treated as something simple, requiring little explanation, or given explanation that is not precise. Better to treat the difficult topics directly and plainly than to look for ways to avoid them.
Dan Fox
This $\frac{d}{dx}(y\circ u)=\left(\frac{dy}{du}\circ u\right)\frac{du}{dx}$ is very awkward. You are mixing up the modern concept of a function with the original notion of something being a function of something else. It will only cause more confusion to students. – Michael Bächtold Dec 24 '19 at 16:05
I don't think "the Leibniz notation attaches too much significance to variable names". Sometimes you want to concentrate on the names of the variables, other times on the names of the functions. Both approaches are valid. – Peter Saveliev Dec 24 '19 at 16:44
@MichaelBächtold: A fully modern notation would be something like $Dg^{\ast}(f) = g^{\ast}(Df) Dg$, but such fully functorial notation has its own problems. In any case, I am arguing that a lot of these issues are inherent whatever notation is used, and that most of the notations in common use are difficult in one way or another; I am not advocating the one you cite, simply saying that it seems possibly preferable to that which does not even indicate the compositions. – Dan Fox Dec 24 '19 at 17:06
This answer spreads several misconceptions. When $y$ is a function of $x$ (and not a function in the modern sense), then nothing prohibits it from being a function of something else too. For instance the area $y$ of a circle is a function of the radius $x$, of the diameter $d$, of the circumference $c$ or of the area $y$ itself etc. So when we derive a variable quantity $y$ (not a modern function), we always need to make explicit with respect to which variable. The Leibniz notation does this. – Michael Bächtold Feb 17 '20 at 11:01
To make an analogy: when $H$ is a subgroup of $G$, it's also a subgroup of many other groups (for instance itself). When we quotient by $H$ we need to say which supergroup we are quotienting, and the notation $G/H$ makes that explicit. I've never heard someone complain that "Too much is left implicit, to be inferred from context" in this analogous situation of groups. – Michael Bächtold Feb 17 '20 at 11:02
The intuition here is basically that of Ben Crowell's answer, and that kind of intuitive explanation might be worth going through first. What I want to show is the kind of activity you can explore with students to investigate how it works in a "less than completely obvious" situation, when at least one of the rates of change itself keeps changing.
One way to approach differentiation is in terms of "sensitivity" - the derivative $f'(x_0)$ measures the sensitivity of the function $f(x)$ to small changes in its input, about $x = x_0$. In particular, $f(x_0 + \Delta x) \approx f(x_0) + f'(x_0) \Delta x$. In addition to thinking about this graphically, one can investigate numerically with a suitable function (depending on the students' level of prior knowledge, perhaps one they already know the result of differentiating even if they can't prove it from first principles) e.g. the difference between $f(4)$ and $f(4.001)$.
Chain rule is just about extending this idea to the sensitivity of composite functions, i.e. how sensitive is $f(g(x))$ to changes in $x$? This is clearly going to depend on how sensitive $g(x)$ is to a small change in its input, but then also on how sensitive $f$ is to a change in its input... moreover the change in the input to $f$ is not just $\Delta x$ but $\Delta u = \Delta g(x) \approx g'(x) \Delta x$ where $u = g(x)$ is the input to $f$. So overall we reach $$fg(x_0 + \Delta x) = f(u_0 + \Delta u) \approx f(u_0) + f'(u_0) \Delta u \approx fg(x_0) + f'(g(x_0)) g'(x_0) \Delta x$$
Again this can be investigated numerically by students given an appropriate pair of functions and a set of values to play with (I have seen this work quite well by getting all students to use the same functions but "sharing out" which values to input), for example with $f(u) = u^2$ and $g(x) = 3x + 1$ we have $fg(x) = (3x + 1)^2$, $g'(x) = 3$ and $f'(u) = 2u$. A student might work with $x_0 = 5$ and $\Delta x = 0.001$; they tabulate that $u_0 = g(x_0) = 16$ and that $u_0 + \Delta u = g(x_0 + \Delta x) = 16.003$ so that $\Delta u$ = 0.003; this can be seen to match $g'(x_0) = 3$ multiplied by $\Delta x = 0.001$ (for more complicated functions this would only be an approximation, of course). In further columns of the table, the student might tabulate $f(u_0) = fg(x_0) = 256$; $f(u_0 + \Delta u) = fg(x_0 + \Delta x) = 256.096009$; $\Delta f(u) = 0.096009$; $f'(u_0) = 2 u_0 = 32$ and finally $f'(u_0) \Delta u = 32 \times 0.003 = 0.096$ which is reassuringly close to the value obtained for $\Delta f(u)$.
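A minimal sketch of that tabulation in code (function and variable names are chosen here for illustration):

```python
# Numerical check of the chain rule for f(u) = u**2 and g(x) = 3*x + 1.

def f(u):
    return u ** 2

def g(x):
    return 3 * x + 1

x0, dx = 5.0, 0.001

u0 = g(x0)                       # 16.0
du = g(x0 + dx) - u0             # ~0.003, matching g'(x0) * dx = 3 * 0.001
d_fg = f(g(x0 + dx)) - f(g(x0))  # ~0.096009, the actual change in f(g(x))
estimate = 2 * u0 * du           # f'(u0) * du = 32 * 0.003 = 0.096

print(du, d_fg, estimate)
```

Handing different students different values of $x_0$ and $\Delta x$ makes the agreement (and the fact that it is only approximate) visible very quickly.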
Silverfish
This is kind of a "how I did it," but I think that it's something worth writing up somewhere, and it's too long for a comment. Here's my approach.
When I last taught calculus, I began with a graphed function f(x) which had no simple formula (piecewise linear is sufficient), and we sketched its derivative where it was defined. Then I asked them about what the derivative of f(x-1) should be. After some reminder of what f(x-1) means and how it leads to a translation of the function, most students agree that it should be f'(x-1). "Does this always happen for graph transformations?"
Then we took it a little further using f(2x) and I asked students what they thought should happen to the derivative of a function as you performed the horizontal compression that f(2x) causes. They immediately see that the derivative should be compressed too. But we graph f(2x) and notice that there's something wrong about its slopes compared to f(x). The rise/run values at corresponding points are different: the "rise" stayed the same, but the corresponding "run" got cut in half, making the derivative double in value. So we came up with the formula d/dx f(2x) = f'(2x)*2 . The f'(2x) was needed to make the derivative "match up in x" with the stretches performed on the original function, and the *2 was needed to account for the change in steepness that happens because of the stretch.
This motivated the chain rule enough that we then could ask "What is the formula for the derivative of f(g(x)) in general?" This I did not prove, but it was enough to comment that you'd have to use f'(g(x)) to get the derivative to match with horizontal transformations, and g'(x) to deal with the steepness changing, giving the final formula d/dx f(g(x))= f'(g(x))g'(x). The analogous notation dy/du du/dx is introduced simultaneously and compared/contrasted.
Then we went to computing the derivatives of functions like sin(x^3) where the students practiced identifying the outer and inner functions as well as computing the derivatives, comparing/contrasting the two uses of different notation and noticing that the results were the same either way we did it.
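For instance, for $\sin(x^3)$ the two notations run in parallel: with outer function $f(u)=\sin u$ and inner function $g(x)=x^3$, $$\frac{d}{dx}\sin(x^3)=f'(g(x))\,g'(x)=\cos(x^3)\cdot 3x^2,$$ or, setting $u=x^3$ and $y=\sin u$, $\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}=\cos(u)\cdot 3x^2$, which agrees after substituting $u$ back.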
It's worth noting that I particularly emphasized the notion of function composition at the start of the course and had assigned multiple homework problems leading up to that point where students were expected to perform compositions and describe transformations, under the guise of "let's make sure you know prerequisite content," and the priming activity at the start of the class was to graph a function transformation. I think this fit in one to two hour-long college class periods.
Opal E
The chain rule is one of the areas where teaching using differentials (instead of derivatives) shines. If you are not aware, instead of teaching the "derivative" as the fundamental operation of calculus, you teach the differential. When you differentiate with the differential, there is no preferred variable with respect to differentiation. So, your rule, instead of being $y = x^n$ and $\frac{dy}{dx} = nx^{n-1}$, the rule is instead $dy = nx^{n - 1}\, dx$.
This has several advantages. First, it is much more symmetric. You are always doing the same thing to both sides, and always doing the same thing in all situations. You can still solve for the derivative (by just dividing by $dx$ in many cases), but the operation is the differential. The rules then become such like $d(nu) = n\,du$. Note that there are no extraneous variables here (like $y$), so there is less going on. This means that it is easier to apply this rule in a multivariable situation. Example: $y = x + z^2$. The differential is $dy = dx + 2z\,dz$. I can then solve for any derivative I want. This makes related rates, implicit differentiation, and the like super-easy because you aren't adding any new rules, you are just applying algebra.
And that's also the case with the chain rule. The rule for $\sin$, for instance, is $d(\sin(u)) = \cos(u)\,du$. Now, any rule you apply has to match the formula exactly (but can use any variable we want). So, if we have $d(\sin(x^2))$, this doesn't match our rule exactly. But, we can make it match our rule exactly with a variable substitution. We can say, $q = x^2$. Now our problem is $d(\sin(q))$ which becomes $\cos(q)\, dq$. We can easily back-substitute $q$ to get $\cos(x^2)\,dq$, but we still have the pesky $dq$ to take care of. However, we also have a new equation describing $q$ which we can differentiate in order to get a value for $dq$. If we differentiate $q = x^2$, the result is $dq = 2x\,dx$. Therefore, we will replace $dq$ in our result, giving $\cos(x^2)\,2x\,dx$.
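Collected into one line (just restating the steps above): with $q=x^2$ and $dq=2x\,dx$, $$d\bigl(\sin(x^2)\bigr)=d(\sin q)=\cos q\,dq=\cos(x^2)\,2x\,dx.$$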
Doing it this way, the chain rule barely qualifies as a rule. It's just a natural mathematical tool to make a substitution to transform an equation to be manageable under the rules that we already understand. There's no "special" rule called "the chain rule", it is just the natural extension of applying algebra to differentials.
Side note - differentials have fewer problematic cases. For instance, if you take the derivative with respect to $x$ of the equation $x = 1$, you will get $1 = 0$. However, if you take the differential, you will get $dx = 0$, which is actually true. It will also be more obvious when you try to transform it into a derivative why it is problematic ($\frac{dx}{dx}$ becomes $\frac{0}{0}$).
johnnyb
If you want to know more about this methodology, see my paper, "Simplifying and Refactoring Introductory Calculus" - journals.blythinstitute.org/ojs/index.php/cbi/article/view/29/… – johnnyb Feb 16 '20 at 4:12
If you teach the chain rule with Leibniz notation I recommend this suggestion of Steven Gubkin. It makes computations more explicit and straightforward, and students pick it up fairly well in my experience.
For the remainder I'll address some of the subtleties involved with derivative notation, the function concept and how that relates to the chain rule.
Let's start with notation. Many books suggest that when $y=f(x)=x^2$, all of the following mean the same: $$ f', \quad \frac{df}{dx},\quad \frac{dy}{dx}, \quad \frac{df(x)}{dx}, \quad \frac{d(x^2)}{dx}, \quad y', \quad (f(x))' , \quad (x^2)',\quad f'(x), \quad \frac{df}{dx}(x). $$
Since we all agree that $y\neq f$, the first two of these, $f'$ and $\frac{df}{dx}$, cannot possibly mean the same as the rest. I'll discuss in a moment why the second, $\frac{df}{dx}$, is nonsense notation, but let's first look at the rest of the list. By virtue of $y=f(x)=x^2$ and the principle that we can substitute equals for equals, we see that nos. 3, 4 and 5 must indeed all denote the same $$ \frac{dy}{dx}=\frac{df(x)}{dx}= \frac{d(x^2)}{dx}. $$ (The middle one should be parsed as $\frac{d(f(x))}{dx}$, but the parentheses are omitted.) For the same reason, nos. 6, 7 and 8 should all denote the same $$ y' = (f(x))'= (x^2)' $$ if that notation were sensible. I'll argue that it isn't and should be avoided, in particular when using the chain rule. No. 9, $f'(x)$, is perfectly fine, while the last one, $\frac{df}{dx}(x)$, should be discarded for the same reason as the second one, $\frac{df}{dx}$.
So what's wrong with $\frac{df}{dx}$? If $f$ is truly a function in the modern sense, namely the function $f:\mathbb{R}\to\mathbb{R}$ that squares every input, then $f$ is agnostic of the name of its input variable. In particular, the function that for every $x\in\mathbb{R}$ satisfies $f(x)=x^2$ is exactly the same as the one which for every $y\in \mathbb{R}$ satisfies $f(y)=y^2$. So if we allowed $\frac{df}{dx}$ we should also allow $\frac{df}{dy}$ and $\frac{df}{dt}$ etc. Anything could be placed in the denominator and it should all denote the same. The notation is redundant and misleading. Corollary: do not write $\frac{df}{dx}$ for the derivative of a modern function. Simply write $f'$.
Does that mean we should never apply Leibniz notation to functions? No. For instance when $f$ depends on a parameter, say $f(x)=ax^2$, then $\frac{df}{da}$ is meaningful. Which raises the question of when exactly to use Leibniz notation. This is more subtle, as can be seen from these two discussions, but the summary is: $\frac{d}{dx}$ operates on functions of $x$ and not on functions. Examples of functions of $x$ are $y$, $f(x)$ and $x^2$, while $f$ is not a function of $x$.
Finally what's the problem with writing $y'$, $(f(x))'$ and $(x^2)'$? Observe that here we are applying $(\;)'$ to functions of something and not to functions. But this notation is incomplete, in that it doesn't make explicit with respect to which variable to differentiate. This is best illustrated with the chain rule: Suppose for instance $y=t^3$ and $t= \cos \phi$, then obviously $y=(\cos \phi)^3$. Now what should $y'$ mean? Is it $\frac{dy}{dt}$ or $\frac{dy}{d\phi}$? You might think that it becomes clear once we write $(t^3)'$ resp. $\left((\cos \phi)^3 \right)'$. But since $t^3=(\cos \phi)^3$ we would violate the principle of substituting equals for equals, if those two expressions had a different meaning. Sounds like very bad notation to me. Corollary: Do not write $y'$ for the derivative of a variable quantity with respect to another variable. Always use Leibniz notation.
One could of course insist on the convention that $y'$ always denotes derivative with respect to $x$. But this seems like bad practice from a didactical perspective. For one we would be using the same prime notation to denote two different things: derivative of a modern function as in $f'$ vs derivative of a function of $x$ as in $y'$. But we agree that $y$ and $f$ are objects of different types and (hopefully) want our students to understand that. Moreover in areas where most students will apply calculus (physics, engineering, economy etc), almost no variable is called $x$, so the convention would be of little use.
Michael Bächtold
There's a delightfully simple visual intuition.
Imagine you have the curve of $y=f\left(x\right)$ drawn for you.
Put your pencil on the $y$-intercept. Start moving to the right at 1 unit per second, but keep your pencil on the curve. At time $t$, you're at the point $\left(t, f\left(t\right)\right)$ and your vertical velocity is $f^\prime \left(t\right)$.
Now imagine there's a screen in front of you that displays "$x=$", followed by a real number. Internally, it uses a clock and a function $g$ to decide which number to display: $x = g\left(t\right)$. Currently, $t=0$. When you press "start", the clock starts running, and the number starts changing.
Put your pencil on the curve at $\left(x, f\left(x\right)\right)$ according to the display. Press "start", watch the number, and keep your pencil on the curve for the $x$-value shown.
Your vertical velocity is $f^\prime\left(x\right) g^\prime\left(t\right)$. It's the slope of the hill times your horizontal velocity of attacking the hill.
Note that because $x=g\left(t\right)$, your vertical velocity is $f^\prime\left(g\left(t\right)\right)g^\prime\left(t\right)$.
Also note that because $f^\prime\left(x\right)$ is the rate of change of $y$ with respect to $x$, and $g^\prime\left(t\right)$ is the rate of change of $x$ with respect to $t$, you can write your vertical velocity as: $\frac{dy}{dx} \frac{dx}{dt}$.
The rest of the challenge is knowing how to apply this rule. Specifically, knowing how to decompose a function, and recognizing when a function can be decomposed into two functions whose derivatives you already know. I think that just takes practice.
But while you're practicing, it's helpful to stay rooted in the above intuition. And it's helpful to remember that the equivalence between $f^\prime\left(g\left(t\right)\right)g^\prime\left(t\right)$ and $\frac{dy}{dx} \frac{dx}{dt}$ comes from the fact that they are simply two different ways of writing $f^\prime\left(x\right) g^\prime\left(t\right)$ where $y=f\left(x\right)$ and $x=g\left(t\right)$.
Jordan
3blue1brown has an excellent video about this. (Chain rule discussion starts at 8:40.)
|
CommonCrawl
|
Quasi-periodic solutions for nonlinear wave equation with Liouvillean frequency
Yanling Shi 1 and Junxiang Xu 2
College of Mathematics and Physics, Yancheng Institute of Technology, Yancheng 224051, China
Department of Mathematics, Southeast University, Nanjing 211189, China
* Corresponding author: Yanling Shi
Received April 2019 Published August 2020
Fund Project: The first author is partially supported by NSFC Grants (11801492, 61877052) and NSFJS Grant (BK 20170472). The second author is supported by NSFC Grant (11871146)
In this paper, the one-dimensional nonlinear wave equation
$$ u_{tt}-u_{xx} +mu +\varepsilon f(\omega t,x,u;\xi) = 0 $$
with Dirichlet boundary conditions is considered, where $\varepsilon$ is a small positive parameter, $\omega = \xi \bar{\omega}$, and $\bar{\omega}$ is a weak Liouvillean frequency. It is proved that there are many quasi-periodic solutions with Liouvillean frequency for the above equation. The proof is based on an infinite-dimensional KAM theorem.
Keywords: Wave equation, Liouvillean frequency, quasi-periodic solution, infinite dimensional KAM theory, Hamiltonian system, fractional Schrödinger equation system, Töplitz-Lipschitz property, normal form.
Mathematics Subject Classification: Primary: 35Q35; 37K55.
Citation: Yanling Shi, Junxiang Xu. Quasi-periodic solutions for nonlinear wave equation with Liouvillean frequency. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020241
Journal of Dynamics and Differential Equations
Structure and Stability of the Rhombus Family of Relative Equilibria under General Homogeneous Forces
Eduardo S. G. Leandro
Let \(N>2\) and \(n>1\). Among the classes of symmetric relative equilibria of the N-body problem whose symmetry group is one of the dihedral groups \(D_n\), the rhombus family stands out as the only noncollinear family for which the symmetry implies that the stability polynomial is fully factorizable. The present paper discusses basic properties and linear stability of the rhombus family assuming the forces of interaction depend on the mutual distances raised to an arbitrary real exponent \(2a+1\). In a suitable parameter plane, the family of rhombus relative equilibria forms a pencil of graphs which foliates the union of an open unit square and an open rectangle obtained from the unit square by a reflection and an inversion. We show that all rhombus relative equilibria are linearly stable if \(a>-1\), that they are all unstable for a in the interval bounded by \(-4-2\sqrt{2}\approx -6.82\) and \(4(\sqrt{3}-2)\approx -1.07\), and that stability and instability depend on mass values for the remaining values of a. These results impose limitations on the validity of Moeckel's dominant mass stability conjecture in the context of generalized N-body problems.
N-body problem · Relative equilibria · Linear stability
70F10 37N05 70Fxx 37Cxx
Appendix A: Diagonalization of \(D\nabla U_a(\varvec{x^{(0)}})\)
In this appendix we explain how group representation theory was applied to simplify the linear stability analysis of a rhombus relative equilibrium. References are [5, 12]. The key basic fact is that the rhombus is a configuration with a nontrivial symmetry group \(\Sigma \) which can be viewed as the dihedral group \(D_2\) generated by the reflections through the diagonals of the rhombus, or as Klein's four-group V, which is generated by the reflections through the perpendicular bisectors of opposite sides of the rhombus. As \(D_2\), \(\Sigma \) consists of rotations E and R about the midpoint of one of the diagonals by zero and 180 degrees, respectively, a reflection S across the same diagonal, and a reflection RS through the other diagonal. As explained in [5], it follows that the restriction B of the Hessian matrices \(D\nabla U_a(\varvec{x^{(0)}})\) of the potentials \(U_a\) to the space of planar displacements of a rhombus relative equilibrium \(\varvec{x^{(0)}}\) commutes with the elements of a group of isomorphisms of that space. This observation, together with our knowledge of some of the eigenvectors of the Hessian, can be used to fully diagonalize the matrices \(M^{-1}B\), as shown by our calculations in Sect. 3.2.
Let G be a finite group. A (linear) representation of G on the (complex or real) vector space \(\mathcal {V}\) is a homomorphism \(\rho \) from G into the group \(GL(\mathcal {V})\) of isomorphisms of \(\mathcal {V}\). A subrepresentation of \(\rho \) is the representation of G obtained by restricting \(\rho \) to a subspace \(\mathcal {W}\) of \(\mathcal {V}\) which is \(\rho \)-invariant, i.e., such that \(\rho (g)(\mathcal {W})\subseteq \mathcal {W}\) for all \(g \in G\). A representation is said to be irreducible if its only subrepresentations are the ones corresponding to the trivial subspaces \(\{O\}\) and \(\mathcal {V}\). The table below contains a description of the irreducible representations of \(D_2\).
The entries of the column headed by E indicate that \(\tau ,\alpha ,\phi ,\psi \) are one-dimensional representations.
Every complex representation \(\rho \) of the finite group G can be written (non-uniquely) as the direct sum of subrepresentations isomorphic to the irreducible representations of G. The direct sum of all copies of a given irreducible representation \(\varrho \) of G present in \(\rho \) is the space \(\mathcal {V}_{\varrho }\) of a subrepresentation of \(\rho \) called the \(\varrho \)-isotypic component of \(\rho \). For the irreducible representations \(\varrho _1, \ldots ,\varrho _t\) of the finite group G and a representation \(\rho :G \longrightarrow GL(\mathcal {V})\), representation theory provides a family of projection operators \(p_j\) whose images are precisely the isotypic components \(\mathcal {V}_{\varrho _j}\). Using the projections \(p_j\), we obtain the isotypic decomposition of \(\mathcal {V}\), namely
$$\begin{aligned} \mathcal {V}=\mathcal {V}_{\varrho _1} \oplus \cdots \oplus \mathcal {V}_{\varrho _t} . \end{aligned}$$
In the case of \(D_2\), we have that
$$\begin{aligned} p_{\tau }=&\frac{1}{4}[\rho (E)+\rho (R)+\rho (S)+\rho (RS)],&p_{\alpha }=&\frac{1}{4}[\rho (E)+\rho (R)-\rho (S)-\rho (RS)],\\ p_{\phi }=&\frac{1}{4}[\rho (E)-\rho (R)+\rho (S)-\rho (RS)],&p_{\psi }=&\frac{1}{4}[\rho (E)-\rho (R)-\rho (S)+\rho (RS)], \end{aligned}$$
where the coefficients of the linear combinations in the square brackets on the right-hand sides of the above expressions are precisely the entries of each row of Table 1.
We will apply the projections \(p_{\tau },p_{\alpha }, p_{\phi }, p_{\psi }\) associated to the representation of \(D_2\) on the space of planar displacements of the rhombus (identified with \(\mathbb {R}^8\)) defined by
$$\begin{aligned} \sigma (g)(\delta )=(g \delta _{g^{-1}(1)},g \delta _{g^{-1}(2)},g \delta _{g^{-1}(3)},g \delta _{g^{-1}(4)}), \end{aligned}$$
for all \(g \in D_2\), \(\delta =(\delta _1,\delta _2,\delta _3,\delta _4) \in \mathbb {R}^8\). In our notation, each \(\delta _j\) is a vector in a copy of \(\mathbb {R}^2\) whose origin is at the jth vertex of the rhombus. In order to understand the definition of \(\sigma \), notice that the action of each \(g \in D_2\) on the set of vertices of the rhombus produces a permutation of the vertices, i.e., g can be interpreted as a permutation of the indices 1, 2, 3, 4 of the vertices. As an element of \(\mathbb {R}^2\), each displacement \(\delta _j\) can be rotated or reflected according to \(g \in D_2\). Thus the definition of \(\sigma \) implies that each \(\sigma (g)\) corresponds to applying g as a rigid motion of the rhombus together with its displacement \(\delta \). The isotypic decomposition of the space of displacements is the direct sum of the images (isotypic components) \(\mathcal {V}_{\tau },\mathcal {V}_{\alpha },\mathcal {V}_{\phi },\mathcal {V}_{\psi }\) of \(p_{\tau },p_{\alpha }, p_{\phi }, p_{\psi }\). A simple application of character theory shows that the dimensions of the isotypic components are all equal to two, which is the common multiplicity of \(\tau ,\alpha ,\phi ,\psi \) in \(\sigma \).
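To make the construction concrete, here is a minimal NumPy sketch (our own illustration, not code from [5] or [12]). It assumes the vertex layout implied by (A.1): vertices 1 and 3 at \((\pm r,0)\), vertices 2 and 4 at \((0,\pm 1)\). It builds the matrices \(\sigma (g)\), forms the four projections from the rows of Table 1, and checks that \(p_{\tau } v_1\) is proportional to \(\delta _{\tau }\):

```python
import numpy as np

# Each element of D_2: (vertex permutation g^{-1}, 2x2 action on the plane).
# All four elements are involutions, so g^{-1} = g.
I2 = np.eye(2)
elements = {
    "E":  ([0, 1, 2, 3],  I2),                    # identity
    "R":  ([2, 3, 0, 1], -I2),                    # rotation by 180 degrees
    "S":  ([0, 3, 2, 1],  np.diag([1.0, -1.0])),  # reflection in the diagonal through vertices 1, 3
    "RS": ([2, 1, 0, 3],  np.diag([-1.0, 1.0])),  # reflection in the diagonal through vertices 2, 4
}

def sigma(perm, A):
    """8x8 matrix of sigma(g): (sigma(g) delta)_j = A delta_{g^{-1}(j)}."""
    M = np.zeros((8, 8))
    for j in range(4):
        src = perm[j]
        M[2 * j:2 * j + 2, 2 * src:2 * src + 2] = A
    return M

mats = {name: sigma(perm, A) for name, (perm, A) in elements.items()}

# Rows of Table 1 give the coefficients of the four projections.
chars = {"tau": (1, 1, 1, 1), "alpha": (1, 1, -1, -1),
         "phi": (1, -1, 1, -1), "psi": (1, -1, -1, 1)}
proj = {name: sum(c * mats[g] for c, g in zip(row, ("E", "R", "S", "RS"))) / 4
        for name, row in chars.items()}

r = 1.5                                   # illustrative shape parameter
v1 = np.array([r, 0, 0, 1, 0, 0, 0, 0])
delta_tau = np.array([r, 0, 0, 1, -r, 0, 0, -1])
print(np.allclose(proj["tau"] @ v1, delta_tau / 2))   # True: p_tau v1 = delta_tau / 2
```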
Before moving on, we introduce an Euclidean structure in the space of displacements of a rhombus relative equilibrium \(\varvec{x^{(0)}}\) using the masses, namely
$$\begin{aligned} \langle (\delta _1,\delta _2,\delta _3,\delta _4), (\varepsilon _1,\varepsilon _2,\varepsilon _3,\varepsilon _4)\rangle _M= m \delta _1\cdot \varepsilon _1+\delta _2\cdot \varepsilon _2 +m \delta _3\cdot \varepsilon _3+\delta _4\cdot \varepsilon _4, \end{aligned}$$
where the \(\cdot \) represents the usual dot product in \(\mathbb {R}^2\). It is not hard to verify that the projections \(p_{\tau },p_{\alpha }, p_{\phi }, p_{\psi }\) are orthogonal with respect to \(\langle ,\rangle _M\), or M-orthogonal for short. As a consequence, the isotypic decomposition
$$\begin{aligned} \mathbb {R}^8=\mathcal {V}_{\tau } \oplus \mathcal {V}_{\alpha } \oplus \mathcal {V}_{\phi } \oplus \mathcal {V}_{\psi } \end{aligned}$$
is an M-orthogonal decomposition of the space of displacements of \(\varvec{x^{(0)}}\).
The irreducible representations of the group \(D_2\)

             E     R     S     RS
\(\tau \)    1     1     1     1
\(\alpha \)  1     1    -1    -1
\(\phi \)    1    -1     1    -1
\(\psi \)    1    -1    -1     1
Let us firstly determine \(\mathcal {V}_{\tau }\). If we apply \(p_{\tau }\) to the displacement \(v_1=(r,0,0,1,0,0,0,0)\), we obtain \(\delta _{\tau }=\varvec{x^{(0)}}\) viewed as a displacement. By applying \(p_{\tau }\) to \(w_1=(1/(mr),0,0,-1,0,0,0,0)\), we obtain a displacement \(\varepsilon _{\tau }\) which, together with \(\delta _{\tau }\), forms an M-orthogonal basis of \(\mathcal {V}_{\tau }\). Figure 3 of Sect. 3.2 depicts the images of \(v_1\) and \(w_1\) under \(p_{\tau }\).
In order to determine \(\mathcal {V}_{\alpha }\), we apply a rotation of 90 degrees to the copies of \(\mathbb {R}^2\) at each vertex of the rhombus. As in Sect. 3.2, denote by J the corresponding linear map. We have that
$$\begin{aligned} J \circ \sigma (g) = \sigma (g) \circ J, \ \ \text {if }g=E,R, \ \ \text {and}\ \ J \circ \sigma (g) = - \sigma (g) \circ J, \ \ \text {if }g=S,RS. \end{aligned}$$
These simple relations imply the following relations among the projections
$$\begin{aligned} J \circ p_{\tau } = p_{\alpha } \circ J, \quad \text {and} \quad J \circ p_{\phi } = p_{\psi } \circ J. \end{aligned}$$
Thus an M-orthogonal basis of \(\mathcal {V}_{\alpha }\) can be produced simply by applying J to the displacements forming the M-orthogonal basis of \(\mathcal {V}_{\tau }\) determined in the previous paragraph. We illustrated the basis of \(\mathcal {V}_{\alpha }\) formed by \(\delta _{\alpha }=J \delta _{\tau }\) and \(\varepsilon _{\alpha }=J \varepsilon _{\tau }\) in Fig. 4, Sect. 3.2.
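Continuing the sketch above (it reuses the `proj` dictionary computed there), the intertwining relations can be verified numerically:

```python
import numpy as np

# J rotates the displacement at every vertex by 90 degrees; in the ordering
# used above it is block-diagonal with 2x2 rotation blocks.
Jblock = np.array([[0.0, -1.0], [1.0, 0.0]])
J = np.kron(np.eye(4), Jblock)

print(np.allclose(J @ proj["tau"], proj["alpha"] @ J))   # J p_tau = p_alpha J
print(np.allclose(J @ proj["phi"], proj["psi"] @ J))     # J p_phi = p_psi J
```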
Next we apply the same procedure in order to obtain M-orthogonal bases for \(\mathcal {V}_{\phi }\) and \(\mathcal {V}_{\psi }\), with \(v_1,w_1\) replaced with \(v_2=(1,0,1,0,0,0,0,0)\) and \(w_2=(1/m,0,-1,0,0,0,0,0)\). By applying \(p_{\phi }\) to \(v_2\) and \(w_2\), we obtain the displacements \(\delta _{\phi }\) and \(\varepsilon _{\phi }\) on the first two columns of Fig. 5, Sect. 3.2. Notice \(\delta _{\phi }\) and \(\delta _{\psi }=J\delta _{\phi }\) correspond respectively to horizontal and vertical translations of the rhombus.
We collect the displacements determined in the previous paragraphs and reorder them so that J keeps its block-diagonal matrix form. The basis of the space of planar displacements of the rhombus thus obtained consists of the displacements
$$\begin{aligned} \delta _{\tau }&= (r,0,0,1,-r,0,0,-1), &\text{(A.1)}\\ \delta _{\alpha }&= (0,r,-1,0,0,-r,1,0), &\text{(A.2)}\\ \varepsilon _{\tau }&= (1/(mr),0,0,-1,-1/(mr),0,0,1), &\text{(A.3)}\\ \varepsilon _{\alpha }&=(0,1/(mr),1,0,0,-1/(mr),-1,0), &\text{(A.4)}\\ \delta _{\phi }&=(1,0,1,0,1,0,1,0), &\text{(A.5)}\\ \delta _{\psi }&=(0,1,0,1,0,1,0,1), &\text{(A.6)}\\ \varepsilon _{\phi }&=(1/m,0,-1,0,1/m,0,-1,0), &\text{(A.7)}\\ \varepsilon _{\psi }&=(0,1/m,0,-1,0,1/m,0,-1). &\text{(A.8)} \end{aligned}$$
It can be deduced from theory [5] that vectors (A.1)–(A.8) must form a basis of eigenvectors of B and \(M^{-1}B\), where M is the matrix that defines the inner product \(\langle ,\rangle _M\). This basis can be found in previous articles, see for instance [10]; however, the deeper connection with the symmetry of the rhombus has not been clarified as far as we know.
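As a quick numerical cross-check, the following self-contained sketch (with illustrative values of m and r) verifies that the Gram matrix of (A.1)–(A.8) under \(\langle ,\rangle _M\) is diagonal, i.e., that the basis is pairwise M-orthogonal:

```python
import numpy as np

m, r = 2.0, 1.5                                  # illustrative mass and shape values
M = np.diag([m, m, 1, 1, m, m, 1, 1])            # matrix of <,>_M in the ordering above
basis = np.array([
    [r, 0, 0, 1, -r, 0, 0, -1],                  # delta_tau     (A.1)
    [0, r, -1, 0, 0, -r, 1, 0],                  # delta_alpha   (A.2)
    [1/(m*r), 0, 0, -1, -1/(m*r), 0, 0, 1],      # epsilon_tau   (A.3)
    [0, 1/(m*r), 1, 0, 0, -1/(m*r), -1, 0],      # epsilon_alpha (A.4)
    [1, 0, 1, 0, 1, 0, 1, 0],                    # delta_phi     (A.5)
    [0, 1, 0, 1, 0, 1, 0, 1],                    # delta_psi     (A.6)
    [1/m, 0, -1, 0, 1/m, 0, -1, 0],              # epsilon_phi   (A.7)
    [0, 1/m, 0, -1, 0, 1/m, 0, -1],              # epsilon_psi   (A.8)
]).T                                             # columns are the basis vectors
G = basis.T @ M @ basis                          # Gram matrix under <,>_M
print(np.allclose(G, np.diag(np.diag(G))))       # True: the basis is M-orthogonal
```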
1. Albouy, A.: On a paper of Moeckel on central configurations. Regul. Chaotic Dyn. 8(2), 133–142 (2002)
2. Albouy, A., Cabral, H.E., Santos, A.A.: Some problems on the classical n-body problem. Celest. Mech. Dyn. Astron. 113, 369–375 (2012)
3. Albouy, A., Fu, Y., Sun, S.: Symmetry of planar four-body convex central configurations. Proc. R. Soc. A 464, 1355–1365 (2008)
4. Brumberg, V.A.: Permanent configurations in the problem of four bodies and their stability. Sov. Astron. 1, 57–79 (1957)
5. Leandro, E.S.G.: Factorization of the stability polynomials of ring systems. arXiv:1705.02701v1 [math.DS] (2017)
6. Longley, W.R.: Some particular solutions in the problem of n bodies. Bull. Am. Math. Soc. 13(7), 324–335 (1907)
7. MacMillan, W.D., Bartky, W.: Permanent configurations in the problem of four bodies. Trans. Am. Math. Soc. 34, 838–875 (1932)
8. Moeckel, R.: Linear stability analysis of some symmetrical classes of relative equilibria. In: Dumas, H.S., Meyer, K.R., Schmidt, D.S. (eds.) Hamiltonian Dynamical Systems: History, Theory and Applications, IMA, vol. 63, pp. 291–317. Springer, New York (1995)
9. Roberts, G.: Linear stability of the 1+n-gon relative equilibrium. In: Delgado, J., Lacomba, E.A., Pérez-Chavela, E., Llibre, J. (eds.) Hamiltonian Systems and Celestial Mechanics (HAMSYS-98), World Scientific Monograph Series in Mathematics, vol. 6, pp. 303–330. World Scientific Publishing Co., Singapore (2000)
10. Roberts, G.: Stability of relative equilibria in the planar n-vortex problem. SIAM J. Appl. Dyn. Syst. 12(2), 1114–1134 (2013)
11. Schmidt, D.S.: Central configurations and relative equilibria for the N-body problem. In: Cabral, H., Diacu, F. (eds.) Classical and Celestial Mechanics, The Recife Lectures, pp. 1–34. Princeton University Press, Princeton (2002)
12. Serre, J.-P.: Linear Representations of Finite Groups. Springer, New York (1997)
© Springer Science+Business Media, LLC, part of Springer Nature 2018
1. Depto de Matemática, Universidade Federal de Pernambuco, Recife, Brazil
Leandro, E.S.G. J Dyn Diff Equat (2019) 31: 933. https://doi.org/10.1007/s10884-018-9687-6
Measuring the Degree of Content Immersion in a Non-experimental Environment Using a Portable EEG Device
Nam-Ho Keum*, Taek Lee**, Jung-Been Lee*** and Hoh Peter In***
Corresponding Author: Hoh Peter In*** ([email protected])
Nam-Ho Keum*, Information Technology Management Division, Agency for Defense Development, Daejeon, Korea, [email protected]
Taek Lee**, College of Knowledge-Based Services Engineering, Sungshin University, Seoul, Korea, [email protected]
Jung-Been Lee***, Dept. of Computer Science, Korea University, Seoul, Korea, [email protected]
Hoh Peter In***, Dept. of Computer Science, Korea University, Seoul, Korea, [email protected]
Received: January 13 2016
Revision received: November 16 2016
Accepted: March 23 2017
Abstract: As mobile devices such as smartphones and tablet PCs become more popular, users are becoming accustomed to consuming a massive amount of multimedia content every day without time or space limitations. From the industry, the need for user satisfaction investigation has consequently emerged. Conventional methods to investigate user satisfaction usually employ user feedback surveys or interviews, which are considered manual, subjective, and inefficient. Therefore, the authors focus on a more objective method of investigating users' brainwaves to measure how much they enjoy their content. Particularly for multimedia content, it is natural that users will be immersed in the played content if they are satisfied with it. In this paper, the authors propose a method of using a portable and dry electroencephalogram (EEG) sensor device to overcome the limitations of the existing conventional methods and to further advance existing EEG-based studies. The proposed method uses a portable EEG sensor device that has a small, dry (i.e., not wet or adhesive), and simple sensor using a single channel, because the authors assume mobile device environments where users consider the features of portability and usability to be important. This paper presents how to measure attention and compute a score of a user's content-immersion level after addressing some technical details related to adopting the portable EEG sensor device. Lastly, via an experiment, the authors verified a meaningful correlation between the computed scores and the actual user satisfaction scores.
Keywords: Automated Collection , BCI , Measurement of Immersion , Noise Filtering , Non-experimental Environment , Portable EEG
With the recent development of mobile devices such as smartphones, demand for multimedia content consumption is no longer limited by time and space. A massive amount of content is currently produced and consumed compared to the past. Along with this trend, the need to measure user satisfaction and the degree of content immersion has emerged as an important issue [1], and providing an optimized/automated solution to measure user satisfaction, as well as to collect the entailed big data, has emerged as a lucrative business opportunity. The currently available solutions for measuring content immersion are typically question-based surveys [2,3], including work on redefining the evaluation scope to improve survey quality [4]. However, these non-automated methods have limitations on collecting large amounts of data.
Recently, in order to overcome the limitations of the existing survey-based studies, some studies have attempted to measure content immersion using biological signals; however, because this approach involves the use of high-end multichannel electroencephalogram (EEG) equipment [5,6], its implementation is not practical in actual environments where mobile users consume multimedia content.
Conversely, by using a portable EEG device, large amounts of data can be obtained easily without any content consumer intervention. Although the EEG device enables analysts to objectively gather information on consumer attention, the extraction of brainwaves relevant to the content immersion measurement is a complex technological implementation. This is because portable EEG devices are used normally in non-ideal conditions (i.e., implemented with non-adhesive electrodes and a single channel with a battery, and contaminated by noise from wireless communication operations).
In this study, the authors developed a system to measure the level of a user's content immersion in entertainment multimedia content with a portable EEG sensor device that can automatically and objectively observe brain states, whereas the existing methods rely on a manual and subjective conventional approach based on surveys or interviews. To address the noise problems from a portable EEG sensor, the authors specifically used the online singular spectrum analysis (SSA) algorithm [7,8], a powerful real-time noise-filtering method for raw signals that have a low signal-to-noise ratio (SNR) and a limited number of sensor channels; most other existing filtering algorithms are designed for multi-channel sensors and offline processing environments. The authors also used the median-of-medians algorithm to estimate lost signal values. As a result, the authors could remove noise and isolate brainwaves from the noise-ridden signal output of the portable EEG device. The frequency analysis of the pre-processed EEG signal was used to measure the degree of user content immersion. This study conclusively demonstrates that content immersion measurement using a portable EEG device performed as well as expected, presenting positive experimental results.
This paper consists of five sections: Section 1 presents an overview of the study, Section 2 explains the related work, Section 3 introduces the proposed solution approach, Section 4 presents the experimental results, and Section 5 concludes the paper with a summary.
2. Related Work
One of the most significantly studied areas related to content quality is improvement in survey questions and survey configuration to gather accurate and meaningful feedback [2-4]. However, a recent trend is to avoid the survey methods used previously that prompt the user to provide a response, causing inconvenience, and instead measure various biological signals that occur when a user views the content. There is a movement towards developing a method that automatically measures the quality of the content [9-12]. These studies are limited to laboratory environments. Therefore, this study suggests using a portable device that is applicable to any mobile environment, so feedback is not limited to a lab environment.
Analytical studies on human brainwaves and concentration are also an active topic. Brainwaves are electrical signals that are generated from the activity of the neurons in the brain, and are important biological signals that can be used to evaluate the activity state of the human brain. As shown in Table 1, the activity state of the brain can be categorized on the basis of the brainwave frequency ranges.
According to Table 1, brainwaves during the concentration state are defined by sensory motor rhythm (SMR), beta, and high beta brainwaves because concentration measurement is a critical issue in neurofeedback training for improving brain-computer interfaces (BCIs), user evaluation, and other applications [13]. A study confirmed the concentration state experimentally [14,15]. It was reported that there is a high correlation between the concentration state of the brain and the beta, SMR, and high beta waves.
Frequency ranges of brainwaves and the characteristics of the activity state of the human brain
Research on noise reduction in EEG measurement is also under way. EEG studies in the past used adhesive or implanted electrodes to obtain signals in a lab experiment environment, and there was no particular mention of interfering noise that could occur in everyday environments. Previous studies on EEG noise removal did not consider the removal of external noises (e.g., a user's arbitrary actions and white noise); instead, they focused on the removal of conflicting noise signals occurring during EEG measurements, such as separating and removing biological signals, which could only be executed using multichannel EEG methods [16,17].
Portable EEG measuring devices have been introduced recently, and studies on removing other unwanted biological noise from single electrodes are underway. For example, methods to remove electrooculography (EOG) and electrocardiography (ECG) signals from the EEG signal are being studied.
3. Proposed Signal Processing
In this section, the authors explain how to acquire EEG signals, measure content immersion, and handle exceptional signal patterns.
3.1 EEG Signal Acquisition and Content Immersion Measurement
The authors assume EEG data is collected over wireless communications from mobile devices in practical situations (i.e., not lab-testing environments). Therefore, not only are unwanted biological signals removed, which were included in the EEGs in the previous studies, but the signal loss due to other everyday noise is also removed from the single electrode to extract a pure EEG signal. Later, the signal is separated into frequency ranges to numerically compute the immersion. The process steps are summarized in Fig. 1.
In the filtering process of Fig. 1, first, the original EEG is collected in real time. At this time, the device used for collecting the EEG should not restrict the freedom of time and space. To meet this requirement, a portable device must be used and it must be a device that has non-adhesive electrodes and can amplify up to 1 μV potential in the 0.1–100 Hz band, which is a general requirement of the EEG; the authors used the NeuroSky MindSet product and a Samsung Galaxy Note 3 device in the experiment. The collected data (Fig. 2) shows that the EEG signal is mixed with other biological signals and noise due to errors in the mobile device.
Filtering process.
Original EEG signal mixed with other biological signals and noise.
The next step is the signal recovery process. As shown in Fig. 2, a steep slope change is observed at the point where the signal is lost. So, it is necessary to remove these parts in order to restore the original signal, and the first differential is used for this purpose. By calculating the first differential of this signal, the steep slope is eliminated according to Eq. (1). This is followed by signal restoration through integration, as shown in Fig. 3, to extract the primary filtered data.
$$ \frac{df(t)}{dt} = \begin{cases} \dfrac{\frac{df(t-1)}{dt}+\frac{df(t+1)}{dt}}{2}, & \text{if } \left|\dfrac{df(t)}{dt}\right| > 100, \\[6pt] \dfrac{df(t)}{dt}, & \text{if } \left|\dfrac{df(t)}{dt}\right| \leq 100. \end{cases} \tag{1} $$
Primary filtered data.
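As a rough illustration of this recovery step, the following Python sketch (the threshold of 100 follows Eq. (1); the synthetic test signal and names are ours) clips steep first differences and re-integrates:

```python
import numpy as np

def despike(raw, threshold=100.0):
    """Eq. (1): replace steep first differences by the mean of their
    neighbours, then re-integrate to recover the signal."""
    d = np.diff(raw.astype(float))
    for i in np.flatnonzero(np.abs(d) > threshold):
        lo = d[i - 1] if i > 0 else 0.0
        hi = d[i + 1] if i + 1 < len(d) else 0.0
        d[i] = (lo + hi) / 2.0
    return raw[0] + np.concatenate(([0.0], np.cumsum(d)))

# Example: a clean 10 Hz tone with a simulated signal-loss artefact.
t = np.linspace(0, 1, 512)
x = 50 * np.sin(2 * np.pi * 10 * t)
x[200:203] -= 800                       # artificial dropout
recovered = despike(x)
```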
After Step 2, most of the noise due to signal loss has been removed, but this signal still contains excessive noise from unwanted biological signals such as EOG and ECG, and from other electronic devices. Thus, noise filtering was necessary in the third step, using an online SSA algorithm that is effective for real-time processing on a single electrode [7]. After filtering, the pure EEG signal is obtained as shown in Fig. 4.
Signal after applying the online SSA algorithm.
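For readers who wish to experiment, here is a batch (offline) SSA sketch; the paper uses the online variant of [7], and the window length and rank below are illustrative assumptions:

```python
import numpy as np

def ssa_denoise(x, window=64, rank=4):
    """Basic SSA: Hankel embedding, rank truncation, diagonal averaging."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[j:j + window] for j in range(k)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]               # keep leading components
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):                                         # Hankelization
        out[j:j + window] += X_low[:, j]
        counts[j:j + window] += 1
    return out / counts

# e.g. denoised = ssa_denoise(recovered) with the output of the previous sketch
```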
Finally, because the baseline dropped to approximately −400 due to the previously applied algorithm, by removing the signal below 0.1 Hz to return the baseline to 0, the final signal is extracted as shown in Fig. 5.
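A minimal sketch of this baseline correction, assuming SciPy and a 512-Hz sampling rate (our assumption for the raw output rate, not a figure stated in the paper):

```python
from scipy.signal import butter, filtfilt

def remove_baseline(x, fs=512.0, cutoff=0.1):
    """Zero-phase high-pass filter removing content below `cutoff` Hz."""
    b, a = butter(2, cutoff / (fs / 2.0), btype="highpass")
    return filtfilt(b, a, x)
```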
Pure EEG signal.
Moreover, as reported in [14], when the subject is in a state of concentration, the alpha wave decreases by an average of 2.38%, and the beta, SMR, and high beta waves increase by 4.16%, 6.47%, and 7.49%, respectively. The numerical calculation used in the study is as follows:
$$ \text{Immersion brainwave value} = \sum (\beta, \text{SMR}, \text{high-}\beta \text{ waves}) - \sum \alpha \text{ waves}, $$
$$ \text{Immersion score} = \left( \frac{\text{this experiment value} - \text{entire experiment min value}}{\text{entire experiment max value} - \text{entire experiment min value}} \right) \times 10. \tag{2} $$
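Eq. (2) can be sketched as follows (band edges follow Table 1; the use of Welch's method and the 512-Hz rate are our assumptions, not the paper's stated implementation):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"alpha": (8, 12), "smr": (12, 15), "beta": (15, 20), "high_beta": (20, 35)}

def band_power(x, fs, lo, hi):
    f, p = welch(x, fs=fs, nperseg=min(len(x), int(2 * fs)))
    sel = (f >= lo) & (f < hi)
    return np.trapz(p[sel], f[sel])

def immersion_value(x, fs=512.0):
    p = {name: band_power(x, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
    return (p["beta"] + p["smr"] + p["high_beta"]) - p["alpha"]

def immersion_score(value, vmin, vmax):
    """Eq. (2): rescale using the min/max over the entire experiment."""
    return (value - vmin) / (vmax - vmin) * 10.0
```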
3.2 Configuration for Exception Handling
The proposed method is intended to be implemented not in a lab environment but in actual mobile environments. Consequently, many technical problems in handling exceptional signal patterns must be addressed. The first is defective connectivity of the electrode; if the electrodes of an EEG sensor device are not perfectly placed on the subject's head, device connectivity and its working may be incomplete.
When using adhesive and implanted electrodes, there is only a small chance of defective connectivity; however, the portable EEG adopted in this study uses a non-adhesive electrode. Therefore, in the event of sudden movement, yawning, or even head scratching, the adhesion state of the portable EEG can momentarily change, creating constant states of defective connectivity (Fig. 6).
Defective connectivity state signal.
The defective connectivity state is a result of the amplification of the 60-Hz waveform of other electronic devices; however, it is a good rule of thumb to exclude the section with defective connectivity in order to derive accurate data. It is possible to filter the signal using a notch filter that removes only the 60-Hz waveform. Therefore, frequency analysis is carried out before executing the filtering process, and if the 60-Hz component is more than 10-fold larger than the other waveforms, as shown in Eq. (3), the measurement values of the corresponding section are excluded.
$$ \text{if } F(60\,\text{Hz}) > \operatorname{average}\big(F(10\text{--}50\,\text{Hz})\big) \times 10 \text{ then \{exclude value\} else \{keep going\}}. \tag{3} $$
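A sketch of the Eq. (3) test (FFT-based; the window handling and names are illustrative):

```python
import numpy as np

def defective_connectivity(x, fs=512.0):
    """Eq. (3): True when the 60 Hz component exceeds 10x the 10-50 Hz mean."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mains = spec[np.argmin(np.abs(freqs - 60.0))]
    baseline = spec[(freqs >= 10) & (freqs <= 50)].mean()
    return mains > 10.0 * baseline
```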
The second exception taken into consideration is the case where a subject is not viewing the content. In ideal conditions, the test (measuring the brainwaves of subjects) is conducted under the assumption that the subject is concentrating on the content; however, in actual situations, there are many reasons why the subject is not paying attention to the content. A possible situation is when the subject leaves the area of content consumption. In the case where the subject takes off the portable EEG device and moves away, a defective connectivity signal can be transmitted. Such a situation is excluded in the analysis. In the case where the subject physically leaves the EEG signal receiver device, the signal goes to 0. However, this exceptional case can be easily determined, as shown in Eq. (4), and excluded from the analysis.
$$ \text{if } \sum_{x=t}^{t+10} f(x) = 0 \text{ then \{exclude value\} else \{keep going\}}. \tag{4} $$
If the subject stays in the area of consumption but does not pay attention to the content, several other data points must be evaluated against the baseline. This is carried out using the relationship between blink rate and concentration: if the subject blinks too much, the subject is not focusing on the content, and that signal part is excluded [18]. By numerically scoring the number of blinks, the measurement can be used alongside Eq. (2) in the evaluation of immersion. However, if the subject averages more than 40 blinks per minute, as shown in Eq. (5), it is assumed that the subject is not watching the content and is engaged in a different activity, so the corresponding signal part is excluded. The blink signature disappears once the proposed filtering method is applied, so the blink counts provided by the hardware are used for evaluation.
$$ \text{if } \operatorname{average}_{\text{per min}}(\text{blink count}) > 40 \text{ then \{exclude value\} else \{keep going\}}. \tag{5} $$
The case where the subjects have their eyes open but are not focusing on the content and enter a delusional state must also be considered as an exception. In Table 1, the theta brainwave is defined as the brainwave associated with delusion. As reported in [19], theta waves are released in a dreamlike state in which the eyes can move, as in paradoxical sleep, while the subject is thinking about other things. To confirm the reported clue, a comparison is performed between the measured brainwaves when the subject is forced to think about something else, when the subject is immersed in the visual stimulation, and when the subject dozes off. The comparison shows that theta waves take up more than 30% of the brainwave activity when the subject is thinking about something else, as shown in Fig. 7.
Ratio comparison of theta waves.
Theta waves can also increase while content immersion is being measured; in other words, the fact that the subject is not watching the content and is thinking about something else could, in principle, be included as a factor when calculating the immersion score. However, the setting is neither a test environment designed solely to examine the personal state of the subject, nor an environment set up solely for content viewing. Therefore, for the signal parts with a theta ratio of more than 30% (Eq. (6)), it is determined that the content is not being watched, and the signal part is excluded.
$$ \text{if } \frac{\sum_{x=4}^{8} F(x)}{\sum_{x=1}^{30} F(x)} \times 100 > 30 \text{ then \{exclude value\} else \{keep going\}}. \tag{6} $$
The final situation that is handled as an exception is when the subject falls asleep. As with the delusion criterion configured in the previously discussed exception, drowsiness could be attributed to content that lacks a degree of immersion. However, because the study fundamentally considers only situations in which the subjects actually watch the content, those signal parts are excluded. The theta-to-alpha brainwave ratio has already been examined in an existing study [20]; compared to the alert state, the ratio is significantly lower in the drowsy state, as shown in Fig. 8.
Ratio of theta wave to alpha wave.
However, because a comparison of absolute values (e.g., the ratio dropping below 2) involves excessive individual variation, the study defined the state when the content is first engaged as the alert state, and the state where the theta-to-alpha brainwave ratio drops to 80% of the alert-state value as the drowsy state. Moreover, as shown in Eq. (7), the values recorded during the drowsy state are excluded from the test values.
$$ \text{if } \frac{\sum_{x=4}^{8} F(x) \big/ \sum_{x=8}^{12} F(x)}{\left[ \sum_{x=4}^{8} F(x) \big/ \sum_{x=8}^{12} F(x) \right]_{\text{alert state}}} \times 100 < 80 \text{ then \{exclude value\} else \{keep going\}}. \tag{7} $$
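Taken together, Eqs. (4)–(7) act as per-epoch exclusion rules. A consolidated sketch (the epoch interface and names are ours) might look like:

```python
import numpy as np

def exclude_epoch(x, blinks_per_min, theta, alpha, total_power,
                  alert_theta_alpha_ratio):
    """True if the epoch should be dropped under Eqs. (4)-(7)."""
    if np.all(x == 0):                                        # Eq. (4): device removed
        return True
    if blinks_per_min > 40:                                   # Eq. (5): not watching
        return True
    if theta / total_power * 100 > 30:                        # Eq. (6): delusional state
        return True
    if (theta / alpha) / alert_theta_alpha_ratio * 100 < 80:  # Eq. (7): drowsy
        return True
    return False
```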
The reason for configuring and excluding this exception in this manner becomes clear when examining the actual numerical data from the test. Fig. 9 shows the brainwave values of subjects with an average content-immersion score of 0.74 who fell asleep during content consumption.
Numerical immersion score of sleeping subjects.
Data comparison before and after the removal of exception parts.
From the data of Fig. 9, it can be observed that the immersion brainwave numerical data of the subjects is especially high during sections where they fall asleep. This is because once the subject enters the rapid eye movement (REM) state of sleep, the brainwave state becomes similar to that of someone who is awake, which causes increased Beta brainwave activity [13,21]. Therefore, if that signal part is not removed, the data will be distorted. Fig. 10 shows a comparison of the exception parts before and after they are removed.
Because the subjects were very tired and unable to concentrate on the content, the data should generally show a lower-than-average immersion score. When the exception parts are included, the sleep-state data distort the numerical measurements, and the corresponding results show a misleadingly high immersion score. On the other hand, when the exception parts are excluded, the results show an immersion score lower than the average score predicted for the subjects.
Therefore, in the test conducted in the study, before calculating the numerical immersion score of Eq. (2), the exception situations of Eqs. (3)–(7) are determined, and accordingly, the data with those conditions are excluded before conducting the experiment.
4. Experiment
This section explains the experimental settings and presents the analyzed results.
4.1 Experiment Setup
To verify the validity of the immersion brainwave measurement calculated using Eq. (2), the subjects were exposed to entertaining movie content and boring movie content, and using the proposed Eq. (2), the EEG was collected and a correlation analysis was conducted.
To classify the contents as entertaining or boring, the top three movies in the NAVER movie ratings with more than 300 reviewers (95% confidence level, ±5.65%) were selected as entertaining content, and the bottom three movies were categorized as boring content. These movies were shown to the subjects without giving any prior information about them.
After the subjects watched the movies, scores on the entertaining factor of the movies were surveyed, and then they were compared with those obtained through previous immersion measurement methods.
4.2 Experiment Results
The experiment results indicated that, when content is shown with no prior information given, as shown in Fig. 11, the immersion brainwave measurement was nearly identical for all content for the first 10 minutes, at an average of 0.06, showing that the brainwave was maintained at a higher level than in the general state. When the subjects watched a funny movie, the immersion was either maintained or increased. However, when the subjects watched a boring movie, the immersion dropped dramatically to a negative value.
By numerically representing the data according to Eq. (2), the scores obtained from the EEG measurement, the survey, and the NAVER movie ratings were similar, as shown in Table 2.
The comparison in Table 2 shows that the immersion score computed by the proposed method is as accurate as the previous survey methods and the NAVER ratings. In addition, the proposed method has advantages over the existing methods: when the non-automated survey was carried out, an average of 5 minutes was required to fill out the questionnaire, whereas the proposed method required no additional time to measure the degree of immersion objectively and accurately. The properties of automation, low time cost, and freedom from survey disturbance, which the proposed method supports but the others do not, are critical in mobile environments.
Immersion brainwave values while watching a movie.
Comparison of immersion scores
5. Conclusion
This paper confirmed that it is possible to measure the degree of content immersion from users' EEG brainwaves. Brainwaves are extracted from the portable EEG device signals. Because the portable EEG device consists of a non-adhesive electrode and is powered by a single-channel battery, raw signals from the device are highly contaminated by noise owing to wireless communication operations in the environment. We addressed these noise problems in this study. Our proposed method is an automated technique that significantly reduces the time required for collecting user survey feedback, which makes it easily applicable to large-scale big data analysis. Our proposed method of collecting and analyzing information from brainwave signals can significantly contribute to the design of a user-adaptive content recommendation system.
This research was supported by Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (No. 2012M3C4A7033345).
Nam-Ho Keum
He received his M.S degree in Computer and Information Technology at Korea University in Seoul, Korea. His research interests include ICT-healthcare convergence, machine learning, cloud-based data center architecture, information security.
Taek Lee
He received his Ph.D. degree in Computer Science and Engineering at Korea University in Seoul, Korea. He received his M.Sc. in Computer Science and Engineering at Korea University in 2006. His research interests include ICT-healthcare convergence, machine learning and data mining, user behavior modeling in software systems, software defect prediction, information security, and information risk analysis.
Jung-Been Lee
He is a Ph.D. candidate in the Department of Computer Science and Engineering at Korea University in Seoul, Korea. His major areas of study are self-adaptive software, software architecture evaluation, and potential defect analysis. He received the M.S. degree in Computer Science and Engineering from Korea University in 2011.
Hoh Peter In
He received his Ph.D. degree in Computer Science from the University of Southern California (USC). He was an Assistant Professor at Texas A&M University. At present, he is a professor in the Department of Computer Science and Engineering at Korea University in Seoul, Korea. He is an editor of the EMSE and TIIS journals. His primary research interests are software engineering, social media platform and services, and software security management. He earned the most influential paper award for 10 years at ICRE 2006. He has published over 100 research papers.
1 F. Y. Chen, S. H. Chen, "Application of importance and satisfaction indicators for service quality improvement of customer satisfaction," International Journal of Services Technology and Management, vol. 20, no. 1-3, pp. 108-122, 2014. doi: 10.1504/IJSTM.2014.063567
2 J. Y. Hwang, E. B. Lee, "A review of studies on the service quality evaluation of digital libraries: on the basis of evaluation models and measures methodologies," Journal of Korean Library and Information Science Society, vol. 40, no. 2, pp. 1-23, 2009. doi: 10.16981/kliss.40.2.200906.243
3 A. Maitland, S. Presser, "How accurately do different evaluation methods predict the reliability of survey questions?," Journal of Survey Statistics and Methodology, vol. 4, no. 3, pp. 362-381, 2016. doi: 10.1093/jssam/smw014
4 M. R. Seo, "Validation in emotional evaluation system as game evaluation tool - focused on comparison with Jakob Nielson's evaluation system," The Journal of the Korea Contents Association, vol. 7, no. 8, pp. 86-93, 2007.
5 C. H. Lee, J. W. Kwon, J. E. Hong, D. H. Lee, "A study on EEG based concentration power index transmission and brain computer interface application," in World Congress on Medical Physics and Biomedical Engineering, Heidelberg: Springer, 2009, pp. 537-539. doi: 10.1007/978-3-642-03882-2_142
6 R. R. Wehbe, L. Nacke, "An introduction to EEG analysis techniques and brain-computer interfaces for games user researchers," in Proceedings of the 2013 DiGRA International Conference: DeFragging Game Studies, Atlanta, GA, 2013, pp. 1-16.
7 H. Shin, S. Lee, H. Kim, J. Kang, K. Lee, "Extracting signals from noisy single-channel EEG stream for ubiquitous healthcare applications," Journal of Internet Technology, vol. 13, no. 1, pp. 85-94, 2012.
8 D. S. Broomhead, G. P. King, "Extracting qualitative dynamics from experimental data," Physica D: Nonlinear Phenomena, vol. 20, no. 2-3, pp. 217-236, 1986. doi: 10.1016/0167-2789(86)90031-x
9 J. J. Allen, W. G. Iacono, R. A. Depue, P. Arbisi, "Regional electroencephalographic asymmetries in bipolar seasonal affective disorder before and after exposure to bright light," Biological Psychiatry, vol. 33, no. 8, pp. 642-646, 1993. doi: 10.1016/0006-3223(93)90104-l
10 C. Muhl, A. M. Brouwer, N. C. van Wouwe, E. L. van den Broek, F. Nijboer, D. K. Heylen, "Modality-specific affective responses and their implications for affective BCI," in Proceedings of the 5th International Brain-Computer Interface Conference, Graz, Austria, 2011, pp. 120-123.
11 J. M. Ryu, S. B. Park, J. K. Kim, "A study of the reactive movement synchronization for analysis of group flow," Journal of Intelligence and Information Systems, vol. 19, no. 1, pp. 79-94, 2013. doi: 10.13088/jiis.2013.19.1.079
12 C. Amo, M. O. del Castillo, R. Barea, L. de Santiago, A. Martinez-Arribas, P. Amo-Lopez, L. Boquete, "Induced gamma-band activity during voluntary movement: EEG analysis for clinical purposes," Motor Control, vol. 20, no. 4, pp. 409-428, 2016. doi: 10.1123/mc.2015-0010
13 Wikipedia, "Electroencephalography," [Online]. Available: https://en.wikipedia.org/wiki/Electroencephalography
14 S. H. Cho, P. K. Kim, C. B. Ahn, "Study of attention using the EEG bands," in Proceedings of the 40th KIEE Summer Conference, 2009, pp. 1994-1995.
15 M. Gadea, M. Alino, E. Garijo, R. Espert, A. Salvador, "Testing the benefits of neurofeedback on selective attention measured through dichotic listening," Applied Psychophysiology and Biofeedback, vol. 41, no. 2, pp. 157-164, 2016. doi: 10.1007/s10484-015-9323-8
16 R. J. Croft, R. J. Barry, "Removal of ocular artifact from the EEG: a review," Neurophysiologie Clinique/Clinical Neurophysiology, vol. 30, no. 1, pp. 5-19, 2000. doi: 10.1016/s0987-7053(00)00055-1
17 V. Maurandi, B. Rivet, R. Phlypo, A. Guerin-Dugue, C. Jutten, "Multimodal approach to remove ocular artifacts from EEG signals using multiple measurement vectors," in International Conference on Latent Variable Analysis and Signal Separation, Cham: Springer, 2017, pp. 563-573. doi: 10.1007/978-3-319-53547-0_53
18 J. A. Stern, "What's behind blinking?," The Sciences, vol. 28, no. 6, pp. 43-44, 1988. doi: 10.1002/j.2326-1951.1988.tb03056.x
19 E. Kirmizi-Alsan, Z. Bayraktaroglu, H. Gurvit, Y. H. Keskin, M. Emre, T. Demiralp, "Comparative analysis of event-related potentials during Go/NoGo and CPT: decomposition of electrophysiological markers of response inhibition and sustained attention," Brain Research, vol. 1104, no. 1, pp. 114-128, 2006. doi: 10.1016/j.brainres.2006.03.010
20 J. K. Jang, H. S. Kim, "EEG analysis of learning attitude change of female college student on e-learning," The Journal of the Korea Contents Association, vol. 11, no. 4, pp. 42-50, 2011. doi: 10.5392/jkca.2011.11.4.042
21 D. L. Koo, J. Kim, "The physiology of normal sleep," Hanyang Medical Reviews, vol. 33, no. 4, pp. 190-196, 2013. doi: 10.7599/hmr.2013.33.4.190
| Brainwave type | Frequency range (Hz) | Activity scope |
|---|---|---|
| Delta | 0.1–3 | Sleep state |
| Theta | 4–7 | Sleepy or delusional state |
| Alpha | 8–12 | Stable state |
| Sensory motor rhythm | 12–15 | Maintaining concentration in a static state |
| Beta | 15–20 | Concentrating and stressed state |
| High beta | 20–35 | Rigid, anxious, and nervous state |
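The band boundaries above map directly onto a small lookup table. A minimal sketch in Python (the boundary values come from the table; the function name and the handling of shared boundaries such as 12 Hz are our own illustrative choices):

```python
# Band boundaries (Hz) taken from the table above; bounds inclusive.
BANDS = [
    ("Delta", 0.1, 3.0),
    ("Theta", 4.0, 7.0),
    ("Alpha", 8.0, 12.0),
    ("Sensory motor rhythm", 12.0, 15.0),
    ("Beta", 15.0, 20.0),
    ("High beta", 20.0, 35.0),
]

def classify_frequency(freq_hz: float) -> str:
    """Return the brainwave band a frequency falls into, or 'Unknown'.

    Adjacent bands share boundary values in the table (e.g. 12 Hz),
    so the first matching band wins.
    """
    for name, lo, hi in BANDS:
        if lo <= freq_hz <= hi:
            return name
    return "Unknown"

print(classify_frequency(10.0))  # Alpha
print(classify_frequency(18.0))  # Beta
```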
| | Immersion score (our proposed measure) | (conventional measure) | NAVER movie ratings (collective reliable measure) |
|---|---|---|---|
| Measuring time (min) | (similar with survey method) | | (more than 300 reviewers with a standard deviation of 95% ± 5.65%) |
| Automation | Yes | No | No |
| Mobility | Yes | No | No |
| Collectability | Easy | Hard | Hard |
| Funny movie average score | | | |
| Boring movie average score | | | |
Science with the Cherenkov Telescope Array (1709.07997)
The Cherenkov Telescope Array Consortium: B.S. Acharya, I. Agudo, I. Al Samarai, R. Alfaro, J. Alfaro, C. Alispach, R. Alves Batista, J.-P. Amans, E. Amato, G. Ambrosi, E. Antolini, L.A. Antonelli, C. Aramo, M. Araya, T. Armstrong, F. Arqueros, L. Arrabito, K. Asano, M. Ashley, M. Backes, C. Balazs, M. Balbo, O. Ballester, J. Ballet, A. Bamba, M. Barkov, U. Barres de Almeida, J.A. Barrio, D. Bastieri, Y. Becherini, A. Belfiore, W. Benbow, D. Berge, E. Bernardini, M.G. Bernardini, M. Bernardos, K. Bernlöhr, B. Bertucci, B. Biasuzzi, C. Bigongiari, A. Biland, E. Bissaldi, J. Biteau, O. Blanch, J. Blazek, C. Boisson, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, Z. Bosnjak, M. Böttcher, C. Braiding, J. Bregeon, A. Brill, A.M. Brown, P. Brun, G. Brunetti, T. Buanes, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, R. Canestrari, M. Capalbi, F. Capitanio, A. Caproni, P. Caraveo, V. Cárdenas, C. Carlile, R. Carosi, E. Carquín, J. Carr, S. Casanova, E. Cascone, F. Catalani, O. Catalano, D. Cauz, M. Cerruti, P. Chadwick, S. Chaty, R.C.G. Chaves, A. Chen, X. Chen, M. Chernyakova, M. Chikawa, A. Christov, J. Chudoba, M. Cieślar, V. Coco, S. Colafrancesco, P. Colin, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, J. Cortina, A. Costa, H. Costantini, G. Cotter, S. Covino, R. Crocker, J. Cuadra, O. Cuevas, P. Cumani, A. D'Aì, F. D'Ammando, P. D'Avanzo, D. D'Urso, M. Daniel, I. Davids, B. Dawson, F. Dazzi, A. De Angelis, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, E.M. de Gouveia Dal Pino, I. de la Calle, R. de los Reyes Lopez, B. De Lotto, A. De Luca, M. De Lucia, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, C. Deil, M. Del Santo, C. Delgado, D. della Volpe, T. Di Girolamo, F. Di Pierro, L. Di Venere, C. Díaz, C. Dib, S. Diebold, A. Djannati-Ataï, A. Domínguez, D. Dominis Prester, D. Dorner, M. Doro, H. Drass, D. Dravins, G. Dubus, V.V. Dwarkadas, J. Ebr, C. Eckner, K. Egberts, S. Einecke, T.R.N. Ekoume, D. Elsässer, J.-P. Ernenwein, C. Espinoza, C. Evoli, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, C. Farnier, G. Fasola, E. Fedorova, S. Fegan, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, M. Fesquet, M. Filipovic, V. Fioretti, G. Fontaine, M. Fornasa, L. Fortson, L. Freixas Coromina, C. Fruck, Y. Fujita, Y. Fukazawa, S. Funk, M. Füßling, S. Gabici, A. Gadola, Y. Gallant, B. Garcia, R. Garcia López, M. Garczarczyk, J. Gaskins, T. Gasparetto, M. Gaug, L. Gerard, G. Giavitto, N. Giglietto, P. Giommi, F. Giordano, E. Giro, M. Giroletti, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, G. Gómez-Vargas, M.M. González, J.M. González, D. Götz, J. Graham, P. Grandi, J. Granot, A.J. Green, T. Greenshaw, S. Griffiths, S. Gunji, D. Hadasch, S. Hara, M.J. Hardcastle, T. Hassan, K. Hayashi, M. Hayashida, M. Heller, J.C. Helo, G. Hermann, J. Hinton, B. Hnatyk, W. Hofmann, J. Holder, D. Horan, J. Hörandel, D. Horns, P. Horvath, T. Hovatta, M. Hrabovsky, D. Hrupec, T.B. Humensky, M. Hütten, M. Iarlori, T. Inada, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Iori, K. Ishio, Y. Iwamura, M. Jamrozy, P. Janecek, D. Jankowsky, P. Jean, I. Jung-Richardt, J. Jurysek, P. Kaaret, S. Karkar, H. Katagiri, U. Katz, N. Kawanaka, D. Kazanas, B. Khélifi, D.B. Kieda, S. Kimeswenger, S. Kimura, S. Kisaka, J. Knapp, J. Knödlseder, B. Koch, K. Kohri, N. Komin, K. Kosack, M. Kraus, M. Krause, F. Krauß, H. Kubo, G. Kukec Mezek, H. Kuroda, J. Kushida, N. La Palombara, G. Lamanna, R.G. 
Lang, J. Lapington, O. Le Blanc, S. Leach, J.-P. Lees, J. Lefaucheur, M.A. Leigui de Oliveira, J.-P. Lenain, R. Lico, M. Limon, E. Lindfors, T. Lohse, S. Lombardi, F. Longo, M. López, R. López-Coto, C.-C. Lu, F. Lucarelli, P.L. Luque-Escamilla, E. Lyard, M.C. Maccarone, G. Maier, P. Majumdar, G. Malaguti, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, A. Marcowith, J. Marín, S. Markoff, J. Martí, P. Martin, M. Martínez, G. Martínez, N. Masetti, S. Masuda, G. Maurin, N. Maxted, D. Mazin, C. Medina, A. Melandri, S. Mereghetti, M. Meyer, I.A. Minaya, N. Mirabal, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, M. Mohammed, L. Mohrmann, T. Montaruli, A. Moralejo, D. Morcuende-Parrilla, K. Mori, G. Morlino, P. Morris, A. Morselli, E. Moulin, R. Mukherjee, C. Mundell, T. Murach, H. Muraishi, K. Murase, A. Nagai, S. Nagataki, T. Nagayoshi, T. Naito, T. Nakamori, Y. Nakamura, J. Niemiec, D. Nieto, M. Nikołajuk, K. Nishijima, K. Noda, D. Nosek, B. Novosyadlyj, S. Nozaki, P. O'Brien, L. Oakes, Y. Ohira, M. Ohishi, S. Ohm, N. Okazaki, A. Okumura, R.A. Ong, M. Orienti, R. Orito, J.P. Osborne, M. Ostrowski, N. Otte, I. Oya, M. Padovani, A. Paizis, M. Palatiello, M. Palatka, R. Paoletti, J.M. Paredes, G. Pareschi, R.D. Parsons, A. Pe'er, M. Pech, G. Pedaletti, M. Perri, M. Persic, A. Petrashyk, P. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Piano, A. Pisarski, S. Pita, M. Pohl, M. Polo, D. Pozo, E. Prandini, J. Prast, G. Principe, D. Prokhorov, H. Prokoph, M. Prouza, G. Pühlhofer, M. Punch, S. Pürckhauer, F. Queiroz, A. Quirrenbach, S. Rainò, S. Razzaque, O. Reimer, A. Reimer, A. Reisenegger, M. Renaud, A.H. Rezaeian, W. Rhode, D. Ribeiro, M. Ribó, T. Richtler, J. Rico, F. Rieger, M. Riquelme, S. Rivoire, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, J. Rosado, A.C. Rovero, G. Rowell, B. Rudak, A. Rugliancich, C. Rulten, I. Sadeh, S. Safi-Harb, T. Saito, N. Sakaki, S. Sakurai, G. Salina, M. Sánchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, S. Sarkar, K. Satalecka, F.G. Saturni, E.J. Schioppa, S. Schlenstedt, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schulz, F. Schussler, U. Schwanke, E. Sciacca, S. Scuderi, I. Seitenzahl, D. Semikoz, O. Sergijenko, M. Servillat, A. Shalchi, R.C. Shellard, L. Sidoli, H. Siejkowski, A. Sillanpää, G. Sironi, J. Sitarek, V. Sliusar, A. Slowikowska, H. Sol, A. Stamerra, S. Stanič, R. Starling, Ł. Stawarz, S. Stefanik, M. Stephan, T. Stolarczyk, G. Stratta, U. Straumann, T. Suomijarvi, A.D. Supanitsky, G. Tagliaferri, H. Tajima, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, L.A. Tejedor, P. Temnikov, Y. Terada, R. Terrier, T. Terzic, M. Teshima, V. Testa, S. Thoudam, W. Tian, L. Tibaldo, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, J. Tomastik, D. Tonev, M. Tornikoski, D.F. Torres, E. Torresi, G. Tosti, N. Tothill, G. Tovmassian, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, S. Tsujimoto, G. Umana, V. Vagelli, F. Vagnetti, M. Valentino, P. Vallania, L. Valore, C. van Eldik, J. Vandenbroucke, G.S. Varner, G. Vasileiadis, V. Vassiliev, M. Vázquez Acosta, M. Vecchi, A. Vega, S. Vercellone, P. Veres, S. Vergani, V. Verzi, G.P. Vettolani, A. Viana, C. Vigorito, J. Villanueva, H. Voelk, A. Vollhardt, S. Vorobiov, M. Vrastil, T. Vuillaume, S.J. Wagner, R. Wagner, R. Walter, J.E. Ward, D. Warren, J.J. Watson, F. Werner, M. White, R. White, A. Wierzcholska, P. Wilcox, M. Will, D.A. Williams, R. Wischnewski, M. Wood, T. Yamamoto, R. Yamazaki, S. 
Yanagita, L. Yang, T. Yoshida, S. Yoshiike, T. Yoshikoshi, M. Zacharias, G. Zaharijas, L. Zampieri, F. Zandanel, R. Zanin, M. Zavrtanik, D. Zavrtanik, A.A. Zdziarski, A. Zech, H. Zechlin, V.I. Zhdanov, A. Ziegler, J. Zorn
Jan. 22, 2018 hep-ex, astro-ph.IM, astro-ph.HE
The Cherenkov Telescope Array, CTA, will be the major global observatory for very high energy gamma-ray astronomy over the next decade and beyond. The scientific potential of CTA is extremely broad: from understanding the role of relativistic cosmic particles to the search for dark matter. CTA is an explorer of the extreme universe, probing environments from the immediate neighbourhood of black holes to cosmic voids on the largest scales. Covering a huge range in photon energy from 20 GeV to 300 TeV, CTA will improve on all aspects of performance with respect to current instruments. The observatory will operate arrays on sites in both hemispheres to provide full sky coverage and will hence maximize the potential for the rarest phenomena such as very nearby supernovae, gamma-ray bursts or gravitational wave transients. With 99 telescopes on the southern site and 19 telescopes on the northern site, flexible operation will be possible, with sub-arrays available for specific tasks. CTA will have important synergies with many of the new generation of major astronomical and astroparticle observatories. Multi-wavelength and multi-messenger approaches combining CTA data with those from other instruments will lead to a deeper understanding of the broad-band non-thermal properties of target sources. The CTA Observatory will be operated as an open, proposal-driven observatory, with all data available on a public archive after a pre-defined proprietary period. Scientists from institutions worldwide have combined together to form the CTA Consortium. This Consortium has prepared a proposal for a Core Programme of highly motivated observations. The programme, encompassing approximately 40% of the available observing time over the first ten years of CTA operation, is made up of individual Key Science Projects (KSPs), which are presented in this document.
Cherenkov Telescope Array Contributions to the 35th International Cosmic Ray Conference (ICRC2017) (1709.03483)
F. Acero, B.S. Acharya, V. Acín Portella, C. Adams, I. Agudo, F. Aharonian, I. Al Samarai, A. Alberdi, M. Alcubierre, R. Alfaro, J. Alfaro, C. Alispach, R. Aloisio, R. Alves Batista, J.-P. Amans, E. Amato, L. Ambrogi, G. Ambrosi, M. Ambrosio, J. Anderson, M. Anduze, E.O. Angüner, E. Antolini, L.A. Antonelli, V. Antonuccio, P. Antoranz, C. Aramo, M. Araya, C. Arcaro, T. Armstrong, F. Arqueros, L. Arrabito, M. Arrieta, K. Asano, A. Asano, M. Ashley, P. Aubert, C. B. Singh, A. Babic, M. Backes, S. Bajtlik, C. Balazs, M. Balbo, O. Ballester, J. Ballet, L. Ballo, A. Balzer, A. Bamba, R. Bandiera, P. Barai, C. Barbier, M. Barcelo, M. Barkov, U. Barres de Almeida, J.A. Barrio, D. Bastieri, C. Bauer, U. Becciani, Y. Becherini, J. Becker Tjus, W. Bednarek, A. Belfiore, W. Benbow, M. Benito, D. Berge, E. Bernardini, M.G. Bernardini, M. Bernardos, S. Bernhard, K. Bernlöhr, C. Bertinelli Salucci, B. Bertucci, M.-A. Besel, V. Beshley, J. Bettane, N. Bhatt, W. Bhattacharyya, S. Bhattachryya, B. Biasuzzi, G. Bicknell, C. Bigongiari, A. Biland, A. Bilinsky, R. Bird, E. Bissaldi, J. Biteau, M. Bitossi, O. Blanch, P. Blasi, J. Blazek, C. Boccato, C. Bockermann, C. Boehm, M. Bohacova, C. Boisson, J. Bolmont, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, J. Borkowski, R. Bose, Z. Bosnjak, M. Böttcher, C. Boutonnet, F. Bouyjou, L. Bowman, V. Bozhilov, C. Braiding, S. Brau-Nogué, J. Bregeon, M. Briggs, A. Brill, W. Brisken, D. Bristow, R. Britto, E. Brocato, A.M. Brown, S. Brown, K. Brügge, P. Brun, P. Brun, F. Brun, L. Brunetti, G. Brunetti, P. Bruno, M. Bryan, J. Buckley, V. Bugaev, R. Bühler, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, S. Buson, J. Buss, K. Byrum, A. Caccianiga, R. Cameron, F. Canelli, R. Canestrari, M. Capalbi, M. Capasso, F. Capitanio, A. Caproni, R. Capuzzo-Dolcetta, P. Caraveo, V. Cárdenas, J. Cardenzana, M. Cardillo, C. Carlile, S. Caroff, R. Carosi, A. Carosi, E. Carquín, J. Carr, J.-M. Casandjian, S. Casanova, E. Cascone, A.J. Castro-Tirado, J. Castroviejo Mora, F. Catalani, O. Catalano, D. Cauz, C. Celestino Silva, S. Celli, M. Cerruti, E. Chabanne, P. Chadwick, N. Chakraborty, C. Champion, A. Chatterjee, S. Chaty, R. Chaves, A. Chen, X. Chen, K. Cheng, M. Chernyakova, M. Chikawa, V.R. Chitnis, A. Christov, J. Chudoba, M. Cieślar, P. Clark, V. Coco, S. Colafrancesco, P. Colin, E. Colombo, J. Colome, S. Colonges, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, R. Cornat, J. Cortina, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Covino, G. Covone, P. Cristofari, S.J. Criswell, R. Crocker, J. Croston, C. Crovari, J. Cuadra, O. Cuevas, X. Cui, P. Cumani, G. Cusumano, A. D'Aì, F. D'Ammando, P. D'Avanzo, D. D'Urso, P. Da Vela, Ø. Dale, V.T. Dang, L. Dangeon, M. Daniel, I. Davids, B. Dawson, F. Dazzi, A. De Angelis, V. De Caprio, R. de Cássia dos Anjos, G. De Cesare, A. De Franco, F. De Frondat, E.M. de Gouveia Dal Pino, I. de la Calle, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, A. De Luca, M. De Lucia, J.R.T. de Mello Neto, M. de Naurois, E. de Oña Wilhelmi, F. De Palma, F. De Persio, V. de Souza, J. Decock, C. Deil, P. Deiml, M. Del Santo, E. Delagnes, G. Deleglise, M. Delfino Reznicek, C. Delgado, J. Delgado Mengual, R. Della Ceca, D. della Volpe, M. Detournay, J. Devin, T. Di Girolamo, C. Di Giulio, F. Di Pierro, L. Di Venere, L. Diaz, C. Díaz, C. Dib, H. Dickinson, S. Diebold, S. Digel, A. Djannati-Ataï, M. Doert, A. Domínguez, D. Dominis Prester, I. Donnarumma, D. Dorner, M. Doro, J.-L. Dournaux, T. Downes, G. Drake, S. Drappeau, H. Drass, D. 
Dravins, L. Drury, G. Dubus, K. Dundas Morå, A. Durkalec, V. Dwarkadas, J. Ebr, C. Eckner, E. Edy, K. Egberts, S. Einecke, J. Eisch, F. Eisenkolb, T.R.N. Ekoume, C. Eleftheriadis, D. Elsässer, D. Emmanoulopoulos, J.-P. Ernenwein, P. Escarate, S. Eschbach, C. Espinoza, P. Evans, C. Evoli, M. Fairbairn, D. Falceta-Goncalves, A. Falcone, V. Fallah Ramazani, K. Farakos, E. Farrell, G. Fasola, Y. Favre, E. Fede, R. Fedora, E. Fedorova, S. Fegan, M. Fernandez-Alonso, A. Fernández-Barral, G. Ferrand, O. Ferreira, M. Fesquet, E. Fiandrini, A. Fiasson, M. Filipovic, D. Fink, J.P. Finley, C. Finley, A. Finoguenov, V. Fioretti, M. Fiorini, H. Flores, L. Foffano, C. Föhr, M.V. Fonseca, L. Font, G. Fontaine, M. Fornasa, P. Fortin, L. Fortson, N. Fouque, B. Fraga, F.J. Franco, L. Freixas Coromina, C. Fruck, D. Fugazza, Y. Fujita, S. Fukami, Y. Fukazawa, Y. Fukui, S. Funk, A. Furniss, M. Füßling, S. Gabici, A. Gadola, Y. Gallant, D. Galloway, S. Gallozzi, B. Garcia, A. Garcia, R. García Gil, R. Garcia López, M. Garczarczyk, D. Gardiol, F. Gargano, C. Gargano, S. Garozzo, M. Garrido-Ruiz, D. Gascon, T. Gasparetto, F. Gaté, M. Gaug, B. Gebhardt, M. Gebyehu, N. Geffroy, B. Genolini, A. Ghalumyan, A. Ghedina, G. Ghirlanda, P. Giammaria, F. Gianotti, B. Giebels, N. Giglietto, V. Gika, R. Gimenes, P. Giommi, F. Giordano, G. Giovannini, E. Giro, M. Giroletti, J. Gironnet, A. Giuliani, J.-F. Glicenstein, R. Gnatyk, N. Godinovic, P. Goldoni, J.L. Gómez, G. Gómez-Vargas, M.M. González, J.M. González, K.S. Gothe, D. Gotz, J. Goullon, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, G. Grasseau, R. Gredig, A.J. Green, T. Greenshaw, I. Grenier, S. Griffiths, A. Grillo, M.-H. Grondin, J. Grube, V. Guarino, B. Guest, O. Gueta, S. Gunji, G. Gyuk, D. Hadasch, L. Hagge, J. Hahn, A. Hahn, H. Hakobyan, S. Hara, M.J. Hardcastle, T. Hassan, T. Haubold, A. Haupt, K. Hayashi, M. Hayashida, H. He, M. Heller, J.C. Helo, F. Henault, G. Henri, G. Hermann, R. Hermel, J. Herrera Llorente, A. Herrero, O. Hervet, N. Hidaka, J. Hinton, N. Hiroshima, K. Hirotani, B. Hnatyk, J.K. Hoang, D. Hoffmann, W. Hofmann, J. Holder, D. Horan, J. Hörandel, M. Hörbe, D. Horns, P. Horvath, J. Houles, T. Hovatta, M. Hrabovsky, D. Hrupec, J.-M. Huet, G. Hughes, D. Hui, G. Hull, T.B. Humensky, M. Hussein, M. Hütten, M. Iarlori, Y. Ikeno, J.M. Illa, D. Impiombato, T. Inada, A. Ingallinera, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, F. Iocco, K. Ioka, M. Ionica, M. Iori, A. Iriarte, K. Ishio, G.L. Israel, Y. Iwamura, C. Jablonski, A. Jacholkowska, J. Jacquemier, M. Jamrozy, P. Janecek, F. Jankowsky, D. Jankowsky, P. Jansweijer, C. Jarnot, P. Jean, C.A. Johnson, M. Josselin, I. Jung-Richardt, J. Jurysek, P. Kaaret, P. Kachru, M. Kagaya, J. Kakuwa, O. Kalekin, R. Kankanyan, A. Karastergiou, M. Karczewski, S. Karkar, H. Katagiri, J. Kataoka, K. Katarzyński, U. Katz, N. Kawanaka, L. Kaye, D. Kazanas, N. Kelley-Hoskins, B. Khélifi, D.B. Kieda, T. Kihm, S. Kimeswenger, S. Kimura, S. Kisaka, S. Kishida, R. Kissmann, W. Kluźniak, J. Knapen, J. Knapp, J. Knödlseder, B. Koch, J. Kocot, K. Kohri, N. Komin, A. Kong, Y. Konno, K. Kosack, G. Kowal, S. Koyama, M. Kraus, M. Krause, F. Krauß, F. Krennrich, P. Kruger, H. Kubo, V. Kudryavtsev, G. Kukec Mezek, S. Kumar, H. Kuroda, J. Kushida, P. Kushwaha, N. La Palombara, V. La Parola, G. La Rosa, R. Lahmann, K. Lalik, G. Lamanna, M. Landoni, D. Landriu, H. Landt, R.G. Lang, J. Lapington, P. Laporte, O. Le Blanc, T. Le Flour, P. Le Sidaner, S. Leach, A. Leckngam, S.-H. Lee, W.H. Lee, J.-P. Lees, J. Lefaucheur, M.A. 
Leigui de Oliveira, M. Lemoine-Goumard, J.-P. Lenain, G. Leto, R. Lico, M. Limon, R. Lindemann, E. Lindfors, L. Linhoff, A. Lipniacka, S. Lloyd, T. Lohse, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, T. Louge, F. Louis, M. Louys, F. Lucarelli, D. Lucchesi, P.L. Luque-Escamilla, E. Lyard, M.C. Maccarone, T. Maccarone, E. Mach, G.M. Madejski, G. Maier, A. Majczyna, P. Majumdar, M. Makariev, G. Malaguti, A. Malouf, S. Maltezos, D. Malyshev, D. Malyshev, D. Mandat, G. Maneva, M. Manganaro, S. Mangano, P. Manigot, K. Mannheim, N. Maragos, D. Marano, A. Marcowith, J. Marín, M. Mariotti, M. Marisaldi, S. Markoff, J. Martí, J.-M. Martin, P. Martin, L. Martin, M. Martínez, G. Martínez, O. Martínez, R. Marx, N. Masetti, P. Massimino, A. Mastichiadis, M. Mastropietro, S. Masuda, H. Matsumoto, N. Matthews, S. Mattiazzo, G. Maurin, N. Maxted, M. Mayer, D. Mazin, M.N. Mazziotta, L. Mc Comb, I. McHardy, C. Medina, A. Melandri, C. Melioli, D. Melkumyan, S. Mereghetti, J.-L. Meunier, T. Meures, M. Meyer, S. Micanovic, T. Michael, J. Michałowski, I. Mievre, J. Miller, I.A. Minaya, T. Mineo, F. Mirabel, J.M. Miranda, R. Mirzoyan, A. Mitchell, T. Mizuno, R. Moderski, M. Mohammed, L. Mohrmann, C. Molijn, E. Molinari, R. Moncada, T. Montaruli, I. Monteiro, D. Mooney, P. Moore, A. Moralejo, D. Morcuende-Parrilla, E. Moretti, K. Mori, G. Morlino, P. Morris, A. Morselli, F. Moscato, D. Motohashi, E. Moulin, S. Mueller, R. Mukherjee, P. Munar, C. Mundell, J. Mundet, T. Murach, H. Muraishi, K. Murase, A. Murphy, A. Nagai, N. Nagar, S. Nagataki, T. Nagayoshi, B.K. Nagesh, T. Naito, D. Nakajima, T. Nakamori, Y. Nakamura, K. Nakayama, D. Naumann, P. Nayman, D. Neise, L. Nellen, R. Nemmen, A. Neronov, N. Neyroud, T. Nguyen, T.T. Nguyen, T. Nguyen Trung, L. Nicastro, J. Nicolau-Kukliński, J. Niemiec, D. Nieto, M. Nievas-Rosillo, M. Nikołajuk, K. Nishijima, K.-I. Nishikawa, G. Nishiyama, K. Noda, L. Nogues, S. Nolan, D. Nosek, M. Nöthe, B. Novosyadlyj, S. Nozaki, F. Nunio, P. O'Brien, L. Oakes, C. Ocampo, J.P. Ochoa, R. Oger, Y. Ohira, M. Ohishi, S. Ohm, N. Okazaki, A. Okumura, J.-F. Olive, R.A. Ong, M. Orienti, R. Orito, A. Orlati, J.P. Osborne, M. Ostrowski, N. Otte, Z. Ou, E. Ovcharov, I. Oya, A. Ozieblo, M. Padovani, S. Paiano, A. Paizis, J. Palacio, M. Palatiello, M. Palatka, J. Pallotta, J.-L. Panazol, D. Paneque, M. Panter, R. Paoletti, M. Paolillo, A. Papitto, A. Paravac, J.M. Paredes, G. Pareschi, R.D. Parsons, P. Paśko, S. Pavy, A. Pe'er, M. Pech, G. Pedaletti, P. Peñil Del Campo, A. Perez, M.A. Pérez-Torres, L. Perri, M. Perri, M. Persic, A. Petrashyk, S. Petrera, P.-O. Petrucci, O. Petruk, B. Peyaud, M. Pfeifer, G. Piano, Q. Piel, D. Pieloth, F. Pintore, C. Pio García, A. Pisarski, S. Pita, L. Pizarro, Ł. Platos, M. Pohl, V. Poireau, A. Pollo, J. Porthault, J. Poutanen, D. Pozo, E. Prandini, P. Prasit, J. Prast, K. Pressard, G. Principe, D. Prokhorov, H. Prokoph, M. Prouza, G. Pruteanu, E. Pueschel, G. Pühlhofer, I. Puljak, M. Punch, S. Pürckhauer, F. Queiroz, J. Quinn, A. Quirrenbach, I. Rafighi, S. Rainò, P.J. Rajda, R. Rando, R.C. Rannot, S. Razzaque, I. Reichardt, O. Reimer, A. Reimer, A. Reisenegger, M. Renaud, T. Reposeur, B. Reville, A.H. Rezaeian, W. Rhode, D. Ribeiro, M. Ribó, M.G. Richer, T. Richtler, J. Rico, F. Rieger, M. Riquelme, P.R. Ristori, S. Rivoire, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, G. Rojas, P. Romano, G. Romeo, M. Roncadelli, J. Rosado, S. Rosen, S. Rosier Lees, J. Rousselle, A.C. Rovero, G. Rowell, B. Rudak, A. Rugliancich, J.E. Ruíz del Mazo, W. 
Rujopakarn, C. Rulten, F. Russo, O. Saavedra, S. Sabatini, B. Sacco, I. Sadeh, E. Sæther Hatlen, S. Safi-Harb, V. Sahakian, S. Sailer, T. Saito, N. Sakaki, S. Sakurai, D. Salek, F. Salesa Greus, G. Salina, D. Sanchez, M. Sánchez-Conde, H. Sandaker, A. Sandoval, P. Sangiorgi, M. Sanguillon, H. Sano, M. Santander, A. Santangelo, E.M. Santos, A. Sanuy, L. Sapozhnikov, S. Sarkar, K. Satalecka, Y. Sato, F.G. Saturni, R. Savalle, M. Sawada, S. Schanne, E.J. Schioppa, S. Schlenstedt, T. Schmidt, J. Schmoll, M. Schneider, H. Schoorlemmer, P. Schovanek, A. Schulz, F. Schussler, U. Schwanke, J. Schwarz, T. Schweizer, S. Schwemmer, E. Sciacca, S. Scuderi, M. Seglar-Arroyo, A. Segreto, I. Seitenzahl, D. Semikoz, O. Sergijenko, N. Serre, M. Servillat, K. Seweryn, K. Shah, A. Shalchi, M. Sharma, R.C. Shellard, I. Shilon, L. Sidoli, M. Sidz, H. Siejkowski, J. Silk, A. Sillanpää, D. Simone, B.B. Singh, G. Sironi, J. Sitarek, P. Sizun, V. Sliusar, A. Slowikowska, A. Smith, D. Sobczyńska, A. Sokolenko, H. Sol, G. Sottile, W. Springer, O. Stahl, A. Stamerra, S. Stanič, R. Starling, D. Staszak, Ł. Stawarz, R. Steenkamp, S. Stefanik, C. Stegmann, S. Steiner, C. Stella, M. Stephan, R. Sternberger, M. Sterzel, B. Stevenson, M. Stodulska, M. Stodulski, T. Stolarczyk, G. Stratta, U. Straumann, R. Stuik, M. Suchenek, T. Suomijarvi, A.D. Supanitsky, T. Suric, I. Sushch, P. Sutcliffe, J. Sykes, M. Szanecki, T. Szepieniec, G. Tagliaferri, H. Tajima, K. Takahashi, H. Takahashi, M. Takahashi, L. Takalo, S. Takami, J. Takata, J. Takeda, T. Tam, M. Tanaka, T. Tanaka, Y. Tanaka, S. Tanaka, C. Tanci, M. Tavani, F. Tavecchio, J.-P. Tavernet, K. Tayabaly, L.A. Tejedor, F. Temme, P. Temnikov, Y. Terada, J.C. Terrazas, R. Terrier, D. Terront, T. Terzic, D. Tescaro, M. Teshima, V. Testa, S. Thoudam, W. Tian, L. Tibaldo, A. Tiengo, D. Tiziani, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, J. Tomastik, A. Tonachini, D. Tonev, M. Tornikoski, D.F. Torres, E. Torresi, G. Tosti, T. Totani, N. Tothill, F. Toussenel, G. Tovmassian, N. Trakarnsirinont, P. Travnicek, C. Trichard, M. Trifoglio, I. Troyano Pujadas, M. Tsirou, S. Tsujimoto, T. Tsuru, Y. Uchiyama, G. Umana, M. Uslenghi, V. Vagelli, F. Vagnetti, M. Valentino, P. Vallania, L. Valore, A.M. Van den Berg, W. van Driel, C. van Eldik, B. van Soelen, J. Vandenbroucke, J. Vanderwalt, G.S. Varner, G. Vasileiadis, V. Vassiliev, J.R. Vázquez, M. Vázquez Acosta, M. Vecchi, A. Vega, P. Veitch, P. Venault, C. Venter, S. Vercellone, P. Veres, S. Vergani, V. Verzi, G.P. Vettolani, C. Veyssiere, A. Viana, J. Vicha, C. Vigorito, J. Villanueva, P. Vincent, J. Vink, F. Visconti, V. Vittorini, H. Voelk, V. Voisin, A. Vollhardt, S. Vorobiov, I. Vovk, M. Vrastil, T. Vuillaume, S.J. Wagner, R. Wagner, P. Wagner, S.P. Wakely, T. Walstra, R. Walter, M. Ward, J.E. Ward, D. Warren, J.J. Watson, N. Webb, P. Wegner, O. Weiner, A. Weinstein, C. Weniger, F. Werner, H. Wetteskind, M. White, R. White, A. Wierzcholska, S. Wiesand, R. Wijers, P. Wilcox, A. Wilhelm, M. Wilkinson, M. Will, D.A. Williams, M. Winter, P. Wojcik, D. Wolf, M. Wood, A. Wörnlein, T. Wu, K.K. Yadav, C. Yaguna, T. Yamamoto, H. Yamamoto, N. Yamane, R. Yamazaki, S. Yanagita, L. Yang, D. Yelos, T. Yoshida, M. Yoshida, S. Yoshiike, T. Yoshikoshi, P. Yu, D. Zaborov, M. Zacharias, G. Zaharijas, A. Zajczyk, L. Zampieri, F. Zandanel, R. Zanin, R. Zanmar Sanchez, D. Zaric, M. Zavrtanik, D. Zavrtanik, A.A. Zdziarski, A. Zech, H. Zechlin, V.I. Zhdanov, A. Ziegler, J. Ziemann, K. Ziętara, A. Zink, J. Ziółkowski, V. Zitelli, A. 
Zoli, J. Zorn
Oct. 3, 2017 astro-ph.HE
List of contributions from the Cherenkov Telescope Array Consortium presented at the 35th International Cosmic Ray Conference, July 12-20, 2017, Busan, Korea.
A physical scenario for the high and low X-ray luminosity states in the transitional pulsar PSR J1023+0038 (1607.06245)
S. Campana, F. Coti Zelati, A. Papitto, N. Rea, D.F. Torres, M.C. Baglio, P. D'Avanzo
Sept. 26, 2017 astro-ph.HE
PSR J1023+0038 (J1023) is a binary system hosting a neutron star and a low-mass companion. J1023 is the best-studied transitional pulsar, alternating between a faint eclipsing millisecond radio pulsar state and a brighter X-ray active state. At variance with other low-mass X-ray binaries, this active state reaches luminosities of only ~$10^{34}$ erg s$^{-1}$, showing strong, fast variability. In the active state, J1023 displays: i) a high state ($L_X\sim7\times10^{33}$ erg s$^{-1}$, 0.3-80 keV) occurring ~80% of the time, during which X-ray pulsations at the neutron star spin period are detected (pulsed fraction ~8%); ii) a low state ($L_X\sim10^{33}$ erg s$^{-1}$) during which pulsations are not detected (pulsed fraction below ~3%); and iii) a flaring state during which sporadic flares occur in excess of ~$10^{34}$ erg s$^{-1}$, again with no detectable pulsations. The transition between the high and the low states is very rapid, on a ~10 s timescale. Here we put forward a plausible physical interpretation of the high and low states based on the (fast) transition between the propeller state and the radio pulsar state. We model the XMM-Newton spectra of the high, low and radio pulsar states, finding good agreement with this physical picture.
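As a rough consistency check (our own arithmetic, not a figure from the abstract): neglecting the sporadic flares and weighting the high and low states by their quoted duty cycles, the time-averaged luminosity of the active state is

$$ \langle L_X\rangle \approx 0.8\times 7\times10^{33}+0.2\times10^{33}\approx 5.8\times10^{33}\ \mathrm{erg\ s^{-1}}, $$

consistent with the active state reaching luminosities of only ~$10^{34}$ erg s$^{-1}$.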
Prospects for CTA observations of the young SNR RX J1713.7-3946 (1704.04136)
The CTA Consortium: F. Acero, R. Aloisio, J. Amans, E. Amato, L.A. Antonelli, C. Aramo, T. Armstrong, F. Arqueros, K. Asano, M. Ashley, M. Backes, C. Balazs, A. Balzer, A. Bamba, M. Barkov, J.A. Barrio, W. Benbow, K. Bernlöhr, V. Beshley, C. Bigongiari, A. Biland, A. Bilinsky, E. Bissaldi, J. Biteau, O. Blanch, P. Blasi, J. Blazek, C. Boisson, G. Bonanno, A. Bonardi, C. Bonavolontà, G. Bonnoli, C. Braiding, S. Brau-Nogué, J. Bregeon, A.M. Brown, V. Bugaev, A. Bulgarelli, T. Bulik, M. Burton, A. Burtovoi, G. Busetto, M. Böttcher, R. Cameron, M. Capalbi, A. Caproni, P. Caraveo, R. Carosi, E. Cascone, M. Cerruti, S. Chaty, A. Chen, X. Chen, M. Chernyakova, M. Chikawa, J. Chudoba, J. Cohen-Tanugi, S. Colafrancesco, V. Conforti, J.L. Contreras, A. Costa, G. Cotter, S. Covino, G. Covone, P. Cumani, G. Cusumano, F. D'Ammando, D. D'Urso, M. Daniel, F. Dazzi, A. De Angelis, G. De Cesare, A. De Franco, F. De Frondat, E.M. de Gouveia Dal Pino, C. De Lisio, R. de los Reyes Lopez, B. De Lotto, M. de Naurois, F. De Palma, M. Del Santo, C. Delgado, D. della Volpe, T. Di Girolamo, C. Di Giulio, F. Di Pierro, L. Di Venere, M. Doro, J. Dournaux, D. Dumas, V. Dwarkadas, C. Díaz, J. Ebr, K. Egberts, S. Einecke, D. Elsässer, S. Eschbach, D. Falceta-Goncalves, G. Fasola, E. Fedorova, A. Fernández-Barral, G. Ferrand, M. Fesquet, E. Fiandrini, A. Fiasson, M.D. Filipovíc, V. Fioretti, L. Font, G. Fontaine, F.J. Franco, L. Freixas Coromina, Y. Fujita, Y. Fukui, S. Funk, A. Förster, A. Gadola, R. Garcia López, M. Garczarczyk, N. Giglietto, F. Giordano, A. Giuliani, J. Glicenstein, R. Gnatyk, P. Goldoni, T. Grabarczyk, R. Graciani, J. Graham, P. Grandi, J. Granot, A.J. Green, S. Griffiths, S. Gunji, H. Hakobyan, S. Hara, T. Hassan, M. Hayashida, M. Heller, J.C. Helo, J. Hinton, B. Hnatyk, J. Huet, M. Huetten, T.B. Humensky, M. Hussein, J. Hörandel, Y. Ikeno, T. Inada, Y. Inome, S. Inoue, T. Inoue, Y. Inoue, K. Ioka, M. Iori, J. Jacquemier, P. Janecek, D. Jankowsky, I. Jung, P. Kaaret, H. Katagiri, S. Kimeswenger, S. Kimura, J. Knödlseder, B. Koch, J. Kocot, K. Kohri, N. Komin, Y. Konno, K. Kosack, S. Koyama, M. Kraus, H. Kubo, G. Kukec Mezek, J. Kushida, N. La Palombara, K. Lalik, G. Lamanna, H. Landt, J. Lapington, P. Laporte, S. Lee, J. Lees, J. Lefaucheur, J.-P. Lenain, G. Leto, E. Lindfors, T. Lohse, S. Lombardi, F. Longo, M. Lopez, F. Lucarelli, P.L. Luque-Escamilla, R. López-Coto, M.C. Maccarone, G. Maier, G. Malaguti, D. Mandat, G. Maneva, S. Mangano, A. Marcowith, J. Martí, M. Martínez, G. Martínez, S. Masuda, G. Maurin, N. Maxted, C. Melioli, T. Mineo, N. Mirabal, T. Mizuno, R. Moderski, M. Mohammed, T. Montaruli, A. Moralejo, K. Mori, G. Morlino, A. Morselli, E. Moulin, R. Mukherjee, C. Mundell, H. Muraishi, K. Murase, S. Nagataki, T. Nagayoshi, T. Naito, D. Nakajima, T. Nakamori, R. Nemmen, J. Niemiec, D. Nieto, M. Nievas-Rosillo, M. Nikołajuk, K. Nishijima, K. Noda, L. Nogues, D. Nosek, B. Novosyadlyj, S. Nozaki, Y. Ohira, M. Ohishi, S. Ohm, A. Okumura, R.A. Ong, R. Orito, A. Orlati, M. Ostrowski, I. Oya, M. Padovani, J. Palacio, M. Palatka, J.M. Paredes, S. Pavy, A. Pe'er, M. Persic, P. Petrucci, O. Petruk, A. Pisarski, M. Pohl, A. Porcelli, E. Prandini, J. Prast, G. Principe, M. Prouza, E. Pueschel, G. Pühlhofer, A. Quirrenbach, M. Rameez, O. Reimer, M. Renaud, M. Ribó, J. Rico, V. Rizi, J. Rodriguez, G. Rodriguez Fernandez, J.J. Rodríguez Vázquez, P. Romano, G. Romeo, J. Rosado, J. Rousselle, G. Rowell, B. Rudak, I. Sadeh, S. Safi-Harb, T. Saito, N. Sakaki, D. Sanchez, P. Sangiorgi, H. Sano, M. 
Santander, S. Sarkar, M. Sawada, E.J. Schioppa, H. Schoorlemmer, P. Schovanek, F. Schussler, O. Sergijenko, M. Servillat, A. Shalchi, R.C. Shellard, H. Siejkowski, A. Sillanpää, D. Simone, V. Sliusar, H. Sol, S. Stanič, R. Starling, Ł. Stawarz, S. Stefanik, M. Stephan, T. Stolarczyk, M. Szanecki, T. Szepieniec, G. Tagliaferri, H. Tajima, M. Takahashi, J. Takeda, M. Tanaka, S. Tanaka, L.A. Tejedor, I. Telezhinsky, P. Temnikov, Y. Terada, D. Tescaro, M. Teshima, V. Testa, S. Thoudam, F. Tokanai, D.F. Torres, E. Torresi, G. Tosti, C. Townsley, P. Travnicek, C. Trichard, M. Trifoglio, S. Tsujimoto, V. Vagelli, P. Vallania, L. Valore, W. van Driel, C. van Eldik, J. Vandenbroucke, V. Vassiliev, M. Vecchi, S. Vercellone, S. Vergani, C. Vigorito, S. Vorobiov, M. Vrastil, M.L. Vázquez Acosta, S.J. Wagner, R. Wagner, S.P. Wakely, R. Walter, J.E. Ward, J.J. Watson, A. Weinstein, M. White, R. White, A. Wierzcholska, P. Wilcox, D.A. Williams, R. Wischnewski, P. Wojcik, T. Yamamoto, H. Yamamoto, R. Yamazaki, S. Yanagita, L. Yang, T. Yoshida, M. Yoshida, S. Yoshiike, T. Yoshikoshi, M. Zacharias, L. Zampieri, R. Zanin, M. Zavrtanik, D. Zavrtanik, A. Zdziarski, A. Zech, H. Zechlin, V. Zhdanov, A. Ziegler, J. Zorn
April 13, 2017 astro-ph.HE
We perform simulations for future Cherenkov Telescope Array (CTA) observations of RX J1713.7-3946, a young supernova remnant (SNR) and one of the brightest sources ever discovered in very-high-energy (VHE) gamma rays. Special attention is paid to explore possible spatial (anti-)correlations of gamma rays with emission at other wavelengths, in particular X-rays and CO/H I emission. We present a series of simulated images of RX J1713.7-3946 for CTA based on a set of observationally motivated models for the gamma-ray emission. In these models, VHE gamma rays produced by high-energy electrons are assumed to trace the non-thermal X-ray emission observed by XMM-Newton, whereas those originating from relativistic protons delineate the local gas distributions. The local atomic and molecular gas distributions are deduced by the NANTEN team from CO and H I observations. Our primary goal is to show how one can distinguish the emission mechanism(s) of the gamma rays (i.e., hadronic vs leptonic, or a mixture of the two) through information provided by their spatial distribution, spectra, and time variation. This work is the first attempt to quantitatively evaluate the capabilities of CTA to achieve various proposed scientific goals by observing this important cosmic particle accelerator.
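The (anti-)correlation analysis described here reduces, at its simplest, to comparing co-aligned sky maps pixel by pixel. A minimal sketch, with synthetic maps standing in for the real CTA, XMM-Newton, and NANTEN data (the array shapes, noise model, and scaling are illustrative assumptions, not values from the paper):

```python
import numpy as np

def spatial_correlation(gamma_map: np.ndarray, template_map: np.ndarray) -> float:
    """Pearson correlation between two co-aligned sky maps.

    A value near 1 supports the template as a tracer of the gamma-ray
    emission; values near zero or negative indicate an (anti-)correlation.
    """
    g = gamma_map.ravel()
    t = template_map.ravel()
    return float(np.corrcoef(g, t)[0, 1])

# Toy example with synthetic 50x50 maps: the "hadronic" gamma-ray map is
# drawn from the gas template with Poisson noise, so it should correlate well.
rng = np.random.default_rng(0)
gas_map = rng.gamma(shape=2.0, scale=3.0, size=(50, 50))   # stand-in for CO/H I
gamma_map = rng.poisson(lam=gas_map * 5.0).astype(float)   # hadronic-like counts

print(f"gamma vs gas correlation: {spatial_correlation(gamma_map, gas_map):.2f}")
```

A hadronic-dominated map built from the gas template returns a correlation near 1; a leptonic map traced by the X-ray template would instead correlate with that template.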
Very High-Energy Gamma-Ray Follow-Up Program Using Neutrino Triggers from IceCube (1610.01814)
IceCube Collaboration: M.G. Aartsen, K. Abraham, M. Ackermann, J.Adams, J.A. Aguilar, M. Ahlers, M.Ahrens, D. Altmann, K. Andeen, T. Anderson, I. Ansseau, G.Anton, M. Archinger, C. Arguelles, J.Auffenberg, S. Axani, X. Bai, S.W. Barwick, V. Baum, R. Bay, J.J. Beatty, J.Becker-Tjus, K.-H.Becker, S. BenZvi, D. Berley, E. Bernardini, A.Bernhard, D.Z. Besson, G. Binder, D. Bindig, M.Bissok, E. Blaufuss, S. Blot, C. Bohm, M. Borner, F. Bos, D. Bose, S. Boser, O. Botner, J. Braun, L. Brayeur, H.-P. Bretz, S. Bron, A. Burgman, T. Carver, M. Casier, E. Cheung, D. Chirkin, A. Christov, K. Clark, L. Classen, S. Coenders, G.H. Collin, J.M. Conrad, D.F. Cowen R. Cross, M. Day, J.P.A.M. de Andre, C.De Clercq, E.del Pino Rosendo, H. Dembinski, S. De Ridder, P. Desiati, K.D. de Vries, G. de Wasseige, M. de With, T. DeYoung, J.C. Diaz-Velez, V. di Lorenzo, H.Dujmovic, J.P. Dumm, M. Dunkman, B. Eberhardt, T. Ehrhardt, B. Eichmann, P. Eller, S. Euler, P.A. Evenson, S. Fahey, A.R. Fazely, J. Feintzeig, J. Felde, K. Filimonov, C.Finley, S. Flis, C.-C. Fosig, A. Franckowiak, R. Franke, E. Friedman, T. Fuchs, T.K. Gaisser, J. Gallagher, L. Gerhardt, K. Ghorbani, W. Giang, L. Gladstone, T. Glauch, T. Glusenkamp, A. Goldschmidt, G. Golup, J.G. Gonzalez, D. Grant, Z. Griffith, C. Haack, A. Haj Ismail, A. Hallgren, F. Halzen, E. Hansen, T. Hansmann, K. Hanson, D. Hebecker, D. Heereman, K. Helbing, R. Hellauer, S. Hickford, J. Hignight, G.C. Hill, K.D. Hoffman, R. Hoffmann, K. Holzapfel, K. Hoshina, F. Huang, M. Huber, K. Hultqvist, S. In, A. Ishihara, E. Jacobi, G.S. Japaridze, M. Jeong, K. Jero, B.J.P. Jones, M. Jurkovic, A. Kappes, T. Karg, A. Karle, U. Katz, M. Kauer, A. Keivani, J.L. Kelley, A. Kheirandish, M. Kim, T. Kintscher, J. Kiryluk, T. Kittler, S.R. Klein, G. Kohnen, R. Koirala, H. Kolanoski, R. Konietz, L. Kopke, C. Kopper, S. Kopper, D.J. Koskinen, M. Kowalski, K. Krings, M. Kroll, G. Kruckl, C. Kruger, J. Kunnen, S. Kunwar, N. Kurahashi, T. Kuwabara, M. Labare, J.L. Lanfranchi, M.J. Larson, F. Lauber, D. Lennarz, M. Lesiak-Bzdak, M. Leuermann, L. Lu, J. Lunemann, J. Madsen, G. Maggi, K.B.M. Mahn, S. Mancina, M. Mandelartz, R. Maruyama, K. Mase, R. Maunu, F. McNally, K. Meagher, M. Medici, M. Meier, A. Meli, T. Menne, G. Merino, T. Meures, S. Miarecki, L. Mohrmann, T. Montaruli, M. Moulai, R. Nahnhauer, U. Naumann, G. Neer, H. Niederhausen, S.C. Nowicki, D.R. Nygren, A. Obertacke Pollmann, A. Olivas, A. O'Murchadha, T. Palczewski, H. Pandya, D.V. Pankova, P. Peiffer, O. Penek, J.A. Pepper, C. Perez de los Heros, D. Pieloth, E. Pinat, P.B. Price, G.T. Przybylski, M. Quinnan, C. Raab, L. Radel, M. Rameez, K. Rawlins, R. Reimann, B. Relethford, M. Relich, E. Resconi, W. Rhode, M. Richman, B. Riedel, S. Robertson, M. Rongen, C. Rott, T. Ruhe, D.Ryckbosch, D. Rysewyk, L.Sabbatini, S.E. Sanchez-Herrera, A. Sandrock, J. Sandroos, S. Sarkar, K. Satalecka, P. Schlunder, T. Schmidt, S. Schoenen, S. Schoneberg, L. Schumacher, D. Seckel, S. Seunarine, D. Soldin, M. Song, G.M. Spiczak, C. Spiering, T. Stanev, A. Stasik, J. Stettner, A. Steuer, T. Stezelberger, R.G. Stokstad, A. Stossl, R. Strom, N.L. Strotjohann, G.W. Sullivan, M. Sutherland, H. Taavola, I. Taboada, J. Tatar, F. Tenholt, S. Ter-Antonyan, A. Terliuk, G. Tevsic, S. Tilav, P.A. Toale, M.N. Tobin, S. Toscano, D. Tosi, M. Tselengidou, A. Turcati, E. Unger, M. Usner, J. Vandenbroucke, N. van Eijndhoven, S. Vanheule, M. van Rossem, J. van Santen, J. Veenkamp, M. Vehring, M. Voge, E. Vogel, M. Vraeghe, C. Walck, A. Wallace, M. Wallraff, N. Wandkowsky, Ch. 
Weaver, M.J. Weiss, C. Wendt, S. Westerhoff, B.J. Whelan, S. Wickmann, K. Wiebe, C.H. Wiebusch, L. Wille, D.R. Williams, L. Wills, M. Wolf, T.R. Wood, E. Woolsey, K. Woschnagg, D.L. Xu, X.W. Xu, Y. Xu, J.P. Yanez, G. Yodh, S. Yoshida, M. Zoll MAGIC Collaboration: M.L. Ahnen, S. Ansoldi, L.A. Antonelli, P. Antoranz, A. Babic, B. Banerjee, P. Bangale, U.Barres de Almeida, J.A. Barrio, J. Becerra Gonzalez, W. Bednarek, E. Bernardini, A. Berti, B. Biasuzzi, A. Biland, O. Blanch, S. Bonnefoy, G. Bonnoli, F. Borracci, T. Bretz, S. Buson, A. Carosi, A. Chatterjee, R. Clavero, P. Colin, E. Colombo, J.L. Contreras, J. Cortina, S. Covino, P. Da Vela, F. Dazzi, A. De Angelis, B. De Lotto, E. de Ona Wilhelmi, F. Di Pierro, M. Doert, A. Dominguez, D. Dominis Prester, D. Dorner, M. Doro, S. Einecke, D. Eisenacher Glawion, D. Elsaesser, M. Engelkemeier, V. Fallah Ramazani, A. Fernandez-Barral, D. Fidalgo, M.V. Fonseca, L. Font, K. Frantzen, C. Fruck, D. Galindo, R. J. Garcia Lopez, M. Garczarczyk, D. Garrido Terrats, M. Gaug, P. Giammaria, N. Godinovic, A. Gonzalez Munoz, D. Gora, D. Guberman, D. Hadasch, A. Hahn, Y. Hanabata, M. Hayashida, J. Herrera, J. Hose, D. Hrupec, G. Hughes, W. Idec, K. Kodani, Y. Konno, H. Kubo, J. Kushida, A. La Barbera, D. Lelas, E. Lindfors, S. Lombardi, F. Longo, M. Lopez, R. Lopez-Coto, P. Majumdar, M. Makariev, K. Mallot, G. Maneva, M. Manganaro, K. Mannheim, L. Maraschi, B. Marcote, M. Mariotti, M. Martinez, D. Mazin, U. Menzel, J.M. Miranda, R. Mirzoyan, A. Moralejo, E. Moretti, D. Nakajima, V. Neustroev, A. Niedzwiecki, M. Nievas Rosillo, K. Nilsson, K. Nishijima, K. Noda, L. Nogues, A. Overkemping, S. Paiano, J. Palacio, M. Palatiello, D. Paneque, R. Paoletti, J.M. Paredes, X. Paredes-Fortuny, G. Pedaletti, M. Peresano, L. Perri, M. Persic, J. Poutanen, P.G. Prada Moroni, E.Prandini, I. Puljak, I. Reichardt, W. Rhode, M. Ribo, J. Rico, J. Rodriguez Garcia, T. Saito, K. Satalecka, S. Schroeder, C. Schultz, T. Schweizer, A. Sillanpaa, J. Sitarek, I. Snidaric, D. Sobczynska, A. Stamerra, T. Steinbring, M. Strzys, T. Suric, L. Takalo, F. Tavecchio, P. Temnikov, T.Terzic, D. Tescaro, M. Teshima, J. Thaele, D.F. Torres, T. Toyama, A. Treves, G. Vanzo, V. Verguilov, I. Vovk, J.E. Ward, M. Will, M.H. Wu, R. Zanin VERITAS Collaboration: A.U. Abeysekara, S. Archambault, A. Archer, W. Benbow, R. Bird, E. Bourbeau, M. Buchovecky, V. Bugaev, K. Byrum, J.V Cardenzana, M. Cerruti, L. Ciupik, M.P. Connolly, W. Cui, H.J. Dickinson, J. Dumm, J.D. Eisch, M. Errando, A. Falcone, Q. Feng, J.P. Finley, H. Fleischhack, A. Flinders, L. Fortson, A. Furniss, G.H. Gillanders, S. Griffin, J. Grube, M. Hutten, N. Haakansson, O. Hervet, J. Holder, T.B. Humensky, C.A. Johnson, P. Kaaret, P. Kar, N. Kelley-Hoskins, M. Kertzman, D. Kieda, M. Krause, F. Krennrich, S. Kumar, M.J. Lang, G. Maier, S. McArthur, A. McCann, P. Moriarty, R. Mukherjee, T. Nguyen, D. Nieto, S. O'Brien, R.A. Ong, A.N. Otte, N. Park, M. Pohl, A. Popkow, E. Pueschel, J. Quinn, K. Ragan, P.T. Reynolds, G.T. Richards, E. Roache, C. Rulten, I. Sadeh, M. Santander, G.H. Sembroski, K. Shahinyan, D. Staszak, I. Telezhinsky, J.V. Tucci, J. Tyler, S.P. Wakely, A. Weinstein, P. Wilcox, A. Wilhelm, D.A. Williams, B. Zitzer
Nov. 12, 2016 hep-ex, physics.ins-det, astro-ph.IM, astro-ph.HE
We describe and report the status of a neutrino-triggered program in IceCube that generates real-time alerts for gamma-ray follow-up observations by atmospheric-Cherenkov telescopes (MAGIC and VERITAS). While IceCube is capable of monitoring the whole sky continuously, high-energy gamma-ray telescopes have restricted fields of view and in general are unlikely to be observing a potential neutrino-flaring source at the time such neutrinos are recorded. The use of neutrino-triggered alerts thus aims at increasing the availability of simultaneous multi-messenger data during potential neutrino flaring activity, which can increase the discovery potential and constrain the phenomenological interpretation of the high-energy emission of selected source classes (e.g. blazars). The requirements of a fast and stable online analysis of potential neutrino signals and its operation are presented, along with first results of the program operating between 14 March 2012 and 31 December 2015.
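A basic constraint behind such a follow-up program is the restricted field of view: an alert is only actionable if the candidate source is sufficiently high above the horizon at the telescope site. A minimal visibility-check sketch using the standard altitude formula (the site latitude, source declination, and 30° threshold are purely illustrative, not values from the paper):

```python
import math

def source_altitude_deg(dec_deg: float, lat_deg: float, hour_angle_deg: float) -> float:
    """Altitude of a source given its declination, the site latitude,
    and the local hour angle (all in degrees)."""
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    ha = math.radians(hour_angle_deg)
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_alt))

# Illustrative only: a site at roughly the latitude of La Palma (~28.8 N)
# and a hypothetical alert at dec = +20 deg crossing the meridian.
alt = source_altitude_deg(dec_deg=20.0, lat_deg=28.8, hour_angle_deg=0.0)
MIN_ALT_DEG = 30.0  # hypothetical follow-up threshold
print(f"altitude = {alt:.1f} deg -> "
      f"{'observable' if alt > MIN_ALT_DEG else 'not observable'}")
```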
A propeller model for the sub-luminous disk state of the transitional millisecond pulsar PSR J1023+0038 (1504.05029)
A. Papitto, D.F. Torres
The discovery of millisecond pulsars switching between states powered either by the rotation of their magnetic field or by the accretion of matter has recently proved the tight link shared by millisecond radio pulsars and neutron stars in low-mass X-ray binaries. Transitional millisecond pulsars also show an enigmatic intermediate state in which the neutron star is surrounded by an accretion disk and emits coherent X-ray pulsations, but is sub-luminous in X-rays with respect to accreting neutron stars and brighter in gamma-rays than millisecond pulsars in the rotation-powered state. Here, we model the X-ray and gamma-ray emission observed from PSR J1023+0038 in such a state, based on the assumption that most of the disk in-flow is propelled away by the rapidly rotating neutron star magnetosphere and that electrons can be accelerated to energies of a few GeV at the turbulent disk-magnetosphere boundary. We show that the synchrotron and self-synchrotron Compton emission coming from such a region, together with the hard disk emission typical of low states of accreting compact objects, is able to explain the radiation observed in the X-ray and gamma-ray bands. The average emission observed from PSR J1023+0038 is modelled by a disk in-flow with a rate of $(1-3)\times10^{-11}\ M_{\odot}\,\mathrm{yr}^{-1}$, truncated at a radius ranging between 30 and 45 km, compatible with the hypothesis of a propelling magnetosphere. We compare our results with models that instead assume that a rotation-powered pulsar is turned on, showing that the spin-down power released in such scenarios can hardly account for the magnitude of the observed emission.
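For orientation (our own numbers, apart from the 30-45 km truncation radius quoted above): the propeller regime requires the inner disk edge to sit outside the corotation radius. Taking a fiducial neutron star mass $M=1.4M_{\odot}$ and the ~1.69 ms spin period of PSR J1023+0038,

$$ r_c=\Big(\frac{GMP^2}{4\pi ^2}\Big)^{1/3}\approx 24\ \mathrm{km}, $$

so a truncation radius of 30-45 km indeed lies outside corotation, as the propeller hypothesis demands.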
The Large Observatory For x-ray Timing (1408.6526)
M. Feroci, S. Brandt, A. Santangelo, M. Ahangarianabhari, D. Altamirano, N. Andersson, J.-L. Atteia, S. Balman, A. Baykal, S. Bianchi, F. Bocchino, S. Boutloukos, N. Bucciantini, C. Budtz-Jørgensen, G.A. Caliandro, J. Casares, P. Cerda-Duran, J. Chenevez, T. Courvoisier, A. D'Aì, D. De Martino, M. Del Santo, A. Drago, P. Esposito, Y. Favre, M. Finger, M. Gabler, E. Garcia-Berro, P. Giommi, A. Goldwurm, M. Grassi, C. Guidorzi, F. Hansen, A. Heger, J. Huovelin, K. Iwasawa, T. Johannsen, G. Kanbach, L. Keek, S. Korpela, I. Kuvvetli, P.P. Laubert, F. Longo, S. Mahmoodifar, V. Mangano, A. Martindale, M. Mendez, R. Mignani, G. Miniutti, G. Mouret, T. Muñoz-Darias, P. O'Brien, M. Orlandini, F. Ozel, J. M. Paredes, A. Pellizzoni, C. Pittori, M. Prakash, P. Ramon, I. Rashevskaya, M. Reina Aranda, M. Ribo, P. Rodríguez- Gil, E.M.R. Rossi, L. Sabau-Graziati, S. Scaringi, S. Shore, J.-Y. Seyler, V. Sochora, B. Stappers, T.E. Strohmayer, T. Takahashi, L. Tolos, D.F. Torres, S. Turriziani, P. Varniere, S. Watanabe, H. Wende, C.A. Wilson-Hodge, N. Zampa, F. Zwart, E. Kuulkers INFN, Sez. Roma Tor Vergata, Rome, Italy, ISDC, Geneve University, Switzerland, Astronomical Institute Anton Pannekoek, University of Amsterdam, The Netherlands, INAF-IASF-Bologna, Italy, Faculty of Physical, Applied Sciences, University of Southampton, United Kingdom, ASDC, Rome, Italy, Dipartimento di Chimica e Fisica, Palermo University, Italy, Politecnico Milano, Italy, Dept. of Physics, Astronomy University of Padua, Italy, IAAT Tuebingen, Germany, National Space Institute, Lyngby, Denmark, DAM, ICC-UB, Universitat de Barcelona, Spain, Cagliari University, Italy, Astronomical Institute of the Academy of Sciences of the Czech Republic, Czech Republic, Cambridge University, Cambridge, United Kingdom, Laboratoire d'Astrophysique de Bordeaux, France, MIT, Cambridge, United States, McGill University, Montréal, Canada, Ferrara University, Ferrara, Italy, Department of Medical Biophysics, University of Toronto, Canada, Leicester University, United Kingdom, Universities Space Research Association, Huntsville, United States, Monash Centre for Astrophysics, School of Physics, School of Mathematical Sciences, Monash University, Australia, University of Tasmania, Australia, Radboud University, The Netherlands, Open University, United Kingdom, NASA/Marshall Space Flight Center, Huntsville, United States, Durham University, United Kingdom, University of Iowa, United States, Copernicus Astronomical Center, Warsaw, Poland, NASA/Marshall Space Flight Center, United States, Cornell University, Ithaca, United States, Dipartimento di Fisica, Università degli Studi di Milano, Italy, University of Trieste, Italy, University of California, United States, University of Melbourne, Australia, Kapteyn Astronomical Institute, University of Groningen, The Netherlands, University of Maryland, United States, University of Alberta, Canada, Observatoire Astronomique de Strasbourg, France, INAF-OA Torino, Italy, Space Telescope Institute, United States, Raman Research Institute, India, Czech Technical University in Prague, Czech Republic, Armagh Observatory, United Kingdom, NRL, Washington, United States, Institute for Nuclear Theory, University of Washington, United States, Instituto de Astrofisica de Canarias, Tenerife, Spain, Leiden Observatory, The Netherlands, INAF-IASF-Milano, Italy, Silesian University in Opava, Czech Republic, Institut für Kernphysik, Technische Universität Darmstadt, ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für 
Schwerionenforschung GmbH, Germany, Leibniz-Institut fuer Astrophysik Potsdam, Germany, National University of Ireland, Ireland, Aristotle University of Thessaloniki, Greece, Goddard Space Flight Center, Greenbelt, United States, University of Alicante, Spain, Physical Institute of the Academy of Sciences of the Czech Republic, Czech Republic, University of Warwick, United Kingdom, INAF-OA Padova, Padova, Italy, University of Rome Tor Vergata, Italy, University of Bologna, Italy, Universidad de La Laguna, Santa Cruz de Tenerife, Spain, APC, Université Paris Diderot, CEA/Irfu, Observatoire de Paris, France, School of Physics, Astronomy, University of Southampton, United Kingdom, Kepler Institute of Astronomy, University of Zielona Gòra, Poland, University of Wisconsin, United States, Wayne State University, Detroit, United States, Foundation for Research, Technology, Heraklion, Greece, CNES, Toulouse, France, Instituto Astrofisica de Andalucia, Granada, Spain, Perimeter Institute for Theoretical Physics, Waterloo, Canada, Università di Napoli Fedelico II, Italy, School of Physics, Astronomy, University of Birmingham, United Kingdom, University of California, Berkeley, Space Sciences Laboratory, United States, Ohio University, United States, Max-Planck-Institut fuer extraterrestrische Physik, Garching, Germany, Max Planck Institute for Gravitational Physics, Germany, Technical University of Catalonia, Barcelona, Spain, Department of Physics, Astronomy, University of Waterloo, Canada, Sapienza University, Rome, Italy, Institute for Astronomy K.U. Leuven, Leuven, Belgium, Texas Tech. University, United States, Tata Institute of Fundamental Research, Mumbai, India, Jorgen Sandberg Consulting, Denmark, Istanbul Kültür University, Turkey, Facultad de Ciencias-Trilingüe University of Salamanca, Spain, University of Surrey, United Kingdom, Oxford University, United Kingdom, European Space Agency, ESTEC, The Netherlands, European Space Astronomy Centre, Madrid, Spain,
Aug. 29, 2014 astro-ph.IM
The Large Observatory For x-ray Timing (LOFT) was studied within the ESA M3 Cosmic Vision framework and participated in the final down-selection for a launch slot in 2022-2024. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument, LOFT will study the behaviour of matter under extreme conditions, such as the strong gravitational field in the innermost regions of accretion flows close to black holes and neutron stars, and the supra-nuclear densities in the interior of neutron stars. The science payload is based on a Large Area Detector (LAD, 10 m$^2$ effective area, 2-30 keV, 240 eV spectral resolution, 1 deg collimated field of view) and a Wide Field Monitor (WFM, 2-50 keV, 4 steradian field of view, 1 arcmin source location accuracy, 300 eV spectral resolution). The WFM is equipped with an on-board system for the localization of bright events (e.g., GRBs). The trigger time and position of these events are broadcast to the ground within 30 s of discovery. In this paper we present the status of the mission at the end of its Phase A study.
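To put the quoted LAD spectral resolution in context (a derived figure, not stated in the abstract): at the astrophysically important 6.4 keV iron line, a 240 eV resolution corresponds to a resolving power of

$$ R=\frac{E}{\Delta E}=\frac{6400\ \mathrm{eV}}{240\ \mathrm{eV}}\approx 27. $$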
CTA contributions to the 33rd International Cosmic Ray Conference (ICRC2013) (1307.2232)
The CTA Consortium: O. Abril, B.S. Acharya, M. Actis, G. Agnetta, J.A. Aguilar, F. Aharonian, M. Ajello, A. Akhperjanian, M. Alcubierre, J. Aleksic, R. Alfaro, E. Aliu, A.J. Allafort, D. Allan, I. Allekotte, R. Aloisio, E. Amato, G. Ambrosi, M. Ambrosio, J. Anderson, E.O. Angüner, L.A. Antonelli, V. Antonuccio, M. Antonucci, P. Antoranz, A. Aravantinos, A. Argan, T. Arlen, C. Aramo, T. Armstrong, H. Arnaldi, L. Arrabito, K. Asano, T. Ashton, H. G. Asorey, T. Aune, Y. Awane, H. Baba, A. Babic, N. Baby, J. Bähr, A. Bais, C. Baixeras, S. Bajtlik, M. Balbo, D. Balis, C. Balkowski, J. Ballet, A. Bamba, R. Bandiera, A. Barber, C. Barbier, M. Barceló, A. Barnacka, J. Barnstedt, U. Barres de Almeida, J.A. Barrio, A. Basili, S. Basso, D. Bastieri, C. Bauer, A. Baushev, U. Becciani, J. Becerra, J. Becerra, Y. Becherini, K.C. Bechtol, J. Becker Tjus, V. Beckmann, W. Bednarek, B. Behera, M. Belluso, W. Benbow, J. Berdugo, D. Berge, K. Berger, F. Bernard, T. Bernardino, K. Bernlöhr, B. Bertucci, N. Bhat, S. Bhattacharyya, B. Biasuzzi, C. Bigongiari, A. Biland, S. Billotta, T. Bird, E. Birsin, E. Bissaldi, J. Biteau, M. Bitossi, S. Blake, O. Blanch Bigas, P. Blasi, A. Bobkov, V. Boccone, M. Böttcher, L. Bogacz, J. Bogart, M. Bogdan, C. Boisson, J. Boix Gargallo, J. Bolmont, G. Bonanno, A. Bonardi, T. Bonev, P. Bonifacio, G. Bonnoli, P. Bordas, A. Borgland, J. Borkowski, R. Bose, O. Botner, A. Bottani, L. Bouchet, M. Bourgeat, C. Boutonnet, A. Bouvier, S. Brau-Nogué, I. Braun, T. Bretz, M. Briggs, M. Brigida, T. Bringmann, R. Britto, P. Brook, P. Brun, L. Brunetti, P. Bruno, N. Bucciantini, T. Buanes, J. Buckley, R. Bühler, V. Bugaev, A. Bulgarelli, T. Bulik, G. Busetto, S. Buson, K. Byrum, M. Cailles, R. Cameron, J. Camprecios, R. Canestrari, S. Cantu, M. Capalbi, P. Caraveo, E. Carmona, A. Carosi, R. Carosi, J. Carr, J. Carter, P.-H. Carton, R. Caruso, S. Casanova, E. Cascone, M. Casiraghi, A. Castellina, O. Catalano, S. Cavazzani, S. Cazaux, P. Cerchiara, M. Cerruti, E. Chabanne, P. Chadwick, C. Champion, R. Chaves, P. Cheimets, A. Chen, J. Chiang, L. Chiappetti, M. Chikawa, V.R. Chitnis, F. Chollet, A. Christof, J. Chudoba, M. Cieślar, A. Cillis, M. Cilmo, A. Codino, J. Cohen-Tanugi, S. Colafrancesco, P. Colin, J. Colome, S. Colonges, M. Compin, P. Conconi, V. Conforti, V. Connaughton, J. Conrad, J.L. Contreras, P. Coppi, J. Coridian, P. Corona, D. Corti, J. Cortina, L. Cossio, A. Costa, H. Costantini, G. Cotter, B. Courty, S. Couturier, S. Covino, G. Crimi, S.J. Criswell, J. Croston, G. Cusumano, M. Dafonseca, O. Dale, M. Daniel, J. Darling, I. Davids, F. Dazzi, A. de Angelis, V. De Caprio, F. De Frondat, E.M. de Gouveia Dal Pino, I. de la Calle, G.A. De La Vega, R. de los Reyes Lopez, B. de Lotto, A. De Luca, M. de Naurois, Y. de Oliveira, E. de Oña Wilhelmi, F. de Palma, V. de Souza, G. Decerprit, G. Decock, C. Deil, E. Delagnes, G. Deleglise, C. Delgado, D. della Volpe, P. Demange, G. Depaola, A. Dettlaff, T. Di Girolamo, C. Di Giulio, A. Di Paola, F. Di Pierro, G. di Sciascio, C. Díaz, J. Dick, R. Dickherber, H. Dickinson, V. Diez-Blanco, S. Digel, D. Dimitrov, G. Disset, A. Djannati-Ataï, M. Doert, M. Dohmke, W. Domainko, D. Dominis Prester, A. Donat, D. Dorner, M. Doro, J.-L. Dournaux, G. Drake, D. Dravins, L. Drury, F. Dubois, R. Dubois, G. Dubus, C. Dufour, D. Dumas, J. Dumm, D. Durand, V. Dwarkadas, J. Dyks, M. Dyrda, J. Ebr, E. Edy, K. Egberts, P. Eger, S. Einecke, C. Eleftheriadis, S. Elles, D. Emmanoulopoulos, D. Engelhaupt, R. Enomoto, J.-P. Ernenwein, M. Errando, A. Etchegoyen, P.A. 
Evans, A. Falcone, A. Faltenbacher, D. Fantinel, K. Farakos, C. Farnier, E. Farrell, G. Fasola, B.W. Favill, E. Fede, S. Federici, S. Fegan, F. Feinstein, D. Ferenc, P. Ferrando, M. Fesquet, P. Fetfatzis, A. Fiasson, E. Fillin-Martino, D. Fink, C. Finley, J. P. Finley, M. Fiorini, R. Firpo Curcoll, E. Flandrini, H. Fleischhack, H. Flores, D. Florin, W. Focke, C. Föhr, E. Fokitis, L. Font, G. Fontaine, M. Fornasa, A. Förster, L. Fortson, N. Fouque, A. Franckowiak, F.J. Franco, A. Frankowski, C. Fransson, G.W. Fraser, R. Frei, L. Fresnillo, C. Fruck, D. Fugazza, Y. Fujita, Y. Fukazawa, Y. Fukui, S. Funk, W. Gäbele, S. Gabici, R. Gabriele, A. Gadola, N. Galante, D. Gall, Y. Gallant, J. Gámez-García, M. Garczarczyk, B. García, R. Garcia López, D. Gardiol, F. Gargano, D. Garrido, L. Garrido, D. Gascon, M. Gaug, J. Gaweda, L. Gebremedhin, N. Geffroy, L. Gerard, A. Ghedina, M. Ghigo, P. Ghislain, E. Giannakaki, F. Gianotti, S. Giarrusso, G. Giavitto, B. Giebels, N. Giglietto, V. Gika, M. Giomi, P. Giommi, F. Giordano, N. Girard, E. Giro, A. Giuliani, T. Glanzman, J.-F. Glicenstein, N. Godinovic, V. Golev, M. Gomez Berisso, J. Gómez-Ortega, M.M. Gonzalez, A. González, F. González, A. González Muñoz, K.S. Gothe, T. Grabarczyk, M. Gougerot, R. Graciani, P. Grandi, F. Grañena, J. Granot, G. Grasseau, R. Gredig, A. Green, T. Greenshaw, T. Grégoire, A. Grillo, O. Grimm, M.-H. Grondin, J. Grube, M. Grudzinska, V. Gruev, S. Grünewald, J. Grygorczuk, V. Guarino, S. Gunji, G. Gyuk, D. Hadasch, A. Hagedorn, R. Hagiwara, J. Hahn, N. Hakansson, A. Hallgren, N. Hamer Heras, S. Hara, M.J. Hardcastle, D. Harezlak, J. Harris, T. Hassan, K. Hatanaka, T. Haubold, A. Haupt, T. Hayakawa, M. Hayashida, R. Heller, F. Henault, G. Henri, G. Hermann, R. Hermel, A. Herrero, O. Hervet, N. Hidaka, J.A. Hinton, K. Hirotani, D. Hoffmann, W. Hofmann, P. Hofverberg, J. Holder, J.R. Hörandel, D. Horns, D. Horville, J. Houles, M. Hrabovsky, D. Hrupec, H. Huan, B. Huber, J.-M. Huet, G. Hughes, T.B. Humensky, J. Huovelin, J.-F. Huppert, A. Ibarra, D. Ikawa, J.M. Illa, D. Impiombato, S. Incorvaia, S. Inoue, Y. Inoue, F. Iocco, K. Ioka, G.L. Israel, C. Jablonski, A. Jacholkowska, J. Jacquemier, M. Jamrozy, M. Janiak, P. Jean, C. Jeanney, J.J. Jimenez, T. Jogler, C. Johnson, T. Johnson, L. Journet, C. Juffroy, I. Jung, P. Kaaret, S. Kabuki, M. Kagaya, J. Kakuwa, C. Kalkuhl, R. Kankanyan, A. Karastergiou, K. Kärcher, M. Karczewski, S. Karkar, J. Kasperek, D. Kastana, H. Katagiri, J. Kataoka, K. Katarzyński, U. Katz, N. Kawanaka, D. Kazanas, N. Kelley-Hoskins, B. Kellner-Leidel, H. Kelly, E. Kendziorra, B. Khélifi, D.B. Kieda, T. Kifune, T. Kihm, T. Kishimoto, K. Kitamoto, W. Kluźniak, C. Knapic, J. Knapp, J. Knödlseder, F. Köck, J. Kocot, K. Kodani, J.-H. Köhne, K. Kohri, K. Kokkotas, D. Kolitzus, N. Komin, I. Kominis, Y. Konno, H. Köppel, P. Korohoda, K. Kosack, G. Koss, R. Kossakowski, R. Koul, G. Kowal, S. Koyama, J. Kozioł, T. Krähenbühl, J. Krause, H. Krawzcynski, F. Krennrich, A. Krepps, A. Kretzschmann, R. Krobot, P. Krueger, H. Kubo, V.A. Kudryavtsev, J. Kushida, A. Kuznetsov, A. La Barbera, N. La Palombara, V. La Parola, G. La Rosa, K. Lacombe, G. Lamanna, J. Lande, D. Languignon, J.S. Lapington, P. Laporte, B. Laurent, C. Lavalley, T. Le Flour, A. Le Padellec, S.-H. Lee, W.H. Lee, J.-P. Lefèvre, H. Leich, M.A. Leigui de Oliveira, D. Lelas, J.-P. Lenain, R. Leoni, D.J. Leopold, T. Lerch, L. Lessio, G. Leto, B. Lieunard, S. Lieunard, R. Lindemann, E. Lindfors, A. Liolios, A. Lipniacka, H. Lockart, T. Lohse, S. Lombardi, F. 
Longo, A. Lopatin, M. Lopez, R. López-Coto, A. López-Oramas, A. Lorca, E. Lorenz, F. Louis, P. Lubinski, F. Lucarelli, H. Lüdecke, J. Ludwin, P.L. Luque-Escamilla, W. Lustermann, O. Luz, E. Lyard, M.C. Maccarone, T.J. Maccarone, G.M. Madejski, A. Madhavan, M. Mahabir, G. Maier, P. Majumdar, G. Malaguti, G. Malaspina, S. Maltezos, A. Manalaysay, A. Mancilla, D. Mandat, G. Maneva, A. Mangano, P. Manigot, K. Mannheim, I. Manthos, N. Maragos, A. Marcowith, M. Mariotti, M. Marisaldi, S. Markoff, A. Marszałek, C. Martens, J. Martí, J.-M. Martin, P. Martin, G. Martínez, F. Martínez, M. Martínez, F. Massaro, A. Masserot, A. Mastichiadis, A. Mathieu, H. Matsumoto, F. Mattana, S. Mattiazzo, A. Maurer, G. Maurin, S. Maxfield, J. Maya, D. Mazin, L. Mc Comb, A. McCann, N. McCubbin, I. McHardy, R. McKay, K. Meagher, C. Medina, C. Melioli, D. Melkumyan, D. Melo, S. Mereghetti, P. Mertsch, M. Meucci, M. Meyer, J. Michałowski, P. Micolon, A. Mihailidis, T. Mineo, M. Minuti, N. Mirabal, F. Mirabel, J.M. Miranda, R. Mirzoyan, A. Mistò, T. Mizuno, B. Moal, R. Moderski, I. Mognet, E. Molinari, M. Molinaro, T. Montaruli, C. Monte, I. Monteiro, P. Moore, A. Moralejo Olaizola, M. Mordalska, C. Morello, K. Mori, G. Morlino, A. Morselli, F. Mottez, Y. Moudden, E. Moulin, I. Mrusek, R. Mukherjee, P. Munar-Adrover, H. Muraishi, K. Murase, A. StJ. Murphy, S. Nagataki, T. Naito, D. Nakajima, T. Nakamori, K. Nakayama, C. Naumann, D. Naumann, M. Naumann-Godo, P. Nayman, D. Nedbal, D. Neise, L. Nellen, A. Neronov, V. Neustroev, N. Neyroud, L. Nicastro, J. Nicolau-Kukliński, A. Niedźwiecki, J. Niemiec, D. Nieto, A. Nikolaidis, K. Nishijima, K.-I. Nishikawa, K. Noda, S. Nolan, R. Northrop, D. Nosek, N. Nowak, A. Nozato, L. Oakes, P.T. O'Brien, Y. Ohira, M. Ohishi, S. Ohm, H. Ohoka, T. Okuda, A. Okumura, J.-F. Olive, R.A. Ong, R. Orito, M. Orr, J.P. Osborne, M. Ostrowski, L.A. Otero, N. Otte, E. Ovcharov, I. Oya, A. Ozieblo, L. Padilla, I. Pagano, S. Paiano, D. Paillot, A. Paizis, S. Palanque, M. Palatka, J. Pallota, M. Palatiello, K. Panagiotidis, J.-L. Panazol, D. Paneque, M. Panter, M.R. Panzera, R. Paoletti, A. Papayannis, G. Papyan, J.M. Paredes, G. Pareschi, J.-M. Parraud, D. Parsons, G. Pauletta, M. Paz Arribas, M. Pech, G. Pedaletti, V. Pelassa, D. Pelat, M. d. C. Perez, M. Persic, P.-O. Petrucci, B. Peyaud, A. Pichel, D. Pieloth, E. Pierre, S. Pita, G. Pivato, F. Pizzolato, M. Platino, Ł. Platos, R. Platzer, S. Podkladkin, L. Pogosyan, M. Pohl, G. Pojmanski, J.D. Ponz, W. Potter, J. Poutanen, E. Prandini, J. Prast, R. Preece, F. Profeti, H. Prokoph, M. Prouza, M. Proyetti, I. Puerto-Giménez, G. Pühlhofer, I. Puljak, M. Punch, R. Pyzioł, E.J. Quel, J. Quesada, J. Quinn, A. Quirrenbach, E. Racero, S. Rainò, P.J. Rajda, M. Rameez, P. Ramon, R. Rando, R.C. Rannot, M. Rataj, M. Raue, D. Ravignani, P. Reardon, O. Reimann, A. Reimer, O. Reimer, K. Reitberger, M. Renaud, S. Renner, B. Reville, W. Rhode, M. Ribó, M. Ribordy, G. Richards, M.G. Richer, J. Rico, J. Ridky, F. Rieger, P. Ringegni, J. Ripken, P.R. Ristori, A. Rivière, S. Rivoire, L. Rob, G. Rodeghiero, U. Roeser, R. Rohlfs, G. Rojas, P. Romano, W. Romaszkan, G. E. Romero, S.R. Rosen, S. Rosier Lees, D. Ross, G. Rouaix, J. Rousselle, S. Rousselle, A.C. Rovero, F. Roy, S. Royer, B. Rudak, C. Rulten, M. Rupiński, F. Russo, F. Ryde, O. Saavedra, B. Sacco, E.O. Saemann, A. Saggion, V. Sahakian, K. Saito, T. Saito, Y. Saito, N. Sakaki, R. Sakonaka, A. Salini, F. Sanchez, M. Sanchez-Conde, A. Sandoval, H. Sandaker, E. Sant'Ambrogio, A. Santangelo, E.M. Santos, A. 
Sanuy, L. Sapozhnikov, S. Sarkar, N. Sartore, H. Sasaki, K. Satalecka, M. Sawada, V. Scalzotto, V. Scapin, M. Scarcioffolo, J. Schafer, T. Schanz, S. Schlenstedt, R. Schlickeiser, T. Schmidt, J. Schmoll, P. Schovanek, M. Schroedter, A. Schubert, C. Schultz, J. Schultze, A. Schulz, K. Schure, F. Schussler, T. Schwab, U. Schwanke, J. Schwarz, S. Schwarzburg, T. Schweizer, S. Schwemmer, U. Schwendicke, C. Schwerdt, A. Segreto, J.-H. Seiradakis, G.H. Sembroski, M. Servillat, K. Seweryn, M. Sharma, M. Shayduk, R.C. Shellard, J. Shi, T. Shibata, A. Shibuya, S. Shore, E. Shum, E. Sideras-Haddad, L. Sidoli, M. Sidz, J. Sieiro, M. Sikora, J. Silk, A. Sillanpää, B.B. Singh, G. Sironi, J. Sitarek, C. Skole, R. Smareglia, A. Smith, D. Smith, J. Smith, N. Smith, D. Sobczyńska, H. Sol, G. Sottile, M. Sowiński, F. Spanier, D. Spiga, S. Spyrou, V. Stamatescu, A. Stamerra, R.L.C. Starling, Ł. Stawarz, R. Steenkamp, C. Stegmann, S. Steiner, C. Stella, N. Stergioulas, R. Sternberger, M. Sterzel, F. Stinzing, M. Stodulski, Th. Stolarczyk, U. Straumann, E. Strazzeri, L. Stringhetti, A. Suarez, M. Suchenek, R. Sugawara, K.-H. Sulanke, S. Sun, A.D. Supanitsky, T. Suric, P. Sutcliffe, J.M. Sykes, M. Szanecki, T. Szepieniec, A. Szostek, G. Tagliaferri, H. Tajima, H. Takahashi, K. Takahashi, L. Takalo, H. Takami, G. Talbot, J. Tammi, M. Tanaka, S. Tanaka, J. Tasan, M. Tavani, J.-P. Tavernet, L.A. Tejedor, I. Telezhinsky, P. Temnikov, C. Tenzer, Y. Terada, R. Terrier, M. Teshima, V. Testa, D. Tezier, J. Thayer, D. Thuermann, L. Tibaldo, L. Tibaldo, O. Tibolla, A. Tiengo, M.C. Timpanaro, M. Tluczykont, C.J. Todero Peixoto, F. Tokanai, M. Tokarz, K. Toma, A. Tonachini, K. Torii, M. Tornikoski, D.F. Torres, M. Torres, S. Toscano, G. Toso, G. Tosti, T. Totani, F. Toussenel, G. Tovmassian, P. Travnicek, A. Treves, M. Trifoglio, I. Troyano, K. Tsinganos, H. Ueno, G. Umana, K. Umehara, S.S. Upadhya, T. Usher, M. Uslenghi, F. Vagnetti, J.F. Valdes-Galicia, P. Vallania, G. Vallejo, W. van Driel, C. van Eldik, J. Vandenbrouke, J. Vanderwalt, H. Vankov, G. Vasileiadis, V. Vassiliev, D. Veberic, I. Vegas, S. Vercellone, S. Vergani, V. Verzi, G.P. Vettolani, C. Veyssière, J.P. Vialle, A. Viana, M. Videla, C. Vigorito, P. Vincent, S. Vincent, J. Vink, N. Vlahakis, L. Vlahos, P. Vogler, V. Voisin, A. Vollhardt, H.-P. von Gunten, S. Vorobiov, C. Vuerli, V. Waegebaert, R. Wagner, R.G. Wagner, S. Wagner, S.P. Wakely, R. Walter, T. Walther, K. Warda, R.S. Warwick, P. Wawer, R. Wawrzaszek, N. Webb, P. Wegner, A. Weinstein, Q. Weitzel, R. Welsing, M. Werner, H. Wetteskind, R.J. White, A. Wierzcholska, S. Wiesand, A. Wilhelm, M.I. Wilkinson, D.A. Williams, R. Willingale, M. Winde, K. Winiarski, R. Wischnewski, Ł. Wiśniewski, P. Wojcik, M. Wood, A. Wörnlein, Q. Xiong, K.K. Yadav, H. Yamamoto, T. Yamamoto, R. Yamazaki, S. Yanagita, J.M. Yebras, D. Yelos, A. Yoshida, T. Yoshida, T. Yoshikoshi, P. Yu, V. Zabalza, M. Zacharias, A. Zajczyk, L. Zampieri, R. Zanin, A. Zdziarski, A. Zech, A. Zhao, X. Zhou, K. Zietara, J. Ziolkowski, P. Ziółkowski, V. Zitelli, C. Zurbach, P. Zychowski
July 29, 2013 hep-ex, astro-ph.IM, astro-ph.HE
Compilation of CTA contributions to the proceedings of the 33rd International Cosmic Ray Conference (ICRC2013), which took place 2-9 July 2013 in Rio de Janeiro, Brazil
Exploring high-energy processes in binary systems with the Cherenkov Telescope Array (1307.3048)
J.M. Paredes, W. Bednarek, P. Bordas, V. Bosch-Ramon, E. De Cea del Pozo, G. Dubus, S. Funk, D. Hadasch, D. Khangulyan, S. Markoff, J. Moldon, P. Munar-Adrover, S. Nagataki, T. Naito, M. de Naurois, G. Pedaletti, O. Reimer, M. Ribo, A. Szostek, Y. Terada, D.F. Torres, V. Zabalza, A.A. Zdziarski (for the CTA Consortium)
July 11, 2013 astro-ph.HE
Several types of binary systems have been detected up to now at high and very high gamma-ray energies, including microquasars, young pulsars around massive stars and colliding wind binaries. The study of the sources already known, and of the new types of sources expected to be discovered with the unprecedented sensitivity of CTA, will allow us to qualitatively improve our knowledge on particle acceleration, emission and radiation reprocessing, and on the dynamics of flows and their magnetic fields. Here we present some examples of the capabilities of CTA to probe the flux and spectral changes that typically occur in these astrophysical sources, as well as to search for delays in correlated X-ray/TeV variability with CTA and satellites of the CTA era. Our results show that our knowledge of the high-energy physics in binary systems will significantly deepen with CTA.
Binaries with the eyes of CTA (1210.3215)
Oct. 11, 2012 astro-ph.HE
The binary systems that have been detected in gamma rays have proven very useful to study high-energy processes, in particular particle acceleration, emission and radiation reprocessing, and the dynamics of the underlying magnetized flows. Binary systems, either detected or potential gamma-ray emitters, can be grouped in different subclasses depending on the nature of the binary components or the origin of the particle acceleration: the interaction of the winds of either a pulsar and a massive star or two massive stars; accretion onto a compact object and jet formation; and interaction of a relativistic outflow with the external medium. We evaluate the potentialities of an instrument like the Cherenkov telescope array (CTA) to study the non-thermal physics of gamma-ray binaries, which requires the observation of high-energy phenomena at different time and spatial scales. We analyze the capability of CTA, under different configurations, to probe the spectral, temporal and spatial behavior of gamma-ray binaries in the context of the known or expected physics of these sources. CTA will be able to probe with high spectral, temporal and spatial resolution the physical processes behind the gamma-ray emission in binaries, significantly increasing as well the number of known sources. This will allow the derivation of information on the particle acceleration and emission sites qualitatively better than what is currently available.
LOFT: the Large Observatory For X-ray Timing (1209.1497)
M. Feroci, J.W. den Herder, E. Bozzo, D. Barret, S. Brandt, M. Hernanz, M. van der Klis, M. Pohl, A. Santangelo, L. Stella, A. Watts, J. Wilms, S. Zane, M. Ahangarianabhari, A. Alpar, D. Altamirano, L. Alvarez, L. Amati, C. Amoros, N. Andersson, A. Antonelli, A. Argan, R. Artigue, P. Azzarello, G. Baldazzi, S. Balman, M. Barbera, T. Belloni, G. Bertuccio, S. Bianchi, A. Bianchini, P. Bodin, J.-M. Bonnet Bidaud, S. Boutloukos, J. Braga, E. Brown, N. Bucciantini, L. Burderi, M. Bursa, C. Budtz-Jørgensen, E. Cackett, F.R. Cadoux, P. Cais, G.A. Caliandro, R. Campana, S. Campana, P. Casella, D. Chakrabarty, J. Chenevez, J. Coker, R. Cole, A. Collura, T. Courvoisier, A. Cros, A. Cumming, G. Cusumano, A. D'Aì, V. D'Elia, E. Del Monte, D. De Martino, A. De Rosa, S. Di Cosimo, S. Diebold, T. Di Salvo, I. Donnarumma, A. Drago, M. Durant, D. Emmanoulopoulos, Y. Evangelista, A. Fabian, M. Falanga, Y. Favre, C. Feldman, C. Ferrigno, M. H. Finger, G.W. Fraser, F. Fuschino, D.K. Galloway, J.L. Galvez Sanchez, E. Garcia-Berro, B. Gendre, S. Gezari, A.B. Giles, M. Gilfanov, P. Giommi, G. Giovannini, M. Giroletti, A. Goldwurm, D. Götz, C. Gouiffes, M. Grassi, P. Groot C. Guidorzi, D. Haas, F. Hansen, D.H. Hartmann, C.A. Haswe, A. Heger, J. Homan, A. Hornstrup, R. Hudec, J. Huovelin, A. Ingram, J.J.M. in't Zand, J.Isern, G. Israe, L. Izzo, P. Jonker, P. Kaaret, V. Karas, D. Karelin, D. Kataria, L. Keek, T. Kennedy, D. Klochkov, W. Kluzniak, K. Kokkotas, S. Korpela, C. Kouveliotou, I. Kreykenbohm, L.M. Kuiper, I. Kuvvetli, C. Labanti, D. Lai, F.K. Lamb, F. Lebrun, D. Lin, D. Linder, G. Lodato, F. Longo, N. Lund, T.J. Maccarone, D. Macera, D. Maier, P. Malcovati, V. Mangano, A. Manousakis, M. Marisaldi, A. Markowitz, A. Martindale, G. Matt, I.M. McHardy, A. Melatos, M. Mendez, S. Migliari, R. Mignani, M.C. Miller, J.M. Miller, T. Mineo, G. Miniutti, S. Morsink, C. Motch, S. Motta, M. Mouchet, F. Muleri, A.J. Norton, M. Nowak, P. O'Brien, M. Orienti, M. Orio, M. Orlandini, P. Orleanski, J.P. Osborne, R. Osten, F. Ozel, L. Pacciani, A. Papitto, B. Paul, E. Perinati, V. Petracek, J. Portell, J. Poutanen, D. Psaltis, D. Rambaud, G. Ramsay, M. Rapisarda, A. Rachevski, P.S. Ray, N. Rea, S. Reddy, P. Reig, M. Reina Aranda, R. Remillard, C. Reynolds, P. Rodríguez-Gil, J. Rodriguez, P. Romano, E.M.R. Rossi, F. Ryde, L. Sabau-Graziati, G. Sala, R. Salvaterra, A. Sanna, S. Schanne, J. Schee, C. Schmid, A. Schwenk, A.D. Schwope, J.-Y. Seyler, A. Shearer, A. Smith, D.M. Smith, P.J. Smith, V. Sochora, P. Soffitta, P. Soleri, B. Stappers, B. Stelzer, N. Stergioulas, G. Stratta, T.E. Strohmayer, Z. Stuchlik, S. Suchy, V. Sulemainov, T. Takahashi, F. Tamburini, C. Tenzer, L. Tolos, G. Torok, J.M. Torrejon, D.F. Torres, A. Tramacere, A. Trois, S. Turriziani, P. Uter, P. Uttley, A. Vacchi, P. Varniere, S. Vaughan, S. Vercellone, V. Vrba, D. Walton, S. Watanabe, R. Wawrzaszek, N. Webb, N. Weinberg, H. Wende, P. Wheatley, R. Wijers, R. Wijnands, M. Wille, C.A. Wilson-Hodge, B. Winter, K. Wood, G. Zampa, N. Zampa, L. Zampieri, A. Zdziarski, B. Zhang
Sept. 7, 2012 astro-ph.IM
The LOFT mission concept is one of four candidates selected by ESA for the M3 launch opportunity as Medium Size missions of the Cosmic Vision programme. The launch window is currently planned for between 2022 and 2024. LOFT is designed to exploit the diagnostics of rapid X-ray flux and spectral variability that directly probe the motion of matter down to distances very close to black holes and neutron stars, as well as the physical state of ultra-dense matter. These primary science goals will be addressed by a payload composed of a Large Area Detector (LAD) and a Wide Field Monitor (WFM). The LAD is a collimated (<1 degree field of view) experiment operating in the energy range 2-50 keV, with a 10 m^2 peak effective area and an energy resolution of 260 eV at 6 keV. The WFM will operate in the same energy range as the LAD, enabling simultaneous monitoring of a few-steradian wide field of view, with an angular resolution of <5 arcmin. The LAD and WFM experiments will allow us to investigate variability from submillisecond QPO's to year-long transient outbursts. In this paper we report the current status of the project.
Observations of Milky Way Dwarf Spheroidal galaxies with the Fermi-LAT detector and constraints on Dark Matter models (1001.4531)
Fermi-LAT Collaboration: A.A. Abdo, M. Ackermann, M. Ajello, W.B. Atwood, L. Baldini, J. Ballet, G. Barbiellini, D. Bastieri, K. Bechtol, R. Bellazzini, B. Berenji, E.D. Bloom, E. Bonamente, A.W. Borgland, J. Bregeon, A. Brez, M. Brigida, P. Bruel, T.H. Burnett, S. Buson, G.A. Caliandro, R.A. Cameron, P.A. Caraveo, J.M. Casandjian, C. Cecchi, A. Chekhtman, C.C. Cheung, J. Chiang, S. Ciprini, R. Claus, J. Cohen-Tanugi, J. Conrad, A. de Angelis, F. de Palma, S.W. Digel, E. do Couto e Silva, P.S. Drell, A. Drlica-Wagner, R. Dubois, D. Dumora, C. Farnier, C. Favuzzi, S.J. Fegan, W.B. Focke, P. Fortin, M. Frailis, Y. Fukazawa, P. Fusco, F. Gargano, N. Gehrels, S. Germani, B. Giebels, N. Giglietto, F. Giordano, T. Glanzman, G. Godfrey, I.A. Grenier, J.E. Grove, L. Guillemot, S. Guiriec, M. Gustafsson, A.K. Harding, E. Hays, D. Horan, R.E. Hughes, M.S. Jackson, T.E. Jeltema, G. Johannesson, A.S. Johnson, R.P. Johnson, W.N. Johnson, T. Kamae, H. Katagiri, J. Kataoka, M. Kerr, J. Knodlseder, M. Kuss, J. Lande, L. Latronico, M. Lemoine-Goumard, F. Longo, F. Loparco, B. Lott, M.N. Lovellette, P. Lubrano, G.M. Madejski, A. Makeev, M.N. Mazziotta, J.E. McEnery, C. Meurer, P.F. Michelson, W. Mitthumsiri, T. Mizuno, A.A. Moiseev, C. Monte, M.E. Monzani, E. Moretti, A. Morselli, I.V. Moskalenko, S. Murgia, P.L. Nolan, J.P. Norris, E. Nuss, T. Ohsugi, N. Omodei, E. Orlando, J.F. Ormes, D. Paneque, J.H. Panetta, D. Parent, V. Pelassa, M. Pepe, M. Pesce-Rollins, F. Piron, T.A. Porter, S. Profumo, S. Raino, R. Rando, M. Razzano, A. Reimer, O. Reimer, T. Reposeur, S. Ritz, A.Y. Rodriguez, M. Roth, H.F.-W. Sadrozinski, A. Sander, P.M.Saz Parkinson, J.D. Scargle, T.L. Schalk, A. Sellerholm, C. Sgro, E.J. Siskind, D.A. Smith, P.D. Smith, G. Spandre, P. Spinelli, M.S. Strickman, D.J. Suson, H. Takahashi, T. Takahashi, T. Tanaka, J.B. Thayer, J.G. Thayer, D.J. Thompson, L. Tibaldo, D.F. Torres, A. Tramacere, Y. Uchiyama, T.L. Usher, V. Vasileiou, N. Vilchez, V. Vitale, A.P. Waite, P. Wang, B.L. Winer, K.S. Wood, T. Ylinen, M. Ziegler, James S. Bullock, Manoj Kaplinghat, Gregory D. Martinez
Jan. 25, 2010 hep-ph, astro-ph.CO, astro-ph.HE
We report on the observations of 14 dwarf spheroidal galaxies with the Fermi Gamma-Ray Space Telescope taken during the first 11 months of survey mode operations. The Fermi telescope provides a new opportunity to test particle dark matter models through the expected gamma-ray emission produced by pair annihilation of weakly interacting massive particles (WIMPs). Local Group dwarf spheroidal galaxies, the largest galactic substructures predicted by the cold dark matter scenario, are attractive targets for such indirect searches for dark matter because they are nearby and among the most extreme dark matter dominated environments. No significant gamma-ray emission was detected above 100 MeV from the candidate dwarf galaxies. We determine upper limits to the gamma-ray flux assuming both power-law spectra and representative spectra from WIMP annihilation. The resulting integral flux above 100 MeV is constrained to be at a level below around 10^-9 photons cm^-2 s^-1. Using recent stellar kinematic data, the gamma-ray flux limits are combined with improved determinations of the dark matter density profile in 8 of the 14 candidate dwarfs to place limits on the pair annihilation cross-section of WIMPs in several widely studied extensions of the standard model. With the present data, we are able to rule out large parts of the parameter space where the thermal relic density is below the observed cosmological dark matter density and WIMPs (neutralinos here) are dominantly produced non-thermally, e.g. in models where supersymmetry breaking occurs via anomaly mediation. The gamma-ray limits presented here also constrain some WIMP models proposed to explain the Fermi and PAMELA e^+e^- data, including low-mass wino-like neutralinos and models with TeV masses pair-annihilating into muon-antimuon pairs. (Abridged)
Radio detections towards unidentified variable EGRET sources (0803.0721)
J.M. Paredes, J.Marti, C.H. Ishwara-Chandra, D.F. Torres, G.E. Romero, J.A. Combi, V. Bosch-Ramon, A.J. Munoz-Arjonilla, J.R. Sanchez-Sutil
March 5, 2008 astro-ph
Context. A considerable fraction of the gamma-ray sources discovered with the Energetic Gamma-Ray Experiment Telescope (EGRET) remain unidentified. The EGRET sources that have been properly identified are either pulsars or variable sources at both radio and gamma-ray wavelengths. Most of the variable sources are strong radio blazars. However, some low galactic-latitude EGRET sources, with highly variable gamma-ray emission, lack any evident counterpart according to the radio data available until now. Aims. The primary goal of this paper is to identify and characterise the potential radio counterparts of four highly variable gamma-ray sources in the galactic plane through mapping the radio surroundings of the EGRET confidence contours and determining the variable radio sources in the field whenever possible. Methods. We have carried out a radio exploration of the fields of the selected EGRET sources using the Giant Metrewave Radio Telescope (GMRT) interferometer at 21 cm wavelength, with pointings being separated by months. Results. We detected a total of 151 radio sources. Among them, we identified a few radio sources whose flux density has apparently changed on timescales of months. Despite the limitations of our search, their possible variability makes these objects a top-priority target for multiwavelength studies of the potential counterparts of highly variable, unidentified gamma-ray sources.
Chandra Observations of the Gamma-ray Binary LSI+61303: Extended X-ray Structure? (0706.0877)
J.M. Paredes, M. Ribo, V. Bosch-Ramon, J.R. West, Y.M. Butt, D.F. Torres, J. Marti
June 6, 2007 astro-ph
We present a 50 ks observation of the gamma-ray binary LSI+61303 carried out with the ACIS-I array aboard the Chandra X-ray Observatory. This is the highest resolution X-ray observation of the source conducted so far. Possible evidence of an extended structure at a distance between 5 and 12 arcsec towards the North of LSI+61303 has been found at a significance level of 3.2 sigma. The asymmetry of the extended emission excludes an interpretation in the context of a dust-scattered halo, suggesting an intrinsic nature. On the other hand, while the obtained source flux, of F_{0.3-10 keV}=7.1^{+1.8}_{-1.4} x 10^{-12} ergs/cm^2/s, and hydrogen column density, N_{H}=0.70+/-0.06 x 10^{22} cm^{-2}, are compatible with previous results, the photon index Gamma=1.25+/-0.09 is the hardest ever found. In light of these new results, we briefly discuss the physics behind the X-ray emission, the location of the emitter, and the possible origin of the extended emission ~0.1 pc away from LSI+61303.
Identifying variable gamma-ray sources through radio observations (astro-ph/0407454)
J.M. Paredes, J. Marti, D.F. Torres, G.E. Romero, J.A. Combi, V. Bosch-Ramon, J. Garcia-Sanchez
July 21, 2004 astro-ph
We present preliminary results of a campaign undertaken with different radio interferometers to observe a sample of the most variable unidentified EGRET sources. We expect to detect which of the possible counterparts of the gamma-ray sources (any of the radio emitters in the field) varies in time with similar timescales as the gamma-ray variation. If the gamma-rays are produced in a jet-like source, as we have modelled theoretically, synchrotron emission is also expected at radio wavelengths. Such radio emission should appear variable in time and correlated with the gamma-ray variability.
Unidentified Gamma-Ray Sources and Microquasars (astro-ph/0402285)
G.E. Romero, I.A Grenier, M.M. Kaufman Bernado, I. F. Mirabel, D.F. Torres
April 21, 2004 astro-ph
Some phenomenological properties of the unidentified EGRET detections suggest that there are two distinct groups of galactic gamma-ray sources that might be associated with compact objects endowed with relativistic jets. We discuss different models for gamma-ray production in both microquasars with low- and high-mass stellar companions. We conclude that the parent population of low-latitude and halo variable sources might be formed by yet undetected microquasars and microblazars.
Discovery of a new radio galaxy within the error box of the unidentified gamma-ray source 3EG J1735-1500 (astro-ph/0301487)
J.A. Combi, G.E. Romero, J.M. Paredes, D.F. Torres, M. Ribo
Jan. 24, 2003 astro-ph
We report the discovery of a new radio galaxy within the location error box of the gamma-ray source 3EG J1735-1500. The galaxy is a double-sided jet source forming a large angle with the line of sight. Optical observations reveal a V ~ 18 magnitude galaxy at the position of the radio core. Although the association with the EGRET source is not confirmed at the present stage, because there is a competing, alternative gamma-ray candidate within the location error contours which is also studied here, the case deserves further attention. The new radio galaxy can be used to test the recently proposed possibility of gamma-ray emitting radio galaxies beyond the already known case of Centaurus A.
Gravitational Mass = Inertial Mass: Einstein or Galileo?
Einstein takes as a postulate of his general theory of relativity that
gravitational mass = inertial mass.
To Einstein this represented a deep insight into the inner nature of things, which he named the Equivalence Principle. To Galileo the same thing was a most natural consequence of his theoretical insight from experiments of dropping objects from the Tower of Pisa, noting that all objects fall in the same way (modulo air resistance), and of reflecting on the connection between force and motion.
Let us see if we can understand what to Einstein was beyond comprehension and to Galileo more or less self-evident. Newton's second law states that
$m_i\frac{dv_i}{dt}=F_i$,
where $m_i$ is the inertial mass of a body showing acceleration $\frac{dv_i}{dt}$ with $v_i$ velocity and $t$ time when subject to a force $F_i$. On the other hand, the same body when subject to a gravitational force $F_g$, shows an acceleration $\frac{dv_g}{dt}$ satisfying
$m_g\frac{dv_g}{dt}=F_g$,
where $m_g$ is the gravitational mass.
To find out if $m_i=m_g$, let us consider the following experiment: Consider two identical bodies, a body $A$ at rest on a frictionless table and another body $B$ in your hand, with the two bodies connected by a weightless string stretched over a frictionless wheel attached at the end of the table (see the picture in an earlier version of this post). Then remove your hand and observe the action of the two-body-string system. Observe that $A$ is acted upon by the horizontal string force $F_s$, while $B$ is acted upon by $F_g-F_s$ with $F_g$ the gravitational force acting on $B$. Since $A$ and $B$ have the same acceleration, we have
$\frac{F_s}{m_i}=\frac{F_g-F_s}{m_g}$.
If we now observe that
$F_s=\frac{F_g}{2}$, (1)
then we can conclude that $m_g=m_i$ as a simple experimental verification of the Equivalence Principle. We can also argue that (1) must be true according to Leibniz' principle of sufficient reason, since there is no reason that the two-body-string system should not show this form of symmetry. We can also argue that (1) must hold if we re-orient the system to be all horizontal and pull $B$ with a certain force $F$, which must result in a string force $\frac{F}{2}$.
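Spelling out the algebra, using only the relations already stated above: inserting (1) into the balance of accelerations gives
$\frac{F_g}{2m_i}=\frac{F_g-\frac{F_g}{2}}{m_g}=\frac{F_g}{2m_g}$,
and cancelling the common factor $\frac{F_g}{2}$ leaves $m_i=m_g$.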
Summing up, we have given simple evidence that gravitational mass = inertial mass, based on the insight that there is only one type of mass, namely inertial mass as a measure of acceleration vs force. Since gravitation is a force, the measure of acceleration vs gravitational force, that is gravitational mass, is necessarily the same as inertial mass. This is captured in the experiment with $B$ subject to (vertical) gravitational force (minus vertical string force) and $A$ to horizontal string force.
The Equivalence Principle is thus a direct consequence of Newtonian mechanics, and as such a most questionable, empty Basic Postulate for general relativity. As usual Einstein managed to create confusion rather than clarification. For more reasons behind this verdict, see earlier posts on the Equivalence Principle.
Etiketter: Equivalence Principle
Special Theory of Relativity: Empty of Real Physics 2
Modern physics is supposed to be based in particular on Einstein's special theory of relativity SR. In discussions with theoretical physicist Ulf Danielson recorded in a previous post, and in an upcoming post with physics philosopher Lars-Göran Johansson, my position is that SR is empty of real physics because the two basic Postulates of SR are definitions or analytic propositions true by definition (or stipulations without truth value), and not synthetic propositions about physics which may be false. To see this, recall the two basic Postulates of SR:
Laws of physics have the same formal expression in different inertial systems.
Measurements in different inertial systems (must) give the same constant speed of light.
Inertial systems are space-time coordinate systems traveling with constant speed with respect to each other.
Is Postulate 1 a synthetic proposition stating something about physical reality which may be false? No! It only states that a "law of physics" must meet a requirement of looking the same in all inertial systems. It does not say what a "law of physics" is, nor does it give any example; it only states that it must look precisely the same in all inertial systems. It is thus a stipulation like a legal law, which has no truth value, or a definition true by semantic construction. No physics in Postulate 1.
Postulate 2 is also a stipulation about the result of measurement of the speed of light by different observers using different inertial systems. Since speed is measured in terms of measures in space and time, Postulate 2 says that measures in space (meter) and time (seconds) must be chosen so that the speed of light comes out the same in all inertial systems. This is the SI standard since 1983 where the meter is defined as the distance traveled by light over a certain length of time as measured by a cesium atom clock.
Is Postulate 2 a synthetic proposition stating something about physical reality which may be false? No! It is a definition of the length scale to be used in different inertial systems, a stipulation or legal law to follow a certain standard, which again has no truth value. No physics in Postulate 2.
We arrive at the conclusion that since the Postulates of SR contain no physics, neither does SR. Empty of physics = pseudo-physics!
Further evidence of the strange unphysical form of the Postulates of SR is obtained from the observation that Postulate 2 appears to be a consequence of Postulate 1, by the following reasoning: If there is a law of propagation of light, it must be viewed as a "law of physics" and as such must take the same form in all inertial systems by Postulate 1, and thus in particular express the same constant speed of light = the statement of Postulate 2.
In fact, Postulate 1 is ridiculous and as such unphysical: Laws of physics in general have different formal expressions in different coordinate systems, and so very few, if any, satisfy Postulate 1. Not even Maxwell's equations for the propagation of light waves in vacuum take the same form in different inertial systems, since initial conditions change form: waves are extended in space and thus require initial values given as wave forms extended in space at a specific time.
Modern physics is thus based on empty physics, and so it is no surprise to meet modern physics empty of physics, such as multiversa and string theory.
PS1 Recall Leibniz strict separation between space and time, in direct contradiction to the mixing of space and time in SR, with
space = order of coexistence (connecting to initial value),
time = order of succession.
PS2 In the upcoming discussion with philosopher Lars-Göran Johansson the question of analytic vs synthetic statement and Kant's synthetic a priori, will come up. LGJ will argue that the distinction between analytic and synthetic cannot be made and that there are propositions which are both analytic and synthetic, or neither. This can be a tricky debate, and to avoid getting bogged down in sophistry, I will seek to focus on the question which real physics is expressed in the Postulates of SR, if any.
PS3 Modern physics is based on Einstein's mechanics including SR, and not Newton's mechanics, and thus it would seem to be one of the fundamental missions of education and practice of modern physics to subject SR to a critical analysis concerning form and physical meaning, right?
This is anyway the objective of the book Many-Minds Relativity. My conclusion is that SR is empty pseudo-science or fake-physics, which does not say anything of interest concerning the real physics of the world. But this is viewed simply as crackpot heresy, which a modern theoretical physicist can dismiss without any argument, from a position that a critical analysis of SR is neither needed nor possible, as shown in the discussion with Ulf Danielson.
Etiketter: special theory of relativity
The True Twin Paradox
Two twins C and S, Clever (Salviati) and Stupid (Simplicio), decide to part with mutually agreed speed $v$, equipped with identical cesium clocks, and to compare the readings of the two clocks by exchanging light signals at the clock frequency, in order to test the validity of Einstein's special theory of relativity SR. Both twins then record a redshift of the frequency of the other clock by the factor $\frac{1}{1+v}$ (with the speed of light normalised to 1).
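A minimal numeric illustration of this symmetry (the parting speed $v=0.1$ is my own illustrative choice; the cesium hyperfine frequency is the standard one):

```python
# Both twins record the other clock redshifted by the same factor 1/(1+v),
# with the speed of light normalised to 1.
v = 0.1                       # mutually agreed parting speed (illustrative)
f_cesium = 9_192_631_770      # Hz, hyperfine frequency of a cesium clock
f_recorded = f_cesium / (1 + v)
print(f_recorded)             # about 8.357e9 Hz
print(f_recorded / f_cesium)  # 0.9090...: identical factor for C and S
```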
C says that this is just what is to be expected from the laws of physics and that it does not say that the clock of S runs slow; it is only a red-shift effect.
S says that the reduction in frequency, which he records carefully, shows that the clock of C runs slow compared to that of S, and S becomes convinced that C ages more slowly as evidence of SR, which makes S unhappy.
Who is right then? C or S? Is the reason for S to get unhappy, reasonable?
A symmetrical situation has turned into an unsymmetrical one, and this is the true twin paradox. What is your solution?
Etiketter: special theory of relativity, twin paradox
Speed of Gravity? Newton or Einstein?
Tom Van Flandern (1940-2009) was a free-thinking physicist who with perplexity made the observation (along with Laplace and also Newton of course) that the Earth on its path around the Sun at every instant in time accelerates in the direction of the actual position of the Sun, which is about 20 arc seconds ahead of the position of the Sun as seen in the sky from the Earth, because of the 8 minutes it takes for light to travel the distance from the Sun to the Earth. See also this review of Van Flandern's work.
This observation is in accordance with Newtonian gravitation, which is assumed to propagate with infinite speed. If gravitation propagated with the speed of light, the acceleration would instead be in the direction of the visible Sun, but this is not what is observed (because it would be unstable).
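A back-of-envelope check of the 20 arc seconds figure (standard values for the light travel time from the Sun and the length of the year; the script itself is only a sketch):

```python
import math

# The visible Sun lags the actual Sun by the light travel time multiplied
# by the Earth's orbital angular velocity.
light_time = 499.0              # seconds, about 1 au divided by c
year = 365.25 * 86400.0         # seconds in a year
omega = 2.0 * math.pi / year    # Earth's orbital angular velocity in rad/s
lag = omega * light_time        # lag angle in radians
print(math.degrees(lag) * 3600.0)  # about 20.5 arc seconds
```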
I have discussed this observation in various posts with conclusion that the connection between mass density $\rho (x,t)$ and gravitational potential $\phi (x,t)$ as given by Poisson's equation in Newtonian gravitation
$\Delta\phi (x,t)=\rho (x,t)$
with $\Delta$ the Laplacian with respect to a space coordinate $x$ and $t$ being a time coordinate, is to be interpreted as a relation where mass $\rho (x,t)$ somehow is "created" at $x$ at time $t$ by the local operation of differentiation through the Laplacian $\Delta$ acting on the gravitational potential $\phi (x,t)$.
This is different from the standard interpretation where instead the presence of mass $\rho (x,t)$ at a specific point in space at time $t$ contributes to $\phi (x,t)$ for all points $x$ somehow through instant action at distance. Like Tom Van Flandern, I view instant action at distance as physically impossible, while local instant action may be physical. The creation of mass from gravitational potential through the Laplacian thus may be possible, while its detailed physics remains to be discovered...
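For reference, the standard point-mass instance of Poisson's equation (a textbook fact, neutral with respect to which of the two interpretations one adopts): in three dimensions the potential
$\phi (x)=-\frac{1}{4\pi \vert x\vert}$
satisfies $\Delta\phi =\delta$ in the sense of distributions, so $\Delta\phi =\rho$ holds with $\rho$ a unit point mass at the origin. The two readings differ only in whether $\rho$ is taken as given, with $\phi$ instantly determined by it everywhere, or $\phi$ is taken as primary, with $\rho$ produced locally by the Laplacian.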
In any case the observation of the acceleration of the Earth towards the actual position of the Sun is only compatible with a speed of propagation of gravitational waves (if they exist), which is much bigger than the speed of light. This observation is in accordance with Newton's mechanics (with both the new and old interpretations of the mass-potential connection), but not with Einstein's mechanics.
What is your conclusion concerning who describes physics of gravitation best? Newton or Einstein? Be careful when you look at the Sun for answer.
The current wisdom among physicists is that despite the above Earth-Sun observation, for sure there are gravitational waves because Einstein says so, waves which propagate with the speed of light and which can be detected, not gravitational waves from the Sun, but from distant mergers of black holes and stuff. Do you buy this?
Sorry to say Tom passed away in 2009, but his ideas live.
PS Of course there is a cover-up suggesting that also in Einstein's mechanics the Earth accelerates in the direction of the current position of the Sun, even if the speed of gravitational waves is the finite speed of light, because there is a subtle cancellation of the effect of the 8 minute delay by another effect, a most happy and welcome cancellation which allows a stable observable planetary system not only according to Newton but also according to Einstein. But why Einstein if Newton explains what is observed? No wonder that Einstein begged for pardon in his: "Newton, forgive me!".
Etiketter: gravitation, New View on gravitation, Tom Van Flandern
Einstein: The Illusionist
What would Einstein answer to this question: Is the time dilation and space contraction of special relativity real physics or only illusionary physics?
Here is his answer published in Physik Zeitschrift 12, p 509, 1911:
The question whether the Lorentz contraction (time dilation and space contraction) does or does not exist is confusing. It does not really exist in so far as it does not exist for an observer who moves (with the rod); it really exists, however, in the sense that it can as a matter of principle be demonstrated by a resting observer.
This ambiguous answer is typical of Einstein and makes discussion so difficult. One way to interpret Einstein's statement is that time dilation and space contraction are both real and illusionary physics at the same time and one can always choose whatever fits the discussion best.
If a skeptic says that the physics is contradictory, a physicist can say that the contradiction is only an illusion so there is no real contradiction, and if the skeptic complains that it is an illusion, the physicist can say that this is only a misinterpretation of something which is real physics.
This is why the discussion becomes so confusing, even to Einstein and to all his followers, the thousands and thousands of modern physicists.
An example of an illusion of contradiction connecting to space contraction is to consider two twins looking at each other at a distance, both stating, according to physical input, that the other appears to be smaller. Of course you say that the smaller size is only an illusion depending on viewing at a distance and that the twins in fact remain physically equally tall.
It is like two twins both appearing to age more slowly than the other, which could be an illusion of similar form if the twins are equipped with identical clocks and are traveling with a mutual velocity difference $v$, each one able to record the frequency of the other clock through a light signal subject to a Doppler shift scaling with $\frac{1}{1+v}$, with the speed of light normalised to 1. Both twins would then record a redshift when parting and a blueshift in approach, and state that their instruments record a different rate of ageing. All illusion of course, depending on the Doppler shift.
A discussion with Swedish media physicist Ulf Danielson can be followed in comments to the previous post. This is a hot topic, so follow closely! Is it confusing or illuminating?
Note that Danielson immediately plays the following card suggesting that I am a crackpot representing pseudoscience:
In a time when theories that the Earth is flat and conspiracies around the moon landing, together with pseudoscience of various kinds, are flooding the net, it is perhaps not so surprising that questions like this (my questions) come up.
(I en tid när teorier om att att jorden är platt och konspirationer kring månlandningen florerar på nätet tillsammans med pseudovetenskap av olika slag är det kanske inte så överraskande att en fråga av detta slag dyker upp.)
This is one approach to debate. Let us see how effective it is this time. Danielson's professional work concerns string theory, viewed by many today as illusionary physics rather than real physics. This may explain why Danielson is not sensitive to a distinction between illusion and reality, or definition and fact, as discussed in previous posts.
Etiketter: Einstein, special theory of relativity, twin paradox
Dingle Destroyed as Scientist by Questioning Relativity Theory
It is ironical that, in the very field in which Science has claimed superiority to Theology, for example—in the abandoning of dogma and the granting of absolute freedom to criticism—the positions are now reversed. Science will not tolerate criticism of special relativity, while Theology talks freely about the death of God, religionless Christianity, and so on.
Herbert Dingle (1890-1978) was a prominent English physicist who came to question Einstein's special theory of relativity in an intense scientific controversy in the late 1950s, see Questioning Relativity 1, with more in 2-20.
Dingle pointed to the logical contradiction of two traveling twins both ageing more slowly than the other. Dingle concluded that since physics cannot be contradictory, the special theory of relativity with its Lorentz time dilation and different twin ageing cannot be a theory about physics. This is also my standpoint 60 years later.
The reaction from the physics community to Dingle's heresy was harsh and Dingle was destroyed as a scientist, like Bruno in 1600. Dingle recorded his experience of this process in Science at the Crossroads:
They are, briefly, that the great majority of physical scientists, including practically all those who conduct experiments in physics and are best known to the world as leaders in science, when pressed to answer allegedly fatal criticism of the theory, confess either that they regard the theory as nonsensical but accept it because the few mathematical specialists in the subject say they should do so, or that they do not pretend to understand the subject at all, but, again, accept the theory as fully established by others and therefore a safe basis for their experiments.
The response of the comparatively few specialists to the criticism is either complete silence or a variety of evasions couched in mystical language which succeeds in convincing the experimenters that they are quite right in believing that the theory is too abstruse for their comprehension and that they may safely trust men endowed with the metaphysical and mathematical talents that enable them to write confidently in such profound terms.
What no one does is to answer the criticism.
The situation today, 60 years later, is the same: The accepted truth is that Einstein's special/general theory of relativity is correct and experimentally verified over and over again, but this cannot be questioned because no real physicist can understand either the theory or the experiments. Only crackpots like Dingle can understand that something is wrong.
I have asked Ulf Danielson as a Swedish media physicist about his view.
PS1 Read Tom van Flandern on the (non)use of SR in GPS.
PS2 For a detailed presentation of my criticism of special relativity theory, see Many-Minds Relativity (download)
PS3 Listen to Louis Essen, designer of the atomic clock:
No one has attempted to refute my arguments, but I was warned that if I persisted I was likely to spoil my career prospects. …the continued acceptance and teaching of relativity hinders the development of a rational extension of electromagnetic theory." - Louis Essen F.R.S., "Relativity and time signals", Wireless World, oct78, p44. 'Students are told that the theory must be accepted although they cannot expect to understand it. They are encouraged right at the beginning of their careers to forsake science in favor of dogma.'
PS4 More Dingle from the Crossroads:
Lorentz, in order to justify his transformation equations, saw the necessity of postulating a physical effect of interaction between moving matter and æther, to give the mathematics meaning. Physics still had de jure authority over mathematics: it was Einstein, who had no qualms about abolishing the æther and still retaining light waves whose properties were expressed by formulae that were meaningless without it, who was the first to discard physics altogether and propose a wholly mathematical theory...
PS5 Here is a short summary of the exchange of comments with Ulf Danielson:
CJ Question1: What is the real physics of the Postulates of SR? Are the Postulates only definitions empty of physics?
UD Answer:??
CJ Question2: In what way does GPS depend on the special and general theory of relativity?
CJ Question3: Are there any physical laws that are Lorentz invariant, when not even Maxwell's are so with respect to initial conditions and presence of charges? If yes, which?
CJ Question4: Does translation with constant velocity influence the physical action of a pendulum or atomic clock? If so, what is the physics of the influence?
There are many more questions, but with this poor result we both felt that continued discussion was meaningless. UD appears in media as an authority on modern physics, and so it would have been very interesting, for me in particular but maybe also for the world, to get some illuminating answers to pressing questions; as it stands we have to wait for answers with patience. The questions remain, and authorities on physics must be expected to have some form of answers.
PS6 Dingle pointed to an apparent physical contradiction in SR (asymmetry of symmetric twins), but that argument did not bite with modern physicists, who are used to viewing contradictions as signs of deep physics. So the contradiction did not kill SR, but Dingle instead.
I try with another approach pointing to the fact that SR is empty of physics, and as such does not contain contradictions of physics. Maybe it is more difficult for a modern physicist to dismiss emptiness than contradiction. We shall see.
Dingle's analysis and conclusion of incompatibility/contradiction of SR could be dismissed because the physics of the postulates of SR is so murky that it could be twisted to support any claim ("event", "rigid measuring rod", "clock", "reading of a moving clock", "time", etc.). The only way to get out of this swamp is to show that the postulates contain no physics at all, in which case twisting of physics is no longer possible.
Etiketter: Dingle, Einstein, special theory of relativity, twin paradox
(Postulates of) Special Relativity Empty of Physics 1
The postulates of Einstein's special theory of relativity are:
The laws of physics are the same in all inertial frames of reference.
The speed of light in free space has the same value c in all inertial frames of reference.
In the light of the discussion in recent posts on relativity theory, we make the observations that the postulates state that:
It is necessary for a law to be a law of physics (but not sufficient) that it takes the same form in all inertial systems.
It is necessary for different observers to measure the same speed of light.
We understand that neither 1. nor 2. contains any actual real physics, since they do not specify any law of physics, only stipulate a necessary requirement to be satisfied by a physical law (invariance in the sense of taking the same form in all inertial systems) and stipulate what the result of a measurement of the speed of light must be. The postulates of special relativity thus are not postulates (assumptions) about real physics, but instead are stipulations or definitions concerning form (invariance) or procedure (measurement of the speed of light). But form and procedure do not contain any real physics, and therefore special relativity has nothing to say about real physics. If the postulates of special relativity are empty of physics, this must be the case for any logical derivation from the postulates, and so the whole special theory of relativity is empty of physics.
In particular, it is not enough to note invariance of a law to allow declaration that it is a law of physics.
Special relativity is a corner stone of modern physics, and if special relativity is empty of physics, this means that modern physics rests on emptiness. Viewing the result in the form of string theory and multiversa gives further evidence of this unfortunate state of affairs.
PS As stated in my comments to the next post, postulate 2 is a convention, since by definition according to the 1983 SI standard the speed of light is specified to be exactly 299792458 meters per second, which is used to define the meter. Postulate 2 is thus a definition without physical content. Likewise, postulate 1 is void of real physics, since it is only a specification of what can be called a physical law.
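To make the circularity concrete (standard SI facts; the formulation is mine): the second is defined via the cesium transition, and since 1983 the meter is defined as the distance traveled by light in $\frac{1}{299792458}$ of a second. A "measurement" of the speed of light in SI units therefore returns exactly 299792458 meters per second by construction, whatever the underlying physics.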
Etiketter: Einstein, special theory of relativity, theory of relativity
Special Relativity: Physics without Physical Laws
Unphysics of sawing a woman into two pieces.
The focus of both the special and general theory of relativity is coordinate systems with the idea that coordinate systems hide truths about the world. Special relativity concerns Euclidean space coordinate systems moving with constant velocity with respect to each other, so-called inertial systems, while general relativity is expressed in general curvi-linear space-time coordinate systems with Einstein's equations expressing a connection between space-time curvature and mass-energy distribution.
Einstein's contribution to physics with the special theory is the postulate that physical laws have the same formal expression in all inertial systems connected by the Lorentz transformation, in other words are Lorentz invariant. Einstein thus postulates a formal requirement on what is allowed to be called a physical law: It must be Lorentz invariant.
Recall that the Lorentz transformation connecting two inertial space-time coordinate systems $(x,t)$ and $(x^\prime ,t^\prime )$ moving with velocity $v$ with respect to each other, read:
$x^\prime =\gamma (x - vt)$, $t^\prime =\gamma (t - vx)$,
$x =\gamma (x^\prime + vt^\prime )$, $t =\gamma (t^\prime + vx^\prime )$,
where $\gamma = \frac{1}{\sqrt{1-v^2}}$ assuming the speed of light is 1.
Einstein's contribution to physics with the general theory is Einstein's equations which express a physical law satisfying the invariance requirement by being covariant in the sense of having the same formal expression in different space-time coordinates as if allowing a coordinate-free representation in terms of curvature and mass-energy.
Which physical laws are then Lorentz invariant? Does Newton's 2nd law $\frac{d^2x}{dt^2}=F(x)$ for a body of unit mass moving under the force $F(x)$ in the $(x,t)$ system take the same form in the $(x^\prime ,t^\prime )$ system? Let us check: By the chain rule, we have
$\frac{\partial}{\partial x}=\gamma (\frac{\partial}{\partial x^\prime}-v\frac{\partial}{\partial t^\prime})$,
$\frac{\partial}{\partial t}=\gamma (\frac{\partial}{\partial t^\prime}-v\frac{\partial}{\partial x^\prime})$,
and conclude that
$\frac{\partial^2}{\partial t^2}=\gamma^2(\frac{\partial}{\partial t^\prime}-v\frac{\partial}{\partial x^\prime})^2$.
Does this show that Newton's 2nd law takes the same form in the two systems? It does not seem so, but to be sure let us take an even simpler case, that of a body moving with constant velocity $V$ in the $(x,t)$ system, with motion satisfying the physical law $x=Vt$ (with the initial condition $x=0$ for $t=0$). Since $x^\prime =\gamma (V-v)t$ and $t^\prime =\gamma (1-vV)t$, this law takes in the $(x^\prime ,t^\prime )$ system the form
$x^\prime =\frac{V-v}{1-Vv}t^\prime$.
We conclude that only for $V=1$ does the physical law $\frac{dx}{dt}=V$ take the same form in the $(x^\prime ,t^\prime )$ system. In other words, the physical law of propagation with constant velocity $x=Vt$ is Lorentz invariant only if $V=1$, that is, only if the physical law of propagation is the law of propagation of light. Of course you can save the situation by simply defining $V^\prime =\frac{V-v}{1-Vv}$ and then claiming that $x=Vt$ and $x^\prime =V^\prime t^\prime$ have the same formal appearance (with and without prime), but opening this possibility would lose the meaning of Lorentz invariance in the sense that any law could be made Lorentz invariant by suitable manipulation of symbols.
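A quick symbolic check of the computation above (a sketch, assuming sympy is available; the variable names are mine):

```python
import sympy as sp

V, v, t = sp.symbols('V v t', positive=True)
gamma = 1 / sp.sqrt(1 - v**2)

# Transform the world line x = V*t by the Lorentz transformation (light speed 1)
x = V * t
xp = gamma * (x - v * t)   # x' = gamma*(x - v*t)
tp = gamma * (t - v * x)   # t' = gamma*(t - v*x)

Vp = sp.simplify(xp / tp)          # transformed velocity x'/t'
print(Vp)                          # (V - v)/(1 - V*v)
print(sp.simplify(Vp.subs(V, 1)))  # 1: only propagation at light speed keeps its form
```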
Einstein thus says that the physical law of propagation with constant speed less than 1 is not a physical law. The only physical law compatible with Lorentz invariance is the law of propagation of light at speed 1. This means that special relativity is empty of almost all physics, as a physics with the only physical law being that of propagation of light. That this is so is clear from the only hypothesis of special relativity, which is the constant speed of propagation of light. With (close to) zero real input, the real output can only be (close to) zero.
The special theory thus does not contain even the most basic physics as Lorentz invariant physics, but instead a lot of unphysics such as time dilation and space contraction as a consequence of postulated Lorentz invariance.
General relativity is supposed to be a generalisation of the special theory and if the special theory is zero so is the general theory. More precisely general relativity is not Lorentz invariant, so there is a possibility that the general theory contains some physics such as Newton's 2nd law and gravitation, but without special relativity the rationale of replacing Galilean invariant Newtonian mechanics with new Einstein mechanics is missing. See Many-Minds Relativity for a generalisation of Newtonian mechanics different from Einstein's reaching into speeds comparable to the speed of light.
PS1 What about Maxwell's equations? Yes, they are Lorentz invariant insofar as they express propagation of light with the same constant speed in all inertial systems, but not concerning initial conditions and the connection between electric and magnetic fields, as made clear in Chapters 5 and 17 of Many-Minds Relativity.
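For completeness, here is the standard one-line verification that the wave operator itself is Lorentz invariant, using the chain rule expressions derived above: since $\frac{\partial}{\partial t}=\gamma (\frac{\partial}{\partial t^\prime}-v\frac{\partial}{\partial x^\prime})$ and $\frac{\partial}{\partial x}=\gamma (\frac{\partial}{\partial x^\prime}-v\frac{\partial}{\partial t^\prime})$, the cross terms cancel in the difference of squares, so that
$\frac{\partial^2}{\partial t^2}-\frac{\partial^2}{\partial x^2}=\gamma^2(1-v^2)(\frac{\partial^2}{\partial t^{\prime 2}}-\frac{\partial^2}{\partial x^{\prime 2}})=\frac{\partial^2}{\partial t^{\prime 2}}-\frac{\partial^2}{\partial x^{\prime 2}}$,
because $\gamma^2(1-v^2)=1$. The wave equation for light thus keeps its form under the Lorentz transformation, while Newton's 2nd law, as noted above, does not.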
PS2 The starting point for Einstein in 1905 was that it is impossible to determine the speed of a train traveling in rectilinear motion with constant velocity (inertial motion) from an experiment made inside the train, if there is no possibility to look out into the environment. This is the same in Newtonian mechanics under Galilean invariance. Similarly, it proved impossible (the null result of the Michelson-Morley experiment) to determine the motion of the Earth vs a (stationary) aether medium carrying electromagnetic waves. Without an environment or aether as (stationary) reference, inertial motion is impossible to detect; only relative inertial motion is possible to detect. But that does not hold for non-inertial motion, like rotation; an ice skating princess with closed eyes can certainly feel if she is spinning or not.
In any case the Michelson-Morley null experiment made Einstein claim that there is no aether at all, and postulate that therefore all observers, independent of inertial motion, must record the same (unit) speed of light, independent of any physics of propagation of light. This was not an assumption about physics, but simply a human standard or recipe about how to measure time and space so that the speed of light comes out to be 1, independent of any physics. This made special relativity into a theory without physical content and as such without scientific meaning.
PS3 Many-Minds Relativity proposes a different way to explain the MM null result based on the following assumption with clear physics content: All observers share a common time (have identical clocks), travel with respect to each other with constant velocity, and make observations in a Euclidean space coordinate system in which they are stationary. Different observers thus use different inertial systems or aethers, and there are thus as many aethers as inertial coordinate systems (in the spirit of Ebenezer Cunningham). Each observer assumes the validity of a wave equation in the observer's own coordinate system, which says that light propagates with unit speed there and which effectively determines how to measure length (in light seconds). This is an assumption about physics which is consistent with the MM null result. The key question of focus in Many-Minds Relativity is then to what degree different observers will agree on lengths and motion in space.
Many-Minds Relativity is different from special relativity in that all observers use the same type of clock (with operation independent of inertial motion, such as a pendulum) and thus can share a common time without time dilation by some suitable synchronisation, while all observers are tied to their own inertial system. The conundrum of special relativity of one observer making observations in two different inertial systems is then not an issue at all, and the paradoxes of special relativity all collapse to null.
To modern physicists special relativity and Lorentz invariance are such a holy cow that they cannot be subjected to critical analysis and can only be swallowed without any questioning, despite all the paradoxes. My experience is that it is very difficult to find a physicist willing to enter into a discussion about special relativity and its role as a pillar of modern physics.
Special Relativity: Unphysical Event Theory
A great physics event: Einstein's Nobel Prize reception speech about his special theory of relativity, presented to a stunned King Gustav V in the middle of the front row at the Gothenburg World's Fair in 1923. Notable is that Einstein did not speak about the law of the photoelectric effect, for which he was awarded the Prize, along with the remark by the Nobel Committee that Einstein did not get the prize because of his special theory of relativity but in spite of it! Here x = Gothenburg and t = 1923, with the space-time coordinates (x,t) telling nothing about the physics of the event.
It was Einstein who introduced the concept of event to physics in his 1905 article presenting his new revolutionary theory of special relativity as follows:
We have to take into account that all our judgements in which time plays a part are always judgements of simultaneous events. If, for instance, I say: "That train arrives here at 7 o'clock" I mean something like this: "The pointing of the small hand of my watch and the arrival of the train are simultaneous events".
We here meet both the concept of event and the qualification of simultaneous events. We learn that the time of an event is something which can be recorded by the hand of a watch while the event itself can be just anything. The time of an event is thus identified with measurement while the physical nature of the event appears to be irrelevant. It can be a train arriving at a station or anything. In previous posts we have also noted that an event according to Einstein has no extension in space and thus can be recorded with a single space coordinate and a reading of a clock.
Altogether, we find evidence that the notion of event used by Einstein has no real physical meaning, which suggests that his special relativity about events is not a theory about physical reality, only an empty play with definitions. Einstein's mastery was that he could turn this emptiness into shining physics, blinding the world of both professional physicists and media, if not ordinary people, who were simply confused.
Einstein's mastery is exercised through a double play between reality and illusion as revealed in Physik Zeitschrift 12, p 509, 1911:
The question whether the Lorentz contraction does or does not exist is confusing. It does not really exist in so far as it does not exist for an observer who moves (with the rod); it really exists, however, in the sense that it can as a matter of principle be demonstrated by a resting observer.
We read that Einstein considers the Lorentz transformation with its built-in Lorentz contraction to be a matter of principle, in other words a tautology true by definition, something which does not really exist, is empty of physical meaning and thus is unphysical. It is a complete mystery that modern physicists have been so overwhelmed by Einstein's form of jokery that all rational thought has evaporated.
Etiketter: Einstein, special theory of relativity
Coexistence vs Special Theory of Relativity
Two cars sharing time before collision
This is a continuation of the previous post on the special theory of relativity based on the concept of event, which is something which can be recorded by a space coordinate $x$ and a time coordinate $t$ into a space-time coordinate $(x,t)$. This is also the basic element of Minkowski space-time physics closely connected to the theory of relativity, where the distinction between space and time, of such fundamental importance in classical physics, is given up and space coordinates are mixed into time coordinates as in the Lorentz transformation of special relativity.
An event is thus something without extension in space which takes place (exists) at a specific point in space $x$ and time $t$. But is existence without extension in space possible? Of course not, but a modern physicist would probably say that an event recorded by $(x,t)$ is an idealisation of the position $x$ in space at time $t$ of a physical phenomenon of such small dimension in space that one space coordinate $x$ is enough to describe its position in space.
But in both mathematics and physics it can be misleading to stretch an idealisation into a singularity such as that connected with the concept of a physical phenomenon without extension in space, that is introducing the concept of particle as the basic element of modern particle physics. Singularities are tricky because they hide their true nature and thus can be misunderstood.
A real physical phenomenon like a physical body has extension in space and as such represents coexistence, in the sense that the different parts of the body all exist at the same common instant of time and thus can be viewed to share the same time coordinate. The Lorentz transformation has no role for bodies with extension in space, because it mixes space into time and upsets coexistence with shared time.
As an illustration, consider two bodies moving with constant velocity with respect to each other, and attach to each body a Euclidean space coordinate system with the body at the origin. This gives us two inertial systems $(x,t)$ and $(x^\prime ,t^\prime )$, and we now ask if they can be connected by the Lorentz transformation, which is supposed to connect space-time coordinates of inertial systems without common time.
Assume now that the bodies approach each other and collide. In special relativity a collision is viewed as an event without extension in space, and as such can be recorded in different inertial systems connected by the Lorentz transformation without common time. But a collision is not an event without extension in space: it is the end of a process where the two bodies approach each other and thus form a two-body system with extension in space, with coexistence of the two bodies and necessarily a shared common time prior to collision.
Collision without shared time is impossible. You cannot decide to meet a good friend at a cafe without sharing time. When meeting you share time. Without shared time there can be no meeting.
We conclude that the theory of special relativity, concerned with events without extension in space, misses the physics of real phenomena, which all have extension in space. Even the physics of a collision between two particles (even particles without extension in space), which in special relativity is viewed as an event without extension in space, is in fact a phenomenon with extension in space, because the particles prior to collision approach each other and thus form a system with extension in space, coexistence and shared time.
For two particles to be about to collide they must coexist, and the corresponding inertial systems then cannot be connected by a Lorentz transformation without common shared time. In other words, special relativity is not a theory about real physics, and as such is of no interest from a scientific point of view. Special relativity is a fantasy identical to a Lorentz transformation without physical meaning.
How to Understand that the Special Theory of Relativity is Unphysical
Two space traveling Lorentz invariant twins both one year older than the other: Deep but unphysical.
Einstein's special theory of relativity is loaded with paradoxes, such as the twin paradox, the ladder paradox, the cooling paradox and Ehrenfest's paradox, expressing effects of time dilation and space contraction. In classical physics one paradox would be enough to kill a theory, but not so in modern physics with the special theory of relativity as cornerstone, where the presence of paradoxes is instead taken as evidence that the theory is deep.
The paradoxes of special relativity express physical paradoxes, such as traveling twins both ageing more slowly than the other. So even if special relativity is deep, it cannot be a theory about true physics if it contains paradoxes, because true physics cannot be paradoxical, that is, contradictory.
Two twins cannot both physically age more slowly than the other.
The quickest way to understand that special relativity is unphysical, and as such can carry seemingly physical (but unphysical) paradoxes, is to recall Einstein's starting point: a description of so-called events recorded by space-time coordinates $(x,t)$, with $x$ a space coordinate and $t$ a time coordinate. Einstein thus considers events to have no extension in space, so that one single space coordinate $x$ is enough to describe their location in space.
Einstein's special theory concerns the description of events in two different (Euclidean) coordinate systems assumed to move with respect to each other with a certain constant velocity, so-called inertial systems.
In classical physics the connection between such systems is given by a Galilean coordinate transformation where space coordinates are transformed to match the difference in motion between the two coordinate systems while time coordinates remain the same.
In special relativity the connection is instead given by the Lorentz transformation, where also time coordinates are transformed by mixing space into time. Special relativity thus boils down to the Lorentz transformation, and all the paradoxes from a physical point of view, such as time dilation and space contraction, are consequences of the Lorentz transformation mixing space into time.
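For reference, in one space dimension and standard textbook notation, with $v$ the relative velocity and $c$ the speed of light, the two transformations read:

$$x^\prime = x - vt,\qquad t^\prime = t \qquad \text{(Galilean)}$$

$$x^\prime = \gamma (x - vt),\qquad t^\prime = \gamma \left(t - \frac{vx}{c^2}\right),\qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \qquad \text{(Lorentz)},$$

where the term $\frac{vx}{c^2}$ in $t^\prime$ is precisely the mixing of space into time referred to above.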
Einstein's catch is that Maxwell's equations for electromagnetics, like any wave equation, have the same analytical expression in all inertial systems under the Lorentz transformation (but not under the Galilean transformation).
Einstein then postulates that physical laws are laws which take the same form in all inertial systems connected by the Lorentz transformation, and thus declares that Maxwell's equations express a physical law. In circular reasoning Einstein then argues that because a physical law expresses physics, the Lorentz transformation expresses physics, and therefore special relativity is a physical theory as a mathematical theory about physics.
Einstein's great contribution to modern physics is viewed to be his postulate that physical laws (must) have the same analytical expression in all inertial systems connected by the Lorentz transformation; in other words, they must be Lorentz invariant.
Einstein's basic idea is thus that (true) physical laws must be Lorentz invariant. What we first must understand is that this whole idea is absurd: of course physical laws in their physical meaning must be independent of the choice of coordinate system, but it is absurd to ask that they literally have the same analytical expression. It is like claiming that a statement translated to different languages would not only have the same meaning but also the same notational expression letter by letter. This would mean that there was only one language, which is absurd. What is not absurd but rational is to expect that the same physical law will have different analytical expressions in different coordinate systems.
Next, we return to Einstein's starting point as a study of events without spatial extension, which we will see is the very reason Einstein can make the absurd claim that Maxwell's equations are Lorentz invariant. Now, solutions to Maxwell's equations represent physical waves, and waves have extension in space. And here comes the catch: Maxwell's equations come along with initial conditions, which describe the initial configuration of a wave with extension in space at a certain initial time. And initial conditions are not Lorentz invariant, because the Lorentz transformation mixes space into time. Only Einsteinian events without extension in space can be claimed to be Lorentz invariant. A detailed account of the mathematics and physics is given in Chapters 5 and 11 of Many-Minds Relativity.
Einstein's insistence on Lorentz invariance thus builds on the misconception that initial conditions with extension in space, as physics, can be reduced to events without extension in space, as unphysics. This is absurd and is the root of all the paradoxes of special relativity, resulting from mixing space into time by Lorentz transformations without physics.
Einstein insisted on Lorentz invariance forgetting that wave equations have initial conditions with extension in space. The result is a lot of modern physics formed to be Lorentz invariant which cannot be physics.
The basic trouble with modern physics preventing progress is generally viewed to be that quantum mechanics is incompatible with relativity theory, in that Schrödinger's equations are not Lorentz invariant. The above analysis indicates that this is a ghost problem which should not be allowed to prevent progress. Asking for Lorentz invariance is unphysical. There is no incompatibility. There can be no incompatibility between physical theories, because physics cannot be incompatible with itself. Twins cannot be incompatible.
Einstein confessed on several occasions that his knowledge of mathematics was superficial:
I neglected mathematics...because my intuition was not strong enough to differentiate the fundamentally important from the dispensable erudition.
It is therefore not so strange that Einstein could be misled to give the Lorentz transformation a meaning which lacked physical reason. What is strange is that his delusion has come to represent the highest level of understanding of a modern physicist, even that of Ed Witten, the smartest living physicist, by many viewed to be smarter than Einstein with an extraordinarily deep understanding of mathematics...
PS The reason Einstein's unphysical events without extension in space and the associated Lorentz transformation have come to serve as a cornerstone of modern physics is that they fit with the modern physics core concept of an elementary particle as an object without extension in space. But this concept is loaded with poison in the form of infinities, divergent integrals and more, and so modern physicists have been driven into the despair of string theory, in 11 space dimensions on a spatial scale 15 orders of magnitude smaller than the estimated scale of a proton, all far beyond any scientific reason.
A better way out is to accept that there are no particles without extension in space, only waves with extension in 3d space, all according to Schrödinger, the inventor of quantum mechanics. Without particles the special theory of relativity has no physical meaning and scientific relevance. What has Ed Witten to say about this revelation?
Was Einstein a Swindler?
What is the most compelling argument showing that Einstein's special theory of relativity, based on the Lorentz transformation connecting observations by different observers in coordinate systems (inertial systems) moving with constant velocity with respect to each other, is unphysical and thus void of scientific content?
You find this argument in a previous post: although Maxwell's equations, as a form of wave equation describing the propagation of electromagnetic waves including light (the central object of study in the special theory of relativity), take the same mathematical form in different inertial systems, and thus appear to be invariant as requested by the special theory of relativity, initial conditions are not invariant, and thus the whole point of relativity theory evaporates. We recall that an initial condition represents a configuration of an object extended in space, like a wave form, at a specific instant of time.
The fact that this is not seen in presentations of relativity is a result of Einstein's restriction to events in space-time as isolated flashes at specific coordinates in space $x$ and specific instants of time $t$, described by space-time coordinates $(x,t)$. The unphysical aspect of such isolated flash-like events is that they have no spatial extension and thus do not appear in the form of initial conditions for a wave equation. By restricting events to concern objects of no spatial extension, the non-invariance of initial conditions for a wave equation, with the ensuing collapse of the basic idea of special relativity, can be hidden and success can be declared. This is what Einstein did, and the world was stunned!
But real objects/waves have extension in space, even flashes, and so their physics cannot be described by the special theory of relativity. The special theory of relativity is thus unphysical and as such is loaded with physical paradoxes, including the Ladder Paradox arising because a ladder has spatial extension.
Are you convinced by this argument, that the special theory is unphysical because it concerns physics without extension in space? If you are convinced, what is then your conclusion about the status of modern physics with the special theory of relativity declared as a cornerstone? If the cornerstone is unphysical, what about the building erected on it? So was Einstein correct when he described himself as a swindler? For more evidence see Many-Minds Relativity, in particular section 5.9, and Dr Faustus of Modern Physics. Or maybe you say that we must leave physics to physicists even if they are misled by a swindler?
PS1 The above argument shows that the idea of Lorentz invariance of Maxwell's equations is misconceived. The logical conclusion made in Many-Minds Relativity, following an idea of Ebenezer Cunningham, is that the formulation of Maxwell's equations requires specification of a Euclidean spatial coordinate system, with the observer in the normal case tied to its origin. Such a coordinate system acts like an aether for the propagation of electromagnetic waves, and there are thus as many aethers as Euclidean coordinate systems. Einstein said that there is no aether, but then there can be no Maxwell equations, no electromagnetic waves and no light...
PS2 The logical conclusion from the Michelson–Morley null result is that there are many aethers, as many as there are Euclidean coordinate systems, and that physical laws in general take different forms in different coordinate systems while expressing the same physical reality. Einstein's idea that true physical laws take the same formal mathematical form in all (inertial) coordinate systems represents a fundamental misconception of the meaning of a physical law. It is like claiming that a statement about a physical fact necessarily must have the same form in all languages, while it is clear to everyone with a rational mind that different languages express the same thing in different ways and not with the same words. Yes, Einstein was a swindler and led modern physics into a quagmire, but this is something modern physicists are unable to fathom. If you think this analysis does not capture reality, ask your favourite physicist about the physics of special relativity and notice that you get no meaningful response.
$E=mc^2$: Definition or Physical Fact?
All these fifty years of conscious brooding have brought me no nearer to the answer to the question, 'What are light quanta (photons)?' Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken. (Albert Einstein, 1954)
We continue exploring the meaning of the most famous equation of physics, $E=mc^2$, which Einstein suggested in 1905 to be a consequence of his special theory of relativity and struggled throughout his life to justify theoretically, however without success.
The equation $E=mc^2$ carries the same ambiguity as the basic postulate of Einstein's special theory of relativity, the constancy of the speed $c$ of light, for which it is never clear if it is only a definition true by logic, or a law of physics which may be true or false.
Is then $E=mc^2$ a definition or a physical law as a statement about a physical fact, which may be false?
We start with the following natural question: From where does the factor $c^2$ come, which attributes the energy $mc^2$ to mass of size $m$?
This question can be answered for a photon of frequency $\nu$, which can be observed, e.g. through the photoelectric effect, to have the energy
$E=h\nu$,
with a properly specified Planck's constant $h$. We can now, if we want, attribute the mass $m=\frac{h\nu}{c^2}$ to a photon, to get
$E=h\nu =mc^2$
simply by definition. We can do this because the physics of a photon is unclear. We can supplement by naturally attributing the momentum $p=mc$ to a photon of mass $m$ and speed $c$ and so obtain $E=pc$ as an equivalent form of $E=mc^2$ (also discussed in the previous post).
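As a worked example with round numbers (chosen here only for illustration): for visible light with $\nu = 5\times 10^{14}$ Hz and $h \approx 6.6\times 10^{-34}$ Js, this definition gives

$$E = h\nu \approx 3.3\times 10^{-19}\ \text{J},\qquad m = \frac{E}{c^2} \approx \frac{3.3\times 10^{-19}}{9\times 10^{16}} \approx 3.7\times 10^{-36}\ \text{kg},$$

which shows how small the mass attributed to a single photon by this definition is.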
In short, we can argue that for the fictitious concept of a photon (compare with the Einstein quote above), energy and mass indeed can be viewed to be, in the words of Einstein, different manifestations of the same thing (namely energy).
By claiming that a radiating body loses the mass attributed to emitted photons (although the loss is too small to be measured), we can then give the relation $E=mc^2$ a general meaning beyond photons, still however essentially by definition with eventual physical meaning remaining to be explained.
That mass indeed can be converted to energy in nuclear fission and fusion processes, was a surprise to Einstein and cannot be seen as a consequence of his special theory of relativity, because it has no connection to nuclear physics. The first quantum field theory Standard Model proof (with quarks and gluons) of $E=mc^2$ was presented only in 2008.
The general idea of a connection between mass and energy is as old as physics, with in particular the kinetic energy of a body of mass $m$ moving with speed $v$ being equal to $\frac{1}{2}mv^2$. Moreover, the factor $c^2$ was suggested prior to Einstein by Poincaré and Hasenöhrl, preceded by Heaviside and Wien, among others. But it was Einstein who got the prize.
Recall that in Maxwell's equations for electromagnetic waves in vacuum, the electric field $\hat E$ (in Gaussian units) satisfies an equation of the form
$\frac{1}{c^2}\frac{\partial^2\hat E}{\partial t^2}-\nabla^2\hat E=0$,
The factor $\frac{1}{c^2}$ serves the same role as the mass $m$ in a mechanical wave equation with connection to energy, thus supporting a relation of the form $m=\frac{E}{c^2}$, which was the original form suggested by Einstein, among others, as an expression for (fictitious) mass rather than (real) energy.
Recall that with the new meter standard, defined as a certain fraction of a light second, the speed of light is by definition exactly 1 (light second per second). For perspective, see Many-Minds Relativity. So today, the constancy of the speed of light is a matter of definition. To Einstein in 1905 it was both a definition and a physical fact, with truth secured by pure logic as the unique nature of Einstein's physics, which so impressed the world, although there have been many critical views from the start, which however have been muted since they did not fit the success story of modern physics.
Is it then true that a radiating body loses mass, even if the emitted energy comes from a source of internal heat energy as a form of kinetic vibrational energy measured by temperature (and not mass) according to a new analysis of blackbody radiation? I will seek to return with an answer.
To start with, let us recall that the concept of mass $m$ connects to force $f$ and motion/acceleration $\frac{dv}{dt}$, with $v$ velocity, through Newton's 2nd law $m\frac{dv}{dt}=f$, which can be used to define mass in terms of force and motion, as well as momentum $mv$ as the integral of force, and kinetic energy $m\frac{v^2}{2}$ as the integral of $fv$ as work. Force can be measured by a spring and motion by a meter stick and time, which defines mass threefold: in terms of Newton's law, momentum and kinetic energy. The basic relation is Newton's 2nd law, while the integrals of $f$ and $fv$ are computed/collected in physical form as momentum and kinetic energy. Newton's 2nd law is Galilean invariant, while momentum and kinetic energy as integrals depend on initial velocity. Momentum and kinetic energy thus carry information about mass modulo initial velocity: if you travel at the same velocity as a cannon ball, its mass is hidden and you cannot detect it by being hit.
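To spell out the integrals just mentioned: integrating Newton's 2nd law $m\frac{dv}{dt}=f$ over a time interval $[0,T]$ gives

$$\int_0^T f\,dt = mv(T) - mv(0),\qquad \int_0^T fv\,dt = \frac{1}{2}mv(T)^2 - \frac{1}{2}mv(0)^2,$$

that is, momentum as the integral of force and kinetic energy as the integral of $fv$ as work, both determined only up to the initial velocity $v(0)$.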
Defining mass by Newton's 2nd law in terms of force and motion/acceleration makes mass = inertial mass, from which the equality of inertial mass and gravitational mass follows by definition, since gravitation appears as force. Einstein's Equivalence Principle, the basic assumption of the general theory of relativity, is thus empty of physical content, in line with the general nature of Einstein's physics jumping freely between definition and physical fact, as exposed in detail in Many-Minds Relativity. Take a look and get enlightened by understanding the confusion between definition and fact, which has corrupted modern physics into a mess of subjective epistemology instead of a science of objective ontology in the spirit of the Enlightenment. This was understood by Einstein, but he only gave cryptic evidence of this insight, like in the above quote and:
If I would be a young man again and had to decide how to make my living, I would not try to become a scientist or scholar or teacher. I would rather choose to be a plumber or a peddler...
Labels: Einstein, emc2
What is the set of central charges of two dimensional rational conformal field theories?
In this question, RCFT means a unitary full rational two dimensional conformal field theory. As every unitary two dimensional conformal field theory, an RCFT has a central charge $c$, which is a nonnegative real number. The rationality hypothesis implies that $c$ is in fact a nonnegative rational number: $c \in \mathbb{Q}_{\geq 0}$.
Let $\mathcal{C}$ be the subset of $\mathbb{Q}_{\geq 0}$ made of the rational numbers which are the central charge of some RCFT. As it is possible to tensorize RCFTs, $\mathcal{C}$ is closed under addition.
For example, the intersection of $\mathcal{C}$ with the interval $[0,1]$ is given by $0$, $1/2$, $7/10$, $4/5$, $\dots$, $1$, i.e. by the central charges of the unitary minimal models together with $c=1$. In particular, $c=1$ is an accumulation point of $\mathcal{C}$.
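For reference, the central charges of the unitary minimal models are given by the well-known formula

$$c_m = 1 - \frac{6}{m(m+1)},\qquad m = 3, 4, 5, \dots,$$

which reproduces the values $1/2$, $7/10$, $4/5$ listed above and accumulates at $c=1$ from below.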
Quantitative: Is the set $\mathcal{C}$ explicitly known?
Qualitative: Is the set $\mathcal{C}$ closed in $\mathbb{R}$? Is it well-ordered? If yes, what is its ordinal? (For example, are there accumulation points of accumulation points...?)
These questions have two motivations:
1) the claim that the RCFTs are classified: see for example http://ncatlab.org/nlab/show/FRS-theorem+on+rational+2d+CFT
I did not go through this work but I would like to know if this classification is "abstract" or "concrete". In particular, I would like to know if it gives an answer to the previous questions.
2) Similar questions have been asked and solved for a different set of real numbers: the set of volumes of hyperbolic 3-manifolds. It seems to me that there is a (very vague at this moment) similarity between these two sets of real numbers.
conformal-field-theory
central-charge
asked Apr 6, 2015 in Theoretical Physics by 40227
I think above 1, the spectrum is continuous, but I need to check the yellow book.
commented Apr 7, 2015 by Ryan Thorngren
@Ryan Thorngren: as I am restricting myself to rational CFTs, the set of central charges is certainly not continuous. But even considering all the CFTs, I don't think it is true. For example, the existence of some CFTs with irrational central charges was a not-so-easy question, as far as I understand. Are you rather referring to the fact that there exist unitary representations of the Virasoro algebra for any value of the central charge above 1?
commented Apr 7, 2015 by 40227
Did green debt instruments aid diversification during the COVID-19 pandemic?
Paresh Kumar Narayan1,
Syed Aun R. Rizvi2 &
Ali Sakti3
Financial Innovation volume 8, Article number: 21 (2022)
Faced with a persistent pandemic, investors are concerned about portfolio diversification. While the literature on COVID-19 has evolved impressively, work on diversification opportunities remains limited. We contribute to the literature by exploring the volatility and co-movement of different sovereign debt instruments, including green sukuk, sukuk, bonds, and Islamic and conventional equity indices for Indonesia. Our results consistently point towards increased asset co-movement and weak profitability during the pandemic. Interestingly, sukuk and green sukuk have a 14% correlation with stocks, suggesting potential diversification prospects in times of extreme shocks.
Global crises, whether financial or economic, bring to attention the role of portfolio diversification. The literature on portfolio diversification in times of crises is rich and perceives debt instruments as attractive tools (see, inter alia, Boucher and Tokpavi 2019; Selmi et al. 2019; Skintzi 2019). The current COVID-19 pandemic, because it is more persistent than any previous crisis, has had a dynamic effect on the financial system. In other words, the effects have been experienced in stages. The initial stage, for instance, was one when the pandemic started and markets overreacted; then, with time, as more was understood about the pandemic, markets corrected their overreaction (Harjoto et al. 2021). In this regard, many studies have demonstrated how equity markets (see, inter alia, Haroon and Rizvi 2020a,b; Narayan 2020a; Sharma 2020) and energy markets (see, for instance, Iyke 2020; Polemis and Soursou 2020; Gil-Alana and Monge 2020) globally have reacted to the pandemic over time. Some studies have also explored the interaction between different asset classes: exchange rates (Narayan 2020b), oil and exchange rates (Devpura 2020), stocks and exchange rates (Prabheesh and Kumar 2021), stocks and bonds (Papadamou et al. 2020) and cryptocurrencies (Yousaf and Ali 2020; Shahzad et al. 2021).Footnote 1
The literature alluded to above contains an important gap: few studies have considered diversification prospects in light of the pandemic from a green investmentFootnote 2 point of view. In this study, we explore these potential diversification prospects. We argue that, faced with the pandemic, investors are likely to behave in a manner consistent with the flight-to-quality phenomenon. In this pandemic situation, we perceive investors as treating market volatility and the connectedness between sovereign debt and equity markets differently (that is, by considering the dynamic and persistent nature of the pandemic). When diversification prospects are evaluated, a multi-asset class model is suitable. We, therefore, consider an asset class that includes Islamic debt and equity markets. Over the years, Islamic finance has developed into a unique asset class; see Narayan and Phan (2019) for a survey.Footnote 3
A relatively recent innovation in the debt market is the green debt instruments. An attractive feature of this instrument has been its use in financing renewable energy projects globally (see, inter alia, Tang and Zhang 2020; Banga 2019; Hachenberg and Schiereck 2018). However, the role of the debt market in an asset diversification portfolio needs further inquiry given the ramifications of the COVID-19 pandemic.
To explore the volatility and inter-connectedness of multiple asset classes within stocks and debt during the COVID-19 pandemic, we focus on the conventional stock market (IDX Composite/Indeks Harga Saham Gabungan), its Islamic counterpart (Jakarta Islamic Index), the conventional debt market (sovereign bond), its Islamic counterpart (sovereign sukuk), and the green sukuk from Indonesia. Our hypothesis is that the interplay between these asset classes has evolved and changed owing to the pandemic. Our hypothesis is motivated by earlier work on previous pandemics by Bhuyan et al. (2010), who highlight that stock market returns of the infected countries exhibit a significant increase in co-movements. The evolving recent literature shows that pandemic has impacted asset prices substantially differently during the COVID-19 period. Amongst these studies, Iyke (2020) shows not only that the pandemic predicts asset prices but how its influence changed during the pandemic.Footnote 4 Focusing on the exchange rate market, Narayan (2020a) shows how bubble activity intensified, and how the exchange rate became more resilient to shocks during the pandemic. Moreover, Narayan (2020b) shows how the oil market return/volatility changed due to the pandemic. In addition, Prabheesh and Kumar (2021) show that exchange rate remains neutral while energy and financial markets were affected by COVID uncertainty.Footnote 5 Other studies offer equally important insights: Appiah-Otoo (2020), for instance, reveal using Chinese data that exchange rate significantly reduces domestic credit during the pandemic; Salisu and Sikiru (2020) find that during the COVID-19 period uncertainty of the pandemic is a factor for Asia–Pacific Islamic stock returns; and Qin et al. (2020) show that the pandemic has a negative effect on the oil price. Overall, we conclude from the literature that the impact of the recent pandemic on financial markets has exacerbated uncertainty—a point demonstrated by Sharma (2020). This literature, therefore, inspires the following questions. What has happened to asset price correlations over time including over the pandemic period? Has this portfolio diversification or otherwise influenced profits from those asset classes we consider? Has there been a flight to quality because of the uncertainty created by the pandemic? Our hypothesis allows us to address these questions and contribute to our understanding of asset pricing behavior in the COVID-19 pandemic period.
Our focus on Indonesia has roots in its unique structure and diverse set of financial asset class offerings. As highlighted by Sharma et al. (2019), Indonesia is uniquely poised as its equity market is large yet underdeveloped, offering opportunities for growth. Home to the fourth largest population globally, the Indonesian market boasts a range of diverse products across both conventional and Islamic asset classes. Indonesia also has the largest Muslim population in the world, offering a range of Islamic products, and is a leader in the sovereign issuance of green sukuk.Footnote 6 These features make a study of Indonesia's asset co-movement from the point of view of diversification benefits ideal.
We test our hypothesis on the evolution of risks and returns from multiple two-asset portfolios by using daily data (March 4, 2019 to December 4, 2020) fitted to a multivariate GARCH model. To evaluate the impact of the COVID-19 pandemic, the sample is divided into pre-COVID-19 and COVID-19 sub-sample periods. Our findings and contributions are as follows. We show that the volatility of all assets increased during the COVID-19 pandemic. While this finding adds to the evolving literature on financial market volatility during COVID-19 (see Haroon and Rizvi 2020a, b; Ali et al. 2020; Salisu and Adediran 2020), the insights we provide come from a unique set of assets that includes less risky assets, such as sukuk and green sukuk, which have not been subjected to this type of empirical investigation before.
We, therefore, explore the correlations between asset pairs consisting of conventional and Islamic stocks, bonds, sukuk, and green sukuk. By considering a mixture of risky and less risky assets, we offer new insights on the dynamic relationship among assets that have implications for portfolio formation and risk diversification. We find high correlations, averaging over 80%, between stocks, and these correlations increased during the pandemic. Both sukuk and green sukuk have low correlations with stocks, and although these correlations increase in the pandemic period, they remain low, at less than 14%. The implication is that both sukuk and green sukuk offer diversification benefits during the pandemic. When we utilize a two-asset portfolio consisting of the same pairs of assets used to obtain correlations, we find, in general, that diversification offers greater profits from a two-asset portfolio weight optimization model. Our findings support the mixed evidence in the literature on green debt instruments. This literature (see, for instance, Nguyen et al. 2021; Reboredo et al. 2020) finds evidence of (a) low correlations between green bonds and commodities and (b) price spillover from fixed income to green bonds. We add to this by showing potential diversification benefits when it comes to sukuk and green sukuk.
Finally, because our exercise includes both risky assets (such as conventional and Islamic stocks) and less risky assets (such as bonds, sukuk and green sukuk), we can test the flight to quality hypothesis, the idea that during crises investors prefer holding less risky assets. If this is true, then we should expect to see a switch from risky assets to less risky assets. Using a Granger causality test, we find strong evidence supporting Granger causality from Islamic and conventional stocks to sukuk and green sukuk. The evidence is, as expected, much stronger during the COVID-19 pandemic period.
Lastly, the COVID-19 pandemic has instigated a rich literature on the effects of the pandemic on the financial system and asset prices (see, inter alia, Yan and Qian 2020; Sharma 2020; Iyke 2020; Sha and Sharma 2020; Sharma and Sha 2020). None of these studies has yet analyzed how sukuk and green sukuk are connected to stocks and bonds from a portfolio diversification perspective. By doing so, we provide additional insights on how asset connectedness has evolved in light of the pandemic. In addition, our study contributes to the evolving literature on the nexus between sovereign debt classes and equities, which reports mixed results (see Samour et al. 2020; Golab et al. 2018; Allegret et al. 2017).
We engage in robustness tests to confirm our key findings. Our attempt starts with sample splitting. We decompose the full sample into a pre-COVID pandemic sub-sample, an epidemic sub-sample (before COVID-19 was declared a pandemic), and a pandemic sub-sample, based on the date on which the World Health Organization (WHO) declared COVID-19 a pandemic. We consistently find evidence of low correlations between sukuk/green sukuk vis-à-vis other assets. Secondly, when we localize the date of the start of COVID-19 to Indonesia and create sub-samples which are Indonesia-specific, our main results remain insensitive. Thirdly, as an alternative econometric model, we employ the exponential GARCH model to estimate volatility for our data sample. Results from this exercise do not change our story.
Data and methodology
Given that the COVID-19 pandemic is a recent crisis, the data are limited. For instance, the first confirmed case was reported on December 31, 2019. In this study, we use Indonesian data: namely, Indonesia's sovereign 5-year debt instruments (bond, sukuk and green sukuk). For equities, we use the benchmark conventional stock market index (IDX Composite/ Indeks Harga Saham Gabungan) and the Islamic index from the Jakarta stock exchange (Jakarta Islamic Index).
The data span March 4, 2019 to December 4, 2020, which gives us 225 and 240 observations (in terms of days) pre and post the first reported COVID-19 case on December 31, 2019.Footnote 7 We also segregate the sample into a pre-COVID-19 (March 4, 2019 to December 30, 2019) period. We create a sub-sample marked by the period when the WHO had not yet declared COVID-19 a pandemic; we refer to this as the epidemic phase (December 31, 2019 to March 10, 2020). Finally, we have a pandemic sub-sample covering the time from when the WHO declared COVID-19 a pandemic (March 11, 2020 to December 4, 2020). In additional analysis, we categorize the pre-COVID-19 and post-COVID-19 timelines localized specifically to the Indonesian context, given that the first reported case in Indonesia was on March 2, 2020.Footnote 8 With this date, for Indonesia, we have a pre-COVID-19 period from March 4, 2019 to March 1, 2020 and a COVID-19 period from March 2, 2020 to December 4, 2020.
Daily returns for all series are calculated using the equation \(r_t = \ln(P_t) - \ln(P_{t-1})\). Here, \(r_t\) and \(P_t\) denote the daily return and the price on business day \(t\), respectively, and ln represents the natural log.
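A minimal sketch of this return calculation in Python (the price series here is hypothetical, purely for illustration):

import numpy as np
import pandas as pd

# Hypothetical daily closing prices, standing in for any of the five assets
prices = pd.Series([100.0, 101.2, 100.5, 102.0, 101.8])

# r_t = ln(P_t) - ln(P_{t-1})
returns = np.log(prices).diff().dropna()
print(returns)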
To study the volatility and correlations of the sovereign bond, sukuk, green sukuk, Islamic stocks, and conventional stocks, we employ the Multivariate Generalized Autoregressive Conditional Heteroscedastic-Dynamic Conditional Correlation (MGARCH-DCC) model proposed by Engle (2002) and Pesaran and Pesaran (2009). The MGARCH-DCC is suitable for obtaining the variances and correlations between assets over time. This method is popular and widely used in applications, and can be stated as:
$$r_t = \mu_t + \varepsilon_t$$
where \(\mu_t = \mathrm{E}\left[r_t \mid \Omega_{t-1}\right]\), \(\varepsilon_t \mid \Omega_{t-1} \sim N(0, H_t)\), \(H_t = D_t R_t D_t\), \(D_t = \mathrm{diag}\{\sqrt{h_{ii,t}}\}\), and \(z_t = D_t^{-1}\varepsilon_t\). Here, \(h_{ii,t}\) is the estimated conditional variance from the individual univariate GARCH model; \(D_t\) is the diagonal matrix of conditional standard deviations; \(R_t\) is the time-varying conditional correlation coefficient matrix of returns; and \(z_t\) is the standardized residual vector with mean zero and variance one. The dynamic correlation coefficient matrix of the DCC model can be specified further as per Hsu et al. (2008):
$$R_t = \mathrm{diag}(Q_t)^{-1/2}\, Q_t\, \mathrm{diag}(Q_t)^{-1/2}$$
where \(Q_t = (q_{ij,t})\) and \(\mathrm{diag}(Q_t)^{-1/2} = \mathrm{diag}\left(\tfrac{1}{\sqrt{q_{11,t}}}, \dots, \tfrac{1}{\sqrt{q_{nn,t}}}\right)\), with \(q_{ij,t} = \bar{\rho}_{ij} + a\left(z_{i,t-1} z_{j,t-1} - \bar{\rho}_{ij}\right) + \beta\left(q_{ij,t-1} - \bar{\rho}_{ij}\right)\), in which \(\bar{\rho}_{ij}\) is the unconditional correlation and the new time-varying conditional correlation coefficient is \(\rho_{ij,t} = q_{ij,t}/\sqrt{q_{ii,t}\, q_{jj,t}}\).
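A minimal numpy sketch of the DCC correlation recursion above, assuming the standardized residuals \(z_t\) have already been obtained from univariate GARCH fits and that the DCC parameters \(a\) and \(\beta\) (here a, b) are given rather than estimated:

import numpy as np

def dcc_correlations(z, a, b):
    # z: (T, n) array of standardized residuals; a, b: DCC parameters with a + b < 1
    T, n = z.shape
    R_bar = np.corrcoef(z, rowvar=False)   # unconditional correlation, rho-bar
    Q = R_bar.copy()
    R = np.empty((T, n, n))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)          # R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}
        # q_{ij,t+1} = rho_bar + a*(z_i z_j - rho_bar) + b*(q_{ij,t} - rho_bar)
        Q = (1.0 - a - b) * R_bar + a * np.outer(z[t], z[t]) + b * Q
    return R

# Example on synthetic residuals for two assets
z = np.random.default_rng(0).standard_normal((500, 2))
R = dcc_correlations(z, a=0.05, b=0.90)
print(R[-1])  # conditional correlation matrix on the last day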
Table 1 provides descriptive statistics of the five assets. We report average returns and standard deviations of the data across various sub-samples. Panel A provides the descriptive statistics for the pre-COVID-19 period (March 4, 2019 to December 30, 2019) and the COVID-19 phase (December 31, 2019 to December 4, 2020). Panel B presents corresponding descriptive statistics for the pre-COVID period (March 4, 2019 to December 30, 2019), the epidemic phase (December 31, 2019 to March 10, 2020), and the pandemic phase (March 11, 2020 to December 4, 2020). Panel C defines the pre-COVID-19 and COVID-19 periods specifically for Indonesia, based on the first reported COVID-19 case in Indonesia as discussed earlier.Footnote 9 Consistent with recent studies (see those in Sha and Sharma (2020) and Narayan et al. (2020), for instance), returns in the pandemic period have been relatively low with higher variance.
Table 1 Descriptive statistics
The data suggest that the sovereign sukuk issued by the Government of Indonesia is least affected in terms of average returns during the pandemic. We also observe that in the case of green sukuk, there was a sharp decline in returns during the epidemic phase but returns recovered during the pandemic phase. The implication is that because asset price reaction to the different phases of COVID-19 has been heterogeneous (see Sharma and Sha 2020), the conditional correlations and volatilities are likely to be heterogeneous.
To formally test this, we turn to Tables 2 and 3, where average conditional volatility and conditional correlations for different asset combinations and sub-samples are presented. From the results in Table 2, we observe that over the full sample of data across the five assets, Islamic stocks were most volatile, followed by conventional stocks. Bonds were least volatile, together with green sukuk. Panel A shows that, except for bonds, which became least volatile in the COVID-19 period, all other assets were more volatile in the COVID-19 period compared to the pre-COVID-19 period. This is true regardless of how we define the COVID-19 dates (see Panel C). Panel B suggests that even when COVID-19 was not yet declared a pandemic, that is, at the epidemic stage, three of the five assets had higher volatility compared to the pre-COVID-19 period. In this regard, we see that bond and green sukuk volatility was less sensitive to the epidemic. Overall, the largest increase in volatility is noted for stocks and the least for sukuk-based assets.
Table 2 Average sample volatility of assets
Table 3 Average conditional correlations amongst asset pairs
Table 3 presents average conditional correlations. The correlations are divided into three parts. The first part of the table provides conditional correlations for pairs of debt securities. The second part presents the conditional correlations of conventional stocks with bonds, sukuk, and green sukuk. The third part presents the conditional correlations of Islamic stocks with conventional stocks, bonds, sukuk, and green sukuk. Starting with the sukuk, green sukuk, and bond correlations, we see that over the full sample, sukuk-green sukuk had the highest correlation at 39%. In the pre-COVID-19 period, it had the highest correlation at 51%. In the COVID-19 phase, this correlation declined to around 30% (Panel C). The sukuk-bond correlations were lowest, from around 14% in the full sample to around 11% in the pre-COVID-19 sample (Panel C), and lowest of all, at 3.54%, during the epidemic stage (Panel B). Overall, in the COVID-19 phase sukuk-bond correlations increased compared to the pre-COVID-19 phase. The exception is the correlation between green sukuk and sukuk, which declined in the COVID-19 phase.
The second part of the correlations, between conventional stocks and bonds/sukuk, suggests that correlations have grown in the COVID-19 phase compared to pre-COVID-19 times. The weakest correlations, though, are for conventional stocks and sukuk, followed by conventional stocks and green sukuk.
Turning to the last set of results on Islamic stocks, conventional stocks, bonds, sukuk, and green sukuk, we see that stock-level correlations are the highest amongst all pairs and have increased substantially over the COVID-19 period (an average of over 93%) compared to an average correlation of around 80% in the pre-COVID-19 period. The Islamic stock–bond correlation has also increased in the COVID-19 period, but it is around 20%. Islamic stocks and sukuk are weakly correlated both before and during the pandemic; however, in the COVID-19 phase, the average correlation has risen to as much as 11%. A similar pattern is observed for the correlation between Islamic stocks and green sukuk, with correlations peaking at 14% during the pandemic. This suggests the flight-to-quality phenomenon, as discussed by Papadamou et al. (2020), regarding COVID-19's impact on financial asset classes.
A test of the flight to quality hypothesis
As we discussed in the introduction, our test of pair-wise correlations between asset classes was motivated by the flight to quality hypothesis: that investors prefer less risky assets during crises compared to riskier assets. One way of testing this would be to examine whether trading volume in risky assets, such as equity, increased during the COVID-19 pandemic. We are unable to test this given that trading volume data for the asset classes in our sample are unavailable. As an alternative, we employ a Granger causality testFootnote 10 based on a VAR model with four lags. Our motivation for using a VAR model is as follows. If the riskier assets in our portfolio, namely conventional and Islamic stocks, Granger cause the less risky assets, namely bond, sukuk and/or green sukuk, this is evidence of the flight to quality phenomenon at work. To set the scene of how the assets have Granger caused each other, we start with the results in Panel A of Table 4, based on the full sample of data. We see evidence of Granger causality running from conventional and Islamic stocks to green sukuk. When we compare the pre-COVID-19 phase with the COVID-19 phase, we find stronger evidence of flight to quality. That is, during the COVID-19 period, conventional and Islamic stocks Granger cause both sukuk and green sukuk. By comparison, in the pre-COVID-19 period, conventional and Islamic stocks only Granger caused green sukuk. Our work in this regard is preliminary and should be open to additional assessment of capital flight.
Table 4 Granger Causality
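A sketch of how such a test could be run with statsmodels (the data here are synthetic placeholders; in the paper the inputs would be the actual return series):

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
# Placeholder return series standing in for, e.g., Islamic stock and green sukuk returns
df = pd.DataFrame(rng.standard_normal((250, 2)), columns=["stock", "green_sukuk"])

# grangercausalitytests examines whether the second column Granger causes the first,
# so this asks: do stock returns Granger cause green sukuk returns (4 lags, as in the VAR)?
results = grangercausalitytests(df[["green_sukuk", "stock"]], maxlag=4)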
Economic significance of correlations
The objective of this sub-section is to explore, for each pair of assets in our portfolio, profits based on a portfolio weight that is dynamic by construction. The idea is to see whether high asset correlations do indeed deliver lower profits. In other words, the more diversified the portfolio (that is, the lower the correlations), the higher the profit. To achieve this goal, we draw on the two-asset portfolio weight optimization of Kroner and Ng (1998). To see how this weight is obtained, consider one of our asset pairs, namely sukuk and bond. In this case, the weight \(w_t\) of the sukuk market in a one-dollar portfolio of sukuk and bond at time \(t\) is given by:
$$w_t = \frac{h_t^{Bond} - h_t^{Sukuk,Bond}}{h_t^{Sukuk} - 2h_t^{Sukuk,Bond} + h_t^{Bond}}$$
The time-varying conditional variances of the sukuk market \(\left(h_t^{Sukuk}\right)\) and the bond market \(\left(h_t^{Bond}\right)\), and the conditional covariance \(\left(h_t^{Sukuk,Bond}\right)\), are extracted from estimating a bivariate GARCH model.
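A minimal numpy sketch of this weight calculation, assuming the conditional variance and covariance series have already been extracted from a bivariate GARCH fit. The clipping to [0, 1] (no short selling) is a common convention in applications of Kroner and Ng (1998), not something stated in the text above:

import numpy as np

def kroner_ng_weight(h_sukuk, h_bond, h_cov):
    # w_t = (h_t^Bond - h_t^{Sukuk,Bond}) / (h_t^Sukuk - 2 h_t^{Sukuk,Bond} + h_t^Bond)
    w = (h_bond - h_cov) / (h_sukuk - 2.0 * h_cov + h_bond)
    return np.clip(w, 0.0, 1.0)

# Example with hypothetical conditional moments for two days
h_sukuk = np.array([0.8e-4, 1.0e-4])
h_bond = np.array([0.5e-4, 0.6e-4])
h_cov = np.array([0.1e-4, 0.2e-4])
print(kroner_ng_weight(h_sukuk, h_bond, h_cov))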
Two trends are worth discussing: (1) we see that low-risk asset portfolios, in general, have lower profits (between 2 and 7% in the COVID-19 pandemic period) compared to portfolios where one of the two assets is a riskier asset, such as the Islamic stock–bond portfolio (8.24%) and the Islamic stock–sukuk portfolio (11.27%); and (2) in general, profits are higher for portfolios with lower correlations (see Table 5).
Table 5 Economic significance results
A final observation is that the 10 two-asset portfolios we consider are all profitable in the COVID-19 period, with annualized profits in the 2.59% (sukuk-bond) to 14.66% (conventional stock–Islamic stock) range. This is consistent with the stronger performance of the financial market in the aftermath of the early market panic due to the pandemic (see Sha and Sharma 2020 and Sharma and Sha 2020 for a summary of the literature).
Robustness test
One concern with our correlation and flight to quality analysis is that in taking returns we have not adjusted for other potential risk factors. We do this now. We proceed in two steps. In the first step, we regress excess returns (on each asset) on excess market returns, stock price volatility, the exchange rate, and day-of-the-week dummy variables. For further details, see the notes to Table 6, where the results for the flight to quality hypothesis are reported. We see that our main finding is insensitive to the adjustment for potential market and macro risk factors. The flight to quality holds strongly during the COVID-19 phase. Not only do Islamic (conventional) stocks Granger cause bonds and sukuk (bonds and green sukuk), but bonds and sukuk also Granger cause green sukuk.
Table 6 Robustness Test
Conclusion
The COVID-19 pandemic has brought to attention the issue of portfolio diversification. This is also an issue for emerging markets such as Indonesia. In this paper, we focus on Indonesia given its progress on sukuk and green sukuk as financial instruments. We explore the volatility and correlation patterns in Indonesia's sovereign bond, its stock market (both conventional and Islamic), sukuk, and green sukuk. In evaluating these five asset classes, we show that their volatility has not only increased over time but was also higher during the COVID-19 pandemic. Dynamic conditional correlation analysis shows that correlations between asset pairs increased during the pandemic, and optimized portfolio weights (in a two-asset portfolio) offer lower returns on average when correlations are high. We also show evidence of flight to quality, with investors switching from riskier assets (such as conventional and Islamic stocks) to less risky assets (such as sukuk and green sukuk). This evidence is especially strong during the COVID-19 pandemic phase.
Our work suggests multiple directions for future research. First, asset correlations have clearly changed during the pandemic phase. It will be interesting to explore how these correlations have implications for dynamic trading strategies such as those of a mean–variance investor who tracks forecasted returns given an information set. It is also clear that the information set used to predict asset prices has also been influenced by the pandemic. Second, we have used a Granger causality test to deduce evidence of flight to quality. Future research should build on this by exploring other approaches and methods to exploring flight to quality in light of the pandemic.
Notes
1. For a literature survey on COVID-19, see Sha and Sharma (2020) and Sharma and Sha (2021).
2. Green investments are investments in asset classes which are aligned with a commitment to the promotion of environmentally friendly business practices and the conservation of natural resources. Green stocks are equities of environmentally friendly companies, while green debt instruments are a type of fixed-income instrument specifically earmarked to raise money for climate and environmental projects.
3. Several studies on equities have explored the unique characteristics of Islamic equities (see, inter alia, Rizvi and Arshad 2018; Rana and Akhter 2015) and of debt instruments (sukuk) (see, for instance, Azmat et al. 2017; Naifar and Hammoudeh 2016; Önder 2016).
4. Several studies on COVID-19 have shown how the pandemic has influenced and shaped economic and financial relationships between and within countries and amongst groups of countries, such as those belonging to regions. For a partial list of studies, see Prabheesh (2020), who examines stock and foreign portfolio investments during the pandemic for India; Gil-Alana and Claudio-Quiroga (2020) evaluate the response of Asian stock markets to the pandemic; Yan and Qian (2020) and He et al. (2020) evaluate the reaction of the Chinese stock market to the pandemic; Liu et al. (2020) test the oil-stock returns nexus for the US; Xu (2020) studies stock returns of the USA and Canada; Djurovic et al. (2020) study the effect on Montenegro, while Sergi et al. (2021) and Ashraf (2020) consider this relation for 76 and 43 countries, respectively; So et al. (2020) examine the case of Hong Kong stock returns; and Salisu et al. (2020) focus on the OECD countries. In related work, Haroon and Rizvi (2020b) show how financial market liquidity is impacted by the pandemic. See also Narayan, Devpura and Wang (2020) and Wei et al. (2020) for the exchange rate-COVID-19 analysis.
5. One of the most influential strands of the COVID-19 literature relates to the energy sector; see, for instance, Ertuğrul et al. (2020), who find evidence of a high volatility pattern of the Turkish diesel market during COVID-19. Polemis and Soursou (2020) find that the pandemic influenced stock returns of Greek energy firms. Akhtaruzzaman et al. (2020) disclose COVID-19 as a moderator of the oil price shock. Gharib et al. (2020) find a contagion effect of bubbles in oil and gold markets during the COVID-19 pandemic. Amar et al. (2020) show spillovers of commodity and stock prices in oil producing and consuming countries during the COVID-19 period.
6. The green sukuk and bond initiative of the Indonesian government is part of efforts to reduce greenhouse gas emissions. Initiated in 2018, the green sukuk issuance raised $1.25 billion to finance environmentally friendly infrastructural projects across Indonesia.
7. https://www.who.int/news/item/27-04-2020-who-timeline---covid-19
8. https://covid19.who.int/region/searo/country/id
10. The Granger causality test has been used to understand and measure flight to quality in the literature where a complete set of data is limited; see, for instance, Soylu and Güloğlu (2019), Corsi, Lillo, Pirino, and Trapin (2018), and Sarwar (2017).
References
Akhtaruzzaman M, Boubaker S, Chiah M, Zhong A (2021) COVID-19 and oil price risk exposure. Financ Res Lett 42:101882
Ali MH, Uddin MA, Khan MAR, Goud B (2020) Faith‐based versus value‐based finance: is there any portfolio diversification benefit between responsible and Islamic finance? Int J Financ Econ (In Press)
Allegret JP, Raymond H, Rharrabti H (2017) The impact of the European sovereign debt crisis on banks stocks. Some evidence of shift contagion in Europe. J Bank Finance 74:24–37
Amar AB, Belaid F, Youssef AB, Chiao B, Guesmi K (2021) The unprecedented reaction of equity and commodity markets to COVID-19. Financ Res Lett 38:101853
Appiah-Otoo I (2020) Does COVID-19 affect domestic credit? Aggregate and Bank level evidence from China. Asian Econ Lett 1(3):18074. https://doi.org/10.46557/001c.18074
Ashraf BN (2020) Economic impact of government interventions during the COVID-19 pandemic: international evidence from financial markets. J Behav Exp Finance 27:100371
Azmat S, Skully M, Brown K (2017) The (little) difference that makes all the difference between Islamic and conventional bonds. Pac Basin Financ J 42:46–59
Banga J (2019) The green bond market: a potential source of climate finance for developing countries. J Sustain Fin Invest 9:17–32
Bhuyan R, Lin EC, Ricci PF (2010) Asian stock markets and the severe acute respiratory syndrome (SARS) epidemic: implications for health risk management. Int J Environ Health 4(1):40–56
Boucher C, Tokpavi S (2019) Stocks and bonds: flight-to-safety forever? J Int Money Financ 95:27–43
Corsi F, Lillo F, Pirino D, Trapin L (2018) Measuring the propagation of financial distress with granger-causality tail risk networks. J Financ Stab 38:18–36
Devpura N (2020) Can oil prices predict Japanese yen. Asian Econ Lett. https://doi.org/10.46557/001c.17964
Djurovic G, Djurovic V, Bojaj MM (2020) The macroeconomic effects of COVID-19 in Montenegro: a Bayesian VARX approach. Financ Innov 6(1):1–16
Engle R (2002) Dynamic conditional correlation: a simple class of multivariate generalized autoregressive conditional heteroskedasticity models. J Bus Econ Stat 20(3):339–350
Ertuğrul HM, Güngör BO, Soytaş U (2020) The effect of the COVID-19 outbreak on the Turkish diesel consumption volatility dynamics. Energy Res Lett. https://doi.org/10.46557/001c.17496
Gharib C, Mefteh-Wali S, Jabeur SB (2021) The bubble contagion effect of COVID-19 outbreak: evidence from crude oil and gold markets. Financ Res Lett 38:101703
Gil-Alana LA, Claudio-Quiroga G (2020) The COVID-19 impact on the ASIAN stock markets. Asian Econ Lett. https://doi.org/10.46557/001c.17656
Gil-Alana LA, Monge M (2020) Crude oil prices and COVID-19: persistence of the shock. Energy Res Lett 1(1):13200. https://doi.org/10.46557/001c.13200
Golab A, Jie F, Powell R, Zamojska A (2018) Cointegration between the European union and the selected global markets following sovereign debt crisis. Invest Manag Financ Innov 15(1):35–45
Hachenberg B, Schiereck D (2018) Are green bonds priced differently from conventional bonds? J Asset Manag 19:371–383
Harjoto MA, Rossi F, Lee R, Sergi BS (2021) How do equity markets react to COVID-19? Evidence from emerging and developed countries. J Econ Bus 115:105966
Haroon O, Rizvi SAR (2020a) COVID-19: media coverage and financial markets behavior—a sectoral inquiry. J Behav Exp Financ 27:100343
Haroon O, Rizvi SAR (2020b) Flatten the curve and stock market liquidity–an inquiry into emerging economies. Emerg Mark Financ Trade 56(10):2151–2161
Hsu Ku YH, Wang JJ (2008) Estimating portfolio value-at-risk via dynamic conditional correlation MGARCH model: an empirical study on foreign exchange rates. Appl Econ Lett 15(7):533–538
Iyke B (2020) COVID-19: the reaction of US oil and gas producers to the pandemic. Energy Res Lett 1(2):13912. https://doi.org/10.46557/001c.13912
Kroner KF, Ng VK (1998) Modelling asymmetric comovement of asset returns. Rev Financ Stud 11:817–844
Liu D, Sun W, Zhang X (2020) Is the Chinese economy well positioned to fight the COVID-19 pandemic? The financial cycle perspective. Emerg Mark Financ Trade 56(10):2259–2276
Naifar N, Hammoudeh S (2016) Do global financial distress and uncertainties impact GCC and global sukuk return dynamics? Pac Basin Financ J 39:57–69
Narayan PK, Phan DHB (2019) A survey of Islamic banking and finance literature: issues, challenges and future directions. Pac Basin Financ J 53:484–496
Narayan PK (2020a) Did bubble activity intensify during COVID-19? Asian Econ Lett. https://doi.org/10.46557/001c.17654
Narayan PK (2020b) Has COVID-19 changed exchange rate resistance to shocks? Asian Econ Lett. https://doi.org/10.46557/001c.17389
Narayan PK, Devpura N, Wang H (2020) Japanese currency and stock market—What happened during the COVID-19 pandemic? Econ Anal Pol 68:191–198
Nguyen TTH, Naeem MA, Balli F, Balli HO, Vo XV (2021) Time-frequency comovement among green bonds, stocks, commodities, clean energy, and conventional bonds. Financ Res Lett 40:101739
Önder YK (2016) Asset backed contracts and sovereign risk. J Econ Behav Organ 132:237–252
Papadamou S, Fassas AP, Kenourgios D, Dimitriou D (2021) Flight-to-quality between global stock and bond markets in the covid era. Financ Res Lett 38:101852
Pesaran B, Pesaran MH (2009) Time series econometrics: using Microfit 5.0 (No. 330.015195 P48.). Oxford University Press, Oxford
Polemis M, Soursou S (2020) Assessing the impact of the COVID-19 pandemic on the Greek energy firms: an event study analysis. Energy Res Lett 1:3. https://doi.org/10.46557/001c.17238
Prabheesh KP (2020) Dynamics of foreign portfolio investment and stock market returns during the COVID-19 pandemic: evidence from India. Asian Econ Lett. https://doi.org/10.46557/001c.17658
Prabheesh KP, Kumar S (2021) The dynamics of oil prices exchange rates and the stock market under COVID-19 uncertainty: evidence from India. Energy Res Lett. https://doi.org/10.46557/001c.27015
Qin M, Zhang YC, Su CW (2020) The essential role of pandemics: a fresh insight into the oil market. Energy Res Lett 1(1):13166. https://doi.org/10.46557/001c.13166
Rana ME, Akhter W (2015) Performance of Islamic and conventional stock indices: empirical evidence from an emerging economy. Financ Innov 1(1):1–17
Reboredo JC, Ugolini A, Aiube FAL (2020) Network connectedness of green bonds and asset classes. Energy Econ 86:104629
Rizvi SAR, Arshad S (2018) Understanding time-varying systematic risks in Islamic and conventional sectoral indices. Econ Model 70:561–570
Salisu AA, Sikiru AA (2020) Pandemics and the Asia-Pacific Islamic stocks. Asian Econ Lett. https://doi.org/10.46557/001c.17413
Salisu A, Adediran I (2020) Uncertainty due to infectious diseases and energy market volatility. Energy Res Lett. https://doi.org/10.46557/001c.14185
Salisu AA, Akanni L, Raheem I (2020) The COVID-19 global fear index and the predictability of commodity price returns. J Behav Exp Financ 27:100383
Samour A, Isiksal AZ, Gunsel Resatoglu N (2020) The impact of external sovereign debt and the transmission effect of the US interest rate on Turkey's equity market. J Int Trade Econ Dev 29(3):319–333
Sarwar G (2017) Examining the flight-to-safety with the implied volatilities. Financ Res Lett 20:118–124
Selmi R, Gupta R, Kollias C, Papadamou S (2019) The stock-bond nexus and investors' behavior in mature and emerging markets. Stud Econ Financ 38(3):562–582
Sergi BS, Harjoto MA, Rossi F, Lee R (2021) Do stock markets love misery evidence from the COVID-19. Finance Res Lett 42:101923
Sha Y, Sharma S (2020) Research on pandemics special issue of the journal emerging markets finance and trade. Emerg Markets Finance Trade 56(10):2133–2137
Shahzad SJH, Bouri E, Kang SH, Saeed T (2021) Regime specific spillover across cryptocurrencies and the role of COVID-19. Financ Innov 7(1):1–24
Sharma SS (2020) A note on the Asian market volatility during the COVID-19 pandemic. Asian Econ Lett. https://doi.org/10.46557/001c.17661
Sharma SS, Narayan PK, Thuraisamy K, Laila N (2019) Is Indonesia's stock market different when it comes to predictability? Emerg Markets Rev 40:100623
Sharma S, Sha Y (2020) Part A: special section on COVID-19 research. Emerg Mark Financ Trade 56(15):3551–3553. https://doi.org/10.1080/1540496X.2020.1858617
Skintzi VD (2019) Determinants of stock-bond market comovement in the Eurozone under model uncertainty. Int Rev Financ Anal 61:20–28
So MK, Tiwari A, Chu AM, Tsang JT, Chan JN (2020) Visualising COVID-19 pandemic risk through network connectedness. Int J Infect Dis (In Press)
Soylu PK, Güloğlu B (2019) Financial contagion and flight to quality between emerging markets and US bond market. North Am J Econ Financ 50:100992
Tang DY, Zhang Y (2020) Do shareholders benefit from green bonds? J Corp Financ 61:101427
Wei Z, Luo Y, Huang Z, Guo K (2020) Spillover effects of RMB exchange rate among B&R countries: before and during COVID-19 event. Financ Res Lett 37:101782
Xu D (2020) Canadian Stock Market Volatility under COVID-19 (No. 2001). University of Waterloo, Department of Economics
Yan L, Qian Y (2020) The impact of COVID-19 on the Chinese stock market: an event study based on the consumer industry. Asian Econ Lett. https://doi.org/10.46557/001c.18068
Yousaf I, Ali S (2020) Discovering interlinkages between major cryptocurrencies using high-frequency data: new evidence from COVID-19 pandemic. Financ Innov 6(1):1–18
Using machine learning to identify gene interaction networks associated with breast cancer
Liyuan Liu, Wenli Zhai, Fei Wang, Lixiang Yu, Fei Zhou, Yujuan Xiang, Shuya Huang, Chao Zheng, Zhongshang Yuan, Yong He, Zhigang Yu & Jiadong Ji
BMC Cancer volume 22, Article number: 1070 (2022)
Breast cancer (BC) is one of the most prevalent cancers worldwide, but its etiology remains unclear. Obesity is recognized as a risk factor for BC, and many obesity-related genes may be involved in its occurrence and development. Research assessing the complex genetic mechanisms of BC should consider not only the effect of single genes on the disease but also the interactions between genes. This study sought to construct a gene interaction network to identify potential pathogenic BC genes.
The study included 953 BC patients and 963 control individuals. Chi-square analysis was used to assess the correlation between demographic characteristics and BC. The joint density-based non-parametric differential interaction network analysis and classification (JDINAC) method was used to build a BC gene interaction network from single nucleotide polymorphism (SNP) data. The odds ratio (OR) and 95% confidence interval (95% CI) of hub gene SNPs were evaluated using a logistic regression model. To assess reliability, the hub genes were quantified with the edgeR package using BC RNA-seq data from The Cancer Genome Atlas (TCGA), and identical edges were verified by logistic regression using UK Biobank datasets. GO and KEGG enrichment analyses were used to explore the biological functions of the interacting genes.
Body mass index (BMI) and menopause are important risk factors for BC. After adjusting for potential confounding factors, the BC gene interaction network was identified using JDINAC. LEP, LEPR, XRCC6, and RETN were identified as hub genes and both hub genes and edges were verified. LEPR genetic polymorphisms (rs1137101 and rs4655555) were also significantly associated with BC. Enrichment analysis showed that the identified genes were mainly involved in energy regulation and fat-related signaling pathways.
We explored the interaction network of genes derived from SNP data in BC progression. Gene interaction networks provide new insight into the underlying mechanisms of BC.
The World Health Organization (WHO)'s International Agency for Research on Cancer (IARC) reported that the most prominent change in global cancer data in 2020 was a rapid increase in breast cancer (BC) incidence. BC has replaced lung cancer as the most common cancer worldwide [1]. The mortality rate of female BC is particularly high in transitional versus developed countries [2]. Obesity is a recognized risk factor for many cancers [3, 4]. Higher estrogen levels resulting from the aromatization of adipose tissue; increased production of inflammatory cytokines such as tumor necrosis factor α, interleukin-6, and prostaglandin E2; insulin resistance and overactivation of insulin-like growth factor signaling; adipokine production; and oxidative stress in obese women are associated with the development of cancer [5]. Structural variants of genes associated with BC and obesity, including LEP, LEPR, PON1, FTO, and MC4R, are associated with a higher or lower risk of BC [5].
Genome-wide association studies (GWAS) have linked many single nucleotide polymorphisms (SNPs) with BC occurrence [6,7,8,9]. In our previous studies, potential relationships between the sequence variations of individual genes and BC were proposed. In a study of 11 SNPs of PTPN1, rs3787345, rs718050, rs3215684, and rs718049 were associated with a reduction in BC risk [10]. Several studies have identified the genomic region of PTPN1 as a quantitative trait locus (QTL) in obesity and diabetes mellitus [11,12,13]. XRCC5 and XRCC6 SNP genotyping revealed that XRCC5 rs16855458 was associated with BC, that XRCC6 rs2267437 was associated with ER-/PR- BC risk, and that there may be interactions with environmental factors [14]. However, current research has largely focused on the impact of single SNPs on disease, and potential SNP-SNP interactions remain less well studied. Most diseases, including cancers, follow a polygenic model, indicating that they may involve multiple genes or SNPs [9]. However, little is known about how they interact. Understanding this issue will help to characterize the biological mechanism of BC risk.
Differential network analysis provides information about how genes interact. Recent studies suggest that cancer occurrence and development are caused not only by gene mutations but also by abnormal gene regulation [15]. Thus, it is important to assess the impact of both single genes and gene–gene interactions on cancer onset and progression. Network analysis can effectively capture gene–gene interactions, and genetic data can be used to establish gene regulation networks that characterize the biological mechanisms of disease [16]. A recent study analyzed genetic and clinical data from gastric cancer patients using weighted gene co-expression network analysis (WGCNA) to explore new prognostic markers and therapeutic targets of gastric cancer [17]. Jubair et al. proposed a novel network-based method integrating a protein–protein interaction network with gene expression data to identify biomarkers for different BC subtypes and predict patients' survivability [18]. Another study constructed multi-omics markers associated with BC using high-dimensional embedding and a residual neural network [19]. To date, network analysis has relied on DNA methylation and RNA-seq data [17,18,19,20]. Meanwhile, the genetic effects of combinations of functionally related SNPs may affect genes in a synergistic manner, thereby increasing BC risk [21, 22]. Network analysis using SNP data can therefore provide insights into the mechanisms of disease.
The joint density-based non-parametric differential interaction network analysis and classification (JDINAC) method [23] was used to identify the differential gene interaction network between individuals in the BC and healthy control groups. Unlike previous studies, the gene interaction network results were based on SNP data, providing new insight into potential pathogenic BC genes.
The study population has been described previously [10]. In brief, this was a hospital-based case–control study of patients diagnosed with BC by pathology between April 2012 and April 2013 at the Second Hospital of Shandong University and 21 collaborating hospitals. Non-BC patients were selected as controls using 1:1 matching on age group (±3 years), hospital, and treatment time period (within 2 months). The subjects were 25 to 70 years of age. Patients with clinical or pathological diagnoses of recurrence or metastasis or other malignant tumor complications were excluded. The selection of cases and controls was carried out in strict accordance with the project's research design standards.
The data used for this study were obtained from a key project of the clinical discipline dataset belonging to the hospitals under the Ministry of Health of the People's Republic of China [24]. The present study collected data from face-to-face interviews and clinical breast and imaging examinations. The interview included questions on demographics, physiology, reproductive factors, chronic disease, and family history. Height, weight, and hip and waist circumference were also obtained, and body mass index (BMI) and the waist-hip ratio (WHR) were calculated. Clinical examination results were also collected, including visual examination, palpation, and related diagnostic tests such as breast ultrasound, mammography, and blood testing. Blood samples were collected using an EDTA vacuum collector.
RNA-seq expression and clinical data from BC patients, including 112 tumor tissue samples and matched normal tissue samples, were downloaded from The Cancer Genome Atlas (TCGA; https://cancergenome.nih.gov/). SNP data from 4,030 and 3,494 women with and without BC, respectively, were screened using UK Biobank BC data [25]. These data were used as validation datasets.
Genotyping and laboratory methods
Blood samples consisting of fasting venous whole blood were collected into EDTA anticoagulant tubes. These were inverted thoroughly to mix, kept in a 4 °C refrigerator, and placed upright in a −80 °C freezer after sedimentation. DNA was extracted using the Wizard Genomic DNA Purification Kit (A1120, Promega) and genotyped using the Sequenom MassARRAY SNP system (CapitalBio Technology, Beijing, China).
Differential network analysis using JDINAC method
A Chi-square test was used to analyze differences in demographic and BC-related factors between the case and control groups. BMI data from the cases and controls were presented as the mean ± standard deviation. First, the 101 SNPs were matched to their respective genes, and the mean SNP value for each gene was calculated for each sample. The differential gene interaction network was then obtained using the JDINAC method. The odds ratio (OR) and 95% confidence interval (95% CI) were also estimated for hub gene polymorphisms in the differential interaction network. Significance was defined as a p-value < 0.05. All statistical analyses were performed in R 4.1.0 (x64).
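As a concrete illustration of these two analysis steps (collapsing SNPs to gene-level scores, and estimating the OR with its 95% CI for a single SNP), a minimal Python sketch on synthetic data is given below. The SNP identifiers appear in the paper, but the snp_to_gene mapping shown, the additive 0/1/2 genotype coding, and all data values are illustrative assumptions rather than the authors' actual pipeline (which was run in R).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical samples x SNPs genotype matrix with additive 0/1/2 coding.
geno = pd.DataFrame(rng.integers(0, 3, size=(200, 4)),
                    columns=["rs1137101", "rs4655555", "rs2267437", "rs16855458"])
# Illustrative SNP-to-gene mapping (for this sketch only).
snp_to_gene = {"rs1137101": "LEPR", "rs4655555": "LEPR",
               "rs2267437": "XRCC6", "rs16855458": "XRCC5"}

# Gene-level risk score: per-sample mean of the SNP values mapped to each gene.
gene_score = geno.T.groupby(snp_to_gene).mean().T

# OR and 95% CI for one SNP from a logistic regression adjusted for covariates.
y = rng.integers(0, 2, size=200)                                # 1 = BC case
covars = pd.DataFrame({"BMI": rng.normal(24, 3, 200),
                       "menopause": rng.integers(0, 2, 200)})
X = sm.add_constant(pd.concat([geno[["rs1137101"]], covars], axis=1))
fit = sm.Logit(y, X).fit(disp=0)
odds_ratio = np.exp(fit.params["rs1137101"])
ci_low, ci_high = np.exp(fit.conf_int().loc["rs1137101"])       # 95% CI for the OR
```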
The JDINAC method assumes that the network-level difference between BC patients and healthy controls is the result of the collective effect of differential pairwise gene–gene interactions, which are characterized by the conditional joint density of two genes [23]. Formally, $Y_l$ $(l=1,2,\dots,n)$ is the binary response: $Y_l=1$ if the $l$th subject has BC, and $Y_l=0$ otherwise. Pr denotes the probability of a subject having BC, i.e., $\Pr=P(Y_l=1)$, and $S_i$ is the $i$th gene risk score. The JDINAC model, based on logistic regression, is then represented as:
$$\operatorname{logit}(\Pr)=\alpha_0+\sum_{t=1}^{T}\alpha_t Z_t+\sum_{i=1}^{p}\sum_{j>i}^{p}\beta_{ij}\ln\frac{f_{ij}^{1}(S_i,S_j)}{f_{ij}^{0}(S_i,S_j)},\quad \text{s.t.}\ \sum_{i=1}^{p}\sum_{j>i}^{p}\lvert\beta_{ij}\rvert\le c,\ c>0,$$
where $Z_t$ $(t=1,\dots,T)$ denotes covariates such as BMI and age, and $p$ is the number of genes. Here $f_{ij}^{k}$ $(k=0,1)$ denotes the conditional joint density of $S_i$ and $S_j$ in group $k$, i.e.,
$$(S_i,S_j)\mid Y=1\sim f_{ij}^{1},\qquad (S_i,S_j)\mid Y=0\sim f_{ij}^{0},$$
so the log density ratio represents the strength of the interaction between $S_i$ and $S_j$ in each group [23], and the coefficient $\beta_{ij}$ indicates the group-specific conditional dependency.
JDINAC adopts a multiple random-split algorithm to improve the accuracy and robustness of the results. A lasso penalty is added to the logistic regression to estimate the coefficients $\beta_{ij}$, and cross-validation is used to determine the best penalty parameter. The importance score for each pair $(S_i,S_j)$ is obtained by the following formula:
$$\omega_{ij}=\sum_{t=1}^{T} I\bigl(\hat{\beta}_{ij,t}\ne 0\bigr),\quad i,j=1,\dots,p,\ j>i,$$
where $\omega_{ij}$ is the importance score, $I(\cdot)$ is an indicator function, and $\hat{\beta}_{ij,t}$ $(t=1,\dots,T)$ is the estimate of the coefficient $\beta_{ij}$ in the $t$th split. The importance scores represent the differential dependency weight of each pair $(S_i,S_j)$ between the two groups [23]. The differential network is inferred by connecting pairs with high importance scores through their shared genes.
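To make the estimation procedure concrete, the following is a minimal Python sketch of a single JDINAC split on synthetic data. The published method is an R implementation; the KDE bandwidth choice, the lasso path, and covariate adjustment are simplified here, so this is a sketch of the core idea rather than the authors' code.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))       # hypothetical n x p matrix of gene risk scores
y = rng.integers(0, 2, size=200)    # 1 = BC case, 0 = control

def density_ratio_features(X_tr, y_tr, X_ev):
    """ln(f1/f0) feature for every gene pair (i, j), j > i, where f1 and f0 are
    bivariate KDEs of (S_i, S_j) fitted on the cases / controls of one half."""
    cases, controls = X_tr[y_tr == 1], X_tr[y_tr == 0]
    p = X_tr.shape[1]
    cols = []
    for i in range(p):
        for j in range(i + 1, p):
            f1 = gaussian_kde(cases[:, [i, j]].T)
            f0 = gaussian_kde(controls[:, [i, j]].T)
            pts = X_ev[:, [i, j]].T
            cols.append(np.log(f1(pts) + 1e-12) - np.log(f0(pts) + 1e-12))
    return np.column_stack(cols)

# One random split; JDINAC repeats this T times and tallies omega_ij,
# the number of splits in which beta_ij is estimated as nonzero.
idx = rng.permutation(len(y))
d_kde, d_fit = idx[: len(y) // 2], idx[len(y) // 2:]
Z = density_ratio_features(X[d_kde], y[d_kde], X[d_fit])
clf = LogisticRegressionCV(Cs=10, penalty="l1", solver="liblinear").fit(Z, y[d_fit])
votes = (clf.coef_.ravel() != 0).astype(int)  # this split's contribution to omega_ij
```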
Differential expression analysis and enrichment analysis
The edgeR package [26] was utilized to identify differentially expressed genes in TCGA breast cancer data to test the reliability of the JDINAC results. Multiplicity correction was performed by applying the Benjamini–Hochberg method on the p-values.
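edgeR itself is an R/Bioconductor package, so purely as a language-neutral illustration of the Benjamini–Hochberg step, the sketch below adjusts a vector of per-gene p-values (the values are made up):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.0004, 0.0130, 0.0410, 0.2500, 0.7800])  # hypothetical DE p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
# p_adj holds the BH-adjusted p-values; reject flags genes kept at FDR 0.05.
```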
To explore the biological functions of the identified interacting genes, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed with the R package "clusterProfiler" [27]. Only terms with a multiple-test adjusted p-value < 0.05 were considered significant.
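At its core, such an enrichment p-value is a one-sided hypergeometric (over-representation) test; a sketch with invented gene counts:

```python
from scipy.stats import hypergeom

M, n = 20000, 150  # background genes; genes annotated to the term (hypothetical)
N, k = 10, 4       # genes identified by JDINAC; overlap with the term
p_enrich = hypergeom.sf(k - 1, M, n, N)  # P(overlap >= k) under random draws
```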
Participant demographic and lifestyle characteristics
There were 1,916 subjects in the study: 953 in the BC group and 963 in the control group. There were significant differences in BMI and menopausal status between the two groups (p-value < 0.05) (Table 1). Women with BC had a higher BMI than healthy women (24.36 ± 3.46 vs. 24.01 ± 3.11), suggesting that obesity may be a risk factor for BC.
Table 1 Clinical characteristics of the study population
Differential network of gene interaction
Twenty genes that might be related to the pathogenesis of BC, together with 101 SNPs in these genes, were selected. The differential gene interaction network was estimated under four scenarios: no adjustment for covariates, adjustment for BMI, adjustment for menopause status (Fig. 1), and adjustment for BMI and menopause status simultaneously (see Additional file 1). The number of edges selected under the four scenarios was 18, 14, 19, and 16, respectively. The orange nodes in the figure represent the hub genes with at least four adjacent genes in the network. All scenarios shared three hub genes: LEP, LEPR, and XRCC6. Gene pairs were ranked by the importance scores derived from JDINAC, and the top ten pairs in the network with no covariate adjustment are summarized in Table 2. Among them, six pairs had evidence of interaction in the STRING database [28]. Additional data are shown in Additional files 2, 3, 4 and 5.
The differential interaction networks inferred by the joint density-based non-parametric differential interaction network analysis and classification (JDINAC). The hub genes are colored orange. (A) No adjustment for covariates; (B) adjustment for BMI; (C) adjustment for menopause status
Table 2 Top 10 gene interaction pairs identified by JDINAC with no covariate adjustment
Association between polymorphisms and BC risk
Next, the association between SNPs in the hub genes of the differential networks and BC risk was assessed (Table 3). Most SNPs were not significantly associated with BC. Rs1137101 (OR = 0.728, p-value = 0.002) and rs4655555 (OR = 0.825, p-value = 0.015) in LEPR were significantly associated with BC risk, while the LEP, XRCC6, and RETN polymorphisms were not. The functional consequences of the SNPs are also shown in Table 3. Rs4655555 is an intron variant. Rs1137101 is a missense and coding sequence variant reported as benign [29].
Table 3 The association of SNPs in hub genes with breast cancer (BC) adjusted for BMI and menopause status
Identification of the interaction network
RNA-seq expression and clinical data from BC patients were obtained from TCGA to verify the identified hub genes. The validation dataset included 112 subjects for whom both tumor and matched normal samples were available. All genes available in the TCGA dataset were analyzed for differences between tumor and normal samples, and the 10 common genes in Fig. 1 were screened out from the results. LEP, LEPR, and XRCC6 expression differed significantly between the two groups (Table 4). RETN was not differentially expressed in the TCGA data.
Table 4 The validation results of the 10 identical genes in Fig. 1 using TCGA data
Genetic data from 4,030 BC cases and 3,494 controls in the UK Biobank were used to verify the eight identical edges of the three networks in Fig. 1 using logistic regression. The data were randomly divided into two parts, the kernel density functions of the BC and control groups were estimated, and logistic regression was used to assess the corresponding p-values of the eight edges (Table 5). The first four edges were significantly different (p-value < 0.05). The genes connected by these four edges were the identified hub genes, indicating that the interactions between hub genes in this network are more significant than those between other genes.
Table 5 The validation results of the 8 identical edges in Fig. 1 using UK Biobank data
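A single-edge version of the density-ratio construction, followed by an ordinary (unpenalized) logistic regression, yields such a per-edge p-value. Below is a sketch on synthetic data, assuming statsmodels for the Wald test; the split sizes and all data values are illustrative, not the actual UK Biobank procedure.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
S = rng.normal(size=(400, 2))        # scores of one gene pair (hypothetical)
y = rng.integers(0, 2, size=400)
half = len(y) // 2

# Estimate the two group densities on the first half of the data...
f1 = gaussian_kde(S[:half][y[:half] == 1].T)
f0 = gaussian_kde(S[:half][y[:half] == 0].T)

# ...then test the log density ratio as a predictor on the second half.
z = np.log(f1(S[half:].T) + 1e-12) - np.log(f0(S[half:].T) + 1e-12)
fit = sm.Logit(y[half:], sm.add_constant(z)).fit(disp=0)
p_edge = fit.pvalues[1]              # Wald p-value for this edge
```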
GO analysis showed that the biological processes of the identified genes were mainly related to glucose homeostasis and carbohydrate homeostasis (Fig. 2). KEGG pathway analysis showed that these genes were mainly enriched in the adenosine monophosphate-activated protein kinase (AMPK) signaling pathway, adipocytokine signaling, and non-alcoholic fatty liver disease (Fig. 2).
GO function and KEGG pathway enrichment analysis of the genes identified by JDINAC. (A) Dot plots of the top ten enriched GO BP, CC, and MF terms for the identified genes; (B) dot plots of the top ten enriched KEGG pathways. BP, Biological Processes; CC, Cell Component; MF, Molecular Function
This study sought to identify potential pathogenic genes associated with BC by constructing a BC gene interaction network. It extended prior studies [14] by assessing not only the effect of single genes on BC but also the gene interaction network, providing new insight into how genetic factors impact complex human diseases. The results suggest that BMI and menopausal status may be risk factors for BC. The gene interaction network obtained with the JDINAC method showed that LEPR, LEP, XRCC6, and RETN interact significantly differently in BC patients than in healthy women and are associated with higher BC risk. However, analysis of hub gene polymorphisms indicated that only LEPR rs1137101 and rs4655555 were strongly linked to BC. Independent datasets and bioinformatics tools were used to verify the hub genes and the edges, increasing the reliability of the results. The expression of LEPR, LEP, and XRCC6 was significantly associated with BC in the TCGA dataset, and UK Biobank SNP data validated their interactions in BC.
GO enrichment analysis showed that the interacting genes were closely related to cellular energy and metabolism, including glucose homeostasis, carbohydrate homeostasis, muscle cell proliferation, and small-molecule regulation. The KEGG results were consistent with the GO analysis. Studies have shown that AMPK is the main cellular energy sensor [30]. Reduced AMPK activity is associated with altered cellular metabolic processes that drive BC tumor growth and progression. When activated, AMPK responds to adenosine triphosphate (ATP) depletion, glucose starvation, and metabolic stress [31]. Obesity-related factors modulate metabolic pathways in BC, providing a molecular link between obesity and BC.
Many studies have shown that LEP and LEPR play an important role in obesity. LEP is a hormone secreted by adipose tissue, which regulates eating and energy consumption through the hypothalamic region of the brain [32]. Circulating leptin binds to LEPR, activating Janus kinase 2 (JAK2), phosphorylating three tyrosine residues in LEPR, and inducing phosphorylation of STAT transcription factors, STAT5 and STAT3, which are involved in the development of BC [32]. Leptin may stimulate the expression of estrogen by increasing aromatase expression, which is also involved in BC development [33]. The LEPR rs1137101 polymorphism results from a nonconservative A to G substitution at codon 223, reducing leptin binding and impairing signaling [34]. While the effect of LEPR rs4655555 on the development of BC has not yet been reported, one study has shown that rs4655555 is significantly correlated with plasma soluble leptin receptor levels and may inform diabetes prognosis [35]. The findings from the current study further support the evidence that LEP and LEPR play an important role in BC pathogenesis.
The impact of RETN on BC has been reported previously. RETN is highly expressed in BC tissues and may serve as a biomarker for disease stage and the degree of inflammation [36, 37]. Low-grade systemic inflammation is one of the characteristics of obesity [38], and RETN has been shown to exert pro-inflammatory properties by upregulating pro-inflammatory cytokines [39] through the NFκB signaling pathway [40], leading to inflammation and tumorigenesis. Several studies have also linked XRCC6 with an increased risk of BC [14, 41, 42]. Interaction between XRCC6 genetic polymorphisms and reproductive risk factors is thought by some researchers to contribute to estrogen exposure, which results in double-strand breaks in BRCA1 and BRCA2 DNA and induces BC [41]. XRCC6 is also involved in the production of proinflammatory cytokines induced by lipopolysaccharide (LPS) in human macrophages and monocytes. Proinflammatory cytokine production is, in turn, associated with obesity and BC [42].
Recent studies have used gene expression data to explore the pathogenesis of BC [18] and other diseases [17, 20]. However, no genetic interaction network had previously been constructed to identify potential BC pathology genes using SNP data. As discussed previously, single genetic variants often explain only a small fraction of phenotypic variation, the problem of missing heritability [43]. Gene–gene interactions are proposed as a potential source of this problem [44]. The current study built gene interaction networks based on SNP data to help explain the etiology of complex human traits. While high-throughput SNP genotyping methods have been developed, the computational and statistical challenges of simultaneously analyzing large SNP datasets remain [9]. The method used here provides ideas for handling SNP data. In addition, because BC incidence is affected by demography [45, 46], the gene network was constructed adjusting for confounding factors such as BMI and menopause status, making the results more reliable. This study does have some limitations, however. Only the interactions between paired genes were assessed; for BC, the relationships between genes may be more complicated. Future studies should assess more complex interactions associated with this disease.
Potential pathogenic BC genes were investigated by constructing a gene interaction network. LEP, LEPR, XRCC6, and RETN showed significant interactions in BC, and LEPR polymorphisms may also be associated with BC development. Gene network analysis can provide more detailed information about the pathogenesis of complex diseases.
The datasets analyzed during the current study are not publicly available due to privacy but are available from the corresponding author on reasonable request.
BC: Breast cancer
LEP: Leptin
LEPR: Leptin receptor
XRCC6: X-ray repair cross complementing 6
RETN: Resistin
JDINAC: Joint density-based non-parametric differential interaction network analysis and classification
SNP: Single nucleotide polymorphism
TCGA: The Cancer Genome Atlas
BMI: Body mass index
IARC: International Agency for Research on Cancer
GWAS: Genome-wide association study
WGCNA: Weighted gene co-expression network analysis
WHR: Waist-hip ratio
JAK2: Janus kinase 2
LPS: Lipopolysaccharide
Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021;71(3):209–49.
Global Burden of Disease Cancer Collaboration; Fitzmaurice C, Akinyemiju T, Al Lami F, Alam T, Alizadeh-Navaei R, et al. Global, regional, and national cancer incidence, mortality, years of life lost, years lived with disability, and disability-adjusted life-years for 29 cancer groups, 1990 to 2016: a systematic analysis for the global burden of disease study. JAMA Oncol. 2018;4(11):1553–68.
Keum N, Greenwood DC, Lee DH, Kim R, Aune D, Ju W, et al. Adult weight gain and adiposity-related cancers: a dose-response meta-analysis of prospective observational studies. J Natl Cancer Inst. 2015;107(2):djv088.
Yoon YS, Kwon AR, Lee YK, Oh SW. Circulating adipokines and risk of obesity related cancers: A systematic review and meta-analysis. Obes Res Clin Pract. 2019;13(4):329–39.
Simone V, D'avenia M, Argentiero A, Felici C, Rizzo FM, De Pergola G, et al. Obesity and breast cancer: molecular interconnections and potential clinical applications. Oncologist. 2016;21(4):404–17.
Kaklamani V, Yi N, Sadim M, Siziopikou K, Zhang K, Xu Y, et al. The role of the fat mass and obesity associated gene (FTO) in breast cancer risk. BMC Med Genet. 2011;12(1):1–10.
Gallicchio L, McSorley MA, Newschaffer CJ, Huang HY, Thuita LW, Hoffman SC, et al. Body mass, polymorphisms in obesity-related genes, and the risk of developing breast cancer among women with benign breast disease. Cancer Detect Prev. 2007;31(2):95–101.
Sayad S, Dastgheib SA, Farbod M, Asadian F, Karimi-Zarchi M, Salari S, et al. Association of PON1, LEP and LEPR Polymorphisms with Susceptibility to Breast Cancer: A Meta-Analysis. Asian Pac J Cancer Prev: APJCP. 2021;22(8):2323.
Chuang LY, Chang HW, Lin MC, Yang CH. Chaotic particle swarm optimization for detecting SNP–SNP interactions for CXCL12-related genes in breast cancer prevention. Eur J Cancer Prev. 2012;21(4):336–42.
Huang S, Liu L, Xiang Y, Wang F, Yu L, Zhou F, et al. Association of PTPN1 polymorphisms with breast cancer risk: A case-control study in Chinese females. J Cell Biochem. 2019;120(7):12039–50.
Ghosh S, Watanabe RM, Hauser ER, Valle T, Magnuson VL, Erdos MR, et al. Type 2 diabetes: evidence for linkage on chromosome 20 in 716 Finnish affected sib pairs. Proc Natl Acad Sci. 1999;96(5):2198–203.
Lee JH, Reed DR, Li WD, Xu W, Joo EJ, Kilker RL, et al. Genome scan for human obesity and linkage to markers in 20q13. Am J Hum Genet. 1999;64(1):196–209.
Soro A, Pajukanta P, Lilja HE, Ylitalo K, Hiekkalinna T, Perola M, et al. Genome scans provide evidence for low-HDL-C loci on chromosomes 8q23, 16q24. 1–24.2, and 20q13. 11 in Finnish families. Am J Hum Genet. 2002;70(5):1333–40.
Yu LX, Liu LY, Xiang YJ, Wang F, Zhou F, Huang SY, et al. XRCC5/6 polymorphisms and their interactions with smoking, alcohol consumption, and sleep satisfaction in breast cancer risk: A Chinese multi-center study. Cancer Med. 2021;10(8):2752–62.
Schadt EE. Molecular networks as sensors and drivers of common human diseases. Nature. 2009;461(7261):218–23.
Gong BS, Zhang QP, Zhang GM, Zhang SJ, Zhang W, Lv HC, et al. Single-nucleotide polymorphism-gene intermixed networking reveals co-linkers connected to multiple gene expression phenotypes. In: BMC proceedings. BioMed Central. 2007;1(1):1–7.
Chen J, Wang X, Hu B, He Y, Qian X, Wang W. Candidate genes in gastric cancer identified by constructing a weighted gene co-expression network. PeerJ. 2018;6: e4692.
Jubair S, Alkhateeb A, Tabl AA, Rueda L, Ngom A. A novel approach to identify subtype-specific network biomarkers of breast cancer survivability. Network Model Anal Health Inform Bioinform. 2020;9(1):1–12.
Zhou L, Rueda M, Alkhateeb A. Classification of breast cancer Nottingham prognostic index using high-dimensional embedding and residual neural network. Cancers. 2022;14(4):934.
Chen H, He Y, Ji J, Shi Y. A machine learning method for identifying critical interactions between gene pairs in Alzheimer's disease prediction. Front Neurol. 2019;10:1162.
Onay VÜ, Briollais L, Knight JA, Shi E, Wang Y, Wells S, et al. SNP-SNP interactions in breast cancer susceptibility. BMC Cancer. 2006;6(1):1–16.
Sapkota Y, Mackey JR, Lai R, Franco-Villalobos C, Lupichuk S, Robson PJ, et al. Assessing SNP-SNP interactions among DNA repair, modification and metabolism related pathway genes in breast cancer susceptibility. PLoS ONE. 2013;8(6): e64896.
Ji J, He D, Feng Y, He Y, Xue F, Xie L. JDINAC: joint density-based non-parametric differential interaction network analysis and classification using high-dimensional sparse omics data. Bioinformatics. 2017;33(19):3080–7.
Liu LY, Wang F, Cui SD, Tian FG, Fan ZM, Geng CZ, et al. A case-control study on risk factors of breast cancer in Han Chinese women. Oncotarget. 2017;8(57):97217.
Ahmed M, Mulugeta A, Lee SH, Mäkinen VP, Boyle T, Hyppönen E. Adiposity and cancer: a Mendelian randomization analysis in the UK biobank. Int J Obes. 2021;45(12):2657–65.
Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010;26(1):139–40.
Yu G, Wang LG, Han Y, He QY. clusterProfiler: an R package for comparing biological themes among gene clusters. Omics. 2012;16(5):284–7.
von Mering C, Huynen M, Jaeggi D, Schmidt S, Bork P, Snel B. STRING: a database of predicted functional associations between proteins. Nucleic Acids Res. 2003;31(1):258–61.
Considine RV, Caro JF, Considine EL, Williams CJ, Hyde TM. Identification of Incidental Sequence Polymorphisms and Absence of the db/db Mouse and fa/fa Rat Mutations. Diabetes. 1996;45(7):992–4.
López M. Hypothalamic AMPK and energy balance. Eur J Clin Invest. 2018;48(9): e12996.
Ponnusamy L, Natarajan SR, Thangaraj K, Manoharan R. Therapeutic aspects of AMPK in breast cancer: Progress, challenges, and future directions. Biochimica et Biophysica Acta (BBA)-Reviews on Cancer. 2020;1874(1):188379.
Bains V, Kaur H, Badaruddoza B. Association analysis of polymorphisms in LEP (rs7799039 and rs2167270) and LEPR (rs1137101) gene towards the development of type 2 diabetes in North Indian Punjabi population. Gene. 2020;754: 144846.
Hosney M, Sabet S, El-Shinawi M, Gaafar KM, Mohamed MM. Leptin is overexpressed in the tumor microenvironment of obese patients with estrogen receptor positive breast cancer. Exp Ther Med. 2017;13(5):2235–46.
Illangasekera Y, Kumarasiri P, Fernando D, Dalton C. Association of the leptin receptor Q223R (rs1137101) polymorphism with obesity measures in Sri Lankans. BMC Res Notes. 2020;13(1):1–4.
Sun Q, Cornelis MC, Kraft P, Qi L, van Dam RM, Girman CJ, et al. Genome-wide association study identifies polymorphisms in LEPR as determinants of plasma soluble leptin receptor levels. Hum Mol Genet. 2010;19(9):1846–55.
Lee YC, Chen YJ, Wu CC, Lo S, Hou MF, Yuan SSF. Resistin expression in breast cancer tissue as a marker of prognosis and hormone therapy stratification. Gynecol Oncol. 2012;125(3):742–50.
Dalamaga M, Sotiropoulos G, Karmaniolas K, Pelekanos N, Papadavid E, Lekka A. Serum resistin: a biomarker of breast cancer in postmenopausal women? Association with clinicopathological characteristics, tumor markers, inflammatory and metabolic parameters. Clin Biochem. 2013;46(7–8):584–90.
Fantuzzi G. Adipose tissue, adipokines, and inflammation. J Allergy clin immunol. 2005;115(5):911–9.
Bokarewa M, Nagaev I, Dahlberg L, Smith U, Tarkowski A. Resistin, an Adipokine with Potent Proinflammatory Properties. J Immunol. 2005;174(9):5789.
Filková M, Haluzík M, Gay S, Šenolt L. The role of resistin as a regulator of inflammation: Implications for various human pathologies. Clin Immunol. 2009;133(2):157–70.
Fu YP, Yu JC, Cheng TC, Lou MA, Hsu GC, Wu CY, et al. Breast cancer risk associated with genotypic polymorphism of the nonhomologous end-joining genes: a multigenic study on cancer susceptibility. Can Res. 2003;63(10):2440–6.
Sun H, Li Q, Yin G, Ding X, Xie J. Ku70 and Ku80 participate in LPS-induced pro-inflammatory cytokines production in human macrophages and monocytes. Aging (Albany NY). 2020;12(20):20432.
Maher B. Personal genomes: The case of the missing heritability. Nature. 2008;456(7218):18–21.
Yang S, Liu Y, Jiang N, Chen J, Leach L, Luo Z, et al. Genome-wide eQTLs and heritability for gene expression traits in unrelated individuals. BMC Genomics. 2014;15(1):1–12.
Suzuki Y, Tsunoda H, Kimura T, Yamauchi H. BMI change and abdominal circumference are risk factors for breast cancer, even in Asian women. Breast Cancer Res Treat. 2017;166(3):919–25.
Li T, Tang L, Gandomkar Z, Heard R, Mello-Thoms C, Shao Z, et al. Mammographic density and other risk factors for breast cancer among women in China. Breast J. 2018;24(3):426–8.
This work was funded by General program of China Postdoctoral Science Foundation (2021M691911), General programs of Natural Science Foundation of Shandong Province (ZR2021MH243), National Natural Science Foundation of China (81903410), National Statistical Scientific Research Project (2022LY031), and the Young Scholars Program of Shandong University.
Liyuan Liu and Wenli Zhai are the co-first authors.
Department of Breast Surgery, The Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, 250033, China
Liyuan Liu, Fei Wang, Lixiang Yu, Fei Zhou, Yujuan Xiang, Shuya Huang, Chao Zheng & Zhigang Yu
School of Mathematics, Shandong University, Jinan, 250100, China
Liyuan Liu
Institute for Financial Studies, Shandong University, Jinan, 250100, China
Wenli Zhai, Yong He & Jiadong Ji
Institute of Translational Medicine of Breast Disease Prevention and Treatment, Shandong University, Jinan, 250100, China
Fei Wang, Lixiang Yu, Fei Zhou, Yujuan Xiang, Shuya Huang, Chao Zheng & Zhigang Yu
Department of Biostatistics, School of Public Health, Cheeloo College of Medicine, Shandong University, Jinan, 250012, China
Zhongshang Yuan
Conceptualization, J.J. and Z.G.Y.; Writing—original draft, W.Z. and L.L.; Writing—review & editing, J.J., Z.S.Y., Z.G.Y. and Y.H.; Formal analysis, W.Z. and J.J.; Resources, L.L.; Data curation, L.L., F.W., L.Y., F.Z., Y.X., S.H. and C.Z. All authors read and approved the final manuscript.
Correspondence to Zhigang Yu or Jiadong Ji.
The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of the Second Hospital of Shandong University (No. 2010004, KYLL-2021(KJ)P-0136). Informed consent was obtained from all subjects involved in the study.
Figure S1. The differential interaction network inferred by JDINAC after adjusting for BMI and menopause status.
Table S1. Top 10 gene interaction pairs identified by JDINAC after adjusting for BMI.
Table S2. Top 10 gene interaction pairs identified by JDINAC after adjusting for menopausal status.
Table S3. Top 10 gene interaction pairs identified by JDINAC after adjusting for BMI and menopause status.
Table S4. The association of IFI30 polymorphisms with BC adjusted for BMI and menopause status.
Liu, L., Zhai, W., Wang, F. et al. Using machine learning to identify gene interaction networks associated with breast cancer. BMC Cancer 22, 1070 (2022). https://doi.org/10.1186/s12885-022-10170-w
Gene interaction network
Differential network analysis
Fourier Analysis with Applications
Author: Adrian Constantin. Publish On: 2016-06-02
A two-volume advanced text for graduate students. This first volume covers the theory of Fourier analysis.
Categories: Mathematics
Fourier Analysis and Approximation
Author: P.L. Butzer. Publish On: 2012-12-06
Approximately a half of this first volume deals with the theories of Fourier series and of Fourier integrals from a transform point of view.
Publisher: Birkhäuser
At the international conference on 'Harmonic Analysis and Integral Transforms', conducted by one of the authors at the Mathematical Research Institute in Oberwolfach (Black Forest) in August 1965, it was felt that there was a real need for a book on Fourier analysis stressing (i) parallel treatment of Fourier series and Fourier transforms from a transform point of view, (ii) treatment of Fourier transforms in Lp(Rn)-space not only for p = 1 and p = 2, (iii) classical solution of partial differential equations with completely rigorous proofs, (iv) theory of singular integrals of convolution type, (v) applications to approximation theory including saturation theory, (vi) multiplier theory, (vii) Hilbert transforms, Riesz fractional integrals, Bessel potentials, (viii) Fourier transform methods on locally compact groups. This study aims to consider these aspects, presenting a systematic treatment of Fourier analysis on the circle as well as on the infinite line, and of those areas of approximation theory which are in some way or other related thereto. A second volume is in preparation which goes beyond the one-dimensional theory presented here to cover the subject for functions of several variables. Approximately a half of this first volume deals with the theories of Fourier series and of Fourier integrals from a transform point of view.
Fourier Analysis and Approximation Volume 1 One Dimensional Theory
Author: P.L. Butzer. Publish On: 1971
ISBN: OCLC:985855414
Modern Fourier Analysis
Author: Loukas Grafakos. Publish On: 2010-11-19
These volumes are mainly addressed to graduate students who wish to study Fourier analysis. This second volume is intended to serve as a text for a second-semester course in the subject. It is designed to be a continuation of the first volume.
The great response to the publication of the book Classical and Modern Fourier Analysis has been very gratifying. I am delighted that Springer has offered to publish the second edition of this book in two volumes: Classical Fourier Analysis, 2nd Edition, and Modern Fourier Analysis, 2nd Edition. These volumes are mainly addressed to graduate students who wish to study Fourier analysis. This second volume is intended to serve as a text for a second-semester course in the subject. It is designed to be a continuation of the first volume. Chapters 1–5 in the first volume contain Lebesgue spaces, Lorentz spaces and interpolation, maximal functions, Fourier transforms and distributions, an introduction to Fourier analysis on the n-torus, singular integrals of convolution type, and Littlewood–Paley theory. Armed with the knowledge of this material, in this volume, the reader encounters more advanced topics in Fourier analysis whose development has led to important theorems. These theorems are proved in great detail and their proofs are organized to present the flow of ideas. The exercises at the end of each section enrich the material of the corresponding section and provide an opportunity to develop additional intuition and deeper comprehension. The historical notes in each chapter are intended to provide an account of past research but also to suggest directions for further investigation. The auxiliary results referred to in the appendix can be located in the first volume.
Classical Fourier Analysis
This third edition includes new Sections 3.5, 4.4, 4.5 as well as a new chapter on "Weighted Inequalities," which has been moved from GTM 250, 2nd Edition. Appendices I and B.9 are also new to this edition.
The main goal of this text is to present the theoretical foundation of the field of Fourier analysis on Euclidean spaces. It covers classical topics such as interpolation, Fourier series, the Fourier transform, maximal functions, singular integrals, and Littlewood–Paley theory. The primary readership is intended to be graduate students in mathematics with the prerequisite including satisfactory completion of courses in real and complex variables. The coverage of topics and exposition style are designed to leave no gaps in understanding and stimulate further study. This third edition includes new Sections 3.5, 4.4, 4.5 as well as a new chapter on "Weighted Inequalities," which has been moved from GTM 250, 2nd Edition. Appendices I and B.9 are also new to this edition. Countless corrections and improvements have been made to the material from the second edition. Additions and improvements include: more examples and applications, new and more relevant hints for the existing exercises, new exercises, and improved references.
Fourier Analysis: An Introduction
Author: Elias M. Stein. Publish On: 2003-04-06
This first volume, a three-part introduction to the subject, is intended for students with a beginning knowledge of mathematical analysis who are motivated to discover the ideas that shape Fourier analysis.
This first volume, a three-part introduction to the subject, is intended for students with a beginning knowledge of mathematical analysis who are motivated to discover the ideas that shape Fourier analysis. It begins with the simple conviction that Fourier arrived at in the early nineteenth century when studying problems in the physical sciences--that an arbitrary function can be written as an infinite sum of the most basic trigonometric functions. The first part implements this idea in terms of notions of convergence and summability of Fourier series, while highlighting applications such as the isoperimetric inequality and equidistribution. The second part deals with the Fourier transform and its applications to classical partial differential equations and the Radon transform; a clear introduction to the subject serves to avoid technical difficulties. The book closes with Fourier theory for finite abelian groups, which is applied to prime numbers in arithmetic progression. In organizing their exposition, the authors have carefully balanced an emphasis on key conceptual insights against the need to provide the technical underpinnings of rigorous analysis. Students of mathematics, physics, engineering and other sciences will find the theory and applications covered in this volume to be of real interest. The Princeton Lectures in Analysis represents a sustained effort to introduce the core areas of mathematical analysis while also illustrating the organic unity between them. Numerous examples and applications throughout its four planned volumes, of which Fourier Analysis is the first, highlight the far-reaching consequences of certain ideas in analysis to other fields of mathematics and a variety of sciences. Stein and Shakarchi move from an introduction addressing Fourier series and integrals to in-depth considerations of complex analysis; measure and integration theory, and Hilbert spaces; and, finally, further topics such as functional analysis, distributions and elements of probability theory.
Fourier Analysis
Author: Javier Duoandikoetxea Zuazo. Publish On: 2001
Fourier analysis encompasses a variety of perspectives and techniques. This volume presents the real variable methods of Fourier analysis introduced by Calderon and Zygmund.
Publisher: American Mathematical Soc.
Fourier analysis encompasses a variety of perspectives and techniques. This volume presents the real variable methods of Fourier analysis introduced by Calderon and Zygmund. The text was born from a graduate course taught at the Universidad Autonoma de Madrid and incorporates lecture notes from a course taught by Jose Luis Rubio de Francia at the same university. Motivated by the study of Fourier series and integrals, classical topics are introduced, such as the Hardy-Littlewood maximal function and the Hilbert transform. The remaining portions of the text are devoted to the study of singular integral operators and multipliers. Both classical aspects of the theory and more recent developments, such as weighted inequalities, $H^1$, $BMO$ spaces, and the $T1$ theorem, are discussed. Chapter 1 presents a review of Fourier series and integrals; Chapters 2 and 3 introduce two operators that are basic to the field: the Hardy-Littlewood maximal function and the Hilbert transform. Chapters 4 and 5 discuss singular integrals, including modern generalizations. Chapter 6 studies the relationship between $H^1$, $BMO$, and singular integrals; and Chapter 7 presents the elementary theory of weighted norm inequalities. Chapter 8 discusses Littlewood-Paley theory, which had developments that resulted in a number of applications. The final chapter concludes with an important result, the $T1$ theorem, which has been of crucial importance in the field. This volume has been updated and translated from the Spanish edition that was published in 1995. Minor changes have been made to the core of the book; however, the sections 'Notes and Further Results' have been considerably expanded and incorporate new topics, results, and references. It is geared toward graduate students seeking a concise introduction to the main aspects of the classical theory of singular operators and multipliers. Prerequisites include basic knowledge in Lebesgue integrals and functional analysis.
Fourier Series: A Modern Introduction, Volume 1
Author: R.E. Edwards. Publish On: 2012-12-06
The principal aim in writing this book has been to provide an introduction, barely more, to some aspects of Fourier series and related topics in which a liberal use is made of modern techniques and which guides the reader toward some of the problems of current interest in harmonic analysis generally. The use of modern concepts and techniques is, in fact, as widespread as is deemed to be compatible with the desire that the book shall be useful to senior undergraduates and beginning graduate students, for whom it may perhaps serve as preparation for Rudin's Harmonic Analysis on Groups and the promised second volume of Hewitt and Ross's Abstract Harmonic Analysis. The emphasis on modern techniques and outlook has affected not only the type of arguments favored, but also to a considerable extent the choice of material. Above all, it has led to a minimal treatment of pointwise convergence and summability: as is argued in Chapter 1, Fourier series are not necessarily seen in their best or most natural role through pointwise-tinted spectacles. Moreover, the famous treatises by Zygmund and by Bary on trigonometric series cover these aspects in great detail, while leaving some gaps in the presentation of the modern viewpoint; the same is true of the more elementary account given by Tolstov. Likewise, and again for reasons discussed in Chapter 1, trigonometric series in general form no part of the program attempted.
From Fourier Analysis and Number Theory to Radon Transforms and Geometry
Author: Hershel M. Farkas. Publish On: 2012-09-18
A memorial conference for Leon Ehrenpreis was held at Temple University, November 15-16, 2010. In the spirit of Ehrenpreis's contribution to mathematics, the papers in this volume, written by prominent mathematicians, represent the wide breadth of subjects that Ehrenpreis traversed in his career, including partial differential equations, combinatorics, number theory, complex analysis and a bit of applied mathematics. With the exception of one survey article, the papers in this volume are all new results in the various fields in which Ehrenpreis worked. There are papers in pure analysis, papers in number theory, papers in what may be called applied mathematics such as population biology and parallel refractors, and papers in partial differential equations. The mature mathematician will find new mathematics and the advanced graduate student will find many new ideas to explore. A biographical sketch of Leon Ehrenpreis by his daughter, a professional journalist, enhances the memorial tribute and gives the reader a glimpse into the life and career of a great mathematician.
Introduction to Fourier Analysis and Wavelets
Author: Mark A. Pinsky. Publish On: 2002
This book provides a concrete introduction to a number of topics in harmonic analysis, accessible at the early graduate level or, in some cases, at an upper undergraduate level.
This book provides a concrete introduction to a number of topics in harmonic analysis, accessible at the early graduate level or, in some cases, at an upper undergraduate level. Necessary prerequisites to using the text are rudiments of the Lebesgue measure and integration on the real line. It begins with a thorough treatment of Fourier series on the circle and their applications to approximation theory, probability, and plane geometry (the isoperimetric theorem). Frequently, more than one proof is offered for a given theorem to illustrate the multiplicity of approaches. The second chapter treats the Fourier transform on Euclidean spaces, especially the author's results in the three-dimensional piecewise smooth case, which is distinct from the classical Gibbs-Wilbraham phenomenon of one-dimensional Fourier analysis. The Poisson summation formula treated in Chapter 3 provides an elegant connection between Fourier series on the circle and Fourier transforms on the real line, culminating in Landau's asymptotic formulas for lattice points on a large sphere. Much of modern harmonic analysis is concerned with the behavior of various linear operators on the Lebesgue spaces $L^p(\mathbb{R}^n)$. Chapter 4 gives a gentle introduction to these results, using the Riesz-Thorin theorem and the Marcinkiewicz interpolation formula. One of the long-time users of Fourier analysis is probability theory. In Chapter 5 the central limit theorem, iterated log theorem, and Berry-Esseen theorems are developed using the suitable Fourier-analytic tools. The final chapter furnishes a gentle introduction to wavelet theory, depending only on the $L_2$ theory of the Fourier transform (the Plancherel theorem). The basic notions of scale and location parameters demonstrate the flexibility of the wavelet approach to harmonic analysis. The text contains numerous examples and more than 200 exercises, each located in close proximity to the related theoretical material.
Fourier Analysis in Probability Theory
Author: Tatsuo Kawata. Publish On: 2014-06-17
This book will be of value to mathematicians, engineers, teachers, and students.
Fourier Analysis in Probability Theory provides useful results from the theories of Fourier series, Fourier transforms, Laplace transforms, and other related studies. This 14-chapter work highlights the clarification of the interactions and analogies among these theories. Chapters 1 to 8 present the elements of classical Fourier analysis in the context of their applications to probability theory. Chapters 9 to 14 are devoted to basic results from the theory of characteristic functions of probability distributions, the convergence of distribution functions in terms of characteristic functions, and series of independent random variables. This book will be of value to mathematicians, engineers, teachers, and students.
Fourier Analysis
Author: Eric Stade. Publish On: 2011-10-07
A reader-friendly, systematic introduction to Fourier analysis. Rich in both theory and application, Fourier Analysis presents a unique and thorough approach to a key topic in advanced calculus. This pioneering resource tells the full story of Fourier analysis, including its history and its impact on the development of modern mathematical analysis, and also discusses essential concepts and today's applications. Written at a rigorous level, yet in an engaging style that does not dilute the material, Fourier Analysis brings two profound aspects of the discipline to the forefront: the wealth of applications of Fourier analysis in the natural sciences and the enormous impact Fourier analysis has had on the development of mathematics as a whole. Systematic and comprehensive, the book: Presents material using a cause-and-effect approach, illustrating where ideas originated and what necessitated them; Includes material on wavelets, Lebesgue integration, L2 spaces, and related concepts; Conveys information in a lucid, readable style, inspiring further reading and research on the subject; Provides exercises at the end of each section, as well as illustrations and worked examples throughout the text. Based upon the principle that theory and practice are fundamentally linked, Fourier Analysis is the ideal text and reference for students in mathematics, engineering, and physics, as well as scientists and technicians in a broad range of disciplines who use Fourier analysis in real-world situations.
Author: T. W. Körner
Publish On: 1989-11-09
Ranging from number theory, numerical analysis, control theory and statistics, to earth science, astronomy and electrical engineering, the techniques and results of Fourier analysis and applications are displayed in perspective.
Author: T. W. Körner
Numerical Fourier Analysis
Author: Gerlind Plonka
Publish On: 2019-02-05
Author: Gerlind Plonka
This book offers a unified presentation of Fourier theory and corresponding algorithms emerging from new developments in function approximation using Fourier methods. It starts with a detailed discussion of classical Fourier theory to enable readers to grasp the construction and analysis of advanced fast Fourier algorithms introduced in the second part, such as nonequispaced and sparse FFTs in higher dimensions. Lastly, it contains a selection of numerical applications, including recent research results on nonlinear function approximation by exponential sums. The code of most of the presented algorithms is available in the authors' public domain software packages. Students and researchers alike benefit from this unified presentation of Fourier theory and corresponding algorithms.
Excursions in Harmonic Analysis Volume 1
Author: Travis D Andrews
Publish On: 2013-01-04
Author: Travis D Andrews
The Norbert Wiener Center for Harmonic Analysis and Applications provides a state-of-the-art research venue for the broad emerging area of mathematical engineering in the context of harmonic analysis. This two-volume set consists of contributions from speakers at the February Fourier Talks (FFT) from 2006-2011. The FFT are organized by the Norbert Wiener Center in the Department of Mathematics at the University of Maryland, College Park. These volumes span a large spectrum of harmonic analysis and its applications. They are divided into the following parts: Volume I · Sampling Theory · Remote Sensing · Mathematics of Data Processing · Applications of Data Processing Volume II · Measure Theory · Filtering · Operator Theory · Biomathematics Each part provides state-of-the-art results, with contributions from an impressive array of mathematicians, engineers, and scientists in academia, industry, and government. Excursions in Harmonic Analysis: The February Fourier Talks at the Norbert Wiener Center is an excellent reference for graduate students, researchers, and professionals in pure and applied mathematics, engineering, and physics.
Author: Paul Butzer
Publish On: 2012-05-03
Author: Paul Butzer
Stochastic Models Information Theory and Lie Groups Volume 1
Author: Gregory S. Chirikjian
Publish On: 2009-09-02
Author: Gregory S. Chirikjian
This unique two-volume set presents the subjects of stochastic processes, information theory, and Lie groups in a unified setting, thereby building bridges between fields that are rarely studied by the same people. Unlike the many excellent formal treatments available for each of these subjects individually, the emphasis in both of these volumes is on the use of stochastic, geometric, and group-theoretic concepts in the modeling of physical phenomena. Stochastic Models, Information Theory, and Lie Groups will be of interest to advanced undergraduate and graduate students, researchers, and practitioners working in applied mathematics, the physical sciences, and engineering. Extensive exercises and motivating examples make the work suitable as a textbook for use in courses that emphasize applied stochastic processes or differential geometry.
Samsung Galaxy S7 For Dummies
Broke Millennial Talks Money
The Silent Witness
Best Hikes Near Albuquerque
The Six Core Theories of Modern Physics
Handbook of Economic Forecasting: Volume 2A
The Theory and Practice of Taiji Qingong
Liturgy of Liberation
Cincinnati's Brewing History
The Power of Tai Chi
Official Family Guy Calendar 2012
Novel Without a Name
The Ha'penny Place
Best American Mystery Stories 2007
Leading and Managing Teaching Assistants
The Art of the Boss Baby
Refereeing Identity
Listening on the Edge
Understanding Osteoporosis Paper Poster
FLOWERS FOR THE GOD OF LOVE
Hipster Dinosaurs
Biosecurity Interventions
Jewels and Jewelry
Dis/ability Studies
Mid-Cheshire Pubs
How to Estimate with RSMeans Data
Happy St. Pat T-Rex Day
Complete Works of Aristotle, Volume 2
OCR Psychology for A Level Workbook 3
The Holiday Murders
Memory in a Time of Prose
Life with Forty Dogs
The Circle of the Way
Don't Tell Your Momma You're an Atheist
High School Debut (3-in-1 Edition), Vol. 3
When the Sea Turned to Silver
Chord Approach to Electronic Keyboards
Fortschritte der Chemischen Forschung
Second Chance Rancher
School Library Media File: No. 2
Some principles of every-day art
Let's Make Pasta
Afro-Cuban Rhythms for Drumset
Mathematics and Transition to School
The Homemade Housewife
Cosmic Quantic Intelligence (Cqi)
The Ceramics of Raquira, Colombia
Subversive Spirituality
Slavery, Law, and Politics
Darker Shores
Lighting a Lamp
Book of Beginning Circle Games
Children Of The Tide
Training the Working Labrador
The Complete Essays Of Mark Twain
Dance Magazine College Guide 2006 & 2007
Room Little Darker
Noah's Ark and the Genesis-10 Patriarchs
The Holistic Cat
Learning Their Language
Rotating Fields in General Relativity
A Rabbi Looks at Jesus Parables
Live Out Loud
Fat Tire Wisconsin
The Good Citizen's Alphabet
Science and Science Teaching
Inclusive Literacy Teaching
The Death of Achilles
PostGIS Essentials
Healthy Juices for Healthy Kids
Doggie Language
Mother, What Is the Moon?
Callahan 2001 Calendar
Cute Tigers and Wannabes.
Secret of High Eldersham
America Now 9e & Choices 4e
Maximize Your Medicare (2019 Edition)
The Book of Marmalade
Feeling Naked on the First Tee
Kids Get Coding: Staying Safe Online
The Infernal Devices 3: Clockwork Princess
Erich Jantsch
They Came to Baghdad
A Tour of the Sherry Triangle
Guide to Scientific Computing in C++
The A to Z of Jainism
Beaded Embroidery Stitching
Scotland's 100 Best Walks
The New Social Economy
All the Words Are Yours
Idd6: Ceramics, Glass & Jewelry Design
Training The Samurai Mind
What Flo Eats
|
CommonCrawl
|
Climatic Change
February 2017, Volume 140, Issue 3–4, pp 375–385
No evidence of publication bias in climate change science
Christian Harlos
Tim C. Edgell
Johan Hollander
Non-significant results are less likely to be reported by authors and, when submitted for peer review, are less likely to be published by journal editors. This phenomenon, known collectively as publication bias, is seen in a variety of scientific disciplines and can erode public trust in the scientific method and the validity of scientific theories. Public trust in science is especially important for fields like climate change science, where scientific consensus can influence state policies on a global scale, including strategies for industrial and agricultural management and development. Here, we used meta-analysis to test for biases in the statistical results of climate change articles, including 1154 experimental results from a sample of 120 articles. Funnel plots revealed no evidence of publication bias given no pattern of non-significant results being under-reported, even at low sample sizes. However, we discovered three other types of systematic bias relating to writing style, the relative prestige of journals, and the apparent rise in popularity of this field: First, the magnitude of statistical effects was significantly larger in the abstract than the main body of articles. Second, the difference in effect sizes in abstracts versus main body of articles was especially pronounced in journals with high impact factors. Finally, the number of published articles about climate change and the magnitude of effect sizes therein both increased within 2 years of the seminal report by the Intergovernmental Panel on Climate Change 2007.
Keywords: Publication bias · Impact factor · Oyster reef · High impact factor · High impact journal
Data archiving
Data used in the meta-analysis are archived in the Dryad repository
Tim C. Edgell and Johan Hollander contributed equally to this work.
The online version of this article (doi: 10.1007/s10584-016-1880-1) contains supplementary material, which is available to authorized users.
Publication bias in scientific journals is widespread (Fanelli 2012). It leads to an incomplete view of scientific inquiry and results and presents an obstacle for evidence-based decision-making and public acceptance of valid, scientific discoveries and theories. A growing trend in scientific inquiry, as practiced in this article, includes the meta-analysis of large bodies of literature, a practice that is particularly susceptible to misleading and inaccurate results given a systematic bias in the literature (e.g. Michaels 2008; Fanelli 2012, 2013).
The role of publication bias in scientific consensus has been described in a variety of scientific disciplines, including but not limited to medicine (Kicinski 2013; Kicinski et al. 2015), social science (Fanelli 2012), ecology (Palmer 1999), and global climate change research (Michaels 2008; Reckova and Irsova 2015).
Despite widespread consensus among climate scientists that global warming is real and has anthropogenic roots (e.g., Holland 2007; Idso and Singer 2009; Anderegg et al. 2010), several end users of science such as popular media, politicians, industrialists, and citizen scientists continue to treat the facts of climate change as fodder for debate and denial. For example, Carlsson-Kanyama and Hörnsten Friberg (2012) found only 30% of politicians and directors from 63 Swedish municipalities believed humans contribute to global warming; 61% of respondents were uncertain about the causes of warming, and as much as 9% denied it was real.
Much of this skepticism stems from an event that has been termed Climategate, when emails and files from the Climate Research Unit (CRU) at the University of East Anglia were copied and later exposed for public scrutiny and interpretation. Climate change skeptics claimed the IPCC 2007 report—the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC 2007), which uses scientific facts to argue humans are causing climate change—was based on an alleged bias for positive results by editors and peer reviewers of scientific journals; editors and scientists were accused of suppressing research that did not support the paradigm for carbon dioxide-induced global warming. In 2010, the CRU was cleared by the Muir Russell Committee of any scientific misconduct or dishonesty (Adams 2010; but see Michaels 2010).
Although numerous reviews have examined the credibility of climate researchers (Anderegg et al. 2010), the scientific consensus on climate change (Doran and Kendall Zimmerman 2009) and the complexity of media reporting (Corner et al. 2012), few studies have undertaken an empirical review of the publication record to evaluate the existence of publication biases in climate change science. However, Michaels (2008) scrutinized the two most prestigious journals, Nature and Science, in the field of global warming, and by using vote-counting meta-analysis, confirmed a skewed publication record. Reckova and Irsova (2015) also detected a publication bias after analyzing 16 studies of carbon dioxide concentrations in the atmosphere and changes in global temperature. Although publication biases were reported by Michaels (2008) and Reckova and Irsova (2015), the former test used a small set of pre-defined journals to test the prediction, while the latter test lacked statistical power given a sample size of 16 studies. In contrast, here we conducted a meta-analysis on results from 120 reports and 31 scientific journals. Our approach expands upon the conventional definition of publication bias to include publication trends over time and in relation to seminal events in the climate change community, stylistic choices made by authors who may selectively report some results in abstracts and others in the main body of articles (Fanelli 2012) and patterns of effect size and reporting style in journals representing a broad cross-section of impact factors.
We tested the hypothesis of bias in climate change publications stemming from the under-reporting of non-significant results (Rosenthal 1979) using fail-safe sample sizes, funnel plots, and diagnostic patterns of variability in effect sizes (Begg and Mazumdar 1994; Palmer 1999, 2000; Rosenberg 2005). More specifically, we (a) examined whether non-significant results were omitted disproportionately in the climate change literature, (b) if there were particular trends of unexpected and abrupt changes in the number of published studies and reported effects in relation to IPCC 2007 and Climategate, (c) whether effects presented in the abstracts were significantly larger than those reported in the main body of reports, and (d) how findings from these first three tests related to the impact factor of journals.
Meta-analysis is a powerful statistical tool used to synthesize statistical results from numerous studies and to reveal general trends in a field of research. Unfortunately, not all articles within a given field of science will contain the statistical estimates required for meta-analysis (e.g., estimate of effect size, error, sample size). Therefore, the literature used in meta-analysis is often a sample of all available articles, which is analogous to the analytical framework used in ecology, where a sub-sample of a population is typically used to estimate parameters of the true population. For the purpose of our meta-analysis, we sampled articles from the body of literature that explores the effects of climate change on marine organisms. Marine species are exposed to a large array of abiotic factors that are linked directly to atmospheric climate change. For instance, oceans absorb heat from the atmosphere and mix with freshwater run-off from melting glaciers and ice caps, which changes ocean chemistry and puts stress on ocean ecosystems. For example, the resulting changes in ocean salinity and pH can inhibit calcification in shell-bearing organisms that are either habitat-forming (e.g., coral reefs, oyster reefs) or the foundation of food webs (e.g., plankton) (The Copenhagen Diagnosis 2009).
Results of our meta-analysis found no evidence of publication bias, in contrast to prior studies that were based on smaller sample sizes than used here (e.g., Michaels 2008; Reckova and Irsova 2015). We did, however, discover some interesting patterns in the numbers of climate change articles being published over time and, within journal articles, stylistic biases by authors with respect to reporting large statistically significant effects. Finally, results are discussed in the context of social responsibility borne by climate scientists and the challenges for communicating science to stakeholders and end users.
2 Materials and methods
Meta-analysis is a suite of data analysis tools that allow for quantitative synthesis of results from numerous scientific studies, now widely used from medicine to ecology (Adams et al. 1997). Here, we randomly sampled articles from a broader body of literature about climate change in marine systems and extracted statistics summarizing magnitude of effects, error, and experimental sample size for meta-analysis.
2.1 Data collection
We surveyed the scientific literature via the ISI Web of Science, Scopus and Biological Abstracts, and in the reference sections of identified articles for experimental results pertaining to climate change in ocean ecosystems. The search was performed with no restrictions on publication year, using different combinations of the terms: (acidification* AND ocean*) OR (acidification* AND marine*) OR (global warming* AND marine*) OR (global warming* AND ocean*) OR (climate change* AND marine* AND experiment*) OR (climate change* AND ocean* AND experiment*). The search was performed exclusively on scientific journals with an impact factor of at least 3 (Journal Citation Reports science edition 2010).
We restricted our analysis to a sample of articles reporting (a) an empirical effect size between experimental and control group, (b) a measure of statistical error, and (c) a sample size with specific control and experimental groups (see Supplementary Material S1–S3 for identification of studies). We identified 120 articles from 31 scientific journals published between the years 1997 and 2013, with impact factors ranging from 3.04 to 36.104 and a mean of 6.58. Experimental results (n = 1154) were extracted from the main body of articles; 362 results were also retrieved from the articles' abstracts or summary paragraphs.
Data from the main body of articles and abstracts were analyzed separately to test for potential stylistic biases related to how authors report key findings. The two datasets, hereafter designated "main" dataset and "abstract" dataset, were also divided into three time periods: pre-IPCC 2007 (up to November 2007), after IPCC 2007/pre-Climategate (December 2007–November 2009), and after Climategate (December 2009–December 2012), based on each article's date of acceptance. We used November 2007 as the publication date for the IPCC Fourth Assessment Report, which was an updated version of the original February 2007 release.
We extracted graphical data using the software GraphClick v. 3.0 (2011). Each study could include several experimental results, which could result in non-independence bias driven by studies with relatively large numbers of results. Therefore, we assessed the robustness of our meta-analysis by re-running the analysis multiple times with data subsets consisting of one randomly selected result per article (Hollander 2008).
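To make the robustness check concrete, here is a minimal Python sketch of drawing one randomly selected result per article and repeating the analysis. The data-frame layout (columns such as article_id) is an illustrative assumption, not the paper's actual archive format.

```python
import pandas as pd

def one_result_per_article(df: pd.DataFrame, seed: int) -> pd.DataFrame:
    """Return a subset containing a single randomly chosen result per article."""
    return (df.groupby("article_id", group_keys=False)
              .apply(lambda g: g.sample(n=1, random_state=seed)))

# Repeat the subsetting (and the downstream meta-analysis) many times, e.g.:
# for seed in range(100):
#     subset = one_result_per_article(results, seed)
#     ...recompute the cumulative effect size on `subset`...
```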
Experimental results found in articles can be either negative or positive. To prevent composite mean values from equalling zero, we reversed the negative sign of effects to positive; consequently, all results were analyzed as positive effects (Hollander 2008). The reversed effect sizes do not generally produce a standard normal distribution, as negative effects are reversed around zero. Statistical significance was therefore assessed using bias-corrected 95% bootstrap confidence intervals produced by re-sampling tests in 9999 iterations, with a two-tailed critical value from Student's t distribution. If the mean of one sample lies outside the 95% confidence intervals of another mean, the null hypothesis that subcategories did not differ was rejected (Adams et al. 1997; Borenstein et al. 2010). Hedges' d was used to quantify the weighted effect size of climate change effects (Gurevitch and Hedges 1993).
$$ d=\frac{{\overline{X}}^{\mathrm{E}}-{\overline{X}}^{\mathrm{C}}}{s}J $$
Hedges' d was the mean of the control group ($\overline{X}^{\mathrm{C}}$) subtracted from the mean of the experimental group ($\overline{X}^{\mathrm{E}}$), divided by the pooled standard deviation (s) and multiplied by a correction factor for small sample sizes (J). However, since sample sizes vary among studies, and variance is a function of sample size, some form of weighting was necessary. In other words, studies with larger sample sizes are expected to have lower variances and will accordingly provide more precise estimates of the true population effect size (Hedges and Olkin 1985; Shadish and Haddock 1994; Cooper 1998). Therefore, a weighted average was used in the meta-analysis to estimate the cumulative effect size (weighted mean) for the sample of studies (see Rosenberg et al. 2000 for details).
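The computation can be sketched as follows. This is not the authors' MetaWin code; it is a minimal Python illustration using the standard formulas: the small-sample correction $J = 1 - 3/(4\,\mathrm{df} - 1)$ from Hedges and Olkin (1985), an inverse-variance weighted mean, and a plain percentile bootstrap standing in for the bias-corrected intervals used in the paper.

```python
import numpy as np

def hedges_d(x_exp, x_ctl):
    """Hedges' d: standardized mean difference with small-sample correction J."""
    n1, n2 = len(x_exp), len(x_ctl)
    df = n1 + n2 - 2
    s_pooled = np.sqrt(((n1 - 1) * np.var(x_exp, ddof=1)
                        + (n2 - 1) * np.var(x_ctl, ddof=1)) / df)
    j = 1.0 - 3.0 / (4.0 * df - 1.0)   # correction factor (Hedges and Olkin 1985)
    return (np.mean(x_exp) - np.mean(x_ctl)) / s_pooled * j

def weighted_mean_effect(d, v):
    """Inverse-variance weighted cumulative effect size."""
    d, v = np.asarray(d), np.asarray(v)
    w = 1.0 / v
    return np.sum(w * d) / np.sum(w)

def bootstrap_ci(d, v, iters=9999, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the weighted mean effect.
    (The paper reports bias-corrected intervals; plain percentiles here.)"""
    rng = np.random.default_rng(seed)
    d, v = np.asarray(d), np.asarray(v)
    stats = np.empty(iters)
    for i in range(iters):
        idx = rng.integers(0, len(d), len(d))
        stats[i] = weighted_mean_effect(d[idx], v[idx])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```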
2.2 Funnel plot and fail-safe sample sizes
Funnel plots are sensitive to heterogeneity, which makes them effective for visually detecting systematic patterns in the publication record. For example, funnel plot asymmetry has long been equated with publication bias (Light and Pillemer 1984; Begg and Mazumdar 1994), whereas a systematic inverted funnel is diagnostic of a "well-behaved" data set in which publication bias is unlikely. Initial funnel plots showed large amounts of variation in the y-axis (Hedges' d) along the length of the x-axis (sample size), which could potentially obscure inspection of diagnostic features of the funnel around the mean. To improve detectability of publication bias, should one truly exist, we transformed Hedges' d to the Pearson correlation coefficient (r), which condensed extreme values in the y-axis and converted the measure of effect size to a range between zero and ±1 (Palmer 2000; Borenstein et al. 2009). Therefore, the data transformation ultimately converted the measure of effect size from a standardized mean difference (d) to a correlation (r):
$$ r=\frac{d}{\sqrt{d^2+a}} $$
where $a$ is a correction factor for cases where $n_1 \neq n_2$ (Borenstein et al. 2009).
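As a sketch, the conversion is a one-liner. The expression for $a$ below, $(n_1+n_2)^2/(n_1 n_2)$, follows Borenstein et al. (2009) and reduces to 4 when the group sizes are equal; treat it as our reading of that reference rather than a quotation from the paper.

```python
import numpy as np

def d_to_r(d, n1, n2):
    """Convert a standardized mean difference d to a correlation r.
    a = (n1 + n2)**2 / (n1 * n2) corrects for unequal group sizes
    (Borenstein et al. 2009); a = 4 when n1 == n2."""
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / np.sqrt(d ** 2 + a)
```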
Both funnel plots and fail-safe sample size were inspected to test for under-reporting of non-significant effects, following Palmer (2000). Extreme publication bias (caused by under-reporting of non-significant results) would appear as a hole or data gap in a funnel plot. Also, if there is no bias, the density of points should be greatest around the mean value and normally distributed around the mean at all sample sizes. To help visualize the threshold between significant and non-significant studies, 95% significance lines were calculated for the funnel plots.
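A minimal funnel-plot sketch in Python follows. The significance curves use the standard two-tailed critical value for Pearson's r, $r_{\mathrm{crit}} = t_{\mathrm{crit}}/\sqrt{N-2+t_{\mathrm{crit}}^2}$; this is our assumption about how the 95% lines were drawn, not a reproduction of the authors' MetaWin/DataGraph output.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def plot_funnel(r, n):
    """Funnel plot of effect size r against sample size N, with
    two-tailed 95% significance curves for Pearson's r."""
    ns = np.arange(4, int(np.max(n)) + 1)
    t_crit = stats.t.ppf(0.975, ns - 2)
    r_crit = t_crit / np.sqrt(ns - 2 + t_crit ** 2)  # |r| above this is significant
    plt.scatter(n, r, s=12, alpha=0.6)
    plt.plot(ns, r_crit, "k--")
    plt.plot(ns, -r_crit, "k--")
    plt.axhline(0, lw=0.5)
    plt.xlabel("Sample size (N)")
    plt.ylabel("Effect size (r)")
    plt.show()
```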
Robust Z-scores were used to identify possible outliers in the dataset, as such values could distort the mean and make the conclusions of a study less accurate or even incorrect. Instead of using the dataset mean, robust Z-scores use the median as it has a higher breakdown point and is therefore more accurate than regular Z-scores (Rousseeuw and Hubert 2011). The cause for each identified outlier was carefully investigated before any value could be excluded from the dataset (Table S1).
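A robust Z-score takes only a few lines; the 1.4826 scaling of the median absolute deviation (MAD) is the standard consistency factor that makes the MAD comparable to a standard deviation for normally distributed data.

```python
import numpy as np

def robust_z(x):
    """Robust Z-scores: deviations from the median scaled by 1.4826 * MAD."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return (x - med) / (1.4826 * mad)
```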
All data were analyzed with MetaWin v. 2.1.5 (Sinauer Associates Inc. 2000), and graphs for visual illustrations were created using the graphic data tool DataGraph v. 3.1.2 (VisualDataTools Inc. 2013).
3 Results
3.1 Publication bias
For each of the three time periods considered in this study (prior to IPCC-AR4 2007, after IPCC-AR4 2007 and before Climategate 2009, and after Climategate 2009), the funnel plots showed no evidence of statistically non-significant results being under-represented (Fig. 1); there were no holes around Y = 0, nor were there conspicuous gaps or holes in other parts of the funnels (Fig. 1). Strong fail-safe sample sizes confirmed that the effect sizes were robust and that publication bias was not detected (Rosenthal 1979). We further tested the robustness of results by re-sampling single results from articles and reproducing funnel plots (see Supplementary Material Fig. S4 a–j).
Fig. 1 Funnel plots representing effect size (r) as a function of sample size (N). The shaded areas represent results that were not statistically significant for the main dataset. a Pre-IPCC 2007 (n = 265). b After IPCC 2007/pre-Climategate (n = 345). c After Climategate (n = 544). n denotes the number of experiments
3.2 Number of studies, effect size, and abstract versus main
The number of articles about climate change in ocean ecosystems has increased annually since 1997, peaking within 2 years after IPCC 2007 and subsiding after Climategate 2009 (Fig. 2). Before Climategate, reported effect sizes were significantly larger in article abstracts than in the main body of articles, suggesting a systematic bias in how authors are communicating results in scientific articles: Large, significant effects were emphasized where readers are most likely to see them (in abstracts), whereas small or non-significant effects were more often found in the technical results sections where we presume they are less likely to be seen by the majority of readers, especially non-scientists. Moreover, between IPCC 2007 and Climategate, when publication rates on ocean climate change were greatest, the difference in effect sizes reported in the abstract and body of reports was also greatest (Fig. 2). After Climategate, publication rates about ocean climate change fell, the magnitude of reported effect sizes in abstracts diminished, and the difference in effect sizes between abstracts and the body of reports returned to a level comparable to pre-IPCC 2007 (Fig. 2).
Fig. 2 Publication rate. a Number of published reports for each year. The two vertical grey bars illustrate the timing of IPCC 2007 and Climategate 2009. b Cumulative effect sizes of Hedges' d and bias-corrected 95% bootstrap confidence intervals for the magnitude of climate-change effects. Mean effect sizes are computed separately for results presented in abstracts and in the main body of articles. N = sample size. Pre-IPCC 2007 main dataset: d = 1.46; CI = 1.30–1.63; df = 264, FSN = 36,299. Abstract dataset: d = 2.08; CI = 1.73–2.51; df = 62, FSN = 3475: P < 0.05. After IPCC 2007/pre-Climategate main dataset: d = 1.87; CI = 1.69–2.06; df = 344, FSN = 79,576. Abstract dataset: d = 2.82; CI = 2.41–3.31; df = 118, FSN = 11,557: P < 0.05. After Climategate main dataset: d = 1.72; CI = 1.59–1.88; df = 543, FSN = 214,674. Abstract dataset: d = 2.14; CI = 1.85–2.46; df = 176, FSN = 26,480: P = n.s. d = Hedges' d, CI = bias-corrected 95% confidence intervals, df = degrees of freedom (one less than total sample), FSN = fail-safe numbers. P, n.s. = probability that abstract results and main text results differ
3.3 Impact factor
Journals with an impact factor greater than 9 published significantly larger effect sizes than journals with an impact factor of less than 9 (Fig. 3). Regardless of the impact factor, journals reported significantly larger effect sizes in abstracts than in the main body of articles; however, the difference between mean effects in abstracts versus body of articles was greater for journals with higher impact factors. We also detected a small, yet statistically significant, negative relationship between reported sample size and journal impact factor, which was largely driven by the large effects reported in high impact factor journals (Fig. 4). Despite the larger effect sizes, journals with high impact factors published results with generally lower sample sizes.
Fig. 3 Cumulative effect sizes of Hedges' d and bias-corrected 95% bootstrap confidence intervals for the magnitude of climate change effects for journals with impact factor above or below 9. Results are computed separately for data from abstracts and the main body of reports. N denotes the sample size. IF < 9 main dataset: d = 1.60; CI = 1.51–1.69; df = 1042, FSN = 696,107, P < 0.05. Abstract dataset: d = 2.04; CI = 1.86–2.24; df = 316, FSN = 83,671, P < 0.05. IF > 9 main dataset: d = 2.65; CI = 2.20–3.23; df = 111, FSN = 10,131, P < 0.05. Abstract dataset: d = 5.27; CI = 3.66–7.50; df = 44, FSN = 2298, P < 0.05. Abbreviations as in Fig. 2 legend
Fig. 4 Relationship between journal impact factor and sample size for experimental results ($R^2$ = 0.004; P < 0.05)
4 Discussion
Our meta-analysis did not find evidence of small, statistically non-significant results being under-reported in our sample of climate change articles. This result contrasts with the findings of Michaels (2008) and Reckova and Irsova (2015), which both found publication bias in the global climate change literature, albeit with a smaller sample size for their meta-analysis and in other sub-disciplines of climate change science. Michaels (2008) examined articles from Nature and Science exclusively, and therefore, his results were influenced strongly by the editorial position of these high impact factor journals with respect to reporting climate change issues. We believe that the results presented here have added value because we sampled a broader range of journals, including some with relatively low impact factor, which is probably a better representation of potential biases across the entire field of study. Moreover, several end users and stakeholders of science, including other scientists and public officials, base their research and opinions on a much broader suite of journals than Nature and Science.
However, our meta-analysis did find multiple lines of evidence of biases within our sample of articles, which were perpetuated in journals of all impact factors and related largely to how science is communicated: The large, statistically significant effects were typically showcased in abstracts and summary paragraphs, whereas the lesser effects, especially those that were not statistically significant, were often buried in the main body of reports. Although the tendency to isolate large, significant results in abstracts has been noted elsewhere (Fanelli 2012), here we provide the first empirical evidence of such a trend across a large sample of literature.
We also discovered a temporal pattern to reporting biases, which appeared to be related to seminal events in the climate change community and may reflect a socio-economic driver in the publication record. First, there was a conspicuous rise in the number of climate change publications in the 2 years following IPCC 2007, which likely reflects the rise in popularity (among public and funding agencies) for this field of research and the increased appetite among journal editors to publish these articles. Concurrent with increased publication rates was an increase in reported effect sizes in abstracts. Perhaps a coincidence, the apparent popularity of climate change articles (i.e., number of published articles and reported effect sizes) plummeted shortly after Climategate, when the world media focused its scrutiny on this field of research, and perhaps, popularity in this field waned (Fig. 1). After Climategate, reported effect sizes also dropped, as did the difference in effects reported in abstracts versus main body of articles. The positive effect we see post IPCC 2007, and the negative effect post Climategate, may illustrate a combined effect of editors' or referees' publication choices and researchers' propensity to submit articles or not. However, since meta-analysis is correlative, it does not elucidate the mechanisms underlying observed patterns.
Similar stylistic biases were found when comparing articles from journals with high impact factors to those with low impact factors. High impact factors were associated with significantly larger reported effect sizes (and lower sample sizes; see Fig. 4); these articles also had a significantly larger difference between effects reported in abstracts versus the main body of their reports (Fig. 3). This trend appears to be driven by a small number of journals with large impact factors; however, the result is consistent with those of related studies. For example, our results corroborate others by showing that high impact journals typically report large effects based on small sample sizes (Fraley and Vazire 2014), and high impact journals have shown publication bias in climate change research (Michaels 2008, and further discussed in Radetzki 2010).
Stylistic biases are less concerning than a systematic tendency to under-report non-significant effects, assuming researchers read entire reports before formulating theories. However, most audiences, especially non-scientific ones, are more likely to read article abstracts or summary paragraphs only, without perusing technical results. The onus to effectively communicate science does not fall entirely on the reader; rather, it is the responsibility of scientists and editors to remain vigilant, to understand how biases may pervade their work, and to be proactive about communicating science to non-technical audiences in transparent and un-biased ways. Ironically, articles in high impact journals are those most cited by other scientists; therefore, the practice of sensationalizing abstracts may bias scientific consensus too, assuming many scientists may also rely too heavily on abstracts during literature reviews and do not spend sufficient time delving into the lesser effects reported elsewhere in articles.
Despite our sincerest aim of using science as an objective and unbiased tool to record natural history, we are reminded that science is a human construct, often driven by human needs to tell a compelling story, to reinforce the positive, and to compete for limited resources—publication trends and communication biases are proof of that.
We are grateful to Roger Butlin, Christer Brönmark, A. Richard Palmer, Tobias Uller, Charlie Cornwallis and Johannes Persson for constructive advice, and a special thanks to Dean Adams. We are also thankful to Michael MacAskill, Jessica Gurevitch and Shinichi Nakagawa for technical advice. JH was funded by a Marie Curie European Reintegration Grant (PERG08-GA-2010-276915) and by the Royal Physiographic Society in Lund.
The authors declare that they have no conflicts of interest.
Adams D (2010) "Climategate" review clears scientists of dishonesty over data. The Guardian, Wednesday 7 July
Adams DC, Gurevitch J, Rosenberg MS (1997) Resampling tests for meta-analysis of ecological data. Ecology 78:1277–1283. doi: 10.1890/0012-9658(1997)078[1277:RTFMAO]2.0.CO;2
Anderegg WRL, Prall JW, Harold J, Schneider SH (2010) Expert credibility in climate change. Proc Natl Acad Sci U S A 107:12107–12109. doi: 10.1073/pnas.1003187107
Begg CB, Mazumdar M (1994) Operating characteristics of a rank correlation test for publication bias. Biometrics 50:1088–1101. doi: 10.2307/2533446
Borenstein M, Hedges LV, Higgins JPT, Rothstein HR (2009) Introduction to meta-analysis. John Wiley & Sons, Ltd
Borenstein M, Hedges LV, Higgins JPT, Rothstein HR (2010) A basic introduction to fixed-effect and random-effects models for meta-analysis. Res Synth Meth 1:97–111. doi: 10.1002/jrsm.12
Carlsson-Kanyama A, Hörnsten Friberg L (2012) Views on climate change and adaptation among politicians and directors in Swedish municipalities. FOI-R–3441–SE. FOI Totalförsvarets forskningsinstitut, Stockholm
Cooper H (1998) Synthesizing research: a guide for literature reviews, 3rd edn. Sage, Thousand Oaks
Corner AJ, Whitmarsh LE, Xenias D (2012) Uncertainty, scepticism and attitudes towards climate change: biased assimilation and attitude polarisation. Clim Chang 114:463–478. doi: 10.1007/s10584-012-0424-6
DataGraph v. 3.1.2 (2013) Adalsteinsson D, VisualDataTools Inc
Doran PT, Kendall Zimmerman M (2009) Examining the scientific consensus on climate change. Eos Trans AGU 90:22–23. doi: 10.1029/2009EO030002
Fanelli D (2012) Negative results are disappearing from most disciplines and countries. Scientometrics 90:891–904. doi: 10.1007/s11192-011-0494-7
Fanelli D (2013) Positive results receive more citations, but only in some disciplines. Scientometrics 94:701–709. doi: 10.1007/s11192-012-0757-y
Fraley RC, Vazire S (2014) The N-pact factor: evaluating the quality of empirical journals with respect to sample size and statistical power. PLoS ONE 9:e109019. doi: 10.1371/journal.pone.0109019
GraphClick v. 3.0 (2011) Arizona Software. Retrieved 2011-08-23. See: www.arizona-software.ch
Gurevitch J, Hedges LV (1993) Meta-analysis: combining the results of independent experiments. In: Scheiner SM, Gurevitch J (eds) Design and analysis of ecological experiments. Chapman & Hall, New York, p 445
Hedges LV, Olkin I (1985) Statistical methods for meta-analysis. Academic Press, New York
Holland D (2007) Bias and concealment in the IPCC process: the "hockey-stick" affair and its implications. Energy Environ 18:951–983. doi: 10.1260/095830507782616788
Hollander J (2008) Testing the grain-size model for the evolution of phenotypic plasticity. Evolution 62:1381–1389. doi: 10.1111/j.1558-5646.2008.00365.x
Idso C, Singer FS (2009) Climate change reconsidered: 2009 report of the Nongovernmental International Panel on Climate Change (NIPCC). The Heartland Institute, Chicago
IPCC (2007) In: Core Writing Team, Pachauri RK, Reisinger A (eds) Climate Change 2007: Synthesis Report. Contribution of Working Groups I, II and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. IPCC, Geneva, p 104
Kicinski M (2013) Publication bias in recent meta-analyses. PLoS ONE 8:e81823. doi: 10.1371/journal.pone.0081823
Kicinski M, Springate DA, Kontopantelis E (2015) Publication bias in meta-analyses from the Cochrane Database of Systematic Reviews. Stat Med 34:2781. doi: 10.1002/sim.6525
Light RJ, Pillemer DB (1984) Summing up: the science of reviewing research. Harvard University Press, Cambridge
Michaels PJ (2008) Evidence for "publication bias" concerning global warming in Science and Nature. Energy Environ 19:287–301. doi: 10.1260/095830508783900735
Michaels PJ (2010) The Climategate whitewash continues. Wall Street Journal
Palmer RA (1999) Notes and comments—detecting publication bias in meta-analyses: a case study of fluctuating asymmetry and sexual selection. Am Nat 154:220–233. doi: 10.1086/303223
Palmer RA (2000) Quasireplication and the contract of error: lessons from sex ratios, heritabilities and fluctuating asymmetry. Annu Rev Ecol Syst 31:441–480. doi: 10.1146/annurev.ecolsys.31.1.441
Radetzki M (2010) The fallacies of concurrent climate policy efforts. Ambio 39:211–222. doi: 10.1007/s13280-010-0029-0
Reckova D, Irsova Z (2015) Publication bias in measuring anthropogenic climate change. Energy Environ 26:853–862. doi: 10.1260/0958-305X.26.5.853
Rosenberg MS (2005) The file-drawer problem revisited: a general weighted method for calculating fail-safe numbers in meta-analysis. Evolution 59:464–468. doi: 10.1111/j.0014-3820.2005.tb01004.x
Rosenberg MS, Adams DC, Gurevitch J (2000) MetaWin: statistical software for meta-analysis, v. 2.1.5. Sinauer Associates, Inc., Sunderland, Massachusetts. See: http://www.metawinsoft.com/
Rosenthal R (1979) The "file drawer problem" and tolerance for null results. Psych Bull 86:638–641. doi: 10.1037/0033-2909.86.3.638
Rousseeuw PJ, Hubert M (2011) Robust statistics for outlier detection. WIREs Data Min Knowl Discovery 1:73–79. doi: 10.1002/widm.2
Shadish WR, Haddock CK (1994) Combining estimates of effect size. In: Cooper H, Hedges LV (eds) The handbook of research synthesis. Russell Sage, New York
The Copenhagen Diagnosis (2009) In: Allison I et al (eds) Updating the world on the latest climate science. The University of New South Wales Climate Change Research Centre (CCRC), Sydney, p 60
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. Department of Biology, Aquatic Ecology, Lund University, Lund, Sweden
2. Stantec, Sidney, Canada
Harlos, C., Edgell, T.C. & Hollander, J. Climatic Change (2017) 140: 375. https://doi.org/10.1007/s10584-016-1880-1
|
CommonCrawl
|
For how long CMB was being emitted?
Or equivalently, how long was the recombination period? Was it a fraction of a second? Millions of years?
cosmology space-expansion cosmic-microwave-background
Asmani
according to Wikipedia en.wikipedia.org/wiki/… 380,000 years after the Big Bang
– Jannick
That's the date. I'm asking about the duration.
– Asmani
The answer depends on the adopted baryon/photon ratio, the time-dependence of temperature in the expanding universe and what you define to be the ionisation fractions at the beginning and end of the (re)combination epoch.
As the universe cools, first the helium ions recombine, followed by the hydrogen ions (protons).
The time to go from a hydrogen ionisation fraction of 90% to say 10% is around 70,000 years, with a central epoch of around 300,000 years, for the usual $\Lambda$CDM model and a baryon to photon ratio of $6\times 10^{-10}$.
ProfRob
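For a rough order-of-magnitude check of the ~70,000-year figure, one can difference the cosmic age at two redshifts bracketing recombination using a standard ΛCDM calculator. The bracketing redshifts below (~1250 for 90% ionisation, ~1100 for 10%) are illustrative assumptions, not exact values from the answer.

```python
# Rough sketch: duration of recombination as an age difference in LambdaCDM.
from astropy.cosmology import Planck15

z_start, z_end = 1250, 1100          # assumed redshifts for 90% and 10% ionisation
duration = Planck15.age(z_end) - Planck15.age(z_start)
print(duration.to("yr"))             # on the order of several times 10^4 years
```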
See Wikipedia: "Cosmic Dark Age" and "Recombination, photon decoupling, and the cosmic microwave background (CMB)" or "A more detailed summary":
"Recombination, photon decoupling, and the cosmic microwave background (CMB)":
"This period, known as the Dark Ages, began around 377,000 years after the Big Bang. During the Dark Ages, the temperature of the universe cooled from some 4000 K down to about 60 K, and only two sources of photons existed: the photons released during recombination/decoupling (as neutral hydrogen atoms formed), which we can still detect today as the cosmic microwave background (CMB), and photons occasionally released by neutral hydrogen atoms, known as the 21 cm spin line of neutral hydrogen.
Structures may have begun to emerge from around 150 million years, and stars and early galaxies gradually emerged from around 400 to 700 million years. As they emerged, the Dark Ages gradually ended. Because this process was gradual, the Dark Ages only fully ended around 1 billion (1000 million) years, as the universe took its present appearance."
A non-Wikipedia explanation: "Lecture 31: The Cosmic Microwave Background Radiation" or HyperPhysics' WMAP webpage "Age of the Universe".
Rob
How is the Redshift of Recombination Calculated?
How precisely can we date the recombination?
If CMB photons were emitted long ago, how can we measure a "current" temperature?
How far was the surface of last scattering at the moment of recombination?
Where does the 379,000 year recombination prediction of the Big Bang theory come from?
Why wasn't CMB radiation absorbed long ago, since when it was emitted the universe was much smaller?
Since the CMB is the oldest thing we can see, how do we know for sure what happened before the CMB?
How uniform is CMB?
Does it make sense to measure the time from the Big Bang until the CMB was emitted?
If a CMB photon traveled for 13.7 billion years to reach me, how far away was the source of that CMB photon when it first emitted it?
|
CommonCrawl
|